qid int64 1 4.65M | question large_stringlengths 27 36.3k | author large_stringlengths 3 36 | author_id int64 -1 1.16M | answer large_stringlengths 18 63k |
|---|---|---|---|---|
2,294,991 | <p>I am trying to find the limit as $x\to 8$ of the following function. What follows is the function and then the work I've done on it. </p>
<p>$$ \lim_{x\to 8}\frac{\frac{1}{\sqrt{x +1}} - \frac{1}{3}} {x-8}$$</p>
<hr>
<p>\begin{align}\frac{\frac{1}{\sqrt{x +1}} - \frac{1}{3}} {x-8} &= \frac{\frac{1}{\sqrt{x +1}} - \frac{1}{3}} {x-8} \times \frac{\frac{1}{\sqrt{x +1}} + \frac{1}{3}}{\frac{1}{\sqrt{x +1}} + \frac{1}{3}} \\\\
& = \frac{\frac{1}{x+1}-\frac{1}{9}}{(x-8)\left(\frac{1}{\sqrt{x +1}} + \frac{1}{3}\right)}\\\\
& = \frac{8-x}{(x-8)\left(\frac{1}{\sqrt{x +1}} + \frac{1}{3}\right)}\\\\
& = \frac {-1}{\frac{1}{\sqrt{x +1}} + \frac{1}{3}}\end{align}</p>
<p>At this point I try direct substitution and get:
$$ = \frac{-1}{\frac{2}{3}}$$</p>
<p>This is not the answer. Could someone please help me figure out where I've gone wrong?</p>
| Michael Rozenberg | 190,319 | <p>$$\lim\limits_{x\rightarrow8}\frac{\frac{1}{\sqrt{x+1}}-\frac{1}{3}}{x-8}=\lim\limits_{x\rightarrow8}\frac{8-x}{3(x-8)\sqrt{x+1}\left(3+\sqrt{x+1}\right)}=-\frac{1}{3\cdot3\cdot6}=-\frac{1}{54}$$</p>
|
2,316,561 | <p>How to evaluate the integral $$\frac{1}{2 \pi i}
\int \limits_{c-i \infty}^{c+i \infty} \frac{ds}{s(1-q^{1-s})}\text{?}$$ I tried with Perron's formula but I couldn't solve it. The result of the integral is $\frac{1}{2}$. Can someone help please?!</p>
| Felix Marin | 85,343 | <p>$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,}
\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,}
\newcommand{\ic}{\mathrm{i}}
\newcommand{\mc}[1]{\mathcal{#1}}
\newcommand{\mrm}[1]{\mathrm{#1}}
\newcommand{\pars}[1]{\left(\,{#1}\,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,}
\newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}}
\newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$</p>
<blockquote>
<p>Note that
$\ds{\left.{1 \over 2\pi\ic}
\int_{c - \ic\infty}^{c + \ic\infty}{\dd s \over s\pars{1 - q^{1-s}}}
\right\vert_{\ \substack{c\ >\ 1\\[1mm] q\ >\ 1}} =
\int_{1^{+} - \infty\ic}^{1^{+} + \infty\ic}{1 \over
s\pars{1 - q^{1 - s}}}\,{\dd s \over 2\pi\ic}}$.</p>
</blockquote>
<p>The integrand has a <em>single pole</em> at $\ds{s = 0}$ and <em>single poles</em> at
$\ds{\quad p_{n} = 1 - {2n\pi \over \ln\pars{q}}\,\ic\quad}$ with $\ds{n \in \mathbb{Z}}$.
\begin{align}
&\int_{1^{+} - \infty\ic}^{1^{+} + \infty\ic}{1 \over
s\pars{1 - q^{1 - s}}}\,{\dd s \over 2\pi\ic} =
{1 \over 1 - q} + \sum_{n = -\infty}^{\infty}\lim_{s \to p_{n}}
{s - p_{n} \over s\pars{1 - q^{1 - s}}}
\\[5mm] = &\
{1 \over 1 - q} + \sum_{n = -\infty}^{\infty}\lim_{s \to p_{n}}\braces{%
{1 \over 1 - q^{1 - s} + s\bracks{-q\pars{1/q}^{s}\ln\pars{1/q}}}}
\\[5mm] = &\
{1 \over 1 - q} + {1 \over \ln\pars{q}}\sum_{n = -\infty}^{\infty}
{1 \over 1 - 2n\pi\ic/\ln\pars{q}}
\\[5mm] = &\
{1 \over 1 - q} + {1 \over \ln\pars{q}} +
{2 \over \ln\pars{q}}\Re\sum_{n = 1}^{\infty}
{1 \over 1 - 2n\pi\ic/\ln\pars{q}}
\\[5mm] = &\
{1 \over 1 - q} + {1 \over \ln\pars{q}} +
{2 \over \ln\pars{q}}\sum_{n = 1}^{\infty}
{1 \over \bracks{2n\pi/\ln\pars{q}}^{\,2} + 1}
\\[5mm] = &\
{1 \over 1 - q} + {1 \over \ln\pars{q}} +
{2 \over \ln\pars{q}}\,{1 \over \bracks{2\pi/\ln\pars{q}}^{\,2}}
\sum_{n = 1}^{\infty}{1 \over n^{2} + \bracks{\ln\pars{q}/\pars{2\pi}}^{\,2}}
\label{1}\tag{1}
\\[5mm] = &\
{1 \over 1 - q} + {1 \over \ln\pars{q}} +
{\ln\pars{q} \over 2\pi^{2}}
\bracks{-\,{2\pi^{2} \over \ln^{2}\pars{q}} + \pi^{2}\,
{\coth\pars{\ln\pars{q}/2} \over \ln\pars{q}}}
\\[5mm] = &\
{1 \over 1 - q} + {1 \over 2}\,\coth\pars{\ln\pars{q} \over 2}
\\[5mm] = &\
{1 \over 1 - q} +
{1 \over 2}\,{\root{q} + 1/\root{q} \over \root{q} - 1/\root{q}} =
{1 \over 1 - q} +
{1 \over 2}\,{q + 1 \over q - 1} = \bbx{1 \over 2}
\end{align}</p>
<blockquote>
<p>The sum in \eqref{1} is a <em>well known result</em>. Namely,
$\ds{\sum_{n = 1}^{\infty}{1 \over n^{2} + a^{2}} =
{-1 + \pi a\coth\pars{\pi a} \over 2a^{2}}}$.</p>
</blockquote>
|
4,111,835 | <p>So I was given the following prompt:</p>
<blockquote>
<p>When <span class="math-container">$x=−2$</span>, for what values of p does the series converge?
<span class="math-container">$$\sum_{n=1}^\infty\left(\frac{(-1)^{n+1}(x-3)^n}{5^n\cdot n^p}\right)$$</span></p>
</blockquote>
<p>I ended up working out this problem to find out that it's convergent for all values of <span class="math-container">$p$</span> greater than or equal to <span class="math-container">$2$</span>, but I'm a bit confused about how to show this. I'm also confused over whether or not this would be an alternating series, since more often than not the <span class="math-container">$-1$</span> in the numerator has been indicative of an alternating series. Any help would be appreciated!</p>
| user0102 | 322,814 | <p><strong>HINT</strong></p>
<p>For <span class="math-container">$x = -2$</span>, the proposed series reduces to
<span class="math-container">\begin{align*}
\sum_{n=1}^{\infty}\frac{(-1)^{n+1}(-5)^{n}}{5^{n}n^{p}} & = \sum_{n=1}^{\infty}\frac{(-1)^{n+1}(-1)^{n}5^{n}}{5^{n}n^{p}} = -\sum_{n=1}^{\infty}\frac{1}{n^{p}}
\end{align*}</span></p>
<p>which is known as <span class="math-container">$p$</span>-series.</p>
<p>Can you take it from here?</p>
|
109,037 | <blockquote>
<p><strong>Possible Duplicate:</strong><br>
<a href="https://math.stackexchange.com/questions/107336/why-doesnt-dx-n-x-n1-rightarrow-0-as-n-rightarrow-infty-imply-x-n">Why doesn't $d(x_n,x_{n+1})\rightarrow 0$ as $n\rightarrow\infty$ imply ${x_n}$ is Cauchy?</a> </p>
</blockquote>
<p>my question is this:</p>
<hr>
<p><em>The following definition is weaker than the definition of Cauchy sequences:</em> </p>
<p>$\forall \; \epsilon > 0, \;\exists N \in \mathbb{N} \;s.t.\; \forall\; n \geq N, \; |a_{n+1}-a_n | < \epsilon.$</p>
<p><em>Show that this is not equivalent to $(a_n)$ being a Cauchy sequence.</em></p>
<hr>
<p>The definition of Cauchy sequence is: </p>
<p>A sequence $(s_n)$ is Cauchy if (and only if) for each $\epsilon > 0$ there exists an integer $N$ with the property that $|s_n-s_m| < \epsilon$ whenever $n\geq N$ and $m \geq N$.</p>
<p>Note that a sequence (of real numbers) is convergent if and only if it is Cauchy.</p>
<hr>
<p>So I see an (the?) obvious difference between these two in that the Cauchy criteria demands that all values in a sequence above a certain index ($N$) are within a prescribed tolerance of each other, whether adjacent or not. This is where the question is weaker, in that it only requires the immediately adjacent values of the sequence to be within a tolerance of $\epsilon$. This would then allow, by taking successive differences of adjacent values, to accumulate a difference greater than $\epsilon$. This is seen as,</p>
<p>$$\left| \sum_{i=1}^{n+1}\,a_i - \sum_{i=1}^{n}\,a_i\right| < \epsilon,\quad
\left|\sum_{i=1}^{n+2}\,a_i - \sum_{i=1}^{n+1}\,a_i\right| < \epsilon,\quad
\left| \sum_{i=1}^{n+3}\,a_i - \sum_{i=1}^{n+2}\,a_i\right| < \epsilon,$$
and summing each side of the inequalities gives (after reverting to sequence-notation and employing the triangle inequality),
$$
\left(|a_{n+1}-a_n| + |a_{n+2}-a_{n+1}| + |a_{n+3}-a_{n+2}| + \cdots + |a_{K} - a_{K-1}| \right) \leq \left( \epsilon_{1,2} + \epsilon_{2,3} + \epsilon_{3,4} + \cdots + \epsilon_{K-1,K} \right)
$$
which implies
$$\left( \epsilon_{1,2} + \epsilon_{2,3} + \epsilon_{3,4} + \cdots + \epsilon_{K-1,K} \right)_{\textrm{ by weaker criteria }} \geq |a_n - a_{n+K}|_{\textrm{ by Cauchy criteria }}$$</p>
<p>If I understand these differences correctly, then my main problem is putting these into a formal mathematical proof. Unless this would qualify?</p>
<p>Thanks much for the help and the site!</p>
| André Nicolas | 6,312 | <p>Your analysis is sound: though differences between neighbours may be ultimately small, these differences can build up. However, they do not <em>need</em> to build up: after all, there <em>are</em> Cauchy sequences. So the analysis has to be used to generate specific situations in which the differences <em>do</em> build up, or at least to prove less directly that this <em>can</em> happen. </p>
<p>We <em>exhibit</em> a sequence that satisfies the "weaker" condition and is <em>not</em> Cauchy. The following example is not mine, it was given on MathSE in the fairly recent past.</p>
<p>Look at the sequence
$$0, \tfrac{1}{2}, 1, \tfrac{2}{3}, \tfrac{1}{3}, 0, \tfrac{1}{4}, \tfrac{2}{4}, \tfrac{3}{4}, 1, \tfrac{4}{5}, \tfrac{3}{5}, \tfrac{2}{5}, \tfrac{1}{5}, 0, \tfrac{1}{6}, \tfrac{2}{6}, \tfrac{3}{6}, \tfrac{4}{6}, \tfrac{5}{6}, 1, \tfrac{6}{7} \dots$$
(we are travelling back and forth between $0$ and $1$, and each round uses smaller and smaller steps). </p>
<p>The above sequence satisfies your condition: After a while, any two consecutive terms are very close <em>to each other</em>. But the sequence is clearly not Cauchy. It is clear how in this example small differences do build up.</p>
|
3,758,635 | <p>I'm trying to prove <span class="math-container">$n+\left(-1\right)^n\ge \dfrac{n}{2}$</span> is true for all natural numbers <span class="math-container">$n \ge 2$</span> via induction. The base case is trivial as
<span class="math-container">$$2+(-1)^2 \ge \frac{1}{2}(2)$$</span>
<span class="math-container">$$3 \ge 1.$$</span>
For the induction step, I'm looking at <span class="math-container">$$(n+1) + (-1)^{n+1} = n+1+(-1)(-1)^n$$</span>
<span class="math-container">$$\ge \frac{1}{2}n +1-2\cdot(-1)^n$$</span>
This is where I get stuck. Any help would be appreciated.</p>
| Bananach | 70,687 | <p>An overkill solution would be to use the Gelfand formula, which states that</p>
<p><span class="math-container">$$
\rho(A) =\lim_n \|A^n\|^{1/n}
$$</span>
where the spectral radius <span class="math-container">$\rho(A)$</span> is defined as the supremum of the absolute values of all <span class="math-container">$x$</span> such that
<span class="math-container">$$
x\text{Id}-A
$$</span>
does not have a bounded inverse. In particular, for your <span class="math-container">$A$</span>, the spectral radius vanishes, as per your bound.</p>
<p>Note: if your operator were diagonalizable, then you'd have <span class="math-container">$\|A^n\|^{1/n}=\|A\|=\rho(A)$</span>. Non exponential decay of the operator norm shows your operator is not diagonalizable. The statement above says that, even more, it does not have any eigenvalue at all.</p>
|
1,193,558 | <p>I ran into this problem in a math camp, but I can't seem to solve it via elementary techniques. </p>
<p>If $a$ and $b$ are positive integers such that $a^n+n\mid b^n + n$ for all positive integers $n$, prove that $a=b$.</p>
| Anubhav Mukherjee | 72,329 | <p>No you are wrong...</p>
<p>$S$ is not discrete... $(m,n) \in S$ ( when $p=q=0$) where $m,n \in \mathbb{Z}$, and it is a limit point of a sequence in $S$.</p>
<p>But $S$ is countbale, and so $\mathbb{R^2}-S$ is path connected... you can find a general proof here <a href="https://math.stackexchange.com/questions/240453/if-a-subset-mathbbr2-is-countable-is-mathbbr2-setminus-a-path-connec">If $A\subset\mathbb{R^2}$ is countable, is $\mathbb{R^2}\setminus A$ path connected?</a></p>
|
672,412 | <p>I am reading an e-book called <a href="http://www.ldsinsight.org/">To Infinity and Beyond</a> by Dr. Kent A Bessey. In the book the author makes the claim that Georg Cantor made a discovery "where half of a pie is as large as the whole".</p>
<p>In talking about it, he seems to claim that because half a pie can be broken into an infinite amount of pieces, and likewise a whole pie can be broken into an infinite amount of pieces they are infact the same size.</p>
<p>By the same concept, he states that if you took all of the pieces of the edge of a box you could create as many more boxes of whatever size you wanted using those pieces.</p>
<p>This seems undeniably false to me. I cannot help but draw a parallel between limits -> infinity. Where those limits may equal 2 or some other finite value. In my view, even if you were to break half a pie into an infinite amount of pieces the pieces could never add up to more than half a pie.</p>
<p>Am I misunderstanding? Can someone explain this concept better?</p>
| John Habert | 123,636 | <p>If you want your mind blown some more, try <a href="http://en.wikipedia.org/wiki/Banach-Tarski_paradox">this</a>.</p>
<p>As for what is going on here, an infinite set is always the same size as half of itself because we can define a bijection between the two. The problem you seem to be running into is the author trying to give examples of how this might work using things that don't really conjure an image of the infinite. I would love to have an infinite size pie that I could cut some from and always have more. That being said, it is not exactly a good image for dealing with the infinite. The problem is, infinity is a concept and can never be represented exactly by something we can assign a number to. Better (as in larger sized numbers) examples might be grains of sand, stars in the sky or atoms in the universe. Though these examples are a little better, they still don't represent the fact that infinite sets are the same size as half of themselves.</p>
|
672,412 | <p>I am reading an e-book called <a href="http://www.ldsinsight.org/">To Infinity and Beyond</a> by Dr. Kent A Bessey. In the book the author makes the claim that Georg Cantor made a discovery "where half of a pie is as large as the whole".</p>
<p>In talking about it, he seems to claim that because half a pie can be broken into an infinite amount of pieces, and likewise a whole pie can be broken into an infinite amount of pieces they are infact the same size.</p>
<p>By the same concept, he states that if you took all of the pieces of the edge of a box you could create as many more boxes of whatever size you wanted using those pieces.</p>
<p>This seems undeniably false to me. I cannot help but draw a parallel between limits -> infinity. Where those limits may equal 2 or some other finite value. In my view, even if you were to break half a pie into an infinite amount of pieces the pieces could never add up to more than half a pie.</p>
<p>Am I misunderstanding? Can someone explain this concept better?</p>
| mjqxxxx | 5,546 | <p>There are (at least) two kinds of "size" in mathematics. One is <em>cardinality</em>. In set theory, the cardinality of a set is the number of elements it contains, and two sets have the same cardinality if there is a one-to-one mapping between them. This is a coarse kind of "size", in that many different sets share the same cardinality: the interval $[0,1]$ is the same size as the entire real line, which in turn is the same size as all of $\mathbb{R}^3$. In this sense, which is likely to be Cantor's meaning, half of a pie is certainly the same size as a whole pie. Another type of "size" is <em>measure</em>, which assigns real numbers to (some) sets in such a way as to generalize the usual lengths of line segments, areas of polygons, volumes of cubes and spheres, etc. This is much more precise: if you cut a disc of area $1$ into measurable pieces and then reassemble those pieces however you like, the result will still have area $1$. However, if you allow any type of pieces (not just measurable ones), then the <a href="http://en.wikipedia.org/wiki/Banach%E2%80%93Tarski_paradox">Banach-Tarski paradox</a> can happen: a sphere of volume $1$ can be cut into a finite number of pieces and reassembled into a sphere of volume $2$. (This can't happen in the plane, though, so your planar pie is safe.) </p>
|
1,132,922 | <p>Let $f$ and $g$ be two differentiable functions s.t $ f '(x) \le g '(x) $ for all $ x\lt 1$ and $ f '(x) \ge g'(x) $ for all $ x\gt 1$ then</p>
<ol>
<li><p>If $f(1) \ge g(1)$, then $f(x)\ge g(x)$ for all $x$</p></li>
<li><p>If $f(1) \le g(1)$, then $f(x)\le g(x)$ for all $x$</p></li>
<li><p>$f(1) \le g(1)$</p></li>
<li><p>$f(1) \ge g(1)$</p></li>
</ol>
<p>In this I am having difficulty in guessing fxn to counter options.</p>
| DeepSea | 101,504 | <p>$(f-g)' \leq 0 \to (f-g)(x) \geq (f-g)(1) \geq 0$ for $x < 1$ and $(f-g)(x) \geq (f-g)(1) \geq 0$ for $x > 1$. Thus: $1)$ is true.</p>
|
706,514 | <p>I know that the fundamental group of homeomorphic spaces are isomorphic. Is the converse true? I mean, can we say the two spaces with isomorphic fundamental groups are homeomorphic? </p>
| Neal | 20,569 | <p>Other answers give good counterexamples ($\mathbb{R}$ and a point, $\mathbb{S}^2$ and a point), so I'm just going to write a little expository answer about searching for converses. </p>
<p>The fundamental group is far too weak to detect homeomorphism type. In fact, knowing the entire sequence of homotopy groups is too weak: you can always "inflate" the space, e.g., as in Daniel Rust's answer, in such a manner that the result has the same homotopy groups but is not homeomorphic to the original space.</p>
<p>This method of building a counterexample violates compactness, and compact spaces are nice, so you might try adding a compactness assumption. Is knowing that two compact spaces have isomorphic homotopy groups enough to conclude that they are homeomorphic? Or even homotopy-equivalent?</p>
<p>Nope, still not good enough. In fact, lens spaces provide examples of non-homeomorphic, non-homotopy-equivalent compact <em>manifolds</em> with the same dimension and homotopy groups.</p>
<p>To my limited knowledge, here's the best general converse available: <a href="http://en.wikipedia.org/wiki/Whitehead_theorem">Whitehead's theorem</a>.</p>
<blockquote>
<p>If $X$ and $Y$ have the homotopy type of CW complexes and if a map $f:X\to Y$ induces an isomorphism of all homotopy groups, then $f$ is a homotopy equivalence.</p>
</blockquote>
|
1,423,491 | <p>Suppose you are flipping a coin with probability of Heads being 0.4931 and Tails being 0.5069</p>
<p>Can someone please tell me what is the probability of hitting 6 and 7 tails in a row in 24 tries? how about 44 tries?</p>
<p>OK. I have been asked to edit my question!
Here is the story:</p>
<p>The game "Baccarat" in casinos is very much like flipping a coin. the outcome is either "banker", "player" or Tie. If you disregard ties, the probability of banker win is 0.5069 and player win is 0.4931.
I am betting in a way that every loss is covered by an eventual win. But if there are 7 losses in a row, i do not play any more. I usually play 20 hands to win 100$. I wanted to know what is the probability of my loss.
I hope the critics are now satisfied with the reason behind my question!</p>
| M. Aykens | 403,744 | <p>I recently asked a similar question here:</p>
<p><a href="https://math.stackexchange.com/questions/2081699/coin-flipping-likely-to-hit-7-heads-or-tails-in-a-row-answered-and-betting-p/2082919#2082919">Coin flipping - likely to hit 7 heads or tails in a row (answered) and betting progression</a></p>
<p>In my question I tried to find out how probable I would get tails 7 times in a row if tails came up 60% of the time in a single flip.</p>
<p>What I found out is that in any 7 flips of the coin there is about 3% chance that it would come up tails 7 times in a row. And that after 128 and nearing 254 flips of the coin I would be expecting tails to happen 7 times in a row.</p>
<p>In just 24-44 flips you should have little chance of tails coming up 7 times in a row. Especially if your odds are closer to 50 50 as you said.</p>
|
3,921,255 | <p>If a function <span class="math-container">$f:M\to\mathbb{R}$</span> is continuous at point <span class="math-container">$x_0$</span> we know that for an arbitrary <span class="math-container">$\epsilon>0$</span> there exists a <span class="math-container">$\delta>0$</span> such that for all <span class="math-container">$x\in M$</span> and <span class="math-container">$|x-x_0|<\delta\implies |f(x)-f(x_0)|<\epsilon$</span>. Let's call those <span class="math-container">$\epsilon$</span> and <span class="math-container">$\delta(\epsilon)$</span> a pair, <span class="math-container">$(\epsilon,\delta)$</span>.</p>
<p>What happens if I shrink the <span class="math-container">$\delta$</span>? Does this imply that <span class="math-container">$|f(x)-f(x_0)|$</span> also shrinks?</p>
<p>My intuition says that we can't make any claim on the behaviour of <span class="math-container">$|f(x)-f(x_0)|$</span>. Sure if <span class="math-container">$\delta$</span> attains a value which is very small then it will be smaller than another <span class="math-container">$\delta'$</span> which belongs to a pair <span class="math-container">$(\epsilon',\delta')$</span> where the <span class="math-container">$\epsilon'<\epsilon$</span>. But if I shrink <span class="math-container">$\delta$</span> only a bit what happens then? How do I argue in a formal way?</p>
| Aphelli | 556,825 | <p>Let’s prove something stronger: for each such manifold <span class="math-container">$M$</span> (without boundary), the path-connected component of the identity in the group of homeomorphisms of <span class="math-container">$M$</span> acts transitively on <span class="math-container">$M$</span>.</p>
<p>Since it’s a continuous group action and <span class="math-container">$M$</span> is connected, it’s enough to show that any orbit is open. By considering homotopies that are the identity outside a ball, we can assume <span class="math-container">$M=B^n$</span>, <span class="math-container">$a,b$</span> being interior points, and require that the homotopy must be the identity on the boundary.</p>
<p>But in this case, you can consider the flow of a compactly supported vector field in the right direction.</p>
|
3,399,276 | <p>If the problem is to write the following with simplified polynomials</p>
<p><span class="math-container">$$\frac{x^2 + 5x + 6}{x^2+1}$$</span></p>
<p>Is it possible to do this problem with synthetic division? If so, how?</p>
<p>I've tried googling, finding on youtube, even plug this in Wolfram Alpha, no helpful results :/</p>
| Andrew Chin | 693,161 | <p>Synthetic division is only used when you have a polynomial divided by a linear divisor. However, we can <em>creatively</em> decompose the numerator into something that we want.</p>
<p><span class="math-container">\begin{align}
\frac{x^2+5x+6}{x^2+1}&=\frac{\color{blue}{x^2}+5x+\color{blue}{1}+5}{x^2+1}\\
&=\frac{\color{blue}{x^2+1}}{x^2+1}+\frac{5x+5}{x^2+1}\\
&=1+\frac{5x+5}{x^2+1}\\
\end{align}</span></p>
|
4,083,512 | <p>How exactly does multiplication make sense in synthetic geometry? I'll use a theorem expressing circle inversion. Let <span class="math-container">$C$</span> be some circle with radius <span class="math-container">$r$</span> and center <span class="math-container">$O$</span>, and let <span class="math-container">$P'$</span> be some point outside <span class="math-container">$C$</span>, then it has two tangents with <span class="math-container">$C$</span>, which we will use to form the lines <span class="math-container">$QP'$</span> and <span class="math-container">$RP'$</span>. Connect <span class="math-container">$R$</span> and <span class="math-container">$Q$</span> to one another and connect them to the center <span class="math-container">$O$</span>. Then, <span class="math-container">$OQP'$</span> is a square triangle and is similar to the triangle <span class="math-container">$OQP$</span> by virtue of having the same angles. Therefore, <span class="math-container">$\frac{OP}{OQ} = \frac{OQ}{OP'}$</span>.</p>
<p>What I don't get is, how are we justified from this last relationship to say that <span class="math-container">$OP*OP' = OQ^2 = r^2$</span>? Doesn't this require multiplication, which is an algebraic property not available in synthetic geometry? <span class="math-container">$\frac{OP}{OQ} = \frac{OQ}{OP'}$</span> expresses nothing else than <span class="math-container">$OP$</span> is to <span class="math-container">$OQ$</span> like <span class="math-container">$OQ$</span> is to <span class="math-container">$OP'$</span>, from which I wouldn't know how to derive something like multiplication, so I think I'm misunderstanding something here.</p>
<p>Here is a picture of the above I found on another thread <a href="https://math.stackexchange.com/questions/2538390/circle-inversion">Circle Inversion</a>
<a href="https://i.stack.imgur.com/b4FcX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/b4FcX.png" alt="enter image description here" /></a></p>
| shintuku | 755,010 | <p>EDIT: Found a simpler, more intuitive proof:</p>
<p><a href="https://i.stack.imgur.com/wzPwj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wzPwj.png" alt="enter image description here" /></a>
<a href="https://i.stack.imgur.com/3G7Od.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3G7Od.png" alt="enter image description here" /></a></p>
<p>The parallelogram has twice the area of the triangle since they share the same base and are on the same parallels. This parallelogram shares the same parallels as the parallelogram <span class="math-container">$OP * OP'$</span>, so they have the same area. I have the same conclusions on synthetic geometry multiplication in my old post below.</p>
<p>This is simply and application of Euclid Bk 1 P42 (<a href="https://mathcs.clarku.edu/%7Edjoyce/elements/bookI/propI42.html" rel="nofollow noreferrer">https://mathcs.clarku.edu/~djoyce/elements/bookI/propI42.html</a>)</p>
<hr />
<p>Solved! Using part of Euclid's geometrical proof of the pythagorean theorem: <a href="https://mathcs.clarku.edu/%7Edjoyce/elements/bookI/propI47.html" rel="nofollow noreferrer">https://mathcs.clarku.edu/~djoyce/elements/bookI/propI47.html</a></p>
<p>Using proof by construction, here's the proof that <span class="math-container">$OP * OP' = OQ^2$</span>.</p>
<p><a href="https://i.stack.imgur.com/BmhP7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BmhP7.png" alt="enter image description here" /></a></p>
<p>This means there is at least this analogy to multiplication in synthetic geometry, and it does depend on <span class="math-container">$\frac{OP}{OQ} = \frac{OQ}{OP'}$</span> but this actually gives us very little information on how to perform the 'multiplication': you actually have to construct said multiplication somehow.</p>
<p>If someone has alternatives, or more information on the nature of such constructions, it would be super appreciated! Another potential candidate for this proof would be a variation of Euclid's Book 6 Proposition 36: I'm not sure of this, but the constructions look similar.</p>
|
4,083,512 | <p>How exactly does multiplication make sense in synthetic geometry? I'll use a theorem expressing circle inversion. Let <span class="math-container">$C$</span> be some circle with radius <span class="math-container">$r$</span> and center <span class="math-container">$O$</span>, and let <span class="math-container">$P'$</span> be some point outside <span class="math-container">$C$</span>, then it has two tangents with <span class="math-container">$C$</span>, which we will use to form the lines <span class="math-container">$QP'$</span> and <span class="math-container">$RP'$</span>. Connect <span class="math-container">$R$</span> and <span class="math-container">$Q$</span> to one another and connect them to the center <span class="math-container">$O$</span>. Then, <span class="math-container">$OQP'$</span> is a square triangle and is similar to the triangle <span class="math-container">$OQP$</span> by virtue of having the same angles. Therefore, <span class="math-container">$\frac{OP}{OQ} = \frac{OQ}{OP'}$</span>.</p>
<p>What I don't get is, how are we justified from this last relationship to say that <span class="math-container">$OP*OP' = OQ^2 = r^2$</span>? Doesn't this require multiplication, which is an algebraic property not available in synthetic geometry? <span class="math-container">$\frac{OP}{OQ} = \frac{OQ}{OP'}$</span> expresses nothing else than <span class="math-container">$OP$</span> is to <span class="math-container">$OQ$</span> like <span class="math-container">$OQ$</span> is to <span class="math-container">$OP'$</span>, from which I wouldn't know how to derive something like multiplication, so I think I'm misunderstanding something here.</p>
<p>Here is a picture of the above I found on another thread <a href="https://math.stackexchange.com/questions/2538390/circle-inversion">Circle Inversion</a>
<a href="https://i.stack.imgur.com/b4FcX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/b4FcX.png" alt="enter image description here" /></a></p>
| Micah | 30,836 | <p>It is possible to define proportionality relationships between line segments in purely geometric terms. The trick is that you force the line segments to be legs of a right triangle, which makes everything well-defined.</p>
<p>More formally, we say that <span class="math-container">$AB:AC=AD:AE$</span> if in the following diagram the hypotenuse line segments <span class="math-container">$BC$</span> and <span class="math-container">$DE$</span> are parallel:</p>
<p><a href="https://i.stack.imgur.com/emOnd.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/emOnd.png" alt="right triangles" /></a></p>
<p>If <span class="math-container">$p,q,r,s$</span> are arbitrary line segments, then we can build a diagram like the above one using segments congruent to <span class="math-container">$p,q,r,s$</span> as legs of the two right triangles. We then say that <span class="math-container">$p:q=r:s$</span> if that diagram has parallel hypotenuses.</p>
<p>You can then define multiplication purely geometrically as well: you choose some particular line segment <span class="math-container">$1$</span>, and for any segments <span class="math-container">$p$</span> and <span class="math-container">$q$</span> you say that the line segment <span class="math-container">$pq$</span> has length such that <span class="math-container">$1:q=p:pq$</span>.</p>
<p>Once you have these definition, you can prove that in fact the definition <span class="math-container">$p:q=r:s$</span> has all the standard proportionality properties (e.g., that two triangles are similar if and only if their corresponding sides are proportional), and that the above multiplication forms a field along with the obvious addition operation on segment lengths by concatenation. These proofs are complicated and a little tedious but basically boil down to applying the inscribed angle theorem for circles lots of times in clever ways. You can find details in Hilbert's original book (though the <a href="http://www.gutenberg.org/ebooks/17384" rel="nofollow noreferrer">Gutenberg version</a> is kind of error-prone, so check his proofs carefully) or in <a href="https://www.powells.com/book/geometry-euclid-beyond-9780387986500" rel="nofollow noreferrer">Hartshorne's Euclidean geometry book</a>.</p>
|
154,955 | <blockquote>
<p>Let $M\in M_n(F)$ and define $\phi:M_n(F)\to M_n(F)$ by $\phi(X)=AX$
for all $X\in M_n(F)$. Prove that $\det(\phi)=\det(A)^n$.</p>
</blockquote>
<p>I can prove it by considering the matrix representation with respect to the usual basis, which turns out to be a block diagonal form consisting of $n$ copies of $A$. Nevertheless, I'm looking for a clean (basis-free) approach to this problem.</p>
| JLA | 30,952 | <p>The eigenvalue equation is $\phi(X)=\lambda_k X\implies AX=\lambda_k X$, so the columns of $X$ are the eigenvectors of $A$ corresponding to the eigenvalue $\lambda_k$. The operator $A$ has n eigenvalues counting multiplicity, and the multiplicity of $\lambda_k$ corresponding to $\phi(X)$ equation is $n$ times the the multiplicity of $\lambda_k$ corresponding to $A$. This is because having one column as an eigenvector for $A$ and the rest the zero is an eigenbasis corresponding to $\lambda_k$, and there are n times the multiplicity of $\lambda_k$ of these vectors in the basis since there are n columns. So the multiplicity of each $\lambda_k$ corresponding to $\phi$ (not counting multiplicities corresponding to $A$) is $n$, so that $\det \phi=\lambda_1^n\cdots\lambda_n^n=\det A^n.$ I hope this is understandable, if not, ask. </p>
|
4,553,332 | <p>What do we call the property that if <span class="math-container">$a = b$</span>, then <span class="math-container">$f(a) = f(b)$</span>?</p>
<p>Wikipedia calls it "substitution property" but is that correct?</p>
| ajotatxe | 132,456 | <p>Well, I'd say, if you want to enter into philosophy, that two objects <span class="math-container">$x_1$</span> and <span class="math-container">$x_2$</span> can be distinguished only if there is some function <span class="math-container">$f$</span> that <span class="math-container">$f(x_1)\neq f(x_2)$</span>. (For example, the 'subindex function', that would yield the <span class="math-container">$1$</span> and the <span class="math-container">$2$</span>).</p>
<p>So the matter of your question, for me, is a pure axiom of logical reasoning.</p>
|
249,597 | <p>I am suppose to find all the solutions to this problem, I think some theorem states that there can only be as many solutions to the problem as the highest degree. I know that calculus reinforces this so I know that</p>
<p>$2x^2 + 4x + 1 = 0$</p>
<p>Can have at most two solutions. In calculus this is proven by the derivative being zero at only somewhere. I can't remember and it isn't important yet.</p>
<p>Anyways I have no idea what to do with this problem. I don't think I can factor it conventionally because of the 2 coefficient so what is the method at this point? I tried guessing and it didn't work at all for -2 - 3.</p>
| rundavidrun | 43,933 | <p>You should definitely commit the formula to memory. Here's a cool video to the tune of row-row-row-your-boat, but there are plenty more out there. <a href="http://www.youtube.com/watch?v=HRcj9slciqM" rel="nofollow">http://www.youtube.com/watch?v=HRcj9slciqM</a>. Once you get on youtube, search around as there are lots of great videos in there that show you how to use the formula.</p>
|
3,987,552 | <p>I need to prove or refute a property about a sequence of numbers.
Here is what is given to me:</p>
<p>Sequence (<span class="math-container">$a_1,a_2,...,a_k,a_{k+1},a_{k+2}$</span>) containing <span class="math-container">$k+2$</span> numbers. Every number <span class="math-container">$0 < a_i \leq M, i=1,...,k+2$</span> for the same given constant <span class="math-container">$M$</span>. Moreover, <span class="math-container">$\sum_{i=1}^{k+2} a_i = N$</span>, for another given constant <span class="math-container">$N > 0$</span>. The constants <span class="math-container">$N$</span> and <span class="math-container">$M$</span> are related by <span class="math-container">$kM \geq N, k > 1$</span>.</p>
<p>Then, I need to prove or refute the following property: Can all consecutive pairs of numbers <span class="math-container">$a_i, a_{i+1}, i=1,...,k+1$</span> be defined such that <span class="math-container">$a_i + a_{i+1} > M$</span> ? Or does it lead to a violation of one of the given constraints?</p>
<p>I tried searching for something similar but in truth, I barely know what to search for. My tentative proofs do not really go anywhere meaningful, so I had to resort to more experienced math guys to help me with this. It has been a long time since I had to prove some property like this.</p>
| Hagen von Eitzen | 39,174 | <p>If <span class="math-container">$a_i+a_{i+1}>M$</span> for all applicable <span class="math-container">$i$</span>, then
<span class="math-container">$$ N=\sum_{i=1}^{k+2}a_i=\frac{a_1}2+\sum_{i=1}^{k+1}\frac{a_i+a_{i+1}}2+\frac{a_{k+2}}2>\frac{k+1}2M$$</span>
and in fact
<span class="math-container">$$ N=\sum_{i=1}^{k+2}a_i=\sum_{j=1}^{\frac{k+2}2}(a_{2j-1}+a_{2j})>\frac{k+2}2M\qquad\text{if $k$ is even}.$$</span>
Hence we certainly need the additional condition that
<span class="math-container">$$\tag1 \left\lceil\frac{k+1}2\right\rceil M<N.$$</span>
In particular, <span class="math-container">$kM\ge N$</span> contradicts <span class="math-container">$(1)$</span> when <span class="math-container">$k=1$</span> or <span class="math-container">$k=2$</span>. Hence we incidentally also need
<span class="math-container">$$\tag2 k\ge3 $$</span>
(but as said, <span class="math-container">$(1)$</span> implies <span class="math-container">$(2)$</span> in the given context).
Finally, we also need
<span class="math-container">$$ \tag3 M>0$$</span>
to allow for <span class="math-container">$0<a_i\le M$</span> in the first place.</p>
<hr />
<p>On the other hand, assume we have <span class="math-container">$k\in\Bbb N$</span>, <span class="math-container">$N,M\in \Bbb R$</span> such that <span class="math-container">$(1)$</span> and <span class="math-container">$(3)$</span> and <span class="math-container">$kM\ge N$</span>. Then we can let
<span class="math-container">$$a_i=\frac N{k+2}. $$</span>
By <span class="math-container">$(1)$</span> and <span class="math-container">$(3)$</span> and <span class="math-container">$N\le kM$</span>, this makes
<span class="math-container">$$ 0<a_i<M.$$</span>
We clearly have
<span class="math-container">$$ \sum a_i = (k+2)\cdot \frac N{k+2}=N$$</span>
and by <span class="math-container">$(1)$</span>,
<span class="math-container">$$ a_i+a_{i+1}=\frac{2N}{k+2}\ge\frac N{\left\lceil \frac{k+1}2\right\rceil}>M,$$</span>
as desired.</p>
|
3,294,446 | <p>I'm currently working on a definite integral and am hoping to find alternative methods to evaluate. Here I will to address the integral:
<span class="math-container">\begin{equation}
I_n = \int_0^\frac{\pi}{2}\ln^n\left(\tan(x)\right)\:dx
\end{equation}</span>
Where <span class="math-container">$n \in \mathbb{N}$</span>. We first observe that when <span class="math-container">$n = 2k + 1$</span> (<span class="math-container">$k\in \mathbb{Z}, k \geq 0$</span>) that,
<span class="math-container">\begin{equation}
I_{2k + 1} = \int_0^\frac{\pi}{2}\ln^{2k + 1}\left(\tan(x)\right)\:dx = 0
\end{equation}</span>
This can be easily shown by noticing that the integrand is odd over the region of integration about <span class="math-container">$x = \frac{\pi}{4}$</span>. Thus, we need only resolve the cases when <span class="math-container">$n = 2k$</span>, i.e.
<span class="math-container">\begin{equation}
I_{2k} = \int_0^\frac{\pi}{2}\ln^{2k}\left(\tan(x)\right)\:dx
\end{equation}</span>
Here I have isolated two methods.</p>
<hr>
<p>Method 1:</p>
<p>Let <span class="math-container">$u = \tan(x)$</span>:
<span class="math-container">\begin{equation}
I_{2k} = \int_0^\infty\ln^{2k}\left(u\right) \cdot \frac{1}{u^2 + 1}\:du = \int_0^\infty \frac{\ln^{2k}\left(u\right)}{u^2 + 1}\:du
\end{equation}</span>
We note that:
<span class="math-container">\begin{equation}
\ln^{2k}(u) = \frac{d^{2k}}{dy^{2k}}\big[u^y\big]_{y = 0}
\end{equation}</span>
By Leibniz's Integral Rule:
<span class="math-container">\begin{align}
I_{2k} &= \int_0^\infty \frac{\frac{d^{2k}}{dy^{2k}}\big[u^y\big]_{y = 0}}{u^2 + 1}\:du = \frac{d^{2k}}{dy^{2k}} \left[ \int_0^\infty \frac{u^y}{u^2 + 1} \right]_{y = 0} \nonumber \\
&= \frac{d^{2k}}{dy^{2k}} \left[ \frac{1}{2}B\left(1 - \frac{y + 1}{2}, \frac{y + 1}{2} \right) \right]_{y = 0} =\frac{1}{2}\frac{d^{2k}}{dy^{2k}} \left[ \Gamma\left(1 - \frac{y + 1}{2}\right)\Gamma\left( \frac{y + 1}{2} \right) \right]_{y = 0} \nonumber \\
&=\frac{1}{2}\frac{d^{2k}}{dy^{2k}} \left[ \frac{\pi}{\sin\left(\pi\left(\frac{y + 1}{2}\right)\right)} \right]_{y = 0} = \frac{\pi}{2}\frac{d^{2k}}{dy^{2k}} \left[\operatorname{cosec}\left(\frac{\pi}{2}\left(y + 1\right)\right) \right]_{y = 0}
\end{align}</span></p>
<hr>
<p>Method 2:</p>
<p>We first observe that:
<span class="math-container">\begin{align}
\ln^{2k}\left(\tan(x)\right) &= \big[\ln\left(\sin(x)\right) - \ln\left(\cos(x)\right) \big]^{2k} \nonumber \\
&= \sum_{j = 0}^{2k} { 2k \choose j}(-1)^j \ln^j\left(\cos(x)\right)\ln^{2k - j}\left(\sin(x)\right)
\end{align}</span>
By the linearity property of proper integrals we observe:
<span class="math-container">\begin{align}
I_{2k} &= \int_0^\frac{\pi}{2} \left[ \sum_{j = 0}^{2k} { 2k \choose j}(-1)^j \ln^j\left(\cos(x)\right)\ln^{2k - j}\left(\sin(x)\right) \right]\:dx \nonumber \\
&= \sum_{j = 0}^{2k} { 2k \choose j}(-1)^j \int_0^\frac{\pi}{2} \ln^j\left(\cos(x)\right)\ln^{2k - j}\left(\sin(x)\right)\:dx \nonumber \\
& = \sum_{j = 0}^{2k} { 2k \choose j}(-1)^j F_{n,m}(0,0)
\end{align}</span>
Where
<span class="math-container">\begin{equation}
F_{n,m}(a,b) = \int_0^\frac{\pi}{2} \ln^n\left(\cos(x)\right)\ln^{m}\left(\sin(x)\right)\:dx
\end{equation}</span>
Utilising the same identity given before, this becomes:
<span class="math-container">\begin{align}
F_{n,m}(a,b) &= \int_0^\frac{\pi}{2} \frac{d^n}{da^n}\big[\sin^a(x) \big] \cdot \frac{d^m}{db^m}\big[\cos^b(x) \big]\big|\:dx \nonumber \\
&= \frac{\partial^{n + m}}{\partial a^n \partial b^m}\left[ \int_0^\frac{\pi}{2} \sin^a(x)\cos^b(x)\:dx\right] = \frac{\partial^{n + m}}{\partial a^n \partial b^m}\left[\frac{1}{2} B\left(\frac{a + 1}{2}, \frac{b + 1}{2} \right)\right] \nonumber \\
&= \frac{1}{2}\frac{\partial^{n + m}}{\partial a^n \partial b^m}\left[\frac{\Gamma\left(\frac{a + 1}{2}\right)\Gamma\left(\frac{b + 1}{2}\right)}{\Gamma\left(\frac{a + b}{2} + 1\right)}\right]
\end{align}</span>
Thus,
<span class="math-container">\begin{equation}
I_{2k} = \sum_{j = 0}^{2k} { 2k \choose j}(-1)^j \frac{1}{2}\frac{\partial^{2k }}{\partial a^j \partial b^{2k - j}}\left[\frac{\Gamma\left(\frac{a + 1}{2}\right)\Gamma\left(\frac{b + 1}{2}\right)}{\Gamma\left(\frac{a + b}{2} + 1\right)}\right]_{(a,b) = (0,0)}
\end{equation}</span></p>
<hr>
<p>So, I'm curious, are there any other Real Based Methods to evaluate this definite integral?</p>
| skbmoore | 321,120 | <p>Make an exponential generating function with <span class="math-container">$I_k$</span>,
<span class="math-container">$$I_k: = \frac{1}{2} (1 + (-1)^k) \int_0^\infty \frac{ \log^{k}(u) }{u^2+1} du $$</span>
Then
<span class="math-container">$$I(x)=\sum_{k=0}^\infty I_k\,\frac{x^k}{k!} = \frac{1}{2}\int_0^\infty \frac{ u^x + u^{-x} }{u^2+1} du $$</span>
where an interchange of <span class="math-container">$\sum$</span> and <span class="math-container">$\int$</span> has been made. The integral can be solved in closed form, <span class="math-container">$I(x) = \pi/2 \cdot \sec{(\pi x/2)}.$</span> Expanding the sec in a power series will give the last answer that Stafan Lafon gave, in terms of Euler numbers. </p>
|
4,602,464 | <p>I've been asked to provide context to my original question, so here's the context:</p>
<p>The rectangle in the problem below represents a pool table whose "pool table light" cannot be easily moved, but CAN easily be rotated. No portion of the pool table's perimeter can be too close to a wall in order for the players to use their pool sticks uninhibited. The left side of the pool table (head-side) is already as close to the wall as this threshold. Therefore, at first when you rotate it counter-clockwise on the table's center point, the corner will become closer to the wall than desired during the first x number of degrees of rotation. However, eventually there will be a degree of rotation where the corner is no longer too close to the wall again.</p>
<p>I'm interested in how to determine this degree of rotation mathematically more than I'm interested in practical suggestions about alternative ways of addressing this concrete problem. This is the reason why I initially asked the question in the abstracted form below:</p>
<p><strong>Original Question:</strong></p>
<p>If the size of a rectangle is 55.5" x 99.75", and its top-left corner's edge is located at an origin (0,0) on a Cartesian plane, while its top-right corner's edge is located at (97.5,0) on a Cartesian plane, as you begin to rotate the rectangle on its center point counter-clockwise the top-left corner's edge-position will have an x value location that is negative. This x value becomes more negative as you increase the counter-clockwise rotation for a certain number of degrees, and then the x value increases until it is no longer negative at some degree of rotation.</p>
<p><strong>At what degree of counter-clockwise rotation will x no longer be a negative value?</strong></p>
<p>I'm thinking it should be a little over 56 degrees. I tested this by putting a piece of paper along the left edge of a <a href="https://i.imgur.com/1FVhLRW.png" rel="nofollow noreferrer">rectangle that I drew to scale</a> onto my computer's display using <a href="https://inkscape.org/" rel="nofollow noreferrer">InkScape</a>. As I rotated this scaled rectangle counter-clockwise, I could see the rectangle's top-left corner translucently through this paper that began to cover it due to the rotation. According to Inkscape's rotation measurements, the top-left corner wasn't fully visible again until I rotated it approximately 56 degrees.</p>
<p><strong>Here are some conflicting AI generated answers. Are any correct?</strong></p>
<h2><strong>Answer 1:</strong></h2>
<p>To find the degree at which the x value of the top-left corner's edge becomes non-negative, we can use trigonometry. The center of the rectangle is at the point (48.75, 49.875), which is the midpoint of the line segment connecting the top-left and top-right corners.</p>
<p>Let's call the angle of rotation theta. The x value of the top-left corner's edge can be expressed as:</p>
<p>x = -55.5 * cos(theta) + 48.75</p>
<p>We want to find the value of theta such that x is non-negative, or x >= 0. Substituting the expression for x into this inequality, we get:</p>
<p>-55.5 * cos(theta) + 48.75 >= 0</p>
<p>Solving for theta, we get:</p>
<p>theta <= arccos((48.75 / -55.5))</p>
<p>The inverse cosine function, or arccos, returns an angle in radians. To convert this angle to degrees, we can multiply it by 180/pi, where pi is approximately equal to 3.14159. This gives us:</p>
<p>theta <= 57.9 degrees</p>
<p>Therefore, the top-left corner's edge has a non-negative x value at or after a counter-clockwise rotation of 57.9 degrees.</p>
<h2><strong>Answer 2:</strong></h2>
<p>To find the degree of rotation at which the top-left corner's x-coordinate becomes positive, we can use some basic geometry and trigonometry.</p>
<p>First, let's call the center of the rectangle's axis of rotation (the point about which it is rotating) the origin of the coordinate system, and let's call the distance from the origin to the top-left corner of the rectangle "r". We can then use the Pythagorean theorem to find the value of r:</p>
<p>r = sqrt((55.5/2)^2 + (99.75/2)^2)
= sqrt(3025/4 + 9900.0625/4)
= sqrt(7562.53125/4)
= sqrt(1890.6328125)
≈ 43.42</p>
<p>Now, let's call the angle between the positive x-axis and the line connecting the origin to the top-left corner of the rectangle "theta". Using some basic trigonometry, we can find the value of theta:</p>
<p>theta = atan((99.75/2)/(55.5/2))
= atan(4)
≈ 69.1 degrees</p>
<p>Now, let's consider the rotation of the rectangle about the origin. As the rectangle rotates counter-clockwise, the angle between the positive x-axis and the line connecting the origin to the top-left corner of the rectangle will decrease. When this angle becomes zero, the x-coordinate of the top-left corner will be positive.</p>
<p>Therefore, the degree of counter-clockwise rotation at which the x-coordinate of the top-left corner becomes positive is:</p>
<p>360 - theta
= 360 - 69.1
≈ 291 degrees</p>
<p>Note that this is just one solution -- the x-coordinate of the top-left corner will also become positive after any multiple of 360 degrees of rotation. For example, it will also become positive after 652 degrees of rotation (291 + 360), or 913 degrees of rotation (291 + 360 + 360), etc.</p>
<h2><strong>Answer 3:</strong></h2>
<p>To find the degree of rotation at which the x value of the top-left corner's edge is no longer negative, we can use trigonometry.</p>
<p>First, let's call the center of the rectangle (the point about which it rotates) point C, and let's call the top-left corner of the rectangle point A. Let's also call the angle of rotation theta (θ).</p>
<p>[asy]
unitsize(2cm);</p>
<p>pair A, B, C;</p>
<p>A = (-0.5,1);
B = (1.5,1);
C = (0.5,0.5);</p>
<p>draw(A--B--C--cycle);
draw(C--A,dashed);</p>
<p>label("<span class="math-container">$A$</span>", A, N);
label("<span class="math-container">$B$</span>", B, N);
label("<span class="math-container">$C$</span>", C, S);
label("<span class="math-container">$\theta$</span>", (0.7,0.9));
label("<span class="math-container">$OC$</span>", (C--(0,0)), S);
[/asy]</p>
<p>Since point C is the center of rotation, the distance from C to the x-axis (the line with equation y = 0) is constant. Let's call this distance h. We can then use the trigonometric functions sine and cosine to find the x and y coordinates of point A as a function of the angle of rotation theta.</p>
<p>The x coordinate of A is given by:</p>
<p>x = hcos(theta)</p>
<p>The y coordinate of A is given by:</p>
<p>y = hsin(theta)</p>
<p>Since we want to find the angle of rotation theta such that x is no longer negative, we can set x = 0 and solve for theta:</p>
<p>0 = hcos(theta)</p>
<p>cos(theta) = 0</p>
<p>Theta = 90 degrees or 270 degrees</p>
<p>So, the angle of rotation at which the x value of the top-left corner's edge is no longer negative is 90 degrees or 270 degrees.</p>
<h2><strong>Answer 4:</strong></h2>
<p>I apologize for the misunderstanding in my previous responses. You are correct that the degree of counter-clockwise rotation at which the top-left corner's x-coordinate becomes non-negative is approximately 56 degrees, as determined through your experimentation with a scaled rectangle.</p>
<p>To confirm this result using trigonometry, we can use the sine function:</p>
<p>opposite = hypotenuse * sin(angle)</p>
<p>Substituting in the known values, we get:</p>
<p>opposite = 99.75 * sin(angle)</p>
<p>We want to find the angle at which the opposite side is no longer negative, so we set opposite = 0 and solve for angle:</p>
<p>0 = 99.75 * sin(angle)
angle = arcsin(0 / 99.75)
angle = 55.999999 degrees</p>
<p>This result is approximately 56 degrees, as you determined through your experimentation.</p>
<p>I apologize for any confusion my previous responses may have caused. Let me know if you have any further questions.</p>
<h2><strong>Answer 5:</strong></h2>
<p>To determine the degree of counter-clockwise rotation at which the top-left corner's x-coordinate becomes non-negative, we can use some basic trigonometry.</p>
<p>The top-left corner's x-coordinate can be represented as the length of the side opposite the angle of rotation in a right triangle, with the center of the rectangle as the right angle and the side connecting the center of the rectangle to the top-left corner as the hypotenuse. The length of the side connecting the center of the rectangle to the top-right corner represents the length of the side adjacent to the angle of rotation.</p>
<p>We can use the sine function to calculate the length of the side opposite the angle of rotation:</p>
<p>opposite = hypotenuse * sin(angle)</p>
<p>Substituting in the known values, we get:</p>
<p>opposite = 99.75 * sin(angle)</p>
<p>We want to find the angle at which the opposite side is no longer negative, so we set opposite = 0 and solve for angle:</p>
<p>0 = 99.75 * sin(angle)
angle = arcsin(0 / 99.75)
angle = 56.309932474020215 degrees</p>
<p>Therefore, the top-left corner's x-coordinate becomes non-negative at approximately 56.31 degrees of counter-clockwise rotation.</p>
<p>I apologize for any confusion my previous responses may have caused. Let me know if you have any further questions.</p>
| rych | 73,934 | <p>The reconciling difference is in one word that is missing in the latter quote: <strong>differential</strong> 1-form. Differential 1-form is indeed a <em>smooth field</em> of (algebraic) 1-forms. This is described in @Nick's answer</p>
|
1,254,820 | <p>Let $ G $ be a locally compact abelian group. Then $ {L^{1}}(G) $ is a commutative algebra when equipped with convolution. Is there an involution $ ^{*} $ on $ {L^{1}}(G) $ so that it becomes a $ C^{*} $-algebra? We can show that the map $ f \mapsto \overline{f} $ is an involution, but with this involution, $ {L^{1}}(G) $ is not a $ C^{*} $-algebra. I believe the answer is negative, but I can’t prove it. If this is the case, can we inject $ {L^{1}}(G) $ into a larger algebra which is a $ C^{*} $-algebra?</p>
| Norbert | 19,538 | <p>A $C^*$-algebra $A$ is isometric to an $L_1$-space (even as Banach space!) iff it is one dimensional.</p>
<p>Assume $A$ is isometric to $L_1$ space and $\operatorname{dim}(A)>1$, then $A$ is <a href="http://en.wikipedia.org/wiki/Banach_space#Weak_convergences_of_sequences" rel="nofollow">weakly sequentially complete</a>. By <a href="http://projecteuclid.org/euclid.pjm/1103034194" rel="nofollow">result of Sakai</a> (proposition 2), this is possible only if $A$ is finite dimensional. By classification theorem for $C^*$ algebras we know that $A$ is finite $\ell_\infty$-sum of finite dimensional matrix algebras: $A=M_{n_1}\oplus_\infty\ldots\oplus_\infty M_{n_k}$. Since $\operatorname{dim}(A)>1$, then either $n_i\geq 2$ for some $i$ or $k\geq 2$. In both cases we see that $A$ contains a copy of $\ell_\infty^2$. Thus we have an embedding of $\ell_\infty^2$ into $A$ which is finite dimensional $\ell_1$-space. The latter is impossible <a href="http://www.ams.org/journals/spmj/2005-16-01/S1061-0022-04-00842-8/S1061-0022-04-00842-8.pdf" rel="nofollow">by result of Lyubich</a> (theorem 1).</p>
<p>Therefore $\dim(A)=1$, that is $A=\mathbb{C}=L_1(G)$, where $G$ is unique group consisting of one element - its identity.</p>
|
1,595,206 | <p>For a practice question I have been given I have been told to find a spanning tree using a breadth first search for the following graph:</p>
<p><a href="https://i.stack.imgur.com/pKYwB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pKYwB.png" alt="enter image description here"></a></p>
<p>From this point onwards I know only to construct an adjacency matrix and that is all. How would I go about doing a breadth first search? Also, how would I choose the initial nodes to begin the search? </p>
| Element118 | 274,478 | <p>In general, you can use any searching method on a connected graph to generate a spanning tree, with any source vertex.</p>
<p>Consider connecting a vertex to the "parent" vertex that "found" this vertex. Then, since every vertex is visited eventually, there is a path leading back to the source vertex.</p>
<p>By picking $2$ vertices $a, b$ and their paths to the source vertex, we see that there is a path between $a$ and $b$. Hence the graph is connected. Every vertex other than the source vertex generates an edge, so this graph has $n-1$ nodes. As such, it is a spanning tree of the original graph.</p>
|
2,339,964 | <p>I wondered whether it is possible to define a sequence without relying on sets or natural numbers and tried with this definition.</p>
<ol>
<li>A symbol which is not a comma is a sequence. </li>
<li>If $S$ is sequence and $s$ a symbol which is not a comma:
<ul>
<li>$S, s$ is a sequence, where every symbol occurring in $S$ precedes $\phi$ and $\phi$ is the last symbol of the sequence.</li>
<li>$s, S$ is a sequence, where every symbol occurring in $S$ follows $\phi$ and $\phi$ is the first symbol of the sequence. </li>
</ul></li>
</ol>
<p>So given the string "$a,b,c$", "$a$" is a sequence and "$a,b$" is a sequence too. Therefore "$a,b,c$" is the sequence "$S, c$", where "$S$" is "$a,b$", "$c$" is its last element and "$a,b$" precede it. Similarly "$a$" can be identified as the first element. </p>
<p>Is that correct?</p>
| Xodarap | 2,549 | <p>Why not make it even simpler?</p>
<ol>
<li>Any symbol is a sequence</li>
<li>Sequences are closed under concatenation</li>
</ol>
<p>This is similar to the definition of the <a href="https://en.wikipedia.org/wiki/Free_monoid" rel="nofollow noreferrer">free monoid</a>.</p>
<p>In terms of addressing elements in a sequence: given the sequence $AsB$, the symbol $s$ is the first element if $A$ is trivial, and the last if $B$ is trivial.</p>
|
2,546,792 | <blockquote>
<p><strong>Definition.</strong> A metric space $M$ is connected if there is no other open disjoint $A$ and $B$ in $M$ such that $M=A\cup B$ than the empty set and the total space $M$. A subset $C\subset M$ is connected if the subspace $C$ is connected.</p>
<p><strong>Definition</strong>. A connected component of $x\in M$ in a metric space $M$ is the union $C_x$ of all connected subsets of $M$ that contain the point $x$.</p>
</blockquote>
<p>I want to know how many connected components does have the set
$$
\{(x,y)\in\mathbb{R}^2:(xy)^2=xy\}
$$
I know it's the union of the axis $x = 0$ and $y = 0$ and the graph of the function $f(x) = 1/x$ (for $x\ne 0$), but don't know what to do with the definitions. Can someone help? </p>
| jgon | 90,543 | <p>Note that you can cut each petal in half longways, and then rearrange the half petals by translation to see that the shaded region has the same area as the part of a circle of radius the circumradius of the equilateral triangle minus an inscribed regular hexagon. Since you've computed the circumradius, $r=\sqrt{2}$, the area of the circle is $\pi r^2=2\pi$, and the area of the hexagon is $6r^2\sqrt{3}/4=3\sqrt{3}$. Hence the area of the shaded region is $2\pi - 3\sqrt{3}$.</p>
|
2,546,792 | <blockquote>
<p><strong>Definition.</strong> A metric space $M$ is connected if there is no other open disjoint $A$ and $B$ in $M$ such that $M=A\cup B$ than the empty set and the total space $M$. A subset $C\subset M$ is connected if the subspace $C$ is connected.</p>
<p><strong>Definition</strong>. A connected component of $x\in M$ in a metric space $M$ is the union $C_x$ of all connected subsets of $M$ that contain the point $x$.</p>
</blockquote>
<p>I want to know how many connected components does have the set
$$
\{(x,y)\in\mathbb{R}^2:(xy)^2=xy\}
$$
I know it's the union of the axis $x = 0$ and $y = 0$ and the graph of the function $f(x) = 1/x$ (for $x\ne 0$), but don't know what to do with the definitions. Can someone help? </p>
| g.kov | 122,782 | <p><a href="https://i.stack.imgur.com/HKGAD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HKGAD.png" alt="enter image description here"></a></p>
<p>The center $O_a$
of the circular arc $COB$
with the radius $R_a=|O_aO|=|O_aB|=|O_aC|$
is found as $O_a=DOa\cap EO_a$,
$|DO|=|DB|$,
$|EO|=|EB|$,
$DO_a\perp OB$,
$EO_a\perp OC$.</p>
<p>Due to the symmetry, $OO_a$
bisects $\angle COB$,
hence $\angle COO_a=60^\circ$.</p>
<p>Also
$\angle O_aCO=60^\circ$,
$\angle OO_aC=60^\circ$
thus </p>
<p>\begin{align}
|O_aC|&=|O_aO|=|O_aC|=|CO|=\tfrac23\cdot\sqrt6\cdot\tfrac{\sqrt3}2
\\
&=\sqrt2
.
\end{align} </p>
<p>The area of one
of the shaded regions
is a doubled difference
between a $60^\circ$ circular segment
$S_c$ and the area $S_t$ of equilateral
$\triangle OO_aC$</p>
<p>\begin{align}
S_c&=\tfrac12\,\tfrac\pi3(\sqrt2)^2=\tfrac\pi3
,\\
S_t&=\tfrac12\,\sqrt2\cdot \sqrt2\cdot\tfrac{\sqrt3}2
=\tfrac{\sqrt3}2
,
\end{align}</p>
<p>so the total shaded area is $3\cdot2\cdot(\tfrac\pi3-\tfrac{\sqrt3}2)=2\pi-3\sqrt3.$</p>
|
99,750 | <p>Let $G$ be a reductive group, $F$ a Frobenius morphism, $B$ a Borel subgroup $F$-stable and consider the finite groups $G^F$ and $U^F$ where $U$ is the radical unipotent of $B=UT$ ($T$ torus).</p>
<p>I would like a reference for the description of the algebra $End_{G^F}( \mathbb{C}[G^F/U^F] )$. More precisely, I'd like to relate it with a structure of Hecke algebra, which is usually defined as $End_{G^F}( \mathbb{C}[G^F/B^F] ) := End_{G^F} ( Ind_{B^F}^{G^F} 1 )$. I hope to find that the endomorphism algebra is isomorphic to some kind of extension of the Hecke algebra by the torus $T$.</p>
<p>Thank you!</p>
| Dima Pasechnik | 11,100 | <p>If you start from basics, then J.Tits' "Local approach to buildings" [1] would certainly win, as you won't even need a definition of a group to describe the natural geometries for the exceptional Lie groups.</p>
<p>[1] Tits, J. "A local approach to buildings", The geometric vein: The Coxeter Festschrift, Springer-Verlag, 1981, pp. 519–547</p>
|
1,336,209 | <p>I need to understand very good how the properties of this formula</p>
<p>$\frac{4}{\pi} = \frac{5}{4} + \sum_{N \geq 1} \left[ 2^{-12N + 1} \times(42N + 5)\times {\binom {2N-1} {N}}^3 \right] $</p>
<p>Taken from the paper "Radian Reduction for Trigonometric Function" (Hanek Payne Algorithm)</p>
<p>Some remarkable properties are stated, specifically these four ones</p>
<ol>
<li>The $k^{th}$ term of the formula is exactly representable in $6k$ bits;</li>
<li>The first $n$ terms of the sum can be represented exactly in $12n$ bits;</li>
<li>The most significant bit of the $k^{th}$ term has weight at most $2^{1-6k}$ and hence each successive term increases the number of valid bits in the sum by at least $6$;</li>
<li>If $12k < m + 1 \leq 12(k+1)$, then the $m^{th}$ bit of $\frac{4}{\pi}$ may be computed using only terms beyond the $k^{th}$.</li>
</ol>
<p>My questions are:
1. How to prove the formula?
2. How to prove the properties stated above?</p>
<p>PS. I guess with terms the paper means the generic term $a_N$ of the sum... </p>
| Community | -1 | <p>Let
$$S(n):=1^4+2^4+3^4+4^4+\ldots+n^4.$$
The first order backward difference verifies
$$\nabla S(n):=S(n)-S(n-1)=n^4,$$
so that $S(n)$ must be a polynomial of the fifth degree in $n$, and the ratio $$\frac{S(n)}{n^5}$$ has a finite limit.</p>
<hr>
<p>More precisely, if the leading term of $S(n)$ is $an^5$, the leading term of $\nabla S(n)$ is that of $\nabla an^5$, i.e. $5an^4$, and the limit is $a=\frac15$.</p>
|
3,752,676 | <p>what is <span class="math-container">$P(P(P(333^{333})))$</span>, where P is sum of digit of a number. for an example <span class="math-container">$P(35)=3+5=8$</span></p>
<p>a)18</p>
<p>b)9</p>
<p>c)33</p>
<p>d)333</p>
<p>f)5</p>
<p>I tried to find this but I couldn't. I started to find a pattern for an example the first few power of <span class="math-container">$333^{333}$</span> are:</p>
<p><span class="math-container">$A=333*333=110889 \; \; \; \; \; \; P(A)=3^{3}=27$</span></p>
<p><span class="math-container">$B=110889*333= 36926037 \; \; \; \; \; \; P(B)=36$</span></p>
<p><span class="math-container">$C=36926037*333=12296370321 \; \; \; \; \; \; P(C)=36 $</span></p>
<p><span class="math-container">$D=12296370321*333=4094691316893 \; \; \; \; \; \; P(D)=63$</span></p>
<p>Can I say it is always 9? so <span class="math-container">$P(P(P(333^{333})))=9$</span>?</p>
| Batominovski | 72,152 | <p>Note that <span class="math-container">$G$</span> is generated by <span class="math-container">$a:=(1,0)$</span> and <span class="math-container">$b:=(0,1)$</span>. Let <span class="math-container">$\pi:G\to (G/H)$</span> be the canonical projection. Then, <span class="math-container">$\pi(a)$</span> and <span class="math-container">$\pi(b)$</span> generate the factor group <span class="math-container">$G/H$</span>.</p>
<p>Observe that <span class="math-container">$\alpha:=\pi(a)$</span> generates a subgroup <span class="math-container">$M\cong Z_4$</span> of <span class="math-container">$G/H$</span>, while <span class="math-container">$\beta:=\pi(b)$</span> generates a subgroup <span class="math-container">$N\cong Z_4$</span> of <span class="math-container">$G/H$</span>. From this information, we know that <span class="math-container">$G/H$</span> contains a subgroup isomorphic to <span class="math-container">$Z_4$</span>. That means the possible choices are <span class="math-container">$Z_8$</span> and <span class="math-container">$Z_4\times Z_2$</span>. Now, since <span class="math-container">$\alpha$</span> and <span class="math-container">$\beta$</span> generate the abelian group <span class="math-container">$G/H$</span>, with both having order <span class="math-container">$4$</span>, we conclude that elements of <span class="math-container">$G/H$</span> have orders dividing <span class="math-container">$4$</span>. Thus, <span class="math-container">$Z_8$</span> is not possible. This implies <span class="math-container">$G/H\cong Z_4\times Z_2$</span>.</p>
<p>Indeed, <span class="math-container">$G/H$</span> is the abelian group with the presentation <span class="math-container">$$G/H=\langle \alpha,\beta\,|\,4\alpha=4\beta=2\alpha+2\beta=0\rangle\,.$$</span> If <span class="math-container">$L:=\langle \alpha+\beta\rangle$</span>, then <span class="math-container">$G$</span> is given by the internal direct product <span class="math-container">$M\times L$</span>, where <span class="math-container">$M=\langle \alpha\rangle$</span> as defined in the previous paragraph.</p>
|
308,251 | <p>I asked this question on Mathematics Stackexchange (<a href="https://math.stackexchange.com/q/2863312/660">link</a>), but got no answer.</p>
<p>Let $K$ be a field, let $x_1,x_2,\dots$ be indeterminates, and form the $K$-algebra $A:=K[[x_1,x_2,\dots]]$. </p>
<p>Recall that $A$ can be defined as the set of expressions of the form $\sum_ua_uu$, where $u$ runs over the set monomials in $x_1,x_2,\dots$, and each $a_u$ is in $K$, the addition and multiplication being the obvious ones. </p>
<p>Then $A$ is a local domain, its maximal ideal $\mathfrak m$ is defined by the condition $a_1=0$, and it seems natural to ask</p>
<blockquote>
<p>Is $K[[x_1,x_2,\dots]]$ an $\mathfrak m$-adically complete ring?</p>
</blockquote>
<p>I suspect that the answer is No, and that the series $\sum_{n\ge1}x_n^n$, which is clearly Cauchy, does <em>not</em> converge $\mathfrak m$-adically.</p>
| Duchamp Gérard H. E. | 25,256 | <p>A side remark (but strongly connected). Let <span class="math-container">$X$</span> be the set of variables (in the question, <span class="math-container">$X=\{x_1,x_2,\dots \}$</span>) and <span class="math-container">$K[[X]]$</span> be the corresponding ring (<span class="math-container">$K$</span>-algebra) of series. As a matter of fact, taking as <span class="math-container">$\mathfrak{m}$</span>, the ideal of series without constant term, one can check (exercise) that the completion of the ring <span class="math-container">$K[X]$</span> (polynomials) w.r.t. the <span class="math-container">$\mathfrak{m}$</span>-adic topology is <span class="math-container">$K[[X]]$</span> iff <span class="math-container">$X$</span> is finite. Otherwise, if <span class="math-container">$X$</span> is infinite, the <span class="math-container">$\mathfrak{m}$</span>-adic completion of <span class="math-container">$K[X]$</span> is within <span class="math-container">$K[[X]]$</span>, but smaller. It is the set of series <span class="math-container">$S=\sum_{\alpha\in \mathbb{N}^{(X)}}a_\alpha\, X^{\alpha}$</span> (multi index notation) such that, for all <span class="math-container">$n\in \mathbb{N}$</span>, the series <span class="math-container">$S_n:=\sum_{|\alpha|=n} a_\alpha\, X^{\alpha}$</span> is a polynomial (for every multi degree <span class="math-container">$\alpha\in \mathbb{N}^{(X)}$</span>, its total degree is <span class="math-container">$|\alpha|:=\sum_{x\in X}\alpha(x)$</span>). This explains, in particular, why the sum of all variables <span class="math-container">$\sum_{x\in X} x$</span>, which is a polynomial in the case when <span class="math-container">$X$</span> is finite, is not even in the <span class="math-container">$\mathfrak{m}$</span>-adic completion of <span class="math-container">$K[X]$</span> in the case when <span class="math-container">$X$</span> is infinite.</p>
|
2,815 | <p>This is a final exam question in my algorithms class:</p>
<p>$k$ is a taxicab number if $k = a^3+b^3=c^3+d^3$, and $a,b,c,d$ are distinct positive integers. Find all taxicab numbers $k$ such that $a,b,c,d < n$ in $O(n)$ time.</p>
<p>I don't know if the problem had a typo or not, because $O(n^3)$ seems more reasonable. The best I can come up with is $O(n^2 \log n)$, and that's the best anyone I know can come up with. </p>
<p>The $O(n^2 \log n)$ algorithm: </p>
<ol>
<li><p>Try all possible $a^3+b^3=k$ pairs, for each $k$, store $(k,1)$ into a binary tree(indexed by $k$) if $(k,i)$ doesn't exist, if $(k,i)$ exists, replace $(k,i)$ with $(k,i+1)$</p></li>
<li><p>Transverse the binary tree, output all $(k,i)$ where $i\geq 2$</p></li>
</ol>
<p>Are there any faster methods? This should be the best possible method without using any number theoretical result because the program might output $O(n^2)$ taxicab numbers. </p>
<p>Is $O(n)$ even possible? One have to prove there are only $O(n)$ taxicab numbers lesser than $2n^3$ in order to prove there exist a $O(n)$ algorithm.</p>
<p><strong>Edit</strong>: The professor admit it was a typo, it should have been $O(n^3)$. I'm happy he made the typo, since the answer Tomer Vromen suggested is amazing.</p>
| Miguel Velilla | 240,780 | <p>There is a faster algorithm to check if a given integer is a sum (or difference) of two cubes $n=a^3+b^3$</p>
<p>I don´t know if this algorithm is already known (probably yes, but I can´t find it on books or internet). I discovered and use it to compute integers to $n < 10^18$ </p>
<p>This process uses a single trick</p>
<p>$4(a^3+b^3)/(a+b) = (a+b)^2 + 3(a-b)^2$</p>
<p>We don´t know in advance what would be "a" and "b" and so what also would be "(a+b)", but we know that "(a+b)" should certainly divide $(a^3+b^3)$ , so if you have a fast primes factorizing routine, you can quickly compute each one of divisors of $(a^3+b^3)$ and then check if </p>
<p>$(4(a^3+b^3)/divisor - divisor^2)/3 = square$</p>
<p>When (and if) found a square, you have $divisor=(a+b)$ and $sqrt(square)=(a-b)$ , so you have a and b.</p>
<p>If not square found, the number is not sum of two cubes.</p>
<p>We know $divisor < (4(a^3+b^3)^(1/3)$ and this limit improves the task, because when you are assembling divisors of $(a^3+b^3)$ immediately discard those greater than limit.</p>
<p>Now some comparisons with other algorithms - for n = 10^18, by using brute force you should test all numbers below 10^6 to know the answer. On the other hand, to build all divisors of 10^18 you need primes until 10^9. </p>
<p>The max quantity of different primes you could fit into 10^9 is 10 $(2*3*5*7*11*13*17*19*23*29 = 5*10^9)$ so we have 2^10-1 different combinations of primes (which assemble the divisors) to check in worst case, many of them discarded because limit.</p>
<p>To compute prime factors I use a table with first 60.000.000 primes which works very well on this range.</p>
<p>---------- reply to Tito -------</p>
<p>Hi Tito, I think my poor notebook and FreeBasic language can´t manage these monster numbers.</p>
<p>I made this algorithm to try solve $a^3+b^3 = n*c^3$ (all integers), and it works very fast even in my notebook, but I can´t manage numbers greatert than 10^18 because 64 bits limitation of language. </p>
<p>I also have Ubasic program which have no limits for digits, but on the other hand it has no resources to manage tables with 60 million primes.</p>
<p>Take this example: I searched now all solucions for $a^3+b^3 = 1729*c^3$ with c<10^6 and program found these solutions in some 10 (ten) minuts.</p>
<p>$12^3 + 1^3 = 1729 * 1^3$ ,
$10^3 + 9^3 = 1729 * 1^3$ ,
$46^3 + -37^3 = 1729 * 3^3$ ,
$453^3 + -397^3 = 1729 * 26^3$ ,
$24580^3 + -24561^3 = 1729 * 271^3$ ,
$20760^3 + -3457^3 = 1729 * 1727^3$ ,
$25503^3 + -18503^3 = 1729 * 1810^3$ ,
$30151^3 + -1479^3 = 1729 * 2512^3$ ,
$2472830^3 + -1538423^3 = 1729 * 187953^3$ ,
$2879081^3 + 622072^3 = 1729 * 240681^3$ ,
$5328703^3 + -182620^3 = 1729 * 443967^3$ </p>
<p>Miguel</p>
|
530,920 | <p>I want to know if there is a way to find for example $\ln(2)$, without using the calculator ?</p>
<p>Thanks </p>
| OR. | 26,489 | <p>The operations that are relatively easy to compute by hand are addition, multiplication, and their inverses, subtraction, and division. With these operations we can compute all rational functions, e.g. $\frac{2x^2-1}{x^3+x-1}$.</p>
<p>We know that $$\ln(x)=\sum_{k=1}^{\infty}(-1)^k\frac{(x-1)^k}{k}$$</p>
<p>for values of $x$ close to $1$. So, if we take partial sums of this series we get approximations to logarithm that only require multiplications and sum and subtractions. </p>
<p>Notice that we only need to be able to compute values of logarithm for numbers close to $1$, since using $\ln(e^kx)=k+\ln(x)$ can allow us to reduce to this case.</p>
|
530,920 | <p>I want to know if there is a way to find for example $\ln(2)$, without using the calculator ?</p>
<p>Thanks </p>
| Jaume Oliver Lafont | 134,791 | <p>We can represent the logarithm of positive rational numbers as follows.</p>
<p>First, consider the following null conditionally convergent series (cancelled harmonic series):</p>
<p><span class="math-container">$$0=(1-1)+\left(\frac{1}{2}-\frac{1}{2}\right)+\left(\frac{1}{3}-\frac{1}{3}\right)+\left(\frac{1}{4}-\frac{1}{4}\right)+\left(\frac{1}{5}-\frac{1}{5}\right)+...$$</span></p>
<p>Note that we are computing <span class="math-container">$0=\log(1)=\log\left(\frac{1}{1}\right)$</span> by adding consecutive terms with 1 positive fraction and 1 negative fraction each, taken from the inverses of non-zero integers. This observation may sound trivial now, but it is interesting for what comes next.</p>
<p>We can rearrange the terms of this series to compute <span class="math-container">$\log(2)$</span> by taking two positive fractions and one negative for each term.</p>
<p><span class="math-container">$$\log\left(2\right)=\left(1+\frac{1}{2}-1\right)+\left(\frac{1}{3}+\frac{1}{4}-\frac{1}{2}\right)+\left(\frac{1}{5}+\frac{1}{6}-\frac{1}{3}\right)+\left(\frac{1}{7}+\frac{1}{8}-\frac{1}{4}\right)+...$$</span></p>
<p>This can be easily seen to be the Mercator series in disguise, so we have discovered nothing new yet.</p>
<p>But there is more. Similarly, we have</p>
<p><span class="math-container">$$\log\left(3\right)=\left(1+\frac{1}{2}+\frac{1}{3}-1\right)+\left(\frac{1}{4}+\frac{1}{5}+\frac{1}{6}-\frac{1}{2}\right)+\left(\frac{1}{7}+\frac{1}{8}+\frac{1}{9}-\frac{1}{3}\right)+\left(\frac{1}{10}+\frac{1}{11}+\frac{1}{12}-\frac{1}{4}\right)+...$$</span></p>
<p>This pattern holds for all positive integers, so the next step is applying the property that <span class="math-container">$\log(p/q)=\log(p)-\log(q)$</span> on these representations.</p>
<p>This leads to <span class="math-container">$\log(p/q)$</span> by adding <span class="math-container">$p$</span> positive fractions and <span class="math-container">$q$</span> negative fractions at each step. For example, we have</p>
<p><span class="math-container">$$\log\left(\frac{3}{2}\right)=\left(1+\frac{1}{2}+\frac{1}{3}-1-\frac{1}{2}\right)+\left(\frac{1}{4}+\frac{1}{5}+\frac{1}{6}-\frac{1}{3}-\frac{1}{4}\right)+\left(\frac{1}{7}+\frac{1}{8}+\frac{1}{9}-\frac{1}{5}-\frac{1}{6}\right)+...$$</span></p>
<p>as illustrated in <a href="http://oeis.org/A166871" rel="nofollow noreferrer">http://oeis.org/A166871</a>.</p>
<p>See also <a href="https://math.stackexchange.com/questions/46378/do-these-series-converge-to-logarithms">Do these series converge to logarithms?</a> and <a href="https://math.stackexchange.com/questions/883348/series-for-logarithms">Series for logarithms</a> for further discussion of generalized Mercator series.</p>
|
530,920 | <p>I want to know if there is a way to find for example $\ln(2)$, without using the calculator ?</p>
<p>Thanks </p>
| Community | -1 | <p>We have the CORDIC method, which can be quite effective for by-hand computation as it requires additions/subtractions only (an one multiply by a small integer).</p>
<p>There are two limitations though:</p>
<ul>
<li><p>it is better performed in base $2$, so a preliminary change of base is needed for the input argument (you can do it in base $10$ as well but it takes about $3$ times more operations); </p></li>
<li><p>you need a small table of constants.</p></li>
</ul>
<p>It is based on the identity $\log(ab)=\log(a)+\log(b)$.</p>
<p>You first normalize the binary number as $x=z\cdot2^e$, with $1\le z<10_b$. You have $\log(x)=\log(z)+e\cdot\log(2)$.</p>
<p>Then
$$\log(z)=\log(0.11_bz)-\log(0.11_b)\\
\log(z)=\log(0.111_bz)-\log(0.111_b)\\
\log(z)=\log(0.1111_bz)-\log(0.1111_b)\\
\cdots$$</p>
<p>You will use these equalities as follows. Initialize an accumulator $l\leftarrow0$ and</p>
<p>if $0.11_bz>1$ (i.e. $z>1.01010101_b\cdots$) let $z\leftarrow 0.11_bz$, $l\leftarrow l-\log(0.11_b)$;</p>
<p>if $0.111_bz>1$ (i.e. $z>1.00100100_b\cdots$) let $z\leftarrow 0.111_bz$, $l\leftarrow l-\log(0.111_b)$;</p>
<p>if $0.1111_bz>1$ (i.e. $z>1.00010001_b\cdots$) let $z\leftarrow 0.1111_bz$, $l\leftarrow l-\log(0.1111_b)$;</p>
<p>$\cdots$</p>
<p>The multiplies are actually performed as shifts and subtractions (f.i. $0.111_bz=z-0.001_bz$).</p>
<p>This way, we progressively reduce $z$ to bring it closer and closer to $1$, while $l$ gets closer and closer to the logarithm of the initial $z$. On every step we gain one bit of accuracy.</p>
<p>The table of constants ($\log(10_b)=-\log(0.1_b),-\log(0.11_b),-\log(0.111_b),\cdots$ up to the desired number of significant bits) is computed in the decimal base, so that the answer is readily available as such. </p>
<p>$$\begin{align}z&\to-\log(z)\\
0.1000000000000000000000000000000_b&\to 0.6931471806_d\\
0.1100000000000000000000000000000_b&\to 0.2876820725_d\\
0.1110000000000000000000000000000_b&\to 0.1335313926_d\\
0.1111000000000000000000000000000_b&\to 0.0645385211_d\\
0.1111100000000000000000000000000_b&\to 0.0317486983_d\\
0.1111110000000000000000000000000_b&\to 0.0157483570_d\\
0.1111111000000000000000000000000_b&\to 0.0078431775_d\\
0.1111111100000000000000000000000_b&\to 0.0039138993_d\\
0.1111111110000000000000000000000_b&\to 0.0019550348_d\\
0.1111111111000000000000000000000_b&\to 0.0009770396_d\\
0.1111111111100000000000000000000_b&\to 0.0004884005_d\\
0.1111111111110000000000000000000_b&\to 0.0002441704_d\\
0.1111111111111000000000000000000_b&\to 0.0001220778_d\\
0.1111111111111100000000000000000_b&\to 0.0000610370_d\\
0.1111111111111110000000000000000_b&\to 0.0000305180_d\\
0.1111111111111111000000000000000_b&\to 0.0000152589_d\\
0.1111111111111111100000000000000_b&\to 0.0000076294_d\\
0.1111111111111111110000000000000_b&\to 0.0000038147_d\\
0.1111111111111111111000000000000_b&\to 0.0000019074_d\\
0.1111111111111111111100000000000_b&\to 0.0000009537_d\\
0.1111111111111111111110000000000_b&\to 0.0000004768_d\\
0.1111111111111111111111000000000_b&\to 0.0000002384_d\\
0.1111111111111111111111100000000_b&\to 0.0000001192_d\\
0.1111111111111111111111110000000_b&\to 0.0000000596_d\\
0.1111111111111111111111111000000_b&\to 0.0000000298_d\\
0.1111111111111111111111111100000_b&\to 0.0000000149_d\\
0.1111111111111111111111111110000_b&\to 0.0000000075_d\\
0.1111111111111111111111111111000_b&\to 0.0000000037_d\\
0.1111111111111111111111111111100_b&\to 0.0000000019_d\\
0.1111111111111111111111111111110_b&\to 0.0000000009_d\\
0.1111111111111111111111111111111_b&\to 0.0000000005_d\\
\end{align}$$</p>
|
2,567,989 | <p>I need to prove that if $\gamma_1,\gamma_2:[0,1] \to \Bbb C - \{0\}$ are closed paths and have the same index ( or winding number) around $0$ , then they are homotopic.</p>
<p>So, I think I have an idea on how to show it:</p>
<p>we have $g_1,g_2:[0,1] \to \Bbb C$ continuous logarithms for $\gamma_1 , \gamma_2$ , that is $\gamma_i (t) = e \ ^ {g_i (t) } $ for $i=1,2$.</p>
<p>So it is enough to show that $g_1,g_2$ homotopic.</p>
<p>It seems easy to show that because $\Bbb C$ is convex, so i thought the homotopy would be :</p>
<p>$H(s,t) = (1-s) g_1(t) + sg_2(t)$</p>
<p>We need to show that $H(0,t) = \gamma_1(t) , H(1,t) = \gamma_2(t) $
(which is easy) and that $H(s,0) = H(s,1) $ . to show this :</p>
<p>Suppose $k= n(\gamma_i , 0)$ is the index of the paths then we know that $k = \dfrac{g_i(1) - g_i(0)}{2\pi i}$
So $2\pi i k = g_1(1) - g_1(0) = g_0(1) - g_0(0)$</p>
<p>So $H(s,0) = H(s,1) $ iff $2 \pi i k = 0$ , and this is not required in the question.</p>
<p>What am I missing ? </p>
<p>Thanks for helping.</p>
| eranreches | 208,983 | <p>Observe that for every $s$, the function $H\left(s,\cdot\right):\left[0,1\right]\rightarrow\mathbb{C}$ should also map under $\exp$ to a closed path with index $k$. Ergo, exactly as with the $g$'s</p>
<p>$$\frac{H\left(s,1\right)-H\left(s,0\right)}{2\pi i}=k$$</p>
<p>is the winding number of the path $\exp\Big(H\left(s,\cdot\right)\Big):\left[0,1\right]\rightarrow\mathbb{C}\setminus\left\{0\right\}$. Therefore, by requiring</p>
<p>$$H\left(s,1\right)=H\left(s,0\right)$$</p>
<p>you are unintentionally demanding that $k=0$.</p>
|
66,480 | <p>I have the following problem:</p>
<blockquote>
<p>Given $P(A)=0.2$, $P(B)=0.4$, $P(C)=0.8$, $P(D)=0.5$, find $P(A\cup B\cup C\cup D)$</p>
</blockquote>
<p>And the final answer should be 0.952</p>
<p>I know how to find the union of two and three elements (for 2, its: $A+B-AB$), but the formula becomes clumsy after 3. The best things I've found says that to find the union for n elements, I add as follows $$0.2-(0.2\times0.4)+(0.2\times0.4\times0.8)-(0.2\times0.4\times0.8\times0.5) = 0.152$$ which is wrong.</p>
<p>What is a good general rule for n events?</p>
| Sasha | 11,069 | <p>Use the following identity:</p>
<p>$$\mathbb{P}( A \cup B \cup C \cup D) = 1 - \mathbb{P}( (A \cup B \cup C \cup D)^c ) =
1 - \mathbb{P}( A^c \cap B^c \cap C^c \cap D^c )$$</p>
<p>Here $A^c$ means complement of set $A$. </p>
<p>Given independence of events $\mathbb{P}( A^c \cap B^c \cap C^c \cap D^c ) = \mathbb{P}( A^c ) \mathbb{P}( B^c ) \mathbb{P}( C^c ) \mathbb{P}( D^c )$. Now:</p>
<p>$$\mathbb{P}( A \cup B \cup C \cup D) = 1 - (1-0.2)(1-0.4)(1-0.8)(1-0.5) = 0.952$$</p>
|
63,064 | <p>If one defines on a $\mathbb{R},\mathbb{C}$-vector space a norm this gives rise to a metric. Why are particularly mappings that satisfy the norm axioms so important that in every book for beginners on linear algebra/functional analysis norms are studied ? </p>
<p>Aren't there also other functions that always give rise to a metric, that are worth studying? </p>
<p>What are the properties that a norm-induced metric has, that makes it so special (except being translation-invariant, $d(x+z,y+z)=d(x,y)$, and compatible with scalar multiplication, $d(\lambda x, \lambda y)= |\lambda | d(x,y)$; because I imagine that there would be also other mappings defined on the vector space that give, by some other rule of definition, rise to a translation-invariant,scalar multiplication compatible metric) ? </p>
<p>(<a href="https://math.stackexchange.com/questions/55934/why-do-we-have-the-notions-of-both-norm-and-metric">This</a> question was similar but not really what I was looking for - in case someone would want to redirect me to that question)</p>
| Matthew Daws | 1,438 | <p>If a metric $d$ on a vector space $V$ is translation invariant, and compatible with scalar multiplication, in your sense, then define $$\|x\| = d(x,0)$$I claim that this is a norm:</p>
<ul>
<li>$\|x\|=0$ iff $d(x,0)=0$ iff $x=0$</li>
<li>$\|\lambda x\| = d(\lambda x,0) = d(\lambda x,\lambda 0) = |\lambda| d(x,0) = |\lambda| \|x\|$</li>
<li>$\|x+y\| = d(x+y,0) = d(x,-y) \leq d(x,0) + d(0,-y)$ (by the triangle inequality) $= d(x,0) + d(y,0) = \|x\| + \|y\|$</li>
</ul>
<p>So in some sense, you answered your own question.</p>
|
3,693,814 | <p>For <span class="math-container">$x,y,z>0.$</span> Prove<span class="math-container">$:$</span> <span class="math-container">$$P={x}^{4}y+{x}^{4}z+3\,{x}^{3}{y}^{2}-11\,{x}^{3}yz+3\,{x}^{3}{z}^{2}+3
\,{x}^{2}{y}^{3}+3\,{x}^{2}{y}^{2}z+3\,{x}^{2}y{z}^{2}+3\,{x}^{2}{z}^{
3}+x{y}^{4}-11\,x{y}^{3}z+3\,x{y}^{2}{z}^{2}\\-11\,xy{z}^{3}+x{z}^{4}+{y
}^{4}z+3\,{y}^{3}{z}^{2}+3\,{y}^{2}{z}^{3}+y{z}^{4} \geqq 0$$</span>
There are many SOS way for <span class="math-container">$P,$</span> any one can find<span class="math-container">$?$</span></p>
<p>For example<span class="math-container">$,$</span></p>
<p><em>NguyenHuyen</em> use <em>fsos</em> function and gave the following expression<span class="math-container">$:$</span>
<a href="https://i.stack.imgur.com/XYk9o.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XYk9o.png" alt="enter image description here"></a>
And also<span class="math-container">$:$</span> <span class="math-container">$$P=\frac{1}{4} \sum\limits_{cyc} \,z \left( x-y \right) ^{4}+\frac{3}{4} \sum\limits_{cyc} \, \left( {x}^{2}+{y}^{2}+4\,{z}^{2}
\right) \left( x-y \right) ^{2}z$$</span></p>
<p><span class="math-container">$(\ast)$</span> Result by <em>SBM</em><span class="math-container">$:$</span>
<span class="math-container">$$P=\frac{3}{2} \sum\limits_{cyc} \,z \left( xy+2\,{z}^{2} \right) \left( x-y \right) ^{2}+\sum\limits_{cyc} z \left(
x-y \right) ^{4}$$</span></p>
| Michael Rozenberg | 190,319 | <p>Another way by hand.</p>
<p>By AM-GM twice we obtain:
<span class="math-container">$$\sum_{cyc}(x^4y+x^4z+3x^3y^2+3x^3z^2-11x^3yz+3x^2y^2z)=$$</span>
<span class="math-container">$$=\sum_{cyc}(x^4y+x^4z-x^3y^2-x^3z^2+4x^3y^2+4x^3z^2-8x^3yz-3x^3yz+3x^2y^2z)=$$</span>
<span class="math-container">$$=\sum_{cyc}(x-y)^2(xy(x+y)+4z^3-3.5xyz)\geq \sum_{cyc}\left(x-y)^2(2\sqrt{x^3y^3}+4z^3-3.5xyz\right)\geq$$</span>
<span class="math-container">$$\geq \sum_{cyc}(x-y)^2\left(3\sqrt[3]{\left(\sqrt{x^3y^3}\right)^2\cdot4z^3}-3.5xyz\right)=\left(3\sqrt[3]4-3.5\right)xyz\sum_{cyc}(x-y)^2\geq0.$$</span></p>
|
4,270,857 | <p>Solve by separation of variables
<span class="math-container">$$\frac{\partial u}{\partial t}=k\frac{\partial^2 u}{\partial x^2}$$</span>
given intitial conditions:</p>
<p><span class="math-container">$$\frac{\partial u}{\partial x}=0 \text{ at } x=a \text{ and } x=-a, \forall t\geq 0;$$</span>
<span class="math-container">$$u \text{ is bounded for }-a\leq x\leq a\text{ as }t\rightarrow\infty;$$</span>
<span class="math-container">$$u=|x|\text{ for }-a\leq x\leq a\text{ at }t=0.$$</span></p>
<p>So I've been stuck on this problem for a few days now. I've tried a bunch of different things but I'm unsure of how to properly approach the problem. I have the answer, but it's unclear on how to get there. I originally thought that I could start with the general solution of the heat equation which is already known.
<span class="math-container">$$u(x,t)=X(x)T(t)=(A\cos{wx}+B\sin{wx})Ce^{-w^2kt}$$</span>
I started to play with initial conditions using this solution and ultimately I got with <span class="math-container">$B^*=BC, A^*=AC$</span> that <span class="math-container">$B^*=A^*\tan{wa}$</span> which lead to a weird <span class="math-container">$u$</span> that was defined on an interval using the boundary conditions. I couldn't gather much information.</p>
<p>After reviewing notes and the text, "Geoff Stephenson - PDE's for Scientists & Engineers, Ch. 4.3". I thought it would be wise to assume a solution of the form <span class="math-container">$u(x,t)=v(x)+w(x,t)$</span> where <span class="math-container">$\frac{\partial^2 v}{\partial x^2}=0$</span> and <span class="math-container">$w(x,t)=X(x)T(t)$</span>. Using boundary conditions, this lead me to
<span class="math-container">$$u(x,0)=|x|\implies |x|=v(x)+w(x,0)\implies v(x)=|x|-w(x,0)$$</span>
From this can I then say <span class="math-container">$u(x,t)=|x|-w(x,0)+w(x,t)$</span>?</p>
<p>I can't figure this stuff out for the life of me.</p>
| sankhya | 871,803 | <p>Are the coordinates of <span class="math-container">$X$</span> chosen independently and uniformly at random from <span class="math-container">$\{1,2,\ldots,n\}$</span>? Then, <span class="math-container">$E[x_ix_j]=E[x_i]E[x_j]$</span> and you can also calculate <span class="math-container">$E[x_i^2]$</span>.</p>
|
1,545,019 | <p>I'm sure this is probably an extremely simple problem but I'm stuck figuring this out.<br>
For example: </p>
<p>$(\frac{1}{5})^{x} + (\frac{7}{10})^{x} = 1$</p>
<p>What would be the steps to solve for x?</p>
| John | 7,163 | <p>Both terms on the left side are decreasing functions in $x$. Note also that the left hand side is $2$ when $x=0$ and $0.9$ when $x=1$.</p>
<p>A binary search between $x=0$ and $x=1$ works. I'll show you how to do this semi-manually with Excel or some similar program.</p>
<p>Enter the following (column A is blank):</p>
<pre><code> A B C
==================================================================
1 0 =.2^B1 + .7^B1
2 1
3 =B2-A3*0.5*ABS(B2-B1)
</code></pre>
<p>Fill down column C to Row 3, then fill down columns B and C down to about Row 40 or so, starting at Row 3.</p>
<p>Now, starting at Cell A3 and going down, enter $-1$ ("go down") if the value shown in column C is greater than $1$. Enter $1$ ("go up") if the value in column C is less than $1$.</p>
<p>By Cell A40, I found the answer to eleven significant figures: $0.83978030446$. (You may need to increase the number of significant figures shown in the cells to do this.)</p>
|
1,545,019 | <p>I'm sure this is probably an extremely simple problem but I'm stuck figuring this out.<br>
For example: </p>
<p>$(\frac{1}{5})^{x} + (\frac{7}{10})^{x} = 1$</p>
<p>What would be the steps to solve for x?</p>
| Claude Leibovici | 82,404 | <p>As Brevan Ellefsen answered, Newton method as described in the link given is a very good candidate for solving this kind of equations.</p>
<p>Just for your curiosity, let us apply it to your case $$f(x)=(\frac{1}{5})^{x} + (\frac{7}{10})^{x} - 1$$ using $x_0=1$ since $f(0)=1$ and $f(1)=-\frac 1 {10}$. The successive iterates will then be $$x_1=0.825040253981992$$ $$x_2=0.839658576323164$$ $$x_3=0.839780296147214$$ $$x_4=0.839780304467821$$ which is the solution for fifteen significant figures.</p>
|
1,704,707 | <p>The function $f: ℝ → ℝ$ defined by $f(x) = x^{3}$ is onto because for any real number $r$, we have that $\sqrt[3]r$ is a real number and $f(\sqrt[3]r)=r$. Consider the same function defined on the integers $g: ℤ → ℤ$ by $g(n) = n^3.$ Explain why $g$ is not onto $ℤ$ and give one integer that $g$ cannot output. </p>
<p>I can't think of any integer that cannot be cubed, so this problem has me confused.</p>
| Kernel_Dirichlet | 368,019 | <p>As others have said, since each of the $f_k$'s are Lipschitz, then they are equicontinuous, they are of course also uniformly bounded since $||f_k||_\infty$ $<1$. This is enough to conclude that $D$ is compact. You do not need to prove closure because compact subsets of a Hausdorff space (all metric spaces, including function spaces), are always closed. </p>
|
1,771,961 | <p>Suppose $U = \{(x,x,y,y) \in \mathbb{F}^4 : x, y \in \mathbb{F}\}$ </p>
<p>Find a subspace W of $\mathbb{F}^4$ such that $\mathbb{F}^4 = U\oplus W$</p>
<p>Attempt: Now from what I understand I would think that an element of $\mathbb{F}^4$ would look like $$(w,x,y,z) \quad\mbox{such that}\quad
w,x,y,z \in \mathbb{F}$$</p>
<p>with that being the case I would use a subspace of the form: $$W = (w-x, 0, 0, z-y) \in \mathbb{F}^4 \quad\mbox{such that}\quad w,x,y,z \in \mathbb{F}$$</p>
<p>But as a solution it was given that $$ W = (0,x,y,0). $$</p>
<p>Explanation?</p>
<p>I think I am not fully grasping how the direct sum sets are formed, but I got the idea that it was using an element from each subspace.</p>
| Ege Erdil | 326,053 | <p>$ U $ is a subspace spanned by the linearly independent set $ S = \{ (1, 1, 0, 0), (0, 0, 1, 1) \} $. Therefore, it suffices to pick a subspace $ W $ which is spanned by two vectors such that their adjoinment to $ S $ would not disturb its linear independence. In other words, we need to extend $ S $ to a basis of $ \mathbb{F}^4 $.</p>
<p>It is easy to see that $ (1, 0, 0, 0), (0, 0, 0, 1) \notin U $. Now, we check if the set $ S' $ formed by adjoining these vectors to $ S $ is linearly independent. The standard method is to row reduce the matrix whose columns are elements of $ S $, but a more direct approach works here. Let $ s_i $ denote the elements of $ S' $:</p>
<p>$$ c_1 s_1 + c_2 s_2 + c_3 s_3 + c_4 s_4 = ( c_1 + c_3, c_1, c_2, c_2 + c_4) $$</p>
<p>For the left hand side to equal zero, it is then clear that we must have $ c_i = 0 $ for all coefficients, establishing linear independence of $ S' $. Therefore, $ S' $ is a basis of $ \mathbb{F}^4 $ (by the dimension theorem), and we may take $ W = \textrm{span} \{(1, 0, 0, 0), (0, 0, 0, 1) \} = \{ (x, 0, 0, y) : x, y \in \mathbb{F} \} $.</p>
|
3,769,222 | <p>Evaluate: <span class="math-container">$\lim_{n\to \infty } \int_0^n (1-\frac{x}{n})^n \cos(\frac{x}{n})dx$</span></p>
<p>For fixed <span class="math-container">$n$</span>, I first rewrite my integral as
<span class="math-container">$$\int_0^n \left(1-\frac{x}{n}\right)^n \cos\left(\frac{x}{n}\right)dx = \int_0^\infty \left(1-\frac{x}{n}\right)^n \cos\left(\frac{x}{n}\right) \chi_{[0,n]}(x) dx,$$</span>
where <span class="math-container">$\chi_{[0,n]}(x)$</span> is the characteristic function.</p>
<p>We then have that <span class="math-container">$$\lim_{n\to \infty}\left(1-\frac{x}{n} \right)^n = e^{-x},$$</span> <span class="math-container">$$\lim_{n\to \infty}\cos\left(\frac{x}{n}\right) = 1,$$</span> <span class="math-container">$$\lim_{n\to \infty}\chi_{[0,n]}(x) = 1.$$</span></p>
<p>So that <span class="math-container">$$\lim_{n\to \infty}\left(1-\frac{x}{n}\right)^n \cos\left(\frac{x}{n}\right) \chi_{[0,n]}(x) = e^{-x}.$$</span></p>
<p>I'd now like to just quote the dominated convergence theorem and integrate <span class="math-container">$e^{-x}$</span> to get the limit of the integral. But I can't think of an integrable function that dominates the sequence of functions.</p>
<p>Any thoughts would be greatly appreciated.</p>
<p>Thanks in advance.</p>
| Oliver Díaz | 121,671 | <p><span class="math-container">$$
0\leq 1-\frac{x}{n}\leq e^{-x/n}\qquad 0<x\leq n$$</span></p>
<p>and so <span class="math-container">$0\leq \Big(1-\frac{x}{n}\Big)^n\mathbb{1}_{(0,n]}(x)\leq e^{-x}$</span>. The function <span class="math-container">$x\mapsto\cos(x/n)$</span> is controlled easily since <span class="math-container">$|\cos|\leq 1$</span>.</p>
<p>By dominated convergence
<span class="math-container">$$\lim_n\int^n_0\Big(1-\frac{x}{n}\Big)^n\cos(x/n)\,dx\xrightarrow{n\rightarrow\infty}\int^\infty_0 e^{-x}\,dx=1$$</span></p>
|
1,125 | <p>I know volume preserving diffeomorphisms of a $sphere^2$ make a grou'p sdiff($S_2$). I would to know if it is a Lie group, which I assume if it is that makes interpolation easier (like with rotations). </p>
<p>So that is one question, is it a Lie group?</p>
<p>Also is the group path connected? If so, how can I interpolate between two elements in the group?</p>
<p>These are not subjects I know very little about. I apologize if I'm phrasing it in some way that sounds ridiculous. </p>
| Community | -1 | <p>The term "volume preserving" sounds a bit ambiguous to me: do you mean that your map preserves the total volume or do you mean that its differential at every point preserves volume (i.e. has determinant 1)? The former is weaker than the latter, and gives you more room for interpolation.</p>
<p>In any case, there is a famous invariant of continuous maps $S^2\to S^2$ called the <em>degree</em>. Any two maps with the same degree are homotopic to each other. Being volume preserving (in the former sense) implies that the degree is $1$ (taking orientation into account!), so you can interpolate between any two volume preserving maps. <em>However</em>, the intermediate maps in this line of reasoning are only continuous, not necessarily diffeomorphisms. I'm confident that with a standard argument "approximate continuous functions by differentiable ones" you can get them to be differentiable, but I don't about "is diffeomorphism" and "is locally volume preserving" parts.</p>
|
2,895,343 | <blockquote>
<p>Differentiate $\tan^3(x^2)$</p>
</blockquote>
<p>I first applied the chain rule and made $u=x^2$ and $g=\tan^3u$. I then calculated the derivative of $u$, which is $$u'=2x$$ and the derivative of $g$, which is
$$g'=3\tan^2u$$</p>
<p>I then applied the chain rule and multiplied them together, which gave me </p>
<p>$$f'(x)=2x3\tan^2(x^2)$$</p>
<p>Is this correct? If not, any hints as to how to get the correct answer?</p>
| Lady | 586,433 | <p>$$u'=2x$$
$$g'=3\tan^2u \cdot sec^2u$$</p>
<p>$$f'(x)=2x \cdot 3\tan^2(x^2)\sec^2(x^2) = 6x\tan^2(x^2)\sec^2(x^2)$$</p>
|
74,290 | <p>I have this function $f(x)$ which is continuous and differentiable on $\mathbb R$. Is the following true - without the assumption that $f$ being absolutely continuous!</p>
<p>$$ \int_{a}^{b} f'(x) dx=f(b)-f(a)$$</p>
<p>Edit: $f$ is infinitely differentiable on $\mathbb R$. I think this may change everything!</p>
| Evgeny Savinov | 17,357 | <p>After edit, it became true, because of continuously $f'(x)$. And now you can use <a href="http://en.wikipedia.org/wiki/Fundamental_theorem_of_calculus" rel="nofollow">Newton-Leibniz formula</a></p>
|
185,097 | <blockquote>
<p><strong>Possible Duplicate:</strong><br>
<a href="https://math.stackexchange.com/questions/1/different-kinds-of-infinities">Different kinds of infinities?</a> </p>
</blockquote>
<p>Today I got to know that two infinity can be compared, But I want to know how is this possible? infinity will be infinity. If it doesn't have any particular value, how can we say that this infinity is small and other one is greater. Can anyone help me?</p>
| user17785 | 38,251 | <p>Cantor proved that there are "infinities" "bigger" than others.
For instance, $\mathbb{R}$ is strictly bigger than $\mathbb{N}$. What is meant by this can be stated the following way :
There is not enough natural numbers to number every real number.
In other words, if you have associated a real number to each natural number, then there will be real numbers left without an associated natural number.</p>
<p>The proof is called the Cantor Diagonal argument : <a href="http://en.wikipedia.org/wiki/Cantor%27s_diagonal_argument" rel="nofollow">http://en.wikipedia.org/wiki/Cantor%27s_diagonal_argument</a>.</p>
|
132,875 | <p>I was told that I could obtain an analytic solution to a particle falling under the influence of Newtonian gravity by using <code>DSolveValue</code>.</p>
<p><strong>What I am given</strong></p>
<ul>
<li>$G = M = m = 1$</li>
<li>$M$ is a point mass at $z=0$</li>
<li>the particle falls along the $z$ axis</li>
<li>$z(0) = 0$</li>
<li>$\frac{\mathrm{d}z(-\infty)}{\mathrm{d}t} = 0$</li>
</ul>
<p>Thus,
$$\frac{\mathrm{d}^2z(t)}{\mathrm{d}t^2} + \frac{1}{z(t)^2} = 0$$
and I'm trying to find $z$ as just a function of $t$.
So I tried</p>
<pre><code>zOft = DSolveValue[{z''[t] + 1/z[t]^2 == 0, z'[-∞] == 0, z[0] == 0}, z[t], t]
</code></pre>
<p>But I get the error</p>
<pre><code>DSolveValue::bvimp: General solution contains implicit solutions. In
the boundary value problem, these solutions will be ignored, so some
of the solutions will be lost.
</code></pre>
<p>And <code>zOft</code> just becomes <code>DSolveValue[...everything above...]</code>. So I don't actually get an expression for $z(t)$.</p>
<p>I am fairly new at <em>Mathematica</em>. Is there something I am doing wrong in the code? Or is it just generally not possible to analytically solve this? Is it something wrong with the fact that $\frac{1}{z(0)^2}$ is undefined? Do I have to somehow re-normalize the time coordinate? I was told that I should be able to find an analytic expression dependent only on $t$ of the position of the particle.</p>
| Kagaratsch | 5,517 | <p>It seems that the differential equation solver gets hung up on the boundary conditions for some reason. Let us therefore do some intermediate steps by hand. First, consider the general solution of the differential equation:</p>
<pre><code>DSolve[{z''[t] + 1/z[t]^2 == 0}, z[t], t]
</code></pre>
<blockquote>
<p><code>Solve[(-(Log[1 + (C[1] + Sqrt[C[1]] Sqrt[C[1] + 2/z[t]]) z[t]]/C[1]^(3/2))
+ (Sqrt[C[1] + 2/z[t]] z[t])/C[1])^2 == (t + C[2])^2, z[t]]</code></p>
</blockquote>
<p>This result means that an ordinary equation must be satisfied by <code>z[t]</code> in order to give the differential equation solution. Let's rewrite it as <code>expr==0</code> with</p>
<pre><code>expr = (-(Log[1 + (C[1] + Sqrt[C[1]] Sqrt[C[1] + 2/z[t]]) z[t]]/C[1]^(3/2))
+ (Sqrt[C[1] + 2/z[t]] z[t])/C[1])^2 - (t + C[2])^2;
</code></pre>
<p>Note that <code>t</code> only enters as <code>t^2</code>, which means that we can immediately replace <code>t</code> by its absolute value <code>Abs[t]</code> in all further considerations, since time in physics is a real variable (For simplicity, let us concentrate on the region where <code>Abs[t]=t</code> for now, and analytically continue the solution to the other region when we are done). We also see that there are two integration constants <code>C[1],C[2]</code> that we should use to obtain the boundary conditions we are interested in. First condition for <code>t=0</code>:</p>
<pre><code>Series[expr /. t -> 0, {z[0], 0, 0}] // Normal
</code></pre>
<blockquote>
<p><code>-C[2]^2</code></p>
</blockquote>
<p>implies <code>C[2] -> 0</code>. For the second condition we need the first derivative <code>z'[t]</code>:</p>
<pre><code>zp = z'[t] /. Solve[D[expr /. C[2] -> 0, t] == 0, z'[t]][[1]] // Simplify
</code></pre>
<blockquote>
<p><code>(t C[1]^(3/2) Sqrt[ C[1] + 2/z[t]])/(-Log[ 1 + (C[1] + Sqrt[C[1]] Sqrt[C[1] + 2/z[t]]) z[t]] + Sqrt[C[1]] Sqrt[C[1] + 2/z[t]] z[t])</code></p>
</blockquote>
<p>This almost looks like <code>C[1] -> 0</code> should be chosen. However, we should take this limit carefully: </p>
<pre><code>zpBC = Series[zp, {C[1], 0, 0}] // Normal
</code></pre>
<blockquote>
<p><code>(3 t)/z[t]^2</code></p>
</blockquote>
<p>We see that the first derivative is non-zero for <code>C[1] -> 0</code>, but we still need to take <code>t -> -Infinity</code>. It makes sense that <code>z[t]</code> should go to infinity in that limit as well (particle falling in from far away), but we need to make sure that it approaches infinity quicker than <code>Sqrt[t]</code> for the above first derivative to vanish. Indeed, we find:</p>
<pre><code>Series[expr /. C[2] -> 0, {C[1], 0, 0}] // Normal
</code></pre>
<blockquote>
<p><code>-t^2 + (2 z[t]^3)/9</code></p>
</blockquote>
<p>So that</p>
<pre><code>z[t] == (3^(2/3) t^(2/3))/2^(1/3)
</code></pre>
<p>And the first derivative properly goes to zero as <code>t -> -Infinity</code>:</p>
<pre><code>zpBC /. z[t] -> (3^(2/3) t^(2/3))/2^(1/3)
</code></pre>
<blockquote>
<p><code>2^(2/3)/(3^(1/3) t^(1/3))</code></p>
</blockquote>
<p>Therefore, <code>z[t] == (3^(2/3) t^(2/3))/2^(1/3)</code> is likely the solution you are after.
We can also check explicitly:</p>
<pre><code>z[t_] := (3^(2/3) t^(2/3))/2^(1/3)
z''[t] + 1/z[t]^2
</code></pre>
<blockquote>
<p><code>0</code></p>
</blockquote>
<p>that the differential equation is indeed satisfied. Also, recall that by <code>t</code> we actually mean <code>Abs[t]</code>. This implies that substituting <code>t -> -t</code> also gives a solution. This one is actually the one you need, since for <code>t<0</code> we have <code>Abs[t] == -t</code> and you expect the solution to be real on physical grounds.</p>
|
132,875 | <p>I was told that I could obtain an analytic solution to a particle falling under the influence of Newtonian gravity by using <code>DSolveValue</code>.</p>
<p><strong>What I am given</strong></p>
<ul>
<li>$G = M = m = 1$</li>
<li>$M$ is a point mass at $z=0$</li>
<li>the particle falls along the $z$ axis</li>
<li>$z(0) = 0$</li>
<li>$\frac{\mathrm{d}z(-\infty)}{\mathrm{d}t} = 0$</li>
</ul>
<p>Thus,
$$\frac{\mathrm{d}^2z(t)}{\mathrm{d}t^2} + \frac{1}{z(t)^2} = 0$$
and I'm trying to find $z$ as just a function of $t$.
So I tried</p>
<pre><code>zOft = DSolveValue[{z''[t] + 1/z[t]^2 == 0, z'[-∞] == 0, z[0] == 0}, z[t], t]
</code></pre>
<p>But I get the error</p>
<pre><code>DSolveValue::bvimp: General solution contains implicit solutions. In
the boundary value problem, these solutions will be ignored, so some
of the solutions will be lost.
</code></pre>
<p>And <code>zOft</code> just becomes <code>DSolveValue[...everything above...]</code>. So I don't actually get an expression for $z(t)$.</p>
<p>I am fairly new at <em>Mathematica</em>. Is there something I am doing wrong in the code? Or is it just generally not possible to analytically solve this? Is it something wrong with the fact that $\frac{1}{z(0)^2}$ is undefined? Do I have to somehow re-normalize the time coordinate? I was told that I should be able to find an analytic expression dependent only on $t$ of the position of the particle.</p>
| Jens | 245 | <p>Here is a very short answer, using <strong>energy conservation</strong>:</p>
<pre><code>energy = 1/2 z'[t]^2 - 1/z[t];
zSolution[t_] =
z[t] /. Simplify@First[DSolve[energy == 0 && z[0] == 0, z[t], t]]
(* ==> (3^(2/3) (-t)^(2/3))/2^(1/3) *)
</code></pre>
<p>Energy conservation reduces the order of the differential equation. The energy at $t\to-\infty$ must be equal to zero if the velocity is zero in that limit. This follows from the fact that the limit $t\to-\infty$ can only be defined provided that the motion is non-periodic in time. This implies that the potential energy at $t\to-\infty$ must also vanish.</p>
<p><strong>Added note about the energy:</strong></p>
<p>To justify that the quantity called <code>energy</code> above is in fact conserved and therefore must obey the differential equation I use here, you could use the <code>VariationalMethods</code> package, even though it's overkill for this simple problem. The logic is: first I define <code>lagrangian</code> and show that its Euler-Lagrange equation of motion is in fact the desired differential equation. Having therefore described the problem equivalently by a Lagrangian, I can derive conservation laws from it. In particular, the conservation law I need is that of the energy, which in <em>Mathematica</em> is obtained as <code>FirstIntegral[t]</code> from the command <code>FirstIntegrals</code>:</p>
<pre><code>Needs["VariationalMethods`"]
lagrangian = 1/2 Derivative[1][z][t]^2 + 1/z[t];
Simplify[EulerEquations[lagrangian, z[t], t]]
(* ==> 1/z[t]^2 + (z'')[t] == 0 *)
energy =
FirstIntegral[t] /. FirstIntegrals[lagrangian, z[t], t]
(* ==> -(1/z[t]) + 1/2 z'[t]^2 *)
</code></pre>
<p>As you can see, this is the quantity I start with in the original post. So this is just a <em>Mathematica</em>-based way of deriving energy conservation, provided you're familiar with Lagrangian mechanics. Of course in real physics problems, you should usually identify the relevant conservation law before doing any computations at all. It saves a lot of work.</p>
|
1,715,324 | <p>I am wondering what is the difference between algebraic sets and algebraic varieties in complex projective space.</p>
<p>It seems that both are zero sets of polynomials, so what is the difference?</p>
| Georges Elencwajg | 3,217 | <p>If we give $\mathbb A^n$ (resp. $ \mathbb P^n$) their Zariski topology, an algebraic set is just a closed subset $V\subset \mathbb A^n$ (resp. $V\subset \mathbb P^n$).<br>
An algebraic variety is a vastly more general general concept, the basic object in classical algebraic geometry (even if since Grothendieck we have the even more general notion of scheme).<br>
Every algebraic set, which <em>a priori</em> is a topological subspace, can be endowed with the structure of algebraic variety: the supplementary datum consists of decreeing which functions on open subsets $U\subset V$ are considered acceptable, thus obtaining the ring $\mathcal O_V(U)$ of "regular" functions on $U$.<br>
By using this procedure we get the affine (resp. projective varieties).<br>
However there are many algebraic varieties which are neither affine nor projective, and thus are completely different from algebraic subsets: the simplest example is the plane with the origin removed, $V=\mathbb A^2\setminus \{(0,0)\}$. </p>
|
102,966 | <p>Let $T^*$ denote upper triangular matrices (of the appropriate size) with positive diagonal entries and $\mathrm{UT}$ upper triangular matrices with all diagonal entries equal to 1.</p>
<blockquote>
<p>Does every (abstract group) embedding $\varphi:\mathrm{UT}(n,\mathbb{R})\to\mathrm{UT}(m,\mathbb{R})$ extend to $\bar{\varphi}:T^*(n,\mathbb{R})\to T^*(m,\mathbb{R})$?</p>
</blockquote>
<p>(In the other direction, any $\bar{\varphi}:T^*(n,\mathbb{R})\to T^*(m,\mathbb{R})$ restricts to a homomorphism $\mathrm{UT}(n,\mathbb{R})\to\mathrm{UT}(m,\mathbb{R})$, since $\mathrm{UT}$ is the derived subgroup of $T^*$.)</p>
<p>An affirmative answer to this question implies a relatively easy affirmative answer to <a href="https://mathoverflow.net/questions/93091/free-affine-actions-of-borel-subgroups">this question</a>. I've recently answered the latter independently, but wonder if there's a general result that could be used, and which I should know about. I asked the current question on Math.stackexchange but without success.</p>
<p>EDIT: Florian Eisele has shown that as stated above the answer to the original question is no. This makes me wonder if there's a reasonably natural reformulation for which the answer is yes. For the sake of asking a concrete question, let me hazard the following.</p>
<blockquote>
<p>Does every embedding $\varphi:\mathrm{UT}(n,\mathbb{Q})\to\mathrm{UT}(m,\mathbb{Q})$ extend to $\bar{\varphi}:T^*(n,\mathbb{Q})\to T^*(m,\mathbb{Q})$?</p>
</blockquote>
| Florian Eisele | 17,498 | <p>No (at least if you really mean <em>abstract</em> group embeddings). Choose $m=n=2$. Note that
$$UT(2,\mathbb R) \cong (\mathbb R,+)\quad \textrm{ and }\quad T^*(2,\mathbb R)\cong UT(2,\mathbb R)\rtimes (\mathbb R_+,\cdot)^2 $$ </p>
<p>Note secondly that the image of the conjugation action of $T^*(2,\mathbb R)$ on $UT(2,\mathbb R)$ is equal to $(\mathbb R_+,\cdot)\leq Aut((\mathbb R,+))$. In particular, $UT(2,\mathbb R)$ splits up into only three orbits under this action. I claim there is an injective homomorphism from $UT(2,\mathbb R)$ into itself such that its image $G$ splits up into many more orbits under the action of the normalizer of $G$ in $T^*(2,\mathbb R)$.
Such an embedding clearly cannot be extended to all of $T^*(2,\mathbb R)$.</p>
<p>As a $\mathbb Q$-vector space (and hence also as an abelian group), $\mathbb R\cong \bigoplus_I \mathbb Q$ for some index set $I$ of the same cardinality as $\mathbb R$. Let $S$ be a transcendence basis of $\mathbb R$ over $\mathbb Q$ (so the cardinality of $S$ will be the same as that of $I$). Let $G$ be the $\mathbb Q$-span of $S$. So there is an isomorphism from $(\mathbb R, +)$ to $(G,+)\subsetneq (\mathbb R,+)$. Now if $s_1\neq s_2\in S$, then the unique element in $\mathbb R-\{0\}$ sending $s_1$ to $s_2$ is $r=s_2/s_1$. But if $r\cdot s_2 = s_2^2/s_1$ could be written as a $\mathbb Q$-linear combination $\sum_i c_i s_i$ of elements in $S$, then we would get an algebraic relation $$
s_1(\sum_i c_i s_i)-s_2^2
$$
between the elements in the transcendence basis $S$.
Therefore $r\cdot s_2 \notin G$, i.e. $r\cdot G \nsubseteq G$, thus proving that all elements of $S$ lie in different orbits under the action of the normalizer of $G$ in $T^*(2,\mathbb R)$. </p>
|
2,273,477 | <p><strong>Bessel's Inequality</strong></p>
<p>Let $(X, \langle\cdot,\cdot\rangle )$ be an inner product space and $(e_k)$ an orthonormal sequence in $X$. Then for every $x \in X$ : $$ \sum_{1}^{\infty} |\langle x,e_k\rangle |^2 \le ||x||^2$$
where $\| \cdot\|$ is of course the norm induced by the inner product. </p>
<p>Now suppose we have a sequence of scalars $a_k$ and that the series $$ \sum_{1}^{\infty} a_k e_k = x $$
converges to a $x \in X$. </p>
<blockquote>
<p><strong>Lemma 1</strong>
We can easily show that $a_k=\langle x,e_k\rangle $
(i'll do it fast)</p>
<p><em>Proof.</em> Denote $s_n$ the sequence of partial sums of the above series, which of course converges to $x$. Then for every $j<n$ , $ \langle s_n, e_j\rangle = a_j$ and by continuuity of the inner product $a_j=\langle x,e_j\rangle $</p>
<p><strong>Lemma 2</strong> We can also show that since $s_n$ converges to $x$, then $σ_n = |a_1|^2 + ... + |a_2|^2 $ converges to $\|x\|^2 $ :</p>
<p><em>Proof.</em> $\|s_n\|^2 = \| a_1 e_1 +...+a_2 e_2\|^2 = |a_1|^2 + ... |a_n|^2 $ since $(e_k)$ are orthonormal (Pythagorean). But $||s_n||^2$ converges to $||x||^2 $ , which completes the proof.</p>
</blockquote>
<p>So we showed the following $$\sum_1^{\infty} |a_k|^2= \sum_1^\infty |\langle x,e_k\rangle |^2 = ||x||^2$$</p>
<p><strong>Confusion</strong></p>
<p>So the equality holds for Bessel inequality, for $x$. We arbitrarily chose $a_k$, so does that mean the the equality holds for all $x \in X$ ? Obviously not, otherwise it would be Bessel's equality. What am I getting wrong?</p>
| Martin Argerami | 22,857 | <p>What you are missing is that the statement says <em>orthonormal sequence</em> and not <em>orthonormal basis</em>. When the sequence is a basis, you get Parseval's equality. But the inequality holds for "partial sums". </p>
<p>If you already have $\sum_k a_ke_k=x$, then of course you get an equality. </p>
|
349,212 | <p><strong>Context:</strong> I am a PhD student in theoretical physics with higher-than-average education on differential geometry. I am trying to understand Lagrangian and Hamiltonian field theories and related concepts like Noether's theorem etc. in a mathematically rigorous way since the standard physics literature is sorely lacking for someone who values precision and generality, in my opinion.</p>
<p>I am currently studying various text by Anderson, Olver, Krupka, Sardanashvili etc. on the variational bicomplex and on the formulation of Lagrangian systems on jet bundles. I do not rule the formalism yet, but made significant steps towards understanding.</p>
<p>On the other hand, most physics literature employs the functional formalism, where rather than calculus on variations taking place on finite dimensional jet bundles (or the "mildly infinite dimensional" <span class="math-container">$\infty$</span>-jet bundle), it takes place on the suitably chosen (and usually not actually explicitly chosen) infinite dimensional space of smooth sections (of the given configuration bundle).</p>
<p>Even relatively precise physics authors like Wald, DeWitt or Witten (lots of 'W's here) seems to prefer this approach (I am referring to various papers on the so-called "covariant phase space formulation", which is a functional and infinite dimensional but manifestly "covariant" approach to Hamiltonian dynamics, which also seems to be a focus of DeWitts "The Global Approach to Quantum Field Theory", which is a book I'd like to read through but I find it impenetrable yet).</p>
<p>I find it difficult to arrive at a common ground between the functional formalism and the jet-based formalism. I also do not know if the functional approach had been developed to any modern standard of mathematical rigour, or the variational bicomplex-based approach has been developed precisely to avoid the usual infinite dimensional troubles.</p>
<p><strong>Example:</strong></p>
<p><a href="https://i.stack.imgur.com/PLEld.png" rel="noreferrer"><img src="https://i.stack.imgur.com/PLEld.png" alt="Augmented variational bicomplex"></a></p>
<p>Here is an image from Anderson's "The Variational Bicomplex", which shows the so-called augmented variational bicomplex. Here <span class="math-container">$I$</span> is the so-called interior Euler operator, which seems to be a substitute for integration by parts in he functional approach.</p>
<p>Later on, Anderson proves that the vertical columns are locally exact, and the <em>augmented</em> horizontal rows (sorry for picture linking, xypic doesn't seem to be working here, don't know how to draw complices)</p>
<p><a href="https://i.stack.imgur.com/StDOy.png" rel="noreferrer"><img src="https://i.stack.imgur.com/StDOy.png" alt="enter image description here"></a></p>
<p>are locally exact as well. In fact for the homotopy operator <span class="math-container">$\mathcal H^1:\mathcal F^1\rightarrow\Omega^{n,0}$</span> that reconstructs Lagrangians from "source forms" (equations of motion) he gives (for source form <span class="math-container">$\Delta=P_a[x,y]\theta^a\wedge\mathrm d^nx$</span>) <span class="math-container">$$ \mathcal H^1(\Delta)=\int_0^1 P_a[x,tu]u^a\mathrm dt\ \mathrm d^nx. $$</span></p>
<p>On the other hand, if we use the functional formalism in an unrigorous manner, the functional derivative <span class="math-container">$$ S\mapsto\frac{\delta S[\phi]}{\delta \phi^a(x)} $$</span> behaves like the infinite dimensional analogue of the ordinary partial derivative, so using the local form of the homotopy operator for the de Rham complex (which for the lowest degree is <span class="math-container">$f:=H(\omega)=\int_0^1\omega_\mu(tx)x^\mu\mathrm dt$</span>) and extending it "functionally", one can arrive at the fact that if an "equation of motion" <span class="math-container">$E_a(x)[\phi]$</span> satisfies <span class="math-container">$\frac{\delta E_a(x)}{\delta\phi^b(y)}-\frac{\delta E_b(y)}{\delta\phi^a(x)}=0$</span>, then <span class="math-container">$E_a(x)[\phi]$</span> will be the functional derivative of the action functional <span class="math-container">$$ S[\phi]=\int_0^1\mathrm dt\int\mathrm d^nx\ E_a(x)[t\phi]\phi^a(x). $$</span></p>
<p>I have (re)discovered this formula on my own by simply abusing the finite dimensional analogy and was actually surprised that this works, but it does agree (up to evaluation on a secton and integration) with the homotopy formula given in Anderson.</p>
<p>This makes me think that the "variation" <span class="math-container">$\delta$</span> can be considered to be a kind of exterior derivative on the formal infinite dimensional space <span class="math-container">$\mathcal F$</span> of all (suitable) field configurations, and the Lagrangian inverse problem can be stated in terms of the de Rham cohomology of this infinite dimensional field space.</p>
<p>This approach however fails to take into account boundary terms, since it works only if integration by parts can be performed with reckless abandon and all resulting boundary terms can be thrown away. This can be also seen that if we consider the variational bicomplex above, the <span class="math-container">$\delta$</span> variation in the functional formalism corresponds to the <span class="math-container">$\mathrm d_V$</span> vertical differential, but in the augmented horizontal complex, the <span class="math-container">$\delta_V=I\circ\mathrm d_V$</span> appears, which has the effect of performing integrations by parts, and the first variation formula is actually <span class="math-container">$$ \mathrm d_V L=E(L)-\mathrm d_H\Theta, $$</span> where the boundary term appears explicitly in the form of the horizontally exact term.</p>
<p>The functional formalism on the other hand requires integrals everywhere and boundary terms to be thrown aside for <span class="math-container">$\delta$</span> to behave as an exterior derivative. Moreover, integrals of different dimensionalities (eg. integrals over spacetime and integrals over a hypersurface etc.) tend to appear sometimes in the functional formalism, which can only be treated using the same concept of functional derivative if various delta functions are introduced, which makes me think that de Rham currents (I am mostly unfamiliar with this area of mathematics) are also involved here.</p>
<p><strong>Question:</strong> I would like to ask for references to papers/and or textbooks that develop the functional formalism in a general and mathematically precise manner (if any such exist) and also (hopefully) that compare meaningfully the functional formalism to the jet-based formalism.</p>
| Tobias Diez | 17,047 | <p>This is meant as a long comment to the very good answer by Pedro Riberio.</p>
<p>There is a nice analog of the variational bicomplex in the functional framework. Namely, the space of differential forms on <span class="math-container">$M \times \Gamma(E)$</span> (where <span class="math-container">$E$</span> is a fiber bundle over <span class="math-container">$M$</span>) comes with a natural bigrading <span class="math-container">$\Omega^{p, q}(M \times \Gamma(E))$</span> induced by the product structure, i.e. the dual of the decomposition <span class="math-container">$T_{m, \phi} (M \times \Gamma(E)) = T_m M \times T_\phi \Gamma(E)$</span> of the tangent space. Moreover, the jet map
<span class="math-container">$$
j^k: M \times \Gamma(E) \to J^k E, \qquad (m, \phi) \mapsto j^k_m \phi
$$</span>
yields a morphism from the variational bicomplex <span class="math-container">$\Omega^{p, q}(J^k E)$</span> to the bicomplex <span class="math-container">$\Omega^{p, q}(M \times \Gamma(E))$</span> with the exterior differential. Personally, I find the functional bicomplex easier to understand than the variational one; and as remarked by Pedro the functional framework is more flexible as it also handles non-local Lagrangians. On the other hand, the jet bundle approach has advantages for simulation, because you stay in the finite-dimensional setting which makes it easier to discretize while preserving the (symplectic) geometry.</p>
|
2,367,788 | <p>Need to find the total numbers out of all 6 digit numbers where a digit is repeated exactly 4 times in the number. </p>
<p>Eg. 111122, 111123 is valid
But 111121 is not valid.</p>
| true blue anil | 22,388 | <p>We can avoid cases by recognizing that $\frac 9{10}$ of a string of random digits won't have a leading zero </p>
<p>Thus, [Choose and place quadruple]$\times$[Place different digit(s) in remaining two slots]$\times\frac9{10}$</p>
<p>$10\times \binom64 \times 9^2 \times \frac9{10} = 10,935$ </p>
|
3,902,836 | <p>There're similar questions already, but I somewhat struggled to apply their reasoning to this particular statement.</p>
<p>I wanted to ask if my proof is correct (and clarify several things which I've seemingly figured out when writing it here). Here's the proof:</p>
<p><span class="math-container">$$\mbox{(1) }\lim_{x \to 0^-} f\left(\frac1x\right)=l \mbox{ if } \forall\varepsilon>0(\exists\delta>0(\forall x(0<0-x<\delta \longrightarrow|f\left(\frac1x\right)-l|<\varepsilon$$</span></p>
<p><span class="math-container">$$\mbox{(2) }\lim_{x \to -\infty} f(x)=m \mbox{ if } \forall\varepsilon>0(\exists N(\forall x(x<N \longrightarrow|f(x)-m|<\varepsilon$$</span></p>
<p>We need to show that <span class="math-container">$l=m$</span>.</p>
<p>from (1) we have
<span class="math-container">$$-\delta>x>0\longrightarrow|f\left(\frac1x\right)-l|<\varepsilon$$</span></p>
<p>let <span class="math-container">$g(x)=f\left(\frac1x\right)$</span>, then we have
<span class="math-container">$$-\delta>x>0\longrightarrow|g\left(x\right)-l|<\varepsilon$$</span></p>
<p>in (2) we suppose that <span class="math-container">$x<N$</span>. Hence <span class="math-container">$\frac1x>\frac1N$</span>. Let <span class="math-container">$x'=\frac1x$</span>. Furthermore, since <span class="math-container">$-\delta$</span> is negative and we can assume that <span class="math-container">$N$</span> is negative. (If it's not, we can take <span class="math-container">$N$</span> to be -1 or any other negative number, because if the conclusion holds <span class="math-container">$\forall x<N$</span>, it surely holds for a subset <span class="math-container">$(-\infty,-1)$</span>). We can take <span class="math-container">$-\delta=\frac1N$</span>, which gives us</p>
<p><span class="math-container">$$\frac1N=-\delta<x'<0$$</span> and this is the hypothesis from (1). Hence we conclude that</p>
<p><span class="math-container">$$|g\left(x'\right)-l|<\varepsilon$$</span> or <span class="math-container">$$|f\left(\frac1{\frac1x}\right)-l|<\varepsilon$$</span> or <span class="math-container">$$|f\left(x\right)-l|<\varepsilon$$</span></p>
<p>To summarise, <span class="math-container">$x<N\longrightarrow|f\left(x\right)-l|<\varepsilon$</span>. By the definition of the limit <span class="math-container">$$\lim_{x \to -\infty}f(x)=l$$</span> but also <span class="math-container">$$\lim_{x \to -\infty}f(x)=m$$</span> Therefore <span class="math-container">$l=m$</span>.</p>
| Ben | 754,927 | <p>WA Don nailed it, but I feel compelled to add an answer.</p>
<p>I'm going through Spivak myself and when first encountering problems like this, I tried approaches very similar to super.t's.</p>
<p>It took a while to absorb what was going on. Initially I found it a bit confusing trying to keep straight which conditions are fixed, which can vary, and what implies what.</p>
<p>Here I'll rework the problem, annotating with a few comments. The aim is to hopefully help clarify the thinking behind the approach. Apologies for stating things that may seem obvious to you, and generally for over-explaining. Hopefully this is of some use to someone, someday.</p>
<p>Show that
<span class="math-container">$$\lim_{x \to 0^-} f\left(\frac1x\right)= \lim_{x \to -\infty} f(x)$$</span></p>
<p>We can see intuitively that this makes sense: as <span class="math-container">$x$</span> gets very small negative on the left hand side, <span class="math-container">$1/x$</span> becomes very large negative, and the 2 sides both look like <span class="math-container">$f$</span> of some very large negative number.</p>
<p>Let's assume that the first limit exists, and is equal to <span class="math-container">$\ell$</span></p>
<p><span class="math-container">$$\lim_{x \to 0^-} f\left(\frac1x\right)= \ell$$</span></p>
<p>If this is true, then from the definition of the limit at <span class="math-container">$0$</span> from below, we have: for any <span class="math-container">$\varepsilon > 0$</span> there exists some <span class="math-container">$\delta > 0$</span> such that for all <span class="math-container">$x$</span>, if</p>
<p><span class="math-container">$$0 < -x < \delta, \text{then} \left\vert f\left(\frac{1}{x}\right) - \ell \right\vert < \varepsilon$$</span></p>
<p>or, multiplying the <span class="math-container">$\delta$</span> expression by <span class="math-container">$-1$</span>, we have for all <span class="math-container">$x$</span>, if
<span class="math-container">$$0 > x > -\delta, \text{then} \left\vert f\left(\frac{1}{x}\right) - \ell \right\vert < \varepsilon$$</span></p>
<p>Let's pause for a moment and consider what this says. Here, <span class="math-container">$x$</span> is just some number. This says that, if we have some number between <span class="math-container">$0$</span> and <span class="math-container">$-\delta$</span>, then <span class="math-container">$f(\frac{1}{\text{number}})$</span> will be "sufficiently close" to <span class="math-container">$\ell$</span>.</p>
<p>Let's try sticking <span class="math-container">$1/y$</span> in as "number". The motivation for this choice is that we have an expression involving <span class="math-container">$\left\vert f\left(\frac{1}{x}\right) - \ell \right\vert$</span> and we'd like to instead have an expression for <span class="math-container">$f(x)$</span>. Sticking <span class="math-container">$1/y$</span> into the above expression for <span class="math-container">$x$</span> we have</p>
<p><span class="math-container">$$\text{If } 0 > 1/y > -\delta \text{ then } \left\vert f\left(\frac{1}{(1/y)}\right) - \ell \right\vert < \varepsilon$$</span>
or
<span class="math-container">$$\left\vert f(y) - \ell \right\vert < \varepsilon$$</span></p>
<p>Again, the original <span class="math-container">$x$</span> expression was true for all numbers <span class="math-container">$x$</span> that satisfied the <span class="math-container">$\delta$</span> condition on the left hand side, so this new expression for <span class="math-container">$f(y)$</span> is true for all numbers <span class="math-container">$1/y$</span> that satisfy <span class="math-container">$0 > 1/y > -\delta$</span>.</p>
<p>Now if <span class="math-container">$0 > 1/y > -\delta$</span>, what does this tell us about <span class="math-container">$y$</span>?</p>
<p>Multiplying by <span class="math-container">$-1$</span> to make everything nonegative, to avoid any sign confusion:</p>
<p><span class="math-container">$$0< -1/y < \delta$$</span>
Inverting
<span class="math-container">$$0<1/\delta<-y$$</span></p>
<p>Finally, multiplying again by <span class="math-container">$-1$</span> we have
<span class="math-container">$$0 > -1/\delta > y$$</span></p>
<p>Let's summarize what this all implies:</p>
<p>We know that if the first limit exists, then for any <span class="math-container">$\varepsilon >0$</span> there exists some <span class="math-container">$\delta>0$</span> such that for all <span class="math-container">$y$</span> if
<span class="math-container">$$y < -1/\delta \text{ then } \left\vert f(y) - \ell \right\vert < \varepsilon$$</span></p>
<p>Or, substituting <span class="math-container">$N = -1/\delta$</span> we have, <strong>for any <span class="math-container">$\varepsilon >0$</span> there exists some <span class="math-container">$N<0$</span> such that for all <span class="math-container">$y$</span>, if
<span class="math-container">$$y < N \text{ then } \left\vert f(y) - \ell \right\vert < \varepsilon$$</span></strong></p>
<p>Another way of writing this is:
<span class="math-container">$$\lim_{y \to -\infty} f(y) = \ell$$</span></p>
<p>Thus, if the first limit exists, then so too does the second, and they are equal.</p>
<p><strong>A key point here is the equivalence of the conditions <span class="math-container">$y < -1/\delta < 0$</span> and <span class="math-container">$0 > 1/y > -\delta$</span>.</strong> The first implies the second and vice versa.</p>
<p><strong>Finally to complete the proof, we can begin with the second limit and similarly show that if it exists, then so too does the first, and they are equal.</strong></p>
<p>The steps will be very similar to what we've already done.</p>
<p><strong>Alternate justification for substituting in <span class="math-container">$y = 1/x$</span></strong></p>
<p>Going back to the beginning, we see that the first limit involves <span class="math-container">$f(1/x)$</span>. What do the <span class="math-container">$\delta$</span>-restrictions tell us about <span class="math-container">$1/x$</span>?</p>
<p>If the first limit is <span class="math-container">$\ell$</span> then, for any <span class="math-container">$\varepsilon > 0$</span> there exists some <span class="math-container">$\delta > 0$</span> such that for all <span class="math-container">$x$</span> if
<span class="math-container">$$0 < -x < \delta, \text{then} \left\vert f\left(\frac{1}{x}\right) - \ell \right\vert < \varepsilon$$</span></p>
<p>If
<span class="math-container">$$0 < -x < \delta$$</span>
then
<span class="math-container">$$0 < 1/\delta <\frac{1}{-x}$$</span> or
<span class="math-container">$$0 > -1/\delta >\frac{1}{x}$$</span></p>
<p>Thus, for any <span class="math-container">$\varepsilon > 0$</span> there exists some <span class="math-container">$N = -1/\delta < 0$</span> such that for all <span class="math-container">$x$</span>, if
<span class="math-container">$$\frac{1}{x}< N \text{ then } \left\vert f\left(\frac{1}{x}\right) - \ell \right\vert < \varepsilon$$</span></p>
<p>We can see here we're almost to the second limit. We just need to replace the number <span class="math-container">$\frac{1}{x}$</span> with <span class="math-container">$x$</span> (or <span class="math-container">$y$</span>, or any other name).</p>
|
3,006,595 | <p>How to prove</p>
<blockquote>
<p><span class="math-container">$$\sum_{k=1}^{\infty}\frac{(-1)^k}{(2k+1)^2}(2H_{2k}+H_k)\stackrel ?=\frac{\pi^3}{32}-2G\ln2,$$</span>
where <span class="math-container">$G$</span> is the Catalan's constant.</p>
</blockquote>
<p><strong>Attempt</strong></p>
<p>For the first sum,
<span class="math-container">$$\sum_{k=1}^{\infty}\frac{(-1)^k}{(2k+1)^2}H_{2k}=\Re\left\{\sum_{k=1}^{\infty}\frac{i^k}{(k+1)^2}H_{k}\right\},$$</span>
which can be evaluated by using the formula in <a href="https://math.stackexchange.com/q/604316/394456">this post</a>:
<span class="math-container">$$\sum_{n=1}^\infty \frac{H_n}{n^2}\, x^n=\zeta(3)+\frac{\ln(1-x)^2\ln(x)}{2}+\ln(1-x)\operatorname{Li}_2(1-x)+\operatorname{Li}_3(x)-\operatorname{Li}_3(1-x),$$</span>
but we cannot apply the similar approach to the second sum
<span class="math-container">$$\sum_{k=1}^{\infty}\frac{(-1)^k}{(2k+1)^2}H_k.$$</span>
Then, I tried to write the sum as
<span class="math-container">$$\sum_{k=1}^{\infty}\frac{(-1)^k}{(2k+1)^2}\int_0^1\frac{2x^{2k}+x^k-3}{x-1}~\mathrm dx$$</span>
and it become more complicated.</p>
<p><strong>Edit:</strong> </p>
<p>Are we able to evaluate the sum <em>directly</em> (avoid calculating integrals and polylogs as much as possible)? The integral given by @Jack D'Aurizio is a bit complicated (<a href="https://math.stackexchange.com/a/2972249/394456">see this post</a>).</p>
| Jack D'Aurizio | 44,121 | <p>The series involving <span class="math-container">$H_k$</span> and <span class="math-container">$H_{2k}$</span> can be studied in a similar way: since
<span class="math-container">$$ \frac{-\log(1-x)}{1-x} = \sum_{n\geq 1} H_n x^{n} $$</span>
we have <span class="math-container">$ \frac{-\log(1+x^2)}{1+x^2} = \sum_{n\geq 1} H_n(-1)^n x^{2n} $</span> and
<span class="math-container">$$ \sum_{k\geq 1}\frac{(-1)^k}{(2k+1)^2}H_k = \int_{0}^{1}\frac{\log(1+x^2)\log(x)}{1+x^2}\,dx$$</span>
boils down to
<span class="math-container">$$ \int_{0}^{\pi/4} -2\log(\cos\theta) \log(\tan\theta)\,d\theta $$</span>
which is simple to tackle through <a href="https://math.stackexchange.com/questions/292468/fourier-series-of-log-sine-and-log-cos">well-known Fourier series</a>. It equals </p>
<p><span class="math-container">$$ -\frac{\pi^3}{64}-K\log(2)-\frac{\pi}{16}\log^2(2)+2\,\text{Im}\,\text{Li}_3\left(\frac{1+i}{2}\right)\approx -0.07355395672853217. $$</span></p>
|
3,006,595 | <p>How to prove</p>
<blockquote>
<p><span class="math-container">$$\sum_{k=1}^{\infty}\frac{(-1)^k}{(2k+1)^2}(2H_{2k}+H_k)\stackrel ?=\frac{\pi^3}{32}-2G\ln2,$$</span>
where <span class="math-container">$G$</span> is the Catalan's constant.</p>
</blockquote>
<p><strong>Attempt</strong></p>
<p>For the first sum,
<span class="math-container">$$\sum_{k=1}^{\infty}\frac{(-1)^k}{(2k+1)^2}H_{2k}=\Re\left\{\sum_{k=1}^{\infty}\frac{i^k}{(k+1)^2}H_{k}\right\},$$</span>
which can be evaluated by using the formula in <a href="https://math.stackexchange.com/q/604316/394456">this post</a>:
<span class="math-container">$$\sum_{n=1}^\infty \frac{H_n}{n^2}\, x^n=\zeta(3)+\frac{\ln(1-x)^2\ln(x)}{2}+\ln(1-x)\operatorname{Li}_2(1-x)+\operatorname{Li}_3(x)-\operatorname{Li}_3(1-x),$$</span>
but we cannot apply the similar approach to the second sum
<span class="math-container">$$\sum_{k=1}^{\infty}\frac{(-1)^k}{(2k+1)^2}H_k.$$</span>
Then, I tried to write the sum as
<span class="math-container">$$\sum_{k=1}^{\infty}\frac{(-1)^k}{(2k+1)^2}\int_0^1\frac{2x^{2k}+x^k-3}{x-1}~\mathrm dx$$</span>
and it become more complicated.</p>
<p><strong>Edit:</strong> </p>
<p>Are we able to evaluate the sum <em>directly</em> (avoid calculating integrals and polylogs as much as possible)? The integral given by @Jack D'Aurizio is a bit complicated (<a href="https://math.stackexchange.com/a/2972249/394456">see this post</a>).</p>
| Ali Shadhar | 432,085 | <p><span class="math-container">\begin{align}
S&=2\sum_{n=0}^\infty\frac{(-1)^nH_{2n}}{(2n+1)^2}+\sum_{n=0}^\infty\frac{(-1)^nH_{n}}{(2n+1)^2}\\
&=2\sum_{n=0}^\infty\frac{(-1)^nH_{2n+1}}{(2n+1)^2}+\sum_{n=0}^\infty\frac{(-1)^nH_{n}}{(2n+1)^2}-2\sum_{n=0}^\infty\frac{(-1)^n}{(2n+1)^3}\\
&=2\Im\sum_{n=1}^\infty\frac{(i)^nH_{n}}{n^2}+\sum_{n=1}^\infty\frac{(-1)^nH_{n}}{(2n+1)^2}-\frac{\pi^2}{16}\\
&=S_1+S_2-\frac{\pi^3}{16}\tag{1}
\end{align}</span>
Using <a href="https://math.stackexchange.com/questions/3366039/a-group-of-important-generating-functions-involving-harmonic-number">the generating function</a>: <span class="math-container">$$\sum_{n=1}^\infty\frac{x^nH_n}{n^2}=\operatorname{Li}_3(x)-\operatorname{Li}_3(1-x)+\ln(1-x)\operatorname{Li}_2(1-x)+\frac12\ln x\ln^2(1-x)+\zeta(3)$$</span>
then
<span class="math-container">\begin{align}
S_1&=2\Im\left(\operatorname{Li}_3(i)-\operatorname{Li}_3(1-i)+\ln(1-i)\operatorname{Li}_2(1-i)+\frac12\ln(i)\ln^2(1-i)\right)\\
&\boxed{S_1=-2\Im\operatorname{Li}_3(1-i)-G\ln2-\frac{\pi}{8}\ln^22}
\end{align}</span></p>
<p><span class="math-container">$$S_2=\sum_{n=1}^\infty\frac{(-1)^nH_n}{(2n+1)^2}=\int_0^1\frac{\ln(1+x^2)\ln x}{1+x^2}\ dx$$</span>
The last integral is evaluated <a href="https://math.stackexchange.com/q/3211825">here</a></p>
<p><span class="math-container">$$\boxed{\int_0^1\frac{\ln(1+x^2)\ln x}{1+x^2}\ dx=\frac3{32}\pi^3+\frac{\pi}8\ln^22-G\ln2+2\text{Im}\operatorname{Li_3}(1-i)=S_2}$$</span>
Plugging <span class="math-container">$S_1$</span> and <span class="math-container">$S_2$</span> in <span class="math-container">$(1)$</span>, we get <span class="math-container">$$\color{blue}{S=\frac{\pi^3}{32}-2G\ln2}$$</span></p>
<p>note that we used:
<span class="math-container">$$\ln(i)=\frac{\pi}{2}i$$</span>
<span class="math-container">$$\ln(1-i)=\frac12\ln2-\frac{\pi}{4}i$$</span>
<span class="math-container">$$\operatorname{Li_2}(1-i)=\frac{\pi^2}{16}-\left(\frac{\pi}{4}\ln2+G\right)i$$</span>
which give us:
<span class="math-container">$$\ln(i)\ln^2(1-i)=\frac{\pi^2}{8}\ln2-\left(\frac{\pi^3}{32}-\frac{\pi}{8}\ln^22\right)i$$</span>
<span class="math-container">$$\ln(1-i)\operatorname{Li_2}(1-i) =-\frac{\pi}{4}G-\frac{\pi^2}{32}\ln2-\left(\frac12\ln2G+\frac{\pi^3}{64}+\frac{\pi}{8}\ln^22\right)i$$</span></p>
|
1,163,001 | <p>I need help to prove this congruence: </p>
<p>$$ 3^n -4(2^n) + 6(1^n) + (-1)^n \equiv 0 \pmod {24} $$</p>
<p>I have tried to used Euler's Theorem on the powers of 2 and 3 individually but now I'm stuck.</p>
| Bill Dubuque | 242 | <p>${\rm mod}\ 3\!:\ f(n) \equiv 2^n - 2^n\equiv 0\ $ for $\,n\ge 1$</p>
<p>${\rm mod}\ 8\!:\ f(n)\equiv 3^n+(-1)^n+6,\,$ for $\,n\ge 1.\ $ $\,f(1)\equiv 0\equiv f(2),\,$ and $\,f(n+2)\equiv f(n)\,$ by $\,3^2\equiv 1\equiv (-1)^2,\,$ so $\,f(n)\equiv 0\,$ for all $\,n\ge 1\,$ by induction.</p>
<p>Therefore $\ 3,8\mid f(n)\,\Rightarrow\, 24={\rm lcm}(3,8)\mid f(n)\ $ for $\,n\ge 1$</p>
|
386,298 | <p>Kahler spaces are just certain singular spaces equipped with a Kahler metric in appropriate sense. I first came across it Demaily-Paun's classical paper <a href="https://annals.math.princeton.edu/wp-content/uploads/annals-v159-n3-p05.pdf" rel="nofollow noreferrer">Numercical Characterization of the Kahler cone of a compact Kahler manifold</a>. However it seems to refer the reader to the relevant background material in another paper of Demaily which is in French. I wonder whether there is any reference that have some background materials on Kahler spaces: definition, fundamental properties etc.</p>
| HYL | 14,037 | <p>For a reference in English, you could take a look at this <a href="https://eudml.org/doc/164492" rel="nofollow noreferrer">paper</a> of Varouchas, which contains a definition of Kähler spaces and relatied concepts (e.g. Kähler morphisms) and some of their fundamental properties.</p>
|
386,298 | <p>Kahler spaces are just certain singular spaces equipped with a Kahler metric in appropriate sense. I first came across it Demaily-Paun's classical paper <a href="https://annals.math.princeton.edu/wp-content/uploads/annals-v159-n3-p05.pdf" rel="nofollow noreferrer">Numercical Characterization of the Kahler cone of a compact Kahler manifold</a>. However it seems to refer the reader to the relevant background material in another paper of Demaily which is in French. I wonder whether there is any reference that have some background materials on Kahler spaces: definition, fundamental properties etc.</p>
| YangMills | 13,168 | <p>There is a discussion of this topic in the recent book <a href="https://smf.emath.fr/publications/cycles-analytiques-complexes-ii-lespace-des-cycles" rel="nofollow noreferrer">Cycles analytiques complexes II : l'espace des cycles</a> by Barlet and Magnússon, chapter XII.3. An English translation (to be published by Springer) is forthcoming.</p>
<p>You will also find a short discussion in the book <a href="https://www.ems-ph.org/books/book.php?proj_nr=210" rel="nofollow noreferrer">Degenerate Complex Monge–Ampère Equations</a> by Guedj and Zeriahi, chapter 16.3.</p>
|
1,631,366 | <p>I need to prove that $\log(x)^{10} < x$ for $\ x>10^{10}$
It's pretty clearly true to me, but I need a good proof of it. I tried induction, and got stuck there.</p>
| marty cohen | 13,079 | <p>You want,
for $x > 10^{10}$,
$(\log(x))^{10}
< x
$.
Setting
$x = y^{10}$,
this is
$(\log(y^{10}))^{10}
< y^{10}
$
for
$y > 10$
or
$\log(y^{10}) < y$
or
$y^{10} < 10^y$.</p>
<p>Since
$x^{1/x}$ is decreasing
for $x > e$,
if $y > 10$,
$10^{1/10} > y^{1/y}$
or
$10^y > y^{10}$.</p>
|
237,464 | <p>Let $-\frac{1}{2}\le a \le\frac{1}{2}$ and $b\in[0,\infty)$.</p>
<p>Definitions: $$f_k(a;b):=\frac{(2k+\frac{1}{2}+a)^2+b}{(2k+\frac{1}{2}-a)^2+b}(\frac{k}{k+1})^{2a},$$
$$f(a;b):=\prod\limits_{k=1}^\infty f_k(a;b)$$
QUESTIONs: </p>
<p>(1) Does $f(a;b)=1$ have any solution with $a\neq 0$?</p>
<p>(2) If yes: Single points $(a;b)$ or areas ? </p>
<p>Thank you very much !</p>
<p>EDIT: Have changed $(\frac{k}{k+1})^a$ to $(\frac{k}{k+1})^{2a}$. It was a mistake.</p>
<p>2th EDIT: It seems to be $f(a,b)<\pi^{2a}$ for $a>0$, at least for e.g. $b>2$. Correct ? </p>
| Gerald Edgar | 454 | <p>Numerically, I get this ($500$ factors)...<br>
red is ${}> 1$, yellow is ${} < 1$.</p>
<p><a href="https://i.stack.imgur.com/nSN9s.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nSN9s.jpg" alt="graph"></a></p>
<p>the jagged parts are probably just from numerical approximation. I'm guessing $f=1$ is only curves, and always $b < 1.8$. </p>
<p>The Gamma function formula yields a similar picture.</p>
<p>Here is the interesting location:</p>
<p><a href="https://i.stack.imgur.com/RTZtC.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RTZtC.jpg" alt="zoom"></a></p>
|
2,466,022 | <p>In Real and Complex Analysis, 3rd Edition, Walter Rudin advances the following:</p>
<p><a href="https://i.stack.imgur.com/XEVwK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XEVwK.png" alt="enter image description here"></a>
<a href="https://i.stack.imgur.com/dTwbg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dTwbg.png" alt="enter image description here"></a></p>
<p>How does $e^z \cdot e^{-z} = 1$ entail $(a)$?</p>
| Dr. Sonnhard Graubner | 175,066 | <p>if $e^z$ assumed to be $0$ then it can not be $$e^z\cdot e^{-z}=1$$ (it would be zero)</p>
|
3,203,346 | <p>I know that <span class="math-container">$f(0)=2$</span>, <span class="math-container">$f'(0)=3$</span> and <span class="math-container">$g=f^{-1}$</span>.</p>
<p>But how can I find the value of <span class="math-container">$g'(2)$</span>?</p>
| Tojra | 655,621 | <p>You have:
<span class="math-container">$g(f(x))=x$</span></p>
<p>Differentiate both sides:
<span class="math-container">$g' (f(x)) f'(x)=1$</span><br>
Put <span class="math-container">$x=0$</span>
<span class="math-container">$g' (f(0)) f'(0) =1$</span><br>
Therefore,<span class="math-container">$ g'(2)= 1/3$</span></p>
|
2,447,850 | <p>So I have to prove 2 things:</p>
<ol>
<li><p>That $\lim\limits_{n \rightarrow \infty}\frac{x^n}{n!} = 0$ where $n \in \mathbb N$ and $x \in \mathbb R, x>0$. </p></li>
<li><p>That $\lim\limits_{n \rightarrow \infty}\frac{x^n}{n!} = 0$ where $n \in \mathbb N$ and $x \in \mathbb R$. </p></li>
</ol>
<p>For #1, I know that $\frac{x^n}{n!} >0$, which means that I can find an upper bound and use squeeze theorem. For #2, I have no idea where to start.</p>
| Koto | 355,087 | <p>I think I've answered this question before, but note that $e^x=\sum_{n=0}^{\infty} \frac{x^n}{n!}$, so for every fixed $x\in \mathbb{R}$ the series converges, thus the sequence of the terms $\{\frac{x^n}{n!}\}_{n\in \mathbb{N}}$ must converge to zero as $n$ goes to infinity, otherwise, the sum would not converge.</p>
|
264,740 | <p>On Hilbert spaces, the following is true:</p>
<p>Let $T$ be a densely-defined linear operator with non-empty resolvent set, then $T$ is closed.</p>
<p>The obvious proof I see to show this uses explicitly the Hilbert space structure which is why I would like to ask:</p>
<p>Is the same result true for operators on Banach spaces?</p>
| Matthew Daws | 406 | <p>(This is really a very long comment...)</p>
<p>I think maybe the actual question comes about because some of the terminology in this area is hazy. Let $T:X\supseteq D(T)\rightarrow X$ be a linear operator on a Banach space $X$. For example, <a href="https://en.wikipedia.org/wiki/Resolvent_set" rel="nofollow noreferrer">Resolvent set, wikipedia</a> <em>defines</em> $\lambda\in\mathbb C$ to be in the resolvent if:</p>
<ol>
<li>$T-\lambda I$ injects;</li>
<li>$(T-\lambda I)^{-1}$ is bounded; and</li>
<li>$(T-\lambda I)^{-1}$ is densely defined</li>
</ol>
<p>Under this definition, it is <em>not</em> true that having non-empty resolvent set implies closed. For example, let $X=\ell^2$, let $D(T) = c_{00}$ be the space of eventually 0 sequences, and define $T((x_n)) = (nx_n)$. Set $\lambda=0$ and check the conditions: $T$ is bijective between $c_{00}$ and $c_{00}$; $T^{-1}((x_n)) = (n^{-1}x_n)$ is bounded; $c_{00}$ is dense in $\ell^2$. But $T$ is not closed; only <em>closable</em>.</p>
<p>Let's be a bit more precise. For any $T:X\supseteq D(T)\rightarrow X$, if $T$ is injective then we may <em>define</em> $T^{-1}:D(T^{-1})\rightarrow X$ by setting $D(T^{-1})$ to be the image of $T$, and defining $T^{-1}(T(x)) = x$ for $x\in D(T)$. This is well-defined as $T$ is injective. Let's compare the graphs:
$$ \mathcal{G}(T) = \{ (x,T(x)) : x \in D(T) \}, \quad
\mathcal{G}(T^{-1}) = \{ (T(x),x) : x\in D(T) \}, $$
so clearly $T$ is closed if and only if $T^{-1}$ is. As Robert Israel observes, it is also true that $T$ is closed if and only if $T-\lambda I$ is closed, and hence $(T-\lambda I)^{-1}$ <em>closed</em> does imply that $T$ is closed.</p>
<p>There are further definitions of "resolvent set". E.g. <a href="https://books.google.co.uk/books?id=U3k8yfchaPYC&lpg=PR7&ots=8ZDD91cTR7&dq=One-Parameter%20Semigroups%20for%20Linear%20Evolution%20Equations&lr&pg=PA239#v=snippet&q=resolvent%20set&f=false" rel="nofollow noreferrer">Engel, Nagel</a> <em>define</em> $\lambda$ to be in the resolvent set if $T-\lambda I:D(T)\rightarrow X$ is <em>bijective</em>, that is, $T-\lambda I$ has range the whole of $X$. They also by definition already assume $T$ is closed, so that then $(T-\lambda I)^{-1}$ is closed and hence by the Closed Graph Theorem, bounded.</p>
|
4,029,725 | <p>In the book by Artificial Intelligence by Norvig and Russel, I came across following problem:</p>
<blockquote>
<p>Prove if correct: <span class="math-container">$(A ∧ B) \models (A ⇔ B).$</span></p>
</blockquote>
<p>I quickly interpreted <span class="math-container">$\models$</span> as <span class="math-container">$\implies$</span> and tried to prove it using the <a href="https://www.wolframalpha.com/input/?i=%28A+and+B%29+implies+%28A+xnor+B%29" rel="nofollow noreferrer">truth table</a>:</p>
<p><a href="https://i.stack.imgur.com/SCakv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SCakv.png" alt="enter image description here" /></a></p>
<p>It seems that at least while interpreting <span class="math-container">$\models$</span> as <span class="math-container">$\implies$</span>, the statement is true. Then I gave second thought and did some more reading to come across <a href="https://cs.stackexchange.com/questions/72360/how-is-implication-same-as-entailment">this</a> thread. Now I know that the two are not same. But, it turns out, when we don't interpret <span class="math-container">$\models$</span> as <span class="math-container">$\implies$</span>, the statement is still true (my course TA uploaded answers without proof).</p>
<p>So I am wondering:</p>
<p><strong>Q1.</strong> How exactly the given statement is correct (given that <span class="math-container">$\models$</span> and <span class="math-container">$\implies$</span> are not same)?<br />
<strong>Q2.</strong> Is my method to interpret both same and then forming truth table, a correct method for such problems? If not then how I should solve it?<br />
<strong>Q3.</strong> If answer to Q2 is no, then will above method of interpreting <span class="math-container">$\models$</span> as <span class="math-container">$\implies$</span> and forming truth table always given correct answer? If not, when it will fail to give correct answer?<br />
<strong>Q4.</strong> I also tried to solve the same using <a href="https://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-825-techniques-in-artificial-intelligence-sma-5504-fall-2002/lecture-notes/Lecture7FinalPart1.pdf" rel="nofollow noreferrer">resolution</a>:</p>
<p><span class="math-container">$$\neg (A\Longleftrightarrow B)\equiv \neg((A\wedge B)\vee(\neg A\wedge\neg B))\equiv\neg(A\wedge B)\wedge \neg(\neg A\wedge \neg B)\equiv (\neg A\vee\neg B)\wedge (A \vee B)$$</span>
So my clauses will be:</p>
<ul>
<li><span class="math-container">$A$</span> (from <span class="math-container">$A\wedge B$</span>)</li>
<li><span class="math-container">$B$</span> (from <span class="math-container">$A\wedge B$</span>)</li>
<li><span class="math-container">$(\neg A\vee\neg B)$</span> from <span class="math-container">$(\neg A\vee\neg B)\wedge (A \vee B)$</span></li>
<li><span class="math-container">$(A \vee B)$</span> from <span class="math-container">$(\neg A\vee\neg B)\wedge (A \vee B)$</span></li>
</ul>
<p><a href="https://i.stack.imgur.com/Zrhje.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Zrhje.png" alt="enter image description here" /></a></p>
<p>So I was able to derive empty clause, so my assumption <span class="math-container">$\neg (A\Longleftrightarrow B)$</span> was incorrect. So <span class="math-container">$(A ∧ B) \implies (A ⇔ B)$</span>. Will application of resolution technique for <span class="math-container">$\models$</span> be exactly same?</p>
<p><strong>Update</strong></p>
<p>[This is my updated understanding based on Graham's answer]</p>
<p><strong>(a)</strong> After reading Graham's answer, I felt that the truth table above is proving the "tautology" <span class="math-container">$(A ∧ B) \implies (A ⇔ B)$</span>, but not <span class="math-container">$(A ∧ B) \models (A ⇔ B)$</span>.</p>
<p><strong>(b)</strong> To prove <span class="math-container">$(A ∧ B) \models (A ⇔ B)$</span>, I need another truth table, something like this:
<a href="https://i.stack.imgur.com/Ywbb2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Ywbb2.png" alt="enter image description here" /></a></p>
<p><strong>(c)</strong> Also, I guess the resolution technique is used to prove tautology <span class="math-container">$(A ∧ B) \implies (A ⇔ B)$</span> and not <span class="math-container">$(A ∧ B) \models (A ⇔ B)$</span>. However, I feel we can use the (first) truth table and resolution proving <span class="math-container">$\implies$</span> to also prove <span class="math-container">$\models$</span>, because of the following fact:</p>
<blockquote>
<p><span class="math-container">$\varphi\vDash \psi$</span> iff <span class="math-container">$M(\varphi)\subseteq M(\psi)$</span>: that is, iff every truth assignment that makes <span class="math-container">$\varphi$</span> true also make <span class="math-container">$\psi$</span> true. This is the case iff <span class="math-container">$\vDash \varphi\Rightarrow\psi$</span>, i.e., if the formula <span class="math-container">$\varphi\Rightarrow\psi$</span> is true in all truth assignments (is a tautology). - <a href="https://cs.stackexchange.com/a/72363/17040">source</a></p>
</blockquote>
<p>Can someone please confirm if my understanding in the above points is correct?</p>
| Graham Kemp | 135,106 | <p><span class="math-container">$\mathcal A \vDash \varphi$</span> says: "statement <span class="math-container">$\varphi$</span> is a semantic consequence of the list of statements <span class="math-container">$\mathcal A$</span> (often called the <em>premises</em>, or <em>assumptions</em>)" and a semantic consequence must be true when all the premises are interpreted as true. [Note: this list may be of zero, one, or several statements.]</p>
<p>In <em>propositional logic</em>, you can use a truth table to establish such a semantic consequence: and <span class="math-container">$(A\land B)\vDash(A\leftrightarrow B)$</span> since indeed <strong>"every assignment that values <span class="math-container">$A\land B$</span> as true will also value <span class="math-container">$A\leftrightarrow B$</span> as true,"</strong> will be shown by a table of every assignment of <span class="math-container">$A,B$</span> and the corresponding evaluations of the statements.</p>
<p><span class="math-container">$$\boxed{\begin{array}{c4|c}A&B & A\land B& A\leftrightarrow B\\\hline\top&\top &\top &\top&\star\\\top&\bot&\bot &\bot\\\bot&\top&\bot&\bot\\\bot&\bot&\bot&\top\\\end{array}}\\\text{the consequent is true in the only row where the premise is true}$$</span></p>
<p>[Note: the trend this century is to use single bar arrows, <span class="math-container">$\to,\leftrightarrow$</span>, for logical connectives, and double edge for metalogic inferences, though not all authors do so.]</p>
<hr />
<p><span class="math-container">$(A\land B)\to(A\leftrightarrow B)$</span> is a statement, and moreover one which is a tautology. That means that it is a logical consequence of no premises at all. Thus:<span class="math-container">$$\vDash (A\land B)\to(A\leftrightarrow B)$$</span></p>
<p>This can also be demonstrated by a truth table, where we show that <strong>"every assignment of <span class="math-container">$A,B$</span> will value the whole statement as true."</strong></p>
<p><span class="math-container">$$\boxed{\begin{array}{c4|c}A&B & (A\land B)\to(A\leftrightarrow B)\\\hline\top&\top &\top\\\top&\bot&\top\\\bot&\top&\top\\\bot&\bot&\top\\\end{array}}\\\text{the consequent is true in every row}$$</span></p>
<p>That is not <em>exactly</em> the same test, but it is clearly the case that where one works the other will too.</p>
<hr />
<p>It should be noted, truth tables work as semantic proofs only in <em>classic propositional logic</em>. They are less applicable in higher order logics, modal logics, among others.</p>
|
4,449,164 | <p>My question stems from page 2 of <a href="https://www.semanticscholar.org/paper/Stability-and-positive-supermartingales-Bucy/31bbbfc842b5717ead8cd7e997a5117fe7a66373" rel="nofollow noreferrer">this paper by Bucy</a>, which states:</p>
<blockquote>
<p>[A random variable <span class="math-container">$x$</span>] is almost everywhere constant a.e.
<span class="math-container">$P$</span>.</p>
</blockquote>
<p>where <span class="math-container">$P$</span> is a probability measure. My interpretation of this is as follows (where I consider <span class="math-container">$x$</span> to be real-valued).</p>
<h4>Interpretation</h4>
<p>Given the probability space <span class="math-container">$(\Omega ,\mathcal F, P)$</span> and some constant <span class="math-container">$c\in \mathbb R$</span>, <span class="math-container">$x:\Omega \rightarrow \mathbb R$</span> is a random variable of the following form:</p>
<p><span class="math-container">\begin{align}
x(\omega) = \begin{cases}
c \qquad&\omega \in A \\
g(\omega) &\omega \in A^c
\end{cases}
\end{align}</span>
where <span class="math-container">$A\in \mathcal F$</span> is such that <span class="math-container">$P(A^c)=0$</span> and <span class="math-container">$g$</span> is an arbitrary real-valued function. The sets <span class="math-container">$A$</span> satisfying the foregoing condition capture what we mean by "almost everywhere" w.r.t. the measure P.</p>
<h4>Questions</h4>
<ol>
<li><p>Is the above interpretation correct?</p>
</li>
<li><p>Is this to be thought of as a general form of what one might call a 'degenerate' random variable?</p>
</li>
<li><p>If (2) is yes, then is there some intuition for why such a definition might be desirable? As opposed to defining a degenerate r.v. as the constant function <span class="math-container">$\tilde x : \Omega \rightarrow \{c\}$</span>.</p>
</li>
</ol>
| Mason | 752,243 | <ol>
<li><p>It means that there exists <span class="math-container">$c \in \mathbb{R}$</span> such that <span class="math-container">$X = c$</span> a.s.. It's essentially what you wrote, except <span class="math-container">$g$</span> is measurable (if <span class="math-container">$g$</span> is allowed to be arbitrary, <span class="math-container">$X$</span> might not even be measurable).</p>
</li>
<li><p>Yes I think this is the meaning of the term "degenerate".</p>
</li>
<li><p>People almost always use equivalence classes for random variables for many reasons. One reason is that this makes <span class="math-container">$L^p$</span> spaces normed spaces. Another reason is that most of the time you are working with the measure <span class="math-container">$P$</span> and the best you can do is prove that something is true <span class="math-container">$P$</span>-a.s., e.g. a sequence converges a.s..</p>
</li>
</ol>
|
13,829 | <p>I was trying to understand the notion of a connection. I have heard in seminars that a connection is more or less a differential equation. I read the definition of Kozsul connection and I am trying to assimilate it. So far I cannot see why a connection is a differential equation. Please help me with some clarification.</p>
| Ross Millikan | 1,827 | <p>It diverges. You should be able to prove that $|\cos(x^2)|>0.1$ for most x. If you let x=$\sqrt{\pi}u$ it is easier to assess the range of u where the cosine is close to zero.</p>
|
73,675 | <p>Dear all,</p>
<p>I'm seeking a reference for a claim made in lecture 8 of Jacob Lurie's chromatic homotopy theory notes (<a href="http://www.math.harvard.edu/~lurie/252xnotes/Lecture8.pdf" rel="noreferrer">http://www.math.harvard.edu/~lurie/252xnotes/Lecture8.pdf</a>). More particularly, Theorem 6 of this lecture states that (say over $\mathbb{F}_2$, so that things are commutative) the spectrum $\mathbb{G} = \operatorname{Spec} \mathcal{A}_*$ of the dual Steenrod algebra $\mathcal{A}_*$ is the automorphism group of the additive formal group law, in the obvious sense.</p>
<p>Lurie argues convincingly that $\mathbb{G}$ does act on the additive formal group law, but I don't think he attempts to prove that this action gives an isomorphism with the automorphism group. I'd be grateful if someone could give me a reference for this fact.</p>
<p>Cheers,</p>
<p>Saul</p>
| John Palmieri | 4,194 | <p>If you're looking for a reference in print, it's in Ravenel's book <em>Complex Cobordism and Stable Homotopy Groups of Spheres</em>. See the comments after the proof of Theorem A2.2.18. (This book is <a href="http://www.math.rochester.edu/people/faculty/doug/mu.html#repub" rel="noreferrer">available online</a>, and you want Appendix 2.)</p>
|
2,385,152 | <p>Given a matrix A, e.g.
$$
A=\begin{bmatrix}
a_{11}& a_{12} & a_{13} \\
a_{21}& a_{22} & a_{23} \\
a_{31}& a_{32} & a_{33} \\
\end{bmatrix}
$$
eliminating the row and the column corresponding $a_{21}$ results in a smaller matrix
$$
B=\begin{bmatrix}
a_{12} & a_{13} \\
a_{32} & a_{33} \\
\end{bmatrix}
$$
Is there a denotation for such resultant matrix B? Matrix cofactor involves similar operation but it does not gives a matrix.</p>
| Luis Vera | 178,730 | <p>The only place I remember seeing special notation is in Steven Roman's "Advanced Linear Algebra".</p>
<blockquote>
<p>Let $A \in \mathcal{M}_{n \times m}(\mathbb{F}),\, A=[a_{ij}].$ If $B \subseteq\{1,\ldots,n\}$ and $C\subseteq\{1,\ldots,m\},$ then
$\,A[B,C]\,$ denotes the submatrix of $A$ that results in removing all rows that are not in $B$ and all columns that are not in $C.$</p>
</blockquote>
<p>I don't know if this notation is widely used, but I hope it helps.</p>
|
2,423,055 | <p>I am sort of baffled by this thing, already real number has every thing in it why is this concept of $\Bbb R^2$ ? What does it mean? What is its advantage?</p>
| md2perpe | 168,433 | <p>$\mathbb R $ doesn't have everything. For example, it doesn't have an $x $ such that $x^2 = -1$. The complex numbers contain such objects.</p>
<p>$\mathbb R^2$ can be used for a lot of things. The most obvious is to describe points in a plane.</p>
|
516,244 | <p>My professor gave us this example on her notes:</p>
<p>$$\sum_{n = 1}^\infty \left(\frac{3}{n(n+3)}+\frac{1}{2^n}\right)$$</p>
<p>So I know we're supposed to find the partial fraction, which ends up being</p>
<p>$$\left(\frac{3}{n(n+3)}=\frac{A}{n}+\frac{B}{n+3}=
\frac{1}{n}-\frac{1}{n+3}\right)$$</p>
<p>So based on how she did the other examples, I would expect her to do:</p>
<p>$$\sum_{n = 1}^\infty \left(\frac{3}{n(n+3)}=\frac{1}{1}-\frac{1}{4}+\frac{1}{2}-\frac{1}{5}\right.....)$$, because I'd be plugging in numbers for n starting with n=1. However, she instead did the following:</p>
<p>$$\sum_{n = 1}^\infty \left(\frac{3}{n(n+3)}=\frac{1}{n}-\frac{1}{n+1}+\frac{1}{n+1}-\frac{1}{n+2}+\frac{1}{n+2}-\frac{1}{n+3}\right)$$,</p>
<p>which would definitely be a lot more helpful in helping cancel out terms like you're supposed to when doing telescoping series, BUT I don't know why she's doing this. I thought we were supposed to plug in values from n and that's what should be increasing each time, but instead the number being added to n is the one going up and I have no clue why. I don't think I'm asking this question in the best way possible, but I'm kinda confusing myself because she did other examples and they feel nothing like this and I'm just starting to learn all this, so can somebody please give me some insight as to what is going on?</p>
<p>(and I know I'm supposed to also deal with the sum of the $$\frac{1}{2^n}$$ term but I'm kinda ignoring it for now since I don't even know what's going on with the first one</p>
| aryaman | 362,622 | <p>The area of a triangle $P(x_1, y_1)$, $Q(x_2, y_2)$ and $R(x_3, y_3)$ is given by
$$\triangle= \left|\frac{1}{2}(x_{1} (y_{2} – y_{3}) + x_{2} (y_{3} – y_{1}) + x_{3} (y_{1} – y_{2}))\right| $$
If the area of triangle is zero, it means the points are collinear.</p>
<p>If we code this in <code>Python3</code>, it will look like</p>
<pre><code>def triangle_area(x1, y1, x2, y2, x3, y3):
return abs(0.5*(x1*(y2-y3)+x2*(y3-y1)+x3*(y1-y2)))
</code></pre>
<p>If we code in <code>js</code>, it will look like this</p>
<pre><code>function triangle_area(x1, y1, x2, y2, x3, y3){
return Math.abs(0.5*(x1*(y2-y3)+x2*(y3-y1)+x3*(y1-y2)))
}
</code></pre>
|
101,929 | <p>I am trying to do the following :-</p>
<pre><code>f[x_] := {x, 2};
Do[f[x_] :=Append[f[x], {x, 4}],{3}] % runs into recursion limit
</code></pre>
<p>because I want to grow my function (which is a list of functions) in a loop ? What is the right way to do this ?</p>
| cleanplay | 8,107 | <p>I can do something like :-</p>
<pre><code> f[x_] := {x, 2};
Do[g[x_] = Join[f[x], {{x, 4}}]; f[x_] = g[x]; Clear[g], {3}]
</code></pre>
<p>I will accept better answers if there are. Thanks</p>
|
310,651 | <p>$\mathbb{Z}_{30}$* $= \{1,7,11,13,17,19,23,29\}$</p>
<p>The number of elements is $8$ and $8$ is not prime, therefore $\mathbb{Z}_{30}$* is not cyclic.</p>
<p>and the generators are $7,11,13,17,19,23,29$.</p>
<p>can anyone correct me please?</p>
| Ittay Weiss | 30,953 | <p>A group of prime order must indeed be cyclic. But a group of non-prime order, may or may not be cyclic. For instance $\mathbb Z_2 \times \mathbb Z_2$ is a non-cyclic group of order $4$, while $\mathbb Z_4$ is a cyclic group of order $4$. If the order of the group is not a prime number then you simply don't automatically know if it is prime or not. </p>
<p>Also, your answer says "... therefore $\mathbb Z^*_{30}$ is not cyclic and the generators are $7,11,13...$". This shows that you don't understand what a cyclic group is, or what generators are. A group is cyclic precisely when it has a generator. So, you can't possibly conclude that a group is not cyclic and then immediately present a generator for it. </p>
<p>For an element $a\in \mathbb Z^*_{30}$ to be a generator it must be the case that when you take all powers of $a$, computed in the group $\mathbb Z^*_{30}$ (in which the operation is multiplication modulo $30$ and the identity element is $1$) you get all elements in the group. </p>
<p>So, for instance, the element $29$ is not a generator. Let's check what it does generate: We need to look at $29$, $29,^2$.... But $29^2=841=1$. So all $29$ is going to generate is the subgroup $\{1,29\}$, not the whole group, so $29$ is not a generator of $\mathbb Z^*_{30}$. None of the other elements is a generator, which proves very directly that the group is indeed not cyclic. </p>
|
4,446,352 | <p>Consider two different ways to formally define a predicate, say the predicate <span class="math-container">$E(x)$</span> which is to be interpreted as "<span class="math-container">$x$</span> is even."</p>
<p><strong>CASE <span class="math-container">$~1$</span></strong></p>
<p><span class="math-container">$\forall a:[a \in N \implies [E(a) \iff \exists b:[b \in N \land a=2b]]]$</span></p>
<p><strong>CASE</strong> <span class="math-container">$~2$</span></p>
<p><span class="math-container">$\forall a:[E(a) \iff a\in N \land \exists b:[b \in N \land a=2b]]$</span></p>
<p>There is a subtle difference in outcomes. <span class="math-container">$E(1)$</span> will be FALSE in both cases. <span class="math-container">$E(-1)$</span>, however, will be UNDEFINED in case 1, but FALSE in case 2.</p>
<p>Which method, if either, is to be recommended?</p>
| Tankut Beygu | 754,923 | <p>It may be helpful to look into the statements in terms of <em>restricted quantifiers</em>. Restricted quantification is used when the range of a quantified variable is desired to be restricted to a part of the domain of discourse. Usual notations for the general case are (simpler notations are also used):</p>
<p><span class="math-container">$(\forall x)_{R(x)}\phi(x)$</span>, which abbreviates <span class="math-container">$\forall x(R(x)\rightarrow\phi(x))$</span></p>
<p>and</p>
<p><span class="math-container">$(\exists x)_{R(x)}\phi(x)$</span>, which abbreviates <span class="math-container">$\exists x(R(x)\wedge\phi(x))$</span>,</p>
<p>where <span class="math-container">$R(x)$</span> is the restricting predicate and <span class="math-container">$\phi(x)$</span> is any formula in the language.</p>
<p>In the context of arithmetic and set theory, they are called <em>bounded quantifiers</em>. For example:</p>
<p><span class="math-container">$(\exists n)_{n<t} \phi(n)$</span></p>
<p><span class="math-container">$(\forall x)_{x\in S} \phi(x)$</span></p>
<p>By restricted quantifiers, we can rewrite the statements as:</p>
<ol>
<li><span class="math-container">$(\forall n)_{n\in\mathbb{N}}(E(n) \leftrightarrow (\exists k)_{k\in\mathbb{N}}(n = 2k))$</span></li>
<li><span class="math-container">$\forall x(E(x)\leftrightarrow x\in\mathbb{N}\wedge(\exists k)_{k\in\mathbb{N}}(x = 2k))$</span></li>
</ol>
<p>In the first case, the universal set for the interpretation of the variable is prespecified as natural numbers.</p>
<p>In the second case, the the universal set for the interpretation of the variable is unrestricted (possibly, real numbers). If the set is specified externally, then <span class="math-container">$x\in\mathbb{N}$</span> is rendered superfluous (as well as the restricted quantification of <span class="math-container">$k$</span>). However, that does not mean that such a definition is inferior with respect to the other, it depends on the practical choice to be made according to the context.</p>
|
3,654,315 | <blockquote>
<p>Let <span class="math-container">$m$</span> be an odd positive integer. Prove that</p>
<p><span class="math-container">$$ \dfrac{ \sin (mx) }{\sin x } = (-4)^{\frac{m-1}{2}} \prod_{1 \leq j
\leq \frac{(m-1)}{2} } \left( \sin^2 x - \sin^2 \left( \dfrac{ 2 \pi
j }{m } \right) \right) $$</span></p>
</blockquote>
<h2>Atempt to the proof</h2>
<p>My idea is to use induction on <span class="math-container">$m$</span>. The base case is <span class="math-container">$m=3$</span> and we obtain</p>
<p><span class="math-container">$$ \dfrac{ \sin (3x) }{\sin x } = (-4) ( \sin^2 x - \sin^2 (2 \pi /3 ) ) $$</span></p>
<p>and this holds if one uses the well known <span class="math-container">$\sin (3x) = 3 \sin x - 4 \sin^3 x $</span> identity.</p>
<p>Now, if we assume the result is true for <span class="math-container">$m = 2k-1$</span>, then we prove it holds for <span class="math-container">$m=2k+1$</span>. We have</p>
<p><span class="math-container">$$ \dfrac{ \sin (2k + 1) x }{\sin x } = \dfrac{ \sin [(2k-1 + 2 )x] }{\sin x } = \dfrac{ \sin[(2k-1)x ] \cos (2x) }{\sin x } + \dfrac{ \cos [(2k-1) x ] \sin 2x }{\sin x } $$</span></p>
<p>And this is equivalent to</p>
<p><span class="math-container">$$ cos(2x) \cdot (-4)^{k-1} \prod_{1 \leq j
\leq k-1 }\left( \sin^2 x - \sin^2 \left( \dfrac{ 2 \pi
j }{m } \right) \right) + 2 \cos [(2k-1) x ] \cos x $$</span></p>
<p>Here I dont see any way to simplify it further. Am I on the right track?</p>
| Conrad | 298,272 | <p>Note that <span class="math-container">$\sin (x-\frac{2\pi j }{m})=-\sin(x+\frac{(m-2j)\pi}{m})$</span> and <span class="math-container">$m-2j$</span> goes through the odd numbers <span class="math-container">$1,...m-2$</span> when <span class="math-container">$ 1\le j \le \frac{m-1}{2}$</span></p>
<p>By the paralelogram rule for sine <span class="math-container">$\sin^2 x- \sin^2 y=\sin(x-y)\sin(x+y)$</span> so we get that the RHS product </p>
<p><span class="math-container">$P=\sin x \prod_{1 \leq j
\leq \frac{(m-1)}{2} } \left( \sin^2 x - \sin^2 \left( \dfrac{ 2 \pi
j }{m } \right) \right)=(-1)^{\frac{m-1}{2}}\prod_{0 \leq j
\leq m-1}\sin (x+\frac{j\pi}{m})=$</span></p>
<p><span class="math-container">$=(-1)^{\frac{m-1}{2}}2^{-(m-1)}\sin mx$</span> by the classic product formula, so we are done!</p>
<p>(the product formula is obtained by taking the imaginary part of both sides in <span class="math-container">$e^{2imx}-1=\Pi_{k=0,..m-1} {(e^{2ix}-e^{-\frac{2\pi ik}{m}})}$</span>)</p>
|
4,582,378 | <p><em>Defintition:</em> A real sequence <span class="math-container">$\ (x_n)_n\ $</span> is <em>convex</em> if <span class="math-container">$\ x_n - x_{n+1} \geq x_{n+1} - x_{n+2}\quad \forall\ n\in\mathbb{N}. $</span></p>
<p>Continuing on from this question <a href="https://math.stackexchange.com/questions/4581894/if-a-n-b-n-are-positive-decreasing-sequences-a-n-is-convex-sum-a/4582370">here</a>,</p>
<blockquote>
<p><strong>Proposition <span class="math-container">$\ 3:\ $</span></strong> If <span class="math-container">$\ (a_n)_n,\ (b_n)_n,\ $</span> are positive convex decreasing sequences, <span class="math-container">$\ \displaystyle\sum a_n \ $</span> converges and <span class="math-container">$\ \displaystyle\sum b_n \ $</span> diverges, then <span class="math-container">$\ \frac{a_n}{b_n}\to 0.\ $</span></p>
</blockquote>
<p>In the previous question, counter-examples were found if either <span class="math-container">$\ (a_n)_n,\ $</span> or <span class="math-container">$\ (b_n)_n,\ $</span> were not required to be convex (but were required to be decreasing), so requiring them both to be convex is a follow-up question I cannot resist investigating.</p>
<ol>
<li><p>If the proposition is false, then <span class="math-container">$\ \frac{a_n}{b_n} = c>0\ $</span> for infinitely many <span class="math-container">$\ n.\ $</span> (We may assume WLOG that <span class="math-container">$\ c=1,\ $</span> since <span class="math-container">$\ \displaystyle\sum a_n \ $</span> converges <span class="math-container">$\ \iff \displaystyle\sum \lambda a_n \ $</span> converges).</p>
</li>
<li><p>But in order for <span class="math-container">$\ \displaystyle\sum a_n \ $</span> to converge and <span class="math-container">$\ \displaystyle\sum b_n \ $</span> diverge, we need <span class="math-container">$\ a_n \ll b_n\ $</span> for most <span class="math-container">$\ n,\ $</span> meaning, I <em>think</em>, that for all <span class="math-container">$\varepsilon > 0$</span>, <span class="math-container">$$\lim_{n\to\infty} \left( \frac{ \text{ The number of integers } \leq n \text{ with } \frac{a_n}{b_n} < \varepsilon }{n} \right) = 1.$$</span></p>
</li>
</ol>
<p>I know as the question asker, I get to decide what is meant by "<span class="math-container">$\ll$</span>". But I'm not sure what I want this to mean rigorously, but maybe the definition above is appropriate?</p>
<p>I suspect these two facts are at odds with one another, although I don't know how to make this rigorous.</p>
| Adam Rubinson | 29,156 | <p>Given any strictly positive, convex, decreasing, summable <span class="math-container">$\ (a_n).$</span></p>
<p>Let: <span class="math-container">$\ k_1 = 1,\quad k_{j+1} = 2 \left\lceil \frac{1}{a_{k_j}} \right \rceil + k_j,\qquad \forall\ j\in\mathbb{N}$</span></p>
<p>Then for each <span class="math-container">$\ j\in\mathbb{N},\ $</span>let:</p>
<p><span class="math-container">$b_n = \left( \frac{ n - k_j}{ k_{j+1} - k_j }\right)\ a_{k_{j+1}} + \left( 1 - \frac{ n - k_j}{ k_{j+1} - k_j } \right)\ a_{k_j} \ $</span> for each <span class="math-container">$\ n\ $</span> with <span class="math-container">$\ k_j \leq n < k_{j+1},\ $</span> that is, if</p>
<p><span class="math-container">$\ k_j \leq n < k_{j+1},\ $</span> then <span class="math-container">$\ (n,b_n)\ $</span> lies on the straight line joining <span class="math-container">$\ (k_j, a_{k_j})\ $</span> to <span class="math-container">$\ (k_{j+1}, a_{k_{j+1}}),\ $</span> and so, since <span class="math-container">$\ (a_n)_n\ $</span> is convex, <span class="math-container">$\ (b_n)_n\ $</span> is also convex.</p>
<p>The idea behind the choice of <span class="math-container">$\ (k_j)_{j\in\mathbb{N}}\ $</span> is so that the area of the trapezium under the straight line joining <span class="math-container">$\ (k_j, a_{k_j}) = (k_j, b_{k_j})\ $</span> to <span class="math-container">$\ (k_{j+1}, a_{k_{j+1}}) = (k_{j+1}, b_{k_{j+1}})\ $</span> is, due to positivity of all <span class="math-container">$\ a_k,\ $</span> greater than the area of the triangle bounded by the <span class="math-container">$\ x-$</span>axis ( <span class="math-container">$\ n-$</span>axis ) and<span class="math-container">$\ (k_j, a_{k_j})\ $</span> to <span class="math-container">$\ (0, a_{k_{j+1}}).$</span> Formally, for each <span class="math-container">$\ j\in\mathbb{N}:$</span></p>
<p><span class="math-container">$$ \sum_{n=k_j}^{n=k_{j+1} - 1} b_n = \sum_{n=k_j}^{n=k_{j+1} - 1} \frac{ n - k_j}{ k_{j+1} - k_j }\ a_{k_{j+1}} + \sum_{n=k_j}^{n=k_{j+1} - 1} \left( \frac{ k_{j+1} - n }{ k_{j+1} - k_j } \right)\ a_{k_j} $$</span></p>
<p><span class="math-container">$$= \frac{ 1 }{ k_{j+1} - k_j } \left( \sum_{n=k_j}^{n=k_{j+1} - 1} \left( n - k_j \right)\ a_{k_{j+1}} + \sum_{n=k_j}^{n=k_{j+1} - 1} \left( k_{j+1} - n \right) a_{k_j} \right) $$</span></p>
<p><span class="math-container">$$= \frac{ 1 }{ k_{j+1} - k_j } \left( \frac{ \left( k_{j+1} - k_j - 1 \right) \left( k_{j+1} - k_j \right) }{ 2 } a_{k_{j+1}} + \frac{ \left( k_{j+1} - k_j \right) \left( k_{j+1} - k_j + 1 \right) }{ 2 } a_{k_j} \right) $$</span></p>
<p><span class="math-container">$$ = \frac{1}{2} \left( \left( k_{j+1} - k_j - 1 \right) a_{k_{j+1} } + \left( k_{j+1} - k_j + 1 \right) a_{k_j} \right) $$</span></p>
<p><span class="math-container">$$ = \frac{1}{2} \left( \left( k_{j+1} - k_j \right) \left( a_{k_{j+1} } + a_{k_J} \right) + \underbrace{a_{k_j} - a_{k_{j+1}}}_{ \geq 0,\ \text{ since } (a_{k_j})_j \text{ is decreasing} } \right) $$</span></p>
<p><span class="math-container">$$ \geq \frac{1}{2} \left( k_{j+1} - k_j \right) \left( a_{k_{j+1} } + a_{k_J} \right) = \left\lceil \frac{1}{a_{k_j}} \right \rceil \left( a_{k_{j+1} } + a_{k_J} \right) \geq \left\lceil \frac{1}{a_{k_j}} \right \rceil a_{k_j} \geq 1. $$</span></p>
<p>This shows formally that <span class="math-container">$\ \displaystyle\sum b_n\ $</span> diverges to <span class="math-container">$\ +\infty.$</span></p>
|
11,435 | <p>I know what they are and how to define them. They have to serve some purpose - else why include them? But I never used them and all examples on this site and the docs never show a practical use of <code>UpValues</code>.</p>
<p>Can someone give me an example to see when they are actually needed?</p>
| Verbeia | 8 | <p>I am embarrassed to admit that I completely forgot about my own <a href="http://www.verbeia.com/mathematica/mma/upvalues.nb" rel="noreferrer">tutorial explaining how to use <code>UpValues</code> to create a lag operator</a>, which I wrote sometime in the late 1990s and have had on my web site since then.</p>
<p>The <code>UpValues</code> construction is exactly what you need to define how certain built-in operators like arithmetic should behave in the context of a custom operator like a lag operator.</p>
<p>(Thanks to Murta for reminding me.)</p>
|
2,121,611 | <p>Compute a quadrature of</p>
<p>$\int_c^d\int_a^b f(x,y)dxdy$</p>
<p>using the Simpson rule and estimate the error.</p>
<p>So the Simpson rule says </p>
<p>$S(f) = (b-a)/6(f(a)+4f((a+b)/2) +f(b))$</p>
<p>So i get </p>
<p>$\int_c^d(b-a)/6(f(a)+4f((a+b)/2) +f(b))dy$</p>
<p>Is that even correct? How do I go on?</p>
| slitvinov | 78,215 | <p>Something to get you started. Define an operator:
$$
I_x(f) := \int_a^b f(x) \, dx
$$
We can see that $I_x(f + g) = I_x(f) + I_x(g)$ and if $\alpha$ does not depend on $x$, $I_x(\alpha f) = \alpha I_x(f)$. In other words the operator is linear.</p>
<p>If it acts on $f(x, y)$ it returns a function of $y$ only.</p>
<p>By analogy define another operator
$$
I_y(f) := \int_c^d f(y) \, dy
$$</p>
<p>Applying $I_x$ and $I_y$ consequently we get a double integral:
$$
I_y(I_x(f)) := \int_c^d \left[ \int_a^b f(x, y) \, dx \right] dy .
$$</p>
<p>Now define a "Simpson's rule" operator:
$$
S_x(f) := (f(a) + 4 f((a+b)/2) + f(b)) (b - a)/6
$$
It is also linear and if it acts on $f(x, y)$ it "kills" dependency on $x$. The same for $S_y$:
$$
S_y(f) := (f(c) + 4 f((c+d)/2) + f(d)) (d - c)/6
$$</p>
<p>By analogy with integral let compute $S_y(S_x(f(x, y)))$ and call it Simpson's rule for double integral. Apply $S_x$ to f(x, y):
$$
S_x(f(x, y)) = (f(a, y) + 4 f((a+b)/2, y) + f(b, y)) (b - a)/6
$$
and apply $S_y$ to the result using linearity
$$
S_y(S_x(f(x, y)) = (S_y(f(a, y)) + 4 S_y(f((a+b)/2, y)) + S_y(f(b, y))) (b - a)/6
$$
expand all three terms with $S_y$
$$
S_y(S_x(f(x, y)) = \\
\left(16f\left(\frac{b+a}{2},\frac{d+c}{2}\right)+4f\left(\frac{b+a}{2},d\right)+4f\left(\frac{b+a}{2},c\right)+4f\left(b,\frac{d+c}{2}\right)
+f\left(b,d\right)+f\left(b,c\right)+4f\left(a,\frac{d+c}{2}\right)+f\left(a,d\right)+f\left(a,c\right)\right)
\frac{\left(b-a\right)\left(d-c\right)}{36}
$$</p>
|
626,920 | <p>$a_n=\sum_{k=1}^{n} \frac{1}{n+k}=\frac{1}{n+1}+\frac{1}{n+2}+\dots+\frac{1}{2n}$</p>
<p>How to find $\lim a_n$?</p>
| Lucian | 93,448 | <p>Brutally speaking, $$\sum_{k=1}^{n}\frac1{n+k}=\left(\sum_{k=1}^{2n}\frac1k\right)-\left(\sum_{k=1}^{n}\frac1k\right)\simeq\ln2n-\ln n=\ln\frac{2n}n=\ln2.$$ Rigourously, we have $$\sum_{k=1}^{n}\frac1{n+k}=\frac1n\cdot\sum_{k=1}^{n}\frac1{1+\frac kn}=\int_0^1\frac{dx}{1+x}=\ln(1+x)|_0^1=\ln2.$$</p>
|
293,110 | <p>How do I prove that homomorphism $\phi : \; \mathrm{Mod}(S_g)\to \mathrm{Sp}(2g, \mathbb{Z})$ (induced by the action of mapping class group of a surface on integer homologies of a surface) is an epimorphism? My idea was to work with generators, but I was not able to prove it this way. </p>
<p>I would love to get detailed answers in order to understand this better. </p>
| Flash Sheridan | 13,530 | <p>Church’s “Set Theory with a Universal Set” and its variants have complements, with Replacement restricted to sets equinumerous to a well-founded set:</p>
<ul>
<li>Alonzo Church 1974a. “Set Theory with a Universal Set,” <em>Proceedings of the Tarski Symposium, Proceedings of Symposia in Pure Mathematics XXV,</em> ed. Leon Henkin, American Mathematical Society, ISBN: 978-0821814253, pp. 297-308. (Delivered 24 June 1971.)</li>
<li>Alonzo Church 1974b. “Notes on a Relative Consistency Proof of Axioms A– K of Church’s Set Theory with a Universal Set,” unpublished lecture notes, Church Archives, box 47, Folder 5.</li>
<li>Emerson Mitchell 1976. <em>A Model of Set Theory with a Universal Set,</em> unpublished Ph.D. thesis, University of Wisconsin at Madison.</li>
<li>“A Variant of Church’s Set Theory with a Universal Set in which the Singleton Function is a Set” (abridged), in <em>Logique et Analyse,</em> Vol 59, No 233 (2016) pp. 81–131, doi:10.2143/LEA.233.0.3149532. The full version is available at the Centre National de Recherches de Logique: <a href="http://www.logic-center.be/Publications/Bibliotheque/SheridanVariantChurch.pdf" rel="nofollow noreferrer">http://www.logic-center.be/Publications/Bibliotheque/SheridanVariantChurch.pdf</a>.</li>
<li>“Fixing Frege’s Set Theory,” slides from a talk on Church’s and my set theories with a universal set, delivered remotely at the University of Oxford Mathematical Institute, October 2013, and at the Stanford Mathematics Department, April 2014: <a href="http://www-logic.stanford.edu/seminar/1314/Sheridan_Fixing_Freges_Set_Theory.pdf" rel="nofollow noreferrer">http://www-logic.stanford.edu/seminar/1314/Sheridan_Fixing_Freges_Set_Theory.pdf</a>.</li>
</ul>
<p>See also Arnold Oberschelp 1973. <em>Set Theory over Classes,</em> Dissertationes Mathematicæ (Rozprawy Mat.) 106. [Mathematical Reviews 42 #8300], but note that the crucial part of the consistency proof in both [Friedrichsdorf 1979] and [Oberschelp 1973] is merely a reference to [Oberschelp 1964a], which uses a significantly different formalism.</p>
<p>More generally (chiefly, but not exclusively, about Quine’s <em>New Foundations</em>), see <a href="http://math.boisestate.edu/~holmes/holmes/setbiblio.html" rel="nofollow noreferrer">http://math.boisestate.edu/~holmes/holmes/setbiblio.html</a>, and also <a href="https://en.wikipedia.org/wiki/Universal_set#Restricted_comprehension" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Universal_set#Restricted_comprehension</a>.</p>
|
503,691 | <p>I'm a bit confused about a statement that I see often in the $\lambda$-calculus literature. Namely, what exactly does the following statement mean: "By induction on the length of $M\in\Lambda$." ?</p>
<p>In such literature, $\Lambda$ is defined as the smallest subset of $\mathcal{V}\cup\{(,),\lambda\}$ such that:</p>
<ul>
<li>$x\in\Lambda$, if $x\in\mathcal{V}$</li>
<li>$(PQ)\in\Lambda$, if $P,Q\in\Lambda$</li>
<li>$(\lambda x P)\in\Lambda$, if $x\in\mathcal{V}$ and $P\in\Lambda$</li>
</ul>
<p>where $\mathcal{V}$ is any fixed infinite set of symbols.</p>
<p>Now, the length of $M\in\Lambda$ is given by:</p>
<ul>
<li>$|x| = 1$</li>
<li>$(PQ)$ = $|P|+|Q|$</li>
<li>$(\lambda x P)$ = $1+|P|$</li>
</ul>
<p>Therefore, my question is: in what way is <em>"structural induction over $\Lambda$"</em> any different than <em>"induction on the length of a $\lambda$-term"</em>? In particular, what is the inductive hypothesis in such inductive method? </p>
| Trismegistos | 23,730 | <p>For induction on length inductive hypothesis is that statement holds for lambda term of length n. Induction step is to infer that statement holds for lambda term of length n + 1 when it holds for lambda term of length n.</p>
<p>Structural induction is described in this topic <a href="https://math.stackexchange.com/questions/285930/lambda-calculus-structural-induction-principle-over-lambda?rq=1">$\lambda$-calculus: structural induction principle over $\Lambda$</a> and it is something different. Structural induction described in that topic is some specific case of induction on length.</p>
|
403,344 | <p>I'm trying to determine whether this statement is true or false:</p>
<p>Assume that $dim V = n$ and let $T \in L(V)$. Let $f(z) \in P_n(F)$ be a
monic polynomial of degree
$n$ such that $f(T) = 0$. Then $f(z)$ is the characteristic polynomial of $T$.</p>
<p>I know that because $f(T) = 0$, the minimal polynomial of T divides it and that the minimal polynomial of T divides the characteristic polynomial of T as well, but I can't seem to put these together in a way that conclusively establishes the truth or falsehood of the above statement.</p>
<p>If anyone could shed some insight on the matter, it would be greatly appreciated. Thanks!</p>
| N. S. | 9,176 | <p><strong>Hint</strong> Take $T$ to be the $0$ operator. Then $f(T)=0$ for all $f(z)$ divisible by $z$.</p>
|
2,421,421 | <blockquote>
<p>Evaluate
$$ \lim_{x\to \pi/2} \frac{\sqrt{1+\cos(2x)}}{\sqrt{\pi}-\sqrt{2x}}$$ </p>
</blockquote>
<p>I tried to solve this by L'Hospital's rule..but that doesn't give a solution..appreciate if you can give a clue.</p>
| neonpokharkar | 477,567 | <p>Dividing numerator and denominator by $\sqrt{2}$
$$\lim_{x\to \frac{π}{2}} \frac {\sqrt{\frac{1+\cos 2x}{2}}}{\sqrt {\frac{π}{2}} -\sqrt { x}}$$
$$$$
Multiplying and dividing by $(\sqrt {\frac{π}{2} } + \sqrt {x})$
$$=\lim_{x\to \frac {π}{2}}(\sqrt {\frac{π}{2}} + \sqrt x)(\frac {\sqrt {\frac {1+ \cos 2x}{2}}}{\frac {π}{2} -x})$$
$$$$
As $\sqrt {\frac {1+\cos 2x}{2}} = \cos x$
$$=(\lim_{x\to \frac{π}{2}} \sqrt {\frac{π}{2}} +\sqrt{x})(\lim _{x\to \frac{π}{2}}\frac {\cos x}{\frac {π}{2} - x})$$
$$= \sqrt {2π} \lim_{x\to \frac{π}{2}} \frac {\sin (\frac {π}{2} - x)}{\frac {π}{2} - x}$$
$$$$
Let $t = \frac {π}{2} -x$
$$$$
As $x\to \frac{π}{2}$ then $t\to 0$
$$ =\sqrt{2π} \lim_{t\to 0} \frac {\sin t}{t}$$</p>
<blockquote>
<p>$$\lim_{x\to \frac{π}{2}} \frac {\sqrt {1+\cos 2x}}{\sqrt{π}-\sqrt 2x}=\sqrt {2π}$$</p>
</blockquote>
|
488,498 | <p>Find $dy/dx$ in terms of $x$ and $y$ when
$$x^2y^3=7x−3y$$</p>
<p>Not sure how to start here, would be nice with some pointers </p>
| JP McCarthy | 19,352 | <p>I am going to go a little bit further than the previous answer:</p>
<p>$$x^2y^3=7x-3y\,\,(*)$$</p>
<p>Now what does $\displaystyle \frac{dy}{dx}$ have to do with this equation? The derivative is concerned with slopes of tangents to curves. Where is the curve here? </p>
<p>Define a curve $\mathcal{C}$ on the plane as follows:
$$(x,y)\in\mathcal{C}\Leftrightarrow x^2y^3=7x-3y.$$
This $\Leftrightarrow$ means 'if and and only if'.</p>
<p>You have seen this idea before; for example you have
$$(x,y)\text{ on unit circle}\Leftrightarrow x^2+y^2=1.$$</p>
<p>In other words, points on the curve $\mathcal{C}$ satisfy the equation and every coordinate that satisfies the equation is on the curve. </p>
<p>Now if you draw some kind of squigly curve, then as long as the curve is not vertical nearby, then you can put a small circle around any point on the curve and <em>locally</em> it looks like the graph of a function $y=y(x)$. This is the <a href="http://en.wikipedia.org/wiki/Implicit_function_theorem" rel="nofollow">Implicit Function Theorem</a>, and roughly says that if you look near a suitable point $(x_1,y_1)$ on the curve you can solve $(*)$ for $y=y(x)$. This allows us to rewrite $(*)$ as
$$x^2[y(x)]^3=7x-3[y(x)],\,\,\,(**)$$
where I use the square brackets just to help us see where to apply the chain rule.</p>
<p>So now we differentiate (**) according to the normal theorems of differentiation such as the sum, product and chain rules. Using the chain rule to differentiate $(\sin x)^3$ we have</p>
<p>$$\frac{d}{dx}(\sin x)^3=3(\sin x)^2\cdot \frac{d}{dx}\sin x.$$</p>
<p>When we look at $[y(x)]^3$ in this light:
$$\frac{d}{dx}[y(x)]^3=3[y(x)]^2\cdot \frac{d}{dx}[y(x)].$$</p>
<p>But what is $\displaystyle \frac{d}{dx}[y(x)]$? Why exactly what we are looking for $\displaystyle \frac{dy}{dx}$!</p>
<p>So we differentiate (**) to generate an equation in $x$, $y$ and $\frac{dy}{dx}$ which we can solve for $\frac{dy}{dx}$ in terms of $x$ and $y$:</p>
<p>$$x^2\left(3y^2\frac{dy}{dx}\right)+2xy^3=7-3\frac{dy}{dx}$$
$$\Rightarrow 3x^2y^2\frac{dy}{dx}+3\frac{dy}{dx}=7-2xy^3$$
$$\Rightarrow \frac{dy}{dx}(3x^2y^2+3)=7-2xy^3$$
$$\Rightarrow \frac{dy}{dx}=\frac{7-2xy^3}{3x^2y^2+3}$$</p>
<p>PS: The reason you can't have the curve going vertical is that a) graphs of functions don't look like that: see <a href="http://en.wikipedia.org/wiki/Vertical_line_test" rel="nofollow">Vertical Line Test</a> and b) the slope will tend to infinity and be undefined there like $\tan 90^\circ$.</p>
<p>PPS: Another way to think of these curves is to solve $(*)$ equal to zero:
$$x^2y^3-7x+3y=0.$$
Now this has the form
$$F(x,y)=0.$$
However $z=F(x,y)$ can be viewed as a function of two variables. The graph of a function of two variables is a <em>surface</em>. The points where $F(x,y)=0$ are just the points "at sea-level" and form a 2-dimensional curve.</p>
<p>PPPS:The reason that $\displaystyle \frac{dy}{dx}$ depends on $x$ <em>and</em> $y$ is that for a single value of $x$ there might be more than one point on the curve --- with not-necessarily equal slopes. Think of the unit circle and $x=0.5$.</p>
|
2,262,846 | <p>This question is somewhat naive. Please see the following proof before reading the question itself. </p>
<p>Let $L$ be a vector space over a field $\Bbb{F}.$ Assume there is an associative multiplication on $L$ so that $L$ is an associative ring, and denote this multiplication simply as $xy$ for $x,y\in L$. Furthermore, suppose this multiplication to be compatible with scaling, that is $\alpha\in\Bbb{F},$ then $\alpha x = x\alpha\in L$ for all $x\in L$ ($L$ should be a left and right $\Bbb{F}$ vector space). Define $[x,y]=xy-yx$ for all $x,y\in L.$ Then $[-,-]:L\times L \longrightarrow L$ is a Lie bracket. </p>
<p><em>Proof:</em>
It is clear that $[-,-]$ is anti-commutative, and bilinear. We only need to show that it satisfies the Jacobi identity. That is </p>
<p>$$[[x,y],z]+[[y,z],x]+[[z,x],y]=0.$$</p>
<p>One readily sees that this is equivalent to </p>
<p>$$(xy)z-(yx)z-z(xy)+z(yx)+(yz)x-(zy)x-x(yz)+x(zy)+(zx)y-(xz)y-y(zx)+y(xz)=0.$$</p>
<p>The previous equation is certainly true when the multiplication defined earlier is associative. Therefore $L$ is a Lie algebra. $\Box$</p>
<p>So here's the question:</p>
<blockquote>
<p>1) Is it true that an algebra defined in this way is a Lie algebra? </p>
</blockquote>
<p>An example of such an associative algebra is the $M_{n\times n}(\Bbb{F}).$</p>
<blockquote>
<p>2) If so, is there any name which separates these from the classical Lie algebras?</p>
</blockquote>
<p>There is rich theory surrounding classical Lie algebras, especially their connection to Lie groups. This motivates the next question.</p>
<blockquote>
<p>3) Is there any significance in viewing an associative algebra in this way?</p>
</blockquote>
| Mark Viola | 218,419 | <p><strong>HINT:</strong></p>
<p>Using the AM-GM inequality, we have $x^6+y^6\ge 2\sqrt{x^6y^6}=2|x|^3|y|^3$, which provides an upper bound of $|y|$.</p>
|
2,300,386 | <blockquote>
<p>Consider the initial value problem:</p>
<p>\begin{cases}
u_{tt} &= c^2 u_{xx} \ \ & \text{for} \ -\infty < x < \infty, \ 0 \leq t < \infty\\
u(x,0) &= \phi(x) \ \ & \text{for} \ -\infty < x < \infty\\
u_t(x,0) &= \psi(x) \ \ & \text{for} \ -\infty < x < \infty\\
\end{cases}
where $\phi$ has compact support (that is, outside some bounded interval, $\phi$ is zero), and $\psi(x) = 0$. Define the kinetic energy $KE = \frac{1}{2}\int_{-\infty}^{\infty} \rho u_t^{2} dx$ and the potential energy $PE = \frac{1}{2} \int_{-\infty}^{\infty} T u_x^{2} dx$. Show not that, for large enough times $t$, each of $KE$ and $PE$ is itself constant, and they are equal to each other. Can you prove the same thing if the inital velocity $\psi$ merely has compact support, instead of being zero?</p>
</blockquote>
<p>I am not sure how to start this, how am I to show that $KE$ and $PE$ are constant? I usually post some work but I am not sure how to start this. Any help would be useful.</p>
| Wolfy | 217,910 | <p>We have
\begin{equation}
\begin{cases}
u_{tt} = c^2 u_{xx} \ \ & \text{for} \ -\infty < x < \infty, \ 0 \leq t < \infty\\
u(x,0) = \phi(x) \ \ & \text{for} \ -\infty < x < \infty\\
u_t(x,0) = \psi(x) \ \ & \text{for} \ -\infty < x < \infty
\end{cases}
\end{equation}
Therefore
\begin{equation}
\phi(x) = 0 \ \ \ \text{and} \ \ \ \psi(x) = 0
\end{equation}
and
\begin{equation}
KE = \frac{1}{2}\int_{-\infty}^{\infty} \rho u_t^{2} dx
\end{equation}
\begin{equation}
PE = \frac{1}{2} \int_{-\infty}^{\infty} T u_x^{2} dx
\end{equation}
From equation $(1)$ $u_{tt} = c^2 u_{xx}$ then
\begin{equation}
u_{tt} - c^2 u_{xx} = 0
\end{equation}
Here, $A = -c^2, B = 0$, and $C = 1$. Thus
\begin{equation}
B^2 - 4AC = 4C^2 > 0
\end{equation}
So the equation is hyperbolic. The equation characteristics are
\begin{equation}
\frac{dt}{dx} = \pm\frac{1}{C}
\end{equation}
or $$\xi = x - ct = \ \text{const} \ \ \ \text{and} \ \ \ \eta = x + ct = \ \text{const}$$
Now, in terms of new coordinates $\xi$ and $\eta$ then $$u_{xx} = u_{\xi \xi} + 2u_{\xi \eta} + u_{\eta \eta} \ \ \ u_{tt} = c^2(u_{\xi\xi} - 2 u_{\xi\eta + u_{\eta\eta}}$$
Thus, equation $(1)$ becomes
\begin{equation}
-4c^2 u_{\xi\eta} = 0
\end{equation}
Therefore $c\neq 0$,
\begin{equation}
u_{\xi \eta} = 0
\end{equation}
So, integrating $u_{\xi\eta} = 0$ twice we then get the solution
\begin{equation}
u(\xi \eta) = \phi(\xi) + \psi(\eta)
\end{equation}
where $\phi$ and $\psi$ are arbitrary functions. So, in terms of $x$ and $t$
\begin{equation}
u(x,t) = \phi(x - ct) + \psi(x + ct)
\end{equation}
Now from
\begin{equation}
u(x,0) = \phi(x) \ \ \ \text{and} \ \ \ u_t(x,0) = \psi(x)
\end{equation}
So,
\begin{equation}
\phi(x) + \psi(x) = \phi(x)
\end{equation}
and
\begin{equation}
-c \phi^{\prime}(x) + c\psi^{\prime}(x) = \psi(x)
\end{equation}
Integrating equation $(14)$
\begin{equation}
-c \phi(x) + c\phi(x) = \int_{x_0}^{x}\psi(\tau)d\tau
\end{equation}
where $x_0$ is an arbitrary constant. Now equation $(13)$ and $(15)$ becomes
$$\phi(x) = \frac{1}{2}\phi(x) - \frac{1}{2c}\int_{x_0}^{x}\psi(\tau)d\tau$$
and
$$\psi(x) = \frac{1}{2}\psi(x) + \frac{1}{2c}\int_{x_0}^{x}\psi(\tau)d\tau$$
Thus, equation $(11)$ gives the solution (D'Alembert solution) of the Cauchy problem as
$$u(x,t) = \frac{1}{2}\left[ \phi(x - ct) + \phi(x + ct) \right] + \frac{1}{2c}\int_{x-ct}^{x+ct}\psi(\tau)d\tau$$
So we have
$$u_t(x,t) = \frac{1}{2}\frac{\partial}{\partial t}\left[ \phi(x - ct) + \phi(x + ct) \right] + \frac{1}{2c}\frac{\partial}{\partial t}\left[ \int_{x-ct}^{x+ct} \psi(\tau)d\tau \right]$$
and
$$u_{x}(x,t) = \frac{1}{2}\frac{\partial}{\partial x}\left[ \phi(x-ct) + \phi(x+ct) \right] + \frac{1}{2c}\frac{\partial}{\partial x}\left[ \int_{x - ct}^{x+ct}\psi(\tau)d\tau \right]$$
therefore
$$KE = \frac{1}{2}\int_{-\infty}^{\infty}\rho u_{t}^{2}dx$$
So,
$$KE = \frac{1}{2}\int_{-\infty}^{\infty}\rho\left[ \frac{1}{2}\frac{\partial}{\partial t} \left[ \phi(x-ct) + \phi(x + ct) \right] \right] + \frac{1}{2c}\frac{\partial}{\partial t}\left[\int_{x - ct}^{x+ct}\psi(\tau)d\tau \right]^2dx$$
and $$PE = \frac{1}{2} \int_{-\infty}^{\infty} T u_x^{2} dx$$
So,
$$PE = \frac{1}{2}\int_{-\infty}^{\infty}T\left[ \frac{1}{2}\frac{\partial}{\partial x}\left[ \phi(x - ct) + \phi(x + ct) \right] + \frac{1}{2c}\frac{\partial}{\partial x}\left[ \int_{x-ct}^{x+ct} \psi(\tau)d\tau \right] \right]^2dx$$
Thus from above, we can see that for $t > 0$, $KE$ and $PE$ is constant thus $KE = PE$ at some stage.</p>
|
2,037,652 | <p>I'm taking a course on smooth manifolds and the following exercise was given to me:</p>
<blockquote>
<p>If $P\in\mathbb{R}[X_1, ..., X_{n+1}]$ is a homogeneous polynomial of degree $d$ such that $\left(\frac{\partial P}{\partial X_1}, ..., \frac{\partial P}{\partial X_{n+1}}\right)$ is never zero, prove that $Z(P):=\{[x_1:\dots:x_{n+1}]\in \mathbb{RP}^n\mid P(x_1,\dots, x_{n+1})=0\}$ is a regular submanifold of $\mathbb{RP}^n$.</p>
</blockquote>
<p>Here was my attempt: by interpreting $P$ as a smooth function $P:\mathbb{RP}^n\to \mathbb{R}$, we have that $\text{rank}(dP)\equiv 1$ in the whole domain, so by the constant rank theorem, $Z(P)=P^{-1}(0)$ is a regular submanifold of codimension $1$ (if this is wrong, please let me know).</p>
<p>First of all, the fact that $d$ was not important was really surprising to me. Also, it seemed to me that the condition $\left(\frac{\partial P}{\partial X_1}, ..., \frac{\partial P}{\partial X_{n+1}}\right)\neq 0$ was pretty strong, and made me wonder if it was really necessary. For example, taking $n=2$ and $P(X, Y, Z)=X^2-XZ+YZ$, we have $\left(\frac{\partial P}{\partial X}, \frac{\partial P}{\partial Y},\frac{\partial P}{\partial Z}\right)(0,0,0)= 0$, but $Z(P)$ is still a regular submanifold of codimension $1$ (right?).</p>
<p>So my question is if there's a more general way to analyse whether or not $Z(P)$ is a regular submanifold, i.e., without imposing $\left(\frac{\partial P}{\partial X_1}, ..., \frac{\partial P}{\partial X_{n+1}}\right)\neq 0$? And what would be the relationship between $d$ and the codimension of $Z(P)$? Thanks!</p>
| ziggurism | 16,490 | <p>Dimension is another word for the number of degrees of freedom in a space. If you have <span class="math-container">$n$</span> dimensions and you impose one equation, you typically reduce the dimension by 1 (so we say the resulting space has codimension 1). For only the count of degrees of freedom, it does not matter whether the relation imposed was polynomial of degree <span class="math-container">$d$</span>, analytic, holomorphic, or smooth. The rule of thumb is, impose 1 relation, cut the dimension by 1. Impose 2 relations, cut dimension by 2, etc (though according to the particulars there may be deviations from this rule). The degree of the polynomial determines the shape of the submanifold/variety, like its genus, but not its dimension.</p>
<p>As for regularity, the criterion you mention is valid for submanifolds which are the vanishing locus of a single function. The general criterion is to look at the dimension of the tangent space. For a variety, this is a simple generalization, where you look at the differentials of all the functions your variety is the vanishing locus of.</p>
|
206,636 | <p>What's the fastest way to find the local maxima of a 2D list? <em>E.g.</em></p>
<pre><code>nx = ny = 100;
dat = Table[Sin[2. \[Pi] x/nx] (0.1 + Cos[2. \[Pi] y/ny]), {y, 0, ny}, {x, 0, nx}];
ListPlot3D[dat]
</code></pre>
<p><img src="https://i.stack.imgur.com/MyGBl.png" alt="Mathematica graphics"></p>
<p>This (updated) function has three local maxima of different heights:</p>
<pre><code>Position[MaxDetect[dat], 1]
(* {{1, 26}, {51, 76}, {101, 26}} *)
dat[[1, 26]]
dat[[51, 76]]
dat[[101, 26]]
(* 1.1, 0.9, 1.1 *)
</code></pre>
<p>My original attempt was super-slow:</p>
<pre><code>RepeatedTiming[MaxDetect[Chop@dat];][[1]]
(* 1.55 *)
</code></pre>
<p>Turns out using <code>Chop</code> is a very bad idea. Without it is 100X faster:</p>
<pre><code>RepeatedTiming[MaxDetect[dat];][[1]]
(* 0.016 *)
</code></pre>
<p>Along the way I discovered another version that is 2X faster yet:</p>
<pre><code>RepeatedTiming[MaxDetect[Image[dat]];][[1]]
(* 0.0067 *)
</code></pre>
<p><strong>Questions</strong></p>
<ol>
<li>Why is <code>MaxDetect</code> so much slower when <code>Chop</code> is applied? (I should add that my actual non-example problem has lots of small values that needed to <code>Chop</code>-ping)</li>
<li>Why does converting to an <code>Image</code> speed it up further?</li>
<li>Is there any faster way available?</li>
</ol>
| Ulrich Neumann | 53,677 | <p>A little bit faster than @kglr 's tricky answer is</p>
<pre><code>Position[dat, Max[dat]]
</code></pre>
|
2,245,010 | <p>Does there exist a topological group which can be covered by (nontrivial and proper) open subgroups of itself? If so, what are groups of these types called and is this a nice property for a topological group to have? Or is this just impossible?</p>
| Najib Idrissi | 10,014 | <p>It's certainly possible, consider $G = \mathbb{Z}^2$ with the discrete topology. If $x \in G$ is any element, consider $H$ to be the subgroup generated by $x$ (unless $x = 0$ in which case consider $H = \mathbb{Z} \times \{0\}$ for example). It's proper because $G$ has rank $2 > 1$, nontrivial, and open because $G$ is discrete.</p>
<p>If you want an example that's not discrete, consider $G = \mathbb{R} \times \mathbb{Z}^2$ and apply a similar reasoning. I don't know if there's a connected example and I don't know if this property has a name.</p>
|
38,480 | <p>(Before reading, I apologize for my poor English ability.)</p>
<p>I have enjoyed calculating some symbolic integrals as a hobby, and this has been one of the main source of my interest towards the vast world of mathematics. For instance, the integral below
$$ \int_0^{\frac{\pi}{2}} \arctan (1 - \sin^2 x \; \cos^2 x) \,\mathrm dx = \pi \left( \frac{\pi}{4} - \arctan \sqrt{\frac{\sqrt{2} - 1}{2}} \right). $$
is what I succeeded in calculating today.</p>
<p>But recently, as I learn advanced fields, it seems to me that symbolic integration is of no use for most fields in mathematics. For example, in analysis where the integration first stems from, now people seem to be interested only in performing numerical integration. One integrates in order to find an evolution of a compact hypersurface governed by mean curvature flow, to calculate a probabilistic outcome described by Ito integral, or something like that. Then numerical calculation will be quite adequate for those problems. But it seems that few people are interested in finding an exact value for a symbolic integral.</p>
<p>So this is my question: Is it true that problems related to symbolic integration have lost their attraction nowadays? Is there no such field that seriously deals with symbolic calculation (including integration, summation) anymore?</p>
| J. M. ain't a mathematician | 498 | <p>I think it would be appropriate at this point to quote <a href="http://books.google.com/books?id=YHXU4W3Ez2MC&pg=PA247">Forman Acton</a>:</p>
<blockquote>
<p>...at a more difficult but less pernicious level we have the
inefficiencies engendered by exact analytic integrations where a sensible
approximation would give a simpler and more effective algorithm. Thus</p>
<p>$$\begin{align*}\int_0^{0.3}\sin^8\theta\,\mathrm d\theta&=\left[\left(-\frac18\cos\,\theta\right)\left(\sin^4\theta+\frac76\sin^2\theta+\frac{35}{24}\right)\sin^3\theta+\frac{105}{384}\left(\theta-\sin\,2\theta\right)\right]_0^{0.3}\\ &=(-0.119417)(0.007627+0.101887+1.458333)(0.0258085)+0.004341\\ &=-0.0048320+0.0048341=0.0000021\end{align*}$$</p>
<p>manages to compute a very small result as the difference between two much
larger numbers. The crudest approximation for $\sin\,\theta$ will give</p>
<p>$$\int_0^{0.3}\theta^8\,\mathrm d\theta=\frac19\left[\theta^9\right]_0^{0.3}=0.00000219$$</p>
<p>with considerably more potential accuracy and much less trouble. If
several more figures are needed, a second term of the series may be kept.</p>
<p>In a similar vein, if not too many figures are required, the quadrature</p>
<p>$$\int_{0.45}^{0.55}\frac{\mathrm dx}{1+x^2}=\left[\tan^{-1}x\right]_{0.45}^{0.55}=0.502843-0.422854=0.079989\approx 0.0800$$</p>
<p>causes the computer to spend a lot of time evaluating two arctangents to get
a result that would have been more expediently calculated as the product
of the range of integration ($0.1$) by the value of the integrand at the
midpoint ($0.8$). The expenditure of times for the two calculations is
roughly ten to one. For more accurate quadrature, Simpson's rule would still
be more efficient than the arctangent evaluations, nor would it lose a
significant figure by subtraction. The student that worships at the altars of
Classical Mathematics should really be warned that his rites frequently have
quite oblique connections with the external world.</p>
</blockquote>
<p>It may very well be that choosing the closed form approach would still end up with you having to (implicitly) perform a quadrature anyway; for instance, one efficient method for numerically evaluating the zeroth-order Bessel function of the first kind $J_0(x)$ uses the trapezoidal rule!</p>
<p>On the other hand, there are also situations where the closed form might be better for computational purposes. The usual examples are the complete elliptic integrals $K(m)$ and $E(m)$; both are more efficiently computed via the arithmetic-geometric mean than by using a numerical quadrature method.</p>
<p>But, as I said in the comments, for manipulational work, possessing a closed form for your integral is powerful stuff; there is a <a href="http://people.math.sfu.ca/~cbm/aands/">whole</a> <a href="http://dlmf.nist.gov/">body</a> <a href="http://functions.wolfram.com/">of results</a> that are now conveniently at your disposal once you have a closed form at hand. Think of it as "standing on the shoulders of giants".</p>
<p>In short, again, "it depends on the situation and the terrain".</p>
|
144,709 | <p>Recently, I was informed that we can verify the famous formula about <strong>$\mathrm{lcm}(a,b)$</strong> and <strong>$\gcd(a,b)$</strong> which is $$\mathrm{lcm}(a,b)=\frac{|ab|}{\gcd(a,b)} $$ via group theory. </p>
<p>The least common multiple of two integers $a$ and $b$, usually denoted by <strong>$\mathrm{lcm}(a,b)$</strong>, is the smallest positive integer that is a multiple of both $a$ and $b$ and the greatest common divisor <strong>($\gcd$)</strong>, of two or more non-zero integers, is the largest positive integer that divides the numbers without a remainder.</p>
<p>I do not know if we can prove this equation by using the groups or not, but if we can I am eager to know the way someone face it. Thanks.</p>
| Arturo Magidin | 742 | <p><strong>Lemma.</strong> Let $G$ be a group, written multiplicatively, and let $H$ and $K$ be two subgroups. If $HK = \{hk\mid h\in H, k\in K\}$, then
$$|HK||H\cap K| = |H||K|$$in the sense of cardinalities.</p>
<p><em>Proof.</em> Consider the map $H\times K\to HK$ given by $(h,k)\mapsto hk$. I claim that the map is exactly $|H\cap K|$ to $1$. Indeed, if $hk=h'k'$, then $h'^{-1}h = k'k^{-1}\in H\cap K$, so there exists $u\in H\cap K$, namely $u=h'^{-1}h$ such that $h=h'u$ and $k=u^{-1}k'$. Thus, $(h,k) = (h'u,u^{-1}k')$ maps to the same thing as $(h',k')$. Conversely, given $v\in H\cap K$, we have that $(h'v,v^{-1}k')\in H\times K$ maps to the same thing as $(h',k')$. </p>
<p>Thus, each element of $HK$ corresponds to precisely $|H\cap K|$ elements of $H\times K$. Thus, $|HK||H\cap K| = |H\times K| = |H||K|$, as claimed. $\Box$</p>
<p>Let $a$ and $b$ be integers, and consider $\mathbb{Z}/\langle ab\rangle$. This is a group with $|ab|$ elements. This group contains subgroups generated by $\gcd(a,b)$, by $a$, by $b$, and by $\mathrm{lcm}(a,b)$. $\gcd(a,b)$ generates the largest subgroup containing both $a$ and $b$; i.e., $\langle \gcd(a,b)\rangle = \langle a\rangle + \langle b\rangle$; while $\mathrm{lcm}(a,b)$ generates the smallest subgroup contained in both $\langle a\rangle$ and $\langle b\rangle$, i.e., $\langle \mathrm{lcm}(a,b)\rangle = \langle a\rangle\cap\langle b\rangle$. By the Lemma (with addition, since we are working in an additive group), we have:
$$|\langle a\rangle+\langle b\rangle| |\langle a\rangle\cap\langle b\rangle| = |\langle a\rangle||\langle b\rangle|$$
Now, the subgroup generated by $\gcd(a,b)$ has $\frac{|ab|}{\gcd(a,b)}$ elements; the subgroup generated by $\mathrm{lcm}(a,b)$ has $\frac{|ab|}{\mathrm{lcm}(a,b)}$ elements; that generated by $a$ has $\frac{|ab|}{|a|}$ elements, that generated by $b$ has $\frac{|ab|}{|b|}$ elements. Plugging all of that in it becomes
$$\gcd(a,b)\mathrm{lcm}(a,b) = |a||b|$$
which yields the desired equality.
$\Box$</p>
|
2,232,677 | <p>A parallellogram $ABCD$ is given, where $E$ is the midpoint of $BC$ and $F$ is a point on $AD$ so that $|FD| = 3|AF|$. $G$ is the point where $BF$ and $AE$ intersects.</p>
<p>Express the vector $AG$ in terms of vectors $AB$ and $AD$.</p>
<p>My solution to the problem is the following: Impose an <em>affine transformation</em> so that $ABCD$ becomes a unit square with point $A$ residing on origo. Then $BF$ is on the line $y = \frac{1}{4} - \frac{1}{4}x$ and $AE$ on $y = \frac{1}{2}x$. The lines' intersection gives me the required coefficients to express $AG$ in terms of $AB$ and $AD$. </p>
<p>My question is how would you solve this problem without the affine transformation? The reason I'm asking is because this problem was given at an early stage of the course before affine transformations was introduced. So I want to know if there is a simpler or "more intuitive" way to solve it which I haven't learned.</p>
| Soroush khoubyarian | 339,053 | <p>Firstly, the triangles AFG and BEG are similar. since AF is a quarter of AD, and BE is one half of BC(which is equal to AD) we can say that $\dfrac{AG}{EG} = \dfrac{1}{2}$ so it is obvious that $\dfrac{AG}{AE} = \dfrac{1}{3}$</p>
<p>Know we know that AE is just the sum of the vectors: $AB + BE = AB + \dfrac{AD}{2}$</p>
<p>On the other hand we know that AG is just one third of AE. So:</p>
<p>$AG = \dfrac{AB + \dfrac{AD}{2}}{3}$</p>
|
77,980 | <p>How do I check whether a give function is 'algebraic' or not? I have a function $m(z) = 2\pi i z^n$ where $z \in \mathbb{C} \backslash \mathbb{R}$ and $n \in \mathbb{Z}$. I can write this as $w - 2\pi i z^n = 0$ where $w = m(z)$ and it is a polynomial. I think this is an algebraic function? Is my understanding correct?</p>
<p>What is the general approach to find whether a given function is algebraic or not?</p>
| Gerry Myerson | 8,269 | <p>If $n$ is a non-negative integer then $m$ is a polynomial, a fortiori, algebraic. As you say, it is a zero of the equation $w-2\pi iz^n=0$, an equation with polynomial coefficients. If $n$ is a negative integer then it is a zero of $z^{-n}w-2\pi i=0$ which again is an equation with polynomial coefficients, so in this case, too, $m$ is algebraic. </p>
<p>I don't think there is a general approach to telling whether a function is algebraic, any more than there is a general approach for telling whether a number is algebraic. Indeed, nobody knows whether $\gamma$ (Euler's constant) is rational, so I suppose nobody knows whether the function $z^{\gamma}$ is algebraic. </p>
|
138,173 | <p>$f(x, y) = 0$ and $g(x, y) = 0$,
both $f$ and $g$ are cubic polynomial equation (at most 10 coefficients for each).</p>
<p>Is there any fixed method to solve this degenerate equation system?
thanks.</p>
| Igor Rivin | 11,142 | <p>The magic word is "resultant" (which see, on, e.g., Wikipedia). The resultant of your equations will be a single one variable equation of degree nine. There is not much hope of it being solvable in radicals, but it will be easily solved numerically. On the other hand, if you have software which will solve it numerically, it will probably solve the original system (in Mathematica I believe it is "NSolve")</p>
|
2,581,361 | <p>For even $n \in \mathbb{N}$, prove $\binom{n}{i}< \binom{n}{j} $ if $0\leq i<j\leq \frac{n}{2}$.</p>
<p>So far all I have been able to come up with are a bunch of seemingly useless inequalities. </p>
<p>Any hints would be greatly appreciated.</p>
| Robert Israel | 8,508 | <p>HINT:
$${n \choose j+1} = \frac{n-j}{j+1} {n \choose j} $$</p>
|
1,250,322 | <p>This is the recurrence relation I am trying to solve:
\begin{align}
T(n) & = 2 \cdot T \left( \frac{n}{4} \right) + 16, \\
T(1) & = c.
\end{align}
I broke this down (i.e., solved this recurrence relation) to $ \sqrt{2} * c * n + 32 * \sqrt{2} * n - 32 $, which runs in tight bounds $ \Theta(n) $. Can you guys confirm this? I’ll show more of my work if this answer is incorrect.</p>
| Ian | 83,396 | <p>Following Wikipedia's notation for the master theorem, you have $a=2,b=4,f(n)=16$. So $\log_b(a)=\log_4(2)=1/2$, so $f(n)=O(n^{\log_b(a)})$. So we are in case 1, and $T(n)=\Theta(n^{1/2})$. So somewhere you made a mistake.</p>
|
4,017,790 | <p>We consider the equation <span class="math-container">$y^2-4y+4+x^2y+\arctan(x)=0$</span>, the exercice of my book say that the normal vector <span class="math-container">$n$</span> of the cartesian curve implicitally defined by the previous equation is <span class="math-container">$n=l(1,0)$</span> for <span class="math-container">$l \neq 0$</span>. I don't understand the reason of this affermation, because I think that if <span class="math-container">$y(x)$</span> is the implicit function obtained form the equation, the parametric equation of the curve is <span class="math-container">$(t, y(t))$</span>, so the first coordinate of the normal vector is zero.
I would therefore expect that the normal vector is <span class="math-container">$n=l(0,1)$</span>. Is it right?</p>
| Leoncino | 571,776 | <p>The only possible way for a person to discover an empty box and a full one, is if eight slices have been taken (not more). The second condition is, that every slice was taken from the same box. This adds up to your result <span class="math-container">$(\frac{1}{2})^8$</span> since every person must take a slice from the same box.</p>
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.