| qid | question | author | author_id | answer |
|---|---|---|---|---|
1,705,481 | <blockquote>
<p>$$\sum_{n=1}^{\infty} \frac{\ln(n)}{n^2}$$</p>
</blockquote>
<p>I have tried the comparison test: with $\frac{1}{n}$ I got $0$, and with $\frac{1}{n^2}$ I got $\infty$.</p>
<p>What should I try?</p>
| Marco Cantarini | 171,547 | <p>Fix $0 < \alpha < 1$. If $n$ is sufficiently large, say $n\geq N$, then $$\log\left(n\right)\leq n^{\alpha}\tag{1}$$ hence $$\sum_{n\geq1}\frac{\log\left(n\right)}{n^{2}}\leq C + \sum_{n\geq N}\frac{1}{n^{2-\alpha}}<\infty.$$ Maybe it's interesting to note that, for $s>1$, the Riemann zeta function is defined as $$\zeta\left(s\right)=\sum_{n\geq1}\frac{1}{n^{s}}$$ so in our case we have $$-\frac{d\zeta}{ds}\left(2\right)=\sum_{n\geq1}\frac{\log\left(n\right)}{n^{2}}.$$ To prove $(1)$, note that $$\log\left(n\right)\leq n^{\alpha}\Leftrightarrow\frac{\log\left(\log\left(n\right)\right)}{\log\left(n\right)}\leq\alpha$$ and since $\frac{\log\left(\log\left(n\right)\right)}{\log\left(n\right)}\rightarrow0$, the inequality holds for all sufficiently large $n$.</p>
|
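The comparison argument above is easy to sanity-check numerically. A minimal sketch follows; the reference value $-\zeta'(2)\approx 0.93755$ is assumed from standard tables, not derived here, and the cutoff $N$ is an arbitrary choice.

```python
import math

# Partial sums of sum_{n>=1} log(n)/n^2. The bound log(n) <= n^alpha gives
# convergence by comparison with sum 1/n^(2-alpha); the limit is -zeta'(2).
def partial_sum(N):
    return sum(math.log(n) / n**2 for n in range(1, N + 1))

s = partial_sum(100_000)
# The tail beyond N is below the integral bound (log N + 1)/N ~ 1.3e-4,
# so s is already close to the limit 0.93755...
print(s)  # ~0.93742
```

The slow (roughly $\log N / N$) tail decay is why the comparison with $1/n^{2-\alpha}$, rather than direct summation, is the right tool for proving convergence.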
45,163 | <p>I would like to get recommendations for a text on "advanced" vector analysis. By "advanced", I mean that the discussion should take place in the context of Riemannian manifolds and should provide coordinate-free definitions of divergence, curl, etc. I would like something that has rigorous theory but also plenty of concrete examples and a mixture of theoretic/concrete exercises.</p>
<p>The text that I have seen that comes closest to what I'm looking for is Janich's <a href="http://rads.stackoverflow.com/amzn/click/1441931449" rel="noreferrer">Vector Analysis</a>. The Hatcheresque style of writing in this particular text, though, isn't really suitable for me.</p>
<p>Looking forward to your recommendations. Thanks.</p>
| amWhy | 9,003 | <p>See <a href="http://www.archive.org/details/117714283" rel="nofollow">Willard Gibbs' text on archive.org</a> for an old text on vector analysis, also referenced in Wikipedia, available free and downloadable. At the very least, it should be of historical importance?</p>
<p>Most of my search returned Janich's text as a reference, though there were more "introductory" level texts to choose from. Perhaps a search on AMS website will return some more timely texts at the caliber you're looking for.</p>
<p>You might want to check out this text on Vector Calculus <a href="http://rads.stackoverflow.com/amzn/click/3540761802" rel="nofollow">by Paul Matthews</a>; its TOC seemed more comparable to what you are looking for than an (out-of-print) text I found entitled "Advanced Vector Analysis". At any rate, you can "look inside" to peruse the table of contents, etc., of Matthews's text, which is rated slightly higher than Janich's.</p>
<p><strong>Added</strong>: </p>
<p>In light of the text you mention in your answer to your question, you might find this pdf handout for a differential geometry class at Stanford interesting: <a href="http://math.stanford.edu/~conrad/diffgeomPage/handouts/stokesthm.pdf" rel="nofollow">Stokes' Theorem on Riemannian manifolds (or Div, Grad, Curl, and all that)</a>. Or, for that handout (and a whole bunch! of such handouts), see <a href="http://math.stanford.edu/~conrad/diffgeomPage/handouts.html" rel="nofollow">this page</a>. (My apologies if this material is too "basic" for your needs!)</p>
|
3,980,845 | <p>Given <span class="math-container">$X,Y$</span> i.i.d. where <span class="math-container">$\mathbb{P}(X>x)=e^{-x}$</span> for <span class="math-container">$x\geq0$</span> and <span class="math-container">$\mathbb{P}(X>x)=1$</span> for all <span class="math-container">$x<0$</span>,
and <span class="math-container">$V=\min(X,Y)$</span>,<br />
calculate how <span class="math-container">$\mathbb{E}[V|X]$</span> is distributed.</p>
<p>I've found that <span class="math-container">$F_{V|X=x}(v)=\left\{\begin{array}{rcl} 0&v\leq0\\1-e^{-v}&0\leq v\leq x\\1&else\end{array}\right.$</span><br />
I've tried using the formula <span class="math-container">$\mathbb{E}[V|X]=\int_{-\infty}^{\infty}vf_{V|X=x}(v)\,dv$</span> and got <span class="math-container">$\mathbb{E}[V|X]=-xe^{-x}-e^{-x}+1$</span>, but the answer I have to compare with says <span class="math-container">$\mathbb{E}[V|X]\sim U(0,1)$</span>.<br />
I'm not sure how to get to this distribution. Any help?</p>
| D F | 501,035 | <p><span class="math-container">$$E[V|X] = E[X|X, Y\ge X]P(Y\ge X|X) + E[Y|X, Y<X]P(Y<X|X) = X\int_{X}^{\infty}e^{-y}dy + \int_{0}^Xye^{-y}dy = Xe^{-X} + 1 - e^{-X}(1+X) = 1 - e^{-X}$$</span>
which means that <span class="math-container">$E[V|X]\sim U(0, 1)$</span>, since <span class="math-container">$F_{X}(X)$</span> is always distributed <span class="math-container">$U(0, 1)$</span> when <span class="math-container">$F_X(x)$</span> is a continuous CDF of the r.v. <span class="math-container">$X$</span>, which is obviously the case here.</p>
|
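The closed form $E[V\mid X=x] = 1-e^{-x}$ can be checked numerically: given $X=x$, $E[V\mid X=x]=\int_0^\infty \min(x,y)\,e^{-y}\,dy$. A quick midpoint-rule sketch (the grid size and truncation point are arbitrary choices, not part of the original answer):

```python
import math

# Numerical check of E[V | X = x] = 1 - e^{-x}, where V = min(X, Y) and
# Y ~ Exp(1) is independent of X.
def cond_expectation(x, upper=60.0, steps=200_000):
    h = upper / steps
    total = 0.0
    for i in range(steps):
        y = (i + 0.5) * h  # midpoint rule
        total += min(x, y) * math.exp(-y) * h
    return total

for x in (0.5, 1.0, 3.0):
    assert abs(cond_expectation(x) - (1 - math.exp(-x))) < 1e-4
```

Since $1-e^{-X}=F_X(X)$ with $F_X$ a continuous CDF, the probability-integral transform then gives the $U(0,1)$ distribution claimed above.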
246,606 | <p>I have the matrix:</p>
<p>$$
A = \begin{bmatrix}
1 & 2 & 3 & 4 \\
2 & 3 & 3 & 3 \\
0 & 1 & 2 & 3 \\
0 & 0 & 1 & 2
\end{bmatrix}
$$</p>
<p>And I want to calculate $\det{A}$, so I have written:</p>
<p>$$
\begin{array}{|cccc|ccc}
1 & 2 & 3 & 4 & 1 & 2 & 3 \\
2 & 3 & 3 & 3 & 2 & 3 & 3 \\
0 & 1 & 2 & 3 & 0 & 1 & 2 \\
0 & 0 & 1 & 2 & 0 & 0 & 1
\end{array}
$$</p>
<p>From this I get that:</p>
<p>$$
\det{A} = (1 \cdot 3 \cdot 2 \cdot 2 + 2 \cdot 3 \cdot 3 \cdot 0 + 3 \cdot 3 \cdot 0 \cdot 0 + 4 \cdot 2 \cdot 1 \cdot 1) - (3 \cdot 3 \cdot 0 \cdot 2 + 2 \cdot 2 \cdot 3 \cdot 1 + 1 \cdot 3 \cdot 2 \cdot 0 + 4 \cdot 3 \cdot 1 \cdot 0) = (12 + 0 + 0 + 8) - (0 + 12 + 0 + 0) = 8
$$</p>
<p>But WolframAlpha says that <a href="http://www.wolframalpha.com/input/?i=det+%7B%7B1%2C2%2C3%2C4%7D%2C%7B2%2C3%2C3%2C3%7D%2C%7B0%2C1%2C2%2C3%7D%2C%7B0%2C0%2C1%2C2%7D%7D&dataset=" rel="nofollow">it equals 0</a>. So my question is: where am I wrong?</p>
| user1551 | 1,551 | <p>The others have pointed out what's wrong with your solution. Let's calculate the determinant now:
\begin{align*}
\det \begin{bmatrix}
1 & 2 & 3 & 4 \\
2 & 3 & 3 & 3 \\
0 & 1 & 2 & 3 \\
0 & 0 & 1 & 2
\end{bmatrix} &\stackrel{r1 - \frac12(r2+r3+r4)}{=}
\det \begin{bmatrix}
0 & 0 & 0 & 0 \\
2 & 3 & 3 & 3 \\
0 & 1 & 2 & 3 \\
0 & 0 & 1 & 2
\end{bmatrix}
= 0.
\end{align*}</p>
|
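The Sarrus-style column-copying trick in the question only works for $3\times 3$ matrices; a $4\times 4$ determinant has $4! = 24$ signed permutation terms, not $8$. The row-reduction result above can be confirmed exactly with a small recursive Laplace expansion (a sketch, not an optimized routine):

```python
def det(M):
    # Laplace expansion along the first row; exact for integer matrices.
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

A = [[1, 2, 3, 4],
     [2, 3, 3, 3],
     [0, 1, 2, 3],
     [0, 0, 1, 2]]
print(det(A))  # 0
```

The result is $0$ because, as the answer shows, row 1 equals half the sum of rows 2, 3, and 4, so the rows are linearly dependent.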
553,431 | <p>In the <a href="http://demonstrations.wolfram.com/NoFourInPlaneProblem/" rel="nofollow noreferrer">No-Four-In-Plane problem</a>, points are selected so that no four of them are coplanar.</p>
<p>Eight points can be picked from a <span class="math-container">$3\times3\times3$</span> space in a unique way.</p>
<p>Can 11 points be picked from a <span class="math-container">$4\times4\times4$</span> grid so that no four points are coplanar?</p>
<p>What is the maximal number of points selectable from a <span class="math-container">$5\times5\times5$</span> grid, and beyond?</p>
<p>NEW</p>
<p>There's a <a href="http://azspcs.com/Contest/Tetrahedra" rel="nofollow noreferrer">computer programming contest</a> running through June 4, 2016 that asks the question "What's the most points in an <em>n</em> × <em>n</em> × <em>n</em> grid of which no 4 are coplanar?" for larger values of <em>n</em>.</p>
| Oleg567 | 47,993 | <p>On $4\times 4 \times 4$:</p>
<p>The maximal number of points for a $4\times 4 \times 4$ grid is $10$.</p>
<p>As I checked, there is no way to place $11$ points in a $4 \times 4 \times 4$ grid (ignoring rotations and reflections) under the <strong>No-Four-In-Plane</strong> rule.</p>
<p>And there are $232$ ways to place $10$ such points:<br>
here is the list:</p>
<pre><code>(0,0,0) (0,0,1) (0,3,3) (1,1,3) (1,2,0) (1,3,3) (2,0,2) (2,1,0) (2,3,1) (3,1,2);
(0,0,0) (0,0,2) (0,3,1) (1,1,3) (1,2,1) (1,3,3) (2,0,1) (2,1,2) (2,3,3) (3,1,0);
(0,0,0) (0,0,2) (0,3,1) (1,1,3) (1,2,3) (1,3,2) (2,0,3) (2,1,0) (2,3,1) (3,2,0);
(0,0,0) (0,0,2) (0,3,3) (1,1,0) (1,2,3) (1,3,1) (2,0,1) (2,1,1) (2,3,0) (3,2,3);
(0,0,0) (0,0,3) (0,3,1) (1,1,3) (1,2,1) (1,3,2) (2,0,1) (2,1,0) (2,3,0) (3,1,3);
(0,0,0) (0,1,1) (0,2,3) (1,1,3) (1,3,0) (1,3,1) (2,0,2) (2,1,0) (2,2,3) (3,0,1);
(0,0,0) (0,1,1) (0,2,3) (1,1,3) (1,3,0) (1,3,2) (2,0,2) (2,1,0) (2,2,3) (3,2,2);
(0,0,0) (0,1,1) (0,2,3) (1,1,3) (1,3,1) (1,3,2) (2,0,2) (2,1,0) (2,2,3) (3,2,1);
(0,0,0) (0,1,3) (0,2,3) (1,0,2) (1,1,0) (1,3,1) (2,1,3) (2,3,0) (2,3,2) (3,0,1);
(0,0,0) (0,1,3) (0,2,3) (1,0,2) (1,1,0) (1,3,1) (2,2,3) (2,3,0) (2,3,2) (3,0,1);
(0,0,1) (0,1,1) (0,2,3) (1,1,3) (1,3,0) (1,3,2) (2,0,2) (2,1,0) (2,3,1) (3,0,2);
(0,0,1) (0,1,1) (0,2,3) (1,2,3) (1,3,0) (1,3,2) (2,0,2) (2,1,0) (2,3,1) (3,1,3);
(0,0,1) (0,1,3) (0,2,0) (1,2,3) (1,3,1) (1,3,2) (2,0,0) (2,1,0) (2,3,1) (3,0,2);
(0,0,1) (0,1,3) (0,2,0) (1,2,3) (1,3,1) (1,3,2) (2,0,0) (2,1,0) (2,3,2) (3,0,2);
(0,0,1) (0,1,3) (0,2,0) (1,2,3) (1,3,2) (1,3,3) (2,0,2) (2,1,0) (2,3,1) (3,0,2);
(0,0,0) (0,0,1) (0,2,3) (1,1,3) (1,2,0) (1,3,3) (2,3,1) (3,0,1) (3,1,0) (3,2,2);
(0,0,0) (0,0,1) (0,3,1) (1,0,3) (1,2,0) (1,3,2) (2,1,2) (3,1,1) (3,2,0) (3,3,3);
(0,0,0) (0,0,1) (0,3,2) (1,0,2) (1,2,3) (1,3,1) (2,1,3) (3,1,0) (3,2,0) (3,3,2);
(0,0,0) (0,0,1) (0,3,2) (1,1,3) (1,2,0) (1,3,2) (2,3,0) (3,0,3) (3,1,2) (3,2,3);
(0,0,0) (0,0,1) (0,3,2) (1,1,3) (1,2,0) (1,3,2) (2,3,1) (3,0,2) (3,1,0) (3,2,3);
(0,0,0) (0,0,2) (0,1,2) (1,0,3) (1,2,0) (1,3,1) (2,3,3) (3,1,0) (3,2,3) (3,3,1);
(0,0,0) (0,0,2) (0,1,3) (1,1,3) (1,2,0) (1,3,2) (2,3,0) (3,0,3) (3,1,1) (3,2,1);
(0,0,0) (0,0,2) (0,2,1) (1,1,0) (1,2,3) (1,3,2) (2,3,0) (3,0,3) (3,1,1) (3,2,3);
(0,0,0) (0,0,2) (0,2,3) (1,0,3) (1,2,0) (1,3,1) (2,1,0) (3,1,2) (3,2,1) (3,3,1);
(0,0,0) (0,0,2) (0,2,3) (1,0,3) (1,2,0) (1,3,1) (2,3,3) (3,1,2) (3,2,1) (3,3,1);
(0,0,0) (0,0,2) (0,3,1) (1,1,0) (1,2,3) (1,3,3) (2,3,2) (3,0,2) (3,1,3) (3,2,0);
(0,0,0) (0,0,2) (0,3,1) (1,1,3) (1,2,0) (1,3,3) (2,3,2) (3,0,0) (3,1,2) (3,2,3);
(0,0,0) (0,0,2) (0,3,1) (1,1,3) (1,2,0) (1,3,3) (2,3,2) (3,0,3) (3,1,2) (3,2,0);
(0,0,0) (0,0,2) (0,3,2) (1,1,2) (1,2,0) (1,3,3) (2,3,1) (3,0,1) (3,1,3) (3,2,0);
(0,0,0) (0,0,2) (0,3,3) (1,0,3) (1,2,0) (1,3,2) (2,1,2) (3,1,0) (3,2,3) (3,3,1);
(0,0,0) (0,0,2) (0,3,3) (1,1,3) (1,2,0) (1,3,0) (2,3,2) (3,0,1) (3,1,3) (3,2,2);
(0,0,0) (0,0,2) (0,3,3) (1,1,3) (1,2,2) (1,3,0) (2,3,2) (3,0,1) (3,1,3) (3,2,0);
(0,0,0) (0,0,3) (0,1,1) (1,1,0) (1,2,3) (1,3,0) (2,3,2) (3,0,1) (3,1,3) (3,2,2);
(0,0,0) (0,0,3) (0,2,1) (1,1,3) (1,2,0) (1,3,3) (2,3,1) (3,0,1) (3,1,2) (3,2,0);
(0,0,0) (0,0,3) (0,3,1) (1,1,3) (1,2,0) (1,3,3) (2,3,1) (3,0,2) (3,1,0) (3,2,1);
(0,0,0) (0,1,1) (0,1,2) (1,0,3) (1,2,3) (1,3,0) (2,2,3) (3,0,2) (3,1,0) (3,3,1);
(0,0,0) (0,1,1) (0,1,3) (1,0,1) (1,2,0) (1,3,2) (2,2,3) (3,0,1) (3,1,0) (3,3,3);
(0,0,0) (0,1,1) (0,2,3) (1,0,2) (1,2,3) (1,3,0) (2,1,0) (3,1,3) (3,3,1) (3,3,2);
(0,0,0) (0,1,1) (0,2,3) (1,0,2) (1,2,3) (1,3,1) (2,0,3) (3,1,2) (3,3,0) (3,3,2);
(0,0,0) (0,1,1) (0,2,3) (1,0,2) (1,3,0) (1,3,1) (2,2,3) (3,0,1) (3,2,0) (3,3,3);
(0,0,0) (0,1,1) (0,2,3) (1,0,2) (1,3,0) (1,3,1) (2,2,3) (3,0,1) (3,2,2) (3,3,2);
(0,0,0) (0,1,1) (0,2,3) (1,0,2) (1,3,0) (1,3,3) (2,1,3) (3,0,1) (3,1,0) (3,3,1);
(0,0,0) (0,1,1) (0,2,3) (1,0,3) (1,1,0) (1,3,1) (2,3,2) (3,0,2) (3,2,1) (3,2,2);
(0,0,0) (0,1,1) (0,2,3) (1,0,3) (1,2,3) (1,3,0) (2,2,1) (3,0,2) (3,1,0) (3,3,1);
(0,0,0) (0,1,1) (0,2,3) (1,0,3) (1,3,1) (1,3,2) (2,1,0) (3,0,1) (3,2,2) (3,3,2);
(0,0,0) (0,1,1) (0,2,3) (1,0,3) (1,3,1) (1,3,2) (2,1,0) (3,0,3) (3,1,2) (3,2,0);
(0,0,0) (0,1,1) (0,2,3) (1,0,3) (1,3,1) (1,3,2) (2,1,0) (3,0,3) (3,1,2) (3,2,2);
(0,0,0) (0,1,1) (0,2,3) (1,1,0) (1,2,3) (1,3,0) (2,3,2) (3,0,1) (3,0,2) (3,3,3);
(0,0,0) (0,1,1) (0,2,3) (1,1,3) (1,3,0) (1,3,1) (2,1,0) (3,0,1) (3,2,2) (3,3,2);
(0,0,0) (0,1,1) (0,2,3) (1,1,3) (1,3,0) (1,3,2) (2,1,0) (3,0,1) (3,2,2) (3,3,2);
(0,0,0) (0,1,1) (0,2,3) (1,1,3) (1,3,1) (1,3,2) (2,1,0) (3,0,1) (3,2,2) (3,3,2);
(0,0,0) (0,1,1) (0,2,3) (1,2,3) (1,3,0) (1,3,1) (2,3,2) (3,0,2) (3,1,2) (3,2,1);
(0,0,0) (0,1,1) (0,2,3) (1,2,3) (1,3,0) (1,3,1) (2,3,3) (3,0,2) (3,1,2) (3,2,1);
(0,0,0) (0,1,2) (0,1,3) (1,0,2) (1,1,0) (1,3,1) (2,3,1) (3,0,1) (3,2,3) (3,3,2);
(0,0,0) (0,1,2) (0,1,3) (1,0,2) (1,2,3) (1,3,0) (2,2,3) (3,1,1) (3,2,0) (3,3,1);
(0,0,0) (0,1,2) (0,2,2) (1,0,1) (1,2,0) (1,3,3) (2,0,1) (3,1,0) (3,1,3) (3,3,1);
(0,0,0) (0,1,2) (0,2,2) (1,0,1) (1,2,0) (1,3,3) (2,0,1) (3,1,3) (3,3,0) (3,3,1);
(0,0,0) (0,1,2) (0,2,2) (1,0,2) (1,3,1) (1,3,3) (2,0,3) (3,1,0) (3,2,3) (3,3,1);
(0,0,0) (0,1,2) (0,2,3) (1,0,1) (1,0,2) (1,3,0) (2,3,3) (3,1,3) (3,2,1) (3,3,1);
(0,0,0) (0,1,2) (0,2,3) (1,0,1) (1,3,0) (1,3,2) (2,0,0) (3,1,3) (3,2,1) (3,3,1);
(0,0,0) (0,1,2) (0,2,3) (1,0,2) (1,2,0) (1,3,0) (2,3,3) (3,1,1) (3,1,3) (3,3,2);
(0,0,0) (0,1,2) (0,2,3) (1,0,3) (1,1,0) (1,3,1) (2,3,3) (3,0,2) (3,2,1) (3,2,2);
(0,0,0) (0,1,2) (0,2,3) (1,0,3) (1,2,0) (1,3,0) (2,3,2) (3,1,1) (3,1,3) (3,3,2);
(0,0,0) (0,1,2) (0,2,3) (1,0,3) (1,2,1) (1,3,1) (2,1,3) (3,0,2) (3,1,0) (3,3,1);
(0,0,0) (0,1,2) (0,2,3) (1,0,3) (1,3,1) (1,3,2) (2,1,3) (3,0,1) (3,1,0) (3,3,1);
(0,0,0) (0,1,2) (0,2,3) (1,0,3) (1,3,1) (1,3,2) (2,2,0) (3,0,1) (3,2,2) (3,3,1);
(0,0,0) (0,1,2) (0,2,3) (1,1,3) (1,2,0) (1,3,3) (2,3,1) (3,0,1) (3,0,2) (3,1,0);
(0,0,0) (0,1,2) (0,3,1) (1,0,2) (1,0,3) (1,2,0) (2,3,3) (3,1,1) (3,2,1) (3,3,2);
(0,0,0) (0,1,2) (0,3,1) (1,0,2) (1,2,3) (1,3,0) (2,3,3) (3,0,2) (3,1,1) (3,1,3);
(0,0,0) (0,1,2) (0,3,1) (1,0,3) (1,1,0) (1,3,1) (2,2,3) (3,1,1) (3,2,0) (3,2,2);
(0,0,0) (0,1,2) (0,3,1) (1,0,3) (1,1,0) (1,3,1) (2,3,3) (3,0,2) (3,2,0) (3,2,2);
(0,0,0) (0,1,2) (0,3,1) (1,0,3) (1,2,0) (1,3,3) (2,0,1) (3,1,0) (3,1,3) (3,3,1);
(0,0,0) (0,1,2) (0,3,1) (1,0,3) (1,2,3) (1,3,2) (2,3,1) (3,0,2) (3,1,0) (3,1,3);
(0,0,0) (0,1,2) (0,3,1) (1,1,3) (1,2,3) (1,3,0) (2,2,1) (3,0,2) (3,1,0) (3,2,3);
(0,0,0) (0,1,2) (0,3,2) (1,0,1) (1,0,3) (1,2,0) (2,3,3) (3,1,3) (3,2,0) (3,3,1);
(0,0,0) (0,1,2) (0,3,2) (1,0,2) (1,2,3) (1,3,0) (2,0,3) (3,2,0) (3,2,3) (3,3,1);
(0,0,0) (0,1,2) (0,3,2) (1,0,3) (1,2,0) (1,3,1) (2,0,1) (3,1,0) (3,2,3) (3,3,1);
(0,0,0) (0,1,2) (0,3,2) (1,0,3) (1,2,0) (1,3,1) (2,0,1) (3,2,2) (3,2,3) (3,3,1);
(0,0,0) (0,1,3) (0,2,2) (1,0,1) (1,2,0) (1,3,2) (2,0,3) (3,1,1) (3,3,1) (3,3,2);
(0,0,0) (0,1,3) (0,2,2) (1,0,1) (1,2,0) (1,3,2) (2,1,0) (3,0,1) (3,3,1) (3,3,3);
(0,0,0) (0,1,3) (0,2,2) (1,0,2) (1,1,0) (1,3,1) (2,3,0) (3,0,1) (3,2,1) (3,3,3);
(0,0,0) (0,1,3) (0,2,2) (1,0,2) (1,1,0) (1,3,1) (2,3,0) (3,0,2) (3,2,1) (3,3,3);
(0,0,0) (0,1,3) (0,2,2) (1,0,2) (1,1,0) (1,3,1) (2,3,0) (3,1,1) (3,2,1) (3,2,3);
(0,0,0) (0,1,3) (0,2,2) (1,0,2) (1,3,0) (1,3,1) (2,2,3) (3,0,2) (3,2,1) (3,3,3);
(0,0,0) (0,1,3) (0,2,3) (1,0,0) (1,0,2) (1,3,1) (2,3,0) (3,1,2) (3,2,1) (3,3,3);
(0,0,0) (0,1,3) (0,2,3) (1,1,0) (1,3,1) (1,3,2) (2,0,0) (3,0,2) (3,2,1) (3,3,3);
(0,0,0) (0,1,3) (0,3,1) (1,0,0) (1,0,1) (1,3,3) (2,2,3) (3,1,2) (3,2,0) (3,3,2);
(0,0,0) (0,1,3) (0,3,1) (1,0,1) (1,1,2) (1,3,3) (2,3,0) (3,1,2) (3,2,0) (3,2,2);
(0,0,0) (0,1,3) (0,3,2) (1,0,0) (1,2,0) (1,3,1) (2,2,3) (3,1,2) (3,2,1) (3,3,3);
(0,0,0) (0,1,3) (0,3,2) (1,0,1) (1,1,3) (1,3,0) (2,2,3) (3,1,2) (3,2,0) (3,2,2);
(0,0,0) (0,1,3) (0,3,2) (1,0,1) (1,1,3) (1,3,0) (2,3,3) (3,1,2) (3,2,0) (3,2,2);
(0,0,0) (0,1,3) (0,3,2) (1,0,2) (1,1,0) (1,3,1) (2,2,3) (3,1,1) (3,2,1) (3,2,3);
(0,0,0) (0,1,3) (0,3,2) (1,0,2) (1,1,3) (1,3,0) (2,2,0) (3,1,1) (3,2,1) (3,2,3);
(0,0,0) (0,1,3) (0,3,2) (1,0,2) (1,2,0) (1,3,0) (2,3,3) (3,1,1) (3,1,2) (3,2,3);
(0,0,0) (0,1,3) (0,3,2) (1,0,2) (1,2,3) (1,3,0) (2,2,3) (3,1,1) (3,2,0) (3,3,1);
(0,0,0) (0,1,3) (0,3,2) (1,0,3) (1,1,0) (1,3,3) (2,0,1) (3,2,0) (3,2,2) (3,3,1);
(0,0,0) (0,1,3) (0,3,2) (1,0,3) (1,2,0) (1,2,3) (2,0,1) (3,1,0) (3,2,2) (3,3,1);
(0,0,0) (0,1,3) (0,3,2) (1,0,3) (1,2,0) (1,3,1) (2,3,3) (3,1,2) (3,2,0) (3,2,2);
(0,0,0) (0,1,3) (0,3,2) (1,0,3) (1,2,0) (1,3,2) (2,0,1) (3,1,1) (3,2,2) (3,3,1);
(0,0,0) (0,1,3) (0,3,2) (1,0,3) (1,2,0) (1,3,2) (2,0,1) (3,2,2) (3,2,3) (3,3,1);
(0,0,0) (0,2,2) (0,2,3) (1,0,3) (1,1,0) (1,3,1) (2,3,3) (3,0,2) (3,1,2) (3,3,1);
(0,0,0) (0,2,2) (0,2,3) (1,0,3) (1,2,0) (1,3,2) (2,1,0) (3,0,1) (3,1,1) (3,3,2);
(0,0,0) (0,2,3) (0,3,2) (1,0,1) (1,1,3) (1,3,0) (2,3,3) (3,0,1) (3,2,1) (3,2,2);
(0,0,0) (0,2,3) (0,3,2) (1,0,2) (1,0,3) (1,3,1) (2,2,0) (3,1,0) (3,2,1) (3,3,1);
(0,0,1) (0,0,2) (0,1,0) (1,1,3) (1,2,0) (1,3,2) (2,3,3) (3,0,0) (3,1,1) (3,2,1);
(0,0,1) (0,1,0) (0,1,3) (1,0,3) (1,2,0) (1,3,1) (2,2,0) (3,0,2) (3,1,2) (3,3,3);
(0,0,1) (0,1,0) (0,2,3) (1,0,0) (1,3,1) (1,3,2) (2,2,0) (3,0,3) (3,2,2) (3,3,2);
(0,0,1) (0,1,0) (0,2,3) (1,0,1) (1,1,3) (1,3,2) (2,3,3) (3,0,2) (3,2,2) (3,3,0);
(0,0,1) (0,1,0) (0,2,3) (1,0,3) (1,2,0) (1,3,2) (2,3,3) (3,1,1) (3,1,2) (3,3,2);
(0,0,1) (0,1,0) (0,2,3) (1,0,3) (1,3,1) (1,3,2) (2,0,0) (3,1,2) (3,2,0) (3,3,2);
(0,0,1) (0,1,0) (0,2,3) (1,0,3) (1,3,1) (1,3,2) (2,0,0) (3,1,3) (3,2,0) (3,3,2);
(0,0,1) (0,1,0) (0,2,3) (1,1,3) (1,3,0) (1,3,2) (2,0,0) (3,0,1) (3,2,1) (3,3,3);
(0,0,1) (0,1,0) (0,2,3) (1,2,0) (1,2,2) (1,3,2) (2,0,0) (3,0,3) (3,1,1) (3,3,2);
(0,0,1) (0,1,1) (0,1,3) (1,0,3) (1,1,0) (1,3,2) (2,3,2) (3,0,2) (3,2,0) (3,3,3);
(0,0,1) (0,1,1) (0,2,0) (1,0,2) (1,2,3) (1,3,0) (2,3,2) (3,1,0) (3,1,3) (3,2,1);
(0,0,1) (0,1,1) (0,2,3) (1,0,2) (1,3,1) (1,3,3) (2,2,0) (3,0,0) (3,2,3) (3,3,2);
(0,0,1) (0,1,1) (0,2,3) (1,0,3) (1,1,0) (1,3,1) (2,3,2) (3,0,3) (3,2,2) (3,3,0);
(0,0,1) (0,1,1) (0,3,2) (1,0,0) (1,1,3) (1,3,2) (2,2,0) (3,1,2) (3,2,1) (3,2,3);
(0,0,1) (0,1,1) (0,3,2) (1,0,1) (1,1,3) (1,3,0) (2,0,2) (3,2,0) (3,2,2) (3,3,3);
(0,0,1) (0,1,1) (0,3,2) (1,1,3) (1,2,0) (1,3,1) (2,0,0) (3,1,0) (3,2,2) (3,2,3);
(0,0,1) (0,1,1) (0,3,2) (1,1,3) (1,2,0) (1,3,2) (2,3,1) (3,0,0) (3,2,2) (3,2,3);
(0,0,1) (0,1,2) (0,2,0) (1,0,3) (1,1,0) (1,3,1) (2,1,3) (3,0,2) (3,2,0) (3,3,3);
(0,0,1) (0,1,2) (0,2,1) (1,0,2) (1,3,1) (1,3,3) (2,3,0) (3,0,2) (3,1,0) (3,2,3);
(0,0,1) (0,1,2) (0,2,1) (1,1,0) (1,3,1) (1,3,3) (2,0,2) (3,0,3) (3,1,0) (3,2,2);
(0,0,1) (0,1,2) (0,2,2) (1,0,1) (1,0,3) (1,3,0) (2,3,3) (3,1,3) (3,2,0) (3,3,2);
(0,0,1) (0,1,2) (0,2,2) (1,0,1) (1,1,3) (1,3,0) (2,3,3) (3,0,2) (3,2,0) (3,2,1);
(0,0,1) (0,1,2) (0,2,2) (1,0,3) (1,2,0) (1,3,2) (2,3,1) (3,1,1) (3,1,3) (3,2,0);
(0,0,1) (0,1,2) (0,2,2) (1,0,3) (1,2,0) (1,3,2) (2,3,3) (3,1,1) (3,1,3) (3,2,0);
(0,0,1) (0,1,2) (0,3,1) (1,1,3) (1,3,0) (1,3,1) (2,0,0) (3,0,2) (3,1,0) (3,2,3);
(0,0,1) (0,1,2) (0,3,2) (1,0,2) (1,0,3) (1,2,0) (2,3,1) (3,1,1) (3,2,3) (3,3,0);
(0,0,1) (0,1,2) (0,3,2) (1,1,0) (1,2,3) (1,3,1) (2,3,3) (3,0,3) (3,1,0) (3,2,2);
(0,0,1) (0,1,2) (0,3,2) (1,1,0) (1,2,3) (1,3,1) (2,3,3) (3,0,3) (3,2,0) (3,2,2);
(0,0,1) (0,1,3) (0,2,0) (1,0,0) (1,0,3) (1,3,1) (2,2,0) (3,1,2) (3,2,1) (3,3,2);
(0,0,1) (0,1,3) (0,2,0) (1,0,0) (1,2,2) (1,3,2) (2,3,0) (3,0,3) (3,1,1) (3,1,2);
(0,0,1) (0,1,3) (0,2,0) (1,0,0) (1,2,2) (1,3,2) (2,3,0) (3,1,2) (3,1,3) (3,2,1);
(0,0,1) (0,1,3) (0,2,0) (1,0,0) (1,2,3) (1,3,1) (2,1,3) (3,0,2) (3,3,0) (3,3,2);
(0,0,1) (0,1,3) (0,2,0) (1,0,0) (1,2,3) (1,3,1) (2,2,0) (3,0,3) (3,1,2) (3,3,2);
(0,0,1) (0,1,3) (0,2,0) (1,0,0) (1,2,3) (1,3,2) (2,3,0) (3,0,3) (3,1,1) (3,3,2);
(0,0,1) (0,1,3) (0,2,0) (1,0,0) (1,2,3) (1,3,2) (2,3,0) (3,1,1) (3,1,2) (3,3,2);
(0,0,1) (0,1,3) (0,2,0) (1,0,1) (1,2,2) (1,3,3) (2,3,0) (3,0,2) (3,1,0) (3,1,2);
(0,0,1) (0,1,3) (0,2,0) (1,0,1) (1,3,0) (1,3,3) (2,3,2) (3,0,2) (3,1,1) (3,2,2);
(0,0,1) (0,1,3) (0,2,0) (1,0,1) (1,3,2) (1,3,3) (2,3,0) (3,0,2) (3,1,0) (3,2,3);
(0,0,1) (0,1,3) (0,2,0) (1,0,2) (1,1,3) (1,3,3) (2,0,0) (3,2,1) (3,2,2) (3,3,0);
(0,0,1) (0,1,3) (0,2,0) (1,0,2) (1,2,3) (1,3,1) (2,1,0) (3,0,2) (3,1,2) (3,3,0);
(0,0,1) (0,1,3) (0,2,0) (1,0,3) (1,2,0) (1,3,1) (2,3,0) (3,0,2) (3,1,2) (3,3,3);
(0,0,1) (0,1,3) (0,2,0) (1,0,3) (1,3,1) (1,3,2) (2,1,0) (3,0,2) (3,1,0) (3,2,3);
(0,0,1) (0,1,3) (0,2,0) (1,0,3) (1,3,1) (1,3,2) (2,1,0) (3,0,2) (3,2,0) (3,3,3);
(0,0,1) (0,1,3) (0,2,0) (1,1,0) (1,3,1) (1,3,3) (2,0,3) (3,0,2) (3,2,2) (3,3,0);
(0,0,1) (0,1,3) (0,2,0) (1,1,3) (1,3,0) (1,3,1) (2,1,2) (3,0,0) (3,2,1) (3,3,2);
(0,0,1) (0,1,3) (0,2,0) (1,2,0) (1,3,1) (1,3,3) (2,1,0) (3,0,3) (3,1,2) (3,3,2);
(0,0,1) (0,1,3) (0,2,0) (1,2,3) (1,3,1) (1,3,2) (2,1,0) (3,0,2) (3,1,2) (3,3,3);
(0,0,1) (0,1,3) (0,2,1) (1,0,1) (1,1,0) (1,3,3) (2,3,2) (3,0,2) (3,2,3) (3,3,0);
(0,0,1) (0,1,3) (0,2,1) (1,1,0) (1,3,1) (1,3,2) (2,0,0) (3,0,2) (3,2,0) (3,3,3);
(0,0,1) (0,1,3) (0,2,1) (1,1,0) (1,3,2) (1,3,3) (2,0,2) (3,0,2) (3,2,3) (3,3,0);
(0,0,1) (0,1,3) (0,2,1) (1,1,3) (1,3,0) (1,3,2) (2,0,0) (3,0,2) (3,2,0) (3,3,3);
(0,0,1) (0,2,1) (0,2,2) (1,0,0) (1,1,3) (1,3,2) (2,3,0) (3,0,0) (3,1,2) (3,2,3);
(0,0,0) (0,0,1) (0,1,3) (1,0,3) (1,2,0) (1,3,1) (2,1,3) (2,3,2) (3,1,2) (3,2,1);
(0,0,0) (0,0,2) (0,1,1) (1,1,0) (1,2,3) (1,3,1) (2,0,3) (2,3,2) (3,1,0) (3,2,2);
(0,0,0) (0,0,2) (0,1,1) (1,1,3) (1,2,0) (1,3,2) (2,0,1) (2,3,0) (3,1,3) (3,2,3);
(0,0,0) (0,0,2) (0,1,1) (1,1,3) (1,2,0) (1,3,2) (2,1,3) (2,3,0) (3,0,1) (3,2,1);
(0,0,0) (0,0,2) (0,2,3) (1,1,3) (1,2,0) (1,3,2) (2,1,3) (2,3,0) (3,0,1) (3,2,1);
(0,0,0) (0,0,2) (0,3,1) (1,1,1) (1,2,3) (1,3,3) (2,1,2) (2,3,0) (3,1,2) (3,2,0);
(0,0,0) (0,0,2) (0,3,2) (1,1,1) (1,2,3) (1,3,2) (2,0,3) (2,3,1) (3,1,3) (3,2,0);
(0,0,0) (0,0,3) (0,3,1) (1,1,3) (1,2,0) (1,3,3) (2,1,1) (2,3,2) (3,1,2) (3,2,0);
(0,0,0) (0,1,1) (0,1,3) (1,0,1) (1,1,3) (1,3,2) (2,2,3) (2,3,0) (3,0,2) (3,2,2);
(0,0,0) (0,1,1) (0,2,3) (1,0,2) (1,3,0) (1,3,1) (2,1,3) (2,3,0) (3,0,1) (3,2,2);
(0,0,0) (0,1,1) (0,2,3) (1,1,3) (1,3,0) (1,3,2) (2,0,2) (2,1,0) (3,0,1) (3,2,2);
(0,0,0) (0,1,1) (0,2,3) (1,1,3) (1,3,0) (1,3,2) (2,0,2) (2,3,0) (3,0,1) (3,2,2);
(0,0,0) (0,1,2) (0,2,1) (1,0,2) (1,3,1) (1,3,3) (2,0,2) (2,1,3) (3,1,0) (3,2,0);
(0,0,0) (0,1,2) (0,2,2) (1,0,2) (1,3,1) (1,3,3) (2,0,3) (2,3,1) (3,1,0) (3,2,3);
(0,0,0) (0,1,2) (0,3,1) (1,0,3) (1,1,0) (1,3,1) (2,2,3) (2,3,3) (3,2,0) (3,2,2);
(0,0,0) (0,1,2) (0,3,1) (1,1,3) (1,2,0) (1,3,3) (2,0,0) (2,0,1) (3,2,3) (3,3,2);
(0,0,0) (0,1,2) (0,3,1) (1,1,3) (1,2,3) (1,3,0) (2,2,1) (2,3,2) (3,0,2) (3,1,0);
(0,0,0) (0,1,2) (0,3,2) (1,0,2) (1,2,3) (1,3,0) (2,2,3) (2,3,1) (3,1,1) (3,2,0);
(0,0,0) (0,1,3) (0,2,2) (1,0,1) (1,2,0) (1,3,2) (2,0,3) (2,1,0) (3,3,1) (3,3,2);
(0,0,0) (0,1,3) (0,2,2) (1,0,1) (1,2,0) (1,3,2) (2,0,3) (2,1,0) (3,3,1) (3,3,3);
(0,0,0) (0,1,3) (0,2,2) (1,0,1) (1,3,0) (1,3,2) (2,0,3) (2,1,0) (3,2,3) (3,3,1);
(0,0,0) (0,1,3) (0,2,2) (1,0,2) (1,1,0) (1,3,1) (2,2,3) (2,3,0) (3,0,2) (3,2,1);
(0,0,0) (0,1,3) (0,2,3) (1,2,0) (1,3,1) (1,3,2) (2,0,2) (2,1,0) (3,0,1) (3,3,2);
(0,0,0) (0,1,3) (0,3,2) (1,1,0) (1,2,0) (1,3,2) (2,0,1) (2,0,2) (3,2,3) (3,3,1);
(0,0,0) (0,2,2) (0,2,3) (1,0,2) (1,1,0) (1,3,1) (2,2,3) (2,3,0) (3,0,1) (3,1,3);
(0,0,0) (0,2,3) (0,3,2) (1,0,1) (1,0,2) (1,3,0) (2,1,3) (2,2,3) (3,1,0) (3,3,1);
(0,0,1) (0,0,2) (0,1,1) (1,0,2) (1,2,3) (1,3,0) (2,1,0) (2,3,2) (3,1,3) (3,2,1);
(0,0,1) (0,1,0) (0,1,3) (1,0,0) (1,2,3) (1,3,1) (2,0,3) (2,2,0) (3,1,2) (3,3,2);
(0,0,1) (0,1,0) (0,2,2) (1,0,2) (1,3,0) (1,3,1) (2,0,0) (2,1,3) (3,1,2) (3,2,3);
(0,0,1) (0,1,0) (0,2,2) (1,0,3) (1,1,3) (1,3,2) (2,0,1) (2,3,3) (3,1,2) (3,2,0);
(0,0,1) (0,1,0) (0,2,2) (1,0,3) (1,2,3) (1,3,1) (2,1,3) (2,2,0) (3,0,0) (3,3,2);
(0,0,1) (0,1,0) (0,2,2) (1,0,3) (1,3,1) (1,3,3) (2,0,0) (2,1,3) (3,1,2) (3,2,0);
(0,0,1) (0,1,0) (0,2,3) (1,0,1) (1,3,0) (1,3,3) (2,1,3) (2,2,0) (3,0,2) (3,2,2);
(0,0,1) (0,1,0) (0,2,3) (1,0,2) (1,0,3) (1,3,1) (2,1,3) (2,2,0) (3,1,1) (3,3,2);
(0,0,1) (0,1,0) (0,2,3) (1,0,2) (1,2,3) (1,3,1) (2,1,3) (2,2,0) (3,0,0) (3,3,2);
(0,0,1) (0,1,0) (0,2,3) (1,0,3) (1,3,1) (1,3,2) (2,1,0) (2,2,0) (3,0,2) (3,3,3);
(0,0,1) (0,1,0) (0,2,3) (1,1,3) (1,2,0) (1,3,2) (2,0,0) (2,3,1) (3,0,2) (3,2,3);
(0,0,1) (0,1,0) (0,2,3) (1,1,3) (1,3,0) (1,3,2) (2,0,0) (2,3,1) (3,0,2) (3,2,3);
(0,0,1) (0,1,1) (0,2,2) (1,1,3) (1,3,0) (1,3,2) (2,1,2) (2,3,0) (3,0,0) (3,2,3);
(0,0,1) (0,1,1) (0,2,3) (1,0,2) (1,2,3) (1,3,0) (2,0,0) (2,3,2) (3,1,0) (3,1,3);
(0,0,1) (0,1,1) (0,2,3) (1,0,2) (1,2,3) (1,3,0) (2,0,0) (2,3,2) (3,1,3) (3,2,1);
(0,0,1) (0,1,1) (0,3,2) (1,1,0) (1,2,3) (1,3,2) (2,0,0) (2,0,2) (3,1,3) (3,2,0);
(0,0,1) (0,1,2) (0,2,0) (1,0,3) (1,1,3) (1,3,0) (2,0,1) (2,3,3) (3,1,0) (3,2,2);
(0,0,1) (0,1,2) (0,2,2) (1,0,3) (1,2,0) (1,3,2) (2,1,0) (2,3,3) (3,1,3) (3,2,1);
(0,0,1) (0,1,2) (0,2,2) (1,0,3) (1,3,0) (1,3,2) (2,0,1) (2,3,3) (3,1,3) (3,2,0);
(0,0,1) (0,1,2) (0,3,2) (1,1,0) (1,2,3) (1,3,1) (2,0,2) (2,3,1) (3,0,3) (3,2,0);
(0,0,1) (0,1,3) (0,2,0) (1,0,1) (1,3,2) (1,3,3) (2,1,0) (2,3,0) (3,1,2) (3,2,1);
(0,0,1) (0,1,3) (0,2,0) (1,0,2) (1,3,1) (1,3,3) (2,0,1) (2,1,0) (3,2,2) (3,3,2);
(0,0,1) (0,1,3) (0,2,0) (1,0,3) (1,3,1) (1,3,2) (2,1,0) (2,2,0) (3,0,2) (3,2,3);
(0,0,1) (0,1,3) (0,2,0) (1,0,3) (1,3,1) (1,3,2) (2,1,0) (2,2,0) (3,0,2) (3,3,3);
(0,0,1) (0,1,3) (0,2,0) (1,0,3) (1,3,1) (1,3,2) (2,1,0) (2,3,0) (3,0,2) (3,2,3);
(0,0,1) (0,1,3) (0,2,0) (1,1,0) (1,3,1) (1,3,3) (2,0,3) (2,2,0) (3,0,2) (3,2,2);
(0,0,1) (0,1,3) (0,2,0) (1,2,3) (1,3,1) (1,3,2) (2,0,0) (2,1,0) (3,0,2) (3,3,3);
(0,0,1) (0,1,3) (0,2,1) (1,0,0) (1,1,3) (1,3,2) (2,0,2) (2,3,0) (3,2,0) (3,2,3);
(0,0,1) (0,1,3) (0,3,1) (1,0,0) (1,1,3) (1,3,2) (2,0,2) (2,3,0) (3,2,0) (3,2,3);
(0,0,0) (0,1,1) (0,2,3) (1,1,3) (1,3,2) (2,0,2) (2,1,0) (2,3,0) (3,2,1) (3,2,2);
(0,0,0) (0,1,1) (0,2,3) (1,3,2) (1,3,3) (2,0,2) (2,1,0) (2,3,0) (3,0,2) (3,2,1);
(0,0,0) (0,1,2) (0,2,1) (1,1,3) (1,3,3) (2,2,0) (2,3,1) (2,3,3) (3,0,2) (3,1,0);
(0,0,0) (0,1,2) (0,3,1) (1,2,3) (1,3,1) (2,0,2) (2,0,3) (2,2,0) (3,1,1) (3,3,2);
(0,0,0) (0,1,2) (0,3,1) (1,2,3) (1,3,3) (2,0,3) (2,2,0) (2,3,1) (3,0,2) (3,1,0);
(0,0,0) (0,1,2) (0,3,1) (1,3,1) (1,3,3) (2,0,3) (2,1,1) (2,2,0) (3,0,2) (3,2,3);
(0,0,0) (0,1,3) (0,3,1) (1,1,3) (1,2,3) (2,0,1) (2,0,2) (2,3,0) (3,2,0) (3,3,2);
(0,0,1) (0,0,2) (0,2,0) (1,2,3) (1,3,1) (2,0,2) (2,1,3) (2,3,0) (3,1,0) (3,2,2);
(0,0,1) (0,0,2) (0,2,0) (1,2,3) (1,3,1) (2,0,3) (2,1,0) (2,3,1) (3,1,2) (3,2,2);
(0,0,1) (0,0,2) (0,2,0) (1,2,3) (1,3,3) (2,0,3) (2,1,0) (2,3,1) (3,1,0) (3,2,2);
(0,0,1) (0,0,2) (0,2,1) (1,2,0) (1,3,3) (2,0,2) (2,1,0) (2,3,1) (3,1,2) (3,2,3);
(0,0,1) (0,0,2) (0,2,1) (1,2,0) (1,3,3) (2,0,2) (2,1,3) (2,3,1) (3,1,0) (3,2,2);
(0,0,1) (0,0,2) (0,2,1) (1,2,3) (1,3,0) (2,0,2) (2,1,0) (2,3,1) (3,1,3) (3,2,2);
(0,0,1) (0,1,1) (0,2,3) (1,3,0) (1,3,2) (2,0,2) (2,1,0) (2,3,1) (3,1,3) (3,2,2);
(0,0,1) (0,1,2) (0,2,1) (1,3,0) (1,3,1) (2,0,2) (2,1,0) (2,3,3) (3,0,2) (3,2,3);
(0,0,0) (0,0,2) (0,1,3) (1,2,0) (1,3,2) (2,1,0) (2,3,1) (3,0,3) (3,2,3) (3,3,1);
(0,0,0) (0,0,3) (0,1,1) (1,2,0) (1,3,2) (2,0,1) (2,3,0) (3,1,3) (3,2,3) (3,3,2);
(0,0,0) (0,1,1) (0,2,3) (1,3,1) (1,3,2) (2,0,3) (2,1,0) (3,0,1) (3,2,2) (3,3,2);
(0,0,0) (0,1,2) (0,2,3) (1,0,3) (1,3,1) (2,2,0) (2,3,3) (3,0,2) (3,1,0) (3,3,1);
(0,0,1) (0,1,1) (0,3,2) (1,1,3) (1,2,0) (2,0,2) (2,3,1) (3,1,0) (3,2,2) (3,2,3);
(0,0,1) (0,1,1) (0,3,2) (1,1,3) (1,3,0) (2,0,0) (2,2,3) (3,1,3) (3,2,0) (3,2,2).
</code></pre>
<hr>
<p>On $5\times 5 \times 5$:</p>
<p>The maximal number of points for a $5\times 5 \times 5$ grid is $13$.</p>
<p>There are $38$ ways to place $13$ such points (ignoring rotations and reflections); each is equivalent to some item of this list:</p>
<pre><code> (0,0,0) (0,0,3) (0,3,4) (1,2,0) (1,3,3) (1,4,1) (2,1,1) (3,1,4) (3,2,4) (3,4,3) (4,0,1) (4,1,2) (4,4,0);
(0,0,0) (0,1,2) (0,1,3) (1,2,4) (1,3,0) (1,4,1) (2,4,4) (3,0,2) (3,1,0) (3,3,1) (4,0,4) (4,2,3) (4,4,1);
(0,0,0) (0,1,2) (0,1,3) (1,3,4) (1,4,1) (2,1,0) (2,2,4) (2,4,1) (3,0,4) (3,2,0) (4,0,2) (4,2,3) (4,4,3);
(0,0,0) (0,1,2) (0,2,1) (1,1,3) (1,1,4) (1,4,3) (2,3,1) (3,0,4) (3,2,0) (3,4,3) (4,0,2) (4,2,4) (4,3,0);
(0,0,0) (0,1,2) (0,3,1) (1,1,3) (1,3,4) (1,4,1) (2,0,3) (3,1,1) (3,2,4) (3,4,0) (4,0,3) (4,2,0) (4,4,2);
(0,0,0) (0,1,2) (0,3,1) (1,1,4) (1,4,2) (1,4,4) (2,1,0) (3,0,4) (3,3,0) (3,4,1) (4,0,2) (4,2,3) (4,3,1);
(0,0,0) (0,1,2) (0,3,1) (1,3,4) (1,4,1) (1,4,3) (2,0,4) (3,1,1) (3,2,4) (3,3,2) (4,0,3) (4,2,0) (4,4,3);
(0,0,0) (0,1,2) (0,3,2) (1,2,0) (1,3,4) (1,4,3) (2,0,1) (3,1,4) (3,2,0) (3,3,1) (4,0,3) (4,1,1) (4,2,4);
(0,0,0) (0,1,2) (0,4,1) (1,0,2) (1,1,3) (1,3,4) (2,4,0) (3,2,0) (3,3,4) (3,4,3) (4,0,3) (4,1,1) (4,3,1);
(0,0,0) (0,1,2) (0,4,3) (1,2,4) (1,3,0) (1,3,1) (2,0,4) (3,0,2) (3,3,1) (3,4,1) (4,1,0) (4,2,3) (4,4,4);
(0,0,0) (0,1,2) (0,4,4) (1,1,4) (1,4,1) (1,4,2) (2,0,1) (3,1,3) (3,2,0) (3,3,4) (4,0,3) (4,2,0) (4,3,3);
(0,0,0) (0,1,2) (0,4,4) (1,2,4) (1,4,1) (1,4,2) (2,0,3) (2,1,0) (3,1,0) (3,3,1) (4,0,4) (4,2,3) (4,3,3);
(0,0,0) (0,1,2) (0,4,4) (1,2,4) (1,4,3) (2,0,1) (2,1,4) (2,4,0) (3,1,0) (3,3,1) (4,2,3) (4,3,1) (4,3,3);
(0,0,0) (0,1,3) (0,4,2) (1,0,1) (1,2,0) (1,4,4) (2,4,3) (3,0,4) (3,1,1) (3,3,4) (4,2,1) (4,3,0) (4,3,2);
(0,0,0) (0,1,3) (0,4,3) (1,2,0) (1,3,4) (1,4,1) (2,0,3) (3,1,1) (3,3,4) (3,4,2) (4,1,2) (4,2,4) (4,3,0);
(0,0,0) (0,1,4) (0,2,3) (1,2,0) (1,3,1) (1,4,4) (2,0,0) (3,0,3) (3,3,4) (3,4,2) (4,1,2) (4,1,3) (4,3,2);
(0,0,0) (0,2,3) (0,3,2) (1,0,3) (1,2,4) (1,4,3) (2,3,1) (3,0,4) (3,1,0) (3,1,2) (4,1,4) (4,3,1) (4,4,2);
(0,0,0) (0,2,3) (0,3,4) (1,1,4) (1,4,2) (2,0,4) (2,1,1) (2,3,0) (3,4,1) (3,4,3) (4,0,1) (4,1,3) (4,3,0);
(0,0,0) (0,2,3) (0,4,2) (1,1,4) (1,2,4) (1,4,0) (2,0,4) (2,1,1) (3,1,1) (3,4,3) (4,0,3) (4,3,1) (4,3,2);
(0,0,0) (0,2,3) (0,4,3) (1,0,0) (1,1,3) (1,4,4) (2,3,1) (3,0,4) (3,1,2) (3,4,1) (4,2,0) (4,3,2) (4,3,4);
(0,0,0) (0,2,4) (0,4,3) (1,2,1) (1,3,1) (2,0,3) (2,1,4) (2,4,0) (3,3,1) (3,4,4) (4,0,2) (4,1,0) (4,3,3);
(0,0,1) (0,0,3) (0,2,0) (1,2,3) (1,3,4) (1,4,1) (2,1,4) (3,1,0) (3,3,1) (3,4,4) (4,0,2) (4,1,0) (4,3,3);
(0,0,1) (0,0,3) (0,4,2) (1,2,0) (1,3,4) (1,4,4) (2,0,1) (3,1,0) (3,3,3) (3,4,1) (4,1,3) (4,2,4) (4,3,0);
(0,0,1) (0,1,0) (0,2,3) (1,0,3) (1,1,3) (1,2,0) (2,4,4) (3,1,1) (3,3,0) (3,4,2) (4,0,2) (4,3,1) (4,3,4);
(0,0,1) (0,1,1) (0,3,2) (1,0,3) (1,4,0) (1,4,2) (2,1,4) (2,2,0) (3,1,0) (3,3,3) (4,0,3) (4,2,2) (4,3,1);
(0,0,1) (0,1,1) (0,3,2) (1,0,4) (1,3,3) (1,4,1) (2,3,0) (2,4,3) (3,0,0) (3,1,2) (4,2,0) (4,2,4) (4,4,3);
(0,0,1) (0,1,1) (0,4,3) (1,0,3) (1,1,0) (1,1,4) (2,3,0) (2,4,4) (3,2,4) (3,4,2) (4,0,1) (4,2,0) (4,3,2);
(0,0,1) (0,1,2) (0,3,3) (1,3,4) (1,4,1) (1,4,3) (2,1,0) (3,1,4) (3,2,4) (3,4,0) (4,0,2) (4,2,1) (4,3,3);
(0,0,1) (0,1,2) (0,4,3) (1,2,4) (1,4,1) (2,0,4) (2,3,0) (2,4,3) (3,0,0) (3,1,4) (4,1,3) (4,3,1) (4,3,2);
(0,0,1) (0,2,1) (1,2,0) (1,3,3) (1,4,2) (2,0,1) (2,2,4) (2,4,0) (3,0,3) (3,1,0) (4,1,3) (4,3,2) (4,3,4);
(0,0,1) (0,2,3) (0,3,1) (1,0,3) (1,2,4) (1,4,2) (2,1,0) (2,4,2) (3,1,3) (3,3,0) (4,1,1) (4,3,4) (4,4,0);
(0,0,1) (0,2,4) (0,3,2) (1,0,3) (1,2,0) (1,4,2) (2,3,4) (2,4,0) (3,1,0) (3,1,4) (4,0,1) (4,1,3) (4,3,1);
(0,0,1) (0,2,4) (0,3,2) (1,1,0) (1,3,0) (1,4,4) (2,0,0) (3,1,3) (3,4,1) (3,4,2) (4,0,2) (4,2,1) (4,3,4);
(0,0,2) (0,3,1) (1,0,3) (1,1,0) (1,4,1) (2,0,1) (2,2,4) (2,4,0) (3,1,0) (3,4,4) (4,1,2) (4,3,2) (4,3,4);
(0,1,1) (0,2,3) (1,0,2) (1,2,0) (1,4,3) (2,0,0) (2,1,4) (2,4,3) (3,3,0) (3,3,1) (4,0,4) (4,1,2) (4,4,4);
(0,1,1) (0,2,3) (1,0,4) (1,3,0) (1,4,1) (2,0,1) (2,0,2) (2,4,4) (3,3,4) (3,4,3) (4,1,3) (4,2,0) (4,3,0);
(0,1,1) (0,2,3) (1,1,4) (1,3,3) (1,4,0) (2,0,2) (2,3,0) (2,4,1) (3,0,0) (3,0,1) (4,2,4) (4,3,4) (4,4,2);
(0,1,1) (0,3,3) (1,0,1) (1,2,4) (1,4,0) (2,0,3) (2,1,0) (2,2,0) (3,3,4) (3,4,3) (4,0,1) (4,3,2) (4,4,4).
</code></pre>
<p>There is no way to place $14$ points in a $5\times 5 \times 5$ grid, as I checked.</p>
<hr>
<p>An easier way than brute-force search is layer-by-layer construction.</p>
<p>Each layer must contain $1$, $2$, or $3$ points.</p>
<p>So, to search for all $14$-point configurations on the $5\times 5 \times 5$ grid, I consider these layer configurations: <br>
$3:3:3:3:2$,
$\quad 3:3:3:2:3$,
$\quad 3:3:2:3:3$,
$\quad \color{gray}{3:2:3:3:3}$,
$\quad \color{gray}{2:3:3:3:3}$.</p>
<p>To build/generate, for example, the layer configuration $3:3:3:2:3$, I can add one point to
one of the $13$-point configurations: <br>
$\color{red}{2}:3:3:2:3$,
$\quad 3:\color{red}{2}:3:2:3$,
$\quad 3:3:\color{red}{2}:2:3$,
$\quad 3:3:3:\color{red}{1}:3$,
$\quad 3:3:3:2:\color{red}{2}$. </p>
<hr>
<p>$\color{gray}{\small{\mbox{(I hope I didn't make errors in the checking software)))}}}$</p>
|
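The no-four-coplanar condition is mechanical to verify: points $p_0,\dots,p_3$ are coplanar iff $\det[p_1-p_0,\ p_2-p_0,\ p_3-p_0]=0$. Here is a small checker sketch; the `config` below is the first $10$-point configuration from the answerer's list above, taken on trust from that data.

```python
from itertools import combinations

def coplanar(p0, p1, p2, p3):
    # Four points are coplanar iff the 3x3 determinant of difference
    # vectors vanishes (exact integer arithmetic on grid points).
    a, b, c = (tuple(x - y for x, y in zip(p, p0)) for p in (p1, p2, p3))
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
          - a[1] * (b[0] * c[2] - b[2] * c[0])
          + a[2] * (b[0] * c[1] - b[1] * c[0])) == 0

def no_four_in_plane(points):
    # Checks all C(len(points), 4) quadruples.
    return not any(coplanar(*q) for q in combinations(points, 4))

config = [(0,0,0), (0,0,1), (0,3,3), (1,1,3), (1,2,0),
          (1,3,3), (2,0,2), (2,1,0), (2,3,1), (3,1,2)]
print(no_four_in_plane(config))
```

For $10$ points this is only $\binom{10}{4}=210$ determinants, so exhaustive verification of every listed configuration is cheap.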
3,503,999 | <p>Consider the function <span class="math-container">$$f(x):=\frac{x-x_0}{\Vert x-x_0 \Vert^2} + \frac{x-x_1}{\Vert x-x_1 \Vert^2}$$</span></p>
<p>for two fixed <span class="math-container">$x_0,x_1 \in \mathbb R^2$</span> and <span class="math-container">$x \in \mathbb R^2$</span> as well. </p>
<p>Does anybody know what the Hessian of the function </p>
<p><span class="math-container">$$g(x):=\Vert f(x) \Vert^2$$</span> </p>
<p>is? It is such a difficult composition of functions that I find it very hard to compute.</p>
<p>The bounty is for a person who fully derives the Hessian of <span class="math-container">$f. $</span> Please let me know if you have any questions.</p>
| PierreCarre | 639,238 | <p>Well, you can just compute <span class="math-container">$g(x)$</span>... I'll denote <span class="math-container">$x_0,x_1$</span> by <span class="math-container">$u,v$</span>.</p>
<p><span class="math-container">$$
f(x) = \frac{x-u}{\|x-u\|^2}+\frac{x-v}{\|x-v\|^2}=\left(\frac{x_1-u_1}{(x_1-u_1)^2+(x_2-u_2)^2}+\frac{x_1-v_1}{(x_1-v_1)^2+(x_2-v_2)^2}, \right.
$$</span>
<span class="math-container">$$
\left.\frac{x_2-u_2}{(x_1-u_1)^2+(x_2-u_2)^2}+\frac{x_2-v_2}{(x_1-v_1)^2+(x_2-v_2)^2} \right)
$$</span></p>
<p>and so,</p>
<p><span class="math-container">$$
g(x)=\left(\frac{x_1-u_1}{(x_1-u_1)^2+(x_2-u_2)^2}+\frac{x_1-v_1}{(x_1-v_1)^2+(x_2-v_2)^2}\right)^2+
$$</span>
<span class="math-container">$$
\left(\frac{x_2-u_2}{(x_1-u_1)^2+(x_2-u_2)^2}+\frac{x_2-v_2}{(x_1-v_1)^2+(x_2-v_2)^2} \right)^2
$$</span></p>
<p>Now you can compute the Hessian matrix. </p>
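For what it's worth, a CAS can grind this computation out. Here is a sketch using SymPy's <code>hessian</code>, where I've placed the fixed points at $u=(1,0)$ and $v=(-1,0)$ just to keep the output small — these particular values are my own assumption, not from the question:

```python
from sympy import symbols, hessian

x, y = symbols('x y')

# f with u = (1, 0) and v = (-1, 0) substituted in.
f1 = (x - 1)/((x - 1)**2 + y**2) + (x + 1)/((x + 1)**2 + y**2)
f2 = y/((x - 1)**2 + y**2) + y/((x + 1)**2 + y**2)
g = f1**2 + f2**2

# 2x2 Hessian matrix of g; the entries are large rational functions.
H = hessian(g, (x, y))
```

The symbolic entries are huge, but they can be evaluated at any point away from $u$ and $v$, and the two mixed partials agree, as they must.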
|
3,503,999 | <p>Consider the function <span class="math-container">$$f(x):=\frac{x-x_0}{\Vert x-x_0 \Vert^2} + \frac{x-x_1}{\Vert x-x_1 \Vert^2}$$</span></p>
<p>for two fixed <span class="math-container">$x_0,x_1 \in \mathbb R^2$</span> and <span class="math-container">$x \in \mathbb R^2$</span> as well. </p>
<p>Does anybody know what the Hessian of the function </p>
<p><span class="math-container">$$g(x):=\Vert f(x) \Vert^2$$</span> </p>
<p>is? It is such a difficult composition of functions that I find it very hard to compute.</p>
<p>The bounty is for a person who fully derives the Hessian of <span class="math-container">$f. $</span> Please let me know if you have any questions.</p>
| Calvin Khor | 80,734 | <p>(Edit) RIP bounty, but here's the (correct) solution computed via Sympy/Jupyter.</p>
<p>setting <span class="math-container">$x_1=0$</span> and relabelling <span class="math-container">$x_0$</span> as <span class="math-container">$(x_0,y_0)$</span>, <span class="math-container">$x$</span> as <span class="math-container">$(x,y)$</span>, </p>
<pre><code>from sympy import symbols, simplify, diff

x, y, x0, y0 = symbols('x y x0 y0')
</code></pre>
<p>we have from</p>
<pre><code>f1 = (x-x0)/((x-x0)**2 + (y-y0)**2) + x/(x**2+y**2)
f2 = (y-y0)/((x-x0)**2 + (y-y0)**2) + y/(x**2+y**2)
g = f1*f1 + f2*f2
simplify(g)
</code></pre>
<p><span class="math-container">$$g(x,y) = \displaystyle \frac{\left(x \left(\left(x - x_{0}\right)^{2} + \left(y - y_{0}\right)^{2}\right) + \left(x - x_{0}\right) \left(x^{2} + y^{2}\right)\right)^{2} + \left(y \left(\left(x - x_{0}\right)^{2} + \left(y - y_{0}\right)^{2}\right) + \left(x^{2} + y^{2}\right) \left(y - y_{0}\right)\right)^{2}}{\left(x^{2} + y^{2}\right)^{2} \left(\left(x - x_{0}\right)^{2} + \left(y - y_{0}\right)^{2}\right)^{2}} $$</span></p>
<p><code>simplify(diff(g,x,x))</code> tells us that <span class="math-container">$\partial_x^2 g = $</span>
<span class="math-container">$$
\displaystyle \frac{2 \left(2 \left(x \left(\left(x - x_{0}\right)^{2} + \left(y - y_{0}\right)^{2}\right) + \left(x - x_{0}\right) \left(x^{2} + y^{2}\right)\right) \left(4 x^{3} \left(\left(x - x_{0}\right)^{2} + \left(y - y_{0}\right)^{2}\right)^{3} - 3 x \left(x^{2} + y^{2}\right) \left(\left(x - x_{0}\right)^{2} + \left(y - y_{0}\right)^{2}\right)^{3} + 3 \left(- x + x_{0}\right) \left(x^{2} + y^{2}\right)^{3} \left(\left(x - x_{0}\right)^{2} + \left(y - y_{0}\right)^{2}\right) + 4 \left(x - x_{0}\right)^{3} \left(x^{2} + y^{2}\right)^{3}\right) + 2 \left(y \left(\left(x - x_{0}\right)^{2} + \left(y - y_{0}\right)^{2}\right) + \left(x^{2} + y^{2}\right) \left(y - y_{0}\right)\right) \left(4 x^{2} y \left(\left(x - x_{0}\right)^{2} + \left(y - y_{0}\right)^{2}\right)^{3} - y \left(x^{2} + y^{2}\right) \left(\left(x - x_{0}\right)^{2} + \left(y - y_{0}\right)^{2}\right)^{3} + 4 \left(x - x_{0}\right)^{2} \left(x^{2} + y^{2}\right)^{3} \left(y - y_{0}\right) + \left(x^{2} + y^{2}\right)^{3} \left(- y + y_{0}\right) \left(\left(x - x_{0}\right)^{2} + \left(y - y_{0}\right)^{2}\right)\right) + 4 \left(x y \left(\left(x - x_{0}\right)^{2} + \left(y - y_{0}\right)^{2}\right)^{2} + \left(x - x_{0}\right) \left(x^{2} + y^{2}\right)^{2} \left(y - y_{0}\right)\right)^{2} + \left(2 x^{2} \left(\left(x - x_{0}\right)^{2} + \left(y - y_{0}\right)^{2}\right)^{2} + 2 \left(x - x_{0}\right)^{2} \left(x^{2} + y^{2}\right)^{2} - \left(x^{2} + y^{2}\right)^{2} \left(\left(x - x_{0}\right)^{2} + \left(y - y_{0}\right)^{2}\right) - \left(x^{2} + y^{2}\right) \left(\left(x - x_{0}\right)^{2} + \left(y - y_{0}\right)^{2}\right)^{2}\right)^{2}\right)}{\left(x^{2} + y^{2}\right)^{4} \left(\left(x - x_{0}\right)^{2} + \left(y - y_{0}\right)^{2}\right)^{4}}$$</span></p>
<p><code>simplify(diff(g,y,y))</code> tells us that <span class="math-container">$\partial_y^2 g = $</span>
<span class="math-container">$$\displaystyle \frac{2 \left(2 \left(x \left(\left(x - x_{0}\right)^{2} + \left(y - y_{0}\right)^{2}\right) + \left(x - x_{0}\right) \left(x^{2} + y^{2}\right)\right) \left(4 x y^{2} \left(\left(x - x_{0}\right)^{2} + \left(y - y_{0}\right)^{2}\right)^{3} - x \left(x^{2} + y^{2}\right) \left(\left(x - x_{0}\right)^{2} + \left(y - y_{0}\right)^{2}\right)^{3} + \left(- x + x_{0}\right) \left(x^{2} + y^{2}\right)^{3} \left(\left(x - x_{0}\right)^{2} + \left(y - y_{0}\right)^{2}\right) + 4 \left(x - x_{0}\right) \left(x^{2} + y^{2}\right)^{3} \left(y - y_{0}\right)^{2}\right) + 2 \left(y \left(\left(x - x_{0}\right)^{2} + \left(y - y_{0}\right)^{2}\right) + \left(x^{2} + y^{2}\right) \left(y - y_{0}\right)\right) \left(4 y^{3} \left(\left(x - x_{0}\right)^{2} + \left(y - y_{0}\right)^{2}\right)^{3} - 3 y \left(x^{2} + y^{2}\right) \left(\left(x - x_{0}\right)^{2} + \left(y - y_{0}\right)^{2}\right)^{3} + 3 \left(x^{2} + y^{2}\right)^{3} \left(- y + y_{0}\right) \left(\left(x - x_{0}\right)^{2} + \left(y - y_{0}\right)^{2}\right) + 4 \left(x^{2} + y^{2}\right)^{3} \left(y - y_{0}\right)^{3}\right) + 4 \left(x y \left(\left(x - x_{0}\right)^{2} + \left(y - y_{0}\right)^{2}\right)^{2} + \left(x - x_{0}\right) \left(x^{2} + y^{2}\right)^{2} \left(y - y_{0}\right)\right)^{2} + \left(2 y^{2} \left(\left(x - x_{0}\right)^{2} + \left(y - y_{0}\right)^{2}\right)^{2} + 2 \left(x^{2} + y^{2}\right)^{2} \left(y - y_{0}\right)^{2} - \left(x^{2} + y^{2}\right)^{2} \left(\left(x - x_{0}\right)^{2} + \left(y - y_{0}\right)^{2}\right) - \left(x^{2} + y^{2}\right) \left(\left(x - x_{0}\right)^{2} + \left(y - y_{0}\right)^{2}\right)^{2}\right)^{2}\right)}{\left(x^{2} + y^{2}\right)^{4} \left(\left(x - x_{0}\right)^{2} + \left(y - y_{0}\right)^{2}\right)^{4}}$$</span></p>
<p>there is a certain symmetry in the above, which makes the following output <code>0</code>:</p>
<pre><code>expr1 = diff(g,x,x)
expr2 = diff(g,y,y)
x3,y3 = symbols('x3 y3')
expr1=expr1.subs(x,x3)
expr1=expr1.subs(y,x)
expr1=expr1.subs(x3,y)
expr1=expr1.subs(x0,x3)
expr1=expr1.subs(y0,x0)
expr1=expr1.subs(x3,y0)
simplify(expr2-expr1)
</code></pre>
<p>And finally <code>simplify(diff(g,x,y))</code> gives <span class="math-container">$\partial_x\partial_y g = \partial_y \partial_x g = $</span>
<span class="math-container">$$\displaystyle \frac{4 \left(\left(x \left(\left(x - x_{0}\right)^{2} + \left(y - y_{0}\right)^{2}\right) + \left(x - x_{0}\right) \left(x^{2} + y^{2}\right)\right) \left(4 x^{2} y \left(\left(x - x_{0}\right)^{2} + \left(y - y_{0}\right)^{2}\right)^{3} - y \left(x^{2} + y^{2}\right) \left(\left(x - x_{0}\right)^{2} + \left(y - y_{0}\right)^{2}\right)^{3} + 4 \left(x - x_{0}\right)^{2} \left(x^{2} + y^{2}\right)^{3} \left(y - y_{0}\right) + \left(x^{2} + y^{2}\right)^{3} \left(- y + y_{0}\right) \left(\left(x - x_{0}\right)^{2} + \left(y - y_{0}\right)^{2}\right)\right) + \left(y \left(\left(x - x_{0}\right)^{2} + \left(y - y_{0}\right)^{2}\right) + \left(x^{2} + y^{2}\right) \left(y - y_{0}\right)\right) \left(4 x y^{2} \left(\left(x - x_{0}\right)^{2} + \left(y - y_{0}\right)^{2}\right)^{3} - x \left(x^{2} + y^{2}\right) \left(\left(x - x_{0}\right)^{2} + \left(y - y_{0}\right)^{2}\right)^{3} + \left(- x + x_{0}\right) \left(x^{2} + y^{2}\right)^{3} \left(\left(x - x_{0}\right)^{2} + \left(y - y_{0}\right)^{2}\right) + 4 \left(x - x_{0}\right) \left(x^{2} + y^{2}\right)^{3} \left(y - y_{0}\right)^{2}\right) + 2 \left(x y \left(\left(x - x_{0}\right)^{2} + \left(y - y_{0}\right)^{2}\right)^{2} + \left(x - x_{0}\right) \left(x^{2} + y^{2}\right)^{2} \left(y - y_{0}\right)\right) \left(x^{2} \left(\left(x - x_{0}\right)^{2} + \left(y - y_{0}\right)^{2}\right)^{2} + y^{2} \left(\left(x - x_{0}\right)^{2} + \left(y - y_{0}\right)^{2}\right)^{2} + \left(x - x_{0}\right)^{2} \left(x^{2} + y^{2}\right)^{2} + \left(x^{2} + y^{2}\right)^{2} \left(y - y_{0}\right)^{2} - \left(x^{2} + y^{2}\right)^{2} \left(\left(x - x_{0}\right)^{2} + \left(y - y_{0}\right)^{2}\right) - \left(x^{2} + y^{2}\right) \left(\left(x - x_{0}\right)^{2} + \left(y - y_{0}\right)^{2}\right)^{2}\right)\right)}{\left(x^{2} + y^{2}\right)^{4} \left(\left(x - x_{0}\right)^{2} + \left(y - y_{0}\right)^{2}\right)^{4}}$$</span></p>
|
3,503,999 | <p>Consider the function <span class="math-container">$$f(x):=\frac{x-x_0}{\Vert x-x_0 \Vert^2} + \frac{x-x_1}{\Vert x-x_1 \Vert^2}$$</span></p>
<p>for two fixed <span class="math-container">$x_0,x_1 \in \mathbb R^2$</span> and <span class="math-container">$x \in \mathbb R^2$</span> as well. </p>
<p>Does anybody know what the Hessian of the function </p>
<p><span class="math-container">$$g(x):=\Vert f(x) \Vert^2$$</span> </p>
<p>is? It is such a difficult composition of functions that I find it very hard to compute.</p>
<p>The bounty is for a person who fully derives the Hessian of <span class="math-container">$f. $</span> Please let me know if you have any questions.</p>
| Christian Blatter | 1,303 | <p>If you choose <span class="math-container">${\bf x}_0$</span> and <span class="math-container">${\bf x}_1$</span> at <span class="math-container">$(\pm a,0)$</span> of the <span class="math-container">$(x,y)$</span>-plane you have
<span class="math-container">$$f(x,y)={(x+a,y)\over(x+a)^2+y^2}+{(x-a,y)\over(x-a)^2+y^2}\ .$$</span>
In the following Mathematica notebook <span class="math-container">${\tt gxx}$</span>, <span class="math-container">${\tt gxy}$</span>, <span class="math-container">${\tt gyy}$</span> are the entries of the Hessian matrix
<span class="math-container">$$\left[\matrix{g_{xx}&g_{xy}\cr g_{xy}&g_{yy}\cr}\right]\ .$$</span>
Here is the result:</p>
<p><a href="https://i.stack.imgur.com/SWjVe.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SWjVe.jpg" alt="enter image description here"></a></p>
|
3,676,911 | <p>I'm trying to understand what is the Hessian matrix of <span class="math-container">$f\colon\mathbb{R}^{n}\to\mathbb{R}$</span>
defined by <span class="math-container">$f\left(x\right)=\left\langle Ax,x\right\rangle \cdot\left\langle Bx,x\right\rangle $</span>
where <span class="math-container">$A,B$</span> are symetric <span class="math-container">$n\times n$</span> matrices. What I know is that
if we let <span class="math-container">$g\left(x\right)=\left\langle Ax,x\right\rangle $</span> and <span class="math-container">$h\left(x\right)=\left\langle Bx,x\right\rangle $</span>
then <span class="math-container">$\nabla g\left(x\right)=2Ax,\nabla h\left(x\right)=2Bx$</span> and
<span class="math-container">$\nabla^{2}g\left(x\right)=2A,\nabla^{2}h\left(x\right)=2B$</span>. Also
by the product rule we have <span class="math-container">$\left(fg\right)'=f'g+fg'$</span> which then
gives us
<span class="math-container">\begin{align*}
\left(fg\right)'' & =f''g+f'g'+f'g'+fg''=\\
& =f''g+2f'g'+fg''
\end{align*}</span>
Regarding <span class="math-container">$\nabla f\left(x\right)$</span> as a column vector, I tried to
implement this on the given <span class="math-container">$f\left(x\right)$</span> and what I got is
<span class="math-container">$$
\nabla f\left(x\right)=\nabla\left(gh\right)\left(x\right)=2Ax\cdot\left\langle Bx,x\right\rangle +\left\langle Ax,x\right\rangle \cdot2Bx
$$</span>
which seems to have worked fine with a concrete example. But then
I got to the Hessian:
<span class="math-container">\begin{align*}
\nabla^{2}f\left(x\right) & =\nabla^{2}\left(gh\right)\left(x\right)=2A\cdot\left\langle Bx,x\right\rangle +\underset{{\scriptscriptstyle \left(\ast\right)}}{\underbrace{2Ax\cdot2Bx}}+\underset{{\scriptscriptstyle \left(\ast\right)}}{\underbrace{2Ax\cdot2Bx}}+\left\langle Ax,x\right\rangle \cdot2B=\\
& =2A\cdot\left\langle Bx,x\right\rangle +\underset{{\scriptscriptstyle \left(\ast\right)}}{\underbrace{8Ax\cdot Bx}}+\left\langle Ax,x\right\rangle \cdot2B
\end{align*}</span>
Now as <span class="math-container">$Ax,Bx$</span> in <span class="math-container">$\left(\ast\right)$</span> are both column vectors I
thought I should try this instead
<span class="math-container">$$
\nabla^{2}f\left(x\right)=2A\cdot\left\langle Bx,x\right\rangle +\underset{{\scriptscriptstyle \left(\ast\ast\right)}}{\underbrace{8Ax\cdot\left(Bx\right)^{T}}}+\left\langle Ax,x\right\rangle \cdot2B
$$</span>
But that didn't work with my example.</p>
<p>In general I feel the whole process of differentiating functions that
are represented by matrices is quite a mystery to me when it comes
to where I should transpose and so. Any help is appreciated. Thanks
in advance.</p>
| greg | 357,854 | <p>Your function is the product of the following scalar functions
<span class="math-container">$$\eqalign{
\alpha &= x^TAx \quad\implies d\alpha = (2Ax)^Tdx \\
\beta &= x^TBx \quad\implies d\beta = (2Bx)^Tdx \\
f &= \alpha\beta \\
}$$</span>
Calculate the differential and the gradient of <span class="math-container">$f$</span>.
<span class="math-container">$$\eqalign{
df &= \alpha\,d\beta + \beta\,d\alpha \\
&= 2(\alpha Bx + \beta Ax)^Tdx \\
\frac{\partial f}{\partial x}
&= 2(\alpha Bx + \beta Ax) \;=\; g
\qquad ({\rm the\,gradient\,vector}) \\
}$$</span>
Calculate the differential and the gradient of <span class="math-container">$g$</span>.
<span class="math-container">$$\eqalign{
dg
&= 2(\alpha B\,dx + Bx\,d\alpha + \beta A\,dx + Ax\,d\beta) \\
&= 2\left(\alpha B + Bx(2Ax)^T + \beta A + Ax(2Bx)^T\right)dx \\
&= 2\left(\alpha B + 2Bxx^TA + \beta A + 2Axx^TB\right)dx \\
\frac{\partial g}{\partial x}
&= 2\alpha B + 4Bxx^TA + 2\beta A + 4Axx^TB \;=\; H
\quad({\rm the\,hessian\,matrix})\\
}$$</span></p>
|
2,709,809 | <blockquote>
<p>Let $Q$ be the square with corners $0$,$1$,$i$,$1+i$ and $R$ the rectangle with corners $0$,$2$,$i$,$2+i$. Prove there's no conformal map $Q$ to $R$ (on the interiors) that extends to a surjective homeomorphism on the closure, and takes corners to corners.</p>
</blockquote>
<p>This is from an old qualifying exam. I'd appreciate a hint about what kind of theorem to use.</p>
| ts375_zk26 | 204,508 | <p>Suppose that there is a conformal map $w=f(z):Q \to R$ satisfying the described conditions.
We may assume that $f(0)=0$, $f(1)=2$, $f(i)=i$ and $f(1+i)=2+i$ by the correspondence of corners.
Let $C_y=\{x+iy: 0\le x\le 1\}$ $(0\le y\le 1)$ be a horizontal segment joining two points $iy$ and $1+iy$ in $Q$. Its image $f(C_y)$ is a curve joining $f(iy)$ and $f(1+iy)$ in $R$. Obviously the length of $f(C_y)$ is not less than $2$:
$$
\text{the length of }f(C_y)=\int_0^1 |f^\prime(x+iy)|dx\ge 2.$$
Therefore we have $$
\int_0^1 \left(\int_0^1 |f^\prime(x+iy)|dx\right)dy \ge 2.\tag{1}$$
On the other hand we have
\begin{align}
\int_0^1 \left(\int_0^1 |f^\prime(x+iy)|dx\right)dy&=\iint_Q |f^\prime(x+iy)| dxdy\\
&\le \left(\iint_Q |f^\prime(x+iy)|^2 dxdy\right)^\frac{1}{2}\left(\iint_Q dxdy\right)^\frac{1}{2}\\
&=\left(\text{Area of }R\right)^\frac{1}{2}\left(\text{Area of }Q\right)^\frac{1}{2}=\sqrt{2},
\end{align}
which contradicts the result of $(1)$.</p>
|
2,709,809 | <blockquote>
<p>Let $Q$ be the square with corners $0$,$1$,$i$,$1+i$ and $R$ the rectangle with corners $0$,$2$,$i$,$2+i$. Prove there's no conformal map $Q$ to $R$ (on the interiors) that extends to a surjective homeomorphism on the closure, and takes corners to corners.</p>
</blockquote>
<p>This is from an old qualifying exam. I'd appreciate a hint about what kind of theorem to use.</p>
| Dap | 467,147 | <p>Here's another argument that might be interesting. Apply the Schwarz reflection principle to extend along an edge of $Q$ to a map from a rectangle twice as big as $Q$ to a rectangle twice as big as $R.$ Continuing in this way extends the map to an automorphism of the whole complex plane, which must be a Möbius transformation. This is a contradiction because, for example, such a map doesn't preserve the cross-ratio of the corners of $Q.$</p>
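The cross-ratio claim at the end is easy to check numerically. A sketch, using one common convention for the cross-ratio (the choice of convention is mine — any fixed convention works for the comparison):

```python
def cross_ratio(z1, z2, z3, z4):
    """Cross-ratio (z1, z2; z3, z4) in one standard convention."""
    return (z1 - z3) * (z2 - z4) / ((z2 - z3) * (z1 - z4))

# Corners of Q and R, taken in corresponding order.
q = cross_ratio(0, 1, 1 + 1j, 1j)   # corners of the unit square Q
r = cross_ratio(0, 2, 2 + 1j, 1j)   # corners of the rectangle R

print(q, r)  # both real: 2 and 5 -- they differ, so no Mobius map matches corners
```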
|
3,500,418 | <p>I am finding the pointwise limit of the function <span class="math-container">$f_n(x) = \frac{x^n}{3-x^n}$</span> for <span class="math-container">$x ∈ [0,1]$</span> and <span class="math-container">$n ∈ N$</span></p>
<p>In order to do this I first divided the numerator and denominator by <span class="math-container">$x^n$</span>, yielding <span class="math-container">$$ \frac{1}{\frac{3}{x^n}-1}$$</span></p>
<p>Using this I have determined that <span class="math-container">$f_n(x) \to f(x) :=
\begin{cases}
0,\ 0 \leq x<1 \\
\frac{1}{2},\ x=1
\end{cases}$</span></p>
<p>If I assume that I have done this correctly then I could deduce that the convergence cannot be uniform on <span class="math-container">$[0, 1]$</span> since each <span class="math-container">$f_n$</span> is continuous, but the limit function is not continuous.</p>
<p>Am I correct to make this assumption or is there actually uniform convergence? If there is, how is it determined?</p>
<p>Due to this I have also adjusted the bounds of <span class="math-container">$x$</span> to <span class="math-container">$[0,\frac{1}{2}]$</span> to see if this instead would have uniform convergence. </p>
<p>In this case we would have <span class="math-container">$f_n(x) \to f(x) :=
\begin{cases}
0,\ 0 \leq x≤\frac{1}{2}
\end{cases}$</span></p>
<p>What are the implications for the uniform convergence of this? Surely this is also not uniform convergence for similar reasons as above?</p>
<p>I am struggling to get my head around all this so any help would be greatly appreciated!</p>
| José Carlos Santos | 446,262 | <p>Since:</p>
<ul>
<li><span class="math-container">$\cos(x)=1$</span> if and only if <span class="math-container">$x=2k\pi$</span> for some <span class="math-container">$k\in\mathbb Z$</span>;</li>
<li><span class="math-container">$(\forall x\in\mathbb R):\cos(x)\in[-1,1]$</span>,</li>
</ul>
<p>you have <span class="math-container">$\cos(x)<1$</span> if and only if <span class="math-container">$x\in\mathbb R\setminus2\pi\mathbb Z$</span>.</p>
|
39,424 | <p>I need to teach an intro course on number theory in 1 month. I was just notified. Since I have never studied it, what are good books to learn it quickly?</p>
| Micah Milinovich | 3,659 | <p>If it is a course in elementary number theory, look at "Elementary Number Theory" by Dudley.</p>
<p><a href="http://rads.stackoverflow.com/amzn/click/048646931X" rel="nofollow">http://www.amazon.com/Elementary-Number-Theory-Underwood-Dudley/dp/048646931X</a></p>
|
39,424 | <p>I need to teach an intro course on number theory in 1 month. I was just notified. Since I have never studied it, what are good books to learn it quickly?</p>
| Romeo | 7,867 | <p>For an introductory undergrad course I'd say the book to use by a long-shot is Kenneth Rosen's<a href="http://rads.stackoverflow.com/amzn/click/0321500318" rel="nofollow"> Elementary Number Theory and its Applications</a></p>
<p>The theory is all there, but it's placed nicely in a context appropriate for a mixed bag of undergrad students by a large number of interesting-but-doable exercises and informative historical notes. Modern applications to computer science, cryptography, etc are all there and can be emphasized (or not) as you see fit. </p>
<p>This is what I'd read if I were you. Last time I checked, the book was annoyingly expensive - but this is the only criticism of it I have. Most students give this book very favorable reviews, too. </p>
|
3,949,580 | <p>Is it possible to set this integral up without using substitution?</p>
<p><span class="math-container">$$\iint_D e^{x+y} \,\mathrm{d}x\,\mathrm{d}y\,,$$</span> where</p>
<p><span class="math-container">$$D = \left\{-1\le x+y \le 1, -1 \le -x + y \le 1\right\}$$</span></p>
<p>The answer is: <span class="math-container">$e-\frac{1}{e}$</span></p>
| MrCool690000 | 862,505 | <p>You still won't know how many balls are inside the jar until you look inside the jar. You can't add two numbers when you only know one of the numbers...</p>
|
2,788,015 | <p>I'm trying to solve an exercise that says</p>
<blockquote>
<p>Show that a locally compact space is $\sigma$-compact if and only if it is separable.</p>
</blockquote>
<p>Here locally compact means that the space is also Hausdorff. I have shown that separability implies $\sigma$-compactness, but I'm stuck on the other direction.</p>
<p>Assuming that $X$ is $\sigma$-compact, it seems enough to show that a compact Hausdorff space is separable. However, I don't have a clue how to do it. </p>
<p>My first thought was to try to show that a compact Hausdorff space is first countable, which would imply that it is second countable, and from there the proof is almost done. However, it seems that my assumption is not true, so I'm back at the starting point.</p>
<p>Some hint will be appreciated, thank you.</p>
<hr>
<p>EDIT: it seems that the exercise is wrong. Searching the web I found <a href="http://at.yorku.ca/cgi-bin/bbqa?forum=ask_a_topologist_2003&task=show_msg&msg=0014.0001" rel="nofollow noreferrer">a "sketch" of a proof</a> that a compact Hausdorff space need not be separable:</p>
<blockquote>
<p>Another natural example: take more than |R| copies of the unit interval
and take their product. This is compact Hausdorff (Tychonov theorem) but
not separable (proof not too hard, but omitted).</p>
<p>Hope this helped,</p>
<p>Henno Brandsma</p>
</blockquote>
<p>My knowledge of topology is limited, and the exercise appears in a book of analysis (it is part of exercise 18 on page 57 of <em>Analysis III</em> by Amann and Escher).</p>
<p>My hope is that @HennoBrandsma (a user of this site) appears and clarifies the question :)</p>
| spaceisdarkgreen | 397,125 | <p>Not sure if this is part of what you're wondering about, but will fill in the proof Henno omitted (slightly too long for a comment). </p>
<p>Let $\kappa >|\mathbb R|,$ $U$ and $U’$ be disjoint, open proper subsets of $I=[0,1],$ and for $\alpha<\beta<\kappa$ define $U_{\alpha,\beta} \subseteq I^\kappa$ to be the basis open set with $U$ at the $\alpha$-th, position, $U’$ at the $\beta$-th position and $I$ everywhere else. Let $D\subset I^\kappa$ be countable and label $D=\{f_1,f_2,\ldots\}.$</p>
<p>Then, for $\alpha<\kappa$ define the subset of $\mathbb N$ $$ A_\alpha = \{i\in\mathbb N: f_i(\alpha)\in U\}.$$ Since $\kappa > 2^{\mathbb N},$ by pigeonhole, there are $\alpha<\beta < \kappa $ such that $A_{\alpha}=A_\beta.$ So $\forall f\in D,$ either $f(\alpha)\in U$ and $f(\beta)\in U$ or $f(\alpha)\in I\setminus U$ and $f(\beta)\in I\setminus U.$ Thus $D\cap U_{\alpha,\beta} = \emptyset,$ so $D$ is not dense.</p>
|
3,232,341 | <p>How would I show this? I know a directed graph with no cycles has at least one node of outdegree zero (because a graph where every node has outdegree one contains a cycle), but do not know where to go from here.</p>
| Markoff Chainz | 377,313 | <p>Suppose there exists a graph with no cycles in which no node has indegree <span class="math-container">$0$</span>. Then each node has indegree <span class="math-container">$1$</span> or higher. Pick any node; since its indegree is at least <span class="math-container">$1$</span>, we can move to one of its parent nodes. That node also has indegree at least <span class="math-container">$1$</span>, so we can keep repeating this procedure, and since the graph is finite we must eventually arrive at a node we have already visited. This exhibits a cycle, contradicting our initial assumption. So we have proved that every directed graph with no cycles has at least one node of indegree zero.</p>
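The walk described above can be turned into a small procedure. A sketch — the graph representation and names are my own:

```python
def find_indegree_zero(nodes, edges):
    """Walk parent links from an arbitrary node. In a finite graph with
    no cycles this must stop, within len(nodes) steps, at a node whose
    indegree is zero; revisiting a node would exhibit a cycle."""
    parents = {v: [] for v in nodes}
    for u, v in edges:
        parents[v].append(u)
    node, seen = nodes[0], set()
    while parents[node]:
        if node in seen:
            raise ValueError("cycle detected")
        seen.add(node)
        node = parents[node][0]   # step to any parent
    return node

print(find_indegree_zero([3, 2, 1], [(1, 2), (2, 3), (1, 3)]))  # 1
```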
|
215,333 | <p>There are many symbols for understanding internet-related properties: <code>$NetworkConnected</code>, <code>PingTime</code>, <code>NetworkPacketTrace</code>, <code>NetworkPacketRecording</code>, etc.</p>
<p>But is there any convenient way of testing your network's upload speed from within Mathematica?</p>
| Carl Lange | 57,593 | <p>The easiest method I can think of to get an estimate is to upload a file to a server and measure how long it takes. Should give a reasonable guess, but like all of these things, it can only be a guess. In this example latency to WRI's servers may add a lot.</p>
<pre><code>file = "mytestfile";
time = AbsoluteTiming[CopyFile[file, CloudObject["speedtest"]]]
</code></pre>
<blockquote>
<p><code>34.2041</code></p>
</blockquote>
<pre><code>size = FileSize[file]
</code></pre>
<blockquote>
<p><code>Quantity[98.5052, "Megabytes"]</code></p>
</blockquote>
<pre><code>speed = size/Quantity[time, "Seconds"]
</code></pre>
<blockquote>
<p><code>Quantity[2.72684, ("Megabytes")/("Seconds")]</code></p>
</blockquote>
<p>So in this case it gives my speed as <code>2.72MB/s</code>. This is reasonably close to my actual upload speed of <code>5MB/s</code>. I re-ran and it gave me <code>4.74MB/s</code>, which is pretty on the money. You could use <code>RepeatedTiming</code> instead of <code>AbsoluteTiming</code> to give a better estimate, but how long you want to run for and the size of the file is pretty much up to you.</p>
<p>You can replace <code>CloudObject</code> with <code>RemoteFile</code>, an <code>scp</code> or <code>ftp</code> link, a <code>Url</code> and other options described in the documentation if you don't have a Wolfram Cloud account. This also allows you to control for the download speed of the server you're sending the file to.</p>
|
3,426,756 | <p>From a point <span class="math-container">$O$</span> on the circle <span class="math-container">$x^2+y^2=d^2$</span>, tangents <span class="math-container">$OP$</span> and <span class="math-container">$OQ$</span> are drawn to the ellipse <span class="math-container">$\frac{x^2}{a^2}+\frac{y^2}{b^2}=1$</span>, <span class="math-container">$a>b$</span>. Show that the locus of the midpoint of chord PQ is given by <span class="math-container">$$x^2+y^2=d^2\bigg[\frac{x^2}{a^2}+\frac{y^2}{b^2}\bigg]^2$$</span></p>
<p>I recognize that the equation of a chord whose midpoint is at <span class="math-container">$(h,k)$</span> is given by <span class="math-container">$\frac{xh}{a^2}+\frac{yk}{b^2}=\frac{h^2}{a^2}+\frac{k^2}{b^2}$</span></p>
<p>I also recognize that PQ is the chord of contact, but to find its equation using the chord of contact formula I would require the coordinates of point O which I do not have.</p>
<p>Here I am getting the equation in terms of <span class="math-container">$x,y,h,k$</span>, but to find the locus I need the equation entirely in the form of <span class="math-container">$h,k$</span>, right? So how do I eliminate <span class="math-container">$x,y$</span> from the equation of the locus of the midpoint?</p>
| Arctic Char | 629,362 | <p>This may not be the solution you are looking for:</p>
<p>Let <span class="math-container">$\mathcal C=\{x^2+y^2=1\}$</span> be the unit circle. Let <span class="math-container">$O' = (\alpha, \beta)$</span> be any point outside of this circle. Let <span class="math-container">$O'P'$</span> and <span class="math-container">$O'Q'$</span> be two tangent line to <span class="math-container">$\mathcal C$</span>. One can check that the midpoint <span class="math-container">$m' = (x, y)$</span> of <span class="math-container">$P'Q'$</span> is given by (why?)</p>
<p><span class="math-container">$$(x, y) = m' = \frac{1}{\alpha^2+ \beta^2} (\alpha, \beta).$$</span></p>
<p>Now assume that <span class="math-container">$O = (\alpha, \beta)$</span> is on the ellipse <span class="math-container">$\{ (ax)^2 + (by)^2 = d^2\}$</span>. Thus <span class="math-container">$(a\alpha)^2 + (b\beta)^2 = d^2$</span>. Then </p>
<p><span class="math-container">$$ d^2(x^2 + y^2)^2 = \frac{d^2}{(\alpha^2 + \beta^2)^2}$$</span></p>
<p>and </p>
<p><span class="math-container">$$(ax)^2 +(by)^2 = \frac{(a\alpha)^2 + (by)^2}{(\alpha^2 + \beta^2)^2}=\frac{d^2}{(\alpha^2 + \beta^2)^2}$$</span></p>
<p>Thus the locus of the midpoint <span class="math-container">$m'$</span> is given by</p>
<p><span class="math-container">$$ \tag{1} (ax)^2 + (by)^2 = d^2 (x^2+ y^2)^2. $$</span></p>
<p>The above is related to your question in the following way: Consider the transformation: <span class="math-container">$$(x, y) \mapsto (x/a, y/b).$$</span>
Under this transformation, the ellipse <span class="math-container">$\frac{x^2}{a^2} + \frac{y^2}{b^2}=1$</span> is sent to the unit circle <span class="math-container">$\mathcal C$</span>, while the circle <span class="math-container">$x^2 + y^2 = d^2$</span> is sent to the ellipse <span class="math-container">$(ax)^2 + (by)^2 = d^2$</span>. The crucial observation is that tangent lines <span class="math-container">$OP, OQ$</span> are also sent to tangent lines <span class="math-container">$O'P', O'Q'$</span>, and the midpoint <span class="math-container">$m$</span> of <span class="math-container">$PQ$</span> is sent to the midpoint <span class="math-container">$m'$</span> of <span class="math-container">$P'Q'$</span> (see <a href="https://math.stackexchange.com/questions/3128094/prove-that-general-affine-transformations-preserve-ratios-of-lengths">here</a>). Thus if you take the inverse transformation </p>
<p><span class="math-container">$$ (x, y) \mapsto (ax, by)$$</span></p>
<p>Then the locus of <span class="math-container">$m'$</span> will be sent to the locus of <span class="math-container">$m$</span>. This implies your equation: if you change <span class="math-container">$x$</span>, <span class="math-container">$y$</span> to <span class="math-container">$x/a$</span>, <span class="math-container">$y/b$</span> respectively in (1), you get </p>
<p><span class="math-container">$$x^2 + y^2 = d^2 \left(\frac{x^2}{a^2}+ \frac{y^2}{b^2}\right)^2.$$</span></p>
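As a numerical sanity check of this final equation, one can pick a point $O$ on the circle, intersect the standard chord of contact $\frac{xx_0}{a^2}+\frac{yy_0}{b^2}=1$ with the ellipse, and test the midpoint against the locus. The specific values $a^2=4$, $b^2=1$, $d=3$ below are my own choices:

```python
import math

a2, b2, d = 4.0, 1.0, 3.0        # a^2 = 4, b^2 = 1, d = 3 (d > a, so O is outside the ellipse)
t = 0.7
x0, y0 = d * math.cos(t), d * math.sin(t)   # O on the circle x^2 + y^2 = d^2

# Chord of contact from O:  x*x0/a2 + y*y0/b2 = 1.  Substitute
# x = a2*(1 - y*y0/b2)/x0 into x^2/a2 + y^2/b2 = 1  ->  quadratic in y.
A = a2 * y0**2 / (b2**2 * x0**2) + 1.0 / b2
B = -2.0 * a2 * y0 / (b2 * x0**2)
C = a2 / x0**2 - 1.0
y1 = (-B + math.sqrt(B*B - 4*A*C)) / (2*A)
y2 = (-B - math.sqrt(B*B - 4*A*C)) / (2*A)
ym = (y1 + y2) / 2
xm = a2 * (1 - ym * y0 / b2) / x0   # the line is affine, so midpoints correspond

lhs = xm**2 + ym**2
rhs = d**2 * (xm**2 / a2 + ym**2 / b2) ** 2
print(abs(lhs - rhs))               # essentially zero (floating-point round-off)
```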
|
1,338,980 | <p>Suppose you have a set of data $\{x_i\}$ and $\{y_i\}$ with $i=0,\dots,N$. In order to find two parameters $a,b$ such that the line
$$
y=ax+b,
$$
gives the best linear fit, one proceeds by minimizing the quantity
$$
\sum_i^N[y_i-ax_i-b]^2
$$
with respect to $a,b$, obtaining well-known results. </p>
<p>Imagine now that we desire a fit with a function like
$$
y=ax^p+b.
$$
After some manipulation one obtains the following relations
$$
a=\frac{N\sum_i(y_ix_i^p)-\sum_iy_i\cdot\sum_ix_i^p}{N\sum_i(x_i^p)^2-\left(\sum_ix_i^p\right)^2},
$$
$$
b=\frac{1}{N}[\sum_iy_i-a\sum_ix_i^p]
$$
and
$$
\frac{1}{N}\left[N\sum_i(y_ix_i^p\ln x_i)-\sum_iy_i\cdot\sum_ix_i^p\ln x_i\right]=\frac{a}{N}\left[N\sum_i(x_i^p)^2\ln x_i-\sum_ix_i^p\cdot\sum_ix_i^p\ln x_i\right].
$$
To me it seems that from this it is nearly impossible to extract the exponent $p$. Am I correct?</p>
| Claude Leibovici | 82,404 | <p>The model being intrinsically nonlinear with respect to its parameters, you will need nonlinear regression.</p>
<p>However, the problem is to provide good starting estimates. One way to do it is to rewrite the model as $$y=a e^{qx}+b$$ with $q=\log(p)$. Now, use a hint already provided by Yves Daoust <a href="https://math.stackexchange.com/questions/1163618/exponential-curve-fit">here</a>: choose three points $(x_1,x_2,x_3)$ such that $x_2\approx \frac{x_1+x_3}{2}$. Now, write $$A=\frac{y_3-y_1}{y_2-y_1},$$ substitute and simplify; this leads to $$A=1+e^{q\frac{x_3-x_1}{2} },$$ which gives $$q=\frac{2}{x_3-x_1}\log (A-1)$$ and then $p=e^q$. From here, $$a=\frac{y_1-y_3}{e^{q x_1}-e^{q x_3}},$$ $$b=y_1-a e^{qx_1}.$$ Now you have consistent estimates for $(a,b,p)$, and the nonlinear regression will converge in very few iterations.</p>
<p>Using the data given in page 18 of JJacquelin's book, let us take (approximate values for the $x$'s) the three points $(-1,0.418)$, $(0,0.911)$, $(1,3.544)$. Applying the above, this gives $q=1.67537$, $p=5.34077$, $a=0.606574$, $b=0.304426$.</p>
|
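The three-point initialisation described in the answer above is easy to check numerically. The sketch below assumes the same three (approximate) data points quoted there; everything else follows the stated formulas for $A$, $q$, $p$, $a$ and $b$.

```python
import math

# Three (x, y) points quoted in the answer, with x2 midway between x1 and x3.
x1, x2, x3 = -1.0, 0.0, 1.0
y1, y2, y3 = 0.418, 0.911, 3.544

A = (y3 - y1) / (y2 - y1)                 # A = 1 + exp(q * (x3 - x1) / 2)
q = 2.0 / (x3 - x1) * math.log(A - 1.0)
p = math.exp(q)

# With q known, a and b follow from two of the points.
a = (y1 - y3) / (math.exp(q * x1) - math.exp(q * x3))
b = y1 - a * math.exp(q * x1)

print(q, p, a, b)  # close to 1.67537, 5.34077, 0.606574, 0.304426
```

These values reproduce the estimates quoted in the answer, which can then seed an iterative nonlinear solver.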
1,338,832 | <p>Assume we have a group consisting of both women and men. (In my example it is 67 women and 43 men but that is not important.) The women are indistinguishable and the men are also indistinguishable.</p>
<p>In how many ways can we pick a subgroup consisting of $n$ women and $n$ men, i.e., the same number of women and men?</p>
<ul>
<li><p>For $n = 1$ I found the answer to be $2 = 2 \cdot 1$. ($\{(m,w), (w,m)\}$)</p></li>
<li><p>For $n = 2$ I found the answer to be $6 = 3 \cdot 2$. ($\{(m,m,w,w), (w,w,m,m), (m,w,w,m), (w,m,m,w), (m,w,m,w), (w,m,w,m)\}$.)</p></li>
</ul>
<p>Therefore, I assume that for a random number $n$, the answer is $n \cdot (n - 1)$.</p>
<p>How do I prove this?</p>
<p><strong>Update</strong></p>
<p>My assumption is wrong.</p>
| Ofir Schnabel | 140,778 | <p>What you are asking is how many vectors of length $2n$ there are when each coordinate holds either a man or a woman and the two occur equally often. The answer is $$\frac{(2n)!}{(n!)^2}.$$ Therefore your guess fails: for $n=3$ this gives $20$, not $n(n-1)$.</p>
|
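The closed form $(2n)!/(n!)^2$ can be cross-checked against brute-force enumeration for small $n$, counting, as the answer does, ordered length-$2n$ arrangements with equally many men and women:

```python
from itertools import product
from math import factorial

def closed_form(n):
    # (2n)! / (n!)^2 ordered sequences with n men and n women
    return factorial(2 * n) // factorial(n) ** 2

def brute_force(n):
    # enumerate all length-2n strings over {'m', 'w'} and keep the balanced ones
    return sum(1 for s in product("mw", repeat=2 * n) if s.count("m") == n)

for n in range(1, 6):
    print(n, closed_form(n), brute_force(n))
```

For $n=1$ and $n=2$ this reproduces the asker's hand counts of $2$ and $6$, and for $n=3$ it gives $20$.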
1,030,335 | <blockquote>
<p>Let <span class="math-container">$n$</span> and <span class="math-container">$r$</span> be positive integers with <span class="math-container">$n \ge r$</span>. Prove that:</p>
<p><span class="math-container">$$\binom{r}{r} + \binom{r+1}{r} + \cdots + \binom{n}{r} = \binom{n+1}{r+1}.$$</span></p>
</blockquote>
<p>Tried proving it by induction but got stuck. Any help with proving it by induction or any other proof technique is appreciated.</p>
| Michael Hardy | 11,667 | <p>Assign numbers $1,2,3,\ldots,n,n+1$ to $n+1$ objects. The number of ways to choose $r+1$ of them is $\dbinom{n+1}{r+1}$.</p>
<p><em>Either</em> you choose the very last one and $r$ others bearing lower numbers (the number of ways to do that is $\dbinom n r$),</p>
<p><em>or</em> you choose the one just before the last one and $r$ others bearing lower numbers (the number of ways to do that is $\dbinom{n-1}r$),</p>
<p><em>or</em> you choose the one just before <em>that</em> one and $r$ others bearing lower numbers (the number of ways to do that is $\dbinom{n-2}r$),</p>
<p><em>or</em> $\ldots\ldots$</p>
|
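The hockey-stick identity proved combinatorially above can also be verified numerically for small parameters; this is only a sanity check, not a proof:

```python
from math import comb

def hockey_stick(n, r):
    # sum_{m=r}^{n} C(m, r), which should equal C(n+1, r+1)
    return sum(comb(m, r) for m in range(r, n + 1))

for n in range(1, 12):
    for r in range(1, n + 1):
        assert hockey_stick(n, r) == comb(n + 1, r + 1)
print("identity verified for all 1 <= r <= n <= 11")
```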
19,261 | <p>Every simple graph $G$ can be represented ("drawn") by numbers in the following way:</p>
<ol>
<li><p>Assign to each vertex $v_i$ a number $n_i$ such that all $n_i$, $n_j$ are coprime whenever $i\neq j$. Let $V$ be the set of numbers thus assigned. <br/></p></li>
<li><p>Assign to each maximal clique $C_j$ a unique prime number $p_j$ which is coprime to every number in $V$.</p></li>
<li><p>Assign to each vertex $v_i$ the product $N_i$ of its number $n_i$ and the prime numbers $p_k$ of the maximal cliques it belongs to.</p></li>
</ol>
<blockquote>
<p>Then $v_i$, $v_j$ are adjacent iff $N_i$
and $N_j$ are not coprime,</p>
</blockquote>
<p>i.e. there is a (maximal) clique they both belong to. <strong>Edit:</strong> It's enough to assign $n_i = 1$ when $v_i$ is not isolated and does not share all of its cliques with another vertex.</p>
<p>Being free in assigning the numbers $n_i$ and $p_j$ lets arise a lot of possibilites, but also the following question:</p>
<blockquote>
<p><strong>QUESTION</strong></p>
<p>Can the numbers be assigned <em>systematically</em> such that the greatest $N_i$
is minimal (among all that do the job) — and if so: how?</p>
</blockquote>
<p>It is obvious that the $n_i$ in the first step have to be primes for the greatest $N_i$ to be minimal. I have taken the more general approach for other - partly <a href="https://mathoverflow.net/questions/19076/bringing-number-and-graph-theory-together-a-conjecture-on-prime-numbers/19080#19080">answered </a> - questions like "Can the numbers be assigned such that the set $\lbrace N_i \rbrace_{i=1,..,n}$ fulfills such-and-such conditions?"</p>
| Sebastian | 4,572 | <p>I think you should read something about the Ricci flow and Perelman's work (for 3-manifolds), or Seiberg–Witten/Yang–Mills theory (for 4-manifolds). These theories give you very deep results in topology, but the whole theory is geometric.</p>
|
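The prime-labelling construction in the question (steps 1 to 3) can be sketched on a small example. The graph below, its hand-listed maximal cliques, and the primes 2 and 3 are hypothetical choices; all $n_i$ are taken to be 1, which suffices here since no vertex is isolated.

```python
from math import gcd

# Small test graph: a triangle {0, 1, 2} with a pendant edge {2, 3}.
edges = {(0, 1), (0, 2), (1, 2), (2, 3)}
maximal_cliques = [{0, 1, 2}, {2, 3}]   # listed by hand for this graph
clique_primes = [2, 3]                  # one distinct prime per maximal clique

# Step 3: N_i is the product of the primes of the maximal cliques containing v_i
# (taking every n_i = 1).
N = {}
for v in range(4):
    label = 1
    for clique, prime in zip(maximal_cliques, clique_primes):
        if v in clique:
            label *= prime
    N[v] = label

# Claim: v_i and v_j are adjacent iff gcd(N_i, N_j) > 1.
for i in range(4):
    for j in range(i + 1, 4):
        assert ((i, j) in edges) == (gcd(N[i], N[j]) > 1)
print(N)  # {0: 2, 1: 2, 2: 6, 3: 3}
```

The adjacency test passes for every pair, illustrating the boxed claim in the question.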
4,539,739 | <p>Here is the curve <span class="math-container">$y=2^{n-1}\prod\limits_{k=0}^n \left(x-\cos{\frac{k\pi}{n}}\right)$</span>, shown with example <span class="math-container">$n=8$</span>, together with the unit circle centred at the origin.</p>
<p><a href="https://i.stack.imgur.com/mBNbY.png" rel="noreferrer"><img src="https://i.stack.imgur.com/mBNbY.png" alt="enter image description here" /></a></p>
<p>Call the arc lengths between neighboring roots <span class="math-container">$l_1, l_2, l_3, ..., l_n$</span>.</p>
<blockquote>
<p>What is the exact value of <span class="math-container">$L=\lim\limits_{n\to\infty}\prod\limits_{k=1}^n l_k$</span> ?</p>
</blockquote>
<p>Desmos suggests that <span class="math-container">$L$</span> exists and is approximately <span class="math-container">$2.94$</span>. Maybe <span class="math-container">$\frac{8}{e}$</span> ?</p>
<p><strong>Context</strong></p>
<p>I have studied this curve, and found that it has several interesting properties.</p>
<ul>
<li><p>The curve is tangent to the unit circle at <span class="math-container">$n$</span> points, which are uniformly spaced around the circle.</p>
</li>
<li><p>The magnitude of the gradient at each root inside the circle is <span class="math-container">$n$</span>; the magnitude of the gradient at <span class="math-container">$x=\pm1$</span> is <span class="math-container">$2n$</span>.</p>
</li>
<li><p>The total area of the regions enclosed by the curve and the <em>x</em>-axis is <span class="math-container">$1$</span>.</p>
</li>
<li><p>As <span class="math-container">$n\to\infty$</span>, the volume of revolution of those regions about the <em>x</em>-axis approaches <span class="math-container">$\frac{1}{2}$</span> of the volume of the unit sphere, and the volume of revolution of those regions about the <em>y</em>-axis approaches <span class="math-container">$\frac{1}{\pi}$</span> of the volume of the unit sphere.</p>
</li>
<li><p>As <span class="math-container">$n\to\infty$</span>, if the curve is magnified so that the average area of those regions is always <span class="math-container">$2$</span>, then the product of those areas approaches <span class="math-container">$4\cosh^2{\left(\frac{\sqrt{\pi^2-8}}{2}\right)}\approx6.18$</span>, as shown <a href="https://math.stackexchange.com/a/4472892/398708">here</a>.</p>
</li>
</ul>
<p>I recently discovered that the product of arc lengths between neighboring roots seems to converge to a positive number as <span class="math-container">$n\to\infty$</span>. Hence, my question.</p>
<p>(If you know any other interesting properties of this curve, feel free to add them in the comments.)</p>
<p><strong>My attempt</strong></p>
<p>The part of the curve inside the circle can be expressed as <span class="math-container">$y=-\sqrt{1-x^2}\sin{(n\arccos{x})}$</span>. So</p>
<p><span class="math-container">$$L=\lim\limits_{n\to\infty}\prod\limits_{k=1}^n \int_{\cos{\frac{k\pi}{n}}}^{\cos{\frac{(k-1)\pi}{n}}}\sqrt{1+\left(n\cos{(n\arccos{x})}+\frac{x\sin{(n\arccos{x})}}{\sqrt{1-x^2}}\right)^2}\,dx$$</span></p>
<p>I do not know how to evaluate this limit. I tried taking the log of the product, without success. I tried to approximate each integral as areas of triangles (hoping that that approximation would become equality with the limit) and a rectangle at the bottom, multiplying each triangle's area by <span class="math-container">$\frac{4}{\pi}$</span> (which is the ratio of areas under sine or cosine to the area of an inscribed triangle), but that resulted in a different limit.</p>
<p>EDIT</p>
<p>Further numerical analysis strongly suggests that <span class="math-container">$L=\frac{8}{e}$</span>. I noticed that when <span class="math-container">$n$</span> doubles, the ratio of the two products is a certain number (which is close to <span class="math-container">$1$</span>), and when <span class="math-container">$n$</span> is doubled again, the ratio's distance to <span class="math-container">$1$</span> is approximately halved. So then I projected that the product indeed approaches <span class="math-container">$\frac{8}{e}$</span>. (I don't have Mathematica; anyone who has it is welcome to confirm this.)</p>
<p>I have simplified the expression of <span class="math-container">$L$</span>. Letting <span class="math-container">$x=\cos{\frac{u}{n}}$</span>, and ignoring the <span class="math-container">$1$</span> in the <span class="math-container">$\sqrt{1+(...)^2}$</span> (I think this is OK since <span class="math-container">$n\to\infty$</span>), we get</p>
<p><span class="math-container">$$L=\lim\limits_{n\to\infty}\prod\limits_{k=1}^n \int_{k\pi}^{(k-1)\pi}\sqrt{\left(n\cos{u}+(\sin{u})\cot{\frac{u}{n}}\right)^2}\left(-\frac{1}{n}\sin{\frac{u}{n}}\right)du$$</span></p>
<p><span class="math-container">$$\space{}=\lim\limits_{n\to\infty}\prod\limits_{k=1}^n \int_{(k-1)\pi}^{k\pi}\left|(\cos{u})\sin{\frac{u}{n}}+\frac{1}{n}(\sin{u})\cos{\frac{u}{n}}\right|du$$</span></p>
<p>So why is this equal to <span class="math-container">$\frac{8}{e}$</span> ?</p>
| metamorphy | 543,769 | <p>This question is closely related to computing the discriminant of Chebyshev polynomials of the second kind, found in literature (e.g., see G. Szegő <a href="https://books.google.com/books?id=3hcW8HBh7gsC" rel="nofollow noreferrer"><em>Orthogonal polynomials</em></a>, theorem <span class="math-container">$6.71$</span>).</p>
<p>I'm taking an elementary route here. Denote <span class="math-container">$x_{n,k}=\cos(k\pi/n)$</span> for <span class="math-container">$0\leqslant k\leqslant n$</span>, so that <span class="math-container">$$S_n(x)=2^{n-1}\prod_{k=0}^n(x-x_{n,k})$$</span> is our polynomial. Then "ignoring the <span class="math-container">$1$</span> in the <span class="math-container">$\sqrt{1+(\dots)^2}$</span>" amounts to saying that <span class="math-container">$l_k:=l_{n,k}$</span> is approximately twice the supremum of <span class="math-container">$\big|S_n(x)\big|$</span> on <span class="math-container">$x_{n,k-1}<x<x_{n,k}$</span>; more precisely, we have <span class="math-container">$$L=\lim_{n\to\infty}\prod_{k=1}^n l_{n,k}=\lim_{n\to\infty}\prod_{k=1}^n\color{LightGray}{\Big[}2\sup_{x_{n,k-1}<x<x_{n,k}}\big|S_n(x)\big|\color{LightGray}{\Big]}=\lim_{n\to\infty}2^n\prod_{k=1}^n \big|S_n(x_{n,k}')\big|,$$</span> where <span class="math-container">$x_{n,k}'$</span> (for <span class="math-container">$1\leqslant k\leqslant n$</span>) are the roots of <span class="math-container">$S_n'(x)$</span>.</p>
<p>From <a href="https://en.wikipedia.org/wiki/Resultant#Definition" rel="nofollow noreferrer">properties of resultants</a>, we know that if <span class="math-container">$f$</span> is a polynomial of degree <span class="math-container">$d$</span>, with leading coefficient <span class="math-container">$a$</span>, roots <span class="math-container">$x_{d,k}$</span> for <span class="math-container">$1\leqslant k\leqslant d$</span>, and roots of its derivative <span class="math-container">$x'_{d,k}$</span> for <span class="math-container">$0<k<d$</span>, then <span class="math-container">$$\prod_{k=1}^{d-1}f(x_{d,k}')=\frac1{d^d a}\prod_{k=1}^d f'(x_{d,k}).$$</span></p>
<p>In our case <span class="math-container">$d=n+1$</span> and <span class="math-container">$a=2^{n-1}$</span>, so that <span class="math-container">$$L=\lim_{n\to\infty}\frac{2L_n}{(n+1)^{n+1}},\qquad L_n=\prod_{k=0}^n\big|S_n'(x_{n,k})\big|.$$</span></p>
<p>From <span class="math-container">$S_n(\cos t)=-\sin t\sin nt$</span> we get <span class="math-container">$S_n'(\cos t)=n\cos nt+\cot t\sin nt$</span>, hence <span class="math-container">$$S_n'(1)=2n,\quad S_n'(-1)=2n(-1)^n,\quad S_n'(x_{n,k})=n(-1)^k\quad(0<k<n)$$</span> and <span class="math-container">$L_n=4n^{n+1}$</span>, thus <span class="math-container">$L=\lim\limits_{n\to\infty}8/(1+1/n)^{n+1}=8/e$</span> as claimed.</p>
|
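The limit $8/e \approx 2.943$ can be probed numerically by approximating each arc with a fine polyline. The parametrisation $x=\cos t$, $y=-\sin t\,\sin nt$ comes from the question's formula for the part of the curve inside the circle; this is only a numerical illustration of the result proved above.

```python
import math

def arc_length_product(n, samples_per_arc=1500):
    # Inside the unit circle the curve is x = cos t, y = -sin t * sin(n t),
    # t in [0, pi]; the roots x = cos(k pi / n) correspond to t = k pi / n.
    prod = 1.0
    for k in range(1, n + 1):
        t0 = (k - 1) * math.pi / n
        length = 0.0
        px, py = math.cos(t0), -math.sin(t0) * math.sin(n * t0)
        for j in range(1, samples_per_arc + 1):
            t = t0 + (math.pi / n) * j / samples_per_arc
            x, y = math.cos(t), -math.sin(t) * math.sin(n * t)
            length += math.hypot(x - px, y - py)   # polyline approximation
            px, py = x, y
        prod *= length
    return prod

P = arc_length_product(200)
print(P, 8 / math.e)  # the product approaches 8/e = 2.943...
```

At $n=200$ the product is already within a few percent of $8/e$, consistent with the asker's Desmos observation of roughly $2.94$.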
3,313,272 | <p><strong>Inscribe in a given cone, the height of which is equal to the radius of the base, a cylinder whose volume is a maximum.</strong> </p>
<p>I'm stuck. The answer key says the cylinder's height should be <span class="math-container">$\frac23$</span> the radius of the base of the cone, but the answer I'm getting is <span class="math-container">$\frac13$</span>.</p>
<p>The volume of a cylinder is <span class="math-container">$\pi r^2h$</span>, where <span class="math-container">$h$</span> is the height and <span class="math-container">$r$</span> is the radius of its base. Since the inscribing cone in this example has height equal to the radius of its own base, we know by similar triangles that any unit of height "added" to the cylinder is "taken" from the the radius of its base. Therefore, the volume <span class="math-container">$V$</span> of the inscribed cylinder is <span class="math-container">$$ \pi h(r-h)^2,$$</span> where <span class="math-container">$r$</span> is the radius of the <strong>cone</strong> and <span class="math-container">$h$</span> is the height of the inscribed cylinder. By the product rule, </p>
<p><span class="math-container">$$\frac{dV}{dh} = \pi(r-h)^2 - 2\pi h(r-h).$$</span></p>
<p>Setting <span class="math-container">$\frac{dV}{dh}$</span> equal to 0, we get</p>
<p><span class="math-container">$$0 = \pi(r-h)^2 - 2\pi h(r-h)$$</span>
<span class="math-container">$$2\pi h(r-h) = \pi(r-h)^2$$</span>
<span class="math-container">$$2h = r - h$$</span>
<span class="math-container">$$h = \frac13r.$$</span></p>
<p>Please help!</p>
| Sharky Kesa | 398,185 | <p>Let the cone's radius be <span class="math-container">$r$</span>, and suppose the radius of the cylinder is <span class="math-container">$x$</span>. Then the height of the cylinder can be determined to be <span class="math-container">$r-x$</span> using similar triangles in a triangular cross-section of the cone through the apex. Hence the volume of the cylinder is</p>
<p><span class="math-container">$$\begin{aligned}
V &= \pi x^2(r-x)\\
\dfrac{\mathrm{d}V}{\mathrm{d}x} &= 2\pi rx - 3\pi x^2\\
0 &= 2\pi rx - 3\pi x^2\\
\pi x(3x - 2r) &= 0\\
x &= 0, \frac{2}{3} r
\end{aligned}$$</span></p>
<p>Trivially <span class="math-container">$x=0$</span> doesn't satisfy. Thus, the volume of the cylinder is maximised when the height is <span class="math-container">$\frac{1}{3}r$</span>.</p>
<p>This agrees with your answer, so I suspect the answers are incorrect. We can indeed verify by checking the volume when <span class="math-container">$x=\frac{1}{3}r$</span> (your answer key's value for the cylinder's radius), and when <span class="math-container">$x=\frac{2}{3}r$</span> (our calculated value for the maximising radius).</p>
<p><span class="math-container">$$x=\frac{1}{3}r \implies V = \frac{2\pi}{27}r^3$$</span></p>
<p><span class="math-container">$$x=\frac{2}{3}r \implies V = \frac{4\pi}{27}r^3$$</span></p>
<p>Clearly, our value gives the larger volume, so we can indeed confirm the answer key is incorrect.</p>
|
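A quick numerical check of the optimisation above, via a grid search over the cylinder radius $x$ with $V(x)=\pi x^2(r-x)$:

```python
import math

# Cone of base radius r and height r; a cylinder of radius x inscribed in it
# has height r - x by similar triangles, so V(x) = pi * x^2 * (r - x).
r = 1.0
steps = 100000
best_x = max((i * r / steps for i in range(steps + 1)),
             key=lambda x: math.pi * x * x * (r - x))

print(best_x, r - best_x)  # radius tends to 2r/3, height to r/3
```

The grid maximiser lands at $x=\frac23 r$, so the height is $\frac13 r$ and the maximal volume is $\frac{4\pi}{27}r^3$, agreeing with the answer.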
3,313,272 | <p><strong>Inscribe in a given cone, the height of which is equal to the radius of the base, a cylinder whose volume is a maximum.</strong> </p>
<p>I'm stuck. The answer key says the cylinder's height should be <span class="math-container">$\frac23$</span> the radius of the base of the cone, but the answer I'm getting is <span class="math-container">$\frac13$</span>.</p>
<p>The volume of a cylinder is <span class="math-container">$\pi r^2h$</span>, where <span class="math-container">$h$</span> is the height and <span class="math-container">$r$</span> is the radius of its base. Since the inscribing cone in this example has height equal to the radius of its own base, we know by similar triangles that any unit of height "added" to the cylinder is "taken" from the the radius of its base. Therefore, the volume <span class="math-container">$V$</span> of the inscribed cylinder is <span class="math-container">$$ \pi h(r-h)^2,$$</span> where <span class="math-container">$r$</span> is the radius of the <strong>cone</strong> and <span class="math-container">$h$</span> is the height of the inscribed cylinder. By the product rule, </p>
<p><span class="math-container">$$\frac{dV}{dh} = \pi(r-h)^2 - 2\pi h(r-h).$$</span></p>
<p>Setting <span class="math-container">$\frac{dV}{dh}$</span> equal to 0, we get</p>
<p><span class="math-container">$$0 = \pi(r-h)^2 - 2\pi h(r-h)$$</span>
<span class="math-container">$$2\pi h(r-h) = \pi(r-h)^2$$</span>
<span class="math-container">$$2h = r - h$$</span>
<span class="math-container">$$h = \frac13r.$$</span></p>
<p>Please help!</p>
| j.wood | 476,276 | <p>With the help of Sharky Kesa's answer I figured out the issue. I solved for the cylinder's <em>height</em> and misunderstood the answer key, which gives its <em>radius</em>. The cylinder of maximum volume has radius and height that are <span class="math-container">$\frac23$</span> and <span class="math-container">$\frac13$</span> of the cone's radius, respectively. </p>
<p>Thanks for the help.</p>
|
1,443,441 | <blockquote>
<p>If <span class="math-container">$\frac{x^2+y^2}{x+y}=4$</span>,then all possible values of <span class="math-container">$(x-y)$</span> are given by<br></p>
<p><span class="math-container">$(A)\left[-2\sqrt2,2\sqrt2\right]\hspace{1cm}(B)\left\{-4,4\right\}\hspace{1cm}(C)\left[-4,4\right]\hspace{1cm}(D)\left[-2,2\right]$</span><br></p>
</blockquote>
<p>I tried this question.<br></p>
<p><span class="math-container">$\frac{x^2+y^2}{x+y}=4\Rightarrow x+y-\frac{2xy}{x+y}=4\Rightarrow x+y=\frac{2xy}{x+y}+4$</span><br></p>
<p><span class="math-container">$x-y=\sqrt{(\frac{2xy}{x+y}+4)^2-4xy}$</span>, but I am not able to proceed. I am stuck here. Is my method wrong?</p>
| juantheron | 14,311 | <p>Given $$\displaystyle \frac{x^2+y^2}{x+y} = 4\Rightarrow x^2+y^2 = 4x+4y$$</p>
<p>So we get $$x^2-4x+4+y^2-4y+4 = 8\Rightarrow (x-2)^2+(y-2)^2 = (2\sqrt{2})^2$$</p>
<p>Now Put $$x-2 = 2\sqrt{2}\cos \phi\Rightarrow x = 2+2\sqrt{2}\cos \phi$$</p>
<p>and $$y-2 = 2\sqrt{2}\sin \phi\Rightarrow y = 2+2\sqrt{2}\sin \phi$$</p>
<p>So $$\displaystyle x-y = 2\sqrt{2}\left(\cos\phi-\sin \phi\right) = 4\cdot \left[\cos \phi \cdot \frac{1}{\sqrt{2}}-\sin \phi\cdot \frac{1}{\sqrt{2}}\right] = 4\cos\left(\phi+\frac{\pi}{4}\right)$$</p>
<p>So we know that $$\displaystyle -4 \leq 4\cos\left(\phi+\frac{\pi}{4}\right)\leq 4$$</p>
<p>So we get $$-4 \leq x-y\leq 4\Rightarrow x-y\in \left[-4,4\right]$$</p>
|
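The parametrisation argument can be illustrated numerically: every point of the circle $(x-2)^2+(y-2)^2=8$ satisfies $x^2+y^2=4(x+y)$, so it lies on the original locus whenever $x+y\neq 0$, and sampling it shows $x-y$ filling out $[-4,4]$.

```python
import math

lo = hi = 0.0
N = 100000
for i in range(N):
    phi = 2 * math.pi * i / N
    x = 2 + 2 * math.sqrt(2) * math.cos(phi)
    y = 2 + 2 * math.sqrt(2) * math.sin(phi)
    if abs(x + y) < 1e-6:
        continue  # (0, 0) sits on the circle but not on the original locus
    assert abs((x * x + y * y) / (x + y) - 4) < 1e-6
    lo = min(lo, x - y)
    hi = max(hi, x - y)

print(lo, hi)  # approaches -4 and 4
```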
1,443,441 | <blockquote>
<p>If <span class="math-container">$\frac{x^2+y^2}{x+y}=4$</span>,then all possible values of <span class="math-container">$(x-y)$</span> are given by<br></p>
<p><span class="math-container">$(A)\left[-2\sqrt2,2\sqrt2\right]\hspace{1cm}(B)\left\{-4,4\right\}\hspace{1cm}(C)\left[-4,4\right]\hspace{1cm}(D)\left[-2,2\right]$</span><br></p>
</blockquote>
<p>I tried this question.<br></p>
<p><span class="math-container">$\frac{x^2+y^2}{x+y}=4\Rightarrow x+y-\frac{2xy}{x+y}=4\Rightarrow x+y=\frac{2xy}{x+y}+4$</span><br></p>
<p><span class="math-container">$x-y=\sqrt{(\frac{2xy}{x+y}+4)^2-4xy}$</span>, but I am not able to proceed. I am stuck here. Is my method wrong?</p>
| juantheron | 14,311 | <p>Given $$\displaystyle \frac{x^2+y^2}{x+y} = 4\Rightarrow x^2+y^2 = 4x+4y$$</p>
<p>So we get $$x^2-4x+4+y^2-4y+4 = 8\Rightarrow (x-2)^2+(y-2)^2 = 8$$</p>
<p>Now we can write $$x-y = (x-2)-(y-2) = \left[(x-2)+(2-y)\right]$$</p>
<p>Now Using $\bf{Cauchy\; Schwartz\; Inequality}$</p>
<p>$$\displaystyle \left[(x-2)^2+(2-y)^2\right]\cdot [1^2+1^2]\geq \left[x-2+2-y\right]^2$$</p>
<p>So $$8\times 2 \geq (x-y)^2$$</p>
<p>So $$(x-y)^2\leq 4^2$$</p>
<p>So $$-4 \leq (x-y)\leq 4\Rightarrow x-y\in \left[-4,4\right]$$</p>
|
1,187,713 | <p>How would I go about proving that if $a_n$ is a real sequence such that $\lim_{n\to\infty}|a_n|=0$, then there exists a subsequence of $a_n$, which we call $a_{n_k}$, such that $\sum_{k=1}^\infty a_{n_k}$ is convergent.</p>
<p>I think that I can choose terms $a_{n_k}$ such that they are terms of a geometric series, so that means that it will converge, but I don't know how to formally state this.</p>
| graydad | 166,967 | <p>Start with a series of positive terms you know is convergent: a geometric series will work as you have guessed, or the $p$-series $\sum_{k=1}^\infty \frac{1}{k^2}$, which I'll use here. Since $|a_n|\to 0$, you can choose indices $n_1 < n_2 < \cdots$ such that $\left|a_{n_{k}}\right|\leq \frac{1}{k^2}$ for every $k$. You will need to prove that such indices always exist. Once you know that, then $$\sum_{k=1}^\infty \left|a_{n_{k}}\right|\leq \sum_{k=1}^\infty \frac{1}{k^2} < \infty.$$ Hence, $$\sum_{k=1}^\infty a_{n_{k}}$$ is an absolutely convergent series.</p>
|
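A subsequence dominated by a convergent series can be produced greedily: since $|a_n|\to 0$, one can pick $n_1<n_2<\cdots$ with $|a_{n_k}|\le 1/k^2$. The sequence $a_n=(-1)^n/\sqrt{n+1}$ below is a hypothetical example chosen for illustration; any sequence with $|a_n|\to 0$ works.

```python
import math

def a(n):
    # a hypothetical sequence with |a_n| -> 0 (any such sequence works)
    return (-1) ** n / math.sqrt(n + 1)

# Greedily pick indices n_1 < n_2 < ... with |a_{n_k}| <= 1/k^2;
# this is always possible precisely because |a_n| -> 0.
indices = []
n = 0
for k in range(1, 31):
    while abs(a(n)) > 1 / k**2:
        n += 1
    indices.append(n)
    n += 1

dominated = all(abs(a(i)) <= 1 / k**2 for k, i in enumerate(indices, start=1))
partial = sum(a(i) for i in indices)
print(dominated, partial)  # the subseries is dominated by sum 1/k^2
```

By comparison with $\sum 1/k^2$, the chosen subseries converges absolutely.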
332,760 | <blockquote>
<p>For an odd prime, prove that a primitive root of $p^2$ is also a primitive root of $p^n$ for $n>1$. </p>
</blockquote>
<p>I have proved the other way round that any primitive root of $p^n$ is also a primitive root of $p$ but I have not been able to solve this one. I have tried the usual things that is I have assumed the contrary that there does not exist the primitive root following the above condition and then proceeded but couldn't solve it.<br>
Please help.</p>
| Ivan Loh | 61,044 | <p>Let $g$ be a primitive root $\pmod{p^2}$. Then $p|(g^{p-1}-1)$ by Fermat's little theorem and $p^2 \nmid (g^{p-1}-1)$ since $g$ is a primitive root. Thus by <a href="http://www.artofproblemsolving.com/Resources/Papers/LTE.pdf">Lifting the Exponent Lemma</a>, $p^{n-1}\|((g^{p-1})^{p^{n-2}}-1)$ and $p^n\|((g^{p-1})^{p^{n-1}}-1)$, so $g$ is also a primitive root $\pmod{p^n}, n>1$.</p>
<p><strong>Edit:</strong> More details: Let $d$ be the order of $g \pmod{p^n}, n>1$. Since $g$ is a primitive root $\pmod{p^2}$, we have $p(p-1) \mid d$. By above, $d|p^{n-1}(p-1), d\nmid p^{n-2}(p-1)$, so $d=p^{n-1}(p-1)$, and thus $g$ is a primitive root $\pmod{p^n}, n>1$.</p>
|
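The statement can be spot-checked for small odd primes by brute force; this illustrates, but of course does not replace, the LTE argument above.

```python
from math import gcd

def order(g, m):
    # multiplicative order of g modulo m (assumes gcd(g, m) == 1)
    k, x = 1, g % m
    while x != 1:
        x = x * g % m
        k += 1
    return k

def phi_pp(p, e):
    # Euler phi of the prime power p^e
    return p ** (e - 1) * (p - 1)

for p in (3, 5, 7, 11):
    # find a primitive root g modulo p^2 ...
    g = next(x for x in range(2, p * p)
             if gcd(x, p) == 1 and order(x, p * p) == phi_pp(p, 2))
    # ... and check that it stays primitive modulo p^3 and p^4
    for e in (3, 4):
        assert order(g, p ** e) == phi_pp(p, e)
print("verified for p in (3, 5, 7, 11)")
```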
332,760 | <blockquote>
<p>For an odd prime, prove that a primitive root of $p^2$ is also a primitive root of $p^n$ for $n>1$. </p>
</blockquote>
<p>I have proved the other way round that any primitive root of $p^n$ is also a primitive root of $p$ but I have not been able to solve this one. I have tried the usual things that is I have assumed the contrary that there does not exist the primitive root following the above condition and then proceeded but couldn't solve it.<br>
Please help.</p>
| lab bhattacharjee | 33,337 | <p>We know from <a href="https://math.stackexchange.com/questions/227199/order-of-numbers-modulo-p2/229918#229918">here</a>, if $ord_pa=d, ord_{(p^2)}a= d$ or $pd$</p>
<p>If $a$ is a primitive root $\pmod {p^2}, ord_{(p^2)}a=\phi(p^2)=p(p-1)$ </p>
<p>Then $ord_pa$ can be $p-1$ or $p(p-1)$</p>
<p>But as $ord_pa<p, ord_pa$ must be $p-1=\phi(p)\implies a$ is a primitive root $\pmod p$</p>
<p>Again from <a href="https://math.stackexchange.com/questions/231350/number-of-consecutive-zeros-at-the-end-of-11100-1/231374#231374">here</a>, $$\text{if } ord_{(p^s)}(a)=d \text{ and } ord_{p^{(s+1)}}(a)=pd,\text{ then } ord_{p^{(s+2)}}(a)=p^2d$$</p>
<p>So, as $ord_pa=p-1$ and $ord_{(p^2)}a=p(p-1)$, $ord_{(p^3)}a$ will be $p\cdot p(p-1)=\phi(p^3)$.</p>
<p>Again, as $ord_{(p^2)}a=p(p-1)$ and $ord_{(p^3)}a=p\cdot p(p-1)$, $ord_{(p^4)}a$ will be $p\cdot p^2(p-1)=\phi(p^4)$.</p>
|
3,386,999 | <p>How can I find the values of <span class="math-container">$n\in \mathbb{N}$</span> that make the fraction <span class="math-container">$\frac{2n^{7}+1}{3n^{3}+2}$</span> reducible?</p>
<p>I don't have any ideas or hints for how to solve this question.</p>
<p>I think we must write <span class="math-container">$2n^{7}+1=k(3n^{3}+2)$</span> with <span class="math-container">$k≠1$</span></p>
| Will Jagy | 10,400 | <p>The extended Euclidean algorithm for the gcd (of polynomials with rational coefficients) also tells us that
<span class="math-container">$$ \left( 2 x^{7} + 1 \right) \left( { 1728 x^{2} - 1944 x + 2187 } \right) - \left( 3 x^{3} + 2 \right) \left( { 1152 x^{6} - 1296 x^{5} + 1458 x^{4} - 768 x^{3} + 864 x^{2} - 972 x + 512 } \right) = 1163 $$</span> </p>
<p>=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=</p>
<p><span class="math-container">$$ \left( 2 x^{7} + 1 \right) $$</span> </p>
<p><span class="math-container">$$ \left( 3 x^{3} + 2 \right) $$</span> </p>
<p><span class="math-container">$$ \left( 2 x^{7} + 1 \right) = \left( 3 x^{3} + 2 \right) \cdot \color{magenta}{ \left( \frac{ 6 x^{4} - 4 x }{ 9 } \right) } + \left( \frac{ 8 x + 9 }{ 9 } \right) $$</span>
<span class="math-container">$$ \left( 3 x^{3} + 2 \right) = \left( \frac{ 8 x + 9 }{ 9 } \right) \cdot \color{magenta}{ \left( \frac{ 1728 x^{2} - 1944 x + 2187 }{ 512 } \right) } + \left( \frac{ -1163}{512 } \right) $$</span>
<span class="math-container">$$ \left( \frac{ 8 x + 9 }{ 9 } \right) = \left( \frac{ -1163}{512 } \right) \cdot \color{magenta}{ \left( \frac{ - 4096 x - 4608 }{ 10467 } \right) } + \left( 0 \right) $$</span>
<span class="math-container">$$ \frac{ 0}{1} $$</span>
<span class="math-container">$$ \frac{ 1}{0} $$</span>
<span class="math-container">$$ \color{magenta}{ \left( \frac{ 6 x^{4} - 4 x }{ 9 } \right) } \Longrightarrow \Longrightarrow \frac{ \left( \frac{ 6 x^{4} - 4 x }{ 9 } \right) }{ \left( 1 \right) } $$</span>
<span class="math-container">$$ \color{magenta}{ \left( \frac{ 1728 x^{2} - 1944 x + 2187 }{ 512 } \right) } \Longrightarrow \Longrightarrow \frac{ \left( \frac{ 576 x^{6} - 648 x^{5} + 729 x^{4} - 384 x^{3} + 432 x^{2} - 486 x + 256 }{ 256 } \right) }{ \left( \frac{ 1728 x^{2} - 1944 x + 2187 }{ 512 } \right) } $$</span>
<span class="math-container">$$ \color{magenta}{ \left( \frac{ - 4096 x - 4608 }{ 10467 } \right) } \Longrightarrow \Longrightarrow \frac{ \left( \frac{ - 1024 x^{7} - 512 }{ 1163 } \right) }{ \left( \frac{ - 1536 x^{3} - 1024 }{ 1163 } \right) } $$</span>
<span class="math-container">$$ \left( 2 x^{7} + 1 \right) \left( \frac{ 1728 x^{2} - 1944 x + 2187 }{ 1163 } \right) - \left( 3 x^{3} + 2 \right) \left( \frac{ 1152 x^{6} - 1296 x^{5} + 1458 x^{4} - 768 x^{3} + 864 x^{2} - 972 x + 512 }{ 1163 } \right) = \left( 1 \right) $$</span> </p>
|
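The Bezout identity above shows that $\gcd(2n^7+1,\,3n^3+2)$ always divides $1163$, which is prime, so the fraction is reducible exactly when $1163$ divides both numerator and denominator. The sketch below verifies the identity and the divisor claim numerically; whether any $n$ actually attains gcd $1163$ depends on whether the two polynomials share a root modulo $1163$, which the loop reports.

```python
from math import gcd

def u(x):
    # cofactor of 2x^7 + 1 in the Bezout identity above
    return 1728 * x * x - 1944 * x + 2187

def v(x):
    # cofactor of 3x^3 + 2 (Horner form of the degree-6 cofactor)
    return (((((1152 * x - 1296) * x + 1458) * x - 768) * x + 864) * x - 972) * x + 512

hits = []
for n in range(1, 2000):
    f, g = 2 * n**7 + 1, 3 * n**3 + 2
    assert f * u(n) - g * v(n) == 1163      # the identity above, checked at x = n
    d = gcd(f, g)
    assert d in (1, 1163)                   # gcd divides the prime 1163
    if d == 1163:
        hits.append(n)
print(hits[:5])  # the n (if any) for which the fraction is reducible
```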
3,388,457 | <p>I made an equation
<span class="math-container">$$(100b+40+a)-(100a+40+b)=99$$</span> simplified that to <span class="math-container">$b-a=1$</span> , but do not know where to go from there.</p>
| The Demonix _ Hermit | 704,739 | <p>Since <span class="math-container">$a4b$</span> is divisible by <span class="math-container">$9$</span> (a condition from the original problem that is not restated above), we have <span class="math-container">$$a+b=5 \text { or } a+b = 14$$</span></p>
<p>Since by reversing the number, we get a bigger number , we conclude <span class="math-container">$a\lt b$</span></p>
<p>All possible values of <span class="math-container">$a,b$</span> are:
<span class="math-container">$$(1,4),(2,3),(5,9)\text{ and }(6,8)$$</span></p>
<p>By simple trial and error, you can show that <span class="math-container">$(a,b) = (2,3)$</span>.</p>
<p>Hence the number is <span class="math-container">$243$</span>.</p>
|
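A brute-force check, assuming (as the answer does) the parts of the original problem that are not restated in the question: the number has the form $a4b$, is divisible by $9$, and increases by exactly $99$ when its digits are reversed.

```python
# Brute force over three-digit numbers of the form a4b under the two assumed
# conditions: divisibility by 9, and digit reversal adding exactly 99.
solutions = []
for a in range(1, 10):
    for b in range(1, 10):      # b >= 1 so the reversal is also a 3-digit number
        num = 100 * a + 40 + b
        rev = 100 * b + 40 + a
        if rev - num == 99 and num % 9 == 0:
            solutions.append(num)
print(solutions)  # [243]
```

The search confirms $243$ is the unique solution: $b-a=1$ forces $b=a+1$, and $2a+5\equiv 0 \pmod 9$ forces $a=2$.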
3,631,042 | <p>Probably, <span class="math-container">$y = x^2$</span> plots a parabola only given certain assumptions that structure a cartesian coordinate plane, and it does not plot a parabola in e.g. the polar coordinate plane.</p>
<p>Now, why exactly does a parabola share an equation with the area of a square? 'Why' here is to be understood as inquiring at the equation's suggestion of a -geometrical- correspondence between the two given certain assumptions, but only the equation suggests this and not the actual shapes. Is this completely accidental, i.e., does the geometry of a parabola have nothing to do with that of a square, or does the equation <span class="math-container">$y = x^2$</span> indeed suggests some sort of relationship between the two shapes? </p>
<p>Most of all, I want to know: can we manage to identify any geometrical correspondence between a square and a parabola due to the equation?</p>
<p>(The equation of a circle in cartesian coordinates similarly bothers me, but at least we can speak of some sort of relationship between pythagorean triples.)</p>
| Jesus is Lord | 187,128 | <p><code>y = x * x</code> says <code>To compute the number y, multiply the number x with itself.</code></p>
<p>If you're talking about length, that corresponds to area of a square.</p>
<p>If you're talking about real numbers, you get a parabola curve.</p>
<p>If you're talking about complex numbers or some other set, you may get something else.</p>
<p>In other words, it depends on how you define "number" (what set of numbers you're working with) and how you define "multiply" (presumably repeated addition, which assumes you have a definition of "addition").</p>
|
3,631,042 | <p>Probably, <span class="math-container">$y = x^2$</span> plots a parabola only given certain assumptions that structure a cartesian coordinate plane, and it does not plot a parabola in e.g. the polar coordinate plane.</p>
<p>Now, why exactly does a parabola share an equation with the area of a square? 'Why' here is to be understood as inquiring at the equation's suggestion of a -geometrical- correspondence between the two given certain assumptions, but only the equation suggests this and not the actual shapes. Is this completely accidental, i.e., does the geometry of a parabola have nothing to do with that of a square, or does the equation <span class="math-container">$y = x^2$</span> indeed suggests some sort of relationship between the two shapes? </p>
<p>Most of all, I want to know: can we manage to identify any geometrical correspondence between a square and a parabola due to the equation?</p>
<p>(The equation of a circle in cartesian coordinates similarly bothers me, but at least we can speak of some sort of relationship between pythagorean triples.)</p>
| Sam Cassidy | 339,509 | <p>If you take the graph of <span class="math-container">$y = x$</span>, the region under the graph between <span class="math-container">$0$</span> and <span class="math-container">$t$</span> is half of a square of side length <span class="math-container">$t$</span>, and <span class="math-container">$\int_0^t x \, \mathrm{d}x = \frac{t^2}{2}$</span>. So some sort of answer is "because the gradient of the parabola is linear and thus carves out half a square".</p>
|
3,360,914 | <p>Let <span class="math-container">$A$</span>, <span class="math-container">$B$</span> and <span class="math-container">$C$</span> be symmetric, positive semi-definite matrices. Is it true that
<span class="math-container">$$ \|(A + C)^{1/2} - (B + C)^{1/2}\| \leq \|A^{1/2} - B^{1/2}\|,$$</span>
in either the 2 or Frobenius norm? </p>
<p>It is clearly true when <span class="math-container">$A, B$</span> and <span class="math-container">$C$</span> commute, but the general case is less clear to me. In fact, even the particular case <span class="math-container">$B = 0$</span> does not seem obvious.</p>
<hr>
<p>Without loss of generality, it is clear that we can assume that <span class="math-container">$C$</span> is diagonal.
We show that it is sufficient to prove the inequality for the matrix with zeros everywhere except at a single position <span class="math-container">$k$</span> on the diagonal,
<span class="math-container">$$
(C_k)_{ij} = \begin{cases} 1 & \text{if } i=j=k\\ 0 & \text{otherwise} \end{cases}
$$</span>
Clearly, if the inequality is true for one <span class="math-container">$C_k$</span>, it is true for any <span class="math-container">$C_k$</span>, by flipping the axes, and also for <span class="math-container">$C = \alpha C_k$</span>, for any <span class="math-container">$\alpha \geq 0$</span>, because
<span class="math-container">\begin{align}
\|(A + \alpha \, C_k)^{1/2} - (B + \alpha C_k)^{1/2}\|
&= \sqrt{\alpha} \|(A/\alpha + C_k)^{1/2} - (B/\alpha + C_k)^{1/2}\| \\
&\leq \sqrt{\alpha} \|(A/\alpha)^{1/2} - (B/\alpha)^{1/2}\|
= \|A^{1/2} - B^{1/2}\|
\end{align}</span>
Now, a general diagonal <span class="math-container">$C$</span> can be decomposed as <span class="math-container">$C = \sum_{k=1}^{n} \alpha_k C_k$</span>.
Applying the previous inequality (specialized for a matrix <span class="math-container">$C$</span> with only one nonzero diagonal element) repeatedly,
we can remove the diagonal elements one by one
<span class="math-container">\begin{align}
&\|(A + \sum_{k=1}^{n}\alpha_k \, C_k)^{1/2} - (B + \sum_{k=1}^{n}\alpha_k \, C_k)^{1/2}\| \\
&\qquad = \|((A + \sum_{k=1}^{n-1}\alpha_k \, C_k) + \alpha_n C_n)^{1/2} - ((B + \sum_{k=1}^{n-1}\alpha_k \, C_k) + \alpha_n C_n)^{1/2}\| \\
&\qquad \leq \|(A + \sum_{k=1}^{n-1}\alpha_k \, C_k)^{1/2} - (B + \sum_{k=1}^{n-1}\alpha_k \, C_k)^{1/2}\| \\
&\qquad \leq \|(A + \sum_{k=1}^{n-2}\alpha_k \, C_k)^{1/2} - (B + \sum_{k=1}^{n-2}\alpha_k \, C_k)^{1/2}\| \\
&\qquad \leq \dots \leq \|A^{1/2} - B^{1/2}\|.
\end{align}</span></p>
<hr>
<p>Here are three ways of proving the inequality in 1 dimension,
which I tried to generalize to the multidimensional case without success.
Let us write <span class="math-container">$a$</span>, <span class="math-container">$b$</span>, <span class="math-container">$c$</span> instead of <span class="math-container">$A$</span>, <span class="math-container">$B$</span>, <span class="math-container">$C$</span>,
to emphasize that we are working in one dimension,
and let us assume without loss of generality that <span class="math-container">$a \leq b$</span>.</p>
<ul>
<li><p>Let us write:
<span class="math-container">$$ f(c) = \sqrt{b + c} - \sqrt{a + c} $$</span>
We calculate that the derivative of <span class="math-container">$f$</span> is given by
<span class="math-container">$$
f'(c) = \frac{1}{2} \left( \frac{1}{\sqrt{b + c}} - \frac{1}{\sqrt{a + c}} \right) \leq 0,
$$</span>
and so <span class="math-container">$f(c) = f(0) + \int_{0}^{c} f'(x) \, d x \leq f(0)$</span>.</p></li>
<li><p>We have, by the fundamental theorem of calculus and a change of variable
<span class="math-container">\begin{align}
\sqrt{b + c} - \sqrt{a + c} &= \int_{a + c}^{b + c} \frac{1}{2 \sqrt{x}} \, d x = \int_{a}^{b} \frac{1}{2 \sqrt{x + c}} \, d x \\
&\leq \int_{a}^{b} \frac{1}{2 \sqrt{x}} \, d x = \sqrt{b} - \sqrt{a}.
\end{align}</span></p></li>
<li><p>Squaring the two sides of the inequality, we obtain
<span class="math-container">$$
a + c - 2 \sqrt{a+ c} \, \sqrt{b + c} + b + c \leq a + b - 2 \sqrt{a} \sqrt{b}.
$$</span>
Simplifying and rearranging,
<span class="math-container">$$
c + \sqrt{a} \sqrt{b} \leq \sqrt{a+ c} \, \sqrt{b + c} .
$$</span>
Squaring again
<span class="math-container">$$
\require{cancel} \cancel{c^2 + a b} + 2 c \sqrt{a b} \leq \cancel{c^2 + ab} + ac + bc,
$$</span>
leading to
<span class="math-container">$$ a + b - 2 \sqrt{ab} = (\sqrt{b} - \sqrt{a})^2 \geq 0$$</span>.</p></li>
</ul>
<p>Numerical experiments suggest that the inequality is true in both the 2 and the Frobenius norm.
(One realization of) the following code prints 0.9998775.</p>
<pre><code>import numpy as np
import scipy.linalg as la

n, d, ratios = 100000, 3, []
for i in range(n):
    A = np.random.randn(d, d)
    B = np.random.randn(d, d)
    C = .1*np.random.randn(d, d)
    # make A, B, C symmetric positive semi-definite
    A, B, C = A.dot(A.T), B.dot(B.T), C.dot(C.T)
    lhs = la.norm(la.sqrtm(A + C) - la.sqrtm(B + C), ord='fro')
    rhs = la.norm(la.sqrtm(A) - la.sqrtm(B), ord='fro')
    ratios.append(lhs/rhs)
print(np.max(ratios))
</code></pre>
| Conrad | 298,272 | <p>Roughly speaking, the sum behaves (for large <span class="math-container">$m,n$</span>) like <span class="math-container">$\sum_{m,n \ne 0}\frac{1}{m^2+n^2}$</span>, and that is divergent: for fixed <span class="math-container">$n$</span>, the sum over <span class="math-container">$m$</span> is about <span class="math-container">$\frac{1}{n}$</span> (up to a constant), so the double sum behaves like the harmonic series. </p>
<p>(if <span class="math-container">$a \ge 1$</span>, <span class="math-container">$\int_1^{\infty}\frac{dx}{x^2+a^2}<\sum_{m \ge 1}\frac{1}{m^2+a^2}<\int_0^{\infty}\frac{dx}{x^2+a^2}$</span> so <span class="math-container">$\frac{\pi}{2a}-O(\frac{1}{a^2})< \sum_{m \ge 1}\frac{1}{m^2+a^2} < \frac{\pi}{2a}$</span></p>
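A quick numerical sketch of this heuristic (my addition, not part of the original answer): the row sums over $m$ for fixed $n$ decay like $1/n$, so the partial sums over squares $[1,N]^2$ grow logarithmically, like the harmonic series.

```python
import math

def row_sum(n, M=100000):
    # sum over m = 1..M of 1/(m^2 + n^2); the full row sum is ~ (pi/2)/n
    return sum(1.0 / (m * m + n * n) for m in range(1, M + 1))

for n in [10, 20, 40]:
    print(n, n * row_sum(n))  # roughly constant (~ pi/2), so row sums ~ 1/n

def square_sum(N):
    # partial sum over the square 1 <= m, n <= N
    return sum(1.0 / (m * m + n * n)
               for m in range(1, N + 1) for n in range(1, N + 1))

# grows like (pi/2) log N: doubling N adds roughly (pi/2) log 2 ~ 1.09
print(square_sum(200) - square_sum(100))
```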
|
2,020,128 | <p>For $r$ is a real number, I can write $r \in \mathbb{R}$.</p>
<p>For $\varepsilon$ is an infinitesimal, I'd like to write something like $\varepsilon \in something$ Is there a symbol for "the set of infinitesimals"? Or alternatively, a commonly used abbreviation for "infinitesimal"?</p>
<p>For $H$ is an infinite (hyperreal) number, I'd like to write something like $H \in \infty$ Is there a symbol for "the set of infinite hyperreals", or a common abbreviation?</p>
| mhwombat | 3,483 | <p>After more research, I have concluded that, as Bye_world suggests, there is no standard notation for the set of infinitesimals. Here are some of the notations I have seen used:</p>
<p>$\mathcal{I}$,
$N$,
$\mathbb{N}$,
$\Delta$.</p>
<p>Also, for "$x$ is an infinitesimal", I have seen the notation $x \approx 0$.</p>
|
4,637,565 | <p>I am thinking of positive sequences whose sum is infinite but whose sum of squares is not.</p>
<p>One representative sequence is <span class="math-container">$$x[n] = \frac{a}{n+b},$$</span> where <span class="math-container">$a$</span> and <span class="math-container">$b$</span> are given real numbers such that <span class="math-container">$a>0$</span> and <span class="math-container">$b\ge0$</span>.</p>
<p>I know that there will be infinitely many more sequences <span class="math-container">$x[n]$</span> such that <span class="math-container">$x[n]\ge0, ~n=1, 2, \ldots$</span>, <span class="math-container">$\sum x[n] = \infty$</span>, and <span class="math-container">$\sum (x[n])^2 \le M$</span> for a sufficiently large constant value <span class="math-container">$M$</span>.</p>
<p>Can you give me some examples? If possible, I would really appreciate it if you could tell me how to find these sequences (i.e., methodology of how to find).</p>
| Balaji sb | 213,498 | <p>As the answer from Gareth Ma points out, look for sequences in <span class="math-container">$\ell^2 \setminus \ell^1$</span>. This will be your set of sequences.</p>
<p>Concretely speaking, take <span class="math-container">$\{a_n : a_n \geq 0\}$</span> such that <span class="math-container">$\sum_n a_n < \infty$</span>. Now look at <span class="math-container">$\sum_n \sqrt{a_n}$</span> to generate the sequences you want. This method may sometimes work. Using this method we have <span class="math-container">$a_n = \frac{1}{n^{1+\epsilon}}$</span> which works. We know that <span class="math-container">$\sum_n \frac{1}{n^{1+\epsilon}} < \infty$</span> for every <span class="math-container">$\epsilon>0$</span>. This can be seen from the fact that <span class="math-container">$\sum_{n \geq k+1} \frac{1}{n^{1+\epsilon}} \leq \int_k^{\infty} \frac{dx}{x^{1+\epsilon}} < \infty$</span> for <span class="math-container">$k \geq 1$</span>. Now <span class="math-container">$\sum_n \frac{1}{n^{\frac{1+\epsilon}{2}}} = \infty$</span> for every <span class="math-container">$0<\epsilon \leq 2$</span>.</p>
<p>Specifically, look for <span class="math-container">$a_n$</span> such that <span class="math-container">$$\lim_{n \rightarrow \infty} \frac{|a_{n+1}|}{|a_n|} = 1$$</span> or else it won't work.</p>
<p>This is because <span class="math-container">$$\lim_{n \rightarrow \infty} \frac{|a_{n+1}|}{|a_n|} > 1 \iff \lim_{n \rightarrow \infty} \frac{\sqrt{|a_{n+1}|}}{\sqrt{|a_n|}} > 1 $$</span>. Hence both <span class="math-container">$\sum_n a_n = \infty $</span> and <span class="math-container">$\sum_n \sqrt{a_n} = \infty$</span> and because <span class="math-container">$$\lim_{n \rightarrow \infty} \frac{|a_{n+1}|}{|a_n|} < 1 \iff \lim_{n \rightarrow \infty} \frac{\sqrt{|a_{n+1}|}}{\sqrt{|a_n|}} < 1 $$</span>. Hence both <span class="math-container">$\sum_n a_n < \infty $</span> and <span class="math-container">$\sum_n \sqrt{a_n} < \infty$</span>.</p>
<p>A Method to Generate more such sequences:</p>
<p>Let <span class="math-container">$a_n \geq 0$</span> be monotonic and bounded, and let <span class="math-container">$\{b_n\}$</span> be such that <span class="math-container">$b_n \geq 0$</span> and <span class="math-container">$\sum_n b_n < \infty$</span>.
Further, if <span class="math-container">$\sum_n a_n < \infty$</span> and <span class="math-container">$\sum_n \sqrt{a_n} = \infty$</span>, then <span class="math-container">$s_n = \sqrt{a_n} + b_n$</span> works, since <span class="math-container">$\sum_n s_n = \infty$</span> and <span class="math-container">$\sum_n s_n^2 < \infty$</span> by Abel's convergence test. So you can practically generate an infinite number of such sequences from a single such <span class="math-container">$\{a_n\}$</span> by adding any positive convergent sequence <span class="math-container">$\{b_n\}$</span>. As an example, <span class="math-container">$a_n = \frac{1}{n^{1+\epsilon}}$</span> works in this context.</p>
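As a concrete numerical sketch (my addition, not part of the original answer): for the simplest such sequence, $x[n] = 1/n$, the partial sums grow without bound while the partial sums of squares stay below $\pi^2/6$.

```python
import math

def partial_sums(N):
    s = sq = 0.0
    for n in range(1, N + 1):
        s += 1.0 / n          # harmonic partial sum ~ log N, unbounded
        sq += 1.0 / (n * n)   # bounded above by pi^2 / 6
    return s, sq

for N in [10, 1000, 100000]:
    s, sq = partial_sums(N)
    print(N, round(s, 3), round(sq, 6))

# the sum of squares converges (to pi^2/6); the plain sum does not
_, sq = partial_sums(100000)
print(abs(sq - math.pi ** 2 / 6) < 1e-4)  # True
```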
|
2,405,205 | <p>The Wikipedia article on <a href="https://en.wikipedia.org/wiki/Fraction_(mathematics)#Complex_fractions" rel="nofollow noreferrer">Fractions</a> says:</p>
<blockquote>
<p>If, in a complex fraction, there is no unique way to tell which fraction lines takes precedence, then this expression is improperly formed, because of ambiguity. So 5/10/20/40 is not a valid mathematical expression, because of multiple possible interpretations [...]</p>
</blockquote>
<p>The first sentence makes sense, but does the second sentence follow? WolframAlpha <a href="http://wolframalpha.com/input/?i=5%2F10%2F20%2F40" rel="nofollow noreferrer">interprets that input without issue</a>, as do popular programming languages.</p>
<p>Is the order of operations not accepted in formal math?</p>
| Community | -1 | <p>It's only because of the way it could potentially be interpreted. The reason the order of operations is needed is to prevent ambiguous answers. In this case, with parentheses added around the divisions, you can make it equal 1, or ${5\over(10*20*40)}= {1\over1600}$, etc. Some arithmetic without the parentheses implied by the order of operations would have many possible answers for just a few operations.</p>
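To make the ambiguity concrete, here is a small check of my own (not from the original answer): the convention in WolframAlpha and most programming languages is left-associativity, i.e. $((5/10)/20)/40$, which happens to agree with $5/(10\cdot20\cdot40)$; other parenthesizations give different values.

```python
import math

# left-associative reading: the convention in most programming languages
left = ((5 / 10) / 20) / 40

# a few other parenthesizations of the same four numbers
others = [(5 / 10) / (20 / 40),   # = 1
          5 / (10 / (20 / 40)),   # = 1/4
          5 / ((10 / 20) / 40)]   # = 400

print(left, others)

# left-associative division agrees with dividing by the product of the rest
print(math.isclose(left, 5 / (10 * 20 * 40)))  # True
```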
|
4,319,590 | <blockquote>
<p>Let <span class="math-container">$H$</span> be a subgroup of <span class="math-container">$G$</span> and <span class="math-container">$x,y \in G$</span>. Show that <span class="math-container">$x(Hy)=(xH)y.$</span></p>
</blockquote>
<p>I have that <span class="math-container">$Hy=\{hy \mid h \in H\}$</span> so wouldn't <span class="math-container">$x(Hy)=\{x hy \mid h \in H\}$</span>? If so there doesn't seem to be much to be shown since if this holds I suppose that <span class="math-container">$(xH)y=\{x h y \mid h \in H\}$</span> would also hold and these two are clearly the same sets? Am I misinterpreting the set <span class="math-container">$x(Hy)$</span>? Should this be <span class="math-container">$\{xhy \mid h \in H, y \in G\}$</span> for fixed <span class="math-container">$y$</span>?</p>
| Shaun | 104,041 | <p>You're correct.</p>
<hr />
<p>This can be done in a few lines:</p>
<p><span class="math-container">$$\begin{align}
x(Hy)&=\{xh'\mid h'\in Hy\}\\
&=\{x(hy)\mid h\in H\}\\
&=\{(xh)y\mid h\in H\}\\
&=\{h''y\mid h''\in xH\}\\
&=(xH)y.
\end{align}$$</span></p>
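As a concrete sanity check (my addition, not part of the answer), one can take $G = S_3$ realized as permutations of $\{0,1,2\}$ and $H$ the subgroup generated by a transposition, and verify $x(Hy) = (xH)y$ for all $x, y \in G$ — which, as the computation above shows, is just associativity:

```python
from itertools import permutations

# elements of S3 as tuples p, where p[i] is the image of i
G = list(permutations(range(3)))

def mul(p, q):
    # composition p after q: (p*q)[i] = p[q[i]]
    return tuple(p[q[i]] for i in range(3))

H = {(0, 1, 2), (1, 0, 2)}  # the subgroup {e, (0 1)}

def left(S, x):  return {mul(x, s) for s in S}   # xS
def right(S, y): return {mul(s, y) for s in S}   # Sy

# x(Hy) == (xH)y for every x, y in G
print(all(left(right(H, y), x) == right(left(H, x), y)
          for x in G for y in G))  # True
```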
|
2,030,547 | <p>The following expression came up in a proof I was reading, where it is said "It is easily shown: $$\lim_{x\to\infty} x(1-\frac{\ln (x-1)}{\ln x})=0."$$</p>
<p>Unfortunately I'm not having an easy time showing it. I guess it should come down to showing that the ratio $\frac{\ln (x-1)}{\ln x}$ converges to 1 superlinearly, which seems intuitive but I don't know how to prove it formally. Any tips?</p>
<p>Edit: original question had an implicit typo - I had $\ln x - 1$ rather than the intended $\ln(x-1)$.</p>
| Community | -1 | <p>We might define a linear transformation $\theta : V \to V^{**}$ by the equation</p>
<p>$$ \theta(v)(f) = f(v) $$</p>
<p>The notation here is recursive; we are defining the linear transformation $\theta$ by specifying its value at every $v \in V$. In turn, we define the linear functional $\theta(v) \in V^{**}$ by specifying its value at every element $f \in V^*$.</p>
<p>To recap, the types of each subexpression are</p>
<ul>
<li>$\theta : V \to V^{**}$</li>
<li>$v \in V$</li>
<li>$f \in V^*$ (equivalently, $f : V \to \mathbb{R}$)</li>
<li>$\theta(v) \in V^{**}$ (equivalently, $\theta(v) : V^* \to \mathbb{R}$)</li>
<li>$\theta(v)(f) \in \mathbb{R}$</li>
<li>$f(v) \in \mathbb{R}$</li>
</ul>
<p>Now, some people don't like to use the usual function notation for function-valued functions. Here, the author is indicating the function via <em>decoration</em> — the author's notation $v^*$ means the same thing as my notation $\theta(v)$.</p>
<p>Another notation one might use for this is $v \mapsto (f \mapsto f(v))$. If you plug some value $v_0$ into this linear transformation, you get the linear functional $f \mapsto f(v_0)$. (and if you then plug some linear functional $f_0$ into that, you get the number $f_0(v_0)$)</p>
<p>For some purposes, the most convenient notation is just $v$ written on the right (rather than on the left like functions "usually" are) — i.e. rather than rigidly interpret $f(v)$ as "the function $f$ evaluated at the value $v$", to have the mental flexibility to view it the other way around "$v$ evaluated at $f$", or even "the 'product' of $f$ and $v$", as needed. In many situations, if you can do this, you really don't need to distinguish between $V$ and $V^{**}$ so there is no problem using the same notation both for an element in $V$ and its corresponding element of $V^{**}$.</p>
<hr>
<p>The reason for $v_0$, I think, is that the author wants you to think of it as an 'unspecified vector constant' rather than a 'vector-valued variable'. In my opinion there is no good reason to do so, but there may be some aspect of the author's choice of fine detail in mathematical grammar or the author's philosophical or pedagogical opinions that compels him to do so.</p>
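The recursive definition of $\theta$ can be mirrored directly in code. Here is a sketch of my own (not from the original answer), with $V = \mathbb{R}^2$, functionals as plain Python functions, and $\theta$ as a function that returns a function:

```python
# theta : V -> V**, defined by theta(v)(f) = f(v)
theta = lambda v: (lambda f: f(v))

# a vector v in V = R^2, and two linear functionals f in V*
v = (3.0, 4.0)
f1 = lambda w: w[0]             # first-coordinate functional
f2 = lambda w: 2 * w[0] - w[1]  # another functional

# theta(v) is an element of V**: it eats functionals and returns numbers
print(theta(v)(f1))  # 3.0, the same as f1(v)
print(theta(v)(f2))  # 2.0, the same as f2(v)
```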
|
133,711 | <p>I am trying to show that $$\int_{-\pi}^{\pi}e^{\alpha \cos t}\sin(\alpha \sin t)dt=0$$</p>
<p>Where $\alpha$ is a real constant.</p>
<hr>
<p>I found the problem while studying a particular question on this site, <a href="https://math.stackexchange.com/questions/124868/evaluate-int-c-frace-alpha-zzdz-where-alpha-in-mathbb-r-and-c-i">this one</a>. It has become quite challenging for me: I am trying to make life easy, but I am stuck!</p>
<p>EDIT:
The integral is from $-\pi$ to $\pi$</p>
<p>EDIT 2:
I am sorry for this edit, but it is a typo problem and I fix it now. In my question I have $e^\alpha \cos t$ not $e^\alpha$ only. I am very sorry. </p>
| joriki | 6,622 | <p>This is false. In the interior of the interval of integration, the value of the inner sine is in $(0,1]$. For sufficiently small $\alpha$, that means the value of the outer sine is positive, so since $\mathrm e^\alpha$ is also positive, the integral is positive.</p>
<p>[<em>Edit in response to the change in the question</em>:]</p>
<p>As N.S. has already pointed out, the new integral vanishes because the integrand is odd and the integration interval is symmetric about $0$. By the way, also note that $\mathrm e^\alpha$ is a non-zero constant that doesn't affect whether the integral is zero.</p>
<p>[<em>Edit in response to yet another change in the question</em>:]</p>
<p>The integrand is still odd; the cosine in the exponent doesn't change that. And <em>please</em> take more care in posting; it's a huge waste of everyone's time to ask two questions that you didn't mean to ask and have people spend time answering them.</p>
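A numerical sanity check of the vanishing (my addition): with the midpoint rule on $[-\pi,\pi]$, the sample points are symmetric about $0$, so the contributions of the odd integrand cancel pairwise and the sum is zero up to floating-point error.

```python
import math

def integrand(t, alpha):
    return math.exp(alpha * math.cos(t)) * math.sin(alpha * math.sin(t))

def midpoint_integral(alpha, N=20000):
    # midpoint rule on [-pi, pi]; the sample points come in pairs (t, -t)
    h = 2 * math.pi / N
    return h * sum(integrand(-math.pi + (k + 0.5) * h, alpha)
                   for k in range(N))

for alpha in [0.5, 1.0, 3.7]:
    print(alpha, midpoint_integral(alpha))  # all ~ 0
```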
|
2,359,621 | <p>Consider $f:\mathbb{R}^2 \rightarrow \mathbb{R}$ where</p>
<p>$$f(x,y):=\begin{cases}
\frac{x^3}{x^2+y^2} & \textit{ if } (x,y)\neq (0,0) \\
0 & \textit{ if } (x,y)= (0,0)
\end{cases} $$</p>
<p>If one wants to show the continuity of $f$, I mainly want to show that </p>
<p>$$ \lim\limits_{(x,y)\rightarrow0}\frac{x^3}{x^2+y^2}=0$$</p>
<p>But what does $\lim\limits_{(x,y)\rightarrow0}$ mean? Is it equal to $\lim\limits_{(x,y)\rightarrow0}=\lim\limits_{||(x,y)||\rightarrow0}$ or does it mean $\lim\limits_{x\rightarrow0}\lim\limits_{y\rightarrow0}$?</p>
<p>If so, how does one show that the above function tends to zero?</p>
| Alekos Robotis | 252,284 | <p>The formal definition is as follows: given a function of $n$ real variables (here $n=2$): $f(x_1,\ldots, x_n),$ we say that
$$\lim_{(x_1,\ldots, x_n)\to (p_1,\ldots, p_n)}f(x_1,\ldots,x_n)=L$$
if for every $\epsilon>0$, there exists a $\delta$ sufficiently small that $$ \lvert (x_1,\ldots, x_n)-(p_1,\ldots, p_n)\rvert<\delta$$
implies that
$$ \lvert f(x_1,\ldots, x_n)-L\rvert<\epsilon.$$
In your case, this reduces to showing that for every $\epsilon>0$, there exists a $\delta$ sufficiently small that
$$ \lvert (x,y)\rvert<\delta$$
implies that
$$ \lvert f(x,y)\rvert<\epsilon.$$
Once you've digested this definition, it is worthwhile to observe that as $(x,y)\to 0$, we have that
$$ \bigg|\frac{x^3}{x^2+y^2}\bigg|\le \lvert x\rvert\to 0.$$</p>
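The final bound can also be checked numerically — a quick sketch of mine, not part of the original answer, sampling random points near the origin and confirming $|f(x,y)| \le |x|$:

```python
import random

def f(x, y):
    return x**3 / (x**2 + y**2) if (x, y) != (0.0, 0.0) else 0.0

random.seed(0)
ok = True
for _ in range(10000):
    x = random.uniform(-1e-3, 1e-3)
    y = random.uniform(-1e-3, 1e-3)
    # |x^3/(x^2+y^2)| <= |x| because x^2 <= x^2 + y^2
    if (x, y) != (0.0, 0.0) and abs(f(x, y)) > abs(x) + 1e-15:
        ok = False
print(ok)  # True
```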
|
2,359,621 | <p>Consider $f:\mathbb{R}^2 \rightarrow \mathbb{R}$ where</p>
<p>$$f(x,y):=\begin{cases}
\frac{x^3}{x^2+y^2} & \textit{ if } (x,y)\neq (0,0) \\
0 & \textit{ if } (x,y)= (0,0)
\end{cases} $$</p>
<p>If one wants to show the continuity of $f$, I mainly want to show that </p>
<p>$$ \lim\limits_{(x,y)\rightarrow0}\frac{x^3}{x^2+y^2}=0$$</p>
<p>But what does $\lim\limits_{(x,y)\rightarrow0}$ mean? Is it equal to $\lim\limits_{(x,y)\rightarrow0}=\lim\limits_{||(x,y)||\rightarrow0}$ or does it mean $\lim\limits_{x\rightarrow0}\lim\limits_{y\rightarrow0}$?</p>
<p>If so, how does one show that the above function tends to zero?</p>
| Lonidard | 206,444 | <p><strong>TLDR</strong>:You can intuitively think of it as $\lim_{||(x,y||\to 0}$. </p>
<p>This becomes more clear and formal switching to polar coordinates; if you write
$$f(x,y)=f(r\cos \theta, r\sin \theta)$$
we say that
$$\lim_{(x,y)\to (x_0,y_0)}f(x,y)=L$$
if
$$\lim_{r\to 0} f(r\cos \theta+x_0, r\sin \theta+y_0)=L$$
uniformly in $\theta \in [0,2\pi)$ (convergence along each fixed $\theta$ alone is not enough in general).</p>
<p>The intuition behind this definition is that we want
$$f(x,y)\stackrel{(x,y)\to (x_0,y_0)}{\longrightarrow} L$$
to be true if $f$ approaches $L$ getting closer to $(x_0,y_0)$, <strong>regardless of the direction</strong> from which this is happening.</p>
|
2,289,935 | <p>Please help, I don't know how to go with this. So far I've done this :</p>
<p>if $c_1v + c_2w + c_3(v\times w) = 0$ , then $c_1,c_2,c_3$ must be $0$, and $0$ must be the only solution.</p>
| marty cohen | 13,079 | <p>I would use the fact that
$0 = v\cdot (v\times w) = w\cdot (v\times w)$.</p>
<p>If $v\times w = av+bw$, then
$$0 = v\cdot(v\times w) = v\cdot(av+bw) = a|v|^2+b(v\cdot w)$$
and
$$0 = w\cdot(v\times w) = w\cdot(av+bw) = a(v\cdot w)+b|w|^2.$$
Since $v$ and $w$ are linearly independent, the Cauchy–Schwarz inequality is strict, so the determinant $|v|^2|w|^2-(v\cdot w)^2$ of this $2\times 2$ system is positive, which implies
$a = b = 0$.</p>
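The two dot-product identities and the resulting $2\times 2$ system can be checked numerically; here is a small sketch of mine (not part of the answer) using plain Python:

```python
import random

def dot(u, v): return sum(a * b for a, b in zip(u, v))

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

random.seed(1)
v = tuple(random.uniform(-1, 1) for _ in range(3))
w = tuple(random.uniform(-1, 1) for _ in range(3))
c = cross(v, w)

# v . (v x w) = w . (v x w) = 0
print(abs(dot(v, c)) < 1e-12, abs(dot(w, c)) < 1e-12)

# Gram determinant |v|^2 |w|^2 - (v.w)^2 is positive for independent v, w,
# so a|v|^2 + b(v.w) = 0 and a(v.w) + b|w|^2 = 0 force a = b = 0
det = dot(v, v) * dot(w, w) - dot(v, w) ** 2
print(det > 0)  # True: Cauchy-Schwarz is strict for independent vectors
```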
|
2,289,935 | <p>Please help, I don't know how to go with this. So far I've done this :</p>
<p>if $c_1v + c_2w + c_3(v\times w) = 0$ , then $c_1,c_2,c_3$ must be $0$, and $0$ must be the only solution.</p>
| copper.hat | 27,978 | <p>By (one) definition, $a \times b$ is the unique vector satisfying
$\langle x, a \times b \rangle = \det \begin{bmatrix} x & a & b \end{bmatrix} $, and in particular we have $\|a \times b \|^2 = \det \begin{bmatrix} a \times b & a & b \end{bmatrix}$.</p>
<p>Hence we see that $a \times b = 0$ <strong>iff</strong> $a,b$ are
linearly dependent.</p>
<p>To see this note that if $a,b$ are linearly dependent, then
$\|a \times b \|^2 = \det \begin{bmatrix} a \times b & a & b \end{bmatrix} =0$ and so $a \times b = 0$. On the other hand, if $a \times b = 0$, then $\det \begin{bmatrix} x & a & b \end{bmatrix} = 0$
for all vectors $x$ and so we must have that $a,b$ are linearly
dependent (otherwise we could find a basis for $\mathbb{R}^3$ of the
form $a,b,x$ for which $\det \begin{bmatrix} x & a & b \end{bmatrix} \neq 0$, a contradiction).</p>
<p>Hence we have
$a,b$ are linearly independent <strong>iff</strong> $a \times b \neq 0$ <strong>iff</strong>
$\det \begin{bmatrix} a \times b & a & b \end{bmatrix} \neq 0$ <strong>iff</strong>
$a,b,a \times b$ are linearly independent.</p>
|
829,449 | <p>I am confused about the concept of extensionality versus intensionality. When we say 2<3 is true, we say that 2<3 can be demonstrated by a mathematical proof; so, according to mathematical logic, it is true. Yet, when we consider x(x+1) and x^2 + x, we can say that the value is the same for x = 1. However, we call this intensional since the two expressions are true for the same value. This I understand. However, what I am having difficulty with is the claim that numbers are by their very nature abstract objects. So, how is it that there exist any truth values for mathematical statements? I know this seems like a general question, but I am having difficulty wrapping my head around it, since a proposition about an abstract object is by its very nature intensional. Why then is the number 1 fixed? Is it simply because we agree that 1 is 1 and nothing else? And does mathematical logic itself establish the meaning of 1?</p>
| mweiss | 124,095 | <p>A few thoughts:</p>
<ul>
<li>Resolving vectors into components is all about trigonometric ratios, which in turn are all about similar triangles.</li>
<li>Inverse-square laws (gravity, electrical force) can be interpreted geometrically in terms of the surface area of a sphere.</li>
<li>Planetary orbits and ellipses.</li>
</ul>
<p>I am sure I will think of more and will add to this list as they occur to me.</p>
|
829,449 | <p>I am confused about the concept of extensionality versus intensionality. When we say 2<3 is true, we say that 2<3 can be demonstrated by a mathematical proof; so, according to mathematical logic, it is true. Yet, when we consider x(x+1) and x^2 + x, we can say that the value is the same for x = 1. However, we call this intensional since the two expressions are true for the same value. This I understand. However, what I am having difficulty with is the claim that numbers are by their very nature abstract objects. So, how is it that there exist any truth values for mathematical statements? I know this seems like a general question, but I am having difficulty wrapping my head around it, since a proposition about an abstract object is by its very nature intensional. Why then is the number 1 fixed? Is it simply because we agree that 1 is 1 and nothing else? And does mathematical logic itself establish the meaning of 1?</p>
| cesaruliana | 128,809 | <p>People already gave some good ideas, but I'll pitch in what could be a more systematic approach.</p>
<p>Without calculus it is somewhat difficult to study physics after Newton. Fortunately a lot was known before him. An account of this can be seen in Rene Dugas' "<a href="http://rads.stackoverflow.com/amzn/click/0486656322" rel="nofollow">A History of Mechanics</a>". In this book he explains how people did mechanics in the past, with lots of diagrams, working out explicitly the geometrical arguments people gave before Newton (and after as well, but that's not the point).</p>
<p>If I recall correctly, almost all proofs before Newton required just Euclidean geometry and some simple algebra, plus a bit of trigonometry every now and then. So you should be able not only to state but also to prove almost every assertion one learns in high-school mechanics, such as uniformly accelerated motion, static equilibrium and forces, and so on.</p>
<p>It would take you some time to "adapt" the book for lecturing her, but it should enable you to go somewhat far.</p>
|
1,649,053 | <p>In the figure, $AD\perp DE$ and $BE\perp ED$. $C$ is the midpoint of $AB$. How can one prove that $$CD=CE?$$<a href="https://i.stack.imgur.com/ZtAA0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZtAA0.png" alt="enter image description here"></a></p>
| Narasimham | 95,860 | <p>Draw $GH$ parallel to $DE$. Angle $GAC$ equals the alternate angle $CBH$ (cut off by the parallels), and angle $GCA$ equals its vertically opposite angle $HCB$; given $CA = CB$, the triangles are congruent.</p>
<p><a href="https://i.stack.imgur.com/nj2BO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nj2BO.png" alt="QnTrap"></a></p>
|
4,273,026 | <p>Let <span class="math-container">$\Omega\subset\mathbb{R}^n$</span> be a bounded open set, <span class="math-container">$n\geq 2$</span>. For <span class="math-container">$r>0$</span>, denote by <span class="math-container">$B_r(x_0)=\{x\in\mathbb{R}^n:|x-x_0|<r\}$</span> whose closure is a proper subset of <span class="math-container">$\Omega$</span>. Let <span class="math-container">$u\in W^{1,p}(\Omega)$</span> (the standard Sobolev space) for <span class="math-container">$1<p<n$</span> be a nonnegative, bounded function such that for every <span class="math-container">$\frac{1}{2}\leq\sigma^{'}<\sigma\leq 1$</span>, we have
<span class="math-container">\begin{equation}
\sup_{B_{\sigma^{'}r}(x_0)}\,u\leq \frac{1}{2}\sup_{B_{\sigma r}(x_0)}\,u+\frac{c}{(\sigma-\sigma^{'})^{\frac{n}{q}}}\left(\frac{1}{|B_r(x_0)|}\int_{B_r(x_0)}u^q\,dx\right)^\frac{1}{q}\quad\forall q\in(0,p^{*}),
\end{equation}</span>
where <span class="math-container">$c$</span> is some fixed positive constant,independent of <span class="math-container">$x_0,r$</span>, <span class="math-container">$p^{*}=\frac{np}{n-p}$</span> and <span class="math-container">$|B_r(x_0)|$</span> denote the Lebsegue measure of the ball <span class="math-container">$B_r(x_0)$</span>. Then by the iteration lemma stated below, we have
<span class="math-container">\begin{equation}
\sup_{B_{\frac{r}{2}}(x_0)}\,u\leq c\left(\frac{1}{|B_r(x_0)|}\int_{B_r(x_0)}u^q\,dx\right)^\frac{1}{q}\quad\forall q\in(0,p^{*}),
\end{equation}</span>
where <span class="math-container">$c$</span> is some fixed positive constant,independent of <span class="math-container">$x_0,r$</span>.</p>
<p>Iteration lemma: Let <span class="math-container">$f=f(t)$</span> be a nonnegative bounded function defined for <span class="math-container">$0\leq T_0\leq t\leq T_1$</span>. Suppose that for <span class="math-container">$T_0\leq t<\tau\leq T_1$</span> we have
<span class="math-container">$$
f(t)\leq c_1(\tau-t)^{-\theta}+c_2+\xi f(\tau),
$$</span>
where <span class="math-container">$c_1,c_2,\theta,\xi$</span> are nonnegative constants and <span class="math-container">$\xi<1$</span>. Then there exists a constant <span class="math-container">$c$</span> depending only on <span class="math-container">$\theta,\xi$</span> such that for every <span class="math-container">$\rho, R$</span>, <span class="math-container">$T_0\leq \rho<R\leq T_1$</span>, we have
<span class="math-container">$$
f(\rho)\leq c[c_1(R-\rho)^{-\theta}+c_2].
$$</span>
Applying the iteration lemma with <span class="math-container">$f(t)=\sup_{B_t(x_0)}\,u$</span>, <span class="math-container">$\tau=\sigma r$</span>, <span class="math-container">$t=\sigma^{'}r$</span>, <span class="math-container">$\theta=\frac{n}{q}$</span> in the given estimate on <span class="math-container">$u$</span> above, the second estimate on <span class="math-container">$u$</span> above follows. My question is can we obtain the following estimate
<span class="math-container">\begin{equation}
\sup_{B_r(x_0)}\,u\leq c\left(\frac{1}{|B_r(x_0)|}\int_{B_r(x_0)}u^q\,dx\right)^\frac{1}{q}\quad\forall q\in(0,p^{*}),
\end{equation}</span>
which estimates the supremum of <span class="math-container">$u$</span> over the whole ball <span class="math-container">$B_r(x_0)$</span>? If it is possible, does it follow by a covering argument? Here <span class="math-container">$c$</span> is some fixed positive constant, independent of <span class="math-container">$x_0,r$</span>.</p>
<p>Thank you very much.</p>
| Math | 471,409 | <p>@user378654: the limit below is
<span class="math-container">$$
\lim_{a\to\infty} a\left(1-\left(\tfrac{1}{2}\right)^{\frac{1}{a}}\right)^n=0,
$$</span>
but it seems you have used this limit being non-zero to give a lower bound in your argument that is independent of <span class="math-container">$a$</span>.</p>
<p>The above limit is zero; you may see the answer here: <a href="https://math.stackexchange.com/questions/4273475/to-find-the-limit-of-a-sequence/4273505#4273505">To find the limit of a sequence</a>.</p>
<p>Also, it seems that to give a lower bound independent of <span class="math-container">$a$</span>, the whole sequence should be considered. Could you please check that? Thanks.</p>
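For what it is worth, the claimed limit can be checked numerically (my addition): since $1-(1/2)^{1/a} = 1 - e^{-\ln 2/a} \approx \frac{\ln 2}{a}$, the expression behaves like $(\ln 2)^n\, a^{1-n}$, which tends to $0$ for $n \ge 2$.

```python
import math

def seq(a, n):
    return a * (1 - 0.5 ** (1.0 / a)) ** n

# for n >= 2 the value behaves like (ln 2)^n * a^(1 - n) -> 0
for n in [2, 3]:
    for a in [1e2, 1e4, 1e6]:
        print(n, a, seq(a, n))

# the asymptotic (ln 2)^2 / a matches to within 1% for large a
approx = math.log(2) ** 2 / 1e6
print(abs(seq(1e6, 2) - approx) / approx < 0.01)  # True
```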
|
3,553,975 | <p>I fear that this is a stupid question, but I want to have a go anyway. </p>
<p>Let <span class="math-container">$k$</span> be a field, and let <span class="math-container">$f(x,y)$</span> be an irreducible homogeneous quadratic polynomial in <span class="math-container">$k[x,y]$</span>. </p>
<p><em>Question</em>: (when) is <span class="math-container">$k[x,y]/(f(x,y)) \cong (k[x]/f(x,1))[y]$</span> ?</p>
<p>Probably I am seeing ghosts, but is there some more general (correct) identity that I am totally missing ? Can the assumptions on <span class="math-container">$f(x,y)$</span> be relaxed ? </p>
| Henno Brandsma | 4,280 | <p>I would call a function that obeys <span class="math-container">$x < y \to f(x) < f(y)$</span> for all <span class="math-container">$x,y$</span>, a <em>strictly increasing</em> function.</p>
|
2,659,781 | <p>I saw a problem yesterday which can easily be solved if we use fractions. But the problem is for 4th-grade children, and I don't know how to solve it using only what they have learned.</p>
<p>I tried solving it using the graphical method (segments). Here's the problem:</p>
<p>A team of workers has to finish a road. On the first day, they built <code>3/4</code> of the road and <code>2 meters</code> more; on the second day, <code>3/4</code> of the remaining road and <code>2 meters</code> more. On the last day, the remaining length was <code>1 meter</code>. What was the length of the road?</p>
| user326210 | 326,210 | <p>You can work backwards starting from the last day:</p>
<ul>
<li>The work took place over days 1, 2, and 3. They built the complete road starting from nothing.</li>
<li>They built some amount of the road on day 1. </li>
<li>They had to build the rest of it on days 2 and 3. We can think about days 2 and 3 collectively.</li>
<li>On days 2 and 3, they built the rest of it. They built 3/4 of the rest, then they built 3m. This <em>finished</em> the road. </li>
<li>To finish the road, they had to build 1/4 of the rest. This means that 3m is 1/4 of the rest.</li>
<li>So 12m is the amount of road they had to build overall on days 2 and 3.</li>
<li>On day 1, they built 3/4 of the road and 2 meters. When they finished, we know that there were 12m left to build on days 2 and 3.</li>
<li>So, after they built 3/4 of the road, there were 12m+2m = 14m left.</li>
<li>After they built 3/4 of the road, there was 1/4 of the road left.</li>
<li>This means that 14m is 1/4 of the total length of the road.</li>
<li>So the overall length of the road is 56m. </li>
</ul>
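The backward reasoning above can be double-checked by running the story forward — a small script of my own (not part of the original answer):

```python
def leftover(total):
    road = total
    road -= 0.75 * road + 2   # day 1: 3/4 of the road and 2 meters
    road -= 0.75 * road + 2   # day 2: 3/4 of the remainder and 2 meters
    return road               # what is left for the last day

print(leftover(56))  # 1.0, matching the 1 meter built on the last day
```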
|
3,027,925 | <p>Just for my own understanding of how exactly integration works, are these steps correct:</p>
<p><span class="math-container">$$\begin{align}\int x\,d(x^2) \qquad &\implies x^2 = u \\ & \implies x= \sqrt{u}\end{align}$$</span> </p>
<p>Thus, it becomes <span class="math-container">$$\int\sqrt{u}\,du = \frac{2}{3}u^{3/2} \implies {2\over3}x^3$$</span></p>
| zero | 532,480 | <p>Yes, except for an integration on the LHS of the last line. Also make sure to keep track of your limits. You would get the same expression when you use <span class="math-container">$d(x^2) = 2x dx$</span></p>
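As a numerical cross-check (my addition, not from the original question or answer), both readings agree on the definite version of the integral over $[0,1]$: $\int_0^1 x\,d(x^2) = \int_0^1 2x^2\,dx = \frac{2}{3}$, which is $\frac{2}{3}x^3$ evaluated from $0$ to $1$.

```python
# left-endpoint Riemann sums on [0, 1] with N subintervals
N = 100000
h = 1.0 / N

# Riemann-Stieltjes sum for the integral of x with respect to x^2
stieltjes = sum((k * h) * ((k * h + h) ** 2 - (k * h) ** 2) for k in range(N))

# ordinary Riemann sum for the integral of 2x^2 dx
ordinary = sum(2 * (k * h) ** 2 * h for k in range(N))

print(stieltjes, ordinary, 2 / 3)  # all approximately 0.6667
```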
|
2,741,832 | <p>When one first learns measure theory, it is a small novelty to find out that
$$\bigcup_{n=0}^\infty B_{\epsilon/2^n}(r_n)$$
is not all of $\mathbb{R}$, where $\{r_n\}$ is an enumeration of the rationals and $\epsilon$ is an arbitrary positive number (notice this fact is equally impressive if $\epsilon$ is small or large).</p>
<p>Of course, by measure arguments, the set above has measure at most $\epsilon$ and can't be all of $\mathbb{R}$. However, there doesn't seem to be another canonical line of reasoning that explains why the union above is not all of $\mathbb{R}$. That makes me wonder, what if we remove that ability to use this argument?</p>
<blockquote>
<p>Is there a pair of sequences of positive real numbers $\{c_n\}$ and $\{d_n\}$ both tending to $0$ such that
$$\sum_{n=0}^\infty c_n=\infty=\sum_{n=0}^\infty d_n$$
where we can demonstrate
$$\bigcup_{n=0}^\infty B_{c_n}(r_n)=\mathbb{R}\quad\text{and}\quad\bigcup_{n=0}^\infty B_{d_n}(r_n)\neq\mathbb{R}$$
with a fixed enumeration of the rationals $\{r_n\}$?</p>
</blockquote>
<p>An existential proof of both questions would be sufficient for me. But an explicit $\{c_n\}$ and $\{d_n\}$ would be interesting to see.</p>
<p>I feel like the $\{c_n\}$ construction might be fairly easy in comparison to $\{d_n\}$, and using dependent choice, I even think I have an argument off the top of my head: just let $\{c_n\}$ be fairly constant until you swallow up $[-N,N]$ and then let it decrease. Continue ad infinitum. But what about $\{d_n\}$?</p>
| Eric Wofsey | 86,856 | <p>Yes, this is possible. Your proposed construction of $(c_n)$ works with no difficulty. To construct $(d_n)$, the easiest thing to do is just pick one point that you want to not be covered. So fix some irrational number $\alpha$. We would like to just let $d_n=|\alpha-r_n|$. Then $\alpha$ will not be in any $B_{d_n}(r_n)$, but $\sum d_n$ will obviously be infinite. This does not satisfy that $d_n\to 0$, but you can easily modify it so that it does (just shrink the $d_n$ so that they converge to $0$ but the sum still diverges).</p>
<p>(This construction of $d_n$ illustrates that it really shouldn't be surprising that an open set can contain all the rationals but not be all of $\mathbb{R}$, since a trivial example of such a set is $\mathbb{R}\setminus\{\alpha\}$!)</p>
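<p>To make the construction concrete, here is an illustrative Python sketch (assumptions for the demo: the Calkin–Wilf sequence stands in for an enumeration of the rationals, $\alpha=\sqrt2$ is the excluded point, and the radii are shrunk by capping at $1/\sqrt{n}$, so $d_n\to0$ while the divergence of $\sum d_n$ is only suggested numerically):</p>

```python
# Sketch of the d_n construction: d_n = min(|alpha - r_n|, 1/sqrt(n)).
# alpha is never covered (each ball's radius is at most its distance to alpha),
# d_n -> 0 because of the cap, and the partial sums keep growing because the
# cap 1/sqrt(n) is the minimum for most n.
import math
from fractions import Fraction

alpha = math.sqrt(2)

def calkin_wilf(count):
    """Yield the first `count` positive rationals, each exactly once."""
    q = Fraction(1)
    for _ in range(count):
        yield q
        q = 1 / (2 * math.floor(q) - q + 1)

N = 20000
rationals = [float(r) for r in calkin_wilf(N)]
d = [min(abs(alpha - r), 1 / math.sqrt(n + 1)) for n, r in enumerate(rationals)]

covered = any(abs(alpha - r) < dn for r, dn in zip(rationals, d))
print(covered, max(d[-100:]), sum(d))  # False, a small tail radius, a large partial sum
```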
|
4,066,601 | <p>The question is</p>
<blockquote>
<p>Find all solutions <span class="math-container">$z\in \mathbb C$</span> for the following equation: <span class="math-container">$z^2 +3\bar{z} -2=0$</span></p>
</blockquote>
<p>I have attempted numerous methods of approaching this question, from trying to substitute <span class="math-container">$x+iy$</span> and <span class="math-container">$x-iy$</span> respectively, in addition to substituting <span class="math-container">$z^2$</span> for <span class="math-container">$z\bar z$</span>, but with no luck. I would really appreciate if you were able to provide some direction so I know where to start. Thank you!</p>
| Michael Hoppe | 93,935 | <p>No need for real and imaginary parts here.</p>
<p>From <span class="math-container">$z^2 +3\bar z = 2$</span> we have <span class="math-container">$\overline{z^2 +3\bar z} = \bar 2$</span>. Now <span class="math-container">$\overline{z^2 +3\bar z}= \bar z^2+3z$</span>.
Hence we have
<span class="math-container">$$\begin{align}
z^2 +3\bar z &= 2\\
\bar z^2+3 z &=2
\end{align}
$$</span></p>
<p>Subtracting gives
<span class="math-container">$$(z-\bar z)(z+\bar z-3)=0,$$</span>
that is <span class="math-container">$z=\bar z$</span> (hence <span class="math-container">$z$</span> is real in this case) or <span class="math-container">$\bar z =3-z$</span>. Both lead to quadratic equation: the first has two real solutions, the latter two complex solutions.</p>
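<p>The four roots are easy to verify numerically (an illustrative Python check):</p>

```python
import cmath

# Real case (z = conj(z)):  z^2 + 3z - 2 = 0
real_roots = [(-3 + 17 ** 0.5) / 2, (-3 - 17 ** 0.5) / 2]
# Complex case (conj(z) = 3 - z):  z^2 - 3z + 7 = 0
complex_roots = [(3 + cmath.sqrt(-19)) / 2, (3 - cmath.sqrt(-19)) / 2]

for z in real_roots + complex_roots:
    z = complex(z)
    residual = z * z + 3 * z.conjugate() - 2   # should vanish for every root
    print(z, abs(residual))                     # residuals ≈ 0
```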
|
634,890 | <blockquote>
<p><strong>Moderator Notice</strong>: I am unilaterally closing this question for three reasons. </p>
<ol>
<li>The discussion here has turned too chatty and not suitable for the MSE framework. </li>
<li>Given the recent pre-print of <a href="http://arxiv.org/abs/1402.0290" rel="noreferrer">T. Tao</a> (see also the blog-post <a href="http://terrytao.wordpress.com/2014/02/04/finite-time-blowup-for-an-averaged-three-dimensional-navier-stokes-equation/" rel="noreferrer">here</a>), the continued usefulness of this question is diminished.</li>
<li>The final update on <a href="https://math.stackexchange.com/a/649373/1543">this answer</a> is probably as close to an "answer" an we can expect. </li>
</ol>
</blockquote>
<p>Eminent Kazakh mathematician
Mukhtarbay Otelbaev, Prof. Dr. has published a full proof of the Clay Navier-Stokes Millennium Problem.</p>
<p>Is it correct?</p>
<p>See <a href="http://bnews.kz/en/news/post/180213/" rel="noreferrer">http://bnews.kz/en/news/post/180213/</a></p>
<p>A link to the paper (in Russian):
<a href="http://www.math.kz/images/journal/2013-4/Otelbaev_N-S_21_12_2013.pdf" rel="noreferrer">http://www.math.kz/images/journal/2013-4/Otelbaev_N-S_21_12_2013.pdf</a></p>
<p>Mukhtarbay Otelbaev has published over 200 papers, had over 70 PhD students, and he is a member of the Kazak Academy of Sciences. He has published papers on Navier-Stokes and Functional Analysis.</p>
<p>please confine answers to any actual mathematical error found!
thanks</p>
| Stephen Montgomery-Smith | 22,016 | <p>OK, I spent an afternoon getting help with Russian, and I think I understand a lot more.</p>
<p>So first he actually proves a rather abstract theorem (Theorem 2), and strong solutions of the Navier-Stokes is merely a corollary. He shows the existence of solutions satisfying certain bounds to
$$ \dot u + Au + B(u,u) = f , \quad u(0) = 0,$$
where $A$ and $B$ satisfy rather mild hypotheses that are met, for example, by the replacements $A = E-\Delta$ and $B(u,v) = e^t u \cdot \nabla v + \nabla p$, where $p$ is a scalar chosen so that $B(u,v)$ is divergence free.</p>
<p>(I always thought the proof or counterexample would use the special structure of $B(u,v)$ that comes with the Navier-Stokes equation.)</p>
<p>In Chapter 5, he outlines how he will turn it into a different abstract problem, explaining that it is sufficient to find a bound on $\overset{0}{v} = \dot u + Au$. He constructs an equation for a quantity $v(\xi) \equiv v(\xi,t,x)$, so that in effect it is a time dependent velocity field described by a parameter $\xi$. He creates a differential equation in $\xi$, which morphs $v(0) = \overset 0v$ into $v(\xi_1)$, where $\|v(\xi)\| = \|\overset0v\|$, but $v(\xi_1)$ is easier to work with. This equation is given by equations (5.2) and (5.3).</p>
<p>So far, the only part of equation (5.3) that I am beginning to understand is the $-\alpha(\xi) R(v(\xi))$ part. $R(v)$ measures how far $v$ is from being an eigenvector of $A^\theta$. And so the differential equation
$$ \frac{dv}{d\xi} = -\alpha(\xi) R(v(\xi)) $$
pushes $v$ into becoming closer to become an eigenvector.</p>
<p>Anyway, it looks like Chapter 6 is the meat of the paper. Theorem 6.1 seems to be the main result. However it has a rather odd condition, namely that the dimension of the eigenspace corresponding to the smallest eigenvalue of $A$ should be at least 20. So I will be interested to see how he converts the Navier-Stokes into an equation with this property.</p>
|
634,890 | <blockquote>
<p><strong>Moderator Notice</strong>: I am unilaterally closing this question for three reasons. </p>
<ol>
<li>The discussion here has turned too chatty and not suitable for the MSE framework. </li>
<li>Given the recent pre-print of <a href="http://arxiv.org/abs/1402.0290" rel="noreferrer">T. Tao</a> (see also the blog-post <a href="http://terrytao.wordpress.com/2014/02/04/finite-time-blowup-for-an-averaged-three-dimensional-navier-stokes-equation/" rel="noreferrer">here</a>), the continued usefulness of this question is diminished.</li>
<li>The final update on <a href="https://math.stackexchange.com/a/649373/1543">this answer</a> is probably as close to an "answer" an we can expect. </li>
</ol>
</blockquote>
<p>Eminent Kazakh mathematician
Mukhtarbay Otelbaev, Prof. Dr. has published a full proof of the Clay Navier-Stokes Millennium Problem.</p>
<p>Is it correct?</p>
<p>See <a href="http://bnews.kz/en/news/post/180213/" rel="noreferrer">http://bnews.kz/en/news/post/180213/</a></p>
<p>A link to the paper (in Russian):
<a href="http://www.math.kz/images/journal/2013-4/Otelbaev_N-S_21_12_2013.pdf" rel="noreferrer">http://www.math.kz/images/journal/2013-4/Otelbaev_N-S_21_12_2013.pdf</a></p>
<p>Mukhtarbay Otelbaev has published over 200 papers, had over 70 PhD students, and he is a member of the Kazak Academy of Sciences. He has published papers on Navier-Stokes and Functional Analysis.</p>
<p>please confine answers to any actual mathematical error found!
thanks</p>
| nick kave | 125,228 | <p>On the Spanish site
<a href="http://francis.naukas.com/2014/01/18/la-demostracion-de-otelbaev-del-problema-del-milenio-de-navier-stokes/#comment-21031" rel="nofollow">http://francis.naukas.com/2014/01/18/la-demostracion-de-otelbaev-del-problema-del-milenio-de-navier-stokes/#comment-21031</a>
the following info appeared</p>
<blockquote>
<p>A young guy in Russia seems to have found a concrete gap in the proof.
This concerns Statement 6.3. In the ‘proof’, on p.56, the passage from
(6.33) to (6.34) is made by saying ‘using this and that and also that’
. However no reasons are visible where does the extra ||z|| on the
right hand side come from. At least some very detailed explanation
for this is needed.</p>
</blockquote>
|
137,794 | <p>I'm plotting the electric field of a charged ring based a solution from Jackson's <em>Electrodynamics</em>. </p>
<p><em>Mathematica</em> handles <code>VectorPlot3D</code> and <code>SliceVectorPlot3D</code> for the field without a hitch, and <code>SliceContourPlot3D</code> of the field magnitude as well. </p>
<p>However, attempting to produce a straight-up <code>ContourPlot3D</code> of the field magnitude returns multiple errors, including <code>Power::infy</code>, <code>Infinity::indet</code>, and <code>Power::indet</code>, along with <code>General::stop</code> once those messages repeat.</p>
<p>Does anyone have any ideas as to why this might be, and how to work around it to generate a contour plot? This is, by the way, version 10.3.</p>
<p>(I apologize for the Greek letters; they really didn't seem to want to copy cleanly.)</p>
<pre><code>Clear["Global`*"]
Φ[s_, z_] := Piecewise[{
{q*Sum[(R^l/(s^2 + z^2)^(0.5*(l + 1)))*LegendreP[l, 1/Sqrt[1 + s^2/z^2]], {l, 0, k}], Sqrt[s^2 + z^2] > R},
{q*Sum[((s^2 + z^2)^(0.5*l)/R^(l + 1))*LegendreP[l, 1/Sqrt[1 + s^2/z^2]], {l, 0, k}], Sqrt[s^2 + z^2] < R}
}]
dΦ = -{D[Φ[s, z], s], 0, D[Φ[s, z], z]};
dΦc = Simplify[TransformedField["Cylindrical" -> "Cartesian", dΦ, {s, ϕ, z} -> {x, y, Q}] /. Q -> z];
k = 5; q = 5; R = 0.2;
cont = SliceContourPlot3D[Norm[dΦc], "CenterPlanes",
{x, -0.35, 0.35}, {y, -0.35, 0.35}, {z, -0.35, 0.35},
ImageSize -> Large, PlotLegends -> Automatic, Contours -> 16,
ColorFunction -> "Rainbow"]
Clear[q, k, R];
k = 5; q = 5; R = 0.2;
ContourPlot3D[Norm[dΦc],
{x, -0.35, 0.35}, {y, -0.35, 0.35}, {z, -0.35, 0.35},
ImageSize -> Large, PlotLegends -> Automatic, Contours -> 16,
ColorFunction -> "Rainbow", RegionFunction -> Function[{x, y, z}, x*y*z > 0]]
Clear[k, q, R]
</code></pre>
| Jack LaVigne | 10,917 | <p>Using your definitions one can determine that the source of the problem is when <code>x</code> and <code>y</code> are very close to zero.</p>
<pre><code>contourList = Partition[
Flatten[
Table[{x, y, z, Norm[dΦc]}, {x, -0.35, 0.35, 0.01},
{y, -0.35, 0.35, 0.01},
{z, -0.35, 0.35, 0.01}
]
], 4];
</code></pre>
<p>Now seek out the elements where the value is <code>Indeterminate</code>.</p>
<pre><code>Cases[contourList, {x___, Indeterminate, y___}]
</code></pre>
<p><img src="https://i.stack.imgur.com/mLXRV.png" alt="Mathematica graphics"></p>
<p>The indeterminate ones are where <code>x</code> and <code>y</code> are close to zero (i.e., the vertical <code>z</code> axis).</p>
<p>This is because there is a pre-multiplier term that has the value</p>
<pre><code>dΦc[[1, 3]]
(* 1/Sqrt[x^2 + y^2] *)
</code></pre>
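<p>The blow-up is easy to reproduce outside <em>Mathematica</em>: any sample point with <code>x == y == 0</code> makes this prefactor divide by zero, which the full-volume sampling grid of <code>ContourPlot3D</code> apparently hits while the slice plots happen to avoid it. A minimal Python illustration:</p>

```python
import math

def prefactor(x, y):
    # the 1/Sqrt[x^2 + y^2] factor from the Cartesian field components
    return 1.0 / math.sqrt(x * x + y * y)

print(prefactor(3.0, 4.0))      # 0.2 — fine off the z-axis
try:
    prefactor(0.0, 0.0)         # any sample point on the z-axis
except ZeroDivisionError as err:
    print("singular on the z-axis:", err)
```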
|
4,496,815 | <blockquote>
<p>For <span class="math-container">$n, m \in \mathbb{N}, m \leq n$</span>, let <span class="math-container">$P(n, m)$</span> denote the number of permutations of length <span class="math-container">$n$</span> for which <span class="math-container">$m$</span> is the first number whose position is left unchanged. Thus, <span class="math-container">$P(n, 1) = (n - 1)!$</span> and <span class="math-container">$P(n, 2) = (n - 1)! - (n - 2)!$</span>. Show that <span class="math-container">$$P(n, m + 1) = P(n, m) - P(n - 1, m)$$</span> for each <span class="math-container">$m = 1, 2, \cdots, n - 1$</span>.</p>
</blockquote>
<p>Hello, can someone help me with the combinatorial proof for this?</p>
<p>I can prove it in other way, by proving that <span class="math-container">$$P(n, m) = \sum_{i = 0}^{m - 1}(-1)^i\binom{m-1}{i}(n - i-1)!$$</span>
using PIE. Now, turning <span class="math-container">$P(n, m) - P(n - 1,m)$</span> into <span class="math-container">$P(n, m+1)$</span> is just algebraic manipulation.</p>
<p>I'd be thankful if someone could help in proving this combinatorially.</p>
<p>Thanks</p>
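<p>For small $n$, both the recurrence and the PIE formula above can be checked by brute force (an illustrative Python sketch):</p>

```python
from itertools import permutations
from math import comb, factorial

def P(n, m):
    """Count permutations of 1..n whose smallest fixed point is m."""
    count = 0
    for perm in permutations(range(1, n + 1)):
        fixed = [i for i, v in enumerate(perm, start=1) if i == v]
        if fixed and fixed[0] == m:
            count += 1
    return count

n = 6
for m in range(1, n):
    # the recurrence to be proved
    assert P(n, m + 1) == P(n, m) - P(n - 1, m)
    # the inclusion-exclusion formula
    assert P(n, m) == sum((-1) ** i * comb(m - 1, i) * factorial(n - 1 - i)
                          for i in range(m))
print("all checks passed for n =", n)
```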
| Drew Brady | 503,984 | <p>This is an immediate consequence of AM-GM.</p>
<p>First, <span class="math-container">$u + 1/u \geq 2$</span> whenever <span class="math-container">$u > 0$</span>.</p>
<p>Lower bound: by inequality above,
<span class="math-container">$\mu \geq {2^2}^2 = 16$</span>, and this is attained with <span class="math-container">$\alpha = \beta = \gamma = 1$</span>.</p>
<p>Upper bound: Using AM-GM again,
<span class="math-container">$
\mu \geq 4^{\gamma + 1/\gamma} \geq 4^{\gamma}.
$</span></p>
<p>Therefore, <span class="math-container">$\sup_{\alpha, \beta,\gamma > 0} \mu = +\infty$</span>, while <span class="math-container">$\inf_{\alpha, \beta, \gamma > 0} \mu = 16$</span>.</p>
|
3,104,706 | <p>Let <span class="math-container">$T$</span> be the left shift operator on <span class="math-container">$B(l^{2}(\mathbb{N}))$</span>. How to see that von Neumann algebra generated by <span class="math-container">$T$</span> is <span class="math-container">$B(l^{2}(\mathbb{N}))$</span>?</p>
| David Hill | 145,687 | <p>If <span class="math-container">$X$</span> is infinite, then there is an infinite subset <span class="math-container">$Y=\{y_1, y_2, \ldots\}\subset X$</span>. </p>
<ol>
<li><p>Define <span class="math-container">$g:X\to X$</span> so that <span class="math-container">$g(x)=x$</span> if <span class="math-container">$x\in X\backslash Y$</span> and set <span class="math-container">$g(y_i)=y_{i+1}$</span>. Then <span class="math-container">$g$</span> is injective, but <span class="math-container">$y_1\notin g(X)$</span>. </p></li>
<li><p>Define <span class="math-container">$h:X\to X$</span> so that <span class="math-container">$h(x)=x$</span> for <span class="math-container">$x\in X\backslash Y$</span>, <span class="math-container">$h(y_{2n})=y_n$</span> and <span class="math-container">$h(y_{2n+1})=y_n$</span>. Then <span class="math-container">$h$</span> is surjective, but <span class="math-container">$h(y_2)=h(y_3)= y_1$</span>. </p></li>
</ol>
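<p>Taking $X=Y=\mathbb{N}=\{1,2,3,\ldots\}$ with $y_i=i$ for concreteness, the two constructions become $g(n)=n+1$ and $h(n)=\lfloor n/2\rfloor$ (with $h(1)$ set arbitrarily), and the claimed properties can be spot-checked on a finite window (an illustrative Python sketch):</p>

```python
# Y = N = {1, 2, 3, ...} with y_i = i, so the answer's maps become:
g = lambda n: n + 1                   # g(y_i) = y_{i+1}: injective, misses y_1 = 1
h = lambda n: n // 2 if n > 1 else 1  # h(y_{2n}) = h(y_{2n+1}) = y_n: surjective, not injective

window = range(1, 1001)
g_values = [g(n) for n in window]
print(len(set(g_values)) == len(g_values))           # True: injective on the window
print(1 in g_values)                                 # False: y_1 is never hit
print(set(range(1, 501)) <= {h(n) for n in window})  # True: every m <= 500 is hit
print(h(2) == h(3) == 1)                             # True: not injective
```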
|
747,949 | <p>There is a complex series: $f(t_n)=\alpha_n+\beta_n i$, for $n = 1,\ldots,N$, where $t_n$, $\alpha_n$ and $\beta_n$ are known. Given that $f(t)$ has the following form:
$$f(t)=Ae^{-iBt}$$
with unknown amplitude $A$ and unknown frequency $B$, how can one estimate the parameters $A$ and $B$ using a numerical optimization method?</p>
| Claude Leibovici | 82,404 | <p>As written by Martín-Blas Pérez Pinilla, let us suppose that you want to find the optimum values of parameters $A$ and $B$ which minimize the objective function $$\Phi(A,B)=\sum _{n=1}^N \left[(\alpha_n-A\cos (Bt_n))^2+(\beta_n+A\sin (Bt_n))^2\right]=\sum _{n=1}^N r_n$$ Now, since you want the objective function to be minimum, write its derivatives with respect to $A$ and $B$ and set them equal to zero. This will then correspond to $$\sum _{n=1}^N \frac{dr_n}{dA}=0$$ $$\sum _{n=1}^N \frac{dr_n}{dB}=0$$ These correspond, respectively, to $$\sum _{n=1}^N [A-\alpha _n \cos (B t_n)+\beta_n \sin (B t_n)]=0$$ $$\sum _{n=1}^N [A t_n (\alpha_n \sin (B t_n)+\beta_n \cos (B t_n))]=0$$ What is nice is that the first equation allows one to express $A$ explicitly as a function of $B$; so only the second equation is left, and you can solve it using Newton's method provided that you have a reasonable guess (notice that $A$ disappears from the second equation). </p>
<p>As written by Martín-Blas Pérez Pinilla, you could start your iterations by computing the average value of $$B_n= -\frac1{t_n}\arctan\frac{\beta_n}{\alpha_n}$$ over the entire data set (since $\alpha_n=A\cos(Bt_n)$ and $\beta_n=-A\sin(Bt_n)$, each $B_n$ is a crude estimate of $B$).</p>
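<p>A minimal numerical sketch in Python (synthetic data with hypothetical true values $A=2$, $B=1.5$; the samples keep $|Bt_n|<\pi$ so each per-sample phase estimate is unambiguous, and <code>atan2</code> is used to get the quadrant right):</p>

```python
import math

# Synthetic data from f(t) = A e^{-iBt}, i.e. alpha = A cos(Bt), beta = -A sin(Bt)
A_true, B_true = 2.0, 1.5          # hypothetical true parameters for the demo
t = [0.1 * n for n in range(1, 11)]
alpha = [A_true * math.cos(B_true * tn) for tn in t]
beta = [-A_true * math.sin(B_true * tn) for tn in t]

# Per-sample estimates: the phase of alpha + i*beta is -B t_n, so
# B_n = -atan2(beta_n, alpha_n) / t_n  (valid while |B t_n| < pi)
B_hat = sum(-math.atan2(b, a) / tn for a, b, tn in zip(alpha, beta, t)) / len(t)
A_hat = sum(math.hypot(a, b) for a, b in zip(alpha, beta)) / len(t)
print(A_hat, B_hat)   # ≈ 2.0, 1.5
```

In practice such an average would only seed the Newton iteration on the second stationarity equation.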
|
70,429 | <p>For an $n$-dim smooth projective complex algebraic variety $X$, we can form the complex line bundle $\Omega^n$ of holomorphic $n$-forms on $X$. Let $K_X$ be the divisor class of $\Omega^n$; then $K_X$ is called the canonical class of $X$.</p>
<p><strong>Question</strong>: Is homology class of $K_X$ in $H_{2n-2}(X)$ a topological invariant? If it's true, please tell me the idea of proof or some references. If not, please give me the counterexamples.</p>
| Francesco Polizzi | 7,460 | <p>For the question about <em>homeomorphisms</em> the answer is <em>no</em>, even if $X$ and $X'$ are algebraic surfaces. </p>
<p>In fact, in his paper [Orientation reversing homeomorphisms in surface geography, Math. Ann. 292 (1992)], D. Kotschick proves the following result:</p>
<blockquote>
<p><strong>Theorem.</strong>
There exist infinitely many pairs of simply connected algebraic surfaces of general type which are orientation-reversing homeomorphic (with respect to their complex orientations), but not diffeomorphic.</p>
</blockquote>
<p>He also makes a conjecture about <em>orientation-reversing diffeomorphic</em> algebraic surfaces.
As I said in my comments before, by using Seiberg-Witten theory one proves that, given <em>any</em> diffeomorphism $\phi \colon X \to X'$ between two smooth $4$-manifolds, one has either $\phi(K_X)=K_{X'}$ or $\phi(K_X)=-K_{X'}$. </p>
<p>Kotschick's conjecture is therefore the following:</p>
<blockquote>
<p><strong>Conjecture.</strong> If two algebraic surface with finite fundamental group are orientation-reversing diffeomorphic, then they are homeomorphic to a geometrically ruled rational surface. In particular, they are simply connected.</p>
</blockquote>
<p>I do not know the current state of this conjecture. </p>
<p><strong>Added On February 29, 2012</strong>. D. Kotschick kindly informed me that he actually proved this conjecture in his paper <a href="http://journals.cambridge.org/action/displayAbstract?fromPage=online&aid=18825">Orientations and geometrizations of compact complex surfaces</a>, Bulletin of the London Mathematical Society <strong>29</strong> (1997), 145-149. </p>
|
1,027,486 | <p>How do I integrate this?</p>
<p>$$\int_0^{2\pi}\frac{dx}{2+\cos{x}}, x\in\mathbb{R}$$</p>
<p>I know the substitution method from real analysis, $t=\tan{\frac{x}{2}}$, but since this problem is in a set of problems about complex integration, I thought there must be another (easier?) way.</p>
<p>I tried computing the poles in the complex plane and got
$$\text{Re}(z_0)=\pi+2\pi k, k\in\mathbb{Z}; \text{Im}(z_0)=-\log (2\pm\sqrt{3})$$
but what contour of integration should I choose?</p>
| Math-fun | 195,344 | <p>Here is an elementary treatment: </p>
<p>First note that $\displaystyle2+\cos x=\frac{3+\tan ^2 \frac{x}{2}}{1+\tan^2 \frac{x}{2}}$. Also note that for $\displaystyle f(x)=\frac{3+\tan ^2 \frac{x}{2}}{1+\tan^2 \frac{x}{2}}$, it holds that $\displaystyle f(x+\pi)=f(x)$ for $0<x<\pi$. Therefore
$$\begin{align}\int_0^{2\pi}\frac{1}{2+\cos x}dx&=\int_0^{2\pi}\frac{1+\tan^2 \frac{x}{2}}{3+\tan ^2 \frac{x}{2}}dx\\
&=2\int_0^{\pi}\frac{1+\tan^2 \frac{x}{2}}{3+\tan ^2 \frac{x}{2}}dx \\
&=2\pi-4\int_0^{\pi}\frac{1}{3+\tan ^2 \frac{x}{2}}dx \\
&=2\pi-8\int_0^{\infty}\frac{1}{(3+u^2)(1+u^2)}du\\
&=2\pi-4\int_0^{\infty}\frac{1}{1+u^2}du+4\int_0^{\infty}\frac{1}{3+u^2}du\\
&=\frac{2\pi}{\sqrt{3}}
\end{align}$$</p>
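<p>Either way, the value $\frac{2\pi}{\sqrt{3}}\approx 3.6276$ is easy to confirm numerically (an illustrative Python midpoint-rule check):</p>

```python
import math

# Midpoint rule over one full period; for a smooth periodic integrand this
# converges extremely fast, so N = 10000 is far more than enough.
N = 10_000
h = 2 * math.pi / N
approx = h * sum(1 / (2 + math.cos((k + 0.5) * h)) for k in range(N))
exact = 2 * math.pi / math.sqrt(3)
print(approx, exact)   # both ≈ 3.6275987...
```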
|
244,769 | <p>I am DMing a game of DnD and one of my players is really into fear effects, which is cool, but the effect of having monsters suffer from the "panicked" condition gets tedious to render via dice rolls.</p>
<p>The rule is, on the battle grid the monster will run for 1 square in a random direction, then from that new position it will move into another random adjacent square. Repeat this process until it's moved its full move speed.</p>
<pre><code>movespeed = 6;
points = Point[
NestList[{(#[[1]] + RandomChoice[{-1, 0, 1}]), #[[2]] +
RandomChoice[{-1, 0, 1}]} &, {11/2, 11/2}, movespeed]];
Graphics[{PointSize[Large], points},
GridLines -> {Range[0, 11], Range[0, 11]},
PlotRange -> {{0, 11}, {0, 11}}, Axes -> True]
</code></pre>
<p>I have written some code that shows me the squares the monster moves through, but I would love to replace the little black dots with numbers like "1", "2",...,"6" so that I know the path it actually took.</p>
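<p>To make the desired output concrete, here is the same kind of walk mocked up with numbered steps in plain Python (an illustrative sketch only, not the Mathematica solution I'm after; the grid simply adapts to the path, and later steps overwrite earlier ones on revisited squares):</p>

```python
import random

random.seed(7)                       # reproducible demo
movespeed = 6
pos = (5, 5)
path = [pos]
for _ in range(movespeed):
    pos = (pos[0] + random.choice([-1, 0, 1]),
           pos[1] + random.choice([-1, 0, 1]))
    path.append(pos)

xs = [p[0] for p in path]
ys = [p[1] for p in path]
grid = [["." for _ in range(max(xs) - min(xs) + 1)]
        for _ in range(max(ys) - min(ys) + 1)]
for step, (x, y) in enumerate(path):
    grid[y - min(ys)][x - min(xs)] = str(step)   # later steps overwrite earlier
for row in reversed(grid):                       # print with y increasing upwards
    print(" ".join(row))
```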
| Ulrich Neumann | 53,677 | <p>Try <code>FindGeometricTransform</code> to describe the transformation <code>{x,y}<->{u,v}</code>.</p>
<p>Therefore it's necessary to know the three points <code>A,B,C</code> (I added <code>Buv</code> ).</p>
<pre><code>{Axy, Bxy, Cxy} = {{0.2, 0.8}, {0.1, 0.15}, {0.8, 0.25}}
{Auv, Buv, Cuv} = {{0, 75}, {0, -45.378}, {100, 0}}
trafo = FindGeometricTransform[ {Auv, Buv, Cuv}, {Axy,
Bxy, Cxy}] [[2]]
</code></pre>
<p>Last line <code>{0,0,1}</code> of the transformationmatrix</p>
<pre><code>TransformationMatrix[trafo]
(*{{146.067, -22.4719, -11.236},
{39.2312, 179.161,-76.1753},
{0., 0.,1.}}*)
</code></pre>
<p>indicates an affine transformation!</p>
<pre><code>trafo[{Axy, Cxy, Bxy}] // Chop
(*{{0, 75.}, {100., 0}, {0, -45.378}}*)
trafo[ {0.6, 0.7}]
(*{60.6742, 72.7764}*)
</code></pre>
<p>The inverse transfomation follows to <code>InverseFunction[trafo]</code>.</p>
<p>This approach works in the same way if more than 3 points are known.</p>
|
3,858,414 | <p>I need help solving this task, if anyone had a similar problem it would help me.</p>
<p>The task is:</p>
<p>Calculate using the rule <span class="math-container">$\lim\limits_{x\to \infty}\left(1+\frac{1}{x}\right)^x=\large e $</span>:</p>
<p><span class="math-container">$\lim_{x\to0}\left(\frac{1+\mathrm{tg}\: x}{1+\sin x}\right)\Large^{\frac{1}{\sin x}}
$</span></p>
<p>I tried this:</p>
<p><span class="math-container">$ \lim_{x\to0}\left(\frac{1+\mathrm{tg}\: x}{1+\sin x}\right)^{\Large\frac{1}{\sin x}}=\lim_{x\to0}\left(\frac{1+\frac{\sin x}{\cos x}}{1+\sin x}\right)^{\Large\frac{1}{\sin x}}=\lim_{x\to0}\left(\frac{\sin x+\cos x}{\cos x\cdot(1+\sin x)}\right)^{\Large\frac{1}{\sin x}}
$</span></p>
<p>But I do not know how to solve this task.
Thanks in advance !</p>
| Shaun | 104,041 | <p>It suffices to prove that, for all <span class="math-container">$x,y\in A$</span>, if <span class="math-container">$[x]\cap[y]\neq \varnothing$</span>, then <span class="math-container">$[x]=[y]$</span>.</p>
<p>Suppose <span class="math-container">$x,y$</span> are arbitrary in <span class="math-container">$A$</span> and let <span class="math-container">$z\in [x]\cap[y]$</span>. Then <span class="math-container">$zRx$</span> and <span class="math-container">$zRy$</span> by definition. The former implies <span class="math-container">$xRz$</span>, which, together with the latter, gives <span class="math-container">$xRy$</span>. So <span class="math-container">$[x]\subseteq [y]$</span>. But by symmetry, <span class="math-container">$[y]\subseteq [x]$</span>. Hence <span class="math-container">$[x]=[y]$</span>.</p>
|
4,531,652 | <p>In my school book, I read this theorem</p>
<blockquote>
<p>Let <span class="math-container">$n>0$</span> be an odd natural number (or an odd positive integer); then the equation <span class="math-container">$$x^n=a$$</span> has exactly one real root.</p>
</blockquote>
<p>But the book doesn't provide a proof; it only states <span class="math-container">$x=\sqrt [n]a$</span>.
How can I prove this theorem?</p>
<p>I tried to prove some special cases</p>
<p><span class="math-container">$$x^3=8$$</span>
<span class="math-container">$$(x-2)(x^2+2x+4)=0$$</span>
<span class="math-container">$$x=2 \vee x^2+2x+4=0$$</span></p>
<p>But the discriminant of <span class="math-container">$x^2+2x+4=0$</span> equals <span class="math-container">$2^2-4×4=-12<0$</span>, so <span class="math-container">$x=2$</span> is the only root. But for <span class="math-container">$x^5=32$</span>, I got <span class="math-container">$x=2$</span> and <span class="math-container">$x^4+2x^3+4x^2+8x+16=0$</span>.</p>
<p>I don't know how I can proceed.</p>
| Suzu Hirose | 190,784 | <p>If <span class="math-container">$n$</span> is even then <span class="math-container">$x^n=a$</span> has two real roots, <span class="math-container">$x=\pm\sqrt[n]{a}$</span>, since <span class="math-container">$x^n=\left(x^{(n/2)}\right)^2$</span> is always positive, but solutions are restricted to <span class="math-container">$a\geq0$</span>. If <span class="math-container">$n$</span> is odd then <span class="math-container">$x^n=a$</span> does not have the negative root <span class="math-container">$x=-\sqrt[n]{a}$</span> since <span class="math-container">$(-\sqrt[n]{a})^n=(-1)^n (\sqrt[n]{a})^n=-a\neq a$</span>. It may also have solutions when <span class="math-container">$a$</span> is negative.</p>
<p>One can prove uniqueness of the solution <span class="math-container">$\sqrt[n]{a}$</span> using properties of the real numbers. If there is a number <span class="math-container">$b$</span> such that <span class="math-container">$b^n=(\sqrt[n]{a})^n=a$</span>, then <span class="math-container">$(\sqrt[n]{a})^{-n}b^n=1$</span>, i.e. <span class="math-container">$(b/\sqrt[n]{a})^n=1$</span>; since <span class="math-container">$t^n=1$</span> has the unique real solution <span class="math-container">$t=1$</span> when <span class="math-container">$n$</span> is odd, <span class="math-container">$b=\sqrt[n]{a}$</span>. (If <span class="math-container">$n$</span> is even you also need to take into account negative roots.)</p>
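<p>Counting the complex $n$-th roots makes the dichotomy visible numerically; here is a small Python check (the sample values $a=\pm32$, $n=5$ and $n=4$ are arbitrary):</p>

```python
import cmath
import math

def real_roots(n, a):
    """Real solutions of x^n = a, found among the n complex roots of a."""
    if a == 0:
        return [0.0]
    r = abs(a) ** (1.0 / n)
    theta = math.atan2(0.0, a)        # 0 if a > 0, pi if a < 0
    roots = [r * cmath.exp(1j * (theta + 2 * math.pi * k) / n) for k in range(n)]
    return sorted(z.real for z in roots if abs(z.imag) < 1e-9)

print(real_roots(5, 32))    # odd n: exactly one real root (2)
print(real_roots(5, -32))   # odd n, negative a: still exactly one (-2)
print(real_roots(4, 16))    # even n, a > 0: two real roots (-2 and 2)
```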
|
664 | <p>Erdős's 1947 probabilistic trick provided a lower exponential bound for the Ramsey number $R(k)$. Is it possible to explicitly construct 2-colourings on exponentially sized graphs without large monochromatic subgraphs?</p>
<p>That is, can we explicitly construct (edge) 2-colourings on graphs of size $c^k$, for some $c>0$, with no monochromatic complete subgraph of size $k$?</p>
| Mike | 1,579 | <p>A question I have here is what do you mean by "explicit"? </p>
<p>Personally, I like the definition that a construction is explicit if it can be constructed in polynomial time (due to Alon? Wigderson??). Given that we are talking about exponentials in n here, this gets (slightly) complicated, but we'll say the controlling parameter here is $N=2^n$, the rough order of the number of vertices in a possible Ramsey graph.</p>
<p>One conjecture I have is that the set of Paley graphs on p vertices, where p ranges over all primes $1 \mod 4$ between $2^{(n/2)}$ and $2^{(n-1)}$ gives a lower bound on $R(n)$. This is NOT an explicit set, by my definition above. ::::grin:::::</p>
<p>If memory serves me, I think the best result known for your original question is in a paper of Noga Alon from a few yrs back. You may want to check his web page as well as Gasartch's survey page mentioned before.</p>
|
664 | <p>Erdős's 1947 probabilistic trick provided a lower exponential bound for the Ramsey number $R(k)$. Is it possible to explicitly construct 2-colourings on exponentially sized graphs without large monochromatic subgraphs?</p>
<p>That is, can we explicitly construct (edge) 2-colourings on graphs of size $c^k$, for some $c>0$, with no monochromatic complete subgraph of size $k$?</p>
| Gil Kalai | 1,532 | <p>Finding explicit constructions for Ramsey graphs is a central problem in extremal combinatorics. Indeed, computational complexity gives a way to formalize this problem. Asking for a graph which can be constructed in polynomial time is a fairly good definition although sometimes the definition is taken as having a log-space construction.</p>
<p>Until quite recently the best method for explicit construction was based on extremal combinatorics. The vertices of the graphs were certain sets (say $k$-subsets of an $n$-element set) and the edges represented pairs of sets with prescribed intersection. The best result was by Frankl and Wilson and it gives a graph with $n$ vertices whose edges are colored by 2 colors with no monochromatic clique of size $\exp (\sqrt{\log n})$. (I think this translates to $k^{\log k}$ in the way the question was formulated here.) Using sum-product theorems Barak, Rao, Shaltiel and Wigderson improved the bound to $\exp ((\log n)^{o(1)})$.</p>
<p>Paley graphs are conjectured to be explicit examples with the correct behavior, but proving this is far beyond reach.</p>
<p><strong>Update(Nov 11, 2015)</strong>: Gil Cohen <a href="http://www.wisdom.weizmann.ac.il/~oded/MC/180.html">found</a> an explicit construction with no monochromatic cliques of size $2^{(\log \log n)^L}$. An independent construction which applies also to the bipartite case was <a href="http://eccc.hpi-web.de/report/2015/119/">achieved</a> by Eshan Chattopadhyay and David Zuckerman</p>
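<p>For small orders the Paley colouring can be checked by brute force; for instance, the Paley graph on $17$ vertices $2$-colours the edges of $K_{17}$ with no monochromatic $K_4$, matching the classical value $R(4,4)=18$. An illustrative Python verification:</p>

```python
from itertools import combinations

p = 17
squares = {(x * x) % p for x in range(1, p)}   # nonzero quadratic residues mod 17

def same_colour_class(a, b):
    # edge colour: is the difference a quadratic residue?
    # well defined, since -1 is a QR mod 17 (p ≡ 1 mod 4)
    return (a - b) % p in squares

def has_mono_k4():
    for quad in combinations(range(p), 4):
        colours = {same_colour_class(a, b) for a, b in combinations(quad, 2)}
        if len(colours) == 1:          # all six edges share a colour
            return True
    return False

print(has_mono_k4())   # False: no monochromatic K_4 on 17 vertices
```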
|
232,777 | <p>Let $F$ be an ordered field.</p>
<p>What is the least ordinal $\alpha$ such that there is no order-embedding of $\alpha$ into any bounded interval of $F$?</p>
| Fedor Petrov | 4,312 | <p>This is similar to your proof but without induction.</p>
<p>We prove that there are at least 3 such sets. For $r=k$ this is clear, so assume that $k>r$. Consider our $\binom{k}{r}+1$ $r$-sets. Call an element $v\in V$ appropriate if $v$ belongs to at most $\binom{k-1}{r-1}$ our sets. Then there exist at least $\binom{k}{r}+1-\binom{k-1}{r-1}=\binom{k-1}{r}+1$ our sets not containing $v$. Their union contains at least $k$ elements, and does not contain $v$. Now I claim that between any $k$ elements $x_1,\dots,x_k$ there exists an appropriate element $v$. Indeed, if not, then total number of pairs (our $r$-set $A$, $x_i\in A$) is at least $k(\binom{k-1}{r-1}+1)>r (\binom{k}{r}+1)$, a contradiction. So, we may find appropriate element $v$, the union $U$ of our sets not containing $v$ has cardinality at least $k$. Thus there exists appropriate $u\in U$ and the union of our sets which do not contain $u$ is a third set after $V,U$.</p>
<p>I wonder whether bound 3 may be further improved (for some values of $k,r$, of course for $k=r$ it can not.)</p>
|
2,566,803 | <p>Let $A,B,C$ be sets such that $f:A\to B$ is a function.</p>
<p>Let $F: C^B \to C^A$ be a function, such that $F(k)=k\circ f$.</p>
<p>Prove/disprove that if $f$ is surjective then $F$ is surjective.</p>
<p>I tried to prove it: If $f$ is surjective so for every $b\in B$ there is $a\in A$ so $f(a)=b$, but what now?</p>
| Andres Mejia | 297,998 | <p>Let $A=\{1,2\}$, $C=\{a,b\}$ and $B=\{1\}$. Consider the function $f:A \to B$, which is constant and surjective.</p>
<p>Now, consider $g \in C^A$ given by $1 \mapsto a$ and $2 \mapsto b$. Clearly, there is no $h:B \to C$ so that $h \circ f=g$.</p>
<hr>
<p>Suppose that $f$ is <em>injective</em>.
Then let $g: A \to C$ be an arbitrary function. Clearly, we can construct the function $h:B \to C$ so that $h(b)=g(a)$ if $f(a)=b$, and arbitrary for every $b \in B$ so that $b \notin f(A)$.</p>
<p>By construction, it follows that $h \circ f=g$.</p>
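<p>Since all three sets are finite, the counterexample can even be checked exhaustively (an illustrative Python sketch, encoding $a,b$ as strings):</p>

```python
from itertools import product

A, B, C = [1, 2], [1], ["a", "b"]
f = {1: 1, 2: 1}                      # the constant (surjective) map A -> B
g = {1: "a", 2: "b"}                  # a function A -> C

# every h: B -> C, represented as a dict
all_h = [dict(zip(B, values)) for values in product(C, repeat=len(B))]
hits = [h for h in all_h if all(h[f[a]] == g[a] for a in A)]
print(len(all_h), len(hits))   # 2 0: no h satisfies h∘f = g, so F is not surjective
```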
|
196,902 | <p>Hello fellow Ace Users.</p>
<p>Currently I'm working on a project to implement Peridynamics.
This is a discretization technique in the fashion of a meshless particle method.
AceGen/AceFEM provides the feature of arbitrary nodes per element, which suits my needs perfectly, as such a peridynamic particle interacts with an arbitrary number of neighbour particles.
To use the benefits of this method such as modelling discontinuities I'm aiming to utilize an explicit solution procedure.</p>
<p>I appreciate any thoughts on this! I have some code running in AceGen/AceFEM so far, still struggling on some design decisions which lead to the following specific questions:</p>
<ul>
<li><ol>
<li>Whats exactly prarallelized in AceFEM? My recent experience indicate that SMSStandardModule["Tasks"] is not. Is that correct ? How about SMSStandardModule["Tangent and residual"] (I'm talking about the evaluation of the elements not solving the global equation system in parallel.)?</li>
</ol></li>
<li><ol start="2">
<li>Is there any known (maybe approximate) limit to the performance regarding arbitrary nodes per element?</li>
</ol></li>
<li><ol start="3">
<li>Does anyone have experiences with explicit simulations in AceFEM/AceGen?</li>
</ol></li>
<li><ol start="4">
<li>I expect a lot of data due to the particle discretization. Visualisation in post-processing will be too hard a task to do in Mathematica. Does anyone have experience with exporting the simulation data for use in e.g. ParaView? If so, what's the most performant way to write these to a file without significantly slowing down the simulation? I'm aware of the SMTPut[] feature by the way, but to my knowledge this binds me to Mathematica again.</li>
</ol></li>
</ul>
<p>As always, you have my kudos in advance, and I'm excited for your comments and answers!</p>
<p>Thanks for the response so far.</p>
<p>I'm back with a 'minimal' example that shows my main concerns.</p>
<p>The code is provided in <a href="https://github.com/5A5H/SimplePD" rel="nofollow noreferrer">SimpePDImplementation</a> on GitHub:</p>
<p>The element contains a very basic implementation of explicit peridynamics following two steps for each time step:</p>
<ul>
<li><ol>
<li>Compute force density for each node (based on its neighbours)</li>
</ol></li>
<li><ol start="2">
<li>Integrate in time: acceleration = force density / density (per node)</li>
</ol></li>
</ul>
<p>These two tasks are implemented twice (using the same code): once in the SKR subroutine and once as individual element tasks.
As the code is explicit, I do not want (or have) a system of equations to solve, but I definitely want to go over all elements in parallel to gain speedup.
The results of both implementations are the same, as expected; however, the SKR implementation runs significantly slower (I guess due to the solution of the linear system, which is completely zero in this case).
<a href="https://i.stack.imgur.com/7Aby0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7Aby0.png" alt="enter image description here"></a></p>
<p>While performing the analysis, I checked my CPU usage.</p>
<p>For the SKR implementation I get:
<a href="https://i.stack.imgur.com/h4bXs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/h4bXs.png" alt="CPU usage for SKR Implementation."></a>
While for the Task implementation as reported I have:
<a href="https://i.stack.imgur.com/5vvhv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5vvhv.png" alt="enter image description here"></a></p>
<p>My conclusion so far is that the parallelization only works on the solution of the linear system and at least does not parallelize the loop over all elements for tasks.</p>
<p>It would be great if one of you could confirm this, or even better tell me what I did wrong, so that I know whether AceFEM works for my purpose at all.</p>
<p>Best,
S</p>
| Pinti | 42,046 | <p>I have no experience with meshless methods, but I will try to answer/comment on your questions.</p>
<ol>
<li><p>In AceFEM, the assembly of global matrices and vectors (the "Tangent and residual" subroutine) is parallelized. Solving the linear system is also parallelized (Intel MKL PARDISO). The "Tasks" subroutine is not parallelized, but in general such procedures should not be computationally too intensive.</p></li>
<li><p>The virtual element method (VEM) has been implemented in AceGen/AceFEM, and it seems many (up to 100) nodes per element are possible. See, for example, <a href="https://www.sciencedirect.com/science/article/pii/S0045782518303396?via%3Dihub" rel="nofollow noreferrer">Aldakheel et. al., 2018</a>. An even better explanation is given in BHudobivnik's <a href="https://mathematica.stackexchange.com/a/196927/42046">answer</a>.</p></li>
<li><p>According to comment by prof. Korelc, explicit simulations are possible. If this helps, <a href="https://www.researchgate.net/publication/251230596_Enhanced_displacement_mode_finite_elements_for_explicit_transient_analysis_focussing_on_efficiency" rel="nofollow noreferrer">Schmied et. al, 2013</a> have implemented element assembly subroutines in AceGen and used them in LS-Dyna software with explicit time integration.</p></li>
<li><p>This really depends on what you would like to visualize. I would perform visualization outside of Mathematica only if there was really no other option. For that purpose you can always save a limited subset of results in some efficient common data format. Maybe something like <a href="https://reference.wolfram.com/language/ref/format/HDF5.html" rel="nofollow noreferrer">HDF</a>? As always, I would recommend starting with a small example and dealing with the problems of the large example when/if they happen.</p></li>
</ol>
|
2,128,182 | <p>I've been looking for a definition of game in game theory. I'd like to know if there is a definition shorter than that of Neumann and Morgenstern in <em>Theory of Games and Economic Behavior</em> and not as vague as "interactive decision problem" or "situation of conflict, or any other kind of interaction". I've started a study of the proof of the existence of Nash equilibria using Brouwer's fixed-point theorem, and I hope to find a definition that allows me to understand concepts such as <em>normal-form game</em> and <em>mixed strategy</em> without excessive complexity. I'd appreciate some bibliographic suggestion. Thank you!</p>
| Hector | 318,351 | <p>I know this question already has an accepted answer, but games are usually defined depending on their form and their information structure. Therefore, the definition of a normal form game is different from that one of extensive form of incomplete information (for example). I usually define a game in normal form (its simplest form possible) as: </p>
<p><a href="https://i.stack.imgur.com/tPXQB.png" rel="noreferrer"><img src="https://i.stack.imgur.com/tPXQB.png" alt="enter image description here"></a></p>
<p>Notice that normal forms are usually represented by matrices, as you probably know already. Also be aware that a Mixed Strategy can be defined as a probability distribution over pure strategies of a given Player, if that helps. Finally, let me tell you that proving Nash Theorem might be easier using Kakutani's Fixed Point Theorem.</p>
<p>Bibliographical references there are many out there, but you may choose the one you prefer depending on your needs. To introductory but yet precise books are "A Primer on Game Theory", by Gibbons; or "An Introduction to Game Theory", by Osborne. You may also like "A Course on Game Theory", by Osborne and Rubinstein, which is more advanced (I have only read the second one; I use them occasionally as references, so I just share my very personal and uninformed opinion).</p>
<p>Good luck!</p>
|
53,185 | <p>Let us consider a noncompact Kähler manifold with vanishing scalar curvature but nonzero Ricci tensor. I'm wondering what can it tell us about the manifold. The example (coming from physics) has the following Kähler form</p>
<p><span class="math-container">$$K = \bar{X} X + \bar{Y} Y + \log(\bar{X} X + \bar{Y} Y)$$</span></p>
<p>e.g. this is a 2D complex manifold. I claim that its Ricci form is nonzero, whereas its scalar curvature is identically zero.</p>
<p>I'm wondering if such manifolds possess any interesting properties and how can we classify them.</p>
<p><strong>UPD</strong>.</p>
<p>Partly the answer for 4 manifolds (2d complex manifolds) is given in the paper by C Lebrun "Counter-examples to the generalized positive action conjecture'' <a href="https://projecteuclid.org/journals/communications-in-mathematical-physics/volume-118/issue-4/Counter-examples-to-the-generalized-positive-action-conjecture/cmp/1104162166.full" rel="nofollow noreferrer">paper</a>. The author considers vanishing scalar curvature and derives the most generic form of the Kähler potential such that it vanishes. There are several integration constants in the final answer, playing with them we can get different manifolds including the one I was talking above. For that case the Kähler metric is the metric of a standard blow-up in the origin</p>
<p><span class="math-container">$$K = \bar{X}X+\bar{Y}Y+a\log(\bar{X}X+\bar{Y}Y)$$</span></p>
<p>where <span class="math-container">$a>0$</span>.</p>
<p>Now one can ask the same question about manifolds of higher dimension if they all with vanishing scalar curvature (but nonvanishing Ricci tensor) are described by the blow-ups of <span class="math-container">$\mathbb{C}^n$</span>'s. In particular, I'm interested in the following Kähler potential</p>
<p><span class="math-container">$$K = \sum\limits_{i=1}^N \sum\limits_{j=1}^{\tilde N}|X^i Y^j|^2 + a \log \sum\limits_{i=1}^N|X^i|^2.$$</span></p>
| Peter Koroteev | 5,550 | <p>Partly the answer for 4-manifolds (2d complex manifolds) is given in the paper by C. LeBrun, ``Counter-examples to the generalized positive action conjecture'' <a href="https://projecteuclid.org/euclid.cmp/1104162166" rel="nofollow noreferrer">paper</a>. The author considers vanishing scalar curvature and derives the most generic form of the Kähler potential such that it vanishes. There are several integration constants in the final answer; playing with them we can get different manifolds, including the one I was talking about above. For that case the Kähler metric is the metric of a standard blow-up in the origin.</p>
<p><span class="math-container">$K = \bar{X}X+\bar{Y}Y+a\log(\bar{X}X+\bar{Y}Y)$</span></p>
<p>where <span class="math-container">$a>0$</span>. Now one can ask the same question about manifolds of higher dimension: whether all of them with vanishing scalar curvature (but nonvanishing Ricci tensor) are described by blow-ups of <span class="math-container">$\mathbb{C}^n$</span>'s.</p>
|
488,141 | <p>\begin{align*}A=\left(\begin{array}{cccc} 1 & 2 & 3 & 4 \\ 0 & 1 & 2 & 3 \\ 0 & 0 & 1 & 2 \\ 0 & 0 & 0 & 1 \\\end{array}\right);\end{align*}</p>
<p>The eigenvalues are $1$, I know one of the eigenvectors is $(1,0,0,0)$, Is that all?</p>
<p>Mathematica gives the following; why not {{1,0,0,0},{1,0,0,0},{1,0,0,0},{1,0,0,0}}?</p>
<pre><code>Eigenvectors[A]
</code></pre>
<p>\begin{align*}\left(\begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\\end{array}\right)\end{align*}</p>
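<p>To convince myself, I also checked the rank of $A-I$ by hand outside Mathematica (a quick Python sketch with exact arithmetic): the rank is $3$, so the eigenspace for $\lambda=1$ is one-dimensional, and $(1,0,0,0)$ really would be the only independent eigenvector.</p>

```python
from fractions import Fraction

# A - I for the matrix in question (strictly upper triangular)
M = [[Fraction(x) for x in row] for row in
     [[0, 2, 3, 4],
      [0, 0, 2, 3],
      [0, 0, 0, 2],
      [0, 0, 0, 0]]]

def rank(mat):
    """Rank via exact Gaussian elimination over the rationals."""
    m = [row[:] for row in mat]
    rows, cols, r = len(m), len(m[0]), 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if m[i][c] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(r + 1, rows):
            factor = m[i][c] / m[r][c]
            m[i] = [a - factor * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

print(rank(M))  # 3 -> nullity 1 -> a single independent eigenvector
```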
| Bill Kleinhans | 73,675 | <p>But more simply, if $\sqrt3$ is an element of $Q[\sqrt[4]2]$, then $Q[\sqrt2,\sqrt3]$ is a subfield of $Q[\sqrt[4]2]$. However, since both have degree 4 over $Q$, they must be equal. But the first is a splitting field and the second is not.</p>
|
2,823,758 | <p>I was learning the definition of continuous as:</p>
<blockquote>
<p>$f\colon X\to Y$ is continuous if $f^{-1}(U)$ is open for every open $U\subseteq Y$</p>
</blockquote>
<p>For me this translates to the following implication:</p>
<blockquote>
<p>IF $U \subseteq Y$ is open THEN $f^{-1}(U)$ is open</p>
</blockquote>
<p>however, I would have expected the definition to be the other way round, i.e. with the 1st implication I defined. The reason for that is that just by looking at the metric space definition of continuous:</p>
<blockquote>
<p>$\exists q = f(p) \in Y, \forall \epsilon>0,\exists \delta >0, \forall x \in X, 0 < d(x,p) < \delta \implies d(f(x),q) < \epsilon$</p>
</blockquote>
<p>seems to be talking about Balls (i.e. open sets) in X and then has a forward arrow for open sets in Y, so it seems natural to expect the direction of the implication to go in that way round. However, it does not. Why does it not go that way? Whats is wrong with the implication going from open in X to open in Y? And of course, why is the current direction the correct one?</p>
<p>I think conceptually I might even be confused about why the topological definition of continuity requires one to start from things in the target space Y and then require things in the domain. Can't we just say map things from X to Y and have them be close? <strong>Why do we need to posit things about Y first in either definition for the definition of continuous to work properly</strong>?</p>
<hr>
<p>I can't help but point out that the question <a href="https://math.stackexchange.com/questions/323610/the-definition-of-continuous-function-in-topology">The definition of continuous function in topology</a> seems to be similar, but it perhaps lacks the detailed discussion on the direction of the implication that I need to really understand why the definition is not reversed, or what happens if we do reverse it. The second answer there makes an attempt at explaining why we require $f^{-1}$ to preserve the property of openness, but it's not conceptually obvious to me why that's the case or what's going on. Any help?</p>
<hr>
<p>For whoever suggest to close the question, the question is quite clear:</p>
<blockquote>
<p><strong>why is the reverse implication not the "correct" definition of continuous?</strong></p>
</blockquote>
<hr>
<p>As an additional important point I noticed is, pointing out <strong>the difference between open mapping and continuous function would be very useful</strong>.</p>
<hr>
<p>Note: I encountered this in baby Rudin, so thats as far as my background in analysis goes, i.e. metric spaces is my place of understanding. </p>
<hr>
<p>Extra confusion/Appendix:</p>
<p>Conceptually, I think I've managed to nail what my main confusion is. In conceptual terms continuous functions are suppose to map "nearby points to nearby points" so for me its metric space definition makes sense in that sense. However, that doesn't seem obvious to me unless we equate "open sets" to be the definition of "close by". Balls are open but there are plenty of sets that are open but are not "close by", for example the union of two open balls. I think this is what is confusing me most. How is the topological def respecting that conceptual requirement? </p>
| Benjamin Dickman | 37,122 | <p>Perhaps the following paper would be of interest to you:</p>
<blockquote>
<p>Velleman, D. J. (1997). Characterizing continuity. <em>The American Mathematical Monthly, 104</em>(4), 318-322. <a href="https://www.jstor.org/stable/2974580" rel="nofollow noreferrer"><strong>Link</strong></a>.</p>
</blockquote>
<p>Here is the beginning:</p>
<p><a href="https://i.stack.imgur.com/el9rp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/el9rp.png" alt="enter image description here"></a></p>
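<p>A standard counterexample (not from Velleman's paper) shows why the forward implication "open in $X$ $\Rightarrow$ open image in $Y$" cannot serve as the definition of continuity; that property defines an <em>open map</em>, which is independent of continuity:</p>

```latex
% f(x) = x^2 is continuous but not an open map:
% it sends the open interval (-1, 1) to a non-open set.
f \colon \mathbb{R} \to \mathbb{R}, \qquad f(x) = x^{2},
\qquad f\bigl((-1,1)\bigr) = [0,1) \ \text{is not open in } \mathbb{R}.
% Conversely, any constant map is continuous, yet its image of every
% nonempty open set is a single point, which is not open.
```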
|
<p>I am trying to do this homework problem and I have no idea how to approach it. I have tried many methods, all resulting in failure. I went to the book's website and it offers no help. I am trying to find the derivative of the function
$$y=\cot^2(\sin \theta)$$</p>
<p>I could be incorrect but a trig function squared would be the result of the trig function with the angle value and then squared. Not the angle value squared, that would give a different answer. Knowing this I also know that I can not use the table of simple trig derivatives so I know I can't just take the derivative as
$$y=\cot^2(x)$$
$$ x=\sin(\theta)$$ </p>
<p>This does not help because I can't get the derivative of cot squared. What I did try to do was rewrite it as $\frac{\cos x}{\sin x}\frac{\cos x}{\sin x}$ and then find the derivative of that but something went wrong with that and it does not produce an answer that is like the one in the book. In fact the book gets a csc squared in the answer so I know they are doing something very different.</p>
| Arturo Magidin | 742 | <p>Indeed, $\cot^2(a)$ means
$$\left(\cot (a)\right)^2.$$</p>
<p>You need to apply the Chain Rule <em>twice</em>: first, to deal with the square: set $g(u)=u^2$ as your "outside function", and $u=f(\theta) = \cot(\sin(\theta))$ as your inside function. Since $g'(u) = 2u$, then
$$\frac{d}{d\theta}\cot^2(\sin(\theta)) = \frac{d}{d\theta}\left(\cot\bigl(\sin(\theta)\bigr)\right)^2 = g'(u)f'(\theta) = 2uf'(\theta) = 2\cot\bigl(\sin(\theta)\bigr)f'(\theta).$$
Now let's deal with $f'(\theta)$; we have $f(\theta) = \cot\bigl(\sin(\theta)\bigr)$. The "outside function" is $h(u) = \cot(u)$, the "inside function" is $u(\theta) = \sin(\theta)$. Since $h'(u) = -\csc^2(u)= -\left(\csc(u)\right)^2$, and $u'(\theta) = \cos(\theta)$, we have:
$$\frac{d}{d\theta}\cot\bigl(\sin(\theta)\bigr) = h'(u)u'(\theta) = -\csc^2(\sin\theta)\cos(\theta).$$</p>
<p>Putting it all together:
$$\begin{align*}
\frac{d}{d\theta}\cot^2\bigl(\sin(\theta)\bigr) &= \frac{d}{d\theta}\left(\cot\bigl(\sin(\theta)\bigr)\right)^2\\
&= 2\left(\cot\bigl(\sin(\theta)\bigr)\right)\cdot \frac{d}{d\theta}\left(\cot\bigl(\sin(\theta)\bigr)\right)\\
&= 2\left(\cot\bigl(\sin(\theta)\bigr)\right)\cdot \left(-\csc^2\left(\sin(\theta)\right)\left(\frac{d}{d\theta}\sin(\theta)\right)\right)\\
&= 2\left(\cot\bigl(\sin(\theta)\bigr)\right)\cdot\left(-\csc^2\left(\sin(\theta)\right)\cos(\theta)\right)\\
&= -2\cot\bigl(\sin(\theta)\bigr)\csc^2\bigl(\sin(\theta)\bigr)\cos(\theta).
\end{align*}$$</p>
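<p>The result can be sanity-checked numerically (a small Python sketch, not part of the derivation above): compare a central-difference approximation of $\frac{d}{d\theta}\cot^2(\sin\theta)$ against the closed form at a sample point.</p>

```python
import math

def f(t):
    # cot^2(sin t), written via cos/sin
    return (math.cos(math.sin(t)) / math.sin(math.sin(t))) ** 2

def derivative_formula(t):
    # -2 cot(sin t) csc^2(sin t) cos(t), the closed form derived above
    s = math.sin(t)
    cot, csc = math.cos(s) / math.sin(s), 1.0 / math.sin(s)
    return -2.0 * cot * csc ** 2 * math.cos(t)

t, h = 1.0, 1e-6
numeric = (f(t + h) - f(t - h)) / (2 * h)  # central difference
print(abs(numeric - derivative_formula(t)) < 1e-4)  # True
```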
|
4,268,962 | <blockquote>
<p>Check whether <span class="math-container">$y=\ln (xy)$</span> is an answer of the following differential equation or not</p>
<p><span class="math-container">$$(xy-x)y''+xy'^2+yy'-2y'=0$$</span></p>
</blockquote>
<p>First I tried to solve the equation,</p>
<p><span class="math-container">$$x(yy''-y''+y'^2)+yy'-2y'=0$$</span>
<span class="math-container">$$x((yy')'-y'')+(yy')-2y'=0$$</span>
Since I have <span class="math-container">$-y''$</span> in the parenthesis , the substitution <span class="math-container">$z=yy'$</span> doesn't work here but if it was <span class="math-container">$-2y''$</span> instead, I could use the substitution <span class="math-container">$u=yy'-2y'$</span> but it is not the case.</p>
<hr />
<p>My second try was taking derivative of the answer (i.e <span class="math-container">$y=\ln(xy)$</span> ) and plugging it in the D.E,</p>
<p><span class="math-container">$$y'=\frac1x+\frac{y'}y\quad\Rightarrow y'(1-\frac1y)=\frac1x\quad\Rightarrow y'=\frac y{y-1}\times \frac1x$$</span></p>
<p><span class="math-container">$$y''=\frac{-1}{x^2}+\frac{yy''-y'^2}{y^2}\quad\Rightarrow y''=\frac{y}{y-1}\times(\frac{-1}{x^2}-\frac{y^2}{y'^2})$$</span>
But it is getting really ugly when I plug <span class="math-container">$y,y',y''$</span> in the original equation.</p>
| Math Lover | 801,574 | <p>We have <span class="math-container">$x y'=\frac y{y-1}$</span> and <span class="math-container">$(xy')' = - \frac{y'}{(y-1)^2}$</span></p>
<p>DE is <span class="math-container">$(xy-x)y''+xy'^2+yy'-2y'=0$</span></p>
<p>Rearranging LHS we get,</p>
<p><span class="math-container">$(xy-x)y''+xy'^2+yy'-2y'$</span></p>
<p><span class="math-container">$ = y (xy''+y') - (xy'' + y') - y' + xy'^2$</span></p>
<p><span class="math-container">$ = (y-1) (xy')' - y'+ xy'^2$</span></p>
<p><span class="math-container">$ = - \frac{y'}{y-1} - y' + xy'^2$</span></p>
<p><span class="math-container">$ = y' (- \frac{1}{y-1} - 1 + xy') = 0,$</span> since substituting <span class="math-container">$xy' = \frac{y}{y-1}$</span> makes the bracket <span class="math-container">$-\frac{1}{y-1} - 1 + \frac{y}{y-1} = \frac{-1-(y-1)+y}{y-1} = 0$</span>.</p>
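<p>A numerical cross-check (a Python sketch, assuming <span class="math-container">$x$</span> lies in a range where <span class="math-container">$y=\ln(xy)$</span>, i.e. <span class="math-container">$y-\ln y=\ln x$</span>, has a solution with <span class="math-container">$y>1$</span>): solve the implicit equation by Newton's method, differentiate numerically, and confirm the ODE residual vanishes.</p>

```python
import math

def y_of_x(x):
    """Solve y - ln(y) = ln(x) for the branch y > 1 by Newton's method."""
    y = max(2.0, math.log(x) + 1.0)
    for _ in range(50):
        y -= (y - math.log(y) - math.log(x)) / (1.0 - 1.0 / y)
    return y

x, h = 10.0, 1e-3
y, y_plus, y_minus = y_of_x(x), y_of_x(x + h), y_of_x(x - h)
yp = (y_plus - y_minus) / (2 * h)            # central difference for y'
ypp = (y_plus - 2 * y + y_minus) / h ** 2    # central difference for y''

residual = (x * y - x) * ypp + x * yp ** 2 + y * yp - 2 * yp
print(abs(residual) < 1e-4)  # True: y = ln(xy) satisfies the ODE
```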
|
4,268,962 | <blockquote>
<p>Check whether <span class="math-container">$y=\ln (xy)$</span> is an answer of the following differential equation or not</p>
<p><span class="math-container">$$(xy-x)y''+xy'^2+yy'-2y'=0$$</span></p>
</blockquote>
<p>First I tried to solve the equation,</p>
<p><span class="math-container">$$x(yy''-y''+y'^2)+yy'-2y'=0$$</span>
<span class="math-container">$$x((yy')'-y'')+(yy')-2y'=0$$</span>
Since I have <span class="math-container">$-y''$</span> in the parenthesis , the substitution <span class="math-container">$z=yy'$</span> doesn't work here but if it was <span class="math-container">$-2y''$</span> instead, I could use the substitution <span class="math-container">$u=yy'-2y'$</span> but it is not the case.</p>
<hr />
<p>My second try was taking derivative of the answer (i.e <span class="math-container">$y=\ln(xy)$</span> ) and plugging it in the D.E,</p>
<p><span class="math-container">$$y'=\frac1x+\frac{y'}y\quad\Rightarrow y'(1-\frac1y)=\frac1x\quad\Rightarrow y'=\frac y{y-1}\times \frac1x$$</span></p>
<p><span class="math-container">$$y''=\frac{-1}{x^2}+\frac{yy''-y'^2}{y^2}\quad\Rightarrow y''=\frac{y}{y-1}\times(\frac{-1}{x^2}-\frac{y^2}{y'^2})$$</span>
But it is getting really ugly when I plug <span class="math-container">$y,y',y''$</span> in the original equation.</p>
| Etemon | 717,650 | <p>Continuing my first approach:</p>
<p><span class="math-container">$$(xy-x)y''+xy'^2+yy'-2y'=0$$</span>
<span class="math-container">$$x(yy''+y'^2)-xy''+yy'-2y'=0$$</span>
<span class="math-container">$$x(yy')'+(x)'(yy')-xy''-2y'=0$$</span>
<span class="math-container">$$(xyy')'-xy''-y'-y'=0$$</span>
<span class="math-container">$$(xyy')'-(xy')'-y'=0$$</span>After integrating we get <span class="math-container">$$xyy'-xy'-y=C$$</span><span class="math-container">$$y'(xy-x)-y=C$$</span>
From here it is similar to @Rezha Adrian Tanuharja's answer.</p>
|
2,615,185 | <p>The title is not complete, since it would be too long. Consider the following statement:</p>
<blockquote>
<p>Let $U \subset \mathbb{R}^n$ be open, connected and such that its one-point compactification is a manifold. Then, this compactification must be (homeomorphic to) the sphere $S^n$.</p>
</blockquote>
<p>Is the statement above true? If so, why?</p>
| Nick A. | 412,202 | <p>I can imagine an elementary approach only for the special cases of $\mathbb{R}^2$ and $\mathbb{R}$.</p>
<p>For $\mathbb{R}^2$: we know that all compact surfaces arise from adding to the sphere a finite number of handles or Möbius strips. In any case, if you remove a point from a compact surface which is one of the above but <em>not</em> a sphere, then you wouldn't get something homeomorphic to $\mathbb{R}^2$. So the only compactification could be to a sphere.</p>
<p>A similar argument goes for $\mathbb{R}$, since the only compact $1$-manifolds are a closed line segment and the circle. </p>
|
2,402,410 | <p>I defined the "function":</p>
<p>$$f(t)=t \delta(t)$$</p>
<p>I know that Dirac "function" is undefined at $t=0$ (see <a href="http://web.mit.edu/2.14/www/Handouts/Convolution.pdf" rel="nofollow noreferrer">http://web.mit.edu/2.14/www/Handouts/Convolution.pdf</a>).</p>
<p>In Wolfram I get $0 \delta(0)=0$ (<a href="http://www.wolframalpha.com/input/?i=0" rel="nofollow noreferrer">http://www.wolframalpha.com/input/?i=0</a>*DiracDelta(0)). Why? I expect $0 \delta(0)=undefined$ (if $\delta(0)=\infty$, thus I will have an indeterminate form $0 \infty$).</p>
<p>Thank you for your time.</p>
| Cauchy | 360,858 | <p>Look at $\delta$ as distribution: $\langle \delta, f \rangle = f(0)$. Then $\langle t \delta(t), f(t) \rangle = \langle \delta (t), t f(t) \rangle = (tf(t))\mid_{t = 0} = 0 \cdot f(0) = 0$ (see multiplication of distribution by smooth functions).</p>
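<p>The distributional identity $t\,\delta(t)=0$ can also be illustrated numerically (a Python sketch, approximating $\delta$ by a narrow Gaussian $\delta_\varepsilon$): the pairing $\langle t\,\delta_\varepsilon(t), f(t)\rangle$ shrinks to $0$ as $\varepsilon\to 0$, even for test functions with $f(0)\neq 0$.</p>

```python
import math

def pairing(eps, f, lo=-1.0, hi=1.0, n=200_000):
    """Midpoint-rule approximation of  integral of t * delta_eps(t) * f(t) dt."""
    w = (hi - lo) / n
    total = 0.0
    for k in range(n):
        t = lo + (k + 0.5) * w
        # Gaussian approximation of the delta with width eps
        delta_eps = math.exp(-t * t / (2 * eps * eps)) / (eps * math.sqrt(2 * math.pi))
        total += t * delta_eps * f(t) * w
    return total

f = math.exp            # f(0) = 1 != 0
v1 = pairing(0.1, f)    # approximately eps^2 = 0.01
v2 = pairing(0.01, f)   # approximately 1e-4
print(v1, v2)           # both small; v2 is ~100x smaller
```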
|
1,297,319 | <p>integration equation </p>
<p>$$\int_{0}^{1/8} \frac{4}{\sqrt{(1-4x^2)}} \,dx$$</p>
<p>my work </p>
<p>$t= \sqrt{(1-4x^2)} $</p>
<p>$dt = -4x/\sqrt{(1-4x^2)} dx $</p>
<p>stuck here also </p>
| Anurag A | 68,092 | <p>Use the substitution $2x=\sin \theta$. Then $\frac{d}{d\theta}x=\frac{1}{2}\cos \theta $ and the integral becomes
$$\int_{0}^{1/8} \frac{4}{\sqrt{(1-4x^2)}} \,dx = \int_{0}^{\arcsin\left(\frac{1}{4}\right)} 2\,d\theta$$</p>
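<p>Numerically (a quick Python sketch confirming the substitution): the integral equals $2\arcsin\frac14 \approx 0.5054$.</p>

```python
import math

def integrand(x):
    return 4.0 / math.sqrt(1.0 - 4.0 * x * x)

# midpoint rule on [0, 1/8]; the integrand is smooth there since 4x^2 <= 1/16
n = 100_000
w = (1.0 / 8.0) / n
approx = sum(integrand((k + 0.5) * w) * w for k in range(n))

exact = 2.0 * math.asin(0.25)
print(approx, exact)  # both approximately 0.50536
```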
|
827,740 | <p>This is a new integral that I propose to evaluate in closed form:
$$ {\mathfrak{R}} \int_{0}^{\pi/2} \frac{x^2}{x^2+\log ^2(-2\cos x)} \:\mathrm{d}x$$
where $\Re$ denotes the real part and $\log (z)$ denotes the principal value of the logarithm defined for $z \neq 0$ by
$$ \log (z) = \ln |z| + i \mathrm{Arg}z, \quad -\pi <\mathrm{Arg} z \leq \pi.$$</p>
| Hakim | 85,969 | <p>I don't think a closed form exists after computing that integral numerically in Mathematica, and looking up in the <a href="http://oldweb.cecm.sfu.ca/projects/ISC/ISCmain.html" rel="nofollow"><em>Inverse Symbolic Calculator</em></a>. It is approximately equal to: $-0.10617124113817\ldots$ If you need more digits just ask.</p>
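<p>The numerical value can be reproduced with a short Python sketch (midpoint rule; on $(0,\pi/2)$ the principal branch gives $\log(-2\cos x)=\ln(2\cos x)+i\pi$, which is exactly what <code>cmath.log</code> returns on the negative real axis):</p>

```python
import cmath
import math

def integrand(x):
    log_val = cmath.log(-2.0 * cmath.cos(x))          # principal branch
    return (x * x / (x * x + log_val ** 2)).real

# midpoint rule on (0, pi/2); the integrand extends continuously to both endpoints
n = 200_000
w = (math.pi / 2.0) / n
value = sum(integrand((k + 0.5) * w) * w for k in range(n))
print(value)  # approximately -0.10617
```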
|
1,018,248 | <p>Let $X:=(X_t)_{t\geq0}$ be a Lévy process with triple $(b,A,\nu)$. Is there any known relation between the "distribution" of its jumps and the Lévy measure $\nu$? E.g. can we express something like $\mathbb{P}[X$ has $n$ jumps in $[0,1]]$ or $\mathbb{P}[X$ has a jump of absolute value $>u$ in $[0,1]]$ for some $u>0$ in terms of $\nu$?</p>
| binkyhorse | 18,357 | <p>Sample path properties are discussed e.g. in Sato's <em>Lévy processes and infinitely divisible distributions</em>, Section 21. For example, the following results are given there:</p>
<ul>
<li>Sample functions of $X$ are a.s. continuous if and only if $\nu=0$.</li>
<li>Sample functions of $X$ are a.s. piecewise constant if and only if $X$ is compound Poisson or a zero process.</li>
<li>If $\nu(\mathbb{R}^d)=\infty$, then a.s. jumping times are countable and dense in $[0,\infty)$; if $0<\nu(\mathbb{R}^d)<\infty$, then a.s. jumping times are countable in increasing order and the first jumping time has an exponential distribution with mean $1/\nu(\mathbb{R}^d)$. In this latter case, the process $\{J(t)\}$ of jumps in $[0,t)$ is a Poisson process with intensity measure $\nu(\mathbb{R}^d)$, so the number of jumps in $[0,t)$ has a Poisson distribution with mean $t\nu(\mathbb{R}^d)$.</li>
<li>$T_u$, the first time the process jumps by more than $u$, has an exponential distribution with mean $1/c$ if $\int_{D(u,\infty)}\nu(dx)=c<\infty$, where $D(u,\infty)=\{x\in\mathbb{R}^d: u<||x||<\infty\}$.</li>
</ul>
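<p>The last two bullet points can be illustrated by simulation (a Python sketch, not from Sato's book): for a compound Poisson process with $\nu(\mathbb{R}^d)=c<\infty$, exponential interarrival times with rate $c$ produce a Poisson number of jumps in $[0,t)$ with mean $tc$.</p>

```python
import random

random.seed(0)
c, t, trials = 3.0, 1.0, 20_000   # jump intensity nu(R^d) = c

def jump_count(c, t):
    """Count jumps in [0, t): accumulate exponential interarrival times."""
    count, clock = 0, random.expovariate(c)
    while clock < t:
        count += 1
        clock += random.expovariate(c)
    return count

counts = [jump_count(c, t) for _ in range(trials)]
mean = sum(counts) / trials
print(mean)  # approximately t * c = 3, the Poisson mean
```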
|
269,665 | <p>Is the Klein bottle an algebraic variety? I guess no, but how to prove it. How about other non-orientable manifolds? </p>
<p>If we change to the Zariski topology, which manifolds can be algebraic varieties? </p>
| Community | -1 | <p>In the introduction (second page) of <a href="http://www.mathematik.uni-bielefeld.de/documenta/vol-12/17.pdf">this paper</a> of Biswas and Huisman, it is explained that any non-orientable compact topological surface $X$ is real algebraic (<em>i.e.</em> there exists a real smooth algebraic surface $S$ whose real points $S(\mathbb R)$ endowed with the natural differential structure is diffeomorphic to $X$). </p>
<p>In the case of Klein bottle, the corresponding algebraic surface is simply the blowup of $\mathbb P^2(\mathbb R)$ along a real point (I don't know topology enough to see why this is true). They also prove this algebraic surface is unique (<em>in some sense</em>, because blowing-up further a non-real point doesn't change the real points but change the algebraic surface). </p>
|
1,133,544 | <p>I've been struggling to show that $\mathrm{SL}_2(\mathbb{R})$ is a normal subgroup of $\mathrm{GL}_2(\mathbb{R})$. I already proved that $\mathrm{SL}_2(\mathbb{R})\leq\mathrm{GL}_2(\mathbb{R})$ (not shown). Now I want to show that
$$
A\cdot \mathrm{SL}_2(\mathbb{R})=\mathrm{SL}_2(\mathbb{R})\cdot A
$$
for every $A\in \mathrm{GL}_2(\mathbb{R})$. </p>
<p>I know that $\det(AB)=\det(A)\det(B)=\det(B)\det(A)=\det(BA)$. Thus,
$$\det(A\cdot \mathrm{SL}_2(\mathbb{R}))=\det(\mathrm{SL}_2(\mathbb{R})\cdot A )$$
but this does not seem to help me prove normality.</p>
<p>I thought that perhaps rearranging in the following form would help:</p>
<p>$$
A\cdot \mathrm{SL}_2(\mathbb{R})\cdot A^{-1}=\mathrm{SL}_2(\mathbb{R})
$$
If I can show that $A\cdot \mathrm{SL}_2(\mathbb{R})\cdot A^{-1}$ has determinant 1, then I am done. How can I do this?</p>
<p>I would like a hint (no full solutions, please) on how I can proceed.</p>
<p>Thanks!</p>
| Mister Benjamin Dover | 196,215 | <p>Hint: Can you write $SL_2$ as a kernel? You certainly know some multiplicative maps from linear algebra. (From my experience, the easiest way to show that some subgroup is normal is to exhibit it as a kernel of a homomorphism.)</p>
|
1,133,544 | <p>I've been struggling to show that $\mathrm{SL}_2(\mathbb{R})$ is a normal subgroup of $\mathrm{GL}_2(\mathbb{R})$. I already proved that $\mathrm{SL}_2(\mathbb{R})\leq\mathrm{GL}_2(\mathbb{R})$ (not shown). Now I want to show that
$$
A\cdot \mathrm{SL}_2(\mathbb{R})=\mathrm{SL}_2(\mathbb{R})\cdot A
$$
for every $A\in \mathrm{GL}_2(\mathbb{R})$. </p>
<p>I know that $\det(AB)=\det(A)\det(B)=\det(B)\det(A)=\det(BA)$. Thus,
$$\det(A\cdot \mathrm{SL}_2(\mathbb{R}))=\det(\mathrm{SL}_2(\mathbb{R})\cdot A )$$
but this does not seem to help me prove normality.</p>
<p>I thought that perhaps rearranging in the following form would help:</p>
<p>$$
A\cdot \mathrm{SL}_2(\mathbb{R})\cdot A^{-1}=\mathrm{SL}_2(\mathbb{R})
$$
If I can show that $A\cdot \mathrm{SL}_2(\mathbb{R})\cdot A^{-1}$ has determinant 1, then I am done. How can I do this?</p>
<p>I would like a hint (no full solutions, please) on how I can proceed.</p>
<p>Thanks!</p>
| Brian Rushton | 51,970 | <p>Hint: determinants are multiplicative, and real numbers commute.</p>
|
1,133,544 | <p>I've been struggling to show that $\mathrm{SL}_2(\mathbb{R})$ is a normal subgroup of $\mathrm{GL}_2(\mathbb{R})$. I already proved that $\mathrm{SL}_2(\mathbb{R})\leq\mathrm{GL}_2(\mathbb{R})$ (not shown). Now I want to show that
$$
A\cdot \mathrm{SL}_2(\mathbb{R})=\mathrm{SL}_2(\mathbb{R})\cdot A
$$
for every $A\in \mathrm{GL}_2(\mathbb{R})$. </p>
<p>I know that $\det(AB)=\det(A)\det(B)=\det(B)\det(A)=\det(BA)$. Thus,
$$\det(A\cdot \mathrm{SL}_2(\mathbb{R}))=\det(\mathrm{SL}_2(\mathbb{R})\cdot A )$$
but this does not seem to help me prove normality.</p>
<p>I thought that perhaps rearranging in the following form would help:</p>
<p>$$
A\cdot \mathrm{SL}_2(\mathbb{R})\cdot A^{-1}=\mathrm{SL}_2(\mathbb{R})
$$
If I can show that $A\cdot \mathrm{SL}_2(\mathbb{R})\cdot A^{-1}$ has determinant 1, then I am done. How can I do this?</p>
<p>I would like a hint (no full solutions, please) on how I can proceed.</p>
<p>Thanks!</p>
| Ivo Terek | 118,056 | <p><strong>Hint:</strong> If $A \in {\rm SL}(n, \Bbb R)$ and $G \in {\rm GL}(n, \Bbb R)$, you want to prove that $G^{-1}AG \in {\rm SL}(n, \Bbb R)$. But: $$\det(G^{-1}AG) = \det(G^{-1})\det A\, \det G.$$</p>
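<p>The hint can be checked on a concrete pair of matrices (a Python sketch using exact rational arithmetic, purely illustrative): conjugating a determinant-$1$ matrix by any invertible matrix again yields determinant $1$.</p>

```python
from fractions import Fraction as F

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def mul2(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(m):
    d = det2(m)
    return [[m[1][1] / d, -m[0][1] / d], [-m[1][0] / d, m[0][0] / d]]

A = [[F(2), F(3)], [F(1), F(2)]]   # det = 1, so A is in SL_2
G = [[F(5), F(1)], [F(2), F(1)]]   # det = 3, any invertible G works

conj = mul2(mul2(inv2(G), A), G)   # G^{-1} A G
print(det2(conj))  # 1
```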
|
2,825,789 | <p>I struggle to understand the following theorem (not the proof, I can't even validate it to be true). Note: I don't have a math background.</p>
<blockquote>
<p>If S is not the empty set, then (f : T → V) is injective if and only if Hom(S, f) is injective.</p>
<p>Hom(S, f) : Hom(S, T) → Hom(S, V)</p>
</blockquote>
<p>As I understand, to prove</p>
<p><strong>f is injective ↔ Hom(S, f) is injective</strong></p>
<p>we can go two ways. We can either prove</p>
<ol>
<li><strong>f</strong> is injective → <strong>Hom(S, f)</strong> is injective AND</li>
<li><strong>f</strong> is not injective → <strong>Hom(S, f)</strong> is not injective</li>
</ol>
<p>Or we can prove</p>
<ol>
<li><strong>Hom(S, f)</strong> is injective → <strong>f</strong> is injective AND</li>
<li><strong>Hom(S, f)</strong> is not injective → <strong>f</strong> is not injective</li>
</ol>
<p>Both ways should give the same result, because biconditional is symmetric, right?!</p>
<p>Then I draw the following diagram:</p>
<p><a href="https://i.stack.imgur.com/1IRGM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1IRGM.png" alt="enter image description here" /></a></p>
<p>where I see <strong>f</strong> as injective but <strong>HOM(S, f)</strong> as not!</p>
<p>Where am I wrong? How do I visualize <strong>Hom(S, f)</strong> correctly?</p>
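<p>To test my understanding, I also checked the theorem by brute force on small finite sets (a Python sketch; here I take $\mathrm{Hom}(S,f)$ to send $g \mapsto f \circ g$): over all maps $f: T \to V$, injectivity of $f$ coincides exactly with injectivity of the induced map on $\mathrm{Hom}(S,T)$.</p>

```python
from itertools import product

S, T, V = [0, 1], [0, 1], [0, 1, 2]   # small nonempty test sets

def all_maps(dom, cod):
    return [dict(zip(dom, vals)) for vals in product(cod, repeat=len(dom))]

def is_injective_map(h, dom):
    return len({h[x] for x in dom}) == len(dom)

for f in all_maps(T, V):
    # Hom(S, f): Hom(S, T) -> Hom(S, V),  g |-> f o g
    images = [tuple(f[g[s]] for s in S) for g in all_maps(S, T)]
    hom_injective = len(set(images)) == len(images)
    assert is_injective_map(f, T) == hom_injective
print("checked", len(all_maps(T, V)), "maps f: T -> V")
```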
| Kaj Hansen | 138,538 | <blockquote>
<p><strong>Theorem</strong>: A continuous function <span class="math-container">$f: [a,b] \rightarrow \mathbb{R}$</span> is Riemann integrable.</p>
</blockquote>
<p><em>Proof:</em></p>
<p>Let <span class="math-container">$f: [a,b] \rightarrow \mathbb{R}$</span> be a continuous function. Any function that is continuous on a <a href="https://en.wikipedia.org/wiki/Compact_space#Definitions" rel="nofollow noreferrer">compact</a> set—such as our <span class="math-container">$f$</span> on <span class="math-container">$[a,b]$</span>—is also <a href="https://en.wikipedia.org/wiki/Uniform_continuity" rel="nofollow noreferrer"><em>uniformly</em> continuous</a> on that set<span class="math-container">$^\dagger$</span>. This is to say, given a <span class="math-container">$\mu > 0$</span>, we are guaranteed a <span class="math-container">$\delta > 0$</span> such that <span class="math-container">$|x - y| < \delta \implies |f(x) - f(y)| < \mu$</span> for <em>any</em> <span class="math-container">$x, y \in [a,b]$</span>. Consider a partition <span class="math-container">$\mathcal{P}$</span> of <span class="math-container">$[a, b]$</span> into <span class="math-container">$n$</span> equal intervals of width <span class="math-container">$\displaystyle \frac{b-a}{n}$</span>, with <span class="math-container">$n$</span> large enough so that <span class="math-container">$\displaystyle \frac{b-a}{n} < \delta$</span>. Computing the difference between the upper and lower sums:
<span class="math-container">\begin{align*}
U(f, \mathcal{P}) - L(f, \mathcal{P}) &= \sum_{k = 1}^{n} \left(x_k - x_{k-1} \right)\Big[\operatorname{sup}\{f(x) | x \in [x_{k-1}, x_k] \} - \operatorname{inf} \{f(x) | x \in [x_{k-1}, x_k] \} \Big] \\
& \leq \left( \frac{b-a}{n} \right) \cdot n \cdot \mu \ = \ (b-a)\mu
\end{align*}</span>
Given an <span class="math-container">$\varepsilon > 0$</span>, choose <span class="math-container">$\mu$</span> small enough so that <span class="math-container">$\displaystyle \mu < \frac{\varepsilon}{(b-a)}$</span>. Then <span class="math-container">$U(f, \mathcal{P}) - L(f, \mathcal{P}) < \varepsilon$</span>, and we conclude <span class="math-container">$f$</span> is Riemann integrable on <span class="math-container">$[a,b]$</span>.</p>
<hr>
<p><span class="math-container">$^\dagger$</span> See <strong><a href="https://math.stackexchange.com/questions/110573/continuous-mapping-on-a-compact-metric-space-is-uniformly-continuous">here</a></strong> for further discussion.</p>
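<p>As an illustration (not part of the proof), here is a short Python sketch that computes <span class="math-container">$U(f,\mathcal{P}) - L(f,\mathcal{P})$</span> for a sample continuous function and shows the gap shrinking as the partition is refined; the choice <span class="math-container">$f = \exp$</span> on <span class="math-container">$[0,1]$</span> is just an example:</p>

```python
import math

def upper_lower_gap(f, a, b, n):
    # U(f,P) - L(f,P) on the partition of [a,b] into n equal subintervals.
    # sup/inf on each piece are estimated by dense sampling, which is exact
    # for the monotone example used below (extrema sit at the endpoints).
    h = (b - a) / n
    gap = 0.0
    for k in range(n):
        vals = [f(a + k * h + j * h / 50) for j in range(51)]
        gap += h * (max(vals) - min(vals))
    return gap

gaps = [upper_lower_gap(math.exp, 0.0, 1.0, n) for n in (10, 100, 1000)]
print(gaps)  # decreasing; for this monotone f the gap telescopes to (e-1)/n
```

<p>For this monotone example the gap is exactly <span class="math-container">$(e-1)/n$</span>, matching the <span class="math-container">$(b-a)\mu$</span> bound with <span class="math-container">$\mu$</span> the oscillation on one subinterval.</p>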
|
353,087 | <p>Solve the interior Dirichlet Problem</p>
<p>$$(r^2u_r)_r+\dfrac{1}{\sin\phi}(\sin\phi~u_\phi)_\phi+\dfrac{1}{\sin^2\phi}u_{\theta\theta}=0\,, \,\,\,\,\,\,\, 0<r<1 $$</p>
<p>where $u(1,\phi)=\cos3\phi$</p>
| Ron Gordon | 53,268 | <p>You are really just solving Laplace's equation </p>
<p>$$\Delta u = 0$$</p>
<p>in the interior of the unit sphere, with a boundary condition that is independent of $\theta$. The solution to this problem is well known:</p>
<p>$$u(r,\phi,\theta) = \sum_{n=0}^{\infty} a_n r^n \, P_n(\cos{\phi})$$</p>
<p>where $P_n$ is the $n$th Legendre polynomial. You may derive this solution through a separation of variables; the separation constant turns out to be $n (n+1)$. See, for example, S. Holland, <em><a href="http://rads.stackoverflow.com/amzn/click/0486458016" rel="nofollow">Applied Analysis by the Hilbert Space Method</a></em>, Secs. 4.6 and 7.8. The coefficients $a_n$ are found using the orthogonality of the Legendres:</p>
<p>$$\begin{align}a_n &= \frac{2 n+1}{2} \int_0^{\pi} d\phi\, \sin{\phi} P_n(\cos{\phi}) \, \cos{3 \phi}\\ &=\frac{2 n+1}{2} \int_{-1}^1 dt \: P_n(t) \,(4 t^3-3 t) \end{align}$$</p>
<p>Express $4 t^3-3 t$ in terms of Legendres:</p>
<p>$$4 t^3-3 t = -\frac{3}{5} P_1(t) + \frac{8}{5} P_3(t)$$</p>
<p>By orthogonality, these coefficients in the Legendre expansion of the boundary data are exactly the coefficients $a_n$ in the solution. Therefore:</p>
<p>$$u(r,\phi,\theta) = -\frac{3}{5} r P_1(\cos{\phi}) + \frac{8}{5} r^3 P_3(\cos{\phi})$$</p>
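<p>The expansion of $4t^3 - 3t$ in Legendre polynomials can be sanity-checked numerically; the sketch below hard-codes the standard formulas $P_1(t) = t$ and $P_3(t) = \tfrac{1}{2}(5t^3 - 3t)$ and verifies the stated coefficients $-\tfrac{3}{5}$ and $\tfrac{8}{5}$ at enough points to pin down a cubic:</p>

```python
from fractions import Fraction as F

def P1(t):
    return t                       # Legendre polynomial of degree 1

def P3(t):
    return (5 * t**3 - 3 * t) / 2  # Legendre polynomial of degree 3

# A cubic is determined by 4 values, so checking 7 integer points is ample.
for k in range(-3, 4):
    t = F(k)
    assert 4 * t**3 - 3 * t == F(-3, 5) * P1(t) + F(8, 5) * P3(t)
print("expansion verified")
```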
|
4,531,939 | <p>I know that <span class="math-container">$$p(a|b)=\frac{p(a, b)}{p(b)}$$</span>
And I also know <span class="math-container">$$p(a, b) = p(a)p(b)$$</span></p>
<p>So, algebraically, it all seems to me that <span class="math-container">$$p(a|b)=p(a)$$</span>
I know something is wrong with this situation that I'm thinking about but I don't know where I am wrong.<br />
My problem is that I have a Bayesian network like
<a href="https://i.stack.imgur.com/TtpXk.png" rel="nofollow noreferrer">this image</a>.
I have the probability distribution for the MotherGene and FatherGene, and I wanna calculate the conditional probability <span class="math-container">$p(ChildGene=1|MotherGene=0,FatherGene=2)$</span>. So it's gonna be <span class="math-container">$$ \frac{p(ChildGene=1, MotherGene=0, FatherGene=2)}{p(MotherGene=0, FatherGene=2)}$$</span>
It's exactly like <span class="math-container">$p(ChildGene=1)$</span> when I try to calculate and the Bayesian network doesn't affect anything.</p>
| BatMath | 1,038,433 | <p>To give a more detailed explanation -- For two events <span class="math-container">$a$</span> and <span class="math-container">$b$</span>, the quantity <span class="math-container">$p(a\vert b)$</span> is called the <em>probability of <span class="math-container">$a$</span> given <span class="math-container">$b$</span></em> and should be understood as such. It indeed satisfies the formula
<span class="math-container">$$
p(a\vert b) = \frac{p(a,b)}{p(b)}\quad \hbox{provided }p(b)>0.
$$</span>
These events are said to be <em>independent</em> if <span class="math-container">$p(a,b) = p(a)p(b)$</span>. It sometimes happens that two events are independent while still "interacting with one another in some way" -- so you should not be too alarmed by your observation.</p>
<p>To give a concrete (classical) example, we can toss three fair coins and denote by <span class="math-container">$a_{ij}$</span> the event "coin <span class="math-container">$i$</span> and <span class="math-container">$j$</span> match" (<span class="math-container">$1\le i<j\le 3$</span>) so that <span class="math-container">$p(a_{ij}) = 1/2$</span>. Now notice that the events <span class="math-container">$a_{12},a_{23},a_{13}$</span> are pairwise independent as
<span class="math-container">$$
p(a_{ij},a_{ik}) = \frac{1}{4} =
p(a_{ij})p(a_{ik}),
$$</span>
whenever <span class="math-container">$j\ne k$</span>. However these events are not independent since
<span class="math-container">\begin{align*}
p(a_{12},a_{23},a_{13}) = p\left(
\hbox{all coins match}
\right) = \frac{1}{4}\ne \frac{1}{8} = p(a_{12})p(a_{23})p(a_{13}).
\end{align*}</span>
Coming back to your point of view of conditional probability, we can note that <span class="math-container">$p(a_{12}\vert a_{13}) = p(a_{12}) = p(a_{12}\vert a_{23})$</span>. In other words the fact that coin <span class="math-container">$1$</span> or coin <span class="math-container">$2$</span> matches with coin <span class="math-container">$3$</span> does not affect the likelihood of the first two coins to match. However <span class="math-container">$p(a_{12}\vert a_{13},a_{23}) = 1$</span>. Indeed, if coin <span class="math-container">$1$</span> matches with coin <span class="math-container">$3$</span> and so do <span class="math-container">$2$</span> and <span class="math-container">$3$</span> then, we must have coin <span class="math-container">$1$</span> and <span class="math-container">$2$</span> that match.</p>
<p>I hope this provides an intuition to the fact that events can be independent whilst still "interacting" with one another.</p>
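<p>The three-coin example above can also be verified by exact enumeration of the $8$ equally likely outcomes; a quick Python sketch (using <code>Fraction</code> for exact arithmetic):</p>

```python
from fractions import Fraction
from itertools import product

outcomes = list(product([0, 1], repeat=3))  # 8 equally likely toss triples

def pr(event):
    # exact probability of an event under the uniform distribution
    return Fraction(sum(1 for w in outcomes if event(w)), len(outcomes))

a12 = lambda w: w[0] == w[1]   # coins 1 and 2 match
a13 = lambda w: w[0] == w[2]   # coins 1 and 3 match
a23 = lambda w: w[1] == w[2]   # coins 2 and 3 match

# pairwise independent:
assert pr(lambda w: a12(w) and a13(w)) == pr(a12) * pr(a13) == Fraction(1, 4)
# ... but not mutually independent:
assert pr(lambda w: a12(w) and a13(w) and a23(w)) == Fraction(1, 4) != Fraction(1, 8)
```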
|
4,531,939 | <p>I know that <span class="math-container">$$p(a|b)=\frac{p(a, b)}{p(b)}$$</span>
And I also know <span class="math-container">$$p(a, b) = p(a)p(b)$$</span></p>
<p>So, algebraically, it all seems to me that <span class="math-container">$$p(a|b)=p(a)$$</span>
I know something is wrong with this situation that I'm thinking about but I don't know where I am wrong.<br />
My problem is that I have a Bayesian network like
<a href="https://i.stack.imgur.com/TtpXk.png" rel="nofollow noreferrer">this image</a>.
I have the probability distribution for the MotherGene and FatherGene, and I wanna calculate the conditional probability <span class="math-container">$p(ChildGene=1|MotherGene=0,FatherGene=2)$</span>. So it's gonna be <span class="math-container">$$ \frac{p(ChildGene=1, MotherGene=0, FatherGene=2)}{p(MotherGene=0, FatherGene=2)}$$</span>
It's exactly like <span class="math-container">$p(ChildGene=1)$</span> when I try to calculate and the Bayesian network doesn't affect anything.</p>
| Suzu Hirose | 190,784 | <p>For example if eye colour is the gene, let <span class="math-container">$p(B)$</span> be the probability that the child's eye colour is blue, and <span class="math-container">$P(B|MB)$</span> be the probability that the child's eye colour is blue given that the mother's eye colour is blue.</p>
<p>Suppose the probability that the eye colour is blue is 0.25 but mothers with blue eyes give birth to children with blue eyes 50% of the time then
<span class="math-container">$$
P(B)=0.25\\
P(B|MB)=0.5
$$</span>
Now assuming mothers come from the same population as the children, <span class="math-container">$P(MB)=0.25$</span>, so <span class="math-container">$P(B)P(MB)=0.25^2={1\over 16}$</span> but <span class="math-container">$P(B|MB)=0.5\neq P(B)P(MB)$</span>.</p>
|
1,618,373 | <p>Prove that $S_4$ cannot be generated by $(1 3),(1234)$</p>
<p>I have checked some combinations between $(13),(1234)$ and found out that those combinations cannot generate 3-cycles.</p>
<p>Updated idea:<br>
Let $A=\{\{1,3\},\{2,4\}\}$<br>
Note that $(13)A=A,(1234)A=A$<br>
Hence, $\sigma A=A,\forall\sigma\in \langle(13),(1234)\rangle$<br>
In particular, since $(12)A\neq A$, we have $(12)\notin\langle(13),(1234)\rangle$<br>
So we conclude that $S_4\neq\langle(13),(1234)\rangle$</p>
| Ennar | 122,131 | <p>If we denote $a = (1234)$ and $b = (13)$, one can easily check that $a^4 = e$, $b^2 = e$ and $ab = ba^{-1}$ which are precisely relations that define dihedral group $D_4$. Thus, subgroup generated by $a$ and $b$ in $S_4$ is isomorphic to quotient of $D_4$, and thus it's order is less or equal than $8$. Since $|S_4| = 4!$, obviously $a$ and $b$ can't generate whole $S_4$. One can easily check that there are $8$ distinct elements in $\langle a,b\rangle$, so it is actually isomorphic to $D_4$.</p>
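<p>The claim that $\langle(13),(1234)\rangle$ has exactly $8$ elements is easy to confirm by brute force; here is a small Python sketch (permutations written $0$-based as tuples; an illustration, not part of the argument):</p>

```python
def compose(p, q):
    # (p ∘ q)(i) = p[q[i]]; permutations of {0,1,2,3} stored as tuples
    return tuple(p[q[i]] for i in range(4))

a = (1, 2, 3, 0)        # the 4-cycle (1 2 3 4), written on 0-based points
b = (2, 1, 0, 3)        # the transposition (1 3)
identity = (0, 1, 2, 3)

# closure: repeatedly multiply by the generators until nothing new appears
group = {identity}
frontier = {identity}
while frontier:
    new = {compose(g, s) for g in frontier for s in (a, b)} - group
    group |= new
    frontier = new

print(len(group))  # 8, far short of |S_4| = 24
```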
|
139,575 | <p>I use Magma to calculate the L-value:</p>
<pre><code>E:=EllipticCurve([1, -1, 1, -1, 0]);
E;
Evaluate(LSeries(E),1),RealPeriod(E),Evaluate(LSeries(E),1)/RealPeriod(E);
</code></pre>
<p>which outputs</p>
<pre><code>Elliptic Curve defined by y^2 + x*y + y = x^3 - x^2 - x over Rational Field
0.386769938387780043302394751243 3.09415950710224034641915800995
0.125000000000000000000000000000
</code></pre>
<p><span class="math-container">$\#\mathrm{tor}(E) = 4,\ c_{17}(E)=1.$</span></p>
<p>But the strong BSD predicts that</p>
<p><span class="math-container">$L(E,1)/\Omega_{\infty}=\bigl(\#Sha(E)/\#\mathrm{tor}(E)^2\bigr)\cdot c_{17}(E)$</span></p>
<p>We will get <span class="math-container">$L(E,1)/\Omega_{\infty}=1/16$</span>, not <span class="math-container">$1/8$</span>.
Why does that happen? Thanks a lot.</p>
| Joe Silverman | 11,926 | <p>Tim's answer is great, but I want to mention one other place where people have lost a power of 2. The canonical height is often defined relative to the divisor (O), as I do in my books. So it is given by
$$ \hat h(P) = \frac12 \lim_{n\to\infty} 4^{-n} h\bigl(x([2^n]P)\bigr). $$
Here the $\frac12$ is inserted because $x$ has a double pole at $\infty$. However, in computing the height regulator for BSD, one should not use the $\frac12$, i.e., one should compute heights relative to the divisor $2(O)$. </p>
<p>Why, one might ask, use a weird divisor like $2(O)$? The answer is that when BSD is properly formulated for abelian varieties, it uses the height pairing on $A(K)\times \hat A(K)$ that pairs points of $A$ with points on its dual, and the pairing is done relative to the Poincaré divisor on $A\times \hat A$. If one traces through the definitions and identifies an elliptic curve with its dual, the Poincaré divisor on $E\times E$ is $(O)\times E + E\times (O)$, which eventually shows that one should use the height on $E$ relative to $2(O)$.</p>
<p>Note that this means that if you compute BSD using the wrong height on a curve of rank $r$, then your answer will be off by a factor of $2^r$.</p>
<p>(I learned about this potential error from Dick Gross many years ago.)</p>
|
1,119,027 | <p>I'm trying to learn Bayes's formula, and am coming up with some poker problems to learn this.</p>
<p>My problem is as following: given a $H4,H5$ ($4$ of hearts, $5$ of hearts) hand, what are the odds that I'll hit a straight flush?</p>
<p>My reasoning is like this:</p>
<p>$$\Pr(\text{straight flush}|H4H5) = (\Pr(H4H5|\text{straight flush}) \cdot \Pr(\text{straight flush})) / \Pr(H4H5)$$</p>
<p>Now, off <a href="http://en.wikipedia.org/wiki/Poker_probability" rel="nofollow">of wikipedia</a>, I learnt that:</p>
<p>$$P(\text{straight flush}) = 0.00139$$</p>
<p>Given that there are 36 ways to achieve a straight flush, and only 4 ways to have a straight flush with $H4,H5$ (namely $HA-H5, H2-6, H3-7, H4-8$), I calculated that:</p>
<p>$$\Pr(H4H5|\text{straight flush}) = 4/36 = 1/9$$</p>
<p>Now, how do we find $\Pr(H4H5)$? My reasoning was: There's a $2/52$ chance that we get dealt $H4$ or $H5$ as the first card, and then a $1/51$ chance that we get dealt $H4$ or $H5$ as the second card.</p>
<p>However, filling out those numbers says there is a 15% chance that this will happen. That numbers seems way to high to me. Surely, somewhere in my reasoning I'm making a mistake. Who can help?</p>
| Tahir Imanov | 208,078 | <p>What you are asking is the probability of drawing two particular cards, without putting them back.
The probability of drawing the first card is 1/52.
Now, there are 51 cards left,
so the conditional probability of drawing the second card is 1/51.
Therefore PR(A & B) = PR(A)*PR(B|A) = (1/52)*(1/51), and twice that if the two cards may come in either order.</p>
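<p>To address the asker's $\Pr(H4,H5)$ directly: enumerating all ordered two-card deals confirms the without-replacement multiplication and gives roughly $0.075\%$, nowhere near $15\%$. A quick sketch (the card labels $0$ and $1$ standing for $H4$ and $H5$ are an arbitrary encoding):</p>

```python
from fractions import Fraction

deck = range(52)          # card 0 stands for H4, card 1 for H5 (labels arbitrary)
target = {0, 1}
ordered_deals = [(x, y) for x in deck for y in deck if x != y]   # 52*51 deals
hits = sum(1 for d in ordered_deals if set(d) == target)
p = Fraction(hits, len(ordered_deals))

assert p == Fraction(2, 52) * Fraction(1, 51) == Fraction(1, 1326)
print(float(p))  # ~0.000754, i.e. about 0.075%
```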
|
1,186,825 | <p>Prove $$\lim_{n\to\infty}\int_0^1 \left(\cos{\frac{1}{x}} \right)^n\mathrm dx=0$$</p>
<p>I tried, but failed. Any help will be appreciated.</p>
<p>At most points $(\cos 1/x)^n\to 0$, but how can I prove that the integral tends to zero clearly and convincingly?</p>
| Jack D'Aurizio | 44,121 | <p>$$I_n=\int_{0}^{1}\cos^n\frac{1}{x}\,dx = \int_{1}^{+\infty}\frac{\cos^n x}{x^2}\,dx=\sum_{n\geq 0}\int_{1+2n\pi}^{1+(2n+2)\pi}\frac{\cos^n x}{x^2}\,dx$$
hence:
$$ I_n = \frac{1}{4\pi^2}\int_{1}^{1+2\pi}\psi'\left(\frac{x}{2\pi}\right)\cos^n x\,dx$$
and by Cauchy-Schwarz inequality:
$$ |I_n| \leq \frac{1}{4\pi^2}\sqrt{\int_{1}^{1+2\pi}\psi'\left(\frac{x}{2\pi}\right)^2\,dx}\sqrt{\int_{0}^{2\pi}\cos^{2n}x\,dx}\leq\frac{C}{n^{1/4}}$$
for some positive constant $C$. It follows that $I_n\to 0$ as $n\to+\infty.$</p>
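<p>The Cauchy–Schwarz step relies on $\int_0^{2\pi}\cos^{2n}x\,dx$ decaying like $n^{-1/2}$; by a standard Wallis-type identity (not stated in the answer) this integral equals $2\pi\binom{2n}{n}4^{-n}$. A quick numerical check:</p>

```python
import math

def wallis_exact(n):
    # ∫_0^{2π} cos^{2n} x dx = 2π C(2n, n) / 4^n, which is O(n^{-1/2})
    return 2 * math.pi * math.comb(2 * n, n) / 4 ** n

def wallis_numeric(n, steps=100000):
    # left Riemann sum; extremely accurate for smooth periodic integrands
    h = 2 * math.pi / steps
    return h * sum(math.cos(k * h) ** (2 * n) for k in range(steps))

for n in (1, 2, 5, 20):
    assert abs(wallis_numeric(n) - wallis_exact(n)) < 1e-6
print("identity verified")
```

<p>Taking the square root of this $O(n^{-1/2})$ quantity is what produces the $n^{-1/4}$ rate in the bound above.</p>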
|
2,048,054 | <p>I need to find signed distance from the point to the intersection of 2 hyperplanes. I was quite sure that this is something that every mathematician do twice a week :) But not found any good solution or explanation for same problem.</p>
<p>In my case the hyperplanes is defined as $y = w'*x + x_0$, but it is ok to define it with the set of points if there is no other way to solve.</p>
<p>The only solution i found is method to find points of intersection from points from hyperplanes here: <a href="https://www.mathworks.com/matlabcentral/fileexchange/50181-affinespaceintersection-intersection-of-lines-planes-volumes-etc" rel="nofollow noreferrer">https://www.mathworks.com/matlabcentral/fileexchange/50181-affinespaceintersection-intersection-of-lines-planes-volumes-etc</a></p>
<p>But i stuck how to find signed distance after that.</p>
<p>I have strong feeling that there is easy solution, but i don't know correct keywords.</p>
<p>It will be great to see formulas and implementation on any language.
But for sure any help highly appreciated.</p>
<p>Thank you.</p>
| Henno Brandsma | 4,280 | <p>If $X$ is locally connected, then every connected component $C$ of $X$ is open (and closed). For any space $X$, the connected components form a disjoint cover of $X$ (every point is in a component, and two distinct components are disjoint, or their union would be a strictly larger connected set, contradicting their maximality). Clearly, a disjoint cover has no smaller subcover at all. So if $X$ is locally connected and compact, $X$ can only have finitely many components, as the cover by components is an open cover, so compactness applies to it. So the disconnectedness is "limited" for compact locally connected spaces. In particular, no compact locally connected totally disconnected space exists, except finite discrete spaces.</p>
|
1,410,163 | <p>Show that the limit of the function, $f(x,y)=\frac{xy^2}{x^2+y^4}$, does not exist when $(x,y) \to (0,0)$.</p>
<p>I had attempted to prove this by approaching $(0, 0)$ from $y = mx$, assuming $m = -1$ and $m = 1$. The result was $f(y, -y) = \frac{y}{1+y^2}$ and $f(y, y) = \frac{y}{1+y^2}$ as the limits which are obviously different. Essentially, I was just wondering what is the correct working out for a solution to this question.</p>
| tattwamasi amrutam | 90,328 | <p>Suppose that $A \subset B$. Let $ x \in B^c$. Then $x \not\in B$. Then $x \not \in A$. Thus $x \in A^c$. </p>
<p>Similarly assuming that $B^c \subset A^c$. Let $x \in A$. Then $x \not \in A^c$. Thus $x \not \in B^c$. Hence $x \in B$</p>
|
1,704,410 | <p>If we have two groups <span class="math-container">$G,H$</span> the construction of the direct product is quite natural. If we think about the most natural way to make the Cartesian product <span class="math-container">$G\times H$</span> into a group it is certainly by defining the multiplication</p>
<p><span class="math-container">$$(g_1,h_1)(g_2,h_2)=(g_1g_2,h_1h_2),$$</span></p>
<p>with identity <span class="math-container">$(1,1)$</span> and inverse <span class="math-container">$(g,h)^{-1}=(g^{-1},h^{-1})$</span>.</p>
<p>On the other hand we have the construction of the semidirect product which is as follows: consider <span class="math-container">$G$</span>,<span class="math-container">$H$</span> groups and <span class="math-container">$\varphi : G\to \operatorname{Aut}(H)$</span> a homomorphism, we define the semidirect product group as the Cartesian product <span class="math-container">$G\times H$</span> together with the operation</p>
<p><span class="math-container">$$(g_1,h_1)(g_2,h_2)=(g_1g_2,h_1\varphi(g_1)(h_2)),$$</span></p>
<p>and we denote the resulting group as <span class="math-container">$G\ltimes H$</span>.</p>
<p>We then show that this is a group and show many properties of it. My point here is the intuition.</p>
<p>This construction doesn't seem quite natural to make. There are many operations to turn the Cartesian product into a group. The one used when defining the direct product is the most natural. Now, why do we give special importance to this one?</p>
<p>What is the intuition behind this construction? What are we achieving here and why this particular way of making the Cartesian product into a group is important?</p>
| Michael Burr | 86,421 | <p>It is nice to think about $D_4$ as a semidirect product. Namely, $D_4=\langle \sigma,\tau:\sigma^4=\tau^2=1,\tau\sigma=\sigma^{-1}\tau\rangle$. You can see the automorphism because $\sigma$ and $\tau$ do not commute, but the automorphism ($x\mapsto x^{-1}$) tells you how to move the $\tau$ past the $\sigma$.</p>
<p>In general, the direct product is not enough because the operation between elements of the two subgroups is always commutative. On the other hand, if $G$ is a group, $N$ is a normal subgroup, $H$ is a subgroup ($H$ need not be normal like in a direct product), $H\cap N=\{1\}$, and $G=NH$, then $G$ <em>must</em> be a semidirect product. (The operation between elements of $N$ and $H$ need not be commutative.) So, you can argue that the semidirect product classifies all groups constructed in this way.</p>
<p>The big idea in a semidirect product is the following:</p>
<ul>
<li><p>You have two subgroups $N$ and $H$. You understand the operation when you multiply elements of $N$ and you understand the operation when you multiply elements of $H$.</p></li>
<li><p>The automorphism is used to compare the operation <em>between</em> elements of $N$ and elements of $H$.</p></li>
<li><p>You know that $N$ is normal, so for any $n\in N$ and $h\in H$, $hnh^{-1}$ is some element of $N$, and the map $n\mapsto hnh^{-1}$ is an automorphism of $N$. The semidirect product construction describes this conjugation automorphism. Therefore, if the automorphism determined by conjugation is $\phi_h:N\rightarrow N$, then $hn=hnh^{-1}h=\phi_h(n)h$.</p></li>
</ul>
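<p>To see the definition in action, here is a small Python sketch (an illustration, following the question's convention $(g_1,h_1)(g_2,h_2)=(g_1g_2,\,h_1\varphi(g_1)(h_2))$) building $\mathbb{Z}_2\ltimes\mathbb{Z}_4$ with $\varphi(1)$ the inversion automorphism, and checking that it reproduces the dihedral relation from the $D_4$ example:</p>

```python
def phi(g, h):
    # phi : Z2 -> Aut(Z4); phi(0) = identity, phi(1) = inversion h -> -h
    return h % 4 if g == 0 else (-h) % 4

def mul(x, y):
    # semidirect-product multiplication on Z2 x Z4:
    # (g1, h1)(g2, h2) = (g1 + g2, h1 + phi(g1)(h2)), written additively
    (g1, h1), (g2, h2) = x, y
    return ((g1 + g2) % 2, (h1 + phi(g1, h2)) % 4)

elems = [(g, h) for g in range(2) for h in range(4)]   # 8 elements, like D4
sigma, tau = (0, 1), (1, 0)                            # rotation, reflection

# associativity of the twisted product:
assert all(mul(mul(x, y), z) == mul(x, mul(y, z))
           for x in elems for y in elems for z in elems)
# non-abelian, and the D4 relation tau*sigma = sigma^{-1}*tau holds:
assert mul(tau, sigma) != mul(sigma, tau)
assert mul(tau, sigma) == mul((0, 3), tau)
```

<p>The automorphism is exactly where the non-commutativity enters: with $\varphi$ trivial the same code would give the abelian direct product $\mathbb{Z}_2\times\mathbb{Z}_4$.</p>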
|
1,704,410 | <p>If we have two groups <span class="math-container">$G,H$</span> the construction of the direct product is quite natural. If we think about the most natural way to make the Cartesian product <span class="math-container">$G\times H$</span> into a group it is certainly by defining the multiplication</p>
<p><span class="math-container">$$(g_1,h_1)(g_2,h_2)=(g_1g_2,h_1h_2),$$</span></p>
<p>with identity <span class="math-container">$(1,1)$</span> and inverse <span class="math-container">$(g,h)^{-1}=(g^{-1},h^{-1})$</span>.</p>
<p>On the other hand we have the construction of the semidirect product which is as follows: consider <span class="math-container">$G$</span>,<span class="math-container">$H$</span> groups and <span class="math-container">$\varphi : G\to \operatorname{Aut}(H)$</span> a homomorphism, we define the semidirect product group as the Cartesian product <span class="math-container">$G\times H$</span> together with the operation</p>
<p><span class="math-container">$$(g_1,h_1)(g_2,h_2)=(g_1g_2,h_1\varphi(g_1)(h_2)),$$</span></p>
<p>and we denote the resulting group as <span class="math-container">$G\ltimes H$</span>.</p>
<p>We then show that this is a group and show many properties of it. My point here is the intuition.</p>
<p>This construction doesn't seem quite natural to make. There are many operations to turn the Cartesian product into a group. The one used when defining the direct product is the most natural. Now, why do we give special importance to this one?</p>
<p>What is the intuition behind this construction? What are we achieving here and why this particular way of making the Cartesian product into a group is important?</p>
| Mariano Suárez-Álvarez | 274 | <p>You are looking at this in the wrong way.</p>
<p>The main reason for which we define the direct product of groups is that we like describing/understanding the structure of groups and we noticed that many groups are, well, direct products.</p>
<p>Now not all groups are direct products. For example, the dihedral group is not a direct product. But in this last example, for example, we are able to provide a very useful description of the group in a way that resembles a direct product in a way. As we find this same phenomenon in many contexts, we give it a name and call it semidirect product.</p>
<p>It is wrong, and a source of frustration, to look for the intuition of a definition which is motivated by examples: no one had any <em>intuitive</em> reason to come up with the definition of semidirect products out of thin air.</p>
<p>The definition does not have an intuition to justify it: it is a useful concept in that it applies to many examples and encapsulates many features which are useful when doing things with groups.</p>
<p>You would not ask for intuition for the definition of the term «tree».</p>
<hr>
<p>The construction does not seem natural to you simply because you do not know many groups and you have not yet spent much time investigating the structure of groups in any detail — if you do that, then the sheer force of examples will make it natural. </p>
<p>The key point is what it means for a definition to be «natural». And it almost never means «one could come up with it out of abstract meditation»: essentially all definitions are made to codify a situation that people encounter often and which, for that reason, is useful to give a name to. Of course, this meaning of «naturality» is relative: what seems unnatural to you would be utterly natural to, say, Burnside.</p>
<p>The punchline of all this is that it is almost never useful or productive to ask for the intuition of definitions when you first encounter them: what will help you is not some ethereal intuition but examples, and that is what one should ask for to maximize understanding.</p>
<hr>
<p>The next question would naturally be «what is the intuition behind the <a href="https://en.wikipedia.org/wiki/Zappa%E2%80%93Sz%C3%A9p_product" rel="noreferrer">Zappa–Szép product</a>?», and the answer would be the same: none. But some groups are neither direct products nor semidirect products, yet they still have two subgroups with somewhat similar properties to the factors of a direct product, and since this occurs often in practice, we give a name to that situation.</p>
|
4,243,344 | <blockquote>
<p><span class="math-container">${43}$</span> equally strong sportsmen take part in a ski race; 18 of
them belong to club <span class="math-container">${A}$</span>, 10 to club <span class="math-container">${B}$</span> and 15 to club <span class="math-container">${C}$</span>. What is the
average place for (a) the best participant from club <span class="math-container">${B}$</span>; (b) the
worst participant from club <span class="math-container">${B}$</span>?</p>
</blockquote>
<hr />
<p>I've found the possible range of places the participant could get for both cases. In the case (a), the best participant from club <span class="math-container">${B}$</span> can be at any place between <span class="math-container">$1$</span> and <span class="math-container">$34$</span>. As for the case (b), the worst participant from club <span class="math-container">$B$</span> can get any place between <span class="math-container">$10$</span> and <span class="math-container">$43$</span>. To find the average place I need to compute the expected (mean) value of the this variable. But I'm not sure how to find the chances for getting each place. I suppose they should be equal, but neither <span class="math-container">$\frac{1}{33}$</span> nor <span class="math-container">$\frac{1}{43}$</span> seem to give the right answer.</p>
| Especially Lime | 341,019 | <p>The position of the highest placed person from <span class="math-container">$B$</span> (call this <span class="math-container">$X$</span>) can be anywhere from <span class="math-container">$1$</span> to <span class="math-container">$34$</span>, but these are not equally likely. For <span class="math-container">$X=1$</span> you simply need person <span class="math-container">$1$</span> to be from club <span class="math-container">$B$</span>. This has probability <span class="math-container">$18/43$</span>. For <span class="math-container">$X=10$</span>, say, you need person <span class="math-container">$10$</span> to be from club <span class="math-container">$B$</span> (probability <span class="math-container">$18/43$</span>) <strong>and</strong> none of the previous <span class="math-container">$9$</span> people to be from <span class="math-container">$B$</span>. So you need the remaining <span class="math-container">$17$</span> members of <span class="math-container">$B$</span> (who can be in any <span class="math-container">$17$</span> of the remaining <span class="math-container">$42$</span> positions) all to be within the last <span class="math-container">$33$</span>. This has probability <span class="math-container">$\frac{\binom{33}{9}}{\binom{42}{9}}$</span>, so overall you get
<span class="math-container">$$\Pr(X=10)=\frac{18}{43}\times\frac{\binom{33}{9}}{\binom{42}{9}}.$$</span></p>
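<p>These probabilities can be summed exactly. With the question's count of <span class="math-container">$10$</span> club-<span class="math-container">$B$</span> members among <span class="math-container">$43$</span>, the distribution of the best place is <span class="math-container">$\Pr(X=k)=\binom{43-k}{9}/\binom{43}{10}$</span>, and the mean works out to <span class="math-container">$(43+1)/(10+1)=4$</span> (and <span class="math-container">$40$</span> for the worst member, since reversing the finishing order swaps best and worst). A short exact check in Python:</p>

```python
from fractions import Fraction
from math import comb

N, m = 43, 10   # 43 racers, 10 of them from club B
# P(best B-runner finishes k-th) = C(N-k, m-1) / C(N, m)
dist = {k: Fraction(comb(N - k, m - 1), comb(N, m)) for k in range(1, N - m + 2)}
assert sum(dist.values()) == 1   # the probabilities cover all cases

E_best = sum(k * p for k, p in dist.items())
E_worst = (N + 1) - E_best       # reversing the order swaps best and worst

print(E_best, E_worst)  # 4 40
```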
|