| qid | question | author | author_id | answer |
|---|---|---|---|---|
1,602,312 | <p>Can we prove that the pair $\left(\sum_{i=1}^k x_i, \prod_{i=1}^k x_i\right)$ is unique for $x_i \in \mathbb{R}_{>0}$?<br>
I made this conjecture while solving a CS task, but I do not know how to prove it (or I should stop relying on it, if the assumption is false). </p>
<p>Is the pair of sum and product of an array of $k$ real numbers $> 0$ unique per array? I do not want to recover the underlying numbers, or reverse this pair into a factorisation. I just assumed the pair is unique per given multiset, so that I can compare two arrays of equal length and determine whether the numbers inside, with possible multiplicities, are the same. The order in the array does not matter. $0$ is discarded to keep the sum and product meaningful.</p>
<p>At the input I get two arrays of equal length, one of which is sorted, and I want to determine if the elements with multiplicities are the same, but I do not want to sort the second array (sorting would take more than linear time, and linear time is the objective).<br>
So I thought to make checksums; the sum and product seemed unique to me, hence the question.</p>
<p>For example:<br>
I have two arrays: v=[1,3,5,5,8,9.6] and the second g=[9.6,5,3,1,5,8].<br>
I calculate sum $= 31.6$ and product $= 5760$ for both arrays, since one is a sorted version of the other.<br>
Now I calculate the same for b=[1,4,4,5,8,9.6]: the sum is 31.6 but the product is 6144.
So I assumed that if the sum or the product differs between two given arrays, then some element differs.</p>
<p>Getting back to the question, I am thinking that the pair {sum, product} is the same for all permutations of an array (which is desired), but will change when elements are different (which is maybe wishful thinking).</p>
| Christian Blatter | 1,303 | <p>The following is pretty intuitive:</p>
<p>A list $(a_1,a_2,\ldots,a_r)$ of vectors $a_k\in V$ is <em>linearly independent</em> iff none of the $a_k$, $1\leq k\leq r$, is a linear combination of its predecessors in the list. This implies that for each $k$ one has $${\rm dim}\bigl({\rm span}(a_1,\ldots, a_k)\bigr)={\rm dim}\bigl({\rm span}(a_1,\ldots, a_{k-1})\bigr)+1\ ,$$ so that
$${\rm dim}\bigl({\rm span}(a_1,\ldots, a_r)\bigr)=r\ .$$</p>
|
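The checksum comparison the question describes can be sketched in a few lines of Python. This is only an illustration of the proposed approach (the helper name `checksums` is mine, not from the post); it does not settle whether the pair is a reliable fingerprint.

```python
# Sketch of the (sum, product) checksum proposed in the question; zeros are
# assumed to have been discarded already, as the question stipulates.
def checksums(arr):
    total, prod = 0.0, 1.0
    for x in arr:
        total += x
        prod *= x
    return total, prod

v = [1, 3, 5, 5, 8, 9.6]
g = [9.6, 5, 3, 1, 5, 8]   # a permutation of v
b = [1, 4, 4, 5, 8, 9.6]   # same sum as v, different product

print(round(checksums(v)[0], 6), round(checksums(b)[0], 6))  # 31.6 31.6
```

With floating-point inputs the two checksums should be compared with a tolerance (e.g. `math.isclose`) rather than exact equality, since summation order affects the last bits.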
3,499,862 | <p>What is the equivalent of <span class="math-container">$\neg (\forall x) (P(x) \vee Q(x))$</span>? Will <span class="math-container">$P(x) \vee Q(x)$</span> be negated too? Or is just <span class="math-container">$\forall x$</span> negated?</p>
| Bram28 | 256,001 | <p>The Quantifier Negation Law says that:</p>
<p><span class="math-container">$$\neg (\forall x ) \varphi(x) \Leftrightarrow (\exists x) \neg \varphi(x)$$</span></p>
<p>for any formula <span class="math-container">$\varphi(x)$</span></p>
<p>Hence:</p>
<p><span class="math-container">$$\neg (\forall x)(P(x) \lor Q(x)) \Leftrightarrow (\exists x)\neg (P(x) \lor Q(x))$$</span></p>
<p>Now, you can either leave the statement this way, or you can push the negation further in to get:</p>
<p><span class="math-container">$$(\exists x)(\neg P(x) \land \neg Q(x))$$</span></p>
<p>Which one is it? Your choice!</p>
|
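The equivalence in the answer above can be verified exhaustively on a small finite domain. The brute-force sketch below (my own, not part of the answer) checks every pair of predicates on a three-element domain, 64 pairs in total.

```python
# Exhaustive check of the equivalence
#   not (forall x)(P(x) or Q(x))  <=>  (exists x)(not P(x) and not Q(x))
# over every pair of predicates on a 3-element domain.
from itertools import product

domain = [0, 1, 2]
tables = list(product([False, True], repeat=len(domain)))  # all truth tables

ok = all(
    (not all(P[x] or Q[x] for x in domain))
    == any(not P[x] and not Q[x] for x in domain)
    for P in tables for Q in tables
)
print(ok)  # True
```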
4,580,717 | <p>I know I can express "everyone is A" as:</p>
<p>P: is a person
<span class="math-container">$$ \forall x (Px \implies Ax) $$</span></p>
<p>And I can express "everyone who's A is B" as:</p>
<p><span class="math-container">$$ \forall x ((Px \land Ax) \implies Bx) $$</span></p>
<p>But how can I express "everyone who's A"? If I were to take only the first part of the previous statement it would mean "everything is a person and A" instead of just "everyone who's A":</p>
<p><span class="math-container">$$ \forall x (Px \land Ax) $$</span></p>
| Bertrand Wittgenstein's Ghost | 606,249 | <p>Okay, so this is where we are at: How to say, "everyone who is <span class="math-container">$A$</span>". Everyone who is <span class="math-container">$A$</span> is not really a proposition in and of itself. It's a collection of objects in some particular domain of discourse.</p>
<p>So, we can define the predicate <span class="math-container">$Q$</span> by <span class="math-container">$\forall x(Qx \iff (Ax \land Px))$</span>; this is a proposition. Set-theoretically it says that <span class="math-container">$Q=\{x\in P: Ax\}$</span>. Either <span class="math-container">$Q$</span> is empty or populated; in the language of predicate logic, either <span class="math-container">$\neg \exists x\, Qx$</span> holds or <span class="math-container">$\exists x\, Qx$</span> holds.</p>
|
435,936 | <p>Does anyone know when
$x^2-dy^2=k$ is solvable in $\mathbb{Z}_n$, with $(n,k)=1$ and $(n,d)=1$?
I'm interested in the case $n=p^t$</p>
| Najib Idrissi | 10,014 | <p>Yes, it is possible. What follows isn't a completely rigorous proof, but you should be able to make one from that.</p>
<p>The basic idea is to construct a set $A$ such that along one subsequence, $\lim |A \cap I_{a_n}|/a_n = 1$ and along another subsequence, $\lim |A \cap I_{b_n}|/b_n = 0$. We'll do this by adding or skipping consecutive "chunks" of integers.</p>
<ul>
<li>Start by adding $1$ to $A$. We're at density $1$.</li>
<li>Then skip $2$. For $n = 2$, we have $|A \cap I_2|/2 = 1/2$.</li>
<li>Now add $3$ and $4$, so that $|A \cap I_4|/4 = 3/4$.</li>
<li>Now skip $5$ through $12$, so that $|A \cap I_{12}|/12 = 3/12 = 1/4$.</li>
</ul>
<p>This should give you an idea of how the following works:</p>
<ul>
<li>"Add" enough integers so that you're back at density $\geq (2^n - 1)/2^n$.</li>
<li>Then "skip" enough integers so that the density drops to $\leq 1/2^n$.</li>
</ul>
<p>Repeat the two steps in order. Along the two subsequences constructed, the limit of the density is either 1 or 0.</p>
|
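The add/skip procedure in the answer above can be simulated directly. This Python sketch (mine, with the outer loop capped at three rounds to keep it fast) reproduces the densities $1$, $1/2$, $3/4$, $1/4$ of the worked example.

```python
# Simulate the construction: alternately "add" integers to A until the density
# |A ∩ I_n| / n is at least (2^k - 1)/2^k, then "skip" until it is at most 1/2^k.
A = set()
n = 0
densities = []
for k in range(1, 4):
    while n == 0 or len(A) / n < (2 ** k - 1) / 2 ** k:   # "add" phase
        n += 1
        A.add(n)
    densities.append(len(A) / n)
    while len(A) / n > 1 / 2 ** k:                        # "skip" phase
        n += 1
    densities.append(len(A) / n)

print(densities[:4])  # [1.0, 0.5, 0.75, 0.25]
```

The recorded densities oscillate exactly as the bullet list describes, so along one subsequence of indices the ratio tends to $1$ and along another it tends to $0$.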
239,653 | <p>It is usually said that Birch and Swinnerton-Dyer developed their famous conjecture after studying the growth of the function
$$
f_E(x) = \prod_{p \le x}\frac{|E(\mathbb{F}_p)|}{p}
$$
as $x$ tends to $+\infty$ for elliptic curves $E$ defined over $\mathbb{Q}$, where the product runs over the primes $p$ at which $E$ has good reduction. Namely, this function should grow on the order of
$$
\log(x)^r
$$
when $x$ tends to $+\infty$, where $r$ is the (algebraic) rank of $E$.</p>
<p><strong>Question 1.</strong> Why is it natural to look at these kind of products?</p>
<p>Nowadays, people usually state the BSD conjecture as the equality
$$
r = \text{ord}_{s=1}L(E,s)\text{.}
$$</p>
<p><strong>Question 2.</strong> Are these two statements equivalent?</p>
| Jeremy Rouse | 48,142 | <p>In regards to question 2, in 1982 Goldfeld proved that if $f_{E}(x) \sim C (\log x)^{r}$, then (i) $L(E,s)$ has no zeroes with ${\rm Re}(s) > 1$, and (ii) the order of vanishing at $L(E,s)$ is equal to $r$. I do not know if the converse is true (even assuming GRH for $L(E,s)$), as I don't have a copy of Goldfeld's paper.</p>
<p>Strangely, in the case that $r = 0$, Goldfeld shows that $C = \sqrt{2}/L(E,1)$. </p>
|
239,653 | <p>It is usually said that Birch and Swinnerton-Dyer developed their famous conjecture after studying the growth of the function
$$
f_E(x) = \prod_{p \le x}\frac{|E(\mathbb{F}_p)|}{p}
$$
as $x$ tends to $+\infty$ for elliptic curves $E$ defined over $\mathbb{Q}$, where the product runs over the primes $p$ at which $E$ has good reduction. Namely, this function should grow on the order of
$$
\log(x)^r
$$
when $x$ tends to $+\infty$, where $r$ is the (algebraic) rank of $E$.</p>
<p><strong>Question 1.</strong> Why is it natural to look at these kind of products?</p>
<p>Nowadays, people usually state the BSD conjecture as the equality
$$
r = \text{ord}_{s=1}L(E,s)\text{.}
$$</p>
<p><strong>Question 2.</strong> Are these two statements equivalent?</p>
| Myshkin | 43,108 | <p>I recommend you to look at Birch and Swinnerton-Dyer's second paper (Notes on elliptic curves II). They explain beautifully the background for their conjecture.</p>
<p>The basic idea is that if you can prove that $L_E$ has an analytic continuation beyond $\Re(s)=3/2$ (Hasse-Weil conjecture), then</p>
<p>$$L_E(1)=\prod_p\bigg(\frac{|E(\mathbb{F}_p)|}{p}\bigg)^{-1}$$</p>
<p>In their paper they also mention:</p>
<blockquote>
<p>After the work of Siegel [19] on quadratic forms, it is natural to look at
the product $\prod N_p/p$ where $N_p$ is the number of rational points
on the curve over the finite field of $p$ elements.</p>
<p>[19] <em>C. L. Siegel, Über die analytische Theorie der quadratischen Formen. I, II, III. Ann. of Math. 36 (1935), 37 (1936), 38 (1937).</em></p>
</blockquote>
<p>This is explained in more detail at the beginning of their first paper.</p>
<blockquote>
<p>Siegel has shown that the density of rational points on a quadric
surface can be expressed in terms of the densities of $p$-adic points;
which for almost all primes $p$ depends directly on the number of
solutions of the corresponding equation in the finite field with $p$
elements.</p>
<p>It is natural to hope that something similar will happen for the
elliptic curve $$\Gamma:y^2=x^3-Ax-B$$where $A,B$ are rational
numbers.</p>
</blockquote>
|
239,653 | <p>It is usually said that Birch and Swinnerton-Dyer developed their famous conjecture after studying the growth of the function
$$
f_E(x) = \prod_{p \le x}\frac{|E(\mathbb{F}_p)|}{p}
$$
as $x$ tends to $+\infty$ for elliptic curves $E$ defined over $\mathbb{Q}$, where the product runs over the primes $p$ at which $E$ has good reduction. Namely, this function should grow on the order of
$$
\log(x)^r
$$
when $x$ tends to $+\infty$, where $r$ is the (algebraic) rank of $E$.</p>
<p><strong>Question 1.</strong> Why is it natural to look at these kind of products?</p>
<p>Nowadays, people usually state the BSD conjecture as the equality
$$
r = \text{ord}_{s=1}L(E,s)\text{.}
$$</p>
<p><strong>Question 2.</strong> Are these two statements equivalent?</p>
| Community | -1 | <p>If $E$ is an elliptic curve over $\mathbf{Q}$, define $L_p(E,s) = \frac{1}{1-ap^{-s} + p^{1-2s}}$, where $a$ is $p-(|E_p(\mathbf{F}_p)| - 1)$. The L-function of $E$ is defined as the product of the $L_p(E,s)$ over all primes not dividing $2\Delta$ (called “good primes”):
$$ L(E,s) = \prod_{p} L_p(E,s)$$
Since we wish to look at $L(E,1)$, we notice that $\displaystyle L_p(E,1) = \frac{p}{p+1-a}$. From the definition of $a$, we see that $p+1-a=|E_p(\mathbf{F}_p)|$, and so $\displaystyle L_p(E,1) = \frac{p}{|E_p(\mathbf{F}_p)|}$. Taking the product over all good primes $p$ shows that
$$L(E,1) = \prod_{p}\frac{p}{|E_p(\mathbf{F}_p)|}.$$</p>
|
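The local factors $L_p(E,1) = p/|E(\mathbb{F}_p)|$ can be computed by brute force, which makes the question's product $f_E(x)$ easy to experiment with. The sketch below is my own; the curve $y^2 = x^3 + x + 1$ is an arbitrary sample choice, and the prime sieve is deliberately naive.

```python
# Brute-force |E(F_p)|: affine solutions of y^2 = x^3 + a*x + b over F_p,
# plus the point at infinity.  Then form the partial product
# f_E(x) = prod_{p <= x} |E(F_p)| / p from the question.
def count_points(a, b, p):
    affine = sum((y * y) % p == (x ** 3 + a * x + b) % p
                 for x in range(p) for y in range(p))
    return affine + 1  # the point at infinity

def primes_up_to(x):
    return [p for p in range(2, x + 1) if all(p % d for d in range(2, p))]

def f_E(a, b, x):
    prod = 1.0
    for p in primes_up_to(x):
        # keep only odd primes of good reduction (skipping p = 2 sidesteps
        # the extra subtleties of reduction at 2)
        if p > 2 and (4 * a ** 3 + 27 * b ** 2) % p:
            prod *= count_points(a, b, p) / p
    return prod

print(count_points(1, 1, 5))  # 9
```

For this curve one finds, for example, $|E(\mathbb{F}_5)| = 9$; watching `f_E(1, 1, x)` grow as `x` increases is exactly the experiment the question attributes to Birch and Swinnerton-Dyer, albeit at a toy scale.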
4,431,185 | <p>Let <span class="math-container">$ABC$</span> be a triangle and denote its area by <span class="math-container">$k = \mathrm{area}(ABC)$</span>. I want to divide <span class="math-container">$ABC$</span> into two sub-triangles <span class="math-container">$ABE$</span> and <span class="math-container">$AEC$</span> such that <span class="math-container">$\mathrm{area}(ABE) = t$</span> and <span class="math-container">$\mathrm{area}(AEC) = k-t$</span> for some <span class="math-container">$t < k$</span>.</p>
<p>Is it possible to find <span class="math-container">$E$</span> exactly? By "exactly" I mean that you could get an arbitrarily close approximation by repeatedly bisecting the edge <span class="math-container">$BC$</span> to find the split point <span class="math-container">$E$</span>.</p>
| Vasili | 469,083 | <p><span class="math-container">$A_{\triangle ABC}=\frac{1}{2}BC\cdot h=k \implies BC=\frac{2k}{h}$</span> (<span class="math-container">$h$</span> is a height from point <span class="math-container">$A$</span>)<p>
<span class="math-container">$A_{\triangle ABE}=\frac{1}{2}BE\cdot h=t \implies BE=\frac{2t}{h}$</span><p>
<span class="math-container">$A_{\triangle ACE}=\frac{1}{2}CE\cdot h=\frac{1}{2}(BC-BE)\cdot h=\frac{1}{2}(\frac{2k}{h}-\frac{2t}{h})\cdot h=k-t$</span><p>
Thus, <span class="math-container">$BC:BE=k:t$</span> so <span class="math-container">$BE=\frac{2t}{h}=\frac{t}{k}BC$</span> .</p>
|
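The conclusion $BE = \frac{t}{k}BC$ locates $E$ in closed form, so no bisection search is needed. A small coordinate check (my own sketch; the triangle below is an arbitrary sample):

```python
# Place E on segment BC at parameter s = t/k, so that area(ABE) = t exactly.
def split_point(B, C, t, k):
    s = t / k
    return (B[0] + s * (C[0] - B[0]), B[1] + s * (C[1] - B[1]))

def area(P, Q, R):
    # standard shoelace formula for a triangle
    return abs((Q[0] - P[0]) * (R[1] - P[1]) - (R[0] - P[0]) * (Q[1] - P[1])) / 2

A, B, C = (0, 4), (0, 0), (6, 0)
k = area(A, B, C)            # 12.0
E = split_point(B, C, 3, k)  # ask for area(ABE) = 3
print(area(A, B, E))         # 3.0
```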
3,040,110 | <p>What is the Range of <span class="math-container">$5|\sin x|+12|\cos x|$</span> ?</p>
<p>I entered the value in desmos.com and getting the range as <span class="math-container">$[5,13]$</span>.</p>
<p>Using <span class="math-container">$\sqrt{5^2+12^2} =13$</span>, I am able to get the maximum value but not able to find the minimum.</p>
| Community | -1 | <p>If <span class="math-container">$f(x) = 5|\sin x| + 12 |\cos x|$</span>, then</p>
<p><span class="math-container">\begin{align*}
f(x) &= \sqrt{f(x)^2} \\
&= \sqrt{25 \sin^2 x + 144 \cos^2 x + 120 |\sin x \cos x|} \\
&= \sqrt{25 + (144 - 25) \cos^2 x + 120 |\sin x \cos x|} \\
&\ge 5
\end{align*}</span></p>
<p>with equality obtained when <span class="math-container">$\cos x = 0$</span>.</p>
|
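A numerical scan confirms the range $[5, 13]$ claimed in the question: the minimum $5$ occurs where $\cos x = 0$ and the maximum $13$ where $\tan x = 5/12$. This sketch is mine, not part of the answer.

```python
# Scan f(x) = 5|sin x| + 12|cos x| over one half-period, which suffices
# because f has period pi.
import math

def f(x):
    return 5 * abs(math.sin(x)) + 12 * abs(math.cos(x))

xs = [i * math.pi / 20000 for i in range(20001)]
lo = min(f(x) for x in xs)
hi = max(f(x) for x in xs)
print(round(lo, 3), round(hi, 3))  # 5.0 13.0
```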
254,126 | <p>If $0 < a < b$, where $a, b \in\mathbb{R}$, determine $\lim \bigg(\dfrac{a^{n+1} + b^{n+1}}{a^{n} + b^{n}}\bigg)$</p>
<p>The answer (from the back of the text) is $\lim \bigg(\dfrac{a^{n+1} + b^{n+1}}{a^{n} + b^{n}}\bigg) = b$ but I have no idea how to get there. The course is Real Analysis 1, so it's a course on proofs. This chapter is on limit theorems for sequences and series. The squeeze theorem might be helpful. </p>
<p>I can prove that $\lim \bigg(\dfrac{a^{n+1} + b^{n+1}}{a^{n} + b^{n}}\bigg) \le b$ but I can't find a way to also prove that it is at least $b$.</p>
<p>Thank you!</p>
| Community | -1 | <p>Here's a proof using induction that $H^2(M_d) = 0$ for all $d \geq 1$. Now for $d = 1$ this is clear. Then by Mayer Vietoris we have</p>
<p>$$\ldots \leftarrow H^3\left(M_{d-1} \cup( \Bbb{R}^2 -\{p_d\};\Bbb{R}\right)\leftarrow H^2\left(M_{d-1}\cap ( \Bbb{R}^2 -\{p_d\});\Bbb{R}\right) \leftarrow \\ H^2(M_{d-1};\Bbb{R}) \oplus H^2(\Bbb{R}^2 - \{p_d\};\Bbb{R}) \leftarrow \ldots$$</p>
<p>Now the $H^3$ term is zero simply because the space in question is just $\Bbb{R}^2$. By induction $H^2(M_{d-1})$ is zero and similarly $H^2(\Bbb{R}^2 - \{p_d\})$ is zero by the basis step. It will now follow that because</p>
<p>$$M_{d-1} \cap \Bbb{R}^2 -\{p_d\} = M_d$$</p>
<p>that $H^2(M_d) = 0$ for all $d$.</p>
<p><strong>Computation of $H^1(M_d)$ for all $d$:</strong></p>
<p>You can do this via induction. For $d = 1$ we have $\Bbb{R}^2 - \{p_1\}$ being homeomorphic (IIRC via some exponential function) to the infinite cylinder $\Bbb{S}^1 \times \Bbb{R}$. Now $\pi_1(S^1 \times \Bbb{R}) = \Bbb{Z}$ and so the Hurewicz Theorem gives that $H_1(S^1 \times \Bbb{R};\Bbb{Z} ) = \Bbb{Z}$. By universal coefficients we get that</p>
<p>$$H^1(S^1 \times \Bbb{R};\Bbb{R}) \cong \textrm{Hom}\left(H_1(S^1 \times \Bbb{R};\Bbb{Z} ),\Bbb{R}\right) \cong \Bbb{R}.$$</p>
<p>Now suppose for the inductive step it has been calculated that the first cohomology of $M_{d-1}$ is $\Bbb{R}^{d-1}$. What I suggest you to do now is suppose you have $d$ points in the plane. Now there is at least one coordinate (wlog say that $x$ - coordinate) such that not all the points have the same $x$ - coordinate. Choose a point with largest $x$ - coordinate. This point of course does not have to be unique.</p>
<p>Can you now find a line that divides the plane into two regions $U$ and $V$ with $U$ containing the $d-1$ points $p_1,\ldots,p_{d-1}$ and $V$ containing just $p_d$?</p>
|
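The limit asked about in this question is easy to sanity-check numerically: dividing numerator and denominator by $b^n$ gives $b \cdot \frac{1 + (a/b)^{n+1}}{1 + (a/b)^n} \to b$, since $0 < a/b < 1$. The values $a = 2$, $b = 3$ in this sketch (mine, not part of the thread) are an arbitrary sample.

```python
# Numeric check that (a^{n+1} + b^{n+1}) / (a^n + b^n) -> b for 0 < a < b.
a, b = 2.0, 3.0

def term(n):
    return (a ** (n + 1) + b ** (n + 1)) / (a ** n + b ** n)

print(abs(term(100) - b) < 1e-12)  # True
```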
3,846,339 | <p>Suppose I have the inequality <span class="math-container">$(\frac{A}{B})^X < (\frac{C}{D})\cdot(\frac{E}{F})^Y$</span> and I want X by itself.</p>
<p>Can I do this <span class="math-container">$X\cdot \log(\frac{A}{B}) < \log(\frac{C}{D})\cdot(Y\cdot \log(\frac{E}{F}))$</span>?
Am I breaking any rules on the right-hand side?</p>
| meet2410shah | 698,925 | <p>Yes!!! You need to apply the rule of addition for logarithmic functions.</p>
<p>The right-hand side should be <span class="math-container">$\log(C/D) + Y \log(E/F)$</span>.</p>
|
3,846,339 | <p>Suppose I have the inequality <span class="math-container">$(\frac{A}{B})^X < (\frac{C}{D})\cdot(\frac{E}{F})^Y$</span> and I want X by itself.</p>
<p>Can I do this <span class="math-container">$X\cdot \log(\frac{A}{B}) < \log(\frac{C}{D})\cdot(Y\cdot \log(\frac{E}{F}))$</span>?
Am I breaking any rules on the right-hand side?</p>
| rash | 650,763 | <p>It should be
<span class="math-container">$$X\log \left(\frac{A}{B}\right)<\log \left(\frac{C}{D}\right)+Y\log \left(\frac{E}{F}\right)$$</span>
You did not break the logarithm of the product into a sum, even though <span class="math-container">$\log xy =\log x +\log y$</span>.</p>
|
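A spot check with concrete numbers confirms the corrected right-hand side: taking logs turns the product into a sum, and the direction of the inequality is preserved because $\log$ is increasing. The values below are arbitrary samples (with $A/B > 1$, so later dividing by $\log(A/B)$ to isolate $X$ would not flip the inequality).

```python
# Check that the original inequality and its log form agree on sample values.
import math

A, B, C, D, E, F = 3, 2, 5, 4, 7, 6
X, Y = 1, 2

lhs = (A / B) ** X < (C / D) * (E / F) ** Y
rhs = X * math.log(A / B) < math.log(C / D) + Y * math.log(E / F)
print(lhs, rhs)  # True True
```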
231,583 | <p>Although there already exists an active research area, so-called automated theorem proving, it mostly works on logic and elementary geometry. </p>
<p>Rather than only logic and elementary geometry, are there existing research results that use machine learning techniques (possibly generative models) to discover new mathematical conjectures (even if not yet proved) in wider branches of mathematics such as differential geometry, harmonic analysis, etc.?</p>
<p>If this type of intellectual task is too difficult to study for now, then as a boiled-down version: can a machine learning system process compact, well-formatted notes (to largely avoid the natural-language-processing part) about, for instance, real analysis/algebraic topology, and then be asked to solve some exercises? Note that the focus and interest here are more about "exploring" possible new conjectures via current (or near-future) state-of-the-art machine learning techniques, rather than proof techniques and logic-based knowledge representation, which are already the concern of much work in classical AI and automated theorem proving. </p>
<p>So, are there known published research results out there, particularly using generative models from machine learning? </p>
<p>If there are no known papers yet, then fruitful and interesting proposals and discussions are also highly welcome. </p>
<p>As a (probably not so good) example, I would like to propose: can a machine learning system "re-discover" the Cauchy-Schwarz inequality if we "teach" it some basic operations, axioms and a certain level of definitions, lemmas etc., with either provided or generated examples (numerical/theoretical)? E.g. if artificial neural networks are used as training tools, what might be useful features in order to eventually output the Cauchy-Schwarz inequality in the final layer? </p>
| joro | 12,481 | <p>Check the answers in <a href="https://mathoverflow.net/questions/92148/interesting-conjectures-discovered-by-computers-and-proved-by-humans">Interesting conjectures “discovered” by computers and proved by humans?</a>.</p>
<p>An example in graph theory is the software <a href="http://www.math.illinois.edu/~dwest/regs/graffiti.html" rel="noreferrer">graffiti</a> which outputs conjectures.</p>
|
1,980,510 | <p>$$ \lim_{(x,y) \to (1,0)} \frac{(x-1)^2\ln(x)}{(x-1)^2 + y^2}$$</p>
<p>I tried L'Hospital's Rule and got nowhere. Then I tried using $x = r\cos(\theta)$ and $y = r\sin(\theta)$, but no help. How would I approach this?</p>
| Ahmed S. Attaalla | 229,023 | <p>$$|\frac{(x-1)^2\ln(x)}{(x-1)^2 + y^2}| \leq |\frac{(x-1)^2 \ln (x)}{(x-1)^2}| \stackrel{x \neq 1}{=} |\ln x|$$</p>
<p>But $\ln x \to 0$ as $x \to 1$.</p>
<p>Hence, by squeeze theorem:</p>
<p>$$ \lim_{(x,y) \to (1,0)} \frac{(x-1)^2\ln(x)}{(x-1)^2 + y^2}=0$$</p>
|
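The squeeze bound $|f(x,y)| \le |\ln x|$ from the answer above can be illustrated numerically; the approach path $(1+t,\, t)$ in this sketch (my own) is an arbitrary choice.

```python
# Numeric illustration of the squeeze bound |f(x, y)| <= |ln x| near (1, 0).
import math

def f(x, y):
    return (x - 1) ** 2 * math.log(x) / ((x - 1) ** 2 + y ** 2)

for t in (1e-1, 1e-3, 1e-5):
    assert abs(f(1 + t, t)) <= abs(math.log(1 + t))

print(abs(f(1 + 1e-8, 1e-8)) < 1e-8)  # True
```

Along this particular path $f(1+t, t) = \tfrac{1}{2}\ln(1+t)$, which visibly goes to $0$; the squeeze theorem upgrades this to the full two-variable limit.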
3,297,190 | <p>Let <span class="math-container">$\{z_i\}_{i=1}^n$</span> and <span class="math-container">$\{w_i\}_{i=1}^n$</span> be two collections of vectors in <span class="math-container">$\mathbb R^p$</span>. Let <span class="math-container">$A$</span> be a real positive definite <span class="math-container">$p\times p$</span> matrix, with Cholesky factorization <span class="math-container">$LL^T$</span>, where <span class="math-container">$L$</span> is also <span class="math-container">$p\times p$</span>.</p>
<p>I want to solve the following optimization:</p>
<p><span class="math-container">$$\min_L F(L) \rightarrow \min_L \sum_{i=1}^n \left(z^T_i LL^T z_i - w^T_i LL^T w_i\right)$$</span>
subject to the constraint
<span class="math-container">$$\text{tr}(LL^T) = 1.$$</span></p>
<p>My approach: Lagrange multipliers. I thought that <span class="math-container">$\frac{d}{dL} \left(z^T LL^T z - w^T LL^T w\right) = 2(zz^T-ww^T)L$</span>, and <span class="math-container">$\frac{d}{dL}\text{Tr}(LL^T) = 2L$</span>, but this doesn't seem to lead to a solution.</p>
<p><strong>Edited: Rewrote problem to include a positive semi-definite constraint, via the Cholesky factorization.</strong></p>
| greg | 357,854 | <p>Rather than solve for the Cholesky factor directly, find a solution in terms of a less <em>structured</em> matrix, <span class="math-container">$M$</span>. Let a colon denote the matrix inner product, i.e.
<span class="math-container">$$\eqalign{
A:B &= {\rm Tr}(AB^T) \cr
M:M &= {\rm Tr}(MM^T) = \frac{1}{\mu^2} \cr
}$$</span>
Also, for typing convenience let
<span class="math-container">$$\eqalign{
A &= \frac{MM^T}{M:M},\qquad
Y = Y^T = \sum_k\left(z_kz_k^T - w_kw_k^T\right) \cr
}$$</span>
Calculate the gradient of <span class="math-container">$F$</span>.
<span class="math-container">$$\eqalign{
F &= Y:A \cr
dF &= Y:dA \cr
&= Y:\Bigg(\frac{dM\,M^T+M\,dM^T}{M:M} - \frac{MM^T\big(dM:M+M:dM\big)}{(M:M)^2}\Bigg) \cr
&= 2\mu^2\Big(YM-FM\Big):dM \cr
\frac{\partial F}{\partial M}
&= 2\mu^2\big(YM-FM\big) \cr
}$$</span>
Set the gradient to zero and solve for <span class="math-container">$M$</span>.
<span class="math-container">$$\eqalign{
&YM = F M \cr
}$$</span>
Thus it appears that the columns of <span class="math-container">$M$</span> are equal to
the eigenvector <span class="math-container">$\{v_k\}$</span> of <span class="math-container">$Y$</span> associated with its minimum eigenvalue <span class="math-container">$\{\lambda_k\}$</span>, i.e.
<span class="math-container">$$k = \arg\min_j \lambda_j,\quad F = \lambda_k,\quad
M = (\,v_k\;v_k\;v_k\;\ldots\,) \,=\, v_k{\large\tt 1}^T
$$</span></p>
<p>Given the solution in terms of <span class="math-container">$M$</span>, recover a solution in terms of <span class="math-container">$L$</span>.
<span class="math-container">$$\eqalign{
L &= {\rm cholesky}\bigg(\frac{MM^T}{M:M}\bigg) \cr
A &= LL^T = \frac{MM^T}{M:M},\quad\quad
{\rm Tr}(LL^T) = \frac{M:M}{M:M} = 1 \cr
}$$</span></p>
|
3,297,190 | <p>Let <span class="math-container">$\{z_i\}_{i=1}^n$</span> and <span class="math-container">$\{w_i\}_{i=1}^n$</span> be two collections of vectors in <span class="math-container">$\mathbb R^p$</span>. Let <span class="math-container">$A$</span> be a real positive definite <span class="math-container">$p\times p$</span> matrix, with Cholesky factorization <span class="math-container">$LL^T$</span>, where <span class="math-container">$L$</span> is also <span class="math-container">$p\times p$</span>.</p>
<p>I want to solve the following optimization:</p>
<p><span class="math-container">$$\min_L F(L) \rightarrow \min_L \sum_{i=1}^n \left(z^T_i LL^T z_i - w^T_i LL^T w_i\right)$$</span>
subject to the constraint
<span class="math-container">$$\text{tr}(LL^T) = 1.$$</span></p>
<p>My approach: Lagrange multipliers. I thought that <span class="math-container">$\frac{d}{dL} \left(z^T LL^T z - w^T LL^T w\right) = 2(zz^T-ww^T)L$</span>, and <span class="math-container">$\frac{d}{dL}\text{Tr}(LL^T) = 2L$</span>, but this doesn't seem to lead to a solution.</p>
<p><strong>Edited: Rewrote problem to include a positive semi-definite constraint, via the Cholesky factorization.</strong></p>
| dineshdileep | 41,541 | <p>Define the <span class="math-container">$p\times n$</span> matrices <span class="math-container">$Z=[z_1,\dots,z_n]$</span> and <span class="math-container">$W=[w_1,\dots,w_n]$</span> (whose columns are the given vectors). Convince yourself that you can rewrite your optimization problem as
<span class="math-container">\begin{align}
\min_{A} &<A,ZZ^T-WW^T> \\ &A\geq 0 ~,~<A,I> = 1
\end{align}</span>
where <span class="math-container">$A\geq 0$</span> implies <span class="math-container">$A$</span> should be positive-semi-definite. Also, for any two symmetric matrices <span class="math-container">$A,B$</span>, we define <span class="math-container">$<A,B> = \mathrm{trace}(AB)$</span>. We can define the eigen-decomposition
<span class="math-container">\begin{align}
A = \sum_{i=1}^{p}\lambda_iu_iu_i^T
\end{align}</span>
where <span class="math-container">$u_i$</span> are the unit-norm eigenvectors and <span class="math-container">$\lambda_i$</span> are the eigenvalues. Let <span class="math-container">$B=ZZ^T-WW^T$</span>. Convince yourself that your optimization problem of finding <span class="math-container">$A$</span> is the same as finding the pairs <span class="math-container">$(\lambda_i,u_i)$</span> in the optimization problem
<span class="math-container">\begin{align}
\min_{\lambda_i,u_i}&\sum_{i=1}^{p}\lambda_iu_i^TBu_i
\\& \lambda_i\geq 0~,~\forall i
\\& \sum_{i=1}^{p}\lambda_i = 1
\end{align}</span>
From the Rayleigh-Ritz ratio, it follows that for any unit-norm <span class="math-container">$u_i$</span>, we have that
<span class="math-container">\begin{align}
u_i^TBu_i\geq \lambda_{min}(B)
\end{align}</span>
and equality is achieved when <span class="math-container">$u_i$</span> is the eigen-vector corresponding to <span class="math-container">$\lambda_{min}(B)$</span>. Thus, it follows that
<span class="math-container">\begin{align}A = uu^T
\end{align}</span> where <span class="math-container">$u$</span> is the eigen-vector corresponding to smallest eigenvalue of <span class="math-container">$B$</span>.</p>
|
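Both answers conclude that the minimizer is $A = uu^T$ for $u$ a unit eigenvector of $B = ZZ^T - WW^T$ with smallest eigenvalue. A tiny exact instance (my own construction, chosen small enough to avoid a numerical eigensolver) makes this concrete.

```python
# Tiny exact instance: Z has the single column (1, 0) and W the single column
# (0, 1), so B = Z Z^T - W W^T = diag(1, -1).  The smallest eigenvalue is -1
# with eigenvector u = (0, 1), so the optimal A is u u^T.
Z = [[1.0], [0.0]]
W = [[0.0], [1.0]]
B = [[Z[i][0] * Z[j][0] - W[i][0] * W[j][0] for j in range(2)] for i in range(2)]

u = (0.0, 1.0)                                   # min eigenvector of diag(1,-1)
A = [[u[i] * u[j] for j in range(2)] for i in range(2)]

trace_A = A[0][0] + A[1][1]                      # feasibility: tr(A) = 1
objective = sum(B[i][j] * A[j][i] for i in range(2) for j in range(2))
print(trace_A, objective)  # 1.0 -1.0
```

The objective value $-1$ equals $\lambda_{\min}(B)$, as the Rayleigh-Ritz argument predicts; any other feasible $A$ (e.g. $A = I/2$, with objective $0$) does worse.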
1,393,154 | <p><span class="math-container">$4n$</span> to the power of <span class="math-container">$3$</span> over <span class="math-container">$2$</span> equals <span class="math-container">$8$</span> to the power of negative <span class="math-container">$1$</span> over <span class="math-container">$3$</span></p>
<p>Written Differently for Clarity:</p>
<p><span class="math-container">$$(4n)^\frac{3}{2} = (8)^{-\frac{1}{3}}$$</span></p>
<hr />
<blockquote>
<p><strong>EDIT</strong></p>
<p>Actually, the problem should be solving <span class="math-container">$4n^{\frac{3}{2}} = 8^{-\frac{1}{3}}$</span>. Another user edited this question for clarity, but they edited it incorrectly to add parentheses around the right hand side, as can be seen above.</p>
</blockquote>
| layman | 131,740 | <p>So the real problem we are trying to complete is to solve $4n^{\frac{3}{2}} = 8^{-\frac{1}{3}}$.</p>
<p>The way to do this is to first divide both sides by $4$ and get:</p>
<p>$n^{\frac{3}{2}} = \dfrac{8^{-\frac{1}{3}}}{4}$</p>
<p>Now, since $8 = 2^{3}$ and $4 = 2^{2}$, we can rewrite this as:</p>
<p>$n^{\frac{3}{2}} = \dfrac{(2^{3})^{-\frac{1}{3}}}{2^{2}}$</p>
<p>and in the numerator, since we have something with a power raised to another power, we can multiply the two powers to get:</p>
<p>$n^{\frac{3}{2}} = \dfrac{2^{3(-\frac{1}{3})}}{2^{2}}$</p>
<p>which gives</p>
<p>$n^{\frac{3}{2}} = \dfrac{2^{-1}}{2^{2}}$</p>
<p>Now, raising something to a negative exponent means you move it through the fraction either up or down depending on where it started. So since $2^{-1}$ is in the numerator, we move it down to the denominator and remove the negative in the exponent, to get:</p>
<p>$n^{\frac{3}{2}} = \dfrac{1}{2^{1}2^{2}}$</p>
<p>Since we are multiplying two things raised to exponents with the same base in the denominator (the base is $2$), we can just add the exponents to get:</p>
<p>$n^{\frac{3}{2}} = \dfrac{1}{2^{3}}$</p>
<p>Finally, we can raise both sides to the $\frac{2}{3}$ power to get:</p>
<p>$(n^{\frac{3}{2}})^{\frac{2}{3}} = \left (\dfrac{1}{2^{3}} \right )^{\frac{2}{3}}$.</p>
<p>When you raise something with an exponent to another exponent, you can just multiply the exponents together to get:</p>
<p>$n^{\frac{3}{2} \cdot \frac{2}{3}} = \left (\dfrac{1}{2^{3}} \right )^{\frac{2}{3}}$</p>
<p>or</p>
<p>$n = \left (\dfrac{1}{2^{3}} \right )^{\frac{2}{3}}$</p>
<p>or</p>
<p>$n = \dfrac{1^{\frac{2}{3}}}{(2^{3})^{\frac{2}{3}}}$</p>
<p>Now, $1$ raised to any power is just $1$, so the numerator is $1$. With the denominator, we again multiply exponents to get:</p>
<p>$n = \dfrac{1}{2^{3 \cdot {\frac{2}{3}}}}$</p>
<p>or</p>
<p>$n = \dfrac{1}{2^{2}}$</p>
<p>or</p>
<p>$n = \dfrac{1}{4}$.</p>
|
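The worked solution $n = 1/4$ can be confirmed numerically (a sketch of mine, not part of the answer):

```python
# Check that n = 1/4 satisfies 4 * n^(3/2) = 8^(-1/3).
n = 1 / 4
lhs = 4 * n ** 1.5      # 4 * (1/8) = 1/2
rhs = 8 ** (-1 / 3)     # 1/2
print(round(lhs, 12), round(rhs, 12))  # 0.5 0.5
```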
259,795 | <p>Consider the function $f \colon\mathbb R \to\mathbb R$ defined by
$f(x)=
\begin{cases}
x^2\sin(1/x); & \text{if }x\ne 0, \\
0 & \text{if }x=0.
\end{cases}$</p>
<p>Use $\varepsilon$-$\delta$ definition to prove that the limit $f'(0)=0$.</p>
<p>Now I see that $h$ should equal $\delta$, and $\delta$ should equal $\varepsilon$ in this case. Thanks to everyone who contributed!</p>
| Qiaochu Yuan | 232 | <p>Any finite-dimensional (edit: Hausdorff) topological real vector space has the usual Euclidean topology (exercise), and any surjective linear transformation between finite-dimensional topological real vector spaces is isomorphic to projection to some subset of coordinates on $\mathbb{R}^n$ (exercise), so this follows from the fact that any projection of an open Euclidean ball is a (possibly lower-dimensional) open Euclidean ball. </p>
|
2,843,560 | <p>If $\sin x +\sin 2x + \sin 3x = \sin y\:$ and $\:\cos x + \cos 2x + \cos 3x =\cos y$, then $x$ is equal to</p>
<p>(a) $y$</p>
<p>(b) $y/2$</p>
<p>(c) $2y$</p>
<p>(d) $y/6$</p>
<p>I expanded the first equation to reach $2\sin x(2+\cos x-2\sin x)= \sin y$, but I doubt it leads me anywhere. A little hint would be appreciated. Thanks!</p>
| lesnik | 121,451 | <p>Draw 3 unit vectors $e_1$, $e_2$ and $e_3$. The angles between these vectors and the $x$-axis are $x$, $2x$ and $3x$ respectively.</p>
<p>The $x$-coordinates of these vectors are $\cos(x)$, $\cos(2x)$ and $\cos(3x)$.</p>
<p>To find the sum of these coordinates you can first add up the vectors and then take the $x$-coordinate of the sum of these 3 vectors.</p>
<p>So, $\cos(x) + \cos(2x) + \cos(3x)$ and $\sin(x) + \sin(2x) + \sin(3x)$ are the coordinates of the vector $e_1 + e_2 + e_3$!</p>
<p>And these must be the coordinates of the unit vector $e_y$!</p>
<p>Now it's a geometry problem, which looks much easier to me. $e_1 + e_2 + e_3$ should point in the same direction as $e_2$.</p>
<p>$x=y=\pi$ seems to be a solution.</p>
<p>$x=\pi / 2, y=\pi$ is also a solution.</p>
<p>So (a) may hold and (b) also may hold.</p>
|
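The geometric claim that $e_1 + e_2 + e_3$ points in the direction of $e_2$ (i.e. $y = 2x$, option (b)) can be checked numerically. This sketch (mine) only checks the direction of the sum vector, which is what the multiple-choice options concern; it uses an $x$ where the sum is nonzero.

```python
# Check that (cos x + cos 2x + cos 3x, sin x + sin 2x + sin 3x) points in the
# direction 2x, using the identity sin x + sin 3x = 2 sin 2x cos x (so the sum
# is (1 + 2 cos x) times the unit vector at angle 2x).
import math

x = 0.7                                   # arbitrary sample with 1 + 2cos x > 0
sx = math.sin(x) + math.sin(2 * x) + math.sin(3 * x)
cx = math.cos(x) + math.cos(2 * x) + math.cos(3 * x)
y = math.atan2(sx, cx)                    # direction of the sum vector
print(abs(y - 2 * x) < 1e-12)  # True
```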
2,949,011 | <blockquote>
<p>If <span class="math-container">$b_n\to\infty$</span> and <span class="math-container">$\{a_n\}$</span> is such that <span class="math-container">$b_n>a_n$</span> for all <span class="math-container">$n$</span>, then <span class="math-container">$a_n\to\infty$</span>.</p>
</blockquote>
<p>We are to settle this by using either (a) formal definitions or (b) a counterexample. </p>
<p>I am very unsure how to proceed with the formal definitions. A counterexample would probably be easier, since we cannot conclude that <span class="math-container">$a_n$</span> goes to infinity simply because it is less than <span class="math-container">$b_n$</span>, but once again, I am at a loss on how to set up this argument.</p>
<p>Any help in proving this would be great.</p>
| Jannik Pitt | 355,418 | <p>I assume you meant <span class="math-container">$a_n \geq b_n$</span>. A sequence <span class="math-container">$(x_n)_{n \in \mathbb{N}}$</span> diverges to <span class="math-container">$\infty$</span> if for every <span class="math-container">$K$</span> there exists an <span class="math-container">$N \in \mathbb{N}$</span> such that for all <span class="math-container">$n\geq N$</span> it holds that <span class="math-container">$x_n \geq K$</span>. Now, we already know that <span class="math-container">$(b_n)_{n \in \mathbb{N}}$</span> satisfies this property. Now given an arbitrary <span class="math-container">$K$</span> how do we choose <span class="math-container">$N$</span> such that <span class="math-container">$a_n \geq K$</span> all <span class="math-container">$n \geq N$</span> using that <span class="math-container">$a_n \geq b_n$</span>?</p>
|
529,708 | <p>A is prime greater than 5, B is A*(A-1)+1,if B is prime,</p>
<p>then digital root of A and B must the same.(OEIS A065508)</p>
<p>Sample: 13*(13-1)+1 = 157
13 and 157 are prime and have the same digital root, 4.</p>
| Calvin Lin | 54,563 | <p><strong>hint</strong>: consider mod 9. What happens in each of these cases?</p>
<p>Use the fact that B is prime, and in particular not a multiple of 3.</p>
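<p>A quick computational check of the claim, supplementing the hint (the primality test and digital-root formula below are standard, not part of the original answer):</p>

```python
# Check: for primes A > 5 with B = A*(A-1) + 1 also prime,
# A and B share the same digital root.
def is_prime(m):
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def digital_root(m):
    return 1 + (m - 1) % 9  # standard digital-root formula for m >= 1

checked = 0
for A in range(7, 2000):
    if is_prime(A):
        B = A * (A - 1) + 1
        if is_prime(B):
            assert digital_root(A) == digital_root(B)
            checked += 1
assert checked > 0  # the assertion above was actually exercised
```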
|
3,609,906 | <p>I need to compute <span class="math-container">$$\lim_{n \to \infty}\sqrt{n}\int_{0}^{1}(1-x^2)^n dx.$$</span>
I proved that
for <span class="math-container">$n\ge1$</span>,
<span class="math-container">$$\int_{0}^{1}(1-x^2)^ndx={(2n)!!\over (2n+1)!!},$$</span>
but I don't know how to continue from here.</p>
<p>I also need to calculate <span class="math-container">$\int_{0}^{1}(1-x^2)^ndx$</span> for <span class="math-container">$n=50$</span> with a <span class="math-container">$1$</span>% accuracy. I thought about using Taylor series but also failed.</p>
| Gary | 83,800 | <p>Answer to your second question. Since
<span class="math-container">$$
\frac{{\frac{{(2n)!!}}{{(2n + 1)!!}}}}{{\frac{{(2n + 2)!!}}{{(2n + 3)!!}}}} = \frac{{(2n)!!(2n + 3)!!}}{{(2n + 1)!!(2n + 2)!!}} = \frac{{n + \frac{3}{2}}}{{n + 1}}
$$</span>
and
<span class="math-container">$$
\frac{{n + 2}}{{n + 1}} < \left( {\frac{{n + \frac{3}{2}}}{{n + 1}}} \right)^2 < \frac{{n + 1}}{n}
$$</span>
for all <span class="math-container">$n\geq 1$</span>, the sequence
<span class="math-container">$$
\sqrt {n + 1} \frac{{(2n)!!}}{{(2n + 1)!!}}
$$</span>
is decreasing and the sequence
<span class="math-container">$$
\sqrt n \frac{{(2n)!!}}{{(2n + 1)!!}}
$$</span>
is increasing. They both converge to the same limit which is (by Ninad Munshi's answer) <span class="math-container">$\sqrt{\pi}/2$</span>. Hence,
<span class="math-container">$$
\frac{{\sqrt \pi }}{2}\frac{1}{{\sqrt {n + 1} }} < \frac{{(2n)!!}}{{(2n + 1)!!}} = \int_0^1 {(1 - x^2 )^n dx} < \frac{{\sqrt \pi }}{2}\frac{1}{{\sqrt n }}.
$$</span>
You can use this to show that
<span class="math-container">$$
0.124096 < \int_0^1 {(1 - x^2 )^{50} dx} < 0.125332.
$$</span></p>
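<p>As a numerical supplement (my own check, not part of the answer), the sandwich bounds are easy to confirm by computing <span class="math-container">$\frac{(2n)!!}{(2n+1)!!}$</span> directly:</p>

```python
# Check the sandwich sqrt(pi)/2/sqrt(n+1) < (2n)!!/(2n+1)!! < sqrt(pi)/2/sqrt(n).
import math

def wallis_ratio(n):
    """(2n)!! / (2n+1)!! computed as a product of (2k)/(2k+1)."""
    r = 1.0
    for k in range(1, n + 1):
        r *= 2 * k / (2 * k + 1)
    return r

c = math.sqrt(math.pi) / 2
for n in (1, 5, 50, 500):
    r = wallis_ratio(n)
    assert c / math.sqrt(n + 1) < r < c / math.sqrt(n)

# In particular, n = 50 reproduces the bounds stated at the end of the answer:
r50 = wallis_ratio(50)
assert 0.124096 < r50 < 0.125332
```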
|
3,609,906 | <p>I need to compute <span class="math-container">$$\lim_{n \to \infty}\sqrt{n}\int_{0}^{1}(1-x^2)^n dx.$$</span>
I proved that
for <span class="math-container">$n\ge1$</span>,
<span class="math-container">$$\int_{0}^{1}(1-x^2)^ndx={(2n)!!\over (2n+1)!!},$$</span>
but I don't know how to continue from here.</p>
<p>I also need to calculate <span class="math-container">$\int_{0}^{1}(1-x^2)^ndx$</span> for <span class="math-container">$n=50$</span> with a <span class="math-container">$1$</span>% accuracy. I thought about using Taylor series but also failed.</p>
| CHAMSI | 758,100 | <p>You won't need a closed form for the integral. Here is an easy way to do it:</p>
<p>Denoting <span class="math-container">$ \left(\forall n\in\mathbb{N}\right),\ W_{n}=\displaystyle\int_{0}^{\frac{\pi}{2}}{\sin^{n}{x}\,\mathrm{d}x} : $</span></p>
<p>We have : <span class="math-container">\begin{aligned} \left(\forall n\in\mathbb{N}^{*}\right),\ W_{n+1}&=\displaystyle\int_{0}^{\frac{\pi}{2}}{\sin{x}\sin^{n}{x}\,\mathrm{d}x} \\ &=\left[-\cos{x}\sin^{n}{x}\right]_{0}^{\frac{\pi}{2}}+n\displaystyle\int_{0}^{\frac{\pi}{2}}{\cos^{2}{x}\sin^{n-1}{x}\,\mathrm{d}x}\\ &=n\displaystyle\int_{0}^{\frac{\pi}{2}}{\left(1-\sin^{2}{x}\right)\sin^{n-1}{x}\,\mathrm{d}x}\\ \left(\forall n\in\mathbb{N}^{*}\right),\ W_{n+1}&=n\left(W_{n-1}-W_{n+1}\right)\\ \iff \left(\forall n\in\mathbb{N}^{*}\right),\ W_{n+1}&=\displaystyle\frac{n}{n+1}W_{n-1} \end{aligned}</span></p>
<p>And since <span class="math-container">$ \left(W_{n}\right)_{n\in\mathbb{N}} $</span> is positive and decreasing, we have that : <span class="math-container">$$ \left(\forall n\geq 2\right),\ W_{n+1}\leq W_{n}\leq W_{n-1}\iff \displaystyle\frac{n}{n+1}\leq\displaystyle\frac{W_{n}}{W_{n-1}}\leq 1 $$</span></p>
<p>Thus <span class="math-container">$ \displaystyle\lim_{n\to +\infty}{\displaystyle\frac{W_{n}}{W_{n-1}}}=1 \cdot $</span></p>
<p>We can easily verify that the sequence <span class="math-container">$ \left(y_{n}\right)_{n\in\mathbb{N}} $</span> defined as following <span class="math-container">$ \left(\forall n\in\mathbb{N}\right),\ y_{n}=\left(n+1\right)W_{n}W_{n+1} $</span> is a constant sequence. (Using the recurrence relation that we got from the integration by parts to express <span class="math-container">$ W_{n+1} $</span> in terms of <span class="math-container">$ W_{n-1} $</span> will solve the problem)</p>
<p>Hence <span class="math-container">$ \left(\forall n\in\mathbb{N}\right),\ y_{n}=y_{0}=W_{0}W_{1}=\displaystyle\frac{\pi}{2} \cdot $</span></p>
<p>Now that we've got all the necessary tools, we can prove that <span class="math-container">$ \displaystyle\lim_{n\to +\infty}{\sqrt{n}W_{n}}=\sqrt{\displaystyle\frac{\pi}{2}} : $</span> <span class="math-container">\begin{aligned} \displaystyle\lim_{n\to +\infty}{\sqrt{n}W_{n}} &=\displaystyle\lim_{n\to +\infty}{\sqrt{y_{n-1}}\sqrt{\displaystyle\frac{W_{n}}{W_{n-1}}}}\\ &=\displaystyle\lim_{n\to +\infty}{\sqrt{\displaystyle\frac{\pi}{2}}\sqrt{\displaystyle\frac{W_{n}}{W_{n-1}}}}\\ \displaystyle\lim_{n\to +\infty}{\sqrt{n}W_{n}}&=\sqrt{\displaystyle\frac{\pi}{2}} \end{aligned}</span></p>
<p>Using the substitution <span class="math-container">$ \left\lbrace\begin{aligned}x&=\cos{y}\\ \mathrm{d}x&=-\sin{y}\,\mathrm{d}y\end{aligned}\right. $</span>, we can see that : <span class="math-container">$$ \left(\forall n\in\mathbb{N}\right),\ \int_{0}^{1}{\left(1-x^{2}\right)^{n}\,\mathrm{d}x}=\displaystyle\int_{0}^{\frac{\pi}{2}}{\sin^{2n+1}{y}\,\mathrm{d}y}=W_{2n+1} $$</span></p>
<p>Thus <span class="math-container">$$ \lim_{n\to +\infty}{\sqrt{n}\int_{0}^{1}{\left(1-x^{2}\right)^{n}\,\mathrm{d}x}}=\lim_{n\to +\infty}{\sqrt{\frac{n}{2n+1}}\sqrt{2n+1}W_{2n+1}}=\frac{1}{\sqrt{2}}\times\sqrt{\frac{\pi}{2}}=\frac{\sqrt{\pi}}{2} $$</span></p>
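<p>A numeric sanity check of two facts used above, namely that <span class="math-container">$y_n=(n+1)W_nW_{n+1}$</span> is constant and that <span class="math-container">$\sqrt{n}\,W_{2n+1}\to\frac{\sqrt{\pi}}{2}$</span>, using the recurrence <span class="math-container">$W_{n+1}=\frac{n}{n+1}W_{n-1}$</span> (my own check, not part of the answer):</p>

```python
import math

# Wallis values via W_0 = pi/2, W_1 = 1, W_{n+1} = n/(n+1) * W_{n-1}
N = 4000
W = [0.0] * (N + 2)
W[0] = math.pi / 2
W[1] = 1.0
for n in range(1, N + 1):
    W[n + 1] = n / (n + 1) * W[n - 1]

# y_n = (n+1) W_n W_{n+1} stays at pi/2 (up to float rounding):
for n in range(0, N):
    assert abs((n + 1) * W[n] * W[n + 1] - math.pi / 2) < 1e-9

# sqrt(n) * W_{2n+1} approaches sqrt(pi)/2:
n = 1999
approx = math.sqrt(n) * W[2 * n + 1]
assert abs(approx - math.sqrt(math.pi) / 2) < 1e-3
```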
|
1,897,538 | <p>Next week I will start teaching Calculus for the first time. I am preparing my notes, and, as a pure mathematician, I cannot come up with a good real-world example of the following.</p>
<p>Are there good examples of
\begin{equation}
\lim_{x \to c} f(x) \neq f(c),
\end{equation}
or of cases when $c$ is not in the domain of $f(x)$?</p>
<p>The only thing that came to my mind is the study of physics phenomena at temperature $T=0 \,\mathrm{K}$, but I am not very satisfied with it.</p>
<p>Any ideas are more than welcome!</p>
<p><strong>Warning</strong></p>
<p>The more approachable the examples are (e.g. to a freshman college student), the more grateful I will be! In particular, I would like the examples to come from the natural or social sciences. Indeed, in a first class in Calculus the importance of indicator functions, etc., is not clear.</p>
<p><strong>Edit</strong></p>
<p>As B. Goddard pointed out, a very subtle point in calculus is the one of removable singularities. If possible, I would love to have some example of this phenomenon. Indeed, most of the examples from physics are of functions with poles or indeterminacy in the domain.</p>
| John Wayland Bales | 246,513 | <p>A $10$ cm length of spring steel wire with a breaking tension of 1 kg is formed into a spring of length $1$ cm.</p>
<p>With the spring suspended vertically, a $100$ gm weight attached to the end of the spring stretches it $1$ cm beyond its natural length. Assume Hooke's law holds until the spring is completely straightened out.</p>
<p>Graph the function $W(x)$ where $x$ is the number of centimeters the spring is stretched and $W(x)$ is the amount of weight in grams the spring is capable of supporting when stretched $x$ cm beyond its natural length.</p>
<p>What can you say about the graph of $y=W(x)$ in the vicinity of $x=9$?</p>
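<p>One way to read this construction (my interpretation, since the answer leaves it as an exercise): Hooke's law gives $W(x)=100x$ grams for $0\le x<9$, the spring straightens out completely at $x=9$ (the original $10$ cm of wire), and the straight wire supports its full breaking tension of $1000$ g, so $\lim_{x\to 9^-}W(x)=900\neq 1000=W(9)$:</p>

```python
# A piecewise model of W(x) under the reading above (my assumption,
# not stated explicitly in the answer):
def W(x):
    if 0 <= x < 9:
        return 100.0 * x   # Hooke's law: 100 g per cm of stretch
    if x == 9:
        return 1000.0      # fully straightened wire holds its breaking tension
    raise ValueError("spring cannot stretch past 9 cm")

# The one-sided limit at x = 9 disagrees with the value at x = 9:
left_limit = W(9 - 1e-9)
assert abs(left_limit - 900.0) < 1e-3
assert W(9) == 1000.0
```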
|
3,283,606 | <p>Good Evening,</p>
<p>I know this is a basic question, but I haven't been able to find a clear explanation for how to solve the following equation:
<span class="math-container">$$n\log_2n=10^6$$</span>
Solving this equation is part of the solution for Problem 1-1 from the Intro. to Algorithms book by CLRS:
<a href="http://atekihcan.github.io/CLRS/P01-01/" rel="nofollow noreferrer">http://atekihcan.github.io/CLRS/P01-01/</a></p>
<p>The author there solves the above to get:
<span class="math-container">$$n=62746$$</span>
But I can't see how to do this. Thank you.</p>
| Parcly Taxel | 357,390 | <p><span class="math-container">$$n\log_2n=10^6$$</span>
<span class="math-container">$$n\ln n=10^6\ln 2$$</span>
<span class="math-container">$$\ln n=\frac1n10^6\ln 2$$</span>
<span class="math-container">$$n=e^{(10^6\ln 2)/n}$$</span>
<span class="math-container">$$10^6\ln2=\frac{10^6\ln2}ne^{(10^6\ln2)/n}$$</span>
Now we require the use of the non-elementary Lambert W function to simplify this further:
<span class="math-container">$$W(10^6\ln2)=\frac{10^6\ln2}n$$</span>
<span class="math-container">$$n=\frac{10^6\ln2}{W(10^6\ln2)}=62746.126469\dots$$</span>
Computing the Lambert W is a bit hard without libraries, which is why the derivation in the CLRS solutions link simply iterates over <span class="math-container">$n$</span> until the expression exceeds a million.</p>
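<p>If no Lambert-W implementation is at hand, the value can still be reproduced with plain bisection, since <span class="math-container">$n\log_2 n$</span> is increasing for <span class="math-container">$n>1$</span> (a small sketch of my own; the bracket and tolerance are arbitrary choices):</p>

```python
# Solve n * log2(n) = 10**6 by bisection on f(n) = n*log2(n) - 10**6.
import math

def f(n):
    return n * math.log2(n) - 1e6

lo, hi = 2.0, 1e6          # f(lo) < 0 < f(hi)
for _ in range(200):
    mid = (lo + hi) / 2
    if f(mid) > 0:
        hi = mid
    else:
        lo = mid

n = (lo + hi) / 2
assert abs(n - 62746.126469) < 1e-3   # matches the Lambert-W value above
assert abs(n * math.log2(n) - 1e6) < 1e-3
```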
|
937,912 | <p>I'm looking for a closed form of this integral.</p>
<p>$$I = \int_0^1 \frac{\operatorname{Li}_2\left( x \right)}{\sqrt{1-x^2}} \,dx ,$$</p>
<p>where $\operatorname{Li}_2$ is the <a href="http://mathworld.wolfram.com/Dilogarithm.html" rel="noreferrer">dilogarithm function</a>.</p>
<p>A numerical approximation of it is</p>
<p>$$ I \approx 1.39130720750676668181096483812551383015419528634319581297153...$$</p>
<p>As Lucian said $I$ has the following equivalent forms:</p>
<p>$$I = \int_0^1 \frac{\operatorname{Li}_2\left( x \right)}{\sqrt{1-x^2}} \,dx = \int_0^1 \frac{\operatorname{Li}_2\left( \sqrt{x} \right)}{2 \, \sqrt{x} \, \sqrt{1-x}} \,dx = \int_0^{\frac{\pi}{2}} \operatorname{Li}_2(\sin x) \, dx = \int_0^{\frac{\pi}{2}} \operatorname{Li}_2(\cos x) \, dx$$</p>
<p>According to <em>Mathematica</em> it has a closed-form in terms of generalized hypergeometric function, Claude Leibovici has given us <a href="https://math.stackexchange.com/a/937926/153012">this form</a>.</p>
<p>With <em>Maple</em>, using Anastasiya-Romanova's form, I could get a closed form in terms of the Meijer G function. It was similar to Juan Ospina's <a href="https://math.stackexchange.com/a/938024/153012">answer</a>, but it wasn't exactly that form. I also don't know whether his form is correct, because the numerical approximation has just $6$ correct digits.</p>
<p>I'm looking for a closed form of $I$ without using generalized hypergeometric function, Meijer G function or $\operatorname{Li}_2$ or $\operatorname{Li}_3$.</p>
<p>I hope it exists. Similar integrals are the following.</p>
<p>$$\begin{align}
J_1 & = \int_0^1 \frac{\operatorname{Li}_2\left( x \right)}{1+x} \,dx = \frac{\pi^2}{6} \ln 2 - \frac58 \zeta(3) \\
J_2 & = \int_0^1 \frac{\operatorname{Li}_2\left( x \right)}{\sqrt{1-x}} \,dx = \pi^2 - 8 \end{align}$$</p>
<p>Related techniques are in <a href="http://carma.newcastle.edu.au/jon/Preprints/Papers/ArXiv/Zeta21/polylog.pdf" rel="noreferrer">this</a> or in <a href="http://arxiv.org/pdf/1010.6229.pdf" rel="noreferrer">this</a> paper. <a href="http://www-fourier.ujf-grenoble.fr/~marin/une_autre_crypto/Livres/Connon%20some%20series%20and%20integrals/Vol-3.pdf" rel="noreferrer">This</a> one also could be useful.</p>
| Ali Shadhar | 432,085 | <p>From writing <span class="math-container">$$\operatorname{Li}_2(x)=-\int_0^1\frac{x\ln u}{1-xu}du$$</span></p>
<p>It follows that </p>
<p><span class="math-container">$$-I=-\int_0^1\frac{\operatorname{Li}_2(x)}{\sqrt{1-x^2}}dx=\int_0^1\ln u\left[\int_0^1\frac{x}{(1-ux)\sqrt{1-x^2}}dx\right]du$$</span></p>
<p><span class="math-container">$$=\int_0^1\ln u\left[\frac{\pi}{2}\cdot\left(\frac{1}{u\sqrt{1-u^2}}-\frac1u\right)+\frac{\sin^{-1}(u)}{u\sqrt{1-u^2}}\right]du$$</span>
<span class="math-container">$$=\frac{\pi}2\int_0^1\frac{\ln u}{u}\left(\frac1{\sqrt{1-u^2}}-1\right)du+\int_0^1\frac{\ln u\sin^{-1}(u)}{u\sqrt{1-u^2}}du$$</span></p>
<p>For the first integral, let <span class="math-container">$u^2\to u$</span> first then apply integration by parts, we obtain</p>
<p><span class="math-container">$$\frac{\pi}{2}\int_0^1\frac{\ln u}{u}\left(\frac{1}{\sqrt{1-u^2}}-1\right)\ du=\frac{\pi}{8}\int_0^1\frac{\ln u}{u}\left(\frac{1}{\sqrt{1-u}}-1\right)du\\=-\frac{\pi}{32}\int_0^1\ln^2u \,(1-u)^{-3/2}du=-\frac{\pi}{32}\lim_{\alpha\to1}\frac{\partial^2}{\partial\alpha^2}\text{B}\left(\alpha,-\frac12\right)\\=-\frac{\pi}{32}\left(\frac23\pi^2-8\ln^22\right)=\boxed{\frac{\pi}4\ln^2(2)-\frac{\pi^3}{48}}\, .$$</span></p>
<p>The second integral is already calculated <a href="https://math.stackexchange.com/questions/3431753/is-there-a-closed-form-for-int-01-frac-lnx-sin-1xx-sqrt1-x2dx/3431858#3431858">here</a></p>
<p><span class="math-container">$$\int_0^1\frac{\ln(x) \sin^{-1}(x)}{x\sqrt{1-x^2}}dx=\boxed{4 \operatorname{Im} \operatorname{Li}_3(1+\mathrm{i}) -\frac{3 \pi^3}{16} -\frac{\pi}{4} \ln^2(2)} \, .$$</span></p>
<p>Collecting the boxed results we get</p>
<blockquote>
<p><span class="math-container">$$I= \frac{5 \pi^3}{24}-4 \operatorname{Im} \operatorname{Li}_3(1+\mathrm{i}) \, .$$</span></p>
</blockquote>
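<p>As a numeric sanity check (my own decomposition, not part of the answer above): expanding $\operatorname{Li}_2(x)=\sum_{k\ge1}x^k/k^2$ and using $\int_0^1 x^k/\sqrt{1-x^2}\,dx=W_k$ (the Wallis integrals, via $x=\sin\theta$), one gets $I=\sum_{k\ge1}W_k/k^2$, which converges quickly and matches the numerical value quoted in the question:</p>

```python
import math

# Wallis values W_k = ∫_0^{pi/2} sin^k t dt via W_0 = pi/2, W_1 = 1,
# W_k = (k-1)/k * W_{k-2}
K = 20000
W = [0.0] * (K + 1)
W[0] = math.pi / 2
W[1] = 1.0
for k in range(2, K + 1):
    W[k] = (k - 1) / k * W[k - 2]

I = sum(W[k] / k**2 for k in range(1, K + 1))

# Matches the numerical value quoted in the question (tail of the sum is ~3e-7):
assert abs(I - 1.3913072075067666) < 1e-5
```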
|
907,851 | <p>I am really new to math; why is $-2^2 = -4 $ and $(-2)^2 = 4 $? </p>
| Labba | 171,314 | <p>Because $-2^2$ means that you take the number $2$, raise it to the second power ($^2$), and then you consider its additive inverse ($-$). So, $2$ raised to the second power is $4$, whose additive inverse is $-4$. This is because exponentiation has a higher priority and it is the first thing you have to do; hence you compute the power before doing anything else, and then you deal with the minus sign.</p>
<p>On the other hand, brackets can change the meaning of a mathematical expression: here $(-2)^2$ simply means that you have to raise $-2$ to the second power, and elementary algebra tells us that $(-2) \cdot (-2) = 4$.</p>
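<p>Most programming languages adopt the same convention; in Python, for example, <code>**</code> binds tighter than unary minus, and parentheses override that:</p>

```python
# Operator precedence mirrors the convention described above.
assert -2**2 == -4      # read as -(2**2)
assert (-2)**2 == 4     # the parentheses make -2 the base
```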
|
251,028 | <p>I want to make <span class="math-container">$S[\{,\cdots, \}]$</span> as follows</p>
<p>The first input of <span class="math-container">$S$</span> is the given list <span class="math-container">$\{1,2,3,\cdots, n\}$</span>, and it produces <span class="math-container">$s_{123\cdots n}$</span>.</p>
<p>Further, if the list is given in a different order, it should still produce the subscripts in increasing order, i.e.,</p>
<p><span class="math-container">$S[\{1,3,2\}] = s_{123}$</span></p>
<p><span class="math-container">$S[\{1,2,4,3\}] = s_{1234}$</span></p>
<p>and so on.</p>
| LouisB | 22,158 | <p>This can be done with a pure function, like this:</p>
<pre><code>ClearAll[S]
S = Subscript[s, Row[{##} // Flatten // Sort, " "]] &;
S[{1, 2, 3, 4}]
S[1, 3, 2]
</code></pre>
<p><a href="https://i.stack.imgur.com/T0hhe.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/T0hhe.png" alt="enter image description here" /></a></p>
<p>Notice the use of <code>##</code> in the above to refer to the function arguments.<br />
If you don't like so much space between the subscripts, use</p>
<pre><code>ClearAll[S]
S = Subscript[s, Row[{##} // Flatten // Sort]] &;
</code></pre>
|
251,028 | <p>I want to make <span class="math-container">$S[\{,\cdots, \}]$</span> as follows</p>
<p>The first input of <span class="math-container">$S$</span> is the given list <span class="math-container">$\{1,2,3,\cdots, n\}$</span>, and it produces <span class="math-container">$s_{123\cdots n}$</span>.</p>
<p>Further, if the list is given in a different order, it should still produce the subscripts in increasing order, i.e.,</p>
<p><span class="math-container">$S[\{1,3,2\}] = s_{123}$</span></p>
<p><span class="math-container">$S[\{1,2,4,3\}] = s_{1234}$</span></p>
<p>and so on.</p>
| Lukas Lang | 36,508 | <p>Using <a href="https://reference.wolfram.com/language/ref/Format.html" rel="noreferrer"><code>Format</code></a>, <a href="https://reference.wolfram.com/language/ref/Interpretation.html" rel="noreferrer"><code>Interpretation</code></a>, and <a href="https://reference.wolfram.com/language/ref/Orderless.html" rel="noreferrer"><code>Orderless</code></a>:</p>
<pre><code>Attributes[S] = {Orderless};
Format[e : S[args___]] := Interpretation[Subscript[s, Row@{args}], e]
S[1, 2, 4, 3]
</code></pre>
<p><a href="https://i.stack.imgur.com/uolqv.png" rel="noreferrer"><img src="https://i.stack.imgur.com/uolqv.png" alt="(* S[1, 2, 3, 4] *)" /></a></p>
<pre><code>S[1, 2, 4, 3] // InputForm
(* S[1, 2, 4, 3] *)
</code></pre>
<p>The use of <code>Interpretation</code> ensures that you can still use the expression when copy-pasting it (see the last example). The <code>Orderless</code> attribute does the sorting automatically, and also makes it so that different orderings are considered equivalent:</p>
<pre><code>S[1, 2, 4, 3] == S[4, 3, 2, 1]
(* True *)
</code></pre>
<p>If you want some spacing between the different subscripts, you can use <a href="https://reference.wolfram.com/language/ref/Indexed.html" rel="noreferrer"><code>Indexed</code></a>:</p>
<pre><code>Format[e : S[args___]] := Interpretation[Indexed[s, {args}], e]
S[1, 2, 4, 3]
</code></pre>
<p><a href="https://i.stack.imgur.com/3tIvU.png" rel="noreferrer"><img src="https://i.stack.imgur.com/3tIvU.png" alt="enter image description here" /></a></p>
<p>If you really need <code>S[{...}]</code> instead of simply <code>S[...]</code>, you can't use <code>Orderless</code>. You'll have to manually sort the arguments, e.g. like this:</p>
<pre><code>S[args_] /; ! OrderedQ@args := S[Sort@args]
Format[e : S[args_]] := Interpretation[Subscript[s, Row@args], e]
S[1, 2, 4, 3]
</code></pre>
<p><a href="https://i.stack.imgur.com/uolqv.png" rel="noreferrer"><img src="https://i.stack.imgur.com/uolqv.png" alt="(* S[1, 2, 3, 4] *)" /></a></p>
<p>The first line automatically sorts the arguments of <code>S</code> if they are not already sorted (using <a href="https://reference.wolfram.com/language/ref/Condition.html" rel="noreferrer"><code>Condition</code></a> (<code>/;</code>) and <a href="https://reference.wolfram.com/language/ref/OrderedQ.html" rel="noreferrer"><code>OrderedQ</code></a>)</p>
|
10,666 | <p>My question is about <a href="http://en.wikipedia.org/wiki/Non-standard_analysis">nonstandard analysis</a>, and the diverse possibilities for the choice of the nonstandard model R*. Although one hears talk of <em>the</em> nonstandard reals R*, there are of course many non-isomorphic possibilities for R*. My question is, what kind of structure theorems are there for the isomorphism types of these models? </p>
<p><b>Background.</b> In nonstandard analysis, one considers the real numbers R, together with whatever structure on the reals is deemed relevant, and constructs a nonstandard version R*, which will have infinitesimal and infinite elements useful for many purposes. In addition, there will be a nonstandard version of whatever structure was placed on the original model. The amazing thing is that there is a <em>Transfer Principle</em>, which states that any first order property about the original structure true in the reals, is also true of the nonstandard reals R* with its structure. In ordinary model-theoretic language, the Transfer Principle is just the assertion that the structure (R,...) is an elementary substructure of the nonstandard reals (R*,...). Let us be generous here, and consider as the standard reals the structure with the reals as the underlying set, and having all possible functions and predicates on R, of every finite arity. (I guess it is also common to consider higher type analogues, where one iterates the power set ω many times, or even ORD many times, but let us leave that alone for now.) </p>
<p>The collection I am interested in is the collection of all possible nontrivial elementary extensions of this structure. Any such extension R* will have the useful infinitesimal and infinite elements that motivate nonstandard analysis. It is an exercise in elementary mathematical logic to find such models R* as ultrapowers or as a consequence of the Compactness theorem in model theory. </p>
<p>Since there will be extensions of any desired cardinality above the continuum, there are many non-isomorphic versions of R*. Even when we consider R* of size continuum, the models arising via ultrapowers will presumably exhibit some saturation properties, whereas it seems we could also construct non-saturated examples. </p>
<p>So my question is: what kind of structure theorems are there for the class of all nonstandard models R*? How many isomorphism types are there for models of size continuum? How much or little of the isomorphism type of a structure is determined by the isomorphism type of the ordered field structure of R*, or even by the order structure of R*? </p>
| Andreas Blass | 6,794 | <p>Under a not unreasonable assumption about cardinal arithmetic, namely $2^{<c}=c$ (which follows from the continuum hypothesis, or Martin's Axiom, or the cardinal characteristic equation t=c), the number of non-isomorphic possibilities for *R of cardinality c is exactly 2^c. To see this, the first step is to deduce, from $2^{<c} = c$, that there is a family X of 2^c functions from R to R such that any two of them agree at strictly fewer than c places. (Proof: Consider the complete binary tree of height (the initial ordinal of cardinality) c. By assumption, it has only c nodes, so label the nodes by real numbers in a one-to-one fashion. Then each of the 2^c paths through the tree determines a function f:c \to R, and any two of these functions agree only at those ordinals $\alpha\in c$ below the level where the associated paths branch apart. Compose with your favorite bijection R\to c and you get the claimed maps g:R \to R.) Now consider any non-standard model *R of R (where, as in the question, R is viewed as a structure with all possible functions and predicates) of cardinality c, and consider any element z in *R. If we apply to z all the functions *g for g in X, we get what appear to be 2^c elements of *R. But *R was assumed to have cardinality only c, so lots of these elements must coincide. That is, we have some (in fact many) g and g' in X such that *g(z) = *g'(z). We arranged X so that, in R, g and g' agree only on a set A of size $<c$, and now we have (by elementarity) that z is in *A. It follows that the 1-type realized by z, i.e., the set of all subsets B of R such that z is in *B, is completely determined by the following information: A and the collection of subsets B of A such that z is in *B. The number of possibilities for A is $c^{<c} = 2^{<c} = c$ by our cardinal arithmetic assumption, and for each A there are only c possibilities for B and therefore only 2^c possibilities for the type of z. 
The same goes for the n-types realized by n-tuples of elements of *R; there are only 2^c n-types for any finite n. (Proof for n-types: Either repeat the preceding argument for n-tuples, or use that the structures have pairing functions so you can reduce n-types to 1-types.) Finally, since any *R of size c is isomorphic to one with universe c, its isomorphism type is determined if we know, for each finite tuple (of which there are c), the type that it realizes (of which there are 2^c), so the number of non-isomorphic models is at most (2^c)^c = 2^c. </p>
<p>To get from "at most" to "exactly" it suffices to observe that (1) every non-principal ultrafilter U on the set N of natural numbers produces a *R of the desired sort as an ultrapower, (2) that two such ultrapowers are isomorphic if and only if the ultrafilters producing them are isomorphic (via a permutation of N), and (3) that there are 2^c non-isomorphic ultrafilters on N. </p>
<p>If we drop the assumption that $2^{<c}=c$, then I don't have a complete answer, but here's some partial information. Let \kappa be the first cardinal with 2^\kappa > c; so we're now considering the situation where \kappa < c. For each element z of any *R as above, let m(z) be the smallest cardinal of any set A of reals with z in *A. The argument above generalizes to show that m(z) is never \kappa and that if m(z) is always < \kappa then we get the same number 2^c of possibilities for *R as above. The difficulty is that m(z) might now be strictly larger than \kappa. In this case, the 1-type realized by z would amount to an ultrafilter U on m(z) > \kappa such that its image, under any map m(z) \to \kappa, concentrates on a set of size < \kappa. Furthermore, U could not be regular (i.e., (\omega,m(z))-regular in the sense defined by Keisler long ago). It is (I believe) known that either of these properties of U implies the existence of inner models with large cardinals (but I don't remember how large). If all this is right, then it would not be possible to prove the consistency, relative to only ZFC, of the existence of more than 2^c non-isomorphic *R's.</p>
<p>Finally, Joel asked about a structure theory for such *R's. Quite generally, without constraining the cardinality of *R to be only c, one can describe such models as direct limits of ultrapowers of R with respect to ultrafilters on R. The embeddings involved in such a direct system are the elementary embeddings given by Rudin-Keisler order relations between the ultrafilters. (For the large cardinal folks here: This is just like what happens in the "ultrapowers" with respect to extenders, except that here we don't have any well-foundedness.) And this last paragraph has nothing particularly to do with R; the analog holds for elementary extensions of any structure of the form (S, all predicates and functions on S) for any set S.</p>
|
4,604,730 | <p>Given a function <span class="math-container">$f$</span> from vectors to scalars, and a vector <span class="math-container">$\vec v$</span>, the directional derivative of <span class="math-container">$f$</span> with respect to <span class="math-container">$\vec v$</span> is defined as <span class="math-container">$\nabla_{\vec v} f = \lim_{h\rightarrow 0} \frac{f(\vec x + h \vec v) - f(\vec x)}{h}$</span>. It can also be computed as <span class="math-container">$\nabla_{\vec v} f = \vec v \cdot \nabla f$</span>. I find it very unintuitive that these two things are equivalent. I've looked at some proofs of this, but they seem to be using concepts that I don't yet understand.</p>
<p>For example, in two dimensions, if <span class="math-container">$\vec v = \pmatrix{1\\1}$</span>, then the directional derivative is <span class="math-container">$\nabla_{\vec v} f = f_x + f_y$</span>. Expanding the definitions, this gives
<span class="math-container">$$\lim_{h \rightarrow 0} \frac{f(x+h, y+h) - f(x,y)}{h} = \lim_{h \rightarrow 0} \frac{f(x+h, y) - f(x,y)}{h} + \lim_{h \rightarrow 0} \frac{f(x, y+h) - f(x,y)}{h}$$</span></p>
<p>This seems to imply that, for small values of <span class="math-container">$h \neq 0$</span>, we can make the approximation
<span class="math-container">$$\frac{f(x+h, y+h) - f(x,y)}{h} \approx \frac{f(x+h, y) - f(x,y)}{h} + \frac{f(x, y+h) - f(x,y)}{h}$$</span>
which simplifies to <span class="math-container">$$f(x+h,y+h) \approx f(x+h,y) + f(x,y+h) - f(x,y)$$</span></p>
<p>Why can we make this approximation? I can see that this approximation is exact when <span class="math-container">$f$</span> is a linear function. But for general <span class="math-container">$f$</span>, why can we get any information about <span class="math-container">$f(\_,\_)$</span> from values of <span class="math-container">$f(x,\_)$</span> and <span class="math-container">$f(\_,y)$</span>? Knowing the values of <span class="math-container">$f$</span> for some inputs gives us no information about the values of <span class="math-container">$f$</span> for other inputs, right?</p>
| P. Lawrence | 545,558 | <p>Take two distinct points <span class="math-container">$P_0$</span> and <span class="math-container">$P_1$</span> in <span class="math-container">$\mathbb{R^n}$</span>.We'll assume that <span class="math-container">$f:B \to \mathbb R$</span> where <span class="math-container">$B$</span> is an open ball in <span class="math-container">$\mathbb{R^n}$</span> centred at <span class="math-container">$P_0$</span>, that <span class="math-container">$f$</span> and all its partials are continuously differentiable in <span class="math-container">$B$</span> and that all the points we talk about are in <span class="math-container">$B$</span>. In <span class="math-container">$\mathbb{R^2}$</span>, we can write <span class="math-container">$P_0=(x_0,y_0),P_1=(x_1,y_1)$</span>. Let <span class="math-container">$\mathbf v$</span> be the vector from <span class="math-container">$P_0$</span>to <span class="math-container">$P_1$</span> so <span class="math-container">$\mathbf v=[x_1-x_0,y_1-y_0.]$</span> The line joining <span class="math-container">$P_0$</span> and <span class="math-container">$P_1$</span> has parametric equation <span class="math-container">$(x,y)= P_0+t\mathbf v$</span>, i.e.<span class="math-container">$$x=x_0+t(x_1-x_0),y=y_0+t(y_1-y_0).$$</span> Let <span class="math-container">$g(t)=f(P_0+t \mathbf v)$</span> i.e.<span class="math-container">$$g(t)=f(x_0+t(x_1-x_0),y_0+t(y_1-y_0))$$</span> Then <span class="math-container">$g^{\prime}(t)$</span> is the directional derivative of <span class="math-container">$f$</span> in the direction <span class="math-container">$\mathbf v$</span> and the chain rule gives <span class="math-container">$$g^{\prime}(t)=f_1(x_0+t(x_1-x_0),y_0+t(y_1-y_0))(x_1-x_0)+f_2(x_0+t(x_1-x_0),y_0+t(y_1-y_0))(y_1-y_0)$$</span> i.e. <span class="math-container">$$g^{\prime}(t)=\mathbf v \bullet\nabla f(P_0+t\mathbf v)$$</span>. 
Note that the mean-value theorem gives <span class="math-container">$$g(1)-g(0)=g^{\prime}(t) \text { for some } 0<t<1$$</span> i.e. <span class="math-container">$$f(P_1)-f(P_0)=\mathbf v \bullet \nabla f(P_*)$$</span> where <span class="math-container">$P_*$</span> is on the line joining the points <span class="math-container">$P_0$</span> and <span class="math-container">$P_1$</span> and between them. All the above works the same for any <span class="math-container">$\mathbb{R}^n$</span></p>
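<p>A small numeric illustration (my own example function, not from the answer): the limit definition, evaluated with a small <span class="math-container">$h$</span>, agrees with <span class="math-container">$\mathbf v \bullet \nabla f$</span> to first order:</p>

```python
# Compare the limit definition of the directional derivative with v . grad f
# for f(x, y) = x^2 * y + sin(y).
import math

def f(x, y):
    return x * x * y + math.sin(y)

def grad_f(x, y):
    return (2 * x * y, x * x + math.cos(y))  # exact partial derivatives

x0, y0 = 1.3, 0.7
v = (0.6, 0.8)  # a unit vector: 0.36 + 0.64 = 1

h = 1e-6
limit_def = (f(x0 + h * v[0], y0 + h * v[1]) - f(x0, y0)) / h
gx, gy = grad_f(x0, y0)
dot_form = v[0] * gx + v[1] * gy

assert abs(limit_def - dot_form) < 1e-4  # agreement up to O(h)
```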
|
1,290,111 | <p>How can one prove the following statement:</p>
<p>$k(n-1)<n^2-2n$ for all odd $n$ and $k<n$</p>
<p><em>Tried so far</em>: induction on $n$, graphing, and rewriting $n^2−2n$ as $(n−1)^2−1$.</p>
| Alufat | 198,706 | <p>Look, you should do something like this:</p>
<p>\begin{align} &k(n-1) < n^2-2n \quad\text{(rewrite $n^2-2n$ as $(n-1)^2-1$)}\\
\iff &k(n-1) < (n-1)^2-1 \quad\text{(subtract $k(n-1)$ from both sides)}\\
\iff &0 < (n-1-k)(n-1)-1 \\
\iff &1 < (n-1-k)(n-1) \quad\text{(divide both sides by $(n-1)$)} \\
\iff &(n-1)^{-1} < (n-1-k)
\end{align}</p>
<p>Since a counterexample to $n=1$ was already given in the comments, let's take $n\geqslant 3$ (just to have $\frac{1}{n-1} < 1$). Note that $\frac{1}{(n-1)}$ is a number in $(0,1)$, while $n-1-k$ is an integer, so the inequality is true iff $n-1-k>0$, that is, when $k<n-1$. So you should restate your claim:</p>
<blockquote>
<p>$k(n-1) < n^2-2n$ for all $n\geqslant 3$ and $k<n-1$.</p>
</blockquote>
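<p>A brute-force check of the restated claim (a quick supplement, not part of the argument above):</p>

```python
# Check k(n-1) < n^2 - 2n for n >= 3 and 0 <= k < n - 1,
# and failure at k = n - 1 (where the two sides differ by exactly 1 the other way).
for n in range(3, 200):
    for k in range(0, n - 1):                    # k < n - 1
        assert k * (n - 1) < n * n - 2 * n
    assert (n - 1) * (n - 1) >= n * n - 2 * n    # k = n - 1 violates the inequality
```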
|
1,988,517 | <p>$$\binom{1}{0},\binom{1}{1},\binom{1}{2}$$
What does this mean, and how do I achieve an numerical value when trying to solve a proof or problem in this form? </p>
| Davis Yoshida | 113,908 | <p>The definition of $\binom{n}{k}$ is $\frac{n!}{k!(n-k)!}$. To use these numerically you can simply compute those values. Notice that there's some cancellation between the numerator and denominator, so you could instead compute:</p>
<p>$\frac{n(n-1)(n-2)...(n-k + 1)}{k!}$ or $\frac{n(n-1)(n-2)...(k + 1)}{(n-k)!}$</p>
|
1,947,082 | <p>Let us have the following sum (I'm not sure what this is properly called in English): $$X(n) = \frac{1}{2} + \frac{3}{4}+\frac{5}{8}+...+\frac{2n-1}{2^n}$$</p>
<p>We need</p>
<p>$$\lim_{n \to\infty }X(n)$$</p>
<p>I have a solution, but I was unable to find the right answer or a solution on the internet.</p>
<p>My idea:</p>
<p>This can be represented as $$ \frac{1}{2} + \frac{1}{4} + \frac{2}{4} + \frac{3}{8}+\frac{2}{8}+\frac{5}{16} + \frac{2}{16} ... + ...$$</p>
<p>Which is basically </p>
<p>$\frac{1}{2}$ + a geometric progression $\left(\frac{2}{4}+\frac{2}{8}+\frac{2}{16}+\cdots=1\right)$ + our initial sum divided by 2.</p>
<p>And then I thought: hey, so I can figure out one part of this sum, and second is twice smaller, and then it forms a cycle! (I suppose).</p>
<p>So it would be $B_1 = 1/2 + b_1/(1-1/2) = 3/2$
$$\lim_{n \to\infty }X(n) = B_1/(1-1/2) = 3$$</p>
<p>Is this correct?</p>
| DonAntonio | 31,254 | <p>$$\frac12+\frac34+\ldots+\frac{2n-1}{2^n}=\left(1-\frac12\right)+\left(1-\frac14\right)+\left(\frac34-\frac18\right)+\ldots\left(\frac n{2^{n-1}}-\frac1{2^n}\right)=$$</p>
<p>$$1+1+\frac34+\ldots+\frac n{2^{n-1}}-\frac12-\frac14-\ldots-\frac1{2^n}\xrightarrow[n\to\infty]{}$$</p>
<p>$$\to\sum_{k=1}^\infty\frac k{2^{k-1}}-\sum_{k=1}^\infty\frac1{2^k}$$</p>
<p>Taking into account that for $\;|x|<1\;$ we have</p>
<p>$$\frac1{1-x}=\sum_{n=0}^\infty x^n\implies\frac1{(1-x)^2}=\sum_{n=1}^\infty nx^{n-1}$$</p>
<p>Substitute $\;x=\cfrac12\;$ above , one in each sum, and get:</p>
<p>$$\sum_{n=1}^\infty\frac n{2^{n-1}}-\sum_{n=1}^\infty\frac1{2^n}=\frac1{\left(1-\frac12\right)^2}-\left(\frac1{1-\frac12}-1\right)=4-(2-1)=3$$</p>
<p>and your answer is correct.</p>
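<p>A quick numerical check of the limit (Python):</p>

```python
# Partial sums X(n) = sum_{k=1}^{n} (2k-1)/2^k should approach 3.
def X(n):
    return sum((2 * k - 1) / 2**k for k in range(1, n + 1))

assert abs(X(50) - 3) < 1e-12
print(X(10), X(50))
```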
|
1,816,807 | <blockquote>
<p>If $x,y,z\in \mathbb{R}$ and $x+y+z=4$ and $x^2+y^2+z^2=6\;,$ Then range of $xyz$</p>
</blockquote>
<p>$\bf{My\; Try::}$Using $$(x+y+z)^2=x^2+y^2+z^2+2(xy+yz+zx)$$</p>
<p>So we get $$16=6+2(xy+yz+zx)\Rightarrow xy+yz+zx = -5$$ and given $x+y+z=4$</p>
<p>Now let $xyz=c\;,$ and let $t=x,y,z$ be the roots of a cubic equation. Then</p>
<p>$$\displaystyle (t-x)(t-y)(t-z)=0\Rightarrow t^3-(x+y+z)t^2+(xy+yz+zx)t-xyz = 0$$</p>
<p>So we get $\displaystyle t^3-4t^2-5t-c=0$</p>
<p>Now let $f(t)=t^3-4t^2-5t-c\;,$ Then $f'(t)=3t^2-8t-5$</p>
<p>and $f''(t)=6t-8.$ Now for max. and Min.$f'(t)=0\Rightarrow 3t^2-8t-5=0$</p>
<p>So we get $\displaystyle t=\frac{8\pm \sqrt{64+60}}{2\cdot 3}=$</p>
<p>Now How can I solve it after that, Help required, Thanks</p>
| colormegone | 71,645 | <p>I thought it might be worth showing the Lagrange-multiplier method applied to this problem, largely for illustrating how the system of "Lagrange equations" can be handled, and for showing the interesting character of the solution.</p>
<p>With the constraints $ \ x^2 \ + \ y^2 \ + \ z^2 \ = \ 6 \ $ and $ \ x \ + \ y \ + \ z \ = \ 4 \ $ on the function $ \ f (x, \ y, \ z ) \ = \ xyz \ $ , the equations using two "multipliers" are</p>
<p>$$ yz \ = \ \lambda \cdot 2x \ + \ \mu \cdot 1 \ \ , \ \ xz \ = \ \lambda \cdot 2y \ + \ \mu \cdot 1 \ \ , \ \ xy \ = \ \lambda \cdot 2z \ + \ \mu \cdot 1 \ \ , $$</p>
<p>permitting us to write</p>
<p>$$ \mu \ = \ yz \ - \ 2 \lambda x \ = \ xz \ - \ 2 \lambda y \ = \ xy \ - \ 2 \lambda z \ \ . $$</p>
<p>Re-arranging the first implied equation produces</p>
<p>$$ \ (y \ - \ x) \ z \ + \ (y \ - \ x ) \ 2 \lambda \ = \ 0 \ \ \Rightarrow \ \ (y \ - \ x) \ ( z \ + \ 2 \lambda) \ = \ 0 \ \ ; $$</p>
<p>the other equated pairs of terms give us similar relations.</p>
<p>One solution then is to use $ \ x \ = \ y \ \ , \ \ x \ = \ z \ \ , $ and $ \ y \ = \ z \ $ in turn to obtain from the constraint equations</p>
<p>$$ x \ + \ x \ + \ z \ = \ 4 \ \ \Rightarrow \ \ x^2 \ + \ x^2 \ + \ ( 4 \ - \ 2x)^2 \ = \ 6x^2 \ - \ 16x \ + \ 16 \ = \ 6 $$
$$ 3x^2 \ - \ 8x \ + \ 5 \ = \ 0 \ \ \Rightarrow \ \ x \ = \ \frac{4 \ \pm \ 1}{3} \ = \ 1 \ \ , \ \ \frac{5}{3} \ \ , $$</p>
<p>as already described in the comments for the posted question, which the corrected cubic equation of OP would yield. So we find three ordered triples, $ ( 1, \ 1, \ 2) \ , \ ( 1, \ 2, \ 1) \ , $ and $ \ ( 2, \ 1, \ 1) \ $ which give the same value of $ \ 2 \ $ for $ \ xyz \ $ and another three,
$ ( \frac{5}{3}, \ \frac{5}{3}, \ \frac{2}{3}) \ , \ ( \frac{5}{3}, \ \frac{2}{3}, \ \frac{5}{3}) \ , $ and $ \ ( \frac{2}{3}, \ \frac{5}{3}, \ \frac{5}{3}) \ $ , for all of which $ \ xyz \ = \ \frac{50}{27} \ $ .</p>
<p>The alternative of using $ \ x \ \ne \ y \ , \ z \ = \ - 2 \lambda \ $ (and analogously for the other combinations of the three variables) produces the equation $ \ yz \ - \ 2 \cdot \left( -\frac{z}{2} \right) \cdot x \ = \ xz \ - \ 2 \cdot \left( -\frac{z}{2} \right) \cdot y \ $ $ \Rightarrow \ \ yz \ + \ xz \ = \ xz \ + \ yz \ $ , so no further information is gained. We conclude that we have found the extremal values for the functions already and that our function has the (constrained) range $$ \ \frac{50}{27} \ \le \ f (x, \ y, \ z ) \ \le \ 2 \ \ . $$</p>
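<p>The six critical points and the claimed bounds are easy to verify with exact arithmetic (a Python sketch using the tuples found above):</p>

```python
from fractions import Fraction as F

maxima = [(1, 1, 2), (1, 2, 1), (2, 1, 1)]
minima = [(F(5, 3), F(5, 3), F(2, 3)), (F(5, 3), F(2, 3), F(5, 3)), (F(2, 3), F(5, 3), F(5, 3))]

for x, y, z in maxima + minima:
    assert x + y + z == 4                  # plane constraint
    assert x * x + y * y + z * z == 6      # sphere constraint
assert all(x * y * z == 2 for x, y, z in maxima)
assert all(x * y * z == F(50, 27) for x, y, z in minima)
```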
<p>The graph below shows the geometrical interpretation of a sphere intersected by a tilted plane, so that we are seeking extremal values of the function on a circle. Since the function has symmetry about the line $ \ x \ = \ y \ = \ z \ $ , it may be expected that the maxima and minima number three each and are arranged symmetrically around the "constraint circle", the center of which lies at $ \ ( \frac{4}{3}, \ \frac{4}{3}, \ \frac{4}{3}) \ $ , which is connected to the appearance of $ \ \frac{4}{3} \ $ in the solution calculation using the quadratic formula in the original post.</p>
<p><a href="https://i.stack.imgur.com/gyvJO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gyvJO.png" alt="[pic to appear shortly]"></a></p>
<p><em>The yellow "vertical" lines emerge at the positions of the maximal values for the function, while the red lines mark the locations of the minimal values.</em></p>
|
820,015 | <p>Suppose that you have $k$ dice, each with $N$ sides, where $k\geq N$. A straight occurs when all $k$ dice are rolled and there is at least one die revealing each number from $1$ to $N$. </p>
<p>Given the pair $(k,N)$, what is the probability that any particular roll will give a straight?</p>
| sayantankhan | 47,812 | <p>I'm assuming that you're familiar with the inclusion-exclusion principle. In this particular problem, you need to determine the probability of a straight, or in other words, the complement of the event that at least $1$ number does not appear in the throws. Let that probability be $P$. Also, let $p(j)$ be the probability that you're excluding at least $j$ numbers from the throw. Then
$$p(1)={N \choose 1}\left(\frac{N-1}{N}\right)^k$$
$$p(2)={N \choose 2}\left(\frac{N-2}{N}\right)^k$$
and so on.
Then $$P=1-p(1)+p(2)- \cdots+(-1)^Np(N)$$
Which is exactly what you wanted.</p>
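<p>The alternating sum is short to spell out in code; as a sanity check, for $k=N$ a straight forces every face to appear exactly once, so $P=N!/N^k$ (a Python sketch; <code>p_straight</code> is my own name):</p>

```python
from math import comb, factorial

def p_straight(k, N):
    # inclusion-exclusion: P = sum_{j=0}^{N} (-1)^j C(N,j) ((N-j)/N)^k
    return sum((-1)**j * comb(N, j) * ((N - j) / N)**k for j in range(N + 1))

assert abs(p_straight(6, 6) - factorial(6) / 6**6) < 1e-12
print(p_straight(8, 6))   # probability of a straight with 8 six-sided dice
```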
|
820,015 | <p>Suppose that you have $k$ dice, each with $N$ sides, where $k\geq N$. A straight occurs when all $k$ dice are rolled and there is at least one die revealing each number from $1$ to $N$. </p>
<p>Given the pair $(k,N)$, what is the probability that any particular roll will give a straight?</p>
| Tony | 155,506 | <p>What about simply $\dfrac{^k\text{P}_n(k-n)!}{n^k}$, since $k \ge n$. If $k=n$ then we simply get $\dfrac{n!}{n^k}$?</p>
<p>Since there are only $n$ sides to the dice, if $k=n$ we are asking for the total number of permutations of $n$ different values, which is $^n\text{P}_n=n!$, divided by all the possible outcomes, $n^k$. If $k>n$ we just replace $^n\text{P}_n$ with $^k\text{P}_n$, assuming some values beyond the $n$ different possible sides are included, thus multiplying the total possible outcomes with at least a straight.</p>
|
2,130,699 | <p>I have the coordinates of potentially any points within a 3D arc, representing the path an object will take when launched through the air. X is forward, Z is up, if that is relevant.
Using any of these points, I need to create a bezier curve that follows this arc. The curve requires the tangent values for the very start and end of the curve.
I can't figure out the formula needed to convert this arc into a bezier curve, and I'm hoping someone can help.
I've found plenty of resources discussing circular arcs, but the arc here is not circular.</p>
| Mariano Suárez-Álvarez | 274 | <p>The ring $\mathbb R[X]/(X^2)$ is a real algebra with zero divisor and just trivial idempotents.</p>
|
72,854 | <p>Hi everybody,</p>
<p>Does there exist an explicit formula for the Stirling Numbers of the First Kind which are given by the formula
$$
x(x-1)\cdots (x-n+1) = \sum_{k=0}^n s(n,k)x^k.
$$</p>
<p>Otherwise, what is the computationally fastest formula one knows?</p>
| Igor Rivin | 11,142 | <p>Since the Stirling numbers are the coefficients of a polynomial of degree $n$ which is already factored, it can be evaluated at the roots of unity in $O(n\log n)$ multiplications. Then, by Fourier transform, the coefficients can be found in another $O(n\log n)$ multiplications, of roughly $O( n)$ bit numbers. This will find an entire row of the Stirling triangle in time $O(n^2 \log^k n),$ or $O(n \log^k n)$ time per Stirling number. The exponent $k$ is something like $2+\epsilon.$</p>
<p><strong>REMARK</strong> The recurrence approach takes $O(n^2)$ arithmetic operations, or $O(n^3)$ bit operations to generate either one, or all of the Stirling numbers, so if the goal is to generate all of them up to a certain size, the simple approach is better. However, if one needs either a single number or a row, the approach I give is considerably faster.</p>
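<p>The $O(n^2)$-arithmetic baseline from the remark fits in a few lines of Python: expand $x(x-1)\cdots(x-n+1)$ one linear factor at a time (signed Stirling numbers, lowest degree first; the function name is my own):</p>

```python
def stirling_first_row(n):
    # coefficients of x(x-1)...(x-n+1), so coeffs[k] = s(n, k)
    coeffs = [1]
    for m in range(n):                 # multiply the polynomial by (x - m)
        new = [0] * (len(coeffs) + 1)
        for i, c in enumerate(coeffs):
            new[i + 1] += c            # contribution of x * coeffs
            new[i] -= m * c            # contribution of -m * coeffs
        coeffs = new
    return coeffs

# x(x-1)(x-2) = x^3 - 3x^2 + 2x
assert stirling_first_row(3) == [0, 2, -3, 1]
```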
|
1,733,970 | <p>I am watching continuous probabilities lectures (having almost no calc background) and I cannot understand solution for the following exercise:</p>
<p>Let $X$ be a continuous random variable with a PDF of the form:
$f_X(x) = c(1-x),$ if $x \in [0,1]$ and $f_X(x) = 0 $ otherwise.</p>
<p>Find the following values:</p>
<p>1) c</p>
<p>Answer:</p>
<p>$$
\color{blue}{1 = \int_{-\infty}^\infty f_X(x) dx = \int_0^1 c(1-x)dx = c(x-x^2/2)\Big|_0^1 = c/2, \text{ and therefore, } c = 2.}
$$</p>
<p>Can someone help me understand what is going on, as a person with no calc background?</p>
<p>I understand that we are considering the interval from 0 to 1 because this is how our PDF is defined (it has zero probability outside of this interval).
I do not understand what we do next... the process of integration and how we get c.</p>
<p>Thanks for help!</p>
| browngreen | 321,445 | <p>Before getting into the calculus details, the idea behind this problem is that the integral of a PDF across its domain has to equal one. This is because the sum of the probabilities of all possible outcomes for a random variable has to be one. That being said, we set the integral equal to one in order to solve for c.</p>
<p>As far as the calculus goes, the problem is using the most basic of integration rules, including the power rule ($\int x^n={x^{n+1}\over n+1}$), the sum rule ($\int f(x)+g(x)=\int f(x)+\int g(x)$), and the constant coefficient rule ($\int kf(x)=k\int f(x)$).</p>
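<p>The same conclusion can be reached numerically without any calculus machinery: approximate the integral for $c=1$ and rescale (a pure-Python sketch using a midpoint Riemann sum):</p>

```python
# Approximate I = integral of (1 - x) over [0, 1] with a midpoint sum,
# then c = 1 / I so that the PDF integrates to one.
N = 100_000
I = sum((1 - (i + 0.5) / N) / N for i in range(N))
c = 1 / I
assert abs(c - 2) < 1e-9
print(c)
```

<p>The midpoint rule happens to be exact for a linear integrand, so this recovers $c=2$ up to floating-point rounding.</p>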
|
1,733,970 | <p>I am watching continuous probabilities lectures (having almost no calc background) and I cannot understand solution for the following exercise:</p>
<p>Let $X$ be a continuous random variable with a PDF of the form:
$f_X(x) = c(1-x),$ if $x \in [0,1]$ and $f_X(x) = 0 $ otherwise.</p>
<p>Find the following values:</p>
<p>1) c</p>
<p>Answer:</p>
<p>$$
\color{blue}{1 = \int_{-\infty}^\infty f_X(x) dx = \int_0^1 c(1-x)dx = c(x-x^2/2)\Big|_0^1 = c/2, \text{ and therefore, } c = 2.}
$$</p>
<p>Can someone help me understand what is going on, as a person with no calc background?</p>
<p>I understand that we are considering the interval from 0 to 1 because this is how our PDF is defined (it has zero probability outside of this interval).
I do not understand what we do next... the process of integration and how we get c.</p>
<p>Thanks for help!</p>
| John | 7,163 | <p>If you don't have a calculus background, then this will (rightfully) seem like black magic.</p>
<p>The statement</p>
<p>$$\int_{-\infty}^{\infty} f_X(x)dx = 1$$</p>
<p>in essence tells you "the sum of all probabilities is $1$", but in a continuous way.</p>
<p>From the problem, you're told that the valid values of $x$ are between zero and one, so everything outside of that doesn't matter:</p>
<p>$$\int_0^1 f_X(x)dx = 1$$</p>
<p>From there, we substitute in the PDF:</p>
<p>$$\int_0^1 c(1-x) dx = 1$$</p>
<p>Now this is where calculus comes in. The definite integral of a constant times some power of $x$, with respect to $x$, integrated from $a$ to $b$, is</p>
<p>$$\int_a^b nx^m dx = \frac{n}{m+1}\left[b^{m+1} - a^{m+1}\right].$$</p>
<p>This would be straightforward to someone who has had calculus and it's something you'll need to become familiar with to understand what's going on.</p>
<p>So from here we can apply this formula to your problem:</p>
<p>$$\int_0^1 c(1-x) dx = c(x - \frac{x^2}{2})|_0^1 = c(1 - \frac{1}{2}) = c/2 = 1.$$</p>
<p>The last "$=1$" is the property of the PDF; the rest of the stuff leading up to that is calculus.</p>
|
2,492,107 | <p>The question asks $\text{span}(A1,A2)$</p>
<p>$$A1 =\begin{bmatrix}1&2\\0&1\end{bmatrix}$$
$$A2 = \begin{bmatrix}0&1\\2&1\end{bmatrix}$$</p>
<p>I began by calculating $c_1[A1] + c_2[A2]$ then converting it into a matrix and row reducing. I found the restrictions where the stuff after the augment must = 0 then plugged those back into \begin{bmatrix}w&x\\y&z\end{bmatrix} and got the wrong answer. Could anyone please review my work and explain my mistake or the information I am missing? Thank you.</p>
<p>The solution I worked out (wrong):
<a href="https://i.stack.imgur.com/nUdeq.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nUdeq.jpg" alt="enter image description here"></a></p>
| Guy Fsone | 385,707 | <p>Set $$\color{blue}{a_k= \frac{1}{k^2 -k +1} \implies a_{k+1}= \frac{1}{(k+1)^2 -k } =\frac{1}{k^2+ k +1}} $$</p>
<p>But since $ (k^2 -k +1)(k^2 +k +1) =k^4 +k^2 +1$ we have,</p>
<p>$$ a_{k+1} -a_k = \frac{1}{k^2 +k +1}- \frac{1}{k^2 -k +1} = \frac{-2k}{(k^2 -k +1)(k^2 +k +1)} =\frac{-2k}{k^4 +k^2 +1}$$</p>
<p>Then </p>
<p>$$a_0-a_n = -\sum_{k=0}^{n-1}(a_{k+1} -a_k)=2\sum_{k=0}^{n-1}\frac{k}{k^4 +k^2 +1} $$</p>
<p>Therefore, </p>
<p>$$\frac{1}{1+1^2+1^4} + \frac{2}{1+2^2+2^4} + \frac{3}{1+3^2+3^4} +....+\frac{24}{1+24^2+24^4} =\frac{1}{2}(a_{0}-a_{25}) \\= \color{blue}{\frac{1}{2}\left(1-\frac{1}{25^2 -25+1}\right)}$$</p>
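<p>An exact check of the telescoping with Python's <code>fractions</code> (the sum of positive terms must come out positive):</p>

```python
from fractions import Fraction as F

s = sum(F(k, k**4 + k**2 + 1) for k in range(1, 25))
assert s == F(1, 2) * (1 - F(1, 25**2 - 25 + 1))   # = 300/601
print(s)   # 300/601
```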
|
465,001 | <p>$\mathbf{h}_i\in\mathbb{C}^{M}$ are column vectors $\forall i=\{1, 2, \cdots, K\}$.</p>
<p>$q_i\in\mathbb{R}_+$ are scalars $\forall i=\{1, 2, \cdots, K\}$</p>
<p>$\lvert\bullet\rvert$ denotes determinant of a square matrix or Euclidean norm of a vector according to the context. </p>
<p>From Sylvester's theorem, it's trivial to show that $\lvert\mathbf{I}_M+\mathbf{h}_1q_1\mathbf{h}_1^{\text{H}}\rvert=1+q_1\left|\mathbf{h}_1\right|^2$. Is it possible to extend the theorem to simplify $\lvert\mathbf{I}_M+\sum_{i=1}^K\mathbf{h}_iq_i\mathbf{h}_i^{\text{H}}\rvert$?</p>
<p>P. S. It's part of a problem involving multiple access channels and I am not sure if a definite solution exists. </p>
| Community | -1 | <p>Recall that
$$\lim_{x\to a}f(x)=\ell\in\overline{\mathbb R}\iff \forall(x_n)\to a,\; f(x_n)\to\ell$$
so take
$$x_n=\frac{1}{n\pi}$$
to show that the limit doesn't exist.</p>
|
1,250,755 | <p>I'm sure there's something I'm missing here; probably a naive confusion of mathematics with metamathematics. Regardless, I've come up with what looks to me like a proof that (first-order) ZF+AD is inconsistent:</p>
<p>Assume otherwise. Then it cannot be shown that ZF+AD is consistent relative to ZF. However, ZFC is consistent relative to ZF, and as ZF+AD is consistent (by our assumption for contradiction,) a model of ZF+AD exists in ZFC by Gödel's completeness theorem. Hence ZF+AD is consistent relative to ZFC. This establishes that ZF+AD is consistent relative to ZF, a contradiction.</p>
<p>I'd appreciate anyone pointing out the problematic step(s).</p>
<p>EDIT: Upon further thought, under the (misguided) assumption that this argument holds, an adaptation of this argument can be used to show that every first-order countable theory has a countable model in ZF: ZFC has a model in ZF (I think this was shown by Cohen via forcing?) and such a theory has a model in ZFC by the completeness theorem. "Composing" these models gives us a model of the theory in ZF.</p>
| Asaf Karagila | 622 | <p>All of this is problematic. You claim that the theory is inconsistent. Okay. So by contradiction it is not inconsistent, therefore the working assumption is that the theory is consistent. </p>
<p>But a theory being consistent does not mean consistent relative to $\sf ZF$. it means consistent. Moreover the fact there is a model of $T$ in a universe of $T'$ is not a contradiction. It's a set model, it's a toy model. It's not saying what is true or false in the universe. Finally, you conclude that there is a model which is a contradiction, but the working assumption towards contradiction was that there is a model in the first place! </p>
<p>For the second part, forcing cannot increase the consistency strength. This means that it cannot generate models out of nowhere without additional assumptions. In particular forcing requires us to assume the consistency of $\sf ZFC$, rather than proving it. And since we already assume there are models of $\sf ZFC$ nothing wrong in using them. </p>
<p>I'd just like to add that the completeness theorem does require some choice to be proved, but for countable theories it can be avoided. </p>
|
1,250,755 | <p>I'm sure there's something I'm missing here; probably a naive confusion of mathematics with metamathematics. Regardless, I've come up with what looks to me like a proof that (first-order) ZF+AD is inconsistent:</p>
<p>Assume otherwise. Then it cannot be shown that ZF+AD is consistent relative to ZF. However, ZFC is consistent relative to ZF, and as ZF+AD is consistent (by our assumption for contradiction,) a model of ZF+AD exists in ZFC by Gödel's completeness theorem. Hence ZF+AD is consistent relative to ZFC. This establishes that ZF+AD is consistent relative to ZF, a contradiction.</p>
<p>I'd appreciate anyone pointing out the problematic step(s).</p>
<p>EDIT: Upon further thought, under the (misguided) assumption that this argument holds, an adaptation of this argument can be used to show that every first-order countable theory has a countable model in ZF: ZFC has a model in ZF (I think this was shown by Cohen via forcing?) and such a theory has a model in ZFC by the completeness theorem. "Composing" these models gives us a model of the theory in ZF.</p>
| Burak | 137,499 | <p>In the second line of your second paragraph, you assume the statement $Con(ZF+AD)$ and then get a set model of $ZF+AD$ by Gödel's completeness theorem (in $ZFC$ or $ZF$, as our meta theory). Then, you claim that $ZF+AD$ is consistent relative to $ZFC$. Why would the statement $Con(ZFC) \rightarrow Con(ZF+AD)$ follow? Your assumption to get the model of $ZF+AD$ was the statement $Con(ZF+AD)$ itself and it does not follow from $Con(ZFC)$ alone.</p>
|
298,029 | <p>How can we convert the sin function into a continued fraction?</p>
<p>for example </p>
<p><a href="http://mathworld.wolfram.com/EulersContinuedFraction.html">http://mathworld.wolfram.com/EulersContinuedFraction.html</a></p>
<p>how can we convert sin to a similar continued fraction? </p>
<p>and what about sinh and cosh? arcsin? arctan? cos? arccos? </p>
<p>in general, how can we convert any function to a continued fraction? </p>
<p>my friend asked me this question, so I hope you can help me so that I can help him</p>
<p>thanks to all of you </p>
| Mahkoe | 124,475 | <p>I learned how to do this from this document (look for Theorem I)
<a href="https://people.math.osu.edu/sinnott.1/ReadingClassics/continuedfractions.pdf" rel="nofollow">https://people.math.osu.edu/sinnott.1/ReadingClassics/continuedfractions.pdf</a></p>
<p>Interestingly, this looks to be a translation of Euler's original work. I find him easy to understand, he avoids confusing his message in clumsy notation.</p>
|
1,878,806 | <p>I am a graduate school freshman.</p>
<p>I did not take a probability lecture.</p>
<p>So I don't have anything about Probability.</p>
<p>Could you suggest Probability book No matter What book level?</p>
| Community | -1 | <p>Art of Problem Solving's Intermediate Counting and Probability is a book for students with solid basics. If you would like something easier, Introduction to Counting and Probability(also by AoPS) is perfect for beginners, no matter what age. AoPS textbooks are written by the nation's best mathmeticians and contains thourough explanations, well written problems, and a complete solution manual. </p>
|
848,415 | <p>If the limit of one sequence $\{a_n\}$ is zero and the limit of another sequence $\{b_n\}$ is also zero does that mean that $\displaystyle\lim_{n\to\infty}(a_n/b_n) = 1$?</p>
| Wonder | 27,958 | <p>Given $b_n$ whose limit is 0, we can choose $a_n$ to make the limit:</p>
<ul>
<li><p>1: set $a_n = b_n$</p></li>
<li><p>0: set $a_n = b_n^2$, or even $a_n = 0$</p></li>
<li><p>Some other constant c: set $a_n = cb_n$</p></li>
<li><p>Undefined: $a_n = (-1)^nb_n$</p></li>
<li><p>Infinite: $a_n = \sqrt{|b_n|}$</p></li>
</ul>
<p>So we can make it basically anything.</p>
|
3,537,226 | <p>A deck contains six cards, one pair labelled '1', another pair labelled '2' and the last labelled '3'. The deck is shuffled and you draw a pair of cards at a time until there are no cards left. A pair of cards <span class="math-container">$(i,j)$</span> is called acceptable if <span class="math-container">$|i-j|\leq1$</span>. What is the probability you have drawn only acceptable pairs? How does your answer change if there are <span class="math-container">$n$</span> pairs and the condition becomes <span class="math-container">$|i-j|\leq k$</span>?</p>
<p>I'm quite stuck on the last bit of the problem. Here's my approach to the first part:</p>
<p>My solution so far:
My idea is that as long as a pair <span class="math-container">$(1,3)$</span> or <span class="math-container">$(3,1)$</span> is drawn, then the set contains an unacceptable pair. The probability <span class="math-container">$(1,3)$</span> or <span class="math-container">$(3,1)$</span> is drawn first is
<span class="math-container">$$2\left(\frac26\times\frac25\right)=\frac4{15},$$</span></p>
<p>the probability <span class="math-container">$(1,3)$</span> or <span class="math-container">$(3,1)$</span> is drawn second is
<span class="math-container">$$\left(1-\frac4{15}\right)\frac4{15}=\frac{44}{225},$$</span></p>
<p>and the probability <span class="math-container">$(1,3)$</span> or <span class="math-container">$(3,1)$</span> is drawn last is
<span class="math-container">$$\left(1-\frac4{15}-\frac{44}{225}\right)\frac4{15}=\frac{484}{3375},$$</span></p>
<p>so the probability of having only acceptable pairs is
<span class="math-container">$$1-\frac4{15}-\frac{44}{225}-\frac{484}{3375}=1-\frac{900}{3375}-\frac{660}{3375}-\frac{484}{3375}=\frac{1331}{3375}.$$</span></p>
<p>(Correct me if this is wrong please!)</p>
<p>However, I am unsure of how to extend this to a more general number of cards and relaxed constraint. Taking the complement seems to be less efficient compared to finding the actual probability, but I'm not sure if there's a closed form. Qualitatively, all I can see is the probability dropping to zero as the number of cards increases. Could someone provide better insight? Cheers!</p>
| R. Burton | 614,269 | <p>Not a complete answer, but it might help.</p>
<hr>
<p>Let <span class="math-container">$[n]=\{1,2,\ldots,n\}$</span> . The deck of cards is represented by the multiset <span class="math-container">$D_n=[n]\cup[n]$</span>. For each <span class="math-container">$m$</span>, let <span class="math-container">$S_m^n$</span> be a partitioning of <span class="math-container">$D_n$</span> into subsets of <span class="math-container">$2$</span>, and <span class="math-container">$S^n=\bigcup_m S_m^n$</span> (note that the order in which the pairs are drawn doesn't actually matter, so it isn't necessary to count the permutations).</p>
<p>Call the <span class="math-container">$m^\text{th}$</span> partitioning <span class="math-container">$k$</span>-<em>acceptable</em> iff, for all <span class="math-container">$\{i,j\}\in S_m^n$</span>, <span class="math-container">$|i-j|\le k$</span>, and <span class="math-container">$k$</span>-<em>unacceptable</em> otherwise. The task is now to find the total number of partitionings (the largest <span class="math-container">$m$</span>) for each <span class="math-container">$n$</span>, and the subset of <span class="math-container">$k$</span>-acceptable partitionings.</p>
<p>The first part is relatively easy. I'm not great with combinatorics, so I just used the formula from <a href="https://math.stackexchange.com/questions/975382/counting-the-number-of-partitions-having-blocks-of-cardinality-2-and-non-distinc">this question</a>, played with Mathematica, and searched the OEIS until I got this:</p>
<p><span class="math-container">$$|S^n|=(2n-1)!!$$</span></p>
<p>For the second part, let <span class="math-container">$A_k^n$</span> be the set of <span class="math-container">$k$</span>-acceptable partitionings of <span class="math-container">$D_n$</span>. Then, the probability that a particular partitioning is <span class="math-container">$k$</span>-acceptable is:</p>
<p><span class="math-container">$$P(k,n)=\frac{|A_k^n|}{|S^n|}=\frac{|A_k^n|}{(2n-1)!!}$$</span></p>
<p>All that's left is to determine the size of <span class="math-container">$A_k^n$</span>.</p>
<p><strong>Note:</strong></p>
<p>Assuming that I haven't made any mistakes, if Christopher Well's answer is correct and <span class="math-container">$P(1,3)=\frac{1}{3}$</span>, then <span class="math-container">$|A_1^3|=5$</span>.</p>
|
209,892 | <p>From the values:</p>
<pre><code>{57.02, 71.04, 87.03, 97.05, 99.07, 101.05, 103.01, 113.08, 114.04, 115.03,
128.06, 128.09, 129.04, 131.04, 137.06, 147.07, 156.10, 163.03, 186.08}
</code></pre>
<p>I would like to find all possible combinations of 3 values that have the sum of roughly 344.25 (+/- 0.05 would be ok). I have tried:</p>
<pre><code>IntegerPartitions[344.2, {3}, {57.0, 71.0, 87.0, 97.1, 99.1, 101.1, 103.0, 113.1, 114.0, 115.0, 128.1, 128.1, 129.0, 131.0, 137.1, 147.1, 156.1, 163.0, 186.1}]
</code></pre>
<p>though <code>IntegerPartitions</code> only seems to accept whole numbers. Any help would be appreciated.</p>
| kglr | 125 | <pre><code>list = {57.0, 71.0, 87.0, 97.1, 99.1, 101.1, 103.0, 113.1, 114.0,
115.0, 128.1, 128.1, 129.0, 131.0, 137.1, 147.1, 156.1, 163.0, 186.1};
subsets = DeleteDuplicates @ Select[Subsets[list, {3}], 344.2 <= Total[#] <= 344.3 &]
</code></pre>
<blockquote>
<p>{{57., 101.1, 186.1},<br>
{87., 101.1, 156.1},<br>
{101.1, 115., 128.1},<br>
{103., 113.1, 128.1}}</p>
</blockquote>
<pre><code>DeleteDuplicates[Join @@ Permutations /@ subsets]
</code></pre>
<blockquote>
<p>{{57., 101.1, 186.1}, {57., 186.1, 101.1}, {101.1, 57., 186.1}, {101.1, 186.1, 57.}, {186.1, 57., 101.1}, {186.1, 101.1, 57.},<br>
{87., 101.1, 156.1}, {87., 156.1, 101.1}, {101.1, 87., 156.1}, {101.1, 156.1, 87.}, {156.1, 87., 101.1}, {156.1, 101.1, 87.},<br>
{101.1, 115., 128.1}, {101.1, 128.1, 115.}, {115., 101.1, 128.1}, {115., 128.1, 101.1}, {128.1, 101.1, 115.}, {128.1, 115., 101.1},<br>
{103., 113.1, 128.1}, {103., 128.1, 113.1}, {113.1, 103., 128.1}, {113.1, 128.1, 103.}, {128.1, 103., 113.1}, {128.1, 113.1, 103.}}</p>
</blockquote>
|
2,052,826 | <p>I've been working on some quantum mechanics problems and arrived at this one where I have to deal with subscripts. I got stuck doing this:
I have $\epsilon_{imk}\epsilon_{ikn}=\delta_{mk}\delta_{kn}-\delta_{mn}\delta_{kk}$. But then I went to check and $\delta_{mk}\delta_{kn}-\delta_{mn}\delta_{kk}$ is equal to $\delta_{mn}-3\delta_{mn}$. Why is that so?
Thank you in advance.</p>
| Mark Viola | 218,419 | <p>Another approach is to write $$\epsilon_{ijk}=\hat x_i\cdot(\hat x_j\times \hat x_k)$$Then, we have</p>
<p>$$\begin{align}
\epsilon_{imk}\epsilon_{ikn}&=\hat x_i\cdot(\hat x_m\times \hat x_k)\,\hat x_i\cdot(\hat x_k\times \hat x_n)\\\\
&=(\hat x_m\times \hat x_k)\cdot(\hat x_k\times \hat x_n)\\\\
&=\hat x_m \cdot (\hat x_k\times(\hat x_k\times \hat x_n))\\\\
&=\hat x_m \cdot(\delta_{nk}\hat x_k-\delta_{kk}\hat x_n)\\\\
&=\delta_{nk}\delta_{mk}-3\delta_{mn}
\end{align}$$</p>
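<p>Contracting over both repeated indices $i$ <em>and</em> $k$ then gives $\delta_{mn}-3\delta_{mn}=-2\delta_{mn}$, which is what the asker's expression evaluates to; a short numerical check (pure Python, using the product formula for the Levi-Civita symbol):</p>

```python
def eps(i, j, k):
    # Levi-Civita symbol for indices in {0, 1, 2}
    return (i - j) * (j - k) * (k - i) // 2

for m in range(3):
    for n in range(3):
        s = sum(eps(i, m, k) * eps(i, k, n) for i in range(3) for k in range(3))
        assert s == (-2 if m == n else 0)   # i.e. delta_mn - 3*delta_mn
```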
|
192,636 | <p>Suppose I have some 3D points, e.g. <code>{{0, 0, 1}, {0, 0, 1.3}, {0, 1, 0}, {1.2, 0, 0}}</code>. Now I want to find the smallest and largest distance between two points.</p>
<p>A trivial way is to find all possible distances, then look for the smallest and largest number. This becomes very time-consuming for large data sets.</p>
<p>Could you please suggest any alternative?</p>
| Gladaed | 57,652 | <p>You can find out all the distances by calculating the <code>DistanceMatrix</code>. Using <code>Min</code> and <code>Max</code> you can find the smallest and largest values. </p>
<pre><code>dm = {{0, 0, 1}, {0, 0, 1.3}, {0, 1, 0}, {1.2, 0, 0}} // DistanceMatrix
closest = Min@dm (* is 0, since the diagonal holds each point's distance to itself *)
furthest = Max@dm
closest2 = # /. 0. -> Infinity & /@ dm // Min
</code></pre>
|
2,787,227 | <p>I'm trying to compute two integrals involving the Dirac delta, namely
\begin{align}
I_1&=\int^{1}_0\!\!\!\! dx_1\!\cdots\!\int^{1}_0\!\!\!\! dx_{8}\,\delta(x_1-x_2+x_3-x_4+x_5-x_6+x_7-x_8)\,,\\
I_2&=\int^{1}_0\!\!\!\! dx_1\!\cdots\!\int^{1}_0\!\!\!\! dx_{8}\,\delta(x_1-x_2+x_4-x_5)\delta(x_3-x_4+x_6-x_7)\,\delta(x_5-x_6+x_8-x_1)\,,
\end{align}
but I don't seem to have the right approach. I try to do case differentiations to find the individual contributions, but I havne't made much progress this way.</p>
<p>Is there a systematic method to evaluate such integrals? I also tried to evaluate them in Mathematica, but I didn't succeed with getting the exact fraction - however, I could approximate the integrals numerically and found $I_1\approx .50\pm.02$ and $I_2\approx .38\pm.02$.</p>
<p>I'd be happy about any suggestions!</p>
| hypernova | 549,945 | <p>Let us figure out an expression of $\left(I_n+\alpha\mathbf{x}\mathbf{x}^{\top}\right)^{1/2}$ at first. Denote
$$
\mathbf{v}_1=\frac{\mathbf{x}}{\left\|\mathbf{x}\right\|},
$$
and let
$$
\left\{\mathbf{v}_1,\mathbf{v}_2,\cdots,\mathbf{v}_n\right\}
$$
be an orthonormal basis for $\mathbb{R}^n$. We will show that each $\mathbf{v}_j$ serves as an eigenvector of $I_n+\alpha\mathbf{x}\mathbf{x}^{\top}$.</p>
<p>In fact, provided that $\mathbf{x}^{\top}\mathbf{v}_1=\left\|\mathbf{x}\right\|$,
$$
\left(I_n+\alpha\mathbf{x}\mathbf{x}^{\top}\right)\mathbf{v}_1=\mathbf{v}_1+\alpha\mathbf{x}\left\|\mathbf{x}\right\|=\mathbf{v}_1+\alpha\left\|\mathbf{x}\right\|^2\mathbf{v}_1=\left(1+\alpha\left\|\mathbf{x}\right\|^2\right)\mathbf{v}_1.
$$
This means that $\mathbf{v}_1$ is an eigenvector of $I_n+\alpha\mathbf{x}\mathbf{x}^{\top}$ associated with its eigenvalue $1+\alpha\left\|\mathbf{x}\right\|^2$.</p>
<p>Further, for $j\ge 2$, provided that $\mathbf{x}^{\top}\mathbf{v}_j=0$,
$$
\left(I_n+\alpha\mathbf{x}\mathbf{x}^{\top}\right)\mathbf{v}_j=\mathbf{v}_j+\mathbf{0}=\mathbf{v}_j.
$$
This means that $\mathbf{v}_j$ is an eigenvector of $I_n+\alpha\mathbf{x}\mathbf{x}^{\top}$ associated with its eigenvalue $1$.</p>
<p>Thanks to these facts, $I_n+\alpha\mathbf{x}\mathbf{x}^{\top}$ can be diagonalized as follows. We have
$$
\left(I_n+\alpha\mathbf{x}\mathbf{x}^{\top}\right)\left(\mathbf{v}_1,\mathbf{v}_2,\cdots,\mathbf{v}_n\right)=\left(\mathbf{v}_1,\mathbf{v}_2,\cdots,\mathbf{v}_n\right)\left(
\begin{array}{cccc}
1+\alpha\left\|\mathbf{x}\right\|^2&&&\\
&1&&\\
&&\ddots&\\
&&&1
\end{array}
\right),
$$
or equivalently,
$$
\left(I_n+\alpha\mathbf{x}\mathbf{x}^{\top}\right)V=V\Lambda\iff I_n+\alpha\mathbf{x}\mathbf{x}^{\top}=V\Lambda V^{-1}
$$
for short, where $V=\left(\mathbf{v}_1,\mathbf{v}_2,\cdots,\mathbf{v}_n\right)$ (by the way, since $\mathbf{v}_j$'s are orthonormal, $V$ is an orthogonal matrix), and $\Lambda$ denotes the last diagonal matrix. Therefore,
$$
\left(I_n+\alpha\mathbf{x}\mathbf{x}^{\top}\right)^{1/2}=V\Lambda^{1/2}V^{-1}=V\Lambda^{1/2}V^{\top},
$$
where
$$
\Lambda^{1/2}=\left(
\begin{array}{cccc}
\sqrt{1+\alpha\left\|\mathbf{x}\right\|^2}&&&\\
&1&&\\
&&\ddots&\\
&&&1
\end{array}
\right).
$$</p>
<hr>
<p>Denote $A=\left(I_n+\alpha\mathbf{x}\mathbf{x}^{\top}\right)^{1/2}$, and the above result reads
$$
A=V\Lambda^{1/2}V^{\top}.
$$
Since $V$ does not depend on $\alpha$, we have

$$
{\rm d}_{\alpha}A={\rm d}_{\alpha}\left(V\Lambda^{1/2}V^{\top}\right)=V\left({\rm d}_{\alpha}\Lambda^{1/2}\right)V^{\top}.
$$
Note that
$$
{\rm d}_{\alpha}\Lambda^{1/2}={\rm d}_{\alpha}\left(
\begin{array}{cccc}
\sqrt{1+\alpha\left\|\mathbf{x}\right\|^2}&&&\\
&1&&\\
&&\ddots&\\
&&&1
\end{array}
\right)=\left(
\begin{array}{cccc}
\frac{\left\|\mathbf{x}\right\|^2}{2\sqrt{1+\alpha\left\|\mathbf{x}\right\|^2}}&&&\\
&0&&\\
&&\ddots&\\
&&&0
\end{array}
\right){\rm d}\alpha.
$$
Thus
\begin{align}
{\rm d}_{\alpha}A&=V\left({\rm d}_{\alpha}\Lambda^{1/2}\right)V^{\top}\\
&=\left(\mathbf{v}_1,\mathbf{v}_2,\cdots,\mathbf{v}_n\right)\left(
\begin{array}{cccc}
\frac{\left\|\mathbf{x}\right\|^2}{2\sqrt{1+\alpha\left\|\mathbf{x}\right\|^2}}&&&\\
&0&&\\
&&\ddots&\\
&&&0
\end{array}
\right)\left(
\begin{array}{c}
\mathbf{v}_1^{\top}\\
\mathbf{v}_2^{\top}\\
\vdots\\
\mathbf{v}_n^{\top}
\end{array}
\right){\rm d}\alpha\\
&=\frac{\left\|\mathbf{x}\right\|^2}{2\sqrt{1+\alpha\left\|\mathbf{x}\right\|^2}}\mathbf{v}_1\mathbf{v}_1^{\top}{\rm d}\alpha\\
&=\frac{1}{2\sqrt{1+\alpha\left\|\mathbf{x}\right\|^2}}\mathbf{x}\mathbf{x}^{\top}{\rm d}\alpha.
\end{align}
Therefore,
$$
\frac{\partial A}{\partial\alpha}=\frac{1}{2\sqrt{1+\alpha\left\|\mathbf{x}\right\|^2}}\mathbf{x}\mathbf{x}^{\top}.
$$</p>
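<p>A central finite difference confirms this formula numerically (a sketch; $A(\alpha)$ is evaluated through the closed form $A=I_n+c(\alpha)\,\mathbf{x}\mathbf{x}^{\top}$ with $c(\alpha)=(\sqrt{1+\alpha\left\|\mathbf{x}\right\|^2}-1)/\left\|\mathbf{x}\right\|^2$, which follows from the diagonalization above):</p>

```python
import math, random

# Sketch: check dA/dalpha = x x^T / (2 sqrt(1 + alpha ||x||^2)) by a central
# finite difference, with A(alpha) computed via the closed form
# A = I + c(alpha) x x^T, c(alpha) = (sqrt(1 + alpha ||x||^2) - 1) / ||x||^2.
random.seed(1)
n, alpha, h = 3, 0.5, 1e-6
x = [random.uniform(-1, 1) for _ in range(n)]
nx2 = sum(t * t for t in x)

def A(a):
    c = (math.sqrt(1 + a * nx2) - 1) / nx2
    return [[float(i == j) + c * x[i] * x[j] for j in range(n)] for i in range(n)]

Ap, Am = A(alpha + h), A(alpha - h)
fd = [[(Ap[i][j] - Am[i][j]) / (2 * h) for j in range(n)] for i in range(n)]
formula = [[x[i] * x[j] / (2 * math.sqrt(1 + alpha * nx2)) for j in range(n)]
           for i in range(n)]
err = max(abs(fd[i][j] - formula[i][j]) for i in range(n) for j in range(n))
print(err)
```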
<hr>
<p>Similarly, you may work out
$$
\frac{\partial A}{\partial\mathbf{x}}
$$
by computing
$$
{\rm d}_{\mathbf{x}}A={\rm d}_{\mathbf{x}}\left(V\Lambda^{1/2}V^{\top}\right)=\left({\rm d}_{\mathbf{x}}V\right)\Lambda^{1/2}V^{\top}+V\left({\rm d}_{\mathbf{x}}\Lambda^{1/2}\right)V^{\top}+V\Lambda^{1/2}\left({\rm d}_{\mathbf{x}}V\right)^{\top}.
$$
Since $\mathbf{x}$ is a vector, you may use the entry-wise form to make the calculation clearer.</p>
|
2,445,855 | <p>1) How is this letter or symbol pronounced mathematically?</p>
<p>$$\overline k$$</p>
<p>2) Is the sign $'$ just a symbol for the derivative? For example:</p>
<p>$$k'$$ Do we only understand this as a derivative?</p>
| Community | -1 | <p>It would depend on what mathematical area you were studying when you saw it. Just as $x$ isn't always referring to the $x$-axis, or $y$ to the $y$-axis, of the real plane (it could label any Cartesian product, etc.), symbols take their meaning from context. $\overline k$ is usually read "$k$-bar"; the bar (vinculum) in fractions and roots means "take everything under it together as one thing", and in repeating decimals it is a short way to show which part repeats. $\circ$ could be used for function composition or other compositions, as well as in place of the degree symbol when talking about temperature. The tick in $k'$ could mean the first derivative, but it could also mean, for example, the preimage under a function $k$ in a function context. Math symbols get reused a lot: $\aleph$ could be a variable, or, with a subscript, could refer to a type of infinity, for example.</p>
|
114,487 | <p>I have a stack of images (usually ca 100) of the same sample. The images have intrinsic variation of the sample, which is my signal, and a lot of statistical noise. I did a principal components analysis (PCA) on the whole stack and found that components 2-5 are just random noise, whereas the rest is fine. How can I produce a new stack of images where the noise components are filtered out?</p>
<p>EDIT:</p>
<p>I am sorry I was not as active as you yesterday. I must admit I am a bit overwhelmed by the depth and yet simplicity of your answers. It is hard for me to choose one, since all of them work great and give what I actually wanted.</p>
<p>I feel that I need to elaborate a bit more on the problem I am working on. Unfortunately, my supervisor does not allow me to upload any data before we have published the final results, so I have to work in abstract terms. We have an atomic cloud cooled to a temperature of 10 µK. Due to inter-atomic and laser interaction, the atomic cloud (all of the atoms as a whole) is excited and starts to oscillate in different vibrational modes. This dynamic behavior is of great interest to us, since it provides an insight into the inter-atomic physics.</p>
<p>The problem is that most of the relevant variations are obscured by noise due to the imaging process. The noise is usually greatly suppressed if you take two images, one with noise+signal and one with noise only, and then subtract them. However, this does not work if the noise in the two images is not correlated, which sadly is our case. Therefore, we decided to use PCA, because there you can clearly see the oscillation modes and filter out everything that is junk. If you are interested in using PCA to visualize dynamics, you can have a look at this paper by a different group:</p>
<p><a href="http://iopscience.iop.org/article/10.1088/1367-2630/16/12/122001" rel="noreferrer">http://iopscience.iop.org/article/10.1088/1367-2630/16/12/122001</a></p>
<p>I deeply thank everybody who contributed.</p>
| Anton Antonov | 34,008 | <p>This answer compares two dimension reduction techniques, SVD and Non-Negative Matrix Factorization (NNMF), over a set of images with two different classes of signals (two digits below) produced by different generators and overlaid with different types of noise.</p>
<p>Note that the question states that the images have one class of signals:</p>
<blockquote>
<p>I have a stack of images (usually ca 100) of the same sample.</p>
</blockquote>
<p>PCA/SVD produces somewhat good results, but NNMF often provides great results. The factors of NNMF allow interpretation of the basis vectors of the dimension reduction, PCA in general does not. This can be seen in the example below.</p>
<h2>Data</h2>
<p>The data set-up is explained in more detail in <a href="https://mathematica.stackexchange.com/a/114508/34008">my previous answer</a>. Here is the code used in this one:</p>
<pre><code>MNISTdigits = ExampleData[{"MachineLearning", "MNIST"}, "TestData"];
{testImages, testImageLabels} =
Transpose[List @@@ RandomSample[Cases[MNISTdigits, HoldPattern[(im_ -> 0 | 4)]], 100]];
testImages
</code></pre>
<p><a href="https://i.stack.imgur.com/uBGlW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uBGlW.png" alt="enter image description here"></a></p>
<p>See the breakdown of signal classes:</p>
<pre><code>Tally[testImageLabels]
(* {{4, 48}, {0, 52}} *)
</code></pre>
<p>Verify the images have the same sizes:</p>
<pre><code>Tally[ImageDimensions /@ testImages]
dims = %[[1, 1]]
</code></pre>
<p>Add different kinds of noise to the images:</p>
<pre><code>noisyTestImages6 =
Table[ImageEffect[
testImages[[i]],
{RandomChoice[{"GaussianNoise", "PoissonNoise", "SaltPepperNoise"}], RandomReal[1]}], {i, Length[testImages]}];
RandomSample[Thread[{testImages, noisyTestImages6}], 15]
</code></pre>
<p><a href="https://i.stack.imgur.com/USKiV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/USKiV.png" alt="enter image description here"></a></p>
<p>Since the important values of the signals are 0 or close to 0 we negate the noisy images:</p>
<pre><code>negNoisyTestImages6 = ImageAdjust@*ColorNegate /@ noisyTestImages6
</code></pre>
<p><a href="https://i.stack.imgur.com/kMUGQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kMUGQ.png" alt="enter image description here"></a></p>
<h2>Linear vector space representation</h2>
<p>We unfold the images into vectors and stack them into a matrix:</p>
<pre><code>noisyTestImagesMat = (Flatten@*ImageData) /@ negNoisyTestImages6;
Dimensions[noisyTestImagesMat]
(* {100, 784} *)
</code></pre>
<p>Here is centralized version of the matrix to be used with PCA/SVD:</p>
<pre><code>cNoisyTestImagesMat =
Map[# - Mean[noisyTestImagesMat] &, noisyTestImagesMat];
</code></pre>
<p>(With NNMF we want to use the non-centralized one.)</p>
<p>Here confirm the values in those matrices:</p>
<pre><code>Grid[{Histogram[Flatten[#], 40, PlotRange -> All,
ImageSize -> Medium] & /@ {noisyTestImagesMat,
cNoisyTestImagesMat}}]
</code></pre>
<p><a href="https://i.stack.imgur.com/RQpkq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RQpkq.png" alt="enter image description here"></a></p>
<h2>SVD dimension reduction</h2>
<p>For more details see the previous answers.</p>
<pre><code>{U, S, V} = SingularValueDecomposition[cNoisyTestImagesMat, 100];
ListPlot[Diagonal[S], PlotRange -> All, PlotTheme -> "Detailed"]
dS = S;
Do[dS[[i, i]] = 0, {i, Range[10, Length[S], 1]}]
newMat = U.dS.Transpose[V];
denoisedImages =
Map[Image[Partition[# + Mean[noisyTestImagesMat], dims[[2]]]] &, newMat];
</code></pre>
<p><a href="https://i.stack.imgur.com/CB4c0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CB4c0.png" alt="enter image description here"></a></p>
<p>Here is what the new basis vectors look like:</p>
<pre><code>Take[#, 50] &@
MapThread[{#1, Norm[#2],
ImageAdjust@Image[Partition[Rescale[#3], dims[[1]]]]} &, {Range[
Dimensions[V][[2]]], Diagonal[S], Transpose[V]}]
</code></pre>
<p><a href="https://i.stack.imgur.com/2Fn08.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2Fn08.png" alt="enter image description here"></a></p>
<p>Basically, we cannot tell much from these SVD basis vector images.</p>
<h2>Load packages</h2>
<p>Here we load the packages for what is computed next:</p>
<pre><code>Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/NonNegativeMatrixFactorization.m"]
Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/OutlierIdentifiers.m"]
</code></pre>
<h2>NNMF</h2>
<p>This command factorizes the image matrix into the product $W H$ :</p>
<pre><code>{W, H} = GDCLS[noisyTestImagesMat, 20, "MaxSteps" -> 200];
{W, H} = LeftNormalizeMatrixProduct[W, H];
Dimensions[W]
Dimensions[H]
(* {100, 20} *)
(* {20, 784} *)
</code></pre>
<p>The rows of $H$ are interpreted as new basis vectors and the rows of $W$ are the coordinates of the images in that new basis. Some appropriate normalization was also done for that interpretation. Note that we are using the non-centralized image matrix.</p>
<p>Let us see the norms of $H$ and mark the top outliers:</p>
<pre><code>norms = Norm /@ H;
ListPlot[norms, PlotRange -> All, PlotLabel -> "Norms of H rows",
PlotTheme -> "Detailed"] //
ColorPlotOutliers[TopOutliers@*HampelIdentifierParameters]
OutlierPosition[norms, TopOutliers@*HampelIdentifierParameters]
OutlierPosition[norms, TopOutliers@*SPLUSQuartileIdentifierParameters]
</code></pre>
<p><a href="https://i.stack.imgur.com/BLpfp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BLpfp.png" alt="enter image description here"></a></p>
<p>Here is the interpretation of the new basis vectors (the outliers are marked in red):</p>
<pre><code>MapIndexed[{#2[[1]], Norm[#], Image[Partition[#, dims[[1]]]]} &, H] /. (# -> Style[#, Red] & /@
OutlierPosition[norms, TopOutliers@*HampelIdentifierParameters])
</code></pre>
<p><a href="https://i.stack.imgur.com/r5hX7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/r5hX7.png" alt="enter image description here"></a></p>
<p>Using only the outliers of $H$ let us reconstruct the image matrix and the de-noised images:</p>
<pre><code>pos = {1, 6, 10}
dHN = Total[norms]/Total[norms[[pos]]]*
DiagonalMatrix[
ReplacePart[ConstantArray[0, Length[norms]],
Map[List, pos] -> 1]];
newMatNNMF = W.dHN.H;
denoisedImagesNNMF =
Map[Image[Partition[#, dims[[2]]]] &, newMatNNMF];
</code></pre>
<h2>Comparison</h2>
<p>At this point we can plot all images together for comparison:</p>
<pre><code>imgRows =
Transpose[{testImages, noisyTestImages6,
ImageAdjust@*ColorNegate /@ denoisedImages,
ImageAdjust@*ColorNegate /@ denoisedImagesNNMF}];
With[{ncol = 5},
Grid[Prepend[Partition[imgRows, ncol],
Style[#, Blue, FontFamily -> "Times"] & /@
Table[{"original", "noised", "SVD", "NNMF"}, ncol]]]]
</code></pre>
<p><a href="https://i.stack.imgur.com/otB5r.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/otB5r.png" alt="enter image description here"></a></p>
<p>We can see that NNMF produces cleaner images. This can also be observed and confirmed using threshold binarization -- the NNMF images are much cleaner.</p>
<pre><code>imgRows =
With[{th = 0.5},
MapThread[{#1, #2, Binarize[#3, th],
Binarize[#4, th]} &, {testImageLabels, noisyTestImages6,
ImageAdjust@*ColorNegate /@ denoisedImages,
ImageAdjust@*ColorNegate /@ denoisedImagesNNMF}]];
With[{ncol = 5},
Grid[Prepend[Partition[imgRows, ncol],
Style[#, Blue, FontFamily -> "Times"] & /@
Table[{"label", "noised", "SVD", "NNMF"}, ncol]]]]
</code></pre>
<p><a href="https://i.stack.imgur.com/rrK5X.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rrK5X.png" alt="enter image description here"></a></p>
<p>Usually with NNMF, in order to get good results, we have to do more than one iteration of the whole factorization and reconstruction process. And of course NNMF is much slower. Nevertheless, we can see the clear advantages of NNMF's interpretability and of leveraging it.</p>
<h2>Further experiments</h2>
<h3>Gallery of experiments with other digit pairs</h3>
<p>See the gallery with screenshots from other experiments in </p>
<ul>
<li><p>this blog post at WordPress :
<a href="https://mathematicaforprediction.wordpress.com/2016/05/07/comparison-of-pca-and-nnmf-over-image-de-noising/" rel="nofollow noreferrer">"Comparison of PCA and NNMF over image de-noising"</a>, and</p></li>
<li><p>this post at community.wolfram.com :
<a href="http://community.wolfram.com/groups/-/m/t/852873" rel="nofollow noreferrer">"Comparison of PCA and NNMF over image de-noising"</a>.</p></li>
</ul>
<h3>Using <code>Classify</code></h3>
<p>Further comparisons can be done using <code>Classify</code> -- for more details see the last sections of the posts linked above.</p>
<p>Here is an image of such comparison:</p>
<p><a href="https://i.stack.imgur.com/Zpvzj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Zpvzj.png" alt="enter image description here"></a></p>
|
3,741,222 | <p>I'm reading Thompson's book about lattices and sphere packing and got stuck on a sentence about a kind of <span class="math-container">$Z_8$</span> lattice that he introduces in order to reach, two pages later, the full <span class="math-container">$E_8$</span> lattice.
You can find this lattice defined at pages 73-74. To summarize, it's a lattice packing whose 16 closest points to the origin have the shape <span class="math-container">$$(\pm2, 0^7)$$</span>
The packing radius is then <span class="math-container">$1/2$</span> of their distance from the origin, i.e. <span class="math-container">$\rho=1$</span>. So far, so good.</p>
<p>My problem is when he tries to compute the center density of the lattice. Notice this center density can be interpreted as the "real" sphere center density since <span class="math-container">$\rho=1$</span>, as claimed in SPLAG when describing the formula for <span class="math-container">$\delta$</span>.</p>
<p>Instead of using any formula, Thompson uses a clever idea to estimate it, which sound like this:</p>
<blockquote>
<p>The center density is fairly easy to calculate. If all coordinate entries were written in binary form, then the lattice, by definition, would contain only those coordinates whose ones digits were either all 0's or 1's. In this case the only two out of every <span class="math-container">$2^8$</span> points with integer coordinates are acceptable. Thus, the center density = <span class="math-container">$1/2^7$</span></p>
</blockquote>
<p>I've got 2 problems with this result.</p>
<p>The first is that I can choose for this lattice a generating matrix with 2 in every diagonal entry, i.e. twice the identity matrix.
The determinant of this would then be <span class="math-container">$2^8$</span>. Using the SPLAG formula for center density, and keeping <span class="math-container">$\rho=1$</span>, I would get <span class="math-container">$\delta=1/2^8$</span>, which is smaller by a factor of 2 than the one claimed by Thompson.</p>
<p>To confirm this last sentence: as far as I can see, the lattice defined above can be seen as the <span class="math-container">$Z_8$</span> lattice, whose density is (again from SPLAG) <span class="math-container">$\delta=1/2^8$</span>.</p>
<p>However Thompson is using this <span class="math-container">$1/2^7$</span> to derive the full <span class="math-container">$E_8$</span> lattice, so I'm not claiming it's wrong a priori. But I'd like to understand where my reasoning is wrong and how to express coordinates in that binary format (I'm a programmer, so used to binary digits) to emulate Thompson's idea.</p>
<p>Thanks in advance</p>
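<p>Edit: here is how I would emulate Thompson's counting in code. This assumes his binary criterion means that, mod 2, the coordinate pattern must be all 0's or all 1's (i.e. all coordinates even, or all odd), which is my reading of "ones digits":</p>

```python
from itertools import product

# Sketch of Thompson's binary-digit argument: among the 2^8 possible
# patterns of coordinates mod 2, keep only the all-0 and all-1 patterns,
# so the lattice retains 2 out of every 2^8 integer points.
allowed = [p for p in product((0, 1), repeat=8)
           if all(d == 0 for d in p) or all(d == 1 for d in p)]
density = len(allowed) / 2 ** 8
print(len(allowed), density)   # 2, 0.0078125
```

<p>So 2 of the <span class="math-container">$2^8$</span> residue classes survive, giving Thompson's center density <span class="math-container">$2/2^8=1/2^7$</span> once <span class="math-container">$\rho=1$</span>.</p>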
| Heterotic | 117,522 | <p>Putting 2 in each slot creates a legitimate basis, but for a different lattice! The lattice as introduced on page 73 is not the collection of vectors with all coordinates even. That would indeed be generated by 2xId<span class="math-container">$_8$</span>. However, the lattice the author wants to study is the collection of vectors with integer coordinates such that the sum of all coordinates is always even. 2xId<span class="math-container">$_8$</span> would be a sub-lattice of this lattice, and there are other vectors, such as (1,1,..,1), that belong to the latter but not the former. That's why he takes that particular basis. As to the "binary digits" trick, I have no idea either, probably because I cannot remember the definition of center density.</p>
|
477,483 | <p>I'm trying to solve this question but have not managed to find an answer...</p>
<p>Does there exist a topology with cardinality $\alpha$, $\forall \, \alpha \ge 1$?</p>
| Ross Millikan | 1,827 | <p>The basic idea is that quadrilaterals average $90^\circ$ angles. If four of them meet at every corner, you won't have the required $720^\circ$ deficit to make a closed sphere. You need eight three-way vertices to get it to close. This is like the requirement that, using hexagons and pentagons, you need 12 pentagons for a full sphere. As one approximation, you can take a spherical cap and choose four points on the bottom circle to identify as "vertices" of your "quadrilateral", but maybe this is too trivial. You can also inscribe a cube in a sphere and project the edges outward to make a tessellation. The edges will not be geodesics. You can then subdivide the squares with quadrilaterals to make a tessellation with more elements.</p>
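<p>The deficit bookkeeping can be spelled out in a few lines (a sketch; the $720^\circ$ total is Descartes' theorem on angular deficit for a closed convex polyhedron):</p>

```python
# Sketch: three squares (3 x 90 = 270 degrees) meeting at a corner leave an
# angular deficit of 360 - 270 = 90 degrees per vertex; Descartes' theorem
# requires a total deficit of 720 degrees, forcing exactly eight such
# three-way vertices, i.e. the cube.
deficit_per_vertex = 360 - 3 * 90
vertices_needed = 720 / deficit_per_vertex
print(deficit_per_vertex, vertices_needed)   # 90, 8.0
```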
|
3,987,357 | <p>Let
<span class="math-container">$$
\psi(x)=
\left\{
\begin{array}{cll}
x \sin\Big(\dfrac{1}{x}\Big) & \text{if} & x\in (0,1],\, \\
0 & \text{if} & x=0,
\end{array}
\right.
$$</span>
and let <span class="math-container">$f:[-1,1]\rightarrow \mathbb{R}$</span> be Riemann integrable.</p>
<p>How can I show that <span class="math-container">$f\circ \psi$</span> is Riemann integrable?</p>
<p>I have several theorems in my book that could work if <span class="math-container">$\psi$</span> were a <span class="math-container">$C^1$</span> diffeomorphism or a homeomorphism with inverse satisfying Lipschitz condition. But <span class="math-container">$\psi$</span> is clearly neither of those. Any help with a zero set argument or reverting to definitions?</p>
| Yiorgos S. Smyrlis | 57,021 | <p><strong>Note.</strong> If <span class="math-container">$f$</span> is Riemann integrable and <span class="math-container">$\varphi$</span> continuous, then their composition <span class="math-container">$f\circ\varphi$</span> is NOT necessarily Riemann integrable. (See <a href="https://mathoverflow.net/questions/20045/about-the-riemann-integrability-of-composite-functions">here</a>.)</p>
<p><em>Sketch of proof.</em></p>
<p><strong>Claim 1.</strong> <em>If <span class="math-container">$f$</span> is Riemann integrable and <span class="math-container">$\varphi$</span> is differentiable and <span class="math-container">$\varphi'(x)\ge \eta>0$</span>, then <span class="math-container">$f\circ\varphi$</span> is Riemann integrable.</em></p>
<p>Proof. Let <span class="math-container">$P=\{a=t_0<\cdots<t_n=b\}$</span> be a partition of <span class="math-container">$[a,b]$</span>, then
<span class="math-container">$$
U(f\circ\varphi,P)-L(f\circ\varphi,P)=
\sum_{i=1}^n\big(M_i(f\circ\varphi)-m_i(f\circ\varphi)\big)(t_i-t_{i-1})
$$</span>
where
<span class="math-container">$$
m_i(f\circ\varphi)=\inf_{t\in [t_{i-1},t_i]}(f\circ\varphi)(t)=
\inf_{x\in [\varphi(t_{i-1}),\varphi(t_i)]}f(x) \quad\text{and}\quad
M_i(f\circ\varphi)=\sup_{t\in [t_{i-1},t_i]}(f\circ\varphi)(t)=
\sup_{x\in [\varphi(t_{i-1}),\varphi(t_i)]}f(x)
$$</span>
since <span class="math-container">$\varphi$</span> is increasing and hence
<span class="math-container">$$
U(f\circ\varphi,P)-L(f\circ\varphi,P)=
\sum_{i=1}^n\big(M_i(f\circ\varphi)-m_i(f\circ\varphi)\big)(t_i-t_{i-1})
\\=
\sum_{i=1}^n\big(M_i(f\circ\varphi)-m_i(f\circ\varphi)\big)
\big(\varphi(t_{i})-\varphi(t_{i-1})\big)
\frac{t_i-t_{i-1}}{\varphi(t_{i})-\varphi(t_{i-1})}
\\=
\sum_{i=1}^n\big(M_i(f\circ\varphi)-m_i(f\circ\varphi)\big)
\big(\varphi(t_{i})-\varphi(t_{i-1})\big)
\frac{1}{\varphi'(\tau_{i})}\le \frac{1}{\eta}\sum_{i=1}^n\big(M_i(f\circ\varphi)-m_i(f\circ\varphi)\big)\big(\varphi(t_{i})-\varphi(t_{i-1})\big)=\frac{1}{\eta}\big(U(f,\tilde P)-L(f,\tilde P)\big)
$$</span>
where <span class="math-container">$\tau_i\in(t_{i-1},t_i)$</span> comes from the mean value theorem and <span class="math-container">$\,\tilde P=\{\varphi(a)=\tilde t_0<\cdots<\tilde t_n=\varphi(b)\}$</span> with <span class="math-container">$\tilde t_i=\varphi(t_i)$</span>.</p>
<p>Clearly, for suitable choice of <span class="math-container">$\tilde P$</span>, and subsequently of <span class="math-container">$P$</span>, the <span class="math-container">$\frac{1}{\eta}\big(U(f,\tilde P)-L(f,\tilde P)\big)$</span> can become smaller than any given <span class="math-container">$\varepsilon$</span>, and hence <span class="math-container">$f\circ\varphi$</span> is Riemann integrable.</p>
<p><strong>Claim 2.</strong> <em>If <span class="math-container">$g:[a,b]\to\mathbb R$</span> is bounded and Riemann integrable on <span class="math-container">$[a+\delta_1,b-\delta_2]$</span> for every <span class="math-container">$\delta_1,\delta_2>0$</span>, then <span class="math-container">$g$</span> is Riemann integrable on <span class="math-container">$[a,b]$</span>.</em></p>
<p>Proof. Simple application of the integrability criterion, which was used above.</p>
<p>Therefore, due to <em>Claim 1</em> (and its mirror version for decreasing <span class="math-container">$\varphi$</span>), if <span class="math-container">$\varphi(x)=x\sin(1/x)$</span>, then <span class="math-container">$\varphi'(x)\ge \eta>0$</span> or
<span class="math-container">$\varphi'(x)\le -\eta<0$</span>, for some <span class="math-container">$\eta>0$</span>, on any interval of the form
<span class="math-container">$$
\Big[\frac{1}{\frac{\pi}{2}+k\pi}+\delta,\frac{1}{-\frac{\pi}{2}+k\pi}-\delta\Big], \quad \delta>0,\,\,k\in\mathbb Z^+
$$</span>
and hence, <span class="math-container">$f\circ \varphi$</span> is integrable in <span class="math-container">$\big[\frac{1}{\frac{\pi}{2}+k\pi}+\delta,\frac{1}{-\frac{\pi}{2}+k\pi}-\delta\big]$</span>. Due to <em>Claim 2</em>,
<span class="math-container">$f\circ \varphi$</span> is integrable in <span class="math-container">$J_k=\big[\frac{1}{\frac{\pi}{2}+k\pi},\frac{1}{-\frac{\pi}{2}+k\pi}\big]$</span>, and consequently in <span class="math-container">$I_n=\big[\frac{1}{\frac{\pi}{2}+n\pi},1\big]$</span>, for all <span class="math-container">$n\in\mathbb N$</span>, since <span class="math-container">$I_n$</span> is a finite union of <span class="math-container">$J_k$</span>'s together with <span class="math-container">$\big[\frac{2}{\pi},1\big]$</span>, on which <span class="math-container">$\varphi'$</span> is also bounded away from zero.
Applying <em>Claim 2</em> once more we obtain that <span class="math-container">$f\circ \varphi$</span> is integrable in <span class="math-container">$[0,1]$</span>.</p>
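<p>To illustrate the mechanism of <em>Claim 1</em> on a toy example (not part of the proof; the choices of <span class="math-container">$f$</span> and <span class="math-container">$\varphi$</span> here are hypothetical): take <span class="math-container">$\varphi(t)=t^2$</span> on <span class="math-container">$[1,2]$</span>, so <span class="math-container">$\varphi'\ge2>0$</span>, and the Riemann-integrable step function <span class="math-container">$f(x)=1$</span> for <span class="math-container">$x\ge2$</span> and <span class="math-container">$0$</span> otherwise. Then <span class="math-container">$g=f\circ\varphi$</span> is nondecreasing, its Darboux sums are exactly computable, and the gap <span class="math-container">$U-L$</span> shrinks like <span class="math-container">$1/n$</span>:</p>

```python
# Illustration: g = f o phi is nondecreasing, so on each subinterval
# sup g = g(right end) and inf g = g(left end); the Darboux gap U - L
# telescopes to (g(b) - g(a)) * h = 1/n on a uniform partition.
def g(t):
    return 1.0 if t * t >= 2 else 0.0

def darboux_gap(n):
    a, b = 1.0, 2.0
    h = (b - a) / n
    return sum((g(a + (i + 1) * h) - g(a + i * h)) * h for i in range(n))

gaps = [darboux_gap(n) for n in (10, 100, 1000)]
print(gaps)   # [0.1, 0.01, 0.001]
```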
|
3,336,742 | <p>Can we evaluate <span class="math-container">$\displaystyle\sum_{n=1}^\infty\frac{H_n^2}{n^32^n}$</span> ?</p>
<p>where <span class="math-container">$H_n=\sum_{k=1}^n\frac1k$</span> is the harmonic number.</p>
<p>A related integral is <span class="math-container">$\displaystyle\int_0^1\frac{\ln^2(1-x)\operatorname{Li}_2\left(\frac x2\right)}{x}dx$</span>.</p>
<p>where <span class="math-container">$\operatorname{Li}_2(x)=\sum_{n=1}^\infty\frac{x^n}{n^2}$</span> is the dilogarithmic function.</p>
<hr>
<p>Here is how the integral and the sum are related:</p>
<p>From <a href="https://math.stackexchange.com/questions/3263927/prove-frac-partial-partial-m-textbn-m-textbn-m-sum-k-0n-1-f/3271177#3271177">here</a> we have </p>
<p><span class="math-container">$$\int_0^1x^{n-1}\ln^2(1-x)\ dx=\frac{H_n^2+H_n^{(2)}}{n}$$</span></p>
<p>Divide both sides by <span class="math-container">$n^22^n$</span> then sum up we get</p>
<p><span class="math-container">$$\sum_{n=1}^\infty \frac{H_n^2+H_n^{(2)}}{n^32^n}=\int_0^1\frac{\ln^2(1-x)}{x}\sum_{n=1}^\infty \frac{x^n}{n^22^n}dx=\int_0^1\frac{\ln^2(1-x)\operatorname{Li}_2(x/2)}{x}dx$$</span></p>
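<p>As a numerical sanity check of the integral identity above (a sketch, shown here for <span class="math-container">$n=3$</span>): substituting <span class="math-container">$u=-\ln(1-x)$</span> turns <span class="math-container">$\int_0^1x^{n-1}\ln^2(1-x)\,dx$</span> into <span class="math-container">$\int_0^\infty(1-e^{-u})^{n-1}u^2e^{-u}\,du$</span>, which is smooth and easy to integrate:</p>

```python
import math

# Sketch: check  int_0^1 x^{n-1} ln^2(1-x) dx = (H_n^2 + H_n^(2)) / n, n = 3.
# With u = -ln(1-x):  x = 1 - e^{-u}, dx = e^{-u} du, ln^2(1-x) = u^2.
def simpson(g, a, b, steps):
    h = (b - a) / steps
    s = g(a) + g(b)
    for k in range(1, steps):
        s += (4 if k % 2 else 2) * g(a + k * h)
    return s * h / 3

n = 3
lhs = simpson(lambda u: (1 - math.exp(-u)) ** (n - 1) * u * u * math.exp(-u),
              0.0, 60.0, 20000)
H = sum(1 / k for k in range(1, n + 1))        # H_n
H2 = sum(1 / k ** 2 for k in range(1, n + 1))  # H_n^(2)
rhs = (H * H + H2) / n
print(lhs, rhs)   # both ~ 1.574074
```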
| Ali Shadhar | 432,085 | <p>We proved <a href="https://math.stackexchange.com/questions/3366039/a-group-of-important-generating-functions-involving-harmonic-number">here</a></p>
<p><span class="math-container">$$\frac{\ln^2(1-x)}{1-x}=\sum_{n=1}^\infty x^n\left(H_n^2-H_n^{(2)}\right)\tag{1}$$</span></p>
<p>multiply both sides by <span class="math-container">$\frac{\ln^2x}{x}$</span> then integrate from <span class="math-container">$x=0$</span> to <span class="math-container">$1/2$</span>
we have</p>
<p><span class="math-container">\begin{align}
I&=\int_0^{1/2}\frac{\ln^2(1-x)\ln^2x}{x(1-x)}\ dx=\sum_{n=1}^\infty\left(H_n^2-H_n^{(2)}\right)\int_0^{1/2}x^{n-1}\ln^2x\ dx\\
&=\sum_{n=1}^\infty\left(H_n^2-H_n^{(2)}\right)\left(\frac{\ln^22}{n2^n}+\frac{2\ln2}{n^22^n}+\frac{2}{n^32^n}\right)\\
&=\ln^22\sum_{n=1}^\infty\frac{H_n^2-H_n^{(2)}}{n2^n}+2\ln2\sum_{n=1}^\infty\frac{H_n^2-H_n^{(2)}}{n^22^n}+2\sum_{n=1}^\infty\frac{H_n^2}{n^32^n}-2\sum_{n=1}^\infty\frac{H_n^{(2)}}{n^32^n}\\
&=\ln^22S_1+2\ln2S_2+2\sum_{n=1}^\infty\frac{H_n^2}{n^32^n}-2S_3
\end{align}</span></p>
<p>Rearranging the terms we have </p>
<blockquote>
<p><span class="math-container">$$\sum_{n=1}^\infty\frac{H_n^2}{n^32^n}=\frac12I-\frac12\ln^22S_1-\ln2S_2+S_3\tag{2}$$</span></p>
</blockquote>
<hr>
<p><strong>Evaluation of <span class="math-container">$I$</span>:</strong></p>
<p><span class="math-container">\begin{align}
I&=\int_0^{1/2}\frac{\ln^2(1-x)\ln^2x}{x(1-x)}\ dx\overset{1-x\mapsto x}{=}\int_{1/2}^1\frac{\ln^2(1-x)\ln^2x}{x(1-x)}\ dx\\
2I&=\int_0^{1}\frac{\ln^2(1-x)\ln^2x}{x(1-x)}\ dx=\int_0^{1}\frac{\ln^2(1-x)\ln^2x}{x}\ dx+\underbrace{\int_0^{1}\frac{\ln^2(1-x)\ln^2x}{1-x}\ dx}_{1-x\mapsto x}\\
I&=\int_0^{1}\frac{\ln^2(1-x)\ln^2x}{x}\ dx=2\sum_{n=1}^\infty\frac{H_n}{n+1}\int_0^1x^n\ln^2x\ dx\\
&=4\sum_{n=1}^\infty\frac{H_n}{(n+1)^4}=4\sum_{n=1}^\infty\frac{H_n}{n^4}-4\zeta(5)=\boxed{8\zeta(5)-4\zeta(2)\zeta(3)}
\end{align}</span></p>
<p>where we used <span class="math-container">$\sum_{n=1}^\infty\frac{H_n}{n^4}=3\zeta(5)-\zeta(2)\zeta(3)$</span></p>
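<p>A numerical check of this value (a sketch): it uses the series form <span class="math-container">$I=4\sum_{n\ge1}H_n/(n+1)^4$</span> derived above, takes <span class="math-container">$\zeta(2)=\pi^2/6$</span> exactly, and sums <span class="math-container">$\zeta(3)$</span>, <span class="math-container">$\zeta(5)$</span> directly, their tails being negligible at this precision:</p>

```python
import math

# Sketch: check I = 8 zeta(5) - 4 zeta(2) zeta(3) against the series
# I = 4 * sum_{n>=1} H_n / (n+1)^4 derived above.
N = 200000
zeta3 = sum(1 / k ** 3 for k in range(1, N + 1))
zeta5 = sum(1 / k ** 5 for k in range(1, N + 1))
zeta2 = math.pi ** 2 / 6
H, I_series = 0.0, 0.0
for n in range(1, N + 1):
    H += 1 / n
    I_series += 4 * H / (n + 1) ** 4
closed = 8 * zeta5 - 4 * zeta2 * zeta3
print(I_series, closed)   # both ~ 0.386205
```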
<hr>
<p><strong>Evaluation of <span class="math-container">$S_1$</span>:</strong></p>
<p>Divide both sides of (1) by <span class="math-container">$x$</span> then integrate from <span class="math-container">$x=0$</span> to <span class="math-container">$1/2$</span> and use the fact that <span class="math-container">$\int_0^{1/2}x^{n-1}\,dx=\frac1{n2^n}$</span>:</p>
<p><span class="math-container">\begin{align}
S_1&=\sum_{n=1}^\infty \frac{H_n^2-H_n^{(2)}}{n2^n}=\int_0^{1/2}\frac{\ln^2(1-x)}{x(1-x)}\ dx\\
&=\int_{1/2}^{1}\frac{\ln^2x}{x(1-x)}\ dx=\sum_{n=0}^\infty\int_{1/2}^1x^{n-1}\ln^2x\ dx\\
&=\frac13\ln^32+\sum_{n=1}^\infty\int_{1/2}^1x^{n-1}\ln^2x\ dx\\
&=\frac13\ln^32+\sum_{n=1}^\infty\left(\frac2{n^3}-\frac{\ln^22}{n2^n}-\frac{2\ln2}{n^22^n}-\frac{2}{n^32^n}\right)\\
&=\frac13\ln^32+2\zeta(3)-\ln^32-2\ln2\operatorname{Li}_2\left(\frac12\right)-2\operatorname{Li}_3\left(\frac12\right)=\boxed{\frac14\zeta(3)}
\end{align}</span></p>
<p>where we used <span class="math-container">$\operatorname{Li}_2\left(\frac12\right)=\frac12\zeta(2)-\frac12\ln^22$</span> and <span class="math-container">$\operatorname{Li}_3\left(\frac12\right)=\frac78\zeta(3)-\frac12\ln2\zeta(2)+\frac16\ln^32$</span></p>
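<p>Numerically (a sketch; the <span class="math-container">$2^{-n}$</span> factor makes the series converge very fast, so 80 terms are far more than enough):</p>

```python
# Sketch: check S_1 = sum_{n>=1} (H_n^2 - H_n^(2)) / (n 2^n) = zeta(3)/4.
H, H2, S1 = 0.0, 0.0, 0.0
for n in range(1, 80):
    H += 1 / n
    H2 += 1 / n ** 2
    S1 += (H * H - H2) / (n * 2 ** n)
zeta3 = sum(1 / k ** 3 for k in range(1, 200000))
print(S1, zeta3 / 4)   # both ~ 0.300514
```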
<hr>
<p><strong>Evaluation of <span class="math-container">$S_2$</span>:</strong></p>
<p>integrate both sides of (1) from <span class="math-container">$x=0$</span> to <span class="math-container">$x$</span> to have</p>
<p><span class="math-container">$$-\frac13\ln^3(1-x)=\sum_{n=1}^\infty\frac{x^{n+1}}{n+1}\left(H_n^2-H_n^{(2)}\right)=\sum_{n=1}^\infty\frac{x^{n}}{n}\left(H_n^2-H_n^{(2)}-\frac{2H_n}{n}+\frac{2}{n^2}\right)\tag{3}$$</span></p>
<p>Now divide both sides of (3) by <span class="math-container">$x$</span> then integrate from <span class="math-container">$x=0$</span> to <span class="math-container">$1/2$</span> and use the fact that <span class="math-container">$\int_0^{1/2}x^{n-1}\,dx=\frac1{n2^n}$</span>:</p>
<p><span class="math-container">$$-\frac13\int_0^{1/2}\frac{\ln^3(1-x)}{x}\ dx=\sum_{n=1}^\infty\frac{1}{n^22^n}\left(H_n^2-H_n^{(2)}-\frac{2H_n}{n}+\frac{2}{n^2}\right)$$</span></p>
<p>Rearranging the terms</p>
<p><span class="math-container">$$S_2=\sum_{n=1}^\infty\frac{H_n^2-H_n^{(2)}}{n^22^n}=\boxed{2\sum_{n=1}^\infty\frac{H_n}{n^32^n}-\frac13\int_0^{1/2}\frac{\ln^3(1-x)}{x}\ dx-2\operatorname{Li}_4\left(\frac12\right)}$$</span></p>
<hr>
<p><strong>Evaluation of <span class="math-container">$S_3$</span>:</strong></p>
<p>By Cauchy product we have</p>
<p><span class="math-container">$$\operatorname{Li}_2^2(x)=\sum_{n=1}^\infty x^n\left(\frac{4H_n}{n^3}+\frac{2H_n^{(2)}}{n^2}-\frac{6}{n^4}\right)$$</span></p>
<p>divide both sides by <span class="math-container">$x$</span> then integrate from <span class="math-container">$x=0$</span> to <span class="math-container">$1/2$</span> and use the fact that <span class="math-container">$\int_0^{1/2}x^{n-1}\,dx=\frac1{n2^n}$</span>; we have</p>
<p><span class="math-container">$$\int_0^{1/2}\frac{\operatorname{Li}_2^2(x)}{x}\ dx=\sum_{n=1}^\infty \frac{1}{n2^n}\left(\frac{4H_n}{n^3}+\frac{2H_n^{(2)}}{n^2}-\frac{6}{n^4}\right)$$</span></p>
<p>rearrange to get</p>
<p><span class="math-container">$$S_3=\sum_{n=1}^\infty\frac{H_n^{(2)}}{n^32^n}=\boxed{3\operatorname{Li}_5\left(\frac12\right)-2\sum_{n=1}^\infty\frac{H_n}{n^42^n}+\frac12\int_0^{1/2}\frac{\operatorname{Li}_2^2(x)}{x}\ dx}$$</span></p>
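<p>The Cauchy-product identity used above can be sanity-checked numerically, e.g. at <span class="math-container">$x=1/2$</span> (a sketch):</p>

```python
# Sketch: check Li_2(x)^2 = sum_n x^n (4H_n/n^3 + 2H_n^(2)/n^2 - 6/n^4)
# at x = 1/2; the 2^{-n} factor makes 200 terms plenty.
x, N = 0.5, 200
li2 = sum(x ** n / n ** 2 for n in range(1, N + 1))
H, H2, rhs = 0.0, 0.0, 0.0
for n in range(1, N + 1):
    H += 1 / n
    H2 += 1 / n ** 2
    rhs += x ** n * (4 * H / n ** 3 + 2 * H2 / n ** 2 - 6 / n ** 4)
print(li2 ** 2, rhs)   # both ~ 0.3390
```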
<hr>
<p>Substituting the results of <span class="math-container">$I$</span>, <span class="math-container">$S_1$</span>, <span class="math-container">$S_2$</span> and <span class="math-container">$S_3$</span> in (2) we have </p>
<p><span class="math-container">$$\sum_{n=1}^\infty\frac{H_n^2}{n^32^n}=3\operatorname{Li}_5\left(\frac12\right)+2\ln2\operatorname{Li}_4\left(\frac12\right)+4\zeta(5)-2\zeta(2)\zeta(3)-\frac18\ln^22\zeta(3)-2\left(\color{blue}{\ln2\sum_{n=1}^\infty\frac{H_n}{n^32^n}+\sum_{n=1}^\infty\frac{H_n}{n^42^n}}\right)+\frac13\ln2\int_0^{1/2}\frac{\ln^3(1-x)}{x}\ dx+\frac12\int_0^{1/2}\frac{\operatorname{Li}_2^2(x)}{x}\ dx$$</span></p>
<p>I managed <a href="https://math.stackexchange.com/q/3195117">here</a> to prove</p>
<p><span class="math-container">$$\color{blue}{\ln2\sum_{n=1}^{\infty}\frac{H_n}{2^n n^3}+\sum_{n=1}^{\infty}\frac{H_n}{2^nn^4}
}=-\frac12\ln^22\sum_{n=1}^{\infty}\frac{H_n}{2^n n^2}-\frac16\ln^32\sum_{n=1}^{\infty}\frac{H_n}{2^n n}+\frac12\sum_{n=1}^{\infty}\frac{H_n}{n^4}-\frac{47}{32}\zeta(5)+\frac{1}{15}\ln^52+\frac{1}{3}\ln^32\operatorname{Li_2}\left( \frac12\right)+\ln^22\operatorname{Li_3}\left( \frac12\right)+2\ln2\operatorname{Li_4}\left( \frac12\right) +2\operatorname{Li_5}\left( \frac12\right)$$</span></p>
<p>plugging the trivial sums <span class="math-container">$\sum_{n=1}^{\infty}\frac{H_n}{ n^22^n}=\zeta(3)-\frac{1}{2}\ln(2)\zeta(2)$</span> and <span class="math-container">$\sum_{n=1}^\infty\frac{H_n}{n2^n}=\frac12\zeta(2)$</span> we get</p>
<p><span class="math-container">$$\color{blue}{\ln2\sum_{n=1}^{\infty}\frac{H_n}{2^n n^3}+\sum_{n=1}^{\infty}\frac{H_n}{2^nn^4}
}=2\operatorname{Li}_5\left( \frac12\right)+2\ln2\operatorname{Li}_4\left( \frac12\right)+\frac1{32}\zeta(5)-\frac12\zeta(2)\zeta(3)+\frac38\ln^22\zeta(3)\\-\frac16\ln^32\zeta(2)+\frac1{15}\ln^52$$</span></p>
<p>Also @Song nicely proved <a href="https://math.stackexchange.com/q/3325928">here</a></p>
<p><span class="math-container">$$\int_0^{1/2}\frac{\operatorname{Li}_2^2(x)}{x}\ dx=\frac12\ln^32\zeta(2)-\frac78\ln^22\zeta(3)-\frac58\ln2\zeta(4)+\frac{27}{32}\zeta(5)+\frac78\zeta(2)\zeta(3)\\-\frac{7}{60}\ln^52-2\ln2\operatorname{Li}_4\left(\frac12\right)-2\operatorname{Li}_5\left(\frac12\right)$$</span></p>
<p>for the integral:
<span class="math-container">\begin{align}
\int_0^{1/2}\frac{\ln^3(1-x)}{x}\ dx&=\int_{1/2}^{1}\frac{\ln^3x}{1-x}\ dx\\
&=\sum_{n=1}^\infty\int_{1/2}^1 x^{n-1}\ln^3x\ dx\\
&=\sum_{n=1}^\infty\left(\frac{\ln^32}{n2^n}+\frac{3\ln^22}{n^22^n}+\frac{6\ln2}{n^32^n}+\frac{6}{n^42^n}-\frac{6}{n^4}\right)\\
&=\ln^42+3\ln^32\operatorname{Li}_2\left(\frac12\right)+6\ln2\operatorname{Li}_3\left(\frac12\right)+6\operatorname{Li}_4\left(\frac12\right)-6\zeta(4)\\
&=6\operatorname{Li}_4\left(\frac12\right)-6\zeta(4)+\frac{21}4\ln2\zeta(3)-\frac32\ln^22\zeta(2)+\frac12\ln^42
\end{align}</span></p>
<hr>
<p>Combining these results we get</p>
<blockquote>
<p><span class="math-container">$$\sum_{n=1}^\infty\frac{H_n^2}{n^32^n}=-2\operatorname{Li}_5\left(\frac12\right)-\ln2\operatorname{Li}_4\left(\frac12\right)+\frac{279}{64}\zeta(5)-\frac{37}{16}\ln2\zeta(4)-\frac{9}{16}\zeta(2)\zeta(3)\\+\frac{7}{16}\ln^22\zeta(3)+\frac1{12}\ln^32\zeta(2)-\frac{1}{40}\ln^52$$</span></p>
</blockquote>
<hr>
<p><strong>BONUS:</strong></p>
<p>In our solution we got </p>
<p><span class="math-container">$$\sum_{n=1}^\infty\frac{H_n^{(2)}}{n^32^n}=3\operatorname{Li}_5\left(\frac12\right)-2\sum_{n=1}^\infty\frac{H_n}{n^42^n}+\frac12\int_0^{1/2}\frac{\operatorname{Li}_2^2(x)}{x}\ dx$$</span></p>
<p>Substitute </p>
<p><span class="math-container">\begin{align}
\displaystyle\sum_{n=1}^{\infty}\frac{H_n}{n^42^n}&=2\operatorname{Li_5}\left( \frac12\right)+\ln2\operatorname{Li_4}\left( \frac12\right)-\frac16\ln^32\zeta(2)
+\frac12\ln^22\zeta(3)\\
&\quad-\frac18\ln2\zeta(4)-
\frac12\zeta(2)\zeta(3)+\frac1{32}\zeta(5)+\frac1{40}\ln^52
\end{align}</span></p>
<p>along with @Song's result we get </p>
<blockquote>
<p><span class="math-container">$$\sum_{n=1}^\infty\frac{H_n^{(2)}}{n^32^n}=-2\operatorname{Li}_5\left(\frac12\right)-3\ln2\operatorname{Li}_4\left(\frac12\right)+\frac{23}{64}\zeta(5)-\frac1{16}\ln2\zeta(4)+\frac{23}{16}\zeta(2)\zeta(3)\\-\frac{23}{16}\ln^22\zeta(3)+\frac7{12}\ln^32\zeta(2)-\frac{13}{120}\ln^52$$</span></p>
</blockquote>
|
1,844,894 | <p>To explain my question, here is an example.</p>
<p>Below is an AP:</p>
<p>2, 6, 10, 14....n</p>
<p>Calculating the nth term in this sequence is easy because we have a formula. The common difference (d = 4) in AP is constant and that's why the formula is applicable, I think.</p>
<p>But what about this sequence:</p>
<p>5, 12, 21, 32....n</p>
<p>Here, the difference between two consecutive elements is not constant, but it too has a pattern, which all of you may have guessed: taking the differences between consecutive elements and forming a sequence from them results in an AP. For the above example, the AP looks like this:</p>
<p>5, 7, 9, 11.....n</p>
<p>So given a sequence with "uniformly varying common difference" , is there any formula to calculate the nth term of this sequence?</p>
Hypergeometricx | 168,053 | <p>It's a cascaded form of the summation along a diagonal of Pascal's triangle, otherwise known as the <a href="https://math.stackexchange.com/questions/1490794/proof-of-the-hockey-stick-identity-sum-t-0n-binom-tk-binomn1k1">hockey stick identity</a>, i.e.
$$\sum_{r=0}^m \binom ra=\binom {m+1}{a+1}$$
Assume $p$ levels of cascaded summations. Working from inside out we have
$$\begin{align}
&\quad \sum_{x_0=0}^{n-1}
\sum_{x_1=0}^{x_0-1}
\sum_{x_2=0}^{x_1-1}\cdots
\sum_{x_{p-3}=0}^{x_{p-4}-1}
\sum_{x_{p-2}=0}^{x_{p-3}-1}
\sum_{x_{p-1}=0}^{x_{p-2}-1}
\quad \quad 1\\
&=
\sum_{x_0=0}^{n-1}
\sum_{x_1=0}^{x_0-1}
\sum_{x_2=0}^{x_1-1}\cdots
\sum_{x_{p-3}=0}^{x_{p-4}-1}
\sum_{x_{p-2}=0}^{x_{p-3}-1}
\underbrace{\sum_{x_{p-1}=0}^{x_{p-2}-1}
\binom{x_{p-1}}0}\\
&=
\sum_{x_0=0}^{n-1}
\sum_{x_1=0}^{x_0-1}
\sum_{x_2=0}^{x_1-1}\cdots
\sum_{x_{p-3}=0}^{x_{p-4}-1}
\underbrace{\sum_{x_{p-2}=0}^{x_{p-3}-1}
\quad\binom{x_{p-2}}1}\\
&=
\sum_{x_0=0}^{n-1}
\sum_{x_1=0}^{x_0-1}
\sum_{x_2=0}^{x_1-1}\cdots
\underbrace{\sum_{x_{p-3}=0}^{x_{p-4}-1}
\quad \binom{x_{p-3}}2}\\\\
&=\qquad\vdots\\\\
&=
\sum_{x_0=0}^{n-1}
\sum_{x_1=0}^{x_0-1}
\underbrace{\sum_{x_2=0}^{x_1-1}
\binom{x_2}{p-3}}\\
&=
\sum_{x_0=0}^{n-1}
\underbrace{\sum_{x_1=0}^{x_0-1}
\;\;\;\binom{x_1}{p-2}}\\
&=
\underbrace{\sum_{x_0=0}^{n-1}
\quad\binom{x_0}{p-1}}\\
&=
\qquad\binom{n}{p}
\end{align}$$
i.e.
$$\color{red}{\boxed{\sum_{x_0=0}^{n-1}\sum_{x_1=0}^{x_0-1}\sum_{x_2=0}^{x_1-1}
\cdots \sum_{x_{p-1}=0}^{x_{p-2}-1}1=\binom np}}$$
Putting $p=4$ gives the final equation in your question, i.e.
$$\begin{align}
&\quad \sum_{x_0=0}^{n-1}\sum_{x_1=0}^{x_0-1}\sum_{x_2=0}^{x_1-1}
\sum_{x_{3}=0}^{x_{2}-1}1\\
&=\sum_{x_0=0}^{n-1}\sum_{x_1=0}^{x_0-1}\sum_{x_2=0}^{x_1-1}\binom{x_2}1\\
&=\sum_{x_0=0}^{n-1}\sum_{x_1=0}^{x_0-1}\binom{x_1}2\\
&=\sum_{x_0=0}^{n-1}\binom{x_0}3\\
&=\binom n4\\
&=\frac {n(n-1)(n-2)(n-3)}{4!}\end{align}$$</p>
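<p>A numerical check (my addition, not part of the original answer): the boxed identity can be verified by brute force for small <span class="math-container">$n$</span> and <span class="math-container">$p$</span>, counting strictly decreasing chains <span class="math-container">$n-1\ge x_0>x_1>\cdots>x_{p-1}\ge 0$</span>:</p>

```python
from itertools import product
from math import comb

def cascaded_sum(n, p):
    # Count tuples with n-1 >= x_0 > x_1 > ... > x_{p-1} >= 0,
    # i.e. the value of the cascaded sum in the boxed identity.
    count = 0
    for chain in product(range(n), repeat=p):
        if all(chain[i] > chain[i + 1] for i in range(p - 1)):
            count += 1
    return count

# Every small case agrees with binomial(n, p).
for n in range(1, 9):
    for p in range(1, 5):
        assert cascaded_sum(n, p) == comb(n, p), (n, p)
```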
|
26,259 | <p>I've been reading <em>generatingfunctionology</em> by Herbert S. Wilf (you can find a copy of the second edition on the author's page <a href="http://www.math.upenn.edu/~wilf/DownldGF.html" rel="nofollow">here</a>).</p>
<p>On page 33 he does something I find weird. He wants to shuffle the index forward and does so like this:
\begin{align*}
(f_{n+1})_{n\in N_0} &= \frac{(f(X)-f(0))}{X}\\
(f_{n})_{n\in N_0} &= \sum_{n\in N_0} f_nx^n
\end{align*}</p>
<p>Why is this allowed? Namely $X$ has a noninvertible (0) constant term, so how is this division (multiplication with the reciprocal) defined within the ring of formal power series?</p>
| Jonas Meyer | 1,424 | <p>If a power series $g(X)$ has zero constant term, then it is true that it is not invertible, so you cannot divide by it in general. However, it may happen that you can divide <em>some</em> power series by $g(X)$, namely those that have $g(X)$ as a factor. If $f(X)$ is a power series and $g(X)$ is a nonzero power series, then we can divide $h(X)=g(X)f(X)$ by $g(X)$ by defining $\frac{h(X)}{g(X)}$ to be $f(X)$. This is unambiguous because, as Qiaochu noted, the power series ring is an integral domain.</p>
<p>This is what's going on here. Because $f(X)-f(0)$ has zero constant term, you can write it as $Xg(X)$ for some power series $g(X)$, and $\frac{f(X)-f(0)}{X}$ is another name for $g(X)$.</p>
|
15,124 | <p>Let $X \geq 1$ be an integer r.v. with $E[X]=\mu$. Let $X_i$ be a sequence of iid rvs with the distribution of $X$. On the integer line, we start at $0$, and want to know the expected position after we first cross $K$, which is some fixed integer. Each next position is determined by adding $X_i$ to the previous position. So the question is, if we stop this process after the first time $\tau$ for which $Y_{\tau}=\sum_{i=1}^{\tau}X_i > K$, that is, after the first time it crosses $K$, then what is $E[Y_{\tau}-K]$?. Can we get a bound of $O(\mu)$?</p>
| Yuval Filmus | 1,277 | <p>Edit: This answer shows what happens when $X$ could be zero (contrary to the question).</p>
<p>Let $K = 1$, and $X = M$ ($M > 1$) with probability $p$ and $0$ otherwise. Thus $E[Y_\tau - K] = M-1$ whereas $E[X] = pM$. Choosing $M = n+1$ and $p = 1/n(n+1)$ we get $E[X] = 1/n$ and $E[Y_\tau - K] = n$, so a bound of $O(\mu)$ isn't possible.</p>
|
1,511,733 | <p>B = matrix given below. I is identity matrix.</p>
<pre><code> [1 2 3 4]
[3 2 4 3]
[1 3 2 4]
[5 4 3 7]
</code></pre>
<p>So What will be the relation between the matrices A and C if AB = I and BC = I?</p>
<p>I think that A = C because both AB and BC have B in common and both of their product is an identity matrix but I'm not sure. </p>
| charles | 286,740 | <p>$B$ is invertible: $AB=I$ exhibits a left inverse and $BC=I$ a right inverse, and a square matrix with both is invertible. In fact, invertibility is not even needed to conclude: by associativity,
$$A = AI = A(BC) = (AB)C = IC = C.$$
Therefore $A = C$ (and both equal $B^{-1}$).</p>
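<p>As a quick illustration (my addition), one can check numerically with the specific $B$ from the question that it is invertible and that the solutions of $AB=I$ and $BC=I$ coincide:</p>

```python
import numpy as np

# The matrix B from the question.
B = np.array([[1, 2, 3, 4],
              [3, 2, 4, 3],
              [1, 3, 2, 4],
              [5, 4, 3, 7]], dtype=float)

assert abs(np.linalg.det(B)) > 1e-9          # B is invertible
A = np.linalg.solve(B.T, np.eye(4)).T        # unique A with A B = I
C = np.linalg.solve(B, np.eye(4))            # unique C with B C = I
assert np.allclose(A, C)                     # and they agree: A = C = B^{-1}
```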
|
821,768 | <p><img src="https://i.stack.imgur.com/bS4PE.png" alt="enter image description here"></p>
<p>In the rectangle ABCD,
$$1. \, BE = EF = FC = AB$$
$$2. \, \angle AEB = \beta , \angle AFB = \alpha , \angle ACB = \theta. $$
Prove that $\alpha + \theta = \beta$.</p>
<p>I have so far obtained that - $$1. \cos\beta = \sin \beta$$ $$2.\cos\alpha = 2\sin\alpha$$ $$3. \cos \theta = 3\sin \theta$$
But I am not able to understand what to do next. Please help.</p>
| ajotatxe | 132,456 | <p>It is clear that the three angles are acute. Moreover,</p>
<p>$$\tan\beta=2\tan\alpha=3\tan\theta$$</p>
<p>$$\begin{align}
\tan(\alpha+\theta)&=\frac{\tan\alpha+\tan\theta}{1-\tan\alpha\tan\theta}\\
&=\frac{\frac56\tan\beta}{1-\frac{\tan^2\beta}6}\\
&=\frac{5\tan\beta}{6-\tan^2\beta}
\end{align}$$</p>
<p>So $\tan(\alpha+\theta)=\tan\beta$ holds exactly when $6-\tan^2\beta=5$, that is, when $\tan\beta=1$, i.e. $\beta=45^\circ$. This is indeed the case here: since $AB=BE$, triangle $ABE$ is an isosceles right triangle, so $\tan\beta=1$. Then $\tan(\alpha+\theta)=1$ with $0^\circ<\alpha+\theta<180^\circ$ forces $\alpha+\theta=45^\circ=\beta$.</p>
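<p>Numerically (a sketch, my addition): taking $AB=BE=EF=FC=1$ gives $\tan\alpha=\tfrac12$, $\tan\theta=\tfrac13$, $\tan\beta=1$, and the identity reduces to $\arctan\tfrac12+\arctan\tfrac13=\arctan 1$:</p>

```python
import math

alpha = math.atan(1 / 2)  # tan(alpha) = AB / BF = 1/2
theta = math.atan(1 / 3)  # tan(theta) = AB / BC = 1/3
beta = math.atan(1.0)     # tan(beta)  = AB / BE = 1

assert math.isclose(alpha + theta, beta)        # alpha + theta = beta
assert math.isclose(math.degrees(beta), 45.0)   # beta = 45 degrees
```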
|
3,156,962 | <blockquote>
<p>Find general term of <span class="math-container">$1+\frac{2!}{3}+\frac{3!}{11}+\frac{4!}{43}+\frac{5!}{171}+....$</span></p>
</blockquote>
<p>However, the exercise asks me to check convergence, but how can I do that before knowing the general term? I can't see any pattern; any quick comment would be appreciated!</p>
| Daniel Schepler | 337,888 | <p>Note that <span class="math-container">$\alpha^3 - 3 \alpha \beta^2$</span> and <span class="math-container">$3\beta\alpha^2 - \beta^3$</span> are the real and imaginary parts respectively of <span class="math-container">$(\alpha + i \beta)^3$</span>; this suggests that it will be useful to work in the ring of Gaussian integers <span class="math-container">$\mathbb{Z}[i]$</span>.</p>
<p>Now, suppose that for some integer prime <span class="math-container">$p$</span>, <span class="math-container">$p \mid \alpha^3 - 3\alpha \beta^2$</span> and <span class="math-container">$p\mid 3\beta\alpha^2 - \beta^3$</span>; then <span class="math-container">$p \mid (\alpha + i \beta)^3$</span>. If <span class="math-container">$p \equiv 3 \pmod{4}$</span>, then <span class="math-container">$p$</span> remains irreducible in <span class="math-container">$\mathbb{Z}[i]$</span>, so <span class="math-container">$p \mid \alpha + i \beta$</span>, implying that <span class="math-container">$p \mid \gcd_{\mathbb{Z}}(\alpha, \beta)$</span> which gives a contradiction. Similarly, if <span class="math-container">$p \equiv 1 \pmod{4}$</span>, then the factorization of <span class="math-container">$p$</span> into irreducibles is of the form <span class="math-container">$p = (c + di) (c - di)$</span> for some <span class="math-container">$c, d \in \mathbb{Z}$</span>. Thus, <span class="math-container">$c + di \mid (\alpha + i \beta)^3$</span> implies <span class="math-container">$c + di \mid \alpha + i\beta$</span> and <span class="math-container">$c - di \mid (\alpha + i\beta)^3$</span> implies <span class="math-container">$c - di \mid \alpha + i \beta$</span>. Since <span class="math-container">$c + di$</span> and <span class="math-container">$c - di$</span> are relatively prime in <span class="math-container">$\mathbb{Z}[i]$</span> (being irreducibles which do not differ by multiplication by a unit), this implies <span class="math-container">$(c + di) (c - di) \mid \alpha + i \beta$</span>; in other words, <span class="math-container">$p \mid \alpha + i \beta$</span>, again giving a contradiction.</p>
<p>The only remaining possibility is <span class="math-container">$p = 2$</span> which factors as <span class="math-container">$-i (1+i)^2$</span> in <span class="math-container">$\mathbb{Z}[i]$</span>. Now, by a similar argument to the above we have <span class="math-container">$(1 + i)^2 \nmid \alpha + \beta i$</span>, so the order of <span class="math-container">$1+i$</span> in <span class="math-container">$(\alpha + \beta i)^3$</span> is either 0 or 3. In the former case we get that <span class="math-container">$\gcd(\alpha^3 - 3\alpha\beta^2, 3\beta\alpha^2 - \beta^3) = 1$</span>, and in the latter case we get that <span class="math-container">$\gcd(\alpha^3 - 3\alpha\beta^2, 3\beta\alpha^2 - \beta^3) = 2$</span>.</p>
<hr>
<p>Note that this solution is very closely tied to the exact form of the polynomials under consideration; whereas the general idea of John Omielan's answer should be more generally applicable to other cases.</p>
|
215,864 | <p>I ran into the following problem, which says that if you have a smooth curve evolving over time (say of finite length at the beginning), then</p>
<p>$$\frac{d}{dt}(curve \; length \; at \; time \; t)=-\int_{curve} k\cdot v \; ds,$$</p>
<p>where $k$ is curvature of the curve and $v$ is velocity of point on curve. $ds$ represents integration by parametrization by arclength. </p>
<p>I have tried proving this, but I cannot get anywhere without reparametrizing the curves. Is there some neater way to do this?</p>
| Joseph O'Rourke | 237 | <p>This is not as comprehensive an answer as John Mangual's.
Consider it, rather, supplemental information to provide intuition,
drawn from the textbook, <a href="http://cs.smith.edu/~orourke/DCG/" rel="nofollow noreferrer"><em>Discrete and Computational Geometry</em></a>:
<img src="https://i.stack.imgur.com/AlS2R.png" alt="Eq.5.1">
<br />
<img src="https://i.stack.imgur.com/fAspT.png" alt="Fig.5.23"></p>
|
877,687 | <p>So my textbook's explanation of the derivative of e is very sketchy. They used lots of approximations and plugging things into the calculator. Basically I want to know how you can work out as h approaches 0</p>
<p>$$
\lim_{h\to0}\frac {10^{x+h}-10^x }h
$$</p>
| Community | -1 | <p>The curve you see is by <a href="http://en.m.wikipedia.org/wiki/B%C3%A9zier_curve#Quadratic_B.C3.A9zier_curves" rel="nofollow">definition</a> a quadratic bézier curve which is always a segment of a parabola. </p>
|
1,518,697 | <blockquote>
<p>For this homework exercise, we are asked to show that the ideal $I=(3,1+\sqrt{-5})$ is a flat $\mathbb{Z}[\sqrt{-5}]$-module. The hint is to show that $I$ becomes principal (and thus free as a module) when we invert $2$ <strong>or</strong> $3$, so that $I$ is locally flat.</p>
</blockquote>
<p>I'm having trouble understanding what happens to $I$ when we invert $2$ <strong>or</strong> $3$. I would say that if we invert $3$, then $3$ becomes a unit. How does this make $I$ principal? If we invert $2$, I don't see how this gets us anywhere.</p>
<p>Furthermore, I don't understand why it helps to prove that $I$ is locally flat. There is a lemma in our course notes saying that if $M$ is a finitely generated $R$-module in a local ring $R$, then $M$ is flat iff $M$ is free. So if $I$ is locally flat, why is it flat as well?</p>
<p>I hope my confusion is coming across. <strong>Any</strong> help relating to these questions is appreciated, and if anything is unclear, please tell me.</p>
| user26857 | 121,097 | <p>$I\mathbb Z[\sqrt{-5}]_3=\mathbb Z[\sqrt{-5}]_3$, and $I\mathbb Z[\sqrt{-5}]_2=(1+\sqrt{-5})\mathbb Z[\sqrt{-5}]_2\simeq\mathbb Z[\sqrt{-5}]_2$, so in both cases we get a free module. Now use that $2$ and $3$ are coprime in $\mathbb Z[\sqrt{-5}]$.</p>
<p><strong>Remark.</strong> In fact, $\mathbb Z[\sqrt{-5}]$ is a Dedekind domain, so all its ideals are invertible, that is, projective.</p>
|
1,811,581 | <p>When discussing the order relation on $\mathbb{C}$, it is said that such a statement as $z_1 < z_2$ where $z_1, z_2 \in \mathbb{C}$ is meaningless, unless $z_1$ and $z_2$ are real.</p>
<p>My question is, when will a complex number $z$ be real? I know that if $\bar{z}$ is the conjugate of $z$, then</p>
<p>$$z + \bar{z} = 2a$$
$$z\bar{z} = a^2+b^2$$</p>
<p>produce real numbers, but it is easy to add $0i$ to either equation to produce a complex number.</p>
| Felicity | 332,295 | <p>$\mathrm{span}\left\{\begin{bmatrix}
1\\0
\end{bmatrix},\begin{bmatrix}
0\\1
\end{bmatrix}\right\}$ is the set of linear combinations of the two vectors $\begin{bmatrix}
1\\0
\end{bmatrix}$
and $\begin{bmatrix}
0\\1
\end{bmatrix}$.</p>
<p>In other words, $\mathrm{span}\left\{\begin{bmatrix}
1\\0
\end{bmatrix},\begin{bmatrix}
0\\1
\end{bmatrix}\right\}=\left\{a\begin{bmatrix}
1\\0
\end{bmatrix}+b\begin{bmatrix}
0\\1
\end{bmatrix}:a,b\in\mathbb{R}\right\}$.</p>
<p>Thus, $\begin{bmatrix}
1\\1
\end{bmatrix}$ is in the span.</p>
<p>In particular, the two vectors $\begin{bmatrix}
1\\0
\end{bmatrix}$
and $\begin{bmatrix}
0\\1
\end{bmatrix}$ form the <em>standard basis</em> of the vector space $\mathbb{R}^2$. Thus, they span $\mathbb{R}^2$ and so do any two linearly independent vectors in $\mathbb{R}^2$.</p>
|
1,170,602 | <p>How to evaluate the integral </p>
<p>$$\int \sqrt{\sec x} \, dx$$</p>
<p>I read that it's not defined.<br>
But why is that so? Does it contradict some basic rules?
Please clarify.</p>
| Aaron Maroja | 143,413 | <p>First notice that $\cos x = 1 - 2\sin^2 \Big(\frac{x}{2}\Big)$ then </p>
<p>$$\int \frac{1}{\sqrt{\cos x}} \, dx = \int \frac{1}{\sqrt{1 - 2\sin^2 \Big(\frac{x}{2}\Big)}} \, dx = \color{red}{2F\Big(\left.\frac{x}{2}\right\vert 2\Big)} + C$$</p>
<p>where $F(\left.x\right\vert m)$ is an <a href="http://mathworld.wolfram.com/EllipticIntegraloftheFirstKind.html" rel="noreferrer">Elliptic Integral of First kind</a>.</p>
|
1,502,309 | <p>The initial notation is:</p>
<p>$$\sum_{n=5}^\infty \frac{8}{n^2 -1}$$</p>
<p>I get to about here then I get confused.</p>
<p>$$\left(1-\frac{2}{3}\right)+\left(\frac{4}{5}-\frac{4}{7}\right)+...+\left(\frac{4}{n-3}-\frac{4}{n-1}\right)+...$$</p>
<p>How do you figure out how to get the $\frac{1}{n-3}-\frac{1}{n-1}$ and so on? Like where does the $n-3$ come from or the $n-1$.</p>
| Mark Viola | 218,419 | <p>The correct way to analyze this is to write</p>
<p>$$\begin{align}
\sum_{n=5}^N\frac{2}{n^2-1}&=\sum_{n=5}^{N}\left(\frac{1}{n-1}-\frac{1}{n+1}\right)\\\\
&=\left(\frac14-\frac16\right)+\left(\frac15-\frac17\right)+\left(\frac16-\frac18\right)+\cdots \\\\
&+\left(\frac1{N-3}-\frac1{N-1}\right)+\left(\frac1{N-2}-\frac1N\right)+\left(\frac1{N-1}-\frac1{N+1}\right)\\\\
&=\frac14+\frac15-\frac{1}{N}-\frac{1}{N+1}
\end{align}$$</p>
<p>Therefore,</p>
<p>$$\sum_{n=5}^\infty\frac{8}{n^2-1}=4\lim_{N\to \infty}\left(\frac14+\frac15-\frac{1}{N}-\frac{1}{N+1}\right)=\frac95$$</p>
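<p>A quick numerical sanity check (my addition): partial sums match the telescoped closed form and approach $\tfrac95$:</p>

```python
# Partial sums of sum_{n=5}^infty 8/(n^2 - 1) should approach 9/5.
def partial_sum(N):
    return sum(8 / (n**2 - 1) for n in range(5, N + 1))

# The telescoping gives partial_sum(N) = 4*(1/4 + 1/5 - 1/N - 1/(N+1)).
for N in (10, 100, 1000):
    predicted = 4 * (1/4 + 1/5 - 1/N - 1/(N + 1))
    assert abs(partial_sum(N) - predicted) < 1e-12

assert abs(partial_sum(10**6) - 9/5) < 1e-5
```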
|
3,501,052 | <p>I want to find the number of real roots of the polynomial <span class="math-container">$x^3+7x^2+6x+5$</span>.
Using Descartes' rule of signs, this polynomial has either one real root or 3 real roots (all negative). How can we settle on one answer without some long process?</p>
| Kori | 229,036 | <p>If you actually want the roots of a cubic, you can use the <a href="https://www.purplemath.com/modules/rtnlroot.htm" rel="nofollow noreferrer">rational root test</a> to find one of the roots. After you have one of the roots you can easily do a long division to find the resulting second degree polynomial from there you just apply the quadratic formula. This will yield the exact roots, but if you want to just know if you have three real or one real and two complex that is a different scenario.</p>
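<p>One caveat and a possible shortcut (my addition): for this particular cubic the rational root test yields nothing, since none of $\pm1,\pm5$ is a root. To answer only the "how many real roots" question, the cubic discriminant $\Delta=18abcd-4b^3d+b^2c^2-4ac^3-27a^2d^2$ suffices: $\Delta>0$ means three distinct real roots, while $\Delta<0$ means exactly one real root. A sketch:</p>

```python
# Discriminant of a cubic a*x^3 + b*x^2 + c*x + d:
#   delta > 0  -> three distinct real roots
#   delta < 0  -> one real root and two complex-conjugate roots
def cubic_discriminant(a, b, c, d):
    return 18*a*b*c*d - 4*b**3*d + b**2*c**2 - 4*a*c**3 - 27*a**2*d**2

delta = cubic_discriminant(1, 7, 6, 5)   # x^3 + 7x^2 + 6x + 5
assert delta < 0                         # so exactly one real root
```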
|
430,654 | <p>Show that this sequence converges and find the limit.
$a_1 = 0$, $a_{n+1} = \sqrt{5+2a_{n} }$ </p>
| Angela Pretorius | 15,624 | <p>For convergence, note that the map $f(a)=\sqrt{5+2a}$ satisfies $0<f'(a)=\dfrac{1}{\sqrt{5+2a}}\le\dfrac{1}{\sqrt5}<1$ for all $a\ge 0$, so $f$ is a contraction on $[0,\infty)$ and the iteration $a_{n+1}=f(a_n)$ converges to its unique fixed point there. The limit satisfies $\displaystyle a=\sqrt{5+2a}$ with $a>0$ (since $a_1=0$ and every later term is positive), i.e. $a^2-2a-5=0$, giving $\displaystyle a=1+\sqrt{6}$.</p>
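<p>Iterating the recursion numerically (a quick check, not a proof) shows the convergence to $1+\sqrt6\approx 3.449$:</p>

```python
import math

a = 0.0                       # a_1 = 0
for _ in range(60):
    a = math.sqrt(5 + 2 * a)  # a_{n+1} = sqrt(5 + 2 a_n)

limit = 1 + math.sqrt(6)      # positive root of a^2 = 2a + 5
assert abs(a - limit) < 1e-12
```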
|
941,632 | <p>Is the Set $$S=\{e^{2x},e^{3x}\}$$ linearly independent?? And answer says Linearly independent over any interval $(a,b)$,only when $0$ doesnot belong to $(a,b)$</p>
<p>How do I proceed??</p>
<p>Thanks for the help!!</p>
| egreg | 62,967 | <p>I assume that you're working on the vector space of continuous real maps over the interval $(a,b)$.</p>
<p>So let's consider the two functions $f(x)=e^{2x}$ and $g(x)=e^{3x}$ and suppose that
$$
\alpha f+\beta g = 0.
$$
This means that, for every $x\in(a,b)$, we have
$$
\alpha f(x)+\beta g(x)=0
$$
and, in particular, for $x=c$ and $x=d$, where we assume $a<c<d<b$ (which is possible whenever $a<b$, and $b$ may also be infinity); then
$$
\begin{cases}
\alpha e^{2c}+\beta e^{3c}=0\\
\alpha e^{2d}+\beta e^{3d}=0
\end{cases}
$$
We can divide the first equation by $e^{2c}\ne0$ and the second by $e^{2d}$ getting
$$
\begin{cases}
\alpha+\beta e^{c}=0\\
\alpha+\beta e^{d}=0
\end{cases}
$$
Subtract the first from the second to get
$$
\beta(e^d-e^c)=0
$$
Since $c<d$, we have $e^d\ne e^c$, so we conclude $\beta=0$ and, substituting in the first equation, also $\alpha=0$.</p>
<p>So the two functions are linearly independent no matter whether $0\in(a,b)$ or not.</p>
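<p>A small numerical sketch of the same argument (my addition): the determinant of the $2\times2$ system at two points $c<d$ equals $e^{2c}e^{3d}-e^{3c}e^{2d}=e^{2c+2d}(e^d-e^c)\ne0$, so only the trivial solution exists, with or without $0$ in the interval:</p>

```python
import math

def system_det(c, d):
    # Determinant of the 2x2 system  alpha*e^{2t} + beta*e^{3t} = 0  at t = c, d.
    return math.exp(2*c) * math.exp(3*d) - math.exp(3*c) * math.exp(2*d)

# Nonzero for any c != d, so only alpha = beta = 0 solves the system;
# independence holds whether or not 0 lies in (a, b).
for c, d in [(-2.0, -1.0), (-0.5, 0.5), (1.0, 3.0)]:
    assert abs(system_det(c, d)) > 1e-12
```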
|
3,278,797 | <p>I tried to solve it and got the answer '3', but that is just my intuition; I don't have a concrete method to prove it. I reasoned like this: in order to maximize the fraction, we need to minimize the denominator. So if I plug '1' into the expression, the denominator becomes '1'. Now that the denominator is minimized, the result of the expression is '3'. That's how I reached my conclusion, but I can't prove that "3 is the solution" mathematically.
Can anyone show me how to prove it properly?</p>
| David K | 139,123 | <p>You have the correct result but the wrong reasons.</p>
<p>First of all, you assume that a ratio can be maximized by minimizing the denominator. This is true if the numerator is constant, but the numerator here is not constant.</p>
<p>Second, you decided that the minimum value of the denominator is <span class="math-container">$1.$</span>
That is not true.
The minimum value of <span class="math-container">$x^2 - x + 1$</span> is <span class="math-container">$\frac34,$</span> which is attained when <span class="math-container">$x=\frac12.$</span></p>
<p>You tried to apply some intuition, which is a good thing to try,
but then you questioned the intuition, which is also a good thing to do.</p>
|
3,492,856 | <p><a href="https://i.stack.imgur.com/FHCP2.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FHCP2.jpg" alt="This is the question"></a> My solution:-</p>
<p>Since <span class="math-container">$OA=AB$</span>, let us find OA first.
<span class="math-container">$OA=\sqrt{(18-0)^2+(3-0)^2}=3\sqrt{37}$</span>. So, <span class="math-container">$AB=3\sqrt{37}$</span>. So,
<span class="math-container">$$\sqrt{(15-18)^2+(k-3)^2}=3\sqrt{37}$$</span>
<span class="math-container">$$\implies(-3)^2+(k-3)^2=333$$</span>
<span class="math-container">$$\implies k^2-6k+9=324$$</span>
<span class="math-container">$$\implies k^2-6k-315=0$$</span>
Using a calculator,
<span class="math-container">$$k=21$$</span> or, <span class="math-container">$$k=-15$$</span></p>
<p>Is my solution correct?</p>
<p>I'm asking this because my book only mentions k=21 but not k=-15. But I don't find k=-15 to be an extraneous root, as the equation holds true using -15 as the solution as well. So will the answer be both 21 and -15, or only 21?</p>
<p>If there's any problem in my question please let me know. Thanks in advance! </p>
| Math1000 | 38,584 | <p>Let <span class="math-container">$T_1:V\to W$</span> and <span class="math-container">$T_2:W\to U$</span> be linear maps. Then <span class="math-container">$$\ker T_1 = \{v\in V : T_1v=0\}\subset V,$$</span> whereas <span class="math-container">$$\ker T_2\circ T_1 = \{v\in V: T_2(T_1(v)) = 0\}\subset V.$$</span> Indeed, if <span class="math-container">$v\in\ker T_1$</span> then <span class="math-container">$T_2(T_1(v)) = T_20 = 0$</span> so that <span class="math-container">$v\in\ker T_2\circ T_1$</span> and hence <span class="math-container">$\ker T_1$</span> is a subspace of <span class="math-container">$\ker T_2\circ T_1$</span>.</p>
|
5,897 | <p>The following creates a button to select a notebook to run. When the button is pressed it seems that Mathematica finds the notebook but cannot evaluate it. The following error occurs</p>
<blockquote>
<p>Could not process unknown packet "1"</p>
</blockquote>
<pre><code>Button["run file 1",
NotebookEvaluate[
"/../file1.nb"]]
</code></pre>
<p>This occurs under Mathematica 8 on all platforms.</p>
<p>Any help greatly appreciated,
Christina</p>
| celtschk | 129 | <p>To make it work, use</p>
<pre><code>Button["run file 1", NotebookEvaluate["/../file1.nb"], Method->"Queued"]
</code></pre>
<p>By default, <code>Button</code> evaluates its action preemptively, and a preemptive evaluation cannot process the packet traffic that <code>NotebookEvaluate</code> generates, which is what produces the "unknown packet" error. Setting <code>Method->"Queued"</code> sends the action to the kernel's main evaluation queue instead, where <code>NotebookEvaluate</code> works normally.</p>
467,301 | <p>I'm reading Intro to Topology by Mendelson.</p>
<p>The problem statement is in the title.</p>
<p>My attempt at the proof is:</p>
<p>Since $X$ is a compact metric space, for each $n\in\mathbb{N}$, there exists $\{x_1^n,\dots,x_p^n\}$ such that $X\subset\bigcup\limits_{i=1}^p B(x_i^n;\frac{1}{n})$. Let $K=\frac{2p}{n}$. Then for each $x,y\in X$, $x\in B(x_i^n;\frac{1}{n})$ and $y\in B(x_j^n;\frac{1}{n})$ for some $i,j=1,\dots,p$. Thus, $d(x,y)\leq\frac{2p}{n}$.</p>
<p>The approach I was taking is taking $K$ to be the addition of the diameters of each open ball in the covering for $X$, that way, for any two elements in $X$, the distance between them must be less than the overall length of the covering. Did I say this mathematically or are there holes I need to fill in?</p>
<p>Thanks for any help or feedback!</p>
| Prahlad Vaidyanathan | 89,789 | <p>By compactness, there are finitely many points $x_1, x_2, \ldots, x_n$ such that
$$
X = \cup_{i=1}^n B(x_i, 1)
$$
Now for any two points $x, y \in X$, choose $x_i$ and $x_j$ such that
$$
d(x, x_i) < 1 \qquad d(y,x_j) < 1
$$
Then
$$
d(x,y) \leq d(x,x_i) + d(x_i, x_j) + d(x_j, y) < 2 + d(x_i, x_j)
$$
So choose
$$
K = 2 + \max\{d(x_i,x_j) : 1\leq i, j \leq n\}
$$</p>
|
40,709 | <p>Wolfram's MathWorld website, at the page on <a href="http://mathworld.wolfram.com/Function.html" rel="nofollow">functions</a>, makes the following claim about the notation $f(x)$ for a function:</p>
<blockquote>
<p>While this notation is deprecated by professional mathematicians, it is the more familiar one for most nonprofessionals.</p>
</blockquote>
<p>From context, it appears that this is referring to the use of $f(x)$ to refer to <em>the actual function</em>, rather than just to a particular value, when $x$ is (in the context) a dummy variable.</p>
<p>Is this true? Do professional mathematicians "deprecate" this notation?</p>
<p>To avoid long and windy discussions as to the values or otherwise of this notation (which would be much more appropriate in a blog), this question should be viewed as a poll. As MO runs on StackExchange 1.0, it doesn't have the feature whereby the actual "up" and "down" votes for an answer can be easily seen. Therefore I shall post two answers, one in favour and one against, the following statement. Please <strong>only vote up</strong>. A vote for one answer will be taken as a vote against the other. The Law of the Excluded Middle does not hold here. The motion is:</p>
<blockquote>
<p>This house believes that the notation $f(x)$ to refer to a function has value in professional mathematics and that there is no need to apologise or feel embarrassed when using it thus.</p>
</blockquote>
<p><strong>This poll has now run its course. The final tally can be seen below.</strong></p>
| Andrew Stacey | 45 | <p>Vote for this answer if you <strong>agree</strong> with the statement:</p>
<blockquote>
<p>This house believes that the notation $f(x)$ to refer to a function has value in professional mathematics and that there is no need to apologise or feel embarrassed when using it thus.</p>
</blockquote>
<p>(Note: the answer is CW so that this is a genuine poll)</p>
|
3,336,506 | <p>Let <span class="math-container">$V=\mathbb R^3$</span> be an inner product space with the standard inner product (that means <span class="math-container">$\langle(x_1,y_1,z_1),(x_2,y_2,z_2)\rangle=x_1x_2+y_1y_2+z_1z_2$</span>).<br>
<span class="math-container">$U=span\{(1,2,3),(1,2,1)\}\subseteq V$</span> </p>
<p>a) Find an orthonormal basis for <span class="math-container">$U$</span><br>
b) Find an explicit formula for the orthogonal projection <span class="math-container">$P_U: V\rightarrow U$</span></p>
<p>So I found an orthonormal basis for <span class="math-container">$V$</span>, denote that basis <span class="math-container">$B=(b_1,b_2)=\bigg(\frac{1}{\sqrt6}(1,2,1),\frac{1}{\sqrt{30}}(-1,-2,5)\bigg)$</span>, and I'm totally sure it's an orthonormal basis, so no problem with that. On the other hand, what I'm having trouble with is finding the explicit formula. What I did was to use the following formula for the orthogonal projection:</p>
<p>Let <span class="math-container">$W\subseteq V$</span> be a vector subspace, where <span class="math-container">$V$</span> is an inner product vector space of finite dimension, and let <span class="math-container">$S=\{s_1,\dots,s_k\}$</span> be an orthonormal basis for <span class="math-container">$W$</span>, then the orthogonal projection upon <span class="math-container">$W$</span> is given by: <span class="math-container">$P_W(u)=\sum_{i=1}^k\langle u,s_i\rangle\cdot s_i$</span></p>
<p>This is the way I used it:<br>
<span class="math-container">$P_U(x,y,z)=\langle(x,y,z),b_1\rangle\cdot b_1+\langle(x,y,z),b_2\rangle\cdot b_2=\dots=\frac{1}{5}(x+2y,2x+4y,5z)$</span></p>
<p>The problem is that I got a projection upon <span class="math-container">$\mathbb R^3$</span>, instead of only upon <span class="math-container">$U$</span> (I took vectors<span class="math-container">$\in\mathbb R^3\backslash U$</span> to find that out), and I dont get what is wrong with what I did.</p>
<p>Any help would be appreciated.</p>
<p>Edit: I found this similar post: <a href="https://math.stackexchange.com/questions/1809883/orthogonal-projection-formula-question?rq=1">orthogonal projection formula question</a> , but since we didn't learned cross products I can't use the solution given there.</p>
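<p>For reference, here is a numerical check of the formula above (my addition; the vector <span class="math-container">$(2,-1,0)$</span> used below is orthogonal to both spanning vectors of <span class="math-container">$U$</span>, as is easy to verify). It confirms that <span class="math-container">$P$</span> is idempotent, self-adjoint, fixes <span class="math-container">$U$</span> and kills <span class="math-container">$U^\perp$</span>, so the formula really is the orthogonal projection onto the plane <span class="math-container">$U$</span>:</p>

```python
import numpy as np

b1 = np.array([1, 2, 1]) / np.sqrt(6)
b2 = np.array([-1, -2, 5]) / np.sqrt(30)
P = np.outer(b1, b1) + np.outer(b2, b2)   # P = b1 b1^T + b2 b2^T

# Matrix of the derived formula P(x,y,z) = (1/5)*(x + 2y, 2x + 4y, 5z)
P_formula = np.array([[1, 2, 0],
                      [2, 4, 0],
                      [0, 0, 5]]) / 5.0

assert np.allclose(P, P_formula)                          # formula matches b1, b2
assert np.allclose(P @ P, P)                              # idempotent
assert np.allclose(P, P.T)                                # self-adjoint
assert np.allclose(P @ np.array([1, 2, 3]), [1, 2, 3])    # fixes U
assert np.allclose(P @ np.array([2, -1, 0]), 0.0)         # kills U-perp
```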
| reuns | 276,986 | <p>It is not a distribution on <span class="math-container">$\Bbb{R}^n$</span> because <span class="math-container">$$ \lim_{r \to 0}\int_{\mathbb{R}^n} \phi(x)\frac{1_{|x|> r}}{|x|^n} d^n x $$</span> diverges whenever <span class="math-container">$\phi(0) \ne 0$</span>. What is a (tempered) distribution is <span class="math-container">$$<T,\phi>= \int_{\mathbb{R}^n} (\phi(x)-\phi(0) 1_{|x|<1})\frac{1}{|x|^n} d^n x $$</span>
And <span class="math-container">$$T \ast \phi(y)= \int_{\mathbb{R}^n} (\phi(y-x)-\phi(y) 1_{|x|<1})\frac{1}{|x|^n} d^n x $$</span>
The choice of <span class="math-container">$1_{|x|<1}$</span> is non-canonical but you can call it <span class="math-container">$$T=fp(\frac1{|x|^n})$$</span> (finite part) everyone will understand that you meant <span class="math-container">$$<fp(\frac1{|x|^n}),\phi>= \int_{\mathbb{R}^n} (\phi(x)-\phi(0) \psi(x))\frac{1}{|x|^n} d^n x=<T+c\delta ,\phi>$$</span> for some <span class="math-container">$\psi(0) = 1$</span></p>
|
1,204,864 | <blockquote>
<p>$$\text{Find }\,\dfrac{d}{dx}\Big(\cos^2(5x+1)\Big).$$</p>
</blockquote>
<p>I have tried using the rules outlined in my standard derivatives notes but I've failed to find the point of application.</p>
| abel | 9,252 | <p>you can also use $$2y = 2\cos^2(5x+1) = \cos(10x + 2) + 1$$ so that $$2\frac{dy}{dx} = -10\sin(10x+2)\to \frac{dy}{dx} = -5\sin(10x+2) $$</p>
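<p>The chain-rule and double-angle routes agree; a quick finite-difference check (my addition):</p>

```python
import math

def y(x):
    return math.cos(5 * x + 1) ** 2

def claimed_derivative(x):
    # Chain rule: -10 cos(5x+1) sin(5x+1), which the double-angle
    # identity turns into -5 sin(10x+2).
    return -5 * math.sin(10 * x + 2)

h = 1e-6
for x in (-1.0, 0.0, 0.3, 2.0):
    numeric = (y(x + h) - y(x - h)) / (2 * h)  # central difference
    assert abs(numeric - claimed_derivative(x)) < 1e-6
```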
|
1,102,668 | <p><a href="https://math.stackexchange.com/q/67994/198434">This question</a> shows how dividing both sides of an equation by some $f(x)$ may eliminate some solutions, namely $f(x)=0$. Naturally, all examples admit $f(x)=0$ as a solution to prove the point.</p>
<p>I tried to find a simple example of an equation that could be solved by dividing both sides by some $f(x)$ but where $f(x)=0$ was not a solution, and failed miserably. Sure, I can divide both sides of, let's say, $x^2-1=0$ by $x$ ($x=0$ is not a solution), but that doesn't help me solve the equation.</p>
<p>I started wondering if actually the equations that can be solved (or at least simplified) by dividing both sides by some $f(x)$ were precisely those where $f(x)=0$ is a solution: by removing a solution, the division reduces the equation to a simpler form. This is particularly obvious with this example given in that question's <a href="https://math.stackexchange.com/a/68001/198434">accepted answer</a>:</p>
<p>$$(x-1)(x-2)(x-3)(x-4)(x-5)(x-6)=0.$$</p>
<p>By successively dividing by $x-1$, $x-2$ and so on, the equation becomes simpler as the solutions are removed, until there's no solution left ($1=0$).</p>
<p>However, both the accepted answer and the quote in the question itself say that $f(x)=0$ <em>may</em> be a solution, which I also understand as it <em>may not</em> be one.</p>
<p>So, are there equations where dividing by some $f(x)$ significantly improves the equation resolution, without $f(x)=0$ being a solution?</p>
| Kyle Delaney | 250,589 | <p>Consider this differential equation:</p>
<p>$\frac{df}{dx} = kf$</p>
<p>You'd have to divide both sides by $f$ to get all your $f$ terms on the same side, right?</p>
|
1,746,782 | <p>This is what I've done:</p>
<p>Let $s < t$ and $F_t$ be a filtration adapted to $W(t)$
$$E[e^{t/2}\cos(W(t))|F_s] = e^{t/2} E[\cos(W(t)) - \cos(W(s)) + \cos(W(s))|F_s]$$
$$= e^{t/2} [E[\cos(W(t)) - \cos(W(s))|F_s] + \cos(W(s))]$$
Because of the independence of the increments:
$$= e^{t/2} [E[\cos(W(t)) - \cos(W(s))] + \cos(W(s))]$$
This is where I'm stuck. I don't know how to calculate $E[\cos(W(t)) - \cos(W(s))]$. </p>
<p>Following <a href="https://math.stackexchange.com/q/1596521/154124">the suggestion from Did in the comment</a> in this question, I can do this:
$$E[\cos(W(t)) - \cos(W(s))] = \frac{1}{\sqrt{2\pi (t-s)}}\int_{\mathbb{R}}(\cos(x) - \cos(y)).e^{-(x-y)^2/2(t-s)}dx$$</p>
<p>But I don't know if it is correct.</p>
<p>Edit: OMG! I made an horrible mistake. The increments $W(t) - W(s)$ are independent, not $\cos(W(t)) - \cos(W(s))$. With the hints from <a href="https://math.stackexchange.com/a/143190">this answer</a> and the one from Siron everything got clearer now.</p>
| Cavents | 322,026 | <p>To find $\mathbb{E}[\cos(W(t))]$ you could try integrating the density function. However, that might turn out to be messy. Instead, recall that
$$\cos x = \frac{e^{ix}+e^{-ix}}{2},$$
and hence
\begin{align}
\mathbb{E}[\cos(W(t))] = \mathbb{E}\left[\frac{e^{iW(t)}+e^{-iW(t)}}{2}\right],
\end{align}
where $\mathbb{E}[e^{iW(t)}]$ is the characteristic function of $W(t)$ evaluated in $1$.</p>
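A quick Monte Carlo check (a sketch, not part of the original answer): since $W(t)\sim N(0,t)$, the characteristic function gives $\mathbb{E}[\cos(W(t))]=e^{-t/2}$, which sampling confirms:

```python
import numpy as np

rng = np.random.default_rng(0)
t = 2.0
w = rng.normal(0.0, np.sqrt(t), size=10**6)  # samples of W(t) ~ N(0, t)
empirical = np.cos(w).mean()
theoretical = np.exp(-t / 2)                 # real part of E[e^{i W(t)}] at u = 1
assert abs(empirical - theoretical) < 5e-3
```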
|
1,333,637 | <p>Where X is a space obtained by pasting the edges of a polygonal region together in pairs. </p>
<p>Alternatively: Show that X is homeomorphic to exactly one of the spaces in the following list: $S^2,T_n, P^2, K_m, P^2\#K_m$, where $K_m$ is the m-fold connected sum of $K$(Klein bottle) with itself and $M \geq 0$.</p>
<p>We have the classification theorem: If X is the space obtained from a polygonal region in the plane by pasting its edges together in pairs. Then X is homeomorphic to either $S^2, T_n$ or $P_m$. Where $P_m$ is the m-fold projective plane.</p>
<p>It seems I have to show that one of the things on the list that is not $S^2$ or $T_n$ is homeomorphic to $P_m$? The likely candidates seem to be, for the list in the title: $T_n\#P^2$. For the second list: $P^2\#K_m$. But I'm not sure if these are correct and how to formally show that those are homeomorphic to $P_m$</p>
| Gerry Myerson | 8,269 | <p>I think all you need to know is that the sum of three projective planes is homeomorphic to the sum of a torus and a projective plane. </p>
|
2,555,463 | <p>Given a line $l$ and two points $p_1$ and $p_2$, identify the point $v$ which is equidistant from $l$, $p_1$, and $p_2$, assuming it exists.</p>
<p>My idea is to: (1) identify the parabolas containing all points equidistant from each point and the line, then (2) intersect these parabolas. As $v$ is equidistant from all three and each parabola contains all points equidistant from $l$ and each point, the intersection of these parabolas must be $v$. However, I have had no luck in finding a way to compute, much less represent, these parabolas.</p>
| user | 505,767 | <p>Assuming such point <span class="math-container">$Q$</span> exists it must lie on the <strong>Bisector Line b</strong> of <span class="math-container">$P_1$</span> and <span class="math-container">$P_2$</span> i.e. the line through the midpoint of <span class="math-container">$P_1$</span> and <span class="math-container">$P_2$</span> and orthogonal to the line <span class="math-container">$\vec {P_1P_2}$</span>.</p>
<p><a href="https://i.stack.imgur.com/bWDb3.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bWDb3.jpg" alt="enter image description here" /></a></p>
<p>Thus you can write down a parametric expression for <span class="math-container">$Q=Q(s)\in b$</span> and set the following <strong>equation for distances</strong>:</p>
<blockquote>
<p><span class="math-container">$$d^2(Ql)=d^2(QP_1)$$</span></p>
</blockquote>
<p>Firstly note that, without loss of generality, we can <strong>assume that the origin</strong> coincides with the intersection point between <span class="math-container">$l$</span> and <span class="math-container">$b$</span>. In fact, if <span class="math-container">$b\equiv l$</span> the solution is trivial; otherwise we can find the intersection point and simply shift the axes (we could also rotate the axes in such a way that <span class="math-container">$l$</span> or <span class="math-container">$b$</span> coincides with an axis, but that would be more complex, whereas shifting is trivial).</p>
<p>Thus, let's assume:</p>
<p><span class="math-container">$P_1=(x_1,y_1), P_2=(x_2,y_2)$</span>,</p>
<p><strong>l:</strong> <span class="math-container">$ ax+by=0$</span>,</p>
<p><strong>b:</strong> <span class="math-container">$cx+dy=0$</span></p>
<p><strong>NOTE</strong></p>
<p>Finding c and d is trivial:</p>
<p><a href="https://math.stackexchange.com/questions/442781/finding-perpendicular-bisector-of-the-line-segement-joining-1-4-textand">Finding perpendicular bisector of the line segement joining $ (-1,4)\;\text{and}\;(3,-2)$</a></p>
<p>Parametric equation of <span class="math-container">$Q \in b$</span> is given by:</p>
<p><span class="math-container">$Q(t)=Q(d \cdot t,-c \cdot t)$</span> with <span class="math-container">$t\in \mathbb{R}$</span></p>
<p>The distances are given by:</p>
<p><span class="math-container">$$\text{d($Ql$)} = \frac{\left | Ax_{0} + By_{0} + C\right |}{\sqrt{A^2 + B^2} }= \frac {\left| ad\cdot t - bc\cdot t \right|}{\sqrt{a^2 + b^2}} $$</span></p>
<p><span class="math-container">$$\text{d($QP_1$)} = \sqrt{(d\cdot t-x_1)^2 + (-c\cdot t-y_1)^2}$$</span></p>
<p>and thus</p>
<p><span class="math-container">$$\frac {\left| adt - bct \right|}{\sqrt{a^2 + b^2}}=\sqrt{(d\cdot t-x_1)^2 + (-c\cdot t-y_1)^2}$$</span></p>
<p><span class="math-container">$$\frac {\left( ad\cdot t - bc\cdot t \right)^2}{{a^2 + b^2}}=(d\cdot t-x_1)^2 + (-c\cdot t-y_1)^2$$</span></p>
<p>from which "t" and thus "Q" can be easily found.</p>
|
1,840,485 | <p>I am an undergraduate really passionate about the mathematics and microbiology. I have few big problems in learning which I would like to seek your advice. </p>
<p>Whenever I study mathematics books (Rudin, Hoffman/Kunze, etc.), I always try to prove every theorem, lemma, and corollary, and their relationships, in the book. Unfortunately, that determination demands a huge amount of time; sometimes it takes me days to fully understand and be able to prove the material in a few pages of a book. I am willing to devote my time to understanding the topics, but I also want to devote time to my undergraduate research projects and other courses. Recently, I started to depend a lot more on the proofs presented in books and websites (like MSE), which has been causing huge guilt and fear that I am not making the knowledge my own.</p>
<p>Despite my effort to prove/solve every problem in each chapter, I found myself skipping some of the problems and moving on to the next chapter, which caused huge fear, as it means I did not fully understand the material.</p>
<ul>
<li>How do you read mathematics books and make the knowledge your own?</li>
<li>Is it absolutely recommended to prove everything and solve every problem in the book? </li>
<li>Also, is it recommended to devote more time to the problems than to the exposition preceding them? I find myself devoting a lot of time to the actual exposition in the book, as I like to play around with definitions and theorems, try to come up with my own ideas, and formulate my own problems (I actually find that making my own problems is much more fun than the problems presented in the book).</li>
</ul>
| Bhavani Chandra | 350,346 | <p>Mathematics is a subject that is fun as well as scary. I always learn the concept and then do one or two problems per concept, since solving each and every problem is time-consuming and of little use.
My advice is to practice all the concepts and theorems, and to do two or three different types of problems based on the same concept.
This helps you know which concept to apply to a given problem.
If you understand a concept perfectly, its application is not difficult, so have no fear and move forward. This subject is much easier when the concepts are understood well.</p>
|
1,482,205 | <p>Show that $\sigma(AB) \cup \{0\} = \sigma(BA) \cup \{0\}$ in general, and that $\sigma(AB) = \sigma(BA)$ if $A$ is bijective. </p>
<p>I studied the associative statement of this somewhere but it did not include the zeroth element.
If you assume the bijection, how can you show the first part?</p>
<h2>My attempt</h2>
<p>Let show that $A$ commutes with $B$ which is self-adjoint linear operatar such that $AB = BA$.
\begin{equation*}
AB = A^{\ast} B^{\ast} = (BA)^{\ast} = (ABA)^{\ast} = A^{\ast} B^{\ast} A^{\ast} = ABA = BA.
\end{equation*}
In general, the zeroth element follows directly in the left-hand-side if $A$ is bijective. $\square$</p>
<p>Comments</p>
<ul>
<li>not sure if $\sigma$ should be carried along; I was reading Kreyszig for the application</li>
</ul>
<hr>
<p>How can you show the first part with the zeroth element?</p>
| Marcos | 39,676 | <p>You got the addition wrong, it's (a+b)/b, not ab/b.</p>
|
3,399,586 | <p>Suppose that <span class="math-container">$f:\mathbb{R}\to\mathbb{R}$</span> is differentiable at every point and that </p>
<p><span class="math-container">$$f’(x) = x^2$$</span></p>
<p>for all <span class="math-container">$x$</span>. Prove that </p>
<p><span class="math-container">$$f(x) = \frac{x^3}{3} + C$$</span></p>
<p>where <span class="math-container">$C$</span> is a constant. </p>
<p>This has to be done without integrating, I have only been taught differential calculus and this question only assumes knowledge of that.</p>
<p>I tried applying mean value theorem and taylor’s approximation but could not come up with the proof. Can someone please provide the solution?</p>
| Quanto | 686,284 | <p>Here is a way to use the Mean Value Theorem (MVT) for the derivation. Let <span class="math-container">$\epsilon$</span> be a small positive variable and observe that</p>
<p><span class="math-container">$$x^2+\frac13\epsilon^2= \frac{6\epsilon x^2+2\epsilon^3}{6\epsilon}
=\frac{(x+\epsilon)^3 - (x-\epsilon)^3}{3(2\epsilon)}
=\frac{g(x+\epsilon)-g(x-\epsilon)}{(x+\epsilon)-(x-\epsilon)}\tag{1}$$</span></p>
<p>where </p>
<p><span class="math-container">$$g(x) = \frac13 x^3$$</span></p>
<p>According to MVT, the equation (1) can be written as, </p>
<p><span class="math-container">$$\frac{g(x+\epsilon)-g(x-\epsilon)}{(x+\epsilon)-(x-\epsilon)}=g'(a)=x^2+\frac13\epsilon^2$$</span></p>
<p>where <span class="math-container">$x-\epsilon < a < x+\epsilon$</span>. Now, let <span class="math-container">$\epsilon \rightarrow 0$</span> to obtain,</p>
<p><span class="math-container">$$g'(x)=x^2=f'(x)$$</span></p>
<p>Therefore,</p>
<p><span class="math-container">$$f(x) = g(x) +C= \frac13 x^3+C$$</span></p>
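The algebraic identity (1) behind this argument can be checked symbolically (a quick sketch, assuming SymPy):

```python
import sympy as sp

x, e = sp.symbols('x epsilon', positive=True)
g = lambda u: u**3 / 3
# Difference quotient of g over the interval [x - eps, x + eps]
lhs = (g(x + e) - g(x - e)) / ((x + e) - (x - e))
# Identity (1): it equals x^2 + eps^2 / 3 exactly
assert sp.simplify(lhs - (x**2 + e**2 / 3)) == 0
```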
|
3,346,676 | <blockquote>
<p><strong>Question.</strong> Find a divergent sequence <span class="math-container">$\{X_n\}$</span> in <span class="math-container">$\mathbb{R}$</span> such that for any <span class="math-container">$m\in\mathbb{N}$</span>,
<span class="math-container">$$\lim_{n\to\infty}|X_{n+m}-X_n|=0$$</span></p>
</blockquote>
<p>I don't really know; if someone could walk me through this, it'd be really appreciated.
Edit: Ignore what I said before; I deleted it.</p>
| N. S. | 9,176 | <p><span class="math-container">$X_n= \ln(n)$</span>.</p>
<p>Then, for each <span class="math-container">$m$</span> you have
<span class="math-container">$$\lim_{n\to\infty}|X_{n+m}-X_n| =\lim_{n\to\infty}|\ln(\frac{m+n}{n}) |= \lim_{n\to\infty}|\ln(1+\frac{m}{n}) | = \ln(1)=0$$</span></p>
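A quick numerical illustration (a sketch, not part of the original answer): for fixed $m$ the increments $\ln(n+m)-\ln(n)$ shrink to $0$, while $\ln(n)$ itself grows without bound:

```python
import math

m = 5
# |X_{n+m} - X_n| = ln((n+m)/n) for X_n = ln(n), at increasing n
diffs = [math.log(n + m) - math.log(n) for n in (10, 10**3, 10**6)]
assert all(d1 > d2 for d1, d2 in zip(diffs, diffs[1:]))  # strictly decreasing
assert diffs[-1] < 1e-5                                  # tends to 0
assert math.log(10**6) > 13                              # yet X_n diverges
```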
|
192,821 | <p>I am using <a href="https://reference.wolfram.com/language/ref/TransformedField.html" rel="noreferrer"><code>TransformedField</code></a> to convert a system of ODEs from Cartesian to polar coordinates:</p>
<pre><code>TransformedField[
"Cartesian" -> "Polar",
{μ x1 - x2 - σ x1 (x1^2 + x2^2), x1 + μ x2 - σ x2 (x1^2 + x2^2)},
{x1, x2} -> {r, θ}
] // Simplify
</code></pre>
<p>and I get the result</p>
<pre><code>{r μ - r^3 σ, r}
</code></pre>
<p>but I am pretty sure that the right answer should be</p>
<pre><code>{r μ - r^3 σ, 1}
</code></pre>
<p>Where is the error?</p>
| Itai Seggev | 4,848 | <p>Mathematica's answer is correct and consistent with your expectations, but you are not accounting for the basis of the vector field. </p>
<p><code>TransformedField</code> transforms a vector field between two coordinate systems <em>and</em> bases. In this case, it is converting from <span class="math-container">$f(x,y)\hat x+g(x,y)\hat y$</span> to the same geometrical vector field expressed as <span class="math-container">$u(r,\theta)\hat r + v(r,\theta) \hat \theta$</span>. Mathematica's answer can therefore be interpreted as saying
<span class="math-container">$$\left(μ x_1 - x_2 - σ x_1 (x_1^2 + x_2^2)\right)\hat x + \left( x_1 + μ x_2 - σ x_2 (x_1^2 + x_2^2)\right)\hat y = \left(r μ - r^3 σ\right)\hat r + r \hat\theta$$</span></p>
<p>Notice that the expressions <span class="math-container">$r'$</span> and <span class="math-container">$\theta'$</span> don't appear anywhere. Those are dynamical quantities, not geometrical ones (unless working in the jet bundle, but let's not go there). Also notice the hats! As stated in the documentation, <code>TransformedField</code> assumes inputs are in an orthonormal basis, and returns outputs in the same basis. That will be important for later on.</p>
<p>Now, you are dealing with a differential equation, and based on your expected answer I'll assume what you have is a first-order system and you are transforming the associated vector field (AKA the "right-hand side"). Finding solutions means finding the integral curves of the vector field. This gives us a nice relationship between the geometrical variables and the dynamical ones, except that this relationship is of necessity expressed in the so-called coordinate basis, written <span class="math-container">$(r',\theta') = a \frac{\partial}{\partial r} + b \frac{\partial}{\partial \theta}$</span>. So to get the answer expressed in your desired basis, we need the relationship between the coordinate and orthonormal basis vectors. As is covered in books on vector calculus (and elsewhere), the relationship is <span class="math-container">$\hat r = \frac{\partial}{\partial r}$</span> and <span class="math-container">$\hat \theta = \frac{1}{r}\frac{\partial}{\partial \theta}$</span>. Substituting this into the answer Mathematica gave above, we get
<span class="math-container">$$\left(r μ - r^3 σ\right)\hat r + r \hat\theta = \left(r μ - r^3 σ\right) \frac{\partial}{\partial r} + (1) \frac{\partial}{\partial \theta},$$</span>
which is the answer you expected.</p>
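As a cross-check (a sketch, not part of the Mathematica workflow; Python with SymPy is used here instead), one can compute the dynamical quantities directly from <span class="math-container">$r'=(x\dot x+y\dot y)/r$</span> and <span class="math-container">$\theta'=(x\dot y-y\dot x)/r^2$</span>:

```python
import sympy as sp

r = sp.symbols('r', positive=True)
th, mu, sigma = sp.symbols('theta mu sigma', real=True)
x, y = r*sp.cos(th), r*sp.sin(th)
xdot = mu*x - y - sigma*x*(x**2 + y**2)
ydot = x + mu*y - sigma*y*(x**2 + y**2)
rdot = sp.simplify(sp.cos(th)*xdot + sp.sin(th)*ydot)          # r' = (x x' + y y')/r
thetadot = sp.simplify((sp.cos(th)*ydot - sp.sin(th)*xdot)/r)  # th' = (x y' - y x')/r^2
assert sp.simplify(rdot - (mu*r - sigma*r**3)) == 0
assert sp.simplify(thetadot - 1) == 0
```

This recovers exactly <span class="math-container">$r'=r\mu - r^3\sigma$</span> and <span class="math-container">$\theta'=1$</span>, consistent with the basis conversion above.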
|
545,634 | <p>Consider a function $f:\Omega\subset\mathbb{R}^n\rightarrow\mathbb{R}^m$, for which the Jacobian matrix </p>
<p>$J_f(x_1,...,x_n)= \left( \begin{array}{ccc}
\frac{\partial f_1}{\partial x_1} & ... & \frac{\partial f_1}{\partial x_n} \\
\vdots & & \vdots \\
\frac{\partial f_m}{\partial x_1} & ... & \frac{\partial f_m}{\partial x_n} \end{array} \right) $ is given. </p>
<p>Also, assume the component functions of $J_f$ are continuously differentiable on $\Omega$, and $\Omega$ is simply connected. If $m=1$ and $n=2$, it is well known that the function $f$ can be recovered from $J_f$ (in this case the gradient) if and only if $\frac{\partial}{\partial x_2}\frac{\partial f_1}{\partial x_1}=\frac{\partial}{\partial x_1}\frac{\partial f_1}{\partial x_2}$.</p>
<p>So my question is whether there is a generalization of this result for arbitrary values of $m$ and $n$. I would appreciate any references!</p>
<p>Thank you!</p>
| Olivier | 45,622 | <p>For a starter, the pdf of $X$ you posted is not a valid pdf. $\int_{-\infty}^{\infty} f_X(x) dx = \frac{3}{2}$. This also results in an incorret plot of $g(X)$. Which pdf do you mean?</p>
<p>Edit: Excuse me, I misread your definition. </p>
|
3,006,046 | <p>How to find the Newton polygon of the polynomial product <span class="math-container">$ \ \prod_{i=1}^{p^2} (1-iX)$</span> ?</p>
<p><strong>Answer:</strong></p>
<p>Let <span class="math-container">$ \ f(X)=\prod_{i=1}^{p^2} (1-iX)=(1-X)(1-2X) \cdots (1-pX) \cdots (1-p^2X).$</span></p>
<p>If I multiply , then we will get a polynomial of degree <span class="math-container">$p^2$</span>.</p>
<p>But it is complicated to express it as a polynomial form.</p>
<p>So it is complicated to calculate the vertices <span class="math-container">$ (0, ord_p(a_0)), \ (1, ord_p(a_1)), \ (2, ord_p(a_2)), \ \cdots \cdots$</span> </p>
<p>of the above product.</p>
<p>Please help me do this.</p>
| sigmatau | 77,518 | <p><em>Partial Answer</em>: regarding the coefficients of the polynomial: </p>
<p>Fix one term in the brackets, say <span class="math-container">$Y=(1-5X)$</span>. In order for the coefficient <span class="math-container">$5$</span> to contribute to <span class="math-container">$a_j$</span>, we have to multiply the <span class="math-container">$X$</span>-term of <span class="math-container">$Y$</span> with the <span class="math-container">$X$</span>-terms of <span class="math-container">$j-1$</span> other brackets, since this is the only way of getting the power <span class="math-container">$X^j$</span>. Together with <span class="math-container">$Y$</span> itself, this corresponds to choosing a subset <span class="math-container">$S \subset \{1,2,\ldots,p^{2}\}$</span> of size <span class="math-container">$j$</span>, since each bracket has a unique coefficient of <span class="math-container">$X$</span> taken from <span class="math-container">$\{1,2,\ldots,p^{2}\}$</span>. This leads to </p>
<p><span class="math-container">\begin{equation}
a_j=(-1)^{j} \underset{ S \subset \{1,2, \ldots, p^{2} \}, \ |S|=j}{\sum} \prod \limits_{s \in S} s \ .
\end{equation}</span></p>
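For a small illustrative case the formula can be verified against a direct expansion (a sketch, assuming SymPy; <span class="math-container">$N=6$</span> is a hypothetical stand-in for <span class="math-container">$p^2$</span>):

```python
import math
from itertools import combinations

import sympy as sp

X = sp.symbols('X')
N = 6  # small stand-in for p**2, for illustration only
poly = sp.expand(sp.Mul(*[1 - i*X for i in range(1, N + 1)]))
for j in range(N + 1):
    a_j = poly.coeff(X, j)
    # elementary symmetric sum over subsets S of size j (math.prod(()) == 1)
    e_j = sum(math.prod(S) for S in combinations(range(1, N + 1), j))
    assert a_j == (-1)**j * e_j
```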
|
482,102 | <p>The problem comes from Guillemin and Pollack's <em>Differential Topology</em>, pg. 5.
Suppose that X is a k-dimensional manifold. Show that every point in X has a neighborhood diffeomorphic to all of $\Bbb{R}^k$.</p>
<p>I have already shown that $\Bbb{R}^k$ is diffeomorphic to $B_a$ (part (a) of the question) the open ball of radius $a$, though have little to no understanding of how to proceed.</p>
<p>Thank you</p>
<p>The author defines a k-manifold as a set such that each point possesses a neighborhood diffeomorphic to an open set of $\Bbb{R}^k$</p>
| Rhys | 47,565 | <p>Given a point $p \in \mathbb{R}^k$, and an open neighbourhood $U$ of $p$, there exists an open set $V$ with $p \in V \subset U$ such that $V$ is diffeomorphic to the $k$-dimensional unit ball. Just take the Euclidean metric on $\mathbb{R}^k$, and let $V$ be a sufficiently small open ball centred on $p$.</p>
|
843,634 | <p>I am wondering whether for any two lines $\mathfrak{L}, \mathfrak{L'}$ and any point $\mathfrak{P}$ in $\mathbf{P}^3$ there is a line having nonempty intersection with all of $\mathfrak{L}, \mathfrak{L'}$, $\mathfrak{P}$. I don't really know how to approach this, because I was never taught thinking about such a problem, not even related ones. I think the answer should be no, but have no means to justify it. Perhaps use the Klein representation of lines in 3 space? Could someone also recommend a book where similar problems are solved or at least posed as exercises? Also feel free to give an algebraic geometry perspective, but I don't really know how to approach this with algebraic geometry.</p>
| Charles | 1,778 | <p>The polynomial $x^2+x$ is always divisible by 2, but as polynomials $x^2+x\not\equiv0\pmod2$ -- for one thing, the degree on the left is different from the degree on the right.</p>
<p>Similarly, even though $x^p-x$ is zero mod $p$ for any $x$, <em>as polynomials</em>, $x^p$ and $x$ are different.</p>
<p>Basically "are equivalent as polynomials" is a fine-grained tool where "are equal at all integer $x$" is courser. If you know that $P(x)$ and $Q(x)$ are equivalent (mod $p$) as polynomials then you know that they take on the same value (mod $p$) for all integer $x$, but you can't conclude the converse.</p>
|
<p>I thought I might use induction, but that seems too hard. Then I tried to take the derivative and show that it is positive for all $n$, but I can't figure out how to do that either; I've tried induction there too.</p>
| Mark | 310,244 | <p>Let's think about it via the derivative. Let:
$$f(x) = \left(1+\frac{a}{x}\right)^x$$
When $a = 0.6$ and $x\in\mathbb N$ we have your function.</p>
<p>We can't take the derivative of this yet, as we can't use the power rule (the exponent is a variable), and can't use the exponential rule (the base is variable). What we do is look at the natural log of it:
$$\ln f(x) = x\ln\left(1+\frac{a}{x}\right)$$
This is a function we can differentiate, so we get that:
$$(\ln f(x))' = \ln(1+\frac{a}{x})+x\frac{-\frac{a}{x^2}}{1+\frac{a}{x}}$$
Here, we need to be careful and recall that:
$$(\ln f(x))' = \frac{f'(x)}{f(x)}$$
This gives us that:
$$\frac{f'(x)}{f(x)} = \ln\left(1+\frac{a}{x}\right)-\frac{a}{a+x}$$
Now, we can multiply through by $f(x)$ to get that:
$$f'(x) = \left(\ln\left(\frac{x+a}{x}\right)-\frac{a}{a+x}\right)\left(1+\frac{a}{x}\right)^x$$
Now, we can rewrite:
$$\frac{a}{a+x} = 1-\frac{x}{a+x}$$
This gives us that:
$$f'(x) = \left(\ln\left(\frac{x+a}{x}\right)+\frac{x}{a+x}-1\right)\left(1+\frac{a}{x}\right)^x$$
Now, $\left(1+\frac{a}{x}\right)^x>0$ for all $x> 0$, so the sign of $f'(x)$ is the sign of the left factor. Setting $t=\frac{a}{x}>0$, the standard inequality $\ln(1+t)>\frac{t}{1+t}$ for $t>0$ gives $\ln\left(\frac{x+a}{x}\right)>\frac{a}{a+x}=1-\frac{x}{a+x}$, so the left factor is positive. Hence $f'(x)>0$ for all $x>0$, and in particular this result holds for $x\in\mathbb N$.</p>
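A direct numerical check of monotonicity for $a = 0.6$ (a quick sketch, not part of the proof):

```python
a = 0.6
# First 499 terms of (1 + a/n)^n; the sequence should be strictly increasing
seq = [(1 + a/n)**n for n in range(1, 500)]
assert all(s < t for s, t in zip(seq, seq[1:]))  # strictly increasing
```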
|
3,337,210 | <p>I am struggling with the following equation, which I need to proof by induction:</p>
<p><span class="math-container">$$\sum_{k=1}^{2n}\frac{(-1)^{k+1}}{k}= \sum_{k=n+1}^{2n}\frac{1}{k}$$</span></p>
<p><span class="math-container">$n\in \mathbb{N}$</span>.<br/> I tried a few times and always got stuck.</p>
<p>Help would be appreciated. </p>
| fleablood | 280,126 | <p>The cosine proof can be made intuitive.</p>
<p>We know <span class="math-container">$b > d= b-r$</span> and <span class="math-container">$c > h$</span>. And we know by the pythagorean th. and <span class="math-container">$d$</span> and <span class="math-container">$h$</span> are sides of a right triangle with hypotenuse <span class="math-container">$a$</span>. SO <span class="math-container">$d^2 + h^2 = a^2$</span>. So <span class="math-container">$b^2 + c^2 > d^2 + h^2 = a^2$</span>. But how much bigger?</p>
<p>Draw a picture. If we let <span class="math-container">$d = b-r$</span> so <span class="math-container">$b= d+r$</span> then <span class="math-container">$b^2 = (d+r)^2 = d^2 + 2*dr + r^2$</span>. We can see in our picture that <span class="math-container">$b^2$</span> is a square composed of two squares (<span class="math-container">$d^2$</span> and <span class="math-container">$r^2$</span>) and two rectanges <span class="math-container">$d\times r$</span>.</p>
<p>Now <span class="math-container">$r$</span> and <span class="math-container">$h$</span> are two sides of right triangle with hypotenuse. So <span class="math-container">$c^2 = r^2 + h^2$</span>.</p>
<p>Taking all we have <span class="math-container">$a^2 = d^2 + h^2$</span></p>
<p>And <span class="math-container">$b^2 = d^2 + $</span> two rectangles, <span class="math-container">$r\times d$</span> and square <span class="math-container">$r\times r$</span>.</p>
<p>And <span class="math-container">$c^2 = h^2 + $</span> the square <span class="math-container">$r\times r$</span>.</p>
<p>So <span class="math-container">$a^2 = b^2 + c^2 - $</span> two rectangles <span class="math-container">$r\times d$</span> and two squares <span class="math-container">$r\times r$</span>.</p>
<p>Now we can "glue" an <span class="math-container">$r\times d$</span> rectangle to an <span class="math-container">$r\times r$</span> square to get an <span class="math-container">$r\times (d+r) = r\times b$</span> rectangle.</p>
<p>So <span class="math-container">$a^2 =b^2 +c^2 - 2(r\cdot b)$</span>. But we need to express the variable <span class="math-container">$r$</span> in terms of <span class="math-container">$a,b,c$</span> and the angle <span class="math-container">$A$</span>.</p>
<p>Well <span class="math-container">$r, h,$</span> and <span class="math-container">$c$</span> are the sides of a right triangle so <span class="math-container">$r = c*\cos A$</span> and .... that's it.</p>
<p>......</p>
<p><span class="math-container">$a^2 = b^2 + c^2 - 2bc\cos A$</span>[1].</p>
<p>So, was that mechanical, or explanitory? intuitive or being led by the nose?</p>
<p>Well, I don't know. IMO a good proof should be explanatory. But proofs also rely on an idealized reader with the assumption that every step will be absolutely ingested with utter comprehension. </p>
<p>But we are human. We stumble and sometimes see things and sometime get blinded.</p>
<p>.....</p>
<p>[1] This assumed <span class="math-container">$m\angle A < 90$</span>. If <span class="math-container">$m\angle A = 90$</span> we have a right angle and <span class="math-container">$a^2 = b^2 + c^2 - 0$</span> and <span class="math-container">$\cos 90 = 0$</span>..... And if <span class="math-container">$90 < m\angle A < 180$</span> we can do something very similar but with <span class="math-container">$d = b+r$</span>.</p>
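The whole argument can be cross-checked with coordinates (a quick sketch, assuming SymPy): placing vertex <span class="math-container">$A$</span> at the origin with the side of length <span class="math-container">$b$</span> along the <span class="math-container">$x$</span>-axis, the squared length of the opposite side is exactly <span class="math-container">$b^2+c^2-2bc\cos A$</span>:

```python
import sympy as sp

b, c, A = sp.symbols('b c A', positive=True)
# Vertex A at the origin; adjacent vertices at (b, 0) and (c cos A, c sin A)
P = sp.Matrix([b, 0])
Q = sp.Matrix([c*sp.cos(A), c*sp.sin(A)])
a_sq = (P - Q).dot(P - Q)  # square of the side opposite angle A
assert sp.simplify(a_sq - (b**2 + c**2 - 2*b*c*sp.cos(A))) == 0
```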
|
1,584,594 | <p>Find all $n$ for which $n^8 + n + 1$ is prime. I can do this by writing it as a linear product, but it took me a lot of time. Is there any other way to solve this? The answer is $n = 1$.</p>
| lab bhattacharjee | 33,337 | <p>HINT:</p>
<p>If $w$ is a complex cube root of unity and $f(x)=x^8+x+1$</p>
<p>$f(w)=(w^3)^2\cdot w^2+w+1=0$</p>
<p>So $(x^2+x+1)|(x^8+x+1)$</p>
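The divisibility in the hint can be confirmed directly (a quick sketch, assuming SymPy):

```python
import sympy as sp

x = sp.symbols('x')
# Polynomial division: x^2 + x + 1 should divide x^8 + x + 1 exactly
q, r = sp.div(x**8 + x + 1, x**2 + x + 1, x)
assert r == 0
assert sp.expand(q * (x**2 + x + 1)) == sp.expand(x**8 + x + 1)
# For n > 1 both factors exceed 1, so n^8 + n + 1 is composite; n = 1 gives the prime 3.
```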
|
726,574 | <p>An ant stands at one end of a rubber string 1 km long. The ant starts walking toward the other end at a speed of 1 cm/s. Every second the string becomes 1 km longer. </p>
<p>For readers from countries where people use imperial system: 1km = 1000m = 100 000cm</p>
<p><strong>Will the ant ever reach the end of the string? And how can one explain it?</strong></p>
<p>I know that the answer is yes.</p>
<p>Let :
<code>a</code> - distance covered by ant
<code>d</code> - length of string
<code>c</code> - constant by which the string is extended</p>
<p>The distance covered by ant in second <code>i</code> is <code>a[i] = (a[i-1] + 1)* (d + c)/d</code></p>
<p>I even did computer simulation in microscale where the string is 10cm long and extends by 10cm every second and the ant reaches the end:</p>
<pre><code>public class Mrowka {
public final static double DISTANCE_IN_CM = 10;
public static void main(String[] args) {
double ant = 0;//distance covered by the ant (cm)
double d = DISTANCE_IN_CM;//current length of the string (cm)
double dLeft = d - ant;//distance left to the end
int i = 0;
while(dLeft > 0){
ant++;//ant walks 1 cm
ant = ant * (d + DISTANCE_IN_CM)/d;//uniform stretch carries the ant along
d = d + DISTANCE_IN_CM;//string grows
dLeft = d - ant;
System.out.println(i + ". Ant distance " + ant +"\t Length of string " + d + " distance left " + dLeft);
i++;
}
System.out.println("end");
}
}
</code></pre>
<p><strong>Output:</strong></p>
<pre><code>0. Ant distance 2.0 Length of string 20.0 distance left 18.0
1. Ant distance 4.5 Length of string 30.0 distance left 25.5
2. Ant distance 7.333333333333333 Length of string 40.0 distance left 32.666666666666664
.....
12364. Ant distance 123658.53192119849 Length of string 123660.0 distance left 1.4680788015102735
12365. Ant distance 123669.5318833464 Length of string 123670.0 distance left 0.46811665360291954
12366. Ant distance 123680.53192635468 Length of string 123680.0 distance left -0.5319263546844013
end
</code></pre>
<p>EDIT:</p>
<p>I think that I need to calculate this <code>a[n] = (a[n-1] + 1)*(1 + 1/(1+n))</code> when <code>n->+oo</code></p>
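The microscale simulation can be cross-checked with harmonic numbers (a sketch, my own reasoning rather than an established fact from this thread): each step <code>k</code> the ant covers a fraction <code>1/(10k)</code> of the 10 cm band, and uniform stretching preserves fractions, so the ant arrives at the first <code>n</code> with <code>H_n >= 10</code>. This matches the 12367 iterations (indices 0 through 12366) in the simulation output above:

```python
# Fraction of the 10 cm band covered after n steps is H_n / 10,
# since step k adds 1/(10k) and uniform stretching preserves fractions.
h, n = 0.0, 0
while h < 10:
    n += 1
    h += 1.0 / n
print(n)  # first n with H_n >= 10
```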
| DonAntonio | 31,254 | <p>$$y=mx+c=\frac12x-3=\frac{x-6}2\implies 2y=x-6\implies 2y-x+6=0$$</p>
<p>You forgot to multiply $\;C\;$ by two...</p>
|
2,382,058 | <p>Penrose's paper <a href="http://www.iri.upc.edu/people/ros/StructuralTopology/ST17/st17-05-a2-ocr.pdf" rel="nofollow noreferrer"><em>On the cohomology of impossible figures</em></a> suggests to me that we can't draw such an impossible figure on a contractible part $Q$ of a sheet of paper so that it completely fills it, because the first cohomology group $H^1(Q,G)$ of such a domain is trivial (where $G$ is the ambiguity group of the figure) (see also <a href="https://math.stackexchange.com/questions/1372980/penroses-remark-on-impossible-figures">here</a>)
<a href="https://i.stack.imgur.com/9Tkcg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9Tkcg.png" alt="penrose tribar"></a>
Still he does some, e.g. this (the picture is taken from <a href="https://math.stackexchange.com/questions/51525/geometry-or-topology-behind-the-impossible-staircase">here</a>):</p>
<p><img src="https://i.stack.imgur.com/povr5.jpg" alt=" impossible staircase "></p>
<p>Why is it possible? And what is the general, true relationship between the impossibility of figures and cohomology? Cohomology of what, if not of the drawing domain?</p>
<p><strong>Edit</strong></p>
<p>I try to make my problem a bit clearer.</p>
<p>Here is a good cover of the solid disk on the paper containing the picture of the impossible staircase.</p>
<p><a href="https://i.stack.imgur.com/QqHTg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QqHTg.png" alt="enter image description here"></a></p>
<p>$Q_1$, $Q_2$ and $Q_3$ correspond to the open sets of Penrose (their overlapping areas are the thick radial blue lines), while $Q_4$, the solid disk bounded by the yellow circle, is an additional open set that overlaps with each other $Q$-s. I show the middle of the figure in big:</p>
<p><a href="https://i.stack.imgur.com/P2BLE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/P2BLE.png" alt="enter image description here"></a></p>
<p>Here $d_{ij}$ stands for the same as in Penrose's paper. I think, that we have some problem with $d_{14}$, $d_{24}$ and $d_{34}$, but I don't know, what.</p>
| Arthur | 15,500 | <p>The drawing domain of most impossible figures is a circle or an annulus. The reason they're impossible is that there is no single "height function" that covers all of it, even though there is a "steepness function".</p>
<p>In other words, there exists a function that looks like it should be a derivative / gradient, but isn't. That's exactly what non-trivial cohomology is all about. This is impossible on a simply connected domain like the whole plane or a solid disc, but it is very much possible on a circle / annulus.</p>
|
2,382,058 | <p>Penrose's paper <a href="http://www.iri.upc.edu/people/ros/StructuralTopology/ST17/st17-05-a2-ocr.pdf" rel="nofollow noreferrer"><em>On the cohomology of impossible figures</em></a> suggests to me that we can't draw such an impossible figure on a contractible part $Q$ of a sheet of paper so that it completely fills it, because the first cohomology group $H^1(Q,G)$ of such a domain is trivial (where $G$ is the ambiguity group of the figure) (see also <a href="https://math.stackexchange.com/questions/1372980/penroses-remark-on-impossible-figures">here</a>)
<a href="https://i.stack.imgur.com/9Tkcg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9Tkcg.png" alt="penrose tribar"></a>
Still, he draws some, e.g. this one (the picture is taken from <a href="https://math.stackexchange.com/questions/51525/geometry-or-topology-behind-the-impossible-staircase">here</a>):</p>
<p><img src="https://i.stack.imgur.com/povr5.jpg" alt=" impossible staircase "></p>
<p>Why is it possible? And what is the general, true relationship between the impossibility of figures and cohomology? Cohomology of what, if not of the drawing domain?</p>
<p><strong>Edit</strong></p>
<p>I try to make my problem a bit clearer.</p>
<p>Here is a good cover of the solid disk on the paper containing the picture of the impossible staircase.</p>
<p><a href="https://i.stack.imgur.com/QqHTg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QqHTg.png" alt="enter image description here"></a></p>
<p>$Q_1$, $Q_2$ and $Q_3$ correspond to the open sets of Penrose (their overlapping areas are the thick radial blue lines), while $Q_4$, the solid disk bounded by the yellow circle, is an additional open set that overlaps with each of the other $Q$-s. I show the middle of the figure enlarged:</p>
<p><a href="https://i.stack.imgur.com/P2BLE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/P2BLE.png" alt="enter image description here"></a></p>
<p>Here $d_{ij}$ stands for the same as in Penrose's paper. I think that there is some problem with $d_{14}$, $d_{24}$ and $d_{34}$, but I don't know what.</p>
| mma | 63,290 | <p>I think I misunderstood the role of the annulus. My conclusion is that we can draw an impossible figure on any domain of a sheet. The annulus (or any non-contractible open set) is not a constraint. It is only a tool for testing the possibility of the picture. The test is whether the cocycle $\{d_{ij}\}$ described in Penrose's paper is a coboundary. That is, the true relationship between cohomology and the impossible figures is simply the following.</p>
<blockquote>
<p>A locally realistic figure is globally impossible if and only if there is a non-contractible open subset of the
drawing domain on which the cocycle $\{d_{ij}\}$ is not a coboundary.</p>
</blockquote>
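<p>For a cover of an annulus by arcs $Q_1,\ldots,Q_n$ with only cyclic overlaps, this test is concrete: a 1-cocycle with values in Penrose's ambiguity group $(\mathbb{R}_{>0},\cdot)$ is a coboundary, i.e. $d_{ij}=q_j/q_i$ for some $q_i>0$, exactly when the product of the $d_{i,i+1}$ around the loop is $1$. A minimal sketch of that check (the numbers are illustrative, not taken from Penrose's paper):</p>

```python
import math

def is_coboundary(d_cycle, tol=1e-9):
    """Cech test on an annulus covered by arcs with cyclic overlaps:
    a multiplicative 1-cocycle {d_{i,i+1}} is a coboundary
    (d_ij = q_j/q_i for positive q_i) iff its product around
    the loop equals 1."""
    return math.isclose(math.prod(d_cycle), 1.0, rel_tol=tol)

# Tribar-style distance ratios on the three overlaps (hypothetical values):
tribar = [2.0, 1.0, 1.0]          # product 2 != 1 -> figure is impossible
print(is_coboundary(tribar))      # False

consistent = [2.0, 0.5, 1.0]      # product 1 -> globally realizable
print(is_coboundary(consistent))  # True
```

<p>Restricting the cocycle to such a cyclic cover and taking the loop product is precisely how one detects a nontrivial class in $H^1$ of the annulus.</p>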
|