| qid | question | author | author_id | answer |
|---|---|---|---|---|
1,392,661 | <p>For a National Board Exam Review: </p>
<blockquote>
<p>Find the equation of the perpendicular bisector of the line joining
(4,0) and (-6, -3)</p>
</blockquote>
<p>Answer is 20x + 6y + 29 = 0</p>
<p>I don't know where I went wrong. This is supposed to be very easy:</p>
<p>Find slope between two points:</p>
<p>$${ m=\frac{y_2 - y_1}{ x_2 - x_1 } = \frac{-3-0}{-6-4} = \frac{3}{10}}$$</p>
<p>Obtain Negative Reciprocal:</p>
<p>$${ m'=\frac{-10}{3}}$$</p>
<p>Get Midpoint for X</p>
<p>$${ \frac{-6-4}{2} = -5 }$$</p>
<p>Get Midpoint for Y</p>
<p>$${ \frac{-0--3}{2} = \frac{3}{2} }$$</p>
<p>Make Point Slope Form: </p>
<p>$${ y = m'x +b = \frac{-10}{3}x + b}$$</p>
<p>Plug in the Midpoint in Point Slope Form</p>
<p>$${ \frac{3}{2} = \frac{-10}{3}(-5) + b}$$</p>
<p>Evaluate b</p>
<p>$${ b = \frac{109}{6}}$$</p>
<p>Get Equation and Simplify</p>
<p>$${ y = \frac{-10}{3}x + \frac{109}{6}}$$
$${ 6y + 20x - 109 = 0 }$$</p>
<p>Is the problem set wrong? What am I doing wrong?</p>
| msinghal | 194,655 | <p>To find the midpoint, you don't need to negate the coordinates. So the midpoint is $\left(\frac{-6+4}{2}, \frac{0+(-3)}{2}\right)=\left(-1, \frac{-3}{2}\right)$</p>
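<p>As a sanity check (not part of the original answer), the whole computation can be redone in exact rational arithmetic; a quick Python sketch, with variable names of my own choosing:</p>

```python
from fractions import Fraction as F

# endpoints from the problem
x1, y1 = F(4), F(0)
x2, y2 = F(-6), F(-3)

mx, my = (x1 + x2) / 2, (y1 + y2) / 2   # midpoint: average, don't subtract
m = (y2 - y1) / (x2 - x1)               # slope 3/10
mp = -1 / m                             # perpendicular slope -10/3
b = my - mp * mx                        # y-intercept of the bisector

print(mx, my, b)            # -1 -3/2 -29/6
# clearing denominators in y = (-10/3)x - 29/6 gives 20x + 6y + 29 = 0
print(20 * mx + 6 * my + 29)  # 0
```

<p>So the book's answer $20x + 6y + 29 = 0$ checks out once the midpoint and $b$ are computed correctly.</p>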
|
1,392,661 | <p>For a National Board Exam Review: </p>
<blockquote>
<p>Find the equation of the perpendicular bisector of the line joining
(4,0) and (-6, -3)</p>
</blockquote>
<p>Answer is 20x + 6y + 29 = 0</p>
<p>I don't know where I went wrong. This is supposed to be very easy:</p>
<p>Find slope between two points:</p>
<p>$${ m=\frac{y_2 - y_1}{ x_2 - x_1 } = \frac{-3-0}{-6-4} = \frac{3}{10}}$$</p>
<p>Obtain Negative Reciprocal:</p>
<p>$${ m'=\frac{-10}{3}}$$</p>
<p>Get Midpoint for X</p>
<p>$${ \frac{-6-4}{2} = -5 }$$</p>
<p>Get Midpoint for Y</p>
<p>$${ \frac{-0--3}{2} = \frac{3}{2} }$$</p>
<p>Make Point Slope Form: </p>
<p>$${ y = m'x +b = \frac{-10}{3}x + b}$$</p>
<p>Plug in the Midpoint in Point Slope Form</p>
<p>$${ \frac{3}{2} = \frac{-10}{3}(-5) + b}$$</p>
<p>Evaluate b</p>
<p>$${ b = \frac{109}{6}}$$</p>
<p>Get Equation and Simplify</p>
<p>$${ y = \frac{-10}{3}x + \frac{109}{6}}$$
$${ 6y + 20x - 109 = 0 }$$</p>
<p>Is the problem set wrong? What am I doing wrong?</p>
| user21820 | 21,820 | <p>First, you computed the midpoint of $X$ wrongly.</p>
<p>Later, you evaluated $b$ wrongly.</p>
|
2,961,686 | <p>Consider a matrix <span class="math-container">$A$</span> which we subject to a small perturbation <span class="math-container">$\partial A$</span>. If <span class="math-container">$\partial A$</span> is small, then we have <span class="math-container">$(A + \partial A)^{-1} \approx A^{-1} - A^{-1} \partial A A^{-1}$</span></p>
<p>I came across this approximation in some notes and I am trying to understand where it comes from. <a href="https://math.stackexchange.com/a/1063542/278734">This answer</a> seems related, but I am having trouble translating the results from the cited paper into the provided equation. </p>
| weijun yin | 605,934 | <p>In fact, you can understand this by the solution, given by mjqxxxx, to the related problem you cite. The main task is to estimate the term on the right-hand side. One way is to consider its determinant: the determinant of a product equals the product of the determinants. Then, treating the perturbation as infinitesimal, the higher-order terms approach zero. <span class="math-container">$\square$</span> </p>
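<p>Not from the original answer, but the first-order expansion is easy to check numerically. The sketch below (NumPy, with an arbitrarily chosen well-conditioned matrix) shows that what the approximation misses is of second order in the perturbation:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
A = 4 * np.eye(4) + 0.5 * rng.standard_normal((4, 4))  # well-conditioned matrix
dA = 1e-6 * rng.standard_normal((4, 4))                # small perturbation

Ainv = np.linalg.inv(A)
exact = np.linalg.inv(A + dA)
approx = Ainv - Ainv @ dA @ Ainv                       # first-order expansion

first_order = np.linalg.norm(exact - Ainv)   # size of the first-order correction
leftover = np.linalg.norm(exact - approx)    # what the expansion misses: O(||dA||^2)
print(first_order, leftover)
```

<p>Halving $\partial A$ roughly halves <code>first_order</code> but quarters <code>leftover</code>, which is the signature of a first-order approximation.</p>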
|
2,979,103 | <blockquote>
<p>Let <span class="math-container">$S$</span> be the region <span class="math-container">$\{z:0<|z|<\sqrt{2}, \ 0 < \text{arg}(z) <
\pi/4\}$</span>. Determine the image of <span class="math-container">$S$</span> under the transformation</p>
<p><span class="math-container">$$f(z)=\frac{z^2+2}{z^2+1}.$$</span></p>
</blockquote>
<p>I'm facing some difficulties with problems of this nature that involve Möbius transformations. First, I can see that the map <span class="math-container">$z\mapsto z^2$</span> doubles the argument and squares the modulus. So our <span class="math-container">$S$</span> is transformed to
<span class="math-container">$$S_1=\{z:0<|z|<2, \ 0 < \text{arg}(z) <
\pi/2\}$$</span></p>
<p>We can now let <span class="math-container">$M(z)=\frac{z+2}{z+1}.$</span> But here we need to parametrize the boundary of <span class="math-container">$S_1$</span>. Thus we have that</p>
<p><span class="math-container">$$\gamma_1=2it, \quad t\in[0,1]$$</span>
<span class="math-container">$$\gamma_2=2t, \quad t\in[0,1]$$</span>
<span class="math-container">$$\gamma_3=2e^{it}, \quad t\in[0,\pi/2]$$</span></p>
<p>So, </p>
<p><span class="math-container">$$M(\gamma_1)=\frac{2it+2}{2it+1}=\frac{4t^2+2}{4t^2+1}-i\frac{2t}{4t^2+1},$$</span></p>
<p><span class="math-container">$$M(\gamma_2)=\frac{2t+2}{2t+1}$$</span></p>
<p><span class="math-container">$$M(\gamma_3)=\frac{2e^{it}+2}{2e^{it}+1}$$</span></p>
<p>substituting <span class="math-container">$t=0$</span> and <span class="math-container">$t=1$</span> in <span class="math-container">$M(\gamma_1)$</span> we get the two points <span class="math-container">$2$</span> and <span class="math-container">$\frac{6}{5}-i\frac{2}{5}$</span>. This means that <span class="math-container">$\gamma_1$</span> is mapped to a circle or line going through those points.</p>
<p>Doing the same for <span class="math-container">$M(\gamma_2)$</span> we get that the two points are <span class="math-container">$2$</span> and <span class="math-container">$\frac{4}{3}$</span>.</p>
<p>However, plotting these points doesn't really make sense. Also, I'm not sure how to handle <span class="math-container">$M(\gamma_3)$</span>; it gets very messy. This leads me to assume that the method I'm using is very inefficient.</p>
<p>Any way around this?</p>
| Martin R | 42,969 | <p>It helps to consider not only those segments or arcs, but the whole line or
circle that they are part of.</p>
<p><span class="math-container">$M$</span> maps <span class="math-container">$(0, 2, \infty)$</span> on the real line to <span class="math-container">$(2, \frac 43, 1)$</span>, therefore the segment <span class="math-container">$\gamma_2$</span>
from <span class="math-container">$0$</span> to <span class="math-container">$2$</span> is mapped to the segment from <span class="math-container">$2$</span> to <span class="math-container">$\frac 43$</span> on the real line.</p>
<p><span class="math-container">$(0, 2i, \infty)$</span> on the imaginary line are mapped to <span class="math-container">$(2, \frac 65 - \frac 25 i, 1)$</span>, therefore the segment from <span class="math-container">$0$</span> to <span class="math-container">$2i$</span> is mapped to an arc on the circle <span class="math-container">$C$</span> through <span class="math-container">$(2, \frac 65 - \frac 25 i, 1)$</span>. This would already be sufficient
to determine <span class="math-container">$C$</span>. It becomes even easier if we use that analytic
mappings preserve angles, so that <span class="math-container">$C$</span> must intersect the real line at right
angles. It follows that <span class="math-container">$C$</span> is the circle with center <span class="math-container">$\frac 32$</span> and radius
<span class="math-container">$\frac 12$</span>, and the segment <span class="math-container">$\gamma_1$</span> from <span class="math-container">$0$</span> to <span class="math-container">$2i$</span> is mapped to the arc
from <span class="math-container">$2$</span> to <span class="math-container">$\frac 65 - \frac 25 i$</span> on that circle.</p>
<p>In a similar manner you can conclude that the arc <span class="math-container">$\gamma_3$</span> from <span class="math-container">$2$</span> to <span class="math-container">$2i$</span> is mapped
to the arc from <span class="math-container">$\frac 43$</span> to <span class="math-container">$\frac 65 - \frac 25 i$</span> on the circle with
center <span class="math-container">$\frac 23$</span> and radius <span class="math-container">$\frac 23$</span>.</p>
<p><a href="https://i.stack.imgur.com/9V0qEm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9V0qEm.png" alt="enter image description here"></a></p>
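<p>A quick numerical check of these claims (my own sketch, not part of the answer): sample the boundary pieces and verify the images land on the stated circles:</p>

```python
import numpy as np

def M(z):
    return (z + 2) / (z + 1)

t = np.linspace(0.0, 1.0, 201)

# segment from 0 to 2i -> arc on the circle |w - 3/2| = 1/2
w1 = M(2j * t)
assert np.allclose(np.abs(w1 - 1.5), 0.5)

# arc 2e^{it}, t in [0, pi/2] -> arc on the circle |w - 2/3| = 2/3
s = np.linspace(0.0, np.pi / 2, 201)
w3 = M(2 * np.exp(1j * s))
assert np.allclose(np.abs(w3 - 2 / 3), 2 / 3)

# the endpoints quoted in the answer
assert np.allclose([M(0), M(2), M(2j)], [2, 4 / 3, 1.2 - 0.4j])
```
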
|
2,817,507 | <p>I came across the following argument in my discrete maths textbook:</p>
<p>Since $n=O(n), 2n=O(n)$ etc., we have:
$$
S(n)=\sum_{k=1}^nkn=\sum_{k=1}^nO(n)=O(n^2)
$$</p>
<p>The accompanying question in the book is: <strong>What is wrong with the above argument?</strong></p>
<p><strong>Attempt:</strong> Performing the summation for $n=2,3$ gives:
$$
\sum_{k=1}^2k(2)\leq\sum_{k=1}^22(2)=2(2^2)=O(2^2)
$$
$$
\sum_{k=1}^3k(3)\leq\sum_{k=1}^33(3)=3(3^2)=O(3^2)
$$
Thus, it seems like the argument is correct. However, I know that I am not correct.</p>
| Régis | 254,227 | <p>It is a problem of quantifiers: $f(n)=O(n)$ means that there exists a constant $M$ that is <em>independent of $n$</em> such that for all $n$ large enough, $|f(n)| \le Mn$. In your case, the implied constant for the $k$-th term is $k$, which <em>does</em> depend on $n$, since it ranges between $1$ and $n$. So you cannot obtain this way an implied constant for the sum that does not depend on $n$.</p>
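<p>To make this concrete (my own addition): evaluating the sum directly gives $S(n)=n\cdot\frac{n(n+1)}{2}$, which grows like $n^3$, so the ratio $S(n)/n^2$ is unbounded:</p>

```python
def S(n):
    return sum(k * n for k in range(1, n + 1))

# closed form: S(n) = n * n(n+1)/2, which grows like n^3 / 2
for n in (10, 100, 1000):
    print(n, S(n), S(n) / n**2)   # ratios 5.5, 50.5, 500.5 -- unbounded
```

<p>If $S(n)$ were really $O(n^2)$, that last column would stay bounded.</p>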
|
2,817,507 | <p>I came across the following argument in my discrete maths textbook:</p>
<p>Since $n=O(n), 2n=O(n)$ etc., we have:
$$
S(n)=\sum_{k=1}^nkn=\sum_{k=1}^nO(n)=O(n^2)
$$</p>
<p>The accompanying question in the book is: <strong>What is wrong with the above argument?</strong></p>
<p><strong>Attempt:</strong> Performing the summation for $n=2,3$ gives:
$$
\sum_{k=1}^2k(2)\leq\sum_{k=1}^22(2)=2(2^2)=O(2^2)
$$
$$
\sum_{k=1}^3k(3)\leq\sum_{k=1}^33(3)=3(3^2)=O(3^2)
$$
Thus, it seems like the argument is correct. However, I know that I am not correct.</p>
| Chinny84 | 92,628 | <p>Assuming that we have
$$
O(kn)
$$
using your logic, then choosing $k = 10$ we have
$$
kn > n^2 \;\;\forall\; n < 10
$$</p>
<p>alternatively if I have $k=2$ then
$$
kn < n^2 \;\;\forall \;n > 2
$$
so by choosing a fixed parameter we are changing where $n^2$ becomes larger than $kn$; this can be generalised to
$$
kn < n^2 \;\; \forall \;n > k
$$
which assumes we have some notion of how large $n$ is relative to $k$. However, the whole point of this notation is that we are claiming a linear relationship in the order of the iterations. If we keep letting $n$ increase regardless of $k$ (which is a constant), then we will eventually have $n^2 > kn$ no matter what the value of $k$ is. The big-oh notation signifies the relationship on $n$ and how the calculation scales with $n$.</p>
<p>To summarise, if $k$ is constant then we will reach a point where
$$
kn < n^2
$$</p>
|
3,413,261 | <p>I know this was answered before but I'm having one particular problem on the proof that I'm not getting.</p>
<p>My understanding of the distributive law as used in the absorption law is driving me nuts; according to the answers, the proof should be like this.</p>
<p>A∨(A∧B)=(A∧T)∨(A∧B)=A∧(T∨B)=A∧T=A</p>
<p>This should prove the Absorption Law, but at the step (A ^ (T v B)) I'm not getting how they arrive at it.</p>
<p>If (A ^ T) v (A ^ B) is distributed, then by my understanding the following is the answer:
(A v A) ^ (A v B) ^ (T v A) ^ (T v B)
which we can reduce to
A ^ (A v B) ^ T
I'm getting lost somewhere here, because it looks to me like we will enter a loop:
A ^ T ^ (A v B) will be distributed again, and I will go back to A ^ B if I distribute with A; but if I distribute with T it will be B ^ T, which is nothing useful either, is it?</p>
<p>Can anyone help on this? Thanks in advance.</p>
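<p>Independent of which derivation you follow, the absorption law itself can be checked exhaustively; a small Python sketch (my own addition) over all truth assignments:</p>

```python
from itertools import product

T = True
for A, B in product((False, True), repeat=2):
    lhs = A or (A and B)               # A v (A ^ B)
    via_distribution = A and (T or B)  # (A ^ T) v (A ^ B) = A ^ (T v B)
    # both routes collapse to A ^ T, i.e. A
    assert lhs == via_distribution == (A and T) == A
```
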
| Henno Brandsma | 4,280 | <p>Lindelöf and second countable are saying that a space is "small" in some sense; so one way to find non-examples is to take products of lots of spaces, such products (or powers) are "big".</p>
<p><span class="math-container">$\Bbb R^I$</span> is not Lindelöf for <span class="math-container">$I$</span> uncountable. (it's also not first countable at any point). It is not normal (which is one of the easier ways to see it's not Lindelöf). Of course you're right that the discrete reals are not Lindelöf (take the open cover by singleton sets). BTW, it's a non-trivial fact that if <span class="math-container">$I$</span> has size at most that of <span class="math-container">$\Bbb R$</span>, this product is still separable, so it's also an example of a separable non-Lindelöf space for such <span class="math-container">$I$</span>.</p>
<p><span class="math-container">$[0,1]^I$</span> for <span class="math-container">$I$</span> uncountable is compact (Tychonoff's theorem) so Lindelöf but not first countable at any point too, so certainly not second countable either. But it <em>is</em> normal, of course. And separable iff <span class="math-container">$|I| \le |\Bbb R|$</span>.</p>
<p>For metric spaces: Lindelöf, second countable and separable are equivalent properties (see a more general fact in my answer <a href="https://math.stackexchange.com/a/2812134/4280">here</a>. So <span class="math-container">$(0,1)$</span> in the usual topology is certainly Lindelöf, as the rationals in it are dense.</p>
|
1,356,900 | <p>For section 1 on Fields, there is a question 2c:</p>
<p>2.</p>
<p>a) Is the set of all positive integers a field?</p>
<p>b) What about the set of all integers?</p>
<p>c) Can the answers to both these question be changed by re-defining addition or multiplication (or both)?</p>
<p>My answer initially to 2c was that for a), as long as there no longer was the addition axiom dealing with (-a) + (a) = 0 and the multiplicative axiom dealing with reciprocals, it would be a field. For b) it would be a field if the reciprocal axiom did not exist. </p>
<p>However, I looked at a suggested answer here (<a href="https://drexel28.wordpress.com/2010/09/21/halmos-chaper-one-section-1-fields/" rel="nofollow">https://drexel28.wordpress.com/2010/09/21/halmos-chaper-one-section-1-fields/</a>)
and completely did not understand the lemma:</p>
<p>I don't understand why the things are in parentheses, or what the things represent. I don't understand what card F = n means, and why in the sentence, plus and multiplication signs are in circles. I don't understand what is meant by F x F --> F. I don't understand cardinalities, or what legitimate binary operations are. I don't understand the use of inverses either. </p>
<p>In the section on Vector Space Examples, Halmos writes that, "Let P be set of all polynomials with complex coefficients, in a variable t. To make P into a complex vector space, we interpret vector addition and scalar multiplication as the ordinary addition of two polynomials and the multiplication of a polynomial by a complex number; the origin in P is the polynomial identically zero.</p>
<ol>
<li>What does it mean for something to be identically zero? </li>
</ol>
<p>What else should I read along with Halmos to get a good understanding of Linear Algebra and Abstract Algebra? I'm also reading Spence Freidberg Insel, Artin, Fraleigh.</p>
| coldnumber | 251,386 | <p>You point out correctly the field axioms that $\Bbb{N}$ and $\Bbb{Z}$ do not satisfy. Just a (maybe nitpicky) note on wording: it's better to say that they are not fields because they do not satisfy the field axioms than to say they would be fields if the definition of fields were different.</p>
<p>Now onto the link with the answer: </p>
<p>The first thing in parentheses is $(\mathfrak{F}, \cdot, +)$; this is a common way of specifying which operations, $\cdot$ and $+,$ make your set $\mathfrak{F}$ a field. This distinction is important if you're studying different operations on the same set.</p>
<p>The cardinality of $F$, card($F$), is the number of elements of $F$. The link says card($F)=n$, but note it does not assume it is finite. We say two sets have the same cardinality if there exists a bijection (a function that is both one-to-one and onto) between them. The sets $\Bbb{Z}, \Bbb{N},$ and $\Bbb{Q}$ are all infinite, and even though $\Bbb{N} \subsetneq \Bbb{Z} \subsetneq \Bbb{Q}$, the three have the same cardinality: they are all countable sets. This is why the lemma proved applies to them. </p>
<p>I think the plus and times signs are in circles to distinguish them from the initial operations on $\mathfrak{F}$, but this may be a convention that I'm not aware of. To understand the proof, you don't need to worry about that; all you need to work with is the definition of the operations. </p>
<p>The polynomial $f(x)$ that is identically zero is just the zero polynomial: $f(x)=0$ for any $x$.</p>
<p>About the books: I learned abstract algebra from Artin and linear algebra from Friedberg and Axler. </p>
<p>EDIT:
For the definition of binary operation, see here: <a href="http://mathworld.wolfram.com/BinaryOperation.html" rel="nofollow">http://mathworld.wolfram.com/BinaryOperation.html</a></p>
<p>What the whole solution does is use the operations given in $(\mathfrak{F}, \cdot, +)$ to define new operations $\oplus$ and $\otimes$ on a set $F$ that has the same cardinality as $\mathfrak{F}$. Since they have the same cardinality, there is a bijection $\theta$ from $F$ to $\mathfrak{F}$, and this bijection has an inverse $\theta^{-1}$ from $\mathfrak{F}$ to $F$. </p>
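<p>A toy version of this construction (my own illustration, not from the linked solution): transport the field structure of $\Bbb{Z}_5$ onto an arbitrary five-element set via a bijection $\theta$, exactly as in the lemma:</p>

```python
# Transport the field structure of Z_5 onto an arbitrary 5-element set F
F = ["a", "b", "c", "d", "e"]
theta = {x: i for i, x in enumerate(F)}   # a bijection theta: F -> Z_5
theta_inv = {i: x for x, i in theta.items()}

def oplus(x, y):    # x (+) y := theta^{-1}(theta(x) + theta(y))
    return theta_inv[(theta[x] + theta[y]) % 5]

def otimes(x, y):   # x (*) y := theta^{-1}(theta(x) * theta(y))
    return theta_inv[(theta[x] * theta[y]) % 5]

zero, one = theta_inv[0], theta_inv[1]   # "a" and "b" play the roles of 0 and 1
assert all(oplus(x, zero) == x for x in F)                   # additive identity
assert all(any(oplus(x, y) == zero for y in F) for x in F)   # additive inverses
assert all(any(otimes(x, y) == one for y in F) for x in F if x != zero)  # reciprocals
```

<p>The field axioms hold automatically because $\theta$ carries every computation back and forth to $\Bbb{Z}_5$, where they already hold.</p>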
|
38,586 | <p>The $n$-th Mersenne number $M_n$ is defined as
$$M_n=2^n-1$$
A great deal of research focuses on Mersenne primes. What is known in the opposite direction about Mersenne numbers with only small factors (i.e. smooth numbers)? In particular, if we let $P_n$ denote the largest prime factor of $M_n$, are any results known of the form
$$\liminf_{n\rightarrow \infty}\frac{P_n}{f(n)}= 1$$
for some function $f$?</p>
<p>I've only come across two (fairly distant) bounds so far. If we consider even-valued $n$, then $M_n=M_{n/2}(M_{n/2}+2)$, so:
$$\liminf_{n\rightarrow \infty}\frac{P_n}{2^{n/2}}\leq 1$$
In the other direction, [1] shows that $P_n\geq 2n+1$ for $n>12$, so
$$\liminf_{n\rightarrow \infty}\frac{P_n}{2n}\geq 1$$</p>
<p>[1] A. Schinzel, On primitive prime factors of $a^n-b^n$, Proc. Cambridge Philos. Soc. 58
(1962), 555-562.</p>
| Gjergji Zaimi | 2,384 | <p>I guess lower bounds on the largest prime factor of Mersenne numbers are not only interesting in number theory but also in coding theory (see this article of K. Kedlaya and S. Yekhanin <a href="http://arxiv.org/abs/0704.1694" rel="nofollow">here</a>). They say the current strongest lower bound is
$$P_n>\epsilon(n)n\log^2(n)/\log\log(n)$$
for all $n$ except for a set of asymptotic density zero and all functions $\epsilon$ that tend to zero monotonically and arbitrarily slowly, and is due to C. Stewart. See his articles "The greatest prime factor of $a^n-b^n$" and "On divisors of Fermat, Fibonacci, Lucas, and Lehmer numbers".</p>
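<p>Schinzel's elementary bound $P_n \ge 2n+1$ for $n>12$, quoted in the question, can be spot-checked directly (my own sketch, plain trial division):</p>

```python
def largest_prime_factor(m):
    p, d = 1, 2
    while d * d <= m:
        while m % d == 0:       # divide out each prime factor completely
            p, m = d, m // d
        d += 1
    return m if m > 1 else p    # any leftover m > 1 is the largest prime factor

# Schinzel's bound: P_n >= 2n + 1 for n > 12
for n in range(13, 33):
    P = largest_prime_factor(2**n - 1)
    assert P >= 2 * n + 1, (n, P)
```
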
|
652,660 | <p>Show $\lnot(p\land q) \equiv \lnot p \lor \lnot q$</p>
<p>This is my solution. Please check it.</p>
<p><img src="https://i.stack.imgur.com/1y7DB.jpg" alt="enter image description here"></p>
| Jon | 124,068 | <p>Your solution is right and covers all cases that arise in classical logic. Classically, a formula is false (in a model) if and only if it is not true (in the model). So, the falsity case mentioned in a comment reduces to showing that: If $\neg(p \wedge q)$ is not true, then $\neg p \vee \neg q$ is not true. But this is equivalent to a case you' ve already shown.</p>
<p>@Newb: The second line in $(\Leftarrow)$ is correct, since the falsity of a conjunct suffices to make the conjunction false. </p>
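<p>For completeness (not part of the original answer), De Morgan's law can also be confirmed by brute force over all four truth assignments:</p>

```python
from itertools import product

# exhaustive truth-table check of ¬(p ∧ q) ≡ ¬p ∨ ¬q
for p, q in product((False, True), repeat=2):
    assert (not (p and q)) == ((not p) or (not q))
```
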
|
3,884,098 | <p>Suppose <span class="math-container">$A$</span> is a <span class="math-container">$n \times n$</span> symmetric real matrix with eigenvalues <span class="math-container">$\lambda_1, \lambda_2, \ldots, \lambda_n$</span>, what are the eigenvalues of <span class="math-container">$(I - A)^{3}$</span>?</p>
<p>Are they <span class="math-container">$(1 - \lambda_1)^3, (1 - \lambda_2)^3, \ldots, (1 - \lambda_n)^3$</span>? If so, how can I arrive at this conclusion?</p>
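<p>A quick numerical check of the conjectured eigenvalues (my own addition; the proof itself goes through the spectral decomposition $A=Q\Lambda Q^T$, which gives $(I-A)^3=Q(I-\Lambda)^3Q^T$):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
A = (B + B.T) / 2                     # a random symmetric matrix

lam = np.linalg.eigvalsh(A)           # eigenvalues of A, ascending
M = np.linalg.matrix_power(np.eye(5) - A, 3)
mu = np.linalg.eigvalsh(M)            # (I - A)^3 is also symmetric

# t -> (1 - t)^3 is decreasing, so sort before comparing
assert np.allclose(mu, np.sort((1 - lam)**3))
```
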
| Jake Mirra | 278,017 | <p>This is a perfect problem for proving the <em>contrapositive</em>. Suppose <span class="math-container">$ f $</span> is non-negative but <em>not</em> identically zero, and show that the integral is positive. To accomplish this, let <span class="math-container">$ x_0 $</span> be any point where <span class="math-container">$ f(x_0) = y_0 > 0 $</span>. By continuity, there exists <span class="math-container">$ \delta $</span> such that <span class="math-container">$ f(x) > y_0 / 2 > 0 $</span> for all <span class="math-container">$ x $</span> with <span class="math-container">$ |x - x_0| < \delta $</span>. Now consider any Riemann sum with <span class="math-container">$ \|\Delta\| < \delta $</span> and you can see that <span class="math-container">$ R_*(f, \Delta) > \delta y_0 / 2 $</span>. Hence the integral itself is greater than or equal to <span class="math-container">$ \delta y_0 / 2 $</span>. Does this argument work for you?</p>
|
794,736 | <blockquote>
<p>Let $60$ students and $10$ teachers. How many arrangements are there, such that, between two teachers must be exactly $6$ students? </p>
</blockquote>
<p>I know that there are $10!$ permutations for the teachers, and there are $54$ places between them for the students. Nothing said about the edges. Hence, at the edges might be $0-6$ students. </p>
<p>I've been told the answer is $10!\cdot 60!\cdot 7$ but I'm not sure why..I'll be glad for an explanation. </p>
<p>Thanks!</p>
| Alexander Gruber | 12,952 | <p>My phrasing would be as follows:</p>
<blockquote>
<p>Since $g(x)=g(x^2)$ for any $x$, given $x_0\ne 0$, we have that $$g(x_0)=g(x_0^2)=g((-x_0)^2)=g(-x_0),$$ so it suffices to consider $x_0>0$. In this case we have $$g(x_0)=g(x_0^{1/2})=g(x_0^{1/4})=\cdots =g(x_0^{1/2^n})=\cdots$$
Given an $\epsilon>0$, there is a $\delta>0$ so that $x\in (1-\delta,1+\delta)$ implies $|g(x)-g(1)|<\epsilon$. In particular, we can choose $n$ high enough that $x_0^{1/{2^n}}\in (1-\delta,1+\delta)$, so $|g(x_0)-g(1)|<\epsilon$; since $\epsilon$ was arbitrary, $g(x_0)=g(1)$. So $g$ is constant on $\mathbb{R}\setminus \{0\}$, from which we easily obtain by continuity that $g$ is constant on $\mathbb{R}$.</p>
</blockquote>
|
1,115,222 | <blockquote>
<p>Suppose <span class="math-container">$f$</span> is a continuous, strictly increasing function defined on a closed interval <span class="math-container">$[a,b]$</span> such that <span class="math-container">$f^{-1}$</span> is the inverse function of <span class="math-container">$f$</span>. Prove that,
<span class="math-container">$$\int_{a}^bf(x)dx+\int_{f(a)}^{f(b)}f^{-1}(x)dx=bf(b)-af(a)$$</span></p>
</blockquote>
<p>A high school student or a Calculus first year student will simply, possibly, apply change of variable technique, then integration by parts and he/she will arrive at the answer without giving much thought into the process. A smarter student would probably compare the integrals with areas and conclude that the equality is immediate.</p>
<p>However, I am an undergrad student of Analysis and I would want to solve the problem "carefully". That is, I wouldn't want to forget my definitions, and the conditions of each technique. For example, while applying change of variables technique, I cannot apply it blindly; I must be prudent enough to realize that the criterion to apply it includes continuous differentiability of a function. Simply with <span class="math-container">$f$</span> continuous, I cannot apply change of variables technique.</p>
<p>Is there any method to solve this problem rigorously? One may apply the techniques of integration (by parts, change of variables, etc.) only after proper justification.</p>
<p>The reason I am not adding any work of mine is simply that I could not proceed even one line since I am not given <span class="math-container">$f$</span> is differentiable. However, this seems to hold for non-differentiable functions also.</p>
<p>I would really want some help. Pictorial proofs and/or area arguments are invalid.</p>
| Vim | 191,404 | <p>I think it is very natural from a geometrical point of view. It's just about adding two areas, which together make up a big rectangle minus a small one. See the graph below:<img src="https://i.stack.imgur.com/nDdaL.png" alt="enter image description here"></p>
<p><br/>
Now, obviously, in the case shown in my graph
$$S_1=\int_{a}^{b}f(x)dx$$
and
$$S_2=\int_{f(a)}^{f(b)} f^{-1}(x)dx$$
Geometrically, we have
$$S_1+S_2=S_{big}-S_{small}$$
where $S_{big}$ and $S_{small}$ respectively denotes the area of the big rectangle and the small one in the graph.
Therefore
$$\int_{a}^{b}f(x)dx+\int_{f(a)}^{f(b)} f^{-1}(x)dx=bf(b)-af(a)$$</p>
|
1,115,222 | <blockquote>
<p>Suppose <span class="math-container">$f$</span> is a continuous, strictly increasing function defined on a closed interval <span class="math-container">$[a,b]$</span> such that <span class="math-container">$f^{-1}$</span> is the inverse function of <span class="math-container">$f$</span>. Prove that,
<span class="math-container">$$\int_{a}^bf(x)dx+\int_{f(a)}^{f(b)}f^{-1}(x)dx=bf(b)-af(a)$$</span></p>
</blockquote>
<p>A high school student or a Calculus first year student will simply, possibly, apply change of variable technique, then integration by parts and he/she will arrive at the answer without giving much thought into the process. A smarter student would probably compare the integrals with areas and conclude that the equality is immediate.</p>
<p>However, I am an undergrad student of Analysis and I would want to solve the problem "carefully". That is, I wouldn't want to forget my definitions, and the conditions of each technique. For example, while applying change of variables technique, I cannot apply it blindly; I must be prudent enough to realize that the criterion to apply it includes continuous differentiability of a function. Simply with <span class="math-container">$f$</span> continuous, I cannot apply change of variables technique.</p>
<p>Is there any method to solve this problem rigorously? One may apply the techniques of integration (by parts, change of variables, etc.) only after proper justification.</p>
<p>The reason I am not adding any work of mine is simply that I could not proceed even one line since I am not given <span class="math-container">$f$</span> is differentiable. However, this seems to hold for non-differentiable functions also.</p>
<p>I would really want some help. Pictorial proofs and/or area arguments are invalid.</p>
| Micah | 30,836 | <p>I think you want to start by proving three things:</p>
<ol>
<li>The theorem holds for linear functions.</li>
<li>The theorem holds for piecewise-defined functions, if it holds for each individual piece and everything remains monotone and continuous.</li>
<li>The trapezoidal rule can be used to approximate the integral of any continuous function $f$ with the integral of a piecewise-linear function (which will be continuous and monotone if $f$ is).</li>
</ol>
<p>Numbers 1. and 2. basically don't require any analysis at all. Number 3. follows pretty quickly from the intermediate value theorem and the definition of the Riemann integral.</p>
<p>Now, given any $\epsilon > 0$, choose a mesh so that the trapezoidal rule approximates $f$ to within $\epsilon/2$, and another mesh so it approximates $f^{-1}$ to within $\epsilon/2$. Any mesh on $f$ yields a mesh on $f^{-1}$ and vice versa, so we can pass to a common refinement of these two meshes and consider the piecewise linear function $\tilde f$ given by approximating $f$ on that common refinement. Then the sum of the two integrals for $\tilde f$ will:</p>
<ul>
<li>Be exactly equal to $bf(b)-af(a)$</li>
<li>Be within $\epsilon$ of the sum of the two integrals for $f$.</li>
</ul>
<p>Since $\epsilon$ was arbitrary, we are done.</p>
|
1,115,222 | <blockquote>
<p>Suppose <span class="math-container">$f$</span> is a continuous, strictly increasing function defined on a closed interval <span class="math-container">$[a,b]$</span> such that <span class="math-container">$f^{-1}$</span> is the inverse function of <span class="math-container">$f$</span>. Prove that,
<span class="math-container">$$\int_{a}^bf(x)dx+\int_{f(a)}^{f(b)}f^{-1}(x)dx=bf(b)-af(a)$$</span></p>
</blockquote>
<p>A high school student or a Calculus first year student will simply, possibly, apply change of variable technique, then integration by parts and he/she will arrive at the answer without giving much thought into the process. A smarter student would probably compare the integrals with areas and conclude that the equality is immediate.</p>
<p>However, I am an undergrad student of Analysis and I would want to solve the problem "carefully". That is, I wouldn't want to forget my definitions, and the conditions of each technique. For example, while applying change of variables technique, I cannot apply it blindly; I must be prudent enough to realize that the criterion to apply it includes continuous differentiability of a function. Simply with <span class="math-container">$f$</span> continuous, I cannot apply change of variables technique.</p>
<p>Is there any method to solve this problem rigorously? One may apply the techniques of integration (by parts, change of variables, etc.) only after proper justification.</p>
<p>The reason I am not adding any work of mine is simply that I could not proceed even one line since I am not given <span class="math-container">$f$</span> is differentiable. However, this seems to hold for non-differentiable functions also.</p>
<p>I would really want some help. Pictorial proofs and/or area arguments are invalid.</p>
| adbforlife | 859,736 | <p>Let <span class="math-container">$P' = \{f(x_0=a), f(x_1), \cdots, f(x_n=b)\}$</span> be a partition of <span class="math-container">$f^{-1}$</span> on <span class="math-container">$[f(a), f(b)]$</span>. Then we know <span class="math-container">$P = \{x_0, \cdots, x_n\}$</span> is a partition of <span class="math-container">$f$</span> on <span class="math-container">$[a,b]$</span>. Observe that
<span class="math-container">\begin{align*}
L(f,P) + U(f^{-1}, P') &= \sum_{k=0}^{n-1}f(x_k) (x_{k+1} - x_k) + \sum_{k=0}^{n-1}x_{k+1}(f(x_{k+1}) - f(x_k))\\
&= \sum_{k=0}^{n-1} x_{k+1}f(x_{k+1}) - x_kf(x_k)\\
&= bf(b) - af(a)
\end{align*}</span>
Similarly we have
<span class="math-container">\begin{align*}
U(f,P) + L(f^{-1}, P') = bf(b) - af(a)
\end{align*}</span>
Thus, for all <span class="math-container">$P'$</span> we have
<span class="math-container">\begin{align*}
bf(b) - af(a) - U(f^{-1}, P') = L(f,P) \le \int_a^b f(x)dx \le U(f,P) = bf(b) - af(a) - L(f^{-1}, P')
\end{align*}</span>
Thus <span class="math-container">$\int_a^b f(x) dx = bf(b) - af(a) - \int_{f(a)}^{f(b)} f^{-1}(x)dx$</span>.</p>
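<p>A numerical illustration of the identity (my own addition), using $f(x)=x^2$ on $[1,2]$ and a simple midpoint rule:</p>

```python
import math

def midpoint_integral(f, a, b, n=100000):
    # composite midpoint rule
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

f = lambda x: x * x      # continuous, strictly increasing on [1, 2]
f_inv = math.sqrt

a, b = 1.0, 2.0
lhs = midpoint_integral(f, a, b) + midpoint_integral(f_inv, f(a), f(b))
rhs = b * f(b) - a * f(a)      # 2*4 - 1*1 = 7
print(lhs, rhs)                # 7/3 + 14/3 = 7, as the identity predicts
```
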
|
1,903,235 | <p>According to Wikipedia, </p>
<blockquote>
<p>Hilbert space [...] extends the methods of vector algebra and calculus from the two-dimensional Euclidean plane and three-dimensional space to spaces with any finite or infinite number of dimensions</p>
</blockquote>
<p>However, the article on Euclidean space states already refers to </p>
<blockquote>
<p>the n-dimensional Euclidean space.</p>
</blockquote>
<p>This would imply that Hilbert space and Euclidean space are synonymous, which seems silly.</p>
<p>What exactly is the difference between Hilbert space and Euclidean space? What would be an example of a non-Euclidean Hilbert space?</p>
| Pand | 1,143,192 | <ol>
<li>A Hilbert space does not have to be infinite dimensional (it could be).</li>
<li>The Euclidean space is an example of a finite dimensional (n-dimensional) Hilbert space where the scalar field is the set of real numbers, i.e., <span class="math-container">$\mathbb{R}^n$</span>.</li>
<li>It is best to leave quantum mechanical discussions out of this since they generally confuse the issue. But I feel compelled, for the sake of students in QM, to correct Luther4.
(i) In quantum mechanics the wavefunction (state vector) of a single-particle system is an element of a Hilbert space. The scalar field used in QM is the set of complex numbers. In general the Hilbert space of a single-particle system is infinite dimensional. In many examples, a finite-dimensional subspace is sufficient to describe the system.
(ii) There is no connection between the number of particles and the dimension of the Hilbert space used in QM. An m-particle system can be described by a tensor product of m single-particle Hilbert spaces. If quantum indistinguishability of the particles is imposed (fermions, bosons, etc.), a subspace of the Hilbert space that satisfies certain symmetries with regard to particle exchange is used.</li>
</ol>
|
2,324,850 | <p>How to find the shortest distance from line to parabola?</p>
<p>parabola: $$2x^2-4xy+2y^2-x-y=0$$ and the line is: $$9x-7y+16=0$$
I already tried to use this formula for the distance:
$$\frac{|ax_{0}+by_{0}+c|}{\sqrt{a^2+b^2}}$$</p>
| farruhota | 425,072 | <p>Look at the graphs:</p>
<p><a href="https://i.stack.imgur.com/N2IGw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/N2IGw.png" alt="enter image description here" /></a></p>
<p>The tangent line to the parabola at the point <span class="math-container">$(x_0,y_0)$</span> is: <span class="math-container">$y=y_0+y'(x_0)(x-x_0).$</span></p>
<p>The tangent line must be parallel to the line, hence: <span class="math-container">$y'(x_0)=\frac97 \ \ \ (1)$</span></p>
<p>Take implicit differentiation from the equation of the parabola and solve the equation <span class="math-container">$(1)$</span> to find the point <span class="math-container">$(x_0,y_0).$</span></p>
<p>Then you find the distance between the point <span class="math-container">$(x_0,y_0)$</span> and the line.</p>
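<p>Following this recipe with a CAS confirms the tangency point and the distance (a sketch, assuming <code>sympy</code> is available):</p>

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
F = 2*x**2 - 4*x*y + 2*y**2 - x - y          # the parabola F(x, y) = 0

# implicit differentiation: dy/dx = -F_x / F_y
dydx = -sp.diff(F, x) / sp.diff(F, y)

# the tangent must be parallel to 9x - 7y + 16 = 0, whose slope is 9/7
sols = sp.solve([F, sp.Eq(dydx, sp.Rational(9, 7))], [x, y], dict=True)

for s in sols:
    d = sp.Abs((9*x - 7*y + 16).subs(s)) / sp.sqrt(9**2 + 7**2)
    print(s, sp.simplify(d))   # the tangency point is (3, 5); the distance is 8/sqrt(130)
```

<p>So the shortest distance is $\frac{8}{\sqrt{130}}$, attained at the point $(3,5)$ of the parabola.</p>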
|
270,495 | <p>From <a href="http://en.wikipedia.org/wiki/Ultrafilter#Generalization_to_partial_orders">Wikipedia</a></p>
<blockquote>
<p>an ultrafilter $U$ on a set $X$ is a collection of subsets of $X$ that is a filter, that cannot be enlarged (as a filter). <strong>An ultrafilter may be considered as a finitely additive measure.</strong> Then every subset of $X$ is either considered "almost everything" (has measure 1) or "almost nothing" (has measure 0). ... define a function $m$ on the power set of $X$ by setting $m(A) = 1$ if $A$ is an element of $U$ and $m(A) = 0$ otherwise. Then $m$ is a finitely additive measure on $X$, and every property of elements of $X$ is either true almost everywhere or false almost everywhere. </p>
</blockquote>
<p>It says "an ultrafilter may be considered as a finitely additive measure." However, I only see how an ultrafilter induces a finitely additive measure, and don't see how a finitely additive measure induces an ultrafilter. In order for a finitely additive measure to induce an ultrafilter, I think it is not enough that for any subset $A$ of $X$, either $m(A) = 1$ and $m(X-A) = 0$ or $m(X-A) = 1$ and $m(A) = 0$, isn't it?</p>
<p>Thanks and regards!</p>
| Brian M. Scott | 12,042 | <p>It does not say that every finitely additive measure induces an ultrafilter. However, it is true that every non-trivial <span class="math-container">$\{0,1\}$</span>-valued finitely additive measure on <span class="math-container">$\wp(X)$</span> induces an ultrafilter on <span class="math-container">$X$</span>.</p>
<p>Suppose that <span class="math-container">$m$</span> is a <span class="math-container">$\{0,1\}$</span>-valued finitely additive measure defined on <span class="math-container">$\wp(X)$</span> such that <span class="math-container">$m(X)=1$</span>. Let <span class="math-container">$\mathscr{U}=\{U\subseteq X:m(U)=1\}$</span>. Clearly <span class="math-container">$V\in\mathscr{U}$</span> whenever <span class="math-container">$X\supseteq V\supseteq U\in\mathscr{U}$</span>, so <span class="math-container">$\mathscr{U}$</span> is closed under taking supersets. If <span class="math-container">$U,V\in\mathscr{U}$</span> and <span class="math-container">$U\cap V\notin\mathscr{U}$</span>, then</p>
<p><span class="math-container">$$1=m(U)=m(U\setminus V)+m(U\cap V)=m(U\setminus V)$$</span></p>
<p>and similarly <span class="math-container">$m(V\setminus U)=1$</span>, so</p>
<p><span class="math-container">$$m(U\cup V)=m(U\setminus V)+m(U\cap V)+m(V\setminus U)=2\;,$$</span></p>
<p>which is absurd. Thus, <span class="math-container">$\mathscr{U}$</span> is closed under taking finite intersections. Finally, for each <span class="math-container">$A\subseteq X$</span> we have <span class="math-container">$m(A)+m(X\setminus A)=1$</span>, so exactly one of <span class="math-container">$A$</span> and <span class="math-container">$X\setminus A$</span> belongs to <span class="math-container">$\mathscr{U}$</span>. Thus, <span class="math-container">$\mathscr{U}$</span> is an ultrafilter on <span class="math-container">$X$</span>.</p>
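<p>As a finite sanity check of this correspondence (a hypothetical brute-force script, not part of the original answer): on a finite set, every such measure on the whole power set turns out to be the indicator of a principal ultrafilter, so on a 3-element set there are exactly 3 of them.</p>

```python
from itertools import chain, combinations, product

X = (0, 1, 2)
subsets = [frozenset(s) for s in chain.from_iterable(
    combinations(X, k) for k in range(len(X) + 1))]

def finitely_additive(m):
    # m(A ∪ B) = m(A) + m(B) whenever A and B are disjoint
    return all(m[A | B] == m[A] + m[B]
               for A in subsets for B in subsets if not (A & B))

measures = [dict(zip(subsets, bits))
            for bits in product((0, 1), repeat=len(subsets))]
measures = [m for m in measures
            if m[frozenset(X)] == 1 and finitely_additive(m)]

print(len(measures))  # 3 -- one principal ultrafilter per point of X
```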
|
270,495 | <p>From <a href="http://en.wikipedia.org/wiki/Ultrafilter#Generalization_to_partial_orders">Wikipedia</a></p>
<blockquote>
<p>an ultrafilter $U$ on a set $X$ is a collection of subsets of $X$ that is a filter, that cannot be enlarged (as a filter). <strong>An ultrafilter may be considered as a finitely additive measure.</strong> Then every subset of $X$ is either considered "almost everything" (has measure 1) or "almost nothing" (has measure 0). ... define a function $m$ on the power set of $X$ by setting $m(A) = 1$ if $A$ is an element of $U$ and $m(A) = 0$ otherwise. Then $m$ is a finitely additive measure on $X$, and every property of elements of $X$ is either true almost everywhere or false almost everywhere. </p>
</blockquote>
<p>It says "an ultrafilter may be considered as a finitely additive measure." However, I only see how an ultrafilter induces a finitely additive measure, and don't see how a finitely additive measure induces an ultrafilter. In order for a finitely additive measure to induce an ultrafilter, I think it is not enough that for any subset $A$ of $X$, either $m(A) = 1$ and $m(X-A) = 0$ or $m(X-A) = 1$ and $m(A) = 0$, isn't it?</p>
<p>Thanks and regards!</p>
| André Nicolas | 6,312 | <p>Yes, it is enough (if we insist that the whole set has measure $1$). We need to check that any superset of a set of measure $1$ has measure $1$ (easy), and that the intersection of two sets of measure $1$ has measure $1$. </p>
<p>So let $A$ and $B$ have measure $1$. Then $A\cup B$ has measure $1$, and is the disjoint union of $A\cap B$, $A\setminus B$, and $B\setminus A$. </p>
<p>If $A\cap B$ does not have measure $1$, then each of $A\setminus B$ and $B\setminus A$ do, contradicting finite additivity. </p>
<p><strong>Remark:</strong> Finitely additive $\{0,1\}$-valued measures defined on all subsets of a set $I$, and ultrafilters on $I$ are such close relatives that there is no point in distinguishing between the two. </p>
<p>When we are working with an ultrapower $A^{I}/D$, it is often more natural to say that the functions $f$, $g$ from $I$ to $A$ are equal "almost everywhere" than to say $\{i:f(i)=g(i)\}\in D$. </p>
|
217,291 | <p>I am trying to recreate the following image in latex (pgfplots), but in order to do so I need to figure out the mathematical expressions for the functions</p>
<p><img src="https://i.stack.imgur.com/jYGNP.png" alt="wavepacket"></p>
<p>So far I am sure that the gray line is $\sin x$, and that
the red line is some version of $\sin x / x$.
Whereas the green line is some linear combination of sine and cosine functions.</p>
<p>Anyone know a good way to find these functions? </p>
| Makoto Kato | 28,422 | <p>Let $p$ be an odd prime number.
$p = x^2 + y^2$ has integer solutions if and only if $p \equiv 1$ (mod $4$).
This can be elegantly proved by using the properties of the ring $\mathbb{Z}[i]$.</p>
<p><a href="http://en.wikipedia.org/wiki/Proofs_of_Fermat%27s_theorem_on_sums_of_two_squares" rel="nofollow">http://en.wikipedia.org/wiki/Proofs_of_Fermat%27s_theorem_on_sums_of_two_squares</a></p>
|
3,407,489 | <p><span class="math-container">$\neg\left (\neg{\left (A\setminus A \right )}\setminus A \right )$</span></p>
<p><span class="math-container">$A\setminus A $</span> is simply the empty set, and <span class="math-container">$\neg$</span> of that is again the empty set. The empty set <span class="math-container">$\setminus$</span> A is the empty set, right? But the empty set is included in every set?</p>
<p>I am confused when it comes to this..</p>
| Bram28 | 256,001 | <p>If by 'negation' you mean complement, then the 'negation' of the empty set is not the empty set, but the universal set ... or at least: the set of all things you are talking about: the Universe of Discourse. If we call that <span class="math-container">$U$</span>, we get:</p>
<p><span class="math-container">$((A\setminus A)'\setminus A)'=(\emptyset'\setminus A)'=(U\setminus A)'=A$</span></p>
<p>To go into that last step a little more:</p>
<p><span class="math-container">$A \setminus B$</span> is the same as <span class="math-container">$A \cap B'$</span>, and so:</p>
<p><span class="math-container">$(U\setminus A)'=(U \cap A')'=U' \cup A''=\emptyset \cup A = A$</span></p>
|
567,683 | <p>Let $F:\mathbb R^2\to \mathbb R^2$ be the force field with </p>
<p>$$F(x,y) = -\frac{(x,y)}{\sqrt{x^2 + y^2}}$$</p>
<p>the unit vector in the direction from $(x,y)$ to the origin. Calculate the work done against the force field in moving a particle from $(2a,0)$ to the origin along the top half of the circle $(x−a)^2+y^2=a^2$.</p>
<p>Okay, I tried to use the line integral and I set
$x=a+a\cos(t)$, $y= a\sin(t)$ and $t\in [0,\pi]$. Then the work should be </p>
<p>$$\int_0^\pi F(r(t))\cdot r'(t)\,dt$$</p>
<p>But I can't get the right answer!!</p>
| user108946 | 108,946 | <p>Your vector field is conservative: $\nabla \times F = 0$. Thus the integral is path independent. This should simplify your calculation considerably: choose the easy straight-line path from $(2a,0)$ to $(0,0)$ and integrate.</p>
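<p>Either way, the line integral along the semicircle can be checked directly. A sketch with <code>sympy</code>, using the parametrisation from the question with the test value $a=1$ (the answer scales linearly in $a$):</p>

```python
import sympy as sp

t = sp.symbols('t')
a = 1                                     # test radius; the result scales linearly in a
x = a + a*sp.cos(t)                       # the question's parametrisation of the top half
y = a*sp.sin(t)
r = sp.sqrt(x**2 + y**2)
Fx, Fy = -x/r, -y/r                       # unit vector pointing at the origin

integrand = Fx*sp.diff(x, t) + Fy*sp.diff(y, t)
W_by_field = sp.integrate(sp.simplify(integrand), (t, 0, sp.pi))
print(W_by_field)  # 2, i.e. 2a: the work done *by* the field, so the work done against it is -2a
```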
|
241,612 | <blockquote>
<p>Find all eigenvalues and eigenvectors:</p>
<p>a.) $\pmatrix{i&1\\0&-1+i}$</p>
<p>b.) $\pmatrix{\cos\theta & -\sin\theta \\ \sin\theta & \cos\theta}$</p>
</blockquote>
<p>For a I got:
$$\operatorname{det} \pmatrix{i-\lambda&1\\0&-1+i-\lambda}= \lambda^{2} - 2\lambda i + \lambda - i - 1
$$</p>
<p>For b I got:
$$\operatorname{det} \pmatrix{\cos\theta - \lambda & -\sin\theta \\ \sin\theta & \cos\theta - \lambda}= \cos^2\theta + \sin^2\theta + \lambda^2 -2\lambda \cos\theta = \lambda^2 -2\lambda \cos\theta +1$$</p>
<p>But how can I find the corresponding eigenvalues for a and b? </p>
| Smajl | 43,803 | <p>You can find eigenvalues by setting $\det(A-\lambda E)=0$.</p>
<p>If you want to find corresponding eigenvectors, too, try solving this equation:</p>
<p>$Av=\lambda v$, where $v$ is an eigenvector for $\lambda$ in this equation. In other terms:
$Av_i=\lambda_i v_i$</p>
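<p>In these two cases the roots can even be read off without the quadratic formula: matrix (a) is upper triangular, so its eigenvalues are the diagonal entries $i$ and $-1+i$, and for (b) the quadratic $\lambda^2-2\lambda\cos\theta+1=0$ gives $\lambda=\cos\theta\pm i\sin\theta=e^{\pm i\theta}$. A quick numerical check (a sketch, assuming <code>numpy</code> is available):</p>

```python
import numpy as np

# (a) upper triangular: the eigenvalues are the diagonal entries i and -1+i
A = np.array([[1j, 1], [0, -1 + 1j]])
print(np.linalg.eigvals(A))               # i and -1+i, in some order

# (b) rotation matrix: the eigenvalues are e^{+i theta} and e^{-i theta}
theta = 0.7                               # arbitrary test angle
B = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print(np.linalg.eigvals(B))               # e^{0.7i} and e^{-0.7i}, in some order
```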
|
4,076,006 | <p>I would like to know the number of valuation rings of <span class="math-container">$\Bbb Q_p((T))$</span>.
I know <span class="math-container">$\Bbb Q_p$</span> has <span class="math-container">$2$</span> valuation rings, that is, <span class="math-container">$\Bbb Q_p$</span> and <span class="math-container">$\Bbb Z_p$</span>.
Every algebraic extension of <span class="math-container">$\Bbb Q_p$</span> has more than <span class="math-container">$2$</span> valuation rings because of the extension theorem for valuations. But <span class="math-container">$\Bbb Q_p((T))$</span> is not algebraic over <span class="math-container">$\Bbb Q_p$</span>, so I am at a loss.</p>
<p>How many valuation rings of <span class="math-container">$\Bbb Q_p((T))$</span> are there?</p>
<p>Thank you in advance.</p>
| Torsten Schoeneberg | 96,384 | <p>A <a href="https://en.wikipedia.org/wiki/Valuation_ring" rel="nofollow noreferrer">valuation ring</a> as you define it in a comment can also be described via a valuation, i.e. a surjective homomorphism <span class="math-container">$v: K^\times \twoheadrightarrow \Gamma$</span> onto a totally ordered abelian group <span class="math-container">$\Gamma$</span>.</p>
<p>You should first of all note (also w.r.t. <a href="https://math.stackexchange.com/q/4074221/96384">your previous question</a>) that if we stay in this generality, almost all fields you can reasonably think of have infinitely many valuation rings, although you might not be able to see them:</p>
<p>Namely, let <span class="math-container">$K$</span> be <strong>any (!)</strong> field that embeds into <span class="math-container">$\mathbb C$</span> (and unless you have a good point against the axiom of choice, this includes all <span class="math-container">$\mathbb Q_p$</span> and also the example <span class="math-container">$\mathbb Q_p((T))$</span> in this question). Now there are isomorphisms (unless you have a good point against the axiom of choice) <span class="math-container">$\mathbb C \simeq \mathbb C_\ell$</span> where <span class="math-container">$\mathbb C_\ell$</span> is the completion of the algebraic closure of the <span class="math-container">$\ell$</span>-adic numbers <span class="math-container">$\mathbb Q_\ell$</span> for your second favourite prime <span class="math-container">$\ell$</span>. Said field <span class="math-container">$\mathbb C_\ell$</span> comes naturally equipped with its <span class="math-container">$\ell$</span>-adic valuation <span class="math-container">$v_\ell$</span> with value group (written multiplicatively) <span class="math-container">$\Gamma = \ell^\mathbb Q$</span>. The restriction of this valuation along the isomorphisms and to the field <span class="math-container">$K$</span> is a valuation on <span class="math-container">$K$</span>, which when restricted to <span class="math-container">$\mathbb Q$</span> gives the <span class="math-container">$\ell$</span>-adic valuation. In particular, all these valuations are distinct and thus we have (modulo choice) infinitely many distinct valuation rings in <span class="math-container">$K$</span>.</p>
<p>So that kind of dooms your hope to find a field with three (or any finite number of) valuation rings, at least in characteristic <span class="math-container">$0$</span>. (Update: As reuns comments and I spelled out in an answer to your previous question, actually every field that is not algebraic over some <span class="math-container">$\mathbb F_p$</span> has infinitely many valuation rings -- and those that are algebraic over some <span class="math-container">$\mathbb F_p$</span> of course have only one, namely themselves.)</p>
<hr />
<p>Good news is, if you restrict your search to valuation rings whose value group <span class="math-container">$\Gamma$</span> (from now on written additively) is discrete in the sense that it has a smallest element <span class="math-container">$>0_\Gamma$</span> (which unfortunately is not the same as what we call a DVR; the difference is that here I allow "discrete" value groups of higher rank, i.e. basically <span class="math-container">$\mathbb Z^n$</span>), then I think you've found a match here.</p>
<p>To see that, adapt reuns' beautiful answer: The <span class="math-container">$n$</span>-divisibility of that big subgroup <span class="math-container">$(1+p\Bbb{Z}_p+T \Bbb{Q}_p[[T]], \cdot) \subset K^\times$</span> still shows (because of our "discreteness" assumption) that all its elements must have value <span class="math-container">$0$</span>, and of course so do all roots of unity; if <span class="math-container">$v(p)=0$</span> continue as in his proof and either get the trivial valuation <span class="math-container">$w_0(K^\times)=0$</span> i.e.</p>
<p><span class="math-container">$$R_0=K$$</span></p>
<p>or the discrete (rank 1) valuation <span class="math-container">$w_1(\sum a_i T^i)=\min(i: a_i \neq 0)$</span> i.e.</p>
<p><span class="math-container">$$R_1= \mathbb Q_p[[T]].$$</span></p>
<p>But now we could also have the option that <span class="math-container">$v(p) > 0$</span> as long as we make sure (cf. reuns' first bullet) that <span class="math-container">$v(p^{-r}T)$</span> is always <span class="math-container">$\ge 0$</span>, meaning that the value of <span class="math-container">$T$</span> has to be "infinitely bigger" than that of <span class="math-container">$p$</span>. That forces us basically to have <span class="math-container">$\mathbb Z \oplus \mathbb Z$</span> as value group, with the generator <span class="math-container">$v(T) := (1,0)$</span> "infinitely bigger" than the other generator <span class="math-container">$v(p)=(0,1)$</span>, i.e. the lexicographic order. Call this rank 2 valuation <span class="math-container">$w_2 : K^\times \rightarrow \mathbb Z \oplus \mathbb Z $</span>, it is explicitly given on <span class="math-container">$x= \sum a_i T^i$</span> as</p>
<p><span class="math-container">$$w_2(x) := \left(w_1(x), v_p(a_{w_1(x)})\right).$$</span></p>
<p>Its valuation ring is</p>
<p><span class="math-container">$$R_2 = \{x \in K: w_2(x) \ge (0,0)\} = \{\sum_{i \ge 0} a_i X^i \in \mathbb Q_p[[X]]: a_0 \in \mathbb Z_p\}$$</span></p>
<p>and there you got your three possible valuation rings with "discrete" value groups (of ranks <span class="math-container">$0,1,2$</span>) <span class="math-container">$R_0, R_1, R_2$</span>.</p>
<p>Compare the other answers to <a href="https://math.stackexchange.com/q/1307/96384">Concrete examples of valuation rings of rank two.</a> for another example.</p>
|
2,267,165 | <p>Find the distance between the two parallel planes?</p>
<p>$$a: x-2y+3z-2=0$$
$$b: 2x-4y+6z-1=0$$</p>
<p>The given answer is: $\dfrac{3}{\sqrt{56}}$</p>
| Dr. Sonnhard Graubner | 175,066 | <p>Converting the equation of one plane into Hessian normal form we get
$$\frac{x-2y+3z-2}{\pm \sqrt{1+4+9}}=0$$ and taking a point from the other plane, e.g. $$P\left(\tfrac{1}{2};0;0\right),$$ plugging this into the Hessian form we get
$$\frac{\frac{3}{2}}{\sqrt{14}}=\frac{3}{\sqrt{56}}.$$</p>
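<p>The same number falls out of the standard formula $d=\lvert c_1-c_2\rvert/\lVert n\rVert$ after scaling plane $b$ by $\tfrac12$ so that both planes share the normal $(1,-2,3)$. A quick numerical check (a sketch, assuming <code>numpy</code>):</p>

```python
import numpy as np

n = np.array([1.0, -2.0, 3.0])   # common normal after multiplying plane b by 1/2
c_a, c_b = -2.0, -0.5            # constant terms: x-2y+3z-2=0 and x-2y+3z-1/2=0
dist = abs(c_a - c_b) / np.linalg.norm(n)
print(dist, 3 / np.sqrt(56))     # both are 3/sqrt(56) ≈ 0.4009
```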
|
3,933,296 | <p>What I already have,</p>
<ol>
<li>Palindrome in form XYZYX, where X can’t be 0.</li>
<li>Divisibility rule of 9: sum of digits is divisible by 9. So, we have 2(X+Y)+Z = 9M.</li>
<li>The first part is divisible by 9 if and only if X+Y is divisible by 9. So, we have 10 pairs out of 90. And each such pair the total sum is divisible by 9 when Z is also divisible by 9. There are 2 such Zs: 0, 9. So, there are 20 divisible palindromes.</li>
<li>If (X+Y) mod 9 = 1, then 2(X+Y) mod 9 = 2; and in order for the total sum to be divisible by 9, Z must have the remainder of 7 when divided by 9. There is 1 such Z: 7. And again, we have 10 xy pairs with the given remainder. So, this case yields 10*1 = 10 more palindromes.</li>
<li>Same logic as on previous step applies to the case when 2(X+Y) mod 9 = 2.</li>
<li>So, total number of divisible palindromes = 80?</li>
</ol>
<p>When using this method, I only get 80 five-digit palindromes that are divisible by 9(?). I don't think I'm doing this method correctly; can someone show me what's going on here?</p>
| Explorer | 630,833 | <p>As you have rightly mentioned, we need to figure out all tuples of <span class="math-container">$(X,Y,Z)$</span> such that <span class="math-container">$$2(X+Y)+Z\bmod 9 = 0 \implies 2(X+Y)\bmod 9 = (9-Z)\bmod 9.\tag{1}$$</span>
Note that for any value of <span class="math-container">$2(X+Y)\bmod 9=0,1,\ldots,8$</span>, the corresponding value of <span class="math-container">$(X+Y)\bmod 9$</span> is unique. Also, for any given <span class="math-container">$k=0,1,\ldots,8$</span>, the tuples that satisfy <span class="math-container">$(X+Y)\bmod 9=k$</span> are given by
<span class="math-container">$$(k, 9) \text{ and } (X,k-X\bmod 9) \text{ for } X=1,2,\ldots,9.$$</span> So, there are <span class="math-container">$10$</span> tuples <span class="math-container">$(X,Y)$</span> corresponding to any given value of <span class="math-container">$2(X+Y)\bmod 9$</span>.</p>
<p>Further, for any given value of <span class="math-container">$2(X+Y)\bmod 9$</span> in <span class="math-container">$(1)$</span>, the corresponding value of <span class="math-container">$Z$</span> is also unique except when <span class="math-container">$2(X+Y)\bmod 9 = 0$</span>. When <span class="math-container">$2(X+Y)\bmod 9 = 0$</span>, <span class="math-container">$Z$</span> can either be <span class="math-container">$0$</span> or <span class="math-container">$9$</span>. Consequently, with <span class="math-container">$2(X+Y)\bmod 9=1,2,\ldots,8$</span>, there are <span class="math-container">$80$</span> palindromes divisible by <span class="math-container">$9$</span>, and with <span class="math-container">$2(X+Y)\bmod 9=0$</span>, there are <span class="math-container">$20$</span> palindromes divisible by <span class="math-container">$9$</span>.</p>
<p><strong>Hence, the total number of <span class="math-container">$5-$</span>digit palindromes divisible by <span class="math-container">$9$</span> is <span class="math-container">$100$</span>.</strong></p>
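<p>A brute-force count over all 900 five-digit palindromes confirms the total:</p>

```python
# count the 5-digit palindromes divisible by 9 by direct enumeration
count = sum(1 for n in range(10000, 100000)
            if str(n) == str(n)[::-1] and n % 9 == 0)
print(count)  # 100
```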
|
3,933,296 | <p>What I already have,</p>
<ol>
<li>Palindrome in form XYZYX, where X can’t be 0.</li>
<li>Divisibility rule of 9: sum of digits is divisible by 9. So, we have 2(X+Y)+Z = 9M.</li>
<li>The first part is divisible by 9 if and only if X+Y is divisible by 9. So, we have 10 pairs out of 90. And each such pair the total sum is divisible by 9 when Z is also divisible by 9. There are 2 such Zs: 0, 9. So, there are 20 divisible palindromes.</li>
<li>If (X+Y) mod 9 = 1, then 2(X+Y) mod 9 = 2; and in order for the total sum to be divisible by 9, Z must have the remainder of 7 when divided by 9. There is 1 such Z: 7. And again, we have 10 xy pairs with the given remainder. So, this case yields 10*1 = 10 more palindromes.</li>
<li>Same logic as on previous step applies to the case when 2(X+Y) mod 9 = 2.</li>
<li>So, total number of divisible palindromes = 80?</li>
</ol>
<p>When using this method, I only get 80 five-digit palindromes that are divisible by 9(?). I don't think I'm doing this method correctly; can someone show me what's going on here?</p>
| paw88789 | 147,810 | <p>Choose digits <span class="math-container">$Y$</span> and <span class="math-container">$Z$</span> freely (<span class="math-container">$100$</span> ways to do this). Then there is a unique nonzero digit <span class="math-container">$X$</span> so that <span class="math-container">$2X+2Y+Z\equiv 0 \pmod{9}$</span>.</p>
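<p>The uniqueness claim holds because $2$ is invertible mod $9$ and the nonzero digits $1,\dots,9$ hit each residue class mod $9$ exactly once. An exhaustive check:</p>

```python
for Y in range(10):
    for Z in range(10):
        # the digits X = 1..9 cover every residue mod 9 exactly once,
        # so exactly one X makes the digit sum 2X + 2Y + Z divisible by 9
        xs = [X for X in range(1, 10) if (2*X + 2*Y + Z) % 9 == 0]
        assert len(xs) == 1
print("exactly one X for each of the 100 (Y, Z) pairs")
```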
|
885,450 | <blockquote>
<p>After covering a distance of 30Km with a uniform speed, there got some
defect in train engine and therefore its speed is reduced to 4/5 of
its original speed. Consequently, the train reaches its destination 45
minutes late. If it had happened after covering 18Km of distance, the
train would have reached 9 minutes earlier>Find the speed of the train
and the distance of the journey.</p>
</blockquote>
<p>This is my question.</p>
<p>now after letting the original speed of train be $x Km/Hr$ and the time taken be y Hr, threfore
distance = $xy$</p>
<p>CASE I:</p>
<p>speed = $x-4x/5$ Km/hr</p>
<p>time = $y + 45/60$</p>
<p>$xy=60xy/300 +45x/300 => +300xy-60xy = 45x => 240xy = 45x$ .............[i]</p>
<p>similarly in CASE II we get the equation:</p>
<p>$240xy = -9x$.........[ii]</p>
<p>But after solving these two equations the answer comes to 0, which is wrong. Please tell me the correct solution.</p>
<p>thanks</p>
<p>(fast please)</p>
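<p>For what it's worth, with the standard reading of this textbook problem (the engine trouble in the second scenario occurs 18 Km farther on, i.e. after 48 Km, and the reduced speed is $\tfrac{4x}{5}$, not $x-\tfrac{4x}{5}$), the two lateness conditions pin down speed and distance. A sketch assuming <code>sympy</code>:</p>

```python
import sympy as sp

x, d = sp.symbols('x d', positive=True)   # original speed (km/h), total distance (km)

def delay(s):
    # extra time when the last (d - s) km are run at 4x/5 instead of x
    return (d - s)/(sp.Rational(4, 5)*x) - (d - s)/x

sol = sp.solve([sp.Eq(delay(30), sp.Rational(45, 60)),   # 45 minutes late
                sp.Eq(delay(48), sp.Rational(36, 60))],  # 9 minutes less late
               [x, d], dict=True)
print(sol)  # speed x = 30 km/h, distance d = 120 km
```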
| cactus314 | 4,997 | <p>$$\log \left(1+\tfrac{1}{2^{2^n}}\right)\left(1+\tfrac{1}{2^{2^n}+2}\right)\cdots\left(1+\tfrac{1}{2^{
2^n+1}}\right)
= \log \left(1+\tfrac{1}{2^{2^n}}\right)+
\log \left(1+\tfrac{1}{2^{2^n}+2}\right)+\cdots + \log\left(1+\tfrac{1}{2^{
2^n+1}}\right)$$</p>
<p>Expanding on Alex' idea let $t = 2^{2^n}$ which is getting very large. Notice we get only even numbers:</p>
<p>$$ \log \left(1+\tfrac{1}{t}\right)+
\log \left(1+\tfrac{1}{t+2}\right)+\cdots + \log\left(1+\tfrac{1}{2t}\right) \approx
\frac{1}{t} + \frac{1}{t+2}+\cdots + \frac{1}{2t}
$$</p>
<p>We get a Riemann sum, but only half of the terms:</p>
<p>$$
\frac{1}{t}\big[1 + \frac{1}{1+\tfrac{2}{t}}+\cdots + \frac{1}{2}\big] \approx \frac{1}{2} \int_1^2 \frac{dt}{t} = \frac{\log 2}{2} = \log \sqrt{2}
$$</p>
|
1,024,068 | <p>I need to solve these two equations . </p>
<p>$ 2x + 4y + 3x^{2} + 4xy =0$</p>
<p>$ 4x + 8y + 2x^{2} + 4y^{3} = 0 $</p>
<p>I have added them, subtracted them. Nothing is helping here. Can anyone give hints? Thanks</p>
| Empy2 | 81,790 | <p>Rearrange the first equation to get $y$ as a function of $x$. Plug that into the second equation.<br>
I think that, after rearrangement, you should get a polynomial in $x$ of degree 6.</p>
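<p>Carrying the hint out with a CAS (a sketch, assuming <code>sympy</code>): solve the first equation for $y$ (it is linear in $y$), substitute into the second, and clear denominators.</p>

```python
import sympy as sp

x, y = sp.symbols('x y')
e1 = 2*x + 4*y + 3*x**2 + 4*x*y
e2 = 4*x + 8*y + 2*x**2 + 4*y**3

y_of_x = sp.solve(e1, y)[0]          # y = -x(3x + 2) / (4(x + 1)), valid for x != -1
num, _ = sp.fraction(sp.cancel(e2.subs(y, y_of_x)))

print(sp.degree(num, x))             # 6, as the answer predicts
print(num.subs(x, 0))                # 0: x = 0 (with y = 0) is one obvious solution
```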
|
1,189,216 | <p>Wikipedia and other sources claim that </p>
<p>$PA +\neg G_{PA}$</p>
<p>can be consistent, where $\neg G_{PA}$ is the Gödel statement for PA.</p>
<p>So what is the error in my reasoning?</p>
<p>$G_{PA}$ = "$G_{PA}$ is unprovable in PA"</p>
<p>$\neg G_{PA} $</p>
<p>$\implies$ $\neg$ "$G_{PA}$ is unprovable in PA"</p>
<p>$\implies$ "$G_{PA}$ is provable in PA" </p>
<p>$\implies$ $G_{PA}$</p>
<p>I would also appreciate it if someone could provide a somewhat intuitive explanation.</p>
<p>Sources:</p>
<ol>
<li><a href="http://en.wikipedia.org/wiki/Non-standard_model_of_arithmetic#Arithmetic_unsoundness_for_models_with_.7EG_true" rel="nofollow noreferrer">Non-standard model of arithmetic on Wikipedia</a></li>
<li><a href="http://en.wikipedia.org/wiki/Rosser%27s_trick#Background" rel="nofollow noreferrer">Rosser's trick on Wikipedia</a></li>
<li><a href="https://math.stackexchange.com/a/16383/155096">Understanding Gödel's Incompleteness Theorem</a></li>
<li><a href="https://math.stackexchange.com/a/183106/155096">Is the negation of the Gödel sentence always unprovable too?</a></li>
</ol>
| Mauro ALLEGRANZA | 108,274 | <p><em>Long comment</em>, regarding the "deductive flaw" in your argument.</p>
<p>We have that <a href="http://plato.stanford.edu/entries/goedel-incompleteness/" rel="nofollow">Gödel's First Incompleteness Theorem</a> needs the Gödel's sentence $G_{\mathsf {PA}}$ such that :</p>
<blockquote>
<p>$\mathsf {PA} \vdash G_{\mathsf {PA}} ↔ ¬Prov_{\mathsf {PA}}(\ulcorner G_{\mathsf {PA}} \urcorner)$.</p>
</blockquote>
<p>Now your proof is :</p>
<p>1) $\lnot G_{\mathsf {PA}}$ --- assumed,</p>
<p>which means, as you say : "<strong>not</strong> $G_{\mathsf {PA}}$ is <em>unprovable</em> in $\mathsf {PA}$"</p>
<p>2) $Prov_{\mathsf {PA}}(\ulcorner G_{\mathsf {PA}} \urcorner)$ --- by the above equivalence and <em>double negation</em>, </p>
<p>which means, as you say : "$G_{\mathsf {PA}}$ is <em>provable</em> in $\mathsf {PA}$".</p>
<p>But 2), as you can easily check, is <strong>not</strong> $G_{\mathsf {PA}}$, so you cannot conclude it only by the above equivalence.</p>
<p>As per Hanno's answer, the "move" from : $Prov_{\mathsf {PA}}(\ulcorner S \urcorner)$ to $S$ needs some "additional resource", like the so-called <em>Reflection Principle</em> :</p>
<blockquote>
<p>(Ref) $ \ \ Prov_{\mathsf {PA}}(\ulcorner S \urcorner) → S$</p>
</blockquote>
<p>which asserts a sort of <em>soundness</em> of the system (a stronger property than <em>consistency</em>).</p>
<p>In a nutshell, we cannot simply "equate" the intuitive concepts of <em>provable</em> and <em>true</em>.</p>
<p>The gist of Gödel's (and Tarski's) Theorems is that if we work with the "formal counterparts" of the two concepts (like <em>provable</em> in a (formal) theory $\mathsf T$) we have to acknowledge that the two are <strong>not</strong> equivalent.</p>
|
1,248,331 | <p>That's the question :</p>
<p>Let $a$ be a cardinality such that this following statment is true :</p>
<p>For every $A, C$, if $ A \subseteq C$, $|A| = a$ and $|C| > a$, then $|C \setminus A| > |A|$.</p>
<p>Without using cardinality arithmetic, prove that $a + a = a$.</p>
<p>This is how the question is written; maybe I'm not reading it well, because I can find a counterexample. Let $C$ be $\{1,2,3,4,5\}$, and $A$ be $\{3,4\}$.
Then $A \subseteq C$, the cardinality of $C$ is bigger than the cardinality of $A$, and $|C \setminus A| = 3 > 2$.</p>
<p>So the statement is indeed true, yet $a + a = 4 \neq 2$.</p>
<p>What am I missing here?
Thanks in advance.</p>
<p>Edit: Hagen von Eitzen and martini helped me see what I've missed, thanks. I am still trying to solve the problem as actually stated, so a hint or some help would be much appreciated.</p>
<p>I think that by saying "show that $a + a = a$", it means showing that $a$ has to be infinite, because when $a$ is finite then $a + a = 2a$; maybe I'm getting this all wrong...</p>
| martini | 15,379 | <p>You gave <strong>one</strong> $A$, $C$, <em>but</em>: The statement says <em>all</em> $A$, $C$. For $a=2$, $C = \{1,2,3\}$, $A = \{1,2\}$, is a counterexample, as $|C\setminus A| = 1$.</p>
|
1,248,331 | <p>That's the question :</p>
<p>Let $a$ be a cardinality such that this following statment is true :</p>
<p>For every $A, C$, if $ A \subseteq C$, $|A| = a$ and $|C| > a$, then $|C \setminus A| > |A|$.</p>
<p>Without using cardinality arithmetic, prove that $a + a = a$.</p>
<p>This is how the question is written; maybe I'm not reading it well, because I can find a counterexample. Let $C$ be $\{1,2,3,4,5\}$, and $A$ be $\{3,4\}$.
Then $A \subseteq C$, the cardinality of $C$ is bigger than the cardinality of $A$, and $|C \setminus A| = 3 > 2$.</p>
<p>So the statement is indeed true, yet $a + a = 4 \neq 2$.</p>
<p>What am I missing here?
Thanks in advance.</p>
<p>Edit: Hagen von Eitzen and martini helped me see what I've missed, thanks. I am still trying to solve the problem as actually stated, so a hint or some help would be much appreciated.</p>
<p>I think that by saying "show that $a + a = a$", it means showing that $a$ has to be infinite, because when $a$ is finite then $a + a = 2a$; maybe I'm getting this all wrong...</p>
| Asaf Karagila | 622 | <p><strong>HINT:</strong> Take $C=A\times\{0,1\}$. If $|C|>|A|$, then it cannot be that $C$ can be split into two parts both of size $A$. </p>
|
3,846,717 | <p>Denote <span class="math-container">$\mathbb{F}=\mathbb{C}$</span> or <span class="math-container">$\mathbb{R}$</span>.</p>
<p><strong>Theorem (Cauchy - Schwarz Inequality).</strong> <em>If <span class="math-container">$\langle\cdot,\cdot\rangle$</span> is a semi-inner product on a vector space <span class="math-container">$H$</span>, then</em> <span class="math-container">$$\lvert\langle x,y\rangle\rvert\le\lVert x\rVert\lVert y\rVert,\quad\textit{for all}\;x,y,\in H.$$</span></p>
<p><em>Proof.</em> If <span class="math-container">$x=0$</span> or <span class="math-container">$y=0$</span>, then there is nothing to prove, so suppose that <span class="math-container">$x$</span> and <span class="math-container">$y$</span> are both nonzero.</p>
<p>Given any scalar <span class="math-container">$z\in\mathbb{F}$</span>, there is a scalar <span class="math-container">$\alpha$</span> with modulus <span class="math-container">$\lvert\alpha\rvert=1$</span> such that <span class="math-container">$\alpha z=\lvert z \rvert$</span>. In particular, if we set <span class="math-container">$z=\langle x, y\rangle$</span>, then there is a scalar <span class="math-container">$\alpha$</span> with <span class="math-container">$\lvert \alpha \rvert=1$</span> such that <span class="math-container">$$\langle x,y\rangle=\alpha\lvert\langle x,y\rangle\rvert.$$</span> Multiplying both sides by <span class="math-container">$\overline{\alpha}$</span>, we see that we also have <span class="math-container">$\overline{\alpha}\langle x, y\rangle=\lvert\langle x, y \rangle \rvert$</span>.</p>
<p>For each <span class="math-container">$t\in\mathbb{R}$</span>, using the Polar Identity and antilinearity in the second variable, we compute that
<span class="math-container">\begin{equation}
\begin{split}
0\le\lVert x-\alpha ty\rVert^2= &\lVert x\rVert^2-2\Re\big(\langle x,\alpha ty\rangle\big)+t^2\lVert y \rVert^2 \\
= &\lVert x\rVert^2-2t\Re\big(\overline{\alpha}\langle x, y\rangle\big)+t^2\lVert y \rVert^2\\
= & \lVert x \rVert ^2-2t\lvert\langle x, y\rangle\rvert+t^2\lVert y \rVert^2\\
= & at^2+bt+c,
\end{split}
\end{equation}</span>
where <span class="math-container">$a=\lVert y \rVert ^2$</span>, <span class="math-container">$b=-2\lvert \langle x, y\rangle \rvert$</span>, and <span class="math-container">$c=\lVert x \rVert ^2$</span>. This is a rel-valued quadratic polynomial in the variable <span class="math-container">$t$</span>. Since this polynomial is nonnegative, <span class="math-container">$\color{red}{it\;can\;have\;at\;most\;\;one\;real\;root}.$</span></p>
<p><span class="math-container">$\color{blue}{This\;implies\;that\;the\;discriminant\; cannot\;be\;strictly\;positive}.$</span></p>
<blockquote>
<p><strong>Question.</strong> What are the reasons why the red and blue assertions hold? I need the precise details.</p>
</blockquote>
<p>Thanks!</p>
| Bernard | 202,857 | <p>There's a high-school theorem on the sign of a quadratic polynomial:</p>
<blockquote>
<p>A quadratic polynomial <span class="math-container">$p(x)=ax^2+bx+c\quad(a\ne 0)$</span> has the sign of its leading coefficient, except between its (real) roots, if any.</p>
</blockquote>
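<p>Spelling this out in the notation of the question (where $a=\lVert y\rVert^2>0$), here is a short sketch of the precise details behind both colored assertions, obtained by completing the square:</p>

```latex
% p(t) = at^2 + bt + c with a = ||y||^2 > 0 and p(t) >= 0 for all real t.
\[
p(t) \;=\; a\Bigl(t+\frac{b}{2a}\Bigr)^{2} + \frac{4ac-b^{2}}{4a}.
\]
% Red: a quadratic with two distinct real roots is strictly negative between
% them (an upward parabola dips below the axis there), contradicting p >= 0;
% hence p has at most one real root.
% Blue: a strictly positive discriminant b^2 - 4ac > 0 would produce two
% distinct real roots t = (-b \pm \sqrt{b^2 - 4ac})/(2a); hence b^2 - 4ac <= 0.
% Unpacking a, b, c:
\[
b^{2}-4ac \;=\; 4\lvert\langle x,y\rangle\rvert^{2}
 - 4\lVert x\rVert^{2}\lVert y\rVert^{2} \;\le\; 0,
\]
% which is exactly the Cauchy-Schwarz inequality.
```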
|
1,855,748 | <blockquote>
<blockquote>
<p>Find a solution of the differential equation: $$\frac{d\left(x^2\frac{dy}{dx}\right)}{dx}=x\frac{dy}{dx}-y+5$$</p>
</blockquote>
</blockquote>
<hr>
<p>What I have attempted:</p>
<p>Consider: $$\frac{d\left(x^2\frac{dy}{dx}\right)}{dx}=x\frac{dy}{dx}-y+5$$</p>
<p>$$ \frac{d}{dx} (x^2 \frac{dy}{dx}) =x\frac{dy}{dx}-y+5 $$</p>
<p>$$ x^2 \frac{d^2y}{dx^2}+2x\frac{dy}{dx}=x\frac{dy}{dx}-y+5$$</p>
<p>$$ x^2 \frac{d^2y}{dx^2}+x\frac{dy}{dx} +y = 5 $$</p>
<p>Now I am stuck.. </p>
| Behrouz Maleki | 343,616 | <p><strong>Hint:</strong></p>
<p>set $x=e^{t}$ we have
$$\frac{dy}{dx}=\frac{dy}{dt}\frac{dt}{dx}=\frac{1}{x}\frac{dy}{dt}$$
$$\frac{d^2y}{dx^2}=\frac{d}{dx}\left(\frac{1}{x}\frac{dy}{dt}\right)=-\frac{1}{x^2}\frac{dy}{dt}+\frac{1}{x^2}\frac{d^2y}{dt^2}$$
we have
$$\frac{d^2y}{dt^2}+y=5$$</p>
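<p>A quick numerical sanity check of where the hint leads (a plain-Python sketch; the constants $C_1,C_2$ are arbitrary): solving $\frac{d^2y}{dt^2}+y=5$ and substituting back $t=\ln x$ gives $y=C_1\cos(\ln x)+C_2\sin(\ln x)+5$, which should annihilate $x^2y''+xy'+y-5$.</p>

```python
import math

def y(x, C1=1.0, C2=2.0):
    # Candidate general solution obtained from the hint
    return C1 * math.cos(math.log(x)) + C2 * math.sin(math.log(x)) + 5.0

def residual(x, h=1e-4):
    # x^2 y'' + x y' + y - 5, derivatives approximated by central differences
    d1 = (y(x + h) - y(x - h)) / (2 * h)
    d2 = (y(x + h) - 2 * y(x) + y(x - h)) / h ** 2
    return x * x * d2 + x * d1 + y(x) - 5.0

print(max(abs(residual(x)) for x in (0.5, 1.0, 2.0, 5.0)))
```

The printed residual is at the level of finite-difference noise, i.e. essentially zero.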
|
1,855,748 | <blockquote>
<blockquote>
<p>Find a solution of the differential equation: $$\frac{d\left(x^2\frac{dy}{dx}\right)}{dx}=x\frac{dy}{dx}-y+5$$</p>
</blockquote>
</blockquote>
<hr>
<p>What I have attempted:</p>
<p>Consider: $$\frac{d\left(x^2\frac{dy}{dx}\right)}{dx}=x\frac{dy}{dx}-y+5$$</p>
<p>$$ \frac{d}{dx} (x^2 \frac{dy}{dx}) =x\frac{dy}{dx}-y+5 $$</p>
<p>$$ x^2 \frac{d^2y}{dx^2}+2x\frac{dy}{dx}=x\frac{dy}{dx}-y+5$$</p>
<p>$$ x^2 \frac{d^2y}{dx^2}+x\frac{dy}{dx} +y = 5 $$</p>
<p>Now I am stuck.. </p>
| Jan Eerland | 226,665 | <p>$$\frac{\text{d}}{\text{d}x}\left(x^2\cdot\frac{\text{d}}{\text{d}x}\left(y(x)\right)\right)=x\cdot\frac{\text{d}}{\text{d}x}\left(y(x)\right)-y(x)+5\Longleftrightarrow$$
$$x(xy''(x)+2y'(x))=xy'(x)-y(x)+5\Longleftrightarrow$$</p>
<hr>
<p>The general solution will be the sum of the complementary solution
and particular solution.</p>
<p>Find the complementary solution by solving:</p>
<hr>
<p>$$x^2y''(x)+xy'(x)+y(x)=0\Longleftrightarrow$$</p>
<hr>
<p>Assume a solution, proportional to $x^\mu$ for some constant $\mu$.</p>
<p>Substitute $y(x)=x^\mu$:</p>
<hr>
<p>$$x^2\cdot\frac{\text{d}^2}{\text{d}x^2}\left(x^\mu\right)+x\cdot\frac{\text{d}}{\text{d}x}\left(x^\mu\right)+x^\mu=0\Longleftrightarrow$$</p>
<hr>
<p>Substitute $\frac{\text{d}^2}{\text{d}x^2}\left(x^\mu\right)=(\mu-1)\mu x^{\mu-2}$ and $\frac{\text{d}}{\text{d}x}\left(x^\mu\right)=\mu x^{\mu-1}$:</p>
<hr>
<p>$$\mu^2 x^\mu+x^\mu=0\Longleftrightarrow$$
$$x^\mu\left(\mu^2+1\right)=0\Longleftrightarrow$$</p>
<hr>
<p>Assuming $x\ne0$, zeros must come from the polynomial:</p>
<hr>
<p>$$\mu^2+1=0\Longleftrightarrow$$
$$\mu=\pm i$$</p>
<p>Using Euler's identity:</p>
<p>$$y_c(x)=\text{C}_1\cos(\ln(x))+\text{C}_2\sin(\ln(x))$$</p>
<p>Now, find the particular solution:</p>
<p>$$x^2y''(x)+xy'(x)+y(x)=5$$</p>
<p>List the basis solutions in $y_c(x)$ so $y_{c_1}(x)=\cos(\ln(x))$ and $y_{c_2}(x)=\sin(\ln(x))$.</p>
<p>Compute the Wronskian of $y_{c_1}(x)$ and $y_{c_2}(x)$:</p>
<p>$$\mathcal{W}(x)=\left|\begin{matrix}
\cos(\ln(x)) & \sin(\ln(x)) \\
\frac{\text{d}}{\text{d}x}\left(\cos(\ln(x))\right) & \frac{\text{d}}{\text{d}x}\left(\sin(\ln(x))\right)
\end{matrix}\right|=\left|\begin{matrix}
\cos(\ln(x)) & \sin(\ln(x)) \\
-\frac{\sin(\ln(x))}{x} & \frac{\cos(\ln(x))}{x}
\end{matrix}\right|=\frac{1}{x}$$</p>
<p>Divide the differential equation by the leading term's coefficient $x^2$:</p>
<p>$$y''(x)+\frac{y'(x)}{x}+\frac{y(x)}{x^2}=\frac{5}{x^2}$$</p>
<p>Let:</p>
<ul>
<li>$$q(x)=\frac{5}{x^2}$$</li>
<li>$$r_1(x)=-\int\frac{q(x)y_{c_2}(x)}{\mathcal{W}(x)}\space\text{d}x=-\int\frac{5\sin(\ln(x))}{x}\space\text{d}x=-5\cos(\ln(x))+\text{K}_1$$</li>
<li>$$r_2(x)=\int\frac{q(x)y_{c_1}(x)}{\mathcal{W}(x)}\space\text{d}x=\int\frac{5\cos(\ln(x))}{x}\space\text{d}x=5\sin(\ln(x))+\text{K}_2$$</li>
</ul>
<p>So:</p>
<p>$$y(x)=y_c(x)+r_1(x)y_{c_1}(x)+r_2(x)y_{c_2}(x)=\text{C}_1\cos(\ln(x))+\text{C}_2\sin(\ln(x))+5$$</p>
|
741,436 | <p>I get stuck at the following question:</p>
<p>Consider the matrix<br>
$$A=\begin{bmatrix}
0 & 2 & 0 \\
1 & 1 & -1 \\
-1 & 1 & 1\\
\end{bmatrix}$$</p>
<p>Find $A^{1000}$ by using the Cayley-Hamilton theorem.</p>
<p>I find the characteristic polynomial by $P(A) = -A^{3} + 2A^2 = 0$ (by Cayley-Hamilton) but I don't see how to find $A^{1000}$ by this characteristic polynomial.</p>
| Martín-Blas Pérez Pinilla | 98,199 | <p>$$A^{1000}= A(A^3)^{333}=A (2A^2)^{333}=2^{333}A^{667}=\cdots$$</p>
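<p>The dots can be filled in: the characteristic polynomial gives $A^3=2A^2$, and iterating this gives $A^n=2^{\,n-2}A^2$ for every $n\ge 2$, so $A^{1000}=2^{998}A^2$. A quick exact-arithmetic check in Python (a sketch with plain integer lists, no matrix library):</p>

```python
def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matpow(X, p):
    n = len(X)
    R = [[int(i == j) for j in range(n)] for i in range(n)]  # identity
    for _ in range(p):
        R = matmul(R, X)
    return R

A = [[0, 2, 0], [1, 1, -1], [-1, 1, 1]]
A2 = matmul(A, A)

# Cayley-Hamilton for this A: -A^3 + 2A^2 = 0, i.e. A^3 = 2*A^2,
# hence A^n = 2^(n-2) * A^2 for every n >= 2.
assert matpow(A, 3) == [[2 * v for v in row] for row in A2]
assert matpow(A, 10) == [[2 ** 8 * v for v in row] for row in A2]
```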
|
566,993 | <p>Suppose $f(z)=1/(1+z^2)$ and we want to find the power series in $a=1$. I think we have to write $1/(1+z^2)=1/(1+((z-1)+1)^2)=1/(2+2(z-1)+(z-1)^2)$, but I'm stuck here.</p>
| medicu | 65,848 | <p>Elementary solution to this problem. </p>
<p>The problem is equivalent to the statement that $x^2 \in Z(G)$ for any $x\in G$, as stated in a previous response. </p>
<p>Under these conditions we show that $(xyx^{-1}y^{-1})^4=e.$
$$(xyx^{-1}y^{-1})^4=(xyx^{-1}y^{-1})^2(xyx^{-1}y^{-1})(xyx^{-1}y^{-1})=$$
$$=(xyx^{-1})(xyx^{-1}y^{-1})^2(y^{-1})(xyx^{-1}y^{-1})=$$
$$= xy(x^{-1}x)(yx^{-1}y^{-1})(xyx^{-1}y^{-1})(y^{-1})(xyx^{-1}y^{-1})=$$
$$=xy^2(x^{-1}y^{-1})(xyx^{-1})(y^{-1})^2(xyx^{-1}y^{-1})=$$
$$=y^2(xx^{-1})(y^{-1}xy(y^{-1})^2)(xyx^{-1}y^{-1})=(yxy^{-1})(yx^{-1}y^{-1})=e.$$
Considering that the group has no elements of order $ 2 $, it results, step by step, that $(xyx^{-1}y^{-1})^2=e$, then $(xyx^{-1}y^{-1})=e$ and $xy=yx$, and therefore the group is commutative.</p>
|
11,244 | <p>In order to evaluate new educational material the contentment of students with this material is often measured. However, just because a student is contented doesn't mean that he/she has actually learned something. Is there any research investigating the correlation between students contentment and the educational quality of the presented material?</p>
| Daniel R. Collins | 5,563 | <p>From Clark, Richard, Paul A. Kirschner, and John Sweller. "Putting students on the path to learning: The case for fully guided instruction." (2012):</p>
<blockquote>
<p>Even more disturbing is evidence that when learners are asked to
select between a more-guided or less-guided version of the same
course, less-skilled learners who choose the less-guided approach tend
to like it even though they learn less from it... Similarly,
more-skilled learners who choose the more-guided version of a course
tend to like it even though they too have selected the environment in
which they learn less.</p>
</blockquote>
<p>Reference cites Clark, Richard E. "Antagonism between achievement and enjoyment in ATI studies." <em>Educational Psychologist</em> 17.2 (1982): 92-101.</p>
|
3,592,747 | <p>How to solve this equation for <span class="math-container">$x$</span> in reals, without using theory of complex number?
<span class="math-container">$$\frac{a}{\left(x+\frac{1}{x}\right)^{2}}+\frac{b}{\left(x-\frac{1}{x}\right)^{2}}=1$$</span>
Where <span class="math-container">$a$</span> and <span class="math-container">$b$</span> are some constants.</p>
<p>Please write down a step-by-step solution with a good explanation to this equation. I spent a lot of time trying to solve it but could not find an answer.</p>
<p>According to <a href="https://www.desmos.com/calculator/iwke0frbow" rel="nofollow noreferrer">desmos</a>, it has four roots:</p>
<p><a href="https://i.stack.imgur.com/UO25t.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UO25t.png" alt="enter image description here"></a></p>
<p><sub>I don't know which title and tags I should use for this question, please edit.</sub></p>
| Jean Marie | 305,862 | <p>Here is an approach yielding solutions by reducing the issue <strong>to solve successive quadratic equations</strong> :</p>
<p>Let us write the expression under the form :</p>
<p><span class="math-container">$$\frac{a}{X^{2}}+\frac{b}{X^{2}-4}=1\tag{1}$$</span></p>
<p>where we have set : </p>
<p><span class="math-container">$$X=x+\frac{1}{x}\tag{2}$$</span></p>
<p>Let us rewrite (1) under the form :</p>
<p><span class="math-container">$$\dfrac{a(X^2-4)+bX^2}{X^2(X^2-4)}=1\tag{3}$$</span></p>
<p>Setting </p>
<p><span class="math-container">$$Y=X^2\tag{4}$$</span></p>
<p>(3) amounts to</p>
<p><span class="math-container">$$aY-4a+bY=Y(Y-4) \ \ \ \iff \ \ \ Y^2-(4+a+b)Y+4a=0\tag{5}$$</span></p>
<p>It remains to solve (5), then, in a backward way (4), then (2) to obtain the different values of <span class="math-container">$x$</span>, some of them happening to be complex.</p>
<p><strong>Example</strong> : Let us take <span class="math-container">$a=1, b=3$</span>, which gives <span class="math-container">$Y=4\pm2\sqrt{3}$</span>, i.e.,</p>
<p><span class="math-container">$$Y_1=7.46410 \ \ \text{and} \ \ Y_2=0.53590$$</span></p>
<p>from which </p>
<p><span class="math-container">$$X_1=\pm \sqrt{Y_1}=\pm 2.73205 \ \ \text{and} \ \ X_2=\pm \sqrt{Y_2}=\pm 0.73205 \tag{6}$$</span></p>
<p>It remains to solve the four quadratic equations </p>
<p><span class="math-container">$$x+\dfrac{1}{x}=X$$</span></p>
<p>for the different values of <span class="math-container">$X$</span> in (6) : the first ones (with <span class="math-container">$X_1$</span>, for which <span class="math-container">$X_1^2-4>0$</span>) will give 4 real roots :</p>
<p><span class="math-container">$$x = 2.29663, \ \ 0.43542, \ \ -0.43542, \ \ -2.29663$$</span></p>
<p>the two others (with <span class="math-container">$X_2$</span>, for which <span class="math-container">$X_2^2-4<0$</span>) will give 4 complex roots.</p>
<p><strong>Remark</strong> : the initial equation (and the equivalent polynomial equation) is such that if <span class="math-container">$x$</span> is a solution, then <span class="math-container">$1/x$</span> is also a solution : it is said that such a polynomial is <a href="https://en.wikipedia.org/wiki/Reciprocal_polynomial" rel="nofollow noreferrer">"<strong>reciprocal</strong>"</a> (some say "palindromic"), characterised equivalently by the fact that <span class="math-container">$a_{k}=a_{n-k}$</span> for all <span class="math-container">$k$</span>, if <span class="math-container">$n$</span> denotes the degree of the polynomial.</p>
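<p>The procedure is easy to check numerically for the worked example $a=1,\,b=3$ (a plain-Python sketch: solve (5), then undo (4) and (2), keeping only the real branches; every root found should satisfy the original equation):</p>

```python
import math

a, b = 1.0, 3.0

def f(x):
    # Left-hand side of the original equation
    return a / (x + 1 / x) ** 2 + b / (x - 1 / x) ** 2

# Step (5): Y^2 - (4 + a + b) Y + 4a = 0
s = 4 + a + b
Ys = [(s + math.sqrt(s * s - 16 * a)) / 2, (s - math.sqrt(s * s - 16 * a)) / 2]

# Steps (4) and (2): X = ±sqrt(Y), then solve x + 1/x = X (real branches only)
roots = []
for Y in Ys:
    for X in (math.sqrt(Y), -math.sqrt(Y)):
        disc = X * X - 4
        if disc >= 0:
            roots += [(X + math.sqrt(disc)) / 2, (X - math.sqrt(disc)) / 2]

assert len(roots) == 4
assert all(abs(f(x) - 1) < 1e-9 for x in roots)
```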
|
1,227,609 | <p>Let $X,Y,Z$ be finite sets, and consider probability distributions $p$ over $X\times Y\times Z$. If we know the marginals of $p$ over all the pairs $X\times Y$, $X\times Z$ and $Y\times Z$, is that enough to pin down $p$ uniquely?</p>
| celtschk | 34,930 | <p>No. Consider $X=Y=Z=\{0,1\}$ and the following two probability distributions:</p>
<ul>
<li>$p_1(x,y,z) = \frac18$</li>
<li>$p_2(x,y,z) = \frac18\left(1 + (-1)^{x+y+z}\right)$</li>
</ul>
<p>It is easily checked that in both cases, all marginal distributions are equally distributed.</p>
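<p>The check is mechanical; here is one way to carry it out in Python (a sketch, enumerating all eight triples):</p>

```python
from itertools import product

triples = list(product((0, 1), repeat=3))
p1 = {t: 1 / 8 for t in triples}
p2 = {(x, y, z): (1 + (-1) ** (x + y + z)) / 8 for x, y, z in triples}

def marginal(p, keep):
    # Sum out the coordinates not listed in `keep`
    m = {}
    for t, v in p.items():
        key = tuple(t[i] for i in keep)
        m[key] = m.get(key, 0.0) + v
    return m

# All three pair marginals agree, yet the joint distributions differ
for keep in [(0, 1), (0, 2), (1, 2)]:
    assert marginal(p1, keep) == marginal(p2, keep)
assert p1 != p2
```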
|
204,106 | <p>I would like to know what the definition of a short proof is.</p>
<p>In Lance Fortnow’s article “<a href="http://cacm.acm.org/magazines/2009/9/38904-the-status-of-the-p-versus-np-problem/fulltext" rel="nofollow">The Status of the P Versus NP Problem</a>”, Communications of the ACM, Vol. 52 No. 9, he says,</p>
<blockquote>
<p>If a formula θ is not a tautology, we can give an easy proof of that fact by exhibiting an assignment of the variables that makes θ false. But if θ were indeed a tautology, we don’t expect short proofs. If one could prove there are no short proofs of tautology that would imply P ≠ NP.</p>
</blockquote>
<p>I have tried to find a definition of a “short proof”, but have not been able to.</p>
| Andreas Blass | 6,794 | <p>The statement you quoted is somewhat sloppy, since there is no precise notion of a short proof for a single formula. There is, however, a notion of short proofs for a class $C$ of formulas, when the class contains formulas of arbitrarily high length. One says that $C$ admits short proofs if there is a polynomial $p(x)$ such that, for every natural number $n$, all formulas in $C$ of length $n$ have proofs of length at most $p(n)$. </p>
|
667,371 | <p>I am trying to solve this equation:
$$\sqrt{x+2}+\sqrt{x-3}=\sqrt{3x+4}$$</p>
<p>So what i did was:</p>
<p>$$x+2+2*\sqrt{x+2}*\sqrt{x-3}+x-3=3x+4$$</p>
<p>$$2*\sqrt{x+2}*\sqrt{x-3}=x+5$$</p>
<p>$$4*{(x+2)}*(x-3)=x^2+25+10x$$</p>
<p>$$4x^2-4x-24=x^2+25+10x$$</p>
<p>$$3x^2-14x-49$$</p>
<p>But this seems to be wrong! What did I do wrong?</p>
| Magdiragdag | 35,584 | <p>Note: the original question read $= \sqrt{3x + 5}$ instead of $= \sqrt{3x + 4}$. There is no problem with the fixed question. Maybe just that it's unfinished. The final line should read $3x^2 - 14x - 49 = 0$ rather than just $3x^2-14x-49$. After that, solve the quadratic; only one of the solutions ($7$) to the quadratic gives a solution to the original equation (the other ($-7/3$) was introduced by the squaring operations).</p>
<p>Original response: the $3x + 4$ instead of $3x + 5$ on the first line seems a genuine mistake, rather than a typo; you continue with it.</p>
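<p>A quick numerical confirmation of the two candidate solutions (a plain-Python sketch): $x=7$ satisfies the original equation, while $x=-7/3$ solves only the squared equation and is extraneous, since $\sqrt{x-3}$ is not even real there.</p>

```python
import math

def lhs(x):
    return math.sqrt(x + 2) + math.sqrt(x - 3)

def rhs(x):
    return math.sqrt(3 * x + 4)

# x = 7: sqrt(9) + sqrt(4) = 5 and sqrt(25) = 5
assert abs(lhs(7) - rhs(7)) < 1e-12

# x = -7/3 solves the squared equation 3x^2 - 14x - 49 = 0 ...
assert abs(3 * (-7 / 3) ** 2 - 14 * (-7 / 3) - 49) < 1e-9

# ... but is extraneous: x - 3 < 0, so lhs(-7/3) raises a ValueError
try:
    lhs(-7 / 3)
    raised = False
except ValueError:
    raised = True
assert raised
```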
|
215,858 | <p>If, $$\mathcal L \left\{ \frac{\cos(2\sqrt{3t})}{\sqrt{\pi t}} \right\}= \frac{\exp\big(\frac{-3}{s}\big)}{\sqrt{s}}$$, $$\mathcal L^{-1} \left\{ \frac{\exp\big(\frac{-1}{s}\big)}{\sqrt{s}}\right\}=?$$</p>
<p>This could help:</p>
<p><a href="https://math.stackexchange.com/questions/215831/laplace-transform-proof-that-l-frac1kf-fractk-fks">Laplace transform, proof that $L \{ \frac{1}{k}f(\frac{t}{k}) \}= F(ks)$</a></p>
<p>Thanks!</p>
| Fly by Night | 38,495 | <p>A slight modification of the method you used to compute the first transform will give you:</p>
<p>$$\mathcal{L}\left(\frac{\cos\left(k\sqrt{t}\right)}{\sqrt{\pi t}}\right) = \frac{e^{-k^2/4s}}{\sqrt{s}} \, . $$</p>
<p>For your inverse transform you need $k=2$.</p>
|
215,858 | <p>If, $$\mathcal L \left\{ \frac{\cos(2\sqrt{3t})}{\sqrt{\pi t}} \right\}= \frac{\exp\big(\frac{-3}{s}\big)}{\sqrt{s}}$$, $$\mathcal L^{-1} \left\{ \frac{\exp\big(\frac{-1}{s}\big)}{\sqrt{s}}\right\}=?$$</p>
<p>This could help:</p>
<p><a href="https://math.stackexchange.com/questions/215831/laplace-transform-proof-that-l-frac1kf-fractk-fks">Laplace transform, proof that $L \{ \frac{1}{k}f(\frac{t}{k}) \}= F(ks)$</a></p>
<p>Thanks!</p>
| Mikasa | 8,581 | <p>You noted that: $$\mathcal L \{ \frac{\cos(2\sqrt{3t})}{\sqrt{\pi t}} \}= \frac{\exp(\frac{-3}{s})}{\sqrt{s}}$$ and $$\mathcal L \{ \frac{1}{k}f\bigg(\frac{t}{k}\bigg) \}= F(ks)$$ and $F(s)=\frac{\exp(\frac{-3}{s})}{\sqrt{s}}$. Set $s$ to $3s$ in $F(s)$, so you have $$\sqrt{3}F(3s)=\frac{\exp(\frac{-1}{s})}{\sqrt{s}}$$ So $$\mathcal L^{-1}\{\sqrt{3}F(3s)\}=\mathcal L^{-1} \bigg(\frac{\exp(\frac{-1}{s})}{\sqrt{s}}\bigg)$$ But $$\mathcal L^{-1}\{\sqrt{3}F(3s)\}=\frac{1}{\sqrt{3}}\frac{\cos(2\sqrt{3t})}{\sqrt{\pi t}}\bigg|_{t\to t/3}=\frac{\cos(2\sqrt{t})}{\sqrt{\pi t}}$$ </p>
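<p>As a cross-check of the resulting pair $\mathcal L\{\cos(2\sqrt t)/\sqrt{\pi t}\}=e^{-1/s}/\sqrt s$, one can integrate numerically (a plain-Python sketch; the substitution $t=u^2$ removes the $1/\sqrt t$ singularity and turns the transform into $\frac{2}{\sqrt\pi}\int_0^\infty e^{-su^2}\cos(2u)\,du$):</p>

```python
import math

def simpson(g, a, b, n=20000):
    # Composite Simpson rule on [a, b] with n (even) subintervals
    h = (b - a) / n
    return h / 3 * (g(a) + g(b)
                    + sum((4 if i % 2 else 2) * g(a + i * h) for i in range(1, n)))

def laplace_of_f(s):
    # After t = u^2: (2/sqrt(pi)) * integral of exp(-s u^2) cos(2u) du over u >= 0,
    # truncated at u = 10 (the Gaussian tail is negligible for s >= 1)
    return simpson(lambda u: 2 / math.sqrt(math.pi)
                   * math.exp(-s * u * u) * math.cos(2 * u), 0.0, 10.0)

for s in (1.0, 2.0, 4.0):
    assert abs(laplace_of_f(s) - math.exp(-1 / s) / math.sqrt(s)) < 1e-8
```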
|
228,889 | <p><em>[Attention! This question requires some reading and it's answer probably is in form of a "soft-answer", i.e. it can't be translated into a hard mathematical proposition. (I hope I haven't scared away all readers with this.)]</em></p>
<p>Consider the following four examples:</p>
<p>1) <em>[If this example seems too technical just skip it - it isn't that important for the idea I want to convey.]</em> The set $T$ of all terms (of a functional structure) is the set $$T=\bigcap_{\substack{O\text{ closed under concatenation with function symbols}\\ O\supseteq X}} O,$$where $X$ is the (countable) set of all variables and "closed under concatenation with function symbols" means: If $f$ is some function symbol of arity $n$ and $t_1,\ldots,t_n\in O$, then $ft_1\ldots t_n\in O$. The above $T$ is the <em>smallest</em> set such that it contains the variables and is closed under concatenation with function symbols.</p>
<p>2) The smallest subgroup $G$ of a group $X$, containing a set $A\subseteq X$, is the set $$G:=\bigcap_{\substack{O\ \text{ is a subgroup of $X$}\\ O\supseteq A}} O.$$</p>
<p>3) The smallest $\sigma$-algebra $\mathcal{A}$ on a set $X$ containing a set $A\subseteq X$ is the set $$\mathcal{A}:=\bigcap_{\substack{O\ \text{ is a $\sigma$-algebra on $X$}\\ O\supseteq A}} O.$$</p>
<p>4) The set $$C:=\bigcap_{\substack{O\ \text{ is open in $X$}\\ O\supseteq A}} O,$$ where $X$ is a metric space and $A\subseteq X$ is arbitrary. </p>
<p>Now here's the thing: The sets $T$, $G$, and $\mathcal{A}$, from examples 1),2) and 3) are also <strong>closed under the closing condition defining them</strong>, i.e. $T$ is also closed under concatenation with function symbols, $G$ is also a group and $\mathcal{A}$ is also a $\sigma$-algebra; for $G$ and $\mathcal{A}$ this is already implied by their name (the smallest <em>subgroup</em>, the smallest $\sigma$*-algebra*), which was the reason I also gave $T$ as an example, where it's name doesn't already imply it's closure under it's defining closure operations. <strong>But</strong> the set $C$ from example 4) need not be open, if for example $\mathbb{R}=X$ and $A=[0,1]$ (maybe there are nontrivial metric space, where it <em>is</em> open for nontrivial sets $A$, but I didn't want to waste time checking that). That is, $C$ isn't closed (no pun intended) under the closing condition I used to define it; or differently said: There isn't a smallest open set containing $A$.</p>
<p>Notice that the closure conditions in 1) and 2) have a more algebraic character, since we close under some algebraic operations, where the closure condition in 3) as a more "set-theoretical-topology"-type character, since we close under set-theoretic operations. Nonetheless in all cases the outcome is again "closed". </p>
<p>My question is: How, generally (abstractly), do "closing conditions" $\mathscr{C}$ have to look such that the set $$ S:=\bigcap_{\substack{O\ \text{ is closed under $\mathscr{C}$}\\ O\supseteq A}} O$$ is itself closed in $X$ under $\mathscr{C}$, where $A\subseteq X$ is an arbitrary set? Differently said: How do closing conditions generally have to look such that there is a smallest set that is closed under these conditions and contains some arbitrary fixed set?</p>
| GEdgar | 442 | <p>A Bernstein set, $A\subseteq[0,1]$ has the property that both $A$ and $[0,1]\setminus A$ have outer measure $1$.</p>
|
2,129,362 | <p>I have this question below. I don't know whether this is right or not, but I've just wondered about it.
If we have
$$
A=\int \sqrt{f_x}
$$
can we do this move?
$$
A^2=\int f_x
$$
Thanks </p>
| Shinja | 327,661 | <p>No</p>
<p>Counterexample:</p>
<p>$\int xdx = \frac{1}{2}x^2$<br>
$\int x^2dx = \frac{1}{3}x^3$</p>
<p>$(\frac{1}{2}x^2)^2=\frac{1}{4}x^4\neq\frac{1}{3}x^3$</p>
|
2,129,362 | <p>I have this question below. I don't know whether this is right or not, but I've just wondered about it.
If we have
$$
A=\int \sqrt{f_x}
$$
can we do this move?
$$
A^2=\int f_x
$$
Thanks </p>
| Starz | 408,702 | <p>No. As an analogy, we can consider A as a simple summation.
$$A = x_1 + x_2 + ... + x_n$$
$$A^2 = (x_1 + x_2 + ... + x_n)^2\neq x_1^2 + x_2^2 + ... + x_n^2$$</p>
|
2,361,920 | <p>The question is as follows:
<a href="https://i.stack.imgur.com/RHPwG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RHPwG.png" alt="enter image description here"></a> </p>
<p>I can not solve this question so I am asking what exactly in the probability theory that I must revise so I could solve it, could anyone give me advice please?</p>
| Fimpellizzeri | 173,410 | <p>This is not so much a question on probability as it is on combinatorics.
One usually solves this kind of question as follows:</p>
<p>$\quad(1)$: Calculate the <strong>number of possible outcomes</strong>. </p>
<p>In our case, we're drawing $2$ socks out of a drawer with $8$ socks, so the number is $\binom{8}{2}=28$.<br>
If you are unfamiliar with the notation $\binom{n}k$, you should look into binomial coefficients and their combinatorial interpretation -- in short, that's the number of ways to choose $k$ out of $n$ objects.</p>
<p>$\quad(2)$: Calculate the <strong>number of success cases</strong>.</p>
<p>There are three cases here:</p>
<p>$\qquad(2.1)$: Two blue socks. Well, there's a single possibility here.</p>
<p>$\qquad(2.2)$: Two red socks. We can choose $2$ out of $4$ red socks, so that's $\binom42$ possibilities.</p>
<p>$\qquad(2.3)$: Two yellow socks. Once again, there's a single possibility here.</p>
<p>The total number of success cases is hence $1+\binom42+1=1+6+1=8$.</p>
<p>$\quad(3)$: Calculate the probability, which is simply</p>
<p>$$\frac{\text{Number of success cases}}{\text{Number of posssible outcomes}}=\frac{8}{28}\simeq 28.57\%$$</p>
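<p>Since the drawer's contents are only shown in the linked image, assume (consistently with the cases above) it holds 2 blue, 4 red and 2 yellow socks; then the count can be confirmed by brute-force enumeration, a Python sketch:</p>

```python
from itertools import combinations

# Drawer contents inferred from the cases in the answer: 2 blue, 4 red, 2 yellow
socks = ['B'] * 2 + ['R'] * 4 + ['Y'] * 2

draws = list(combinations(range(len(socks)), 2))             # 28 possible outcomes
matching = [d for d in draws if socks[d[0]] == socks[d[1]]]  # 8 success cases

assert len(draws) == 28
assert len(matching) == 8
print(len(matching) / len(draws))  # 8/28, about 0.2857
```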
|
22 | <p>By matrix-defined, I mean</p>
<p>$$\left<a,b,c\right>\times\left<d,e,f\right> = \left|
\begin{array}{ccc}
i & j & k\\
a & b & c\\
d & e & f
\end{array}
\right|$$</p>
<p>...instead of the definition of the product of the magnitudes multiplied by the sign of their angle, in the direction orthogonal)</p>
<p>If I try cross producting two vectors with no $k$ component, I get one with only $k$, which is expected. But why?</p>
<p>As has been pointed out, I am asking why the algebraic definition lines up with the geometric definition.</p>
| Zach Conn | 251 | <p>Here's an explanation in terms of the Hodge dual and the exterior (wedge) product.</p>
<p>Let ${e_1, e_2, e_3}$ be the standard orthonormal basis for $\mathbb{R}^3$. Consider the two vectors $a = a_1 e_1 + a_2 e_2 + a_3 e_3$ and $b = b_1 e_1 + b_2 e_2 + b_3 e_3$. From the matrix computation we obtain the familiar formula</p>
<p>$a\times b = (a_2 b_3 - a_3 b_2) e_1 + (a_3 b_1 - a_1 b_3) e_2 + (a_1 b_2 - a_2 b_1) e_3$.</p>
<p>But (see note at the bottom)</p>
<p>$a \wedge b = (a_1 b_2 - a_2 b_1) e_1 \wedge e_2 + (a_2 b_3 - a_3 b_2) e_2 \wedge e_3 + (a_3 b_1 - a_1 b_3) e_3 \wedge e_1$,</p>
<p>where the wedge $\wedge$ represents the exterior product. One can now compute the dual of this latter expression using that the left contraction of $(e_1 \wedge e_2)$ onto $(e_3 \wedge e_2 \wedge e_1)$ is $e_3$ (and similar relations). The result is that</p>
<p>$a \times b = (a \wedge b)^*$,</p>
<p>that is, the cross product of $a$ and $b$ is the dual of their exterior product.</p>
<p>Geometrically, this is an incredible picture. The exterior product is the plane element spanned by both $a$ and $b$, and the dual is the vector orthogonal to that plane.</p>
<p>This is my favorite interpretation of the cross product, but it's only helpful, of course, if you're familiar with exterior algebra and the Hodge dual.</p>
<p>Note: The wedge product can be found by formally computing </p>
<p>$(a_1 e_1 + a_2 e_2 + a_3 e_3) \wedge (b_1 e_1 + b_2 e_2 + b_3 e_3)$</p>
<p>using the distributivity and anticommutation relations of the exterior product.</p>
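<p>The identity $a\times b=(a\wedge b)^*$ can be checked mechanically on components (a Python sketch; the three wedge coefficients are read off the formula above, and in $\mathbb{R}^3$ the Hodge dual acts by $*(e_1\wedge e_2)=e_3$, $*(e_2\wedge e_3)=e_1$, $*(e_3\wedge e_1)=e_2$):</p>

```python
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def wedge(a, b):
    # Coefficients of a wedge b on the basis (e1^e2, e2^e3, e3^e1)
    return (a[0] * b[1] - a[1] * b[0],
            a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2])

def hodge_dual(w):
    # *(e1^e2) = e3, *(e2^e3) = e1, *(e3^e1) = e2
    c12, c23, c31 = w
    return (c23, c31, c12)

a, b = (1, 2, 3), (4, 5, 6)
assert cross(a, b) == hodge_dual(wedge(a, b)) == (-3, 6, -3)
```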
|
2,076 | <p>I'm attempting for the first time to create a map within <em>Mathematica</em>. In particular, I would like to take an output of points and plot them according to their lat/long values over a geographic map. I have a series of latitude/longitude values like so:</p>
<pre><code> {{32.6123, -117.041}, {40.6973, -111.9}, {34.0276, -118.046},
{40.8231, -111.986}, {34.0446, -117.94}, {33.7389, -118.024},
{34.122, -118.088}, {37.3881, -122.252}, {44.9325, -122.966},
{32.6029, -117.154}, {44.7165, -123.062}, {37.8475, -122.47},
{32.6833, -117.098}, {44.4881, -122.797}, {37.5687, -122.254},
{45.1645, -122.788}, {47.6077, -122.692}, {44.5727, -122.65},
{42.3155, -82.9408}, {42.6438, -73.6451}, {48.0426, -122.092},
{48.5371, -122.09}, {45.4599, -122.618}, {48.4816, -122.659},
{42.3398, -70.9843}}
</code></pre>
<p>I've tried finding documentation on how I would proceed but I cannot find anything that doesn't assume a certain level of introduction to geospatial data. Does anyone know of a good resource online or is there a simple explanation one can supply here? </p>
| Mark McClure | 36 | <h3>Here's a start.</h3>
<pre><code>latLngs={{32.6123,-117.041},{40.6973,-111.9},{34.0276,-118.046},
{40.8231,-111.986},{34.0446,-117.94},{33.7389,-118.024},
{34.122,-118.088},{37.3881,-122.252},{44.9325,-122.966},
{32.6029,-117.154},{44.7165,-123.062},{37.8475,-122.47},
{32.6833,-117.098},{44.4881,-122.797},{37.5687,-122.254},
{45.1645,-122.788},{47.6077,-122.692},{44.5727,-122.65},
{42.3155,-82.9408},{42.6438,-73.6451},{48.0426,-122.092},
{48.5371,-122.09},{45.4599,-122.618},{48.4816,-122.659},
{42.3398,-70.9843}};
Show[CountryData["UnitedStates",{"Shape", "Equirectangular"}],
Axes -> True, Epilog ->{PointSize[0.01], Red,
Point[Reverse /@ latLngs]}]
</code></pre>
<p>You can show the points on a natural Mercator projection like so:</p>
<pre><code>toMercator[{lat_, lng_}] := {lng,
Log[Abs[Sec[lat*Degree]+Tan[lat*Degree]]]/Degree};
mercPoints = toMercator /@ latLngs;
Show[CountryData["UnitedStates",{"Shape", "Mercator"}],
Frame-> True, Epilog ->{PointSize[0.01], Red,
Point[mercPoints]}]
</code></pre>
<p><img src="https://i.stack.imgur.com/Gkvsv.png" alt="Mathematica graphics" /></p>
<p>Presumably, there's a built-in way to extract the values from Mercator's (and other) projections, but I don't see how offhand.</p>
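<p>For readers working outside <em>Mathematica</em>, the same natural-Mercator formula translates directly into other languages; here is a hypothetical Python counterpart of <code>toMercator</code> (angles in degrees, as above):</p>

```python
import math

def to_mercator(lat, lng):
    # Natural Mercator, mirroring the Mathematica toMercator above:
    # y = ln|sec(lat) + tan(lat)|, converted back to degrees
    phi = math.radians(lat)
    y = math.degrees(math.log(abs(1.0 / math.cos(phi) + math.tan(phi))))
    return (lng, y)

# A few of the lat/lng points from the question
merc_points = [to_mercator(lat, lng) for lat, lng in
               [(32.6123, -117.041), (40.6973, -111.9), (48.5371, -122.09)]]
print(merc_points)
```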
|
2,076 | <p>I'm attempting for the first time to create a map within <em>Mathematica</em>. In particular, I would like to take an output of points and plot them according to their lat/long values over a geographic map. I have a series of latitude/longitude values like so:</p>
<pre><code> {{32.6123, -117.041}, {40.6973, -111.9}, {34.0276, -118.046},
{40.8231, -111.986}, {34.0446, -117.94}, {33.7389, -118.024},
{34.122, -118.088}, {37.3881, -122.252}, {44.9325, -122.966},
{32.6029, -117.154}, {44.7165, -123.062}, {37.8475, -122.47},
{32.6833, -117.098}, {44.4881, -122.797}, {37.5687, -122.254},
{45.1645, -122.788}, {47.6077, -122.692}, {44.5727, -122.65},
{42.3155, -82.9408}, {42.6438, -73.6451}, {48.0426, -122.092},
{48.5371, -122.09}, {45.4599, -122.618}, {48.4816, -122.659},
{42.3398, -70.9843}}
</code></pre>
<p>I've tried finding documentation on how I would proceed but I cannot find anything that doesn't assume a certain level of introduction to geospatial data. Does anyone know of a good resource online or is there a simple explanation one can supply here? </p>
| Vitaliy Kaurov | 13 | <p>There is nice way to to put your data on rotatable 3D globe. Your data:</p>
<pre><code>centers = {{32.6123, -117.041}, {40.6973, -111.9}, {34.0276, \
-118.046}, {40.8231, -111.986}, {34.0446, -117.94}, {33.7389, \
-118.024}, {34.122, -118.088}, {37.3881, -122.252}, {44.9325, \
-122.966}, {32.6029, -117.154}, {44.7165, -123.062}, {37.8475, \
-122.47}, {32.6833, -117.098}, {44.4881, -122.797}, {37.5687, \
-122.254}, {45.1645, -122.788}, {47.6077, -122.692}, {44.5727, \
-122.65}, {42.3155, -82.9408}, {42.6438, -73.6451}, {48.0426, \
-122.092}, {48.5371, -122.09}, {45.4599, -122.618}, {48.4816, \
-122.659}, {42.3398, -70.9843}};
</code></pre>
<p>Function that defines mapping of coordinates onto sphere:</p>
<pre><code>SC[{lat_, lon_}] := r {Cos[lon \[Degree]] Cos[lat \[Degree]],
Sin[lon \[Degree]] Cos[lat \[Degree]], Sin[lat \[Degree]]};
</code></pre>
<p>Average Earth radius, countries names, 3D visualization where you can <em>Drag</em> globe to rotate, <em>Hold CTRL and drag</em> to zoom:</p>
<pre><code>r = 6367.5; places = CountryData["Countries"];
Graphics3D[{Opacity[.9], Sphere[{0, 0, 0}, r],
Map[Line[Map[SC, CountryData[#, "SchematicCoordinates"], {-2}]] &,
places], {Red, PointSize[Medium], Point[SC[#]] & /@ centers}},
Boxed -> False, SphericalRegion -> True, ViewAngle -> .3]
</code></pre>
<p><img src="https://i.stack.imgur.com/sTSA0.png" alt="enter image description here"></p>
|
552,155 | <p>I've been struggling to figure out this integral.
$$\int \frac {1}{x\sqrt{5-x^2}}$$</p>
<p>I'm almost certain it has something to do with this fact:</p>
<p>$$\int \frac 1{\sqrt{1-x^2}} = \sin^{-1}(x) + C$$</p>
<p>But I can't figure out how to use that to my advantage. No obvious substitution is jumping out at me either. I'm guessing there's something simple I'm missing... any help will be much appreciated!</p>
<p>Edit (11/13/13): An update on this, just for the sake of finishing what I started - I had gotten a bit stuck on this problem and could not seem to figure it out until I saw it worked out in <a href="http://redd.it/1qiw6h" rel="nofollow">this</a> thread on reddit. Where I was stuck turned out to be a very simple algebra manipulation. Just thought I'd add that link for the sake of any Googlers or what have you who might find this post and want to see it worked out in more detail. Many thanks to all involved in helping me understand this problem! </p>
| André Nicolas | 6,312 | <p>A natural substitution is $u=\sqrt{5-x^2}$, though I prefer the version $u^2=5-x^2$. Then $2u\,du=-2x\,dx$.</p>
<p>Rewrite our integral as
$$\int \frac{x\,dx}{x^2\sqrt{5-x^2}}.$$
After we make the substitution, we end up with
$$\int \frac{du}{u^2-5}.$$
Factor $u^2-5$ as $(u-\sqrt{5})(u+\sqrt{5})$ and use partial fractions. </p>
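<p>Carrying the partial fractions through gives $\int\frac{du}{u^2-5}=\frac{1}{2\sqrt 5}\ln\left|\frac{u-\sqrt 5}{u+\sqrt 5}\right|+C$ with $u=\sqrt{5-x^2}$; a numerical spot-check on $[1,2]$ (a plain-Python sketch using Simpson's rule):</p>

```python
import math

def integrand(x):
    return 1.0 / (x * math.sqrt(5 - x * x))

def antiderivative(x):
    # From the u-substitution plus partial fractions, with u = sqrt(5 - x^2)
    u = math.sqrt(5 - x * x)
    r5 = math.sqrt(5)
    return math.log(abs((u - r5) / (u + r5))) / (2 * r5)

def simpson(g, a, b, n=10000):
    h = (b - a) / n
    return h / 3 * (g(a) + g(b)
                    + sum((4 if i % 2 else 2) * g(a + i * h) for i in range(1, n)))

numeric = simpson(integrand, 1.0, 2.0)
exact = antiderivative(2.0) - antiderivative(1.0)
assert abs(numeric - exact) < 1e-9
```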
|
552,155 | <p>I've been struggling to figure out this integral.
$$\int \frac {1}{x\sqrt{5-x^2}}$$</p>
<p>I'm almost certain it has something to do with this fact:</p>
<p>$$\int \frac 1{\sqrt{1-x^2}} = \sin^{-1}(x) + C$$</p>
<p>But I can't figure out how to use that to my advantage. No obvious substitution is jumping out at me either. I'm guessing there's something simple I'm missing... any help will be much appreciated!</p>
<p>Edit (11/13/13): An update on this, just for the sake of finishing what I started - I had gotten a bit stuck on this problem and could not seem to figure it out until I saw it worked out in <a href="http://redd.it/1qiw6h" rel="nofollow">this</a> thread on reddit. Where I was stuck turned out to be a very simple algebra manipulation. Just thought I'd add that link for the sake of any Googlers or what have you who might find this post and want to see it worked out in more detail. Many thanks to all involved in helping me understand this problem! </p>
| Manny265 | 96,690 | <p>So substitute $ \sqrt{5-x^2} $ as $u$; therefore $ u^2 = 5-x^2 $. From this, keep in mind that $x$ can be made the subject of the formula, which gives $ x= \sqrt{5-u^2} $.
<br>Differentiating, $ -2x \, dx = 2u \, du$, therefore $ x \, dx = - u \, du$. You can multiply the integrand by $1$, which changes nothing; writing $1$ as $ \frac{x}{x} $ makes the integral $ \int \frac{x \, dx}{x^{2} u} = -\int \frac{u \, du}{x^{2} u} $. After cancelling out the <i>u</i>s, an $x^2$ remains, but the integral is now with respect to $u$ (the $du$ part), so use $x^2=5-u^2$. This leaves $ -\int \frac{du}{5-u^2} = \int \frac{du}{u^{2}-5} $, which is finished with partial fractions and yields a logarithm; note that the arcsine formula $\int \frac{dx}{\sqrt{a^2-x^2}} = \arcsin \frac{x}{a} + C$ does <em>not</em> apply here, since no square root remains in the denominator after the substitution.</p>
|
1,212,262 | <p>The statement goes as follow: </p>
<p>$ B ∩ C ⊆ A ⇒ (C − A) ∩ (B − A) = ∅. $</p>
<p>First, the sign "=>" represents a tautology, no? (Apparently I get it confused with the three-bar sign, if you know what I mean.)</p>
<p>Second, the fact that it equals the empty set (no solutions): how do I prove that? It seems to contradict itself, at least to me.</p>
<p>EDIT: apparently necessary edit... Couldn't start the problem because I did not understand the notation. No need to down vote for that, the more people understand a concept properly, the better....</p>
| Clément Guérin | 224,918 | <p>You must enumerate all the cases (as you did; I find the same ones as yours).</p>
<p>Case 1 : (2,3) 3 boxes with 1 color and 2 boxes with another color.</p>
<p>Case 2 : (1,1,3) 3 boxes with 1 color, 1 box with another color and another 1 box with another color.</p>
<p>Case 3 : (1,2,2)...</p>
<p>Case 4 : (1,1,1,2)...</p>
<p>Now how many choices do I have in each case (here I get different calculations) ?</p>
<p>For the case 1, I choose one color for the three and then one for the two. That is :</p>
<p>$${5 \choose 1}{4 \choose 1}=20 $$</p>
<p>For the case 2, I choose one color for the three and then two colors for the remaining boxes. That is :</p>
<p>$${5 \choose 1}{4 \choose 2}=30$$</p>
<p>For the case 3, I choose two colors for the two pairs of boxes and then one color for the remaining box. That is :</p>
<p>$${5 \choose 2}{3 \choose 1}=30$$</p>
<p>For the case 4, I choose one color for two boxes and then three colors for the three remaining boxes. That is :</p>
<p>$${5 \choose 1}{4 \choose 3}=20$$</p>
<p>On the whole I get $20+30+30+20=100$ ways to do this. Here I assumed that the boxes were unlabelled. </p>
<p>If I wanted to sum up the arguments I used, first you reduce to some cases. Then to compute all the way to distribute the colored balls it suffices to choose a color for each set of boxes which are assumed to be colored in the same way.</p>
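The four binomial products can be double-checked in a few lines of Python (`math.comb` is the binomial coefficient):

```python
# The four case counts from the answer, as binomial products.
from math import comb

case1 = comb(5, 1) * comb(4, 1)   # (3,2): a color for the three, then one for the two
case2 = comb(5, 1) * comb(4, 2)   # (3,1,1): a color for the three, two for the singles
case3 = comb(5, 2) * comb(3, 1)   # (2,2,1): two colors for the pairs, one for the single
case4 = comb(5, 1) * comb(4, 3)   # (2,1,1,1): a color for the pair, three for the singles
total = case1 + case2 + case3 + case4
```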
|
1,557,733 | <p>I have a function $f(\mathbf{u}, \Sigma)$ where $\mathbf{u}$ is a $p \times 1$ vector and $\Sigma$ is a $p \times p$ real symmetric matrix (positive semi-definite).</p>
<p>I somehow successfully computed the partial derivatives $\frac{\partial f}{\partial \mathbf{u}}$ and $\frac{\partial f}{\partial \Sigma}$.</p>
<p>In this case, how do I optimize the function $f$ using Newton-Raphson's method?</p>
<p>=====Details=======</p>
<p>$y = X\mathbf{u} + \frac{1}{2}\operatorname{diag}(X\Sigma X^{T})$</p>
<p>$e = exp(y)$</p>
<p>$f = \mathbf{1}^{T} e$</p>
<p>where $exp$ is component-wise and $\mathbf{1}$ is a vector with ones. $X$ is not symmetric.</p>
| cr001 | 254,175 | <p>As the second part is proven in the other answer, I will add the proof for the first part here. I will use the same point system as described in the other answer.</p>
<p>WLOG we consider the sum $m_a+m_b$.</p>
<p>Let the midpoint of $CF$ be $R$. Connect $BR$ and $AR$.</p>
<p>First we show $BR+AR>BG+AG$. This is true because $G$ is inside triangle $BRA$. If we let the intersection of $AG$ and $BR$ be $T$, then $BG+AG<BT+AT<BR+AR$.</p>
<p>Now connect $RD$. Since $BD+RD>BR$, we know $BC+BF>2BR$, because $BC=2BD$ and $BF=2RD$ ($RD$ is a midline of triangle $CBF$).</p>
<p>Similarly we have $AC+AF>2AR$.</p>
<p>Sum the two up we get $AB+BC+CA>2AR+2BR>2AG+2BG=2\cdot{2\over3}(m_a+m_b)$ and the result follows.</p>
|
200,278 | <p>Say I have two TimeSeries:</p>
<pre><code>x = TimeSeries[{2, 4, 1, 10}, {{1, 2, 4, 5}}]
y = TimeSeries[{6, 2, 6, 3, 9}, {{1, 2, 3, 4, 5}}]
</code></pre>
<p>x has a value at times: 1,2,4,5</p>
<p>y has a value at times: 1,2,3,4,5</p>
<p>I would like to build a list of pairs {<span class="math-container">$x_i$</span>, <span class="math-container">$y_i$</span>} which would not include missing elements (in this case element x element at time 3 is missing)</p>
<p>The desired result would be: </p>
<pre><code>{{2,6}, {4,2}, {1,3}, {10,9}}
</code></pre>
<p>I have a feeling that this should be simple and perhaps I'm not using right tools.</p>
| Carl Lange | 57,593 | <p>One way you can do this is by combining the two <code>TimeSeries</code> into <code>TemporalData</code> with no resampling:</p>
<pre><code>x = TimeSeries[{2, 4, 1, 10}, {{1, 2, 4, 5}}]
y = TimeSeries[{6, 2, 6, 3, 9}, {{1, 2, 3, 4, 5}}]
td = TemporalData[{x, y}, ResamplingMethod -> None]
</code></pre>
<p>Then, we can get the "Slices" for the range of times:</p>
<pre><code>td["SliceData", Range[5]]
</code></pre>
<blockquote>
<p><code>{{2, 4, Missing[], 1, 10}, {6, 2, 6, 3, 9}}</code></p>
</blockquote>
<p>And we can <code>Transpose</code> that to get pairs:</p>
<pre><code>Transpose@td["SliceData", Range[5]]
</code></pre>
<blockquote>
<p><code>{{2, 6}, {4, 2}, {Missing[], 6}, {1, 3}, {10, 9}}</code></p>
</blockquote>
<p>and finally, drop all pairs that contain <code>Missing</code> using <code>DeleteMissing</code>'s second and third argument:</p>
<pre><code>DeleteMissing[Transpose@td["SliceData", Range[5]], 1, 1]
</code></pre>
<blockquote>
<p><code>{{2, 6}, {4, 2}, {1, 3}, {10, 9}}</code></p>
</blockquote>
|
200,278 | <p>Say I have two TimeSeries:</p>
<pre><code>x = TimeSeries[{2, 4, 1, 10}, {{1, 2, 4, 5}}]
y = TimeSeries[{6, 2, 6, 3, 9}, {{1, 2, 3, 4, 5}}]
</code></pre>
<p>x has a value at times: 1,2,4,5</p>
<p>y has a value at times: 1,2,3,4,5</p>
<p>I would like to build a list of pairs {<span class="math-container">$x_i$</span>, <span class="math-container">$y_i$</span>} which would not include missing elements (in this case element x element at time 3 is missing)</p>
<p>The desired result would be: </p>
<pre><code>{{2,6}, {4,2}, {1,3}, {10,9}}
</code></pre>
<p>I have a feeling that this should be simple and perhaps I'm not using right tools.</p>
| Anjan Kumar | 19,742 | <p>You can simply normalize the data (<code>x</code> and <code>y</code>), convert it to an <a href="https://reference.wolfram.com/language/ref/Association.html" rel="nofollow noreferrer"><code>Association</code></a> and later <a href="https://reference.wolfram.com/language/ref/Merge.html" rel="nofollow noreferrer"><code>Merge</code></a> it. </p>
<pre><code>data = {<|Rule @@@ Normal[x]|>, <|Rule @@@ Normal[y]|>};
Merge[
data,
If[Length[#] == 2, #, Nothing] &
] // Values
</code></pre>
<blockquote>
<p>{{2, 6}, {4, 2}, {1, 3}, {10, 9}}</p>
</blockquote>
|
3,122,732 | <p>Let <span class="math-container">$C[a,b]$</span> denote the set of all continuous, real-valued maps on the interval <span class="math-container">$[a,b]$</span>. Let <span class="math-container">$P_n$</span> denote the set of all real polynomials on <span class="math-container">$[a,b]$</span> which have a <em>maximum</em> degree of <span class="math-container">$n$</span>. Let <span class="math-container">$P=\cup_{n\geq 1}P_n$</span>.</p>
<p>Then Weierstrass's Approximation theorem says that the closure of <span class="math-container">$P$</span> in <span class="math-container">$C[a,b]$</span> (with the uniform norm) is the entirety of <span class="math-container">$C[a,b]$</span>. This would seem to hint that <span class="math-container">$P_n$</span> is <em>not</em> dense in <span class="math-container">$C[a,b]$</span>. And indeed, <a href="https://math.stackexchange.com/questions/236003/is-the-set-of-polynomials-of-degree-less-than-or-equal-to-n-closed">one can show</a> that <span class="math-container">$P_n$</span> is a <em>closed</em> and (obviously) proper subset of <span class="math-container">$C[a,b]$</span> which would mean that for any <span class="math-container">$n\in\mathbb{N}$</span> and any <span class="math-container">$\epsilon>0$</span> there exists an <span class="math-container">$f$</span> in <span class="math-container">$C[a,b]$</span> such that
<span class="math-container">$$
\inf_{p\in P_n}|p-f|_\infty\geq\epsilon\,.
$$</span>
Polynomials of degree <span class="math-container">$n$</span> are determined by their values at <span class="math-container">$n+1$</span> distinct points. Thus, this result basically says that if we are limited to only being able to interpolate a function with polynomials at a maximum number of points, there will still exist continuous functions that we cannot approximate well.</p>
<blockquote>
<p>Can one give a <em>constructive</em> proof of this fact? In other words, given <span class="math-container">$n\in\mathbb{N}$</span> and <span class="math-container">$\epsilon>0$</span> can we explicitly construct a function <span class="math-container">$f$</span> such that
<span class="math-container">$$
\inf_{p\in P_n}|p-f|_\infty\geq\epsilon\,?
$$</span>
For bonus points, can we construct <span class="math-container">$f$</span> to be a polynomial of degree <span class="math-container">$n+1$</span>?</p>
</blockquote>
<p>For the first question, I expect a function with enough oscillation will accomplish this goal. And perhaps something as simple as <span class="math-container">$\bar{\epsilon}\sin(\bar{n}x)$</span> for appropriate <span class="math-container">$\bar{\epsilon}$</span> that depends on <span class="math-container">$\epsilon$</span> and <span class="math-container">$\bar{n}$</span> depending on <span class="math-container">$n$</span>.</p>
| John Hughes | 114,036 | <p><strong>A Broad Hint, but not a complete answer</strong></p>
<p>Suggestions: </p>
<ol>
<li><p>Simplify to <span class="math-container">$a = 0, b = 1$</span>, or <span class="math-container">$a = -1, b = 1$</span> to make the notation nicer. </p></li>
<li><p>You might want to look at something like <span class="math-container">$f(x) = (x-a)^{n+1}$</span>. Degree-<span class="math-container">$n$</span> polynomials won't be able to fit this very well. There'll be some minimum value <span class="math-container">$h$</span> of <span class="math-container">$u(p) = \max_{x \in [a,b]} |f(x) - p(x)|$</span> over all <span class="math-container">$p \in P_n$</span>. To get <span class="math-container">$h > \epsilon$</span>, multiply my <span class="math-container">$f$</span> by <span class="math-container">$\frac{\epsilon}{h}$</span>.</p></li>
</ol>
<p>Let's do that explicitly for <span class="math-container">$n = 0$</span>. Pick <span class="math-container">$a = 0, b = 1$</span>. So
<span class="math-container">$$
f(x) = x.
$$</span>
Approximating with <span class="math-container">$p_c(x) = c$</span> in <span class="math-container">$P_0$</span> gives a max error of at least <span class="math-container">$1/2$</span> for every choice of <span class="math-container">$c$</span>. So to make the error at least epsilon, multiply <span class="math-container">$f$</span> by <span class="math-container">$\frac{\epsilon}{1/2} = 2\epsilon$</span> to get
<span class="math-container">$$
f_\epsilon = 2\epsilon x.
$$</span></p>
<p>Of course, the minimization is messier as <span class="math-container">$n$</span> grows, but I'll bet that if you do the cases <span class="math-container">$n = 1, 2$</span>, you'll start to get a general answer. </p>
|
3,122,732 | <p>Let <span class="math-container">$C[a,b]$</span> denote the set of all continuous, real-valued maps on the interval <span class="math-container">$[a,b]$</span>. Let <span class="math-container">$P_n$</span> denote the set of all real polynomials on <span class="math-container">$[a,b]$</span> which have a <em>maximum</em> degree of <span class="math-container">$n$</span>. Let <span class="math-container">$P=\cup_{n\geq 1}P_n$</span>.</p>
<p>Then Weierstrass's Approximation theorem says that the closure of <span class="math-container">$P$</span> in <span class="math-container">$C[a,b]$</span> (with the uniform norm) is the entirety of <span class="math-container">$C[a,b]$</span>. This would seem to hint that <span class="math-container">$P_n$</span> is <em>not</em> dense in <span class="math-container">$C[a,b]$</span>. And indeed, <a href="https://math.stackexchange.com/questions/236003/is-the-set-of-polynomials-of-degree-less-than-or-equal-to-n-closed">one can show</a> that <span class="math-container">$P_n$</span> is a <em>closed</em> and (obviously) proper subset of <span class="math-container">$C[a,b]$</span> which would mean that for any <span class="math-container">$n\in\mathbb{N}$</span> and any <span class="math-container">$\epsilon>0$</span> there exists an <span class="math-container">$f$</span> in <span class="math-container">$C[a,b]$</span> such that
<span class="math-container">$$
\inf_{p\in P_n}|p-f|_\infty\geq\epsilon\,.
$$</span>
Polynomials of degree <span class="math-container">$n$</span> are determined by their values at <span class="math-container">$n+1$</span> distinct points. Thus, this result basically says that if we are limited to only being able to interpolate a function with polynomials at a maximum number of points, there will still exist continuous functions that we cannot approximate well.</p>
<blockquote>
<p>Can one give a <em>constructive</em> proof of this fact? In other words, given <span class="math-container">$n\in\mathbb{N}$</span> and <span class="math-container">$\epsilon>0$</span> can we explicitly construct a function <span class="math-container">$f$</span> such that
<span class="math-container">$$
\inf_{p\in P_n}|p-f|_\infty\geq\epsilon\,?
$$</span>
For bonus points, can we construct <span class="math-container">$f$</span> to be a polynomial of degree <span class="math-container">$n+1$</span>?</p>
</blockquote>
<p>For the first question, I expect a function with enough oscillation will accomplish this goal. And perhaps something as simple as <span class="math-container">$\bar{\epsilon}\sin(\bar{n}x)$</span> for appropriate <span class="math-container">$\bar{\epsilon}$</span> that depends on <span class="math-container">$\epsilon$</span> and <span class="math-container">$\bar{n}$</span> depending on <span class="math-container">$n$</span>.</p>
| Eric Wofsey | 86,856 | <p>First, a comment: your thinking is kind of backwards. Since <span class="math-container">$P_n$</span> is closed in <span class="math-container">$C[a,b]$</span>, <em>any</em> continuous function <span class="math-container">$f$</span> that is not a polynomial of degree <span class="math-container">$\leq n$</span> is the example you seek, for appropriate <span class="math-container">$\epsilon$</span>. If you want <span class="math-container">$\epsilon$</span> to be given ahead of time, that just means you have to scale up <span class="math-container">$f$</span> to fit your <span class="math-container">$\epsilon$</span>. In other words, you just need to find an explicit lower bound for <span class="math-container">$\inf_{p\in P_n}\|p-f\|_\infty$</span> so you know how much you need to scale <span class="math-container">$f$</span> by to turn the lower bound into <span class="math-container">$\epsilon$</span>.</p>
<p>One simple way to find such an explicit bound is using the <span class="math-container">$L^2$</span> norm. Given any <span class="math-container">$f\in L^2[a,b]$</span>, you can compute the projection <span class="math-container">$g$</span> of <span class="math-container">$f$</span> onto the orthogonal complement of <span class="math-container">$P_n$</span> using Gram-Schmidt (and <span class="math-container">$g$</span> will be nonzero unless <span class="math-container">$f\in P_n$</span>). We then know that <span class="math-container">$$\|p-f\|_2\geq\|g\|_2$$</span> for any <span class="math-container">$p\in P_n$</span>. Converting this into the sup norm, we see that <span class="math-container">$$\|p-f\|_\infty\geq(b-a)^{-1/2}\|g\|_2$$</span> for all <span class="math-container">$p\in P_n$</span>. By scaling <span class="math-container">$f$</span> we can change the constant on the right side of this inequality to be anything you want (since <span class="math-container">$g$</span> scales in proportion to <span class="math-container">$f$</span>).</p>
<p>In particular, if you start with <span class="math-container">$f(x)=x^{n+1}$</span>, this gives an algorithm to explicitly compute a positive lower bound for <span class="math-container">$\|f-p\|_\infty$</span> for all <span class="math-container">$p\in P_n$</span>.</p>
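As a concrete instance (my own numbers): take <span class="math-container">$[a,b]=[-1,1]$</span>, <span class="math-container">$n=1$</span>, <span class="math-container">$f(x)=x^2$</span>. The projection onto the orthogonal complement of <span class="math-container">$P_1$</span> is <span class="math-container">$g(x)=x^2-\tfrac13$</span>, with <span class="math-container">$\|g\|_2^2 = \tfrac{8}{45}$</span>, so every <span class="math-container">$p\in P_1$</span> satisfies <span class="math-container">$\|p-f\|_\infty \ge 2^{-1/2}\|g\|_2 = \sqrt{4/45}$</span>. A numerical sketch:

```python
# Lower bound for the sup-norm distance from f(x) = x^2 to P_1 on [-1, 1]:
# project f onto the orthogonal complement of span{1, x} and use
# ||p - f||_inf >= (b - a)^(-1/2) * ||g||_2.
def integrate(h, a=-1.0, b=1.0, n=100000):
    # simple midpoint rule
    step = (b - a) / n
    return sum(h(a + (i + 0.5) * step) for i in range(n)) * step

def f(x):
    return x * x

# Fourier coefficients against {1, x}, an orthogonal basis of P_1 on [-1, 1]
c0 = integrate(f) / integrate(lambda x: 1.0)                     # = 1/3
c1 = integrate(lambda x: f(x) * x) / integrate(lambda x: x * x)  # = 0

def g(x):
    return f(x) - c0 - c1 * x

norm_g = integrate(lambda x: g(x) ** 2) ** 0.5   # = sqrt(8/45)
bound = norm_g / 2.0 ** 0.5                      # (b - a)^(-1/2) * ||g||_2
```

The bound is about <span class="math-container">$0.298$</span>, consistent with (and weaker than) the true sup-norm distance <span class="math-container">$1/2$</span> from Chebyshev approximation.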
|
2,077,694 | <p>How to find $A = M^{A}_{B}$ for the linear transformation $F : \mathbb{P}_{2} \rightarrow \mathbb{R}^{2} $, where $ F(p(t)) = \begin{pmatrix} p(0) \\ p(1) \end{pmatrix},$
$ A = \{1,t,t^{2}\},$
$B=\left \{ \begin{pmatrix} 1\\ 0\end{pmatrix},\begin{pmatrix} 0\\ 1\end{pmatrix} \right \}$?</p>
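For orientation, here is a sketch (my own, not from the thread) of the standard recipe: the columns of $M^{A}_{B}$ are the $B$-coordinates of $F(1)$, $F(t)$, $F(t^2)$:

```python
# Columns of M^A_B are F applied to the basis A = {1, t, t^2}, written in
# the standard basis B of R^2. A polynomial c0 + c1*t + c2*t^2 is stored
# as the coefficient tuple (c0, c1, c2).
def F(p):
    p_at_0 = p[0]                 # p(0)
    p_at_1 = p[0] + p[1] + p[2]   # p(1)
    return (p_at_0, p_at_1)

basis_A = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]   # 1, t, t^2
columns = [F(p) for p in basis_A]
matrix = [[col[i] for col in columns] for i in range(2)]
```

This gives $M^{A}_{B} = \begin{pmatrix} 1 & 0 & 0 \\ 1 & 1 & 1 \end{pmatrix}$.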
| Asinomás | 33,907 | <p>If $n=p_1^{a_1}p_2^{a_2}\dots p_r^{a_r}$ then $\frac{\sigma(n)}{n}=\prod\limits_{j=1}^r \frac{p_j^{a_j+1}-1}{p_j^{a_j}(p_j-1)}$.</p>
<p>Clearly each factor is an increasing function with respect to $a_j$ and every factor is greater than or equal to $1$. So increasing the number of factors or increasing the exponents only makes the fraction larger.</p>
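The product formula $\frac{\sigma(n)}{n}=\prod_{j} \frac{p_j^{a_j+1}-1}{p_j^{a_j}(p_j-1)}$ can be spot-checked against a direct divisor sum (a Python sketch with exact rationals; helper names are mine):

```python
# Verify sigma(n)/n == prod_j (p_j^(a_j+1) - 1) / (p_j^a_j * (p_j - 1))
# by comparing against a brute-force divisor sum.
from fractions import Fraction

def sigma_over_n(n):
    return Fraction(sum(d for d in range(1, n + 1) if n % d == 0), n)

def factorize(n):
    factors, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            factors[p] = factors.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def formula(n):
    result = Fraction(1)
    for p, a in factorize(n).items():
        result *= Fraction(p ** (a + 1) - 1, p ** a * (p - 1))
    return result
```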
|
138,658 | <p>Suppose $X$ is a topological space, and $\mu$ is a Borel measure on $X$. Also suppose we have an $n$-dimensional vector bundle $E \to X$, with an inner product $\langle \cdot,\cdot \rangle_x$ on the fibre $E_x$ for all $x \in X$, in such a way that each $E_x$ is complete and such that there exists a vector bundle trivialisation which is compatible with the fibrewise inner products. The inner product $\langle \cdot, \cdot \rangle_x$ determines a norm $||\cdot||_x$ on $E_x$.</p>
<p>Say that a (not necessarily continuous) section $\sigma \colon X \to E$ is <em>measurable</em> if its restriction to each trivialising open set $U \subset X$ is given by a measurable function $U \to \mathbb{F}^n$ (here $\mathbb{F}$ is either $\mathbb{R}$ or $\mathbb{C}$, depending on how you feel). Denote the set of measurable sections (understood as being defined up to measure zero) by $\Gamma(E)$.</p>
<p>Given a section $\sigma \in \Gamma(E)$ and a number $p \in (0,\infty)$, we can define
$$||\sigma||_p := \left( \int_X ||\sigma(x)||_x^p \; d\mu(x) \right)^{1/p},$$
and we can then define the corresponding $E$-valued Lebesgue space $L^p(X;E)$ in the obvious way.</p>
<blockquote>
<p><strong>Question:</strong> do we have the usual duality relations for Lebesgue spaces, i.e. $(L^p(X;E))^* \cong L^{p^\prime}(X;E)$ where $1 = \frac{1}{p} + \frac{1}{p^\prime}$?</p>
</blockquote>
<p>As one would expect, there is a kind of Hölder inequality: if $\sigma \in L^p(X;E)$ and $\tau \in L^{p^\prime}(X;E)$, then the function $\langle \sigma,\tau \rangle$ on $X$, given by
$$\langle \sigma,\tau \rangle(x) := \int_X \langle \sigma(x), \tau(x) \rangle_x \; d\mu(x),$$
satisfies $|| \langle \sigma,\tau \rangle||_{L^1(X)} \leq ||\sigma||_p ||\tau||_{p^\prime}$. It follows that the pairing $\langle \cdot,\cdot \rangle$ can be used to isometrically embed $L^{p^\prime}(X;E)$ into $(L^p(X;E))^*$.</p>
<p>However, I haven't been able to prove the reverse containment - that each functional $\phi \in (L^p(X;E))^*$ is given by pairing with an element of $L^{p^\prime}(X;E)$ - without additional assumptions, such as the existence of a finite trivialising cover for $E$ with uniform norm control (for example, when $X$ is compact). In this case, one can recover the result from the corresponding result for trivial bundles - which is essentially the case of vector-valued Lebesgue spaces - but constants appear which depend on the cardinality of a trivialising cover, which is somewhat unexpected.</p>
<p>Has this been explicitly proven anywhere? Is it even true in general? (I'll be very surprised if it isn't)</p>
| Peter Michor | 26,935 | <p>Maybe the following helps:
Theorem 3.12 (page 20) in the following source has a related result, albeit for higher Sobolev spaces.
There are quite subtle requirements for the trivialising atlas and the partition of unity which are used in the proof.</p>
<ul>
<li><a href="http://www.ams.org/mathscinet-getitem?mr=2343536" rel="nofollow noreferrer">MR2343536</a>; Eichhorn, Jürgen Global analysis on open manifolds. Nova Science Publishers, Inc., New York, 2007. x+644 pp.</li>
</ul>
<p>You can access the beginning of the book via scholar.google.com.</p>
<h1>Second Edit:</h1>
<p>It seems to me now it is much simpler.</p>
<p>Proof: Measure theoretically, there are no nontrivial bundles. So you can find a global orthonormal frame by measurable sections <span class="math-container">$s_1,\dots,s_n$</span> of your bundle so that any section <span class="math-container">$f$</span> is of the form <span class="math-container">$f=\sum _i f^i.s_i$</span> where <span class="math-container">$(f^i)_{i=1}^n \in L^p(X,\mathbb R^n)$</span>. This gives an isometry between <span class="math-container">$L^p$</span>-sections of the bundle and a usual <span class="math-container">$\mathbb R^n$</span>-valued <span class="math-container">$L^p$</span>-space.</p>
|
1,784,679 | <p>if $p,q,r$ are three positive integers prove that</p>
<p>$$LCM(p,q,r)=\frac{pqr \times HCF(p,q,r)}{HCF(p,q) \times HCF(q,r) \times HCF(r,p)}$$</p>
<p>I tried in this way;</p>
<p>Let $HCF(p,q)=x$ hence $p=xm$ and $q=xn$ where $m$ and $n$ are relatively prime.</p>
<p>similarly let $HCF(q,r)=y$ hence $q=ym_1$ and $r=yn_1$ where $m_1$ and $n_1$ are Relatively prime.</p>
<p>Also let $HCF(r,p)=z$, hence $r=zm_2$ and $p=zn_2$, where $m_2$ and $n_2$ are relatively prime.</p>
<p>we have $$p=xm=zn_2$$</p>
<p>$$q=xn=ym_1$$and</p>
<p>$$r=yn_1=zm_2$$</p>
<p>Can I have any hint to proceed?</p>
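Before hunting for a proof, the identity can be confirmed numerically over all small triples (a Python sketch; helper names are mine):

```python
# Check lcm(p,q,r) * HCF(p,q) * HCF(q,r) * HCF(r,p) == p*q*r * HCF(p,q,r)
# for all triples 1 <= p, q, r < 30.
from math import gcd

def lcm3(p, q, r):
    lcm_pq = p * q // gcd(p, q)
    return lcm_pq * r // gcd(lcm_pq, r)

violations = [
    (p, q, r)
    for p in range(1, 30) for q in range(1, 30) for r in range(1, 30)
    if lcm3(p, q, r) * gcd(p, q) * gcd(q, r) * gcd(r, p)
       != p * q * r * gcd(gcd(p, q), r)
]
# violations stays empty, so the identity holds on this range
```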
| Ryder Rude | 445,404 | <p>Let <span class="math-container">$P, Q$</span> and <span class="math-container">$R$</span> be the sets of prime factors of the numbers <span class="math-container">$p,q$</span> and <span class="math-container">$r$</span> respectively. Let the universal set be <span class="math-container">$P\cup Q\cup R$</span>. Let there be a product function <span class="math-container">$v(S)$</span> which gives the product of the elements of the set <span class="math-container">$S$</span>. (Assume first that <span class="math-container">$p,q,r$</span> are squarefree, so that each number is the product of its set of prime factors; the general case follows by comparing the exponent of each fixed prime on both sides in the same way.)</p>
<p>Then, celarly,</p>
<p><span class="math-container">$$pqr=v(P\cap Q' \cap R')*v(Q\cap P' \cap R')*v(R\cap P'\cap Q')*v^2(P\cap Q\cap R')*v^2(Q\cap R\cap P')*v^2(R\cap P\cap Q')*v^3(P\cap Q\cap R)$$</span></p>
<p>Now,</p>
<p><span class="math-container">$$LCM(p,q,r)=v(P\cap Q' \cap R')*v(Q\cap P' \cap R')*v(R\cap P'\cap Q')*v(P\cap Q\cap R')*v(Q\cap R\cap P')*v(R\cap P\cap Q')*v(P\cap Q\cap R)$$</span></p>
<p><span class="math-container">$$HCF(p,q)=v(P\cap Q\cap R')*v(P\cap Q\cap R)$$</span></p>
<p><span class="math-container">$$HCF(q,r)=v(Q\cap R\cap P')*v(P\cap Q\cap R)$$</span></p>
<p><span class="math-container">$$HCF(r,p)=v(R\cap P\cap Q')*v(P\cap Q\cap R)$$</span></p>
<p><span class="math-container">$$HCF(p,q,r)=v(P\cap Q\cap R)$$</span></p>
<p>These expressions satisfy your equation.</p>
|
66,370 | <p>Let $(X,\mathcal{E},\mu)$ be a measure space. Let $u,v$ be $\mu$-measurable functions. If $0 \leq u \leq v$ and $\int_X v d\mu$ exists we know that $\int_X u d\mu \leq \int_X v d\mu$.</p>
<p>I wanted to know if $0 \leq u < v$ and $\int_X v d\mu$ exists then is it true that
$\int_X u d\mu < \int_X v d\mu$? This can be shown for simple functions easily i.e. if $u,v$ are simple.</p>
<p>I have assumed here that $\int_X u d\mu = \sup \{\int_X u_n d\mu, u_n \text{simple}, \mu-\text{measurable}, u_n \leq u\}$ where a simple function is defined to be a function
whose cardinality of the range is finite.</p>
<p>Any help is greatly appreciated.</p>
<p>Thanks,
Phanindra</p>
| Did | 6,179 | <p>By contraposition, you might want to prove that if $w\ge0$ and $\displaystyle\int\limits_Xw\mathrm d\mu=0$ then $w=0$ $\mu$-almost everywhere. To see this, consider $A_n=\{x\mid w(x)\ge1/n\}$ and note that $w\ge n^{-1}\mathbf 1_{A_n}$ hence $\displaystyle\int\limits_Xw\mathrm d\mu\ge n^{-1}\mu(A_n)$ hence $\mu(A_n)=0$ for every $n$ hence $\{x\mid w(x)\ne0\}=\bigcup\limits_nA_n$ has measure zero. You are done.</p>
|
2,512,556 | <p>What would be the solution of $ y''+y=\cos (ax) \ $ if $ \ a \to 1 \ $. </p>
<p><strong>Answer:</strong></p>
<p>I have found the complementary function $ y_c \ $ </p>
<p>$ y_c(x)=A \cos x+B \sin x \ $</p>
<p>But How can I find the particular integral if $ a \to 1 \ $ </p>
| Dr. Sonnhard Graubner | 175,066 | <p>A possible particular solution (for $a \neq 1$) is given by $$y_P=\frac{\cos ^2(x) (-\cos (a x))-\sin ^2(x) \cos (a x)}{a^2-1}=\frac{\cos (a x)}{1-a^2}.$$ Now consider the case where $a$ tends to $1$: subtracting the homogeneous solution $\frac{\cos x}{1-a^2}$ and applying L'Hôpital's rule in $a$ gives the resonant particular solution $$\lim_{a\to 1}\frac{\cos (a x)-\cos x}{1-a^2}=\frac{x\sin x}{2}.$$</p>
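As a numerical cross-check (my own sketch), the resonant particular solution $y=\frac{x\sin x}{2}$ indeed satisfies $y''+y=\cos x$; a central second difference makes this visible:

```python
# Central-difference check that y(x) = x*sin(x)/2 satisfies y'' + y = cos(x),
# the a -> 1 resonant case of the equation above.
from math import sin, cos

def y(x):
    return x * sin(x) / 2

h = 1e-4
residuals = []
for i in range(1, 100):
    x = i / 10.0
    ypp = (y(x + h) - 2 * y(x) + y(x - h)) / h ** 2   # approximates y''(x)
    residuals.append(abs(ypp + y(x) - cos(x)))
max_residual = max(residuals)   # small: limited only by discretization error
```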
|
3,500,799 | <p>What is the dimension of the vector space spanned by the set of vectors <span class="math-container">$(a,b,c)$</span> where <span class="math-container">$a^2+b^2=c$</span>?</p>
| KRL | 585,628 | <p>One approach is to come up with vectors <span class="math-container">$(a,b,c)$</span> that satisfy the equation and figure out if they are linearly independent. </p>
<p>Note that in your question the dimension is at most three. Here, vectors (1,0,1), (0,1,1), (1,-1,2) are linearly independent vectors that satisfy the equation. So the dimension is three. </p>
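The independence claim can be confirmed with a 3×3 determinant (a quick sketch; <code>det3</code> is my helper):

```python
# The three vectors satisfy a^2 + b^2 = c, and their 3x3 determinant is
# nonzero, so they are linearly independent and span R^3.
v1, v2, v3 = (1, 0, 1), (0, 1, 1), (1, -1, 2)

def det3(a, b, c):
    # cofactor expansion along the first row
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
            - a[1] * (b[0] * c[2] - b[2] * c[0])
            + a[2] * (b[0] * c[1] - b[1] * c[0]))

on_surface = all(v[0] ** 2 + v[1] ** 2 == v[2] for v in (v1, v2, v3))
d = det3(v1, v2, v3)   # nonzero, so the vectors are independent
```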
|
250,687 | <p>I'm doing a sanity check of the following equation:
<span class="math-container">$$\sum_{j=2}^\infty \frac{(-x)^j}{j!}\zeta(j) \approx x(\log x + 2 \gamma -1)$$</span></p>
<p>Naive comparison of the two shows a bad match but I suspect one of the graphs is incorrect.</p>
<ol>
<li>Why isn't there a warning?</li>
<li>How do I compute this sum correctly?</li>
</ol>
<pre><code>katsurda[x_] := NSum[(-x)^j/j! Zeta[j], {j, 2, Infinity}];
katsurdaApprox[x_] := x (Log[x] + 2 EulerGamma - 1) - Zeta[0];
plot1 = DiscretePlot[katsurda[x], {x, 0, 40, 2}];
plot2 = Plot[katsurdaApprox[x], {x, 0, 40}];
Show[plot1, plot2]
</code></pre>
<p><a href="https://i.stack.imgur.com/pBmVX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pBmVX.png" alt="enter image description here" /></a></p>
<ol start="3">
<li><strong>meta</strong> How do I avoid being misled by incorrect numeric results? Would using <code>NIntegrate</code> instead of <code>NSum</code> give better guarantees? My usual approach of avoiding machine precision, checking the <code>Precision</code> of the answer, and minding warnings fails in the example below</li>
</ol>
<pre><code>katsurda[x_] :=
NSum[(-x)^j/j! Zeta[j], {j, 2, Infinity}, WorkingPrecision -> 32,
NSumTerms -> 2.5 x];
katsurdaApprox[x_] := x (Log[x] + 2 EulerGamma - 1) - Zeta[0];
Print["Precision: ", Precision@katsurda[100]] (* 13.9729 *)
Print["Discrepancy: ", katsurda[100] - katsurdaApprox[100]] (* 94.65088290385, but should be <1 *)
</code></pre>
<p><strong>Background:</strong> the expression comes from "Power series with the Riemann zeta-function in the coefficients" by Katsurada M (<a href="https://projecteuclid.org/journals/proceedings-of-the-japan-academy-series-a-mathematical-sciences/volume-72/issue-3/Power-series-with-the-Riemann-zeta-function-in-the-coefficients/10.3792/pjaa.72.61.pdf" rel="nofollow noreferrer">paper</a>)</p>
| J. M.'s persistent exhaustion | 50 | <p>Using the <span class="math-container">$d$</span>-type <a href="https://doi.org/10.1016/0167-7977(89)90011-7" rel="nofollow noreferrer">Weniger transformation</a>, as implemented in <a href="https://resources.wolframcloud.com/FunctionRepository/resources/WenigerSum/" rel="nofollow noreferrer"><code>ResourceFunction["WenigerSum"]</code></a>, we get the following:</p>
<pre><code>Plot[{ResourceFunction["WenigerSum"][(-x)^j Zeta[j]/j!, {j, 2, ∞},
"ExtraTerms" -> 25, "Type" -> "D", WorkingPrecision -> 20],
x (Log[x] + 2 EulerGamma - 1) - Zeta[0]} // Evaluate, {x, 0, 45}, WorkingPrecision -> 25]
</code></pre>
<p><img src="https://i.stack.imgur.com/4U6Ap.png" alt="Weniger sum vs. asymptotic approximation" /></p>
<pre><code>Plot[ResourceFunction["WenigerSum"][(-x)^j Zeta[j]/j!, {j, 2, ∞},
"ExtraTerms" -> 25, "Type" -> "D", WorkingPrecision -> 20] -
(x (Log[x] + 2 EulerGamma - 1) - Zeta[0]) // Evaluate, {x, 0, 45},
PlotRange -> All, WorkingPrecision -> 25]
</code></pre>
<p><img src="https://i.stack.imgur.com/IBGz7.png" alt="residual" /></p>
<p>One would need to play around with the <code>"ExtraTerms"</code> and <code>"Terms"</code> settings for this function (e.g. try the combination <code>"Terms" -> 0, "ExtraTerms" -> 35</code>), and one has to use high precision as well to stave off the looming threat of catastrophic cancellation, especially for large <span class="math-container">$x$</span>. I have chosen the <span class="math-container">$d$</span>-type transformation since the original series is alternating (as recommended by Weniger), but the <span class="math-container">$t$</span>-type (<code>"Type" -> "T"</code>) can be used on such series as well. I invite you to do your own experiments.</p>
|
3,251,337 | <p>Let <span class="math-container">$E,F,K,L$</span> be points on the sides <span class="math-container">$AB,BC,CD,DA$</span> of a square <span class="math-container">$ABCD$</span>, respectively. Show that if <span class="math-container">$EK$</span> <span class="math-container">$\perp$</span> <span class="math-container">$FL$</span>, then <span class="math-container">$EK=FL$</span>.</p>
<p>I need help proving something like this:</p>
<p><a href="https://i.stack.imgur.com/WRNge.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WRNge.png" alt="enter image description here"></a></p>
<p>Any hints?</p>
<p>Edited: I wrote the principal statement wrong, now it's correct.</p>
<p>I saw a question in this forum about a similar problem, the problem was like this:
<a href="https://i.stack.imgur.com/sdxAI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sdxAI.png" alt="enter image description here"></a></p>
<p>And I thought that I could modify the problem as in the first image; after drawing a lot of squares in GeoGebra, I think it's true, but I don't know how to prove it.</p>
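A coordinate sanity check (my own parametrization): with <span class="math-container">$A=(0,0)$</span>, <span class="math-container">$B=(1,0)$</span>, <span class="math-container">$C=(1,1)$</span>, <span class="math-container">$D=(0,1)$</span>, put <span class="math-container">$E=(e,0)$</span>, <span class="math-container">$K=(k,1)$</span>, <span class="math-container">$F=(1,f)$</span>, <span class="math-container">$L=(0,l)$</span>. Then <span class="math-container">$\vec{EK}=(k-e,1)$</span> and <span class="math-container">$\vec{FL}=(-1,l-f)$</span>, so perpendicularity forces <span class="math-container">$l-f=k-e$</span>, and both lengths equal <span class="math-container">$\sqrt{1+(k-e)^2}$</span>:

```python
# Random instances on the unit square: given E, K, F, choose the l that
# makes EK perpendicular to FL, then compare |EK| and |FL|.
# (l may leave [0, 1]; the length identity is unaffected.)
import random

random.seed(0)
max_diff = 0.0
for _ in range(1000):
    e, k, f = random.random(), random.random(), random.random()
    l = f + (k - e)                      # perpendicularity: l - f = k - e
    len_EK = ((k - e) ** 2 + 1) ** 0.5
    len_FL = (1 + (l - f) ** 2) ** 0.5
    max_diff = max(max_diff, abs(len_EK - len_FL))
# max_diff is at floating-point noise level: the lengths always agree
```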
| DanielWainfleet | 254,665 | <p>In any topological space <span class="math-container">$X$</span> and any <span class="math-container">$E\subset X,$</span> the 3 sets <span class="math-container">$int(E),\, int(X\setminus E),\, \partial E)$</span> are pair-wise disjoint and their union is <span class="math-container">$X.$</span></p>
<p>So if <span class="math-container">$E\cap \partial E=\emptyset$</span> then <span class="math-container">$$E=E\cap X=E\cap (\,int (E) \cup int (X\setminus E)\cup \partial E\,)=$$</span> <span class="math-container">$$=(E\cap int E)\,\cup\, (E\cap int (X\setminus E))\,\cup\, (E\cap \partial E)\subset$$</span> <span class="math-container">$$\subset (E\cap int(E)\,\cup \,( E\cap (X\setminus E)\,\cup\, (E\cap \partial E)=$$</span> <span class="math-container">$$=int (E)\,\cup \, ( \emptyset)\,\cup \,(\emptyset)=$$</span> <span class="math-container">$$=int (E)\subset E$$</span> so <span class="math-container">$E=int(E).$</span></p>
<p>OR, from the first sentence above, for any <span class="math-container">$E\subset X$</span> we have <span class="math-container">$int(E)\subset E\subset \overline E=int(E)\cup \partial E.$</span></p>
<p>So if <span class="math-container">$E\cap \partial E=\emptyset$</span> then <span class="math-container">$$E=E\cap \overline E=E\cap (int (E) \cup \partial E)=$$</span> <span class="math-container">$$=(E\cap int (E))\,\cup \,(E\cap \partial E)=$$</span> <span class="math-container">$$=(E\cap int (E))\cup(\emptyset)=$$</span> <span class="math-container">$$=int(E)\subset E$$</span> so <span class="math-container">$E=int(E).$</span></p>
|
68,803 | <p>I am trying to understand how all the players in the title relate, but with all the grading shifts,and difficult isomorphisms involved in the subject I am having a hard time being sure that I have the picture right. I am going to write what I think is true, and if someone would confirm or deny it, that would be really nice. </p>
<p>The basic jumping-off point is that if $N$ is a simply connected manifold, the symplectic cohomology $SH^*(D^*(N))$ of the cotangent disk bundle $D^*(N)$, as defined in Seidel's "A Biased View of Symplectic Cohomology", is naturally isomorphic to $H_{n-*}(LN)$, and symplectic homology is isomorphic to $H^*(LN)$; these are the Hochschild cohomology and homology, respectively, of the algebra $C^*(N)$. </p>
<p>The reason is that the zero section generates the compact Fukaya category $Fuk^{cpt}(D^*(N))$ and $End(N_0,N_0) \cong C^*(N)$. There is expected to be a geometric Seidel map from $SH^*(D^*(N)) \to HH^*(Fuk^{cpt}(D^*(N)))$: basically one considers cylinders with a puncture which satisfy a deformed d-bar equation, are asymptotic to periodic orbits of the Hamiltonian vector field, and have boundary on the zero section; this deforms the compositions in the category to first order.</p>
<p>Question 1) Has anybody checked in this example that Seidel's map is an isomorphism?</p>
<p>Now we take equivariant versions of this: $SH^*_{eq}$, which should be identified with cyclic cohomology $CC^*(C^*(N))$. Next we move to (linearized) contact homology, which Bourgeois and Oancea claim can be identified with $H_{eq}(LN,N)$ (I give up on the gradings at this point :)), where $N$ is included as the constant loops. Reasonable enough, since those are somehow the generators missing from contact homology. However, they also seem to be making the identification $SH_* \cong H_*(LN)$ and not the cohomology... I get that it's not really a huge deal as vector spaces to identify a vector space and its dual, but it adds to the confusion below.</p>
<p>With contact homology one can try a similar map to the Seidel map, namely work in the symplectic completion $T^*(N)$ and consider holomorphic disks with a puncture, with boundary on the zero section, asymptotic to a Reeb orbit as $|\rho|\to \infty$ ($|\rho|$ is some norm on $T^*(N)$). This gives a map from $CH_{*}\to CC^{*}$ which is mysterious because... (or maybe this is a map from "contact cohomology", I'm getting confused). Edit: A reference for this map is in a paper by Xiaojun Chen called "Lie Bialgebras and cyclic homology of $A_\infty$ structures in topology". </p>
<p>Question 2) We are supposed to somehow map $H_{*,eq}(LN,N) \to H_{*,eq}(LN)$. I'm not great with topology but this is a strange map, since it seems to go the wrong way. On the other hand one should expect it to be interesting by analogy with the Seidel map. What is this map at least conjecturally supposed to be? The only thing I could think of is the kernel of the map: $C_*(LN) \to C_*(N)$ induced by the map $LN \to N$ but this map doesn't even exist equivariantly.</p>
| Eigenbunny | 16,006 | <p>Let me rephrase: we consider the map from the symplectic cohomology of D*N, which
(Viterbo 97, et al) is the homology of the free loop space of N, to the Hochschild
cohomology of the Fukaya category consisting of compact Lagrangian submanifolds.
If we allowed only the zero-section N, that Hochschild cohomology would be the
Hochschild cohomology of the cochain algebra C*(N). (For the definition of the map
itself, see Seidel 02).</p>
<p>Clearly, this doesn't work out to be an isomorphism for N = T^n: the symplectic
cohomology is an exterior algebra tensor a Laurent polynomial algebra, whereas
the Hochschild cohomology is an exterior algebra tensor a power series algebra
(by the "odd" analogue of Hochschild-Kostant-Rosenberg).</p>
<p>The obvious rejoinder is that one should allow Lagrangians which are N equipped
with a nontrivial flat vector bundle. However, if one takes this to mean ordinary
finite-dimensional flat vector bundles, that doesn't cure the problem. There are
more interesting versions with certain infinite-dimensional vector bundles, for which
see Abouzaid 09.</p>
|
121,541 | <p>I'd like to pick <em>k</em> points from a set of points in <em>n</em>-dimensions that are approximately "maximally apart" (sum of pairwise distances is almost maxed). What is an efficient way to do this in MMA? Using the solution from C Woods, for example:</p>
<pre><code>KFN[list_, k_Integer?Positive] := Module[{kTuples},
kTuples=Subsets[RandomSample[list,Max[k*2,100]],{k}, 1000];
MaximalBy[kTuples,Total[Flatten[Outer[EuclideanDistance[#1,#2]&,#,#,1]]]&]
]
pts=RandomReal[1,{100,3}]
kfn=KFN[pts,3][[1]]
Graphics3D[{Blue,Point[pts],PointSize[Large],Red,Point[kfn]}]
</code></pre>
<p><a href="https://i.stack.imgur.com/ts1qt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ts1qt.png" alt="enter image description here"></a></p>
<p>But here's the catch: I need this algorithm to be efficient enough to scale to this size:</p>
<pre><code>pts=RandomReal[1,{10^6,10^3}];
kfn=KFN[pts,100]
Graphics3D[{Blue,Point[pts],PointSize[Large],Red,Point[kfn]}]
</code></pre>
<p><strong>Notes</strong></p>
<ol>
<li>The comment from J.M. works for <em>k=2</em>, but hangs at <em>k=4</em>. </li>
<li><p>Here's a nice paper on k-FN:</p>
<ul>
<li><a href="http://www.dcs.gla.ac.uk/workshops/ddr2012/papers/p3said.pdf" rel="nofollow noreferrer">http://www.dcs.gla.ac.uk/workshops/ddr2012/papers/p3said.pdf</a></li>
</ul></li>
</ol>
| mikado | 36,788 | <p>It is possible to write the total distance between the <code>k</code> points as a quadratic form based on the <code>DistanceMatrix</code> <code>M</code>. Finding the <code>k</code> points that maximise the total distance between them is then a matter of finding a vector <code>V</code> of zeros and ones that maximises <code>V.M.V</code> and whose total is <code>k</code>.</p>
<p>The following uses a literal implementation of this idea</p>
<pre><code>kfn[list_, k_Integer?Positive] :=
Module[{M, V, v, n, sol},
n = Length[list];
M = DistanceMatrix[list];
V = Array[v, n];
sol = NMaximize[{V.M.V, Total[V] == k, Thread[0 <= V <= 1],
Element[V, Integers]}, V];
Pick[list, Thread[V == 1] /. Last[sol]]
]
</code></pre>
<p>This is not very useful. It appears to be slower and less reliable than <code>KFN</code> by @C.Woods (repeated to make the answer self-contained)</p>
<pre><code>KFN[list_, k_Integer?Positive] := Module[{kTuples},
kTuples = Subsets[list, {k}];
MaximalBy[kTuples,
Total[Flatten[Outer[EuclideanDistance[#1, #2] &, #, #, 1]]] &]
]
</code></pre>
<p>However, we can relax the requirement that the vector <code>V</code> contains only integers (but still have <code>0 <= V <= 1</code>), to give a function</p>
<pre><code>candidate[list_, k_Integer?Positive] :=
Module[{M, V, v, n, sol},
n = Length[list];
M = DistanceMatrix[list];
V = Array[v, n];
sol = FindMaximum[{V.M.V, Total[V] == k, Thread[0 <= V <= 1]}, V,
Gradient -> 2 M.V, Method -> "QuadraticProgramming"];
Pick[list, Thread[V > 0.001] /. Last[sol]]
]
</code></pre>
<p>that picks out a list of points whose length is typically a little more than <code>k</code>. Note that this is a concave/convex problem with a unique global solution. When fed with this list of points <code>KFN</code> is much faster (and in limited testing, gives the same answer). For example:</p>
<pre><code> pts = RandomReal[1, {100, 3}];
AbsoluteTiming[KFN[pts, 3];]
(* {1.78323, Null} *)
AbsoluteTiming[KFN[candidate[pts, 3], 3];]
(* {0.239304, Null} *)
AbsoluteTiming[KFN[pts, 4];]
(* {63.4595, Null} *)
AbsoluteTiming[KFN[candidate[pts, 4], 4];]
(* {0.306873, Null} *)
</code></pre>
<p>I have no proof that <code>candidate</code> is guaranteed to pick a subset containing the solution, but it seems plausible that it should give good (if not optimal) candidates.</p>
<p>This certainly won't scale to the higher number of points requested in the OP, but since the size of the optimisation problem is independent of the dimensionality of the space in which the points lie, it should cope with high dimensional spaces.</p>
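The exact subset search blows up combinatorially; for the million-point regime in the question, a common cheap alternative is a greedy farthest-point heuristic. This is my own Python sketch (not the Mathematica code above, and it carries no optimality guarantee): seed with one extreme point, then repeatedly add the point with the largest summed distance to those already chosen.

```python
import random

def greedy_far_points(pts, k):
    """Greedy heuristic: seed with the point farthest from the centroid,
    then repeatedly add the point with the largest summed distance to the
    points already chosen.  O(n*k*d) time, no optimality guarantee."""
    dist = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
    d = len(pts[0])
    centroid = tuple(sum(p[i] for p in pts) / len(pts) for i in range(d))
    chosen = [max(pts, key=lambda p: dist(p, centroid))]
    while len(chosen) < k:
        chosen.append(max((p for p in pts if p not in chosen),
                          key=lambda p: sum(dist(p, c) for c in chosen)))
    return chosen

random.seed(1)
pts = [tuple(random.random() for _ in range(3)) for _ in range(500)]
sel = greedy_far_points(pts, 4)
print(len(sel), len(set(sel)))  # 4 4
```

Because each step is a single linear scan, the cost is independent of the number of candidate subsets, which is what makes it usable at large n and high dimension.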
|
1,945,116 | <p>I need a simple definition of disjoint cycles in symmetric groups. I already understand what cycles and transpositions are. I need a simple definition and, if possible, a clear example. Thanks in advance, Mathematician.</p>
| paw88789 | 147,810 | <p>Disjoint cycles have no cycle elements in common. For example $(1, 2, 3)$ and $(4,5,6,7)$ are disjoint cycles. </p>
<p>By contrast, $(1,2,3)$ and $(3,4,5,6)$ are not disjoint because they have the $3$ in common.</p>
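A tiny computational restatement of the same idea (editor's sketch): two cycles are disjoint exactly when their supports, the sets of elements they move, do not intersect.

```python
def supports_disjoint(cycle_a, cycle_b):
    """Two cycles are disjoint iff they move no common element."""
    return set(cycle_a).isdisjoint(set(cycle_b))

# (1 2 3) and (4 5 6 7) share no elements, so they are disjoint.
print(supports_disjoint((1, 2, 3), (4, 5, 6, 7)))  # True
# (1 2 3) and (3 4 5 6) both move 3, so they are not disjoint.
print(supports_disjoint((1, 2, 3), (3, 4, 5, 6)))  # False
```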
|
2,963,587 | <p>I'm working on a relatively low-level math project, and for one part of it I need to find a function that returns how many configurations are reachable within n moves from the solved state.</p>
<p>Because there are 18 moves (using the half-turn metric, where half turns count as a single move), one form of the function could be <span class="math-container">$\sum_{k=0}^{n} 18^{k} $</span>, since it would technically be the sum over all move sequences of length 0, 1, ..., n. However, what would a more optimised function, which takes into account factors like inverses, cube symmetry, etc., look like?</p>
| hmakholm left over Monica | 14,366 | <p><a href="http://cube20.org/" rel="noreferrer">http://cube20.org/</a> shows exact counts for <span class="math-container">$n$</span> up to 15, but only has approximate counts above that.</p>
<p>This probably means no nice formula that would make those higher values easy to compute is known. </p>
<p>(If there were a nice exact formula, it would presumably have shown immediately that going from 20 to 21 moves reveals no new configurations, mooting the whole brute-force search for "god's number").</p>
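To see how loose the plain move-sequence bound is, here is a small sketch I'm adding; the exact state count used below is the standard 8! · 3^7 · (12!/2) · 2^11.

```python
def naive_bound(n):
    # Upper bound: number of move sequences of length 0, 1, ..., n.
    return sum(18**k for k in range(n + 1))

# Exact number of legal cube states: 8! * 3^7 * (12!/2) * 2^11
TOTAL_STATES = 43_252_003_274_489_856_000

# The naive bound already exceeds the total number of states at n = 16,
# well before God's number (20), showing how badly it overcounts
# (inverses, commuting moves, symmetry, repeated states).
n = next(k for k in range(30) if naive_bound(k) > TOTAL_STATES)
print(n)  # 16
```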
|
366,096 | <p>Let's consider $J\subset \mathbb R^2$ such that $J$ is convex and its boundary is a curve $\gamma$. Let's suppose that $\gamma$ is anti-clockwise oriented, and let's consider its signed curvature $k_s$. I want to prove the following intuitive fact:</p>
<p>$$
\int\limits_\alpha {k_s } \left( s \right)ds \geqslant 0
$$</p>
<p>For every sub-curve $\alpha \subset \gamma $.</p>
<p>And then prove that $k_s(s) \ge 0$</p>
<p>I have no idea how to attack this problem, intuitively I can see the result.</p>
| Brian Rushton | 51,970 | <p>If the integral were negative, there would have to be a point with negative curvature. As you zoom in at that point, the curve looks more and more like the boundary of the complement of a circle, which means that there are two points of the set which are not connected by a straight line segment inside the set, contradicting convexity.</p>
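A concrete numerical sanity check of both claims, pointwise $k_s \ge 0$ and total turning $2\pi$, for a sample convex curve (my own addition, using an ellipse; this is an illustration, not a proof):

```python
import math

# Ellipse (2 cos t, sin t), traversed counterclockwise.  With
# k_s = (x'y'' - y'x'') / (x'^2 + y'^2)^(3/2) and ds = |v| dt,
# the integrand of  int k_s ds  is  (x'y'' - y'x'') / (x'^2 + y'^2).
def k_s_speed(t):
    xp, yp = -2 * math.sin(t), math.cos(t)
    xpp, ypp = -2 * math.cos(t), -math.sin(t)
    return (xp * ypp - yp * xpp) / (xp * xp + yp * yp)

n = 100_000
h = 2 * math.pi / n
vals = [k_s_speed((i + 0.5) * h) for i in range(n)]
# Pointwise k_s > 0 for this convex curve ...
assert min(vals) > 0
# ... and the total turning over the whole closed curve is 2*pi.
total = h * sum(vals)
print(abs(total - 2 * math.pi) < 1e-6)  # True
```

The total of $2\pi$ for the full closed curve is the classical turning-number fact; nonnegativity on every subarc is what the question asks to prove in general.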
|
2,294,548 | <p><strong>Problem:</strong> Solve $y'=\sqrt{xy}$ with the initial condition $y(0)=1$.</p>
<p><strong>Attempt:</strong> Using $\sqrt{ab}=\sqrt{a}\cdot\sqrt{b}$, I get that the DE is separable by dividing both sides by $\sqrt{y}:$ $$y'=\sqrt{x}\cdot\sqrt{y}\Leftrightarrow\frac{y'}{\sqrt{y}}=\sqrt{x}$$</p>
<p>which can be rearranged to $$\frac{1}{\sqrt{y}}dy=\sqrt{x}dx$$ and proceeding to integrate both sides. </p>
<p>$$\int\frac{1}{\sqrt{y}} \ dy=\int\sqrt{x} \ dx \Longleftrightarrow2\sqrt{y}+C_1=\frac{2x\sqrt{x}}{3}+C_2$$</p>
<p>Which eventually gives $$y(x)=\left(\frac{\frac{2x\sqrt{x}}{3}+C_2-C_1}{2}\right)^2=\left(\frac{x\sqrt{x}}{3}+D\right)^2=\frac{x^3}{9}+D.$$</p>
<p><strong>Question:</strong> However, according to <a href="https://math.stackexchange.com/questions/2293258/sqrtx2-1-sqrtx1-cdot-sqrtx-1?noredirect=1#comment4717869_2293258">this question</a> I posted yesterday, $\sqrt{xy}=\sqrt{x}\cdot\sqrt{y}$ only holds for $x,y\geq 0$, but nowhere in this question is this restriction given given. Why is it ok for me to use it then?</p>
<p><strong>Sidenote/question:</strong> Is my way of solving the DE correct? Any room for improvement?</p>
| Robert Z | 299,698 | <p>A solution of this Cauchy problem can not be defined in an interval $(-r,0)$ with $r>0$ because $y(0)=1$ and, by continuity, $y(x)$ is positive in a neighborhood of $0$ whereas $x<0$ so $xy<0$ and the square root on the RHS is not defined.</p>
<p>Moreover, for $x>0$, $y'(x)=\sqrt{xy(x)}\geq 0$ implies that $y$ is increasing. Therefore $y(x)\geq y(0)=1>0$.</p>
<p>Hence, with the initial condition $y(0)=1$, you may assume that $x\geq 0$ and $y(x)\geq 0$. </p>
<p>It follows that your solution
$$y(x) = \left(1 + \frac{x^{3/2}}{3} \right)^2$$
holds in the maximal interval $[0,+\infty)$.</p>
<p>P.S. Note that with the initial condition $y(0)=0$, the problem does not have a unique solution. Two of them are $y(x)=0$ and $y(x)=x^3/9$, and their maximal interval is $\mathbb{R}$.</p>
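As a quick check (my addition), the closed-form solution can be verified against the ODE numerically with a central finite difference:

```python
import math

def y(x):
    # Candidate solution on [0, infinity): y = (1 + x^(3/2)/3)^2
    return (1.0 + x**1.5 / 3.0) ** 2

# Initial condition:
assert y(0.0) == 1.0
# ODE y' = sqrt(x*y), checked with a central finite difference:
h = 1e-6
for x in (0.5, 1.0, 2.0, 5.0):
    deriv = (y(x + h) - y(x - h)) / (2 * h)
    assert abs(deriv - math.sqrt(x * y(x))) < 1e-5
print("ODE check passed")
```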
|
2,663,303 | <blockquote>
<p>Let $G$ be finite. Suppose that $\left\vert \{x\in G\mid x^n =1\}\right\vert \le n$ for all $n\in \mathbb{N}$. Then $G$ is cyclic.</p>
</blockquote>
<p>What I have attempted was the fact that every element is contained in a maximal subgroup, together with <a href="https://groupprops.subwiki.org/wiki/Cyclic_iff_not_a_union_of_proper_subgroups" rel="nofollow noreferrer">cyclic iff not a union of proper subgroups</a>, <a href="https://math.stackexchange.com/questions/2060998/count-elements-in-a-cyclic-group-of-given-order">the order of elements of a cyclic group</a>, and Sylow-$p$ subgroups... But none of them seems helpful. </p>
<p>: ) It’s very kind of you to give me some hints to push me further. Thanks!</p>
| user120527 | 530,843 | <p>Here are some hints of a possible way to prove this:</p>
<p>1) Show that for all $n$ dividing $|G|$, there is at most one cyclic subgroup of cardinality $n$ in $G$. Call it $H_n$ (when it exists).</p>
<p>2) Look at the map $\Psi: G\to \{\text{cyclic subgps} \}$, $x\mapsto \langle x \rangle$. Compute $|\Psi^{-1}(H_n)|$.</p>
<p>3) Write an equation giving $|G|$ in terms of $|\Psi^{-1}(H_n)|$, and compare with a famous formula involving Euler's totient function. </p>
|
694,668 | <p>Let (X,Y) be uniformly distributed in a circle of radius 1. Show that if R is the distance from the center of the circle to (X,Y) then $R^2$ is uniform on (0,1). </p>
<p>This is question from the Simulation text of Prof. Sheldon Ross. Any hints? </p>
| Unwisdom | 124,220 | <p>There are lots of approaches one could take, but the simplest one I can think of is to consider the shape of the CDF of $R^2$. </p>
<p>What is the probability that $R^{2}<t$? Well, this is clearly $0$ for $t<0$ and $1$ for $t>1$. For $t\in [0,1]$, the probability is the area of the circle of radius $R$ (divided by $\pi$, since $1/\pi$ is the joint density in the unit circle) where $R^2=t$. Substituting $R=\sqrt{t}$, this is:
$$\frac{\pi R^{2}}{\pi} = \frac{\pi t}{\pi} = t.$$
And we're done. This is the CDF of the uniform distribution on $[0,1]$, as required!</p>
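A quick Monte Carlo check of this (my addition): sample $(X,Y)$ uniformly in the unit disk by rejection and look at the empirical distribution of $R^2$.

```python
import random

random.seed(0)

def sample_r2():
    # Rejection-sample a uniform point in the unit disk, return R^2.
    while True:
        x, y = random.uniform(-1, 1), random.uniform(-1, 1)
        r2 = x * x + y * y
        if r2 <= 1.0:
            return r2

n = 200_000
samples = [sample_r2() for _ in range(n)]
# If R^2 ~ Uniform(0,1) we expect mean 1/2 and P(R^2 < t) = t.
mean = sum(samples) / n
frac_below_quarter = sum(s < 0.25 for s in samples) / n
print(round(mean, 3), round(frac_below_quarter, 3))
```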
|
445 | <p>Under what circumstances should a question be made community wiki?</p>
<p>Probably any question asking for a list of something (e.g. <a href="https://math.stackexchange.com/questions/81/list-of-interesting-math-blogs">1</a>) must be CW. What else? What about questions asking for a list of applications of something (like, say, <a href="https://math.stackexchange.com/questions/804/applications-for-the-class-equation-from-group-theory">3</a>) or questions like <a href="https://math.stackexchange.com/questions/1446/interesting-properties-of-ternary-relations">2</a>? Should all soft-questions be made community wiki (and how we should define soft-question, in that case)?</p>
| Grigory M | 152 | <p>I think we still need a “CW-policy”. Suggestion:</p>
<ol>
<li><code>big-list</code> questions, asking for sorted list of resources/books should be CW
<ul>
<li>and, probably, questions asking for the best example/intuition/etc in some field (including questions of most interesting propeties of some object etc)</li>
</ul></li>
<li>questions, that are not directly mathematical, should be CW (because answering them also shouldn't give rep):
<ul>
<li><code>notation</code>, <code>terminology</code> and likes</li>
<li><code>math-history</code> and likes</li>
</ul></li>
</ol>
|
3,978,606 | <p>Question says</p>
<blockquote>
<p>For <span class="math-container">$(C[0,1], \Vert\cdot\Vert_{\infty})$</span>, let <span class="math-container">$B=\{f\in C[0,1] :
\Vert f\Vert_{\infty} \leq 1\}$</span>. Find all <span class="math-container">$f\in B$</span> such that
there exist <span class="math-container">$g,h\in B$</span>, <span class="math-container">$g\neq h$</span>, with <span class="math-container">$f=\frac{g+h}{2}$</span></p>
</blockquote>
<p>I studied some graphs and I feel that for <span class="math-container">$f \equiv 1$</span> and <span class="math-container">$f \equiv -1$</span>, we can't find such <span class="math-container">$g,h, g\neq h$</span> which give <span class="math-container">$f=\frac{g+h}{2}$</span>, as the only choice left for <span class="math-container">$g$</span> and <span class="math-container">$h$</span> is to be identically equal to <span class="math-container">$1$</span> or <span class="math-container">$-1$</span>, in the respective case. For all other cases, we can increase and decrease the function <span class="math-container">$f$</span> a little bit (for those <span class="math-container">$x$</span> such that <span class="math-container">$\vert f(x)\vert \neq 1$</span>) so as to get an average equal to <span class="math-container">$f$</span>. I can sketch that but am not able to put it mathematically. Am I correct?</p>
<p>Thanks.</p>
| Kavi Rama Murthy | 142,385 | <p>Hint for a rigorous proof: Suppose <span class="math-container">$|f(c)| <1$</span> for some <span class="math-container">$c$</span>. Let <span class="math-container">$r=1-|f(c)|$</span>. Let <span class="math-container">$g_n$</span> be a piece-wise linear function such that <span class="math-container">$g_n(c)=\frac r 2$</span> and <span class="math-container">$g_n(x)=0$</span> for <span class="math-container">$|x-c| \geq \frac 1 n$</span>. [Draw a picture.] Then <span class="math-container">$f=\frac {(f+g_n) +(f-g_n)} 2$</span> and <span class="math-container">$|f(x)\pm g_n (x)| \leq 1$</span> for all <span class="math-container">$x$</span> provided <span class="math-container">$n$</span> is large enough.</p>
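Here is a small numerical illustration of the hint (my own sketch; the particular $f(x)=\tfrac12\cos 5x$, $c=\tfrac12$, and $n=10$ are just an arbitrary choice with $|f(c)|<1$):

```python
import math

c = 0.5
f = lambda x: 0.5 * math.cos(5 * x)   # any f with sup|f| < 1 works here
r = 1.0 - abs(f(c))                   # distance of f(c) from the boundary
n = 10

def g(x):
    # Piecewise-linear bump: height r/2 at c, zero for |x - c| >= 1/n.
    return (r / 2) * max(0.0, 1.0 - n * abs(x - c))

grid = [i / 10000 for i in range(10001)]
sup = lambda h: max(abs(h(x)) for x in grid)

# f is the midpoint of two distinct functions that both stay in the ball:
assert sup(lambda x: f(x) + g(x)) <= 1.0
assert sup(lambda x: f(x) - g(x)) <= 1.0
assert g(c) > 0   # so f + g and f - g really differ
print("f = ((f+g) + (f-g))/2 with both summands in B")
```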
|
2,502,617 | <p>My teacher said this is the case because $A_\infty$ is generated by elements of order $3$ (the 3-cycles), and $S_\infty$ is not. I understand that the 3-cycles do not generate $S_n$, and that $\phi(x) = y \implies \text{order}(x) = \text{order}(y)$, but why can't there be an isomorphism between another set of generators of $A_\infty$ (say the two cycles), and a set of generators of $S_\infty$?</p>
| Derek Holt | 2,820 | <p>Suppose that $\phi:A_\infty \to S_\infty$ is an isomorphism. We know that $A_\infty = \langle X \rangle$, where $X$ is the set of $3$-cycles. It follows that $S_\infty = \phi(\langle X \rangle) = \langle \{\phi(x) : x \in X \} \rangle$.</p>
<p>Since $\phi(x)$ has order $3$ for all $x \in X$, it follows that $S_\infty$ is generated by a set of elements of order $3$. But every such element in $S_\infty$ is an even permutation, so this is a contradiction. Hence there can be no such isomorphism.</p>
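The signature argument can be checked by brute force in a small case; a short sketch (my addition) computing parity and order over $S_5$:

```python
from itertools import permutations

def parity(p):
    """Sign of the permutation i -> p[i]: (-1)^(len-1) per cycle."""
    seen, sign = set(), 1
    for i in range(len(p)):
        if i not in seen:
            j, length = i, 0
            while j not in seen:
                seen.add(j)
                j = p[j]
                length += 1
            sign *= (-1) ** (length - 1)
    return sign

def order(p):
    q, k = tuple(p), 1
    while q != tuple(range(len(p))):
        q = tuple(p[i] for i in q)
        k += 1
    return k

# Every element of order 3 in S_5 is even, so order-3 elements can
# only generate even permutations -- never an odd one.
assert all(parity(p) == 1 for p in permutations(range(5)) if order(p) == 3)
print("every order-3 element of S_5 is even")
```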
|
2,502,617 | <p>My teacher said this is the case because $A_\infty$ is generated by elements of order $3$ (the 3-cycles), and $S_\infty$ is not. I understand that the 3-cycles do not generate $S_n$, and that $\phi(x) = y \implies \text{order}(x) = \text{order}(y)$, but why can't there be an isomorphism between another set of generators of $A_\infty$ (say the two cycles), and a set of generators of $S_\infty$?</p>
| orangeskid | 168,051 | <p>Every element in $A_{\infty}$ is a product of an even number of transpositions. If $\tau_1$, $\tau_2$, $\tau$ are transpositions, we have
$\tau_1 \tau_2 = (\tau_1 \tau)( \tau \tau_2)$. Hence every element in $A_{\infty}$ can be written as a product of an even number of elements of the form $\eta_1 \eta_2$, where $\eta_1$, $\eta_2$ are transpositions with disjoint support. Note also that all such elements are conjugate in $A_{\infty}$ to $(1,2)(3,4)$.
Let $f\colon A_{\infty}\to S_{\infty}$ be a morphism of groups. Every element in $f(A_{\infty})$ is a product of an even number of conjugates of $f((1,2)(3,4))$, so of signature $+1$. Hence $f(A_{\infty})\subset A_{\infty}$. </p>
|
94,525 | <p>I am trying to solve the equation
$$z^n = 1.$$</p>
<p>Taking $\log$ on both sides I get $n\log(z) = \log(1) = 0$.</p>
<p>$\implies$ $n = 0$ or $\log(z) = 0$</p>
<p>$\implies$ $n = 0$ or $z = 1$.</p>
<p>But I clearly missed out $(-1)^{\text{even numbers}}$ which is equal to $1$.</p>
<p>How do I solve this equation algebraically?</p>
| nb1 | 15,767 | <p>First of all, note that apart from $z=1$ (and $z=-1$ when $n$ is even), all other solutions will be non-real. For the equation</p>
<p>$z^n=1$, we use Euler's formula $e^{ix}=\cos x + i \sin x$ and de Moivre's theorem $(\cos x + i \sin x)^{n}=\cos(nx)+ i\sin(nx)$.</p>
<p>Then, writing $z=re^{ix}$, $z^n=1$ implies $r^n(\cos(nx)+ i\sin(nx))=1$. Hence, $r=1$ and</p>
<p>$\cos(nx)+ i\sin(nx)=1$. Thus, $\cos(nx)=1$ and $\sin(nx)=0$, equating real and imaginary parts separately.</p>
<p>That gives the answer $z=\cos\left(\frac{2\pi k}{n}\right)+i\sin\left(\frac{2\pi k}{n}\right)$, $k=0,1,\ldots,n-1$</p>
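For completeness, the full solution set consists of the $n$-th roots of unity $z_k=\cos(2\pi k/n)+i\sin(2\pi k/n)$ for $k=0,\dots,n-1$; a quick numerical check (my addition):

```python
import cmath

def roots_of_unity(n):
    # All n solutions of z^n = 1: z_k = cos(2*pi*k/n) + i*sin(2*pi*k/n)
    return [cmath.exp(2j * cmath.pi * k / n) for k in range(n)]

for n in (2, 3, 6):
    for z in roots_of_unity(n):
        assert abs(z**n - 1) < 1e-12
# n = 2 recovers the real solutions 1 and -1 that taking logarithms missed:
print([round(z.real) for z in roots_of_unity(2)])  # [1, -1]
```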
|
3,744,560 | <p>Suppose I have a function <span class="math-container">$\Lambda(t)$</span> for any <span class="math-container">$t>0$</span>. This function has the following three properties:</p>
<ol>
<li><span class="math-container">$\Lambda(t)$</span> is differentiable.</li>
<li><span class="math-container">$\Lambda(t)$</span> is strictly increasing.</li>
<li><span class="math-container">$\Lambda(T) = \Lambda(T+S) - \Lambda(S)$</span> for any <span class="math-container">$T,S>0$</span>.</li>
</ol>
<p>It is stated that the function has the form <span class="math-container">$\Lambda(t) = \lambda t$</span>, but how can I formally derive this from the above three properties. Thanks in advance.</p>
| Cardinal | 254,200 | <p><span class="math-container">$\frac{d}{dT}\Lambda(T+S) = \frac{d}{dT}\Lambda(T)\ \forall S>0 \rightarrow \Lambda'(t) \equiv C \rightarrow \Lambda(t) = C t + u$</span>, with <span class="math-container">$C>0$</span> by strict monotonicity</p>
<p><span class="math-container">$\Lambda(S+T) = \Lambda(S) + \Lambda(T) \rightarrow u=0$</span></p>
<p>Hence</p>
<p><span class="math-container">$\Lambda(t) = Ct$</span></p>
|
3,744,560 | <p>Suppose I have a function <span class="math-container">$\Lambda(t)$</span> for any <span class="math-container">$t>0$</span>. This function has the following three properties:</p>
<ol>
<li><span class="math-container">$\Lambda(t)$</span> is differentiable.</li>
<li><span class="math-container">$\Lambda(t)$</span> is strictly increasing.</li>
<li><span class="math-container">$\Lambda(T) = \Lambda(T+S) - \Lambda(S)$</span> for any <span class="math-container">$T,S>0$</span>.</li>
</ol>
<p>It is stated that the function has the form <span class="math-container">$\Lambda(t) = \lambda t$</span>, but how can I formally derive this from the above three properties. Thanks in advance.</p>
| ir7 | 26,651 | <p><strong>Hint:</strong> Try proving these properties first, by giving <span class="math-container">$S$</span> and <span class="math-container">$T$</span> various values:
<span class="math-container">$$\Lambda(0)=0$$</span>
<span class="math-container">$$\Lambda(2t) = 2\Lambda(t)$$</span>
<span class="math-container">$$\Lambda(nt) = n\Lambda(t)$$</span>
<span class="math-container">$$\Lambda(m/n)=\tfrac{m}{n}\Lambda(1)$$</span></p>
<p>This should give linearity on <span class="math-container">$\mathbf{Q}_+$</span>.</p>
<p>To expand linearity to <span class="math-container">$\mathbf{R}_+$</span>, based on monotonicity only, see <a href="https://proofwiki.org/wiki/Monotone_Additive_Function_is_Linear" rel="nofollow noreferrer">proofwiki</a> (using density of the set of rationals in the set of real numbers and <a href="https://proofwiki.org/wiki/Peak_Point_Lemma" rel="nofollow noreferrer">Peak Point lemma</a> and <a href="https://proofwiki.org/wiki/Squeeze_Theorem/Sequences/Real_Numbers" rel="nofollow noreferrer">Squeeze theorem</a>).</p>
<p>See also <a href="https://en.wikipedia.org/wiki/Cauchy%27s_functional_equation" rel="nofollow noreferrer">Cauchy functional equation</a> for more.</p>
|
1,114 | <p>Or more specifically, why do people get so excited about them? And what's your favorite easy example of one, which illustrates why I should care (and is not a group)?</p>
| Akhil Mathew | 344 | <p>A groupoid is a category where every morphism is invertible. If such a category has one object, then it is a group. Unfortunately I don't know why people are so interested in them, so perhaps this is not helpful.</p>
<p>An example is the fundamental groupoid of path classes in a topological space; the objects are points and morphisms p->q are homotopy classes of curves from p to q. Composition is defined by placing together two paths so this is not a group.</p>
|
1,114 | <p>Or more specifically, why do people get so excited about them? And what's your favorite easy example of one, which illustrates why I should care (and is not a group)?</p>
| Harrison Brown | 382 | <p>I also unfortunately don't really understand why people care so much about them, although I should probably go back and read old TWFs. </p>
<p>Sort of a combinatorial example of the fundamental groupoid is the category assigned to a graph where the objects are vertices and the morphisms are directed paths, and v->w->v = id_v. (If this makes sense.) I believe that this category is nice because (IIRC) you can read off the definition of graph homomorphism from it.</p>
<p>ETA: Okay, a quick Baez-review gives <a href="http://math.ucr.edu/home/baez/week74.html" rel="nofollow">the following</a>, which isn't strictly speaking what you asked about but which helps me understand how groupoids are intrinsically special and not just occasionally an improvement that encodes more data than groups.</p>
<p>If you move from categories to n-categories, you can define n-groupoids, although this is subtle. Now, just as the fundamental groupoid is a more natural construction than the fundamental group, n-groupoids capture more information about homotopy than do "monoidal n-groupoids." But furthermore, all the constructions of homotopy are in some way reversible (if I'm following Baez correctly) -- not only is homotopy equivalence an equivalence relation, but even on higher levels this is true -- e.g., the fundamental groupoid functor has an adjoint, which is essentially the classifying space construction, so actually n-categories capture <em>everything</em> about homotopy! And in fact, Baez says that if you think about \omega-categories (which are a limit of n-categories, essentially, I guess?) then the homotopy category of \omega-groupoids is equivalent to <em>the</em> homotopy category.</p>
|
1,114 | <p>Or more specifically, why do people get so excited about them? And what's your favorite easy example of one, which illustrates why I should care (and is not a group)?</p>
| Yemon Choi | 763 | <p>In addition to the answers already given: Alan Weinstein wrote a nice <a href="http://www.ams.org/notices/199607/weinstein.pdf">article for the Notices of the AMS</a> which tries to give some motivating examples:</p>
<p>It seems that in certain situations where taking the quotient by a group action "destroys too much information", working directly with an associated groupoid is more useful. Several of the motivating examples in NCG a la Connes (et al) also seem to fit into this point of view.</p>
|
2,409,918 | <p>I need your help in evaluating the following integral in <strong>closed form</strong>. <span class="math-container">$$\displaystyle\int\limits_{0.5}^{1}
\frac{\mathrm{Li}_{2}\left(x\right)\ln\left(2x - 1\right)}{x}\,\mathrm{d}x$$</span></p>
<p>Since the function is singular at <span class="math-container">$x = 0.5$</span>, we are looking for Principal Value. The integral is finite and was evaluated numerically.</p>
<p>I expect the closed form result to contain <span class="math-container">$\,\mathrm{Li}_{3}$</span> and <span class="math-container">$\,\mathrm{Li}_{2}$</span>.</p>
<p>Thanks</p>
| James Arathoon | 448,397 | <p>This provisional answer is a horribly inelegant conjecture with no proof attached.</p>
<p>$$\int\limits_{0.5}^{1}\frac{Li_2(x) \ln(2x-1)}{x} dx
=-\frac{1}{2}\sum_{n=1}^{\infty}\frac{\sum_{k=0}^{n-1} \frac{\binom{n-1}{k}}{(k+1)^2}}{ \sum_{k=0}^n (k (2 k-1)) \binom{n}{k}}\tag1$$</p>
<p>I have checked this solution using the $m=100$ approximation</p>
<p>$$\int_{\frac{1}{2}}^1 \frac{ \left(\sum _{k=1}^{m} \frac{x^k}{k^2}\right)\log (2 x-1)}{x} \, dx =-\frac{1}{2}\sum_{n=1}^{m}\frac{\sum_{k=0}^{n-1} \frac{\binom{n-1}{k}}{(k+1)^2}}{ \sum_{k=0}^n (k (2 k-1)) \binom{n}{k}}$$ </p>
<p>The numerator summation arises from the number sequence given <a href="https://oeis.org/A097344" rel="nofollow noreferrer">here</a> (binomial transform of $1/(k+1)^2$: that is 1, 5/4, 29/18, 103/48, 887/300, 1517/360, etc.) and the denominator summation arises from the number sequence given <a href="https://oeis.org/A014477" rel="nofollow noreferrer">here</a> (the binomial transform of the hexagonal numbers)</p>
<p>Hopefully someone can make a little more sense of this than I have.</p>
<p><strong>Later Edit:</strong>
In working to understand <a href="https://oeis.org/A097344" rel="nofollow noreferrer">user90369</a>'s answer to this question I found these identities using Mathematica</p>
<p>$$Li_n(\frac{2}{m})-Li_n(\frac{1}{m})=\sum _{k=1}^{\infty } \frac{1}{m^k k^{n-1}}\sum _{v=1}^k \binom{k-1}{v-1}\frac{1}{v}$$</p>
<p>in the case of $m=2$
$$\zeta(n)-Li_n(\frac{1}{2})=\sum _{k=1}^{\infty } \frac{1}{2^k k^{n-1}}\sum _{v=1}^k \binom{k-1}{v-1}\frac{1}{v}$$</p>
<p><strong>Added Later Still:</strong> More Trivial Identities</p>
<p>$$Li_2(\frac{1}{2})=\sum\limits_{k=1}^\infty \frac{1}{k^3 2^k}\sum\limits_{v=1}^1 {\binom k v} \frac{1}{v}$$ </p>
<p>$$Li_4(\frac{1}{2})=\sum\limits_{k=1}^\infty \frac{1}{k^3 2^k}\sum\limits_{v=k}^k {\binom k v} \frac{1}{v}$$</p>
<p>$$\int\limits_{0.5}^{1}\frac{Li_2(x) \ln(2x-1)}{x} dx=-Li_2(\frac{1}{2})-Li_4(\frac{1}{2})-\sum\limits_{k=1}^\infty \frac{1}{k^3 2^k}\sum\limits_{v=2}^{k-1} {\binom k v} \frac{1}{v}$$ </p>
<p>Since
$$\sum\limits_{k=1}^{\infty} \frac{1}{k^3 2^k}\sum\limits_{v=1}^{k} {\binom k v} \frac{1}{v}=\sum _{v=1}^{\infty} \frac{1}{v}\sum _{k=1}^{\infty} \frac{1}{k^{3} 2^k}\binom{k}{v}$$</p>
<p>There is a brute force pattern matching approach that I have found, using</p>
<p>$$S_a=\sum\limits_{k=1}^{\infty} \frac{1}{k^3 2^k}\sum\limits_{v=a}^{a} {\binom k v} \frac{1}{v}=\sum _{v=a}^a \frac{1}{v}\sum _{k=1}^{\infty} \frac{1}{k^{3} 2^k}\binom{k}{v}$$</p>
<p>where
$$\int\limits_{0.5}^{1}\frac{Li_2(x) \ln(2x-1)}{x} dx= -\sum_{a=1}^{\infty} S_a$$ </p>
<p>From Mathematica
$$S_1=\frac{\pi ^2}{12}-\frac{\log ^2(2)}{2}=Li_2(\frac{1}{2})$$
$$S_2=1/48 \left(-\pi^2 + 12 \log(2) + 6 \log^2(2) \right)$$
$$S_3=\frac{1}{108} \left(6+\pi ^2-18 \log (2)-6 \log ^2(2)\right) $$
$$S_4=\frac{1}{192} \left(-8-\pi ^2+22 \log (2)+6 \log ^2(2)\right)$$</p>
<p>and so on. From $S_2$ onwards the general term as far as I have been able to determine is</p>
<p>$$S_a=(-1)^{a-1}C_a+(-1)^{a-1}\frac{\pi^2}{12a^2}+(-1)^{a}\frac{H_{a-1}\log(2)}{a^2}+(-1)^{a}\frac{\log^2(2)}{2a^2}$$</p>
<p>where $H_a$ is the Harmonic Number. I haven't been able to determine the pattern for the rational term, $C_a$ [$0$,$\frac{6}{(12\times3^2)}$,$\frac{8}{(12\times4^2)}$,$\frac{21}{2(12\times5^2)}$,$\frac{119}{10(12\times6^2)}$,$\frac{202}{15(12\times7^2)}$,$\frac{1525}{105(12\times8^2)}$,...]. Since the last three sum up to known closed forms it would be unfortunate if the rational term summation didn't do the same.</p>
<p>Once again I hope someone can make a little more sense of this than I have.</p>
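As a quick consistency check (editor's addition): the $S_1$ value stated above reduces to a rapidly convergent series, which matches the closed form to machine precision.

```python
import math

# S_1 = sum_{k>=1} C(k,1)/(1 * k^3 * 2^k) = sum_{k>=1} 1/(k^2 * 2^k)
#     = Li_2(1/2) = pi^2/12 - log(2)^2/2
s1 = sum(1.0 / (k * k * 2**k) for k in range(1, 60))
closed = math.pi**2 / 12 - math.log(2) ** 2 / 2
print(abs(s1 - closed) < 1e-12)  # True
```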
|
2,409,918 | <p>I need your help in evaluating the following integral in <strong>closed form</strong>. <span class="math-container">$$\displaystyle\int\limits_{0.5}^{1}
\frac{\mathrm{Li}_{2}\left(x\right)\ln\left(2x - 1\right)}{x}\,\mathrm{d}x$$</span></p>
<p>Since the function is singular at <span class="math-container">$x = 0.5$</span>, we are looking for Principal Value. The integral is finite and was evaluated numerically.</p>
<p>I expect the closed form result to contain <span class="math-container">$\,\mathrm{Li}_{3}$</span> and <span class="math-container">$\,\mathrm{Li}_{2}$</span>.</p>
<p>Thanks</p>
| Knas | 634,505 | <p>I will be using the following integral
<span class="math-container">$$
\operatorname{Lv}_{n}(x,\alpha) =\int\limits_{0}^{x} \dfrac{\ln^n t}{1-\alpha t}\,\mathrm{d}t = \dfrac{n!}{\alpha}\sum\limits_{k\,=\,0}^{n}\dfrac{(-1)^k}{\left(n-k\right)!}\ln^{n-k}x\operatorname{Li}_{k+1}(\alpha x)
$$</span>
with <span class="math-container">$x \geq 0$</span>, <span class="math-container">$n \in \mathbb{N}$</span> and <span class="math-container">$\alpha$</span> such that
<span class="math-container">$$ \alpha x < 1 \text{ or } \alpha x = 1\ \wedge\ x = 1$$</span>
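(The $\operatorname{Lv}$ formula can be spot-checked numerically for $n=1$; this is an editorial sketch, not part of the original derivation, using the series definition of the polylogarithm and a midpoint rule for the integral.)

```python
import math

def li(s, z, terms=200):
    # Polylogarithm Li_s(z) by its defining series (fine for |z| < 1).
    return sum(z**k / k**s for k in range(1, terms))

def lv1(x, a):
    # n = 1 case of the formula:
    # Lv_1(x, a) = (1/a) * (ln(x) * Li_1(a x) - Li_2(a x)),  Li_1(z) = -ln(1 - z)
    return (math.log(x) * (-math.log(1 - a * x)) - li(2, a * x)) / a

def quad(x, a, n=200_000):
    # Midpoint rule for the defining integral of ln(t)/(1 - a t) over (0, x].
    h = x / n
    return h * sum(math.log((i + 0.5) * h) / (1 - a * (i + 0.5) * h)
                   for i in range(n))

x, a = 0.5, 0.5
print(abs(lv1(x, a) - quad(x, a)) < 1e-4)  # True
```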
If anywhere <span class="math-container">$\operatorname{Lv}$</span> will appear I'll omit calculations of <span class="math-container">$\operatorname{Li}$</span> constants. Let
<span class="math-container">\begin{align*}\mathfrak{I} &= \int\limits_{1/2}^{1}\dfrac{\ln\left(2x-1\right)}{x}\operatorname{Li}_2(x)\,\mathrm{d}x \\
&= \int\limits_{0}^{1}\dfrac{\ln x}{1+x}\operatorname{Li}_2\left(\dfrac{1+x}{2}\right)\,\mathrm{d}x \\
&= \operatorname{Li}_2\left(\dfrac{1+x}{2}\right)\Big(\ln x\ln\left(1+x\right) + \operatorname{Li}_2(-x)\Big)\Bigg\vert_{0}^{1}+\int\limits_{0}^{1}\dfrac{\ln x\ln\left(1+x\right) + \operatorname{Li}_2(-x)}{1+x}\ln \left(\dfrac{1-x}{2}\right)\,\mathrm{d}x \\
&= \operatorname{Li}_2(1)\operatorname{Li}_2(-1) + \underbrace{\int\limits_{0}^{1}\dfrac{\ln\left(1-x\right)\ln x\ln\left(1+x\right)}{1+x}\,\mathrm{d}x}_{\mathfrak{I}_1}+\underbrace{\int\limits_{0}^{1}\dfrac{\operatorname{Li}_2(-x)\ln\left(1-x\right)}{1+x}\,\mathrm{d}x}_{\mathfrak{I}_2}-\phantom{a}\\
&-\ln 2\underbrace{\int\limits_{0}^{1}\dfrac{1}{1+x}\left(\int\limits_{0}^{x}\dfrac{\ln t}{1+t}\,\mathrm{d}t\right)\,\mathrm{d}x}_{\mathfrak{I}_3}
\end{align*}</span></p>
<blockquote>
<h2><span class="math-container">$\mathfrak{I}_1:$</span></h2>
</blockquote>
<p>Make substitution <span class="math-container">$x\rightarrow \dfrac{1-x}{1+x}$</span>
<span class="math-container">\begin{align*}
\mathfrak{I}_1 &= \int\limits_{0}^{1} \dfrac{1}{1+x}\ln\left(\dfrac{2}{1+x}\right)\ln\left(\dfrac{1-x}{1+x}\right)\ln\left(\dfrac{2x}{1+x}\right)\,\mathrm{d}x \\
&= \ln 2 \int\limits_{0}^{1} \dfrac{\ln\left(1-x\right)\ln x}{1+x}\,\mathrm{d}x - \ln 2 \int\limits_{0}^{1} \dfrac{\ln\left(1+x\right)\ln x}{1+x}\,\mathrm{d}x - \mathfrak{I}_1 + \int\limits_{0}^{1} \dfrac{\ln^2\left(1+x\right)\ln x}{1+x}\,\mathrm{d}x + \phantom{a} \\
&\,+ \color{red}{\int\limits_{0}^{1} \dfrac{1}{1+x}\ln^2\left(\dfrac{2}{1+x}\right)\ln\left(\dfrac{1-x}{1+x}\right)\,\mathrm{d}x} \\
&= \ln 2 \underbrace{\int\limits_{0}^{1} \dfrac{\ln\left(1-x\right)\ln x}{1+x}\,\mathrm{d}x}_{\mathfrak{I}_{1,1}} - \ln 2 \int\limits_{0}^{1} \dfrac{\ln\left(1+x\right)\ln x}{1+x}\,\mathrm{d}x - \mathfrak{I}_1 + \color{red}{2}\int\limits_{0}^{1} \dfrac{\ln^2\left(1+x\right)\ln x}{1+x}\,\mathrm{d}x \\
&= \dfrac{1}{2}\mathfrak{I}_{1,1}\ln 2 - \dfrac{1}{4}\ln 2 \ln^2\left(1+x\right)\ln x\Bigg\vert_{0}^{1} +\dfrac{1}{4}\ln 2 \int\limits_{0}^{1} \dfrac{\ln^2\left(1+x\right)}{x}\,\mathrm{d}x+\dfrac{1}{3}\ln^3\left(1+x\right)\ln x\Bigg\vert_{0}^{1} - \phantom{a} \\
&\ - \dfrac{1}{3}\int\limits_{0}^{1} \dfrac{\ln^3\left(1+x\right)}{x}\,\mathrm{d}x \\
&= \dfrac{1}{2}\mathfrak{I}_{1,1}\ln 2 + \dfrac{1}{4}\ln 2 \int\limits_{1}^{2} \dfrac{\ln^2 x}{x-1}\,\mathrm{d}x - \dfrac{1}{3}\int\limits_{1}^{2} \dfrac{\ln^3 x}{x-1}\,\mathrm{d}x \\
&= \dfrac{1}{2}\mathfrak{I}_{1,1}\ln 2 + \dfrac{1}{4}\ln 2 \int\limits_{1/2}^{1} \dfrac{\ln^2 x}{x\left(1-x\right)}\,\mathrm{d}x + \dfrac{1}{3}\int\limits_{1/2}^{1} \dfrac{\ln^3 x}{x\left(1-x\right)}\,\mathrm{d}x \\
&= \dfrac{1}{2}\mathfrak{I}_{1,1}\ln 2 + \dfrac{1}{4}\ln 2\left(\dfrac{1}{3}\ln^3 2 + \operatorname{Lv}_{2}(1,1)-\operatorname{Lv}_{2}\left(\dfrac{1}{2},1\right)\right) + \dfrac{1}{3}\left(-\dfrac{1}{4}\ln^4 2 + \operatorname{Lv}_{3}(1,1)-\operatorname{Lv}_{3}\left(\dfrac{1}{2},1\right)\right) \\
&= \dfrac{1}{2}\mathfrak{I}_{1,1}\ln 2 + 2\operatorname{Li}_4\left(\dfrac{1}{2}\right) + \dfrac{29}{16}\zeta(3)\ln 2 - \dfrac{1}{45}\pi^4 + \dfrac{1}{12}\ln^4 2 - \dfrac{1}{12}\pi^2\ln^2 2
\end{align*}</span></p>
<blockquote>
<h2><span class="math-container">$\mathfrak{I}_{1,1}:$</span></h2>
</blockquote>
<p>Use the identity
<span class="math-container">$$ xy = \dfrac{1}{2}x^2 + \dfrac{1}{2}y^2 - \dfrac{1}{2}\left(x-y\right)^2$$</span>
to have
<span class="math-container">\begin{align*}
\mathfrak{I}_{1,1} &= \dfrac{1}{2}\int\limits_{0}^{1} \dfrac{\ln^2 x}{1+x}\,\mathrm{d}x + \dfrac{1}{2}\int\limits_{0}^{1} \dfrac{\ln^2 \left(1-x\right)}{1+x}\,\mathrm{d}x - \dfrac{1}{2}\int\limits_{0}^{1} \dfrac{1}{1+x}\ln^2\left(\dfrac{x}{1-x}\right)\,\mathrm{d}x \\
&= \dfrac{1}{2}\operatorname{Lv}_2(1,-1)+\dfrac{1}{2}\int\limits_{0}^{1} \dfrac{\ln^2 x}{2-x}\,\mathrm{d}x-\dfrac{1}{2}\int\limits_{0}^{\infty} \dfrac{\ln^2 x}{\left(1+2x\right)\left(1+x\right)}\,\mathrm{d}x \\
&= \dfrac{1}{2}\operatorname{Lv}_2(1,-1)+\dfrac{1}{4}\operatorname{Lv}_2\left(1,\dfrac{1}{2}\right)-\dfrac{1}{2}\int\limits_{0}^{1} \dfrac{\ln^2 x}{\left(1+2x\right)\left(1+x\right)}\,\mathrm{d}x-\dfrac{1}{2}\int\limits_{0}^{1} \dfrac{\ln^2 x}{\left(2+x\right)\left(1+x\right)}\,\mathrm{d}x \\
&= \dfrac{1}{2}\operatorname{Lv}_2(1,-1)+\dfrac{1}{4}\operatorname{Lv}_2\left(1,\dfrac{1}{2}\right)-\int\limits_{0}^{1} \dfrac{\ln^2 x}{1+2x}\,\mathrm{d}x+\dfrac{1}{2}\int\limits_{0}^{1} \dfrac{\ln^2 x}{2+x}\,\mathrm{d}x \\
&= \dfrac{1}{2}\operatorname{Lv}_2(1,-1)+\dfrac{1}{4}\operatorname{Lv}_2\left(1,\dfrac{1}{2}\right)-\operatorname{Lv}_2(1,-2)+\dfrac{1}{4}\operatorname{Lv}_2\left(1,-\dfrac{1}{2}\right) \\
&= \dfrac{13}{8}\zeta(3)-\dfrac{1}{4}\pi^2\ln 2
\end{align*}</span>
So</p>
<blockquote>
<p><span class="math-container">$$\mathfrak{I}_1 = 2\operatorname{Li}_4\left(\dfrac{1}{2}\right)+\dfrac{21}{8}\zeta(3)\ln 2 - \dfrac{1}{45}\pi^4 + \dfrac{1}{12}\ln^4 2 - \dfrac{5}{24}\pi^2\ln^2 2$$</span></p>
</blockquote>
<hr>
<blockquote>
<h2><span class="math-container">$\mathfrak{I}_{2}:$</span></h2>
</blockquote>
<p>Use the same identity as for <span class="math-container">$\mathfrak{I}_{1,1}$</span></p>
<p><span class="math-container">\begin{align*}
-\mathfrak{I}_2 &= \int\limits_{0}^{1} \dfrac{\ln\left(1-x\right)}{1+x}\left(\int\limits_{0}^{1}\dfrac{\ln\left(1+xt\right)}{t}\,\mathrm{d}t\right)\,\mathrm{d}x \\
&= \dfrac{1}{2}\int\limits_{0}^{1} \dfrac{1}{1+x}\left(\int\limits_{0}^{1}\dfrac{\ln^2\left(1+xt\right)}{t}\,\mathrm{d}t\right)\,\mathrm{d}x + \dfrac{1}{2}\int\limits_{0}^{1}\int\limits_{0}^{1} \dfrac{1}{\left(1+x\right)t}\left(\ln^2\left(1-x\right)-\ln^2\left(\dfrac{1-x}{1+xt}\right)\right)\,\mathrm{d}t\,\mathrm{d}x \\
&= \dfrac{1}{2}\int\limits_{0}^{1} \dfrac{1}{1+x}\left(\int\limits_{0}^{x}\dfrac{\ln^2\left(1+t\right)}{t}\,\mathrm{d}t\right)\,\mathrm{d}x + \phantom{a} \\
&+ \dfrac{1}{2}\int\limits_{0}^{1} \dfrac{1}{t}\left(\int\limits_{0}^{1}\dfrac{\ln^2\left(1-x\right)}{1+x}\,\mathrm{d}x-\int\limits_{0}^{1}\dfrac{1}{1+x}\ln^2\left(\dfrac{1-x}{1+xt}\right)\,\mathrm{d}x\right)\,\mathrm{d}t \\
&= \dfrac{1}{2}\left.\ln\left(1+x\right)\int\limits_{0}^{x}\dfrac{\ln^2\left(1+t\right)}{t}\,\mathrm{d}t\right\vert_{0}^{1}-\dfrac{1}{2}\int\limits_{0}^{1}\dfrac{\ln^3\left(1+x\right)}{x}\,\mathrm{d}x + \phantom{a} \\
&+ \dfrac{1}{2}\int\limits_{0}^{1} \dfrac{1}{t}\left(\int\limits_{0}^{1}\dfrac{\ln^2 x}{2-x}\,\mathrm{d}x-\left(1+t\right)\int\limits_{0}^{1}\dfrac{\ln^2 x}{\left(1+xt\right)\left(2-x\left(1-t\right)\right)}\,\mathrm{d}x\right)\,\mathrm{d}t \\
&= \dfrac{1}{2}\ln 2\left(\dfrac{1}{3}\ln^3 2 + \operatorname{Lv}_2(1,1)-\operatorname{Lv}_2\left(\dfrac{1}{2},1\right)\right)+\dfrac{1}{2}\left(-\dfrac{1}{4}\ln^4 2+\operatorname{Lv}_3(1,1)-\operatorname{Lv}_3\left(\dfrac{1}{2},1\right)\right)+\phantom{a} \\
&+ \dfrac{1}{2}\int\limits_{0}^{1} \dfrac{1}{t}\left(\dfrac{1}{2}\operatorname{Lv}_2\left(1,\dfrac{1}{2}\right)-\int\limits_{0}^{1} \ln^2 x\left(\dfrac{t}{1+xt}+\dfrac{1-t}{2-x\left(1-t\right)}\right)\,\mathrm{d}x\right)\,\mathrm{d}t \\
&= 3\operatorname{Li}_4\left(\dfrac{1}{2}\right)+\dfrac{11}{4}\zeta(3)\ln 2 - \dfrac{1}{30}\pi^4 + \dfrac{1}{8}\ln^4 2 - \dfrac{1}{8}\pi^2\ln^2 2+\phantom{a} \\
&+ \dfrac{1}{2}\underbrace{\int\limits_{0}^{1} \dfrac{1}{t}\left(\dfrac{1}{2}\operatorname{Lv}_2\left(1,\dfrac{1}{2}\right)-t\operatorname{Lv}_2(1,-t)-\dfrac{1-t}{2}\operatorname{Lv}_2\left(1, \dfrac{1-t}{2}\right)\right)\,\mathrm{d}t}_{\mathfrak{I}_{2,1}}
\end{align*}</span>
Note that
<span class="math-container">$$ \operatorname{Lv}_2(1,\alpha) = \dfrac{2}{\alpha}\operatorname{Li}_3(\alpha)$$</span>
So <span class="math-container">$\mathfrak{I}_{2,1}$</span> can be simplified to
<span class="math-container">\begin{align*}
\mathfrak{I}_{2,1} &= \int\limits_{0}^{1} \dfrac{1}{t}\left(2\operatorname{Li}_3\left(\dfrac{1}{2}\right)+2\operatorname{Li}_3(-t)-2\operatorname{Li}_3\left(\dfrac{1-t}{2}\right)\right)\,\mathrm{d}t \\
&= 2\int\limits_{0}^{1} \dfrac{\operatorname{Li}_3(-t)}{t}\,\mathrm{d}t+2\int\limits_{0}^{1} \dfrac{1}{t}\left(\operatorname{Li}_3\left(\dfrac{1}{2}\right)-\operatorname{Li}_3\left(\dfrac{1-t}{2}\right)\right)\,\mathrm{d}t \\
&= 2\operatorname{Li}_4(-1)+2\sum\limits_{n=1}^{\infty}\dfrac{1}{n^32^n}\int\limits_{0}^{1} \dfrac{1-t^n}{1-t}\,\mathrm{d}t \\
&= 2\operatorname{Li}_4(-1)+2\sum\limits_{n=1}^{\infty}\dfrac{\mathcal{H}_n}{n^32^n} \\
&= 2\operatorname{Li}_4\left(\dfrac{1}{2}\right) - \dfrac{1}{4}\zeta(3)\ln 2 - \dfrac{1}{60}\pi^4 + \dfrac{1}{12}\ln^4 2
\end{align*}</span>
A closed form for the series above can be found <a href="https://math.stackexchange.com/questions/909228/infinite-series-sum-n-1-infty-frach-nn32n?noredirect=1&lq=1">here</a>. So</p>
<blockquote>
<p><span class="math-container">$$\mathfrak{I}_2 = -4\operatorname{Li}_4\left(\dfrac{1}{2}\right)-\dfrac{21}{8}\zeta(3)\ln 2 + \dfrac{1}{24}\pi^4-\dfrac{1}{6}\ln^4 2 + \dfrac{1}{8}\pi^2\ln^2 2$$</span></p>
</blockquote>
<hr>
<blockquote>
<h2><span class="math-container">$\mathfrak{I}_3:$</span></h2>
</blockquote>
<p><span class="math-container">\begin{align*}
\mathfrak{I}_3 &= \ln\left(1+x\right)\Big(\ln x\ln\left(1+x\right) + \operatorname{Li}_2(-x)\Big)\Bigg\vert_{0}^{1} -
\int\limits_{0}^{1}\dfrac{\ln\left(1+x\right)\ln x}{1+x}\,\mathrm{d}x \\
&= \ln 2\operatorname{Li}_2(-1) - \dfrac{1}{2}\ln^2\left(1+x\right)\ln x\Bigg\vert_{0}^{1}+\dfrac{1}{2}\int\limits_{0}^{1} \dfrac{\ln^2\left(1+x\right)}{x}\,\mathrm{d}x \\
&= \ln 2\operatorname{Li}_2(-1) + \dfrac{1}{2}\left(\dfrac{1}{3}\ln^3 2 + \operatorname{Lv}_2(1,1) - \operatorname{Lv}_2\left(\dfrac{1}{2},1\right)\right) \\
&= \dfrac{1}{8}\zeta(3)-\dfrac{1}{12}\pi^2\ln 2
\end{align*}</span>
So</p>
<blockquote>
<p><span class="math-container">$$\mathfrak{I}_3 = \dfrac{1}{8}\zeta(3)-\dfrac{1}{12}\pi^2\ln 2$$</span></p>
</blockquote>
<hr>
<p>Collecting all parts together we have that
<span class="math-container">\begin{align*}
\mathfrak{I} &= \operatorname{Li}_2(1)\operatorname{Li}_2(-1)+\mathfrak{I}_1+\mathfrak{I}_2-\mathfrak{I}_3\ln 2 \\
&= -2\operatorname{Li}_4\left(\dfrac{1}{2}\right)-\dfrac{1}{8}\zeta(3)\ln 2 + \dfrac{1}{180}\pi^4 - \dfrac{1}{12}\ln^4 2
\end{align*}</span></p>
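<p>This final closed form can be checked numerically with stdlib Python only (a sketch; <code>li</code> and <code>li2</code> are hypothetical helper names, and Apéry's constant is hard-coded).</p>

```python
import math

def li(s, z, terms=200):
    # polylogarithm Li_s(z) by its series; adequate for |z| <= 1/2
    return sum(z**k / k**s for k in range(1, terms + 1))

def li2(z):
    # dilogarithm on (0, 1], via Euler's reflection formula for z > 1/2
    if z >= 1.0:
        return math.pi**2 / 6
    if z <= 0.5:
        return li(2, z)
    return math.pi**2 / 6 - math.log(z) * math.log(1 - z) - li(2, 1 - z)

# substitute x = (1 + e^{-u})/2, so ln(2x - 1) = -u and
#   I = -\int_0^infty u e^{-u} Li2((1 + e^{-u})/2) / (1 + e^{-u}) du,
# a smooth, rapidly decaying integrand suited to composite Simpson
def integrand(u):
    t = math.exp(-u)
    return -u * t * li2((1 + t) / 2) / (1 + t)

m, U = 4000, 40.0
h = U / m
I = integrand(0.0) + integrand(U)
for k in range(1, m):
    I += (4 if k % 2 else 2) * integrand(k * h)
I *= h / 3

zeta3 = 1.2020569031595943          # Apery's constant, hard-coded
closed = (-2 * li(4, 0.5) - zeta3 * math.log(2) / 8
          + math.pi**4 / 180 - math.log(2)**4 / 12)
print(abs(I - closed) < 1e-4)  # True; both are about -0.61718
```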
|
2,409,918 | <p>I need your help in evaluating the following integral in <strong>closed form</strong>. <span class="math-container">$$\displaystyle\int\limits_{0.5}^{1}
\frac{\mathrm{Li}_{2}\left(x\right)\ln\left(2x - 1\right)}{x}\,\mathrm{d}x$$</span></p>
<p>Since the function is singular at <span class="math-container">$x = 0.5$</span>, we are looking for Principal Value. The integral is finite and was evaluated numerically.</p>
<p>I expect the closed form result to contain <span class="math-container">$\,\mathrm{Li}_{3}$</span> and <span class="math-container">$\,\mathrm{Li}_{2}$</span>.</p>
<p>Thanks</p>
| Ali Shadhar | 432,085 | <p>Start by subbing <span class="math-container">$ 2x-1\to x$</span>; then integrating by parts, we have </p>
<p><span class="math-container">$$I=-\frac54\zeta(4)+\int_0^1\frac{\text{Li}_2(-x)}{1+x}\ln\left(\frac{1-x}{2}\right)\ dx+\int_0^1\frac{\ln(x)\ln(1+x)}{1+x}\ln\left(\frac{1-x}{2}\right)\ dx$$</span></p>
<p><span class="math-container">$$=-\frac54\zeta(4)+\int_0^1\frac{\text{Li}_2(-x)\ln(1-x)}{1+x}\ dx+\int_0^1\frac{\ln(x)\ln(1+x)\ln(1-x)}{1+x}\ dx$$</span></p>
<p><span class="math-container">$$-\ln(2)\int_0^1\frac{\ln(x)}{1+x}[\text{Li}_2(-x)+\ln(x)\ln(1+x)]\ dx$$</span></p>
<p><span class="math-container">$$=-\frac54\zeta(4)+A+B-C$$</span></p>
<p>For <span class="math-container">$A$</span>, use </p>
<p><span class="math-container">$$\frac{\text{Li}_2(-x)}{1+x}=\sum_{n=1}^\infty (-1)^{n-1}H_{n-1}^{(2)}x^{n-1}\tag1$$</span></p>
<p><span class="math-container">$$A=\sum_{n=1}^\infty (-1)^{n-1}H_{n-1}^{(2)}\int_0^1x^{n-1}\ln(1-x) \ dx=\sum_{n=1}^\infty (-1)^{n-1}H_{n-1}^{(2)}\left(-\frac{H_n}{n}\right)$$</span></p>
<p><span class="math-container">$$=\sum_{n=1}^\infty \frac{(-1)^nH_{n}^{(2)}H_n}{n}-\sum_{n=1}^\infty \frac{(-1)^nH_n}{n^3}$$</span></p>
<p>For B, use </p>
<p><span class="math-container">$$\frac{\ln(1+x)}{1+x}=\sum_{n=1}^\infty (-1)^{n}H_{n-1} x^{n-1}\tag2$$</span></p>
<p><span class="math-container">$$B=\sum_{n=1}^\infty (-1)^{n}H_{n-1} \int_0^1x^{n-1}\ln(x)\ln(1-x)\ dx=\sum_{n=1}^\infty (-1)^{n}H_{n-1}\left(\frac{H_n}{n^2}+\frac{H_n^{(2)}}{n}-\frac{\zeta(2)}{n}\right)$$</span></p>
<p><span class="math-container">$$=\sum_{n=1}^\infty \frac{(-1)^{n}H_{n-1}H_n}{n^2}+\sum_{n=1}^\infty \frac{(-1)^{n}H_{n-1}H_n^{(2)}}{n}-\zeta(2)\sum_{n=1}^\infty \frac{(-1)^{n}H_{n-1}}{n}$$</span></p>
<p><span class="math-container">$$=\sum_{n=1}^\infty \frac{(-1)^{n}H_n^2}{n^2}-\sum_{n=1}^\infty \frac{(-1)^{n}H_n}{n^3}+\sum_{n=1}^\infty \frac{(-1)^{n}H_{n}H_n^{(2)}}{n}-\sum_{n=1}^\infty \frac{(-1)^{n}H_n^{(2)}}{n^2}-\zeta(2)\sum_{n=1}^\infty \frac{(-1)^{n}H_{n-1}}{n}$$</span></p>
<p>Combining the results of A and B, we get</p>
<p><span class="math-container">$$\small{A+B=\sum_{n=1}^\infty \frac{(-1)^{n}H_n^2}{n^2}-2\sum_{n=1}^\infty \frac{(-1)^{n}H_n}{n^3}+2\sum_{n=1}^\infty \frac{(-1)^{n}H_{n}H_n^{(2)}}{n}-\sum_{n=1}^\infty \frac{(-1)^{n}H_n^{(2)}}{n^2}-\zeta(2)\sum_{n=1}^\infty \frac{(-1)^{n}H_{n-1}}{n}}$$</span></p>
<p>The last sum can be easily obtained by integrating both sides of <span class="math-container">$(2)$</span>, <span class="math-container">$\Longrightarrow \sum_{n=1}^\infty \frac{(-1)^{n}H_{n-1}}{n}=\frac12\ln^2(2)$</span></p>
<p>For the rest of the sums, they are already calculated:</p>
<p><span class="math-container">$$\sum_{n=1}^\infty \frac{(-1)^{n}H_n}{n^3}=2\operatorname{Li_4}\left(\frac12\right)-\frac{11}4\zeta(4)+\frac74\ln2\zeta(3)-\frac12\ln^22\zeta(2)+\frac{1}{12}\ln^42\tag3$$</span></p>
<p><span class="math-container">$$\sum_{n=1}^{\infty}\frac{(-1)^nH_n^{(2)}}{n^2}=-4\operatorname{Li}_4\left(\frac12\right)+\frac{51}{16}\zeta(4)-\frac72\ln2\zeta(3)+\ln^22\zeta(2)-\frac16\ln^42\tag4$$</span></p>
<p><span class="math-container">$$\sum_{n=1}^{\infty}\frac{(-1)^nH_n^2}{n^2}=2\operatorname{Li}_4\left(\frac12\right)-\frac{41}{16}\zeta(4)+\frac74\ln2\zeta(3)-\frac12\ln^22\zeta(2)+\frac1{12}\ln^42\tag5$$</span></p>
<p>where <span class="math-container">$(3)$</span> can be found <a href="https://math.stackexchange.com/q/3219475">here</a> and <span class="math-container">$(4)$</span> and <span class="math-container">$(5)$</span> can be found <a href="https://math.stackexchange.com/questions/3251069/nice-two-related-sums-sum-n-1-infty-frac-1nh-n2n2-and-sum">here</a>.</p>
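<p>Sum <span class="math-container">$(3)$</span> is easy to confirm numerically with a partial sum (a sketch; the zeta values are hard-coded or computed from <span class="math-container">$\pi$</span>, and the variable names are mine).</p>

```python
import math

zeta2 = math.pi**2 / 6
zeta3 = 1.2020569031595943          # Apery's constant, hard-coded
zeta4 = math.pi**4 / 90
ln2 = math.log(2)
li4_half = sum(1 / (n**4 * 2**n) for n in range(1, 60))

# partial sum of sum_{n>=1} (-1)^n H_n / n^3 (alternating, tail is tiny)
H, S = 0.0, 0.0
for n in range(1, 5001):
    H += 1 / n
    S += (-1)**n * H / n**3

rhs = (2 * li4_half - 11 / 4 * zeta4 + 7 / 4 * ln2 * zeta3
       - ln2**2 * zeta2 / 2 + ln2**4 / 12)
print(abs(S - rhs) < 1e-8)  # True
```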
<p>Integral <span class="math-container">$C$</span> was nicely calculated by @Knas (check integral <span class="math-container">$\mathfrak{I}_3$</span> in his solution above), where he used </p>
<p><span class="math-container">$$\text{Li}_2(x)+\ln(x)\ln(1+x)=\int_0^x\frac{\ln(t)}{1+t}\ dt$$</span></p>
<p><span class="math-container">$$C= \dfrac{1}{8}\zeta(3)-\dfrac{1}{2}\ln(2)\zeta(2)$$</span></p>
<p>Combining all the results of <span class="math-container">$A$</span>, <span class="math-container">$B$</span> and <span class="math-container">$C$</span>, we obtain</p>
<p><span class="math-container">$$I=-2\operatorname{Li}_4\left(\dfrac{1}{2}\right)-\dfrac{1}{8}\ln(2)\zeta(3) + \dfrac{1}{2}\zeta(4)- \dfrac{1}{12}\ln^4(2)$$</span></p>
|
929,532 | <p>Okay so I want some hints (not solutions) on figuring out whether these sets are open, closed or neither.</p>
<p>$A = \{ (x,y,z) \in \mathbb{R}^3\ \ | \ \ |x^2+y^2+z^2|\lt2 \ and \ |z| \lt 1 \} \\ B = \{(x,y) \in \mathbb{R}^2 \ | \ y=2x^2\}$</p>
<p>Okay, so since this question is the last part of a question where I proved that if the function $f: X \to Y$ is continuous then $f^{-1}(B)$ is open whenever $B \subseteq Y$ is open, I assume I am supposed to express $A$ and $B$ via suitable functions, show the relevant image sets are closed/open, and then use this result. But I am not sure how to define the functions. I have a hint stating that I should use the fact that polynomials are continuous mappings and the fact that any norm $\|\cdot\| : V \to \mathbb{R} $ is continuous. So for $A$, should I consider a norm $\|\cdot\| $ induced by $|\cdot|$? Then the image of $A$ would be $(-2, 2)$, and since this set is open, $A$ (its preimage) would be open? And for $B$ I defined $f(x)=2x^2$, but that didn't sound right... I am confused about how to take the first step, so any hints on how I should approach this question? </p>
| Mike J | 271,776 | <p>We set $\Omega=\mathbb{N}$, $\mathcal F=2^{\mathbb{N}}$, and $\mu (A)=0$ if $|A|<\infty$; $\mu (A)=\infty$ if $|A|=\infty$.</p>
<p>We can see that $\mu$ is finitely additive. Given a finite collection of disjoint $A_k$, $1\leq k\leq n$: if every $|A_k|<\infty$, then $|\bigcup_{k=1}^n A_k|<\infty$, so $\mu(\bigcup_{k=1}^n A_k)=0=\sum_{k=1}^n \mu(A_k)$; if at least one $A_k$ has infinitely many elements, so that $\mu (A_k)=\infty$, then $\bigcup_{k=1}^n A_k$ also has infinitely many elements, and $\mu(\bigcup_{k=1}^n A_k)=\infty=\sum_{k=1}^n \mu(A_k)$.</p>
<p>On the other hand, $\mu$ is not $\sigma$-additive. Let $A_n=\{n\}$ for every integer $n\ge1$; then $\mu(A_n)=0$ and $\mu(\bigcup_{n=1}^{\infty} A_n)=\infty \ne 0=\sum_{n=1}^{\infty}\mu(A_n)$.</p>
|
744,952 | <p>Is it true that a map between ${\bf T1}$ topological spaces $f:X \to Y$ is surjective iff the induced geometric morphism $f:Sh(Y) \to Sh(X)$ is a surjection (i.e. its inverse image part $f^*$ is faithful)?</p>
<p>In "Sheaves in Geometry and Logic" a proof is given, but the "if" part leaves me a bit unsatisfied (and, btw, the "only if" part holds for any topological space).
Thanks in advance!</p>
| Lutz Lehmann | 115,115 | <p>I'm not really sure that there is a simple solution for that. In some sense, this limit is equivalent to the theorem about the inverse Fourier transform, so there will be no simple solutions. But use Parseval/Plancherel (I never remember which one is about the Fourier series).</p>
<p>Or, proceeding in a more elementary way: consider for simplicity the symmetric extension of $f$, so that the integral is over $(-∞,∞)$, resulting in twice the required value. Then, using
$$
\frac{\sin(ax)}{x}=\frac{e^{iax}-e^{-iax}}{2ix}=\frac12\int_{-a}^a e^{ixv}dv
$$
one gets
$$
\int_{-∞}^∞ \frac{\sin(ax)}{x}f(x)\,dx = \frac12\int_{-∞}^∞\int_{-a}^a e^{ixv} f(x)\,dv\,dx
$$
Now we only need to apply Fubini to exchange the order of integration, so that
$$
...=\frac12\int_{-a}^a \sqrt{2\pi} \hat f(v) \,dv
$$
and by dominated convergence, this converges to
$$
\frac{\sqrt{2\pi}}2\int_{-∞}^∞\hat f(v) \,dv=\pi\,\widehat{\widehat{f}}(0)=\pi\,f(0)
$$
There still are some assumptions that need to be added so that the used Fourier integrals, especially the last one, actually exist. $L^1$ and at least quadratic decay at infinity should be sufficient. The test functions, tempered or with compact support, all satisfy these conditions.</p>
|
744,952 | <p>Is it true that a map between ${\bf T1}$ topological spaces $f:X \to Y$ is surjective iff the induced geometric morphism $f:Sh(Y) \to Sh(X)$ is a surjection (i.e. its inverse image part $f^*$ is faithful)?</p>
<p>In "Sheaves in Geometry and Logic" a proof is given, but the the "if" part leaves me a bit unsatisfied (and, btw, the "only if" part holds for any topological space).
Thanks in advance!</p>
| Thorben | 135,025 | <p>I'm really not sure if this works, but I thought I'd try.</p>
<p>As you mentioned, the claim $$\lim_{a\rightarrow\infty}\int_0^{\infty}f(x)\frac{\sin(ax)}{x}\,dx=\frac{\pi}{2}f(0)$$
follows after your substitution, since $f(x)$ is continuous at $0$, once we can move the limit inside.</p>
<p>So I started to construct $g(x)\in L^1(\mathbb R_+)$ such that $\left|f(x)\frac{\sin(ax)}{x}\right|\leq g(x)$.</p>
<p>For $x\in[0,\infty)$ we have</p>
<p>$$\left|f(x)\frac{\sin(ax)}{x}\right|\leq\left|\frac{f(x)}{x}\right|$$
and Hölder's inequality yields</p>
<p>$$\int_1^{\infty}\left|\frac{f(x)}{x}\right|dx\leq\left(\int_1^{\infty}|f(x)|^2dx\right)^{1/2}\left(\int_1^{\infty}\frac{dx}{x^2}\right)^{1/2}=\left(\int_1^{\infty}|f(x)|^2dx\right)^{1/2}<\infty $$
which follows from the assumption that $f(x)$ was square integrable.</p>
<p>Now for $x\in[0,1]$ we have two cases,</p>
<p>Let $A=\{x\in[0,1]:|f(x)\frac{\sin(ax)}{x}|<1\}$. Clearly for all $x\in A$ we have
$$ \left|f(x)\frac{\sin(ax)}{x}\right|\leq1$$
which is integrable since $\lambda(A)\leq\lambda([0,1])=1$.</p>
<p>Let $B=\{x\in[0,1]:|f(x)\frac{\sin(ax)}{x}|\geq1 \}$. For $x\in B$ we have
$$\left|f(x)\frac{\sin(ax)}{x}\right|\leq\left|f(x)\frac{x}{x}\right|\leq|f(x)|\leq|f(x)|^2$$
which is integrable.</p>
<p>So finally we found a bound in $L^1$ for all $x\in\mathbb R_+$, and you can move the limit inside to obtain what was to be proved.</p>
<p>I hope I didn't miss anything... do not hesitate to correct me.</p>
|
94,501 | <p>The well-known Vandermonde convolution gives us the closed form <span class="math-container">$$\sum_{k=0}^n {r\choose k}{s\choose n-k} = {r+s \choose n}$$</span>
For the case <span class="math-container">$r=s$</span>, it is also known that <span class="math-container">$$\sum_{k=0}^n (-1)^k {r \choose k} {r \choose n-k} = (-1)^{n/2} {r \choose n/2} \quad [n \mathrm{\ is\ even}]$$</span>
When <span class="math-container">$r\not= s$</span>, is there a known closed form for the alternating Vandermonde sum? <span class="math-container">$$\sum_{k=0}^n (-1)^k {r \choose k} {s \choose n-k}$$</span></p>
| anon | 11,763 | <p>$$(1-x)^r(1+x)^s=\left(\sum_{g=0}^r (-x)^g{r\choose g}\right)\left(\sum_{h=0}^sx^h{s\choose h}\right)$$</p>
<p>$$\implies \sum_{k=0}^n(-1)^k{r\choose k}{s\choose n-k}=[x^n](1-x)^r(1+x)^s.$$</p>
<p>How closed would you consider this? I'm not sure if it gets simpler, but obviously it tells us</p>
<p>$$\sum_{k=0}^n(-1)^k{r\choose k}{r\choose n-k}=\begin{cases}0& n\text{ odd}\\ \\ (-1)^{n/2}{r\choose n/2}& n\text{ even}\end{cases}.$$</p>
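<p>For nonnegative integers $r,s$, the coefficient identity is easy to spot-check numerically (a sketch; the function names are mine).</p>

```python
from math import comb

def alt_vandermonde(r, s, n):
    # left-hand side: sum_k (-1)^k C(r,k) C(s,n-k); comb returns 0 when k > r
    return sum((-1)**k * comb(r, k) * comb(s, n - k) for k in range(n + 1))

def coeff(r, s, n):
    # coefficient of x^n in (1-x)^r (1+x)^s, expanded directly
    total = 0
    for g in range(min(r, n) + 1):
        h = n - g
        if 0 <= h <= s:
            total += (-1)**g * comb(r, g) * comb(s, h)
    return total

assert all(alt_vandermonde(r, s, n) == coeff(r, s, n)
           for r in range(8) for s in range(8) for n in range(10))

# r = s: the sum vanishes for odd n and equals (-1)^{n/2} C(r, n/2) for even n
assert alt_vandermonde(5, 5, 3) == 0
assert alt_vandermonde(5, 5, 2) == -comb(5, 1)   # -5
assert alt_vandermonde(5, 5, 4) == comb(5, 2)    # +10
print("identity verified")
```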
|
879,886 | <p>If one number is thrice the other and their sum is $16$, find the numbers.</p>
<p>I tried: let the first number be $x$ and the second number be $y$. According to the question,</p>
<p>$$
\begin{align}
x&=3y &\iff x-3y=0 &&(1)\\
x&=16-3y&&&(2)
\end{align}
$$</p>
| Mikasa | 8,581 | <p>I assume you have $$x=\color{red}{3y}, ~~x+y=16$$ Then $3(x+\color{red}{y})=3\times 16=48$ and so $3x+\color{red}{3y}=48$ and so $3x+x=48$ and so $4x=48$...</p>
|
879,886 | <p>If one number is thrice the other and their sum is $16$, find the numbers.</p>
<p>I tried: let the first number be $x$ and the second number be $y$. According to the question,</p>
<p>$$
\begin{align}
x&=3y &\iff x-3y=0 &&(1)\\
x&=16-3y&&&(2)
\end{align}
$$</p>
| Community | -1 | <p>The problem statement says
$$x+3x=16,$$
hence
$$x=4,\\3x=12.$$</p>
|
879,886 | <p>If one number is thrice the other and their sum is $16$, find the numbers.</p>
<p>I tried: let the first number be $x$ and the second number be $y$. According to the question,</p>
<p>$$
\begin{align}
x&=3y &\iff x-3y=0 &&(1)\\
x&=16-3y&&&(2)
\end{align}
$$</p>
| Rachit Gupta | 558,628 | <p>Let the first number be <span class="math-container">$x$</span> and the second number be <span class="math-container">$y$</span>. According to the question,</p>
<p><span class="math-container">$$\tag{1} x+y=16$$</span>
<span class="math-container">$$\tag{2} x=3y \iff x-3y=0$$</span>
Multiply equation <span class="math-container">$(1)$</span> by <span class="math-container">$3$</span> and add it to equation <span class="math-container">$(2)$</span>:</p>
<p><span class="math-container">$$3x+3y=48$$</span>
<span class="math-container">$$x-3y=0$$</span>
<span class="math-container">$$\tag{3} 4x=48 \implies x=12$$</span>
Putting <span class="math-container">$(3)$</span> into equation <span class="math-container">$(1)$</span>:
<span class="math-container">$$12+y=16 \implies y=4$$</span></p>
|
879,886 | <p>If one number is thrice the other and their sum is $16$, find the numbers.</p>
<p>I tried: let the first number be $x$ and the second number be $y$. According to the question,</p>
<p>$$
\begin{align}
x&=3y &\iff x-3y=0 &&(1)\\
x&=16-3y&&&(2)
\end{align}
$$</p>
| user637720 | 637,720 | <p>Let the 1st number be <span class="math-container">$x$</span> and the 2nd number be <span class="math-container">$3x$</span>. Since <span class="math-container">$x + 3x = 16$</span>, <span class="math-container">$4x = 16$</span>, so <span class="math-container">$x=4$</span>. Therefore, 1st number is <span class="math-container">$4$</span> and the other is <span class="math-container">$12$</span>. </p>
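<p>The substitution all of these answers use can be sanity-checked in a couple of lines (a trivial sketch).</p>

```python
# Solve x + y = 16 with x = 3y by substitution: 3y + y = 16, so y = 4, x = 12.
y = 16 / (3 + 1)
x = 3 * y
assert x + y == 16 and x == 3 * y
print(x, y)  # 12.0 4.0
```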
|
40,920 | <p>We have functions $f_n\in L^1$ such that $\int f_ng$ has a limit for every $g\in L^\infty$. Does there exist a function $f\in L^1$ such that the limit equals $\int fg$? I think this is not true in general (really? - why?), then can this be true if we also know that $f_n$ belong to a certain subspace of $L^1$?</p>
| Nate Eldredge | 822 | <p>Perhaps surprisingly, the answer is yes.</p>
<p>More generally, given any Banach space $X$, a sequence $\{x_n\} \subset X$ is said to be <strong>weakly Cauchy</strong> if, for every $\ell \in X^*$, the sequence $\{\ell(x_n)\} \subset \mathbb{R}$ (or $\mathbb{C}$) is Cauchy. If every weakly Cauchy sequence is weakly convergent, $X$ is said to be <strong>weakly sequentially complete</strong>.</p>
<p>Every reflexive Banach space is weakly sequentially complete (a nice exercise with the uniform boundedness principle). $L^1$ is not reflexive, but it turns out to be weakly sequentially complete anyway. This theorem can be found in P. Wojtaszczyk, <em>Banach spaces for analysts</em>, as Corollary 14 on page 140. It works for $L^1$ over an arbitrary measure space.</p>
|
1,885,068 | <p>Prove $$\int_0^1 \frac{x-1}{(x+1)\log{x}} \text{d}x = \log{\frac{\pi}{2}}$$</p>
<p>Tried contouring but couldn't get anywhere with a keyhole contour.</p>
<p>Geometric Series Expansion does not look very promising either.</p>
| Felix Marin | 85,343 | <p>$\newcommand{\angles}[1]{\left\langle\,{#1}\,\right\rangle}
\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,}
\newcommand{\half}{{1 \over 2}}
\newcommand{\ic}{\mathrm{i}}
\newcommand{\iff}{\Longleftrightarrow}
\newcommand{\imp}{\Longrightarrow}
\newcommand{\Li}[1]{\,\mathrm{Li}_{#1}}
\newcommand{\mc}[1]{\,\mathcal{#1}}
\newcommand{\mrm}[1]{\,\mathrm{#1}}
\newcommand{\ol}[1]{\overline{#1}}
\newcommand{\pars}[1]{\left(\,{#1}\,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,}
\newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}}
\newcommand{\ul}[1]{\underline{#1}}
\newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$
\begin{align}
\color{#f00}{\int_{0}^{1}{x - 1 \over \pars{x + 1}\ln\pars{x}}\,\dd x} & =
\int_{0}^{1}{1 \over 1 + x}\
\overbrace{\int_{0}^{1}x^{t}\,\dd t}^{\ds{x - 1 \over \ln\pars{x}}}\
\,\dd x =
\int_{0}^{1}\int_{0}^{1}{x^{t} \over 1 + x}\,\dd x\,\dd t
\\[5mm] & =
\int_{0}^{1}\int_{0}^{1}{x^{t} - x^{t + 1} \over 1 - x^{2}}\,\dd x\,\dd t\,\,\,
\stackrel{x\ \mapsto\ x^{1/2}}{=}\,\,\,
\half\int_{0}^{1}
\int_{0}^{1}{x^{t/2 - 1/2}\,\,\, -\,\,\, x^{t/2} \over 1 - x}
\,\dd x\,\dd t
\\[5mm] & =
\half\int_{0}^{1}\bracks{\int_{0}^{1}{1 - x^{t/2} \over 1 - x}\,\,\,\dd x -
\int_{0}^{1}{1 - x^{t/2 - 1/2} \over 1 - x}\,\,\,\dd x}\,\dd t
\\[5mm] & =
\half\int_{0}^{1}\bracks{\Psi\pars{{t \over 2} + 1} -
\Psi\pars{{t \over 2} + \half}}\,\dd t
\\[1mm] & \pars{~\Psi:\ Digamma\ Function~}
\end{align}
with the $\ds{\Psi}$ <em>integral representation</em>
$\ds{\left.\Psi\pars{z} + \gamma = \int_{0}^{1}{1 - t^{z - 1} \over 1 - t}
\,\dd t\,\right\vert_{\ \Re\pars{z}\ >\ 0}.\quad}$ $\ds{\gamma}$ is the
<em>Euler-Mascheroni Constant</em>.
<hr>
Since $\ds{\Psi\pars{z} = \totald{\ln\pars{\Gamma\pars{z}}}{z}}$:
\begin{align}
\color{#f00}{\int_{0}^{1}{x - 1 \over \pars{x + 1}\ln\pars{x}}\,\dd x} & =
\left.\ln\pars{\Gamma\pars{t/2 + 1} \over
\Gamma\pars{t/2 + 1/2}}\right\vert_{\ 0}^{\ 1}\qquad
\pars{~\Gamma:\ Gamma\ Function~}
\\[5mm] & =
\ln\pars{{\Gamma\pars{3/2} \over \Gamma\pars{1}}\,
{\Gamma\pars{1/2} \over \Gamma\pars{1}}} =
\ln\pars{\half\,\Gamma^{\, 2}\pars{\half}} = \color{#f00}{\ln\pars{\pi \over 2}}
\end{align}</p>
<blockquote>
<p>Above we used $\ds{\Gamma\pars{1} = 1}$, the $\ds{\Gamma}$-<em>Recursion Formula</em>
$\ds{\Gamma\pars{z + 1} = z\,\Gamma\pars{z}}$ and
$\ds{\Gamma\pars{\half} = \root{\pi}}$.</p>
</blockquote>
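<p>As a numerical cross-check (a sketch not in the original answer; the quadrature choices are mine), the integral indeed evaluates to $\ln\pars{\pi/2}\approx0.45158$. Both endpoint singularities are removable, with limits filled in by hand.</p>

```python
import math

def f(x):
    # integrand (x-1)/((x+1) ln x); both endpoints are removable singularities
    if x == 0.0:
        return 0.0            # limit as x -> 0+
    if x == 1.0:
        return 0.5            # limit as x -> 1-, since (x-1)/ln(x) -> 1
    return (x - 1) / ((x + 1) * math.log(x))

# composite Simpson on [0, 1]
n = 20000
h = 1.0 / n
s = f(0.0) + f(1.0) + sum((4 if k % 2 else 2) * f(k * h) for k in range(1, n))
s *= h / 3
print(abs(s - math.log(math.pi / 2)) < 1e-3)  # True
```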
|
3,494,758 | <p>If the eigenvalues of an <span class="math-container">$n\times n$</span> matrix are not all distinct, does that imply that its eigenvectors are linearly dependent, and hence that it is not diagonalizable?</p>
| pre-kidney | 34,662 | <p>The graph of a <span class="math-container">$C^k$</span> function <span class="math-container">$f\colon \mathbb R\to\mathbb R$</span> is an example of a <span class="math-container">$C^k$</span> manifold. For instance, consider the graphs of the functions <span class="math-container">$|x|\cdot x^k$</span>.</p>
|
3,494,758 | <p>If the eigenvalues of an <span class="math-container">$n\times n$</span> matrix are not all distinct, does that imply that its eigenvectors are linearly dependent, and hence that it is not diagonalizable?</p>
| Eric Wofsey | 86,856 | <p>Well, you can take any smooth manifold, and just consider it as a <span class="math-container">$C^k$</span> manifold for any <span class="math-container">$k<\infty$</span>. (If you define a manifold in terms of a maximal atlas, this means you enlarge the atlas to contain all charts that are <span class="math-container">$C^k$</span>-compatible with your smooth atlas, instead of just smoothly compatible charts.) This is the only way to get an example for <span class="math-container">$k>0$</span>: it is a (nontrivial) theorem that every <span class="math-container">$C^k$</span> manifold comes from a <span class="math-container">$C^\infty$</span> manifold (which is unique up to <span class="math-container">$C^\infty$</span> diffeomorphism) in this way. For <span class="math-container">$k=0$</span>, there exist <span class="math-container">$C^0$</span> (i.e., topological) manifolds that do not come from smooth manifolds in any dimension greater than <span class="math-container">$3$</span>, but they are quite difficult to construct.</p>
<p>That said, there are certainly easy examples of <span class="math-container">$C^k$</span> manifolds which are not <span class="math-container">$C^\infty$</span> manifolds in any particularly "natural" way. For instance, let <span class="math-container">$U\subseteq \mathbb{R}^m$</span> and take the image of an embedding <span class="math-container">$f:U\to\mathbb{R}^n$</span> which is <span class="math-container">$C^k$</span> but not <span class="math-container">$C^{k+1}$</span>. Very concretely, in the case <span class="math-container">$m=1$</span>, for instance, you can just take a curve defined by <span class="math-container">$n$</span> real-valued functions on an interval such that there is no point where their derivatives all vanish (so you get an embedding) but one of the functions is only <span class="math-container">$C^k$</span>. Of course the image of such an embedding can be given a smooth structure by identifying it with <span class="math-container">$U$</span>, but typically you will be interested in it as a <em>submanifold</em> of <span class="math-container">$\mathbb{R}^n$</span> and it is only <span class="math-container">$C^k$</span> as a submanifold.</p>
|