| qid | question | author | author_id | answer |
|---|---|---|---|---|
2,146,929 | <p>Let $f:S^n \to S^n$ be a homeomorphism. I know the result that a rigid motion in $\mathbb R^{n+1}$ is always <a href="https://math.stackexchange.com/a/866471/185631">linear</a>, but can we get more information from the assumption that $f:S^n \to S^n$ is a homeomorphism?</p>
| Peter Franek | 62,009 | <p>The question is hard to understand in its current form, but a few remarks:</p>
<ol>
<li>Not every homeomorphism of the sphere is linear.</li>
<li>Every homeomorphism of the sphere extends to a homeomorphism of $\Bbb R^{n+1}$ (see Tsemo Aristide's answer).</li>
<li>If you instead insist on smoothness and on extending diffeomorphisms, the problem is far more complicated and depends very much on the dimension $n$.</li>
</ol>
|
101,953 | <p>I've been asked (by a person, not as homework) how to compute the following limit:</p>
<p>$$ \lim_{x \to 10^-} \frac{[x^3] - x^3}{[x] - x}$$</p>
<p>where $[\cdot]$ is used to denote the floor function:</p>
<p>$$ [x] := \begin{cases} x && x \in \mathbb{Z} \\ 
\text{largest integer smaller than }x && \text{otherwise} \end{cases}$$</p>
<p>My first thought was to sandwich this but using $x^3 - 1 \leq [x^3] \leq x^3$ to get $-1 \leq [x^3] - x^3 \leq 0$ leaves me with </p>
<p>$$ 0 \leq \lim_{x \to 10^-} \frac{[x^3] - x^3}{[x] - x} \leq \lim_{x \to 10^-} \frac{1}{x - [x]}$$ </p>
<p>Which doesn't seem to lead anywhere. What's the right way to compute this limit? Thanks for your help.</p>
| Gottfried Helms | 1,714 | <p>Differently from Davide's approach, I substitute $\small x=N-\delta $ where $\small \delta \to 0$. I also change the sign in numerator and denominator so as to subtract the smaller value from the larger: </p>
<p>$\qquad \small { (N-\delta)^3 - [(N-\delta)^3 ] \over (N-\delta) - [N-\delta]} $ </p>
<p>which for small enough $\delta$ and <em>N=10</em> comes to </p>
<p>$\qquad \small { (10- \delta)^3 - [1000- \epsilon ] \over (10- \delta) - 9} =
{ (1000-300\delta+30\delta^2-\delta^3) - 999 \over 1-\delta} =
{ 1-300\delta+30\delta^2-\delta^3 \over 1-\delta}
$ </p>
<p>Here we insert $\small 0= -\delta + \delta $ and get </p>
<p>$\qquad \small
{ 1-\delta + \delta - 300\delta+30\delta^2-\delta^3 \over 1-\delta}
=1 + \delta \cdot { 1 - 300+30\delta-\delta^2 \over 1-\delta}
=1 - \delta \cdot { 299 -30\delta +\delta^2 \over 1-\delta}
$ </p>
<p>whose limit is <em>1</em> if $\small \delta \to 0$ , the same what Davide already got. </p>
<p><hr>
[update] Hmm, after a second read I question whether we can talk of a "limit" here, because if we approach from $\small N+\delta$ we arrive at <em>+300</em>, while at $\small x=10 $ itself we have $\small {0 \over 0} $.
So perhaps this can answer/comment another question about the correct terminology?</p>
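<p>The two one-sided values are easy to confirm numerically (an illustrative Python check added here, not part of the original answer): the ratio tends to $1$ from the left and to $300$ from the right.</p>

```python
import math

def ratio(x):
    # ([x^3] - x^3) / ([x] - x), with [.] the floor function
    return (math.floor(x**3) - x**3) / (math.floor(x) - x)

delta = 1e-6
left = ratio(10 - delta)    # approaching 10 from below -> close to 1
right = ratio(10 + delta)   # approaching 10 from above -> close to 300
```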
|
566,993 | <p>Suppose $f(z)=1/(1+z^2)$ and we want to find the power series in $a=1$. I think we have to write $1/(1+z^2)=1/(1+(z-1)+1)^2=1/(1+(1+(z-1)^2+2(z-1)))$, but I'm stuck here.</p>
| Boris Novikov | 62,565 | <p>Set $b=a^{-1}x$. We have $x^2=a^{-1}x^2a$, i.e. $ax^2=x^2a$ for all $x,a$. Since $x^2$ runs over the whole group, $G$ is Abelian.</p>
<p><strong>Correction:</strong> This proof is valid only for a finite group. Thanks to <strong>DonAntonio</strong>.</p>
<p><strong>Addendum:</strong> I am not sure that this assertion is true for infinite groups. A candidate for a counter-example is $G=\langle a,b|a^2=b^2, (ab)^2=(ba)^2\rangle$.</p>
|
566,993 | <p>Suppose $f(z)=1/(1+z^2)$ and we want to find the power series in $a=1$. I think we have to write $1/(1+z^2)=1/(1+(z-1)+1)^2=1/(1+(1+(z-1)^2+2(z-1)))$, but I'm stuck here.</p>
| Heno | 1,042,723 | <p><span class="math-container">$\forall x,a\in G,ax,a^{-1} \in G\Rightarrow ax^2a^{-1}=x^2\Rightarrow ax^2=x^2a$</span></p>
<p><span class="math-container">$\forall x,y\in G$</span></p>
<p><span class="math-container">$xyxy=yxyx\Rightarrow x^{-1}y^{-1}xy=yxy^{-1}x^{-1}$</span></p>
<p><span class="math-container">$(xyx^{-1}y^{-1})^2=xy(x^{-1}y^{-1}xy)x^{-1}y^{-1}=xy^2xy^{-1}(x^{-1})^2y^{-1}$</span>=<span class="math-container">$x^2y^2y^{-2}x^{-2}=e$</span></p>
<p><span class="math-container">$order(xyx^{-1}y^{-1})\neq 2\Rightarrow xyx^{-1}y^{-1}=e$</span></p>
<p><span class="math-container">$xy=yx$</span></p>
|
2,871,892 | <p><a href="https://i.stack.imgur.com/XLen7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XLen7.png" alt="Q1"></a></p>
<p>Solution is 4. </p>
<p>Original matrix is simply [v1;v2;v3;v4]. It forms an identity matrix. Hence the only alteration of the determinant comes from row 1 operation where v1 is multiplied by 2. Then the determinant will also be multiplied by two so 2*2 =4. </p>
<p>We ignore the rest of row operations since they have no effect on the determinant.</p>
<p>Is this line of reasoning correct?</p>
| mengdie1982 | 560,634 | <p>We may prove that the only continuous functions $f(x)$ over $(0,+\infty)$, not identically zero, satisfying $$f(xy)=f(x)f(y)$$ are those of the form $$f(x)=x^\alpha.$$</p>
|
520,046 | <blockquote>
<p>Find the smallest natural number that leaves residues $5,4,3,$ and $2$ when divided respectively by the numbers $6,5,4,$ and $3$.</p>
</blockquote>
<p>I tried
$$x\equiv5\pmod6\\x\equiv4\pmod5\\x\equiv3\pmod4\\x\equiv2\pmod3$$ What is the value of $x$?</p>
| Shobhit | 79,894 | <p>Given</p>
<p>$x=6a+5=6(a+1)-1$</p>
<p>$x=5b+4=5(b+1)-1$</p>
<p>$x=4c+3=4(c+1)-1$</p>
<p>$x=3d+2=3(d+1)-1$</p>
<p>therefore $x$ will be of the form $\operatorname{lcm}(3,4,5,6)\cdot k-1$, or,</p>
<p>$x=60k-1$ for some $k$.</p>
<p>Can you guess that $k$?</p>
<blockquote>
<p>ANSWER:$k=1$, or $x=59$</p>
</blockquote>
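<p>A brute-force search (an illustrative Python check, added editorially) confirms both the answer and its minimality:</p>

```python
# smallest natural number leaving remainders 5, 4, 3, 2
# on division by 6, 5, 4, 3 respectively
x = next(n for n in range(1, 1000)
         if n % 6 == 5 and n % 5 == 4 and n % 4 == 3 and n % 3 == 2)
```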
|
2,594,669 | <p>Given the Pythagorean theorem: <strong>a² + b² = c²</strong></p>
<p>Is there a way to get the value of <strong>b</strong> when we only have a value for <strong>a</strong> and the angle <strong>α</strong>?</p>
<p>To be frank, I have no clue about this; what I want isn't the angle <strong>β</strong> but the length of <strong>b</strong> (the opposite side).</p>
<p><a href="https://i.stack.imgur.com/0jRad.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0jRad.png" alt="enter image description here"></a></p>
| Enrico M. | 266,764 | <p>$$a = b\tan\theta$$</p>
<p>Where $\theta$ is the angle opposite to $a$</p>
<p>From this:</p>
<p>$$b = \frac{a}{\tan\theta}$$</p>
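<p>For a concrete sanity check (an illustrative Python snippet using a hypothetical 3-4-5 triangle): with $a=3$ and $\theta=\arctan(3/4)$, the formula should return $b=4$.</p>

```python
import math

a = 3.0
theta = math.atan2(3, 4)   # angle opposite the side of length 3 in a 3-4-5 triangle
b = a / math.tan(theta)    # b = a / tan(theta), as in the answer
```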
|
3,471,292 | <p>I need to find the value of the series <span class="math-container">$\sum_{n=0}^{\infty}\frac{(n+1)x^n}{n!}$</span>.I've computed its radius of convergence which comes out to be zero.</p>
<p>I'm not getting how to make adjustments in the general terms of the series to get the desired result...</p>
| Ben | 93,447 | <p>Is it not so that you can split the sum into two and simplify:</p>
<p><span class="math-container">$$\sum_{n=0}^\infty \frac{(n+1)x^n}{n!} = \sum_{n=0}^\infty \frac{nx^n}{n!} + \sum_{n=0}^\infty \frac{x^n}{n!} = \sum_{n=1}^\infty \frac{x^{n}}{(n-1)!} + \sum_{n=0}^\infty \frac{x^n}{n!}= \ldots$$</span></p>
<p>Consider what the first sum becomes when reindexed to start at 0; knowing the expansion for <span class="math-container">$x \mapsto e^x$</span> will see you well!</p>
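<p>(Editorial note, completing the hint under the stated reindexing: the two sums are $xe^x$ and $e^x$, so the total is $(1+x)e^x$. A quick illustrative Python check of the partial sums:)</p>

```python
import math

def partial_sum(x, terms=60):
    # partial sum of sum_{n>=0} (n+1) x^n / n!
    return sum((n + 1) * x**n / math.factorial(n) for n in range(terms))

x = 2.0
approx = partial_sum(x)
closed = (1 + x) * math.exp(x)   # x*e^x + e^x
```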
|
1,987,387 | <p>I don't remember any method to compute the closed form for the following series.
$$ \sum_{k=0}^{\infty}\binom{3k}{k} x^k .$$</p>
<p>I tried putting $\binom{3k}{k}$ into Mathematica for different $k$ and asking for the generating function; it delivered a complicated formula, which is the following.
$$ \frac{2\cos\left[\frac{1}{3} \sin^{-1}\left(\frac{\sqrt{27x}}{2}\right)\right]}{\sqrt{4-27x}} $$</p>
<p>I was wondering if there is any simple form? </p>
| DonAntonio | 31,254 | <p>That's a power series about $\;x_0=0\;$ and whose sequence of coefficients is</p>
<p>$$a_k=\binom{3k}k=\frac{(3k)!}{k!(2k)!}\implies\;\left|\frac{a_{k+1}x^{k+1}}{a_kx^k}\right|=\frac{(3k+3)!}{(k+1)!(2k+2)!}\cdot\frac{k!(2k)!}{(3k)!}|x|=$$</p>
<p>$$=\frac{(3k+1)(3k+2)(3k+3)}{(k+1)(2k+1)(2k+2)}|x|\xrightarrow[k\to\infty]{}\frac{27}{4}|x|$$</p>
<p>and thus the series converges for</p>
<p>$$\frac{27}4|x|<1\iff |x|<\frac4{27}$$</p>
|
667,371 | <p>I try to solve this equation:
$$\sqrt{x+2}+\sqrt{x-3}=\sqrt{3x+4}$$</p>
<p>So what I did was:</p>
<p>$$x+2+2*\sqrt{x+2}*\sqrt{x-3}+x-3=3x+4$$</p>
<p>$$2*\sqrt{x+2}*\sqrt{x-3}=x+5$$</p>
<p>$$4*{(x+2)}*(x-3)=x^2+25+10x$$</p>
<p>$$4x^2-4x-24=x^2+25+10x$$</p>
<p>$$3x^2-14x-49$$</p>
<p>But this seems to be wrong! What did i wrong?</p>
| Clive Newstead | 19,542 | <p>What makes you think you've done anything wrong? You can factorise
$$3x^2-14x-49 = (3x+7)(x-7)$$
to obtain a solution to your equation.</p>
<p>Beware: one of the solutions from this quadratic is not a solution because it doesn't satisfy the original equation... this often happens when you solve equations by squaring stuff.</p>
<p><em>P.S. I must have reached the question after the typo was fixed.</em></p>
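<p>(An illustrative Python check of the two roots of the quadratic, added editorially: $x=7$ satisfies the original equation, while $x=-7/3$ is extraneous since $x-3<0$ makes the radicals non-real.)</p>

```python
import math

x = 7
lhs = math.sqrt(x + 2) + math.sqrt(x - 3)   # sqrt(9) + sqrt(4)
rhs = math.sqrt(3 * x + 4)                  # sqrt(25)
quad_at_7 = 3 * x**2 - 14 * x - 49          # the quadratic from the post
other_root = -7 / 3                          # from (3x+7)(x-7); outside the domain x >= 3
```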
|
2,355,579 | <blockquote>
<p><strong>Problem:</strong> James has a pile of n stones for some positive integer n ≥ 2. At each step, he
chooses one pile of stones and splits it into two smaller piles and writes the product
of the new pile sizes on the board. He repeats this process until every pile is exactly
one stone.</p>
<p>For example, if James has n = 12 stones, he could split that pile into a pile of size
4 and another pile of size 8. James would then write the number 4 · 8 = 32 on the
board. He then decides to split the pile of 4 stones into a pile with 1 stone and a
pile with 3 stones and writes 1 · 3 = 3 on the board. He continues this way until he
has 12 piles with one stone each.</p>
<p>Prove that no matter how James splits the piles (starting with a single pile of n
stones), the sum of the numbers on the blackboard at the end of the procedure is
always the same.</p>
</blockquote>
<p><em>Hint: First figure out what the formula for the final sum will be. Then prove it
using strong induction.</em></p>
<p>The formula I came up with is $n(n-1)/2$. It could be totally wrong...</p>
<p><strong>My (Partial) Solution:</strong></p>
<p>1) <strong>Base Case:</strong> (n=2) James has one pile of 2 stones. Suppose he takes the top-most stone and puts in one pile A, which now has size 1. He takes the remaining stone and places it into another pile B, which now has size 1. Then the product of the sizes is 1. The sum of all the products on his board is 1. Now, if he were to start again, and take the bottom stone first and put it in pile A, he has a pile of size 1, and then takes the top stone and puts it in pile B, he has a pile of size 1, and the product is 1, with sum of all products on the board is 1. </p>
<p>2) <strong>Inductive Hypothesis (Strong Induction):</strong> Suppose for some $k \geq 2$, $n$ stones can be split in $ 2 \leq n \leq k $ stones and $k-n$ stones.</p>
<p>3) <strong>Inductive Step</strong>: Consider $n=k+1$. What do I do now?</p>
| Bram28 | 256,001 | <p>First, a couple of comments on how you think about, and write/present this proof.</p>
<p>When doing induction, it is always a good idea to get very clear on exactly what the claim is that you are trying to prove, and thus what the property is that you want to show all natural numbers have. So in this case, that would be the claim that for any number $n$: if you start with a pile of $n$ stones, then no matter how you keep splitting it, the eventual sum of products will be the same and in fact will be $\frac{n(n-1)}{2}$.</p>
<p>It is also a good idea to try to conceptually understand <em>why</em> you would want to use some specific induction method to prove your claim. In this case, you want to prove your formula by strong induction, since you can split any pile anywhere, and you want to show that no matter what smaller size piles you end up with, the formula always holds.</p>
<p>Now, as a base case you can in fact just use $n=1$, in which case there is nothing to do, and so the sum of the products is $0$, which is indeed what you get when plugging in $n=1$ for your formula.</p>
<p>For the inductive 'step', again get clear on what exactly the inductive hypothesis is (indeed, you make it seem like the step is separated, and comes only after the inductive hypothesis, but the inductive hypothesis is <em>part</em> of the step, so I strongly discourage the way you do this in your post). Now, you write: $n$ stones can be split into $n$ and $k-n$ stones. OK, so first of all, that first $n$ should be $k$, but I understand that was just a typo. But far more importantly, this is <em>not</em> the inductive assumption. In fact, it is not even an interesting claim: <em>of course</em> a pile of $k$ stones can be split in two! The inductive assumption is that the earlier claim is true for all piles of size $n<k$. In sum, what you should say after the base cases is:</p>
<p>Step: Let $k$ be some arbitrary number greater than the base case (i.e. $k>1$). The inductive hypothesis is that for any $n<k$ we have that no matter how you split a pile of $n$ stones, the eventual sum of products equals $\frac{n(n-1)}{2}$ (and yes, I very much encourage for strong induction to use $n<k$ rather than $n \le k$, for that allows you to proceed using $k$ instead of $k+1$, which often means that your proof and formulas will become a little easier, and also avoids making it look like you are doing weak induction)</p>
<p>OK, so now we try to show that under that assumption it would follow that the claim is true for a pile of size $k$. Ok, so let's consider a pile of size $k$. Does it matter where we split it? No. Let's split it into piles of size $m$ and $k-m$. By the inductive hypothesis, these two piles will end up with a sum of products of $\frac{m(m-1)}{2}$ and $\frac{(k-m)(k-m-1)}{2}$ respectively, but you also obtained a product of $m(k-m)$ by this very split. Hence, the eventual sum of products will be:</p>
<p>$$\frac{m(m-1)}{2}+\frac{(k-m)(k-m-1)}{2}+m(k-m)=$$</p>
<p>$$\frac{m(m-1)+(k-m)(k-m-1)+2m(k-m)}{2}=$$</p>
<p>$$\frac{m^2-m+k^2-mk-k-mk+m^2+m+2mk-2m^2}{2}=$$</p>
<p>$$\frac{k^2-k}{2}=\frac{k(k-1)}{2}$$</p>
<p>And thus we find the formula is true for any pile of size $k$ as well, no matter where you split it.</p>
<p>Finally, you may want to wrap things up, and say something like:</p>
<p>Since we have proven the base and the step, we have completed the proof by strong induction, and hence the claim is proven.</p>
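<p>The invariant is also easy to test empirically; here is a small Python simulation (illustrative only, not part of the proof) that splits piles at random and checks that the sum of products always equals $\frac{n(n-1)}{2}$:</p>

```python
import random

def split_sum(n, rng):
    """Split a pile of n stones by random cuts; return the sum of products written down."""
    total = 0
    piles = [n]
    while piles:
        p = piles.pop()
        if p == 1:
            continue
        cut = rng.randint(1, p - 1)   # split p into cut and p - cut
        total += cut * (p - cut)      # product written on the board
        piles.extend([cut, p - cut])
    return total

rng = random.Random(0)
results = {split_sum(12, rng) for _ in range(200)}   # always 12*11/2 = 66
```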
|
1,800,519 | <blockquote>
<p>Let $\omega$ be an $n$-form and $\mu$ be an $m$-form where both are acting on a manifold $M$. Is the Lie derivative $L_{X}(\omega \wedge \mu)$ where $X$ is a smooth vector field acting on $M$ an exact form? </p>
</blockquote>
<p>I think it is but I've been unable to prove it, so any help would be greatly appreciated. </p>
| Travis Willse | 155,629 | <p><strong>Hint</strong> Use <a href="https://en.wikipedia.org/wiki/Lie_derivative#The_Lie_derivative_of_differential_forms" rel="nofollow">Cartan's Magic Formula</a>, which says that the Lie derivative $\mathcal L_X$ of a differential form $\alpha$ satisfies
$$\mathcal L_X \alpha = \iota_X d \alpha + d (\iota_X \alpha) .$$
From the statement of the original problem in the comments, $L_X(\omega \wedge \mu)$ can be integrated, so $L_X(\omega \wedge \mu)$, and hence $\alpha := \omega \wedge \mu$, is a top form. In particular, $d \alpha = 0$; so, $\mathcal L_X \alpha$ is exact (and we can finish proving the claim in the comments with Stokes' Theorem).</p>
<p>NB that since we used the fact that $\omega \wedge \mu$ is a top form, this argument doesn't apply to the question in the full stated generality (i.e., to forms of general degree), and indeed, one can readily construct a counterexample using the above identity as a guide.</p>
|
2,640,763 | <p>Let $\{x_i\}_{i=1}^{n}$ and $\{y_i\}_{i=1}^{n}$ two positive sequences, the first one is monotonic, the second one is strictly increasing .</p>
<p>I noticed that in many cases if $\{x_i\}_{i=1}^{n}$ is increasing $$\frac{\frac{1}{n}\sum_{i=1}^{n}{x_iy_i}}{\left(\frac{1}{n}\sum_{i=1}^{n}{x_i}\right)\left(\frac{1}{n}\sum_{i=1}^{n}{y_i}\right)}\ge1$$ </p>
<p>and if $\{x_i\}_{i=1}^{n}$ is decreasing $$\frac{\frac{1}{n}\sum_{i=1}^{n}{x_iy_i}}{\left(\frac{1}{n}\sum_{i=1}^{n}{x_i}\right)\left(\frac{1}{n}\sum_{i=1}^{n}{y_i}\right)}\le1$$ </p>
<p>I am very skeptical about the veracity of this assertion, but I cannot prove it, any help or directions would be welcome.</p>
| Mathematician 42 | 155,917 | <p>Let $T:V\rightarrow V$ be a linear map and $\alpha=\left\{v_1, \dots, v_k, u_{k+1}, \dots u_n\right\}$ a basis of $V$ such that $\left\{v_1, \dots, v_k\right\}$ is a basis of $\ker(T)$. </p>
<p>You can easily prove that $T(u_{k+1}), \dots , T(u_n)$ are linearly independent vectors. (Mimic the technique used to prove the rank-nullity theorem.) </p>
<p>Define $\beta=\left\{w_1, \dots, w_k,T(u_{k+1}), \dots , T(u_n) \right\}$. Here $w_1, \dots, w_k$ are vectors extending the linearly independent $T(u_{k+1}), \dots , T(u_n)$ to a basis of $V$. Now you can easily check that $[T]_{\alpha}^{\beta}$ is of the required form.</p>
<p>Notice that this does not imply that $\det(T)=0$ or $1$. Indeed, $\det(T)$ is defined as $\det([T]_{\gamma}^{\gamma})$ for some basis $\gamma$ of $V$. So the matrix of $T$ is expressed w.r.t. the <strong>same</strong> basis in both domain and target.</p>
|
1,369,409 | <p>I have a bit of an advanced combination problem that has left me stumped for a few days. Essentially my question is if you have n sets of items, and you can select a different number of items from each set, how do you compute the combinations without first creating new sets.</p>
<p>An example:<br>
Set A has elements: Dog, Cat, Rhino.<br>
Set B has elements: Pig, Horse, Cow.<br>
Set C has elements: Lizard, Snake, Crocodile, Alligator.<br><br>
Now I would like to compute all of the combinations with the criteria that 2 elements are selected from set A, 1 element is selected from set B, and 2 elements are selected from set C. The end result would contain all the unique combinations with those specifications.<br><br>
A current way I am doing this is to turn Set A into the set of all of its 2-element combinations, stored as a new set, Set D; to do the same for Set C, stored as Set E; and then to select one item from each of Set B, Set D, and Set E to get all the unique combinations. I was wondering if there is a better solution.</p>
<p><br>EDIT: By compute I am referring to generating a list of all the possible sets, NOT figuring out the count or number of items. That being said, this implies that each result is treated as a set: (Dog, Cat, Pig, Lizard, Snake) is the same as (Cat, Dog, Pig, Snake, Lizard). </p>
| coldnumber | 251,386 | <p>An easy way to visualize all the combinations is to use a tree diagram. It will, however, get rather unwieldy, since as the other answers showed you'll end up with 54 branches. </p>
<p>First, list the three combinations of 2 elements of $A$, say $(a_1,a_2),(a_1,a_3)$, and $(a_2,a_3)$. From each of these spring 3 branches, one for each element of $B$ (because you're only looking for 1 element of $B$). Then from each of those branches spring 6 more, one for each of the combinations of 2 elements of $C$: $(c_1, c_2),(c_1,c_3),(c_1,c_4),(c_2,c_3),(c_2,c_4),$ and $(c_3,c_4)$.</p>
<p>To read a particular combination, you can trace your path back along each branch.</p>
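<p>In code, the asker's "combine combinations" approach is exactly a Cartesian product of per-set combinations, which Python's <code>itertools</code> expresses directly (an illustrative sketch, added editorially):</p>

```python
from itertools import combinations, product

A = ["Dog", "Cat", "Rhino"]
B = ["Pig", "Horse", "Cow"]
C = ["Lizard", "Snake", "Crocodile", "Alligator"]

# 2 from A, 1 from B, 2 from C: Cartesian product of the per-set combinations
selections = [a + b + c
              for a, b, c in product(combinations(A, 2),
                                     combinations(B, 1),
                                     combinations(C, 2))]
```

The count matches the tree diagram: $3 \times 3 \times 6 = 54$ branches.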
|
4,467,036 | <p>I have to calculate the following integral using contour integration: <span class="math-container">$$\int_0^1 \frac{dx}{(x+2)\sqrt[3]{x^2(x-1)}}$$</span></p>
<p>I've tried to solve this using the residue theorem, but I don't know how to calculate the residue of the function <span class="math-container">$$f(z) = \frac{1}{(z+2)\sqrt[3]{z^2(z-1)}}$$</span> Then I tried to make a substitution in the real integral, so that I would get a function whose residue I know how to calculate, but I couldn't figure out what substitution would do the trick. I would really appreciate if someone could help.</p>
| DinosaurEgg | 535,606 | <p>I will solve the more general integral</p>
<p><span class="math-container">$$I(z):=\int_0^1\frac{dx}{x^{2/3}(1-x)^{1/3}(x+z)}$$</span></p>
<p>which is found to be given by</p>
<p><span class="math-container">$$I(z)=\frac{2\pi}{\sqrt{3}}z^{-2/3}(1+z)^{-1/3}$$</span></p>
<p>Note that no matter your choice of branch of the cubic root function, your integral will be proportional to <span class="math-container">$I(2)$</span> (specifically <span class="math-container">$-I(2)$</span> or <span class="math-container">$e^{i\pi/3}I(2)$</span> for the two most common definitions of the function as mentioned in the comments)</p>
<p>To prove the statement, perform the change of variables <span class="math-container">$x \to 1/x$</span></p>
<p><span class="math-container">$$I(z)=\int_1^\infty\frac{du}{(u-1)^{1/3}(1+zu)}$$</span></p>
<p>Now it is easy to see that the integral will become elementary by setting <span class="math-container">$t=(u-1)^{2/3}$</span> which yields the form</p>
<p><span class="math-container">$$I(z)=\frac{3}{2}\int_0^{\infty}\frac{dt}{zt^{3/2}+z+1}=\frac{3}{2z^{2/3}(1+z)^{1/3}}\int_0^{\infty}\frac{da}{a^{3/2}+1}$$</span></p>
<p>The remaining integral is standard and can be done using complex analysis, with an appropriate contour (hint below) for the result advertised in equation 2.</p>
<p><strong>Hint:</strong></p>
<blockquote class="spoiler">
<p> Use a pizza slice contour centered at the origin of angle <span class="math-container">$2\pi/r$</span>. <span class="math-container">$$\int_0^\infty\frac{da}{a^r+1}=\frac{\pi}{r\sin\pi/r}, \quad r>1$$</span></p>
</blockquote>
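<p>(Editorial aside: the hint's formula is easy to verify numerically, e.g. for $r=3$; a Python sketch, where the substitution $a=t/(1-t)$ maps the half-line to $(0,1)$ and leaves a smooth integrand.)</p>

```python
import math

def integral_r3(n=20_000):
    # midpoint rule for ∫_0^∞ da/(a^3+1) after substituting a = t/(1-t),
    # which turns the integrand into (1-t)/(t^3 + (1-t)^3) on (0, 1)
    h = 1.0 / n
    return sum(h * (1 - t) / (t**3 + (1 - t) ** 3)
               for t in (h * (i + 0.5) for i in range(n)))

numeric = integral_r3()
exact = math.pi / (3 * math.sin(math.pi / 3))   # the hint's formula with r = 3
```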
|
4,467,036 | <p>I have to calculate the following integral using contour integration: <span class="math-container">$$\int_0^1 \frac{dx}{(x+2)\sqrt[3]{x^2(x-1)}}$$</span></p>
<p>I've tried to solve this using the residue theorem, but I don't know how to calculate the residue of the function <span class="math-container">$$f(z) = \frac{1}{(z+2)\sqrt[3]{z^2(z-1)}}$$</span> Then I tried to make a substitution in the real integral, so that I would get a function whose residue I know how to calculate, but I couldn't figure out what substitution would do the trick. I would really appreciate if someone could help.</p>
| Sangchul Lee | 9,340 | <p>I will instead compute</p>
<p><span class="math-container">$$ I = \int_{0}^{1} \frac{\mathrm{d}x}{(x+2)\sqrt[3]{x^2\bbox[color:red;padding:3px;border:1px dotted red;]{(1-x)}}}. $$</span></p>
<p>You will have no problem converting this to your case, depending on which branch of <span class="math-container">$\sqrt[3]{\,\cdot\,}$</span> is used.</p>
<hr />
<p><strong>1<sup>st</sup> Solution.</strong> Let <span class="math-container">$\sqrt[3]{z} = \exp(\frac{1}{3}\log z)$</span> be the principal complex cube root. Also, let <span class="math-container">$f(z)$</span> be the holomorphic function defined on <span class="math-container">$\mathbb{C} \setminus [0, 1]$</span> by</p>
<p><span class="math-container">$$ f(z) = \frac{1}{(z+2) z \sqrt[3]{1 - z^{-1}}}. $$</span></p>
<p>Then consider the integral</p>
<p><span class="math-container">$$ J = \int_{|z|=R_0} f(z) \, \mathrm{d}z, $$</span></p>
<p>where <span class="math-container">$R_0 > 2$</span> so that <span class="math-container">$|z| = R_0$</span> encloses all the singularities of <span class="math-container">$f$</span>. Now we will compute <span class="math-container">$J$</span> in two ways. On one hand, by noting that <span class="math-container">$|f(z)| = \mathcal{O}(|z|^{-2})$</span>, we get</p>
<p><span class="math-container">$$ J = \lim_{R\to\infty} \int_{|z|=R} f(z) \, \mathrm{d}z = 0. $$</span></p>
<p>On the other hand, by "shrinking" the contour <span class="math-container">$|z| = R_0$</span> (blue circle in the figure below), we obtain a small circle around the pole <span class="math-container">$-2$</span> of <span class="math-container">$f$</span> and the dogbone contour around <span class="math-container">$[0, 1]$</span>:</p>
<p><img src="https://i.stack.imgur.com/H7fWf.png" alt="contours" /></p>
<p>In this limit, noting that <span class="math-container">$|f(z)| = \mathcal{O}(|z|^{-2/3})$</span> as <span class="math-container">$z \to 0$</span> and <span class="math-container">$|f(z)| = \mathcal{O}(|z-1|^{-1/3})$</span> as <span class="math-container">$z \to 1$</span>, we obtain</p>
<p><span class="math-container">$$ J = 2\pi i \mathop{\mathrm{Res}}_{z=-2} f(z) + (e^{i\pi/3} - e^{-i\pi/3}) I. $$</span></p>
<p>In this step, we utilized the observation that, for <span class="math-container">$0 < x < 1$</span>,</p>
<p><span class="math-container">\begin{align*}
\lim_{\varepsilon \to 0^+} \sqrt[3]{1-\frac{1}{x+i\varepsilon}} &= e^{i\pi/3} \sqrt[3]{\frac{1-x}{x}}, \\
\lim_{\varepsilon \to 0^+} \sqrt[3]{1-\frac{1}{x-i\varepsilon}} &= e^{-i\pi/3} \sqrt[3]{\frac{1-x}{x}}.
\end{align*}</span></p>
<p>Finally, since <span class="math-container">$J = 0$</span>, solving the above equality for <span class="math-container">$I$</span> gives</p>
<p><span class="math-container">$$ I
= -\frac{2\pi i}{e^{i\pi/3} - e^{-i\pi/3}} \left( \mathop{\mathrm{Res}}_{z=-2} f(z) \right)
= \frac{\pi}{\sin(\pi/3)} \frac{1}{\sqrt[3]{12}} $$</span></p>
<hr />
<p><strong>2<sup>nd</sup> Solution.</strong> The integrand has two branch points, namely <span class="math-container">$0$</span> and <span class="math-container">$1$</span>. So it would be easier if we can send one to <span class="math-container">$\infty$</span>. This can be done, for example, by invoking the substitution</p>
<p><span class="math-container">$$ t = \frac{x}{1-x}, \qquad \text{i.e.,} \qquad x = \frac{t}{1+t}. $$</span></p>
<p>Indeed, the above substitution yields</p>
<p><span class="math-container">$$ I = \int_{0}^{\infty} \frac{\mathrm{d}t}{t^{2/3}(3t+2)}. $$</span></p>
<p>Now this integral can be tackled by a fairly standard manner. For example, choosing the branch cut of <span class="math-container">$\log$</span> as <span class="math-container">$[0, \infty)$</span> and using the <a href="https://en.wikipedia.org/wiki/Hankel_contour" rel="noreferrer">Hankel contour</a> (or more precisely, keyhole contour followed by limit),</p>
<p><img src="https://i.stack.imgur.com/vHBZ1.png" alt="Hankel contour" /></p>
<p>we get</p>
<p><span class="math-container">\begin{align*}
\left(1 - \frac{1}{e^{4\pi i/3}} \right) I
&= \int_{\text{Hankel}} \frac{1}{z^{2/3}(3z+2)} \, \mathrm{d}z \\
&= 2\pi i \left( \mathop{\mathrm{Res}}_{z=-2/3} \frac{1}{z^{2/3}(3z+2)} \right)
= \frac{2\pi i}{3 e^{2\pi i/3}(2/3)^{2/3}}.
\end{align*}</span></p>
<p>Solving this for <span class="math-container">$I$</span> gives the same answer.</p>
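<p>(Editorial sanity check: with the real branch as in the boxed integrand, both answers give $\int_0^1 \frac{dx}{(x+2)\sqrt[3]{x^2(1-x)}} = \frac{\pi}{\sin(\pi/3)\sqrt[3]{12}}$. An illustrative Python computation, substituting $x=\sin^2\theta$ to tame the endpoint singularities:)</p>

```python
import math

def numeric_integral(n=200_000):
    # midpoint rule for ∫_0^1 dx / ((x+2) (x^2 (1-x))^(1/3)) after x = sin^2(θ):
    # integrand becomes 2 cos(θ)^(1/3) / (sin(θ)^(1/3) (sin(θ)^2 + 2)) on (0, π/2)
    h = (math.pi / 2) / n
    total = 0.0
    for i in range(n):
        t = h * (i + 0.5)
        s, c = math.sin(t), math.cos(t)
        total += h * 2 * c ** (1 / 3) / (s ** (1 / 3) * (s * s + 2))
    return total

numeric = numeric_integral()
exact = math.pi / (math.sin(math.pi / 3) * 12 ** (1 / 3))
```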
|
1,421,740 | <p>Let $90^a=2$ and $90^b=5$, Evaluate </p>
<p>$$45^\frac {1-a-b}{2-2a}$$</p>
<p>I know that the answer is 3 when I use logarithms, but I need to show a student how to evaluate this without involving logarithms. Also, no calculators.</p>
| GAVD | 255,061 | <p>Let me try. </p>
<p>$$10 = 90^{a+b} \Rightarrow 3^2 = 90^{1-a-b} \Rightarrow 3 = 90^{\frac{1-a-b}{2}}.$$</p>
<p>Then, $$45 = 90^{(1-a-b)+b} = 90^{1-a}.$$</p>
<p>So, $$45^{\frac{1}{1-a}} = 90 \Rightarrow 45^{\frac{1-a-b}{2(1-a)}} = 90^{\frac{1-a-b}{2}} = 3.$$</p>
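<p>(An illustrative numerical cross-check in Python, using logarithms only to verify the identity, not to derive it:)</p>

```python
import math

a = math.log(2, 90)   # so that 90**a == 2
b = math.log(5, 90)   # so that 90**b == 5
value = 45 ** ((1 - a - b) / (2 - 2 * a))   # should come out as 3
```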
|
1,100,812 | <p>Here is the statement:
($\bf{Tonelli}$) If $f\in L^+(X\times Y)$, then $\displaystyle g:x\mapsto\int_Yf_x\,d\nu$ is $\mathcal{M}$-measurable and
$\displaystyle h:y\mapsto \int_Xf^y\,d\mu$ is $\mathcal{N}$-measurable (so $g\in L^+(X)$ and $h\in L^+(Y)$), and $$\displaystyle \int_{X\times Y}f\,d(\mu \times\nu)=\int_Xg\,d\mu=\int_Yh\,d\nu.$$
That is, $$\displaystyle \int_{X\times Y}f\,d(\mu \times\nu)=\int_X\left(\int_Yf_x\,d\nu\right)d\mu(x)=\int_Y\left(\int_Xf^y\,d\mu\right)d\nu(y)$$</p>
<p>($\bf{Fubini}$) If $f\in L^1(X\times Y)$, then $f_x\in L^1(Y,\nu)$ for a.e. $x\in X$ and $f^y\in L^1(X,\mu)$ for a.e. $y\in Y$. The a.e. defined functions $g$ and $h$ above are $\mathcal{M}$-measurable and $\mathcal{N}$-measurable respectively and the conclusion from above holds. </p>
<p>($\bf{Fubini}$) Since $f\in L^1(X\times Y)$, we have $|f|\in L^+$, that is, $\int_{X\times Y}|f|<\infty$. Then
$\displaystyle\int_{X\times Y}|f|\,d(\mu \times\nu)=\int_Y\left(\int_X|f^y|d\mu\right)d\nu=\int_X\left(\int_Y|f_x|d\nu\right)d\mu$
$\Rightarrow \displaystyle \int_X|f^y|d\mu<\infty$ for a.e. $y$ and $\displaystyle \int_Y|f_x|d\nu<\infty$ for a.e. $x$
$\Rightarrow f^y\in L^1(\mu)$ for a.e. $y$ and $f_x\in L^1(\nu)$ for a.e. $x$.
Let $f=f^+-f^-$, then $f_x=(f_x)^+-(f_x)^-=(f^+)_x-(f^-)_x$. So
\begin{eqnarray*}
\int_{X\times Y}f&=&\int_{X\times Y}f^+-\int_{X\times Y}f^-\\
&=&\int_X\left[\int_Y(f^+)_x\right]-\int_X\left[\int_Y(f^-)_x\right]\\
&=&\int_X\left[\int_Y(f^+)_x-(f^-)_x\right]\\
&=&\int_X\int_Yf_x.
\end{eqnarray*}
Similar for $f^y$.</p>
<p>My question is about the a.e. part of this proof. I'm not sure exactly why line 2 implies the a.e. conclusion. Should we not show this rigorously or is there a way of looking at these types of a.e. statements and seeing it automatically? Thank you! </p>
| Loreno Heer | 92,018 | <p>I think that follows from $\int_{X\times Y}|f|<\infty$: a nonnegative measurable function with a finite integral must be finite almost everywhere, so the inner integrals are finite for a.e. $y$ and a.e. $x$.</p>
|
1,102,324 | <p>I could use some pointers solving this problem:</p>
<blockquote>
<p>Given a certain r.v. $X$ with cdf $F_X(x)$ and pdf $f_X(x)$. Let the r.v. $Y$ be the lower-censored version of $X$ at $x=b$. Meaning:</p>
<p>$$Y = \begin{cases}0 & \text{if }X<b\\
X & \text{if } X \geq b\end{cases}$$</p>
<p>Find cdf $F_Y(y)$ and pdf $f_Y(y)$</p>
</blockquote>
<h3>My attempt</h3>
<p>I'm looking for
$$\begin{align}
F_Y(y) = \mathcal{P}(Y<y) &= \mathcal{P}(Y<y\mid X<b)\mathcal{P}(X<b)+\mathcal{P}(Y<y\mid X \geq b)\mathcal{P}(X\geq b)\\
& = \mathcal{P}(0<y\mid X<b)\mathcal{P}(X<b) + \mathcal{P}(X<y\mid X \geq b)\mathcal{P}(X\geq b)
\end{align}$$</p>
<p>I've tried continuing using Bayes, but I always get kinda stuck. Which path should I follow?</p>
<p>Is this the right mental picture for this problem?
<img src="https://i.stack.imgur.com/OB7vT.png" alt="censored"></p>
<p><em>Solution should be: $F_Y(y) = \frac{F_X(y)-F_X(b)}{1-F_X(b)}$</em></p>
| mickep | 97,236 | <p>Hint: $y=-ax^2+1$ is zero when $x=\pm1/\sqrt{a}$, so it might be so that you want to calculate the integral
$$
\int_{-1/\sqrt{a}}^{1/\sqrt{a}}1-ax^2\,dx.
$$</p>
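<p>(An illustrative Python check of the hinted integral, added editorially, for the sample value $a=2$: the area between the parabola and the axis works out to $\frac{4}{3\sqrt{a}}$.)</p>

```python
import math

a = 2.0
r = 1 / math.sqrt(a)             # zeros of y = 1 - a x^2 at x = ±1/sqrt(a)
n = 100_000
h = 2 * r / n
area = sum(h * (1 - a * (-r + h * (i + 0.5)) ** 2) for i in range(n))
closed = 4 / (3 * math.sqrt(a))  # from the antiderivative x - a x^3 / 3
```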
|
381,566 | <p>I know practically nothing about fractional calculus so I apologize in advance if the following is a silly question. I already tried on math.stackexchange.</p>
<p>I just wanted to ask if there is a notion of fractional derivative that is linear and satisfies the following property: <span class="math-container">$D^u((f)^n) = \alpha D^u(f)f^{(n-1)}$</span>, where <span class="math-container">$\alpha$</span> is a scalar. In the case of standard derivatives we would have <span class="math-container">$\alpha = n$</span>.</p>
<p>Thank you very much.</p>
| Iosif Pinelis | 36,721 | <p>It appears you actually want <span class="math-container">$D^u(f^n)=\alpha f^{n-1} D^u f$</span>, where <span class="math-container">$\alpha$</span> is a scalar.</p>
<p>There is no reason for this to be true, and this is indeed false in general. E.g., for <span class="math-container">$n=2$</span> and the <a href="https://en.wikipedia.org/wiki/Fractional_calculus#Riemann%E2%80%93Liouville_fractional_derivative" rel="noreferrer">Riemann--Liouville fractional derivative</a> of <span class="math-container">$f:=\exp$</span> with <span class="math-container">$u=1/2$</span>, <span class="math-container">$a=0$</span>, and <span class="math-container">$x>0$</span> we have
<span class="math-container">$$f(x)^{n-1}(D^uf)(x)=e^{2 x} \text{erf}\left(\sqrt{x}\right)+\frac{e^x}{\sqrt{\pi } \sqrt{x}},$$</span>
whereas
<span class="math-container">$$(D^u(f^n))(x)=\sqrt{2} e^{2 x} \text{erf}\left(\sqrt{2} \sqrt{x}\right)+\frac{1}{\sqrt{\pi } \sqrt{x}},$$</span>
so that
<span class="math-container">$$\frac{D^u(f^n)}{f^{n-1}\,D^uf}$$</span>
is quite unlike any constant.</p>
<p>Moreover, the term <span class="math-container">$\text{erf}\left(\sqrt{2} \sqrt{x}\right)$</span> in the expression for <span class="math-container">$(D^u(f^n))(x)$</span> here versus the term <span class="math-container">$\text{erf}\left(\sqrt{x}\right)$</span> in the expression for <span class="math-container">$f(x)^{n-1}(D^uf)(x)$</span> seem to make it very unlikely that any other kind of fractional derivative will work as you want.</p>
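<p>(Editorial aside: one can confirm numerically that this ratio is not constant by evaluating the two closed forms above at a couple of points; an illustrative Python sketch using <code>math.erf</code>.)</p>

```python
import math

def f_pow_times_Du(x):
    # f(x)^{n-1} (D^u f)(x) for f = exp, n = 2, u = 1/2 (Riemann-Liouville, a = 0)
    return math.exp(2 * x) * math.erf(math.sqrt(x)) + math.exp(x) / math.sqrt(math.pi * x)

def Du_of_f_pow(x):
    # (D^u (f^n))(x) for the same choices
    return math.sqrt(2) * math.exp(2 * x) * math.erf(math.sqrt(2 * x)) + 1 / math.sqrt(math.pi * x)

r1 = Du_of_f_pow(0.25) / f_pow_times_Du(0.25)   # near 1 for small x
r2 = Du_of_f_pow(4.0) / f_pow_times_Du(4.0)     # near sqrt(2) for large x
```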
|
251,818 | <p>In other words if a graph is $3$-regular does it need to have $4$ vertices? I ask because I have been asked to prove that if $n$ is an odd number and $G$ is an $n$-regular graph then $G$ must have an even number of vertices.</p>
| yo' | 43,247 | <p>I'm not sure how detailed an answer you want, so this is a hint, with the proof itself hidden: consider a simple graph (no parallel edges, no loops on a vertex) on $n$ vertices and think about how many edges can leave a vertex. Also, what if $n=0$?</p>
<blockquote class="spoiler">
<p> Well, if you consider the empty graph, then it is $k$-regular and has $0$ vertices, but that's another point. </p>
</blockquote>
<p>--</p>
<blockquote class="spoiler">
<p> Generally, a non-empty $k$-regular graph has to have at least $k+1$ vertices. </p>
</blockquote>
<p>--</p>
<blockquote class="spoiler">
<p> Moreover, if $k$ is odd and you don't allow loops, the number of vertices $n$ must be even. That's because the number of edges $m$ satisfies $2m=\sum_{v\in V} d(v)$ (each edge is counted on $2$ vertices) and hence $2m=kn$. </p>
</blockquote>
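<p>The parity claim in the last spoiler can also be verified by brute force for tiny cases; here is a Python sketch (an exhaustive search over all simple graphs on $n$ vertices, written for this illustration):</p>

```python
from itertools import combinations

def exists_k_regular(n, k):
    """Search every simple graph on n vertices for one that is k-regular."""
    edges = list(combinations(range(n), 2))     # all possible edges of K_n
    for r in range(len(edges) + 1):
        for chosen in combinations(edges, r):
            deg = [0] * n
            for u, v in chosen:
                deg[u] += 1
                deg[v] += 1
            if all(d == k for d in deg):
                return True
    return False

print(exists_k_regular(4, 3))   # True  (K_4 is 3-regular)
print(exists_k_regular(5, 3))   # False (k odd forces an even number of vertices)
```
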
|
1,781,117 | <h1>The question</h1>
<p>Prove that:
$$\prod_{n=2}^∞ \left( 1 - \frac{1}{n^4} \right) = \frac{e^π - e^{-π}}{8π}$$</p>
<hr>
<h2>What I've tried</h2>
<p>Knowing that:
$$\sin(πz) = πz \prod_{n=1}^∞ \left( 1 - \frac{z^2}{n^2} \right)$$
evaluating at $z=i$ gives
$$ \frac{e^π - e^{-π}}{2i} = \sin(πi) = πi \prod_{n=1}^∞ \left( 1 + \frac{1}{n^2} \right)$$
so:
$$ \prod_{n=1}^∞ \left( 1 + \frac{1}{n^2} \right) = \frac{e^π - e^{-π}}{2π}$$</p>
<p>I'm stuck and don't know how to continue. Any help?</p>
| C.S. | 95,894 | <p>Note that $$\prod_{n=2}^{\infty} \left(1-\frac{1}{n^{2}}\right) \to \frac{1}{2}$$</p>
<p>This is because $$A_{n} =\prod_{k=2}^{n}\left(1-\frac{1}{k^2}\right) = \prod_{k=2}^{n} \frac{(k-1)(k+1)}{k^2} = \frac{n+1}{2n} \to \frac{1}{2}$$</p>
<p>We have used $\displaystyle \left(1-\frac{1}{n^4}\right) = \left(1+\frac{1}{n^2}\right) \cdot \left(1-\frac{1}{n^{2}}\right)$</p>
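<p>A quick numerical check of the resulting identity (Python; the cutoff $N=2000$ is an arbitrary choice for this illustration):</p>

```python
import math

N = 2000
partial = 1.0
for n in range(2, N + 1):
    partial *= 1 - 1 / n**4    # partial product of (1 - 1/n^4)

closed_form = (math.exp(math.pi) - math.exp(-math.pi)) / (8 * math.pi)
print(partial, closed_form)    # both are approximately 0.91902
```

<p>The tail of the product beyond $N$ contributes on the order of $1/(3N^3)$, so the partial product already agrees with the closed form to many digits.</p>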
|
156,479 | <p>Let $S$ be a compact oriented surface of genus at least $2$ (possibly with boundary). Let $X$ be a connected component of the space of embeddings of $S^1$ into $S$.</p>
<p>Question : what is the fundamental group of $X$? My guess is that the answer is $\mathbb{Z}$ with generator the loop of embeddings obtained by precomposing the base embedding with a sequence of rotations of $S^1$.</p>
<p>I'm also interested in the higher homotopy groups of $X$, which I would guess are trivial.</p>
<hr>
<p>Edit: In response to Sam Nead's question, I'm most interested in the smooth category, but am also interested in the topological category. There are technical issues in giving an appropriate topology to mapping spaces in the PL category, so the question doesn't really make sense there.</p>
| Will Sawin | 18,060 | <p>For $S$ the sphere, assuming smooth embeddings, any curve divides the sphere into two discs, hence is diffeomorphic to the equator. Then $X$ is a quotient space of the orientation-preserving diffeomorphism group of the sphere by the subgroup that preserves the equator. The orientation-preserving diffeomorphism group of the sphere is homotopic to $SO_3$. The subgroup that preserves the equator is a product of two copies of the group of diffeomorphisms of the disc that fix the boundary. This is known to be contractible.</p>
<p>Thus the space is homotopy equivalent to $SO_3$, which is the unit tangent bundle of $S^2$.</p>
<p>So as Sam Nead suspected, there is a lot of higher homotopy.</p>
|
2,618,804 | <p>Let $V$ be a vector space of dimension $m\geq 2$ and $ T: V\to V$ be a linear transformation such that $T^{n+1}=0$ and $T^{n}\neq 0$ for some $n\geq1$ .Then choose the correct statement(s):</p>
<p>$(1)$ $rank(T^n)\leq nullity(T^n)$</p>
<p>$(2)$ $rank(T^n)\leq nullity(T^{n+1})$</p>
<p><strong>Try:</strong></p>
<p>I found that this case is only possible if $n<m$, and I tried some examples for $(2)$ and found it to be true, but I have no idea how to prove it.
For $(1)$ I am not getting anywhere. </p>
| Community | -1 | <p>1.</p>
<p>Let $y\in \operatorname{Range}(T)$. Then $y=T(x)$ for some $x\in V$, and $T^{n}(y)=T^{n+1}(x)=0$, so $y \in \operatorname{Ker} T^n$. Since $\operatorname{Range}(T^n)\subseteq \operatorname{Range}(T)\subseteq \operatorname{Ker}T^n$, this gives $\operatorname{rank}(T^n)\leq \operatorname{nullity}(T^n)$.</p>
<ol start="2">
<li><p>Let $y \in \operatorname{Ker}T^n$. Then $T^n(y)=0$, so $T^{n+1}(y)=T(T^n(y))=T(0)=0$ and $y \in \operatorname{Ker}T^{n+1}$. Hence $\operatorname{nullity}(T^n)\leq \operatorname{nullity}(T^{n+1})$, and together with $(1)$ this gives $\operatorname{rank}(T^n)\leq \operatorname{nullity}(T^{n+1})$.</p></li>
</ol>
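<p>A concrete check of both inequalities (a Python sketch written for this illustration; the $3\times 3$ shift matrix has $T^3=0$ and $T^2\neq 0$, i.e. $n=2$, $m=3$, and rank is computed by plain Gaussian elimination):</p>

```python
def matmul(A, B):
    m = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(m)]
            for i in range(m)]

def rank(M):
    # plain Gaussian elimination over floats -- fine for these tiny matrices
    A = [[float(x) for x in row] for row in M]
    r = 0
    for c in range(len(A[0])):
        piv = next((i for i in range(r, len(A)) if abs(A[i][c]) > 1e-9), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        A[r] = [x / A[r][c] for x in A[r]]
        for i in range(len(A)):
            if i != r:
                factor = A[i][c]
                A[i] = [x - factor * y for x, y in zip(A[i], A[r])]
        r += 1
    return r

T = [[0, 1, 0],
     [0, 0, 1],
     [0, 0, 0]]          # shift matrix: T^3 = 0, T^2 != 0, so n = 2, m = 3
T2 = matmul(T, T)
T3 = matmul(T2, T)
m = 3
print(rank(T2), m - rank(T2), m - rank(T3))
# rank(T^n) = 1, nullity(T^n) = 2, nullity(T^{n+1}) = 3: both inequalities hold
```
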
|
66,370 | <p>Let $(X,\mathcal{E},\mu)$ be a measure space. Let $u,v$ be $\mu$-measurable functions. If $0 \leq u \leq v$ and $\int_X v d\mu$ exists we know that $\int_X u d\mu \leq \int_X v d\mu$.</p>
<p>I wanted to know if $0 \leq u < v$ and $\int_X v d\mu$ exists then is it true that
$\int_X u d\mu < \int_X v d\mu$? This can be shown for simple functions easily i.e. if $u,v$ are simple.</p>
<p>I have assumed here that $\int_X u d\mu = \sup \{\int_X u_n d\mu, u_n \text{simple}, \mu-\text{measurable}, u_n \leq u\}$ where a simple function is defined to be a function
whose cardinality of the range is finite.</p>
<p>Any help is greatly appreciated.</p>
<p>Thanks,
Phanindra</p>
| Ilya | 5,887 | <p><strong>Edit (after 11 years):</strong> Nowhere (in the OP or in the previous version of this answer) is it mentioned that the strict integral inequality only holds if <span class="math-container">$v>u$</span> on some set of strictly positive measure.</p>
<hr />
<p>First of all, <span class="math-container">$\int_X v\,d\mu$</span> exists since <span class="math-container">$v\geq 0$</span>. Second, you want to show that <span class="math-container">$\int\limits_X(v-u)\,d\mu>0$</span> where <span class="math-container">$v-u$</span> is an arbitrary positive measurable function, so it's the same as to ask if <span class="math-container">$\int\limits_Xf\,d\mu>0$</span> for a positive function <span class="math-container">$f$</span> (of course, measurable).</p>
<p>Consider the set <span class="math-container">$X_n = \{x\in X:f(x)\geq\frac1n\}$</span> for all <span class="math-container">$n\in \mathbb N$</span>. Then <span class="math-container">$\bigcup\limits_{n=1}^\infty X_n = X$</span> and <span class="math-container">$X_n\subseteq X_{n+1}$</span>.</p>
<p>Now, suppose that <span class="math-container">$\mu(X)>0$</span> and
<span class="math-container">$$
\int\limits_Xf\,d\mu = 0
$$</span>
so
<span class="math-container">$$
0 = \int\limits_Xf\,d\mu \geq\int\limits_{X_n}f\,d\mu\geq\frac1n\mu(X_n)
$$</span>
so <span class="math-container">$\mu(X_n) = 0$</span> for any <span class="math-container">$n\in\mathbb N$</span>. By the continuity of measure we obtain that
<span class="math-container">$$
\mu(X) = \mu\left(\bigcup\limits_{n=1}^\infty X_n\right) = \lim\limits_{n\to\infty}\mu(X_n) = 0
$$</span>
which is not true.</p>
|
1,613,171 | <p>On page $61$ of the book <a href="http://solmu.math.helsinki.fi/2010/algebra.pdf" rel="nofollow">Algebra</a> by Tauno Metsänkylä, Marjatta Näätänen, it states</p>
<blockquote>
<p>$\langle \emptyset \rangle =\{1\},\langle 1 \rangle =\{1\}. H\leq G \implies \langle H \rangle =H$</p>
</blockquote>
<p>where $H \leq G$ means that H is the subgroup of G.</p>
<p>Now assume $H=\emptyset$, so $\langle \emptyset \rangle = \emptyset \neq \{1\}$, a contradiction. Please explain the line from p. 61 of the book quoted in orange above.</p>
| Michael Albanese | 39,599 | <p>The notation $H \leq G$ means that $H$ is a subgroup of $G$. Your proposed counterexample fails because $\emptyset$ is not a subgroup of $G$ (it doesn't contain the identity element).</p>
|
1,844,374 | <p>Why does the "$\times$" used in arithmetic change to a "$\cdot$" as we progress through education? The symbol seems to only be ambiguous because of the variable $x$; however, we wouldn't have chosen the variable $x$ unless we were already removing $\times$ as the symbol for multiplication. So why do we? I am very curious. It seems like $\times$ is already quite sufficient as a descriptive symbol.</p>
| Fay | 530,222 | <p>The x might be used for younger children instead of the ⋅ because they might confuse the dot with the decimal point, especially if the equation is handwritten. The x, on the other hand, can't be confused with a different symbol, since they are probably not doing algebra yet. Just a thought. (I just realized other people said this, sorry.)</p>
|
4,203,704 | <p>Understanding the Yoneda lemma maps.</p>
<p>I'm trying to understand the maps between the natural transformations and <span class="math-container">$F(A)$</span> in the proof of the Yoneda lemma. I've been struggling for a bit to understand the Yoneda lemma, so I'm trying to understand the mapping construction as a recipe and using that to understand the theorem and gain an intuition for it rather than vice versa.</p>
<p><strong>What, exactly, are the two maps between the natural transformations and <span class="math-container">$F(A)$</span> in the proof of the (covariant) Yoneda lemma?</strong></p>
<hr />
<p><a href="https://proofwiki.org/wiki/Yoneda_Lemma_for_Covariant_Functors" rel="nofollow noreferrer">This entry on ProofWiki</a> shows explicit constructions for the maps from <span class="math-container">$F(A)$</span> to <span class="math-container">$\mathrm{Nat}(h_A, F(A))$</span> and the reverse.</p>
<p>I'm going to change the notation slightly and use a superscript to denote which category an object is in. <span class="math-container">$x^C$</span> is an object in the category <span class="math-container">$C$</span>. <span class="math-container">$f^{\mathrm{Mor}(C)}$</span> is an arrow in the category <span class="math-container">$C$</span>. I will sometimes use a specific set, such as <span class="math-container">$F(A)$</span>, as an annotation. <span class="math-container">$x^{F(A)}$</span> means <span class="math-container">$x^{\mathrm{Set}}$</span> and, additionally, <span class="math-container">$x$</span> is an element of the set <span class="math-container">$F(A)$</span>.</p>
<p>The map <span class="math-container">$\alpha$</span> goes from <span class="math-container">$\mathrm{Nat}(h_A, F)$</span> to <span class="math-container">$F(A)$</span>. The map is defined as follows.</p>
<p><span class="math-container">$$ \alpha \;\;\text{is}\;\; \eta \mapsto \eta_A(\text{id}_A) $$</span></p>
<p>I am really confused why the RHS is not just <span class="math-container">$\eta_A$</span>. Since <span class="math-container">$\eta$</span> is a natural transformation between functors from <span class="math-container">$C$</span> to <span class="math-container">$\mathrm{Set}$</span>, this would mean that <span class="math-container">$\eta_A$</span> is in <span class="math-container">$\text{Mor}(\mathrm{Set})$</span>. However, <span class="math-container">$\text{id}_A$</span> is also in <span class="math-container">$\text{Mor}(\mathrm{Set})$</span>. I don't know how this construction produces an object in <span class="math-container">$\mathrm{Set}$</span>.</p>
<p>With type annotations, I think you get</p>
<p><span class="math-container">$$ \eta^{\mathrm{Nat}(h_A, F(A))} \mapsto \eta_A^{\mathrm{Mor}(\mathrm{Set})}(\text{id}_A^{\mathrm{Mor}(\mathrm{Set})}) $$</span></p>
<p>I'm also confused about the <span class="math-container">$\beta$</span> map. <span class="math-container">$\beta$</span> goes from <span class="math-container">$F(A)$</span> to <span class="math-container">$\mathrm{Nat}(h_A, F)$</span>.</p>
<p>Here is what <span class="math-container">$\beta$</span> looks like with type annotations on parameters alone and the ultimate RHS. I inferred the types myself, but the overall expression is similar to the presentation on ProofWiki.</p>
<p><span class="math-container">$$ u^{F(A)} \mapsto x^C \mapsto f^{\mathrm{Mor}(C)} \mapsto (Ff)(u)^{\mathrm{Set}} $$</span></p>
<p>I'm also confused by this expression. A given, specific natural transformation <span class="math-container">$\eta$</span> can be though of as a map from <span class="math-container">$C$</span> to <span class="math-container">$\mathrm{Mor}(\mathrm{Set})$</span>, i.e. it associates morphisms in the target category to objects in the source category.</p>
<p>Given this, it's hard for me to see why we don't end up with something of the following form for <span class="math-container">$\beta$</span>, i.e. we're given an element of <span class="math-container">$F(A)$</span>, and our map <span class="math-container">$\beta$</span> kicks out the component map of a natural transformation.</p>
<p><span class="math-container">$$ u^{F(A)} \mapsto x^C \mapsto (\cdots)^{\mathrm{Mor}(\mathrm{Set})} $$</span></p>
<p>So, what exactly are the maps between <span class="math-container">$F(A)$</span> and the natural transformations used in the proof of the Yoneda lemma? I'm having a hard time finding the proof presented in a substantially different way (or a more elementary way) from what ProofWiki does. For example, the <a href="https://en.wikipedia.org/wiki/Yoneda_lemma#Proof" rel="nofollow noreferrer">proof on Wikipedia</a> seems broadly similar in terms of the explicit construction, which makes me think I'm missing something big/obvious.</p>
| azif00 | 680,927 | <ul>
<li>No, <span class="math-container">$\eta$</span> is a natural transformation between <span class="math-container">$h^A : C \to \textsf{Set}$</span> and <span class="math-container">$F : C \to \textsf{Set}$</span>, so for each object <span class="math-container">$X$</span> in <span class="math-container">$C$</span> we have the component <span class="math-container">$\eta_X : h^A(X) \to F(X)$</span>, which is a morphism in <span class="math-container">$\textsf{Set}$</span>. In particular, <span class="math-container">$\eta_A : h^A(A) \to F(A)$</span> maps <span class="math-container">$\operatorname{id}_A \in h^A(A)$</span> to an element of the set <span class="math-container">$F(A)$</span>.</li>
<li>Now, fix an element <span class="math-container">$u$</span> of the set <span class="math-container">$F(A)$</span>. We want to define a natural transformation <span class="math-container">$\beta(u)$</span> between <span class="math-container">$h^A : C \to \textsf{Set}$</span> and <span class="math-container">$F : C \to \textsf{Set}$</span>, so for each object <span class="math-container">$X$</span> in <span class="math-container">$C$</span> we want to define a morphism <span class="math-container">$$\beta(u)_X : h^A(X) \to F(X)$$</span> in <span class="math-container">$\textsf{Set}$</span>. The natural way is by mapping <span class="math-container">$f \in h^A(X)$</span> (which is a morphism <span class="math-container">$A \to X$</span> in <span class="math-container">$C$</span>) to the <em>value of <span class="math-container">$F(f) : F(A) \to F(X)$</span> at <span class="math-container">$u$</span></em>, in other words <span class="math-container">$\beta(u)_X$</span> maps <span class="math-container">$f \in h^A(X)$</span> to <span class="math-container">$F(f)(u)$</span>. <br />
Thus <span class="math-container">$\beta(u)_X$</span> is "<span class="math-container">$f \mapsto F(f)(u)$</span>", and <span class="math-container">$\beta(u)$</span> is "<span class="math-container">$X \mapsto (f \mapsto F(f)(u))$</span>".</li>
</ul>
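<p>For intuition, the two maps can be made concrete in Python for the list functor (an informal sketch, not part of the proof; <code>F_map</code> stands in for the functor's action on morphisms, and the component at <span class="math-container">$X$</span> is indexed implicitly by the codomain of <span class="math-container">$f$</span>):</p>

```python
def F_map(f):                      # the list functor's action on a morphism f
    return lambda xs: [f(x) for x in xs]

def beta(u):                       # element u of F(A)  ->  natural transformation
    return lambda f: F_map(f)(u)   # component at X, evaluated on f : A -> X

def alpha(eta):                    # natural transformation  ->  element of F(A)
    return eta(lambda a: a)        # evaluate the component at A on id_A

u = [1, 2, 3]                      # an element of F(A), here with A = int
assert alpha(beta(u)) == u         # the round trip recovers u
assert beta(u)(lambda n: n * 10) == [10, 20, 30]   # this is F(f)(u)
```

<p>The round trip <span class="math-container">$\alpha(\beta(u)) = \beta(u)(\operatorname{id}_A) = F(\operatorname{id}_A)(u) = u$</span> is exactly the easy half of the bijection in the proof.</p>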
|
250,687 | <p>I'm doing a sanity check of the following equation:
<span class="math-container">$$\sum_{j=2}^\infty \frac{(-x)^j}{j!}\zeta(j) \approx x(\log x + 2 \gamma -1)$$</span></p>
<p>Naive comparison of the two shows a bad match but I suspect one of the graphs is incorrect.</p>
<ol>
<li>Why isn't there a warning?</li>
<li>How do I compute this sum correctly?</li>
</ol>
<pre><code>katsurda[x_] := NSum[(-x)^j/j! Zeta[j], {j, 2, Infinity}];
katsurdaApprox[x_] := x (Log[x] + 2 EulerGamma - 1) - Zeta[0];
plot1 = DiscretePlot[katsurda[x], {x, 0, 40, 2}];
plot2 = Plot[katsurdaApprox[x], {x, 0, 40}];
Show[plot1, plot2]
</code></pre>
<p><a href="https://i.stack.imgur.com/pBmVX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pBmVX.png" alt="enter image description here" /></a></p>
<ol start="3">
<li><strong>meta</strong> How do I avoid being misled by incorrect numeric results? Would using <code>NIntegrate</code> instead of <code>NSum</code> give better guarantees? My usual approach of avoiding machine precision, checking <code>Precision</code> of the answer, and minding warnings fails in the example below</li>
</ol>
<pre><code>katsurda[x_] :=
NSum[(-x)^j/j! Zeta[j], {j, 2, Infinity}, WorkingPrecision -> 32,
NSumTerms -> 2.5 x];
katsurdaApprox[x_] := x (Log[x] + 2 EulerGamma - 1) - Zeta[0];
Print["Precision: ", Precision@katsurda[100]] (* 13.9729 *)
Print["Discrepancy: ", katsurda[100] - katsurdaApprox[100]] (* 94.65088290385, but should be <1 *)
</code></pre>
<p><strong>Background:</strong> the expression comes from "Power series with the Riemann zeta-function in the coefficients" by Katsurada M (<a href="https://projecteuclid.org/journals/proceedings-of-the-japan-academy-series-a-mathematical-sciences/volume-72/issue-3/Power-series-with-the-Riemann-zeta-function-in-the-coefficients/10.3792/pjaa.72.61.pdf" rel="nofollow noreferrer">paper</a>)</p>
| Michael E2 | 4,999 | <p>The sum is alternating, so you might need extra precision and <code>NSumTerms</code>:</p>
<pre><code>katsurda[x_] :=
NSum[(-x)^j/j! Zeta[j], {j, 2, Infinity}, WorkingPrecision -> 16,
NSumTerms -> Max[15, 2 x]];
katsurdaApprox[x_] := x (Log[x] + 2 EulerGamma - 1) - Zeta[0];
plot1 = DiscretePlot[katsurda[x], {x, 0, 40, 2}];
plot2 = Plot[katsurdaApprox[x], {x, 0, 40}];
Show[plot1, plot2, PlotRange -> All]
</code></pre>
<p><a href="https://i.stack.imgur.com/bxLSQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bxLSQ.png" alt="enter image description here" /></a></p>
<p>Note: There is a warning, but it's suppressed by the plotter:</p>
<pre><code>Block[{x = 30},
NSum[(-x)^j/j! Zeta[j], {j, 2, Infinity}]
]
</code></pre>
<blockquote>
<p>NumericalMath`NSequenceLimit::seqlim: The general form of the sequence could not be determined, and the result may be incorrect.</p>
</blockquote>
<pre><code>(* 126.442 *)
</code></pre>
<p><strong>Update:</strong></p>
<p>Here's a way to estimate the needed precision by estimating the largest term in the series. The method <code>"AlternatingSigns"</code> is (or should be) a reliable method for the sum, provided the working precision is high enough.</p>
<pre><code>katsurda[x_] := NSum[(-x)^j/j! Zeta[j], {j, 2, Infinity},
WorkingPrecision ->
16 +
FindMaxValue[{(j0*Log[x] - LogGamma[1 + j0])/Log[10],
j0 > 1}, {j0, x}], NSumTerms -> Max[15, 4 + 2 x],
Method -> "AlternatingSigns"];
katsurdaApprox[x_] := x (Log[x] + 2 EulerGamma - 1) - Zeta[0];
plot1 = DiscretePlot[katsurda[x], {x, Round[1.5^Range[10, 17]]}];
plot2 = Plot[katsurdaApprox[x], {x, 50, 1000}];
Show[plot1, plot2, PlotRange -> All]
</code></pre>
<p><a href="https://i.stack.imgur.com/1HcJp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1HcJp.png" alt="enter image description here" /></a></p>
<p><strong>Update 2:</strong></p>
<p>After some playing, we can see that the maximum for large <code>x</code> is around <code>j == x</code>, and <code>Method -> "AlternatingSigns"</code> adjusts the number of terms needed automatically. So <code>WorkingPrecision -> 16 + x</code> ensures a sufficient precision for <code>x >= 0</code>. Thus here's a simplified code:</p>
<pre><code>katsurda[x_] :=
NSum[(-x)^j/j! Zeta[j], {j, 2, Infinity},
Method -> "AlternatingSigns", WorkingPrecision -> 16 + x]
</code></pre>
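<p>The root cause is cancellation between huge alternating terms whose sum is comparatively tiny. The same phenomenon is easy to reproduce outside Mathematica with the simpler alternating series for <code>Exp[-x]</code> (a Python illustration of the effect, not of the ζ series itself):</p>

```python
import math
from fractions import Fraction

def naive_exp_neg(x, terms=200):
    # float64 partial sums of sum_j (-x)^j / j!
    s, t = 0.0, 1.0
    for j in range(1, terms):
        s += t
        t *= -x / j
    return s

def exact_exp_neg(x, terms=200):
    # same series in exact rational arithmetic, rounded only at the very end
    s, t = Fraction(0), Fraction(1)
    for j in range(1, terms):
        s += t
        t *= Fraction(-x, j)
    return float(s)

x = 40
print(naive_exp_neg(x))   # garbage: rounding of terms of size ~e^x swamps the answer
print(exact_exp_neg(x))   # ~4.25e-18, matching math.exp(-40)
print(math.exp(-x))
```

<p>At <code>x = 40</code> the largest terms are of order <span class="math-container">$e^{40}\approx 10^{17}$</span>, so machine precision leaves an absolute error of order 1 — vastly larger than the true value. This is the same reason the ζ series above needs roughly <code>16 + x</code> digits of working precision.</p>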
|
3,124,285 | <p>I have written a proof, and I would appreciate verification. The problem is taken from "Set Theory and Matrices" by I. Kaplansky.</p>
<hr>
<p><em>Proof</em>. Let <span class="math-container">$a_1=f(x)$</span> and <span class="math-container">$a_2=f(y)$</span>, then</p>
<p><span class="math-container">$g(a_1)=g(a_2) \Longrightarrow g(f(x))=g(f(y)) \overset{def}{\Longrightarrow} x=y \Longrightarrow f(x)=f(y) \Longrightarrow a_1=a_2$</span></p>
<p>Therefore, according to transitivity, we get that <span class="math-container">$\,g(a_1)=g(a_2) \Longrightarrow a_1=a_2$</span>, which is (one of) the definition(s) of an injective function. Thus, <span class="math-container">$g$</span> is injective if <span class="math-container">$gf$</span> is injective. QED</p>
<hr>
<p>My main insecurity is the first step, along with some uncertainty about where to explicitly highlight that <span class="math-container">$f$</span> is surjective. </p>
| Sri-Amirthan Theivendran | 302,692 | <p>First be precise about what you are proving: Let <span class="math-container">$f\colon X\to Y$</span> be a surjective map and <span class="math-container">$g\colon Y\to Z$</span> be a map such that <span class="math-container">$g\circ f$</span> is injective. Then <span class="math-container">$g$</span> is injective.</p>
<p>In your proof you never explicitly state where you are using the fact that <span class="math-container">$f$</span> is surjective. I would rewrite the first step as follows. Suppose that <span class="math-container">$g(a_1)=g(a_2)$</span> for some <span class="math-container">$a_1, a_2\in Y$</span>. Since <span class="math-container">$f$</span> is surjective there exists <span class="math-container">$x, y\in X$</span> such that <span class="math-container">$f(x)=a_1$</span> and <span class="math-container">$f(y)=a_2$</span>.</p>
<p>Then we can proceed with your proof. I would also recommend using words rather than symbols when writing proofs. </p>
<p>So to continue with the proof I would write, it follows that <span class="math-container">$g(f(x))=g(f(y))$</span>. Since <span class="math-container">$g\circ f$</span> is injective it is the case that <span class="math-container">$x=y$</span> and hence <span class="math-container">$a_1=a_2$</span>.</p>
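<p>For reassurance, the statement can be checked exhaustively on small finite sets; a brute-force Python sketch (the set sizes here are arbitrary small choices made for this illustration):</p>

```python
from itertools import product

def injective(f, dom):
    return len({f[x] for x in dom}) == len(dom)

X, Y, Z = range(2), range(2), range(3)
checked = 0
for fv in product(Y, repeat=len(X)):
    f = dict(zip(X, fv))
    if set(f.values()) != set(Y):          # keep only surjective f : X -> Y
        continue
    for gv in product(Z, repeat=len(Y)):
        g = dict(zip(Y, gv))
        gf = {x: g[f[x]] for x in X}       # the composition g o f
        if injective(gf, X):
            assert injective(g, Y)         # the claim being proved above
            checked += 1
print(checked)   # number of (f, g) pairs with g o f injective
```

<p>Every pair with <span class="math-container">$g\circ f$</span> injective indeed has <span class="math-container">$g$</span> injective, so the assertion never fires.</p>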
|
121,541 | <p>I'd like to pick <em>k</em> points from a set of points in <em>n</em>-dimensions that are approximately "maximally apart" (sum of pairwise distances is almost maxed). What is an efficient way to do this in MMA? Using the solution from C Woods, for example:</p>
<pre><code>KFN[list_, k_Integer?Positive] := Module[{kTuples},
kTuples=Subsets[RandomSample[list,Max[k*2,100]],{k}, 1000];
MaximalBy[kTuples,Total[Flatten[Outer[EuclideanDistance[#1,#2]&,#,#,1]]]&]
]
pts=RandomReal[1,{100,3}]
kfn=KFN[pts,3][[1]]
Graphics3D[{Blue,Point[pts],PointSize[Large],Red,Point[kfn]}]
</code></pre>
<p><a href="https://i.stack.imgur.com/ts1qt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ts1qt.png" alt="enter image description here"></a></p>
<p>But here's the catch: I need this algorithm to be efficient enough to scale to this size:</p>
<pre><code>pts=RandomReal[1,{10^6,10^3}];
kfn=KFN[pts,100]
Graphics3D[{Blue,Point[pts],PointSize[Large],Red,Point[kfn]}]
</code></pre>
<p><strong>Notes</strong></p>
<ol>
<li>The comment from J.M. works for <em>k=2</em>, but hangs at <em>k=4</em>. </li>
<li><p>Here's a nice paper on k-FN:</p>
<ul>
<li><a href="http://www.dcs.gla.ac.uk/workshops/ddr2012/papers/p3said.pdf" rel="nofollow noreferrer">http://www.dcs.gla.ac.uk/workshops/ddr2012/papers/p3said.pdf</a></li>
</ul></li>
</ol>
| C. Woods | 37,886 | <p>Here is a naive solution that can be slow for large lists of points: </p>
<pre><code>KFN[list_, k_Integer?Positive] := Module[{kTuples},
kTuples = Subsets[list, {k}];
MaximalBy[kTuples,
Total[Flatten[Outer[EuclideanDistance[#1, #2] &, #, #, 1]]] &]
]
</code></pre>
<p>(Use of <code>Subsets</code> function thanks to N.J. Evans. )
I'm not convinced there is a computationally "efficient" solution for this problem, but there are probably better algorithms than the one I have given you.</p>
|
121,541 | <p>I'd like to pick <em>k</em> points from a set of points in <em>n</em>-dimensions that are approximately "maximally apart" (sum of pairwise distances is almost maxed). What is an efficient way to do this in MMA? Using the solution from C Woods, for example:</p>
<pre><code>KFN[list_, k_Integer?Positive] := Module[{kTuples},
kTuples=Subsets[RandomSample[list,Max[k*2,100]],{k}, 1000];
MaximalBy[kTuples,Total[Flatten[Outer[EuclideanDistance[#1,#2]&,#,#,1]]]&]
]
pts=RandomReal[1,{100,3}]
kfn=KFN[pts,3][[1]]
Graphics3D[{Blue,Point[pts],PointSize[Large],Red,Point[kfn]}]
</code></pre>
<p><a href="https://i.stack.imgur.com/ts1qt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ts1qt.png" alt="enter image description here"></a></p>
<p>But here's the catch: I need this algorithm to be efficient enough to scale to this size:</p>
<pre><code>pts=RandomReal[1,{10^6,10^3}];
kfn=KFN[pts,100]
Graphics3D[{Blue,Point[pts],PointSize[Large],Red,Point[kfn]}]
</code></pre>
<p><strong>Notes</strong></p>
<ol>
<li>The comment from J.M. works for <em>k=2</em>, but hangs at <em>k=4</em>. </li>
<li><p>Here's a nice paper on k-FN:</p>
<ul>
<li><a href="http://www.dcs.gla.ac.uk/workshops/ddr2012/papers/p3said.pdf" rel="nofollow noreferrer">http://www.dcs.gla.ac.uk/workshops/ddr2012/papers/p3said.pdf</a></li>
</ul></li>
</ol>
| Daniel Lichtblau | 51 | <p>Not necessarily best of quality but maybe could be made better with a bit of tuning.</p>
<pre><code>kDistant[pts_List, n_] := Module[
{objfun, len = Length[pts], ords, a, c1},
ords = Array[a, n];
c1 = Flatten[{Map[.5 <= # <= len + .5 &, ords],
Element[ords, Integers]}];
objfun[oo : {_Integer ..}] := Module[
{ovals = Clip[oo, {1, len}], ptlist, diffs},
ptlist = pts[[ovals]];
diffs =
Flatten[Table[
ptlist[[j]] - ptlist[[k]], {j, n - 1}, {k, j + 1, n}], 1];
Total[Map[Sqrt[#.#] &, diffs]]
];
Round[NArgMax[{objfun[ords], c1}, ords,
Method -> {"DifferentialEvolution", "SearchPoints" -> 40},
MaxIterations -> 400]]
]
</code></pre>
<p>Example searching for 10 from 100000 points.</p>
<pre><code>n = 10;
len = 10^5;
pts = RandomReal[{-10, 10}, {len, 3}];
Timing[furthest = kDistant[pts, n]]
(* Out[6]= {15.85276, {34502, 90523, 79761, 66318, 53570, 18000, 80585,
12958, 87680, 70241}} *)
</code></pre>
<p>Seems to have done alright, points tend toward corners. Some pairs are close but that's probably an indication that the objective is not the best to use for <code>k</code> large.</p>
<pre><code>Graphics3D[{Red, PointSize[Large], Point[pts[[furthest]]]}]
</code></pre>
<p><a href="https://i.stack.imgur.com/MulkV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MulkV.png" alt="enter image description here"></a></p>
<p>--- edit ---</p>
<p>The variant below might be better for some purposes. Instead of maximizing the sum of all pairwise distances it will maximize the sum of all minimum separations, which should have the effect of pushing all points away from one another. Some experimentation led me to increase the iterations and also slightly up the number of search points.</p>
<pre><code>kDistantMins[pts_List, n_] := Module[
{objfun, len = Length[pts], ords, a, c1},
ords = Array[a, n];
c1 = Flatten[{Map[.5 <= # <= len + .5 &, ords],
Element[ords, Integers]}];
objfun[oo : {_Integer ..}] := Module[
{ovals = Clip[oo, {1, len}], ptlist, diffs, dnorms, minnorms},
ptlist = pts[[ovals]];
diffs =
Table[ptlist[[j]] - ptlist[[k]], {j, n - 1}, {k, j + 1, n}];
dnorms = Map[Sqrt[#.#] &, diffs, {2}];
minnorms = Map[Min, dnorms];
Total[minnorms]];
Round[NArgMax[{objfun[ords], c1}, ords,
Method -> {"DifferentialEvolution", "SearchPoints" -> 50},
MaxIterations -> 1000]]
];
</code></pre>
<p>Whether this turns out to be an improvement will depend on the end goal I guess.</p>
<p>--- end edit ---</p>
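<p>As a point of comparison (not part of the Mathematica approach above): the classical greedy "farthest-first traversal" heuristic targets the same max-min separation idea with only O(nk) distance evaluations. A Python sketch:</p>

```python
def farthest_first(pts, k):
    # Greedy: start from the first point, then repeatedly add the point
    # whose distance to the current selection is largest (max-min criterion).
    chosen = [pts[0]]
    d2 = [sum((a - b) ** 2 for a, b in zip(p, chosen[0])) for p in pts]
    for _ in range(k - 1):
        i = max(range(len(pts)), key=d2.__getitem__)
        chosen.append(pts[i])
        for j, p in enumerate(pts):         # update distance to nearest chosen
            nd = sum((a - b) ** 2 for a, b in zip(p, pts[i]))
            if nd < d2[j]:
                d2[j] = nd
    return chosen

pts = [(0, 0), (0.5, 0.5), (0, 1), (1, 0), (1, 1), (0.4, 0.6)]
print(farthest_first(pts, 4))   # picks the four corners of the unit square
```

<p>Like the optimization above, this only approximates the true optimum, but it scales comfortably to the $10^6$-point regime in the question.</p>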
|
260,516 | <p>I was inspired by <a href="https://math.stackexchange.com/questions/2062960/there-exist-infinite-many-n-in-mathbbn-such-that-s-n-s-n-frac1n2?noredirect=1#comment4336226_2062960">this</a> topic on Math.SE.<br>
Suppose that $H_n = \sum\limits_{k=1}^n \frac{1}{k}$ is the $n$th harmonic number. Then</p>
<h2>Conjecture</h2>
<blockquote>
<p>Let $M$ be a set of all $n$ such that
$$H_n - \lfloor H_n\rfloor < \frac{1}{n^{1+\epsilon}}.$$
Then
$$\forall\epsilon>0 : |M| = \bar\eta(\epsilon) < \infty.$$</p>
</blockquote>
<p>Picture below illustrates my conjecture, where I have checked this conjecture for $n < 10^6$ for each $\epsilon\in(0,1.1)$ with step 0.01
<a href="https://i.stack.imgur.com/ArpSz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ArpSz.png" alt="enter image description here"></a> </p>
<p>The following picture is based on data provided by @GottfriedHelms for $n \approx 10^{100}$ (see answer below).
<a href="https://i.stack.imgur.com/ySVSZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ySVSZ.png" alt="enter image description here"></a></p>
| Gottfried Helms | 7,710 | <p><strong><em>This is a comment at the comments of Gerhard "Still Computing Oh So Slowly" Paseman's answer, giving just indexes n for more record-holders.</em></strong> </p>
<p>Let $\small h_n$ denote the <em>n</em>'th harmonic number and $\small A_n=\{h_n\}$ its fractional part. The sequence of $A_n$ has a remarkable sawtooth-like shape with increasing wavelength as <em>n</em> increases and with sharp local minima - a shape which can be exploited when we search for possible <em>n</em> to be included in <strong><em>M</em></strong>.<br>
I use $\small f(n) = w_n = \{h_n\} \cdot n$ and $\small f(n,\varepsilon)=w_n \cdot n^\varepsilon $ with the OP's condition rewritten as $\small f(n,\varepsilon)<1 $ as the criterion for the inclusion of <em>n</em> into the set <strong><em>M</em></strong>. Of course the cardinality of <strong><em>M</em></strong> is limited by an upper bound $ \small n \le N$ with some <strong><em>N</em></strong> that can be handled numerically, so actually we should write explicitly $\small |M(\varepsilon,N)| $ instead of <strong><em>M</em></strong> only. I could manage to use $\small N \approx e^{2000}$ with the help of Pari/GP. </p>
<p>The local minima of $\small A_n$ occur near $\small x_k=e^{k-\gamma}, k \in \mathbb N^+$, and the <em>n</em> to be tested is one of the two integers enclosing $\small x_k$ - so this exact <em>n</em> must be determined empirically. (Note that my <em>n</em> are the <em>n+1</em> used in the comments at the OP denoting the previous <em>high</em> value of <strong><em>A</em></strong>.) </p>
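<p>That location rule is easy to reproduce for the first few <em>k</em>; here is a small check (Python rather than Pari/GP, purely for illustration; the direct float sum is fine at these small <em>n</em>):</p>

```python
import math

EULER = 0.5772156649015329

def frac_H(n):
    # fractional part of the n-th harmonic number
    return math.fsum(1.0 / k for k in range(1, n + 1)) % 1.0

for k in range(4, 7):
    n = round(math.exp(k - EULER))   # nearest integer to x_k = e^(k - gamma)
    print(k, n, frac_H(n))           # n = 31, 83, 227 -- the same n as in the table below
```

<p>For these <em>k</em> the nearest integer to $\small x_k$ happens to be exactly the record-holding <em>n</em>, with very small fractional parts $\small A_n$.</p>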
<p>The harmonic numbers $\small h_n$ can be computed in Pari/GP using <code>h(n) = psi(1+n) + Euler</code>; however, this seems to be limited to something like $ \small n \lt e^{600}$, so I had to introduce the Euler–Maclaurin formula for the larger <em>n</em> and implemented the switch from one method to the other at <code>n = 1e50</code>. </p>
<p><hr>
The following table shows the first <em>40</em> entries of my <em>2000</em>-row table reaching $\small n \approx 10^{840}$, which was possible to compute using Pari/GP in just a couple of seconds. </p>
<p>The basic option, following the focus of the OP's question, is to test $\small f(n,\varepsilon) = w_n \cdot n^\varepsilon \lt 1$. For an example with $\small \varepsilon=0.1$ see the 6'th column, and the 7'th column, whose entries sum to the cardinality of <strong><em>M</em></strong>; we find $\small | M(0.1,2000) | = 7$ with $\small n \le N \approx e^{2000}$ </p>
<p>A second, much nicer, option is to define the function $\small e(n) = -\log_n(w_n) $ and test $\small e(n) \gt \varepsilon$ whether to include this <em>n</em> or not. This is simply possible when looking into the <em>5</em>'th column and simply compare.<br>
This latter method allows to compute the cardinalities of $\small M(\varepsilon)$ for arbitrary $\varepsilon$ really fast, for instance to create informative scatter- or lineplots, when first a list for $e(n)$ of length $\small N$ is created and then the successful comparisions with the intended $\varepsilon$ are summed to determine the cardinality. </p>
<pre><code> n | h_n | A=frac(h_n) | w=A*n | e(n) | w*n^0.1|in M?
-----------------+--------+-------------+----------+-----------+--------+---
1 1.00000 0.00000 0.00000 1.000000 0.00000 1
4 2.08333 0.0833333 0.333333 0.792481 0.382899 1
11 3.01988 0.0198773 0.218651 0.634006 0.277901 1
31 4.02725 0.0272452 0.844601 0.0491822 1.19066 .
83 5.00207 0.00206827 0.171667 0.398793 0.267051 1
227 6.00437 0.00436671 0.991243 0.00162136 1.70523 .
616 7.00127 0.00127410 0.784844 0.0377178 1.49191 .
1674 8.00049 0.000485572 0.812848 0.0279149 1.70759 .
4550 9.00021 0.000208063 0.946686 0.00650460 2.19790 .
12367 10.0000 0.0000430083 0.531883 0.0670005 1.36472 .
33617 11.0000 0.0000177086 0.595311 0.0497632 1.68811 .
91380 12.0000 0.00000305167 0.278861 0.111798 0.873923 1
248397 13.0000 0.00000122948 0.305399 0.0954806 1.05775 .
675214 14.0000 1.36205E-7 0.919678 0.00623806 3.52030 .
1835421 15.0000 3.78268E-7 0.694281 0.0252988 2.93703 .
4989191 16.0000 9.54538E-7 0.476237 0.0481002 2.22652 .
13562027 17.0000 1.48499E-8 0.201395 0.0975770 1.04059 .
36865412 18.0000 3.71993E-9 0.137137 0.114033 0.783098 1
100210581 19.0000 9.73330E-9 0.975380 0.00135314 6.15552 .
272400600 20.0000 1.61744E-9 0.440592 0.0421997 3.07297 .
740461601 21.0000 4.01333E-9 0.297172 0.0594162 2.29065 .
2012783315 22.0000 1.38447E-10 0.278664 0.0596444 2.37389 .
5471312310 23.0000 1.97920E-11 0.108288 0.0991384 1.01951 .
14872568831 24.0000 2.27220E-11 0.337935 0.0463183 3.51618 .
40427833596 25.0000 6.07937E-12 0.245776 0.0574601 2.82623 .
109894245429 26.0000 7.60776E-12 0.836049 0.00704359 10.6250 .
298723530401 27.0000 1.82203E-12 0.544283 0.0230213 7.64454 .
812014744422 28.0000 5.52830E-13 0.448906 0.0292071 6.96806 .
2207284924203 29.0000 1.00870E-13 0.222650 0.0528504 3.81951 .
6000022499693 30.0000 2.16954E-14 0.130173 0.0692963 2.46795 .
16309752131262 31.0000 3.65111E-14 0.595487 0.0170391 12.4772 .
44334502845080 32.0000 1.81005E-15 0.080247 0.0802804 1.85827 .
120513673457548 33.0000 4.59281E-15 0.553496 0.0182434 14.1651 .
327590128640500 34.0000 2.31992E-15 0.759983 0.00821173 21.4950 .
890482293866031 35.0000 2.71425E-17 0.024169 0.108145 0.755506 1
2420581837980561 36.0000 2.55560E-16 0.618603 0.0135588 21.3700 .
6579823624480555 37.0000 1.20561E-17 0.079326 0.0695767 3.02860 .
17885814992891026 38.0000 4.26642E-18 0.076308 0.0687541 3.21976 .
48618685882356024 39.0000 7.23781E-19 0.035189 0.0871101 1.64093 .
132159290357566703 40.0000 2.02186E-18 0.267208 0.0334763 13.7708 .
</code></pre>
<p>The set $\small M(0.1,e^{40})$ (represented by the <em>40</em> rows above) is the set of all $n$ where $\small f(n,0.1)=w_n \cdot n^{0.1} \lt 1$ . Up to $\small n=132159290357566703 \approx 10^{17}$ there are <em>7</em> such entries. </p>
<p>All numerical tests I have done indicate that the cardinality of $\small M(\varepsilon,\infty)$ is finite for $\varepsilon \gt 0$ and roughly proportional to $1/\varepsilon$; only for $\varepsilon=0$ is it surely infinite. </p>
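<p>For a cross-check independent of Pari/GP, the first few table rows can be reproduced with exact rational arithmetic. The following Python sketch (my addition, not part of the original computation) recomputes $\small h_n$ as an exact fraction and compares $\small w_n = \operatorname{frac}(h_n)\cdot n$ against the table:</p>

```python
from fractions import Fraction
import math

def harmonic(n):
    """Exact harmonic number H_n as a rational number."""
    return sum(Fraction(1, k) for k in range(1, n + 1))

# Cross-check the first few rows of the table above:
# (n, expected floor(h_n), expected w_n = frac(h_n) * n)
rows = [(4, 2, 0.333333), (11, 3, 0.218651), (31, 4, 0.844601), (83, 5, 0.171667)]
for n, whole, w_expected in rows:
    h = harmonic(n)
    w = float(h - math.floor(h)) * n   # w_n = frac(h_n) * n
    assert math.floor(h) == whole
    assert abs(w - w_expected) < 1e-3
    print(n, math.floor(h), round(w, 6))
```

<p>Exact fractions avoid any precision questions, but of course they are only feasible for small $n$; for the huge $n$ in the table the asymptotic formula is the only option.</p>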
<p><hr></p>
<h1>Pari/GP tools</h1>
<p>This is the Pari/GP-program which I used. </p>
<p>Using the functions</p>
<pre><code> h(n) = Euler+if(n<1e50, psi(1+n),log(n)+1/2/n-1/12/n^2+1/120/n^4-1/252/n^6+1/240/n^8)
A(n) = if(n==1,return(0)); frac(h(n))
w(n) = A(n)*n
e(n) = if(n==1,return(1)); -log(w(n))/log(n)
</code></pre>
<p>The following needs only two steps to find the <em>n</em> with the next local minimum when called with index $\small k \in \mathbb N$ : </p>
<pre><code> {find_n(k)=local(n1,n2,a1,a2);
n1=floor(exp(k-Euler)); n2=n1+1; a1 = A(n1);a2 = A(n2);
if(a1<a2,return(n1),return(n2));
}
\\ create that list one time, then evaluate cardinality for various eps
\\ using that same list
{makeList(listlen=40)= local(list,n,w1,logn);
list=matrix(listlen,4);
list[1,]=[Euler,0,0,0];
for(k=2,listlen,
n=find_n(k); logn=log(n);
list[k,]=[logn+Euler,n,w1=w(n),-log(w1)/logn];
);
return(list); }
\\ compute cardinality with some eps
cardM(eps,list) = sum(k=1,#list[,1],list[k,4]>eps)
\\ apply, note: for long lists we need high internal precision
list = makeList(40) \\ the max N is here about e^40
print(cardM(0.1,list) )
</code></pre>
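<p>The <code>find_n</code> routine relies on $\small h_n \approx \ln(n) + \gamma$, so the local minima of $\small \operatorname{frac}(h_n)$ sit right next to $\small n \approx \exp(k - \gamma)$. A hedged Python port (my addition; exact rationals, feasible only for small $k$) reproduces the first minima listed in the table:</p>

```python
import math
from fractions import Fraction

EULER = 0.5772156649015329  # Euler–Mascheroni constant gamma

def harmonic(n):
    """Exact harmonic number H_n as a rational number."""
    return sum(Fraction(1, k) for k in range(1, n + 1))

def frac(x):
    """Fractional part of a (rational) number."""
    return x - math.floor(x)

def find_n(k):
    """Port of the Pari/GP find_n: the n where frac(h_n) hits a local
    minimum, located at floor(exp(k - gamma)) or its successor."""
    n1 = math.floor(math.exp(k - EULER))
    n2 = n1 + 1
    return n1 if frac(harmonic(n1)) < frac(harmonic(n2)) else n2

print([find_n(k) for k in range(2, 6)])  # expect [4, 11, 31, 83] as in the table
```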
<hr>
<p><strong><em>[table 2]</em></strong>: These are sample data for the OP's plot of the cardinality of $\small M(\varepsilon)$ against the argument $\varepsilon$, where $\small N \approx \exp(250) \approx 10^{108}$ : </p>
<pre><code> eps | c=#M | r=c*eps | c= card(M) for n<= N approx 10^108
------+------------------
0.01 104 1.040
0.02 55 1.100
0.03 36 1.080
0.04 29 1.160
0.05 21 1.050
0.06 16 0.960
0.07 12 0.840
0.08 12 0.960
0.09 10 0.900
0.10 7 0.700
0.11 6 0.660
0.12 4 0.480
0.13 4 0.520
0.14 4 0.560
0.15 4 0.600
0.16 4 0.640
0.17 4 0.680
0.18 4 0.720
0.19 4 0.760
0.20 4 0.800
</code></pre>
<p><hr></p>
<h1>Pictures</h1>
<p><strong><em>[Picture 1]</em></strong> : I've extended the search-space for $n$ to the range $\small 1 \ldots e^{1000} \approx 10^{434} $ and, as a sanity check, to the range $\small 1 \ldots e^{2000} \approx 10^{868} $. For epsilons $\varepsilon$ from $0.001$ up to $0.2$ in $200$ steps I made the following graph:<br>
<a href="https://i.stack.imgur.com/DAz5g.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DAz5g.png" alt="picture"></a> </p>
<p>Remarks: for very small epsilon the increase of the search-space gives slightly higher results, which also illustrates that for "larger" epsilon the cardinality of $\small M(\varepsilon)$ is finite</p>
<p><hr>
<strong><em>[Picture 2]</em></strong>: Indicates uniformity of $\small f(n) = w_n = \operatorname{frac}(h_n) \cdot n^1$ at the <em>n</em> where $\small w_n $ has a local minimum (on request of @GerhardPaseman):<br>
<a href="https://i.stack.imgur.com/18Az3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/18Az3.png" alt="picture 2"></a> </p>
<p><strong><em>[Picture 3]</em></strong>:It is also convenient to show a rescaling of the $\small f(n)$ so that we can immediately determine the cardinalities $\small |M(\varepsilon)|$ just by counting the number of dots $\small e(n)$ above $\small \varepsilon$. The derivation of the formula is </p>
<p>$$ \small{ \begin{array}{lll}
A(n) &\lt & {1\over n^1\cdot n^\varepsilon} & \text{ from OP}\\
f(n) \cdot n^\varepsilon &\lt & 1\\
\ln(f(n)) + \varepsilon \cdot \ln(n) &\lt & 0 \\
{\ln(f(n)) \over \ln(n)} + \varepsilon &\lt & 0 \\
\varepsilon & \lt& -\ln_n(f(n))
\end{array} }$$</p>
<p>and we simply count, how many dots in the picture show $\small e(n) = - \ln_n(f(n)) \gt \varepsilon$ </p>
<p><a href="https://i.stack.imgur.com/QFKcS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QFKcS.png" alt="picture"></a> </p>
<p><hr></p>
<h1>Data for cardinalities-plot</h1>
<pre><code>epsilon |M(eps,N)||M(eps,N)|
N~e^1000 N~e^2000
-------------------------------
0.0000 1000 2000
0.0010 633 862
0.0020 430 482
0.0030 315 330
0.0040 258 264
0.0050 204 205
0.0060 171 171
0.0070 151 151
0.0080 135 135
0.0090 120 120
0.0100 109 109
0.0110 100 100
0.0120 98 98
0.0130 89 89
0.0140 84 84
0.0150 79 79
0.0160 73 73
0.0170 69 69
0.0180 62 62
0.0190 57 57
0.0200 55 55
0.0210 50 50
0.0220 47 47
0.0230 44 44
0.0240 42 42
0.0250 41 41
0.0260 39 39
0.0270 39 39
0.0280 37 37
0.0290 37 37
0.0300 36 36
0.0310 36 36
0.0320 36 36
0.0330 36 36
0.0340 34 34
0.0350 34 34
0.0360 34 34
0.0370 33 33
0.0380 32 32
0.0390 31 31
0.0400 29 29
0.0410 29 29
0.0420 29 29
0.0430 26 26
0.0440 26 26
0.0450 25 25
0.0460 25 25
0.0470 24 24
0.0480 24 24
0.0490 23 23
0.0500 21 21
0.0510 21 21
0.0520 21 21
0.0530 20 20
0.0540 20 20
0.0550 20 20
0.0560 20 20
0.0570 19 19
0.0580 18 18
0.0590 18 18
0.0600 16 16
0.0610 16 16
0.0620 16 16
0.0630 16 16
0.0640 16 16
0.0650 16 16
0.0660 16 16
0.0670 16 16
0.0680 15 15
0.0690 14 14
0.0700 12 12
0.0710 12 12
0.0720 12 12
0.0730 12 12
0.0740 12 12
0.0750 12 12
0.0760 12 12
0.0770 12 12
0.0780 12 12
0.0790 12 12
0.0800 12 12
0.0810 11 11
0.0820 11 11
0.0830 11 11
0.0840 11 11
0.0850 11 11
0.0860 11 11
0.0870 11 11
0.0880 10 10
0.0890 10 10
0.0900 10 10
0.0910 10 10
0.0920 10 10
0.0930 10 10
0.0940 10 10
0.0950 10 10
0.0960 9 9
0.0970 9 9
0.0980 8 8
0.0990 8 8
0.1000 7 7
0.1010 7 7
0.1020 7 7
0.1030 7 7
0.1040 7 7
0.1050 7 7
0.1060 7 7
0.1070 7 7
0.1080 7 7
0.1090 6 6
0.1100 6 6
0.1110 6 6
0.1120 5 5
0.1130 5 5
0.1140 5 5
0.1150 4 4
0.1160 4 4
0.1170 4 4
0.1180 4 4
0.1190 4 4
0.1200 4 4
0.1210 4 4
0.1220 4 4
0.1230 4 4
0.1240 4 4
0.1250 4 4
0.1260 4 4
0.1270 4 4
0.1280 4 4
0.1290 4 4
0.1300 4 4
0.1310 4 4
0.1320 4 4
0.1330 4 4
0.1340 4 4
0.1350 4 4
0.1360 4 4
0.1370 4 4
0.1380 4 4
0.1390 4 4
0.1400 4 4
0.1410 4 4
0.1420 4 4
0.1430 4 4
0.1440 4 4
0.1450 4 4
0.1460 4 4
0.1470 4 4
0.1480 4 4
0.1490 4 4
0.1500 4 4
0.1510 4 4
0.1520 4 4
0.1530 4 4
0.1540 4 4
0.1550 4 4
0.1560 4 4
0.1570 4 4
0.1580 4 4
0.1590 4 4
0.1600 4 4
0.1610 4 4
0.1620 4 4
0.1630 4 4
0.1640 4 4
0.1650 4 4
0.1660 4 4
0.1670 4 4
0.1680 4 4
0.1690 4 4
0.1700 4 4
0.1710 4 4
0.1720 4 4
0.1730 4 4
0.1740 4 4
0.1750 4 4
0.1760 4 4
0.1770 4 4
0.1780 4 4
0.1790 4 4
0.1800 4 4
0.1810 4 4
0.1820 4 4
0.1830 4 4
0.1840 4 4
0.1850 4 4
0.1860 4 4
0.1870 4 4
0.1880 4 4
0.1890 4 4
0.1900 4 4
0.1910 4 4
0.1920 4 4
0.1930 4 4
0.1940 4 4
0.1950 4 4
0.1960 4 4
0.1970 4 4
0.1980 4 4
0.1990 4 4
</code></pre>
|
3,173,636 | <p>I have been trying to prove that there is no embedding from a torus to <span class="math-container">$S^2$</span> but to no avail.</p>
<p>I am completely stuck on where to start. The proof is supposed to be based on Homology theory. I know how to prove that <span class="math-container">$S^n$</span> cannot be embedded in <span class="math-container">$\mathbb{R}^n$</span> however that hasn't helped me in this case. Any help/other examples of how to prove a lack of an embedding would be great. </p>
| Camilo Arosemena-Serrato | 33,495 | <p><a href="https://math.stackexchange.com/questions/1519028/torus-cannot-be-embedded-in-mathbb-r2">Here</a> are given several reasons why the torus cannot be embedded into <span class="math-container">$\mathbb R^2$</span>; two of them use the invariance of domain theorem. </p>
<p>Now, if the torus could be embedded into <span class="math-container">$S^2$</span>, then this embedding cannot be onto <span class="math-container">$S^2$</span>: otherwise it would be a homeomorphism (a continuous bijection from a compact space to a Hausdorff space), but the torus and <span class="math-container">$S^2$</span> are not homeomorphic. Thus, as <span class="math-container">$S^2$</span> minus a point is homeomorphic to <span class="math-container">$\mathbb R^2$</span>, we would get an embedding of the torus into <span class="math-container">$\mathbb R^2$</span>.</p>
|
3,950,808 | <p><em>(note: this is very similar to <a href="https://math.stackexchange.com/questions/188252/spivaks-calculus-exercise-4-a-of-2nd-chapter">a related question</a> but as I'm trying to solve it without looking at the answer yet, I hope the gods may humor me anyways)</em></p>
<p>I'm self-learning math, and an <a href="https://www.reddit.com/r/math/comments/kcb1cd/how_do_i_gain_proficiency_in_mathematics_through/gfqn6y2/" rel="nofollow noreferrer">answer</a> to an /r/math post about self-learning was finally enough to motivate me to try getting feedback on this site :) . After reading throughout the internet, I've decided to start with Spivak's <em>Calculus</em>. I'm loving the book thankfully, but I'm stuck on this problem and I don't want to look at the answer quite yet.</p>
<blockquote>
<p>4 . (a) Prove that
<span class="math-container">$$\sum_{k=0}^l \binom{n}{k} \binom{m}{l-k} = \binom{n+m}{l}$$</span>.
Hint: Apply the binomal theorem to <span class="math-container">$(1+x)^n(1+x)^m$</span> .</p>
</blockquote>
<p>I've done all the prior problems, including Problem 3 (proving the Binomial Theorem) which is obviously closely tied to this, but even with the hint it feels like there's too much I don't know. I applied the hint and found:</p>
<p><span class="math-container">$$\left(\sum_{i=0}^n \binom{n}{i} x^i\right)\left(\sum_{j=0}^m \binom{m}{j} x^j\right) = \sum_{k=0}^{n+m} \binom{n+m}{k} x^k$$</span></p>
<p>And, setting <span class="math-container">$x=1$</span> got something even more interesting:</p>
<p><span class="math-container">$$\left(\sum_{i=0}^n \binom{n}{i}\right)\left(\sum_{j=0}^m \binom{m}{j}\right) = \sum_{k=0}^{n+m} \binom{n+m}{k}$$</span></p>
<p>However, I don't know where to go after that...is there some property of multiplying sums that I need to prove first, to relate the multiplication of two sums of binomials with the sum of the multiplication of two binomials?</p>
<p>Thank you!</p>
| Mike Earnest | 177,399 | <p>The key to using the below identity is to look at the coefficient of <span class="math-container">$x^l$</span> on both sides. Since the polynomials in <span class="math-container">$x$</span> are equal, their coefficients must be as well.
<span class="math-container">$$\left(\sum_{i=0}^n \binom{n}{i} x^i\right)\left(\sum_{j=0}^m \binom{m}{j} x^j\right) = \sum_{k=0}^{n+m} \binom{n+m}{k} x^k$$</span>
The coefficient of <span class="math-container">$x^l$</span> of the summation on the right hand side is obviously <span class="math-container">$\binom{n+m}{l}$</span>. The question is, what is the coefficient of the <span class="math-container">$x^l$</span> in the product of summations on the left?</p>
<p>It might help to look at a smaller example: say <span class="math-container">$n=2,m=3,l=3$</span>. The left hand side of the above equation becomes
<span class="math-container">$$
\left(\binom{2}0x^0+\binom21x^1+\binom22x^2\right)
\left(\binom{3}0x^0+\binom31x^1+\binom32x^2+\binom33x^3\right)
$$</span>
Now, expand out this product, and collect terms with equal powers of <span class="math-container">$x$</span>. The <span class="math-container">$x^3$</span> term of the result is
<span class="math-container">$$
\left(\binom22\binom31+\binom21\binom32+\binom20\binom33\right)x^3
$$</span>
Therefore, this summation must be equal to the coefficient of <span class="math-container">$x^3$</span> in <span class="math-container">$\sum_{k=0}^{2+3}\binom{2+3}{k}x^k$</span>, which is <span class="math-container">$\binom{2+3}3$</span>.</p>
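<p>If it helps, this coefficient comparison can be checked mechanically. The following Python sketch (my addition, not part of Spivak's text) uses <code>math.comb</code> for the binomial coefficients and collects the products <span class="math-container">$\binom{n}{i}\binom{m}{l-i}$</span> with <span class="math-container">$i + (l-i) = l$</span>, exactly as in the expansion above:</p>

```python
from math import comb

n, m, l = 2, 3, 3

# Coefficient of x^l on the left-hand side: sum of binom(n,i)*binom(m,l-i)
# over all splits i + j = l (math.comb returns 0 when the lower index
# exceeds the upper one, e.g. comb(2, 3) == 0).
lhs = sum(comb(n, i) * comb(m, l - i) for i in range(0, l + 1) if 0 <= l - i <= m)

# Coefficient of x^l on the right-hand side: binom(n+m, l)
rhs = comb(n + m, l)

print(lhs, rhs)  # both equal binom(5, 3) = 10
```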
|
3,950,808 | <p><em>(note: this is very similar to <a href="https://math.stackexchange.com/questions/188252/spivaks-calculus-exercise-4-a-of-2nd-chapter">a related question</a> but as I'm trying to solve it without looking at the answer yet, I hope the gods may humor me anyways)</em></p>
<p>I'm self-learning math, and an <a href="https://www.reddit.com/r/math/comments/kcb1cd/how_do_i_gain_proficiency_in_mathematics_through/gfqn6y2/" rel="nofollow noreferrer">answer</a> to an /r/math post about self-learning was finally enough to motivate me to try getting feedback on this site :) . After reading throughout the internet, I've decided to start with Spivak's <em>Calculus</em>. I'm loving the book thankfully, but I'm stuck on this problem and I don't want to look at the answer quite yet.</p>
<blockquote>
<p>4 . (a) Prove that
<span class="math-container">$$\sum_{k=0}^l \binom{n}{k} \binom{m}{l-k} = \binom{n+m}{l}$$</span>.
Hint: Apply the binomal theorem to <span class="math-container">$(1+x)^n(1+x)^m$</span> .</p>
</blockquote>
<p>I've done all the prior problems, including Problem 3 (proving the Binomial Theorem) which is obviously closely tied to this, but even with the hint it feels like there's too much I don't know. I applied the hint and found:</p>
<p><span class="math-container">$$\left(\sum_{i=0}^n \binom{n}{i} x^i\right)\left(\sum_{j=0}^m \binom{m}{j} x^j\right) = \sum_{k=0}^{n+m} \binom{n+m}{k} x^k$$</span></p>
<p>And, setting <span class="math-container">$x=1$</span> got something even more interesting:</p>
<p><span class="math-container">$$\left(\sum_{i=0}^n \binom{n}{i}\right)\left(\sum_{j=0}^m \binom{m}{j}\right) = \sum_{k=0}^{n+m} \binom{n+m}{k}$$</span></p>
<p>However, I don't know where to go after that...is there some property of multiplying sums that I need to prove first, to relate the multiplication of two sums of binomials with the sum of the multiplication of two binomials?</p>
<p>Thank you!</p>
| Ben | 754,927 | <p>The solution uses some facts about polynomials that, unfortunately, Spivak hasn't yet proven.</p>
<p>He hasn't formally introduced polynomials yet. This happens in Chapter 3 (3rd Ed.). You may wish to return to this problem after reading chapter 3.</p>
<p>A polynomial function <span class="math-container">$f$</span> (functions are also introduced in chapter 3) has the form:
<span class="math-container">$$f(x) = a_nx^n + a_{n-1}x^{n-1} + \dots + a_1x + a_0$$</span>
where the <span class="math-container">$a_i$</span> coefficients are constants.</p>
<p>This polynomial is said to have degree <span class="math-container">$n$</span>, where <span class="math-container">$n$</span> is the largest power of <span class="math-container">$x$</span> with a nonzero coefficient.</p>
<p>A number <span class="math-container">$x_i$</span> is said to be a <em>root</em> of the polynomial if <span class="math-container">$f(x_i) = 0$</span>.</p>
<p>An <span class="math-container">$n$</span>-th degree polynomial can have at most <span class="math-container">$n$</span> roots. That is, there can be no more than <span class="math-container">$n$</span> different numbers <span class="math-container">$x_1, x_2, \dots, x_n$</span> such that <span class="math-container">$f(x_i) = 0$</span></p>
<p>This fact comes from Chapter 3 exercise 7 (3rd Ed.)</p>
<p>Because an <span class="math-container">$n$</span> degree polynomial has at most <span class="math-container">$n$</span> distinct roots, if a polynomial is zero for all <span class="math-container">$x$</span> then <em>all</em> of its coefficients (the <span class="math-container">$a_i$</span>'s) must be zero. (If any <span class="math-container">$a_i \neq 0$</span>, this forces <span class="math-container">$f$</span> to have a finite number of roots, which contradicts <span class="math-container">$f$</span> being zero for all values of <span class="math-container">$x$</span>.)</p>
<p>Restating,
<span class="math-container">$$0 = a_nx^n + a_{n-1}x^{n-1} + \dots + a_1x + a_0 \text{ for all }x \text{ if and only if } a_n = 0, a_{n-1} = 0, \dots a_0 = 0$$</span></p>
<p>Closely related to this fact is the following: If two polynomials are equal for all values of <span class="math-container">$x$</span>, <strong>they must have the same coefficients.</strong></p>
<p>To write it out explicitly, suppose</p>
<p><span class="math-container">$$f(x) = a_nx^n + a_{n-1}x^{n-1} + \dots + a_1x + a_0$$</span>
<span class="math-container">$$g(x) = b_nx^n + b_{n-1}x^{n-1} + \dots + b_1x + b_0$$</span>
and
<span class="math-container">$$f(x) = g(x) \text{ for all }x$$</span></p>
<p>Then <span class="math-container">$a_i = b_i$</span> for all <span class="math-container">$i = n, n-1, \dots, 1, 0.$</span></p>
<p>(You can show <em>this</em> by looking at <span class="math-container">$f-g$</span> and using what we previously learned about polynomials that are <span class="math-container">$0$</span> for all <span class="math-container">$x$</span>.)</p>
<p>Now let's return to the exercise.</p>
<p>We have
<span class="math-container">$$(1+x)^n(1+x)^m = (1+x)^{n+m}$$</span></p>
<p>We can expand each side using the binomial coefficients developed in preceding problems.</p>
<p>When we do so, we will have polynomials on the left hand side and right hand side of the equation. These polynomials will have <em>different looking</em> coefficients for each power of <span class="math-container">$x$</span>.</p>
<p>However, using what we know about polynomials, we know that these different looking coefficients must actually be equal. <strong>The coefficient for <span class="math-container">$x^l$</span> on the LHS must equal the coefficient for <span class="math-container">$x^l$</span> on the RHS.</strong> You can use <em>this</em> fact to prove the desired result.</p>
<p>Edit: When <span class="math-container">$a < b$</span>, <span class="math-container">$\binom{a}{b}$</span> is defined to be <span class="math-container">$0$</span>. IIRC this is also used in the problem (despite not being defined in the text).</p>
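<p>As a hedged numeric aside (my addition, not part of Spivak's text): Python's <code>math.comb</code> happens to follow the same convention, returning <span class="math-container">$0$</span> when the lower index exceeds the upper one, so the whole identity can be spot-checked for small <span class="math-container">$n, m$</span>:</p>

```python
from math import comb

# Python's math.comb already follows the convention described above:
# comb(a, b) == 0 when b > a.
assert comb(2, 5) == 0

def vandermonde_lhs(n, m, l):
    """Left-hand side of the identity: sum_k binom(n,k) * binom(m,l-k)."""
    return sum(comb(n, k) * comb(m, l - k) for k in range(0, l + 1))

# Spot-check the identity against binom(n+m, l) for all small cases.
for n in range(0, 6):
    for m in range(0, 6):
        for l in range(0, n + m + 1):
            assert vandermonde_lhs(n, m, l) == comb(n + m, l)
print("identity verified for all n, m <= 5")
```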
|
3,308,291 | <p>I have an array of numbers (a column in excel). I calculated the half of the set's total and now I need the minimum number of set's values that the sum of them would be greater or equal to the half of the total. </p>
<p>Example:</p>
<pre><code>The set: 5, 5, 3, 3, 2, 1, 1, 1, 1
Half of the total is: 11
The least amount of set values that need to be added to get 11 is 3
</code></pre>
<p>What is the formula to get '3'?</p>
<p>It's probably something basic but I have not used calculus in a bit, hence I may have just forgotten it.</p>
<p>Normally I would use a simple while loop with a sort, but I am in Excel, so I was wondering whether there is a more elegant solution. </p>
<p>P.S. I have the values sorted in descending order to make things easier.</p>
<p>EDIT: Example</p>
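<p>For reference, the sort-plus-loop approach described above can be sketched in Python (this is an illustrative sketch, not an Excel formula):</p>

```python
def min_count_to_half(values):
    """Smallest number of values (taken largest-first) whose sum
    reaches at least half of the total. Taking the largest values
    first makes any fixed count of picked values as large as possible,
    so this greedy count is minimal."""
    target = sum(values) / 2
    running, count = 0, 0
    for v in sorted(values, reverse=True):
        running += v
        count += 1
        if running >= target:
            return count
    return count

print(min_count_to_half([5, 5, 3, 3, 2, 1, 1, 1, 1]))  # 5+5+3 = 13 >= 11 -> 3
```

<p>One hedged way to mimic this in Excel: keep the values sorted descending (as in the question), add a running-sum helper column, and count how many running sums are still below half the total, plus one, e.g. <code>COUNTIF(cum_range, "&lt;" &amp; half_total) + 1</code>.</p>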
| Ahmed Hossam | 430,756 | <p>The group operation ( = group law ) <span class="math-container">$\color{red}{+}$</span> here is <span class="math-container">$a\color{red}{+}b = (a+b)\bmod 5$</span> and <span class="math-container">$+$</span> is the normal addition of integers. The set <span class="math-container">$G=\{0,1,2,3,4\}$</span> together with <span class="math-container">$\color{red}{+}$</span> build a group <span class="math-container">$(G,\color{red}{+})$</span>.</p>
<p>We can operate on all elements in the set <span class="math-container">$G=\{0,1,2,3,4\}$</span> and get the following</p>
<hr />
<p><span class="math-container">$0\color{red}{+}0 = (0+0)\bmod 5 = 0 = 0\color{red}{+}0 $</span></p>
<p><span class="math-container">$0\color{red}{+}1 = (0+1)\bmod 5 = 1$</span></p>
<p><span class="math-container">$0\color{red}{+}2 = (0+2)\bmod 5 = 2$</span></p>
<p><span class="math-container">$0\color{red}{+}3 = (0+3)\bmod 5 = 3 $</span></p>
<p><span class="math-container">$0\color{red}{+}4 = (0+4)\bmod 5 = 4$</span></p>
<hr />
<p><span class="math-container">$1\color{red}{+}0 = (1+0)\bmod 5 = 1 = 0\color{red}{+}1 $</span></p>
<p><span class="math-container">$1\color{red}{+}1 = (1+1)\bmod 5 = 2 $</span></p>
<p><span class="math-container">$1\color{red}{+}2 = (1+2)\bmod 5 = 3 $</span></p>
<p><span class="math-container">$1\color{red}{+}3 = (1+3)\bmod 5 = 4 $</span></p>
<p><span class="math-container">$1\color{red}{+}4 = (1+4)\bmod 5 = 0 $</span></p>
<hr />
<p><span class="math-container">$2\color{red}{+}0 = (2+0)\bmod 5 = 2 = 0\color{red}{+}2 $</span></p>
<p><span class="math-container">$2\color{red}{+}1 = (2+1)\bmod 5 = 3 = 1\color{red}{+}2 $</span></p>
<p><span class="math-container">$2\color{red}{+}2 = (2+2)\bmod 5 = 4 $</span></p>
<p><span class="math-container">$2\color{red}{+}3 = (2+3)\bmod 5 = 0 $</span></p>
<p><span class="math-container">$2\color{red}{+}4 = (2+4)\bmod 5 = 1 $</span></p>
<hr />
<p><span class="math-container">$3\color{red}{+}0 = (3+0)\bmod 5 = 3 = 0\color{red}{+}3 $</span></p>
<p><span class="math-container">$3\color{red}{+}1 = (3+1)\bmod 5 = 4 = 1\color{red}{+}3 $</span></p>
<p><span class="math-container">$3\color{red}{+}2 = (3+2)\bmod 5 = 0 = 2\color{red}{+}3 $</span></p>
<p><span class="math-container">$3\color{red}{+}3 = (3+3)\bmod 5 = 1 $</span></p>
<p><span class="math-container">$3\color{red}{+}4 = (3+4)\bmod 5 = 2 $</span></p>
<hr />
<p><span class="math-container">$4\color{red}{+}0 = (4+0)\bmod 5 = 4 = 0\color{red}{+}4 $</span></p>
<p><span class="math-container">$4\color{red}{+}1 = (4+1)\bmod 5 = 0 = 1\color{red}{+}4 $</span></p>
<p><span class="math-container">$4\color{red}{+}2 = (4+2)\bmod 5 = 1 = 2\color{red}{+}4$</span></p>
<p><span class="math-container">$4\color{red}{+}3 = (4+3)\bmod 5 = 2 = 3\color{red}{+}4 $</span></p>
<p><span class="math-container">$4\color{red}{+}4 = (4+4)\bmod 5 = 3 $</span></p>
<hr />
<p>Or in a so called <a href="https://en.wikipedia.org/wiki/Cayley_table" rel="nofollow noreferrer">Cayley's table</a>:</p>
<p><a href="https://i.stack.imgur.com/XRQs2.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XRQs2.jpg" alt="enter image description here" /></a></p>
<hr />
<p><strong>(1) neutral or identity element</strong> of <span class="math-container">$G$</span> with respect to the operation <span class="math-container">$\color{red}{+}$</span> is <span class="math-container">$0 \in G$</span></p>
<p><span class="math-container">$\forall a \in G:~ ~a\color{red}{+}0=a=0\color{red}{+}a. $</span></p>
<p>Because we have the following (see the first line in each block of calculation, or the first row and first column in the Cayley table):</p>
<p><span class="math-container">$0\color{red}{+}0 = 0 $</span>, <span class="math-container">$1\color{red}{+}0 = 1 = 0\color{red}{+}1$</span> , <span class="math-container">$2\color{red}{+}0 = 2 = 0\color{red}{+}2$</span> , <span class="math-container">$3\color{red}{+}0 = 3 = 0\color{red}{+}3$</span> and <span class="math-container">$4\color{red}{+}0 = 4 = 0\color{red}{+}4$</span></p>
<hr />
<p><strong>(2) Inverse element</strong> of each element <span class="math-container">$\in G$</span> has to be also <span class="math-container">$\in G$</span></p>
<p><span class="math-container">$\forall a \in \{0,1,2,3,4\}:~ \exists b \in \{0,1,2,3,4\}:~ ~a\color{red}{+}b=0=b\color{red}{+}a $</span></p>
<p>Because we have the following (see the lines in each calculation block where there's "a <span class="math-container">$0$</span> in the middle"; see also "the <span class="math-container">$0$</span> diagonal" in the Cayley table):</p>
<p><span class="math-container">$0\color{red}{+}0 = 0= 0\color{red}{+}0, 3\color{red}{+}2 = 0= 2\color{red}{+}3$</span> and <span class="math-container">$1\color{red}{+}4 = 0= 4\color{red}{+}1$</span></p>
<hr />
<p><strong>(3) Closure with respect to the group operation/law</strong> <span class="math-container">$\color{red}{+}$</span></p>
<p>The group operation/law is a function <span class="math-container">$$\color{red}{+}: G \times G \rightarrow G, x \color{red}{+} y = (x+y) \bmod 5$$</span>, which means that <span class="math-container">$G$</span> has to be closed under its operation. So, <span class="math-container">$\color{red}{+}$</span> operates (<a href="https://en.wikipedia.org/wiki/Binary_operation" rel="nofollow noreferrer">a binary operator</a>) on two arguments <span class="math-container">$x \in G$</span> and <span class="math-container">$y \in G$</span>. It then gives back an element <span class="math-container">$x\color{red}{+}y$</span>, which has to be again in <span class="math-container">$G$</span>.</p>
<hr />
<p><strong>(4) Associativity with respect to the group operation/law</strong> <span class="math-container">$\color{red}{+}$</span></p>
<p>Assume <span class="math-container">$a,b,c \in G$</span>, then</p>
<p><span class="math-container">$(a\color{red}{+}b)\color{red}{+}c=((a+b)\bmod 5 )\color{red}{+}c=(((a+b)\bmod 5)+c)\bmod 5 $</span></p>
<p>and</p>
<p><span class="math-container">$a\color{red}{+}(b\color{red}{+}c)=a\color{red}{+}((b+c) \bmod 5 )=(a+((b+c)\bmod 5))\bmod 5$</span></p>
<p>So, if <span class="math-container">$(a\color{red}{+}b)\color{red}{+}c=a\color{red}{+}(b\color{red}{+}c)$</span> or if <span class="math-container">$G$</span> is associative, then <span class="math-container">$$(((a+b)\bmod 5)+c)\bmod 5 =(a+((b+c)\bmod 5))\bmod 5 $$</span>
has to be true. This <span class="math-container">$(a+b)\bmod 5+c\equiv a+(b+c)\bmod 5 ~ ~(\bmod 5) $</span> can be shown using <a href="https://en.wikipedia.org/wiki/Modular_arithmetic" rel="nofollow noreferrer">modular arithmetic</a>.</p>
<p>Or it can also be shown using <a href="https://en.wikipedia.org/wiki/Light%27s_associativity_test" rel="nofollow noreferrer">Light's associativity test</a> on the Cayley table.</p>
<hr />
<p>We can also see, that this group is commutative (<span class="math-container">$a+b=b+a$</span>, symmetric cayley's table), which means that this group is a special group, it's an <a href="https://en.wikipedia.org/wiki/Abelian_group" rel="nofollow noreferrer">abelian group</a>.</p>
<hr />
<p><strong>A subgroup</strong> <span class="math-container">$H < G$</span> of a group <span class="math-container">$G$</span> has to satisfy all four group properties, i.e. <strong>it has to be a group in itself with respect to the same group operation</strong> <span class="math-container">$\color{red}{+}$</span>.</p>
<p>Assume <span class="math-container">$H = \{0,1,4\}$</span>. Then <span class="math-container">$4 \color{red}{+} 4 = (4+4) \bmod 5 = 3 \not\in H$</span> and <span class="math-container">$1 \color{red}{+} 1 = (1+1) \bmod 5 = 2 \not\in H$</span>. <span class="math-container">$H$</span> is therefore not closed with respect to <span class="math-container">$\color{red}{+}$</span>.</p>
<p>Maybe if we look close enough in the Cayley table, we might find a subgroup? But I don't think so, according to <a href="https://en.wikipedia.org/wiki/Lagrange%27s_theorem_(group_theory)" rel="nofollow noreferrer">Lagrange's theorem</a>, since <span class="math-container">$ | H | = 3 \nmid 5 = | G | $</span>.</p>
<p>For example here is a picture about other abelian groups from the wikipedia article about subgroups. The operation here is <span class="math-container">$\color{red}{+}: G \times G \rightarrow G, x\color{red}{+}y = (x+y) \bmod 8$</span></p>
<p><a href="https://i.stack.imgur.com/dV9uV.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dV9uV.jpg" alt="enter image description here" /></a></p>
<p>If <span class="math-container">$G$</span> is a(n abelian) group, then <span class="math-container">$H$</span> is a(n abelian) subgroup.</p>
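<p>The whole construction above can be verified mechanically. This Python sketch (my addition) rebuilds the Cayley table and checks each group axiom, including the failed closure of <span class="math-container">$H=\{0,1,4\}$</span>:</p>

```python
G = range(5)

def op(a, b):
    """The group law described above: addition mod 5."""
    return (a + b) % 5

# Cayley table: row a, column b holds a + b (mod 5)
table = [[op(a, b) for b in G] for a in G]
for row in table:
    print(row)

# (1) identity element 0
assert all(op(a, 0) == a == op(0, a) for a in G)
# (2) inverses: every a has some b in G with a + b = 0 (mod 5)
assert all(any(op(a, b) == 0 for b in G) for a in G)
# (3) closure
assert all(op(a, b) in G for a in G for b in G)
# (4) associativity
assert all(op(op(a, b), c) == op(a, op(b, c)) for a in G for b in G for c in G)
# commutativity (abelian)
assert all(op(a, b) == op(b, a) for a in G for b in G)

# H = {0, 1, 4} is not closed: 1 + 1 = 2 and 4 + 4 = 3 fall outside H
H = {0, 1, 4}
assert not all(op(a, b) in H for a in H for b in H)
```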
|
3,308,291 | <p>I have an array of numbers (a column in excel). I calculated the half of the set's total and now I need the minimum number of set's values that the sum of them would be greater or equal to the half of the total. </p>
<p>Example:</p>
<pre><code>The set: 5, 5, 3, 3, 2, 1, 1, 1, 1
Half of the total is: 11
The least amount of set values that need to be added to get 11 is 3
</code></pre>
<p>What is the formula to get '3'?</p>
<p>It's probably something basic but I have not used calculus in a bit, hence I may have just forgotten it.</p>
<p>Normally I would use a simple while loop with a sort, but I am in Excel, so I was wondering whether there is a more elegant solution. </p>
<p>P.S. I have the values sorted in descending order to make things easier.</p>
<p>EDIT: Example</p>
| user692616 | 692,616 | <p>The smallest subgroup containing <span class="math-container">$1$</span>, should also contains <span class="math-container">$1+1, 1+1+1, 1+1+1+1 \dots$</span> and also their inverses.</p>
|
1,783,458 | <blockquote>
<p>Prove that the equation
$$z^n + z + 1=0 \ z \in \mathbb{C}, n \in \mathbb{N} \tag1$$
has a solution $z$ with $|z|=1$ iff $n=3k +2, k \in \mathbb{N} $.</p>
</blockquote>
<hr>
<p>One implication is simple: if there is a solution $z \in \mathbb{C}, |z|=1$ of (1), then $z=\cos \alpha + i \sin\alpha$ and $|z + 1|=1$. It follows $\cos\alpha=-\frac 1 2$ etc.</p>
<p>The other implication is the one I failed to prove.</p>
| lab bhattacharjee | 33,337 | <p>HINT:</p>
<p>$$z^n+1=-z$$</p>
<p>As $z\ne0,$ $$z^{n/2}+z^{-n/2}=-z^{1-n/2}$$</p>
<p>Let $z=r(\cos t+i\sin t)$</p>
<p>$$2r^{n/2}\cos\dfrac{nt}2=-r^{(2-n)/2}\left(\cos\dfrac{(2-n)t}2+i\sin\dfrac{(2-n)t}2\right)$$</p>
<p>We need $\sin\dfrac{(2-n)t}2=0\iff\dfrac{(2-n)t}2=m\pi$ where $m$ is any integer</p>
<p>So, $\cos\dfrac{(2-n)t}2=\pm1$</p>
<p>Can you take it from here?</p>
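<p>To see where this is heading, a quick numeric check (an independent Python sketch, not part of the hint) confirms that the unit-circle candidate $z=e^{2\pi i/3}$ — the only possibility, up to conjugation, once $\cos\alpha = -\frac12$ is established — solves the equation exactly when $n \equiv 2 \pmod 3$:</p>

```python
import cmath

# The candidate from |z| = 1 and |1 + z| = 1: a primitive cube root of unity
z = cmath.exp(2j * cmath.pi / 3)

for n in range(2, 20):
    is_root = abs(z**n + z + 1) < 1e-9
    # z is a root of z^n + z + 1 precisely for n = 2, 5, 8, 11, ...
    assert is_root == (n % 3 == 2), n
print("z = exp(2*pi*i/3) is a root exactly when n = 3k + 2")
```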
|
1,783,458 | <blockquote>
<p>Prove that the equation
$$z^n + z + 1=0 \ z \in \mathbb{C}, n \in \mathbb{N} \tag1$$
has a solution $z$ with $|z|=1$ iff $n=3k +2, k \in \mathbb{N} $.</p>
</blockquote>
<hr>
<p>One implication is simple: if there is a solution $z \in \mathbb{C}, |z|=1$ of (1), then $z=\cos \alpha + i \sin\alpha$ and $|z + 1|=1$. It follows $\cos\alpha=-\frac 1 2$ etc.</p>
<p>The other implication is the one I failed to prove.</p>
| the_candyman | 51,370 | <p>If $z^n + z + 1=0 $ then $z^n = -(1+z)$ and $$|z|^n = |1+z|.$$</p>
<p>If $|z| = 1$, then $z = \cos a + i \sin a$, with $a \in [0, 2\pi]$. Moreover:</p>
<p>$$1^n = |1 + \cos a + i \sin a| \Rightarrow \sqrt{(1+\cos a)^2 + \sin^2 a} = 1 \Rightarrow \\
1 + \cos^2 a + 2 \cos a + \sin^2 a = 1\Rightarrow
\cos a = -\frac{1}{2}\\ \Rightarrow a = \frac{2\pi}{3}\vee a = \frac{4\pi}{3}.$$</p>
<p>Let's plug $z = \cos a + i \sin a$ in the starting equation. We get:</p>
<p>$$\cos (na) + i \sin(na) + \cos (a) + i \sin(a) + 1 = 0 \Rightarrow\\
\begin{cases}
\cos(na) + \cos(a)+ 1 &= 0\\
\sin(na) + \sin(a) &= 0
\end{cases}$$</p>
<p>Let's work on the second equation, which becomes $\sin(a) =-\sin(na)$.</p>
<p>If $a=\frac{2\pi}{3} $, then $na = \frac{4\pi}{3}+2\pi k \vee na = \frac{5\pi}{3}+2\pi k$.</p>
<p>First case:</p>
<p>$$\frac{2\pi}{3}n = \frac{4\pi}{3}+2\pi k \Rightarrow
\frac{2\pi}{3}n = \frac{2\pi}{3}(2 + 3k) \Rightarrow
n = 2+3k$$</p>
<p>Second case:</p>
<p>$$\frac{2\pi}{3}n = \frac{5\pi}{3}+2\pi k \Rightarrow
\frac{2\pi}{3}n = \frac{2\pi}{3}\left(\frac{5}{2} + 3k\right) \Rightarrow
n = \frac{5}{2} + 3k.$$</p>
<p>The last one can't be satisfied since both $n$ and $k$ are integer.</p>
<p>If $a=\frac{4\pi}{3} $, then $na = \frac{\pi}{3}+2\pi k \vee na = \frac{2\pi}{3}+2\pi k$.</p>
<p>First case:</p>
<p>$$\frac{4\pi}{3}n = \frac{\pi}{3}+2\pi k \Rightarrow
\frac{4\pi}{3}n = \frac{4\pi}{3}\left(\frac{1}{4} + \frac{3}{2}k\right) \Rightarrow
n = \frac{1}{4} + \frac{3}{2}k.$$
The last one can't be satisfied since both $n$ and $k$ are integer.</p>
<p>Second case:</p>
<p>$$\frac{4\pi}{3}n = \frac{2\pi}{3}+2\pi k \Rightarrow
\frac{4\pi}{3}n = \frac{4\pi}{3}\left(\frac{1}{2} + \frac{3}{2}k\right) \Rightarrow
n = \frac{1}{2} + \frac{3}{2}k.$$</p>
<p>The last one can't be satisfied since both $n$ and $k$ are integer.</p>
<p>Finally, the only feasible case is:</p>
<p>$$a = \frac{2\pi}{3}$$</p>
<p>and</p>
<p>$$n = 2 + 3k.$$</p>
<p>Notice that in this case also the equation $\cos(na) + \cos(a)+ 1 = 0$ is satisfied. Indeed:</p>
<p>$$\cos\left((2+3k)\frac{2\pi}{3}\right) + \cos\left(\frac{2\pi}{3}\right) + 1 = 0 \Rightarrow \\
\cos\left(\frac{4\pi}{3} + 2\pi k\right) + \cos\left(\frac{2\pi}{3}\right) + 1 = 0 \Rightarrow \\
\cos\left(\frac{4\pi}{3} \right) + \cos\left(\frac{2\pi}{3}\right) + 1 = 0 \Rightarrow \\
-\frac{1}{2}-\frac{1}{2} + 1 = 0.$$</p>
|
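The conclusion above is easy to confirm numerically (our own sketch, using only the standard library): with $a=\frac{2\pi}{3}$, the number $w=e^{2\pi i/3}$ is a root of $z^n+z+1$ precisely when $n = 2+3k$.

```python
import cmath

w = cmath.exp(2j * cmath.pi / 3)        # cos(2*pi/3) + i*sin(2*pi/3)
for k in range(20):
    n = 3 * k + 2
    assert abs(w**n + w + 1) < 1e-9      # w solves z^n + z + 1 = 0
assert abs(w**3 + w + 1) > 0.5           # but not for, e.g., n = 3
```

This works because $w^2+w+1=0$ and $w^{3k+2}=w^2$.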
2,294,548 | <p><strong>Problem:</strong> Solve $y'=\sqrt{xy}$ with the initial condition $y(0)=1$.</p>
<p><strong>Attempt:</strong> Using $\sqrt{ab}=\sqrt{a}\cdot\sqrt{b}$, I get that the DE is separable by dividing both sides by $\sqrt{y}:$ $$y'=\sqrt{x}\cdot\sqrt{y}\Leftrightarrow\frac{y'}{\sqrt{y}}=\sqrt{x}$$</p>
<p>which can be rearranged to $$\frac{1}{\sqrt{y}}dy=\sqrt{x}dx$$ and proceeding to integrate both sides. </p>
<p>$$\int\frac{1}{\sqrt{y}} \ dy=\int\sqrt{x} \ dx \Longleftrightarrow2\sqrt{y}+C_1=\frac{2x\sqrt{x}}{3}+C_2$$</p>
<p>Which eventually gives $$y(x)=\left(\frac{\frac{2x\sqrt{x}}{3}+C_2-C_1}{2}\right)^2=\left(\frac{x\sqrt{x}}{3}+D\right)^2.$$</p>
<p><strong>Question:</strong> However, according to <a href="https://math.stackexchange.com/questions/2293258/sqrtx2-1-sqrtx1-cdot-sqrtx-1?noredirect=1#comment4717869_2293258">this question</a> I posted yesterday, $\sqrt{xy}=\sqrt{x}\cdot\sqrt{y}$ only holds for $x,y\geq 0$, but nowhere in this question is this restriction given. Why is it ok for me to use it then?</p>
<p><strong>Sidenote/question:</strong> Is my way of solving the DE correct? Any room for improvement?</p>
| Rigel | 11,776 | <p>Since $y(0) = 1 > 0$, your solution will be positive near $x=0$ (where it is defined).
Hence, the inequality $x y(x) \geq 0$ can be satisfied (near $x=0$) only for $x\geq 0$.</p>
<p>At this point you can look for a solution $y\colon [0,a)\to\mathbb{R}$ such that $y(x) > 0$ for every $x\in [0,a)$.</p>
<p>If we define
$$
F(y) := \int_1^y \frac{1}{\sqrt{s}}\, ds = 2\sqrt{y} - 2,
\qquad y > 0,
$$
your solution is implicitly defined by
$$
F(y(x)) = \frac{2}{3} x^{3/2}, \qquad x\in [0,a),
$$
i.e.
$$
\sqrt{y(x)} = 1 + \frac{1}{3} x^{3/2}, \qquad x\in [0,a).
$$
Since the r.h.s. is positive for every $x\geq 0$, you can choose $a=+\infty$ and your solution is given by
$$
y(x) = \left(1 + \frac{1}{3} x^{3/2}\right)^2, \qquad x\geq 0.
$$</p>
|
2,294,548 | <p><strong>Problem:</strong> Solve $y'=\sqrt{xy}$ with the initial condition $y(0)=1$.</p>
<p><strong>Attempt:</strong> Using $\sqrt{ab}=\sqrt{a}\cdot\sqrt{b}$, I get that the DE is separable by dividing both sides by $\sqrt{y}:$ $$y'=\sqrt{x}\cdot\sqrt{y}\Leftrightarrow\frac{y'}{\sqrt{y}}=\sqrt{x}$$</p>
<p>which can be rearranged to $$\frac{1}{\sqrt{y}}dy=\sqrt{x}dx$$ and proceeding to integrate both sides. </p>
<p>$$\int\frac{1}{\sqrt{y}} \ dy=\int\sqrt{x} \ dx \Longleftrightarrow2\sqrt{y}+C_1=\frac{2x\sqrt{x}}{3}+C_2$$</p>
<p>Which eventually gives $$y(x)=\left(\frac{\frac{2x\sqrt{x}}{3}+C_2-C_1}{2}\right)^2=\left(\frac{x\sqrt{x}}{3}+D\right)^2.$$</p>
<p><strong>Question:</strong> However, according to <a href="https://math.stackexchange.com/questions/2293258/sqrtx2-1-sqrtx1-cdot-sqrtx-1?noredirect=1#comment4717869_2293258">this question</a> I posted yesterday, $\sqrt{xy}=\sqrt{x}\cdot\sqrt{y}$ only holds for $x,y\geq 0$, but nowhere in this question is this restriction given. Why is it ok for me to use it then?</p>
<p><strong>Sidenote/question:</strong> Is my way of solving the DE correct? Any room for improvement?</p>
| Community | -1 | <p>As $y(0)>0$, $xy$ is negative for $x=0^-$, so $y$ cannot be defined for negative $x$. On the other hand, you certainly have $y(x)>0$ in some neighborhood of $x=0^+$.</p>
<p>As the initial condition is given, you can use definite integrals,</p>
<p>$$\int_1^y\frac{dy}{\sqrt y}=\int_0^x\sqrt xdx,$$
giving
$$2(\sqrt y-1)=\frac{2x\sqrt x}3$$ and</p>
<p>$$y=\left(\frac{x\sqrt x}3+1\right)^2.$$</p>
<p>As a last check, we must verify that the LHS of the second identity is positive (as the RHS certainly is). This holds for $y\ge1$, which is guaranteed by the given solution.</p>
|
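The solution derived above can be checked directly (a sketch of ours, standard library only): $y(x)=\left(1+\tfrac{1}{3}x^{3/2}\right)^2$ satisfies $y(0)=1$, and a central finite difference confirms $y'=\sqrt{xy}$ at a few sample points.

```python
import math

def y(x):
    return (1 + x**1.5 / 3) ** 2

def dy(x, h=1e-6):
    # central finite-difference approximation of y'(x)
    return (y(x + h) - y(x - h)) / (2 * h)

assert abs(y(0) - 1) < 1e-12                        # initial condition
for x in [0.1, 0.5, 1.0, 2.0, 5.0]:
    assert abs(dy(x) - math.sqrt(x * y(x))) < 1e-5  # y' = sqrt(x*y)
```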
386,073 | <p>For which values of $a$ do the following vectors form a <strong><em>linearly dependent</em></strong> set in $R^3$?</p>
<p>$$V_1= \left(a,\, \frac{-1}{2}, \,\frac{-1}{2}\right),\;\; V_2= \left(\frac{-1}{2},\, a, \,\frac{-1}{2}\right),\; \;V_3= \left(\frac{-1}{2}, \,\frac{-1}{2},\, a\right)$$</p>
<p>Please would it be possible to <strong><em>advise</em></strong> me how I would go about solving this?<br>
I'm not sure if I should be using only row reduction because I think it may relate to eigenvalues and eigenvectors but we haven't covered those concepts in class as yet.<br>
Am I supposed to find the determinant?</p>
| colormegone | 71,645 | <p>To be linearly dependent in $\mathbb{R}^3$, the three vectors would have to be co-planar. One test would be that the triple product of the three vectors (in any order) would be zero (since a vector perpendicular to the mutual perpendicular of any two others would have to be in the same plane as those two). In Cartesian components, the triple product can be expressed as a 3 x 3 determinant of the three vectors. So you are looking for values of $a$ that make that determinant (using your vectors as either rows or columns) equal to zero.</p>
<p>[Note: $a = -1/2$ is the "trivial" solution; there is another value that works...]</p>
|
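To make the determinant hint concrete (our own sketch, assuming NumPy; it does reveal the value the answer deliberately leaves as an exercise): the determinant factors as $(a-1)\left(a+\tfrac12\right)^2$, so it vanishes exactly at $a=1$ and $a=-\tfrac12$.

```python
import numpy as np

def det_for(a):
    M = np.array([[a, -0.5, -0.5],
                  [-0.5, a, -0.5],
                  [-0.5, -0.5, a]])
    return np.linalg.det(M)

# det = (a - 1)(a + 1/2)^2, vanishing at a = 1 and a = -1/2
for a in [-0.5, 1.0]:
    assert abs(det_for(a)) < 1e-9
assert abs(det_for(0.0)) > 1e-6   # generic a gives independence
```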
445 | <p>Under what circumstances should a question be made community wiki?</p>
<p>Probably any question asking for a list of something (e.g. <a href="https://math.stackexchange.com/questions/81/list-of-interesting-math-blogs">1</a>) must be CW. What else? What about questions asking for a list of applications of something (like, say, <a href="https://math.stackexchange.com/questions/804/applications-for-the-class-equation-from-group-theory">3</a>) or questions like <a href="https://math.stackexchange.com/questions/1446/interesting-properties-of-ternary-relations">2</a>? Should all soft-questions be made community wiki (and how we should define soft-question, in that case)?</p>
| Casebash | 123 | <p>Due to the nature of community wiki, this is quite a difficult question to answer. I think that we should be conservative in our enforcement of CW, so that when we do enforce it, there is a general consensus. Without consensus CW will cause unnecessary confusion and turn users off our site.</p>
<p>I think it is good to review how CW seems to have come about. StackOverflow (the original StackExchange site) is not designed for discussions. The rearranging of answers and the limitations on comments make it a poor tool for this. These limitations aren't just due to technical reasons - our founders had noticed that forums seem to be dominated by never-ending discussions on "What is the best programming language?" or "What is the best university?". This is why questions without a real answer are discouraged.</p>
<p>However, it soon became apparent that there is a class of question without a definite answer - poll questions - where StackOverflow works reasonably well. While new answers to old questions get buried at the end, the top few answers tend to be <em>really</em> good. These questions <em>had</em> to be allowed because they enjoy such a large amount of community support (with a small, but very vocal opposition). They also gathered a huge amount of upvotes, meaning that the site was being flooded with these questions and that reputation was being devalued. Community Wiki was the compromise that brought these problems under control.</p>
<p>So, if we are being conservative:</p>
<ol>
<li>Community wiki should be used to prevent vastly unfair or gameable opportunities for gaining reputation. There will always be some easy questions that will gain disproportionate amounts of reputation, but as long as we keep the number low we will be fine.</li>
<li>Community wiki allows valuable, but broad questions to exist on this site. One of the best guides to whether a question should be community wiki is how many different answers you would expect to get if you asked 20 (knowledgeable people). If you'd get more than 6 or 7, then it probably belongs as community wiki.</li>
</ol>
|
17,480 | <p>I have asked a question at <a href="https://academia.stackexchange.com/">academia.stackexchange</a> with three sub-questions recently and I was told that it was not proper there. I just wonder if it is acceptable if one asks multiple (related) questions at math.stackexchange? </p>
<p>To mathematicians, if the answer is "no", that would not even make sense: I can always ask one single question of the form </p>
<blockquote>
<p>What is the ordered triple of the form $(X, Y, Z)$ such that $X$ is the answer to $X'$, $Y$ is the answer to $Y'$ and $Z$ is the answer to $Z'$?</p>
</blockquote>
| Community | -1 | <p>Please do not ask multiple questions within one. There is a closing reason for those (<em>too broad</em>) which says that the author should reduce the scope of the question. I see it has been applied to your <a href="https://academia.stackexchange.com/q/32407/">Academia question</a>.</p>
<p>The "clever" packaging into an ordered tuple would earn you nothing but a few additional downvotes along with closevotes. </p>
|
3,309,511 | <p>Prove that there exists infinitely many pairs of positive real numbers <span class="math-container">$x$</span> and <span class="math-container">$y$</span> such that <span class="math-container">$x\neq y$</span> but <span class="math-container">$ x^x=y^y$</span>.</p>
<p>For example <span class="math-container">$\tfrac{1}{4} \neq \tfrac{1}{2}$</span> but </p>
<p><span class="math-container">$$\left( \frac{1}{4} \right)^{1/4} = \left( \frac{1}{2} \right)^{1/2}$$</span></p>
<p>I am confused how to approach the problem. I think we have to find all the solutions in a certain interval, probably <span class="math-container">$(0,1]$</span>.</p>
| Ian | 83,396 | <p>It is easier than that, you need only examine the derivative: the derivative is <span class="math-container">$x^x \left ( \frac{d}{dx} x \log(x) \right ) = x^x \left ( \log(x) + 1 \right )$</span> which changes sign precisely at <span class="math-container">$x=e^{-1}$</span>. This point is a minimum, and of course <span class="math-container">$\lim_{x \to \infty} x^x=\infty$</span>. Knowing that, do you see now how for every <span class="math-container">$x \in (0,e^{-1})$</span> there exists <span class="math-container">$y>e^{-1}$</span> with <span class="math-container">$x^x=y^y$</span>?</p>
<p>(More generally the original statement holds whenever any continuous function has a local extremum.)</p>
|
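The existence argument above can be realized computationally (our own sketch; the helper name `partner` is made up): since $t\mapsto t\ln t$ is increasing on $[e^{-1},1]$, a bisection recovers, for each $x\in(0,e^{-1})$, the $y>e^{-1}$ with $y^y=x^x$, e.g. the pair $(1/4, 1/2)$ from the question.

```python
import math

def f(t):
    return t * math.log(t)       # log of t^t

def partner(x, iters=100):
    """For x in (0, 1/e), find y > 1/e with y^y = x^x by bisection."""
    lo, hi = 1 / math.e, 1.0
    target = f(x)
    for _ in range(iters):
        mid = (lo + hi) / 2
        if f(mid) < target:      # f is increasing on [1/e, 1]
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

y = partner(0.25)
assert abs(y - 0.5) < 1e-9       # recovers the pair (1/4, 1/2)
assert abs(f(y) - f(0.25)) < 1e-9
```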
224,970 | <p>$\newcommand{\Int}{\operatorname{Int}}\newcommand{\Bdy}{\operatorname{Bdy}}$
If $A$ and $B$ are sets in a metric space, show that:
(note that $\Int$ stands for interior of the set)</p>
<ol>
<li>$\Int (A) \cup \Int (B) \subset \Int (A \cup B)$.</li>
<li>$(\overline{ A \cup B}) = (\overline A \cup \overline B )$. (note that $\overline A = \Int (A) \cup \Bdy(A)$ )</li>
</ol>
<p>Now for the first (1) I see why its true for instance in $R$ we can have the intervals set $A=[a,b]$ and $B=[b,c]$ we have $A \cup B=[a,c]$ so $\Int(A \cup B)=(a,c)$ now $\Int(A)=(a,b)$ and
$\Int(B)=(b,c)$ so we lose $b$ when we take union to form $\Int(A) \cup \Int(B)=(a,b) \cup (b,c)$.</p>
| Sixtina Aquafina | 416,396 | <blockquote>
<p>The basic rule is S→AABSBCC|ABSC|ASBC|1 where AA, BB and CC got to 0|1, and really only have different labels to highlight the counting. Not that we're not really keeping track of the order, but the step-wise counting ensures we have enough 0s and 1s to either side (remember that the "middle" 11 is not necessarily actually in the middle, just that the string can be broken into those three bits).</p>
</blockquote>
<p>I think there might be a slight problem with this grammar? First of all it doesn't generate string with length 3. Secondly, it generates strings of length 4 like 0010, and I think by problem statement 0010 shall not be in that language? Even if 0110 that has more 1's towards the middle might be an inappropriate string to generate, because it's a length four, and it just does not have a "middle third" with one complete bit (not like two bits sliced in 2/3 and concatenated together shall be valid).</p>
|
224,970 | <p>$\newcommand{\Int}{\operatorname{Int}}\newcommand{\Bdy}{\operatorname{Bdy}}$
If $A$ and $B$ are sets in a metric space, show that:
(note that $\Int$ stands for interior of the set)</p>
<ol>
<li>$\Int (A) \cup \Int (B) \subset \Int (A \cup B)$.</li>
<li>$(\overline{ A \cup B}) = (\overline A \cup \overline B )$. (note that $\overline A = \Int (A) \cup \Bdy(A)$ )</li>
</ol>
<p>Now for the first (1) I see why its true for instance in $R$ we can have the intervals set $A=[a,b]$ and $B=[b,c]$ we have $A \cup B=[a,c]$ so $\Int(A \cup B)=(a,c)$ now $\Int(A)=(a,b)$ and
$\Int(B)=(b,c)$ so we lose $b$ when we take union to form $\Int(A) \cup \Int(B)=(a,b) \cup (b,c)$.</p>
| Kianoosh Boroojeni | 1,130,118 | <p>Here is a simpler grammar that generates the same language:</p>
<p>S-> ABSC | ASBC | A1C</p>
<p>A -> 0 | 1</p>
<p>C -> 0 | 1</p>
<p>B -> 0 | 1 | epsilon</p>
<p>where variable A generally makes parts of the first third and variable C generally makes parts of the last third of the string.</p>
|
94,525 | <p>I am trying to solve the equation
$$z^n = 1.$$</p>
<p>Taking $\log$ on both sides I get $n\log(z) = \log(1) = 0$.</p>
<p>$\implies$ $n = 0$ or $\log(z) = 0$</p>
<p>$\implies$ $n = 0$ or $z = 1$.</p>
<p>But I clearly missed out $(-1)^{\text{even numbers}}$ which is equal to $1$.</p>
<p>How do I solve this equation algebraically?</p>
| Dustan Levenstein | 18,966 | <p>You can't take the logarithm of a negative number, unless you consider the multivalued <a href="http://en.wikipedia.org/wiki/Logarithm#Complex_logarithm" rel="nofollow">complex logarithm</a>.</p>
<p>If you are willing to expand to complex numbers in that manner, then you can take the log of both sides. $\log(1) = 2\pi i k$, $k \in \mathbb{Z}$, so then you're solving for $n \log(z) = 2 \pi i k $, which gives $\log z = 2\pi i\frac{k}{n}$, or $z = e^{2 \pi i\frac{k}{n}}$, which describes all of the <a href="http://en.wikipedia.org/wiki/Root_of_unity" rel="nofollow">roots of unity</a>.</p>
|
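The formula $z = e^{2\pi i k/n}$ is easy to verify numerically (our own sketch, standard library only), including the $z=-1$ solutions for even $n$ that the question noted were missed:

```python
import cmath

def roots_of_unity(n):
    """The n solutions of z^n = 1: exp(2*pi*i*k/n) for k = 0..n-1."""
    return [cmath.exp(2j * cmath.pi * k / n) for k in range(n)]

for n in [1, 2, 3, 6, 12]:
    for z in roots_of_unity(n):
        assert abs(z**n - 1) < 1e-9
# -1 appears among the roots whenever n is even:
assert any(abs(z + 1) < 1e-9 for z in roots_of_unity(2))
```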
1,114 | <p>Or more specifically, why do people get so excited about them? And what's your favorite easy example of one, which illustrates why I should care (and is not a group)?</p>
| Josh | 343 | <p>A groupoid is a generalization of a group. The easiest definition, IMO, is as a category in which all arrows are isomorphisms. So a group is just a groupoid with one object and arrows the elements of the group.</p>
<p>The best example is the fundamental groupoid of a topological space. Build a groupoid by taking the objects to be the points in the space and the arrows from point x to point y to be equivalence classes of paths from x to y. This generalizes the idea of the fundamental group.</p>
<p>They are useful, and Ronald Brown has a whole project of building higher-dimensional group theory using them. The great thing about the fundamental groupoid is that there is a version of Van Kampen that gives the fundamental group of the circle (without using covering space theory, which is the standard way to do it using only the fundamental group).</p>
<p>A good link is <a href="http://www.bangor.ac.uk/~mas010/nonab-a-t.html">http://www.bangor.ac.uk/~mas010/nonab-a-t.html</a></p>
<p>ETA: That link might not be working. Google Ronald Brown's Topology and Groupoids book for a good introduction and motivation.</p>
|
1,114 | <p>Or more specifically, why do people get so excited about them? And what's your favorite easy example of one, which illustrates why I should care (and is not a group)?</p>
| Aleks Kissinger | 800 | <p>To follow on from what Qiaochu said, one of the interesting things about groupoids is their cardinality. Whereas the cardinality of a set is a natural number, the cardinality of a groupoid is a positive rational. This gives us a combinatorial way to inject "numbers" into an abstract system.</p>
<p>For example, a way to think of matrices of natural numbers is just taking spans of finite sets, A <- S -> B. The "numbers" come from counting the paths from A, through S, to B. Composition by pullback then just amounts to matrix multiplication. Incidentally, this is one of the nicest ways to think about commutative bi-algebras, but that's another story (see Stephen Lack - "Composing PROPs" if you're interested).</p>
<p>However, if you take spans of finite groupoids instead, you get computation with matrices of positive rational numbers. If you take spans of "nice" infinite groupoids, you get positive real numbers. John Baez and co. have a nice paper, called <a href="http://arxiv.org/abs/0908.4305v1" rel="nofollow">Higher-Dimensional Algebra VII: Groupoidification</a>, that works a lot of this out an applies it to quantum physics. It's one of the things that convinced me that groupoids were pretty cool gadgets.</p>
|
1,114 | <p>Or more specifically, why do people get so excited about them? And what's your favorite easy example of one, which illustrates why I should care (and is not a group)?</p>
| Pete L. Clark | 1,149 | <p>I (mildly) disagree with David Brown's assertion that a set is an example of a groupoid. Given any set, you can put a groupoid structure on it, even "canonically", but not <em>uniquely</em> canonically. (By way of analogy, you wouldn't say that a set is an example of a topological space, would you?) Thus if I give you a set and tell you the definition of a groupoid, you will probably be able to use that set to define a groupoid, but you might not come up with groupoid that David has in mind.</p>
<p>I want to use this as a jumping off point for my answer: one of the neat things about groupoids is that a lot of times you start with a set $X$, you take some kind of "quotient" of it, and then you are apparently left with a set $Y$ but in a way which feels unpleasant: you feel like there is a loss of information. A lot times, there is a natural groupoid structure on $X$, which has the following features:</p>
<p>(i) It is equivalent to or implied by some other kind of structure you are considering on $X$, so it is not evidently profitable to think of $X$ as a groupoid. </p>
<p>(ii) Passage to the quotient set $Y$ loses some of the evident structure.</p>
<p>(iii) However, if you think of $X$ as a groupoid, then the quotient $Y$ is also a groupoid, and this extra structure is exactly the structure that you were sad to have lost.</p>
<p>Example: Let $G$ be a group and $X$ be a set with an action of $G$. Let $Y = G \backslash X$ be the orbit space. In the passage from $X$ to $Y$ we have apparently "used up" the $G$-structure, but this is not so good: for applications we would like to know the stabilizers of the points of $X$; up to conjugacy, these only depend upon the corresponding point in $Y$ but in passage to $Y$ we seem to have lost that information, which is however very important for "mass formulas" as in Qiaochu's response. </p>
<p>Remedy: realize that any $G$-set is canonically a groupoid: the set of morphisms from $x$ to $x'$ is exactly the set of $g$ in $G$ such that $gx = x'$. Then we can take the quotient of this groupoid by the $G$-action [this can be done generally; in this case it is sufficiently evident what this means that I don't think it will be helpful to say any more about it], so that $X/G$ still has a groupoid structure, in which no two distinct objects have any morphisms between them but that the automorphism group of any single object is isomorphic to the isotropy group of any representative.</p>
<p>See for instance</p>
<p><a href="http://www.maths.qmul.ac.uk/~noohi/papers/WhatIsTopSt.pdf" rel="noreferrer">http://www.maths.qmul.ac.uk/~noohi/papers/WhatIsTopSt.pdf</a></p>
<p>for a bit more on this perspective.</p>
|
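The remedy described above can be made tangible on a tiny $G$-set (an illustration of ours, not from the answer): the quotient <em>set</em> collapses to a single point, yet the stabilizers — the automorphism groups the quotient <em>groupoid</em> retains — survive, consistent with orbit–stabilizer.

```python
# G = Z/6 acting on X = {0,1,2} by g.x = (x + g) mod 3
G = range(6)
X = range(3)
def act(g, x):
    return (x + g) % 3

orbits = {frozenset(act(g, x) for g in G) for x in X}
stabilizers = {x: [g for g in G if act(g, x) == x] for x in X}

# the quotient set is a single point...
assert orbits == {frozenset({0, 1, 2})}
# ...but each point has a nontrivial stabilizer: |orbit| * |stab| = |G|
assert all(len(s) * 3 == 6 for s in stabilizers.values())
```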
1,114 | <p>Or more specifically, why do people get so excited about them? And what's your favorite easy example of one, which illustrates why I should care (and is not a group)?</p>
| Ali Taghavi | 36,688 | <p>The holonomy groupoid of a foliation is another example of a useful groupoid.</p>
<p>it is described here:</p>
<p><a href="http://www.ams.org/journals/bull/2005-42-01/S0273-0979-04-01036-5/S0273-0979-04-01036-5.pdf">http://www.ams.org/journals/bull/2005-42-01/S0273-0979-04-01036-5/S0273-0979-04-01036-5.pdf</a></p>
<p>For a singular foliation see <a href="http://users.uoa.gr/~iandroul/AS-holgpd-final.pdf">http://users.uoa.gr/~iandroul/AS-holgpd-final.pdf</a></p>
|
2,409,918 | <p>I need your help in evaluating the following integral in <strong>closed form</strong>. <span class="math-container">$$\displaystyle\int\limits_{0.5}^{1}
\frac{\mathrm{Li}_{2}\left(x\right)\ln\left(2x - 1\right)}{x}\,\mathrm{d}x$$</span></p>
<p>Since the function is singular at <span class="math-container">$x = 0.5$</span>, we are looking for Principal Value. The integral is finite and was evaluated numerically.</p>
<p>I expect the closed form result to contain <span class="math-container">$\,\mathrm{Li}_{3}$</span> and <span class="math-container">$\,\mathrm{Li}_{2}$</span>.</p>
<p>Thanks</p>
| user90369 | 332,823 | <p>$\displaystyle \int\limits_{0.5}^1 \frac{Li_2(x)\ln(2x-1)}{x}dx=$</p>
<p>$\displaystyle =\sum\limits_{k=1}^\infty \frac{1}{k^2 2^k}\sum\limits_{v=0}^{k-1} {\binom {k-1} v} \lim\limits_{h\to 0}\frac{1}{h}\left(\frac{(2x-1)^{v+h+1}}{v+h+1}-\frac{(2x-1)^{v+1}}{v+1}\right)|_{0.5}^1$</p>
<p>$\displaystyle =-\sum\limits_{k=1}^\infty \frac{1}{k^3 2^k}\sum\limits_{v=1}^k {\binom k v} \frac{1}{v} = -\int\limits_0^1 \frac{Li_3(\frac{x+1}{2})-Li_3(\frac{1}{2})}{x}dx $</p>
<p><em>First note:</em> </p>
<p>Let $\,\displaystyle H_k(x):=x\int\limits_0^1 \frac{(xt)^k-1}{xt-1}dt=\sum\limits_{v=1}^k \frac{x^v}{v}$ . $\,$ Then $\enspace\displaystyle \sum\limits_{v=1}^k {\binom k v} \frac{1}{v}=H_k(2)-H_k(1)$ .</p>
<p><em>Second note:</em> </p>
<p>We can define e.g. $\,\displaystyle Fi_n(x):=\int\limits_0^{1-x}\frac{Li_n(t+x)-Li_n(x)}{t}dt\,$ for $\,|x|\leq 1\,$ . </p>
<p>Then $\,\displaystyle \int\limits_{0.5}^1 \frac{Li_2(x)\ln(2x-1)}{x}dx=-Fi_3(\frac{1}{2})\,$ .</p>
|
2,409,918 | <p>I need your help in evaluating the following integral in <strong>closed form</strong>. <span class="math-container">$$\displaystyle\int\limits_{0.5}^{1}
\frac{\mathrm{Li}_{2}\left(x\right)\ln\left(2x - 1\right)}{x}\,\mathrm{d}x$$</span></p>
<p>Since the function is singular at <span class="math-container">$x = 0.5$</span>, we are looking for Principal Value. The integral is finite and was evaluated numerically.</p>
<p>I expect the closed form result to contain <span class="math-container">$\,\mathrm{Li}_{3}$</span> and <span class="math-container">$\,\mathrm{Li}_{2}$</span>.</p>
<p>Thanks</p>
| Quanto | 686,284 | <p>Note that</p>
<p><span class="math-container">\begin{align}
I_1=&\int_0^1 \frac{\ln^2(1-x)\ln(1+x)}{1+x}dx =\int_0^1 \frac{\ln^2x \ln(2-x)}{2-x}dx\\
=& \>\ln2 \int_0^1 \frac{\ln^2x }{2-x}dx +\int_0^1 \frac{\ln^2x }{2-x} \left( -\int_0^1 \frac x{2-xy}dy\right) dx\\
=&\>2\ln2 Li_3(\frac12)+\int_0^1 \int_0^1 \frac{\ln^2x }{1-y}\left(\frac1{2-xy}-\frac1{2-x}\right)dy\>dx\\
= &\>2\ln2 Li_3(\frac12)+ 2\int_0^1\frac{Li_3(\frac y2)}ydy +2 \int_0^1 \frac{Li_3(\frac y2)-Li_3(\frac12)}{1-y}\>\overset{ibp}{dy}\\
= & \> 2\ln2 Li_3(\frac12)+2Li_4(\frac12)+
2\int_0^{\frac12}\underset{=J}{\frac{\ln(1-2y)Li_2(y)}y}dy\tag1
\end{align}</span>
A similar procedure establishes
<span class="math-container">\begin{align}
I_2=\int_0^1 \frac{\ln^2(1-x)\ln x}{1+x}dx
= \>2Li_4(1)+ 2J
+ 2\int^1_{\frac12}\frac{\ln(2y-1)Li_2(y)}ydy\tag2
\end{align}</span>
Combine (1) and (2) to express the original integral as
<span class="math-container">\begin{align}
\int^1_{\frac12}\frac{\ln(2y-1)Li_2(y)}ydy
=\frac12 (I_2-I_1)+\ln2 Li_3(\frac12)+Li_4(\frac12)-Li_4(1)\tag3
\end{align}</span>
where the integrals <span class="math-container">$I_1$</span> and <span class="math-container">$I_2$</span> are known, given by
<span class="math-container">\begin{align}
&I_1 =-\frac{\pi^4}{360} +2\ln2 \zeta(3)-\frac{\pi^2}6\ln^22+\frac14 \ln^42\\
&I_2 = -6Li_4(\frac12)+\frac{11\pi^4}{360}-\frac14\ln^42
\end{align}</span></p>
<p>Substitute into (3) to obtain the close-form</p>
<p><span class="math-container">\begin{align}\int^1_{\frac12}\frac{\ln(2y-1)Li_2(y)}ydy
=& -2{Li}_4\left(\frac{1}{2}\right)-\frac{1}{8}\ln2\zeta(3) + \frac{\pi^4}{180}- \frac{1}{12}\ln^42
\end{align}</span></p>
|
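The closed form above can be checked numerically (our own verification, assuming the `mpmath` library, whose `polylog` and tanh–sinh `quad` tolerate the logarithmic endpoint singularity at $y=\tfrac12$):

```python
from mpmath import mp, mpf, quad, polylog, log, pi, zeta

mp.dps = 30
half = mpf(1) / 2
numeric = quad(lambda y: log(2 * y - 1) * polylog(2, y) / y, [half, 1])
closed = (-2 * polylog(4, half) - log(2) * zeta(3) / 8
          + pi**4 / 180 - log(2)**4 / 12)
assert abs(numeric - closed) < mpf(10)**-12   # both are about -0.617186...
```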
3,780,959 | <p>Consider a connected, unweighted, undirected graph <span class="math-container">$G$</span>. Let <span class="math-container">$m$</span> be the number of edges and <span class="math-container">$n$</span> be the number of nodes.</p>
<p>Now consider the following random process. First sample a uniformly random spanning tree of <span class="math-container">$G$</span> and then pick an edge from this spanning tree uniformly at random. Our process returns the edge.</p>
<p>If I want to sample many edges from <span class="math-container">$G$</span> from the probability distribution implied by this process, is there a more efficient (in terms of computational complexity) method than sampling a new random spanning tree each time?</p>
| Misha Lavrov | 383,078 | <p>Let <span class="math-container">$\tau(G)$</span> denote the number of spanning trees in <span class="math-container">$G$</span>, and let <span class="math-container">$G \bullet vw$</span> denote edge contraction: it is the <em>multigraph</em> in which adjacent vertices <span class="math-container">$v$</span> and <span class="math-container">$w$</span> are replaced by a single vertex <span class="math-container">$x$</span>, and all edges incident to either <span class="math-container">$v$</span> or <span class="math-container">$w$</span> are changed to be adjacent to <span class="math-container">$x$</span>.</p>
<p>The spanning trees of <span class="math-container">$G$</span> containing edge <span class="math-container">$vw$</span> are in bijection with the spanning trees of <span class="math-container">$G \bullet vw$</span>, and so the probability that your process will return <span class="math-container">$vw$</span> is <span class="math-container">$$\frac{\tau(G \bullet vw)}{\tau(G)} \cdot \frac1{|V(G)|-1}.$$</span>
We can efficiently compute <span class="math-container">$\tau(H)$</span> for any multigraph graph <span class="math-container">$H$</span> using <a href="https://en.wikipedia.org/wiki/Kirchhoff%27s_theorem" rel="nofollow noreferrer">Kirchhoff's matrix tree theorem</a>.</p>
<p>(Rather than dealing with <span class="math-container">$G\bullet vw$</span>, we could also count the spanning trees containing <span class="math-container">$vw$</span> as <span class="math-container">$\tau(G) - \tau(G-vw)$</span>, but that's slightly less efficient, because the determinants are one bigger.)</p>
|
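As a tiny illustration of the matrix-tree step (our own, assuming NumPy): for $K_4$, any cofactor of the Laplacian returns Cayley's count $4^{4-2}=16$.

```python
import numpy as np

# Laplacian of K4: degree 3 on the diagonal, -1 off the diagonal
L = 4 * np.eye(4) - np.ones((4, 4))
tau = round(np.linalg.det(L[1:, 1:]))   # delete row/column 0 (any cofactor works)
assert tau == 16                        # Cayley's formula: 4^{4-2} = 16
```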
3,780,959 | <p>Consider a connected, unweighted, undirected graph <span class="math-container">$G$</span>. Let <span class="math-container">$m$</span> be the number of edges and <span class="math-container">$n$</span> be the number of nodes.</p>
<p>Now consider the following random process. First sample a uniformly random spanning tree of <span class="math-container">$G$</span> and then pick an edge from this spanning tree uniformly at random. Our process returns the edge.</p>
<p>If I want to sample many edges from <span class="math-container">$G$</span> from the probability distribution implied by this process, is there a more efficient (in terms of computational complexity) method than sampling a new random spanning tree each time?</p>
| Marcus M | 215,322 | <p>While the other answer is correct, it requires the computation of <span class="math-container">$|E| + 1$</span> many determinants. There is a faster route when <span class="math-container">$|E|$</span> is large. The first thing to note is Kirchhoff's theorem, which states that if <span class="math-container">$T$</span> is a uniform spanning tree then
<span class="math-container">$$P(e \in T) = \mathscr{R}(e_- \leftrightarrow e_+)$$</span>
where <span class="math-container">$e = \{e_-, e_+\}$</span> and <span class="math-container">$\mathscr{R}(a \leftrightarrow b)$</span> is the effective resistance between <span class="math-container">$a$</span> and <span class="math-container">$b$</span> when each edge is given resistance <span class="math-container">$1$</span>. This implies that the probability an edge is sampled in your process is <span class="math-container">$$\mathscr{R}(e_- \leftrightarrow e_+)/(|V| - 1).$$</span></p>
<p>Thus we only need to compute the effective resistance.</p>
<p>If we let <span class="math-container">$L$</span> denote the graph Laplacian and <span class="math-container">$L^+$</span> to be its Moore-Penrose pseudoinverse, then</p>
<p><span class="math-container">$$\mathscr{R}(a \leftrightarrow b) = (L^+)_{aa} + (L^+)_{bb} - 2 (L^+)_{ab}. $$</span></p>
<p>(See <a href="https://www.math.leidenuniv.nl/scripties/EllensMaster.pdf" rel="nofollow noreferrer">this master's thesis</a> for some nice discussion and references.)</p>
<p>Thus, the only computational overhead for computing the marginals is computing a single pseudoinverse. Depending on how large <span class="math-container">$|E|$</span> is, this may be faster than computing <span class="math-container">$|E|$</span> many determinants.</p>
<p>EDIT: some discussion on complexity</p>
<p>The pseudoinverse of an <span class="math-container">$n \times n$</span> matrix <a href="https://arxiv.org/ftp/arxiv/papers/0804/0804.4809.pdf" rel="nofollow noreferrer">can be computed in</a> <span class="math-container">$O(n^3)$</span> time, so computing <span class="math-container">$L^+$</span> takes <span class="math-container">$O(|V|^3)$</span> time; each of the <span class="math-container">$|E|$</span> edge marginals is then an <span class="math-container">$O(1)$</span> evaluation, so the above computes all marginals in <span class="math-container">$O(|V|^3 + |E|)$</span> time. Conversely, a determinant can be computed in, say, <span class="math-container">$O(n^{2.3})$</span> time, so the other answer has complexity <span class="math-container">$O(|E| |V|^{2.3})$</span>. Since <span class="math-container">$G$</span> is connected, <span class="math-container">$|E| \geq |V|-1$</span>, and so this algorithm is always faster (asymptotically, at least).</p>
|
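The two answers can be cross-checked on a small example (our own sketch, assuming NumPy): brute-force enumeration of spanning trees versus the effective-resistance formula from the Laplacian pseudoinverse, on a 4-cycle with one chord.

```python
import numpy as np
from itertools import combinations

V = 4
E = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]   # 4-cycle plus chord 0-2

def is_spanning_tree(edges):
    # union-find: V-1 acyclic edges span all V vertices
    parent = list(range(V))
    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:            # adding this edge would close a cycle
            return False
        parent[ru] = rv
    return True

trees = [t for t in combinations(E, V - 1) if is_spanning_tree(t)]

# Laplacian pseudoinverse -> effective resistance = P(e in T)
L = np.zeros((V, V))
for u, v in E:
    L[u, u] += 1; L[v, v] += 1
    L[u, v] -= 1; L[v, u] -= 1
Lp = np.linalg.pinv(L)

for u, v in E:
    reff = Lp[u, u] + Lp[v, v] - 2 * Lp[u, v]
    frac = sum((u, v) in t for t in trees) / len(trees)
    assert abs(reff - frac) < 1e-9
```

Dividing either quantity by $|V|-1$ gives the marginal of the edge under the sampling process in the question.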
1,341,505 | <p>Let U: $\mathbb R$ -> $\mathbb R$ be a concave function, let X be a random variable with a finite expected value, and let Y be a random variable that is independent of X and has an expected value 0. Define Z=X+Y. Prove that $E[U(X)] \ge E[U(Z)]$</p>
<p>I know that $E(X)=E(Z)$, and by Jensen's inequality $U[E(X)] \ge E[U(X)]$ but it gives me nothing so far.</p>
<p>Please help. Thanks a lot.</p>
| Amit | 378,131 | <p>Note that the following equality holds: <span class="math-container">$\mathbb{E}(Z|X = x) = \mathbb{E}(X +Y|X = x) = \mathbb{E}(x +Y|X = x) = x + \mathbb{E}(Y|X = x) =x + \mathbb{E}(Y) = x $</span>,</p>
<p>We are given that <span class="math-container">$u$</span> is concave, so by Jensen's inequality:
<span class="math-container">$$\mathbb{E}(u(X)|X = x) = u(x) \geq \mathbb{E}(u(Z)|X = x)$$</span> for all <span class="math-container">$x$</span>.
Therefore, we have <span class="math-container">$$\mathbb{E}(u(X)|X) \geq \mathbb{E}(u(Z)|X)$$</span>
Taking expectation both sides: <span class="math-container">$$\mathbb{E}(\mathbb{E}(u(X)|X)) \geq \mathbb{E}(\mathbb{E}(u(Z)|X))$$</span>
Therefore,
<span class="math-container">$$\mathbb{E}(u(X)) \geq \mathbb{E}(u(Z))$$</span></p>
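A tiny discrete sanity check of the conclusion, with hypothetical distributions chosen for illustration: <span class="math-container">$X$</span> uniform on <span class="math-container">$\{0,1\}$</span>, <span class="math-container">$Y$</span> uniform on <span class="math-container">$\{-1,1\}$</span> (independent, mean zero), and the concave utility <span class="math-container">$U(x)=-x^2$</span>:

```python
from itertools import product

def U(x):
    return -x * x   # a concave utility

X = [(0, 0.5), (1, 0.5)]    # (value, probability) pairs for X
Y = [(-1, 0.5), (1, 0.5)]   # independent of X, with E[Y] = 0

EUX = sum(p * U(x) for x, p in X)
EUZ = sum(px * py * U(x + y) for (x, px), (y, py) in product(X, Y))
print(EUX, EUZ)  # -0.5 -1.5, so E[U(X)] >= E[U(X + Y)]
```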
|
929,532 | <p>Okay so I want some hints (not solutions) on figuring out whether these sets are open, closed or neither.</p>
<p>$A = \{ (x,y,z) \in \mathbb{R}^3\ \ | \ \ |x^2+y^2+z^2|\lt2 \ and \ |z| \lt 1 \} \\ B = \{(x,y) \in \mathbb{R}^2 \ | \ y=2x^2\}$</p>
<p>Okay so since this question is the last part of the question where I proved that if the function $f$ is continuous then $f^{-1}(B)$ is open if $B$ is open where $f: X \to Y $ and $ B \subseteq X $. I assume I am supposed to define an image of $A$ and $B$ and show that they are close/open then use this definition. But I am not sure how to define the functions. I have an hint for the question stating that I should use the fact that polynomials are continuous mappings and the fact that any norm $\|\cdot\| : V \to \mathbb{R} $ is continous. So for $A$ should I consider a norm $\|\cdot\| $ induced by $|\cdot|$ then the image of $A$ would be $(-2, 2)$ since this set is open, $A$ (its preimage) will be open? And for $b$ I define $f(x)=2x^2$ but that didn't sound right... I don't know I am confused on how to take the first step. So any hints on how I should approach this question? </p>
| mm-aops | 81,587 | <p>Take $\Omega$ to be the set of all natural numbers, $F$ to be the family of all subsets of $\Omega$ and let $\mu(A) = 0$ if $A$ is a finite set and $\mu(A) = \infty$ if $A$ is infinite, I leave it to you to check that it's additive but not $\sigma$-additive.</p>
|
3,238,914 | <p>When is the <a href="https://en.wikipedia.org/wiki/Euler_line" rel="nofollow noreferrer">Euler line</a> parallel with a triangle's side?</p>
<p>I have found that a triangle with angles <span class="math-container">$45^\circ$</span> and <span class="math-container">$\arctan2$</span> is a case.</p>
<p>Is there any other case?
<a href="https://i.stack.imgur.com/1KjZe.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1KjZe.png" alt="Image"></a></p>
| Parcly Taxel | 357,390 | <p>From <a href="https://math.stackexchange.com/q/2912551/357390">here</a> we find a relation between the slopes of the three sides <span class="math-container">$p,q,r$</span> and that of the Euler line <span class="math-container">$m$</span>:
<span class="math-container">$$m=-\frac{3+pq+pr+qr}{p+q+r+3pqr}$$</span>
Without loss of generality fix two of the triangle's vertices at <span class="math-container">$(0,0)$</span> and <span class="math-container">$(1,0)$</span> respectively, letting the third vertex vary as <span class="math-container">$(x,y)$</span>. Then the side slopes are <span class="math-container">$0$</span>, <span class="math-container">$\frac yx$</span> and <span class="math-container">$\frac y{x-1}$</span>, yielding
<span class="math-container">$$m=-\frac{y^2/(x(x-1))+3}{y/x+y/(x-1)}=-\frac{y^2+3x(x-1)}{(2x-1)y}$$</span>
For the Euler line to be parallel to a side, this must be one of <span class="math-container">$0$</span>, <span class="math-container">$\frac yx$</span> or <span class="math-container">$\frac y{x-1}$</span>.</p>
<p>The first generated equation defines an ellipse with aspect ratio <span class="math-container">$\sqrt3$</span> whose minor axis is one of the triangle's sides. The other two equations are cubics:
<span class="math-container">$$4y^2+3(2x-1)^2=3$$</span>
<span class="math-container">$$3x^2(1-x)+y^2(1-3x)=0$$</span>
<span class="math-container">$$3x(1-x)^2+y^2(3x-2)=0$$</span>
For a triangle's Euler line to be parallel to one of its sides, it must be that after the transformation described, the third point's coordinates satisfy one of the above three equations. That is, if the two marked points below are triangle vertices, the third vertex must lie on the blue curves.</p>
<p><img src="https://i.stack.imgur.com/EuWiL.png" alt=""></p>
<p>The last two equations are related by the map <span class="math-container">$x\mapsto1-x$</span>.</p>
<p>For the given triangle, the third point transforms to <span class="math-container">$\left(\frac13,\frac23\right)$</span>, which satisfies the third equation.</p>
<hr>
<p>The angular form of the condition, <span class="math-container">$\tan P\tan Q=3$</span>, can be derived more easily from the Euler line slope formula. Let <span class="math-container">$P$</span> and <span class="math-container">$Q$</span> be on the <span class="math-container">$x$</span>-axis, then the slope formula gives <span class="math-container">$m=-\frac{3+pq}{p+q}$</span> and this is <span class="math-container">$0$</span> if the Euler line is parallel to <span class="math-container">$PQ$</span>, yielding <span class="math-container">$pq=-3$</span>. Now <span class="math-container">$p=\pm\tan Q$</span> and <span class="math-container">$q=\mp\tan P$</span>, the signs depending on relative position but always opposite, so <span class="math-container">$\tan P\tan Q=3$</span></p>
<hr>
<p>Another derivation of the angular form can be achieved through barycentric coordinates <span class="math-container">$A:B:C$</span>. A line parallel to <span class="math-container">$BC$</span> will have a constant first barycentric coordinate, and the centroid always has coordinates <span class="math-container">$\frac13:\frac13:\frac13$</span>, so the Euler line is parallel to <span class="math-container">$BC$</span> iff the orthocentre, which has barycentric coordinate <em>ratios</em> <span class="math-container">$\tan A:\tan B:\tan C$</span>, normalises such that the first barycentric coordinate is <span class="math-container">$\frac13$</span>. This happens when <span class="math-container">$2\tan A=\tan B+\tan C$</span>, and the angular form arises from writing <span class="math-container">$A$</span> in terms of <span class="math-container">$B$</span> and <span class="math-container">$C$</span> and simplifying:
<span class="math-container">$$2\tan(\pi-(B+C))=-2\tan(B+C)=\frac{-2(\tan B+\tan C)}{1-\tan B\tan C}=\tan B+\tan C$$</span>
<span class="math-container">$$\frac{-2}{1-\tan B\tan C}=1$$</span>
<span class="math-container">$$-2=1-\tan B\tan C$$</span>
<span class="math-container">$$\tan B\tan C=3$$</span></p>
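As a numerical sanity check of the case discussed above, one can verify with exact rational arithmetic that for the triangle with vertices <span class="math-container">$(0,0)$</span>, <span class="math-container">$(1,0)$</span>, <span class="math-container">$(1/3,2/3)$</span> the Euler line (through centroid and circumcenter) is parallel to the side joining <span class="math-container">$(1,0)$</span> and <span class="math-container">$(1/3,2/3)$</span>; this is a sketch, not part of the proof:

```python
from fractions import Fraction as F

def circumcenter(A, B, C):
    # standard formula from solving |P-A|^2 = |P-B|^2 = |P-C|^2
    (ax, ay), (bx, by), (cx, cy) = A, B, C
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy)

A, B, C = (F(0), F(0)), (F(1), F(0)), (F(1, 3), F(2, 3))
G = ((A[0] + B[0] + C[0]) / 3, (A[1] + B[1] + C[1]) / 3)  # centroid
O = circumcenter(A, B, C)
euler = (G[0] - O[0], G[1] - O[1])   # direction of the Euler line OG
side = (C[0] - B[0], C[1] - B[1])    # side BC
cross = euler[0] * side[1] - euler[1] * side[0]
print(cross)  # 0, so the Euler line is parallel to BC
```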
|
2,845,085 | <p>Find $f(5)$, if the graph of the quadratic function $f(x)=ax^2+bx+c$ intersects the ordinate axis at point $(0;3)$ and its vertex is at point $(2;0)$</p>
<p>So I used the vertex form, $y=(x-2)^2+3$, got the quadratic equation and then put $5$ instead of $x$ to get the answer, but it's wrong. I think I shouldn't have added $3$ in the vertex form but I don't know how else I can solve this</p>
| rogerl | 27,542 | <p>Hints: Plug $x=0$ into $ax^2+bx+c=3$ to find the value of $c$. Then note that the vertex of a parabola is at $x$-coordinate $-\frac{b}{2a}$. Can you take it from there?</p>
|
1,057 | <p>Suppose a finite group has the property that for every $x, y$, it follows that </p>
<p>\begin{equation*}
(xy)^3 = x^3 y^3.
\end{equation*}</p>
<p>How do you prove that it is abelian?</p>
<hr>
<p>Edit: I recall that the correct exercise needed in addition that the order of the group is not divisible by 3.</p>
| Mariano Suárez-Álvarez | 274 | <p>You don't, as the group is not necessarily abelian! The group of upper triangular 3-by-3 matrices with ones along the diagonal and coefficients in the three-element field
$\mathbb {Z}/3\mathbb{Z}$ has exponent three, so your equation holds, but it is not abelian.</p>
<p>There are lots of examples: the most famous ones are the <a href="http://en.wikipedia.org/wiki/Burnside%27s_problem#Bounded_Burnside_problem" rel="noreferrer">Burnside groups</a> $B(m,3)$: the group I described above is $B(2,3)$.</p>
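One can verify this counterexample by brute force; the sketch below enumerates the 27 upper unitriangular <span class="math-container">$3\times3$</span> matrices over <span class="math-container">$\mathbb{Z}/3\mathbb{Z}$</span> and checks both that <span class="math-container">$(xy)^3 = x^3y^3$</span> holds for every pair and that the group is nonabelian:

```python
from itertools import product

def mul(X, Y):
    # 3x3 matrix product over Z/3Z
    return tuple(tuple(sum(X[i][k] * Y[k][j] for k in range(3)) % 3
                       for j in range(3)) for i in range(3))

def cube(X):
    return mul(X, mul(X, X))

# all 27 upper triangular matrices with ones along the diagonal
G = [((1, a, b), (0, 1, c), (0, 0, 1)) for a, b, c in product(range(3), repeat=3)]

assert all(cube(mul(x, y)) == mul(cube(x), cube(y)) for x in G for y in G)
assert any(mul(x, y) != mul(y, x) for x in G for y in G)
print("(xy)^3 = x^3 y^3 for all pairs, yet the group is nonabelian")
```

The check succeeds for a simple reason: the group has exponent 3, so both sides of the identity are always the identity matrix.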
|
1,057 | <p>Suppose a finite group has the property that for every $x, y$, it follows that </p>
<p>\begin{equation*}
(xy)^3 = x^3 y^3.
\end{equation*}</p>
<p>How do you prove that it is abelian?</p>
<hr>
<p>Edit: I recall that the correct exercise needed in addition that the order of the group is not divisible by 3.</p>
| Arturo Magidin | 742 | <p>Both proofs so far are for finite groups. However, the problem (with the complete assumptions) holds for not-necessarily-finite groups, provided that the group have no element of order <span class="math-container">$3$</span>.</p>
<p>Here is a proof:</p>
<p>From <span class="math-container">$(ab)^3 = a^3b^3$</span>, we immediately conclude that <span class="math-container">$(ba)^2 = a^2b^2$</span> by cancellation. We also conclude that cubes commute with squares, since <span class="math-container">$b^2a^2=(ab)^2 = (ab)^3(ab)^{-1}= a^3b^3b^{-1}a^{-1} = a^3b^2a^{-1}$</span>, hence <span class="math-container">$b^2a^3=a^3b^2$</span>.</p>
<p>Now, consider the cube of a commutator,
<span class="math-container">$$\begin{align*}
[a,b]^3 &= (a^{-1}b^{-1}ab)^3\\
&= a^{-3}b^{-3}a^3b^3\\
&= a^{-3}b^{-3}b^2a^3b\\
&= a^{-3}b^{-1}a^3b\\
&= [a^3,b].
\end{align*}$$</span>
In particular, for any square we have <span class="math-container">$[a,x^2]^3 = [a^3,x^2]=1$</span>, because cubes commute with squares. But since <span class="math-container">$G$</span> has no elements of order <span class="math-container">$3$</span>, we conclude that <span class="math-container">$[a,x^2]=1$</span>. Thus, we conclude that every square in <span class="math-container">$G$</span> is actually central.</p>
<p>But that means that <span class="math-container">$(ab)^2 = b^2a^2 = a^2b^2$</span> for all <span class="math-container">$a,b\in G$</span>. And this condition is well known to imply that the group is abelian: <span class="math-container">$abab=aabb$</span> yields <span class="math-container">$ba=ab$</span>.</p>
<p>In particular, this holds for a finite group whose order is not divisible by <span class="math-container">$3$</span>.</p>
|
514,922 | <p>I need to prove the following affirmation: If $ \lim x_{2n} = a $ and $ \lim x_{2n-1} = a $, prove that $\lim x_n = a $ (in $ \mathbb{R} $ )</p>
<p>It is a simple proof, but I am having trouble writing it. I'm not sure whether this is the right way to express, for example, that $(x_{2n})$ converges to $a$:</p>
<p>$ \forall \epsilon > 0 \; \exists n_1 \in \mathbb{N} $ such that if $n \in \mathbb{N}$ and $n \geq n_1$, then (*) $ | x_{2n} - a| < \epsilon $</p>
<p>About the (*) step, is that correct to write $ x_{2n} $? Or should I use another notation?</p>
<p>Thanks for the help! </p>
| drhab | 75,923 | <p>Hint: for $\epsilon>0$ find some $n_{1}$ such that for $n>n_{1}$ we have $|x_{2n}-a|<\epsilon$ and also some $n_{2}$ such that for $n>n_{2}$ we have $|x_{2n-1}-a|<\epsilon$. Based on that try to find some $n_{0}$ such that for $n>n_{0}$ we have $|x_{n}-a|<\epsilon$</p>
|
40,920 | <p>We have functions $f_n\in L^1$ such that $\int f_ng$ has a limit for every $g\in L^\infty$. Does there exist a function $f\in L^1$ such that the limit equals $\int fg$? I think this is not true in general (really? - why?), then can this be true if we also know that $f_n$ belong to a certain subspace of $L^1$?</p>
| t.b. | 5,363 | <p>Another way of phrasing your question: Is $L^{1}$ <em>weakly sequentially complete</em>? That is to say: does every weak Cauchy sequence in $L^1$ converge? </p>
<p>The answer is <strong>yes</strong>.</p>
<p>(Added: See Nate's answer for a definition of <em>weak Cauchy sequence</em> and why this "completeness" of $L^1$ may be considered surprising).</p>
<p>Here's an outline of the argument (I'm assuming that you're working on a $\sigma$-finite measure space $(\Omega, \Sigma, \mu)$, but that's not an essential restriction, see the end of this answer):</p>
<ul>
<li><p>First of all, it follows from the uniform boundedness theorem that the sequence $(f_{n})$ is bounded in $L^{1}$. </p></li>
<li><p>Next, we can define for a measurable set $E$ a quantity
$$\nu(E) = \lim_{n \to \infty} \int_{E} f_{n}$$
because the characteristic function of $E$ belongs to $L^{\infty}$.</p></li>
<li><p>Then one can verify that $\nu$ is a (signed) <em>measure</em> which is absolutely continuous with respect to $\mu$. By the Radon-Nikodym theorem it follows that $\nu(E) = \int_{E} f\,d\mu$ for a unique function (class) $f \in L^{1}$.</p></li>
<li><p>Now by definition we have $\displaystyle \int f g = \lim_{n \to \infty} \int f_{n} g$ for all <em>characteristic functions</em> $g$. But as the characteristic functions span a dense subspace of $L^{\infty}$ we conclude that this must hold for all $g \in L^{\infty}$.</p></li>
</ul>
<p>You can find the details of this argument e.g. in Dunford-Schwartz, <em>Linear operators I</em>, Theorem IV.8.6 on page 290. The assumption on $\sigma$-finiteness I made is easily removed, as is also explained there: The point is that the union of the supports of the $f_n$ is $\sigma$-finite, so we may assume that we work on a $\sigma$-finite space in the first place.</p>
|
108,372 | <p>Given a map $\psi: S\rightarrow S,$ for $S$ a closed surface, is there any algorithm to compute its translation distance in the curve complex? I should say that I mostly care about checking that the translation distance is/is not very small. That is, if the algorithm can pick among the possibilities: translation distance is 0, 1, 2, 3, many, then I am happy...</p>
<p>I know there are algorithms for computing distances IN the curve complex, but this is not quite the same...</p>
| Autumn Kent | 1,335 | <p>In the braid group, Ko and Lee have given a polynomial time test of reducibility using the Garside structure. (See <a href="http://arxiv.org/abs/math/0610746" rel="nofollow">http://arxiv.org/abs/math/0610746</a>)</p>
|
1,928,149 | <p>I have the following general question about geodesics.
I know the following equation for a geodesic $\sigma$ on a manifold $M\subset R^n$ of dimension $m$, written in local coordinates:
$$(\sigma^k)''(t) + \Gamma_{ij}^k\, (\sigma^{i})'(\sigma^{j})'=0,$$
<p>for $i,j,k=1, \dots, m$.</p>
<p>Now, if I have a curve $\gamma(t)=(\gamma_1(t), \dots, \gamma_n(t))$ in $M$, how can I check that such a curve is a geodesic?</p>
<p>More precisely, how can I write my curve in local coordinates, in order to check if it satisfies my equation? I am stuck. Examples are really welcomed too.</p>
<p>Thank you.</p>
| Futurologist | 357,211 | <p>If you have your $M$ to be an $m$-dimensional submanifold of $\mathbb{R}^n$ and the Riemannian metric on $M$ is the Euclidean metric of $\mathbb{R}^n$ restricted to $M$, then there are two things you need to check.</p>
<p>1) Make sure the curve $\gamma : (a,b) \to \mathbb{R}^n$ lies on $M$, i.e. $\gamma(t) \in M$ for all $t \in (a,b)$. </p>
<p>2) Compute the normal curvature vector $N(t)$ of $\gamma(t)$ (the curvature vector of $\gamma(t)$ which is orthogonal to the tangent of $\gamma(t)$ and points to the pivoting point around which $\gamma$ is turning the most at time $t$) and check that $N(t)$ is perpendicular to the tangent space $T_{\gamma(t)}M$ for all $t \in (a,b)$.</p>
<p>If $t$ is an arclength parametrization of $\gamma(t)$, which is equivalent to $\big(\dot{\gamma}(t) \cdot \dot{\gamma}(t) \big)\equiv 1$, then $N(t) = \ddot{\gamma}(t)$. </p>
|
141,823 | <p>I am thinking about the simplest version of Hensel's lemma. Fix a prime $p$. Let $f(x)\in \mathbf{Z}[x]$ be a polynomial. Assume there exists $a_0\in \mathbf{F}_p$ such that $f(a_0)=0\mod p$, and $f'(a_0)\neq 0\mod p$. Then there exists a unique lift $a_n\in \mathbf{Z}/p^{n+1}\mathbf{Z}$ for every $n$. I know there is an elementary proof. However, I want to prove it by using standard deformation theory. It is simply a problem about extending the section $a_0$ order by order. Let $X$ be the scheme defined by $f(x)$ over $k=\mathbf{F}_p$. $I$ is the $k$-module $p\mathbf{Z}/p^2$. If I am correct, the obstruction class is in $Ext^1(a_0^*L_{X/k},I)$, and if it vanishes, the extension is classified by $Ext^0(a_0^*L_{X/k},I)$. Is there a proof of Hensel's lemma along this line?</p>
<p>I know this is like using a big machine to solve a simple problem. However, I really want to understand why $f'(a_0)\neq 0 \mod p$ implies that the obstruction class vanishes. </p>
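The elementary, order-by-order lifting in the question can be made concrete in a few lines of code; the sketch below lifts a simple root of $f$ mod $p$ to a root mod $p^n$ by Newton steps (the function name is illustrative, and `pow(x, -1, m)` for modular inverses requires Python 3.8+):

```python
def hensel_lift(f, df, a, p, n):
    # lift a root a of f mod p with df(a) != 0 mod p to a root mod p^n
    pk = p
    for _ in range(n - 1):
        pk *= p
        a = (a - f(a) * pow(df(a), -1, pk)) % pk  # Newton step mod p^{k+1}
    return a

f  = lambda x: x * x - 2   # f(3) = 7 == 0 (mod 7), f'(3) = 6 != 0 (mod 7)
df = lambda x: 2 * x
r = hensel_lift(f, df, 3, 7, 5)
print(r, f(r) % 7**5)  # second value is 0: r is a square root of 2 mod 7^5
```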
| Community | -1 | <p>One can also show that any complete local ring $(R,\mathfrak{m})$ is Henselian using the infinitesimal lifting criterion for étale morphisms: Let $S$ be an étale $R$-algebra with a section $S \to R/\mathfrak{m}$. We want to show that there is a lift $S \to R = \varprojlim_n R/\mathfrak{m}^n$. But we can construct such a lift (a compatible family of lifts $S \to R/\mathfrak{m}^n$) inductively using the infinitesimal lifting criterion for étale morphisms $\mathrm{Hom}(S,R/\mathfrak{m}^n) = \mathrm{Hom}(S,R/\mathfrak{m}^{n+1})$.</p>
|
1,885,068 | <p>Prove $$\int_0^1 \frac{x-1}{(x+1)\log{x}} \text{d}x = \log{\frac{\pi}{2}}$$</p>
<p>Tried contouring but couldn't get anywhere with a keyhole contour.</p>
<p>Geometric Series Expansion does not look very promising either.</p>
| Marco Cantarini | 171,547 | <p>We have $$I=\int_{0}^{1}\frac{x-1}{\log\left(x\right)\left(x+1\right)}dx=\sum_{k\geq0}\left(-1\right)^{k}\int_{0}^{1}\frac{x^{k+1}-x^{k}}{\log\left(x\right)}dx
$$ $$\stackrel{x=e^{-u}}{=}\sum_{k\geq0}\left(-1\right)^{k+1}\int_{0}^{\infty}\frac{e^{-\left(k+2\right)u}-e^{-\left(k+1\right)u}}{u}du
$$ and now we can apply the <a href="http://mathworld.wolfram.com/FrullanisIntegral.html" rel="noreferrer">Frullani's theorem</a> and get $$I=\sum_{k\geq1}\left(-1\right)^{k}\log\left(\frac{k}{k+1}\right)=\log\left(\prod_{k\geq1}\left(\frac{k}{k+1}\right)^{\left(-1\right)^{k}}\right)
$$ and now note that $$\prod_{k=1}^{2N}\left(\frac{k}{k+1}\right)^{\left(-1\right)^{k}}=\prod_{k=1}^{N}\frac{2k}{2k+1}\prod_{k=1}^{N}\frac{2k}{2k-1}=\frac{\left(2N\right)!!^{2}}{\left(2N+1\right)!!\left(2N-1\right)!!}=\frac{\pi N!^{2}}{2\Gamma\left(N+\frac{1}{2}\right)\Gamma\left(N+\frac{3}{2}\right)}
$$ where the last identity follows from the classic estimations for the <a href="http://mathworld.wolfram.com/DoubleFactorial.html" rel="noreferrer">double factorial</a>. Hence it is sufficient to take the limit as $N\rightarrow\infty
$ and get $$I=\color{red}{\log\left(\frac{\pi}{2}\right)}$$ as wanted.
Maybe it is interesting to see that we get a very famous product, known as the Wallis product. See <a href="https://en.wikipedia.org/wiki/Wallis_product" rel="noreferrer">here</a> for some proofs of it. </p>
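A quick numerical check of the result (midpoint rule; the integrand extends continuously to $[0,1]$, with limiting values $0$ at $x=0$ and $\frac12$ at $x=1$):

```python
import math

def integrand(x):
    return (x - 1) / ((x + 1) * math.log(x))

n = 200000  # midpoint rule on (0, 1)
approx = sum(integrand((k + 0.5) / n) for k in range(n)) / n
print(approx, math.log(math.pi / 2))  # both approximately 0.45158
```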
|
1,836,190 | <p>I've been working on a problem and got to a point where I need the closed form of </p>
<blockquote>
<p>$$\sum_{k=1}^nk\binom{m+k}{m+1}.$$</p>
</blockquote>
<p>I wasn't making any headway so I figured I would see what Wolfram Alpha could do. It gave me this: </p>
<p>$$\sum_{k=1}^nk\binom{m+k}{m+1} = \frac{n((m+2)n+1)}{(m+2) (m+3)}\binom{m+n+1}{ m+1}.$$</p>
<p>That's quite the nasty formula. Can anyone provide some insight or justification for that answer? </p>
| Matthew Conroy | 2,937 | <p>You can prove this by induction.</p>
<p>Here is the induction step:
$$
\begin{align*}
\sum_{k=1}^{n+1} k \binom{m+k}{m+1} &= \frac{n((m+2)n+1)}{(m+2)(m+3)}\binom{m+n+1}{m+1} + (n+1)\binom{m+n+1}{m+1} \\
&=\frac{(m+n+2)(m(n+1)+2n+3)}{(m+2)(m+3)} \binom{m+n+1}{m+1} \\
&=\frac{(n+1)(m(n+1)+2n+3)}{(m+2)(m+3)} \cdot \frac{(m+n+2)!}{(m+1)!(n+1)!} \\
&= \frac{(n+1)((m+2)(n+1)+1)}{(m+2)(m+3)}\cdot \binom{m+n+2}{m+1}.
\end{align*}
$$</p>
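The closed form can also be checked numerically for small $n$ and $m$; a quick sketch (the division is exact, since the formula equals an integer sum):

```python
from math import comb

def lhs(n, m):
    return sum(k * comb(m + k, m + 1) for k in range(1, n + 1))

def rhs(n, m):
    return n * ((m + 2) * n + 1) * comb(m + n + 1, m + 1) // ((m + 2) * (m + 3))

assert all(lhs(n, m) == rhs(n, m) for n in range(1, 30) for m in range(30))
print("closed form matches for all n, m < 30")
```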
|
3,306,089 | <p>I came across this meme today:</p>
<p><a href="https://i.stack.imgur.com/RfJoJ.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RfJoJ.jpg" alt="enter image description here"></a></p>
<p>The counterproof is very trivial, but I see no one disproving it. Some even say that the meme might be true. Well, <span class="math-container">$\pi$</span> cannot contain itself.</p>
<p>Well, containing everything would mean <span class="math-container">$\pi$</span> contains <span class="math-container">$\pi$</span> somewhere in it. Say it starts going <span class="math-container">$\pi=3.1415...31415...$</span> again at the <span class="math-container">$p$</span>-th digit. Then it will have to do the same at the <span class="math-container">$2p$</span>-th digit, since the "nested <span class="math-container">$\pi$</span>" also contains another <span class="math-container">$\pi$</span> in it. <span class="math-container">$\pi$</span> would then be rational, which is wrong. Thus <span class="math-container">$\pi$</span> does not contain all possible combinations.</p>
<p>Is this proof correct? I'm not a mathematician so I'm afraid I make silly mistakes.</p>
| ZAF | 609,023 | <p>Let <span class="math-container">$a_{n} = tr_{n}(\pi)$</span> denote the truncation of <span class="math-container">$\pi$</span> after <span class="math-container">$n$</span> decimal digits:</p>
<p><span class="math-container">$a_{1} = 3.1$</span></p>
<p><span class="math-container">$a_{2} = 3.14$</span></p>
<p><span class="math-container">$a_{3} = 3.141$</span></p>
<p><span class="math-container">$a_{4} = 3.1415$</span></p>
<p><span class="math-container">$\lim_{n \to \infty} a_{n} = \pi$</span></p>
<p>Then, suppose that <span class="math-container">$\pi$</span> contains all finite sequences of digits.</p>
<p>If <span class="math-container">$\pi$</span> does not contain itself, then there exists <span class="math-container">$m = \max\{ n \in \mathbb{N} : a_{n} \in$</span> digits of <span class="math-container">$\pi \}$</span>.</p>
<p>But then <span class="math-container">$a_{m+1}$</span> is a finite sequence, so it is in the digits of <span class="math-container">$\pi$</span>.</p>
<p>Then <span class="math-container">$m$</span> is not the maximum, which is a contradiction.</p>
<p>Then <span class="math-container">$\pi$</span> contains <span class="math-container">$\pi$</span>.</p>
<p>Well, this is in the hypothetical case in which it happens that <span class="math-container">$\pi$</span> contains all finite sequences of digits.</p>
|
2,713,311 | <p>$ \lim_{x \to \infty} [\frac{x^2+1}{x+1}-ax-b]=0 \ $ then show that $ \ a=1, \ b=-1 \ $</p>
<p><strong>Answer:</strong></p>
<p>$ \lim_{x \to \infty} [\frac{x^2+1}{x+1}-ax-b]=0 \\ \Rightarrow \lim_{x \to \infty} [\frac{x^2+1-ax^2-ax-bx-b}{x+1}]=0 \\ \Rightarrow \lim_{x \to \infty} \frac{2x-2ax-a-b}{1}=0 \\ \Rightarrow 2x-2ax-a-b=0 \ \ (?) $ </p>
<p>Comparing both sides , we get </p>
<p>$ 2-2a=0 \\ a+b=0 \ $</p>
<p>Solving , we get </p>
<p>$ a=1 , \ b=-1 \ $</p>
<p>But I am not sure about the above line where question mark is there.</p>
<p><strong>Can you help me?</strong></p>
| egreg | 62,967 | <p>Let $f(x)=\dfrac{x^2+1}{x+1}$. If
$$
\lim_{x\to\infty}(f(x)-ax-b)=0
$$
then also
$$
\lim_{x\to\infty}\frac{f(x)-ax-b}{x}=0
$$
Thus we must have
$$
\lim_{x\to\infty}\left(\frac{x^2+1}{x(x+1)}-a\right)=0
$$
and therefore $a=1$. Now
$$
\frac{x^2+1}{x+1}-x=\frac{x^2+1-x^2-x}{x+1}=\frac{-x}{x+1}
$$
so
$$
\lim_{x\to\infty}(f(x)-x-b)=-1-b
$$
and therefore $b=-1$.</p>
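Numerically, $f(x)-x+1=\frac{x^2+1-(x-1)(x+1)}{x+1}=\frac{2}{x+1}\to 0$, which a few evaluations confirm:

```python
def f(x):
    return (x * x + 1) / (x + 1)

for x in (1e2, 1e4, 1e6):
    print(x, f(x) - (x - 1))  # equals 2/(x+1), tending to 0
```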
|
92,967 | <p>Let <span class="math-container">$d(n)$</span> be the number of divisors function, i.e., <span class="math-container">$d(n)=\sum_{k\mid n} 1$</span> of the positive integer <span class="math-container">$n$</span>. The following estimate is well known
<span class="math-container">$$
\sum_{n\leq x} d(n)=x \log x + (2 \gamma -1) x +{\cal O}(\sqrt{x})
$$</span>
as well as its variability, e.g., the lim sup of the fraction
<span class="math-container">$$
\frac{\log d(n)}{\log n/\log \log n}
$$</span>
is <span class="math-container">$\log 2$</span> while the lim inf of <span class="math-container">$d(n)$</span> is <span class="math-container">$2,$</span> achieved whenever <span class="math-container">$n$</span> is prime.</p>
<p>I am interested in estimating, instead, the following sum
<span class="math-container">$$
A(x)=\sum_{n\leq x} \min[ d(n), f(x)]
$$</span>
for functions of <span class="math-container">$x$</span> where <span class="math-container">$f(x) = c (x / \log x)$</span> or <span class="math-container">$f(x) = c x^{\alpha}$</span> for some <span class="math-container">$\alpha \in (0,1)$</span> are possible candidates. Intuitively, the sum should not change much, but large
infrequent values contribute a lot to the sum, so I am not so sure. The lim-sup mentioned above would seem to imply that <span class="math-container">$d(n)$</span> can achieve a value as large as
<span class="math-container">$$n^{c/\log \log n}$$</span> while I seem to recall that it is also known that for any fixed exponent <span class="math-container">$\varepsilon,$</span> we have <span class="math-container">$d(n) < n^{\varepsilon}$</span> for <span class="math-container">$n$</span> large enough.</p>
<p>Any pointers, comments appreciated.</p>
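The classical estimate above is easy to check numerically, using the identity <span class="math-container">$\sum_{n\le x} d(n)=\sum_{k\le x}\lfloor x/k\rfloor$</span> (count pairs <span class="math-container">$km\le x$</span>); a quick sketch:

```python
import math

def divisor_sum(x):
    # sum_{n <= x} d(n) = sum_{k <= x} floor(x/k)
    return sum(x // k for k in range(1, x + 1))

gamma = 0.5772156649015329  # Euler-Mascheroni constant
for x in (10**4, 10**5, 10**6):
    D = divisor_sum(x)
    main = x * math.log(x) + (2 * gamma - 1) * x
    print(x, (D - main) / math.sqrt(x))  # stays bounded, as the O(sqrt x) term predicts
```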
| Dimitris Koukoulopoulos | 4,003 | <p>The key is to count integers with a given number of prime factors: if $\omega(n)=\sum_{p|n}1$ and $\Omega(n)=\sum_{p^a|n,\,a\ge1}1$, then $2^{\omega(n)}\le\tau(n)\le2^{\Omega(n)}$ and there are results that control the number of integers with a given value of $\omega(n)$, or of $\Omega(n)$. The simplest one of them is the Hardy-Ramanujan theorem: there are absolute constants $A$ and $B$ such that</p>
<p>$$\#\{n\le x: \omega(n)=r\}\le\frac{Ax}{\log x}\frac{(\log\log x+B)^{r-1}}{(r-1)!}\tag{*}$$ for all $x\ge3$ and $r\ge1$.</p>
<p>There are also lower bounds for certain ranges of $r$ and $x$ that are harder to obtain (due to Sathe-Selberg, see the chapter "The Selberg-Delange method" in Tenenbaum's book "Introduction to Analytic and Probabilistic Number Theory").</p>
<p>So here is a way to implement (*) in order to bound your sum from above. First, we need a technical trick: Every $n$ can be written in a unique way as $n=ab$, where $a$ is square-free and $b$ is square-full (i.e. $p|n\Rightarrow p^2|n$). Now, we have that </p>
<p>$$
\begin{align}\sum_{n\le x}\min(d(n),M) & \le \sum_{\substack{b\le x\\\ b\,\text{square-full}}} d(b) \sum_{\substack{a\le x/b \\\ a\,\text{square-free}}}\min(d(a),M) \\\
&\le \sum_{\substack{b\le x\\\ b\,\text{square-full}}} d(b) \sum_{a\le x/b}\min(2^{\omega(a)},M)\\\
&=\sum_{\substack{b\le x\\\ b\,\text{square-full}}} d(b) \sum_{r\ge0} \min(2^{r},M)\#\{a\le x/b:\omega(a)=r\}
\end{align}
$$
So you can insert (*) to control this sum when $b\le\sqrt{x}$. When $b>\sqrt{x}$, apply the trivial bound </p>
<p>$$\sum_{r\ge0} \min(2^{r},M)\#\{a\le x/b:\omega(a)=r\}\le\sum_{a\le x/b}d(a)\ll\frac{x\log(x/b)}{b}$$ </p>
<p>and note that $$\sum_{b\le y}d(b)\le\sum_{k^2l^3\le y}d(k^2l^3)\ll\sqrt{y}\log y,$$ </p>
<p>so that</p>
<p>$$\sum_{\substack{\sqrt{x} \le b\le x \\\ b\,\text{square-full} }} d(b)\sum_{r\ge0}\min(2^r,M)\#\{a\le x/b:\omega(a)=r\}\ll\sqrt{x}\log^2x,$$</p>
<p>by partial summation.</p>
<p>This method will give you an upper bound of the right order of magnitude for all $M$. For the lower bound, you could use that $d(n)\ge 2^{\omega(n)}$ and insert the Sathe-Selberg result (here you need to assume that $M\le(\log x)^{10}$, which is OK; the case $M\ge (\log x)^{10}$ follows by the case $M=(\log x)^{10}$). This would give you lower bounds of matching order essentially for all $M$. You could even get asymptotics but the error term will be weak.</p>
<p>An instructive remark here is that if $M=x$ (i.e. we have the full divisor sum), then this method suggests that </p>
<p>$$\sum_{n\le x}d(n)\approx \frac{x}{\log x}\sum_{r\ge1}\frac{(2\log\log x)^{r-1}}{(r-1)!}.$$</p>
<p>This sum is dominated by $r\approx2\log\log x$. And indeed, this is the case (the contribution of $r$ different from $2\log\log x$ can be bounded by (*)). So for $M\ge(\log x)^{\log 4}$, then your sum is asymptotic to the full divisor sum. However, when $M\le(\log x)^{\log 4-\epsilon}$, then it starts getting smaller, dominated by $r\approx \log M/\log 2$.</p>
|
1,714 | <p>I know of two good mathematics videos available online, namely:</p>
<ol>
<li>Sphere inside out (<a href="https://www.youtube.com/watch?v=BVVfs4zKrgk" rel="nofollow noreferrer">part I</a> and <a href="https://www.youtube.com/watch?v=x7d13SgqUXg" rel="nofollow noreferrer">part II</a>)</li>
<li><a href="https://www.youtube.com/watch?v=0z1fIsUNhO4" rel="nofollow noreferrer">Moebius transformation revealed</a></li>
</ol>
<p>Do you know of any other good math videos? Share.</p>
| Gerald Edgar | 454 | <p><a href="http://www.dimensions-math.org/">Dimensions</a></p>
<p><a href="http://www.youtube.com/watch?v=JX3VmDgiFnY&fmt=18">Möbius Transformations Revealed</a></p>
|
1,714 | <p>I know of two good mathematics videos available online, namely:</p>
<ol>
<li>Sphere inside out (<a href="https://www.youtube.com/watch?v=BVVfs4zKrgk" rel="nofollow noreferrer">part I</a> and <a href="https://www.youtube.com/watch?v=x7d13SgqUXg" rel="nofollow noreferrer">part II</a>)</li>
<li><a href="https://www.youtube.com/watch?v=0z1fIsUNhO4" rel="nofollow noreferrer">Moebius transformation revealed</a></li>
</ol>
<p>Do you know of any other good math videos? Share.</p>
| Any | 950 | <p>'Not Knot' is also a nice vid</p>
<p><a href="http://www.youtube.com/watch?v=AGLPbSMxSUM">http://www.youtube.com/watch?v=AGLPbSMxSUM</a></p>
|
1,714 | <p>I know of two good mathematics videos available online, namely:</p>
<ol>
<li>Sphere inside out (<a href="https://www.youtube.com/watch?v=BVVfs4zKrgk" rel="nofollow noreferrer">part I</a> and <a href="https://www.youtube.com/watch?v=x7d13SgqUXg" rel="nofollow noreferrer">part II</a>)</li>
<li><a href="https://www.youtube.com/watch?v=0z1fIsUNhO4" rel="nofollow noreferrer">Moebius transformation revealed</a></li>
</ol>
<p>Do you know of any other good math videos? Share.</p>
| Nikita Kalinin | 4,298 | <p>NMU(<a href="https://en.wikipedia.org/wiki/Independent_University_of_Moscow" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Independent_University_of_Moscow</a>) and MIAN lectures 2009-2010 (in Russian)</p>
<p><a href="http://erb-files.narod.ru/" rel="nofollow noreferrer">http://erb-files.narod.ru/</a></p>
|
1,714 | <p>I know of two good mathematics videos available online, namely:</p>
<ol>
<li>Sphere inside out (<a href="https://www.youtube.com/watch?v=BVVfs4zKrgk" rel="nofollow noreferrer">part I</a> and <a href="https://www.youtube.com/watch?v=x7d13SgqUXg" rel="nofollow noreferrer">part II</a>)</li>
<li><a href="https://www.youtube.com/watch?v=0z1fIsUNhO4" rel="nofollow noreferrer">Moebius transformation revealed</a></li>
</ol>
<p>Do you know of any other good math videos? Share.</p>
| DamienC | 7,031 | <p>The <a href="http://www.ihes.fr/jsp/site/Portal.jsp">IHES</a> also has a lot of <a href="http://www.dailymotion.com/user/Ihes_science/">on-line videos</a>. In particular, I like very much the ones from the "Colloque Grothendieck". </p>
|
1,714 | <p>I know of two good mathematics videos available online, namely:</p>
<ol>
<li>Sphere inside out (<a href="https://www.youtube.com/watch?v=BVVfs4zKrgk" rel="nofollow noreferrer">part I</a> and <a href="https://www.youtube.com/watch?v=x7d13SgqUXg" rel="nofollow noreferrer">part II</a>)</li>
<li><a href="https://www.youtube.com/watch?v=0z1fIsUNhO4" rel="nofollow noreferrer">Moebius transformation revealed</a></li>
</ol>
<p>Do you know of any other good math videos? Share.</p>
| euklid345 | 4,709 | <p>The famous proof of the snake lemma in the 1980s movie <em>It's My Turn</em> (it can be found on YouTube). </p>
|
1,714 | <p>I know of two good mathematics videos available online, namely:</p>
<ol>
<li>Sphere inside out (<a href="https://www.youtube.com/watch?v=BVVfs4zKrgk" rel="nofollow noreferrer">part I</a> and <a href="https://www.youtube.com/watch?v=x7d13SgqUXg" rel="nofollow noreferrer">part II</a>)</li>
<li><a href="https://www.youtube.com/watch?v=0z1fIsUNhO4" rel="nofollow noreferrer">Moebius transformation revealed</a></li>
</ol>
<p>Do you know of any other good math videos? Share.</p>
| Carl Najafi | 19,027 | <p>At the time of writing, the <a href="http://www.math.rutgers.edu/~russell2/expmath/" rel="nofollow">Rutgers experimental mathematics seminar</a> has over 200 <a href="http://www.youtube.com/user/kgnang" rel="nofollow">videos</a> up on YouTube. I wish more seminars would do this!</p>
|
1,714 | <p>I know of two good mathematics videos available online, namely:</p>
<ol>
<li>Sphere inside out (<a href="https://www.youtube.com/watch?v=BVVfs4zKrgk" rel="nofollow noreferrer">part I</a> and <a href="https://www.youtube.com/watch?v=x7d13SgqUXg" rel="nofollow noreferrer">part II</a>)</li>
<li><a href="https://www.youtube.com/watch?v=0z1fIsUNhO4" rel="nofollow noreferrer">Moebius transformation revealed</a></li>
</ol>
<p>Do you know of any other good math videos? Share.</p>
| Strongart | 9,946 | <p>I make some math videos at home. Here is one in English: <a href="http://video.yayun2010.sina.com.cn/v/b/49393046-1215048895.html" rel="nofollow">Visible Fibre Bundle</a> </p>
<p>Maybe it can help some beginners.</p>
<p>All my math videos are at <a href="http://blog.sina.com.cn/s/articlelist_1215048895_12_1.html" rel="nofollow">my blog here</a>: thirty lectures on commutative algebra, and I plan to make many more in the future. But, as you will see, most of them are in Chinese (中文), because I cannot write much English.</p>
|
1,714 | <p>I know of two good mathematics videos available online, namely:</p>
<ol>
<li>Sphere inside out (<a href="https://www.youtube.com/watch?v=BVVfs4zKrgk" rel="nofollow noreferrer">part I</a> and <a href="https://www.youtube.com/watch?v=x7d13SgqUXg" rel="nofollow noreferrer">part II</a>)</li>
<li><a href="https://www.youtube.com/watch?v=0z1fIsUNhO4" rel="nofollow noreferrer">Moebius transformation revealed</a></li>
</ol>
<p>Do you know of any other good math videos? Share.</p>
| Sniper Clown | 20,215 | <p>I am quite surprised to see that <a href="http://www.youtube.com/watch?v=p3DOGo_XF2o" rel="nofollow">Dan Freed's lecture on the Hodge Conjecture</a> has not been mentioned. (Although this is an old thread, I believe it should be in here. Previously there was only a QuickTime video, but I am grateful to find that it is now on YouTube.) </p>
|
1,714 | <p>I know of two good mathematics videos available online, namely:</p>
<ol>
<li>Sphere inside out (<a href="https://www.youtube.com/watch?v=BVVfs4zKrgk" rel="nofollow noreferrer">part I</a> and <a href="https://www.youtube.com/watch?v=x7d13SgqUXg" rel="nofollow noreferrer">part II</a>)</li>
<li><a href="https://www.youtube.com/watch?v=0z1fIsUNhO4" rel="nofollow noreferrer">Moebius transformation revealed</a></li>
</ol>
<p>Do you know of any other good math videos? Share.</p>
| Boris Bukh | 806 | <p>As of today, the digitized tapes of the CBMS Lectures on Probability Theory and Combinatorial Optimization by Michael Steele <a href="http://sms.cam.ac.uk/collection/1189351" rel="nofollow">are online</a>. I heartily recommend them: the style is informal but educational; there are jokes, juggling lessons, and speculations about the stock market, all amidst beautiful mathematics.</p>
|
1,714 | <p>I know of two good mathematics videos available online, namely:</p>
<ol>
<li>Sphere inside out (<a href="https://www.youtube.com/watch?v=BVVfs4zKrgk" rel="nofollow noreferrer">part I</a> and <a href="https://www.youtube.com/watch?v=x7d13SgqUXg" rel="nofollow noreferrer">part II</a>)</li>
<li><a href="https://www.youtube.com/watch?v=0z1fIsUNhO4" rel="nofollow noreferrer">Moebius transformation revealed</a></li>
</ol>
<p>Do you know of any other good math videos? Share.</p>
| Saikat Biswas | 13,628 | <p>'Selmer Ranks of Elliptic Curves in Families of Quadratic Twists' by Karl Rubin</p>
<p><a href="http://research.microsoft.com/apps/video/default.aspx?id=140581" rel="nofollow">http://research.microsoft.com/apps/video/default.aspx?id=140581</a></p>
|
1,714 | <p>I know of two good mathematics videos available online, namely:</p>
<ol>
<li>Sphere inside out (<a href="https://www.youtube.com/watch?v=BVVfs4zKrgk" rel="nofollow noreferrer">part I</a> and <a href="https://www.youtube.com/watch?v=x7d13SgqUXg" rel="nofollow noreferrer">part II</a>)</li>
<li><a href="https://www.youtube.com/watch?v=0z1fIsUNhO4" rel="nofollow noreferrer">Moebius transformation revealed</a></li>
</ol>
<p>Do you know of any other good math videos? Share.</p>
| Community | -1 | <p>Any video on </p>
<p><a href="http://www.josleys.com/galleries.php?showdate=1" rel="nofollow">Jos Leys "Mathematical Imagery"</a></p>
<p>is a true masterpiece, and has non-trivial mathematical content...</p>
|
1,714 | <p>I know of two good mathematics videos available online, namely:</p>
<ol>
<li>Sphere inside out (<a href="https://www.youtube.com/watch?v=BVVfs4zKrgk" rel="nofollow noreferrer">part I</a> and <a href="https://www.youtube.com/watch?v=x7d13SgqUXg" rel="nofollow noreferrer">part II</a>)</li>
<li><a href="https://www.youtube.com/watch?v=0z1fIsUNhO4" rel="nofollow noreferrer">Moebius transformation revealed</a></li>
</ol>
<p>Do you know of any other good math videos? Share.</p>
| Alexey Ustinov | 5,712 | <p><a href="http://sms.cam.ac.uk/collection/533691" rel="nofollow">Discrete Integrable Systems</a> at Isaac Newton Institute for Mathematical Sciences</p>
|
4,099,649 | <p>I’m trying to solve two (in my opinion, tough) integrals which appear in part of my problem. I tried different ways but in the end I failed. See them below, please.</p>
<p><span class="math-container">$${\rm{integral}}\,1 = \int {{{\left( {\frac{A}{{{x^\alpha }}}\, + \sqrt {B + \frac{C}{{{x^{2\alpha }}}}\,} } \right)}^{\frac{1}{3}}}} dx ,$$</span>
and
<span class="math-container">$${\rm{integral}}\,2 = \int {\frac{1}{{{{\left( {\frac{A}{{{x^\alpha }}}\, + \sqrt {B + \frac{{C\,}}{{{x^{2\alpha }}}}} } \right)}^{\frac{1}{3}}}}}} dx,$$</span></p>
<p>where <span class="math-container">$\alpha$</span> is a positive integer (<span class="math-container">$\alpha \ge 2$</span>). How can I solve them? I was wondering if someone could help me integrate these functions. Any help is appreciated. Much thanks.</p>
<p><strong>Edit</strong>: Don't you think that if we set <span class="math-container">$\alpha =2 $</span>, the integral might be easier to solve? Having this, I think I can solve the general case with <span class="math-container">$\alpha \ge 2$</span>.</p>
| Yuri Negometyanov | 297,350 | <p><span class="math-container">$\color{brown}{\textbf{Simple cases.}}$</span></p>
<p>If <span class="math-container">$\;\underline{B=0},\;$</span> then the integration looks trivial.</p>
<p>If <span class="math-container">$\;\underline{C=0},\;$</span> then, by Wolfram Alpha,</p>
<p><a href="https://i.stack.imgur.com/VBW7m.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VBW7m.png" alt="Integral 1, C=0" /></a></p>
<p><a href="https://i.stack.imgur.com/cHcjR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cHcjR.png" alt="Integral 2, C=0" /></a></p>
<p><span class="math-container">$\color{brown}{\textbf{The first integral.}}$</span></p>
<p><span class="math-container">$\color{green}{\mathbf{Case\;\alpha=3.}}$</span></p>
<p>Since
<span class="math-container">$$f_3(x) = \dfrac1x\sqrt[\large 3]{A+\sqrt{Bx^6+C}}
= \dfrac16\dfrac{6Bx^5}{Bx^6+C-C}\sqrt[\large3]{A+\sqrt{Bx^6+C}},\tag{1.1}$$</span>
then the substitution
<span class="math-container">$$t^2=Bx^6+C,\quad 6Bx^5\,\text dx=2t\,\text dt,\quad C=p^2,\tag{1.2}$$</span>
presents the integral in the form of
<span class="math-container">$$F_3(t) = \dfrac{F_{30}(p,t)+F_{30}(-p,t)}6,\tag{1.3}$$</span>
where the integral
<span class="math-container">$$F_{30}(p,t)=\int \dfrac{\sqrt[\large 3]{A+t}}{t-p}\,\text dt\tag{1.4}$$</span>
has the closed form of</p>
<p><a href="https://i.stack.imgur.com/6CyIZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6CyIZ.png" alt="Integral G1(p,t)" /></a></p>
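<p>A quick numerical sanity check of the reduction $(1.1)$-$(1.3)$, assuming <code>scipy</code> is available (the parameters and limits below are arbitrary choices, not from the answer):</p>

```python
import numpy as np
from scipy.integrate import quad

# Arbitrary test parameters (not from the answer); p = sqrt(C) as in (1.2).
A, B, C = 2.0, 1.5, 0.7
p = np.sqrt(C)

# Left-hand side: integral 1 with alpha = 3 over an arbitrary interval [1, 2].
f3 = lambda x: (1 / x) * np.cbrt(A + np.sqrt(B * x**6 + C))
lhs, _ = quad(f3, 1.0, 2.0)

# Right-hand side: the substitution t = sqrt(B x^6 + C) maps [1, 2] to [t0, t1],
# and (1.3) reduces the integral to the two F30 integrals of (1.4).
t0, t1 = np.sqrt(B * 1**6 + C), np.sqrt(B * 2**6 + C)
F30 = lambda s: quad(lambda t: np.cbrt(A + t) / (t - s), t0, t1)[0]
rhs = (F30(p) + F30(-p)) / 6

print(lhs, rhs)
```

<p>The two printed values agree to quadrature precision, confirming the partial-fraction splitting into $F_{30}(p,t)$ and $F_{30}(-p,t)$.</p>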
<p><span class="math-container">$\color{green}{\mathbf{Case\;\alpha\not=3.}}$</span></p>
<p>Since
<span class="math-container">$$f(x) = x^{\large-^\alpha/_3} \sqrt[\large 3]{A+\sqrt{Bx^{2\alpha}+C}}, \tag{2.1}$$</span>
then the substitution
<span class="math-container">$$x=y^{\large\frac3{3-\alpha}},\quad \text dx=\dfrac3{3-\alpha}\,y^{\large\frac{\alpha}{3-\alpha}}\,\text dy,\quad \beta =\dfrac{6\alpha}{3-\alpha}\tag{2.2}$$</span></p>
<p>presents the first integral in the form of
<span class="math-container">$$F_\beta(y) = \dfrac3{3-\alpha}\int\sqrt[\large 3]{A+\sqrt{C+By^{\beta}}\,}\,\text dy.\tag{2.3}$$</span></p>
<p>If <span class="math-container">$\;\underline{A^2>|C+By^\beta|},\;$</span> then
<span class="math-container">$$F_\beta(y) = \dfrac3{3-\alpha}\,\sqrt[\large 3]A \sum\limits_{k=0}^{\infty}\dbinom{^1/_3}k\, \dfrac{C^{\large^k/_2}}{A^k} \int \left(1+\dfrac BCy^{\beta}\right)^{\large^k/_2}\,\text dy$$</span>
<span class="math-container">$$ = \dfrac3{3-\alpha}\,\sqrt[\large 3]A \sum\limits_{k=0}^{\infty}\dbinom{^1/_3}k\, \dfrac{C^{\large^k/_2}}{A^k}\, y\, \operatorname{_2F_1}\left(\dfrac 1\beta, -\dfrac k2, \dfrac{1+\beta}\beta, -\dfrac BCy^\beta\right),$$</span>
<span class="math-container">$$F_\alpha(x) = \dfrac3{3-\alpha}\,\sqrt[\large 3]{Ax^{3-\alpha}} \sum\limits_{k=0}^{\infty}\dbinom{^1/_3}k\, \dfrac{C^{\large^k/_2}}{A^k}\, \operatorname{_2F_1}\left(\dfrac{3-\alpha}{6\alpha}, -\dfrac k2, \dfrac{3+5\alpha}{6\alpha}, -\dfrac BC x^{2\alpha}\right).\tag{2.4}$$</span></p>
<p><a href="https://i.stack.imgur.com/6dOBr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6dOBr.png" alt="Basic Integral." /></a></p>
<p>If <span class="math-container">$\;\underline{A^2<|C+By^\beta|},\;$</span> then
<span class="math-container">$$F_\beta(y) = \dfrac3{3-\alpha}\sum\limits_{k=0}^{\infty}\dbinom{^1/_3}k\,\dfrac{A^k}{C^{\large^k/_2}} \int \left(1+\dfrac BCy^{\beta}\right)^{\large-^k/_2}\sqrt[\large 6]{C+By^\beta} \,\text dy$$</span>
<span class="math-container">$$ = \dfrac3{3-\alpha}\,\sqrt[\large 6]C \sum\limits_{k=0}^{\infty}\dbinom{^1/_3}k\, \dfrac{A^k}{C^{\large^k/_2}}\, y\, \operatorname{_2F_1}\left(\dfrac 1\beta, \dfrac k2-\dfrac16, \dfrac{1+\beta}\beta, -\dfrac BCy^\beta\right),$$</span>
<span class="math-container">$$F_\alpha(x) = \dfrac3{3-\alpha}\,\sqrt[\large 6]{Cx^{6-2\alpha}} \sum\limits_{k=0}^{\infty} \dbinom{^1/_3}k\, \dfrac{A^k}{C^{\large^k/_2}}\, \operatorname{_2F_1}\left(\dfrac{3-\alpha}{6\alpha}, \dfrac k2-\dfrac16, \dfrac{3+5\alpha}{6\alpha}, -\dfrac BC x^{2\alpha}\right).\tag{2.5}$$</span></p>
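<p>The series $(2.4)$ can be checked against direct quadrature, assuming <code>scipy</code> is available (the parameters and interval below are hypothetical, chosen so that $A^2>|C+Bx^{2\alpha}|$ holds throughout; the prefactor $3/(3-\alpha)$ is the Jacobian constant of the substitution $x=y^{3/(3-\alpha)}$):</p>

```python
import numpy as np
from scipy.special import hyp2f1, binom
from scipy.integrate import quad

alpha = 2
A, B, C = 2.0, 0.5, 0.5

def integrand(x):
    # original integrand of integral 1
    return (A / x**alpha + np.sqrt(B + C / x**(2 * alpha)))**(1 / 3)

def F(x, K=60):
    # truncated series (2.4); 3/(3 - alpha) is the Jacobian constant
    # of the substitution x = y^(3/(3 - alpha))
    s = sum(binom(1 / 3, k) * C**(k / 2) / A**k
            * hyp2f1((3 - alpha) / (6 * alpha), -k / 2,
                     (3 + 5 * alpha) / (6 * alpha), -B / C * x**(2 * alpha))
            for k in range(K))
    return 3 / (3 - alpha) * (A * x**(3 - alpha))**(1 / 3) * s

a, b = 0.8, 1.2
numeric, _ = quad(integrand, a, b)
series = F(b) - F(a)
print(numeric, series)
```

<p>Both values agree to many digits on this interval, where the binomial expansion converges.</p>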
<p><span class="math-container">$\color{brown}{\textbf{The second integral.}}$</span></p>
<p><span class="math-container">$\color{green}{\mathbf{Case\;\alpha=-3.}}$</span></p>
<p>Since
<span class="math-container">$$g_{-3}(x) = \dfrac1{x\sqrt[\large 3]{A+\sqrt{Bx^{-6}+C}}}
= -\dfrac16\dfrac{-6Bx^{-7}}{Bx^{-6}+C-C}
\dfrac1{\sqrt[\large 3]{A+\sqrt{Bx^{-6}+C}}},\tag{3.1}$$</span>
then the substitution
<span class="math-container">$$s^2=Bx^{-6}+C,\quad -6Bx^{-7}\,\text dx=2s\,\text ds,\quad C=p^2,\tag{3.2}$$</span>
presents the integral in the form of
<span class="math-container">$$G_{-3}(s) = -\dfrac{G_{30}(p,s)+G_{30}(-p,s)}6,\tag{3.3}$$</span>
where
<span class="math-container">$$G_{30}(p,s)=\int \dfrac{\text ds}{(s-p)\sqrt[\large 3]{A+s}}$$</span>
is the analogue of the integral <span class="math-container">$\;F_{30}(p,t)\;$</span> defined by <span class="math-container">$(1.4),$</span> with the cube root in the denominator.</p>
<p><span class="math-container">$\color{green}{\mathbf{Case\;\alpha\not=-3.}}$</span></p>
<p>Since
<span class="math-container">$$g(x) = \dfrac{x^{\large^\alpha/_3}}
{\sqrt[\large3]{A+\sqrt{Bx^{2\alpha}+C}}},\tag{4.1}$$</span></p>
<p>then the substitution
<span class="math-container">$$x=z^{\large\frac3{3+\alpha}},\quad \text dx=\dfrac3{3+\alpha}\,z^{\large-\frac{\alpha}{3+\alpha}}\,\text dz,\quad \gamma =\dfrac{6\alpha}{3+\alpha}\tag{4.2}$$</span></p>
<p>presents the second integral in the form of
<span class="math-container">$$G_\gamma(z) = \dfrac3{3+\alpha}\int \dfrac {\text dz}{\sqrt[\large 3]{A+\sqrt{C+Bz^{\gamma}}}}.\tag{4.3}$$</span></p>
<p>If <span class="math-container">$\;\underline{A^2>|C+Bz^\gamma|},\;$</span> then
<span class="math-container">$$G_\gamma(z) = \dfrac3{3+\alpha}\,\dfrac1{\sqrt[\large 3]A} \sum\limits_{k=0}^{\infty}\dbinom{-^1/_3}k\, \dfrac{C^{\large^k/_2}}{A^k} \int \left(1+\dfrac BCz^{\gamma}\right)^{\large^k/_2}\,\text dz$$</span>
<span class="math-container">$$ = \dfrac3{3+\alpha}\,\dfrac1{\sqrt[\large 3]A} \sum\limits_{k=0}^{\infty}\dbinom{-^1/_3}k\, \dfrac{C^{\large^k/_2}}{A^k}\, z\, \operatorname{_2F_1}\left(\dfrac 1\gamma, -\dfrac k2, \dfrac{1+\gamma}\gamma, -\dfrac BCz^\gamma\right),$$</span>
<span class="math-container">$$G_\alpha(x) = \dfrac3{3+\alpha}\,\sqrt[\large 3]{\dfrac{x^{3+\alpha}}A} \sum\limits_{k=0}^{\infty}\dbinom{-^1/_3}k\, \dfrac{C^{\large^k/_2}}{A^k}\, \operatorname{_2F_1}\left(\dfrac{3+\alpha}{6\alpha}, -\dfrac k2, \dfrac{3+7\alpha}{6\alpha}, -\dfrac BC x^{2\alpha}\right).\tag{4.4}$$</span></p>
<p>If <span class="math-container">$\;\underline{A^2<|C+Bz^\gamma|},\;$</span> then
<span class="math-container">$$G_\gamma(z) = \dfrac3{3+\alpha}\sum\limits_{k=0}^{\infty}\dbinom{-^1/_3}k\,\dfrac{A^k}{C^{\large^k/_2}} \int \left(1+\dfrac BCz^{\gamma}\right)^{\large-^k/_2}\dfrac{\text dz}{\sqrt[\large 6]{C+Bz^\gamma}}$$</span>
<span class="math-container">$$ = \dfrac3{3+\alpha}\,\dfrac1{\sqrt[\large 6]C} \sum\limits_{k=0}^{\infty}\dbinom{-^1/_3}k\, \dfrac{A^k}{C^{\large^k/_2}}\, z\, \operatorname{_2F_1}\left(\dfrac 1\gamma, \dfrac k2+\dfrac16, \dfrac{1+\gamma}\gamma, -\dfrac BCz^\gamma\right),$$</span>
<span class="math-container">$$G_\alpha(x) = \dfrac3{3+\alpha}\,\sqrt[\large 6]{\dfrac{x^{6+2\alpha}}C} \sum\limits_{k=0}^{\infty} \dbinom{-^1/_3}k\, \dfrac{A^k}{C^{\large^k/_2}}\, \operatorname{_2F_1}\left(\dfrac{3+\alpha}{6\alpha}, \dfrac k2+\dfrac16, \dfrac{3+7\alpha}{6\alpha}, -\dfrac BC x^{2\alpha}\right).\tag{4.5}$$</span></p>
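<p>The series $(4.4)$ admits the same kind of quadrature check as the first integral, assuming <code>scipy</code> is available (hypothetical parameters; the prefactor $3/(3+\alpha)$ is the Jacobian constant of the substitution $x=z^{3/(3+\alpha)}$):</p>

```python
import numpy as np
from scipy.special import hyp2f1, binom
from scipy.integrate import quad

alpha = 2
A, B, C = 2.0, 0.5, 0.5

def integrand2(x):
    # original integrand of integral 2
    return (A / x**alpha + np.sqrt(B + C / x**(2 * alpha)))**(-1 / 3)

def G(x, K=60):
    # truncated series (4.4); 3/(3 + alpha) is the Jacobian constant
    # of the substitution x = z^(3/(3 + alpha))
    s = sum(binom(-1 / 3, k) * C**(k / 2) / A**k
            * hyp2f1((3 + alpha) / (6 * alpha), -k / 2,
                     (3 + 7 * alpha) / (6 * alpha), -B / C * x**(2 * alpha))
            for k in range(K))
    return 3 / (3 + alpha) * (x**(3 + alpha) / A)**(1 / 3) * s

a, b = 0.8, 1.2
numeric2, _ = quad(integrand2, a, b)
series2 = G(b) - G(a)
print(numeric2, series2)
```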
<p><span class="math-container">$\color{brown}{\textbf{Summary.}}$</span></p>
<p>Therefore, both of the given integrals can be expressed either in closed form (in the special cases above) or as one-dimensional series of hypergeometric functions.</p>
|
1,452,425 | <p>From what I have been told, everything in mathematics has a definition and everything is based on the rules of logic. For example, whether or not <a href="https://math.stackexchange.com/a/11155/171192">$0^0$ is $1$ is a simple matter of definition</a>.</p>
<p><strong>My question is what the definition of a set is?</strong> </p>
<p>I have noticed that many other definitions start with a set and then something. A <a href="https://en.wikipedia.org/wiki/Group_%28mathematics%29#Definition" rel="noreferrer">group is a set</a> with an operation, an equivalence relation is a set, a <a href="https://en.wikipedia.org/wiki/Function_%28mathematics%29#Definition" rel="noreferrer">function can be considered a set</a>, even the <a href="https://en.wikipedia.org/wiki/Natural_number#Constructions_based_on_set_theory" rel="noreferrer">natural numbers can be defined as sets</a> of other sets containing the empty set.</p>
<p>I understand that there is a whole area of mathematics (and philosophy?) that deals with <a href="https://en.wikipedia.org/wiki/Set_theory#Axiomatic_set_theory" rel="noreferrer">set theory</a>. I have looked at a book about this and I understand next to nothing.</p>
<p>From what little I can get, it seems a sets are "anything" that satisfies the axioms of set theory. It isn't enough to just say that a set is any collection of elements because of various paradoxes. <strong>So is it, for example, a right definition to say that a set is anything that satisfies the ZFC list of axioms?</strong></p>
| Carl Mummert | 630 | <blockquote>
<p>So is it, for example, a right definition to say that a set is anything that satisfies the ZFC list of axioms?</p>
</blockquote>
<p>That is almost correct, but not quite. A set on its own does not satisfy the ZFC axioms, any more than a vector on its own can satisfy the vector space axioms or a point on its own can satisfy the axioms of Euclidean geometry.</p>
<p>In school, especially early on, we tend to go from specific to general. First, you learn the numbers 1 through 10 as a young child. Later, you learn larger natural numbers. Finally, much later, you start to talk about the set of all natural numbers. </p>
<p>But things go the other way in advanced mathematics. The definition of a vector space does not start by saying what a "vector" is. The definition of a vector space just give properties that a <em>set</em> of vectors must have with respect to each other to make a vector space. </p>
<p>The same holds for set theory. Instead of saying "a set is anything that satisfies the ZFC list of axioms", you need to start with the entire model of set theory. Then, it does make sense to say, for example, that a ZFC-set is an object in a model of ZFC set theory. Of course, there are several axiom systems for set theory, which <em>a priori</em> have different kinds of "sets". (Of course, there are many vector spaces with different kinds of "vectors" as well.)</p>
<p>When we learn the definition of a vector space, we have some intuitive examples such as $\mathbb{R}^2$ and $\mathbb{R}^3$ to guide us. For set theory, we have examples such as subsets of $\mathbb{N}$ and $\mathbb{R}$, and pure sets such as $\{\emptyset, \{\emptyset\}\}$. These help us understand what the axioms are trying to say. </p>
|
1,722,948 | <blockquote>
<p>$$\frac{1}{x}-1>0$$</p>
</blockquote>
<p>$$\therefore \frac{1}{x} > 1$$</p>
<p>$$\therefore 1 > x$$</p>
<p>However, as is evident from the graph (as well as common sense), the right answer should be $1>x>0$. Typically, I wouldn't multiply by $x$ on both sides, as I don't know its sign, but since I was unable to factorise the LHS, I did so. How can I get this result algebraically?</p>
| DeepSea | 101,504 | <p>Hint: $x(1-x)>0$. Can you continue?</p>
|
1,722,948 | <blockquote>
<p>$$\frac{1}{x}-1>0$$</p>
</blockquote>
<p>$$\therefore \frac{1}{x} > 1$$</p>
<p>$$\therefore 1 > x$$</p>
<p>However, as is evident from the graph (as well as common sense), the right answer should be $1>x>0$. Typically, I wouldn't multiply by $x$ on both sides, as I don't know its sign, but since I was unable to factorise the LHS, I did so. How can I get this result algebraically?</p>
| GoodDeeds | 307,825 | <p>Continuing from the step:
$$\frac1x\gt1$$
Now, to multiply the inequality by any nonzero number, we need to know its sign. So, taking two cases:
<hr>
<p>Case 1: $x\gt0$</p>
<p>Multiplying by $x$ on both sides will not affect the sign. Thus,
$$1\gt x$$
Due to the assumption,
$$1\gt x\gt0$$</p>
<hr>
<p>Case 2: $x\lt0$</p>
<p>Multiplying by $x$ on both sides will reverse the sign. Thus,
$$1\lt x$$
But we assumed $x\lt 0$. Thus, there is no solution in this case.</p>
<hr>
<p>The expression is not defined at $x=0$. The solution is therefore
$$0\lt x\lt1$$</p>
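<p>For what it's worth, the interval can also be recovered mechanically; a small sketch with <code>sympy</code> (assuming it is available):</p>

```python
import sympy as sp

x = sp.symbols('x', real=True)
# solve the original inequality 1/x - 1 > 0 directly
sol = sp.solve_univariate_inequality(1/x - 1 > 0, x, relational=False)
print(sol)
```

<p>This returns the open interval $(0,1)$, matching the case analysis above.</p>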
|
1,893,280 | <p>How to show $\frac{c}{n} \leq \log(1+\frac{c}{n-c})$ for any positive constant $c$ such that $0 < c < n$?</p>
<p>I'm considering the Taylor expansion, but it does not work...</p>
| JimmyK4542 | 155,509 | <p><strong>Hint</strong>: For all $n-c \le x \le n$, we have $\dfrac{1}{n} \le \dfrac{1}{x}$. Hence, $$\displaystyle\int_{n-c}^{n}\dfrac{1}{n}\,dx \le \int_{n-c}^{n}\dfrac{1}{x}\,dx.$$</p>
|
2,948,118 | <p>I understand that for a function or a set to be considered a vector space, there are the 10 axioms or rules that it must be able to pass. My problem is that I am unable to discern how exactly we prove these things given that my book lists some weird general examples.</p>
<p>For instance: the set of all third-degree polynomials is not a vector space, but the set of all polynomials of degree four or less is? Is this because I can have <span class="math-container">$$f(x) = x^3$$</span> <span class="math-container">$$g(x) = 1 + x - x^3$$</span> <span class="math-container">$$f(x) + g(x) = 1 + x$$</span>
which isn't of degree three, whereas "degree four or less" means the sum doesn't have to be of degree four?</p>
<p>Other curious sets I can't seem to discern or wrap my head around include</p>
<p>The set <span class="math-container">$$\{(x, y)\}$$</span> where <span class="math-container">$$x \ge 0$$</span> and $y$ is a real number. </p>
<p>A 4x4 matrix with symmetrical ordering except the diagonal is 0, 0, 0, 1 in descending order.</p>
<p>And the set of all 2x2 singular matrices.</p>
<p>The most confusing of all is what I like to call the modified set which changes an operation or two into something like this:</p>
<p>Let V be a set of all positive real numbers; determine whether V is a vector space with the operations shown below</p>
<p><span class="math-container">$$x + y = xy$$</span>
<span class="math-container">$$cx = x^c$$</span></p>
<p>If anyone could help explain why these sets break whatever axiom (because I just feel like these sets fit as a vector space but my book says otherwise) it would really help me out. I just started the vector space unit in my class and I gotta say being this clueless is scary.</p>
| Fred | 380,717 | <p>Your proof is not correct: if <span class="math-container">$ \epsilon$</span> is "small", then <span class="math-container">$ \sigma \notin [-1,2]$</span>.</p>
<p>Your function <span class="math-container">$f$</span> is increasing! Let <span class="math-container">$n \in \mathbb N$</span> and let <span class="math-container">$P_n=\{x_0,\ldots,x_{3n}\}$</span> be the partition of <span class="math-container">$[-1,2]$</span> such that <span class="math-container">$x_j-x_{j-1}=\frac{1}{n}$</span> for <span class="math-container">$j=1,\ldots,3n$</span>; then check that</p>
<p><span class="math-container">$U(f,P_n)-L(f,P_n)=\frac{1}{n}(f(2)-f(-1))=\frac{2}{n}$</span>.</p>
<p>If <span class="math-container">$ \epsilon >0$</span> is given, then choose <span class="math-container">$N$</span> such that <span class="math-container">$\frac{2}{N}< \epsilon$</span>.</p>
<p>Then we have <span class="math-container">$U(f,P_N)-L(f,P_N)< \epsilon$</span>.</p>
<p>By Riemann, <span class="math-container">$f$</span> is R - integrable.</p>
<p>Try a proof for the following</p>
<p>Generalization: If <span class="math-container">$f:[a,b] \to \mathbb R$</span> is monotonic, then <span class="math-container">$f$</span> is R - integrable.</p>
|
58,060 | <p>I have been looking at Church's Thesis, which asserts that all intuitively computable functions are recursive. The definition of recursion does not allow for randomness, and some people have suggested exceptions to Church's Thesis based on generating random strings. For example, using randomness one can generate strings of arbitrarily high Kolmogorov complexity but this is not possible with recursion alone.</p>
<p>However, these exceptions are not true <em>functions</em>. They generate multiple outputs, which collectively have some property. A recursive function takes an inputs and outputs a single unique answer. So some people do not consider these random coin-flips to be true exceptions to Church's Thesis.</p>
<p>My question is whether it is possible to use randomness to get something which is still essentially deterministic like a function, but is non-recursive. </p>
<p>For example, if we had a sequence of functions $F^r_i(n)$, which are recursive functions relative to a random tape oracle $r$, which have the property that for some function $g(n)$, we have $F^r_i(n) = g(n)$ with probability approaching $1$ as $n \rightarrow \infty$ (the probability taken over the random tapes). Furthermore, $g$ would not be recursive itself. </p>
<p>Here, I am suggesting relativizing to $r$, rather than having $r$ as an input, because one might need arbitrarily many random cells. This <em>could</em> be thought of as allowing to "intuitively compute" $g$.</p>
| none | 34,896 | <p>There is an article by Leonid Levin that the OP might like:</p>
<ul>
<li><a href="http://arxiv.org/pdf/cs.CC/0203029.pdf" rel="nofollow">http://arxiv.org/pdf/cs.CC/0203029.pdf</a></li>
<li>informal overview: <a href="http://www.cs.bu.edu/fac/lnd/expo/gdl.htm" rel="nofollow">http://www.cs.bu.edu/fac/lnd/expo/gdl.htm</a></li>
</ul>
|
2,764,221 | <p>Let $A$ be a symmetric invertible $n \times n$ matrix, and $B$ an antisymmetric $n \times n$ matrix. Under what conditions is $A+B$ an invertible matrix? In particular, if $A$ is positive definite, is $A+B$ invertible? </p>
<p>This isn't homework, I am just curious. Assume all matrices have entries in $\mathbb{R}$. </p>
<p><strong>Edit to include context:</strong> </p>
<p>This question comes from a question that popped up in my research on string theory. One is interested in (pseudo)-Riemannian manifolds equipped with a two-form gauge field, modelling a background in which a closed string is moving. The metric, $g$, is a symmetric covariant 2-tensor, while the $b$-field is an antisymmetric covariant 2-tensor. The metric is non-degenerate and therefore invertible. Choosing local coordinates for the manifold, we can express the metric and $b$ field as $n \times n$ matrices, say $A$ and $B$, where $A$ is invertible. There is an operation on string backgrounds called T-duality which, in this simplified context, acts by inverting the matrix $E = A + B$, and so I am therefore interested in which scenarios this procedure works. I am mainly interested in the context where $A$ is real, invertible and positive definite (positive eigenvalues), corresponding to a Riemannian metric $g$, although I have tried to be a bit more general in the wording of the question. </p>
<p><strong>Where to start:</strong> The main issue I have is that I don't really have any criteria for when the sum of two matrices is invertible. Certainly if the determinant is non-zero then I will be happy, but the determinant is not additive, so I don't know how to approach this. In two dimensions I can construct a counterexample whenever A has negative determinant, but the situations I really care about have $det(A)>0$. I would like to find a general criterion for when $A+B$ is invertible. </p>
| orangeskid | 168,051 | <p>If $A$ has positive determinant, then in the case $n=2$, $A+B$ will be invertible. For $n\ge 3$, you can adjust the $2\times 2$ examples given in the other answers by adding a piece $(-1,1,\ldots)$ on the diagonal to make $A$ an $n\times n$ matrix with positive determinant. </p>
<p>Maybe you are thinking about the following: if $A$ is positive definite then the real parts of the eigenvalues of $A+B$ are positive ( and so $\ne 0$). This is easy to show but requires to look at complex vectors. A bit more general, considering complex matrices: </p>
<p>If $A_1$, $A_2$ are hermitian, then
the real parts of the eigenvalues of $A_1+ i A_2$ are between the smallest and the largest eigenvalue of $A_1$, and the imaginary parts of the eigenvalues of $A_1 + i A_2$ are between the smallest and the largest eigenvalue of $A_2$.</p>
<p>Note that if $B$ real and skew symmetric then $1/i B$ is hermitian. Therefore, the eigenvalues of $B$ are purely imaginary and come in pairs, $i b$, $-i b$ (and perhaps some zero eigenvalues). So in the real case $B$ has "no input" towards invertibility of $A+B$. But a definite $A$ will guarantee invertibility of $A+B$.</p>
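<p>A small numerical illustration of the last claim (sizes and the seed are arbitrary choices): for positive definite $A$ and antisymmetric $B$, the eigenvalues of $A+B$ have positive real part, hence $\det(A+B)\neq 0$.</p>

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
for _ in range(100):
    M = rng.standard_normal((n, n))
    A = M @ M.T + n * np.eye(n)        # symmetric positive definite
    S = rng.standard_normal((n, n))
    B = S - S.T                        # antisymmetric (skew-symmetric)
    eig = np.linalg.eigvals(A + B)
    assert eig.real.min() > 0          # real parts >= smallest eigenvalue of A > 0
print("A + B was invertible in every trial")
```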
|
2,358,385 | <p>I had a test and I couldn't solve this problem:<br></p>
<p>Given $f: \mathbb R^2 \rightarrow \mathbb R$.<br>For every constant $y_0$, $f(x,y_0)$ is known to be continuous.<br>Also, $\frac{\partial f}{\partial y}(x,y)$ is defined and bounded for all $(x,y)$.
<br><br>I needed to prove that $f$ is continuous on all of $\mathbb R^2$. How do you do that?</p>
| Fred | 380,717 | <p>Let $(x_0,y_0) \in \mathbb R^2$. For $(x,y) \in \mathbb R^2$ we have</p>
<p>$|f(x,y)-f(x_0,y_0)|=$</p>
<p>$|f(x,y)-f(x,y_0)+f(x,y_0)-f(x_0,y_0)| \le |f(x,y)-f(x,y_0)|+|f(x,y_0)-f(x_0,y_0)| $</p>
<p>There is $L \ge 0$ such that $|\frac{\partial f}{\partial y}(x,y)| \le L$ for all $(x,y) \in \mathbb R^2$. By the Mean value theorem, there is $t$ between $y$ and $y_0$ such that</p>
<p>$|f(x,y)-f(x,y_0)|=|\frac{\partial f}{\partial y}(x,t)(y-y_0)| \le L|y-y_0|$.</p>
<p>Hence</p>
<p>$|f(x,y)-f(x_0,y_0)| \le L|y-y_0|+|f(x,y_0)-f(x_0,y_0)| $.</p>
<p>Now it can be seen that $f(x,y) \to f(x_0,y_0)$ as $(x,y) \to (x_0,y_0)$.</p>
|
2,358,385 | <p>I had a test and I couldn't solve this problem:<br></p>
<p>Given $f: \mathbb R^2 \rightarrow \mathbb R$.<br>For every constant $y_0$, $f(x,y_0)$ is known to be continuous.<Br>Also, $\frac{\partial f}{\partial y}(x,y)$ is defined and bounded for all $(x,y)$.
<br><br>I needed to prove that $f$ is continuous for all $\mathbb R^2$. How do you do that?</p>
| Mundron Schmidt | 448,151 | <p><strong>Hint:</strong></p>
<p>For the continuity you have to consider $|f(x_0,y_0)-f(x,y)|$. The information that $x\mapsto f(x,y_0)$ is continuous says how $f$ behaves in the $x$-direction, while the boundedness of $\partial_yf(x,y)$ says how it behaves in the $y$-direction. Therefore you have to write your term so that you can use this information. That is, you write
\begin{align}
|f(x_0,y_0)-f(x,y)|&=|f(x_0,y_0)-f(x,y_0)+f(x,y_0)-f(x,y)|\\
&\leq|f(x_0,y_0)-f(x,y_0)|+|f(x,y_0)-f(x,y)|
\end{align}
For each term you have to use one of these two pieces of information.</p>
<p><strong>Proof:</strong></p>
<blockquote class="spoiler">
<p> Let $(x_0,y_0)\in\mathbb R^2$ and $\varepsilon>0$. Since $x\mapsto f(x,y_0)$ is continuous, there exists $\delta_1>0$ such that for $x\in\mathbb R$ with $|x_0-x|<\delta_1$ holds $$|f(x_0,y_0)-f(x,y_0)|<\frac\varepsilon2.$$ Further, by assumption there exists $C>0$ such that $|\partial_yf(x,y)|<C$ for all $(x,y)\in\mathbb R^2$. The MVT yields $$|f(x,y_0)-f(x,y)|<C|y_0-y|.$$ Choose $\delta<\min\left\{\delta_1,\frac{\varepsilon}{2C}\right\}$. For $(x,y)\in\mathbb R^2$ with $\|(x_0,y_0)-(x,y)\|_\infty<\delta$ holds $$|x_0-x|<\delta<\delta_1\text{ and }|y_0-y|<\delta<\frac\varepsilon{2C}$$ and therefore\begin{align}|f(x_0,y_0)-f(x,y)|&=|f(x_0,y_0)-f(x,y_0)+f(x,y_0)-f(x,y)|\\&\leq|f(x_0,y_0)-f(x,y_0)|+|f(x,y_0)-f(x,y)|\\&<\frac{\varepsilon}2+C|y_0-y|\\&<\varepsilon\end{align} </p>
</blockquote>
|
2,622,092 | <p>I want to study the convergence of the improper integral $$ \int_0^{\infty} \frac{e^{-x^2}-e^{-3x^2}}{x^a}$$To do so I used the comparison test with $\frac{1}{x^a}$ separating $\int_0^{\infty}$ into $\int_0^{1} + \int_1^{\infty}$.</p>
<p>For the first part, $\int_0^{1}$, I did $$\lim_{x\to0} \frac{\frac{e^{-x^2}-e^{-3x^2}}{x^a}}{\frac{1}{x^a}}=0$$
Therefore $\int_0^{1}\frac{e^{-x^2}-e^{-3x^2}}{x^a}$ converges for $a<1$, since $\int_0^{1}\frac{1}{x^a}$ converges for $a<1$</p>
<p>For the second part I did the same, and got that $\int_1^{\infty}\frac{e^{-x^2}-e^{-3x^2}}{x^a}$ converges for $a>1$. This means that the initial improper integral does not converge for any $a$, is this correct?</p>
| user | 505,767 | <p>For the first integral, note that for $x\to0$</p>
<p>$$e^{-x^2}=1-x^2+o(x^2) \quad \quad e^{-3x^2}=1-3x^2+o(x^2)$$</p>
<p>$$\implies \frac{e^{-x^2}-e^{-3x^2}}{x^a}= \frac{2x^2+o(x^2)}{x^a}\sim \frac{2}{x^{a-2}}$$</p>
<p>thus $\int_0^{1}$ converges by comparison with $\frac{1}{x^{a-2}}$ for $a-2<1$ that is $a<3$.</p>
<p>For the second integral, note that for $x\to +\infty$</p>
<p>$$\forall b \in \mathbb{R} \quad\frac{e^{-x^2}-e^{-3x^2}}{x^b}\to 0$$</p>
<p>thus $\int_1^{+\infty}$ converges by comparison with $\frac{1}{x^{2}}$ $\forall a$.</p>
<p>Therefore $\int_0^{\infty} \frac{e^{-x^2}-e^{-3x^2}}{x^a}$ converges $\forall a<3$.</p>
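<p>A numerical check of the conclusion, assuming <code>scipy</code> is available (the sample values of $a$ are arbitrary):</p>

```python
import numpy as np
from scipy.integrate import quad

# Near 0 the integrand behaves like 2 x^(2 - a), so the head converges for a < 3;
# the tail decays faster than any power of x.
def I(a):
    f = lambda x: (np.exp(-x**2) - np.exp(-3 * x**2)) / x**a
    head, _ = quad(f, 0, 1)
    tail, _ = quad(f, 1, np.inf)
    return head + tail

for a in (0.5, 1.5, 2.5):
    print(a, I(a))          # finite values for every a < 3
```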
|
1,843,662 | <p>Let C be that part of the circle $z=e^{i\theta}$, where $0\le\theta\le\frac\pi2$. Evaluate $\int_{c}\frac{z}{i}dz$.</p>
<p>This is my first time posting a question here. I'm not very good at writing English, so please bear with my explanation. To get to the main issue: I have no idea how to solve this problem, nor how to approach the answer. There is no solution manual, which means I cannot check what I'm doing or my answer. I would appreciate any advice, suggestions, solutions, etc.</p>
| hmakholm left over Monica | 14,366 | <p>You can always make a <em>non-deterministic</em> automaton with a single accepting state for any regular language (even without $\varepsilon$-transitions) -- <em>unless</em> the language contains the empty string and is not closed under concatenation. Just take an automaton without this restriction and create a new accepting state, and then for each transtion <em>into</em> a state that used to accept, supplement it with a transition on the same symbol that goes to the central accepting state. (There will be no transitions out of the accepting state).</p>
<p>This is not always the case for <em>deterministic</em> automata, but your examples are not quite right. You <em>can</em> make a DFA with a single accepting state for $a\cup bb$, but not for $a\cup bb^*$.</p>
<p>You can do with a single accepting state <em>unless</em> there are words $u$, $v$, and $w$ such that $u$, $v$ and $vw$ are in the language but $uw$ isn't.</p>
<p>In the case of $a\cup bb^*$, you could take $u=a$, $v=w=b$.</p>
<p>Reversing a deterministic automaton is not as simple as you make it out to be, because a DFA can have several transitions on the same symbol <em>into</em> a given state, and just reversing everything will give you a non-deterministic automaton instead.</p>
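The construction in the first paragraph can be sketched in code (an editorial illustration; the fresh state name `F` and the dict-based NFA encoding are my own choices, and the sketch assumes the language does not contain the empty string):

```python
# NFA encoded as: transitions {(state, symbol): set of target states},
# a start state, and a set of accepting states.
def single_accepting(trans, start, accepting):
    """Equivalent NFA with one accepting state 'F': every transition
    into an old accepting state gets a parallel copy into F.
    Assumes the empty string is not in the language."""
    new_trans = {}
    for (q, sym), targets in trans.items():
        new = set(targets)
        if targets & accepting:
            new.add("F")          # F has no outgoing transitions
        new_trans[(q, sym)] = new
    return new_trans, start, {"F"}

def accepts(trans, start, accepting, word):
    current = {start}
    for sym in word:
        current = set().union(*(trans.get((q, sym), set()) for q in current))
    return bool(current & accepting)

# Example: the language a ∪ bb over {a, b}.
trans = {("s", "a"): {"qa"}, ("s", "b"): {"q1"}, ("q1", "b"): {"qb"}}
nfa2 = single_accepting(trans, "s", {"qa", "qb"})
for w in ["a", "bb", "b", "ab", "", "ba", "abb"]:
    assert accepts(trans, "s", {"qa", "qb"}, w) == accepts(*nfa2, w)
```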
|
1,901,244 | <p>Let $f:[0,1] \to \mathbb{R}$ $$f(x) = \begin{cases}
1 & \text{if } x=\frac{1}{n},\ n\in \mathbb{N} \\
0 & \text{otherwise }
\end{cases}$$
I need to prove that $f$ is integrable over $[0,1]$ but I'm failing to understand how that is true. If it is indeed integrable, then we know that for all $ε$ > 0, there exists $δ$ such that for any tagged partition $x_0, ..., x_n$ and $t_0, ..., t_n$ whose mesh is less than $δ$, we have:
$$|\sum_{i=0}^n f(t_i)\Delta X_i - I| < ε$$
But if we choose the $t_i$ to be of the form $\frac{1}{i}$, then every $f(t_i)=1$ and the summation is $1$, since the $\Delta X_i$ sum to $1$ over $[0,1]$. Then $I$ must be $1$ for the inequality to hold. But if we choose tags where $f$ vanishes, $I$ would have to be $0$.<br><br>What am I missing? I would love some guidance on how to solve this.</p>
| zhw. | 228,045 | <p>I'll assume that this is known: If $g=0$ on $[a,b]$ except for finitley many points, then $\int_a^bg=0.$</p>
<p>Let $\epsilon > 0.$ Choose $a$ with $0<a<\epsilon.$ On $[a,1]$, our $f$ is nonzero at only finitely many points. From the above, there is a partition $P$ of $[a,1]$ such that $U(f,P) < \epsilon.$ It follows that</p>
<p>$$U(f,P\cup\{0\}) < a\cdot 1 + \epsilon < 2\epsilon.$$</p>
<p>Clearly $L(f,P\cup\{0\})=0.$ This proves $f$ is Riemann integrable on $[0,1],$ and $\int_0^1 f = 0.$</p>
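The same shrinking of upper sums can be seen numerically (an editorial sketch, separate from the proof above): on the uniform partition of $[0,1]$ into $N$ cells, only about $2\sqrt N$ cells contain a point of the form $1/n$, so the upper sum is roughly $2/\sqrt N \to 0$.

```python
def upper_sum(N):
    """Upper Darboux sum of f (f = 1 exactly at the points 1/n, else 0)
    on the uniform partition of [0, 1] into N subintervals."""
    count = 0
    for k in range(N):
        # Cell [k/N, (k+1)/N] contains some 1/n iff an integer n
        # satisfies N/(k+1) <= n <= N/k; the cell touching 0 always does.
        if k == 0 or (N // k) * (k + 1) >= N:
            count += 1
    return count / N

for N in (100, 10_000, 1_000_000):
    print(N, upper_sum(N))
```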
|
1,901,244 | <p>Let $f:[0,1] \to \mathbb{R}$ $$f(x) = \begin{cases}
1 & \text{if } x=\frac{1}{n},\ n\in \mathbb{N} \\
0 & \text{otherwise }
\end{cases}$$
I need to prove that $f$ is integrable over $[0,1]$ but I'm failing to understand how that is true. If it is indeed integrable, then we know that for all $ε$ > 0, there exists $δ$ such that for any tagged partition $x_0, ..., x_n$ and $t_0, ..., t_n$ whose mesh is less than $δ$, we have:
$$|\sum_{i=0}^n f(t_i)\Delta X_i - I| < ε$$
But if we choose the $t_i$ to be of the form $\frac{1}{i}$, then every $f(t_i)=1$ and the summation is $1$, since the $\Delta X_i$ sum to $1$ over $[0,1]$. Then $I$ must be $1$ for the inequality to hold. But if we choose tags where $f$ vanishes, $I$ would have to be $0$.<br><br>What am I missing? I would love some guidance on how to solve this.</p>
| Farewell | 278,893 | <p>$\int_{0}^{1}f(x)dx=-\int_{1}^{0}f(x)dx=-\sum_{i=1}^{\infty}\int_{\frac {1}{i}}^{\frac {1}{i+1}}f(x)dx=-\sum_{i=1}^{\infty}0=0$</p>
|
2,628,149 | <p>I am having trouble finding the general solution of the following second order ODE for $y = y(x)$ without constant coefficients: </p>
<p>$3x^2y'' = 6y$<br>
$x>0$</p>
<p>I realise that it may be possible to simply guess the form of the solution and substitute it back into the equation, but I do not wish to use that approach here. </p>
<p>I would appreciate any help, thanks. </p>
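One systematic route (an editorial sketch, added here since the asker wants to avoid guessing): the substitution $t=\ln x$ turns this Euler equation into the constant-coefficient equation $3(Y''-Y')=6Y$, whose characteristic roots are $r=2$ and $r=-1$, giving $y=C_1x^2+C_2x^{-1}$. A finite-difference check with arbitrary sample constants:

```python
def y(x, c1=1.5, c2=-0.7):
    # General solution y = C1*x^2 + C2/x with arbitrary sample constants.
    return c1 * x**2 + c2 / x

h = 1e-4
for x in (0.5, 1.0, 3.0):
    # Central second difference approximates y''(x).
    ypp = (y(x + h) - 2 * y(x) + y(x - h)) / h**2
    assert abs(3 * x**2 * ypp - 6 * y(x)) < 1e-3
print("3*x^2*y'' = 6*y holds at the sample points")
```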
| Jack D'Aurizio | 44,121 | <p>For $x$ close to the origin we have $(1-x)^{1/4} \approx 1-\frac{x}{4}-\frac{3x^2}{32}$ and an even better approximation is $(1-x)^{1/4} \approx 1-\frac{x}{4}-\frac{9x^2}{96-56x}$. By evaluating at $x=\frac{1}{16}$ and multiplying by $2$ we get the approximation $\sqrt[4]{15}\approx \color{red}{\frac{23301}{11840}}$ which is correct up to six figures.</p>
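Checking the claimed identity and accuracy (an editorial sketch, not part of the answer): writing $15=16\left(1-\tfrac{1}{16}\right)$ gives $\sqrt[4]{15}=2\left(1-\tfrac{1}{16}\right)^{1/4}$, and exact rational arithmetic reproduces the fraction in the answer.

```python
from fractions import Fraction

x = Fraction(1, 16)
# The refined approximation from the answer: (1-x)^(1/4) ≈ 1 - x/4 - 9x^2/(96 - 56x)
approx = 2 * (1 - x / 4 - 9 * x**2 / (96 - 56 * x))
print(approx)  # 23301/11840

err = abs(float(approx) - 15 ** 0.25)
print(err)  # about 2e-7, so roughly six correct figures
```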
|