qid int64 1 4.65M | question large_stringlengths 27 36.3k | author large_stringlengths 3 36 | author_id int64 -1 1.16M | answer large_stringlengths 18 63k |
|---|---|---|---|---|
1,982,102 | <p>Suppose I want to figure out, for example, how many tutorial exercises I completed today.</p>
<p>And the first question I do is <strong>question $45$</strong>, </p>
<p>And the last question I do is <strong>question $55$</strong></p>
<p>If I do $55-45$ I get $10$.</p>
<p>But I have actually done $11$ questions:<br>
$1=45$, $2=46$, $3=47$, $4=48$, $5=49$, $6=50$, $7=51$, $8=52$, $9=53$, $10=54$, $11=55$.</p>
<p>Is there any way to know when I can just subtract? Or is the rule that I always have to add $1$ when I subtract?</p>
| Nitin | 131,456 | <p>You'll always need to add one in such cases. Consider: if you do problems 45 through 45, you'll have done $45-45+1=1$ question. </p>
|
1,982,102 | <p>Suppose I want to figure out, for example, how many tutorial exercises I completed today.</p>
<p>And the first question I do is <strong>question $45$</strong>, </p>
<p>And the last question I do is <strong>question $55$</strong></p>
<p>If I do $55-45$ I get $10$.</p>
<p>But I have actually done $11$ questions:<br>
$1=45$, $2=46$, $3=47$, $4=48$, $5=49$, $6=50$, $7=51$, $8=52$, $9=53$, $10=54$, $11=55$.</p>
<p>Is there any way to know when I can just subtract? Or is the rule that I always have to add $1$ when I subtract?</p>
| BigPanda | 343,523 | <p>What you are trying to calculate is the sum of exercises you did from question $a$ to question $b$.
You have $1$ exercise per question so the number of exercises is :
$$\underbrace{\sum_{a}^b 1}_{\text{Sum from a to b}} =\underbrace{\sum_{1}^b 1}_{\text{Sum from 1 to b}} - \underbrace{\sum_{1}^{a-1} 1}_{\text{Sum from 1 to a-1}} = b-(a-1) = b-a+1 $$
Therefore, if you did questions 45 to 55, you have $a=45$ and $b=55$, so you did $55-45+1=11$ exercises.</p>
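A quick sanity check of the inclusive-counting rule $b-a+1$ (this code is my addition, not part of the original answer):

```python
# Number of questions done when working from question a to question b inclusive.
def questions_done(a, b):
    return b - a + 1

# The list of question numbers 45..55 has 11 entries, not 10:
assert questions_done(45, 55) == len(range(45, 56)) == 11

# Degenerate case: doing only question 45 still counts as 1 question.
assert questions_done(45, 45) == 1
```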
|
1,888,729 | <p>It is stated in Wikipedia (and other pages too) that the spheres $S^n$ are all not contractible. </p>
<p>Take $n=1$. Would anyone explain to me why $$S^1\times [0,1]\to S^1$$$$(e^{2\pi i t},s)\mapsto e^{2\pi i ts}$$ is not a homotopy between the identity and a point?</p>
| user39082 | 97,620 | <p>$$e^{2\pi i}=1,$$
so that
$$(e^{2\pi i},s)=(1,s)$$
but
$$(e^{2\pi i},s)$$
is mapped to $e^{2\pi is},$ while $$(1,s)$$ (which corresponds to $t=0$) is mapped to $1.$</p>
<p>For $0<s<1$ you have
$$ e^{2\pi is}\not=1$$
which shows that your map is not well-defined on the circle. (Not to mention continuity.)</p>
|
1,888,729 | <p>It is stated in Wikipedia (and other pages too) that the spheres $S^n$ are all not contractible. </p>
<p>Take $n=1$. Would anyone explain to me why $$S^1\times [0,1]\to S^1$$$$(e^{2\pi i t},s)\mapsto e^{2\pi i ts}$$ is not a homotopy between the identity and a point?</p>
| APURVA AVINASH PUJARI | 982,024 | <p>We can easily prove that S¹ is not contractible.
Let's recall some definitions and results for the proof.</p>
<p>★Simply connected space: A path-connected space X is called simply connected if every closed curve in X is null-homotopic.
After studying fundamental groups, one can restate this as: "A path-connected space X is called simply connected if its fundamental group is trivial."</p>
<p>★RESULT: A contractible space is simply connected.</p>
<p>Now we are all set to prove that S¹ is not contractible.</p>
<p>As stated above, a contractible space X is simply connected, i.e. the fundamental group of a contractible space X at any point x is trivial.
But we know that the fundamental group π1(S¹,z), for any z in S¹, is isomorphic to Z (the set of all integers), which is not trivial.
Hence S¹ is not simply connected, which implies S¹ is not contractible.</p>
|
2,377,816 | <p>I was solving problems based on Bayes' theorem from the book "A First Course in Probability" by Sheldon Ross. The problem reads as follows:</p>
<blockquote>
<p>An insurance company believes that there are two types of people: accident prone and not accident prone. Company statistics state that an accident-prone person has an accident in any given year with probability $0.4$, whereas the probability is $0.2$ for a person who is not accident prone. If we assume $30\%$ of the population is accident prone, what is the conditional probability that a new policyholder will have an accident in his or her second year of policy ownership, given that the policyholder has had an accident in the first year?</p>
</blockquote>
<p>The solution given is as follows:</p>
<blockquote>
<p><strong>Book Solution</strong><br>
$$
\begin{align}
P(A)=0.3 & & (given)\\
\therefore P(A^c)=1-P(A)=0.7 & & \\
P(A_1|A)=P(A_2|AA_1)=0.4 & &(given)\\
P(A_1|A^c)=P(A_2|A^cA_1)=0.2 & & (given)
\end{align}
$$
$$
P(A_1)=P(A_1|A)P(A)+P(A_1|A^c)P(A^c)
=(.4)(.3)+(.2)(.7)=.26 \\
P(A|A_1)=\frac{(.4)(.3)}{.26}=\frac{6}{13} \\
P(A^c|A_1)=1-P(A|A_1)=\frac{7}{13}
$$
$$
\begin{align}
P(A_2|A_1)& =P(A_2|AA_1)P(A|A_1)+P(A_2|A^cA_1)P(A^c|A_1) &&...(I)\\
&=(.4)\frac{6}{13}+(.2)\frac{7}{13}\approx .29\\
\end{align}
$$</p>
</blockquote>
<p>I don't understand the statement $(I)$. </p>
<blockquote>
<p><strong>My Solution</strong><br>
Shouldn't it be like this:
$$P(A_2|A_1)=P(A_2|AA_1)P(AA_1)+P(A_2|A^cA_1)P(A^cA_1)$$
Continuing further:<br>
$$
\begin{align}
P(A_2|A_1)&=P(A_2|AA_1)P(A_1|A)P(A)+P(A_2|A^cA_1)P(A_1|A^c)P(A^c)\\
&=(.4)(.4)(.3)+(.2)(.2)(.7)=0.076
\end{align}
$$</p>
</blockquote>
<p>Am I wrong? If yes, where did I go wrong?</p>
<p><strong>Added Later</strong> </p>
<p>After going through the comments and thinking more, it seems that I am struggling to apply the law of total probability (and my above solution is very likely wrong). The basic form of the law of total probability, which I have come across till now, is as follows:
$$P(A)=P(A|\color{red}{B})P(\color{red}{B})+P(A|\color{magenta}{B^c})P(\color{magenta}{B^c})$$
I am facing an application of this law to conditional probability for the first time, as done in the book solution:
$$P(A_2|A_1)=P(A_2|AA_1)P(A|A_1)+P(A_2|A^cA_1)P(A^c|A_1)$$
as it involves three events ($A,A_1,A_2$). The book did not explain this. Though in the current problem it looks "somewhat" intuitive:</p>
<ol>
<li><p>can someone generalize it, so as to make my understanding more clear? Say for $n$ events? </p></li>
<li><p>Also, in $P(A_2|A_1)=P(A_2|\color{red}{AA_1})P(\color{red}{A|A_1})+P(A_2|\color{magenta}{A^cA_1})P(\color{magenta}{A^c|A_1})$, I feel red colored stuff should be same and pink colored stuff should be same, as in case of simple form law of total probability. </p></li>
<li><p>I felt it should be $P(A_2|\color{red}{(A_1|A)})P(\color{red}{A_1|A})+P(A_2|\color{magenta}{(A_1|A^c)})P(\color{magenta}{A_1|A^c})$. Am I absolutely stupid here? </p></li>
<li><p>For a moment I felt it is related to: $P(E_1E_2E_3...E_n)=P(E_1)P(E_2|E_1)P(E_3|E_1E_2)...P(E_n|E_1...E_{n-1})$. Is it so?</p></li>
</ol>
<p>I am now doubting my ability to apply the law of total probability. Please enlighten me.</p>
| Graham Kemp | 135,106 | <blockquote>
<ol>
<li>can someone generalize it, so as to make my understanding more clear? Say for $n$ events? </li>
</ol>
</blockquote>
<p>If $(B_k)_n$ is a sequence of $n$ events that partition the sample space (or if at least $(B_k\cap A_1)_n$ partitions $A_1$) then, $\mathsf P(A_2\mid A_1) = \sum_{k=1}^n \mathsf P(A_2\mid A_1\cap B_k)\mathsf P(B_k\mid A_1)$</p>
<blockquote>
<ol start="2">
<li>Also, in $P(A_2|A_1)=P(A_2|\color{red}{AA_1})P(\color{red}{A|A_1})+P(A_2|\color{magenta}{A^cA_1})P(\color{magenta}{A^c|A_1})$, I feel red colored stuff should be same and pink colored stuff should be same, as in case of simple form law of total probability. </li>
</ol>
</blockquote>
<p>They are <em>not</em> the same in the case of the simple form. So why should they be?</p>
<p>Where $\Omega$ is the entire sample space, then:</p>
<p>$${{\mathsf P(A_2)~}{= \mathsf P(A_2\mid \Omega)\\=\mathsf P(A_2\mid \color{red}{A}, \Omega)P(\color{red}{A}\mid \Omega)+\mathsf P(A_2\mid \color{magenta}{A^c}, \Omega)\,\mathsf P(\color{magenta}{A^c}\mid \Omega)\\=\mathsf P(A_2\mid \color{red}{A})P(\color{red}{A})+\mathsf P(A_2\mid \color{magenta}{A^c})\,\mathsf P(\color{magenta}{A^c})}}$$</p>
<blockquote>
<ol start="3">
<li>I felt it should be $P(A_2|\color{red}{(A_1|A)})P(\color{red}{A_1|A})+P(A_2|\color{magenta}{(A_1|A^c)})P(\color{magenta}{A_1|A^c})$. Am I absolutely stupid here? </li>
</ol>
</blockquote>
<p>:) Well, I would not say <em>absolutely</em>. But seriously, it is a rather common misunderstanding.</p>
<p>The conditioning bar is <em>not</em> a set operation. It separates the <em>event</em> from the <em>condition</em> that the <em>probability function</em> is being measured over. There can only be one inside any probability function; they do not nest.</p>
<blockquote>
<ol start="4">
<li>For a moment I felt it is related to: $P(E_1E_2E_3...E_n)=P(E_1)P(E_2|E_1)P(E_3|E_1E_2)...P(E_n|E_1...E_{n-1})$. Is it so?</li>
</ol>
</blockquote>
<p>Yes, this is so. Specifically $\mathsf P(A_2,A,A_1)=\mathsf P(A_2\mid A,A_1)\mathsf P(A\mid A_1)\mathsf P(A_1)\\ \mathsf P(A_2,A^\mathsf c,A_1)=\mathsf P(A_2\mid A^\mathsf c,A_1)\mathsf P(A^\mathsf c\mid A_1)\mathsf P(A_1)$</p>
<p>$$\begin{align}\mathsf P(A_2\mid A_1)
~ & = \mathsf P((A\cup A^\mathsf c){\cap} A_2\mid A_1) && \text{Union of Complements}
\\[1ex] & = \mathsf P((A{\cap}A_2)\cup(A^\mathsf c{\cap}A_2)\mid A_1) && \text{Distributive Law}
\\[1ex] & = \mathsf P(A{\cap}A_2\mid A_1) + \mathsf P(A^\mathsf c{\cap}A_2\mid A_1)
&& \text{Additive Rule for Union of Exclusive Events}
\\[1ex] & = \dfrac{\mathsf P(A{\cap}A_1{\cap}A_2)+\mathsf P(A^\mathsf c{\cap}A_1{\cap}A_2)}{\mathsf P(A_1)} && \text{by Definition}
\\[1ex] & = \dfrac{\mathsf P(A_2\mid A{\cap}A_1)\,\mathsf P(A{\cap}A_1)+\mathsf P(A_2\mid A^\mathsf c{\cap}A_1)\,\mathsf P(A^\mathsf c{\cap}A_1)}{\mathsf P(A_1)} && \text{by Definition}
\\[1ex] & = {\mathsf P(A_2\mid A{\cap}A_1)\,\mathsf P(A\mid A_1)+\mathsf P(A_2\mid A^\mathsf c{\cap}A_1)\,\mathsf P(A^\mathsf c\mid A_1)} && \text{by Definition of Conditional Probability}
\end{align}$$</p>
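The book's number at the end of this derivation can be reproduced with exact fractions (this check and its variable names are my addition):

```python
from fractions import Fraction as F

P_A  = F(3, 10)                 # P(accident prone)
P_Ac = 1 - P_A                  # P(not accident prone)
p_A, p_Ac = F(2, 5), F(1, 5)    # yearly accident probability for each type

# Law of total probability: P(A1)
P_A1 = p_A * P_A + p_Ac * P_Ac              # 13/50 = 0.26

# Bayes' rule: P(A | A1) and its complement
P_A_given_A1  = p_A * P_A / P_A1            # 6/13
P_Ac_given_A1 = 1 - P_A_given_A1            # 7/13

# Line (I): law of total probability conditioned on A1
P_A2_given_A1 = p_A * P_A_given_A1 + p_Ac * P_Ac_given_A1

assert P_A1 == F(13, 50) and P_A2_given_A1 == F(19, 65)
print(float(P_A2_given_A1))   # 0.2923..., the book's approx .29
```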
|
73,991 | <p>I have the axiom from Peano's axioms:</p>
<p>If $A\subseteq \mathbb{N}$ and $1\in A$ and $m\in A \Rightarrow S(m)\in A$, then $A=\mathbb{N}$.</p>
<p>My book tells me that it secures that there are no more natural numbers than the numbers produced by the below 3 axioms (also from Peano's axioms):</p>
<p>$1\in \mathbb{N}$</p>
<p>For every $n\in\mathbb{N}: 1\neq S(n)$</p>
<p>For every $m,n\in \mathbb{N}:m\neq n\Rightarrow S(m) \neq S(n)$</p>
<p>And I'm not sure why? Is there someone who can explain this?</p>
<p>S(n) is an unary function $S: \mathbb N \rightarrow \mathbb N$. Does this means that $S(n)=n+1$?</p>
| Damian Sobota | 12,690 | <p>The three axioms guarantee you that the set $A=\{1,S(1),S(S(1)),S(S(S(1))),...\}$ is infinite (mainly because of the injectivity of $S$). But they do not guarantee the equality $A=\mathbb{N}$. To establish it you need the axiom of induction.</p>
<p>For example, put $S(n)=2n$. It satisfies the three axioms, but $A=\{1\}\cup\{2^n:\ n\in\mathbb{N}\}$, so there are natural numbers which cannot be obtained using only the three axioms.</p>
<p>The axiom of induction guarantees you that when $A$ is defined as above then $A=\mathbb{N}$.</p>
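The counterexample can be made concrete with a small computation (my addition; the bound is arbitrary): closing $\{1\}$ under $S(n)=2n$ never produces $3$, while $S(n)=n+1$ reaches every natural number up to the bound.

```python
def closure_under(successor, start=1, bound=1000):
    """Smallest set containing `start` and closed under `successor`, up to `bound`."""
    A = {start}
    frontier = [start]
    while frontier:
        n = successor(frontier.pop())
        if n <= bound and n not in A:
            A.add(n)
            frontier.append(n)
    return A

A = closure_under(lambda n: 2 * n)      # {1, 2, 4, 8, ..., 512}
assert A == {2 ** k for k in range(10)}
assert 3 not in A                       # S(n) = 2n misses most naturals

B = closure_under(lambda n: n + 1)      # with S(n) = n + 1 we get all of 1..1000
assert B == set(range(1, 1001))
```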
|
1,038,579 | <p>The question is from Joseph J. Rotman's book, <em>An Introduction to the Theory of Groups</em>, and it goes like this:
<br/> $A,B,C$ are subgroups of $G$ with $A\leq B$. Prove that if $AC=BC$ and $A\cap C=B\cap C$, then $A=B$. (We do not assume that either $AB$ or $AC$ is a subgroup.)
<br/><br/>
I need you guys to tell me if something is wrong - any criticism is welcomed.<br/>
Proof: the map $\varphi:A\rightarrow B/A\cap C $ defined by $\varphi(a)=a(A\cap C)$ is a homomorphism, with $\ker \varphi=A\cap C$.
<br/>Though it is clear to me (and maybe it shouldn't be...), I don't know if it is well founded.
<br/>Thus by the first isomorphism theorem, $A\cap C \vartriangleleft B$ and $A/A\cap C \cong \operatorname{Im} \varphi$.
<br/>Suppose $\varphi$ is not a surjection; then there exists $b\in B$ such that for all $a\in A$ we have $b(A\cap C)\neq a(A\cap C)$, and that makes $AC\neq BC$. So our map is a surjection, meaning $A/A\cap C \cong B/A\cap C$. From that I figure that $A\smallsetminus C$ and $B\smallsetminus C$ have the same number of elements, therefore $B$ and $A$ are of the same size. Add that to the fact that $A\leq B$, and we get $A=B$.
<br/>QED???</p>
| user 59363 | 192,084 | <p>We have
$$B=\bigcup_{b\in B}b(B\cap C)$$
and
$$A=\bigcup_{a\in A}a(A\cap C).$$
Now, for $b,b'\in B$ we have $b(B\cap C)=b'(B\cap C)$ if and only if $b^{-1}b'\in B\cap C$, if and only if $b^{-1}b'\in C$, since $b^{-1}b'\in B$ is automatically true. From $B\subseteq BC$ and the assumption that $BC=AC$ it follows that for every $b\in B$ there exist $a\in A,c\in C$ such that $b=ac$. But then $a^{-1}b=c\in C\cap B$ (since $a^{-1}\in B$ by $A\subseteq B$) and it follows that $b(B\cap C)=a(B\cap C)=a(A\cap C)$. Altogether:
$$\forall b\in B\ \exists a\in A: b(B\cap C)=a(A\cap C).$$</p>
<p>It follows that the two unions of cosets are the same, and hence $A=B$. </p>
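As a brute-force sanity check of the statement (my addition, not part of the proof), one can verify it over all subgroup triples of the cyclic group $\mathbb{Z}_{12}$, whose subgroups are exactly the sets of multiples of each divisor:

```python
# Exhaustive check of the statement in a small abelian group, Z_12 (additive).
n = 12
subgroups = []
for d in range(1, n + 1):
    if n % d == 0:
        subgroups.append(frozenset(x % n for x in range(0, n, d)))

def product(A, C):
    """AC, i.e. A + C in additive notation."""
    return frozenset((a + c) % n for a in A for c in C)

for A in subgroups:
    for B in subgroups:
        if not A <= B:          # only consider pairs with A a subgroup of B
            continue
        for C in subgroups:
            if product(A, C) == product(B, C) and A & C == B & C:
                assert A == B, (sorted(A), sorted(B), sorted(C))
print("verified for all subgroup triples of Z_12")
```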
|
2,245,408 | <blockquote>
<p>How is the following result of a parabola with focus <span class="math-container">$F(0,0)$</span> and directrix <span class="math-container">$y=-p$</span>, for <span class="math-container">$p \gt 0$</span> reached? It is said to be <span class="math-container">$$r(\theta)=\frac{p}{1-\sin \theta} $$</span></p>
</blockquote>
<p>I started by saying that the standard equation of a parabola in Cartesian form is <span class="math-container">$y= \frac{x^2}{4p} $</span>, where <span class="math-container">$p \gt 0 $</span> and the focus is at <span class="math-container">$F(0,p)$</span> and the directrix is <span class="math-container">$y=-p$</span>. So for the question above, would the equation in Cartesian form be <span class="math-container">$$y= \frac{x^2}{4 \cdot \left(\frac{1}{2}p\right)}=\frac{x^2}{2p}?$$</span></p>
<p>I thought this because the vertex is halfway between the directrix and the focus of a parabola.</p>
<p>Then I tried to use the facts:
<span class="math-container">$$r^2 = x^2 +y^2 \\
x =r\cos\theta \\
y=r\sin\theta.$$</span></p>
<p>But I couldn't get the form required, any corrections, or hints?</p>
<p>Cheers.</p>
| Brian Tung | 224,454 | <p>The equation of the parabola you want is</p>
<p>$$
y = \frac{x^2}{2p} - \frac{p}{2}
$$</p>
<p>Substituting</p>
<p>$$
x = r \cos \theta
$$
$$
y = r \sin \theta
$$</p>
<p>gives us</p>
<p>$$
\frac{\cos^2\theta}{2p} r^2 - (\sin \theta) r - \frac{p}{2} = 0
$$</p>
<p>If you solve this quadratic expression for $r$, and use the identity $\sin^2\theta + \cos^2\theta = 1$ twice, you should obtain the expression you want.</p>
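Carrying out that last step numerically (a check I've added; the values of $p$ and $\theta$ are arbitrary), the positive root of the quadratic does match $p/(1-\sin\theta)$:

```python
import math

p, theta = 2.0, 0.8          # arbitrary test values with sin(theta) < 1

# Quadratic in r: (cos^2(theta)/(2p)) r^2 - sin(theta) r - p/2 = 0
a = math.cos(theta) ** 2 / (2 * p)
b = -math.sin(theta)
c = -p / 2

# Discriminant is sin^2 + cos^2 = 1, so the positive root simplifies nicely.
r_root = (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)
r_polar = p / (1 - math.sin(theta))          # claimed polar form

assert math.isclose(r_root, r_polar)
```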
|
2,245,408 | <blockquote>
<p>How is the following result of a parabola with focus <span class="math-container">$F(0,0)$</span> and directrix <span class="math-container">$y=-p$</span>, for <span class="math-container">$p \gt 0$</span> reached? It is said to be <span class="math-container">$$r(\theta)=\frac{p}{1-\sin \theta} $$</span></p>
</blockquote>
<p>I started by saying that the standard equation of a parabola in Cartesian form is <span class="math-container">$y= \frac{x^2}{4p} $</span>, where <span class="math-container">$p \gt 0 $</span> and the focus is at <span class="math-container">$F(0,p)$</span> and the directrix is <span class="math-container">$y=-p$</span>. So for the question above, would the equation in Cartesian form be <span class="math-container">$$y= \frac{x^2}{4 \cdot \left(\frac{1}{2}p\right)}=\frac{x^2}{2p}?$$</span></p>
<p>I thought this because the vertex is halfway between the directrix and the focus of a parabola.</p>
<p>Then I tried to use the facts:
<span class="math-container">$$r^2 = x^2 +y^2 \\
x =r\cos\theta \\
y=r\sin\theta.$$</span></p>
<p>But I couldn't get the form required, any corrections, or hints?</p>
<p>Cheers.</p>
| robert timmer-arends | 468,626 | <p>For a parabola with $F(0,0)$ and directrix $y=-p$, first write a “distance equation” relating any point on the parabola, $P(x,y)$, to $F$ and to the directrix, $D$.<p>
A parabola can be defined by its locus: $distanceFP = distanceDP$.<p>
$distanceFP = \sqrt{(x-0)^2+(y-0)^2} = \sqrt{x^2+y^2}$<p>
$distanceDP = y-(-p) = y+p$<p>
So, for $P(x,y)$ on the parabola with $F(0,0)$ and directrix $y=-p$, we can write
$$\sqrt{x^2+y^2}=y+p \qquad (1)$$
For polar coordinates, we know that $x^2+y^2=r^2$ and $y=r\sin\theta$.<p>
Therefore (1) becomes $$\sqrt{r^2}=r\sin\theta + p$$
$$r = r\sin\theta + p$$
$$r - r\sin\theta = p$$
$$r(1 - \sin\theta) = p$$
Hence $$r = \frac{p}{1 - \sin\theta}$$</p>
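The final formula can be checked against the defining locus property numerically (my addition; the test angles are arbitrary, avoiding $\sin\theta = 1$): every point it generates is equidistant from the focus and the directrix.

```python
import math

p = 1.5   # arbitrary
for theta in [-1.0, 0.0, 0.7, 2.5]:          # avoid theta = pi/2 (sin = 1)
    r = p / (1 - math.sin(theta))
    x, y = r * math.cos(theta), r * math.sin(theta)
    dist_F = math.hypot(x, y)                # distance to focus (0, 0)
    dist_D = y + p                           # distance to directrix y = -p
    assert math.isclose(dist_F, dist_D)
```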
|
4,251,233 | <p>Find</p>
<p><span class="math-container">$\int\frac{x+1}{x^2+x+1}dx$</span></p>
<p><span class="math-container">$\int \frac{(x+1)\,dx}{x^2+x+1}=\int \frac{x+1}{(x+\frac{1}{2})^2+\frac{3}{4}}dx$</span></p>
<p>From here I don't know what to do. Write <span class="math-container">$(x+1)$</span> = <span class="math-container">$t$</span>?</p>
<p>This does not work. Use integration by parts? I don't think it will work here.</p>
<p>So I completed the square, but I don't know how to continue.</p>
| AbdelAziz AbdelLatef | 692,431 | <p>You will have to split the numerator so the integrand becomes a sum of two functions, like this:</p>
<p><span class="math-container">$\int \frac{x+\frac{1}{2}}{(x+\frac{1}{2})^2+\frac{3}{4}}dx+\int \frac{\frac{1}{2}}{(x+\frac{1}{2})^2+\frac{3}{4}}dx$</span></p>
<p>The first integral will result in <span class="math-container">$\ln$</span> and the second in <span class="math-container">$\arctan$</span>.</p>
<p><span class="math-container">$\frac 1 2 \ln (x^2+x+1)+ \frac 1 {\sqrt{3}} \arctan{\frac {(2x+1)} {\sqrt{3}}}+C$</span></p>
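The stated antiderivative can be verified by differentiating it numerically (my addition; sample points are arbitrary) and comparing against the integrand:

```python
import math

def F(x):   # proposed antiderivative
    return (0.5 * math.log(x * x + x + 1)
            + (1 / math.sqrt(3)) * math.atan((2 * x + 1) / math.sqrt(3)))

def f(x):   # integrand
    return (x + 1) / (x * x + x + 1)

h = 1e-6
for x in [-3.0, -0.5, 0.0, 1.7, 10.0]:
    deriv = (F(x + h) - F(x - h)) / (2 * h)   # central difference
    assert math.isclose(deriv, f(x), rel_tol=1e-6, abs_tol=1e-6)
```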
|
458 | <p>If you go to the bottom of any page in the SE network (e.g. this one!), you'll see a list of SE sites. In particular there's a link to MathOverflow, that is potentially seen by a large number of people (many of whom are outside of our target audience).</p>
<p>When you put your cursor over that link, there's a hover popup reading "mathematicians". If you try this with many of the other sites you'll find more a more detailed description.</p>
<p>We should improve this!</p>
<blockquote>
<p>I'll provide a few samples as answers; please vote for the one you like, and we'll get it fixed.</p>
</blockquote>
| Scott Morrison | 3 | <p>Research mathematicians</p>
<p>.......</p>
|
458 | <p>If you go to the bottom of any page in the SE network (e.g. this one!), you'll see a list of SE sites. In particular there's a link to MathOverflow, that is potentially seen by a large number of people (many of whom are outside of our target audience).</p>
<p>When you put your cursor over that link, there's a hover popup reading "mathematicians". If you try this with many of the other sites you'll find more a more detailed description.</p>
<p>We should improve this!</p>
<blockquote>
<p>I'll provide a few samples as answers; please vote for the one you like, and we'll get it fixed.</p>
</blockquote>
| Scott Morrison | 3 | <p>Professional mathematicians</p>
<p>...</p>
|
2,953,371 | <p>How can I find the derivative of this function?
<span class="math-container">$$f(x)= (4x^2 + 2x +5)^{0.5}$$</span></p>
| Kaleb R. | 566,000 | <p>You could do it using the Laplace transform and the convolution theorem for Laplace transforms. The Laplace transform of a Dirac delta is
<span class="math-container">$$\mathcal{L}(\delta(t-a)) = e^{-as}$$</span>
and the convolution theorem states that <span class="math-container">$\mathcal{L} ((f*g)(t)) = \mathcal{L}(f(t))\mathcal{L}(g(t))$</span>, so you can multiply the Laplace transforms of your deltas and then take the inverse. There is likely a more direct method though.</p>
|
441,374 | <p>Let $K_{\alpha}(z)$ be the <a href="https://en.wikipedia.org/wiki/Bessel_function#Modified_Bessel_functions:_I.CE.B1_.2C_K.CE.B1" rel="nofollow noreferrer">modified Bessel function of the second kind of order $\alpha$</a>.</p>
<p>I need to compute the following integral:</p>
<p>$$\int_0^\infty\;\;K_0\left(\sqrt{a(k^2+b)}\right)dk$$ </p>
<p>where $a>0$ and $b>0$. </p>
<p>I have tried several substitutions and played around a lot in Mathematica, and can't seem to solve this. Perhaps an integral representation of $K_{0}(z)$ would be helpful here.</p>
<p>Even if this can't be done exactly, a sensible approximation strategy would also be useful. </p>
<p>Any advice would be greatly appreciated. Thanks in advance for your time!</p>
| Random Variable | 16,033 | <p>The evaluation is a bit easier if we use the <a href="http://dlmf.nist.gov/10.32#E10" rel="nofollow noreferrer">integral representation</a> $$K_{0}(x) = \frac{1}{2} \int_{0}^{\infty} \frac{1}{t} \, \exp \left(-t - \frac{x^{2}}{4t} \right) \, dt , \quad x>0, $$ which can be derived from the integral representation $$K_{0}(x) = \int_{0}^{\infty} \exp(-x \cosh t) \, dt, \quad x>0, $$ by extending the interval of integration to the entire real line and then making the substitution $ e^{t} = \frac{2}{z} u$.</p>
<p>Using this representation of the modified Bessel function of the second kind of order zero, we get</p>
<p>$$\begin{align}\int_{0}^{\infty} K_{0}\left(\sqrt{a^{2}\left(x^{2}+b^{2} \right)}\right) dx &= \int_{0}^{\infty} K_{0} \left(a\sqrt{x^{2}+b^{2}} \right) \, dx \\ &= \frac{1}{2} \int_{0}^{\infty} \int_{0}^{\infty} \frac{1}{t} \, \exp \left(-t - \frac{a^{2}(x^{2}+b^{2})}{4t} \right) \, dt \, dx \\ &= \frac{1}{2} \int_{0}^{\infty} \frac{1}{t} \, \exp \left(-t-\frac{a^{2}b^{2}}{4t} \right) \int_{0}^{\infty} \exp \left(-\frac{a^{2}x^{2}}{4t} \right) \, dx \, dt \tag{1}\\ &= \frac{\sqrt{\pi}}{2a} \int_{0}^{\infty} \frac{1}{\sqrt{t}} \, \exp \left(-t-\frac{a^{2}b^{2}}{4t}\right) \, dt \\ &= \frac{\sqrt{\pi}}{a} \int_{0}^{\infty} \exp \left(-u^{2}-\frac{a^{2}b^{2}}{4u^{2}} \, \right) \, du \\ &= \frac{\sqrt{\pi}}{a} \, \sqrt{\frac{\pi}{4}} \exp \left(-2 \, \sqrt{\frac{a^{2}b^{2}}{4}} \right) \tag{2}\\ &= \frac{\pi}{2a} \, \exp(-ab). \end{align}$$</p>
<hr>
<p>$(1)$ Since the integrand is nonnegative, we're allowed to switch the order of the integration.</p>
<p>$(2)$ <a href="https://math.stackexchange.com/questions/496088/how-to-evaluate-int-0-infty-exp-ax2-frac-bx2-dx-for-a-b0">How to evaluate $\int_{0}^{+\infty}\exp(-ax^2-\frac b{x^2})\,dx$ for $a,b>0$</a></p>
<hr>
<p>This particular integral is a limiting case of the more general integral</p>
<p>$$\small \frac{1}{\beta^{\mu}} \int_{0}^{\infty} J_{\mu}(\beta t) \, \frac{K_{\nu}(a\sqrt{t^{2}+b^{2}}{})}{(t^{2}+b^{2})^{\nu/2}} \, t^{\mu+1} \, dt = \frac{1}{a^{\nu}} \left(\frac{\sqrt{a^{2}+\beta^{2}}}{b} \right)^{\nu-\mu-1}K_{\nu-\mu-1} \left(b \sqrt{a^{2}+\beta^{2}} \right),\tag{3}$$ where $a, \beta >0$, $\Re(b) >0$, and $\Re(\mu) >-1$.</p>
<p>(As $\beta\downarrow 0$, <a href="https://en.wikipedia.org/wiki/Bessel_function#Asymptotic_forms" rel="nofollow noreferrer">$J_{\mu}(\beta t)\sim \left(\frac{\beta t}{2} \right)^{\mu} \frac{1}{\Gamma(\mu+1)}$</a>.)</p>
<p>Integral $(3)$ is entry (2) in section 47 of chapter 13 of the textbook <a href="https://archive.org/details/ATreatiseOnTheTheoryOfBesselFunctions" rel="nofollow noreferrer"><em>A Treatise on the Theory of Bessel Functions</em></a>.</p>
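The closed form $\frac{\pi}{2a}e^{-ab}$ can be sanity-checked numerically without special-function libraries, using the same representation $K_0(x)=\int_0^\infty e^{-x\cosh t}\,dt$ quoted above (this check is my addition; the truncation limits and step counts are ad hoc):

```python
import math

def K0(x, t_max=12.0, steps=1200):
    """K_0(x) via K_0(x) = integral of exp(-x cosh t), t from 0 to inf (trapezoid)."""
    dt = t_max / steps
    total = 0.5 * (math.exp(-x) + math.exp(-x * math.cosh(t_max)))
    for i in range(1, steps):
        total += math.exp(-x * math.cosh(i * dt))
    return total * dt

def lhs(a, b, k_max=20.0, steps=400):
    """Integral of K_0(a sqrt(k^2 + b^2)) dk over [0, inf), truncated (trapezoid)."""
    dk = k_max / steps
    total = 0.5 * (K0(a * b) + K0(a * math.sqrt(k_max ** 2 + b ** 2)))
    for i in range(1, steps):
        k = i * dk
        total += K0(a * math.sqrt(k * k + b * b))
    return total * dk

a, b = 1.0, 1.0
closed_form = math.pi / (2 * a) * math.exp(-a * b)
assert abs(lhs(a, b) - closed_form) < 1e-3
```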
|
598,635 | <p>Prove the two Identities for
$-1 < r < 1$</p>
<p>$$\sum_{n=0}^{\infty} r^n\cos n\theta =\frac{1-r\cos\theta}{1-2r\cos\theta+r^2}$$</p>
<p>$$\sum_{n=0}^{\infty} r^n\sin{n\theta}=\frac{r \sin\theta }{1-2r\cos\theta+r^2}$$</p>
<p>Sorry could not figure out how to format equations</p>
| Mark | 24,958 | <p>HINT: Mathematical induction. For:
$$
\sum_{n=0}^{\infty} r^n\cos n\theta =\frac{1-r\cos\theta}{1-2r\cos\theta+r^2}
$$
Let us consider:
$$
\sum_{n=0}^{\infty} r^{n+1}\cos (n+1)\theta
$$
which is rewritten
$$
r\sum_{n=0}^{\infty} r^{n}\left[\cos n\theta \cos \theta-\sin n\theta \sin \theta \right]
$$
$$
=r\cos \theta\sum_{n=0}^{\infty} r^{n}\cos n\theta - r \sin \theta \sum_{n=0}^{\infty} r^n \sin n\theta \ \ \ (1)
$$
By hypothesis, we know that
$$
\sum_{n=0}^{\infty} r^n\cos n\theta =\frac{1-r\cos\theta}{1-2r\cos\theta+r^2} \ \ \ (2)
$$
$$
\sum_{n=0}^{\infty} r^n\sin{n\theta}=\frac{r \sin\theta }{1-2r\cos\theta+r^2} \ \ \ (3)
$$
Finally, to prove the formula, we substitute equations (2) and (3) into (1) and simplify. The same procedure demonstrates equation (3).</p>
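Both identities are easy to check numerically (my addition): they are the real and imaginary parts of the geometric series $\sum_{n\ge 0}(re^{i\theta})^n = \frac{1}{1-re^{i\theta}}$.

```python
import math

def partial_sums(r, theta, N=200):
    """Truncated sums of r^n cos(n theta) and r^n sin(n theta); tail < r^N / (1-r)."""
    c = sum(r ** n * math.cos(n * theta) for n in range(N))
    s = sum(r ** n * math.sin(n * theta) for n in range(N))
    return c, s

r, theta = 0.6, 1.3
denom = 1 - 2 * r * math.cos(theta) + r * r
c, s = partial_sums(r, theta)
assert math.isclose(c, (1 - r * math.cos(theta)) / denom, rel_tol=1e-9)
assert math.isclose(s, r * math.sin(theta) / denom, rel_tol=1e-9)
```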
|
675,718 | <p>I have a non-linear system of equations, $$\left\{ \begin{array}{rcl} x^2 - xy + 8 = 0 \\ x^2 - 8x + y = 0 \\ \end{array} \right.$$
I have tried equating the expressions (because both equal 0), which tells me: $$x^2 - xy + 8 = x^2 - 8x + y$$
Moving all expressions to the right yields: $$0 = xy - 8x + y - 8$$
Factoring the equation: $$0 = x(y-8) + 1(y-8)$$
$$0 = (x+1)(y-8)$$
$$x=-1$$
$$y=8$$
Problem solved, right? No. When you plug in the values into the equations above, you get a false statement. Allow me to demonstrate:
$$x^2 - 8x + y = 0$$
$$(-1)^2 - 8(-1) + (8) = 0$$
$$1 - (-8) + 8 = 0$$
$$1 + 8 + 8 = 0$$
$$17 = 0$$
Can someone please help me solve this system of nonlinear equations? I am stuck.</p>
| ir7 | 26,651 | <p>Hint: You got $x=-1$ OR $y=8$. BTW, cool idea to equate the expressions.</p>
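Spelling the hint out numerically (my addition): the $x=-1$ branch gives $y$ from the second equation, and the $y=8$ branch gives $x$ from $x^2-8x+8=0$, i.e. $x=4\pm 2\sqrt2$; the pair $(-1, 8)$ from the question lies on neither branch.

```python
import math

def eqs(x, y):
    """Residuals of the two equations; both should be 0 at a solution."""
    return (x * x - x * y + 8, x * x - 8 * x + y)

solutions = [(-1.0, -9.0),                    # x = -1 branch: y = 8x - x^2 = -9
             (4 + 2 * math.sqrt(2), 8.0),     # y = 8 branch
             (4 - 2 * math.sqrt(2), 8.0)]

for x, y in solutions:
    e1, e2 = eqs(x, y)
    assert abs(e1) < 1e-9 and abs(e2) < 1e-9

# The pair (x, y) = (-1, 8) from the question solves neither equation:
assert eqs(-1.0, 8.0) != (0.0, 0.0)
```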
|
2,011,236 | <p>I was reading <a href="https://web.williams.edu/Mathematics/lg5/Hindman.pdf" rel="nofollow noreferrer">this</a> discussion of Hindman's Theorem by Leo Goldmakher, and was tripped up by his introduction of a topology on $U(\mathbb N)$. (He is using $U(\mathbb N)$ to denote the space of ultrafilters on the natural numbers.) Midway through page three, Goldmakher says, "There is a natural topology on $U(\mathbb N)$, given by
the basis of open sets $\{\mathcal U\in U(\mathbb N):A\in\mathcal U\text{ for some }A\subseteq\mathbb N\}$," and this is all he says about the matter until he uses this topology in the proof of Theorem 3.1 several pages later.</p>
<p>I have two questions about this:</p>
<p>(1): Am I correct in thinking that the topology Goldmakher is defining here is the one generated by the basis $\{\{\mathcal U\in U(\mathbb N):A\in\mathcal U\}:A\subseteq\mathbb N\}$? The set $\{\mathcal U\in U(\mathbb N):A\in\mathcal U\text{ for some }A\subseteq\mathbb N\}$ is (as I am currently interpreting it) just the set $U(\mathbb N)$; but that doesn't make sense in context, and, besides, if he meant $U(\mathbb N)$ that's what he would have written.</p>
<p>(2): I know this is vague (so feel free to ignore it), but are there any useful ways of thinking about this topology that might help me understand it?</p>
| Brian M. Scott | 12,042 | <p>Yes, you’re correct. For each $A\subseteq\Bbb N$ let $B_A=\{\mathscr{U}\in U(\Bbb N):A\in\mathscr{U}\}$; then $\{B_A:A\subseteq\Bbb N\}$ is a base for the topology in question. Goldmakher should have put ‘for some $A\subseteq\Bbb N$’ outside the curly braces.</p>
<p>It’s not an easy topology to visualize, to put it mildly. As Goldmakher says, it makes $U(\Bbb N)$ the <a href="https://en.wikipedia.org/wiki/Stone%E2%80%93%C4%8Cech_compactification" rel="nofollow noreferrer">Čech-Stone compactification</a> of $\Bbb N$ with the discrete topology, so it has the universal property that characterizes the Čech-Stone compactification. </p>
<p>The space is zero-dimensional: $B_A=U(\Bbb N)\setminus B_{\Bbb N\setminus A}$, so it has a base of clopen sets. In fact it is even <em>extremally disconnected</em>, meaning that the closure of each open set is open. (Note that the word really is <em>extremally</em>, not <em>extremely</em>.) This is even stronger than being zero-dimensional.</p>
|
3,086,878 | <p>Recently I came across this general integral,
<span class="math-container">$$\int \frac {dx}{(x^2-2ax+b)^n}$$</span>
Putting <span class="math-container">$x^2-2ax+b=0$</span> we have,
<span class="math-container">$$x = a±\sqrt {a^2-b} = a±\sqrt {∆}$$</span>
Hence the integrand can be written as,
<span class="math-container">$$
\frac {1}{(x^2-2ax+b)^n}
=
\frac {1}{(x-a-\sqrt ∆)^n(x-a+\sqrt ∆)^n}
$$</span>
Resolving into partial fractions we have,
<span class="math-container">$$
\frac {1}{(x^2-2ax+b)^n}
=
\sum \frac {A_r}{(x-a-\sqrt ∆)^r} + \sum \frac {B_r}{(x-a+\sqrt ∆)^r}
$$</span>
Putting <span class="math-container">$-\frac {1}{2\sqrt ∆} = D$</span>, I could produce a table of the coefficients <span class="math-container">$A$</span> and <span class="math-container">$B$</span> for different <span class="math-container">$n$</span>.
For <span class="math-container">$n=1$</span>,
<span class="math-container">$$A_1=-D , B_1=D$$</span>
For <span class="math-container">$n=2$</span>,
<span class="math-container">$$A_1=2D^3 , B_1=-2D^3$$</span>
<span class="math-container">$$A_2=D^2 , B_2 = D^2$$</span>
For <span class="math-container">$n=3$</span>,
<span class="math-container">$$A_1=-6D^5 , B_1=6D^5$$</span>
<span class="math-container">$$A_2=-3D^4 , B_2 = -3D^4$$</span>
<span class="math-container">$$A_3=-D^3, B_3=D^3$$</span>
For <span class="math-container">$n=4$</span>,
<span class="math-container">$$A_1=20D^7, B_1=-20D^7$$</span>
<span class="math-container">$$A_2=10D^6 , B_2 = 10D^6$$</span>
<span class="math-container">$$A_3=4D^5, B_3=-4D^5$$</span>
<span class="math-container">$$A_4=D^4, B_4=D^4$$</span>
For <span class="math-container">$n=5$</span>,
<span class="math-container">$$A_1=-70D^9, B_1=70D^9$$</span>
<span class="math-container">$$A_2=-35D^8, B_2 = -35D^8$$</span>
<span class="math-container">$$A_3=-15D^7, B_3=15D^7$$</span>
<span class="math-container">$$A_4=-5D^6, B_4=-5D^6$$</span>
<span class="math-container">$$A_5=-D^5, B_5=D^5$$</span>
Yet I am unable to deduce a general formula for the coefficients. If I have the coefficients, the integral is almost solved, for then I shall have a logarithmic term and a rational function in <span class="math-container">$x$</span>. More directly, I seek a result of the form,
<span class="math-container">$$\kappa \log \left( \frac {x-a-\sqrt ∆}{x-a+\sqrt ∆}\right) + \frac {P(x)}{Q(x)}$$</span>
Any help would be greatly appreciated.</p>
<h1>Conjecture 1(Proved below)</h1>
<p><span class="math-container">$$A(n,r)= (-1)^n \binom {2n-r-1}{n-1} D^{2n-r}$$</span>
<span class="math-container">$$B(n,r)= (-1)^{n-r} \binom {2n-r-1}{n-1} D^{2n-r}$$</span></p>
| jmerry | 619,637 | <p>All right, now I've got it.</p>
<p>The easiest way to get all the coefficients? Expand in a Laurent series around one of the roots. Substituting <span class="math-container">$z=x-a-\sqrt{\Delta}$</span> and later defining <span class="math-container">$D=-\frac1{2\sqrt{\Delta}}$</span> (the question's sign convention), we get
<span class="math-container">\begin{align*}\frac1{(x^2-2ax+b)^n} &= \frac1{(x-a-\sqrt{\Delta})^n(x-a+\sqrt{\Delta})^n}=\frac1{z^n(z+2\sqrt{\Delta})^n}=\frac1{z^n}\cdot\frac{(2\sqrt{\Delta})^{-n}}{(1+\frac{z}{2\sqrt{\Delta}})^n}\\
\frac1{(x^2-2ax+b)^n} &= \frac{(-D)^n}{z^n(1-Dz)^n} = \frac{(-D)^n}{z^n}\sum_{j=0}^{\infty} \binom{n+j-1}{j}D^jz^j\\
&=(-1)^n\sum_{j=0}^{\infty}\binom{n+j-1}{j}D^{n+j}z^{j-n}\end{align*}</span>
We claim that the coefficients <span class="math-container">$(-1)^n\binom{n+j-1}{j}D^{n+j}$</span> for <span class="math-container">$j<n$</span> are precisely the coefficients of <span class="math-container">$\frac1{z^{n-j}}$</span> in the partial fractions expansion of <span class="math-container">$\frac1{z^n(z+2\sqrt{\Delta})^n}$</span>. Why? Subtract the negative-exponent terms of the Laurent series from the partial fractions expansion. The difference is locally bounded, with a nice power series. But then, the only terms in the partial fractions expansion that aren't locally bounded are the <span class="math-container">$\frac1{z^k}$</span> terms - so their coefficients all have to match with the terms from the Laurent series.</p>
<p>Let <span class="math-container">$k=n-j$</span>, and we get <span class="math-container">$A(n,k)=(-1)^n\binom{2n-k-1}{n-k}D^{2n-k}=(-1)^n\binom{2n-k-1}{n-1}D^{2n-k}$</span> in the partial fractions expansion
<span class="math-container">$$\frac1{z^n(z+2\sqrt{\Delta})^n}=\sum_{k=1}^n \frac{A(n,k)}{z^k} +\sum_{k=1}^n \frac{B(n,k)}{(z+2\sqrt{\Delta})^k}=\sum_{k=1}^n \frac{A(n,k)}{(x-a-\sqrt{\Delta})^k} +\sum_{k=1}^n \frac{B(n,k)}{(x-a+\sqrt{\Delta})^k}$$</span>
Oh, yes - in my comment, I didn't actually define my notation, and the update to the question imported that without defining it. The purpose is clear; we're just putting both parameters in the notation instead of just the power <span class="math-container">$k$</span> of <span class="math-container">$\frac1{z-a\pm\sqrt{\Delta}}$</span>. Formally, the definition is the line just above.</p>
<p>That's half of the conjecture. For the other half, we expand around the other root.
<span class="math-container">\begin{align*}\frac1{(x^2-2ax+b)^n} &= \frac1{(x-a-\sqrt{\Delta})^n(x-a+\sqrt{\Delta})^n}=\frac1{(w-2\sqrt{\Delta})^nw^n}=\frac1{w^n}\cdot\frac{(-2\sqrt{\Delta})^{-n}}{(1-\frac{w}{2\sqrt{\Delta}})^n}\\
\frac1{(x^2-2ax+b)^n} &= \frac{D^n}{w^n(1+Dw)^n} = \frac{D^n}{w^n}\sum_{j=0}^{\infty} \binom{n+j-1}{j}(-D)^jw^j\\
&=\sum_{j=0}^{\infty}(-1)^j\binom{n+j-1}{j}D^{n+j}w^{j-n}\end{align*}</span>
Again, extract the negative-exponent terms to get <span class="math-container">$B(n,k)=(-1)^{n-k}\binom{2n-k-1}{n-k}D^{2n-k} =(-1)^{n-k}\binom{2n-k-1}{n-1}D^{2n-k}$</span>. The conjecture is confirmed, and we have our general formula.</p>
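A quick exact-arithmetic sanity check of the two closed forms (the parameters a = 3, b = 5 are chosen here so that the square root of Delta is rational; note the check passes with the sign convention D = -1/(2*sqrt(Delta)), the one the expansion steps above actually use):

```python
from fractions import Fraction
from math import comb

# quadratic with Delta = a^2 - b a perfect square, so everything stays rational:
# a = 3, b = 5, Delta = 4, sqrt(Delta) = 2
a, sq = Fraction(3), Fraction(2)
D = Fraction(-1, 2) / sq        # D = -1/(2*sqrt(Delta)); the sign matters here
n = 3
r1, r2 = a + sq, a - sq         # roots of x^2 - 2ax + b

def A(k): return (-1) ** n * comb(2 * n - k - 1, n - 1) * D ** (2 * n - k)
def B(k): return (-1) ** (n - k) * comb(2 * n - k - 1, n - 1) * D ** (2 * n - k)

for x in (Fraction(2), Fraction(7), Fraction(-1)):
    lhs = 1 / (x * x - 2 * a * x + (a * a - sq * sq)) ** n
    rhs = sum(A(k) / (x - r1) ** k + B(k) / (x - r2) ** k for k in range(1, n + 1))
    assert lhs == rhs
```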
|
3,644,710 | <p>I wish to classify the Galois group of <span class="math-container">$\mathbb{Q}(e^{i\pi/4})/\mathbb{Q}$</span>. Let me denote the eighth root of unity as <span class="math-container">$\epsilon$</span>. I see that <span class="math-container">$1, \epsilon, \epsilon^2, \epsilon^3$</span> are linearly independent over <span class="math-container">$\mathbb{Q}$</span>, although I do not know how to rigorously prove this (can anyone show me?). Since <span class="math-container">$\{\epsilon, \epsilon^2, \epsilon^3\}$</span> are linearly independent does that mean <span class="math-container">$[\mathbb{Q}(e^{i\pi/4}):\mathbb{Q}] = 8$</span>, or is it some other value (if it is please explain why)? Are my observations correct so far? </p>
| Martin Argerami | 22,857 | <p>Once you have <span class="math-container">$|x_{n+1}-x_n|\leq c q^n$</span> with <span class="math-container">$c>0$</span> and <span class="math-container">$0<q<1$</span>, you can telescope. This means
<span class="math-container">$$
|x_{n+k}-x_n|=|\sum_{j=1}^kx_{n+j}-x_{n+j-1}|\leq \sum_{j=1}^k|x_{n+j}-x_{n+j-1}|=\sum_{j=1}^kcq^{n+j-1}=\frac{cq^n(1-q^k)}{1-q}\leq\frac{cq^n}{1-q}.
$$</span>
Now given <span class="math-container">$\varepsilon>0$</span>, if you choose <span class="math-container">$n$</span> such that <span class="math-container">$\frac{cq^n}{1-q}<\varepsilon$</span>, you get <span class="math-container">$|x_m-x_n|<\varepsilon$</span> for all <span class="math-container">$m\geq n$</span>.</p>
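The telescoped bound can be sanity-checked numerically; here is a small sketch with illustrative values c = 1, q = 1/2 and the worst-case sequence whose steps are exactly c*q^n:

```python
c, q = 1.0, 0.5
# worst case allowed by the hypothesis: steps of exactly c*q^n,
# i.e. x_n is a geometric partial sum
def x(n):
    return sum(c * q ** j for j in range(n))

for n in range(1, 20):
    for k in range(1, 30):
        assert abs(x(n + k) - x(n)) <= c * q ** n / (1 - q) + 1e-12
```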
|
737,915 | <p>I'm reading Calculus: Basic Concepts for High School Students and am trying to digest the definition of 'limit of function'. There are two details that I am struggling to fully accept:</p>
<ol>
<li><p>If you are supposed to pick an interval $(a - \delta, a + \delta)$ but $a$ can be an undefined point at the end of the domain, what happens to the other half of the interval? Is it just ignored/irrelevant?</p></li>
<li><p>The function used as an example is $f(x) = \sqrt{x}$ and from what I can tell the limit of the point $a$ always matches the value of $f(a)$. I cannot see how this would be different for other functions given the way that the limit is calculated - if someone could share an example of a function that has a different limit at $x = a$ than the value $f(a)$ I would be grateful.</p></li>
</ol>
| Siminore | 29,672 | <ol>
<li>Consider $f(x)=\sqrt{x}$. It is naturally defined on the set $[0,+\infty)$, and it would be wrong to consider subsets like $(-1,1)$, since such an interval is not a subset of the domain of definition.</li>
<li>What you are saying could be translated into the sentence that "every function is a continuous function". This is clearly false, as you can read in every calculus book. For example, $\lim_{x \to 0} \frac{1}{x^2}=+\infty$, and you can't define the function $f(x)=\frac{1}{x^2}$ at $x=0$ so that $f(0)=+\infty$.</li>
</ol>
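For the second point, here is a further (hypothetical) illustration of a function whose limit at a point exists but differs from its value there, which is exactly the situation asked about:

```python
import math

def g(x):
    # sin(x)/x, with the value at 0 deliberately set to 0 rather than the limit
    return math.sin(x) / x if x != 0 else 0.0

for x in (1e-3, 1e-6, -1e-6):
    assert abs(g(x) - 1.0) < 1e-5   # the two-sided limit at 0 is 1 ...
assert g(0) == 0.0                  # ... but g(0) = 0, so g is not continuous at 0
```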
|
3,100,831 | <p>What is the domain of <span class="math-container">$g(x)=\frac{1}{1-\tan x}$</span> </p>
<p>I tried it and got this. But I'm not really sure if it is right. Is that gonna be like this ? <span class="math-container">$(\mathbb{R}, \frac{\pi}{4})$</span></p>
| JustAnAmateur | 589,373 | <p>Any fraction is not defined if its denominator is zero. Hence you must exclude all the points where <span class="math-container">$\tan x=1$</span>,which are <span class="math-container">$x\in\{k\pi+\arctan 1 | k\in \mathbb{Z}\}=\{k\pi + \frac{\pi}{4}| k\in \mathbb{Z}\}$</span>.<br>
However, the <span class="math-container">$\tan$</span> function itself is undefined if <span class="math-container">$x\in\{k\pi+\frac{\pi}{2} | k\in \mathbb{Z}\}$</span>, so we may conclude that the domain of <span class="math-container">$g$</span> is <span class="math-container">$\mathbb{R}\setminus \left(\{k\pi+\frac{\pi}{2} | k\in \mathbb{Z}\}\cup\{k\pi + \frac{\pi}{4}| k\in \mathbb{Z}\}\right) $</span>.</p>
|
2,979,315 | <p>Let <span class="math-container">$X$</span> be a continuous random variable with uniform distribution between <span class="math-container">$0$</span> and <span class="math-container">$1$</span>. Compute the distribution of <span class="math-container">$Y = \sin(2\pi X)$</span>.</p>
<p><span class="math-container">$\sin(2\pi \cdot0)$</span> and <span class="math-container">$\sin(2\pi \cdot1) =0$</span>. So, the inverse image of the function has multiple roots. How can I find the PDF of <span class="math-container">$Y$</span> then?</p>
| Greg Johnson | 897,096 | <p>Here is a proof that there are at least 48 symmetries. (Goal: make it obvious and easy to understand.)</p>
<p>Go to a craps table and borrow one of the dice.</p>
<p>There are eight corners on your die, and so you can position the die in any of eight ways based on the corner you are looking at.</p>
<p>For each of those eight corners, there are three rotations. For example, if you are looking at the corner where you can see sides with one dot, two dots, and three dots, you can rotate the die so that any of these three sides is on top.</p>
<p>So, that gets you 24 symmetries.</p>
<p>Now, the twist others have alluded to about a non-physical move. If you got the die in Las Vegas, the one/two/three corner will go counter-clockwise. If you got the die in Macau, the one/two/three corner will go clockwise. This gives us an additional factor of two in our count of symmetries. "Change" the die from left-handed to right-handed or vice versa.</p>
<p>This additional factor of 2 gives us 48 symmetries.</p>
<p>That 48 is an upper bound: pick any of the 8 corners. The three adjacent corners can be permuted in 3-factorial ways. So, there can be no more than 48 permutations.</p>
|
65,886 | <p>It is clear that Sylow theorems are an essential tool for the classification of finite groups.
I recently read an article by Marcel Wild, <em>The Groups of Order Sixteen Made Easy</em>, where he gives a complete classification of the groups of order $16$ that is based on
elementary facts, in particular, he does not use Sylow theorem.</p>
<p>Did anyone encounter a complete classification of the groups of order $12$ that does not use Sylow theorem?
What about order 24? (I'm less optimistic there, but who knows).</p>
| Bruce Cooperstein | 754,986 | <p>There is a nice geometric proof: using only Sylow theorems one can show that G has 30 subgroups isomorphic to Z_2 x Z_2 which break into classes of 15 each. Denote by P one class and by L the other. Refer to elements of P as points and elements of L as lines. Say a "point" E is on the "line" F if E and F intersect in a subgroup of order 2. This is a generalized quadrangle of order 2, which is easily shown to be unique and to have automorphism group isomorphic to S_6. In this way one gets an injective homomorphism of G into S_6, which has a unique subgroup of index 2 (since A_6 is simple).</p>
<p>A similar geometric proof can be used to prove a simple group of 168 is the automorphism of a Fano plane (projective plane of order 2)</p>
|
3,299,296 | <p>The question is </p>
<blockquote>
<p>When <span class="math-container">$~2x^3 + x^2 - 2kx + f~$</span> is divided by <span class="math-container">$~x - 1~$</span>, the remainder is
<span class="math-container">$~-4~$</span>, and when it is divided by <span class="math-container">$~x+2~$</span>, the remainder is <span class="math-container">$~11~$</span>. Determine the values of <span class="math-container">$~k~$</span>
and <span class="math-container">$~f~$</span>.</p>
</blockquote>
<p>I know how to solve for <span class="math-container">$~k~$</span>, you would just sub in the root for <span class="math-container">$~x~$</span> and set the equation equal to the remainder, but because I also have to solve for <span class="math-container">$~f~$</span> this throws me off and I am confused as to what to do.</p>
| José Carlos Santos | 446,262 | <p><strong>Hint:</strong> If <span class="math-container">$p(x)=2x^3+x^2-2kx+f$</span>, then <span class="math-container">$p(1)=-4$</span> and <span class="math-container">$p(-2)=11$</span>. So…</p>
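Carrying the hint through gives a 2x2 linear system in k and f; a quick check (the values below are computed here, they are not stated in the hint):

```python
# p(x) = 2x^3 + x^2 - 2kx + f with p(1) = -4 and p(-2) = 11:
#   p(1)  =   3 - 2k + f = -4   =>  -2k + f = -7
#   p(-2) = -12 + 4k + f = 11   =>   4k + f = 23
# subtracting the first equation from the second gives 6k = 30
k = 5
f = -7 + 2 * k    # = 3

p = lambda x: 2 * x ** 3 + x ** 2 - 2 * k * x + f
assert p(1) == -4 and p(-2) == 11
```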
|
815,065 | <p>$$\int^{\pi /2}_{0} \frac{\ln(\sin x)}{\sqrt x}dx$$</p>
<p>Use the segment integral formula? The $\sqrt x$ is zero at $x=0$ and $\ln\sin x$ is $-\infty$ </p>
| RRL | 148,510 | <p>Use a limit comparison test with the function $g(x) = x^{-2/3}$. </p>
<p>We have $g$ integrable on $[0,\pi/2]$:</p>
<p>$$\int_{0}^{\pi/2}x^{-2/3}dx = 3(\pi/2)^{1/3} < \infty.$$</p>
<p>Now consider the limit</p>
<p>$$\lim_{x \rightarrow 0} \frac{f(x)}{g(x)}=\lim_{x \rightarrow 0} \frac{x^{-1/2}\ln(\sin x)}{x^{-2/3}}=\lim_{x \rightarrow 0} \frac{\ln(\sin x)}{x^{-1/6}}.$$</p>
<p>Using L'Hospital's rule</p>
<p>$$\lim_{x \rightarrow 0} \frac{f(x)}{g(x)}=\lim_{x \rightarrow 0} \frac{f'(x)}{g'(x)}=\lim_{x \rightarrow 0} \frac{-6x^{1/6} \cos x}{\frac{\sin x}x}=0$$</p>
<p>and the integral converges -- as there are $\delta$ and $\epsilon$ such that $|f(x)| < \epsilon x^{-2/3}$ for $0<x<\delta$.</p>
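The decay of the ratio f(x)/g(x) = x^(1/6) ln(sin x) can also be watched numerically:

```python
import math

# f(x)/g(x) = x^(1/6) * ln(sin x) should tend to 0 as x -> 0+
vals = [x ** (1 / 6) * math.log(math.sin(x)) for x in (1e-3, 1e-9, 1e-15, 1e-21)]
assert all(abs(u) > abs(v) for u, v in zip(vals, vals[1:]))  # shrinking in size
assert abs(vals[-1]) < 0.02   # slow decay: only x^(1/6) against log x
```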
|
815,065 | <p>$$\int^{\pi /2}_{0} \frac{\ln(\sin x)}{\sqrt x}dx$$</p>
<p>Use the segment integral formula? The $\sqrt x$ is zero at $x=0$ and $\ln\sin x$ is $-\infty$ </p>
| Santosh Linkha | 2,199 | <p>Integrate it by parts
$$\int_{0}^{\pi/2} \frac{\log(\sin (x))}{\sqrt x} dx = \left [ \log( \sin x) (2\sqrt x)\right]_{0}^{\pi/2} - \int_{0}^{\pi/2} \frac{2 \sqrt x}{\sin(x)}\cos(x)dx$$Taking limits, the boundary term vanishes at both endpoints. For the last integral, use <a href="http://en.wikipedia.org/wiki/Jordan%27s_inequality" rel="nofollow">Jordan's inequality</a> for sine (lower bound) and $\cos(x) \le 1$, i.e.
$$\int_{0}^{\pi/2} \frac{2 \sqrt x}{\sin(x)}\cos(x)dx \le \int_0^{\pi/2}\frac{2 \pi\sqrt x}{2x}dx$$</p>
<p>The integral on the right equals $\int_0^{\pi/2}\pi x^{-1/2}\,dx=2\pi\sqrt{\pi/2}<\infty$, so the integral on the left converges by comparison.</p>
|
2,202,382 | <p>When $A^TA = I$, I am told it is orthogonal. What does that mean?</p>
<p>$A = \begin{bmatrix}cos\theta & & -sin\theta \\ \\ sin\theta & & cos\theta\end{bmatrix}, A^T = \begin{bmatrix}cos\theta & & sin\theta \\ \\ -sin\theta & & cos\theta\end{bmatrix}$</p>
| Peter | 82,961 | <p>It means that the row vectors (and also the column vectors) form an orthonormal basis: if $A$ has dimension $n\times n$, its rows are $n$ pairwise orthogonal unit vectors, hence linearly independent and spanning $\mathbb{R}^n$.</p>
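For the rotation matrix in the question this is easy to verify numerically:

```python
import math

t = 0.7  # an arbitrary angle
A = [[math.cos(t), -math.sin(t)],
     [math.sin(t),  math.cos(t)]]

dot = lambda u, v: sum(x * y for x, y in zip(u, v))

# the rows are unit vectors and mutually orthogonal: an orthonormal set
assert abs(dot(A[0], A[0]) - 1) < 1e-12
assert abs(dot(A[1], A[1]) - 1) < 1e-12
assert abs(dot(A[0], A[1])) < 1e-12

# equivalently, the (i, j) entry of A^T A is the dot product of columns i and j
cols = [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]
for i in range(2):
    for j in range(2):
        assert abs(dot(cols[i], cols[j]) - (1 if i == j else 0)) < 1e-12
```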
|
1,939,937 | <p>$(172195)(572167)=985242x6565$</p>
<p>Obviously the answer is 9 if you have a calculator, but how can you find x without redoing the multiplication?</p>
<p>The book says to use congruences, but I don't see how that is very helpful. </p>
| hamam_Abdallah | 369,188 | <p>we use congruence modulo 9.
if a=b (9) and c=d (9) then
ac=bd (9)
what we call the proof by 9.</p>
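The check can be run directly. One observation worth adding: mod 9 alone leaves two candidate digits here (0 and 9, since 9 is congruent to 0 mod 9), so a second modulus such as 11 is needed to pin down x:

```python
lhs = (172195 * 572167) % 9                 # left side mod 9 (digit sums give 7 * 1)
candidates = [x for x in range(10)
              if int(f"985242{x}6565") % 9 == lhs]
assert candidates == [0, 9]                 # mod 9 alone leaves two possibilities

lhs11 = (172195 * 572167) % 11              # a second congruence settles it
candidates = [x for x in candidates
              if int(f"985242{x}6565") % 11 == lhs11]
assert candidates == [9]
```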
|
1,753,719 | <p>Definition of rapidly decreasing function</p>
<p>$$\sup_{x\in\mathbb{R}} |x|^k |f^{(l)}(x)| < \infty$$ for every $k,l\ge 0$.</p>
<p>Given the Gaussian function $f(x) = e^{-x^2}$, I know that its derivatives will always be in form of $P(x)e^{-x^2}$ where $P(x)$ is a polynomial of degree, say, $n$. Then $|x|^k |f^{(l)}(x)|$ will be $Q(x) e^{-x^2}$ where $Q(x)$ is of degree $n+k$. $e^{-x^2}$ is bounded apparently. But how could I "immediately" argue this whole thing is bounded?</p>
| marty cohen | 13,079 | <p>Suppose
$(e^{-x^2})^{(n)}
=p_n(x) e^{-x^2}
$.
Then</p>
<p>$\begin{array}\\
(e^{-x^2})^{(n+1)}
&=(p_n(x) e^{-x^2})'\\
&=p_n'(x) e^{-x^2}-p_n(x)(2x) e^{-x^2}\\
&=e^{-x^2}(p_n'(x) -2xp_n(x))\\
\end{array}
$</p>
<p>so if we define
$p_0(x) = 1$
and
$p_{n+1}(x)
=p_n'(x) -2xp_n(x)
$,
then
$(e^{-x^2})^{(n)}
=p_n(x) e^{-x^2}
$.</p>
<p>Looking at this recurrence,
we see that
$p_n(x)$
is a polynomial
of degree $n$
with leading coefficient
$(-2)^n$.</p>
<p>We could derive a recurrence
for the coefficients,
but this is enough
to show that
$|p_n(x)|
\le C_n x^n
$
for $x \ge 1$,
where
$C_n$ is the sum of the
absolute value of
the coefficients of
$p_n(x)$.</p>
<p>Since
$x^ne^{-x^2} \to 0$
as $x \to \infty$,
$p_n(x)e^{-x^2} \to 0$
as $x \to \infty$.</p>
<p>Easy proof
that
$x^ne^{-x^2} \to 0$
as $x \to \infty$:</p>
<p>From the power series,
$e^{x^2}
\gt \dfrac{x^{2n}}{n!}
$
so
$x^n e^{-x^2}
< n! x^{-n}
\to 0
$
as
$x \to \infty$.</p>
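The recurrence $p_{n+1}(x) = p_n'(x) - 2xp_n(x)$ is easy to iterate with plain coefficient lists, which confirms the degree and leading-coefficient claims:

```python
def next_p(c):
    # c[k] is the coefficient of x^k in p_n; return the coefficients
    # of p_{n+1} = p_n' - 2x p_n
    deriv = [k * c[k] for k in range(1, len(c))]
    shift = [0] + [-2 * a for a in c]          # coefficients of -2x * p_n
    deriv += [0] * (len(shift) - len(deriv))   # pad to equal length
    return [d + s for d, s in zip(deriv, shift)]

p = [1]                                        # p_0 = 1
for n in range(1, 11):
    p = next_p(p)
    assert len(p) - 1 == n                     # p_n has degree n
    assert p[-1] == (-2) ** n                  # with leading coefficient (-2)^n
```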
|
2,725,455 | <p>Probably this is pretty simple (or even trivial), but I'm stucked.</p>
<p>If $H\leq G$ is a subgroup, does it follow that $hH=Hh$, if $h\in H$ ? I can't prove or find a counter-example. If anyone could help me, I'd be grateful!</p>
| Bernard | 202,857 | <p>If $h\in H$, $hH=Hh$ because both are equal to $H$.</p>
<p>Indeed, by definition of a subgroup, $hH\subset H$. Conversely, if $k\in H$, we can write
$\; k=h(h^{-1}k)\in hH$, so $H\subset hH$. Similarly, one checks $H=Hh$.</p>
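A concrete illustration (a small hypothetical example, not from the answer): in $\mathbb{Z}_6$ under addition mod 6, translating the subgroup $H=\{0,2,4\}$ by any $h\in H$ just permutes $H$:

```python
H = {0, 2, 4}                        # a subgroup of Z_6 under addition mod 6
for h in H:
    hH = {(h + x) % 6 for x in H}    # left coset h + H
    Hh = {(x + h) % 6 for x in H}    # right coset H + h
    assert hH == H == Hh
```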
|
745,436 | <p>I'm reading this pdf <a href="http://rutherglen.science.mq.edu.au/wchen/lndpnfolder/dpn01.pdf" rel="nofollow">http://rutherglen.science.mq.edu.au/wchen/lndpnfolder/dpn01.pdf</a> I understand some of the expression used in this but I don't understand the part $(m,n) = 1$</p>
<p>Is this a cartesian coordinate or some sort of operation?</p>
| amWhy | 9,003 | <p>$(m, n)$ is used by some to denote $\gcd(m, n),$ the <em>greatest common divisor</em> function of two integers, $m, n$. </p>
<p>So $(m, n) = 1$ means that the greatest common divisor of $m, n$ is $1$; i.e., $m, n$ are relatively prime.</p>
<p>Whether $(m, n)$ is used to mean $\gcd(m, n)$, or an ordered pair, depends on the context in which that notation is used.</p>
|
3,479,953 | <p>Let <span class="math-container">$v={\{v_1,v_2,...,v_k}\}$</span> Linearly independent</p>
<p><span class="math-container">$\mathbb{F} = \mathbb{R}$</span> or <span class="math-container">$\mathbb{F}=\mathbb{C}$</span></p>
<blockquote>
<p>Prove that <span class="math-container">${\{v_1 + v_2 , v_2+v_3, v_3+v_4,....,v_{k-1}+v_k,v_k+v_1}\}$</span> is linearly independent if and only if <span class="math-container">$k$</span> is an odd number </p>
</blockquote>
<p><span class="math-container">$A= S.S$</span> matrix </p>
<p><span class="math-container">$A= \begin{pmatrix}1&1&0&.&.&.&.&0\\ 0&1&1&0&.&.&.&0\\ .&.&&&&&&.\\ .&&.&&&&&.\\ .&&&.&&&&.\\ .&&&&&&&.\\ 0&0&.&.&.&.&1&1\\ 1&0&.&.&.&.&.&1\end{pmatrix}$</span></p>
<p>to prove this using Determinant I get <span class="math-container">$|A|=1+(-1)^{k+1}$</span></p>
<p>so if <span class="math-container">$k$</span> is an odd number, <span class="math-container">$k+1$</span> is even and <span class="math-container">$|A|=2$</span>, hence <span class="math-container">$A$</span> is invertible and <span class="math-container">$S$</span> is linearly independent;
if <span class="math-container">$k$</span> is even, <span class="math-container">$|A|=0$</span>, so the vectors are linearly dependent.</p>
<p>but any Idea how to prove this without using Determinant ?</p>
<p>thanks</p>
| tch | 352,534 | <p>The standard numerical approaches would be computing a (rank revealing) QR decomposition or SVD. Both <a href="https://docs.scipy.org/doc/numpy-1.10.4/reference/generated/numpy.linalg.matrix_rank.html" rel="nofollow noreferrer">Numpy</a> and <a href="https://www.mathworks.com/help/matlab/ref/rank.html" rel="nofollow noreferrer">Matlab</a> use the SVD, and both algorithms cost <span class="math-container">$O(nd^2)$</span> operations[1].
In general, whenever stability is a concern, the SVD tends to be preferred.
However, even plain QR with Householder reflections or modified Gram-Schmidt may give reasonable results in many cases.</p>
<p>As you allude to in the original question, there is always a trade off between accuracy and computation time, and the "best" algorithm depends on your application.
The notion of "rank" itself is not well defined in finite precision, and in defining numerical rank it would be reasonable to take any of the equivalent characterizations of exact rank, and then relax them. </p>
<p>In general, one could define <span class="math-container">$A$</span> as numerically rank <span class="math-container">$k$</span> if all singular values <span class="math-container">$k+1, \ldots, n$</span> are less than some tolerance (this is what numpy and matlab do).
Of course, even if you define it this way, how close your numerical method comes to computing the exact SVD of your finite precision matrix is another concern; i.e. two different implementations of an SVD algorithm would give different results.</p>
<p>On the other hand, if <span class="math-container">$1\ll k\ll n$</span> and you are satisfied with an approximation to the rank, then perhaps a randomized algorithm would be best.</p>
<p>I am not familiar with Smith normal form, but I would be very hesitant to use it in a general purpose numerical method as it seems like it would be susceptible to the same issues as Gaussian elimination.</p>
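To make the singular-value-threshold definition concrete, here is a small pure-Python sketch for the 2x2 case (singular values obtained from the eigenvalues of A^T A in closed form); numpy's `matrix_rank` applies the same thresholding idea, with a default tolerance proportional to the largest singular value and machine epsilon:

```python
import math

def singular_values_2x2(A):
    # singular values = square roots of the eigenvalues of B = A^T A,
    # obtained here via the quadratic formula (2x2 case only)
    (a, b), (c, d) = A
    p, q, r = a * a + c * c, a * b + c * d, b * b + d * d
    tr, det = p + r, p * r - q * q
    disc = math.sqrt(max(tr * tr - 4 * det, 0.0))
    return sorted(math.sqrt(max(l, 0.0)) for l in ((tr - disc) / 2, (tr + disc) / 2))

def numerical_rank(A, rtol=1e-8):
    # count singular values above a relative threshold
    s = singular_values_2x2(A)
    return sum(1 for v in s if v > rtol * s[-1])

A = [[1.0, 1.0], [1.0, 1.0 + 1e-12]]   # exact rank 2, numerical rank 1
assert numerical_rank(A) == 1
assert numerical_rank([[1.0, 0.0], [0.0, 1.0]]) == 2
```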
<p>[1] a rank-<span class="math-container">$r$</span> SVD can be computed somewhat more cheaply, so if you have an upper bound on the rank of your matrix then you may be able to do things somewhat faster.</p>
|
79,658 | <blockquote>
<p>Let $U$ and $W$ be subspaces of an inner product space $V$. If $U$ is a subspace
of $W$, then $W^{\bot}$ is a subspace of $U^{\bot}$?.</p>
</blockquote>
<p>I don't find the above statement intuitively obvious. Could someone provide a proof?</p>
| Jonas Meyer | 1,424 | <p>If you're orthogonal to everything in a set, then you're also orthogonal to everything in every subset of that set.</p>
<p>Put another way: Elements of $W^\perp$ have to be orthogonal to more vectors than elements of $U^\perp$; they have to be orthogonal to the vectors in $U$ <em>and</em> the vectors in $W\setminus U$ (if any). Therefore there are fewer of them than if we took the vectors that only have to satisfy the property of being orthogonal to vectors in $U$. (More restrictions $\implies$ fewer vectors satisfying the restrictions.)</p>

<p>Formally: if $x\in W^\perp$ and $u\in U$, then $u\in W$ because $U\subseteq W$, so $\langle x,u\rangle=0$; since $u\in U$ was arbitrary, $x\in U^\perp$.</p>
|
3,461,762 | <blockquote>
<p>Is it true that, for any Pythagorean triple <span class="math-container">$4ab > c^2$</span>?</p>
</blockquote>
<p>So this came up in a proof I was working on and it seems experimentally correct from what I've tried and I would imagine the proof is similar to proving,</p>
<p><span class="math-container">$$ab < \frac{c^2}{2}$$</span></p>
<p>The idea I have for this approach is,</p>
<p><span class="math-container">$$4ab > c^2$$</span>
<span class="math-container">$$4ab > a^2 + b^2$$</span>
(Then maybe something with the triangle inequality?)</p>
<blockquote>
<p>So is this statement true (it seems to be), and how can I prove it?</p>
<p>A counter-example would also be acceptable.</p>
</blockquote>
| Trevor Gunn | 437,127 | <p><span class="math-container">$15^2 + 112^2 = 113^2$</span>, but <span class="math-container">$4 \cdot 15 \cdot 112 \le 113^2$</span>.</p>
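A short brute-force search (illustrative, not part of the answer) turns up this counterexample along with even smaller ones:

```python
counterexamples = [
    (a, b, c)
    for c in range(1, 120)
    for b in range(1, c)
    for a in range(1, b + 1)
    if a * a + b * b == c * c and 4 * a * b <= c * c
]
assert (15, 112, 113) in counterexamples
assert (9, 40, 41) in counterexamples       # an even smaller one
assert (3, 4, 5) not in counterexamples     # "fat" triples do satisfy 4ab > c^2
```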
|
1,448,476 | <p>Let's assume that we are given $f_{X}(x)=0.5e^{-|x|}$, with x being in the set of all real numbers and Y=$|X|^{1/3}$. If I'm asked to find the pdf of Y, do I just follow the formula and do the following?</p>
<p>$f_{Y}(y)=f_{X}(g^{-1}(y))\left|\frac{d}{dy}g^{-1}(y)\right|$ to get something like:
$0.5e^{-|y^{1/3}|}\, |y^{-2/3}/3|$</p>
<p>Is it just a matter of following the formula or are there other things to consider?</p>
| juantheron | 14,311 | <p>Using First derivative $$\displaystyle f'(x) = \lim_{h\rightarrow 0}\frac{f(x+h)-f(x)}{h}$$</p>
<p>So we get $$\displaystyle f'(x) = \lim_{h\rightarrow 0}\frac{(x+h)^2\cos \left(\frac{1}{x+h}\right)-x^2\cos \left(\frac{1}{x}\right)}{h}$$</p>
<p>So we get $$\displaystyle f'(x) = \lim_{h\rightarrow 0}x^2\cdot \frac{\cos \left(\frac{1}{x+h}\right)-\cos \left(\frac{1}{x}\right)}{h}+\lim_{h\rightarrow 0}\frac{h(h+2x)}{h}\cdot \cos\left(\frac{1}{x+h}\right)$$</p>
<p>Now Using $$\displaystyle \cos C-\cos D = -2\sin \left(\frac{C+D}{2}\right)\cdot \sin \left(\frac{C-D}{2}\right)$$ for first part.</p>
<p>So $$\displaystyle f'(x) = -\lim_{h\rightarrow 0}x^2\cdot \left[\sin \left(\frac{\frac{1}{x+h}+\frac{1}{x}}{2}\right)\cdot \sin \left(\frac{\frac{1}{x+h}-\frac{1}{x}}{2}\right)\right]\cdot \frac{1}{h}+2x\cos\left(\frac{1}{x}\right)$$</p>
<p>So $$\displaystyle f'(x) = -x^2\cdot \sin \left(\frac{1}{x}\right)\cdot \lim_{h\rightarrow 0}\sin \left(\frac{-h}{2(x^2+xh)}\right)\cdot \frac{2(x^2+xh)}{-h}\cdot \frac{-1}{2(x^2+xh)}+2x\cos \left(\frac{1}{x}\right)$$</p>
<p>So we get $$\displaystyle f'(x)=x^2\cdot \sin \left(\frac{1}{x}\right)\cdot \frac{1}{x^2}+2x\cos \left(\frac{1}{x}\right)$$</p>
|
1,448,476 | <p>Let's assume that we are given $f_{X}(x)=0.5e^{-|x|}$, with x being in the set of all real numbers and Y=$|X|^{1/3}$. If I'm asked to find the pdf of Y, do I just follow the formula and do the following?</p>
<p>$f_{Y}(Y)$=$f_{x}(g^{-1}(y))$|$g^{-1}$'(y) to get something like:
$0.5e^{-|y^{1/3}|} |y^{-2/3}/3|$</p>
<p>Is it just a matter of following the formula or are there other things to consider?</p>
| mrf | 19,440 | <p>Assuming you put $f(0)=0$ to make $f$ continuous there, you have
$$
f'(0) = \lim_{h\to0} \frac{f(h)-f(0)}{h} = \lim_{h\to0} \frac{h^2\cos(1/h)}{h} = \lim_{h\to0} h \cos \frac1h = 0
$$
since $|\cos \frac1h| \le 1$ and $h \to 0$.</p>
|
2,031,699 | <p>Let $A,B$ be open subsets of $\mathbb{R}^n$. </p>
<p>Does the following equality hold?</p>
<p>$$\partial(A\cap B)= (\bar A \cap \partial B) \cup (\partial A \cap \bar B)$$</p>
<p>Edit: Thanks for showing me in the answers that above formula fails if $A$ and $B$ are disjoint but their boundaries still intersect. I was able to come up with a similar formula which avoids this case
$$[\partial(A\cap B)]\setminus(\partial A \cap \partial B)= (A \cap \partial B) \cup (\partial A \cap B),$$
which I was able to prove and suffices for what I need to do.</p>
<p>However, when showing that $ (A \cap \partial B) \cup (\partial A \cap B)\subseteq \partial(A\cap B)$, I needed to assume that the topology is induced by a metric. I wonder if the formula still holds in an arbitrary topological space.</p>
| Leo163 | 185,102 | <p>It does not hold. Consider for example $$A=\{x\in \mathbb{R}^n:|x|<1\}$$ and
$$B=\{x\in \mathbb{R}^n:|x-(2,0,\dots,0)|<1\}.$$
Since $A\cap B=\varnothing$, $\partial(A\cap B)=\varnothing$, but the RHS in your formula is the set $\{(1,0,\dots,0)\}$.</p>
|
3,993,727 | <p>For a function <span class="math-container">$f:\mathbb{R}\rightarrow \mathbb{R}$</span> which is differentiable at <span class="math-container">$x=x_0$</span>,
does it follow that <span class="math-container">$f'$</span> is continuous at <span class="math-container">$x=x_0$</span>?</p>
<p><span class="math-container">$f$</span> is differentiable at <span class="math-container">$x=x_0$</span> when <span class="math-container">$\forall \epsilon >0,\exists \delta>0, c\in \mathbb{R} \ s.t$</span>
<span class="math-container">$$0<|x-x_0|<\delta \ \ imply \ \ |\frac{f(x)-f(x_0)}{x-x_0}-c|<\epsilon $$</span></p>
<p>in this case we let <span class="math-container">$c=f'(x_0)$</span></p>
<p>if <span class="math-container">$f'(x)$</span> is defined <span class="math-container">$\forall x \in\mathbb{R}$</span> from the above definition of derivative.
can we prove <span class="math-container">$\forall \epsilon >0,\exists \delta>0,\ s.t$</span>
<span class="math-container">$$|x-x_0|<\delta \ \ imply \ \ |f'(x)-f'(x_0)|<\epsilon $$</span>?</p>
<p>I don't have any proof of above problem or counterexample, please help me!</p>
| JKL | 874,247 | <p>Differentiability does not necessarily imply the derivative is continuous. For a counterexample, see <a href="https://math.stackexchange.com/questions/1391544/differentiable-but-not-continuously-differentiable">Differentiable but not continuously differentiable.</a>.</p>
<p>The class of differentiable functions that do have a continuous derivative are denoted <span class="math-container">$C^1$</span>.</p>
|
3,993,727 | <p>For a function <span class="math-container">$f:\mathbb{R}\rightarrow \mathbb{R}$</span> which is differentiable at <span class="math-container">$x=x_0$</span>,
does it follow that <span class="math-container">$f'$</span> is continuous at <span class="math-container">$x=x_0$</span>?</p>
<p><span class="math-container">$f$</span> is differentiable at <span class="math-container">$x=x_0$</span> when <span class="math-container">$\forall \epsilon >0,\exists \delta>0, c\in \mathbb{R} \ s.t$</span>
<span class="math-container">$$0<|x-x_0|<\delta \ \ imply \ \ |\frac{f(x)-f(x_0)}{x-x_0}-c|<\epsilon $$</span></p>
<p>in this case we let <span class="math-container">$c=f'(x_0)$</span></p>
<p>if <span class="math-container">$f'(x)$</span> is defined <span class="math-container">$\forall x \in\mathbb{R}$</span> from the above definition of derivative.
can we prove <span class="math-container">$\forall \epsilon >0,\exists \delta>0,\ s.t$</span>
<span class="math-container">$$|x-x_0|<\delta \ \ imply \ \ |f'(x)-f'(x_0)|<\epsilon $$</span>?</p>
<p>I don't have any proof of above problem or counterexample, please help me!</p>
| Kavi Rama Murthy | 142,385 | <p><span class="math-container">$f(x)=x^{2}\sin (\frac1 x)$</span> for <span class="math-container">$x \neq 0$</span>, <span class="math-container">$ f(0)=0$</span> defines a function which is differentiable at <span class="math-container">$0$</span> but its derivative is not continuous at <span class="math-container">$0$</span>. In fact, <span class="math-container">$f'(x)$</span> does not even have a limit as <span class="math-container">$x \to 0$</span>.</p>
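Concretely, for x not equal to 0 the derivative is f'(x) = 2x sin(1/x) - cos(1/x), and along the points x_k = 1/(k*pi) it keeps alternating between values near +1 and -1, so no limit at 0 is possible:

```python
import math

# for x != 0: f'(x) = 2x*sin(1/x) - cos(1/x)
fprime = lambda x: 2 * x * math.sin(1 / x) - math.cos(1 / x)

# along x_k = 1/(k*pi) the derivative is essentially -cos(k*pi) = (-1)^(k+1),
# so it keeps jumping between values near -1 and +1 as x_k -> 0
for k in range(100, 110):
    v = fprime(1 / (k * math.pi))
    assert abs(v - (-1) ** (k + 1)) < 1e-2
```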
|
1,450,497 | <p>Consider the class of topological spaces $\langle X,\mathcal T\rangle$ such that the following are equivalent for $A\subseteq X$:</p>
<ul>
<li>$A$ is a $G_\delta$ set with respect to $\mathcal T$</li>
<li>$A\in\mathcal T$ or $X\smallsetminus A\in\mathcal T$</li>
</ul>
<p>Open sets, of course, are always $G_\delta$. So, equivalently, we are considering the topological spaces such that</p>
<ul>
<li>closed sets are $G_\delta$ and</li>
<li>non-open $G_\delta$ sets are closed.</li>
</ul>
<p>Clearly, not all spaces satisfy this equivalence. For example, with respect to the typical topology, $\Bbb R$ has (many!) $G_\delta$ subsets that are neither open nor closed. On the other hand, with respect to the order topology, the set $\omega_1\cup\{\omega_1\}$ has $\{\omega_1\}$ as a closed subset that is not $G_\delta$ (unless, of course, $\omega_1$ is a countable union of countable sets, as may happen in models of $\mathsf{ZF}$).</p>
<p>On the other hand, there are certainly spaces that <em>do</em> satisfy the equivalence, with the discrete and trivial topologies on any set giving us two (not-very-enlightening) examples.</p>
<hr>
<p>I wonder, then, has the described class of topological spaces been studied in much depth? If so, I am curious about the following:</p>
<ol>
<li>Are there any non-trivial topologies that make such a space?</li>
<li>Is there any common nomenclature for such spaces?</li>
<li>Are there sets of conditions on a topology that imply (or are implied by) a space being part of the class of such spaces?</li>
<li>Furthermore, the members of this class may clearly vary from model to model of $\mathsf{ZF},$ so is there any such set of conditions such that one implication or the other (or both) is equivalent to a Choice principle?</li>
</ol>
<hr>
<p><strong>Edit</strong>: The examples so far (aside from indiscrete spaces on sets with at least two points) have the property that every subset of the underlying set is open or closed. Is it possible that all such space are indiscrete or have all subsets open or closed?</p>
<p>Another thing that is readily apparent (now that I'm a little more awake) is that $\langle X,\mathcal T\rangle$ has the desired property if and only if $$\sigma(\mathcal T)=\mathcal T\cup\{X\setminus U:U\in\mathcal T\},$$ where $\sigma(\mathcal T)$ is the (Borel) $\sigma$-algebra (on $X$) generated by $\mathcal T.$</p>
| marty cohen | 13,079 | <p>For positive integers $x$ and $y$
if $x|y$ and $y|x$
then $x=y$.</p>
<p>Proof:</p>
<p>By prime factorization,
let
$x = \prod_p p^{x_p}$
and
$y = \prod_p p^{y_p}$
.</p>
<p>If $x|y$ then
$x_p \le y_p$.
If $y|x$ then
$y_p \le x_p$.</p>
<p>Therefore,
if $x|y$ and $y|x$
then
$x_p \le y_p$
and
$y_p \le x_p$,
so
$x_p = y_p$
so that
$x = y$.</p>
|
2,842,217 | <p>I'm looking to understand the tangent Taylor series, but I'm struggling to understand how to use long division to divide the sine series by the cosine series. I also can't find examples of the tangent series much beyond x^5 (Wikipedia and YouTube videos both stop at the second or third term), which is not enough for me to see any pattern. (x^3/3 + 2x^5/15 tells me nothing.)</p>
<p>Wikipedia says Bernoulli numbers, which I plan on studying next, but seriously, I could really use an example of the tangent series out to 5-6 terms just to get a ballpark of what's going on before I start plug and pray. If someone can explain why the long division of the series spits out x^3/3 instead of x^3/3x^2, that would help too,</p>
<p>because I took x^3/6 divided by x^2/2 and got 2x^3/6x^2, following the logic that 4/2 divided by 3/5 = 2/0.6 or 20/6. So I multiplied my top and bottom terms for the numerator, and my two middle terms for the denominator (4x5)/(2x3) = correct.</p>
<p>But when I do that with terms in the Taylor series I'm doing something wrong. Does that first x from sine divided by that first 1 from cosine have anything to do with it?</p>
<p>Completely lost. </p>
| Robert Israel | 8,508 | <p>$$\tan(x) = x+{\frac{1}{3}}{x}^{3}+{\frac{2}{15}}{x}^{5}+{\frac{17}{315}}{x}^{7}+
{\frac{62}{2835}}{x}^{9}+{\frac{1382}{155925}}{x}^{11}+{\frac{21844}{
6081075}}{x}^{13}+\ldots$$</p>
<p>EDIT: Long division:</p>
<p>$$ \matrix{& & x &+ \frac{x^3}{3} &+ \frac{2 x^5}{15} &+ \frac{17 x^7}{315}&+ \ldots\cr& &---&---&---&---&--- \cr 1 - \frac{x^2}{2} + \frac{x^4}{24} - \frac{x^6}{720} + \ldots & | & x &- \frac{x^3}{6} &+ \frac{x^5}{120} &- \frac{x^7}{5040} &+ \ldots\cr
& & x &- \frac{x^3}{2} &+ \frac{x^5}{24} &- \frac{x^7}{720} &+ \ldots\cr
& & ---&---&---&---&---\cr
& & &\frac{x^3}{3} &- \frac{x^5}{30} &+ \frac{x^7}{840} &+ \ldots\cr
& & & \frac{x^3}{3} & - \frac{x^5}{6} & + \frac{x^7}{72} &+\ldots\cr
& & & --- & --- & --- & ---\cr
& & & & \frac{2 x^5}{15} & - \frac{4 x^7}{315} & +\ldots\cr
& & & & \frac{2 x^5}{15} & - \frac{2 x^7}{30} & +\ldots\cr
& & & & --- & --- & ---\cr
& & & & & \frac{17 x^7}{315} & + \ldots }$$</p>
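The whole long division can be automated with exact rational arithmetic: solve cos-series times quotient = sin-series, coefficient by coefficient. This reproduces the coefficients listed above:

```python
from fractions import Fraction
from math import factorial

N = 14  # coefficients of x^0 .. x^13
sin_c = [Fraction((-1) ** (n // 2), factorial(n)) if n % 2 else Fraction(0)
         for n in range(N)]
cos_c = [Fraction((-1) ** (n // 2), factorial(n)) if n % 2 == 0 else Fraction(0)
         for n in range(N)]

# long division of power series: solve cos * tan = sin term by term
tan_c = []
for n in range(N):
    acc = sin_c[n] - sum(tan_c[k] * cos_c[n - k] for k in range(n))
    tan_c.append(acc / cos_c[0])

assert tan_c[3] == Fraction(1, 3)
assert tan_c[5] == Fraction(2, 15)
assert tan_c[7] == Fraction(17, 315)
assert tan_c[13] == Fraction(21844, 6081075)
```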
|
3,057,819 | <p>I'm giving a second try to this question, hopefully with a better problem definition.</p>
<p>I have a circle inscribed inside a square and would like to know the point the radius touches when extended. In the figure, we have calculated the angle (<code>θ</code>), the center <code>C</code>, and the points <code>D</code> and <code>E</code>. How do I calculate the <code>(x,y)</code> of <code>A</code> and <code>B</code>? </p>
<p><a href="https://i.stack.imgur.com/Y0st7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Y0st7.png" alt="enter image description here"></a></p>
| Mohammad Riazi-Kermani | 514,496 | <p>If you know the coordinates of the center then you add <span class="math-container">$r$</span> to the <span class="math-container">$x$</span> coordinate and you add <span class="math-container">$r \tan (\theta)$</span> to the <span class="math-container">$y$</span> coordinate of the center to get coordinates of <span class="math-container">$A$</span> </p>
<p>Similarly you can find coordinates of <span class="math-container">$B$</span> </p>
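<p>For concreteness, here is a small Python sketch of that formula (the function name and the sample center/angle are my own; it assumes $|\theta| < \pi/2$, so the extended radius exits through the right side of the square):</p>

```python
import math

def hit_right_side(cx, cy, r, theta):
    """Point where the radius drawn at angle theta (measured from the
    positive x-axis, |theta| < pi/2) meets the right side of the
    circumscribing square: (cx + r, cy + r*tan(theta))."""
    return (cx + r, cy + r * math.tan(theta))

A = hit_right_side(0.0, 0.0, 2.0, math.radians(45))
print(A)  # approximately (2.0, 2.0): at 45 degrees the ray exits at a corner
```

<p>Point <code>B</code> is found the same way with the appropriate side and angle.</p>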
|
3,057,819 | <p>I am giving this question a second try, hopefully with a better problem definition.</p>
<p>I have a circle inscribed inside a square and would like to know the point where the radius, when extended, meets the square. In the figure, we have already calculated the angle (<code>θ</code>), the center <code>C</code>, and the points <code>D</code> and <code>E</code>. How do I calculate the <code>(x,y)</code> coordinates of <code>A</code> and <code>B</code>?</p>
<p><a href="https://i.stack.imgur.com/Y0st7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Y0st7.png" alt="enter image description here"></a></p>
| Michael Hoppe | 93,935 | <p>Describe the circle as
<span class="math-container">$$\vec x=\vec m+\begin{pmatrix} r\cos(t)\\ r\sin(t)
\end{pmatrix}.$$</span>
Now consider the ray
<span class="math-container">$$\vec y=\vec m+\lambda\begin{pmatrix} r\cos(t)\\ r\sin(t)
\end{pmatrix}$$</span>
with <span class="math-container">$\lambda>0$</span>.
You want to have the first coordinate for <span class="math-container">$t\in(-\pi/2,\pi/2)$</span> of <span class="math-container">$\vec y$</span> to be <span class="math-container">$m_1+r$</span>, hence <span class="math-container">$\lambda=1/\cos(t)$</span> and the desired point is
<span class="math-container">$$\vec m+\frac{1}{\cos(t)}\begin{pmatrix} r\cos(t)\\ r\sin(t)
\end{pmatrix}=\begin{pmatrix} m_1+r\\ m_2+r\tan(t)
\end{pmatrix}.$$</span></p>
|
3,581,390 | <p>The problem is as follows:</p>
<p>Mike was born on <span class="math-container">$\textrm{October 1st, 2012,}$</span> and Jack on <span class="math-container">$\textrm{December 1st, 2013}$</span>. Find the date on which triple Jack's age equals double Mike's age.</p>
<p>The alternatives given in my book are as follows:</p>
<p><span class="math-container">$\begin{array}{ll}
1.&\textrm{April 1st, 2016}\\
2.&\textrm{March 21st, 2015}\\
3.&\textrm{May 8th, 2015}\\
4.&\textrm{May 1st, 2015}\\
\end{array}$</span> </p>
<p>I tried all sorts of tricks in the book to get this one but I can't find a way to find the given date. What sort of formula or procedure should be used to calculate this date? Can someone help me?</p>
| fleablood | 280,126 | <p>Just do it.</p>
<p>Mike was born Oct 1st, 2012. So <span class="math-container">$365$</span> days later is Oct 1st, 2013. And <span class="math-container">$31$</span> days after that is Nov. 1st, 2013 and Mike is <span class="math-container">$365+31 = 396$</span> days old. And <span class="math-container">$30$</span> days after that is Dec. 1st, 2013. Mike is <span class="math-container">$396+30=426$</span> days old and Jack is born.</p>
<p>So Mike is <span class="math-container">$426$</span> days older than Jack.</p>
<p>So we want to find when <span class="math-container">$3J = 2M$</span> where <span class="math-container">$J = $</span> Jack's age in days and <span class="math-container">$M$</span> is Mike's age in days. And we know <span class="math-container">$M = J +426$</span>.</p>
<p>So <span class="math-container">$3J = 2(J + 426)$</span></p>
<p><span class="math-container">$3J = 2J + 852$</span>.</p>
<p>So <span class="math-container">$J = 852$</span>. So we need to find the date when <span class="math-container">$J$</span> is <span class="math-container">$852$</span> days old.</p>
<p>On Dec. 1, 2014, Jack is <span class="math-container">$365$</span> days old.</p>
<p>On Dec. 1, 2015, Jack is <span class="math-container">$730$</span> days old.</p>
<p>Jan 1, 2016 is <span class="math-container">$31$</span> days later and Jack is <span class="math-container">$761$</span> days old.</p>
<p>Feb 1, 2016 is <span class="math-container">$31$</span> days later and Jack is <span class="math-container">$792$</span> days old.</p>
<p>March 1, 2016 is <span class="math-container">$29$</span> days later (2016 is a leap year) and Jack is <span class="math-container">$821$</span> days old.</p>
<p>April 1, 2016 is <span class="math-container">$31$</span> days later and Jack is <span class="math-container">$852$</span> days old.</p>
<p>That's it.</p>
<p>If you are in a hurry....</p>
<p>One year and two months is <span class="math-container">$\approx 1\frac 2{12}$</span> years.</p>
<p>So we have <span class="math-container">$M = J + 1\frac 2{12}$</span> and need <span class="math-container">$3J = 2M = 2(J +1\frac 2{12})= 2J + 2\frac 4{12}$</span>, so <span class="math-container">$J = 2$</span> years and <span class="math-container">$4$</span> months approximately.</p>
<p>And <span class="math-container">$2$</span> years and <span class="math-container">$4$</span> months after Dec 1. 2013 is April 1. 2016. This may be off by a few days as not all months have the same numbers of days.</p>
<p>======</p>
<p>Perhaps the most adaptable and pragmatic approach is this:</p>
<p>Let <span class="math-container">$D$</span> be the difference in ages so <span class="math-container">$M = J + D$</span>.</p>
<p>We need to find the date when <span class="math-container">$3J = 2M = 2(J+D)$</span> so <span class="math-container">$3J = 2J + 2D$</span> so <span class="math-container">$J = 2D$</span>.</p>
<p>That is, when Jack's age equals twice the difference between their ages.</p>
<p>As <span class="math-container">$D$</span> is a little over a year, we need a time when Jack is somewhat more than two years old. April 1, 2016 is the only alternative even remotely close. </p>
<p>I suppose the trick is to realize that as Jack was born in December, at the very <em>end</em> of 2013, you should <em>not</em> just subtract 2013 from 2015 or 2016 and figure roughly 2 or 3 years. The candidate dates are instead about a year and a bit, or two years and a bit, after his birth. As we need two years and four months, April 1, 2016 is the closest.</p>
<p>And if you want to figure out <span class="math-container">$D$</span> in detail its <span class="math-container">$1$</span> year and <span class="math-container">$2$</span> months more or less, <span class="math-container">$426$</span> days exactly, <span class="math-container">$61$</span> weeks roughly, <span class="math-container">$10,000$</span> plus hours, or whatever.</p>
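<p>The whole day count can be double-checked with Python's <code>datetime</code> module (a sketch; the variable names are mine):</p>

```python
from datetime import date, timedelta

mike = date(2012, 10, 1)
jack = date(2013, 12, 1)

D = (jack - mike).days        # Mike's head start in days
# 3J = 2M together with M = J + D gives J = 2D
answer = jack + timedelta(days=2 * D)
print(D, answer)  # 426 2016-04-01
```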
|
1,392,340 | <p>Suppose there is a function $f:\mathbb R^n \to \mathbb R$. One way to find a stationary value is to solve the ODE $\dot x = - \nabla f(x)$, and look at $\lim_{t\to\infty} x(t)$.</p>
<p>However I want to consider a variation of this method where we solve
$$ dx = - \nabla f(x) dt + C(t,x) \cdot dW_t ,$$
where $C(t,x) \in \mathbb R^{n\times n}$, and $W_t$ is an $n$-dimensional Wiener process, with some kind of condition like $C(t,x) \to 0$ as $t\to\infty$. The hope is that it might converge to a stationary value faster, and also that the stationary value it converges to will be a local minimum.</p>
<p>Can anyone give me some resources for where I could read about this sort of thing? Using a google search, and following references, I did find the book
<em>Random perturbations of dynamical systems</em> by Mark I. Freidlin and Alexander D. Wentzell, but I didn't find "gradient descent" in the index.</p>
| denis | 73,222 | <p>Not directly your question, but
Wales, <a href="http://books.google.com/books?isbn=0521814154" rel="nofollow">Energy landscapes</a> (2003, 681p)
might be interesting.
That leads to "basin-hopping", a <em>practical</em> stochastic algorithm for minimizing e.g.
configurations of molecules.
<a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.basinhopping.html" rel="nofollow">scipy basinhopping</a>
says</p>
<blockquote>
<p>The acceptance test used here is the Metropolis criterion of standard Monte Carlo algorithms,
although there are many other possibilities</p>
</blockquote>
<p>(By the way, for accelerated gradient descent in general
see Nesterov momentum, Adagrad, Adadelta ... more methods than test cases.)</p>
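<p>Separately from basin-hopping, the SDE in the question can be simulated directly with an Euler-Maruyama step and a decaying noise schedule. This toy Python sketch (entirely my own, on a one-dimensional double well) illustrates the hoped-for behavior of settling into a local minimum once the noise is annealed away:</p>

```python
import math, random

# Double-well objective f(x) = (x^2 - 1)^2, local minima at x = +-1.
def grad(x):
    return 4 * x * (x * x - 1)

random.seed(0)
x, dt = 3.0, 0.01
for step in range(20000):
    sigma = 0.5 / (1 + 0.01 * step)            # noise schedule C(t) -> 0
    x += -grad(x) * dt + sigma * math.sqrt(dt) * random.gauss(0, 1)
    x = max(-3.0, min(3.0, x))                 # keep the iterate bounded

print(abs(x))  # should end up close to 1.0, i.e. near a local minimum
```

<p>With constant (non-decaying) noise this is Langevin dynamics, whose stationary density is proportional to $e^{-2f/\sigma^2}$; the annealing is what pins the iterate down to a single minimum.</p>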
|
1,734,680 | <p>How can I find $F'(x)$ given $F(x) = \int_0^{x^3}\sin(t) dt$ ? <br>
I think that (by the fundamental theorem of calculus) since $f = \sin(x)$ is continuous in $[0, x^3]$, then $F$ is differentiable and $F'(x) = f(x) = \sin(x)$ but I'm not sure...</p>
| Zelos Malum | 197,853 | <p>You have that $F(x)=1-\cos x^3$, which can then be differentiated directly. If you do not want to evaluate the integral first, think like this. Let $g$ be an antiderivative of $f$. Then for $F(x)=\int^{h(x)}_0 f(t)\,dt$ we have
$$F'(x)=\big(g(h(x))-g(0)\big)'=\big(g(h(x))\big)'=g'(h(x))h'(x)=f(h(x))h'(x)$$
where I used the chain rule and the fundamental theorem of calculus.</p>
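<p>A quick numerical sanity check of the chain-rule formula, with $h(x)=x^3$ and $f=\sin$, so that $F(x)=1-\cos x^3$ and $F'(x)=3x^2\sin(x^3)$ (a Python sketch):</p>

```python
import math

F = lambda x: 1 - math.cos(x ** 3)                  # closed form of the integral
F_prime = lambda x: math.sin(x ** 3) * 3 * x ** 2   # f(h(x)) * h'(x)

x, h = 1.3, 1e-6
central = (F(x + h) - F(x - h)) / (2 * h)           # numerical derivative
print(abs(central - F_prime(x)))                    # very small
```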
|
1,785,444 | <p>The question says to 'Express the last equation of each system as a sum of multiples of the first two equations.' </p>
<p>System in question being: </p>
<p>$ x_1+x_2+x_3=1 $</p>
<p>$ 2x_1-x_2+3x_3=3 $</p>
<p>$ x_1-2x_2+2x_3=2 $</p>
<p>The question gives a hint saying "Label the equations, use the gaussian algorithm" and the answer is 'Eqn 3 = Eqn 2 - Eqn 1' but short of eye-balling it, I'm not sure how they deduce that after row-reducing to REF.</p>
| alphacapture | 334,625 | <p>Hint:</p>
<p>We want to solve</p>
<p>$$a(x_1+x_2+x_3)+b(2x_1-x_2+3x_3)=x_1-2x_2+2x_3$$</p>
<p>Matching coefficients, we want to solve</p>
<p>$$a+2b=1$$</p>
<p>$$a-b=-2$$</p>
<p>$$a+3b=2.$$</p>
<p>Can you take it from here?</p>
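<p>Carrying the hint through with exact arithmetic (a Python sketch; the variable names are mine): the first two equations give $b=1$, $a=-1$, the third equation is then automatically satisfied, and indeed Eqn 3 = Eqn 2 - Eqn 1.</p>

```python
from fractions import Fraction as Fr

# Solve a + 2b = 1 and a - b = -2; the third equation a + 3b = 2
# must then hold for the system to be consistent.
b = Fr(1 - (-2), 2 - (-1))     # subtracting the equations gives 3b = 3
a = Fr(-2) + b                 # back-substitute into a - b = -2
assert a + 3 * b == 2          # consistency: the third equation holds

# Check a*Eqn1 + b*Eqn2 = Eqn3 coefficient by coefficient (incl. RHS)
eq1 = (1, 1, 1, 1); eq2 = (2, -1, 3, 3); eq3 = (1, -2, 2, 2)
assert all(a * u + b * v == w for u, v, w in zip(eq1, eq2, eq3))
print(a, b)  # -1 1
```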
|
1,785,444 | <p>The question says to 'Express the last equation of each system as a sum of multiples of the first two equations.' </p>
<p>System in question being: </p>
<p>$ x_1+x_2+x_3=1 $</p>
<p>$ 2x_1-x_2+3x_3=3 $</p>
<p>$ x_1-2x_2+2x_3=2 $</p>
<p>The question gives a hint saying "Label the equations, use the gaussian algorithm" and the answer is 'Eqn 3 = Eqn 2 - Eqn 1' but short of eye-balling it, I'm not sure how they deduce that after row-reducing to REF.</p>
| egreg | 62,967 | <p>You can do row reduction on the <em>transpose</em>:
\begin{align}
\begin{bmatrix}
1 & 2 & 1\\
1 & -1 & -2\\
1 & 3 & 2\\
1 & 3 & 2\\
\end{bmatrix}
&\to
\begin{bmatrix}
1 & 2 & 1\\
0 & -3 & -3\\
0 & 1 & 1\\
0 & 1 & 1\\
\end{bmatrix}
&&\begin{aligned}
R_2&\gets R_2-R_1\\
R_3&\gets R_3-R_1\\
R_4&\gets R_4-R_1
\end{aligned}
\\[6px]
&\to
\begin{bmatrix}
1 & 2 & 1\\
0 & 1 & 1\\
0 & 1 & 1\\
0 & 1 & 1\\
\end{bmatrix}
&&R_2\gets-R_2/3
\\[6px]
&\to
\begin{bmatrix}
1 & 2 & 1\\
0 & 1 & 1\\
0 & 0 & 0\\
0 & 0 & 0\\
\end{bmatrix}
&&\begin{aligned}
R_3&\gets R_3-R_2\\
R_4&\gets R_4-R_2
\end{aligned}
\\[6px]
&\to
\begin{bmatrix}
1 & 0 & -1\\
0 & 1 & 1\\
0 & 0 & 0\\
0 & 0 & 0\\
\end{bmatrix}
&&R_1\gets R_1-2R_2
\end{align}
which makes clear that $C_3=-C_1+C_2$</p>
|
3,054,321 | <p>I'm looking for a closed form for this sequence,</p>
<blockquote>
<p><span class="math-container">$$\sum_{n=1}^{\infty}\left(\sum_{k=1}^{n}\frac{1}{(25k^2+25k+4)(n-k+1)^3} \right)$$</span></p>
</blockquote>
<p>I applied convergence test. The series converges.I want to know if the series is expressed with any mathematical constant. How can we do that?</p>
| marty cohen | 13,079 | <p>Proceeding in
my usual naive way,</p>
<p><span class="math-container">$\begin{array}\\
S
&=\sum_{n=1}^{\infty}\sum_{k=1}^{n}\frac{1}{(25k^2+25k+4)(n-k+1)^3}\\
&=\sum_{k=1}^{\infty}\sum_{n=k}^{\infty}\frac{1}{(25k^2+25k+4)(n-k+1)^3} \\
&=\sum_{k=1}^{\infty}\frac1{(25k^2+25k+4)}\sum_{n=k}^{\infty}\frac{1}{(n-k+1)^3} \\
&=\sum_{k=1}^{\infty}\frac1{(25k^2+25k+4)}\sum_{n=1}^{\infty}\frac{1}{n^3} \\
&=\zeta(3)\sum_{k=1}^{\infty}\frac1{(25k^2+25k+4)}\\
&=\zeta(3)\sum_{k=1}^{\infty}\frac1{(5k+1)(5k+4)}\\
&=\zeta(3)\sum_{k=1}^{\infty}\frac13\left(\frac1{5k+1}-\frac1{5k+4}\right)\\
&=\frac{\zeta(3)}{3}\lim_{m \to \infty} \sum_{k=1}^{m}\left(\frac1{5k+1}-\frac1{5k+4}\right)\\
&=\frac{\zeta(3)}{15}\lim_{m \to \infty} \left(\sum_{k=1}^{m}\frac1{k+1/5}-\sum_{k=1}^{m}\frac1{k+4/5}\right)\\
&=-\frac{\zeta(3)}{15}\lim_{m \to \infty} \left(\sum_{k=1}^{m}(\frac1{k}-\frac1{k+1/5})-\sum_{k=1}^{m}(\frac1{k}-\frac1{k+4/5})\right)\\
&=-\frac{\zeta(3)}{15}(\psi(6/5)-\psi(9/5))\\
&=\frac{\zeta(3)}{15}(\psi(9/5)-\psi(6/5))\\
\end{array}
$</span></p>
<p>where
<span class="math-container">$\psi(x)$</span>
is the digamma function
(<a href="https://en.wikipedia.org/wiki/Digamma_function" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Digamma_function</a>).</p>
<p>Note:
Wolfy says that
<span class="math-container">$\sum_{k=1}^{\infty}\frac1{(5k+1)(5k+4)}
= \frac{\pi}{15}\sqrt{1 + \frac{2}{\sqrt{5}}} - \frac1{4}
$</span>.</p>
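<p>The two closed forms can be checked against a direct partial sum numerically (a Python sketch; the error left over is just the neglected tail of the series):</p>

```python
import math

K = 10 ** 6   # cutoff; the neglected tail is roughly 1/(25 K)
partial = sum(1.0 / ((5 * k + 1) * (5 * k + 4)) for k in range(1, K + 1))

closed = math.pi / 15 * math.sqrt(1 + 2 / math.sqrt(5)) - 0.25
print(abs(partial - closed))  # about 4e-8, the size of the tail
```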
|
4,004 | <p>This is related to <a href="https://math.stackexchange.com/q/133615/26306">this post</a>, please read the comments.</p>
<p>What is the usual way of dealing with that kind of problems on math.SE?
(By "that kind of problems" I mean someone posting tasks from an ongoing contest.)</p>
<p>I mean I did email the contest coordinator and flag the post, but it seems that there is more than one user and more than one question involved. Also, I do not know whether the OP is a contestant or e.g. a friend that wishes to learn the answer himself. The whole situation is not trivial and I do not see any way to prevent such abuse on future occasions (one cannot possibly be aware of all the contests in the world).</p>
<p>Any comments/ideas/explanations will be appreciated.</p>
| Mariano Suárez-Álvarez | 274 | <p>If anyone notices this happening, it is nice to inform the contest coordinators. </p>
<p>On the other hand, I don't think it is reasonable (nor realistic) to have a policy against this. </p>
|
4,004 | <p>This is related to <a href="https://math.stackexchange.com/q/133615/26306">this post</a>, please read the comments.</p>
<p>What is the usual way of dealing with that kind of problems on math.SE?
(By "that kind of problems" I mean someone posting tasks from an ongoing contest.)</p>
<p>I mean I did email the contest coordinator and flag the post, but it seems that there is more than one user and more than one question involved. Also, I do not know whether the OP is a contestant or e.g. a friend that wishes to learn the answer himself. The whole situation is not trivial and I do not see any way to prevent such abuse on future occasions (one cannot possibly be aware of all the contests in the world).</p>
<p>Any comments/ideas/explanations will be appreciated.</p>
| Gilles 'SO- stop being evil' | 1,853 | <p>If someone posts a task from an ongoing contest as a question, you may mention it in a comment. Or not, since the fact that the question is a contest task is a related bit of trivia but not particularly relevant in answering the question.</p>
<p>If the question is <em>original</em> from the contest, you should comment or edit to provide attribution. A question taken from somewhere else (a contest, a book, a blog…) without attribution is plagiarism. Note that contest questions are not always original — don't attribute a question to contest organizers if they took it from an earlier book!</p>
<p>Acknowledging the source of a question is required by academic traditions. <strong>Refusing to answer questions is very much against academic traditions.</strong></p>
<p>As a (former) scientist, I am deeply appalled that a part of the Math.SE community seems to consider that it is ok to suppress a question because it is part of an ongoing contest. This amounts to giving the contest organizers a monopoly on the question.</p>
<p>Scientific tradition is strongly opposed to a monopoly on ideas. Even the law in most countries, which is less liberal than mathematicians tend to be, recognizes a monopoly (with limitations) on a particular way to express an idea (copyright) and on the practical use of an idea (patents). <strong>A monopoly on the ideas themselves is not acceptable</strong> to our society in general, nor the narrower community of mathematicians.</p>
<p>As an answerer, you are entirely free to refuse to answer a question. But you are not entitled to decide this for others. If a question is otherwise acceptable to this site (on-topic, reasonably scoped, etc.), the fact that it has also appeared elsewhere is not grounds to prevent others from answering it.</p>
<p>I don't see any point in flagging the question: what do you expect a Stack Exchange moderator to do with it? As a moderator on the sister site <a href="https://cs.stackexchange.com/">Computer Science</a>, where <a href="https://cs.meta.stackexchange.com/questions/394/answering-exercise-questions-from-textbooks-where-the-authors-explicitly-ask-oth">a similar issue has been raised</a>, I would have no idea what to do with such a flag, it would only be a waste of time. You may contact the contest organizers if you wish; correlating the contest participants with Stack Exchange users is their problem (and not one they can solve — even if everyone made their real identity apparent, someone could post a question through a front).</p>
<p>To reiterate, I find the very idea of acknowledging a monopoly on the discussion of a scientific idea unethical, and I am deeply troubled that a community of scientists even considers it.</p>
|
1,657,557 | <p>For example, how would I enter y^(IV) - 16y = 0? </p>
<p>Typing out "fourth derivative", or putting four ' marks, does not seem to work. </p>
| Enrico M. | 266,764 | <p>Input:</p>
<p>y''''[x]</p>
<p><a href="http://www.wolframalpha.com/input/?i=y''''(x)+-+16y(x)+%3D+0" rel="nofollow">http://www.wolframalpha.com/input/?i=y''''(x)+-+16y(x)+%3D+0</a>.</p>
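<p>Whatever the input syntax, the equation itself is linear with constant coefficients: the characteristic equation $r^4=16$ has roots $\pm 2$ and $\pm 2i$, so the general solution is $y=C_1e^{2x}+C_2e^{-2x}+C_3\cos 2x+C_4\sin 2x$. A finite-difference Python sketch (my own) confirming two of these solutions:</p>

```python
import math

def fourth_derivative(f, x, h=1e-2):
    # O(h^2) central difference approximation to f''''(x)
    return (f(x - 2*h) - 4*f(x - h) + 6*f(x) - 4*f(x + h) + f(x + 2*h)) / h ** 4

# y = e^{2x} and y = cos(2x) both satisfy y'''' = 16 y
errs = [abs(fourth_derivative(f, 0.7) - 16 * f(0.7))
        for f in (lambda x: math.exp(2 * x), lambda x: math.cos(2 * x))]
print(errs)  # both small (discretization error only)
```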
|
3,354,566 | <p>I see integrals defined as anti-derivatives but for some reason I haven't come across the reverse. Both seem equally implied by the fundamental theorem of calculus.</p>
<p>This emerged as a sticking point in <a href="https://math.stackexchange.com/questions/3354502/are-integrals-thought-of-as-antiderivatives-to-avoid-using-faulhaber">this question</a>.</p>
| hmakholm left over Monica | 14,366 | <p>Let <span class="math-container">$f(x)=0$</span> for all real <span class="math-container">$x$</span>.</p>
<p>Here is one anti-integral for <span class="math-container">$f$</span>:</p>
<p><span class="math-container">$$ g(x) = \begin{cases} x &\text{when }x\in\mathbb Z \\ 0 & \text{otherwise} \end{cases} $$</span>
in the sense that <span class="math-container">$\int_a^b g(x)\,dx = f(b)-f(a)$</span> for all <span class="math-container">$a,b$</span>.</p>
<p>How do you explain that the slope of <span class="math-container">$f$</span> at <span class="math-container">$x=5$</span> is not <span class="math-container">$g(5)=5$</span>?</p>
<hr>
<p>The idea works better if we restrict all the functions we ever look at to "sufficiently nice" ones -- for example, we could insist that everything is real analytic.</p>
<p>Merely looking for a <em>continuous</em> anti-integral wouldn't suffice to recover the usual concept of derivative, because then something like
<span class="math-container">$$ x \mapsto \begin{cases} 0 & \text{when }x=0 \\ x^2\sin(1/x) & \text{otherwise} \end{cases} $$</span>
wouldn't have a derivative on <span class="math-container">$\mathbb R$</span> (which it does by the usual definition).</p>
|
2,129,086 | <p>I know that the total number of solutions without constraints is </p>
<p>$\binom{3+11−1}{11}= \binom{13}{11}= \frac{13·12}{2} =78$</p>
<p>Then with x1 ≥ 1, x2 ≥ 2, and x3 ≥ 3. </p>
<p>The textbook has the following solution: </p>
<p>$\binom{3+5-1}{5}=\binom{7}{5}=21$. I can't figure out where the 5 is coming from.</p>
<p>Is the reason for choosing 5 that the constraints add up to 6, so $11 - 6 = 5$?</p>
| Rodrigo de Azevedo | 339,790 | <p>The number of nonnegative integer solutions of $x_1 + x_2 + x_3 = 11$ is the coefficient of $t^{11}$ in the following generating function [JDL]</p>
<p>$$\dfrac{1}{(1-t)^3}$$</p>
<p>Suppose now that we are interested in integer solutions with $x_1 \geq 1$, $x_2 \geq 2$ and $x_3 \geq 3$. We thus introduce three new <em>nonnegative</em> variables</p>
<p>$$z_1 : = x_1 - 1 \qquad\qquad\qquad z_2 : = x_2 - 2 \qquad\qquad\qquad z_3 : = x_3 - 3$$</p>
<p>The number of <em>admissible</em> integer solutions of $x_1 + x_2 + x_3 = 11$ is the number of <em>nonnegative</em> integer solutions of $z_1 + z_2 + z_3 = 5$, which is the coefficient of $t^5$ in the generating function.</p>
<p>Using <a href="http://www.sympy.org" rel="nofollow noreferrer">SymPy</a>:</p>
<pre><code>>>> from sympy import *
>>> t = Symbol('t')
>>> f = 1 / (1-t)**3
>>> f.series(t,0,12)
1 + 3*t + 6*t**2 + 10*t**3 + 15*t**4 + 21*t**5 + 28*t**6 + 36*t**7 + 45*t**8 + 55*t**9 + 66*t**10 + 78*t**11 + O(t**12)
</code></pre>
<p>Hence, the number of <em>admissible</em> integer solutions is $21$. Note that the coefficients in the series are the <a href="https://en.wikipedia.org/wiki/Triangular_number" rel="nofollow noreferrer">triangular numbers</a> (<a href="http://oeis.org/A000217" rel="nofollow noreferrer">A000217</a>)</p>
<p>$$\binom{2}{2}, \binom{3}{2}, \binom{4}{2}, \binom{5}{2}, \dots, \binom{k+2}{2}, \dots$$</p>
<hr>
<p>[JDL] Jesús A. De Loera, <a href="https://www.math.ucdavis.edu/~deloera/RECENT_WORK/semesterberichte.pdf" rel="nofollow noreferrer">The Many Aspects of Counting Lattice Points in Polytopes</a>.</p>
|
897,756 | <p>How can I solve the following trigonometric inequation?</p>
<p>$$\sin\left(x\right)\ne \sin\left(y\right)\>,\>x,y\in \mathbb{R}$$</p>
<p>Why I'm asking this question... I was doing my calculus homework, trying to plot the domain of the function $f\left(x,y\right)=\frac{x-y}{\sin\left(x\right)-\sin\left(y\right)}$, and figured out I'd have to solve the inequation $\sin\left(x\right)\ne\sin\left(y\right)$... I was able to come to the answer $y\ne x +2\cdot k\cdot \pi \>,\>k \in \mathbb{N}$. However, the answer in the textbook also includes $y\ne -x +2\cdot k\cdot \pi + \pi \>,\>k \in \mathbb{N}$, so I thought that I was probably doing something wrong while solving that inequation.</p>
| Varun Iyer | 118,690 | <p>This is the limit definition of the derivative for $\sin x$</p>
<p>So,</p>
<p>$$(\sin x)' = \cos x$$</p>
<p><strong>EDIT</strong>, given any $f(x)$, to find its derivative, through limits we can express it as:</p>
<p>$$\large \lim_{h \to 0} \frac{f(x+h) - f(x)}{h} = f'(x)$$</p>
<p>Here $f(x) = \sin x$</p>
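<p>The difference quotient can be watched converging to $\cos x$ numerically (a Python sketch):</p>

```python
import math

x = 0.7
for h in (1e-1, 1e-3, 1e-6):
    quotient = (math.sin(x + h) - math.sin(x)) / h
    print(h, quotient - math.cos(x))  # the gap shrinks as h -> 0
```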
|
3,608,625 | <blockquote>
<p>Assume that a function <span class="math-container">$f$</span> satisfies the condition <span class="math-container">$f'(1) = 2.$</span> Figure out the limit <span class="math-container">$$\lim_{h\to0}\frac{f(1+h)-f(1-3h)}{5h}.$$</span></p>
</blockquote>
<p>This seemed to be a very simple problem just dealing with the definition of a derivative, but the proposed solution was a bit confusing. It went as follows:</p>
<p>Adding <span class="math-container">$-f(1)$</span> and <span class="math-container">$f(1)$</span> to the numerator we get</p>
<p><span class="math-container">$$\frac{f(1+h)-f(1-3h)}{5h} = \frac{f(1+h)- f(1) +f(1)-f(1-3h)}{5h}. \tag{1}\label{eq1A}$$</span> </p>
<p>This can be simplified to </p>
<p><span class="math-container">$$\frac{f(1+h)-f(1)}{5h} + \frac{f(1)-f(1-3h)}{5h}. \tag{2}\label{eq2A}$$</span></p>
<p>And now from here we get</p>
<p><span class="math-container">$$\frac15 \cdot \frac{f(1+h)-f(1)}{h} + \frac35 \cdot \frac{f(1)-f(1-3h)}{3h}. \tag{3}\label{eq3A}$$</span></p>
<p>Furthermore</p>
<p><span class="math-container">$$\frac15 \cdot \frac{f(1+h)-f(1)}{h} + \frac35 \cdot \frac{f(1-3h)-f(1)}{-3h}. \tag{4}\label{eq4A}$$</span></p>
<p>Now taking the limit <span class="math-container">$h \to 0$</span>, we have that</p>
<p><span class="math-container">$$\lim_{h\to0} \frac15 \cdot \frac{f(1+h)-f(1)}{h} + \lim_{h\to0} \frac35 \cdot \frac{f(1-3h)-f(1)}{-3h} = \frac15\cdot f'(1) + \frac35\cdot f'(1) = \frac85$$</span></p>
<p>Could someone educate me on parts <span class="math-container">$(3)$</span> and <span class="math-container">$(4)$</span>? Why do we choose <span class="math-container">$\frac35$</span> in <span class="math-container">$(3)$</span> instead of <span class="math-container">$\frac15$</span>? If we had taken out <span class="math-container">$\frac15$</span>, the result would have been <span class="math-container">$\frac15\cdot f'(1) + \frac15\cdot f'(1) = \frac45$</span>, right? Also, what is happening in <span class="math-container">$(4)$</span>? How do we suddenly flip the numerator?</p>
| Robert Israel | 8,508 | <p>The series (starting at <span class="math-container">$k=0$</span>) is <span class="math-container">$a^{-1} \Phi(z, 1, 1/a)$</span> where <span class="math-container">$\Phi$</span> is the Lerch Phi function.</p>
|
3,608,625 | <blockquote>
<p>Assume that a function <span class="math-container">$f$</span> satisfies the condition <span class="math-container">$f'(1) = 2.$</span> Figure out the limit <span class="math-container">$$\lim_{h\to0}\frac{f(1+h)-f(1-3h)}{5h}.$$</span></p>
</blockquote>
<p>This seemed to be a very simple problem just dealing with the definition of a derivative, but the proposed solution was a bit confusing. It went as follows:</p>
<p>Adding <span class="math-container">$-f(1)$</span> and <span class="math-container">$f(1)$</span> to the numerator we get</p>
<p><span class="math-container">$$\frac{f(1+h)-f(1-3h)}{5h} = \frac{f(1+h)- f(1) +f(1)-f(1-3h)}{5h}. \tag{1}\label{eq1A}$$</span> </p>
<p>This can be simplified to </p>
<p><span class="math-container">$$\frac{f(1+h)-f(1)}{5h} + \frac{f(1)-f(1-3h)}{5h}. \tag{2}\label{eq2A}$$</span></p>
<p>And now from here we get</p>
<p><span class="math-container">$$\frac15 \cdot \frac{f(1+h)-f(1)}{h} + \frac35 \cdot \frac{f(1)-f(1-3h)}{3h}. \tag{3}\label{eq3A}$$</span></p>
<p>Furthermore</p>
<p><span class="math-container">$$\frac15 \cdot \frac{f(1+h)-f(1)}{h} + \frac35 \cdot \frac{f(1-3h)-f(1)}{-3h}. \tag{4}\label{eq4A}$$</span></p>
<p>Now taking the limit <span class="math-container">$h \to 0$</span>, we have that</p>
<p><span class="math-container">$$\lim_{h\to0} \frac15 \cdot \frac{f(1+h)-f(1)}{h} + \lim_{h\to0} \frac35 \cdot \frac{f(1-3h)-f(1)}{-3h} = \frac15\cdot f'(1) + \frac35\cdot f'(1) = \frac85$$</span></p>
<p>Could someone educate me on parts <span class="math-container">$(3)$</span> and <span class="math-container">$(4)$</span>? Why do we choose <span class="math-container">$\frac35$</span> in <span class="math-container">$(3)$</span> instead of <span class="math-container">$\frac15$</span>? If we had taken out <span class="math-container">$\frac15$</span>, the result would have been <span class="math-container">$\frac15\cdot f'(1) + \frac15\cdot f'(1) = \frac45$</span>, right? Also, what is happening in <span class="math-container">$(4)$</span>? How do we suddenly flip the numerator?</p>
| marty cohen | 13,079 | <p>If a is an integer,
this will be a multisection
of <span class="math-container">$\ln(x)$</span>.</p>
|
38,731 | <p>The <a href="http://en.wikipedia.org/wiki/Ramanujan_summation">Ramanujan Summation</a> of some infinite sums is consistent with a couple of sets of values of the Riemann zeta function. We have, for instance, $$\zeta(-2k)=\sum_{n=1}^{\infty} n^{2k} = 0 (\mathfrak{R}) $$ (for positive integer $k$) and $$\zeta(-(2k-1))=-\frac{B_{2k}}{2k} (\mathfrak{R})$$ (again, for positive integer $k$). Here, $B_k$ is the $k$'th <a href="http://en.wikipedia.org/wiki/Bernoulli_number">Bernoulli number</a>. However, the correspondence does not hold when, for example, $$\sum_{n=1}^{\infty} \frac{1}{n}=\gamma (\mathfrak{R})$$ (here $\gamma$ denotes the Euler-Mascheroni constant), since $\zeta$ has a pole at $s=1$. </p>
<p>Question: Are the first two examples I stated the only instances in which the Ramanujan summation of some infinite series coincides with the values of the Riemann zeta function?</p>
| Sumit Kumar Jha | 37,260 | <p>I would urge you to analyze the <a href="http://en.wikipedia.org/wiki/Harmonic_series_%28mathematics%29" rel="nofollow">harmonic series</a> using <a href="http://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula" rel="nofollow">Euler-Maclaurin Summation</a> </p>
<p>You will be able to prove</p>
<p>\begin{equation}
\sum_{k\leq x}\frac{1}{k}=\log x+\gamma+O\left(\frac{1}{x}\right)
\end{equation}</p>
<p>where $\gamma$ is the <a href="http://en.wikipedia.org/wiki/Euler%E2%80%93Mascheroni_constant" rel="nofollow">Euler-Mascheroni constant</a> and $O(f)$ is <a href="http://en.wikipedia.org/wiki/Big_O_notation" rel="nofollow">big oh notation</a>.</p>
<p>You just need to analyze the remainder term in Euler-Maclaurin summation using Fourier series of <a href="http://en.wikipedia.org/wiki/Bernoulli_polynomials#Periodic_Bernoulli_polynomials" rel="nofollow">periodic Bernoulli polynomials</a>. That is, for $m\geq 2$</p>
<p>\begin{equation}
B_m(x-[x])=-\frac{m!}{(2\pi i)^m}\sum_{n=-\infty,n\neq 0}^{n=\infty}\frac{e^{2\pi i nx}}{n^m}
\end{equation}</p>
<p>This would give you</p>
<p>\begin{equation}
|B_{m}(x-[x])|\leq 2\frac{m!}{(2\pi)^m}\sum_{n=1}^{\infty}\frac{1}{n^m}\leq 2\frac{m!}{(2\pi)^m}\sum_{n=1}^{\infty}\frac{1}{n^2}=\pi^2\frac{m!}{3(2\pi)^m}
\end{equation}</p>
<p>which is just perfect for estimating remainder in Euler-Maclaurin formula. </p>
<p>You can also try to prove <a href="http://en.wikipedia.org/wiki/Stirling%27s_approximation" rel="nofollow">Stirling's approximation</a> using this method that is</p>
<p>\begin{equation}
n! = \sqrt{2 \pi n}~{\left( \frac{n}{e} \right)}^n \left( 1 + O \left( \frac{1}{n} \right) \right).
\end{equation}</p>
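<p>The expansion of the harmonic numbers is easy to check numerically; including the next correction term $\frac{1}{2x}$ makes the agreement with $\gamma$ very tight (a Python sketch):</p>

```python
import math

n = 10 ** 6
H = math.fsum(1.0 / k for k in range(1, n + 1))   # harmonic number H_n
gamma = 0.5772156649015329                        # Euler-Mascheroni constant

# H_n = log n + gamma + 1/(2n) - 1/(12 n^2) + ...
estimate = H - math.log(n) - 1 / (2 * n)
print(abs(estimate - gamma))  # ~1e-13, dominated by the 1/(12 n^2) term
```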
|
1,251,914 | <p>I do not understand how to set up the following problem:</p>
<p>"Forces of 20 lb and 32 lb make an angle of 52 degrees with each other. find the magnitude of the resultant force."</p>
<p>An actual picture would really help.</p>
| Pål GD | 55,346 | <p>David Foster Wallace in <em>everything and more, a compact history of ∞</em>:</p>
<blockquote>
<p>There is something I "know," which is that spatial dimensions beyond the Big 3 exist. I can even construct a tesseract or a hypercube out of cardboard. A weird sort of cube-within-a-cube, a tesseract is a 3D projection of a 4D object in the same way that "<img src="https://i.stack.imgur.com/6P7tN.png" alt="drawing of cube">" is a 2D projection of a 3D object. The trick is imagining the tesseract's relevant lines and planes at 90° to each other (it's the same with "<img src="https://i.stack.imgur.com/6P7tN.png" alt="drawing of cube">" and a real cube), because the 4th spatial dimension is on that somehow exists at perfect right angles to the length, width, and depth of our regular visual field. I "know" all this, just as you probably do . . . but now try to really picture it. Concretely. You can feel, almost immediately, a strain at the very root of yourself, the first popped threads of a mind starting to give at the seams. </p>
</blockquote>
|
1,251,914 | <p>I do not understand how to set up the following problem:</p>
<p>"Forces of 20 lb and 32 lb make an angle of 52 degrees with each other. find the magnitude of the resultant force."</p>
<p>An actual picture would really help.</p>
| user21820 | 21,820 | <p><strong>Dimension</strong> usually is just the number of 'components' of some piece of information. 3 dimensions are just nice for describing a position in (Euclidean) space, but you definitely need 4 dimensions if you want to include the time also. Now you are in the room. A while later you are not. Your position has changed over time, so if we want to describe your path, we need to describe each point on the path, and each point consists of both 3 space coordinates and 1 time coordinate, so this path is 4-dimensional.</p>
<p>In mathematics we have a general kind of space called a vector space. The 3d Euclidean space described above is $\mathbb{R}^3$, which is simply the set of all vectors with $3$ real coordinates. Spacetime could be represented as $\mathbb{R}^4$. Time is special in the sense that we can only move forward in time whereas we can move in any direction in space, but this has nothing to do with dimension.</p>
<p>In a vector space, we would like to represent each vector as a combination of some basic vectors. Of course we want to have as few basic vectors as possible. For example $\mathbb{R}^3$ has a basis with 3 vectors, $\{(1,0,0),(0,1,0),(0,0,1)\}$, which you could imagine stands for "1 unit in front of you", "1 unit above you" and "1 unit to your right", if you happen to stand at the centre $(0,0,0)$. Any position in space can indeed by expressed as a linear combination "go $x$ units forward, then $y$ units up, then $z$ units right". And you cannot make do with less. So we say that $\mathbb{R}^3$ has dimension $3$. In general $\mathbb{R}^n$ has dimension $n$.</p>
<p>A general <strong>vector space</strong> $V$ is defined over a field $F$ (a field permits $+,-,\times,\div$ except division by zero). A <strong>linear combination</strong> is defined to mean a weighted sum of finitely many vectors, for example $2x+(-3)y+5z$ is a linear combination of $x,y,z$. An $F$-linear combination is defined as a linear combination with weights in $F$. A <strong>spanning set $S$ for $V$ over $F$</strong> is defined to be a set of vectors such that every vector in $V$ can be written as a $F$-linear combination of finitely many vectors from $S$, that is, for any vector $v \in V$, we have $v = a_1 x_1 + a_2 x_2 + ... + a_n x_n$ for some $a_1,a_2,...,a_n \in F$ and $x_1,x_2,...,x_n \in S$. Then we can define a <strong>basis for $V$ over $F$</strong> to mean a spanning set with minimum size and the <strong>dimension of $V$ over $F$</strong> to be that size.</p>
<p>For another example, a displacement in 3d space requires 6 real values to describe it. The displacement involves both position and orientation. 3 real values are needed for the translation (change in position), and 3 real values are needed for the rotation (change in orientation). The most frequently used representation is often called the 6 degrees of freedom in motion, which are up/down, left/right, forward/backward, pitch, yaw, roll. Note that you can represent any displacement using these 6 real values, and conversely any combination of these 6 values gives a unique displacement. But given two displacements, combining them cannot be computed by just adding the corresponding values. It adds for translation but not for rotation. In fact, the order of rotations in 3d space makes a difference! So this vector space structure does not fully reflect the structure of motion. What would be needed for this is transformations on $\mathbb{R}^3$, which can be represented by matrices, but that is another topic.</p>
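<p>The remark that the order of 3d rotations matters can be checked directly with two 90° rotation matrices; a minimal sketch in plain Python (no libraries, names are illustrative):</p>

```python
import math

def rot_x(t):
    """Rotation matrix by angle t about the x-axis."""
    c, s = math.cos(t), math.sin(t)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_z(t):
    """Rotation matrix by angle t about the z-axis."""
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(A, B):
    """3x3 matrix product, i.e. composition of rotations."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

a = math.pi / 2
xz = matmul(rot_x(a), rot_z(a))  # rotate about z first, then about x
zx = matmul(rot_z(a), rot_x(a))  # rotate about x first, then about z
differ = any(abs(xz[i][j] - zx[i][j]) > 1e-9
             for i in range(3) for j in range(3))  # True: order matters
```

<p>The translation parts, by contrast, simply add, which is exactly why one vector-space structure cannot capture full rigid motions.</p>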
|
21,141 | <p>Is there a way to extract the arguments of a function? Consider the following example:</p>
<p>I have this sum</p>
<pre><code>g[1] + g[2] + g[3] + g[1]*g[3] + 3*g[1]*g[2] + 6*g[1]*g[2]*g[3]
</code></pre>
<p>Now, what I want to do is extract the function arguments and apply them to another function <code>func</code> which takes the arguments as a list.</p>
<pre><code>func[{1}] + func[{2}] + func[{3}] + func[{1, 3}] + 3*func[{1, 2}] + 6*func[{1, 2, 3}]
</code></pre>
<p>I know there is <code>Extract[g[1]*g[3], Position[g[1]*g[3], _Integer]]</code> but that does not work if I have a multiplicative constant.</p>
<p>Is there a way to do this?</p>
| Artes | 184 | <p>Use <code>ReplaceAll</code> (<code>/.</code>)</p>
<pre><code>g[1] + g[2] + g[3] + g[1]*g[3] + 3*g[1]*g[2] + 6*g[1]*g[2]*g[3] /. g -> func
</code></pre>
<blockquote>
<pre><code>func[1] + func[2] + 3 func[1] func[2] + func[3] + func[1] func[3]
+ 6 func[1] func[2] func[3]
</code></pre>
</blockquote>
<p><strong>Edit</strong></p>
<pre><code>g[1] + g[2] + g[3] + g[1]*g[3] + 3*g[1]*g[2] + 6*g[1]*g[2]*g[3] /. {
g[a_]*g[b_]*g[c_] -> func[{a, b, c}], g[a_]*g[b_] -> func[{a, b}], g[a_] -> func[{a}]}
</code></pre>
<blockquote>
<pre><code>func[{1}] + func[{2}] + func[{3}] + 3 func[{1, 2}] + func[{1, 3}] + 6 func[{1, 2, 3}]
</code></pre>
</blockquote>
|
21,141 | <p>Is there a way to extract the arguments of a function? Consider the following example:</p>
<p>I have this sum</p>
<pre><code>g[1] + g[2] + g[3] + g[1]*g[3] + 3*g[1]*g[2] + 6*g[1]*g[2]*g[3]
</code></pre>
<p>Now, what I want to do is extract the function arguments and apply them to another function <code>func</code> which takes the arguments as a list.</p>
<pre><code>func[{1}] + func[{2}] + func[{3}] + func[{1, 3}] + 3*func[{1, 2}] + 6*func[{1, 2, 3}]
</code></pre>
<p>I know there is <code>Extract[g[1]*g[3], Position[g[1]*g[3], _Integer]]</code> but that does not work if I have a multiplicative constant.</p>
<p>Is there a way to do this?</p>
| gpap | 1,079 | <p>Would this do, or does the ordering matter?</p>
<pre><code> Clear@func
func /: func[a_] func[b_] := func[Flatten@{a, b}]
g[1] + g[2] + g[3] + g[1]*g[3] + 3*g[1]*g[2] + 6*g[1]*g[2]*g[3]/. g[a_] -> func[{a}]
</code></pre>
|
3,173,266 | <p>In the case that <span class="math-container">$M$</span> is a <em>closed</em> orientable <span class="math-container">$3$</span>-manifold, using Wu's formula we can show <span class="math-container">$w_1(M) =0 \implies w_2(M) =0$</span>, and so <span class="math-container">$w_3 = w_1w_2 + Sq^1 w_2 = 0$</span> (or you can use the fact that <span class="math-container">$\chi(M)=0$</span> for closed orientable manifolds with odd dimension). It can then be shown that in fact <span class="math-container">$M$</span> is parallelizable and orientedly null-bordant.</p>
<p>If <span class="math-container">$M$</span> is compact and orientable, then its boundary is an orientable surface and therefore bounds so we can complete <span class="math-container">$M$</span> to a closed manifold <span class="math-container">$\bar{M}$</span> where the same argument applies, and we can again compute <span class="math-container">$w_2(M) = 0$</span>.</p>
<p>Therefore if we want an example of an orientable <span class="math-container">$3$</span>-manifold with <span class="math-container">$w_2(M)\neq 0$</span> it needs to be non-compact. Does anyone know an example?</p>
| Tyrone | 258,571 | <p>According to R. Kirby in <em>The Topology of 4-Manifolds</em>, section VIII, Theorem 1 on page 46</p>
<blockquote>
<p>Every orientable 3-manifold <span class="math-container">$M^3$</span> is spin and hence parallelizable.</p>
</blockquote>
<p>First he proves the case when <span class="math-container">$M$</span> is compact. Then to complete the argument, in the case that <span class="math-container">$M$</span> is non-compact, he writes</p>
<p><em>If <span class="math-container">$T_M$</span> is non-trivial, then it is non-trivial on some compact piece of <span class="math-container">$M^3$</span>, contradicting the above.</em></p>
<p>Where "the above argument" is what I include as the "below argument":</p>
<p>Assume <span class="math-container">$M$</span> is compact and closed, possibly by moving to its double. Assume <span class="math-container">$w_2(M)\neq 0\in H^2(M;\mathbb{Z}_2)$</span> and let <span class="math-container">$C$</span> be a circle in <span class="math-container">$M$</span> which is Poincare dual to this class. Then <span class="math-container">$M\setminus C$</span> has a spin structure <span class="math-container">$\sigma$</span> which does not extend to <span class="math-container">$M$</span> and there is a dual surface <span class="math-container">$F^2\subseteq M$</span> which intersects <span class="math-container">$C$</span> transversely in one point <span class="math-container">$p$</span> (<span class="math-container">$F$</span> may be non-orientable). Then the total space of the normal disc bundle <span class="math-container">$\nu(F)$</span> to <span class="math-container">$F$</span> is isomorphic to the total space of the normal disc bundle to an immersion of <span class="math-container">$F$</span> in <span class="math-container">$\mathbb{R}^3$</span> and so has a spin structure. However spin structures on <span class="math-container">$\nu(F)$</span> are classified by <span class="math-container">$H^1(F;\mathbb{Z}_2)\cong H^1(F\setminus p;\mathbb{Z}_2)$</span>, and this latter group classifies the spin structures on the normal disc bundle <span class="math-container">$\nu(F\setminus p)$</span>. Thus <span class="math-container">$\sigma$</span> gives a spin structure on <span class="math-container">$\nu(F\setminus p)$</span> which must agree by restriction with that on <span class="math-container">$\nu(F)$</span>. It follows from this that <span class="math-container">$\sigma$</span> must extend across <span class="math-container">$C$</span>, which contradicts the assumption that <span class="math-container">$w_2(M)\neq 0$</span>. Hence <span class="math-container">$M$</span> must be spin. 
And since <span class="math-container">$\pi_2SO_3=0$</span>, it must be parallelizable.</p>
|
3,173,266 | <p>In the case that <span class="math-container">$M$</span> is a <em>closed</em> orientable <span class="math-container">$3$</span>-manifold, using Wu's formula we can show <span class="math-container">$w_1(M) =0 \implies w_2(M) =0$</span>, and so <span class="math-container">$w_3 = w_1w_2 + Sq^1 w_2 = 0$</span> (or you can use the fact that <span class="math-container">$\chi(M)=0$</span> for closed orientable manifolds with odd dimension). It can then be shown that in fact <span class="math-container">$M$</span> is parallelizable and orientedly null-bordant.</p>
<p>If <span class="math-container">$M$</span> is compact and orientable, then its boundary is an orientable surface and therefore bounds so we can complete <span class="math-container">$M$</span> to a closed manifold <span class="math-container">$\bar{M}$</span> where the same argument applies, and we can again compute <span class="math-container">$w_2(M) = 0$</span>.</p>
<p>Therefore if we want an example of an orientable <span class="math-container">$3$</span>-manifold with <span class="math-container">$w_2(M)\neq 0$</span> it needs to be non-compact. Does anyone know an example?</p>
| Mizar | 24,753 | <p>All orientable three-manifolds <span class="math-container">$M$</span> are parallelizable. If you just want to deduce the noncompact case from the closed one, this requires little machinery.</p>
<p>Basically, you find first an exhaustion of <span class="math-container">$M$</span> with connected compact manifolds with boundary <span class="math-container">$M_k$</span>. Then you inductively construct linearly independent vector fields <span class="math-container">$X,Y,Z$</span> on each <span class="math-container">$M_k$</span>. Extending them from <span class="math-container">$M_k$</span> to <span class="math-container">$M_{k+1}$</span> is not trivial, and can be done e.g. taking an appropriate harmonic extension. See my answer to a very similar question <a href="https://mathoverflow.net/a/346983/36952">here</a>.</p>
|
3,181,502 | <p>We have <span class="math-container">$\tan(x)=\dfrac{\sin(x)}{\cos(x)}$</span>. I was wondering why <span class="math-container">$\tan(x+{\pi/2})=\tan(x)$</span>?</p>
<p>I wanted to Show </p>
<p><span class="math-container">$$\frac{\sin(x+\pi/2)}{\cos(x+\pi/2)}=\frac{\sin(x)}{\cos(x)}\iff\frac{\sin(x+\pi/2)\cos(x)}{\cos(x+\pi/2)\sin(x)}=1\iff\frac{\cos^2(x)}{-\sin^2(x)}=1$$</span></p>
<p>But this would imply that <span class="math-container">$\cos^2x+\sin^2x=0$</span>. But this is false</p>
<p>On the other hand we have that <span class="math-container">$$\tan(x+{\pi/2})=\tan(x)$$</span></p>
<p>don't we?</p>
| José Carlos Santos | 446,262 | <p>No, since <span class="math-container">$\tan\left(\frac\pi4\right)=1$</span>, whereas <span class="math-container">$\tan\left(\frac\pi2+\frac\pi4\right)=-1$</span>.</p>
|
3,181,502 | <p>We have <span class="math-container">$\tan(x)=\dfrac{\sin(x)}{\cos(x)}$</span>. I was wondering why <span class="math-container">$\tan(x+{\pi/2})=\tan(x)$</span>?</p>
<p>I wanted to Show </p>
<p><span class="math-container">$$\frac{\sin(x+\pi/2)}{\cos(x+\pi/2)}=\frac{\sin(x)}{\cos(x)}\iff\frac{\sin(x+\pi/2)\cos(x)}{\cos(x+\pi/2)\sin(x)}=1\iff\frac{\cos^2(x)}{-\sin^2(x)}=1$$</span></p>
<p>But this would imply that <span class="math-container">$\cos^2x+\sin^2x=0$</span>. But this is false</p>
<p>On the other hand we have that <span class="math-container">$$\tan(x+{\pi/2})=\tan(x)$$</span></p>
<p>don't we?</p>
| Bernard | 202,857 | <p>No, we don't. The real relation is
<span class="math-container">$$\tan\bigl(x+\tfrac\pi 2\bigr)=-\cot x=-\frac 1{\tan x}$$</span>
since <span class="math-container">$\;\sin\bigl(x+\frac\pi 2)=\cos x$</span>, whereas <span class="math-container">$\;\cos\bigl(x+\frac\pi 2\bigr)=\color{red}-\sin x$</span>.</p>
<p>What is true is that <span class="math-container">$\tan x$</span> has period <span class="math-container">$\pi$</span>, simply because
<span class="math-container">$$\sin(x+\pi)=-\sin x,\quad\cos(x+\pi)=-\cos x.$$</span></p>
|
2,847,277 | <p>Are there primes $p=47\cdot 2^n+1$, where $n\in\mathbb Z_+$? Tested for all primes $p<100,000,000$ without equality.</p>
| 0x.dummyVar | 575,408 | <p>For a cone representing the dispersion of a substrate through a medium with the following properties:</p>
<p>Apex displacement vector   $\vec{s}=[\matrix{s_x \ s_y \ s_z}]$,
<br>
Axis direction vector     $\vec{d}=[\matrix{d_x \ d_y \ d_z}]$ (non-zero),
<br>
Internal angle        $0 < \phi < 2\pi$,
<br>
And speed          $0 < v$</p>
<p>Apply the following algorithm to obtain the vector equation of that cone:</p>
<blockquote>
<p>if ($d_z$ != $0$) { <br> 
$\vec{j}= \left[\matrix{1 \ 1 \ - \frac{d_x + d_y} {d_z}}\right]$ <br>
} else if ($d_y$ != $0$) { <br> 
$\vec{j}= \left[\matrix{1 \ - \frac{d_x + d_z} {d_y} \ 1}\right]$ <br>
} else if ($d_x$ != $0$) { <br> 
$\vec{j}= \left[\matrix{- \frac{d_y + d_z} {d_x} \ 1 \ 1}\right]$ <br> }</p>
<p>$\vec{k} = \vec{d} \times \vec{j}$ (vector cross-product)</p>
<p>$\hat{d} = \frac{1}{\lVert\vec{d}\rVert}\vec{d}$ <br>
$\hat{j} = \frac{1}{\lVert\vec{j}\rVert}\vec{j}$ <br>
$\hat{k} = \frac{1}{\lVert\vec{k}\rVert}\vec{k}$</p>
<p>$g=vt \tan{\left(\frac{\phi}{2}\right)}$       (radius growth modifier) <br>
$e=\hat{j} \sin{(\theta)} + \hat{k} \cos{(\theta)}$   (ellipse construct) <br>
$T=vt\hat{d}$           (direction translation)</p>
<p>cone $C$ <br> $C=ge + T + \vec{s}$, or <br>
$C = \vec{s} + vt \left(\tan{\left(\frac{\phi}{2} \right)} \left( \vec{j}\frac{\sin{(\theta)}}{\lVert\vec{j}\rVert} + \vec{k}\frac{\cos{(\theta)}}{\lVert\vec{k}\rVert} \right) + \vec{d}\frac{1}{\lVert\vec{d}\rVert} \right)$ where $\vec{j} \cdot \vec {d} = 0$ and $\vec{k} = \vec{d} \times \vec{j}$ <br>   $\theta \in [0, 2\pi)$ for all $0 < t$.</p>
</blockquote>
<p>I haven't figured out how to turn this into one equation though, what with those if/else statements. I was thinking maybe a bunch of Kronecker deltas somehow? But then the $\vec{j}$ part of the equation just got so long that it wasn't worth it. And then they would all have to be substituted as the value for $\vec{k}$ and it was just too confusing.</p>
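<p>As a sanity check of the construction (with the branch denominators chosen so that $\vec{j} \cdot \vec{d} = 0$ in every case), the sketch below builds $\hat{d}, \hat{j}, \hat{k}$ and confirms that a generated point sits at angle $\phi/2$ from the axis; all names and the sample numbers are illustrative:</p>

```python
import math

def cone_point(s, d, phi, v, t, theta):
    """Point on the cone surface at time t and circle parameter theta."""
    dx, dy, dz = d
    # Choose j orthogonal to d, branching on a nonzero component of d.
    if dz != 0:
        j = (1.0, 1.0, -(dx + dy) / dz)
    elif dy != 0:
        j = (1.0, -(dx + dz) / dy, 1.0)
    else:
        j = (-(dy + dz) / dx, 1.0, 1.0)
    k = (dy * j[2] - dz * j[1],  # k = d x j (vector cross product)
         dz * j[0] - dx * j[2],
         dx * j[1] - dy * j[0])

    def unit(u):
        n = math.sqrt(sum(c * c for c in u))
        return tuple(c / n for c in u)

    dh, jh, kh = unit(d), unit(j), unit(k)
    g = math.tan(phi / 2)  # radius growth modifier per unit of v*t
    return tuple(s[i] + v * t * (g * (jh[i] * math.sin(theta)
                                      + kh[i] * math.cos(theta)) + dh[i])
                 for i in range(3))

s, d, phi, v = (1.0, -2.0, 0.5), (0.3, 0.8, -0.5), 0.9, 2.0
p = cone_point(s, d, phi, v, t=1.5, theta=2.1)
r = tuple(p[i] - s[i] for i in range(3))
cosang = (sum(r[i] * d[i] for i in range(3))
          / (math.sqrt(sum(c * c for c in r)) * math.sqrt(sum(c * c for c in d))))
angle = math.acos(cosang)  # angle between (C - s) and the axis: phi / 2
```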
|
201,122 | <p>A little bit of <em>motivation</em> (the question starts below the line): I am studying a proper, generically finite map of varieties $X \to Y$, with $X$ and $Y$ smooth. Since the map is proper, we can use the Stein factorization $X \to \hat{X} \to Y$. Since the composition is generically finite, $X \to \hat{X}$ is birational, and therefore a sequence of blowups. I am currently interested in the other map: $\hat{X} \to Y$. I would like to apply Casnati–Ekedahl's techniques from “Covers of algebraic varieties I” (Journal of alg. geom., 1996). For this, I need $\hat{X} \to Y$ to be Gorenstein. (Since $Y$ is Gorenstein (since it is smooth), this is equivalent with $\hat{X}$ being Gorenstein.) When is this true?</p>
<p>Specifically, in my case $X \to Y$ is the albanese morphism of a smooth projective surface: so $Y$ is an abelian surface, and I am in the situation that the albanese morphism is surjective.</p>
<hr>
<p>Let $f \colon X \to Y$ be a proper map between two varieties $X$ and $Y$ over a field $k$. Assume $X$ and $Y$ are smooth (and proper, if you want).</p>
<p>Let $\pi \colon X \to \hat{X}$ and $\hat{f} \colon \hat{X} \to Y$ be the Stein factorization ($f = \hat{f} \circ \pi$). Of course, in general $\hat{X}$ is not smooth. However:</p>
<blockquote>
<p><strong>Q1:</strong> Does $\hat{X}$ have some other nice properties?</p>
</blockquote>
<p>I am thinking in the direction of, e.g., Gorenstein or Cohen–Macaulay. If not, does it help if we assume a bit more on $f$? Or, alternatively:</p>
<blockquote>
<p><strong>Q2:</strong> Under what conditions is $\hat{X}$ Gorenstein?</p>
</blockquote>
| jmc | 21,815 | <p>There are three good answers to this question, and together they have more or less answered what I wanted to know. I find it hard to choose one of them as best, but nevertheless I think this question should have an accepted answer to move it from the <em>unanswered list</em>. Hence a CW-answer summarizing the (in my eyes) most important points made.</p>
<hr>
<ul>
<li>Laurent Moret-Bailly points out that $\hat{X}$ must be normal.</li>
<li>Sasha then says that besides that, it can get as bad as you want. Take a normal subvariety $\hat{X} \subset \mathbb{A}^{N}$. By the Noether normalization lemma we get a finite map $\hat{X} \to \mathbb{A}^{n} = Y$. A resolution of singularities $X \to \hat{X}$ has connected fibres. The composition $X \to Y$ is generically finite.</li>
<li>Karl Schwede remarks that the pair $(\hat{X}, -\mathrm{Ram})$ is log-Gorenstein (where $\mathrm{Ram}$ is the ramification divisor). He also states the slogan <em>“if $\hat{X}$ has really bad singularities at some points, then $\mathrm{Ram}$ also has really bad singularities at those points too. Another way to say this is if the ramification divisor has mild singularities, then $\hat{X}$ does too.”</em></li>
</ul>
|
2,225,834 | <blockquote>
<p>Let $f$ and $g$ be functions on $\mathbb{R}^{2}$ defined respectively by </p>
<p>$$f(x,y) = \frac{1}{3}x^3 - \frac{3}{2}y^2 + 2x$$ and</p>
<p>$$g(x,y)=x−y.$$</p>
<p>Consider the problems of maximizing and minimizing $f$ on the
constraint set $$C=\{(x,y)\in\mathbb{R}\,:\,g(x,y)=0\}.$$</p>
<p>Options: </p>
<ul>
<li><p>$f$ has a maximum at $(1,1)$ and minimum at $(2,2)$</p></li>
<li><p>$f$ has a maximum at $(1,1)$, but does not have a minimum</p></li>
<li><p>$f$ has a minimum at $(2,2)$, but does not have a maximum</p></li>
<li>$f$ has neither a maximum nor a minimum</li>
</ul>
</blockquote>
<p>I got a max at $(1,1)$ and min at $(2,2)$, by taking $g(x,y)=0\Rightarrow x=y$ and thus differentiating: $$\begin{align*}
\underset{x,y\in\mathcal{R}}{\max(\min)} \frac{1}{3}x^3-\frac{3}{2}y^2+2x \ & \ st.\ x-y=0\\
\Leftrightarrow \underset{x\in\mathcal{R}}{\max(\min)} \frac{1}{3}x^3-\frac{3}{2}x^2+2x
\end{align*}$$</p>
<p>but the correct answer is: </p>
<p>$f$ has neither a maximum nor a minimum.</p>
<p>How so?</p>
| mlc | 360,141 | <p>You found a <em>local</em> maximum and a <em>local</em> minimum. There is no <em>global</em> maximum because for $x=y \rightarrow \pm\infty$ the objective function $f(x,y) \rightarrow \pm \infty$. </p>
<p>You may also spot this by taking the limits of $\frac{1}{3}x^3-\frac{3}{2}x^2+2x$ as $x \rightarrow \pm \infty$. </p>
|
2,225,834 | <blockquote>
<p>Let $f$ and $g$ be functions on $\mathbb{R}^{2}$ defined respectively by </p>
<p>$$f(x,y) = \frac{1}{3}x^3 - \frac{3}{2}y^2 + 2x$$ and</p>
<p>$$g(x,y)=x−y.$$</p>
<p>Consider the problems of maximizing and minimizing $f$ on the
constraint set $$C=\{(x,y)\in\mathbb{R}\,:\,g(x,y)=0\}.$$</p>
<p>Options: </p>
<ul>
<li><p>$f$ has a maximum at $(1,1)$ and minimum at $(2,2)$</p></li>
<li><p>$f$ has a maximum at $(1,1)$, but does not have a minimum</p></li>
<li><p>$f$ has a minimum at $(2,2)$, but does not have a maximum</p></li>
<li>$f$ has neither a maximum nor a minimum</li>
</ul>
</blockquote>
<p>I got a max at $(1,1)$ and min at $(2,2)$, by taking $g(x,y)=0\Rightarrow x=y$ and thus differentiating: $$\begin{align*}
\underset{x,y\in\mathcal{R}}{\max(\min)} \frac{1}{3}x^3-\frac{3}{2}y^2+2x \ & \ st.\ x-y=0\\
\Leftrightarrow \underset{x\in\mathcal{R}}{\max(\min)} \frac{1}{3}x^3-\frac{3}{2}x^2+2x
\end{align*}$$</p>
<p>but the correct answer is: </p>
<p>$f$ has neither a maximum nor a minimum.</p>
<p>How so?</p>
| OnoL | 65,018 | <p>I guess you obtained the conclusion by letting
$$\left(\frac 13 x^3-\frac 32 x^2+2x\right)'=0,$$
which yields that
$$(x-2)(x-1)=0,$$
and so you concluded that the function achieves extrema at $(1,1)$ and $(2,2)$, respectively. This is, however, not true, because the first-order condition only characterizes <strong>local extrema</strong> rather than global ones, or in other words, it is the necessary condition for maxima or minima, but not sufficient. In your question, it is obvious that when $x\to +\infty$, the cubic term is dominating and hence the value diverges to $+\infty$, and when $x\to -\infty$, the cubic term is also dominating and hence the value diverges to $-\infty$.</p>
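<p>For $h(x)=\frac{1}{3}x^3-\frac{3}{2}x^2+2x$, both the stationary points and the unboundedness are easy to confirm numerically; a small sketch:</p>

```python
def h(x):
    return x**3 / 3 - 1.5 * x**2 + 2 * x

def dh(x):
    """h'(x) = x^2 - 3x + 2 = (x - 1)(x - 2)."""
    return x**2 - 3 * x + 2

# The first-order condition holds at x = 1 and x = 2 ...
stationary_ok = dh(1) == 0 and dh(2) == 0
# ... but x = 1 is only a *local* maximum ...
local_max = h(1) > h(0.9) and h(1) > h(1.1)
# ... because h is unbounded in both directions along the constraint x = y.
unbounded = h(100.0) > h(1) and h(-100.0) < h(2)
```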
|
2,544,864 | <p>I have been trying to prove the continuity of the function:
$f:\mathbb{R}\to \mathbb{R}, f(x) =x \sin(x) $ using the $\epsilon -\delta$ method. </p>
<p>The particular objective of posting this question is to understand <strong>the dependence of $\delta$ on $\epsilon$ and $x$</strong>. I know that $f(x) =x \sin(x) $ is not uniformly continuous, so $\delta$ depends on both. Here is my attempt:</p>
<p>We need to prove that $\forall \epsilon > 0 \: \exists\, \delta(\epsilon,x) >0$ such that $\lvert x - y \rvert < \delta \implies \lvert x \sin(x) - y \sin(y)\rvert < \epsilon$.</p>
<p>Let $x=2n\pi$ and $y=x-\frac{\delta}{2}$ so that $\lvert x - y \rvert < \delta$. </p>
<p>Then,
\begin{align}
\bigl\lvert x \sin(x) - y \sin(y)\bigr\rvert&=\biggl\lvert 2n\pi \sin(2n\pi) - (2n\pi-\frac{\delta}{2})\sin(2n\pi-\frac{\delta}{2})\biggr\rvert\\
&= \biggl\lvert (2n\pi-\frac{\delta}{2}) \: \sin(2n\pi-\frac{\delta}{2})\biggr\rvert
\end{align}
Now,
\begin{align}
\biggl\lvert (2n\pi-\frac{\delta}{2}) \sin(2n\pi-\frac{\delta}{2})\biggr\rvert \leq \biggl\lvert (2n\pi-\frac{\delta}{2}) \biggr\rvert \leq \epsilon
\end{align}
and hence, a $\delta $ chosen such as $4n\pi + 2\epsilon$ can be used. Since, this choice depends on $4n\pi$ which is $2x$ and $2\epsilon$, hence the function is continuous but not uniformly so.</p>
<p>Is my procedure correct? How can I prove it generally so $\forall x$?</p>
| Brian Tung | 224,454 | <p><strong>Basic approach.</strong> Use the fact that the slope of $\sin x$ is everywhere between $-1$ and $1$, so the slope of $x\sin x$ at any point $x$ is guaranteed to be between $-x-1$ and $x+1$. (Thanks to YvesDaoust for the catch.) Thus, if you want to get the function to within $\varepsilon$, you need to get your neighborhood to have radius no greater than $\delta = \frac{\varepsilon}{x+1}$.</p>
<p>Formalize the above and it should work.</p>
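<p>The idea can be spot-checked numerically. By the mean value theorem, $|x\sin x - y\sin y| \le \sup|f'| \cdot |x-y|$ with $|f'(u)| = |\sin u + u\cos u| \le |u| + 1$; the sketch below uses the slightly safer radius $\delta = \varepsilon/(|x|+2)$ capped at $1$, so that the derivative bound holds on the whole neighborhood (an assumption of this sketch):</p>

```python
import math

def f(u):
    return u * math.sin(u)

def works(x, eps, n=1000):
    """Sample the delta-neighborhood of x and check |f(x) - f(y)| < eps."""
    delta = min(1.0, eps / (abs(x) + 2))  # delta shrinks as |x| grows
    for i in range(n + 1):
        y = x - delta + 2 * delta * i / n
        if abs(f(x) - f(y)) >= eps:
            return False
    return True

all_ok = all(works(x, eps)
             for x in (0.0, 3.0, -7.5, 100.0)
             for eps in (0.5, 0.01))
```

<p>The explicit dependence of $\delta$ on $x$ is exactly why the continuity is not uniform: no single $\delta$ survives as $|x| \to \infty$.</p>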
|
305,166 | <p>If two undirected graphs are identical except that one has an additional loop at vertex $A$, do they actually have the same complement?</p>
| Community | -1 | <p>The complement of a graph is only defined for simple graphs.</p>
<p><strong>Source</strong>: M.N.S. Swamy and K. Thulasiraman: <em>Graphs, Networks and Algorithms</em> (1981): $\S 1.2$</p>
<p>If we extend the definition to include loopgraphs then the answer is no as well for the following reason:</p>
<p>Suppose $G$ has a loop at $v$ and $G'$ does not have a loop at $v$.</p>
<p>Then the complement of $G$ (denoted $\overline{G}$) has no loop at $v$ whereas $\overline{G'}$ does have a loop at $v$.</p>
|
4,475,082 | <p>Problem:</p>
<ul>
<li>Three-of-a-kind poker hand: Three cards have one rank and the remaining two cards have
two other ranks. e.g. {2♥, 2♠, 2♣, 5♣, K♦}</li>
</ul>
<p>Calculate the probability of drawing this kind of poker hand.</p>
<p>My confusion: When choosing the three ranks, the explanation used <span class="math-container">$13 \choose 1$</span> and <span class="math-container">$12 \choose 2$</span>. I used <span class="math-container">$13 \choose 3$</span> instead which ends up being wrong. I do not know why.</p>
| Rebecca J. Stones | 91,818 | <p>Actually, we can use <span class="math-container">$\binom{13}{3}$</span>: it counts the number of ways of choosing 3 distinct ranks. Just don't forget to also choose which of those three ranks (i.e., <span class="math-container">$\binom{3}{1}$</span>) is the special rank with 3 cards. It's another way of counting the same thing: <span class="math-container">$$\binom{13}{3} \binom{3}{1} = \frac{13!}{3!\ 10!} \times 3 = 13 \times \frac{12!}{2!\ 10!} = \binom{13}{1} \binom{12}{2}.$$</span></p>
<p>Afterwards, we also need to choose suits (hearts, diamonds, clubs, spades) for each rank.</p>
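<p>Both ways of counting the ranks, and the resulting probability, are easy to confirm; a small sketch:</p>

```python
from math import comb

# Choose 3 distinct ranks, then which of them is tripled ...
way_a = comb(13, 3) * comb(3, 1)
# ... or choose the tripled rank first, then the two singleton ranks.
way_b = comb(13, 1) * comb(12, 2)

# Full count: ranks, then suits (3 of 4 for the triple, 1 of 4 for each single).
hands = way_b * comb(4, 3) * comb(4, 1) ** 2
prob = hands / comb(52, 5)  # roughly 0.0211
```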
|
1,407,131 | <p>I need to prove the following integral is convergent and find an upper bound
$$\int_{0}^{\infty} \int_{0}^{\infty} \frac{1}{1+x^2+y^4} dx dy$$</p>
<p>I've tried integrating $\frac{1}{1+x^2+y^2} \lt \frac{1}{1+x^2+y^4}$ but it doesn't converge</p>
| Jack D'Aurizio | 44,121 | <p>Continuing from zhw.'s answer,
$$ \int_{0}^{+\infty}\frac{\sqrt{r}}{2(1+r^2)}\,dr = \int_{0}^{+\infty}\frac{u^2\,du}{1+u^4}=\frac{\pi}{2\sqrt{2}} $$
by the residue theorem, while:
$$ \int_{0}^{\pi/2}\frac{d\theta}{\sqrt{\sin\theta}}=\int_{0}^{1}\frac{du}{\sqrt{u(1-u^2)}}=\frac{1}{2\sqrt{2\pi}}\,\Gamma\left(\frac{1}{4}\right)^2$$
from the Euler beta function and the $\Gamma$ reflection formula. Hence we have:</p>
<blockquote>
<p>$$ \iint_{(0,+\infty)^2}\frac{dx\,dy}{1+x^2+y^4}=\color{red}{2\sqrt{\pi}\cdot\Gamma\left(\frac{5}{4}\right)^2}=2.9123736927\ldots$$</p>
</blockquote>
<p>A simple upper bound may be derived from:
$$ \int_{0}^{+\infty}\frac{u^2}{1+u^4}\,du \leq \int_{0}^{1}u^2\,du+\int_{1}^{+\infty}\frac{du}{u^2}=\frac{4}{3},$$
$$ \int_{0}^{1}\frac{2\,du}{\sqrt{1-u^4}}\leq\int_{0}^{1}\frac{2\,du}{\sqrt{1-u^2}}=\pi. $$</p>
<hr>
<p>Another simple proof comes from:
$$ \int_{0}^{+\infty}\frac{dx}{1+y^4+x^2}=\frac{\pi}{2}\cdot\frac{1}{\sqrt{1+y^4}}$$
and obviously:
$$ \int_{0}^{+\infty}\frac{dy}{\sqrt{1+y^4}}\leq \int_{0}^{1}dy+\int_{1}^{+\infty}\frac{dy}{y^2}$$
giving $\color{red}{I\leq \pi}$. That can be improved through the Cauchy-Schwarz inequality:
$$ \int_{0}^{+\infty}\frac{du}{\sqrt{1+u^4}}\leq \sqrt{\left(\int_{0}^{+\infty}\frac{1+u^2}{1+u^4}\,du\right)\cdot\left(\int_{0}^{+\infty}\frac{du}{1+u^2}\right)},$$
which leads to:</p>
<blockquote>
<p>$$ \color{red}{I} \leq \frac{\pi^2}{2\sqrt{2\sqrt{2}}}=2.93425\ldots\color{red}{< 3}.$$</p>
</blockquote>
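<p>The closed form can be cross-checked numerically. Using $\int_0^{+\infty}\frac{dx}{1+y^4+x^2}=\frac{\pi}{2\sqrt{1+y^4}}$ and the substitution $y\mapsto 1/y$ on $[1,+\infty)$, the double integral reduces to $\pi\int_0^1\frac{dy}{\sqrt{1+y^4}}$, which Simpson's rule handles comfortably; a sketch:</p>

```python
import math

def g(y):
    return 1.0 / math.sqrt(1.0 + y**4)

def simpson(f, a, b, n=1000):
    """Composite Simpson rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

numeric = math.pi * simpson(g, 0.0, 1.0)
closed_form = 2 * math.sqrt(math.pi) * math.gamma(1.25) ** 2  # 2*sqrt(pi)*Gamma(5/4)^2
# both agree with 2.9123736927..., comfortably below the bound pi
```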
|
1,148,720 | <blockquote>
<p>Toni and her friends are building triangular pyramids with golf balls.
Write a formula for the number of golf balls in a pyramid with n
layers, if a $1$-layer pyramid contains 1 ball, a 2-layer pyramid contains 4
balls, a 3-layer one contains 10 balls, and so on.</p>
</blockquote>
<p>What is the formula for this question, and what are the steps involved in deriving it? </p>
| Marc van Leeuwen | 18,880 | <p>Each layer is a triangular number $\binom n2=\frac{n(n-1)}2 = \sum_{i=0}^{n-1}i$, where the side of the triangle contains $n-1$ balls. Note that the second equality is a special case of the general identity
$$
\sum_{i=0}^{n-1}\binom ik =\binom n{k+1}
$$
which can be proved by an easy induction using Pascal's recurrence $\binom nk+\binom{n-1}k=\binom n{k+1}$. Applying it for $k=2$ gives
$$
\sum_{i=0}^{n-1}\binom i2 =\binom n3
$$
which is the number of balls in a tetrahedral pyramid with side $n-2$. So the number you are after is $$\binom{n+2}3=\frac{n(n+1)(n+2)}6.$$</p>
<p>Generalising, the number in dimension $d$ is $\binom{n+d-1}d$. Since that number is well known to count the $d$-multisets on an $n$-element set (that is, ways to select $d$ elements out of a total of $n$ with multiple selection of the same element allowed), one might ask if there is a way to uniquely label the golf balls with such multisets. Here is a way to do it. Arrange the multiset as $(c_1,\ldots,c_d)$ in weakly decreasing order, so that one has $n\geq c_1\geq c_2\geq\cdots\geq c_d>0$; then let $c_1$ select the size of the "layer" on the outer level, and let $(c_2,\ldots,c_d)$ recursively select a golf ball in this $d-1$-dimensional layer.</p>
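<p>The layer sums and the closed forms $\binom{n+2}{3}$ and $\binom{n+d-1}{d}$ can be verified directly; a small sketch:</p>

```python
from math import comb

def pyramid(n):
    """Balls in an n-layer triangular pyramid: sum of the triangular numbers."""
    return sum(k * (k + 1) // 2 for k in range(1, n + 1))

first_counts = [pyramid(n) for n in (1, 2, 3)]  # the 1, 4, 10 from the problem
formula_ok = all(pyramid(n) == comb(n + 2, 3) for n in range(1, 50))

def simplex(n, d):
    """d-dimensional analogue: stack (d-1)-dimensional layers of sides 1..n."""
    return n if d == 1 else sum(simplex(k, d - 1) for k in range(1, n + 1))

general_ok = all(simplex(n, d) == comb(n + d - 1, d)
                 for n in range(1, 15) for d in range(1, 5))
```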
|
131,206 | <p>According to the wiki of <a href="http://en.wikipedia.org/wiki/Kakutani_fixed-point_theorem">Kakutani's fixed-point theorem</a>, A set-valued mapping $\varphi$ from a topological space $X$ into a powerset $\wp(Y)$ called upper semi-continuous if for every open set $W \subseteq Y$, $\lbrace x| \varphi(x) \subseteq W \rbrace$ is an open set in $X$.</p>
<p>My question:</p>
<ol>
<li>What is the definition of continuity of a multi valued map $\varphi$?</li>
<li>What's the definition of open sets in $\wp(Y)$, in other words, what topology does $\wp(Y)$ have?</li>
</ol>
| Gerald Edgar | 454 | <p>The definition quoted is an "order" notion of upper semicontinuous, not a "topology" notion. For real-valued functions, the two coincide. But in other settings you can have one but not the other.</p>
|
131,206 | <p>According to the wiki of <a href="http://en.wikipedia.org/wiki/Kakutani_fixed-point_theorem">Kakutani's fixed-point theorem</a>, A set-valued mapping $\varphi$ from a topological space $X$ into a powerset $\wp(Y)$ called upper semi-continuous if for every open set $W \subseteq Y$, $\lbrace x| \varphi(x) \subseteq W \rbrace$ is an open set in $X$.</p>
<p>My question:</p>
<ol>
<li>What is the definition of continuity of a multi valued map $\varphi$?</li>
<li>What's the definition of open sets in $\wp(Y)$, in other words, what topology does $\wp(Y)$ have?</li>
</ol>
| Mikhail Katz | 28,128 | <p>One sensible way of generalizing continuity to set-valued functions (from $X$ to subsets of $Y$) is to require the graph of the function to be closed in the product $X\times Y$. This would be equivalent to the continuity of the function if $Y$ is compact. Thus, the Heaviside function is not continuous because one of the points 0 or 1 on the $y$-axis is not in the graph, but if one redefines it to take both values at 0, the graph becomes closed subset of the plane. See <a href="http://en.wikipedia.org/wiki/Closed_graph_theorem" rel="nofollow">http://en.wikipedia.org/wiki/Closed_graph_theorem</a> for a related (but different) notion.</p>
|
663,363 | <p>I do not know if this is an ill-posed question but ...
is $\delta(t)e^{-\gamma t}$ equal to $\delta(t)$?</p>
<p>Thanks,
biologist</p>
| Wei Zhou | 106,010 | <p>Let $a_1,\cdots, a_n, \cdots $ is a basis for $V$. Let $S$ be the vector subspace spanned by $a_2, \cdots, a_{2k}, \cdots$, then $V/S$ is spanned by $a_1+S, \cdots, a_{2k+1}+S. \cdots, $, which is infinite dimension.</p>
|
3,736,706 | <p>Let <span class="math-container">$M$</span> be an <span class="math-container">$A$</span>-module and let <span class="math-container">$\mathfrak{a}$</span> and <span class="math-container">$\mathfrak{b}$</span> be coprime ideals of A.</p>
<p>I must show that <span class="math-container">$M/ \mathfrak{a}M \oplus M/ \mathfrak{b}M \simeq M/ (\mathfrak{a \cap b})M$</span>.</p>
<p>My attempt is the following:</p>
<p>Let <span class="math-container">$x \in M/ \mathfrak{a}M \oplus M/ \mathfrak{b}M$</span>,then <span class="math-container">$x = [y]+[z]$</span>, where <span class="math-container">$[y] = y+\mathfrak{a}M $</span> and <span class="math-container">$[z]=z + \mathfrak{b}M $</span>, <span class="math-container">$y,z \in M$</span>.</p>
<p>So, <span class="math-container">$x = y+z+ \mathfrak{a}M +\mathfrak{b}M $</span>.</p>
<p><span class="math-container">$\mathfrak{a}M +\mathfrak{b}M =\{z | z=am_1+bm_2, a \in \mathfrak{a}, b \in \mathfrak{b} \} $</span>. But then I don't know how to continue.</p>
<p>Is this approach correct? Or is there another way to prove it?
Thanks</p>
| TomGrubb | 223,701 | <p>In fact it's a bit easier to go backwards. Let <span class="math-container">$[x]\in M/(\mathfrak{a}\cap\mathfrak{b})M$</span>. Since <span class="math-container">$\mathfrak{a}$</span> contains the ideal <span class="math-container">$\mathfrak{a}\cap\mathfrak{b}$</span>, we can restrict <span class="math-container">$[x]$</span> to a class <span class="math-container">$[x_1]\in M/\mathfrak{a}M$</span>, and similarly to a class <span class="math-container">$[x_2]\in M/\mathfrak{b}M$</span>. We have to show this map is an isomorphism.</p>
<p>Why is it injective? Well, suppose <span class="math-container">$[x_1] = 0$</span> and <span class="math-container">$[x_2] = 0$</span>. That means <span class="math-container">$x$</span> was originally in <span class="math-container">$\mathfrak{a}M$</span> as well as in <span class="math-container">$\mathfrak{b}M$</span>. Writing <span class="math-container">$1 = a + b$</span> with <span class="math-container">$a\in\mathfrak{a}$</span>, <span class="math-container">$b\in\mathfrak{b}$</span>, we get <span class="math-container">$x = ax + bx \in (\mathfrak{a}\cap\mathfrak{b})M$</span> (since <span class="math-container">$x\in\mathfrak{b}M$</span> gives <span class="math-container">$ax\in\mathfrak{a}\mathfrak{b}M$</span>, and similarly for <span class="math-container">$bx$</span>), so you are done.</p>
<p>Why is it surjective? Take <span class="math-container">$([x_1],[x_2])$</span> on the LHS, and lift them to <span class="math-container">$x_1,x_2\in M$</span>. Because <span class="math-container">$\mathfrak{a}$</span> and <span class="math-container">$\mathfrak{b}$</span> are coprime, there are <span class="math-container">$a\in \mathfrak{a}$</span> and <span class="math-container">$b\in\mathfrak{b}$</span> for which <span class="math-container">$a+b = 1$</span>. What happens to the element <span class="math-container">$x = x_2a+x_1b$</span>?</p>
<p>(EDIT: I should really say that you should define the map from <span class="math-container">$M$</span> and then use one of the isomorphism theorems, that makes it much clearer)</p>
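<p>As a quick sanity check, the toy case <span class="math-container">$M=\mathbb{Z}$</span>, <span class="math-container">$\mathfrak{a}=(2)$</span>, <span class="math-container">$\mathfrak{b}=(3)$</span> (so <span class="math-container">$(\mathfrak{a}\cap\mathfrak{b})M = 6\mathbb{Z}$</span>) can be verified by brute force; the lift <span class="math-container">$x = x_2a+x_1b$</span> below is exactly the one from the surjectivity argument, with <span class="math-container">$a=-2$</span>, <span class="math-container">$b=3$</span> witnessing coprimality:</p>

```python
# Concrete check that Z/6Z -> Z/2Z x Z/3Z, [x] -> ([x]_2, [x]_3), is a bijection.
image = {x % 6: (x % 2, x % 3) for x in range(6)}
assert len(set(image.values())) == 6      # injective on 6 classes, hence bijective

a, b = -2, 3                              # a in (2), b in (3), a + b = 1
for x1 in range(2):
    for x2 in range(3):
        x = x2 * a + x1 * b               # the lift from the surjectivity argument
        assert x % 2 == x1 and x % 3 == x2
print("Z/6 -> Z/2 x Z/3 is a bijection")
```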
|
4,478,486 | <p>I have just started to read Stein's <em>Singular Integrals and Differentiability Properties of Functions</em>.</p>
<p>The Hardy-Littlewood maximal function has just been introduced i.e. <span class="math-container">$$M(f)(x):= \sup_{r > 0} \frac{1}{m(B(x,r))}\int_{B(x,r)}|f(y)|dy$$</span></p>
<p>where <span class="math-container">$m(B(x,r))$</span> denotes the measure of the Ball</p>
<p>Stein then states "We shall now be interested in giving a concise expression for the relative size of a function". Let <span class="math-container">$g(x)$</span> be defined on <span class="math-container">$\mathbb{R}^{n}$</span> and for each <span class="math-container">$\alpha$</span> consider the following set <span class="math-container">$\{x:|g(x)| > \alpha\}$</span>. Then the function <span class="math-container">$\lambda(\alpha)$</span> defined to be the measure of this set is the distribution function of <span class="math-container">$|g|$</span>.</p>
<p>Questions:</p>
<p>(1): Stein states, "The decrease of <span class="math-container">$\lambda(\alpha)$</span> as <span class="math-container">$\alpha$</span> grows describes the relative largeness of the function". <strong>Why is this describing largeness? (I'd have thought it would be saying how small the function is; and relative to what, other functions?)</strong></p>
<p>(2): If <span class="math-container">$g \in L^{p}$</span> then one has <span class="math-container">$\int_{\mathbb{R}^{n}}|g(y)|^{p}dy = - \int_{0}^{\infty}\alpha^{p}d \lambda(\alpha)$</span>. <strong>How does one get the RHS of this equality?</strong></p>
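<p><strong>Update:</strong> a numerical experiment for the concrete choice <span class="math-container">$g(x)=e^{-|x|}$</span> with <span class="math-container">$p=2$</span>, where <span class="math-container">$\lambda(\alpha)=2\log(1/\alpha)$</span> for <span class="math-container">$0<\alpha<1$</span>, seems consistent with the RHS after an integration by parts (which turns it into <span class="math-container">$p\int_0^\infty \alpha^{p-1}\lambda(\alpha)\,d\alpha$</span>):</p>

```python
import math

# Layer-cake check for g(x) = e^{-|x|}, p = 2: compare the LHS and the
# integrated-by-parts RHS numerically with composite Simpson quadrature.
def simpson(f, a, b, n=2000):
    h = (b - a) / n
    return h / 3 * (f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n)))

p = 2
lhs = simpson(lambda x: math.exp(-abs(x)) ** p, -40, 40)                    # integral of |g|^p
rhs = p * simpson(lambda t: t ** (p - 1) * 2 * math.log(1 / t), 1e-12, 1)  # p * integral of a^{p-1} lambda(a)
print(lhs, rhs)   # both close to 1
```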
| Mason | 752,243 | <p>For finite dimensional vector spaces, you can view tensors as multilinear maps. The tensor product <span class="math-container">$v_1 \otimes v_2$</span> is the bilinear map on <span class="math-container">$V_1^* \times V_2^*$</span> with <span class="math-container">$$(v_1 \otimes v_2)(\omega_1, \omega_2) = v_1(\omega_1)v_2(\omega_2) = \omega_1(v_1)\omega_2(v_2).$$</span>
In your specific case, <span class="math-container">$v_1 \otimes v_2$</span> is already in its simplest form, as <span class="math-container">$v_1$</span> and <span class="math-container">$v_2$</span> are basis vectors.</p>
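<p>A minimal computational sketch of this viewpoint (coordinates for a 2-dimensional space; the representations here are just one illustrative choice):</p>

```python
# A covector is represented by its coefficient list; pairing is the dot product.
def pair(omega, v):                        # omega(v)
    return sum(w * c for w, c in zip(omega, v))

def tensor(v1, v2):                        # (v1 (x) v2)(om1, om2) = om1(v1) * om2(v2)
    return lambda om1, om2: pair(om1, v1) * pair(om2, v2)

e1, e2 = [1, 0], [0, 1]                    # basis vectors
f1, f2 = [1, 0], [0, 1]                    # dual basis covectors
t = tensor(e1, e2)
print(t(f1, f2), t(f2, f1))                # 1 0
```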
|
1,379,188 | <p>The Riemann distance function $d(p,q)$ is usually defined as the infimum of the lengths of all <strong>piecewise</strong> smooth paths between $p$ and $q$.</p>
<p><strong>Does it change if we take the infimum only over smooth paths?</strong>
(Note that if a smooth manifold is connected, <a href="https://math.stackexchange.com/a/134129/104576">then it is smoothly path connected</a>).</p>
<p>I am quite certain the distance does not change. I think that every piecewise smooth path can be approximated by a smooth path.</p>
<p>Around any singular point of the original path, we can take a coordinate ball, and somehow create a smoothing of a relevant segment of the path which is not much longer than the original. </p>
<p>An explicit construction such as this can be found <a href="https://math.stackexchange.com/a/134129/104576">here</a>. However, the point there is only to show smooth path connectivity, and we also need some bound on the "added length". </p>
<p><strong>Partial Result (Reduction to the case of Euclidean metric):</strong></p>
<p>I show that the specific Riemannian metric does not matter. That is, if we can create a smoothing with small elongation measured by one metric $g_1$ then we can do the same for any other metric $g_2$. </p>
<p>Hence it is enough to prove the claim for $\mathbb{R}^n$ with the standard metric. </p>
<p>Proof:</p>
<p>Since the question is local (we focus around some point $p$ of non-smoothness of the original piecewise-smooth path) we can take an orthonormal frame for $g_1$, denoted by $E_i$. write $g_{ij}=g_2(E_i,E_j)$, I want to find $\text{max} \{g_2(v,v)|v\in \mathbb{S}^{n-1}_{g_1}\} = \text{max} \{g_2(v,v)|v=x^iE_i , x=(x^1,...,x^n) \in \mathbb{S}^{n-1}_{Euclidean}\} = \text{max} \{g_{ij}x^ix^j| \sum(x^i)^2=1 \} = \text{max} \{x^T \cdot G \cdot x | \|x\|=1 \} = \text{max}{\lambda(G)}$. </p>
<p>Since <a href="https://math.stackexchange.com/a/63206/104576">the roots of a polynomial are continuous in terms of its coefficients</a>, and the coefficients of the characteristic polynomial of a matrix depend continuously on the matrix entries, it follows that the eigenvalues of a matrix depend continuously on the matrix entries. Hence, since the matrix $g_{ij}(q)$ is a continuous function of $q$, it follows that if we restrict to a small enough compact neighbourhood of $p$, the function $f(q)= \text{max}{\lambda(g_{ij}(q))}$ is continuous and in particular bounded by some constant $C$. Hence for any path $\gamma$ which is contained in a small enough neighbourhood of $p$, $L_{g_2}(\gamma) \le \sqrt C L_{g_1}(\gamma)$.</p>
<p>In particular we can take $g_1$ to be the pullback metric of the standard Euclidean metric via some coordinate ball around $p$. Now, solving the problem for the Euclidean case (which implies solving it for $g_1$), we obtain a solution for an arbitrary $g_2$, as required.</p>
| Community | -1 | <p>No, the distance stays the same. As mentioned in <a href="https://math.stackexchange.com/questions/957324/the-formula-for-a-distance-between-two-point-on-riemannian-manifold">The formula for a distance between two point on Riemannian manifold</a>, the reason to allow piecewise smooth curves is to be able to concatenate them, getting other piecewise smooth curves. </p>
<p>For any piecewise smooth curve from $p$ to $q$ there is a smooth one of almost the same length. For example, suppose $\gamma$ has a point of nonsmoothness at $t=t_0$, with one-sided derivatives not matching. Insert a little loop based at $\gamma(t_0)$ that begins with the velocity vector $\gamma'(t_0-)$ and ends with velocity vector $\gamma'(t_0+)$. The point of nonsmoothness is gone, and the added length can be arbitrarily small. </p>
|
2,060,694 | <p>Problem:</p>
<blockquote>
<p>I have 610 friends. Each one of them will invite me to his birthday party, and I will accept every invitation. What is the probability that I will be attending at least one birthday party on every day of the year?</p>
</blockquote>
<p>My attempt has been to try counting how many ways in which there could be at least one day where there are no birthday celebrations, and subtract the resulting probability from 1, but of course I end up having to use the inclusion-exclusion principle, and with these numbers it becomes unwieldy. Hopefully someone can kindly show me a nicer way of tackling this.</p>
| Fimpellizzeri | 173,410 | <p>I know this uses inclusion-exclusion which you stated is not what you wanted. I don't know how far you got into it, or if other users know the end result, but I figured this is too long for a comment and thought someone might find it useful.</p>
<p>Let $p$ be the desired probability.
We will calculate the complementary probability $q=1-p$ that there is at least one day on which no birthday party is attended.</p>
<p>Let $C_i$ be the event that no birthday is attended on day $i$.
Then:</p>
<p>$$q=\mathbb{P}\left(\bigcup_{i=1}^{365}C_i\right)$$</p>
<p>Let $[365]=\{1,2,\dots,365\}$.
Using <a href="https://en.wikipedia.org/wiki/Inclusion%E2%80%93exclusion_principle" rel="nofollow noreferrer">inclusion-exclusion</a>, we can rewrite that as</p>
<p>$$q=\sum_{k=1}^{365}\left({(-1)}^{k-1}\sum_{\substack{I\subset[365]\\|I|=k}}\mathbb{P}(C_I)\right),$$</p>
<p>where $C_I=\cap_{i \in I}C_i$.</p>
<p>Now we need only calculate the $\mathbb{P}(C_I)$; this is much easier.
For each of our $610$ friends, there are $|I|$ days of the year that are forbidden.
Hence</p>
<p>$$\mathbb{P}(C_I)={\left(\frac{365-|I|}{365}\right)}^{610}={\left(1-\frac{|I|}{365}\right)}^{610}$$</p>
<p>and we may write</p>
<p>$$q=\sum_{k=1}^{365}\left({(-1)}^{k-1}\sum_{\substack{I\subset[365]\\|I|=k}}{\left(1-\frac{k}{365}\right)}^{610}\right).$$</p>
<p>This may be further simplified. Indeed, for each $k$ there are $\binom{365}{k}$ sets $I\subset[365]$ with $|I|=k$.
Thus</p>
<p>$$q=\sum_{k=1}^{365}{(-1)}^{k-1}\binom{365}{k}{\left(1-\frac{k}{365}\right)}^{610},$$</p>
<p>and the desired probability is</p>
<p>\begin{align}
p&=1+\sum_{k=1}^{365}{(-1)}^{k}\binom{365}{k}{\left(1-\frac{k}{365}\right)}^{610}\\
&=\hphantom{1+}\,\,\sum_{k=0}^{365}{(-1)}^{k}\binom{365}{k}{\left(1-\frac{k}{365}\right)}^{610}.
\end{align}</p>
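<p>For the curious, the final sum can be evaluated exactly with rational arithmetic (a naive floating-point evaluation of this alternating sum of huge terms would suffer catastrophic cancellation); a short Python sketch:</p>

```python
from fractions import Fraction
from math import comb

def p_all_days_covered(friends: int, days: int) -> Fraction:
    """Exact inclusion-exclusion probability that every day gets >= 1 birthday."""
    return sum(
        (-1) ** k * comb(days, k) * Fraction(days - k, days) ** friends
        for k in range(days + 1)
    )

print(p_all_days_covered(3, 3))        # 2/9, matching the direct count 3!/3^3
p = p_all_days_covered(610, 365)
print(float(p))                        # astronomically small
```

<p>The result is astronomically small: with $610$ uniformly random birthdays, leaving at least one of the $365$ days empty is all but certain (the expected number of empty days is $365\,(364/365)^{610}\approx 68.5$).</p>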
|
4,537,489 | <p>Assume that <span class="math-container">$a>0$</span>, and suppose we have:<br />
<span class="math-container">$$X = \{x\in \mathbb{R} \ : \ x^2 < a \}$$</span><br />
We should prove that this set has a supremum, and that it is <strong><span class="math-container">$\sqrt{a}$</span></strong>.<br />
I saw <a href="https://math.stackexchange.com/a/2281226/831100">this answer</a> on one of the related posts:</p>
<blockquote>
<p>Suppose that <span class="math-container">$a>0$</span> then <span class="math-container">$\sqrt{a}$</span> is an upper bound . To see this, use the definition of an open ball . Also <span class="math-container">$0 \in (-\sqrt{a},\sqrt{a})$</span> since <span class="math-container">$|0|<\sqrt{a}$</span>. Therefore supremum exists. Now assume for contradiction that <span class="math-container">$\sqrt{a}$</span> is not the least upper bound. Then there exist <span class="math-container">$M \in R$</span> which is the supremum and <span class="math-container">$M<\sqrt{a}$</span>.Consider <span class="math-container">$z:=\frac{\sqrt{a}-M}{\sqrt{a}}+M$</span>.By construction <span class="math-container">$z>M$</span>. it is impossible that <span class="math-container">$z<\sqrt{a}$</span> since M is the supremum,But if <span class="math-container">$\sqrt{a}\leq z$</span>, then <span class="math-container">$\sqrt{a}\leq\frac{\sqrt{a}-M}{\sqrt{a}}+M \to \sqrt{a}\leq M$</span> ,contradiction.</p>
</blockquote>
<p><strong>My first question:</strong><br />
How did the author recognize that she should use <span class="math-container">$\frac{\sqrt{a}-M}{\sqrt{a}}+M$</span>? Is there a logical process for arriving at this expression?</p>
<p><strong>My second question:</strong><br />
I have a problem with this part:<br />
<span class="math-container">$$\sqrt{a}\leq\frac{\sqrt{a}-M}{\sqrt{a}}+M \to \sqrt{a}\leq M$$</span><br />
Can we conclude from <span class="math-container">$z>M$</span> and <span class="math-container">$\sqrt{a}\leq z$</span> that <span class="math-container">$\sqrt{a}\leq M$</span> ? I think that's not possible!<br />
<strong>Last one:</strong><br />
Is there any better way to prove that?</p>
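<p><strong>Update:</strong> numerically, the witness <span class="math-container">$z$</span> from the quoted proof does behave as claimed (here for <span class="math-container">$a=2$</span>): for any candidate upper bound <span class="math-container">$M<\sqrt{a}$</span>, the expression lands strictly between <span class="math-container">$M$</span> and <span class="math-container">$\sqrt{a}$</span>:</p>

```python
import math

a = 2.0
r = math.sqrt(a)
for M in [0.0, 1.0, 1.4, 1.414]:           # candidate upper bounds below sqrt(a)
    z = (r - M) / r + M
    assert M < z < r                       # z beats M but still undershoots sqrt(a)
    print(M, z)
```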
| Lorenzo | 365,199 | <p>Given a topological space <span class="math-container">$(X,\tau)$</span> and an uncountable closed discrete set <span class="math-container">$C\subseteq X$</span>, then we can consider the following open cover of <span class="math-container">$X$</span>: <span class="math-container">$$\{\{x\} \mid x \in C\} \cup \{X\setminus C\}.$$</span>
If <span class="math-container">$X$</span> were Lindelöf, then there would be a countable subcover, but such a subcover, being countable, cannot cover <span class="math-container">$C$</span>, hence the above cover doesn't have any countable subcover and <span class="math-container">$X$</span> is not Lindelöf.</p>
|
2,737,869 | <p>Determine the value of real parameter $p$ </p>
<p>in such a way that the equation</p>
<p>$$\sqrt{x^2+2p} = p+x $$ </p>
<p>has just one real solution</p>
<p>a. $p \ne 0$</p>
<p>b. There is no such value of parameter$p$</p>
<p>c. None of the remaining possibilities is correct.</p>
<p>d. $p\in [−2,\infty)$</p>
<p>e. $p\in [−2,0)\cup(0,\infty)$</p>
<p>I thought the answer was (a), but it isn't. Help me!</p>
| Misha Lavrov | 383,078 | <p>It follows from <a href="https://en.wikipedia.org/wiki/Roth%27s_theorem" rel="nofollow noreferrer">Roth's theorem</a> that for every <em>irrational algebraic number</em> $\alpha \in [0,1]$ (that is, every irrational $\alpha$ that is the root of some polynomial with rational coefficients) there is a sufficiently small $\epsilon > 0$ for which $\alpha$ won't be included in the union. That proves that $[0,1]$ is eventually not entirely covered.</p>
<p>One particular instance of Roth's theorem says that the inequality
$$
\left|\alpha - \frac pq\right| < \frac1{q^3}
$$
is satisfied for only finitely many fractions $\frac pq$. The size $\frac{\epsilon}{2^{n+1}}$ that we include around the $n^{\text{th}}$ rational number $\frac{p_n}{q_n}$ decays much faster than $\frac1{q_n^3}$. So for any fixed $\epsilon$ (say, for $\epsilon = 1$) there will only be finitely many intervals containing $\alpha$.</p>
<p>If those intervals are around the fractions $\frac{p_1}{q_1}, \dots, \frac{p_k}{q_k}$, then for each of them we can look at the distance $\left|\alpha - \frac{p_i}{q_i}\right|$, and choose $\epsilon$ small enough so that $\alpha$ is left out of the interval around $\frac{p_i}{q_i}$. If we do this for all $k$ fractions, then we have ensured that $\alpha$ is not contained in any of the intervals.</p>
<p>In fact, the same argument shows that for every number with finite <a href="https://en.wikipedia.org/wiki/Liouville_number#Irrationality_measure" rel="nofollow noreferrer">irrationality measure</a>, there is a small enough $\epsilon>0$ such that it gets excluded. Only really weird transcendental numbers that have infinitely many ridiculously good rational approximations will stay in the union of this cover for all $\epsilon>0$.</p>
|
258,205 | <p>I want to know if $\displaystyle{\int_{0}^{+\infty}\frac{e^{-x} - e^{-2x}}{x}dx}$ is finite, or in other words, if the function $\displaystyle{\frac{e^{-x} - e^{-2x}}{x}}$ is integrable in a neighborhood of zero.</p>
| Martin Argerami | 22,857 | <p>$$
\frac1x\,(e^{-x}-e^{-2x})=\frac1x\,(1-x+O(x^2)-(1-2x+O(x^2)))=\frac1x\,(x+O(x^2))=1+O(x).
$$
So the function can be extended to $x=0$ in a continuous way, and it is thus integrable on any interval $[0,k]$. </p>
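<p>A numerical check agrees: the integrand extends continuously by the value $1$ at $x=0$, and in fact the full integral is a Frullani integral with value $\log 2$ (a standard fact, not needed for the integrability question):</p>

```python
import math

# The integrand, with its removable singularity at 0 filled in by the value 1.
def f(x):
    return (math.exp(-x) - math.exp(-2 * x)) / x if x != 0 else 1.0

def simpson(g, a, b, n=2000):
    h = (b - a) / n
    return h / 3 * (g(a) + g(b) + sum((4 if i % 2 else 2) * g(a + i * h) for i in range(1, n)))

print(f(1e-8))                             # close to 1: no singularity at 0
print(simpson(f, 0.0, 60.0), math.log(2))  # both close to 0.6931
```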
|
2,216,778 | <p>My question is if there exists a way to evaluate the sum</p>
<p>$$
{{s}\choose{s}}^{\!2} + {{s + 1}\choose{s}}^{\!2} + \ldots {{s+r}\choose{s}}^{\!2}.
$$</p>
<p>In other words, it's the sum of the squares of the first $r+1$ binomial coefficients on the $s$-th right-to-left diagonal of Pascal's triangle. Moreover, is it true that the previous sum is $O_{\!s}(r^{s})$?</p>
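<p><strong>Update:</strong> for experimenting with the growth rate, the sum is easy to evaluate by brute force (for $s=1$ it reduces to the classical sum of squares, which serves as a check):</p>

```python
from math import comb

# Direct evaluation of C(s,s)^2 + C(s+1,s)^2 + ... + C(s+r,s)^2.
def diag_square_sum(s: int, r: int) -> int:
    return sum(comb(s + k, s) ** 2 for k in range(r + 1))

print(diag_square_sum(1, 3))   # 1 + 4 + 9 + 16 = 30
print(diag_square_sum(2, 2))   # 1 + 9 + 36 = 46
```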
| subdiver | 505,246 | <p>The zero element <span class="math-container">$P_n$</span> is not in the set, but I feel some clarification to the above statements is necessary:
The zero of this space is the polynomial with all zero coefficients - not the polynomial that equals zero when you set <span class="math-container">$t=0$</span>. Thus, multiplication by the zero polynomial will result in zero for all vectors (i.e., other polynomials) in this space. You never really plug in a number for <span class="math-container">$t$</span> when dealing with questions on this space, the vectors are the polynomial equations themselves.
This is enough to conclude it is not a vector space, but it's also true that the set is not closed under vector addition or scalar multiplication as well, as outlined above.</p>
|
7,871 | <p>I'm trying to make a demonstration of how rounding to different numbers of digits affects things but I can't find a way to round numbers to a specified number of digits. </p>
<p>The <code>Round</code> function only rounds to the nearest integer, and that is not always what I want. Other ways seem to only change the way the numbers are displayed, not how they are internally stored. </p>
<p>I want to throw away precision, but it seems Mathematica doesn't want to allow me to do this. As an example: I would like to round 3.4647 to just 3.5 or 3.46. </p>
<p>There must be some way to do this, but I can't for the life of me find it.</p>
| Chris Degnen | 363 | <p>Just specify the nearest multiple in the second argument.</p>
<pre><code>Round[123.456, 0.01]
</code></pre>
<blockquote>
<p>123.46</p>
</blockquote>
|
7,871 | <p>I'm trying to make a demonstration of how rounding to different numbers of digits affects things but I can't find a way to round numbers to a specified number of digits. </p>
<p>The <code>Round</code> function only rounds to the nearest integer, and that is not always what I want. Other ways seem to only change the way the numbers are displayed, not how they are internally stored. </p>
<p>I want to throw away precision, but it seems Mathematica doesn't want to allow me to do this. As an example: I would like to round 3.4647 to just 3.5 or 3.46. </p>
<p>There must be some way to do this, but I can't for the life of me find it.</p>
| Mr.Wizard | 121 | <p>Suppose <code>Round</code> did not take a second argument as it does. What to do?</p>
<pre><code>myround[n_, a_] := Round[n/a] a
myround[π, 0.001]
myround[π, 1/7]
</code></pre>
<blockquote>
<pre><code>3.142
22/7
</code></pre>
</blockquote>
|
2,653,829 | <blockquote>
<p>How can I show $(x^2+1, y^2+1)$ is not maximal in $\mathbb R[x,y]$?</p>
</blockquote>
<p>I know I can mod out the ideal one piece at a time and show $\mathbb C[y]/(y^2+1)$ is not a field since $(y^2+1)$ is not maximal in $\mathbb C[y]$, <strong>but is there another way of showing this?</strong></p>
| quasi | 400,434 | <p>Let $I = (x^2+1,y^2+1)$, let $J=(x^2+1,x-y)$, and let $K=(x^2+1,x+y)$.
<p>
If $1 \in J$, then $a(x^2+1)+b(x-y)=1$, for some $a,b \in \mathbb{R}[x,y]$. But then, letting $y=x$, we get $a(x,x)(x^2+1)=1$, contradiction, since a nonzero multiple of $x^2+1$ must have degree at least $2$.
<p>
If $1 \in K$, then $c(x^2+1)+d(x+y)=1$, for some $c,d \in \mathbb{R}[x,y]$. But then, letting $y=-x$, we get $c(x,-x)(x^2+1)=1$, contradiction, since a nonzero multiple of $x^2+1$ must have degree at least $2$.
<p>
Thus, $J$ and $K$ are proper ideals.
<p>
Since $x-y \in J$, we get $(x+y)(x-y) \in J$, hence $x^2-y^2\in J$. Then since $y^2+1 = (x^2 + 1) - (x^2 - y^2)$, we get that $y^2 + 1 \in J$. It follows that $I \subseteq J$.
<p>
Since $x+y \in K$, we get $(x+y)(x-y) \in K$, hence $x^2-y^2\in K$. Then since $y^2+1 = (x^2 + 1) - (x^2 - y^2)$, we get that $y^2 + 1 \in K$. It follows that $I \subseteq K$.
<p>
Suppose $I$ is maximal. Then we must have $J=I$ and $K=I$.
<p>
From $J=I$, we get $x-y\in I$, and from $K=I$, we get $x+y\in I$.
<p>
From $x-y \in I$ and $x+y \in I$, we get $x \in I$.
<p>
But then, from $x \in I$ and $x^2+1\in I$, we get $1 \in I$, contradiction.
<p>
It follows that $I$ is not maximal.</p>
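<p>The same conclusion can be checked computationally: the quotient $\mathbb R[x,y]/I$ is the $4$-dimensional $\mathbb R$-algebra with basis $1,x,y,xy$ and relations $x^2=y^2=-1$, and in it $(x-y)(x+y)=0$ while both factors are nonzero, so the quotient has zero divisors and is not a field. A small sketch (the tuple encoding is just one possible model):</p>

```python
from itertools import product

# Elements of R[x,y]/(x^2+1, y^2+1) as coefficient 4-tuples over the basis
# 1, x, y, xy; basis[i]*basis[j] = sign * basis[k] with indices 0=1, 1=x, 2=y, 3=xy.
TABLE = {
    (0, 0): (1, 0), (0, 1): (1, 1), (0, 2): (1, 2), (0, 3): (1, 3),
    (1, 0): (1, 1), (1, 1): (-1, 0), (1, 2): (1, 3), (1, 3): (-1, 2),
    (2, 0): (1, 2), (2, 1): (1, 3), (2, 2): (-1, 0), (2, 3): (-1, 1),
    (3, 0): (1, 3), (3, 1): (-1, 2), (3, 2): (-1, 1), (3, 3): (1, 0),
}

def mul(u, v):
    out = [0, 0, 0, 0]
    for i, j in product(range(4), repeat=2):
        sign, k = TABLE[i, j]
        out[k] += sign * u[i] * v[j]
    return tuple(out)

x, y = (0, 1, 0, 0), (0, 0, 1, 0)
x_minus_y, x_plus_y = (0, 1, -1, 0), (0, 1, 1, 0)
print(mul(x_minus_y, x_plus_y))  # (0, 0, 0, 0): zero divisors, so not a field
```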
|
4,019,119 | <p>I'm struggling to find <span class="math-container">$$\lim _{x\to 0}\left(2-e^{\arcsin^{2}\left(\sqrt{x}\right)}\right)^{\frac{3}{x}}$$</span></p>
<p>I've tried the following:
<span class="math-container">$$\lim_{x \to x_0} (ax)^{bx} = \lim_{x \to x_0} e^{\ln\left((ax)^{bx}\right)} = \lim_{x \to x_0} e^{bx \ln(ax)} = e^{\lim_{x \to x_0} bx \ln(ax)},$$</span>
which leads me to
<span class="math-container">$$ = e^{\lim_{x \to 0} \frac{3\ln(2-e^{\arcsin^2(\sqrt{x})})}{x}}.$$</span>
Is this the right way to go? If so, how do I get rid of the division by $x$?
Thanks for any help!</p>
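<p><strong>Update:</strong> a numerical sanity check (not a proof, just evaluating the exponent above for small $x$) suggests the exponent tends to $-3$, so the limit appears to be $e^{-3}$:</p>

```python
import math

# Evaluate 3*log(2 - exp(asin(sqrt(x))**2))/x for small x > 0.
def exponent(x):
    return 3 * math.log(2 - math.exp(math.asin(math.sqrt(x)) ** 2)) / x

for x in (1e-3, 1e-5, 1e-7):
    print(x, exponent(x))      # approaches -3
print(math.exp(-3))
```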
| Community | -1 | <p>Apologies in advance for shooting sparrows with cannons, but I can't give away such a funny opportunity to see "pure" math being applied to "simple" geometry problems.</p>
<p>I'll be using Gödel's compactness theorem, which is well-explained <a href="https://math.stackexchange.com/a/663187/632577">here</a>, so check it out. In essence, the theorem is as follows: In first-order logic, if a property is witnessed by <em>arbitrarily</em> large finite numbers, then there is a way to witness it by infinities. It certainly feels relevant in the post's context.</p>
<p>Consider the theory of the abelian group <span class="math-container">$\mathbb{Z}^2$</span> with generators <span class="math-container">$a,b$</span>. Thus the following sentences hold:</p>
<ul>
<li><span class="math-container">$ab=ba$</span></li>
<li><span class="math-container">$\forall y,\,ey=ye=y$</span></li>
<li><span class="math-container">$a\ne b\ne e$</span></li>
<li><span class="math-container">$a^{-1}\ne a^{1729}$</span></li>
<li>etc.</li>
</ul>
<p>We collect all valid sentences into the set <span class="math-container">$\operatorname{Th}(\mathbb{Z}^2,a,b)$</span>.</p>
<p>We introduce the relation <span class="math-container">$R(x,y)$</span>, which says that "the group elements <span class="math-container">$x,y$</span> of <span class="math-container">$\mathbb{Z}^2$</span> belong to the same tile". Then the following things can be expressed as formulas/sentences:</p>
<ul>
<li><span class="math-container">$R$</span> is an equivalence relation (left as exercise)</li>
<li>Every tile is a single strip of rectangle, or equivalently, any two elements not on a same row/column cannot be in the same tile. We express this with infinitely many sentences. For example, <span class="math-container">$\forall x,\lnot R(x,a^{-3} b^4 x)$</span> would be one of them. We also need the rectangles to be connected, and they can also be expressed by infinitely many sentences, for example, <span class="math-container">$\forall x,R(x,a^6 x)\rightarrow \bigwedge_{i=1}^5 R(x,a^i x)$</span>.</li>
<li>For a particular group element <span class="math-container">$x$</span>, we can ask for it not to be tiled. We use the sentence <span class="math-container">$\phi(x)$</span>, which says that <span class="math-container">$\forall y\ne x,\lnot R(x,y)$</span>.</li>
<li>For a particular group element <span class="math-container">$x$</span>, we can also ask for it to be tiled, and we get to choose how big the tile is! For an integer <span class="math-container">$n>0$</span>, we use the sentence <span class="math-container">$\psi_n(x)$</span> to mean that <span class="math-container">$x$</span> is in a tile of size <span class="math-container">$\ge n$</span>, which is expressed as:
<span class="math-container">$$\left(\bigvee_{i=1-n}^0\bigwedge_{j=i}^{i+n-1} R(x,a^j x)\right)\lor\left(\bigvee_{i=1-n}^0\bigwedge_{j=i}^{i+n-1} R(x,b^j x)\right)$$</span></li>
</ul>
<p>Now consider making a theory <span class="math-container">$T$</span> in the language of groups with the marked elements <span class="math-container">$a,b$</span> and the relation <span class="math-container">$R$</span>. The consistency of this theory will claim that <span class="math-container">$S\subseteq \mathbb{Z}^2$</span> can be tiled by infinite-sized tiles. It will contain all appropirate axioms listed above, but we take special care when putting in any <span class="math-container">$\phi(x)$</span> or <span class="math-container">$\psi_n(x)$</span> into <span class="math-container">$T$</span>. In particular, if <span class="math-container">$x\in S$</span> is an element of <span class="math-container">$\mathbb{Z}^2$</span> expressed as a word in <span class="math-container">$a,b$</span>, then we put <span class="math-container">$\psi_n(x)\in T$</span> for all <span class="math-container">$n$</span> (but of course <span class="math-container">$\phi(x)\notin T$</span>). Otherwise if <span class="math-container">$x\notin S$</span>, then we only put <span class="math-container">$\phi(x)\in T$</span>.</p>
<p>By the compactness theorem, the consistency of <span class="math-container">$T$</span> will follow from the consistency of every finite segment <span class="math-container">$T_0\subseteq T$</span>. However as <span class="math-container">$T_0$</span> is finite, there is some bound <span class="math-container">$N\in\mathbb{N}$</span> such that every <span class="math-container">$\psi_n(x)\in T_0$</span> will be of the form <span class="math-container">$n\le N$</span>. In particular, <span class="math-container">$T_0$</span> is now a segment of a theory whose consistency claims that <span class="math-container">$S$</span> can be tiled by rectangular tiles, and every tile has size at least <span class="math-container">$N$</span>. By our assumption, we can simply tile <span class="math-container">$S$</span> by rectangular tiles of size <span class="math-container">$N$</span>, and see that <span class="math-container">$T_0$</span> is indeed consistent. By the compactness theorem, <span class="math-container">$T$</span> is consistent.</p>
<p>Thus we know that <span class="math-container">$S$</span> can indeed be tiled by rectangles of infinite size. Any such rectangle is either a ray or can be tiled by two rays. Hence <span class="math-container">$S$</span> can also be tiled by rays.</p>
|
3,087,570 | <p>The "school identities with derivatives", like
<span class="math-container">$$
(x^2)'=2x
$$</span>
are not identities in the normal sense, since they do not admit substitutions. For example, if we substitute $1$ for $x$ in the identity above, the resulting equality is not true:
<span class="math-container">$$
(1^2)'=2\cdot 1.
$$</span>
That is why when explaining this to my students I present the derivative in the left side as a formal operation with strings of symbols (and interpret the identity as the equality of strings of symbols). </p>
<p>This however takes a lot of supplementary discussions and proofs which look very bulky, and I have no feeling that this is a good way to explain the matter. In addition, people's reaction to <a href="https://math.stackexchange.com/questions/1501585/calculus-as-a-structure-in-the-sense-of-model-theory">this question of mine</a> makes me think that there are no texts to which I could refer when I take this point of view.</p>
<p>I want to ask people who teach mathematics how they bypass this difficulty. Are there tricks for introducing rigor into the "elementary identities with derivatives" (and similarly with integrals)?</p>
<p>EDIT. It seems to me I have to explain in more detail my own understanding of how this can be bypassed. I don't follow this idea accurately, in detail, but my "naive explanations" are the following. I describe Calculus as a first-order language with a list of variables (<span class="math-container">$x$</span>, <span class="math-container">$y$</span>,...) and a list of functional symbols (<span class="math-container">$+$</span>, <span class="math-container">$-$</span>, <span class="math-container">$\sin$</span>, <span class="math-container">$\cos$</span>, ...), and the functions which are not defined everywhere, like <span class="math-container">$x^y$</span>, are interpreted as relation symbols (of course this requires a lot of preparations and discussions, which is why I usually skip these details, and why I don't like this way). After that the derivative is introduced as a formal operation on <a href="https://en.wikipedia.org/wiki/First-order_logic#Terms" rel="nofollow noreferrer">terms</a> (expressions) of this language, and finally I prove that this operation coincides with the usual derivative on "elementary functions" (i.e. on the functions which are defined by terms of this language). </p>
<p>Derek Elkins suggests a simpler way, namely, to declare <span class="math-container">$x$</span> a notation of the function <span class="math-container">$t\mapsto t$</span>. Are there texts where this is done consistently? (I mean, with examples, exercises, discussions of corollaries...)</p>
<p>@Rebellos, your identity
<span class="math-container">$$
\frac{d}{dx}(x^2)\Big|_{x=1}=2\cdot 1
$$</span>
becomes true either if you understand the derivative as I describe, i.e. as an operation on expressions (i.e. on terms of the first order language), since in this case it becomes a corollary of the equality
<span class="math-container">$$
\frac{d}{dx}(x^2)=2\cdot x,
$$</span>
or if by substitution you mean something special, not what people usually mean, i.e. not the result of the replacement of <span class="math-container">$x$</span> by <span class="math-container">$1$</span> everywhere in the expression (and in this case you should explain this manipulation, because I don't understand it). Anyway, note that your point is not what Derek Elkins suggests (since for him <span class="math-container">$x$</span> denotes the function <span class="math-container">$t\mapsto t$</span>, it can't be substituted by <span class="math-container">$1$</span>). </p>
| Derek Elkins left SE | 305,738 | <p>There are at least a few approaches to this.</p>
<p>At a conceptual level the key is that differentiation acts on functions, not numbers. This leads to the first approach. Often notation like <span class="math-container">$\text{D}f$</span> is used where <span class="math-container">$f$</span> is a function. From this perspective, <span class="math-container">$(x^2)'$</span> is like <span class="math-container">$\text D(x\mapsto x^2)$</span>. This makes <span class="math-container">$x$</span> a <a href="https://en.wikipedia.org/wiki/Free_variables_and_bound_variables" rel="nofollow noreferrer">bound variable</a>. We can't just replace it with a number. Naively done that would produce something like <span class="math-container">$\text D(1\mapsto 1^2)$</span> which is nonsense. We could interpret <span class="math-container">$(1^2)'$</span> as <span class="math-container">$\text D(x\mapsto 1^2)$</span> which would work fine and produce the constantly <span class="math-container">$0$</span> function as desired. Modulo some technicalities, this is closest to what's happening in the standard approach.</p>
<p>An alternative, more algebraic approach is to consider things like <a href="https://en.wikipedia.org/wiki/Derivation_(differential_algebra)" rel="nofollow noreferrer">derivations</a>. Here, instead of having functions, we have algebraic terms. <span class="math-container">$x$</span> ceases to be a variable and is instead a special constant for which we assert the equality <span class="math-container">$\text D x = 1$</span>. In this case, it wouldn't make sense to naively replace <span class="math-container">$x$</span> with <span class="math-container">$3$</span>, say, because that would be like saying "let <span class="math-container">$5$</span> be <span class="math-container">$7$</span>." Alternatively, an interpretation that mapped <span class="math-container">$x$</span> to <span class="math-container">$3$</span> would likely be unsound. You could make a sound interpretation of <span class="math-container">$x$</span> as the identity function though and interpret the derivation operator as the <span class="math-container">$\text D$</span> operator from the first approach. Indeed, you could work with the algebraic notation but in terms of this particular interpretation of it. At any rate, this algebraic approach is very computational but not as flexible since you are limited to the terms of the algebra as opposed to being able to differentiate any (differentiable) function you can specify. On the other hand, this algebraic approach is applicable even in contexts where notions like limits don't make sense. This approach avoids having to deal with free/bound variables, but could likely be pretty jarring.</p>
<p>A kind of middle approach would be to restrict entirely to analytic functions and to work in terms of power series. If you want to go further, you could even identify power series with special (or, for formal power series, arbitrary) sequences of numbers. You could understand differentiating and integrating power series entirely as manipulations of sequences of numbers. More prosaically, you could work with the usual presentation of power series but emphasize manipulating coefficients. There's a lot of beautiful and useful power series manipulation calculations that could be introduced here that is useful for things like generating functions and Laplace/Z transforms.</p>
<p>There are other approaches as well, such as <a href="https://ncatlab.org/nlab/show/synthetic+differential+geometry" rel="nofollow noreferrer">synthetic differential geometry</a> or <a href="https://en.wikipedia.org/wiki/Non-standard_calculus" rel="nofollow noreferrer">non-standard analysis</a>. These also avoid some of the issues with bound variables, but introduce other issues and are less connected to standard practice in some ways. (Potentially, more connected to what mathematicians <em>actually</em> do in some ways.) I would say, more generally, that calculus is one of the first places in a typical (American, at least) math education that students have to start seriously grappling with notions of free and bound variables. Unfortunately, the notation of calculus makes this very confusing and opaque. I would recommend using notation that makes things more explicit (an extreme form is <a href="https://mitpress.mit.edu/books/structure-and-interpretation-classical-mechanics-second-edition" rel="nofollow noreferrer">Structure and Interpretation of Classical Mechanics</a>, but you don't need to go that far) and spend explicit time in the course teaching and practicing working with bound variables.</p>
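<p>The "formal operation on terms" viewpoint from the question is easy to prototype: expressions are syntax trees, $\text D$ rewrites them purely syntactically, and numbers are substituted only <em>after</em> differentiating. A toy sketch (all names invented for illustration):</p>

```python
# Expressions are nested tuples: 'x', numbers, or (op, left, right).
def D(e):
    if e == 'x':
        return 1
    if isinstance(e, (int, float)):
        return 0
    op, a, b = e
    if op == '+':
        return ('+', D(a), D(b))
    if op == '*':                      # Leibniz rule, applied purely syntactically
        return ('+', ('*', D(a), b), ('*', a, D(b)))
    raise ValueError(op)

def ev(e, x):                          # substitute a number only at the end
    if e == 'x':
        return x
    if isinstance(e, (int, float)):
        return e
    op, a, b = e
    return ev(a, x) + ev(b, x) if op == '+' else ev(a, x) * ev(b, x)

x_squared = ('*', 'x', 'x')
print(D(x_squared))        # ('+', ('*', 1, 'x'), ('*', 'x', 1)), i.e. 2x
print(ev(D(x_squared), 1)) # 2 -- substituting 1 after differentiating is fine
```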
|
4,118,297 | <p>I have been trying to find a closed form for the integral below: <span class="math-container">$$\int_1^{\infty}\frac{x^2\tan^{-1}(ax)}{x^4+x^2+1}dx ,\; \; a>0 $$</span>
My progress on this integral: <span class="math-container">$$\cong\frac{\pi^2}{8\sqrt 3}+\frac{\pi}{8}\log(3)-\frac{\pi}{6a\sqrt 2}+\frac{1}{2a^3}\left(\frac{\log(3)}{4} -\frac{\pi}{12\sqrt 3}\right)-\frac{1}{5a^5}\left(\frac{1}{2}-\frac{\pi}{12\sqrt 2}-\frac{\log(3)}{4}\right)+\frac{1}{7a^7}\left(\frac{\pi}{6\sqrt 3}-\frac{1}{2}\right)+\frac{1}{108a^9} +\frac{1}{9a^9}\left(\frac{\log(3)}{9}-\frac{\pi}{12\sqrt 3}\right)-\frac{1}{24a^{11}}+\cdots $$</span> Using the series of <span class="math-container">$\tan^{-1}(ax)$</span>, the above form is obtained. However, I cannot find a closed form for it. If <span class="math-container">$a\to+\infty$</span>, then it is equal to <span class="math-container">$\frac{\pi^2}{8\sqrt 3}+\frac{\pi}{8}\log(3)$</span>.</p>
| TheSimpliFire | 471,884 | <p>We have <span class="math-container">\begin{align}\int_1^{\infty}\frac{x^2\arctan ax}{x^4+x^2+1}\,dx&=\int_0^1\frac{\arctan a/u}{u^2(1/u^4+1/u^2+1)}\frac{du}{u^2}\\&=\int_0^1\frac{\pi/2-\arctan u/a}{u^4+u^2+1}\,du\\&=\frac\pi2\left(\frac14\log3+\frac{\pi\sqrt3}{12}\right)-\int_0^1\frac{\arctan bu}{u^4+u^2+1}\,du\end{align}</span> using <span class="math-container">$u^4+u^2+1=(u^2+u+1)(u^2-u+1)$</span> and <span class="math-container">$b=1/a$</span>. The remaining integral can be represented in terms of dilogarithms, such as in <a href="https://math.stackexchange.com/a/3078021/471884">this answer</a>, but I doubt there is a clean result with an arbitrary <span class="math-container">$b>0$</span>.</p>
<p>In short, this integral does have a closed form, but is unlikely to be "simple"; e.g. <a href="https://www.wolframalpha.com/input/?i=integrate+arctan%28x%2F7%29%2F%28x%5E4%2Bx%5E2%2B1%29" rel="nofollow noreferrer">case <span class="math-container">$a=7$</span></a>.</p>
<p>(As <span class="math-container">$a\to+\infty$</span> the term <span class="math-container">$\int_0^1\frac{\arctan bu}{u^4+u^2+1}\,du$</span> has contribution <span class="math-container">$\to0$</span> so your observation is true.)</p>
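For what it's worth, the reduction above is easy to sanity-check numerically. This is just an illustration I am adding, using a composite Simpson rule in plain Python with the sample value <span class="math-container">$a=7$</span>:

```python
from math import atan, pi, log, sqrt

def simpson(f, lo, hi, n=2000):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(lo + i * h)
    return s * h / 3

a = 7.0
# After x = 1/u, the original integral over (1, inf) becomes this one over (0, 1);
# the lower limit is nudged off 0 to avoid dividing by zero (the integrand is bounded).
lhs = simpson(lambda u: atan(a / u) / (u**4 + u**2 + 1), 1e-9, 1.0)
# Right-hand side of the reduction, with b = 1/a:
rhs = (pi / 2) * (log(3) / 4 + pi * sqrt(3) / 12) \
      - simpson(lambda u: atan(u / a) / (u**4 + u**2 + 1), 0.0, 1.0)
assert abs(lhs - rhs) < 1e-6
```

The check works because <span class="math-container">$\arctan(a/u)+\arctan(u/a)=\pi/2$</span> for positive arguments, which is exactly the identity the second line of the derivation uses.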
|
4,118,297 | <p>I have been trying to find a closed form for the integral below: <span class="math-container">$$\int_1^{\infty}\frac{x^2\tan^{-1}(ax)}{x^4+x^2+1}dx ,\; \; a>0 $$</span>
My progress on this integral: <span class="math-container">$$\cong\frac{\pi^2}{8\sqrt 3}+\frac{\pi}{8}\log(3)-\frac{\pi}{6a\sqrt 2}+\frac{1}{2a^3}\left(\frac{\log(3)}{4} -\frac{\pi}{12\sqrt 3}\right)-\frac{1}{5a^5}\left(\frac{1}{2}-\frac{\pi}{12\sqrt 2}-\frac{\log(3)}{4}\right)+\frac{1}{7a^7}\left(\frac{\pi}{6\sqrt 3}-\frac{1}{2}\right)+\frac{1}{108a^9} +\frac{1}{9a^9}\left(\frac{\log(3)}{9}-\frac{\pi}{12\sqrt 3}\right)-\frac{1}{24a^{11}}+\cdots $$</span> Using the series of <span class="math-container">$\tan^{-1}(ax)$</span>, the above form is obtained. However, I cannot find a closed form for it. If <span class="math-container">$a\to+\infty$</span>, then it is equal to <span class="math-container">$\frac{\pi^2}{8\sqrt 3}+\frac{\pi}{8}\log(3)$</span>.</p>
| Claude Leibovici | 82,404 | <p>You could simplify the problem using
<span class="math-container">$$\frac{x^2}{x^4+x^2+1}=\frac {x^2}{(x^2-r)(x^2-s)}=\frac 1{r-s} \left(\frac{r}{x^2-r}-\frac{s}{x^2-s}\right)$$</span> where
<span class="math-container">$$r=-\frac{1+i \sqrt{3}}{2} \qquad \text{and} \qquad s=-\frac{1-i \sqrt{3}}{2}$$</span> So, the problem boils down to the computation of
<span class="math-container">$$I(t)=\int_1^\infty \frac{\tan ^{-1}(a x)}{x^2-t} \,dx$$</span> where <span class="math-container">$t$</span> is a complex number.</p>
<p>Surprising or not, there is an antiderivative (a monster found by a CAS).</p>
<p>So, there is a closed form. <em>I saw it!</em></p>
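As a quick numerical spot-check of the partial-fraction step above (my own addition, using Python's built-in complex arithmetic):

```python
# Spot check: x^2/(x^4+x^2+1) == (r/(x^2-r) - s/(x^2-s))/(r-s),
# where r, s are the roots of t^2 + t + 1 = 0.
sqrt3 = 3 ** 0.5
r = complex(-0.5, -sqrt3 / 2)   # r = -(1 + i*sqrt(3))/2
s = complex(-0.5,  sqrt3 / 2)   # s = -(1 - i*sqrt(3))/2

def direct(x):
    return x**2 / (x**4 + x**2 + 1)

def split(x):
    return (r / (x**2 - r) - s / (x**2 - s)) / (r - s)

for x in (0.3, 1.0, 2.5, 10.0):
    assert abs(direct(x) - split(x)) < 1e-12
```

The two complex terms are conjugates, so their combination is real, matching the real rational function on the left.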
|
3,787,167 | <p>Let <span class="math-container">$\{a_{jk}\}$</span> be an infinite matrix such that the corresponding mapping <span class="math-container">$$A:(x_i) \mapsto \Big(\sum_{j=1}^\infty a_{ij}x_j\Big)_i$$</span> is a well-defined linear operator <span class="math-container">$A:l^2\to l^2$</span>.
I need help with showing that this operator must be bounded. I guess this means I need to check that the unit sphere maps to something bounded, so I need to find some inequality on the coefficients of the matrix that lets me write a chain of inequalities and get the desired bound. But I don't understand how to get a bound from the operator merely being well defined.</p>
| Bart Michels | 43,288 | <p>It is helpful to consider the 'finite rank' case first:</p>
<p><strong>Lemma.</strong> Let <span class="math-container">$(a_n)$</span> be a sequence of real numbers such that for every <span class="math-container">$x = (x_n) \in \ell^2$</span> we have <span class="math-container">$Ax := \sum_{n = 1}^\infty a_n x_n$</span> convergent. Then <span class="math-container">$A : \ell^2 \to \mathbb R$</span> is a bounded linear map.</p>
<p>Proof: Suppose <span class="math-container">$A$</span> is not bounded. Take any sequence of positive real numbers <span class="math-container">$(M_n)$</span>. We may find <span class="math-container">$x_1 \in \ell^2$</span> with <span class="math-container">$\Vert x_1 \Vert = 1$</span> and <span class="math-container">$Ax_1 > M_1$</span>. We may find a finite subsequence <span class="math-container">$x_1'$</span> of <span class="math-container">$x_1$</span> satisfying <span class="math-container">$Ax_1' \geq M_1/2$</span>. It is no restriction to assume all <span class="math-container">$a_n x_{1, n} \geq 0$</span>. Because <span class="math-container">$x_1'$</span> is a subsequence, <span class="math-container">$\Vert x_1' \Vert \leq \Vert x_1 \Vert \leq 1$</span>. Let <span class="math-container">$A_1$</span> be the corresponding subsequence of <span class="math-container">$(a_n)$</span>. Then <span class="math-container">$A_1$</span> is a finitely supported sequence, so it defines a bounded functional.</p>
<p>Because <span class="math-container">$A - A_1$</span> is also unbounded, we may find <span class="math-container">$x_2 \in \ell^2$</span> with <span class="math-container">$\Vert x_2 \Vert = 1$</span> and <span class="math-container">$(A-A_1)x_2 > M_2$</span>. We may find a finite subsequence <span class="math-container">$x_2'$</span> of <span class="math-container">$x_2$</span> with <span class="math-container">$(A-A_1) x_2' > M_2/2$</span> and all <span class="math-container">$a_n x_{2,n} \geq 0$</span>, and a corresponding subsequence <span class="math-container">$A_2$</span> of <span class="math-container">$A - A_1$</span>. By construction, we may assume that <span class="math-container">$x_1'$</span> and <span class="math-container">$x_2'$</span> have disjoint supports, so that <span class="math-container">$(A-A_1) x_2' = A x_2'$</span>.</p>
<p>Continuing this procedure, we find a sequence <span class="math-container">$(x_n')$</span> with disjoint supports, <span class="math-container">$\Vert x_n \Vert \leq 1$</span> and <span class="math-container">$A x_n' > M_n/2$</span> and <span class="math-container">$a_m x_{n,m} \geq 0$</span>. By taking the <span class="math-container">$M_n$</span> to grow sufficiently fast, the element <span class="math-container">$x := \sum_{n=1}^\infty \frac1{M_n} x_n'$</span> is in <span class="math-container">$\ell^2$</span>. But because the <span class="math-container">$x_n'$</span> have disjoint supports, and because <span class="math-container">$Ax$</span> is a sum of nonnegative terms, it is clear that
<span class="math-container">$$Ax \geq \sum_{n=1}^\infty \frac1{M_n} A x_n' > \sum_{n=1}^\infty \frac12 = \infty \,,$$</span>
so that <span class="math-container">$A$</span> is not well-defined. <span class="math-container">$\square$</span></p>
<p><strong>Corollary.</strong> Let <span class="math-container">$(a_{ij})$</span> be a matrix of real numbers with <span class="math-container">$N$</span> rows, such that for every <span class="math-container">$x = (x_j) \in \ell^2$</span> we have <span class="math-container">$Ax := \left(\sum_{j = 1}^\infty a_j x_j \right)_i \in \mathbb R^N$</span> convergent. Then <span class="math-container">$A : \ell^2 \to \mathbb R^N$</span> is a bounded linear map.</p>
<p>Proof: Each of the projections of <span class="math-container">$A$</span> is bounded by the previous lemma, and this suffices because <span class="math-container">$\mathbb R^N$</span> is finite-dimensional. <span class="math-container">$\square$</span></p>
<p>Now, for the case of <span class="math-container">$A : \ell^2 \to \ell^2$</span>.</p>
<p>Proof: Suppose <span class="math-container">$A$</span> is not bounded. Take any sequence of positive real numbers <span class="math-container">$(M_n)$</span>. We may find <span class="math-container">$x_1 \in \ell^2$</span> with <span class="math-container">$\Vert x_1 \Vert = 1$</span> and <span class="math-container">$\Vert A x_1 \Vert > M_1$</span>. There then exists a submatrix <span class="math-container">$A_1$</span> of <span class="math-container">$A$</span> with finitely many nonzero rows, and with <span class="math-container">$\Vert A_1 x_1 \Vert$</span> 'very close' to <span class="math-container">$\Vert A x_1 \Vert$</span>.</p>
<p>By the corollary above, <span class="math-container">$A_1$</span> is bounded, so that <span class="math-container">$A - A_1$</span> is not bounded. Thus we may find <span class="math-container">$x_2 \in \ell^2$</span> with <span class="math-container">$\Vert x_2 \Vert = 1$</span> and <span class="math-container">$\Vert (A-A_1) x_2 \Vert > M_2$</span>, and subsequently a submatrix <span class="math-container">$A_2$</span> of <span class="math-container">$A - A_1$</span> with finitely many rows and with <span class="math-container">$\Vert A_2 x_2 \Vert$</span> 'very close' to <span class="math-container">$\Vert (A-A_1) x_2 \Vert$</span>.</p>
<p>Continuing this process, we find a sequence <span class="math-container">$(x_n) \in \ell^2$</span>, with <span class="math-container">$\Vert x_n \Vert = 1$</span> and <span class="math-container">$\Vert A_n x_n \Vert$</span> 'very close' to <span class="math-container">$\Vert (A - A_1 - A_2 - \ldots - A_{n-1}) x_n \Vert$</span> and with <span class="math-container">$\Vert (A - A_1 - A_2 - \ldots - A_{n-1}) x_n \Vert > M_n$</span>. Note how the <span class="math-container">$A_n x_n$</span> have disjoint supports, by construction.</p>
<p>Consider the element <span class="math-container">$x = \sum_{n=1}^\infty \frac1{M_n} x_n$</span>. When the <span class="math-container">$M_n$</span> grow sufficiently fast, this lies in <span class="math-container">$\ell^2$</span>. Because the <span class="math-container">$A_n x$</span> have disjoint supports, <span class="math-container">$\Vert Ax \Vert^2 = \sum_{n=1}^\infty \Vert A_n x \Vert^2$</span>, so it suffices to bound each <span class="math-container">$\Vert A_n x \Vert$</span> from below. By the triangle inequality applied to finite sums,
<span class="math-container">$$\begin{align*}\Vert A_n x \Vert
&\geq \frac1{M_n}\Vert A_n x_n \Vert - \left \Vert A_n \sum_{j \neq n} \frac1{M_j} x_j \right\Vert \\
&\geq \frac1{M_n}\Vert A_n x_n \Vert - \sum_{j < n} \frac1{M_j} \Vert A_n x_j \Vert - \Vert A_n \Vert \left \Vert \sum_{j> n} \frac1{M_j} x_j \right\Vert \\
&\geq \frac1{M_n}\Vert A_n x_n \Vert - \sum_{j < n} \frac1{M_j} \Vert A_n x_j \Vert - \Vert A_n \Vert \left( \sum_{j > n} \frac1{M_j^2}\right)^{1/2} \,.
\end{align*}$$</span>
Now we choose the <span class="math-container">$M_n$</span> and <span class="math-container">$A_n$</span> so that each of these lower bounds is at least <span class="math-container">$\frac1{2M_n}\Vert A_n x_n \Vert$</span>: at every step, take <span class="math-container">$M_n$</span> very large compared to the norms <span class="math-container">$\Vert A_j \Vert$</span> for <span class="math-container">$j < n$</span>; given this <span class="math-container">$M_n$</span>, choose <span class="math-container">$A_n$</span> so that <span class="math-container">$\Vert A_n x_n\Vert$</span> is extremely close to <span class="math-container">$\Vert (A - A_1 - A_2 - \ldots - A_{n-1}) x_n \Vert > M_n$</span>, which by disjointness of the rows and the Pythagorean theorem also forces <span class="math-container">$\Vert A_m x_n \Vert$</span> to be small for every <span class="math-container">$m > n$</span>. The negative terms above can then be made arbitrarily small, so that <span class="math-container">$\Vert A_n x \Vert \geq \frac1{M_n}\Vert A_n x_n \Vert/2 \geq \frac12$</span> for every <span class="math-container">$n$</span>, and hence
<span class="math-container">$$\Vert Ax \Vert^2
= \sum_{n=1}^\infty \Vert A_n x \Vert^2 \geq \sum_{n=1}^\infty \frac14 = \infty \,.$$</span></p>
<p>More concretely, you could take <span class="math-container">$M_1 = 1$</span>, <span class="math-container">$M_n> \max(n, 10n \max_{j < n}(\Vert A_j \Vert))$</span> and <span class="math-container">$A_n$</span> such that <span class="math-container">$\Vert (A - A_1 - A_2 - \ldots - A_{n-1}) x_n\Vert^2 - \Vert A_n x_n \Vert^2 \leq \frac1{10^4 n^4}$</span>; by disjointness of the rows, this also gives <span class="math-container">$\Vert A_m x_n \Vert \leq \frac1{100 n^2}$</span> for every <span class="math-container">$m > n$</span>. Then for each <span class="math-container">$n$</span>,
<span class="math-container">$$\frac1{M_n}\Vert A_n x_n \Vert - \sum_{j < n} \frac1{M_j} \Vert A_n x_j \Vert - \Vert A_n \Vert \left( \sum_{j > n} \frac1{M_j^2}\right)^{1/2}
\geq \frac9{10} - \sum_{j < n} \frac1{100j^2} - \left( \sum_{j > n} \frac1{100j^2}\right)^{1/2}$$</span>
and the right-hand side exceeds <span class="math-container">$\frac12$</span> for every <span class="math-container">$n$</span>, which is fine.</p>
|
1,578,783 | <p>A friend of mine found a book in which the author said that the dual space of $L^\infty$ is $L^1$, of course not with the norm topology but with the weak-* topology. Does anyone know where I can find this result? Thanks.</p>
| cruiser0223 | 684,172 | <p>This is a consequence of the Mackey-Arens Theorem, see for instance <a href="https://www.sciencedirect.com/book/9780125850018/methods-of-modern-mathematical-physics" rel="nofollow noreferrer">Reed, Simon: Methods of modern mathematical physics</a>, Theorem V.22. It says:</p>
<blockquote>
<p>For a dual pair <span class="math-container">$(X,Y)$</span>, a loc. conv. topology <span class="math-container">$\mathcal{T}$</span> is a dual-<span class="math-container">$(X,Y)$</span> topology if and only if it is stronger than the <span class="math-container">$\sigma(X,Y)$</span>-topology (i.e. the weak topology) and weaker than the Mackey-topology (called the <span class="math-container">$\tau(X,Y)$</span>-topology).</p>
</blockquote>
<p>Now choose <span class="math-container">$X = L^\infty$</span> and <span class="math-container">$Y = L^1$</span>, hence the <span class="math-container">$\sigma(L^\infty,L^1)$</span>-topology is the dual-<span class="math-container">$(L^\infty,L^1)$</span> topology, i.e. the topological (or continuous) dual of <span class="math-container">$(L^\infty, \sigma(L^\infty,L^1)$</span>) is <span class="math-container">$L^1$</span>.</p>
<p>To notice that <span class="math-container">$(L^\infty,L^1)$</span> is a dual pair, see for instance <a href="https://en.wikipedia.org/wiki/Dual_system" rel="nofollow noreferrer">this wikipedia article</a> or the above textbook.</p>
<p>Since you asked for references, I am certain this result is present in most textbooks on topological vector spaces.</p>
|
888,101 | <p>Suppose I am asked to show that some topology is not metrizable. What exactly do I have to prove for that?</p>
| Thomas Andrews | 7,933 | <p>Certainly, if the topology is not Hausdorff, it is not metrizable.</p>
<p>From <a href="http://en.wikipedia.org/wiki/Metrization_theorems" rel="nofollow">Wikipedia</a>:</p>
<blockquote>
<p>A compact Hausdorff space is metrizable if and only if it is <a href="http://en.wikipedia.org/wiki/Second-countable" rel="nofollow">second-countable</a>.</p>
</blockquote>
<p>You might find more on that Wikipedia page.</p>
|
888,101 | <p>Suppose I am asked to show that some topology is not metrizable. What exactly do I have to prove for that?</p>
| Asaf Karagila | 622 | <p>Recall that if $(X,d)$ is a metric space, then the metric induces a topology. But different metrics can induce the same topology, for example $(X,d')$ where $d'(x,y)=2d(x,y)$, is a different metric, but induces the same topology.</p>
<p>We say that a topological space is <em>metrizable</em>, if there is a metric which induces the topology.</p>
<p>So to show that a space is not metrizable, you have to show that there is no metric which can induce this topology. This is often done by refuting certain consequences of metrizability. For example we can prove that metric induced topology is always Hausdorff, and first-countable.</p>
<p>If a space is not first-countable, it's not a metric space. If it is not Hausdorff it's not a metric space.</p>
<p>If you have more information on the space, then you can use other conditions as well, e.g. a connected metric space with at least two points is uncountable. So if the space is countable, connected, and has at least two points, then it is not metrizable.</p>
<p>And so on and so forth.</p>
|
127,108 | <p>If you take a subtraction-free rational identity like $(xxx+yyy)/(x+y)+xy=xx+yy$ and replace $\times$,$/$,$+$,$1$ by $+$,$-$,min,$0$, do you always get a valid min,plus,minus identity like min(min($x+x+x,y+y+y$)$-$min($x,y$),$\:x+y$)$\ =\ $min($x+x,y+y$)?</p>
| Will Sawin | 18,060 | <p>Yes. Replace $x$ with $e^{-Na}$, $y$ with $e^{-Nb}$, etc. Then take the log, then divide by $N$. One gets a new identity where $\times$ is replaced by $+$, $/$ by $-$, $1$ by $0$, and $u+v$ by $-\ln (e^{-N u} + e^{-N v}) / N= \min(u,v) - \ln\left( 1+ e^{-N |u-v|}\right)/N = \min(u,v) + O(1/N)$. Then take the limit as $N$ goes to $\infty$. You now have a tropcal identity.</p>
<p>This fits with the idea of tropical geometry as the limit of classical algebraic geometry as variables get very large.</p>
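Both halves of this argument are easy to check numerically; the following is an illustration I am adding, not part of the original answer. It tests the tropical identity from the question at random points, and also the convergence of the "softened" minimum used in the limit argument.

```python
import math, random

def tropical_lhs(x, y):
    # tropicalization of (xxx + yyy)/(x + y) + xy
    return min(min(3 * x, 3 * y) - min(x, y), x + y)

def tropical_rhs(x, y):
    # tropicalization of xx + yy
    return min(2 * x, 2 * y)

random.seed(0)
for _ in range(1000):
    x, y = random.uniform(-10, 10), random.uniform(-10, 10)
    assert math.isclose(tropical_lhs(x, y), tropical_rhs(x, y))

# The degeneration itself: -ln(e^{-Nu} + e^{-Nv})/N -> min(u, v) as N grows.
def soft_min(u, v, N):
    return -math.log(math.exp(-N * u) + math.exp(-N * v)) / N

assert abs(soft_min(1.0, 2.0, 200.0) - 1.0) < 0.01
```

Indeed, one can also see the identity directly: the left side simplifies to <span class="math-container">$\min(2\min(x,y),\,x+y)$</span>, and <span class="math-container">$x+y \geq 2\min(x,y)$</span> always.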
|
1,907,743 | <p>I'm having trouble with a step in a paper which I believe boils down to the following inequality:
$$
\left\| \sum_{k\in\mathbb{Z}} f(\cdot+k) \right\|_{L^2(0,1)}
\leq c \|f\|_{L^2(\mathbb{R})}.
$$
I haven't come up with many ideas. Hitting the left-hand side with Minkowski, for example, produces something which can exceed $\|f\|_{L^2(\mathbb{R})}$.</p>
<p>I also put a bit of effort this afternoon into falsifying the above inequality (it may be that I'm misunderstanding the omitted steps in the original paper). Begin with a power series $g(x)=\sum a_kx^k$ which has a local $L^2$ singularity. It's then not too hard to use this representation to construct $f$ satisfying
$$
\sum_{k\in\mathbb{Z}} f(x+k) = g(x).
$$
However, the few times I attempted this did not result in an $L^2(\mathbb{R})$ function.</p>
<p>Any help one way or the other is appreciated.</p>
| dls | 1,761 | <p>The counterexample from Artic Char suggests that the statement becomes true once a weight is added:
\begin{align}
\left\| \sum_{k\in\mathbb{Z}} f(\cdot+k) \right\|_{L^2(0,1)}
&\leq \sum_{k\in\mathbb{Z}} \|f\|_{L^2(k,k+1)} \\\\
&= \sum_{k\in\mathbb{Z}} \left(\int_k^{k+1} \left|\frac{1+x^2}{1+x^2} \cdot f(x)\right|^2\right)^{1/2} \\\\
&\leq \sum_{k\in\mathbb{Z}} \frac{1}{1+\min(k^2,(k+1)^2)} \|(1+x^2) \, f(x)\|_{L^2(k,k+1)} \\\\
&\leq c \, \|(1+x^2) \, f(x)\|_{L^2(\mathbb{R})}.
\end{align}
Or, you could use the weight $1+|x|^\alpha$ for any $\alpha>1$.</p>
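As a crude numerical illustration of this weighted bound (my own check, not part of the argument), take <span class="math-container">$f(x)=1/(1+x^2)^2$</span>, so that <span class="math-container">$(1+x^2)f(x)=1/(1+x^2)$</span> is in <span class="math-container">$L^2$</span>, truncate the periodization, and use midpoint-rule quadrature; the constant is taken as <span class="math-container">$c=\sum_k \sup_{[k,k+1]}\frac1{1+x^2} = \sum_k \frac1{1+\min(k^2,(k+1)^2)}$</span>.

```python
def f(x):
    return 1.0 / (1.0 + x * x) ** 2   # (1 + x^2) f(x) = 1/(1 + x^2) is in L^2

K = 50        # truncation of the sum over k
N = 2000      # midpoint-rule grid on (0, 1)
h = 1.0 / N

# Left side: || sum_k f(. + k) ||_{L^2(0,1)}
lhs_sq = 0.0
for i in range(N):
    x = (i + 0.5) * h
    g = sum(f(x + k) for k in range(-K, K + 1))
    lhs_sq += g * g * h
lhs = lhs_sq ** 0.5

# Right side: c * || (1 + x^2) f ||_{L^2(R)}, with R truncated to [-M, M]
c = sum(1.0 / (1.0 + min(k * k, (k + 1) * (k + 1))) for k in range(-K, K + 1))
M, H = 200.0, 0.01
rhs_sq = 0.0
for i in range(int(2 * M / H)):
    x = -M + (i + 0.5) * H
    w = (1.0 + x * x) * f(x)
    rhs_sq += w * w * H
rhs = c * rhs_sq ** 0.5

assert 0.0 < lhs <= rhs
```

Here the left side comes out well below the right side, consistent with the chain of inequalities above (the constant is far from sharp).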
|
48,626 | <p>In <code>ListPointPlot3D</code>, it seems the only point style available is the default, because there is no <code>PlotMarkers</code> option for this function. Is there a way to change the point style? For example, what if I want to draw the points as small cubes?</p>
| anderstood | 18,767 | <p>Another possibility is to use <code>Graphics3D</code>. For example (one with <code>Cuboid</code>, one with <code>Sphere</code>):</p>
<pre><code>pts = RandomReal[10, {20, 3}];
{Graphics3D[{Blue, Cuboid[#, # + .3 {1, 1, 1}] & /@ pts}],
Graphics3D[{Red, Sphere[#, 0.2] & /@ pts}]}
</code></pre>
<p><a href="https://i.stack.imgur.com/M0Mxp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/M0Mxp.png" alt="enter image description here"></a></p>
|