qid | question | author | author_id | answer |
|---|---|---|---|---|
3,363,944 | <p>A group consisting of <span class="math-container">$3$</span> men and <span class="math-container">$6$</span> women attends a prizegiving ceremony. If <span class="math-container">$ 5$</span> prizes are awarded at random to members of the group, find the probability that exactly <span class="math-container">$3 $</span> of the prizes are awarded to women if<br>
a) There is a restriction of at most one prize per person<br>
b) There is no restriction on the number of prizes per person</p>
<p>I did part a) and got the same result as the solution, but I failed to get the same answer for part b). When I looked at the worked solutions for both parts, I noticed a significant difference in the way the two parts are solved.</p>
<p>This is the working out for part a) (which is also similar to my working out)
a) <span class="math-container">$\frac{6C3\times 3C2}{9C5} = \frac{10}{21}\ $</span></p>
<p>And this is the working out of part b)
b) <span class="math-container">$\ 5C3 \times (\frac{3}{9})^{2} \times (\frac{6}{9})^{3}\ = \frac{80}{243}\ $</span></p>
<p>I'm confused about why part b) is done in such a different way from part a). As a student, how can I know when to count the numerator and denominator separately, as in part a), and when to find the probability of each component and multiply them all together, as in part b)? Also, can part b) be solved in a way similar to part a)? Does anyone have any tips on how to distinguish between these sorts of methods?</p>
<p>Thank you very much for helping.</p>
| Avinash N | 253,506 | <p>Let's start. (Answer for part b.)</p>
<p>Total number of prizes <span class="math-container">$=5$</span>.</p>
<p>The number of ways to select <span class="math-container">$3$</span> prizes from <span class="math-container">$5$</span> prizes <span class="math-container">$=5C3$</span> <span class="math-container">$=10$</span>.</p>
<p>We want <span class="math-container">$3$</span> prizes to go to women and <span class="math-container">$2$</span> prizes to go to men.</p>
<p>We are given that there are <span class="math-container">$6$</span> women and <span class="math-container">$3$</span> men, so the total number of people <span class="math-container">$=9$</span>.</p>
<p>Consider distributing <span class="math-container">$3$</span> prizes among the <span class="math-container">$6$</span> women (a person can get more than one prize).</p>
<p>The total number of ways to distribute <span class="math-container">$3$</span> prizes among <span class="math-container">$6$</span> women <span class="math-container">$=6×6×6= 6^3$</span>.</p>
<p>Since each of these <span class="math-container">$3$</span> prizes could have gone to any of the <span class="math-container">$9$</span> people, the probability that all <span class="math-container">$3$</span> of them go to women <span class="math-container">$={6^3}/{9^3}$</span>.</p>
<p>The same argument applies to distributing the <span class="math-container">$2$</span> remaining prizes among the <span class="math-container">$3$</span> men.</p>
<p>Therefore, the probability that both of those prizes go to men <span class="math-container">$={3^2}/{9^2}$</span>.</p>
<p>(NOTE: Once you choose which <span class="math-container">$3$</span> prizes go to the women, the remaining <span class="math-container">$2$</span> prizes go to the men, so there are only <span class="math-container">$5C3$</span> such choices.)</p>
<p>Thus, the required probability <span class="math-container">$=10× ({6^3}/{9^3})× ({3^2}/{9^2})$</span> <span class="math-container">$=80/243$</span>.</p>
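As a quick sanity check, one can confirm this probability by brute-force enumeration over all equally likely outcomes (a minimal Python sketch; labeling people 0–5 as the women and 6–8 as the men is an arbitrary choice made here for illustration):

```python
from fractions import Fraction
from itertools import product

# People 0..8: indices 0..5 are the 6 women, 6..8 are the 3 men.
# With no restriction, each of the 5 prizes independently goes to
# any of the 9 people, so there are 9**5 equally likely outcomes.
favorable = 0
total = 0
for outcome in product(range(9), repeat=5):
    total += 1
    women_prizes = sum(1 for person in outcome if person < 6)
    if women_prizes == 3:
        favorable += 1

probability = Fraction(favorable, total)
print(probability)  # 80/243
```

The count of favorable outcomes is exactly $5C3 \times 6^3 \times 3^2 = 19440$ out of $9^5 = 59049$, which reduces to $80/243$.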
|
2,302,067 | <p>I'm trying to prove that if ${\kappa}$ is an infinite cardinal, then there are $2^{\kappa}$ bijective functions from ${\kappa}$ to ${\kappa}$. I would greatly appreciate any tips. Thank you. </p>
| bof | 111,012 | <p>We know that there are at most $\kappa^\kappa\le(2^\kappa)^\kappa=2^{\kappa^2}=2^\kappa$ bijections from $\kappa$ to $\kappa;$ we have to show that there are at least $2^\kappa$ bijections. Since $|\{0,1\}\times\kappa|=2\kappa=\kappa,$ it will suffice to exhibit $2^\kappa$ bijections from $\{0,1\}\times\kappa$ to $\{0,1\}\times\kappa,$ which we do as follows.</p>
<p>For each set $S\subseteq\kappa$ define
$$f(\langle i,\ \alpha\rangle)=\begin{cases}
\langle1-i,\ \alpha\rangle\ \text{ if }\ \alpha\in S,\\
\ \ \ \ \ \ \ \langle i,\ \alpha\rangle\ \text{ if }\ \alpha\notin S.\\
\end{cases}$$</p>
|
3,702,086 | <h1>Problem</h1>
<p>In a two-dimensional Cartesian coordinate system,
there are two points <span class="math-container">$A(2, 0)$</span> and <span class="math-container">$B(2, 2)$</span>
and a circle <span class="math-container">$c$</span> with radius <span class="math-container">$1$</span> centered at the origin <span class="math-container">$O(0, 0)$</span>,
as shown in the figure below.
<a href="https://i.stack.imgur.com/MsBqu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MsBqu.png" alt="enter image description here"></a></p>
<p>If <span class="math-container">$P$</span> is a point on the circle <span class="math-container">$c$</span>,
then what is the minimum value of</p>
<p><span class="math-container">$$ f = 2\sqrt{2}\lvert{PA}\rvert + \lvert{PB}\rvert? $$</span></p>
<h1>Hypothesis</h1>
<p>From my experience,
the solutions to such problems do not seem to be expressible in elementary form in general,
as indicated by answers to the question <a href="https://math.stackexchange.com/q/463978/620160">Minimize the sum of distances between two point and a circle</a>.
However, when I studied this problem on <a href="https://www.geogebra.org/classic/pvcvpvv2" rel="nofollow noreferrer">GeoGebra</a>,
it seems that the minimum value is exactly <span class="math-container">$5$</span> in this specific situation,
with <span class="math-container">$P$</span> located at roughly the position shown below:
<a href="https://i.stack.imgur.com/wxfnu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wxfnu.png" alt="enter image description here"></a></p>
<p>I tried to verify my hypothesis as follows.
Since <span class="math-container">$P$</span> is located inside <span class="math-container">$\angle AOB$</span>,
we set its location to <span class="math-container">$(x, \sqrt{1 - x^2})$</span> (where <span class="math-container">$\sqrt{2}/2 < x < 1$</span>).
Therefore,
<span class="math-container">\begin{align*}
\lvert{PA}\rvert
&= \sqrt{(2 - x)^2 + (1 - x^2)} \\
&= \sqrt{5 - 4x}, \\
\lvert{PB}\rvert
&= \sqrt{(2 - x)^2 + (2 - \sqrt{1 - x^2})^2} \\
&= \sqrt{-4\sqrt{1 - x^2} - 4x + 9}, \\
f
&= 2\sqrt{2} \lvert{PA}\rvert + \lvert{PB}\rvert \\
&= 2\sqrt{2} \sqrt{5 - 4x}
+ \sqrt{-4\sqrt{1 - x^2} - 4x + 9}. \\
\end{align*}</span></p>
<p>I asked <a href="https://www.geogebra.org/classic/dwqhf64w" rel="nofollow noreferrer">GeoGebra</a> again to plot <span class="math-container">$f(x)$</span>:
<a href="https://i.stack.imgur.com/NKVXh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NKVXh.png" alt="enter image description here"></a>
and it seems to confirm my conjecture that
<span class="math-container">$$\min_{\sqrt{2}/2 < x < 1} f(x) = 5$$</span></p>
<h1>Question</h1>
<p>Is my hypothesis correct?
If so, is there a proof of this hypothesis that can be carried out relatively easily by hand
(preferably avoiding, say, the evaluation of <span class="math-container">$f'(x)$</span>)?
Geometric proofs will be especially appreciated.</p>
| user | 293,846 | <p>Your hypothesis is true.</p>
<p>Indeed the solution of the equation:
<span class="math-container">$$
2\sqrt{10-8x}+\sqrt{9-4x-4\sqrt{1-x^2}}=5
$$</span>
is
<span class="math-container">$$
x=\frac{2+7\sqrt{46}}{50},\text{ with } \sqrt{1-x^2}=\frac{14-\sqrt{46}}{50}.
$$</span></p>
<p>Substituting this into the derivative of the distance one obtains:
<span class="math-container">$$
\left[-\frac{8}{\sqrt{10-8x}}+\frac{2(x-\sqrt{1-x^2})}{\sqrt{1-x^2}\sqrt{9-4x-4\sqrt{1-x^2}}}\right]_{x=\frac{2+7\sqrt{46}}{50}}\\
=-\frac{4(14+\sqrt{46})}{15}+\frac{4(14+\sqrt{46})}{15}=0.
$$</span></p>
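A quick numerical check of the claimed minimizer (a Python sketch added for verification; a sanity check, not a proof):

```python
import math

# Claimed optimal point on the unit circle
x = (2 + 7 * math.sqrt(46)) / 50
y = math.sqrt(1 - x * x)          # should equal (14 - sqrt(46)) / 50

# |PA| and |PB| for A = (2, 0), B = (2, 2), P = (x, y)
PA = math.sqrt((2 - x) ** 2 + y ** 2)             # = sqrt(5 - 4x)
PB = math.sqrt((2 - x) ** 2 + (2 - y) ** 2)

f = 2 * math.sqrt(2) * PA + PB
print(f)  # ≈ 5.0
```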
|
3,702,086 | <h1>Problem</h1>
<p>In a two-dimensional Cartesian coordinate system,
there are two points <span class="math-container">$A(2, 0)$</span> and <span class="math-container">$B(2, 2)$</span>
and a circle <span class="math-container">$c$</span> with radius <span class="math-container">$1$</span> centered at the origin <span class="math-container">$O(0, 0)$</span>,
as shown in the figure below.
<a href="https://i.stack.imgur.com/MsBqu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MsBqu.png" alt="enter image description here"></a></p>
<p>If <span class="math-container">$P$</span> is a point on the circle <span class="math-container">$c$</span>,
then what is the minimum value of</p>
<p><span class="math-container">$$ f = 2\sqrt{2}\lvert{PA}\rvert + \lvert{PB}\rvert? $$</span></p>
<h1>Hypothesis</h1>
<p>From my experience,
the solutions to such problems do not seem to be expressible in elementary form in general,
as indicated by answers to the question <a href="https://math.stackexchange.com/q/463978/620160">Minimize the sum of distances between two point and a circle</a>.
However, when I studied this problem on <a href="https://www.geogebra.org/classic/pvcvpvv2" rel="nofollow noreferrer">GeoGebra</a>,
it seems that the minimum value is exactly <span class="math-container">$5$</span> in this specific situation,
with <span class="math-container">$P$</span> located at roughly the position shown below:
<a href="https://i.stack.imgur.com/wxfnu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wxfnu.png" alt="enter image description here"></a></p>
<p>I tried to verify my hypothesis as follows.
Since <span class="math-container">$P$</span> is located inside <span class="math-container">$\angle AOB$</span>,
we set its location to <span class="math-container">$(x, \sqrt{1 - x^2})$</span> (where <span class="math-container">$\sqrt{2}/2 < x < 1$</span>).
Therefore,
<span class="math-container">\begin{align*}
\lvert{PA}\rvert
&= \sqrt{(2 - x)^2 + (1 - x^2)} \\
&= \sqrt{5 - 4x}, \\
\lvert{PB}\rvert
&= \sqrt{(2 - x)^2 + (2 - \sqrt{1 - x^2})^2} \\
&= \sqrt{-4\sqrt{1 - x^2} - 4x + 9}, \\
f
&= 2\sqrt{2} \lvert{PA}\rvert + \lvert{PB}\rvert \\
&= 2\sqrt{2} \sqrt{5 - 4x}
+ \sqrt{-4\sqrt{1 - x^2} - 4x + 9}. \\
\end{align*}</span></p>
<p>I asked <a href="https://www.geogebra.org/classic/dwqhf64w" rel="nofollow noreferrer">GeoGebra</a> again to plot <span class="math-container">$f(x)$</span>:
<a href="https://i.stack.imgur.com/NKVXh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NKVXh.png" alt="enter image description here"></a>
and it seems to confirm my conjecture that
<span class="math-container">$$\min_{\sqrt{2}/2 < x < 1} f(x) = 5$$</span></p>
<h1>Question</h1>
<p>Is my hypothesis correct?
If so, is there a proof of this hypothesis that can be carried out relatively easily by hand
(preferably avoiding, say, the evaluation of <span class="math-container">$f'(x)$</span>)?
Geometric proofs will be especially appreciated.</p>
| Claude Leibovici | 82,404 | <p>The problem is interesting for sure but I believe that the solution is biased by the fact that <span class="math-container">$x_A=x_B=y_A$</span>.</p>
<p>I tried to make it more general with <span class="math-container">$A(x_A,y_A)$</span> and <span class="math-container">$B(x_B,0)$</span>, assuming that both points are in the first quadrant. Let <span class="math-container">$(X,\sqrt{1-X^2})$</span> be the coordinates of point <span class="math-container">$P$</span>, and assume that we want to minimize
<span class="math-container">$$f = k\lvert{PA}\rvert + \lvert{PB}\rvert$$</span></p>
<p>We then have
<span class="math-container">$$\lvert{PA}\rvert=\sqrt{(X-x_A)^2+\left(\sqrt{1-X^2}-y_A\right)^2}$$</span>
<span class="math-container">$$ \lvert{PB}\rvert=\sqrt{-2 x_B X+x_B^2+1}$$</span> which makes
<span class="math-container">$$f=k\sqrt{(X-x_A)^2+\left(\sqrt{1-X^2}-y_A\right)^2}+\sqrt{-2 x_B X+x_B^2+1}$$</span> a highly nonlinear function of <span class="math-container">$X$</span>; this means that, to find its minimizer, we need some <em>reasonable estimate</em> to start from.</p>
<p>To generate it, in a preliminary step, let us consider that we want to minimize
<span class="math-container">$$g= k^2\lvert{PA}\rvert^2 + \lvert{PB}\rvert^2$$</span> which is more pleasant. Computing its derivative, we have
<span class="math-container">$$\frac{dg}{dX}=2 k^2 \left(\frac{X y_A}{\sqrt{1-X^2}}-x_A\right)-2 x_B=0\implies X=\frac{k^2 x_A+x_B}{\sqrt{\left(k^2 x_A+x_B\right)^2+k^4 y_A^2}}$$</span></p>
<p>For the example given in the post, this would give as an estimate <span class="math-container">$X=\frac{9}{\sqrt{82}}\approx 0.993884$</span> while the exact solution of the problem is <span class="math-container">$X=\frac{2+7 \sqrt{46}}{50}\approx 0.989526$</span>.</p>
<p>Back to <span class="math-container">$f$</span>, computing its derivative, we need to solve for <span class="math-container">$X$</span>
<span class="math-container">$$\frac{k X y_A-k \sqrt{1-X^2} x_A}{\sqrt{1-X^2} \sqrt{-2 X x_A+x_A^2-2
\sqrt{1-X^2} y_A+y_A^2+1}}-\frac{x_B}{\sqrt{-2 X x_B+x_B^2+1}}=0$$</span> which will require a numerical method such as Newton's (this will be simple because we already have a good estimate).</p>
<p>For illustration, let us use <span class="math-container">$x_A=3$</span>, <span class="math-container">$y_A=4$</span> and <span class="math-container">$x_B=5$</span> and <span class="math-container">$k=2\sqrt 2$</span>. The preliminary process gives <span class="math-container">$X_0=\frac{29}{\sqrt{1865}}\approx 0.671519$</span>.</p>
<p>Newton iterates will then be
<span class="math-container">$$\left(
\begin{array}{ccc}
n & X_n & f(X_n) \\
0 & 0.67151942 & 15.72034369 \\
1 & 0.77667655 & 15.68958161 \\
2 & 0.75966295 & 15.68798018 \\
3 & 0.75880236 & 15.68797655 \\
4 & 0.75880040 & 15.68797655
\end{array}
\right)$$</span> </p>
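These values can be reproduced numerically. The sketch below uses a golden-section search on a bracketing interval instead of Newton's method (it needs no derivative at all) and recovers the same minimizer and minimum as the iteration table:

```python
import math

# Illustration parameters: A = (3, 4), B = (5, 0), k = 2*sqrt(2)
xA, yA, xB, k = 3.0, 4.0, 5.0, 2.0 * math.sqrt(2.0)

def f(X):
    PA = math.sqrt((X - xA) ** 2 + (math.sqrt(1 - X * X) - yA) ** 2)
    PB = math.sqrt(-2 * xB * X + xB ** 2 + 1)
    return k * PA + PB

# Golden-section search on [0.6, 0.9], which brackets the minimum
phi = (math.sqrt(5) - 1) / 2
a, b = 0.6, 0.9
for _ in range(100):
    c, d = b - phi * (b - a), a + phi * (b - a)
    if f(c) < f(d):
        b = d
    else:
        a = c
X_min = (a + b) / 2
print(X_min, f(X_min))  # ≈ 0.758800, 15.687977
```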
|
4,317,945 | <p>A function <span class="math-container">$h : A → \mathbb{R}$</span> is Lipschitz continuous if <span class="math-container">$\exists K$</span> s.t.</p>
<p><span class="math-container">$$|h(x) - h(y)| \leq K \cdot |x - y|, \forall x, y \in A$$</span></p>
<p>Suppose that <span class="math-container">$I = [a, b]$</span> is a closed, bounded interval; and <span class="math-container">$g : I → \mathbb{R}$</span> is differentiable on <span class="math-container">$I$</span> and the function <span class="math-container">$G = Dg = g' : I → \mathbb{R}$</span> is continuous. Prove that <span class="math-container">$g$</span> is Lipschitz continuous on <span class="math-container">$I$</span>.</p>
| paul garrett | 12,291 | <p>A small amount of cleverness makes this much more tractable, and avoids combinatorics:</p>
<p>Namely, the leading factor of <span class="math-container">$x^2$</span> is <span class="math-container">$(x+1)^2-2(x+1)+1$</span>. So the whole expression can be expressed in powers of <span class="math-container">$x+1$</span>, as
<span class="math-container">$$
\Big((x+1)^2-2(x+1)+1\Big)\cdot (x+1)^n
\;=\; (x+1)^{n+2} - 2(x+1)^{n+1} + (x+1)^n
$$</span>
Thus, the derivative is
<span class="math-container">$$
(n+2)(x+1)^{n+1} - 2(n+1)(x+1)^n + n(x+1)^{n-1}
$$</span>
Rearrange to taste. :)</p>
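As a sanity check, one can compare the derivative of $x^2(x+1)^n$ against a central-difference approximation for sample values (a minimal Python sketch; note the leading coefficient is $n+2$, coming from differentiating $(x+1)^{n+2}$):

```python
# Numerically verify d/dx [x^2 (x+1)^n]
#   = (n+2)(x+1)^(n+1) - 2(n+1)(x+1)^n + n(x+1)^(n-1)
# for one sample n and x, using a central difference.
n = 5
x = 0.7
h = 1e-6

g = lambda t: t * t * (t + 1) ** n
numeric = (g(x + h) - g(x - h)) / (2 * h)
closed = ((n + 2) * (x + 1) ** (n + 1)
          - 2 * (n + 1) * (x + 1) ** n
          + n * (x + 1) ** (n - 1))
print(abs(numeric - closed) < 1e-5)  # True
```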
|
215,835 | <p>According to Willard,</p>
<p>If $(X,\tau)$ is a topological space, a base for $\tau$ is a collection $\mathscr{B} \subset \tau$ such that $\tau=\{ \bigcup_{B \in \mathscr C} : \mathscr C \subset \mathscr B\}$. Evidently, $\mathscr B$ is a base for $X$ iff whenever $G$ is an open set in $X$ and $p \in G$ there is some $B \in \mathscr B$ such that $p \in B \subset G$.</p>
<p>Question 1: Is it safe to assume that in the sentence beginning with "Evidently" it is assumed that $\mathscr B \subset \tau$, for otherwise the iff statement is not true.</p>
<p>Question 2: I've been told that not all basic sets are open, but it seems by the above definition that they are defined to be open.</p>
<p>Comment on Question 2: There is also a definition of being a base for "a" topology. Is this what was meant by not all basic sets are open? Is this just a semantic issue, i.e. the basic sets are open in the topology that the base is a base for but not open in general? Or can there be a base for a topology where the basic sets are not open in that same topology?</p>
| Berci | 41,488 | <p>Q1: Base sets are <strong>open</strong>, indeed: for each $B\in\mathscr B$, consider the one-element subfamily $\mathscr C=\{B\}$.</p>
<p>Q2: Whoever told you that was mistaken, or was thinking of something else.</p>
<p>A system $\mathscr B$ of subsets of a set $X$ can be a <em>basis</em> for a topology on $X$ iff $\bigcup\mathscr B=X$ and $\forall B,C\in\mathscr B\ \forall x\in B\cap C\ \exists D\in\mathscr B$ such that $x\in D\subseteq B\cap C$. The second condition guarantees that $\tau$, as defined above, will be closed under <em>finite intersection</em>.</p>
|
215,835 | <p>According to Willard,</p>
<p>If $(X,\tau)$ is a topological space, a base for $\tau$ is a collection $\mathscr{B} \subset \tau$ such that $\tau=\{ \bigcup_{B \in \mathscr C} : \mathscr C \subset \mathscr B\}$. Evidently, $\mathscr B$ is a base for $X$ iff whenever $G$ is an open set in $X$ and $p \in G$ there is some $B \in \mathscr B$ such that $p \in B \subset G$.</p>
<p>Question 1: Is it safe to assume that in the sentence beginning with "Evidently" it is assumed that $\mathscr B \subset \tau$, for otherwise the iff statement is not true.</p>
<p>Question 2: I've been told that not all basic sets are open, but it seems by the above definition that they are defined to be open.</p>
<p>Comment on Question 2: There is also a definition of being a base for "a" topology. Is this what was meant by not all basic sets are open? Is this just a semantic issue, i.e. the basic sets are open in the topology that the base is a base for but not open in general? Or can there be a base for a topology where the basic sets are not open in that same topology?</p>
| Brian M. Scott | 12,042 | <p><strong>Question 1:</strong> Yes, Willard was still talking about collections $\mathscr{B}\subseteq\tau$. However, with a minor change in wording he wouldn’t have to be, because we could deduce that $\mathscr{B}\subseteq\tau$. Specifically, suppose that $\langle X,\tau\rangle$ is a topological space, and that $\mathscr{B}$ is a family of subsets of $X$ such that </p>
<blockquote>
<p>$(*)\quad$ for each $G\subseteq X$, $G\in\tau$ <strong>iff</strong> for each $p\in G$ there is some $B_p\in\mathscr{B}$ such that $p\in B_p\subseteq G$. </p>
</blockquote>
<p>If $B\in\mathscr{B}$, the condition is satisfied if for each $p\in B$ we let $B_p=B$, so $B\in\tau$, and therefore $\mathscr{B}\subseteq\tau$. Thus, he could have said that a family $\mathscr{B}$ of subsets of a space $X$ is a base for the topology of $X$ iff $(*)$ holds. He wouldn’t have had to specify that the members of $\mathscr{B}$ are open, because that follows from $(*)$.</p>
<p><strong>Question 2:</strong> You were misinformed: basic open sets in a topological space are <strong>always</strong> open sets in that space. If $\mathscr{B}$ is a base for a topology $\tau$ on a set $X$, then $\mathscr{B}\subseteq\tau$.</p>
<p><strong>Question 2':</strong> There is a difference between ‘$\mathscr{B}$ is a base for the topology $\tau$ on $X$’ and ‘$\mathscr{B}$ is a base for some topology on the set $X$’. The definition from Willard at the beginning of your question is the definition of the first of these. However, it is also possible to characterize the collections of sets that are bases for <strong>some</strong> topology on a given set. </p>
<blockquote>
<p>Let $X$ be a set, and let $\mathscr{B}$ be a family of subsets of $X$. Then the following are equivalent: </p>
<ol>
<li>$\mathscr{B}$ is a base for some topology on $X$. </li>
<li>$\left\{\bigcup\mathscr{A}:\mathscr{A}\subseteq\mathscr{B}\right\}$ is a topology on $X$. </li>
<li>$\bigcup\mathscr{B}=X$, and whenever $p\in X$, $B_1,B_2\in\mathscr{B}$, and $p\in B_1\cap B_2$, there is a $B\in\mathscr{B}$ such that $p\in B\subseteq B_1\cap B_2$.</li>
</ol>
</blockquote>
<p>(This is worth trying to prove if you’ve not seen it already.)</p>
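For small finite examples, the equivalence can be checked by brute force. Here is a minimal Python sketch on $X=\{0,1,2\}$ with one particular family $\mathscr{B}$ (chosen arbitrarily for illustration) satisfying condition 3:

```python
from itertools import chain, combinations

X = frozenset({0, 1, 2})
B = [frozenset({0}), frozenset({1}), frozenset({0, 1}), frozenset({1, 2})]

def subfamilies(fam):
    return chain.from_iterable(combinations(fam, r) for r in range(len(fam) + 1))

# Condition 3: B covers X, and every point of an intersection of two
# basic sets lies in a basic set contained in that intersection.
assert frozenset().union(*B) == X
for B1 in B:
    for B2 in B:
        for p in B1 & B2:
            assert any(p in D and D <= B1 & B2 for D in B)

# Condition 2: the unions of subfamilies of B form a topology.
tau = {frozenset().union(*fam) for fam in subfamilies(B)}
assert frozenset() in tau and X in tau
for U in tau:
    for V in tau:
        assert U & V in tau  # closed under finite intersections
print(len(tau))  # 6
```

(Closure under arbitrary unions holds automatically, since a union of unions of subfamilies is again a union of a subfamily.)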
|
1,859,719 | <blockquote>
<p>Let be $U (x,y) = x^\alpha y^\beta$. Find the maximum of the function $U(x,y)$ subject to the equality constraint $I = px + qy$.</p>
</blockquote>
<p>I have tried to use the Lagrangian function to find the solution for the problem, with the equation</p>
<p>$$\nabla\mathscr{L}=\vec{0}$$</p>
<p>where $\mathscr{L}$ is the Lagrangian function and $\vec{0}=\pmatrix{0,0,0}$.
Using this method I have a system of $3$ equations with $3$ variables, but I can't simplify this system:</p>
<p>$$\alpha x^{\alpha-1}y^\beta-p\lambda=0$$
$$\beta y^{\beta-1}x^\alpha-q\lambda=0$$
$$I=px+qy$$</p>
| marty cohen | 13,079 | <p>If
$I=px+qy$,
then
$y = (I-px)/q$,
so
$x^ay^b
=x^a((I-px)/q)^b
=x^a(I-px)^b/q^b
$.</p>
<p>Differentiating,
we want</p>
<p>$\begin{array}{ll}
0
&=(x^a(I-px)^b)'\\
&=ax^{a-1}(I-px)^b-x^apb(I-px)^{b-1}\\
&=x^{a-1}(I-px)^{b-1}(a(I-px)-xpb)\\
&=x^{a-1}(I-px)^{b-1}(aI-apx-xpb)\\
&=x^{a-1}(I-px)^{b-1}(aI-xp(a+b))\\
\text{so}\\
x
&=\dfrac{aI}{p(a+b)}\\
\text{and}\\
y
&=(I-px)/q\\
&=(I-p\dfrac{aI}{p(a+b)})/q\\
&=(I-\dfrac{aI}{(a+b)})/q\\
&=\dfrac{(I(a+b)-aI}{q(a+b)}\\
&=\dfrac{bI}{q(a+b)}\\
\end{array}
$</p>
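One can sanity-check this closed-form maximizer numerically for illustrative parameter values (a Python sketch; the values of $a$, $b$, $p$, $q$, $I$ below are chosen arbitrarily):

```python
# With U(x, y) = x**a * y**b on the budget line I = p*x + q*y,
# the candidate maximizer is x* = a*I/(p*(a+b)), y* = b*I/(q*(a+b)).
a, b, p, q, I = 2.0, 3.0, 1.5, 2.0, 10.0

x_star = a * I / (p * (a + b))
y_star = b * I / (q * (a + b))
assert abs(p * x_star + q * y_star - I) < 1e-12  # lies on the budget line

def U_on_line(x):
    y = (I - p * x) / q  # stay on the constraint
    return x ** a * y ** b

best = U_on_line(x_star)
# U at the candidate beats U at nearby feasible points
for dx in (-0.1, -0.01, 0.01, 0.1):
    assert U_on_line(x_star + dx) < best
print(x_star, y_star, best)
```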
|
1,878,884 | <p>I recently figured out my own algorithm to factorize a number given we know it has $2$ distinct prime factors. Let:</p>
<p>$$ ab = c$$</p>
<p>Where, $a<b$</p>
<p>Then it isn't difficult to show that:</p>
<p>$$ \frac{c!}{c^a}= \text{integer}$$</p>
<p>In fact, </p>
<p>$$ \frac{c!}{c^{a+1}} \neq \text{integer}$$</p>
<p>So the idea is to first asymptotically calculate $c!$ and then keep dividing by $c$ until one does not get an integer anymore. </p>
<h2>Edit</h2>
<p>I just realized a better algorithm would be to first divide $c^{\lfloor \sqrt {c} /2\rfloor }$. If it is not an integer then divide by $c^{\lfloor \sqrt {c} /4\rfloor }$. However is it is an integer then divide by: $c^{3\lfloor \sqrt {c} /4 \rfloor }$ . And so on ... </p>
<h2>Question</h2>
<p>I was wondering if this already existed in the literature? And what is the running time of this algorithm? Can this algorithm be improved upon?</p>
| gt6989b | 16,192 | <p>Not sure about correctness, but since $c$ has a representation in $\log c$ bits, you have to make $\Theta(c)$ multiplications to do this naively, so this algorithm is <strong>expoential</strong>, not polynomial</p>
<p><strong>UPDATE</strong></p>
<p>The edit improves on the number of divisions, but not on the number of multiplications. Unless you find a way to compute $c!$ in an order less than $c$ (perhaps by considering the Gamma function, but not sure), the running time will stay exponential.</p>
|
2,021,557 | <p>I'm not really sure how to do this; I guessed it had something to do with vector functions, but I couldn't find a way to do it. Can you please help?</p>
<p>The equations are:</p>
<p>$$f(x,y) = x^2 + y^2, \qquad g(x,y) = xy + 10 $$</p>
<p>and I need a vector equation.
Thank you in advance!</p>
| Jean Marie | 305,862 | <p>We look for the intersection of the two surfaces (which are a paraboloid and a hyperbolic paraboloid):</p>
<p>$$\tag{0}\cases{z=x^2+y^2 & (a)\\z=xy+10 & (b)}$$</p>
<p>Let $(r,\theta)$ be the polar coordinates of $(x,y)$; i.e, </p>
<p>$$\tag{1}x=r \cos(\theta), \ \ y=r \sin(\theta).$$</p>
<p>Plugging these expressions in (0)(a) gives </p>
<p>$$\tag{2}r=\sqrt{z}.$$</p>
<p>Using $(0)(b)$, $(1)$ and $(2)$: $z=\sqrt{z} \cos(\theta) \sqrt{z}\sin(\theta)+10$, yielding: </p>
<p>$$z(1-\frac12 \sin(2 \theta))=10 \ \ \iff z=f(\theta) \ \ \text{with} \ \ f(\theta):=\dfrac{20}{2-\sin(2 \theta)}$$</p>
<p>Plugging this expression of $z$ in $(1)$ gives the final description of the intersection curve as a vector function of one variable, the polar angle:</p>
<p>$$\cases{x=\sqrt{f(\theta)}\cos(\theta)\\y=\sqrt{f(\theta)}\sin(\theta)\\z=f(\theta)}$$</p>
<p>(valid for any value of $\theta$ because $f(\theta)>0$ )</p>
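A quick numerical check (a Python sketch) that this parametrization indeed lies on both surfaces:

```python
import math

# Verify that the parametrized curve satisfies z = x^2 + y^2
# and z = x*y + 10 for a sample of polar angles theta.
def point(theta):
    f = 20.0 / (2.0 - math.sin(2.0 * theta))  # f(theta) > 0 always
    return math.sqrt(f) * math.cos(theta), math.sqrt(f) * math.sin(theta), f

for k in range(12):
    theta = 2 * math.pi * k / 12
    x, y, z = point(theta)
    assert abs(x * x + y * y - z) < 1e-9   # on the paraboloid
    assert abs(x * y + 10 - z) < 1e-9      # on the hyperbolic paraboloid
print("both surface equations satisfied")
```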
|
1,102,885 | <p>I have exams in Machine Learning coming up and I need help answering this question.</p>
<blockquote>
<p>There are a million identical fish in a lake, one of which has
swallowed the One True Ring. You must get it back! After months of
effort, you catch another random fish and pass your metal detector
over it, and the detector beeps! It is the best metal detector money
can buy, and has a very low error rate: it fails to beep when near the
ring only one in a billion times, and it beeps incorrectly only one in
ten thousand times. What is the probability that, at long last, you’ve
found your precious ring?</p>
</blockquote>
<p>This is my answer I worked out using Bayes rule:</p>
<p><img src="https://i.stack.imgur.com/76WjZ.gif" alt="enter image description here"></p>
<p>Is this the right way to work out this type of question and is that somewhat the correct answer?</p>
| Aerinmund Fagelson | 173,945 | <p>Surely the probability of the detector beeping if you have found the fish is 999999999/1000000000, not 9999/10000;
whereas the probability of the detector not beeping when the fish is not the right one is 9999/10000, not 999999999/1000000000?</p>
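For reference, here is one standard way to set up Bayes' rule for this puzzle (a Python sketch; the prior $1/10^6$ and the two error rates are read directly from the problem statement):

```python
from fractions import Fraction

# Prior: the caught fish has the ring with probability 1/1,000,000.
# P(beep | ring)    = 1 - 1/10^9   (misses the ring one in a billion times)
# P(beep | no ring) = 1/10,000     (false beep one in ten thousand times)
prior = Fraction(1, 10**6)
p_beep_ring = 1 - Fraction(1, 10**9)
p_beep_no_ring = Fraction(1, 10**4)

posterior = (p_beep_ring * prior) / (
    p_beep_ring * prior + p_beep_no_ring * (1 - prior)
)
print(float(posterior))  # ≈ 0.0099, i.e. roughly 1 in 101
```

Even with an excellent detector, the tiny prior keeps the posterior small: a beep still means the ring has only been found about one time in a hundred.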
|
3,914,626 | <p>Let <span class="math-container">$A$</span> be a <span class="math-container">$k$</span>-dimensional nonsingular matrix with integer coefficients. Is it true that <span class="math-container">$\|A^{-1}\|_\infty \leq 1$</span>? How can I show that? Could you give me a counterexample? It is clear that <span class="math-container">$\|A^{-1}\|_{\infty}=\frac{1}{\min\{\|Ax\|_{\infty}:\|x\|_{\infty}=1\}}$</span>. My idea is to show that the minimum is attained at an integer point, so the denominator is bigger than <span class="math-container">$1$</span>. Is my idea right?</p>
<p>Thank you very much!</p>
| Bram28 | 256,001 | <p>First of all, I hope you understand the intuition behind this:</p>
<p>Just because some <em>specific</em> object has some property obviously does not mean that <em>all</em> objects from the domain have that property.</p>
<p>However, if an arbitrary object from the domain has some property, then all objects do.</p>
<p>And just to be clear: by 'arbitrary' object we mean: we know and have assumed nothing about this object other than that it is some object from the domain.</p>
<p>Now, how exactly this is being formalized in a specific formal system depends on a lot of formal details. In some systems, variables are used to denote arbitrary objects, but in other systems, 'temporary constants' are being used, typically in combination with certain kinds of subproofs.</p>
<p>So, if you ask me if whether you can apply <span class="math-container">$\forall \ I$</span> to infer <span class="math-container">$\forall x \ P(x)$</span> from <span class="math-container">$P(John)$</span>, I really cannot answer that; it all depends on the specifics of the system you are using.</p>
|
1,347 | <p>Sometimes I check how many users of <code>mathematica.stackexchange.com</code> there are.<br>
I remember that a few weeks ago there were about 15 thousand, and recently I've been surprised to see that the <a href="https://mathematica.stackexchange.com/users?tab=NewUsers&sort=creationdate">new users</a> are signed up with numbers over 18000.<br>
Let's check <a href="https://mathematica.stackexchange.com/users?page=21&tab=newusers&sort=creationdate">this site</a>: the new users there have numbers slightly over 14000. The list of new users comprises <code>21</code> pages of <code>4 X 9 = 36</code> users each, which accounts for fewer than <code>800 << 18000 - 15000</code> users.</p>
<p>What is going on?</p>
<p>Let's look at <a href="https://mathematica.stackexchange.com/users?page=18&tab=newusers&sort=creationdate">this page</a>: the number of <a href="https://mathematica.stackexchange.com/users/14837/user49115">user49115</a> is <code>14837</code>, while the next one in the registry is <a href="https://mathematica.stackexchange.com/users/15838/jazz">Jazz</a>, with number <code>15838</code>, i.e. <code>15838 = 14837 + 1001</code>.</p>
<p>I know I can find e.g. how many <a href="https://mathematica.stackexchange.com/help/badges/1/teacher">teachers</a> or <a href="https://mathematica.stackexchange.com/help/badges/2/student">students</a> there are, but the number of all users is also interesting.</p>
<p>So what is the reliable number of all users (including unregistered) and of those who are registered? </p>
| Oded | 6,870 | <p>This is due to how our databases are set up and how they operate.</p>
<p>The Id field (user number) is an auto incrementing field - essentially when a user gets created, it takes the next number (not entirely accurate, there are some wrinkles there, not relevant to this).</p>
<p>We operate with replication - our databases are replicated between data centers to help with backup and for disaster recovery (if a datacenter/database goes down, we have a replica with near to current data).</p>
<p>In our <a href="http://msdn.microsoft.com/en-gb/library/ff877884.aspx" rel="nofollow">always-on configuration</a>, to avoid too much network chatter between replicas, seeds are pre-allocated in chunks of 1000 (standard behavior for SQL Server 2012 and above, if I understand things correctly).</p>
<p>If there is a replication hiccup, the result is pretty much what you see.</p>
<p>We did upgrade our database servers around the time of this gap - so, it all fits nicely together... </p>
|
1,457,956 | <p>I am finding the positive values of $x$ for which the following series is convergent $$ \sum_{n=1}^{\infty}x^{\sqrt{n}}$$ It is sure that it is not convergent for $x\geq1$ as $n$-th term will not tend to zero. Now $x\in[0,1)$ how to check its convergence? Please help me to solve it. Thanks. </p>
| Yes | 155,328 | <p>Let $x \in ]0,1[$; then
$$x^{\sqrt{n}} = \exp(\sqrt{n} \log x) = \frac{1}{e^{\sqrt{n}|\log x|}} < \frac{1}{(\sqrt{n})^{3}}
$$
for large $n$, so by the comparison test the desired series converges.</p>
|
1,457,956 | <p>I am finding the positive values of $x$ for which the following series is convergent $$ \sum_{n=1}^{\infty}x^{\sqrt{n}}$$ It is sure that it is not convergent for $x\geq1$ as $n$-th term will not tend to zero. Now $x\in[0,1)$ how to check its convergence? Please help me to solve it. Thanks. </p>
| Bernard | 202,857 | <p><em>Without the exponential:</em></p>
<p>If $0\le x <1$, we have:
\begin{align*}
\sum_{n\ge1}x^{\sqrt n}&\le\sum_{n\ge1}x^{\lfloor\sqrt n\rfloor}=3x+5x^2+7x^3+9x^4+\dotsm \\
&\le 2+4x+6x^2+8x^3+10x^4+\dotsm=2(1+2x+3x^2+4x^3+5x^4+\dotsm)\\
&= 2\biggl(\frac1{1-x}\biggr)'=\frac2{(1-x)^2}
\end{align*}
hence the series converges.</p>
|
802,960 | <p>$$\sum\limits_{k=1}^n\arctan\frac{ 1 }{ k }=\frac{\pi}{ 2 }$$
Find the value of $n$ for which the equation is satisfied. </p>
| Tom-Tom | 116,182 | <p>Let us write $$s_n=\sum_{k=1}^n \arctan\frac1k.$$
The sequence $(s_n)_{n\in\mathbf N}$ is increasing.
We have $s_0=0$, $s_1=\frac\pi4$ and $s_2=\frac\pi4+\arctan\frac12$.
As $\frac12<1$, $\arctan\left(\frac12\right)<\frac\pi4$ and $s_2<\frac\pi2$.
Let us compute $s_3$ using the <a href="http://en.wikipedia.org/wiki/Inverse_trigonometric_functions#Arctangent_addition_formula">arctan addition formula</a>
$$s_3=\frac\pi4+\arctan\frac12+\arctan\frac13=\frac\pi4+\arctan\frac{\frac12+\frac13}{1-\frac12\frac13}=\frac\pi4+\arctan1=\frac\pi2.$$
$n=3$ is a solution. As $s_4>s_3$, it's the only one.</p>
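A one-line numerical check of this identity (a Python sketch):

```python
import math

# arctan(1) + arctan(1/2) + arctan(1/3) should equal pi/2
s3 = math.atan(1) + math.atan(1 / 2) + math.atan(1 / 3)
print(abs(s3 - math.pi / 2) < 1e-12)  # True
```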
|
874,300 | <p>I'm having trouble grasping how to set these types of problems. There are a lot of related questions but it's difficult to abstract a general procedure on finding constants that give the given function bounding constraints to make it big-theta(general function). </p>
<p>so $\frac{x^4 +7x^3+5}{4x+1}$ is $ \Theta (x^3) $</p>
<p>To show this, we need to find constants such that</p>
<p>$$ |c_1|(x^3) \leq \frac{x^4 +7x^3+5}{4x+1} \leq |c_2|(x^3)$$
In addition, there also has to be a $k$ such that for all values $x >k $ the argument holds.</p>
<p>start with one inequality
$$ |c_1|(x^3) \leq \frac{x^4 +7x^3+5}{4x+1}$$
$$ = |c_1| \leq \frac{x^4 +7x^3+5}{4x^4+x^3}$$
$$ = |c_1| \leq \frac{x^4}{x^3(4x+1)} + \frac{7x^3}{x^3(4x+1)} + \frac{5}{x^3(4x+1)}$$
so basically for $x > 0$, $$ |c_1| \leq \frac{1}{4} + 0 + 0$$
I'm assuming after I take the limit as x goes to infinity, i could choose any $c_1$ less than or equal to $\frac{1}{4}$? The other way would then have the same procedure? What would I set $k$ to?</p>
| brogrenkp | 115,493 | <p>You are on the right track. However, rather than dividing by $x^3$, I would recommend multiplying by $(4x+1)$. The reason for this is so that you will have polynomials of degree $4$ on all sides of the inequality.</p>
<p>It is okay to try different values for $k$ once you get a more simplified inequality. For this problem, I believe setting $k$ to $1$ will work great.</p>
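<p>To make this concrete, here is a numerical check with one possible choice of witnesses (the values $c_1=\frac14$, $c_2=3$, $k=1$ are my own assumption; other choices work too):</p>

```python
def f(x):
    return (x**4 + 7 * x**3 + 5) / (4 * x + 1)

c1, c2, k = 0.25, 3.0, 1.0  # candidate witnesses for f(x) = Theta(x^3)

xs = [k + 0.5 * i for i in range(1, 2000)]  # sample points with x > k
assert all(c1 * x**3 <= f(x) <= c2 * x**3 for x in xs)
```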
|
373,958 | <p>Is $\sum_{n=1}^\infty(2^{\frac1{n}}-1)$ convergent or divergent?
$$\lim_{n\to\infty}(2^{\frac1{n}}-1) = 0$$
I can't think of anything to compare it against. The integral looks too hard:
$$\int_1^\infty(2^{\frac1{n}}-1)dn = ?$$
Root test seems useless as $\left(2^{\frac1{n}}\right)^{\frac1{n}}$ is probably even harder to find a limit for. Ratio test also seems useless because $2^{\frac1{n+1}}$ can't cancel out with ${2^{\frac1{n}}}$. It seems like the best bet is comparison/limit comparison, but what can it be compared against?</p>
| Paolo Leonetti | 45,736 | <p>Since $2^x=1+x\ln 2+O(x^2)$ as $x\to 0$ then
$$\sum_{n\ge 1}\left(2^{1/n}-1\right)\asymp \sum_{n\ge 1}\frac{1}{n},$$
which diverges.</p>
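<p>Numerically, $n\,(2^{1/n}-1)\to\ln 2$, which is exactly the limit-comparison constant (a quick check):</p>

```python
import math

for n in (10, 1000, 100_000):
    print(n, n * (2 ** (1 / n) - 1))  # approaches ln(2) = 0.6931...

n = 10**7
assert abs(n * (2 ** (1 / n) - 1) - math.log(2)) < 1e-6
```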
|
373,958 | <p>Is $\sum_{n=1}^\infty(2^{\frac1{n}}-1)$ convergent or divergent?
$$\lim_{n\to\infty}(2^{\frac1{n}}-1) = 0$$
I can't think of anything to compare it against. The integral looks too hard:
$$\int_1^\infty(2^{\frac1{n}}-1)dn = ?$$
Root test seems useless as $\left(2^{\frac1{n}}\right)^{\frac1{n}}$ is probably even harder to find a limit for. Ratio test also seems useless because $2^{\frac1{n+1}}$ can't cancel out with ${2^{\frac1{n}}}$. It seems like the best bet is comparison/limit comparison, but what can it be compared against?</p>
| paw88789 | 147,810 | <p>You could use the fact that for a series of positive terms, $\sum_{n=1}^\infty a_n$ converges if and only if $\prod_{n=1}^\infty (1+a_n)$ converges.</p>
<p>Applying this result to the given problem: the given series converges if and only if the infinite product $\prod_{n=1}^\infty 2^{\frac1n}$ converges.</p>
<p>For this infinite product, the partial products are of the form $2^{1+\frac12+\frac13+\cdots+\frac1n}$, which is divergent since the exponent is a partial sum of the harmonic series, and hence going to $\infty$.</p>
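<p>Numerically, the partial products are exactly $2^{H_n}$ and blow up with the harmonic numbers (a quick check):</p>

```python
import math

H = 0.0     # harmonic partial sum H_n
prod = 1.0  # partial product of 2**(1/k)
for n in range(1, 2001):
    H += 1 / n
    prod *= 2 ** (1 / n)

assert math.isclose(prod, 2 ** H, rel_tol=1e-9)  # partial products are 2^{H_n}
assert 2 ** H > 100                              # H_n -> infinity, so they diverge
```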
|
91,590 | <p>So I'm reviewing old homeworks for an upcoming comp sci test and I came across this question:</p>
<p>Say whether the following statement is True, False or Unknown: </p>
<blockquote>
<p>The problem of checking whether a given Boolean formula has exactly
one satisfying assignment, is NP-complete</p>
</blockquote>
<p>My original answer to this was True because it seems to me that you can reduce SAT to this. Here's my solution:</p>
<p>Let's call this problem EX_SAT. Given a boolean formula s, we can construct a TM M where L(M) = SAT using EX_SAT. Assume that we have an NTM P that decides EX_SAT, and an NTM Q that decides DOUBLE_SAT (the problem of determining whether a Boolean formula has two or more satisfying assignments). We know that DOUBLE_SAT is NP-complete because we reduced SAT to it in an earlier homework problem.</p>
<pre><code>M = on input s
1. Run P on s.
2. If P accepts, then accept.
3. If P rejects then run Q on s.
4. If Q accepts then accept.
5. If Q rejects then reject.
</code></pre>
<p>I see that EX_SAT doesn't have a polynomial time verifier, and I also see the one flaw in this proof is that I also have to use DOUBLE_SAT to complete it - which probably doesn't allow us to conclude that EX_SAT is NP-complete, but I thought I would ask this here because it might aid in my understanding of the topic.</p>
<p>Any thoughts would be much appreciated :) </p>
| templatetypedef | 8,955 | <p>I believe that this is an open problem, because the problem of "does φ have exactly one satisfying assignment?" is co-NP-complete by a reduction from the unsatisfiability problem, which is known to be co-NP-complete. The idea is that given a formula φ with variables v<sub>1</sub>, v<sub>2</sub>, ..., v<sub>n</sub>, we can construct the formula φ' as</p>
<p>φ' = (φ ∨ w) ∧ (w → ¬ v<sub>1</sub>) ∧ (w → ¬ v<sub>2</sub>) ∧ ... ∧ (w → ¬ v<sub>n</sub>)</p>
<p>The idea behind φ' is that if φ is unsatisfiable, this has exactly one satisfying assignment: make the new variable w true and make each v<sub>i</sub> false. Otherwise, for each satisfying assignment of φ, there is one satisfying assignment to φ' formed by using the satisfying assignment to φ with w set to false. This reduction can be computed in polynomial time, so the problem of "does φ have exactly one satisfying assignment?" is co-NP-hard. Since it's also in co-NP (because you can easily verify a "no" answer given a certificate containing two satisfying assignments), this problem is co-NP-complete. Since it's unknown whether NP = co-NP, no co-NP-complete problem is known to be in NP. Thus it's an open problem whether this problem is also contained in NP, though I think the general conjecture is "no."</p>
<p>Hope this helps!</p>
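<p>The counting behind this reduction can be brute-force checked on tiny formulas (an added illustration; encoding formulas as Python functions over truth assignments is my own arbitrary choice):</p>

```python
from itertools import product

def count_sat(formula, nvars):
    """Count satisfying assignments of a boolean function of nvars inputs."""
    return sum(formula(*bits) for bits in product([False, True], repeat=nvars))

def reduce_formula(phi, nvars):
    """phi' = (phi or w) and (w -> not v_i) for every variable v_i."""
    def phi_prime(*bits):
        *vs, w = bits
        return (phi(*vs) or w) and all(not (w and v) for v in vs)
    return phi_prime, nvars + 1

# phi unsatisfiable  ->  phi' has exactly one satisfying assignment
unsat = lambda a, b: a and not a
assert count_sat(*reduce_formula(unsat, 2)) == 1

# phi satisfiable (here with 3 models)  ->  phi' has more than one
sat = lambda a, b: a or b
assert count_sat(sat, 2) == 3
assert count_sat(*reduce_formula(sat, 2)) == 3 + 1  # models of phi, plus (w=T, all v_i=F)
```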
|
1,672,131 | <p>A card game is played with a deck whose cards can be one of 6 suits, one of the suits being hearts, and one of 11 ranks. A hand is a subset of 3 cards. What is the probability that a hand has exactly two hearts given that it has the 2 of hearts? Please explain.</p>
| Robert Israel | 8,508 | <p>Hint: $x$ has $n$ digits if $10^{n-1} \le x < 10^n$.</p>
|
2,282,818 | <p>I'm getting $f(x)=2x+f(0)$ and $f(x)=f(0)-2x$ by setting $y=0$, but I'd like to verify. Am I right?</p>
| Martin R | 42,969 | <p>For <span class="math-container">$y = 0$</span> we get that
<span class="math-container">$$
f(x) = f(0) \pm 2x
$$</span>
for all <span class="math-container">$x \in \Bbb R$</span>. We want to show that the same sign must hold for all <span class="math-container">$x$</span>, i.e. either
<span class="math-container">$$
f(x) = f(0) + 2x \quad \text{for all $x$}
$$</span>
or
<span class="math-container">$$
f(x) = f(0) - 2x \quad \text{for all $x$.}
$$</span></p>
<p>So assume that
<span class="math-container">$$
f(x_1) = f(0) + 2x_1 \\
f(x_2) = f(0) - 2x_2
$$</span>
for non-zero <span class="math-container">$x_1, x_2$</span>. Then
<span class="math-container">$$
2|x_1 - x_2| = |f(x_1) - f(x_2)| = 2 |x_1 + x_2| \\
\implies (x_1 - x_2) ^2 = (x_1 + x_2)^2 \\
\implies x_1 x_2 = 0
$$</span>
which contradicts the assumption that <span class="math-container">$x_1, x_2 \ne 0$</span>.</p>
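<p>A quick numerical illustration of the condition <span class="math-container">$|f(x)-f(y)| = 2|x-y|$</span> used above (the sample points and the constant <span class="math-container">$f(0)=5$</span> are arbitrary choices):</p>

```python
def satisfies(f, pts):
    # exact check of |f(x) - f(y)| == 2|x - y| on dyadic sample points
    return all(abs(f(x) - f(y)) == 2 * abs(x - y) for x in pts for y in pts)

pts = [i / 4 for i in range(-8, 9)]

# Both global branches work, for any constant f(0):
assert satisfies(lambda x: 5 + 2 * x, pts)
assert satisfies(lambda x: 5 - 2 * x, pts)

# Mixing the signs on different points does not:
mixed = lambda x: 5 + (2 * x if x >= 0 else -2 * x)
assert not satisfies(mixed, pts)
```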
|
2,619,185 | <p>Let $$P=(X+2)^m+(X+3)^{2m+3}$$ and $$Q=X^2+5X+7.$$ I need to show that $Q$ divides $P$ for any $m$ natural. </p>
<p>I said like this: let $a$ be a root of $X^2+5X+7=0$. Then $a^2+5a+7=0$. </p>
<p>Now, I know I need to show that $P(a)=0$, but I do not know if it is the right path since I have not found any way to do it.</p>
| zwim | 399,263 | <p>We can also prove it by induction.</p>
<p>$P_0(x)=1+(x+3)^3=x^3+9x^2+27x+28=(x^2+5x+7)(x+4)\quad\checkmark$</p>
<p>$\begin{align}
P_{m+1}(x) &=(x+2)^{m+1}+(x+3)^{2m+5}\\
&=(x+2)^m(x+2)+(x+3)^2\overbrace{\big((x^2+5x+7)Q_m(x)-(x+2)^m\big)}^{\text{induction hypothesis}}\\\\
&=(x+2)^m\underbrace{(x+2-x^2-6x-9)}_{-x^2-5x-7}+(x+3)^2(x^2+5x+7)Q_m(x)\\\\
&= (x^2+5x+7)Q_{m+1}(x)\quad\checkmark
\end{align}$</p>
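<p>A quick numerical spot-check of the claim (my addition): evaluate $P_m$ at a complex root $a$ of $x^2+5x+7$; divisibility forces $P_m(a)=0$.</p>

```python
import cmath

a = (-5 + 1j * cmath.sqrt(3)) / 2   # a root of x^2 + 5x + 7
assert abs(a**2 + 5 * a + 7) < 1e-12

for m in range(10):
    P = (a + 2) ** m + (a + 3) ** (2 * m + 3)
    assert abs(P) < 1e-9   # P_m(a) = 0, consistent with (x^2+5x+7) | P_m
```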
|
804,882 | <p>If both $L:V\rightarrow W$ and $M:W\rightarrow U$ are linear transformations that are invertible, how can you prove that the composition $(M\circ L):V\rightarrow U$ is also invertible.</p>
| EPS | 133,563 | <p>Composition of two invertible functions is invertible and composition of two linear maps is linear.</p>
|
181,367 | <p>It is well known that compactness implies pseudocompactness; this follows from <a href="https://secure.wikimedia.org/wikipedia/en/wiki/Heine%E2%80%93Borel_theorem">the Heine–Borel theorem</a>. I know that the converse does not hold, but what is a counterexample?</p>
<p>(A <a href="https://secure.wikimedia.org/wikipedia/en/wiki/Pseudocompact_space"><strong>pseudocompact space</strong></a> is a topological space $S = \langle X,{\mathfrak I}\rangle$ such that every continuous function $f:S\to\Bbb R$ has bounded range.)</p>
| MJD | 25,554 | <p>I couldn't think of an obvious counterexample, so I looked <a href="https://secure.wikimedia.org/wikipedia/en/wiki/Pseudocompact_space" rel="noreferrer">in Wikipedia</a> and it suggested <a href="https://secure.wikimedia.org/wikipedia/en/wiki/Particular_point_topology" rel="noreferrer">the particular point topology</a> on an infinite set.</p>
<p>$\def\p{{\bf x}}$In the particular point topology, we have a distinguished point, $\p\in X$, and the topology is that a set is open if and only if it is either empty, or includes $\p$. </p>
<p>Let $S=\langle X,{\mathfrak I}\rangle$ be an infinite particular-point space with distinguished point $\p$. It is clear that $S$ is not compact: the open cover consisting of $\{p, \p\}$ for each $p\in X$ other than $\p$ is an infinite open cover of $S$ with no proper, and therefore no finite subcover.</p>
<p>$\def\R{{\Bbb R}}$However, the space is pseudocompact. Let $f:S\to\R$ be a continuous function. Then $f^{-1}[\Bbb R\setminus \{f(\bf x)\}]$ is an open set not containing $\bf x$, so it must be empty, hence $f$ is constant.</p>
|
181,367 | <p>It is well known that compactness implies pseudocompactness; this follows from <a href="https://secure.wikimedia.org/wikipedia/en/wiki/Heine%E2%80%93Borel_theorem">the Heine–Borel theorem</a>. I know that the converse does not hold, but what is a counterexample?</p>
<p>(A <a href="https://secure.wikimedia.org/wikipedia/en/wiki/Pseudocompact_space"><strong>pseudocompact space</strong></a> is a topological space $S = \langle X,{\mathfrak I}\rangle$ such that every continuous function $f:S\to\Bbb R$ has bounded range.)</p>
| user3810316 | 426,834 | <p>This nice paper presents all kinds of relations between different compactness notions.
<a href="http://www.cs.cmu.edu/~yaoliang/mynotes/compact.pdf" rel="nofollow noreferrer">http://www.cs.cmu.edu/~yaoliang/mynotes/compact.pdf</a></p>
<p>E.g., if a topological space $K$ is compact or sequentially compact, then it is countably compact. If $K$ is countably compact, then it is pseudo-compact.</p>
<p>So pick any countably or sequentially compact space that is not compact.
Some are given here: <a href="https://math.stackexchange.com/questions/1629731/limit-point-compactness-does-not-imply-compactness-counter-example">Limit point Compactness does not imply compactness counter-example</a></p>
<p>Or pick a limit-point compact space which is T1 or N1 (hence pseudo-compact) but not compact.
Some are given here: <a href="https://math.stackexchange.com/questions/1810095/prove-or-disprove-limit-point-compact-hausdorff-space-imply-compact-space">Prove or disprove : limit point compact hausdorff space imply compact space?</a></p>
<p>N.B. A continuous image of any compact space is compact. A compact subset of ${\mathbb R}$ is bounded. Therefore, compactness implies pseudo-compactness.</p>
|
160,518 | <p>In Mathematics, we know the following is true:</p>
<p>$$\int \frac{1}{x} \space dx = \ln(x) + C$$</p>
<p>Not only that, this rule works for constants added to x:
$$\int \frac{1}{x + 1}\space dx = \ln(x + 1) + C$$
$$\int \frac{1}{x + 3}\space dx = \ln(x + 3) + C$$
$$\int \frac{1}{x - 37}\space dx = \ln(x - 37) + C$$
$$\int \frac{1}{x - 42}\space dx = \ln(x - 42) + C$$</p>
<p>So its pretty safe to say that $$\int \frac{1}{x + a}\space dx = \ln(x + a) + C$$ But the moment I introduce $x^a$ where $a$ is not equal to 1, the model collapses. The integral of $1/x^a$ is <strong>not</strong> equal to $\ln(x^a)$. The same goes for $\cos(x)$, and $\sin(x)$, and other trig functions. </p>
<p>So when are we allowed or not allowed to use the rule of $\ln(x)$ when integrating functions?</p>
| Community | -1 | <p>Perhaps I can reverse-address your question. Oftentimes (typically in optimization problems) when dealing with a positive real function $f$ it is easier to differentiate $\log f$ than $f$ itself. It's easy to check that the so-called logarithmic derivative satisfies $\frac{d}{dx} \log [f (x)] = \frac{f'(x)}{f(x)}.$ Note also that we can recover the original derivative by multiplying through with $f.$ In terms of primitives, this is the same as saying $\displaystyle\int \frac{f'(x)}{f(x)} dx = \log |f(x)| + C.$ This is a general case of some of the formulae you presented and is useful in quickly evaluating many definite integrals by substitution - for instance, $\displaystyle\int \cot x \, dx = \log |\sin x | +C$ is immediate by this formula.</p>
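<p>A quick numerical illustration of the logarithmic-derivative identity (the sample function $f=\sin$ and the point are arbitrary):</p>

```python
import math

f = math.sin
fp = math.cos          # f'
x, h = 1.0, 1e-6

# d/dx log|f(x)| should equal f'(x)/f(x)  (= cot x here)
numeric = (math.log(abs(f(x + h))) - math.log(abs(f(x - h)))) / (2 * h)
exact = fp(x) / f(x)
print(numeric, exact)  # both ~0.64209...
assert abs(numeric - exact) < 1e-6
```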
|
3,529,359 | <p>Let <span class="math-container">$\Omega$</span> be a bounded and smooth domain and let <span class="math-container">$J:H^1(\Omega) \times H^1_0(\Omega) \to \mathbb{R}$</span> be defined by</p>
<p><span class="math-container">$$J(u,v) = \int_\Omega f(u)|\nabla v|^2$$</span>
where <span class="math-container">$f\colon \mathbb{R} \to \mathbb{R}$</span> is a smooth function, bounded above and below away from zero (other assumptions can be added as necessary).</p>
<p>Under what conditions on <span class="math-container">$f$</span> do I get that <span class="math-container">$J$</span> is weakly lower semicontinuous? </p>
<p>Obviously if <span class="math-container">$f \equiv 1$</span> then it is true, but what about the more genera case?</p>
| Johannes Hahn | 62,443 | <p>First note that <span class="math-container">$f(u)|\nabla v|^2 = |\sqrt{f(u)}\,\nabla v|^2$</span> pointwise. Therefore your integral is <span class="math-container">$J(u,v)=\|\sqrt{f(u)}\,\nabla v\|_{L^2}^2$</span>. Now it is known that <span class="math-container">$x_n \xrightarrow[weak]{X} x \implies \liminf \|x_n\|_X \geq \|x\|_X$</span> holds in every Hilbert space <span class="math-container">$X$</span>.</p>
<p>Therefore we aim to prove <span class="math-container">$\Phi: H^1\times H^1 \to L^2, (u,v) \mapsto g(u)\nabla v$</span> is (sequentially) continuous from the weak to the weak topology whenever <span class="math-container">$g:\mathbb{R}\to\mathbb{R}$</span> is continuous and bounded. In fact we break it down into three steps:</p>
<ol>
<li><p><span class="math-container">$\Psi: H^1 \to Hom(L^2,L^2), \Psi_u(v):=g(u)v$</span> is continuous from the weak topology to the strong operator topology (aka topology of pointwise convergence)</p></li>
<li><p><span class="math-container">$im(\Psi)$</span> is an equicontinuous sets of self-adjoint operators.</p></li>
<li><p>The evaluation map <span class="math-container">$Hom(L^2,L^2) \times L^2 \to L^2$</span> -- at least when restricted to subsets of the form <span class="math-container">$E\times L^2$</span> with <span class="math-container">$E$</span> equicontinuous and self-adjoint -- is continuous w.r.t. the strong operator topology on <span class="math-container">$Hom(L^2,L^2)$</span> and the weak topology on (both) <span class="math-container">$L^2$</span>.</p></li>
</ol>
<p>The map <span class="math-container">$\Phi$</span> is then the composition
<span class="math-container">$$H^1\times H^1 \xrightarrow{\Psi\times id} Hom(L^2,L^2)\times H^1 \xrightarrow{id\times\nabla} Hom(L^2,L^2)\times L^2 \xrightarrow{eval} L^2$$</span>
and therefore continuous in the way we want it to be.</p>
<p>Alright, let's get started.</p>
<ol>
<li><p>Let <span class="math-container">$u_n \xrightarrow[weak]{H^1} u$</span> be arbitrary. We use that <span class="math-container">$H^1$</span> is compactly embedded in <span class="math-container">$L^2$</span> so that <span class="math-container">$u_n \xrightarrow{L^2} u$</span>. Now we can extract an a.e. convergent subsequence <span class="math-container">$u_{n_k}$</span> of <span class="math-container">$u_n$</span>. Because <span class="math-container">$g$</span> is continuous, <span class="math-container">$g(u_{n_k})\to g(u)$</span> a.e. as well. If <span class="math-container">$v\in L^2$</span> is fixed, then <span class="math-container">$\|\Psi_{u_{n_k}}(v)-\Psi_u(v)\|_{L^2}^2 = \int_\Omega (g(u_{n_k})-g(u))^2 v^2 \to 0$</span> by dominated convergence. We have in fact shown: Every subsequence of <span class="math-container">$\Psi_{u_n}$</span> contains a subsubsequence which converges to <span class="math-container">$\Psi_u$</span>. Therefore <span class="math-container">$\Psi_{u_n}$</span> converges to <span class="math-container">$\Psi_u$</span>.</p></li>
<li><p>Follows directly from <span class="math-container">$\|\Psi_u\|_{op} = \|g(u)\|_\infty \leq \|g\|_\infty$</span> and <span class="math-container">$im(g)\subseteq\mathbb{R}$</span>.</p></li>
<li><p>Can be seen as follows: If <span class="math-container">$\Phi_n \xrightarrow{s.o.t} \Phi$</span> are self-adjoint, <span class="math-container">$v_n \xrightarrow{weak L^2} v$</span> and <span class="math-container">$w\in L^2$</span> are arbitrary, then
<span class="math-container">$$\Phi_n(v_n)-\Phi(v) = (\Phi_n(v_n)-\Phi_n(v)) + (\Phi_n(v)-\Phi(v))$$</span>
The second summand goes to zero because <span class="math-container">$\Phi_n\to\Phi$</span> in the strong operator topology. The first summand weakly goes to zero because
<span class="math-container">$$\langle \Phi_n(v_n-v),w\rangle = \langle v_n-v,\Phi_n^\ast(w)\rangle = \langle v_n-v,\Phi^\ast(w)\rangle + \langle v_n-v,(\Phi_n-\Phi)^\ast(w)\rangle = \langle v_n-v,\Phi(w)\rangle + \langle v_n-v,(\Phi_n-\Phi)(w)\rangle$$</span>
The first summand here goes to zero because <span class="math-container">$v_n\to v$</span> in the weak topology. The second summand goes to zero because <span class="math-container">$v_n-v$</span> is bounded and <span class="math-container">$\Phi_n(w) \to \Phi(w)$</span> in norm.</p></li>
</ol>
|
2,603,239 | <p>(The Cauchy principal value of)
$$
\int_0^{\infty}\frac{\tan x}{x}\mathrm dx
$$</p>
<p>I tried to cut this integral into $$\sum_{k=0}^{\infty}\int_{k\pi}^{(k+1)\pi}\frac{\tan x}{x}\mathrm dx$$
And then
$$\sum_{k=0}^{\infty}\lim_{\epsilon \to 0}\left(\int_{k\pi}^{(k+1/2)\pi-\epsilon}\frac{\tan x}{x}\mathrm dx+\int_{(k+1/2)\pi+\epsilon}^{(k+1)\pi}\frac{\tan x}{x}\mathrm dx\right)$$
$$\sum_{k=0}^{\infty}\int_{k\pi}^{(k+1/2)\pi}\frac{((2k+1)\pi-2x)\tan x}{((2k+1)\pi-x)x}\mathrm dx$$
And I did not know how to continue. I did not know if I was right or not. How to calculate this integral?</p>
| spaceisdarkgreen | 397,125 | <p>You seem to be on the right track.</p>
<p>We have $$ P\int_0^\infty \frac{\tan{x}}{x}dx = P\int_0^\pi \frac{\tan x}{x}dx + P\int_0^\pi\frac{\tan x}{\pi + x}dx + P\int_0^\pi \frac{\tan x}{2\pi + x}dx+\ldots$$ and then we have $$ P\int_0^\pi \frac{\tan x}{k\pi +x}dx = \int_0^{\pi/2} \tan x\left(\frac{1}{k\pi+x} - \frac{1}{(k+1)\pi-x}\right)dx $$
Finally, $$ \sum_{k=0}^\infty \left(\frac{1}{k\pi+x} - \frac{1}{(k+1)\pi-x}\right) =\cot x,$$ so $$ P\int_0^\infty \frac{\tan{x}}{x}dx = \int_{0}^{\pi/2}\tan x\cot x \;dx = \pi/2.$$</p>
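<p>The partial-fraction expansion of $\cot x$ used in the last step can be sanity-checked numerically (the sample point and truncation level are arbitrary):</p>

```python
import math

x = 1.0
K = 200_000  # truncation level; the tail is O(1/K)
partial = sum(1 / (k * math.pi + x) - 1 / ((k + 1) * math.pi - x)
              for k in range(K))
print(partial, 1 / math.tan(x))  # both ~0.642092...
assert abs(partial - 1 / math.tan(x)) < 1e-4
```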
|
3,414,197 | <p>I have to model/simulate a moving iron meter with Simulink, more specifically, I need to build a Simulink model for the equation of motion, wich is given as:
<span class="math-container">$$
\theta\ddot{\alpha} = T_\phi - T_S
$$</span>
where <span class="math-container">$\theta$</span> denotes the pointers moment of Inertia,
<span class="math-container">$\alpha$</span> is the pointers angle,
<span class="math-container">$T_S = c_S\alpha$</span> the springs torque pushing the pointer back to it's initial position, with <span class="math-container">$c_S$</span> as the spring constant
<span class="math-container">$T_\phi = c_\phi i$</span> as the Torque generated by the current i and i is from the following equation: <span class="math-container">$Ri = v - c_i\dot{\alpha}$</span>, where <span class="math-container">$R$</span> denotes the resistance in <span class="math-container">$\Omega$</span>, <span class="math-container">$v$</span> the DC voltage that's supposed to be measured, <span class="math-container">$c_i$</span> the coils conductance.</p>
<p><span class="math-container">$\theta=6.4*10^{-6}\frac{kgm^2}{rad}$</span>;
<span class="math-container">$c_S=6*10^{-4}\frac{Nm}{rad}$</span>;
<span class="math-container">$c_\phi = 8*10^{-2} \frac{Nm}{A}$</span>;
<span class="math-container">$c_i=1.2\frac{Vs}{A}$</span>;
<span class="math-container">$R=2*10^3 \Omega$</span></p>
<p>The reason I'm posting here asking you for help is that I don't know if I did this correctly since I don't have any reference values to verify my result. The meter is supposed to measure the DC voltage <span class="math-container">$v$</span> and to get a proper result I think I need to multiply the resulting angle <span class="math-container">$\alpha$</span> by a certain factor. </p>
<p>To build my Simulink model I put in all the variables and get this
<span class="math-container">$$
\theta \ddot{\alpha} = c_\phi i-c_S\alpha \Leftrightarrow \theta \ddot{\alpha} = c_\phi \frac {v-c_i\dot{\alpha}}{R}-c_S\alpha
$$</span></p>
<p>after a Laplace Transform and some math I get:
<span class="math-container">$$
\theta s^2X(s) = \frac {c_\phi}{R}v-\frac{c_\phi c_i}{R}sX(s)-c_SX(s)
$$</span>
then I rearranged the equation so I can build the model using integrators:
<span class="math-container">$$
X(s) = \frac{1}{s}\left(\frac{1}{s}\frac{\frac{c_\phi}{R}v-c_SX(s)}{\theta} - \frac{c_\phi c_i}{\theta R}X(s)\right)
$$</span></p>
<p>So in the end, it seems pretty similar to a damped harmonic oscillator...</p>
<p>Attached below you find my Simulink model and the workspace I'm using. </p>
<p><a href="https://i.stack.imgur.com/ENAug.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ENAug.png" alt="Simulink model"></a>
<a href="https://i.stack.imgur.com/KkGxM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KkGxM.png" alt="Workspace"></a></p>
| Dinno Koluh | 519,191 | <p>I would personally do the problem in another way. I guess that your input is the DC voltage and the output is the angle. The differential equation of the system as you wrote it is:
<span class="math-container">$$
\theta \ddot{\alpha}(t) = \frac{c_\phi}{R}v(t) -\frac {c_\phi c_i}{R}\dot{\alpha}(t)-c_S\alpha(t)
$$</span></p>
<p>After the Laplace transform, taking the ratio of the output to the input gives the following transfer function:
<span class="math-container">$$
\theta s^2Y(s) = \frac {c_\phi}{R}X(s)-\frac {c_\phi c_i}{R}sY(s)-c_SY(s)
$$</span>
<span class="math-container">$$ G(s) = \frac{Y(s)}{X(s)} = \frac{\frac {c_\phi}{R}}{\theta s^2+\frac {c_\phi c_i}{R}s+c_S} = \frac{\frac {c_\phi}{R\theta}}{s^2+\frac {c_\phi c_i}{R\theta}s+\frac{c_S}{\theta}}$$</span>
Note that the transfer function of a second order system is in the form:
<span class="math-container">$$ G(s) = \frac{K\omega_0^2}{s^2+2\xi\omega_0s+\omega_0^2} $$</span>
By comparing the forms you can easily get the gain(<span class="math-container">$K$</span>), natural frequency(<span class="math-container">$\omega_0$</span>) and damping factor (<span class="math-container">$\xi$</span>). You can easily calculate these values (ex. <span class="math-container">$ \omega_0 = \sqrt{\frac{c_S}{\theta}} $</span>) and check if your Simulink model behaves well with these mentioned values (in this way you can know for sure if your model is correct).
In my opinion you should just place this transfer function and pass it an input and read the output. This would be the easiest way.</p>
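<p>For reference, plugging the given component values into these formulas (a quick computation to compare against the Simulink output):</p>

```python
import math

theta = 6.4e-6   # kg m^2 / rad
c_s   = 6e-4     # N m / rad
c_phi = 8e-2     # N m / A
c_i   = 1.2      # V s / A
R     = 2e3      # ohm

w0 = math.sqrt(c_s / theta)                 # natural frequency
xi = c_phi * c_i / (2 * R * theta * w0)     # damping factor
K  = c_phi / (R * theta * w0**2)            # DC gain

print(w0, xi, K)   # ~9.6825 rad/s, ~0.3873, ~0.0667 rad/V
```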
|
3,414,197 | <p>I have to model/simulate a moving iron meter with Simulink, more specifically, I need to build a Simulink model for the equation of motion, wich is given as:
<span class="math-container">$$
\theta\ddot{\alpha} = T_\phi - T_S
$$</span>
where <span class="math-container">$\theta$</span> denotes the pointers moment of Inertia,
<span class="math-container">$\alpha$</span> is the pointers angle,
<span class="math-container">$T_S = c_S\alpha$</span> the springs torque pushing the pointer back to it's initial position, with <span class="math-container">$c_S$</span> as the spring constant
<span class="math-container">$T_\phi = c_\phi i$</span> as the Torque generated by the current i and i is from the following equation: <span class="math-container">$Ri = v - c_i\dot{\alpha}$</span>, where <span class="math-container">$R$</span> denotes the resistance in <span class="math-container">$\Omega$</span>, <span class="math-container">$v$</span> the DC voltage that's supposed to be measured, <span class="math-container">$c_i$</span> the coils conductance.</p>
<p><span class="math-container">$\theta=6.4*10^{-6}\frac{kgm^2}{rad}$</span>;
<span class="math-container">$c_S=6*10^{-4}\frac{Nm}{rad}$</span>;
<span class="math-container">$c_\phi = 8*10^{-2} \frac{Nm}{A}$</span>;
<span class="math-container">$c_i=1.2\frac{Vs}{A}$</span>;
<span class="math-container">$R=2*10^3 \Omega$</span></p>
<p>The reason I'm posting here asking you for help is that I don't know if I did this correctly since I don't have any reference values to verify my result. The meter is supposed to measure the DC voltage <span class="math-container">$v$</span> and to get a proper result I think I need to multiply the resulting angle <span class="math-container">$\alpha$</span> by a certain factor. </p>
<p>To build my Simulink model I put in all the variables and get this
<span class="math-container">$$
\theta \ddot{\alpha} = c_\phi i-c_S\alpha \Leftrightarrow \theta \ddot{\alpha} = c_\phi \frac {v-c_i\alpha}{R}-c_S\alpha
$$</span></p>
<p>after a Laplace Transform and some math I get:
<span class="math-container">$$
\theta s^2X(s) = \frac {c_\phi}{R}v-\frac{c_\phi c_i}{R}sX(s)-c_SX(s)
$$</span>
then I rearranged the equation so I can build the model using integrators:
<span class="math-container">$$
\frac{1}{s}\left(\frac{1}{s}\frac{\frac{c_\phi}{R}v-c_SX(s)}{\theta} - \frac{c_\phi c_i}{\theta R}\right)
$$</span></p>
<p>So in the end, it seems pretty similar to a damped harmonic oscillator...</p>
<p>Attached below you find my Simulink model and the workspace I'm using. </p>
<p><a href="https://i.stack.imgur.com/ENAug.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ENAug.png" alt="Simulink model"></a>
<a href="https://i.stack.imgur.com/KkGxM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KkGxM.png" alt="Workspace"></a></p>
| Pilotf4 | 272,311 | <p><strong>EDIT:</strong> Had some typos in my transfer function and now I get the same results from my transfer function model! </p>
<p>Thank you for your answer!
I tried to compare the two forms, but I think something must've gone wrong.
For <span class="math-container">$\xi$</span>, I get <span class="math-container">$\frac{c_\phi c_i}{2R \theta \omega_0}=0.3873$</span>
For K, I get <span class="math-container">$\frac{c_\phi}{R \theta \omega_0^2}=0.0667$</span>.
For <span class="math-container">$\omega_0 =9.6825$</span></p>
<p>When I plug these into a transfer function and compare the resulting scope output with the one from my Simulink model, I get two very different results.
The first image is from my integrator model and just intuitively speaking, this seems to make sense; the meter's pointer overshoots, goes back a little, but eventually stays stationary at a certain angle <span class="math-container">$\alpha$</span>
<a href="https://i.stack.imgur.com/35YZ1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/35YZ1.png" alt="Integrator model"></a>
<a href="https://i.stack.imgur.com/Z4bod.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Z4bod.png" alt="Transfer Function"></a></p>
|
2,060,156 | <p>First thing I want to mention is that this is not a topic about why $1+2+3+... = -1/12$ but rather the connection between this summation and $\zeta$.</p>
<p>I perfectly understand that the definition of the zeta function via the summation $\sum_{k=1}^\infty k^{-s}$ is only valid for $Re(s) > 1$ and that the function is then extended through analytic continuation to the whole complex plane.</p>
<p>However, some details bother me: why can we manipulate these sums and still obtain the correct final answer?
$$
S_1 = 1-1+1-1+1-1+... = 1-(1-1+1-1+1-...)= 1-S_1 \implies S_1 = \frac{1}{2} \\
S_2 = 1-2+3-4+5-... \implies S_2 - S_1 = 0-1+2-3+4-5... = -S_2 \implies S_2 = \frac{1}{4} \\
S = 1+2+3+4+5+... \implies S-S_2 = 4(1+2+3+4+...) = 4S \implies S = -\frac{1}{12} \\
S "=" \zeta(-1)
$$
Clearly these manipulations are not legal since we're dealing with infinite non-converging sums. But it works! Why?
Is there a real connection between the analytic continuation which yields the "true" value $\zeta(-1) = -1/12$ and these "forbidden manipulations" ? Could we somehow consider these manipulations as "continuation of non-converging sums" ? If so, is there a well-defined framework with defined rules because it is clear that we must be careful when playing with non-converging sums if we don't want to break the mathematics ! (For example <a href="https://en.wikipedia.org/wiki/Riemann_series_theorem" rel="nofollow noreferrer"> Riemann rearrangement theorem</a>)</p>
<p>And since it seems that these illegal operations can be used to compute some value of zeta in the extended domain $Re(s) < 1$, are there other examples of such derivations, for example $0 = \zeta(-2) "=" 1^2 + 2^2 + 3^2 + 4^2 + ...$ ?</p>
<p>Hopefully this is not an umpteenth vague question about zeta and $1+2+3+4...$ I did some research about it but couldn't find any satisfying answer. Thanks !</p>
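<p>A quick numerical illustration of Abel summation, one standard rigorous framework behind the first two manipulations (replace each series by its power series and let $x\to1^-$); note that $1+2+3+\dots$ itself is not Abel summable and genuinely needs the analytic-continuation machinery:</p>

```python
x, N = 0.999, 100_000  # x close to 1, truncation large enough that x**N is negligible

# Abel sum of 1 - 1 + 1 - ... : sum (-x)^n = 1/(1+x) -> 1/2
S1 = sum((-1) ** n * x ** n for n in range(N))
print(S1)  # ~0.50025

# Abel sum of 1 - 2 + 3 - ... : sum (-1)^(n+1) n x^n = x/(1+x)^2 -> 1/4
S2 = sum((-1) ** (n + 1) * n * x ** n for n in range(1, N))
print(S2)  # ~0.25
```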
| Vidyanshu Mishra | 363,566 | <p>Suppose there is integer $p$ which can be written as $\frac{6l-1}{4l-3}$ and $\frac{7k-5}{5k-3}$. </p>
<p>$$p= \frac{6l-1}{4l-3} =\frac{7k-5}{5k-3}$$</p>
<p>$$\implies kl+8k+l=6$$</p>
<p>$$\implies(k+1)l=(6-8k)\implies l=\frac{-2(4k-3)}{(k+1)}$$.</p>
<p>This gives the following integer solutions:</p>
<p>$(k,l)=(-15,-9),(-8,-10),(-3,-15),(-2,-22),(0,6),(1,-1),(6,-6),(13,-7)$. Each of these pairs of values gives such a number. I shall let you conclude now.</p>
|
1,151,653 | <p>How can I express the following as a function sequence? Namely, how can I properly express <span class="math-container">$f_n(x)$</span>?</p>
<p>Here are the following function graphs:</p>
<p><img src="https://i.stack.imgur.com/2GFYj.png" alt="enter image description here" /></p>
<p>Text only (color-coded with image):</p>
<ol>
<li><span class="math-container">$\color{red}{f_1(x)=x^x}$</span></li>
<li><span class="math-container">$\color{blue}{f_2(x)=x^{x^x}}$</span></li>
<li><span class="math-container">$\color{green}{f_3(x)=x^{x^{x^x}}}$</span></li>
<li><span class="math-container">$\color{purple}{f_4(x)=x^{x^{x^{x^x}}}}$</span></li>
<li><span class="math-container">$\color{orange}{f_5(x)=x^{x^{x^{x^{x^x}}}}}$</span></li>
<li><span class="math-container">$f_6(x)=x^{x^{x^{x^{x^{x^x}}}}}$</span></li>
</ol>
<p>So how may I express <span class="math-container">$f_n(x)$</span>? e.g. <span class="math-container">$f_n(x)=x^{x^{x^{.^{.^{.^{x}}}}}}$</span>?</p>
| Community | -1 | <p>Hint: $$x^2+y^2=1$$
$$y=x^2$$</p>
<p>Where do they intersect?</p>
|
1,384,735 | <p>What is the ODE satisfied by $y=y(x)$ </p>
<p>given that $$\frac{dy}{dx} = \frac{-x-2y}{y-2x}$$</p>
<p>I understand that I need to get it in some form of $\int \cdots \;dy = \int \cdots \; dx$, but am not sure how to go about it.</p>
| Dr. Sonnhard Graubner | 175,066 | <p>rewrite your equation in the form $$\frac{dy}{dx}=\frac{-1-2\frac{y}{x}}{\frac{y}{x}-2}$$ and set $$y=xu$$</p>
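<p>Carrying the hint further (my own continuation, not stated in the answer): with $y=xu$ the equation becomes separable and integrates to the implicit solution $\tfrac12\ln(x^2+y^2)-2\arctan(y/x)=C$. A numerical check that this quantity is conserved along solutions:</p>

```python
import math

# Parametrize solutions of dy/dx = (-x - 2y)/(y - 2x) by the equivalent
# system dx/dt = y - 2x, dy/dt = -x - 2y, and integrate with classical RK4.
def rhs(x, y):
    return y - 2 * x, -x - 2 * y

def invariant(x, y):
    return 0.5 * math.log(x**2 + y**2) - 2 * math.atan2(y, x)

x, y, h = 1.0, 1.0, 1e-3
c0 = invariant(x, y)
for _ in range(2000):
    k1x, k1y = rhs(x, y)
    k2x, k2y = rhs(x + h * k1x / 2, y + h * k1y / 2)
    k3x, k3y = rhs(x + h * k2x / 2, y + h * k2y / 2)
    k4x, k4y = rhs(x + h * k3x, y + h * k3y)
    x += h * (k1x + 2 * k2x + 2 * k3x + k4x) / 6
    y += h * (k1y + 2 * k2y + 2 * k3y + k4y) / 6

assert abs(invariant(x, y) - c0) < 1e-8  # the first integral is conserved
```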
|
3,973,006 | <p>The question is fully contained in the title.</p>
<p>I tried to prove maximality (if that happens, $I$ is prime as well) in $\mathbb Z[X]$, but I am not able to figure out a strategy for that purpose. Obviously, if $I$ is not maximal, I am expected to say whether $I$ is a prime ideal, which is a problem too.</p>
<p>How would you solve such exercise?</p>
| Arthur | 15,500 | <p>Let's just play around and see what kinds of polynomials we can find in <span class="math-container">$I$</span>.</p>
<p>First of all, we can try to cancel the cubic term from the cubic generator, and we see that
<span class="math-container">$$
7(X^3+2X^2+1)-X^2(7X+14)=7
$$</span>
is an element of the ideal. And since <span class="math-container">$7X+14=7(X+2)$</span>, we get
<span class="math-container">$$
I=(7,X^3+2X^2+1)
$$</span>
which is a lot simpler.</p>
<p>Now what? Well, since we have a constant in our ideal, it is apt to use the third isomorphism theorem. It states that we can divide out by one generator at a time (presumably you know all the basic connections between ideals and the corresponding quotients).</p>
<p>So if we're interested in whether <span class="math-container">$(7,X^3+2X^2+1)\subseteq \Bbb Z[X]$</span> is maximal or prime, we might just as well look at <span class="math-container">$(X^3+2X^2+1)\subseteq \Bbb Z[X]/(7)$</span>. And therefore we can just check whether <span class="math-container">$X^3+2X^2+1$</span> has any roots modulo 7. If it does have roots the ideal isn't prime, and if it doesn't have roots, then the ideal is prime and also maximal (because <span class="math-container">$\Bbb Z[X]/(7,X^3+2X^2+1)$</span> would then be a finite integral domain and therefore a field, or alternatively because the quotient is a degree 3 extension of the field <span class="math-container">$\Bbb Z_7$</span>).</p>
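<p>The root check modulo $7$ takes only a moment by hand or by machine; a minimal Python brute force (my addition, using the polynomial and modulus from above):</p>

```python
# Does X^3 + 2X^2 + 1 have a root modulo 7?
f = lambda x: x**3 + 2 * x**2 + 1
roots = [x for x in range(7) if f(x) % 7 == 0]

# Also confirm the elimination step 7*(X^3+2X^2+1) - X^2*(7X+14) = 7
# by evaluating both sides at several integer points.
identity_ok = all(7 * f(x) - x**2 * (7 * x + 14) == 7 for x in range(10))
print(roots, identity_ok)
```

<p>An empty list of roots confirms the "no roots" branch of the argument, so the ideal is maximal (and hence prime).</p>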
|
316,865 | <p>How do you find this limit?</p>
<p>$$\lim_{x \rightarrow \infty} \sqrt[5]{x^5-x^4} -x$$</p>
<p>I was given a clue to use L'Hospital's rule.</p>
<p>I did it this way:</p>
<p><strong>UPDATE 1:</strong>
$$
\begin{align*}
\lim_{x \rightarrow \infty} \sqrt[5]{x^5-x^4} -x
&= \lim_{x \rightarrow \infty} x\begin{pmatrix}\sqrt[5]{1-\frac 1 x} -1\end{pmatrix}\\
&= \lim_{x \rightarrow \infty} \frac{\sqrt[5]{1-\frac 1 x} -1}{\frac1x}
\end{align*}
$$</p>
<p>Applying L' Hospital's,
$$
\begin{align*}
\lim_{x \rightarrow \infty} \frac{\sqrt[5]{1-\frac 1 x} -1}{\frac1x}&=
\lim_{x \rightarrow \infty} \frac{0.2\begin{pmatrix}1-\frac 1 x\end{pmatrix}^{-0.8}\begin{pmatrix}-x^{-2}\end{pmatrix}(-1)} {\begin{pmatrix}-x^{-2}\end{pmatrix}}\\
&= -0.2
\end{align*}
$$</p>
<p>However the answer is $0.2$, so I would like to clarify the correct use of L'Hospital's</p>
| Mikasa | 8,581 | <p>You got the answer, but I'd like to note something different. I see you are working with derivatives, so I am writing an answer based on that. We say the function $\alpha(x)$ is very small as $x\to a$ when $$\lim_{x\to a}\alpha(x)= 0.$$ Using the Taylor expansion, one can prove that $\sqrt[n]{1+\alpha(x)}-1\sim\frac{\alpha(x)}{n}$. So $$\frac{\sqrt[5]{1-k}-1}k~\sim~\frac{-k/5}{k}=-1/5$$ when $k\to 0$.</p>
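<p>The asymptotic is easy to confirm numerically; a small Python check (my addition):</p>

```python
# The ratio ((1-k)^(1/5) - 1)/k should approach -1/5 as k -> 0+.
ratios = [((1 - k)**0.2 - 1) / k for k in (1e-2, 1e-4, 1e-6)]
print(ratios)  # values drift toward -0.2
```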
|
1,695,261 | <p>Is it true that for every $ε > 0$, there is $δ > 0$, such that $0 < |x−2| < δ ⇒ |(x^2 −x)−2| < ε$?</p>
<p>Now I know that $|(x^2 −x)−2|$ is same as $|(x-2)(x+1)|$, but I am not sure how to link that with the first bit of info given. In general epsilon-delta proofs confuse me. </p>
<p>So I start by saying that there is an epsilon s.t $|(x^2 −x)−2| < ε$. And if this is true then there is a delta s.t $0 < |x−2| < δ$. Or is it the other way around? </p>
<p>Now, if $|(x^2 −x)−2| < ε$ then $|(x-2)(x+1)| < ε$ and $|x-2||x+1| < ε$ and
$$|x-2|<\frac{ε}{|x+1|}$$ But since epsilon is always positive and so is $|x+1|$ then a delta always exists. </p>
<p>Is my proof correct or totally wrong? I feel as though all I have done is rearranged the equation, and not really proved anything. </p>
| crbah | 314,622 | <p>Let the $\epsilon = \epsilon_0$ satisfying $|x^2-x-2| < \epsilon_0$. Initially choose $\delta$ to be $1$. We will refine this delta.</p>
<p>$-\epsilon_0 < x^2-x-2 < \epsilon_0$</p>
<p>$\implies -\epsilon_0+\frac{9}{4} < x^2-x+\frac{1}{4} < \epsilon_0+\frac{9}{4}$</p>
<p>$\implies -\epsilon_0+\frac{9}{4} < (x-\frac{1}{2})^2 < \epsilon_0+\frac{9}{4}$</p>
<p>$\implies -\epsilon_0+\frac{9}{4} < (x-\frac{1}{2})^2 $</p>
<p>Now, if $\epsilon_0 < \frac{9}{4}$</p>
<p>$\implies \sqrt{-\epsilon_0+\frac{9}{4}} < x-\frac{1}{2} $</p>
<p>$\implies \sqrt{-\epsilon_0+\frac{9}{4}} + \frac{3}{2} < x+1 $</p>
<p>Remember we had $|x^2-x-2| < \epsilon_0$. Then</p>
<p>$-\epsilon_0 < (x-2)(x+1) < \epsilon_0$</p>
<p>$\implies -\frac{\epsilon_0}{x+1} < (x-2) < \frac{\epsilon_0}{x+1}$</p>
<p>$\implies |x-2| < \frac{\epsilon_0}{x+1} < \frac{\epsilon_0}{\sqrt{-\epsilon_0+\frac{9}{4}} + \frac{3}{2}}$.</p>
<p>Therefore choose $\delta = min( 1 , \frac{\epsilon_0}{\sqrt{-\epsilon_0+\frac{9}{4}} + \frac{3}{2}} )$.</p>
<p>(note that for $\epsilon_0 > \frac{9}{4}$, choose $\delta$ as if $\epsilon_0 = \frac{9}{4}$, this will eventually satisfy the condition)</p>
|
1,695,261 | <p>Is it true that for every $ε > 0$, there is $δ > 0$, such that $0 < |x−2| < δ ⇒ |(x^2 −x)−2| < ε$?</p>
<p>Now I know that $|(x^2 −x)−2|$ is same as $|(x-2)(x+1)|$, but I am not sure how to link that with the first bit of info given. In general epsilon-delta proofs confuse me. </p>
<p>So I start by saying that there is an epsilon s.t $|(x^2 −x)−2| < ε$. And if this is true then there is a delta s.t $0 < |x−2| < δ$. Or is it the other way around? </p>
<p>Now, if $|(x^2 −x)−2| < ε$ then $|(x-2)(x+1)| < ε$ and $|x-2||x+1| < ε$ and
$$|x-2|<\frac{ε}{|x+1|}$$ But since epsilon is always positive and so is $|x+1|$ then a delta always exists. </p>
<p>Is my proof correct or totally wrong? I feel as though all I have done is rearranged the equation, and not really proved anything. </p>
| DanielWainfleet | 254,665 | <p>Here are some general results which enable us to handle the Q of continuity for a broad class of real functions: </p>
<p>(1)... Constant functions are continuous.</p>
<p>(2)... f(x)=x is continuous </p>
<p>(3)...f(x)=|x| is continuous.</p>
<p>(4)...For continuous f, g :</p>
<p>... (i)... h(x)=f(x)+g(x) is continuous.</p>
<p>... (ii)...j(x)=f(x)g(x) is continuous.</p>
<p>... (iii)...k(x)=f(g(x)) is continuous. </p>
<p>These are readily proven by the standard $\epsilon , \delta$ method. One immediate consequence is that a polynomial $p(x)$ and its absolute value $|p(x)|$ are continuous functions</p>
|
1,376,159 | <p>A friend of mine shared this problem with me. As he was told, this integral can be evaluated in a closed form (the result may involve polylogarithms). Despite all our efforts, so far we have not achieved anything, so I decided to ask for your advice.
$$\int_0^1\log(x)\,\log(2+x)\,\log(1+x)\,\log\left(1+x^{-1}\right)dx$$</p>
<p>I found some similar questions here on MSE:
<a href="https://math.stackexchange.com/q/316745/76878">(1)</a>,
<a href="https://math.stackexchange.com/q/465444/76878">(2)</a>,
<a href="https://math.stackexchange.com/q/503405/76878">(3)</a>, <a href="https://math.stackexchange.com/q/524358/76878">(4)</a>,
<a href="https://math.stackexchange.com/q/761930/76878">(5)</a>,
<a href="https://math.stackexchange.com/q/795867/76878">(6)</a>,
<a href="https://math.stackexchange.com/q/908108/76878">(7)</a>,
<a href="https://math.stackexchange.com/q/915083/76878">(8)</a>, <a href="https://math.stackexchange.com/q/933977/76878">(9)</a>, <a href="https://math.stackexchange.com/q/972775/76878">(10)</a>,
<a href="https://math.stackexchange.com/q/1043771/76878">(11)</a>, <a href="https://math.stackexchange.com/q/1046519/76878">(12)</a>,
<a href="https://math.stackexchange.com/q/1096557/76878">(13)</a>, <a href="https://math.stackexchange.com/q/1341254/76878">(14)</a>.</p>
| Start wearing purple | 73,025 | <p>The main ingredient here is the integral representation
$$\operatorname{Li}_n(z)=\frac{(-1)^{n-1}}{(n-2)!}\int_0^1
\frac{\ln\left(1-zx\right)\ln^{n-2}x\,dx}{x},\tag{$\spadesuit$}$$
valid for $|z|<1,n\in\mathbb{N}_{\ge 2}$.</p>
<p>The derivation goes as follows:</p>
<ol>
<li><p>Rewrite the initial integral as
\begin{align*}
\mathcal{I}&=\int_0^1\ln(x+2)\underbrace{\left[\ln x\ln^2(1+x)-\ln^2x\ln(1+x)\right]}_{=\frac13\left(\ln x-\ln(x+1)\right)^3-\frac13\ln^3 x+\frac13\ln^3 (x+1)} dx=\\
&=\frac13\biggl[\underbrace{\int_0^1\ln(x+2)\ln^3(x+1)dx}_{\mathcal{I}_1}-\underbrace{\int_0^1\ln(x+2)\ln^3x\,dx}_{\mathcal{I}_2}-\underbrace{\int_0^1\ln(x+2)\ln^3\frac{x+1}{x}dx}_{\mathcal{I}_3}\biggr].
\end{align*}</p></li>
<li><p>The integrals $\mathcal{I}_{1,2}$ have antiderivatives that can be expressed in terms of polylogarithms (say, with Mathematica), therefore we concentrate on $\mathcal{I}_3$. After the change of variables $t=\frac{2x}{x+1}$, we obtain
\begin{align*}\mathcal{I_3}&=-2\int_0^1\frac{\ln\frac{4-t}{2-t}\ln^3\frac t2\,dt}{(2-t)^2}=\\&=-2\int_0^1\frac{\ln\frac{4-t}{2-t}\ln^3 t\,dt}{(2-t)^2}+
6\ln 2 \int_0^1\frac{\ln\frac{4-t}{2-t}\ln^2 t\,dt}{(2-t)^2} \tag{$\clubsuit$}\\&\quad -6\ln^22\int_0^1\frac{\ln\frac{4-t}{2-t}\ln t\,dt}{(2-t)^2}+
2\ln^32\int_0^1\frac{\ln\frac{4-t}{2-t}dt}{(2-t)^2}.
\end{align*}</p></li>
<li><p>Now let me explain how these integrals can be computed. Consider, for instance, the first term in ($\clubsuit$):
\begin{align*}
2\int_0^1\frac{\ln\frac{4-t}{2-t}\ln^3 t\,dt}{(2-t)^2}&=\int_0^1\ln\frac{4-t}{2-t}\ln^3 t\,d\left(\frac{t}{2-t}\right)=-
\int_0^1\frac{t}{2-t}d\left(\ln\frac{4-t}{2-t}\ln^3 t\right)=\\
&=-
\int_0^1\frac{t}{2-t}\left[\color{red}{-\frac{\ln^3 t}{4-t}+\frac{\ln^3 t}{2-t}}+\frac{3}{t}\ln\frac{4-t}{2-t}\ln^2 t\right]dt
\end{align*}
The terms shown in red lead to integrals computable with the help of ($\spadesuit$) (e.g. differentiate it with respect to $z$ and see what happens). The remaining nontrivial piece is thus
$$\int_0^1\frac{\ln\frac{4-t}{2-t}\ln^2 t}{2-t}dt=
\int_0^1\frac{\ln(4-t)\ln^2 t}{2-t}dt-\int_0^1\frac{\ln(2-t)\ln^2 t}{2-t}dt$$
The second part again has again a polylogarithmic antiderivative computable with Mathematica, so it remains to compute
$$\mathcal{I}_4=\int_0^1\frac{\ln(4-t)\ln^2 t}{2-t}dt.$$
Note that the same procedure applied to the other three terms in ($\clubsuit$) leads to easily computable integrals (as instead of $\ln^2t$ in the analog of $\mathcal{I}_4$ we have $\ln t$ or $1$).</p></li>
<li><p>Thus it remains to compute $\mathcal{I}_4$. And this is the only place where a certain miracle takes place, which indicates that there should be an easier way to do the initial integral. Making the change of variables $s=2-t$, we get
\begin{align*}
\mathcal{I}_4&=\int_1^2\frac{\ln(2+s)\ln^2(2-s)\,ds}{s}=\\
&=\frac16\int_1^2\frac{\left[\ln(2+s)+\ln(2-s)\right]^3+\left[\ln(2+s)-\ln(2-s)\right]^3-2\ln^3(2+s)}{s}ds=\\
&=\frac16\int_1^2\frac{\ln^3(4-s^2)}{s}ds+\frac16\int_1^2\frac{\ln^3\frac{2+s}{2-s}}{s}ds-\frac13\int_1^2\frac{\ln^3(s+2)}{s}ds.
\end{align*}
Each of these three pieces again has polylogarithmic antiderivatives that can be computed by Mathematica. This becomes obvious after change of variables $u=s^2$ in the first integral (the miracle is here: due to special parameter values we don't have an additional linear term under logarithm which would spoil the things) and $u=\frac{2+s}{2-s}$ in the second.</p></li>
</ol>
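<p>The algebraic split used at the start of step 4 is the exact identity $\frac16\left[(a+b)^3+(a-b)^3-2a^3\right]=ab^2$ with $a=\ln(2+s)$ and $b=\ln(2-s)$; a quick numerical check in Python (my addition):</p>

```python
import math

# Verify (1/6)[(a+b)^3 + (a-b)^3 - 2a^3] = a*b^2 with a = ln(2+s), b = ln(2-s).
errs = []
for s in (1.1, 1.5, 1.9):
    a, b = math.log(2 + s), math.log(2 - s)
    lhs = ((a + b)**3 + (a - b)**3 - 2 * a**3) / 6
    errs.append(abs(lhs - a * b**2))
print(max(errs))  # machine-precision level
```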
<p>So the conclusion is that indeed, the integral $\mathcal{I}$ can be expressed in terms of polylogarithms (up to $\operatorname{Li}_4$), but I was too lazy to type the answer. Fortunately, for that we have Cleo.</p>
<hr>
<p><strong>Added</strong>: As suggested by Vladimir Reshetnikov, the integration bounds $(0,1)$ are not really important: the above approach yields an explicit antiderivative which I posted at <a href="https://gist.github.com/anonymous/4c35e5617cf846e8f517">https://gist.github.com/anonymous/4c35e5617cf846e8f517</a></p>
|
2,037,704 | <p>What symmetry property in complex space is related to the fact that the absolute value of numbers $|a+ib| = |b+ia|$ are equals?</p>
| GEdgar | 442 | <p>In $\mathbb R^2$, the map $(a,b) \mapsto (b,a)$ is reflection in the
45-degree line $y=x$. This map is (of course) an isometry of the plane, so it is an isometry of $\mathbb C$.</p>
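<p>Concretely, swapping the real and imaginary parts leaves the modulus unchanged; a one-line check in Python (my addition):</p>

```python
# |a + bi| = sqrt(a^2 + b^2) is symmetric in a and b.
pairs = [(3, 4), (-1, 2), (0.5, -7)]
checks = [abs(abs(complex(a, b)) - abs(complex(b, a))) < 1e-12 for a, b in pairs]
print(checks)
```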
|
3,470,208 | <p><span class="math-container">$$f(x)=\begin{cases}
\dfrac{x}{\sin x}, & x>0\\
2-x, & x\le0
\end{cases}$$</span></p>
<p><span class="math-container">$$g(x)=\begin{cases}
x+3, &x<1\\
x^2-2x-2, &1\le x<2\\
x-5, & x\ge2
\end{cases}$$</span></p>
<p>Find left hand limit and right hand limit of <span class="math-container">$g(f(x))$</span> at <span class="math-container">$x = 0$</span> and hence find
<span class="math-container">$\lim_{x\to0}g(f(x))$</span></p>
<p>My attempt is as follows:-</p>
<p><span class="math-container">$$g(f(x))=\begin{cases}
5-x, &x\le0\\
\dfrac{x}{\sin x}+3 &0<x<1\\
\left(\dfrac{x}{\sin x}\right)^2-\dfrac{2x}{\sin x}-2, & 1\le x<2\\
\dfrac{x}{\sin x}-5 &x\ge2
\end{cases}$$</span></p>
<p>Let's find left hand limit</p>
<p><span class="math-container">$$l=\lim_{x\to0^{-}}g(f(x))$$</span>
<span class="math-container">$$l=\lim_{x\to0^{-}}5-x$$</span>
<span class="math-container">$$l=5$$</span></p>
<p>Let's find right hand limit</p>
<p><span class="math-container">$$r=\lim_{x\to0^{+}}g(f(x))$$</span>
<span class="math-container">$$r=\lim_{x\to0^{+}}\dfrac{x}{\sin x}+3$$</span>
<span class="math-container">$$r=4$$</span></p>
<p><span class="math-container">$$l\ne r$$</span>
<span class="math-container">$$\lim_{x\to0}g(f(x)) \text { doesn't exist }$$</span> </p>
<p>But actual answer is following:</p>
<p><span class="math-container">$$l=-3,r=-3,\lim_{x\to0}g(f(x))=-3$$</span></p>
<p>What mistake am I making here? I tried to find it but didn't get any breakthrough.</p>
| user3290550 | 278,972 | <p>I didn't compose the functions in the proper way, thanks to @user for pointing it out.</p>
<p>I found an interesting way to compose it which will avoid mistakes.</p>
<p>Replace <span class="math-container">$x$</span> by <span class="math-container">$f(x)$</span></p>
<p><span class="math-container">$$g(f(x))=\begin{cases}
f(x)+3, &f(x)<1\\
(f(x))^2-2f(x)-2, &1\le f(x)<2\\
f(x)-5, & f(x)\ge2
\end{cases}$$</span></p>
<p><span class="math-container">$$g(f(x))=\begin{cases}
f(x)+3, &\begin{cases}
\dfrac{x}{\sin x}<1, &x>0\\
2-x<1, &x\le0
\end{cases}\\
(f(x))^2-2f(x)-2, &\begin{cases}
1\le\dfrac{x}{\sin x}<2, &x>0\\
1\le2-x<2, &x\le0
\end{cases}\\
f(x)-5, & f(x)\ge2
\end{cases}$$</span></p>
<p><span class="math-container">$$g(f(x))=\begin{cases}
f(x)+3, &\begin{cases}
\dfrac{x}{\sin x}<1, &x>0 \text { contradiction }\\
x>1, &x\le0 \text { contradiction }
\end{cases}\\
(f(x))^2-2f(x)-2, &\begin{cases}
x\in R \text { and } \dfrac{x-2\sin x}{x}<0 , &x>0\\
x\le 1 \text { and } x>0, &x\le0 \text { contradiction }
\end{cases}\\
f(x)-5, & f(x)\ge2
\end{cases}$$</span></p>
<p><span class="math-container">$$g(f(x))=\begin{cases}
(f(x))^2-2f(x)-2, &\begin{cases}
x\in R \text{ and } x\in \left(0,\dfrac{\pi}{2}\right],&x>0 \text { rough estimation }\\
\end{cases}\\
f(x)-5, & \begin{cases}
\dfrac{x}{\sin x}\ge2, &x>0\\
2-x\ge2, &x\le0
\end{cases}
\end{cases}$$</span></p>
<p><span class="math-container">$$g(f(x))=\begin{cases}
(f(x))^2-2f(x)-2, &\begin{cases}
x\in R \text{ and } x\in \left(0,\dfrac{\pi}{2}\right],&x>0 \text { rough estimation }\\
\end{cases}\\
f(x)-5, & \begin{cases}
x\in \left(\dfrac{\pi}{2},\infty\right), &x>0 \text { rough estimation }\\
x\le0, &x\le0
\end{cases}
\end{cases}$$</span></p>
<p><span class="math-container">$$g(f(x))=\begin{cases}
\left(\dfrac{x}{\sin x}\right)^2-\dfrac{2x}{\sin x}-2, &x\in \left(0,\dfrac{\pi}{2}\right] \text { rough estimation }\\
-3-x, & \text { otherwise }
\end{cases}$$</span></p>
|
2,755,143 | <p>Find Number of integers satisfying $$\left[\frac{x}{100}\left[\frac{x}{100}\right]\right]=5$$ where $[.]$ is Floor function.</p>
<p>I assumed $$x=100q+r$$ where $0 \le r \le 99$</p>
<p>Then we have </p>
<p>$$\left[\left(q+\frac{r}{100}\right)q\right]=5$$ $\implies$</p>
<p>$$q^2+\left[\frac{rq}{100}\right]=5$$</p>
<p>Since $rq$ is an integer we have $$rq=100p+r_1$$ where $0 \le r_1 \le 99$</p>
<p>Then we have</p>
<p>$$q^2+p+\left[\frac{r_1}{100}\right]=5$$ $\implies$</p>
<p>$$q^2+p=5$$ so the possible ordered pairs $(p,q)$ are</p>
<p>$(1,2)$, $(1,-2)$, $(-4, 3)$; I am getting infinitely many pairs.</p>
<p>How to proceed?</p>
| Frostic | 402,923 | <p>I got $x\in [|250,299|]$ </p>
<p>I solved it writing $x = a10^2+b10^1+c10^0$. And reasoning on $a$ then $b$ then $c$ given the fact that $f$ is non decreasing. </p>
<p>$f(200) = 4$ and
$f(300) = 9$</p>
<p>Therefore $a = 2$</p>
<hr>
<p>$f(240) = 4$ and
$f(250) = 5$ and
$f(290) = 5$</p>
<p>Therefore $5\leq b \leq 9$</p>
<hr>
<p>$f(299) = 5$</p>
<p>Therefore $0\leq c\leq 9$</p>
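<p>A brute-force check in Python (my addition) over a generous integer range confirms that the solution set is exactly $\{250,\dots,299\}$. Note that for integer $x$ the expression $\left[\frac{x}{100}\left[\frac{x}{100}\right]\right]$ can be computed exactly with floor division, avoiding floating point:</p>

```python
def F(x):
    q = x // 100            # floor(x/100), exact for negative x too
    return (x * q) // 100   # floor((x/100)*q) = floor(x*q/100), exact

solutions = [x for x in range(-10**4, 10**4) if F(x) == 5]
print(len(solutions), min(solutions), max(solutions))  # 50 250 299
```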
|
3,519,515 | <p>Here, I wonder what is a good way to use the epsilon-delta definition or converging sequences to show that the set S of rationals in [0,1] has/does not have volume 0 (i.e. whether there exists a <strong>finite</strong> number of intervals whose union contains all of S such that the <strong>sum</strong> of the lengths of all the intervals is less than any <span class="math-container">$\epsilon > 0$</span> you fix). My intuition is that it does not have volume 0. I am lost on where to start the proof. </p>
<p>Does the idea of closure of S play a part in this proof? and how?</p>
<p>Also, is it possible to prove this using pigeonhole principle involving infinite rationals in one interval?</p>
| Sarvesh Ravichandran Iyer | 316,409 | <p>Indeed, closure does play a part, if you are going for <em>finitely</em> many intervals to do the covering.</p>
<p>The point is, if <span class="math-container">$I_1,...,I_n$</span> are intervals that covered <span class="math-container">$S$</span>, say <span class="math-container">$S \subset \cup_{i=1}^n I_i$</span>, then we can assume that the <span class="math-container">$I_i$</span> are closed (doesn't change their length and increases the union anyway), so that <span class="math-container">$S$</span> is contained in the finite union of the closed sets , which will remain closed (by finiteness of how many sets we are taking a union of). Thus, by definition of closure, we get that <span class="math-container">$[0,1]$</span>, the closure of <span class="math-container">$S$</span>, is contained in the union of these intervals.</p>
<p>Now, by monotonicity and subadditivity of the length of intervals under taking their union, we get that the sum of the lengths of the intervals is at least <span class="math-container">$1$</span>. Thus, under taking only finite intervals for covering, we cannot get volume zero.</p>
<p>If you allow infinitely many intervals, then the union is <em>no more closed</em>, for example, and then one checks that we can narrow the lengths as we want to make the volume zero.</p>
<hr>
<p>Suppose <span class="math-container">$I_i$</span> are closed intervals such that <span class="math-container">$[0,1] \subset I_i$</span>. We have the usual notion of length : if <span class="math-container">$I = [a,b]$</span> where <span class="math-container">$b \geq a$</span> then <span class="math-container">$l(I) = b-a$</span>.</p>
<p>Now, we define the length of a finite disjoint union of intervals, by taking the sum. Thus , for example <span class="math-container">$l([0,1] \cup [2,3]) = 1+1 = 2$</span>.</p>
<p>When we take the union of two intervals, either they overlap so some length is lost in the union, or the union is disjoint so retains the length. In short, it is a standard argument (extended to more intervals) to show that <span class="math-container">$l(A \cup B) \leq l(A) + l(B)$</span> for <span class="math-container">$A,B$</span> disjoint union of intervals, by checking how much each interval overlaps with another and so on.</p>
<p>Next, we get by induction that <span class="math-container">$l(A_1 \cup ... \cup A_n) \leq l(A_1) + ... + l(A_n)$</span>. </p>
<p>Monotonicity of length is clear : if a set contains another, it must have larger length : look just at intervals, and extend to a union of intervals again.</p>
<p>Now, using the definition, we get <span class="math-container">$l([0,1]) \leq l(I_1) + ... + l(I_n)$</span> because <span class="math-container">$[0,1]$</span> is contained in <span class="math-container">$\cup I_i$</span>, so we are using both monotonicity and subadditivity here.</p>
|
6,931 | <p>One of the key steps in <a href="http://en.wikipedia.org/wiki/Merge_sort">merge sort</a> is the merging step. Given two sorted lists</p>
<pre><code>sorted1={2,6,10,13,16,17,19};
sorted2={1,3,4,5,7,8,9,11,12,14,15,18,20};
</code></pre>
<p>of integers, we want to produce a new list as follows:</p>
<ol>
<li>Start with an empty list <code>acc</code>.</li>
<li>Compare the first elements of <code>sorted1</code> and <code>sorted2</code>. Append the smaller one to <code>acc</code>.</li>
<li>Remove the element used in step 2 from either <code>sorted1</code> or <code>sorted2</code>.</li>
<li>If neither <code>sorted1</code> nor <code>sorted2</code> is empty, go to step 2. Otherwise append the remaining list to <code>acc</code> and output the value of <code>acc</code>.</li>
</ol>
<p>Applying this process to <code>sorted1</code> and <code>sorted2</code>, we get</p>
<pre><code>acc={1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20}
</code></pre>
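<p>The four numbered steps translate almost verbatim into a two-pointer loop. Here is a reference sketch in Python (my addition; the name <code>merge_lists</code> and the default comparison are illustrative only):</p>

```python
def merge_lists(list1, list2, f=lambda a, b: a <= b):
    acc = []
    i = j = 0
    # Steps 2-3: repeatedly move the f-preferred head element into acc.
    while i < len(list1) and j < len(list2):
        if f(list1[i], list2[j]):
            acc.append(list1[i]); i += 1
        else:
            acc.append(list2[j]); j += 1
    # Step 4: one list is exhausted; append the remainder of the other.
    acc.extend(list1[i:])
    acc.extend(list2[j:])
    return acc

sorted1 = [2, 6, 10, 13, 16, 17, 19]
sorted2 = [1, 3, 4, 5, 7, 8, 9, 11, 12, 14, 15, 18, 20]
print(merge_lists(sorted1, sorted2))  # 1..20 in order
```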
<p><em>Added in response to Rojo's question: We can carry out this procedure even if the two lists are not pre-sorted. So <code>list1</code> and <code>list2</code> below are not assumed to be sorted.</em></p>
<p>If there were a built-in function <code>MergeList</code> which carries out this process, it would probably take three arguments <code>list1</code>, <code>list2</code>, and <code>f</code>. Here <code>f</code> is a Boolean function of two arguments used to decide which element to pick. In the case of merge sort, <code>f = LessEqual</code>. I feel that <code>MergeList</code> is a fundamental list operation, so</p>
<p><strong>Question 1: Is there such a built-in function or one very close to that?</strong></p>
<p>If I were to write such a function in Scheme, I would use a recursive definition equivalent to the following:</p>
<pre><code>MergeList[list1_,{},f_,acc_:{}]:=Join[acc,list1];
MergeList[{},list2_,f_,acc_:{}]:=Join[acc,list2];
MergeList[list1_,list2_,f_,acc_:{}]:=
If[
f@@First/@{list1,list2},
MergeList[Rest[list1],list2,f,Append[acc,First[list1]]],
MergeList[list1,Rest[list2],f,Append[acc,First[list2]]]
]
</code></pre>
<p><em>Sample output with unsorted lists:</em></p>
<pre><code>In[2]:= MergeList[{2,5,1},{3,6,4},LessEqual]
Out[2]= {2,3,5,1,6,4}
</code></pre>
<p>My impression is that recursive solutions tend to be inefficient in Mathematica, so</p>
<p><strong>Question 2: What would be a better way to implement <code>MergeList</code>?</strong></p>
<p>If you have tips about converting loops into their functional equivalents, feel free to mention them as well.</p>
| Leonid Shifrin | 81 | <h2>Preamble</h2>
<p>Since I agree that it would be nice to have a generic function of this type, I will provide a general implementation. First, I will give a generic one based on linked lists, then I will add a JIT-compiled one for special numeric types, and lastly, I will bring it all together.</p>
<h2>Top-level implementation based on linked lists</h2>
<p>Here is a reasonably efficient implementation based on linked lists:</p>
<pre><code>ClearAll[toLinkedList, ll];
SetAttributes[ll, HoldAllComplete];
toLinkedList[s_List] := Fold[ll[#2, #1] &, ll[], Reverse[s]];
</code></pre>
<p>and the main function:</p>
<pre><code>ClearAll[merge];
merge[a_ll, ll[], s_, _] := List @@ Flatten[ll[s, a], Infinity, ll];
merge[ll[], b_ll, s_, _] := List @@ Flatten[ll[s, b], Infinity, ll];
merge[ll[a1_, atail_], b : ll[b1_, _], s_, f_: LessEqual] /;f[a1, b1] :=
merge[atail, b, ll[s, a1], f];
merge[a : ll[a1_, _], ll[b1_, brest_], s_, f_: LessEqual] :=
merge[a, brest, ll[s, b1], f];
merge[a_List, b_List, f_: LessEqual] :=
merge[toLinkedList@a, toLinkedList@b, ll[], f];
</code></pre>
<p>For example:</p>
<pre><code>merge[{2,5,1},{3,6,4},LessEqual]
</code></pre>
<blockquote>
<pre><code> {2,3,5,1,6,4}
</code></pre>
</blockquote>
<pre><code>merge[{2,5,1},{3,6,4},Greater]
</code></pre>
<blockquote>
<pre><code> {3,6,4,2,5,1}
</code></pre>
</blockquote>
<p>And also for large lists:</p>
<pre><code>large1 = RandomInteger[100, 10000];
large2 = RandomInteger[100, 10000];
Block[{$IterationLimit = Infinity},
merge[large1,large2,LessEqual]]//Short//AbsoluteTiming
</code></pre>
<blockquote>
<pre><code>{0.0751953,{70,54,78,84,11,21,41,49,78,93,90,70,19,
<<19975>>,42,2,10,40,53,12,2,47,89,40,2,80}}
</code></pre>
</blockquote>
<p>For a complete implementation of merge sort algorithm based on linked lists, see <a href="https://mathematica.stackexchange.com/questions/237/how-can-i-ensure-that-i-am-constructing-patterns-in-the-most-efficient-way-possi/295#295">this post</a> (the difference there is that I used repeated rule application instead of recursion. Originally, the goal of that example was to show that <code>ReplaceRepeated</code> is not necessarily slow if the patterns are constructed efficiently). </p>
<h2>Full implementation including JIT-compilation</h2>
<p>I'd like to show here how one could implement a fairly complete function which would automatically dispatch to an efficient JIT-compiled code when the arguments are appropriate. Compilation will work not just for numeric lists, but for lists of tensors in general, as long as they are of the same shape.</p>
<h3>JIT - compilation</h3>
<p>First comes the JIT-compiled version, done along the lines discussed in <a href="https://mathematica.stackexchange.com/questions/2335/metaprogramming-in-mathematica/2352#2352">this answer</a>, section "Making JIT-compiled functions"</p>
<pre><code>ClearAll[mergeJIT];
mergeJIT[pred_, listType_, target : ("MVM" | "C") : "MVM"] :=
mergeJIT[pred, Verbatim[listType], target] =
Block[{fst, sec},
With[{decl = {Prepend[listType, fst], Prepend[listType, sec]}},
Compile @@
Hold[decl,
Module[{result = Table[0, {Length[fst] + Length[sec]}], i = 0,
fctr = 1, sctr = 1},
While[fctr <= Length[fst] && sctr <= Length[sec],
If[pred[fst[[fctr]], sec[[sctr]]],
result[[++i]] = fst[[fctr++]],
(* else *)
result[[++i]] = sec[[sctr++]]
]
];
If[fctr > Length[fst],
result[[i + 1 ;; -1]] = sec[[sctr ;; -1]],
(* else *)
result[[i + 1 ;; -1]] = fst[[fctr ;; -1]]
];
result
],
CompilationTarget -> target
]]];
</code></pre>
<p>You can use this in isolation:</p>
<pre><code>mergeJIT[LessEqual,{_Integer,1},"MVM"][{2,5,1},{3,6,4}]
</code></pre>
<blockquote>
<pre><code> {2,3,5,1,6,4}
</code></pre>
</blockquote>
<p>but it is much better to use as a part of the generic function, which would figure out the types for you automatically.</p>
<h3>Generic function implementation</h3>
<p>This is a function to find the type of our lists:</p>
<pre><code>Clear[getType, $useCompile];
getType[arg_List] /; $useCompile && ArrayQ[arg, _, IntegerQ] :=
{_Integer, Length@Dimensions@arg};
getType[arg_List] /; $useCompile && ArrayQ[arg, _, NumericQ] &&
Re[arg] == arg :=
{_Real, Length@Dimensions@arg};
getType[_] := General;
</code></pre>
<p>This is a function to dispatch to a right type:</p>
<pre><code>Clear[mergeDispatch];
SetAttributes[mergeDispatch, Orderless];
mergeDispatch[{Verbatim[_Integer], n_}, {Verbatim[_Real], n_}, pred_] :=
mergeDispatch[{Verbatim[_Real], n}, {Verbatim[_Real], n}, pred];
mergeDispatch[f : {Verbatim[_Real], n_}, {Verbatim[_Real], n_}, pred_] :=
mergeJIT[pred, f, $target];
mergeDispatch[f : {Verbatim[_Integer], n_}, {Verbatim[_Integer], n_}, pred_] :=
mergeJIT[pred, f, $target];
mergeDispatch[_, _, pred_] :=
Function[{fst, sec},
Block[{$IterationLimit = Infinity},
merge[fst, sec, pred]]];
</code></pre>
<p>and this is a function to bring it all together:</p>
<pre><code>ClearAll[mergeList];
Options[mergeList] =
{
CompileToC -> False,
Compiled -> True
};
mergeList[f_, s_, pred_, opts : OptionsPattern[]] :=
Block[{
$target = If[TrueQ[OptionValue[CompileToC]], "C", "MVM"],
$useCompile = TrueQ[OptionValue[Compiled]]
},
mergeDispatch[getType@f, getType@s, pred][f, s]
];
</code></pre>
<p>Finally, a helper function to clear the cache of <code>mergeJIT</code>, if that would be desired:</p>
<pre><code>ClearAll[clearMergeJITCache];
clearMergeJITCache[] :=
DownValues[mergeJIT] = {Last@DownValues[mergeJIT]};
</code></pre>
<h3>Benchmarks and tests</h3>
<p>First, create test data:</p>
<pre><code>clearMergeJITCache[];
huge1 = RandomInteger[1000,1000000];
huge2 = RandomInteger[1000,1000000];
</code></pre>
<p>A first call to the function with C compilation target is expensive:</p>
<pre><code>mergeList[huge1,huge2,Less,CompileToC -> True]//Short//AbsoluteTiming
</code></pre>
<blockquote>
<pre><code> {3.8652344,{267,461,66,607,797,116,197,474,852,805,135,
<<1999978>>,266,667,799,280,261,930,241,83,594,904,894}}
</code></pre>
</blockquote>
<p>But then, for the same types of lists, it will pay off for huge lists:</p>
<pre><code>mergeList[huge1,huge2,Less,CompileToC -> True]//Short//AbsoluteTiming
</code></pre>
<blockquote>
<pre><code> {0.0468750,{267,461,66,607,797,116,197,474,852,805,135,
<<1999978>>,266,667,799,280,261,930,241,83,594,904,894}}
</code></pre>
</blockquote>
<p>On the other hand, the call with MVM target is fast out of the box, but not as fast as the one with the C target after the "warm-up":</p>
<pre><code>mergeList[huge1,huge2,Less]//Short//AbsoluteTiming
</code></pre>
<blockquote>
<pre><code> {0.2138672,{267,461,66,607,797,116,197,474,852,805,135,
<<1999978>>,266,667,799,280,261,930,241,83,594,904,894}}
</code></pre>
</blockquote>
<p>The call to generic one is general but comparatively very slow:</p>
<pre><code>mergeList[huge1,huge2,Less,Compiled->False]//Short//AbsoluteTiming
</code></pre>
<blockquote>
<pre><code> {5.015,{267,461,66,607,797,116,197,474,852,805,135,
<<1999978>>,266,667,799,280,261,930,241,83,594,904,894}}
</code></pre>
</blockquote>
|
58,525 | <p>I am trying to make surface plots of squashed spheres. The spheres are defined by a list of points. For simplicity, consider the round sphere:</p>
<pre><code>pts = Flatten[
Table[{Sin[θ] Cos[ϕ], Sin[θ] Sin[ϕ],
Cos[θ]}, {θ, 0, π, π/14}, {ϕ, 0,
2 π, 2 π/14}], 1];
</code></pre>
<p>One way to plot this is:</p>
<pre><code>ListPlot3D[{pts,
-pts,}
BoundaryStyle -> None,
ColorFunction -> "Rainbow",
InterpolationOrder -> 2,
BoxRatios -> {1, 1, 1}]
</code></pre>
<p><code>ListPlot3D</code> has the nice feature that it interpolates the data to make a smooth surface. However, the northern and southern hemispheres are shaded differently, so there is a clear break at the equator:</p>
<p><img src="https://i.stack.imgur.com/A7jdx.png" alt="enter image description here"></p>
<p>An alternative is to do </p>
<pre><code>ListSurfacePlot3D[pts]
</code></pre>
<p>Now the shading is uniform (there is no break at the equator). However, the data is no longer interpolated (and interpolation is not an option for <code>ListSurfacePlot3D</code>), so the surface looks rough and lumpy:</p>
<p><img src="https://i.stack.imgur.com/xkfXr.png" alt="enter image description here"></p>
<p>I am trying to find a solution that combines the best of both world: the smooth surface of <code>ListPlot3D</code> with the uniform shading of <code>ListSurfacePlot3D</code>.</p>
| Jens | 245 | <p>From the example using <code>ListPlot3D</code>, I assume that your data points can be described by a height function above the plane. In other words, they describe a <em>convex</em> shape with reflection symmetry at the z=0 plane.</p>
<p>Then the only thing you may have to modify is the lighting and the ratio of the axes for your 3D plot, to get a smoother appearance with uniform color as is stated in the question:</p>
<pre><code>pts = Flatten[
Table[{Sin[θ] Cos[ϕ], Sin[θ] Sin[ϕ],
Cos[θ]}, {θ, 0, Pi, Pi/14}, {ϕ, 0,
2 Pi, 2 Pi/14}], 1];
ListPlot3D[{pts, -pts}, BoundaryStyle -> None,
ColorFunction -> (Orange &), InterpolationOrder -> 2,
BoxRatios -> Automatic,
Lighting -> {{"Directional", White, {3, 0, 0}}, {"Ambient",
LightGray}}]
</code></pre>
<p><img src="https://i.stack.imgur.com/B5Duy.png" alt="sphere"></p>
<p>I put the directional light source in a position that illuminates both halves of the sphere equally, so that the equatorial "crease" doesn't show up (no matter how you rotate the output).</p>
|
248,710 | <p>The organizers of a cycling competition know that about 8% of the racers use steroids. They decided to employ a test that will help them identify steroid-users. The following is known about the test: When a person uses steroids, the person will test positive 96% of the time; on the other hand, when a person does not use steroids, the person will test positive only 9% of the time. The test seems reasonable enough to the organizers. The one last thing they want to find out is this: Suppose a cyclist does test positive, what is the probability that the cyclist is really a steroid-user.</p>
<p>Let S be the event that a randomly selected cyclist is a steroid-user, and P the event that a randomly selected cyclist tests positive.</p>
<p><strong>My question is:</strong> can someone please translate and explain P(P|S) and P(S|P)?</p>
| Hagen von Eitzen | 39,174 | <p>We are given that $P(S)=0.08$ (hence $P(\neg S)=0.92$), $P(P|S)=0.96$ and $P(P|\neg S)=0.09$.
What we want to know is $P(S|P)$.</p>
<p>Note that $P(S\cap P)=P(S|P)\cdot P(P)$ as well as $P(S\cap P)=P(P|S)\cdot P(S)$, therefore
$$ P(S|P) = \frac{P(P|S)\cdot P(S)}{P(P)}.$$
Thus we first need $P(P)$, which we get from $P(P)=P(P|S)P(S)+P(P|\neg S)P(\neg S)$. Now everything is reduced to the given values.</p>
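<p>To make the numbers concrete, here is a short Python sketch of this computation for the cycling problem (the variable names are mine):</p>

```python
# Bayes' rule for the steroid test: P(S|P) = P(P|S) P(S) / P(P)
p_s = 0.08                 # prior: fraction of steroid users
p_pos_given_s = 0.96       # P(P|S): users test positive 96% of the time
p_pos_given_not_s = 0.09   # P(P|not S): false-positive rate

# law of total probability: P(P) = P(P|S)P(S) + P(P|not S)P(not S)
p_pos = p_pos_given_s * p_s + p_pos_given_not_s * (1 - p_s)

# posterior probability of being a user given a positive test
p_s_given_pos = p_pos_given_s * p_s / p_pos
print(round(p_s_given_pos, 4))  # -> 0.4812
```

So despite the test looking reasonable, a positive result means the cyclist is a user with probability of only about 48%.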
|
<p>I need an algorithm to produce all strings with the following property. Here capital letters refer to strings, and small letters refer to characters. $XY$ means the concatenation of strings $X$ and $Y$.</p>
<p>Let $\Sigma = \{a_0, a_1,\ldots,a_n,a_0^{-1},a_1^{-1},\ldots,a_n^{-1}\}$ be the set of usable characters. Every string is made up of these symbols.</p>
<p>Outputting any set $S_n$ with the following properties achieves the goal ($n\geq 2$):</p>
<ol>
<li><p>If $W\in S_n$, then any cyclic shift of $W$ is not in $S_n$</p></li>
<li><p>If $W\in S_n$, then $|W| = n$</p></li>
<li><p>If $W\in S_n$, then $W \neq Xa_ia_i^{-1}Y$, $W \neq Xa_i^{-1}a_iY$, $W \neq a_iXa_i^{-1}$ and $W \neq a_i^{-1}Xa_i$ for any string $X$ and $Y$.</p></li>
<li><p>If $W\not \in S_n$, $S_n \cup \{W\}$ will violate at least one of the above 3 properties. </p></li>
</ol>
<p>Clearly any algorithm one can come up with is an exponential algorithm. but I'm still searching for a fast algorithm because this have some practical uses. At least for $\Sigma=\{a_0,a_1,a_0^{-1},a_1^{-1}\}$ and $n<25$.</p>
<p>The naive approach for my practical application requires $O(4^n)$ time. It generates all strings of length $n$. Whenever a new string is generated, the program creates all cyclic permutations of the string and checks, through a hash table, whether it has been generated before. If not, it is added to the list of result strings. The total number of operations is $O(n4^n)$, and that's assuming perfect hashing. In practice $n=12$ is the limit.</p>
<p>Are there better approaches? Clearly a lot of useless strings are generated.</p>
<p>Edit: The practical usage is to find the maximum of the minimum self-intersection of a curve on a torus with a hole. Every curve can be characterized by a string as described above. Therefore I have to generate every string and feed it to a program that calculates the minimum self-intersection.</p>
| deinst | 943 | <p>Making explicit what is implicit in Qiaochu Yuan's comment, and demonstrating that <a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.97.4244&rep=rep1&type=pdf" rel="nofollow">someone else's work</a> has failed to evade my eyes. (It is a neat article, read it.) I present this adaptation of Duval's algorithm.</p>
<p>Assign an order to your symbols, say $a_1, a_2, a_1^{-1}, a_2^{-1}$. Let first_symbol and last_symbol be the first and last symbols in the set, and let next be a function that gives the next symbol in sequence. The function conflict checks whether two symbols are inverses of each other.</p>
<pre><code>w[1] <- first_symbol
i <- 1
repeat
for j = 1 to n–i
do w[i+j] <- w[j]
if i = n and not conflict(w[1], w[n])
then output w[1] ... w[n]
i <- n
while i > 0 and w[i] = last_symbol
do i <- i–1
    if i > 0
then w[i] <- next(w[i])
if i > 1 and conflict(w[i-1], w[i])
then w[i] <- next(w[i])
until i = 0
</code></pre>
<p>This is just Duval's algorithm for generating a list of the lexicographically minimal cyclic shifts, with extra checks to step over the cases where a conflict would occur. I have not bothered to work out a formal proof that this works, nor to implement it in actual code. Caveat Emptor.</p>
<p><strong>Edit</strong> As expected, I missed a corner case. The following python code appears to work. It takes the length of the cycle and a list of integers (I use integers for the group)</p>
<pre><code>def cycles(n,l):
w = range(n+1)
m = len(l) - 1
w[1] = 0
i = 1
while i > 0:
for j in range(n-i):
w[j + i + 1] = w[j + 1]
if i == n and l[w[1]] + l[w[n]] != 0:
print [l[w[i]] for i in xrange(1,n+1)]
i = n
while i > 0 and w[i] == m:
i = i - 1
while i > 0:
if i > 0:
w[i] = w[i] + 1
if i > 1 and l[w[i-1]] + l[w[i]] == 0:
w[i] = w[i] + 1
if w[i] <= m:
break
i = i - 1
</code></pre>
<p>to get the length four cycles for {-2, -1, 1, 2} call</p>
<pre><code>cycles(4, [-2, -1, 1, 2])
</code></pre>
<p>resulting in</p>
<pre><code>[-2, -2, -2, -1]
[-2, -2, -2, 1]
[-2, -2, -1, -1]
[-2, -2, 1, 1]
[-2, -1, -2, 1]
[-2, -1, -1, -1]
[-2, -1, 2, -1]
[-2, -1, 2, 1]
[-2, 1, 1, 1]
[-2, 1, 2, -1]
[-2, 1, 2, 1]
[-1, -1, -1, 2]
[-1, -1, 2, 2]
[-1, 2, 1, 2]
[-1, 2, 2, 2]
[1, 1, 1, 2]
[1, 1, 2, 2]
[1, 2, 2, 2]
</code></pre>
<p><strong>Ahem</strong> Didn't I say</p>
<pre><code>def cycles(n,l):
w = range(n+1)
m = len(l) - 1
w[1] = 0
i = 1
while i > 0:
for j in range(n-i):
w[j + i + 1] = w[j + 1]
if (i == n) and ((l[w[1]] + l[w[n]]) != 0):
print [l[w[i]] for i in xrange(1,n+1)]
i = n
while i > 0 and w[i] == m:
i = i - 1
while i > 0:
if i > 0:
w[i] = w[i] + 1
if (i > 1) and ((l[w[i-1]] + l[w[i]]) == 0):
w[i] = w[i] + 1
if w[i] <= m:
break
i = i - 1
</code></pre>
<p>That's what I should have said if I took my own advice. Sorry.</p>
|
1,821,800 | <p>Consider the system of ODE in $\Bbb R^2 $ </p>
<p>$\dfrac{dY}{dt}=AY$ where $Y(0)=\begin{bmatrix} 0 \\ 1\end{bmatrix}$, $t>0$,</p>
<p>where $A=\begin{bmatrix} -1 & 1 \\ 0 & -1\end{bmatrix}$</p>
<p>and $Y(t)=\begin{bmatrix} y_1(t) \\ y_2(t)\end{bmatrix}$.</p>
<p><strong>My try</strong>:
$y_1'(t)=-y_1(t)+y_2(t)$
and
$y_2'(t)=-y_2(t)$</p>
<p>On solving the second equation (with $y_2(0)=1$) I got $y_2(t)=e^{-t}$</p>
<p>Putting this in the first one I got:
$y_1'(t)+y_1(t)=e^{-t}$</p>
<p>On solving for the homogeneous solution and a particular integral I got</p>
<p>$y_1(t)=Ae^{-t}+te^{-t}$</p>
<p>Putting $t=0$ we get $A=0$ so $y_1(t)=te^{-t}$.</p>
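<p>A quick numerical sanity check of this solution, using a centered finite difference to approximate the derivatives (a sketch, not part of the original working):</p>

```python
import math

# candidate solution of dY/dt = AY with Y(0) = (0, 1)
def y1(t): return t * math.exp(-t)
def y2(t): return math.exp(-t)

h = 1e-6
for t in [0.1, 0.5, 1.0, 2.0]:
    d1 = (y1(t + h) - y1(t - h)) / (2 * h)   # ~ y1'(t)
    d2 = (y2(t + h) - y2(t - h)) / (2 * h)   # ~ y2'(t)
    assert abs(d1 - (-y1(t) + y2(t))) < 1e-6  # y1' = -y1 + y2
    assert abs(d2 - (-y2(t))) < 1e-6          # y2' = -y2
print("solution verified")
```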
| Arthur | 15,500 | <p>Note that if $\{x^2\} + \{x\} = 1$, then $x^2 + x$ is an integer. Solve the equation $x^2 + x = n$ for an arbitrary $n$, and see that if it has rational solutions, then those rationals must be integers, which means that $\{x^2\} + \{x\} = 0$.</p>
<p>To see that $x^2 + x$ must be an integer, note that for any $y$ we may use the floor function to write $y = \lfloor y\rfloor + \{y\}$. This gives
$$
x^2 + x = \lfloor x^2\rfloor + \{x^2\} + \lfloor x \rfloor + \{x\} = \lfloor x^2\rfloor + \lfloor x \rfloor + 1
$$
which is an integer.</p>
|
<p>I'm trying to find an example of a space that is Hausdorff and locally compact but not second countable, and I'm stuck. I searched for an example in the book <em>Counterexamples in Topology</em>, but I couldn't find anything.<br>
Thank you for any help.</p>
| Asaf Karagila | 622 | <p><strong>Hint:</strong> Discrete spaces are locally compact.</p>
|
507,467 | <p>How do you factor $x^3 + x - 2$?</p>
<p>Hint: Write it as $(x^3-x^2+x^2-x+2x-2)$ to get $(x-1)(x^2+x+2)$</p>
<p>Note the factored form <a href="http://www.wolframalpha.com/input/?i=x%5E3+%2B+x+-+2" rel="nofollow noreferrer">here</a>. Thanks!</p>
| Mikasa | 8,581 | <p>Note that the sum of the coefficients is $0$:
$$+1+1+(-2)=0$$
so the polynomial has a factor like $(x-1)$.</p>
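<p>One can confirm the factorization by expanding $(x-1)(x^2+x+2)$, for instance with a small helper that multiplies coefficient lists (lowest degree first; the helper is my own illustration):</p>

```python
def polymul(a, b):
    """Multiply two polynomials given as coefficient lists, lowest degree first."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

# (x - 1) = [-1, 1] and (x^2 + x + 2) = [2, 1, 1]
print(polymul([-1, 1], [2, 1, 1]))  # -> [-2, 1, 0, 1], i.e. x^3 + x - 2
```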
|
470,739 | <p>Assume $S$ and $T$ are diagonalizable maps on $\mathbb{R}^n$ such that $S\circ T$=$T \circ S$. Then $S$ and $T$ have a common eigenvector.</p>
<p>I already have a proof, but I just need validation of one part.
My proof:
Let $v$ be an eigenvector of $T$. This means $\exists \; \lambda \in \mathbb{R}$ such that $T(v)=\lambda v$. Then, using the fact that $S\circ T = T \circ S$, we have</p>
<p>$$ S(T(v)) = (S\circ T)(v)=(T \circ S)(v)=T(S(v)) \Longrightarrow T(S(v))=\lambda S(v)$$</p>
<p>Thus, $S(v)$ is also an eigenvector of $T$ (provided $S(v)\neq 0$). So, $S$ maps eigenvectors of $T$ to eigenvectors of $T$. Thus, $S$ and $T$ must have a common eigenvector.</p>
<p>How would one rigorously prove that if $S$ maps eigenvectors of $T$ to eigenvectors of $T$, then $S$ and $T$ have a common eigenvector?</p>
<p>Thanks.</p>
| Pete L. Clark | 299 | <p>Since I keep hinting that I want to see a certain answer, perhaps I had better just post it.</p>
<p>The OP has shown that for every <span class="math-container">$\lambda \in \mathbb{R}$</span>, the <span class="math-container">$\lambda$</span>-eigenspace <span class="math-container">$E_{\lambda}(T)$</span> is an <span class="math-container">$S$</span>-invariant subspace: <span class="math-container">$S E_{\lambda}(T) \subset E_{\lambda}(T)$</span>. (This holds vacuously if <span class="math-container">$\lambda$</span> is not an eigenvalue of <span class="math-container">$T$</span>. Henceforth let's assume it is.) Thus we may consider the restriction of <span class="math-container">$S$</span> to <span class="math-container">$E_{\lambda}(T)$</span>, and if we can show that this transformation has an eigenvector, it is an eigenvector for both <span class="math-container">$T$</span> and <span class="math-container">$S$</span>.</p>
<p>Note that if the scalar field were algebraically closed (e.g. <span class="math-container">$\mathbb{C}$</span>), then we would automatically have an eigenvector. But since the given scalar field, <span class="math-container">$\mathbb{R}$</span>, is not algebraically closed, this is not automatic: over any non-algebraically closed field <span class="math-container">$K$</span>, there are linear transformations of <span class="math-container">$K^n$</span> (with <span class="math-container">$0 < n < \infty)$</span> without eigenvectors. In fact the OP's assertion works over any scalar field <span class="math-container">$K$</span> whatsoever.</p>
<p>The key is the following claim:</p>
<blockquote>
<p>If <span class="math-container">$S: K^n \rightarrow K^n$</span> is a diagonalizable linear transformation and <span class="math-container">$W \subset K^n$</span> is an <span class="math-container">$S$</span>-invariant subspace, then the restriction of <span class="math-container">$S$</span> to <span class="math-container">$W$</span> is diagonalizable.</p>
</blockquote>
<p>For this I will use the following useful characterization of diagonalizable transformations.</p>
<blockquote>
<p><strong>Diagonalizability Theorem</strong>: A linear transformation <span class="math-container">$S: K^n \rightarrow K^n$</span> is diagonalizable iff its minimal polynomial is squarefree and split, i.e., factors as a product of distinct linear factors.</p>
</blockquote>
<p>For a proof, see e.g. Theorem 4.14 <a href="http://alpha.math.uga.edu/%7Epete/invariant_subspaces.pdf" rel="nofollow noreferrer">of these notes</a>.</p>
<p>Now it is clear that the minimal polynomial of the restriction of <span class="math-container">$S$</span> to an invariant subspace divides the minimal polynomial of <span class="math-container">$S$</span> and that a monic polynomial which divides a squarefree split polynomial is itself squarefree and split. So applying the Diagonalizability Theorem in one direction and then the other, we see that <span class="math-container">$S|_W$</span> is diagonalizable.</p>
<p>This completes the answer to the OP's question. But actually it proves something much stronger: since each eigenspace for <span class="math-container">$T$</span> decomposes as a direct sum of simultaneous eigenspace for both <span class="math-container">$S$</span> and <span class="math-container">$T$</span>, in fact all of <span class="math-container">$K^n$</span>, being a direct sum of these spaces, also decomposes as a direct sum of simultaneous eigenspaces for <span class="math-container">$S$</span> and <span class="math-container">$T$</span>. Taking a basis of simultaneous eigenvectors diagonalizes both <span class="math-container">$S$</span> and <span class="math-container">$T$</span>, so we've shown:</p>
<blockquote>
<p><strong>Theorem</strong>: Let <span class="math-container">$S$</span> and <span class="math-container">$T$</span> be commuting diagonalizable linear transformations on a finite-dimensional <span class="math-container">$K$</span>-vector space (over any field <span class="math-container">$K$</span>). Then <span class="math-container">$S$</span> and <span class="math-container">$T$</span> are <em>simultaneously</em> diagonalizable: there is an invertible linear transformation <span class="math-container">$P$</span> such that <span class="math-container">$P S P^{-1}$</span> and <span class="math-container">$P T P^{-1}$</span> are both diagonal.</p>
</blockquote>
<p>Finally, recall that a linear transformation is <strong>semisimple</strong> if every invariant subspace has an invariant complement. The following result shows that this is a "nonsplit version of diagonalizability".</p>
<blockquote>
<p><strong>Semisimplicity Theorem</strong>: A linear transformation <span class="math-container">$S: K^n \rightarrow K^n$</span> is semisimple iff its minimal polynomial is squarefree, i.e., factors as a product of distinct (but not necessarily linear) factors.</p>
</blockquote>
<p>This is also part of Theorem 4.14 <a href="http://alpha.math.uga.edu/%7Epete/invariant_subspaces.pdf" rel="nofollow noreferrer">of these notes</a>.</p>
<p>From this result we can prove (in exactly the same way) the cousin of the first boxed result:</p>
<blockquote>
<p>If <span class="math-container">$S: K^n \rightarrow K^n$</span> is a semisimple linear transformation and <span class="math-container">$W \subset K^n$</span> is an <span class="math-container">$S$</span>-invariant subspace, then the restriction of <span class="math-container">$S$</span> to <span class="math-container">$W$</span> is semisimple.</p>
</blockquote>
<p>In contrast to the first result, I don't see how to prove this using the characteristic polynomial. And in fact, the argument using the characteristic polynomial shows that the restriction of <span class="math-container">$S$</span> to any invariant subspace has an eigenvalue: it does not (directly) show that it is diagonalizable. (In particular, recall that you cannot always tell whether a transformation is diagonalizable just by looking at its characteristic polynomial. So in this sense the minimal polynomial is a "better invariant".) So I think that I have now explained why I prefer this approach.</p>
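<p>As a small numerical illustration of the situation (a toy example of my own, not from the answer above): two commuting diagonalizable $2\times 2$ matrices built from the common eigenbasis $\{(1,0),(1,1)\}$, where $S$ visibly maps an eigenspace of $T$ into itself:</p>

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matvec(A, v):
    return [sum(A[i][k] * v[k] for k in range(2)) for i in range(2)]

# S has eigenvalues 2, 3 and T has eigenvalues 5, 7 on the same eigenbasis
S = [[2, 1], [0, 3]]
T = [[5, 2], [0, 7]]
assert matmul(S, T) == matmul(T, S)          # S and T commute

v = [1, 1]                                    # eigenvector of T (eigenvalue 7)
assert matvec(T, v) == [7 * x for x in v]

w = matvec(S, v)                              # S(v) = 3v stays in the 7-eigenspace of T
assert matvec(T, w) == [7 * x for x in w]
print("common eigenvector:", v)
```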
|
1,774,670 | <p>Among many fascinating sides of mathematics, there is one that I praise, especially for didactic purposes : the parallels that can be drawn between some "Continuous" and "Discrete" concepts.</p>
<p>I am looking for examples that help build a global understanding...</p>
<p>Disclaimer : Being driven, as said above, mainly by didactic purposes, I do not need full rigor here, although I do not at all deny the interest of a rigorous approach in other contexts, where it can be essential to show in which sense the continuous "object" is the limit of its discrete counterparts.</p>
<p>I would appreciate it if some colleagues could give examples of their own, in the style "my favorite one is...", or references to works about this theme.</p>
<p>Let me provide, on my side, five <strong>examples</strong>:</p>
<hr />
<p><strong>1st example:</strong> How to obtain the equations of certain epicycloids, here a nephroid :</p>
<p>Consider a <span class="math-container">$N$</span>-sided regular polygon <span class="math-container">$A_1,A_2,\cdots A_N$</span> with any integer <span class="math-container">$N$</span> large enough, say around <span class="math-container">$50$</span>. Let us connect every point <span class="math-container">$A_k$</span> to point <span class="math-container">$A_{3k}$</span> by a line segment (we assume a cyclic numbering). As can be seen on Fig. 1, a certain envelope curve is "suggested".</p>
<p>Question : which (smooth) curve is behind this construction ?</p>
<p>Answer : Let us consider two consecutive line segments like those represented on Fig. 1 with a larger width : the evolution speed of <span class="math-container">$A_{3k} \to A_{3k'}$</span>, where <span class="math-container">$k'=k+1$</span>, is three times the evolution speed of <span class="math-container">$A_{k} \to A_{k'}$</span>; hence the pivoting of the line segment takes place at the point (of the line segment) which is 3 times closer to <span class="math-container">$A_k$</span> than to <span class="math-container">$A_{3k}$</span> (the weights' ratio 3:1 comes from the size ratio of the ''homothetic'' triangles <span class="math-container">$P_kA_kA_k'$</span> and <span class="math-container">$P_kA_{3k}A_{3k'}$</span>). Said in an algebraic way :</p>
<p><span class="math-container">$$P_k=\tfrac{3}{4}e^{ika}+\tfrac{1}{4}e^{3ika}$$</span></p>
<p>(<span class="math-container">$A_k$</span> is identified with <span class="math-container">$e^{ika}$</span> with <span class="math-container">$a:=\tfrac{2 \pi}{N}$</span>).</p>
<p>Replacing now discrete values <span class="math-container">$ka$</span> by a continuous parameter <span class="math-container">$t$</span>, we get</p>
<p><span class="math-container">$$z=\tfrac{3}{4}e^{it}+\tfrac{1}{4}e^{3it}$$</span></p>
<p>i.e., a parametric representation of the nephroid, or the equivalent real equations :</p>
<p><span class="math-container">$$\begin{cases}x=\tfrac{3}{4}\cos(t)+\tfrac{1}{4}\cos(3t)\\
y=\tfrac{3}{4}\sin(t)+\tfrac{1}{4}\sin(3t)\end{cases}$$</span></p>
<p><a href="https://i.stack.imgur.com/XZPzg.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XZPzg.jpg" alt="enter image description here" /></a></p>
<p>Fig. 1 : <em>The nephroid as an envelope. It can be viewed as the trajectory of a point of a small circle with radius <span class="math-container">$\dfrac14$</span> rolling inside a circle with radius <span class="math-container">$1$</span>.</em></p>
<p>Remark: if, instead of connecting <span class="math-container">$A_k$</span> to <span class="math-container">$A_{3k}$</span>, we had connected it to <span class="math-container">$A_{2k}$</span>, we would have obtained a cardioid, with <span class="math-container">$A_{4k}$</span> an astroid, etc.</p>
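<p>The tangency can also be checked numerically: the velocity of <span class="math-container">$z(t)=\tfrac{3}{4}e^{it}+\tfrac{1}{4}e^{3it}$</span> must be parallel to the chord joining <span class="math-container">$e^{it}$</span> and <span class="math-container">$e^{3it}$</span>. A small sketch (avoiding <span class="math-container">$t=\pi/2$</span>, where the velocity vanishes at a cusp):</p>

```python
import cmath

# velocity of z(t) = (3/4) e^{it} + (1/4) e^{3it} vs. the chord A(t) -> A(3t)
for t in [0.3, 0.7, 1.1, 2.0]:
    z_dot = 0.75j * cmath.exp(1j * t) + 0.75j * cmath.exp(3j * t)
    chord = cmath.exp(3j * t) - cmath.exp(1j * t)
    # two complex numbers are parallel iff Im(conj(a) * b) = 0
    assert abs((z_dot.conjugate() * chord).imag) < 1e-9
print("tangency verified")
```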
<hr />
<p><strong>2nd example:</strong> Coupling ''second derivative <span class="math-container">$ \ \leftrightarrow \ \min \ $</span> kernel'' :</p>
<p>All functions considered here are at least <span class="math-container">$C^2$</span>, except the function <span class="math-container">$K$</span>.</p>
<p>Let <span class="math-container">$f:[0,1] \rightarrow \mathbb{R}$</span> and <span class="math-container">$K:[0,1]\times[0,1]\rightarrow \mathbb{R}$</span> (a so-called "kernel") defined by <span class="math-container">$K(x,y):=\min(x,y)$</span>.</p>
<p>Let us associate <span class="math-container">$f$</span> with function <span class="math-container">$\varphi(f)=g$</span> defined by <span class="math-container">$$\tag{1}g(y)=\int_{t=0}^{t=1} K(t,y)f(t)dt=\int_{t=0}^{t=1} \min(t,y)f(t)dt$$</span></p>
<p>We can get rid of "<span class="math-container">$\min$</span>" function by decomposing the integral into :</p>
<p><span class="math-container">$$\tag{2}g(y)=\int_{t=0}^{t=y} t f(t)dt+\int_{t=y}^{t=1} y f(t)dt$$</span></p>
<p><span class="math-container">$$\tag{3}g(y)=\int_{t=0}^{t=y} t f(t)dt - y F(y)$$</span></p>
<p>where we have set</p>
<p><span class="math-container">$$\tag{4}F(y):=\int_{t=1}^{t=y}f(t)dt \ \ \ \ \ \ \ \ \text{Remark:} \ \ \ F'(y)=f(y)$$</span></p>
<p>Let us differentiate (3) twice :</p>
<p><span class="math-container">$$\tag{5}g'(y)=y f(y) - 1 F(y) - y f(y) = -F(y)$$</span></p>
<p><span class="math-container">$$\tag{6}g''(y)= -f(y) \ \ \Longleftrightarrow \ \ f(y)=-g''(y)$$</span></p>
<p>Said otherwise, the inverse of transform <span class="math-container">$f \rightarrow \varphi(f)=g$</span> is:</p>
<p><span class="math-container">$$\tag{7}\varphi^{-1} = \text{opposite of the second derivative.}$$</span></p>
<p>This connexion with the second derivative is rather unexpected...</p>
<p>Had we taken a discrete approach, what would have been found ?</p>
<p>The discrete equivalents of <span class="math-container">$\varphi$</span> and <span class="math-container">$\varphi^{-1}$</span> are matrices :</p>
<p><span class="math-container">$$\bf{M}=\begin{pmatrix}1&1&1&\cdots&\cdots&1\\1&2&2&\cdots&\cdots&2\\1&2&3&\cdots&\cdots&3\\\cdots&\cdots&\cdots&\cdots&\cdots&\cdots\\\cdots&\cdots&\cdots&\cdots&\cdots&\cdots\\1&2&3&\cdots&\cdots&n
\end{pmatrix} \ \ \textbf{and}$$</span> <span class="math-container">$$\bf{D}=\begin{pmatrix}2&-1&&&&\\-1&2&-1&&&\\&-1&2&-1&&\\&&\ddots&\ddots&\ddots&\\&&&-1&2&-1\\&&&&-1&1
\end{pmatrix}$$</span></p>
<p>that verify matrix identity: <span class="math-container">$\bf{M}^{-1}=\bf{D}$</span> in analogy with (7).</p>
<p>Indeed,</p>
<ul>
<li><p>Nothing to say about the connection of matrix <span class="math-container">$\bf{M}$</span> with coefficients <span class="math-container">$\bf{M}_{i,j}=min(i,j)$</span> with operator <span class="math-container">$K$</span>.</p>
</li>
<li><p>tridiagonal matrix <span class="math-container">$\bf{D}$</span> is well known (in particular by all people doing discretization) to be "the" discrete analog of the second derivative due to the classical approximation:</p>
</li>
</ul>
<p><span class="math-container">$$f''(x)\approx\dfrac{1}{h^2}(f(x-h)-2f(x)+f(x+h))$$</span></p>
<p>that can easily be obtained using Taylor expansions. The exceptional value <span class="math-container">$1$</span> at the bottom right of <span class="math-container">$\bf{D}$</span> is explained by discrete boundary conditions.</p>
<p>Remark: this correspondence between the "min" operator and the second derivative is not mine; I have known it for a long time but I am unable to trace back where I first saw it (probably in a signal processing book). Does somebody have a reference?</p>
<p>Connected : the eigenvalues of <span class="math-container">$D$</span> are remarkable (<a href="http://www.math.nthu.edu.tw/%7Eamen/2005/040903-7.pdf" rel="nofollow noreferrer">http://www.math.nthu.edu.tw/~amen/2005/040903-7.pdf</a>)</p>
<p>In the same vein : <a href="https://math.stackexchange.com/q/3655530">computation of adjoint operators</a>.</p>
<hr />
<p><strong>3rd example</strong> : the Itô integral.</p>
<p>One could think that the Lebesgue integral (1902) is the ultimate theory of integration, correcting the imperfections of the theory elaborated by Riemann some 50 years before. This is not the case. In particular, Itô defined (1942) a new kind of integral which is now essential in probability and finance. Its principle, roughly said, is that infinitesimal "deterministic" increments "dt" are replaced by random increments of Brownian motion type "dW", as formalized by Einstein (1905), then by Wiener (1923). Let us give an image of it.</p>
<p>Let us first recall definitions of brownian motion <span class="math-container">$W(t)$</span> or <span class="math-container">$W_t$</span>, (<span class="math-container">$W$</span> for Wiener), an informal one, and a formal one:</p>
<p>Informal : A "particle" starting at <span class="math-container">$x=0$</span> at time <span class="math-container">$t$</span>, jumps "at the next instant" <span class="math-container">$t+dt$</span>, to a nearby position; either on the left or on the right, the amplitude and sign of the jump being governed by a normal distribution <span class="math-container">$N(x,\sigma^2)$</span> with an infinitesimal fixed standard deviation <span class="math-container">$\sigma.$</span></p>
<p><span class="math-container">$\text{Formal}: \ \ W_t:=G_0 t+\sqrt{2}\sum_{n=1}^{\infty}G_n\dfrac{\sin(\pi n t)}{\pi n}$</span>, with <span class="math-container">$G_n$</span> iid <span class="math-container">$N(0,1)$</span> random variables.</p>
<p>(Other definitions exist. This one, under the form of a "random Fourier series" is handy for many computations).</p>
<p>Let us now consider one of the fundamental formulas of Itô's integral, for a continuously differentiable function <span class="math-container">$f$</span>:</p>
<p><span class="math-container">$$\tag{8}\begin{equation}
\displaystyle\int_0^t f(W(s))dW(s) = \displaystyle\int_0^{W(t)} f(\lambda)d \lambda - \frac{1}{2}\displaystyle\int_0^t f'(W(s))ds.
\end{equation}$$</span></p>
<p><strong>Remark:</strong> The integral sign on the LHS of (8) defines Itô's integral, whereas the integrals on the RHS have to be understood in the sense of Riemann/Lebesgue. The presence of the second term on the RHS is rather puzzling, isnt'it ?</p>
<p>Question: how can be understood/justified this second integral ?</p>
<p>Szabados has proposed (1990) (see (<a href="https://mathoverflow.net/questions/16163/discrete-version-of-itos-lemma">https://mathoverflow.net/questions/16163/discrete-version-of-itos-lemma</a>)) a discrete analog of formula (8). Here is how it runs:</p>
<p><strong>Theorem:</strong> Let <span class="math-container">$f:\mathbb{Z} \longrightarrow \mathbb{R}$</span>. Let us define :</p>
<p><span class="math-container">$$
\tag{9}\begin{equation}
F(k)=\left\{
\begin{matrix}
\dfrac{1}{2}f(0)+\displaystyle\sum_{j=1}^{k-1} f(j)+\dfrac{1}{2}f(k) & if & k \geq 1 & \ \ (a)\\
0 & if & k = 0 & \ \ (b)\\
-\dfrac{1}{2}f(k)-\displaystyle\sum_{j=k+1}^{-1} f(j)-\dfrac{1}{2}f(0) & if & k \leq -1 & \ \ (c)
\end{matrix}
\right.
\end{equation}
$$</span></p>
<p><strong>Remarks:</strong></p>
<ol>
<li><p>We will work only on (a) and its particular case (b).</p>
</li>
<li><p>(a) is nothing else than the "trapezoid formula" explaining in particular factors <span class="math-container">$\dfrac{1}{2}$</span> in front of <span class="math-container">$f(0)$</span> et <span class="math-container">$f(k)$</span>.</p>
</li>
</ol>
<p>Let us now define a family of Random Variables <span class="math-container">$X_k$</span>, <span class="math-container">$k=1, 2, \cdots $</span>, iid on <span class="math-container">$\{-1,1\}$</span> with <span class="math-container">$P(X_k=-1)=P(X_k=1)=\frac{1}{2}$</span>, and let</p>
<p><span class="math-container">$$
\begin{equation}
S_n= \displaystyle\sum_{k=1}^n X_k.
\end{equation}
$$</span></p>
<p>Then</p>
<p><span class="math-container">$$
\tag{10}\begin{equation}
\forall n, \ \ \displaystyle\sum_{i=0}^{n}f(S_i)X_{i+1} = F(S_{n+1})-\dfrac{1}{2}\displaystyle\sum_{i=0}^{n}\dfrac{f(S_{i+1})-f(S_{i})}{X_{i+1}}
\end{equation}
$$</span></p>
<p><strong>Remark</strong> : Please note analogies :</p>
<ul>
<li><p>between <span class="math-container">$\frac{f(S_{i+1})-f(S_{i})}{X_{i+1}}$</span> and <span class="math-container">$f'(S_i)$</span>.</p>
</li>
<li><p>between <span class="math-container">$F(k)$</span> and <span class="math-container">$\displaystyle\int_{\lambda=0}^{\lambda=k}f(\lambda)d\lambda$</span>.</p>
</li>
</ul>
<p>For example,</p>
<p>a) If <span class="math-container">$f$</span> is identity function (<span class="math-container">$\forall k \ f(k)=k$</span>), definition (9)(a) gives :
<span class="math-container">$$
\begin{equation}
F(k)=\frac{1}{2}(k-1)k+\frac{1}{2}k=\dfrac{1}{2}k^2.
\tag{11}
\end{equation}
$$</span></p>
<p>which doesn't come as a surprise : the 'discrete antiderivative' of <span class="math-container">$k$</span> is <span class="math-container">$\frac{1}{2}k^2$</span>... (the formula in (11) remains in fact the same for <span class="math-container">$k<0$</span>).</p>
<p>b) If <span class="math-container">$f$</span> is the "squaring function" (<span class="math-container">$\forall k, \ f(k)=k^2$</span>), (9)(a) becomes :</p>
<p><span class="math-container">$$
\begin{equation}
\text{If} \ k>0, \ \ \ F(k)=\frac{1}{6}(k-1)k(2k-1)+\frac{1}{2}k^2=\dfrac{1}{3}k^3+\dfrac{1}{6}k.
\tag{12}
\end{equation}
$$</span></p>
<p>This time, a new term <span class="math-container">$\dfrac{1}{6}k$</span> has entered into the play.</p>
<p><strong>Proof of the Theorem:</strong> The definition allows to write :</p>
<p><span class="math-container">\begin{equation}F(S_{i+1})-F(S_i)=f(S_i)X_{i+1}+\frac{1}{2}\dfrac{f(S_{i+1})-f(S_i)}{X_{i+1}}
\tag{13}
\end{equation}</span></p>
<p>In fact, proving (13) can be split into two cases: either <span class="math-container">$X_{i+1}=1$</span> or <span class="math-container">$X_{i+1}=-1$</span>. Let us consider the first case (the second case is similar): the RHS of (13) becomes</p>
<p><span class="math-container">$f(S_i)+\frac{1}{2}(f(S_{i+1})-f(S_i))=\frac{1}{2}(f(S_{i+1})+f(S_i))$</span> which is the area variation in the trapezoid formula ;</p>
<p>Summing equations (13) over <span class="math-container">$i=0,\dots,n$</span> gives the desired identity (10).</p>
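<p>Identity (10) is easy to test numerically. Below is a short Python sketch (the helper names are mine) checking it for <span class="math-container">$f(k)=k^2$</span> along one simulated walk:</p>

```python
import random
random.seed(1)

def F(f, k):
    # trapezoid 'antiderivative' of f on the integers, definition (9)
    if k >= 1:
        return 0.5 * f(0) + sum(f(j) for j in range(1, k)) + 0.5 * f(k)
    if k == 0:
        return 0.0
    return -0.5 * f(k) - sum(f(j) for j in range(k + 1, 0)) - 0.5 * f(0)

f = lambda k: k ** 2
n = 200
X = [random.choice([-1, 1]) for _ in range(n + 1)]    # X_1, ..., X_{n+1}
S = [0]
for x in X:
    S.append(S[-1] + x)                                # S_0, ..., S_{n+1}

# LHS of (10): sum_{i=0}^{n} f(S_i) X_{i+1}   (X[i] in the list is X_{i+1})
lhs = sum(f(S[i]) * X[i] for i in range(n + 1))
# RHS of (10)
rhs = F(f, S[n + 1]) - 0.5 * sum((f(S[i + 1]) - f(S[i])) / X[i]
                                 for i in range(n + 1))
assert abs(lhs - rhs) < 1e-9
print("identity (10) verified")
```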
<p><strong>An example of application</strong> : Let <span class="math-container">$f(k)=k$</span> ; since the sum over <span class="math-container">$i=0,\dots,n$</span> has <span class="math-container">$n+1$</span> terms, we get</p>
<p><span class="math-container">$$\displaystyle\sum_{i=0}^{n}S_iX_{i+1} = F(S_{n+1})-\frac{n+1}{2}=\dfrac{1}{2}S_{n+1}^2-\frac{n+1}{2}.$$</span></p>
<p>which appears as the discrete equivalent of the celebrated formula:
<span class="math-container">$$
\begin{equation}
\displaystyle\int_0^t W(s)dW(s) = \frac{1}{2}W(t)^2-\frac{1}{2}t.
\end{equation}
$$</span></p>
<p>One can establish that the autocovariance of the <span class="math-container">$W_t$</span> process is</p>
<p><span class="math-container">$$cov(W_s,W_t)=E(W_sW_t)-E(W_s)E(W_t)=\min(s,t),$$</span></p>
<p>(see (<a href="https://math.stackexchange.com/q/884299">Autocorrelation of a Wiener Process proof</a>)) providing an unexpected connection with the second example...</p>
<p><strong>Last remark</strong>: Another kind of integral based on a discrete definition : the gauge integral (<a href="https://math.vanderbilt.edu/schectex/ccc/gauge/" rel="nofollow noreferrer">https://math.vanderbilt.edu/schectex/ccc/gauge/</a>).</p>
<hr />
<p><strong>4th example</strong> (Darboux sums) :</p>
<p>Here is a discrete formula :</p>
<p><span class="math-container">$$\prod_{k=1}^{n-1}\sin\frac{k \pi}{n} = \frac{n}{2^{n-1}}$$</span></p>
<p>(see a proof in <a href="https://math.stackexchange.com/q/8385">Prove that $\prod_{k=1}^{n-1}\sin\frac{k \pi}{n} = \frac{n}{2^{n-1}}$</a>)</p>
<p>Has this formula a continuous "counterpart" ?</p>
<p>Taking the logarithm on both sides, and dividing by <span class="math-container">$n$</span>, we get :</p>
<p><span class="math-container">$$\tfrac1n \sum_{k=1}^{n-1} \ln \sin \tfrac{k \pi}{n}=\tfrac{\ln(n)}{n}-\ln(2)\tfrac{n-1}{n}$$</span></p>
<p>Letting now <span class="math-container">$n \to \infty$</span>, we obtain the rather classical integral :</p>
<p><span class="math-container">$$\int_0^1 \ln(\sin(\pi x))dx=-\ln(2)$$</span></p>
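<p>Both the discrete product formula and its continuous limit can be checked directly; a small Python sketch:</p>

```python
import math

# product formula: prod_{k=1}^{n-1} sin(k pi / n) = n / 2^(n-1)
for n in [2, 3, 5, 10, 17]:
    prod = 1.0
    for k in range(1, n):
        prod *= math.sin(k * math.pi / n)
    assert abs(prod - n / 2 ** (n - 1)) < 1e-12

# Riemann-sum limit: (1/n) sum_{k=1}^{n-1} ln sin(k pi/n) -> -ln 2
n = 10 ** 5
s = sum(math.log(math.sin(k * math.pi / n)) for k in range(1, n)) / n
assert abs(s - (-math.log(2))) < 1e-3
print("formulas verified")
```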
<hr />
<p><strong>5th example</strong> : bivariate cdfs (cumulative probability density functions).</p>
<p>Let <span class="math-container">$(X,Y)$</span> be a pair of random variables with pdf <span class="math-container">$f_{X,Y}$</span> and cdf :</p>
<p><span class="math-container">$$F_{X,Y}(x,y):=P(X \leq x \ \& \ Y \leq y).$$</span></p>
<p>Take a look at this formula :</p>
<p><span class="math-container">$$P(x_1<X \leq x_2, \ \ y_1<Y \leq y_2)=F_{XY}(x_2,y_2)-F_{XY}(x_1,y_2)-F_{XY}(x_2,y_1)+F_{XY}(x_1,y_1)\tag{14}$$</span></p>
<p>(<a href="https://www.probabilitycourse.com/chapter5/5_2_2_joint_cdf.php" rel="nofollow noreferrer">https://www.probabilitycourse.com/chapter5/5_2_2_joint_cdf.php</a>)</p>
<p>It is the discrete equivalent of the continuous definition of <span class="math-container">$f_{XY}$</span> as the mixed second order partial derivative of <span class="math-container">$F_{X,Y}$</span>, under the assumption that <span class="math-container">$F$</span> is a <span class="math-container">$C^2$</span> function :</p>
<p><span class="math-container">$$f_{XY}(x,y)=\dfrac{\partial^2 F_{X,Y}}{\partial x \partial y}(x,y).\tag{15}$$</span></p>
<p>Do you see why ? Hint : take <span class="math-container">$x_2 = x_1 + dx$</span> and <span class="math-container">$y_2 = y_1 + dy$</span> and identify the LHS of (14) with <span class="math-container">$f(x_1,y_1)dxdy$</span>.</p>
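<p>A concrete sanity check of (14) and (15), using a pair of independent Exp(1) variables (my choice, purely for illustration):</p>

```python
import math

def F(x, y):
    # joint cdf of two independent Exp(1) random variables
    return (1 - math.exp(-x)) * (1 - math.exp(-y))

def f(x, y):
    # the corresponding joint pdf
    return math.exp(-x - y)

x1, x2, y1, y2 = 0.5, 1.5, 0.2, 0.9

# formula (14): the probability of the rectangle, from the cdf alone
rect = F(x2, y2) - F(x1, y2) - F(x2, y1) + F(x1, y1)
exact = (math.exp(-x1) - math.exp(-x2)) * (math.exp(-y1) - math.exp(-y2))
assert abs(rect - exact) < 1e-12

# shrinking the rectangle recovers (15): rect ~ f(x1, y1) dx dy
dx = dy = 1e-4
small = F(x1 + dx, y1 + dy) - F(x1, y1 + dy) - F(x1 + dx, y1) + F(x1, y1)
assert abs(small / (dx * dy) - f(x1, y1)) < 1e-3
```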
<p><strong>Final remarks</strong> :</p>
<ol>
<li><p>A remarkable text about this analogy in Physics : <a href="https://www.lptmc.jussieu.fr/user/lesne/MSCS-Lesne.pdf" rel="nofollow noreferrer">https://www.lptmc.jussieu.fr/user/lesne/MSCS-Lesne.pdf</a></p>
</li>
<li><p>In linear algebra, continuous analogs of some fundamental factorizations (<a href="https://royalsocietypublishing.org/doi/pdf/10.1098/rspa.2014.0585" rel="nofollow noreferrer">https://royalsocietypublishing.org/doi/pdf/10.1098/rspa.2014.0585</a>).</p>
</li>
<li><p><a href="https://mathoverflow.net/questions/270930/when-has-discrete-understanding-preceded-continuous">A similar question on MathOverflow</a> mentionning in particular the following well written book <a href="http://math.sfsu.edu/beck/papers/noprint.pdf" rel="nofollow noreferrer">"Computing the continuous discretely"</a> by by Beck and Robins.</p>
</li>
<li><p>There are many other tracks, e.g., connections with graphs <a href="https://web.cs.elte.hu/%7Elovasz/telaviv.pdf" rel="nofollow noreferrer">"Discrete and continuous : two sides of the same ?" by L. Lovász</a>
or this one (<a href="http://jimhuang.org/CDNDSP.pdf" rel="nofollow noreferrer">http://jimhuang.org/CDNDSP.pdf</a>), discrete vs. continuous versions of the logistic equation (<a href="https://math.stackexchange.com/q/3328867">https://math.stackexchange.com/q/3328867</a>), etc.</p>
</li>
<li><p>In the epidemiology domain: "Discrete versus continuous-time models of malaria infections" <a href="https://ethz.ch/content/dam/ethz/special-interest/usys/ibz/theoreticalbiology/education/learningmaterials/701-1424-00L/malaria.pdf" rel="nofollow noreferrer">lecture notes by Lucy Crooks, ETH Zürich</a>.</p>
</li>
<li><p>Another example in probability: the connection between a discrete and a continuous distribution, i.e., the Poisson(<span class="math-container">$\lambda$</span>) distribution and the <span class="math-container">$\Gamma(n)$</span> distribution, which is well treated in <a href="https://math.stackexchange.com/q/2228023">this answer</a>.</p>
</li>
</ol>
| Artem | 29,547 | <p>My favorite one is about the discrete analogue of the wave equation. We all know how to solve the wave equation
$$
u_{tt}=\alpha^2u_{xx},\quad u(0,t)=u(1,t)=0,\quad u(x,0)=f(x),\,u_t(x,0)=g(x)
$$
with separation of variables. However, rigorously it requires the notion of Fourier series, convergence, and the fact that the corresponding Sturm--Liouville problem produces a basis. One can instead consider a system of masses, two of which (the first and the last one) are fixed, and the rest are connected with springs. Then Newton's and Hooke's laws imply
$$
m\ddot u_{j}=c(u_{j+1}-2u_j+u_{j-1}),\quad j=1,\ldots,k-1,\qquad u_0(t)=u_k(t)=0,
$$
where $u_j(t)$ is the displacement of the $j$-th mass at time $t$. We also have two initial conditions for each mass. Of course this system can be solved using standard methods for linear systems. But one can also look for the solution in the form
$$
u_j(t)=T(t)J(j)
$$
and end up with a boundary value problem for the second order difference operator, which produces a finite number of solutions, from which one can build the full solution to the problem. Of course, by taking the correct limit as $k\to\infty$ one can rigorously recover the wave equation.</p>
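<p>The claim about the difference operator can be checked directly: the discrete modes $J(j)=\sin(jm\pi/k)$ vanish at the fixed ends and are eigenvectors of the second-order difference operator. A Python sketch (the names are my own):</p>

```python
import math

k = 50                        # masses j = 1, ..., k-1; ends j = 0 and j = k fixed
max_res = 0.0
for m in range(1, k):
    J = [math.sin(j * m * math.pi / k) for j in range(k + 1)]   # J[0] = J[k] = 0
    lam = -4 * math.sin(m * math.pi / (2 * k)) ** 2             # claimed eigenvalue
    for j in range(1, k):
        max_res = max(max_res, abs(J[j + 1] - 2 * J[j] + J[j - 1] - lam * J[j]))
assert max_res < 1e-12
```

<p>Separation of variables then leaves, for each mode $m$, an ordinary harmonic oscillator for $T(t)$, exactly as in the continuous case.</p>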
|
2,412,454 | <p>I was obviously not clear enough in my first question, so I will reformulate. I have the following equation
$$
A=\frac{B\sin 2\theta}{C+D\cos 2\theta}
$$
where $A,B,C,D$ are variables.
I need to solve or rewrite the equation to easily obtain $\theta$ (or $2\theta$), given known values for $A, B, C, D$.
Thanks for any help.</p>
| trying | 309,917 | <p>A relation $R$ is said to be <em>in</em> a set $A$ when $\operatorname{field} R\subseteq A$, where $\operatorname{field} R=\operatorname{dom}R\cup\operatorname{range}R$. It is also said in this case that $R$ is a relation <em>between</em> elements of $A$. </p>
|
2,781,017 | <p>I know that $\sum a_i b_i \leq \sum a_i \sum b_i$ for $a_i$, $b_i > 0$. It seems this inequality will also hold true when $a_i$, $b_i \in (0,1)$. However, I am unable to find out if</p>
<p>$\sum \frac{a_i}{b_i} \leq \frac{\sum a_i}{\sum b_i}$ </p>
<p>holds true for $a_i$, $b_i \in (0,1)$.</p>
| Community | -1 | <p>It doesn't take long to find a counterexample.</p>
<p>$$\frac11+\frac11>\frac22.$$</p>
<p>Note that the restriction to $(0,1)$ is immaterial as $\dfrac ab=\dfrac{ca}{cb}.$</p>
|
623,190 | <p>What would be the formula, to determine a rectagles edges, when given the perimeter and space? for example, the rectagles space is 80, and the perimeter is 36, and the edge would be 8 and 10, but how do I find them.</p>
<p>I know that the formula for the perimeter would be
2x+2y=per, or 2(space/y)+2y=per
However I'm trying to figure out how to find x and y, when I only know space and perimeter.</p>
| Community | -1 | <p>By space, do you mean area?</p>
<p>So we know that $$2x + 2y = P$$ and also that $$xy = A.$$ This means that, as you pointed out, $$2\frac{A}{y} + 2y = P$$ and thus that $$A + y^2 - \frac{P}{2}y = 0$$ which we get by multiplying both sides by $y$ and dividing by $2$. You can use the quadratic formula to solve for $y$ now, where you get that $$y = \frac{P/2 \pm \sqrt{P^2/4 - 4A}}{2}.$$ Use your knowledge of algebra to simplify this expression for $y$ and then solve for $x$ and you're home! </p>
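<p>The quadratic above translates directly into code; with the numbers from the question (perimeter $36$, area $80$) it recovers the sides $8$ and $10$. A Python sketch (the function name is mine):</p>

```python
import math

def rectangle_sides(P, A):
    # roots of y^2 - (P/2) y + A = 0, coming from 2x + 2y = P and x y = A
    disc = (P / 2) ** 2 - 4 * A
    if disc < 0:
        raise ValueError("no rectangle with this perimeter and area")
    y = (P / 2 + math.sqrt(disc)) / 2
    return A / y, y           # the other side is x = A / y

x, y = rectangle_sides(36, 80)
assert (x, y) == (8.0, 10.0)
assert 2 * (x + y) == 36 and x * y == 80
```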
|
312,878 | <p>Why is $\mathbb{Z} [\sqrt{24}] \ne \mathbb{Z} [\sqrt{6}]$, while $\mathbb{Q} (\sqrt{24}) = \mathbb{Q} (\sqrt{6})$ ?</p>
<p>(Just guessing, is there some implicit division operation taking $2 = \sqrt{4}$ out from under the $\sqrt{}$ which you can't do in the ring?)</p>
<p>Thanks. (I feel like I should apologize for such a simple question.) </p>
| Community | -1 | <p>We have</p>
<p>\begin{align*}
\mathbb{Z}[\sqrt{24}] &= \{a + b\sqrt{24} | a, b \in \mathbb{Z} \} \\
&= \{a + 2b\sqrt{6} | a, b \in \mathbb{Z} \} \\
&= \{a + b'\sqrt{6} | a, b' \in \mathbb{Z} \text{ with } b' \text { even}\}.
\end{align*}</p>
<p>which is clearly a proper subring of $\mathbb{Z}[\sqrt{6}]$. On the other hand,
\begin{align*}
\mathbb{Q}[\sqrt{24}] &= \{a + b\sqrt{24} | a, b \in \mathbb{Q} \} \\
&= \{a + 2b\sqrt{6} | a, b \in \mathbb{Q} \} \\
&= \{a + b'\sqrt{6} | a, b' \in \mathbb{Q}\} \\
&= \mathbb{Q}[\sqrt{6}].
\end{align*}</p>
<p>The point is that you can divide anything in $\mathbb{Q}$ by two, but not anything in $\mathbb{Z}$.</p>
|
2,213,807 | <p>I was solving a problem to discover n and after I transformed the problem it gave me this equation:</p>
<p>\begin{equation*}
\left\lfloor{\frac{2}{3}\sqrt{10^{2n}-1}}\right\rfloor = \frac{2}{3}(10^{n}-1)
\end{equation*}</p>
<p>So I tried to simplify it by defining:
\begin{equation*}
k = 10^{n}-1
\end{equation*}</p>
<p>and was left with:
\begin{equation*}
\left\lfloor{\frac{2}{3}\sqrt{k(k+2)}}\right\rfloor = \frac{2}{3}k
\end{equation*}</p>
<p>But I can't get past that. Can anyone help me?</p>
| dxiv | 291,201 | <p>Hint: $10^n-1$ is a multiple of $9$, so $\frac{2}{3}k$ is an integer, then:</p>
<p>$$
\begin{align}
\left\lfloor{\frac{2}{3}\sqrt{k(k+2)}}\right\rfloor = \frac{2}{3}k \;\;&\iff\;\; \frac{2}{3}k \le \frac{2}{3}\sqrt{k(k+2)} \lt \frac{2}{3}k \,+\, 1 \;\; \\
&\iff\;\; \frac{4}{9}k^2 \le \frac{4}{9}(k^2+2k) \lt \frac{4}{9}k^2 + \frac{4}{3}k+1 \\
&\iff\;\; 0 \le \frac{8}{9} k \lt \frac{4}{3}k+1 \\
\end{align}
$$</p>
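<p>The final condition $0 \le \frac{8}{9} k \lt \frac{4}{3}k+1$ holds for every $k\ge 0$, so the displayed identity is true for all $n$. Since $k=10^n-1$ is divisible by $9$, it can also be confirmed in exact integer arithmetic (a quick Python sketch):</p>

```python
from math import isqrt

for n in range(1, 9):
    k = 10 ** n - 1                # divisible by 9
    # floor((2/3) sqrt(k(k+2))) = floor(sqrt(4k(k+2)/9)), an exact integer computation
    assert isqrt(4 * k * (k + 2) // 9) == 2 * k // 3
```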
|
95,126 | <p>Consider the finite sum</p>
<pre><code>rs[x_, n_] := x/n Sum[n^2/(i + (n - i) x)^2, {i, 1, n}]
</code></pre>
<p>Is there a way to get <em>Mathematica</em> to calculate the limit for <code>n -> ∞</code>?</p>
<p>I have tried <code>Limit[]</code> as well as <code>NLimit[]</code> without success.</p>
| J. M.'s persistent exhaustion | 50 | <p>This post tackles the convergence acceleration of the Riemann integral in the same spirit as Anton's answer, except that I use a slight variation of one of the algorithms presented. In particular, I'm using this as an excuse to present the <a href="http://dx.doi.org/10.1137/0510061" rel="noreferrer">van den Broeck-Schwartz modification of the Wynn ε algorithm</a>:</p>
<pre><code>wgvs[seq_?VectorQ, h_: 1] := Module[{n = Length[seq], ep, v, w},
Table[
ep[k] = seq[[k]]; w = 0;
Do[v = w; w = ep[j];
ep[j] = v If[OddQ[k - j], h, 1] + 1/(ep[j + 1] - w),
{j, k - 1, 1, -1}];
ep[Mod[k, 2, 1]],
{k, n}]]
</code></pre>
<p>The default setting of the second parameter corresponds to the classical Wynn algorithm.</p>
<p>For the OP's example:</p>
<pre><code>rs[x_, n_] := x/n Sum[n^2/(i + (n - i) x)^2, {i, 1, n}, Method -> "Procedural"]
tab = Table[rs[1/2, 2^k], {k, 12}] // N;
res = wgvs[tab];
-Log10[Abs[tab - 1]]
{0.51491, 0.770798, 1.04959, 1.33974, 1.6354, 1.93377, 2.23347, 2.53384,
2.83454, 3.1354, 3.43635, 3.73734}
-Log10[Abs[res - 1]]
{0.51491, 0.770798, 1.57696, 2.25408, 3.74516, 4.92739, 6.84944, 8.60221,
10.9223, 13.2548, 15.6536, 14.9546}
</code></pre>
<p>where we see that Wynn ε achieved $\approx 14$ good digits with little additional effort.</p>
<p>For comparison, let's change the value of the second parameter of <code>wgvs[]</code> to <code>0</code>; this corresponds to applying the iterated Aitken $\Delta^2$ process:</p>
<pre><code>res = wgvs[tab, 0];
-Log10[Abs[res - 1]]
{0.51491, 0.770798, 1.57696, 2.25408, 3.82643, 4.65844, 6.64954, 7.52831,
8.61013, 10.7914, 13.1377, 14.8085}
</code></pre>
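<p>For readers without <em>Mathematica</em>: the iterated Aitken Δ² process (the <code>h = 0</code> case above) takes only a few lines of Python. This is my own minimal port, applied to the same sequence; four sweeps already gain many digits over the raw tail:</p>

```python
def rs(x, n):
    return x / n * sum(n * n / (i + (n - i) * x) ** 2 for i in range(1, n + 1))

def aitken(s):
    # one Delta^2 sweep: s_{k+2} - (s_{k+2} - s_{k+1})^2 / (s_{k+2} - 2 s_{k+1} + s_k)
    out = []
    for a, b, c in zip(s, s[1:], s[2:]):
        d = c - 2 * b + a
        out.append(c - (c - b) ** 2 / d if d != 0 else c)
    return out

tab = [rs(0.5, 2 ** k) for k in range(1, 13)]
acc = tab
for _ in range(4):
    acc = aitken(acc)

assert abs(acc[-1] - 1) < 1e-8 < abs(tab[-1] - 1)
```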
<hr>
<p>In addition to Bender and Orszag, a good reference on convergence acceleration methods is Brezinski and Redivo-Zaglia's <a href="http://www.sciencedirect.com/science/bookseries/1570579X/2" rel="noreferrer"><em>Extrapolation Methods: Theory and Practice</em></a>. Weniger's <a href="http://dx.doi.org/10.1016/0167-7977(89)90011-7" rel="noreferrer">survey paper</a> is also a useful reference.</p>
|
2,302,966 | <p>Translate the following English statements into predicate logic formulae. The domain is the set of integers. Use the following predicate symbols, function symbols and constant symbols.</p>
<ul>
<li>Prime(x) iff x is prime</li>
<li>Greater(x, y) iff x > y</li>
<li>Even(x) iff x is even</li>
<li>Equals(x,y) iff x=y</li>
<li>sum(x, y), a function that returns x + y</li>
<li>0,1,2,the constant symbols with their usual meanings</li>
</ul>
<p>I tried them, but don't know if they're correct.</p>
<p><strong>(a) The relation Greater is transitive.</strong></p>
<p>(∀x(∃y(∃z (Greater(x,y) ∧ Greater(y,z) -> Greater(x,z))))</p>
<p><strong>(b) Every even number is the sum of two odd numbers. (Use (¬Even(x)) to express that x is an odd number.)</strong></p>
<p><strong>(c) All primes except 2 are odd.</strong></p>
<p>(∀x(Prime(x) ∧ ¬(Equals(x,2) -> ¬Even(x))</p>
<p><strong>(d) There are an infinite number of primes that differ by 2. (The Prime Pair Conjecture)</strong></p>
<p>(∀x(∃y (Prime(x) ∧ Equals(y,sum(x,2)) ∧ Prime(y)))) From what I remember, we aren't suppose to put predicate symbol (sum(x,2)) inside Equals. How do I do this?</p>
<p><strong>(e) For every positive integer, n, there is a prime number between n and 2n. (Bertrand's Postulate) (Note that you do not have multiplication, but you can get by without it.)</strong></p>
<p>(∀x(∃y (Greater(x,1) -> (Greater(x,y) ∧ Prime(y) ^ Greater(y, Sum(x,x)))))
-Same problem as d.</p>
| C. Falcon | 285,416 | <p>The answer is <strong>yes</strong>.</p>
<p><strong>Example.</strong> Let $G:=\mathrm{GL}(n,\mathbb{R})$; then $H:=O(n)$ is a compact subgroup of the non-compact group $G$. Indeed, for all $k\geqslant 1$, $\textrm{diag}(k,1,\ldots,1)\in G$, whence $G$ is unbounded and therefore non-compact. Besides, by definition $H:=\varphi^{-1}(\{I_n\})$ where:
$$\varphi(A)={}^\intercal AA$$
is continuous (its entries are polynomial in the entries of $A$). Hence, $H$ is closed; it is also bounded, since the columns of an orthogonal matrix are unit vectors, whence compact.</p>
<p>Actually, all maximal compact subgroups of $G$ are conjugates of $H$. Another way to put it: any compact subgroup of $G$ is conjugate to a subgroup of $H$.</p>
|
2,302,966 | <p>Translate the following English statements into predicate logic formulae. The domain is the set of integers. Use the following predicate symbols, function symbols and constant symbols.</p>
<ul>
<li>Prime(x) iff x is prime</li>
<li>Greater(x, y) iff x > y</li>
<li>Even(x) iff x is even</li>
<li>Equals(x,y) iff x=y</li>
<li>sum(x, y), a function that returns x + y</li>
<li>0,1,2,the constant symbols with their usual meanings</li>
</ul>
<p>I tried them, but don't know if they're correct.</p>
<p><strong>(a) The relation Greater is transitive.</strong></p>
<p>(∀x(∃y(∃z (Greater(x,y) ∧ Greater(y,z) -> Greater(x,z))))</p>
<p><strong>(b) Every even number is the sum of two odd numbers. (Use (¬Even(x)) to express that x is an odd number.)</strong></p>
<p><strong>(c) All primes except 2 are odd.</strong></p>
<p>(∀x(Prime(x) ∧ ¬(Equals(x,2) -> ¬Even(x))</p>
<p><strong>(d) There are an infinite number of primes that differ by 2. (The Prime Pair Conjecture)</strong></p>
<p>(∀x(∃y (Prime(x) ∧ Equals(y,sum(x,2)) ∧ Prime(y)))) From what I remember, we aren't suppose to put predicate symbol (sum(x,2)) inside Equals. How do I do this?</p>
<p><strong>(e) For every positive integer, n, there is a prime number between n and 2n. (Bertrand's Postulate) (Note that you do not have multiplication, but you can get by without it.)</strong></p>
<p>(∀x(∃y (Greater(x,1) -> (Greater(x,y) ∧ Prime(y) ^ Greater(y, Sum(x,x)))))
-Same problem as d.</p>
| CopyPasteIt | 432,081 | <p>The unit circle is a compact subgroup of the multiplicative group of nonzero complex numbers.</p>
<p>Similarly, $\{-1,+1\}$ is a compact subgroup of the nonzero real numbers.</p>
|
1,371,649 | <p>The question is:</p>
<blockquote>
<p>What does the following iteration formula do?:
<span class="math-container">$$x_{k+1}=2x_k-cx_{k}^2.$$</span></p>
</blockquote>
<p>I tried to identify this with Newton's method. I.e. I tried to bring that into the form <span class="math-container">$x_{k+1}=x_k-\frac{f(x_0)}{f'(x_0)}$</span>, which leads to: <span class="math-container">$$(cx_k^2-x_k)f'(x_k)=f(x_k).$$</span>
But then <span class="math-container">$f(x)$</span> is something like <span class="math-container">$e^a$</span> but these functions doesn't have any roots... Is this still correct and I must note that this iteration formula does not converge or are there any other functions satisfying this equality?</p>
| Community | -1 | <p>If the iterations converge, they converge to an <span class="math-container">$x$</span> such that</p>
<p><span class="math-container">$$x=2x-cx^2$$</span></p>
<p>and <span class="math-container">$x=0$</span> or <span class="math-container">$x=\dfrac1c$</span>.</p>
<p>Now the derivative of the iteration map <span class="math-container">$f(x)=2x-cx^2$</span> is <span class="math-container">$f'(x)=2-2cx$</span>. Since <span class="math-container">$f'(0)=2>1$</span>, the iterations are repelled from <span class="math-container">$0$</span>, and since <span class="math-container">$f'(\frac1c)=0$</span>, the iterations converge to <span class="math-container">$\frac1c$</span> for starting values close enough to it.</p>
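<p>In fact the iteration is exactly Newton's method applied to <span class="math-container">$g(x)=\frac1x-c$</span> (the identification attempted in the question): <span class="math-container">$x-\frac{g(x)}{g'(x)}=x+x^2\left(\frac1x-c\right)=2x-cx^2$</span>, a classical division-free way of computing the reciprocal <span class="math-container">$\frac1c$</span>. A quick check in Python (my own sketch):</p>

```python
def reciprocal(c, x0, iters=30):
    # Newton's method for g(x) = 1/x - c, written without any division
    x = x0
    for _ in range(iters):
        x = 2 * x - c * x * x
    return x

# converges (quadratically) to 1/c for starting values 0 < x0 < 2/c
assert abs(reciprocal(3.0, 0.2) - 1 / 3) < 1e-12
```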
|
2,369,717 | <p>From Jaynes' <em>Probability Theory: The Logic of Science</em>, I found this:</p>
<blockquote>
<p><a href="https://i.stack.imgur.com/bnogp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bnogp.png" alt="enter image description here"></a></p>
</blockquote>
<p>$p$ here is the joint probability distribution of $x,$ and $y$. I'm assuming the $\times$ denotes the cartesian product, but I don't really understand what this equation means, nor why it captures the assumption that $x$ and $y$ are independent. </p>
<p><strong>Why does this equation capture the assumption that $x$ and $y$ are independent?</strong> </p>
| Siong Thye Goh | 306,553 | <p>$\times$ means the regular multiplication between two real numbers, not the cartesian product.</p>
<p>The definition of independent is </p>
<p>$$P(X=x, Y=y) = P(X=x)P(Y=y)$$</p>
<p>You might want to understand the equation as</p>
<p>$$\int \int \rho(x,y) dxdy = \int f_X(x) dx \int f_Y(y) dy$$</p>
<p>Credit: Hagen for pointing out we should not use the same $f$ for both variables.</p>
|
829,390 | <p>In a tennis tournament, there are $10$ players. In the first round, $5$ groups(of 2 players) will be formed among them and elimination matches will be held among the two players in each group. In how many ways can pairings be done?</p>
<p>Answer is given as : $\frac{10!}{2^5\times5!}$</p>
<p>My solution :</p>
<p>From $10$ players, we can select $2$ players in $\binom{10}{2}$ ways and form a group. From remaining $8$ players we can select $2$ players in $\binom{8}{2}$ ways and so on.</p>
<p>So, total number of pairings=$\binom{10}{2}\times\binom{8}{2}\times\binom{6}{2}\times\binom{4}{2}\times\binom{2}{2}=\frac{10!}{2^5}$</p>
<p>I want to know why the 5! in the answer should come. Any alternative solution will also be helpful.</p>
| Pavan Sangha | 154,686 | <p>If in your format you had match 1, match 2, ..., match 5 and each was played at a different venue, then your answer would be correct. However this is not the case, as you are only interested in sets of pairings. Suppose $A=\{1,2\}, B=\{3,4\},\ldots,E=\{9,10\}$; then one of the ways to obtain this set of pairings could be to draw the pairs in the order $(A,B,C,D,E)$. Similarly, every permutation of the letters $A,B,C,D,E$ corresponds to the same set of pairings. Finally, there are $5!$ possible ways to permute the five pairs, so your count $\frac{10!}{2^5}$ must be divided by $5!$.</p>
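<p>The count $\frac{10!}{2^5\times 5!}=945$ can be confirmed by brute force: always pairing the lowest-numbered remaining player enumerates each set of pairings exactly once (equivalently, $9\times 7\times 5\times 3\times 1=945$). A Python sketch (the function name is mine):</p>

```python
from math import factorial

def count_pairings(players):
    # pair the first remaining player with each possible partner, then recurse
    if not players:
        return 1
    rest = players[1:]
    return sum(count_pairings(rest[:i] + rest[i + 1:]) for i in range(len(rest)))

assert count_pairings(list(range(10))) == factorial(10) // (2 ** 5 * factorial(5)) == 945
```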
|
3,248,863 | <p>I want to calculate the operator norm of the operator <span class="math-container">$A: L^2[0,1] \to L^2[0,1]$</span> which is defined by <span class="math-container">$$(Af)(x):=i\int\limits_0^x f(t)\,dt-\frac{i}{2} \int\limits_0^1 f(t)\, dt$$</span></p>
<p>I've already shown that this operator is compact and selfadjoint. I think maybe this helps me calculating the operator norm. Maybe through spectral theorem for compact self adjoint operators.</p>
<p>I also know that for integral operators of the form
<span class="math-container">$$(Kf)(x)=\int\limits_0^1 k(x,t) f(t)\,dt$$</span> the inequality <span class="math-container">$\Vert K \Vert \leq \Vert k \Vert{}_{L^2}$</span> holds.</p>
<p>For
<span class="math-container">$$(Af)(x)=i\int\limits_0^x f(t)\,dt-\frac{i}{2} \int\limits_0^1 f(t) \,dt = \int\limits_0^1 i\,\left(1_{[0,x]}(t)-\frac{1}{2}\right)f(t)\,dt$$</span> this gives me an upper bound:</p>
<p><span class="math-container">$$\Vert A \Vert \leq \left\Vert i~1_{[0,x]}-\frac{i}{2} \right\Vert{}_{L^2}=\frac{1}{2}$$</span></p>
<p>Can someone help me?</p>
| thehardyreader | 432,413 | <p>The eigenfunctions of the operator <span class="math-container">$A$</span> form an orthonormal system, therefore we can write:
<span class="math-container">$$Af = \sum\limits_{k\in\mathbb{Z}} \lambda_k (f,e_k)e_k$$</span> Where <span class="math-container">$\lambda_k = \frac{1}{(2k+1)\pi}$</span> are the eigenvalues of <span class="math-container">$A$</span> with the corresponding eigenfunctions <span class="math-container">$e_k = e^{(2k+1)\pi i}$</span>.
Now we define
<span class="math-container">$$c:=\max\limits_{k\in\mathbb{Z}}(\vert\lambda_k\vert)$$</span></p>
<p><span class="math-container">$$\Vert Af\Vert^2 = \sum\limits_{k\in\mathbb{Z}} \vert \lambda_k (f,e_k) \vert^2\leq c^2\sum\limits_{k\in\mathbb{Z}} \vert(f,e_k) \vert^2=c^2 \Vert f \Vert^2$$</span></p>
<p>Hence, <span class="math-container">$\Vert A \Vert \leq c$</span>.</p>
<p>For the other direction assume <span class="math-container">$f=e_0$</span>, the eigenfunction which corresponds to the greatest eigenvalue <span class="math-container">$\lambda_0$</span>.</p>
<p><span class="math-container">$$\Vert Af \Vert^2=\Vert \lambda_0 f\Vert^2 = c^2$$</span></p>
<p>It follows that <span class="math-container">$\Vert A \Vert= c$</span>, where <span class="math-container">$c=\max\limits_{k\in\mathbb{Z}}\Big(\vert\frac{1}{(2k+1)\pi}\vert \Big)=\frac{1}{\pi}$</span>.</p>
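<p>As a sanity check, the top eigenpair can be verified by direct quadrature: for <span class="math-container">$e_0(x)=e^{\pi i x}$</span> one finds <span class="math-container">$(Ae_0)(x)=\frac{1}{\pi}e_0(x)$</span>, consistent with <span class="math-container">$\Vert A\Vert=\frac1\pi\approx 0.3183$</span>. A Python sketch (the midpoint discretization is mine):</p>

```python
import cmath, math

N = 2000
h = 1.0 / N
t = [(j + 0.5) * h for j in range(N)]                  # midpoints of [0, 1]
e0 = [cmath.exp(1j * math.pi * s) for s in t]

full = sum(e0) * h                                      # ~ integral of e0 over [0, 1]
run, err = 0j, 0.0
for j in range(N):
    run += e0[j] * h
    partial = run - 0.5 * h * e0[j]                     # ~ integral of e0 over [0, t[j]]
    Ae0 = 1j * partial - 0.5j * full
    err = max(err, abs(Ae0 - e0[j] / math.pi))
assert err < 1e-4
```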
|
2,061,063 | <p>Let $X \subset C(\mathbb R;\mathbb R)$ be the space of all continuous functions $u: \mathbb R \to \mathbb R$ where </p>
<p>$$\lim_{x \to \pm \infty} u(x)=0$$</p>
<p>provided with the $\sup$-norm. Let $k \in L^1(\mathbb R)$, $u \in X$ and </p>
<p>$$(Ku)(x) := \int_\mathbb R k(x-y)u(y)\,dy, \,\,\,x \in \mathbb R.$$</p>
<p>Furthermore $$\int_\mathbb R |k(s)|\, ds <1$$</p>
<p>How can I show that for every $f \in X$ there exists exactly one $u \in X$ such that $$u-Ku=f\,\,?
$$</p>
| Olivier Moschetta | 369,174 | <p>Fix $f\in X$ and consider the mapping $\varphi:X\rightarrow X$ defined by
$$\varphi(u)=Ku+f$$
We apply the Banach fixed-point theorem to $\varphi$. To do so we need to check three conditions:</p>
<ul>
<li><p>$X$ is complete (enough to show that it is closed in the Banach space of bounded continuous functions).</p></li>
<li><p>The functional $K$ (and hence $\varphi$) maps $X$ to $X$.</p></li>
<li><p>We have $||\varphi(u)-\varphi(v)||_{\infty}\leq C||u-v||_{\infty}$ for every $u,v\in X$ where $C<1$.</p></li>
</ul>
<p>If these conditions hold, then $\varphi$ has a unique fixed point in $X$; that is, there exists a unique $u\in X$ such that
$$Ku+f=u\Leftrightarrow u-Ku=f$$</p>
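<p>The fixed point is moreover constructive: the Picard iteration $u_{n+1}=Ku_n+f$ converges geometrically with ratio $\int_{\mathbb R}|k|<1$. A small numerical illustration in Python (the grid, the Gaussian kernel and $f$ are my own choices; the kernel is rescaled so that its discrete $L^1$ mass is exactly $\frac12$):</p>

```python
import math

L, N = 8.0, 200
h = 2 * L / N
xs = [-L + (j + 0.5) * h for j in range(N)]

# discrete kernel values k((i - j) h), rescaled so that sum |k| h = 1/2 < 1
raw = [math.exp(-((m * h) ** 2)) for m in range(-(N - 1), N)]
scale = 0.5 / (sum(raw) * h)
kern = [scale * v for v in raw]                       # entry m is at index m + N - 1

f = [math.exp(-x * x / 2) for x in xs]                # an element of X (decays at infinity)

def K(u):
    # midpoint-rule discretization of (Ku)(x) = int k(x - y) u(y) dy
    return [h * sum(kern[i - j + N - 1] * u[j] for j in range(N)) for i in range(N)]

u = f[:]
for _ in range(60):                                    # Picard iteration u <- Ku + f
    u = [a + b for a, b in zip(K(u), f)]

residual = max(abs(a - b - c) for a, b, c in zip(u, K(u), f))
assert residual < 1e-10                                # u solves u - Ku = f numerically
```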
|
481,764 | <p>What type of symmetry does the function $y=\frac{1}{|x|}$ have?
Specify the intervals over which the function is increasing and the intervals where it is decreasing.</p>
| Eleven-Eleven | 61,030 | <p>Let $f(x)=y$</p>
<p>HINT: what does $f(-x)$ equal?</p>
<p>Let's look at the function, $f(x)=\frac{1}{x}$. Now, a function is even if:
$$f(-x)=f(x)$$
and a function is odd if:
$$f(-x)=-f(x)$$
We therefore can see that this function is an odd function, since;
$$f(-x)=\frac{1}{-x}=-\frac{1}{x}=-f(x)$$
Now, the absolute value is defined as
$$|x| = \sqrt{x^2}
$$
With this in mind, let's look at your function;
$$f(x)=\frac{1}{|x|}$$
Now what happens when we look at $f(-x)$?
$$f(-x)=\frac{1}{|-x|}=\frac{1}{\sqrt{(-x)^2}}=\frac{1}{\sqrt{x^2}}=\frac{1}{|x|}=f(x)$$
Thus, $f(x)=\frac{1}{|x|}$ is an even function, symmetric about the y-axis.</p>
|
2,979,271 | <p>I have been able to calculate the integral of </p>
<p><span class="math-container">$$\int^\infty_\infty x^2e^{-x^2/2}$$</span></p>
<p>and there is a lot of information online about integration with even powers of <span class="math-container">$x$</span>.<br>
However I have been unable to calculate: </p>
<p><span class="math-container">$$\int^\infty_\infty x^3e^{-x^2/2}.$$</span> </p>
<p>The closest I have come to finding a solution is<br>
<span class="math-container">$$\int^\infty_0 x^{2k+1}e^{-x^2/2} = \frac{k!}{2}$$</span></p>
<p>Which I found <a href="https://is.muni.cz/el/1431/podzim2013/F5170/um/integrals.pdf" rel="nofollow noreferrer">here</a>.</p>
<p>Any help with solving this integral would be great.</p>
| José Carlos Santos | 446,262 | <p>Do you mean <span class="math-container">$\int_{-\infty}^\infty x^3e^{-\frac{x^2}2}\,\mathrm dx$</span>? It is <span class="math-container">$0$</span>, since the function is an odd function and integrable (it is the product of a polynomial function with <span class="math-container">$e^{-\frac{x^2}2}$</span>).</p>
|
2,979,271 | <p>I have been able to calculate the integral of </p>
<p><span class="math-container">$$\int^\infty_\infty x^2e^{-x^2/2}$$</span></p>
<p>and there is a lot of information online about integration with even powers of <span class="math-container">$x$</span>.<br>
However I have been unable to calculate: </p>
<p><span class="math-container">$$\int^\infty_\infty x^3e^{-x^2/2}.$$</span> </p>
<p>The closest I have come to finding a solution is<br>
<span class="math-container">$$\int^\infty_0 x^{2k+1}e^{-x^2/2} = \frac{k!}{2}$$</span></p>
<p>Which I found <a href="https://is.muni.cz/el/1431/podzim2013/F5170/um/integrals.pdf" rel="nofollow noreferrer">here</a>.</p>
<p>Any help with solving this integral would be great.</p>
| Dr. Sonnhard Graubner | 175,066 | <p>Substitute <span class="math-container">$$u=x^2$$</span> then we get <span class="math-container">$$\frac{1}{2}\int e^{-u/2}u\,du$$</span> and then use integration by parts.</p>
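<p>Carrying the integration by parts out gives the antiderivative $F(x)=-(x^2+2)e^{-x^2/2}$. Note that $F$ is even and tends to $0$ at $\pm\infty$, so the symmetric integral vanishes, in agreement with the parity argument of the other answer. A finite-difference check in Python (sketch):</p>

```python
import math

def F(x):
    # antiderivative obtained from u = x^2 followed by integration by parts
    return -(x * x + 2) * math.exp(-x * x / 2)

def g(x):
    return x ** 3 * math.exp(-x * x / 2)

h = 1e-5
for x in (-2.0, -0.5, 0.0, 1.0, 3.0):
    central = (F(x + h) - F(x - h)) / (2 * h)
    assert abs(central - g(x)) < 1e-8          # F' = g

# F is even, so the integral over [-b, b] is F(b) - F(-b) = 0
assert F(5.0) == F(-5.0)
```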
|
4,004,978 | <blockquote>
<p>For all <span class="math-container">$a, b, c, d > 0$</span>, prove that
<span class="math-container">$$2\sqrt{a+b+c+d} ≥ \sqrt{a} + \sqrt{b} + \sqrt{c} + \sqrt{d}$$</span></p>
</blockquote>
<p>The idea would be to use AM-GM, but <span class="math-container">$\sqrt{a} + \sqrt{b} + \sqrt{c} + \sqrt{d}$</span> is hard to expand. I also tried squaring both sides, but that hasn't worked either. Using two terms at a time doesn't really work as well. How can I solve this question? Any help is appreciated.</p>
| See Hai | 646,705 | <p>Alternative solution using Cauchy-Schwarz, which finishes the problem off immediately. By C-S, we have:
<span class="math-container">\begin{align}
& (a+b+c+d)(1+1+1+1) \geq (\sqrt{a}+ \sqrt{b}+ \sqrt{c}+ \sqrt{d} )^2 \\
& \Rightarrow 4(a+b+c+d) \geq (\sqrt{a}+ \sqrt{b}+ \sqrt{c}+ \sqrt{d} )^2 \\
& \Rightarrow 2\sqrt{a+b+c+d} \geq \sqrt{a}+ \sqrt{b}+ \sqrt{c}+ \sqrt{d}.
\end{align}</span></p>
|
1,929,445 | <blockquote>
<p>Is there a solution to the problem $$\left\{\begin{matrix}
y'=y+y^4\\
y(x_0)=y_0
\end{matrix}\right.$$
which is defined on $\mathbb{R}$? ($x_0,y_0$ might be any real numbers)</p>
</blockquote>
<p>It's easy to prove that for all $(x,y)\in\mathbb{R}^2$ there exists an open interval $I$ (with $x_0\in I$) where the problem has a unique solution. However is the maximal interval always $(-\infty,\infty)$? I know that the answer is no, but that's just because I found a solution for particular values of $x_0$ and $y_0$ and checked its domain. But is there a way to prove that $I$ need not be $I=(-\infty,\infty)$ without actually solving the problem for a certain initial condition? In other words, how can I prove that the unique solution need not be defined on $\mathbb{R}$?</p>
| Lutz Lehmann | 115,115 | <p>This is a Bernoulli equation, and can thus be solved directly. Setting <span class="math-container">$u(x)=y(x)^{-3}+1$</span> results in the equation
<span class="math-container">$$
u'(x)=-3y(x)^{-4}y'(x)=-3u(x)\implies u(x)=u(x_0)e^{-3(x-x_0)}
$$</span>
so that
<span class="math-container">$$
y(x)=\frac{y_0e^{x-x_0}}{\sqrt[3]{1+y_0^3(1-e^{3(x-x_0)})}}
$$</span>
where the cube root is extended as an odd function to negative values.</p>
<p>The solution has a singularity in finite time if the denominator has a root, which is the case if <span class="math-container">$1+y_0^{-3}>0$</span>, that is, <span class="math-container">$y_0>0$</span> or <span class="math-container">$y_0<-1$</span>.</p>
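<p>Both the closed form and the blow-up prediction are easy to verify numerically. With $x_0=0$ and $y_0=1$ (my choice) the denominator vanishes at $x=\tfrac13\ln 2\approx 0.231$, and a finite-difference check confirms $y'=y+y^4$ up to that point (a Python sketch):</p>

```python
import math

x0, y0 = 0.0, 1.0

def y(x):
    # closed-form solution, valid while the denominator stays positive
    den = 1 + y0 ** 3 * (1 - math.exp(3 * (x - x0)))
    return y0 * math.exp(x - x0) / den ** (1 / 3)

h = 1e-6
for x in (-1.0, 0.0, 0.1, 0.2):
    central = (y(x + h) - y(x - h)) / (2 * h)
    assert abs(central - (y(x) + y(x) ** 4)) < 1e-4   # the ODE y' = y + y^4 holds

assert y(0.22) > y(0.2) > y(0.0) == 1.0               # grows without bound as x -> (ln 2)/3
```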
|
506,152 | <p>Is $$\frac{a+b}{c+d}<\frac{a}{c}+\frac{b}{d}$$
for $a,b,c,d>0$</p>
<p>If it is true, then can we generalize?</p>
<p>EDIT:typing mistake corrected.</p>
<p>EDIT, WILL JAGY. Apparently the <strong>real question</strong> is
Is $$\color{magenta}{\frac{a+b}{c+d}<\frac{a}{c}+\frac{b}{d}}$$
for $a,b,c,d>0,$ where letters on the left hand side and in the <strong>numerator</strong> stay in the <strong>numerator</strong> on the right-hand side, and letters on the left hand side and in the <strong>denominator</strong> stay in the <strong>denominator</strong> on the right-hand side.</p>
| Thomas Andrews | 7,933 | <p>If you consider them as slopes, then $(0,0)$, $(b,a)$, $(d,c)$ and $(b+d,a+c)$ form a parallelogram. So the slope of the line between $(0,0)$ and $(b+d,a+c)$ will be between the slopes of the lines between $(0,0)$ and $(b,a)$ and $(d,c)$. That means that $\frac{a+c}{b+d}$ will be between $\frac{a}{b}$ and $\frac{c}{d}$. Since these two are positive, this means that $$\frac{a+c}{b+d}\leq \max\left(\frac{a}{b},\frac{c}{d}\right)< \frac{a}{b}+\frac{c}{d}$$</p>
<p>It's pretty clear that you can generalize this by induction to:</p>
<p>$$\frac{a_1+\dots+a_n}{b_1+\dots+b_n}\leq \max_i\left(\frac{a_i}{b_i}\right)$$</p>
|
2,801,643 | <p>given $V$ be a vector space over $\mathbb{F}$. let $P:V\longrightarrow V$ be a function $\negthickspace$ as $V=U_{1}\oplus U_{2}$, and for every $v$, $v=u_{1}+u_{2}$ as $u_{1}\in U_{1}$ , $\!u_{2}\in U_{2}$. </p>
<p>$P(v)=p(u_{1}+u_{2})=u_{1}$. which means $P$ is a Projection.</p>
<p>I need to prove that p is a linear transformation.</p>
<p>I know that you need to start with: </p>
<p>let $v_{i}$,$v_{j}\in V$ as $v_{i}=u_{1i}+u_{2i}$ , $v_{j}=u_{1j}+u_{2j}$<br>
$P(v_{i}+v_{j})=P(u_{1i}+u_{1j}+u_{2i}+u_{2j})$ and P$(v_{i})=u_{1i}$, $P(v_{j})=u_{1j} \Longrightarrow P(v_{i})+P(v_{j})=u_{1i}+u_{1j}$</p>
<p>but i'm missing a part how to connect between the two.</p>
| Emilio Novati | 187,568 | <p>Hint.</p>
<p>The key fact is that, if $x=x_1+x_2$ with $x_1 \in U_1$ and $x_2 \in U_2$, and $y=y_1+y_2$ with $y_1 \in U_1$ and $y_2 \in U_2$, then we have:
$$
x+y= (x_1+x_2)+(y_1+y_2)=(x_1+y_1) + (x_2+y_2)
$$</p>
<p>with:
$$
x_1+y_1 \in U_1 \quad and \quad x_2+y_2 \in U_2
$$</p>
<p>Use this in the definition of $P(x+y)$</p>
|
2,801,643 | <p>given $V$ be a vector space over $\mathbb{F}$. let $P:V\longrightarrow V$ be a function $\negthickspace$ as $V=U_{1}\oplus U_{2}$, and for every $v$, $v=u_{1}+u_{2}$ as $u_{1}\in U_{1}$ , $\!u_{2}\in U_{2}$. </p>
<p>$P(v)=p(u_{1}+u_{2})=u_{1}$. which means $P$ is a Projection.</p>
<p>I need to prove that p is a linear transformation.</p>
<p>I know that you need to start with: </p>
<p>let $v_{i}$,$v_{j}\in V$ as $v_{i}=u_{1i}+u_{2i}$ , $v_{j}=u_{1j}+u_{2j}$<br>
$P(v_{i}+v_{j})=P(u_{1i}+u_{1j}+u_{2i}+u_{2j})$ and P$(v_{i})=u_{1i}$, $P(v_{j})=u_{1j} \Longrightarrow P(v_{i})+P(v_{j})=u_{1i}+u_{1j}$</p>
<p>but i'm missing a part how to connect between the two.</p>
| Lennart Döppenschmitt | 564,574 | <p>Let's clean up the notation a little bit: Given a vector space $V$ over $\mathbb{K}$ which decomposes as the direct sum $V = V_1 \oplus V_2$. Then every element $v \in V$ can be written as $v = v_1 + v_2$ with $v_i \in V_i$.</p>
<p>The projection $P: V \rightarrow V, v \mapsto v_1$ is a linear transformation because:
$$ P(a + b) = P( (a+b)_1 + (a + b)_2 ) = (a+b)_1 = a_1 + b_1 = P(a) + P(b)$$
and likewise
$$ P(\lambda a) = (\lambda a) _1 = \lambda a_1 = \lambda P(a) $$</p>
<p>for any elements $a,b \in V$ and $\lambda \in \mathbb{K}$.</p>
|
3,063,053 | <p>I'm a Calculus I student and my teacher has given me a set of problems to solve with L'Hoptial's rule. Most of them have been pretty easy, but this one has me stumped. <br /></p>
<p><span class="math-container">$$\lim\limits_{x\to \infty} \frac{x}{\sqrt{x^2 + 1}}$$</span> </p>
<p>You'll notice that using L'Hopital's rule flips the value of the top to the bottom. For example, using it once returns: </p>
<p><span class="math-container">$$\lim\limits_{x\to \infty} \frac{\sqrt{x^2 + 1}}{x}$$</span> </p>
<p>And doing it again returns you to the beginning: </p>
<p><span class="math-container">$$\lim\limits_{x\to \infty} \frac{x}{\sqrt{x^2 + 1}}$$</span> </p>
<p>I of course plugged it into my calculator to find the limit to evaluate to 1, but I was wondering if there was a better way to do this algebraically?</p>
| kkc | 630,558 | <p>When computing the limit of rational functions, as is the case for <span class="math-container">$$\lim_{x \rightarrow \infty} \frac{x}{\sqrt{x^2 +1}},$$</span> you want to divide the top and bottom by the highest degree in the denominator, which in this case is <span class="math-container">$x$</span>. Since <span class="math-container">$x \rightarrow +\infty$</span>, so <span class="math-container">$x$</span> is always positive (at least, near where we are worried about) I claim that <span class="math-container">$x = \sqrt{x^2}$</span>. So, if we divide the top and bottom by <span class="math-container">$x$</span>, we get <span class="math-container">$$\lim_{x \rightarrow \infty} \frac{x}{\sqrt{x^2 +1}} = \lim_{x \rightarrow \infty} \frac{1}{\sqrt{1 + 1/x^2}}.$$</span> You should be able to compute the limit from here.</p>
<p>Whenever you see a monomial in the numerator with the square root of a polynomial in the denominator, you should consider this method. Of course, keep in mind that you'll have to tweak it slightly if <span class="math-container">$x \rightarrow -\infty$</span>! Try to see if you can figure out what would change in that case.</p>
|
2,218,924 | <p>$ \displaystyle \lim_{n\to \infty} \sum_{k=1}^n \frac{k^4}{n^4}=$ ?</p>
<p>I found it difficult to transform it into the integral form by the definition of the Riemann sum, which is a way to solve similar problems.</p>
| Bernard | 202,857 | <p><strong>Hint:</strong></p>
<p>$$\sum_{k=1}^n \frac{k^4}{n^4}=\frac1{n^4}\sum_{k=1}^n k^4$$
and you can use <em>Faulhaber's formula</em> to get the value of
$S_4(n)=\sum_{k=1}^n k^4$ in function of $$S_3(n)=\dfrac{n^2(n+1)^2}4,\quad S_2(n)=\dfrac{n(n+1)(2n+1)}6,\quad S_1(n)=\dfrac{n(n+1)}2.$$
You obtain a polynomial function of degree $5$, with highest degree term $\;\dfrac{n^5}5$, hence
$$\sum_{k=1}^n \frac{k^4}{n^4}\sim_\infty\frac{n^5}{5n^4}=\frac n5\to +\infty.$$</p>
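<p>A short numerical check of this growth rate (my own illustration; the function name is arbitrary):</p>

```python
def partial_sum(n):
    # S(n) = sum_{k=1}^{n} k^4 / n^4
    return sum(k**4 for k in range(1, n + 1)) / n**4

# The ratio S(n) / (n/5) tends to 1, so the partial sums diverge like n/5.
for n in (10, 100, 1000):
    print(n, partial_sum(n), partial_sum(n) / (n / 5))
```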
|
2,837,172 | <p>In complex analysis, sometimes we need to use some theorems which are results of measure theory. However, I know very very little about measure theory. So</p>
<blockquote>
<p>What are some very basic results of measure theory on complex functions/complex plane/complex calculus?</p>
</blockquote>
<p>I expect the answers to be like:</p>
<ol>
<li>... is always measurable.</li>
<li>... is always integrable.</li>
<li>For theorems considering a measure space, we can choose it to be ...</li>
<li>...(anything that is worth mentioning)</li>
</ol>
<p>For instance, I always suspected all harmonic functions are measurable, but don’t know how to prove it.</p>
<p>Another example is, I failed to apply Dominated convergence theorem rigorously. On the Wikipedia page, we need to consider $\{f_n\}$ a sequence of measurable real functions on a measure space $(S,\Sigma,\mu)$. What $\{f_n\}$ can I choose if the function is complex? What measure space should I consider?</p>
<p>I hope I have provided enough context and my question is not too broad.</p>
<p>Thanks for any help in advance.</p>
| Homieomorphism | 553,656 | <p>I would suggest you take a look at Folland's <em>Real Analysis: Modern Techniques and Their Applications</em>. Chapter 0 can be omitted and simply used as a reference when you forget some basic analysis results. Chapter 1 is a bit harsh at the beginning, so maybe skim through it to get a general idea of what a measure is, but the most interesting part for you is Chapter 2, which is really well explained and has (almost) everything you need about measurability and integration of functions. It treats everything in a general setting and emphasizes real and complex measure spaces. Good luck!</p>
|
1,780,797 | <p>According to <a href="https://en.wikipedia.org/wiki/Null_set#Lebesgue_measure" rel="nofollow noreferrer">Wikipedia</a>, the straight line <span class="math-container">$\mathbb{R}$</span> is a null set in <span class="math-container">$\mathbb{R}^2$</span>.</p>
<p>That means the line <span class="math-container">$\mathbb{R}$</span> can be contained in <span class="math-container">$\bigcup_{k=1}^\infty B_k$</span>, where the <span class="math-container">$B_k$</span> are open disks and the total measure of all the <span class="math-container">$B_k$</span> is less than <span class="math-container">$\epsilon$</span>.</p>
<p>Question 1: How can this be done? Any explicit construction to show this?</p>
<p>Question 2: Since the intersection of <span class="math-container">$B_k$</span> with <span class="math-container">$\mathbb{R}$</span> is an open interval <span class="math-container">$I_k$</span>, doesn't this mean that <span class="math-container">$\mathbb{R}$</span> can be covered by union of intervals <span class="math-container">$I_k$</span> whose total length is arbitrarily small? (Which according to my <a href="https://math.stackexchange.com/questions/1780580/what-is-wrong-in-this-proof-that-mathbbr-has-measure-zero">previous question</a> is impossible?)</p>
<p>Sincere thanks for any help. I am puzzled by this.</p>
| Jack D'Aurizio | 44,121 | <p>Let $f(t)$ be the PDF of $X$ and $g(t)$ be the PDF of $Y$.
$$D_{KL}(P_X\parallel P_{X+Y}) = \int_{-\infty}^{+\infty}f(x)\log\frac{f(x)}{(f*g)(x)}\,dx$$
does not admit any obvious simplification, but the term </p>
<p>$$\log\frac{f(x)}{(f*g)(x)}=\log\frac{\int_{-\infty}^{+\infty} f(t)\,\delta(x-t)\,dt}{\int_{-\infty}^{+\infty} f(t)\,g(x-t)\,dt} $$
can be effectively controlled if some informations about the concentration/decay of $g(t)$ are known.</p>
<p>Is this the case?</p>
|
217,514 | <p>Given $A$, $B$ are bounded subsets of $\Bbb R$.
Prove </p>
<ol>
<li>$A\cup B$ is bounded. </li>
<li>$\sup(A \cup B) =\sup\{\sup A, \sup B\}$.</li>
</ol>
<p>Can anyone help with this proof?</p>
| mohamez | 34,920 | <p>For $1$, use the fact that $x\in A\cup B \Leftrightarrow x\in A$ or $x\in B$ (notice that $\sup A$ and $\sup B$ exist, since $A$ and $B$ are bounded). For $2$, use the characterization of the least upper bound: $\sup A = M$ if and only if $x\leq M$ for all $x\in A$, and for every $\epsilon>0$ there is some $x\in A$ with $M - \epsilon < x$.</p>
|
217,514 | <p>Given $A$, $B$ are bounded subsets of $\Bbb R$.
Prove </p>
<ol>
<li>$A\cup B$ is bounded. </li>
<li>$\sup(A \cup B) =\sup\{\sup A, \sup B\}$.</li>
</ol>
<p>Can anyone help with this proof?</p>
| Brian M. Scott | 12,042 | <p>Without loss of generality assume that $\sup A\le\sup B$, so that $\sup\{\sup A,\sup B\}=\sup B$, and you simply want to show that $\sup(A\cup B)=\sup B$. Clearly $\sup(A\cup B)\ge\sup B$, so it suffices to show that $\sup(A\cup B)\le\sup B$.</p>
<p>To show that $\sup(A\cup B)\le\sup B$, just prove that $\sup B$ is an upper bound for $A\cup B$, i.e., that $x\le\sup B$ for every $x\in A\cup B$. This isn’t hard if you remember that we assumed at the start that $\sup A\le\sup B$.</p>
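<p>For finite sample sets the supremum is just the maximum, so the identity can be spot-checked numerically (an illustration only; the argument above covers arbitrary bounded sets):</p>

```python
import random

random.seed(0)
for _ in range(100):
    A = [random.uniform(-10, 10) for _ in range(5)]
    B = [random.uniform(-10, 10) for _ in range(5)]
    # for finite sets, sup is max
    assert max(A + B) == max(max(A), max(B))
print("ok")
```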
|
2,590,165 | <p>How to show $f(x)$=$\frac{1}{1+x^2}$ is uniform continuous on $\Bbb R$. </p>
<p>Although, of course, for any interval $[a,b]$ this function is continuous and bounded, and therefore also uniformly continuous. By the <strong>Continuous Extension Theorem</strong> it is then uniformly continuous on any $(a,b)$. Proceeding this way, we can show it is uniformly continuous on $\Bbb R$. </p>
<p>I wish to prove the same analytically. I assumed there exists $x,u \in \Bbb R$, such that $ |x-u|< \delta$. </p>
<p>Now,</p>
<p>$|f(x)-f(u)|$=$\frac {|x^2-u^2|}{|(1+x^2)(1+u^2)|}$ $\le$ $\frac{|x-u||x+u|}{x^2u^2}$ $\le$ $\delta$$\frac{|x+u|}{x^2u^2}$.</p>
<p>Here I stuck. I wish to find an $\epsilon$ so that the $|f(x)-f(y)|\lt \epsilon$, where $\delta$ depends only on $\epsilon$, not on $x$. But unable to do that. Tried to apply A.M-G.M inequality but could not find a fruitful result. What to do? </p>
| user | 505,767 | <p>Note that</p>
<p>$$\frac{e^{xB}}{e^{xA}} \gt \frac{xA-1}{xB-1}\iff xB-xA >\log(1-xA)-\log(1-xB)$$</p>
<p>$$\iff \log(1-xB)+xB>\log(1-xA)+xA$$</p>
<p>which is false, indeed</p>
<p>$$f(y)=\log(1-y)+y\implies f'(y)=\frac{-1}{1-y}+1=\frac{-y}{1-y}<0 \quad y\in(0,1)$$</p>
|
452,803 | <p>Test the convergence of improper integrals :</p>
<p>$$\int_1^2{\sqrt x\over \log x}dx$$</p>
<p>I basically have no idea how to approach a problem in which a logarithm appears. I need some hints on solving this type of problem.</p>
| Random Variable | 16,033 | <p>$$ \lim_{x \to 1^{+}} (x-1) \frac{\sqrt{x}}{\ln x} = \lim_{x \to 1^{+}} \frac{\sqrt{x} + (x-1) \frac{1}{2 \sqrt{x}}}{\frac{1}{x}} = 1 $$</p>
<p>The integrand behaves like $\frac{1}{x-1}$ near $x=1$ and thus $ \displaystyle\int_1^2{\sqrt x\over \log x}dx$ diverges.</p>
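<p>The divergence can also be seen numerically: truncating the integral at $1+\varepsilon$ and shrinking $\varepsilon$ makes the value grow without bound, roughly like $\log(1/\varepsilon)$. A rough midpoint-rule sketch (my own illustration, not a proof):</p>

```python
import math

def truncated_integral(a, b, n=100_000):
    # midpoint Riemann sum of sqrt(x)/log(x) over [a, b]
    h = (b - a) / n
    return h * sum(math.sqrt(a + (i + 0.5) * h) / math.log(a + (i + 0.5) * h)
                   for i in range(n))

# Values keep growing as the left endpoint approaches the singularity at 1.
for eps in (1e-2, 1e-4, 1e-6):
    print(eps, truncated_integral(1 + eps, 2))
```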
|
378,953 | <p><strong>Problem:</strong> Give an example of a permutation of the first $n$ natural numbers from which it is impossible to get to the standard permutation $1,2,\ldots,n$ after less than $n-1$ transposition operations (i.e switching the place of two elements).</p>
<p><strong>My attempt</strong></p>
<p>Suppose we have a permutation $T$ and we perform one transposition on $T$ to get $T'$. That would mean $T(i) = j, T(i') = j'$ and $T'(i) = j' , T'(i') = j$ for some $i,j,i',j'$. It is easy to see that the permutation T contains 2 cycles (may be the same): $(i,j,...)$ and $(i',j',\ldots)$. The transposition operation would affect only these two cycles but keep all other cycles intact. Therefore, the number of cycles, if decreased, will not decrease more than $1$.</p>
<p>Now the permutation $[1,2,\ldots,n]$ has n cycles and the permutation $[2,3,\ldots,n,1]$ has 1 cycle only. So it is impossible to use less than $n-1$ operations to get $[2,3,\ldots,n,1]$ from $[1,2,\ldots,n]$. Keeping in mind that getting from permutation A to B is the same as getting from B to A, problem solved.</p>
<p><strong>My question</strong></p>
<p>Is my approach correct and are there any better solutions? Thank you.</p>
| Ted | 15,012 | <p>Yes, this is correct. As you observed, the key fact is that hitting a permutation with a transposition $(ij)$ always either decreases or increases the number of cycles by exactly 1. If the original permutation had $i$ and $j$ in the same cycle, then it splits them (hence increases the number of cycles by 1); if they were in different cycles, it joins them (hence decreases the number of cycles by 1).</p>
<p>This observation about cycles can also be used to prove that the signature of a permutation is well-defined.</p>
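<p>The key fact (a transposition changes the number of cycles by exactly one) is easy to illustrate with a short script; the function names are my own:</p>

```python
def cycle_count(perm):
    # perm maps position i to perm[i]; count the cycles of the permutation
    seen, count = set(), 0
    for start in range(len(perm)):
        if start not in seen:
            count += 1
            i = start
            while i not in seen:
                seen.add(i)
                i = perm[i]
    return count

def swap(perm, i, j):
    # switch the places of the elements at positions i and j
    q = list(perm)
    q[i], q[j] = q[j], q[i]
    return q

p = [1, 2, 3, 4, 0]                # the single 5-cycle (0 1 2 3 4)
print(cycle_count(p))              # 1
print(cycle_count(swap(p, 0, 2)))  # 2: the cycle was split
```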
|
156,376 | <p>I understand that when we are doing indefinite integrals on the real line, we have $\int f(x) dx = g(x) + C$, where $C$ is some constant of integration. </p>
<p>If I do an integral from $\int f(x) dx$ on $[0,x]$, then is this considered a definite integral? Can I just leave out the constant of integration now? I am skeptical of the fact that this is a definite integral, because our value $x$ is still a variable. </p>
| cuabanana | 64,547 | <p>Yes, your function is a definite integral, because it is evaluated over a specific interval. Although the constant is strictly unnecessary, because it is subtracted away when the integral is evaluated, it is good practice to keep the constant of integration. To avoid confusion, rename the variable of the function you are integrating, e.g. write $\int_0^x f(t)\,dt$, since $x$ is already being used as the upper endpoint. It seems your reasoning is OK. </p>
|
3,832,684 | <p>Does the following inequality hold?
<span class="math-container">$$\sqrt {x-z} \geq \sqrt x -\sqrt{z} \ , $$</span>
for all <span class="math-container">$x \geq z \geq 0$</span>.</p>
<p>My justification
<span class="math-container">\begin{equation}
z \leq x \Rightarrow \\ \sqrt z \leq \sqrt {x} \Rightarrow \\ 2\sqrt z \sqrt z \leq 2\sqrt z\sqrt {x} \Rightarrow \\ 2 z \leq 2\sqrt z\sqrt {x} \Rightarrow \\ z - 2\sqrt z\sqrt {x} + x \leq x - z \Rightarrow \\ (\sqrt x -\sqrt z )^2 \leq x - z \Rightarrow \\ \sqrt x -\sqrt z \leq \sqrt {x - z}
\end{equation}</span></p>
| poetasis | 546,655 | <p><span class="math-container">\begin{equation}
\qquad\sqrt {x-z} \ge \sqrt x -\sqrt{z}\\
\implies (\sqrt {x-z})^2 \geq (\sqrt x -\sqrt{z})^2\\
\implies x-z\ge x-2\sqrt{xz}+z\\
\implies x-z - x-z\ge-2\sqrt{xz}\\
\implies -2z\ge -2\sqrt{xz}\\
\implies -z\ge -\sqrt{xz}\\
\text{multiplying both sides by } -1 \text{ reverses the inequality}\\
\implies \sqrt{xz}\ge z\\
\implies xz\ge z^2\\
\implies x\ge z\quad \land\quad x-z\ge 0\\
\implies x\ge\ z\ge 0
\end{equation}</span></p>
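<p>A randomized spot-check of the inequality (a sanity test only, not a proof; the small tolerance guards against floating-point rounding):</p>

```python
import math
import random

random.seed(1)
for _ in range(1000):
    z = random.uniform(0, 100)
    x = z + random.uniform(0, 100)   # enforce x >= z >= 0
    assert math.sqrt(x - z) >= math.sqrt(x) - math.sqrt(z) - 1e-12
print("ok")
```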
|
4,515,517 | <p>Suppose that <span class="math-container">$E$</span> is a measurable set and <span class="math-container">$f: E \rightarrow [0, \infty]$</span> is a non-negative function with <span class="math-container">$\int_E f(x)^n dx = \int_E f(x) dx < \infty$</span> for all positive integers <span class="math-container">$n$</span>. Show that there exists a measurable set <span class="math-container">$A \subseteq E$</span> such that <span class="math-container">$f = \chi_A$</span> a.e.</p>
<p><em>My Attempt</em></p>
<p>Define the measurable set <span class="math-container">$A = \{x: \liminf_{n \rightarrow \infty} f(x)^n \text{ exists } \} $</span>. Define <span class="math-container">$g(x) = \liminf_n f(x)^n$</span>, by Fatou's Lemma:</p>
<p><span class="math-container">$$
\int_A \liminf_{n \rightarrow \infty} f(x)^n dx \leq \liminf_{n \rightarrow \infty} \int_A f(x)^n dx = \int_A f(x)$$</span>. Thus</p>
<p><span class="math-container">$$
\int_A f-g \leq 0 \implies f = g \text{ a.e. }
$$</span></p>
<p>comparing the limit <span class="math-container">$g(x) = \liminf_n f(x)^n = f(x)$</span> for the cases <span class="math-container">$f(x) > 1$</span> and <span class="math-container">$f(x) \leq 1$</span> gives <span class="math-container">$f(x) = 1 \text{ or }0$</span></p>
<p>I know there is an error in the proof. Guidance would be appreciated.</p>
| Adayah | 149,178 | <p>First of all, I think it is valuable that you try to carry out Shashi's technical approach from the comments. Having said that, there is a nice trick that solves the problem: let</p>
<p><span class="math-container">$$P(y) = y^2 (y-1)^2 = y^4 - 2y^3 + y^2.$$</span></p>
<p>By assumption</p>
<p><span class="math-container">$$\begin{align*}
\int \limits_E P(f(x)) & = \int \limits_E f(x)^4 - 2f(x)^3 + f(x)^2 \\
& = \int \limits_E f(x) - 2\int \limits_E f(x) + \int \limits_E f(x) = 0.
\end{align*}$$</span></p>
<p>But <span class="math-container">$P(f(x)) \geqslant 0$</span> for <span class="math-container">$x \in E$</span>, hence <span class="math-container">$P(f(x)) = 0$</span> a.e. on <span class="math-container">$E$</span>. It follows that <span class="math-container">$f(x) \in \{ 0, 1 \}$</span> almost everywhere and so <span class="math-container">$f = \chi_A$</span> a.e. for some measurable <span class="math-container">$A \subseteq E$</span>.</p>
<hr />
<p>Feedback on your approach: your definitions are overcomplicated. Note that for a fixed <span class="math-container">$x \in E$</span>, the sequence <span class="math-container">$f(x)^n$</span> is a geometric sequence with a non-negative ratio <span class="math-container">$f(x)$</span>. Such a sequence has a limit if and only if <span class="math-container">$f(x) \in [0, 1]$</span>, so in fact</p>
<p><span class="math-container">$$A = \{ x \in E : f(x) \in [0, 1] \}$$</span></p>
<p>and</p>
<p><span class="math-container">$$g(x) = \begin{cases} 1 & \text{if } f(x) = 1 \\ 0 & \text{otherwise} \end{cases}$$</span></p>
<p>Furthermore, it is unclear how you conclude that</p>
<p><span class="math-container">$$\int f-g \leqslant 0$$</span></p>
<p>because the Fatou lemma gives the opposite inequality. Lastly, it is also unclear how you conclude anything about <span class="math-container">$f$</span> outside <span class="math-container">$A$</span>.</p>
|
2,877,085 | <p>I think the Abel or Dirichlet test could be used, but I have no idea how.</p>
<p>$$ \sum_{n=1}^{\infty} (-1)^n\frac{3n-2}{n+1}\frac{1}{n^{1/2}} .$$</p>
| Angina Seng | 436,618 | <p>The series
$$\sum_{n=1}^\infty\frac{3(-1)^n}{n^{1/2}}$$
is convergent by Leibniz. The difference from the original series
is
$$\sum_{n=1}^\infty (-1)^n\left(\frac{3n-2}{(n+1) n^{1/2}}-\frac{3}{n^{1/2}}\right)$$
which is absolutely convergent, since the terms are $O(n^{-3/2})$.</p>
|
1,661,244 | <p>If $R$ is a commutative ring with identity and $K$ is an ideal of it, let $R'=R/K$, let $I$ be an ideal of $R$ satisfying $K\subseteq I$, and let $I'$ be the corresponding ideal of $R'$ (we know that the correspondence theorem gives a one-to-one correspondence between the set of ideals of $R$ containing $K$ and the set of ideals of $R'$).
Can you give me some examples where $I'$ is prime but $I$ is not?</p>
| MooS | 211,913 | <p>I do not know why you want to replace 'prime' by 'principal', since these properties do not really relate, but here is an example:</p>
<p>$R=k[x,y], K=(x), I=(x,y)$. $I$ is not principal but $I'=I/K=(y)$ is principal in $R'=R/K=k[y]$.</p>
|
750,751 | <p>If $V$ is a finite-dimensional vector space and $t \in \mathcal L (V,V)$ is such that $t^2 = id_V$, prove that the sum of the eigenvalues of $t$ is an integer.</p>
<p>I started the prove as such:</p>
<p>Let $\lambda_1 ,...,\lambda_n $ be eigenvalues of $t$. </p>
<p>So $\lambda_1^2 ,... \lambda_n^2$ will be the eigenvalues for $t^2 = id_V$</p>
<p>I don't know how to continue. Any suggestions?</p>
| Pedro | 23,350 | <p>You know $X^2-1=(X-1)(X+1)$ annihilates $t$. What can be the possible eigenvalues for $t$?</p>
|
3,041,632 | <p><span class="math-container">$X_n=4X_{n-1}+5$</span></p>
<p>How come the solution of this recurrence is this? </p>
<p><span class="math-container">$X_n=\frac83\,4^n-\frac53$</span></p>
<p>I also have that <span class="math-container">$X_0=1$</span>.</p>
<p>I am using the telescoping method and trying to solve it like this:</p>
<p><span class="math-container">$X_n= 5 + 4X_{n-1}$</span></p>
<p><span class="math-container">$X_n= 5 + 4(5+4X_{n-2})$</span></p>
<p><span class="math-container">$X_n= 5 + 4\times5 + 4\times4\times X_{n-2}$</span></p>
<p>But this leads to me getting <span class="math-container">$5\times4^{n-1}\times4^n$</span>.</p>
<p>Can someone please explain this to me? </p>
| Asit Srivastava | 567,604 | <p>This is a difference equation, and it can be solved using the Z-transform. Take the Z-transform of both sides of the equation and then use the initial condition. You will obtain the Z-transform of $X$; taking its inverse Z-transform gives the solution.</p>
|
3,041,632 | <p><span class="math-container">$X_n=4X_{n-1}+5$</span></p>
<p>How come the solution of this recurrence is this? </p>
<p><span class="math-container">$X_n=\frac83\,4^n-\frac53$</span></p>
<p>I also have that <span class="math-container">$X_0=1$</span>.</p>
<p>I am using the telescoping method and trying to solve it like this:</p>
<p><span class="math-container">$X_n= 5 + 4X_{n-1}$</span></p>
<p><span class="math-container">$X_n= 5 + 4(5+4X_{n-2})$</span></p>
<p><span class="math-container">$X_n= 5 + 4\times5 + 4\times4\times X_{n-2}$</span></p>
<p>But this leads to me getting <span class="math-container">$5\times4^{n-1}\times4^n$</span>.</p>
<p>Can someone please explain this to me? </p>
| lab bhattacharjee | 33,337 | <p>Let <span class="math-container">$x_n=y_n+a\implies y_0=x_0-a=1-a$</span></p>
<p><span class="math-container">$$5=x_n-4x_{n-1}=y_n+a-4(y_{n-1}+a)=y_n-4y_{n-1}-3a$$</span></p>
<p>Set <span class="math-container">$-3a=5\iff a=?$</span> so that <span class="math-container">$y_n=4y_{n-1}=\cdots=4^ry_{n-r},0\le r\le n$</span></p>
<p><span class="math-container">$r=n\implies y_n=4^ny_0=4^n(1-a)$</span></p>
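<p>Completing the hint yields $a=-\frac53$, hence $y_0=\frac83$ and $x_n=\frac83\,4^n-\frac53$. A short check (my own, not part of the answer) comparing this closed form with direct iteration of the recurrence:</p>

```python
def x_closed(n):
    # closed form from the substitution above, with a = -5/3 and y_0 = 8/3
    return (8 / 3) * 4**n - 5 / 3

x = 1  # x_0 = 1
for n in range(10):
    assert abs(x - x_closed(n)) < 1e-6
    x = 4 * x + 5
print("closed form matches the recurrence")
```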
|
3,153,821 | <p>I'm trying to analyse a game of Mastermind and am having trouble quantifying the number of possible game states. I know that a code has <span class="math-container">$\text{# of colors}^{\text{# of pegs per guess}}$</span> combinations (in my case that would be <span class="math-container">$6^4=1296$</span>). However, an entire board state also consists of 10 guesses. Each guess has the same number of combinations, thus my intuition would be that the total number of states in a game of Mastermind would be <span class="math-container">$\text{# of rows}^{\text{# of combinations per row}}$</span>. This approach yields <span class="math-container">$11^{1296}$</span> board states, which is astronomically large and I'm having a hard time believing this is true.</p>
<p>To clarify what I mean by a board state, I mean any legal state the game board can be in using the standard game rules. Having 3 empty rows, then one guess row and another 6 empty rows is not a legal board state.</p>
<p>How do I go about estimating this number?</p>
| Especially Lime | 341,019 | <p>Your formula going from rows to the full board is incorrect, and should be <span class="math-container">$\#\text{combinations per row}^{\#\text{rows}}$</span>, giving <span class="math-container">$1296^{11}$</span> which is much less. Substituting in the formula for combinations per row, this is just <span class="math-container">$$(\#\text{colours}^{\#\text{pegs per row}})^{\#\text{rows}}=\#\text{colours}^{\#\text{pegs per row}\times\#\text{rows}}=\#\text{colours}^{\#\text{total pegs}}$$</span>
which you can get directly as the number of ways to choose a colour for each peg.</p>
<p>As Arthur says, this is the number of possibilities for a completely full board. The number of possibilities for a board with only <span class="math-container">$10$</span> rows used is likewise <span class="math-container">$(6^4)^{10}$</span>, then <span class="math-container">$(6^4)^9$</span> for <span class="math-container">$9$</span> rows, and so on, giving a total of <span class="math-container">$$\sum_{i=0}^{11}(6^4)^i=\frac{(6^4)^{12}-1}{6^4-1}$$</span></p>
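<p>The geometric sum can be verified with exact integer arithmetic (an illustrative check, using the answer's count of $11$ rows):</p>

```python
colours, pegs, max_rows = 6, 4, 11
per_row = colours**pegs        # 1296 combinations per row
# total number of states with 0, 1, ..., max_rows filled rows
total = sum(per_row**i for i in range(max_rows + 1))
# the closed form of the geometric sum agrees exactly
assert total == (per_row**(max_rows + 1) - 1) // (per_row - 1)
print(total)
```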
|
2,933,572 | <p>Suppose <span class="math-container">$A = 1/2^{100\log(n)}$</span>, and <span class="math-container">$B = e^{-100\log(2) \log(n)}$</span>.</p>
<p>I'm required to prove that <span class="math-container">$A$</span> and <span class="math-container">$B$</span> are equal. How should I prove this? I tried applying some rules of logarithms that I have learned, but I'm not able to show this.</p>
| mechanodroid | 144,766 | <p>Let <span class="math-container">$(x_n)_n$</span> be a sequence in <span class="math-container">$C$</span> which converges to <span class="math-container">$x \in \mathbb{R}$</span>. We have
<span class="math-container">$$d(x, C) \le d(x, x_n) \xrightarrow{n\to\infty} 0$$</span></p>
<p>so <span class="math-container">$d(x, C) = 0$</span>. If we had <span class="math-container">$x \notin C$</span>, it would be <span class="math-container">$d(x,C) > 0$</span> which is a contradiction. Hence <span class="math-container">$x \in C$</span> so <span class="math-container">$C$</span> is closed in <span class="math-container">$\mathbb{R}$</span>.</p>
|
2,120,194 | <blockquote>
<p>Let $K_1$ and $K_2$ be two disjoint compact sets in a metric space $(X,d).$ Show that $$a = \inf_{x_1 \in K_1, x_2 \in K_2} d(x_1, x_2) > 0.$$
Moreover, show that there are $x \in K_1$ and $y \in K_2$ such that $a = d(x,y)$.</p>
</blockquote>
<p>For the first part, suppose to the contrary that $\inf d(x_1, x_2) = 0$. Then $\epsilon$ is not a lower bound, so $d(x_1, x_2) < \epsilon$ for all $\epsilon > 0$. Since $K_1$ and $K_2$ are compact subsets of a metric space, they are closed and bounded. So, then $B(x_1, \epsilon) \cap K_2 \neq \emptyset$. Thus, $x_1$ is an adherent point to $K_2$. Since $K_2$ is closed, this means $x_1 \in K_2$, a contradiction.</p>
<p>I'm stuck on the moreover part. I tried supposing to the contrary that $d(x,y) > a$, but I did not get far. </p>
| Ken Duna | 318,831 | <p>You have the right idea. The ideal $(x^2 - 2) \subseteq \mathbb{Q}[x]$ is maximal and so $R = \frac{\mathbb{Q}[x]}{(x^2-2)}$ is a field. </p>
<p>Note that $R \cong \{ a + b \sqrt{2} \ | \ a,b \in \mathbb{Q} \}$ and $\mathbb{Q} \subseteq R$.</p>
<p>Suppose that $\sqrt{3} \in R$. Then there exist $a,b \in \mathbb{Q}$ such that</p>
<p>$$\sqrt{3} = a + b\sqrt{2}.$$</p>
<p>We now seek a contradiction.</p>
<hr>
<p>Case 1: $a = 0$</p>
<p>I'll leave this to you.</p>
<hr>
<p>Case 2: $b = 0$</p>
<p>I'll leave this to you.</p>
<hr>
<p>Case 3: $a,b \neq 0$
If you square both sides of that equation and rearrange, you get:</p>
<p>$$ 3 - a^2 - 2b^2 = 2ab \sqrt{2}.$$</p>
<p>This implies that $$\sqrt{2} = \frac{3-a^2-2b^2}{2ab} \in \mathbb{Q}_.$$</p>
<p>So we arrive at a contradiction. </p>
<hr>
<p>Thus $\sqrt{3} \notin R$. Obviously neither is $-\sqrt{3}$. So $x^2-3$ has no roots in $R$.</p>
|
2,795,777 | <p>I encountered this problem in one of my linear algebra homeworks (Linear Algebra with Applications 5th Ed 1.3.44):</p>
<p>Consider a $n \times m$ matrix $A$, such that $n > m$. Show there is a vector $b$ in $\mathbb{R}^{n}$ such that the system $Ax=b$ is inconsistent.</p>
<p>I have a strong intuition as to why this is true, because the transformation matrix maps a vector in $\mathbb{R}^{m}$ to $\mathbb{R}^{n}$, so it goes from a lower dimension to a higher one. When the $m$ components of $x$ vary, they at most parameterize an $m$-dimensional subspace of $\mathbb{R}^{n}$. However, my "proof" (included below) feels very hand-wavy and sloppy, and it may also be incorrect in a number of places. I'd appreciate some pointers on how to formalize proofs of this type a little more, so that they are rigorous enough to write on a homework or test, perhaps demonstrated on this example.</p>
<p>My proof:</p>
<p>Consider the case where $A$ has at least $m$ linearly independent row-vectors. Using elementary row operations, rearrange $A$ to $A'$, so these $m$ row vectors are the first $m$ rows. $b'$ will refer to the vector $b$ under the same rearrangement of rows. If we place the first $m$ rows in reduced row echelon form using only elementary operations with the first $m$ rows, the augmented matrix $[A'|b']$ will have the following form, where $x_{i}$ is the $i$-th element of the solution vector $x$.</p>
<p>\begin{bmatrix}
1 & 0 & \dots & 0 & x_{1} \\
0 & 1 & \dots & 0 & x_{2}\\
\vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & \dots & 1 & x_{m} \\
a'_{m+1, 1} & a'_{m+1, 2} & \dots & a'_{m+1, m} & b'_{m+1} \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
a'_{n,1} & a'_{n, 2} & \dots & a'_{n,m} & b'_{n}
\end{bmatrix}</p>
<p>Now consider the $m+1$th row. To eliminate coefficients in this row, it would mean that $x_{m+1} = b'_{m+1} - $$\sum_{i=1}^{m} x_{i}\cdot a_{m+1,i}$, because to eliminate each coefficient would involve scaling by $a_{m+1,i}$ and then subtracting. The system is inconsistent for all $x_{m+1} \neq 0$, so we then choose any $b'_{m+1}$ for which this inequality holds, there are infinitely many, to find $b'$ which makes $A'x=b'$ inconsistent. Then, unswap the rows to make our $b'$ back into $b$ and we have found a vector which makes our system inconsistent.</p>
| José Carlos Santos | 446,262 | <p><strong>Hint:</strong> What is $A.v$ if $v=(1,10,10^2,\ldots,10^9)$?</p>
|
250,119 | <p>I'd like to show that if a set $X$ is Dedekind finite then it is finite, if we assume $(AC)_{\aleph_0}$. A set $X$ is called Dedekind finite if the following equivalent conditions are satisfied: (a) there is no injection $\omega \hookrightarrow X$; (b) every injection $X \to X$ is also a surjection.</p>
<p>Countable choice $(AC)_{\aleph_0}$ says that every countable family of non-empty, pairwise disjoint sets has a choice function. </p>
<p>There is the following theorem: </p>
<p><img src="https://i.stack.imgur.com/yKpuR.png" alt="enter image description here"></p>
<p>from which I can prove what I want as follows: Pick an $x_0 \in X$. Define $G(F(0), \dots, F(n-1)) = \{x_0\}$ if $x_0 \notin \bigcup F(k)$ and $G(F(0), \dots , F(n-1)) = X \setminus \bigcup F(k)$ otherwise. Also, $G(\varnothing) = \{x_0\}$. Let $F: \omega \to X$ be as in the theorem. Then $F$ is injective by construction. </p>
<p>The problem with that is that I suspect that the proof of theorem 24 needs countable choice. So what I am after is the following: consider the generalisation of theorem 24: </p>
<p><img src="https://i.stack.imgur.com/z8hQB.png" alt="enter image description here"></p>
<p>(note the typo in $(R^\ast)$, it should be $F(z) \in G^\ast (F \mid I(z), z)$), and its proof (assuming AC): </p>
<p><img src="https://i.stack.imgur.com/Xtzf1.png" alt="enter image description here"></p>
<p>I want to modify this proof to prove the countable version of the theorem. But I can't seem to manage. I need a countable set $\{G^\ast \mid \{\langle f,z \rangle \} : \langle f,z \rangle \in dom(G^\ast) \}$. Ideas I had were along the lines of picking $f_0(x) = x_0$ the constant function and then to consider $\{G^\ast \mid \{\langle f_0,n \rangle \} : \langle f_0,n \rangle \in dom(G^\ast) \}$ but what then?</p>
<p>Thanks for your help.</p>
| Matt E | 221 | <p>The Lefschetz principle can be understood in scheme theoretic terms in the following way:</p>
<p>suppose that $X \to S$ is a scheme over a base $S$ (possibly with extra data) which is fppf over $S$. Then we may descend $X$ to $X_0 \to S_0$ where $S_0$ is finite type over $\mathbb Z$. (Here "descend" means that there is a map $S\to S_0$ so that $X$ is recovered from $X_0$ via base-change. For a proof/explanation, search for discussions of "removing Noetherian hypotheses" online. The standard reference is somewhere in EGA IV.)</p>
<p>Now suppose that $P$ is a property that can be checked after faithfully flat base-change; then we use the above method to transfer $P$ from the context of complex scalars to any field of char. zero.</p>
<p>E.g. if $(X,\mathcal L)$ is a smooth projective variety over a field $k$ of char. zero, then via the above we may descend $(X,\mathcal L)$ to $(X_0,\mathcal L_0)$ over a finite type $\mathbb Z$-scheme $S_0$. The morphism Spec $k \to S_0$ factors as Spec $k \to $ Spec $k_0 \to S_0$, where $k_0$ is a finitely generated subfield of $k$, since $S_0$ is finite type over $\mathbb Z$. Base-changing to $k_0$, we get $(X_0',\mathcal L_0')$ over $k_0$ which recovers $(X,\mathcal L)$ after base-changing to $k$.</p>
<p>Now choose an embedding $k \to \mathbb C$, as we may do since $k_0$ is finitely generated. Base-change to $\mathbb C$ gives $(X',\mathcal L')$.</p>
<p>So we have $(X,\mathcal L)$ and $(X',\mathcal L')$ over $k$ and over $\mathbb C$, both of which are base-changed from $(X_0',\mathcal L_0')$ over $k_0$.</p>
<p>Using the fact that properness, smoothness, and ampleness may be checked after a faithfully flat base-change (in our case, just a change of base field), and are also preserved by such a base-change, and also that formation of the canonical bundle, and of cohomology, also commutes with change of base field, we can transfer Kodaira embedding from $(X',\mathcal L')$ to $(X_0',\mathcal L_0')$, and finally to $(X,\mathcal L)$, as desired. </p>
<p>Note: The fact that $X\to S$ can be recovered from $X_0\to S_0$ is one way of encoding Lefschetz's intiuition that an algebraic variety only requires a finite amount of data to encode, which is what underlies the Lefschetz principle. In practice, people use this a lot, whereas I've never seen anyone use a logical or model-theoretic formulation of the Lefschetz principle in an algebraic geometry argument.</p>
<p>People also use the closed points of $S_0$, which have positive characteristic, to deduce facts about the original $X$ --- thus the <em>decomposition theorem</em> for perverse sheaves in char. zero was first proved by such reduction to char. $p$ methods, as was the bend-and-break lemma in the theory of birational geometry. Raynaud gave a proof of Kodaira embedding by proving it first in a char $p$ setting and then passing to char. $0$ by these methods. In the context of passing from char. $p$ to char. $0$ there also more model-theoretic arguments, such as in some proofs of the Ax-Grothendieck theorem, but my experience in this context too is that "spreading out" arguments (people call the passage from
Spec $k$ of char. zero to $S_0$ "spreading out" over $\mathbb Z$) are much more common.</p>
|
5,586 | <p>I'm in my last year of high school, and I'm aiming for a perfect grade in maths. The problem is that this year is the hardest year of maths I have ever faced in my entire life, especially differentiation and limits, as it's the first time I am studying them. Here are the topics I am required to study in the first semester:</p>
<ul>
<li>Limit of a function at a point</li>
<li>Limits Theorems</li>
<li>Limits of fractional functions</li>
<li>Limits of Trigonometric functions</li>
<li>Limits at Infinity</li>
<li>Continuity at a point</li>
<li>Continuity on an interval</li>
<li>Rate of Change</li>
<li>First derivative</li>
<li>Continuity and differentiation</li>
<li>Differentiation Rules</li>
<li>Derivatives of Higher Order</li>
<li>The chain rule</li>
<li>Implicit differentiation</li>
<li>Geometric applications of differentiation</li>
<li>Physical applications of differentiation</li>
<li>Related Rates</li>
<li>Increasing and Decreasing functions</li>
<li>Extreme Values.</li>
</ul>
<p>Limits are relatively easy. However, related rates and extreme values are disgustingly difficult. Is there any way to make those two lessons easy and routine? Something like a book filled with questions on those two, or something similar.</p>
<p>Thanks.</p>
| Joonas Ilmavirta | 2,074 | <p>The best way to ensure a good grade is to make sure you <em>deeply understand</em> the topics you are supposed to learn.
It is of course important to remember the routine solution methods, but you should also be able to tell intuitively and at a glance <em>why</em> these methods work and <em>where</em> any given method is applicable.
You of course need to remember some key results, but you should also be able to justify those results — or even better, give a (sketch of a) proof.</p>
<p>The point is that if you understand the topic well, you can quickly and reliably reconstruct all necessary information.
If you remember the topic as a whole, it does not matter if you forget some little details.
I have a PhD in mathematics and I still occasionally forget elementary things, but I can fill the gaps.
For example, if you remember the differentiation rule of quotients but you are not sure about the signs, test it with some simple functions — the sign in the general case must be the same as in any example.</p>
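<p>This sign test is easy to automate. Below is a minimal sketch in Python using SymPy; the test functions $f = x^2$ and $g = x$ are chosen arbitrarily, and any simple pair whose quotient you can differentiate by hand would work just as well.</p>

```python
import sympy as sp

x = sp.symbols('x')
f, g = x**2, x  # simple test pair: f/g = x, so (f/g)' = 1

# Two candidate quotient rules, differing only in the sign of the numerator.
candidate_plus = (sp.diff(f, x) * g + f * sp.diff(g, x)) / g**2
candidate_minus = (sp.diff(f, x) * g - f * sp.diff(g, x)) / g**2

true_derivative = sp.diff(f / g, x)

# Only the minus-sign candidate agrees with the true derivative,
# so the general rule must carry the minus sign.
print(sp.simplify(candidate_minus - true_derivative))  # 0
print(sp.simplify(candidate_plus - true_derivative))   # 2
```

<p>Since the sign in the general rule must match the sign in any example, this little experiment pins down the quotient rule.</p>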
<p>Teachers often focus on telling what is true (differentiation rules, ways to calculate limits), but I strongly recommend learning also what is not true.
For example, if $\lim_{x\to\infty}f(x)=0$ and $\lim_{x\to\infty}g(x)=\infty$, do we necessarily have $0<\lim_{x\to\infty}f(x)g(x)<\infty$?
If you are aware of some common "false rules" that are easy to believe, you can recognize when you have made a mistake.
When solving a problem, try to make sure that you understand what you are doing at all times and test your claims in special cases if you are unsure.
(The last sentence may sound trivial, but many students seem not to do this.)</p>
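<p>The question above is a good exercise: the answer is no, and a quick computation (a sketch in Python with SymPy; the example functions are invented for illustration) shows that the product can tend to $0$, to any finite constant, or to $\infty$:</p>

```python
import sympy as sp

x = sp.symbols('x', positive=True)

# In every case f -> 0 and g -> oo as x -> oo, yet f*g behaves differently.
cases = [
    (1 / x**2, x),     # f*g = 1/x -> 0
    (5 / x,    x),     # f*g = 5   -> 5
    (1 / x,    x**2),  # f*g = x   -> oo
]

limits = [sp.limit(f * g, x, sp.oo) for f, g in cases]
print(limits)  # [0, 5, oo]
```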
<p>You will make mistakes and you will forget things.
We all do.
If you want to make yourself good, try to make yourself robust — so that if you forget something, you can reconstruct it based on something else, and if you make a mistake, you can recognize it yourself.</p>
<p>So far I have answered a question like this: "What kind of a student will almost surely get perfect grades?"
Another important question is: "How does one become such a student?"</p>
<p>For one thing, you should know what you want to become.
If you really want to understand mathematics well, examine your own skills.
Ask yourself what are the most important ideas, results and methods in higher order differentiation.
If you cannot answer with confidence and give a couple of examples demonstrating these ideas, you need to work more.</p>
<p>For another thing, do not limit your scope to the present course if possible.
The big picture you create for yourself shouldn't be only about the course at hand, but mathematics as a whole.
I would even suggest not trying to remember which course a given topic was covered in and which course you are having at the moment.
The borders between different courses are somewhat artificial and you don't need to respect them.</p>
<p>Also, if you have the extra time, look what is coming ahead: find a follow-up course that builds on your current course and take a look at its book.
When I was in high school (or the closest equivalent in Finland), other students thought that I didn't have to work at all because I understood quickly and could solve problems quite intuitively.
The reason was that I was working ahead of them: I had already read the book of the next course, and that gave me plenty of context and motivation for the present topic and I could focus on building a solid big picture.
I was working hard, but I was working on something different than others.
It often happens that you properly understand something only when you have applied it in something else; no one masters the last thing they have learned.</p>
<p>As JPBurke suggests, working in a group also helps.
But a group is not strictly necessary if you can't find equally motivated friends or suitable ways to collaborate.
What you do need is someone to ask from if you don't understand something on your own.
It can be a fellow student, a teacher, an older sibling or anyone willing to help.</p>
<p>I realize that this answer gives somewhat grandiose goals.
A perfect understanding is too much to ask for, but I do suggest putting goals in this direction.
For me playful interest and idle curiosity in mathematics is what kept and still keeps me going; there is no need to be serious in order to become good.
The most valuable thing you can have when trying to get good grades is a passion to understand.</p>
|
5,586 | <p>I'm in my last year of high school, and I'm aiming for a perfect grade in maths. The problem is that this year is the hardest year of maths I have ever faced in my entire life, especially differentiation and limits, as it's the first time I am studying them. Here are the lessons that are required to study for the first semester:</p>
<ul>
<li>Limit of a function at a point</li>
<li>Limits Theorems</li>
<li>Limits of fractional functions</li>
<li>Limits of Trigonometric functions</li>
<li>Limits at Infinity</li>
<li>Continuity at a point</li>
<li>Continuity on an interval</li>
<li>Rate of Change</li>
<li>First derivative</li>
<li>Continuity and differentiation</li>
<li>Differentiation Rules</li>
<li>Derivatives of Higher Order</li>
<li>The chain rule</li>
<li>Implicit differentiation</li>
<li>Geometric applications of differentiation</li>
<li>Physical applications of differentiation</li>
<li>Related Rates</li>
<li>Increasing and Decreasing functions</li>
<li>Extreme Values.</li>
</ul>
<p>Limits are relatively easy. However, related rates and extreme values are disgustingly difficult. Is there any way to make those two lessons easy and routine? Something like a book filled with questions on those two or something.</p>
<p>Thanks.</p>
| Jasper | 1,147 | <p>The advice that JPBurke and Joonas Ilmavirta have given is excellent.</p>
<p>If you want to be perfect, you need to check your work (as explained at the end of this answer). You also need to know that some problems do not have answers -- and that the correct answer may be to point out why. Furthermore, real-world problems have limited precision (or accuracy) of the input data. Understand significant figures, and when "close enough" really is "good enough".</p>
<p>With regard to <strong>calculus</strong>:</p>
<ul>
<li>The Fundamental Theorem of Calculus and the Reynolds Transport Theorem are the two most important concepts in calculus. Yes, they are even more important than limits or the definition of a derivative. This is because they let you sanity-check your work in real life, even if you only have estimates available.</li>
<li>The Fundamental Theorem of Calculus proves that integration and differentiation are inverse processes. You can use integration to check your derivatives, and <em>vice versa</em>.</li>
<li>The Reynolds Transport Theorem is a detailed version of "what goes in, either stays in, or comes out." It is the basis for all Conservation Law problems, such as almost all practical problems in physics and engineering. Learn this Theorem, and you will have a much easier time in Physics, Quantitative Chemistry, Statics, Mechanics, Electrodynamics, Fluid Mechanics, and Thermodynamics. This is because almost all of the problems in all of these subjects use the same math -- special cases of the Reynolds Transport Theorem.</li>
<li>Try to solve most problems without using a calculator.</li>
<li>Follow a good process for solving (and checking) problems, as discussed below.</li>
<li>For every problem (that is not a proof), graph the problem (either on paper, or in your head). Mark on the graph where the function crosses the x-axis, where it crosses the y-axis, where it has minimum(s) and maximum(s), and where it has inflection points. Know where the slope is positive, zero, or negative. Doing this will seem like a lot of work at first, but it will give you an intuitive sense for the shape of the graph.</li>
<li>You will learn some important things from the graph practice I just mentioned. The slope of a function is the function's derivative. If the slope is zero, the graph is either at a minimum, a maximum, or an inflection point. Where the second derivative is zero, the graph is probably at an inflection point.</li>
<li>Pay attention to symmetry. Is the function even or odd? (In other words, is it symmetric with respect to x = 0?) Unshifted even-powered polynomials and unshifted cosines are even; unshifted odd-powered polynomials and unshifted sines are odd.</li>
</ul>
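<p>The Fundamental Theorem check mentioned in the bullets above can be carried out symbolically. Here is a minimal sketch in Python using SymPy; the function is an arbitrary example, not from any particular textbook problem:</p>

```python
import sympy as sp

x = sp.symbols('x')
f = x**3 - 2*x + sp.sin(x)  # arbitrary example function

derivative = sp.diff(f, x)

# By the Fundamental Theorem of Calculus, integrating the derivative
# recovers f up to a constant (SymPy omits the +C, so here it is exact).
recovered = sp.integrate(derivative, x)
print(sp.simplify(f - recovered))  # 0
```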
<p>When you are doing things with <strong>vector</strong>s, the following <strong>notation</strong>s work well:</p>
<ul>
<li><p>If you are allowed to choose your notation, use x, y, and z's "hat notation" to indicate the directions of unit vectors. For example, "y-hat" is a letter "y" with a circumflex accent, and is written "ŷ". This is less confusing than using the i, j, and k "hat notation" (such as "ĵ").</p></li>
<li><p>Either use angle bracket notation for vectors (such as ⟨x, y, z⟩) or explicitly add the vectors (such as x x̂ + y ŷ + z ẑ, using the proper hat symbol rather than spelling out "x-hat").</p></li>
<li><p>Unfortunately, you cannot use the hat notation to indicate unit vectors in Quantum Mechanics, because Quantum Mechanics uses hats to indicate "multiplication" by an operator. For example, "ŷ" in Quantum Mechanics means "multiply by y".</p></li>
</ul>
<p>With regard to <strong>solving word-problems</strong> in general:</p>
<ol>
<li>Try to draw a picture.</li>
<li>If the problem uses units, keep the units with the associated numbers. For example, if t = 2 seconds, never abbreviate this as t = 2.</li>
<li>Label what you know. Label where each variable is zero. Label which direction it is increasing in.</li>
<li>Label what you are trying to find.</li>
<li>Write out your variables and parameters. For example, "t = time since rocket was fired."</li>
<li>Write out the relevant formulas. Count up your knowns and unknowns. For each unknown you have, you need another independent equation if you are to determine the unknown's value. Make sure to include both "boundary conditions" and "field conditions".</li>
<li>Systematically solve the equations using algebra and/or calculus. Make sure that you either do not divide by zero, or that you only divide by zero as part of a limit. (Derivatives and L'Hopital's rule use limits to perform "valid" divisions by zero.) Make sure that you bifurcate the problem as necessary. For example, when you factor a polynomial to find its roots, you will have a separate subproblem for each factor.</li>
<li>Only after you have found a formula for your answer should you compute a numerical answer. Make sure to write out the units as you compute the numerical answer.</li>
<li>Check whether each solution is valid. For example, if you are finding positive solution(s), explicitly rule out any negative solution(s) you find. Also, check whether the units make sense. For example, if the answer is supposed to be in meters per second per second, but your answer is in meters per second, then you have probably made a mistake. If necessary, use unit conversion factors. (For example, multiply a parameter by (1000 ms / s) if the parameter is in seconds, but the answer needs to be in milliseconds.)</li>
<li>Clearly label your answer, and make a note about what the answer means. For example, "t = 4 s. The rocket reaches maximum altitude 4 seconds after it is launched." I was taught to circle this answer in a cloud.</li>
<li>Sanity check your answer. If the rocket was supposed to go to the moon, 4 seconds seems awfully short.</li>
<li>Check-By-Substitution (CBS). Plug your answer into the original formulas (making sure to keep the units). Reduce both sides of each equation until you can either confirm that the answer is (a) correct answer, or that it is incorrect. Mark the initial and intermediate equals-signs with question marks. If the CBS works, mark the final equals-sign with a check mark. If the CBS fails, use an inequality sign, and look for a mistake.</li>
</ol>
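<p>As a worked illustration of this process, here is a rough sketch in Python using SymPy for a rocket-style problem; the launch data are invented:</p>

```python
import sympy as sp

t = sp.symbols('t', positive=True)  # t = time since launch, in seconds
v0, g = 40, 10  # invented data: launch speed 40 m/s, gravity 10 m/s^2

h = v0 * t - sp.Rational(1, 2) * g * t**2  # height in metres

# Maximum altitude occurs where dh/dt = 0.
t_max = sp.solve(sp.diff(h, t), t)[0]
print(t_max)  # 4

# Check-by-substitution: plug the answer back into the original condition.
assert sp.diff(h, t).subs(t, t_max) == 0

# Sanity check: the second derivative is negative, so this really is a maximum.
assert sp.diff(h, t, 2).subs(t, t_max) < 0
```

<p>The labelled conclusion would then read: t = 4 s; the rocket reaches its maximum altitude of 80 m four seconds after launch.</p>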
|
5,586 | <p>I'm in my last year of high school, and I'm aiming for a perfect grade in maths. The problem is that this year is the hardest year of maths I have ever faced in my entire life, especially differentiation and limits, as it's the first time I am studying them. Here are the lessons that are required to study for the first semester:</p>
<ul>
<li>Limit of a function at a point</li>
<li>Limits Theorems</li>
<li>Limits of fractional functions</li>
<li>Limits of Trigonometric functions</li>
<li>Limits at Infinity</li>
<li>Continuity at a point</li>
<li>Continuity on an interval</li>
<li>Rate of Change</li>
<li>First derivative</li>
<li>Continuity and differentiation</li>
<li>Differentiation Rules</li>
<li>Derivatives of Higher Order</li>
<li>The chain rule</li>
<li>Implicit differentiation</li>
<li>Geometric applications of differentiation</li>
<li>Physical applications of differentiation</li>
<li>Related Rates</li>
<li>Increasing and Decreasing functions</li>
<li>Extreme Values.</li>
</ul>
<p>Limits are relatively easy. However, related rates and extreme values are disgustingly difficult. Is there any way to make those two lessons easy and routine? Something like a book filled with questions on those two or something.</p>
<p>Thanks.</p>
| guest | 8,441 | <p>The best way to do well in math is to solve problems. (Not to deeeeply understand things.) Deeply understanding things is fine. Even beneficial. but skill in problems is more important.</p>
<p>Per the actual question: yes, Schaum's Outlines. Perfect for what you want which is to drill the A.</p>
<p>Oh...and maybe it is just me, but I find myself realizing things about the concepts AS I DRILL.</p>
|
4,498,801 | <p>I am trying to deeply understand the similarities between these two theorems; the first being a generalization of the second.</p>
<blockquote>
<p><strong>Theorem 16.13.</strong> If <span class="math-container">$f$</span> is nonnegative, then
<span class="math-container">$$
\int_{\Omega} f(T \omega) \mu(d \omega)=\int_{\Omega^{\prime}} f\left(\omega^{\prime}\right) \mu T^{-1}\left(d \omega^{\prime}\right) .
$$</span>
A function <span class="math-container">$f$</span> (not necessarily nonnegative) is integrable with respect to <span class="math-container">$\mu T^{-1}$</span> if and only if <span class="math-container">$f T$</span> is integrable with respect to <span class="math-container">$\mu$</span>, in which case (16.17) and
<span class="math-container">$$
\int_{T^{-1} A^{\prime}} f(T \omega) \mu(d \omega)=\int_{A^{\prime}} f\left(\omega^{\prime}\right) \mu T^{-1}\left(d \omega^{\prime}\right)
$$</span>
hold. For nonnegative <span class="math-container">$f$</span>, (16.18) always holds.</p>
</blockquote>
<blockquote>
<p><strong>(2.47) Theorem.</strong> Suppose <span class="math-container">$\Omega$</span> is an open set in <span class="math-container">$\mathbf{R}^{n}$</span> and <span class="math-container">$G: \Omega \rightarrow \mathbf{R}^{n}$</span> is a <span class="math-container">$C^{1}$</span> diffeomorphism.
(a) If <span class="math-container">$f$</span> is a Lebesgue measurable function on <span class="math-container">$G(\Omega)$</span>, then <span class="math-container">$f \circ G$</span> is Lebesgue measurable on <span class="math-container">$\Omega$</span>. If <span class="math-container">$f \geq 0$</span> or <span class="math-container">$f \in L^{1}(G(\Omega), m)$</span>, then
<span class="math-container">$$
\int_{G(\Omega)} f(x) d x=\int_{\Omega} f \circ G(x)\left|\operatorname{det} D_{x} G\right| d x
$$</span>
(b) If <span class="math-container">$E \subset \Omega$</span> and <span class="math-container">$E \in \mathscr{L}^{n}$</span>, then <span class="math-container">$G(E) \in \mathscr{L}^{n}$</span> and <span class="math-container">$m(G(E))=$</span> <span class="math-container">$\int_{E}\left|\operatorname{det} D_{x} G\right| d x$</span>.</p>
</blockquote>
<p>Why is the second theorem not written as</p>
<p><span class="math-container">$$
\int_{\Omega} f(G(x)) d x=\int_{G(\Omega)} f(x) \left| \operatorname{det} D_{x} G\right| d x
$$</span></p>
<p>This would make a lot more sense to me as we could think of this as <span class="math-container">$G$</span> is a function that changes the underlying measure space, and we integrate w.r.t. the pushforward measure which turns out to be <span class="math-container">$\left| \operatorname{det} D_{x} G\right| d x$</span>. Otherwise, I cannot see how to make the second version fit within the statement of the first.</p>
| Ruy | 728,080 | <p>A lot has been said about this question already, but a new perspective has just occurred to me that might be useful.</p>
<p>First of all, let us consider a context in which we have two measure spaces <span class="math-container">$(X, \mu )$</span> and <span class="math-container">$(Y, \nu )$</span>, and a
measurable map <span class="math-container">$T:X\to Y$</span>.</p>
<p>From the point of view of the change of variables in integrals, the ideal situation is that in which the identity
<span class="math-container">$$
\int_Y f(y)\, d\nu (y) = \int_X f(T(x))\, d\mu (x),\tag{$*$}
$$</span>
holds for all nonnegative measurable functions <span class="math-container">$f:Y\to {\mathbb R}$</span>.</p>
<p>The crux of the matter is thus how to find measures <span class="math-container">$\mu $</span> and <span class="math-container">$\nu $</span> providing for (<span class="math-container">$*$</span>), including situations in which one of these
is given in advance, and we have to find the other one.</p>
<p>In that direction, perhaps the main observation to be made is that (<span class="math-container">$*$</span>) holds if and only if
<span class="math-container">$$
\nu =T_*(\mu ). \tag{$**$}
$$</span>
This can be easily proven by plugging in functions of the form <span class="math-container">$f=1_E$</span>, namely the characteristic function of a measurable subset <span class="math-container">$E\subseteq Y$</span>.</p>
<p>Thus, when we are facing the task of evaluating a concrete integral, we might try to identify our problem with either
the right or left-hand-side of (<span class="math-container">$*$</span>), in the hope that the other side will be easier to compute.</p>
<p>In case we identify our concrete integral with the right-hand-side of (<span class="math-container">$*$</span>), in which case we will have already identified <span class="math-container">$f$</span>,
<span class="math-container">$T$</span>, and <span class="math-container">$\mu $</span>, we will then be scrambling to find a
measure <span class="math-container">$\nu $</span> satisfying (<span class="math-container">$*$</span>), but this is now easy since the only choice is <span class="math-container">$\nu =T_*(\mu )$</span>.</p>
<p>On the other hand, if we identify our concrete integral with the left-hand-side of (<span class="math-container">$*$</span>), we will have identified <span class="math-container">$f$</span>
and <span class="math-container">$\nu $</span>, and then we will be required to provide <span class="math-container">$T$</span>, as well as to solve equation (<span class="math-container">$**$</span>) for <span class="math-container">$\mu $</span>.</p>
<p>The content of the
change of variables Theorem of Differential Calculus is precisely taylored for the above task, and it says that, when <span class="math-container">$X$</span>
and <span class="math-container">$Y$</span> are open subsets of <span class="math-container">${\mathbb R}^n$</span>, <span class="math-container">$T:X\to Y$</span> is a surjective differentiable map, and <span class="math-container">$\nu $</span> is Lebesgue's measure on <span class="math-container">$Y$</span>, one possible choice
for
<span class="math-container">$\mu $</span> is
<span class="math-container">$$
\mu (dx)=|\text{det}D_xT|\, dx,
$$</span>
where <span class="math-container">$dx$</span> denotes Lebesgue's measure on <span class="math-container">$X$</span>.</p>
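<p>This choice of <span class="math-container">$\mu $</span> can be sanity-checked numerically. Below is a rough sketch in Python; the map <span class="math-container">$T(x)=x^2$</span> on <span class="math-container">$(0,1)$</span> and the test function are invented for illustration:</p>

```python
import numpy as np
from scipy.integrate import quad

# T: (0,1) -> (0,1), T(x) = x^2, so |det D_x T| = |T'(x)| = 2x.
T = lambda x: x**2
jac = lambda x: 2 * x

f = lambda y: np.cos(y) + y**3  # arbitrary test function

lhs, _ = quad(f, 0.0, 1.0)                           # left side of (*)
rhs, _ = quad(lambda x: f(T(x)) * jac(x), 0.0, 1.0)  # right side of (*)

print(lhs, rhs)  # the two values agree up to quadrature error
```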
|
3,059,695 | <p>Let <span class="math-container">$A$</span> be a subset of a compact topological space such that every point of <span class="math-container">$A$</span> is an isolated point of <span class="math-container">$A$</span>. Is <span class="math-container">$A$</span> necessarily finite?</p>
| Henno Brandsma | 4,280 | <p>No, it can be very large. But if <span class="math-container">$A$</span> is also closed, it is compact and then it must be finite. </p>
<p>E.g. <span class="math-container">$[0,1]^{\mathbb{R}}$</span> is compact in the product topology, but the set <span class="math-container">$A$</span> of all functions <span class="math-container">$\{f_t: \mathbb{R} \to [0,1], t \in \mathbb{R}\}$</span> defined by <span class="math-container">$f_t(x) = 0$</span> if <span class="math-container">$x \neq t$</span> and <span class="math-container">$f_t(t)=1$</span>, consists only of isolated points, since, for example, <span class="math-container">$\pi_t^{-1}[(\frac12,\frac32)] \cap A = \{f_t\}$</span>. So <span class="math-container">$A$</span> can be very large, but non-closed.</p>
|