2,829,121 | <p>I get how to derive the ellipse equation, but I'm struggling to understand what it means intuitively. </p>
<p>You see, a circle equation can be understood very intuitively. The circle equation models how the radius of the circle can be represented using the Pythagorean theorem. But I don't understand what the ellipse equation means at such a level. Does it model how an ellipse can be drawn out using a stretched rope? What exactly does it model? Can someone please explain? </p>
<p>Can you please explain it as simply as possible, as I'm still a beginner? </p>
| Blue | 409 | <p>I mentioned in my <a href="https://math.stackexchange.com/a/2829283/409">previous answer</a> that the "foci and string" form of the ellipse equation is rarely seen "except as the point of departure on an algebraic journey to the 'standard' form". I want to elaborate a little on that "algebraic journey".</p>
<p>Typically, the journey involves a lot of unenlightening, mechanical symbol-pushing to eliminate the square roots. Specifically, defining
$$d_i := \sqrt{(x-x_i)^2+(y-y_i)^2}$$
the argument tends to go something like this:
$$\begin{alignat}{2}\quad && d_1 + d_2 &= 2 a \qquad\text{(definition: sum of distances to foci is constant)} \tag{$\star$} \\ \to\quad && (d_1 + d_2)^2 &= (2 a)^2 \\[4pt]
\to\quad && d_1^2 + 2 d_1 d_2 + d_2^2 &= 4 a^2 \\[4pt]
\to\quad && 2 d_1 d_2 &= 4 a^2 - d_1^2 - d_2^2 \\[4pt]
\to\quad && (2d_1d_2)^2 &= ( 4 a^2-d_1^2-d_2^2)^2 \\[4pt]
\to\quad && 0 &= d_1^4+d_2^4+16a^4-2d_1^2d_2^2-8a^2d_1^2-8a^2d_2^2 \tag{$\star\star$}
\end{alignat}$$
so that $(\star\star)$ contains only even powers of the $d_i$, hence: no radicals. Mission accomplished! Replacing the $d_i$ (in particular, with $(x_i,y_i) = (\pm c,0)$, and defining $b^2 := a^2-c^2$), equation $(\star\star)$ simplifies (see below) to the origin-centered standard form equation we all know and love. </p>
<p>I believe OP is disappointed that, somewhere along the tedious journey from $(\star)$ to $(\star\star)$, we lose sight of $(\star)$.</p>
<p>However, it's still possible to catch a glimpse of $(\star)$ in $(\star\star)$, because $(\star\star)$ <em>factors</em>:</p>
<blockquote>
<p>$$(d_1+d_2-2a)(d_1-d_2-2a)(-d_1+d_2-2a)(d_1+d_2+2a) = 0 \tag{$\star\star\star$}$$</p>
</blockquote>
<p>(The reader might see a resemblance to Heron's formula in the above.)</p>
<p>Since $(\star)$ is <em>right there</em> in the first factor, the set of points satisfying $(\star\star\star)$ must include those satisfying $(\star)$, the (well, <em>one</em>) definition of the ellipse.</p>
<p>Note that the last factor of $(\star\star\star)$ contributes no points, since presumably $a > 0$ and $d_i \geq 0$.</p>
<p>Interestingly, the middle factors of $(\star\star\star)$ correspond to the relations
$$d_1 - d_2 = 2a \quad\text{or}\quad d_2 - d_1 = 2a \qquad\qquad\text{i.e.,}\quad |d_1-d_2| = 2a$$
which say precisely that <strong>the <em>difference</em> of distances to the foci is constant</strong>: the (well, <em>one</em>) definition of the hyperbola! (Each factor corresponds to an arm of the ostensible hyperbola.) </p>
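<p>The factorization is easy to confirm numerically. Below is a Python sketch (not part of the original argument); note that the product of the four factors works out to the <em>negative</em> of the right-hand side of $(\star\star)$, an overall sign that does not change the solution set:</p>

```python
import random

def rhs(d1, d2, a):
    # the polynomial in (**): d1^4 + d2^4 + 16 a^4 - 2 d1^2 d2^2 - 8 a^2 d1^2 - 8 a^2 d2^2
    return d1**4 + d2**4 + 16*a**4 - 2*d1**2*d2**2 - 8*a**2*d1**2 - 8*a**2*d2**2

def factored(d1, d2, a):
    # the product of the four factors in (***)
    return (d1 + d2 - 2*a) * (d1 - d2 - 2*a) * (-d1 + d2 - 2*a) * (d1 + d2 + 2*a)

random.seed(0)
for _ in range(1000):
    d1, d2, a = (random.uniform(-5, 5) for _ in range(3))
    # product == -(right-hand side), so the zero sets coincide
    assert abs(factored(d1, d2, a) + rhs(d1, d2, a)) < 1e-7 * (1 + abs(d1) + abs(d2) + abs(a))**4
```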
<p>Consequently, $(\star\star)$ is <em>simultaneously</em> an ellipse equation and an hyperbola equation! Except, not exactly. The graph of the solution set is <em>only one or the other</em>, as determined by $a$'s relationship to the distance between the foci. To be specific, let's do the simplification hinted at earlier: take $(x_i,y_i) = (\pm c,0)$, so that $(\star\star)$ becomes
$$16 \left(\;a^2 (a^2 - c^2) - x^2(a^2-c^2) - a^2 y^2\;\right) = 0 \qquad\to\qquad \frac{x^2}{a^2} + \frac{y^2}{a^2-c^2} = 1$$
We see, then, that when $a > c$ ---so that the sum of the distances to the foci is bigger than the distance between the foci themselves--- the equation is that of an ellipse; in $(\star\star\star)$, the second and third factors cannot be zero. On the other hand, when $a < c$, the equation is that of an hyperbola; the first factor of $(\star\star\star)$ cannot be zero. (Exploring the degeneracies arising from $a=c$ is left as an exercise to the reader.)</p>
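<p>As a numeric sanity check of the $a > c$ case (a Python sketch with illustrative values, not from the original answer), points on $x^2/a^2 + y^2/(a^2-c^2) = 1$ do satisfy the "string" property $d_1 + d_2 = 2a$:</p>

```python
import math

a, c = 3.0, 2.0           # a > c, so the equation describes an ellipse
b = math.sqrt(a*a - c*c)  # b^2 := a^2 - c^2

for k in range(100):
    t = 2 * math.pi * k / 100
    x, y = a * math.cos(t), b * math.sin(t)   # point on x^2/a^2 + y^2/b^2 = 1
    d1 = math.hypot(x - c, y)                 # distance to focus (c, 0)
    d2 = math.hypot(x + c, y)                 # distance to focus (-c, 0)
    assert abs(d1 + d2 - 2*a) < 1e-9          # sum of focal distances is constant
```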
<hr>
<p>Anyway, my point is this: We can get to $(\star\star)$ from $(\star)$ by plodding through a sequence of algebraic steps that obscure the geometry; or, we can get to $(\star\star)$ by "rationalizing" $(\star)$ via multiplication by what one might call its "Heronic conjugate", the three factors of which are geometrically meaningful (although one is inherently extraneous). And we get the hyperbola equation for free ... because it's <em>the same equation!</em></p>
<p>Kinda neat, that.</p>
|
1,609,443 | <p>So my textbook states that it is $(-\infty,0)$ that is concave down, but why not $(-\infty,0]$? Could someone help?</p>
| John Steinbeck | 304,183 | <p>Well, @User12345, if textbooks do that, they are certainly wrong:
$f:x\mapsto -x^4$ is certainly concave on $\mathbb{R}$ but does not satisfy your definition.</p>
<p>@CoolKid, I don't know which definition of a concave function you have, but for every definition I would agree with, I consider you are right: $x\mapsto x^3$ is concave on $(-\infty,0]$.
At least, according to the initial definition of concave functions (without any regularity assumption), this is the case.</p>
<p>(I assumed concave = concave down; if I was wrong about this language subtlety and my comment is invalid because of it, I apologize.)</p>
|
1,701,761 | <p><a href="https://i.stack.imgur.com/Qletj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Qletj.png" alt="Trapezium "></a> </p>
<blockquote>
<p>Prove that ABED is a parallelogram </p>
</blockquote>
<p><strong>Given:</strong></p>
<ol>
<li><p>ABCD is a trapezium</p></li>
<li><p>F and G are the midpoints of AB and DC respectively</p></li>
<li><p>FHG is a straight line</p></li>
<li><p>AD is equal to and parallel to BE</p></li>
</ol>
<p>My attempts have included trying to show that $\angle BAD + \angle ADE = 180^{\circ}$ and trying to show that the opposite angles are equal. But these have not led me anywhere as at some point I am required to assume that AB and DE are parallel, which is what has to be proven. I'd like a hint; any help is much appreciated.</p>
| Intelligenti pauca | 255,730 | <p>HINT: draw $AE$ and show that triangles $ABE$ and $ADE$ are congruent.</p>
|
171,697 | <blockquote>
<p>Let $f \in C^1(\mathbb R, \mathbb R)$ and suppose
$$ \tag{H}
\vert f(x) \vert\le \frac{1}{2}\vert x \vert + 3, \quad \forall x \in \mathbb R.
$$</p>
<p>Then, every solution of
$$
x'(t)+x(t)+f(x(t))=0, \quad t \in \mathbb R
$$
is bounded on $[0,+\infty]$. </p>
</blockquote>
<p>First of all, I would like to say that the text has been copied correctly: I mean, it's really $[0,+\infty]$, so I suppose the author wants me to prove $\displaystyle \lim_{t \to +\infty} x(t) <\infty$. Indeed, boundedness on $[0,+\infty)$ is quite obvious, because we have global existence (it's enough to write the equation as $x'=-x-f(x)$ and to observe that the RHS is sublinear thanks to $(H)$).</p>
<p>So, we have to prove $\displaystyle \lim_{t \to +\infty} x(t) <\infty$. How can we do this? I've got some ideas but I can't conclude. First, I observe that the problem is autonomous: this implies that solutions are either constant or monotonic.</p>
<p>First idea: I've fixed $x_0 \in \mathbb R$ and I've written the equivalent integral equation:
$$
x(t) = x_0 - \int_0^t [x(s)+f(x(s))]ds
$$</p>
<p>Taking the absolute value and making some rough estimates, we get
$$
\vert x(t) \vert \le \vert x_0 \vert + \left\vert \int_0^t [x(s)+f(x(s))]ds \right\vert \le \vert x_0 \vert + \int_0^t \left( \frac{3}{2}\left\vert x(s) \right\vert +3 \right) ds
$$
but now I don't know how to conclude. Gronwall's lemma? But how can I use it? </p>
<p>Second idea: if $x_0 \in \mathbb R$ is s.t. $x_0 +f(x_0) \neq 0$, the solution of the Cauchy problem is not constant. I can divide both members of equation and I obtain (integrating on $[0,t]$)
$$
-t=\int_0^t \frac{dx}{x+f(x)}
$$
Now I let $t \to +\infty$ but... what can I conclude?</p>
| AlbertH | 11,066 | <p>By assumptions
$$
x'(t) = -x(t) -f(x(t)) \leq -x(t) + \frac 1 2 \lvert x(t) \rvert + 3
$$
Let's suppose there is a point $t_0$ such that
$$
x(t_0) > \max\{x(0), 6\}
$$</p>
<p>Since $x(t_0)$ is positive the following relation is satisfied
$$
x'(t_0) \leq -x(t_0) + \frac 1 2 \lvert x(t_0) \rvert + 3 = -\frac 1 2 x(t_0) + 3 < 0
$$</p>
<p>So the $\max$ of $x(t)$ on the interval $[0, t_0]$ must occur at $t_1\in (0, t_0)$, but that leads to contradiction because:
$$
0 = x'(t_1) \leq - \frac 1 2 x(t_1) + 3 < 0
$$
where the equal sign is there because $t_1$ is an extremum point of a differentiable function on an open interval.</p>
<p>A similar reasoning shows that $x(t)$ is bounded from below.</p>
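<p>A numeric illustration of the argument (Python sketch; the particular $f(x) = \tfrac12 x\sin x$ is a hypothetical $C^1$ choice satisfying $(H)$, not from the problem): starting from a large initial value, the solution never exceeds $\max\{x(0), 6\}$ and is eventually trapped below $6$.</p>

```python
import math

def f(x):
    # hypothetical C^1 choice satisfying (H): |x sin(x)/2| <= |x|/2 <= |x|/2 + 3
    return 0.5 * x * math.sin(x)

x, dt = 50.0, 1e-3           # large initial value x(0) = 50
peak = abs(x)
for _ in range(20000):       # integrate x' = -x - f(x) on [0, 20] (explicit Euler)
    x += dt * (-x - f(x))
    peak = max(peak, abs(x))

assert peak <= 50.0 + 1e-9   # never exceeds max(x(0), 6)
assert abs(x) <= 6.0         # eventually trapped below 6
```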
|
716,856 | <p>I have the following equation:</p>
<p>\begin{align*}
\frac{d^2\theta}{dt^2}=\alpha(\theta-1)+\beta(\theta-1)^3-\gamma\frac{d\theta}{dt} \tag{1}
\end{align*}</p>
<p>Where $\alpha, \beta, \gamma \in \mathbb{R}$.</p>
<p>This is my solution in attempting to convert (1) into a system of first order ODE's:</p>
<p>\begin{align*}
\frac{d\theta}{dt}&=\phi \tag{2} \\ \\
\frac{d\phi}{dt} &= \alpha(\theta-1)+\beta(\theta-1)^3-\gamma\phi \tag{3}
\end{align*}</p>
<p>Is the above system correct? Also in equation (3) is it okay to have the $\theta$ term?</p>
| Amzoti | 38,839 | <p>We are given:</p>
<p>\begin{align*}
\frac{d^2\theta}{dt^2}=\alpha(\theta-1)+\beta(\theta-1)^3-\gamma\frac{d\theta}{dt} \tag{1}
\end{align*}</p>
<p>Where $\alpha, \beta, \gamma \in \mathbb{R}$.</p>
<p>We can proceed as follows:</p>
<ul>
<li>$x_1 = \theta \implies x_1' = \theta' = x_2$</li>
<li>$x_2 = \theta' \implies x_2' = \theta'' = \alpha(\theta-1)+\beta(\theta-1)^3-\gamma\frac{d\theta}{dt} = \alpha(x_1-1) + \beta(x_1 - 1)^3 -\gamma ~x_2$</li>
</ul>
<p>So, our new system is:</p>
<p>$$\begin{align}
x_1' & = x_2 \\
x_2' & = \alpha(x_1-1) + \beta(x_1 - 1)^3 -\gamma ~x_2
\end{align}$$</p>
<p>Note, given initial conditions, similarly follow as:</p>
<ul>
<li>$\theta(t_0) = a \implies x_1(t_0) = a$</li>
<li>$\theta'(t_0) = b \implies x_2(t_0) = b$</li>
</ul>
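<p>A quick numeric sketch of the reduction (Python, explicit Euler; the parameter values and initial data below are illustrative placeholders, not from the question). With $\alpha, \beta < 0$ and $\gamma > 0$ the system is a damped oscillator around $\theta = 1$, so the trajectory settles toward the equilibrium:</p>

```python
# illustrative parameters: alpha, beta < 0 give a restoring force, gamma > 0 damping
alpha, beta, gamma = -1.0, -0.5, 0.2
x1, x2 = 1.5, 0.0                    # theta(0) = 1.5, theta'(0) = 0

def rhs(x1, x2):
    # the first-order system: x1' = x2, x2' = alpha(x1-1) + beta(x1-1)^3 - gamma x2
    return x2, alpha*(x1 - 1) + beta*(x1 - 1)**3 - gamma*x2

dt = 1e-3
for _ in range(10000):               # integrate on [0, 10] with explicit Euler
    d1, d2 = rhs(x1, x2)
    x1, x2 = x1 + dt*d1, x2 + dt*d2

assert abs(x1 - 1.0) < 0.5           # damped toward the equilibrium theta = 1
assert abs(x2) < 0.5
```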
|
67,039 | <p>I just noticed that <code>Exit</code> and <code>Quit</code> can work without brackets, i.e. a single</p>
<pre><code>Exit
</code></pre>
<p>or</p>
<pre><code>Quit
</code></pre>
<p>will quit the kernel. Quite surprising!</p>
<p>This isn't mentioned in their documentation. Is there a list of functions with this feature? </p>
<p>Well, I admit I asked this question mainly to show the discovery above :D</p>
| shelure21 | 62,528 | <pre><code>Here
</code></pre>
<p>could also work without brackets.</p>
|
1,332,433 | <blockquote>
<p>Let <span class="math-container">$f:\mathbb R\to \mathbb R$</span> be such that for every sequence <span class="math-container">$(a_n)$</span> of real numbers,</p>
<p><span class="math-container">$\sum a_n$</span> converges <span class="math-container">$\implies \sum f(a_n)$</span> converges</p>
<p>Prove there is some open neighborhood <span class="math-container">$V$</span>, such that <span class="math-container">$0\in V$</span> and <span class="math-container">$f$</span> restricted to <span class="math-container">$V$</span> is a linear function.</p>
</blockquote>
<p>This was given to a friend at his oral exam this week. He told me the problem is hard. I tried it myself, but I haven't made any significant progress.</p>
| Justthisguy | 81,821 | <p>Here is a proof in the case that $f$ is differentiable at $0$.</p>
<p>Let $f(x) = rx + \epsilon(x)$, where $\lim_{x \to 0} {\epsilon(x)/x}=0$. I claim $\epsilon = 0$ in a neighborhood of 0.
Suppose not. Then, up to extracting subsequences and making a few sign choices that are without loss of generality, we may assume that there exists $a_n$ decreasing monotonically to $0$ with $\epsilon(a_n)>0$, and $\epsilon(a_n)/a_n$ decreasing monotonically to $0$. Up to extracting another subsequence, we are left with two cases: either $\epsilon(-a_n) \geq 0$ for all $n$, or $\epsilon(-a_n) \leq 0$ for all $n$. I claim that neither is actually possible.</p>
<p>First, the easy case: assume $\epsilon(-a_n) \geq 0$ for all $n$. Let $k_n$ be the least integer greater than $1 \over \epsilon(a_n)$. Define $b_i$ to be alternatingly $a_1$ and $-a_1$ for the first $2k_1$ entries. Then define $b_i$ to be alternatingly $a_2$ and $-a_2$ for the subsequent $2k_2$ entries, and so forth. It is easy to check that $\sum b_i$ converges (not absolutely), but $\sum f(b_i) \geq \sum k_n f(a_n) = \infty$.</p>
<p>Now, the hard case: assume $\epsilon(-a_n) \leq 0$ for all $n$. The index manipulation here is pretty awful, but here goes. Let $l_1 = 1$, let $m_{1,1}=k_1$, and let $b_i = a_{l_1}$ for $i \in \{1,...,m_{1,1}\}$. Now, given $l_\alpha$, $m_{\alpha,1}$, and $b_i$ for $i \in \{1,...,m_{\alpha,1}\}$, choose $j_\alpha > l_\alpha$ such that $a_{j_\alpha} < 1/2^\alpha$ and $${\epsilon(-a_{j_\alpha}) \over - a_{j_\alpha}} < {1 \over 4^\alpha} \epsilon(a_{l_\alpha}).$$ Now, choose $p_\alpha > 0$ such that $0 < p_\alpha a_{j_\alpha} - k_{l_\alpha} a_{l_\alpha} < a_{j_\alpha}$. Let $m_{\alpha,2} = m_{\alpha,1} + p_\alpha$, and let $b_i = -a_{j_\alpha}$ for $i \in \{m_{\alpha,1} + 1,...,m_{\alpha,2}\}$.
Now pick $l_{\alpha + 1} > j_\alpha$, define $m_{\alpha + 1,1} = m_{\alpha,2} + k_{l_{\alpha + 1}}$, and define $b_i = a_{l_{\alpha + 1}}$ when $m_{\alpha,2} < i \leq m_{\alpha + 1,1}$. </p>
<p>To check that indeed $\sum b_i$ converges and $\sum f(b_i) = \infty$, note that $$|\sum\limits_{i=m_{\alpha - 1, 2} + 1}^{m_{\alpha,2}} b_i| = |p_\alpha a_{j_\alpha} - k_{l_\alpha} a_{l_\alpha}| < a_{j_\alpha} < {1 \over 2^\alpha},$$ whence $\sum b_i$ converges. <strong>Edit:</strong> well, we need to bound the partial sums within each $\sum\limits_{i=m_{\alpha - 1, 2} + 1}^{m_{\alpha,2}} b_i$, but this can be done by rearranging terms since the terms tend to zero as $\alpha \to \infty$. On the other hand, $$\sum\limits_{i=m_{\alpha - 1, 2} + 1}^{m_{\alpha,2}} f(b_i) \geq 1 - r a_{j_\alpha} + p_\alpha \epsilon(-a_{j_\alpha}).$$
Since $a_n \to 0$, we only need to check that $p_\alpha \epsilon(-a_{j_\alpha})$ is not too negative. By our choice of $j_\alpha$, $\epsilon(-a_{j_\alpha}) \geq {-1 \over 4^\alpha} a_{j_\alpha} \epsilon(a_{l_\alpha})$. By our choice of $p_\alpha$, $p_\alpha a_{j_\alpha} < 2 k_{l_\alpha} a_{l_\alpha}$, whence $p_\alpha \epsilon(-a_{j_\alpha}) \geq -2 a_{l_\alpha} k_{l_\alpha} \epsilon(a_{l_\alpha}) \geq -4 a_{l_\alpha}$. For $\alpha$ large, $a_{l_\alpha}$ is very small, so $$\sum\limits_{i=m_{\alpha - 1, 2} + 1}^{m_{\alpha,2}} f(b_i) \geq 1/2.$$ This proves the claim for $f$ differentiable.</p>
<p>By zhw.'s answer, the difference quotients near $0$ are bounded. By Heine-Borel, for each $a_n \to 0$, there is a subsequence $a_{n_k}$ such that $f(a_{n_k})/a_{n_k}$ converges. I'm sure that using similar arguments to the ones so far, you could show that all the subsequential limits are the same. This would show that, in fact, $f(x)/x$ converges as $x \to 0$, which would complete the proof.</p>
|
50,441 | <p>here is the silly question of mine,</p>
<p>If <code>Mathematica</code> takes 3 seconds to evaluate the function <code>f[3]</code>,
and I then plug this value <code>f[3]</code> into a new function <code>g[f[3],f[3]]</code> (ignoring the evaluation time of <code>g</code> itself), how long does <code>g[f[3],f[3]]</code> take: 3 seconds or 6 seconds?</p>
<p>If it is 6 seconds, how can I make <code>Mathematica</code> temporarily remember the value of <code>f[3]</code>?</p>
<p>thanks</p>
| Apple | 10,193 | <pre><code>Clear["Global`*"]
ClearSystemCache[];
$RecursionLimit = Infinity;
fibonacci[1] = 1; fibonacci[2] = 1;
fibonacci[i_] := fibonacci[i] = fibonacci[i - 1] + fibonacci[i - 2];
fibonacci[20000]; // AbsoluteTiming
fibonacci[20000]; // AbsoluteTiming
(*{0.135008, Null}*)
(*{0., Null}*)
</code></pre>
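<p>For comparison, the definition <code>fibonacci[i_] := fibonacci[i] = ...</code> above is the standard memoization idiom: each value is computed once and then looked up. A rough Python analogue of the same idea (a sketch, not a translation of the timing test) uses <code>functools.lru_cache</code>:</p>

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # cached: each fib(n) is computed once, then looked up on later calls
    return 1 if n <= 2 else fib(n - 1) + fib(n - 2)

assert fib(10) == 55
assert fib(50) == 12586269025   # effectively instant thanks to the cache
```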
|
33,778 | <p>Let $X$ and $Y$ be independent exponential variables with rates $\alpha$ and $\beta$, respectively. Find the CDF of $X/Y$.</p>
<p>I tried out the problem, and wanted to check whether my answer of $\frac{\alpha}{ \beta/t + \alpha}$ is correct, where $t$ is the time, which we need in our final answer since we need a CDF.</p>
<p>Can someone verify if this is correct?</p>
| Michael Hardy | 11,667 | <p>A different method may be of interest:
<span class="math-container">\begin{align}
\Pr(X/Y \le t) = {} & \Pr(X\le tY) \\[8pt]
= {} & \iint\limits_{(x,y)\,:\,x\,\le\, ty} e^{-\alpha x} e^{-\beta y} \big(\alpha\beta\,d(x,y)\big) \\[8pt]
= {} & \int_0^\infty \left( \int_0^{ty} e^{-(\alpha x+\beta y)} (\alpha\,dx) \right) (\beta\,dy) \tag 1
\end{align}</span>
In the inside integral, <span class="math-container">$x$</span> goes from <span class="math-container">$0$</span> to <span class="math-container">$ty$</span> while <span class="math-container">$y$</span> remains fixed; thus we treat <span class="math-container">$y$</span> as constant in this substitution:
<span class="math-container">\begin{align}
& u = x/y \\[6pt]
& du = dx/y \\[6pt]
& \text{As $x$ goes from $0$ to $ty,$} \\
& \text{$u$ goes from $0$ to $t$.}
\end{align}</span>
The integral <span class="math-container">$(1)$</span> becomes
<span class="math-container">\begin{align}
& \int_0^\infty \left( \int_0^t e^{-(\alpha u+\beta)y} (\alpha y\,du) \right) (\beta\,dy)
\end{align}</span>
The point is that the bounds of integration in the inside integral no longer depend on <span class="math-container">$y,$</span> so we can alter the order of operations, thus:
<span class="math-container">\begin{align}
& \int_0^t \left( \int_0^\infty e^{-(\alpha u+\beta)y}\alpha\beta y \, dy \right) \,du \\[8pt]
= {} & \int_0^t \Big( \text{a function of } u \Big)\, du \\[8pt]
\text{So } \Pr(X/Y \le t) = {} & \int_0^t \Big( \text{a function of } u \Big)\, du
\end{align}</span>
Therefore that function of <span class="math-container">$u$</span> (which is readily found) must be the probability density function of the random variable <span class="math-container">$X/Y.$</span></p>
<p><b>Appendix: Evaluation of the integral:</b>
<span class="math-container">\begin{align}
f_{X/Y}(u) = {} & \alpha\beta \int_0^\infty e^{-(\alpha u+\beta)y} y \, dy \\[8pt]
= {} & \frac{\alpha\beta}{(\alpha u+\beta)^2} \int_0^\infty e^{-(\alpha u+\beta)y} (\alpha u+\beta) y \big( (\alpha u + \beta) \, dy \big) \\[8pt]
= {} & \frac{\alpha\beta}{(\alpha u+\beta)^2} \int_0^\infty e^{-v} v \,dv \\[8pt]
= {} & \frac{\alpha\beta}{(\alpha u+\beta)^2}.
\end{align}</span></p>
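<p>Integrating this density gives the CDF $\Pr(X/Y \le t) = \frac{\alpha t}{\alpha t + \beta}$, which agrees with the form asked about in the question. A quick Monte Carlo check (Python sketch, not part of the original answer; the rates and $t$ are illustrative):</p>

```python
import random

random.seed(1)
alpha, beta, t = 2.0, 3.0, 1.5
n = 200_000
# empirical probability that X/Y <= t for X ~ Exp(alpha), Y ~ Exp(beta)
hits = sum(random.expovariate(alpha) / random.expovariate(beta) <= t
           for _ in range(n))
empirical = hits / n
exact = alpha * t / (alpha * t + beta)   # CDF from integrating the density above
assert abs(empirical - exact) < 0.01
```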
|
495,487 | <p>Show that the difference of the number of cycles of even and odd permutations is $(-1)^n (n-2)!$, <strong>using a bijective mapping</strong> (combinatorial proof).</p>
<p>Suppose to convert a permutation from odd to even we always flip 1 and 2. Then the difference can be easily calculated.</p>
<p>Thus, if I can find the number of cycles of all even permutations of $[n]$ that have $\{1,2\}$ in the same cycle, I can compute the difference. Can somebody show me a way?</p>
| Marko Riedel | 44,883 | <p>Here is an answer using generating functions that may interest you. Recall that the sign $\sigma(\pi)$ of a permutation $\pi$ is given by
$$\sigma(\pi) = \prod_{c\in\pi} (-1)^{|c|-1}$$
where the product ranges over the cycles $c$ from the disjoint cycle composition of $\pi$.</p>
<p>It follows that the combinatorial species $\mathcal{Q}$ that reflects the signs and the cycle count of the set of permutations is given by
$$\mathcal{Q} =
\mathfrak{P}(\mathcal{V}\mathfrak{C}_1(\mathcal{Z})
+\mathcal{U}\mathcal{V}\mathfrak{C}_2(\mathcal{Z}))
+\mathcal{U}^2\mathcal{V}\mathfrak{C}_3(\mathcal{Z})
+\mathcal{U}^3\mathcal{V}\mathfrak{C}_4(\mathcal{Z})
+\mathcal{U}^4\mathcal{V}\mathfrak{C}_5(\mathcal{Z})
+\cdots)$$
where we have used $\mathcal{U}$ to mark signs and $\mathcal{V}$ for the cycle count.</p>
<p>Translating to generating functions we have
$$Q(z, u, v) =
\exp\left(v\frac{z}{1}
+ vu\frac{z^2}{2}
+ vu^2\frac{z^3}{3}
+ vu^3\frac{z^4}{4}
+ vu^4\frac{z^5}{5}
+\cdots\right).$$
This simplifies to
$$Q(z,u,v) = \exp\left(\frac{v}{u}
\left(
\frac{zu}{1}
+ \frac{z^2 u^2}{2}
+ \frac{z^3 u^3}{3}
+ \frac{z^4 u^4}{4}
+ \frac{z^5 u^5}{5}
+ \cdots
\right)\right) \\
= \exp\left(\frac{v}{u} \log \frac{1}{1-uz}\right)
= \left(\frac{1}{1-uz}\right)^{\frac{v}{u}}.$$</p>
<p>Now the two generating functions $Q_1(z, v)$ and $Q_2(z, v)$ of even and odd permutations by cycle count are given by
$$Q_1(z,v) = \frac{1}{2} Q(z,+1,v) + \frac{1}{2} Q(z,-1,v) =
\frac{1}{2}\left(\frac{1}{1-z}\right)^v
+\frac{1}{2}\left(\frac{1}{1+z}\right)^{-v}$$ and
$$Q_2(z,v) = \frac{1}{2} Q(z,+1,v) - \frac{1}{2} Q(z,-1,v) =
\frac{1}{2}\left(\frac{1}{1-z}\right)^v
-\frac{1}{2}\left(\frac{1}{1+z}\right)^{-v}.$$</p>
<p>We require the quantity
$$G(z, v) = \left.\frac{d}{dv} (Q_1(z,v)-Q_2(z,v))\right|_{v=1} \\=
\left.\frac{d}{dv} \left(\frac{1}{1+z}\right)^{-v}\right|_{v=1} =
- \left.\log \frac{1}{1+z} \left(\frac{1}{1+z}\right)^{-v}\right|_{v=1}
= -(1+z)\log \frac{1}{1+z}.$$</p>
<p>Finally extracting coeffcients from this generating function we obtain
$$- n! [z^n] (1+z)\log \frac{1}{1+z}
= - n! \left(\frac{(-1)^n}{n} + \frac{(-1)^{n-1}}{n-1}\right)
\\= - n! (-1)^{n-1} \left(-\frac{1}{n} + \frac{1}{n-1}\right)
= n! (-1)^n \frac{n-(n-1)}{n(n-1)}
\\= n! (-1)^n \frac{1}{n(n-1)} = (-1)^n (n-2)!$$
This concludes the proof. I do think this is rather pretty.</p>
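<p>The identity is also easy to confirm by brute force for small $n$ (Python sketch, not part of the generating-function argument; recall a permutation with $c$ cycles has sign $(-1)^{n-c}$):</p>

```python
from itertools import permutations
from math import factorial

def cycle_count(perm):
    # number of cycles in the disjoint-cycle decomposition of perm (0-indexed)
    seen, cycles = set(), 0
    for i in range(len(perm)):
        if i not in seen:
            cycles += 1
            j = i
            while j not in seen:
                seen.add(j)
                j = perm[j]
    return cycles

for n in range(3, 7):
    diff = 0
    for p in permutations(range(n)):
        c = cycle_count(p)
        diff += (-1) ** (n - c) * c      # sign-weighted cycle count: even minus odd
    assert diff == (-1) ** n * factorial(n - 2)
```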
|
1,938,353 | <p>A detailed solution will be helpful. Given that $a_1=3$ and $a_{n+1}=3^{a_n}$, find the remainder when $a_{2004}$ is divided by 100.</p>
| Jack D'Aurizio | 44,121 | <p>By the CRT, in order to find $a_{2004}\pmod{100}$ it is enough to find $a_{2004}\pmod{8}$ and $a_{2004}\pmod{25}$. Now $a_n=3^{a_{n-1}}\pmod{8}$ just depends on $a_{n-1}\pmod{4}$, and $a_{n-1}=3^{a_{n-2}}\pmod{4}$ just depends on $a_{n-2}$ being even or odd. Since $a_{2002}$ is clearly odd, $a_{2003}\equiv 3\pmod{4}$ and $\color{green}{a_{2004}\equiv 3\pmod{8}}$. $a_n=3^{a_{n-1}}\pmod{25}$ just depends on $a_{n-1}\pmod{20}$. We already know that $a_{2003}\equiv 3\pmod{4}$, hence it is enough to find $a_{2003}=3^{a_{2002}}\pmod{5}$, or $a_{2002}\pmod{4}$. Since $a_{2002}\equiv 3\pmod{4}$, $a_{2003}\equiv 2\pmod{5}$, hence $a_{2003}\equiv 7\pmod{20}$ and $\color{green}{a_{2004}\equiv 12\pmod{25}}$. Putting together the green identities,
$$ \color{green}{\large a_{2004}\equiv 87\pmod{100}.}$$
You may notice that the above argument has little to do with the arithmetic properties of $2004$, and in fact you may apply the same argument to show that $a_n\equiv 87\pmod{100}$ for any $n\geq 3$.<br>
This is due to the fact that by iterating the totient function, we always reach $1$ pretty soon.<br>
For instance:
$$\varphi(\varphi(\varphi(\varphi(\varphi(\varphi(100))))))=1.$$</p>
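<p>The stabilization at $87$ is easy to confirm directly in Python for the first reachable terms of the tower: $a_3 = 3^{27}$ is still small enough to hold exactly, and for $a_4 = 3^{a_3}$ (far too large to write down) the built-in modular exponentiation reduces mod $100$ efficiently.</p>

```python
a2 = 3 ** 3                       # a_2 = 27
a3 = 3 ** a2                      # a_3 = 3^27 = 7625597484987, held exactly
assert a3 % 100 == 87
# a_4 = 3^(a_3) is astronomically large, but pow(base, exp, mod) avoids computing it
assert pow(3, a3, 100) == 87
```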
|
194,985 | <p>How does one show that $$\lim_{n \rightarrow \infty}\int_{0}^{1}\frac{x^{n}}{1 + x^{n}}\, dx = 0?$$ My idea is to evaluate the inner integral, but I can't seem to be able to do that.</p>
| Flanders | 39,649 | <p>For all $x\in [0,1]$ you have $$ \frac{x^n}{1+x^n} \leq x^n $$ and hence</p>
<p>$$ 0\leq\int_{0}^{1}\frac{x^{n}}{1 + x^{n}}\, dx \leq \int_{0}^{1}x^{n}\, dx = \frac{1}{n+1}\rightarrow 0 \quad (n\rightarrow \infty).$$</p>
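<p>A quick numeric check of this squeeze (Python sketch; the midpoint rule and step count are illustrative choices):</p>

```python
def integral(n, steps=100_000):
    # midpoint Riemann sum for the integral of x^n / (1 + x^n) over [0, 1]
    h = 1.0 / steps
    total = 0.0
    for k in range(steps):
        x = (k + 0.5) * h
        total += x**n / (1 + x**n)
    return total * h

for n in (1, 10, 100, 1000):
    est = integral(n)
    # squeezed between 0 and the integral of x^n, which is 1/(n+1)
    assert 0.0 <= est <= 1.0 / (n + 1) + 1e-6
```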
|
96,518 | <p>Some time ago I had a physics test where I had the following integral: $\int y'' \ \mathrm{d}y$. The idea is that I had a differential equation, and I had acceleration (that is, $y''$) given as a function of position ($y$). The integral was actually equal to something else, but that's not the point. I needed to somehow solve that. I can't integrate acceleration with respect to position, so here's what I did:</p>
<p>$$
\int y'' \ \mathrm{d}y =
\int \frac{\mathrm{d}y'}{\mathrm{d}t} \ \mathrm{d}y =
\int \mathrm{d}y' \frac{\mathrm{d}y}{\mathrm{d}t} =
\int y' \ \mathrm{d}y' = \frac1{2}y'^2 + C
$$</p>
<p>My professor said this was correct and it makes sense, but doing weird stuff with differentials and such never completely satisfies me. Is there a substitution that justifies this procedure?</p>
| Pedro | 23,350 | <p>I'm doubtful about what $y''$ and $dy$ stand for in your problem. If you have $y = y(x)$ then clearly $$\int y'' dx = y'+C$$ But you're integrating with respect to $dy = d\{y(x)\} = y'(x) dx$, assuming $y(x)$ has a continuous derivative. So you finally have:</p>
<p>$$\int y''(x) y'(x) dx = \int y'(x) d(y'(x)) = \frac{y'^2}{2}+C $$</p>
<p>I'd recommend you read about the Riemann-Stieltjes integral, which would formally clarify these issues. </p>
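<p>The substitution is just the chain rule: $\int y''\,dy = \int y'' y'\,dt = \tfrac12 y'^2 + C$. A numeric sanity check with a concrete choice of $y$ (Python sketch; $y(t) = \sin t$ is an illustrative example, not from the question):</p>

```python
import math

# take y(t) = sin t, so y' = cos t and y'' = -sin t
T, steps = 2.0, 100_000
h = T / steps
acc = 0.0
for k in range(steps):
    t = (k + 0.5) * h
    acc += (-math.sin(t)) * math.cos(t) * h          # midpoint sum of y'' y' dt on [0, T]

expected = (math.cos(T)**2 - math.cos(0.0)**2) / 2   # [y'^2 / 2] between the endpoints
assert abs(acc - expected) < 1e-6
```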
|
1,006,080 | <p>How to solve:
$$\left|\frac{x+4}{ax+2}\right| > \frac{1}{x}$$</p>
<p>What I have done:<br><br></p>
<p>I) $x < 0$:</p>
<p>Obviously, on this interval the inequality holds for all $x\in(-\infty, 0)$ with $x\neq \frac{-2}{a}$, since the left-hand side is nonnegative while $\frac1x<0$.<br><br></p>
<p>II) $x > 0$:</p>
<p>$$\left|\frac{x+4}{ax+2}\right| > \frac1x$$
$$\frac{|x+4|}{|ax+2|} > \frac1x$$
because $x > 0$ we can transform it to:</p>
<p>$x^2 + 4x > |ax + 2|$<br>
$(x^2 + 4x)^2 > (ax + 2)^2$<br>
$(x^2 + 4x)^2 - (ax + 2)^2 > 0$<br>
$(x^2 + 4x - ax - 2) (x^2 +4x +ax + 2) > 0$<br><br>
$(x^2 + (4-a)x - 2)(x^2 + (4+a)x + 2) > 0$<br><br></p>
<p>I am kind of stuck here. I also need to discuss the solution for various values of the parameter a. What is the easiest way out of this step? Something with the discriminants, perhaps?</p>
| b.sahu | 60,209 | <p>The technique which you have used to accelerate the convergence of the series for $\pi$ comes from the Euler-Maclaurin formula (see Wikipedia). This formula was developed around the year 1735. Archimedes' method of finding approximations of $\pi$ was developed about 2000 years before the Euler-Maclaurin formula. </p>
|
176,192 | <p>I am trying to solve a system of differential equations as follows:</p>
<pre><code>L1 = 5.0;
L2 = 0.43;
d = 0.0056;
g = 9.8;
h1 = 5.0;
h2 = 0.43;
A1 = 0.00001;
A2 = 0.001;
p0 = 100000;
v0 = 0.001;
NDSolve[{P[t] -
1*1000*10*
L2 == (1000/(2*0.0056)*y[t]^2*
L2*(0.316*(1000*y[t]*0.0056/0.00089))^(-0.25)) +
500*01.7*y[t]^2, (1000*10*L1 -
P[t]) == (1000/(2*0.0056)*z[t]^2*
L1*(0.316*(1000*z[t]*0.0056/0.00089))^(-0.25)) +
500*01.7*z[t]^2, (y[t] - z[t])*0.0001 == v'[t],
P[t] == p0*v0/v[t],P[0]==100000,v[0]==0.001}, {P, v, y, z}, {t, 0, 1}]
</code></pre>
<p>Upon running the command I get the following error message: "NDSolve::icfail: Unable to find initial conditions that satisfy the residual function within specified tolerances. Try giving initial conditions for both values and derivatives of the functions."
How can I make this work?</p>
| halirutan | 187 | <p>I'm concentrating on the calculation of <code>samplemanycyclesper5years</code> and <code>samplecycledistributions</code>. For the first one, you select 1826 samples randomly and calculate the total. This is done <code>10^7</code> times. We can pack the random total into a compiled function that chooses 1826 random integer positions, accesses <code>cyclesperday</code> and calculates the total</p>
<pre><code>rand = Compile[{{cycl, _Real, 1}, {i, _Integer, 0}},
Module[{pos = RandomInteger[{1, Length[cycl]}, i]},
Total[cycl[[pos]]]
],
RuntimeAttributes -> {Listable},
Parallelization -> True,
RuntimeOptions -> "Speed"
];
</code></pre>
<p>The parameter <code>i</code> is how many random values of <code>cycl</code> should be totaled. In your case <em>always</em> 1826. Let's test this</p>
<pre><code>rand[cyclesperday, Array[1826 &, 10^5]]; // AbsoluteTiming
(* {0.921493, Null} *)
</code></pre>
<p>and compare</p>
<pre><code>ParallelTable[Total[RandomChoice[cyclesperday, 1826]], {10^5}]; // AbsoluteTiming
(* {5.89441, Null} *)
</code></pre>
<p>So this needs only 15% of the time your <code>ParallelTable</code> needs. The next step is to do the same for the estimation of the <code>LogNormalDistribution</code>. The estimation of the parameters is <a href="https://scholarsarchive.byu.edu/cgi/viewcontent.cgi?article=2927&context=etd" rel="noreferrer">actually very simple</a> with a maximum likelihood estimator and you can write this down yourself</p>
<pre><code>maxLikelihood = Compile[{{values, _Real, 1}},
Module[{μ = Mean[Log[values]]},
{μ, Sqrt@Mean[(Log[values] - μ)^2]}
],
RuntimeAttributes -> {Listable},
Parallelization -> True
]
</code></pre>
<p>First a quick check:</p>
<pre><code>EstimatedDistribution[cycledatatofit[[10]],
LogNormalDistribution[μ, σ]]
(* LogNormalDistribution[7.42205, 0.042639] *)
maxLikelihood[cycledatatofit[[10]]]
(* {7.42205, 0.042639} *)
</code></pre>
<p>Excellent. Now let's time it</p>
<pre><code>samplecycledistributions =
ParallelTable[EstimatedDistribution[cycledatatofit[[i]],
LogNormalDistribution[μ, σ]], {i, 1, nc}]; // AbsoluteTiming
(* {15.8202, Null} *)
</code></pre>
<p>and</p>
<pre><code>maxLikelihood[cycledatatofit]; // AbsoluteTiming
(* {0.167895, Null} *)
</code></pre>
<p>So this needs only 1% of the original time. Your complete calculation looks like this</p>
<pre><code>nc = 10^5;
samplemanycyclesper5years =
rand[cyclesperday, Array[1826 &, 100*nc]];
cycledatatofit = Partition[samplemanycyclesper5years, 100];
samplecycledistributions = maxLikelihood[cycledatatofit];
cyclesamples =
Round[ParallelTable[
RandomVariate[LogNormalDistribution @@ parms], {parms,
samplecycledistributions}]];
</code></pre>
<p>and I was able to bring it from 654 seconds to 53 seconds. I checked the final histograms and they match perfectly, but please verify each step yourself.</p>
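<p>The closed-form estimator inside <code>maxLikelihood</code> ($\hat\mu$ = mean of $\log x$, $\hat\sigma$ = root-mean-square deviation of $\log x$) is easy to mirror in another language as a cross-check. Here is a hedged Python sketch of the same estimator (the true parameters and sample size are illustrative, not from the question):</p>

```python
import math
import random

def lognormal_mle(values):
    # closed-form MLE for a lognormal: mean and RMS deviation of log(values)
    logs = [math.log(v) for v in values]
    mu = sum(logs) / len(logs)
    sigma = math.sqrt(sum((l - mu)**2 for l in logs) / len(logs))
    return mu, sigma

random.seed(42)
sample = [random.lognormvariate(7.4, 0.04) for _ in range(100_000)]
mu, sigma = lognormal_mle(sample)
assert abs(mu - 7.4) < 0.01       # recovers the true parameters closely
assert abs(sigma - 0.04) < 0.005
```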
|
123,605 | <p>I have an equality:
$$\frac {d_L}{\phi t_1^2}=\frac {d_0}{\phi (t_1 - \Delta t)^2}$$
I can manually extract $d_L - d_0$ from this equation by expanding and re-arranging. Is there any way to do this in Mathematica? Ideally, I'd like <strong>Solve</strong> to work like this:</p>
<p>Solve$[\frac {d_L}{\phi t_1^2}==\frac {d_0}{\phi (t_1 - \Delta t)^2}, d_L - d_0]$
which would render:
$$d_L - d_0=d_L(\frac {2 \Delta t}{t_1}-\frac {\Delta t^2}{t_1^2})$$</p>
| march | 29,734 | <p>I usually do this by replacing one of the variables in the expression with the difference, then solve, then re-replace.</p>
<pre><code>Equal @@ Solve[eq /. d0 -> dl - dl0, dl0][[1, 1]] /. dl0 -> dl - d0
(* -d0 + dl == dl (2 t1 Δt - Δt^2)/t1^2 *)
</code></pre>
<p>In terms of <code>dl - d0</code> and <code>d0</code> instead:</p>
<pre><code>Equal @@ Solve[eq /. dl -> dl0 + d0, dl0][[1, 1]] /. dl0 -> dl - d0
(* -d0 + dl == d0 (2 t1 Δt - Δt^2)/(t1 - Δt)^2 *)
</code></pre>
|
<p>I want to modify a list of table rows by adding an extra column. For that I <a href="https://reference.wolfram.com/language/ref/Map.html" rel="nofollow noreferrer"><code>Map</code></a> the data with a pure function that evaluates the new column value from the existing ones and reconstructs a new list from that and the initial <a href="https://reference.wolfram.com/language/ref/Part.html" rel="nofollow noreferrer"><code>Part</code>s</a> of the list:</p>
<pre><code>b = 5000
data = {{1, 45000., 27500., "Inverted"},
{2, 22500., 18333.3, ""},
{3, 15000., 13750., "Inverted"},
{4, 11250., 11000., ""},
{5, 9000., 9166.67, "Inverted"},
{6, 7500., 7857.14, ""},
{7, 6428.57, 6875., "Inverted"}}
{#[[1;;3]], #[[1]]> 2b, #[[4]]} & /@ data
</code></pre>
<blockquote>
<pre><code>{{{1,45000.,27500.},False,Inverted},
{{2,22500.,18333.3},False,},
{{3,15000.,13750.},False,Inverted},
{{4,11250.,11000.},False,},
{{5,9000.,9166.67},False,Inverted},
{{6,7500.,7857.14},False,},
{{7,6428.57,6875.},False,Inverted}}
</code></pre>
</blockquote>
<p>The problem is that <code>#[[1;;3]]</code> returns a list, so I end up with nested lists instead of flat records.</p>
<p>As a workaround, I <a href="https://reference.wolfram.com/language/ref/Flatten.html" rel="nofollow noreferrer"><code>Flatten</code></a> each record:</p>
<pre><code> Flatten[{#[[1;;3]], #[[1]]> 2b, #[[4]]}] & /@ data
</code></pre>
<blockquote>
<pre><code>{{1,45000.,27500.,False,Inverted},
{2,22500.,18333.3,False,},
{3,15000.,13750.,False,Inverted},
{4,11250.,11000.,False,},
{5,9000.,9166.67,False,Inverted},
{6,7500.,7857.14,False,},
{7,6428.57,6875.,False,Inverted}}
</code></pre>
</blockquote>
<p>It works <em>in that particular</em> case, but it is not entirely satisfactory: if a record already contained list items, they would be flattened too.</p>
<p>Is there a more generic solution to build a list from items and list spans?</p>
| MarcoB | 27,951 | <p>Two options presented themselves to me, using <code>Apply</code> or using replacements:</p>
<pre><code>{#1, #2, #3, #1 > 2 b, #4} & @@@ data
data /. {x_, y_, z_, p_} :> {x, y, z, x > 2 b, p}
</code></pre>
<p>Both reproduce your results.</p>
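<p>For readers outside Mathematica, the same "splice a computed column into each record" idea can be sketched in Python with list slicing. This is my own illustration, not part of the original answer; the data and threshold simply mirror the question's:</p>

```python
# Insert a computed Boolean column between the 3rd and 4th fields of each record.
b = 5000
data = [
    [1, 45000.0, 27500.0, "Inverted"],
    [2, 22500.0, 18333.3, ""],
    [3, 15000.0, 13750.0, "Inverted"],
]

# row[:3] + [flag] + row[3:] splices without nesting -- the analogue of
# Apply (@@@) or the pattern replacement in the answer above.
result = [row[:3] + [row[0] > 2 * b] + row[3:] for row in data]
```

Because every first field here is far below `2 b = 10000`, the inserted flag is `False` in each row, matching the expected output in the question.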
|
503,808 | <p>Let $\mathbb{K} = \mathbb{Q}[\sqrt[3]{5}] \ $, and let $\mathbb{L}$ be the normal closure of $\mathbb{K}$. </p>
<p>Let $\mathbb{O}_{\mathbb{K}} \ $ be the integral closure of $\mathbb{Z}$ in $\mathbb{K}$ and $\mathbb{O}_{\mathbb{L}} \ $ be the integral closure of $\mathbb{Z}$ in $\mathbb{L}$. I want to find the factorization of the primes $7, 11 \ $ and $13$ as ideals in $\mathbb{O}_{\mathbb{K}} \ $ and $\mathbb{O}_{\mathbb{L}} \ $. My question is: how can I do that without knowing an integral basis for $\mathbb{K}$ and $\mathbb{L}$?</p>
| Lubin | 17,760 | <p>I tend to work on such things on a fairly mindless and <em>ad hoc</em> basis. Let me show you my approach just for the case $p=7$:</p>
<p>The extension $\mathbb K\supset\mathbb Q$ is of degree $3$, and in the downstairs residue field $\mathbb F_7$, $5$ is a generator of the multiplicative group so that $\lambda=5^{1/3}$ has period $18$. You need to go all the way to $\mathbb F_{343}=\mathbb F_{7^3}$ to get $18\mid(7^n-1)$, so that the residue-field extension degree is three, and the prime $7$ does not split at all in $\mathbb K$. On the other hand, we have $(\frac{-3}7)=(\frac47)=1$, so that in $\mathbb Q(\omega)$, where $\omega$ is a cube root of unity, $7$ should split, indeed $7=(2-\omega)(2-\omega^2)$. So there are at least two primes above $7$ in $\mathbb L$, but we know that each of these has residue-field extension degree $3$. So there you are, you can now try the same kind of argument at $11$ and $13$.</p>
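<p>The arithmetic facts used above are easy to spot-check numerically. The snippet below is my own sanity check, not part of the original answer: it verifies that $5$ generates $\mathbb F_7^\times$, that the residue-field extension degree is $3$, and that $-3$ is a square mod $7$.</p>

```python
# Multiplicative order of 5 mod 7: 5 should generate F_7^* (order 6).
order = next(k for k in range(1, 7) if pow(5, k, 7) == 1)

# Smallest n with 18 | 7^n - 1: the residue-field extension degree for 7.
degree = next(n for n in range(1, 10) if (7**n - 1) % 18 == 0)

# Euler's criterion: -3 = 4 (mod 7) is a quadratic residue iff 4^((7-1)/2) = 1.
minus3_is_square = pow(-3 % 7, (7 - 1) // 2, 7) == 1
```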
|
1,179,498 | <p>Is the following sequence convergent in <img src="https://i.stack.imgur.com/wtYTG.png" alt="enter image description here">? (The metric space is the usual Euclidean one.)
<img src="https://i.stack.imgur.com/uqGFl.png" alt="enter image description here"></p>
| quid | 85,306 | <p>Show: </p>
<ul>
<li><p>if the additive group contains an element of order $2$ then the multiplicative does not. </p></li>
<li><p>if the additive group contains no element of order $2$ then the multiplicative does. </p></li>
</ul>
|
3,164,343 | <p>The machine generates one random integer in the range <span class="math-container">${[0;40)}$</span> on every spin.<br>
You should choose 5 numbers in that range. </p>
<p>Then the machine will spit out <strong>5</strong> numbers (<em>numbers are independent of each other</em>). </p>
<p><strong>What is the probability that you will get exactly two numbers correct?</strong></p>
<hr>
<p><strong>My logic:</strong><br>
You should get two of them right. The chance of that is: <span class="math-container">$r = { \left( 1 \over 40 \right)^ 2 }$</span><br>
You should get 3 of them wrong. The chance of that is: <span class="math-container">$w = { \left( 39 \over 40 \right)^3 }$</span><br>
As order doesn't matter answer should be: <span class="math-container">$$ans = { rw \over 2!3!}$$</span>
Simulator tells me I'm wrong. Where is my logic flawed?</p>
<p><strong>P.S. The machine can spit out duplicates.</strong></p>
| SlipEternal | 156,808 | <p>If the machine is choosing with replacement, but the person is choosing five distinct numbers, and if success means two distinct numbers the person chose are picked by the machine, then the problem is much more difficult:</p>
<p>Probability machine chooses five distinct numbers:
<span class="math-container">$$\dfrac{40!}{35!40^5}$$</span></p>
<p>Probability machine chooses four distinct numbers (choose the four distinct numbers the machine selected, choose the one number that is repeated twice, permute the multiset to find all possible orders these numbers could be chosen, divide by the total number of ways to choose five numbers):
<span class="math-container">$$\dfrac{\dbinom{40}{4}\dbinom{4}{1}\dfrac{5!}{1!1!1!2!}}{40^5}$$</span></p>
<p>Probability machine chooses three distinct numbers (choose the three distinct numbers the machine selected, either one number is repeated three times or two numbers are repeated twice each):
<span class="math-container">$$\dfrac{\dbinom{40}{3}\dbinom{3}{1}\left(\dfrac{5!}{3!1!1!}+\dfrac{5!}{2!2!1!}\right)}{40^5}$$</span></p>
<p>Probability machine chooses two distinct numbers (choose the two distinct numbers the machine selected. You can have one number four times and the other once or one number three times and the other twice.):
<span class="math-container">$$\dfrac{\dbinom{40}{2}\dbinom{2}{1}\left(\dfrac{5!}{4!1!}+\dfrac{5!}{3!2!}\right)}{40^5}$$</span></p>
<p>So, the probability of you matching exactly two numbers:
<span class="math-container">$$\begin{align*} & \dfrac{40!}{35!40^5}\cdot \dfrac{\dbinom{5}{2}\dbinom{35}{3}}{\dbinom{40}{5}}\\ + & \dfrac{\dbinom{40}{4}\dbinom{4}{1}\dfrac{5!}{2!}}{40^5}\cdot \dfrac{\dbinom{4}{2}\dbinom{36}{3}}{\dbinom{40}{5}} \\ + & \dfrac{\dbinom{40}{3}\dbinom{3}{1}\left(\dfrac{5!}{3!}+\dfrac{5!}{(2!)^2}\right)}{40^5}\cdot \dfrac{\dbinom{3}{2}\dbinom{37}{3}}{\dbinom{40}{5}} \\ + & \dfrac{\dbinom{40}{2}\dbinom{2}{1}\left(\dfrac{5!}{4!}+\dbinom{5}{3}\right)}{40^5}\cdot \dfrac{\dbinom{2}{2}\dbinom{38}{3}}{\dbinom{40}{5}} \approx 0.09116015625\end{align*}$$</span></p>
<p>Note: It is possible for the machine to choose one distinct number, but there is a zero probability that you wind up with two matching numbers, so I ignored that case. To show that my numbers work out though, you can test that the following holds:</p>
<p><span class="math-container">$$\dfrac{40!}{35!}+\dbinom{40}{4}\dbinom{4}{1}\dfrac{5!}{2!1!1!1!}+\dbinom{40}{3}\dbinom{3}{1}\left(\dfrac{5!}{3!1!1!}+\dfrac{5!}{2!2!1!}\right)+\dbinom{40}{2}\dbinom{2}{1}\left(\dfrac{5!}{4!1!}+\dfrac{5!}{3!2!}\right)+\dbinom{40}{1} = 40^5$$</span></p>
<p>(I verified this is true myself using Wolframalpha).</p>
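<p>The closing identity can also be confirmed with exact integer arithmetic (a quick independent check I added):</p>

```python
from math import comb, factorial, perm

p = factorial  # shorthand; multiset permutation counts are written explicitly
lhs = (
    perm(40, 5)                                              # 5 distinct values
    + comb(40, 4) * comb(4, 1) * (p(5) // p(2))              # one value doubled
    + comb(40, 3) * comb(3, 1) * (p(5) // p(3) + p(5) // (p(2) * p(2)))
    + comb(40, 2) * comb(2, 1) * (p(5) // p(4) + p(5) // (p(3) * p(2)))
    + comb(40, 1)                                            # all five equal
)
```

The sum comes out to exactly $40^5 = 102{,}400{,}000$, confirming that the case counts partition all machine outcomes.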
|
3,762,306 | <p>I have the equation <span class="math-container">$$(e^x-1)-k\arctan(x) = 0$$</span> where <span class="math-container">$0<k \leq \frac 2\pi$</span>, and I was wondering how I would go about determining the number of real roots of this equation. So far I have just manipulated the equation to get different equations for <span class="math-container">$x$</span>; however, I'm unsure what to do with them.</p>
<p>Current rearrangements for <span class="math-container">$x$</span> are <span class="math-container">$x = \ln(k\arctan(x)+1)$</span> and <span class="math-container">$x = \tan\left(\frac{e^x-1}{k}\right)$</span></p>
| saulspatz | 235,128 | <p>Let <span class="math-container">$f(x)=e^x-1-k\arctan(x)$</span>. We have <span class="math-container">$f(0)=0.$</span> If <span class="math-container">$k=0$</span>, obviously <span class="math-container">$x=0$</span> is the only solution.</p>
<p>If <span class="math-container">$k\neq0$</span>, then <span class="math-container">$f'(x)=e^x-\frac k{1+x^2},$</span> so that <span class="math-container">$$f'(x)=0\iff (1+x^2)e^x=k.$$</span> Since <span class="math-container">$(1+x^2)e^x$</span> is increasing, goes to <span class="math-container">$0$</span> at <span class="math-container">$-\infty$</span> and goes to <span class="math-container">$\infty$</span> at <span class="math-container">$\infty$</span> we see that <span class="math-container">$f'$</span> has exactly one real zero, say <span class="math-container">$f'(x_0) = 0$</span>. We see that <span class="math-container">$f$</span> decreases on <span class="math-container">$(-\infty, x_0)$</span> and increases on <span class="math-container">$(x_0, \infty)$</span>, so that <span class="math-container">$f$</span> attains its minimum at <span class="math-container">$x_0$</span>.</p>
<p>Since <span class="math-container">$k\le 2/\pi<1$</span>, we have <span class="math-container">$f'(0)=1-k>0$</span>, so <span class="math-container">$x_0<0$</span>. On <span class="math-container">$(-\infty,x_0)$</span> the decreasing function <span class="math-container">$f$</span> stays below its limit <span class="math-container">$\lim_{x\to-\infty}f(x)=k\pi/2-1\le 0$</span>, so it has no zero there; on <span class="math-container">$(x_0,\infty)$</span> the increasing function has exactly one zero, namely <span class="math-container">$f(0)=0$</span>. Hence <span class="math-container">$x=0$</span> is the only solution.</p>
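<p>A crude numerical check (my addition, with arbitrarily chosen sample values of $k$ below $2/\pi \approx 0.6366$) agrees that $f$ changes sign exactly once:</p>

```python
from math import exp, atan

def sign_changes(k, lo=-50.0, hi=50.0, steps=4000):
    """Count sign changes of f(x) = e^x - 1 - k*arctan(x) on a grid
    whose nodes are offset by half a step, so they avoid the exact root x = 0."""
    f = lambda x: exp(x) - 1 - k * atan(x)
    xs = [lo + (i + 0.5) * (hi - lo) / steps for i in range(steps + 1)]
    vals = [f(x) for x in xs]
    return sum(1 for u, v in zip(vals, vals[1:]) if u * v < 0)

counts = [sign_changes(k) for k in (0.1, 0.3, 0.6)]  # sample k values < 2/pi
```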
|
1,652,381 | <p>Suppose $f$ is differentiable on $\mathbb R$ and that $\lim_{t\to x}f'(t)=\ell$. Show that $$\lim_{h\to 0}\frac{f(x+h)-f(x)}{h}=\ell.$$</p>
<p>The proof of my course goes like this:</p>
<p>By mean value theorem, there is $y_h\in ]x,x+h[$ s.t. $$f(x+h)-f(x)=f'(y_h)h$$ and thus $$\lim_{h\to 0}\frac{f(x+h)-f(x)}{h}=\lim_{h\to 0}f'(y_h)=\ell.$$</p>
<p>QED.</p>
<p><strong>Question 1: Didn't we only prove that $$\lim_{h\to 0^+}\frac{f(x+h)-f(x)}{h}=\ell\ \ ?$$</strong></p>
<p><strong>Question 2: Maybe by $]x,x+h[$ they mean $\{y\mid x<y<x+h\}$ if $h>0$ and $\{y\mid x+h<y<x\}$ if $h<0$, no?</strong></p>
| yo' | 43,247 | <p>$40$, because the sequence is $n+7$ and $n^2+10n+16$ interleaved.</p>
<p>$37$, because the sequence is $n+7$ and $-n^2/2+23n/2+16$ interleaved.</p>
<p>Use imagination and you come with equally good excuses for the other two options.</p>
|
2,816,365 | <p><a href="https://i.stack.imgur.com/T1W5r.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/T1W5r.jpg" alt="enter image description here"></a></p>
<p>I don't understand how it went from line four to line five in the proof.
Do you need to use induction? We haven't covered it yet.</p>
| Asaf Karagila | 622 | <p>When you put a filter in your sink, the idea is that you filter out the big chunks of food, and let the water and the smaller chunks (which can—in principle—be washed through the pipes) go through.</p>
<p>You filter out the larger parts.</p>
<p>A filter filters out the larger sets. It is a way to say "these sets are 'large'", and the sort of make sense. The set of "everything" is definitely large, and "nothing" is definitely <em>not</em>; if something is larger than a large set, then it is also large; and two large sets intersect on a large set.</p>
<p>From a mathematical point of view you can think about this as being co-finite, or being of measure $1$ on the unit interval, or having a dense open subset (again on the unit interval). These are examples of ways where a set can be thought of as "almost everything". And that is the idea behind a filter.</p>
|
201,994 | <p>We arbitrarily choose n lattice points in a 3-dimensional Euclidean space such that no three points are on the same line. What is the least n in order to guarantee that there must be three points x, y, and z among the n points such that the center of mass of the x-y-z triangle is a lattice point?</p>
| joriki | 6,622 | <p>The answer $19$ has already been given, including references, so this is just a numerical check of that result.</p>
<p>The specific shape of the lattice is irrelevant, since a point is a lattice point iff it is an integer linear combination of the lattice vectors. The centre of mass of three lattice points is a lattice point iff all coefficients of the sum are multiples of $3$.</p>
<p>To find the maximal number of points without a centre of mass on the lattice, consider the distribution of the residues of the coefficients of the points $\bmod3$ in $\mathbb Z_3^3$. No three points may add up to $0\in\mathbb Z_3^3$.</p>
<p>We cannot have the same point in $\mathbb Z_3^3$ three times. On the other hand, whatever points in $\mathbb Z_3^3$ we do have, we can have them twice, since the only point that would sum to $0$ with the two copies would be a third copy. Thus, we can reduce the problem to finding the maximal number of <em>different</em> points in $\mathbb Z_3^3$ no three of which add up to $0$; then the desired maximal number of not necessarily different points is twice that.</p>
<p><a href="https://gist.github.com/3780277" rel="nofollow">Here</a>'s code that enumerates all subsets of $\mathbb Z_3^3$ and finds the maximal size of a subset that doesn't contain a triple that sums to $0$. That maximal size turns out to be $9$, so the maximal size of a set of not necessarily different points is $2\cdot9=18$. Thus one more than that, $19$, is the number of points required to force a set to contain a triple with centre of mass on the lattice.</p>
|
2,085,622 | <p>The question I am working through states:</p>
<p><em>Three digits are selected at random from the digits 1 through 10, without replacement. Find the probability that at least 2 digits are prime.</em></p>
<p>I feel as though I am close, but not quite hitting the mark with my solution:</p>
<p>My total number of selections would be $10C3$, and there are 4 prime numbers to pick from $\{2,3,5,7\}$.</p>
<p>Since I'm looking for <strong>at least</strong> 2 prime numbers, that means I am looking for the probability that I select 2, or 3 prime numbers. </p>
<p>So, I have
$$\textrm{P(2 prime numbers)} + \textrm{P(3 prime numbers)} = \frac{4C2}{10C3} + \frac{4C3}{10C3} = \frac{1}{12}$$</p>
<p>Should I instead be looking for the complement? I feel as though it would be just as complex and not really save me any time [1-P(0 or 1 primes)].</p>
<p>Edit to add new solution (looks better!):
$$\textrm{P(2 prime numbers)} + \textrm{P(3 prime numbers)} = \frac{4C2 \times 6C1}{10C3} + \frac{4C3}{10C3} = \frac{1}{3}$$</p>
| Ross Millikan | 1,827 | <p>You are correct for the chance of selecting three prime numbers, but for selecting exactly two you need to multiply by the number of ways to select the non-prime.</p>
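<p>The corrected count in the question's edit can be double-checked by brute force (my verification, not part of the original answer):</p>

```python
from fractions import Fraction
from itertools import combinations
from math import comb

primes = {2, 3, 5, 7}
# Count 3-element selections from {1,...,10} containing at least 2 primes.
favourable = sum(
    1 for pick in combinations(range(1, 11), 3)
    if len(primes & set(pick)) >= 2
)
prob = Fraction(favourable, comb(10, 3))
```

The enumeration gives $40/120 = 1/3$, agreeing with $\binom42\binom61 + \binom43$ favourable selections.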
|
24,412 | <p>Trying to plot the following two functions to show points of intersection.</p>
<pre><code>2 x + y - 1 == 0,
x - y + 2 == 0
ContourPlot[{2 x + y - 1 == 0, x - y + 2 == 0}, {x, -5, 5}, {y, -5, 5}]
</code></pre>
<p>The above shows the plots, but I find it difficult to see the point of intersection. I suspect that there is a better method than this. </p>
<p>Please suggest a good method to plot such equations.</p>
| kglr | 125 | <pre><code>ContourPlot[{2 x + y - 1 == 0, x - y + 2 == 0}, {x, -5, 5}, {y, -5, 5},
MeshFunctions -> {Function[{x, y}, 2 x + y - 1 - (x - y + 2)]},
MeshStyle -> Directive[PointSize[Large], Red], Mesh -> {{0}}]
</code></pre>
<p><a href="https://i.stack.imgur.com/vmG8g.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vmG8g.png" alt="enter image description here"></a></p>
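<p>If you also want the coordinates of the intersection rather than just the picture, the 2×2 system can be solved exactly; in Mathematica, <code>Solve</code> returns the same point. Below is a supplementary Python sketch (my addition) using Cramer's rule:</p>

```python
from fractions import Fraction

# Solve 2x + y = 1 and x - y = -2 by Cramer's rule.
a1, b1, c1 = 2, 1, 1     # 2x + y = 1
a2, b2, c2 = 1, -1, -2   # x - y = -2
det = a1 * b2 - a2 * b1
x = Fraction(c1 * b2 - c2 * b1, det)
y = Fraction(a1 * c2 - a2 * c1, det)
```

The intersection is $(x, y) = (-1/3,\ 5/3)$, which matches the red mesh point in the plot.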
|
2,444,591 | <p>How do I prove that: $f(S ∩ T) \neq f(S) ∩ f(T)$ unless $f$ is one-to-one?</p>
<p>I tried finding a counterexample, e.g. the function $f(x)=x^2$ with $S = \{-2,-1,0,1,2\}$ and $T=\{0,1,2,3,4\}$, or the function $f(x) = |x|$, but equality seems to hold in these cases. My book says the statement is true in general and I need to prove it, but I can't get started.</p>
| nonuser | 463,553 | <p>Take any $y\in f(S \cap T)$; then there exists $x \in S \cap T$ such that $y= f(x)$. Since $x\in S$ we have $y\in f(S)$, and since $x \in T$ we have $y\in f(T)$, thus $y\in f(S)\cap f(T)$. So $\boxed{f(S\cap T)\subseteq f(S)\cap f(T)}$; note that injectivity is not needed here.</p>
<p>Now suppose $f$ is injective.</p>
<p>Take any $y\in f(S)\cap f(T)$; then there are $s\in S$ and $t\in T$ such that $f(s)=y = f(t)$. Since $f$ is injective we have $s=t =:x$. Thus $x\in S\cap T$, so $y\in f(S\cap T)$, and thus $\boxed{f(S)\cap f(T)\subseteq f(S\cap T)}$.</p>
<p>So, if $f$ is injective we have $f(S)\cap f(T)= f(S\cap T)$. For the reverse direction, see Arthur's answer.</p>
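<p>A minimal counterexample for the non-injective case (the kind the question was searching for) is $f(x)=x^2$ with $S=\{-1\}$, $T=\{1\}$, where the inclusion is strict. The check below is my own illustration:</p>

```python
def image(f, A):
    """Image of a set A under a function f."""
    return {f(a) for a in A}

f = lambda x: x * x
S, T = {-1}, {1}

lhs = image(f, S & T)             # f(S ∩ T) = f(∅) = ∅
rhs = image(f, S) & image(f, T)   # {1} ∩ {1} = {1}

# With an injective function, equality holds on the same sets.
g = lambda x: x + 1
injective_equal = image(g, S & T) == image(g, S) & image(g, T)
```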
|
164,987 | <p>Find $(x, a, b, c)$ if $$x! = a! + b! + c!$$</p>
<p>I want to know if there are more solutions to this apart from $(x, a, b, c) = (3, 2, 2, 2)$.</p>
| Community | -1 | <p>First note that $a,b,c < x$: since every factorial is at least $1$, if any of $a,b,c$ equalled $x$ the remaining two factorials would have to sum to $0$, which is impossible. This means that $a,b,c \leq x-1$, which implies $$x! = a! + b! + c! \leq 3 (x-1)! \implies x \leq 3$$
Further, $x! = a! + b! + c! \geq 3 \implies x >2$. Hence, $x=3$ is the only possibility left.</p>
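<p>A brute-force search over a range comfortably beyond the proved bound $x \le 3$ confirms the uniqueness (my check; the cutoff of $8$ is arbitrary):</p>

```python
from math import factorial

# Search x! = a! + b! + c! with a <= b <= c to avoid counting reorderings.
solutions = {
    (x, a, b, c)
    for x in range(1, 9)
    for a in range(1, 9)
    for b in range(a, 9)
    for c in range(b, 9)
    if factorial(x) == factorial(a) + factorial(b) + factorial(c)
}
```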
|
333,265 | <p>Can $|-x^2 | < 1 $ imply that $-1<x<1$?
My steps are as follows:
$$| -x^2| < 1 $$
$$-1<(-x^2)< 1 $$
$$-1<x^2< 1 $$
$$\sqrt{-1}<x< \sqrt 1 $$</p>
<p>I'm actually looking for the radius of convergence for the power series of $\frac{1}{1-x^2}$:</p>
<p>$$\frac{1}{1-x^2}=\sum\limits_{n=0}^\infty (-x^2)^n \hspace{10mm}\text{for} \,|-x^2|<1$$
This is derived from the equation $$\frac{1}{1-x}=\sum\limits_{n=0}^\infty x^n \hspace{10mm}\text{for} \,|x|<1$$
According to my textbook, the power series $$\sum\limits_{n=0}^\infty (-x^2)^n \hspace{10mm}\text{for} \,|-x^2|<1$$
is 'for the interval (-1,1)' which means that $|-x^2 | < 1 $ implies that $-1<x<1$.
However, that implication does not make sense to me.</p>
| wece | 65,630 | <p>$$|-x^2|<1$$
$$|x^2|<1$$
$$|x|^2<1$$
$$|x|<\sqrt{1}$$
$$|x|<1$$
$$-1<x<1$$
Seems more correct to me. (The $\sqrt{-1}$ step isn't really well defined.)</p>
|
970,882 | <p>This is homework so no answers please</p>
<p>We have the multiplication map $F:O(n)\times O(n)\to O(n)$ defined by $F(A,B)=AB$, where $O(n)=\{A\in M(n\times n):AA^{t}=\mathrm{id}\}$.
The smooth structure of $O(n)$ comes from the Implicit Function Theorem: consider the smooth map $f:M(n\times n)\to \mathrm{Sym}(n\times n)$ defined by $f(A)=A^{t}A$; then $O(n)=f^{-1}(\mathrm{Id})$, where $\mathrm{Id}$ is a regular value.</p>
<p>The problem is to show that F is smooth.</p>
<p>Smoothness of $F$ means smoothness of the coordinate representation of $F$, so I am trying to unearth the coordinate charts of $O(n)$ from the Implicit Function Theorem. Any suggestions on how to do that, or links on doing it for general level-set manifolds?</p>
<p>Thanks</p>
| Jack Lee | 1,421 | <p>Two basic theorems in differential geometry would be useful here. Suppose $M,N$ are smooth manifolds and $f\colon M\to N$ is a smooth map.</p>
<ol>
<li><strong>Restricting the domain:</strong> If $S\subseteq M$ is an (immersed or embedded) smooth submanifold, then $f|_S\colon S\to N$ is smooth.</li>
<li><strong>Restricting the codomain:</strong> If $S\subseteq N$ is an <em>embedded</em> smooth submanifold containing $f(M)$, then $f$ is smooth when considered as a map from $M$ to $S$.</li>
</ol>
<p>Together with the fact that multiplication in $GL(n,\mathbb R)$ is smooth because it's given by polynomials in standard coordinates, this leads to an easy proof.</p>
|
2,961,592 | <p>We know $4$ is a real number, but how can we prove that it is a complex number? How can we describe it in the $a+ib$ form?</p>
| Peter Szilas | 408,605 | <p><span class="math-container">$0<x<y$</span>.</p>
<p><span class="math-container">$(y-x)=(\sqrt{y}-\sqrt{x})(\sqrt{y}+\sqrt{x})>0.$</span></p>
<p>Since <span class="math-container">$\sqrt{y}+\sqrt{x} >0$</span>, we get</p>
<p><span class="math-container">$\sqrt{y} -\sqrt{x}>0$</span>, or <span class="math-container">$\sqrt{y}>\sqrt{x}$</span>.</p>
<p><span class="math-container">$x=\sqrt{x}\sqrt{x} <\sqrt{x}\sqrt{y} <\sqrt{y}\sqrt{y} =y$</span></p>
|
2,961,592 | <p>We know $4$ is a real number, but how can we prove that it is a complex number? How can we describe it in the $a+ib$ form?</p>
| lhf | 589 | <p><span class="math-container">$
\sqrt{x} <\sqrt{y}
\implies
x = \sqrt{x} \sqrt{x} < \sqrt{x} \sqrt{y} = \sqrt{xy}
$</span></p>
<p><span class="math-container">$
\sqrt{x} <\sqrt{y}
\implies
\sqrt{xy} = \sqrt{x} \sqrt{y} < \sqrt{y} \sqrt{y} = y
$</span></p>
|
1,379,899 | <p>I'm trying to achieve a better conception of what it means to "divide out" a variable/number, because I'm currently having a lot of trouble justifying to myself why it actually works the way it does in certain contexts. I apologize if this is too elementary a question, but I can't seem to come to a satisfying conclusion.</p>
<p>I understand division in the context of explicit numbers but <a href="http://www.econ.nyu.edu/user/ramseyj/textbook/pg215.219.pdf" rel="nofollow">here</a> (see pg. 3) is an example of where I get confused:</p>
<p>"so the number of ways of selecting r objects and ignoring all permutations of the remaining $(n − r)$ objects not chosen is to divide out the unwanted permutations:</p>
<p>No. of perms. $= \dfrac{1 \times 2 \times 3 \times···\times (n − r) \times (n − r + 1) \times··· \times(n − 1) \times n}{1 \times 2 \times 3 \times··· \times (n − r)}$"</p>
<p>So basically, why does this work, and how would explain it to a clever 5 year old? </p>
<p>Similarly, what does it mean to "divide out" $P(B)$ in the classic representation of Bayes' Theorem, i.e.</p>
<p>$P(A|B) = \dfrac{P(B|A)P(A)}{P(B)}$?</p>
| Michael Hardy | 11,667 | <p>Look at the number of permutations of $3$ things out of $5$. Call the five things $ABCDE$.
$$
\begin{array}{c|c|ccc}
\overbrace{\begin{array}{cccc} ABC & ACB & ADB & AEB \\ ABD & ACD & ADC & AEC \\ \underbrace{ABE}_B & \underbrace{ACE}_C & \underbrace{ADE}_D & \underbrace{AED}_E \end{array}}^{\text{starting with $A$}} & \overbrace{\begin{array}{cccc} BAC & BCA & BDA & BEA \\ BAD & BCD & BDC & BEC \\ \underbrace{BAE}_A & \underbrace{BCE}_C & \underbrace{BDE}_D & \underbrace{BED}_E \end{array}}^{\text{starting with $B$}} & \cdots\cdots\cdots\cdots\cdots
\end{array}
$$
When three are listed in some order, the first one could be $A$ or $B$ or $C$ or $D$ or $E$. Five choices. Only two are listed above, but you can see what the other three are. Within each of those, the second can be any of four. And the third can be any of three. So the total number of permutations in the list is $5\times 4\times3= 60$.</p>
<p>So what if we want <em>combinations</em> rather than permutations? Notice that one of the permutations listed above is $BEC$. In how many orders can $BEC$ be listed? Here they are:
$$
\overbrace{BCE\quad BEC}^\text{starting with $B$} \quad \overbrace{CBE\quad CEB}^\text{starting with $C$} \quad \overbrace{EBC\quad ECB}^\text{starting with $E$}
$$
It is listed $3\times2\times1=6$ times. That means that this combination got listed $6$ times in the first list above. <b>Every</b> combination got listed $6$ times. So divide the length of the first list by $6$:
$$
\frac{5\times4\times3}{3\times2\times1} = \frac{60}{6} = 10.
$$
In other words, the number of combinations is $10$.</p>
|
569,814 | <p>I was watching a video on YouTube on Quantum Mechanics concepts and saw that if you want to convert a probability amplitude to a probability, you square it. In the video he said that this was equivalent to multiplying by its complex conjugate. So is this correct? Is squaring the same as multiplying by the complex conjugate, or is this just a thing you do in Quantum Mechanics?
Also, I'm not sure if this should be asked on physics.se instead of math.se; if it should, then I'm sorry.</p>
| Community | -1 | <p>No, it is not: Given $i$, we have</p>
<p>$$i^2 = -1 \ne 1 = i \overline{i}$$</p>
<p>What we can say, however, is that the absolute value of any complex $z$ is defined to be</p>
<p>$$|z| = \sqrt{z \overline{z}}$$</p>
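<p>The distinction is immediate to check numerically (illustration added by me):</p>

```python
z = 1j  # the imaginary unit

square = z * z                 # -1: plain squaring
norm_sq = z * z.conjugate()    # 1: equals |z|^2, always real and non-negative

# For quantum amplitudes one wants |z|^2 = z * conj(z), not z^2.
```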
|
1,850,978 | <p>My question concerns the following problem:</p>
<blockquote>
<p>Let $K=\mathbb F_7[T]/(T^3-2)$. Show that $X^3-2$ splits into linear factors in $K[X]$.</p>
</blockquote>
<p>Write $K\simeq \mathbb F_7[\alpha]$ for a root $\alpha\in \overline{\mathbb F_7}$ of $X^3-2$. I found out through experimentation that $2\alpha$, $4\alpha$ are also roots of this polynomial (since $2^3\equiv 4^3 \equiv 1 \ \mathrm{mod}\ (7)$). Is there a more conceptual way to see this?</p>
| lhf | 589 | <p>$X^3-2$ is irreducible mod $7$. This implies that $K$ has $7^3$ elements.</p>
<p>Since the group $K^\times$ is cyclic of order $7^3-1= 2 \cdot 3^2 \cdot 19$, there is an element $\omega \in K^\times$ of order $3$.</p>
<p>Then $X^3-2=(X-\alpha)(X-\omega\alpha)(X-\omega^2\alpha)$.</p>
<p>As you have discovered, you can take $\omega=2$.</p>
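<p>The cube roots of unity in $\mathbb F_7$ can be listed directly (a small check I added). Any of them times a fixed cube root of $2$ is again a cube root of $2$, matching the observation that $2\alpha$ and $4\alpha$ are also roots:</p>

```python
# Elements of F_7^* whose cube is 1: the possible values of omega.
cube_roots_of_unity = {a for a in range(1, 7) if pow(a, 3, 7) == 1}

# If alpha^3 = 2 in F_343, then (c*alpha)^3 = c^3 * 2, which equals 2
# exactly when c^3 = 1 in F_7.
scalars_fixing_2 = {c for c in range(1, 7) if (pow(c, 3, 7) * 2) % 7 == 2}
```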
|
1,900,365 | <p>Apparently this is a true statement, but I cannot figure out how to prove this. I have tried setting </p>
<p>$$(m)(m + 1)(m + 2)(m + 3) = (m + 4)^2 - 1 $$</p>
<p>but to no avail. Could someone point me in the right direction? </p>
| Steven Alexis Gregory | 75,410 | <p>I will show that
$16m(m+1)(m+2)(m+3) = 16(m^2 + 3m + 1)^2 - 16$</p>
<p>It will follow that
$$m(m+1)(m+2)(m+3) = (m^2 + 3m + 1)^2 - 1$$</p>
<p>Proof:
\begin{align}
16 m(m+1)(m+2)(m+3)
&=(2m)(2m+2)(2m+4)(2m+6)\\
& ---\text{let $2m = n-3$}---\\
&=(n-3)(n-1)(n+1)(n+3)\\
&=(n-3)(n+3) \cdot (n-1)(n+1) \\
&=(n^2 - 9)(n^2 - 1) \\
&=n^4 - 10n^2 + 9 \\
&=(n^2 - 5)^2 - 16 \\
& ---\text{Note $n = 2m+3$}---\\
&=(4m^2 + 12m + 4)^2 - 16\\
&=16(m^2 + 3m + 1)^2 - 16\\
\end{align}</p>
<h2>addendum</h2>
<p>I just noticed this proof.</p>
<p>\begin{align}
m(m+1)(m+2)(m+3)
&= m(m+3) \cdot (m+1)(m+2) \\
&= (m^2 + 3m)(m^2 + 3m + 2) \\
&= ((m^2 + 3m + 1) - 1) \cdot ((m^2 + 3m + 1) + 1)\\
&= (m^2+3m + 1)^2 - 1
\end{align}</p>
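<p>The identity is easy to machine-check over a range (a verification I added; the range is arbitrary):</p>

```python
# m(m+1)(m+2)(m+3) = (m^2 + 3m + 1)^2 - 1, checked exactly for small m.
ok = all(
    m * (m + 1) * (m + 2) * (m + 3) == (m * m + 3 * m + 1) ** 2 - 1
    for m in range(0, 1000)
)
```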
|
171,602 | <p>In all calculus textbooks, after the part about successive derivatives, the $C^k$ class of functions is defined.
The definition says:</p>
<blockquote>
<p>A function is of class $C^k$ if it is differentiable $k$ times and the $k$-th derivative is continuous.</p>
</blockquote>
<p>Wouldn't it be more natural to define them as the class of functions that are differentiable $k$ times?
Why is the continuity of the $k$th derivative so important as to justify a specific definition?</p>
| Community | -1 | <p>One reason is that it's desirable to equip a class of functions with a <a href="http://en.wikipedia.org/wiki/Norm_%28mathematics%29">norm</a>, particularly with a norm which makes the class a complete metric space, i.e., a <a href="http://en.wikipedia.org/wiki/Banach_space">Banach space</a>. In $C^k$ we can use the supremum of the $k$th derivative (plus lower-order terms to make the norm nonzero on polynomials of degree less than $k$). A general differentiable function may well have an unbounded derivative, e.g., $f(x)=x^2\sin e^{1/x}$. We could try looking at the space of functions with <em>bounded</em> $k$th derivative, denoting it $B^k$. I can't tell off the top of my head if $B^k$ is complete, but it loses to $C^k$ in another important aspect: polynomials are dense in $C^k$ but not in $B^k$. </p>
<p>Conclusion: if you want a complete normed space in which the norm has to do with the supremum of the $k$th derivative, and in which polynomials are dense, then $C^k$ is what you have to use. </p>
<p>If you decide to use the <strong>integral</strong> of the $k$th derivative instead, you get <a href="http://en.wikipedia.org/wiki/Sobolev_space">Sobolev spaces</a>. </p>
|
1,650,638 | <p>I have a sequence of reals $S = s_1,s_2,\dots,s_n$ such that $s_i-s_{i-1}$ is Gaussian-distributed. From a histogram of the sequence $S$ (10000 elements), it appears to follow a uniform distribution. Is this true? If yes, can we prove it? If no, what can we say about $s_i$?</p>
| Martin Argerami | 22,857 | <p>A linear operator is not hermitian "with respect to a basis". </p>
<p>When there is a basis of eigenvectors as you mention, $Av_k=\lambda_kv_k$, then $A$ is already diagonal. The point of diagonalization is to find such a basis. </p>
|
4,114,962 | <blockquote>
<p>Prove that the product of four consecutive natural numbers are not the square of an integer</p>
</blockquote>
<p>Would appreciate any thoughts and feedback on my suggested proof, which is as follows:</p>
<p>Let <span class="math-container">$f(n) = n(n+1)(n+2)(n+3) $</span>.
Multiplying out the expression and refactoring it in a slightly different way gives
<span class="math-container">$$f(n) = n^4 + 6n^3+11n^2+6n \\= n^4 + 6n^3 + 9n^2 + 2n^2 + 6n = (n^2 + 3n)^2 + 2n(n+3). \tag{1}\label{1} $$</span></p>
<p>We want to show that the only possible way for <span class="math-container">$ f(n) $</span> to be the square of an integer is if <span class="math-container">$ f(n) = (n^2 + 3n +1 ) ^2. $</span> We show this by proving that <span class="math-container">$ (n^2+3n)^2 < f(n) < (n^2+3n+2)^2 $</span>. The left-hand side follows immediately from <span class="math-container">$(1)$</span>, since <span class="math-container">$ 2n(n+3) > 0 $</span> for all <span class="math-container">$ n \geq 1 $</span>, and the right-hand side can be verified by multiplying out both sides:
<span class="math-container">$$
\begin{align}
(n^2+3n)^2 + 2n(n+3) &< (n^2+3n+2)^2 \\ \iff
n^4 + 6n^3 + 11n^2 + 6n &< n^4 + 9n^2 + 4 + 6n^3+4n^2+12n \\ \iff
0 &< 2n^2 + 6n + 4
\end{align}
$$</span>
which is true for all <span class="math-container">$n \geq 1 $</span>. Now we note that <span class="math-container">$n^2+3n = n(n+3)$</span> is even since one of the factors <span class="math-container">$n$</span> or <span class="math-container">$n+3$</span> is even for all <span class="math-container">$n$</span>. It follows that <span class="math-container">$ n^2+3n+1$</span> must be odd, and so <span class="math-container">$ (n^2+3n+1)^2 $</span> must be odd. But <span class="math-container">$ f(n) $</span> must be even, since either <span class="math-container">$n$</span> and <span class="math-container">$(n+2)$</span> are even, or <span class="math-container">$(n+1)$</span> and <span class="math-container">$(n+3)$</span> are even, and an even number multiplied by an odd number is an even number. So <span class="math-container">$f(n) \neq (n^2 + 3n +1)^2$</span>, and therefore <span class="math-container">$f(n)$</span> cannot be the square of an integer for any <span class="math-container">$n \geq 1 $</span>.</p>
| jjagmath | 571,433 | <p>Hint: <span class="math-container">$n(n+1)(n+2)(n+3) +1$</span> is a square.</p>
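<p>Consistent with the hint, one can check numerically that the product is always exactly one less than a perfect square, hence never a square itself (a spot check of my own):</p>

```python
from math import isqrt

def is_square(k):
    r = isqrt(k)
    return r * r == k

products = [n * (n + 1) * (n + 2) * (n + 3) for n in range(1, 500)]
all_plus_one_square = all(is_square(p + 1) for p in products)
any_product_square = any(is_square(p) for p in products)
```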
|
2,539,327 | <p>The equation of a parabola is given by:
$y= ax^2 + bx+c$</p>
<p>Why is it that when the coefficient of $x^2$, i.e. $a$, is positive we get an upward parabola, and when it's negative we get a downward parabola? </p>
<p>Also, I <a href="https://www.desmos.com/calculator/ldyxmzfrws" rel="nofollow noreferrer">saw</a> that increasing the value of $|a|$ narrows the parabola, why? </p>
<p>Lastly, what is the role of $b$ in determining the structure of this parabola? </p>
| Dr. Sonnhard Graubner | 175,066 | <p>let $$f(x)=ax^2+bx+c$$ then we have $$f'(x)=2ax+b$$ and $$f''(x)=2a$$ if $$a>0$$ then we get a Minimum Point, if $a<0$ then we get a Maximum Point.</p>
|
2,539,327 | <p>The equation of a parabola is given by:
$y= ax^2 + bx+c$</p>
<p>Why is it that when the coefficient of $x^2$, i.e. $a$, is positive we get an upward parabola, and when it's negative we get a downward parabola? </p>
<p>Also, I <a href="https://www.desmos.com/calculator/ldyxmzfrws" rel="nofollow noreferrer">saw</a> that increasing the value of $|a|$ narrows the parabola, why? </p>
<p>Lastly, what is the role of $b$ in determining the structure of this parabola? </p>
| Glorious Nathalie | 948,761 | <p>You're given <span class="math-container">$y = a x^2 + b x + c $</span></p>
<p>Start your analysis by completing the square in <span class="math-container">$x$</span>, as follows:</p>
<p><span class="math-container">$\begin{equation}
\begin{split}
y &= a ( x^2 + \dfrac{b}{a} x ) + c \\
&= a (x + \dfrac{b}{2a} )^2 - a (\dfrac{b}{2a})^2 + c \\
&= a (x - x_0)^2 + y_0 \\
\end{split}
\end{equation}$</span></p>
<p>So that,</p>
<p><span class="math-container">$ (y - y_0) = a (x - x_0)^2 $</span></p>
<p>If <span class="math-container">$a \gt 0 $</span>, then as <span class="math-container">$x $</span> gets away from <span class="math-container">$x_0$</span> in either direction (left or right), <span class="math-container">$y$</span> will increase above <span class="math-container">$y_0$</span>, and if we increase <span class="math-container">$a$</span> then <span class="math-container">$y$</span> will grow faster, thus for the same value of <span class="math-container">$|x - x_0|$</span> , <span class="math-container">$y $</span> will be higher, thus making the graph look narrower. The same can be said when <span class="math-container">$a \lt 0$</span> where deviations of <span class="math-container">$x$</span> from <span class="math-container">$x_0$</span> cause <span class="math-container">$y$</span> to dip below <span class="math-container">$y_0$</span> , and as we make the negative <span class="math-container">$a$</span> more negative, the deviation of <span class="math-container">$y$</span> from <span class="math-container">$y_0$</span> will increase, again making the graph narrower.</p>
|
3,490,454 | <p>Definition from Munkres, the textbook we are following: If <span class="math-container">$Y$</span> is a compact Hausdorff space and <span class="math-container">$X$</span> is a proper subspace of <span class="math-container">$Y$</span> whose closure equals <span class="math-container">$Y$</span>, then <span class="math-container">$Y$</span> is said to be a compactification of <span class="math-container">$X$</span>. If <span class="math-container">$Y \setminus X$</span> equals a single point, then <span class="math-container">$Y$</span> is called the one-point compactification of <span class="math-container">$X$</span>. From this definition, I understood it this way: I just need to add one single point to the given space and it would be compact and Hausdorff. But don’t I need to add 4 points? Namely 0, 1, 2 and 3. Because these are the limit points that the given space lacks to be closed, and in return, compact. What am I missing?</p>
| AHusain | 277,089 | <p>Think about just one interval <span class="math-container">$(0,2\pi)$</span> first. If you add 2 points, you could make the compact interval <span class="math-container">$[0,2\pi]$</span>. But the one-point compactification does it differently: it adds only one point, so that the space becomes a circle. Think of <span class="math-container">$(0,2\pi)$</span> together with the identification <span class="math-container">$0 \equiv 2\pi$</span> as angles around the unit circle.</p>
<p>With 2 intervals, you could add 4 points as you are thinking and get something compact. But one point compactification does something different. The neighborhoods of the added point are totally different.</p>
|
4,338,558 | <p>I am not sure if my proof here is sound, please could I have some opinions on it? If you disagree, I would appreciate some advice on how to fix my proof. Thanks</p>
<p><span class="math-container">$X_1, X_2, ..., X_n$</span> are countably infinite sets.</p>
<p>Let <span class="math-container">$X_1 = \{{x_1}_1, {x_1}_2, {x_1}_3, ... \}$</span></p>
<p>Let <span class="math-container">$X_2 = \{{x_2}_1, {x_2}_2, {x_2}_3, ... \}$</span></p>
<p>...</p>
<p>Let <span class="math-container">$X_n = \{{x_n}_1, {x_n}_2, {x_n}_3, ... \}$</span></p>
<p>Let <span class="math-container">$P_n$</span> be the list of the first <span class="math-container">$n$</span> ordered primes: <span class="math-container">$P_n = (2,3,5,...,p_n) = (p_1,p_2,p_3,...,p_n)$</span></p>
<p>Define the injection: <span class="math-container">$\sigma: X_1 \times X_2 \times ... \times X_n \to \mathbb{N}$</span></p>
<p><span class="math-container">$\sigma (({x_1}_A, {x_2}_B, {x_3}_C,...,{x_n}_N)) = p_1^A\cdot p_2^B \cdot p_3^C \cdot ... \cdot p_n^N$</span></p>
<p>By the Fundamental Theorem of Arithmetic, <span class="math-container">$\sigma$</span> is an injection, because if two elements in the domain map to the same element in the codomain, they must be the same element.</p>
<p>Clearly, the image is infinite. So by definition, the Cartesian product of n sets which are all countably infinite, is itself, countably infinite.</p>
<p>EDIT: Is it worth noting that my <span class="math-container">$X_n$</span> sets should be ordered or does that not matter?</p>
| H A Helfgott | 169,068 | <p>Let me treat the case of <span class="math-container">$F(x,y) = x^a y^a \max(x,y)$</span>, <span class="math-container">$a>1$</span>. (In the above, <span class="math-container">$a=5/3$</span>.)</p>
<p>We let <span class="math-container">$u = x y$</span>. Then our double integral equals 2 I, where
<span class="math-container">$$I = \mathop{\mathop{\int \int}_{(x,y)\in U}}_{x<y} \frac{\log x \log y}{x^{a} y^{a}\cdot y} dy dx =
\mathop{\mathop{\int \int}_{(x,u/x)\in U}}_{x<u/x} \frac{\log x \log \frac{u}{x}}{u^{a}\cdot u/x} \frac{du}{x} dx =
\mathop{\mathop{\int \int}_{(x,u/x)\in U}}_{x<\sqrt{u}} \frac{\log x \log \frac{u}{x}}{u^{a+1}} dx du$$</span>
and <span class="math-container">$(x,u/x)\in U$</span> if and only if <span class="math-container">$u^{a+1}/x>R$</span>. Hence, the inner integral is
<span class="math-container">$$\int_{1}^{\min(\sqrt{u},u^{a+1}/R)} \log x \log \frac{u}{x} dx =
x \left(\log \frac{x}{e} \log \frac{u e}{ x}-1\right)\big|_1^{\min(\sqrt{u},u^{a+1}/R)}$$</span>
and so
<span class="math-container">$$\begin{aligned}I &= \int_{R^{\frac{2}{2a+1}}}^\infty \left(\sqrt{u} \left(\log \frac{\sqrt{u}}{e} \log \frac{u e}{\sqrt{u}} - 1\right)-(\log \frac{1}{e} \log e u - 1)\right) \frac{du}{u^{a+1}}\\
&+\int_{R^{\frac{1}{a+1}}}^{R^{\frac{2}{2a+1}}} \left(\frac{u^{a+1}}{R} \left(\log \frac{u^{a+1}}{e R} \log \frac{u e}{u^{a+1}/R} - 1\right)-(\log \frac{1}{e} \log e u - 1)\right) \frac{du}{u^{a+1}}\\ &= \int_{R^{\frac{2}{2a+1}}}^\infty \left(\frac{1}{4} \log^2 u - 2\right)\; \frac{du}{u^{a+1/2}}
+\int_{R^{\frac{1}{a+1}}}^{\infty} (\log u + 2)\frac{du}{u^{a+1}}\\
&+ \frac{1}{R}
\int_{R^{\frac{1}{a+1}}}^{R^{\frac{2}{2a+1}}} \left(-
a (a+1) \log^2 u + (2 a+1) \log e R \log u - (\log e R)^2 - 1
\right) du.
\end{aligned}$$</span>
Let us call the three integrals in the last expression <span class="math-container">$I_1$</span>, <span class="math-container">$I_2$</span> and <span class="math-container">$I_3$</span>, respectively. Then
<span class="math-container">$$I_1 = \left(\frac{1/4}{1/2- a} \log^2 u-
\frac{1/2}{(1/2-a)^2} \log u + \frac{1/2}{(1/2-a)^3} - \frac{2}{1/2-a}\right)\frac{1}{u^{a-1/2}} \big|_{R^{\frac{2}{2a+1}}}^\infty =
\left(\frac{\log^2 R}{(2a+1)^2 (a-1/2)} +
\frac{ \log R}{(2a+1)(a-1/2)^2}
- \frac{2 (a-1) a}{(a-1/2)^3}
\right) \frac{1}{R^{\frac{2a-1}{2a+1}}},$$</span>
<span class="math-container">$$I_2 = \frac{-\frac{\log u}{a} -\frac{1}{a^2} - \frac{2}{a} }{u^{a}} \big|_{R^{\frac{1}{a+1}}}^{\infty} =
\frac{\frac{1}{a (a+1)} \log R + \frac{2 a + 1}{a^2}}{R^{\frac{a}{a+1}}},$$</span>
<span class="math-container">$$\begin{aligned}
I_3
&=\left(- a (a+1) \log^2 u +
\left((2 a + 1) \log R + 2 a^2 + 4 a + 1\right) \log u
-\left(\log^2 R + (2 a + 3) \log R + 2 a^2 + 4 a + 3\right)\right)u\bigg|_{R^{\frac{1}{a+1}}}^{R^{\frac{2}{2a+1}}}\\
&= \left(\frac{\log^2 R}{(2 a + 1)^2} - \frac{\log R }{2 a + 1}
- (2 a^2 + 4 a + 3)\right)
R^{\frac{2}{2a+1}} -
\left(- \frac{a+2}{a+1} \log R - (2 a^2 + 4 a + 3)\right) R^{\frac{1}{a+1}}.
\end{aligned}$$</span>
Hence (unless I've miscalculated anything), our original integral was <span class="math-container">$2 I$</span>, where
<span class="math-container">$$I = \frac{\frac{1}{4 a^2-1}\log^2 R -
\frac{2a-3}{(2 a -1)^2} \log R -
(2 a^2+ 4 a + 3) - \frac{2 (a-1) a}{(a-1/2)^3}
}{R^{\frac{2a-1}{2a+1}}} + \frac{\frac{a+1}{a} \log R + (2 a^2+ 4 a + 3)+\frac{2a+1}{a^2}}{R^{\frac{a}{a+1}}}.$$</span>
There has to be a nicer way. I just spent an entire working day (ending at 4:50am) trying to get this straight, and I'm still not sure it is finally right! If that's the case for this one, let us not even think about the other ones.</p>
<p>(Is there an easy way to show that <span class="math-container">$I\leq \frac{\log^2 R
}{(4 a^2-1)R^{\frac{2a-1}{2a+1}}}$</span> is right in a certain range?)</p>
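<p>As a partial sanity check, the closed form obtained above for <span class="math-container">$I_2$</span> can be compared against direct numerical quadrature. The sketch below (plain Python; the values <span class="math-container">$a = 5/3$</span>, <span class="math-container">$R = 100$</span> are illustrative) uses the substitution <span class="math-container">$u = T e^s$</span> with <span class="math-container">$T = R^{1/(a+1)}$</span>, which makes the integrand decay exponentially so that a truncated Simpson rule suffices.</p>

```python
import math

a, R = 5 / 3, 100.0
T = R ** (1 / (a + 1))  # lower limit of the I_2 integral

# After u = T*e^s, du = T*e^s ds, the integral becomes
#   I_2 = integral over s in [0, inf) of (log T + s + 2) * T**(-a) * exp(-a*s) ds.
def integrand(s):
    return (math.log(T) + s + 2) * T ** (-a) * math.exp(-a * s)

# Composite Simpson's rule on [0, 60]; the tail beyond is negligible (~ e^{-100}).
n, hi = 6000, 60.0
h = hi / n
acc = integrand(0.0) + integrand(hi)
for i in range(1, n):
    acc += (4 if i % 2 else 2) * integrand(i * h)
I2_numeric = acc * h / 3

# Closed form claimed in the answer above.
I2_closed = (math.log(R) / (a * (a + 1)) + (2 * a + 1) / a ** 2) / R ** (a / (a + 1))
```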
|
110,867 | <p>I want to prove that the function $f(x) := x^3$ for all real $x$ defines a homeomorphism from $\mathbb{R}$ to $\mathbb{R}$. But I am finding it difficult to prove that the inverse map is continuous!</p>
| Henrik | 22,705 | <p>HINT: As far as I know, you could use the very powerful tool that any continuous bijection from a compact topological space onto a Hausdorff topological space is a homeomorphism. </p>
|
4,138,564 | <p>Prove that if <span class="math-container">$q_i$</span>'s are polynomials of degree <span class="math-container">$i$</span>,then <span class="math-container">$q_0,...,q_n$</span> are linearly independent.</p>
<hr />
<p>Assume they are not linearly independent, so there is a scalar <span class="math-container">$\lambda_m \ne 0$</span> with <span class="math-container">$0\le m\le n$</span> such that <span class="math-container">$$\sum_{i=0}^{n}\lambda_iq_{i}\left(x\right)=0$$</span>
<span class="math-container">$$\sum_{i=0}^{n}\frac{\lambda_i}{\lambda_m}q_{i}\left(x\right)=0$$</span></p>
<p>How to get a contradiction?</p>
| Community | -1 | <p>As the matrix of coefficients is triangular with a nonzero diagonal, you can reduce to the canonical monomial basis by Gaussian elimination. And the canonical basis <span class="math-container">$is$</span> of course made of independent vectors. (Note that since every degree from <span class="math-container">$0$</span> to <span class="math-container">$n$</span> occurs, the <span class="math-container">$q_i$</span> in fact form a basis for the polynomials of degree at most <span class="math-container">$n$</span>.)</p>
|
3,143,532 | <blockquote>
<p>Find <span class="math-container">$\det A$</span> and <span class="math-container">$\text{Tr} A$</span> for the matrix <span class="math-container">$A\in M_n(\mathbb{Q})$</span> such that <span class="math-container">$\sqrt[n]{p}$</span> is an eigenvalue of <span class="math-container">$A$</span>, where <span class="math-container">$p$</span> is a prime number, or a positive integer such that <span class="math-container">$\sqrt[n]{p}$</span> is an irrational number.</p>
</blockquote>
<p>My attempt is described below. From hypothesis we know that
<span class="math-container">$$ \det(A-\sqrt[n]{p}I)=0,
$$</span> hence <span class="math-container">$ \det(A+\sqrt[n]{p}I)=0$</span> because the characteristic polynomial of matrix <span class="math-container">$A$</span> has rational coefficients. Multiplying these two relation we get <span class="math-container">$ \det(A^2-\sqrt[n]{p^2}I)=0$</span> and so on. </p>
<p>From here, I don't find something. How to proceed? Thanks.</p>
| J.G. | 56,861 | <p>To complement Zacky's answer I'll add a proof that <span class="math-container">$I(0)=0$</span>. Applying <span class="math-container">$x\mapsto\frac{1}{x}$</span> for <span class="math-container">$x\ge 1$</span> gives <span class="math-container">$$I(x)=2\int_0^\infty\ln|1-x^{-2}|dx=2\int_0^1\left[(1+\frac{1}{x^2})\ln(1-x^2)-2\ln x\right]\\=-2\int_0^1\left[(1+x^2)\sum_{n\ge 0}\frac{x^{2n}}{n+1}+2\ln x\right]dx=-2\int_0^1\left[\sum_{n\ge 0}\left(\frac{x^{2n}}{n+1}+\frac{x^{2n+2}}{n+1}\right)+2\ln x\right]dx\\=-2\left[\sum_{n\ge 0}\left(\frac{x^{2n+1}}{(n+1)(2n+1)}+\frac{x^{2n+3}}{(n+1)(2n+3)}\right)+2x\ln x-2x\right]_0^1\\=-2\left[\sum_{n\ge 0}\left(\frac{4}{(2n+1)(2n+3)}\right)-2\right],$$</span>which vanishes by partial fractions.</p>
|
1,552,775 | <p>In how many ways 5 blue pens and 6 black pens can be distributed to 6 children?</p>
<hr>
<p>To do that I used:
$\text{Coefficient of } x^6 \text{ in } ((1+x+x^2+x^3+x^4+x^5)(1+x+x^2+x^3+x^4+x^5+x^6))$
and got answer = 6</p>
<p>but options given are:</p>
<p>a) 97020</p>
<p>b) 116424</p>
<p>c) 8008</p>
<p>d) 672</p>
<p>How does taking coefficient gives answer for distribution problems?</p>
| seed | 294,587 | <p>If you denote by $B^n_m$ the number of ways to distribute n pens among m children, you can write down its generating series:
$$
\sum_{n=0}^\infty{B_m^nx^n}=(1+x+x^2+\ldots)^m=(1-x)^{-m}
$$
Now by taking the coefficient at $x^n$ you find a formula for $B_m^n$:
$$
B_m^n=(-1)^n C_{-m}^n=C_{m+n-1}^n
$$
The answer to the problem is $B^5_6 B^6_6 = C_{10}^5 C_{11}^5 = 116424$.</p>
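<p>A brute-force check (plain Python, not needed for the argument) that direct enumeration agrees with these stars-and-bars values:</p>

```python
from itertools import product

def distributions(pens, children):
    """Count solutions of n_1 + ... + n_children = pens with every n_i >= 0."""
    return sum(1 for t in product(range(pens + 1), repeat=children)
               if sum(t) == pens)

blue = distributions(5, 6)    # B_6^5 = C(10, 5)
black = distributions(6, 6)   # B_6^6 = C(11, 5)
answer = blue * black
```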
|
3,676,719 | <p>I've recently started to learn about differential equations and I am having a hard time solving any of them.</p>
<p>I feel like I'm missing some steps.</p>
<p>Therefore, could I please know how to solve:</p>
<p><span class="math-container">$(ye^x +y)dx+ye^{(x+y)}dy=0$</span></p>
<p>So far, I've looked around the book and some websites that could give the final answer, to at least know what way should I go, but I feel like I'm going nowhere. All I was able to find is that the equation above is something called "an equation with separable variables".</p>
<p>Equation:
<a href="https://www.symbolab.com/solver/ordinary-differential-equation-calculator/%5Cleft(ye%5E%7Bx%7D%2By%5Cright)dx%2Bye%5E%7Bx%2By%7Ddy%3D0" rel="nofollow noreferrer">here</a></p>
<p>Thank you very much.</p>
| Ben Grossmann | 81,360 | <p>In general, the statement "if <span class="math-container">$A^2 = 0$</span>, then <span class="math-container">$\operatorname{tr}(A) = 0$</span>" can be proven using the fact that the trace is the sum of the eigenvalues. One proof for the specific case of <span class="math-container">$2 \times 2$</span> matrices is as follows.</p>
<p>It's clear that if <span class="math-container">$A = 0$</span>, then <span class="math-container">$A^2 = 0$</span> and <span class="math-container">$\operatorname{trace}(A) = 0$</span>. So, suppose that <span class="math-container">$A^2 = 0$</span> but <span class="math-container">$A \neq 0$</span>. Because <span class="math-container">$A^2 = 0$</span>, <span class="math-container">$A$</span> cannot be an invertible matrix. It follows that the columns of <span class="math-container">$A$</span> are linearly dependent. As a consequence, we can write <span class="math-container">$A$</span> in the form <span class="math-container">$A = xy^T$</span> for some vectors <span class="math-container">$x$</span> and <span class="math-container">$y$</span> (in particular, we can let <span class="math-container">$x$</span> be a non-zero column of <span class="math-container">$A$</span>).</p>
<p>Verify that the trace of <span class="math-container">$A$</span> is also equal to <span class="math-container">$y^Tx$</span>. On the other hand, note that
<span class="math-container">$$
A^2 = (xy^T)(xy^T) = x(y^Tx)y^T = (y^Tx) \cdot xy^T = \operatorname{trace}(A) \cdot A.
$$</span>
So, <span class="math-container">$A^2 = 0$</span> implies that <span class="math-container">$\operatorname{trace}(A) = 0$</span>.</p>
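<p>A quick numerical illustration of the identity <span class="math-container">$A^2 = \operatorname{trace}(A) \cdot A$</span> for a rank-one matrix, using an arbitrary pair of example vectors:</p>

```python
def outer(x, y):
    """Rank-one 2x2 matrix A = x y^T."""
    return [[xi * yj for yj in y] for xi in x]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

x, y = [2, 3], [5, -1]                       # arbitrary illustrative vectors
A = outer(x, y)
trace_A = A[0][0] + A[1][1]                  # equals y^T x
A2 = matmul(A, A)
scaled = [[trace_A * A[i][j] for j in range(2)] for i in range(2)]
```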
|
285,975 | <p>I'm not sure I understand what to do with what's given to me to solve this. I know it has to do with the relationship between velocity, acceleration and time.</p>
<blockquote>
<p>At a distance of <span class="math-container">$45m$</span> from a traffic light, a car traveling <span class="math-container">$15 m/sec$</span> is brought to a stop at a constant deceleration.</p>
<p>a. What is the value of deceleration?</p>
<p>b. How far has the car moved when its speed has been reduced to <span class="math-container">$3m/sec$</span>?</p>
<p>c. How many seconds would the car take to come to a full stop?</p>
</blockquote>
<p>Can somebody give me some hints as to where I should start? All I know from reading this is that <span class="math-container">$v_0=15m$</span>, and I have no idea what to do with the <span class="math-container">$45m$</span> distance. I can't tell if it starts to slow down when it gets to <span class="math-container">$45m$</span> from the light, or stops <span class="math-container">$45m$</span> from the light.</p>
<hr />
<p>Edit:</p>
<p>I do know that since accelleration is the change in velocity over a change in time, <span class="math-container">$V(t)=\int a\ dt=at+C$</span>, where <span class="math-container">$C=v_0$</span>. Also, <span class="math-container">$S(t)=\int v_{0}+at\ dt=s_0+v_0t+\frac{1}{2}at^2$</span>. But I don't see a time variable to plug in to get the answers I need... or am I missing something?</p>
| Ishan Banerjee | 52,488 | <p>$f(kc)$, where $k\in \mathbb{N}$ and $c$ is the constant term of the polynomial, is nonprime, since $c \mid f(kc)$. Not sure what to do if $c=1$ or $c=-1$ though.</p>
|
285,975 | <p>I'm not sure I understand what to do with what's given to me to solve this. I know it has to do with the relationship between velocity, acceleration and time.</p>
<blockquote>
<p>At a distance of <span class="math-container">$45m$</span> from a traffic light, a car traveling <span class="math-container">$15 m/sec$</span> is brought to a stop at a constant deceleration.</p>
<p>a. What is the value of deceleration?</p>
<p>b. How far has the car moved when its speed has been reduced to <span class="math-container">$3m/sec$</span>?</p>
<p>c. How many seconds would the car take to come to a full stop?</p>
</blockquote>
<p>Can somebody give me some hints as to where I should start? All I know from reading this is that <span class="math-container">$v_0=15m$</span>, and I have no idea what to do with the <span class="math-container">$45m$</span> distance. I can't tell if it starts to slow down when it gets to <span class="math-container">$45m$</span> from the light, or stops <span class="math-container">$45m$</span> from the light.</p>
<hr />
<p>Edit:</p>
<p>I do know that since accelleration is the change in velocity over a change in time, <span class="math-container">$V(t)=\int a\ dt=at+C$</span>, where <span class="math-container">$C=v_0$</span>. Also, <span class="math-container">$S(t)=\int v_{0}+at\ dt=s_0+v_0t+\frac{1}{2}at^2$</span>. But I don't see a time variable to plug in to get the answers I need... or am I missing something?</p>
| Math Gems | 75,092 | <p><strong>Hint</strong> $\ $ For some $\,x_0\,$ we have $\,f(x_0)= m\ne \pm1\: $ (see the note below for one proof). </p>
<p>It follows that $\,m\mid f(mn+x_0)\,$ since $\,{\rm mod\ } m\!:\ f(mn+x_0)\equiv f(x_0)\equiv m \equiv 0.\:$ </p>
<p><strong>Note</strong> $\ $ If $\,f(x) = \pm 1\,$ for all $\,x\in \Bbb Z\,$ then $\,f\!-\!1\,$ or $\,f\!+\!1\,$ would have infinitely many roots, impossible for a nonzero polynomial over a domain. Thus we do not need to use any order properties to deduce that $\,f\,$ takes a nonunit value.</p>
|
2,102,619 | <p><strong>What is the total running time of counting from 1 to n in binary if the time needed to add 1 to the current number i is proportional to the number of bits in the binary expansion of i that must change in going from i to i + 1?</strong></p>
<p>Having trouble getting my head around this one, because I cannot find a formula for how the bits change from some $i$ to $i+1$. For example:
$$\begin{array}{ccc}
\text{before} & \text{after} & \text{changes}\\
0000 & 0001 & 1\\
0001 & 0010 & 2\\
0010 & 0011 & 1\\
0011 & 0100 & 3
\end{array}$$</p>
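<p>To get a feel for the numbers, a quick empirical count (plain Python; not a solution in itself) suggests the total comes out close to $2n$, since bit $j$ flips roughly $n/2^j$ times:</p>

```python
def flips(i):
    # Bits that change going from i to i+1: the trailing run of 1s in i,
    # plus the single 0 bit that becomes 1.
    return bin(i ^ (i + 1)).count("1")

examples = [flips(0b0000), flips(0b0001), flips(0b0010), flips(0b0011)]

# Total work to count from 0 up to n = 1024 (2n - 1 here, since n is a power of two).
total = sum(flips(i) for i in range(1024))
```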
| Narasimham | 95,860 | <p>It is advantageous to recognize the slope $\tan\phi$ as a parameter.
$$ f^{\prime}= \tan \phi \tag1$$
$$ f^{\prime \prime}= \sec^2 \phi \frac{d\phi}{dx} \tag2 $$
$$ \sec^3 {\phi} = \sec^2 \phi \frac{d\phi}{dx} \tag3 $$
$$ \cos \phi\, d\phi = dx \tag4 $$
$$\sin\phi = x+c = x, \tag5 $$
with given BC. Now letting $y=f(x),$</p>
<p>$$ \frac{dy}{d\phi}= \frac{dy/dx}{d\phi/dx}= \frac{\tan\phi}{1/\cos \phi} = \sin \phi \tag6 $$</p>
<p>Integrating,</p>
<p>$$y = -\cos \phi +c2 = 1-\cos \phi, \tag7 $$</p>
<p>with given BC.</p>
<p>Parametric equations are:</p>
<p>$$ (x,y)= (\sin\phi,1-\cos\phi) \tag8 $$</p>
<p>A circle of unit radius tangential at origin can be readily recognized.</p>
<p>$$ x^2 + (y-1)^2 = 1 \tag9$$
The given ODE is the equation of unit curvature.</p>
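<p>As a numerical sanity check (not needed for the derivation), the lower arc $y = 1-\sqrt{1-x^2}$ of this circle can be verified to satisfy the unit-curvature equation $f^{\prime\prime} = \sec^3\phi = (1+f^{\prime\,2})^{3/2}$ and the boundary conditions at the origin:</p>

```python
import math

def y(t):
    return 1 - math.sqrt(1 - t * t)   # lower arc of x^2 + (y-1)^2 = 1

def yp(t):
    return t / math.sqrt(1 - t * t)   # y'

def ypp(t):
    return (1 - t * t) ** -1.5        # y''

pts = [-0.5, 0.0, 0.3, 0.8]
ode_residual = max(abs(ypp(t) - (1 + yp(t) ** 2) ** 1.5) for t in pts)
circle_residual = max(abs(t ** 2 + (y(t) - 1) ** 2 - 1) for t in pts)
```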
|
3,987,911 | <p>In the game of bridge, a player is given 13 cards from a deck of 52 cards, what is the probability that he/she gets exactly one king and one queen? Furthermore what is the expected value of the number of aces he/she gets?</p>
| ETS1331 | 418,667 | <p>We rephrase the problem: the bug does not teleport, but instead heads from <span class="math-container">$(0,-63)$</span> to <span class="math-container">$(x,0)$</span> at <span class="math-container">$\sqrt{2}$</span> units per second and then heads from <span class="math-container">$(x,0)$</span> to <span class="math-container">$(0,74-x)$</span> at <span class="math-container">$2$</span> units per second, since the time taken is the same for the bug.</p>
<p>Now, the key is that the points <span class="math-container">$(0,74-x)$</span>, <span class="math-container">$(x,0)$</span>, and <span class="math-container">$(37,37)$</span> always form an isosceles right triangle! So, we say that the bug heads from <span class="math-container">$(0,-63)$</span> to <span class="math-container">$(x,0)$</span> at <span class="math-container">$\sqrt{2}$</span> units per second, and then travels from <span class="math-container">$(x,0)$</span> to <span class="math-container">$(37,37)$</span> at <span class="math-container">$\sqrt{2}$</span> units per second as well (since the time taken for the bug is again the same).</p>
<p>Now, to minimize the time, we want <span class="math-container">$(0,-63)$</span>, <span class="math-container">$(x,0)$</span> and <span class="math-container">$(37,37)$</span> to be on a line (since the shortest distance between 2 points is a line.) The equation for the line through <span class="math-container">$(0,-63)$</span> and <span class="math-container">$(37,37)$</span> is <span class="math-container">$y = \frac{100}{37}x - 63$</span>, so the bug heads through <span class="math-container">$(\frac{2331}{100},0)$</span>, giving our answer of <span class="math-container">$\frac{2331}{100}$</span>.</p>
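<p>A brute-force check (illustrative Python) of the minimizing crossing point, using the travel times described above:</p>

```python
import math

def travel_time(x):
    # (0, -63) -> (x, 0) at sqrt(2) units/s, then (x, 0) -> (0, 74 - x) at 2 units/s
    return math.hypot(x, 63) / math.sqrt(2) + math.hypot(x, 74 - x) / 2

# Scan a fine grid on [0, 74] for the minimizing crossing point.
best_x = min((i / 1000 for i in range(74001)), key=travel_time)
```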
|
1,644,130 | <p>I proved for a bounded set $\Omega$ and $1 \leq p \leq q \leq \infty$ that $L^{q}(\Omega) \subset L^{p}(\Omega)$. What is an example showing that the inclusion is strict for $p<q$, and that the inclusion fails if $\Omega$ is not bounded?</p>
| ZFR | 99,694 | <p>Counterexample: $f(x)=\frac{1}{x}\in L^2([1,+\infty))$, but it does not belong to $L^1([1,+\infty))$</p>
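<p>The behaviour behind this counterexample can be seen numerically: the partial integrals $\int_1^N x^{-2}\,dx = 1 - 1/N$ stay bounded, while $\int_1^N x^{-1}\,dx = \log N$ grows without bound. A quick sketch (plain Python, midpoint rule):</p>

```python
def midpoint_integral(p, N, steps=200_000):
    """Midpoint-rule approximation of the integral of x**(-p) over [1, N]."""
    h = (N - 1) / steps
    return sum((1 + (i + 0.5) * h) ** -p for i in range(steps)) * h

square_mass = midpoint_integral(2, 10_000)  # integral of (1/x)^2: tends to 1, so 1/x is in L^2
abs_mass = midpoint_integral(1, 10_000)     # integral of 1/x: grows like log N, so 1/x is not in L^1
```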
|
2,755,091 | <p>I know that I have to prove this with the induction formula. I proved the base case, i.e. for $n=1$, where $7^1-1=6$ is divisible by $6$. But I got stuck on how to proceed with the second condition, i.e. the induction step for $k$.</p>
| Dr. Sonnhard Graubner | 175,066 | <p>We have $$7\equiv 1\mod 6$$ then $$7^n\equiv 1^n=1\equiv 1\mod 6$$ so $$7^n-1\equiv 0 \mod 6$$</p>
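<p>A quick computational double-check of the congruence (plain Python):</p>

```python
# 7 is congruent to 1 mod 6, hence 7**n is congruent to 1 mod 6,
# so 6 divides 7**n - 1 for every n >= 1.
remainders = [(7 ** n - 1) % 6 for n in range(1, 50)]
```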
|
15,994 | <p>I am trying to solve this problem <a href="https://mathematica.stackexchange.com/questions/15922/help-me-to-solve-a-condition-from-an-equation-that-requires-many-reductions">here</a> but have to understand the basics first. Suppose I have a product such as <span class="math-container">$a*(b+c+d+...+x)$</span>. How can I expand it into the result <span class="math-container">$a*b+a*c+...+a*x$</span>?</p>
<p><strong>Trials</strong></p>
<blockquote>
<p><strong>[Not working]</strong> Reduce[(a + b)*c]</p>
<p><strong>[Not working]</strong> Simplify[(a + b)*c] or FullSimplify[(a + b)*c]</p>
</blockquote>
| cartonn | 4,593 | <p>I know this has been answered in the comments, but just for the sake of having an actual answer..</p>
<pre><code>Expand[a(b+c)]
</code></pre>
<p>or </p>
<pre><code>Expand[(a+b)(b+c)]
</code></pre>
|
2,408,141 | <p>Let $X\to Y$ so that $f(x)=x^2+4$ </p>
<p>$X=\{6,9,2,8,5\}$ and $Y=\{27,85,40,8,12,29,63,68,17\}$</p>
<p>a) state the domain of $f$</p>
<p>b) state the range of $f$</p>
<p>I have calculated that the domain of $f(x)$ is all real numbers and that the range of $f(x)$ is $f\ge4$.</p>
<p>Therefore I thought that the correct answers were:</p>
<p>a) $\{6,9,8,5\}$</p>
<p>b) $\{27,85,40,8,12,29,63,68,17\}$</p>
<p>But this was wrong. Could anyone help me? Many thanks in advance!</p>
| Eric Towers | 123,905 | <p>Let $A,B$ be algebraic objects of type $T$. Let the algebraic operations $o_1, \dots, o_k$ for some positive integer $k$ be the operations of $T$. Let the $T$-operations in $A$ be denoted $o_j^A$ and the $T$-operations in $B$ be denoted $o_j^B$ for $j = 1 \dots k$. Suppose $A$ is generated as a $T$ by $a_1, \dots, a_n$ for some positive integer $n$. A $T$-homomorphism, $\varphi: A \rightarrow B$, is completely specified by $\varphi(a_i)$ for $i = 1 \dots n$.</p>
<p><strong>Comment</strong>: This isn't an algebraic fact, really. It is a syntactic fact plus a very small amount of logic. It's usually expressed by a variation of the phrase "extended by linearity".</p>
<p><strong>Proof</strong>: Any $a \in A$ is an expression, $x$, in the symbols $\{a_i \mid i=1\dots n\} \cup \{o_j^A \mid j = 1 \dots k\}$ since this is what "generated as a $T$" means. Replacing in $x$ every "$a_i$" by "$\varphi(a_i)$" and every "$o_j^A$" by "$o_j^B$" yields an expression in $B$ for $\varphi(a)$. Therefore, it is sufficient to specify $\varphi$ on the generators of $A$ and extend by linearity to the rest of $A$.</p>
<p>We should verify that if $y$ is a different expression for $a$ that $\varphi$ yields an equivalent expression for $\varphi(a)$. I.e., that the valuation equivalence is preserved by the substitution operation described above. If $x$ and $y$ are different expressions for $a$, there is a proof of this fact. Applying the same replacement rules as above, we obtain a proof of equality of the replaced versions of $x$ and $y$ in $B$. Consequently, the replacement operation above is well-defined with respect to choice of expression representative of $a$.</p>
<p><strong>Comment</strong>: There are more complicated algebraic objects for which we could permute the operations. This doesn't happen for $T \in \{$ semigroup, quasigroup, loop, magma, monoid, group, ring, module, field, vector space, algebra $\}$, so I don't discuss this possibility.</p>
<p><strong>Example</strong>: In your example of $\mathbb{Q}(\mathrm{e})$, if I specify where $\mathrm{e}$ goes in my field map, $\varphi$, I have a completely specified map. This is because by early lemmas, $\varphi(0_A) = 0_B$ and $\varphi(1_A) = 1_B$, which nails down the image of all of $\mathbb{Q}$ and then I've told you the image of $\mathrm{e}$ as well. Consequently, we can apply $\varphi$ to any expression in $\mathbb{Q}(\mathrm{e})$ and get an expression in the image field. Since $\varphi(\mathrm{e})$ is some element of the image field, so are all of its (integer) powers...</p>
<h2>Edit</h2>
<p>I still read that you are conflating $\mathbb{Q}[\mathrm{e}]$, the <em>ring</em> operation with $\mathbb{Q}(\mathrm{e})$, the <em>field</em> operation.</p>
<p>$\mathbb{Q}[\mathrm{e}]$ is the set of all finite ring expressions (in the operations $+$, $-$, and $\times$, with arbitrary nesting) using elements of $\mathbb{Q} \cup \{\mathrm{e}\}$. It is a lemma that each of these is equal to a finite $\mathbb{Q}$-linear combination of non-negative integer powers of $\mathrm{e}$.</p>
<p>$\mathbb{Q}(\mathrm{e})$ is the set of all finite field expressions (in the operations $+$, $-$, $\times$, and $\div$, with arbitrary nesting), excluding any occurrence of division by zero, using elements of $\mathbb{Q} \cup \{\mathrm{e}\}$. It is a lemma that each of these is equal to a finite $\mathbb{Q}$-linear combination of non-negative integer powers of $\mathrm{e}$ divided by a nonzero finite $\mathbb{Q}$-linear combination of non-negative integer powers of $\mathrm{e}$. Note that the multiplicative inverse of every nonzero such element is in this set. Specifically, for $p,q$ nonzero finite $\mathbb{Q}$-linear combination of non-negative integer powers of $\mathrm{e}$,
$$ \left( \frac{p}{q} \right)^{-1} = \frac{q}{p} \text{.} $$</p>
|
3,621,836 | <p>Let's say I have a large determinant with scalar elements:</p>
<p><span class="math-container">$$\begin{vmatrix} x\cdot a & x\cdot b & x\cdot c & x\cdot d \\ x\cdot e & x\cdot f & x\cdot g & x\cdot h \\ x\cdot i & x\cdot g & x\cdot k & x\cdot l \\ x\cdot m & x\cdot n & x\cdot o & x\cdot p\end{vmatrix}$$</span></p>
<p>Is it valid to factor out a term that's common to every element of the determinant? Is the following true:</p>
<p><span class="math-container">$$\begin{vmatrix} x\cdot a & x\cdot b & x\cdot c & x\cdot d \\ x\cdot e & x\cdot f & x\cdot g & x\cdot h \\ x\cdot i & x\cdot g & x\cdot k & x\cdot l \\ x\cdot m & x\cdot n & x\cdot o & x\cdot p\end{vmatrix} = x \cdot \begin{vmatrix} a & b & c & d \\ e & f & g & h \\ i & g & k & l \\ m & n & o & p\end{vmatrix}$$</span></p>
| Alan | 696,271 | <p>Clearly we have that <span class="math-container">$1, \omega, \omega^2$</span> are the roots of the polynomial <span class="math-container">$p(x):=x^3-1$</span>.
Therefore, it follows that
<span class="math-container">$$p(x)=(x-1)(x-\omega)(x-\omega^2)$$</span>
and that <span class="math-container">$\omega^3=1$</span>, so <span class="math-container">$\omega^{19}=(\omega^3)^6\,\omega=\omega$</span> and <span class="math-container">$\omega^{23}=(\omega^3)^7\,\omega^2=\omega^2$</span>. Now,
<span class="math-container">$$\begin{align}(2-\omega)(2-\omega^2)(2-\omega^{19})(2-\omega^{23})&=(2-1)(2-\omega)(2-\omega^2)(2-1)(2-\omega)(2-\omega^2)\\&=p(2)^2\\&=49\end{align}$$</span></p>
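<p>A quick numerical check (plain Python, taking <span class="math-container">$\omega = e^{2\pi i/3}$</span>):</p>

```python
import cmath

w = cmath.exp(2j * cmath.pi / 3)  # a primitive cube root of unity

product = (2 - w) * (2 - w ** 2) * (2 - w ** 19) * (2 - w ** 23)
p2_squared = ((2 - 1) * (2 - w) * (2 - w ** 2)) ** 2  # p(2)^2 with p(x) = x^3 - 1
```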
|
244,521 | <p>I cannot prove this statement. I tried using the definition of open sets; however, I feel that it is necessary to prove it in two directions, since it is an iff statement.</p>
<p>The question is, </p>
<p>Let $X$ be a metric space with metric $d$, and let $x_n$ be a sequence of points in $X$. Prove that $x_n\rightarrow a$ if and only if for every open set $U$ with $a\in U$, there is a number N such that whenever n > N we have $x_n\in U$</p>
<p>What I have done is, by using the definition of open sets, $B(a,r)\cap U$ is not an empty set and $r > 0$. So by choosing $r=1$ we obtain $x_1\in B(a,1)$ which intersects with $U$. Secondly I chose $r=1/2$ so $x_2\in B(a,1/2)$ which again intersects with $U$. By continuing like this eventually I chose $r=1/n$ so $x_n\in B(a,1/n)$ which intersects with $U$. Therefore $d(x_n,a) <\frac{1}{n}$. Therefore I conclude that when $n$ goes to infinity that $x_n\rightarrow a$. </p>
<p>Am I right up to this point? If so how should I proceed and what should I do to prove that for opposite inclusion?</p>
<p>I'm so new, so let me know if I do or did something inconvenient.
Thanks!</p>
| robjohn | 13,854 | <p><strong>Correction</strong></p>
<p>The condition $B(a,r)\cap U\ne\varnothing$ is used to show that $a$ is a limit point of $U$. To show that $U$ is open,
$$
\forall a\in U,\exists r>0:B(a,r)\subset U
$$</p>
<p><strong>Hint</strong></p>
<p>The standard metric space concept of convergence is
$$
\forall\epsilon>0,\exists N:\forall n\ge N,d(x_n,a)<\epsilon
$$</p>
|
3,774,263 | <ol>
<li>Is it possible that distance (<span class="math-container">$r$</span>) or angle (<span class="math-container">$θ$</span>) contains Imaginary or Complex number?</li>
<li>If the answer is yes, how can I convert a number like that (Polar with complex argument) to Rectangular number? <br />
For example:
<strong><span class="math-container">$(r,θ) = (5+2i, 3+4i)$</span></strong> how to convert to <strong><span class="math-container">$x+yi$</span></strong> ? <br /> <br />
Thank you.</li>
</ol>
| Kavi Rama Murthy | 142,385 | <p>Any non-zero continuous linear functional is surjective. If the kernel is <span class="math-container">$\{0\}$</span> then it is an isomorphism onto <span class="math-container">$\mathbb R$</span>. Hence <span class="math-container">$X$</span> is one-dimensional.</p>
<p>Proof of surjectivity: Choose <span class="math-container">$x$</span> such that <span class="math-container">$x^{*}(x) \neq 0$</span>. If <span class="math-container">$r$</span> is any real number, then <span class="math-container">$r=x^{*}(cx)$</span> where <span class="math-container">$c=\frac r {x^{*}(x)}$</span>.</p>
|
2,082,062 | <p>In Avner Friedman's Modern Analysis book, he makes a statement that has stumped me when proving that, for $\{f_n\}$ a sequence of measurable functions, $\sup_n f_n$ and $\inf_n f_n$ are measurable.</p>
<blockquote>
<p>The assertion for $\inf_n f_n $ follows from $\inf (f_n )=- \sup (-f_n)$ .</p>
</blockquote>
<p>I really struggle with the concept of inf and sup of a sequence of functions, so I do not see why this statement is true. Why does $\inf (f_n )=- \sup (-f_n)$? Thank you!</p>
| Patrick Abraham | 337,503 | <p>Let $f$ be a linear functional.
Then $f$, due to its linearity, fulfils the homogeneity and additivity properties.</p>
<p>Let $a \in \mathbb R$ and let $v$ be a vector:
$$f(a\vec v) = a f(\vec v)$$</p>
<p>Hint:</p>
<p>Can you write $-f$ as $a f$ for some scalar $a$?</p>
|
1,136,109 | <p>Suppose that $a,b$ are reals such that the roots of $ax^3-x^2+bx-1=0$ are all positive real numbers. Prove that: </p>
<p>$(i)~~0\le 3ab\le 1$<br>
$(ii)~~b\ge \sqrt3$. </p>
<p>My attempt: </p>
<p>I could solve the first part by Vieta's theorem. But, I am stuck on the second part. Please help. Thank you.</p>
| DeepSea | 101,504 | <p><strong>Hint</strong>: $x_1^3 +x_2^3 + x_3^3 = (x_1+x_2+x_3)^3 - 3(x_1+x_2)(x_2+x_3)(x_3+x_1) \leq (x_1+x_2+x_3)^3 - 24x_1x_2x_3$, by AM-GM inequality, and write:</p>
<p>$b = \dfrac{3+\displaystyle \sum x_i^2-a\displaystyle \sum x_i^3}{\displaystyle \sum x_i}$. Express $b$ as a function of $a$ and find the critical points, and take it from there. Use Vieta's theorem again. Use Cauchy-Schwarz inequality for: $\displaystyle \sum x_i^2 \geq \dfrac{1}{3}\cdot \left(\displaystyle \sum x_i\right)^2$ also. This implies:</p>
<p>$b \geq f(a)$, and you can take it from there.</p>
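Both algebraic steps in the hint can be spot-checked numerically. The sketch below is an illustrative sanity check of my own (not part of the argument): it verifies the cubic identity over random integers, and the AM-GM step $(x_1+x_2)(x_2+x_3)(x_3+x_1) \geq 8x_1x_2x_3$ over random positive reals.

```python
import random

# Spot-check of the identity x1^3+x2^3+x3^3 = (x1+x2+x3)^3 - 3(x1+x2)(x2+x3)(x3+x1)
# and of the AM-GM step (x1+x2)(x2+x3)(x3+x1) >= 8*x1*x2*x3 for positive reals.
random.seed(0)
ok = True
ok_amgm = True
for _ in range(200):
    x1, x2, x3 = (random.randint(-50, 50) for _ in range(3))
    lhs = x1 ** 3 + x2 ** 3 + x3 ** 3
    rhs = (x1 + x2 + x3) ** 3 - 3 * (x1 + x2) * (x2 + x3) * (x3 + x1)
    ok = ok and (lhs == rhs)
    a, b, c = (random.uniform(0.01, 10.0) for _ in range(3))
    ok_amgm = ok_amgm and (a + b) * (b + c) * (c + a) >= 8 * a * b * c - 1e-9
```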
|
1,963,214 | <p>$\displaystyle\sum_{i=0}^{k-1}2^i=2^k-1$ for all $k \in\Bbb N$.</p>
<p>Clearly, the first step here is easy. You can start with k=1 and solve to get $2^0=2^1-1$.</p>
<p>The induction step is a bit more challenging. I don't even know where to begin.</p>
| Community | -1 | <blockquote>
<p>How do (I) begin proving... </p>
</blockquote>
<p>There is always something you could try. For instance, try to <a href="https://en.wikipedia.org/wiki/How_to_Solve_It" rel="nofollow">solve some simpler related questions</a>:</p>
<ul>
<li>Can you write down by <strong><em>definition</em></strong> what "$f$ is differentiable at $0$" really means?</li>
<li>Can you prove the case when $n=m=1$?</li>
<li>Can you prove the case when $n=2$, $m=1$?</li>
<li>Can you see a pattern there and try to generalize your result?</li>
</ul>
<p>You would end up with estimating a quotient
$$
\frac{|f(0+h)-f(0)-Ah|}{|h|}.
$$
What can you tell about $f(0)$ by you assumptions? What candidate could $A$ be?</p>
|
1,145,902 | <p>I've searched for the inner product definition and I saw that it should only satisfy some conditions (axioms) and thus there could be several operations representing an inner product, one of which is the usual multiplication, or the dot product.
The question is: why is the inner product of two functions defined by $\int f_1(x)f_2(x)dx$? What is this choice of operation based on? There could be other operations satisfying the inner product conditions, so why this one?</p>
| YTS | 126,222 | <p>Yes you are right.
And $||x||$ is just $|x|$, since the absolute value is always non-negative. In general you have that $|g(x)|=g(x)$ if and only if $g(x)$ is non-negative. </p>
|
2,536,206 | <p>Let $M$ be a smooth manifold of dimension $n$, and let $T(M)$ be the tangent bundle of $M$. Let $F$ be a smooth vector field, which is to say a smooth section of the canonical map $T(M) \rightarrow M$. If I understand the manifold structure on $T(M)$ correctly, smoothness means that for any chart $(U,\phi)$ of $M$, we can use $\phi$ to identify $T_p(M)$ with $\mathbb{R}^n$ for all $p \in U$, and under these simultaneous identifications, $F$ becomes a map $\phi(U) \rightarrow \mathbb{R}^n$ which is smooth.</p>
<p>Let $p_0 \in M$ and $t_0 \in \mathbb{R}$. Wikipedia defines an <em>integral curve</em> for the vector field $F$, passing through $p_0$ at time $t_0$, to be an open neighborhood $J$ of $t_0$, together with a smooth morphism $\alpha: J \rightarrow M$, such that $\alpha(t_0) = p_0$ and</p>
<p>$$\alpha'(t) = F(\alpha(t))$$</p>
<p>for all $t \in J$. I am confused on what this equality is saying. First, I do not understand what $\alpha'$ means as a map from $J$ to the manifold $M$. Maybe a chart $(U,\phi)$ containing the image of $J$ must be chosen, and then we can talk about the derivative of the composition $\phi \circ \alpha: J \rightarrow \mathbb{R}^n$. Even so, that derivative is a priori a collection of linear maps $\mathbb{R} \rightarrow \mathbb{R}^n$, so for each $t \in J$, $\alpha'(t)$ can be thought of as a linear map $\mathbb{R} \rightarrow \mathbb{R}^n$. On the other hand, $F(\alpha(t))$ identifies as an element of $\mathbb{R}^n$. I don't see in what sense these things are supposed to be equal. Are we also using the identification $\textrm{Hom}_{\mathbb{R}}(\mathbb{R},V) = V$ for any vector space $V$? </p>
| rtybase | 22,583 | <p>From the set theory perspective, let's note
$$P \overset{\text{def}}{=}\{p \in \mathbb{N} \mid p \text{ - prime}\}$$
$$P_{\leq n} \overset{\text{def}}{=} \{p \in P \mid p \leq n \}$$
$$M_n \overset{\text{def}}{=} \{p \in P \mid p \mid n\}$$
$\color{red}{M_n \ne \varnothing, \forall n\geq2}$. Then, it's easy to show
$$P_n=P_{\leq n} \setminus M_n \tag{1}$$
$$M_n \subset P_{\leq n} \tag{2}$$
Now, let's assume $\exists n \ne m$, in fact we can assume $\color{red}{n>m}$, such that $\color{blue}{P_n = P_m}$ then
$$P_{\leq n} \setminus M_n = P_{\leq m} \setminus M_m \overset{(2)}{\Rightarrow} P_{\leq n}=\left(P_{\leq n} \setminus M_n \right) \bigcup M_n=\left(P_{\leq m} \setminus M_m \right) \bigcup M_n$$
or
$$P_{\leq n}=\left(P_{\leq m} \setminus M_m \right) \bigcup M_n \tag{3}$$
given $\color{red}{P_{\leq m} \subset P_{\leq n}}$ and $(2)$, it's easy to deduce that
$$M_m \subset M_n \tag{4}$$</p>
<p>There are 3 possible cases to exploit with $(4)$:</p>
<hr>
<p><strong>Case 1 (incomplete).</strong> $\forall k \in M_n, k\in M_m$ or $M_n=M_m$ (like $n=2^2 \cdot 3^2, m=2 \cdot 3$ for example), then $\color{green}{P_{\leq n} = P_{\leq m}}$. If $m=\prod\limits_{i} p_{k_i}^{\alpha_i}$ then $n=\prod\limits_{i} p_{k_i}^{\beta_i}$ with $\beta_i \geq \alpha_i$ and at least one $\beta_{i^{*}} > \alpha_{i^{*}}$. Since all $p_i \geq 2$ this means $$m < 2m \leq n$$ From <a href="https://en.wikipedia.org/wiki/Bertrand%27s_postulate" rel="nofollow noreferrer">Bertrand's postulate</a> there is a prime $p^{*}$ between $m$ and $2m$ with $p^{*} \notin P_{\leq m}$ and $p^{*} \in P_{\leq n}$, thus $\color{green}{P_{\leq m} \ne P_{\leq n}}$ - <strong>contradiction</strong>.</p>
<p><em>Obviously $\beta_i \geq \alpha_i$ doesn't cover all the cases, e.g. $n=2 \cdot 3^2, m=2^2 \cdot 3$.</em></p>
<hr>
<p><strong>Case 2.</strong> $\exists k \in M_n, k\notin M_m$ and $k \in P_{\leq m}$. Since $k \in P_{\leq m}$ and $k\notin M_m$ then $k \in P_m=P_{\leq m} \setminus M_m$, but because $k \in M_n$ and $k \in P_{\leq n}$ then $k \notin P_n=P_{\leq n} \setminus M_n$. Thus $\color{blue}{P_n \ne P_m}$ - <strong>contradiction</strong>.</p>
<hr>
<p><strong>Case 3.</strong> $\exists k \in M_n, k\notin M_m$ and $k \notin P_{\leq m}$. Then $n=k\cdot t \geq k > m$</p>
<ul>
<li>if $t=1$ then (given $k$ is prime) $n$ is prime and $P_n$ will contain all the prime factors of $m$, but $P_m$ will not contain prime factors of $m$, thus $\color{blue}{P_n \ne P_m}$ - <strong>contradiction</strong>.</li>
<li>if $t\geq2$ then between $n$ and $\frac{n}{2}$ there exists a prime $p^{*}$ (<strong>Proposition 1</strong>, below). Because $p^{*}>\frac{n}{2}=\frac{kt}{2}
\geq k>m \Rightarrow p^{*} \notin P_m$. Also $p^{*} \nmid n$, otherwise $n=qp^{*}\geq 2p^{*} > n$. Thus $p^{*} \in P_n$ and as a result $\color{blue}{P_n \ne P_m}$ - <strong>contradiction</strong>.</li>
</ul>
<hr>
<p><strong>Proposition 1.</strong> $\forall n \geq 3$ $\exists p$ - prime s.t. $\frac{n}{2}<p<n$</p>
<p>Let's take $k=\left \lfloor \frac{n}{2} \right \rfloor+1 \geq \frac{n}{2}$. <a href="https://en.wikipedia.org/wiki/Bertrand%27s_postulate" rel="nofollow noreferrer">Bertrand's postulate</a> says $\exists p$ - prime s.t. $k < p < 2k-2$ or
$$\color{red}{\frac{n}{2}\leq k} < p < 2k-2=\color{blue}{2 \left \lfloor \frac{n}{2} \right \rfloor \leq n} \tag{5}$$
simply because:</p>
<ul>
<li>for $n=2r$ (even) we have $\color{red}{\frac{n}{2}=r < r+1=\left \lfloor \frac{n}{2} \right \rfloor+1=k}$ and $\color{blue}{2 \left \lfloor \frac{n}{2} \right \rfloor=2r=n}$</li>
<li>for $n=2r+1$ (odd) we have $\color{red}{\frac{n}{2}=r+\frac{1}{2}<r+1=\left \lfloor r+\frac{1}{2} \right \rfloor+1=\left \lfloor \frac{n}{2} \right \rfloor+1=k}$ and $\color{blue}{2 \left \lfloor \frac{n}{2} \right \rfloor=2 \left \lfloor r+\frac{1}{2} \right \rfloor=2r<2r+1=n}$</li>
</ul>
<p><a href="https://en.wikipedia.org/wiki/Bertrand%27s_postulate" rel="nofollow noreferrer">Bertrand's postulate</a> says $(5)$ is true for $k>3$ or $n >4$. For $n=4$ we have $\frac{4}{2}<3<4$ and for $n=3$ we have $\frac{3}{2}<2<3$.</p>
|
3,624,732 | <p>I'm studying logical operators for school and there's a weird question that keeps bugging me even though it seems pretty basic.
<br>
I was asked to evaluate the proposition <strong>p -> q -> r</strong>, where p and r are False and q is True.
<br>
I tried evaluating it from left to right like this: <strong>( ( p -> q ) -> r )</strong> and got the wrong answer.
<br>
Then, I checked my result with an online tool at <a href="https://web.stanford.edu/class/cs103/tools/truth-table-tool/" rel="nofollow noreferrer">https://web.stanford.edu/class/cs103/tools/truth-table-tool/</a> and it evaluates the proposition from right to left like this: <strong>( p -> ( q -> r ) )</strong> ( you can see in <a href="https://i.stack.imgur.com/X5fnm.png" rel="nofollow noreferrer">this picture</a> ). I tried calculating the result again with this order and it was accepted as the right answer!
<br>
That's really odd, because my lecturer said that if operators are at the same level then the proposition should be evaluated from left to right. Have I misunderstood something?</p>
| user400188 | 400,188 | <p>The full convention is as follows: Outermost parentheses may be omitted, and in the absence of parentheses, the order of operations is</p>
<ol>
<li><span class="math-container">$\lnot$</span></li>
<li><span class="math-container">$\land$</span></li>
<li><span class="math-container">$\lor$</span></li>
<li><span class="math-container">$\impliedby $</span> and <span class="math-container">$\implies$</span> have equal precedence</li>
<li><span class="math-container">$\iff$</span></li>
</ol>
<p><span class="math-container">$\impliedby$</span> is left associative, and as Wuestenfux stated, <span class="math-container">$\implies$</span> is right associative while <span class="math-container">$\iff$</span> is left associative. </p>
<p>An example using everything would be: <span class="math-container">$a\lor b\impliedby ~a\iff a\land\lnot a\lor b$</span>, which, since <span class="math-container">$\iff$</span> binds loosest, reads <span class="math-container">$((a\lor b\impliedby ~a)\iff ((a\land\lnot a)\lor b))$</span>. </p>
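To see the right-associativity of $\implies$ in action, one can evaluate both readings of <strong>p -> q -> r</strong> directly, with the truth values from the question ($p$, $r$ false, $q$ true). The <code>implies</code> helper below is just a throwaway name of mine:

```python
# Material implication as a tiny helper; truth values from the question.
def implies(p, q):
    return (not p) or q

p, q, r = False, True, False
right = implies(p, implies(q, r))   # p -> (q -> r): the conventional right-assoc reading
left = implies(implies(p, q), r)    # (p -> q) -> r: the left-assoc reading
```

The two readings disagree on these inputs, which is exactly the discrepancy the asker ran into.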
|
2,365,797 | <p>For integers $n\geq 1$ let $\mu(n)$ the Möbius function, and let $\psi(z)$ the Digamma function, see its definition and how is denoted and typeset in Wolfram Language, if you need it from this <a href="http://mathworld.wolfram.com/DigammaFunction.html" rel="nofollow noreferrer">MathWorld</a>.</p>
<p>While I was doing calculations I found this problem:</p>
<p><em>Calculate a good approximation of</em> $$\sum_{n=1}^\infty\frac{\mu(n)}{n}\psi\left(1+\frac{1}{n}\right).$$</p>
<p>My belief as to why it is interesting is that the <a href="https://www.wolframalpha.com/" rel="nofollow noreferrer">Wolfram Alpha online calculator</a> provides me with approximations around $\frac{1}{2}$, using code like this:</p>
<p><code>sum mu(n)/n PolyGamma[0, 1+1/n], from n=1 to 5000</code></p>
<blockquote>
<p><strong>Question.</strong> My belief is that defining $$S:=\sum_{n=1}^\infty\frac{\mu(n)}{n}\psi\left(1+\frac{1}{n}\right),$$ then $$S=\frac{1}{2}.$$ Can you provide and justify a good approximation of $S$, or do you know how to rule out $S=\frac{1}{2}$? If you know of this in the literature, please give a reference. <strong>Many thanks.</strong></p>
</blockquote>
| Claude Leibovici | 82,404 | <p><em>This is not an answer but it is too long for a comment.</em></p>
<p>To confirm the result, I computed
$$S_k=\sum_{n=1}^{10^k}\frac{\mu(n)}{n}\psi\left(1+\frac{1}{n}\right)$$ and obtained the following decimal representations
$$\left(
\begin{array}{cc}
k & S_k \\
1 & 0.4606256866 \\
2 & 0.4825538785 \\
3 & 0.4974613056 \\
4 & 0.5012018305 \\
5 & 0.5002812270 \\
6 & 0.4998842081
\end{array}
\right)$$ I gave up for $k=7$ (too long).</p>
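For anyone wanting to reproduce these partial sums without a CAS, here is a plain-Python sketch of my own (not the code used for the table above): <code>digamma</code> is a minimal shift-plus-asymptotic-series implementation and <code>mobius_table</code> a simple sieve; both are illustrative helper names, not standard library functions.

```python
import math

def digamma(x):
    # Shift x upward with psi(x) = psi(x+1) - 1/x, then apply the asymptotic
    # series psi(x) ~ ln x - 1/(2x) - 1/(12x^2) + 1/(120x^4) - 1/(252x^6).
    r = 0.0
    while x < 8.0:
        r -= 1.0 / x
        x += 1.0
    inv2 = 1.0 / (x * x)
    return r + math.log(x) - 0.5 / x - inv2 * (1.0 / 12 - inv2 * (1.0 / 120 - inv2 / 252))

def mobius_table(n):
    # Sieve of the Moebius function mu(1..n).
    mu = [1] * (n + 1)
    is_prime = [True] * (n + 1)
    for p in range(2, n + 1):
        if is_prime[p]:
            for k in range(2 * p, n + 1, p):
                is_prime[k] = False
            for k in range(p, n + 1, p):
                mu[k] *= -1
            for k in range(p * p, n + 1, p * p):
                mu[k] = 0
    return mu

def S(N):
    # Partial sum S_N = sum_{n<=N} mu(n)/n * psi(1 + 1/n).
    mu = mobius_table(N)
    return sum(mu[n] / n * digamma(1.0 + 1.0 / n) for n in range(1, N + 1))
```

With these, <code>S(10)</code> and <code>S(100)</code> should match the first two table entries to the precision shown.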
|
2,365,797 | <p>For integers $n\geq 1$ let $\mu(n)$ the Möbius function, and let $\psi(z)$ the Digamma function, see its definition and how is denoted and typeset in Wolfram Language, if you need it from this <a href="http://mathworld.wolfram.com/DigammaFunction.html" rel="nofollow noreferrer">MathWorld</a>.</p>
<p>While I was doing calculations I found this problem:</p>
<p><em>Calculate a good approximation of</em> $$\sum_{n=1}^\infty\frac{\mu(n)}{n}\psi\left(1+\frac{1}{n}\right).$$</p>
<p>My belief as to why it is interesting is that the <a href="https://www.wolframalpha.com/" rel="nofollow noreferrer">Wolfram Alpha online calculator</a> provides me with approximations around $\frac{1}{2}$, using code like this:</p>
<p><code>sum mu(n)/n PolyGamma[0, 1+1/n], from n=1 to 5000</code></p>
<blockquote>
<p><strong>Question.</strong> My belief is that defining $$S:=\sum_{n=1}^\infty\frac{\mu(n)}{n}\psi\left(1+\frac{1}{n}\right),$$ then $$S=\frac{1}{2}.$$ Can you provide and justify a good approximation of $S$, or do you know how to rule out $S=\frac{1}{2}$? If you know of this in the literature, please give a reference. <strong>Many thanks.</strong></p>
</blockquote>
| Community | -1 | <p>Since $\sum_{n=1}^\infty\frac{\mu(n)}{n}=0$, we have $$\sum_{n=1}^\infty\frac{\mu(n)}{n}\psi\left(1+\frac{x}{n}\right)=\sum_{n=1}^\infty\frac{\mu(n)}{n}\left[\psi\left(1+\frac{x}{n}\right)+\gamma\right],$$ and the series is uniformly convergent on $[0,1]$. Now
$$\sum_{n=1}^\infty\frac{\mu(n)}{n}\left[\psi\left(1+\frac{x}{n}\right)+\gamma\right]=-\sum_{n=1}^\infty\frac{\mu(n)}{n}\sum^\infty_{k=1}\zeta(k+1)\frac{(-x)^k}{n^k},$$ from the Taylor series of the Digamma function. For $|x|<1$, this is absolutely convergent, and we can change the order of summation, so we get $$-\sum^\infty_{k=1}\zeta(k+1)(-x)^k\sum_{n=1}^\infty\frac{\mu(n)}{n^{k+1}}=-\sum^\infty_{k=1}(-x)^k=\frac{x}{1+x},$$ since $$\sum_{n=1}^\infty\frac{\mu(n)}{n^{k+1}}=\frac1{\zeta(k+1)}.$$
The limit for $x\rightarrow1$ is $1/2,$ indeed.</p>
|
4,168,198 | <p>If M is a connected n-manifold and N is a codim 2 submanifold
then M-N is connected.</p>
<p>Is this true?</p>
<p>I want to show <span class="math-container">$H_0(M-N)=\mathbb Z$</span>. I think it's better to do in <span class="math-container">$\mathbb Z_2$</span> coefficient because we get orientability. But because no compactness is mentioned I can't use Poincare duality. How can I resolve this?</p>
| user939711 | 939,711 | <p>The goal of this answer is to use Moishe Kohan's idea to prove the general statement for not-necessarily-closed submanifolds. In fact even more than that can be said.</p>
<p>Let <span class="math-container">$M$</span> be a metrizable topological manifold and let <span class="math-container">$f: S \to M$</span> be a continuous injection from a second-countable topological manifold with <span class="math-container">$\dim S \leq \dim M - 2$</span>. (Notice that this need not be an embedding. For instance, <span class="math-container">$f$</span> could be a map from <span class="math-container">$\Bbb Z^2$</span> with image <span class="math-container">$\Bbb Q^2 \subset M$</span>.)</p>
<blockquote>
<p>Theorem: <span class="math-container">$M \setminus f(S)$</span> is path-connected. In fact, for any two points <span class="math-container">$x,y \in M \setminus f(S)$</span>, and any continuous path <span class="math-container">$\gamma: [0,1] \to M$</span> with <span class="math-container">$\gamma(0) = x$</span> and <span class="math-container">$\gamma(1) = y$</span>, there is an arbitrarily close continuous path with the same endpoints whose image is contained in <span class="math-container">$M \setminus f(S)$</span>.</p>
</blockquote>
<p><em>Remark.</em> Injectivity is necessary because of the existence of space-filling curves. Second-countability is necessary as otherwise one could take <span class="math-container">$S$</span> to be <span class="math-container">$M$</span> equipped with the discrete topology, and <span class="math-container">$f$</span> to be the canonical injection.</p>
<hr />
<blockquote>
<p>Lemma: If <span class="math-container">$D$</span> is a closed disc embedded in <span class="math-container">$M$</span> of dimension <span class="math-container">$\dim D \leq \dim M - 2$</span> and <span class="math-container">$x, y \in M \setminus D$</span>, write <span class="math-container">$\mathcal P_{x \to y} M$</span> for the space of continuous paths from <span class="math-container">$x$</span> to <span class="math-container">$y$</span> in <span class="math-container">$M$</span>. Then <span class="math-container">$\mathcal P_{x \to y} (M \setminus D)$</span> is open and dense in <span class="math-container">$\mathcal P_{x \to y} M$</span>.</p>
</blockquote>
<p>Proof: because <span class="math-container">$D$</span> is closed in <span class="math-container">$M$</span>, the set of paths with image in <span class="math-container">$M \setminus D$</span> is open in the compact-open topology (by definition).</p>
<p>Denseness is more difficult. Suppose <span class="math-container">$\gamma: [0,1] \to M$</span> is a continuous path with <span class="math-container">$\gamma(0) = x$</span> and <span class="math-container">$\gamma(1) = y$</span>. Then <span class="math-container">$\gamma^{-1}(D)$</span> is a closed subset of <span class="math-container">$[0,1]$</span> not including the endpoints.</p>
<p>Fix <span class="math-container">$\epsilon > 0$</span>. We will construct a path <span class="math-container">$\eta$</span> with <span class="math-container">$d_\infty(\gamma, \eta) \leq \epsilon$</span> and <span class="math-container">$\eta^{-1}(D) = \varnothing$</span>. Carrying out this construction for all <span class="math-container">$\epsilon > 0$</span> proves the desired density result.</p>
<p>Using the Lebesgue number lemma, one may find <span class="math-container">$n$</span> large so that</p>
<p><span class="math-container">$\gamma([i/n, (i+1)/n])$</span> has diameter less than <span class="math-container">$\epsilon/2$</span> for all <span class="math-container">$0 \leq i < n$</span>.</p>
<p>Write <span class="math-container">$U_{\epsilon,i} \subset M$</span> for the open set of points <span class="math-container">$$\{x \in M \mid \text{sup}_{t \in [i/n, (i+1)/n]} d(\gamma(t), x) < \epsilon\}.$$</span></p>
<p>For each <span class="math-container">$0 < i < n$</span> pick <span class="math-container">$x_i \in U_{\epsilon,i} \setminus D$</span>. This exists because otherwise <span class="math-container">$D$</span> would contain an open subset of <span class="math-container">$M$</span>, contradicting its dimension and the invariance of domain theorem. Set <span class="math-container">$x_0 = x$</span> and <span class="math-container">$x_n = y$</span>.</p>
<p>Using Moishe Kohan's result, for each <span class="math-container">$0 \leq i < n$</span> there exists a path <span class="math-container">$\eta_i: [i/n, (i+1)/n] \to U_{\epsilon, i} \setminus D$</span> with <span class="math-container">$\eta_i(i/n) = x_i$</span> and <span class="math-container">$\eta_i\left(\frac{i+1}{n}\right) = x_{i+1}$</span>.</p>
<p>Notice that <span class="math-container">$d_\infty(\eta_i, \gamma|_{[i/n, (i+1)/n]}) < \epsilon.$</span></p>
<p>Now define <span class="math-container">$\eta(t)$</span> to be <span class="math-container">$\eta_i(t)$</span> whenever <span class="math-container">$i/n \leq t \leq (i+1)/n$</span>. Notice that this gives a well-defined continuous path on all of <span class="math-container">$[0,1]$</span>, with <span class="math-container">$$d_\infty(\eta, \gamma) < \epsilon,$$</span> and with <span class="math-container">$\eta$</span> missing <span class="math-container">$D$</span>, as desired.</p>
<hr />
<p>To prove the Theorem, write <span class="math-container">$S$</span> as the union of countably many closed discs <span class="math-container">$D_i$</span>. Because the domain is compact and the codomain is Hausdorff, <span class="math-container">$f$</span> gives a homeomorphism of <span class="math-container">$D_i$</span> onto <span class="math-container">$f(D_i)$</span>. We have already seen that <span class="math-container">$\mathcal P_{x \to y} [M \setminus f(D_i)]$</span> is dense in <span class="math-container">$\mathcal P_{x \to y} M$</span>. But <span class="math-container">$$\mathcal P_{x \to y} [M \setminus f(S)] = \bigcap_i \mathcal P_{x \to y} [M \setminus f(D_i)].$$</span></p>
<p>This is an intersection of countably many open dense sets inside the complete metric space <span class="math-container">$\mathcal P_{x \to y} M$</span>. Complete metric spaces are Baire, and hence this intersection is once again dense and in particular nonempty. The theorem follows.</p>
|
478,912 | <p>Ok so I get the basics of this, I just can't put it all together. $U$ is contained in the cartesian product of $\mathbb R^n \times \mathbb R$. What is $\mathbb R^n \times \mathbb R$ though? I know what a cartesian product is, but this seems weird to me. Someone please help explain this part, thanks!</p>
| Shuchang | 91,982 | <p>You could understand it as space-time, in which $R^n$ is the spatial part and the other $R$ is time. In this view, $U=\{(x,t)\}$ contains points representing an event occurring at point $x$ and time $t$. Does this help?</p>
|
2,936,356 | <p>Let <span class="math-container">$a,b>0$</span>. How can I calculate the roots of the following polynomial?</p>
<p><span class="math-container">$$2bx^6 + 4bx^5+(4b-a)x^4 + 4(b+a)x^3 + (2b-6a)x^2+4ax-a=0$$</span></p>
| Robert Z | 299,698 | <p>Note that the given equation is equivalent to
<span class="math-container">$$s(x-1)^4=2x^2(x+1)^2(x^2+1).$$</span>
where <span class="math-container">$s=a/b>0$</span>.
I guess there is no "easy" closed formula for the roots, but, for example, we always have at least two real roots: one in <span class="math-container">$(0,1)$</span> because
<span class="math-container">$$s=\text{LHS}(0)>\text{RHS}(0)=0
\quad\text{ and }\quad 0=\text{LHS}(1)<\text{RHS}(1)=16$$</span> and another one in <span class="math-container">$(-\infty,-1)$</span> because for sufficiently large <span class="math-container">$t>1$</span>,
<span class="math-container">$$\text{LHS}(-t)<\text{RHS}(-t)
\quad\text{ and }\quad 16s=\text{LHS}(-1)>\text{RHS}(-1)=0.$$</span>
Numerical methods can be used to find approximations of such roots.</p>
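As a concrete illustration, the two sign changes exhibited above can be refined by bisection. This sketch is my own, fixes <span class="math-container">$s=1$</span> purely for illustration, and the names <code>g</code>, <code>bisect</code>, <code>root_pos</code>, <code>root_neg</code> are all hypothetical:

```python
def g(x, s=1.0):
    # g(x) = s(x-1)^4 - 2x^2(x+1)^2(x^2+1); s = a/b, with s = 1 chosen arbitrarily.
    return s * (x - 1) ** 4 - 2 * x * x * (x + 1) ** 2 * (x * x + 1)

def bisect(f, lo, hi, iters=100):
    # Plain bisection; assumes f changes sign on [lo, hi].
    flo = f(lo)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if flo * f(mid) <= 0.0:
            hi = mid
        else:
            lo, flo = mid, f(mid)
    return 0.5 * (lo + hi)

root_pos = bisect(g, 0.0, 1.0)    # g(0) = s > 0 and g(1) = -16 < 0
root_neg = bisect(g, -3.0, -1.0)  # g(-3) < 0 < g(-1) when s = 1
```

The same brackets work for any <span class="math-container">$s>0$</span> on <span class="math-container">$(0,1)$</span>; the negative bracket has to be widened for small <span class="math-container">$s$</span>.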
|
28,348 | <p>Thomson et al. provide a proof that <span class="math-container">$\lim_{n\rightarrow \infty} \sqrt[n]{n}=1$</span> in <a href="http://classicalrealanalysis.info/documents/TBB-AllChapters-Landscape.pdf#page=95" rel="noreferrer">this book (page 73)</a>. It has to do with using an inequality that relies on the binomial theorem:
<a href="https://i.stack.imgur.com/n3dxw.png" rel="noreferrer"><img src="https://i.stack.imgur.com/n3dxw.png" alt="enter image description here" /></a></p>
<p>I have an alternative proof that I know (from elsewhere) as follows.</p>
<hr />
<p><strong>Proof</strong>.</p>
<p><span class="math-container">\begin{align}
\lim_{n\rightarrow \infty} \frac{ \log n}{n} = 0
\end{align}</span></p>
<p>Then using this, I can instead prove:
<span class="math-container">\begin{align}
\lim_{n\rightarrow \infty} \sqrt[n]{n} &= \lim_{n\rightarrow \infty} \exp{\frac{ \log n}{n}} \newline
& = \exp{0} \newline
& = 1
\end{align}</span></p>
<hr />
<p>On the one hand, it seems like a valid proof to me. On the other hand, I know I should be careful with infinite sequences. The step I'm most unsure of is:
<span class="math-container">\begin{align}
\lim_{n\rightarrow \infty} \sqrt[n]{n} = \lim_{n\rightarrow \infty} \exp{\frac{ \log n}{n}}
\end{align}</span></p>
<p>I know such an identity would hold for bounded <span class="math-container">$n$</span> but I'm not sure I can use this identity when <span class="math-container">$n\rightarrow \infty$</span>.</p>
<p><strong>Question:</strong></p>
<p>If I am correct, then would there be any cases where I would be wrong? Specifically, given any sequence <span class="math-container">$x_n$</span>, can I always assume:
<span class="math-container">\begin{align}
\lim_{n\rightarrow \infty} x_n = \lim_{n\rightarrow \infty} \exp(\log x_n)
\end{align}</span>
Or are there sequences that invalidate that identity?</p>
<hr />
<p>(Edited to expand the last question)
given any sequence <span class="math-container">$x_n$</span>, can I always assume:
<span class="math-container">\begin{align}
\lim_{n\rightarrow \infty} x_n &= \exp(\log \lim_{n\rightarrow \infty} x_n) \newline
&= \exp(\lim_{n\rightarrow \infty} \log x_n) \newline
&= \lim_{n\rightarrow \infty} \exp( \log x_n)
\end{align}</span>
Or are there sequences that invalidate any of the above identities?</p>
<p>(Edited to repurpose this question).
Please also feel free to add different proofs of <span class="math-container">$\lim_{n\rightarrow \infty} \sqrt[n]{n}=1$</span>.</p>
| Paramanand Singh | 72,031 | <p>Let $n > 1$ so that $n^{1/n} > 1$, and put $n^{1/n} = 1 + h$ so that $h > 0$ depends on $n$ (but we don't write the dependence explicitly as $h_{n}$, to simplify typing). Our job is done if we show that $h \to 0$ as $n \to \infty$.</p>
<p>We have $$n = (1 + h)^{n} = 1 + nh + \frac{n(n - 1)}{2}h^{2} + \cdots$$ and hence $$\frac{n(n - 1)}{2}h^{2} < n$$ or $$0 < h^{2} < \frac{2}{n - 1}$$ It follows that $h^{2} \to 0$ as $n \to \infty$ and hence $h \to 0$ as $n \to \infty$.</p>
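The bound $0 < h^2 < \frac{2}{n-1}$ can also be observed numerically for a few values of $n$ (a quick sanity check of my own, not part of the proof):

```python
# Check 0 < h^2 < 2/(n-1) for h = n**(1/n) - 1 at several n > 1.
checks = []
for n in [2, 10, 100, 10 ** 4, 10 ** 6]:
    h = n ** (1.0 / n) - 1.0
    checks.append(0.0 < h * h < 2.0 / (n - 1))
```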
|
28,348 | <p>Thomson et al. provide a proof that <span class="math-container">$\lim_{n\rightarrow \infty} \sqrt[n]{n}=1$</span> in <a href="http://classicalrealanalysis.info/documents/TBB-AllChapters-Landscape.pdf#page=95" rel="noreferrer">this book (page 73)</a>. It has to do with using an inequality that relies on the binomial theorem:
<a href="https://i.stack.imgur.com/n3dxw.png" rel="noreferrer"><img src="https://i.stack.imgur.com/n3dxw.png" alt="enter image description here" /></a></p>
<p>I have an alternative proof that I know (from elsewhere) as follows.</p>
<hr />
<p><strong>Proof</strong>.</p>
<p><span class="math-container">\begin{align}
\lim_{n\rightarrow \infty} \frac{ \log n}{n} = 0
\end{align}</span></p>
<p>Then using this, I can instead prove:
<span class="math-container">\begin{align}
\lim_{n\rightarrow \infty} \sqrt[n]{n} &= \lim_{n\rightarrow \infty} \exp{\frac{ \log n}{n}} \newline
& = \exp{0} \newline
& = 1
\end{align}</span></p>
<hr />
<p>On the one hand, it seems like a valid proof to me. On the other hand, I know I should be careful with infinite sequences. The step I'm most unsure of is:
<span class="math-container">\begin{align}
\lim_{n\rightarrow \infty} \sqrt[n]{n} = \lim_{n\rightarrow \infty} \exp{\frac{ \log n}{n}}
\end{align}</span></p>
<p>I know such an identity would hold for bounded <span class="math-container">$n$</span> but I'm not sure I can use this identity when <span class="math-container">$n\rightarrow \infty$</span>.</p>
<p><strong>Question:</strong></p>
<p>If I am correct, then would there be any cases where I would be wrong? Specifically, given any sequence <span class="math-container">$x_n$</span>, can I always assume:
<span class="math-container">\begin{align}
\lim_{n\rightarrow \infty} x_n = \lim_{n\rightarrow \infty} \exp(\log x_n)
\end{align}</span>
Or are there sequences that invalidate that identity?</p>
<hr />
<p>(Edited to expand the last question)
given any sequence <span class="math-container">$x_n$</span>, can I always assume:
<span class="math-container">\begin{align}
\lim_{n\rightarrow \infty} x_n &= \exp(\log \lim_{n\rightarrow \infty} x_n) \newline
&= \exp(\lim_{n\rightarrow \infty} \log x_n) \newline
&= \lim_{n\rightarrow \infty} \exp( \log x_n)
\end{align}</span>
Or are there sequences that invalidate any of the above identities?</p>
<p>(Edited to repurpose this question).
Please also feel free to add different proofs of <span class="math-container">$\lim_{n\rightarrow \infty} \sqrt[n]{n}=1$</span>.</p>
| pre-kidney | 34,662 | <p>The limit follows from these inequalities and the squeeze theorem:
<span class="math-container">$$
1<n^{1/n}<1+\sqrt{\frac{2}{n-1}},\qquad n>1
$$</span>
where the right inequality follows by keeping only the third term in the binomial expansion:
<span class="math-container">$$
(1+x)^n>\binom{n}{2}x^2= n,\quad \textrm{where}\quad x^2=\frac{2}{n-1}.
$$</span></p>
|
4,434,225 | <p>Prove or disprove, "if <span class="math-container">$f$</span> is concave function, then <span class="math-container">$|f|$</span> is also concave".</p>
<p>I know the result is false for convex functions, but for concave functions, I guess it is true. But I am unable to do the proof.</p>
<p>I have tried to show that the set <span class="math-container">$A=\{(x,y)\in \Bbb R^2 :|f(x)|\ge y\}$</span> is a convex set.
For this, take two points <span class="math-container">$(x_1,y_1), (x_2,y_2) \in A$</span> and take <span class="math-container">$\lambda \in (0,1)$</span>.</p>
<p>Now, <span class="math-container">$|f(\lambda x_1+(1-\lambda)x_2)|\ge f(\lambda x_1+(1-\lambda)x_2)\ge \lambda f(x_1)+(1-\lambda)f(x_2)$</span>, as <span class="math-container">$f$</span> is concave. How should I proceed from here?
Can I get any help, please?</p>
| Dr. Sundar | 1,040,807 | <p>Counterexample:</p>
<p>Note that
<span class="math-container">$$
f(x) = - x^2, \ \ \ (x \in \mathbf{R})
$$</span>
is a concave function, but
<span class="math-container">$$
| f(x) | = x^2, \ \ \ (x \in \mathbf{R})
$$</span>
is a convex function.</p>
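A midpoint check at <span class="math-container">$x_1=-1$</span>, <span class="math-container">$x_2=1$</span> makes the failure of concavity for <span class="math-container">$|f|$</span> concrete (an illustrative sketch with helper names of my own):

```python
# f is the concave function from the counterexample; g = |f|.
f = lambda x: -x * x
g = lambda x: abs(f(x))
x1, x2 = -1.0, 1.0
mid = 0.5 * (x1 + x2)
f_concave_here = f(mid) >= 0.5 * (f(x1) + f(x2))  # 0 >= -1: holds for f
g_concave_here = g(mid) >= 0.5 * (g(x1) + g(x2))  # 0 >= 1: fails for |f|
```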
|
2,010,423 | <p>Are $$(a, b, c) = (\mid 1 \mid, \mid 2 \mid, \mid 2 \mid), (\mid 2 \mid, \mid 4 \mid, \mid 4 \mid), (\mid 2 \mid, \mid 3\mid, \mid 6 \mid), (\mid 1 \mid, \mid 1 \mid, \mid 1 \mid)$$ the only integers such that
$$\frac{1}{a} + \frac{1}{b} + \frac{1}{c} $$ is an integer?</p>
| Reese Johnston | 351,805 | <p>Assuming we only care about positive integers for the time being, notice that if $\frac{1}{a} + \frac{1}{b} + \frac{1}{c}$ is an integer then it must be at least $1$. If three numbers add up to at least $1$, then at least one of them must be at least $\frac{1}{3}$ - so at least one of $a$, $b$, and $c$ must be no more than $3$.</p>
<p>If none of them is $2$ or less, then $\frac{1}{a}$, $\frac{1}{b}$, and $\frac{1}{c}$ are all no more than a third. But if they add up to at least $1$, then they must all then be exactly $\frac{1}{3}$. So $a = b = c = 3$.</p>
<p>Say the smallest number we have is $2$. Then we can't have two of them (because if $\frac{1}{2} + \frac{1}{2} + \frac{1}{c}$ is a whole number, then $c = 1$ and that's smaller than $2$). But $\frac{1}{2} + \frac{1}{b} + \frac{1}{c}$ can only be at least $1$ if $\frac{1}{b} + \frac{1}{c}$ is at least $\frac{1}{2}$, so we need $\frac{1}{b},\frac{1}{c}$ to be at least $\frac{1}{4}$. Therefore at least one of $b$ and $c$ is at most $4$. So we have either $\frac{1}{2} + \frac{1}{3} + \frac{1}{c}$ (in which case $c = 6$) or $\frac{1}{2} + \frac{1}{4} + \frac{1}{c}$ (in which case $c = 4$).</p>
<p>So far we have $(3, 3, 3)$, $(2, 3, 6)$, and $(2, 4, 4)$, and we've found all of the ones that don't involve a $1$. I'll leave it to you to try to apply this approach to the case when we <em>do</em> have a $1$.</p>
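The whole case analysis for positive integers can also be confirmed by brute force. This sketch is mine; the cutoff of 50 is safe because the argument above forces the smallest entry to be at most 3, and in fact no solution has an entry above 6:

```python
from fractions import Fraction

# Exhaustive search over ordered triples a <= b <= c with exact rational arithmetic.
solutions = set()
for a in range(1, 51):
    for b in range(a, 51):
        for c in range(b, 51):
            if (Fraction(1, a) + Fraction(1, b) + Fraction(1, c)).denominator == 1:
                solutions.add((a, b, c))
```

The search should recover exactly $(1,1,1)$, $(1,2,2)$, $(2,3,6)$, $(2,4,4)$, and $(3,3,3)$, which also settles the case with a $1$ left as an exercise.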
|
324,307 | <p><a href="http://en.wikipedia.org/wiki/Tucker%27s_lemma" rel="nofollow">Tucker's Lemma is here.</a></p>
<p>Let's stay within the 2D case for now. A standard proof is constructive:</p>
<p>(1) Pick an arced edge on the boundary of the circle. Note its labeling (for example, (1, 2)).</p>
<p>(2) Walk into the circle through your chosen edge into a simplex of the triangle.</p>
<p>(3) If the simplex carries three different labels, then two of them must be antipodal, so Tucker's Lemma is satisfied.</p>
<p>(4) If instead the simplex is entirely labelled with 1's and 2's, then walk through the (1, 2) edge that you didn't enter from, and repeat step (3) or (4) on your new simplex.</p>
<p>(5) Eventually, you will either dead-end in a tri-labeled simplex in the middle of the triangle, or you will walk out of the circle through another arced edge. If you leave the circle, then you've eliminated two edges; pick a new edge and try again.</p>
<p>(6) Since there are an even number of vertices on the boundary of the circle, there are an odd number of edges on the boundary of the circle. So eventually, one of your paths must dead-end inside the circle.</p>
<p>This proof does <em>not</em> rely on the fact that the triangulation of the circle is antipodal symmetric; instead, it only relies on the fact that there are an even number of vertices on the boundary of the circle. So why do we require antipodal symmetry, when the weaker "even number of edges" condition implies the same conclusion?</p>
| Thomas Andrews | 7,933 | <p>As commenters have noted, you can't conclude $a=c=1$.</p>
<p>The condition $ad+b=bc+d$ can be rewritten as:</p>
<p>$$d(a-1)=b(c-1)$$</p>
<p>Now, if $a=1$ then $b\neq 0$ (or else $f(x)=x$ is the identity.)</p>
<p>So in that case, $x+b$ commutes with $cx+d$ if and only if $c=1$. So if $x+b$ commutes with $cx+d$ and $mx+n$ then $c=m=1$ and therefore $cx+d$ and $mx+n$ commute.</p>
<p>If $a\neq 1$ then $c\neq 1$ (since $c=1\implies a=1$). So we can divide by $(a-1)(c-1)$ and get
$$\frac b{a-1}= \frac{d}{c-1}$$</p>
<p>$$\frac b{a-1}= \frac n{m-1}$$</p>
<p>Therefore:</p>
<p>$$\frac d{c-1} = \frac n{m-1}$$</p>
<p>and therefore $cx+d$ and $mx+n$ commute.</p>
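<p>A quick sanity check of the condition and of the transitivity claim (the particular coefficients are arbitrary choices satisfying $b/(a-1)=3$, picked for illustration):</p>

```python
from fractions import Fraction as F

def compose(p, q):
    # p, q are (slope, intercept) pairs for the map x -> slope*x + intercept
    a, b = p
    c, d = q
    return (a * c, a * d + b)   # p(q(x)) = a(c x + d) + b

def commute(p, q):
    # p∘q = q∘p reduces to ad + b = bc + d, i.e. d(a-1) = b(c-1)
    return compose(p, q) == compose(q, p)

f1 = (F(2), F(3))   # 3/(2-1) = 3
f2 = (F(3), F(6))   # 6/(3-1) = 3
f3 = (F(4), F(9))   # 9/(4-1) = 3

assert commute(f1, f2) and commute(f1, f3)
assert commute(f2, f3)   # transitivity, as argued above
```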
|
4,214,631 | <p>I'm trying to understand a paragraph from the article <a href="https://www.sciencedirect.com/science/article/pii/S0022123600936875" rel="nofollow noreferrer">Christ, Kiselev: Maximal functions associated to filtrations</a>. After the proof of Theorem 1.1., the authors deduce a theorem of Menshov as a corollary:</p>
<blockquote>
<p>Let <span class="math-container">$(X, \mu)$</span> be a measure space and let <span class="math-container">$(\phi_n)_n$</span> be an orthonormal sequence in <span class="math-container">$L^2(X)$</span>. Let <span class="math-container">$1 \le p < 2$</span>. Then for every sequence <span class="math-container">$(c_n)_n \in \ell^p$</span> the series
<span class="math-container">$$\sum_{n=1}^\infty c_n\phi_n$$</span>
converges a.e. in <span class="math-container">$X$</span>.</p>
</blockquote>
<p>Obviously the series <span class="math-container">$\sum_{n=1}^\infty c_n\phi_n$</span> converges in <span class="math-container">$L^2(X)$</span> due to Parseval since in particular we have <span class="math-container">$(c_n)_n \in \ell^2$</span>.</p>
<p>Theorem 1.1. from the article gives that the (sublinear) map <span class="math-container">$$\ell^p \to L^2(X), \quad c = (c_n)_n \mapsto \sup_{N\in\Bbb{N}} \left|\sum_{n=1}^N c_n\phi_n\right|$$</span>
is well-defined and bounded in the sense that there exists a constant <span class="math-container">$C>0$</span> such that
<span class="math-container">$$ \left\|\sup_{N\in\Bbb{N}} \left|\sum_{n=1}^N c_n\phi_n\right|\right\|_{L^2(X)} \le C\|(c_n)_n\|_p, \quad \text{ for all }(c_n)_n \in \ell^p.$$</span></p>
<p>The authors now conclude that the statement of the theorem clearly follows from this fact and from the fact that <span class="math-container">$\sum_{n=1}^\infty c_n\phi_n$</span> obviously converges a.e. when <span class="math-container">$(c_n)_n$</span> is finitely-supported.</p>
<p>I'm not sure how it follows. I noticed that for <span class="math-container">$M \ge N$</span> the Cauchy sums are also bounded by the supremum by reverse triangle inequality:
<span class="math-container">$$\left|\sum_{n=N+1}^M c_n\phi_n\right| \le 2\sup_{K\in\Bbb{N}} \left|\sum_{n=1}^K c_n\phi_n\right|.$$</span>
So this is bounded a.e., however we would need that it goes to <span class="math-container">$0$</span> as <span class="math-container">$M,N\to\infty$</span>. Am I missing something easy?</p>
| Jose27 | 4,829 | <p>This is a standard argument in harmonic analysis. Let me give the abstract version first.</p>
<p>Suppose we have a sequence of linear operators <span class="math-container">$T_n$</span> mapping measurable functions on <span class="math-container">$(X, \mu)$</span> to measurable functions on <span class="math-container">$(Y,\nu)$</span> such that
<span class="math-container">$$
T_*h(x) := \sup_{n} |T_nh(x)|
$$</span>
satisfies <span class="math-container">$T_* :L^p(X)\to L^q(Y)$</span>, for some <span class="math-container">$1\leq p,q<\infty$</span>. Suppose also that for <span class="math-container">$h$</span> in a dense set <span class="math-container">$E\subset L^p(X)$</span> it holds that <span class="math-container">$(T_nh(y))_n$</span> is a Cauchy sequence for a.e. <span class="math-container">$y \in Y$</span>. Then <span class="math-container">$(T_nh(y))_n$</span> is a Cauchy sequence for a.e. <span class="math-container">$y$</span> and <em>all</em> <span class="math-container">$h\in L^p(X)$</span>.</p>
<p>In other words, pointwise control on a dense subset, plus some control of the maximal operator <span class="math-container">$T_*$</span> imply pointwise control for all admissible functions. In fact you can get away with weaker control on <span class="math-container">$T_*$</span> than the above <span class="math-container">$L^p$</span> bounds, but I won't get into that.</p>
<p>The proof is very simple: Given <span class="math-container">$h\in L^p(X)$</span> and <span class="math-container">$\varepsilon>0$</span> fixed, let <span class="math-container">$g\in E$</span> be such that <span class="math-container">$\| h-g\|_{L^p(\mu)}$</span> is small; then by properties of <span class="math-container">$\limsup$</span> we have
<span class="math-container">\begin{equation}
\begin{split}
\nu(\{ y\in Y: \limsup_{n,m} |T_n h(y)-T_mh(y)|>\varepsilon\}) & \leq \nu(\{ \limsup_{n}|T_nh(y)-T_n g(y)|>\varepsilon/3\}) \\
& \quad + \nu(\{ \limsup_{n,m}|T_n g(y)-T_mg(y)|>\varepsilon/3\}) \\
& \quad + \nu(\{ \limsup_{m}|T_m g(y)-T_mh(y)|>\varepsilon/3\})\\
& =: I+II+III.
\end{split}
\end{equation}</span>
Now, <span class="math-container">$I$</span> and <span class="math-container">$III$</span> we can estimate by Chebyshev's inequality as
<span class="math-container">$$
\nu(\{ \limsup_n|T_n h(y)-T_n g(y)|>\varepsilon/3\}) \leq \left(\dfrac{3}{\varepsilon}\right)^{q} \int_Y T_*(h-g)(y)^q\, d\nu(y) \lesssim \dfrac{1}{\varepsilon^q} \| h-g\|_{L^p(\mu)}^{q},
$$</span>
by the boundedness of <span class="math-container">$T_*$</span>. If we then choose <span class="math-container">$g$</span> close enough to <span class="math-container">$h$</span>, then <span class="math-container">$I$</span> and <span class="math-container">$III$</span> can be made as small as we want, while <span class="math-container">$II=0$</span> always by hypothesis (since <span class="math-container">$g\in E$</span>). This means that the set where <span class="math-container">$ \limsup_{n,m}|T_nh(y)-T_mh(y)|>\varepsilon$</span> has measure 0 for all <span class="math-container">$\varepsilon>0$</span>. This implies that <span class="math-container">$(T_n h(y))_n$</span> is Cauchy for a.e. <span class="math-container">$y$</span>.</p>
<p>In your case the operators map <span class="math-container">$\ell^p$</span> to <span class="math-container">$L^2(X)$</span> via
<span class="math-container">$$
T_N: (c_k)_k \mapsto \sum_{k=1}^N c_k\phi_k,
$$</span>
so that <span class="math-container">$T_*$</span> is exactly the operator you have bounds on, and the result follows from the above abstract principle.</p>
|
405,830 | <p>Are there any cases of finite subgroups of <span class="math-container">$O_n(\mathbb{Q})$</span> <s>not contained in</s> <i>not isomorphic to any subgroup of</i> <span class="math-container">$O_n(\mathbb{Z})$</span>?</p>
| Noam D. Elkies | 14,830 | <p>If it's the same <span class="math-container">$n$</span> then yes this can happen. For example, the lattice <span class="math-container">$D_4$</span> (consisting of all integer vectors in <span class="math-container">${\bf Z}^4$</span> with even sum) has more isometries than <span class="math-container">${\bf Z}^4$</span>. If we allow different <span class="math-container">$n$</span>, then no, because every finite group <span class="math-container">$G$</span> is contained in the group of permutations of <span class="math-container">$G$</span>, which in turn is contained in <span class="math-container">$O_{|G|}({\bf Z})$</span>.</p>
|
54,088 | <p>I am studying KS (Kolmogorov-Sinai) entropy of order <em>q</em> and it can be defined as</p>
<p>$$
h_q = \sup_P \left(\lim_{m\to\infty}\left(\frac 1 m H_q(m,ε)\right)\right)
$$</p>
<p>Why is it defined as supremum over all possible partitions <em>P</em> and not maximum? </p>
<p>When do people use supremum and when maximum?</p>
| Community | -1 | <p>Consider for example the set $X = (0,1)$. Then $\sup X=1$ but $\max X$ does not exist. </p>
<p>Generally, for a set $X\subset {\mathbb R}$, we say $x=\max X$ if $\color{blue}{x\in X}$ and
$$\forall y\in X, y\leq x.$$</p>
<p>We have $x=\sup X$ if
$$\forall y\in X, y\leq x$$
and $$\forall\epsilon>0,\quad\exists y\in X, \quad\text{s.t.}\quad y>x-\epsilon.$$</p>
|
4,517,063 | <p>Prove that there are exactly <span class="math-container">$8100$</span> different ways of distributing <span class="math-container">$4$</span> indistinguishable black marbles and <span class="math-container">$6$</span> distinguishable coloured marbles ( none of them black) into <span class="math-container">$5$</span> distinguishable boxes in such a way that each box contains exactly <span class="math-container">$2$</span> marbles.</p>
<hr />
<p>I have done problems involving indistinguishable balls and indistinguishable/distinguishable boxes, distinguishable balls and indistinguishable/distinguishable boxes.</p>
<p>I am confused about how to handle the situation when both indistinguishable and distinguishable balls are given at the same time.</p>
<hr />
<p>Any hints will be helpful.</p>
| true blue anil | 22,388 | <p>Just distribute the distinguishable balls as a product of two multinomial coefficients,<br />
[<em>laying down a pattern</em>]<span class="math-container">$\;\times\;$</span> [<em>permuting it</em>]<br />
and <strong>forget about filler indistinguishable black balls</strong>, thus</p>
<p><span class="math-container">$2-2-2-0-0\; pattern:\; \Large\binom6{2,2,2,0,0}\binom5{3,2} = 900$</span></p>
<p><span class="math-container">$2-2-1-1-0\;pattern:\; \Large\binom6{2,2,1,1,0}\binom5{2,2,1} =5400$</span></p>
<p><span class="math-container">$2-1-1-1-1\;pattern:\; \Large\binom6{2,1,1,1,1}\binom5{1,4} = 1800$</span></p>
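<p>A brute-force count (not needed for the argument, just a check): place each of the $6$ distinguishable marbles in one of the $5$ boxes; the $4$ black marbles are then forced fillers, so a placement is valid exactly when no box receives more than $2$ distinguishable marbles.</p>

```python
from itertools import product
from collections import Counter

count = sum(
    1
    for placement in product(range(5), repeat=6)
    if max(Counter(placement).values()) <= 2
)
print(count)  # 8100 = 900 + 5400 + 1800
```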
|
2,417,029 | <p>I have been told that a line segment is a set of points.
How can even infinitely many points, each of length zero, make a line of positive length?</p>
<p>Edit:
As an undergraduate I assumed it was due to having uncountably many points.
But the Cantor set has uncountably many elements and it has measure $0$.</p>
<p>So having uncountably many points on a line is not sufficient for the measure to be positive.</p>
<p>My question was: what else is needed? It appears from the answers I've seen that the additional thing needed is the topology and/or the sigma algebra within which the points are set.</p>
<p>My thanks to those who have helped me figure out where to look for full answers to my question.</p>
| Caleb Stanford | 68,107 | <p>I agree with Mikhail Katz: you were right as an undergraduate when you "assumed it was due to having uncountably many points." Since you say you have learned measure theory, I am not sure why you discarded this (correct) explanation.</p>
<p>To those who are less familiar with differing infinite cardinalities, a simpler explanation might be that you cannot <em>necessarily</em> add up an arbitrary number of sets and have the lengths add. It is true that if you add one line segment to another, the lengths add. But the sets must be special (if asked further: "measurable"), and to add an infinite number of things there must be a technical restriction (if asked further: that there only be <em>countably</em> infinitely many).</p>
|
657,992 | <p>I understand how to explain but can't put it down on paper.</p>
<p>$\displaystyle \frac{a}{b}$ and $\displaystyle \frac{c}{d}$ are rational numbers. For there to be any rational between two numbers I assume $\displaystyle \frac{a}{b} < \frac{c}{d}$.</p>
<p>I let $\displaystyle x = \frac{a}{b}$ and $\displaystyle y = \frac{c}{d}$ so $x < y$. the number in the middle of $x$ and $y$ is definitely a rational number so $\displaystyle x < \frac{x+y}{2} < y$. I know that there is another middle between $x$ and $\displaystyle \frac{x+y}{2}$ and it keeps on going. How would I write it as a proof?</p>
| vonbrand | 43,946 | <p>For rationals $r_1$ and $r_2$ you can always find the rational $(r_1 + r_2) / 2$ between them. As you can repeat this at will, you can construct infinitely many rationals between them (essentially the set $r_1 + (r_2 - r_1) \cdot 2^{-n - 1}$). </p>
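<p>A small illustration with exact arithmetic (the specific endpoints are arbitrary choices):</p>

```python
from fractions import Fraction

r1, r2 = Fraction(1, 3), Fraction(1, 2)

# The points r1 + (r2 - r1) * 2^(-k), k = 1, 2, ..., are all distinct
# rationals strictly between r1 and r2.
mids = [r1 + (r2 - r1) / 2 ** k for k in range(1, 6)]

assert all(r1 < m < r2 for m in mids)
assert len(set(mids)) == 5
```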
|
394,489 | <p>$$\int^L_{-L} x \sin\left(\frac{\pi nx}{L}\right)\,dx$$</p>
<p>I've seen something like this in Fourier theory, but I'm still not sure how to approach this integral. Wolfram Alpha gives me the answer, but no method. Integrate by parts? Substitution?</p>
| apnorton | 23,353 | <p>Whenever you have $p(x)f(x)$, where $p(x)$ is a polynomial, and $f(x)$ is a function like $\sin(x)$ or $e^x$, such that differentiating doesn't increase its "order," integration by parts is always a good technique. Differentiate the polynomial so it becomes "less complicated," and integrate the other function.</p>
<p>$$\int_{-L}^Lx\sin\left(\frac{n\pi x}{L}\right)\,dx$$
Let $u = x, dv=\sin\left(\frac{n\pi x}{L}\right)dx$. Then $du = dx$ and $v = \frac{-L}{n\pi}\cos\left(\frac{n\pi x}{L}\right)$.<br>
Thus,</p>
<p>$$\int_{-L}^Lx\sin\left(\frac{n\pi x}{L}\right)\,dx = \frac{-xL}{n\pi}\cos\left(\frac{n\pi x}{L}\right)\Bigg|_{-L}^L - \int_{-L}^L\frac{-L}{n\pi}\cos\left(\frac{n\pi x}{L}\right)\,dx$$</p>
|
162,364 | <p>Assuming we have a list below, it has real number elements and complex numbers, how can I quickly find if the list has any Real number that its value is less than 1.0? </p>
<pre><code>lis = {3 Sqrt[354], Sqrt[2962], Sqrt[2746], 3 Sqrt[282], Sqrt[2338],
   Sqrt[2146], 3 Sqrt[218], Sqrt[1786], Sqrt[1618], 27 Sqrt[2],
   Sqrt[1306], Sqrt[1162], 3 Sqrt[114], Sqrt[898], Sqrt[778],
   3 Sqrt[74], Sqrt[562], Sqrt[466], 3 Sqrt[42], Sqrt[298], Sqrt[226],
   9 Sqrt[2], Sqrt[106], Sqrt[58], 3 Sqrt[2], I Sqrt[14], I Sqrt[38],
   3 I Sqrt[6], I Sqrt[62], I Sqrt[62], 3 I Sqrt[6], I Sqrt[38],
   I Sqrt[14], Sqrt[1.2], Sqrt[58], Sqrt[1.06], 9 Sqrt[2], Sqrt[226]}
</code></pre>
<p>I am also considering if we can find methods to filter any real or complex numbers with specific values in any types list (e.g. there are some string elements mixed with real and complex number elements in a given list). But this could be another question and isn't necessary for my example. Please leave some advice if you interested! Thanks in advance!</p>
| Nasser | 70 | <blockquote>
<p>how can I quickly find if the list has any Real number that its value
is less than 1.0?</p>
</blockquote>
<p>One way might be</p>
<pre><code>Cases[ReIm[lis],{x_,y_}/;y==0&&x<1]
(*{} *)
</code></pre>
<p>Example</p>
<pre><code> lis={.7, .7 I, .7+1, 2, 2 I, 2+1 I};
Cases[ReIm[lis],{x_,y_}/;y==0&&x<1 :> Complex[x,y]]
</code></pre>
<p><img src="https://i.stack.imgur.com/TLS5b.png" alt="Mathematica graphics"></p>
<p>Using the above, you can now add any kind of checks on <code>x</code> and <code>y</code> to filter any kind of number you want. For example, to look for pure complex numbers:</p>
<pre><code> Cases[ReIm[lis],{x_,y_}/;x==0&&y!=0 :> Complex[x,y]]
</code></pre>
<p><img src="https://i.stack.imgur.com/zbI3j.png" alt="Mathematica graphics"></p>
<p>To look for number with real and complex parts</p>
<pre><code> Cases[ReIm[lis],{x_,y_}/;x!=0&& y!=0 :> Complex[x,y]]
</code></pre>
<p><img src="https://i.stack.imgur.com/nZEjA.png" alt="Mathematica graphics"></p>
<p>And so on.</p>
|
162,364 | <p>Assuming we have a list below, it has real number elements and complex numbers, how can I quickly find if the list has any Real number that its value is less than 1.0? </p>
<pre><code>lis = {3 Sqrt[354], Sqrt[2962], Sqrt[2746], 3 Sqrt[282], Sqrt[2338],
   Sqrt[2146], 3 Sqrt[218], Sqrt[1786], Sqrt[1618], 27 Sqrt[2],
   Sqrt[1306], Sqrt[1162], 3 Sqrt[114], Sqrt[898], Sqrt[778],
   3 Sqrt[74], Sqrt[562], Sqrt[466], 3 Sqrt[42], Sqrt[298], Sqrt[226],
   9 Sqrt[2], Sqrt[106], Sqrt[58], 3 Sqrt[2], I Sqrt[14], I Sqrt[38],
   3 I Sqrt[6], I Sqrt[62], I Sqrt[62], 3 I Sqrt[6], I Sqrt[38],
   I Sqrt[14], Sqrt[1.2], Sqrt[58], Sqrt[1.06], 9 Sqrt[2], Sqrt[226]}
</code></pre>
<p>I am also considering if we can find methods to filter any real or complex numbers with specific values in any types list (e.g. there are some string elements mixed with real and complex number elements in a given list). But this could be another question and isn't necessary for my example. Please leave some advice if you interested! Thanks in advance!</p>
| Michael E2 | 4,999 | <p>Here's a way using <code>Pick</code>:</p>
<pre><code>Pick[lis, UnitStep[1 - Re@#] + I*Im@# &@N@lis, 1]
(* {} *)
lis2 = lis/10;
Pick[lis2, UnitStep[1 - Re@#] + I*Im@# &@N@lis2, 1]
(* {Sqrt[29/2]/5, 3/(5 Sqrt[2]), 0.109545, Sqrt[29/2]/5, 0.102956} *)
</code></pre>
<p><code>UnitStep[1 - Re@#]</code> returns <code>1</code> if the real part of the number is less than <code>1</code>. And <code>I*Im@#</code> is <code>0</code> if the number is real; otherwise, it will have a nonzero imaginary part and be a number of type <code>Complex</code>.</p>
|
1,387,216 | <p>Let $X$ be a metric space, and let $E$ and $K$ be two sets such that $E\subset K$.</p>
<p>I want to prove:</p>
<p>If $p$ is a limit point of $E$, then $p$ is a limit point of $K$.</p>
<p>Proof: if every neighborhood of $p\in E$ contains a point $q\neq p$ such that $q\in E$, then this $q$ is also in $K$, then $p$ is a limit point of $K$. Is that correct?</p>
| coldnumber | 251,386 | <p>You have the right idea, but I think a piece is missing.</p>
<p>Yes, every neighborhood of $p$ in $E$ is a neighborhood of $p$ in $K$, but not every neighborhood of $p$ in $K$ is a neighborhood of $p$ in $E$. You have to start with a neighborhood of $p$ in $K$ and show that that neighborhood of $p$ in $K$, not just in $E$, contains a point of $K$ that is not $p$; you can say that a neighborhood of $p$ in $K$ contains a neighborhood of $p$ in $E$, but I think in this proof this assertion would need explanation.</p>
<p>A proof could start with </p>
<blockquote>
<p>Suppose that $B(p,\varepsilon_k) \subseteq K$ is a neighborhood of $p$ in $K$.</p>
</blockquote>
<p>At this point you haven't explained why there needs to be an element of $E$ in that neighborhood; there may be subsets of $K$ that do not intersect with $E$, and you need to explain why this neighborhood is not one of them; to do this, you can look at the intersection of a neighborhood of $p$ in $K$ and a neighborhood of $p$ in $E$:</p>
<p>Let $B(p, \varepsilon_e) \subseteq E$ be a neighborhood of $p$ in $E$, and let $\varepsilon=\min\{\varepsilon_k, \varepsilon_e\}$. Then the intersection $B(p,\varepsilon_k) \cap B(p,\varepsilon_e)$ equals $B(p, \varepsilon)$, so it is a neighborhood of $p$ in both $E$ and $K$.</p>
<p>Since it is a neighborhood of $p$ in $E$ and $p$ is a limit point of $E$, it contains some element $q \in E$.</p>
<p>So we have $q \in B(p, \varepsilon) \subseteq K$, which means we have shown that an arbitrary open neighborhood of $p$ <strong>in $K$</strong> contains another point of $K$ that is not $p$, which means $p$ is an accumulation point of $K$.</p>
|
2,426,394 | <p>I'm sorry to bother you with this easy problem. But I'm working alone and totally confused.
It is the 1378th Problem from "Problems in Mathematical Analysis" written by Demidovich. The standard answer is
$$\frac{60(1+x)^{99}(1+6x)}{(1-2x)^{41}(1+2x)^{61}}$$</p>
<p>But when I followed the routine
$$\frac{d(P(x)/Q(x))}{dx}=\frac{\frac{dP(x)}{dx}Q(x)-P(x)\frac{dQ(x)}{dx}}{Q(x)^2}$$
I got
$$\frac{100(1+x)^{99}(1-2x)^{40}(1+2x)^{60}-(1+x)^{100}(-80(1-2x)^{39}+120(1+2x)^{59})}{(1-2x)^{80}(1+2x)^{120}}$$</p>
<p>and finally
$$\frac{60(1+x)^{99}P(x)}{(1-2x)^{41}(1+2x)^{61}}$$</p>
<p>$$P(x)=1-4x^2-\frac{3(1+x)}{(1-2x)^{39}}-\frac{2(1+x)}{(1+2x)^{59}}$$</p>
<p>Would you tell me what mistake I made. Best regards.</p>
| farruhota | 425,072 | <p>Alternatively:
$$[(1+x)^{100}(1-2x)^{-40}(1+2x)^{-60}]'=$$
$$100(1+x)^{99}(1-2x)^{-40}(1+2x)^{-60}+\\80(1+x)^{100}(1-2x)^{-41}(1+2x)^{-60}-\\120(1+x)^{100}(1-2x)^{-40}(1+2x)^{-61}=$$
$$20(1+x)^{99}(1-2x)^{-41}(1+2x)^{-61}\cdot \\ [5(1-2x)(1+2x)+4(1+x)(1+2x)-6(1+x)(1-2x)]=$$
$$\frac{20(1+x)^{99}}{(1-2x)^{41}(1+2x)^{61}}\cdot3(1+6x.)$$</p>
|
1,044,005 | <p>So I'm trying to find the general formula of a sequence, $${18,30,46,66,90,118}$$</p>
<p>I found the first difference, $${12,16,20,24,28}$$</p>
<p>second difference, $${4,4,4,4}$$</p>
<p>If the second difference is 4, you start with $2n^2$</p>
<p>Then i have $${18,30,46,66,90,118}$$</p>
<p>-$${2,8,18,32,50,72}$$ </p>
<p>The residue is $${16,22,28,34,40,46}$$</p>
<p>which its formula is $16+6(n-1)$</p>
<p>So the final answer is $2n^2+16+6(n-1)$ which is $2n^2+6n+10$? </p>
<p>Does my approach seem on the right track? </p>
| turkeyhundt | 115,823 | <p>That's how I teach my students to try it. If the sequence's differences are constant, it's a linear progression. If the sequence's differences' differences are constant, start with $cn^2$ and work from there.</p>
|
1,991,600 | <p><a href="https://i.stack.imgur.com/2L9j9.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2L9j9.jpg" alt="enter image description here" /></a></p>
<p>I have worked out the areas as <span class="math-container">$\pi/3$</span> for the circle and <span class="math-container">$2/\sqrt3$</span> for the triangle but don't know how to convert into a percentage without a calculator.</p>
| G Cab | 317,234 | <p>$$
\begin{gathered}
\mathrm{side}_{tr} = 1 \qquad h_{tr} = \frac{\sqrt 3}{2} \qquad r_{circ} = \frac{\sqrt 3}{6} \\
A_{circ} = \pi\,\frac{3}{36} = \frac{\pi}{12} \qquad A_{tr} = \frac{\sqrt 3}{4} \\
\rho = \frac{A_{circ}}{A_{tr}} = \frac{\pi}{12}\cdot\frac{4}{\sqrt 3} = \frac{\pi}{3}\cdot\frac{1}{\sqrt 3}
 \approx \frac{3.15}{3}\cdot\frac{1}{\sqrt 3} = \frac{1.05}{\sqrt 3} \\
\rho^{2} \approx \frac{(1.05)^{2}}{3} = \frac{1}{3}\left(1+\frac{5}{100}\right)^{2}
 \approx \frac{1}{3}\left(1+2\cdot\frac{5}{100}\right) = \frac{110}{300} \approx \frac{36}{100} \\
\rho \approx \sqrt{\frac{36}{100}} = \frac{6}{10} = \frac{60}{100} = 60\%
\end{gathered}
$$</p>
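<p>For comparison, a calculator (here a two-line script; the figures above were all mental arithmetic) gives the exact value:</p>

```python
import math

rho = (math.pi / 12) / (math.sqrt(3) / 4)   # = pi / (3*sqrt(3))
print(round(100 * rho, 2))                  # 60.46 (percent)
```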
|
799,602 | <p>If I have</p>
<pre><code>(¬q ⇒ r)
</code></pre>
<p>and use implication on this does it give me:</p>
<pre><code>(¬¬q v r)
</code></pre>
<p>from where I can then use double negation,</p>
<p>or does it give me </p>
<pre><code>¬(¬q v r)
</code></pre>
<p>I am thinking the former.</p>
<p>Thank you in advance.</p>
| Hector Pinedo | 30,533 | <p>You can use the book of Rotman "Advanced modern algebra" it covers the Brauer group of a field.</p>
<p>There is an article of Auslander and Goldman called "The Brauer group of a commutative ring," which generalizes the result of the book of Rotman to the context of commutative rings.</p>
|
75,704 | <p><strong>Bug introduced in 10.0 and fixed in 10.0.2</strong></p>
<hr>
<p>I was looking at integrals like:</p>
<pre><code>Integrate[HermiteH[50, x]*Exp[-x^2], {x, 0, Infinity}]
</code></pre>
<p>which gave me a "does not converge on $(0,\infty)$" error. On the other hand something like </p>
<pre><code>Integrate[(x^50)*Exp[-x^2], {x, 0, Infinity}]
Integrate[HermiteH[4, x]*Exp[-x^2], {x, 0, Infinity}]
</code></pre>
<p>works just fine. In general, for $n\geq 16$, $\int_0^\infty H_n(x)e^{-x^2}dx$ is reported as divergent. </p>
<p>Is there an easy way to resolve this bug? </p>
<p>I should mention that I'm using Mathematica 10.0.0.0.</p>
| Murta | 2,266 | <p>Well, not a PowerShell use, but an example of the new V10 function <code>RunProcess</code>.</p>
<p><a href="http://www.mls-software.com/opensshd-agent.html" rel="nofollow">Using this link</a>, I did this batch to add my private key in the repository ever time windows start:</p>
<pre><code>@ECHO OFF
for /f "tokens=1,2 delims==; " %%i in ('call ssh-agent') do (
if "echo" neq "%%i" (
set %%i=%%j
setx %%i %%j
)
)
cd c:/keyPath
ssh-add myKey
</code></pre>
<p>Now I can update my git using <code>RunProcess</code> as:</p>
<pre><code>RunProcess["CMD",All,
"sh --login -i -c \"git -C C:/path/local/repository pull ssh://git@github.com/myaccount/myrep.git\"
exit
"]
</code></pre>
<p>With <code>RunProcess</code> you can have better control over system output using its second argument.</p>
|
1,301,763 | <p>The only problem I have with this is knowing when you are solving for dx or dy. For example, this question which says</p>
<p>find the volume of the solid created by rotating the region bounded by y = 2x-4, y = 0, and x = 3, about the line x = 4. This one is dy. Instead of y = 2x-4, the person solved with respect to $x$ and got $x = (y+4)/2$. </p>
<p>And then there is this one, similar to the above but about the line y = -3. This one is dx. I don't understand why. Help appreciated. </p>
| randomgirl | 209,647 | <p><img src="https://i.stack.imgur.com/W73ym.png" alt="enter image description here">Sometimes it is easier to use horizontal strips than vertical (or the other way around). It also depends on the method you choose to find the volume, there is washer(disk) or shells. Shells you want parallel strips to the axis of rotation. Washer(disk) you want perpendicular strips to the axis of rotation. $x=f(y)$ think horizontal strip. $y=g(x)$ think vertical strip. In your problem you have you want to rotate the region bounded by $y=2x-4$, $y=0$, and $x=3$ about $x=4$. So your figure can be represented by a bunch of vertical strips. And vertical strips are parallel to vertical lines, the vertical line we talking about is the axis of rotation here. So I would do shells method here. $V=\int_2^3 2 \cdot \pi (4-x)(2x-4)dx $. Now if you want to look at your figure in terms of horizontal strips solve for $x$ given $y=2x-4$. This will mean we will use the washer(disk) method though. $x=\frac{y+4}{2}$. So $V=\int_0^2 \pi ((4-\frac{y+4}{2})^2-(4-3)^2) dy$ </p>
|
2,093,661 | <p>In our algorithms class we defined the Fibonacci series:</p>
<p>$$F_0 = 0$$
$$F_1 = 1$$
$$F_{i+2} = F_i + F_{i+1}$$</p>
<p>After that we used that $F_{i+2} ≥ (\frac {1+\sqrt5} {2})^i$ but I can't see why that is true. Since it's the Fibonacci series I suspect that there should be a common known proof. If there isn't, how could you prove it?</p>
| Ben Grossmann | 81,360 | <p>Prove it inductively, noting that
$$
\left( \frac{1+\sqrt 5}{2}\right)^{i+2} = \left( \frac{1+\sqrt 5}{2}\right)^{i+1} + \left( \frac{1+\sqrt 5}{2}\right)^{i}
$$</p>
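<p>A numeric check of the inequality for the first several dozen terms (illustration only; the induction above is the actual proof):</p>

```python
phi = (1 + 5 ** 0.5) / 2

fib = [0, 1]
for _ in range(60):
    fib.append(fib[-1] + fib[-2])

# base cases: F_2 = 1 >= phi^0 and F_3 = 2 >= phi^1, and onward
assert all(fib[i + 2] >= phi ** i for i in range(60))
```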
|
2,093,661 | <p>In our algorithms class we defined the Fibonacci series:</p>
<p>$$F_0 = 0$$
$$F_1 = 1$$
$$F_{i+2} = F_i + F_{i+1}$$</p>
<p>After that we used that $F_{i+2} ≥ (\frac {1+\sqrt5} {2})^i$ but I can't see why that is true. Since it's the Fibonacci series I suspect that there should be a common known proof. If there isn't, how could you prove it?</p>
| Doug M | 317,162 | <p>Not a proof of why $F_{n+2} > (\frac {1+\sqrt 5}{2})^n$,
but some insight into why that expression is relevant.</p>
<p>Suppose, $\frac {F_{n+1}}{F_n}$ is converging to something.</p>
<p>$F_{n+1} = F_{n} + F_{n-1}\\
\frac {F_{n+1}}{F_n} = 1 + \frac {F_{n-1}}{F_n}$</p>
<p>So, supposing that this ratio is indeed converging (which I have not proven, and leave as an exercise to the reader), let's say it is converging to some value x.</p>
<p>If $\frac {F_{n+1}}{F_n} = x,$ then $\frac {F_{n-1}}{F_n} = \frac 1x$</p>
<p>$x = 1 + \frac 1x\\
x^2 - x - 1=0$</p>
<p>$\frac {1+\sqrt 5}{2}$ is the only positive root of that polynomial.</p>
<p>Here is another idea.</p>
<p>$\begin{bmatrix} F_{n+1}\\F_{n}\end{bmatrix} = \begin{bmatrix} 1&1\\1&0\end{bmatrix}\begin{bmatrix} F_{n}\\F_{n-1}\end{bmatrix}$</p>
<p>$\begin{bmatrix} F_{n+1}\\F_{n}\end{bmatrix} = \begin{bmatrix} 1&1\\1&0\end{bmatrix}^n\begin{bmatrix} 1\\0\end{bmatrix}$</p>
<p>The characteristic equation of $\begin{bmatrix} 1&1\\1&0\end{bmatrix}$ is $\lambda^2 - \lambda -1$ (coincidence? no.)</p>
<p>I am going to call the roots $\phi = \frac {1+\sqrt 5}{2}, \phi' = \frac {1-\sqrt 5}{2}$ </p>
<p>$\begin{bmatrix} F_{n+1}\\F_{n}\end{bmatrix} = \frac 1{\phi - \phi'}\begin{bmatrix} \phi&\phi'\\1&1\end{bmatrix}\begin{bmatrix} \phi^n&0\\0&\phi'^n\end{bmatrix}\begin{bmatrix} 1&-\phi'\\-1&\phi\end{bmatrix}\begin{bmatrix} 1\\0\end{bmatrix}$</p>
<p>$F_n = \frac {\phi^n - \phi'^n}{\phi-\phi'}$</p>
|
54,815 | <p>A notorious question with prime numbers is estimating the gaps between consecutive primes. That is, if $(p_n)_{n \geq 1}$ is the canonical enumeration of the primes, then set $g_n = p_{n+1} - p_n$. It is shown that $g_n > \frac{c \log(n) \log \log(n) \log \log \log \log(n)}{(\log \log \log(n))^2}$ infinitely often, but a precise estimate is not known.</p>
<p>My question is, is there a 'natural' superset of the primes that are of interest (say, the set of numbers that are either primes or product of two primes) such that the gap between consecutive members is well known or well estimated?</p>
| Dan Brumleve | 2,003 | <p>How about <a href="https://en.wikipedia.org/wiki/Practical_number#Analogies_with_prime_numbers" rel="nofollow noreferrer">practical numbers</a>?</p>
<p>They aren't a superset of primes, but they are a "notion of 'almost primes'" as the title requests.</p>
<p>Hausman and Shapiro showed in 1984 that practical number gaps satisfy $\frak{g}$$_n < 2 \cdot \frak{p}$$_n^\frac{1}{2} + 1$. On the other hand, for primes, the best known bound is the <a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.360.3671&rep=rep1&type=pdf" rel="nofollow noreferrer">Baker-Harman-Pintz or BHP bound</a> published in 2000: $g_n \in p_n^{0.525} + O(1)$. Conditional on the Riemann hypothesis we have $g_n \in p_n^{\frac{1}{2}+o(1)}$, but not $g_n \in O(p_n^\frac{1}{2})$.</p>
|
2,589,155 | <blockquote>
<p>Let $d$ be the standard metric on $\mathbb{R}$. i.e $d(x,y) = |x-y|$. Is there a distance $d'$ with $d'(x,y) \geq d(x,y)$ so that $(\mathbb{R},d')$ is compact?</p>
</blockquote>
<p>Is my proof correct?
Let $B_{\epsilon}^{d'}(x)$ denote a ball of radius $\epsilon$ centered on $x$ with respect to $d'$.</p>
<p><strong>My attempt:</strong></p>
<p>Assume such a $d'$ exists. Since $(\mathbb{R},d')$ is compact it is totally bounded. Let $\epsilon > 0$ be given. Then there is an $n \in \mathbb{N}$ such that $\cup^{n}_{i=1}B_{\epsilon}^{d'}(x_{i}) = \mathbb{R}$. Let $x \in \mathbb{R}$; then $d'(x,x_{i}) < \epsilon$ for some $i = 1,...,n$. But
$$d(x,x_{i}) \leq d'(x,x_{i}) < \epsilon.$$ So $x \in B_{\epsilon}^{d}(x_{i})$, since $B_{\epsilon}^{d'}(x_{i}) \subset B_{\epsilon}^{d}(x_{i})$ for all $i = 1,...,n$. So $x\in \cup_{i=1}^{n}B_{\epsilon}^{d}(x_{i})$. That is, $\mathbb{R} \subset \cup_{i=1}^{n}B_{\epsilon}^{d}(x_{i})$. But $\cup_{i=1}^{n}B_{\epsilon}^{d}(x_{i}) \subset \mathbb{R}$, so we have $\cup_{i=1}^{n}B_{\epsilon}^{d}(x_{i}) = \mathbb{R}$. So $(\mathbb{R},d)$ is totally bounded, which is a contradiction. </p>
| José Carlos Santos | 446,262 | <p>Your proof is correct, <em>assuming</em> that $(\mathbb{R},d)$ isn't totally bounded. Do you know how to prove that?</p>
|
3,994,280 | <p>Show that <span class="math-container">$\lnot(p\lor(\lnot p\land q))$</span> is logically equivalent to <span class="math-container">$\lnot p \land \lnot q$</span>.</p>
<p>I am wondering whether what I did is correct. Very new to learning simple logic.</p>
<p><span class="math-container">$$\lnot(p\lor(\lnot p\land q)) \equiv \lnot((p\lor\lnot p)\land(p\lor q)) \\ \equiv \lnot(\text{T} \land (p\lor q)) \\ \equiv\lnot(p \lor q) \\ \equiv \lnot p \land
\lnot q \\ \square$$</span></p>
| amWhy | 9,003 | <p>This is an alternative route, and corrects for your misuse of DeMorgan's in the last step.</p>
<p><span class="math-container">$$(\lnot(p\lor(\lnot p\land q))) \equiv \lnot p \land \lnot(\lnot p \land q)\tag{DeMorgan's}$$</span>
<span class="math-container">$$\equiv \lnot p \land (p \lor \lnot q)\tag{DeMorgan's}$$</span>
<span class="math-container">$$\equiv (\lnot p \land p) \lor (\lnot p \land \lnot q)\tag{Distributive Law}$$</span>
<span class="math-container">$$\equiv F\lor (\lnot p \land \lnot q)$$</span>
<span class="math-container">$$\equiv \lnot p \land \lnot q$$</span></p>
|
90,006 | <p>I have two problem collections I am currently working through, the "Berkeley Problems in Mathematics" book, and the first of the three volumes of Putnam problems compiled by the MAA. These both contain many problems on basic differential equations.</p>
<p>Unfortunately, I never had a course in differential equations. Otherwise, my background is reasonably good, and I have knowledge of real analysis (at the level of baby Rudin), basic abstract algebra, topology, and complex analysis. I feel I could handle a more concise and mathematically mature approach to differential equations than the "cookbook" style that is normally given to first and second year students. I was wondering if someone to access to the above books that I am working through could suggest a concise reference that would cover what I need to know to solve the problems in them. In particular, it seems I need to know basic solution methods and basic existence and uniqueness theorem. On the other hand, I have no desire to specialize in differential equations, so a reference work like V.I Arnold's book on ordinary differential equations would not suit my needs, and I certainly don't have any need for knowledge of, say, the Laplace transform or PDEs. </p>
<p>To reiterate, I just need a concise, high level overview of the basic (by hand) solution techniques for differential equations, along with some discussion of the basic uniqueness and existence theorems. I realize this is rather vague, but looking through the two problem books I listed above should give a more precise idea of what I mean. Worked examples would be a plus. I am very unfamiliar with the subject matter, so thanks in advance for putting up with my very nebulous request.</p>
<p>EDIT: I found Coddington's "Introduction to Ordinary Differential Equations" to be what I needed. Thanks guys.</p>
| GEdgar | 442 | <p>For example, show the limit is $\le 1/2$ like this:</p>
<p>$$
\frac{n+1}{2n+3} < \frac{n+1}{2n+2} = \frac{1}{2}
$$</p>
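<p>One can sanity-check this bound numerically; a small sketch, with `a` standing for the sequence $\frac{n+1}{2n+3}$:</p>

```python
def a(n):
    # The sequence being bounded: (n+1)/(2n+3)
    return (n + 1) / (2 * n + 3)

# Enlarging a positive fraction's denominator shrinks it, so
# (n+1)/(2n+3) < (n+1)/(2n+2) = 1/2 for every n >= 1.
below_half = all(a(n) < 0.5 for n in range(1, 10_000))
gap = 0.5 - a(10**6)  # shrinks toward 0 as n grows
print(below_half, gap)
```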
|
1,842,448 | <p>$F(x)= 2x^2 -3x$. </p>
<p>Find the ranges of $x$ on which the function is strictly increasing and on which it is strictly decreasing.</p>
| user190080 | 190,080 | <p>The function
$$
F(x)= 2x^2 -3x
$$
is strictly increasing (respectively decreasing) for
$$
f(x):=F'(x)=4x-3>0 \text { respectively } f(x)=4x-3<0 \tag1
$$
this is a linear function which is itself strictly increasing on all of $\mathbb R$: just observe that $$ f'(x)=4>0$$ holds everywhere. Now we calculate the root $x_0$ of $f$, which is $x_0=\frac 3 4$, and it is the only root. This means $f(x)>0$ for all $x>\frac 3 4$ and $f(x)<0$ for all $x<\frac 3 4$.</p>
<p>Now we conclude by using $(1)$, that $$
F(x) \text{ is strictly increasing } \forall x:x>\frac 3 4
$$
and
$$
F(x) \text{ is strictly decreasing } \forall x:x<\frac 3 4
$$</p>
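<p>A quick numerical spot-check of the conclusion (the sample points around $x_0=\frac 3 4$ are an arbitrary choice):</p>

```python
def F(x):
    # The function from the question.
    return 2 * x**2 - 3 * x

x0 = 3 / 4  # root of f(x) = F'(x) = 4x - 3

# Sample points on each side of x0: F should decrease as x grows
# toward x0 from the left, and increase as x grows past x0.
left = [x0 - 0.5, x0 - 0.3, x0 - 0.1]
right = [x0 + 0.1, x0 + 0.3, x0 + 0.5]
decreasing = all(F(a) > F(b) for a, b in zip(left, left[1:]))
increasing = all(F(a) < F(b) for a, b in zip(right, right[1:]))
print(decreasing, increasing)  # True True
```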
|
2,127,110 | <p>I am working on this differetiation problem:</p>
<p>$ \frac{d}{dx}x(1-\frac{2}{x})$</p>
<p>and I am currently stuck at this point:</p>
<p>$1\cdot \left(1-\frac{2}{x}\right)+\frac{2}{x^2}x$</p>
<p>Symbolab tells me this simplifies to $1$ but I do not understand how. I am under the impression that;</p>
<p>$1\cdot \left(1-\frac{2}{x}\right)+\frac{2}{x^2}x \equiv 1- 2x^{-x^2}-2^{-x}$</p>
| 5xum | 112,884 | <p>$$1\cdot (1-\frac2x) + \frac{2}{x^2}x = 1-\frac2x + \frac2x = 1$$ for all $x\neq 0$.</p>
<p>I have no idea where you got $x^{-x^2}$ from...</p>
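<p>Since $x\left(1-\frac2x\right)=x-2$ for $x\neq0$, the derivative is $1$ everywhere away from the origin; a quick central-difference check (the step size `h` is an arbitrary choice):</p>

```python
def F(x):
    # x * (1 - 2/x), i.e. x - 2 for x != 0
    return x * (1 - 2 / x)

def num_deriv(f, x, h=1e-6):
    # Symmetric difference quotient approximating f'(x).
    return (f(x + h) - f(x - h)) / (2 * h)

# The derivative should be 1 at every sample point away from x = 0.
derivs = [num_deriv(F, x) for x in (0.5, 1.0, 3.0, -2.0)]
print(derivs)
```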
|
588,228 | <p>$A$, $B$, $C$ are distinct digits of three-digit numbers such that </p>
<pre><code> A B C
B A C
+ C A B
____________
A B B C
</code></pre>
<p>Find the value of A+B+C.</p>
<p>a) 16 b) 17 c) 18 d) 19</p>
<p>I tried breaking each of 16, 17, 18, 19 into three digits, but there are so many ways to break them up that this approach did not help.</p>
| souparno | 397,876 | <p>I am answering the edited question without trial and error, since that would consume more time. We can see that $A+B$ must be $10$, since adding $C$ to it yields $C$ in the rightmost digit with a carry of $1$ (the carry cannot be $2$, because $A+B$ cannot be $20$ when $A$ and $B$ are single digits). For the middle column, $B+C+A+1=B$ (with the carry) is possible only if $A+C=9$. Then $A+B+C+1=\overline{AB}$, i.e. $10A+B$. So we have
$$A+B=10, \qquad A+C=9,$$
and therefore
$$A+B+C+1=(A+C+1)+B=10+B.$$
Hence $10+B=10A+B$, giving $A=1$; similarly $B=9$ and $C=8$.
So $A+B+C=18$.</p>
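<p>The constraint chain above ($A+B=10$, $A+C=9$, $10+B=10A+B$) can be solved mechanically over the digits; a small brute-force sketch of exactly that system:</p>

```python
# Enumerate distinct digits satisfying the three derived constraints.
solutions = [
    (A, B, C)
    for A in range(1, 10)        # A is a leading digit, so nonzero
    for B in range(10)
    for C in range(10)
    if len({A, B, C}) == 3
    and A + B == 10
    and A + C == 9
    and 10 + B == 10 * A + B
]
print(solutions)   # the single triple (1, 9, 8)
total = sum(solutions[0])
print(total)       # 18
```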
|