| qid | question | author | author_id | answer |
|---|---|---|---|---|
4,040,301 | <p>If <span class="math-container">$\lim_{|x| \to \infty} g(x)/x = \infty$</span>, prove that <span class="math-container">$\{g(x)\mid x \in \mathbb{R}\} = \mathbb{R}.$</span></p>
| Reveillark | 122,262 | <p>Let <span class="math-container">$M>0$</span>.</p>
<p>By assumption, there is some <span class="math-container">$N>1$</span> such that <span class="math-container">$x\ge N$</span> implies <span class="math-container">$\frac{g(x)}{x}>M$</span>. For such <span class="math-container">$x$</span>, <span class="math-container">$g(x)>xM>M$</span>.</p>
<p>By the same reasoning, there is some <span class="math-container">$N'<-1$</span> such that <span class="math-container">$x\le N'$</span> implies <span class="math-container">$\frac{g(x)}{x}>M$</span>. But then, if <span class="math-container">$x\le N'$</span>, multiplying by the negative number <span class="math-container">$x$</span> flips the inequality: <span class="math-container">$g(x)<xM\le N'M<-M$</span>.</p>
<p>This shows that <span class="math-container">$g$</span> takes arbitrarily large positive values and arbitrarily large (in absolute value) negative values. Since <span class="math-container">$g$</span> is differentiable, it is continuous, and the Intermediate Value Theorem implies that <span class="math-container">$g$</span> is onto.</p>
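To see the argument in action, here is an added numerical sketch (not part of the original answer) using the hypothetical example <span class="math-container">$g(x)=x^3$</span>, which satisfies <span class="math-container">$g(x)/x=x^2\to\infty$</span>; bisection then locates a preimage of any target, exactly as the Intermediate Value Theorem promises:

```python
from math import isclose

def g(x):
    # Hypothetical example with g(x)/x -> infinity as |x| -> infinity
    return x ** 3

def preimage(target, lo=-1e6, hi=1e6, iters=200):
    """Locate x with g(x) = target by bisection on [lo, hi].

    The limit hypothesis guarantees g eventually dominates any target on
    both sides; continuity plus the Intermediate Value Theorem then gives
    a solution of g(x) = target inside the bracket.
    """
    assert g(lo) < target < g(hi)
    for _ in range(iters):
        mid = (lo + hi) / 2
        if g(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

for t in (-1000.0, -1.0, 0.0, 2.0, 12345.0):
    assert isclose(g(preimage(t)), t, abs_tol=1e-6)
```
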
|
3,278 | <h3>What are Community Promotion Ads?</h3>
<p>Community Promotion Ads are community-vetted advertisements that will show up on the main site, in the right sidebar. The purpose of this question is the vetting process. Images of the advertisements are provided, and community voting will enable the advertisements to be shown.</p>
<h3>Why do we have Community Promotion Ads?</h3>
<p>This is a method for the community to control what gets promoted to visitors on the site. For example, you might promote the following things:</p>
<ul>
<li>the site's twitter account</li>
<li>useful tools or resources for the mathematically inclined</li>
<li>interesting articles or findings for the curious</li>
<li>cool events or conferences</li>
<li>anything else your community would genuinely be interested in</li>
</ul>
<p>The goal is for future visitors to find out about <em>the stuff your community deems important</em>. This also serves as a way to promote information and resources that are <em>relevant to your own community's interests</em>, both for those already in the community and those yet to join. </p>
<h3>How does it work?</h3>
<p>The answers you post to this question <em>must</em> conform to the following rules, or they will be ignored. </p>
<ol>
<li><p>All answers should be in the exact form of:</p>
<pre><code>[![Tagline to show on mouseover][1]][2]
[1]: http://image-url
[2]: http://clickthrough-url
</code></pre>
<p>Please <strong>do not add anything else to the body of the post</strong>. If you want to discuss something, do it in the comments.</p></li>
<li><p>The question must always be tagged with the magic <a href="/questions/tagged/community-ads" class="post-tag moderator-tag" title="show questions tagged 'community-ads'" rel="tag">community-ads</a> tag. In addition to enabling the functionality of the advertisements, this tag also pre-fills the answer form with the above required form.</p></li>
</ol>
<h3>Image requirements</h3>
<ul>
<li>The image that you create must be <strong>220 x 250 pixels</strong></li>
<li>Must be hosted through our standard image uploader (imgur)</li>
<li>Must be GIF or PNG</li>
<li>No animated GIFs</li>
<li>Absolute limit on file size of 150 KB</li>
</ul>
<h3>Score Threshold</h3>
<p>There is a <strong>minimum score threshold</strong> an answer must meet (currently <strong>6</strong>) before it will be shown on the main site.</p>
<p>You can check out the ads that have met the threshold with basic click stats <a href="http://meta.math.stackexchange.com/ads/display/3278">here</a>.</p>
| E.O. | 18,873 | <p><a href="http://www.khanacademy.org/" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8mCsM.jpg" alt="Khan Academy - A free world-class education for anyone anywhere."></a></p>
|
3,278 | <p><em>(same question as above)</em></p>
| J. M. ain't a mathematician | 498 | <p><a href="https://mathematica.stackexchange.com/"><img src="https://i.stack.imgur.com/hOf2P.png" alt="Help this community grow!"></a></p>
|
1,454,919 | <p>I am trying to understand derivative and I want to know intuitive and rigorous definitions for a curve and if derivative is lmited only to curves or not..</p>
| Yes | 155,328 | <p>In an elementary calculus context, there is no need to rigorously define what is meant by "curve". Intuitively, you can think of a curve as an arbitrary line that can be drawn in one stroke; the simplest curve is a straight line. </p>
<p>The concept of derivative originates from the problem of finding the slope of a curve; you may imagine that, if you are not aware of any calculus, how you may solve that problem. By treating a curve as the graph of a "continuous" function, we can make mathematically rigorous what it means by the slope of a curve at a given point in terms of the derivative of the function: "the slope of a curve at a given point" is rephrased as "the slope of the tangent line to the graph of the function at the point".</p>
<p>Curves can be studied independently, but this is another story. </p>
|
1,820,036 | <p>I'd be thankful if someone could explain to me why the second equality is true.
I just can't figure it out. Maybe it's something really simple I am missing?</p>
<blockquote>
<p>$\displaystyle\lim_{\epsilon\to0}\frac{\det(Id+\epsilon H)-\det(Id)}{\epsilon}=\displaystyle\lim_{\epsilon\to0}\frac{1}{\epsilon}\left[\det \begin{pmatrix}
1+\epsilon h_{11} & \epsilon h_{12} &\cdots & \epsilon h_{1n} \\
\epsilon h_{21} & 1+\epsilon h_{22} &\cdots \\
\vdots & & \ddots \\
\epsilon h_{n1} & & &1+\epsilon h_{nn}
\end{pmatrix}-1\right]$</p>
<p>$\qquad\qquad\qquad\qquad\qquad\qquad=\displaystyle\sum_{i=1}^nh_{ii}=\text{trace}(H)$</p>
</blockquote>
| Community | -1 | <p><strong>Hint</strong>:</p>
<p>$$\left|\begin{matrix}
1+\epsilon h_{11}&\epsilon h_{12}\\
\epsilon h_{21}&1+\epsilon h_{22}\\
\end{matrix}\right|=1+\epsilon h_{11}+\epsilon h_{22}+\epsilon^2\left(h_{11}h_{22}-h_{12}h_{21}\right).$$</p>
<p>Only the product of the main-diagonal entries contributes terms of first order in $\epsilon$. This generalizes to higher dimensions, for instance using the expansion by minors.</p>
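The hint is easy to check numerically (an added sketch, not part of the original answer): for small $\epsilon$, $(\det(I+\epsilon H)-\det(I))/\epsilon$ should be close to $\operatorname{trace}(H)$. A hand-rolled $3\times3$ determinant keeps the check dependency-free:

```python
def det3(m):
    # Cofactor expansion along the first row of a 3x3 matrix
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

H = [[1.0, 2.0, -1.0],
     [0.5, -3.0, 4.0],
     [2.0, 1.0, 0.25]]

eps = 1e-7
M = [[(1.0 if r == c else 0.0) + eps * H[r][c] for c in range(3)]
     for r in range(3)]

approx = (det3(M) - 1.0) / eps          # (det(I + eps*H) - det(I)) / eps
trace = H[0][0] + H[1][1] + H[2][2]     # = -1.75
assert abs(approx - trace) < 1e-5       # only diagonal terms survive at order eps
```
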
|
2,221,807 | <p>I know that this question has been answered before, however I have not seen a response that satisfies me on whether my proof will work.</p>
<p><strong>Proof</strong></p>
<p>Suppose $A \cup B$ is a separation of $X$. Then WLOG $X-A=B$ is finite, but this implies that $X-B$ is infinite, thus $B$ is not an open set, which is a contradiction.</p>
<p>My question: is it enough to show that $B$ is not open to reach this contradiction, or do I need to go further? </p>
| Henno Brandsma | 4,280 | <p>If $A \cup B$ is a disconnection of $X$, this means that $A$ and $B$ are non-empty, disjoint and both open and closed (if one is open, the other is automatically closed as its complement, so we usually assume both are open or both are closed, and it follows that both sets are clopen).</p>
<p>But by definition the only non-empty closed sets of $X$ are the finite sets, so
both $A$ and $B$ are finite, which cannot happen, as $X$ is infinite.</p>
|
747,561 | <p>I'm having trouble figuring out the limits. What messes me up is that the limit approaches infinity; usually it approaches a specific number. Is there a trick to solve problems like these? </p>
<p>So for example, use the root test to determine convergence or divergence of $$\sum_{n=1}^\infty \frac{(n!)^n}{(n^n)^7}.$$</p>
| David | 119,775 | <p>$\sqrt[n]{a_n}=n!/n^7\to\infty$, so by the root test $\sum a_n$ diverges.</p>
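As a sanity check (an illustrative sketch I am adding, not part of the original answer), the $n$-th roots $n!/n^7$ can be tabulated directly; they blow up, which is exactly what the root test needs for divergence:

```python
from math import factorial

def nth_root_term(n):
    # n-th root of a_n = (n!)^n / (n^n)^7, i.e. n!/n^7
    return factorial(n) / n ** 7

vals = {n: nth_root_term(n) for n in (5, 10, 20, 30)}
# The n-th roots grow without bound, so the series diverges.
assert vals[30] > vals[20] > 1e6
```
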
|
2,042,257 | <p>I'm looking at the following differential equation:</p>
<p>$\frac{dx}{dt} = \frac{\sin^2 x - t^2}{t \cdot \sin(2x)}$</p>
<p>I rewrote it as</p>
<p>$(t \cdot \sin(2x))dx = (\sin^2x - t^2)dt$</p>
<p>$\Leftrightarrow (\underbrace{\sin^2 x - t^2}_{J(x,t)})dt + (\underbrace{-t \cdot \sin(2x)}_{I(x,t)}) dx = 0$</p>
<p>where I put the minus sign inside the function because I think it is important that a plus stands between the two functions in order to check if a differential equation is exact or not. But then:</p>
<p>$\partial_x J(x,t) = 2 \sin x \cos x = \sin(2x)$</p>
<p>$\partial_t I(x,t) = -\sin(2x)$</p>
<p>Those functions don't seem to satisfy the condition for a differential equation to be exact. However my teacher wrote in his answer the following:</p>
<p>$\frac{-\partial_x (\sin^2 x - t^2) - \partial_t (t \cdot \sin (2x))}{t \sin(2x)} = \frac{-2}{t}$</p>
<p>$\implies c' = - \frac{2}{t} c$</p>
<p>$\implies x' = \frac{\frac{\sin^2 x}{t^2} - 1}{\frac{\sin (2x)}{t}}$</p>
<p>and he states that this equation is now exact. What happened there? I've never seen anything like that yet. I understand (more or less) what he is doing after $c' = -\frac{2}{t}c$ (separation of variables and integration), but I have no idea how he came to the first line. Is that a known technique to transform a differential equation that is $\textit{almost}$ exact?</p>
<p>Thanks a lot in advance for your answers.</p>
<p>Julien.</p>
| Community | -1 | <p>As $x^x$ is a continous function, $1^1=1$ and $2^2=4$, then there is an $x$ such that</p>
<p>$$x^x=2.$$</p>
<p>As the function is monotonic in this range, the solution is unique.</p>
<p>This number is irrational, otherwise let $x$ be the irreducible fraction $p/q$:</p>
<p>$$\left(\frac pq\right)^{p/q}=2$$ implies</p>
<p>$$p^p=2^qq^p.$$</p>
<p>Then $p$ is even, $p=2r$ with $q$ odd, and</p>
<p>$$2^{2r}r^{2r}=2^qq^p,$$
so that $q$ is even, contradicting the fact that $q$ is odd.</p>
|
1,428,377 | <p>So I was watching the show Numb3rs, and the math genius was teaching, and something he did just stumped me.</p>
<p>He was asking his class (more specifically, a student) which of the three cards hides the car. The other two cards have an animal on them. Now, the student picked the middle card to begin with. So the cards look like this</p>
<pre><code>+---+---+---+
| 1 | X | 3 |
+---+---+---+
</code></pre>
<p><em>The <code>X</code> Representing The Picked Card</em></p>
<p>Then he flipped over the third card, and it turned out to be an animal. All that is left now is one more animal, and a car. He asks the student if the chances are higher of getting a car if they switch cards. The student responds no (That's what I thought too).</p>
<p>The student was wrong. What the teacher said is "Switching cards actually doubles your chances of getting the car".</p>
<p>So my question is, why does switching selected cards double your chances of getting the car when 1 of the 3 cards is already revealed? I thought it would still be a simple 50/50; please explain why the chances double!</p>
| David | 59,737 | <p>The probability that you picked the <em>wrong</em> card in the first place is 2/3, and the host revealing an animal behind one of the other two cards does not change that: 2/3 of the initial guesses are still wrong.</p>
<p>Whenever your initial pick is wrong, the host's reveal leaves the car behind the single remaining card, so switching wins. Whenever your initial pick is right (probability 1/3), switching loses.</p>
<p>So switching wins with probability 2/3 while staying wins with probability 1/3: switching doubles your chances.</p>
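The 2/3 figure is easy to confirm by simulation (my own added sketch, not from the show or the thread):

```python
import random

def play(switch, rng):
    car = rng.randrange(3)    # card hiding the car
    pick = rng.randrange(3)   # contestant's first pick
    # Host reveals an animal among the two unpicked cards.
    revealed = next(c for c in range(3) if c != pick and c != car)
    if switch:
        pick = next(c for c in range(3) if c != pick and c != revealed)
    return pick == car

trials = 100_000
rng = random.Random(0)
switch_wins = sum(play(True, rng) for _ in range(trials)) / trials
rng = random.Random(0)        # same seed => identical games replayed
stay_wins = sum(play(False, rng) for _ in range(trials)) / trials

assert abs(switch_wins - 2 / 3) < 0.01
assert abs(stay_wins - 1 / 3) < 0.01
```
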
|
498,694 | <p>So, I'm learning limits right now in calculus class.</p>
<p>When $x$ approaches infinity, what does this expression approach?</p>
<p>$$\frac{(x^x)}{(x!)}$$</p>
<p>Why? Since the bottom is $x!$, doesn't that mean the bottom grows faster, and therefore the whole thing approaches $0$?</p>
| Alex | 38,873 | <p>By Stirling's formula this ratio grows at the rate $\frac{e^x}{\sqrt{2\pi x}}(1+o(1))$, which tends to infinity as $x \to \infty$.</p>
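By Stirling's formula $x!\sim\sqrt{2\pi x}\,(x/e)^x$, so $x^x/x!\sim e^x/\sqrt{2\pi x}$ (note the square root belongs in the denominator). An added numeric check of this rate:

```python
from math import factorial, sqrt, pi, e

def ratio(n):
    return n ** n / factorial(n)

def stirling_rate(n):
    # Asymptotic rate e^n / sqrt(2*pi*n)
    return e ** n / sqrt(2 * pi * n)

# Agreement improves like 1/(12n), and the ratio itself blows up.
for n in (10, 30, 50):
    assert abs(ratio(n) / stirling_rate(n) - 1) < 1 / (10 * n)
assert ratio(50) > ratio(30) > ratio(10)
```
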
|
2,259,145 | <p>Let $f\colon (0,1]\to [-1,1]$ be a continuous function. Let us define a function $h$ by $h(x)=xf(x)$ for all $x$ belongs to $(0,1]$.
Prove that $h$ is uniformly continuous.</p>
<p>We know $f$ is uniformly continuous on $I$ if $f'(x)$ is bounded on $I$. Here $h'(x)= xf'(x) + f(x)$ and $f(x)$ is bounded here. How can I prove that $xf'(x)$ is bounded here?
Please help me to solve this.
Thanks in advance.</p>
| Jack D'Aurizio | 44,121 | <p>I will try to give this question an actual meaning: at the moment, there are definition issues in $\frac{1}{x}+\frac{1}{x-1}+\ldots+1$ if $x\not\in\mathbb{N}$. We may start from the Weierstrass product for the <a href="https://en.wikipedia.org/wiki/Gamma_function" rel="nofollow noreferrer">$\Gamma$ function</a>:
$$ \Gamma(z+1) = e^{-\gamma z}\prod_{n\geq 1}\left(1+\frac{z}{n}\right)^{-1}e^{z/n} \tag{1}$$
and by applying $\frac{d}{dz}\log(\cdot)$ to both sides we have:
$$ \psi(z+1)\stackrel{\text{def}}{=}\frac{\Gamma'(z+1)}{\Gamma(z+1)} = -\gamma+\sum_{n\geq 1}\left[\frac{1}{n}-\frac{1}{n+z}\right]\tag{2} $$
hence:</p>
<p>$$ \frac{d}{dz}(z!) = \Gamma'(z+1) = \Gamma(z+1)\psi(z+1) = z!\left[-\gamma+\sum_{n\geq 1}\left(\frac{1}{n}-\frac{1}{n+z}\right)\right]\tag{3} $$
and if $z\in\mathbb{N}$ the RHS of $(3)$ equals $z!\left[-\gamma+H_z\right]$.</p>
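Formula $(3)$ can be sanity-checked numerically (an added sketch, not part of the original answer): compare a finite-difference derivative of $\Gamma(z+1)$ with $z!\left[-\gamma+H_z\right]$ at $z=3$, where $H_3=1+\frac12+\frac13$:

```python
from math import gamma

EULER_GAMMA = 0.5772156649015329

z = 3
h = 1e-6
numeric = (gamma(z + 1 + h) - gamma(z + 1 - h)) / (2 * h)
# Formula (3) at z = 3: Gamma'(4) = 3! * (-gamma + H_3)
closed = gamma(z + 1) * (-EULER_GAMMA + 1 + 1 / 2 + 1 / 3)
assert abs(numeric - closed) < 1e-4
```
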
|
1,132,063 | <p>For $x=(x_j)_{j\in\mathbb N}\in \ell^1$ let</p>
<p>$$\|x\|=\sup_{n\in \mathbb N}\left \Vert \sum_{j=1}^{n}x_j\right\Vert$$</p>
<p>Show that $(\ell^1,\|\cdot\|)$ is a normed space, but it is not complete.</p>
<p>The first part was easy.</p>
<p>Now I try to find a sequence in $\ell^1$ such that it is a cauchy sequence, but not convergent.</p>
<p>Let me choose (try) $x_n=\frac{n}{j^2}$. For a fixed $n$ it is in $\ell^1$ because $\sum_{j=1}^{\infty} \frac{1}{j^2}$ converges.</p>
<p>WLOG $n>m$:</p>
<p>$$\|x_n-x_m\|=\sup_{k\in \mathbb N} \left \vert \sum_{j=1}^{k}\frac{n-m}{j^2} \right\vert = \sup_{k\in \mathbb N} \sum_{j=1}^{k}\frac{n-m}{j^2}$$</p>
<p>Okay, so it seems to me that this is not even a Cauchy sequence.</p>
<p>Can someone help me? What kinds of sequences should I consider when I am facing problems like this?</p>
| Pedro | 23,350 | <p>A normed space is Banach if and only if whenever a series converges absolutely, it converges. Try to find a sequence $(x_n)$ with $$\sum \lVert x_n\rVert<\infty$$ but that cannot possibly converge. </p>
|
2,165,759 | <p>I am solving the following question</p>
<p>$$\int\frac{\sin x}{\sin^{3}x + \cos^{3}x}dx.$$</p>
<p>I have been able to reduce it to the following form by dividing the numerator and denominator by $\cos^{3}x$ and then substituting $t = \tan x$, which gives the integral below. Should I use partial fractions to integrate it further, or is there another way? </p>
<p>$$\int\frac{t}{t^3 + 1}dt.$$</p>
| user326159 | 326,159 | <p>$\textbf{Hint.}$ Firstly, </p>
<p>$$f(t)=\frac{t}{t^3+1}=\frac{t}{(t+1)(t^2-t+1)}=\frac{t+1}{3(t^2-t+1)}-\frac{1}{3(t+1)}$$</p>
<p>The second integral is immediate (a logarithm). The first needs more work, but it can be reduced to the integral of a logarithm plus an $\arctan$.</p>
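The decomposition can be verified numerically at a few sample points (an added sketch, not part of the original answer):

```python
def lhs(t):
    return t / (t ** 3 + 1)

def rhs(t):
    # Partial-fraction decomposition from the hint
    return (t + 1) / (3 * (t * t - t + 1)) - 1 / (3 * (t + 1))

for t in (-0.5, 0.0, 0.3, 2.0, 7.0):
    assert abs(lhs(t) - rhs(t)) < 1e-12
```
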
|
2,165,759 | <p><em>(same question as above)</em></p>
| Jack D'Aurizio | 44,121 | <p>We have that $-1,\omega,\omega^{-1}$ are the roots of $t^3+1$. In particular, $\frac{t}{t^3+1}$ can be represented as
$$ \frac{t}{t^3+1} = \frac{A}{t+1}+\frac{B}{t-\omega}+\frac{C}{t-\omega^{-1}}$$
with $A+B+C=0$ and
$$ A = \lim_{t\to -1}\frac{t(t+1)}{t^3+1} = \lim_{t\to -1}\frac{t}{t^2-t+1} = -\frac{1}{3} $$
hence:
$$ \frac{t}{t^3+1} = -\frac{1}{3}\cdot \frac{1}{t+1}+\frac{1}{3}\cdot\frac{t+1}{t^2-t+1} $$
and</p>
<blockquote>
<p>$$ \int\frac{t}{t^3+1}\,dt = C+\frac{1}{\sqrt{3}}\,\arctan\left(\frac{2t-1}{\sqrt{3}}\right)-\frac{1}{3}\,\log(1+t)+\frac{1}{6}\,\log(1-t+t^2). $$</p>
</blockquote>
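A good guard against sign slips in partial-fraction work is to differentiate the closed form numerically. This added sketch checks the antiderivative $\frac{1}{\sqrt3}\arctan\frac{2t-1}{\sqrt3}-\frac13\log(1+t)+\frac16\log(1-t+t^2)$:

```python
from math import atan, log, sqrt

def F(t):
    # Candidate antiderivative of t/(t^3 + 1), valid for t > -1
    return (atan((2 * t - 1) / sqrt(3)) / sqrt(3)
            - log(1 + t) / 3
            + log(t * t - t + 1) / 6)

def f(t):
    return t / (t ** 3 + 1)

h = 1e-6
for t in (-0.5, 0.0, 0.7, 2.0, 10.0):
    deriv = (F(t + h) - F(t - h)) / (2 * h)   # central difference
    assert abs(deriv - f(t)) < 1e-6
```
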
|
221,712 | <p>I have two matrices <code>A</code> and <code>B</code> of equal dimensions, see below. Matrix <code>A</code> contains the variables <code>a,b,c,d</code>, which correspond element-by-element, row by row, with matrix <code>B</code>. In other words, for the first row <code>{a, b, c, d}</code> we have <code>{2, 9, 6, 7}</code>, so <code>a=2, b=9, c=6 and d=7</code>; similarly for the other rows of both matrices. </p>
<pre><code>A={{a, b, c, d}, {d, c, b, a}, {a, c, b, d}};
B={{2, 9, 6, 7}, {11, 3, 5, 12}, {12, 4, 1, 4}};
</code></pre>
<p>After mapping these two matrix, I want to perform simple mathematical operations (addition and subtraction). For example, for first row:</p>
<pre><code>x1=a-d=2-7=-5
y1=b-a=9-2=7
</code></pre>
<p>similarly for the second row, </p>
<pre><code>x2=a-d=12-11=1
y2=b-a=5-12=-7
</code></pre>
<p>I can map these two matrices with <code>Map[A,B]</code>, but I don't know how to map each element of both matrices. Is there a way to map each element and then, using a loop, evaluate <code>a-d, b-a</code> for each row?</p>
<p>Thanks in Advance </p>
| WReach | 142 | <p>Here is a way using <code>ReplaceAll</code> (<code>/.</code>):</p>
<pre><code>{a - d, b - a} /. MapThread[Rule, {A, B}, 2]
(* {{-5, 7}, {1, -7}, {8, -11}} *)
</code></pre>
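For readers outside Mathematica, here is the same row-wise mapping sketched in Python with per-row dictionaries (my own illustrative translation, not part of the original answer):

```python
# Symbolic matrix A and numeric matrix B correspond row by row.
A = [["a", "b", "c", "d"], ["d", "c", "b", "a"], ["a", "c", "b", "d"]]
B = [[2, 9, 6, 7], [11, 3, 5, 12], [12, 4, 1, 4]]

result = []
for syms, nums in zip(A, B):
    env = dict(zip(syms, nums))   # e.g. {'a': 2, 'b': 9, 'c': 6, 'd': 7}
    result.append([env["a"] - env["d"], env["b"] - env["a"]])

assert result == [[-5, 7], [1, -7], [8, -11]]
```
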
|
2,222,215 | <p>Determine whether the difference of the following two series is convergent or not, and prove your answer: $$
\sum_{n=1}^\infty \frac{1}{n} $$ and $$\sum_{n=1}^\infty \frac{1}{2n-1} $$</p>
<p>What I tried: I claimed that the difference of the two series is divergent. My proof is as follows. Take the difference of the two series to get $$\sum_{n=1}^\infty \frac{1}{n} -\sum_{n=1}^\infty \frac{1}{2n-1} = \sum_{n=1}^\infty \frac{n-1}{n(2n-1)}$$
But it is difficult to prove directly that $\sum_{n=1}^\infty \frac{n-1}{n(2n-1)}$ is divergent. So I tried proving it by contradiction, assuming that it is convergent. Rearranging the above equation we have $\sum_{n=1}^\infty \frac{1}{n} =\sum_{n=1}^\infty \frac{1}{2n-1} + \sum_{n=1}^\infty \frac{n-1}{n(2n-1)}$, and since $\sum_{n=1}^\infty \frac{n-1}{n(2n-1)}$ is convergent by our assumption and $\sum_{n=1}^\infty \frac{1}{2n-1}$ is also convergent (this needs to be proven), the sum of both series would also have to be convergent, contradicting the fact that $\sum_{n=1}^\infty \frac{1}{n}$ is divergent, and thus proving the statement. Is my proof correct, and is there a better proof? Could anyone explain the proof to me? Thanks</p>
| John Bentin | 875 | <p>The question is not meaningful: The difference of the series is not defined since one (actually, each one) of them is not defined.</p>
<p>If, nevertheless, you insist on going ahead and "subtracting" them, then why not try it in the following artistic way? $$\sum_{n=1}^\infty \frac{1}{n}-\sum_{n=1}^\infty \frac{1}{2n-1}=$$$$\frac11-\left(\frac11\right)+\frac12-\left(\frac13+\frac15\right)+\frac13-\left(\frac17+\frac19+\frac1{11}\right)+
\frac14-\left(\frac1{13}+\frac1{15}+\frac1{17}+\frac1{19}\right)+\cdots,$$or, more formally,$$\sum_{n=1}^\infty \frac{1}{n}-\sum_{n=1}^\infty \frac{1}{2n-1}=\sum_{n=1}^\infty\left(\frac{1}{n}-\sum_{k=(n^2-n)/2+1}^{(n^2+n)/2} \frac{1}{2k-1}\right).$$The series on the right-hand side is well defined. It converges quite nicely to a smallish negative quantity.</p>
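The grouped series on the right can be summed numerically (an added sketch, not part of the original answer); its partial sums do settle at a smallish negative value, around $-0.058$:

```python
def partial_sum(N):
    # n-th group: 1/n minus the n odd-harmonic terms with
    # k = (n^2 - n)/2 + 1, ..., (n^2 + n)/2
    total = 0.0
    for n in range(1, N + 1):
        lo = (n * n - n) // 2 + 1
        hi = (n * n + n) // 2
        total += 1.0 / n - sum(1.0 / (2 * k - 1) for k in range(lo, hi + 1))
    return total

s = partial_sum(2000)
assert -0.07 < s < -0.05   # a smallish negative limit
```
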
|
2,555,815 | <p><strong>Problem</strong></p>
<p>Let $a_{0}(n) = \frac{2n-1}{2n}$ and $a_{k+1}(n) = \frac{a_{k}(n)}{a_{k}(n+2^k)}$ for $k \geq 0.$</p>
<p>The first several terms in the series $a_k(1)$ for $k \geq 0$ are:</p>
<p>$$\frac{1}{2}, \, \frac{1/2}{3/4}, \, \frac{\frac{1}{2}/\frac{3}{4}}{\frac{5}{6}/\frac{7}{8}}, \, \frac{\frac{1/2}{3/4}/\frac{5/6}{7/8}}{\frac{9/10}{11/12}/\frac{13/14}{15/16}}, \, \ldots$$</p>
<p>What limit do the values of these fractions approach?</p>
<p><strong>My idea</strong></p>
<p>I have calculated the series using recursion in C programming, and it turns out that for $k \geq 8$, the first several digits of $a_k(1)$ are $ 0.7071067811 \ldots,$ so I guess that the limit exists and would be $\frac{1}{\sqrt{2}}$.</p>
| Kelenner | 159,886 | <p>Only a remark, not a complete answer.</p>
<p>For $q\in \mathbb{N}$, put $s_2(q)=$ the sum of the digits of the base-two expansion of $q$, i.e. $s_2(3)=2$, $s_2(4)=1$, etc. The following formula can be proven by induction:</p>
<p>$$a_m(n)=\prod_{0\leq q<2^m}\left(1-\frac{1}{2n+2q}\right)^{(-1)^{s_2(q)}}$$ </p>
<p>For $m=0$, we have only $q=0$ to consider, and the formula gives $\displaystyle \frac{2n-1}{2n}$; if the formula is true for $m$, we have
$$a_{m+1}(n)=\prod_{0\leq q<2^m}\left(1-\frac{1}{2n+2q}\right)^{(-1)^{s_2(q)}}\prod_{0\leq q<2^m}\left(1-\frac{1}{2n+2q+2^{m+1}}\right)^{(-1)^{s_2(q)+1}}$$
and as $\{q; 0\leq q<2^{m+1}\}$ is the disjoint union of $\{q; 0\leq q<2^{m}\}$ and $\{q+2^m; 0\leq q<2^{m}\}$, and that if $0\leq q<2^m$, we have $s_2(q+2^m)=s_2(q)+1$, the formula is true for $m+1$.</p>
<p>Now taking the log, using that $-\log(1-x)=x+\dfrac{x^2}2+o(x^2)$, the convergence of the whole infinite product $\displaystyle \prod_{0\leq q}\left(1-\frac{1}{2n+2q}\right)^{(-1)^{s_2(q)}}$ is equivalent to the convergence of the series $\displaystyle \sum_{q\geq 0}\frac{(-1)^{s_2(q)}}{n+q}$. This series does converge: since $s_2(2k+1)=s_2(2k)+1$, consecutive terms of the sign sequence $(-1)^{s_2(q)}$ cancel in pairs, so its partial sums are bounded, and Dirichlet's test applies. </p>
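The induction and the numerics can be cross-checked (an added sketch, not from the original answer): compare the recursive definition of $a_m(n)$ with the closed product, and test the $1/\sqrt2$ guess from the question:

```python
from functools import lru_cache
from math import sqrt

def s2(q):
    # Number of ones in the binary expansion of q
    return bin(q).count("1")

@lru_cache(maxsize=None)
def a(k, n):
    # a_0(n) = (2n-1)/(2n);  a_{k+1}(n) = a_k(n) / a_k(n + 2^k)
    if k == 0:
        return (2 * n - 1) / (2 * n)
    return a(k - 1, n) / a(k - 1, n + 2 ** (k - 1))

def a_product(m, n):
    # Closed product form from the answer
    prod = 1.0
    for q in range(2 ** m):
        factor = 1 - 1 / (2 * n + 2 * q)
        prod *= factor if s2(q) % 2 == 0 else 1 / factor
    return prod

for m in range(6):
    for n in (1, 2, 5):
        assert abs(a(m, n) - a_product(m, n)) < 1e-12

assert abs(a(8, 1) - 1 / sqrt(2)) < 1e-6   # the 1/sqrt(2) guess
```
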
|
796,199 | <p>As far as I know, Brent's method for root finding is said to have superlinear convergence, but I haven't been able to find any more concrete information.</p>
<p>Is its convergence rate known to be at least bounded between some known values?</p>
<p>What is a good bibliographic reference for that?</p>
<p>[EDIT]</p>
<p>Also, another related question (I add it here because it is closely related to the previous one): How many calls to the function does Brent's method make per iteration, on average?</p>
<p>[EDIT]</p>
<p>Thanks to a comment by @Barry Cipra, I've reviewed the original source (Brent, 1971).</p>
<p>This gave me an answer to one of my two questions:</p>
<ul>
<li>Brent's algorithm calls the function whose root is to be found once per iteration.</li>
</ul>
<p>The first question I posted remains open to me, as I am not an expert. As far as I understand, Brent's algorithm combines bisection with inverse quadratic interpolation. Bisection convergence is known to be linear, but I don't know about the convergence rate of inverse quadratic interpolation.</p>
<p>I guess the convergence rate of Brent's method can be considered to be bounded between linear and that of inverse quadratic interpolation. So, the remaining question is: What is the convergence rate of inverse quadratic interpolation?</p>
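Since Brent's method interleaves bisection, secant, and inverse quadratic interpolation steps, a useful baseline is the secant method's order $\phi\approx1.618$. This added sketch (not part of the question) estimates that order empirically, using exact rational arithmetic to avoid float round-off:

```python
from fractions import Fraction
from math import log

def f(x):
    return x * x - 2   # simple root at sqrt(2)

# Secant iteration in exact rational arithmetic.
x0, x1 = Fraction(1), Fraction(2)
log_err = []           # log|f(x_k)|, proportional to the log of the error
for _ in range(10):
    x0, x1 = x1, x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))
    fx = f(x1)
    log_err.append(log(abs(fx.numerator)) - log(fx.denominator))

# Empirical order: log e_{k+1} / log e_k tends to the golden ratio ~1.618.
p = log_err[-1] / log_err[-2]
assert 1.5 < p < 1.75
```
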
| Simply Beautiful Art | 272,831 | <h2>Corrected answer:</h2>
<p>After a lot of testing on my own time, I noticed a particular anomaly when running Brent's method to high precisions. <strong>Brent's method never attains an order of convergence of <span class="math-container">$\mu\approx1.839$</span></strong>. In fact it doesn't attain an order of convergence of <span class="math-container">$1.7$</span>. After spending some time working through the details, I found that Brent's method actually attains an order of convergence of at most <span class="math-container">$\mu^{1/3}\phi^{2/3}\approx1.689$</span> in general. I'll also point out that my implementation of Brent's method is essentially a copy of the one on <a href="https://en.wikipedia.org/wiki/Brent%27s_method" rel="nofollow noreferrer">Wikipedia</a>, which may be incorrect.</p>
<p>As I discuss below in my now incorrect answer, the asymptotic behavior of inverse quadratic interpolation is dependent on the sign of <span class="math-container">$C$</span> (defined below). If <span class="math-container">$C<0$</span> then inverse quadratic interpolation always yields estimates on one side of the root. This leads to secant-like behavior on one side of the root and an order of convergence given by <span class="math-container">$\phi\approx1.618$</span>.</p>
<p>If <span class="math-container">$C>0$</span>, then a much more interesting situation happens. Let <span class="math-container">$a,b,c$</span> be the bracketing point, the best estimate of the root, and the previous value of <span class="math-container">$b$</span> respectively (as done on Wikipedia). Consider the case when <span class="math-container">$b$</span> and <span class="math-container">$c$</span> are on the same sides of the root.</p>
<ol>
<li><p>According to below, the new IQI estimate <span class="math-container">$s$</span> will replace <span class="math-container">$a$</span>.</p>
</li>
<li><p><span class="math-container">$c$</span> is then set to <span class="math-container">$b$</span>.</p>
</li>
<li><p><span class="math-container">$a$</span> is then set to <span class="math-container">$b$</span>.</p>
</li>
<li><p><span class="math-container">$a$</span> and <span class="math-container">$b$</span> are swapped.</p>
</li>
<li><p>Since <span class="math-container">$a=c$</span> now, IQI cannot be used, causing the secant method to be used to compute <span class="math-container">$s$</span> instead of IQI.</p>
</li>
</ol>
<ul>
<li>Suppose <span class="math-container">$s$</span> is on the same side as <span class="math-container">$a$</span>. We consider the opposite case later. (The side that the secant <span class="math-container">$s$</span> lands on is fixed.)</li>
</ul>
<ol start="6">
<li><p><span class="math-container">$c$</span> is then set to <span class="math-container">$b$</span>.</p>
</li>
<li><p><span class="math-container">$a$</span> is then set to <span class="math-container">$s$</span>.</p>
</li>
<li><p><span class="math-container">$a$</span> and <span class="math-container">$b$</span> are swapped, since the new estimate is better than the previous.</p>
</li>
<li><p>Since <span class="math-container">$a=c$</span> again, IQI still can't be used, so another round of the secant method is tried.</p>
</li>
<li><p><span class="math-container">$c$</span> is then set to <span class="math-container">$b$</span>.</p>
</li>
<li><p><span class="math-container">$b$</span> is then set to <span class="math-container">$s$</span>. Note that the secant <span class="math-container">$s$</span> lands on the same side of the root.</p>
</li>
<li><p>IQI is applied again, repeat all previous steps.</p>
</li>
</ol>
<p>Concerning the case when the secant method lands on the same side as <span class="math-container">$b$</span> on step 5-6, we skip to step 12 and the next iteration will enter the above loop.</p>
<p>In total, the above cycle yields one IQI iteration and two secant iterations. Since only the best estimates of the root are used for each case, we can safely confirm that the optimal order of convergence for each method is used. This leads to an order of convergence of <span class="math-container">$\mu\phi^2$</span> over 3 iterations, or an expected <span class="math-container">$\mu^{1/3}\phi^{2/3}$</span> per single iteration.</p>
<hr />
<h2>Erroneous claim:</h2>
<p>It is not actually the case that Brent's method always has an order of convergence of <span class="math-container">$\mu\approx1.839$</span> as given by <a href="https://math.stackexchange.com/a/801161/272831">hardmath's answer</a>. This behavior is similar to what I have described concerning Ridder's method in <a href="https://math.stackexchange.com/a/3805792/272831">this answer</a>. In particular, if</p>
<p><span class="math-container">$$C=\frac16(f^{-1})'''(0)[f'(x_\mathrm{root})]^3$$</span></p>
<p>is positive, then the order of convergence is indeed <span class="math-container">$1.839$</span>. If it is negative, or negative in a neighborhood of the root, then the order of convergence actually drops down to <span class="math-container">$\phi\approx1.618$</span>, which is the speed of the secant method.</p>
<p>This is of course assuming that the root is simple.</p>
|
3,612,351 | <p>It is given that a function f(x) satisfies:
<span class="math-container">$$f(x)=3f(x+1)-3f(x+2)\quad \text{ and } \quad f(3)=3^{1000}$$</span> then find value of <span class="math-container">$f(2019)$</span>.</p>
<p>I further wanted to ask whether there is some general method to solve such equations. The method that I know for solving such questions is to substitute <span class="math-container">$x$</span> with <span class="math-container">$x+1$</span> in the equation, thereby obtaining the new equation
<span class="math-container">$$ f(x+1)=3f(x+2)-3f(x+3)$$</span> then again substitute <span class="math-container">$x$</span> with <span class="math-container">$x+2$</span> in the original equation to get <span class="math-container">$$f(x+2)=3f(x+3)-3f(x+4)$$</span> Do this a couple of times; then, on combining the equations, in most such questions we get some relation like f(x) = f(x+a), but that does not work here. Please share your ideas on how to solve such questions.</p>
| Pekisch | 735,184 | <p>This is a difference equation. In the case of a linear, constant coefficient difference equation we make the following guess:
<span class="math-container">$$ f(x) = r^x $$</span>
Then
<span class="math-container">$$ f(x+1) = r\times r^x$$</span>
and
<span class="math-container">$$ f(x+2) = r^2\times r^x $$</span>
Replacing in the difference equation
<span class="math-container">$$ r^x = 3\times r\times r^x - 3\times r^2\times r^x $$</span>
Factoring out the <span class="math-container">$r^x$</span> and canceling
<span class="math-container">$$ 1 = 3\times r - 3\times r^2 $$</span>
<span class="math-container">$$ 3\times r^2 - 3\times r + 1 = 0 $$</span>
The solutions of this quadratic equation are
<span class="math-container">$$ r_{1,2} = 0.5 \pm \frac{\sqrt{3}}{6}i $$</span>
Replacing in our guess:
<span class="math-container">$$ f(x)= \left(\sqrt{\frac13}\right)^{x}\left[C_1\times \cos\left(\frac{\pi x}{6}\right) + C_2\times\sin\left(\frac{\pi x}{6}\right) \right] $$</span>
This is the general solution to the functional equation. However, there are two indeterminate constants and only one condition <span class="math-container">$f(3)=3^{1000}$</span> so I think that this method does not work here. Because imposing that condition would not yield the values of <span class="math-container">$C_1$</span> and <span class="math-container">$C_2$</span>. Therefore, I think that the way of solving this is the one that Gareth Ma sketched.</p>
|
1,023,193 | <p>Proving this formula
$$
\pi^{2}
=\sum_{n\ =\ 0}^{\infty}\left[\,{1 \over \left(\,2n + 1 + a/3\,\right)^{2}}
+{1 \over \left(\, 2n + 1 - a/3\,\right)^{2}}\,\right]
$$
if $a$ is an even integer such that
$$
a \geq 4\quad\mbox{and}\quad{\rm gcd}\left(\,a,3\,\right) = 1
$$</p>
| Venus | 146,687 | <p>Alternatively, let's consider
$$f(a)=\sum_{n=0}^{\infty }\left(\frac{3}{2n+1-\frac{a}{3}}-\frac{3}{2n+1+\frac{a}{3}}\right)=9\sum_{n=0}^{\infty }\left(\frac{1}{6n+3-a}-\frac{1}{6n+3+a}\right)$$
so that our original sum is $f'(a)$.
$$\begin{align}
f(a)&=9\sum_{n=0}^{\infty }\int_0^1 \left(x^{6n+2-a}-x^{6n+2+a}\right)\,dx\\
&=9\int_0^1\sum_{n=0}^{\infty }\left(x^{6n+2-a}-x^{6n+2+a}\right)\,dx\\
&=9\int_0^1\left(\frac{x^{2-a}-x^{2+a}}{1-x^6}\right)\,dx\\
&=\frac{3}{2}\int_0^1\left(\frac{t^{-\large\frac{3+a}{6}}-t^{-\large\frac{3-a}{6}}}{1-t}\right)\,dt\\
&=\frac{3}{2}\left(\psi\left(\frac{3+a}{6}\right)-\psi\left(\frac{3-a}{6}\right)\right)\\
&=\frac{3\pi}{2}\cot\left(\frac{3-a}{6}\pi\right)
\end{align}$$
See the integral representation and the reflection formula of <a href="http://en.wikipedia.org/wiki/Digamma_function" rel="nofollow">digamma function</a>. Therefore
$$f'(a)=\sum_{n=0}^{\infty }\left[\frac{1}{\left(2n+1-\frac{a}{3}\right)^2}+\frac{1}{\left(2n+1+\frac{a}{3}\right)^2}\right]=\frac{\pi^2}{4}\sec^2\left(\frac{\pi a}{6}\right)$$
and it follows that $f'(a)=\pi^2$ for even $a\ge4$ with $\text{gcd }(a,3)=1$, since then $\sec^2\left(\frac{\pi a}{6}\right)=4$.</p>
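<p>As a numeric sanity check of the closed form (a sketch in Python; the truncation level <code>N</code> is a choice, and the tail of this positive-term series is $O(1/N)$):</p>

```python
import math

def partial_sum(a, N=200_000):
    # truncation of sum_{n>=0} [1/(2n+1-a/3)^2 + 1/(2n+1+a/3)^2]
    return sum(1/(2*n + 1 - a/3)**2 + 1/(2*n + 1 + a/3)**2 for n in range(N))

def closed_form(a):
    # (pi^2 / 4) * sec^2(pi a / 6)
    return (math.pi**2 / 4) / math.cos(math.pi * a / 6)**2

# a = 4 and a = 8 are even, >= 4, and coprime to 3; sec^2(pi a/6) = 4 there,
# so both values should be pi^2
checks = {a: (partial_sum(a), closed_form(a)) for a in (4, 8)}
```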
|
24,055 | <p>Running this code:</p>
<pre><code>Histogram[{RandomVariate[NormalDistribution[1/4,0.12],100],
RandomVariate[NormalDistribution[3/4, 0.12], 100]},
Automatic, "Probability", PlotRange -> {{0, 1}, {0, 1}},
Frame -> True, PlotRangeClipping -> True,
FrameLabel -> {Style["x axis", 15], Style["probability", 15]}
]
</code></pre>
<p>Gives me the following plot:</p>
<p><img src="https://i.stack.imgur.com/jYSLN.png" alt="enter image description here"></p>
<p>As you can see, the label on the right ("probability") is not printed correctly. The character "y" is missing. What's going on here?</p>
<p>I am using Mathematica 9.0.0.0. I ran this on two laptops, one with Windows 7 and the other with Windows 8.</p>
<p><strong>Update</strong>: Judging by the comments, this seems to be a bug. So now the question becomes: <strong>Is there a workaround?</strong></p>
<p><strong>Update</strong>: This seems to be bug, so I'll tag as such. In the meantime, see the answers for workarounds.</p>
| Hubble07 | 7,009 | <p>When I ran your code on my system it was fine.
Version No: 9.0.0.0
Platform: Linux x86 (32-bit)
So maybe it's a Windows problem.
Try exporting the image and see if that 'y' is still missing in the exported image.</p>
|
24,055 | <p>Running this code:</p>
<pre><code>Histogram[{RandomVariate[NormalDistribution[1/4,0.12],100],
RandomVariate[NormalDistribution[3/4, 0.12], 100]},
Automatic, "Probability", PlotRange -> {{0, 1}, {0, 1}},
Frame -> True, PlotRangeClipping -> True,
FrameLabel -> {Style["x axis", 15], Style["probability", 15]}
]
</code></pre>
<p>Gives me the following plot:</p>
<p><img src="https://i.stack.imgur.com/jYSLN.png" alt="enter image description here"></p>
<p>As you can see, the label on the right ("probability") is not printed correctly. The character "y" is missing. What's going on here?</p>
<p>I am using Mathematica 9.0.0.0. I ran this on two laptops, one with Windows 7 and the other with Windows 8.</p>
<p><strong>Update</strong>: Judging by the comments, this seems to be a bug. So now the question becomes: <strong>Is there a workaround?</strong></p>
<p><strong>Update</strong>: This seems to be bug, so I'll tag as such. In the meantime, see the answers for workarounds.</p>
| a06e | 534 | <p>I got the following reply from Technical Support @ Wolfram:</p>
<blockquote>
<p>Hello - </p>
<p>Thank you for your email.</p>
<p>Our developers have created a report on this issue and are
investigating the issue.</p>
<p>If you need a workaround for this issue, you have a number of
possibilities beyond what is mentioned in the StackExchange thread.
You could explicitly specify the <code>FontFamily</code> inside your <code>Style</code>
statements:</p>
<pre><code>FrameLabel -> {Style["x axis", 15],
Style["probability", 15, FontFamily -> "Courier"]}
</code></pre>
<p>Or you could change the magnification of the notebook to something
larger, which will often fix the problem.</p>
<p>Karl Isensee</p>
<p>Technical Support</p>
<p>Wolfram Research, Inc.</p>
</blockquote>
|
92,670 | <p>We're learning about domains and set-builder notation in school at the moment, and I want to make sure what I did was right.</p>
<p>My thought process:
\begin{align*}
-\frac12|4x - 8| - 1 &< -1 \\
-\frac12|4x - 8| &< 0 \\
|4x - 8| &> 0
\end{align*}
$x =$ all real numbers.</p>
<p>{real numbers} :</p>
<p><||||||||||[0]|||||||||></p>
<p>{x| x is any real number}</p>
<p>{whole numbers}</p>
<p>... <----[-2]---[-1]---[0]---[1]---[2]---> ...</p>
<p>{x|...-2,-1,0,1,2...}</p>
| Community | -1 | <p>First let's consider how absolute-value is defined:</p>
<p>$$
|a| =
\begin{cases}
a, & \text{if } a \geq 0,
\\ -a, &\text{if } a \lt 0.
\end{cases}
$$</p>
<p>Therefore,</p>
<p>$$4x-8 > 0\phantom{.}$$</p>
<p>or $$4x-8 < 0.$$</p>
<p>Now, solve for $x$ to get the answer:</p>
<p>$$x > 2\phantom{.}$$</p>
<p>or $$x < 2.$$</p>
<p>Note: This is the same as $$ x \neq 2.$$</p>
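<p>A minimal numeric check of the conclusion (Python; the sample points are arbitrary): the inequality $-\frac12|4x-8|-1<-1$ holds exactly when $x\neq 2$.</p>

```python
def satisfies(x):
    # the original inequality from the question
    return -0.5 * abs(4 * x - 8) - 1 < -1

# should be False only at x = 2
results = {x: satisfies(x) for x in [-3, 0, 1.9, 2, 2.1, 5, 100]}
```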
|
2,236,008 | <p>Suppose $Z$ is a Gaussian distribution $N(0,\sigma^2)$. Is there a formula of upper bound for $P(Z\in [a,b])$, or do we know this probability is integral with respect to $\sigma\in \mathbf{R}$?</p>
| Community | -1 | <p>If $b>a$ then the upper bound of integration is $b$. Since the area under a curve can be measured by an integral, and the probability is the area under the density curve, the probability is indeed an integral with respect to $x$.</p>
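<p>Beyond this, one concrete upper bound follows from the fact that the $N(0,\sigma^2)$ density never exceeds $\frac{1}{\sigma\sqrt{2\pi}}$, so $P(Z\in[a,b])\le \frac{b-a}{\sigma\sqrt{2\pi}}$. A sketch in Python comparing this bound with the exact error-function value (the sample cases are arbitrary):</p>

```python
import math

def prob_interval(a, b, sigma):
    # exact P(Z in [a, b]) for Z ~ N(0, sigma^2), via the error function
    Phi = lambda x: 0.5 * (1 + math.erf(x / (sigma * math.sqrt(2))))
    return Phi(b) - Phi(a)

def density_bound(a, b, sigma):
    # the density peaks at 1/(sigma sqrt(2 pi)), so any interval's probability
    # is at most its length times that peak value
    return (b - a) / (sigma * math.sqrt(2 * math.pi))

cases = [(-1.0, 2.0, 0.5), (0.0, 0.1, 1.0), (3.0, 5.0, 2.0)]
bound_holds = all(prob_interval(a, b, s) <= density_bound(a, b, s) + 1e-12
                  for a, b, s in cases)
```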
|
1,556,805 | <p>I'm working in a problem that involves the equation
$$
w(z)=\sqrt{1-z^{2}} \,\, .
$$</p>
<p>I already know that there are two branch points of this function, namely $\pm 1$, so there's a Riemann surface covering the domain of the function where the branch cut runs from $-1$ to $1$, as shown in the figure below.</p>
<p><img src="https://i.stack.imgur.com/ISa9q.jpg" alt="Branch points of the complex function $w(z)=\sqrt{1-z^{2}}$"></p>
<p>My purpose is to expand the function in a Maclaurin series around the point $0$, but I don't know if it's a suitable point to expand that function, I mean, whether there is some kind of problem or inconsistency in expanding around that point. If that's not a suitable point, how could I deform the branch cut line to make it a suitable point for expansion? </p>
<p>Greetings!</p>
| Yiorgos S. Smyrlis | 57,021 | <p>Your idea works fine, since $w$ can be defined as a holomorphic function in the unit disk (for example).</p>
<p>Use the fact that
$$
\sqrt{1-x}
= \sum_{k=0}^{\infty} (-1)^k\binom{1/2}{k}x^k=1+\sum_{k=1}^{\infty}\frac{(-1)^k(1/2)(1/2-1)\cdots (1/2-k+1)x^k}{k!} \\ =1+\sum_{k=1}^{\infty}\frac{(-1)^k(-1)(-3)\cdots (-2k+3)x^k}{2^k k!} \\
=1-\sum_{k=1}^{\infty}\frac{1\cdot 3 \cdots (2k-3)x^k}{2^k k!}=1-\sum_{k=1}^{\infty}\frac{(2k)!\,x^k}{4^{k}(2k-1) (k!)^2} \\
=1-\sum_{k=1}^\infty \frac{1}{2k-1}\binom{2k}{k}\left(\frac{x}{4}\right)^{\!k},
$$
and its radius of convergence is equal to 1.</p>
<p>Then
$$
w(z)=\pm\sum_{n=0}^\infty\frac{(2n)!\,z^{2n}}{4^n(2n-1)(n!)^2},
$$
defines two holomorphic functions in the unit disk, satisfying $w^2(z)=1-z^2$.</p>
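<p>A numeric check of the final series (Python; the truncation level is a choice): inside the unit disk, the truncated series reproduces $\sqrt{1-z^2}$.</p>

```python
import math

def w_series(z, N=200):
    # 1 - sum_{n>=1} (2n)! z^(2n) / (4^n (2n-1) (n!)^2)
    #   = 1 - sum_{n>=1} C(2n, n) z^(2n) / (4^n (2n-1))
    total = 1.0
    for n in range(1, N):
        total -= math.comb(2 * n, n) / (4**n * (2 * n - 1)) * z**(2 * n)
    return total

errs = [abs(w_series(z) - math.sqrt(1 - z * z)) for z in (0.0, 0.3, 0.5, 0.7)]
```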
|
1,181,123 | <blockquote>
<ol>
<li>Find the smallest positive integer such that $80-n$ and $80+n$ are prime numbers. </li>
<li>Find the smallest positive prime number such that $2002-n$ and $2002+n$ are prime numbers.</li>
</ol>
</blockquote>
<p>I cannot think of any way other than trying the numbers one by one,
like trying $2, 3, 5, 7,\ldots$ but it will probably take forever in case the answer is a big number. Any clue, please? </p>
<p>Thanks in advance!</p>
| Joffan | 206,402 | <p>For question 2, note that the question asks for a prime number n, unlike question 1. The same reasoning applies as in <a href="https://math.stackexchange.com/users/83272/fermat">Fermat</a>'s <a href="https://math.stackexchange.com/a/1181141/206402">answer</a> in terms of mod 3 analysis:</p>
<p>$2002 \equiv 1 \bmod 3$, therefore</p>
<ul>
<li>for $n \equiv 1 \bmod 3,$ we have $2002-n \equiv 0 \bmod 3$ (only prime if $2002-n=3$)</li>
<li>for $n \equiv 2 \bmod 3,$ we have $ 2002+n \equiv 0 \bmod 3$ (never prime)</li>
<li>for $n \equiv 0 \bmod 3,$ we have $ 2002\pm n \equiv 1 \bmod 3$ (only $n=3$ is prime)</li>
</ul>
<p>We can only possibly have primes for these for the cases where either $n=3$ or $2002-n=3$. In either case, if $2002-3=1999$ were not prime, there would definitely be no solutions - but it is. So we can check just the two cases, $n=3$ and $n=1999$. $2005$ is not a prime, but $4001$ is, giving the only solution of $n=1999$.</p>
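<p>A brute-force confirmation in Python (the helper <code>is_prime</code> is my own; question 1 is included for completeness, recalling that $n$ there only has to be an integer, not a prime):</p>

```python
def is_prime(m):
    # trial division, enough for numbers of this size
    if m < 2:
        return False
    if m % 2 == 0:
        return m == 2
    d = 3
    while d * d <= m:
        if m % d == 0:
            return False
        d += 2
    return True

# Question 1: smallest positive integer n with 80 - n and 80 + n both prime
n1 = next(n for n in range(1, 79) if is_prime(80 - n) and is_prime(80 + n))

# Question 2: smallest positive prime n with 2002 - n and 2002 + n both prime
n2 = next(n for n in range(2, 2002) if is_prime(n)
          and is_prime(2002 - n) and is_prime(2002 + n))
```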
|
723,570 | <p>In the proof of Theorem 6.11, $\varphi$ is uniformly continuous and hence for arbitrary $\epsilon > 0$ we can pick $\delta > 0$ s.t. $\left|s-t\right| \leq \delta$ implies $\left|\varphi\left(s\right)-\varphi\left(t\right)\right|<\epsilon$. However, I do not understand why he claims that $\delta < \epsilon$. Help?</p>
<p>Statement of the theorem: Suppose $f\colon\left[a,b\right]\rightarrow \mathbb{R}$ is Riemann integrable on $\left[a,b\right]$, $m\leq f \leq M$ (for $m,M\in\mathbb{R}$), $\varphi\colon \left[m,M\right]\rightarrow \mathbb{R}$ is continuous on $\left[m,M\right]$. Let $h\equiv\varphi\circ f$. Then $h$ is Riemann integrable on $\left[a,b\right]$.</p>
| orion | 137,195 | <p>The question is basically just a sieve, similar to looking for primes.</p>
<p>The period that visits all combinations of said numbers is long ($2\times 3\times 5 \times 7=210$), so whatever you do, it won't be much quicker than the brute-force method.</p>
<p>For instance, starting with 2 and 5, you have the ones that are NOT divisible by any of them like this:</p>
<p>[201,203,207,209],211,213,217,219,...</p>
<p>You can just repeat this pattern of 4.</p>
<p>When you start again for 3 and only keep those that aren't divisible by it, the pattern repeats each 2*3*5=30 numbers, but only 8 candidates are left in each period:</p>
<p>[203,209,211,217,221,223,227,229],233,239,...</p>
<p>So the next step, when you test for divisibility of 7, you have less work to do (but as said, not much less). What remains are the numbers that are not divisible by any of them. The rest are solutions to your problem.</p>
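<p>The sieve pattern described above can be reproduced in a few lines (Python; the ranges are chosen to match the lists above):</p>

```python
# odd numbers above 200 with no factor of 5: the "pattern of 4" per decade
no_2_5 = [n for n in range(201, 221) if n % 2 and n % 5]

# additionally dropping multiples of 3 leaves 8 candidates per period of 30
no_2_3_5 = [n for n in range(201, 231) if n % 2 and n % 3 and n % 5]

# the last sieve step: drop multiples of 7 as well; the combined pattern
# only repeats every 2 * 3 * 5 * 7 = 210 integers
survivors = [n for n in no_2_3_5 if n % 7]
```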
|
3,306,747 | <p>Here is my attempt </p>
<p>h = 3k -7 ----(1)</p>
<p>(h-1)^2 + (k -1)^2 = 10/4</p>
<p>(h-1)^2 + (3h - 8)^2 = 10/4</p>
<p>This second one isn't working. Is my approach wrong?</p>
<p>P.S.: Sorry for the typo. Also, I assumed the center is C(h,k).</p>
| Zacky | 515,527 | <p><span class="math-container">$$S=2\int_{0}^{1}\frac{x}{1-x^2}\left(\frac{\pi^2}{2}-2\arcsin^2(x)\right)dx\overset{IBP}=-4\int_0^1 \frac{\arcsin x\ln(1-x^2)}{\sqrt{1-x^2}}dx$$</span>
<a href="https://math.stackexchange.com/questions/292468/fourier-series-of-log-sine-"><span class="math-container">$$\overset{x=\sin t}=-8\int_0^\frac{\pi}{2} t \ln(\cos t)dt=8 \ln 2 \int_0^\frac{\pi}{2}t dt+8\sum_{n=1}^\infty \frac{(-1)^n}{n}\int_0^\frac{\pi}{2} t\cos(2n t)dt$$</span></a>
<span class="math-container">$$={\pi^2}\ln 2+2\sum_{n=1}^\infty \frac{1-(-1)^n}{n^3}=\boxed{\pi^2 \ln 2 +\frac72 \zeta(3)}$$</span></p>
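<p>A numeric double check of the last steps (Python; the midpoint rule and the truncation levels are my choices): the odd-$n$ part of the sum equals $\frac72\zeta(3)$, and the boxed value matches $-8\int_0^{\pi/2}t\ln(\cos t)\,dt$.</p>

```python
import math

N = 400_000
zeta3 = sum(1 / n**3 for n in range(1, N))

# 2 * sum_{n>=1} (1 - (-1)^n)/n^3 = 4 * sum over odd n = 4 * (7/8) zeta(3)
odd_part = 2 * sum((1 - (-1)**n) / n**3 for n in range(1, N))

# midpoint rule for -8 * integral_0^{pi/2} t ln(cos t) dt
M = 200_000
h = (math.pi / 2) / M
integral = -8 * h * sum((i + 0.5) * h * math.log(math.cos((i + 0.5) * h))
                        for i in range(M))

boxed = math.pi**2 * math.log(2) + 3.5 * zeta3
```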
|
207,040 | <p>Is there some way I can solve the following equation with <span class="math-container">$d-by-d$</span> matrices in Mathematica in reasonable time?</p>
<p><span class="math-container">$$AX+X'B=C$$</span></p>
<p>My solution below calls <code>LinearSolve</code> on a <span class="math-container">$d^2 \times d^2$</span> matrix, which is too expensive for my case (my <span class="math-container">$d$</span> is 1000).</p>
<pre><code>kmat[n_] := Module[{mat1, mat2},
mat1 = Array[{#1, #2} &, {n, n}];
mat2 = Transpose[mat1];
pos[{row_, col_}] := row + (col - 1)*n;
poses = Flatten[MapIndexed[{pos[#1], pos[#2]} &, mat2, {2}], 1];
Normal[SparseArray[# -> 1 & /@ poses]]
];
unvec[Wf_, rows_] := Transpose[Flatten /@ Partition[Wf, rows]];
vec[x_] := Flatten[Transpose[x]];
solveLyapunov2[a_, b_, c_] := Module[{},
dims = Length[a];
ii = IdentityMatrix[dims];
x0 = LinearSolve[
KroneckerProduct[ii, a] +
KroneckerProduct[Transpose[b], ii].kmat[dims], vec[c]];
X = unvec[x0, dims];
Print["error is ", Norm[a.X + Transpose[X].b - c]];
X
]
a = RandomReal[{-3, 3}, {3, 3}];
b = RandomReal[{-3, 3}, {3, 3}];
c = RandomReal[{-3, 3}, {3, 3}];
X = solveLyapunov2[a, b, c]
</code></pre>
<p><em>Edit Sep 30</em>: An approximate solution would be useful as well. In my application <span class="math-container">$C$</span> is the gradient, and <span class="math-container">$X$</span> is the preconditioned gradient, so I'm looking for something that's much better than a "default" solution of <span class="math-container">$X_0=C$</span></p>
| xzczd | 1,871 | <p>First of all, <code>PDE == nv1 + nv2 + nv3 + nv4</code> is obviously wrong, because there already exists a <code>==</code> in your <code>PDE</code>. This is easy to fix of course.</p>
<p>What's confusing is the <code>Power::infy</code> warning. I'm not sure why this pops up, maybe <code>NDSolve</code> fails to notice <code>FiniteElement</code> should be chosen in this case, while <a href="https://mathematica.stackexchange.com/a/140805/1871">it should be able to</a>. Anyway, specifying the method explicitly fixes the problem:</p>
<pre><code>solution = NDSolve[{Subtract @@ PDE == nv4, M[0, x, y] == 1},
M, {x, 0, xf}, {y, 0, 1}, {t, 0, 10},
Method -> {"MethodOfLines", "SpatialDiscretization" -> "FiniteElement"}];
Table[ContourPlot[M[t, x, y] /. solution, {x, 0, xf}, {y, 0, 1}, Contours -> 20,
ColorFunction -> "TemperatureMap", PlotLegends -> Automatic,
PlotLabel -> Row[{"t = ", t}]], {t, 1, 10, 3}]
</code></pre>
<p><a href="https://i.stack.imgur.com/LBlIn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LBlIn.png" alt="enter image description here"></a></p>
<p><code>nv1</code>, <code>nv2</code>, <code>nv3</code> are omitted because zero Neumann value is the default setting of <code>FiniteElement</code>.</p>
|
3,992,495 | <blockquote>
<p><span class="math-container">$\displaystyle b_1=\left\lbrace\frac{12}{9},\frac{12}{9},2\right\rbrace^T,b_2=\{-18,-18,21\}^T$</span> and <span class="math-container">$\displaystyle v_1=\{-1,-1,2\}^T,v_2=\{3,3,-3\}^T$</span>. <span class="math-container">$b_1 \in \operatorname{Span}\{v_1,v_2\} \text{ and } b_2 \in \operatorname{Span}\{v_1,v_2\}$</span>. Can we conclude that (Without performing further verification) <span class="math-container">$\operatorname{Span}\{b_1,b_2\} \subseteq \operatorname{Span}\{v_1,v_2\}$</span>? What about <span class="math-container">$ \operatorname{Span}\{v_1,v_2\} \subseteq \operatorname{Span}\{b_1,b_2\}$</span>?</p>
</blockquote>
<p>As the question has provided, both <span class="math-container">$b_1,b_2$</span> belongs to the span of <span class="math-container">$v_1,v_2$</span>. However, the answer key only pointed out that we can only conclude <span class="math-container">$\operatorname{Span}\{b_1,b_2\} \subset \operatorname{Span}\{v_1,v_2\}$</span> but not <span class="math-container">$ \operatorname{Span}\{v_1,v_2\} \subset \operatorname{Span}\{b_1,b_2\}$</span> without any further explanation.</p>
<p>Is the fact that both <span class="math-container">$b_1 \in \operatorname{Span}\{v_1,v_2\} \text{ and } b_2 \in \operatorname{Span}\{v_1,v_2\}$</span> implies that <span class="math-container">$\operatorname{Span}\{b_1,b_2\} \subseteq \operatorname{Span}\{v_1,v_2\}$</span> ?</p>
<p>More generally, as my question's title suggests, how do I verify if a Span of vectors is a subset of another span of vectors? (I am capable of verifying if a single vector belongs to a span of vectors) but I can't make any connections between the two</p>
| ironX | 534,898 | <p>If <span class="math-container">$b_1 \in \text{Span}(v_1, v_2) $</span> and <span class="math-container">$b_2 \in \text{Span}(v_1, v_2)$</span>, then <span class="math-container">$\text{Span}(b_1, b_2) \subset \text{Span}(v_1, v_2)$</span>.</p>
<p><em>Proof:</em></p>
<p><span class="math-container">$b_1 \in \text{Span}(v_1, v_2) $</span> <span class="math-container">$\implies $</span> <span class="math-container">$b_1 = k_1 v_1 + k_2 v_2$</span>. Similarly, <span class="math-container">$b_2 \in \text{Span}(v_1, v_2)$</span> <span class="math-container">$\implies $</span> <span class="math-container">$b_2 = c_1 v_1 + c_2 v_2$</span>.</p>
<p>Hence, an arbitrary vector <span class="math-container">$w \in \text{Span}(b_1, b_2)$</span> satisfies</p>
<p><span class="math-container">\begin{align}
w &= m_1 b_1 + m_2 b_2\\
&= (m_1 k_1 + m_2 c_1) v_1 + (m_1 k_2 + m_2 c_2 ) v_2
\end{align}</span></p>
<p>This implies <span class="math-container">$w \in \text{Span}(v_1, v_2)$</span>. Hence, <span class="math-container">$\text{Span}(b_1, b_2) \subset \text{Span}(v_1, v_2)$</span>.</p>
<p><em>End Proof</em></p>
<p>If <span class="math-container">$b_1 \in \text{Span}(v_1, v_2) $</span> and <span class="math-container">$b_2 \in \text{Span}(v_1, v_2)$</span>, then <span class="math-container">$\text{Span}(v_1, v_2) \subset \text{Span}(b_1, b_2)$</span> is in general False.</p>
<p><em>Counterexample</em></p>
<p>Take <span class="math-container">$b_1 = b_2 = v_1$</span>. Then, Span<span class="math-container">$(b_1, b_2)$</span> = Span<span class="math-container">$(v_1)$</span>. Hence, <span class="math-container">$\text{Span}(v_1, v_2) \nsubseteq \text{Span}(b_1, b_2)$</span></p>
<p><em>Response to comments:</em></p>
<p>If <span class="math-container">$b_1 \in \text{Span}(v_1, v_2) $</span> and <span class="math-container">$b_2 \in \text{Span}(v_1, v_2)$</span>, then <span class="math-container">$\text{Span}(b_1, b_2) \subset \text{Span}(v_1, v_2)$</span>.</p>
<p>Now if <span class="math-container">$\text{Span}(b_1, b_2) \subset \text{Span}(v_1, v_2)$</span> and dim(Span(<span class="math-container">$b_1, b_2$</span>)) <span class="math-container">$=$</span> dim(Span(<span class="math-container">$v_1, v_2$</span>)), then indeed Span(<span class="math-container">$b_1, b_2$</span>) <span class="math-container">$=$</span> Span(<span class="math-container">$v_1, v_2$</span>). This follows from a general rule that any subspace <span class="math-container">$A$</span> of a vector space <span class="math-container">$V$</span> satisfying dim<span class="math-container">$(A) = $</span> dim(<span class="math-container">$V$</span>) implies that <span class="math-container">$A = V$</span>.</p>
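<p>For the concrete vectors in this question, the membership claims can be verified exactly (Python; the coefficients $k_1=\frac{10}{3},k_2=\frac{14}{9}$ and $c_1=3,c_2=-5$ are my own, obtained by solving the $2\times 2$ systems by hand):</p>

```python
from fractions import Fraction as F

v1 = [F(-1), F(-1), F(2)]
v2 = [F(3), F(3), F(-3)]
b1 = [F(12, 9), F(12, 9), F(2)]
b2 = [F(-18), F(-18), F(21)]

def combo(k1, k2):
    # k1 * v1 + k2 * v2, computed in exact rational arithmetic
    return [k1 * x + k2 * y for x, y in zip(v1, v2)]

b1_in_span = combo(F(10, 3), F(14, 9)) == b1
b2_in_span = combo(F(3), F(-5)) == b2
```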
|
3,992,495 | <blockquote>
<p><span class="math-container">$\displaystyle b_1=\left\lbrace\frac{12}{9},\frac{12}{9},2\right\rbrace^T,b_2=\{-18,-18,21\}^T$</span> and <span class="math-container">$\displaystyle v_1=\{-1,-1,2\}^T,v_2=\{3,3,-3\}^T$</span>. <span class="math-container">$b_1 \in \operatorname{Span}\{v_1,v_2\} \text{ and } b_2 \in \operatorname{Span}\{v_1,v_2\}$</span>. Can we conclude that (Without performing further verification) <span class="math-container">$\operatorname{Span}\{b_1,b_2\} \subseteq \operatorname{Span}\{v_1,v_2\}$</span>? What about <span class="math-container">$ \operatorname{Span}\{v_1,v_2\} \subseteq \operatorname{Span}\{b_1,b_2\}$</span>?</p>
</blockquote>
<p>As the question has provided, both <span class="math-container">$b_1,b_2$</span> belongs to the span of <span class="math-container">$v_1,v_2$</span>. However, the answer key only pointed out that we can only conclude <span class="math-container">$\operatorname{Span}\{b_1,b_2\} \subset \operatorname{Span}\{v_1,v_2\}$</span> but not <span class="math-container">$ \operatorname{Span}\{v_1,v_2\} \subset \operatorname{Span}\{b_1,b_2\}$</span> without any further explanation.</p>
<p>Is the fact that both <span class="math-container">$b_1 \in \operatorname{Span}\{v_1,v_2\} \text{ and } b_2 \in \operatorname{Span}\{v_1,v_2\}$</span> implies that <span class="math-container">$\operatorname{Span}\{b_1,b_2\} \subseteq \operatorname{Span}\{v_1,v_2\}$</span> ?</p>
<p>More generally, as my question's title suggests, how do I verify if a Span of vectors is a subset of another span of vectors? (I am capable of verifying if a single vector belongs to a span of vectors) but I can't make any connections between the two</p>
| Ali Ashja' | 437,913 | <p>As @ironX & @pietro said, and also by the definition of a vector space, a span contains the <span class="math-container">$Span$</span> of any subset of itself. But the converse, as they correctly noted, cannot be concluded. Of course, if you are curious, here is such a condition:</p>
<p><br>If you have <span class="math-container">$\dim(Span\{ v_1, v_2 \}) \leqslant \dim(Span\{ b_1, b_2 \})$</span> in addition to <span class="math-container">$Span\{ b_1, b_2 \} \subseteq Span\{ v_1, v_2 \}$</span>, then you can conclude the converse: <span class="math-container">$Span\{ v_1, v_2 \} \subseteq Span\{ b_1, b_2 \}$</span>, and so the spans become equal, with equal dimension.
<br>In some cases investigating the independence of the vectors can help a lot.</p>
|
1,289,626 | <blockquote>
<p>find the Range of $f(x) = |x-6|+x^2-1$</p>
</blockquote>
<p>$$ f(x) = |x-6|+x^2-1 =\left\{
\begin{array}{c}
x^2+x-7,& x>0 .....(b) \\
5,& x=0 .....(a) \\
x^2-x+5,& x<0 ......(c)
\end{array}
\right.
$$</p>
<p>from eq (b) i got $$f(x)= \left(x+\frac12\right)^2-\frac{29}4 \ge-\frac{29}4$$<br>
and from eq (c) i got $$f(x)= \left(x-\frac12\right)^2+\frac{19}4 \ge\frac{19}4$$<br></p>
<p>and eq(b) tells me that it also passes through 5 and so generalize all this and found its range is $\left[-\frac{29}4 , \infty\right)$</p>
<p>but the graph says its range is $(5, \infty)$</p>
| wythagoras | 236,048 | <p>You should have $x-6<0$, $x-6=0$ and $x-6>0$ respectively. Always look at the entire expression within the absolute value. </p>
<p>Oh, and another thing: While finding such a minimum, you need to check whether it is in the domain. For example, you have $x>0$ (should be $x>6$) for (b), but the minimum is attained at $x=-\frac{1}{2}$, which is outside that domain. </p>
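<p>With the corrected split point $x=6$, a quick numeric check (Python; the grid is my choice) confirms that the overall minimum of $f(x)=|x-6|+x^2-1$ is $\frac{19}{4}$, attained at $x=\frac12$ on the $x<6$ branch:</p>

```python
f = lambda x: abs(x - 6) + x**2 - 1

# scan a grid that contains the candidate vertex x = 1/2 exactly
xs = [i / 1000 for i in range(-10_000, 20_001)]
grid_min = min(f(x) for x in xs)
vertex_value = f(0.5)   # x^2 - x + 5 at its vertex: 19/4
```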
|
3,371,888 | <p><span class="math-container">$$\left(\!\!{{a+b}\choose k}\!\!\right)= \sum_{j=0}^k \left(\!\!{a\choose j}\!\!\right) \cdot \left(\!\!{b\choose {k-j}}\!\!\right)$$</span></p>
<p>I am quite confused about the case of multichoose. I was able to prove this equation if only "n choose k" form was used as both sides would be the k-th coefficients of <span class="math-container">$(1+x)^{a+b}$</span>.</p>
<p>Any help to understand this would be very appreciated. </p>
| epi163sqrt | 132,007 | <p>This binomial identity is an instance of the <em><a href="https://en.wikipedia.org/wiki/Vandermonde%27s_identity#Chu%E2%80%93Vandermonde_identity" rel="nofollow noreferrer">Chu-Vandermonde identity</a></em>.</p>
<blockquote>
<p>We start with the right-hand side. We obtain
<span class="math-container">\begin{align*}
\color{blue}{\sum_{j=0}^k\left(\!\!\binom{a}{j}\!\!\right)\!\!\left(\!\!\binom{b}{k-j}\!\!\right)}
&=\sum_{j=0}^k\binom{a+j-1}{j}\binom{b+k-j-1}{k-j}\tag{1}\\
&=\sum_{j=0}^k\binom{-a}{j}(-1)^j\binom{-b}{k-j}(-1)^{k-j}\tag{2}\\
&=(-1)^k\sum_{j=0}^k\binom{-a}{j}\binom{-b}{k-j}\\
&=(-1)^k\binom{-a-b}{k}\tag{3}\\
&=\binom{a+b+k-1}{k}\\
&\,\,\color{blue}{=\left(\!\!\binom{a+b}{k}\!\!\right)}
\end{align*}</span>
and the claim follows.</p>
</blockquote>
<p><em>Comment:</em></p>
<ul>
<li><p>In (1) we use the definition of the <em><a href="https://en.wikipedia.org/wiki/Multiset#Counting_multisets" rel="nofollow noreferrer">multiset coefficient</a></em>.</p></li>
<li><p>In (2) we use the binomial identity <span class="math-container">$\binom{-p}{q}=\binom{p+q-1}{q}(-1)^q$</span>.</p></li>
<li><p>In (3) we apply the <em><a href="https://en.wikipedia.org/wiki/Vandermonde%27s_identity#Chu%E2%80%93Vandermonde_identity" rel="nofollow noreferrer">Chu-Vandermonde identity</a></em>.</p></li>
</ul>
<p><strong>Note:</strong> We see from (2) and (3) the identity is in terms of generating functions with <span class="math-container">$[z^k]$</span> denoting the coefficient of <span class="math-container">$z^k$</span> in the series:
<span class="math-container">\begin{align*}
[z^{k}](1-z)^{-a-b}=[z^k](1-z)^{-a}(1-z)^{-b}
\end{align*}</span></p>
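<p>The identity is easy to confirm exhaustively for small parameters (Python; <code>multichoose</code> implements the multiset coefficient $\left(\!\binom{n}{k}\!\right)=\binom{n+k-1}{k}$, and the parameter ranges are arbitrary):</p>

```python
from math import comb

def multichoose(n, k):
    # (( n multichoose k )) = C(n + k - 1, k)
    return comb(n + k - 1, k)

def identity_holds(a, b, k):
    lhs = multichoose(a + b, k)
    rhs = sum(multichoose(a, j) * multichoose(b, k - j) for j in range(k + 1))
    return lhs == rhs

all_ok = all(identity_holds(a, b, k)
             for a in range(1, 7) for b in range(1, 7) for k in range(8))
```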
|
1,676,848 | <blockquote>
<p>Given the series </p>
<p>$$ \sum_{n=1}^{\infty} \frac{k(k+1)(k+2)\cdot \cdot \cdot (k + n - 1)x^n}{n!} \quad \quad k \geq 1 $$
Find the interval of convergence.</p>
</blockquote>
<p>I started by applying the Ratio test</p>
<p>$$
\lim_{n\to \infty}\left|\frac{k(k+1)(k+2)\cdot \cdot \cdot (k + n - 1)(k+n)x^{n+1}}{(n+1)!}\cdot \frac{n!}{k(k+1)(k+2)\cdot \cdot \cdot (k + n - 1)x^n}\right|$$</p>
<p>$$\lim_{n\to \infty}\left|\frac{(k+n)x}{(n+1)}\right|$$</p>
<p>to show that the series converges when $|x| \lt 1$.</p>
<p>However, when I test the end points of $(-1,1)$ for convergence, I end up with two series whose convergence I am unable to show. Namely,
$$
\sum_{n=1}^{\infty} \frac{k(k+1)(k+2)\cdot \cdot \cdot (k + n - 1)}{n!}
$$</p>
<p>and
$$
\sum_{n=1}^{\infty} \frac{k(k+1)(k+2)\cdot \cdot \cdot (k + n - 1)(-1)^n}{n!}
$$</p>
<p>How can I show that these two series converge or diverge?</p>
| robjohn | 13,854 | <p>Note that for $k\ge1$, we have
$$
\frac{k(k+1)\cdots(k+n-1)}{n!}=\frac k1\frac{k+1}2\cdots\frac{k+n-1}{n}\ge1
$$
Thus, for $|x|=1$, the terms do not go to $0$.</p>
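<p>For integer $k$ the coefficient equals the binomial coefficient $\binom{n+k-1}{n}\ge 1$, which a short exact computation confirms (Python; the ranges are my choice):</p>

```python
from fractions import Fraction
from math import comb

def coeff(k, n):
    # k (k+1) ... (k+n-1) / n!  in exact arithmetic
    num = 1
    for i in range(n):
        num *= k + i
    den = 1
    for i in range(1, n + 1):
        den *= i
    return Fraction(num, den)

pairs = [(k, n) for k in range(1, 5) for n in range(1, 30)]
equals_binom = all(coeff(k, n) == comb(n + k - 1, n) for k, n in pairs)
at_least_one = all(coeff(k, n) >= 1 for k, n in pairs)
```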
|
2,316,448 | <p>I was working on the infinite sum
$$\sum_{x=1}^\infty \frac{1}{x(2x+1)}$$
and I used partial fractions to split up the fraction
$$\frac{1}{x(2x+1)}=\frac{1}{x}-\frac{2}{2x+1}$$
and then I wrote out the sum in expanded form:
$$1-\frac{2}{3}+\frac{1}{2}-\frac{2}{5}+\frac{1}{3}-\frac{2}{7}+...$$
and then rearranged it a bit:
$$1+\frac{1}{2}-\frac{1}{3}+\frac{1}{4}-\frac{1}{5}+\frac{1}{6}-\frac{1}{7}+...$$
$$2-(1-\frac{1}{2}+\frac{1}{3}-\frac{1}{4}+\frac{1}{5}-\frac{1}{6}+\frac{1}{7}-...)$$
and since the sum inside of the parentheses is just the alternating harmonic series, which sums to $\ln 2$, I got
$$2-\ln 2$$
Which is wrong. What went wrong?
I notice that, in general, this kind of thing happens when I try to evaluate telescoping sums in the form
$$\sum_{x=1}^\infty f(x)-f(ax+b)$$
and I think something is happening when I rearrange it. Perhaps it has something to do the frequency of $f(ax+b)$ and that, when I spread it out to make it cancel out with other terms, I am "decreasing" how many of them there really are because I'm getting rid of the one to one correspondence between the $f(x)$ and $f(ax+b)$ terms?</p>
<p>I can't wrap my head around this. Please help!</p>
| kvicente | 452,277 | <p>This is one of the most astonishing things in mathematics: <strong>every conditionally convergent series</strong> (meaning: that converges but is not absolutely convergent, such as the alternate harmonic series you mentioned) <strong>can be conveniently rearranged to converge to any arbitrary real number, or just diverge.</strong></p>
<p>This is the celebrated Riemann Series Theorem. You can find a quick introduction to this theorem on Wikipedia, but if you want a proof of the theorem, look for Fichtenholz's book on the fundamentals of mathematical analysis. I hope this helps you.</p>
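<p>Both points can be seen numerically (Python; the target value in the rearrangement demo is arbitrary). Direct summation of the original positive-term series gives $2-2\ln 2\approx 0.6137$, not $2-\ln 2$, and a greedy Riemann-style rearrangement of the alternating harmonic series can be steered to any chosen target:</p>

```python
import math

# direct summation of the original series sum 1/(x(2x+1))
N = 2_000_000
direct = sum(1.0 / (x * (2 * x + 1)) for x in range(1, N))
true_value = 2 - 2 * math.log(2)      # the actual sum, about 0.6137
rearranged_value = 2 - math.log(2)    # the value the invalid rearrangement gave

# greedy rearrangement of the alternating harmonic series:
# add positive terms while below the target, negative terms while above
def rearranged_partial(target, steps=100_000):
    pos, neg, s = 1, 2, 0.0
    for _ in range(steps):
        if s <= target:
            s += 1.0 / pos
            pos += 2
        else:
            s -= 1.0 / neg
            neg += 2
    return s

near_arbitrary_target = rearranged_partial(0.2)
```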
|
216,171 | <p>Basically, I have a set of differential equations that I need to solve for exactly 100 different initial conditions (given as lists for each initial condition), and then plot each solution.</p>
<p>Here is some sample code where I have set vrad, vtan, and deltaR (arrays of initial conditions) to an array of length two. So, given the arrays vrad, vtan, deltaR (our initial conditions) I want to be able to essentially do what this code does but for the array of solutions. Cheers!</p>
<p>Edit: I think I've nearly done it. I just need Table to not iterate through every tuple, but instead by index. Does anyone know how to do this?</p>
<pre><code>(* Scaling Quantities *)
V = 200;
R = 10^4;
(* Random Quantities *)
vrad = {0, 5};
vtan = {0, 5};
deltaR = {0, 5};
(* Converting to dimensionless quantities *)
vRadial = (V + vrad)/V;
vTangential = (V + vtan)/V;
r0 = (10^4 + deltaR)/R;
L = r0*vTangential;
(* numerical solution *)
s = Partition[
Flatten@Table[
NDSolve[{r''[t] == r[t]*ϕ'[t]^2 - 1/r[t], ϕ'[t] == d/
r[t]^2, ϕ[0] == a, r[0] == b,
r'[0] == c}, {r, ϕ}, {t, 0, 200}], {a, vTangential/r0}, {b,
r0}, {c, vRadial}, {d, L}], 2]
(* Plotting the solution *)
ParametricPlot[
Evaluate[{r[t]*Cos[ϕ[t]], r[t]*Sin[ϕ[t]]} /. s], {t, 0,
2*Pi}, GridLines -> Automatic, Frame -> True]
</code></pre>
| Mark R | 65,931 | <p>I think this will do what you want:</p>
<pre><code>s = NDSolve[{r''[t] ==
r[t]*\[Phi]'[t]^2 - 1/r[t], \[Phi]'[t] == #[[4]]/r[t]^2, \[Phi][
0] == #[[1]], r[0] == #[[2]],
r'[0] == #[[3]]}, {r, \[Phi]}, {t, 0, 200}] & /@
Transpose[{vTangential/r0, r0, vRadial, L}]
</code></pre>
<p>Your current solution has only 2 values for each of these but it extends to as many as you'd like. </p>
<p>And here is the picture:
<a href="https://i.stack.imgur.com/ydSAd.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ydSAd.png" alt="enter image description here"></a></p>
|
1,223,823 | <p>How can one simplify
$$\arctan\left(\frac{1}{\tan \alpha}\right)?$$
$0<α<\dfrac{\pi}{2}.$ Here is what I tried so far,
$$\arctan\left(\dfrac{1}{\tan \alpha}\right)=θ$$ for some θ.
$$\frac{1}{\tan \alpha}=\tan(θ)$$</p>
<p>I didn't know what to do next because there is no obvious relationship between ${θ}$ and ${α}.$<br>
I am stuck right here; if there were some relation between θ and $\alpha$, that would make it a lot simpler.</p>
| Narasimham | 95,860 | <p>Recognize the complementary angle relation:</p>
<p>$$\arctan\left(\frac{1}{\tanα}\right) =\arctan( \tan (\pi/2-\alpha) ) $$</p>
<p>The general solutions of $\tan\theta=\tan(\pi/2-\alpha)$ are $(\pi/2 - \alpha), (3\pi/2 - \alpha),$ plus co-terminal angles; since $0<\alpha<\pi/2$, only $\pi/2-\alpha$ lies in the range $(-\pi/2,\pi/2)$ of $\arctan$, so the expression simplifies to $\pi/2-\alpha$.</p>
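<p>A one-line numeric confirmation (Python; the sample angles are arbitrary points of $(0,\pi/2)$):</p>

```python
import math

angles = [0.1, 0.5, 1.0, 1.5]
# arctan(1/tan(a)) equals the complementary angle pi/2 - a on (0, pi/2)
errs = [abs(math.atan(1 / math.tan(a)) - (math.pi / 2 - a)) for a in angles]
```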
|
346,432 | <p>I will think of <span class="math-container">$ \mathbb{R}^{n+m}$</span> as <span class="math-container">$\mathbb{R}^n \times \mathbb{R}^m$</span>.</p>
<p>Let <span class="math-container">$ V \subset \mathbb{R}^{n+m}$</span> be open and <span class="math-container">$g:V \to U \subset \mathbb{R}^{n+m} $</span> be a <span class="math-container">$C^1$</span> diffeomorphism. For a fixed <span class="math-container">${y} \in \mathbb{R}^m$</span>, the image <span class="math-container">$g(\mathbb{R}^n \times \{y\})$</span> is an <span class="math-container">$n$</span>-dimensional <span class="math-container">$C^1$</span> manifold, and, similarly, for a fixed <span class="math-container">${x}$</span>, the image <span class="math-container">$g(\{x\} \times \mathbb{R}^m)$</span> is an <span class="math-container">$m$</span>-dimensional <span class="math-container">$C^1$</span> manifold. Let <span class="math-container">$\mathcal{H}^n$</span> and <span class="math-container">$\mathcal{H}^m$</span> be, respectively, the Hausdorff measures on these with respect to the intrinsic metric on them induced from <span class="math-container">$\mathbb{R}^{n+m}$</span>. As mentioned in the comments below, they will be different from fiber to fiber, and, for example, it is not true that all these measures are identifiable.</p>
<p><strong>Original Question:</strong> I wonder if a "Fubini's theorem" can be formulated and proven using integrals on these manifolds directly. I do NOT wish to pull back to <span class="math-container">$V$</span> via <span class="math-container">$g$</span>.</p>
<p>Edit: Initially I stated "I do not want to contaminate my integral with the Jacobian!" In light of comments below, it will be impossible not to bring some type of Jacobian(s) into the picture. Now, it looks obvious: we must take into account how fibers close in on or expand away from one another in different neighborhoods. So, now I reiterate my question allowing this:</p>
<p><strong>Edited Question:</strong> Is there "a Fubini's theorem" that equates an integral over <span class="math-container">$U$</span> to the iterated integrals (of the function, probably multiplied by some Jacobian of the map <span class="math-container">$g$</span>) over these fibers -- against their intrinsic Hausdorff measures?</p>
<p>A cartoon of the sought-for identity will look like: for a continuous real-valued function <span class="math-container">$ \phi: U \to \mathbb{R}$</span>,
<span class="math-container">$$ \int_U \phi \ d\mathcal{L}^{n+m}= \int_{?} \left(\int_{?} \phi(x,y) \cdot Jacobian \ quantities \ from \ g \ d\mathcal{H}^n(x)\right) \ d\mathcal{H}^m(y) \ .$$</span></p>
<p><strong>Note:</strong> I seem to have figured out one such formula but will wait longer for possible alternatives or references to known ones, if any exists.</p>
<p>I have the answer here: <a href="https://mathoverflow.net/questions/350952/fubinis-theorem-on-arbitrary-foliations">Fubini's Theorem on Arbitrary Foliations</a></p>
| Behnam Esmayli | 91,442 | <p>I have the answer here: <a href="https://mathoverflow.net/questions/350952/fubinis-theorem-on-arbitrary-foliations">Fubini's Theorem on Arbitrary Foliations</a></p>
<p><span class="math-container">$$\int_U f = \int_{U_{\eta_0}} \left(\int_{U_\xi} f(\xi,\eta) \frac{|\det DG_{U_\xi} (\xi,\eta)| \cdot |\det DG_{U_{\eta_0}} (\xi,\eta_0)|}{|\det DG(\xi,\eta)|} \ d\mathcal{H}^m(\eta)\right) \ d\mathcal{H}^n(\xi) \ .$$</span></p>
|
2,616,663 | <p>For this proof, after I convert the definite integral into the Riemann sum definition, is it enough to say $\Delta x = \frac{b-a}{n}$, and since $b = a$, $\Delta x$ becomes $0$, thus making everything else equal to $0$, since everything else is being multiplied by zero?</p>
| Renji Rodrigo | 522,531 | <p>We can also use the empty sum convention
$$\sum^0_{k=1} f(k)=0 $$ for every $f$.</p>
<p>A partition has the form
$a=t_0<t_1< \ldots <t_n=b$;
if $a=b$ then $n=0$.</p>
<p>Consider the degenerate interval $[a,a]=\{a\}$: we have $t_{0}=a$ and $t_{n}=a$. We would need $t_{1}>t_{0}=a$, but there is no number greater than $a$ in the interval, so there is only one partition. The lower sum is
$$s(f,P)=\sum^{n=0}_{k=1}m_{k}\Delta t_{k-1}=0 $$
because the sum is empty.
The upper sum is
$$S(f,P)=\sum^{n=0}_{k=1}M_{k}\Delta t_{k-1}=0 $$
In this case the lower sum always equals the upper sum,
and the same holds for the lower and upper integrals. So every function is integrable on this set, with integral $0$:
$$\int^{a}_{a} f(x)dx=0. $$</p>
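The same empty-sum convention shows up naturally in code: a one-point partition of $[a,a]$ produces an empty range, so the sum has no terms (an illustrative sketch, not part of the argument):

```python
# A one-point "partition" [a] of the degenerate interval [a, a] produces an
# empty range, so the Riemann-type sum below is the empty sum and equals 0.
def riemann_sum(f, partition):
    return sum(f(partition[k]) * (partition[k + 1] - partition[k])
               for k in range(len(partition) - 1))

assert riemann_sum(lambda x: x**2 + 5, [2.0]) == 0
assert sum([]) == 0  # the empty-sum convention itself
```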
|
907,879 | <p>Calculate the limit $\lim\limits_{x\to\infty} (a^x+b^x-c^x)^{\frac{1}{x}}$ where $a>b>c>0$.</p>
<p>First,
$$\exp\left( \lim\limits_{x\to\infty} \frac{\ln(a^x+b^x-c^x)}{x} \right)$$</p>
<p>Next,
$$\lim\limits_{x\to\infty} a^x + b^x - c^x = \lim\limits_{x\to\infty} a^x \left[1 + (b/a)^x - (c/a)^x \right] = \infty$$. </p>
<p>Since $\ln(\infty) = \infty$, we may use L'Hopital's rule. The expression inside the exponent is: </p>
<p>$$\lim\limits_{x\to\infty} \frac{a^x\ln(a)+b^x\ln(b)-c^x\ln(c)}{a^x+b^x-c^x}$$</p>
<p>Which again is $\frac{\infty}{\infty}$. Is that the right way?</p>
| André Nicolas | 6,312 | <p>The problem can be solved by Squeezing, using minimal algebraic manipulation. Note that for positive $x$ we have
$$a^x\lt a^x+b^x-c^x\lt 2a^x.$$
Now take the $x$-th roots. We get
$$a\lt (a^x+b^x-c^x)^{1/x}\lt 2^{1/x}a.$$
But $\lim_{x\to\infty} 2^{1/x}=1$, and it's over. </p>
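A numerical illustration of the squeeze (with the arbitrary sample values $a=3$, $b=2$, $c=1$, not from the original problem):

```python
import math

# For every x > 0 with a > b > c > 0:  a^x < a^x + b^x - c^x < 2*a^x,
# so the x-th root is trapped between a and 2**(1/x) * a.
a, b, c = 3.0, 2.0, 1.0
for x in (5.0, 10.0, 20.0):
    val = math.exp(math.log(a**x + b**x - c**x) / x)
    assert a < val < 2 ** (1 / x) * a
    assert abs(val - a) < 0.1  # already close to the limit a = 3
```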
|
702,804 | <p>I just need a sanity check, been thinking about this all morning.</p>
<p>If we use the Mean Value Theorem on a function over the infinite interval (suppose the function's domain is unbounded), i.e.</p>
<p>$$M=\lim\limits_{T \to \infty} \dfrac{1}{2T}\int_{-T}^{T} f(t)\,dt$$</p>
<p>There is no way that M can be finite right? My intuition tells me it's either zero or infinite, but I wanted another opinion; oddly enough, I wasn't able to <a href="http://www.google.com/" rel="nofollow">google</a> it.</p>
<p>Thanks!</p>
| ketan | 122,095 | <p>Solution 1 expands the function $y = \dfrac {1}{2-x}$ about $x=1$, whereas the second solution expands $y = \dfrac {1}{2-x}$ about $x=0$.</p>
|
25,137 | <p>I want to find an intuitive analogy to explain how binary addition (more precise: an adder circuit in a computer) works. The point here is to explain the abstract process of <em>adding</em> something by comparing it to something that isn't abstract itself.</p>
<p>In principle: An everyday object or an action that is structured like or functionally resembles an adder.</p>
<p>Think of a thing that can belong to any number of categories x<sub>1</sub>, x<sub>2</sub>, x<sub>3</sub>, x<sub>4</sub>, x<sub>5</sub>, x<sub>6</sub>, x<sub>7</sub>, x<sub>8</sub> for which the property holds that if you put two objects together/perform two actions simultaneously, and both the objects/actions are of the same category you automatically create an object or perform an action that is of the next higher category that the object doesn't yet belong to, the whole thing therefore implementing the basic functionality of an adder.</p>
<p>(Categories are changing here analogous to the bits in the circuit: 00000001 (1) + 00000001 (1) together, adds up to 00000010 (2).)</p>
<p>But I just can't think of such a situation or an object where this pattern would occur. Whatever analogy I create, the way the categories transform becomes increasingly harder to explain as the number of categories grows, and the metaphor becomes overly specific and unwieldy.</p>
<p>Hence the question:</p>
<p><strong>What's an everyday object that resembles an adder in its basic functionality?</strong></p>
| Wyck | 13,481 | <p>I think if you've played Monopoly you understand that once you get 10 one-dollar bills, you'd rather trade them in for a ten-dollar-bill. And once you get 10 ten-dollar bills, you'd rather trade them in for a hundred-dollar-bill. You're trying to minimize the number of bills you have to manage. It's easier to know how much money you have when you've reduced everything to use the largest bills possible too.</p>
<p>Now suppose the denominations were \$1, \$2, \$4, \$8, \$16. And there's a rule that says you can't have two of the same kind of bill -- e.g. you <em>must</em> trade your two eight-dollar bills in for a sixteen-dollar-bill.</p>
<p>In this situation, if you have a dollar and the bank is going to pay you a dollar, then you intuitively know NOT to accept another one-dollar bill from the bank. Instead, you <strong>give</strong> your one-dollar bill to the bank and they give you back a two-dollar bill.</p>
<p>I think people understand the carry mechanism this way too. Imagine doing 3 + 1. If I have 3 dollars (one one-dollar bill and one two-dollar bill). And someone wants to pay me one dollar, I know that I can't accept a second one-dollar bill, so I must trade it in. I give away my one-dollar bill expecting a two-dollar bill, but I can't accept a second two-dollar bill. There's a beautiful moment where the banker has accepted my one-dollar bill and grabbed a two-dollar bill and is about to hand it to me but I look at my own wallet and <em>refuse</em> to accept the two-dollar bill because I already have one and instead, I hand my two-dollar-bill to the bank, which the banker exchanges for a four-dollar bill.</p>
<p>This moment where the banker hasn't finished giving you your change yet but is just holding a bill of some denomination in front of you for you to decide whether to accept or not is the <em>carry mechanism</em>.</p>
<p>Young children can do this hands-on to get a feel for it. You "pay" the child an amount of money by handing them a bunch of bills (being careful not to have two of any one kind of denomination yourself). Then the child begins checking to see if they have two of anything (if they are breaking the "rule" by accepting a second bill of the same kind.) When they hand you an n-dollar bill, you simply take it away and hand them a 2n-dollar bill instead. This can be done in any order. The child doesn't have to start with their lowest denomination. The child is playing a match game - looking to see if the person paying them is offering them a bill of the same type as something they already have. If they do, they just hand it over and are then offered the next bigger denomination of bill in return.</p>
<p>The child can play as the banker too. They will learn that they will be handed a bill of the same denomination as something they were offered, at which time they must return the two bills to their supply of bills and exchange them for one bill of the next higher denomination.</p>
<p>The analog you are looking for is a person. The person is intuitively alerted to the situation that they are about to have too many of a certain denomination of bill and takes action to resolve it.</p>
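The "no duplicate bills" rule can also be written out as code. This sketch (hypothetical, not from the original answer) shows that trading duplicates upward is exactly the ripple carry of binary addition:

```python
def pay(wallet, bill):
    # Hand `bill` (a power-of-two denomination) to a wallet that refuses
    # duplicates: a duplicate is traded up to the next denomination (the carry).
    while bill in wallet:
        wallet.remove(bill)   # give the duplicate back to the bank...
        bill *= 2             # ...and accept the next denomination instead
    wallet.add(bill)

def add(m, n):
    wallet = {1 << i for i in range(m.bit_length()) if m >> i & 1}
    for i in range(n.bit_length()):
        if n >> i & 1:
            pay(wallet, 1 << i)
    return sum(wallet)

assert add(3, 1) == 4     # the 3 + 1 walkthrough from the text
assert add(13, 9) == 22
```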
|
1,021,753 | <p>Any idea on how to compute the expected value of a product of Ito integrals with two different upper limits?</p>
<p>For example:
$$\mathbb{E}\left[\int_0^r f(t)\,dB(t) \int_0^s f(t)\,dB(t)\right]$$</p>
<p>I only know how to compute it when the upper limits $r$ and $s$ are the same, but I don't know how when $r$ and $s$ are different. Help! </p>
| sds | 37,092 | <p>Your given rotation $R(l,\theta)$ around line $l$ by angle $\theta$ is a composition of two symmetries wrt planes $p_1$ and $p_2$ which intersect along $l=p_1\cap p_2$ at angle $\theta/2$:</p>
<p>$$ R(l,\theta) = S(p_1)\circ S(p_2) = S(p_1)\circ S(p)\circ S(p)\circ S(p_2) $$ </p>
<p>where $p$ is the plane you are interested in (e.g., $xy$) because $S\circ S=\text{Id}$.</p>
<p>Now, let $l_i=p_i\cap p$ and $\theta_i=2\times\angle(p_i,p)$ be double the angle between $p_i$ and $p$ ($i=1,2$).
(You can always select $p_i$ so that neither is parallel to $p$).</p>
<p>Then
$$
\begin{align}
S(p_1)\circ S(p) &= R(l_1,\theta_1) \\
S(p)\circ S(p_2) &= R(l_2,-\theta_2)
\end{align}
$$
and </p>
<p>$$R(l,\theta)=R(l_1,\theta_1)\circ R(l_2,-\theta_2)$$</p>
|
501,660 | <p>In school, we just started learning about trigonometry, and I was wondering: is there a way to find the sine, cosine, tangent, cosecant, secant, and cotangent of a single angle without using a calculator?</p>
<p>Sometimes I don't feel right when I can't do things out myself and let a machine do it when I can't.</p>
<p>Or, if you could redirect me to a place that explains how to do it, please do so.</p>
<p>My dad said there isn't, but I just had to make sure.</p>
<p>Thanks.</p>
| Muralidhar G | 244,309 | <p>Approximate the Taylor series.
The Taylor series takes the angle in radians; converting to degrees and making some approximations, we get simple formulas like
$\sin X \approx 0.017\,X$ for $X<33$ degrees and
$\sin X \approx 0.016\,X$ for $33 < X < 45$.</p>

<p>$\cos X \approx 1-0.000145\, X^2$ for $X<45$ degrees.</p>

<p>Using these two formulas we can calculate sin and cos for any angle, via identities such as $\sin(90+X)$, $\sin(90-X)$, $\cos(270+X)$, and the like,</p>

<p>which gives at least 98% accuracy.</p>
|
501,660 | <p>In school, we just started learning about trigonometry, and I was wondering: is there a way to find the sine, cosine, tangent, cosecant, secant, and cotangent of a single angle without using a calculator?</p>
<p>Sometimes I don't feel right when I can't do things out myself and let a machine do it when I can't.</p>
<p>Or, if you could redirect me to a place that explains how to do it, please do so.</p>
<p>My dad said there isn't, but I just had to make sure.</p>
<p>Thanks.</p>
| MCCCS | 357,924 | <p>Bhaskara's approximation (<a href="https://en.wikipedia.org/wiki/Bhaskara_I's_sine_approximation_formula" rel="noreferrer">Wikipedia</a>) gives an approximation for <span class="math-container">$\sin x^\circ$</span> with absolute error below about <span class="math-container">$0.0017$</span> for <span class="math-container">$0\leq x \leq 180$</span>.</p>
<p><span class="math-container">$$\sin x^\circ \approx \frac{4 x (180-x)}{40500 - x(180-x)}$$</span></p>
<p>The red curve is the approximation, barely seen:</p>
<p><a href="https://i.stack.imgur.com/NxUF0.png" rel="noreferrer"><img src="https://i.stack.imgur.com/NxUF0.png" alt="enter image description here"></a></p>
<p>Here's the difference between the formula and the sin function (maximum at x=11.544):</p>
<p><a href="https://i.stack.imgur.com/YHKnl.png" rel="noreferrer"><img src="https://i.stack.imgur.com/YHKnl.png" alt="enter image description here"></a></p>
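The error bound is easy to check numerically (a quick sketch; the maximum error comes out to roughly $0.0016$, near $x\approx 11.5$, matching the plot above):

```python
import math

def bhaskara_sin(x_deg):
    # Bhaskara I's rational approximation for sin of x degrees, 0 <= x <= 180
    return 4 * x_deg * (180 - x_deg) / (40500 - x_deg * (180 - x_deg))

worst = max(abs(bhaskara_sin(x / 10) - math.sin(math.radians(x / 10)))
            for x in range(0, 1801))
assert worst < 0.002  # the maximum error is about 0.0016, near x = 11.5
```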
|
501,660 | <p>In school, we just started learning about trigonometry, and I was wondering: is there a way to find the sine, cosine, tangent, cosecant, secant, and cotangent of a single angle without using a calculator?</p>
<p>Sometimes I don't feel right when I can't do things out myself and let a machine do it when I can't.</p>
<p>Or, if you could redirect me to a place that explains how to do it, please do so.</p>
<p>My dad said there isn't, but I just had to make sure.</p>
<p>Thanks.</p>
| richard1941 | 133,895 | <p>I like continued fractions. For example, if x is in (-pi/2, pi/2), as mentioned above for power series, </p>
<p>sin(x) = x/1+ x^2/6+ x^2, i.e. x/(1 + x^2/(6 + x^2)) with the nested fractions written out, which matches the first few terms of the Taylor series given above.</p>
<p>tan(x) = x/1- x^2/3- x^2/5- x^2/7, i.e. x/(1 - x^2/(3 - x^2/(5 - x^2/7))) with the nested fractions written out. This also matches the first 8 terms of the Taylor series for tan(x). </p>
<p>These can be easily converted into rational functions (a polynomial divided by another polynomial) so that only one division is required. </p>
<p>Continued fractions are always worth a try because often, when they match the first few terms of the power series, the remaining terms of their power series are very close to those of the desired function. That is way better than setting those terms to zero, as when you truncate the power series.</p>
<p>Use of a calculator with CAS is extremely helpful in converting power series into continued fractions. Mine is an HP-Prime. </p>
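Reading the shorthand as nested fractions, the tan expansion can be checked numerically (a sketch under that interpretation of the notation, which is the standard truncation of Lambert's continued fraction):

```python
import math

# Reading "x/1- x^2/3- x^2/5- x^2/7" as the nested fraction
# x / (1 - x^2/(3 - x^2/(5 - x^2/7))), a truncation of Lambert's
# continued fraction for tan(x).
def tan_cf(x):
    x2 = x * x
    return x / (1 - x2 / (3 - x2 / (5 - x2 / 7)))

for x in (0.2, 0.5, 1.0):
    assert abs(tan_cf(x) - math.tan(x)) < 5e-4
```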
|
351,642 | <p>So I'm proving that a group $G$ with order $112=2^4 \cdot 7$ is not simple. And I'm trying to do this in extreme detail :) </p>
<p>So, assume simple and reach contradiction. I've reached the point where I can conclude that $n_7=8$ and $n_2=7$. </p>
<p>I let $P, Q\in \mathrm{Syl}_2(G)$ and now dealing with cases that $|P\cap Q|=1, 2^2, 2^3$ or $2^4$. </p>
<p>I easily find contradiction when $|P\cap Q|=2^4$ and $2$. </p>
<p>Um, got stuck REAL bad on the case $|P\cap Q|=2^3$ and $2^2$. </p>
<p>If $|P \cap Q |=2^3= 8$ and $|P|=|Q|=16$, is there any relationship between $P,Q$ and their intersection that can help me? </p>
| Mikko Korhonen | 17,384 | <p>If $G$ is a simple group, it must have exactly $7$ Sylow $2$-subgroups. Thus $G$ embeds into $S_7$, and in particular into $A_7$ since $G$ does not have a subgroup of index $2$. But the order $A_7$ is not divisible by $112$.</p>
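The two order computations used here are quick to verify (an arithmetic sanity check, not part of the group-theoretic argument):

```python
from math import factorial

order_G = 2**4 * 7                  # |G| = 112
order_S7 = factorial(7)             # G acts on its 7 Sylow 2-subgroups
order_A7 = order_S7 // 2

assert order_G == 112
assert order_S7 % order_G == 0      # 5040 = 45 * 112: S_7 alone is no obstruction
assert order_A7 % order_G != 0      # but |A_7| = 2520 is not divisible by 112
```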
<p>If you want to go along the lines of your original idea, you can rule out the case $|P \cap Q| = 2^3$ by noticing that then $P \cap Q$ is normal in $P$ and $Q$ (as a subgroup of index $2$), so $N_G(P \cap Q)$ contains $P$ and $Q$, which implies that $N_G(P \cap Q) = G$. </p>
<p><strong>ADDED:</strong> I'm not sure if there is an easy way to deal with rest of the cases. However, there is a nice argument which also works for proving that every group of order $p^n q$ ($p$, $q$ distinct primes) is nonsimple. I believe the idea of the proof goes back to G. A. Miller (around 1900-1910). Here's an illustration of it in this case.</p>
<p>Suppose that $G$ is a simple group of order $112$. Then $G$ has exactly $7$ Sylow $2$-subgroups. Let $P, Q \in Syl_2(G)$ be such that $P \neq Q$ and that $D = P \cap Q$ has largest possible order. Steps for the proof:</p>
<ol>
<li><p>Using the fact that $D < N_P(D)$ and $D < N_Q(D)$ (proper inclusion), prove that $N_G(D)$ cannot be a $2$-group.</p></li>
<li><p>Thus $D$ is normalized by an element $g \in G$ of order $7$. Prove that $P, gPg^{-1}, \ldots, g^6Pg^{-6}$ are distinct. Conclude that $D$ is contained in every Sylow $2$-subgroup.</p></li>
<li><p>Since the intersection of all Sylow $2$-subgroups is normal, $D$ is trivial.</p></li>
<li><p>By counting elements in Sylow $2$-subgroups, prove that $G$ contains exactly one Sylow $7$-subgroup. </p></li>
</ol>
<p>This same argument works for proving the statement for groups of order $p^n q$.</p>
|
4,579,084 | <p>It was a new contributor's question. I answered, got my -1 again and then deleted. Then I asked myself. Then gave it up again. Actually I was gonna ask a different question NOW. When I pressed ask a question, to my surprise, the question I intended to ask yesterday was in the memory!</p>
<p>I wanted to evaluate the following limit by logarithmic limit rule:
<span class="math-container">$$\lim_{n\rightarrow\infty} \left(\frac{n^{n-1}}{(n-1)!}\right)^{\frac{1}{n}}=\exp\left(\lim_{n\rightarrow\infty}\frac{(n-1)\ln n-\ln (n-1)!}{n}\right)=\exp\left(\lim_{n\rightarrow\infty}-\frac{1}{n}\sum_{k=1}^n\ln(\frac{k}{n})\right)$$</span>
Then I recognized a Riemann sum of an improper integral inside, so the limit is
<span class="math-container">$$\exp\left(-\int_0^1\ln x\,dx\right)=\exp\left(\bigl[x-x\ln x\bigr]_0^1\right)=e.$$</span>
Is my solution correct? Can you suggest another way? Stirling's approximation formula is excluded.</p>
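For reassurance, a quick numerical check (not a proof) that the value is $e$:

```python
import math

def a(n):
    # (n**(n-1) / (n-1)!)**(1/n), computed in logs; lgamma(n) = ln((n-1)!)
    return math.exp(((n - 1) * math.log(n) - math.lgamma(n)) / n)

assert a(100) < a(10_000) < a(1_000_000) < math.e   # creeps up toward e
assert abs(a(1_000_000) - math.e) < 1e-4
```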
| Peter Leopold | 517,642 | <p>You illustrate one case where the permutation is not <span class="math-container">$i$</span>-orderly, but there is another where it is, if you take <span class="math-container">$m=1$</span> and <span class="math-container">$A_1=\{1,2\}$</span> and <span class="math-container">$B_1=\{3\}$</span>. What about <span class="math-container">$\pi([1,2,3]=[2,1,3]$</span>? That permutation is also <span class="math-container">$i$</span>-orderly <span class="math-container">$-$</span> isn't it? <span class="math-container">$-$</span> because every element of <span class="math-container">$\{2,1\}$</span> is less than every element of <span class="math-container">$\{3\}$</span>.</p>
<hr />
<p>Stepping back: There are <span class="math-container">${n\choose k}$</span> subsets from which to choose <span class="math-container">$m$</span> values for the set of subsets <span class="math-container">$A$</span>. There are <span class="math-container">${n \choose l}$</span> subsets from which to choose another wholly-unrelated <span class="math-container">$m$</span> values for the list <span class="math-container">$B$</span>. Clearly <span class="math-container">${n\choose k} \ne {n \choose l}$</span> if <span class="math-container">$l\ne k$</span>, so there is no natural definition of <span class="math-container">$m$</span>. So we can take <span class="math-container">$m$</span> to be any number <span class="math-container">$ \le \min\{ {n\choose k} , {n\choose l}\}$</span>, right? So, <span class="math-container">$m=1$</span> is an arbitrary and valid choice by your rules, isn't it? And then we can take any <span class="math-container">$m=1$</span> element of either set of sets to compare. Furthermore, when we permute <span class="math-container">$[n]$</span>, we need only take two <span class="math-container">$i$</span> orderly permutations to invalidate the conjecture that at most 1 permutation is <span class="math-container">$i$</span>-orderly. OK, the table is set.</p>
<p>Let <span class="math-container">$n=[6], ~k=3, ~l=2, ~m=[1], A_1=\{1,2,3\}, B_1=\{4,5\}. A_1 \cap B_1 = \emptyset.$</span> Since <span class="math-container">$m=[1]$</span> there are no other cases to consider. Clearly, <span class="math-container">$a<b \text{ for every } a \in A_1, b \in B_1.$</span> The initial permutation <span class="math-container">$\pi_0[n]= 1,2,3,4,5,6$</span> is <span class="math-container">$i=1$</span>-orderly according to your definition of <span class="math-container">$i$</span> for <span class="math-container">$i=1$</span>. But so is <span class="math-container">$S_n=[1,2,3,\pi[4,5,6]].$</span> The cardinality of the set of i-orderly permutations is easily greater than 1.</p>
<p>If we try to prove a theorem and end up disproving it by finding a counter example (this time very easily) then perhaps the meanings of words aren't clear. I take</p>
<blockquote>
<p>For all <span class="math-container">$i \in [m]$</span>, a permutation in <span class="math-container">$S_n$</span> is considered <span class="math-container">$i$</span>-orderly if permutation(<span class="math-container">$a$</span>) < permutation(<span class="math-container">$b$</span>) for all <span class="math-container">$a \in A_i$</span> and <span class="math-container">$b \in B_i.$</span></p>
</blockquote>
<p>to mean that the sets <span class="math-container">$A_1=\{1,2,3\}$</span> corresponds to the first three elements of the null/default permutation and <span class="math-container">$B_1=\{4,5\}$</span> corresponds to the 4th and 5th elements of the null/default permutation of <span class="math-container">$\pi[n]$</span>. <span class="math-container">$\pi_0[1,2,3,4,5,6] [4,5]=\{4,5\}$</span>, but under another permutation of [1,2,3,4,5,6], viz.[1,2,3,4,6,5], <span class="math-container">$B_1 [4,5] = \pi_x[1,2,3,4,6,5][4,5]=\{4,6\}$</span>. So this is another, valid 1-orderly permutation for the values given, and the conjecture fails.</p>
<p>Am I understanding you correctly?</p>
|
153,217 | <blockquote>
<p>Let $$f(x)=\frac{2x+1}{\sin(x)}$$ Find $f'(x).$ </p>
</blockquote>
<p>I used Quotient Rule <br>
$$\begin {align*}\frac{\sin(x)2-(2x+1)\cos(x)}{\sin^2(x)}\\
=\frac{3-2x\cos(x)}{\sin(x)} \end {align*}$$</p>
<p>Is that right? I don't know how to get the answer.
Please help me out, thanks.</p>
| Brian M. Scott | 12,042 | <p>You seem to have some serious problems with the algebra involved. Part of the problem is failure to use necessary parentheses: the result of applying the quotient rule is</p>
<p>$$\frac{2\sin x-(2x+1)\cos x}{\sin^2x}\;,$$</p>
<p>where the parentheses around $2x+1$ are absolutely necessary. If you choose to multiply out the numerator, you should get </p>
<p>$$\frac{2\sin x-2x\cos x-\cos x}{\sin^2x}\;.$$</p>
<p>Alternatively, you can split it into two fractions:</p>
<p>$$\begin{align*}
\frac{2\sin x-(2x+1)\cos x}{\sin^2x}&=\frac{2\sin x}{\sin^2x}-\frac{(2x+1)\cos x}{\sin^2x}\\\\
&=2\csc x-(2x+1)\cot x\csc x\\\\
&=\csc x\Big(2-(2x+1)\cot x\Big)\;.
\end{align*}$$</p>
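A numerical spot-check of the corrected derivative (an illustration, not part of the derivation), comparing it against a central finite difference:

```python
import math

def f(x):
    return (2 * x + 1) / math.sin(x)

def fprime(x):
    # the quotient-rule result: (2*sin x - (2x+1)*cos x) / sin^2 x
    return (2 * math.sin(x) - (2 * x + 1) * math.cos(x)) / math.sin(x) ** 2

for x in (0.5, 1.0, 2.5):
    h = 1e-6
    central = (f(x + h) - f(x - h)) / (2 * h)
    assert abs(fprime(x) - central) < 1e-5
```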
|
153,217 | <blockquote>
<p>Let $$f(x)=\frac{2x+1}{\sin(x)}$$ Find $f'(x).$ </p>
</blockquote>
<p>I used Quotient Rule <br>
$$\begin {align*}\frac{\sin(x)2-(2x+1)\cos(x)}{\sin^2(x)}\\
=\frac{3-2x\cos(x)}{\sin(x)} \end {align*}$$</p>
<p>Is that right? I don't know how to get the answer.
Please help me out, thanks.</p>
| Gigili | 181,853 | <p>$$f(x)=\frac{2x+1}{\sin(x)}$$</p>
<p>As for the derivative of such a fractional function:</p>
<p>$$f'(x)=\frac{(2x+1)'(\sin x)-(\sin x)'(2x+1)}{\sin^2 x}$$</p>
<p>Simplifying:</p>
<p>$$f'(x)=\frac{2(\sin x)-\cos x(2x+1)}{\sin^2 x}=\frac{2\sin x-2x\cos x-\cos x}{\sin^2 x}=2\csc x-(2x+1) \cot x \csc x$$</p>
|
625,821 | <p>$$\int^\infty_0\frac{1}{x^3+1}\,dx$$</p>
<p>The answer is $\frac{2\pi}{3\sqrt{3}}$.</p>
<p>How can I evaluate this integral?</p>
| GPerez | 118,574 | <p>In general, when you have $$\frac{Bx+C}{ax^2+bx+c} = \frac{Bx}{ax^2+bx+c} + \frac{C}{ax^2+bx+c}$$</p>
<p>Then the left addend can be integrated by multiplying by $2a/B$ (and dividing outside the integral sign), after adjusting the numerator by a constant so that it becomes $2ax+b$, the derivative of the denominator. For the right addend, assuming we can't factorize the denominator in $\mathbb R$, the procedure I'd use is to write the denominator $ax^2 + bx + c$ as $(x+\alpha)^2 + \beta$ (first make it monic and then complete the square), where $\alpha$ and $\beta$ are real numbers.</p>
<p>This can also be written as $\beta\left((\frac{x+\alpha}{\sqrt \beta})^2+1\right)$ so you end up with $$ \frac{\tilde C}{\left(\frac{x+\alpha}{\sqrt \beta}\right)^2+1}$$</p>
<p>($\tilde C$ is the real number resulting from the previous steps) which is the derivative of $$\arctan{\left(\frac{x+\alpha}{\sqrt\beta}\right)} + k $$</p>
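If you want to sanity-check the stated value $\frac{2\pi}{3\sqrt 3}\approx 1.2092$ numerically, here is a rough sketch (`simpson` is an ad-hoc helper; the tail beyond $T$ is estimated by $\int_T^\infty x^{-3}\,dx = 1/(2T^2)$):

```python
import math

def simpson(f, a, b, n=10_000):  # composite Simpson's rule, n even
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

f = lambda x: 1.0 / (x**3 + 1)
T = 200.0
approx = simpson(f, 0.0, T) + 1 / (2 * T**2)   # tail estimate for [T, infinity)
exact = 2 * math.pi / (3 * math.sqrt(3))
assert abs(approx - exact) < 1e-4
```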
|
1,869,564 | <p>I tried to derive the logistic population model, and need to integrate
$\int \frac{\frac{1}{k}}{1-\frac{N_t}{k}}\, dN_t$. Here is my solution:</p>
<p>$\int \frac{\frac{1}{k}}{1-\frac{N_t}{k}}\, dN_t=\int \frac{1}{k-N_t}\, dN_t=-\int \frac{1}{k-N_t}\,d{(k-N_t)}=-\ln\mid k-N_t\mid+C_1$. I think I have done something wrong here, because if I solve it this way, $\int \frac{\frac{1}{k}}{1-\frac{N_t}{k}}\, dN_t=-\int \frac{1}{1-\frac{N_t}{k}}\, d\left(1-\frac{N_t}{k}\right)=-\ln \left\mid 1-\frac{N_t}{k} \right\mid +C_2$, which is obviously different from the previous solution, so where is the mistake?</p>
| Piquito | 219,998 | <p>QUESTION.- Do you want your curves necessarily be all concave? If not, you have a nice example to add to the concave ones with the Witch of Agnesi, whose equation is $y=\frac{8a^3}{x^2+4a^2}$ where $a$ is the radius of the circle that generates the curve (so you have infinitely many examples).</p>
<p>In the figure you have (with $ a = 300$) a "witch" and a good exercise for you would be to find the right equation to figure reflected down the given (i.e. changing the given equation to the red coordinates).</p>
<p><a href="https://i.stack.imgur.com/Brq0w.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Brq0w.png" alt="enter image description here"></a></p>
|
1,079,356 | <p>My question can be summarized as:</p>
<blockquote>
<p>I want to prove that closed immersions are stable under base change.</p>
</blockquote>
<p>This is exercise II.3.11.a in Hartshorne's Algebraic Geometry. I researched this for about half a day. I consulted a number of books and online notes, but I found the proofs to be vague. Vakil's notes (9.2.1) hint that this is an immediate consequence of the canonical isomorphism $ M/IM\cong M\otimes_{A}A/I $, and so does Liu's book (1.23). The proof can't be this simple. I need to show that this can be reduced to the affine case, and to do so, I need to show that a closed immersion into an affine scheme is affine. I haven't been able to do so yet. Edit: I found a proof for this later but I still don't know how to use it to prove the question above.</p>
<p>As for the Stacks project, I had to traverse a tree of propositions until I eventually found a proof that uses concepts like quasi-coherent sheaves, which are not introduced in Hartshorne's book at this point.</p>
<p>I also consulted Gortz & Wedhorn. The book cites section 4.11 as a proof for this. However the section is a general introduction to the categorical fibred product. It's unrelated. In fact the official errata mentions this error and cites proposition 4.20 instead. It is unclear to me how the proposition shows the result. I suspect the book uses a different - but equivalent - definition.</p>
<p>At this point, I'm frustrated. I'm self-studying and don't have anybody to ask. Could somebody please be kind enough to show me a self-contained proof?</p>
| Exodd | 161,426 | <p>Consider a cartesian diagram
$$\require{AMScd}
\begin{CD}
X^\prime @>>> X \\
@VVV @VVV \\
Y^\prime @>>> Y
\end{CD}$$
where $X\to Y$ is a closed immersion. Let's call $g$ the function $X'\to Y'$. </p>
<p>We know that, taking $Y_i$ an open affine covering of $Y$, and $X_i$, $Y_i'$ open affine covering of $X$ and $Y'$ such that $Y_i'\to Y_i$ and $X_i\to Y_i$ are open embeddings, then the fibered product $X'=X\times_Y Y'$ is covered by the open affine schemes $X_i\times_{Y_i}Y_i'$.</p>
<p>In particular, given the projections $X_i\times_{Y_i}Y_i'\to Y_i'$, they glue together, obtaining the morphism $g$. </p>
<p>This means that we can test the property in the affine case.</p>
|
4,528,838 | <p>Find the general solution of the equation <span class="math-container">$$x^{(5)} + 2x^{(4)} + 2x^{(3)} + 4x'' + x' + 2x = 100e^{-2t}.$$</span></p>
<p>I don't understand how to solve such problems. I know that I should solve <span class="math-container">$x^{(5)} + 2x^{(4)} + 2x^{(3)} + 4x'' + x' + 2x =0$</span> and then use <span class="math-container">$x(t)=e^{\lambda t}$</span>, but I don't understand why, or what I can do next. Can anyone show me a solution with an explanation, so that I can solve the next problems on my own?</p>
| user577215664 | 475,762 | <p>Hint:
<span class="math-container">$$x^{(5)} + 2x^{(4)} + 2x^{(3)} + 4x'' + x' + 2x = 100e^{-2t}.$$</span>
It's easier to solve this:
<span class="math-container">$$y'''' + 2y'' + y= 100e^{-2t}$$</span>
Where <span class="math-container">$y=x' + 2x $</span>.
<span class="math-container">$$y'''' + 2y'' + y= 0$$</span>
The characteristic polynomial is:
<span class="math-container">$$r^4+2r^2+1=0$$</span>
<span class="math-container">$$(r^2+1)^2=0$$</span></p>
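The substitution works because the full characteristic polynomial factors as $(r+2)(r^2+1)^2$. A quick sketch verifying the factorization by convolving coefficient lists (`polymul` is a hypothetical helper, not part of the hint):

```python
def polymul(p, q):
    # multiply polynomials given as coefficient lists, highest degree first
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

# (r + 2) * (r^2 + 1)^2 == r^5 + 2r^4 + 2r^3 + 4r^2 + r + 2
quartic = polymul([1, 0, 1], [1, 0, 1])      # (r^2+1)^2 = r^4 + 2r^2 + 1
assert quartic == [1, 0, 2, 0, 1]
assert polymul([1, 2], quartic) == [1, 2, 2, 4, 1, 2]
```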
|
2,293,147 | <p>I was trying to solve this ODE $\frac{dy}{dx} = c_{1} + c_{2}y + \frac{c_{3}}{y} , y(0) = c , c >0$.</p>
<p>where $c_{1},c_{2},c_{3}$ are three real numbers, say $c_{1} < 0$ and $c_{2},c_{3} > 0$.</p>
<p>I thought of using separation of variables giving me $x = \int(\frac{y}{c_{1}y+c_{2}y^2+c_{3}})dy + c$.</p>
<p>Next I am trying to reduce the denominator to a perfect-square form $(a + by)^2 + c$, so matching $(a + by)^2$ with the $y$- and $y^2$-terms of $c_{1}y + c_{2}y^2 + c_{3}$,
we get,</p>
<p>$(c_{1}y + c_{2}y^2 + c_{3}) = \left(\frac{c_{1}}{2\sqrt{c_{2}}} + \sqrt{c_{2}}\,y\right)^2 + \left(c_{3} - \frac{c_{1}^2}{4c_{2}}\right)$</p>
<p>thus $x = \int\frac{y}{\left(\frac{c_{1}}{2\sqrt{c_{2}}} + \sqrt{c_{2}}\,y\right)^2 + \left(c_{3} - \frac{c_{1}^2}{4c_{2}}\right)}\, dy + c$.</p>
<p>Now I am stuck at this point.
Also it makes me think whether there exists an analytic solution to this ODE?</p>
| BAYMAX | 270,320 | <p>Yes, I agree with what Yves says, that I must get some $\log$-type term and the derivative of an $\arctan$ function, but I am curious: when I consider the above integral which I want to find, $\int\frac{y}{c_{1}y+c_{2}y^2+c_{3}}\,dy$, MATLAB returns </p>
<p>where for simplification I have taken $c_{1} = a ,c_{2} = b , c_{3} = c$</p>
<p>$\mathrm{log}\!\left(y - \left(\frac{1}{2\, b} - \frac{a\, \sqrt{a^2 - 4\, b\, c}}{2\, \left(a^2\, b - 4\, b^2\, c\right)}\right)\, \left(a + 2\, b\, y\right)\right)\, \left(\frac{1}{2\, b} - \frac{a\, \sqrt{a^2 - 4\, b\, c}}{2\, \left(a^2\, b - 4\, b^2\, c\right)}\right) + \mathrm{log}\!\left(y - \left(\frac{1}{2\, b} + \frac{a\, \sqrt{a^2 - 4\, b\, c}}{2\, \left(a^2\, b - 4\, b^2\, c\right)}\right)\, \left(a + 2\, b\, y\right)\right)\, \left(\frac{1}{2\, b} + \frac{a\, \sqrt{a^2 - 4\, b\, c}}{2\, \left(a^2\, b - 4\, b^2\, c\right)}\right)$</p>
<p>and when I do the calculations to reduce the integrand into a form suitable for solving analytically, I get a reduced form of the integrand like this:</p>
<p>$\int(\frac{y}{(\sqrt{\frac{-c_{1}^2}{4.c_{2}}} + \sqrt{c_{2}}.y)^2 + (c_{3} - \frac{c_{1}^2}{4.c_{2}})}) dy$</p>
<p>then MATLAB returns me </p>
<p>$\frac{\mathrm{log}\!\left(\frac{y}{b} - \frac{\left(2\, \sqrt{b}\, y + \sqrt{- a^2\, b}\right)\, \left(2\, b\, c - \frac{a^2\, b^2}{2} + \frac{\sqrt{b}\, \sqrt{a^2\, b^2 - 4\, b\, c}\, \sqrt{- a^2\, b}}{2}\right)}{b^{\frac{5}{2}}\, \left(4\, c - a^2\, b\right)}\right)\, \left(2\, b\, c - \frac{a^2\, b^2}{2} + \sqrt{2}\, \sqrt{b}\, \sqrt{\frac{a^2\, b^2}{2} - 2\, b\, c}\, \sqrt{-\frac{a^2\, b}{4}}\right)}{4\, b^2\, c - a^2\, b^3} - \frac{\mathrm{log}\!\left(\frac{y}{b} + \frac{\left(2\, \sqrt{b}\, y + \sqrt{- a^2\, b}\right)\, \left(\frac{a^2\, b^2}{2} - 2\, b\, c + \frac{\sqrt{b}\, \sqrt{a^2\, b^2 - 4\, b\, c}\, \sqrt{- a^2\, b}}{2}\right)}{b^{\frac{5}{2}}\, \left(4\, c - a^2\, b\right)}\right)\, \left(\frac{a^2\, b^2}{2} - 2\, b\, c + \sqrt{2}\, \sqrt{b}\, \sqrt{\frac{a^2\, b^2}{2} - 2\, b\, c}\, \sqrt{-\frac{a^2\, b}{4}}\right)}{4\, b^2\, c - a^2\, b^3}
$</p>
<p>(as before, for simplification I have taken $c_{1} = a$, $c_{2} = b$, $c_{3} = c$)</p>
<p>Even though the above two solutions are both of the form $\log() + \log()$, are the two expressions actually the same (as they must be, since they are derived from the same integral, if I have done it right)?</p>
|
3,909,972 | <p>I used Photomath and Microsoft Math to solve an equation, but they gave me two different results ($-411$ and $-411/38$). Why did that happen, and which is the correct answer?</p>
<p><a href="https://i.stack.imgur.com/egt5A.jpg" rel="nofollow noreferrer">https://i.stack.imgur.com/egt5A.jpg</a>
<a href="https://i.stack.imgur.com/RIHb0.png" rel="nofollow noreferrer">https://i.stack.imgur.com/RIHb0.png</a></p>
| Community | -1 | <p>Both calculators gave you the correct answer: <em>false</em>.</p>
|
396,440 | <p>Suppose we have the function $$f(x) = \frac{x}{p} + \frac{b}{q} - x^{\frac{1}{p}}b^{\frac{1}{q}}$$ where $x,b \geq 0 \land p,q > 1 \land \frac{1}{p}+\frac{1}{q} = 1$</p>
<p>I am trying to show that $b$ is the absolute minimum of $f$. </p>
<p>I proceeded as follows:</p>
<p>$$\frac{df(x)}{dx} = \frac{1}{p} - \frac{x^{\frac{1}{p}-1}}{p} b^{\frac{1}{q}} = \frac{x - x^{\frac{1}{p}} b^{\frac{1}{q}}}{px}$$</p>
<p>Now I will look for critical points by searching for the zeros of this function.</p>
<p>$$\frac{x - x^{\frac{1}{p}} b^{\frac{1}{q}}}{px} = 0 \iff x - x^{\frac{1}{p}} b^{\frac{1}{q}} = 0 \iff x = x^{\frac{1}{p}} b^{\frac{1}{q}}$$. </p>
<p>Now I can see that $b$ is a critical point. </p>
<p>However, when I continue my calculations to check whether there are any other critical points,
$$x = x^{\frac{1}{p}} b^{\frac{1}{q}} \implies x^p = b^{\frac{p}{q}}x \implies x^{p-1} = b^{\frac{p}{q}} \implies x = b^{\frac{p}{(p-1)q}}$$</p>
<p>But this could not be equal to $b$, where did I go wrong?</p>
| Inceptio | 63,477 | <p>Let $\dfrac{1}{p}=a$ and $\dfrac{1}{q}=k$</p>
<p>$ax+bk-x^a \cdot b^k=f(x) $</p>
<p>$\dfrac{x + \dots x_{ath}+ b+ \dots b_{kth}}{a+k} \ge (x^ab^k)^{1/(a+k)}$ (By <a href="http://en.wikipedia.org/wiki/Inequality_of_arithmetic_and_geometric_means" rel="nofollow">AM-GM inequality)</a></p>
<p>$x+ \dots x (a$ times)$=ax$ and $b+ \dots b (k$ times)$=bk$</p>
<p>$a+k=1 \implies ax+bk \ge x^a\cdot b^k \implies ax+bk-x^ab^k \ge 0$</p>
<p>Now you have $f(x) \ge 0$; by the equality case of AM-GM, the minimum $f(x)=0$ is achieved when $x=b$.</p>
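This inequality (a weighted AM-GM, i.e. Young's inequality) is easy to probe numerically; a sketch in Python, where the values of $p$, $q$, $b$ are arbitrary test choices:

```python
# Numerical check of f(x) = x/p + b/q - x^(1/p) * b^(1/q) >= 0,
# with equality at x = b (Young's inequality). p, q, b below are
# arbitrary test values satisfying 1/p + 1/q = 1.
p, q, b = 3.0, 1.5, 2.0   # 1/3 + 2/3 = 1

def f(x):
    return x / p + b / q - x ** (1 / p) * b ** (1 / q)

# f should be nonnegative on a grid, and (essentially) zero at x = b.
grid = [0.01 * k for k in range(1, 1001)]   # x in (0, 10]
assert all(f(x) >= -1e-12 for x in grid)
assert abs(f(b)) < 1e-12
```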
|
223,582 | <p>A map $g$ maps $\left\{1,2,3,4,5\right\}$ onto $\left\{11,12,13,14\right\}$ and $g(1)\neq g(2)$. How many such $g$ are there?</p>
<p><strong>My answer</strong>:
I transformed the question into an easier-to-understand form and worked out a solution.
Consider five children and four seats. Since the map is onto, exactly two children must share a seat, but two particular children (corresponding to $1$ and $2$) never sit together.</p>
<p>$$\left(\begin{pmatrix}
5 \\
2
\end{pmatrix}-1\right)*4!=456$$</p>
<p>However the answer is 216. I don't know what's wrong.</p>
<p>Could you please help me find out what's wrong or give a right way to solve the problem?</p>
<p>Thanks!</p>
| Jack D'Aurizio | 44,121 | <p>Intersect the circle having $AC$ as a diameter with the initial circle: you will find the two points $D,D'$ such that $CD$ and $CD'$ are tangent to the initial circle. This comes from the fact that the circle is the locus of points that "see" any diameter under an angle equal to $\frac{\pi}{2}$.</p>
|
1,363,144 | <p>Given a cubic polynomial $f(x) = ax^{3} + bx^{2} + cx +d$ with arbitrary real coefficients and $a\neq 0$. Is there an easy test to determine when all the real roots of $f$ are negative?</p>
<p>The Routh-Hurwitz Criterion gives a condition for roots lying in the open left half-plane for an arbitrary polynomial with complex coefficients which helps a little, but this criterion doesn't help me when the complex roots lie in the right half plane.</p>
| P Vanchinathan | 28,915 | <p>If you are interested only in the roots you can normalize and take $a=1$. Then a necessary condition is that $b,c,d>0$. As the cubic has 3 negative roots, its two turning points should be negative too. That is, $f'(x)= 3x^2+2bx+c$ should have real negative roots, which is easily translated to a condition on the discriminant: $b^2-3c\ge0$.</p>
|
4,045,755 | <blockquote>
<p>If <span class="math-container">$p$</span> is a prime then all the non trivial subgroups of <span class="math-container">$G$</span> with <span class="math-container">$\lvert G\rvert=p^2$</span> are cyclic.</p>
</blockquote>
<p>I tried looking online where does this result come from, but could not find any direct result. I found that all groups with order <span class="math-container">$p^2$</span> are isomorphic to <span class="math-container">$\mathbb{Z}_{p^2}$</span> or <span class="math-container">$\mathbb{Z_p} \times \mathbb{Z_p}$</span>; is this a consequence of this result?</p>
| GreginGre | 447,764 | <p>You do not need the classification of groups of order <span class="math-container">$p^2$</span>. Just use Lagrange's theorem to conclude that a nontrivial subgroup (that is, one different from the trivial subgroup and from <span class="math-container">$G$</span>) has order <span class="math-container">$p$</span>.</p>
<p>Now, groups of prime order <span class="math-container">$p$</span> are known to be cyclic (the order of a nontrivial element cannot be <span class="math-container">$1$</span>, so it is <span class="math-container">$p$</span>).</p>
|
33,622 | <p>I am looking for differentiable functions $f$ from the unit interval to itself that satisfy the following equation $\forall\:p \in \left( 0,1 \right)$:</p>
<p>$$1-p-f(f(p))-f(p)f'(f(p))=0$$</p>
<p>Is there a way to use <em>Mathematica</em> to solve such equations?<br>
<code>DSolve</code> is of course unable to handle this -- unless there are tricks I don't know about.</p>
| jlperla | 9,151 | <p>To answer the question of whether Mathematica has facilities for this sort of thing, I think there are two parts:</p>
<p>1) Does Mathematica have facilities for general functional equations (setting aside even the differential part)? I think the answer is no in general. There are certain (recurrence) equations it can solve with <code>RSolve</code>, but that function is mostly intended for difference equations. Regardless, it doesn't seem that <code>RSolve</code> is intended to solve coupled recurrence and differential equations (which I think are called delay-differential equations).</p>
<p>2) If I transform your ODE into something more standard, I think the best you could do is invert pieces of the function and then use the inverse function theorem to transform derivatives between/from the inverse (with a smart set of changes of variables).</p>
<p>But... based on the structure of the problem, my guess is that the best you could possibly transform it into is a delay differential equation as you would end up with an $f^{-1}(p-1)$ after the first inversion. Mathematica has facilities for delay-differential equations, but I think they are only numerical. <a href="http://www.wolfram.com/products/mathematica/newin7/content/DelayDifferentialEquations/" rel="nofollow">http://www.wolfram.com/products/mathematica/newin7/content/DelayDifferentialEquations/</a> </p>
|
1,289,868 | <p>EDIT (<em>now asking how to write $F$ as distributions, instead of writing the integral in terms of distributions</em>): </p>
<p>Let $F$ be the distribution defined by its action on a test function $\phi$ as </p>
<p>\begin{equation*}
F(\phi)=\int_{\pi}^{2\pi}x\phi(x)dx.
\end{equation*}</p>
<p>How would you write $F$ in terms of the delta distribution, heaviside distribution, and a regular distribution $R$ defined by its action on a test function $\phi$ as </p>
<p>\begin{equation*}
R(\phi)=\int_{-\infty}^{\infty}g(x)\phi(x)dx
\end{equation*}</p>
<p>for a continuous function $g$?</p>
<p>Edit: Q1)b) in this link <a href="https://www.maths.ox.ac.uk/system/files/legacy/3422/B5a_13.pdf" rel="nofollow">https://www.maths.ox.ac.uk/system/files/legacy/3422/B5a_13.pdf</a></p>
| Nikita Evseev | 23,566 | <p>I suspect one cannot find a continuous $g(x)$. I claim that $g(x) = x\cdot H(x-\pi)\cdot H(2\pi-x)$ works, where $H$ is the Heaviside function. Namely, $g(x) = x$ if $x\in[\pi, 2\pi]$ and $g(x)=0$ for $x\in \mathbb{R}\setminus [\pi, 2\pi]$. So
$$
\int_{\pi}^{2\pi}x\phi(x)dx = \int_{-\infty}^{+\infty}g(x)\phi(x)dx.
$$</p>
|
481,421 | <p>Find the limit of:
$$\lim_{x\to\infty}{\frac{\cos(\frac{1}{x})-1}{\cos(\frac{2}{x})-1}}$$</p>
| Boris Novikov | 62,565 | <p>Since $\cos t \sim 1-t^2/2$ from Taylor series, then
$$
\lim_{x\to\infty}{\frac{\cos(\frac{1}{x})-1}{\cos(\frac{2}{x})-1}}= \lim_{x\to\infty}{\frac{\frac{1}{2x^2}}{\frac{4}{2x^2}}}=1/4
$$</p>
|
481,421 | <p>Find the limit of:
$$\lim_{x\to\infty}{\frac{\cos(\frac{1}{x})-1}{\cos(\frac{2}{x})-1}}$$</p>
| obataku | 54,050 | <p>Rewriting $\cos(2/x)=2\cos^2(1/x)-1$ we have:$$\begin{align*}\lim_{x\to\infty}\frac{\cos(1/x)-1}{\cos(2/x)-1}&=\lim_{x\to\infty}\frac{\cos(1/x)-1}{2\cos^2(1/x)-2}\\&=\frac12\lim_{x\to\infty}\frac{\cos(1/x)-1}{\cos(1/x)^2-1}\\&=\frac12\lim_{x\to\infty}\frac{\cos(1/x)-1}{(\cos(1/x)+1)(\cos(1/x)-1)}\\&=\frac12\lim_{x\to\infty}\frac1{\cos(1/x)+1}\\&=\frac12\cdot\frac12\\&=\frac14\end{align*}$$</p>
|
481,421 | <p>Find the limit of:
$$\lim_{x\to\infty}{\frac{\cos(\frac{1}{x})-1}{\cos(\frac{2}{x})-1}}$$</p>
| DonAntonio | 31,254 | <p>What about a little l'Hospital?</p>
<p>$$\lim_{x\to\infty}\frac{\cos\frac1x-1}{\cos\frac2x-1}\stackrel{\text{l'H}}=\lim_{x\to\infty}\frac{\frac1{x^2}\sin\frac1x}{\frac2{x^2}\sin\frac2x}\stackrel{\text{l'H}}=\frac12\lim_{x\to\infty}\frac{-\frac1{x^2}\cos\frac1x}{-\frac2{x^2}\cos\frac2x}=\frac12\cdot\frac12\cdot\frac11=\frac14$$</p>
|
481,421 | <p>Find the limit of:
$$\lim_{x\to\infty}{\frac{\cos(\frac{1}{x})-1}{\cos(\frac{2}{x})-1}}$$</p>
| mnsh | 58,529 | <p>$$\lim_{x \to \infty} \frac{\cos \frac1x - 1}{\cos \frac2x - 1} = \lim_{x \to \infty} \frac{\sin^2 \frac{1}{2x}}{\sin^2 \frac{1}{x}} =\lim_{x \to \infty} \frac{\sin^2 \frac{1}{2x}}{(\frac{1}{2x})^2}\frac{(\frac{1}{x})^2}{(\sin^2 \frac{1}{x})}\frac{1}{4}=1*1* \frac14=\frac14.$$</p>
<p>note that $$\lim_{x \to \infty} \frac{\sin \frac1x }{\frac1x}=1$$</p>
|
1,382,087 | <p>Problem:</p>
<p>A bag contains $4$ red and $5$ white balls. Balls are drawn from the bag without replacement.</p>
<p>Let $A$ be the event that first ball drawn is white and let $B$ denote the event that the second ball drawn is red. Find </p>
<p>(i) $P(B\mid A)$</p>
<p>(ii) $P(A\mid B)$</p>
<p>My confusion is: should $P(A\mid B)=P(A)$ hold?</p>
<p>Can we say that in general if $P(A\mid B)$ exists then $P(B\mid A)$ should also exist?</p>
| zoli | 203,663 | <p>Let the event space be</p>
<p>$$\Omega=\{(r,r),(r,w),(w,r),(w,w)\}$$
the corresponding probabilities are
$$\frac{12}{72},\frac{20}{72},\frac{20}{72},\frac{20}{72}.$$</p>
<p>Then</p>
<p>$$Pr(A\cap B)=Pr((w,r))=\frac{20}{72}$$</p>
<p>and</p>
<p>$$Pr(B)=Pr(\{(r,r),(w,r)\})=\frac{12}{72}+\frac{20}{72}=\frac{32}{72}$$</p>
<p>and</p>
<p>$$Pr(A)=Pr(\{(w,r),(w,w)\})=\frac{40}{72}$$</p>
<p>so,</p>
<p>$$Pr(A\mid B)=\frac{Pr(A\cap B)}{Pr(B)}=\frac{\frac{20}{72}}{\frac{32}{72}}=\frac58.$$</p>
<p>and</p>
<p>$$Pr(B\mid A)=\frac{Pr(A\cap B)}{Pr(A)}=\frac{\frac{20}{72}}{\frac{40}{72}}=\frac12.$$</p>
<hr>
<p>In general we cannot say that if $Pr(A\mid B)$ exists then $Pr(B\mid A)$ also exists. Let simply $Pr(A)=0<Pr(B)$. Then $Pr(A\mid B)=0$ but $Pr(B\mid A)$ is not defined.</p>
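These values can be double-checked by brute-force enumeration of the ordered draws (a sketch in Python; the ball labels are my own):

```python
from fractions import Fraction
from itertools import permutations

# 4 red and 5 white balls, drawn twice in order without replacement.
balls = ['r'] * 4 + ['w'] * 5
draws = list(permutations(range(9), 2))   # all 72 ordered pairs of distinct balls

def prob(event):
    return Fraction(sum(1 for d in draws if event(d)), len(draws))

A = lambda d: balls[d[0]] == 'w'          # first ball white
B = lambda d: balls[d[1]] == 'r'          # second ball red
AB = lambda d: A(d) and B(d)

assert prob(AB) / prob(B) == Fraction(5, 8)   # P(A|B)
assert prob(AB) / prob(A) == Fraction(1, 2)   # P(B|A)
```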
|
749,473 | <p>I am trying to model the time until a malfunction appears, for example the time a light-bulb will last. I would like the probability that the light-bulb burns out at a given moment (given it hasn't burnt out yet) to increase as a function of time ($P(x \mid X \geq x)$ should be monotonically increasing). That is, an old light-bulb is more likely to burn out right now than a new one. (Obviously, I can't use a memoryless probability distribution.) Any suggestions?</p>
| Clangon | 142,414 | <p>The Weibull distribution seems to satisfy my request.</p>
|
3,634,416 | <p>First of all, English is not my native language, but Chinese is. I tried to split the integration interval into two pieces: <span class="math-container">$ [0, 1-1/n] $</span> and <span class="math-container">$ [1-1/n, 1] $</span>. On both intervals I use the mean value theorem:
<span class="math-container">$$
\int_{0}^{1-1/n}\frac{1}{1+x^{n}}\,dx=\frac{1}{1+\xi_{n}^{n}}\left( 1-\frac{1}{n} \right), \qquad \text{and} \qquad \int_{1-1/n}^{1}\frac{1}{1+x^{n}}\,dx=\frac{1}{1+\eta_{n}^{n}}\frac{1}{n},
$$</span>
where <span class="math-container">$ \xi_{n}\in(0, 1-1/n), \eta_{n}\in(1-1/n, 1) $</span>. I found that the latter expression has limit <span class="math-container">$ 0 $</span> as <span class="math-container">$ n\to\infty $</span>. However, I can't handle the former expression. Does anyone have some thoughts?</p>
| Riemann | 27,899 | <p>Consider the limit
<span class="math-container">$$\lim_{n\to\infty}\int_{0}^{1}\frac{x^n}{1+x^{n}}\,dx=0.$$</span>
Then your limit is
<span class="math-container">$$\lim_{n\to\infty}\int_{0}^{1}\frac{1}{1+x^{n}}\,dx=1.$$</span>
Hint:
<span class="math-container">$$0\leq\frac{x^n}{1+x^{n}}\leq x^n\implies
\lim_{n\to\infty}\int_{0}^{1}\frac{x^n}{1+x^{n}}\,dx=0.$$</span></p>
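This limit can be checked numerically (a sketch using a simple midpoint rule; the grid size and the sample values of $n$ are arbitrary choices):

```python
# Midpoint-rule approximation of the integral of 1/(1 + x^n) over [0, 1];
# the value should increase toward 1 as n grows, since the integrand
# tends pointwise to 1 on (0, 1).
def integral(n, m=100_000):
    h = 1.0 / m
    return h * sum(1.0 / (1.0 + ((i + 0.5) * h) ** n) for i in range(m))

vals = [integral(n) for n in (5, 50, 500)]
assert vals[0] < vals[1] < vals[2] < 1.0
assert 1.0 - vals[2] < 0.01   # already within 1% of the limit at n = 500
```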
|
2,629,133 | <p>In keno, the casino picks 20 balls from a set of 80 numbered 1 to 80. Before the draw is over, you are allowed to choose 10 balls. What is the probability that 5 of the balls you choose will be in the 20 balls selected by the casino?</p>
<p>My attempt: The total number of combinations for the 20 balls is $\binom{80}{20}$. However, I get stuck at the numerator. I thought it would be $\binom{80}{10}\binom{10}5$, but that's wrong.</p>
<p>Thanks.</p>
| Parcly Taxel | 357,390 | <p>Without loss of generality, assume the casino picks balls 1 to 20. Then for the stated scenario to happen:</p>
<ul>
<li>Five of your picks are within $[1,20]$: $\binom{20}5$ ways</li>
<li>The other five are within $[21,80]$: $\binom{60}5$ ways</li>
</ul>
<p>There are $\binom{80}{10}$ picks altogether, so the probability that five balls match is
$$\frac{\binom{20}5\binom{60}5}{\binom{80}{10}}=0.0514\dots$$</p>
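This hypergeometric probability can be evaluated exactly (a sketch using Python's `math.comb`):

```python
from math import comb

# P(exactly 5 of your 10 picks fall among the casino's 20 drawn balls):
# choose 5 matching picks from the 20 drawn and 5 from the 60 not drawn.
p = comb(20, 5) * comb(60, 5) / comb(80, 10)
assert abs(p - 0.0514) < 5e-4
```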
|
2,573,458 | <p>Given $n$ prime numbers $p_1, p_2, p_3,\ldots,p_n$, the number $p_1p_2p_3\cdots p_n+1$ is not divisible by any of the primes $p_i, i=1,2,3,\ldots,n.$ I don't understand why. Can somebody give me a hint or an explanation? Thanks.</p>
| drhab | 75,923 | <p>If some integer $m$ is divisible by e.g. $13$ then $m+1$ is not.</p>
<p>Now note that $m=p_1p_2\cdots p_n$ is divisible by every $p_i$ with $i\in\{1,\dots,n\}$ and draw conclusions.</p>
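The hint can be illustrated concretely (a sketch in Python; note that $p_1\cdots p_n+1$ need not itself be prime, it is merely not divisible by any $p_i$):

```python
from math import prod

# If m is divisible by a prime p, then m + 1 leaves remainder 1 when
# divided by p, so p does not divide m + 1. Check on a concrete list.
primes = [2, 3, 5, 7, 11, 13]
m = prod(primes)                                # m = 30030
assert all(m % p == 0 for p in primes)          # every p_i divides m
assert all((m + 1) % p == 1 for p in primes)    # no p_i divides m + 1
assert m + 1 == 30031 == 59 * 509               # m + 1 need not be prime
```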
|
3,143,084 | <p>If <span class="math-container">$f : \mathbb{R} \to \mathbb{R}$</span>, we can think of the derivative of <span class="math-container">$f$</span> at a point <span class="math-container">$x$</span>, denoted <span class="math-container">$f'(x)$</span>, as giving the slope of a line tangent to the graph of <span class="math-container">$f$</span> at the point <span class="math-container">$(x, f(x))$</span>. One way to obtain the derivative is to consider a secant line through a second point <span class="math-container">$(x+h, f(x+h))$</span> on the graph of <span class="math-container">$f$</span>. The slope of the secant line is given by
<span class="math-container">$$ \frac{f(x+h) - f(x)}{(x+h)-x} = \frac{f(x+h) - f(x)}{h}. $$</span>
The tangent line results by taking <span class="math-container">$h$</span> to be arbitrarily small, so the derivative is given by
<span class="math-container">$$ \lim_{h\to 0} \frac{f(x+h) - f(x)}{h}, $$</span>
presuming that this limit exists.</p>
<blockquote>
<p><strong>Question:</strong> Suppose that <span class="math-container">$f$</span> is given by
<span class="math-container">$$ f(x) = x^n. $$</span>
What is <span class="math-container">$f'(x)$</span>?</p>
</blockquote>
<p>For small values of <span class="math-container">$n$</span>, this can be computed by hand fairly easily. For example, if <span class="math-container">$n=3$</span>, then
<span class="math-container">$$ f'(x)
= \lim_{h\to 0} \frac{(x+h)^3 - x^3}{h}
= \lim_{h\to 0} \frac{x^3 + 3hx^2 + 3h^2x + h^3}{h}
= \lim_{h\to 0} 3x^2 + 3hx + h^2
= 3x^2. $$</span>
On the other hand, if <span class="math-container">$n$</span> is very large, then this becomes impractical. For example, if <span class="math-container">$n = 123$</span>, then how do we determine
<span class="math-container">$$ f'(x) = \lim_{h\to 0} \frac{(x+h)^{123} - x^{123}}{h}? $$</span></p>
| egreg | 62,967 | <p>Assuming you want to know
<span class="math-container">$$
\lim_{h\to0}\frac{(x+h)^{123}-x^{123}}{h}
$$</span>
set <span class="math-container">$n=123$</span>. In other words, compute
<span class="math-container">$$
\lim_{h\to0}\frac{(x+h)^{n}-x^{n}}{h}
$$</span>
for <em>any</em> integer value of <span class="math-container">$n$</span>.</p>
<p>Let's show that <span class="math-container">$(x+h)^n=x^n+nhx^{n-1}+h^2P_n(x,h)$</span>, where <span class="math-container">$P_n$</span> is a suitable polynomial in <span class="math-container">$x$</span> and <span class="math-container">$h$</span>.</p>
<p>This is clearly true for <span class="math-container">$n=1$</span>; so, assume it holds for <span class="math-container">$n$</span>. Then
<span class="math-container">\begin{align}
(x+h)^{n+1}
&=(x+h)^n(x+h) \\[6px]
&=\bigl(x^n+nhx^{n-1}+h^2P_n(x,h))(x+h) \\[6px]
&=x^{n+1}+hx^n+nhx^n+nh^2x^{n-1}+h^2(x+h)P_n(x,h) \\[6px]
&=x^{n+1}+(n+1)hx^n+h^2P_{n+1}(x,h)
\end{align}</span>
where <span class="math-container">$P_{n+1}=nx^{n-1}+(x+h)P_n(x,h)$</span> is a polynomial.</p>
<p>The proof by induction is complete.</p>
<p>Then
<span class="math-container">$$
\lim_{h\to0}\frac{(x+h)^n-x^n}{h}=
\lim_{h\to0}\bigl(nx^{n-1}+hP_n(x,h)\bigr)=nx^{n-1}
$$</span>
For <span class="math-container">$n=123$</span>, your limit is <span class="math-container">$123x^{122}$</span>.</p>
|
1,396,322 | <p>For example I have eight kids,</p>
<pre><code>A,B,C,D,E,F,G,H
</code></pre>
<p>If I ask them to go into groups of two, their choices are</p>
<pre><code>A->B
B->C
C->B
D->B
E->A
F->A
G->H
H->C
</code></pre>
<p>How can I make sure they get their choices as much as possible?</p>
<p>Or similarly, to get into groups of four:</p>
<pre><code>A->B,C,D
B->A,C,G
C->E,A,D
D->B,E,G
E->F,G,H
F->A,B,C
G->E,F,B
H->F,E,C
</code></pre>
<p>I am sure there are many ways to do this. But I just don't know where to start looking for algorithms. What is the mathematical term for such problems?</p>
| tommy | 261,593 | <p>Just expand $\sin(x)$ in a power series and you are done:</p>
<p>$$
\lim_{x\rightarrow 0} \frac{x-\sin(x)}{x^3} =
\lim_{x\rightarrow 0} \frac{x-(x-x^3/3!+\mathcal{O}(x^5))}{x^3}
=\lim_{x\rightarrow 0} (1/3!+\mathcal{O}(x^2))
=1/6
$$</p>
|
240,741 | <p>I'm trying to include the legends inside the frame of the plot like this</p>
<p><a href="https://i.stack.imgur.com/7K5aa.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/7K5aa.jpg" alt="hehe" /></a></p>
<p>Here is my Attempt:</p>
<pre><code>ListPlot[{{2, 5, 2, 8, 6, 8, 3}, {1, 2, 5, 2, 3, 4, 3}},
PlotMarkers -> {"\[SixPointedStar]", 15}, Joined -> True,
PlotStyle -> {Orange, Green},
PlotLegends ->
Placed["line1", "line2",
LegendFunction -> (Framed[#, FrameMargins -> 0] &)], Frame -> True]
</code></pre>
<p>My references:</p>
<ol>
<li><a href="https://mathematica.stackexchange.com/questions/141737/specify-legend-position-in-a-plot">specify-legend-position-in-a-plot</a></li>
<li><a href="https://mathematica.stackexchange.com/questions/173911/plotting-legends-matching-with-plots-inside-the-show-graph">plotting-legends-matching-with-plots-inside-the-show-graph</a></li>
<li><a href="https://mathematica.stackexchange.com/questions/212046/placing-plot-legends-inside-a-plot">placing-plot-legends-inside-a-plot</a></li>
<li><a href="https://www.wolfram.com/mathematica/new-in-9/legends/place-a-legend-inside-a-plot.html" rel="noreferrer">place-a-legend-inside-a-plot.</a></li>
</ol>
| Daniel Huber | 46,318 | <p>Often these graphics commands are a bit obscure and one has to experiment. Is the following approximately what you are looking for?</p>
<pre><code>ListPlot[{{2, 5, 2, 8, 6, 8, 3}, {1, 2, 5, 2, 3, 4, 3}},
PlotMarkers -> {"\[SixPointedStar]", 15}, Joined -> True,
PlotStyle -> {Orange, Green},
PlotLegends ->
Placed[LineLegend[{"line1", "line2"},
LegendFunction -> Framed], {0.85, 0.8}], Frame -> True,
PlotRange -> {{0, 10}, All}]
</code></pre>
<p><a href="https://i.stack.imgur.com/pG1JB.png" rel="noreferrer"><img src="https://i.stack.imgur.com/pG1JB.png" alt="" /></a></p>
|
264,770 | <p>If we have a vector in $\mathbb{R}^3$ (or any Euclidean space, I suppose), say $v = (-3,-6,-9)$, then:</p>
<ol>
<li>May I always "factor" out a constant from a vector, as in this example like $(-3,-6,-9) = -3(1,2,3) \implies (1,2,3)$ or does the constant always go along with the vector?</li>
<li>If yes on question 1, then if I want to compute the norm, is the correct computation the following: $||v|| = |-3|\sqrt{14} = 3\sqrt{14}$ ? If so, is the only reason that we take the absolute value of -3 because we don't want a negative length?</li>
</ol>
<p>I'm sorry if things are obvious but I just want to make sure I actually get this correctly.</p>
<p>Best regards</p>
| Thomas Andrews | 7,933 | <p>As a rule, this all falls from the distributive rule. If $v=(ax,ay,az)$, then $$\begin{align}||v|| &= \sqrt{(ax)^2 + (ay)^2+(az)^2} = \sqrt{a^2(x^2+y^2+z^2)}\\
&=\sqrt{a^2}\sqrt{x^2+y^2+z^2}\end{align}$$</p>
<p>And $\sqrt{a^2}=|a|$</p>
<p>And yes, ultimately, this is all because we want distances to be positive, but we also want the norm to be well-defined. If $w=(1,-2,3)$, is $|w|=\sqrt{14}$ or $-\sqrt{14}$? There is no guidance here to choose one or the other.</p>
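For the concrete vector $v=(-3,-6,-9)$ from the question, a quick numerical check (a sketch in Python):

```python
import math

v = (-3, -6, -9)                      # v = -3 * (1, 2, 3)
norm_v = math.sqrt(sum(c * c for c in v))
# |a| * ||w|| with a = -3 and w = (1, 2, 3):
factored = abs(-3) * math.sqrt(1 + 4 + 9)
assert math.isclose(norm_v, factored)
assert math.isclose(norm_v, 3 * math.sqrt(14))
```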
|
2,332,277 | <p>First of all, note that $\frac{n^{n+1}}{(n+1)^n} \sim \frac{n}{e}$. </p>
<p><em>Question</em>: Is there $n>1$ such that $n^{n+1} \equiv 1 \mod (n+1)^n$?</p>
<p>There is an OEIS sequence for $n^{n+1}\mod (n+1)^n$: <a href="https://oeis.org/A176823" rel="nofollow noreferrer">https://oeis.org/A176823</a>. </p>
<blockquote>
<p>$0, 1, 8, 17, 399, 73, 44638, 1570497, 5077565, 486784401, 22187726197,
166394893969, 13800864889148, 762517292682713, 9603465430859099,
803800832678655745, 3180753925351614970, 947615093635545799201$</p>
</blockquote>
| Mastrem | 253,433 | <p>We first prove that it is impossible when $n\not\equiv 1\pmod 4$.</p>
<p>Notice how:
$$n^{n+1}=1+(n-1)\sum_{k=0}^{n}n^k$$
So, we would have $n^{n+1}\equiv 1\pmod{(n+1)^n}$ if and only if:
$$(n+1)^n\mid (n-1)\sum_{k=0}^{n}n^k$$
And since the RHS won't be equal to $0$, we'd have:
$$(n-1)\sum_{k=0}^{n}n^k\ge(n+1)^n$$
but, since $4\nmid n-1$, we have $\gcd(n-1,(n+1)^n)\le 2$, so this becomes:
$$2\sum_{k=0}^{n}n^k\ge(n+1)^n$$
Multiplying both sides with $(n-1)$ yields:
$$2n^{n+1}>2(n^{n+1}-1)\ge (n-1)(n+1)^n=\frac{n-1}{n+1}\cdot(n+1)^{n+1}$$
or, assuming $n>1$:
$$\frac{2n+2}{n-1}>\left(\frac{n+1}{n}\right)^{n+1}=\left(1+\frac1n\right)^{n+1}>\left(1+\frac1n\right)^n$$
However, as $n$ tends to infinity, the LHS tends to $2$, while the RHS tends to $e$. Using induction, we first prove that for $n\ge 11$, we have $2.4\ge\frac{2n+2}{n-1}$. This is quite easy and I'll leave it out.</p>
<p>For $n=11$, we also have that the RHS is greater than $2.6$ and since $(1+\frac1n)^n$ keeps increasing as $n$ keeps increasing, this shows that there are no $n\not\equiv 1\pmod 4$ with $n\ge 11$ with $n^{n+1}\equiv1\pmod{(n+1)^n}$. Some quick testing reveals that there are no solutions at all for $n\not\equiv 1\pmod 4$.</p>
<hr>
<p>As per @san's request, I'll also provide his solution for the case $n\equiv 1\pmod 4$, so that there is one complete answer. </p>
<p>Assume by contradiction that $n=4j+1$ for some positive integer $j$, and $n^{n+1} \equiv 1 \mod (n+1)^n$. Then
$$
n+1=2(2j+1)\quad\text{and}\quad n-1=2^ra
$$
for some $r\ge 2$ and some odd $a$.</p>
<p>There exists some $k$ such that $n^{n+1} - 1 =k\cdot (n+1)^n=k\cdot 2^n(2j+1)^n$.</p>
<p>But
\begin{eqnarray*}
n^{n+1} - 1&=&\sum_{s=0}^{n+1}\binom{n+1}{s}(n-1)^s-1\\
&=& \sum_{s=1}^{n+1}\binom{n+1}{s}(2^r a)^s\\
&=& (n+1)2^r a+2^{2r}a^2 \sum_{s=2}^{n+1}\binom{n+1}{s}(2^r a)^{s-2}\\
&=& (2j+1)2^{r+1}a+2^{2r}a^2 \sum_{s=2}^{n+1}\binom{n+1}{s}(2^r a)^{s-2}\\
&=& 2^{r+1}\left((2j+1)a+2^{r-1}a^2 \sum_{s=2}^{n+1}\binom{n+1}{s}(2^r a)^{s-2}\right)\\
\end{eqnarray*}
and $2^n$ divides $n^{n+1}-1$, hence $2^n$ divides $2^{r+1}$, and so $r\ge n-1$, which contradicts the fact that $n-1=2^ra$, since in general $n-1<2^{n-1}$.</p>
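The "quick testing" mentioned earlier can be reproduced directly (a sketch in Python, using the three-argument `pow` for modular exponentiation):

```python
# Brute-force check of n^(n+1) mod (n+1)^n for small n > 1; none of the
# residues is 1, consistent with the case analysis above.
residues = {n: pow(n, n + 1, (n + 1) ** n) for n in range(2, 40)}
assert all(r != 1 for r in residues.values())
# The first few values match the quoted OEIS data (A176823).
assert [residues[n] for n in (2, 3, 4, 5)] == [8, 17, 399, 73]
```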
|
2,293,746 | <p>A function $f$ is differentiable for all $x\in \mathbb R$, and the limits of $f$ at $+\infty$ and $-\infty$ are both equal to $+\infty$. Is it true that $\lim_{x\to a} \frac {1}{f'(x)} = + \infty $ or $-\infty$ for some $a\in\mathbb R$?</p>
<p>Of course the function $f'$ has roots, by Fermat's theorem ($f$ attains a global infimum), but how could I find an example to prove that the statement is wrong, if it really is wrong?</p>
<p>Thank you in advance!</p>
<p>Babis</p>
| Babis Stergiou | 449,018 | <p>Excuse me (for my poor English and) for coming again; I'm new here and probably doing something the wrong way. There was a typo in my first message, and the problem still stands.</p>
<p>My original question is to find if the function $\frac {1}{f'(x)}$ has a vertical asymptote.</p>
<p>The function $f' $ has the Darboux property and I'm trying to prove that the statement is false.But is it really fasle?</p>
<p>So , I'm trying to prove that there is a function(or to construct a function) , such that for all points $a$ which are roots of $f'$ , both limits</p>
<p>$\lim_{x\to a^{+}}\frac {1}{f'(x)}$ ,$\lim_{x\to a^{-}}\frac {1}{f'(x)}$ </p>
<p>are not equal to $+\infty $ or $ -\infty$ or these limits are not defined.</p>
|
2,304,318 | <p>Let $\sum_{n=1}^\infty a_n$ and $\sum_{n=1}^\infty b_n$ be two real series, where we have $\lim_{n\rightarrow\infty} \frac{a_n}{b_n} = M >0$. Show that either one of these options happen:</p>
<ol>
<li>Both series converge</li>
<li>Both series diverge</li>
</ol>
<p>I have no clue how to solve this. Can someone help?</p>
<p><strong>EDIT</strong>: After the comments, I think this can only be shown true if we assume $a_n >0$ and $b_n>0$ for all $n\in\mathbb{N}$</p>
| zhw. | 228,045 | <p>This is false: Define</p>
<p>$$a_n= \frac{(-1)^n}{n^{1/2}},\,\, b_n = \frac{(-1)^n}{n^{1/2}}+\frac{1}{2n^{3/4}}.$$</p>
<p>Then</p>
<p>$$\frac{a_n}{b_n}= \frac{1}{1+(-1)^n/(2n^{1/4})} \to 1.$$</p>
<p>However $\sum a_n$ converges and $\sum b_n$ diverges.</p>
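The behaviour of this counterexample can be checked numerically (a sketch; the sample indices are arbitrary):

```python
# a_n = (-1)^n / n^(1/2),  b_n = a_n + 1/(2 n^(3/4)).
# The ratio a_n / b_n tends to 1, even though sum(a_n) converges
# (alternating series test) while sum(b_n) diverges, since it adds
# the divergent positive series 1/(2 n^(3/4)).
def ratio(n):
    a = (-1) ** n / n ** 0.5
    b = a + 1 / (2 * n ** 0.75)
    return a / b

for n in (10**6, 10**6 + 1):          # one even, one odd index
    assert abs(ratio(n) - 1) < 0.02
```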
|
2,134,928 | <p>Let <span class="math-container">$ \ C[0,1] \ $</span> stands for the real vector space of continuous functions <span class="math-container">$ \ [0,1] \to [0,1] \ $</span> on the unit interval with the usual subspace topology from <span class="math-container">$\mathbb{R}$</span>. Let <span class="math-container">$$\lVert f \rVert_1 = \int_0^1 |f(x)| \ dx \qquad \text{ and } \qquad \lVert f \rVert_{\infty} = \max_{x \in [0,1]} |f(x)|$$</span> be the usual norms defined on that space. Let <span class="math-container">$ \ \Delta : C[0,1] \to C[0,1] \ $</span> be the diagonal function, ie, <span class="math-container">$ \ \Delta f=f \ $</span>, <span class="math-container">$\forall f \in C[0,1]$</span>. Then <span class="math-container">$$ \Delta = \big\{ (f,g) \in C[0,1] \times C[0,1] \ : \ g=f \ \big\} \ . $$</span> My questions are</p>
<blockquote>
<p><strong>(1)</strong> <span class="math-container">$ \ \ $</span> Is <span class="math-container">$ \ \Delta \ $</span> a closed set of <span class="math-container">$ \ C[0,1] \times C[0,1] \ $</span>, with respect to the product topology induced by these norms?</p>
<p><strong>(2)</strong> <span class="math-container">$ \ \ $</span> Is <span class="math-container">$ \ \Delta : (C[0,1], \lVert \cdot \rVert_1) \to (C[0,1], \lVert \cdot \rVert_{\infty}) \ $</span> continuous?</p>
<p><strong>(3)</strong> <span class="math-container">$ \ \ $</span> Does <span class="math-container">$ \ \Delta : (C[0,1], \lVert \cdot \rVert_1) \to (C[0,1], \lVert \cdot \rVert_{\infty}) \ $</span> maps closed sets of <span class="math-container">$ \ (C[0,1], \lVert \cdot \rVert_1) \ $</span> onto closed sets of <span class="math-container">$ \ (C[0,1], \lVert \cdot \rVert_{\infty}) \ $</span>?</p>
<p><strong>(4)</strong> <span class="math-container">$ \ \ $</span> Is <span class="math-container">$ \ \Delta : (C[0,1], \lVert \cdot \rVert_{\infty}) \to (C[0,1], \lVert \cdot \rVert_1) \ $</span> continuous?</p>
<p><strong>(5)</strong> <span class="math-container">$ \ \ $</span> Does <span class="math-container">$ \ \Delta : (C[0,1], \lVert \cdot \rVert_{\infty}) \to (C[0,1], \lVert \cdot \rVert_1) \ $</span> maps closed sets of <span class="math-container">$ \ (C[0,1], \lVert \cdot \rVert_{\infty}) \ $</span> onto closed sets of <span class="math-container">$ \ (C[0,1], \lVert \cdot \rVert_1) \ $</span>?</p>
</blockquote>
<p>Now a question about terminology: when should I say that "<span class="math-container">$\Delta \ $</span> is closed", that "<span class="math-container">$\Delta \ $</span> is a closed map", or that "<span class="math-container">$\Delta \ $</span> is a closed operator"?</p>
<p>Thanks in advance.</p>
| Behemoth | 414,337 | <p>She answered $10$ questions, so she was expecting $30$ points. Instead, she got only $18$ points. That means that she lost a total of $12$ points. If you take into consideration that an incorrect answer takes $4$ points from your expected total ($3$ for annulment, $1$ for penalty), the amount of incorrect answers is $12/4=3$. That means that the number of correct answers is $10-3=7$.</p>
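The scoring arithmetic can be checked directly (a sketch; the rule of $+3$ per correct answer and $-1$ per incorrect one is as the answer describes):

```python
total_questions = 10
expected = 3 * total_questions     # 30 points if every answer were correct
actual = 18
lost = expected - actual           # 12 points lost
# Each wrong answer costs 4 points: the 3 it fails to earn plus 1 penalty.
wrong = lost // 4
correct = total_questions - wrong
assert (wrong, correct) == (3, 7)
assert 3 * correct - wrong == actual   # consistency: 21 - 3 = 18
```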
|
211,803 | <p>I ended up with a differential equation that looks like this:
$$\frac{d^2y}{dx^2} + \frac 1 x \frac{dy}{dx} - \frac{ay}{x^2} + \left(b -\frac c x - e x \right )y = 0.$$
I tried Mathematica but could not get a sensible answer. Could you help me solve it, or point me to some references I can go over? Thanks.</p>
| Robert Israel | 8,508 | <p>I don't know if there are closed form solutions in general. In the case $e=0$, Maple finds a solution using Whittaker M and W functions:
$$y \left( x \right) =c_{{1}}
{{\rm \bf M}\left({\frac {ic}{2\sqrt {b}}},\,\sqrt {a},\,2\,i\sqrt {b}x\right)}
{\frac {1}{\sqrt {x}}}+c_{{2}}
{{\rm \bf W}\left({\frac {ic}{2\sqrt {b}}},\,\sqrt {a},\,2\,i\sqrt {b}x\right)}
{\frac {1}{\sqrt {x}}}
$$
Another interesting special case is $a=1/4$, $c=0$, where Maple's solution involves Airy functions:
$$
y \left( x \right) =c_{{1}}
{\text{Ai}\left(-{\frac {b-ex}{ \left( -e \right) ^{2/3}}}\right)}{\frac {1}{\sqrt {x}}}
+c_{{2}}{\text{Bi}\left(-{\frac {b-ex}{ \left( -e \right) ^{2/3}}}\right)}{\frac {1}{
\sqrt {x}}}
$$</p>
<p>EDIT: Note that the scaling $x \to k x$ preserves the form of the differential equation with $(a,b,c,e) \to (a,k^2b, kc,k^3e)$. So if $e \ne 0$ we can assume WLOG that, say, $e=1$. </p>
<p>As @Pragabhava noted, the indicial roots are $\pm \sqrt{a}$, so unless $\sqrt{a}$ is an integer there will be two fundamental solutions of the form
$$\eqalign{y_1(x) &= x^{\sqrt{a}} \left(1 + \sum_{j=1}^\infty u_j x^j\right)\cr y_2(x) &= x^{-\sqrt{a}} \left(1 + \sum_{j=1}^\infty v_j x^j\right)}$$ with coefficients satisfying the recurrences $(2 \sqrt{a} j+j^2) u_j - c u_{j-1} + b u_{j-2} - u_{j-3} = 0$ (with $u_0 = 1$, $u_j = 0$ for $j < 0$) and
$(-2 \sqrt{a} j+j^2) v_j - c v_{j-1} + b v_{j-2} - v_{j-3} = 0$ (with $v_0 = 1$, $v_j = 0$ for $j < 0$). If $\sqrt{a}$ is an integer the second recurrence becomes singular at $j=2\sqrt{a}$, generally resulting in logarithmic terms. I don't think there are closed-form solutions for the recurrences.</p>
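The Frobenius coefficients described above are easy to generate; the sketch below (my own code, with arbitrary test values of $a$, $b$, $c$ and the $e=1$ normalization) builds the $u_j$ from the recurrence and checks that the truncated series satisfies the ODE to high accuracy:

```python
import math

def frobenius_coeffs(a, b, c, N=30):
    """Coefficients u_j of y = x^s * (1 + sum_{j>=1} u_j x^j), s = sqrt(a),
    from the recurrence (2*s*j + j^2) u_j - c u_{j-1} + b u_{j-2} - u_{j-3} = 0
    (the e = 1 normalization)."""
    s = math.sqrt(a)
    u = [1.0] + [0.0] * N
    for j in range(1, N + 1):
        u1 = u[j - 1]
        u2 = u[j - 2] if j >= 2 else 0.0
        u3 = u[j - 3] if j >= 3 else 0.0
        u[j] = (c * u1 - b * u2 + u3) / (2 * s * j + j * j)
    return s, u

def ode_residual(a, b, c, x, N=30):
    """Residual of x^2 y'' + x y' - a y + (b x^2 - c x - x^3) y,
    i.e. the ODE multiplied by x^2 with e = 1, for the truncated series."""
    s, u = frobenius_coeffs(a, b, c, N)
    y = sum(u[j] * x ** (j + s) for j in range(N + 1))
    yp = sum((j + s) * u[j] * x ** (j + s - 1) for j in range(N + 1))
    ypp = sum((j + s) * (j + s - 1) * u[j] * x ** (j + s - 2) for j in range(N + 1))
    return x * x * ypp + x * yp - a * y + (b * x * x - c * x - x ** 3) * y

# Arbitrary test values: the residual of the truncated series is tiny.
assert abs(ode_residual(2.0, 1.0, 0.5, 0.1)) < 1e-10
```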
|
944,840 | <p>For vectors u, w, and v in a vector space V, I am trying to prove:</p>
<p>If $u + w = v + w$ then $u = v$</p>
<p><strong>without</strong> using the additive inverse and only using the 8 axioms which define a vector space. I am coming up short. I don't see how to do this without assuming that if $u + w = v + w$ then I can just add something to both sides as in $(u+w) + w' = (v+w) + w'$.</p>
<p>Thank you.</p>
| Barry Cipra | 86,747 | <p>The additive cancellation law you're trying to prove is <em>equivalent</em> to the additive inverse axiom, which strongly suggests you <em>can't</em> prove it without assuming that axiom.</p>
<p>That is, let's take the standard eight axioms from the Wikipedia page for vector spaces, slightly modified by writing $\Omega$ for the "zero" vector and $\overline v$ for the additive inverse:</p>
<blockquote>
<p>Associativity of addition: $u + (v + w) = (u + v) + w$</p>
<p>Commutativity of addition: $u + v = v + u$</p>
<p>Identity element of addition: There exists an element $\Omega \in V$
such that $v + \Omega = v$ for all $v \in V$.</p>
<p>Inverse elements of addition: For every $v \in V$, there exists an
element $\overline v \in V$ such that $v + \overline v = \Omega$.</p>
<p>Compatibility of scalar multiplication with field
multiplication: $a(bv) = (ab)v$</p>
<p>Identity element of scalar multiplication: $1v = v$, where $1$ denotes
the multiplicative identity in $F$.</p>
<p>Distributivity of scalar multiplication with respect to vector
addition: $a(u + v) = au + av$</p>
<p>Distributivity of scalar multiplication with respect to field
addition: $(a + b)v = av + bv$</p>
</blockquote>
<p>Note that</p>
<p>$$0\cdot v=(1-1)\cdot v=1\cdot v+(-1)\cdot v=v+(-1)\cdot v$$</p>
<p>just from the axioms for scalar multiplication. Now <em>if</em> we knew (or had as an axiom) that $0\cdot v=\Omega$ for all $v\in V$, then we wouldn't need the additive inverse axiom at all: we'd be able to conclude that $(-1)\cdot v$ is the $\overline v$ whose existence that axiom asserts. But we <em>don't</em> know that $0\cdot v=\Omega$.</p>
<p>However, we do know that</p>
<p>$$\Omega+w = w = 1\cdot w=(0+1)\cdot w=0\cdot w+1\cdot w=0\cdot w+ w$$</p>
<p>So suppose we substitute the OP's cancellation law for the additive inverse axiom. Then we can cancel the $w$ from the two ends of the above, concluding</p>
<p>$$\Omega=0\cdot w$$</p>
<p>for any $w\in V$.</p>
<p>In summary, if you fix the other seven axioms, then the additive inverse axiom, the "multiplication by $0$" law, and the OP's cancellation law are all equivalent. So if you believe (or can prove) that the standard eight axioms are independent, then there's no way to prove the cancellation law from the other seven (standard) axioms.</p>
|
3,733,757 | <p>I'm proving that given a nonempty set <span class="math-container">$I$</span>, and given a filter <span class="math-container">$F$</span>, there exists an ultrafilter <span class="math-container">$D$</span> on <span class="math-container">$I$</span> such that <span class="math-container">$F \subseteq D$</span>. I used Zorn's lemma to prove that for a given filter <span class="math-container">$F$</span>, there exists a maximal filter <span class="math-container">$D'$</span>, where <span class="math-container">$F \subseteq D'$</span>. I need to prove that this maximal filter <span class="math-container">$D'$</span> is a ultrafilter, defined as a filter <span class="math-container">$B$</span> that satisfies the following condition: <span class="math-container">$\forall A \subseteq I , A \in B \lor (I -A) \in B$</span>. I tried to use the proof by contradiction, but failed. How do I prove it?</p>
| egreg | 62,967 | <p>Let <span class="math-container">$\mathcal{F}$</span> be a filter on <span class="math-container">$I$</span> and take <span class="math-container">$A\subseteq I$</span> such that <span class="math-container">$A\notin\mathcal{F}$</span> and <span class="math-container">$B=I\setminus A\notin\mathcal{F}$</span>.</p>
<p>Choose <span class="math-container">$C\in\mathcal{F}$</span>. Without loss of generality, we can assume <span class="math-container">$C\cap A\ne\emptyset$</span> (otherwise, exchange <span class="math-container">$A$</span> and <span class="math-container">$B$</span>).</p>
<p>We want to prove that <span class="math-container">$X\cap A\ne\emptyset$</span>, for every <span class="math-container">$X\in\mathcal{F}$</span>. We have
<span class="math-container">$$
X\cap C=(X\cap A\cap C)\cup(X\cap B\cap C)
$$</span>
If <span class="math-container">$X\cap A=\emptyset$</span>, then <span class="math-container">$X\cap C=X\cap B\cap C\in\mathcal{F}$</span>, so <span class="math-container">$B\in\mathcal{F}$</span> (being a superset of a member), contrary to the assumption.</p>
<p>Then <span class="math-container">$\mathcal{F}\cup\{A\}$</span> is a filter base, so <span class="math-container">$\mathcal{F}$</span> is not a maximal filter.</p>
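<p>The equivalence between maximality and the ultrafilter condition can be checked exhaustively on a finite set; a brute-force Python sketch on a three-element set (the set and the enumeration are purely illustrative and do not replace the general argument):</p>

```python
from itertools import chain, combinations

I = frozenset({0, 1, 2})
subsets = [frozenset(s) for s in chain.from_iterable(
    combinations(sorted(I), r) for r in range(len(I) + 1))]

def is_filter(F):
    # proper filter: contains I, excludes the empty set,
    # closed under intersection and under supersets
    if I not in F or frozenset() in F:
        return False
    return (all(A & B in F for A in F for B in F) and
            all(B in F for A in F for B in subsets if A <= B))

families = [frozenset(f) for r in range(len(subsets) + 1)
            for f in combinations(subsets, r)]
filters = [F for F in families if is_filter(F)]
maximal = [F for F in filters if not any(F < G for G in filters)]

# every maximal filter decides each subset: A in F or I \ A in F
decides_all = all(A in F or (I - A) in F for F in maximal for A in subsets)
print(len(maximal), decides_all)
```

On a three-element set the maximal filters are exactly the three principal ultrafilters generated by singletons.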
|
665,759 | <p>Let $G$ be an open subset of $R$. </p>
<p>If $0\notin G$, then show that $H=\{xy:x,y\in G\}$ is an open subset of $R$.</p>
<p>Now since $G$ is open, given $x,y\in G$, $\exists\, r_x,r_y>0$ such that $B(x,r_x)\subset G$
and $B(y,r_y)\subset G$. Now all we need to do is find a radius $r$ given a point $xy$ in $H$.</p>
<p>Not sure how that would work out. Maybe $\min(r_x,r_y)$?? </p>
| scineram | 7,598 | <p>Try $r=\max(|x|\cdot r_y,|y|\cdot r_x)$, it's relatively easy.</p>
<p>The optimal radius is $r=|x|\cdot r_y+|y|\cdot r_x-r_x\cdot r_y$.</p>
<p>I want to expand on optimality. For any open set $G\subset\mathbb{R}\setminus\{0\}$ and $x\in G$ denote
$$r_{x,G}:=dist(x,\mathbb{R}\setminus G)$$
and for $x,y\in G$
$$r_{x,y,G}:=dist(x\cdot y,\mathbb{R}\setminus(G\cdot G)).$$</p>
<ol>
<li>For any $G$ and $x,y\in G$ we have $r_{x,y,G}\ge|x|\cdot r_{y,G}+|y|\cdot r_{x,G}-r_{x,G}\cdot r_{y,G}$.</li>
<li>When $G$ is an interval the above becomes an equality if $x$ and $y$ are closest to the same endpoint, but for most sets and pairs the inequality is strict.</li>
<li>I don't know if there is a set such that the inequality is strict for all pairs. I'm leaning no.</li>
<li>Whether or not the conjecture in 3. holds, because of (2.)
$$(x,y,a,b)\mapsto|x|\cdot b+|y|\cdot a-a\cdot b$$
is the biggest function on $\mathbb{R}^2\times(\mathbb{R}^+)^2$such that (1.) holds when composed with
$$(x,y,G)\mapsto(x,y,r_{x,G},r_{y,G}).$$</li>
</ol>
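<p>Both radii can be sanity-checked numerically on an interval, say $G=(1,2)$ with $G\cdot G=(1,4)$ (a Python sketch; the grid is an arbitrary illustrative choice):</p>

```python
def dist_to_complement(t, lo=1.0, hi=2.0):
    # r_t for G = (lo, hi): distance from t to the complement of G
    return min(t - lo, hi - t)

N = 200
ok = True
for i in range(1, N):
    for j in range(1, N):
        x, y = 1.0 + i / N, 1.0 + j / N
        rx, ry = dist_to_complement(x), dist_to_complement(y)
        r_easy = max(abs(x) * ry, abs(y) * rx)
        r_opt = abs(x) * ry + abs(y) * rx - rx * ry
        for r in (r_easy, r_opt):
            # the ball B(xy, r) must stay inside G*G = (1, 4)
            ok = ok and (x * y - r >= 1.0 - 1e-12) and (x * y + r <= 4.0 + 1e-12)
print(ok)
```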
|
3,192,795 | <p>Apparently either I've forgotten some basic rule about integrals (it has been a while since I've taken a basic calc class) or something is wrong with this problem in pearson mylab. </p>
<p>This was the problem:</p>
<p>Evaluate <span class="math-container">$\int_C\frac{x^2}{y^{4/3}}ds$</span> where C is the curve <span class="math-container">$x=t^2,y=t^3$</span> for <span class="math-container">$-3\le{t}\le-1$</span></p>
<p>The answer that it finally accepted was <span class="math-container">$\frac1{27}(85^{3/2}-13^{3/2})$</span></p>
<p>I've been wracking my brain trying to figure out why it's not <span class="math-container">$\frac1{27}(13^{3/2}-85^{3/2})$</span></p>
<p>After all, (unless I suddenly can't work out integrals, which seems unlikely given that I did a bunch of very similar problems before and after this with no problem) the antiderivative should work out to <span class="math-container">$\frac1{27}(4+9t^2)^{3/2}$</span> and then you just work out the values at <span class="math-container">$t=-1$</span> and <span class="math-container">$t=-3$</span> and subtract the first from the second.</p>
<p>Am I missing something obvious or is this problem actually just borked?</p>
| kccu | 255,727 | <p>Your antiderivative is incorrect because you are missing a minus sign. Once you substitute the parametrization you should have
<span class="math-container">$$\int_{-3}^{-1} \sqrt{4t^2+9t^4} \ dt.$$</span>
In order to pull the <span class="math-container">$t^2$</span> out of the square root, we need to make it <span class="math-container">$|t|$</span> (recall that the square root function by definition returns a nonnegative number, so <span class="math-container">$\sqrt{a^2}=|a|$</span>, not just <span class="math-container">$a$</span>). Since <span class="math-container">$t$</span> is negative on the interval <span class="math-container">$[-3,-1]$</span>, <span class="math-container">$|t|=-t$</span> on this interval. Therefore the integral becomes:
<span class="math-container">$$\int_{-3}^{-1}-t\sqrt{4+9t^2} \ dt.$$</span></p>
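<p>The value can be double-checked with a quick numerical quadrature (a midpoint-rule sketch in Python):</p>

```python
from math import sqrt

def integrand(t):
    # -t * sqrt(4 + 9 t^2), the integrand after the parametrization
    return -t * sqrt(4 + 9 * t * t)

a, b, n = -3.0, -1.0, 100_000
h = (b - a) / n
numeric = h * sum(integrand(a + (k + 0.5) * h) for k in range(n))
closed_form = (85 ** 1.5 - 13 ** 1.5) / 27
print(numeric, closed_form)
```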
|
3,192,795 | <p>Apparently either I've forgotten some basic rule about integrals (it has been a while since I've taken a basic calc class) or something is wrong with this problem in pearson mylab. </p>
<p>This was the problem:</p>
<p>Evaluate <span class="math-container">$\int_C\frac{x^2}{y^{4/3}}ds$</span> where C is the curve <span class="math-container">$x=t^2,y=t^3$</span> for <span class="math-container">$-3\le{t}\le-1$</span></p>
<p>The answer that it finally accepted was <span class="math-container">$\frac1{27}(85^{3/2}-13^{3/2})$</span></p>
<p>I've been wracking my brain trying to figure out why it's not <span class="math-container">$\frac1{27}(13^{3/2}-85^{3/2})$</span></p>
<p>After all, (unless I suddenly can't work out integrals, which seems unlikely given that I did a bunch of very similar problems before and after this with no problem) the antiderivative should work out to <span class="math-container">$\frac1{27}(4+9t^2)^{3/2}$</span> and then you just work out the values at <span class="math-container">$t=-1$</span> and <span class="math-container">$t=-3$</span> and subtract the first from the second.</p>
<p>Am I missing something obvious or is this problem actually just borked?</p>
| hamam_Abdallah | 369,188 | <p><span class="math-container">$$\frac{x^2}{y^{\frac 43}}=\frac{t^4}{t^4}=1$$</span></p>
<p>The integral is
<span class="math-container">$$L=\int_C ds$$</span>
It is also the length of the curve between the left endpoint <span class="math-container">$A=(1,-1)$</span> for <span class="math-container">$t=-1$</span> and the right endpoint <span class="math-container">$B=(9,-27)$</span> for <span class="math-container">$t=-3$</span>.</p>
<p>so</p>
<p><span class="math-container">$$L=\int_{-1}^{-3}t\sqrt{4+9t^2}dt$$</span></p>
|
370,212 | <p>Let <span class="math-container">$\mathbb{N}$</span> denote the set of positive integers. For <span class="math-container">$\alpha\in \; ]0,1[\;$</span>, let <span class="math-container">$$\mu(n,\alpha) = \min\big\{|\alpha-\frac{b}{n}|: b\in\mathbb{N}\cup\{0\}\big\}.$$</span> (Note that we could have written <span class="math-container">$\inf\{\ldots\}$</span> instead of <span class="math-container">$\min\{\ldots\}$</span>, but it is easy to see that the infimum is always a minimum.)</p>
<p>Is there an <span class="math-container">$\alpha\in \; ]0,1[$</span> such that for all <span class="math-container">$n\in\mathbb{N}$</span> we have <span class="math-container">$\mu(n+1,\alpha)<\mu(n,\alpha)$</span>?</p>
| Alapan Das | 156,029 | <p>It is easy to prove that <span class="math-container">$\alpha$</span> cannot be a rational number.</p>
<p>Now, let <span class="math-container">$\frac{1}{n-1}>\alpha>\frac{1}{n}, n>1$</span> and <span class="math-container">$\alpha-\frac{1}{n} < \frac{1}{n-1}-\alpha$</span>.</p>
<p>Then, <span class="math-container">$\mu(\alpha, k+1)< \mu(\alpha ,k)$</span> for all <span class="math-container">$k=1,2..., n-1$</span>.</p>
<p>If <span class="math-container">$\mu(\alpha, n+1)<\mu(\alpha, n)$</span>,</p>
<p>then either,</p>
<ol>
<li><span class="math-container">$\frac{1}{n-1}>\alpha >\frac{b}{n+1} >\frac{1}{n}$</span> for some <span class="math-container">$b \in \mathbb N, b>1$</span>; or</li>
<li><span class="math-container">$\frac{1}{n-1}>\frac{b}{n+1} >\alpha >\frac{1}{n}
$</span> for some <span class="math-container">$b \in \mathbb N, b>1$</span> (With <span class="math-container">$\frac{b}{n+1}+\frac{1}{n}>2\alpha$</span>).</li>
</ol>
<p>Both of these imply</p>
<p><span class="math-container">$1+\frac{2}{n-1}>b>1+\frac{1}{n} \Rightarrow n=2$</span></p>
<p>To satisfy <span class="math-container">$\mu(\alpha, 4)<\mu(\alpha, 3)<\mu(\alpha, 2)<\mu(\alpha, 1)$</span> we need <span class="math-container">$\frac{3}{4}>\alpha>\frac{17}{24}$</span>.</p>
<p>But, <span class="math-container">$\frac{4}{5}>\frac{3}{4}$</span> and <span class="math-container">$\frac{3}{4}-\frac{17}{24}<\frac{17}{24}-\frac{3}{5}$</span>, hence, <span class="math-container">$\mu(\alpha, 5)>\mu(\alpha, 4)$</span>.</p>
<p>So, there can't be any such <span class="math-container">$\alpha \in (0,1)$</span>.</p>
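<p>The conclusion can be probed numerically: for any sampled <span class="math-container">$\alpha$</span>, a failure of strict decrease shows up at a small <span class="math-container">$n$</span> (Python sketch; the sample values are arbitrary):</p>

```python
def mu(n, alpha):
    # min over integers b >= 0 of |alpha - b/n|; for 0 < alpha < 1
    # the minimizing b is the nearest integer to alpha * n
    return abs(alpha - round(alpha * n) / n)

def first_failure(alpha, n_max=1000):
    """Smallest n with mu(n+1, alpha) >= mu(n, alpha), or None."""
    for n in range(1, n_max):
        if mu(n + 1, alpha) >= mu(n, alpha):
            return n
    return None

samples = (0.5 ** 0.5, 0.72, 0.14159265358979, 0.7303)
failures = [first_failure(alpha) for alpha in samples]
print(failures)
```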
|
863,860 | <p>I am not particularly well-versed in topology, so I wanted to check with you whether there exists a much simpler argument to prove the following statement or whether there are problems with my proof. The statement also seems to be a very standard result but I could not find a reference in e.g. a book on basic topology (references would also be appreciated). The statement is as follows:</p>
<p>Consider $\mathbb{R}^d$ with its usual topology where $d \geq 1$. Let $A \subset \mathbb{R}^d$ be bounded. Then, for any $x\in A$ and $y\in A^c$, there exists a point in the line segment joining $x$ and $y$ ($x$ and $y$ included) that also belongs to the boundary $\partial A$ of $A$. </p>
<p>My argument goes like this: Consider a bijection $T$ from $[0,1]$ to such a line segment so that $T(0) = x$ and $T(1) = y$ (Actually this step is not very necessary but makes the argument a little more visual). For any $a\in[0,1]$, let $f(a) = 0$ if $T(a) \in A$ and otherwise let $f(a) = 1$ if $T(a) \notin A$ so that $f(0) = 0$ (because $x$ is a member of $A$ and $T(0) = x$) and $f(1) = 1$. It is now sufficient to find some $b\in[0,1]$ such that for every $\epsilon > 0$, $f((b-\epsilon,b+\epsilon)\cap[0,1]) = \{0,1\}$. "Topologically," this would mean that every open neighborhood of $b$ contains points from both $A$ and $A^c$, which would mean $b\in\partial A$.</p>
<p>We can find such a $b$ constructively as follows: Let $I_0 = [0,1]$ (We will have a recursion $I_1,I_2,\ldots,$ which will all be intervals). Recall $f(0) = 0$ and $f(1) = 1$. Consider $f(\frac{1}{2})$. If $f(\frac{1}{2}) = 0$, we set $I_1 = [\frac{1}{2},1]$, otherwise if $f(\frac{1}{2}) = 1$ we set $I_1 = [0,\frac{1}{2}]$. In either case, $f$ takes the values $0$ and $1$, respectively at the lower and upper end points of $I_1$. We continue this process by dividing $I_1$ on its middle, and so on, while at each iteration we make sure that $f(\min I_n) = 0$ and $f(\max I_n) = 1$. Let $b = \lim \min I_n =\lim \max I_n$ (It is not difficult to see the limits exist) and we are done.</p>
| Graham Kemp | 135,106 | <p>Tip: convert the square roots to exponent form, and combine exponents before taking the derivative.</p>
<p>$$f(x)=\frac{x^2+4x+3}{\sqrt{x}}$$</p>
<p>$$f(x)=(x^2+4x+3)(x^{-1/2})$$</p>
<p>$$f(x)= x^{3/2}+4x^{1/2}+3x^{-1/2}$$</p>
<p>$$f'(x)=\frac 3 2 x^{1/2}+2x^{-1/2}-\frac 3 2 x^{-3/2}$$</p>
<p>$$f'(x)=\frac {3\sqrt{x}} 2 + \frac 2{\sqrt{x}}-\frac 3 {2 x\sqrt{x}}$$</p>
<p>$$f'(x)=\frac {6x^2+ 4x-3 }{2 x\sqrt{x}}$$</p>
|
3,542,573 | <blockquote>
<p>Solve the differential equation <span class="math-container">$$y''-6y'+25y=50t^3-36t^2 -63t +18$$</span></p>
</blockquote>
<p>I tried solving the homogeneous equation using <span class="math-container">$y = vt$</span>, but I didn't go anywhere. </p>
| Fred | 380,717 | <p>You are not correct. I am missing several <span class="math-container">$T's$</span>.</p>
<p>Correct is</p>
<p><span class="math-container">$$T(2,2,2)=T(2(1,0,0)+2(0,1,1))=2(1,2,3)+2(2,2,2)=(6,8,10).$$</span></p>
|
1,504,483 | <p>Where did the angle convention (in mathematics) come from?</p>
<p>One would imagine that a clockwise direction would be more 'natural' (given
sundials & the like, also a magnetic compass dial).</p>
<p>Also, given time and direction conventions, one would imagine that the
zero degree line would be vertical.</p>
<p>There are two parts to this
question: (1) Why do we measure angles anticlockwise?
(2) Why do we take the zero degree line to be along the $x$-axis.</p>
<p>(This was inspired by <a href="https://matheducators.stackexchange.com/questions/9874/why-do-we-conventionally-treat-trig-functions-as-going-anti-clockwise-from-the-r">https://matheducators.stackexchange.com/questions/9874/why-do-we-conventionally-treat-trig-functions-as-going-anti-clockwise-from-the-r</a>.)</p>
| Community | -1 | <p>It is perhaps "natural" to adopt these two conventions:</p>
<ol>
<li>The zero angle "should" correspond to the positive <span class="math-container">$x$</span>-axis.</li>
<li>Small but positive angles "should" be in the quadrant where <span class="math-container">$x$</span> and <span class="math-container">$y$</span> are positive.</li>
</ol>
<p>Given that we also adopt the convention that the <span class="math-container">$x$</span>-axis points rightwards and the <span class="math-container">$y$</span>-axis points upwards, the anticlockwise convention then also necessarily follows.</p>
|
3,074,900 | <h2>Problem</h2>
<p>When proving one result in the statistical learning theory course, the instructor uses
<span class="math-container">$$
\mathbb{E}[\mathbb{E}[X\vert Y,Z]\vert Z]=\mathbb{E}[X\vert Z]
$$</span>
but I am not sure why this is true.</p>
<h2>What I Have Done</h2>
<p>I know I could do the following
<span class="math-container">$$
\mathbb{E}[X\vert Y]=\int xf_{X\vert Y}(x\vert y)dx
$$</span>
But when <span class="math-container">$X$</span> becomes complicated like <span class="math-container">$\mathbb{E}[X\vert Y,Z]$</span> (sorry for the abuse of variable name), I do not know how to proceed.</p>
<p>Could someone help me, thank you in advance.</p>
| angryavian | 43,949 | <p>This is just a special case of the usual
<span class="math-container">$$\mathbb{E}[X] = \mathbb{E}[\mathbb{E}[X \mid Y]]$$</span>
except all expectations are taken under the conditional distribution given the event <span class="math-container">$Z=z$</span>. If you are still unsure, take your favorite proof of the above equality and replace all PDFs/PMFs with the conditional distribution given <span class="math-container">$Z=z$</span>.</p>
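<p>For a finite joint distribution the identity can be verified exactly by enumeration; a Python sketch with an arbitrary made-up pmf on <span class="math-container">$\{0,1\}^3$</span>:</p>

```python
from fractions import Fraction
from itertools import product

# arbitrary positive weights p(x, y, z), normalized to a joint pmf
weights = {(x, y, z): Fraction(1 + 2 * x + 3 * y + 5 * z + x * y * z)
           for x, y, z in product((0, 1), repeat=3)}
total = sum(weights.values())
p = {k: w / total for k, w in weights.items()}

def cond_exp_X(fixed):
    """E[X | the coordinates listed in `fixed` take the given values]."""
    sel = [k for k in p if all(k[i] == v for i, v in fixed.items())]
    return sum(k[0] * p[k] for k in sel) / sum(p[k] for k in sel)

checks = []
for z in (0, 1):
    pz = sum(p[k] for k in p if k[2] == z)
    # E[ E[X | Y, Z] | Z = z ] = sum_y P(Y=y | Z=z) * E[X | Y=y, Z=z]
    lhs = sum(sum(p[k] for k in p if k[1:] == (y, z)) / pz
              * cond_exp_X({1: y, 2: z}) for y in (0, 1))
    checks.append(lhs == cond_exp_X({2: z}))  # exact equality with Fractions
print(checks)
```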
|
3,296,596 | <p>I've been asked the following question and I'm not sure how to approach it.</p>
<p>Solve the system</p>
<p><span class="math-container">\begin{cases}
x_1+x_2-5x_3=2 \\
6x_1+7x_2+4x_3=7
\end{cases}</span></p>
<p>The answer is required to be in the form of</p>
<p><span class="math-container">$\begin{bmatrix}x_1\\ x_2\\x_3\end{bmatrix}$</span>=<span class="math-container">$\begin{bmatrix}...\\ ...\\...\end{bmatrix}$</span>+s<span class="math-container">$\begin{bmatrix}...\\ ...\\...\end{bmatrix}$</span></p>
<p>I know how to solve systems using REF and RREF, or by converting linear equations to matrix equations and solving with inverses, but I'm not sure how to express the solution in the answer format above. Any tips? Don't give the answer outright if at all possible, but some hints would be nice. Thanks.</p>
| Salim | 449,176 | <p>Firstly, we get an augmented matrix that represents the linear system of equations.
<span class="math-container">$$
A = \left[\begin{array}{rrr|r}
1 & 1 & -5 & 2 \\
6 & 7 & 4 & 7 \\
0 & 0 & 0 & 0
\end{array}\right]
$$</span></p>
<p>Then we perform the matrix operations to get <span class="math-container">$A$</span> into Reduced row echelon form giving us:
<span class="math-container">$$
\left[\begin{array}{rrr|r}
1 & 0 & -39 & 7 \\
0 & 1 & 34 & -5 \\
0 & 0 & 0 & 0
\end{array}\right]
$$</span></p>
<p>This means that <span class="math-container">$x_3$</span> is a free variable and we have infinitely many solutions to this system.</p>
<p>We can now represent the solution as:
<span class="math-container">$$
x_3 = s \in \Bbb{R} \\
x_2 + 34x_3 = -5 \iff x_2 = -5 - 34s \\
x_1 - 39x_3 = 7 \iff x_1 = 7 + 39s \\
$$</span></p>
<p>All that is left is to represent this in a Matrix form which can be done as so:
<span class="math-container">$$
\begin{bmatrix}
x_1 \\
x_2 \\
x_3 \\
\end{bmatrix}
=
\begin{bmatrix}
7 + 39s \\
-5 - 34s \\
s \\
\end{bmatrix}
$$</span></p>
<p>I believe at this point it is fair to leave the rest of the conversion as an exercise to the reader :)</p>
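<p>A quick Python sanity check, substituting the parametric solution back into the original equations for a few arbitrary values of <span class="math-container">$s$</span>:</p>

```python
for s in (-2, 0, 1, 3.5, 100):
    x1, x2, x3 = 7 + 39 * s, -5 - 34 * s, s
    assert x1 + x2 - 5 * x3 == 2          # first equation
    assert 6 * x1 + 7 * x2 + 4 * x3 == 7  # second equation
print("solution verified")
```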
|
229,558 | <p>When we say that a set $S$ is denumerable, that is, there is a bijection $S \to \omega$, do we mean that there <em>exists</em> such a bijection or do we mean that we have one and are talking about a pair $(S,f)$?</p>
<p>I'm asking because it makes a difference to whether I need choice in some proofs or whether I don't. For example, if we prove that a denumerable union of denumerable sets is denumerable we need countable choice to prove it if we assume the former definition and we do not need choice at all if we assume the latter. </p>
| Asaf Karagila | 622 | <p>A set is countable if there exists a bijection with $\omega$. Much like a set is finite if and only if it has a bijection with a finite ordinal. </p>
<p>In the proof that a countable union of countable sets is countable we indeed <em>choose</em> a bijection with $\omega$, and if we were given such bijection to begin with then the axiom of choice is indeed redundant.</p>
<p>However the definitions of countability should never require an explicit bijection to be present. It would contradict a very natural and basic understanding of what a finite set is:</p>
<p>Consider a model in which there exists a countable set of <em>pairs</em> without a choice function. The union of these pairs is uncountable, and it is uncountable only because we cannot choose bijections of the pairs with the set $\{0,1\}$. It is unreasonable that a set with two elements is not finite just because it does not have a coupled bijection, right? This intuition is carried over to the general case. Cardinality is about the existence of a bijection, not about the explicitness of this function.</p>
|
<p>I'm trying to study for myself a little of Convex Geometry and I have some doubts with respect to the proof of Theorem 1.8.5 of the book Convex Bodies: The Brunn-Minkowski Theory. Before I present the proof and my doubts, I will put the definitions used in the theorem below.</p>
<p><span class="math-container">$\textbf{Definitions used in the theorem:}$</span></p>
<p>(i) <span class="math-container">$\mathcal{C}^n := \{ A \subset \mathbb{R}^n \ ; \ A \neq \emptyset \ \text{and} \ A \ \text{is compact} \}$</span>.</p>
<p>(ii) <span class="math-container">$B^n = \overline{B(0,1)} := \{ x \in \mathbb{R}^n \ ; \ d(x,0) \leq 1 \}$</span>.</p>
<p>(iii) The Hausdorff distance of the sets <span class="math-container">$K, L \in \mathcal{C}^n$</span> is defined by</p>
<p><span class="math-container">$$\delta (K,L) := \max \left \{ \sup_{x \in K} \inf_{y \in L} |x - y|, \sup_{x \in L} \inf_{y \in K} |x - y| \right \}$$</span></p>
<p>or, equivalently, by</p>
<p><span class="math-container">$$\delta (K,L) := \min \{ \lambda \geq 0 \ ; \ K \subset L + \lambda B^n, L \subset K + \lambda B^n \} $$</span></p>
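<p>For finite point sets the first formula can be computed directly; a one-dimensional Python illustration (the sample sets are arbitrary):</p>

```python
def hausdorff(K, L):
    # delta(K, L) = max( sup_{x in K} inf_{y in L} |x - y|,
    #                    sup_{y in L} inf_{x in K} |x - y| )
    directed = lambda A, B: max(min(abs(a - b) for b in B) for a in A)
    return max(directed(K, L), directed(L, K))

K, L = {0.0}, {3.0, 4.0}
print(hausdorff(K, L))  # the point 4 is at distance 4 from K
```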
<blockquote>
<p><span class="math-container">$\textbf{Theorem 1.8.5.}$</span> From each bounded sequence in <span class="math-container">$\mathcal{C}^n$</span> one can select a convergent subsequence.</p>
<p><span class="math-container">$\textbf{Proof:}$</span></p>
<p>Let <span class="math-container">$(K^0_i)_{i \in \mathbb{N}}$</span> be a sequence in <span class="math-container">$\mathcal{C}^n$</span> whose elements are contained in some cube <span class="math-container">$C$</span> of edge length <span class="math-container">$\gamma$</span>. For each <span class="math-container">$m \in \mathbb{N}$</span>, the cube <span class="math-container">$C$</span> can be written as a union of <span class="math-container">$2^{mn}$</span> cubes of length <span class="math-container">$2^{-m}\gamma$</span>. For <span class="math-container">$K \in \mathcal{C}^n$</span>, let <span class="math-container">$A_m(K)$</span> denote the union of all such cubes that meet <span class="math-container">$K$</span>. Since (for each <span class="math-container">$m$</span>) the number of subcubes is finite, the sequence <span class="math-container">$(K^0_i)_{i \in \mathbb{N}}$</span> has a subsequence <span class="math-container">$(K^1_i)_{i \in \mathbb{N}}$</span> such that <span class="math-container">$A_1(K^1_i) =: T_1$</span> is independent of <span class="math-container">$i$</span>. Similarly, there is a union <span class="math-container">$T_2$</span> of subcubes of length <span class="math-container">$2^{-2} \gamma$</span> and a subsequence <span class="math-container">$(K^2_i)_{i \in \mathbb{N}}$</span> of <span class="math-container">$(K^1_i)_{i \in \mathbb{N}}$</span> such that <span class="math-container">$A_2(K^2_i) = T_2$</span>. Continuing in this way, we obtain a sequence <span class="math-container">$(T_m)_{m \in \mathbb{N}}$</span> of unions of subcubes (of edge length <span class="math-container">$2^{-m} \gamma$</span> for given <span class="math-container">$m$</span>) and to each <span class="math-container">$m$</span> a sequence <span class="math-container">$(K^m_i)_{i \in \mathbb{N}}$</span> such that</p>
<p><span class="math-container">$$A_m(K^m_i) = T_m \ (1.61)$$</span></p>
<p>and</p>
<p><span class="math-container">$$(K^m_i)_{i \in \mathbb{N}} \ \text{is a subsequence of} \ (K^k_i)_{i \in \mathbb{N}} \ \text{for} \ k < m. (1.62)$$</span></p>
<p>By <span class="math-container">$(1.61)$</span> we have <span class="math-container">$K^m_i \subset K^m_j + \lambda B^n$</span> with <span class="math-container">$\lambda = 2^{-m} \sqrt{n}\,\gamma$</span>, hence <span class="math-container">$\delta(K^m_i, K^m_j) \leq 2^{-m} \sqrt{n}\,\gamma$</span> (<span class="math-container">$i,j,m \in \mathbb{N}$</span>) and thus, by <span class="math-container">$(1.62)$</span>,</p>
<p><span class="math-container">$$\delta(K^m_i,K^k_j) \leq 2^{-m} \sqrt{n}\,\gamma \hspace{1cm} \text{for} \ i,j \in \mathbb{N} \ \text{and} \ k \geq m.$$</span></p>
<p>For <span class="math-container">$K_m := K^m_m$</span>, it follows that</p>
<p><span class="math-container">$$\delta(K_m, K_k) \leq 2^{-m} \sqrt{n}\,\gamma \hspace{1cm} \text{for} \ m \in \mathbb{N} \ \text{and} \ k \geq m.$$</span></p>
<p>Thus, <span class="math-container">$(K_m)_{m \in \mathbb{N}}$</span> is a Cauchy sequence and hence convergent because <span class="math-container">$(\mathcal{C}^n, \delta)$</span> is complete. This is the subsequence that proves the assertation. <span class="math-container">$\square$</span></p>
</blockquote>
<p>My doubts are</p>
<ol>
<li><p>When the author states "Since (for each <span class="math-container">$m$</span>) the number of subcubes is finite the sequence <span class="math-container">$(K^0_i)_{i \in \mathbb{N}}$</span> has a subsequence <span class="math-container">$(K^1_i)_{i \in \mathbb{N}}$</span> such that <span class="math-container">$A_1(K^1_i) =: T_1$</span> is independent of <span class="math-container">$i$</span>", I didn't understand why the subsequence <span class="math-container">$(K^1_i)_{i \in \mathbb{N}}$</span> exists and why <span class="math-container">$T_1$</span> is independent of <span class="math-container">$i$</span>.</p>
</li>
<li><p>Why does <span class="math-container">$(1.62)$</span> imply that <span class="math-container">$\delta(K^m_i,K^k_j) \leq 2^{-m} \sqrt{n}\,\gamma \ \text{for} \ i,j \in \mathbb{N} \ \text{and} \ k \geq m$</span>?</p>
</li>
</ol>
<p>Thanks in advance!</p>
| daulomb | 98,075 | <p>Using Green's theorem: $$A=(P, Q)= (y, -x)\Longrightarrow\int_L \mathbf{A} \cdot d\mathbf{r}=\displaystyle\int\int_{S}(Q_x-P_y)dA=-2\displaystyle\int\int_{S}dA,$$
where $ S:\, \frac{x^2}{4} + \frac{y^2}{9} \leq 1$. You can change the variables as $x=2u$ and $y=3v$ in which the Jacobian becomes $6$. Then it reduces to
$$-2\displaystyle\int\int_{S}dA=-12\displaystyle\int\int_{D}dA,$$
where $D:\,u^2+v^2\leq 1$ is the upper semi unit circle in the $uv$-plane whose area is $\frac{\pi}{2}$. Thus the result is $-6\pi$.</p>
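<p>A numerical cross-check (Python midpoint-rule sketch): the upper half of $\frac{x^2}{4}+\frac{y^2}{9}\le 1$ has area $\frac{\pi ab}{2}=3\pi$, so $-2$ times it should give $-6\pi$:</p>

```python
from math import sqrt, pi

# area of the upper half ellipse: integral of 3*sqrt(1 - x^2/4) over [-2, 2]
n = 200_000
h = 4.0 / n
area = h * sum(3 * sqrt(max(0.0, 1 - (-2 + (k + 0.5) * h) ** 2 / 4))
               for k in range(n))
print(-2 * area, -6 * pi)
```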
|
4,180,869 | <p>The ReLU activation function in deep learning is given by <span class="math-container">$\text{ReLU}: \mathbb R\rightarrow \mathbb R, x \mapsto \max\left\{0, x\right\}$</span>. I was asking myself whether this function, which is convex, is also closed. This is the general definition of <strong>closed</strong>:</p>
<p><strong>Definition.</strong> A function <span class="math-container">$J: X\rightarrow \mathbb R_{\infty} := \mathbb R \ \cup \left\{ \pm \infty \right\}$</span> is <strong>closed</strong> if its epigraph is closed.</p>
<p><strong>Definition</strong>. The epigraph of <span class="math-container">$J$</span> is given by <span class="math-container">$\text{epi}(J) := \left\{ (x, \alpha)\in X\times \mathbb R \ \vert \ J(x) \leq \alpha \right\}$</span>.</p>
<p>I sketched the <span class="math-container">$\text{ReLU}$</span> function and its epigraph, and using that a set is closed if and only if it contains all its limit points, it <em>looks</em> like <span class="math-container">$\text{epi}(\text{ReLU})$</span> is closed. But I am not sure what a mathematically rigorous proof would look like.</p>
| Community | -1 | <p>I don't know if I can give you a rigorous proof, but I don't think you need one. I think you got to the same conclusion that the epigraph of the ReLU function is <span class="math-container">$\{ (x, y) \colon y \geq 0,\ y \geq x\}$</span>, which is closed.</p>
|
4,180,869 | <p>The ReLU activation function in deep learning is given by <span class="math-container">$\text{ReLU}: \mathbb R\rightarrow \mathbb R, x \mapsto \max\left\{0, x\right\}$</span>. I was asking myself whether this function, which is convex, is also closed. This is the general definition of <strong>closed</strong>:</p>
<p><strong>Definition.</strong> A function <span class="math-container">$J: X\rightarrow \mathbb R_{\infty} := \mathbb R \ \cup \left\{ \pm \infty \right\}$</span> is <strong>closed</strong> if its epigraph is closed.</p>
<p><strong>Definition</strong>. The epigraph of <span class="math-container">$J$</span> is given by <span class="math-container">$\text{epi}(J) := \left\{ (x, \alpha)\in X\times \mathbb R \ \vert \ J(x) \leq \alpha \right\}$</span>.</p>
<p>I sketched the <span class="math-container">$\text{ReLU}$</span> function and its epigraph, and using that a set is closed if and only if it contains all its limit points, it <em>looks</em> like <span class="math-container">$\text{epi}(\text{ReLU})$</span> is closed. But I am not sure what a mathematically rigorous proof would look like.</p>
| daw | 136,544 | <p>The function is closed if and only if the epigraph is closed. But the ReLU function is continuous, which implies that the epigraph is closed.</p>
<p>The epigraph is the set:
<span class="math-container">$$
\{ (x,\alpha) : \max(0,x) \le \alpha\},
$$</span>
which is the preimage of the closed set <span class="math-container">$(-\infty,0]$</span> under the continuous map <span class="math-container">$(x,\alpha)\mapsto \max(x,0)-\alpha$</span>.</p>
|
2,406,587 | <p>Isn't the concept of homomorphisms and isomorphisms in abstract algebra analogous to functions and invertible functions in set theory, respectively? Is that a good way to quickly grasp the concepts?</p>
| Kajelad | 354,840 | <p>Suppose we have a linear transform from $\mathbb R^n\to\mathbb R^n$ defined by a matrix $A$. This transform maps each vector $\vec v\in\mathbb R^n$ to a new vector $A\vec v\in\mathbb R^n$.</p>
<p>An <em>eigenvector</em> of $A$ is simply a nonzero vector $\vec v$ such that $\vec v$ and $A\vec v$ are parallel. Since two parallel vectors are scalar multiples of each other, we can write this statement as an equation, called the <em>characteristic equation</em> of $A$.</p>
<p>$$A\vec v=\lambda\vec v$$</p>
<p>we can rearrange this equation using the fact that $\vec v=I\vec v$, where $I$ is the Identity matrix.</p>
<p>$$(A-\lambda I)\vec v=0$$</p>
<p>Where $\vec v$ is a nonzero vector and $\lambda$ is a constant. This equation will typically have several solutions for $\vec v$ and $\lambda$, which we'll call the <em>eigenvectors</em> $\{\vec v_1,\vec v_2,...\}$ and <em>eigenvalues</em> $\{\lambda_1,\lambda_2,...\}$. Each eigenvector has exactly one corresponding eigenvalue, but one eigenvalue can correspond to many eigenvectors.</p>
<p>If $\vec v_1$ is an eigenvector with eigenvalue $\lambda_1$, we can see from the equation above that $c\vec v_1$ will also be an eigenvector with the same eigenvalue. If $\vec v_2$ is an eigenvector with the same eigenvalue, then any linear combination $a\vec v_1+b\vec v_2$ is also an eigenvector with the same eigenvalue. Because of this, we almost always restrict ourselves to a linearly independent set of eigenvectors. One matrix can have many sets of eigenvectors, but the eigenvalues will always be the same. The span of all the eigenvectors corresponding to a particular eigenvalue will always be the same for a given matrix.</p>
<p>For certain special cases, it is necessary to consider <a href="https://en.wikipedia.org/wiki/Generalized_eigenvector" rel="nofollow noreferrer">generalized eigenvectors</a> in order to completely describe the transform. These obey the characteristic equation $(A-\lambda I)^n\vec v=0$ for some $n\in\mathbb N$.</p>
<p><a href="https://en.wikipedia.org/wiki/Eigenvalues_and_eigenvectors#Overview" rel="nofollow noreferrer">According to Wikipedia</a>, the term "eigen-" is German for "characteristic", among other things. The choice makes sense, in that not only do the eigenvalues and eigenvectors completely describe a linear transform, but they also <em>characterize</em> what the transformation does geometrically. For example, real eigenvalues correspond to scaling factors, and the eigenvectors tell us in which direction this scaling is applied. Imaginary eigenvalues correspond to rotations, and the eigenvectors tell us the plane of rotation.</p>
<p>Of course, this concept generalizes to linear transforms that are harder to visualize. Of particular importance to differential equations are <em>linear differential operators</em> which are linear transformations on function spaces. The notion of eigenvectors and eigenvalues are still quite useful there, they just have to be adapted to infinite dimensions.</p>
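To make the characteristic equation <span class="math-container">$A\vec v=\lambda\vec v$</span> concrete, here is a small NumPy sketch; the two matrices are illustrative choices, not taken from the answer above:

```python
# Checking A v = lambda v numerically with NumPy (illustrative matrices).
import numpy as np

# A diagonal matrix: the eigenvalues are real scaling factors.
A = np.array([[2.0, 0.0],
              [0.0, 3.0]])
vals, vecs = np.linalg.eig(A)
# Each column of `vecs` is an eigenvector; verify A v = lambda v.
for lam, v in zip(vals, vecs.T):
    assert np.allclose(A @ v, lam * v)

# A 2D rotation by 90 degrees: no real eigenvectors; the eigenvalues
# are +/- i, matching the remark that imaginary eigenvalues correspond
# to rotations.
R = np.array([[0.0, -1.0],
              [1.0,  0.0]])
rot_vals = np.linalg.eig(R)[0]
print(sorted(rot_vals, key=lambda z: z.imag))
```

Note that `np.linalg.eig` makes no promise about the order of the eigenvalues, which is why the rotation eigenvalues are sorted before printing.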
|
995,489 | <p>This is taken from Trefethen and Bau, 13.3.</p>
<p>Why is there a difference in accuracy between evaluating the expression $(x-2)^9$ near $x=2$ and evaluating this expression:</p>
<p>$$x^9 - 18x^8 + 144x^7 -672x^6 + 2016x^5 - 4032x^4 + 5376x^3 - 4608x^2 + 2304x - 512 $$</p>
<p>Where exactly is the problem?</p>
<p>Thanks.</p>
| gammatester | 61,216 | <p><span class="math-container">$(x-2)$</span> is small by definition of <span class="math-container">$x$</span>, <span class="math-container">$(x-2)^9$</span> is even much smaller but can be computed with small relative error.
The single terms of the expanded polynomial are much larger and therefore you will suffer from <strong>catastrophic cancellation</strong> (see e.g. <a href="http://en.wikipedia.org/wiki/Loss_of_significance" rel="nofollow noreferrer">Wiki</a> or do a web search). Example:
<span class="math-container">$$x=2 + 10^{-2} \Longrightarrow (x-2)^9 = 10^{-18}.$$</span> This is below the machine epsilon for double! Now with your expanded polynomial you have to compute
<span class="math-container">$-512 + 23.04 \pm \dots$</span>.</p>
<p>And here is the actual computation with double and evaluation of the polynomial using Horner for <span class="math-container">$x=2.01:$</span></p>
<pre><code>(x-2)^9 = 9.99999999999808E-019 poly(x) = -3.75166564481333E-012
</code></pre>
<p>As expected the polynomial result is completely wrong (in terms of relative error), but
note that even the relative error of the first result is about <span class="math-container">$2\cdot 10^{-13},$</span> which comes from the fact that <span class="math-container">$0.01$</span> cannot be represented exactly as a double.</p>
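The effect is easy to reproduce in any double-precision environment; here is a short Python sketch using Horner evaluation, with the illustrative value $x=2.01$:

```python
# Reproducing the catastrophic cancellation described above.
x = 2.01
direct = (x - 2) ** 9          # small relative error
# Horner evaluation of x^9 - 18x^8 + 144x^7 - ... + 2304x - 512
coeffs = [1, -18, 144, -672, 2016, -4032, 5376, -4608, 2304, -512]
horner = 0.0
for c in coeffs:
    horner = horner * x + c
print(direct)   # about 1e-18, close to the true value
print(horner)   # completely wrong relative to 1e-18: cancellation
```

The expanded form adds and subtracts terms of magnitude up to a few thousand, so rounding errors of order machine epsilon times those magnitudes swamp the tiny true value.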
|
322,134 | <p>$$2e^{-x}+e^{5x}$$</p>
<p>Here is what I have tried: $$2e^{-x}+e^{5x}$$
$$\frac{2}{e^x}+e^{5x}$$
$$\left(\frac{2}{e^x}\right)'+(e^{5x})'$$</p>
<p>$$\left(\frac{2}{e^x}\right)' = \frac{-2e^x}{e^{2x}}$$
$$(e^{5x})'=5xe^{5x}$$</p>
<p>So the answer I got was $$\frac{-2e^x}{e^{2x}}+5xe^{5x}$$</p>
<p>I checked my answer online and it said that it was incorrect but I am sure I have done the steps correctly. Did I approach this problem correctly?</p>
| Ross Millikan | 1,827 | <p>You have an extra $x$ in the second term. $(e^{5x})'=5e^{5x}$ by the chain rule. I suspect the online check might prefer $-2e^{-x}$ for the first term, but your version is equivalent.</p>
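As a numerical sanity check of the corrected derivative $-2e^{-x}+5e^{5x}$, one can compare it against a central finite difference (an illustrative sketch, standard library only):

```python
# Finite-difference check that d/dx [2e^{-x} + e^{5x}] = -2e^{-x} + 5e^{5x}.
import math

def f(x):
    return 2 * math.exp(-x) + math.exp(5 * x)

def df(x):  # the derivative after fixing the chain-rule slip
    return -2 * math.exp(-x) + 5 * math.exp(5 * x)

h = 1e-6
for x in (-1.0, 0.0, 0.5):
    numeric = (f(x + h) - f(x - h)) / (2 * h)
    assert abs(numeric - df(x)) < 1e-4 * max(1.0, abs(df(x)))
```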
|
1,948,730 | <blockquote>
<p>For all odd integers $n$, there exists an integer $k$ such that $n=2k+1$.</p>
</blockquote>
<p>I negated using De Morgan's laws. Let $O(n)$ be "$n$ is odd" and $N(n, k)$ "$2k + 1 = n$", then
$$\neg(\forall n \exists k [O(n) \to N(n,k)])\\
\exists n \neg\exists k [O(n) \to N(n,k)]\\
\exists n \forall k \neg [O(n) \to N(n,k)]\\
\exists n \forall k \neg [\neg O(n) \lor N(n,k)]\\
\exists n \forall k [O(n) \land \neg N(n,k)]\\
$$
Therefore the negation is</p>
<blockquote>
<p>There is at least one odd $n$ such that for all $k$, $n\neq2k+1$.</p>
</blockquote>
<p>Is that the correct result?</p>
| JMP | 210,189 | <p>How about:</p>
<blockquote>
<p>For all odd integers $n$, there does <strong>not</strong> exist any integer $k$ such that $n=2k+1$</p>
</blockquote>
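<p>As a sanity check that this negation is false over the integers (so the original statement stands), a short Python sketch with the witness $k=(n-1)/2$:</p>

```python
# For every odd integer n, k = (n - 1) // 2 satisfies n = 2k + 1,
# so the negated statement fails on a sample of odd integers.
for n in range(-99, 100, 2):  # the odd integers from -99 to 99
    k = (n - 1) // 2
    assert n == 2 * k + 1
print("every sampled odd n has a witness k")
```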
|
297,907 | <p>Let us consider the ring $\mathbb{Z}_p$ and let $\zeta$ be a $p$-th root of unity. In particular $\zeta \not \in \mathbb{Z}_p$.
Denote by $\Phi _p(x)$ the $p$-th cyclotomic polynomial. Since $p$ is prime we know that it has the shape $\Phi _p(x)= 1 + x +x^2 +\dots +x^{p-1}$.
This gives rise to the quotient ring</p>
<p>$$ \mathbb{Z}_p[X]/\langle\Phi_p(x)\rangle \cong \mathbb{Z}_p[\zeta] = \mathbb{Z}_p \oplus \zeta \mathbb{Z}_p \oplus \dots \oplus \zeta^{p-2} \mathbb{Z}_p $$</p>
<p>which is obviously a free $\mathbb{Z}_p$-module of rank $p-1$. Denote $g=\zeta -1$.</p>
<p>My question is how to see that $\mathbb{Z}_p[\zeta]$ is a local ring with maximal ideal $g \mathbb{Z}_p[\zeta]$?</p>
<p>I tried to argue in the following way:
An obvious observation gives $\Phi _p(g +1) =0$, and the formula above yields $$ \Phi_p(x + 1) = p + \binom{p}{2}x + \binom{p}{3}x^2 + \dots + \binom{p}{p - 1} x^{p - 2} + x^{p - 1}. $$</p>
<p>In light of this I can conclude following inclusions:</p>
<p>$p \mathbb{Z}_p[\zeta] \subset g \mathbb{Z}_p[\zeta]$
and $g^{p-1} \mathbb{Z}_p[\zeta] \subset p \mathbb{Z}_p[\zeta]$, which imply $(g\mathbb{Z}_p[\zeta]) \cap \mathbb{Z}_p = p\mathbb{Z}_p$.</p>
<p>From here I'm stuck.</p>
| Laurent Moret-Bailly | 7,666 | <p>Let $m$ be a maximal ideal of $A:=\mathbb{Z}_p[\zeta]$. Then $m\cap\mathbb{Z}_p=p\mathbb{Z}_p$ because $A$ is a finite $\mathbb{Z}_p$-algebra. So the maximal ideals of $A$ are essentially those of $A/pA\cong\mathbb{F}_p[X]/(\Phi_p\bmod p)$. Since $\Phi_p\equiv(X-1)^{p-1}\pmod p$, we see that $A/pA$ is local with maximal ideal $(X-1)$.</p>
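The congruence $\Phi_p\equiv(X-1)^{p-1}\pmod p$ used here can be verified numerically for small primes; a Python sketch:

```python
# Check that 1 + X + ... + X^{p-1} = (X - 1)^{p-1} mod p for small primes.
from math import comb

def expand_x_minus_1_pow(n):
    """Coefficients of (X - 1)^n, lowest degree first."""
    return [comb(n, k) * (-1) ** (n - k) for k in range(n + 1)]

for p in (3, 5, 7, 11, 13):
    phi = [1] * p                      # coefficients of Phi_p
    pow_coeffs = expand_x_minus_1_pow(p - 1)
    assert [(a - b) % p for a, b in zip(phi, pow_coeffs)] == [0] * p
print("Phi_p = (X-1)^(p-1) mod p verified for small primes")
```

The check works because $\binom{p-1}{k}\equiv(-1)^k\pmod p$, so all coefficients of $(X-1)^{p-1}$ reduce to $1$ modulo $p$ when $p$ is odd.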
|