| qid | question | author | author_id | answer |
|---|---|---|---|---|
291,729 | <p>How can one show that $\large 3^{3^{3^3}}$ is larger than a googol ($\large 10^{100}$) but smaller than a googolplex ($\large 10^{10^{100}}$)?</p>
<p>Thanks much in advance!!!</p>
| Community | -1 | <p>$$3^{3^{3^3}} > 3^{300} > 10^{100}$$ since $$3^{3^3} > 3^7 = 3 \cdot (3^3)^2 > 3 \cdot 10^2 = 300$$ since $$3^3 > 7$$</p>
<hr>
<p>$$3^{3^{3^3}} < 10^{3^{3^3}} < 10^{10^{100}}$$ since $$3^{3^3} < 3^{100} < 10^{100}$$ since $$3^3 < 100$$</p>
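Both bounds are easy to sanity-check numerically with base-10 logarithms, since the number itself is far too large to construct (an editorial check, not part of the original answer):

```python
import math

# log10 of 3^(3^(3^3)) = 3^27 * log10(3); 3^27 is small enough to compute exactly
exponent = 3 ** (3 ** 3)                     # 3^27 = 7_625_597_484_987
log10_power_tower = exponent * math.log10(3)

# lower bound: 3^(3^(3^3)) > 10^100   <=>   log10(...) > 100
assert log10_power_tower > 100

# upper bound: 3^(3^(3^3)) < 10^(10^100)   <=>   log10(...) < 10^100
assert log10_power_tower < 10 ** 100

print(f"log10(3^3^3^3) is about {log10_power_tower:.3e}")  # roughly 3.6e12
```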
|
1,954,470 | <p>The question defines $x \in \mathbb{R}$ with $x>0$ and a sequence of integers with $a_0=[x]$, $a_1=[10^1(x-a_0)]$, and in general $a_n=[10^n(x-(a_0+10^{-1}a_1+ ... + 10^{-(n-1)}a_{n-1}))]$. I want to prove that $0 \leq a_n \leq 9$ for each $n \in \mathbb{N}$.</p>
<p>I am completely stumped at what to do. I feel like the Archimedean property would be useful. I also feel that a proof by induction would be useful, but I can't even think of how to solve the base case. I would really, really appreciate it if someone could help me out. </p>
<p>Here's my attempt at proving the base case $(0 \leq a_1 \leq 9)$ for an induction-style proof:</p>
<p>Since $a_0=[x]$, we can assume WLOG that $x-a_0\geq0$. Hence, this implies that $a_1\geq 0$. So that's one side of the proof done. But I'm not sure how to prove $a_1 \leq 9$.</p>
| Mick | 42,351 | <p>I can only show you the logic.</p>
<p>Note that the square bracket $[\,\cdot\,]$ denotes the greatest integer (floor) function.</p>
<p>Let 75.2609 be an instance of x. Then, $a_0 = [x] = 75$. Its action is equivalent to stripping off the integral part.</p>
<p>$x - a_0$ gives the decimal part (i.e. .2609).</p>
<p>$10^1(x - a_0)$ is 10 times that, giving 2.609.</p>
<p>$[10^1(x - a_0)]$ takes the corresponding integral part (2). Therefore, $a_1 = 2$.</p>
<p>The formula just strips off the <strong>decimal digits</strong> of the non-integral portion one at a time until it ends.</p>
<p>Every <strong>decimal</strong> digit stripped off ($a_1, a_2, ... a_n$) can only be chosen from 0 ~ 9 (inclusive) and therefore lies within the said range.</p>
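The stripping procedure described above can be sketched directly (an editorial illustration, not part of the original answer; exact rational arithmetic via `fractions` avoids floating-point artifacts):

```python
from fractions import Fraction
from math import floor

def decimal_digits(x, n_digits):
    """Strip off the integer part of x, then n_digits successive decimal digits."""
    x = Fraction(x)
    a = [floor(x)]                          # a_0 = [x]
    partial = Fraction(a[0])                # a_0 + a_1/10 + ... so far
    for n in range(1, n_digits + 1):
        a_n = floor(10**n * (x - partial))  # a_n = [10^n (x - partial)]
        a.append(a_n)
        partial += Fraction(a_n, 10**n)
    return a

digits = decimal_digits(Fraction(752609, 10000), 4)   # x = 75.2609
print(digits)   # [75, 2, 6, 0, 9]

# every stripped decimal digit lies in 0..9, as the answer argues
assert all(0 <= d <= 9 for d in digits[1:])
```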
|
206,227 | <p>I was given the following problem:</p>
<p>Let $V_1, V_2, \dots$ be an infinite sequence of Boolean variables. For each natural number $n$, define a proposition $F_n$ according to the following rules: </p>
<p>$$\begin{align*}
F_0 &= \text{False}\\
F_n &= (F_{n-1} \ne V_n)\;.
\end{align*}$$</p>
<p>Use induction to prove that for all $n$, $F_n$ is $\text{True}$ if and only if an odd number of the variables $V_k \;( k \le n)$ are $\text{True}$.</p>
<p>Can anyone help me out with at least beginning this problem? I'm not even entirely sure what it is asking.</p>
| copper.hat | 27,978 | <p>First, notice that for any boolean value $v$, we have $(\text{False} \neq v) = v$, and $(\text{True} \neq v) = \neg v$.</p>
<p>The base case for the induction is $F_1 = (F_0 \neq V_1) = (\text{False} \neq V_1) = V_1$. If $V_1$ is false, then an even number (zero) of the values $(V_1)$ are true; if $V_1$ is true, then an odd number (one) of them are true. Hence the base case holds.</p>
<p>Now assume that $F_n$ is true iff an odd number of the values $(V_1,...,V_n)$ are true. Then we have two cases to consider: (1) $V_{n+1}$ is false, in which case $F_{n+1} = (F_n \neq V_{n+1}) = F_n$ and the number of true values among $(V_1,...,V_{n+1})$ is the same as among $(V_1,...,V_n)$; and (2) $V_{n+1}$ is true, in which case $F_{n+1} = (F_n \neq V_{n+1}) = \neg F_n$ and the number of true values increases by one, so its parity flips. </p>
<p>In either case, $F_{n+1}$ is true iff an odd number of the values $(V_1,...,V_{n+1})$ are true, which completes the induction.</p>
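The recursion is just an iterated XOR, so the claim can be checked exhaustively for small $n$ (an editorial sketch, not part of the original answer):

```python
from itertools import product

def F(values):
    """F_n computed by the recursion F_0 = False, F_n = (F_{n-1} != V_n)."""
    f = False
    for v in values:
        f = (f != v)
    return f

# check the claim: F_n is True iff an odd number of V_1..V_n are True
for n in range(1, 8):
    for assignment in product([False, True], repeat=n):
        assert F(assignment) == (sum(assignment) % 2 == 1)
print("claim verified for all assignments with n <= 7")
```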
|
206,227 | <p>I was given the following problem:</p>
<p>Let $V_1, V_2, \dots$ be an infinite sequence of Boolean variables. For each natural number $n$, define a proposition $F_n$ according to the following rules: </p>
<p>$$\begin{align*}
F_0 &= \text{False}\\
F_n &= (F_{n-1} \ne V_n)\;.
\end{align*}$$</p>
<p>Use induction to prove that for all $n$, $F_n$ is $\text{True}$ if and only if an odd number of the variables $V_k \;( k \le n)$ are $\text{True}$.</p>
<p>Can anyone help me out with at least beginning this problem? I'm not even entirely sure what it is asking.</p>
| Brian M. Scott | 12,042 | <p>$\newcommand{\T}{\text{True}}\newcommand{\F}{\text{False}}$The statement that you’re trying to prove for each $n\ge 0$ is:</p>
<blockquote>
<p>$P(n):$ $F_n$ is $\T$ if and only if an odd number of the variables $V_k$ with $k\le n$ are $\T$.</p>
</blockquote>
<p>To check the base case of your induction, observe that there are $0$ variables $V_k$ such that $k\le 0$ and $V_k$ is $\T$, since there are $0$ variables $V_k$ with $k\le 0$, and $0$ is an even number. Thus, the number of variables $V_k$ such that $k\le 0$ and $V_k=\T$ is not odd, and of course $F_0$ is not $\T$. Thus, $P(0)$ is true.</p>
<p>For the induction step, assume that $P(n)$ is true for some $n\ge 0$; we must show that $P(n+1)$ is also true. Let $m$ be the number of variables $V_k$ such that $k\le n$ and $V_k=\T$. We’ve assumed that $P(n)$ holds, so either $F_n$ is true and $m$ is odd, or $F_n$ is false and $m$ is even. Since $V_{n+1}$ can be either $\T$ or $\F$, we have four possibilities to consider:</p>
<ol>
<li><p>If $F_n=\T$ (so $m$ is odd) and $V_{n+1}=\T$, then $F_n=V_{n+1}$, so $F_{n+1}=\F$. There are $m+1$ variables $V_k$ such that $k\le n+1$ and $V_k=\T$, and $m$ is odd, so $m+1$ is even. In this case $F_{n+1}=\F$ and the number of variables $V_k$ such that $k\le n+1$ and $V_k=\T$ is even, which is fine.</p></li>
<li><p>If $F_n=\T$ (so $m$ is odd) and $V_{n+1}=\F$, then $F_n\ne V_{n+1}$, so $F_{n+1}=\T$. There are still only $m$ $\T$ variables $V_k$ with $k\le n+1$, and $m$ is odd, so this case is also fine: $F_{n+1}=\T$, and the number of $\T$ variables $V_k$ with $k\le n+1$ is odd.</p></li>
<li><p>If $F_n=\F$ (so $m$ is even) and $V_{n+1}=\T$, then $F_n\ne V_{n+1}$, so $F_{n+1}=\T$. There are $m+1$ $\T$ variables $V_k$ with $k\le n+1$, and $m$ is even, so $m+1$ is odd, and again we’re in good shape: $F_{n+1}=\T$, and the number of $\T$ variables $V_k$ with $k\le n+1$ is odd.</p></li>
<li><p>If $F_n=\F$ (so $m$ is even) and $V_{n+1}=\F$, then $F_n=V_{n+1}$, so $F_{n+1}=\F$. There are still only $m$ $\T$ variables $V_k$ with $k\le n+1$, and $m$ is even, which is again what we need: $F_{n+1}=\F$, and the number of $\T$ variables $V_k$ with $k\le n+1$ is even.</p></li>
</ol>
<p>It follows that $P(n+1)$ holds, and by induction $P(n)$ holds for all $n\ge 0$.</p>
|
2,055,803 | <p>Simplify $\left(\dfrac{4}{5} - \dfrac{3}{5}i\right)^{\!75}$</p>
<p>I've searched around on the internet and haven't found a very straightforward answer for this particular problem. I believe this problem has something to do with Euler's Formula, but I'm not sure how to use it in this case.</p>
<p>EDIT: We are not allowed to use calculators for this problem.</p>
| Bill Dubuque | 242 | <p>Or, multiply by $\, y = x^{-1}$ to get $\ y^2-3y +2 = (y\!-\!2)(y\!-\!1)= 0\ $ so $\, x^{-1} = y = 2,1$</p>
|
3,891,124 | <p>When <span class="math-container">$ (1+cx)^n$</span> is expanded as a series in ascending powers of <span class="math-container">$x$</span>, the first three terms are given by <span class="math-container">$1+20x+150x^2$</span>. Calculate the value of the constants <span class="math-container">$c$</span> and <span class="math-container">$n$</span>.</p>
<p>If possible please include working so I could perhaps understand how you got to the answer.</p>
| peter.petrov | 116,591 | <p>You need to use the <a href="https://en.wikipedia.org/wiki/Binomial_theorem#Statement" rel="nofollow noreferrer">Binomial formula</a></p>
<p>Applying it you get:</p>
<p><span class="math-container">$$(1+cx)^n = {n \choose 0} \cdot 1^n \cdot (cx)^{0} + {n \choose 1} \cdot 1^{n-1} \cdot
(cx)^{1} + {n \choose 2} \cdot 1^{n-2} \cdot (cx)^{2} +\ ... $$</span></p>
<p><span class="math-container">$$(1+cx)^n = {n \choose 0} + {n \choose 1} \cdot (cx)^{1} + {n \choose 2} \cdot (cx)^{2} +\ ... $$</span></p>
<p><span class="math-container">$$(1+cx)^n = {n \choose 0} + {n \choose 1} \cdot cx + {n \choose 2} \cdot c^2x^{2} +\ ... $$</span></p>
<p>Now you can construct equations for <span class="math-container">$c,n$</span> because you are given the coefficients before <span class="math-container">$x$</span> and <span class="math-container">$x^2$</span>.</p>
<p><span class="math-container">$${n \choose 1} \cdot c = 20 $$</span>
<span class="math-container">$${n \choose 2} \cdot c^2 = 150 $$</span></p>
<p>Let us solve this system of two equations.</p>
<p><span class="math-container">$$nc= 20$$</span>
<span class="math-container">$$\frac{n(n-1)}{2}c^2 = 150$$</span></p>
<p>We get</p>
<p><span class="math-container">$$nc = 20$$</span>
<span class="math-container">$$(n-1)c = 15$$</span></p>
<p>This means</p>
<p><span class="math-container">$$nc = 20$$</span>
<span class="math-container">$$nc-c = 15$$</span></p>
<p>Subtracting the second equation from the first gives <span class="math-container">$c = 20 - 15 = 5$</span>, and then <span class="math-container">$n = 20/c = 4$</span>.</p>
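The solution $c=5$, $n=4$ is easy to verify against the stated coefficients (a quick editorial check, not part of the original answer):

```python
from math import comb

c, n = 5, 4
# coefficient of x   in (1+cx)^n is C(n,1) * c
# coefficient of x^2 in (1+cx)^n is C(n,2) * c^2
assert comb(n, 1) * c == 20
assert comb(n, 2) * c**2 == 150

# full expansion of (1 + 5x)^4 as coefficients of [x^0, x^1, ..., x^4]
coeffs = [comb(n, k) * c**k for k in range(n + 1)]
print(coeffs)  # [1, 20, 150, 500, 625]
```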
|
3,891,124 | <p>When <span class="math-container">$ (1+cx)^n$</span> is expanded a series in ascending powers of <span class="math-container">$x$</span>, the first three terms are given by <span class="math-container">$1+20x+150x^2$</span>. Calculate the value of the constants <span class="math-container">$c$</span> and <span class="math-container">$n$</span>.</p>
<p>If possible please include working so I could perhaps understand how you got to the answer.</p>
| Z Ahmed | 671,540 | <p><span class="math-container">$$(1+cx)^n=1+ncx+\frac{1}{2}n(n-1)(cx)^2+...$$</span>
So we have <span class="math-container">$nc=20~~~(1)$</span> and <span class="math-container">$\frac{1}{2}n(n-1)c^2 =150~~~(2)$</span>
<span class="math-container">$$\implies n(n-1)c^2=300 \implies 20 (n-1)c=300 \implies (n-1)c=15 \implies 20-c=15 $$</span>
<span class="math-container">$$\implies c=5, n=4$$</span></p>
|
3,805,745 | <p>I am working my way through a linear algebra book and would appreciate some help verifying my proof.</p>
<p><strong>Prove that <span class="math-container">$|u \cdot v| = |u | |v |$</span> if and only if one vector is a scalar multiple of the
other.</strong></p>
<p><strong>PROOF:</strong></p>
<p>Let <span class="math-container">$k ∈ ℝ$</span> and <span class="math-container">$u ,v \in\mathbb R^n$</span> and <span class="math-container">$~u =k~v$</span></p>
<p>ASSUME: <span class="math-container">$|u\cdot v| = |u | |v |$</span></p>
<p>our assumption holds IFF <span class="math-container">$|kv \cdot v| = |kv | |v |$</span></p>
<p>which again holds IFF <span class="math-container">$k|v \cdot v| = k|v | |v |$</span></p>
<p>and, by definition of the dot product, holds IFF <span class="math-container">$k|v|^2 = k|v |^2$</span></p>
<p>Q.E.D.</p>
| C Squared | 803,927 | <p>In your proof, you make the assumption that <span class="math-container">$\vec{u}=k\vec{v}$</span> and <span class="math-container">$|\vec{u}\cdot\vec{v}|=||\vec{u}||\,||\vec{v}||$</span> and then claim that this holds when <span class="math-container">$$|\vec{u}\cdot\vec{v}|=||\vec{u}||\,||\vec{v}|| \Longleftrightarrow |k\vec{v}\cdot\vec{v}|=||k\vec{v}||\,||\vec{v}||\Longleftrightarrow k|\vec{v}\cdot\vec{v}|=k||\vec{v}||\,||\vec{v}||=k||\vec{v}||^2$$</span> This demonstrates the <span class="math-container">$\Leftarrow $</span> direction, that is, assuming the far right conditions, then we can arrive at the far left condition, but you have not shown that the far left condition implies the far right conditions. This proof uses an assumption which is supposed to be proved when you suggest that <span class="math-container">$\vec{u}=k\vec{v}$</span> and <span class="math-container">$|\vec{u}\cdot\vec{v}|=||\vec{u}||\,||\vec{v}||\Longrightarrow |k\vec{v}\cdot\vec{v}|=||k\vec{v}||\,||\vec{v}||$</span>, which is the statement you are trying to prove.</p>
<p>Here is how I went about proving this:</p>
<p><span class="math-container">$\Rightarrow$</span> direction: Suppose <span class="math-container">$|\vec{u}\cdot \vec{v}|=||\vec{u}||\,||\vec{v}||$</span>. We want to show that one vector is a scalar multiple of the other. Let <span class="math-container">$$\vec{u}=\begin{bmatrix}u_1\\u_2\\\vdots\\u_n \end{bmatrix}, \vec{v}=\begin{bmatrix}v_1\\v_2\\\vdots\\v_n \end{bmatrix}$$</span>
Then we have that <span class="math-container">$$\begin{align}|\vec{u}\cdot\vec{v}|&=|u_1v_1+u_2v_2+...+u_nv_n|\end{align}$$</span>
and
<span class="math-container">$$\begin{align}||\vec{u}||\,||\vec{v}||&=\sqrt{(u_1^2+u_2^2+...+u_n^2)(v_1^2+v_2^2+...+v_n^2)}\end{align} $$</span></p>
<p>This implies that <span class="math-container">$$(u_1v_1+u_2v_2+...+u_nv_n)^2=(u_1^2+u_2^2+...+u_n^2)(v_1^2+v_2^2+...+v_n^2)$$</span> but by the Cauchy-Schwarz inequality, <span class="math-container">$$(u_1v_1+u_2v_2+...+u_nv_n)^2\leq(u_1^2+u_2^2+...+u_n^2)(v_1^2+v_2^2+...+v_n^2)$$</span> with equality if and only if <span class="math-container">$\vec{u}$</span> and <span class="math-container">$\vec{v}$</span> are linearly dependent, or in other words, without loss of generality, <span class="math-container">$\vec{v}=k\vec{u}$</span>.</p>
<p><span class="math-container">$\Leftarrow$</span> direction: Assume vectors <span class="math-container">$\vec{u}$</span> and <span class="math-container">$\vec{v}$</span> are linearly dependent, that is, without loss of generality, <span class="math-container">$\vec{v}=k\vec{u}$</span> for some <span class="math-container">$k\in\mathbb{R}$</span>. Then, <span class="math-container">$$|\vec{u}\cdot\vec{v}|=|\vec{u}\cdot k\vec{u}|=|k|\,||\vec{u}||\,||\vec{u}||=||\vec{v}||\,||\vec{u}||$$</span></p>
<p>Others have suggested using <span class="math-container">$|\vec{u}\cdot\vec{v}|=||\vec{u}||\,||\vec{v}||\,|\cos\theta|$</span> where <span class="math-container">$\theta$</span> represents the angle between the two vectors, and demonstrating that <span class="math-container">$|\cos\theta|=1$</span> exactly when <span class="math-container">$\theta$</span> is an integer multiple of <span class="math-container">$\pi$</span>, which implies that the vectors are parallel or antiparallel, and hence linearly dependent.</p>
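Both directions of the equivalence can be illustrated numerically (an editorial sketch, not part of the original answer):

```python
import math
import random

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(u):
    return math.sqrt(dot(u, u))

random.seed(0)
u = [random.uniform(-1, 1) for _ in range(5)]

# linearly dependent case: equality holds (up to floating-point error)
v = [-3.7 * x for x in u]
assert math.isclose(abs(dot(u, v)), norm(u) * norm(v))

# a generic independent vector: strict Cauchy-Schwarz inequality
w = [random.uniform(-1, 1) for _ in range(5)]
assert abs(dot(u, w)) < norm(u) * norm(w)
```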
|
209,420 | <p>For instance, when trying to compute <span class="math-container">$\mathbb{E}[\sum_{i=1}^{10}X_i]$</span> where <span class="math-container">$X_i \sim N(0,1)$</span>, I input into Mathematica:</p>
<pre><code>Expectation[Sum[x[i],{i, 1, 10}], x[i] \[Distributed] NormalDistribution[]]
</code></pre>
<p>but, instead of getting 0, I get:</p>
<pre><code>x[1]+x[2]+x[3]+x[4]+x[5]+x[6]+x[7]+x[8]+x[9]+x[10]
</code></pre>
<p>Why is it not simplifying to 0?</p>
| SHuisman | 66,987 | <p>You need to supply the distribution assumption for every <code>x[i]</code>:</p>
<pre><code>Expectation[Sum[x[i], {i, 1, 10}], Table[x[i] \[Distributed] NormalDistribution[], {i, 1, 10}]]
</code></pre>
<p>returning:</p>
<pre><code>0
</code></pre>
|
563,161 | <p>For something I'm working on, I have a matrix $A$ with another matrix $U$ which is unitary ($U^*U = I$), and I'm trying to show that, for the Frobenius norm, $\|A\| =\|UA\|$. Now, I can do this pretty easily if an inner product space exists. For example, $\|A\| = \sqrt{\langle A,A\rangle}$ and $\|UA\| = \sqrt{\langle UA,UA\rangle} = \sqrt{\langle A,U^*UA\rangle} = \sqrt{\langle A,A\rangle} = \|A\|$. However, I'm not sure if I evoke the inner product space if I'm just told that the Frobenius norm exists. Is this the appropriate approach?</p>
| Robert Israel | 8,508 | <p>I don't know what you mean by "if an inner product space exists". The Frobenius norm does come from an inner product, namely the Frobenius inner product
$(A, B) = {\rm trace}(A^* B)$. </p>
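The invariance is easy to see numerically with a real orthogonal (hence unitary) $U$, e.g. a rotation matrix (a small editorial sketch, not part of the original answer):

```python
import math

def matmul(A, B):
    # naive matrix product of two lists-of-rows
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def frobenius(A):
    # Frobenius norm: sqrt of the sum of squared entries
    return math.sqrt(sum(x * x for row in A for x in row))

theta = 0.7
U = [[math.cos(theta), -math.sin(theta)],   # rotation matrix: U^T U = I
     [math.sin(theta),  math.cos(theta)]]
A = [[1.0, 2.0], [3.0, 4.0]]

# ||U A||_F == ||A||_F, up to floating-point error
assert math.isclose(frobenius(matmul(U, A)), frobenius(A))
```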
|
196,303 | <p>All models of space that I know from physics use real or complex manifolds. I was just wondering whether this is still the case at the Planck scale. In string theory, physicists still use strings (circles) in an 11-dimensional manifold to model particles. Do they do this because there is no mathematical alternative, or because the mathematical nature of space at the Planck scale has not yet been discovered? </p>
| John Baez | 2,893 | <p>There are approaches to quantum gravity where spacetime is described as a quantum superposition of labelled piecewise-linear CW complexes or other related combinatorial/algebraic entities. See for example:</p>
<ul>
<li><p>John Baez, <a href="http://arxiv.org/abs/gr-qc/9905087" rel="nofollow noreferrer">An introduction to spin foam models of quantum gravity and BF theory</a>, in <em>Geometry and Quantum Physics</em>, eds. Helmut Gausterer and Harald Grosse, Springer, Berlin, 2000, pp. 25-93. </p></li>
<li><p>John Baez, <a href="http://arxiv.org/abs/gr-qc/9902017" rel="nofollow noreferrer">Higher-dimensional algebra and Planck-scale physics</a>, in <em>Physics Meets Philosophy at the Planck Length</em>, eds. Craig Callender and Nick Huggett, Cambridge U. Press, Cambridge, 2001, pp. 177-195. </p></li>
<li><p>Daniele Oriti, <em><a href="http://arxiv.org/abs/gr-qc/0311066" rel="nofollow noreferrer">Spin Foam Models of Quantum Spacetime</a></em>, PhD thesis, University of Cambridge, 2003, 337 pp.</p></li>
</ul>
<p>However, your question feels more like a physics question than a math question to me.</p>
|
54,486 | <p>Many colour schemes and colour functions can be accessed using <a href="http://reference.wolfram.com/mathematica/ref/ColorData.html"><code>ColorData</code></a>.</p>
<p>Version 10 introduced new default colour schemes, and a new customization option using <a href="http://reference.wolfram.com/mathematica/ref/PlotTheme.html"><code>PlotTheme</code></a>. The colour themes accessible with <code>PlotTheme</code> have both discrete colour schemes and gradients.</p>
<p>Is there a standard way to access these? I.e. get a colour function that take a real argument in $[0,1]$ and returns a shade, or one that takes an integer argument and returns a colour, as with <code>ColorData</code>.</p>
| kglr | 125 | <p><strong>Update 2:</strong> The content and organization of <code>$PlotThemes</code> in versions 10 and 9 are very different. In Version 10</p>
<pre><code> Charting`$PlotThemes
</code></pre>
<p>gives</p>
<p><img src="https://i.stack.imgur.com/3MlWt.png" alt="enter image description here"></p>
<p>whereas in Version 9, the content is organized around Charting/Plotting functions (See the picture in original post below.)</p>
<p>The color schemes can be obtained using:</p>
<pre><code> "Color"/. Charting`$PlotThemes
(* {BackgroundColor, BlackBackground, BoldColor, ClassicColor, CoolColor,
DarkColor, GrayColor, NeonColor, PastelColor, RoyalColor, VibrantColor, WarmColor,
DefaultColor, EarthColor, GarnetColor, OpalColor, SapphireColor, SteelColor,
SunriseColor, TextbookColor, WaterColor} *)
Grid[{#,Row@(("DefaultPlotStyle"/.(Method/.
Charting`ResolvePlotTheme[#, ListPlot]))/.
Directive[x_,__]:>x)}&/@("Color"/. Charting`$PlotThemes),Dividers->All]
</code></pre>
<p><img src="https://i.stack.imgur.com/W4atq.png" alt="enter image description here"></p>
<p><strong>Update:</strong> The function that defines the color schemes and styles seems to be <code>ResolvePlotTheme</code>, which is in the <code>Charting</code> context in both Version 9 and 10.</p>
<pre><code>?Charting`ResolvePlotTheme
(* too long to copy here ... *)
</code></pre>
<p>For example,</p>
<pre><code>Charting`ResolvePlotTheme["Vibrant", ContourPlot]
(* {BaseStyle -> GrayLevel[0.5], BoundaryStyle -> None,
ColorFunction -> (Blend[{Hue[0.5, 1, 0.5], Hue[0.35, 0.5, 0.7],
Hue[0.17, 0.7, 0.9]}, #1] &), ContourStyle -> GrayLevel[1, 0.5],
GridLines -> Automatic,
GridLinesStyle -> Directive[GrayLevel[0.5], Dashing[{0, Small}]],
Method -> {"GridLinesInFront" -> True}} *)
</code></pre>
<p>So, one can access the color functions used in these themes using something like;</p>
<pre><code> Grid[{#, ColorFunction /. Charting`ResolvePlotTheme[#, ContourPlot]} & /@
("ContourPlot" /. Charting`$PlotThemes), Dividers -> All]
</code></pre>
<p><img src="https://i.stack.imgur.com/vTKH9.png" alt="enter image description here"></p>
<p>More generally, one can get the settings for <code>ColorFunction</code>, <code>ChartStyle</code>, <code>PlotStyle</code> <code>BaseStyle</code> etc. using a similar approach:</p>
<pre><code>Grid[{#, Column@FilterRules[Charting`ResolvePlotTheme[#, PieChart],
{ColorFunction, ChartStyle, BaseStyle}]} & /@
("PieChart" /. Charting`$PlotThemes), Dividers -> All]
</code></pre>
<p><img src="https://i.stack.imgur.com/riCUJ.png" alt="enter image description here"></p>
<hr>
<p><code>PlotTheme</code> seems to work in Version 9.0.1.0 as an undocumented feature:</p>
<pre><code>?*`*PlotTheme*
</code></pre>
<p><img src="https://i.stack.imgur.com/nQtQV.png" alt="enter image description here"></p>
<p>After <code>Unprotect</code> and <code>ClearAttributes[--,ReadProtected]</code> one can access some details. For example:</p>
<pre><code>?Charting`$PlotThemes
</code></pre>
<p><img src="https://i.stack.imgur.com/f2Xzc.png" alt="enter image description here"></p>
<p>And, despite the syntax highlighting suggesting an error, they work as expected:</p>
<pre><code>Row[Plot[Table[BesselJ[n, x], {n, 5}], {x, 0, 10}, Evaluated -> True,
ImageSize -> 400, PlotLabel -> Style[#, 20],
Charting`PlotTheme -> #] & /@ {"Vibrant", "Monochrome"}]
</code></pre>
<p><img src="https://i.stack.imgur.com/16uqI.png" alt="enter image description here"></p>
|
3,873,138 | <p>Since we have variable coefficients we will use the Cauchy-Euler method to solve this DE. First we substitute <span class="math-container">$y=x^m$</span> into our given DE. This then gives:</p>
<p><span class="math-container">$9x(m(m-1)x^{m-2}) + 9mx^{m-1} = 0$</span></p>
<p>Note that:</p>
<p><span class="math-container">$x^{m-2} = x^{m-1}x^{-1}$</span></p>
<p>Then</p>
<p><span class="math-container">$9x(m(m-1)x^{m-1}x^{-1}) + 9mx^{m-1} = 0$</span></p>
<p><span class="math-container">$9(m(m-1)x^{m-1}) + 9mx^{m-1} = 0$</span></p>
<p><span class="math-container">$9mx^{m-1}((m-1)) + 9mx^{m-1} = 0 \Rightarrow m-1=0$</span></p>
<p><span class="math-container">$m_{1} = 1$</span>, so <span class="math-container">$y_{1}=c_{1}x$</span> is one solution, and using reduction of order we get the second solution <span class="math-container">$y_{2}=c_{2}x\ln(x)$</span>; by superposition for homogeneous equations we get the general solution</p>
<p><span class="math-container">$y = c_{1}x + c_{2}x\ln(x)$</span></p>
<p>However, I am told that this is wrong and the answer is <span class="math-container">$y=c_{1} + c_{2}\ln(x)$</span></p>
<p>What happened to the factor x?</p>
| Michael Rozenberg | 190,319 | <p>It's <span class="math-container">$$(xy')'=0$$</span> or
<span class="math-container">$$xy'=C$$</span> or
<span class="math-container">$$y=C\ln|x|+C_1.$$</span></p>
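A quick editorial sanity check (not part of the original answer) that $y=C\ln x + C_1$ satisfies $xy''+y'=(xy')'=0$ for $x>0$:

```python
import math

C, C1 = 2.5, -1.0

def y(x):   return C * math.log(x) + C1   # proposed general solution (x > 0)
def yp(x):  return C / x                  # y'
def ypp(x): return -C / x ** 2            # y''

# x*y'' + y' = (x*y')' should vanish identically
for x in (0.5, 1.0, 3.0, 10.0):
    assert math.isclose(x * ypp(x) + yp(x), 0.0, abs_tol=1e-12)
```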
|
2,478,229 | <p>The matter of interest is</p>
<p>$$\int_{0}^{1/2} \frac{1}{|\sqrt{x}\ln(x)|^p}\, dx$$</p>
<p>I am aware that this integral converges for $p=2$ (that's not too hard to show). I also believe that this integral diverges for $p>2$...but how can I show that using elementary calculus and related techniques (comparison test etc)? </p>
| Jack D'Aurizio | 44,121 | <p>By enforcing the substitution $x=e^{-z}$ we get
$$ \int_{0}^{1/2}\frac{dx}{\left(-\sqrt{x}\log x\right)^p} = \int_{\log 2}^{+\infty}\exp\left[\left(\frac{p}{2}-1\right)z\right]\frac{dz}{z^p} $$
and we clearly need $p\leq 2$ to ensure the (improperly-Riemann or Lebesgue)-integrability of $\exp\left[\left(\frac{p}{2}-1\right)z\right]\frac{1}{z^p}$ over $(\log 2,+\infty)$.</p>
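The substitution can be sanity-checked pointwise: with $x=e^{-z}$, the original integrand times $|dx/dz|=e^{-z}$ must equal the transformed integrand (an editorial check, not part of the original answer):

```python
import math

def original_integrand(x, p):
    # 1 / (-sqrt(x) * log(x))^p, positive for 0 < x < 1
    return 1.0 / (-math.sqrt(x) * math.log(x)) ** p

def transformed_integrand(z, p):
    # exp((p/2 - 1) z) / z^p
    return math.exp((p / 2 - 1) * z) / z ** p

# with x = e^(-z) we have dx = -e^(-z) dz, so the original integrand
# times |dx/dz| = e^(-z) must equal the transformed integrand
for p in (1.5, 2.0, 2.5):
    for z in (1.0, 2.0, 5.0):
        x = math.exp(-z)
        assert math.isclose(original_integrand(x, p) * x, transformed_integrand(z, p))
```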
|
2,478,229 | <p>The matter of interest is</p>
<p>$$\int_{0}^{1/2} \frac{1}{|\sqrt{x}\ln(x)|^p}\, dx$$</p>
<p>I am aware that this integral converges for $p=2$ (that's not too hard to show). I also believe that this integral diverges for $p>2$...but how can I show that using elementary calculus and related techniques (comparison test etc)? </p>
| Jürg W. Spaak | 475,371 | <p>A simple change of variables does the trick: $y=1/x$</p>
<p>$$\int_0^\frac12\frac{dx}{(-\sqrt x\log(x))^p} = \int_2^\infty\frac1{\left(y^{-1/2}\log(y)\right)^p}\frac{dy}{y^2}
=\int_2^\infty \frac{y^{\frac p2-2}}{\log(y)^p}dy$$</p>
<p>The last integral converges iff $\frac p2-2\leq-1$, which is equivalent to $p\leq2$. (In the boundary case $p=2$ the integrand is $\frac{1}{y\log(y)^2}$, which is still integrable at infinity.)</p>
|
145,612 | <p>Why are isosceles triangles called that — or called anything? Why is their class given a name? Why did they find their way into the <em>Elements</em> and every single elementary geometry text and course ever since? Did no one ever ask himself, "What use is this, or why is it interesting?"?</p>
<p>Here are some facts about isosceles triangles which you might think would serve as valid answers to the above question, and I will attempt to show that they do not:</p>
<ul>
<li><em>A triangle has two equal sides iff it has two equal angles.</em> But that's of interest only because we're already looking at the one class (triangles with two equal sides) or the other (those with two equal angles). And, in any event, the statement of the theorem is not more interesting than its generalization, that the larger a side in a triangle, the greater the angle opposite it.</li>
<li><em>Various facts about the isosceles right triangle.</em> Fine, I'll grant that the isosceles right triangle is interesting. But that's insufficient reason to give the much broader class of isosceles triangles a name.</li>
<li><em>Any triangle can be partitioned into $n$ isosceles triangles $\forall n>4$ — and various other recent results.</em> Very nice, but isosceles triangles are, of course, in Euclid, so these don't really answer the question.</li>
</ul>
| Mark Bennet | 2,906 | <p>Here are a couple of very practical reasons:</p>
<p>If you are an engineer and you have two long pieces of wood or metal which you want to secure a fixed distance from each other, you might just have to hand a number of standard pieces of the same length (I think about making a crane from a Meccano set). Then you are likely to create a structure which contains a number of isosceles triangles, with the exact geometry depending on the separation you want to achieve. </p>
<p>In an engineering design, the equal legs of an isosceles triangle will often be bearing the same load, and therefore need to be equally strong, stiff etc, so can be made of standard material.</p>
|
1,474,067 | <p>A silly example:</p>
<p>$\exists x (P (x, x)) \leftrightarrow \exists x\forall x (P (x, x))$</p>
<p>Intuition tells me that, because we're dealing with the same variable, the Exists on the right side is of no importance, so that side of the equivalence would be equivalent to $\forall x (P (x, x))$.</p>
<p>Now, concerning the final result, does the existence of an x that satisfies P implies that all x do? If so, why?</p>
| Noah Schweber | 28,111 | <p>"$\exists x \forall x(P(x, x))$" is not a well-formed formula - you're not allowed to overload variables like this, precisely because it leads to ambiguity.</p>
|
3,625,627 | <p>If <span class="math-container">$S = \sum_{n=1}^{243} \frac{1}{n^{4/5}} $</span>. </p>
<p>Find the value of <span class="math-container">$\lfloor S \rfloor$</span> where <span class="math-container">$\lfloor \cdot \rfloor$</span> represents the greatest integer function.</p>
<p>By approximation using definite integral, I know the answer lies in the set <span class="math-container">$[10,15]$</span> (approximately) but I don't know how to find the exact sum.</p>
| Misha Lavrov | 383,078 | <p>You're getting a very loose approximation with the integral if you start at <span class="math-container">$0$</span>, because <span class="math-container">$\int_0^1 x^{-4/5}\,dx = 5$</span>. That's the majority of the error.</p>
<p>To avoid this, write <span class="math-container">$$S = 1 + \sum_{n=2}^{243} \frac1{n^{4/5}}$$</span> and then use integrals to put upper and lower bounds on the sum from <span class="math-container">$2$</span> to <span class="math-container">$243$</span>. Since the function <span class="math-container">$f(x) = \frac1{x^{4/5}}$</span> is decreasing for <span class="math-container">$x>0$</span>, we have <span class="math-container">$$\int_2^{244} \frac1{x^{4/5}}\,dx < \sum_{n=2}^{243} \frac1{n^{4/5}} < \int_1^{243} \frac1{x^{4/5}}\,dx$$</span> and that should be enough to estimate the sum closely enough to find the floor <span class="math-container">$\lfloor S\rfloor$</span>.</p>
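Both bounds can be evaluated explicitly, since the antiderivative of $x^{-4/5}$ is $5x^{1/5}$ (an editorial check, not part of the original answer):

```python
import math

# direct sum, plus the integral bounds from the answer
S = 1 + sum(n ** (-0.8) for n in range(2, 244))

lower = 1 + 5 * (244 ** 0.2 - 2 ** 0.2)   # 1 + integral from 2 to 244 of x^(-4/5)
upper = 1 + 5 * (243 ** 0.2 - 1 ** 0.2)   # 1 + integral from 1 to 243 of x^(-4/5)

assert lower < S < upper
assert math.floor(S) == 10
print(lower, S, upper)   # roughly 10.27 < 10.57 < 11.0
```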
|
3,625,627 | <p>If <span class="math-container">$S = \sum_{n=1}^{243} \frac{1}{n^{4/5}} $</span>. </p>
<p>Find the value of <span class="math-container">$\lfloor S \rfloor$</span> where <span class="math-container">$\lfloor \cdot \rfloor$</span> represents the greatest integer function.</p>
<p>By approximation using definite integral, I know the answer lies in the set <span class="math-container">$[10,15]$</span> (approximately) but I don't know how to find the exact sum.</p>
| sammy gerbil | 203,175 | <p>You will not find an exact value. You are probably expected to find an approximation. </p>
<p>You could use the <strong>trapezium rule</strong> to approximate the integral </p>
<p><span class="math-container">$$\int_1^{243}f(x)dx \approx \frac12(f_1+f_2)+\frac12(f_2+f_3)+...\frac12(f_{241}+f_{242})+\frac12(f_{242}+f_{243})$$</span> <span class="math-container">$$= -\frac12(f_1+f_{243})+\sum_1^{243}f_n $$</span>
Then
<span class="math-container">$$I=\int_1^{243}x^{-4/5}dx=[5x^{1/5}]_1^{243}=5(3-1)=10$$</span>
Therefore
<span class="math-container">$$\sum_1^{243}\frac{1}{n^{4/5}} \approx I+\frac12(\frac{1}{1^{4/5}}+\frac{1}{243^{4/5}})=10+\frac12(1+\frac{1}{81}) =10\frac{41}{81}\approx 10.5062$$</span></p>
<p>You could make this more exact using the error term for the trapezium rule, which is <span class="math-container">$-\frac{(b-a)^3}{12n^2}M$</span> for <span class="math-container">$n$</span> subintervals; here <span class="math-container">$n=b-a=242$</span> (unit-width trapezia), so the error is <span class="math-container">$-\frac{b-a}{12}M$</span>, where <span class="math-container">$M$</span> is the average value of <span class="math-container">$f''(x)$</span> over the interval.</p>
|
3,625,627 | <p>If <span class="math-container">$S = \sum_{n=1}^{243} \frac{1}{n^{4/5}} $</span>. </p>
<p>Find the value of <span class="math-container">$\lfloor S \rfloor$</span> where <span class="math-container">$\lfloor \cdot \rfloor$</span> represents the greatest integer function.</p>
<p>By approximation using definite integral, I know the answer lies in the set <span class="math-container">$[10,15]$</span> (approximately) but I don't know how to find the exact sum.</p>
| Claude Leibovici | 82,404 | <p>If you know generalized harmonic numbers
<span class="math-container">$$S_p = \sum_{n=1}^{p} \frac{1}{n^{4/5}}=H_p^{\left(\frac{4}{5}\right)}$$</span> Using asymptotics
<span class="math-container">$$S_p=5 p^{1/5}+\zeta \left(\frac{4}{5}\right)+\frac{1}{2}
\frac{1}{p^{4/5}}+\cdots$$</span></p>
<p>For <span class="math-container">$p=243$</span>, the last term is just <span class="math-container">$\frac{1}{162}$</span> and the first one is <span class="math-container">$15$</span>.</p>
<p>Now, close to <span class="math-container">$x=1$</span>
<span class="math-container">$$\zeta(x) \sim \frac{1}{x-1}+\gamma \implies \zeta \left(\frac{4}{5}\right) \sim \gamma-5 $$</span></p>
<p>All of the above makes
<span class="math-container">$$S_{243} \sim 10+\gamma+\frac{1}{162} $$</span> Since <span class="math-container">$ \gamma \sim \frac 12$</span> then
<span class="math-container">$$\lfloor S_{243} \rfloor =10$$</span></p>
<p>Just for your curiosity, using the above approximation, we have <span class="math-container">$S_{243} \sim 10.5834$</span> while the "exact" result is <span class="math-container">$10.5686$</span>.</p>
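Comparing the asymptotic approximation with the directly computed sum, using $\gamma \approx 0.5772$ (an editorial check, not part of the original answer):

```python
import math

S_exact = sum(n ** (-0.8) for n in range(1, 244))

# approximation above: S ≈ 5*243^(1/5) + zeta(4/5) + 1/162, with zeta(4/5) ≈ gamma - 5
gamma = 0.5772156649
S_approx = 10 + gamma + 1 / 162

print(S_exact, S_approx)                 # about 10.5686 and 10.5834
assert abs(S_exact - S_approx) < 0.02    # the zeta estimate costs a little accuracy
assert math.floor(S_exact) == 10
```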
|
390,532 | <p>I'm trying to solve (for $x$) some problems such as $\arctan(0)=x$, $\arcsin(-\frac{\sqrt{3}}{{2}})=x$, etc.</p>
<p>What is the best way to go about this? So far, I have been trying to solve the problems intuitively (e.g. I ask myself <em>what value of sine will give me $-\frac{\sqrt{3}}{{2}}$?</em>), maybe drawing a triangle to help. Is there a better way to solve these problems?</p>
| DonAntonio | 31,254 | <p>You need to know the basic values of the trigonometric functions, thus for example:</p>
<p>$$\sin\left(-\frac{\pi}3\right)=\sin\left(-\frac{2\pi}3\right)=-\frac{\sqrt3}2\implies\arcsin\left(-\frac{\sqrt3}2\right)\in\left\{\;-\frac{\pi}3\;,\;-\frac{2\pi}3\;\right\}$$</p>
<p>so you must know where your values' range is.</p>
|
2,574,768 | <p>There are three planes, <strong>A</strong>, <strong>B</strong>, and <strong>C</strong>, all of which intersect at a single point, <strong>P</strong>. The angles between the planes are given: $$\angle\mathbf{AB}=\alpha$$ $$\angle\mathbf{BC}=\beta$$ $$\angle\mathbf{CA}=\gamma$$ $$0\lt\alpha,\beta,\gamma\le\frac{\pi}{2}$$</p>
<p>The intersection of any two of these planes form lines. The intersection of <strong>AB</strong> is $\mathbf{\overline{AB}}$, the intersection of <strong>BC</strong> is $\mathbf{\overline{BC}}$, and the intersection of <strong>CA</strong> is $\mathbf{\overline{CA}}$. It is given that none of these lines are parallel to each other and that they all intersect at the same single point, <strong>P</strong>.</p>
<p>Please express the lesser of the two angles formed by the intersection of these lines in terms of $\alpha$, $\beta$, and $\gamma$:
$$\angle\mathbf{\overline{AB}\;\overline{BC}}=?$$ $$\angle\mathbf{\overline{AB}\;\overline{CA}}=?$$ $$\angle\mathbf{\overline{BC}\;\overline{CA}}=?$$</p>
| Narasimham | 95,860 | <p>If you call the last mentioned angles as $a,b,c$ and the spherical triangle $ABC$ then from Spherical Trigonometry (sphere has center at $P$,) the Law of Sines is valid:</p>
<p>$$ \dfrac{\sin \alpha}{\sin a} = \dfrac{\sin \beta}{\sin b} =\dfrac{\sin \gamma}{\sin c}= \dfrac{6\, Volume\, PABC}{\sin a\sin b\sin c } $$</p>
|
64,613 | <p>EDIT: I meant to have the coefficients reversed, showing:
$$\frac{n}{n-1}(1-(1-x)^n)^n + (1-x)^{n-1} \leq 1$$
This version should be true.. but still trying to prove it...</p>
<p>ORIGINAL:
Is it possible to show:
$$(1-(1-x)^n)^n + \frac{n}{n-1}(1-x)^{n-1} \leq 1$$ for $0<x<1$ and $n\geq 2$ (and $n$ is an integer)?
This increases the difficulty over the other questions I just asked. </p>
<p>the second term seems to decrease with $n$, so I can solve for the minimum of $n$. But the first term doesn't completely increase with $n$ -- the lines cross when I plot the first term with $n=2$ and then $n=3$. So I'm not sure where to go from there.</p>
| Brian M. Scott | 12,042 | <p>It fails already for $n=2$. Make the substitution $y=1-x$ suggested by Thijs, and the left-hand side becomes $(1-y^2)^2+2y = 1+y^4 +2y(1-y) > 1$ for $0<y<1$.</p>
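<p>A quick numeric spot check (with $y=1-x$, applied to the original inequality at $n=2$):</p>

```python
# With y = 1-x, the original left-hand side at n = 2 is (1-y^2)^2 + 2y.
def lhs(y):
    return (1 - y ** 2) ** 2 + 2 * y

# The identity (1-y^2)^2 + 2y = 1 + y^4 + 2y(1-y) shows lhs(y) > 1 on 0 < y < 1.
for y in (0.1, 0.5, 0.9):
    assert abs(lhs(y) - (1 + y ** 4 + 2 * y * (1 - y))) < 1e-12
    assert lhs(y) > 1
```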
|
4,390,855 | <p>For reference: In triangle ABC, <span class="math-container">$S_1$</span> and <span class="math-container">$S_2$</span> are areas of the shaded regions. If <span class="math-container">$S_1 \cdot{S}_2=16 cm^4$</span>, calculate <span class="math-container">$MN$</span>.</p>
<p><a href="https://i.stack.imgur.com/KRiJ1.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KRiJ1.jpg" alt="enter image description here" /></a></p>
<p>My progress:</p>
<p><span class="math-container">$\frac{AM.DM}{2}.\frac{CN.FN}{2}=16 \implies AM.DM.CN.FN=64\\
\frac{S1}{S2} = \frac{AM.MD}{CN.FN}\\
\frac{S1}{\frac{MI.DM}{2}}=\frac{AM}{MI}\implies S1 = \frac{AM.DM}{2}\\
\frac{S2}{\frac{NI.FN}{2}}=\frac{CN}{NI}\implies S2 = \frac{CN.FN}{2}$</span></p>
<p>.....????</p>
| Chris Sanders | 309,566 | <p>To avoid ambiguity, I will call the p-adic integers <span class="math-container">$\mathbb{Z}_{pa}$</span>.</p>
<p>Here's a simple question. What is the ideal <span class="math-container">$I\subset\mathbb{Z}_{pa}$</span> of scalars <span class="math-container">$x$</span> such that <span class="math-container">$x\cdot\mathbb{Z}=0$</span>?</p>
<p>If <span class="math-container">$I=0$</span>, then the elements <span class="math-container">$x\cdot 1$</span> are all distinct, so <span class="math-container">$\mathbb{Z}$</span> would be an uncountable set.</p>
<p>If <span class="math-container">$0<I<\mathbb{Z}_{pa}$</span>, then <span class="math-container">$\mathbb{Z}$</span> would be a non-trivial module over <span class="math-container">$\frac{\mathbb{Z}}{p^n\mathbb{Z}}$</span>, which is impossible.</p>
<p>The only possibility therefore is <span class="math-container">$I=\mathbb{Z}_{pa}$</span>. But then <span class="math-container">$1\in I$</span>, so <span class="math-container">$\mathbb{Z}=1\cdot\mathbb{Z}=0$</span>, a contradiction.</p>
|
4,390,855 | <p>For reference: In triangle ABC, <span class="math-container">$S_1$</span> and <span class="math-container">$S_2$</span> are areas of the shaded regions. If <span class="math-container">$S_1 \cdot{S}_2=16 cm^4$</span>, calculate <span class="math-container">$MN$</span>.</p>
<p><a href="https://i.stack.imgur.com/KRiJ1.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KRiJ1.jpg" alt="enter image description here" /></a></p>
<p>My progress:</p>
<p><span class="math-container">$\frac{AM.DM}{2}.\frac{CN.FN}{2}=16 \implies AM.DM.CN.FN=64\\
\frac{S1}{S2} = \frac{AM.MD}{CN.FN}\\
\frac{S1}{\frac{MI.DM}{2}}=\frac{AM}{MI}\implies S1 = \frac{AM.DM}{2}\\
\frac{S2}{\frac{NI.FN}{2}}=\frac{CN}{NI}\implies S2 = \frac{CN.FN}{2}$</span></p>
<p>.....????</p>
| Captain Lama | 318,467 | <p>No, it's not possible. If <span class="math-container">$q\in \mathbb{Z}$</span> is not divisible by <span class="math-container">$p$</span>, then <span class="math-container">$1/q\in \mathbb{Z}_p$</span>, so for any <span class="math-container">$\mathbb{Z}_p$</span>-module <span class="math-container">$M$</span> and any <span class="math-container">$x\in M$</span>, we must have <span class="math-container">$x = q\cdot (\frac{1}{q}x)$</span>, so <span class="math-container">$x\in qM$</span>, and <span class="math-container">$M$</span> is <span class="math-container">$q$</span>-divisible as an abelian group.</p>
<p>But obviously <span class="math-container">$\mathbb{Z}$</span> is not <span class="math-container">$q$</span>-divisible for any <span class="math-container">$q\neq \pm 1$</span>.</p>
|
2,333,702 | <p>Firstly, I have opened the brackets and solved using both compositions of trig and inverse trig functions and using right triangle (the results were the same):$$\arcsin(\frac{40}{41})=\gamma$$$$\frac{40}{41}=\sin\gamma.$$ Coming from the Pythagoras theorem, the adjacent side to $\gamma^\circ$ is 9, so $\cos(\gamma)=\frac{9}{41}.$ $$-\arcsin(\frac{4}{5})=\theta$$ $$-\frac{4}{5}=\sin\theta.$$ So, $\cos\theta=-\frac{3}{5};$ $$\cos(\arcsin(\frac{40}{41})-\arcsin(\frac{4}{5}))=\cos(\arcsin(\frac{40}{41}))-\cos(\arcsin(\frac{4}{5}))=\frac{9}{41}-\frac{3}{5}=-\frac{78}{205}.$$But it didn't turn out to be the right answer. Where did I make a mistake?</p>
| Angina Seng | 436,618 | <p>You could apply Chebotarev to $\Bbb Q(\sqrt{m},\sqrt{-3},2^{1/3})$
which is Galois with Galois group $S_2\times S_3$ of order $12$.
I reckon the Frobeniuses of the $p$ you seek are of the form $(e,\sigma)$
where $e$ is the identity of $S_2$ and $\sigma$ is a $3$-cycle. There
are two of these, so the Dirichlet density of your $p$ is $1/6$.</p>
|
1,397,576 | <p>To me there is a hierarchy where vectors $\subset$ sequences $\subset$ functions $\subset$ operators</p>
<ul>
<li><p>All vectors are sequences, but not all sequences are vectors because
sequences are infinite dimensional</p></li>
<li><p>All sequences are functions, but not all functions are sequences
because functions can do more than just map $\mathbb{N} \to A$ where
$A$ is some set</p></li>
<li><p>All functions are operators, but not all operators are functions
because an operator can map functions to functions, but a function can only map numbers to numbers</p></li>
</ul>
<p>Can someone check if my ideas are reasonable? Does there exist such a hierarchy?</p>
| Daniel Hast | 41,415 | <ol>
<li>Vectors are not sequences. A vector is an element of a vector space; identifying vectors with tuples of numbers requires a choice of basis of the vector space. (For example, when we write elements of $\mathbb{R}^n$ as $n$-tuples of real numbers, we're implicitly using the standard basis $(1, 0, \dots, 0), \dots, (0, \dots, 0, 1)$.)</li>
<li>As you said, a sequence with values in a set $A$ is a function $\mathbb{N} \to A$. Not all functions are sequences, because functions can have domains other than $\mathbb{N}$.</li>
<li>"Operator" can have multiple meanings; it's often used as a synonym of "linear map between vector spaces", or "linear map from a vector space to itself". In any case, all operators are functions.</li>
<li>It's not true at all that "functions can only map numbers to numbers". One can talk about functions between any two sets. (Also, "number" isn't a precise mathematical term — it's an informal term for a long list of mathematical objects, such as integers, real numbers, complex numbers, $p$-adic numbers, ordinal numbers, cardinal numbers, quaternions, etc.)</li>
<li>A matrix is a finite rectangular array of numbers. (Since I just talked about how "number" is a vague word, I guess I should remark that you can replace "number" by "element of a <a href="https://en.wikipedia.org/wiki/Ring_%28mathematics%29" rel="nofollow">ring</a>" if you want a precise statement.) Matrices show up in many contexts, but one of the most fundamental is this: given a linear map between finite-dimensional vector spaces $f: V \to W$, if we choose a basis for $V$ and for $W$, we can represent $f$ in this basis, giving a bijective correspondence between linear maps $V \to W$ and matrices of the appropriate size. Under this correspondence, function composition corresponds to matrix multiplication. So, you can think of matrices as "finite-dimensional operators between vector spaces <em>with a choice of basis</em>". (But don't forget that they can also represent other things, such as bilinear forms or systems of linear equations, to name a couple.)</li>
</ol>
|
450,410 | <p>I'm trying to teach myself how to do $\epsilon$-$\delta$ proofs and would like to know if I solved this proof correctly. The answer given (Spivak, but in the solutions book) was very different.</p>
<hr>
<p><strong>Exercise:</strong> Prove $\lim_{x \to 1} \sqrt{x} = 1$ using $\epsilon$-$\delta$.</p>
<p><strong>My Proof:</strong></p>
<p>We have that $0 < |x-1| < \delta $.</p>
<p>Also, $|x - 1| = \bigl|(\sqrt{x}-1)(\sqrt{x}+1)\bigr| = |\sqrt{x}-1||\sqrt{x}+1| < \delta$.</p>
<p>$\therefore |\sqrt{x}-1|< \frac{\delta}{|\sqrt{x}+1|}$</p>
<p>Now we let $\delta = 1$. Then
\begin{array}{l}
-1<x-1<1 \\
\therefore 0 < x < 2 \\
\therefore 1 < \sqrt{x} + 1<\sqrt{2} + 1 \\
\therefore \frac{1}{\sqrt{x} + 1}<1.
\end{array}</p>
<p>We had that $$|\sqrt{x}-1|< \frac{\delta}{|\sqrt{x}+1|} \therefore |\sqrt{x}-1|<\delta$$</p>
<p>By letting $\delta=\min(1, \epsilon)$, we get that $|\sqrt{x}-1|<\epsilon$ if $0 < |x-1| < \delta $.</p>
<p>Thus, $\lim_{x \to 1} \sqrt{x} = 1$.</p>
<hr>
<p>Is my proof correct? Is there a better way to do it (still using $\epsilon-\delta$)?</p>
| math4fun | 109,143 | <p>Using your work, here is another flavor of this proof. Write $F(x)=\sqrt{x}$ and $L=1$.</p>
<p>Let $\epsilon >0$, and put
$\delta= \epsilon(\sqrt{x}+1)$.</p>
<p>Assume $0<|x-1|<\delta$.</p>
<p>Then
$$|F(x)-L|=|\sqrt{x}-1|
=\frac{|x-1|}{\sqrt{x}+1}.$$
By our assumption that $0<|x-1|<\delta$, we have
$$|F(x) - L| <\frac{\delta}{\sqrt{x}+1} = \frac{\epsilon(\sqrt{x}+1)}{\sqrt{x}+1} = \epsilon.$$</p>
<p>By letting $\delta=\epsilon(\sqrt{x}+1)$, we get that $|\sqrt{x}-1|<\epsilon$ if $0<|x-1|<\delta$.
Thus, $\lim_{x\to 1} F(x) = L$.</p>
<p>The scratch work is usually omitted as far as finding the co-efficient of delta itself. Then just find the co-eff inverse and include epsilon and it falls out at the end. There is a good link on math exchange that shows a template of how to structure delta-epsilon proofs. It is how I learned to write them up, and this is that method. Good luck.</p>
|
920,050 | <p>The answer is $\frac1{500}$ but I don't understand why that is so. </p>
<p>I am given the fact that the summation of $x^{n}$ from $n=0$ to infinity is $\frac1{1-x}$. So if that's the case then I have that $x=\frac15$ and plugging in the values I have $\frac1{1-(\frac15)}= \frac54$.</p>
| Kim Jong Un | 136,641 | <p>$$
\sum_{n=4}^\infty\frac{1}{5^n}=\frac{1}{5^4}\sum_{n=4}^\infty\frac{1}{5^{n-4}}=\frac{1}{5^4}\sum_{m=0}^\infty\frac{1}{5^m}=\frac{1}{5^4}\frac{1}{1-1/5}=\frac{1}{500}.
$$</p>
|
708,596 | <p>Suppose that $V$ and $W$ are vector spaces, and that $f:V \to W$ is a linear map. Suppose also that $u$ and $v$ are vectors in $V$ such that $f(u)=f(v)$. Show that there is a vector $w \in \ker f $ such that $v=u+w$.</p>
<p>I roughly understand what a kernel is and its definition, but I have no idea how to apply it to this question, particularly the $f(u)=f(v)$ and $v=u+w$ parts. Is it something to do with zero vectors? I don't understand how to show this.</p>
| user133458 | 133,458 | <p>So you have: $$f(u) = f(v)$$
$$f(v) - f(u) = 0$$
$$f(v-u) = 0$$
$$f(w) = 0, \quad \text{where } w = v-u.$$</p>
<p>So $w \in \ker f$ by the definition of the kernel, and $v = u + w$, as required.</p>
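<p>To see this concretely, here is a small numeric illustration (the map and the vectors are my own example, not from the problem): take $f:\mathbb{R}^2\to\mathbb{R}$, $f(x,y)=x+y$, with $u=(1,2)$ and $v=(2,1)$:</p>

```python
def f(p):
    # An illustrative linear map R^2 -> R: f(x, y) = x + y.
    x, y = p
    return x + y

u = (1, 2)
v = (2, 1)
assert f(u) == f(v)                       # f(u) = f(v) = 3

w = (v[0] - u[0], v[1] - u[1])            # w = v - u
assert f(w) == 0                          # so w lies in ker f
assert (u[0] + w[0], u[1] + w[1]) == v    # and v = u + w
```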
|
107,915 | <p>I randomly place $k$ rooks on an (arbitrarily sized) $N$ by $M$ chessboard. Until only one rook remains, for each of $P$ time intervals we move the pieces as follows:</p>
<p>(1) We choose one of the $k$ rooks on the board with uniform probability. </p>
<p>(2) We choose a direction for the rook, $(N, W, E, S)$, with uniform probability. </p>
<p>(3) We choose a number of squares in which to move the rook along the direction chosen in [2] with uniform probability over the interval consisting of the rook's current position to the edge of the board.</p>
<p>(4) If the rook being moved collides with another piece while being translated in [3], just as in regular chess it will annihilate that piece and remain at the piece's former position.</p>
<p>NOTE - An alternative way of stating [2], [3], and [4] would be to say that the chosen rook samples all possible sets of moves, with uniform probability, and is unable to bypass other rooks without annihilating them and stopping at their former positions.</p>
<p>NOTE 2 - Gerhard Paseman is correct in suggesting that the original formulation for [2] and [3] will bias the rook towards shorter path lengths. This is in part due to the choice of direction in [2] not being weighted by the resulting possible number of choices in [3], and also the over-counting of positions in [3] due to the lack of consideration that there may be a collision. There are also problems with [2] near the board's boundaries where a direction can be chosen in which no move can take place. Instead of [2] and [3], I'll suggest that a better method would be to number all possible position that the chosen rook from [1] can occupy (keeping the collision constraint from [4] in mind), and then use a PRNG to select the next position. </p>
<p>What does the distribution look like for the number of time intervals, $P$, necessary for only a single rook to remain on the board?</p>
| Omer | 9,422 | <p>Depending on the precise model (see NOTE 2), each rook has probability of order $1/n$ of capturing another in a given step (they need to be on the same row, and then the probability is $O(1)$). Note also that the positions of rooks are mixed very quickly (this random walk mixes in $O(1)$ steps).</p>
<p>This brings this process into the range of Kingman's coalescent. As $n\to\infty$, the model can be approximated as follows: when there are $k$ rooks left, each pair of rooks merges at rate $a/(kn)$, where $a$ is some constant that can be computed. (The $1/k$ factor is from the probability that one of them is moved; if the rooks' moves were timed independently it would not be there.) There are $\binom{k}{2}$ pairs.</p>
<p>The time until a single rook is left is roughly a sum of independent exponentials, which will be asymptotically concentrated near $2n\log(M)/a$, where $M$ is the initial number of rooks.</p>
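<p>The concentration claim can be sanity-checked at the level of expectations: with merge rate $a/(kn)$ per pair and $\binom{k}{2}$ pairs, the total merge rate with $k$ rooks is $a(k-1)/(2n)$, so the expected absorption time is $\sum_{k=2}^{M} \frac{2n}{a(k-1)} = \frac{2n}{a}H_{M-1} \approx \frac{2n}{a}\log M$. A quick check of that approximation (the values of $n$ and $a$ below are illustrative):</p>

```python
import math

n, a = 1000, 1.0   # illustrative constants
M = 10_000         # initial number of rooks

# Expected time = sum over k of 1 / (total rate with k rooks), rate = a*(k-1)/(2n).
expected = sum(2 * n / (a * (k - 1)) for k in range(2, M + 1))

# Compare with the asymptotic value 2*n*log(M)/a.
asymptotic = 2 * n * math.log(M) / a
assert expected > asymptotic
assert abs(expected - asymptotic) / asymptotic < 0.07
```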
|
2,318,669 | <p>So I have this differential equation</p>
<p>$-0.4 \cdot 9.81+\frac{1}{100}v^2=0.4 v'$</p>
<p>I was able to solve it which gives me </p>
<p>$\ln(\frac{v+20}{v-20})=t+c$</p>
<p>My problem is I can't isolate $v$ after that i get it in this form and also when I try to find the constant $c$ knowing that $v(0) = 0$ I get </p>
<p>$c=\ln(\frac{20}{-20})$</p>
<p>Would I be able to say that </p>
<p>$c=\ln(\frac{20}{-20}) = 0$</p>
<p>If anyone could point me in the right direction to be able to get a function like $v(t) = ...$ </p>
| Lutz Lehmann | 115,115 | <p>The full equation for free fall under air friction reads
$$
m\dot v=-c|v|v-mg
$$
which has only one stationary point $v_\infty=-\sqrt{\frac{mg}c}$. Then, with $b=\frac{c}{m}$, the equation can be reformulated as
$$
\dot v = -b(|v|v+v_\infty^2)
$$
For $v\le 0$ partial fraction decomposition and integration leads to an expression similar to the one you got that should properly read
$$
\ln\left|\frac{v(t)+v_\infty}{v(t)-v_\infty}\right|=-2bv_\infty\, t+C=2b|v_\infty|\,t+C
$$
and for $v(0)=0$ you get $C=0$. Exponentiation and algebraic manipulation then leads to
\begin{align}
\frac{v_\infty+v(t)}{v_\infty-v(t)}&=e^{2b|v_\infty|\,t}
\\~\\
v(t)(e^{2b|v_\infty|\,t}+1)&=v_\infty(e^{2b|v_\infty|\,t}-1)\\~\\
v(t)&=\frac{e^{2b|v_\infty|\,t}-1}{e^{2b|v_\infty|\,t}+1}v_\infty
=\tanh(b|v_\infty|\,t)\,v_\infty
\end{align}</p>
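<p>A quick numeric check of the closed form against direct integration of $\dot v=-b(|v|v+v_\infty^2)$, using the constants from the question ($m=0.4$, $g=9.81$, $c=1/100$, so $b=c/m$ and $|v_\infty|=\sqrt{mg/c}\approx 19.8$):</p>

```python
import math

m, g, c = 0.4, 9.81, 0.01
b = c / m
v_inf = -math.sqrt(m * g / c)   # terminal velocity (negative: falling)

# Forward-Euler integration of v' = -b*(|v|*v + v_inf**2), v(0) = 0, up to t = 5.
v, dt = 0.0, 1e-4
for _ in range(50_000):         # 50,000 steps of 1e-4 covers t in [0, 5]
    v += dt * (-b * (abs(v) * v + v_inf ** 2))

closed_form = math.tanh(b * abs(v_inf) * 5.0) * v_inf
assert abs(v - closed_form) < 0.05
```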
|
3,239,185 | <p>Let <span class="math-container">$f,g$</span> be two analytic functions on the domain <span class="math-container">$\Omega$</span> such that <span class="math-container">$|f(z)|=|g(z)|$</span> throughout <span class="math-container">$\Omega$</span>.</p>
<p>I believe <span class="math-container">$h(z)=f/g$</span> only has removable singularities (can't really prove it...), for the following reasons. If <span class="math-container">$g(z_0)=0$</span>, then <span class="math-container">$f(z_0)=0$</span>, and
<span class="math-container">$$\lim_{z\to z_0}h(z)=\lim_{z\to z_0}\frac{f(z)}{g(z)}=\lim_{z\to z_0}\frac{|f(z)|e^{i\arg f(z)}}{|f(z)|e^{i\arg g(z)}}\\
=\lim_{z\to z_0}\frac{e^{i\arg f(z)}}{e^{i\arg g(z)}}=e^{i(\arg f(z_0)-\arg g(z_0))}.$$</span>
So, we define <span class="math-container">$h(z_0)$</span> to be this value (EDIT: this value is undefined :( ). Also,
<span class="math-container">$$
\lim_{z\to z_0}h'(z)=\lim_{z\to z_0}\frac{(f'g-g'f)(z)}{g(z)^2}\\
=\lim_{z\to z_0}(\frac{f'}{g}-\frac{g'}{g}\cdot\frac{f}{g})\\
=\lim_{z\to z_0}\frac{f'-hg'}{g}=\ldots?
$$</span>
Now I cannot proceed to prove that <span class="math-container">$h'(z)$</span> exist at <span class="math-container">$z=z_0$</span>.</p>
<p><strong>How can I make <span class="math-container">$h$</span> analytic?</strong></p>
<p>PS: if <span class="math-container">$h$</span> is made analytic, I can prove by integration that <span class="math-container">$f(z)=e^{\alpha i}g(z)$</span> for some fixed <span class="math-container">$\alpha\in \mathbb R$</span>.</p>
<p>Any help with the problem?</p>
| Kavi Rama Murthy | 142,385 | <p>If <span class="math-container">$f$</span> has a zero of order <span class="math-container">$n$</span> at <span class="math-container">$z_0$</span> then <span class="math-container">$f(z)=(z-z_0)^{n}h(z)$</span> with <span class="math-container">$h$</span> analytic and non-zero in a neighborhood of <span class="math-container">$z_0$</span>. This implies that <span class="math-container">$g$</span> also has a zero of order <span class="math-container">$n$</span> at <span class="math-container">$z_0$</span>. Similarly, if <span class="math-container">$g$</span> has a zero of order <span class="math-container">$n$</span> at <span class="math-container">$z_0$</span> so does <span class="math-container">$f$</span>. Hence the zeros of <span class="math-container">$g$</span> are cancelled by those of <span class="math-container">$f$</span> in the ratio <span class="math-container">$\frac f g$</span>. This makes <span class="math-container">$\frac f g$</span> an analytic function with modulus <span class="math-container">$1$</span>, hence a constant (by the maximum modulus principle). The constant must have modulus <span class="math-container">$1$</span> so it is of the form <span class="math-container">$e^{i\alpha}$</span> where <span class="math-container">$\alpha$</span> is real. </p>
|
375,549 | <p>I need to solve this recurrence equation with the help of Generating Functions in Combinatorics.</p>
<p>Given:
$$f(0) = 0 , f(1) = 1, f(n) = 10f(n-1) - 25f(n-2) \forall n \geq 2$$</p>
<p>So I said the following:</p>
<p>$$f(n) = \sum_{n=2}^{\infty} {10(n-1)x^n} - \sum_{n=2}^{\infty} {25(n-2)x^n}$$</p>
<p>Is that correct?</p>
| Community | -1 | <p>The generating function is
$$g(x) = \sum_{k=0}^{\infty} f(k) x^k = f(0) + f(1) x + \sum_{k=2}^{\infty} f(k) x^k = x + \sum_{k=2}^{\infty}(10f(k-1) - 25f(k-2))x^k$$
Hence,
$$g(x) = x + 10 x \sum_{k=1}^{\infty} f(k) x^k -25x^2 \sum_{k=0}^{\infty} f(k) x^k = x+10x g(x) - 25x^2 g(x)$$
This gives us
$$g(x) = \dfrac{x}{(5x-1)^2}$$
Now expand the above as a Taylor series about origin and compare coefficients to get $f(k)$.</p>
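<p>Expanding $\frac{x}{(5x-1)^2}=\frac{x}{(1-5x)^2}=x\sum_{k\ge 0}(k+1)5^k x^k$ gives the coefficients $f(k)=k\,5^{k-1}$. A quick check of this closed form against the recurrence:</p>

```python
# Recurrence: f(0) = 0, f(1) = 1, f(n) = 10 f(n-1) - 25 f(n-2).
f = [0, 1]
for n in range(2, 12):
    f.append(10 * f[-1] - 25 * f[-2])

# Closed form read off from the generating function x/(1-5x)^2.
assert f[0] == 0
for k in range(1, 12):
    assert f[k] == k * 5 ** (k - 1)
```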
|
538,811 | <p>Suppose A(.) is a subroutine that takes as input a number in binary, and takes linear time (that is, O(n), where n is the length (in bits) of the number).
Consider the following piece of code, which starts with an n-bit number x.</p>
<pre><code>while x > 1:
    call A(x)
    x = x - 1
</code></pre>
<p>Assume that the subtraction takes O(n) time on an n-bit number.</p>
<p>(a) How many times does the inner loop iterate (as a function of n)? Leave your answer in big-O form.</p>
<p>(b) What is the overall running time (as a function of n), in big-O form?</p>
<p>(a) O($n^2$)</p>
<p>(b) O($n^3$)</p>
<p>Is this correct? Can someone confirm? The way I think about it is that the loop has to compute two steps each time it cycles through, and it will cycle through x times, each time subtracting 1 from the n-bit number until x reaches 1. For part (b), since A(.) takes time O(n), we multiply that by the time it takes to execute the loop, and we then have the overall running time. If I reasoned or did the problem wrong, can someone please correct me and tell me what I did wrong?</p>
| Nate Neuhaus | 185,039 | <p>From the looks of it, this code will run infinitely due to the fact that the recursive element is called before x is decremented, thus the function will never terminate. </p>
<p>If you called x=x-1 before you recursively called A, then that would allow the function to effectively decrement at each level of recursion and thus terminate after x amount of times. </p>
<p>Also, it appears that you do not understand what big-O time complexity means: whether n = 1 or n = 10000000000000000000000 has no effect on whether a function is O(n) or O(n^2), as these are related to the growth behavior of the function (such as a loop iterating x times within another loop that iterates x times), not to any particular input value. Addition, subtraction, multiplication, and division are all linear and thus all have a time complexity of O(n), as will calling functions, declaring variables, and performing tests (such as x > 0). </p>
<p>(a & b) In your code the loop iterates until you force-quit the program... If you set the code up properly, however, then the while test = n, the decrement = n-1 (because it will run one time less than the condition and is not related to the decrement whatsoever), and the recursive call will also = n-1. So... the formula you are looking for for this program is n + (n-1) + (n-1) for all values where n is greater than 1; otherwise the total cost would be simply one. For example...</p>
<p>if n < 2, then the cost would be 1 (for the test only).
if n >= 2, then the cost would be n + (n-1) + (n-1).
if n were 2, then the cost would be 2 + (2-1) + (2-1).</p>
<p>The time complexity, there being no exponential aspects within the function, is, and always will be, big-O of n (O(n)). </p>
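<p>For what it's worth, the loop in the question can also be instrumented directly: it executes its body exactly x-1 times (for x >= 1), and an n-bit x can be as large as 2^n - 1 (a sketch; A is only a stub here):</p>

```python
def run(x):
    # Count how many times the loop body executes; the call to A(x) is stubbed out.
    count = 0
    while x > 1:
        count += 1    # stands in for: call A(x)
        x = x - 1
    return count

assert run(1) == 0
assert run(5) == 4                      # x - 1 iterations
n = 10
assert run(2 ** n - 1) == 2 ** n - 2    # largest n-bit x
```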
|
2,163,948 | <p><strong>Question:</strong></p>
<blockquote>
<p>Does there exist a Riemannian manifold, with a point $p \in M$, and <strong>infinitely many</strong> points $q \in M$ such that there is <strong>more than one</strong> minimizing geodesic from $p$ to $q$?</p>
</blockquote>
<p><strong>Edit:</strong></p>
<p>As demonstrated in Jack Lee's answer, one can construct many examples in the following way:</p>
<p>Take $X$ to be a manifold which has a pair of points $p,q$, with more than one minimizing geodesic connecting them. Take $Y$ to be any geodesically convex (Riemannian) manifold. Then $X \times Y$ satisfies the requirement:</p>
<p>Indeed, let $\alpha,\beta$ be two different geodesics in $X$ from $p$ to $q$.</p>
<p>Fix $y_0 \in Y$, and let $y \in Y$ be arbitrary. Let $\gamma_y$ be a minimizing geodesic in $Y$ from $y_0$ to $y$. Then $\alpha \times \gamma_y,\beta \times \gamma_y$ are minimizing from $(p,y_0)$ to $(q,y)$.</p>
<p>Hence, if $Y$ is positive-dimensional (hence infinite), we are done.</p>
<p><em>"Open" question: Are there examples which are not products? (This is probably hard, I am not even sure what obstructions exist for a manifold to be a topological product of manifolds)</em></p>
<hr>
<p>Note that for <strong>any</strong> $p$, the set $$\{q \in M \,| \, \text{there is more than one minimizing geodesic from $p$ to $q$} \}$$</p>
<p>is of measure zero. </p>
<p>Indeed, let $M$ be a connected Riemannian manifold, and let $p \in M$.</p>
<p>The distance function from $p$, $d_p$ is $1$-Lipschitz, hence (by Rademacher's theorem) differentiable almost everywhere.</p>
<p>It is easy to see that if there are (at least) two different length minimizing geodesics from $p$ to $q$, then $d_p$ is not differentiable at $q$. (We have two "natural candidates" for the gradients).</p>
| Narasimham | 95,860 | <p>Between two points $(p,q)$ on any surface of revolution there are indefinitely many geodesic trajectories possible.</p>
<p>Just as you can throw a stone between two points $(p,q)$ situated at different heights choosing a different parabola at different angle of slope or angle of attack.</p>
<p>In a boundary value problem after specifying $(p,q)$ we use a shoot-through trial and error numerical procedure that satisfies the differential equation action.</p>
<p>That said however, there is a minimum that can be chosen from the set of all possible geodesics that minimize length, time or any chosen object function from among them.</p>
<p>The first minimization respects only the phenomenon, the second chooses minimum of all possibilities from a set satisfying them.</p>
|
633,799 | <p>I am a little confused about the basic definition of inclusion.</p>
<p>I understand that, for example, $\{4\}\subset\{4\}$.</p>
<p>I also understand that $4\in\{4\}$, and that it is false to say that $\{4\}\in\{4\}$.</p>
<p>However, is it possible to say that $4\subset\{4\}$?</p>
| DonAntonio | 31,254 | <p>$\;4\;$ is an element of the set $\;\{4\}\;$, so the symbol $\;\subset\;$ does not apply to it. You need the curly-brace notation {} (or some other accepted notation) to make clear that it is a set.</p>
<p>For example, if $\;X = \{ 1,\{1\}\}\;$ , then we both have $\;1\in X\,,\,\{1\}\in X\;$ , and we also have $\;\{1\}\subset X\,,\,\{\{1\}\}\subset X\;$ </p>
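<p>The same distinction can be mirrored in, e.g., Python's set types, where "in" plays the role of $\in$ and "<=" plays the role of $\subset$ (using frozenset so a set can itself be an element, as in the example $X=\{1,\{1\}\}$ above):</p>

```python
# X = {1, {1}} from the example above, with frozenset for the inner set.
one_set = frozenset({1})
X = {1, one_set}

assert 1 in X          # 1 is an element of X
assert one_set in X    # {1} is an element of X
assert one_set <= X    # {1} is also a subset of X, since its element 1 is in X
assert {one_set} <= X  # {{1}} is a subset of X, since {1} is in X

# And for the {4} example: membership and subset are different relations.
assert 4 in {4}
assert {4} <= {4}
assert frozenset({4}) not in {4}   # the set {4} is not an element of {4}
```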
|
633,799 | <p>I am a little confused about the basic definition of inclusion.</p>
<p>I understand that, for example, $\{4\}\subset\{4\}$.</p>
<p>I also understand that $4\in\{4\}$, and that it is false to say that $\{4\}\in\{4\}$.</p>
<p>However, is it possible to say that $4\subset\{4\}$?</p>
| apnorton | 23,353 | <p>There are two symbols, here, and I think you may be getting them confused. I would suggest to <em>not</em> use the word "inclusion" (at least, not all the time) because that a different meaning in English than in math.</p>
<p>The $\in$ symbol is used to designate if something is <em>inside of</em> a set. That is, $4\in\{4\}$, and $\{4\}\in\{\{4\}\}$, but $\{4\}\not\in\{4\}$.</p>
<p>The $\subset$ symbol is used to show whether the elements of one set are inside another set. That is, if $A \subset B$, then $a\in B$ for every $a \in A$. Another way of looking at it: $\subset$ always has a set on both sides.</p>
|
1,718,380 | <p>Simply: How do I solve this equation for a given $n \in \mathbb Z$?</p>
<p>$x^x = n$</p>
<p>I mean, of course $2^2=4$ and $3^3=27$ and so on. But I don't understand how to calculate the reverse of this, to get from a given $n$ to $x$. </p>
| Bumblebee | 156,886 | <p>See this wikipedia article: <a href="https://en.wikipedia.org/wiki/Lambert_W_function" rel="nofollow">Lambert W function</a> </p>
<p>If $x^x=n,$ then $$x=\dfrac{\ln n}{W(\ln n)},$$ Where $W$ is the Lambert W function.</p>
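<p>If a numeric value is wanted without a Lambert-W implementation, note that for $x\ge 1$, $n\ge 1$ the equation $x^x=n$ is equivalent to $x\ln x=\ln n$, and $x\ln x$ is increasing there, so simple bisection works (a sketch):</p>

```python
import math

def solve_xx(n, lo=1.0, hi=100.0, tol=1e-12):
    # Solve x**x = n for x >= 1 by bisection on x*ln(x) = ln(n).
    target = math.log(n)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mid * math.log(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

assert abs(solve_xx(4) - 2) < 1e-9     # 2^2 = 4
assert abs(solve_xx(27) - 3) < 1e-9    # 3^3 = 27
```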
|
1,698,376 | <p>I need some guidance with the following proof:</p>
<p>Let V be a finite dimensional vector space, and V* its dual.<br>
Let $C = (f_1, \ldots , f_n)\subset V^*$ be a basis for $V^*$.<br>
Let $w\in V^*$.<br>
Prove that there exists $B\subset V$ such that $C$ is dual for $B$.</p>
<p>Here's what I have so far:<br>
Since C is a basis, then its elements are linearly independent and are not zeroes.
But I can't figure out the use of "w", as well as how to continue proving this.</p>
<p>Thanks in advance for your help!</p>
| MooS | 211,913 | <p>The dual basis $f_1^*, \dotsc, f_n^*$ in $V^{**}$ of $C=\{f_1, \dotsc, f_n \}$ gives rise to a basis $B$ of $V$ via the canonical isomorphism $j:V \to V^{**}$. The dual of $B$ is $C$.</p>
<p><em>Proof:</em> We compute:</p>
<p>$$\delta_{ab}=f_a^*(f_b)=j(j^{-1}(f_a^*))(f_b)=f_b(j^{-1}(f_a^*)),$$</p>
<p>hence the elements $f_1, \dotsc, f_n$ form a dual basis of the basis $j^{-1}(f_1^*), \dotsc, j^{-1}(f_n^*)$.</p>
|
1,698,376 | <p>I need some guidance with the following proof:</p>
<p>Let V be a finite dimensional vector space, and V* its dual.<br>
Let $C = (f_1, \ldots , f_n)\subset V^*$ be a basis for $V^*$.<br>
Let $w\in V^*$.<br>
Prove that there exists $B\subset V$ such that $C$ is dual for $B$.</p>
<p>Here's what I have so far:<br>
Since C is a basis, then its elements are linearly independent and are not zeroes.
But I can't figure out the use of "w", as well as how to continue proving this.</p>
<p>Thanks in advance for your help!</p>
| martini | 15,379 | <p>Define $i_V \colon V \to V^{**}$ as follows: For $w \in V^*$, $v \in V$ let
$$ i_V(v)(w) = w(v) $$
that is $i_V(v)$ is "<em>evaluation at $v$</em>". Then $i_V$ is linear and one-to-one: If $v \ne 0$, extend $v$ to a basis $B$ of $V$ and define $w \in V^*$ by $w(v) = 1$, $w(b) = 0$, $b \in B \setminus \{v\}$. Then
$$ i_V(v)(w) = w(v) = 1 $$
hence $i_V(v) \ne 0$. So $i_V$ is one-to-one.</p>
<p>As $V$ is finite-dimensional, $\dim V^{**} = \dim V$, so $i_V$ is an isomorphism. </p>
<p>Now let $C^*$ be the dual basis of $C$ in $V^{**}$, that is with $C^* =(f_1^*, \ldots, f_n^*)$ we have
$$ f_i^*(f_j) = \delta_{ij}, \qquad 1 \le i,j \le n $$
Define $B := i_V^{-1}[C^*]$. Then, as $i_V$ is an isomorphism, $B$ is a basis of $V$ and with $B = (v_1, \ldots, v_n)$ we have
$$ f_j(v_i) = i_V(v_i)(f_j) = f_i^*(f_j) = \delta_{ij}$$</p>
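<p>Concretely (a small illustrative computation, not part of the proof): in $V=\mathbb{R}^2$, take $C=(f_1,f_2)$ with $f_1(x,y)=x+y$ and $f_2(x,y)=x-y$; the basis $B=(v_1,v_2)$ with $f_i(v_j)=\delta_{ij}$ is $v_1=(\frac12,\frac12)$, $v_2=(\frac12,-\frac12)$:</p>

```python
# Functionals forming a basis C of the dual of R^2.
f1 = lambda v: v[0] + v[1]
f2 = lambda v: v[0] - v[1]

# The basis B of R^2 whose dual basis is C.
v1 = (0.5, 0.5)
v2 = (0.5, -0.5)

for i, f in enumerate((f1, f2)):
    for j, v in enumerate((v1, v2)):
        assert abs(f(v) - (1.0 if i == j else 0.0)) < 1e-12
```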
|
2,265,203 | <p>I was reading the paper</p>
<p><a href="https://aimsciences.org/journals/pdfs.jsp?paperID=1058&mode=full" rel="nofollow noreferrer">Dynamical models of tuberculosis and their applications</a> </p>
<p>by Castillo-Chavez, Song B. and it says </p>
<blockquote>
<p>" it is clear that the matrix
$$
D_xf=
\begin{pmatrix}
-\mu & 0 &-\phi\\
0&-(k+\mu)&\phi\\
0& k&-(\mu+r+d)
\end{pmatrix}
$$
has a <strong><em>simple zero eigenvalue</em></strong>." </p>
</blockquote>
<p>But I have found the eigenvalues of $D_xf$ to be
$$\lambda_1=-\mu$$
$$\lambda_2=-\frac{1}{2}k-\mu-\frac{1}{2}r-\frac{1}{2}d-\frac{1}{2}\sqrt{d^2-2dk+2dr+k^2+4k\phi-2kr+r^2}$$
$$\lambda_3=-\frac{1}{2}k-\mu-\frac{1}{2}r-\frac{1}{2}d+\frac{1}{2}\sqrt{d^2-2dk+2dr+k^2+4k\phi-2kr+r^2}$$</p>
<p>So, what does <strong><em>simple zero eigenvalue</em></strong> mean? </p>
| Dmitry | 310,971 | <p>This matrix has a simple zero eigenvalue because of the parameter $\phi$, which is chosen in a particular way, $\phi=\frac{(k+\mu)(\mu+r+d)}{k}$. For any other choice of $\phi$ there won't be a zero eigenvalue.</p>
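<p>This is easy to verify numerically: the first column of $D_xf$ has a single nonzero entry, so one eigenvalue is $-\mu$, and the other two are the eigenvalues of the lower-right $2\times 2$ block; with $\phi=\frac{(k+\mu)(\mu+r+d)}{k}$ that block has determinant $0$ and negative trace, so $0$ is an eigenvalue and it is simple (a sketch with illustrative parameter values):</p>

```python
# Illustrative positive parameter values.
mu, k, r, d = 0.02, 0.5, 0.3, 0.1
phi = (k + mu) * (mu + r + d) / k    # the critical choice of phi

# Lower-right 2x2 block of D_x f.
a11, a12 = -(k + mu), phi
a21, a22 = k, -(mu + r + d)

det = a11 * a22 - a12 * a21
trace = a11 + a22

assert abs(det) < 1e-12   # eigenvalue product is 0, so 0 is an eigenvalue
assert trace < 0          # the other eigenvalue of the block equals the trace
assert mu > 0             # the remaining eigenvalue -mu is nonzero, so 0 is simple
```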
|
372,211 | <p>I'm trying to write an <a href="http://developer.android.com/reference/android/view/animation/Interpolator.html" rel="nofollow noreferrer">interpolator</a> for a translate animation, and I'm stuck. The animation passes a single value to the function. This value maps a value representing the elapsed fraction of an animation to a value that represents the interpolated fraction. The value starts at 0 and goes to 1 when the animation completes. So for instance, if I wanted a linear translation (constant velocity) my function would look like:</p>
<pre><code>function(input) {
return input
}
</code></pre>
<p>What I need is for the velocity to remain constant for half the animation, then decelerate rapidly to zero. What this essentially means is that the output values I return must be the same as the input values for half the animation (until 0.5); after that, the values must increase from 0.5 to 1.0 at a slower rate of change between calls than the input values change between calls.</p>
<p><hr>
EDIT (straight from the Android docs):</p>
<p>The following table represents the approximate values that are calculated by example interpolators for an animation that lasts 1000ms:</p>
<p><img src="https://i.stack.imgur.com/UQrow.png" alt="enter image description here"></p>
<p>As the table shows, the LinearInterpolator changes the values at the same speed, .2 for every 200ms that passes. The AccelerateDecelerateInterpolator changes the values faster than LinearInterpolator between 200ms and 600ms and slower between 600ms and 1000ms.</p>
<p><hr>
EDIT 2:</p>
<p>I thought I should provide an example of something that works, just not the way I want it to. The function for a decelerate interpolation provided with the Android framework is exactly:</p>
<pre><code>function(input) {
return (1 - (1 - input) * (1 - input))
}
</code></pre>
| André Nicolas | 6,312 | <p>This answers a problem motivated by your description, but it may not be the problem you want to solve. We assume constant velocity $k$ for $0\le t\le 0.5$. Then we have constant deceleration, so that at time $t=1$ velocity reaches $0$.</p>
<p>Under these conditions, you may want the <em>net displacement</em> at time $t$.</p>
<p>Our velocity at $t=0.5$ is $k$. It has to reach $0$ at time $1$, so the velocity at time $t$, for $0.5\le t\le 1$, is $k-\frac{k}{1-0.5}(t-0.5)=2k-2kt$.</p>
<p>Our net displacement $s(t)$ at time $t$ is $kt$ for $0\le t\le 0.5$.</p>
<p>For $0.5\lt t\le 1$, the net displacement is $(0.5)k +\int_{0.5}^t (2k-2ku)\,du$. This simplifies to $2kt-kt^2-0.25k$. </p>
<p>Thus a formula for the net displacement $s(t)$ at time $t$ is $s(t)=kt$ for $0\le t\le 0.5$, and $s(t)=2kt -kt^2-0.25k$ for $0.5\lt t\le 1$. </p>
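<p>For completeness, here is a small Python sketch of the resulting displacement curve, normalized so that it runs from $0$ to $1$ like an interpolator output (since $s(1)=0.75k$, take $k=4/3$; the function name is mine, not part of any API):</p>

```python
def interpolate(t, k=4/3):
    """Displacement at time t: constant velocity k on [0, 0.5],
    then linear deceleration to zero velocity at t = 1.
    With k = 4/3 the output runs from 0 at t = 0 to 1 at t = 1."""
    if t <= 0.5:
        return k * t
    return 2 * k * t - k * t * t - 0.25 * k
```

<p>The two pieces agree at $t=0.5$ (both give $k/2$), so the curve is continuous.</p>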
|
911,075 | <p>This is one of my first proofs about fields.
Please give feedback and criticise in every way (including style and details).</p>
<p>Let $(F, +, \cdot)$ be a field.
Non-trivially, $\textit{associativity}$ implies that any parentheses are meaningless.
Therefore, we will not use parentheses.
Therefore, we will not use $\textit{associativity}$ explicitly.</p>
<p>By $\textit{identity element}$, $F \ne \emptyset$.
Now, let $a \in F$.
It remains to prove that $0a = 0$.
\begin{equation*}
\begin{split}
0a &= 0a + 0 && \quad \text{by }\textit{identity element }(+ ) \\
&= 0a + a + -a && \quad \text{by }\textit{inverse element }(+ ) \\
&= 0a + 1a + -a && \quad \text{by }\textit{identity element }(\cdot) \\
&= (0 + 1)a + -a && \quad \text{by }\textit{distributivity } \\
&= (1 + 0)a + -a && \quad \text{by }\textit{commutativity }(+ ) \\
&= 1 a + -a && \quad \text{by }\textit{identity element }(+ ) \\
&= a + -a && \quad \text{by }\textit{identity element }(\cdot) \\
&= 0 && \quad \text{by }\textit{inverse element }(+ )
\end{split}
\end{equation*}
QED</p>
<p>PS: Is "Let $(F, +, \cdot)$ be a field." ok?
Besides, I would not want to call $F$ a field, because $F$ is just a set.
Also, what do you think about using adverbs like "Now"? How would you have said the associativity-thing?</p>
| Community | -1 | <p>A few pointers:</p>
<ul>
<li><p>You don't have to use "Now". You could just say "Let $a\in F$."</p></li>
<li><p>Don't say "meaningless". Rather, phrase it like so:</p>
<blockquote>
<p>Non-trivially, associativity implies that any parentheses are redundant. Hence, parentheses will be suppressed and we will thus not explicitly employ associativity.</p>
</blockquote></li>
<li><p>It's not wrong to say that $(F,+,\cdot)$ is a field, unless the question goes something like this: "Let $(F,+,\cdot)$ be a field, and let $0\in F$. Then show that every multiple of zero equals zero, i.e., for any $a\in F$, $0a=0$". Considering the way you phrased your proof, I don't think that this is how the question was phrased (correct me if I'm wrong).</p></li>
</ul>
<p>Now, your proof is not wrong, but it's not the shortest either. Your proof could go something like this: $0a=(0+0)a=0a+0a$; hence $0a=0$. Alternatively, you could go like so: $0a=(0+0)a=0a+0a$. But $0a=0a+0$. Hence, $0a+0a=0a+0$, implying that $0a=0$. If I was a teacher, I would personally prefer the latter. However, do not take this personally; your proof is also very nice, but a tad bit longer than what I think is the conventional proof of this fact. Mathematicians are lazy; they prefer the shortest proofs (or at least that's what I think. I'm not a professional!)!</p>
|
1,425,519 | <p>I'm trying to solve <a href="http://poj.org/problem?id=2140" rel="nofollow">this problem</a> on POJ and I thought that I had it. Since I can't figure out what's wrong with my code, I'd like to test it against a huge list of correct answers. This will make my code much easier to debug.</p>
<p>If you don't want to go to the page linked above, or figure out what exactly the problem is asking, here's a concise version:</p>
<blockquote>
<p>Given a positive integer $N$, determine $x$, the number of sets of consecutive <strong>positive</strong> integers that sum to $N$.</p>
</blockquote>
<p>For example, suppose $N = 15$. Then $x = 4$, since, by brute force, the only possible solutions are </p>
<p>$\{15\}$, $\{7, 8\}$, $\{4, 5, 6\}$, and $\{1, 2, 3, 4, 5\}$</p>
<hr>
<p>My algorithm runs in $O(n)$ time, although it uses the $\sqrt{x}$ function, which is quite expensive. Here's some pseudocode:</p>
<pre><code>input n
if n = 1 or n = 2:
print 1
exit
count = 1
for i = n / 2 + 1; i * (i + 1) / 2 >= n; i -= 1:
j = sqrt(i * (i + 1) - 2 * n)
if i * (i + 1) - j * (j + 1) = 2 * n or
i * (i + 1) - (j + 1) * (j + 2) = 2 * n:
count += 1
print count
exit
</code></pre>
<p>Here's some English explaining the loop: </p>
<p>Start with the greatest number, $i$, of a possible set. Disregarding the trivial set $\{N\}$, $i$ clearly cannot be greater than $N / 2$. (Suppose $i > N / 2$ and $i < N$. Then $i + (i + 1) > N$ and $i + (i - 1)$ may or may not be equal to $N$.) Since $i$ cannot be greater than $N/2$, start with $i = N/2$ as the upper bound of $i$. To test each possible set, loop $i$ from this upper bound down until $i (i + 1) / 2 < N$. Clearly, if the sum all positive integers from $1$ to $i$ is less than $N$, then no set of consecutive integers, whose greatest number is $i$, could sum to $N$. This establishes the lower bound of $i$.</p>
<p>To test if a set could end in $i$, I use the following mathematics:</p>
<p>Let </p>
<p>$$S(x) = \sum_{j=1}^{x} j$$</p>
<p>Starting with a set composed solely of $i$, add a set which most closely sums (including $i$) to $N$. Mathematically, </p>
<p>$$N = i + S(i - 1) - S(x)$$</p>
<p>$S(i - 1) - S(x)$ will generate some set of consecutive integers whose greatest number is $i - 1$ and whose least number is between $1$ and $i - 1$, inclusively. To rewrite,</p>
<p>$$N = i + \frac{(i-1)i}{2} - \frac{x(x + 1)}{2}$$
$$2N = 2i + i(i - 1) - x(x + 1)$$
$$2N = i(i + 1) - x(x + 1)$$</p>
<p>Therefore, $i(i + 1) - 2N = x(x+1)$ for some $x \in \mathbb{R}$. If $x \in \mathbb{N}$, then a solution has been found, generating the set of consecutive integers from $x + 1$ to $i$, inclusive. To quickly determine whether $x \in \mathbb{N}$, I square-root $i (i + 1) - 2N$ and round down to some integer $t$. I plug $t$ back into the equation for $x$, and check to see if it works. If $i(i + 1) - 2N \not = t(t+1)$ then I increment $t$ by one and check again. If neither of the cases works, then a valid set could not possibly have a greatest value of $i$, so I decrement $i$, continuing the loop.</p>
<hr>
<p>My algorithm has worked for hundreds of test cases, but when I submit, POJ gives me <code>WRONG ANSWER</code>. <strong>This is why I'd like to have a list of correct answers to compare against my program.</strong></p>
| Caleb Stanford | 68,107 | <p><strong>Why your algorithm doesn't work:</strong>
You need to allow the set of consecutive integers to be negative.
For instance, you say in your code that $1$ and $2$ have only one solution.
But each of them has two solutions:
$$
1 = 1 \;;\; 1 = 0 + 1 \\
2 = 2 \;;\; 2 = -1 + 0 + 1 + 2
$$
In fact, the answer will always end up being an even number, as seen in the formula at the end of this post.</p>
<p><strong>An explicit formula:</strong>
If $j$ consecutive integers sum to $n$, and the first is $i$, then
\begin{align*}
&(i) + (i+1) + (i+2) + \cdots + (i + j-1) = n \\
&\implies \frac{(2i+j-1)j}{2} = n \\
&\implies 2n = (2i+j-1)j. \tag{1}
\end{align*}
Conversely, if $jk = 2n$ for some positive integer $j$ and some integer $k$, with $k, j$ different modulo $2$, then setting $2i+j-1 = k$ we get a solution from (1).</p>
<p>Therefore, your answer is the number of ways to factor $2n = jk$ where $j > 0$ and $j,k$ differ mod $2$.
$j > 0$ implies $k > 0$, so the factors must both be positive.</p>
<p>Write $2n = 2^d n'$ with $n'$ odd, $d \ge 1$.
Then your answer will be
$$
\boxed{2 \sigma_0(n'),}
$$
where $\sigma_0$ counts the number of positive integer divisors of $n'$.
You multiply by $2$ because we can either choose to put the $2^d$ into $j$, or into $k$, but we can't split it between them.</p>
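<p>This formula is easy to cross-check by brute force (a throwaway Python sketch of mine; it enumerates every run of consecutive integers, negatives allowed, that sums to $n$):</p>

```python
def count_consecutive(n):
    """Count runs of consecutive integers (negatives allowed) summing to n > 0."""
    count = 0
    for start in range(-n, n + 1):
        total, k = 0, start
        while total < n:
            total += k
            k += 1
        if total == n:
            count += 1
    return count

def formula(n):
    """2 * sigma_0(n'), where n' is the odd part of n."""
    odd = n
    while odd % 2 == 0:
        odd //= 2
    return 2 * sum(1 for d in range(1, odd + 1) if odd % d == 0)
```

<p>For example, $n=15$ has odd part $15$ with four divisors, and indeed eight runs of consecutive integers sum to $15$.</p>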
|
1,425,519 | <p>I'm trying to solve <a href="http://poj.org/problem?id=2140" rel="nofollow">this problem</a> on POJ and I thought that I had it. Since I can't figure out what's wrong with my code, I'd like to test it against a huge list of correct answers. This will make my code much easier to debug.</p>
<p>If you don't want to go to the page linked above, or figure out what exactly the problem is asking, here's a concise version:</p>
<blockquote>
<p>Given a positive integer $N$, determine $x$, the number of sets of consecutive <strong>positive</strong> integers that sum to $N$.</p>
</blockquote>
<p>For example, suppose $N = 15$. Then $x = 4$, since, by brute force, the only possible solutions are </p>
<p>$\{15\}$, $\{7, 8\}$, $\{4, 5, 6\}$, and $\{1, 2, 3, 4, 5\}$</p>
<hr>
<p>My algorithm runs in $O(n)$ time, although it uses the $\sqrt{x}$ function, which is quite expensive. Here's some pseudocode:</p>
<pre><code>input n
if n = 1 or n = 2:
print 1
exit
count = 1
for i = n / 2 + 1; i * (i + 1) / 2 >= n; i -= 1:
j = sqrt(i * (i + 1) - 2 * n)
if i * (i + 1) - j * (j + 1) = 2 * n or
i * (i + 1) - (j + 1) * (j + 2) = 2 * n:
count += 1
print count
exit
</code></pre>
<p>Here's some English explaining the loop: </p>
<p>Start with the greatest number, $i$, of a possible set. Disregarding the trivial set $\{N\}$, $i$ clearly cannot be greater than $N / 2$. (Suppose $i > N / 2$ and $i < N$. Then $i + (i + 1) > N$ and $i + (i - 1)$ may or may not be equal to $N$.) Since $i$ cannot be greater than $N/2$, start with $i = N/2$ as the upper bound of $i$. To test each possible set, loop $i$ from this upper bound down until $i (i + 1) / 2 < N$. Clearly, if the sum all positive integers from $1$ to $i$ is less than $N$, then no set of consecutive integers, whose greatest number is $i$, could sum to $N$. This establishes the lower bound of $i$.</p>
<p>To test if a set could end in $i$, I use the following mathematics:</p>
<p>Let </p>
<p>$$S(x) = \sum_{j=1}^{x} j$$</p>
<p>Starting with a set composed solely of $i$, add a set which most closely sums (including $i$) to $N$. Mathematically, </p>
<p>$$N = i + S(i - 1) - S(x)$$</p>
<p>$S(i - 1) - S(x)$ will generate some set of consecutive integers whose greatest number is $i - 1$ and whose least number is between $1$ and $i - 1$, inclusively. To rewrite,</p>
<p>$$N = i + \frac{(i-1)i}{2} - \frac{x(x + 1)}{2}$$
$$2N = 2i + i(i - 1) - x(x + 1)$$
$$2N = i(i + 1) - x(x + 1)$$</p>
<p>Therefore, $i(i + 1) - 2N = x(x+1)$ for some $x \in \mathbb{R}$. If $x \in \mathbb{N}$, then a solution has been found, generating the set of consecutive integers from $x + 1$ to $i$, inclusive. To quickly determine whether $x \in \mathbb{N}$, I square-root $i (i + 1) - 2N$ and round down to some integer $t$. I plug $t$ back into the equation for $x$, and check to see if it works. If $i(i + 1) - 2N \not = t(t+1)$ then I increment $t$ by one and check again. If neither of the cases works, then a valid set could not possibly have a greatest value of $i$, so I decrement $i$, continuing the loop.</p>
<hr>
<p>My algorithm has worked for hundreds of test cases, but when I submit, POJ gives me <code>WRONG ANSWER</code>. <strong>This is why I'd like to have a list of correct answers to compare against my program.</strong></p>
| Mark Bennet | 2,906 | <p>Either the sum of consecutive integers will contain an odd number of integers or an even number. Let's deal with the odd case first - and let the middle number be $n$ with the total being $N$ and $2r+1$ consecutive integers involved. Then the sum is $$N=(n-r)+(n-r+1)+\dots +(n-1)+n+(n+1)+\dots +(n+r)=(2r+1)n$$</p>
<p>Each odd factor $2r+1$ of $N$ gives one such decomposition, and you will be able to work out the condition for the lowest of the numbers to be positive, if necessary.</p>
<p>For an even decomposition with $2r$ terms we have</p>
<p>$$N=(n-r+1)+\dots +(n-1)+n+(n+1)+\dots +(n+r-1)+(n+r)=(2r-1)n+(n+r)=r(2n+1)$$ and this gives a decomposition into an even number of consecutive integers for each odd factor of $N$ taken as $2n+1$.</p>
<hr>
<p>Now suppose we have a decomposition of $N\gt 0$ which includes zero. If the least non-positive integer in the decomposition is $-r$, then the $2r+1$ integers $-r\le n\le r$ sum to zero, and can be deleted to give a decomposition containing only positive terms.</p>
<p>On the other hand, if the decomposition contains only positive terms with the least of these being $r+1$ then the $2r+1$ integers $-r\le n\le r$ can be added to the sequence to give a decomposition which includes zero.</p>
<p>So this pairing tells us that precisely half of the above decompositions are wholly positive - one for each odd factor. </p>
<p>Careful analysis will show that the pairing matches the odd and even decompositions related to the same odd factor (it is clear from the comments above that the paired decompositions differ in the parity of the number of terms). </p>
<p>Deduct $1$ if the trivial decomposition is not included.</p>
|
70,582 | <p>For which $n$ can $a^{2}+(a+n)^{2}=c^{2}$ be solved, where $a,c,n$ are positive integers?
I have found solutions for $n=1,7,17,23,31,41,47,79,89$ and for multiples of $7,17,23$...
Are there infinitely many prime $n$ for which it is solvable? </p>
| Peđa | 15,660 | <p>If you solve the expression for $n$ you get </p>
<p>$n=\sqrt{c^2-a^2}-a$; let's denote $b=\sqrt{c^2-a^2}$, so we have $n=b-a$.</p>
<p>Now, take a look at the picture below. Note that $AD=a$ and $BD=b-a=n$.</p>
<p>If you change the value of $b$ while keeping $a$ constant, you get an infinite number of right triangles, and therefore an infinite number of values of $n=b-a$; so the answer is yes, there are infinitely many primes $n$ for which the equation is solvable.</p>
<p><img src="https://i.stack.imgur.com/7f7WD.jpg" alt="enter image description here"> </p>
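<p>A brute-force search over small $a$ (an exploratory sketch of mine, not a proof) reproduces the values listed in the question:</p>

```python
from math import isqrt

def solvable(n, a_max=5000):
    """True if a^2 + (a + n)^2 = c^2 has an integer solution with 1 <= a <= a_max."""
    for a in range(1, a_max + 1):
        s = a * a + (a + n) * (a + n)
        c = isqrt(s)
        if c * c == s:
            return True
    return False
```

<p>For instance it finds $5^2+12^2=13^2$ for $n=7$ and $7^2+24^2=25^2$ for $n=17$.</p>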
|
1,990,670 | <blockquote>
<p>Assume that $0 < \theta < \pi$. Solve the following equation for $\theta$. $$\frac{1}{(\cos \theta)^2} = 2\sqrt{3}\tan\theta - 2$$ </p>
</blockquote>
<p><a href="https://i.stack.imgur.com/SoU8A.png" rel="nofollow noreferrer">Question and Answer</a></p>
<p>Regarding to the attached image, that shows the question and the answer?</p>
<p>How could I solve this question and what are the steps to follow to reach the answer?</p>
| hamam_Abdallah | 369,188 | <p>Your equation can be written as</p>
<p>$$\frac{1}{\cos^2(x)}=1+\tan^2(x)=2\sqrt{3}\tan(x)-2$$
or</p>
<p>$$\tan^2(x)-2\sqrt{3}\tan(x)+3=0$$</p>
<p>the reduced discriminant is</p>
<p>$$\delta=3-3=0$$</p>
<p>thus, there is one solution given by</p>
<p>$\tan(x)=\sqrt{3}$ which gives</p>
<p>$$x=\frac{\pi}{3}.$$</p>
|
1,774,294 | <p>If you have $$y^2=2x^2+C$$</p>
<p>why is this not equivalent to</p>
<p>$$y=\sqrt{2x^2}+C$$</p>
| Kenny Lau | 328,173 | <p>When you square the second equation, you would get:</p>
<p>$$y^2=2x^2+2C\sqrt{2x^2}+C^2$$</p>
<p>The rest of the proof is left to the reader as an exercise.</p>
<hr>
<p>Extra:</p>
<hr>
<p>$$y^2=2x^2+C$$</p>
<p>$$2y\mathrm dy=2x\mathrm dx$$</p>
<p>$$\frac{\mathrm dy}{\mathrm dx}=\frac xy=\pm\frac{x}{\sqrt{2x^2+C}}$$</p>
<hr>
<p>$$y=\sqrt{2x^2}+C$$</p>
<p>$$y=x\sqrt{2}+C$$</p>
<p>$$\frac{\mathrm dy}{\mathrm dx}=\sqrt2$$</p>
<hr>
<p>They are only equal when $C=0$.</p>
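<p>A quick numeric illustration of the difference (my own sketch):</p>

```python
import math

C, x = 1.0, 1.0
y_implicit = math.sqrt(2 * x**2 + C)   # positive branch of y^2 = 2x^2 + C
y_other = math.sqrt(2 * x**2) + C      # the second expression
# squaring the second expression produces the cross term 2C*sqrt(2x^2)
expanded = 2 * x**2 + 2 * C * math.sqrt(2 * x**2) + C**2
```

<p>At $C=1, x=1$ the two values are $\sqrt3 \approx 1.732$ and $\sqrt2+1 \approx 2.414$, so they clearly differ.</p>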
|
1,774,294 | <p>If you have $$y^2=2x^2+C$$</p>
<p>why is this not equivalent to</p>
<p>$$y=\sqrt{2x^2}+C$$</p>
| Soham | 242,402 | <p>Hint:-</p>
<p>The first $C$ and the second $C$ are different.</p>
<p>It's $y=\sqrt{2x^2+C}$, and in general $\sqrt{a+b}\neq \sqrt a+\sqrt b$.</p>
|
1,774,294 | <p>If you have $$y^2=2x^2+C$$</p>
<p>why is this not equivalent to</p>
<p>$$y=\sqrt{2x^2}+C$$</p>
| Rebellos | 335,894 | <p>Because by taking the square root you get $\sqrt{2x^2 + C}$: the constant $C$ is "inside" the square root.</p>
|
3,270,944 | <p>Let <span class="math-container">$A$</span> be a bounded linear operator on a separable Hilbert space <span class="math-container">${\cal H}$</span>, and suppose that <span class="math-container">$A$</span> is distinct from its adjoint <span class="math-container">$A^*$</span>. </p>
<p><strong>Question:</strong> Can the double commutant of <span class="math-container">$A$</span> be distinct from the double commutant of <span class="math-container">$\{A,A^*\}$</span>? If so, is there a simple example?</p>
<hr>
<p>This question was inspired by the following statement from section 3.3 in Vaughan Jones (2009), <em>Von Neumann Algebras</em> (<a href="https://math.berkeley.edu/~vfr/VonNeumann2009.pdf" rel="nofollow noreferrer">https://math.berkeley.edu/~vfr/VonNeumann2009.pdf</a>):</p>
<blockquote>
<p>If <span class="math-container">$S \subseteq {\cal B(H)}$</span>, we call <span class="math-container">$(S \cup S^*)''$</span> the von Neumann algebra generated by <span class="math-container">$S$</span>.</p>
</blockquote>
<p>I don't know if this was meant to be the most <em>efficient</em> definition, and that's exactly what prompted my question. Maybe the Hilbert space wasn't assumed to be separable in that context, but I am interested in the separable case (if it matters).</p>
| Dr. Sonnhard Graubner | 175,066 | <p>It is <span class="math-container">$$\frac{2(x+h)^2+1-2x^2-1}{h}=\frac{2x^2+4xh+2h^2-2x^2}{h}$$</span></p>
|
70,176 | <p>So I can do something like this which I like:</p>
<pre><code>Manipulate[i, {i, {1,2,3,4}}]
</code></pre>
<p>It lets me pick which specific values I want to allow to be chosen for my function. But that list appears to be very limiting.</p>
<p>Let's say I have a list where each element is itself a list of two elements, like so:</p>
<pre><code>myList = {{1,2},{3,4}}
</code></pre>
<p>How can I use <code>Manipulate</code> with this list such that it would give me two options to choose from: <code>{1,2}</code> and <code>{3,4}</code></p>
<p>Here is what I have tried:</p>
<pre><code>Manipulate[i, {i, myList}]
</code></pre>
<p>But it seems to only get it right on initialization; when you touch the slider it goes haywire and starts choosing things like <code>1</code> and <code>3</code> instead of <code>{1,2}</code> and <code>{3,4}</code>.</p>
<p>I want to be able to use <code>Manipulate</code> but only have it work on a set pair of numbers.</p>
| Rom38 | 10,455 | <p>I suspect that you can obtain what you need in the following way:</p>
<pre><code>list = {{1, 2}, {3, 4}, {5, 6}};
Manipulate[list[[i]], {i, 1, Length@list, 1}]
</code></pre>
<p>This code always gives you an element (sublist) of the initial list. </p>
|
1,177,349 | <p>Let $\gamma = e^{2 \pi i/5} + (e^{2 \pi i/5})^4$.</p>
<p>I am looking for the basis for $[\mathbb{Q}(\gamma):\mathbb{Q}] = 2$, and then looking for a dependence between $\gamma^2,\gamma$, and $1$. </p>
<p>I've worked all of this out numerically, but I am not sure how to do this through the basis.</p>
| user26486 | 107,671 | <p>$$x^2-y^2=(x-y)(x+y)$$</p>
<p>This is less than $0$, since it is given that $x<y\iff x-y<0$ and $x,y>0\implies x+y>0$.</p>
|
4,285,426 | <p>Intuitively it is quite easy to see why <span class="math-container">$$a \equiv (a \bmod m) \pmod m.$$</span></p>
<p>When you divide a by m you get a remainder in the range <span class="math-container">$0, \dots, m-1.$</span> When you divide the remainder by m again, you get the same number again as the remainder, except that this time the quotient is 0.</p>
<p>I get that.</p>
<p>The question is how to prove it formally.</p>
| David | 651,991 | <p>Let <span class="math-container">$q$</span> and <span class="math-container">$r$</span> be integers such that <span class="math-container">$a=qm+r$</span> (with <span class="math-container">$0 \leq r < m$</span>). It's easy to see that <span class="math-container">$a \mod m$</span> is precisely <span class="math-container">$r$</span>.</p>
<p>Now we can write <span class="math-container">$a - (a \mod m)$</span> as just <span class="math-container">$qm$</span> which is clearly a multiple of <span class="math-container">$m$</span>. Therefore <span class="math-container">$a$</span> is congruent to <span class="math-container">$(a \mod m)$</span> <span class="math-container">$(\mod m)$</span></p>
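<p>The proof can be sanity-checked numerically (a small sketch of mine; note that Python's <code>%</code> operator already returns the remainder $r$ with $0 \le r < m$ for positive $m$):</p>

```python
def congruent(a, b, m):
    """a is congruent to b (mod m) iff m divides a - b."""
    return (a - b) % m == 0

# a is always congruent to (a mod m) modulo m
assert all(congruent(a, a % m, m) for a in range(-50, 51) for m in range(1, 20))
```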
|
1,271,935 | <p>Let $(e_n)$ (where $ e_n $ has a 1 in the $n$-th place and zeros otherwise) be the standard unit vectors of $\ell_\infty$. </p>
<p>Why is $(e_n)$ not a basis for $\ell_\infty$?</p>
<p>Thanks.</p>
| Ian | 83,396 | <p>There are basically two things to note here. First you need to understand what it means for a set to be a basis of an infinite dimensional normed space. In your context, I am all but certain that this means that it is a Schauder basis, which is a linearly independent set such that the set of all finite linear combinations of members of the set is dense in the space. The other kind of basis is a Hamel basis, which is an "algebraic" basis, i.e. a linearly independent set such that the set of all finite linear combinations of members spans the space.</p>
<p>For $\ell^\infty$, any member of the set of finite linear combinations of $e_n$ is eventually zero. But $[1,1,1,\dots] \in \ell^\infty$ but it is at least $1$ away from any sequence which is eventually zero.</p>
|
28,195 | <p>So yesterday I came across a question. Something seemed suspicious (a badly worded question and an incorrect answer accepted), so I did some snooping. It appears that every question from the OP has been answered by the same user within minutes of posting, and subsequently upvoted and accepted. </p>
<p>I suspect the questioner and answerer may be one and the same (looking to increase their reputation for whatever reason), and I am pretty certain this is behavior the MSE community does not want. My questions are:</p>
<p>Assuming I am correct, is this against some "rule" of the MSE community (and if so, where can I find it)? Is there any way of determining this? And if there is no way for a plebeian like me, how could I go about getting the proper moderator intervention? </p>
<p>I say proper moderator intervention because I flagged the answer as poor quality, but it was then pointed out that this is not intended for incorrect answers.</p>
| Community | -1 | <p>Yes this is bad. Yes this is against the rules. You should flag for moderator intervention (I did). Thanks for bringing this up.</p>
|
2,392,114 | <p>It is possible to rewrite the equation $x^3+ax^2+bx+c=0$ as $y^3+3hy+k=0$ by setting $y=x+a/3$</p>
<p>How do you find the coefficient h in the equation $y^3+3hy+k=0$?</p>
| Khosrotash | 104,171 | <p>If you move the origin $(0,0)$ to the inflection point of the cubic, you will get that form
$$y=x^3+ax^2+bx+c \\y'=3x^2+2ax+b\\y''=6x+2a=0 \to x=-\frac{a}{3}$$ you must change $$x \mapsto x-\frac{a}{3}$$
$$f(x)=x^3+ax^2+bx+c \to \\(x-\frac{a}{3})^3+a(x-\frac{a}{3})^2+b(x-\frac{a}{3})+c\\=x^3-3x^2\frac{a}{3}+3x(\frac{a}{3})^2-(\frac{a}{3})^3+ax^2-2ax\frac{a}{3}+\frac{a^3}{9}
+bx-\frac{ba}{3}+c$$note that $-3x^2\frac{a}{3}$ cancels with $ax^2$ and you will have
$$x^3+x(\frac{a^2}{3}-2\frac{a^2}{3}+b)+(-(\frac{a}{3})^3+\frac{a^3}{9}-\frac{ba}{3}+c)$$ that is the form you need: comparing with $y^3+3hy+k$ gives $3h=b-\frac{a^2}{3}$, i.e. $h=\frac{b}{3}-\frac{a^2}{9}$, and $k=\frac{2a^3}{27}-\frac{ab}{3}+c$. </p>
<p>Look at an example: $$y=x^3+3x^2+5x-1 \\y''=6x+6=0 \to x=-1\\\to\\x \mapsto x-1\\\text{apply} \\Y=(x-1)^3+3(x-1)^2+5(x-1)-1\\=x^3-3x^2+3x-1+3x^2-6x+3+5x-5-1\\=x^3+2x-4$$</p>
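<p>The substitution can be checked exactly with Python's <code>fractions</code> module (a sketch of mine, using $3h=b-a^2/3$ for the $y$-coefficient and $k=\frac{2a^3}{27}-\frac{ab}{3}+c$ for the constant term, the simplified forms of the coefficients above):</p>

```python
from fractions import Fraction as F

def depressed_coeffs(a, b, c):
    """Coefficients (3h, k) of y^3 + 3h*y + k after substituting x = y - a/3."""
    three_h = b - F(a * a, 3)
    k = F(2 * a**3, 27) - F(a * b, 3) + c
    return three_h, k

a, b, c = 3, 5, -1
three_h, k = depressed_coeffs(a, b, c)
# exact spot-check: f(y - a/3) == y^3 + 3h*y + k at several points
for y in range(-3, 4):
    x = y - F(a, 3)
    assert x**3 + a * x**2 + b * x + c == y**3 + three_h * y + k
```

<p>For the worked example $a=3, b=5, c=-1$ this gives the depressed cubic $y^3+2y-4$.</p>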
|
311,849 | <p>How to evaluate:
$$ \int_0^\infty e^{-x^2} \cos^n(x) dx$$</p>
<p>Someone has posted this question on fb. I hope it's not a duplicate.</p>
| Shobhit Bhatnagar | 59,380 | <p>I found a way to do it for $n \in \mathbb{N}$. We begin with</p>
<p>$$\cos^n(x)=\left(\frac{e^{ix}+e^{-ix}}{2}\right)^n = \frac{1}{2^n e^{inx}}(1+e^{2ix})^n = \frac{1}{2^n e^{inx}}\sum_{r=0}^n \binom{n}{r}e^{2irx}$$</p>
<p>Therefore</p>
<p>$$\begin{aligned}\int_{-\infty}^\infty e^{-x^2}\cos^n(x)dx &=\int_{-\infty}^\infty e^{-x^2}\frac{1}{2^n e^{inx}}\sum_{r=0}^n \binom{n}{r}e^{2irx} dx \\ &=\frac{1}{2^n}\sum_{r=0}^n \binom{n}{r}\int_{-\infty}^\infty e^{-x^2+(2ir-in)x}dx\end{aligned}$$</p>
<p>Here we can use the formula, $\int_{-\infty}^\infty e^{-x^2+bx+c}dx=\sqrt{\pi}e^{b^2/4+c}$. Applying it gives </p>
<p>$$\int_{-\infty}^\infty e^{-x^2}\cos^n(x)dx= \frac{\sqrt{\pi}}{2^n}\sum_{r=0}^n \binom{n}{r}\exp\left({\frac{-(2r-n)^2}{4}}\right)$$</p>
<p>The integrand is even, so </p>
<p>$$\int_0^\infty e^{-x^2}\cos^n(x)dx=\boxed{\displaystyle \frac{\sqrt{\pi}}{2^{n+1}}\sum_{r=0}^n \binom{n}{r}\exp\left({\frac{-(2r-n)^2}{4}}\right)}$$ </p>
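<p>The closed form agrees with direct numerical quadrature (a throwaway check of mine using the midpoint rule; the integrand is negligible beyond $x=10$):</p>

```python
import math

def closed_form(n):
    return math.sqrt(math.pi) / 2 ** (n + 1) * sum(
        math.comb(n, r) * math.exp(-(2 * r - n) ** 2 / 4) for r in range(n + 1)
    )

def midpoint_quadrature(n, upper=10.0, steps=100_000):
    h = upper / steps
    return h * sum(
        math.exp(-((i + 0.5) * h) ** 2) * math.cos((i + 0.5) * h) ** n
        for i in range(steps)
    )
```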
|
266,124 | <p>A palindrome is a number or word that is the same when read forward
and backward, for example, “176671” and “civic.” Can the number obtained by writing
the numbers from 1 to n in order (n > 1) be a palindrome?</p>
| Mark Bennet | 2,906 | <p>Here's a beginning of the task where the numbers do not have to be in order. Note that a palindrome can have at most one digit which occurs an odd number of times - the centre digit if the number of digits is odd.</p>
<p>Now after 1, you have to have all digits 1-9 - nine digits. If you stop below 100 you will always have an odd number of digits (1-9 plus pairs from the two digit numbers). So you can work on digit parities to reduce the number of cases you have to consider.</p>
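<p>For the original question (digits written in order), a quick brute-force check in Python (my own sketch) supports the expected answer that no such palindrome exists, at least for small $n$:</p>

```python
def concat_is_palindrome(n):
    """Is the decimal concatenation '123...n' a palindrome?"""
    s = "".join(str(i) for i in range(1, n + 1))
    return s == s[::-1]

# no n with 2 <= n <= 1500 produces a palindrome
assert not any(concat_is_palindrome(n) for n in range(2, 1501))
```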
|
3,475,893 | <p>Is <span class="math-container">$ \mathbb{Q} \times \mathbb{Q}[i]$</span> an integral domain?</p>
<p>My attempt: I know that <span class="math-container">$ \mathbb{Q} \times \mathbb{Q}$</span> is not an integral domain: take <span class="math-container">$(0,1) \times (1,0) = (0,0)$</span>.</p>
<p>But I'm confused about <span class="math-container">$ \mathbb{Q} \times \mathbb{Q}[i]$</span>.</p>
| MANI | 464,799 | <p>You may use the same argument to show <span class="math-container">$\mathbb{Q}\times \mathbb{Q}[i]$</span> is not an integral domain, as <span class="math-container">$(q,0)\times (0,q')=(0,0)$</span> for any two non-zero rational numbers <span class="math-container">$q,q'.$</span></p>
<p>In fact, even if <span class="math-container">$R$</span> and <span class="math-container">$R'$</span> are fields, <span class="math-container">$R\times R'$</span> is never an integral domain.</p>
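<p>The zero-divisor computation can be made concrete with exact rationals (a sketch of mine; a Gaussian rational is modeled as a pair of <code>Fraction</code>s):</p>

```python
from fractions import Fraction as F

def mul(u, v):
    """Componentwise product in Q x Q[i]; the second component is Gaussian-rational."""
    (q1, (a1, b1)), (q2, (a2, b2)) = u, v
    return (q1 * q2, (a1 * a2 - b1 * b2, a1 * b2 + b1 * a2))

zero = (F(0), (F(0), F(0)))
u = (F(1), (F(0), F(0)))   # (1, 0): nonzero
v = (F(0), (F(2), F(3)))   # (0, 2 + 3i): nonzero
assert u != zero and v != zero and mul(u, v) == zero
```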
|
1,044,910 | <blockquote>
<p>Prove that $$\sum_{i = 2^{n-1} + 1}^{2^n}\frac{1}{a + ib} \ge \frac{1}{a + 2b}$$</p>
</blockquote>
<p>I tried to to prove the above statement using the AM-HM inequality:</p>
<p>$$\begin{align}\frac{1}{2^n - 2^{n-1}}\sum_{i = 2^{n-1} + 1}^{2^n}\frac{1}{a + ib} &\ge \frac{2^n - 2^{n-1}}{\sum_{i = 2^{n-1} + 1}^{2^n}(a + ib)}\\
\sum_{i = 2^{n-1} + 1}^{2^n}\frac{1}{a + ib} &\ge \frac{(2^n - 2^{n-1})^2}{\frac{2^n -2^{n-1}}{2}(2a + (2^n + 2^{n-1} + 1)b)}\\
&=\frac{2^{n+1} - 2^n}{2a + (2^n + 2^{n-1} + 1)b}\end{align}$$</p>
<p>after which I am more or less stuck. How can I continue on from here, or is there another method?</p>
| robjohn | 13,854 | <p>Assuming $a,b\gt0$, we get
$$
\begin{align}
\sum_{i=2^{n-1}+1}^{2^n}\left(\frac1{a+ib}-\frac{2^{-n+1}}{a+2b}\right)
&\ge\sum_{i=2^{n-1}+1}^{2^n}\left(\frac1{a+2^nb}-\frac{2^{-n+1}}{a+2b}\right)\\
&=\sum_{i=2^{n-1}+1}^{2^n}\frac{a(1-2^{-n+1})}{(a+2^nb)(a+2b)}\\
&=\frac{a(2^{n-1}-1)}{(a+2^nb)(a+2b)}\\[12pt]
&\ge0
\end{align}
$$
Therefore,
$$
\sum_{i=2^{n-1}+1}^{2^n}\frac1{a+ib}\ge\frac1{a+2b}
$$</p>
|
1,044,910 | <blockquote>
<p>Prove that $$\sum_{i = 2^{n-1} + 1}^{2^n}\frac{1}{a + ib} \ge \frac{1}{a + 2b}$$</p>
</blockquote>
<p>I tried to to prove the above statement using the AM-HM inequality:</p>
<p>$$\begin{align}\frac{1}{2^n - 2^{n-1}}\sum_{i = 2^{n-1} + 1}^{2^n}\frac{1}{a + ib} &\ge \frac{2^n - 2^{n-1}}{\sum_{i = 2^{n-1} + 1}^{2^n}(a + ib)}\\
\sum_{i = 2^{n-1} + 1}^{2^n}\frac{1}{a + ib} &\ge \frac{(2^n - 2^{n-1})^2}{\frac{2^n -2^{n-1}}{2}(2a + (2^n + 2^{n-1} + 1)b)}\\
&=\frac{2^{n+1} - 2^n}{2a + (2^n + 2^{n-1} + 1)b}\end{align}$$</p>
<p>after which I am more or less stuck. How can I continue on from here, or is there another method?</p>
| Did | 6,179 | <p>A generalized version of the result might be easier to prove:</p>
<blockquote>
<p>For all positive integers $k$ and $m$, $$\sum_{i=k+1}^{k(m+1)}\frac1{a+ib}\geqslant\frac{m}{a+(m+1)b}.$$</p>
</blockquote>
<p>The question asks about the case $k=2^{n-1}$ and $m=1$.</p>
<p>To prove the claim, note that $a+ib\leqslant a+k(m+1)b$ for every $i$ used in the sum $S$ of the LHS and that the sum $S$ has $km$ terms hence $$S\geqslant\frac{km}{a+k(m+1)b}=\frac{m}{(m+1)b}\,\left(1-\frac{a}{a+k(m+1)b}\right).$$ The RHS is an increasing function of $k$ hence it is at least equal to its value when $k=\color{red}{\bf1}$, that is, $$S\geqslant\frac{\color{red}{\bf1}\cdot m}{a+\color{red}{\bf1}\cdot (m+1)\,b}=\frac{m}{a+(m+1)b}.$$</p>
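<p>A numerical spot-check of the generalized inequality over a small grid of positive parameters (my own sketch):</p>

```python
def lhs(a, b, k, m):
    return sum(1 / (a + i * b) for i in range(k + 1, k * (m + 1) + 1))

def rhs(a, b, m):
    return m / (a + (m + 1) * b)

# the inequality holds on the whole grid (with equality at k = m = 1)
ok = all(lhs(a, b, k, m) >= rhs(a, b, m)
         for a in (0.5, 1.0, 3.0) for b in (0.5, 1.0, 2.0)
         for k in (1, 2, 5, 8) for m in (1, 2, 3))
assert ok
```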
|
162,836 | <p>I would like to find the surface normal for a point on a 3D filled shape in Mathematica. </p>
<p>I know how to calculate the normal of a parametric surface using the cross product but this method will not work for a shape like <code>Cone[]</code> or <code>Ball[]</code>.</p>
<ol>
<li>Is there some sort of <code>RegionNormal</code> option? There is an option to
find <code>VertexNormals</code> <a href="http://reference.wolfram.com/language/ref/VertexNormals.html" rel="noreferrer">here</a>, but this is something to with
shading and seems unhelpful. </li>
<li>Is there a method I can use to convert the region into a parametric expression and use the normal cross product method? </li>
</ol>
<p>The plan is to take an arbitrary line and find the angle of intersection between the line and the surface of the shape. </p>
| Tomi | 36,939 | <p>Whether this is enough to warrant an "answer" is debatable, and it relies on Michael E2's work, but I felt it was helpful to share. </p>
<p>Using Michael E2's solution, we can plot and clearly see the normals for 3D shapes. </p>
<pre><code>numberofpoints = 50;
pts = RandomPoint[RegionBoundary[shape], numberofpoints];
normals = regnormal[shape, {x, y, z}]
surfaces = Length[normals[[1]]];
magnitude = 0.1;
pointonsurface = ConstantArray[0, surfaces];
lines[{a_, b_}] := {magnitude*normals[[1, a, 1]] + b,
b} /. {x -> b[[1]], y -> b[[2]], z -> b[[3]]};
For[i = 1, i <= surfaces, i++,
pointonsurface[[i]] =
Table[ {If[
Evaluate[
normals[[1, i, 2]] /. {x -> pts[[j, 1]], y -> pts[[j, 2]],
z -> pts[[j, 3]]}] == True, i], pts[[j]]}, {j, 1,
numberofpoints, 1}];
pointonsurface[[i]] =
DeleteCases[pointonsurface[[i]], {a_, b_} /; a == Null];
pointonsurface[[i]] = lines /@ pointonsurface[[i]];
]
normallines = Flatten[pointonsurface, 1];
Graphics3D[{shape, {Red, Point[pts]}, Line[normallines] },
Boxed -> False, Axes -> True, PlotRange -> All]
</code></pre>
<p>So, for <code>Cone[]</code></p>
<p><a href="https://i.stack.imgur.com/XSmv6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XSmv6.png" alt="enter image description here"></a></p>
<p>And for <code>Cuboid[]</code></p>
<p><a href="https://i.stack.imgur.com/dkESJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dkESJ.png" alt="enter image description here"></a></p>
<p>And for <code>Sphere[]</code><br>
<a href="https://i.stack.imgur.com/5b953.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5b953.png" alt="enter image description here"></a></p>
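<p>For readers outside Mathematica: the computation underlying all of this is that the outward normal of an implicit surface $F(x,y,z)=0$ is $\nabla F/\lVert\nabla F\rVert$. A minimal Python sketch (the sphere and the sample point are illustrative choices, not part of the code above):</p>

```python
import math

def normal(F, p, h=1e-6):
    """Unit normal of the implicit surface F(x, y, z) = 0 at point p,
    via a central-difference gradient."""
    grad = []
    for i in range(3):
        hi = list(p)
        lo = list(p)
        hi[i] += h
        lo[i] -= h
        grad.append((F(*hi) - F(*lo)) / (2 * h))
    length = math.sqrt(sum(g * g for g in grad))
    return [g / length for g in grad]

# Unit sphere x^2 + y^2 + z^2 = 1: the normal at a surface point is the point itself.
sphere = lambda x, y, z: x * x + y * y + z * z - 1.0
n = normal(sphere, [1.0, 0.0, 0.0])
```

<p>The angle between a line with direction <code>d</code> and the surface at a hit point is then the complement of the angle between <code>d</code> and this normal.</p>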
|
499,044 | <p>I "know" that $\mathbb{C} \otimes_\mathbb{R} \mathbb{C} \cong \mathbb{C} \oplus \mathbb{C}$ as rings, but I don't really know it: what I mean by this is that I don't know any explicit isomorphism $f: \mathbb{C} \otimes_\mathbb{R} \mathbb{C} \rightarrow \mathbb{C} \oplus \mathbb{C}$. I suspect that such an isomorphism should be easy to find, but I am really not finding one. Could anyone please help me?</p>
| Henry T. Horton | 24,934 | <p>One example of an isomorphism $\varphi: \Bbb C \oplus \Bbb C \longrightarrow \Bbb C \otimes_{\Bbb R} \Bbb C$ is given on generators by
$$\varphi(1, 0) = \tfrac{1}{2}(1 \otimes 1 + i \otimes i),$$
$$\varphi(0, 1) = \tfrac{1}{2}(1 \otimes 1 - i \otimes i).$$</p>
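<p>One can verify mechanically that these two images are orthogonal idempotents summing to $1 \otimes 1$, which is what the coordinate idempotents $(1,0)$ and $(0,1)$ of $\Bbb C \oplus \Bbb C$ must map to under a ring isomorphism. A small Python sketch over the real basis $\{1\otimes 1,\ 1\otimes i,\ i\otimes 1,\ i\otimes i\}$ (an editorial illustration, not part of the original answer):</p>

```python
# Basis element (a, b) stands for i^a (tensor) i^b, with a, b in {0, 1}.
def mul(x, y):
    out = {(a, b): 0.0 for a in (0, 1) for b in (0, 1)}
    for (a, b), xc in x.items():
        for (c, d), yc in y.items():
            # i^(a+c) = (-1)^((a+c)//2) * i^((a+c) % 2), and likewise on the right.
            sign = (-1) ** ((a + c) // 2) * (-1) ** ((b + d) // 2)
            out[((a + c) % 2, (b + d) % 2)] += sign * xc * yc
    return out

one = {(0, 0): 1.0, (0, 1): 0.0, (1, 0): 0.0, (1, 1): 0.0}
p = {(0, 0): 0.5, (0, 1): 0.0, (1, 0): 0.0, (1, 1): 0.5}   # image of (1, 0)
q = {(0, 0): 0.5, (0, 1): 0.0, (1, 0): 0.0, (1, 1): -0.5}  # image of (0, 1)
```

<p>Checking $p^2=p$, $q^2=q$, $pq=0$, $p+q=1$ confirms the two projections behave as they should.</p>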
|
4,566,254 | <p>Let <span class="math-container">$F$</span> be a functor <span class="math-container">$\mathscr{C}^\text{op}\times\mathscr{C}\to\mathbf{Set}$</span>, and let <span class="math-container">$S$</span> be an arbitrary set. Can we write the following?
<span class="math-container">$$
\int^{C:\mathscr{C}} S\times F(C,C) \cong S\times\int^{C:\mathscr{C}} F(C,C)
$$</span>
This seems intuitively reasonable by analogy to integrals, and it would be generally useful in proofs. Is it true and if so how can I prove it?</p>
<p>(edited: of course it should be <span class="math-container">$\cong$</span>, not <span class="math-container">$=$</span>.)</p>
| fosco | 685 | <p>The coend <span class="math-container">$\int^c F(c,c)$</span> is a colimit in a category <span class="math-container">$D$</span>, so every left adjoint functor <span class="math-container">$L : D\to E$</span> (a particular example of which is <span class="math-container">$S\times-$</span> in a cartesian closed category) will preserve such colimit, which means that provided the colimit exists in the codomain of <span class="math-container">$L$</span>, you have <span class="math-container">$L\left(\int^c F(c,c)\right)\cong \int^c LF(c,c)$</span>. So, for example,
<span class="math-container">$$
S\times \int^c F(c,c)\cong \int^c S\times F(c,c)
$$</span> for every functor <span class="math-container">$F : C^o\times C\to Set$</span> and set <span class="math-container">$S$</span>; but also
<span class="math-container">$$
S\otimes \int^c F(c,c)\cong \int^c S\otimes F(c,c)
$$</span> for every functor <span class="math-container">$F : C^o\times C\to D$</span> and object <span class="math-container">$S\in D$</span> where <span class="math-container">$D$</span> is a monoidal closed category (pointed spaces, modules over a ring,..); but also (since the functor <span class="math-container">$List$</span> that sends a set to its "Kleene star", being the free monoid functor, is a left adjoint)
<span class="math-container">$$
List\left(\int^c F(c,c)\right)\cong \int^c List(F(c,c))
$$</span> whenever (for example) <span class="math-container">$F : FinSet^o\times FinSet \to Set$</span> is a functor (I take finite sets to have a small category of indices for <span class="math-container">$F$</span>); but also,
<span class="math-container">$$
\pi_0\left(\int^c F(c,c)\right)\cong \int^c \pi_0(F(c,c))
$$</span> for every functor <span class="math-container">$F : C^o\times C \to Spaces$</span>, where <span class="math-container">$Spaces$</span> is a decent category of topological spaces and <span class="math-container">$\pi_0$</span> takes connected components.</p>
|
3,018,388 | <p>It is given that the series <span class="math-container">$ \sum_{n=1}^{\infty} a_n$</span> is convergent but not absolutely convergent and <span class="math-container">$ \sum_{n=1}^{\infty} a_n=0$</span>. Denote by <span class="math-container">$s_k$</span> the partial sum <span class="math-container">$ \sum_{n=1}^{k} a_n, \ k=1,2,3, \cdots $</span>. Then</p>
<ol>
<li><p><span class="math-container">$ s_k=0$</span> for infinitely many <span class="math-container">$k$</span></p></li>
<li><p><span class="math-container">$s_k>0$</span> for infinitely many <span class="math-container">$k$</span></p></li>
<li><p>it is possible that <span class="math-container">$ s_k>0$</span> for all <span class="math-container">$k$</span></p></li>
<li><p>it is possible that <span class="math-container">$ s_k>0$</span> for all but finite number of values of <span class="math-container">$k$</span>.</p></li>
</ol>
<p><strong>Answer:</strong> </p>
<p>Consider the sequence <span class="math-container">$ \{a_n \}$</span> defined by <span class="math-container">$ a_{2n-1}=\frac{1}{n}$</span> and <span class="math-container">$a_{2n}=-\frac{1}{n}$</span>, so that</p>
<p><span class="math-container">$ \sum_{n=1}^{\infty} a_n=1-1+\frac{1}{2}-\frac{1}{2}+\cdots $</span></p>
<p>Thus,</p>
<p><span class="math-container">$ s_{2n-1}=\frac{1}{n} \to 0 \text{ as } n \to \infty$</span>,</p>
<p><span class="math-container">$s_{2n} =0$</span></p>
<p>Thus,</p>
<p><span class="math-container">$ \sum_{n=1}^{\infty} a_n=0$</span>.</p>
<p>Also the series is not absolutely convergent.</p>
<p>Thus <span class="math-container">$s_{2n-1}=\frac{1}{n}>0$</span> for infinitely many <span class="math-container">$n$</span></p>
<p>hence option <span class="math-container">$(3)$</span> is true.</p>
<p>What about the other options?</p>
<p>help me</p>
| Kavi Rama Murthy | 142,385 | <p>The sum is nothing but a Riemann sum for <span class="math-container">$\int_0^{1}\sqrt{1-t^{2}}\, dt$</span>. You can evaluate this by making the substitution <span class="math-container">$t=\sin\, \theta$</span> and using the formula <span class="math-container">$2 \cos ^{2}\, \theta =1+\cos\, (2\theta)$</span> and you will get <span class="math-container">$\pi /4$</span>.</p>
|
3,018,388 | <p>It is given that the series <span class="math-container">$ \sum_{n=1}^{\infty} a_n$</span> is convergent but not absolutely convergent and <span class="math-container">$ \sum_{n=1}^{\infty} a_n=0$</span>. Denote by <span class="math-container">$s_k$</span> the partial sum <span class="math-container">$ \sum_{n=1}^{k} a_n, \ k=1,2,3, \cdots $</span>. Then</p>
<ol>
<li><p><span class="math-container">$ s_k=0$</span> for infinitely many <span class="math-container">$k$</span></p></li>
<li><p><span class="math-container">$s_k>0$</span> for infinitely many <span class="math-container">$k$</span></p></li>
<li><p>it is possible that <span class="math-container">$ s_k>0$</span> for all <span class="math-container">$k$</span></p></li>
<li><p>it is possible that <span class="math-container">$ s_k>0$</span> for all but finite number of values of <span class="math-container">$k$</span>.</p></li>
</ol>
<p><strong>Answer:</strong> </p>
<p>Consider the sequence <span class="math-container">$ \{a_n \}$</span> defined by <span class="math-container">$ a_{2n-1}=\frac{1}{n}$</span> and <span class="math-container">$a_{2n}=-\frac{1}{n}$</span>, so that</p>
<p><span class="math-container">$ \sum_{n=1}^{\infty} a_n=1-1+\frac{1}{2}-\frac{1}{2}+\cdots $</span></p>
<p>Thus,</p>
<p><span class="math-container">$ s_{2n-1}=\frac{1}{n} \to 0 \text{ as } n \to \infty$</span>,</p>
<p><span class="math-container">$s_{2n} =0$</span></p>
<p>Thus,</p>
<p><span class="math-container">$ \sum_{n=1}^{\infty} a_n=0$</span>.</p>
<p>Also the series is not absolutely convergent.</p>
<p>Thus <span class="math-container">$s_{2n-1}=\frac{1}{n}>0$</span> for infinitely many <span class="math-container">$n$</span></p>
<p>hence option <span class="math-container">$(3)$</span> is true.</p>
<p>What about the other options?</p>
<p>help me</p>
| Mostafa Ayaz | 518,023 | <p>This follows directly from the fundamental theorem of calculus<br>(<a href="https://en.wikipedia.org/wiki/Fundamental_theorem_of_calculus" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Fundamental_theorem_of_calculus</a>)<br>and<br>the definition of the Riemann sum<br>(<a href="https://en.wikipedia.org/wiki/Riemann_sum" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Riemann_sum</a>)</p>
|
2,698,098 | <p>The question:</p>
<blockquote>
<p>Suppose <span class="math-container">$0< \delta < \pi$</span>, <span class="math-container">$f(x) = 1$</span> if <span class="math-container">$|x| \leq \delta$</span>, <span class="math-container">$f(x) = 0$</span> if <span class="math-container">$\delta < |x| \leq \pi$</span>, and <span class="math-container">$f(x + 2 \pi) = f(x)$</span> for all <span class="math-container">$x$</span>.</p>
<p>(a) Compute the Fourier Coefficients for <span class="math-container">$f$</span>.</p>
<p>(b) Conclude that <span class="math-container">$$\sum_{n=1}^\infty \frac{\sin(n \delta)}{n} = \frac{\pi - \delta}{2}$$</span></p>
</blockquote>
<p>I've found the coefficients as <span class="math-container">$c_n = \frac{1}{2 \pi} \int_{-\pi}^\pi f(x) e^{-inx}\,dx = \frac{\sin (n \delta)}{i \pi n}$</span>. But I can't seem to see what I should be looking for to prove (b). Parseval's Theorem gives it for <span class="math-container">$|c_n|^2$</span> and even then it doesn't give the answer I'm looking for. A hint would be appreciated.</p>
| mathematics2x2life | 79,043 | <p>For part (a), it might be a bit more helpful to notice that $f$ is even and real-valued so that you may use the real version of the Fourier series. Also because $f$ is even, $b_n=0$ for all $n$. You should then be able to calculate $a_n$ for $n \geq 0$ (the answer, no work, below)</p>
<blockquote class="spoiler">
<p> $a_0=\frac{2\delta}{\pi}$ and $a_n=\frac{2\sin n\delta}{\pi n}$</p>
</blockquote>
<p>For (b), think of what extra conditions $f$ satisfies and look at Theorem 8.14 with $x=0$.</p>
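<p>As a numerical sanity check of the identity in (b) (a sketch; $\delta = 1$ and the truncation point are arbitrary choices, and the partial sums converge only at rate $O(1/N)$):</p>

```python
import math

delta = 1.0
N = 200_000
partial = sum(math.sin(n * delta) / n for n in range(1, N + 1))
target = (math.pi - delta) / 2  # the claimed value of the series
```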
|
3,368,655 | <p>I came across a problem that asked if it is possible for a function to be Riemann integrable in <span class="math-container">$[0,+\infty)$</span> but also <span class="math-container">$|f(x)|\geq 1$</span> for all <span class="math-container">$x\geq 0$</span>. </p>
<p>At first I thought it was impossible, but I realized that only holds for continuous functions, because they would have to be either positive or negative, and then they would have to go to 0 at infinity. </p>
<p>I have an idea of what the function would have to be like, with alternating signs, but whose integral converges, but I haven't been able to find any, so I'm starting to think it is impossible. </p>
<p>I would like some help finding this function, or disproving it, as I don't know many tools for working with functions without a constant sign.</p>
| copper.hat | 27,978 | <p>Here is a more elementary example:</p>
<p>Let <span class="math-container">$\phi$</span> be a <span class="math-container">$1$</span>-periodic function such that <span class="math-container">$\phi(x)=-1$</span> for <span class="math-container">$x \in [0,{1 \over 2})$</span> and <span class="math-container">$\phi(x) = 1$</span> for <span class="math-container">$x \in [{1 \over 2},1)$</span>.</p>
<p>Note that for an integer <span class="math-container">$n$</span> and <span class="math-container">$x \in [0,1]$</span>, we have <span class="math-container">$|\int_n^{n+x} \phi(2^kt)dt| \le {1 \over 2^{k+1}}$</span>.</p>
<p>Define <span class="math-container">$f(x) = \sum_{n=0}^\infty 1_{[n,n+1)} \phi(2^nx)$</span>.</p>
<p>Then <span class="math-container">$|f(x)| = 1$</span> for all <span class="math-container">$x\ge 0$</span> and
<span class="math-container">$\int_0^\infty f(x)dx = 0$</span> (the improper integral).</p>
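<p>A numerical illustration of this construction (a sketch; the range $[0,4]$ is an arbitrary choice, and the midpoint grid below is fine enough to resolve every interval of constancy of $f$ on that range exactly):</p>

```python
import math

def phi(t):                      # the 1-periodic square wave
    return -1.0 if (t % 1.0) < 0.5 else 1.0

def f(x):                        # f(x) = phi(2^n x) on [n, n+1)
    n = int(math.floor(x))
    return phi((2 ** n) * x)

h = 1.0 / 1024
samples = [f((m + 0.5) * h) for m in range(4 * 1024)]
integral = sum(samples) * h      # midpoint rule over [0, 4]
```

<p>Every sample has absolute value $1$, yet the integral over whole periods cancels exactly.</p>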
|
2,168,906 | <blockquote>
<p>The task is to find necessary and sufficient condition on <span class="math-container">$b$</span> and <span class="math-container">$c$</span> for the equation <span class="math-container">$x^3-3b^2x+c=0$</span> to have three distinct real roots.</p>
</blockquote>
<p>Are there any formulas (such as <span class="math-container">$x_1x_2=c/a$</span> and <span class="math-container">$x_1+x_2=-b/a$</span> for roots in <span class="math-container">$ax^2+bx+c=0$</span>), but for equations of 3rd power?</p>
| Tsemo Aristide | 280,301 | <p>$f(x)=e^{g(x)}$ where $g(x)=x^x=e^{x\ln(x)}$, $f'(x)=e^{g(x)}g'(x)$, $g'(x)=e^{x\ln(x)}(\ln(x)+1)$.</p>
<p>Your mistake is when you compute $\ln(y)$.</p>
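<p>A quick numerical check of this chain-rule computation for $f(x)=e^{x^x}$ (a sketch; the test point $x=1.5$ is arbitrary):</p>

```python
import math

def f(x):
    return math.exp(x ** x)

def fprime(x):                    # e^(x^x) * x^x * (ln x + 1)
    return math.exp(x ** x) * x ** x * (math.log(x) + 1.0)

x0, h = 1.5, 1e-6
numeric = (f(x0 + h) - f(x0 - h)) / (2 * h)   # central difference
```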
|
315,457 | <p>I am trying to evaluate $\cos(x)$ at the point $x=3$ with $7$ decimal places to be correct. There is no requirement to be the most efficient but only evaluate at this point.</p>
<p>Currently, I am thinking first write $x=\pi+x'$ where $x'=-0.14159265358979312$ and then use Taylor series $\cos(x)=\sum_{i=1}^n(-1)^n\frac{x^{2n}}{(2n)!}$ to decide the best $n$ and the fact the error bound $\frac{1}{(n+1)!}$ for $\cos(x)$ when $x\in[-1,1]$ to decide $n$. Using wolfram alpha I got $n=11$. Thus I need to use the first $11$ terms of the Taylor series of $\cos(x)$. Does this seem like a reasonable approach?</p>
<p>If I am using some programming languages which don't contain $\pi$ as a constant, should I just define $\pi$ first and use the above method? Is there any other approach to this?</p>
<p>If I want to evaluate $\sin(\cos(x))$ at the point $x=3$, should I use above method to evaluate $\cos(x)$ first and then $\sin(\cos(x))$? Is there any other approach to this?</p>
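<p>For what it's worth, the reduction-to-$\pi$ idea is easy to sanity-check numerically (a sketch; the number of series terms below is an arbitrary comfortable choice, and $\pi$ comes from the standard library rather than being typed in):</p>

```python
import math

x = 3.0
xp = x - math.pi                  # |xp| is about 0.1416, so the series converges fast
# cos(x) = cos(pi + xp) = -cos(xp); sum the Maclaurin series of cos at xp
approx = -sum((-1) ** k * xp ** (2 * k) / math.factorial(2 * k) for k in range(6))
```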
| Renko Usami | 538,693 | <p>A theorem may help you: </p>
<p>Let $A ∈ M_n$. The following are equivalent:<br>
(a) A is irreducible.<br>
(b) $(I + |A|)^{n-1} > 0$.<br>
(c) $(I + M(A))^{n−1} > 0$.<br>
(d) $\Gamma(A)$ is strongly connected. </p>
<p>It is Theorem 6.2.24 in <em>Matrix Analysis, 2nd edition</em>. Go check it if you need a complete proof.</p>
|
315,457 | <p>I am trying to evaluate $\cos(x)$ at the point $x=3$ with $7$ decimal places to be correct. There is no requirement to be the most efficient but only evaluate at this point.</p>
<p>Currently, I am thinking first write $x=\pi+x'$ where $x'=-0.14159265358979312$ and then use Taylor series $\cos(x)=\sum_{i=1}^n(-1)^n\frac{x^{2n}}{(2n)!}$ to decide the best $n$ and the fact the error bound $\frac{1}{(n+1)!}$ for $\cos(x)$ when $x\in[-1,1]$ to decide $n$. Using wolfram alpha I got $n=11$. Thus I need to use the first $11$ term of Taylor series of $\cos(x)$. Is this seems a reasonable approach?</p>
<p>If I am using some programming languages which don't contain $\pi$ as a constant, should I just define $\pi$ first and use the above method? Is there any other approach to this?</p>
<p>If I want to evaluate $\sin(\cos(x))$ at the point $x=3$, should I use above method to evaluate $\cos(x)$ first and then $\sin(\cos(x))$? Is there any other approach to this?</p>
| Community | -1 | <p>Let $A=[a_{i,j}]\in M_n(\mathbb{R})$ and $|A|=[|a_{i,j}|]$. $A$ is irreducible IFF $|A|$ is too. Then we may assume that the $a_{i,j}$ are $\geq 0$. We have a look at the complexity of the problem: "decide whether $A$ is irreducible or not".</p>
<p>Of course, we do not look for a permutation of the basis vectors that triangularizes $A$ (the complexity is $O(n!)$).</p>
<p>We can use the test (cf. Renko Usami) $(I+A)^{n-1}>0$. Yet, the complexity is $O(n^3\log(n))$.</p>
<p>The best is to use the "strong component algorithm" (cf user1551). Its complexity is $O(n)$ (that is extraordinary fast, even for a $10^6\times 10^6$ matrix).</p>
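<p>Test (b) above is straightforward to run on small examples. A Python sketch using boolean matrix arithmetic (the two sample matrices, a $3$-cycle and a strictly upper-triangular one, are illustrative choices):</p>

```python
def irreducible(A):
    """Test (I + |A|)^(n-1) > 0 using boolean matrix products."""
    n = len(A)
    M = [[(i == j) or (A[i][j] != 0) for j in range(n)] for i in range(n)]
    P = M
    for _ in range(n - 2):        # raise M to the (n-1)-th power
        P = [[any(P[i][k] and M[k][j] for k in range(n)) for j in range(n)]
             for i in range(n)]
    return all(all(row) for row in P)

cycle = [[0, 1, 0], [0, 0, 1], [1, 0, 0]]          # a 3-cycle: irreducible
triangular = [[0, 1, 1], [0, 0, 1], [0, 0, 0]]     # block-triangular: reducible
```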
|
1,453,010 | <p>A certain biased coin is flipped until it shows heads for the first time. If the probability of getting heads on a given flip is $5/11$ and $X$ is a random variable corresponding to the number of flips it will take to get heads for the first time, the expected value of $X$ is:
$$E[x] = \sum_{x=1}^\infty{x\frac{5}{11}\left(\frac{6}{11}\right)^{x-1}}$$
I'm not sure how to find an exact value for $E[x]$. I tried thinking about it in terms of a summation of an infinite geometric series but I don't see how that formula can be applied. </p>
| David K | 139,123 | <p>The expectation is not a geometric series (at least not when you write
it directly), but its resemblance to a geometric series is a good observation.</p>
<p>First let's get that factor of $\frac{5}{11}$ out of the way,
because it will become annoying at some point if we keep it inside the
summation.
$$E[x] = \sum_{x=1}^\infty{x\frac{5}{11}\left(\frac{6}{11}\right)^{x-1}}
= \frac{5}{11} \sum_{x=1}^\infty{x \left(\frac{6}{11}\right)^{x-1}} = \frac{5}{11} S,$$
where
$$
S = \sum_{x=1}^\infty{x \left(\frac{6}{11}\right)^{x-1}}.
$$
<p>Now write out $S$ and $\frac{6}{11}S$:
\begin{align}
\newcommand{\x}{\left(\frac{6}{11}\right)}
S &= 1 \cdot \x^0 + 2 \cdot \x^1 + 3 \cdot \x^2 + 4 \cdot \x^3 + \cdots\\
\frac{6}{11}S &= \phantom{1 \cdot \x^0 + }\
1 \cdot \x^1 + 2 \cdot \x^2 + 3 \cdot \x^3 + \cdots
\end{align}</p>
<p>From here you should be able to work out what $S - \frac{6}{11}S$ is as a series, taking the difference of the right-hand sides of the two equations above, and then apply what you know about geometric series.
Notice how conveniently $S - \frac{6}{11}S = \frac{5}{11}S$,
which happens to be the value we need in the end.</p>
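<p>As a numerical cross-check (a sketch; it truncates the series at an arbitrary $N$ and compares against the standard closed form $E[X]=1/p$ for a geometric variable, which here gives $11/5$):</p>

```python
p = 5 / 11
N = 1000
# truncated series for E[X] = sum x * p * (1-p)^(x-1); the tail is negligible
expectation = sum(x * p * (1 - p) ** (x - 1) for x in range(1, N + 1))
```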
|
184,601 | <p>A user on the chat asked how to make a function that caps at a specific value like 20. The behavior would be as follows:</p>
<p>$f(...)=...$</p>
<p>$f(18)=18$</p>
<p>$f(19)=19$</p>
<p>$f(20)=20$</p>
<p>$f(21)=20$</p>
<p>$f(22)=20$</p>
<p>$f(...)=20$</p>
<p>He said he would like to perform it with a regular calculator. Is it possible to do this?</p>
| Marc van Leeuwen | 18,880 | <p>$ x \mapsto \min ( x , 20 ) $ </p>
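<p>(Editorial note: since the asker wanted a plain calculator, it may be worth adding that $\min(x,20)$ can be computed with only arithmetic and the absolute-value key, via $\frac{x+20-|x-20|}{2}$. A sketch verifying this identity:)</p>

```python
def capped(x, cap=20):
    # min(x, cap) using only +, -, /, and absolute value
    return (x + cap - abs(x - cap)) / 2
```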
|
245,464 | <p>I only have one region plot and still want to get the legend (both marker and label). I tried the following; why does the legend marker not show up?</p>
<p><code>RegionPlot[x^2 < y^3 + 1 && y^2 < x^3 + 1, {x, -2, 5}, {y, -2, 5}, PlotLegends -> Placed["MyLegend", {0.15, 0.08}]]</code></p>
<p><a href="https://i.stack.imgur.com/OQjmj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OQjmj.png" alt="enter image description here" /></a></p>
<hr />
<p>Edit:
Thanks @kglr for pointing out that {"MyLegend"} works in the original simple example. This helped me realize that what I want to achieve is slightly different. Since I use mesh to highlight the region, I was trying to get the legend for the mesh. Please see the code below:</p>
<p><code>RegionPlot[x^2 < y^3 + 1 && y^2 < x^3 + 1, {x, -2, 5}, {y, -2, 5}, MeshFunctions -> {#1 - #2 &}, Mesh -> 12, MeshStyle -> {Hue[0.75], Opacity[0.3]}, PlotStyle -> None, BoundaryStyle -> None, PlotLegends -> Placed[{"MyLegend"}, {0.15, 0.15}]]</code></p>
<p><a href="https://i.stack.imgur.com/DVaw4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DVaw4.png" alt="enter image description here" /></a></p>
| Bob Hanlon | 9,362 | <pre><code>RegionPlot[x^2 < y^3 + 1 && y^2 < x^3 + 1,
{x, -2, 5}, {y, -2, 5},
PlotLegends ->
Placed[SwatchLegend[{x^2 < y^3 + 1 && y^2 < x^3 + 1}], {0.3, .07}]]
</code></pre>
<p><a href="https://i.stack.imgur.com/mZ5Q7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mZ5Q7.png" alt="enter image description here" /></a></p>
|
962,287 | <p>I am trying to isolate x in the equation $$(x-20)^{2} = -(y-40)^{2} - 525.$$ How can I do it?</p>
| please delete me | 168,166 | <p>If $x$ and $y$ are real, the right side is negative while the left side is non-negative, so the equation never holds.</p>
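<p>A quick numerical illustration (a sketch; the sampled $y$ values are an arbitrary finite range): the right-hand side never exceeds $-525$, so it can never equal the non-negative left-hand side.</p>

```python
rhs_values = [-(y - 40) ** 2 - 525 for y in range(-100, 200)]
max_rhs = max(rhs_values)   # attained at y = 40
```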
|
19,586 | <p>I am looking for resources for teaching math modeling to high school teachers with rusty math backgrounds. It will be a 6-week course. Some tips/directions on simple projects would be helpful. I would want to introduce coding for numerically solving systems of ODEs.</p>
| mweiss | 29 | <p>I would recommend checking in with the <a href="https://www.mtsu.edu/jstrayer/modules/modules2.php" rel="nofollow noreferrer"><em>MODULE</em><span class="math-container">$(S^2)$</span></a> project (Mathematics Of Doing, Understanding, Learning and Educating for Secondary Schools). A few notes:</p>
<ul>
<li>This project is currently developing materials for Geometry, Algebra, Statistics, and Modeling. Obviously, it is the last of these that would be of interest for you. You can find links to some sample materials at the the bottom of the page I linked to above.</li>
<li>The program materials are designed for use with preservice teachers, but might also work very well with the "rusty teacher" you have in mind.</li>
<li>The materials are still under development, and are therefore something of a work in progress. I am not certain, but I believe the Modeling team is currently seeking partners to pilot their materials.</li>
<li>Full disclosure: I am a member of the Geometry writing team, and have piloted the Algebra materials. However, I don't have any direct involvement with the Modeling group.</li>
</ul>
<p>Another resource you might find helpful is the recently-published book <em><a href="https://www.routledge.com/The-Learning-and-Teaching-of-Mathematical-Modelling/Niss-Blum/p/book/9781138730700" rel="nofollow noreferrer">The Learning and Teaching of Mathematical Modelling</a></em> by Mogens Niss and Werner Blum (Routledge, 2020), part of the <a href="https://www.routledge.com/IMPACT-Interweaving-Mathematics-Pedagogy-and-Content-for-Teaching/book-series/IMPACT" rel="nofollow noreferrer">IMPACT (Interweaving Mathematics Pedagogy and Content for Teaching) book series</a>.</p>
|
87,583 | <p>Next task to complete:</p>
<ul>
<li><p>Count the <code>*</code> symbols in an expression such as <code>a + s^2*b - c/y + o^3 + n*m*u</code> (in this case the count of <code>*</code> should be 6)</p></li>
<li><p>Powers such as $o^3$ should be expanded to $o*o*o$</p></li>
</ul>
<p>I tried, but my code is pretty ugly.</p>
<p><img src="https://i.stack.imgur.com/f66j0.png" alt="enter image description here"></p>
| Andy Ross | 43 | <p>This doesn't give the result you are looking for exactly because it uses the full form of the expression you give it.</p>
<pre><code>SetAttributes[countTimes, HoldAll];
countTimes[expr_] := Block[{Times, Power, power, times},
power[a_, b_ /; b > 0] := Nest[times[a, #] &, a, b - 1];
power[a_, b_ /; b < 0] := 1/power[a, -b];
power[a_, 0] := 1;
times[a___, b_times] := times[Sequence @@ b, a];
Total[(Length /@
Extract[#, Position[#, times[___]]] &[(expr /.
Power -> power) /. Times -> times]) - 1]
]
expr = a + s^2*b - c/y + o^3 + n*m*u
countTimes[expr]
(*8*)
</code></pre>
<p>The reason this gives 8 instead of 6 is because of the term <code>-c/y</code> which in full form is <code>Times[-1,c,Power[y,-1]]</code>. If you want to treat this specially you will need to add definitions to account for such patterns.</p>
|
87,583 | <p>Next task to complete:</p>
<ul>
<li><p>Count the <code>*</code> symbols in an expression such as <code>a + s^2*b - c/y + o^3 + n*m*u</code> (in this case the count of <code>*</code> should be 6)</p></li>
<li><p>Powers such as $o^3$ should be expanded to $o*o*o$</p></li>
</ul>
<p>I tried, but my code is pretty ugly.</p>
<p><img src="https://i.stack.imgur.com/f66j0.png" alt="enter image description here"></p>
| Mr.Wizard | 121 | <p>Accepting that in <em>Mathematica</em> <code>-c/y</code> is automatically converted to <code>-1*c*y^-1</code> and permitting the result shown in Andy's answer I believe we can use a simpler approach, at least for the kind of expression given in example.</p>
<p>Define <code>rules</code> that determine how a <code>Times</code> or <code>Power</code> expression should be counted, then use <a href="http://reference.wolfram.com/language/ref/Cases.html" rel="nofollow"><code>Cases</code></a> to find all instances in you expression and total them with <a href="http://reference.wolfram.com/language/ref/Tr.html" rel="nofollow"><code>Tr</code></a>:</p>
<pre><code>rules = {
_*x__ :> Length@{x},
_^n_?Positive :> n - 1
};
expr = a + s^2*b - c/y + o^3 + n*m*u;
Tr @ Cases[expr, #, -2] & /@ rules
Tr @ %
</code></pre>
<blockquote>
<pre><code>{5, 3}
8
</code></pre>
</blockquote>
<p>As a single function:</p>
<pre><code>fn[expr_] :=
Tr @ Cases[expr, #, -2] & /@ {_*x__ :> Length@{x}, _^n_?Positive :> n - 1} // Tr
</code></pre>
<hr>
<h3>String conversion</h3>
<p>If you prefer a string processing result, now accepting that the form <em>Mathematica</em> uses may seem rather arbitrary, I propose:</p>
<pre><code>stringfn[expr_] :=
StringCases[
ToString[expr, InputForm],
{"*" :> 1, "^" ~~ d__?DigitQ :> FromDigits[d] - 1}
] // Tr
a + s^2*b - c/y + o^3 + n*m*u // stringfn
</code></pre>
<blockquote>
<pre><code>6
</code></pre>
</blockquote>
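<p>For comparison, the same string-based count transfers almost verbatim to other languages; here is a Python sketch over the <code>InputForm</code>-style string (an editorial illustration, with the caveat already noted that <code>-c/y</code> contributes nothing in string form):</p>

```python
import re

def count_mults(expr: str) -> int:
    # each literal '*' counts once; each '^d' counts as d - 1 multiplications
    total = expr.count("*")
    for d in re.findall(r"\^(\d+)", expr):
        total += int(d) - 1
    return total

result = count_mults("a + s^2*b - c/y + o^3 + n*m*u")
```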
|
2,893,568 | <p>I need some help finding the standard deviation using Chebyshev's theorem. Here's the problem:</p>
<blockquote>
<p>You have concluded that at least $77.66\%$ of the $3,075$ runners took between $60.5$ and $87.5$ minutes to complete the $10$ km race. What was the standard deviation of these $3,075$ runners?</p>
</blockquote>
<p>I set up the formula as follows:</p>
<p>$$.7766 = 1 - \frac{1}{k^2}$$</p>
<p>I got $k = 2.115721092$, which makes some sense because I know that a standard deviation of $2$ yields $75\%$, so I expected a slightly higher percentage $(77.66)$ to yield a slightly higher standard deviation.</p>
<p>Thanks for any hints.</p>
| Arnaud Mortier | 480,423 | <p>Given a random variable $X$ of finite expectation $\mu$ and standard deviation $\sigma$, Chebyshev's theorem states that $$P(X\not\in (\mu-k\sigma,\mu+k\sigma))\leq \frac{1}{k^2}$$</p>
<blockquote>
<p>The probability of $X$ lying at least $k$ standard deviations away
from the mean is less than or equal to $\frac{1}{k^2}$.</p>
</blockquote>
<p>Given the stated conclusion, it must be that $\mu=\frac{60.5+87.5}{2}=74$ and $k\sigma=87.5-74=13.5.$</p>
<p>As for the value of $k$, your equation is correct: $$77.66\%=1-\frac 1{k^2}$$
$$\implies k\simeq2.11572109187$$</p>
<p>Therefore $$\sigma=\frac{13.5}{k}\simeq 6.38$$</p>
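<p>The arithmetic above is easy to reproduce (a sketch):</p>

```python
import math

mu = (60.5 + 87.5) / 2           # midpoint of the interval
half_width = 87.5 - mu           # this equals k * sigma
k = 1 / math.sqrt(1 - 0.7766)    # from 0.7766 = 1 - 1/k^2
sigma = half_width / k
```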
|
3,773,695 | <p>I have been trying to get some upper bound on the coefficient of <span class="math-container">$x^k$</span> in the polynomial
<span class="math-container">$$(1-x^2)^n (1-x)^{-m}, \text{ $m \le n$}.$$</span></p>
<p>A straightforward calculation shows that for even <span class="math-container">$k$</span>, the coefficient can be expressed as
<span class="math-container">$$\sum_{i=0}^{k/2} (-1)^i \binom{n}{i} (-1)^{k-2i} \binom{-m}{k-2i} = \sum_{i=0}^{k/2} (-1)^i\binom{n}{i} \binom{m+k-2i-1}{k-2i}$$</span>
and therefore simply using <span class="math-container">$\binom{n}{k} \le n^{k}$</span>, one gets a bound of
<span class="math-container">$$(k/2+1) (n+(m+k)^2)^{\frac{k}{2}} .$$</span></p>
<p>I'm wondering if one could get a better bound, ideally with a better dependence on <span class="math-container">$k$</span>?</p>
| Ned | 67,710 | <p>Consider <span class="math-container">$y=x^3$</span> and <span class="math-container">$y=x^3-x$</span>. For each one:</p>
<p>Let <span class="math-container">$T$</span> be the tangent line through the inflection point at the origin.</p>
<p>Let <span class="math-container">$L$</span> be the line through the origin rotated <span class="math-container">$45$</span> degrees counter clockwise from <span class="math-container">$T$</span>.</p>
<p>Let <span class="math-container">$P$</span> be the point (to the right) where <span class="math-container">$L$</span> crosses the cubic again (<span class="math-container">$(1,1)$</span> and <span class="math-container">$(1,0)$</span> respectively).</p>
<p>Let <span class="math-container">$X$</span> be the angle between <span class="math-container">$L$</span> and the (tangent line to the) curve at point <span class="math-container">$P$</span>.</p>
<p>If the curves were geometrically similar, <span class="math-container">$X$</span> would be the same angle for both curves, but it's <span class="math-container">$Arctan(3)-Arctan(1)$</span> for the first curve and <span class="math-container">$Arctan(2) - Arctan(0)$</span> for the second, and those two angles are not the same (check it numerically).</p>
<p>So they are not geometrically similar.</p>
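<p>The numerical check suggested above, sketched in Python:</p>

```python
import math

angle1 = math.atan(3) - math.atan(1)   # X for y = x^3 at P = (1, 1)
angle2 = math.atan(2) - math.atan(0)   # X for y = x^3 - x at P = (1, 0)
```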
|
34,724 | <h3>Overview</h3>
<p>For integers n ≥ 1, let T(n) = {0,1,...,n}<sup>n</sup> and B(n)= {0,1}<sup>n</sup>. Note that |T(n)|=(n+1)<sup>n</sup> and |B(n)| = 2<sup>n</sup>.
A certain set S(n) ⊂ T(n), defined below, contains B(n). The question is about the growth rate of |S(n)|. Does it grow exponentially, like |B(n)|, so that |S(n)| ~ c<sup>n</sup> for some c, or does it grow superexponentially, so that c<sup>n</sup>/|S(n)| approaches 0 for all c> 0?</p>
<h3>Definition</h3>
<p>The set S(n) is defined as follows: an n-tuple t = (t<sub>1</sub>,t<sub>2</sub>,...,t<sub>n</sub>) ∈ T(n) is in S(n) if and only if t<sub>i+j</sub> ≤ j whenever 1 ≤ j < t<sub> i</sub>. For example, if t ∈ T(10) with t<sub> 4</sub>=5, t<sub> 5</sub> can be at most 1, t<sub> 6</sub> can be at most 2, t<sub> 7</sub> can be at most 3, and t<sub> 8</sub> at most 4, but there is no restriction (at least not due to the value of t<sub> 4</sub>) on t<sub> 9</sub> or t<sub> 10</sub>; t<sub> 9</sub> and t<sub> 10</sub> can have any values in {0,1,...,10}.</p>
<h3>Alternate formulation (counting triangles)</h3>
<p>The elements of S(n) can be put into one-to-one correspondence with certain configurations of n right isosceles triangles, so that |S(n)| counts the number of such configurations. </p>
<p>For integers k>0 (size) and v≥0 (vertical position), let Δ <sub>k,v</sub> be the triangle with vertices (0,v), (k,k+v), and (k,v). (Δ<sub>0,v</sub> is the degenerate triangle with all three vertices at (0,v).)</p>
<p>Now associate with an n-tuple t = (t<sub>1</sub>,t<sub>2</sub>,...,t<sub>n</sub>) ∈ T(n) the set D<sub>t</sub> = $\lbrace\Delta_{t_k,k}:1\le k \le n\rbrace$. (That's "\lbrace\Delta_{t_k,k}:1\le k \le n\rbrace," if you can't read it.) The set D<sub> t</sub> contains n isosceles right triangles that extend to the right of the y-axis, one triangle at each of the points (0,k) for 1 ≤ k ≤ n.</p>
<p>The tuple t is in S(n) if and only if the triangles in D<sub> t</sub> have disjoint interiors. (This isn't hard to show, and if it is, I've probably made a mistake in my definitions, so let me know.) Thus |S(n)| counts the number of ways one can arrange n isosceles right triangles of various sizes (between size zero and size n) at n consecutive integer points on the y-axis so the triangle can extend to the right and up without overlapping. Triangles of the same size are indistiguishable for the purpose of counting the number of arrangements. (It may help to think of right isosceles pennants attached at an acute-angle corner to a flagpole in a stiff wind.)</p>
<h3>Question</h3>
<p>Does |S(n)| grow exponentially with n, or faster?</p>
<h3>Calculations</h3>
<p>If I’ve counted correctly, the first few terms of the sequence {|S(n)|} beginning with n=1 are 2, 8, 38, 184, 904, and 4384. This sequence (and some sequences resulting from minor variations of the problem) fails to match anything in the Online Encyclopedia of Integer Sequences.</p>
<p>Links to similar counting problems mentioned or solved in the literature would help. </p>
<p>Thanks!</p>
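<p>(Added check: the values quoted above can be confirmed by brute force over T(n), feasible only for small n since |T(n)| = (n+1)<sup>n</sup>; the code below is a sketch of that enumeration.)</p>

```python
from itertools import product

def count_S(n):
    """Brute-force count of |S(n)|: tuples t with t_{i+j} <= j whenever 1 <= j < t_i."""
    total = 0
    for t in product(range(n + 1), repeat=n):
        ok = True
        for i in range(n):                 # 0-based index for t_{i+1}
            for j in range(1, t[i]):       # constraints 1 <= j < t_i
                if i + j < n and t[i + j] > j:
                    ok = False
                    break
            if not ok:
                break
        if ok:
            total += 1
    return total

# the question reports 904 and 4384 for n = 5, 6 as well
counts = [count_S(n) for n in range(1, 5)]
```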
| Louigi Addario-Berry | 3,401 | <p><b>Edit</b>: I worked out the details of this exponential upper bound a bit more precisely. It is the case that $S(n) \leq 11*10^n$. </p>
<p>I can only prove an exponential upper bound (rather than an exponential asymptotic), but it can be obtained by weakening your restriction on the vectors $S(n)$ as follows: the value of $t_m$ restricts $t_{m+i}$ to be less than $t_m$ for $i < t_m$. (This replaces your "triangular" barriers with "backwards-L-shaped" barriers.) I'll call this new set of vectors $U(n)$.
In fact, it's then useful to generalize slightly -- I'll write $U(n,m)$ for the set of vectors $(u_1,\ldots,u_n) \in \lbrace 0,1,\ldots,m\rbrace^{n}$ that have the above kind of restriction. (So in this notation we have $S(n) \subset U(n) = U(n,n)$). For any $n$ and $m$, by considering the value of the first element of the vector, we have the recursion
$$
U(n+1,m) = 2U(n,m) + \sum_{k=2}^{\min(n,m)} ( U(k-1,k-1) + U(n+1-k,m) ) + \sum_{k=\min(n,m)+1}^m U(n,k).
$$
The first term on the RHS corresponds to when $u_1=0$ or $u_1=1$, and the second corresponds to when $u_1 > n$. </p>
<p>From this recursion you can get exponential upper bounds by induction. For the base cases, note that $U(1,m)=m+1$ for all $m$, and $U(n,1)=2^n$ for all $n$ (though I don't think the latter is necessary for the following argument). If we then try to get bounds of the form $U(n,m) \leq (m+1)a^mb^n$, then the base case holds as long as $a \geq 1$ and $b \geq 2$. However, the induction is easy if you take $a=2$ and $b=5$; then just by plugging in the inductive bounds and doing a few geometric sums, you derive the same bound for $U(n,m)$. </p>
<p>In particular, the above argument shows that $U(n,n) \leq 11*10^n$, which is likely far from tight. Of course, you can probably figure out precise asymptotics for $U$ from the above recurrence with a little more work, but given that it won't yield asymptotics for $T$ maybe just the upper bound is enough. </p>
|
3,702,649 | <p>It's obvious that it's symmetric because <span class="math-container">$a_{\left(i+1\right)j}=\left(m+1\right)\left(i+1+j\right) = a_{i\left(j+1\right)}=\left(m+1\right)\left(i+j+1\right)$</span>, but how can I prove that it's a Latin square and that it's diagonal consists of different elements?</p>
<p>I thought about showing that the sum of the elements in any row/column is equal to 0+1+...+n-1, but it's not working out</p>
<p><span class="math-container">$\left(m+1\right)\left[\left(i+j\right)+\left(i+j+1\right)+...+\left(i+j+n-1\right)\right] mod(n)$</span></p>
<p><span class="math-container">$\left(m+1\right)\left[n\left(i+j\right)+\left(1\right)+...+\left(n-1\right)\right] mod(n)$</span></p>
<p><span class="math-container">$\left(m+1\right)\left[n\left(i+j\right)+\frac{n\left(n-1\right)}{2}\right] mod(n)$</span></p>
<p><span class="math-container">$\left(m+1\right) \left[0\right]mod(n)$</span></p>
<p>Which will get me 0 or k*n where k is an integer, so we only know that if summing up any row/column (I get the same for other rows also)</p>
| Jeane Z | 818,754 | <p><span class="math-container">$(m+1)$</span> is any number in <span class="math-container">$\mathbb{Z}$</span>. I think there must be a relation between <span class="math-container">$m$</span> and <span class="math-container">$n$</span> to get the required property. Also, I assume the operation $*$ is usual multiplication.</p>
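To make the suspected relation concrete: a brute-force check (my own addition, not part of the original answer) suggests the condition is exactly $\gcd(m+1,n)=1$, at least for small cases — multiplication by a unit mod $n$ permutes the residues:

```python
from math import gcd

def is_latin(m, n):
    # a_{ij} = (m+1)*(i+j) mod n is a Latin square iff every row and
    # every column is a permutation of {0, 1, ..., n-1}.
    rows_ok = all(len({(m + 1) * (i + j) % n for j in range(n)}) == n
                  for i in range(n))
    cols_ok = all(len({(m + 1) * (i + j) % n for i in range(n)}) == n
                  for j in range(n))
    return rows_ok and cols_ok

# Conjectured relation: Latin exactly when gcd(m+1, n) = 1.
for n in range(2, 10):
    for m in range(0, 10):
        assert is_latin(m, n) == (gcd(m + 1, n) == 1)
print("gcd(m+1, n) = 1 condition confirmed for n < 10")
```

This only verifies small cases numerically; proving it amounts to noting that $j \mapsto (m+1)(i+j) \bmod n$ is a bijection iff $m+1$ is invertible mod $n$.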
|
438,070 | <p>I stumbled across this question and I cannot figure out how to use the value of $\cos(\sin 60^\circ)$ which would be $\sin 0.5$ and $\cos 0.5$ seems to be a value that you can only calculate using a calculator or estimate at the very best.</p>
| Emanuele Paolini | 59,304 | <p>Is any linear combination of bounded sequences, a bounded sequence? If yes, $F_1$ is a linear subspace.</p>
<p>Notice that $F_6$ is not a linear subspace...</p>
|
564,195 | <p>Using continuity I was able to show the sequence $x_0 = 1$, $x_{n+1} = \sin(x_n)$ converges to 0, but I was wondering if there was a way to prove it using only properties and theorems related to sequences and series, without using continuity.</p>
<p>So far, I know the sequence is monotonically decreasing and bounded below by 0, so it must converge to its infimum. From here I'm not exactly sure how to show 0 is the infimum of this set of numbers.</p>
<p>Alternatively, I could check convergence to $0$ by comparison, but no sequences come to mind that are greater than the given sequence for all $n \geq N$ and converge to $0$.</p>
<p>I've already seen the answers at the following:</p>
<p><a href="https://math.stackexchange.com/questions/45283/lim-n-to-infty-sin-sin-sin-n">Compute $ \lim\limits_{n \to \infty }\sin \sin \dots\sin n$</a> </p>
<p><a href="https://math.stackexchange.com/questions/524638/prove-that-sin-sin-sinx-converges-asymptotically-to-zero">Prove that $\sin(\sin...(\sin(x))..)$ converges asymptotically to zero</a> </p>
| Ben Grossmann | 81,360 | <p>We know that for all $x \in (0,1]$, we have
$$
0 < \sin x < x
$$
From there, you can show that $x_0,x_1,\dots$ is a <strong>strictly</strong> monotonically decreasing sequence. Is this enough? That is, can we forgo continuity? No. As a counterexample, consider the sequence $x_k = f(x_{k-1}); x_0 = 1$ with
$$
f(x) =
\begin{cases}
\sin(x) + 0.1 & x^* < x \leq 1\\
\sin(x) & 0 < x \leq x^*
\end{cases}
$$
where $x^*$ is the positive solution to $x^* = \sin(x^*) + 0.1$.
We note that $x_k$ is a strictly monotonically decreasing sequence converging to $x^*$ (and therefore bounded below by $0$), and we even have $f(x)<x$ for every point in $(0,1]$. However, because of the lack of continuity, this is not enough to "force" $\{x_k\}$ to converge to $0$.</p>
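A numerical sketch of this counterexample (my own illustration, not part of the original argument; the threshold $x^*$ is located here by fixed-point iteration):

```python
import math

# Locate the positive fixed point x* of x = sin(x) + 0.1 by fixed-point
# iteration (the map is a contraction near x*, since |cos(x*)| < 1).
x_star = 1.0
for _ in range(10000):
    x_star = math.sin(x_star) + 0.1

def f(x):
    # The discontinuous map from the counterexample.
    return math.sin(x) + 0.1 if x > x_star else math.sin(x)

# Iterate x_{k+1} = f(x_k) from x_0 = 1: strictly decreasing, yet the
# iterates settle near x* (roughly 0.85), not near 0.
x = 1.0
for _ in range(60):
    x = f(x)
print(x_star, x)
```

The iterates stay strictly above $x^*$ and converge to it, so the strictly decreasing, bounded-below sequence never approaches $0$.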
|
148,032 | <p>What is the larger of the two numbers?</p>
<p>$$\sqrt{2}^{\sqrt{3}} \mbox{ or } \sqrt{3}^{\sqrt{2}}\, \, \; ?$$
I solved this, and I think that is an interesting elementary problem. I want different points of view and solutions. Thanks!</p>
| PolyaPal | 22,004 | <p>In general, we can state two pertinent results: (1) if $a$ and $b$ are positive real numbers such that $b > a \ge e$, then $a^b > b^a$; (2) if $a$ and $b$ satisfy $e \ge b > a > 0$, then $b^a > a^b$. Here $e \ge \sqrt{3} > \sqrt{2} > 0$, so case (2) applies and $\sqrt{3}^{\sqrt{2}} > \sqrt{2}^{\sqrt{3}}$.</p>
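A quick numerical check of the original comparison, plus sample instances of the two general results (the sample values are arbitrary illustrative choices):

```python
import math

a, b = math.sqrt(2), math.sqrt(3)
lhs = a ** b   # sqrt(2)^sqrt(3)
rhs = b ** a   # sqrt(3)^sqrt(2)
print(lhs, rhs, lhs < rhs)  # sqrt(3)^sqrt(2) is the larger one

# Sample instances of the two general results:
# (1) b > a >= e  gives  a^b > b^a
assert 3.0 ** 4.0 > 4.0 ** 3.0
# (2) e >= b > a > 0  gives  b^a > a^b  (the case of sqrt(2), sqrt(3))
assert 2.0 ** 1.5 > 1.5 ** 2.0
```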
|
88,469 | <p>These vectors form a basis on $\mathbb R^3$: $$\begin{bmatrix}1\\0\\-1\\\end{bmatrix},\begin{bmatrix}2\\-1\\0\\\end{bmatrix} ,\begin{bmatrix}1\\2\\1\\\end{bmatrix}$$</p>
<p>Can someone show how to use the Gram-Schmidt process to generate an orthonormal basis of $\mathbb R^3$?</p>
| Community | -1 | <p>Let's look at this in two dimensions first. After this you should know how to do it in three!</p>
<p>Suppose that you are working in the plane and have two linearly independent vectors $v$ and $w$. You want to make $v$ and $w$ orthogonal to each other in terms of the standard euclidean inner product. How can you do it?</p>
<p>Well notice that we can subtract a certain multiple of $w$ from $v$, let's call it $cw$ where $c$ is some constant so that $v - cw$ will be orthogonal to $w$. In other words, after some algebraic manipulation we find that $c$ must be equal to</p>
<p>$$\frac{\langle v,w \rangle }{\langle w,w \rangle}$$</p>
<p>simply by solving the equation $\langle v - cw, w \rangle = 0$ for $c$. Then now you will have an orthogonal basis for $\mathbb{R}^2$, namely the vectors </p>
<p>$$w \quad \text{and} \quad v - \frac{\langle v,w \rangle }{\langle w,w \rangle} w.$$</p>
<p>To find an orthonormal basis, you just need to divide through by the length of each of the vectors.</p>
<p>In $\mathbb{R}^3$ you just need to apply this process recursively as shown in the wikipedia link in the comments above. However you first need to check that your vectors are linearly independent! You can check this by calculating the determinant of the matrix whose columns are the vectors that you have stated in your question.</p>
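A minimal sketch of the recursive process on the three given vectors, assuming the standard Euclidean inner product (pure Python, no external libraries):

```python
def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def gram_schmidt(vectors):
    # Classical Gram-Schmidt: subtract projections onto the previously
    # built unit vectors, then normalize the remainder.
    ortho = []
    for v in vectors:
        w = list(v)
        for u in ortho:
            c = dot(v, u)            # u is already unit length
            w = [wi - c * ui for wi, ui in zip(w, u)]
        norm = dot(w, w) ** 0.5      # nonzero since the input is independent
        ortho.append([wi / norm for wi in w])
    return ortho

basis = gram_schmidt([[1, 0, -1], [2, -1, 0], [1, 2, 1]])
for q in basis:
    print([round(c, 4) for c in q])
```

For these particular vectors the third one is already orthogonal to the first two orthonormalized vectors, so the result is $[1,0,-1]/\sqrt{2}$, $[1,-1,1]/\sqrt{3}$, $[1,2,1]/\sqrt{6}$.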
|
1,243,661 | <p>Let $\Theta$ be an unknown random variable with mean $1$ and variance $2$. Let $W$ be another unknown random variable with mean $3$ and variance $5$. $\Theta$ and $W$ are independent.</p>
<p>Let: $X_1=\Theta+W$ and $X_2=2\Theta+3W$. We pick measurement $X$ at random, each having probability $\frac{1}{2}$ of being chosen. This choice is independent of everything else.</p>
<p>How does one calculate $Var(X)$ in this case?
Is </p>
<p>$$
Var(X)\;\; = \;\; \frac{1}{2}(Var(\Theta)+Var(W))+\frac{1}{2}(Var(2\Theta)+Var(3W)) \;\; =\;\; \frac{1}{2}(5Var(\Theta)+10Var(W))?
$$</p>
| drhab | 75,923 | <p><strong>Hint</strong>:</p>
<p>Denoting the random index by $I$ we have:</p>
<p>$$\mathbb EX=\mathbb E(X\mid I=1)P(I=1)+\mathbb E(X\mid I=2)P(I=2)=\mathbb EX_1\cdot\frac12+\mathbb EX_2\cdot\frac12$$and:</p>
<p>$$\mathbb EX^2=\mathbb E(X^2\mid I=1)P(I=1)+\mathbb E(X^2\mid I=2)P(I=2)=\mathbb EX_1^2\cdot\frac12+\mathbb EX_2^2\cdot\frac12$$
Now use the well known identity:
$$\text{Var}(X)=\mathbb EX^2-(\mathbb EX)^2$$
The equalities $X_1=\Theta+W$ and $X_2=2\Theta+3W$ can be used to find $\mathbb EX_i$ and $\mathbb EX_i^2$ for $i=1,2$.</p>
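Carrying the hint through with the stated numbers gives a concrete value (a sketch; it assumes the equal-probability mixture interpretation described in the hint):

```python
# Given data: E[Theta] = 1, Var(Theta) = 2, E[W] = 3, Var(W) = 5,
# with Theta and W independent.
E_T, V_T = 1, 2
E_W, V_W = 3, 5

# X1 = Theta + W and X2 = 2*Theta + 3*W; independence makes variances add.
E_X1, V_X1 = E_T + E_W, V_T + V_W
E_X2, V_X2 = 2 * E_T + 3 * E_W, 4 * V_T + 9 * V_W

# Second moments via E[Y^2] = Var(Y) + (E[Y])^2.
E_X1_sq = V_X1 + E_X1 ** 2
E_X2_sq = V_X2 + E_X2 ** 2

# Mix with probability 1/2 each, then Var(X) = E[X^2] - (E[X])^2.
E_X = 0.5 * E_X1 + 0.5 * E_X2
E_X_sq = 0.5 * E_X1_sq + 0.5 * E_X2_sq
var_X = E_X_sq - E_X ** 2
print(var_X)  # → 42.25
```

Note this differs from the formula proposed in the question, which would give $\frac12(7)+\frac12(53)=30$ and misses the spread between $\mathbb EX_1=4$ and $\mathbb EX_2=11$.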
|
3,963,479 | <p>In a quadrilateral <span class="math-container">$ABCD$</span>, there is an inscribed circle centered at <span class="math-container">$O$</span>. Let <span class="math-container">$F,N,E,M$</span> be the points on the circle that touch the quadrilateral, such that <span class="math-container">$F$</span> is on <span class="math-container">$AB$</span>, <span class="math-container">$N$</span> is on <span class="math-container">$BC$</span>, and so on. It is known that <span class="math-container">$AF=5$</span> and <span class="math-container">$EC=3$</span>. Let <span class="math-container">$P$</span> be the intersection of <span class="math-container">$AC$</span> and <span class="math-container">$MN$</span>. Find the ratio <span class="math-container">$AP:PC$</span>.</p>
<p>I know that <span class="math-container">$AM=AF=5$</span> and <span class="math-container">$CN=CE=3.$</span> The answer is equivalent to the ratio of the areas <span class="math-container">$[ADP]:[DPC]$</span>. I cannot continue on from this point. Would anyone please help?<a href="https://i.stack.imgur.com/LXdXy.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LXdXy.png" alt="enter image description here" /></a></p>
| sirous | 346,566 | <p><a href="https://i.stack.imgur.com/exKdU.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/exKdU.jpg" alt="enter image description here" /></a></p>
<p>COMMENT:</p>
<p>As can be seen, the ratio <span class="math-container">$\frac{AP}{PC}$</span> depends on the position of C relative to A. So more constraints must be included, for example "A, O (the center of the circle) and C are collinear"; otherwise there can be infinitely many solutions.</p>
|
1,567,229 | <p>Let Y1 and Y2 have the joint probability density function given by:</p>
<p>$ f (y_1, y_2) = 6(1−y_2), \text{for } 0≤y_1 ≤y_2 ≤1$</p>
<p>Find $P(Y_1≤3/4,Y_2≥1/2).$</p>
<p>Answer:</p>
<p>$$\int_{1/2}^{3/4}\int_{y_1}^{1}6(1− y_2 )dy_2dy_1 + \int_{1/2}^{1}\int_{1/2}^{1}6(1− y_2 )dy_1dy_2 = 7/64 + 24/64 = 31/64 $$</p>
<p>My problem is I don't understand the logic for the second part. We know for a fact that:</p>
<p>$$P(a_1 ≤ Y_1 ≤ a_2, b_1 ≤ Y_2 ≤ b_2) = \int^{b_2}_{b_1}\int^{a_2}_{a_1}6(1− y_2 )dy_1dy_2$$</p>
<p>Thus, I can make the following equations</p>
<p>$$ \int_{1/2}^{3/4}\int_{y_1}^{1}6(1− y_2 )dy_2dy_1 = P(1/2 \leq Y_2 \leq 1, 1/2 \leq Y_1 \leq 3/4 | Y_1 \leq Y_2) $$</p>
<p>This part I understand the logic of why we would add this integral to the expression</p>
<p>$$\int_{1/2}^{1}\int_{1/2}^{1}6(1− y_2 )dy_1dy_2 = P(1/2 \leq Y_2 \leq 1, 1/2 \leq Y_1 \leq 1)$$. </p>
<p>I don't understand the logic of how adding both expressions leads to $P(Y_1≤3/4,Y_2≥1/2).$. In fact, I don't see the logic of choosing $P(1/2 \leq Y_2 \leq 1, 1/2 \leq Y_1 \leq 1)$.</p>
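As a purely numerical sanity check of the textbook's value $31/64$ (my own addition, a brute-force midpoint Riemann sum over the support region):

```python
# Midpoint Riemann sum for P(Y1 <= 3/4, Y2 >= 1/2) with density
# f(y1, y2) = 6*(1 - y2) on the triangle 0 <= y1 <= y2 <= 1.
N = 1000
h = 1.0 / N
total = 0.0
for i in range(N):
    y1 = (i + 0.5) * h
    for j in range(N):
        y2 = (j + 0.5) * h
        if y1 <= y2 and y1 <= 0.75 and y2 >= 0.5:
            total += 6 * (1 - y2) * h * h
print(total, 31 / 64)  # the sum approaches 31/64 = 0.484375
```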
| Pieter21 | 170,149 | <p>(1) Permute the non-zeroes (5*4), pick $7 \choose 3$ places to insert zeroes: 5*4*35 = 700. </p>
<p>(2) Out of $7 \choose 3$ selections for the zeroes, there is only 1 correct, so $1/35$.</p>
|
1,697,206 | <p>In the figure, $BG=10$, $AG=13$, $DC=12$, and $m\angle DBC=39^\circ$.</p>
<p>Given that $AB=BC$, find $AD$ and $m\angle ABC$.</p>
<p>Here is the figure:</p>
<p><a href="https://i.stack.imgur.com/u05wa.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/u05wa.jpg" alt="enter image description here"></a></p>
<p>I am inclined to say that since $\overline{AB}\simeq \overline{BC}$, both triangles share side $\overline{BD}$, and they also have a $90^\circ$ angle in common, then $AD=DC$ and $m\angle ABD=m\angle DBC=39^\circ$. However, I am not making the connection of exactly why my conclusion is true. </p>
<p>How can I show this is true without using trigonometry?</p>
<p>Thank you!</p>
| Intelligenti pauca | 255,730 | <p>You can use the RHS condition for the congruence of two right-angled triangles:</p>
<p><a href="https://en.wikipedia.org/wiki/Congruence_(geometry)#Congruence_of_triangles" rel="nofollow">https://en.wikipedia.org/wiki/Congruence_(geometry)#Congruence_of_triangles</a></p>
<p>It's also called "hypotenuse-leg test (HL)":</p>
<p><a href="http://www.mathopenref.com/congruenthl.html" rel="nofollow">http://www.mathopenref.com/congruenthl.html</a></p>
<p>You then have $BCD\cong BAD$ and that's all.</p>
|
152,336 | <p>Let $G$ be an algebraic group. Choose a Borel subgroup $B$ and
a maximal Torus $T \subset B$. Let $\Lambda$ be the set of weights wrt $T$ and let $\mathfrak{g}$ be the lie algebra of $G$.
Now, consider the following two sets,</p>
<p>1) $\Lambda^+$, the set of dominant weights wrt $B$,</p>
<p>2) The set $N_{o,r}$ of pairs $(e,r)$ (identified up to $\mathfrak{g}$-conjugacy), where $e$ is a nilpotent element in $\mathfrak{g}$ and $r$ is an irreducible representation of the centralizer $Z_e$ (in $\mathfrak{g}$) of $e$.</p>
<p>There is a bijective map between the two sets that plays an important
role in the representation theory of $G$ and this is often called the
Lusztig-Vogan bijection,</p>
<p>$\rho_{LV} : \Lambda^+ \rightarrow N_{o,r}$.</p>
<p>In recent works, this bijection has been studied by Ostrik,
Bezrukavnikov, Chmutova-Ostrik, Achar, Achar-Sommers (Edit: See links to refs below) using various
different tools. My question however pertains to the motivations that
point to the existence of such a bijection in the first place. As I
understand, the component group $A(O)$ where $O$ is the nilpotent orbit associated to $e$ (under the adjoint action) and a quotient of the component
group $\overline{A(O)}$ play important roles in the
algorithmic description of this bijection (say for example in determining
the map for certain $h \in \Lambda^+$, where $h$ is the Dynkin element
of a nilpotent orbit in the dual lie algebra). One of the original motivations
for the existence of such a bijection seems to have emerged from the
study of primitive ideals in the universal enveloping algebra of g. </p>
<p>My questions are the following :</p>
<ul>
<li><p>How does $\overline{A(O)}$ enter the story from the point of view of the
study of primitive ideals ?</p></li>
<li><p>Are there <em>other</em> representation theoretic motivations that point to the existence of such a bijection ? Here, I am (somewhat vaguely) counting a motivation to be 'different' if its relation to the theory of primitive ideals is nontrivial. </p></li>
</ul>
<p>[Added in Edit] Refs for some recent works on the bijection (in anti-chronological order) : </p>
<ul>
<li><p><em>Local systems on nilpotent orbits and weighted Dynkin diagrams (<a href="http://arxiv.org/abs/math/0201248">link</a>)</em> - P Achar and E Sommers</p></li>
<li><p><em>Calculating canonical distinguished involutions in the affine Weyl groups <a href="http://arxiv.org/abs/math/0106011">(link)</a></em> - T Chmutova and V Ostrik </p></li>
<li><p><em>Quasi-exceptional sets and equivariant coherent sheaves on the nilpotent cone <a href="http://arxiv.org/abs/math/0102039">(link)</a></em> - R Bezrukavnikov</p></li>
</ul>
| Jim Humphreys | 4,231 | <p>Like Jay, I don't see any reasonable way to address all parts of your wide-ranging question. You are looking at the intersection of numerous lines of research, motivated in different ways for different people. For myself, the primary motivation comes indirectly from modular representations of Lie algebras attached to simple algebraic groups in (good) prime characteristic. Here the basic machinery of nilpotent orbits and component groups is essentially the same as in the classical work over <span class="math-container">$\mathbb{C}$</span> which you are implicitly referring to.</p>
<p>There are many ingredients here, suggested in the organization of a conference note I wrote a decade ago <a href="https://arxiv.org/abs/math/0502100" rel="nofollow noreferrer">here</a>. It includes a more extensive list of related papers including those already mentioned in the question. (All of this was heavily influenced by conversations I had with Roman Bezrukavnikov, but the program sketched remains speculative.)</p>
<p>Primitive ideals are almost certainly lurking in the background of what I've written down, as well as in other interpretations of the L-V bijection. In prime characteristic A. Premet has intriguing ideas in some of his papers about reduction mod <span class="math-container">$p$</span> of certain primitive ideals in a characteristic 0 enveloping algebra, which are directly relevant to the modular representation theory. But many of the questions raised, starting with Lusztig's series of papers in the 1980s on cells in affine Weyl groups, remain only partly answered. At least the literature shows a range of motivations and applications involving representation theory, along with a lot of basic machinery. But for Lusztig's canonical quotient of the component group (with your bar notation), you really need to look at the papers by Achar and Sommers. This literature goes in many directions including the Springer correspondence, in spite of having some unity under the surface.</p>
<p>One thing I should emphasize is that Lusztig's bijection requires a transition to an affine Weyl group attached to the Langlands <em>dual group</em>. (This already became part of the modular theory in Verma's work in the early 1970s.) One of Lusztig's basic ideas is to pass from nilpotent orbits to 2-sided cells in the dual type of affine Weyl group.</p>
<p>Finally, it helps to start with the simplest example of this complicated set-up,
where <span class="math-container">$G = \mathrm{SL}_2$</span>. In the Lie algebra you have two nilpotent orbits: the zero orbit and the regular orbit. So the centralizers (in the adjoint group) are respectively the entire group and a unipotent group having only the trivial irreducible representation. The latter pair should be associated with the zero weight (in the root lattice!), whereas the infinitely many irreducible representations of the adjoint group <span class="math-container">$\mathrm{PGL}_2$</span> are usually indexed by the integers <span class="math-container">$\{0,2,4, \dots\}$</span>. But in the bijection here you need to shift these by 2 (<span class="math-container">$=2\rho$</span>). This may seem arbitrary but does make sense in the cell picture and the modular theory.</p>
|
152,336 | <p>Let $G$ be an algebraic group. Choose a Borel subgroup $B$ and
a maximal Torus $T \subset B$. Let $\Lambda$ be the set of weights wrt $T$ and let $\mathfrak{g}$ be the lie algebra of $G$.
Now, consider the following two sets,</p>
<p>1) $\Lambda^+$, the set of dominant weights wrt $B$,</p>
<p>2) The set $N_{o,r}$ of pairs $(e,r)$ (identified up to $\mathfrak{g}$-conjugacy), where $e$ is a nilpotent element in $\mathfrak{g}$ and $r$ is an irreducible representation of the centralizer $Z_e$ (in $\mathfrak{g}$) of $e$.</p>
<p>There is a bijective map between the two sets that plays an important
role in the representation theory of $G$ and this is often called the
Lusztig-Vogan bijection,</p>
<p>$\rho_{LV} : \Lambda^+ \rightarrow N_{o,r}$.</p>
<p>In recent works, this bijection has been studied by Ostrik,
Bezrukavnikov, Chmutova-Ostrik, Achar, Achar-Sommers (Edit: See links to refs below) using various
different tools. My question however pertains to the motivations that
point to the existence of such a bijection in the first place. As I
understand, the component group $A(O)$ where $O$ is the nilpotent orbit associated to $e$ (under the adjoint action) and a quotient of the component
group $\overline{A(O)}$ play important roles in the
algorithmic description of this bijection (say for example in determining
the map for certain $h \in \Lambda^+$, where $h$ is the Dynkin element
of a nilpotent orbit in the dual lie algebra). One of the original motivations
for the existence of such a bijection seems to have emerged from the
study of primitive ideals in the universal enveloping algebra of g. </p>
<p>My questions are the following :</p>
<ul>
<li><p>How does $\overline{A(O)}$ enter the story from the point of view of the
study of primitive ideals ?</p></li>
<li><p>Are there <em>other</em> representation theoretic motivations that point to the existence of such a bijection ? Here, I am (somewhat vaguely) counting a motivation to be 'different' if its relation to the theory of primitive ideals is nontrivial. </p></li>
</ul>
<p>[Added in Edit] Refs for some recent works on the bijection (in anti-chronological order) : </p>
<ul>
<li><p><em>Local systems on nilpotent orbits and weighted Dynkin diagrams (<a href="http://arxiv.org/abs/math/0201248">link</a>)</em> - P Achar and E Sommers</p></li>
<li><p><em>Calculating canonical distinguished involutions in the affine Weyl groups <a href="http://arxiv.org/abs/math/0106011">(link)</a></em> - T Chmutova and V Ostrik </p></li>
<li><p><em>Quasi-exceptional sets and equivariant coherent sheaves on the nilpotent cone <a href="http://arxiv.org/abs/math/0102039">(link)</a></em> - R Bezrukavnikov</p></li>
</ul>
| wky | 14,226 | <p>There are some conversations on the affine Weyl group cells perspective of the bijection. I guess I can contribute a very little bit on the primitive ideals side of the story, if it is not too late to do so.</p>
<p>My first encounter with Lusztig's quotient came from the paper of Barbasch and Vogan in 1985 <a href="http://www.jstor.org/discover/10.2307/1971193?uid=3738176&uid=2&uid=4&sid=21103276421363" rel="nofollow">(here)</a>. It is mainly about finding the (g,K)-modules of a fixed infinitesimal character whose annihilators are the 'largest' possible primitive ideal, which Barbasch-Vogan called 'special unipotent' as in the title of the paper.</p>
<p>These special unipotent representations have a lot to do with the ring of regular function of a nilpotent orbit, R[O]. This is first hinted in Section 12 of Vogan's work <a href="http://math.mit.edu/~dav/assocvarunip.pdf" rel="nofollow">(here)</a>. And Vogan's version of the above bijection is about the structure of R[O], as mentioned in the Achar-Sommers paper. I have carried out some computations on the bijection from this perspective (effectively I can prove a conjecture in the paper). But talking about why Vogan's version of the bijection matches with that of Lusztig's, I think Barbasch and Vogan know much more about it.</p>
|
3,112,682 | <p>I was looking at</p>
<blockquote>
<p><em>Izzo, Alexander J.</em>, <a href="http://dx.doi.org/10.2307/2159282" rel="nofollow noreferrer"><strong>A functional analysis proof of the existence of Haar measure on locally compact Abelian groups</strong></a>, Proc. Am. Math. Soc. 115, No. 2, 581-583 (1992). <a href="https://zbmath.org/?q=an:0777.28006" rel="nofollow noreferrer">ZBL0777.28006</a>.</p>
</blockquote>
<p>which proves existence of the Haar measure for locally compact abelian groups using the Markov-Kakutani theorem.</p>
<p>What I find strange is that the Haar measure is constructed as an element of the dual of <span class="math-container">$C_c(X)$</span>. But for noncompact <span class="math-container">$X$</span> (such as <span class="math-container">$X$</span> being the real numbers <span class="math-container">$\Bbb R$</span>) this must be an unbounded functional (as the Lebesgue measure on <span class="math-container">$\Bbb R$</span> is not finite). It seems like the author has no problem with this, and (without mentioning it further) goes on to define a weak-* topology for this case and even uses Banach-Alaoglu.</p>
<p>I have not seen this being done this way before, am I misunderstanding something or can one define a weak-* topology on the algebraic dual of a TVS without any problems?</p>
| Paras Khosla | 478,779 | <p><span class="math-container">$$z+\frac{1}{z}=2\cos\theta\iff z^2+1=2z\cos\theta \\ z_{1,2}=\cos\theta\pm i\sin\theta=e^{\pm i\theta}\implies \frac{1}{z_{1,2}}=\cos\theta\mp i\sin\theta=e^{\mp i\theta}$$</span>Using De Moivre's formula, we get the following <span class="math-container">$$ z_{1,2}^n=\cos n\theta\pm i\sin n\theta \tag1$$</span> <span class="math-container">$$\frac{1}{z_{1,2}^n}=\cos n\theta \mp i\sin n\theta \tag2$$</span>Adding equations <span class="math-container">$(1)$</span> and <span class="math-container">$(2)$</span>, we get the desired result<span class="math-container">$$z^n+\frac{1}{z^n}=2\cos n\theta$$</span></p>
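A quick numerical spot-check of the identity $z^n + \frac{1}{z^n} = 2\cos n\theta$ for a sample $\theta$ and $n$ (the particular values are arbitrary):

```python
import cmath
import math

theta, n = 0.7, 5
z = cmath.exp(1j * theta)      # z = cos(theta) + i*sin(theta)
lhs = z ** n + 1 / z ** n      # z^n + 1/z^n
rhs = 2 * math.cos(n * theta)  # 2*cos(n*theta)
print(lhs, rhs)
```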
|
978,114 | <p>From $ax\geq 0$ for $a>0$, we have $x\geq 0$. So I suggest that if $Ax\geq 0$ for $A$ positive definite matrix, $x$ a column vector, $0$ is the column vector with $0$ as elements, then $x\geq 0$, that is, the coordinate of $x$ is greater than $0$.</p>
<p>However, I could not prove it...</p>
| Algebraic Pavel | 90,996 | <p>Positive definiteness is a spectral property rather than a "component-wise" one.
A randomly generated example shows that the statement is not true:
$$
A=\begin{bmatrix}2 & 3 \\ 3 & 5\end{bmatrix},
x=\begin{bmatrix}-1\\3\end{bmatrix},
Ax=\begin{bmatrix}7\\12\end{bmatrix}.
$$</p>
<p>This is true for <a href="http://en.wikipedia.org/wiki/M-matrix" rel="nofollow">M-matrices</a>, which in the symmetric case happen to be positive definite. Not all positive definite matrices are M-matrices though.</p>
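A short check of the counterexample above (positive definiteness of the symmetric $2\times 2$ matrix is verified via Sylvester's criterion on the leading principal minors):

```python
# The counterexample: A is symmetric positive definite and Ax >= 0
# componentwise, yet x has a negative entry.
A = [[2, 3], [3, 5]]
x = [-1, 3]

# Sylvester's criterion for a symmetric 2x2 matrix:
# both leading principal minors must be positive.
assert A[0][0] > 0
assert A[0][0] * A[1][1] - A[0][1] * A[1][0] > 0

Ax = [sum(A[i][j] * x[j] for j in range(2)) for i in range(2)]
print(Ax, x)  # → [7, 12] [-1, 3]
```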
|
1,033,208 | <p>What do square brackets mean next to sets? Like $\mathbb{Z}[\sqrt{-5}]$, for instance. I'm starting to assume it depends on context because google is of no use.</p>
| Henno Brandsma | 4,280 | <p>There is no general notation like that in set theory, that I'm aware of.</p>
<p>In your case, it's a notation from algebra, and it means: $\{m + n\sqrt{-5}: m,n \in \mathbb{Z}\}$. We add a new number (here $\sqrt{-5}$) to the integers and generate the minimal ring that contains them both.</p>
|
682,741 | <p>Use the Mean Value Theorem to prove that if $p>1$ then $(1+x)^p>1+px$ for $x \in (-1,0)\cup (0,\infty)$</p>
<p>How do I go about doing this?</p>
| sirfoga | 83,083 | <p>Sorry, actually I wrote the answer quickly and a bit carelessly ... but let's take a different path.</p>
<p>Let $f(x) = (1+x)^p$, and $g(x) = 1+px$; clearly $f'(x) = p(1+x)^{p-1}$ and $g'(x) = p$. </p>
<p>It means that $f'(x) > g'(x)$ for $x > 0$ (because $1 + x > 1$ and $p - 1 > 0$, so $(1+x)^{p-1} > 1$), and $f(0) = g(0) = 1$, so $f(x) > g(x)$ for every $x > 0$.</p>
<p>Instead if $-1 < x < 0$, consider the function $h(x) = f(x) - g(x)$: it's continuous (because it's the difference of 2 continuous functions) and</p>
<p>$$h'(x) = f'(x) - g'(x) = p(1+x)^{p-1} - p < 0 \text{ for every } -1 < x < 0 \, (\text{because } 0 < 1 + x < 1).$$</p>
<p>It follows that $h$ is continuous and monotonic; in particular, $h$ is decreasing, so on $(-1,0]$ its minimum value is $h(0) = 0$. Hence for every $x$ with $-1 < x < 0$ we have $h(x) > h(0) = 0$, which implies $h(x) = f(x) - g(x) > 0 \to f(x) > g(x)$ for $-1 < x < 0$, which proves the thesis.</p>
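A numerical spot-check of the inequality over random $p > 1$ and $x \in (-1,0)\cup(0,\infty)$ (the sample ranges and margins are arbitrary illustrative choices, kept away from the degenerate boundary for floating-point safety):

```python
import random

# Spot-check (1+x)^p > 1 + p*x for p > 1 on x in (-1,0) and (0, infinity).
random.seed(0)
checked = 0
for _ in range(2000):
    p = random.uniform(1.1, 10.0)  # keep p away from 1 for numerical margin
    x = random.uniform(-0.99, 5.0)
    if abs(x) < 1e-3:
        continue                   # at x = 0 the two sides are equal
    assert (1 + x) ** p > 1 + p * x
    checked += 1
print(checked, "cases checked")
```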
|
129,530 | <p>This book, which needs to be returned quite soon, has a problem I don't know where to start. How do I find a 4 parameter solution to the equation</p>
<p>$x^2+axy+by^2=u^2+auv+bv^2$</p>
<p>The title of the section this problem comes from is entitled (as this question is titled) "Numbers of the Form $x^2+axy+by^2$", yet it deals almost exclusively with numbers of the form $x^2+y^2$. It looks like almost an afterthought or a preview of what's to come where it gives the formula</p>
<p>$(m^2+amn+bn^2)(p^2+apq+bq^2)=r^2+ars+bs^2,r=mp-bnq,s=np+mq+anq$</p>
<p>Then 6 of the 7 problems use this form. The first few involve solving the form $z^k=x^2+axy+by^2$, which I quickly figured out are solved by letting $z=u^2+auv+bv^2$, then using the above formula to get higher powers. So for $z^2$ for example, I set $m=p=u$ and $n=q=v$ to get $x$ and $y$ in terms of $u$ and $v$. But for this problem, I'm drawing a blank.</p>
| Mike | 17,976 | <p>Okay, let's see if I can fix my previous answer. Again if i let $z=u_1^2+au_1v_1+bv_1^2$, we get</p>
<p>$z^2=(u_1^2+au_1v_1+bv_1^2)(u_1^2+au_1v_1+bv_1^2)=r^2+ars+bs^2$</p>
<p>where $r=u_1^2-bv_1^2,s=2u_1v_1+av_1^2$.</p>
<p>I will now multiply this by another number of the form $m^2+amn+bn^2$ to get yet another number of the same form. So $p=u_1^2-bv_1^2, q=2u_1v_1+av_1^2$. My new values for r and s are</p>
<p>$r=mp-bnq=m(u_1^2-bv_1^2)-bn(2u_1v_1+av_1^2)$</p>
<p>$s=np+mq+anq=n(u_1^2-bv_1^2)+m(2u_1v_1+av_1^2)+an(2u_1v_1+av_1^2)$</p>
<p>I'll let this $(r,s)$ be my $(x,y)$. So now I have</p>
<p>$x^2+axy+by^2=z^2(m^2+amn+bn^2)=(mz)^2+a(mz)(nz)+b(nz)^2$.</p>
<p>Now I finally have an equation in the form that I want. This gives my solution as</p>
<p>$x=m(u_1^2-bv_1^2)-bn(2u_1v_1+av_1^2)$</p>
<p>$y=n(u_1^2-bv_1^2)+(m+an)(2u_1v_1+av_1^2)$</p>
<p>$u=mz=mu_1^2+amu_1v_1+bmv_1^2$</p>
<p>$v=nz=nu_1^2+anu_1v_1+bnv_1^2$</p>
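A brute-force check of this four-parameter solution over random integer values (my own verification sketch; the variable names follow the derivation above):

```python
import random

def Q(a, b, x, y):
    # The binary quadratic form x^2 + a*x*y + b*y^2.
    return x * x + a * x * y + b * y * y

random.seed(1)
for _ in range(200):
    a, b = random.randint(-5, 5), random.randint(-5, 5)
    m, n = random.randint(-9, 9), random.randint(-9, 9)
    u1, v1 = random.randint(-9, 9), random.randint(-9, 9)

    # The four-parameter solution derived above.
    x = m * (u1 * u1 - b * v1 * v1) - b * n * (2 * u1 * v1 + a * v1 * v1)
    y = n * (u1 * u1 - b * v1 * v1) + (m + a * n) * (2 * u1 * v1 + a * v1 * v1)
    z = Q(a, b, u1, v1)
    u, v = m * z, n * z

    assert Q(a, b, x, y) == Q(a, b, u, v)
print("identity verified")
```

Since both sides equal $z^2(m^2+amn+bn^2)$ as polynomials, the assertion holds identically, not just for these samples.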
|
358,786 | <p>Why does Egorov's theorem not hold in the case of infinite measure? It turns out that, for example, $f_n = \chi_{[n,n+1]}$ does not converge nearly uniformly; that is, there is no measurable set $F \subset E$ with $m(E \setminus F) < \epsilon$ on which the convergence is uniform. Is this simply true because it takes on the value $1$ for each $n$ but suddenly hits $0$ as $n \to \infty$?</p>
| chango | 20,376 | <p>$f_n$ converges pointwise to the zero function on $\mathbb{R}$ (here $E = \mathbb{R}$). However, there doesn't exist a set of finite measure $F$ such that $f_n$ converges uniformly on $\mathbb{R} \setminus F$. To see this, note that for large enough $n$, $f_n$ takes both the values $0$ and $1$ on $\mathbb{R} \setminus F$.</p>
|