Dataset schema: qid (int64, 1 to 4.65M), question (large_string, lengths 27 to 36.3k), author (large_string, lengths 3 to 36), author_id (int64, -1 to 1.16M), answer (large_string, lengths 18 to 63k).
2,085,622
<p>The question I am working through states:</p> <p><em>Three digits are selected at random from the digits 1 through 10, without replacement. Find the probability that at least 2 digits are prime.</em></p> <p>I feel as though I am close, but not quite hitting the mark with my solution:</p> <p>My total number of selections would be $10C3$, and there are 4 prime numbers to pick from $\{2,3,5,7\}$.</p> <p>Since I'm looking for <strong>at least</strong> 2 prime numbers, that means I am looking for the probability that I select 2 or 3 prime numbers. </p> <p>So, I have \begin{align*} \textrm{P(2 prime numbers)} &amp; \ or\ \textrm{P(3 prime numbers)}\\ \frac{4C2}{10C3} &amp; + \frac{4C3}{10C3}\\ &amp;= \frac{1}{12} \end{align*}</p> <p>Should I instead be looking for the complement? I feel as though it would be just as complex and not really save me any time [1-P(0 or 1 primes)]. </p> <p>Edit to add new solution (looks better!): \begin{align*} \textrm{P(2 prime numbers)} &amp; \ or\ \textrm{P(3 prime numbers)}\\ \frac{4C2 \times 6C1}{10C3} &amp; + \frac{4C3}{10C3}\\ &amp;= \frac{1}{3} \end{align*}</p>
Kiran
82,744
<p>P(exactly $1$ prime) = $\dfrac{\dbinom{4}{1}\dbinom{6}{2}}{\dbinom{10}{3}}$<br> P(no prime) = $\dfrac{\dbinom{6}{3}}{\dbinom{10}{3}}$</p> <p>Answer <br> $1-\dfrac{\dbinom{4}{1}\dbinom{6}{2}}{\dbinom{10}{3}}-\dfrac{\dbinom{6}{3}}{\dbinom{10}{3}}$</p>
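<p>Both routes (the direct count from the question's edit and the complement from this answer) can be cross-checked with a short Python sketch; the brute-force enumeration is my own addition, not part of either post:</p>

```python
from fractions import Fraction
from itertools import combinations
from math import comb

primes = {2, 3, 5, 7}

# Direct count: (2 primes and 1 non-prime) or (3 primes).
direct = Fraction(comb(4, 2) * comb(6, 1) + comb(4, 3), comb(10, 3))

# Complement, as in this answer: 1 - P(no prime) - P(exactly 1 prime).
complement = 1 - Fraction(comb(6, 3) + comb(4, 1) * comb(6, 2), comb(10, 3))

# Brute force over all 3-element subsets of {1, ..., 10}.
favorable = sum(1 for s in combinations(range(1, 11), 3)
                if len(primes & set(s)) >= 2)
brute = Fraction(favorable, comb(10, 3))

assert direct == complement == brute == Fraction(1, 3)
```

All three computations agree on $1/3$, confirming the corrected solution.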
1,815,918
<p>Let $G$ be a group and $|G|=p^n$ for some prime $p$. If $f:G\to H$ is a surjective homomorphism, how do I know $H=f(G)$ also has cardinality a power of $p$?</p>
Bérénice
317,086
<p>Let $X$ be the space of all functions $f:[-1,1]\rightarrow \mathbb{R}$, and let $E=\{f\in X \mid f \text{ is even}\}$ and $O=\{f\in X \mid f \text{ is odd}\}$. In order to show that $X$ is the direct sum of $E$ and $O$, you need to show two things:</p> <ul> <li>$X=E+O$: Let $f \in X$, let $f_1(x)=\frac{f(x)-f(-x)}{2}$ and $f_2(x)=\frac{f(x)+f(-x)}{2}$. We have $f_1(-x)=\frac{f(-x)-f(-(-x))}{2}=\frac{f(-x)-f(x)}{2}=-f_1(x)$, so $f_1$ is odd, $f_1 \in O$. And $f_2(-x)=\frac{f(-x)+f(-(-x))}{2}=\frac{f(-x)+f(x)}{2}=f_2(x)$, so $f_2$ is even, $f_2 \in E$. Finally $f_1(x)+f_2(x)=\frac{f(x)-f(-x)}{2}+\frac{f(x)+f(-x)}{2}=f(x)$. So $f=f_1+f_2$. We proved that each element of $X$ can be decomposed as a sum of an element of $O$ and an element of $E$, i.e. $X=E+O$.</li> <li>$E\cap O=\{0\}$: Let $f\in E \cap O$. $\forall x \in [-1,1], f(x)=f(-x)=-f(x)$, so $\forall x \in [-1,1], f(x)=0$, hence $f=0$. We proved that $E\cap O=\{0\}$.</li> </ul> <p>Since the two conditions are verified, we have $X=E\oplus O$.</p>
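<p>The decomposition above is easy to illustrate numerically (my own sketch; the choice of $\exp$ as a test function is arbitrary):</p>

```python
import math

def even_odd_parts(f):
    """Split f into the even part f2 and odd part f1 used in the answer."""
    even = lambda x: (f(x) + f(-x)) / 2
    odd = lambda x: (f(x) - f(-x)) / 2
    return even, odd

f = math.exp                       # an arbitrary test function on [-1, 1]
fe, fo = even_odd_parts(f)         # for exp these are cosh and sinh

for x in [-1.0, -0.3, 0.0, 0.7, 1.0]:
    assert math.isclose(fe(-x), fe(x))           # fe is even
    assert math.isclose(fo(-x), -fo(x))          # fo is odd
    assert math.isclose(fe(x) + fo(x), f(x))     # f = fe + fo
```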
3,066,762
<p>This is a very interesting identity but I don't know how to prove this, note that <span class="math-container">$$A_1,\ldots,A_J,B \in \mathbb{R}^{n \times n}$$</span> and <span class="math-container">$m$</span> is the number of block diagonal in <span class="math-container">$\mathbf{A}$</span> ,so consider <span class="math-container">$$ \mathbf{A} = \begin{bmatrix} A_1 &amp; A_2 &amp; \ldots &amp;A_J &amp; &amp; &amp; &amp; \\ &amp; &amp; &amp; \ddots \\ &amp; &amp; &amp; &amp; A_1 &amp; A_2 &amp; \ldots &amp; A_J \end{bmatrix} \in \mathbb{R}^{nm\times (Jnm)}$$</span> <span class="math-container">$$ \mathbf{B} = \begin{bmatrix} B &amp; &amp; \\ &amp; \ddots &amp; \\ &amp; &amp; B \end{bmatrix} \in \mathbb{R}^{(Jnm)\times (Jnm)}$$</span> <span class="math-container">$$ \mathbf{A}^\prime = \left[\begin{smallmatrix} A_1 &amp; &amp; &amp; &amp; A_2 &amp; &amp; &amp; &amp; \ldots &amp; &amp; &amp; &amp; A_J\\ &amp; A_1 &amp; &amp; &amp; &amp; A_2 &amp; &amp; &amp; &amp; \ldots &amp; &amp; &amp; &amp; A_J\\ &amp; &amp; \ddots &amp; &amp; &amp; &amp; \ddots &amp; &amp; &amp; &amp; \ldots &amp; &amp; &amp; &amp; \ddots \\ &amp; &amp; &amp; A_1 &amp; &amp; &amp; &amp; A_2 &amp; &amp; &amp; &amp; \ldots &amp; &amp; &amp; &amp; A_J \end{smallmatrix}\right] \in \mathbb{R}^{mn\times (Jmn)}$$</span></p> <p><span class="math-container">$$ \mathbf{B}^\prime = \left[\begin{smallmatrix} \begin{bmatrix}B \\ 0\\ \vdots \\ 0 \\ \end{bmatrix} &amp; &amp; &amp; &amp; \begin{bmatrix}0 \\ B\\ \vdots \\ 0 \\ \end{bmatrix} &amp; &amp; &amp; &amp; \ldots &amp; &amp; &amp; &amp; \begin{bmatrix}0 \\ 0\\ \vdots \\ B \\ \end{bmatrix}\\ &amp; \begin{bmatrix}B \\ 0\\ \vdots \\ 0 \\ \end{bmatrix} &amp; &amp; &amp; &amp; \begin{bmatrix}0 \\ B\\ \vdots \\ 0 \\ \end{bmatrix} &amp; &amp; &amp; &amp; \ldots &amp; &amp; &amp; &amp; \begin{bmatrix}0 \\ 0\\ \vdots \\ B \\ \end{bmatrix}\\ &amp; &amp; \ddots &amp; &amp; &amp; &amp; \ddots &amp; &amp; &amp; &amp; \ldots &amp; &amp; &amp; &amp; \ddots \\ 
&amp; &amp; &amp; \begin{bmatrix}B \\ 0\\ \vdots \\ 0 \\ \end{bmatrix} &amp; &amp; &amp; &amp; \begin{bmatrix}0 \\ B\\ \vdots \\ 0 \\ \end{bmatrix} &amp; &amp; &amp; &amp; \ldots &amp; &amp; &amp; &amp; \begin{bmatrix}0 \\ 0\\ \vdots \\ B \\ \end{bmatrix} \end{smallmatrix}\right] \in \mathbb{R}^{Jmn\times (Jmn)}$$</span></p> <p>It seems to be true that <span class="math-container">$$\mathbf{AB} = \mathbf{A^\prime B^\prime}$$</span></p> <h2>What I guess might work</h2> <p>From <span class="math-container">$\mathbf{A}$</span> to <span class="math-container">$\mathbf{A}^\prime$</span> there is a column permutation <span class="math-container">$\mathbf{C}$</span>, and explicitly finding its inverse should give us <span class="math-container">$$\mathbf{AB} = \mathbf{A CC^{-1} B}= \mathbf{A^\prime B^\prime}$$</span></p> <p>But it seems very difficult to me to write down <span class="math-container">$\mathbf{C}$</span> explicitly.</p> <h2>Update:</h2> <p>It seems that <span class="math-container">$\mathbf{C}$</span> can be written down, but so far I can only justify it in a <em>hand-waving</em> way...</p>
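<p>The column-permutation idea can at least be sanity-checked numerically. The sketch below is my own reconstruction: the index formula for the permutation (and hence the exact row layout of <span class="math-container">$\mathbf{B}^\prime$</span>) is an assumption and may be ordered differently from the matrices displayed above, but it realizes the identity <span class="math-container">$\mathbf{AB} = (\mathbf{AC})(\mathbf{C}^{-1}\mathbf{B})$</span> for this block structure:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, J = 2, 3, 4                       # small test sizes (assumed arbitrary)

A_list = [rng.standard_normal((n, n)) for _ in range(J)]
B = rng.standard_normal((n, n))

# A = I_m ⊗ [A_1 ... A_J],  B_big = I_{Jm} ⊗ B
A_big = np.kron(np.eye(m), np.hstack(A_list))          # (mn) x (Jmn)
B_big = np.kron(np.eye(J * m), B)                      # (Jmn) x (Jmn)

# A' = [I_m ⊗ A_1 | I_m ⊗ A_2 | ... | I_m ⊗ A_J]
A_prime = np.hstack([np.kron(np.eye(m), Aj) for Aj in A_list])

# Column permutation: column (j, i, c) of A' is column (i, j, c) of A.
perm = [i * J * n + j * n + c
        for j in range(J) for i in range(m) for c in range(n)]
assert np.allclose(A_prime, A_big[:, perm])

# A B = (A C)(C^T B) = A' B', where B' is B_big with rows permuted the same way.
B_prime = B_big[perm, :]
assert np.allclose(A_big @ B_big, A_prime @ B_prime)
```

Since a permutation matrix satisfies <span class="math-container">$\mathbf{C}^{-1}=\mathbf{C}^T$</span>, permuting the rows of <span class="math-container">$\mathbf{B}$</span> by the same index map that permutes the columns of <span class="math-container">$\mathbf{A}$</span> leaves the product unchanged.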
Claude Leibovici
82,404
<p><em>Interesting problem, for sure.</em></p> <p>Using another CAS, I played with the precision. For Mathematica, your first command would be </p> <blockquote> <blockquote> <p>f[a_,b_,n_]:=NIntegrate[BesselJ[0, 2 a Sinh[x/2]] Sin[b x], {x, 0, Infinity}, WorkingPrecision -&gt; n]</p> </blockquote> </blockquote> <p>Below are my results for the integration up to infinity for your particular values. <span class="math-container">$$\left( \begin{array}{cc} n &amp; \text{result} \\ 10 &amp; 0.4084616144 \\ 20 &amp; 0.4084616143 \\ 30 &amp; 0.6848603276 \\ 40 &amp; 0.6848603276 \\ 50 &amp; 0.6554693076 \\ 60 &amp; 0.6554693076 \\ 70 &amp; 0.6554693076 \\ 80 &amp; 0.6554693076 \\ 90 &amp; 0.6541206739\\ 100 &amp; 0.6541738396 \\ 200 &amp; 0.6533938298 \\ 300 &amp; 0.6534598266 \\ 400 &amp; 0.6536447955 \\ 500 &amp; 0.6534985760 \end{array} \right)$$</span> The convergence seems to be very slow (notice the swings of these results even for large values of <span class="math-container">$n$</span>).</p> <p>The systematic complaint from the CAS was related to the highly oscillatory integrand, as Robert Israel commented.</p>
2,505,216
<blockquote> <p>How can I evaluate<br> <span class="math-container">$$ \lim_{n \rightarrow \infty} \frac{n\sin n}{2n^2- 1}? $$</span></p> </blockquote> <hr> <p>Unsuccessful <strong>attempt</strong>:</p> <p>In the expression <span class="math-container">$\frac{n\sin n}{2n^2 - 1}$</span>, I divided the numerator and denominator by <span class="math-container">$n^2$</span>, but I got stuck with <span class="math-container">$\frac{\sin n}{n}$</span> and I do not know how to go on.</p> <p>Any help will be appreciated.</p>
Clement C.
75,808
<p>Since $-1\leq \sin x\leq 1$ for all $x$, we have $$ \left\lvert\frac{n\sin n}{2n^2-1}\right\rvert = \frac{n\lvert\sin n\rvert}{2n^2-1} \leq \frac{n}{2n^2-1}\,; $$ and since $2n^2-1\geq n^2$ for all $n\geq 1$, $$ \left\lvert\frac{n\sin n}{2n^2-1}\right\rvert \leq \frac{n}{n^2} = \frac{1}{n} \xrightarrow[n\to\infty]{}0. $$</p>
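<p>The two bounds in this squeeze argument can be spot-checked numerically (an illustration I added, not part of the proof):</p>

```python
import math

# Check |n sin n / (2n^2 - 1)| <= n/(2n^2 - 1) <= 1/n for n = 1, ..., 2000.
for n in range(1, 2001):
    term = n * math.sin(n) / (2 * n**2 - 1)
    assert abs(term) <= n / (2 * n**2 - 1) <= 1 / n
```

Since the sequence is trapped between $-1/n$ and $1/n$, it converges to $0$.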
2,223,788
<p>$R$ (1 is not assumed to be in $R$) is a prime right Goldie ring (finite uniform dimension and ACC on right annihilators) which contains a minimal right ideal. Show that $R$ must be a simple Artinian ring. </p> <p>This appeared in a past paper; the first two parts, which I managed to prove, were "Every essential right ideal of a semi-prime right Goldie ring contains a regular element" and "Every non-zero ideal of a prime ring must be essential as a right ideal". There is also an extra hint that I might have to use Artin-Wedderburn.</p> <p>I've been trying (and failing) to find a way to use these results, since I can't assume (or prove) that the minimal right ideal is a two-sided ideal (in order to use the previous result). I also have that the minimal right ideal is of the form $eR$ where $e$ is idempotent, but I feel like I'm barking up the wrong tree. Any help is appreciated, thanks.</p>
Henry
326,325
<p>Let $E(R)$ be the right socle of $R$. $E(R)$ is an ideal of $R$, so it is essential as a right ideal and therefore contains a regular element, $c$ say.</p> <p>$R$ is isomorphic to $cR$ as a right $R$-module, via the map $r\mapsto cr$. (If $cr=0$ then $r=0$ since $c$ is regular, so the map is injective; surjectivity onto $cR$ is obvious.)</p> <p>$cR$ is inside $E(R)$, so it is completely reducible, and therefore $R$ is completely reducible. $R$ is then a direct sum of irreducible submodules; this direct sum is finite as $R$ is Goldie. Therefore $R$ is right Artinian, and prime right Artinian rings are simple.</p>
2,444,591
<p>How do I prove that $f(S ∩ T) \neq f(S) ∩ f(T)$ in general, unless $f$ is one-to-one?</p> <p>I tried finding a counterexample by considering the function $f(x)=x^2$ with $S = \{-2,-1,0,1,2\}$ and $T=\{0,1,2,3,4\}$, and also the function $f(x) = |x|$, but the equality seemed to hold in those cases. My book says that in general the statement above is true, and I need to prove it; however, I don't know how to start.</p>
Arthur
15,500
<p>Just to be clear, the statement that you're asking about is really this: given a function $f:X\to Y$, the following two statements are equivalent:</p> <ol> <li>$f$ is one-to-one</li> <li>For any two sets $S, T\subseteq X$, we have $f(S\cap T) = f(S)\cap f(T)$</li> </ol> <p>Or, to be more specific, you are looking at their negations:</p> <ol> <li>$f$ is not one-to-one</li> <li>There exist $S, T\subseteq X$ such that $f(S\cap T)\neq f(S)\cap f(T)$</li> </ol> <p>Below is a proof of the equivalence of the latter pair:</p> <p>Note that no matter the function, $f(S\cap T) \subseteq f(S)\cap f(T)$. If there are $S$ and $T$ such that $f(S ∩ T) \neq f(S) ∩ f(T)$, then we know that there is some $y\in f(S)\cap f(T)$ which is not contained in $f(S\cap T)$. In that case, $y\in f(S)\cap f(T)$ means that we have both $y\in f(S)$ and $y\in f(T)$, which in turn means that there is an $x_1\in S$ such that $f(x_1) = y$, and there is an $x_2\in T$ such that $f(x_2) = y$. However, neither $x_1$ nor $x_2$ is contained in $S\cap T$, because $y\notin f(S\cap T)$. That means that $x_1$ and $x_2$ must be different, which makes them a witness against $f$ being one-to-one.</p> <p>For the other direction, if $f$ is not one-to-one, then there are some $x_1 \neq x_2$ with $f(x_1) = f(x_2)$. Let $S = \{x_1\}$ and $T = \{x_2\}$, and you're done.</p>
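<p>The witness construction from the last paragraph, and the one-sided inclusion that always holds, can both be checked with a few lines of Python (my own illustration):</p>

```python
import random

def image(f, s):
    return {f(x) for x in s}

f = lambda x: x * x                 # not one-to-one: f(-1) == f(1)
S, T = {-1}, {1}                    # witness sets, as in the last paragraph

lhs = image(f, S & T)               # f(S ∩ T) = f(∅) = ∅
rhs = image(f, S) & image(f, T)     # {1} ∩ {1} = {1}
assert lhs == set() and rhs == {1}  # strict inequality achieved

# The one-sided inclusion f(S ∩ T) ⊆ f(S) ∩ f(T) holds for any f, S, T:
g = lambda x: x % 3
universe = list(range(-5, 6))
for _ in range(100):
    S2 = set(random.sample(universe, 4))
    T2 = set(random.sample(universe, 4))
    assert image(g, S2 & T2) <= image(g, S2) & image(g, T2)
```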
333,265
<p>Can $|-x^2 | &lt; 1 $ imply that $-1&lt;x&lt;1$? My steps are as follows: $$| -x^2| &lt; 1 $$ $$-1&lt;(-x^2)&lt; 1 $$ $$-1&lt;x^2&lt; 1 $$ $$\sqrt{-1}&lt;x&lt; \sqrt 1 $$</p> <p>I'm actually looking for the radius of convergence for the power series of $\frac{1}{1-x^2}$:</p> <p>$$\frac{1}{1-x^2}=\sum\limits_{n=0}^\infty (-x^2)^n \hspace{10mm}\text{for} \,|-x^2|&lt;1$$ This is derived from the equation $$\frac{1}{1-x}=\sum\limits_{n=0}^\infty x^n \hspace{10mm}\text{for} \,|x|&lt;1$$ According to my textbook, the power series $$\sum\limits_{n=0}^\infty (-x^2)^n \hspace{10mm}\text{for} \,|-x^2|&lt;1$$ is 'for the interval (-1,1)' which means that $|-x^2 | &lt; 1 $ implies that $-1&lt;x&lt;1$. However, that implication does not make sense to me.</p>
Pedro
23,350
<p>You want to prove that $|-x^2|&lt;1$ implies $|x|&lt;1$. Note this is the same as: $|x^2|&lt;1$ implies $|x|&lt;1$. </p> <p>Let's prove the <a href="http://en.wikipedia.org/wiki/Contraposition" rel="nofollow">contrapositive</a>: if $|x|\geq 1$ then $|x^2|\geq 1$. If $x=1$ or $x=-1$ this is clear. Since $|x|=|-x|$, the argument for $x&gt;1$ will also serve for $x&lt;-1$. Thus, let $x&gt;1$. Then $x=|x|&gt;1$. Multiplying through by $|x|$, we find that $$x^2=|x|^2=|x^2|&gt;|x|\geq 1$$ So our claim follows: if $|x|\geq 1$, then $|x^2|\geq 1$. </p>
1,318,884
<blockquote> <p>Let $S$ be a piecewise smooth oriented surface in <span class="math-container">$\mathbb{R}^3$</span> with positively oriented piecewise smooth boundary curve <span class="math-container">$\Gamma:=\partial S$</span> and <span class="math-container">$\Gamma : X=\gamma(t), t\in [a,b]$</span> a rectifiable parametrization of <span class="math-container">$\Gamma$</span>. Imagine <span class="math-container">$\Gamma$</span> is a wire through which a current <span class="math-container">$I$</span> flows. Then</p> <p><span class="math-container">$$m:=\frac{I}{2}\int_a^b\gamma(t)\times \dot{\gamma}(t)dt$$</span></p> <p>is the magnetic moment of the current.</p> <p>Show that for an arbitrary <span class="math-container">$u\in \mathbb{R}^3$</span></p> <p><span class="math-container">$$m\cdot u=I\int_Su\cdot d\Sigma$$</span> is true.</p> </blockquote> <p>I tried doing this with Stokes but I can't seem to get to the desired equation. The teacher gave us a hint: <span class="math-container">$k_u(x):=\frac{1}{2}u\times x$</span> is a vector field and <span class="math-container">$\operatorname{curl}k_u = u$</span>.</p> <p>Any tips or hints? I would appreciate it.</p>
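<p>Not a proof, but the claimed identity can be sanity-checked numerically in the simplest case, a planar unit circle (my own sketch; for the flat unit disc with normal <span class="math-container">$e_z$</span>, the right-hand side reduces to <span class="math-container">$I\pi u_z$</span>):</p>

```python
import math

# Planar unit circle gamma(t) = (cos t, sin t, 0) carrying current I.
I = 2.5
N = 1000
dt = 2 * math.pi / N

mx = my = mz = 0.0
for k in range(N):
    t = k * dt
    g = (math.cos(t), math.sin(t), 0.0)       # gamma(t)
    dg = (-math.sin(t), math.cos(t), 0.0)     # gamma'(t)
    # accumulate (gamma x gamma') dt, component by component
    mx += (g[1] * dg[2] - g[2] * dg[1]) * dt
    my += (g[2] * dg[0] - g[0] * dg[2]) * dt
    mz += (g[0] * dg[1] - g[1] * dg[0]) * dt
m = (I / 2 * mx, I / 2 * my, I / 2 * mz)      # should be (0, 0, I*pi)

# S is the flat unit disc with unit normal e_z, so I * ∫_S u · dΣ = I * pi * u_z.
u = (0.3, -1.2, 0.7)
lhs = sum(mi * ui for mi, ui in zip(m, u))
rhs = I * math.pi * u[2]
assert math.isclose(lhs, rhs, rel_tol=1e-6)
```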
Bill Dubuque
242
<p><strong>Hint</strong> <span class="math-container">$\,\ n,m\mid j \!\iff\! nm\mid nj,mj\!$</span> <span class="math-container">$\overset{\ \rm\color{darkorange}U}\iff\! nm\mid (nj,mj) \overset{\ \rm \color{#0a0}D_{\phantom |}}= (n,m)j\!$</span> <span class="math-container">$\iff\! nm/(n,m)\mid j$</span></p> <p>where above we have applied <span class="math-container">$\,\rm \color{darkorange}U = $</span> <a href="https://math.stackexchange.com/a/1189430/242">GCD Universal Property</a> and <span class="math-container">$\,\rm\color{#0a0} D =$</span> <a href="https://math.stackexchange.com/a/705874/242">GCD Distributive Law</a>.</p> <p><strong>Remark</strong> <span class="math-container">$ $</span> If we bring to the fore implicit <span class="math-container">$\rm\color{#0a0}{cofactor\ reflection}$</span> symmetry we obtain a simpler proof: <span class="math-container">$ $</span> it is easy to show <span class="math-container">$\,d\,\color{#0a0}\mapsto\, mn/d\,$</span> bijects common divisors of <span class="math-container">$\,m,n\,$</span> with common multiples <span class="math-container">$\le mn.$</span> Being order-<span class="math-container">$\rm\color{#c00}{reversing}$</span>, it maps the <span class="math-container">$\rm\color{#c00}{Greatest}$</span> common divisor to the <span class="math-container">$\rm\color{#c00}{Least}$</span> common multiple, i.e. <span class="math-container">$\,{\rm\color{#c00}{G}CD}(m,n)\,\color{#0a0}\mapsto\, mn/{\rm GCD}(m,n) = {\rm \color{#c00}{L }CM}(m,n).\,$</span></p> <p>See <a href="https://math.stackexchange.com/a/3055362/242">here</a> and <a href="https://math.stackexchange.com/search?q=user%3A242+lcm+gcd+involution">here</a> more on this <span class="math-container">$\:\!\rm\color{#0a0}{involution\ (reflection)}$</span> symmetry at the heart of gcd, lcm duality.</p>
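<p>The end-to-end equivalence in the hint, <span class="math-container">$n,m\mid j \iff nm/(n,m)\mid j$</span>, is easy to spot-check over small ranges (my own illustration):</p>

```python
from math import gcd

# j is a common multiple of n and m  ⇔  nm/(n,m) divides j.
for n in range(1, 21):
    for m in range(1, 21):
        lcm = n * m // gcd(n, m)
        for j in range(1, 3 * lcm + 1):
            assert ((j % n == 0) and (j % m == 0)) == (j % lcm == 0)
```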
724,823
<p>I would like to know if there is some way to imagine the case when a 3D subspace intersects with a 2D plane in a 4D space. For example, let's have a 3D space in 4D $$A = \left(\begin{array}{cccc}1 &amp; 2 &amp; 3 &amp; 4\\7 &amp; 9 &amp; 0 &amp; 5\\ 3 &amp; 3 &amp; 4 &amp; 1\end{array}\right)$$ and $$B=\left(\begin{array}{cccc}1 &amp; 1 &amp; 0 &amp; 4\\1 &amp; 2 &amp; 0 &amp; 1\end{array}\right)$$</p> <p>And if I am right, the basis of the space intersection is $C\approx\left(\begin{array}{cccc}1 &amp; 1.1614 &amp; 1.2248 &amp; 2.1628\end{array}\right)$, which is a line.</p> <p>My question is: is it possible that an intersection in 4D of a 3D space with a 2D plane is a 1D line? I can't picture how it is possible (it's like a 2D plane is "touching" a 3D space in one line?). </p>
vadim123
73,324
<p>The key result you need is that for a finite-dimensional vector space $V$, with subspaces $A,B$, we have $$\dim(A)+\dim(B)=\dim(A+B)+\dim(A\cap B)$$</p> <p>Hence $5=\dim(A+B)+\dim(A\cap B)$. Since $A\subseteq A+B\subseteq\mathbb{R}^4$, we have $\dim(A+B)\in\{3,4\}$. If $3$, then $A+B=A$, so $B\subseteq A$ and $A\cap B=B$. If $4$, then $\dim(A\cap B)=1$.</p> <hr> <p>Here's a related scenario to wrap your mind around. In 3 dimensions, two planes can intersect in a plane or a line (or not at all, if they're parallel). However, in 4 or more dimensions, two planes can intersect in a single point; specifically the origin if they are standard planes (not affine). Here's how: one is $Span(e_1,e_2)$ and the other is $Span(e_3,e_4)$, where $\{e_1,e_2,e_3,e_4\}$ is the standard basis for $\mathbb{R}^4$.</p>
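<p>Applying the dimension formula to the specific matrices in the question can be done with a short NumPy check (my own addition; <code>matrix_rank</code> gives the dimension of each row space):</p>

```python
import numpy as np

# Rows of A span the 3D subspace, rows of B span the 2D plane from the question.
A = np.array([[1, 2, 3, 4], [7, 9, 0, 5], [3, 3, 4, 1]], dtype=float)
B = np.array([[1, 1, 0, 4], [1, 2, 0, 1]], dtype=float)

dim_A = np.linalg.matrix_rank(A)                    # 3
dim_B = np.linalg.matrix_rank(B)                    # 2
dim_sum = np.linalg.matrix_rank(np.vstack([A, B]))  # dim(A + B)
dim_cap = dim_A + dim_B - dim_sum                   # dim(A ∩ B), by the formula
assert (dim_sum, dim_cap) == (4, 1)                 # the intersection is a line
```

So for this particular pair we are in the second case: $\dim(A+B)=4$ and the intersection is indeed one-dimensional.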
2,042,476
<p>In this question, I'm assuming the definition of Riemann Integrability.<br> a) Produce an example of a sequence $f_n \rightarrow 0$ pointwise on $[0,1]$ where $\lim_{n\to\infty}\int_0^1f_n$ does not exist.<br> b) Produce an example of a sequence $g_n$ with $\int_0^1g_n\to0$ but $g_n(x)$ does not converge to zero for any $x\in[0,1]$. Let's insist that $g_n(x)\geq 0$ for all $x$ and $n$.</p> <p>For part (a), I thought of the function sequence $$f_n(x) = \begin{cases}1/x &amp; \text{if }0&lt;x&lt;1/n\\0 &amp;\text{if }x=0 \text{ or }x\geq1/n\end{cases}$$ This seems to work. I'm not sure about part (b) though. I would appreciate it if someone could check part (a) and offer some help with part (b).</p>
babakks
349,014
<p>I think part (a) is fine. But for part (b), how about this: $$0\leq g_n(x)=\begin{cases}1+(-1)^n &amp; x\in\{0,1\}\\1 &amp;\frac k{2n}\leq x\leq\frac k{2n}+2^{-(n+1)}, k\in \Bbb{N}\\ 0 &amp;(\mathrm{otherwise})\end{cases}$$</p> <p>For each $n$, $g_n(x)$ equals $1$ over the intervals of the form $$\left[\frac k{2n},\frac k{2n}+2^{-(n+1)}\right]$$ where $k\in\Bbb{N}$ and $k&lt;2n$, i.e., the intervals starting from multiples of $\frac 1{2n}$ with length $2^{-(n+1)}$. Note that there are $2n-1$ such intervals within $[0,1]$. Notice that by this definition $g_n(x)$ does not have a limit for any $x\in(0,1)$, because for a given $x$, as $n$ grows (changes), $x$ repeatedly falls in and out of the aforementioned intervals. Also, $g_n(0)=g_n(1)$ oscillates between $0$ and $2$, and thus $g_n(x)$ has no limit for any $x\in[0,1]$. Finally, for the integral we have $$\int_0^1g_n(x)\mathrm{d}x=(2n-1)\frac{1}{2^{n+1}}$$ which approaches zero as $n\to\infty$.</p>
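<p>A quick check of the closed-form value of the integral (my own addition; the finitely many jump points make each $g_n$ Riemann integrable):</p>

```python
# Closed-form value of the integral: (2n - 1) / 2^(n+1), which tends to 0.
vals = [(2 * n - 1) / 2 ** (n + 1) for n in range(1, 40)]
assert vals[0] == 1 / 4                                  # n = 1
assert all(a > b for a, b in zip(vals[1:], vals[2:]))    # decreasing from n = 2
assert vals[-1] < 1e-9                                   # approaches 0
```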
100,433
<p>I am trying to export some largish integer arrays to HDF5 and know that every entry in them would fit into an unsigned 8-bit integer array. As the default "DataFormat" that Mathematica uses for export is a 32-bit integer array, the resulting files are unnecessarily large. Does anyone know the correct syntax to export such an integer array as an 8-bit integer dataset?</p> <p>I found the following suggestions in this <a href="https://mathematica.stackexchange.com/a/92064/169">answer</a> and a comment to this <a href="https://mathematica.stackexchange.com/a/47861/169">answer</a>:</p> <pre><code>Export["int8.h5", RandomInteger[{2^8 - 1}, {100, 100}], "DataFormat" -&gt; {"UnsignedInteger8"} ] Export["int8.h5", RandomInteger[{2^8 - 1}, {100, 100}], {"UnsignedInteger8", "DataFormat"} ] </code></pre> <p>but they either do not work at all or do not create the expected data representation in the files, which can be checked with either HDFView (which I consider authoritative) or Mathematica itself with:</p> <pre><code>Import["int8.h5","DataFormat"] </code></pre>
rafalc
51,110
<p>The syntax has changed slightly in Mathematica <strong>12</strong>. Now it can be done like this:</p> <pre><code>In[2]:= Export["test.h5", "path/to/dataset" -&gt; { "Data" -&gt; RandomInteger[{2^8 - 1}, {100, 100}], "DataFormat" -&gt; "UnsignedInteger8" }] Out[2]= "test.h5" In[4]:= Import["test.h5", {{"DataFormat", "Dimensions"}, "path/to/dataset"}] Out[4]= {"UnsignedInteger8", {100, 100}} </code></pre> <p>For large data, when possible, I recommend using <code>NumericArray</code> (new in 12). In that case, information about the type is included in the <code>NumericArray</code> object and is respected by the HDF5 Export:</p> <pre><code>In[7]:= Export["test.h5", NumericArray @ RandomInteger[{2^8 - 1}, {100, 100}]] Out[7]= "test.h5" In[8]:= Import["test.h5", {{"DataFormat", "Dimensions"}, "Dataset1"}] Out[8]= {"UnsignedInteger8", {100, 100}} </code></pre> <p>In the example above <code>NumericArray</code> type was automatically deduced as <code>"UnsignedInteger8"</code> but you can also specify it manually as the second argument.</p>
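<p>As a side note (in Python/NumPy rather than Mathematica, purely to illustrate the storage factor at stake), storing a 100×100 array of byte-sized values as unsigned 8-bit instead of 32-bit integers uses a quarter of the memory, which is exactly the file-size saving the question is after:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.integers(0, 2**8, size=(100, 100))   # every entry fits in uint8

as_uint8 = data.astype(np.uint8)
assert as_uint8.nbytes == 100 * 100                          # 1 byte per entry
assert data.astype(np.int32).nbytes == 4 * as_uint8.nbytes   # the 4x saving
```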
3,646,694
<p>Let <span class="math-container">$M$</span> be an elliptic element of <span class="math-container">$SL_2(\mathbb R)$</span>. Then it is conjugate to a rotation <span class="math-container">$R(\theta)$</span>. Note that we can calculate <span class="math-container">$\theta$</span> in terms of the trace of <span class="math-container">$M$</span>; it means that we actually know <span class="math-container">$R(\theta)$</span> and we can write:</p> <p><span class="math-container">$$M=TR(\theta) T^{-1}$$</span></p> <p>If <span class="math-container">$S^1$</span> is the unit circle in <span class="math-container">$\mathbb R^2$</span>, it follows that <span class="math-container">$T(S^1)$</span> is the conic section <span class="math-container">$\mathcal C$</span> which is preserved by <span class="math-container">$M$</span>.</p> <blockquote> <p>Is there any explicit way to find the equation of <span class="math-container">$\mathcal C$</span> in general?</p> </blockquote> <p>My procedure is quite ineffective, because one has to find <span class="math-container">$T$</span> first (solving a non-linear system) and then write down <span class="math-container">$T(S^1)$</span>, which is in general not obvious.</p>
Community
-1
<p>Call the roots <span class="math-container">$\alpha, \alpha', \alpha''$</span> with <span class="math-container">$\alpha' = \alpha^2 + 2\alpha - 4$</span>.</p> <p><span class="math-container">$$p(x) = x^3 + 2x^2 - 5x + 1 = (x - \alpha)(x - \alpha')(x - \alpha'') = q(x)(x - \alpha'')$$</span></p> <p>with <span class="math-container">$$q(x) = x^2 + (-\alpha^2 - 3 \alpha + 4) x + (\alpha^3 + 2 \alpha^2 - 4\alpha) = x^2 + (-\alpha^2 - 3 \alpha + 4) x + (\alpha - 1)$$</span></p> <p>(replacing <span class="math-container">$\alpha^3$</span> with <span class="math-container">$-2\alpha^2 + 5\alpha - 1$</span>, which follows from <span class="math-container">$p(\alpha)=0$</span>)</p> <p>We can now compute the polynomial long division <span class="math-container">$p(x)/q(x)$</span> in <span class="math-container">$\mathbb Z[\alpha]$</span> to get <span class="math-container">$x - \alpha''$</span>:</p> <p><span class="math-container">$$p(x) - x q(x) = (\alpha^2 + 3 \alpha - 2) x^2 + (-\alpha - 4) x + 1$$</span></p> <p><span class="math-container">$$p(x) - x q(x) - (\alpha^2 + 3 \alpha - 2) q(x) = 0$$</span></p> <p>so <span class="math-container">$$p(x)/q(x) = x - (- \alpha^2 - 3 \alpha + 2).$$</span></p>
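<p>The result of the division can be verified numerically (my own check; <code>numpy.roots</code> gives the three roots of <span class="math-container">$p$</span>, and one of them has <span class="math-container">$\alpha' = \alpha^2+2\alpha-4$</span> among the roots as well):</p>

```python
import numpy as np

roots = np.roots([1, 2, -5, 1])     # numeric roots of p(x) = x^3 + 2x^2 - 5x + 1

def is_root(x):
    return min(abs(x - r) for r in roots) < 1e-8

# Find the root alpha whose image alpha' = alpha^2 + 2*alpha - 4 is also a root,
# then check the third root produced by the division above.
for a in roots:
    if is_root(a**2 + 2 * a - 4):
        a2 = a**2 + 2 * a - 4
        a3 = -a**2 - 3 * a + 2           # alpha'' from p(x)/q(x) = x - alpha''
        assert is_root(a3)
        assert abs(a + a2 + a3 + 2) < 1e-8   # Vieta: sum of roots = -2
        break
else:
    raise AssertionError("no root alpha with alpha' also a root")
```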
3,341,784
<p>How do you get to the general equation of <span class="math-container">$y = ax^2 + bx + c$</span> for a parabola?</p> <p>I have not found any resources that show how to get to that equation.</p> <p>This equation is shown 1 minute 30 into this video on Simpsons Rule <a href="https://www.youtube.com/watch?v=vpfy3sGw8tI" rel="nofollow noreferrer">https://www.youtube.com/watch?v=vpfy3sGw8tI</a>.</p>
Somos
438,089
<p>One way to show that the general equation <span class="math-container">$\,y=ax^2+bx+c\,$</span> is a parabola is to show that any such equation (with <span class="math-container">$a\ne 0$</span>) is transformed by a unique <a href="https://en.wikipedia.org/wiki/Homothetic_transformation" rel="nofollow noreferrer">homothety-translation</a> into <span class="math-container">$\,y=x^2\,$</span> and verify that its focus is <span class="math-container">$\,(0,d)\,$</span> and directrix is <span class="math-container">$\,y=-d\,$</span> as in the article <a href="https://en.wikipedia.org/wiki/Parabola" rel="nofollow noreferrer">parabola</a> by showing that <span class="math-container">$\,x^2+(y-d)^2 = (y+d)^2\,$</span> where <span class="math-container">$\,d=1/4.\,$</span></p> <p>NOTE: the general equation is only for parabolas whose directrix is parallel to the <span class="math-container">$x$</span>-axis. Also the recent <a href="https://math.stackexchange.com/q/3342798">MSE question 3342798</a> "Why does the focus-directrix definition of the parabola work?" has an answer to a converse question.</p>
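<p>The focus-directrix identity for <span class="math-container">$y=x^2$</span> can be spot-checked directly (an illustration I added):</p>

```python
import math

# For y = x^2, the focus is (0, d) and the directrix is y = -d with d = 1/4:
# squared distance to focus = x^2 + (y - d)^2, to directrix = (y + d)^2.
d = 1 / 4
for x in [-3.0, -1.5, 0.0, 0.2, 2.0, 10.0]:
    y = x * x
    assert math.isclose(x**2 + (y - d)**2, (y + d)**2)
```

Algebraically, the difference of the two sides is $x^2 - 4dy = x^2 - y$, which vanishes on the curve when $d = 1/4$.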
4,264,856
<p>From the definition of a Stirling number of the second kind, I found that the following holds. <span class="math-container">$${2n\brace 2} = \frac{1}{2} \sum_{i=0}^{2} (-1)^i \binom{2}{i} (2-i)^{2n}$$</span></p> <p>And I've visually confirmed in Desmos that the following equality 'appears' to hold.</p> <p><span class="math-container">$${2n\brace 2} = 2^{2n-1}-1$$</span></p> <p>How do I prove that this equality does in fact hold?</p>
HJS
973,557
<p><span class="math-container">${2n \brace 2}$</span> counts, by definition, the ways of dividing the set <span class="math-container">$A= \{ a_1,...,a_{2n} \}$</span> into two nonempty subsets.</p> <p>First, let's say that we are dividing set <span class="math-container">$A$</span> into set <span class="math-container">$B$</span> and set <span class="math-container">$C$</span>. For each integer <span class="math-container">$i$</span> with <span class="math-container">$1 \le i \le 2n$</span>,</p> <p><span class="math-container">$a_i \in B$</span> or <span class="math-container">$a_i \in C$</span>, so for every <span class="math-container">$i$</span> there are 2 choices.</p> <p>Since neither <span class="math-container">$B$</span> nor <span class="math-container">$C$</span> may be empty,</p> <p>there are <span class="math-container">$2^{2n}-2$</span> ordered pairs <span class="math-container">$(B,C)$</span>.</p> <p>Each partition is counted twice, once as <span class="math-container">$(B,C)$</span> and once as <span class="math-container">$(C,B)$</span>, so there are <span class="math-container">$\frac{2^{2n}-2}{2}=2^{2n-1}-1$</span> cases.</p>
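<p>The count above matches a brute-force enumeration for small <span class="math-container">$n$</span> (my own check):</p>

```python
from itertools import product

def partitions_into_two_blocks(m):
    """Brute-force count of partitions of an m-set into two nonempty subsets."""
    ordered = sum(1 for assign in product([0, 1], repeat=m)
                  if 0 in assign and 1 in assign)
    return ordered // 2    # {B, C} was counted once as (B, C) and once as (C, B)

for n in range(1, 7):
    assert partitions_into_two_blocks(2 * n) == 2 ** (2 * n - 1) - 1
```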
686,296
<p>Let $S$ be a set of rational numbers that is closed under addition and multiplication, and having the property that for every rational number $r$ exactly one of the following three statements is true: $r\in S$, $-r\in S$, $r=0$.</p> <p>There are a couple of things I have to prove, but I am having trouble with the first one:</p> <p>Prove that $0$ does not belong to $S$. I think I am confused by the trichotomy condition, which says that exactly one of the three statements holds for each $r$: it seems to allow $r=0$, but doesn't that conflict with $r\in S$ and $-r \in S$, since $0 = -0$? Or is there some other thing I am missing completely? Thank you for any help!</p>
amWhy
9,003
<p><em>Exactly</em> here signifies that "one and only one" of the three statements is true for each rational number $r$. </p> <p>$0\notin S$ because if $0$ were an element of $S$, then for $r=0$ all three statements would be true simultaneously, violating the "one and only one" restriction.</p> <p>To prove $0 \notin S$, you can start by assuming (for the sake of contradiction) that $0 \in S.$ Then show that this assumption leads to a contradiction, allowing you to conclude: therefore, $0\notin S$.</p>
1,900,365
<p>Apparently this is a true statement, but I cannot figure out how to prove this. I have tried setting </p> <p>$$(m)(m + 1)(m + 2)(m + 3) = (m + 4)^2 - 1 $$</p> <p>but to no avail. Could someone point me in the right direction? </p>
Brian Tung
224,454
<p>If you have trouble approaching this, try some examples: Note that (for instance)</p> <p>$$ 1 \times \color{red}{2 \times 3} \times 4 = 24 = 5^2-1 $$</p> <p>$$ 2 \times \color{red}{3 \times 4} \times 5 = 120 = 11^2-1 $$</p> <p>$$ 3 \times \color{red}{4 \times 5} \times 6 = 360 = 19^2-1 $$</p> <p>and observe that</p> <p>$$ 5 = \color{red}{2 \times 3} - 1 $$</p> <p>$$ 11 = \color{red}{3 \times 4} - 1 $$</p> <p>$$ 19 = \color{red}{4 \times 5} - 1 $$</p> <p>So try seeing if</p> <p>$$ m\color{red}{(m+1)(m+2)}(m+3) = [\color{red}{(m+1)(m+2)}-1]^2-1 $$</p>
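<p>The identity suggested by the pattern can be checked mechanically over a range of integers (an illustration I added; expanding both sides shows they are the polynomial $(m^2+3m)(m^2+3m+2)$):</p>

```python
# Verify m(m+1)(m+2)(m+3) = [(m+1)(m+2) - 1]^2 - 1 over a range of integers.
for m in range(-50, 51):
    k = (m + 1) * (m + 2) - 1
    assert m * (m + 1) * (m + 2) * (m + 3) == k * k - 1
```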
1,900,365
<p>Apparently this is a true statement, but I cannot figure out how to prove this. I have tried setting </p> <p>$$(m)(m + 1)(m + 2)(m + 3) = (m + 4)^2 - 1 $$</p> <p>but to no avail. Could someone point me in the right direction? </p>
Wildcard
276,406
<p>This has been very adequately answered according to modern methods; however I feel I may add something by explaining it in the plain English prose approach used in older mathematics texts. My statements can be translated into equations if you so wish and will approximate the proofs already given; however by stating it in English we may see the underlying relations at work more clearly (or at least non-mathematicians may so see.)</p> <p>First consider a perfect square. Not the geometric figure, but the product of a number multiplied by an equal number; for instance, 81 as the product of 9 and 9.</p> <p>Note that if we "slide these two factors apart," that is, increase one of them while decreasing the other by the same amount, their product will change. 8 x 10 is 80, which is 1 less than the initial product.</p> <p>Let us slide them apart a little further—by a difference of 2 each from the initial number, 9. We get 7 x 11, which is 77, and this is 4 less than the initial square product of 9 by 9.</p> <p>Note that 1 x 1 = 1, and 2 x 2 = 4. In fact, there is a general principle here:</p> <p><strong>If we begin with two equivalent numbers multiplied together, and we decrease one by the same amount as we increase the other, their product will decrease by an amount equal to the <em>square</em> of the difference that we make.</strong></p> <p>In algebraic terms we can express this like so:</p> <p>X*X = Y</p> <p>(X-D)*(X+D) = Y - D^2</p> <p>This is most easily susceptible to geometric proof; I will essay to draw a picture with words that can be easily visualized:</p> <p>Consider an actual geometric square of 9 units by 9 units. If a single row of 9 unit squares is removed from the top of the square, and placed along the right hand side so as to form a rectangle 1 unit longer than the initial square was wide, there will be a single unit square left over. 
9 unit squares were removed from the top row, but only 8 are needed to extend the width of the figure by adding a 10th column. This is easy to see, certainly.</p> <p>Similarly it can be seen that regardless of the initial size of the square, if a single row of unit squares is manipulated from the top row so as to form a new additional right hand column, the column will be adequately filled with 1 less unit square than was removed. Thus we see that our general principle given earlier holds true at least for such cases where the two numbers are increased and decreased respectively by an amount equal to 1, whatever the initial equivalent numbers may be.</p> <p>Now let us consider the more general case where we remove multiple rows from the top of our square, rather than only a single row, and manipulate these to form additional columns equal in number to the number of rows we have removed. In such wise we can visualize that if we remove, let us say, 3 rows from the top of our figure, the columns that remain will each of them be 3 units shorter than they were previously, and each of the 3 new columns which we wish to add shall also need to be 3 units shorter than the original columns, to match the newly shortened columns that remain. But, the rows that we have removed to convert to columns are each of them the same length (width) as the original columns' height, for the initial figure was a perfect square! Thus we will have 3 extra unit squares in each of the 3 rows we have removed, which will none of them be required to form our additional columns. This holds true for any number, not only 3, as can easily be seen.</p> <p>Having satisfied ourselves that the general statement given above is in fact a general truth, let us concern ourselves with consecutive integers.</p> <p>At first glance consecutive integers, being exactly 1 unit apart, would not seem to fit in with the rules given above. 
However, if we consider a perfect square whose sides are each, for instance, 8 1/2 units long, we shall see that an 8 by 9 rectangle can be formed by removing the top half-row from the square and placing it along the right hand side. This gives an extra 1/4 unit square of dimensions 1/2 unit by 1/2 unit.</p> <p>Similarly, integers which are 3 units apart from each other such as 7 and 10, can be formed as a rectangle from a square by removing and rearranging 1 1/2 rows from a square of non-integer length sides to form a perfect rectangle, after discarding the additional 1 1/2 unit by 1 1/2 unit square which is unnecessary.</p> <p>Since in each case, whether 1 unit apart or 3 units apart, the rectangle formed is a perfect rectangle of integer length sides, the number of unit squares in each rectangle is of course an integer as well.</p> <p>The rectangles in this particular case can be formed beginning from the <em>same</em> square, of 8 1/2 unit by 8 1/2 unit dimensions, and in the one case by discarding 1/4 of a unit square and in the other 2 1/4 unit squares. This is of course true whether we formed rectangles of dimensions 7 x 10 and 8 x 9, or 1007 x 1010 and 1008 x 1009. The discarded quantities are the same in each case. Thus we see that in any case patterned after this one, the rectangles would be exactly 2 unit squares different from one another.</p> <p>If we take all the unit squares from the larger of these two rectangles and form them into a single row, and from the smaller of the two rectangles we form all the unit squares into a single column, we can form the dimensions of a much larger rectangle with a length (width) 2 units greater than its height.</p> <p>By reversing the manipulation performed earlier, we can remove a single column from the right hand side of this larger rectangle and place it along the top to <em>almost</em> form a square—for it should be seen at this point that we shall be 1 unit square lacking from forming a perfect square. 
And this shall be true regardless of which numbers we begin our manipulation with, that we shall always end up exactly 1 unit square short of a perfect square.</p> <p>To state this principle concisely, and without the necessity for geometric visualization, we should say that:</p> <p><strong>Of four consecutive numbers, if the inner two are multiplied together and the outer two are multiplied together, they shall form products different from each other by exactly 2 units in every case,</strong></p> <p>and further, that:</p> <p><strong>Any two numbers which differ from each other by exactly 2 units shall form, when multiplied, a number precisely 1 unit less than the perfect square formed of the number exactly interposing between them,</strong></p> <p>which will be seen by the astute reader to be merely a special case of the more general principle stated earlier in this passage.</p>
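The principles stated in prose above lend themselves to a quick mechanical check; here is a small sketch (the helper name `check_slide` is my own, not part of the original exposition):

```python
# Numerical check of the principles stated above.

def check_slide(x, d):
    # Sliding two equal factors apart by d lowers the product by exactly d^2.
    return x * x - (x - d) * (x + d)

assert all(check_slide(x, d) == d * d
           for x in range(1, 50) for d in range(0, x))

for m in range(1, 200):
    inner = (m + 1) * (m + 2)    # inner pair of four consecutive numbers
    outer = m * (m + 3)          # outer pair
    assert inner - outer == 2    # products always differ by exactly 2

    # numbers 2 apart multiply to 1 less than the square of the number between
    assert m * (m + 2) == (m + 1) ** 2 - 1
```

Running it confirms both boxed statements for every case tried.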
1,900,365
<p>Apparently this is a true statement, but I cannot figure out how to prove this. I have tried setting </p> <p>$$(m)(m + 1)(m + 2)(m + 3) = (m + 4)^2 - 1 $$</p> <p>but to no avail. Could someone point me in the right direction? </p>
alephzero
223,485
<p>You can make the algebra simpler by starting from a better place than $m(m+1)(m+2)(m+3)$. Think symmetry!</p> <p>$$\begin{align} (m - \tfrac 3 2)(m - \tfrac 1 2)(m + \tfrac 1 2)(m + \tfrac 3 2) &amp;= (m^2 - \tfrac 9 4)(m^2 - \tfrac 1 4) \\ &amp;= m^4 - \tfrac 5 2 m^2 + \tfrac{9}{16} \\ &amp;= (m^2 - \tfrac 5 4)^2 - 1 \end{align}$$</p> <p>Finally, if $m - \frac 1 2$ is an integer, $m = k + \frac 1 2$ where $k$ is an integer.</p> <p>Therefore $m^2 - \frac 5 4 = k^2 + k - 1$ is also an integer.</p>
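Unwinding the substitution for an integer starting point $n$ (take $m = n + \tfrac32$, so $k = n + 1$), the computation above says $n(n+1)(n+2)(n+3) = (n^2+3n+1)^2 - 1$; a quick brute-force confirmation:

```python
# The product of four consecutive integers is one less than a perfect square.
def four_consecutive_product(n):
    return n * (n + 1) * (n + 2) * (n + 3)

for n in range(0, 1000):
    assert four_consecutive_product(n) == (n * n + 3 * n + 1) ** 2 - 1
```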
171,602
<p>In all calculus textbooks, after the part about successive derivatives, the $C^k$ class of functions is defined. The definition says:</p> <blockquote> <p>A function is of class $C^k$ if it is differentiable $k$ times and the $k$-th derivative is continuous.</p> </blockquote> <p>Wouldn't it be more natural to define them to be the class of functions that are differentiable $k$ times? Why is the continuity of the $k$-th derivative so important as to justify a specific definition?</p>
bartgol
33,868
<p>Well, first of all, if the derivative of order $k-1$ is not continuous, then the $k$-th derivative does not even exist.</p> <p>As for the last one (the $k$-th one), you have a point: why do we need that derivative to be continuous? If we dropped this requirement, wouldn't we get a broader class of functions, maybe able to represent a broader class of phenomena? In some sense yes. But...</p> <p>While working with $C^k$ spaces, we are often interested in the <em>pointwise</em> value of some function $f$. If the 3rd derivative of $f$ has a physical/sociological/demographic/whatever meaning, then it would <em>probably</em> not be satisfying if that derivative jumped at a point $x_0$. I said <em>probably</em>, since there are phenomena involving quantities that are allowed to change suddenly. In physics, for instance, the electric charge density can vary discontinuously. This quantity, in dimension one, turns out to be the derivative of the electric field, which itself is the derivative of the electric potential. Therefore, for this phenomenon, a potential which is $C^2$ according to the standard definition of $C^k$ spaces might not represent the phenomenon that I am observing, since it would give a continuous charge density.</p> <p>Nevertheless, functions that have $k$ continuous derivatives and whose $(k+1)$-th derivative has only jump discontinuities form a pretty interesting space, called the Hölder space with exponent $1$ (written $C^{k,1}$). Hölder spaces are pretty complicated, in my opinion, but the case $k=0$ is fairly easy: it corresponds to Lipschitz-continuous functions, which are kind of useful in some contexts.
Maybe these spaces are closer to what you would expect.</p> <p>But as I said, in many applications you are interested in the pointwise value of a function, and you don't expect that function to jump.</p> <p>Things become more interesting when you don't care what the value of the function is at any given point, but rather you care that some other property holds, like for instance </p> <p>$$\int_a^b |f(x)|^2dx&lt;\infty$$</p> <p>But this is another story entirely...</p>
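For a concrete instance of the gap between "differentiable $k$ times" and "$C^k$", the classic example $f(x)=x^2\sin(1/x)$ (with $f(0)=0$) can be probed numerically — a sketch of mine, not part of the answer above:

```python
import math

# f is differentiable everywhere, but f' is not continuous at 0.
def f(x):
    return x * x * math.sin(1.0 / x) if x != 0 else 0.0

def fprime(x):  # closed form of f' away from 0
    return 2 * x * math.sin(1.0 / x) - math.cos(1.0 / x)

# The difference quotient at 0 tends to 0, so f'(0) exists (and equals 0)...
for h in (1e-3, 1e-5, 1e-7):
    assert abs((f(h) - f(0)) / h) <= h

# ...yet f' keeps oscillating between roughly -1 and 1 arbitrarily close to 0.
samples = [fprime(1.0 / (k * math.pi)) for k in range(100, 120)]
assert max(samples) - min(samples) > 1.5
```

So $f$ is differentiable once but is not $C^1$.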
36,083
<p>I'm trying to find a way to prove this:</p> <p>EDIT: without using L'Hôpital's theorem. $$\lim_{x\rightarrow 1}\frac{x^{1/m}-1}{x^{1/n}-1}=\frac{n}{m}.$$ Honestly, I didn't come up with any good idea.</p> <p>We know that $\lim_{x\rightarrow 1}x^{1/m}$ is $1$.</p> <p>I'd love your help with this.</p> <p>Thank you.</p>
Bill Dubuque
242
<p><strong>HINT</strong> $\ $ If you change variables $\rm\ z = x^{1/n} $ then the limit reduces to a very simple first derivative calculation. See also some of my <a href="https://math.stackexchange.com/search?q=user%3A242+derivative+limit">prior posts</a> for further examples of limits that may be calculated simply as first derivatives.</p>
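As a numerical sanity check of the hinted approach (after $z=x^{1/n}$ the limit is the derivative of $z^{n/m}$ at $z=1$, i.e. $n/m$) — the function name here is my own:

```python
# Numerically confirm lim_{x->1} (x^(1/m) - 1)/(x^(1/n) - 1) = n/m.
def ratio(x, m, n):
    return (x ** (1.0 / m) - 1.0) / (x ** (1.0 / n) - 1.0)

for m, n in [(2, 3), (5, 2), (7, 4), (1, 6)]:
    assert abs(ratio(1.0 + 1e-6, m, n) - n / m) < 1e-3
```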
1,547,057
<p>I've been working on the following problems, and I know how to integrate functions, but I do not know how to find the value of "c" in the examples below when finding the antiderivative. Any idea what to do? Cheers</p> <ol> <li><p>A particle travels in a straight line such that its acceleration at time <em>t</em> seconds is equal to $6t+1$ $m/s^2$. When $t=2$, the displacement equals $12m$, and when $t=3$ the displacement equals $34m$. Find the displacement and velocity when $t=4$.</p></li> <li><p>A particle travels in a straight line with its acceleration at time <em>t</em> equal to $3t+2$ $m/s^2$. The particle has an initial positive velocity and travels $30m$ in the fourth second. Find the velocity of the body when $t=5$.</p></li> </ol>
Ctx
293,338
<p>Let me give you a hint for your second problem:</p> <p>Again, the problem is to find the constant C after integrating from the acceleration to the velocity equation. To achieve this, you have to integrate the velocity equation again, to get the displacement equation. Now, you can use your knowledge for the displacement in the fourth second (calculate the difference of the displacement equation for t=4 and t=3) to get the c for the velocity equation. Then, simply solve it for t=5.</p>
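Carrying the hint through numerically for problem 2 (variable names are mine):

```python
# a(t) = 3t + 2  =>  v(t) = 1.5 t^2 + 2 t + C  =>  s(t) = 0.5 t^3 + t^2 + C t + D.
def s0(t):                       # displacement with C = 0 (D cancels below)
    return 0.5 * t ** 3 + t ** 2

# "Travels 30 m in the fourth second" means s(4) - s(3) = 30, which fixes C:
C = 30 - (s0(4) - s0(3))         # s(4) - s(3) = 25.5 + C
assert C == 4.5

def v(t):
    return 1.5 * t ** 2 + 2 * t + C

assert v(5) == 52.0              # velocity at t = 5, in m/s
```

Note that $C=v(0)=4.5&gt;0$, consistent with the stated initial positive velocity.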
4,338,558
<p>I am not sure if my proof here is sound, please could I have some opinions on it? If you disagree, I would appreciate some advice on how to fix my proof. Thanks</p> <p><span class="math-container">$X_1, X_2, ..., X_n$</span> are countably infinite sets.</p> <p>Let <span class="math-container">$X_1 = \{{x_1}_1, {x_1}_2, {x_1}_3, ... \}$</span></p> <p>Let <span class="math-container">$X_2 = \{{x_2}_1, {x_2}_2, {x_2}_3, ... \}$</span></p> <p>...</p> <p>Let <span class="math-container">$X_n = \{{x_n}_1, {x_n}_2, {x_n}_3, ... \}$</span></p> <p>Let <span class="math-container">$P_n$</span> be the list of the first <span class="math-container">$n$</span> ordered primes: <span class="math-container">$P_n = (2,3,5,...,p_n) = (p_1,p_2,p_3,...,p_n)$</span></p> <p>Define the injection: <span class="math-container">$\sigma: X_1 \times X_2 \times ... \times X_n \to \mathbb{N}$</span></p> <p><span class="math-container">$\sigma (({x_1}_A, {x_2}_B, {x_3}_C,...,{x_n}_N)) = p_1^A\cdot p_2^B \cdot p_3^C \cdot ... \cdot p_n^N$</span></p> <p>By the Fundamental Theorem of Arithmetic, <span class="math-container">$\sigma$</span> is an injection, because if two elements in the domain map to the same element in the codomain, they must be the same element.</p> <p>Clearly, the image is infinite. So by definition, the Cartesian product of n sets which are all countably infinite is itself countably infinite.</p> <p>EDIT: Is it worth noting that my <span class="math-container">$X_n$</span> sets should be ordered or does that not matter?</p>
Ninad Munshi
698,724
<p>We can rewrite a few things to make the computations a bit nicer. First denote <span class="math-container">$G$</span> as</p> <p><span class="math-container">$$F(x,y) = x^{\frac{5}{3}}y^{\frac{5}{3}}G(x,y)$$</span></p> <p>which means <span class="math-container">$G$</span> now encompasses all of the change between the functions. Next denote</p> <p><span class="math-container">$$I(a,b) = \iint\limits_U \frac{\log x \log y}{x^ay^bG(x,y)}dxdy = \frac{\partial}{\partial a}\frac{\partial}{\partial b}\iint\limits_U \frac{1}{x^ay^bG(x,y)}dxdy$$</span></p> <p>In all cases cutting the region of integration in half at the line <span class="math-container">$x=y$</span> by symmetry is useful.</p> <p><span class="math-container">$\textbf{Case 1}$</span></p> <p><span class="math-container">$$x^{\frac{8}{3}}y^{\frac{5}{3}} &gt; R \implies y &gt; \left(\frac{R^3}{x^{8}}\right)^{\frac{1}{5}} $$</span></p> <p><span class="math-container">$$I = 2\frac{\partial}{\partial a}\frac{\partial}{\partial b}\int_{R^{\frac{3}{13}}}^\infty \int_{\max\left\{\left(\frac{R^3}{x^{8}}\right)^{\frac{1}{5}},1\right\}}^x\frac{1}{x^{a+1}y^b}dydx$$</span></p> <p><span class="math-container">$\textbf{Case 2}$</span></p> <p><span class="math-container">$$\begin{cases}u = xy \\ v = \frac{y}{x}\end{cases} \to J^{-1} = 2\frac{y}{x} \implies J = \frac{1}{2v}$$</span></p> <p><span class="math-container">$$x+y = \left(\frac{u}{v}\right)^{\frac{1}{2}}(v+1)$$</span></p> <p><span class="math-container">$$\frac{u^{\frac{13}{6}}}{v^{\frac{1}{2}}}(v+1) &gt; R \implies u &gt; \left(\frac{vR^2}{(v+1)^2}\right)^\frac{3}{13}$$</span></p> <p><span class="math-container">$$I = 2\frac{\partial}{\partial a}\frac{\partial}{\partial b}\int_0^1 \int_{\max\left\{\left(\frac{vR^2}{(v+1)^2}\right)^\frac{3}{13},\frac{1}{v}\right\}}^{\infty}\frac{dudv}{2u^{a+b+\frac{1}{2}}\sqrt{v}(1+v)}$$</span></p> <p>And the same coordinate change can be used for cases 3 and 4, but now the boundary term would read</p> <p><span 
class="math-container">$$x-y = \left(\frac{u}{v}\right)^{\frac{1}{2}}(v-1)$$</span></p>
215,125
<p>Suppose $A(x)$ is a $\Delta_0$ formula defining a non-empty set of natural numbers. It's an easy theorem that there is a primitive recursive function $f:\mathbb{N} \rightarrow \mathbb{N}$ such that $Range(f) = \{n \in \mathbb{N} \mid A(n)\}$. I'm wondering if it's known whether either of the following strengthenings of this theorem are true:</p> <p><strong>Strengthening 1:</strong> Suppose $A(x)$ is a $\Delta_0$ formula defining a non-empty set of natural numbers. Can we find a $c \in \mathbb{N}$ such that $\varphi_c$ is primitive recursive, $\mathbb{N} \models \forall x (A(x) \rightarrow \exists y \varphi_c(y) {\downarrow} = x)$, and $PA \vdash \forall y \exists x (\varphi_c(y) {\downarrow} = x \wedge A(x))$?</p> <p><strong>Strengthening 2:</strong> Suppose $A(x)$ is a $\Delta_0$ formula defining a non-empty set of natural numbers. Can we find a $c \in \mathbb{N}$ such that $\varphi_c$ is primitive recursive, $PA \vdash \forall x (A(x) \rightarrow \exists y \varphi_c(y) {\downarrow} = x)$, and $PA \vdash \forall y \exists x (\varphi_c(y) {\downarrow} = x \wedge A(x))$?</p> <p>(In the above, "$\varphi_c$" refers to the (partial) recursive function defined by the Turing machine whose Gödel code is c. "$\varphi_c(y) {\downarrow} = x$" is shorthand for a $\Sigma_1$ formula which says that with input y, the Turing machine with Gödel code c eventually halts and outputs x.)</p> <p><strong>Pedantic Clarification:</strong> Typically "$\phi_c(x){\downarrow} = y$" is an abbreviation for "$\exists t M(c,x,y,t)$", where (i) $M(c,x,y,t)$ is a $\Delta_0$ formula and (ii) $\mathbb{N} \models M(c,x,y,t)$ if and only if the Turing machine with Gödel code c halts after t steps on input x and produces output y. Let's call any formula $M$ satisying (i) and (ii) a "computation predicate". By <a href="http://projecteuclid.org/download/pdf_1/euclid.ndjfl/1093893356" rel="nofollow">this paper by H.B. 
Enderton</a>, Strengthening 2 is <em>false</em> for certain choices of computation predicate.</p> <p>It follows that any argument for Strengthening 2 which assumes that our choice of $\Delta_0$ computation predicate is arbitrary cannot suffice. We also need that PA proves certain facts about our chosen computation predicate. I suspect that any reasonable construction of a computation predicate will work (e.g., whatever construction is used in your favorite recursion theory book), so my question should be phrased more precisely as: is there a computation predicate such that Strengthenings 1 and 2 hold?</p>
Connor Mooney
16,659
<p>I think that $H_0$ grows like $e^{A}$, which is optimal in light of the example $u(x,t) = e^{-At}\cos(x)$ which solves $u_t - A\Delta u = 0$.</p> <p>Say we fix $C_i$ and the corresponding Harnack constant is $H$, and that $u$ is positive in $B_2 \times [-4,0]$. The proof is by applying the Harnack inequality to the rescaling you suggested, $\tilde{u}(x,t) = u(x, t/A)$, which is defined on $B_2 \times [-4A,0]$. Iterating the Harnack inequality we have that $\tilde{u}|_{B_1 \times \{-A\}} &lt; H\tilde{u}(0,-A + 1) &lt; H^2 \tilde{u}(0,-A + 2) ... &lt; H^A\tilde{u}(0,0)$. Scaling back gives the Harnack inequality $u|_{B_1 \times \{-1\}} \leq H^Au(0,0)$.</p> <p>Remark: I think the dependence on the ellipticity ratio $\Lambda = C_1/C_0$ (say we fix $C_0 = 1$ and let $C_1$ get large) can be understood similarly, with dependence going exponentially in $\sqrt{\Lambda}$ as suggested by the stationary example $u = e^{\sqrt{\Lambda}x}\cos(y)$, which solves the elliptic equation $tr(A \cdot D^2u) = 0$ where $A = \text{diag}(1,\Lambda)$. This is because $u$ is harmonic in an ``ellipsoidal geometry'' where the balls are ellipsoids with vertical axis $1$ and horizontal axis $1/\sqrt{\Lambda}$, and the proof by rescaling/iterating in overlapping ellipsoids for constant coefficients works. This is intuitively the worst case scenario, since if the coefficients jump around then the ellipsoids "rotate" and one iterates the Harnack inequality fewer times, but to do things rigorously I think one needs to go through the original proof. (Maybe there is a good probabilistic interpretation of this too.)</p>
439,750
<p>I have to check if $\int_{0}^\infty \frac{1}{x\ln(x)^2}\,\mathrm dx$ is convergent or divergent.</p> <p>My approach was to integrate the function, hence: $\int_{0}^\infty \frac{1}{x\ln(x)^2}\,\mathrm dx=-\lim_{x \to \infty} 1/\ln(x)+ \lim_{x \to 0} 1/\ln(x)=0 $</p> <p>Still my book says that it is divergent. Maybe the $\infty$ sign of the integral means to check for $+\infty$ and $-\infty$, or I just overlooked something. Any help would be appreciated.</p>
Ali
78,741
<p>Let's change the variables:</p> <p>$$y=\ln {x} \\ dy =\frac{dx}{x}$$</p> <p>Now the integral becomes:</p> <p>$$I=\int_0^\infty \frac{dx}{x(\ln{x})^2}=\int_{-\infty}^\infty \frac{dy}{y^2}.$$</p> <p>Note that the integrand is an even function, ergo (the main goal is to avoid the singularity):</p> <p>$$I=2\int_0^{\infty} \frac{dy}{y^2}=-\frac{2}{y}\large{|_0^{\infty}}=\infty $$</p>
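To see the divergence at $y=0$ (i.e. $x=1$) concretely: the integral of $1/y^2$ over $[\varepsilon,1]$ is $1/\varepsilon - 1$, which blows up as $\varepsilon\to 0$. A small sketch:

```python
# int_eps^1 dy / y^2 = 1/eps - 1 grows without bound as eps -> 0.
def partial_integral(eps):
    return 1.0 / eps - 1.0

values = [partial_integral(10.0 ** -k) for k in range(1, 7)]
assert values == sorted(values)          # strictly growing...
assert partial_integral(1e-6) > 1e5      # ...and unbounded, hence divergence
```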
3,772,371
<p>Write the polynomial of degree <span class="math-container">$4$</span> with <span class="math-container">$x$</span> intercepts of <span class="math-container">$(\frac{1}{2},0), (6,0) $</span> and <span class="math-container">$ (-2,0)$</span> and <span class="math-container">$y$</span> intercept of <span class="math-container">$(0,18)$</span>. The root (<span class="math-container">$\frac{1}{2},0)$</span> has multiplicity <span class="math-container">$2$</span>.</p> <p>I am to write the factored form of the polynomial with the above information. I get:</p> <p><span class="math-container">$f(x)=-6\big(x-\frac{1}{2}\big)^2(x+2)(x-6)$</span></p> <p>Whereas the provided solution is:</p> <p><span class="math-container">$f(x)=-\frac{3}{2}(2x-1)^2(x+2)(x-6)$</span></p> <p>Here's my working:</p> <p>Write out in factored form:</p> <p><span class="math-container">$f(x) = a\big(x-\frac{1}{2}\big)^2(x+2)(x-6)$</span></p> <p>I know that <span class="math-container">$f(0)=18$</span> so:</p> <p><span class="math-container">$$18 = a\big(-\frac{1}{2}\big)^2(2)(-6)$$</span></p> <p><span class="math-container">$$18 = a\big(\frac{1}{4}\big)(2)(-6)$$</span></p> <p><span class="math-container">$$18 = -3a$$</span></p> <p><span class="math-container">$$a = -6$$</span></p> <p>Thus my answer: <span class="math-container">$f(x)=-6\big(x-\frac{1}{2}\big)^2(x+2)(x-6)$</span></p> <p>Where did I go wrong and how can I arrive at:</p> <p><span class="math-container">$f(x)=-\frac{3}{2}(2x-1)^2(x+2)(x-6)$</span> ?</p>
E. Z. L.
811,580
<p>Both forms are equivalent: <span class="math-container">$$ -6(x-\frac{1}{2})^2(x+2)(x-6)\\ =-\frac{3}{2}\cdot 2^2(x-\frac{1}{2})^2(x+2)(x-6)\\ =-\frac{3}{2}(2(x-\frac{1}{2}))^2(x+2)(x-6)\\ =-\frac{3}{2}(2x-1)^2(x+2)(x-6) $$</span></p>
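A quick numerical confirmation that the two factored forms are the same polynomial:

```python
# Evaluate both factored forms at many points; they agree everywhere,
# and both hit the required y-intercept f(0) = 18.
def f_mine(x):
    return -6 * (x - 0.5) ** 2 * (x + 2) * (x - 6)

def f_book(x):
    return -1.5 * (2 * x - 1) ** 2 * (x + 2) * (x - 6)

for k in range(-40, 41):
    x = k / 4
    assert abs(f_mine(x) - f_book(x)) <= 1e-9 * max(1.0, abs(f_mine(x)))

assert f_mine(0) == 18.0 and f_book(0) == 18.0
```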
2,102,619
<p><strong>What is the total running time of counting from 1 to n in binary if the time needed to add 1 to the current number i is proportional to the number of bits in the binary expansion of i that must change in going from i to i + 1?</strong></p> <p>Having trouble getting my head around this one because I cannot find a formula for how bits change from some i to i+1. For example: $$~~~~before~~~~~~after~~~~changes$$ $$0000 -&gt; 0001 = 1$$ $$0001 -&gt; 0010 = 2$$ $$0010 -&gt; 0011 = 1$$ $$0011 -&gt; 0100 = 3$$</p>
Teh Rod
389,818
<p>Let $y=f(x)$ and set $v=y'=\frac{dy}{dx}$; the equation becomes $v'=(1+v^2)^{3/2}$. By separation of variables, $\frac{1}{\left (1+v^2\right )^{3/2}}\text{d}v=\text{d}x$. The right side is trivial; for the left side, the substitution $v=\tan u\to \text{d}v=\sec^2 u\text{d}u$ gives $\int\frac{\sec^2 u}{\sec^3 u}\text{d}u=\int\cos u\text{d}u=\sin u=\frac{v}{\sqrt{1+v^2}}+C.$ So in total we have $\frac{v}{\sqrt{1+v^2}}=x+C$ with $v=y'$.</p> <p>Since $v(0)=f'(0)=0$, the constant is just $0$. Squaring both sides and doing some algebra gives $v^2=\frac{x^2}{1-x^2}\implies\frac{dy}{dx}=\frac{x}{\sqrt{1-x^2}}$. By a $u$-substitution the remaining integral is immediate, and we get $y=-\sqrt{1-x^2}+C$; the initial conditions give $C=1$, so... $\boxed{f(x)=1-\sqrt{1-x^2}}$</p>
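The closed form can be checked directly against the original equation $y''=(1+(y')^2)^{3/2}$, using $f'(x)=\frac{x}{\sqrt{1-x^2}}$ and $f''(x)=(1-x^2)^{-3/2}$; a short numeric sketch:

```python
import math

# f(x) = 1 - sqrt(1 - x^2) on (-1, 1): check f'' == (1 + f'^2)^(3/2).
f   = lambda x: 1.0 - math.sqrt(1.0 - x * x)
fp  = lambda x: x / math.sqrt(1.0 - x * x)          # f'
fpp = lambda x: (1.0 - x * x) ** -1.5               # f''

for x in (0.0, 0.1, 0.3, 0.5, 0.7, 0.9):
    lhs, rhs = fpp(x), (1.0 + fp(x) ** 2) ** 1.5
    assert abs(lhs - rhs) <= 1e-9 * lhs

assert f(0.0) == 0.0 and fp(0.0) == 0.0             # initial conditions
```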
345,652
<p>I try to understand why by definition </p> <ol> <li>$[c_0,c_1,\ldots,c_n]=[c_0,[c_1,\ldots,c_n]]$ and also </li> <li>$[c_0,c_1,\ldots,c_n]=[c_0,c_1,\ldots,c_{n-2},[c_{n-1},c_n]]$ .</li> </ol> <p>Those are continued fractions, and $1$ and $2$ are notes I have in the lecture summary.</p> <p>But can we add brackets wherever we want? For example:<br> $[c_0,c_1,\ldots,c_n]=[c_0,c_1,[c_2,\ldots,c_n]]$ </p> <p>Thanks!</p>
DonAntonio
31,254
<p>By definition:</p> <p>$$[c_0,c_1,\ldots,c_n]:=c_0+\cfrac{1}{c_1+\cfrac{1}{c_2+\cfrac{1}{\ddots+\cfrac{1}{c_n}}}}=c_0+\cfrac{1}{[c_1,c_2,\ldots,c_n]}=[c_0,[c_1,\ldots,c_n]]\;,\;etc.$$</p> <p>So yes: you can put the brackets where you want <em>on the right</em>.</p>
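The recursion $[c_0,\ldots,c_n]=c_0+1/[c_1,\ldots,c_n]$ is easy to check mechanically with exact rational arithmetic (the function name is mine):

```python
from fractions import Fraction

# [c0, c1, ..., cn] = c0 + 1/[c1, ..., cn]; entries may themselves be
# values of inner continued fractions, which is what the brackets mean.
def cf(terms):
    if len(terms) == 1:
        return Fraction(terms[0])
    return Fraction(terms[0]) + 1 / cf(terms[1:])

c = [3, 7, 15, 1]
assert cf(c) == Fraction(355, 113)
# grouping anywhere on the right gives the same value:
assert cf(c) == cf([c[0], cf(c[1:])])
assert cf(c) == cf([c[0], c[1], cf(c[2:])])
assert cf(c) == cf([c[0], c[1], cf([c[2], c[3]])])
```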
2,755,091
<p>I know that I have to prove this with the induction formula. I've proved the first condition, i.e. the base case, where $7^1-1=6$, which is divisible by $6$. But I got stuck on how to proceed with the second condition, i.e. the inductive step for $k$.</p>
José Carlos Santos
446,262
<p><strong>Hint:</strong> $7^{n+1}-1=7^{n+1}-7^n+7^n-1=6\times 7^n+7^n-1$.</p>
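The hint's decomposition, and the divisibility claim itself, brute-force checked:

```python
# 7^(n+1) - 1 = 6*7^n + (7^n - 1): if 6 divides 7^n - 1, it divides 7^(n+1) - 1.
for n in range(1, 200):
    assert 7 ** (n + 1) - 1 == 6 * 7 ** n + (7 ** n - 1)
    assert (7 ** n - 1) % 6 == 0
```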
4,008,277
<p><a href="https://en.wikipedia.org/wiki/Vandermonde%27s_identity#Generalized_Vandermonde%27s_identity" rel="noreferrer">Vandermonde's identity</a> states that</p> <p><span class="math-container">$$ \sum_{k=0}^{r} {m \choose k}{n \choose r-k} = {m+n \choose r} $$</span></p> <p>I wonder if we can use this formula or otherwise to calculate:</p> <p><span class="math-container">$$ \sum_{k=0}^{r}k^l {m \choose k}{n \choose r-k}, l \in \mathbb{N} ?$$</span></p> <p>or if not, at least when <span class="math-container">$l=1,$</span> namely:</p> <p><span class="math-container">$$ \sum_{k=0}^{r}k {m \choose k}{n \choose r-k}?$$</span></p> <p>I'd appreciate the relevant calculations, or some links where I can find the above.</p>
Trevor
493,232
<p>The approach above can be simplified to following general case.</p> <p>Let <span class="math-container">$f(n):= \sum_{i=1}^{n}{(n\bmod i)}$</span> as described.</p> <p>Given any pair of natural numbers <span class="math-container">$(n,k)$</span>, we have <span class="math-container">$f(n)=f(n+k)$</span> if and only if</p> <p><span class="math-container">$$\sum_{i=1}^{k}{\sigma(n+i)}=2kn+k^2.$$</span></p> <p>This seems to fit all known examples, including the <span class="math-container">$(2^m-1,2^m)$</span> pairs.</p>
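The stated equivalence can be brute-force verified on a small range (helper names are mine; $\sigma$ is the sum-of-divisors function):

```python
# f(n) = sum of (n mod i); sigma(n) = sum of divisors of n.
def f(n):
    return sum(n % i for i in range(1, n + 1))

def sigma(n):
    return sum(d for d in range(1, n + 1) if n % d == 0)

# f(n) == f(n+k)  <=>  sigma(n+1) + ... + sigma(n+k) == 2kn + k^2
for n in range(1, 60):
    for k in range(1, 10):
        lhs = f(n) == f(n + k)
        rhs = sum(sigma(n + i) for i in range(1, k + 1)) == 2 * k * n + k * k
        assert lhs == rhs

assert f(3) == f(4)    # the m = 2 instance of the (2^m - 1, 2^m) pairs
```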
3,420,960
<p>The problem is as follows:</p> <blockquote> <p>A protein sample spins in the counterclockwise direction in a centrifuge seen from the top as shown in the diagram below. The radius of the centrifuge is <span class="math-container">$R=2\,m$</span>. The magnitude of its speed changes. At a certain instant the acceleration vector is as shown in the figure. Find the speed in <span class="math-container">$\frac{m}{s}$</span> and state the type of its motion in the given instant: A for acceleration if the speed increases or D for deceleration if the speed decreases.</p> </blockquote> <p><a href="https://i.stack.imgur.com/7ENKn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7ENKn.png" alt="Sketch of the problem"></a></p> <p>The alternatives given in my book are:</p> <p><span class="math-container">$\begin{array}{ll} 1.10\frac{m}{s};\,A\\ 2.10\frac{m}{s};\,D\\ 3.5\frac{m}{s};\,A\\ 4.5\frac{m}{s};\,D\\ 5.10\frac{m}{s};\,\textrm{uniform motion}\\ \end{array}$</span></p> <p>I'm particularly lost on this problem. What is the acceleration shown in the graph? Is it perhaps the total acceleration, in other words the norm of the centripetal and the tangential acceleration?</p> <p>If so, then that would mean that:</p> <p><span class="math-container">$a_c=50\frac{m}{s^2}$</span></p> <p>Then the tangential acceleration will be:</p> <p><span class="math-container">$a_t=50\frac{m}{s^2}$</span></p> <p>And because the angular acceleration is related to the tangential acceleration through the radius as:</p> <p><span class="math-container">$a_t=\alpha\times r$</span></p> <p>Then:</p> <p><span class="math-container">$\alpha=\frac{a_t}{r}=\frac{50}{2}=25\frac{rad}{s^2}$</span></p> <p>But that's how far I went in my analysis. What else can I relate to find the requested speed?</p> <p>The only equations which I recall are:</p> <p><span class="math-container">$\omega_{f}=\omega_{0}+\alpha t$</span></p> <p>Can somebody help me here? What exactly is the right path to get the answer?</p>
Alexander Geldhof
560,477
<p>The tangential acceleration has opposite direction from the tangential velocity so there will be tangential deceleration D.</p> <p>The centripetal acceleration is <span class="math-container">$50 \frac{m}{s^2}$</span>; as @guest stated, this leads to a velocity of <span class="math-container">$\sqrt{50 \frac{m}{s^2} \cdot 2m} = 10 \frac{m}{s}$</span>.</p>
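The arithmetic, spelled out (values read off the figure as in the answer above):

```python
import math

R   = 2.0     # m, radius of the centrifuge
a_c = 50.0    # m/s^2, centripetal component read from the figure

v = math.sqrt(a_c * R)    # from a_c = v^2 / R
assert v == 10.0          # 10 m/s; with the tangential component opposing
                          # the motion, this is alternative 2 (D)
```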
2,738,138
<p>I know it's probably a stupid question, but I'm confused. I have a set {$x\in\mathbb R, \frac{1}{x} \le 1$} that I want to represent as interval/s.</p> <p>Thinking about it logically, I know that the set is $x\in\left]-\infty, 0\right[\,\cup\,\left[1, +\infty\right[$.</p> <p>However, when trying to solve the inequality, I can't seem to get the answer. What am I doing wrong?</p> <p>I take $\frac{1}x \le 1$, and I split it into 2 cases:</p> <ol> <li>if $x &gt; 0$, then $x \ge 1$,</li> <li>if $x &lt; 0$, then $x \le 1$, which is every element of $\mathbb R$. Where am I going wrong? Thanks.</li> </ol>
Eff
112,061
<p>Consider the two cases $x&gt;0$ and $x&lt;0$ separately.</p> <p>If $x&gt;0$ then the direction of the inequality is preserved if you multiply by $x$, hence $$\frac1x \leq 1 \implies \frac1x\times x\leq 1\times x\implies 1\leq x. $$</p> <p>If $x&lt;0$, then the inequality changes direction if you multiply by $x$, hence $$\frac1x \leq 1\implies \frac 1x\times x\geq 1\times x \implies 1\geq x, $$ which holds for all $x&lt;0$. Hence the solution set is all $x&lt;0$ together with $x\geq 1$.</p>
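Independent of the case analysis, the solution set of the original inequality $\frac1x\le 1$ can be brute-force checked on a grid:

```python
# 1/x <= 1 holds exactly for x < 0 or x >= 1.
for k in range(-50, 51):
    if k == 0:
        continue
    x = k / 10
    assert (1 / x <= 1) == (x < 0 or x >= 1)
```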
1,308,872
<p>Let $H$ be a Hilbert space, and let $T \colon H \to H$ be a bounded linear operator. Then how to show the following?</p> <p>The range of $T$ is finite-dimensional if and only if $T$ can be represented in the form $$Tx = \sum_{j=1}^n \langle x, v_j \rangle w_j \ \ \ [ v_j, w_j \in H].$$</p> <p>My effort: </p> <p>Suppose that $\dim \mathscr{R}(T) = n$, and let $\{ e_1, \ldots, e_n \}$ be an orthonormal basis for $\mathscr{R}(T)$. Then, for every $x \in H$, we have $$Tx = \sum_{j=1}^n \langle Tx, e_j \rangle e_j = \sum_{j=1}^n \langle x, T^* e_j \rangle e_j,$$ where $T^*$ denotes the Hilbert adjoint operator of $T$. So we can take $w_j \colon= e_j$ and $v_j \colon= T^* e_j$. </p> <p>Is this reasoning correct? </p> <p>If so, then how to show the converse? Does the above representation mean that the same $v_j$s and $w_j$s will be used for every $x \in H$? If so, then the range of $T$ is spanned by the finitely many elements $w_1, \ldots, w_n$ and is clearly finite-dimensional. </p>
Emilio Novati
187,568
<p>Hint:</p> <p>Let $M,N$ be two points on the common tangent at opposite sides with respect to $A$; then $\angle DAM = \angle DEA$ because they subtend the same arc $DA$ ($\angle DAM$ is a ''limit'' angle, the side $AM$ being tangent, but if you don't like the concept of a ''limit'' angle, you can prove the same claim as a consequence of the <a href="https://proofwiki.org/wiki/Tangent_Secant_Theorem" rel="nofollow">tangent-secant theorem</a>).</p> <p>In the same manner $\angle BAN=\angle BCA$.</p> <p>Now note that $\angle DAM$ and $\angle BAN$ are opposed, and ..... </p> <p>I don't know if this result has a special name.</p>
1,308,872
<p>Let $H$ be a Hilbert space, and let $T \colon H \to H$ be a bounded linear operator. Then how to show the following?</p> <p>The range of $T$ is finite-dimensional if and only if $T$ can be represented in the form $$Tx = \sum_{j=1}^n \langle x, v_j \rangle w_j \ \ \ [ v_j, w_j \in H].$$</p> <p>My effort: </p> <p>Suppose that $\dim \mathscr{R}(T) = n$, and let $\{ e_1, \ldots, e_n \}$ be an orthonormal basis for $\mathscr{R}(T)$. Then, for every $x \in H$, we have $$Tx = \sum_{j=1}^n \langle Tx, e_j \rangle e_j = \sum_{j=1}^n \langle x, T^* e_j \rangle e_j,$$ where $T^*$ denotes the Hilbert adjoint operator of $T$. So we can take $w_j \colon= e_j$ and $v_j \colon= T^* e_j$. </p> <p>Is this reasoning correct? </p> <p>If so, then how to show the converse? Does the above representation mean that the same $v_j$s and $w_j$s will be used for every $x \in H$? If so, then the range of $T$ is spanned by the finitely many elements $w_1, \ldots, w_n$ and is clearly finite-dimensional. </p>
Archimedesprinciple
289,804
<p><a href="https://i.stack.imgur.com/9z9HG.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9z9HG.jpg" alt="Problem Figure." /></a></p> <p>Let the common tangent <span class="math-container">$L$</span> be drawn. Then, by the Alternate Segment Theorem: <span class="math-container">$$\angle DEA =\angle (DA,L)$$</span> And similarly: <span class="math-container">$$\angle CBA=\angle (CA,L)$$</span> But the latter two are equal as they are vertically opposite. Therefore it follows that the angles intercepted by <span class="math-container">$BE$</span> are equal, and thus the lines are parallel.</p> <hr /> <p>This is a special <strong>limiting</strong> case of <em>Reim's Theorem</em>, which states that lines through the points of intersection of two circles cut the circles in points that form two parallel segments.</p> <p>This is not a well-known theorem, by the way.</p>
1,677,141
<p>Is there a standard symbol for <strong>percentile</strong> in mathematics, much like <code>%</code> is used for <em>percentage</em>? I have been trying to find the right answer but keep getting conflicting answers and logic.</p>
Raniz
249,806
<p>As <a href="https://math.stackexchange.com/users/10576/neil">Neil</a> mentions in his comment $P_i$ is a common notation to denote the $i$-th percentile.</p> <p>The <a href="https://en.wikipedia.org/wiki/Percentile" rel="noreferrer">Wikipedia page on Percentile</a> doesn't actually mention the notation as far as I can see but denotes quartiles as $Q_1$, $Q_2$, and $Q_3$ several times and from this it's logical that percentiles would be denoted by $P_i$ (and likewise other <a href="https://en.wikipedia.org/wiki/Quantile" rel="noreferrer">quantiles</a> with their respective character in the same way).</p> <p>A real world example of this notation (even if it's not subscript) is how you request percentiles in the <a href="http://docs.aws.amazon.com/AmazonCloudWatch/latest/APIReference/API_GetMetricStatistics.html" rel="noreferrer">Amazon CloudWatch API</a> which follows the pattern <code>p(\d{1,2}(\.\d{0,2})?|100)</code> - or <code>p5</code> for the 5th percentile, <code>p70</code> for the 70th percentile, <code>p50.36</code> for the 50.36th percentile, and <code>p100</code> for the maximum value.</p>
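The quoted CloudWatch pattern can be exercised directly (a minimal sketch using Python's `re`; the example labels are the ones from the paragraph above):

```python
import re

# CloudWatch-style percentile labels: p0 .. p99(.xx) or p100.
pattern = re.compile(r"p(\d{1,2}(\.\d{0,2})?|100)")

for label in ("p5", "p70", "p50.36", "p100"):
    assert pattern.fullmatch(label)

assert pattern.fullmatch("p101") is None   # out of range
assert pattern.fullmatch("q50") is None    # wrong prefix
```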
1,677,141
<p>Is there a standard symbol for <strong>percentile</strong> in mathematics, much like <code>%</code> is used for <em>percentage</em>? I have been trying to find the right answer but keep getting conflicting answers and logic.</p>
Candace Agonafir
714,145
<p>If you want to specify the vector (i.e. vector C) and the percentile (i.e. 95), then you may write: percentile(C,95).</p>
4,065,655
<p>If <span class="math-container">$A B = C$</span> where matrix <span class="math-container">$C$</span> is stochastic (all entries are positive and all rows add to <span class="math-container">$1$</span>) then is it necessary that both <span class="math-container">$A$</span> and <span class="math-container">$B$</span> be stochastic?</p>
nullUser
17,459
<p>Indeed, split it up into an integral away from <span class="math-container">$0$</span> and an integral near <span class="math-container">$0$</span>, say <span class="math-container">$(1,\infty)$</span> and <span class="math-container">$[0,1]$</span>. First consider <span class="math-container">$(1,\infty)$</span>. In these situations it is common to &quot;borrow&quot; from the dominating term as follows. Write <span class="math-container">$x^{t-1} e^{-x}$</span> as <span class="math-container">$$ \frac{x^{t-1}}{e^{\epsilon x}} e^{-(1-\epsilon)x} $$</span> for <span class="math-container">$\epsilon &gt;0$</span> very small (<span class="math-container">$&lt;1$</span> is enough). Next you can show using limits that the term on the left is bounded as <span class="math-container">$x \to \infty$</span>. Thus an integral away from <span class="math-container">$0$</span> will converge because <span class="math-container">$\int_1^\infty e^{-(1-\epsilon)x} dx$</span> does. For the part close to <span class="math-container">$0$</span>, the <span class="math-container">$e^{-x}$</span> term is bounded and the integral converges because <span class="math-container">$\int_0^1 x^{t-1} dx$</span> converges.</p>
4,065,655
<p>If <span class="math-container">$A B = C$</span> where matrix <span class="math-container">$C$</span> is stochastic (all entries are positive and all rows add to <span class="math-container">$1$</span>) then is it necessary that both <span class="math-container">$A$</span> and <span class="math-container">$B$</span> be stochastic?</p>
VIVID
752,069
<p>By additivity of the integral over intervals, we split the integral into two as follows <span class="math-container">$$\int_0^{\infty}x^{t-1}e^{-x}dx = \underbrace{\int_0^1 x^{t-1}e^{-x}dx}_{(1)} + \underbrace{\int_1^\infty x^{t-1}e^{-x}dx}_{(2)}$$</span></p> <hr /> <ul> <li>For the first integral, we note that since <span class="math-container">$t &gt; 0$</span>, we can find a segment <span class="math-container">$[a, A] \subset (0, \infty)$</span> such that <span class="math-container">$t \in [a,A]$</span>. Then, for <span class="math-container">$t \in [a,A]$</span> and <span class="math-container">$x \in (0, 1]$</span> we have <span class="math-container">$$x^{t-1}e^{-x} = \frac{1}{x^{1-t}e^{x}} \le \frac{1}{x^{1-a}\cdot 1}$$</span> and <span class="math-container">$\int_0^1 x^{a-1}dx$</span> converges because <span class="math-container">$a &gt; 0$</span>. By the comparison test, we observe that the integral <span class="math-container">$(1)$</span> converges.</li> </ul> <hr /> <ul> <li>For the second integral we note that <span class="math-container">$$\lim_{x\to\infty}\frac{x^{t-1}e^{-x}}{\frac{1}{x^2}} = \lim_{x\to\infty}x^{t+1}e^{-x} = 0$$</span> and again by the comparison test, the integral <span class="math-container">$(2)$</span> converges.</li> </ul>
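As a numerical sanity check of the convergence (an illustration, not part of either proof): truncate the integral at a large upper limit and compare with `math.gamma`, which evaluates the same integral.

```python
import math

def gamma_quad(t, upper=60.0, n=200000):
    """Midpoint-rule approximation of the integral of x^(t-1) e^(-x)
    over (0, upper); crude, but enough to see convergence."""
    h = upper / n
    return sum(((i + 0.5) * h) ** (t - 1) * math.exp(-(i + 0.5) * h)
               for i in range(n)) * h

assert abs(gamma_quad(3.0) - 2.0) < 1e-3             # Gamma(3) = 2! = 2
assert abs(gamma_quad(2.5) - math.gamma(2.5)) < 1e-3
```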
1,597,332
<p>Given the following quadratic form:</p> <p>$$Q(x)=x_1^{2}+x_3^2+4x_1x_2-4x_1x_3 $$</p> <p>To obtain the canonical form I tried the following: </p> <p>$$Q(x)=x_1^{2}+x_3^{2}+4x_1x_2-4x_1x_3=4(x_1^{2})+(x_3^{2})-4x_1x_3-3x_1x_1+4x_1x_2-(4/3)x_2x_2+(4/3)x_2x_2=(2x_1-x_3)^2-... $$</p> <p>There I stopped because I remembered that I shouldn't change the coefficient of $x_1^2$. </p> <p>From here I don't know how to continue. </p> <p>Thank you in advance for your understanding, and I look forward to your answer!</p>
levap
32,262
<p>Since $x_1^2$ appears in $Q(x)$, you should start by writing</p> <p>$$ Q(x) = (ax_1 + bx_2 + cx_3)^2 + \star $$</p> <p>for $a, b, c \in \mathbb{R}$ in such a way that $\star$ won't involve $x_1$ at all. Since the terms </p> <p>$$ x_1^2, 4x_1x_2, -4x_1x_3 $$</p> <p>appear in $Q$, we choose $a = 1, b = 2, c = -2$ and get</p> <p>$$ (x_1 + 2x_2 - 2x_3)^2 = (x_1 + 2x_2)^2 - 4(x_1 + 2x_2)x_3 + 4x_3^2 \\ = \color{blue}{x_1^2} + \color{blue}{4x_1x_2} + 4x_2^2 - \color{blue}{4x_1x_3} - 8x_2x_3 + 4x_3^2 $$</p> <p>and so</p> <p>$$ Q(x) = (x_1 + 2x_2 - 2x_3)^2 - 4x_2^2 + 8x_2x_3 - 4x_3^2 + x_3^2 \\ = (x_1 + 2x_2 - 2x_3)^2 -4 (\color{green}{x_2^2} - \color{green}{2x_2x_3}) - 3x_3^2. $$ </p> <p>Now we can repeat the process for the $x_2$ term. We have</p> <p>$$ (x_2 - x_3)^2 = \color{green}{x_2^2} - \color{green}{2x_2x_3} + x_3^2 $$ </p> <p>and so</p> <p>$$ Q(x) = (x_1 + 2x_2 - 2x_3)^2 - 4(x_2 - x_3)^2 + 4x_3^2 - 3x_3^2 \\ = (x_1 + 2x_2 - 2x_3)^2 - 4(x_2 - x_3)^2 + x_3^2 $$</p> <p>and we're done.</p>
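The algebra above can be confirmed mechanically (an illustrative check in plain Python, comparing the original form and the derived sum-of-squares form on an integer grid):

```python
import itertools

def Q(x1, x2, x3):
    # the original quadratic form
    return x1**2 + x3**2 + 4*x1*x2 - 4*x1*x3

def Q_canonical(x1, x2, x3):
    # the sum-of-squares form derived above
    return (x1 + 2*x2 - 2*x3)**2 - 4*(x2 - x3)**2 + x3**2

# the two expressions agree on every integer point of a small grid
assert all(Q(*p) == Q_canonical(*p)
           for p in itertools.product(range(-3, 4), repeat=3))
```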
28,348
<p>Thomson et al. provide a proof that <span class="math-container">$\lim_{n\rightarrow \infty} \sqrt[n]{n}=1$</span> in <a href="http://classicalrealanalysis.info/documents/TBB-AllChapters-Landscape.pdf#page=95" rel="noreferrer">this book (page 73)</a>. It has to do with using an inequality that relies on the binomial theorem: <a href="https://i.stack.imgur.com/n3dxw.png" rel="noreferrer"><img src="https://i.stack.imgur.com/n3dxw.png" alt="enter image description here" /></a></p> <p>I have an alternative proof that I know (from elsewhere) as follows.</p> <hr /> <p><strong>Proof</strong>.</p> <p><span class="math-container">\begin{align} \lim_{n\rightarrow \infty} \frac{ \log n}{n} = 0 \end{align}</span></p> <p>Then using this, I can instead prove: <span class="math-container">\begin{align} \lim_{n\rightarrow \infty} \sqrt[n]{n} &amp;= \lim_{n\rightarrow \infty} \exp{\frac{ \log n}{n}} \newline &amp; = \exp{0} \newline &amp; = 1 \end{align}</span></p> <hr /> <p>On the one hand, it seems like a valid proof to me. On the other hand, I know I should be careful with infinite sequences. The step I'm most unsure of is: <span class="math-container">\begin{align} \lim_{n\rightarrow \infty} \sqrt[n]{n} = \lim_{n\rightarrow \infty} \exp{\frac{ \log n}{n}} \end{align}</span></p> <p>I know such an identity would hold for bounded <span class="math-container">$n$</span> but I'm not sure I can use this identity when <span class="math-container">$n\rightarrow \infty$</span>.</p> <p><strong>Question:</strong></p> <p>If I am correct, then would there be any cases where I would be wrong? 
Specifically, given any sequence <span class="math-container">$x_n$</span>, can I always assume: <span class="math-container">\begin{align} \lim_{n\rightarrow \infty} x_n = \lim_{n\rightarrow \infty} \exp(\log x_n) \end{align}</span> Or are there sequences that invalidate that identity?</p> <hr /> <p>(Edited to expand the last question) given any sequence <span class="math-container">$x_n$</span>, can I always assume: <span class="math-container">\begin{align} \lim_{n\rightarrow \infty} x_n &amp;= \exp(\log \lim_{n\rightarrow \infty} x_n) \newline &amp;= \exp(\lim_{n\rightarrow \infty} \log x_n) \newline &amp;= \lim_{n\rightarrow \infty} \exp( \log x_n) \end{align}</span> Or are there sequences that invalidate any of the above identities?</p> <p>(Edited to repurpose this question). Please also feel free to add different proofs of <span class="math-container">$\lim_{n\rightarrow \infty} \sqrt[n]{n}=1$</span>.</p>
Aryabhata
1,102
<p>Here is one using $AM \ge GM$ to $1$ appearing $n-2$ times and $\sqrt{n}$ appearing twice.</p> <p>$$\frac{1 + 1 + \dots + 1 + \sqrt{n} + \sqrt{n}}{n} \ge n^{1/n}$$</p> <p>i.e</p> <p>$$\frac{n - 2 + 2 \sqrt{n}}{n} \ge n^{1/n}$$</p> <p>i.e.</p> <p>$$ 1 - \frac{2}{n} + \frac{2}{\sqrt{n}} \ge n^{1/n} \ge 1$$</p> <p>That the limit is $1$ follows.</p>
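The displayed inequality can be spot-checked numerically (illustration only; the check starts at $n=2$ so that "$1$ appearing $n-2$ times" makes sense):

```python
# Numeric check of the AM-GM bound: 1 - 2/n + 2/sqrt(n) >= n^(1/n) >= 1.
# A tiny tolerance covers float rounding at n = 2, where both sides equal sqrt(2).
for n in range(2, 5001):
    assert 1 - 2/n + 2/n**0.5 + 1e-12 >= n**(1/n) >= 1.0
```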
2,694,122
<blockquote> <p>Given that a function $f$ is defined as $$f(x)=1+2x+3x^2+4x^3+\cdots$$ We have to prove that $f$ is continuous on $[0,\frac{1}{8}]$ and evaluate $$\int_{0}^{\frac{1}{8}}f(x)dx$$ </p> </blockquote> <p>I am not sure about a particular area. I have proved $f$ is continuous on $[0,\frac{1}{8}]$. But the evaluation of the integral gives </p> <p>$$\int_{0}^{\frac{1}{8}}f(x)dx=\sum_{n=1}^{\infty}\frac{1}{8^{n}}=\frac{8}{7}\left(1-\left(\frac{1}{8}\right)^n\right)= \frac{8}{7}\left(1-\frac{1}{8^n}\right)$$ </p> <p>I think I am wrong somewhere in calculations. Please guide me where I am wrong. Any help or suggestion will be appreciated.</p>
José Carlos Santos
446,262
<p>The first mistake that I was able to detect was that your series should start at $1$, not at $0$:\begin{align}\int_0^{\frac18}f(x)\,\mathrm dx&amp;=\sum_{n=1}^\infty\left(\frac18\right)^n\\&amp;=\frac{\frac18}{1-\frac18}\\&amp;=\frac17.\end{align}Another mistake lies in the fact that the answer should be a number; there should be no $n$ in it.</p>
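A quick numeric confirmation of the corrected value (illustration only):

```python
# partial sums of sum_{n>=1} (1/8)^n converge to (1/8)/(1 - 1/8) = 1/7
s = sum((1/8)**n for n in range(1, 60))
assert abs(s - 1/7) < 1e-12
```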
2,268,225
<p>I want to investigate the convergence behavior of $$\int_{0}^{\infty} \cos(x^r)\, dx \hspace{5mm} \textrm{and} \hspace{5mm} \int_{\pi}^{\infty} \left(\frac{\cos x}{\log x}\right)\arctan\lfloor x\rfloor dx.$$ My theoretical tools are Abel's test and Dirichlet's test: Say I have an integral of the form $$\int_{a}^{b}f\cdot g \hspace{1.5mm} dx$$ with improperness (vertical or horizontal asymptote) at $b$.</p> <p>Abel's test guarantees convergence if $g$ is monotone and bounded on $(a,b)$, and $\int_{a}^{b}f $ converges. Dirichlet's test guarantees convergence if $g$ is monotone on $(a,b)$ and $\displaystyle\lim_{x\to b} g(x) = 0$, and $\displaystyle\lim_{\beta \to b}$ $\int_{a}^{\beta}f $ is bounded.</p> <p>For the first integral $\displaystyle\int_{0}^{\infty} \cos(x^r)\, dx $ I'm guessing a substitution $t = x^r $ will give me an expression of the form $f\cdot g$ with $\cos(t)$ as my $f$. For the second integral $\displaystyle\int_{\pi}^{\infty} \dfrac{\cos x}{\log x}\arctan\lfloor x\rfloor\, dx$, I'm (even more) clueless. Help please?</p>
egreg
62,967
<p>The series is $$ \frac{1}{4}\sum_{n\ge1}nx^{n-1} $$ where $x=1/2$. Does this remind you of something?</p>
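Following the hint: $\sum_{n\ge1} n x^{n-1} = \frac{1}{(1-x)^2}$ (the derivative of the geometric series), so at $x=1/2$ the value is $\frac14\cdot\frac{1}{(1/2)^2}=1$. A numeric check:

```python
# truncated series vs. the closed form (1/4) / (1 - x)^2 at x = 1/2
x = 0.5
s = 0.25 * sum(n * x**(n - 1) for n in range(1, 200))
assert abs(s - 1.0) < 1e-12
assert abs(s - 0.25 / (1 - x)**2) < 1e-12
```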
602,789
<p>Find the fundamental group of the surface of a cube with the interior of all edges removed (i.e. the space which consists of the vertices and the interiors of the faces of the cube). </p> <p>Can I deformation retract the interior of each face onto a cross meeting in the middle of the face, made up of two diagonal lines which connect each vertex to the vertex opposite it? I can deform this into a sphere which has a lattice on it with 12 different loops. So the fundamental group is $\mathbb{Z}^{12}$? </p> <p>Thanks! </p>
rewritten
43,219
<p>It is probably the free product $\mathbb{Z}^{\ast11} = \mathbb{Z}\ast\dots\ast\mathbb{Z}$, as the loops do not commute.</p> <p>An easier way is that you can "continuously" (not exactly an equivalence but in this case it doesn't matter) deform your set into a cube with $12$ points removed, which is topologically equivalent to a disk with $11$ points removed.</p>
4,620,379
<blockquote> <p>Let <span class="math-container">$f: [0,∞)→[0,∞)$</span> be a differentiable function, such that <span class="math-container">$ \lim\limits_{x\to\infty}\big(f(x) + f'(x)\big) = 5$</span>. Prove that <span class="math-container">$f$</span> is uniformly continuous.</p> </blockquote> <p>Now, both my tutor and my friends have proved this with a &quot;trick&quot; - multiplying and dividing by <span class="math-container">$ e^x $</span>, and then using L'Hôpital for infinity over infinity.</p> <p>Is there a different way of proving this?</p>
Hagen von Eitzen
39,174
<p>Consider <span class="math-container">$f(x)-5$</span> instead of <span class="math-container">$f$</span> so that the given limit is <span class="math-container">$0$</span> instead of <span class="math-container">$5$</span>, just to simplify things. By the limit, there exists <span class="math-container">$x_0$</span> such that <span class="math-container">$|f(x)+f^\prime(x)|&lt;1$</span> for all <span class="math-container">$x\ge x_0$</span>. Suppose <span class="math-container">$f$</span> is not bounded from above. Then there is a sequence <span class="math-container">$x_0&lt;x_1&lt;x_2&lt;\cdots \to \infty$</span> with <span class="math-container">$1&lt;f(x_1)&lt;f(x_2)&lt;\cdots \to \infty$</span>. By the MVT, there must be intermediate <span class="math-container">$y_n$</span> with <span class="math-container">$x_n&lt;y_n&lt;x_{n+1}$</span> and <span class="math-container">$f^\prime(y_n)&gt;0$</span> and hence <span class="math-container">$f(y_n)&lt;1$</span>. On <span class="math-container">$[y_{n-1},y_n]$</span>, <span class="math-container">$f$</span> attains its maximum not at the boundary because already <span class="math-container">$f(x_n)$</span> is bigger. So the max is attained at some interior point <span class="math-container">$z_n$</span>. Then <span class="math-container">$f^\prime(z_n)=0$</span>, hence <span class="math-container">$f(x_n)\le f(z_n)\le1$</span>, contradiction.</p> <p>We conclude that <span class="math-container">$f$</span> is bounded from above. By the same argument, <span class="math-container">$f$</span> is bounded from below. Then <span class="math-container">$f^\prime$</span> is also bounded on <span class="math-container">$[x_0,\infty)$</span>, hence <span class="math-container">$f$</span> is Lipschitz there and consequently uniformly continuous on <span class="math-container">$[0,\infty)$</span>.</p>
4,620,379
<blockquote> <p>Let <span class="math-container">$f: [0,∞)→[0,∞)$</span> be a differentiable function, such that <span class="math-container">$ \lim\limits_{x\to\infty}\big(f(x) + f'(x)\big) = 5$</span>. Prove that <span class="math-container">$f$</span> is uniformly continuous.</p> </blockquote> <p>Now, both my tutor and my friends have proved this with a &quot;trick&quot; - multiplying and dividing by <span class="math-container">$ e^x $</span>, and then using L'Hôpital for infinity over infinity.</p> <p>Is there a different way of proving this?</p>
Thomas Andrews
7,933
<p>We will prove if <span class="math-container">$f(x)+f'(x)\to 0$</span> then <span class="math-container">$f(x)\to 0.$</span> We can then apply this to <span class="math-container">$f(x)-5$</span> in the original problem.</p> <p>This is a little complicated, and Hagen's answer to show <span class="math-container">$f$</span> is bounded is quite a bit easier, and is enough for uniform continuity.</p> <p>We will assume that it is not true that <span class="math-container">$f(x)\to 0.$</span></p> <p>So, there is an <span class="math-container">$\epsilon&gt;0$</span> such that the set <span class="math-container">$X=\{x\mid |f(x)|\geq \epsilon\}$</span> is unbounded.</p> <p>Let <span class="math-container">$U=\{x\mid |f(x)|&gt;\frac\epsilon2\}=f^{-1}((-\infty,-\epsilon/2)\cup(\epsilon/2,+\infty)).$</span> Then <span class="math-container">$U$</span> is open, since <span class="math-container">$f$</span> is continuous, and unbounded, since <span class="math-container">$X\subset U.$</span></p> <p>Now, since <span class="math-container">$f(x)+f'(x)\to 0,$</span> there is an <span class="math-container">$N$</span> such that, for <span class="math-container">$x&gt;N,$</span> <span class="math-container">$f(x)+f'(x)\in(-\epsilon/4,\epsilon/4).$</span> For any <span class="math-container">$x\in U$</span> with <span class="math-container">$x&gt;N,$</span> we get <span class="math-container">$|f'(x)|&gt;\frac\epsilon 4,$</span> and <span class="math-container">$f'(x)$</span> has the opposite sign of <span class="math-container">$f(x).$</span></p> <p>Let <span class="math-container">$U_0=U\cap(N,\infty).$</span> Then <span class="math-container">$U_0$</span> is still unbounded and open, and for every <span class="math-container">$x\in U_0,$</span> <span class="math-container">$|f(x)|&gt;\epsilon/2,$</span> <span class="math-container">$|f'(x)|&gt;\epsilon/4$</span> and <span class="math-container">$f(x),f'(x)$</span> have opposite signs.</p> <p>Now, if <span
class="math-container">$U_0$</span> contains some interval <span class="math-container">$(x_0,\infty).$</span> Then, either,</p> <ul> <li>For all <span class="math-container">$x&gt;x_0,$</span> <span class="math-container">$f(x)&gt;\epsilon/2,$</span> or</li> <li>for all <span class="math-container">$x&gt;x_0,$</span> <span class="math-container">$f(x)&lt;-\epsilon/2.$</span></li> </ul> <p>For <span class="math-container">$x&gt;x_0,$</span> by the mean value theorem, <span class="math-container">$$f(x)=f(x_0)+(x-x_0)f'(c)$$</span> but we know <span class="math-container">$|f'(c)|&gt;\frac\epsilon4$</span> and has the opposite sign of <span class="math-container">$f(x_0),$</span> which would mean we'd eventually get to an <span class="math-container">$x&gt;x_0$</span> such that <span class="math-container">$f(x)$</span> is the opposite sign, and thus, by the intermediate value theorem, an <span class="math-container">$x&gt;x_0$</span> where <span class="math-container">$f(x)=0,$</span> reaching a contradiction.</p> <p>So <span class="math-container">$U_0$</span> is an unbounded disjoint union of bounded open intervals <span class="math-container">$(a_i,b_i).$</span> Pick any <span class="math-container">$a_i&gt;N.$</span> The sign of <span class="math-container">$f(x)$</span> is constant in this interval. 
But then <span class="math-container">$f(a_i)=f(b_i)=\pm\frac\epsilon2,$</span> or else one of <span class="math-container">$a_i, b_i$</span> would be in <span class="math-container">$U_0.$</span> By the mean value theorem, there is a <span class="math-container">$c\in(a_i,b_i)\subset U_0$</span> such that <span class="math-container">$f'(c)=0.$</span> But <span class="math-container">$f'(c)\neq 0$</span> in <span class="math-container">$U_0.$</span></p> <p>Contradiction.</p> <p>So <span class="math-container">$f(x)\to 0$</span> and <span class="math-container">$f'(x)\to0.$</span></p> <hr /> <p>A direct proof.</p> <p>Given <span class="math-container">$\epsilon&gt;0$</span> there is some <span class="math-container">$N$</span> such that, for all <span class="math-container">$x&gt;N,$</span> <span class="math-container">$|f(x)+f'(x)|&lt;\frac\epsilon4.$</span></p> <p>Is it possible for <span class="math-container">$f(x)&gt; \epsilon/2$</span> for all <span class="math-container">$x&gt;N?$</span> Then <span class="math-container">$f'(x)&lt;-\epsilon/4$</span> and for all <span class="math-container">$x&gt;N$</span> <span class="math-container">$$f(x)=f(N)+(x-N)f'(c)\leq f(N)-(x-N)\frac\epsilon4\to-\infty.$$</span></p> <p>Similarly, it is not possible for <span class="math-container">$f(x)&lt;-\epsilon/2$</span> for all <span class="math-container">$x&gt;N.$</span></p> <p>So there must be and <span class="math-container">$x_0&gt;N$</span> such that <span class="math-container">$|f(x_0)|\leq \epsilon/2.$</span></p> <p>Now, if <span class="math-container">$x&gt;x_0,$</span> and <span class="math-container">$|f(x)|&gt;\epsilon/2.$</span> Find the largest open interval <span class="math-container">$(a,b)$</span> containing <span class="math-container">$x$</span> such that <span class="math-container">$|f|((a,b))\in(\epsilon/2,\infty).$</span> Then since <span class="math-container">$x_0$</span> can't be in the interval, <span class="math-container">$a\geq x_0$</span> and 
<span class="math-container">$b&lt;+\infty$</span> because of what we just proved.</p> <p>We easily see that <span class="math-container">$f(a)=f(b)=\pm \frac\epsilon2,$</span> or else we could pick a larger interval including one of the <span class="math-container">$a,b$</span> Thus there must be a <span class="math-container">$c\in(a,b)$</span> such that <span class="math-container">$f'(c)=0.$</span> But then <span class="math-container">$|f(c)|=|f(c)+f'(c)|&lt;\epsilon/4$</span> since <span class="math-container">$c&gt;a\geq x_0&gt;N,$</span> contradicting our assumption about the elements of the interval.</p> <p>So for all <span class="math-container">$x&gt;x_0,$</span> <span class="math-container">$$|f(x)|\leq\epsilon/2&lt;\epsilon.$$</span></p> <p>I suppose this is still an indirect proof. I just pulled the indirect assumption inside the proof, rather than assuming the limit doesn't exist.</p>
324,307
<p><a href="http://en.wikipedia.org/wiki/Tucker%27s_lemma" rel="nofollow">Tucker's Lemma is here.</a></p> <p>Let's stay within the 2D case for now. A standard proof is constructive:</p> <p>(1) Pick an arced edge on the boundary of the circle. Note its labeling (for example, (1, 2)).</p> <p>(2) Walk into the circle through your chosen edge into a simplex of the triangle.</p> <p>(3) If the simplex carries three different labels, then two of them most be antipodal, so Tucker's Lemma is satisfied.</p> <p>(4) If instead the simplex is entirely labelled with 1's and 2's, then walk through the (1, 2) edge that you didn't enter from, and repeat step (3) or (4) on your new simplex.</p> <p>(5) Eventually, you will either dead-end in a tri-labeled simplex in the middle of the triangle, or you will walk out of the circle through another arced edge. If you leave the circle, then you've eliminated two edges; pick a new edge and try again.</p> <p>(6) Since there are an even number of vertices on the boundary of the circle, there are an odd number of edges on the boundary of the circle. So eventually, one of your paths must dead-end inside the circle.</p> <p>This proof does <em>not</em> rely on the fact that the triangulation of the circle is antipodal symmetric; instead, it only relies on the fact that there are an even number of vertices on the boundary of the circle. So why do we require antipodal symmetry, when the weaker "even number of edges" condition implies the same conclusion?</p>
2'5 9'2
11,123
<p>Identify $x\mapsto sx+t$ with $\begin{bmatrix}s&amp;t\\0&amp;1\end{bmatrix}$, and you will be able to prove that composition of the affine functions corresponds to multiplication of the corresponding matrices. </p> <p>Now the given information that $fg=gf$ corresponds to $\begin{bmatrix}f\end{bmatrix}\begin{bmatrix}g\end{bmatrix}=\begin{bmatrix}g\end{bmatrix}\begin{bmatrix}f\end{bmatrix}$. Note that under this identification, $\begin{bmatrix}k\end{bmatrix}$ is a scalar matrix if and only if $k$ is the identity function. $f$ is not the identity function, and if either $g$ or $h$ are, the result is trivial.</p> <p>If both $\begin{bmatrix}f\end{bmatrix}$ and $\begin{bmatrix}g\end{bmatrix}$ are diagonalizable, then since they commute, linear algebra tells us they are simultaneously diagonalizable. Since we can assume that neither matrix is a scalar matrix, then they have the same eigenspaces (each having two eigenspaces of dimension $1$ with distinct eigenvalues.) Ditto for $\begin{bmatrix}f\end{bmatrix}$ and $\begin{bmatrix}h\end{bmatrix}$. Therefore, assuming all three matrices are diagonalizable, $\begin{bmatrix}g\end{bmatrix}$ and $\begin{bmatrix}h\end{bmatrix}$ have the same eigenspaces, are therefore simultaneously diagonalizable, and therefore commute.</p> <p>What if any one of these is not diagonalizable? In the case of $\begin{bmatrix}f\end{bmatrix}$, that would mean $a=1$ and $b\neq0$. So $f$ is a nonzero translation. Such a move only commutes with other translations. (If that is not geometrically clear, then you can deduce it from your formula with $a=1$ and $b\neq0$.) So $g$ and $h$ are also translations, and thus commute with each other. This argument can be spun around to apply if it's one of the other two functions ($g$ or $h$) that has a nondiagonalizable matrix.</p>
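The identification in the first paragraph can be spelled out concretely (a small sketch in plain Python; pairs `(s, t)` stand for the affine map $x\mapsto sx+t$, with the matrix layout given in the answer):

```python
def mat(s, t):
    # the 2x2 matrix [[s, t], [0, 1]] identified with x -> s*x + t
    return ((s, t), (0, 1))

def matmul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def compose(f, g):
    # (f o g)(x) = s1*(s2*x + t2) + t1 for f = (s1, t1), g = (s2, t2)
    s1, t1 = f
    s2, t2 = g
    return (s1 * s2, s1 * t2 + t1)

f, g = (2, 3), (5, -1)
fg = compose(f, g)                     # x -> 2*(5x - 1) + 3 = 10x + 1
assert fg == (10, 1)
assert matmul(mat(*f), mat(*g)) == mat(*fg)
```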
718,901
<p>I have for a very long time been thinking about certain notations in differential calculus, for instance it might sometimes be sufficient to carry out a variable substitution in integrals. For example we put $t=x^2$ and then write $\frac {dt}{dx} = 2x \Leftrightarrow dt = (2x)dx$</p> <p>I don't quite understand the last step; for me, $\frac {dt}{dx}$ is just a notation for the derivative of the function $t$. What are we actually doing when we multiply each side by $dx$? I imagine that $\frac {dt}{dx}$ is just a limit when the difference in $x$ and $y$ tends to zero. But when we separate them then $dx=dt=0$</p> <p>Does anyone understand my concerns? If not, I will try to explain better.</p>
DeepSea
101,504
<p>$dt$ is the linear functional in terms of $dx$, and it is defined as: $dt = t'(x)\,dx = 2x\,dx$.</p>
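A numeric illustration (not a justification) that the bookkeeping is consistent: with $t = x^2$ and $dt = 2x\,dx$, the integral $\int_0^1 \cos(x^2)\,2x\,dx$ should equal $\int_0^1 \cos t\,dt = \sin 1$.

```python
import math

def midpoint(f, a, b, n=100000):
    """Simple midpoint-rule quadrature."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# t = x^2, dt = 2x dx: the two integrals below should agree
lhs = midpoint(lambda x: math.cos(x**2) * 2 * x, 0.0, 1.0)
rhs = midpoint(math.cos, 0.0, 1.0)
assert abs(lhs - rhs) < 1e-8
assert abs(lhs - math.sin(1.0)) < 1e-8
```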
54,088
<p>I am studying KS (Kolmogorov-Sinai) entropy of order <em>q</em> and it can be defined as</p> <p>$$ h_q = \sup_P \left(\lim_{m\to\infty}\left(\frac 1 m H_q(m,\varepsilon)\right)\right) $$</p> <p>Why is it defined as a supremum over all possible partitions <em>P</em> and not a maximum? </p> <p>When do people use supremum and when maximum?</p>
Tom Au
12,506
<p>A maximum is the largest number WITHIN a set. A sup is a number that BOUNDS a set. A sup may or may not be part of the set itself (0 is not part of the set of negative numbers, but it is a sup because it is the least upper bound). If the sup IS part of the set, it is also the max.</p>
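To make the distinction concrete (an illustrative sketch, not from the answer itself): the set $\{-1/n : n = 1, 2, \dots\}$ is bounded above by $0$, so its supremum is $0$, yet no element attains $0$, so it has no maximum.

```python
# Finite truncations of {-1/n : n = 1, 2, ...}: each truncation has a
# maximum (its largest element), but those maxima creep toward 0 without
# ever reaching it, so the full set has sup 0 and no max.
for size in (10, 100, 1000):
    xs = [-1 / n for n in range(1, size + 1)]
    assert max(xs) == -1 / size
    assert all(x < 0 for x in xs)    # 0 bounds the set but is not a member
```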
657,992
<p>I understand how to explain but can't put it down on paper.</p> <p>$\displaystyle \frac{a}{b}$ and $\displaystyle \frac{c}{d}$ are rational numbers. For there to be any rational between two numbers I assume $\displaystyle \frac{a}{b} &lt; \frac{c}{d}$.</p> <p>I let $\displaystyle x = \frac{a}{b}$ and $\displaystyle y = \frac{c}{d}$ so $x &lt; y$. the number in the middle of $x$ and $y$ is definitely a rational number so $\displaystyle x &lt; \frac{x+y}{2} &lt; y$. I know that there is another middle between $x$ and $\displaystyle \frac{x+y}{2}$ and it keeps on going. How would I write it as a proof?</p>
William Ballinger
79,615
<p>One natural way to complete your proof is by contradiction.</p> <p>You noticed that, between any two rational numbers, there is a third.</p> <p>Now, pick any two rational numbers $x &lt; y$, and assume that there are a finite number of (say, only $n$) rational numbers between them. Call these numbers $a_0 &lt; a_1 &lt; \cdots &lt; a_{n-1}$. Now, look between $x$ and $a_0$. We know there is some rational number, call it $z$ between $x$ and $a_0$, which then must also be between $x$ and $y$. But this cannot be true, because $z$ is less than $a_0$, so cannot equal any of the $a_i$, which were assumed to be every rational between $x$ and $y$.</p>
657,992
<p>I understand how to explain but can't put it down on paper.</p> <p>$\displaystyle \frac{a}{b}$ and $\displaystyle \frac{c}{d}$ are rational numbers. For there to be any rational between two numbers I assume $\displaystyle \frac{a}{b} &lt; \frac{c}{d}$.</p> <p>I let $\displaystyle x = \frac{a}{b}$ and $\displaystyle y = \frac{c}{d}$ so $x &lt; y$. the number in the middle of $x$ and $y$ is definitely a rational number so $\displaystyle x &lt; \frac{x+y}{2} &lt; y$. I know that there is another middle between $x$ and $\displaystyle \frac{x+y}{2}$ and it keeps on going. How would I write it as a proof?</p>
Cameron Buie
28,900
<p>You've got the idea. Let $a_1=\frac{x+y}2,$ and $a_{n+1}=\frac{x+a_n}2.$ A quick induction proof will show that each $a_n$ is rational, that $x&lt;a_1&lt;y$ and that $x&lt;a_{n+1}&lt;a_n$ for all $n.$ Thus, you have explicitly determined infinitely-many rational numbers between $x$ and $y.$</p> <p>More explicitly, you can show by induction that $$a_n=x+\frac{y-x}{2^n}$$ for all $n,$ if you want a closed (non-recursive) form.</p>
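The induction can be machine-checked with exact rational arithmetic (an illustrative sketch using Python's `fractions`):

```python
from fractions import Fraction

# recursion a_1 = (x+y)/2, a_{n+1} = (x+a_n)/2 vs. closed form x + (y-x)/2^n
x, y = Fraction(1, 3), Fraction(2, 3)
a = (x + y) / 2                        # a_1
for n in range(1, 25):
    assert a == x + (y - x) / 2**n     # closed form holds
    assert x < a < y                   # each a_n lies strictly between x and y
    a = (x + a) / 2                    # a_{n+1}
```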
657,992
<p>I understand how to explain but can't put it down on paper.</p> <p>$\displaystyle \frac{a}{b}$ and $\displaystyle \frac{c}{d}$ are rational numbers. For there to be any rational between two numbers I assume $\displaystyle \frac{a}{b} &lt; \frac{c}{d}$.</p> <p>I let $\displaystyle x = \frac{a}{b}$ and $\displaystyle y = \frac{c}{d}$ so $x &lt; y$. the number in the middle of $x$ and $y$ is definitely a rational number so $\displaystyle x &lt; \frac{x+y}{2} &lt; y$. I know that there is another middle between $x$ and $\displaystyle \frac{x+y}{2}$ and it keeps on going. How would I write it as a proof?</p>
CopyPasteIt
432,081
<p>Slight adaptation of Steven Stadnicki's linear method (see his comment).</p> <p>Let $p$ and $q$ be any two rational numbers with $p \lt q$. Now<br></p> <p>$\tag 1 p + 1/n \lt q \text{ iff } 1/n \lt q - p \text{ iff } n \gt \frac{1}{q-p}$.</p> <p>By the Archimedean property, we can find an $n_0$ so that (1) is true, and of course it will also be true for all integers greater than $n_0$.</p>
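The Archimedean step can be sketched in Python with exact rationals (`between` is an illustrative helper name, not from the answer):

```python
from fractions import Fraction
from math import floor

def between(p, q):
    """A rational strictly between p < q: take p + 1/n with n > 1/(q - p),
    following the Archimedean argument above."""
    n = floor(1 / (q - p)) + 1
    return p + Fraction(1, n)

p, q = Fraction(1, 3), Fraction(1, 2)
m = between(p, q)
assert p < m < q
```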
60,606
<p>Ok, this will probably be a silly question, but I can't get my head around it. I'm making a simple module that, given a list and two integers, will swap the position of the elements in the list.(probably there's a function that does it, but I'm interested in doing it myself) This is what I have so far:</p> <pre><code>Exchange[list_, i_, j_] := Module[{temp}, temp = list[[i]]; list[[i]] = list[[j]]; list[[j]] = temp; Return[list]; ] </code></pre> <p>Then I do: <code>list1 = {1, 2, 3, 4}</code> And finally:</p> <pre><code>Exchange[list1, 1, 2] </code></pre> <p>But it returns this error: <code>Set::setps: "{1,2,3,4} in the part assignment is not a symbol."</code> And the list is unchanged. I looked at the help but I don't understand how I'm supposed to make a valid assignment. I know that what is wrong are the lines 2 and 3 of the module... Any help is greatly appreciated</p>
RunnyKine
5,709
<p>The problem here is that in <em>Mathematica</em>, parameters of a function are not local variables. So trying to modify a parameter of a function inside its body will lead to an error. The reason is that function arguments are evaluated when the function is called, so that it is actually the result of this evaluation that's textually substituted for the function parameters within the body. To fix this (without using advanced techniques), assign that parameter to a local variable and work with that variable inside the body of the function. Here's a simple fix to your function:</p> <pre><code>exchange[list_, i_, j_] := Module[{temp}, temp = list; temp[[i]] = list[[j]]; temp[[j]] = list[[i]]; temp ] </code></pre> <p>Now:</p> <pre><code>exchange[list1, 1, 2] </code></pre> <blockquote> <p>{2, 1, 3, 4}</p> </blockquote> <p><strong>Note</strong> that I've removed <code>Return</code> in your definition as it's not needed here. Also note that the original list remains intact and is not changed.</p>
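For comparison only (my own Python analogue, not part of the Mathematica answer): the same copy-then-swap idea, with Python's 0-based indexing, so the call `Exchange[list1, 1, 2]` becomes `exchange(list1, 0, 1)`:

```python
def exchange(lst, i, j):
    """Return a copy of lst with the elements at i and j swapped,
    leaving the caller's list untouched (like `temp = list` above)."""
    temp = list(lst)
    temp[i], temp[j] = lst[j], lst[i]
    return temp

list1 = [1, 2, 3, 4]
assert exchange(list1, 0, 1) == [2, 1, 3, 4]
assert list1 == [1, 2, 3, 4]           # original unchanged
```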
60,606
<p>Ok, this will probably be a silly question, but I can't get my head around it. I'm making a simple module that, given a list and two integers, will swap the position of the elements in the list.(probably there's a function that does it, but I'm interested in doing it myself) This is what I have so far:</p> <pre><code>Exchange[list_, i_, j_] := Module[{temp}, temp = list[[i]]; list[[i]] = list[[j]]; list[[j]] = temp; Return[list]; ] </code></pre> <p>Then I do: <code>list1 = {1, 2, 3, 4}</code> And finally:</p> <pre><code>Exchange[list1, 1, 2] </code></pre> <p>But it returns this error: <code>Set::setps: "{1,2,3,4} in the part assignment is not a symbol."</code> And the list is unchanged. I looked at the help but I don't understand how I'm supposed to make a valid assignment. I know that what is wrong are the lines 2 and 3 of the module... Any help is greatly appreciated</p>
eldo
14,254
<p>If you want to change the original list "directly" (inplace) you can do it with <code>HoldFirst</code>:</p> <pre><code>SetAttributes[Exchange, HoldFirst]; Exchange[list_, a_, b_] := list[[{a, b}]] = list[[{b, a}]] list1 = {1, 2, 3, 4}; Exchange[list1, 3, 1]; list1 </code></pre> <blockquote> <p>{3, 1, 2, 4}</p> </blockquote> <p><strong>Multiple swaps:</strong></p> <pre><code>list1 = {1, 2, 3, 4}; Exchange[list1, ##] &amp; @@@ {{4, 1}, {2, 3}} // Flatten; list1 </code></pre> <blockquote> <p>{4, 3, 2, 1}</p> </blockquote>
1,775,378
<p>So this question is inspired by the following thread: <a href="https://forums.factorio.com/viewtopic.php?f=5&amp;t=25008">https://forums.factorio.com/viewtopic.php?f=5&amp;t=25008</a></p> <p>In it, the poster is examining an $8$-belt balancer (more on that to come) which he shows fails to satisfy a desirable property, which he called universally throughput unlimited.</p> <p>So what is an $n$-belt balancer? It is a configuration of belts (which move items around), and splitters (which take two belts in and balance their items on the two belts on the output side), which will balance the input of all $n$ input belts across all $n$ output belts. They are frequently used in large factories to move large amounts of items to a variety of different areas in a manner where no one belt worth of items getting backlogged (more items coming at it than it can use) results in other projects not getting full throughput (or at least as much as they can use).</p> <p>The desired property called <em>universally throughput unlimited</em> is the following: Suppose only $k$ of the $n$ input belts are getting input (assume full input; aka, input belts are assumed saturated), and that all but $k$ of the output belts are backlogged and have no throughput (already full of items and nothing is moving on those belts). Then the full input on those $k$ input belts can be provided across the $k$ output belts (which have the same maximum throughput, hence no one output belt can handle more than one input belt's worth of throughput). This basically means that the $n$-belt balancer is never a bottleneck no matter the current input or output limitations (which lanes are getting input/available for output).</p> <p>The question I have is the following: Is it always possible to create an $n$-belt balancer satisfying the universally throughput unlimited condition for any $n$? If not, for which $n$'s is it possible?
(clearly, $n=2$ works because of how splitters behave)</p> <p>I have some ideas on how to approach this problem, but am nowhere near having it solved. The first idea is about how to represent the problem: We can represent the input belts and output belts as vertices of a directed graph. The inputs being sources (in-degree=0) and the outputs being sinks (out-degree=0). The balancer is the input and output vertices together with a set of <em>intermediate</em> vertices which represent splitters which have $1\leq$in-degree,out-degree$\leq$2 (one or two directed edges point to them and one or two coming from them) and the associated directed edges. Looking at the problem this way, it is easy to see that a <em>necessary</em> condition is that input on any belt can reach any output belt (it is necessary because if not, then consider the case of all input on one input belt and all but one output belt backlogged with 0 throughput; in such a case, if you can't route that input belt's input to the output belt you won't get any throughput), but this condition is not sufficient (multiple examples that satisfy this condition have been shown both theoretically and experimentally to fail to have the desired universally throughput unlimited property).</p> <p>An important thing to note is that belts can be routed <em>under</em> other belts via underground belts, hence planarity of the above described graph is not necessary. The fact that splitters have some very specific behaviors is important to this problem: They will always try to balance outputs provided there is no backlog, hence, in a no backlog scenario the output on each belt leaving a splitter is half of the <em>total</em> input on both of its input belts. If however, one of the output belts is backlogged with no throughput, then all of the throughput will be merged onto the 'free' belt <em>up to</em> its throughput limit.
If more than one belt worth of throughput is coming into a splitter in this case, then <em>both</em> input belts will start to bottleneck (each belt's effective throughput will be half of the maximum because that's how much of the saturated output belt is coming from the given input belt). Sometimes a backlog is only a <em>reduction</em> in throughput (due to bottlenecking down the line somewhere); in such a case, a splitter will still split input equally up to the reduced throughput of the lowest-throughput belt; after that, <em>all</em> remaining throughput is thrown at the belt with additional capacity until that too is saturated, and if there is any more input coming at the given splitter then <em>both</em> of its input belts will start to backlog.</p> <p>This backlog phenomenon can result in some very subtle behaviors, which makes simply assigning weights to the directed edges in the above described graph (constrained to a value in $[0,1]$ where $1$ is saturated and $0$ is no throughput) inadequate to describe the problem. For instance, a splitter causing a backlog with some throughput, but not enough to avoid backlog, can lead to a reduction in throughput for <em>another</em> splitter's output belt, shifting more of its input onto the other output belt (which might cause a splitter further down that belt to suddenly become a bottleneck and backlog, etc.)</p> <p>My suspicion from experimenting a tad as well as some theoretical work looking at how splitters are dividing inputs leads me to conjecture that it is not possible for all $n$, and that the most likely candidates are powers of $2$. Even then, for powers higher than $1$ it still might be impossible because of odd #s of belts having input needed to get to the same number of output belts (and if balancing odd #s of belts isn't possible, then the universally throughput unlimited condition might not be satisfiable because of these cases).</p>
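To make the splitter semantics above precise enough to experiment with, here is one way to model a single splitter in steady state as plain Python. The function name and the rule that overflow spills to the freer output belt are my reading of the behavior described above, not game code; the exact in-game tie-breaking may differ.

```python
def splitter(in1, in2, cap1, cap2):
    """Steady-state model of one splitter.

    in1, in2: offered throughput on the two input belts (0..1 each).
    cap1, cap2: remaining capacity on the two output belts (0..1 each).
    Returns (out1, out2, backlog), where backlog is the throughput
    that backs up onto the inputs.
    """
    total = in1 + in2
    out1 = min(total / 2.0, cap1)        # try an even split first
    out2 = min(total / 2.0, cap2)
    spill = total - out1 - out2          # throughput not yet placed
    out1 = min(out1 + spill, cap1)       # overflow goes to the freer belt
    spill = total - out1 - out2
    out2 = min(out2 + spill, cap2)
    backlog = total - out1 - out2        # whatever still doesn't fit
    return out1, out2, backlog

# One saturated input, both outputs free: balanced halves, no backlog.
print(splitter(1, 0, 1, 1))   # (0.5, 0.5, 0.0)
```

With one output fully blocked and both inputs saturated, the model reproduces the "both inputs bottleneck to half" behavior: `splitter(1, 1, 1, 0)` gives one belt of output and one belt of backlog.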
hintss
172,700
<p>This isn't quite a math-y answer, but it sounds like you're reinventing the <a href="https://en.wikipedia.org/wiki/Nonblocking_minimal_spanning_switch" rel="noreferrer">nonblocking minimal spanning switch</a> and <a href="https://en.wikipedia.org/wiki/Clos_network" rel="noreferrer">Clos networks</a>. In this case, a splitter is essentially a 2x2 crossbar switch, and you're using them to build bigger switches.</p> <p>As a simple example, if you have an NxN switch (in this case, a 2x2 splitter), you can use 3N of them (3*2=6) to build a switch that is N<sup>2</sup>xN<sup>2</sup> (and thus, the popular 4x4 balancer design that uses 6 splitters).</p> <p>If you wanted a 16x16, you could then take 12 of those 4x4 balancers, have 4 on the input side, 4 on the output side, and 4 in the middle, with every switch connected to every switch in the next stage with one belt. You could then repeat this process to get a 256x256, etc.</p> <p>After that, I'm not entirely sure about the math, but I <em>think</em> you'd be able to, e.g., cut a design in half to get half the throughput (e.g., 6x 4x4 balancers to get an 8x8). You could then get things that aren't powers of 2 by just not connecting some of the inputs/outputs.</p>
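The staged construction is easy to sanity-check in the no-backlog case, where a splitter simply averages its two inputs. `ideal_splitter` and the wiring below are an illustrative sketch of a two-stage 4x4 (ignoring backlog entirely), not a blueprint of any specific in-game design.

```python
def ideal_splitter(a, b):
    # With no backlog, each output carries half of the combined input.
    m = (a + b) / 2.0
    return m, m

def balancer_4x4(i0, i1, i2, i3):
    # Stage 1: pair up the inputs.
    a0, a1 = ideal_splitter(i0, i1)
    b0, b1 = ideal_splitter(i2, i3)
    # Stage 2: cross-connect so each splitter sees one belt from each pair.
    o0, o1 = ideal_splitter(a0, b0)
    o2, o3 = ideal_splitter(a1, b1)
    return [o0, o1, o2, o3]

# A single saturated input is spread evenly over all four outputs.
print(balancer_4x4(1, 0, 0, 0))   # [0.25, 0.25, 0.25, 0.25]
```

Each output ends up as the average of all four inputs, which is exactly the "balanced" property; the throughput-unlimited question is about what happens once output capacities enter the picture.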
3,426,316
<blockquote> <p>If <span class="math-container">$f(x)\leq a$</span> then can we say <span class="math-container">$$\int f(x) dx\leq a\int dx?$$</span></p> </blockquote> <p>I want to know if this is true or not and can we prove the answer. if it's right can you provide a good simple explanation? thanks in advance.</p>
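For a definite integral over a common interval the answer is yes; this is monotonicity of the integral: $f(x)\leq a$ on $[l,u]$ gives $\int_l^u f(x)\,dx \leq a\int_l^u dx = a(u-l)$. A quick numerical illustration via Riemann sums (the choice $\sin x \leq 1$ on $[0,\pi]$ is arbitrary, just for demonstration):

```python
import math

def riemann(f, lo, hi, n=10000):
    """Midpoint Riemann sum approximation of the integral of f on [lo, hi]."""
    h = (hi - lo) / n
    return sum(f(lo + (k + 0.5) * h) for k in range(n)) * h

a = 1.0                                   # sin(x) <= a on [0, pi]
lhs = riemann(math.sin, 0.0, math.pi)     # integral of f, approximately 2
rhs = a * (math.pi - 0.0)                 # a * (length of the interval)
assert lhs <= rhs
```

Note the statement only makes sense for definite integrals over the same interval; for antiderivatives the inequality as written has no meaning because of the arbitrary constants.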
Captain Lama
318,467
<p>You are right to say that those two definitions do not match: a <span class="math-container">$\mathfrak{g}$</span>-module where <span class="math-container">$\mathfrak{g}$</span> is seen as a Lie algebra is <em>not</em> a module where <span class="math-container">$\mathfrak{g}$</span> is seen as a nonassociative algebra.</p> <p>There is a possible reconciliation in the comments, with the fact that <span class="math-container">$\mathfrak{g}$</span>-modules are (usual) modules over the enveloping algebra. </p> <p>Here is another take: for any <span class="math-container">$F$</span>-vector space <span class="math-container">$V$</span>, <span class="math-container">$\operatorname{End}_F(V)$</span> is an associative <span class="math-container">$F$</span>-algebra. Then for any other associative algebra <span class="math-container">$A$</span>, a structure of <span class="math-container">$A$</span>-module on <span class="math-container">$V$</span> is an algebra morphism <span class="math-container">$A\to \operatorname{End}_F(V)$</span>. It also makes sense if <span class="math-container">$A$</span> is nonassociative (we can just forget that <span class="math-container">$\operatorname{End}_F(V)$</span> is associative too).</p> <p>Now <span class="math-container">$\operatorname{End}_F(V)$</span> is also a Lie algebra, for the usual Lie bracket <span class="math-container">$[x,y]=xy-yx$</span>. Usually, to avoid confusion with the associative algebra structure, we write <span class="math-container">$\mathfrak{gl}(V)$</span> for this Lie algebra.
Then for any Lie algebra <span class="math-container">$\mathfrak{g}$</span>, a structure of <span class="math-container">$\mathfrak{g}$</span>-module on <span class="math-container">$V$</span> is a Lie algebra morphism <span class="math-container">$\mathfrak{g}\to \mathfrak{gl}(V)$</span>.</p> <p>The point of this answer is to notice that this definition is in essence the same, except that we shift the focus on the structure of <span class="math-container">$\operatorname{End}_F(V)$</span>: instead of looking at it as a usual algebra, we can look at it as a Lie algebra (with a different product).</p> <p>There are ways to unify all this properly, for instance using the notion of operads, but I hope this little argument is at least convincing for you.</p>
2,843,337
<blockquote> <p>If $0 \lt x \lt \dfrac{\pi}{2}$, prove that $$x^{3/2}\sin x + \sqrt{9-x^3}\cos x \leq 3$$</p> </blockquote> <p>This question must be done without calculus. First, I tried splitting it into the intervals $(0,\pi/4)$ and $(\pi/4, \pi/2)$, hoping that, $\sin x$ was bound tightly enough on the interval that it'd be less than 3 even if $\cos x = 1$ (which doesn't work -- letting $\sin x = \dfrac{1}{\sqrt{2}}$ and $\cos x = 1$ produces a result greater than 3).</p> <p>The other thing I noticed was that inside the square root sign, we have $\sqrt{9-x^3} = \sqrt{(3-x^{3/2})(3+x^{3/2})}$, and an $x^{3/2}$ appears in the first term, but I'm not sure how useful the similarity there is.</p> <p>Advice on how to proceed?</p>
Community
-1
<p>For all $0\le x\le 3^{2/3}$ (which is $\ge\frac\pi2$) \begin{align}&amp;x^{3/2}\sin x+(9-x^3)^{1/2}\cos x=\\&amp;=\sqrt{(x^{3/2})^2+9-x^3}\left(\frac{x^{3/2}}{\sqrt{(x^{3/2})^2+9-x^3}}\sin x+\frac{(9-x^3)^{1/2}}{\sqrt{(x^{3/2})^2+9-x^3}}\cos x\right)=\\&amp;=3\left(\frac{x^{3/2}}{3}\sin x+\frac{(9-x^3)^{1/2}}{3}\cos x\right)=3\sin\left(x+\arccos\frac{x^{3/2}}3\right)\le 3\end{align}</p>
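The collapse to $3\sin\left(x+\arccos\frac{x^{3/2}}{3}\right)$ in the last step can be spot-checked numerically on a grid over $(0,\pi/2)$; this is just a sanity check of the identity, not part of the proof:

```python
import math

for i in range(1, 200):
    x = i * (math.pi / 2) / 200
    lhs = x ** 1.5 * math.sin(x) + math.sqrt(9 - x ** 3) * math.cos(x)
    rhs = 3 * math.sin(x + math.acos(x ** 1.5 / 3))
    assert abs(lhs - rhs) < 1e-9   # the two forms agree
    assert lhs <= 3                # ... and sin(...) <= 1 gives the bound
print("identity verified on the grid")
```

The identity works because $\cos\phi = \frac{x^{3/2}}{3}$ forces $\sin\phi = \frac{\sqrt{9-x^3}}{3}$, which are exactly the two normalized coefficients.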
2,843,337
<blockquote> <p>If $0 \lt x \lt \dfrac{\pi}{2}$, prove that $$x^{3/2}\sin x + \sqrt{9-x^3}\cos x \leq 3$$</p> </blockquote> <p>This question must be done without calculus. First, I tried splitting it into the intervals $(0,\pi/4)$ and $(\pi/4, \pi/2)$, hoping that, $\sin x$ was bound tightly enough on the interval that it'd be less than 3 even if $\cos x = 1$ (which doesn't work -- letting $\sin x = \dfrac{1}{\sqrt{2}}$ and $\cos x = 1$ produces a result greater than 3).</p> <p>The other thing I noticed was that inside the square root sign, we have $\sqrt{9-x^3} = \sqrt{(3-x^{3/2})(3+x^{3/2})}$, and an $x^{3/2}$ appears in the first term, but I'm not sure how useful the similarity there is.</p> <p>Advice on how to proceed?</p>
Hari Shankar
351,559
<p>By Cauchy Schwarz $$ 9 = \left(x^3+(9-x^3) \right) \left(\sin^2 x + \cos^2 x\right) \ge \left(x^{3/2}\sin x + \sqrt{9-x^3}\cos x\right)^2$$ and the result follows</p>
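A quick numerical check (my addition) that the squared expression never exceeds $9$, and that the bound $3$ is approached, but never attained, as $x\to 0^+$:

```python
import math

best = 0.0
for i in range(1, 1000):
    x = i * (math.pi / 2) / 1000
    expr = x ** 1.5 * math.sin(x) + math.sqrt(9 - x ** 3) * math.cos(x)
    # Cauchy-Schwarz: expr^2 <= (x^3 + (9 - x^3)) * (sin^2 x + cos^2 x) = 9
    assert expr * expr <= 9
    best = max(best, expr)
print(best)   # just below 3, approached as x -> 0+
```

Equality in Cauchy–Schwarz would require $\tan x = \frac{x^{3/2}}{\sqrt{9-x^3}}$, which has no solution in $(0,\pi/2)$, so the maximum is never attained on the open interval.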
407,289
<p>Is it true that in the category of connected smooth manifolds equipped with a compatible field structure (all six operations are smooth) there are only two objects (up to isomorphism) - <span class="math-container">$\mathbb{R}$</span> and <span class="math-container">$\mathbb{C}$</span>?</p>
M.G.
1,849
<p>Here is a series of standard arguments.</p> <p>Let <span class="math-container">$(\mathbb{F},+,\star)$</span> be such a field. Then <span class="math-container">$(\mathbb{F},+)$</span> is a finite-dimensional (path-)connected abelian Lie group, hence <span class="math-container">$(\mathbb{F},+) \cong \mathbb{R}^n \times (\mathbb{S}^1)^m$</span> as Lie groups. Since <span class="math-container">$\mathbb{F}$</span> is path-connected, there is in particular a path <span class="math-container">$\gamma: [0,1] \to \mathbb{F}$</span> with <span class="math-container">$\gamma(0) = 0_{\mathbb{F}}$</span> and <span class="math-container">$\gamma(1) = 1_{\mathbb{F}}$</span>. Now consider the homotopy <span class="math-container">$H: \mathbb{F} \times [0,1] \to \mathbb{F}$</span>, <span class="math-container">$(x,t) \mapsto \gamma(t) \star x$</span>. This gives a contraction of <span class="math-container">$\mathbb{F}$</span> and so we can exclude all the circle factors.</p> <p>Now, fix <span class="math-container">$y_0 \in \mathbb{F}$</span> and consider the map <span class="math-container">$\widehat{y_0}: \mathbb{R}^n \to \mathbb{R}^n$</span>, <span class="math-container">$x \mapsto x \star y_0$</span>. Then <span class="math-container">$\widehat{y_0}$</span> is an <strong>additive</strong> map (but at the moment not necessarily linear with respect to the natural vector space structure on <span class="math-container">$\mathbb{R}^n$</span>). It is not too difficult to see that by additivity we have <span class="math-container">$\forall q\in \mathbb{Q}: \widehat{y_0}(qx) = q \widehat{y_0}(x)$</span>. Since <span class="math-container">$\widehat{y_0}$</span> is continuous (as being smooth), it <em>now</em> follows that it's actually <span class="math-container">$\mathbb{R}$</span>-<strong>linear</strong>.</p> <p>Thus <span class="math-container">$\mathbb{F}$</span> is an <span class="math-container">$\mathbb{R}$</span>-algebra. 
From this point on one can finish either by the Frobenius theorem on the classification of finite-dimensional <em>associative</em> <span class="math-container">$\mathbb{R}$</span>-algebras or invoke a theorem of Bott and Milnor from algebraic topology that <span class="math-container">$\mathbb{R}^n$</span> can be equipped with a bilinear form <span class="math-container">$\beta$</span> turning <span class="math-container">$(\mathbb{R}^n,\beta)$</span> into a division <span class="math-container">$\mathbb{R}$</span>-algebra (not necessarily associative) only in the cases <span class="math-container">$n=1,2,4,8$</span>.</p> <p><strong>EDIT:</strong> Another finishing topological argument is a theorem of Hopf saying that <span class="math-container">$\mathbb{R}$</span> and <span class="math-container">$\mathbb{C}$</span> are the only finite-dimensional commutative division <span class="math-container">$\mathbb{R}$</span>-algebras. This is less of an overkill compared to invoking Frobenius or Bott–Milnor as the proof is a rather short and cute application of homology, see p.173, Thm. 2B.5 in Hatcher's &quot;Algebraic Topology&quot;.</p>
1,143,161
<p>My question is inspired by the structure of Royden's <em>Real Analysis</em>, which introduces measure theory and Lebesgue integration for $\mathbb{R}$ in its Part I and then reconstructs large portions of those mathematical apparatuses in greater generality by constructing (versions of) them for general metric spaces.</p> <p>I'll suggest a first, elementary example: In $\mathbb{R}^n$, every convergent sequence is Cauchy, and vice versa. In a general metric space, every convergent sequence is Cauchy, but a Cauchy sequence need not converge. To provide an example, a Cauchy sequence in $\mathbb{Q}$ (using the standard metric) can converge to an irrational number, which is of course not in $\mathbb{Q}$.</p> <p>What other results in analysis hold for $\mathbb{R}^n$ but not for general metric spaces?</p>
user169845
169,845
<p>As per Section 9.5 of the text mentioned in the original question, the following theorem, which the authors call the Characterization of Compactness for Metric Spaces theorem, holds for general metric spaces:</p> <blockquote> <p>For a metric space $X$, the following three assertions are equivalent:</p> <p>(i) $X$ is complete and totally bounded;</p> <p>(ii) $X$ is compact;</p> <p>(iii) $X$ is sequentially compact.</p> </blockquote> <p>As a corollary, we recover the familiar Heine-Borel and Bolzano-Weierstrass theorems in the following forms (from the same section in the text):</p> <blockquote> <p>For a subset $K$ of $\mathbb{R}^n$, the following three assertions are equivalent:</p> <p>(i) $K$ is closed and bounded;</p> <p>(ii) $K$ is compact;</p> <p>(iii) $K$ is sequentially compact.</p> </blockquote> <p>The (i) $\iff$ (ii) equivalence is the Heine-Borel theorem, and the (i) $\iff$ (iii) equivalence is the Bolzano-Weierstrass theorem.</p> <p>In both of the above theorems, compactness and sequential compactness are equivalent. Sequential compactness and compactness are not equivalent for topological spaces, but they are for all metric spaces. (<em>Counterexamples in Topology</em> contains a plethora of examples illustrating this.) The main difference between $\mathbb{R}^n$ and general metric spaces in this particular case is the replacement of "closed and bounded" with "complete and totally bounded" when generalizing from $\mathbb{R}^n$ to general metric spaces.</p> <p>One of the reasons for this is that in $\mathbb{R}^n$, Cauchy sequences are convergent, and vice versa, while in general a Cauchy sequence need not be convergent, an example of which was given in the original question. 
Topologically, a closed subset of a space is a subset which contains its limit points, and the definition of a complete metric space is a metric space which contains the limits of its Cauchy sequences; when the convergent sequences in a space are exactly the Cauchy sequences in a space, such as in the case of $\mathbb{R}^n$, the closed subsets are complete, and vice versa, since closed sets contain their limit points and complete sets contain the limits of their Cauchy sequences and these two things coincide. However, in a general metric space, a Cauchy sequence in a subspace need not converge in that subspace, so the Cauchy sequences are no longer exactly the convergent sequences, and "closed" and "complete" are no longer equivalent.</p> <p>Similarly, the difference between "bounded" and "totally bounded" is that a totally bounded space is always bounded, but a bounded space need not be totally bounded, although these two coincide for $\mathbb{R}^n$. To again cite Royden (page 198 in the 4th edition), the closed unit ball in $\ell^2$ is bounded but not totally bounded:</p> <blockquote> <p>Let $X$ be the Banach space of $\ell^2$ of square summable sequences. Consider the closed unit ball $B = \{\{x_n\} \in \ell^2 \mid ||\{x_n\}||_2 \leq 1\}$. Then $B$ is bounded. We claim that $B$ is not totally bounded. Indeed, for each natural number $n$, let $e_n$ have the $n$th component 1 and other components 0. Then $||e_n - e_m||_2 = \sqrt{2}$ if $m \neq n$. Then $B$ cannot be contained in a finite number of balls of radius $&lt; 1/2$ since one of these balls would contain two of the $e_n$'s, which are distance $\sqrt{2}$ apart and yet the ball has diameter less than 1.</p> </blockquote> <p>Therefore, in this specific scenario, the generalization from $\mathbb{R}^n$ to a general metric space is accomplished by generalizing "bounded" to "totally bounded" and "closed" to "complete". 
</p> <p>Therefore, to see examples of metric spaces that do not satisfy the Heine-Borel and Bolzano-Weierstrass theorems, we should look at spaces that are bounded but not totally bounded, closed but not complete, or both. Very specific examples of this have been given above, but it is very interesting to ask what other spaces fall into this category.</p>
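The $\ell^2$ computation in the quoted block is easy to reproduce; finite lists stand in for $\ell^2$ here, which is harmless since each $e_n$ has only one nonzero entry:

```python
import math

def e(n, dim=10):
    """Truncated version of the n-th standard basis vector of l^2."""
    v = [0.0] * dim
    v[n] = 1.0
    return v

def dist(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

# Every pair of distinct e_n's is sqrt(2) apart, so no finite collection
# of balls of radius < 1/2 can cover all of them.
for n in range(10):
    for m in range(n + 1, 10):
        assert abs(dist(e(n), e(m)) - math.sqrt(2)) < 1e-12
```

Since infinitely many points are pairwise $\sqrt{2}$ apart, any ball of radius under $1/2$ contains at most one of them, which is exactly the failure of total boundedness.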
2,426,394
<p>I'm sorry to bother you with this easy problem. But I'm working alone and totally confused. It is the 1378th problem from "Problems in Mathematical Analysis" written by Demidovich. The standard answer is $$\frac{60(1+x)^{99}(1+6x)}{(1-2x)^{41}(1+2x)^{61}}$$</p> <p>But when I followed the routine $$\frac{d(P(x)/Q(x))}{dx}=\frac{\frac{dP(x)}{dx}Q(x)-P(x)\frac{dQ(x)}{dx}}{Q(x)^2}$$ I got $$\frac{100(1+x)^{99}(1-2x)^{40}(1+2x)^{60}-(1+x)^{100}(-80(1-2x)^{39}+120(1+2x)^{59})}{(1-2x)^{80}(1+2x)^{120}}$$</p> <p>and finally $$\frac{60(1+x)^{99}P(x)}{(1-2x)^{41}(1+2x)^{61}}$$</p> <p>$$P(x)=1-4x^2-\frac{3(1+x)}{(1-2x)^{39}}-\frac{2(1+x)}{(1+2x)^{59}}$$</p> <p>Would you tell me what mistake I made? Best regards.</p>
Piquito
219,998
<p>HINT.-Take first derivative of $\dfrac{(1+x)^a}{(1-2x)^b(1+2x)^c}$ and replace the values of $a,b,c$ in your result. This way could be better for your calculations.</p>
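Following the hint with $a=100$, $b=40$, $c=60$ (the function in the asker's intermediate step is $\frac{(1+x)^{100}}{(1-2x)^{40}(1+2x)^{60}}$), logarithmic differentiation gives $\frac{y'}{y}=\frac{100}{1+x}+\frac{80}{1-2x}-\frac{120}{1+2x}$, whose combined numerator simplifies to $60(1+6x)$, i.e. exactly the book's answer. A finite-difference check at a sample point (my own quick test, not part of the hint):

```python
def y(x):
    return (1 + x) ** 100 / ((1 - 2 * x) ** 40 * (1 + 2 * x) ** 60)

def y_claimed(x):
    # The book's answer for y'(x).
    return 60 * (1 + x) ** 99 * (1 + 6 * x) / ((1 - 2 * x) ** 41 * (1 + 2 * x) ** 61)

x, h = 0.1, 1e-6
numeric = (y(x + h) - y(x - h)) / (2 * h)   # symmetric difference quotient
assert abs(numeric - y_claimed(x)) / abs(y_claimed(x)) < 1e-6
```

The symmetric quotient has $O(h^2)$ error, which at $h=10^{-6}$ is far below the tolerance used here.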
2,903,259
<p>I am working on one example from Munkres's book "Topology" and I would like to clarify one question.</p> <p><strong>Example:</strong> Consider the set $[0,1)$ of real numbers and the set $\mathbb{Z}_+$ of positive integers, both in their usual orders; give $\mathbb{Z}_+\times [0,1)$ the dictionary order. This set has the same order type as the set of nonnegative reals; the function $$f(n\times t)=n+t-1$$ is the required bijective order-preserving correspondence. On the other hand, the set $[0,1)\times \mathbb{Z}_+$ in the dictionary order has quite a different order type; for example, every element of this ordered set has an immediate successor. </p> <p><strong>My questions:</strong> </p> <p>1) I've checked that $[0,1)\times \mathbb{Z}_+$ has the same order type as the set of nonnegative reals, right?</p> <p>2) Any element $(t,n)$ from $[0,1)\times \mathbb{Z}_+$ has an immediate successor, namely $(t,n+1)$. Right?</p> <p>3) But elements in $\mathbb{Z}_+\times [0,1)$ have no immediate successors, right?</p>
José Carlos Santos
446,262
<ol> <li>No. You checked that $\mathbb{Z}_+\times[0,1)$ has the same order type as the set of nonnegative reals.</li> <li>Right.</li> <li>Right, for every element of that set.</li> </ol>
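A small check of point 1's corrected claim: Python tuples compare lexicographically, i.e. in dictionary order, so the order-preserving bijection $f(n\times t)=n+t-1$ from $\mathbb{Z}_+\times[0,1)$ can be spot-checked on random samples (a numeric illustration, not a proof):

```python
import random

random.seed(0)

def f(n, t):
    return n + t - 1

pts = [(random.randint(1, 5), random.random()) for _ in range(200)]
for p in pts:
    for q in pts:
        # Tuples compare lexicographically, i.e. in dictionary order.
        assert (p < q) == (f(*p) < f(*q))
```

The check works because if $n_1 < n_2$ then $n_1 + t_1 - 1 < n_1 \leq n_2 - 1 \leq n_2 + t_2 - 1$, and for equal first coordinates the comparison reduces to $t_1 < t_2$.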
1,044,005
<p>So I'm trying to find the general formula of a sequence, $${18,30,46,66,90,118}$$</p> <p>I found the first differences, $${12,16,20,24,28}$$</p> <p>second differences, $${4,4,4,4}$$</p> <p>If the second difference is 4, you start with $2n^2$</p> <p>Then I have $${18,30,46,66,90,118}$$</p> <p>-$${2,8,18,32,50,72}$$ </p> <p>The residue is $${16,22,28,34,40,46}$$</p> <p>whose formula is $16+6(n-1)$</p> <p>So the final answer is $2n^2+16+6(n-1)$ which is $2n^2+6n+10$? </p> <p>Does my approach seem on the right track? </p>
Zubin Mukerjee
111,946
<p>If you <em>know</em> that the pattern is a quadratic, then you can use the method of finite differences. Otherwise, there are an infinity of formulas/patterns that will give you those six numbers in order. </p> <hr> <p>Anyway, Let your quadratic be $$Q(x) = ax^2 + bx + c$$ with coefficients $a,b,c \in \mathbb{R}$. We know </p> <p>$$Q(0) = 18$$</p> <p>$$Q(1) = 30$$</p> <p>$$Q(2) = 46$$</p> <p>As it turns out, three is enough to find everything we need to. The first equation above means that $$c=18$$</p> <p>The second gives us $$a+b+c = 30 $$ or $$ a+b = 12$$</p> <p>The third yields $$4a + 2b + 18 = 46$$</p> <p>or </p> <p>$$2a + b = 14$$</p> <p>Since $a+b=12$, we get $$a=2$$</p> <p>and $$b=10$$</p> <hr> <p>Thus, $$Q(x) = 2x^2 + 10x + 18$$</p> <p>As a check, you can find $Q(3)$, $Q(4)$, and $Q(5)$ to make sure they're $66$, $90$, and $118$, respectively.</p>
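Note the asker's $2n^2+6n+10$ (indexed from $n=1$) and the $Q$ above (indexed from $n=0$) are the same polynomial under an index shift: $Q(n-1)=2(n-1)^2+10(n-1)+18=2n^2+6n+10$. A quick check of both against the sequence:

```python
def Q(n):          # indexing from n = 0, as in the answer above
    return 2 * n ** 2 + 10 * n + 18

def P(n):          # the asker's formula, indexing from n = 1
    return 2 * n ** 2 + 6 * n + 10

seq = [18, 30, 46, 66, 90, 118]
assert [Q(n) for n in range(6)] == seq
assert [P(n) for n in range(1, 7)] == seq   # same polynomial, shifted index
```

So both forms are consistent with the data; the only difference is where the indexing starts.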
4,292,822
<blockquote> <p>Prove linear independence of <span class="math-container">$1+x^3-x^5,1-x^3,1+x^5$</span> in the Vector Space of Polynomials</p> </blockquote> <p>The attempts I found online are all quite easy. You just substitute something in for <span class="math-container">$x$</span> into the equation <span class="math-container">$a(1+x^3-x^5)+b(1-x^3)+c(1+x^5)=0$</span> for example <span class="math-container">$x=1,0,-1$</span> and this will give you three equations where you can show that <span class="math-container">$a,b,c=0$</span>. But why can we substitute something in? If I define the Vector Space of Polynomials in a very abstract way, with <span class="math-container">$\sum_{i} \alpha_i x^{i}+\sum_{i} \beta_{i} x^{i}:=\sum_{i} (\alpha_{i}+\beta_{i})x^{i}$</span> and <span class="math-container">$(\sum_{i}^{n} \alpha_i x^{i})(\sum_{i}^{m} \beta_{i} x^{i} ):=\sum_{i=0}^{n+m} c_i x^i$</span> with <span class="math-container">$c_k=\alpha_0 \beta_k+\alpha_1 \beta_{k-1}+...+\alpha_{k} \beta_0$</span> and an <span class="math-container">$x$</span> that is just an abstract symbol with absolutely no meaning, why should one be allowed to substitute something for <span class="math-container">$x$</span> or even worse differentiate the equation?</p>
Mark Bennet
2,906
<p>Evaluating a polynomial at a point <span class="math-container">$p$</span> is a ring homomorphism whose kernel is the ideal generated by <span class="math-container">$x-p$</span>.</p> <p>A ring homomorphism always takes zero to zero, so if you have a polynomial equal to zero, and apply the evaluation map, the result will still be zero.</p> <p>You can choose whatever value you like (for example a value which eliminates one of more of the constants) and get a valid expression equal to zero. It can be a surprisingly powerful technique.</p>
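Concretely, applying three evaluation maps turns the relation $a\,p_1+b\,p_2+c\,p_3=0$ into a linear system; if the $3\times 3$ matrix of values is nonsingular, only $a=b=c=0$ remains. (The points have to be chosen so that the system really is nonsingular; the sketch below uses $x=0,1,2$, with exact integer arithmetic.)

```python
def p1(x): return 1 + x ** 3 - x ** 5
def p2(x): return 1 - x ** 3
def p3(x): return 1 + x ** 5

points = [0, 1, 2]
M = [[p1(x), p2(x), p3(x)] for x in points]

def det3(m):
    # Cofactor expansion along the first row.
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

# Nonzero determinant => a*p1 + b*p2 + c*p3 = 0 forces a = b = c = 0.
assert det3(M) != 0
```

Each row of `M` is one application of the evaluation homomorphism from the answer, and the nonzero determinant is exactly the "only the trivial relation survives" step.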
1,991,600
<p><a href="https://i.stack.imgur.com/2L9j9.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2L9j9.jpg" alt="enter image description here" /></a></p> <p>I have worked out the areas as <span class="math-container">$\pi/3$</span> for the circle and <span class="math-container">$2/\sqrt3$</span> for the triangle but don't know how to convert into a percentage without a calculator.</p>
Momo
384,029
<p>Just replace $\pi$ with 3.14 and $\sqrt{3}$ with 1.73 (do the students today know how to do the square root by hand any longer?), and calculate approximately. Then see what percentage is closest.</p>
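Taking the asker's two areas at face value, the arithmetic with the suggested approximations looks like this (a numeric illustration of the hint, not a by-hand solution):

```python
pi, sqrt3 = 3.14, 1.73          # hand-friendly approximations
circle = pi / 3                 # the asker's circle area
triangle = 2 / sqrt3            # the asker's triangle area
pct = circle / triangle * 100   # = pi * sqrt3 / 6 * 100
assert 90 < pct < 91            # the exact ratio is pi*sqrt(3)/6, about 90.7%
```

The point of the approximation is that $\pi\sqrt3/6$ only needs two rounded constants and one multiplication to land within a percent of the exact value.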
228,283
<p>Could someone give a reference or construct an example of a closed subspace $Y\subset L_1[0,1]$ such that $\operatorname{dist}(x,Y)$ is not attained for any $x\notin Y$.</p> <p>I read somewhere that $Y$ is necessarily of infinite dimension and codimension.</p>
Mikhail Ostrovskii
37,822
<p>Consider any functional $f$ which does not attain its norm on $L_1[0,1]$ (such $f$ exists by James's theorem, but in this case one can find it without, as an $L_\infty$-function with essential supremum equal to $1$, which is not attained on a set of non-zero measure) and let $Y$ be the kernel of $f$. Let $x\notin Y$, if $\hbox{dist}(x,Y)$ is attained at $y\in Y$, then $f$ would attain its norm on $(x-y)/||x-y||$.</p>
1,137
<p>In a freshman course in mathematics for biology students, I have had the issue that basic algebra (e.g. simplifying $\frac{a/b}{c/d}$) is far from being mastered. On the one hand, it seems ill-suited to teach more advanced matters to student failing on the basics. On the other hand, if we take too much time to cover the material allegedly covered in high school, we can hardly hope to cover what is needed to these student, and we enforce the impression that not mastering high-school mathematics is ok.</p> <blockquote> <p>How much effort should we spend in class for studying material that is supposed to be mastered but in practice is not?</p> </blockquote>
Community
-1
<p>Some strategies could be useful for improving students' basic skills without taking time away from the main material of the course.</p> <ul> <li>Use a TA and ask him/her to work on students' basic skills during some sessions.</li> <li>Surely there are some students in your class who know the basics better than others. Divide the students into groups such that each group includes an informed student as its head. Ask the heads to help their friends with basic subjects in some sessions. Of course, you should motivate these heads with some extra credit or something similar.</li> </ul>
1,137
<p>In a freshman course in mathematics for biology students, I have had the issue that basic algebra (e.g. simplifying $\frac{a/b}{c/d}$) is far from being mastered. On the one hand, it seems ill-suited to teach more advanced matters to student failing on the basics. On the other hand, if we take too much time to cover the material allegedly covered in high school, we can hardly hope to cover what is needed to these student, and we enforce the impression that not mastering high-school mathematics is ok.</p> <blockquote> <p>How much effort should we spend in class for studying material that is supposed to be mastered but in practice is not?</p> </blockquote>
user570
570
<p>I've previously written online algebra tutorials for 2nd year bio students, with online quizzes at the end. It seemed to help. The advantage is you can base the learning materials on the course content to some extent. Also you can let students know early that they may have a problem before it's too late to pass or withdraw. Set it as an early mandatory thing to complete as an assessment of prerequisites.</p>
2,787,114
<p>Before going to my question, let me give two preliminary definitions</p> <blockquote> <p><strong>Definition 1.</strong> Let $S\subseteq\mathbb{R}^n$ be a non-empty open set in $\mathbb{R}^n$ under the usual topology on $\mathbb{R}^n$ and $f:S\to \mathbb{R}^m$. Let $\mathbf{c}\in S$ and $g:U(\subseteq \mathbb{R})\to S$ is such that,</p> <ul> <li><p>$U$ is open in $\mathbb{R}$ under the usual topology on $\mathbb{R}$</p></li> <li><p>$g(0)=\mathbf{c}$</p></li> <li><p>$g$ is continuous at $0$</p></li> </ul> <p>Then $f$ will be said to have <em>derivative along the curve $g$ at the point $\mathbf{c}$</em> if, $$\displaystyle\lim_{h\to 0}\dfrac{(f\circ g)(h)-(f\circ g)(0)}{h}$$ exists. </p> <p><strong>Definition 2.</strong> Let $S\subseteq\mathbb{R}^n$ be a non-empty open set in $\mathbb{R}^n$ under the usual topology on $\mathbb{R}^n$ and $f:S\to \mathbb{R}^m$. Let $\mathbf{c}\in S$. Then $f$ will be said to have <em>approach independent derivative at $\mathbf{c}$</em> if, $$\displaystyle\lim_{h\to 0}\dfrac{(f\circ g)(h)-(f\circ g)(0)}{h}$$ exists for all $g$ satisfying the properties listed in the previous definition.</p> </blockquote> <p><strong>Question</strong></p> <p>If $f$ has an approach independent derivative at $\bf{c}$ then is it continuous at $\mathbf{c}$?</p> <hr> <p>I was trying to find a counterexample of such a function $f$ but till now I have not been able to find such an example. Any help will be appreciated. </p>
Fimpellizzeri
173,410
<p>Suppose $f$ were not continuous at $c$. Then</p> <p>$$\exists \epsilon &gt; 0,\,\forall \delta &gt; 0,\,\exists x\in S \,\text{ with }\, \lVert x - c\rVert &lt; \delta,\, \lVert f(x) - f(c)\rVert \geq \epsilon\tag{$*$}$$</p> <p>For $n\in\mathbb N$, let $x_n$ be obtained from $(*)$ by taking $\delta = \frac1n$. This produces a sequence $x_n\to c$ with $\lVert f(x_n) - f(c)\rVert \geq \epsilon$ for all $n\in\mathbb N$.</p> <p>Let $t \in (0,1)$ be such that $t \in \left[\frac1{n+1},\frac1n\right)$. Then we define</p> <p>$$g(t) = (n+1)(1-n\,t)\,x_{n+1}+n\big((n+1)t - 1\big)\,x_n.$$</p> <p>In other words, $g$ joins $x_{n+1}$ to $x_n$ on each segment $\left[\frac1{n+1},\frac1n\right)$ by a straight line. We define $g(0) =c$ and $g(t) = g(-t)$ for $t\in(-1,0)$.</p> <p>Can you see that $g$ is continuous at $t=0$? Can you see that this would violate the existence of an approach independent derivative?</p> <hr> <p>EDIT: Welp, this is just a less elegant version of Frank Lu's answer.</p>
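A side note on why Definition 2 insists on arbitrary curves rather than just straight lines: the classic function $f(x,y)=\frac{x^2y}{x^4+y^2}$ (with $f(0,0)=0$) has a derivative along every line through the origin yet is discontinuous there; the failure only shows up along the parabola $y=x^2$. A numeric illustration (my addition, not part of the proof above):

```python
def f(x, y):
    if x == 0 and y == 0:
        return 0.0
    return x * x * y / (x ** 4 + y * y)

# Along any straight line (x, y) = (t*a, t*b), f -> 0 as t -> 0 ...
for a, b in [(1, 0), (0, 1), (1, 1), (2, -3)]:
    t = 1e-6
    assert abs(f(t * a, t * b)) < 1e-3
# ... but along the curve y = x^2 the value is identically 1/2.
x = 1e-3
assert abs(f(x, x * x) - 0.5) < 1e-9
```

So line-restricted derivatives are not enough to force continuity, which is exactly why the "all continuous curves" quantifier in Definition 2 carries real strength.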
481,017
<blockquote> <p>Find all $f(x)$ satisfying $f(f(x)) = x^2 - 2$.</p> </blockquote> <p>Presumably $f(x)$ is supposed to be a function from $\mathbb R$ to $\mathbb R$ with no further restrictions (we don't assume continuity, etc), but the text of the problem does not specify further. </p> <p><strong>Possibly Helpful Links:</strong> Information on similar problems can be found <a href="https://mathoverflow.net/questions/17605/how-to-solve-ffx-cosx">here</a> and <a href="https://mathoverflow.net/questions/17614/solving-ffx-gx">here</a>.</p> <p><strong>Source:</strong> <a href="https://math.stackexchange.com/questions/481000/some-old-russian-problems">This question.</a> It is about to be closed for containing too many problems in one question. I'm posting each problem separately. </p>
Gottfried Helms
1,714
<p>Hmm, if we look at $g(x) = f(f(x)) = x^2-2$ and find the fixpoints $t_0=-1,t_1=2$ then we might assume a solution $h(x)$ for $g(x) = h(x-2)+2 $ and then from this we search for the half-iterate of $h(x)$ defining $h(x-2)+2 = w(w(x-2))+2$ and shall have a solution of the problem by $f(x)=w(x-2)+2$, hopefully having nonzero range of convergence at least in the vicinity of the fixpoint $t_1$. </p> <p>With this ansatz we find that $$h(x) = g(x+2)-2 = (x+2)^2-4 = 4x+x^2$$ could be a replacement for $g(x)$ and from this we get the leading terms for the power series of its half-iterate $$w(x) = 2 x + 1/6 x^2 - 1/90 x^3 + 1/720 x^4 - 7/32400 x^5 \\ + 161/4276800 x^6 - 391/55598400 x^7 + O(x^8) $$ by standard algebraic manipulations. <em>(To check this insert $w(x)$ for $x$ in those leading terms)</em> </p> <p>Thus $f(x)$ can be written as a power series around (fixpoint) $2$: $$f(x) = 2+ 2 (x-2) + 1/6 (x-2)^2 - 1/90 (x-2)^3 + 1/720 (x-2)^4 - 7/32400 (x-2)^5 \\ + 161/4276800 (x-2)^6 - 391/55598400 (x-2)^7 + O(x^8) $$ <em>(The link gives a confirmation <a href="http://www.wolframalpha.com/input/?i=2*cosh%28sqrt%282%29*arccosh%28x/2%29%29" rel="nofollow noreferrer">using WolframAlpha</a> and a $\cosh()$ /$\text{arccosh}()$-formula, see my other answer)</em></p> <p>By the first $64$ terms it looks as if $w(x)$ has a positive range of convergence; and computing a couple of examples gives $$ \begin{array} {c|c|c|c} x &amp;g(x)=h(x-t_1)+t_1 &amp; f(x) = w(x-t_1)+t_1 &amp; f(f(x))=w(w(x-t_1))+t_1\\ \hline\\ 1 &amp; -1 &amp; 0.17942912356439677341 &amp; -1.0 \\ 1.5 &amp; 0.25 &amp; 1.0431497617870845281 &amp; 0.25 \\ 2.5 &amp; 4.25 &amp; 3.0403583699367069623&amp; 4.25 \\ \end{array}$$ while the results for $f(x)$ are only (but well) approximated because I have only the truncation to, say, 32 or 64 terms so far and not yet an analytical description for the coefficients which only would allow an exact result.
</p> <p><hr><br> <strong><em>Picture</em></strong> For the time being here is a plot based on the power series for $f(x)$ truncated to 32 terms. I plotted $y=x,y=f(x),y=f(f(x))=g(x),y=g(f(x))$ to show more pattern. Unfortunately I don't know the Pari/GP-plot facility well enough to make the picture self-explanatory. But I think one recognizes the $y=x$ straight line and the $y=g(x)$ parabola (red). The $y=f(x)$ curve (green) is that in between and the $y=g(f(x))$ (green) is that beyond the parabola: <img src="https://i.stack.imgur.com/2FPVn.png" alt="enter image description here"></p> <p><hr> <em>This is additional information according to the request in the comment of @dfeuer</em> </p> <p><strong><em>More Explanations</em></strong> </p> <p>1) Generalities <em>(Remark: $f(x)$ is meant here in general, not yet your sought function)</em> </p> <p>There is consensus (see for instance L. Comtet, "Advanced Combinatorics", pg 144-148) that for power series <em>without constant term</em> there is a meaningful fractional iterate using the Bell-polynomials based on the formal power series for the function. One can express this more elegantly and concisely in terms of Bell/Carleman matrices <em>(<a href="https://en.wikipedia.org/wiki/Carleman_matrix" rel="nofollow noreferrer">see Wikipedia</a> but for convenience I use the Carleman matrix here in transposed form)</em> , which have a form such that for a function $f(x)$ we have $$ V(x) \cdot B = V(f(x)) \\ V(x) \cdot B^2 = V(f(f(x))) \\ \cdots\\ V(x) \cdot B^n = V(f^{[n]}(x)) \\ $$ where the notation $V(x)=[1,x,x^2,x^3,...]$ and thus $V(f(x))=[1,f(x),f(x)^2,f(x)^3,...]$ means infinite vectors of an indeterminate argument $x$. </p> <p>This notation allows us to isolate the required coefficients of the power series of $f(x)$ into the (infinite, lower triangular) matrix $B$.
Then for the operation of composition and self-composition (aka iteration) via the Bell-polynomials it suffices to denote powers of the matrix $B$ - because they <em>implicitly</em> define the required Bell-polynomial driven operations on the <strong><em>formal power series</em></strong>. </p> <p>If the power series for $f(x) = \sum_{k=1}^\infty b_k\cdot x^k$ has no constant term, then the associated Bell-/Carleman-matrix is lower triangular, and powers of it can be computed also on the finitely truncated versions - giving exact coefficients which are valid also for the untruncated/infinite-size case.<br> Moreover this is also possible for fractional powers, and in particular for the square root, which then provides the solution that you are searching for. </p> <p>2) How this concerns your problem <em>(here $f(x)$ means your function)</em> </p> <p>The procedure is now to find an equivalent power series (or polynomial) for your function <em>(let's call it $g(x)=f(f(x))$)</em> which has <strong><em>no</em></strong> constant term - by shifting the $x$- and $y$-coordinate, such that $g(x) = x^2-2 $, which <strong><em>has</em></strong> a constant term, is rewritten as $h(x) = 4x + x^2$, which has <strong><em>no</em></strong> constant term, and then we have modeled for any number of iterations $$g(x) = h(x-2) + 2 \\ g(g(x)) = h(h(x-2))+2 \\ \cdots$$ </p> <p>Then for the function $h(x)$ we build the (lower triangular) Carleman-matrix $H$ which begins as $$ H= \small \begin{bmatrix} 1 &amp; . &amp; . &amp; . &amp; . &amp; . \\ 0 &amp; 4 &amp; . &amp; . &amp; . &amp; . \\ 0 &amp; 1 &amp; 16 &amp; . &amp; . &amp; . \\ 0 &amp; 0 &amp; 8 &amp; 64 &amp; . &amp; . \\ 0 &amp; 0 &amp; 1 &amp; 48 &amp; 256 &amp; .
\\ 0 &amp; 0 &amp; 0 &amp; 12 &amp; 256 &amp; 1024 \end{bmatrix} \qquad \text{ matrix H is of infinite size} $$ and obviously the dot product with a $V(x)$-vector then evaluates to a $V(h(x))$-vector: $$ V(x) \cdot H = V(h(x))$$</p> <p>Then the square root (let's call it $W$) of $H$ such that $W^2 = H$ can be determined by solution of an infinite equation system or by diagonalization.<br> One solution is again lower triangular, and instead of a polynomial function like $h(x)$ it provides the coefficients of a power series. It begins like $$ W = \small \begin{bmatrix} 1 &amp; . &amp; . &amp; . &amp; . &amp; . \\ 0 &amp; 2 &amp; . &amp; . &amp; . &amp; . \\ 0 &amp; 1/6 &amp; 4 &amp; . &amp; . &amp; . \\ 0 &amp; -1/90 &amp; 2/3 &amp; 8 &amp; . &amp; . \\ 0 &amp; 1/720 &amp; -1/60 &amp; 2 &amp; 16 &amp; . \\ 0 &amp; -7/32400 &amp; 1/540 &amp; 1/30 &amp; 16/3 &amp; 32 \end{bmatrix} \qquad \text{ matrix W is of infinite size}$$ (You can check it by hand when you compute the square of <strong>W</strong>)<br> The second column gives the coefficients for the power series of $w(x)$ with the property $w(w(x-2))+2 = g(x)$ and thus gives an acceptable solution $f(x)=w(x-2)+2$. </p> <p><em>Additional remark: Often with fractional iterates the occurring power series have very small or zero range of convergence - then additional measures must be introduced to make the solution usable and meaningful, but the power series for $w(x)$ here seems to have a convergence radius of about</em> $4$ <em>and can be used for a not-too-small range of</em> $x$ <em>-values.</em></p>
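For readers who want to reproduce the second column of $W$ themselves, here is a small sketch in plain Python (my own illustration of the procedure described above, not part of the original answer). It rebuilds the truncated Carleman matrix of $h(x)=4x+x^2$ with exact rational arithmetic and solves $W^2=H$ entrywise for the lower-triangular square root with positive diagonal; the truncation size $N=8$ is an arbitrary choice, and triangularity makes the truncated entries exact.

```python
from fractions import Fraction

N = 8  # truncation size (arbitrary); triangularity makes the truncated result exact

# Carleman matrix in the transposed convention used above:
# H[i][j] = coefficient of x^i in h(x)^j, with h(x) = 4x + x^2
H = [[Fraction(0)] * N for _ in range(N)]
col = [Fraction(1)] + [Fraction(0)] * (N - 1)   # coefficients of h(x)^0 = 1
for j in range(N):
    for i in range(N):
        H[i][j] = col[i]
    # multiply the column's polynomial by h(x) = 4x + x^2, truncating at degree N-1
    new = [Fraction(0)] * N
    for i in range(N):
        if col[i]:
            if i + 1 < N:
                new[i + 1] += 4 * col[i]
            if i + 2 < N:
                new[i + 2] += col[i]
    col = new

# Lower-triangular square root with positive diagonal, solved entrywise from W^2 = H
W = [[Fraction(0)] * N for _ in range(N)]
for i in range(N):
    W[i][i] = Fraction(2) ** i                  # sqrt of diagonal entry 4^i
    for j in range(i - 1, -1, -1):
        s = sum(W[i][k] * W[k][j] for k in range(j + 1, i))
        W[i][j] = (H[i][j] - s) / (W[i][i] + W[j][j])

print([W[i][1] for i in range(1, 7)])
# the entries 2, 1/6, -1/90, 1/720, -7/32400, 161/4276800 of the series for w(x)
```

The printed column reproduces exactly the coefficients quoted in the answer's power series for $w(x)$.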
799,602
<p>If I have</p> <pre><code>(¬q ⇒ r) </code></pre> <p>and use implication on this does it give me:</p> <pre><code>(¬¬q v r) </code></pre> <p>from where I can then use double negation,</p> <p>or does it give me </p> <pre><code>¬(¬q v r) </code></pre> <p>I am thinking the former.</p> <p>Thank you in advance.</p>
gebruiker
145,141
<p>IMHO the authors of the Stacks project do a good job in proving certain theorems on CSA's and Brauer groups. The proofs are very short and to the point. <a href="http://stacks.math.columbia.edu/download/brauer.pdf" rel="nofollow">Here</a>'s a link to the chapter on Brauer groups. </p> <p>In your particular case though I think you would benefit from the relation $$A\otimes_K\operatorname{Mat}_n(K)\cong_K\operatorname{Mat}_n(A),$$ for a $K$-algebra $A$ (so CSA's over $K$ in particular). Together with Wedderburn's theorem, the proofs of your first two statements come rolling out pretty easily. </p>
28,166
<p>With the initial conditions: $a&gt;b&gt;0$;</p> <p>I need to find $$\lim_{n\to\infty}\sqrt[n]{a^n-b^n}.$$</p> <p>I tried to block the equation left and right in order to use the Squeeze (sandwich, two policemen and a drunk, choose your favourite) theorem.</p>
user 1591719
32,016
<p><strong>METHOD I</strong></p> <p>Since $a&gt;b&gt;0$, our limit boils down to:</p> <p>$$\lim_{n\to\infty}\sqrt[n]{a^n-b^n}=\lim_{n\to\infty}\sqrt[n]{a^n \left(1-\left(\frac{b}{a}\right)^{n}\right)} = \lim_{n\to\infty}\sqrt[n]{a^n}\lim_{n\to\infty}\sqrt[n]{1-\left(\frac{b}{a}\right)^{n}}=a.$$ </p> <p><strong>METHOD II</strong></p> <p>We may simply squeeze it:</p> <p>$$a= \lim_{n\to\infty}\sqrt[n]{(a-b)a^{n-1}}\leq \lim_{n\to\infty}\sqrt[n]{a^n-b^n} \leq \lim_{n\to\infty}\sqrt[n]{a^n}=a$$</p> <p>The proofs are complete. </p>
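For what it's worth, a quick numerical illustration (not part of the proof; the values $a=3$, $b=2$ are an arbitrary choice) shows the $n$-th roots settling on $a$:

```python
a, b = 3.0, 2.0          # any a > b > 0 will do
for n in (1, 5, 25, 125, 500):
    print(n, (a ** n - b ** n) ** (1.0 / n))
# the printed values increase toward a = 3; by n = 500 the term (b/a)^n is
# already negligible, so the n-th root agrees with a to machine precision
```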
28,166
<p>With the initial conditions: $a&gt;b&gt;0$;</p> <p>I need to find $$\lim_{n\to\infty}\sqrt[n]{a^n-b^n}.$$</p> <p>I tried to block the equation left and right in order to use the Squeeze (sandwich, two policemen and a drunk, choose your favourite) theorem.</p>
Zarrax
3,035
<p>Another way: Note that $a^n - b^n = (a - b)(a^{n-1} + a^{n-2}b + ... + ab^{n-2} + b^{n-1})$. Since $b &lt; a$, each term in the sum is bounded by $a^{n-1}$ and we have $$a^{n-1} + a^{n-2}b + ... + ab^{n-2} + b^{n-1} &lt; na^{n-1}$$ Since the sum is at least the first term, we also have $$a^{n-1} + a^{n-2}b + ... + ab^{n-2} + b^{n-1} &gt; a^{n-1}$$ Combining we have $$a^{n-1}(a - b) &lt; a^n - b^n &lt; na^{n-1}(a - b)$$ Taking $n$th roots we get $$a^{1 - {1 \over n}}(a - b)^{1 \over n} &lt; \sqrt[n]{a^n - b^n} &lt;n^{1 \over n} a^{1 - {1 \over n}}(a - b)^{1 \over n} $$ This can be rewritten as $$a \bigg({a - b \over a}\bigg)^{1 \over n} &lt; \sqrt[n]{a^n - b^n} &lt; a n^{1 \over n} \bigg({a - b \over a}\bigg)^{1 \over n}$$ As $n$ goes to infinity both the left and right sides of the above go to $a$. Thus by the squeeze theorem so does the middle and we are done.</p>
1,314,802
<p>True.</p> <p>Since $f$ is continuous (because all uniformly continuous function is continuous), we can assume:</p> <p>$$ f\left(\lim_{n\to\infty} \frac{1}{n}\right) $$</p> <p>Since $ \lim_{n\to\infty} \frac{1}{n} $ is bounded in $ (0,1] $ and $ I \subset (0,1] $, we have by hypothesis $f$ uniformly continuous then</p> <p>$$ \lim_{n\to\infty} f\left(\frac{1}{n}\right) \text{, exists.} $$</p>
Community
-1
<p>Note that we can write \begin{equation*} \lim_{x\to 0}\exp(\frac{\ln(1+\frac{1-\cos(x)}{x})}{x}). \end{equation*} Applying L'Hopital's rule twice gives us \begin{equation*} \exp(\lim_{x\to 0}\frac{x\cos(x)}{1+2x-\cos(x)+x\sin(x)}) \\ =\exp(\lim_{x\to 0}\frac{x}{1+2x-\cos(x)+x\sin(x)}). \end{equation*} Applying L'Hopital's rule again gives \begin{equation*} \exp(\lim_{x\to 0} \frac{1}{2+x\cos(x)+2\sin(x)})=\sqrt{e}~_{\square} \end{equation*}</p>
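As a sanity check on the value $\sqrt{e}\approx 1.6487$, one can evaluate the expression for small $x$ (purely illustrative, not part of the argument):

```python
import math

for x in (1e-1, 1e-2, 1e-3, 1e-4):
    val = (1.0 + (1.0 - math.cos(x)) / x) ** (1.0 / x)
    print(x, val)
print("sqrt(e) =", math.sqrt(math.e))
# the values approach sqrt(e) as x shrinks
```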
1,840,176
<p>I got a task in front of me but I don't really understand it. If someone could explain, I think I would be able to solve it myself.</p> <blockquote> <p>$P(x) = \sum_{k=0}^{\infty}a_{k}x^{k}$ is a power series. There exists a $k_{0} \in \mathbb{N}$ with $a_{k} \neq 0$ for all $k \geq k_{0}$.</p> <p>Proof that: If the sequence $\left ( \left | \frac{a_{k+1}}{a_{k}} \right | \right )_{k \geq k_{0}}$ converges towards a number in $\mathbb{R}$ or towards $\infty$ and if $a:= \lim_{k\rightarrow \infty} \left | \frac{a_{k+1}}{a_{k}} \right | \in \mathbb{R} \cup \left \{ \infty \right \}$ indicates this limit point ($\infty $ or $-\infty$) then following applies for the radius of convergence $R$ of $P$:</p> <p>$R=\left\{\begin{matrix} 0, &amp; a = \infty\\ \infty, &amp; a = 0 \\ \frac{1}{a}, &amp; otherwise \end{matrix}\right.$</p> </blockquote> <hr> <p>What is meant by $k_{0}$ ? It's just any unknown variable which seems to be smaller or equal $k$, right? Oh and it cannot be smaller than zero.</p> <p>What is $a_{k}$ ? It's just any sequence that cannot be zero, right?</p> <p>So first I take the sequence $a_{n}$, use the ratio test to see if it converges. Okay after that is done, I check if in the ratio test, I get + or - $\infty$.</p> <p>Is it right so far?</p> <p>But what confuses me most is this: </p> <blockquote> <p>$a:= \lim_{k\rightarrow \infty} \left | \frac{a_{k+1}}{a_{k}} \right | \in \mathbb{R} \cup \left \{ \infty \right \}$</p> </blockquote> <p>What is it saying with infinity?</p> <p>Sorry I haven't started with the task but first I try to understand everything, then start. </p>
user247327
247,327
<p>"What is meant by $k_0$? It's just any unknown variable which seems to be smaller or equal $k$, right? Oh and it cannot be smaller than zero." No. $k_0$ is a constant, not a variable. This is saying that $a_k$ is non-zero for every term $a_k$ of the sequence with $k$ larger than $k_0$.</p> <p>"What is $a_k$? It's just any sequence that cannot be zero, right?" No. "$a_k$" is the "$k$th" term in this given sequence, not the sequence itself.</p> <p>"So first I take the sequence $a_n$, use the ratio test to see if it converges. Okay after that is done, I check if in the ratio test, I get + or - ∞. Is it right so far?" "+ or - ∞"? The ratio test says that a series converges if the ratio goes to a number less than 1, diverges if larger than 1. </p> <p>"But what confuses me most is this: $a:=\lim_{k\rightarrow\infty}\left|\frac{a_{k+1}}{a_k}\right|\in\mathbb{R}\cup\{\infty\}$. What is it saying with infinity?" What that is saying is that "$a$" can be any real number or it could be infinity. In other words, it could be anything!</p>
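To make the ratio test concrete, here is a small numerical illustration (the two sample series are my own choices, not from the question): for $a_k=2^k$ the ratio tends to $2$, giving $R=\tfrac12$, while for $a_k=1/k!$ the ratio tends to $0$, giving $R=\infty$.

```python
import math

# |a_{k+1}/a_k| for a_k = 2^k : constant ratio 2, so R = 1/2
ratios_geom = [2.0 ** (k + 1) / 2.0 ** k for k in range(1, 30)]

# |a_{k+1}/a_k| for a_k = 1/k! : ratio 1/(k+1) -> 0, so R = infinity
ratios_exp = [math.factorial(k) / math.factorial(k + 1) for k in range(1, 30)]

print(ratios_geom[-1])   # 2.0
print(ratios_exp[-1])    # 1/30, still heading to 0
```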
4,486,425
<blockquote> <p>Suppose that we are given reduced rational numbers <span class="math-container">$\,\dfrac{a}{k},\ \dfrac{b}{\ell},\ \dfrac{c}{q},\,$</span> i.e. <span class="math-container">$\gcd(a,k)=\gcd(b,\ell)=\gcd(c,q)=1$</span> such that <span class="math-container">$$\frac{c}{q}=\frac{a}{k}-\frac{b}{\ell}.$$</span></p> <p>Then we have <span class="math-container">$\ q=k'\ell'e = {\dfrac{gk'l'}{f}}$</span> and <span class="math-container">$\,c=\dfrac{a\ell'-bk'}{f}$</span>, where <span class="math-container">$$g=\gcd(k,l),\,\ k'\,=\frac{k}g,\,\ \ell'=\frac{\ell}g,\,\ e=\frac{g}f,\ f=\gcd(a\ell'\!-bk',g).$$</span></p> </blockquote> <p>I have some troubles to prove that <span class="math-container">$\,q=\ell'k'e.\,$</span> I was trying something like that: <span class="math-container">$$\frac{c}{q}=\frac{a\ell-bk}{k\ell}.$$</span> If we assume that <span class="math-container">$t=\gcd(a\ell-bk,k\ell)$</span>, then <span class="math-container">$c=\dfrac{(a\ell-bk)}{t}$</span> and <span class="math-container">$q=\dfrac{kl}{t}$</span>. Then I performed some manipulations but I did not reach the needed equality.</p> <p>Can anyone show it please? Thank you!</p>
John Omielan
602,049
<p>For simpler algebra, let <span class="math-container">$g = \gcd(k,\ell)$</span>, which means <span class="math-container">$\ell = g\ell'$</span> and <span class="math-container">$k = gk'$</span>. This gives that</p> <p><span class="math-container">$$\begin{equation}\begin{aligned} \frac{a\ell - bk}{k\ell} &amp; = \frac{g(a\ell' - bk')}{(gk')(g\ell')} \\ &amp; = \frac{a\ell' - bk'}{gk'\ell'} \end{aligned}\end{equation}\tag{1}\label{eq1A}$$</span></p> <p>Next, using that <span class="math-container">$g = ef$</span>, and <span class="math-container">$f = \gcd(a\ell' - bk',g) \; \to \; f \mid a\ell' - bk'$</span> meaning there's an integer <span class="math-container">$h$</span> such that <span class="math-container">$h = \frac{a\ell' - bk'}{f}$</span>, we then get</p> <p><span class="math-container">$$\begin{equation}\begin{aligned} \frac{a\ell' - bk'}{gk'\ell'} &amp; = \frac{a\ell' - bk'}{efk'\ell'} \\ &amp; = \frac{\left(\frac{a\ell' - bk'}{f}\right)}{ek'\ell'} \\ &amp; = \frac{h}{ek'\ell'} \end{aligned}\end{equation}\tag{2}\label{eq2A}$$</span></p> <p>Note from <span class="math-container">$f = \gcd(a\ell' - bk',g)$</span> with <span class="math-container">$a\ell' - bk' = hf$</span> and <span class="math-container">$g = ef$</span>, we have that <span class="math-container">$\gcd(h,e) = 1$</span>. Also, since <span class="math-container">$\gcd(a,k) = 1$</span>, then <span class="math-container">$\gcd(a,k') = 1$</span>, plus <span class="math-container">$\gcd(k',\ell') = 1$</span> (due to their definitions involving dividing by <span class="math-container">$\gcd(k,\ell)$</span>), thus <span class="math-container">$\gcd(k',a\ell' - bk') = 1$</span>, so <span class="math-container">$\gcd(k',h) = 1$</span>. Similarly, <span class="math-container">$\gcd(\ell',h) = 1$</span>. 
This means <span class="math-container">$\gcd(h, ek'\ell') = 1$</span>, so \eqref{eq2A} is in the reduced rational form of <span class="math-container">$\frac{c}{q}$</span> with <span class="math-container">$c = h = \frac{a\ell' - bk'}{f}$</span> and <span class="math-container">$q = \ell'k'e$</span>.</p>
4,486,425
<blockquote> <p>Suppose that we are given reduced rational numbers <span class="math-container">$\,\dfrac{a}{k},\ \dfrac{b}{\ell},\ \dfrac{c}{q},\,$</span> i.e. <span class="math-container">$\gcd(a,k)=\gcd(b,\ell)=\gcd(c,q)=1$</span> such that <span class="math-container">$$\frac{c}{q}=\frac{a}{k}-\frac{b}{\ell}.$$</span></p> <p>Then we have <span class="math-container">$\ q=k'\ell'e = {\dfrac{gk'l'}{f}}$</span> and <span class="math-container">$\,c=\dfrac{a\ell'-bk'}{f}$</span>, where <span class="math-container">$$g=\gcd(k,l),\,\ k'\,=\frac{k}g,\,\ \ell'=\frac{\ell}g,\,\ e=\frac{g}f,\ f=\gcd(a\ell'\!-bk',g).$$</span></p> </blockquote> <p>I have some troubles to prove that <span class="math-container">$\,q=\ell'k'e.\,$</span> I was trying something like that: <span class="math-container">$$\frac{c}{q}=\frac{a\ell-bk}{k\ell}.$$</span> If we assume that <span class="math-container">$t=\gcd(a\ell-bk,k\ell)$</span>, then <span class="math-container">$c=\dfrac{(a\ell-bk)}{t}$</span> and <span class="math-container">$q=\dfrac{kl}{t}$</span>. Then I performed some manipulations but I did not reach the needed equality.</p> <p>Can anyone show it please? Thank you!</p>
Bill Dubuque
242
<p><span class="math-container">$\ \,\dfrac{c}q \,=\, {\dfrac{a}k-\dfrac{b}\ell} \,=\!\!\!\!\!\!\!\!\overbrace{\dfrac{a\ell -bk}{k\ell}}^{\!\textstyle\small {\rm cancel}\,\ g \!=\! (k,l) \Rightarrow }\!\!\!\!\!\!\!\!= {\dfrac{\color{#0a0}{a\ell' -bk'}}{\color{#0a0}g\:\!\color{#c00}{k'\ell'}}},\ $</span> for <span class="math-container">$\,\ \ell' = \dfrac{\ell}g,\,\ k' = \dfrac{k}g,\,\ (k',\ell')=1$</span></p> <p><span class="math-container">$\ \color{#c00}{k'}\,$</span> &amp; <span class="math-container">$\,\color{#c00}{\ell'}$</span> are <em>already</em> coprime to <span class="math-container">$\,\color{#0a0}{a\ell'\! -bk'}$</span> (proof below), so to reduce <span class="math-container">$\rm\color{#0a0}{RH}\color{#c00}{S}$</span> fraction it suffices by <span class="math-container">$\small \rm\color{#0af}{EL}=$</span> <a href="https://math.stackexchange.com/a/20904/242">Euclid's Lemma</a> to cancel <span class="math-container">$\:\!(\color{#0a0}{a\ell'\!-\!bk',g}) =:\! \color{darkorange}f,\,$</span> yielding <span class="math-container">$\,\bbox[6px,border:1px solid #c00]{\begin{align}c &amp;= (a\ell'\!-bk')/ \color{darkorange}f\\ q &amp;= gk'\ell'/ \color{darkorange}f,\,\text{ as claimed}\end{align}}$</span></p> <p><strong>Proof:</strong> <span class="math-container">$\,\ (\color{#c00}{k'},\:\!a\ell'\! -b\color{#c00}{k'})=\!\!\! \underbrace{(k',a\ell')\overset{\rm\color{#0af}{EL}}= 1}_{\small\textstyle (k',a)\!=\!1\!=\!(k',\ell')}\!\!\!.\,$</span> By <span class="math-container">$\,k',\ell'$</span> symmetry <span class="math-container">$\,(\color{#c00}{\ell'},a\color{#c00}{\ell'}\! -bk')\!=\!1\,$</span> too.</p>
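Both answers can be spot-checked mechanically; the sketch below (my own verification using Python's `fractions`, with a handful of arbitrary sample fractions) confirms the claimed formulas for $c$ and $q$, including a case where $f&gt;1$:

```python
from fractions import Fraction
from math import gcd

def check(a, k, b, l):
    # verify c = (a*l' - b*k')/f and q = g*k'*l'/f against direct arithmetic
    assert gcd(a, k) == 1 and gcd(b, l) == 1      # inputs must be reduced
    g = gcd(k, l)
    kp, lp = k // g, l // g
    f = gcd(a * lp - b * kp, g)
    c = (a * lp - b * kp) // f
    q = g * kp * lp // f
    assert Fraction(a, k) - Fraction(b, l) == Fraction(c, q)
    assert gcd(c, q) == 1                          # the formula is already reduced

# sample reduced fractions a/k and b/l; in the last case f = gcd(2, 2) = 2 > 1
for a, k, b, l in [(3, 4, 5, 6), (7, 10, 1, 15), (9, 14, 3, 35), (1, 6, 1, 10)]:
    check(a, k, b, l)
print("all checks passed")
```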
3,165,494
<blockquote> <p>Consider the function <span class="math-container">$\phi(t,x)$</span> where <span class="math-container">$x=(x_1,x_2) \in \mathbb{R}^2$</span>. How can we check if <span class="math-container">$\phi(t,x)$</span> is a flow for a vector field? <span class="math-container">$$\phi(t,x)=(x_1\cos(r^2t)+x_2\sin(r^2t),-x_1\sin(r^2t)+x_2\cos(r^2t))$$</span> where <span class="math-container">$r^2=x_1^2+x_2^2$</span>.</p> </blockquote> <hr /> <p>My try:</p> <p>We need to check if <span class="math-container">$\phi(0,x)=\phi(t,\phi(-t,x))$</span>. I am wondering if there is a shorter way?</p>
Tsemo Aristide
280,301
<p><span class="math-container">$\phi_t$</span> is the flow of a vector field if and only if <span class="math-container">$\phi_{t+t'}=\phi_t\circ \phi_{t'}$</span>.</p> <p>If <span class="math-container">$\phi_{t+t'}=\phi_t\circ \phi_{t'}$</span>, write <span class="math-container">$X(x)={d\over {dt}}\big|_{t=0}\phi_t(x)$</span>.</p> <p><span class="math-container">${d\over{du}}\big|_{u=t_0}\phi_u(x)={d\over{du}}\big|_{u=0}\phi_{t_0+u}(x)={d\over{du}}\big|_{u=0}\phi_{t_0}\circ\phi_u(x)=\phi_{t_0}^*{d\over{du}}\big|_{u=0}\phi_u(x)=\phi_{t_0}^*X(x)$</span>, which is equivalent to saying that <span class="math-container">$\phi_t$</span> is the flow of <span class="math-container">$X$</span>.</p> <p>The fact that if <span class="math-container">$\phi_t$</span> is the flow of <span class="math-container">$X$</span> then <span class="math-container">$\phi_{t+t'}=\phi_t\circ\phi_{t'}$</span> holds is a classical argument which is a consequence of the uniqueness theorem for differential equations.</p>
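For the concrete $\phi$ in the question, the group law $\phi_{t+t'}=\phi_t\circ\phi_{t'}$ can also be tested numerically (it holds because the rotation preserves $r^2=x_1^2+x_2^2$); the point and times below are arbitrary choices for illustration:

```python
import math

def phi(t, x):
    x1, x2 = x
    r2 = x1 * x1 + x2 * x2                    # r^2 is invariant under the rotation
    c, s = math.cos(r2 * t), math.sin(r2 * t)
    return (x1 * c + x2 * s, -x1 * s + x2 * c)

x = (0.6, -1.3)
t1, t2 = 0.37, 1.91
lhs = phi(t1 + t2, x)
rhs = phi(t1, phi(t2, x))
print(lhs)
print(rhs)   # agrees with lhs up to rounding, confirming the flow property here
```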
1,405,486
<p>My question concerns division by zero. Let's say there are two functions, $g(x)$ and $f(x)$ that approach $0$, as $x\to t$, also assume their derivative w.r.t. $x$ is finite as $x\to t$. Using L'Hopital's Rule: $\frac{f(x)}{g(x)} = \frac{f'(x)}{g'(x)}$ $x\to t$, Can you say $\frac{f'(x)}{f(x)} = \frac{g'(x)}{g(x)}$? Or $\frac{f'(x)}{f(x)} - \frac{g'(x)}{g(x)} = 0$? To be more specific is $\frac{1}{0} = \frac{1}{0}$? Or is it undefined?</p>
Community
-1
<p>You cannot say any of those things. Division by $0$ is <strong>ALWAYS</strong> undefined (in the real numbers before anyone says anything).</p>
1,405,486
<p>My question concerns division by zero. Let's say there are two functions, $g(x)$ and $f(x)$ that approach $0$, as $x\to t$, also assume their derivative w.r.t. $x$ is finite as $x\to t$. Using L'Hopital's Rule: $\frac{f(x)}{g(x)} = \frac{f'(x)}{g'(x)}$ $x\to t$, Can you say $\frac{f'(x)}{f(x)} = \frac{g'(x)}{g(x)}$? Or $\frac{f'(x)}{f(x)} - \frac{g'(x)}{g(x)} = 0$? To be more specific is $\frac{1}{0} = \frac{1}{0}$? Or is it undefined?</p>
Paramanand Singh
72,031
<p>When you talk of $f(x)/g(x)$ then you assume that $g(x) \neq 0$ as $x \to t$. At the same time you assume that $f(x) \to 0, g(x) \to 0$ as $x \to t$. This is the typical use-case where L'Hopital's Rule can be applied provided $f'(x)/g'(x)$ tends to a limit as $x \to t$.</p> <p>Now note that if $f'(x)/g'(x) \to L$ as $x \to t$ then it is also obvious that $g'(x) \neq 0$ as $x \to t$. The L'Hopital's Rule says that under these conditions $f(x)/g(x) \to L$ as $x \to t$. This means that $$\lim_{x \to t}\frac{f(x)}{g(x)} = \lim_{x \to t}\frac{f'(x)}{g'(x)}\text{ or what is the same as }\lim_{x \to t}\left(\frac{f(x)}{g(x)} - \frac{f'(x)}{g'(x)}\right) = 0\tag{1}$$ but <strong>this does not mean that</strong> $$\frac{f(x)}{g(x)} = \frac{f'(x)}{g'(x)}$$ Note further that <em>if two functions have same limit as $x \to t$ then it does not necessarily mean that they are equal as $x \to t$</em>.</p> <p>The symbols like $1/0$ don't mean anything and it is better not to worry about them.</p>
2,840,091
<blockquote> <p>Consider the linear map$:T:\mathbb{R}^3 → \mathbb{R}$ with $$T\left(\begin{bmatrix} x \\ y \\ z \end{bmatrix}\right)=x-2y-3z$$ Find the basis of its kernel.</p> </blockquote> <p><strong>My try</strong></p> <p>Since the plane is the nullspace of the matrix $$A=\begin{bmatrix} 1 &amp; -2 &amp; -3 \\ 0 &amp; 0 &amp; 0 \\ 0 &amp; 0 &amp; 0 \end{bmatrix}$$</p> <p>But I am stuck here. Can anyone explain this furthur</p>
Mohammad Riazi-Kermani
514,496
<p>The Kernel of $T$ is a two dimensional space.</p> <p>You need to find two linearly independent vectors satisfying $$ x-2y-3z=0 $$</p> <p>Finding such vectors is achieved by choosing $x$ and $y$ values and solving for $z$</p> <p>I have found $(3,0,1)$ and $(0,3,-2)$ to serve as a basis for the kernel of T. </p>
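A quick check with NumPy (purely a verification of the answer, nothing more) confirms that both vectors are killed by $T$ and are linearly independent:

```python
import numpy as np

T = np.array([[1.0, -2.0, -3.0]])     # matrix of T(x, y, z) = x - 2y - 3z
v1 = np.array([3.0, 0.0, 1.0])
v2 = np.array([0.0, 3.0, -2.0])

print(T @ v1, T @ v2)                 # both [0.] : v1, v2 lie in ker T
B = np.column_stack([v1, v2])
print(np.linalg.matrix_rank(B))       # 2 : the vectors are independent,
                                      # so they form a basis of the 2-dim kernel
```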
2,702,329
<blockquote> <p>Let $A$ be an $m\times n$ matrix, let $v \in \mathbb{R}^m$, and let $S$ be the set of least squares solutions to the system $Ax=b$. Show that there exists a unique <em>minimal least square solution</em> to $Ax=b$; that is, there exists some $x\in S$ such that $||x|| \le||y||$ for all $y\in S$.</p> </blockquote> <p>I have two questions about this.</p> <ol> <li><p>How can there be a unique least square solution if the stipulation is $||x|| \le ||y||$, shouldn't it just be less than?</p></li> <li><p>This seems tautological to me, because you're just trying to show that there is a least element in S, which should clearly exist because the norm gives you a scalar that you can compare. Is there any more to this proof?</p></li> </ol>
Siong Thye Goh
306,553
<p>We can't have $$\exists x \in S, \|x\| &lt; \|y \|, \forall y \in S$$</p> <p>because this would imply that $\|x\| &lt; \|x\|$ for such an $x \in S$.</p> <p>However, we can write </p> <p>$$\exists x \in S, \|x\|&lt;\|y\|, \forall y \in S \setminus \{x\}.$$</p> <p>A minimal point need not exist in general; for example, if $S=(0,1)$ and we try to minimize $\|x\|$, then we have an infimum but not a minimum. We have to use some property of the least squares solution to show that a minimal element exists.</p>
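As an aside (my own illustration, using NumPy): `np.linalg.lstsq` computes exactly this minimal least-squares solution via the SVD/pseudoinverse. For a rank-deficient $A$ there are infinitely many least-squares solutions, and the routine picks the one of smallest norm:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.0]])            # rank 1: least-squares solution not unique
b = np.array([1.0, 3.0])

x_min = np.linalg.lstsq(A, b, rcond=None)[0]
print(x_min)                          # [1. 1.] : the minimal-norm solution

x_other = np.array([2.0, 0.0])        # also least-squares (x1 + x2 = 2), larger norm
print(np.linalg.norm(A @ x_min - b), np.linalg.norm(A @ x_other - b))  # equal residuals
print(np.linalg.norm(x_min), "<", np.linalg.norm(x_other))
```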
1,501,736
<p>Let $x_1 = 1$ and for $n \ge2, x_n = \sqrt{3 + x_{n-1}}$, I just want to know how to determine the limit. </p>
Lutz Lehmann
115,115
<p><strong>Hint:</strong> If you know the Banach fixed-point theorem, then $f(x)=\sqrt{3+x}$ has Lipschitz constant $1/4$ on $[1,\infty)$.</p>
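Concretely, the contraction forces the iterates toward the unique fixed point of $x=\sqrt{3+x}$ in $[1,\infty)$, i.e. the positive root of $x^2-x-3=0$, which is $\frac{1+\sqrt{13}}{2}\approx 2.3028$. A few lines of Python make this visible (an illustration only):

```python
import math

x = 1.0                               # x_1 = 1
for _ in range(60):                   # with Lipschitz constant 1/4, convergence is fast
    x = math.sqrt(3.0 + x)

fixed_point = (1.0 + math.sqrt(13.0)) / 2.0   # positive root of x^2 - x - 3 = 0
print(x, fixed_point)                 # both ≈ 2.3027756...
```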
1,773,060
<p>Is every (non-trivial) quotient of a Boolean algebra isomorphic to a subalgebra of that Boolean algebra? And conversely is every subalgebra isomorphic to a quotient algebra?</p>
Robert Furber
184,596
<p>For the first question, the answer is no, there are quotient algebras that cannot be embedded as subalgebras. Let's take <span class="math-container">$\newcommand{\powerset}{\mathcal{P}}\newcommand{\N}{\mathbb{N}}A = \powerset(\N)$</span>, the ideal <span class="math-container">$I \subseteq A$</span> to be the set of finite subsets, and <span class="math-container">$B = A/I$</span>. We need to show that <span class="math-container">$B$</span> is not a subalgebra of <span class="math-container">$A$</span>. </p> <p>We start by observing that <span class="math-container">$A$</span> has what is known as the <em>countable chain condition</em>, which is that if <span class="math-container">$(S_i)_{i \in I}$</span> is a pairwise disjoint family of non-zero elements of <span class="math-container">$A$</span>, then <span class="math-container">$I$</span> is countable. This follows from the countability of <span class="math-container">$\N$</span>. </p> <p>We can therefore show that <span class="math-container">$B$</span> is not a subalgebra of <span class="math-container">$A$</span> by finding an uncountable pairwise disjoint family in <span class="math-container">$B$</span>. We do this as follows. For each <span class="math-container">$n \in \N$</span>, we write <span class="math-container">$\mathbf{digits}(n)$</span> for the finite sequence expressing <span class="math-container">$n$</span> in binary. For each infinite binary sequence <span class="math-container">$a \in 2^\N$</span>, define <span class="math-container">$S_a \subseteq \N$</span> as: <span class="math-container">$$ S_a = \{ n \in \N \mid \mathbf{digits}(n) \text{ is a prefix of } a\} $$</span> As is well known, <span class="math-container">$2^\N$</span> is uncountable. 
Each <span class="math-container">$S_a$</span> is infinite, because each finite prefix of <span class="math-container">$a$</span> comes from some <span class="math-container">$n \in \N$</span>, so the equivalence class <span class="math-container">$[S_a]$</span> is non-zero in <span class="math-container">$B$</span>. Finally, if <span class="math-container">$a \neq b$</span> are elements of <span class="math-container">$2^\N$</span>, they have only finitely many prefixes that are the same, so <span class="math-container">$S_a \cap S_b$</span> is finite, so the equivalence classes <span class="math-container">$[S_a]$</span> and <span class="math-container">$[S_b]$</span> are disjoint in <span class="math-container">$B$</span>. So we have shown that <span class="math-container">$([S_a])_{a \in 2^\N}$</span> is an uncountable disjoint family in <span class="math-container">$B$</span>, which would be a contradiction if <span class="math-container">$B$</span> could be embedded as a subalgebra of <span class="math-container">$A$</span>. </p> <hr> <p>For your second question, the answer is also no, there are subalgebras that are not isomorphic to any quotient. However, I was not able to make this example as elementary as the other one, so I will be using Stone duality and a bit of general topology.</p> <p>Let's start again with <span class="math-container">$A = \powerset(\N)$</span>, and define the subalgebra <span class="math-container">$B$</span> to consist of finite sets and <em>cofinite</em> sets, <em>i.e.</em> complements of finite sets. Assume for a contradiction that <span class="math-container">$B$</span> is representable as a quotient of <span class="math-container">$A$</span>, so there is a surjective Boolean homomorphism <span class="math-container">$f : A \rightarrow B$</span>. 
The Stone space of <span class="math-container">$A$</span> is <span class="math-container">$\beta \N$</span>, the Stone-Čech compactification of <span class="math-container">$\N$</span>, and the Stone space of <span class="math-container">$B$</span> is the "free convergent sequence", <em>i.e.</em> it is homeomorphic to the subspace <span class="math-container">$S = \{ 0, 1, \frac{1}{2}, \cdots, \frac{1}{n}, \cdots \} \subseteq \mathbb{R}$</span>. The Stone dual of <span class="math-container">$f$</span> is a continuous injective map <span class="math-container">$g : S \rightarrow \beta \N$</span>. But as <span class="math-container">$\beta \N$</span> is extremally disconnected, every convergent sequence in it is eventually constant, which is a contradiction. </p>
2,136,183
<p>Find integers x,y such that the repeating decimal 0.712341234.... = x/y.</p> <p>I would actually do this problem if the 7 was not there. If the 7 was not there, my proof would be as follows.</p> <p>proof:</p> <p>Let z = 0.12341234...</p> <p>Then 10^4z = 1234.1234</p> <p>10^4z - z = 1234</p> <p>z = 1234/(10^4 - 1)</p> <p>x = 1234, y = 10^4-1</p> <p>So my question is, how would this change when there is a random number thrown in there that is not part of the repeating decimal?</p> <p>Edit: Proof after hints given</p> <p>Let Let z = 0.712341234...</p> <p>Then 10z = 7.1234...</p> <p>10z - 7 = 0.1234...</p> <p>10^4*(10z - 7) = 1234.1234...</p> <p>10^4*(10z-7) - (10z-7) = 1234</p> <p>(10z-7) * 10^4 - 1 = 1234</p> <p>(10z-7) = 1234/(10^4 - 1)</p> <p>10z = 1234/(10^4-1) + 7</p> <p>z = (1234/(10^4-1) + 7)/10</p> <p>x = 1234/(10^4-1) + 7, y = 10</p> <p>I mean this does give me the correct answer, but x isn't exactly an integer.</p>
fleablood
280,126
<p>Let $x=.712341234.... $</p> <p>$10000x=7123.41234..... $</p> <p>$9999x =7122.7$</p> <p>$x =\frac {71227}{99990} $</p> <p>May be able to reduce. Or maybe not.</p> <p>The irregular 7 doesn't change anything seriously. </p> <p>$1/n $ will reduce to a purely repeating pattern but most $m/n $ will not.</p> <p>...or if you prefer (it's the same thing really)</p> <p>Let $10x =7.12341234....$</p> <p>Let $y=.12341234$</p> <p>$y=\frac {1234}{9999} $</p> <p>$10x =7+ \frac {1234}{9999} $</p> <p>$x=\frac 7 {10}+ \frac {1234}{99990} $</p> <p>$x=\frac {7*9999+1234}{99990}$</p> <p>$x=\frac {69993+1234}{99990} $</p> <p>$x=\frac {71227}{99990} $</p>
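The result is easy to confirm with Python's `fractions` module (a verification only, not part of the answer): the fraction is already in lowest terms, and long division reproduces the decimal digits.

```python
from fractions import Fraction

x = Fraction(71227, 99990)
print(x)               # 71227/99990 -- the constructor reduces automatically,
                       # so this fraction cannot be simplified further

digits = []
r = x
for _ in range(12):    # long division, one decimal digit at a time
    r *= 10
    d = int(r)
    digits.append(str(d))
    r -= d
print("0." + "".join(digits))   # 0.712341234123
```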
90,006
<p>I have two problem collections I am currently working through, the "Berkeley Problems in Mathematics" book, and the first of the three volumes of Putnam problems compiled by the MAA. These both contain many problems on basic differential equations.</p> <p>Unfortunately, I never had a course in differential equations. Otherwise, my background is reasonably good, and I have knowledge of real analysis (at the level of baby Rudin), basic abstract algebra, topology, and complex analysis. I feel I could handle a more concise and mathematically mature approach to differential equations than the "cookbook" style that is normally given to first and second year students. I was wondering if someone to access to the above books that I am working through could suggest a concise reference that would cover what I need to know to solve the problems in them. In particular, it seems I need to know basic solution methods and basic existence and uniqueness theorem. On the other hand, I have no desire to specialize in differential equations, so a reference work like V.I Arnold's book on ordinary differential equations would not suit my needs, and I certainly don't have any need for knowledge of, say, the Laplace transform or PDEs. </p> <p>To reiterate, I just need a concise, high level overview of the basic (by hand) solution techniques for differential equations, along with some discussion of the basic uniqueness and existence theorems. I realize this is rather vague, but looking through the two problem books I listed above should give a more precise idea of what I mean. Worked examples would be a plus. I am very unfamiliar with the subject matter, so thanks in advance for putting up with my very nebulous request.</p> <p>EDIT: I found Coddington's "Intoduction to Ordinary Differential Equations" to be what I needed. Thanks guys.</p>
Zev Chonoles
264
<p>Let $X$ be some number <em>other</em> than $\frac{1}{2}$. Let's define $d=|X-\frac{1}{2}|$, and because $X\neq\frac{1}{2}$ we have $d&gt;0$. </p> <p>The <a href="http://en.wikipedia.org/wiki/Triangle_inequality#Reverse_triangle_inequality" rel="nofollow">reverse triangle inequality</a> says that for any $a$ and $b$, $$|a-b|\geq ||a|-|b||.$$ In particular, $$\left|X-\frac{n+1}{2n+3}\right|=\left|X-\left(\frac{n+\frac{3}{2}}{2n+3}-\frac{\frac{1}{2}}{2n+3}\right)\right|=\left|\left(X-\frac{1}{2}\right)-\left(-\frac{1}{4n+6}\right)\right|\geq d-\frac{1}{4n+6}$$</p> <p>Suppose that $$\lim_{n\to\infty}\frac{n+1}{2n+3}=X,$$ i.e. for any $\epsilon&gt;0$, there exists an $N\in\mathbb{N}$ such that: for all $n&gt;N$, $$\left|X-\frac{n+1}{2n+3}\right|&lt;\epsilon.$$ Then we have that for all $n&gt;N$, $$d-\frac{1}{4n+6}&lt;\epsilon,$$ or equivalently$$d&lt;\epsilon+\frac{1}{4n+6}.$$ But this is true for all $n&gt;N$ iff $d\leq \epsilon$. And $d\leq \epsilon$ for all $\epsilon&gt;0$ iff $d=0$. But $d&gt;0$; contradiction.</p> <p>Thus, the limit cannot be anything other than $\frac{1}{2}$.</p>
2,497,799
<h2>Question</h2> <p>While Solving a recursive equation , i am stuck at this summation and unable to move forward.Summation is </p> <blockquote> <p>$$\sum_{j=0}^{n-2}2^j (n-j)$$</p> </blockquote> <h2>My Approach</h2> <blockquote> <p>$$\sum_{j=0}^{n-2}2^j (n-j) = \sum_{j=0}^{n-2}2^j \times n-\sum_{j=0}^{n-2} 2^{j} \times j$$</p> </blockquote> <p>$$=n \times (2^{n-1}-1)-\sum_{j=0}^{n-2} 2^{j} \times j$$</p> <p>I am unable to move forward , please help me out!</p>
Leucippus
148,155
<p>In general: \begin{align} \sum_{k=0}^{n} x^{k} &amp;= \frac{1 - x^{n+1}}{1-x} \\ \sum_{k=0}^{n} k \, x^{k} &amp;= \frac{x \, (1 - (n+1) x^{n} + n x^{n+1})}{(1-x)^{2}} \end{align} Now \begin{align} \sum_{k=0}^{n} k \, \frac{1}{x^{k}} &amp;= \frac{n - (n+1) x + x^{n+1}}{x^{n} \, (1-x)^{2}} \\ \sum_{k=0}^{n} k \, x^{n-k} &amp;= \frac{n - (n+1) x + x^{n+1}}{ (x-1)^{2}}. \end{align}</p> <p>Using $$\sum_{k=0}^{n} k \, x^{n-k} = \sum_{k=0}^{n} (n-k) \, x^{k} = \sum_{k=0}^{n-2} (n-k) \, x^{k} + x^{n-1}$$ then \begin{align} \sum_{k=0}^{n-2} (n-k) \, x^{k} &amp;= \sum_{k=0}^{n} k \, x^{n-k} - x^{n-1} = \frac{(2x-1) x^{n-1} - (n+1) x + n}{(x-1)^{2}} \end{align}</p> <p>for the case of $x=2$ this becomes $$ \sum_{k=0}^{n-2} (n-k) \, 2^{k} = 3 \cdot 2^{n-1} - n - 2.$$</p>
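The closed form is easy to check by brute force (a small verification sketch in Python):

```python
def lhs(n):
    # sum_{k=0}^{n-2} (n - k) * 2^k, computed directly
    return sum((n - k) * 2 ** k for k in range(n - 1))

def rhs(n):
    # the closed form 3 * 2^(n-1) - n - 2
    return 3 * 2 ** (n - 1) - n - 2

for n in range(2, 25):
    assert lhs(n) == rhs(n)
print([lhs(n) for n in range(2, 8)])   # [2, 7, 18, 41, 88, 183]
```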
4,230,409
<p>I recently stumbled upon Edwin Moise's proof of the area of a square, but there is something that is bothering me in his proof - he claims that since the inequalities <span class="math-container">$(1)$</span> and <span class="math-container">$(5)$</span> are true, then <span class="math-container">$a =\sqrt{S_a}$</span>. But why do <span class="math-container">$(1)$</span> and <span class="math-container">$(5)$</span> imply that there is an equality between <span class="math-container">$a$</span> and <span class="math-container">$\sqrt{S_a}$</span>?</p> <p>I would be grateful if you could explain his proof, especially the latter part.</p> <p>Thank you!</p> <p><a href="https://i.stack.imgur.com/C0yV3.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/C0yV3.jpg" alt="enter image description here" /></a></p> <p><a href="https://i.stack.imgur.com/jAZjh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jAZjh.png" alt="enter image description here" /></a></p>
Sayan Dutta
943,723
<p>Here's another mathematical proof of the given fact. This is not really radically different from the last one, but it's more geometric in the sense that it makes the picture much more clear.</p> <p>We will use <span class="math-container">$b=\sqrt{\alpha S_a}$</span> in this argument.</p> <p>If possible let <span class="math-container">$a\neq b$</span> and <span class="math-container">$|b-a|=d$</span>.</p> <p>Now, because of the structure of the reals, for any <span class="math-container">$r\in \mathbb R$</span>, we can find a sequence of rationals <span class="math-container">$\{q_n\}_{n=1}^\infty$</span> which converges to <span class="math-container">$r$</span>. So, take a sequence <span class="math-container">$\{a_n\}_{n=1}^\infty$</span> which converges to <span class="math-container">$a$</span>. Put <span class="math-container">$\epsilon=\frac d 2$</span>. Because of convergence, for this choice of <span class="math-container">$\epsilon$</span>, there is an <span class="math-container">$n\in \mathbb N$</span> such that <span class="math-container">$|a-a_n|&lt;\epsilon=\frac d 2$</span>. This gives us a contradiction.</p> <p>Another way to see it is to construct the sequence and argue that since we have <span class="math-container">$$\lim_{n\to \infty} |a_n-a|=0$$</span> that is the distance between <span class="math-container">$a$</span> and a rational <span class="math-container">$a_n$</span> can be made arbitrarily small, <span class="math-container">$a$</span> and <span class="math-container">$b$</span> must coincide.</p> <p>I hope this makes the geometry clear.</p>
3,036,489
<blockquote> <p>For <span class="math-container">$k, l \in \mathbb N$</span> <span class="math-container">$$\sum_{i=0}^k\sum_{j=0}^l\binom{i+j}i=\binom{k+l+2}{k+1}-1$$</span> How can I prove this?</p> </blockquote> <p>I have tried some ideas involving Pascal's triangle, counting paths on a grid, and simple manipulation of the formula.</p> <p>It can be checked <a href="https://www.wolframalpha.com/input/?i=sum(i+%3D+0,+k,+sum(j+%3D+0,+l,+C(i+%2B+j,+i)))" rel="nofollow noreferrer">here (wolframalpha)</a>.</p> <p>If the proof is difficult, please let me know the main idea.</p> <p>Sorry for my poor English.</p> <p>Thank you.</p> <p>EDIT: I got <a href="https://math.stackexchange.com/a/3036497/625590">a great and short proof</a> using the hockey-stick identity from Anubhab Ghosal, but because of the form of that proof I could also get <a href="https://math.stackexchange.com/a/3036508/625590">Robert Z's specialized answer</a>. So I don't think this is fully a duplicate.</p>
Robert Z
299,698
<p>Your idea about a <strong>combinatorial proof</strong> which is related to counting paths in a grid is a good one!</p> <p>The binomial <span class="math-container">$\binom{i+j}i$</span> counts the paths on the grid from <span class="math-container">$(0,0)$</span> to <span class="math-container">$(i,j)$</span> moving only right or up. So the double sum <span class="math-container">$$\sum_{i=0}^k\sum_{j=0}^l\binom{i+j}i-1$$</span> counts the number of all such paths from <span class="math-container">$(0,0)$</span> to any vertex inside the rectangle <span class="math-container">$(0,k)\times (0,l)$</span> different from <span class="math-container">$(0,0)$</span>.</p> <p>Now consider the paths from <span class="math-container">$(0,0)$</span> to <span class="math-container">$(k+1,l+1)$</span> different from <span class="math-container">$$(0,0)\to (0,l+1)\to (k+1,l+1)\quad\text{and}\quad (0,0)\to (k+1,0)\to (k+1,l+1)$$</span> which are <span class="math-container">$$\binom{k+l+2}{k+1}-2.$$</span></p> <p>Now any path of the first kind can be completed to a path of the second kind by changing direction, going to the boundary of the rectangle <span class="math-container">$(0,k+1)\times(0,l+1)$</span> and then moving to the corner <span class="math-container">$(k+1,l+1)$</span> along the side.</p> <p>Is this a bijection between the first set of paths and the second one?</p>
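The identity itself is easy to confirm by brute force (a quick check of mine, independent of the path argument):

```python
from math import comb

# Left side: sum of binomials over the grid; right side: one binomial minus 1.
def lhs(k, l):
    return sum(comb(i + j, i) for i in range(k + 1) for j in range(l + 1))

def rhs(k, l):
    return comb(k + l + 2, k + 1) - 1

for k in range(8):
    for l in range(8):
        assert lhs(k, l) == rhs(k, l), (k, l)
print("identity holds for 0 <= k, l <= 7")
```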
11,435
<p><strong>Question</strong>: What is the correct notion of a <em>product</em> of integral (or rational) polytopes which induces a factorization of its Ehrhart (quasi-)polynomial into two primitive Ehrhart (quasi-)polynomials corresponding to its constituent polytopes, viz., $L_{P \times Q}(t) = L_{P}(t) L_{Q}(t)$? </p> <p>(<em>Motivation</em>) Given two closed integral polytopes $P$ and $Q$ each with vertices at $\{ \mathbf{0} , b_{1} \mathbf{e}_{1}, \dots, b_{n} \mathbf{e}_{n} \}$ and $\{ \mathbf{0} , d_{1} \mathbf{e}_{1}, \dots, d_{m} \mathbf{e}_{m} \}$, respectively, where $n, b_{i}, m, d_{j} \in \mathbb{N}$, define the integral polytope $R$ with vertices at $\{ \mathbf{0}, b_{1} \mathbf{e}_{1}, \dots, b_{n} \mathbf{e}_{n}, d_{1} \mathbf{e}_{n+1}, \dots, d_{m} \mathbf{e}_{n+m} \}$. </p> <p>The above construction cannot be the sought after product $P \times Q$. Suppose $P$ and $Q$ are defined by $b_{1} = b_{2} = d_{1} = d_{2} = 2$. It is easy to show that $L_{P}(1) = L_{Q}(1) = 6$. Define $R$ as above with vertices of $P$ and $Q$. It is true that $L_{R}(1) = 15 \neq 6^{2}$.</p> <p><strong>Question</strong>: What is $R$ in terms of $P$ and $Q$? Is it special in some way?</p> <p>Thanks!</p>
Robin Chapman
226
<p>Isn't this obvious (or have I missed something?)</p> <p>The appropriate product of rational polytopes $P\subseteq\mathbb R^m$ and $Q\subseteq\mathbb R^n$ is surely their Cartesian product $P\times Q\subseteq\mathbb R^{m+n}$. For a positive integer $t$, the integer points in $t(P\times Q)$ are those of the form $(a,b)$ where $a\in tP$ and $b\in tQ$, so there are $L_P(t)L_Q(t)$ of them.</p>
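To see the multiplicativity concretely (my own check, using the simplex $P=Q$ with $b_1=b_2=2$ from the question):

```python
# P = Q = conv{0, 2e1, 2e2}. The Cartesian product P x Q sits in R^4 and is
# cut out by x1, x2, x3, x4 >= 0, x1 + x2 <= 2, x3 + x4 <= 2.
def L_P(t):                       # Ehrhart count of the dilated triangle tP
    return sum(1 for x in range(2*t + 1) for y in range(2*t + 1 - x))

def L_PxQ(t):                     # Ehrhart count of the 4-dim product, directly
    r = range(2*t + 1)
    return sum(1 for a in r for b in r for c in r for d in r
               if a + b <= 2*t and c + d <= 2*t)

assert L_P(1) == 6                # matches the value quoted in the question
for t in range(1, 4):
    assert L_PxQ(t) == L_P(t) ** 2
print("L_{PxQ}(t) = L_P(t)^2 for t = 1, 2, 3")
```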
2,118,230
<blockquote> <p>Let $G$ be a group and let $H,K \le G$ be subgroups of finite index, say $|G:H| = m$ and $|G:K|=n$. Prove that $\mathrm{lcm}(m,n) \le |G:H \cap K| \le mn$. </p> </blockquote> <p>I was able to establish the upper bound, but I am having difficulty establishing the lower bound, so I consulted <a href="https://crazyproject.wordpress.com/2010/03/25/bounds-on-the-index-of-an-intersection-of-two-subgroups/" rel="nofollow noreferrer">this</a>. However, I am having trouble following the author's reasoning. Here is the relevant part I am referring to:</p> <blockquote> <p>Now...we have $H \cap K \le H \le G$. Thus, $m$ divides $|G:H \cap K|$ and $n$ divides $|G:H \cap K|$, so that $\mathrm{lcm}(m,n)$ divides $|G:H \cap K|$.</p> </blockquote> <p>Exactly what theorem is being used to make this conclusion?</p>
Stefan4024
67,746
<p><strong>HINT:</strong> $$[G:H \cap K] = [G:H][H:H \cap K]$$</p>
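A concrete illustration of the hint (my example, with $G=\mathbb{Z}_{12}$, $H=\langle 2\rangle$, $K=\langle 3\rangle$, so $H\cap K=\langle 6\rangle$):

```python
import math

# Indices in G = Z_12 with H = <2>, K = <3>, H ∩ K = <6>.
G = set(range(12))
H = {x for x in G if x % 2 == 0}
K = {x for x in G if x % 3 == 0}
HK = H & K

m = len(G) // len(H)          # [G:H] = 2
n = len(G) // len(K)          # [G:K] = 3
idx = len(G) // len(HK)       # [G : H ∩ K] = 6

assert idx == m * (len(H) // len(HK))     # the hint: [G:H][H:H∩K]
assert math.lcm(m, n) <= idx <= m * n     # the bounds from the question
print(m, n, idx)
```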
1,515,645
<p>The question is to find the values of a real number $\lambda$ for which the following equation is satisfied for all real values of $\alpha$ which are not integral multiples of $\pi/2$ $${\sin\lambda\alpha\over \sin\alpha}-{\cos\lambda\alpha\over \cos\alpha}=\lambda-1$$</p> <p>All I could do was to guess some values that just came to mind by observation, like $-1,1,3$</p> <p>What should be a more mathematical way to find all possible values of $\lambda$? </p> <p>SOURCE: KVPY 2015 SB stream </p>
some one
284,056
<p>$1111111111111111=10^{15} + ... + 1 = \large{(10^{16} -1)\over9}$</p> <p>$10^2 \equiv -2\pmod {17}$</p> <p>$10^{16} - 1 \equiv (-2)^8 - 1 = 256 -1 \equiv 1 -1 = 0\pmod {17}$</p> <p>$11111111= \large{(10^8 -1 )\over9} $</p> <p>$10^2 \equiv -2\pmod {17}$</p> <p>$10^8 - 1 \equiv (-2)^4 - 1 = 15 \pmod {17}$</p>
1,515,645
<p>The question is to find the values of a real number $\lambda$ for which the following equation is satisfied for all real values of $\alpha$ which are not integral multiples of $\pi/2$ $${\sin\lambda\alpha\over \sin\alpha}-{\cos\lambda\alpha\over \cos\alpha}=\lambda-1$$</p> <p>All I could do was to guess some values that just came to mind by observation, like $-1,1,3$</p> <p>What should be a more mathematical way to find all possible values of $\lambda$? </p> <p>SOURCE: KVPY 2015 SB stream </p>
2'5 9'2
11,123
<p>Do you know that $a^{p-1}-1$ is always divisible by $p$ when $p$ is prime and $a$ is not divisible by $p$? It's a famous result. Take $a=10$ and $p=17$ to get $9999999999999999$. And dividing out $9$ won't change divisibility by $17$.</p> <p>$11111111$ is just too easy to outright factor. It's clearly divisible by $11$, which is prime ($11\cdot01010101$). And also by $101$ which is prime ($101\cdot00110011$). After dividing these two prime factors, you have $$11111111=11\cdot101\cdot10001$$ For the remaining factor of $10001$, note that it is $100^2+1$. That's not helpful for factoring. But moving on: it is $101^2-200$. Still not helpful. $102^2-403$. $103^2-608$. $104^2-815$. $105^2-1024$. Aha. So it's $105^2-32^2=(105-32)(105+32)=73\cdot137$. So you have $$11111111=11\cdot73\cdot101\cdot137$$</p>
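Both the divisibility claim and the factorization are easy to confirm directly (a quick check, not part of the original answer):

```python
# 16-digit repunit is divisible by 17 (Fermat: 10^16 ≡ 1 mod 17),
# while the 8-digit repunit is not; the latter factors as 11*73*101*137.
r16 = int("1" * 16)
r8 = int("1" * 8)

assert pow(10, 16, 17) == 1              # Fermat's little theorem, a=10, p=17
assert r16 % 17 == 0
assert r8 % 17 != 0
assert r8 == 11 * 73 * 101 * 137
assert 105**2 - 32**2 == 10001 == 73 * 137   # the difference-of-squares step
print("checks pass")
```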
259,431
<p>In Richard Hammack's book, I came across the following question:</p> <blockquote> <p>There are two different equivalence relations on the set $A = \{a,b\}$. Describe them.</p> </blockquote> <p>OK, I found that the solution is</p> <p>$$R_1 = \{(a,a),(b,b),(a,b),(b,a)\}$$ and $$R_2 = \{(a,a),(b,b)\}$$</p> <p>Then I thought of two more equivalence relations, $R_3 = \{(a,a)\}$ and $R_4 = \{(b,b)\}$. But when I looked at the answer, I saw that $R_1$ and $R_2$ are correct but the others are not. Why is that?</p>
Isomorphism
16,823
<p>Read the reflexive property again. </p> <p>The reflexive property is not conditional: <em>every</em> element of $A$ must be related to itself, so any equivalence relation on $A=\{a,b\}$ must contain both $(a,a)$ and $(b,b)$. This rules out $R_3$ and $R_4$.</p>
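One can also brute-force the point (my own sketch): among all $16$ relations on $\{a,b\}$, exactly two are equivalence relations, and both contain $(a,a)$ and $(b,b)$.

```python
from itertools import combinations

A = ["a", "b"]
pairs = [(x, y) for x in A for y in A]

def is_equiv(R):
    refl = all((x, x) in R for x in A)
    symm = all((y, x) in R for (x, y) in R)
    trans = all((x, w) in R for (x, y) in R for (z, w) in R if y == z)
    return refl and symm and trans

# All 16 relations on a two-element set, filtered down to equivalences.
relations = [set(c) for k in range(5) for c in combinations(pairs, k)]
equivs = [R for R in relations if is_equiv(R)]
print(equivs)
assert len(equivs) == 2
assert all({("a", "a"), ("b", "b")} <= R for R in equivs)
```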
3,612,316
<p>I am trying to interpret an equation, but can't understand how the less-than signs work:</p> <p><span class="math-container">$\sum _{n=1}^{\infty } \frac{1}{n^2}&lt;1+\sum _{n=1}^{\infty } \frac{1}{n (n+1)}=1+\sum _{n=1}^{\infty } \left(\frac{1}{n}-\frac{1}{n+1}\right)=1+1=2$</span></p> <p>The first part returns False when I put it into Mathematica, which I don't think is how it should be interpreted, since this value approximates 1.644. </p> <p><span class="math-container">$\sum _{n=1}^{\infty } \frac{1}{n^2}&lt;1$</span></p>
Wouter
89,671
<p>Your statement "unpacks" as follows.</p> <p>First <span class="math-container">$$\sum _{n=1}^{\infty } \frac{1}{n^2}&lt;1+\sum _{n=1}^{\infty } \frac{1}{n (n+1)}$$</span> second <span class="math-container">$$1+\sum _{n=1}^{\infty } \frac{1}{n (n+1)}=1+\sum _{n=1}^{\infty } \left(\frac{1}{n}-\frac{1}{n+1}\right)$$</span> third <span class="math-container">$$1+\sum _{n=1}^{\infty } \left(\frac{1}{n}-\frac{1}{n+1}\right)=1+1$$</span> fourth <span class="math-container">$$1+1=2$$</span></p> <p>The use of multiple equalities and inequalities like this is a bit of an abuse of notation. What if I wrote <span class="math-container">$$1+1+1=1+2=3$$</span> then you might be tempted to say <span class="math-container">$1+1+1=1+2$</span> is true, so the above becomes <span class="math-container">$$\text{true}=3$$</span> (and in certain programming languages the statement really would be interpreted like that) but this is almost never what is meant. What is meant is that the series of (in)equalities should be unpacked, here as <span class="math-container">$1+1+1=1+2$</span> and <span class="math-container">$1+2=3$</span>, independently.</p>
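Numerically, the links of the chain can be checked on truncated sums (my sketch; the infinite series are cut off at $N$ terms):

```python
# Truncate the infinite sums at N terms; the telescoping sum collapses exactly.
N = 10**5
s_sq = sum(1 / n**2 for n in range(1, N + 1))
s_tel = sum(1 / (n * (n + 1)) for n in range(1, N + 1))  # = 1 - 1/(N+1)

assert s_sq < 1 + s_tel                        # the strict inequality in the chain
assert abs(s_tel - (1 - 1 / (N + 1))) < 1e-9   # the telescoping equality
print(round(s_sq, 4), "<", round(1 + s_tel, 4))
```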
2,598,848
<p>I have a project I'm doing in desmos and want to know if this is possible.</p> <p>I want to be able to set some parameter, a, equal to the solution of f(x) = 0. I plugged my specific equation into Maple and it could not arrive at an explicit solution for x.</p> <p>However, when I type in the equation f(x) = 0 into desmos, it does a fine job of plotting the x values for which this equation is true.</p> <p>My question is this: Is it possible to take this x value(s) and set my parameter (AKA constant or 'slider') equal to it? Does desmos provide this functionality?</p> <p>Thanks!</p>
Mostafa Ayaz
518,023
<p>Consider the picture below:</p> <p><a href="https://i.stack.imgur.com/n285G.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/n285G.jpg" alt="enter image description here"></a></p> <p>The description of the two ellipses in polar coordinates is: $$r^2={1\over {{\cos^2\theta\over{a^2}}+{\sin^2\theta\over{b^2}}}}$$ $$r^2={1\over {{\cos^2\theta\over{b^2}}+{\sin^2\theta\over{a^2}}}}$$ so the area we seek (assuming $a&gt;b$) is: $$S=\int_{\pi\over4}^{3\pi\over4}{r^2\over2}d\theta={1\over 2}\int_{\pi\over4}^{3\pi\over4}{1\over {{\cos^2\theta\over{a^2}}+{\sin^2\theta\over{b^2}}}}d\theta=ab\tan^{-1}{b\over a}$$</p> <p>Then the area common to the two ellipses is $\Large 4ab\tan^{-1}{\min(a,b)\over \max(a,b)}$</p>
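As a numerical cross-check of mine (not part of the original answer), a midpoint-rule Riemann sum of the sector integral $S=\frac12\int_{\pi/4}^{3\pi/4} r^2\,d\theta$ matches the closed form $ab\tan^{-1}(b/a)$:

```python
import math

def sector_area(a, b, steps=100000):
    # S = (1/2) ∫ r^2 dθ over [π/4, 3π/4] with r^2 = 1/(cos²θ/a² + sin²θ/b²)
    lo, hi = math.pi / 4, 3 * math.pi / 4
    h = (hi - lo) / steps
    total = 0.0
    for i in range(steps):
        th = lo + (i + 0.5) * h            # midpoint rule
        r2 = 1 / (math.cos(th)**2 / a**2 + math.sin(th)**2 / b**2)
        total += 0.5 * r2 * h
    return total

a, b = 3.0, 2.0
S = sector_area(a, b)
assert abs(S - a * b * math.atan2(b, a)) < 1e-6   # closed form a·b·tan⁻¹(b/a)
print(S)
```

For $a=b$ (a circle of radius $a$) the formula gives $4a^2\tan^{-1}(1)=\pi a^2$, as it should.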
1,583,540
<p>Is it possible to find an example of a one-sided inverse of a function, other than for matrices?</p> <p>I am trying to find such an example but am having no luck. Does anybody have an idea?</p>
Tim Raczkowski
192,581
<p>Here's another example: Let $f:\Bbb R\to[0,\infty)$ be defined by $f(x)=x^2$, and $g:[0,\infty)\to\Bbb R$ by $g(x)=\sqrt x$.</p> <p>Now, $$(f\circ g)(x)=(\sqrt x)^2=x,$$ but $$(g\circ f)(x)=\sqrt{x^2}=|x|.$$</p>
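The same example in code (a direct transcription, with floating-point comparisons):

```python
import math

f = lambda x: x ** 2            # f : R -> [0, ∞)
g = lambda x: math.sqrt(x)      # g : [0, ∞) -> R

# f ∘ g is the identity on [0, ∞) ...
for x in (0.0, 0.25, 2.0, 9.0):
    assert math.isclose(f(g(x)), x)

# ... but g ∘ f is |x|, not the identity on R.
assert math.isclose(g(f(-3.0)), 3.0)
print("f∘g = id on [0,∞), but g∘f = abs on R")
```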
244,048
<p>A couple days ago I posted this on MSE (<a href="https://math.stackexchange.com/questions/1853774/determining-a-function-is-harmonic-from-mean-value-property-for-just-three-ra">here</a>) but in retrospect it might be more appropriate for this site.</p> <p>This theorem is well-known (maybe it can be called Morera's theorem):</p> <blockquote> <p>A continuous function satisfying the mean value property on balls is harmonic.</p> </blockquote> <p>I was recently surprised to hear in a talk that the conclusion still holds if you only check the mean value property on three (I think) radii. Does anyone have a reference or name for this result? I would enjoy seeing the details and a proof.</p>
Dave Witte Morris
68,305
<p>Yes, in fact you only need two radii. More precisely, a theorem on page 167 of <a href="http://dx.doi.org/10.2307/2321600">this Monthly paper of Zalcman</a> says any two radii $r_1$ and $r_2$ will work unless the quotient $r_1/r_2$ is a quotient of zeros of a certain explicit function. The author says this result "was discovered by Jean Delsarte as far back as 1957."</p>
602,353
<p>Let $f: \mathbb{R} \to \mathbb{R}$ be given by $$f(x)= \begin{cases} -x-1, &amp;\text{if }x≥0\\ 1, &amp;\text{if }x&lt;0. \end{cases}$$</p> <p>Denote by</p> <ul> <li>$U$ the <em>usual</em> topology on $\mathbb{R}$.</li> <li>$H$ the <em>half-open interval</em> topology on $\mathbb{R}$.</li> <li>$C$ is the <em>open half-line topology</em> on $\mathbb{R}$. </li> </ul> <p>Is $f$: </p> <ol> <li>$U$-$U$ continuous?</li> <li>$U$-$H$ continuous?</li> <li>$U$-$C$ continuous?</li> <li>$H$-$U$ continuous?</li> <li>$C$-$C$ continuous?</li> <li>$C$-$U$ continuous?</li> <li>$C$-$H$ continuous?</li> <li>$H$-$H$ continuous?</li> </ol> <p>So, I kinda understand how to find things that are $U$-$U$ or whatever continuous, but the function is kinda throwing me off. So maybe some tips or suggestions as to where to start. </p>
user642796
8,348
<p>We should start by looking at what the inverse images of various basic open sets look like. This will help us determine the continuity of the function. (And plotting the graph of the function is quite helpful.)</p> <p>Let's start out with $U$, the usual topology. The basic open sets in this topology are just the open intervals $(a,b)$ for $a&lt;b$. The inverse images of these sets break down into several cases.</p> <ul> <li>If $a &lt; b \leq -1$, then $f^{-1} [ (a,b) ] = ( -b-1 , -a-1 )$;</li> <li>If $a &lt; -1 &lt; b \leq 1$, then $f^{-1} [ (a,b) ] = [ 0 , -a-1 )$;</li> <li>If $a &lt; -1 &lt; 1 &lt; b$, then $f^{-1} [ (a,b) ] = [ 0 , -a-1 ) \cup ( -\infty , 0 )$;</li> <li>If $-1 \leq a &lt; b \leq 1$, then $f^{-1} [ (a,b) ] = \varnothing$;</li> <li>If $-1 \leq a &lt; 1 &lt; b$, then $f^{-1} [ (a,b) ] = ( -\infty , 0 )$;</li> <li>If $1 \leq a &lt; b$, then $f^{-1} [ (a,b) ] = \varnothing$.</li> </ul> <p>As $[ 0 , 1 ) = [ 0 , -(-2)-1 ) = f^{-1} [ (-2,1) ]$ is not open with respect to the usual topology, then $f$ is not $U$-$U$ continuous. A simple check shows that all of these sets are open in the half-open interval topology, and so $f$ is $H$-$U$ continuous. As $( 0 , 2 ) = ( -(-1)-1 , -(-3)-1 ) = f^{-1} [ (-3,-1) ]$ is not open in the open half-line topology, then $f$ is not $C$-$U$ continuous.</p> <p>For the half-open interval topology you perform a similar analysis for the basic open sets of this topology, which are of the form $[ a , b )$ for $a &lt; b$. (If you know the connection between this topology and the usual topology, you should be able to see that the above actually answers the question of the possible $U$-$H$, $H$-$H$ and $C$-$H$ continuity of $f$ for <strong><em>two</em></strong> of these.)</p> <p>Finally, for the open half-line topology you consider the inverse images of all sets of the form $( a , + \infty )$.</p>
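The case analysis can be spot-checked numerically (my sketch; it samples grid points rather than proving set equality):

```python
# f(x) = -x - 1 for x >= 0, and f(x) = 1 for x < 0.
def f(x):
    return -x - 1 if x >= 0 else 1

def in_interval(y, a, b):       # y ∈ (a, b)
    return a < y < b

# f^{-1}[(-2, 1)] should be [0, 1): check on a fine grid.
xs = [k / 1000 for k in range(-3000, 3001)]
pre = [x for x in xs if in_interval(f(x), -2, 1)]
assert all(0 <= x < 1 for x in pre)
assert any(abs(x) < 1e-9 for x in pre)      # 0 itself is included
print("f^{-1}[(-2,1)] agrees with [0,1) on the grid")
```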
1,335,734
<blockquote> <p>A library leases its photocopier. One monthly bill was 750 dollars for 12,000 copies. Another month, the bill was $862.50 for 16,500 copies. How much does the library pay for each copy?</p> </blockquote> <p>How should I set up the equation for this problem? How exactly do I find the linear equation giving the cost in terms of the number of copies used? </p> <p>I am sorry if this is a stupid question; I am trying to review pre-calc and have forgotten some of the techniques used to solve these problems. </p>
SamC
139,011
<p>Total derivative: ${df \over dx}={\partial f \over \partial x}{dx \over dx} + {\partial f \over \partial y}{dy \over dx}$ and ${dx \over dx }=1$</p> <p>So,</p> <p>$${d \over{dx}}3(x^2+y^2)^2 = {\partial \over \partial x}3(x^2+y^2)^2 + {\partial \over \partial y}3(x^2+y^2)^2 {dy \over dx} $$</p> <p>You should be able to go from here.</p>
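A numerical check of the total-derivative formula (my own sketch; the concrete dependence $y(x)=\sin x$ is my choice, not from the answer):

```python
import math

# f(x, y) = 3 (x^2 + y^2)^2 with y = y(x) = sin(x)
def f(x, y):
    return 3 * (x**2 + y**2) ** 2

def total_formula(x):
    y, dy = math.sin(x), math.cos(x)
    fx = 12 * x * (x**2 + y**2)      # ∂f/∂x
    fy = 12 * y * (x**2 + y**2)      # ∂f/∂y
    return fx + fy * dy              # ∂f/∂x + (∂f/∂y) dy/dx

def total_numeric(x, h=1e-6):
    g = lambda t: f(t, math.sin(t))  # central difference of x -> f(x, y(x))
    return (g(x + h) - g(x - h)) / (2 * h)

for x in (0.3, 1.0, 2.0):
    assert math.isclose(total_formula(x), total_numeric(x), rel_tol=1e-5)
print("total derivative verified numerically")
```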
187,511
<p>There are two questions:</p> <ol> <li><p>How to prove that in general </p> <p>$[\hat{A}(\mathbb HP^m)]_{4m} = 0$</p> <p>It is possible to verify it for low values of $m$.</p></li> <li><p>How to prove that in general</p> <p>$\left[\frac{\hat{A}(\mathbb HP^m)} { \hat{M}(\mathbb HP^m) }\right]_{4m} = 0$</p> <p>where $\hat{M}(\mathbb HP^m)$ is the Mayer class defined by</p> <p>$\hat{M}(V) = \prod _{i=1}^{s}\cosh \left( \frac{y_{{i}}}{2} \right)$</p> <p>with</p> <p>$p(V) =\prod _{i=1}^{s}(1+{y_{i}}^2)$.</p> <p>It is possible to verify it for low values of $m$.</p></li> </ol>
Johannes Ebert
9,928
<p>Since $HP^n$ is a spin manifold and since the homogeneous metric has positive scalar curvature, the $\hat{A}$-genus of $HP^n$ is zero by the index theorem and the Weitzenboeck-Lichnerowicz formula.</p> <p>I cannot at the moment answer the question for the quotient by the Mayer class. But there is a direct computation for the $\hat{A}$-genus, which might help for the quotient as well. I learnt this method from Hirzebruch, when he was still around at the Max-Planck-Institute.</p> <p>In my lecture notes <a href="http://wwwmath.uni-muenster.de/u/jeber_02/skripten/mainfile.pdf" rel="nofollow">http://wwwmath.uni-muenster.de/u/jeber_02/skripten/mainfile.pdf</a>, page 161, I discuss the computation of the basic characteristic classes of $HP^n$.</p> <p>Consider the $CP^1$-bundle $q: CP^{2n+1} \to HP^n$. $q$ induces an injection on cohomology. Therefore, it is enough to calculate that $(q^{\ast}\hat{A}(THP^n),[CP^{2n}])=0$. One can show that $q^{\ast} THP^n \oplus H^{\otimes 2} = TCP^{2n+1}$, where $H$ is the Hopf bundle. Denote by $x \in H^2 (CP^{2n+1})$ the generator (first Chern class of $H$).</p> <p>For a general even power series $F$ with associated multiplicative sequence, we therefore have $q^{\ast} F(THP^n) = F(x)^{2n+2}F(2x)^{-1}$. We have to determine the $2n$th coefficient and prove that it is zero for $n &gt;0$, when $F$ is the power series of the $\hat{A}$-genus. The $2n$th coefficient is $\frac{1}{2 \pi i}\int F(x)^{2n+2}F(2x)^{-1} x^{-2n-1} dx$, where the integration is over a circle around $0$ in the complex plane. Form the generating series $$ G(t):=\sum_{n \geq 0} \langle F(x)^{2n+2}F(2x)^{-1}; [CP^{2n}] \rangle t^{2n} $$ Using the above formula for the coefficients and the sum formula for the geometric series, one sees that the generating series is the same as $$ \frac{1}{2 \pi i}\int \frac{F(x)^2}{F(2x)x} \frac{1}{1- \frac{F(x)t}{x}} dx. $$ For the $\hat{A}$-genus, $F(x) = \frac{x/2}{\sinh(x/2)}$. Perform the substitution $\sinh(x/2)=u$ in the above integral. Finally, you arrive at the integral $$ \frac{1}{2 \pi i}\int \frac{1}{u-t/2} du. $$ The value is independent of $t$, as long as $|t|$ is small enough. Therefore $G(t) \equiv 1$, and this proves the result. I have not performed the first sanity check for this calculation (plugging in the Hirzebruch $L$-class has to give the correct value for the signature of $HP^n$, namely $1$). </p>
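The vanishing of the relevant coefficients can also be verified by formal power-series arithmetic with exact rationals (my sketch, not part of the original argument): the coefficient of $x^{2n}$ in $F(x)^{2n+2}/F(2x)$, with $F(x)=\frac{x/2}{\sinh(x/2)}$, is $0$ for $n\ge 1$.

```python
from fractions import Fraction
from math import factorial

N = 12  # truncate all power series at x^N

def mul(a, b):
    c = [Fraction(0)] * (N + 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if i + j <= N:
                    c[i + j] += ai * bj
    return c

def power(a, k):
    r = [Fraction(0)] * (N + 1)
    r[0] = Fraction(1)
    for _ in range(k):
        r = mul(r, a)
    return r

# S(x) = sinh(x/2)/(x/2) = sum_{k>=0} (x/2)^{2k} / (2k+1)!
S = [Fraction(0)] * (N + 1)
for k in range(N // 2 + 1):
    S[2 * k] = Fraction(1, factorial(2 * k + 1) * 4**k)

# F(x) = (x/2)/sinh(x/2) = 1/S(x), via the series-reciprocal recurrence
F = [Fraction(0)] * (N + 1)
F[0] = Fraction(1)
for d in range(1, N + 1):
    F[d] = -sum(S[j] * F[d - j] for j in range(1, d + 1))

S2 = [S[i] * 2**i for i in range(N + 1)]   # S(2x) = 1/F(2x)

for n in range(1, 5):
    G = mul(power(F, 2 * n + 2), S2)       # series of F(x)^{2n+2}/F(2x)
    assert G[2 * n] == 0                   # coefficient of x^{2n} vanishes
print("A-hat coefficients vanish for n = 1..4")
```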
2,111,579
<p>A question:</p> <p>If $\displaystyle{m}^{2}={1}-{m},{\quad\text{and}\quad}{n}^{2}={1}-{n},{\quad\text{and}\quad}{n}\ne{m};$</p> <p>Prove that $\displaystyle{m}^{7}+{n}^{7}+{30}={1}$</p> <p>Without finding the roots of the equation $\displaystyle{x}^{2}+{x}-{1}={0}$.</p> <p>Is there such a shortcut solution?</p>
Jean Marie
305,862
<p>I would like to present a very simple solution by interpreting these matrices as operators on $\mathbb{R}^n$ (which will surprise nobody...). The triangular matrix $A$ acts as a <strong>discrete integration operator</strong>: </p> <p>For any $x_1,x_2,x_3,\cdots x_n$:</p> <p>$$\tag{1}A (x_1,x_2,x_3,\cdots x_n)^T=(s_1,s_2,s_3,\cdots s_n)^T \ \ \text{with} \ \ \begin{cases}s_1&amp;=&amp;x_1&amp;&amp;&amp;&amp;\\s_2&amp;=&amp;x_1+x_2&amp;&amp;\\s_3&amp;=&amp;x_1+x_2+x_3\\...\end{cases}$$</p> <p>(1) is equivalent to:</p> <p>$$\tag{2}A^{-1} (s_1,s_2,s_3,\cdots s_n)^T=(x_1,x_2,x_3,\cdots x_n)^T \ \ \text{with} \ \ \begin{cases}x_1&amp;=&amp; \ \ s_1&amp;&amp;&amp;&amp;\\x_2&amp;=&amp;-s_1&amp;+&amp;s_2&amp;&amp;\\x_3&amp;=&amp;&amp;&amp;-s_2&amp;+&amp;s_3\\...\end{cases}$$</p> <p>and it suffices now to "collect the coefficients" in the right order in order to constitute the inverse matrix.</p> <p>(Thus the inverse operation is - in a natural way - a <strong>discrete derivation operator</strong>).</p>
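In code (a small sketch of mine), the cumulative-sum operator of $(1)$ and the difference operator of $(2)$ invert each other exactly:

```python
# A acts as cumulative summation; its inverse is the difference operator.
def A(x):
    s, out = 0, []
    for xi in x:
        s += xi
        out.append(s)
    return out

def A_inv(s):
    return [s[0]] + [s[i] - s[i - 1] for i in range(1, len(s))]

x = [3, 1, 4, 1, 5]
assert A_inv(A(x)) == x
assert A(A_inv(x)) == x
print(A(x))        # [3, 4, 8, 9, 14]
```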