422,941
<p>How can we expand the following by the binomial expansion, up to the term including $x^3$? That'll be 4 terms.</p> <p>This is the expression to be expanded: $\sqrt{2+x\over1-x}$</p> <p>I understand how to do the numerator and denominator individually. Now this is what I'm doing - having expanded the denominator (using the standard expansion formula for $(1+x)^{-1}$), do I now simply need to multiply this expansion once with the numerator $(2+x)$? I'm not getting the correct answer, but is this the correct method?</p>
marty cohen
13,079
<p>We will use the expansion $\sqrt{1+x} = 1+x/2+x^2(1/2)(-1/2)/2 + x^3(1/2)(-1/2)(-3/2)/6 + ... = 1+x/2-x^2/8+x^3/16+... $ where "..." means "terms of higher order than $x^3$" both in this expansion and in the math below.</p> <p>Note: Because we are only interested in terms up to order $x^3$, whenever a term of higher order occurs, it is dropped and subsumed into the "..." part. For those who know the "big-$O$" notation, the "+..." could also be written as $+O(x^4)$.</p> <p>$\begin{align} \sqrt{2+x\over1-x} &amp;=\sqrt{(2+x)(1+x+x^2+x^3+...)}\\ &amp;=\sqrt{2+2x+2x^2+2x^3+...+x+x^2+x^3+...}\\ &amp;=\sqrt{2+3x+3x^2+3x^3+...}\\ &amp;=\sqrt{2}\sqrt{1+3x/2+3x^2/2+3x^3/2+...}\\ &amp;=\sqrt{2}(1+(3x/2+3x^2/2+3x^3/2)/2 -(3x/2+3x^2/2)^2/8 +(3x/2)^3/16+...)\\ &amp;=\sqrt{2}(1+3x/4+3x^2/4+3x^3/4 -(3x/2)^2(1+x)^2/8 +27x^3/128+...)\\ &amp;=\sqrt{2}(1+3x/4+3x^2/4+3x^3/4 -(9x^2/32)(1+2x+x^2) +27x^3/128+...)\\ &amp;=\sqrt{2}(1+3x/4+3x^2/4+3x^3/4 -9x^2/32-9x^3/16 +27x^3/128+...)\\ &amp;=\sqrt{2}(1+3x/4+x^2(3/4-9/32)+x^3(3/4-9/16+27/128))+...\\ &amp;=\sqrt{2}(1+3x/4+x^2((3*8-9)/32)+x^3((3*32-9*8+27)/128))+...\\ &amp;=\sqrt{2}(1+3x/4+15x^2/32+51x^3/128)+...\\ \end{align} $</p> <p>This is why computer algebra systems came to be.</p>
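Recomputing the coefficients independently gives the degree-3 Taylor polynomial $\sqrt{2}\,(1+3x/4+15x^2/32+51x^3/128)$. A quick numerical sanity check in plain Python (no CAS needed): the truncation error should shrink like $x^4$ as $x\to 0$.

```python
import math

def f(x):
    # The exact function sqrt((2+x)/(1-x))
    return math.sqrt((2 + x) / (1 - x))

def taylor3(x):
    # Degree-3 Taylor polynomial about x = 0
    return math.sqrt(2) * (1 + 3*x/4 + 15*x**2/32 + 51*x**3/128)

# Truncation error should scale like x^4: shrinking x by 10 shrinks it ~10^4
for x in (0.1, 0.01):
    print(x, abs(f(x) - taylor3(x)))
```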
1,237,450
<p>I couldn't follow a step while reading this <a href="https://math.stackexchange.com/a/1237316/135088">answer</a>. Since I do not have enough reputation to post this as a comment, I'm asking a question instead. The answer uses "partial integration" to write this $$ \int \frac{dv}{(v^2 + 1)^\alpha} = \frac{v}{2(\alpha-1)(v^2+ 1)^{\alpha - 1}} + \frac{2\alpha -3}{2\alpha - 2}\int \frac{dv}{(v^2 + 1)^{\alpha -1}} $$ I would like to know what this technique is, and how this equality follows from it.</p>
Chappers
221,811
<p>"Partial integration" just means integration by parts. The important step here is the writing of the fraction as $$ \frac{1}{(v^2+1)^{\alpha}} = \frac{1}{(v^2+1)^{\alpha}} - \frac{1}{(v^2+1)^{\alpha-1}} + \frac{1}{(v^2+1)^{\alpha-1}} \\ = -\frac{v^2}{(v^2+1)^{\alpha}} + \frac{1}{(v^2+1)^{\alpha-1}}. $$ Then you integrate the first term by parts, integrating $\frac{v}{(v^2+1)^{\alpha}}$ and differentiating $v$; this gives $$ \int \left( \frac{1}{(v^2+1)^{\alpha}}-\frac{1}{(v^2+1)^{\alpha-1}} \right) \, dv = -\int \frac{v^2}{(v^2+1)^{\alpha}} \, dv \\ = \frac{v}{2(\alpha-1)(v^2+1)^{\alpha-1}} - \frac{1}{2(\alpha-1)}\int \frac{dv}{(v^2+1)^{\alpha-1}}, $$ and then rearranging gives the result.</p>
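For a concrete sanity check, the reduction formula can be evaluated as a definite integral, say with $\alpha = 2$ on $[0,1]$ (an illustrative choice; both sides then equal $\tfrac14+\tfrac\pi8$). A small sketch using Simpson's rule:

```python
import math

def simpson(f, a, b, n=1000):
    # Composite Simpson's rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

alpha = 2
lhs = simpson(lambda v: (v**2 + 1)**-alpha, 0.0, 1.0)

# Right-hand side of the reduction formula, evaluated from v = 0 to v = 1;
# the boundary term v / (2(alpha-1)(v^2+1)^(alpha-1)) vanishes at v = 0
boundary = 1 / (2 * (alpha - 1) * (1**2 + 1)**(alpha - 1))
rhs = boundary + (2*alpha - 3) / (2*alpha - 2) * simpson(lambda v: (v**2 + 1)**(1 - alpha), 0.0, 1.0)

print(lhs, rhs)  # both approximately 1/4 + pi/8
```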
122,546
<p>There is a famous proof of the Sum of integers, supposedly put forward by Gauss.</p> <p>$$S=\sum\limits_{i=1}^{n}i=1+2+3+\cdots+(n-2)+(n-1)+n$$</p> <p>$$2S=(1+n)+(2+(n-1))+\cdots+(n+1)$$</p> <p>$$S=\frac{n(1+n)}{2}$$</p> <p>I was looking for a similar proof for when $S=\sum\limits_{i=1}^{n}i^2$</p> <p>I've tried the same approach of adding the summation to itself in reverse, and I've found this:</p> <p>$$2S=(1^2+n^2)+(2^2+n^2+1^2-2n)+(3^2+n^2+2^2-4n)+\cdots+(n^2+n^2+(n-1)^2-2(n-1)n)$$</p> <p>From which I noted I could extract the original sum;</p> <p>$$2S-S=(1^2+n^2)+(2^2+n^2-2n)+(3^2+n^2-4n)+\cdots+(n^2+n^2-2(n-1)n)-n^2$$</p> <p>Then if I collect all the $n$ terms;</p> <p>$$2S-S=n\cdot (n-1)^2 +(1^2)+(2^2-2n)+(3^2-4n)+\cdots+(n^2-2(n-1)n)$$</p> <p>But then I realised I still had the original sum in there, and taking that out meant I no longer had a sum term to extract.</p> <p>Have I made a mistake here? How can I arrive at the answer of $\dfrac{n (n + 1) (2 n + 1)}{6}$ using a method similar to the one I expound on above? <strong>I.e., following Gauss' line of reasoning</strong>?</p>
Pedro
23,350
<p>Since I think the solution Tyler proposes is very useful and accessible, I'll spell it out for you:</p> <p>We know that</p> <p>$$(k+1)^3-k^3=3k^2+3k+1$$</p> <p>If we evaluate this identity at the values $1$ to $n$ we get the following:</p> <p>$$(\color{red}{1}+1)^3-\color{red}{1}^3=3\cdot \color{red}{1}^2+3\cdot \color{red}{1}+1$$ $$(\color{red}{2}+1)^3-\color{red}{2}^3=3 \cdot \color{red}{2}^2+3 \cdot \color{red}{2}+1$$ $$(\color{red}{3}+1)^3-\color{red}{3}^3=3 \cdot \color{red}{3}^2+3 \cdot \color{red}{3}+1$$ $$\cdots=\cdots$$ $$(\color{red}{n-1}+1)^3-(\color{red}{n-1})^3=3(\color{red}{n-1})^2+3(\color{red}{n-1})+1$$ $$(\color{red}{n}+1)^3-\color{red}{n}^3=3\color{red}{n}^2+3\color{red}{n}+1$$</p> <p>We sum these equations column by column.</p> <p>Note that in the LHS the numbers cancel out with each other, except for the $(n+1)^3$ and the starting $-1$ ($2^3-1^3+3^3-2^3+4^3-3^3+\cdots+n^3-(n-1)^3+(n+1)^3-n^3$). We get:</p> <p>$$(n+1)^3-1 = 3(1+2^2+3^2+\cdots +(n-1)^2+n^2)+ 3(1+2+3+\cdots +(n-1)+n)+(\underbrace{1+1+\cdots+1}_{n})$$</p> <p>We can write this in sigma notation as:</p> <p>$$(n+1)^3-1=\sum\limits_{k=1}^n(3k^2+3k+1)$$</p> <p>Naming our sum $S$ we have that:</p> <p>$$(n+1)^3-1=3S+\sum\limits_{k=1}^n(3k+1)$$</p> <p>We know how to compute the sum in the RHS, because</p> <p>$$\sum\limits_{k=1}^n 3k =3\frac{n(n+1)}{2}$$</p> <p>$$\sum\limits_{k=1}^n 1 =n$$</p> <p>(We're summing $n$ ones in the last sum.)</p> <p>$$(n+1)^3-1=3S+3 \frac{n(n+1)}{2}+n$$</p> <p>$$n^3+3n^2+3n=3S+\frac{3}{2}n^2+\frac{3}{2}n+n$$</p> <p>$$n^3+\frac{3n^2}{2}+\frac{n}{2}=3S$$</p> <p>$$\frac{n^3}{3}+\frac{n^2}{2}+\frac{n}{6}=S$$</p> <p>This factors to</p> <p>$$\frac{n(2n+1)(n+1)}{6}=S$$</p> <p>which is what you wanted.</p>
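The closed form is easy to sanity-check against direct summation; a minimal sketch:

```python
def sum_of_squares(n):
    # Closed form obtained from the telescoping argument above
    return n * (n + 1) * (2 * n + 1) // 6

# Compare against brute-force summation for the first few n
for n in range(1, 100):
    assert sum_of_squares(n) == sum(k * k for k in range(1, n + 1))
print(sum_of_squares(10))  # 385
```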
JeremyKun
13,528
<p>There is a more beautiful Gauss-style proof that involves writing the numbers in triangles instead of in a line.</p> <p><img src="https://i.stack.imgur.com/za9s2.png" alt="Gauss style proof"></p> <p>I leave the details to you.</p>
280,346
<p>I am wondering how to tell Mathematica that a function, say <code>F[x]</code>, is a real-valued function so that, e.g., the <code>Conjugate</code> command will pass through it (assuming <code>k</code> and <code>x</code> are real):</p> <pre><code>Conjugate[E^(-I k x) F[x]] = E^(I k x) F[x] </code></pre> <p>I tried to make a huge calculation using the <code>Conjugate</code> command, but without setting the arbitrary function <code>F[x]</code> as a real-valued function, the result is completely messy.</p>
Roman
26,598
<p>Use <a href="https://reference.wolfram.com/language/ref/Assuming.html" rel="nofollow noreferrer">assumptions</a>:</p> <pre><code>Assuming[Element[F[_], Reals], Conjugate[E^(-I k x) F[x]] // FullSimplify] (* E^(I Conjugate[k] Conjugate[x]) F[x] *) </code></pre> <p>With several real-valued symbols:</p> <pre><code>Assuming[Element[F[_] | F'[_], Reals], Conjugate[E^(-I k x) F[x] F'[x]] // FullSimplify] (* E^(I Conjugate[k] Conjugate[x]) F[x] F'[x] *) </code></pre> <p>If we mention all symbols in this list of reals, then we recover @BobHanlon's <a href="https://reference.wolfram.com/language/ref/ComplexExpand.html" rel="nofollow noreferrer"><code>ComplexExpand</code></a> solution:</p> <pre><code>Assuming[Element[F[_] | F'[_] | k | x, Reals], Conjugate[E^(-I k x) F[x] F'[x]] // FullSimplify] (* E^(I k x) F[x] F'[x] *)</code></pre>
1,747,696
<p>First of all: beginner here, sorry if this is trivial.</p> <p>We know that $ 1+2+3+4+\ldots+n = \dfrac{n\times(n+1)}2 $ .</p> <p>My question is: what if instead of moving by 1, we moved by an arbitrary number, say 3 or 11? $ 11+22+33+44+\ldots+11n = $ ? The way I've understood the usual formula is that the first number plus the last equals the second number plus second to last, and so on. In this case, this is also true but I can't seem to find a way to generalize it.</p>
MJ73550
331,483
<p>This is a question of notation.</p> <p>$1+2+3+4+\dots+n$ is a notation for $\sum_{k=1}^n k$.</p> <p>I assume that $11+22+33+44+\dots+11\times n$ is a notation for $\sum_{k=1}^n 11\times k$.</p> <p>In this case, you just get: $$ \sum_{k=1}^n 11\times k = 11 \times \sum_{k=1}^n k = 11 \frac{n(n+1)}{2}$$</p> <p>With any $M$ (say $M=3$ or $11$), you get $\sum_{k=1}^n M\times k= M \times \sum_{k=1}^n k = M\frac{n(n+1)}{2}$.</p>
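In code the step size just factors out of the sum; a trivial check:

```python
def arithmetic_sum(m, n):
    # m + 2m + ... + nm = m * n(n+1)/2
    return m * n * (n + 1) // 2

print(arithmetic_sum(11, 4))  # 11 + 22 + 33 + 44 = 110
assert arithmetic_sum(11, 4) == sum(11 * k for k in range(1, 5))
```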
2,359,292
<p>I have been working on a problem in Quantum Mechanics and I have encountered an equation as given below.</p> <p>$$\frac{d\hat A(t)}{dt} = \hat F(t)\hat A(t)$$</p> <p>where the hat denotes an operator.</p> <p>How can this differential equation be solved? Will the usual rules for linear homogeneous first-order differential equations with variable coefficients apply here?</p>
md2perpe
168,433
<p>Let us introduce an evolution operator $\hat U(t_1, t_0)$ such that $\hat A(t_1) = \hat U(t_1, t_0) \hat A(t_0).$ It satisfies $$\frac{\partial}{\partial t_1} \hat U(t_1, t_0) = \hat F(t_1) \, \hat U(t_1, t_0).$$</p> <p>$\newcommand{\prodint}{{\prod}}$ We can use a <a href="https://en.wikipedia.org/wiki/Product_integral" rel="nofollow noreferrer">product integral</a> to express $\hat U$: $$\hat U(t_1, t_0) = \prodint_{t_0}^{t_1} e^{\hat F(s) \, ds}$$ where $t$ increases from right to left in the product since we have $$\begin{align} \hat U(t_1 + \Delta t_1, t_0) &amp; = \prodint_{t_0}^{t_1 + \Delta t_1} e^{\hat F(s) \, ds} \\ &amp; = \left( \prodint_{t_1}^{t_1 + \Delta t_1} e^{\hat F(s) \, ds} \right) \left( \prodint_{t_0}^{t_1} e^{\hat F(s) \, ds} \right) \\ &amp; \approx e^{\hat F(t_1) \, \Delta t_1} \left( \prodint_{t_0}^{t_1} e^{\hat F(s) \, ds} \right) \\ &amp; \approx \left( 1 + \hat F(t_1) \, \Delta t_1 \right) \, \hat U(t_1, t_0) \\ \end{align}$$ so that $$ \frac{\partial}{\partial t_1} \hat U(t_1, t_0) = \lim_{\Delta t_1 \to 0} \frac{\hat U(t_1 + \Delta t_1, t_0) - \hat U(t_1, t_0)}{\Delta t_1} = \hat F(t_1) \, \hat U(t_1, t_0) $$</p> <p>Before we continue, let us study what happens if $\hat F$ depends on a parameter and we differentiate $\hat U$ with respect to this parameter. 
We add an index $\lambda$ to $\hat F$ and $\hat U$, approximate the product integral with a finite product, differentiate, and take limits: $$ \frac{\partial}{\partial\lambda} \hat U_\lambda(t_1, t_0) \approx \frac{\partial}{\partial\lambda} \prod_{k=1}^{n} e^{\hat F_\lambda(s_k) \, \Delta s_k} \\ = \sum_{k=1}^{n} \left( \prod_{l=k+1}^{n} e^{\hat F_\lambda(s_l) \, \Delta s_l} \right) \left( \frac{\partial \hat F_\lambda}{\partial\lambda}(s_k) \, \Delta s_k \, e^{\hat F_\lambda(s_k) \, \Delta s_k} \right) \left( \prod_{l=1}^{k-1} e^{\hat F_\lambda(s_l) \, \Delta s_l} \right) \\ \to \int_{t_0}^{t_1} \left( \prodint_{t}^{t_1} e^{\hat F_\lambda(s) \, ds} \right) \left( \frac{\partial \hat F_\lambda}{\partial\lambda}(t) \right) \left( \prodint_{t_0}^{t} e^{\hat F_\lambda(s) \, ds} \right) \, dt \\ = \int_{t_0}^{t_1} \hat U_\lambda(t_1, t) \left( \frac{\partial \hat F_\lambda}{\partial\lambda}(t) \right) \hat U_\lambda(t, t_0) \, dt $$</p> <p>The second derivative becomes $$ \frac{\partial^2}{\partial\lambda^2} \hat U_\lambda(t_1, t_0) = \frac{\partial}{\partial\lambda} \int_{t_0}^{t_1} \hat U_\lambda(t_1, t) \left( \frac{\partial \hat F_\lambda}{\partial\lambda}(t) \right) \hat U_\lambda(t, t_0) \, dt \\ = \int_{t_0}^{t_1} \frac{\partial \hat U_\lambda}{\partial\lambda}(t_1, t) \left( \frac{\partial \hat F_\lambda}{\partial\lambda}(t) \right) \hat U_\lambda(t, t_0) \, dt + \int_{t_0}^{t_1} \hat U_\lambda(t_1, t) \left( \frac{\partial \hat F_\lambda}{\partial\lambda}(t) \right) \frac{\partial \hat U_\lambda}{\partial\lambda}(t, t_0) \, dt \\ = \int_{t_0}^{t_1} \left( \int_{t}^{t_1} \hat U_\lambda(t_1, t') \left( \frac{\partial \hat F_\lambda}{\partial\lambda}(t') \right) \hat U_\lambda(t', t) \, dt' \right) \left( \frac{\partial \hat F_\lambda}{\partial\lambda}(t) \right) \hat U_\lambda(t, t_0) \, dt \\ + \int_{t_0}^{t_1} \hat U_\lambda(t_1, t) \left( \frac{\partial \hat F_\lambda}{\partial\lambda}(t) \right) \left( \int_{t_0}^{t} \hat U_\lambda(t, t') \left( 
\frac{\partial \hat F_\lambda}{\partial\lambda}(t') \right) \hat U_\lambda(t', t_0) \, dt' \right) \, dt \\ = \int_{t_0}^{t_1} \int_{t}^{t_1} \hat U_\lambda(t_1, t') \left( \frac{\partial \hat F_\lambda}{\partial\lambda}(t') \right) \hat U_\lambda(t', t) \left( \frac{\partial \hat F_\lambda}{\partial\lambda}(t) \right) \hat U_\lambda(t, t_0) \, dt' \, dt \\ + \int_{t_0}^{t_1} \int_{t_0}^{t} \hat U_\lambda(t_1, t) \left( \frac{\partial \hat F_\lambda}{\partial\lambda}(t) \right) \hat U_\lambda(t, t') \left( \frac{\partial \hat F_\lambda}{\partial\lambda}(t') \right) \hat U_\lambda(t', t_0) \, dt' \, dt \\ = \int_{t_0}^{t_1} \int_{t_0}^{t_1} \hat U_\lambda(t_1, t_+) \left( \frac{\partial \hat F_\lambda}{\partial\lambda}(t_+) \right) \hat U_\lambda(t_+, t_-) \left( \frac{\partial \hat F_\lambda}{\partial\lambda}(t_-) \right) \hat U_\lambda(t_-, t_0) \, dt' \, dt $$ where $t_+ = \max(t,t')$ and $t_- = \min(t,t').$</p> <p>Often one can write $\hat F(t) = \hat F_0 + \lambda \hat F_i(t)$ where $\hat F_0$ is constant and corresponds to no interaction. We can then expand $\hat U_\lambda(t_1, t_0)$ as a Taylor series in $\lambda$: $$\begin{align} \hat U_\lambda(t_1, t_0) &amp; = \hat U_0(t_1, t_0) \\ &amp; + \lambda \int_{t_0}^{t_1} \hat U_0(t_1, t) \, \hat F_i(t) \, \hat U_0(t, t_0) \, dt \\ &amp; + \frac12 \lambda^2 \int_{t_0}^{t_1} \int_{t_0}^{t_1} \hat U_0(t_1, t_+) \, \hat F_i(t_+) \, \hat U_0(t_+, t_-) \, \hat F_i(t_-) \, \hat U_0(t_-, t_0) \, dt' \, dt \\ &amp; + \cdots \end{align}$$</p> <p>The terms can be seen as no interaction, one interaction, two interactions, and so on.</p>
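The ordered-product construction can be illustrated numerically. The sketch below is my own illustrative choice, not from the answer: it uses the nilpotent generator $\hat F(t) = \begin{pmatrix}0&t\\0&0\end{pmatrix}$, for which all the $\hat F(t)$ commute, so the product integral collapses to the ordinary exponential $\exp\left(\int_0^1 \hat F(s)\,ds\right)=\begin{pmatrix}1&1/2\\0&1\end{pmatrix}$ and we can compare against it.

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm(A, terms=20):
    # Matrix exponential of a 2x2 matrix via truncated power series
    result = [[1.0, 0.0], [0.0, 1.0]]
    term = [[1.0, 0.0], [0.0, 1.0]]
    for n in range(1, terms):
        term = matmul([[x / n for x in row] for row in term], A)
        result = [[result[i][j] + term[i][j] for j in range(2)] for i in range(2)]
    return result

def F(t):
    # Nilpotent generator: all F(t) commute, so the exact answer is known
    return [[0.0, t], [0.0, 0.0]]

# Ordered product of exp(F(s) ds), later times multiplied on the left
n, dt = 1000, 1.0 / 1000
U = [[1.0, 0.0], [0.0, 1.0]]
for k in range(n):
    U = matmul(expm([[dt * x for x in row] for row in F((k + 0.5) * dt)]), U)
print(U)  # approx [[1, 0.5], [0, 1]] = exp(integral of F over [0, 1])
```

For a non-commuting $\hat F(t)$ the left-to-right ordering of the factors is essential, which is exactly the point of the product integral.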
290,132
<p>Let $x,a,b$ be real numbers and $f(x)$ a (nongiven) real-analytic function.</p> <p>How to find $f(x)$ such that for all $x$ we have $f(x)+af(x+1)=b^x$?</p> <p>In particular I wonder most about the case $a=1$ and $b=e$. (I already know the trivial cases $a=-1$ and $a=0$.)</p> <p>I know how to express $f(x+1)$ as a Taylor series once I have the Taylor series for $f(x)$, and I assume this is related? But does it help to find a closed-form solution here? If there is a closed-form solution? Am I on the right track here, or do we need to use something completely different or more general? Does this relate to the Fibonacci sequence?</p>
sdcvvc
12,523
<p>Note that you can write $Lf = e^x$ where $(L f)(x)=f(x)+f(x+1)$ is a linear operation.</p> <p>Therefore, it's enough to find a single function $f$ such that $Lf = e^x$, and all solutions will be of the form $f+g$ where $L g=0$. You can discover as in Haskell Curry's answer that $f$ can be taken to be $\frac{1}{1+e} e^x$.</p> <p>Functions satisfying $L g = 0$ are called <a href="http://en.wikipedia.org/wiki/Antiperiodic_function#Antiperiodic_functions">antiperiodic</a>; one example is $\exp(\pi i x)$, since $\exp(\pi i (x+1))=-\exp(\pi i x)$.</p>
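Both claims are easy to verify numerically; a small sketch:

```python
import math, cmath

def f(x):
    # Particular solution: f(x) + f(x+1) = e^x
    return math.exp(x) / (1 + math.e)

def g(x):
    # Antiperiodic function: g(x+1) = -g(x), so g(x) + g(x+1) = 0
    return cmath.exp(math.pi * 1j * x)

for x in (-2.0, 0.0, 0.3, 1.7):
    assert abs(f(x) + f(x + 1) - math.exp(x)) < 1e-12
    assert abs(g(x) + g(x + 1)) < 1e-12
print("checks pass")
```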
3,142,417
<p>If <span class="math-container">$a , b , c$</span> and <span class="math-container">$d$</span> are positive integers, and <span class="math-container">$ab$</span> is greater than <span class="math-container">$cd$</span>, is it always true that <span class="math-container">$a+b$</span> is greater than or equal to <span class="math-container">$c+d$</span>?</p>
att epl
610,770
<p>Nope... consider: <span class="math-container">$$100=20 \times 5$$</span> and <span class="math-container">$$54=27 \times 2$$</span> Here <span class="math-container">$ab = 100 \gt 54 = cd$</span>, yet <span class="math-container">$a+b = 25 \lt 29 = c+d$</span>.</p>
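Counterexamples like this are plentiful; a brute-force search over small positive integers (illustrative sketch):

```python
# Search for (a, b, c, d) with ab > cd but a + b < c + d
hits = [(a, b, c, d)
        for a in range(1, 30) for b in range(1, 30)
        for c in range(1, 30) for d in range(1, 30)
        if a * b > c * d and a + b < c + d]
print(len(hits), hits[0])
assert (20, 5, 27, 2) in hits  # the counterexample above
```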
2,631,230
<p>So, I'm studying mathematics on my own and I took a book about Proofs in Abstract Mathematics with the following exercise:</p> <p>For each $k\in\Bbb{N}$ we have that $\Bbb{N}_k$ is finite</p> <p>Just to give some context on what theorems and definitions we can use:</p> <ol> <li>Definition: $\Bbb{N}_k = \{1, 2, ..., k \} $</li> <li>Definition: A set $S$ is infinite iff there exists a one-to-one but not onto $\ f:S\to S$</li> <li>Definition: $A\sim B$ means $A$ is equipotent to (or same cardinality of) $B$</li> <li>Theorem: if $A$ is infinite and $A\sim B$, then $B$ is infinite</li> <li>Theorem: if $A$ is infinite and $f:A\to B$ is one-to-one, then $B$ is infinite</li> <li>Theorem: Let $\ f:A \to B$ be one-to-one and $C\subseteq A$ then $\ g:C \to B$, $\ g(x)=f(x)\ $ for any $\ x\in C$, is also one-to-one</li> <li>Lemma: Let $k\in\Bbb{N}$, then $\Bbb{N}_k- \{x\} \sim \Bbb{N}_{k-1}$ for any $x\in \Bbb{N}_k$</li> </ol> <p>What I did was:</p> <p>Suppose that $\Bbb{N}_k$ is not finite for every $k\in\Bbb{N}$, then by the Well-Ordering Principle, there is a smallest element $k\in\Bbb{N}$ such that $\Bbb{N}_k$ is infinite. Let $x_0\in\Bbb{N}_k\ $ be the smallest element of $\Bbb{N}_k$ and define $C=\Bbb{N}_k - \{x_0\}$. Let $f:\Bbb{N}_k \to C\ $ be $\ f(n)=n+1$. We will prove that $f$ is one-to-one. Let $x_1,x_2\in\Bbb{N}_k$ such that $f(x_1)=f(x_2)$, then $x_1+1=x_2+1$. Hence $x_1=x_2$, which proves that $f$ is one-to-one. Thus we have that $C$ is infinite. Then $C\sim \Bbb{N}_{k-1}$ by the Lemma, and thus we must have that $\Bbb{N}_{k-1}$ is infinite. However this contradicts our hypothesis that $k$ is the least element such that $\Bbb{N}_k$ is infinite. Thus it must be that for each $k\in \Bbb{N}$ we have $\Bbb{N}_k$ is finite.</p> <p>My question is if the proof above, especially when creating the function $f:\Bbb{N}_k\to C$, has any flaw. The book explicitly says we should use the 6th theorem listed above, but I didn't find any explicit use of it. 
Or is there another way to prove it?</p> <p><strong>Edited:</strong> </p> <p>As some of you commented, the proof above was wrong. The function I created sends $k$ to $k+1$, which is not in $C$. I think this one is correct:</p> <p>If $\Bbb{N}_k$ is not finite for every $k \in \Bbb{N}$, then by the Well-Ordering principle there exists a least element $k \in \Bbb{N}$ such that $\Bbb{N}_k$ is infinite. By definition, there exists $f:\Bbb{N}_k \to \Bbb{N}_k$ such that $f$ is one-to-one but not onto. Then, because $f$ is not onto, there exists $y\in\Bbb{N}_k$ such that $y\neq f(x)$ for every $x\in \Bbb{N}_k$. Pick $x_0\neq y$ and define $A=\Bbb{N}_k-\{x_0\}$. Let $g:A\to A$ be defined as: $$g(x)= \begin{cases} f(x) \ if \ f(x)\neq x_0 \\ f(x_0) \ if \ f(x) = x_0 \end{cases}$$</p> <p>We will prove that $g$ is one-to-one but not onto. </p> <p>First we show $g$ is one-to-one. Let $x_1,x_2 \in A$ such that $x_1\neq x_2$. Since $f$ is one-to-one, $f(x_1)\neq f(x_2)$. If $f(x_1)=x_0$, then $f(x_2)\neq x_0$. Hence $g(x_1)=f(x_0)$ and $g(x_2)=f(x_2)$. Since $x_0\neq x_2$, then $f(x_0)\neq f(x_2)$ and thus $g(x_1)\neq g(x_2)$. Without loss of generality, if $f(x_2)=x_0$, then $g(x_1)\neq g(x_2)$. If $f(x_1)\neq x_0$ and $f(x_2)\neq x_0$, then $g(x_1)=f(x_1)$ and $g(x_2)=f(x_2)$. Hence $g(x_1)\neq g(x_2)$. We have that $g$ is one-to-one.</p> <p>We now show that $g$ is not onto. Note that, because $x_0\neq y$, where $y\neq f(x)$ for all $x\in\Bbb{N}_k$, we have $y\in A=\Bbb{N}_k-\{x_0\}$. Let $x\in A$. If $f(x)=x_0$, then $g(x)=f(x_0)\neq y$. If $f(x)\neq x_0$, then $g(x)=f(x)\neq y$. Hence there exists $y \in A$ such that for any $x \in A$ we have $g(x)\neq y$. Thus, $g$ is not onto.</p> <p>We have demonstrated that $g:A\to A$ is one-to-one, but not onto, hence $A$ is infinite by definition. Given that $A=\Bbb{N}_k-\{x_0\}$, by our lemma we have that $\Bbb{N}_{k-1}$ is also infinite. 
However, this contradicts our hypothesis that $k$ is the smallest element such that $\Bbb{N}_k$ is infinite. Hence it must be that for every $k\in\Bbb{N}$ we have $\Bbb{N}_k$ is finite.</p> <p>Sorry if my proof writing is bad in any way. If you have any stylistic suggestion, or any suggestion at all, I would gladly read it :) </p>
Community
-1
<p>$$\int^{1}_{0}\int^{1}_{0}4xy\sqrt{x^2+y^2} \, dy \, dx=$$ $$\int^{\pi/4}_{0}\int^{\sec\theta}_{0}4r^4\sin\theta \cos\theta \, dr \, d\theta+\int^{\pi/2}_{\pi/4}\int^{\csc\theta}_{0}4r^4\sin\theta \cos\theta \, dr \, d\theta $$</p> <p><strong>Explanation:</strong></p> <p>$$x=r\cos \theta$$ $$y=r\sin \theta$$ $$dx\,dy=r\,dr\,d\theta$$ <a href="https://i.stack.imgur.com/4GxTu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4GxTu.png" alt="enter image description here"></a></p> <p>For the lower triangle, $0\leq\theta \leq \pi/4$ and $r$ runs from $0$ to $\sec\theta$ (because the boundary $x=1$ translates to $r\cos\theta=1$). Similarly for the other region, $\pi/4\leq\theta\leq\pi/2$ and $r$ runs from $0$ to $\csc\theta$ (from $y=1$, i.e. $r\sin\theta=1$).</p>
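As a spot-check: integrating in Cartesian coordinates (antiderivative $\frac13(x^2+y^2)^{3/2}$ in $y$) gives the value $\frac{8}{15}\left(2\sqrt{2}-1\right)\approx 0.97516$, and a numerical quadrature agrees. An illustrative sketch:

```python
import math

def simpson2d(f, n=200):
    # Tensor-product composite Simpson rule on the unit square (n even)
    h = 1.0 / n
    w = [1] + [4 if i % 2 else 2 for i in range(1, n)] + [1]
    total = sum(w[i] * w[j] * f(i * h, j * h)
                for i in range(n + 1) for j in range(n + 1))
    return total * h * h / 9

val = simpson2d(lambda x, y: 4 * x * y * math.sqrt(x * x + y * y))
exact = 8 / 15 * (2 * math.sqrt(2) - 1)  # from the Cartesian antiderivative
print(val, exact)
```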
1,793,231
<p>Can you please help me on this question? $\DeclareMathOperator{\adj}{adj}$</p> <p>$A$ is a real $n \times n$ matrix; show that:</p> <p>$\adj(\adj(A)) = (\det A)^{n-2}A$</p> <p>I don't know which of the expressions below might help</p> <p>$$ \adj(A)A = \det(A)I\\ (\adj(A))_{ij} = (-1)^{i+j}\det(A(i|j)) $$</p> <p><em>Editor's note: adjoint here refers to the <a href="https://en.wikipedia.org/wiki/Adjugate_matrix" rel="nofollow">classical adjoint</a>.</em></p>
Jyothi Krishna Gudi
616,478
<p>We know the property <span class="math-container">$\operatorname{adj}(A)\cdot A = |A|\,I$</span>, and also that <span class="math-container">$|\operatorname{adj}(A)| = |A|^{n-1}$</span>.</p> <p>Now consider</p> <p><span class="math-container">$$\operatorname{adj}(\operatorname{adj}(A))\cdot \operatorname{adj}(A)=|\operatorname{adj}(A)|\,I=|A|^{n-1}\,I$$</span></p> <p>Post-multiply this by <span class="math-container">$A$</span>:</p> <p><span class="math-container">$$\operatorname{adj}(\operatorname{adj}(A))\cdot \operatorname{adj}(A)\cdot A=|A|^{n-1}\,I\cdot A$$</span></p> <p><span class="math-container">$$\operatorname{adj}(\operatorname{adj}(A))\cdot |A|\,I=|A|^{n-1}\,A$$</span></p> <p><span class="math-container">$$\operatorname{adj}(\operatorname{adj}(A))=|A|^{n-2}\,A$$</span></p>
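The identity is easy to check on a small integer matrix; a self-contained sketch with a naive cofactor expansion (fine for tiny matrices):

```python
def minor(m, i, j):
    # Delete row i and column j
    return [row[:j] + row[j+1:] for k, row in enumerate(m) if k != i]

def det(m):
    # Laplace expansion along the first row
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det(minor(m, 0, j)) for j in range(len(m)))

def adj(m):
    # Adjugate: transpose of the cofactor matrix
    n = len(m)
    return [[(-1) ** (i + j) * det(minor(m, j, i)) for j in range(n)] for i in range(n)]

A = [[2, 1, 0], [0, 3, 1], [1, 0, 2]]
d = det(A)  # here n = 3, so adj(adj(A)) should equal det(A)^(3-2) * A
assert adj(adj(A)) == [[d * x for x in row] for row in A]
print(d)  # 13
```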
2,965,717
<p>How would you prove that <span class="math-container">$$\displaystyle \prod_{k=1}^\infty \left(1+\dfrac{1}{2^k}\right) \lt e ?$$</span></p> <p>Wolfram|Alpha shows that the product evaluates to <span class="math-container">$2.384231 \dots$</span> but is there a nice way to write this number? </p> <p>A hint about solving the problem was given but I don't know how to prove the lemma.</p> <p>Lemma : Let, <span class="math-container">$a_1,a_2,a_3, \ldots,a_n$</span> be positive numbers and let <span class="math-container">$s=a_1+a_2+a_3+\cdots+a_n$</span> then <span class="math-container">$$(1+a_1)(1+a_2)(1+a_3)\cdots(1+a_n)$$</span> <span class="math-container">$$\le 1+s+\dfrac{s^2}{2!}+\dfrac{s^3}{3!}+\cdots+\dfrac{s^n}{n!}$$</span></p>
user
505,767
<p><strong>HINT</strong></p> <p>Taking <span class="math-container">$\log$</span> of both sides, the statement is equivalent to proving that</p> <p><span class="math-container">$$\sum_{k=1}^\infty \log \left(1+\dfrac{1}{2^k}\right) \lt 1$$</span></p> <p>then use <span class="math-container">$\log(1+x)&lt;x$</span> together with <span class="math-container">$\sum_{k=1}^\infty \dfrac{1}{2^k}=1$</span>.</p>
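Numerically the bound is comfortable; a quick sketch of the partial products:

```python
import math

# Partial products of prod_{k>=1} (1 + 2^-k); the log bound gives product < e
p = 1.0
for k in range(1, 60):
    p *= 1 + 0.5 ** k
print(p)  # approx 2.38423, safely below e = 2.71828...
assert p < math.e
```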
2,934,973
<p><a href="https://i.stack.imgur.com/XQ80d.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XQ80d.png" alt="enter image description here"></a> </p> <p>Let <span class="math-container">$\theta = \angle BAC$</span>. Then we can write <span class="math-container">$\cos \theta = \dfrac{x}{\sqrt{2}}$</span>. Find <span class="math-container">$x$</span>.</p> <p>Currently, that is all I have. I think I should use the Law of Cosines. How should I proceed?</p>
nonuser
463,553
<p><span class="math-container">$$a = BC = \sqrt{2^2+3^2}=\sqrt{13}$$</span> <span class="math-container">$$b = AC = \sqrt{3^2+1^2}=\sqrt{10}$$</span> <span class="math-container">$$c = AB = \sqrt{2^2+1^2}=\sqrt{5}$$</span></p> <p>so <span class="math-container">$$\cos \theta = {b^2+c^2-a^2\over 2bc} = {1\over 5\sqrt{2}}\implies x=1/5$$</span></p>
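A quick numerical check of this computation, using the squared side lengths from the answer:

```python
import math

a2, b2, c2 = 13, 10, 5  # squared side lengths BC^2, AC^2, AB^2
cos_theta = (b2 + c2 - a2) / (2 * math.sqrt(b2 * c2))  # law of cosines
assert math.isclose(cos_theta, 1 / (5 * math.sqrt(2)))
x = cos_theta * math.sqrt(2)  # since cos(theta) = x / sqrt(2)
print(x)  # approx 0.2, i.e. x = 1/5
```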
368,292
<p>This question is two-fold.</p> <p>The first question is rather specific: what are some small examples of negative surgeries on negative knots that give rise to the same 3-manifold? I know one class of examples coming from Borromean rings. By performing <span class="math-container">$-1/m$</span> and <span class="math-container">$-1/n$</span> surgery on two components of the Borromean rings, we get the double twist knot <span class="math-container">$K_{m,n}$</span> which is negative. Now, <span class="math-container">$-1/l$</span> surgery on <span class="math-container">$K_{m,n}$</span> is just <span class="math-container">$-1/l$</span>, <span class="math-container">$-1/m$</span> and <span class="math-container">$-1/n$</span> surgery on the Borromean rings, and by the symmetry of the Borromean rings, it is the same as <span class="math-container">$-1/m$</span> surgery on <span class="math-container">$K_{l,n}$</span> and <span class="math-container">$-1/n$</span> surgery on <span class="math-container">$K_{l,m}$</span>. I would like to know some other simple examples (preferably with knots with small number of crossings).</p> <p>The second question is a bit vague: what is known about the class of 3-manifolds obtained as negative surgeries on negative knots? I am curious to know if there are some theorems saying this class of 3-manifolds are &quot;nice&quot; in some way. Any kind of input would be greatly appreciated.</p>
Marc Kegel
84,120
<p><span class="math-container">$(-7)$</span>-surgery on the left-handed trefoil yields the lens space <span class="math-container">$L(7,2)$</span>, which is defined to be the <span class="math-container">$(-7/2)$</span>-surgery along the unknot.</p> <p>Similarly, one can get more examples from negative torus knots producing lens spaces. Moser classified all surgeries along torus knots in [L. Moser, Elementary surgery along a torus knot, Pacific J. Math. 38 (1971), 737–745.].</p>
Oğuz Şavk
131,172
<p>In general, to find explicit examples for the first part of your question is a hard problem, sometimes impossible. Actually, it is related to the notion of <em>cosmetic surgeries</em>, see Ni and Wu's <a href="http://www.its.caltech.edu/%7Eyini/Published/Cosmetic.pdf" rel="nofollow noreferrer">paper</a>, and further articles.</p> <p>You may predict conjectures or obtain obstructions due to Thurston's theorem: all but finitely many surgeries on a hyperbolic knot result in hyperbolic manifolds.</p> <p>On the other hand, as Kegel said, L. Moser completely classified surgeries along torus knots as follows:</p> <p><strong>Theorem:</strong> Let <span class="math-container">$K$</span> be an <span class="math-container">$(r,s)$</span> torus knot in <span class="math-container">$S^3$</span> and let <span class="math-container">$Y$</span> be the <span class="math-container">$3$</span>-manifold obtained by performing a <span class="math-container">$(p,q)$</span>-surgery along <span class="math-container">$K$</span>. 
Set <span class="math-container">$\sigma = rsp - q$</span>.</p> <p><strong>(a).</strong> If <span class="math-container">$|\sigma|&gt;1$</span>, then <span class="math-container">$Y$</span> is the Seifert manifold <span class="math-container">$\Sigma(\alpha_1, \alpha_2, \alpha_3)$</span> over <span class="math-container">$S^2$</span> with three exceptional fibers of multiplicities <span class="math-container">$\alpha_1=s, \alpha_2=r$</span> and <span class="math-container">$\alpha_3=|\sigma|$</span>.</p> <p><strong>(b).</strong> If <span class="math-container">$\sigma = \pm 1$</span>, then <span class="math-container">$Y$</span> is the lens space <span class="math-container">$L(|q|,ps^2)$</span>.</p> <p><strong>(c).</strong> If <span class="math-container">$\sigma = 0$</span>, then <span class="math-container">$Y$</span> is the connected sum of lens spaces <span class="math-container">$L(r,s) \#L(s,r)$</span>.</p> <p><strong>EDIT:</strong> Considering mirror symmetry of knots and following the common convention on surgeries, we have for <span class="math-container">$n \geq 1$</span>,</p> <ol> <li><span class="math-container">$\Sigma(r,s,rsn-1)$</span> is obtained by <span class="math-container">$(-1,n)$</span>-surgery along the left-handed <span class="math-container">$(r,s)$</span> torus knot.</li> <li><span class="math-container">$\Sigma(r,s,rsn+1)$</span> is obtained by <span class="math-container">$(-1,n)$</span>-surgery along the right-handed <span class="math-container">$(r,s)$</span> torus knot.</li> </ol> <p>Note that these are the only integral homology spheres obtained by surgery on a torus knot in <span class="math-container">$S^3$</span>.</p>
75,791
<p>When will a probabilistic process obtained by an "abstraction" from a deterministic discrete process satisfy the Markov property?</p> <p>Example #1) Suppose we have some recurrence, e.g., $a_t=a^2_{t-1}$, $t&gt;0$. It's a deterministic process. However, if we make an "abstraction" by just considering one particular digit of each $a_t$, we have a probabilistic process. We wonder whether it satisfies the Markov property or not.</p> <p>Example #2) Suppose we have a finite state automaton. Now we make an "abstraction" by grouping the states into sets of states and obtaining a probabilistic finite state automaton. We consider this automaton in time and we wonder whether it satisfies the Markov property or not.</p> <p>The particular examples are not important; of interest are some general conditions under which a deterministic process becomes a Markov process after an "abstraction" of the kind above (in any context). I'm looking for any references on this matter.</p> <hr> <p>Edit: as pointed out in the comments below, examples #1 and #2 were not well specified. Now there's a distribution on the starting state $a_0$ and $a_t=f(a_{t-1})$ is a deterministic function. Then $a_t$, $t\geq 0$, is a degenerate Markov chain. Now the question is whether grouping some of the states of such a chain can yield a Markov chain (i.e., pointers to literature where the conditions are discussed would be appreciated).</p> <p>A more general problem seems to be: "Given any Markov chain (i.e., not a degenerate one), group the states into sets: what are the conditions under which the resulting process satisfies the Markov assumption?"</p>
Did
6,179
<p>It seems both examples fit into the following setting. One starts from a (deterministic) dynamic system defined by $a_0\in A$ and $a_{t+1}=u(a_t)$ for every nonnegative integer $t$, for a given function $u:A\to A$, and one considers the $X$-valued process $(x_t)_{t\geqslant0}$ defined by $x_t=\xi(a_t)$ for every nonnegative integer $t$, for a given function $\xi:A\to X$.</p> <p>For every fixed $a_0$, $(x_t)_{t\geqslant0}$ is deterministic hence $(x_t)_{t\geqslant0}$ is a (quite degenerate) inhomogeneous Markov chain whose transition at time $t$ is the kernel $Q_t$ such that $Q_t(x,y)=P(x_{t+1}=y\mid x_t=x)$ is undefined for every $x\ne\xi(a_t)$ and $\delta_y(\xi(a_{t+1}))$ if $x=\xi(a_t)$.</p> <p>One way to get a truly random process $(x_t)_{t\geqslant0}$ in this setting is to choose a randomly distributed $a_0$. But then there is every reason to expect that $(x_t)_{t\geqslant0}$ will <strong>not</strong> be a Markov chain and in fact the construction above is a classical way to encode random processes with a complex dependence structure.</p> <p>One example which might help get a feeling of what is happening is the case when $A=\mathbb R$, $u(a)=a+\frac15$ for every $a\in A$, $a_0$ uniformly distributed on $(0,1)$, and $\xi:\mathbb R\to\mathbb N$ the function integer part. Then $x_{t+1}\in\{x_t,x_t+1\}$ with full probability but $(x_t)_{t\geqslant0}$ is not Markov. However, $x_{t+1}$ is a deterministic function of $(x_{t},x_{t-1},x_{t-2},x_{t-3},x_{t-4})$ (or something similar) hence $(x_t)_{t\geqslant0}$ is a (degenerate) fifth-order Markov process.</p>
This is reminiscent of the condition for a <a href="http://en.wikipedia.org/wiki/Hidden_Markov_model" rel="nofollow">hidden Markov chain</a> to be a Markov chain, which reads as follows.</p> <p>Assume that $(a_t)_{t\geqslant0}$ is a Markov chain with transition kernel $q$ and let $(x_t)_{t\geqslant0}$ denote the process defined by $x_t=\xi(a_t)$ for every nonnegative $t$. Then $(x_t)_{t\geqslant0}$ is a Markov chain for every starting distribution of $a_0$ if and only if the sum $$ q_\xi(a,y)=\sum\limits_{b\in A}q(a,b)\cdot[\xi(b)=y] $$ depends on $a$ only through $x=\xi(a)$, that is, if and only if $q_\xi(a,y)=Q(x,y)$ for a given function $Q$. When this condition, called <a href="http://en.wikipedia.org/wiki/Lumpability" rel="nofollow">lumpability</a>, holds, the Markov chain $(a_t)_{t\geqslant0}$ is said to be <em>lumpable</em> (by the function $\xi:A\to X$) and $Q$ is the transition kernel of the Markov chain $(x_t)_{t\geqslant0}$.</p> <p>The question to know whether $(x_t)_{t\geqslant0}$ is a Markov chain for a <em>given</em> starting distribution of $a_0$ is more involved but a condition for this to hold is stated <a href="http://www.cs.uni-salzburg.at/~anas/papers/Pres-17-06-03-Lump.pdf" rel="nofollow">here</a>.</p> <hr> <p><strong>Second edit</strong> Here is an example in continuous state space showing that the initial distribution is important. </p> <p>Let $A=\mathbb R/\mathbb Z$ denote the unit circle, $u:A\to A$ defined by $u(a)=2a$, $X=\{0,1\}$, $\xi:A\to X$ defined by $\xi(a)=[a\in A_1]$ where $A_1=(\mathbb Z+[\frac12,1))/\mathbb Z$, and $a_{t+1}=u(a_t)$ and $x_t=\xi(a_t)$ for every nonnegative $t$. Then, if the distribution of $a_0$ is uniform on $A$, the process $(x_t)_{t\geqslant0}$ is a Markov chain since it is in fact i.i.d. with $x_t$ uniform on $X$ for every $t$. 
</p> <p>This is adapted from the example based on the logistic map presented in <a href="http://www.stat.cmu.edu/~cshalizi/754/notes/lecture-09.pdf" rel="nofollow">these notes</a> by Cosma Shalizi, which goes as follows. </p> <p>Let $B=[0,1]$, $v:B\to B$ defined by $v(b)=4b(1-b)$, $\eta:B\to X$ defined by $\eta(b)=[b\in B_1]$ where $B_1=[\frac12,1]$, and $b_{t+1}=v(b_t)$ and $y_t=\eta(b_t)$ for every nonnegative $t$. Then, if the distribution of $b_0$ is the arcsine distribution, with density $1/(\pi\sqrt{b(1-b)})$ on $B$, the process $(y_t)_{t\geqslant0}$ is a Markov chain since it is in fact i.i.d. with $y_t$ uniform on $X$ for every $t$. Shalizi notes that $(y_t)_{t\geqslant0}$ is a Markov chain with respect to its own filtration, since the distributions of $y_{t+1}$ conditionally on $(y_s)_{s\leqslant t}$ or conditionally on $y_t$ are the same (and both are the uniform distribution on $X$). On the other hand the distribution of $y_{t+1}$ conditionally on $(b_s)_{s\leqslant t}$ is the Dirac measure at $+1$ or at $0$ since $y_{t+1}$ is a deterministic function of $b_t$. More precisely, this conditional distribution is the Dirac measure at $\eta(v(b_t))$.</p> <p>Finally, the examples based on $u$ and $v$ are conjugate since $v\circ \sigma=\sigma\circ u$ with $\sigma:A\to B$ defined by $\sigma(a)=\sin^2(\pi a)$. Note that, if $a_0$ is uniform on $A$, then $\sigma(a_0)$ follows the arcsine distribution on $B$.</p>
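<p>The lumpability condition above can be checked mechanically. A minimal Python sketch (the transition matrix and the grouping map below are toy data chosen for illustration, not taken from the text):</p>

```python
def is_lumpable(P, xi, tol=1e-12):
    """Kemeny-Snell lumpability test: for each block y, the mass
    q_xi(a, y) = sum over b with xi[b] == y of P[a][b]
    must depend on a only through xi[a]."""
    n = len(xi)
    blocks = sorted(set(xi))
    for y in blocks:
        mass = [sum(P[a][b] for b in range(n) if xi[b] == y) for a in range(n)]
        for x in blocks:
            vals = [mass[a] for a in range(n) if xi[a] == x]
            if max(vals) - min(vals) > tol:
                return False
    return True

# 3-state toy chain, states 0 and 1 lumped together (xi = [0, 0, 1]):
P_good = [[0.2, 0.3, 0.5],
          [0.4, 0.1, 0.5],   # rows 0 and 1 send the same mass to each block
          [0.6, 0.2, 0.2]]
P_bad  = [[0.2, 0.3, 0.5],
          [0.1, 0.1, 0.8],   # mass sent to block {2} differs: 0.5 vs 0.8
          [0.6, 0.2, 0.2]]
print(is_lumpable(P_good, [0, 0, 1]), is_lumpable(P_bad, [0, 0, 1]))  # True False
```

<p>When the test passes for every block, the lumped process is a Markov chain for every starting distribution, matching the criterion quoted above.</p>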
2,691,266
<p>The quotient ring $\mathcal{O}/\mathfrak{a}$ of a Dedekind domain by an ideal $\mathfrak{a}\ne 0$ is a principal ideal ring. </p> <p>I am trying to show $\mathcal{O}/\mathfrak{p}^n$ is a principal ring. Let $\mathfrak{p}^i/\mathfrak{p}^n$ be an ideal and choose $\pi\in\mathfrak{p}\setminus\mathfrak{p}^2$. Then how do I show that $\mathfrak{p}^i = \mathcal{O}\pi^i + \mathfrak{p}^n$? Any help/hint in this regard would be highly appreciated. Thanks in advance!</p>
Bernard
202,857
<p><strong>Hint</strong>: $$\mathcal O/\mathfrak p^n\simeq\mathcal O_{\mathfrak p}/(\mathfrak p\mathcal O_{\mathfrak p})^n.$$ What is the localisation of a Dedekind domain at a maximal ideal? </p>
1,737,674
<p>I am trying to understand how to find all congruence classes in $\mathbb{F}_2[x]$ modulo $x^2$. How can I compute them? Can someone get me started with this? I am also having trouble understanding $\mathbb{F}_2[x]$: is it the set $\{ f(x) = a_nx^n + \dots + a_1 x + a_0 : a_i \in \{0,1\} \}$?</p>
imranfat
64,546
<p>In triangle BDC you can set up the Law of Sines: $\frac{DC}{\sin15^{\circ}}=\frac{BD}{\sin45^{\circ}}$; with $DC=1$, I get $BD\approx 2.732$. Now in triangle BDA we have to use the Law of Cosines because you have 2 sides with an enclosed angle D, so calculating AB we get $AB^2=2.732^2+2^2-2\cdot 2.732\cdot 2\cdot\cos 60^{\circ}$, which gives $AB=\sqrt{6}$. Now apply the Law of Sines in triangle BDA to calculate A. You have three sides and an angle. Please give it a try.</p>
1,737,674
<p>I am trying to understand how to find all congruence classes in $\mathbb{F}_2[x]$ modulo $x^2$. How can I compute them? Can someone get me started with this? I am also having trouble understanding $\mathbb{F}_2[x]$: is it the set $\{ f(x) = a_nx^n + \dots + a_1 x + a_0 : a_i \in \{0,1\} \}$?</p>
Senex Ægypti Parvi
89,020
<p>assumption $\overline{CD}=1$<br> point C $(-1\mid 0)$<br> point D $(0\mid 0)$<br> point A $(2\mid 0)$<br> point B $\left(\frac{1+\sqrt3}2\mid\frac{3+\sqrt3}2\right)\quad$ intersection of<br> $\qquad\qquad y=x\tan 60^{\circ}$ and $y=(x+1)\tan 45^{\circ}$<br> angle A $=\tan^{-1}{\frac{\frac{3+\sqrt3}2}{2-\frac{1+\sqrt3}2}}=\tan^{-1}(2+\sqrt3)=75^{\circ}$</p>
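<p>A numerical sanity check of the construction above (a small Python sketch using the answer's coordinates):</p>

```python
from math import sqrt, acos, degrees

# Points from the construction above (with CD = 1):
C = (-1.0, 0.0)
A = (2.0, 0.0)
B = ((1 + sqrt(3)) / 2, (3 + sqrt(3)) / 2)

def angle_at(P, Q, R):
    """Angle at vertex P of triangle PQR, in degrees."""
    v = (Q[0] - P[0], Q[1] - P[1])
    w = (R[0] - P[0], R[1] - P[1])
    dot = v[0] * w[0] + v[1] * w[1]
    nv = sqrt(v[0] ** 2 + v[1] ** 2)
    nw = sqrt(w[0] ** 2 + w[1] ** 2)
    return degrees(acos(dot / (nv * nw)))

print(round(angle_at(A, B, C), 9))  # 75.0
```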
3,027,286
<p>I am a little confused as to proving that <span class="math-container">$(C^*)^{-1} = (C^{-1})^*$</span> where <span class="math-container">$C$</span> is an invertible complex matrix. </p> <p>Initially, I thought that it would have something to do with the identity matrix, where <span class="math-container">$CC^{-1}=C^{-1}C = I$</span>, but I don't seem to be getting anywhere with that. </p> <p>Thank you! </p>
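<p>For intuition, the identity can be checked numerically on a small example; a sketch for a $2\times 2$ complex matrix (the entries are arbitrary test values):</p>

```python
def conj_transpose(M):
    """Conjugate transpose of a 2x2 complex matrix (nested lists)."""
    return [[M[0][0].conjugate(), M[1][0].conjugate()],
            [M[0][1].conjugate(), M[1][1].conjugate()]]

def inverse(M):
    """Inverse of a 2x2 matrix via the adjugate formula."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[ M[1][1] / det, -M[0][1] / det],
            [-M[1][0] / det,  M[0][0] / det]]

C = [[1 + 2j, 3 - 1j],
     [0 + 1j, 2 + 0j]]

lhs = inverse(conj_transpose(C))   # (C*)^{-1}
rhs = conj_transpose(inverse(C))   # (C^{-1})*
print(all(abs(lhs[i][j] - rhs[i][j]) < 1e-12 for i in (0, 1) for j in (0, 1)))  # True
```

<p>The proof mirrors this check: taking adjoints in $CC^{-1}=I$ gives $(C^{-1})^*C^*=I$, so $(C^{-1})^*$ is the inverse of $C^*$.</p>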
Scientifica
164,983
<p>It is known that <span class="math-container">$$\sum_{j=1}^\infty \dfrac{1}{j^2}=\dfrac{\pi^2}{6}.$$</span></p> <p>So if you take <span class="math-container">$p_j=\frac{6}{(\pi j)^2}$</span>, you have <span class="math-container">$\sum_{j=1}^\infty p_j=1$</span> yet <span class="math-container">$\sum_{j=1}^\infty jp_j=\frac{6}{\pi^2}\sum_{j=1}^\infty\frac{1}{j}=+\infty.$</span></p>
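<p>Both facts are easy to see numerically (a small sketch; the cutoffs are arbitrary):</p>

```python
from math import pi

def partial_sums(N):
    """Partial sums of p_j and j*p_j for j = 1..N, with p_j = 6/(pi*j)^2."""
    total_p = sum(6 / (pi * j) ** 2 for j in range(1, N + 1))
    total_jp = sum(j * 6 / (pi * j) ** 2 for j in range(1, N + 1))
    return total_p, total_jp

# sum p_j approaches 1, while sum j*p_j grows like (6/pi^2) * ln(N) without bound.
for N in (10**3, 10**4, 10**5):
    print(N, partial_sums(N))
```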
3,027,286
<p>I am a little confused as to proving that <span class="math-container">$(C^*)^{-1} = (C^{-1})^*$</span> where <span class="math-container">$C$</span> is an invertible complex matrix. </p> <p>Initially, I thought that it would have something to do with the identity matrix, where <span class="math-container">$CC^{-1}=C^{-1}C = I$</span>, but I don't seem to be getting anywhere with that. </p> <p>Thank you! </p>
jjagmath
571,433
<p>We have the series <span class="math-container">$\displaystyle\sum_{j=1}^\infty \frac{1}{j(j+1)} = 1$</span>, but <span class="math-container">$\displaystyle\sum_{j=1}^\infty \frac{1}{j+1}$</span> diverges, so your claim is false.</p>
3,027,286
<p>I am a little confused as to proving that <span class="math-container">$(C^*)^{-1} = (C^{-1})^*$</span> where <span class="math-container">$C$</span> is an invertible complex matrix. </p> <p>Initially, I thought that it would have something to do with the identity matrix, where <span class="math-container">$CC^{-1}=C^{-1}C = I$</span>, but I don't seem to be getting anywhere with that. </p> <p>Thank you! </p>
Mike Earnest
177,399
<p>Not all aperiodic, irreducible Markov processes have a stationary distribution. This is only true for finite state spaces. For infinite spaces, you need the process to be positive recurrent, meaning the expected time to return to a state is finite. Here, starting from <span class="math-container">$1$</span>, the expected time to return to <span class="math-container">$1$</span> is <span class="math-container">$\sum jp_j$</span>. Therefore, your proof goes in circles; in order for the process to have a stationary distribution, you need <span class="math-container">$\sum jp_j&lt;\infty$</span>, and in order to prove that, you use that the process has a stationary distribution.</p> <p>When the list <span class="math-container">$(p_1,p_2,\dots)$</span> has too fat a tail, the process will never settle, and instead become more diffuse as time goes on.</p>
2,245,631
<blockquote> <p>$x+x\sqrt{(2x+2)}=3$</p> </blockquote> <p>I must solve this, but I always get to a point where I don't know what to do. The answer is 1.</p> <p>Here is what I did: </p> <p>$$\begin{align} 3&amp;=x(1+\sqrt{2(x+1)}) \\ \frac{3}{x}&amp;=1+\sqrt{2(x+1)} \\ \frac{3}{x}-1&amp;=\sqrt{2(x+1)} \\ \frac{(3-x)^{2}}{x^{2}}&amp;=2(x+1) \\ \frac{9-6x+x^{2}}{2x^{2}}&amp;=x+1 \\ \frac{9-6x+x^{2}-2x^{2}}{2x^{2}}&amp;=x \\ \frac{9-6x+x^{2}-2x^{2}-2x^{3}}{2x^{2}}&amp;=0 \end{align}$$</p> <p>Then I got: $-2x^{3}-x^{2}-6x+9=0$ </p>
John Doe
399,334
<p>So you got to the cubic equation $f(x)=-2x^{3}-x^{2}-6x+9=0$. When you come across a cubic like this, try evaluating $f(\pm1), f(\pm2)$, etc., to try and figure out some roots so you can factor it (you know $x_0$ is a root if $f(x_0)=0$). Here you can see $1$ is a root, so factoring out $(x-1)$ gives $f(x)=-(x-1)\underbrace{(2x^2+3x+9)}_{\text{no real roots}}$. So the only solution is $x=1$.</p>
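<p>A quick sanity check (Python sketch) of the root and the factorization, with the overall minus sign made explicit, $f(x)=-(x-1)(2x^2+3x+9)$:</p>

```python
from math import sqrt

def f(x):
    return -2 * x**3 - x**2 - 6 * x + 9

def factored(x):
    return -(x - 1) * (2 * x**2 + 3 * x + 9)

assert f(1) == 0                                         # x = 1 is a root
assert all(f(t) == factored(t) for t in range(-10, 11))  # the two forms agree
print(3**2 - 4 * 2 * 9)   # -63: negative discriminant, so no further real roots

# x = 1 also satisfies the original equation x + x*sqrt(2x + 2) = 3:
print(1 + 1 * sqrt(2 * 1 + 2))  # 3.0
```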
3,840,692
<p>The equation is <span class="math-container">$2z^2w''+3zw'-w=0$</span></p> <p><span class="math-container">$z_0=0$</span> is a regular singular point, so <span class="math-container">$w(z)=\sum_{n=0}^{\infty} a_nz^{n+r}$</span></p> <p>then <span class="math-container">$w'(z)=\sum_{n=0}^{\infty} (n+r)a_nz^{n+r-1}$</span> and <span class="math-container">$w''(z)=\sum_{n=0}^{\infty} (n+r)(n+r-1)a_nz^{n+r-2}$</span></p> <p>Replacing in the equation:</p> <p><span class="math-container">$2z^2\sum_{n=0}^{\infty} (n+r)(n+r-1)a_nz^{n+r-2}+3z\sum_{n=0}^{\infty} (n+r)a_nz^{n+r-1}-\sum_{n=0}^{\infty} a_nz^{n+r}=0$</span></p> <p><span class="math-container">$\sum_{n=0}^{\infty} 2(n+r)(n+r-1)a_nz^{n+r}+\sum_{n=0}^{\infty} 3(n+r)a_nz^{n+r}-\sum_{n=0}^{\infty} a_nz^{n+r}=0$</span></p> <p><span class="math-container">$\sum_{n=0}^{\infty} [2(n+r)(n+r-1)a_nz^{n+r}+ 3(n+r)a_nz^{n+r}- a_nz^{n+r}]=0$</span></p> <p><span class="math-container">$2(n+r)(n+r-1)a_n+ 3(n+r)a_n- a_n=0$</span></p> <p><span class="math-container">$(2(n+r)(n+r-1)+3(n+r)-1) a_n=0$</span></p> <p>At this point, how can I construct a recurrence relation since I only got <span class="math-container">$a_n$</span>?, I'd need <span class="math-container">$a_{n+1}$</span>, right?</p> <p>I already found the indicial equation and its roots, which are <span class="math-container">$r_1=1/2$</span> and <span class="math-container">$r_2=-1$</span></p>
metamorphy
543,769
<p>Your last equation is all you get. There's no recurrence, and you don't need one.</p> <p>You <em>necessarily</em> have <span class="math-container">$\color{blue}{a_n=0}$</span> for <span class="math-container">$n&gt;0$</span>. Otherwise, if <span class="math-container">$a_n\neq 0$</span> for some <span class="math-container">$n&gt;0$</span>, you would have both <span class="math-container">$r$</span> and <span class="math-container">$n+r$</span> be roots of <span class="math-container">$2r^2+r-1=0$</span> (the indicial equation). In our case, this is clearly impossible.</p>
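<p>This can be made concrete: for a pure power <span class="math-container">$w=z^r$</span>, the ODE <span class="math-container">$2z^2w''+3zw'-w=0$</span> reduces to <span class="math-container">$(2r(r-1)+3r-1)\,z^r=0$</span>, so everything hinges on the indicial polynomial. A small exact-arithmetic sketch:</p>

```python
from fractions import Fraction

def indicial(r):
    """Coefficient of z^r after substituting w = z^r into 2 z^2 w'' + 3 z w' - w."""
    return 2 * r * (r - 1) + 3 * r - 1

print(indicial(Fraction(1, 2)))  # 0: w = z^(1/2) solves the ODE
print(indicial(-1))              # 0: w = z^(-1) solves the ODE
print(indicial(Fraction(3, 2)))  # 5: n + r = 1 + 1/2 is not a root, forcing a_1 = 0
```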
1,685,895
<blockquote> <blockquote> <p>Question: Find a value of $n$ such that the coefficients of $x^7$ and $x^8$ in the expansion of $\displaystyle \left(2+\frac{x}{3}\right)^{n}$ are equal.</p> </blockquote> </blockquote> <hr> <p>My attempt:</p> <p>$\displaystyle \binom{n}{7}2^{n-7}\left(\tfrac13\right)^{7}=\binom{n}{8}2^{n-8}\left(\tfrac13\right)^{8} $</p> <p>$$ \frac{n(n-1)(n-2)(n-3)(n-4)(n-5)(n-6)}{7!} \times 2^{n-7} \times \left(\frac{1}{3}\right)^7= \frac{n(n-1)(n-2)(n-3)(n-4)(n-5)(n-6)(n-7)}{8!} \times 2^{n-8} \times \left(\frac{1}{3}\right)^8 $$</p> <p>$$ \frac{6}{7!} = \frac{n-7}{40320} $$</p> <p>$$ n-7 = 48 $$</p> <p>$$ n=55 $$</p>
Decaf-Math
227,902
<p>Reference that $$(a + b)^n = {n \choose 0}a^nb^0 + {n \choose 1}a^{n-1}b^1 + \cdots + {n \choose n-1}ab^{n-1} + {n \choose n}a^0b^n.$$</p> <p>So we want $a = 2$ and $b = {x \over 3}$. So we are considering the terms $\displaystyle {n \choose 7}a^{n - 7}b^7$ and $\displaystyle {n\choose 8}a^{n-8}b^8.$ So, $${n \choose 7}a^{n - 7}b^7 = {n\choose 8}a^{n-8}b^8$$ $${n! \over 7!(n - 7)!}a^{n-7}b^7 = {n! \over 8!(n-8)!}a^{n-8}b^8$$ $${a^{n-8}a \over 7!(n-7)(n-8)!} = {a^{n-8}b \over 8\times 7!(n-8)!}$$ $${a \over n-7} = {b \over 8}$$ $$n-7 = {8a \over b}$$ Since we are equating the <em>coefficients</em> of $x^7$ and $x^8$, the $x$ in $b = {x \over 3}$ only carries the power of $x$; the numeric factor entering the coefficients is $\frac13$. With $a = 2$ and $b = \frac13$, $$n - 7 = {8 \cdot 2 \over 1/3} = 48,$$ so $n = 55$.</p>
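<p>One can confirm the conclusion with exact arithmetic (a short sketch; <code>coeff</code> is an ad hoc helper keeping the numbers exact with fractions):</p>

```python
from math import comb
from fractions import Fraction

def coeff(n, k):
    """Exact coefficient of x^k in (2 + x/3)^n."""
    return comb(n, k) * Fraction(2) ** (n - k) * Fraction(1, 3) ** k

print(coeff(55, 7) == coeff(55, 8))  # True: n = 55 equates the two coefficients
print(coeff(54, 7) == coeff(54, 8))  # False
```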
2,554,153
<p>I have some problems with writing the character table of a group, for instance the group $S_4$. When we write a character table, we write the irreducible representations of the group. So, how can I quickly find them? And then how do I fill in the table? Can someone explain this to me using this example?</p>
Andres Mejia
297,998
<p>Let's start with $S_3$.</p> <p><strong>Step 1:</strong> Find the conjugacy classes when this is not too difficult. For $S_3$ they are represented by $1, (12), (123)$.</p> <p><strong>Step 2:</strong> There are two easy representations: the trivial one, and the "alternating" representation, which is just $\mathrm{sgn}$ which assigns to a permutation its parity in the decomposition into transpositions. Thus, we get values $(1,-1,1)$ on the three conjugacy classes respectively.</p> <p><strong>Step 3:</strong> Finally, we have the permutation representation $S_3 \to GL(\mathbb C^3)$ which basically acts by permutation of indices on a $3$-tuple. However, this representation decomposes into the trivial representation on the diagonal $\mathrm{Span}[1,1,1]:=U$ and its orthogonal complement. Hence, we get that $\mathbb C^3:=U \oplus V$. But the values of the permutation representation should be $(3,1,0)$ on the three conjugacy classes (check this by looking at matrices.) Hence, the two dimensional irreducible representation $V$ has character $\chi_{\mathbb C^3}-\chi_{U}=(3,1,0)-(1,1,1)=(2,0,-1)$ which is the last character, of dimension $2$.</p> <p>We know we are done since the sum of the squares of the dimensions is the cardinality of $S_3$, which is $6$.</p> <p><strong>Hint for $S_4$:</strong> Use the "same" $3$ representations and use that $\sum (\dim \chi_i)^2=24$, while there are exactly $5$ irreducible characters (one per conjugacy class). Try tensoring for an algebraic way to get another one, and the last one can be deduced just for orthogonality reasons (use the inner product.)</p> <p>If you get stuck, I suggest reading section $2.3$ of Fulton &amp; Harris.</p>
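<p>The $S_3$ table can be verified by brute force; a small Python sketch that recomputes the class sizes and checks character orthonormality (everything here follows the data above):</p>

```python
from itertools import permutations
from fractions import Fraction

def cycle_type(p):
    """Sorted cycle lengths of a permutation p given as a tuple (p[i] = image of i)."""
    seen, lengths = set(), []
    for i in range(len(p)):
        if i not in seen:
            j, c = i, 0
            while j not in seen:
                seen.add(j)
                j, c = p[j], c + 1
            lengths.append(c)
    return tuple(sorted(lengths))

sizes = {}
for p in permutations(range(3)):
    t = cycle_type(p)
    sizes[t] = sizes.get(t, 0) + 1
print(sizes)  # {(1, 1, 1): 1, (1, 2): 3, (3,): 2}

# Characters on the classes [identity, transposition, 3-cycle]:
order = [(1, 1, 1), (1, 2), (3,)]
chars = {"trivial": (1, 1, 1), "sign": (1, -1, 1), "standard": (2, 0, -1)}

def inner(a, b):
    """<a, b> = (1/|G|) * sum over classes of |class| * a * b (real characters)."""
    return sum(Fraction(sizes[t]) * a[i] * b[i] for i, t in enumerate(order)) / 6

for n1 in chars:
    for n2 in chars:
        assert inner(chars[n1], chars[n2]) == (1 if n1 == n2 else 0)
```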
3,068,934
<blockquote> <p>Let <span class="math-container">$A$</span> be a square matrix over <span class="math-container">$\mathbb{C}$</span>. Prove there are matrices <span class="math-container">$D$</span> and <span class="math-container">$N$</span> such that <span class="math-container">$A = D + N$</span> such that <span class="math-container">$D$</span> is diagonalizable, <span class="math-container">$N$</span> is nilpotent and <span class="math-container">$DN = ND$</span>.</p> </blockquote> <p>I can see that any nilpotent matrix has to satisfy <span class="math-container">$N^l=0$</span> for some <span class="math-container">$l$</span>. I'm not sure how to go about proving that all these conditions hold for any square matrix A.</p>
Spitemaster
604,925
<p>A triangle in <span class="math-container">$n$</span> dimensions is known as an <em>n-simplex</em>.</p>
3,068,934
<blockquote> <p>Let <span class="math-container">$A$</span> be a square matrix over <span class="math-container">$\mathbb{C}$</span>. Prove there are matrices <span class="math-container">$D$</span> and <span class="math-container">$N$</span> such that <span class="math-container">$A = D + N$</span> such that <span class="math-container">$D$</span> is diagonalizable, <span class="math-container">$N$</span> is nilpotent and <span class="math-container">$DN = ND$</span>.</p> </blockquote> <p>I can see that any nilpotent matrix has to satisfy <span class="math-container">$N^l=0$</span> for some <span class="math-container">$l$</span>. I'm not sure how to go about proving that all these conditions hold for any square matrix A.</p>
Dr. Richard Klitzing
518,676
<p>The <span class="math-container">$n$</span>-dimensional simplex has <span class="math-container">$n+1$</span> vertices and also <span class="math-container">$n+1$</span> facets, all of which are <span class="math-container">$n-1$</span>-dimensional simplices in turn. </p> <p>In fact, the count of elements of an <span class="math-container">$n$</span>-simplex is being given by the <span class="math-container">$n+1$</span>-st row of the Pascal triangle, i.e. the number of <span class="math-container">$k$</span>-dimensional elements of an <span class="math-container">$n$</span>-simplex always is just <span class="math-container">${n+1\choose k+1}$</span>. E.g. the (2D) triangle has both 3 vertices and 3 sides. This property is easily verified via inductive construction by means of the identities <span class="math-container">${n+1\choose k}={n\choose k-1}+{n\choose k}$</span> as well as <span class="math-container">${n+1\choose 0}={n+1\choose n+1}=1$</span>.</p> <p>The 2D simplex is known to be a <em>triangle</em> or <em>trigon</em>. A 3D simplex sometimes is spoken of as a (general) <em>tetrahedron</em>, esp. (for sure) when it is considered to be fully regular. Quite similarly, a 4D simplex will be called a (general) <em>pentachoron</em>, a 5D one then a (general) <em>hexateron</em>, a 6D one then a (general) <em>heptapeton</em>, a 7D one a (general) <em>octaexon</em>, etc.</p> <p>--- rk </p>
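<p>The Pascal-triangle count is easy to tabulate (a quick sketch):</p>

```python
from math import comb

def face_counts(n):
    """Number of k-dimensional faces of an n-simplex for k = 0..n: C(n+1, k+1)."""
    return [comb(n + 1, k + 1) for k in range(n + 1)]

print(face_counts(2))  # [3, 3, 1]          triangle: 3 vertices, 3 edges, 1 cell
print(face_counts(3))  # [4, 6, 4, 1]       tetrahedron
print(face_counts(4))  # [5, 10, 10, 5, 1]  pentachoron
```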
1,180,199
<p>I am not quite sure how to deal with an IVP with discontinuous initial data.</p> <p>Find the self-similar solution \begin{equation} u_t=u u_x\qquad -\infty &lt;x &lt;\infty,\ t&gt;0 \end{equation}</p> <p>satisfying the initial condition</p> <p>\begin{equation} u|_{t=0}=\left \{\begin{aligned} -1&amp; &amp;x\le 0,\\ 1&amp; &amp;x&gt; 0 \end{aligned}\right. \end{equation}</p> <p>Here is my attempt.</p> <p>Characteristic equations: \begin{align} \frac{dt}{1} &amp;=\ \frac{dx}{-u} = \frac{du}{0}\\ \frac{du}{dt}&amp;=\ 0 \implies u=f(C)\\ \frac{dx}{dt} &amp;=\ -u = -f(C)\\ x &amp;=\ C - tf(C)\\ \end{align}</p> <p>Impose the initial condition: at $t=0$, $x = C$, \begin{align} u &amp;=\ f(C) = u|_{t=0}=\left\{\begin{aligned} -1&amp; &amp;x\le 0,\\ 1&amp; &amp;x&gt; 0 \end{aligned}\right. \end{align} \begin{align} x = C - tf(C) = \left\{\begin{aligned} C+t&amp; &amp;C\le 0,\\ C-t&amp; &amp;C&gt; 0 \end{aligned}\right. \end{align} so the characteristic lines from the two sides run towards each other and cross. How should this be resolved?</p>
doraemonpaul
30,938
<p>Follow the method in <a href="http://en.wikipedia.org/wiki/Method_of_characteristics#Example" rel="nofollow">http://en.wikipedia.org/wiki/Method_of_characteristics#Example</a>:</p> <p>$\dfrac{dt}{ds}=1$ , letting $t(0)=0$ , we have $t=s$</p> <p>$\dfrac{du}{ds}=0$ , letting $u(0)=u_0$ , we have $u=u_0$</p> <p>$\dfrac{dx}{ds}=-u=-u_0$ , letting $x(0)=f(u_0)$ , we have $x=f(u_0)-u_0s=f(u)-ut$ , i.e. $u=F(x+ut)$</p> <p>$u(x,0)=\begin{cases}-1&amp;\text{when}~x\leq0\\1&amp;\text{when}~x&gt;0\end{cases}$ :</p> <p>$\therefore u=\begin{cases}-1&amp;\text{when}~x+ut\leq0\\1&amp;\text{when}~x+ut&gt;0\end{cases}=\begin{cases}-1&amp;\text{when}~x-t\leq0\\1&amp;\text{when}~x+t&gt;0\end{cases}$</p> <p>Hence $u=-1$ where $x\leq t$ and $u=1$ where $x&gt;-t$. These two regions overlap for $-t&lt;x\leq t$: the characteristics from both sides collide there, and the Rankine–Hugoniot condition for the flux $-\frac{u^2}{2}$ gives a stationary shock at $x=0$, so the entropy weak solution is $$u(x,t)=\begin{cases}-1&amp;\text{when}~x\leq 0\\1&amp;\text{when}~x&gt;0\end{cases}=u(x,0).$$</p>
233,169
<p>I had to redo the problem because there was a mistake. With the given function from a previous problem I was solving (<a href="https://mathematica.stackexchange.com/questions/231664/adding-a-point-in-a-manipulate-command">link</a>), I found that the parabolas' minima trace out a trajectory on the graph, i.e., another parabola.</p> <p>I'm trying to create a plot where I collect the coordinates of the minimum point for 21 values of the parameter <strong>a</strong> (from -7 to 7), and find a, b, and c such that the points are on the curve of the quadratic equation <span class="math-container">$ax^2+bx+c=y$</span>. The plot looks something like this</p> <p><a href="https://i.stack.imgur.com/msUOn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/msUOn.png" alt="enter image description here" /></a></p> <p>The curve of <span class="math-container">$ax^2 + bx + c=y$</span> goes through all the minimum points of the curves created.</p>
N0va
42,436
<p>I am not sure if I understood the question correctly. Does this solve your question?</p> <pre><code>Table[{Plot[x^2-2*(a-2)*x+a-2,{x,-20,20}],{a-2,-6+5 a-a^2}},{a,Range[-7,7,14/20]}]; Show[Flatten[{%[[All,1]],Plot[x-x^2,{x,-20,20},PlotStyle-&gt;Red],ListPlot[%[[All,2]]]}]] -6+5 a-a^2/.a-&gt;x+2//Expand </code></pre> <p><a href="https://i.stack.imgur.com/zwPwZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zwPwZ.png" alt="Parabolas" /></a></p> <p>I used the code from <a href="https://mathematica.stackexchange.com/a/231665/42436">https://mathematica.stackexchange.com/a/231665/42436</a> to generate the parabolas and from the formula for the minima <code>{a-2,-6+5 a-a^2}</code> one can directly compute the parabola on which all minima are found: <span class="math-container">$$x-x^2=y\\ a x^2+b x+c=y\quad\text{with}\quad \{a=-1,b=1,c=0\}$$</span></p>
2,677
<p>If <em>G</em> is a group, its <strong>abelianization</strong> is the abelian group <em>A</em> and the map <em>G</em> &rarr; <em>A</em> such that any map <em>G</em> &rarr; <em>B</em> with <em>B</em> abelian factors through <em>A</em>. Abelianization is a functor, and in general a very lossy operation. The map <em>G</em> &rarr; <em>A</em> is always a surjection/quotient, because we can construct <em>A</em> by dividing <em>G</em> by the minimal normal subgroup that contains all conjugations <em>ghg<sup>-1</sup>h<sup>-1</sup></em> for <em>g,h</em>&isin;<em>G</em>.</p> <p>If <em>V</em> is a finite-dimensional (super)vector space over a field <em>K</em>, then the abelianization of GL(<em>V</em>) is isomorphic to the multiplicative group <em>K</em><sup>*</sup> of non-zero numbers in <em>K</em>. Indeed, the determinant exhibits the desired isomorphism.</p> <p>Here are two questions I'm curious about:</p> <ol> <li>What can be said about the abelianizations of other (finite-dimensional) Lie groups?</li> <li>If <em>V</em> is an infinite-dimensional vector space, what can be said about the abelianization of GL(<em>V</em>)? Most infinite-dimensional vector spaces have some analytic structure, e.g. topological vector spaces, and so it's reasonable to ask that the operators in GL(<em>V</em>) should preserve that structure; you are welcome to take your favorite type of infinite-dimensional vector space and your favorite type of GL(<em>V</em>), if you want.</li> </ol>
Jason DeVito
1,708
<p>(In some sense, this is just a restatement of what Eric said above....)</p> <p>For compact groups, quite a lot can be said. Every compact group H' has a finite cover H which is Lie group isomorphic to $T^{k} \times G$, where $G$ is compact and simply connected.</p> <p>Then, one can easily show that [H,H] = {$e$}$\times G$ and hence that the abelianization of $H$ (which, as in the finite case is H/[H,H]) is $T^{k}$.</p> <p>The same holds true of the original group $H'$: the abelianization is $H'/[H',H']$ and is isomorphic to $T^{k}$.</p>
108,060
<p>Suppose: $$\sum_{n=2}^{\infty} \left( \frac{1}{n(\ln(n))^{k}} \right) =\frac{1}{ 2(\ln(2))^{k} } +\frac{1}{ 3(\ln(3))^{k} }+..., $$ for which $k$ does it converge?</p> <p>When I use the ratio test I get an inconclusive result:</p> <p>$\lim_{n\rightarrow\infty} \frac{u_{n+1}}{u_{n}}=\lim_{n\rightarrow\infty}\frac{n\ln(n)^{k}}{(n+1)\ln(n+1)^{k}} =\lim_{n\rightarrow\infty}\frac{n\ln(n)^{k}+\ln(n+1)^{k}-\ln(n+1)^{k}}{(n+1)\ln(n+1)^{k}}\approx 1- \frac{\ln(n+1)^{k}}{(n+1)\ln(n+1)^{k}}=\\1-\lim_{n\rightarrow \infty}\frac{1}{n+1}=1$</p> <p>So the ratio test is inconclusive for every $k\in\mathbb R$, but I feel I am doing something wrong because I am pretty sure I have done this kind of problem earlier using some well-known series comparison. WA nothing <a href="http://www.wolframalpha.com/input/?i=Integrate%28x%5E%7Bk%7Dln%28x%29,x,0,%5Cinfty%29" rel="nofollow">here</a>.</p> <p><strong>[Update] Trying to use the Cauchy condensation test</strong></p> <p>With $a_n = \frac{1}{n(\ln n)^{k}}$, the condensed series is $$\sum_{n=2}^{\infty} 2^{n} a_{2^{n}} = \sum_{n=2}^{\infty} \frac{2^{n}}{2^{n}(\ln 2^{n})^{k}} = \frac{1}{(\ln 2)^{k}} \sum_{n=2}^{\infty} \frac{1}{n^{k}},$$ a $p$-series. So can I now conclude convergence exactly for $k&gt;1$?</p>
André Nicolas
6,312
<p>For completeness, we sketch the <a href="http://en.wikipedia.org/wiki/Integral_test_for_convergence" rel="nofollow">Integral Test</a> approach. </p> <p>Let $f$ be a function which is defined, non-negative, and decreasing (or at least non-increasing) from some point $a$ on. Then $\sum_1^\infty f(n)$ converges if and only if the integral $\int_a^\infty f(x)\,dx$ converges. </p> <p>In our example, we use $f(x)=\dfrac{1}{x\,\ln^k(x)}$. Note that $f(x)\ge 0$ and decreasing after a while. So we want to find the values of $k$ for which $$\int_2^\infty \frac{dx}{x\ln^k(x)}\qquad\qquad(\ast)$$ converges (we could replace $2$ by say $47$ if $f(x)$ misbehaved early on).</p> <p>Let $I(M)=\int_2^M f(x)\,dx$. We want to find the values of $k$ for which $\lim_{M\to\infty} I(M)$ exists.</p> <p>It is likely that this was already done when you were studying improper integrals, but we might as well do it again.</p> <p>Suppose first that $k&gt;1$. To evaluate the integral, make the substitution $\ln x=u$. Then $$I(M)=\int_2^M \frac{dx}{x\ln^k(x)}=\int_{\ln 2}^{\ln M} \frac{du}{u^k}.$$ We find that $$I(M)=\frac{1}{k-1}\left(\frac{1}{(\ln 2)^{k-1}}- \frac{1}{(\ln M)^{k-1}}\right).$$ Because $k-1&gt;0$, the term in $M$ approaches $0$ as $M\to\infty$, so the integral $(\ast)$ converges. </p> <p>By the Integral Test, we therefore conclude that our original series converges if $k&gt;1$.</p> <p>For $k=1$, after the $u$-substitution, we end up wanting $\int\frac{du}{u}$. We find that $$I(M)=\ln(\ln M)-\ln(\ln 2).$$ As $M\to\infty$, $\ln(\ln M)\to \infty$ (though glacially slowly). So by the Integral Test, our series diverges if $k=1$.</p> <p>For $k&lt;1$, we could again do the integration. But an easy Comparison with the case $k=1$ proves divergence.</p>
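<p>The dichotomy is visible numerically as well (a rough sketch; the cutoffs are arbitrary):</p>

```python
from math import log

def partial_sum(k, N):
    """Sum of 1/(n * ln(n)^k) for n = 2..N."""
    return sum(1 / (n * log(n) ** k) for n in range(2, N + 1))

# For k = 2 the partial sums settle (the tail beyond N is roughly 1/ln(N)),
# while for k = 1 they keep growing like ln(ln(N)).
for N in (10**3, 10**4, 10**5):
    print(N, partial_sum(2, N), partial_sum(1, N))
```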
327,860
<p>Let <span class="math-container">$A$</span> be a symmetric <span class="math-container">$d\times d$</span> matrix with integer entries such that the quadratic form <span class="math-container">$Q(x)=\langle Ax,x\rangle, x\in \mathbb{R}^d$</span>, is non-negative definite. For which <span class="math-container">$d$</span> does it imply that <span class="math-container">$Q$</span> is a sum of finitely many squares of linear forms with integer coefficients <span class="math-container">$$ Q(x)=\sum_{i=1}^N (\ell_i(x))^2\quad \text{for some}\, N? $$</span> For <span class="math-container">$d=2$</span> this is true, I know it from the <a href="https://artofproblemsolving.com/community/c6h219889p1219430" rel="noreferrer">problem</a> proposed by Sweden to IMO in 1995, but probably all this stuff is known for a longer time. </p> <p>I think, I may prove it for some other small dimensions, although not so elementary (using Minkowski theorem on lattice points in convex bodies: if <span class="math-container">$Q$</span> is positive definite, we may find a linear form <span class="math-container">$\ell(x)$</span> such that <span class="math-container">$Q-\ell^2$</span> is still non-negative definite, this is equivalent to finding an integer point in an ellipsoid), but for large <span class="math-container">$d$</span> this argument fails. </p>
WKC
29,241
<p>This is a well-known problem, called the Waring's problem of integral quadratic forms. Every semi-positive definite quadratic form in <span class="math-container">$n \leq 5$</span> variables is a sum of <span class="math-container">$n + 3$</span> squares of linear forms. This was proved by Chao Ko, but this can be explained by the fact that the quadratic form of sum of <span class="math-container">$n + 3$</span> squares has class number 1 if <span class="math-container">$n \leq 5$</span>. </p> <p>There are positive definite quadratic forms in <span class="math-container">$n \geq 6$</span> variables which cannot be written as sums of squares of integral linear forms. The smallest example is the quadratic form corresponding to the root system <span class="math-container">$E_6$</span>. </p> <p>So, one should look at the set of positive definite quadratic forms in <span class="math-container">$n$</span> variables that can be written as sums of squares of linear forms. Then there exists an integer <span class="math-container">$g(n)$</span> such that all these quadratic forms can be written as a sum of <span class="math-container">$g(n)$</span> squares of integral linear forms. The magnitude of <span class="math-container">$g(n)$</span> is not known. The best upper bound is <span class="math-container">$O(e^{k\sqrt{n}})$</span> for some explicit <span class="math-container">$k$</span>. This is obtained recently by Beli-Chan-Icaza-Liu (appeared in TAMS).</p>
580,616
<p>The Axiom of Separation states that if $A$ is a set then $\{a \in A ;\Phi(a)\}$ is a set. Given a set $B \subseteq A$, suppose I define $B=\{ a \in A ; a\notin B \}$. This, of course, leads to a contradiction, because we define $B$ by elements not from $B$. My question is: what part of the axioms says that this kind of definition is not possible?</p> <p>Thank you!</p>
Peter Smith
35,151
<p>There is nothing at all to stop you defining a set $\Sigma$ such that $x \in \Sigma$ iff $x \in A \land x \notin B$, so $\Sigma = \{x \in A \mid x \notin B\}$. </p> <p>But what you've shown is that $\Sigma \neq B$! </p> <p>No problem so far.</p> <p>What you can't do is then go on (having a knock-down argument to show that $\Sigma \neq B$) to assert, as you do, $\Sigma = B$. What could possibly legitimate that???</p>
580,616
<p>The Axiom of Separation states that if $A$ is a set then $\{a \in A ;\Phi(a)\}$ is a set. Given a set $B \subseteq A$, suppose I define $B=\{ a \in A ; a\notin B \}$. This, of course, leads to a contradiction, because we define $B$ by elements not from $B$. My question is: what part of the axioms says that this kind of definition is not possible?</p> <p>Thank you!</p>
Community
-1
<p>You were given a set and named it $B$.</p> <p>You defined another set, and named it $B$.</p> <p>Just because you've given them the same names doesn't mean they are actually the same set. Your contradiction only appears because you've confused yourself and thought the two sets were the same since you gave them the same name.</p>
638,244
<p>In any (simple) type theory there are <strong>base types</strong> (i.e. the type of <em>individuals</em> and the type of <em>propositions</em>) and <strong>type builders</strong> (i.e. $\rightarrow$, which takes two types $t,t'$ and yields the type of <em>functions</em> $t \rightarrow t'$). </p> <p>For each type in such a type theory there is a rooted ordered tree with </p> <ul> <li>base types as labels of leaves and </li> <li>type builders as labels of non-leaf nodes<br/> (<em>"by which type builder this node is built?"</em>)</li> </ul> <p>that shows how the corresponding type (i.e. the root) is built from base types.</p> <blockquote> <p>What is the "official" name of such a tree when the base types are ignored (i.e. the labels of the leaves)? <br/><br/>[Formally: <em>Two types have the same <strong>???</strong> when their corresponding trees are isomorphic up to labelling of the leaves.</em>]</p> </blockquote> <p>(Something like "type constructor" or "type construction" or "construction type"?)</p> <p>As long as there is only <em>one</em> base type, this question is not very interesting. And when there are several base types, but of quite different nature - like <em>individuals</em> and <em>propositions</em> - it's not very interesting either. </p> <p>But what I have in mind is a type theory with several base types of the <em>same</em> kind (or nature) &mdash; like chemical elements. 
This leads to another question:</p> <blockquote> <p>(How) can/are "base types <em>of the same kind</em>" be captured in type theory?</p> </blockquote> <hr> <p>(See also: <a href="http://en.wikipedia.org/wiki/Context-free_grammar#Derivations_and_syntax_trees" rel="nofollow">context-free grammars</a>, <a href="http://en.wikipedia.org/wiki/Parse_tree" rel="nofollow">parse tree</a>, <a href="http://en.wikipedia.org/wiki/Abstract_syntax_tree" rel="nofollow">syntax tree</a>, <a href="http://en.wikipedia.org/wiki/Atomism" rel="nofollow">atomism</a>/<a href="http://en.wikipedia.org/wiki/Reductionism" rel="nofollow">reductionism</a>.)</p>
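<p>To make the "same tree up to leaf labels" notion concrete, here is a small Python sketch added to this transcript (not part of the original post): a type is encoded as a string (base type) or a nested tuple built by <code>-&gt;</code>, and the "shape" is obtained by erasing leaf labels.</p>

```python
# Minimal sketch (an addition, not from the post): a type is either a string
# (a base type) or a tuple ("->", t1, t2) built by the arrow type builder.
def shape(t):
    """Return the tree of type builders with all leaf labels erased."""
    if isinstance(t, str):
        return "*"                      # every base type collapses to "*"
    op, a, b = t
    return (op, shape(a), shape(b))

ind, prop = "i", "o"                    # two base types (individuals, propositions)
t1 = ("->", ind, ("->", ind, prop))     # i -> (i -> o)
t2 = ("->", prop, ("->", prop, ind))    # o -> (o -> i): same tree, other leaves
assert shape(t1) == shape(t2)           # isomorphic up to labelling of leaves
```

<p>Two types then "have the same shape" exactly when <code>shape</code> returns equal values for them.</p>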
Giorgio Mossa
11,888
<p>Basically, what you're considering seems to be the type operations which you can obtain from the basic <strong>type builders</strong>.</p> <p>As you have guessed, these objects should be the <a href="http://en.wikipedia.org/wiki/Type_constructor" rel="nofollow">type constructors</a>.</p> <p>About the second part of the question: type theory in general doesn't preclude the possibility of types being terms of some other type. The idea is to consider a special type <em>kind</em> whose terms are themselves types, each of which can have terms inhabiting it.</p> <p>This construction is possible in type theories which allow it; for instance, in univalent foundations there's a hierarchy of types whose terms are types.</p>
78,725
<p>The general theorem is: for all odd, distinct primes $p, q$, the following holds: $$\left( \frac{p}{q} \right) \left( \frac{q}{p} \right) = (-1)^{\frac{p-1}{2}\frac{q-1}{2}}$$</p> <p>I've discovered the following proof for the case $q=3$: Consider the Möbius transformation $f(x) = \frac{1}{1-x}$, defined on $F_{p} \cup \{\infty\}$. It is a bijection of order 3: $f^{(3)} = Id$.</p> <p>Now we'll count the number of fixed points of $f$, modulo 3:</p> <p>1) We can calculate the number of solutions to $f(x) = x$: it is equivalent to $(2x-1)^2 = -3$. Since $p \neq 2,3$, the number of solutions is $\left( \frac{-3}{p} \right) + 1$ (if $-3$ is a non-square, there's no solution. Else, there are 2 distinct solutions, corresponding to 2 distinct roots of $-3$).</p> <p>2) We know the structure of $f$ as a permutation: only 3-cycles or fixed points. Thus, the number of fixed points is just $|F_{p} \cup \{\infty\}| \mod 3$, or: $p+1 \mod 3$.</p> <p>Combining the 2 results yields $p = \left( \frac{-3}{p} \right) \mod 3$. Exploiting Euler's criterion gives $\left( \frac{p}{3} \right) = p^{\frac{3-1}{2}} = p \mod 3$, and using $\left( \frac{-1}{p} \right) = (-1)^{\frac{p-1}{2}}$, we get: $$\left( \frac{3}{p} \right) \left( \frac{p}{3} \right) = (-1)^{\frac{p-1}{2}\frac{3-1}{2}} \mod 3$$ and equality in $\mathbb{Z}$ follows.</p> <p>My questions:</p> <ul> <li>Can this idea be generalized, with other functions $f$?</li> <li>Is there a list/article of proofs of special cases of the theorem?</li> </ul>
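<p>As a quick numerical sanity check of the argument (a Python sketch added here, not part of the original post; it needs Python ≥ 3.8 for the three-argument <code>pow</code> modular inverse): count the fixed points of $f(x)=\frac{1}{1-x}$ on $F_p\cup\{\infty\}$ and compare with $\left(\frac{-3}{p}\right)+1$ and with $p+1 \bmod 3$.</p>

```python
# Count fixed points of f(x) = 1/(1-x) on F_p ∪ {∞} for a prime p ≠ 2, 3.
def fixed_points(p):
    INF = p                                # sentinel value for the point ∞
    def f(x):
        if x == INF:
            return 0                       # f(∞) = 0
        if x == 1:
            return INF                     # f(1) = 1/0 = ∞
        return pow(1 - x, -1, p)           # modular inverse (Python >= 3.8)
    return sum(1 for x in range(p + 1) if f(x) == x)

def legendre(a, p):
    r = pow(a % p, (p - 1) // 2, p)        # Euler's criterion
    return -1 if r == p - 1 else r

for p in [5, 7, 11, 13, 17, 19, 23, 29]:
    assert fixed_points(p) == legendre(-3, p) + 1   # step 1) of the proof
    assert fixed_points(p) % 3 == (p + 1) % 3       # step 2) of the proof
```
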
franz lemmermeyer
23,365
<p>As for your second question, a (partial) list of articles dealing with the quadratic character of small primes can be found <a href="http://www.rzuser.uni-heidelberg.de/~hb3/small.html">here</a>.</p>
2,481,767
<p>Let $A=\{3m-1\mid m\in \mathbb Z\}$ and $B=\{4m+2\mid m\in \mathbb Z\}$, and let $f:A\rightarrow B$ be defined by </p> <p>$f(x)=\frac{4(x+1)}{3}-2$. Is $f$ surjective?</p> <p>I'm not really sure how to prove this. By trying out certain values it seems it's surjective. This is my work so far:</p> <p>$f(x)=y \iff \frac{4(x+1)}{3}-2 = y \iff x=\frac{3y+2}{4}$</p> <p>If we substitute $y=4m+2$ then $x=\frac{3(4m+2)+2}{4} \iff x=\frac{12m+8}{4} \iff x=3m+2$. Although this is not exactly of the form $3m-1$, it seems that no matter which number $m$ you choose you basically get the same set in the end. </p> <p>Same if we do $f(A)=B \iff f(3m-1)=4m+2 \iff \frac{4(3m-1+1)}{3}-2=4m+2 \iff $</p> <p>$\iff 4m-2=4m+2$. Obviously these two are not equal, yet they yield the same exact sets since they are infinite. So is $f$ surjective? It seems like it, but these two arguments are not exactly precise.</p>
Andres Mejia
297,998
<p><strong>Hint:</strong> $x=3m+2 \implies x=3(m+1)-1$.</p>
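<p>A brute-force check of the hint (added here for illustration): for every $y=4m+2\in B$, the candidate preimage $x=3(m+1)-1=3m+2$ lies in $A$ and maps to $y$.</p>

```python
# Verify surjectivity on a range of m (the algebra holds for all integers m).
def f(x):
    return 4 * (x + 1) // 3 - 2        # exact division: x + 1 is a multiple of 3

for m in range(-100, 100):
    y = 4 * m + 2                      # an arbitrary element of B
    x = 3 * (m + 1) - 1                # = 3m + 2, the hinted preimage
    assert (x + 1) % 3 == 0            # x really has the form 3k - 1
    assert f(x) == y                   # and f maps it to y, so f is onto
```
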
4,513,678
<p>Suppose <span class="math-container">$f(x) = ax^3 + bx^2 + cx + d$</span> is a cubic equation with roots <span class="math-container">$\alpha, \beta, \gamma.$</span> Then we have:</p> <p><span class="math-container">$\alpha + \beta + \gamma= -\frac{b}{a}\quad (1)$</span></p> <p><span class="math-container">$\alpha\beta + \beta\gamma + \gamma\alpha = \frac{c}{a}\quad (2)$</span></p> <p><span class="math-container">$\alpha\beta\gamma = -\frac{d}{a}\quad (3)$</span></p> <p>We can find <span class="math-container">$\alpha^2\beta + \beta^2\gamma + \gamma^2\alpha + \alpha^2\gamma + \gamma^2\beta + \beta^2\alpha$</span> in terms of <span class="math-container">$a,b,c,d$</span> with the formula:</p> <p><span class="math-container">$$ \alpha^2\beta + \beta^2\gamma + \gamma^2\alpha + \alpha^2\gamma + \gamma^2\beta + \beta^2\alpha = (\alpha+\beta+\gamma)(\alpha\beta+\alpha\gamma+\beta\gamma) - 3\alpha\beta\gamma $$</span> <span class="math-container">$$=\left(\frac{-b}{a}\right) \left(\frac{c}{a}\right) - 3\left(-\frac{d}{a}\right).$$</span></p> <p>But I was wondering if there was some way to find <span class="math-container">$ \alpha^2\beta + \beta^2\gamma + \gamma^2\alpha\ $</span> and therefore also <span class="math-container">$\ \alpha^2\gamma + \gamma^2\beta + \beta^2\alpha\ $</span> in terms of <span class="math-container">$a,b,c,d,\ $</span> with some algebraic manipulation, i.e. 
without <a href="https://mathworld.wolfram.com/CubicFormula.html" rel="nofollow noreferrer">finding the roots with a cubic formula</a>?</p> <p>Notice that there are <em>two</em> possible values of <span class="math-container">$\alpha^2\beta+\beta^2\gamma+\gamma^2\alpha,$</span> namely <span class="math-container">$\alpha^2\beta+\beta^2\gamma+\gamma^2\alpha = \beta^2\gamma+\gamma^2\alpha+\alpha^2\beta = \gamma^2\alpha+\alpha^2\beta+\beta^2\gamma$</span> and <span class="math-container">$\alpha^2\gamma+\gamma^2\beta+\beta^2\alpha = \gamma^2\beta+\beta^2\alpha+\alpha^2\gamma = \beta^2\alpha + \alpha^2\gamma+\gamma^2\beta.$</span></p>
Ivan Kaznacheyeu
955,514
<p>The quantity is not symmetric, which results in a rather complex formula:</p> <p><span class="math-container">$$\alpha^2\beta+\beta^2\gamma+\gamma^2\alpha=t_{1}\,t_{3}^2-{{b\,t_{3}^2}\over{3\,a}}+t_{2}^2\,t_{3}-{{2\,b\, t_{2}\,t_{3}}\over{3\,a}}-{{2\,b\,t_{1}\,t_{3}}\over{3\,a}}+\\{{b^2\, t_{3}}\over{3\,a^2}}-{{b\,t_{2}^2}\over{3\,a}}+t_{1}^2\,t_{2}-{{2\,b \,t_{1}\,t_{2}}\over{3\,a}}+{{b^2\,t_{2}}\over{3\,a^2}}-{{b\,t_{1}^2 }\over{3\,a}}+{{b^2\,t_{1}}\over{3\,a^2}}-{{b^3}\over{9\,a^3}}$$</span></p> <p><span class="math-container">$$t_i=\sqrt[3]{\frac{q}{2}+\sqrt{\frac{q^2}{4}-\frac{p^3}{27}}}\,e^{k_i\frac{2i\pi}{3}}+\frac{p}{3\sqrt[3]{\frac{q}{2}+\sqrt{\frac{q^2}{4}-\frac{p^3}{27}}}}e^{-k_i\frac{2i\pi}{3}}, i\in\{1,2,3\}$$</span></p> <p><span class="math-container">$$p={{b^2}\over{3\,a^2}}-{{c}\over{a}}, q={{b\,c }\over{3\,a^2}}-{{d}\over{a}}-{{2\,b^3}\over{27\,a^3}}$$</span></p> <p><span class="math-container">$\{k_1,k_2,k_3\}=\{0,1,2\}$</span> determines the order of the roots taken as <span class="math-container">$\alpha$</span>, <span class="math-container">$\beta$</span>, <span class="math-container">$\gamma$</span>.</p>
1,936,260
<p>We have a binary sequence of 1s and 0s, and the length is 10. I wonder how many binary sequences of length 10 with four 1's can be created such that no two of the 1's appear consecutively?</p>
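<p>For reference, a brute-force check added here confirms the standard stars-and-bars count for non-adjacent placements, $\binom{n-k+1}{k}=\binom{7}{4}=35$:</p>

```python
# Brute force over positions of the four 1's vs. the closed-form count.
from itertools import combinations
from math import comb

n, k = 10, 4
brute = sum(
    1
    for pos in combinations(range(n), k)                 # positions of the 1's
    if all(b - a > 1 for a, b in zip(pos, pos[1:]))      # no two adjacent
)
assert brute == comb(n - k + 1, k) == 35
```
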
Bernard
202,857
<p><strong>Hint:</strong></p> <p>$\bigl\lfloor\log_{10}x\bigr\rfloor=k\iff 10^k\le x&lt;10^{k+1}$.</p>
212,240
<p>I'm a beginner in the area of free boundary problems. Let me first give some background: </p> <p>$\Omega \subset \mathbb{R}^n$ is an open connected set, and locally $\partial \Omega$ is a Lipschitz graph. Consider the convex set $$K:=\{v \in L^1_{loc}(\Omega): \nabla v \in L^2(\Omega) \,, v=u^0 \mbox{ on $\partial \Omega$}\},$$ where $u^0\ge0,u^0 \in L^1_{loc}(\Omega)$, and $\nabla u^0 \in L^2(\Omega)$. </p> <p>We are looking for the minimizer $u$ of the functional $$J(v):=\int_{\Omega}(|\nabla v|^2+\chi_{\{v&gt;0\}})$$ in the class $K$.</p> <p>It is proved that the minimizer $u$ exists and satisfies the following properties: $$u \ge 0 , \, \Delta u=0 \, \mbox{on the open set $\{u&gt;0\}$}, \, \mbox{and $u$ is subharmonic},$$ see sections 1-2 in the paper by Alt and Caffarelli <a href="ftp://eudml.org/doc/152360" rel="noreferrer">here</a></p> <p>It is also proved in sections 3-5 that $\partial\{u&gt;0\}$ has locally finite $\mathcal{H}^{n-1}$ measure. However, a lot of intermediate theorems such as Corollary 3.3 and Remark 4.2 are based on the fact that $|\partial\{u&gt;0\}|=0$, that is, $|\partial\{u=0\}|=0$. This fact is not proved in the paper, and generally it is not true if $u$ is merely continuous. </p> <p>Now my question is, why is $|\partial\{u=0\}|=0$ true? I've been stuck on it for a couple of days. </p> <p>Another related question is, what conditions on a general function $u$, which is not necessarily the minimum of the functional $J$, can guarantee that $|\partial\{u=0\}|=0$? Is the assumption that $u$ is a Sobolev function enough? How about $u$ being subharmonic? </p> <p>Any suggestions would be appreciated. Thanks!</p>
student
51,546
<p>I figured out the problem later, and only today have I had time to write it down.</p> <p>The proof of the rectifiability of the free boundary $\partial \{u&gt;0\}$ requires the Lipschitz regularity of $u$ across the free boundary and the nondegeneracy of the function $u$, see Theorems 3.2-4.5 in Alt and Caffarelli (1981).</p> <p>In the proof of the Lipschitz regularity of $u$, the authors prove that if $u(x)&gt;0$, then $|\nabla u(x)|$ is bounded, and then they claim that $u$ is Lipschitz. At first I thought it was a problem, because if $|\partial\{u&gt;0\}|&gt;0$, then how can one give a bound for $|\nabla u(x)|$? That is the reason I asked the question here.</p> <p>Later I found there was no problem. By Gilbarg and Trudinger, $Du^+=Du$ on $\{u&gt;0\}$ and $Du^+=0$ otherwise. This means the weak derivative of $u$ is always $0$ on $\{u=0\}$ no matter how crazy $|\partial\{u&gt;0\}|$ is. Then by a well-known but nontrivial result, $C^{0,1}=W^{1,\infty}$, one can finally say $u$ is Lipschitz.</p> <p>To tell the truth, I was reluctant to post the answer here, because I think the general question related to the boundary of a level set of a function is still meaningful, even just for the level set of a "nice" function. I have seen some references studying partial results on level sets of functions. For example, this question is related to the generalization of the Morse-Sard theorem for "nice" functions, the coarea formula for $\mathcal{H}^s$ rectifiable sets, and so on. My adviser told me the regularity of the free boundary of a solution of a general fully nonlinear equation is not known, and the problem is very hard.</p> <p>By the way, one can easily construct a Lipschitz function $u$ such that $\partial\{u&gt;0\}$ can be as crazy as possible. </p>
121,403
<blockquote> <p>A manifold $M$ of dimension n is a topological space with the following properties:<br> a) $M$ is Hausdorff<br> b) $M$ is locally Euclidean of dimension n<br> c) $M$ has a countable basis of open sets. </p> </blockquote> <p>Why is the first property necessary? I do not have much experience with Hausdorff spaces, hence I am not able to see the importance of that condition; probably it is something obvious. </p> <p>Since I come from a physics background, I can understand the importance of the second property. But what is the strong mathematical motivation to study such a special subset of a topological space? Also, the second property can be alternatively stated as: each point $p\in M$ has a neighborhood $U$ which is homeomorphic to an open subset $U$ of $\mathbb R^n$. Why should it be homeomorphic to an open set and not a closed one?</p> <p>Similarly, in the third property, why a countable basis of open sets? </p>
davidlowryduda
9,754
<p>A lot of the work on smooth manifolds is to let us use Euclidean analysis to merely locally Euclidean things that come up. Things that aren't Hausdorff are terrible and scary, real intuition busters (at least in my case), so I don't mind at all that we require that. And in fact, with just these 3 requirements (and a smoothness requirement), smooth manifolds can actually be viewed more or less in real space (the <a href="http://en.wikipedia.org/wiki/Whitney_embedding_theorem">Whitney Embedding Theorem</a>). And our real space intuition is pretty good, you know? And it suffices, has really nice properties, provides some pretty rich material, etc.</p> <p>But there are some people who care a lot about non-Hausdorff manifolds. In fact, secretly, we see them and don't know it. The etale space of the sheaf of continuous real functions over a regular manifold is a manifold, and it's sometimes non-Hausdorff.</p> <p>Similarly, there are people who care about non-second-countable manifolds. But these are also a bit unfortunate. One of the great things about second countability is that it guarantees that ordinary manifolds are paracompact. A paracompact smooth manifold admits partitions of unity subordinate to a refinement of any cover. Why is this important? (for that matter, what does it really mean?) While it's easy to stitch together continuous functions to make a continuous function (just sort of join the ends together, right?), it's really hard to stitch smooth functions together in general (join the ends together, and perturb it so the first derivatives align, and so the second align, etc.). But this can be done with little fuss with partitions of unity, and thus with little fuss with second-countability.</p> <p>And if you study smooth manifolds, you'll see that partitions of unity are immediately used for everything. </p> <p>So it's the way it is because it has these really nice properties, right? Well, why don't we just require manifolds to be paracompact? 
(Firstly, there is a distinction, but it's 'small.' A manifold with more than countably many disconnected components may be metrizable, and thus paracompact, but obviously won't be second-countable). In this case, the category of paracompact manifolds is closed under coproducts, which doesn't hold for second-countable manifolds. In fact, some people do only consider paracompact manifolds.</p> <p>At the end of the day, Hausdorff and second countability are exactly what let us use the embedding theorem to view manifolds in real space, and that's what's deemed important for people on their first tour through manifolds.</p>
3,657,075
<p><a href="https://i.stack.imgur.com/ytcQ3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ytcQ3.png" alt="enter image description here"></a></p> <blockquote> <p>In the given figure <span class="math-container">$\angle BAE, \angle BCD$</span> and <span class="math-container">$\angle CDE$</span> are right angles and <span class="math-container">$AB = 4, BC=3, CD=4$</span> and <span class="math-container">$DE=5$</span>. What is the value of <span class="math-container">$\angle ABD$</span>?</p> </blockquote> <p>I found this problem in a sheet of contest math problems. <strong>However</strong>, the problem stated in the sheet was "<strong>Find the value of <span class="math-container">$AE$</span></strong>". I proved the problem using <span class="math-container">$BE^2 = 8^2+4^2$</span>. Then I noticed that it could be proved by making the rectangle <span class="math-container">$ABED'$</span>. I tried to prove why <span class="math-container">$\angle ABD$</span> is a right angle only with the given conditions, but failed. Is <span class="math-container">$\angle ABD$</span> even a right angle (and if so, how does one prove it), or can it have different values? </p>
Quanto
686,284
<p>Continue with <span class="math-container">$BE = \sqrt{8^2+4^2} = 4\sqrt5$</span> and recognize that the triangle BDE is isosceles with <span class="math-container">$BD =DE=5$</span>,</p> <p><span class="math-container">$$\cos \angle ABE =\frac{AB}{BE} = \frac1{\sqrt5}, \&gt;\&gt;\&gt;\&gt;\&gt; \cos \angle DBE =\frac{BE/2}{BD} = \frac2{\sqrt5} $$</span></p> <p>Thus,</p> <p><span class="math-container">$$\angle ABD = \angle ABE + \angle DBE = \arccos \frac1{\sqrt5} + \arccos \frac2{\sqrt5}=90^\circ$$</span></p>
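<p>A numerical check of the last step (added here): the two arccosines are complementary, since $\cos \angle ABE = 1/\sqrt5$ forces $\sin \angle ABE = 2/\sqrt5 = \cos \angle DBE$.</p>

```python
# arccos(1/sqrt(5)) + arccos(2/sqrt(5)) = 90 degrees.
from math import acos, degrees, isclose, sqrt

abe = acos(1 / sqrt(5))                # angle ABE
dbe = acos(2 / sqrt(5))                # angle DBE
assert isclose(degrees(abe + dbe), 90.0, abs_tol=1e-9)
```
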
66,670
<p>I want to use:</p> <pre><code>demand = {1.92, 2.07, 2.37, 2.72, 2.87}*10^6; NSolve[SetV == demand[[1]]/(Cpf (1 - χ)), χ] </code></pre> <p>I want to make a vector of solutions for chi (χ) given each of the demand vector components.</p>
Dr. belisarius
193
<p>Diophantine problems are tough and there is no silver bullet. In your example this works:</p> <pre><code>i = IntegerPart; sol = NMaximize[{(3 i@ n + 4)/(2 i@n + 1), n &gt; 1}, n]; i@n /. sol[[2]] (* 1 *) </code></pre>
1,979,226
<p>Use Bayes' theorem or a tree diagram to calculate the indicated probability. Round your answer to four decimal places. Y1, Y2, Y3 form a partition of S.</p> <p>P(X | Y1) = .8, P(X | Y2) = .1, P(X | Y3) = .9, P(Y1) = .1, P(Y2) = .4. </p> <p>Find P(Y1 | X).</p> <p>P(Y1 | X) =</p> <p>For this one I thought that all I had to do was P(X | Y1)*P(Y1) / [P(X | Y1)*P(Y1) + P(X | Y2)*P(Y2) + P(X | Y3)*P(Y3)].</p> <p>But when I do that I am not getting the correct answer. Is it possible that the value of P(Y3) is not .1, and if it is not, what is it? </p>
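<p>Since Y1, Y2, Y3 partition S, the missing prior is forced: P(Y3) = 1 − 0.1 − 0.4 = 0.5 (not 0.1). A short Python check of the Bayes computation, added here:</p>

```python
# Bayes' theorem with the prior P(Y3) recovered from the partition condition.
p_x_given = {"Y1": 0.8, "Y2": 0.1, "Y3": 0.9}
p_y = {"Y1": 0.1, "Y2": 0.4}
p_y["Y3"] = 1 - p_y["Y1"] - p_y["Y2"]            # = 0.5, since the Y's partition S

p_x = sum(p_x_given[y] * p_y[y] for y in p_y)    # law of total probability: 0.57
p_y1_given_x = p_x_given["Y1"] * p_y["Y1"] / p_x # 0.08 / 0.57
assert round(p_y1_given_x, 4) == 0.1404
```
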
hamam_Abdallah
369,188
<p>Let</p> <p>$$v_n=\frac{\sum_{k=0}^n u_k}{n+1}.$$</p> <p>We have</p> <p>$$\frac{u_{n+1}^2}{n+1}=v_n.$$</p> <p>If $\lim_{n\to +\infty}u_n=L$, then</p> <p>$0=\lim_{n\to+\infty} v_n=L$</p> <p>using the Cesàro mean.</p> <p>Now, if $(u_n)$ is increasing and</p> <p>$u_0=a&gt;0$, the limit can't be $0$.</p> <p>Thus $(u_n)$ diverges.</p> <p>OR, WITHOUT CESÀRO:</p> <p>We have</p> <p>$$u_{n+1}-u_n=\frac{u_n}{u_n+u_{n+1}}$$</p> <p>and when $n\to +\infty$,</p> <p>$$L-L=\frac{L}{2L},$$</p> <p>which proves that</p> <p>$(u_n)$ diverges.</p>
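<p>The displayed identities are consistent with the recurrence $u_{n+1}^2 = u_n^2 + u_n$ (an assumption here, since the original question is not shown). A quick Python iteration added for illustration: the increments approach $1/2$, so $u_n$ grows roughly like $n/2$ and cannot converge.</p>

```python
# Iterate u_{n+1} = sqrt(u_n^2 + u_n), i.e. u_{n+1}^2 - u_n^2 = u_n.
from math import sqrt

u = 1.0                                  # u_0 = a > 0
prev = u
for n in range(100_000):
    prev, u = u, sqrt(u * u + u)
assert 0.49 < u - prev < 0.5             # increment u_n/(u_n + u_{n+1}) -> 1/2
assert u > 10_000                        # the sequence is unbounded (~ n/2)
```
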
2,740,954
<p>Determine the price elasticity of demand and the marginal revenue if $q = 30-4p-p^2$, where q is quantity demanded, p is price, and p=3.</p> <p>I solved the first part:</p> <p>Price elasticity of demand = $-\frac{p}{q} \frac{dq}{dp}$</p> <p>On solving the above I got the answer $\frac{10}{3}$.</p> <p>But on solving for marginal revenue I am getting -10, while the correct answer given is $\frac{21}{10}$.</p> <p>Any hint is appreciated; please help.</p>
Trurl
72,915
<p>Try this. Total revenue TR is $pq$. Marginal revenue is the change in TR with a change in $quantity$ (not price, as I incorrectly stated in my comment), so marginal revenue is $\frac{\partial TR}{\partial q}$ or $$\frac{\partial (pq)}{\partial p}\frac{\partial p}{\partial q}.$$ Revenue is $pq$, or $30p−4p^2−p^3$, so $\frac{\partial TR}{\partial p} = 30−8p−3p^2 = 30−24−27=−21$ at $p=3$. But then, as the OP correctly calculated, $\frac{\partial q}{\partial p}=-10$, so $$\frac{\partial (pq)}{\partial p}\frac{\partial p}{\partial q}= \frac{-21}{-10}.$$</p>
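<p>A numerical confirmation via central differences, added here (it avoids any computer algebra system):</p>

```python
# MR = dTR/dq = (dTR/dp) * (dp/dq), evaluated numerically at p = 3.
def q(p):
    return 30 - 4 * p - p * p            # demand

def tr(p):
    return p * q(p)                      # total revenue TR = p*q

h, p0 = 1e-6, 3.0
dtr_dp = (tr(p0 + h) - tr(p0 - h)) / (2 * h)   # ≈ -21
dq_dp = (q(p0 + h) - q(p0 - h)) / (2 * h)      # ≈ -10
mr = dtr_dp / dq_dp                            # chain rule: 21/10 = 2.1
assert abs(mr - 2.1) < 1e-6
```
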
1,805,615
<p>I have one problem. I am sure it is not complicated, but I need help to see whether I am, at least, on the right path.</p> <p><strong>Problem: Let $S=Span\{(0,-2,3),(1,1,1),(2, -2, 8)\}\subseteq \mathbb R^3$. Find a subspace $T$ of the space $\mathbb R^3$ so that $\mathbb R^3=S \oplus T$.</strong></p> <p>Here is what I have done so far:</p> <ol> <li>Since $S$ is the span of the vectors $(0,-2,3),(1,1,1),(2, -2, 8)$, that means that $S$ has all vectors that are linear combinations of those three vectors.</li> <li>We are searching for a subspace $T$, but we need to keep in mind that $\mathbb R^3=S \oplus T$, which means that $S\cap T=\{\overrightarrow 0\}$. So, $T$ would have all those vectors that cannot be obtained as a linear combination of vectors from $S$.</li> <li>After that, I placed the vectors from $S$ into a matrix:\begin{bmatrix} 0 &amp; 1 &amp; 2 \\ -2 &amp; 1 &amp; -2 \\ 3 &amp; 1 &amp; 8 \\ \end{bmatrix} and I found its rank is $2$, which means that $\dim S=2$. We also know that $\dim\mathbb R^3=3$. Now, based on the formula $\dim\mathbb R^3=\dim(S\oplus T)=\dim S+\dim T$, we get that the dimension of $T$ should be $1$. That would mean that $T$ needs to have, of course, $3$ vectors, but the rank of the matrix $[T]$ should be one. Am I right to assume those $3$ vectors would be something like $$T=\{(\alpha_0 a,\alpha_0 b,\alpha_0 c),(\alpha_1 a, \alpha_1 b, \alpha_1 c),(\alpha_2 a, \alpha_2 b, \alpha_2 c)\}$$, where $\alpha_0, \alpha_1$ and $\alpha_2$ are scalars? For example: $$[T]=\begin{bmatrix} 2 &amp; 7 &amp; 1 \\ 4 &amp; 14 &amp; 2 \\ 8 &amp; 28 &amp; 4 \\ \end{bmatrix}$$ The rank of that matrix would be one, making the dimension of $T$ one. So, $T$ <em>is the span of one vector</em>.</li> </ol> <p>I searched here and found a similar problem, but I am not sure what $T$ would really look like. Would $T$ be the span of the vector $(1,0,0)$, because the span of that vector cannot produce any nonzero vector of $S$? Can $T$ be the span of any vector that completes a basis of $\mathbb R^3$? 
For example, $T=\operatorname{span}\{(1,1,0)\}$?</p> <p>Thank you.</p> <p>Edit: changed the last question.</p>
M. Vinay
152,030
<p>Forming a matrix with the vectors in $S$ as column vectors (as you have done), we get \begin{equation*} A = \begin{bmatrix} 0 &amp; 1 &amp; 2\\ -2 &amp; 1 &amp; -2\\ 3 &amp; 1 &amp; 8 \end{bmatrix}. \end{equation*}</p> <p>Now, the span of $S$ is the same as the column space of $A$, and we can find a basis for this from the column <a href="https://en.wikipedia.org/wiki/Row_echelon_form/" rel="nofollow" title="Echelon Form &#40;Wikipedia&#41;">echelon form</a> of $A$, which is \begin{equation*} \begin{bmatrix} 1 &amp; 0 &amp; 0\\ 1 &amp; -2 &amp; 0\\ 1 &amp; 3 &amp; 0 \end{bmatrix}. \end{equation*}</p> <p>Now, there are two linearly independent vectors, which form a basis of the span of $S$. To get a complete basis of $\mathbb R^3$, you need one more vector, which is linearly independent from these two, and an obvious choice is (say) $v = \begin{bmatrix}0 \\ 0 \\ 1\end{bmatrix}$. Therefore, $T = \operatorname{span}(\{v\})$.</p>
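<p>A small determinant check, added here as an illustration: the matrix $A$ above is singular (so $\dim S = 2$), while replacing the dependent column by $v=(0,0,1)^T$ gives a nonsingular matrix, confirming $\mathbb R^3 = S \oplus \operatorname{span}(\{v\})$.</p>

```python
# 3x3 determinant by cofactor expansion along the first row.
def det3(m):
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

A = [[0, 1, 2], [-2, 1, -2], [3, 1, 8]]     # columns are the spanning vectors of S
assert det3(A) == 0                          # the columns are dependent: dim S = 2

B = [[0, 1, 0], [-2, 1, 0], [3, 1, 1]]      # third column replaced by v = (0, 0, 1)
assert det3(B) != 0                          # full rank: the direct sum is achieved
```
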
48,864
<p>I can't resist asking this companion question to the <a href="https://mathoverflow.net/questions/48771/proofs-that-require-fundamentally-new-ways-of-thinking"> one of Gowers</a>. There, Tim Dokchitser suggested the idea of Grothendieck topologies as a fundamentally new insight. But Gowers' original motivation is to probe the boundary between a human's way of thinking and that of a computer. I argued, therefore, that Grothendieck topologies might be more natural to computers, in some sense, than to humans. It seems Grothendieck always encouraged people to think of an object in terms of the category that surrounds it, rather than its internal structure. That is, even the most lovable mathematical structure might be represented simply as a symbol $A$, and its special properties encoded in arrows $A\rightarrow B$ and $C\rightarrow A$, that is, a grand combinatorial network. I'm tempted to say that the idea of a Grothendieck topology is something of an obvious corollary of this framework. It's not something I've devoted much thought to, but it seems this is exactly the kind of reasoning more agreeable to a computer than to a woolly, touchy-feelly thinker like me.</p> <p>So the actual question is, what other mathematical insights do you know that might come more naturally to a computer than to a human? I won't try here to define computers and humans, for lack of competence. I don't think having a deep knowledge of computers is really a prerequisite for the question or for an answer. But it would be nice if your examples were connected to substantial mathematics. </p> <p>I see that this question is subjective (but not argumentative in intent), so if you wish to close it on those grounds, that's fine.</p> <p>Added, 11 December: Being a faulty human, I had an inexplicable attachment to the past tense. But, being weak-willed on top of it all, I am bowing to peer pressure and changing the title.</p>
Allen Knutson
391
<p>I think the human/computer dichotomy you set up should be extended to a human/mathematician/computer trichotomy, just because a substantial portion of "mathematical maturity" is about learning to think like a computer, in your sense.</p> <p>Anyway I've just put that in place to try and shore up my example. It seems that humans read "let's say we have X and Y..." and automatically take the extra step of assuming X and Y are unequal. Computers wouldn't bother to take this extra step. Mathematicians, or at least I myself, split into the X=Y and X$\neq $Y cases but try to obviate that split when writing down a proof.</p>
48,864
<p>I can't resist asking this companion question to the <a href="https://mathoverflow.net/questions/48771/proofs-that-require-fundamentally-new-ways-of-thinking"> one of Gowers</a>. There, Tim Dokchitser suggested the idea of Grothendieck topologies as a fundamentally new insight. But Gowers' original motivation is to probe the boundary between a human's way of thinking and that of a computer. I argued, therefore, that Grothendieck topologies might be more natural to computers, in some sense, than to humans. It seems Grothendieck always encouraged people to think of an object in terms of the category that surrounds it, rather than its internal structure. That is, even the most lovable mathematical structure might be represented simply as a symbol $A$, and its special properties encoded in arrows $A\rightarrow B$ and $C\rightarrow A$, that is, a grand combinatorial network. I'm tempted to say that the idea of a Grothendieck topology is something of an obvious corollary of this framework. It's not something I've devoted much thought to, but it seems this is exactly the kind of reasoning more agreeable to a computer than to a woolly, touchy-feelly thinker like me.</p> <p>So the actual question is, what other mathematical insights do you know that might come more naturally to a computer than to a human? I won't try here to define computers and humans, for lack of competence. I don't think having a deep knowledge of computers is really a prerequisite for the question or for an answer. But it would be nice if your examples were connected to substantial mathematics. </p> <p>I see that this question is subjective (but not argumentative in intent), so if you wish to close it on those grounds, that's fine.</p> <p>Added, 11 December: Being a faulty human, I had an inexplicable attachment to the past tense. But, being weak-willed on top of it all, I am bowing to peer pressure and changing the title.</p>
o a
22,247
<p>This paper on <a href="http://mishap.sdf.org/by:gavrilovich-and-hasson/what:a-homotopy-theory-for-set-theory/Exercises_de_style_A_homotopy_theory_for_set_theory-II.pdf" rel="nofollow"> homotopy and set theory</a> seems to take this question seriously: if you restrict yourself to posetal categories and try to do model categories in a brute-force naive way, you arrive to definitions of some set-theoretic invariants...So maybe we can say that Shelah is a computer. ;) </p>
3,538,305
<blockquote> <p>Given that the differential equation</p> <p><span class="math-container">$f(x,y) \frac {dy}{dx} + x^2 +y = 0$</span> is exact and <span class="math-container">$f(0,y) =y^2$</span>, then <span class="math-container">$f(1,2)$</span> is</p> </blockquote> <p>Choose the correct option:</p> <p><span class="math-container">$a)$</span> <span class="math-container">$5$</span></p> <p><span class="math-container">$b)$</span> <span class="math-container">$4$</span></p> <p><span class="math-container">$c)$</span> <span class="math-container">$6$</span></p> <p><span class="math-container">$d)$</span> <span class="math-container">$0$</span></p> <p>My attempt: <span class="math-container">$(x^2+y)dx -f(x,y) dy =0$</span>. Here <span class="math-container">$M =(x^2 +y)$</span>, <span class="math-container">$N=f(x,y)$</span>.</p> <p>I know that for exactness <span class="math-container">$\frac{dM}{dy} = \frac{dN}{dx}$</span>, that is, <span class="math-container">$f(x,y) =1$</span>.</p> <p>After that I'm not able to proceed further.</p>
Qurultay
338,156
<p>Your equation is <span class="math-container">$$(x^2+y)dx+f(x,y)dy=0$$</span> thus from <span class="math-container">$\frac{dM}{dy}=\frac{dN}{dx}$</span> we have <span class="math-container">$$\frac{df}{dx}=1$$</span> or <span class="math-container">$$f(x,y)=x+h(y)$$</span> Now ...</p>
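<p>Completing the hint (added here): $f(0,y)=y^2$ forces $h(y)=y^2$, so $f(x,y)=x+y^2$ and $f(1,2)=1+4=5$, i.e. option (a). A quick numeric check that $\frac{dM}{dy}=\frac{df}{dx}=1$ everywhere:</p>

```python
# Exactness check for M dx + f dy = 0 with M = x^2 + y and f = x + y^2.
def f(x, y):
    return x + y * y          # x + h(y) with h(y) = y^2 fixed by f(0, y) = y^2

def M(x, y):
    return x * x + y          # coefficient of dx

h = 1e-6
for x, y in [(0.3, 1.2), (2.0, -0.7), (1.0, 2.0)]:
    dM_dy = (M(x, y + h) - M(x, y - h)) / (2 * h)   # = 1
    df_dx = (f(x + h, y) - f(x - h, y)) / (2 * h)   # = 1, so exactness holds
    assert abs(dM_dy - df_dx) < 1e-6
assert f(0, 3.0) == 9.0       # boundary condition f(0, y) = y^2
assert f(1, 2) == 5           # the requested value: option (a)
```
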
729,444
<p>Let there be two lists $l_1 = [1,\cdots,n]$ and $l_2 = [randint(1,n)_1,\cdots,randint(1,n)_m]$, where $randint(1,n)_i\neq randint(1,n)_j \,\,\, \forall i\neq j$ and $n&gt;m$. How can I find the number of elements $x\in l_1$ to select such that the probability that some selected $x \in l_2$ is $1/2$? I'm trying to use the birthday paradox but I can't get it.</p> <p>$randint(x,y)$ picks a random number between $x$ and $y$.</p>
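<p>Under the "at least one selected element lies in $l_2$" reading, the miss probability for $k$ distinct selections is hypergeometric, $\binom{n-m}{k}/\binom{n}{k}$, and one can search for the smallest $k$ pushing it below $1/2$. A sketch added here, with illustrative values $n=365$, $m=23$ (these numbers are not from the question):</p>

```python
# Smallest k such that k distinct draws from {1..n} hit a fixed m-subset
# with probability at least 1/2.
from math import comb

def hit_prob(n, m, k):
    return 1 - comb(n - m, k) / comb(n, k)

n, m = 365, 23
k = next(k for k in range(1, n - m + 1) if hit_prob(n, m, k) >= 0.5)
assert k == 11                                    # smallest such k for these values
assert hit_prob(n, m, k - 1) < 0.5 <= hit_prob(n, m, k)
```
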
Marc van Leeuwen
18,880
<p>If you know that change of basis is realised by conjugating by an appropriate invertible matrix, then you can reason in terms of matrices as follows. $E_{i,j}$ is the matrix with unique nonzero entry $1$ at position $i,j$.</p> <ul> <li><p>The (unique) matrix $M$ of $T$ can have no nonzero off-diagonal entries: if $a_{i,j}$ were such an entry, then conjugating by $I+E_{j,i}$ adds $a_{i,j}$ to the diagonal entries at $(j,j)$, and subtracts it from the entry at $(i,i)$, while it was supposed to leave all entries unchanged.</p></li> <li><p>Being diagonal, $M$ must have all diagonal entries equal, since conjugating by a permutation matrix permutes the diagonal entries.</p></li> </ul>
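<p>The first bullet can be checked concretely (added here): conjugating by $I+E_{j,i}$ with a pure-Python matrix product shows the entry $a_{i,j}$ being added at $(j,j)$ and subtracted at $(i,i)$, exactly as claimed.</p>

```python
# Conjugate M by the transvection I + E_{j,i}; since E_{j,i}^2 = 0 for i != j,
# its inverse is I - E_{j,i}.
def matmul(A, B):
    n = len(A)
    return [[sum(A[r][k] * B[k][c] for k in range(n)) for c in range(n)]
            for r in range(n)]

def conjugate(M, i, j):
    n = len(M)
    P = [[int(a == b) for b in range(n)] for a in range(n)]
    Pinv = [row[:] for row in P]
    P[j][i], Pinv[j][i] = 1, -1
    return matmul(matmul(P, M), Pinv)

M = [[5, 7, 0], [0, 5, 0], [0, 0, 5]]  # off-diagonal entry a_{0,1} = 7
C = conjugate(M, 0, 1)                 # conjugate by I + E_{1,0}
assert C[1][1] == 5 + 7                # 7 added to the diagonal at (j, j)
assert C[0][0] == 5 - 7                # and subtracted at (i, i)
```
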
4,064,084
<p>Does there exist a countable family of infinite sets <span class="math-container">$\{A_n:n\in\mathbb N\}\subset\mathcal P(\mathbb N)$</span> satisfying the following property: <span class="math-container">$$\text{For every infinite set }I\in\mathcal P(\mathbb N),\text{ there is }n\in\mathbb N\text{ such that }A_n\subset I\text{ ?}$$</span></p> <p>If we require the family to have cardinality <span class="math-container">$\mathfrak c$</span>, then the question is trivial, but for the countable case I'm stuck.</p>
moray eel
892,232
<p>Using hgmath's comment, here is a way to write <span class="math-container">$n^3$</span> as a sum of five cubes of integers with absolute values <span class="math-container">$&lt;|n|$</span> for <span class="math-container">$n=2k$</span> and <span class="math-container">$k\ge 8$</span>. We write <span class="math-container">$$(2k)^3=(2k-4)^3+(k+7)^3-(k-9)^3-10^3-2^3.$$</span> For this to work, we need <span class="math-container">$f(0)$</span>, <span class="math-container">$f(1)$</span>, <span class="math-container">$f(2)$</span>, ..., <span class="math-container">$f(15)$</span> as base cases. I already have <span class="math-container">$f(0)$</span> through <span class="math-container">$f(12)$</span>.</p> <p>For <span class="math-container">$f(13)$</span> and <span class="math-container">$f(15)$</span> we use hgmath's comment (note that hgmath's comment is good for <span class="math-container">$k\geq 4$</span>, but I had already found up to <span class="math-container">$f(12)$</span>). For <span class="math-container">$f(14)$</span>, we can use <span class="math-container">$14^3=12^3+10^3+2^3+2^3$</span>.</p>
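<p>Both the polynomial identity and the $14^3$ decomposition are easy to machine-check (added here):</p>

```python
# (2k)^3 = (2k-4)^3 + (k+7)^3 + (-(k-9))^3 + (-10)^3 + (-2)^3 for all k,
# with all five absolute values strictly below n = 2k once k >= 8.
def five_cubes_ok(k):
    parts = [2 * k - 4, k + 7, -(k - 9), -10, -2]
    return (2 * k) ** 3 == sum(t ** 3 for t in parts)

assert all(five_cubes_ok(k) for k in range(8, 500))
assert all(max(2 * k - 4, k + 7, abs(k - 9), 10, 2) < 2 * k for k in range(8, 500))
assert 14 ** 3 == 12 ** 3 + 10 ** 3 + 2 ** 3 + 2 ** 3
```
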
3,465,018
<p>Compute <span class="math-container">$\pi_{2}(S^2 \vee S^2).$</span></p> <p><strong>Hint:</strong> Use universal covering thm. and use Van Kampen to show it is simply connected.</p> <p>Still I am unable to solve it, could anyone give me more detailed hint and the general idea of the solution.</p>
kamills
497,007
<p>Here's another quick way, using Hurewicz.</p> <p><span class="math-container">$\pi_1(S^2 \vee S^2) \cong 0$</span> by van Kampen. Then the Hurewicz theorem asserts that <span class="math-container">$\pi_2(S^2 \vee S^2) \cong H_2(S^2 \vee S^2) \cong \mathbb{Z} \oplus \mathbb{Z}$</span>.</p>
2,094,596
<p>I'm questioning myself as to why indeterminate forms arise, and why limits that apparently give us indeterminate forms can be resolved with some arithmetic tricks. Why is $$\begin{equation*} \lim_{x \rightarrow +\infty} \frac{x+1}{x-1}=\frac{+\infty}{+\infty}, \end{equation*} $$</p> <p>while if I do a simple operation,</p> <p>$$\begin{equation*} \lim_{x \rightarrow +\infty} \frac{x(1+\frac{1}{x})}{x(1-\frac{1}{x})}=\lim_{x \rightarrow +\infty}\frac{(1+\frac{1}{x})}{(1-\frac{1}{x})}=1? \end{equation*} $$</p> <p>I understand the logic of the process, but I can't understand why we get different results by "not" changing anything.</p>
StackTD
159,845
<p>So you're looking at something of the form $$\lim_{x \to +\infty} f(x) = \lim_{x \to +\infty}\frac{g(x)}{h(x)} $$ and if this limit exists, say the limit is $L$, then it doesn't matter how we rewrite $f(x)$. However, it's possible you can write $f(x)$ in different ways; e.g. as the quotient of different functions: $$f(x) = \frac{g_1(x)}{h_1(x)} = \frac{g_2(x)}{h_2(x)}$$ The limit of $f$ either exists or not, but it's possible that the individual limits in the numerator and denominator exist, or not. More specifically, it's possible that $$\lim_{x \to +\infty} g_1(x) \quad\mbox{and}\quad \lim_{x \to +\infty} h_1(x)$$ do not exist, while $$\lim_{x \to +\infty} g_2(x) \quad\mbox{and}\quad \lim_{x \to +\infty} h_2(x)$$do exist. What you did by dividing numerator and denominator by $x$, is writing $f(x)$ as <em>another</em> quotient of functions but in such a way that the individual limits in the numerator and denominator now <em>do</em> exist, which allows the use of the rule in blue (<em>"limit of a quotient, is the quotient of the limits; if these two limits exist"</em>): $$\lim_{x \to +\infty} f(x) = \lim_{x \to +\infty}\frac{g_1(x)}{h_1(x)} = \color{blue}{ \lim_{x \to +\infty}\frac{g_2(x)}{h_2(x)} = \frac{\displaystyle \lim_{x \to +\infty} g_2(x)}{\displaystyle \lim_{x \to +\infty} h_2(x)}} = \cdots$$and in this way, also find $\lim_{x \to +\infty} f(x)$.</p> <hr> <p>When you try to apply that rule but the individual limits do not exist, you "go back" and try something else, such as rewriting/simplifying $f(x)$; this is precisely what happens: $$\begin{align} \lim_{x \rightarrow +\infty} f(x) &amp; = \lim_{x \rightarrow +\infty} \frac{x+1}{x-1} \color{red}{\ne} \frac{\displaystyle \lim_{x \rightarrow +\infty} (x+1)}{\displaystyle \lim_{x \rightarrow +\infty} (x-1)}= \frac{+\infty}{+\infty} = \; ? 
\\[7pt] &amp; = \lim_{x \rightarrow +\infty} \frac{1+\tfrac{1}{x}}{1-\tfrac{1}{x}} \color{green}{=} \frac{\displaystyle \lim_{x \rightarrow +\infty} (1+\tfrac{1}{x})}{\displaystyle \lim_{x \rightarrow +\infty} (1-\tfrac{1}{x})} = \frac{1+0}{1-0} = 1 \\ \end{align}$$</p>
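<p>A small numeric illustration of this point (my addition, not part of the original answer): both ways of writing $f(x)=\frac{x+1}{x-1}$ give identical values, and those values approach $1$; only the second form has a numerator and denominator with finite individual limits.</p>

```python
# f written two ways: same values, but only the second form has
# numerator and denominator with finite individual limits.

def f1(x):
    return (x + 1) / (x - 1)

def f2(x):
    return (1 + 1 / x) / (1 - 1 / x)

for x in [10.0, 1e3, 1e6, 1e9]:
    print(x, f1(x), f2(x))   # both value columns approach 1
```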
1,268,431
<p>$$\lim_{x\to 2} \frac {\sin(x^2 -4)}{x^2 - x -2} $$</p> <p>Attempt at solution:</p> <p>So I know I can rewrite denominator:</p> <p>$$\frac {\sin(x^2 -4)}{(x-2)(x+1)} $$</p> <p>So what's next? I feel like I'm supposed to multiply by conjugate of either num or denom.... but by what value...?</p> <p>Don't tell me I'm simply supposed to plug in $x = 2$</p> <p>I need to simplify fractions somehow first, how?</p>
Jordan Glen
225,803
<p>$$\frac {\sin(x^2 -4)}{(x-2)(x+1)}\cdot\frac{x+2}{x+2} = \frac{(x+2)\sin(x^2 - 4)}{(x+1)(x^2 - 4)} = \dfrac{x+2}{x+1}\cdot \dfrac{\sin(x^2 - 4)}{x^2 - 4}$$</p>
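<p>Carrying this to its conclusion (a numeric check of mine, not part of the original answer): since $\frac{x+2}{x+1}\to\frac43$ and $\frac{\sin(x^2-4)}{x^2-4}\to 1$ as $x\to 2$, the limit is $\frac43$, which the values below confirm.</p>

```python
import math

# sin(x^2 - 4)/(x^2 - x - 2) should approach 4/3 as x -> 2

def g(x):
    return math.sin(x ** 2 - 4) / (x ** 2 - x - 2)

for h in [1e-2, 1e-4, 1e-6]:
    print(g(2 + h), g(2 - h))   # both columns approach 4/3 = 1.3333...

print(4 / 3)
```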
3,115,830
<p>So my logic to this up until now has been that for any <span class="math-container">$x$</span> the function <span class="math-container">$\left\lfloor\frac{\lceil x\rceil}{2}\right\rfloor$</span> will return an integer that is an element of <span class="math-container">$\mathbb Z$</span>. Thus since you can map any <span class="math-container">$x$</span> in the domain to any <span class="math-container">$y$</span> in the co-domain it is surjective.</p> <p>Now I'm not sure if this counts as a full proof, and whether the function is injective.</p>
Robert Lewis
67,071
<p>With</p> <p><span class="math-container">$f(s) = a T + b N + c(s) B, \tag 1$</span></p> <p>and</p> <p><span class="math-container">$\Vert f(s) \Vert = 1, \tag 2$</span></p> <p>it follows that <span class="math-container">$s$</span> is the arc-length along <span class="math-container">$f(s)$</span>; thus</p> <p><span class="math-container">$T = \dot f(s) = a \dot T + b \dot N + \dot c(s) B + c(s) \dot B; \tag 3$</span></p> <p>now introducing the Frenet-Serret equations</p> <p><span class="math-container">$\dot T = \kappa N, \tag 4$</span></p> <p><span class="math-container">$\dot N = -\kappa T + \tau B, \tag 5$</span></p> <p><span class="math-container">$\dot B = -\tau N, \tag 6$</span></p> <p>and inserting them into (3) we find</p> <p><span class="math-container">$T = a(\kappa N) + b(-\kappa T + \tau B) + \dot c(s) B -c(s) \tau N$</span> <span class="math-container">$= -b\kappa T + (a\kappa - c(s) \tau)N + (b\tau + \dot c(s))B; \tag 7$</span></p> <p>comparing the coefficients of the three orthonormal vectors <span class="math-container">$T$</span>, <span class="math-container">$N$</span>, and <span class="math-container">$B$</span> yields</p> <p><span class="math-container">$-b\kappa = 1, \tag 8$</span></p> <p><span class="math-container">$a\kappa - c(s) \tau = 0, \tag 9$</span></p> <p><span class="math-container">$b\tau + \dot c(s) = 0; \tag{10}$</span></p> <p>we thus see, in accord with our OP Gary Zoldiek, that</p> <p><span class="math-container">$\kappa = -\dfrac{1}{b}, \; \text{a constant}, \tag{11}$</span></p> <p>which, since we conventionally assume <span class="math-container">$\kappa &gt;0$</span>, implies that</p> <p><span class="math-container">$b &lt; 0; \tag{12}$</span></p> <p>from (9), (10) (11) we also have</p> <p><span class="math-container">$c(s) \tau = a\kappa = -\dfrac{a}{b}, \tag{13}$</span></p> <p>and</p> <p><span class="math-container">$\dot c(s) = -b\tau; \tag{14}$</span></p> <p>now since</p> <p><span class="math-container">$a \ne 0 \ne b, 
\tag{15}$</span></p> <p>(13) implies that</p> <p><span class="math-container">$c(s) \ne 0 \ne \tau, \tag{16}$</span></p> <p>whence we may write</p> <p><span class="math-container">$\tau = -\dfrac{a}{bc(s)}, \tag{17}$</span></p> <p>hence from (14),</p> <p><span class="math-container">$\dot c(s) = -b\tau = \dfrac{a}{c(s)}; \tag{18}$</span></p> <p>thus,</p> <p><span class="math-container">$\dfrac{d}{ds} (\dfrac{1}{2} c^2(s)) = c(s) \dot c(s) = a, \tag{19}$</span></p> <p>or</p> <p><span class="math-container">$\dfrac{d}{ds} (c^2(s)) = 2c(s) \dot c(s) = 2a; \tag{20}$</span></p> <p>we integrate 'twixt <span class="math-container">$s_0$</span> and <span class="math-container">$s$</span>:</p> <p><span class="math-container">$c^2(s) - c^2(s_0) = 2a(s - s_0); \tag{21}$</span></p> <p><span class="math-container">$c^2(s) = c^2(s_0) + 2a(s - s_0), \tag{22}$</span></p> <p><span class="math-container">$c(s) = \pm \sqrt{c^2(s_0) + 2a(s - s_0)}, \tag{23}$</span></p> <p>as long as the signs allow; it then follows from (17) that </p> <p><span class="math-container">$\tau(s) = \mp \dfrac{a}{b \sqrt{c^2(s_0) + 2a(s - s_0)}}. \tag{24}$</span></p> <p>So, what have we learned about <span class="math-container">$a$</span>, <span class="math-container">$b$</span>, <span class="math-container">$c$</span>, <span class="math-container">$\kappa$</span> and <span class="math-container">$\tau$</span>?</p> <p>Well, <span class="math-container">$b &lt; 0$</span>, <span class="math-container">$c(s)$</span> is given by (23), <span class="math-container">$\kappa = -1/b$</span> is constant, and <span class="math-container">$\tau$</span> is as in (24); apparently <span class="math-container">$a$</span> and <span class="math-container">$c(s_0)$</span> are the two free parameters upon which all else swings. 
We should also observe that <span class="math-container">$c(s)$</span> and <span class="math-container">$\tau(s)$</span> will become singular when <span class="math-container">$s$</span> passes through</p> <p><span class="math-container">$s = s_0 - \dfrac{c^2(s_0)}{2a} \tag{25}$</span></p> <p>where <span class="math-container">$c(s) = 0$</span>; depending on the sign of <span class="math-container">$a$</span>, the curve <span class="math-container">$f(s)$</span> may be extended arbitrarily far in the direction of either increasing or decreasing <span class="math-container">$s$</span>. </p> <p>I think there are many more intriguing facts about <span class="math-container">$f(s)$</span> which will be revealed by further scrutiny of these results, but for now I haven't the time to say more.</p>
715,361
<p>Let $\Omega$ be a bounded domain and $f_n\in L^2(\Omega)$ be a sequence such that $$\int_\Omega f_nq\operatorname{dx}\leq C&lt;\infty\qquad \text{for all}\quad q\in H^1(\Omega),\ \|q\|_{H^1(\Omega)}\leq1,\ n\in\mathbb{N}.\quad (1) $$ Is it then possible to conclude that $$ \sup_{n\in\mathbb{N}}\|f_n\|_{L^2(\Omega)}\leq C? $$</p> <p>Here, $H^1(\Omega)$ denotes the usual Sobolev Hilbert space.</p> <p>Obviously, this statement would be true if we were to replace (1) with $$\int_\Omega f_nq\operatorname{dx}\leq C&lt;\infty\qquad \text{for all}\quad q\in L^2(\Omega),\ \|q\|_{L^2(\Omega)}\leq1,\ n\in\mathbb{N}, $$</p> <p>and maybe the dense and compact embedding $H^1(\Omega)\hookrightarrow L^2(\Omega)$ is of help, but I'm not sure of it.</p> <p>Edit: By now I'm pretty sure that this statement doesn't hold. We only have a bound in the dual of $H^1(\Omega)$. But so far I have failed to assemble a conclusive argument!</p>
5xum
112,884
<p>The trainer is right, there is no solution. Your approach of crossing the equations has an implicit demand that the denominators are nonzero, so your approach should show there are no solutions as well.</p>
262,173
<p>Consider $x^2 + y^2 = r^2$. Then take the square of this to give $(x^2 + y^2)^2 = r^4$. Clearly, from this $r^4 \neq x^4 + y^4$. </p> <p>But consider: let $x=a^2$, $y = b^2$ and $r = c^2$. Sub this into the first eqn to get $(a^2)^2 + (b^2)^2 = (c^2)^2$. $x = a^2 =&gt; a = |x|,$ and similarly for $b.$</p> <p>Now put this in to give $|x|^4 + |y|^4 = r^4 =&gt; (-x)^4 + (-y)^4 = r^4 $ or $ (x)^4 + (y)^4 = r^4,$ both of which give $ x^4 + y^4 = r^4$. Where is the flaw in this argument?</p> <p>Many thanks.</p>
gt6989b
16,192
<p>$x = a^2$ does not imply that $a = |x|$, rather $|a| = \sqrt{x}$.</p>
716,036
<blockquote> <p>Suppose that a curve $\mathbf\gamma$ in $\mathbb R^3$ has constant strictly positive curvature function $\mathbf\kappa(s)$, and constant non-zero torsion function $\mathbf\tau(s)$. Prove that the curve is a helix.</p> </blockquote> <p>I think it is easier to work backward here. First I can show that a helix satisfies the two conditions on curvature and torsion. Second, I want to use the fundamental theorem of curves to show that curve satisfying these two conditions must be a helix. However, there is a gap here. The fundamental theorem requires the function of curvature and torsion to uniquely identify a curve up to rigid motion. However, this question only gives a qualitative description of the two functions. How to make up this gap, please? Thank you! </p>
Yiorgos S. Smyrlis
57,021
<p>A curve in $\mathbb R^3$ can be uniquely (up to a rigid motion) reproduced once its curvature and torsion are known. If ${T}$, ${N}$ and ${B}$ form its moving orthonormal frame (tangent, normal and binormal), then they satisfy the system (Frenet-Serret) $$ T'=\kappa N,\\ N'=-\kappa T-\tau B,\\ B'=\tau N, $$ or $$ H'=\left(\begin{matrix} 0 &amp; \kappa &amp; 0\\ -\kappa &amp; 0 &amp; -\tau \\ 0 &amp; \tau &amp; 0 \end{matrix}\right)\,H $$ where $H$ is the matrix with rows $T,N$ and $B$.</p> <p>Next, if we take the exponential of $A=\left(\begin{matrix} 0 &amp; \kappa &amp; 0\\ -\kappa &amp; 0 &amp; -\tau \\ 0 &amp; \tau &amp; 0 \end{matrix}\right)$, then, since $\kappa$ and $\tau$ are constant, $$ H(t)=\exp(tA)H(0), $$ and if $T(t)$ is the first row, then $\gamma(t)=\gamma(0)+\int_0^t T(s)\,ds.$</p>
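<p>Conversely, one can verify numerically that the standard helix $\gamma(t)=(a\cos t,\, a\sin t,\, bt)$ has constant curvature $\frac{a}{a^2+b^2}$ and constant torsion $\frac{b}{a^2+b^2}$ (a Python sketch I added, using the usual formulas $\kappa=\frac{|\gamma'\times\gamma''|}{|\gamma'|^3}$ and $\tau=\frac{(\gamma'\times\gamma'')\cdot\gamma'''}{|\gamma'\times\gamma''|^2}$; the sign of $\tau$ depends on the convention chosen):</p>

```python
import math

a, b = 2.0, 1.0   # the helix (a cos t, a sin t, b t)

def derivs(t):
    # exact first, second and third derivatives of the helix
    d1 = (-a * math.sin(t),  a * math.cos(t), b)
    d2 = (-a * math.cos(t), -a * math.sin(t), 0.0)
    d3 = ( a * math.sin(t), -a * math.cos(t), 0.0)
    return d1, d2, d3

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def curvature_torsion(t):
    d1, d2, d3 = derivs(t)
    c = cross(d1, d2)
    kappa = math.sqrt(dot(c, c)) / dot(d1, d1) ** 1.5
    tau = dot(c, d3) / dot(c, c)
    return kappa, tau

for t in [0.0, 0.7, 2.0, 5.0]:
    print(curvature_torsion(t))   # always ~ (0.4, 0.2) = (a, b)/(a^2 + b^2)
```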
4,521,199
<blockquote> <p><strong>Theorem 8.15</strong>: If <span class="math-container">$f$</span> is a continuous and <span class="math-container">$2\pi$</span>-periodic function and if <span class="math-container">$\epsilon&gt;0$</span> is fixed, then there exists a trigonometric polynomial <span class="math-container">$P$</span> such that <span class="math-container">$$\left|P(x)-f(x)\right|&lt;\epsilon$$</span> for all real <span class="math-container">$x$</span>.<br /> <em>Proof</em>: If we identify <span class="math-container">$x$</span> and <span class="math-container">$x+2\pi$</span>, we may regard the <span class="math-container">$2\pi$</span>-periodic functions on <span class="math-container">$\mathbb{R}^1$</span> as functions on the unit circle <span class="math-container">$T$</span>, by means of the mapping <span class="math-container">$x\rightarrow e^{ix}$</span>. The trigonometric polynomials, i.e., the functions of the form <span class="math-container">$$Q(x)=\sum^N_{-N} c_ne^{inx}\qquad(x\mbox{ real})$$</span> form a self-adjoint algebra <span class="math-container">$\mathscr{A}$</span>, which separates points on <span class="math-container">$T$</span>, and which vanishes at no point of <span class="math-container">$T$</span>. Since <span class="math-container">$T$</span> is compact, the Stone-Weierstrass theorem tells us that <span class="math-container">$\mathscr{A}$</span> is dense in <span class="math-container">$\mathscr{C(T)}$</span>. This is exactly what the theorem asserts.</p> </blockquote> <p>I have already seen the proof of this theorem discussed in the site, however I have never found a satisfactory explanation of the passages omitted by Rudin (more advanced tools are usually used to answer the existing questions but that does not help). 
I have tried proposing my attempt at a proof, however I immediately realized it was flawed (it assumed the inverse of the map <span class="math-container">$x \mapsto e^{ix}$</span>, <span class="math-container">$x \in [0,2\pi[$</span>, to be continuous).</p> <p>I think that Rudin is being a little too elliptic here. In fact, I interpreted his first statement as &quot;we can define a continuous function g on <span class="math-container">$S^1$</span> such that: <span class="math-container">$$g(z) = f(x), \hspace{5mm} \text{$x$ is the unique element of $[0,2\pi[$ such that $z=e^{ix}$ &quot;}$$</span> and then the rest of the proof proceeds smoothly. My problem is that there is a non-trivial property of <span class="math-container">$g$</span> that must be proved in order to make use of the Stone-Weierstrass Theorem, that is, <span class="math-container">$g$</span> is continuous. Rudin doesn't even acknowledge that and I did not manage to prove that it is actually continuous so far. Therefore I am not even sure that this is the right interpretation of what is written.</p> <p>If someone could give me some guidance that would be very appreciated!</p>
José Carlos Santos
446,262
<p>Let <span class="math-container">$\log_1\colon\Bbb C\setminus[0,\infty)\longrightarrow\Bbb C$</span> be the antiderivative of <span class="math-container">$\frac1z$</span> which maps <span class="math-container">$-1$</span> into <span class="math-container">$\pi i$</span>. It is a continuous function (actually, it is an analytic function). Then, for each <span class="math-container">$z\in T\setminus\{1\}$</span>,<span class="math-container">$$g(z)=f\left(\frac{\log_1(z)}i\right)=f\left(-i\log_1(z)\right),$$</span>and therefore <span class="math-container">$g$</span> is continuous on <span class="math-container">$T\setminus\{1\}$</span>.</p> <p>Now, let <span class="math-container">$\log_2\colon\Bbb C\setminus(-\infty,0]\longrightarrow\Bbb C$</span> be the antiderivative of <span class="math-container">$\frac1z$</span> which maps <span class="math-container">$1$</span> into <span class="math-container">$0$</span>. Again, it is continuous. And, since <span class="math-container">$f$</span> is periodic with period <span class="math-container">$2\pi$</span>, for each <span class="math-container">$z\in T\setminus\{-1\}$</span>, you have <span class="math-container">$g(z)=f\left(-i\log_2(z)\right)$</span>.</p> <p>This proves that the restrictions of <span class="math-container">$g$</span> to <span class="math-container">$T\setminus\{1\}$</span> and to <span class="math-container">$T\setminus\{-1\}$</span> are continuous. Since these are open subsets of <span class="math-container">$T$</span> whose union is <span class="math-container">$T$</span>, <span class="math-container">$g$</span> is continuous.</p>
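<p>The two branches can be made concrete in a few lines (my illustration, with an arbitrary $2\pi$-periodic $f$): Python's <code>cmath.log</code> is precisely a branch with cut along $(-\infty,0]$, playing the role of $\log_2$, while $\log(-z)+\pi i$ has its cut along $[0,\infty)$ and maps $-1$ to $\pi i$, playing the role of $\log_1$; both recover $f$ on their respective domains.</p>

```python
import cmath
import math

def f(x):
    # any continuous 2*pi-periodic function will do here
    return math.cos(x) + 0.5 * math.sin(2 * x)

def log1(z):
    # branch cut along [0, +inf); maps -1 to i*pi
    return cmath.log(-z) + 1j * math.pi

def g_via_log1(z):   # defined on T \ {1}
    return f((-1j * log1(z)).real)

def g_via_log2(z):   # defined on T \ {-1}; cmath.log cuts along (-inf, 0]
    return f((-1j * cmath.log(z)).real)

for x in [0.3, 1.0, 2.5, 4.0, 6.0]:
    z = cmath.exp(1j * x)
    print(abs(g_via_log1(z) - f(x)), abs(g_via_log2(z) - f(x)))   # ~0, ~0
```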
59,828
<p>Is there a way to display the variable name instead of its value? For example, I need something like <code>varname = 1; function[varname];</code> and the output should be <code>varname</code> instead of <code>1</code>.</p>
RunnyKine
5,709
<p>There's also <code>Defer</code> to accomplish this:</p> <pre><code>varname = 1; Defer @ varname </code></pre> <blockquote> <p>varname</p> </blockquote>
65,658
<p>Suppose the $X_i$'s are i.i.d. with density $f(x) = e^{-x}$, $x \geq 0$. I was able to show that $$P(\limsup X_n/\log{n} =1)=1$$ using Borel-Cantelli.</p> <p>Define $M_n=\max \{X_1,\ldots,X_n\}$; can I claim $M_n/\log{n} \rightarrow 1$ a.s. in this case? Is it still true in general without knowing the distribution of $X_i$?</p>
Leandro
633
<p>This is a small observation, not an answer:</p> <p>the distribution is in fact important; for example, if the random variables are bounded almost surely, the limit is zero a.s.</p> <p>For the unbounded case (which is more likely the one you are thinking about), I just got the trivial lower bound $$ 1\leq \liminf_{n\to\infty} \frac{M_n}{\log n} $$ in the i.i.d. case, by the Borel-Cantelli Lemma, supposing that $$\sum_{n=1}^{\infty} \mathbb{P}(X_1\leq \ln n)^n&lt;+\infty.$$ </p> <p><strong>Edit.</strong> I replaced the limsup by liminf, which is better in this case, based on the comments made by Didier Piau. </p>
3,910,013
<p>I'm preparing for a high school math exam and I came across this question in an old exam.</p> <p>Let <span class="math-container">$f(x) = \dfrac{1}{2(1+x^3)}$</span>.</p> <p><span class="math-container">$\alpha \in (0, \frac{1}{2})$</span> is the only real number such that <span class="math-container">$f(\alpha) = \alpha$</span>.</p> <p><span class="math-container">$(u_n)$</span> is the sequence defined by <span class="math-container">$$ \begin{cases} u_0 = 0 \\ u_{n+1} = f(u_n) \quad \forall n \in \mathbb{N} \end{cases} $$</span> Prove that <span class="math-container">$|u_{n+1} - \alpha| \le \frac{1}{2} |u_n - \alpha|$</span></p> <p>The end goal is to prove that <span class="math-container">$(u_n)$</span> converges to <span class="math-container">$\alpha$</span>. But we have to do it this way instead of finding that both <span class="math-container">$(u_{2n})$</span> and <span class="math-container">$(u_{2n+1})$</span> converge to <span class="math-container">$\alpha$</span>.</p> <p>It's supposed to be a very easy question. But I have been trying for the last two hours and I couldn't find it.</p> <p>It's easy to prove that <span class="math-container">$u_n \in [0, \frac{1}{2}]$</span> by induction.</p> <p>Here's what I tried:</p> <ul> <li>I tried separating the question into two cases <span class="math-container">$u_n \le \alpha$</span> and <span class="math-container">$u_n \ge \alpha$</span>. I got that I just need to prove that <span class="math-container">$u_{n+1} + \frac{1}{2} u_n \ge \frac{3}{2} \alpha$</span>. But I didn't know how to go from here. Substituting <span class="math-container">$u_{n+1}$</span> with <span class="math-container">$\dfrac{1}{2(1+u_n^3)}$</span> seems to only make the problem more complicated.</li> <li>I tried squaring both sides. It only made the expression more complicated.</li> </ul> <p>The problem vaguely reminds me of the epsilon-delta definition of <span class="math-container">$\lim_{x \to \alpha} f(x) = \alpha$</span>. 
But that seems like it won't lead anywhere.</p>
mechanodroid
144,766
<p><strong>Hint:</strong></p> <p>Prove that for <span class="math-container">$x,y \in \left(0,\frac12\right)$</span> we have <span class="math-container">$$|f(x)-f(y)| \le \frac12 |x-y|,$$</span> and then apply this to <span class="math-container">$x = u_n$</span> and <span class="math-container">$y = \alpha$</span>.</p> <p>For example, you can use the mean value theorem: since <span class="math-container">$$f'(x) = -\frac{3x^2}{2(1+x^3)^2}$$</span> is strictly decreasing on <span class="math-container">$\left(0,\frac12\right)$</span> and <span class="math-container">$f'\left(\frac12\right) =-\frac8{27} =-0.296296\ldots,$</span> we get <span class="math-container">$|f'(x)| \le \frac8{27} &lt; \frac12$</span> on the whole interval.</p>
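<p>A quick numeric check of the resulting estimate (my addition, not part of the hint): iterating $u_{n+1}=f(u_n)$ from $u_0=0$, the distance to the fixed point shrinks by a factor of at most $\frac12$ at every step.</p>

```python
def f(x):
    return 1 / (2 * (1 + x ** 3))

# approximate the fixed point alpha by iterating well past convergence
alpha = 0.0
for _ in range(200):
    alpha = f(alpha)

u = 0.0
for n in range(30):
    u_next = f(u)
    if abs(u - alpha) > 1e-12:   # skip steps already at machine precision
        assert abs(u_next - alpha) <= 0.5 * abs(u - alpha)
    u = u_next

print(alpha)   # about 0.4566, the unique fixed point in (0, 1/2)
```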
271
<p>Is there a way of taking a number known to limited precision (e.g. $1.644934$) and finding out an "interesting" real number (e.g. $\displaystyle\frac{\pi^2}{6}$) that's close to it?</p> <p>I'm thinking of something like Sloane's Online Encyclopedia of Integer Sequences, only for real numbers.</p> <p>The intended use would be: write a program to calculate an approximation to $\displaystyle\sum_{n=1}^\infty \frac{1}{n^2}$, look up the answer ("looks close to $\displaystyle\frac{\pi^2}{6}$") and then use the likely answer to help find a proof that the sum really is $\displaystyle \frac{\pi^2}{6}$.</p> <p>Does such a thing exist?</p>
Michael Lugo
173
<p>I've long used Simon Plouffe's <a href="http://wayback.cecm.sfu.ca/projects/ISC/ISCmain.html" rel="nofollow noreferrer">inverse symbolic calculator</a> for this purpose. It is essentially a searchable list of &quot;interesting&quot; numbers.</p> <p>Edit: link updated (Mar 2022).</p>
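<p>The idea behind such a lookup can be sketched in a few lines (a toy version of mine, not Plouffe's actual algorithm): scan rational multiples of a few well-known constants for one matching the input to the given tolerance.</p>

```python
import math
from fractions import Fraction

CONSTANTS = {
    "1": 1.0,
    "pi": math.pi,
    "pi^2": math.pi ** 2,
    "e": math.e,
    "ln 2": math.log(2),
    "sqrt(2)": math.sqrt(2),
}

def identify(x, max_den=20, tol=1e-6):
    """Look for a match of the form (p/q) * constant."""
    for name, c in CONSTANTS.items():
        frac = Fraction(x / c).limit_denominator(max_den)
        if abs(float(frac) * c - x) < tol:
            return "{} * {}".format(frac, name)
    return None

print(identify(1.644934))    # 1/6 * pi^2  (the sum of 1/n^2)
print(identify(0.693147))    # 1 * ln 2
print(identify(1.23456789))  # None -- nothing simple matches
```

<p>A real inverse symbolic calculator uses far larger tables and integer-relation algorithms such as PSLQ, but the principle is the same.</p>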
8,699
<p>I love your site.... but the "your question does not meet our quality standards" thing is really annoying... I have wasted lots of time trying to figure out what this message means.....maybe someone could explain it to me.....what's wrong with this question:</p> <p>Find numbers a and b such that: </p> <p>$ lim =((sqrt(ax+b)-2)/(x))=1$<br> $x-&gt;0$ </p> <p>I don't quite understand what I am supposed to do with this?</p> <p>Any help would be appreciated......</p>
robjohn
13,854
<p>In addition to Asaf's suggestions, one more that deals with the readability is to use <a href="https://math.meta.stackexchange.com/questions/5020/tex-latex-mathjax-basic-tutorial-and-quick-reference">$\LaTeX$</a> with the <a href="http://www.mathjax.org/docs/2.0/tex.html" rel="nofollow noreferrer">MathJax</a> markup which is supported on this site. If you want to see the MathJax source that is used to make an expression, right-click on the expression and select "Show Math As > TeX Commands". For example, try $$ \lim_{x\to0}\frac{\sqrt{ax+b}-2}{x}=1 $$</p> <p>People are more apt to read your question and less likely to be irritated by having to translate hard to read math.</p> <hr> <p>Since I mention them above, I copy Asaf's suggestions here:</p> <p>If you want to avoid these messages there are a few tricks to use:</p> <ol> <li>Avoid "plain formula" or "copy-paste" problems. Add some words around the problem. </li> <li>Add your own efforts, where did you get stuck, and what is not clear to you. </li> <li>Remember that no one is here to solve your homework for you, which is an addition to the previous points - but relevant enough to bring up again. </li> </ol>
zyx
14,120
<blockquote> <p>the your question does not meet our quality standards thing is really annoying... I have wasted lots of time trying to figure out what this message means</p> </blockquote> <p>It is an automatically generated message.</p> <p>There is an algorithmic "quality filter" for questions. StackExchange has not published the formula, and there must be a lot of analysis of this on meta.stackoverflow, but it is safe to assume that the shorter the question the higher the chance of rejection. In your question, the math symbols are not in TeX and it is possible the filter has heuristics that correlate with that. Several alphabetic strings are not in the English dictionary and not written between dollar signs: <em>b,lim,sqrt,x,x-</em>.</p> <p>As suggested in comment below the question, searches for (quality-filter) tag on meta.stackoverflow and meta.math.stackexchange.com might reveal more. </p> <p><a href="https://meta.stackoverflow.com/search?q=quality-filter">https://meta.stackoverflow.com/search?q=quality-filter</a> $\hskip20pt$ 256 hits <a href="http://meta.math.stackexchange.com/questions/tagged/quality-filter">http://meta.math.stackexchange.com/questions/tagged/quality-filter</a> $\hskip20pt$ 9 hits</p> <p>And for our friends the anonymous downvoters, here is a copy of the comment explaining why all this information is the only valid form of answer, until and unless the asker requests something different.</p> <blockquote> <p>I interpret the question as "why did I get this message preventing me from posting (attached text)". To that, the only possible answer is some discussion of the quality filter. Answers that do not even mention the filter and give generalized advice assuming there was something "wrong" in the material stopped by the filter, are insulting when the OP did not request any such writing advice.</p> </blockquote>
1,879,129
<p>If $0 &lt; y &lt; 1$ and $-1 &lt; x&lt;1$, then prove that $$\left|\frac{x(1-y)}{1+yx}\right| &lt; 1$$</p>
ervx
325,617
<p>$$ \bigg|\frac{x(1-y)}{(1+yx)}\bigg|=\frac{|x|(1-y)}{1+yx}. $$</p> <p>(Note that $1+yx&gt;0$, since $|yx|&lt;1$.)</p> <p>We have two cases.</p> <p>If $x\geq 0$, then the above becomes</p> <p>$$ \frac{x(1-y)}{1+xy}. $$</p> <p>Note that $x(1-y)\leq x&lt;1$, while $1+xy\geq1$. Thus, the inequality follows in this case.</p> <p>If instead $x&lt;0$, then the inequality becomes</p> <p>$$ \frac{-x(1-y)}{1+xy}=\frac{-x+xy}{1+xy}. $$</p> <p>Note that $-x&lt;1$ and, hence, $-x+xy&lt;1+xy$ so the inequality follows.</p>
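<p>A brute-force sampling check of the inequality (my addition; no substitute for the proof above, but reassuring):</p>

```python
import random

# sample (x, y) with -1 < x < 1 and 0 < y < 1 and check |x(1-y)/(1+yx)| < 1

random.seed(1)
for _ in range(10000):
    x = random.uniform(-1, 1)
    y = random.uniform(0, 1)
    assert abs(x * (1 - y) / (1 + y * x)) < 1

print("no counterexample found")
```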
1,985,905
<p>I was wondering if the cardinality of a set is a well defined function, more specifically, does it have a well defined domain and range?</p> <p>One would say you could assign a number to every finite set, and a cardinality for an infinite set. So the range would be clear, the set of cardinal numbers. But what about the domain, here we get a few problems. This should be the set of all sets, yet this concept isn't allowed in mathematics as it leads to paradoxes like Russell's paradox.</p> <p>So how do we formalize the notion of 'cardinality'? It seems to behave like a function that maps sets into cardinal numbers, but you can't define it this way as that definition would be based on a paradoxical notion. Even if we only restrict ourselves to finite sets the problem pops up, as we could define the set {A} for every set, thereby showing a one-to-one correspondence between 'the set of all sets' (that doesn't exist) and the 'set of all sets with one element'.</p> <p>So how should one look at the concept of cardinality? You can't reasonably call it a function. Formalizing this concept without getting into paradoxes seems very hard indeed.</p>
Asaf Karagila
622
<p>The cardinality function is well-defined, but it is what is known as a <em>class</em> function. Since <em>every</em> set has a cardinality, the domain of the function $A\mapsto |A|$ has to be the class of all sets, so this is indeed a proper class. And since every set has a strictly larger cardinal, the class of cardinals is not a set either.</p> <p>Using the axioms of set theory, we can canonically determine an object, in the set theoretic universe, which will represent the cardinal $|A|$. So the function $A\mapsto|A|$ is indeed definable.</p> <p>It should be pointed out, perhaps, that this class function is also <em>amenable</em>. Namely, restricting it to any <em>set</em> of sets will result in a function which is itself a set: a set of sets can only have a set of distinct cardinals. This is a direct consequence of the Replacement axiom.</p> <p>There is some inherent difficulty at first when talking about existence of proper classes, and whether or not they are well-defined objects. In the case of $\sf ZFC$ and related theories, existence means "a set", but when we say that a class exists and it is well-defined, we mean to say that there is a definition which is <em>provably</em> giving us the function that we want. This is the case in your question.</p> <p>But one can also work in class theories like $\sf KM$ (Kelley&ndash;Morse) or $\sf NBG$ (von Neumann&ndash;G&ouml;del&ndash;Bernays), and there the function assigning every set its cardinality is still a class function and not a set, but now it exists in "an internal way" as an object of the universe.</p>
108,953
<p>Given a variety $X$ over $\mathbb{Q}$ with good reduction at $p$, proper smooth base change tells us that its $l$-adic cohomology groups are unramified at $p$ (and I'd guess some $p$-adic Hodge theory tells us its p-adic cohomology is crystalline).</p> <p>My question is to what extent it's possible to find a converse to this statement. More precisely, I have yet to see a counterexample to the following "conjecture" (though I still suspect it's wrong).</p> <p><strong>"Conjecture"</strong>: Let $K$ be a number field, $p$ and $l$ primes, and $V$ a geometric (say, coming from the variety $Y$) $l$-adic representation of $G_K$ that is unramified/crystalline at $\mathfrak{p}|p$. Then there exists a smooth proper variety $X$ such that $X$ has good reduction at $\mathfrak{p}$ and $V$ can be cut out of the cohomology of $X$.</p> <p>From googling around, the things I know so far are (at least for $l \not= p$):</p> <ul> <li>If $Y$ is an abelian variety, the classical Neron-Ogg-Shafarevich condition means that $Y$ itself is a witness to the conjecture.</li> <li>We can take torsors for abelian varieties with no $K$-rational points, and these can have the same representations, but fail to have good reduction (in this paper <a href="http://arxiv.org/abs/math/0605326">http://arxiv.org/abs/math/0605326</a> of Dalawat).</li> <li>There exist curves which have bad reduction, but whose Jacobians have good reduction.</li> </ul> <p>If anyone knows any more about this story I'd be interested to hear. Ultimately I guess it would be nice to have a definition for when a motive is unramified/has good reduction, and cohomologically this surely has to mean unramified/crystalline, but it would be nice if this could always be realised "geometrically".</p> <p>Thanks, Tom.</p>
Joël
9,317
<p>(could be a comment but too long...) </p> <p>That's quite a natural question. I am not sure it is possible to prove that the "conjecture" you state is true with the current technology (and to be sure I have no idea how to prove it), but my intuition would differ from yours in that I believe the conjecture to be true. Let me give a very rough "argument" why. Let us assume for the sake of the argument that your Galois representation satisfies a self-duality condition so that it is expected to be attached to an automorphic representation for a unitary group. Then because of your hypothesis that your Galois representation is unramified, the automorphic representation should be unramified at places $\mathfrak{p}$ above $p$, that is, have nonzero invariants under hyperspecial maximal compact subgroups $K_\mathfrak p$. The Shimura variety for the unitary group with hyperspecial maximal level structure is conjectured (and actually, known) to have good reduction (Milne conjecture). Assume also that the representation $\pi_\infty$ is such that the Galois representation attached to $\pi$ appears in the étale cohomology of that Shimura variety (like for a modular eigenform of weight 2, whose representation appears in the cohomology of the modular curve and not of a Kuga-Sato variety over it). Then your Galois rep. appears in the cohomology of a variety with good reduction.</p> <p>Admittedly this argument is not very convincing, because I have made very strong assumptions on the Galois representation. However, as those assumptions seem (to me) quite orthogonal to the problem discussed, I believe this is decent evidence in favor of the conjecture.</p>
1,594,130
<p>Does there exist a vector field $\vec F$ such that curl of $\vec F$ is $x \vec i+y\vec j+z \vec k$ ? </p> <p>UPDATE : I did $div(curl \vec F)=0$ as the answers did ; but that assumes a lot i.e. it assumes that components of $F$ have second partial derivatives and continuous mixed partial derivatives ; whereas for curl to be defined , we only need components of $F$ to have first order partial derivatives . Is the answer still no with this less assumption ? Please help . Thanks in advance</p>
j.d. allen
293,950
<p>Using the identity $\operatorname{div}(\operatorname{curl}\vec F)= 0$, we can show that the vector field $\vec F=x \vec i+y\vec j+z \vec k$ cannot be the curl of any field, because its divergence is $3 \neq 0$. </p>
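<p>A quick numerical sanity check (a sketch in Python, using central finite differences; the sample field $G$ below is an arbitrary choice, not from the question) confirms that this field has divergence $3$ at a sample point, while the divergence of a curl vanishes:</p>

```python
import math

h = 1e-5  # step for central finite differences

def div(F, x, y, z):
    """Numerical divergence of a vector field F at (x, y, z)."""
    return ((F(x + h, y, z)[0] - F(x - h, y, z)[0])
          + (F(x, y + h, z)[1] - F(x, y - h, z)[1])
          + (F(x, y, z + h)[2] - F(x, y, z - h)[2])) / (2 * h)

def curl(G):
    """Return a function computing the numerical curl of G."""
    def c(x, y, z):
        dx = lambda i: (G(x + h, y, z)[i] - G(x - h, y, z)[i]) / (2 * h)
        dy = lambda i: (G(x, y + h, z)[i] - G(x, y - h, z)[i]) / (2 * h)
        dz = lambda i: (G(x, y, z + h)[i] - G(x, y, z - h)[i]) / (2 * h)
        return (dy(2) - dz(1), dz(0) - dx(2), dx(1) - dy(0))
    return c

F = lambda x, y, z: (x, y, z)                    # the field from the question
print(div(F, 0.3, -1.2, 2.0))                    # ~ 3.0, so F is not a curl

G = lambda x, y, z: (y * z, x**2, math.sin(x))   # an arbitrary smooth sample field
print(div(curl(G), 0.3, -1.2, 2.0))              # ~ 0.0, as the identity predicts
```

Here `div` and `curl` are only finite-difference approximations, so the results are accurate up to the discretization error.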
1,201,900
<p>This is a rather soft question so I will tag it as such.</p> <p>Basically what I am asking is whether anyone has a good explanation of what a homomorphism is and what an isomorphism is, if possible specifically pertaining to beginner linear algebra.</p> <p>This is because, in my courses we have talked about vector spaces, linear transformations, etc., but we have always for some reason skipped the sections on isomorphisms and homomorphisms.</p> <p>And yes, I have tried to look on Wikipedia and such, but it just isn't really clicking for me what it is, what it represents, and what it is used for.</p> <p>I am under the impression that two spaces with a bijection between them are isomorphic to one another, but that is about it.</p> <p>Any ideas/opinions?</p> <p>Thanks!</p>
Moya
192,336
<p>Well the standard answer to this sort of question is that two algebraic objects (vector spaces in this case) $V$ and $W$ are isomorphic if they are basically the same, meaning that one can identify them with one another in a reasonable way. Another way to say this is that a map $f:V\to W$ is an isomorphism if it is bijective and it preserves the algebraic structures of $V$ (or $W$, since the definition implies that $f^{-1}:W\to V$ is an isomorphism).</p> <p>In the case of vector spaces, the algebraic structure we're interested in is addition of vectors and multiplication by scalars, so that's the basis (no pun intended) for the condition $f(av_1+bv_2)=af(v_1)+bf(v_2)$ in the definition of a homomorphism (linear transformation) of vector spaces.</p> <p>Why did I say basically the same and not exactly the same? The typical example here would be $V=\mathbb{R}^2$ and $W=\mathbb{C}$ (over $\mathbb{R}$). These two vector spaces aren't exactly the same, since $\mathbb{C}$ has algebraic properties (such as multiplication) that $\mathbb{R}^2$ lacks. However, as vector spaces, the map taking $(1,0)\mapsto 1$ and $(0,1)\mapsto i$ is an isomorphism (as you can check), so as real vector spaces, they are essentially the same. </p> <p>Another note here is that we used a specific choice of basis in this example for the isomorphism. There are spaces which are canonically isomorphic, which essentially means we can create an isomorphism between them that doesn't depend on the choice of basis. I can't think of an elementary linear algebra example right now other than the fact that $V$ and $V^{\ast\ast}$, the double dual space, are naturally isomorphic when $V$ is finite dimensional. Still, while basically the same, they are not exactly the same: one is a vector space of just abstract vectors, while the other is a space of functions from $V^{\ast}\to \mathbb{F}$ (the base field).</p> <p>Hope that helps.</p>
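<p>As a concrete illustration of the $\mathbb{R}^2 \cong \mathbb{C}$ example (a sketch; the map $f$ below sends $(1,0)\mapsto 1$ and $(0,1)\mapsto i$ as described above), one can check on sample vectors that $f$ is a bijection that preserves addition and real scalar multiplication:</p>

```python
def f(v):
    """The isomorphism R^2 -> C sending (1,0) to 1 and (0,1) to i."""
    a, b = v
    return complex(a, b)

def f_inv(z):
    return (z.real, z.imag)

v, w, c = (1.5, -2.0), (0.25, 4.0), 3.0

# f is linear: it respects vector addition and real scalar multiplication
assert f((v[0] + w[0], v[1] + w[1])) == f(v) + f(w)
assert f((c * v[0], c * v[1])) == c * f(v)

# f is bijective: f_inv undoes f
assert f_inv(f(v)) == v
print("f is a structure-preserving bijection on these samples")
```

Of course a finite check like this is only an illustration, not a proof; the point is that "isomorphism" means exactly these preservation properties plus bijectivity.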
903,656
<p>An urn has $2$ balls and each ball could be green, red or black. We draw a ball and it was green, then it was returned to the urn. What is the probability that the next ball is red? </p> <p>My attempt: I think it is just a probability of $1/4$ because we have 4 colors in total but on the other hand I think i need to use conditional probability:</p> <p>$$P(R|V)= {P(R\bigcap V)\over P(V)}$$</p> <p>where $P(V)$ is the probability of drawing a green ball , $P(R)$ is the probability of drawing a red ball but I am not so sure which one would be the correct approach of the problem</p> <p>I would really appreciate your help :)</p>
David
119,775
<p><strong>Hint</strong>: $$\eqalign{&amp;P(\hbox{second ball is red})\cr &amp;\qquad=P(\hbox{second ball is red}\,|\,\hbox{second ball drawn is the same ball as the first})\cr &amp;\qquad\qquad\qquad{}\times P(\hbox{second ball drawn is the same ball as the first})\cr &amp;\qquad\qquad{}+P(\hbox{second ball is red}\,|\,\hbox{second ball drawn is the other ball})\cr &amp;\qquad\qquad\qquad{}\times P(\hbox{second ball drawn is the other ball})\cr}$$</p>
903,656
<p>An urn has $2$ balls and each ball could be green, red or black. We draw a ball and it was green, then it was returned to the urn. What is the probability that the next ball is red? </p> <p>My attempt: I think it is just a probability of $1/4$ because we have 4 colors in total but on the other hand I think i need to use conditional probability:</p> <p>$$P(R|V)= {P(R\bigcap V)\over P(V)}$$</p> <p>where $P(V)$ is the probability of drawing a green ball , $P(R)$ is the probability of drawing a red ball but I am not so sure which one would be the correct approach of the problem</p> <p>I would really appreciate your help :)</p>
angryavian
43,949
<p>I'm guessing we make the Bayesian assumption that before we draw anything, each of the two balls has an equal chance of being any of the $3$ colors.</p>

<p>Your guess of $1/4$ is incorrect; the intuition is that by drawing a green, you gain some knowledge about the urn, and this decreases the chance of drawing a red the second time. This can be made precise in the computation.</p>

<p>Following the conditional probabilities given in David's answer:</p>

<blockquote class="spoiler"> <p>\begin{align*}&amp;P(\text{second ball is red})\\ &amp;= P(\text{second ball is red} \mid \text{second ball is same as the first})\cdot P(\text{second ball is the same as the first})\\ &amp;\quad + P(\text{second ball is red} \mid \text{second ball is not the first ball})\cdot P(\text{second ball is not the first ball})\\&amp;= 0 \cdot \frac{1}{2} + \frac{1}{3}\cdot \frac{1}{2} \end{align*}</p> </blockquote>

<p>The "knowledge gained" appears in the term $0 \cdot \frac{1}{2}$: the same ball is known to be green, so it cannot be red. Without the information from the first draw, this term would be $\frac{1}{3} \cdot \frac{1}{2}$, and the whole probability would have been $\frac{1}{3}$, the prior probability of drawing a red.</p>
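<p>A quick Monte Carlo check (a sketch, under the same Bayesian assumption that each ball's color is independently uniform over the three colors) agrees with the resulting value $\frac16$:</p>

```python
import random

random.seed(0)
hits = reds = 0
for _ in range(200_000):
    urn = [random.choice("GRB") for _ in range(2)]  # two balls, colors i.i.d. uniform
    first = random.randrange(2)
    if urn[first] != "G":
        continue                      # condition on the first draw being green
    hits += 1
    second = random.randrange(2)      # the ball was replaced, so draw uniformly again
    reds += (urn[second] == "R")

print(reds / hits)                    # ~ 1/6
```

With $200{,}000$ trials the conditional sample is large enough that the estimate lands within about $\pm 0.005$ of the true value.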
2,809,686
<p>Let $S=\{1,2,3,\ldots,20\}$. Find the probability of choosing a subset of three numbers from the set $S$ so that no two consecutive numbers are selected. I am having trouble counting the required sets.</p>
JMoravitz
179,297
<p>To remove from unanswered queue:</p> <p>Consider the related problem of counting how many quadruples $(x_1,x_2,x_3,x_4)$ of non-negative integers exist such that $x_1+x_2+x_3+x_4=15$.</p> <p>Count the number of such possible quadruples using <a href="https://en.wikipedia.org/wiki/Stars_and_bars_(combinatorics)" rel="nofollow noreferrer">stars and bars</a>.</p> <p>Recognize that there is a bijection between the sets of non-negative integer quadruples adding to $15$ and the three-element subsets of $\{1,2,3,\dots,20\}$ containing no consecutive numbers.</p> <blockquote class="spoiler"> <p> Explicitly the bijection takes a set $\{a,b,c\}$ and maps it to $(a-1,b-a-2,c-b-2,20-c)$ in the one direction, or it takes a quadruple $(x_1,x_2,x_3,x_4)$ and maps it to $\{(x_1+1),(x_2+x_1+3),(x_3+x_2+x_1+5)\}$ in the other direction. You should confirm that this truly is a bijection between the sets.</p> </blockquote> <p>Now, this tells us how many three-element subsets of $\{1,2,3,\dots,20\}$ have the property we want. Taking the ratio then with the total number of three-element subsets of $\{1,2,3,\dots,20\}$ will give us the probability that a uniformly randomly selected three-element subset will have the desired property.</p> <blockquote class="spoiler"> <p> $\dfrac{\binom{18}{3}}{\binom{20}{3}}$</p> </blockquote> <hr> <blockquote class="spoiler"> <p> In retrospect, after writing this answer I realize the bijection could also instead be formed with the three-element subsets of $\{1,2,3,\dots,18\}$. Let $\{a,b,c\}$ be a three-element subset from $\{1,2,3,\dots,18\}$ with $a&lt;b&lt;c$. Map this to $\{a,b+1,c+2\}$ which will be a three-element subset of $\{1,2,3,\dots,20\}$ with no consecutive numbers.</p> </blockquote>
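<p>A brute-force check in Python (a sketch) confirms both the count $\binom{18}{3}=816$ and the resulting probability:</p>

```python
from itertools import combinations
from math import comb
from fractions import Fraction

# three-element subsets of {1,...,20} with all gaps at least 2
good = [s for s in combinations(range(1, 21), 3)
        if all(b - a >= 2 for a, b in zip(s, s[1:]))]

assert len(good) == comb(18, 3)       # 816 subsets with no two consecutive

prob = Fraction(len(good), comb(20, 3))
print(len(good), prob)                # 816 68/95
```

The exact probability $\binom{18}{3}/\binom{20}{3}=816/1140$ reduces to $68/95$.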
112,226
<p>Prove that there are exactly</p> <p>$$\displaystyle{\frac{(a-1)(b-1)}{2}}$$ </p> <p>positive integers that <em>cannot</em> be expressed in the form </p> <p>$$ax\hspace{2pt}+\hspace{2pt}by$$</p> <p>where $x$ and $y$ are non-negative integers, and $a, b$ are positive integers such that $\gcd(a,b) =1$.</p>
Aryabhata
1,102
<p>It is well known that any number $\ge (a-1)(b-1)$ is representable.</p> <p>The number of numbers $c$ such that $0 \lt c \lt ab$ which are representable corresponds exactly to the number of lattice points in the region</p> <p>$ax + by \lt ab$, $x \ge 0$, $y \ge 0$</p> <p>This is because if $ax + by = ax&#39; + by&#39;$, then $x-x&#39;$ is divisible by $b$, which cannot happen in the region, so two lattice points represent different numbers (and vice-versa).</p> <p>A straightforward counting argument now gives what you seek. Note, you would need to do some subtraction to get the count of numbers that are <em>not</em> representable, since the above lattice points count the numbers that are representable.</p>
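<p>For small coprime pairs $(a,b)$ the claim is easy to verify by brute force (a sketch; it only checks $n$ below $(a-1)(b-1)$, which suffices since every number $\ge (a-1)(b-1)$ is representable):</p>

```python
from math import gcd

def num_non_representable(a, b):
    """Count positive n with no representation n = a*x + b*y, x, y >= 0."""
    assert gcd(a, b) == 1
    limit = (a - 1) * (b - 1)          # every n >= limit is representable
    representable = {a * x + b * y
                     for x in range(limit // a + 1)
                     for y in range(limit // b + 1)}
    return sum(1 for n in range(1, limit) if n not in representable)

for a, b in [(3, 5), (4, 7), (5, 9), (7, 11)]:
    assert num_non_representable(a, b) == (a - 1) * (b - 1) // 2
print("count equals (a-1)(b-1)/2 for all tested coprime pairs")
```

For $(a,b)=(3,5)$, for instance, the non-representable numbers are $1,2,4,7$, and indeed $\frac{(3-1)(5-1)}{2}=4$.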
2,994,962
<p>Recently I came across the following expansion for <span class="math-container">$\log {(x + \sqrt {x^2+1})}$</span>: <span class="math-container">$$x - \frac{1}{2}\cdot\frac{x^3}{3} + \frac{1}{2}\cdot\frac{3}{4}\cdot\frac{x^5}{5} - \cdots$$</span> I think I can use the Faà di Bruno formula to get a closed form for the <span class="math-container">$r^{\text{th}}$</span> term, but I am unable to get good simplifications. If there is some reduction formula or something like that, I welcome any suggestions.</p>
lab bhattacharjee
33,337
<p><span class="math-container">$$\dfrac{d\ln(x+\sqrt{1+x^2})}{dx}=\frac{1+\dfrac x{\sqrt{1+x^2}}}{x+\sqrt{1+x^2}}=(1+x^2)^{-1/2}$$</span></p> <p>Using <a href="https://en.wikipedia.org/wiki/Binomial_series" rel="nofollow noreferrer">Binomial Series</a> for <span class="math-container">$|x^2|&lt;1,$</span></p> <p><span class="math-container">$$(1+x^2)^{-1/2}=1+\sum_{r=1}^\infty\dfrac{(-1/2)(-1/2-1)\cdots(-1/2-(r-1))}{r!}(x^2)^r$$</span></p> <p><span class="math-container">$$=1+\sum_{r=1}^\infty\dfrac{x^{2r}(-1)^r\prod_{n=1}^r(2n-1)}{r!2^r}$$</span></p> <p>Now integrate both sides.</p>
2,994,962
<p>Recently I came across the following expansion for <span class="math-container">$\log {(x + \sqrt {x^2+1})}$</span>: <span class="math-container">$$x - \frac{1}{2}\cdot\frac{x^3}{3} + \frac{1}{2}\cdot\frac{3}{4}\cdot\frac{x^5}{5} - \cdots$$</span> I think I can use the Faà di Bruno formula to get a closed form for the <span class="math-container">$r^{\text{th}}$</span> term, but I am unable to get good simplifications. If there is some reduction formula or something like that, I welcome any suggestions.</p>
Robert Z
299,698
<p>The function <span class="math-container">$f(x)=\log {(x + \sqrt {x^2+1})}$</span> is the <a href="http://mathworld.wolfram.com/InverseHyperbolicSine.html" rel="nofollow noreferrer">inverse hyperbolic sine</a> whose expansion is <span class="math-container">$$\sum_{n=0}^{\infty} \frac{(-1)^n (2n-1)!!}{(2n+1)(2n)!!}\, x^{2n+1}$$</span> where the <a href="https://en.wikipedia.org/wiki/Double_factorial" rel="nofollow noreferrer">double factorial</a> notation is used (take a look also <a href="https://en.wikipedia.org/wiki/Inverse_hyperbolic_functions#Series_expansions" rel="nofollow noreferrer">here</a>).</p> <p>This can be obtained by expanding its derivative: <span class="math-container">$$f'(x)=(1+x^2)^{-1/2}=\sum_{n\geq 0}\binom{-1/2}{n}(x^2)^n$$</span> and then by integrating it.</p>
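<p>A quick numerical check (a sketch in Python; it accumulates the double-factorial ratio incrementally rather than computing factorials directly) compares a partial sum of this series with <code>math.asinh</code> for a sample $|x|&lt;1$:</p>

```python
import math

def asinh_series(x, terms=40):
    """Partial sum of sum_{n>=0} (-1)^n (2n-1)!!/((2n+1)(2n)!!) x^(2n+1)."""
    total, coeff = 0.0, 1.0              # coeff holds (2n-1)!!/(2n)!!; it is 1 at n = 0
    for n in range(terms):
        total += (-1) ** n * coeff * x ** (2 * n + 1) / (2 * n + 1)
        coeff *= (2 * n + 1) / (2 * n + 2)   # advance the double-factorial ratio
    return total

x = 0.5
print(asinh_series(x), math.asinh(x))    # both ~ 0.4812, in agreement
```

The series converges for $|x|&lt;1$, so for $x=0.5$ forty terms already agree with the built-in value to machine precision.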
760,767
<p>I don't understand the last part of this proof:</p> <p><a href="http://www.proofwiki.org/wiki/Intersection_of_Normal_Subgroup_with_Sylow_P-Subgroup" rel="nofollow">http://www.proofwiki.org/wiki/Intersection_of_Normal_Subgroup_with_Sylow_P-Subgroup</a></p> <p>where they say: $p \nmid \left[{N : P \cap N}\right]$, thus, $P \cap N$ is a Sylow p-subgroup of $N$. I don't see why this implication is true. On the other hand, I understand that $P$ being a Sylow p-subgroup of $G$ implies that $p \nmid [G : P]$, for $[G:P]=[G:N_G(P)][N_G(P):P]$ and $p$ does not divide either of these two factors. So, what I don't understand is why the converse implication is true, that is, if $p \nmid [G : P]$ then $P$ is a Sylow p-subgroup of $G$.</p>
Mark Bennet
2,906
<p>Imagine a segment of the curve along a radius from the origin of your polar co-ordinates. That increases the arc length without changing $\theta$ at all and $rd\theta=0$ for this segment. So you need to take into account the radial component.</p>
1,372,376
<p>For what values of $a$ and $b$, the two functions $f_a(x)=ax^2+3x+1$ and $g_b(x)=\frac{b}{x}$ are tangent to each other at a point where the $x\text{-coordinate}=1$.</p> <p>The points of intersection are where: $f_a(1)=g_b(1)$</p> <p>which gives $$a+4=b\text{ and } b-4=a$$</p> <p>Now what to do with this information? Or if my approach is right?</p>
Rory Daulton
161,807
<p>It looks like you have only used the first of two conditions. For the curves to be <em>tangent</em> at the point with $x$-coordinate $1$, they must both meet there and have the same slope there:</p> <p>$$f_a(1)=g_b(1) \implies a+4=b, \qquad f_a'(1)=g_b'(1) \implies 2a+3=-b.$$</p> <p>Solving these two equations gives the unique pair $a=-\dfrac{7}{3}$, $b=\dfrac{5}{3}$: indeed $f_{-7/3}(1)=g_{5/3}(1)=\dfrac{5}{3}$, and both curves have slope $-\dfrac{5}{3}$ at $x=1$.</p>
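<p>Note that tangency at $x=1$ imposes both $f_a(1)=g_b(1)$ and $f_a'(1)=g_b'(1)$, i.e. $a+4=b$ and $2a+3=-b$, whose unique solution is $a=-\frac73$, $b=\frac53$. A quick numeric check (a sketch, with derivatives approximated by central differences) confirms this pair:</p>

```python
a, b = -7/3, 5/3
f = lambda x: a * x**2 + 3 * x + 1
g = lambda x: b / x

h = 1e-6
df = (f(1 + h) - f(1 - h)) / (2 * h)   # numerical f'(1); exact value is 2a + 3
dg = (g(1 + h) - g(1 - h)) / (2 * h)   # numerical g'(1); exact value is -b

print(f(1), g(1))    # equal: the curves meet at x = 1
print(df, dg)        # equal up to truncation error: the slopes match, so they are tangent
```

Both printed pairs agree (to roughly $10^{-10}$ for the slopes), as the algebra predicts.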
2,498,628
<p>This was a question in our exam and I did not know which change of variables or trick to apply.</p> <p><strong>How to show by inspection (change of variables or whatever trick) that</strong></p> <p><span class="math-container">$$ \int_0^\infty \cos(x^2) dx = \int_0^\infty \sin(x^2) dx \tag{I} $$</span></p> <p>Computing the values of these integrals is routine, and from their values the equality holds. But can we show the equality beforehand?</p> <blockquote> <p><strong>Note</strong>: I am not asking for computation since it can be found <a href="https://math.stackexchange.com/questions/187729/evaluating-int-0-infty-sin-x2-dx-with-real-methods?noredirect=1&amp;lq=1.">here</a> and we have as well that, <span class="math-container">$$ \int_0^\infty \cos(x^2) dx = \int_0^\infty \sin(x^2) dx =\sqrt{\frac{\pi}{8}}$$</span> and the result can be recovered here, <a href="https://math.stackexchange.com/questions/187729/evaluating-int-0-infty-sin-x2-dx-with-real-methods?noredirect=1&amp;lq=1">Evaluating $\int_0^\infty \sin x^2\, dx$ with real methods?</a>.</p> </blockquote> <p>Is there any trick to prove the equality in (I) without computing the exact values of these integrals beforehand?</p>
Zaid Alyafeai
87,813
<p>Note that by a change of variable it suffices to show </p> <p>$$\int^\infty_0\frac{\cos(x)}{\sqrt{x}}\,dx =\int^\infty_0\frac{\sin(x)}{\sqrt{x}}\,dx $$</p> <p>Consider the following function</p> <p>$$f(z)=z^{-1/2}\,e^{iz}$$</p> <p>where we choose the principal root for $ z^{-1/2}=e^{-1/2\log(z)}$. By integrating around the following contour</p> <p><a href="https://i.stack.imgur.com/obGmV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/obGmV.png" alt="enter image description here"></a> $$\int_{C_r}f(z)\,dz+\int_{r}^R f(x)\,dx+\int_{\gamma}f(z)\,dz+\int^{iR}_{ir}f(x)\,dx = 0$$</p> <p>Taking the integral around the small quarter circle with $r\to 0$ $$\left| \int_{C_r}f(z)\,dz\right|\leq \left|\sqrt{r}\int^{\pi/2}_{0}e^{it/2} e^{rie^{it}}\,dt\right| \leq \sqrt{r}\int^{\pi/2}_{0}\left|e^{-r\sin(t)}\right|\,dt\sim 0$$</p> <p>On $\gamma(t)=(1-t)R+iRt$ where $0\leq t \leq 1$</p> <p>$$\left|\int_{\gamma}f(z)\,dz\right| = \left| R(i-1)\int^1_0e^{-1/2\log(R(1-t)+iRt)}e^{i(1-t)R-Rt}\,dt\right| \\ \leq \frac{\sqrt{2}}{\sqrt{R}} \int^1_0 \frac{e^{-Rt}}{\sqrt[4]{(1-t)^2+t^2}}\,dt$$</p> <p>Hence we have</p> <p>$$\left|\int_{\gamma}f(z)\,dz\right| \leq \frac{\sqrt{2}}{\sqrt{R}} \int^1_0 e^{-Rt}\,dt=\frac{\sqrt{2}}{R\sqrt{R}}\left(1-e^{-R}\right)\sim_{\infty}0$$</p> <p>Finally, what remains when $r\to 0$ and $R \to \infty$ is</p> <p>$$\int^\infty_0 \frac{e^{ix}}{\sqrt{x}}\,dx =i \int^{\infty}_{0}(ix)^{-1/2}e^{-x}\,dx$$</p> <p>Note that $i^{-1/2}=e^{-i\pi/4}$, so with $I=\int^\infty_0 x^{-1/2}e^{-x}\,dx$,</p> <p>$$\int^\infty_0\frac{e^{ix}}{\sqrt{x}}\,dx = ie^{-i\pi/4}I = \frac{I}{\sqrt{2}}+i\frac{I}{\sqrt{2}}$$</p> <p>Equating real parts and imaginary parts, we reach $$\int^\infty_0\frac{\cos(x)}{\sqrt{x}}\,dx =\int^\infty_0\frac{\sin(x)}{\sqrt{x}}\,dx = \frac{I}{\sqrt{2}} $$</p> <p>Although $I$ is easy to evaluate using the gamma function, we didn't have to evaluate it to show equivalence. </p>
2,498,628
<p>This was a question in our exam and I did not know which change of variables or trick to apply.</p> <p><strong>How to show by inspection (change of variables or whatever trick) that</strong></p> <p><span class="math-container">$$ \int_0^\infty \cos(x^2) dx = \int_0^\infty \sin(x^2) dx \tag{I} $$</span></p> <p>Computing the values of these integrals is routine, and from their values the equality holds. But can we show the equality beforehand?</p> <blockquote> <p><strong>Note</strong>: I am not asking for computation since it can be found <a href="https://math.stackexchange.com/questions/187729/evaluating-int-0-infty-sin-x2-dx-with-real-methods?noredirect=1&amp;lq=1.">here</a> and we have as well that, <span class="math-container">$$ \int_0^\infty \cos(x^2) dx = \int_0^\infty \sin(x^2) dx =\sqrt{\frac{\pi}{8}}$$</span> and the result can be recovered here, <a href="https://math.stackexchange.com/questions/187729/evaluating-int-0-infty-sin-x2-dx-with-real-methods?noredirect=1&amp;lq=1">Evaluating $\int_0^\infty \sin x^2\, dx$ with real methods?</a>.</p> </blockquote> <p>Is there any trick to prove the equality in (I) without computing the exact values of these integrals beforehand?</p>
robjohn
13,854
<p>Since $e^{iz^2}$ is entire, by <a href="https://en.wikipedia.org/wiki/Cauchy%27s_integral_theorem" rel="noreferrer">Cauchy's Integral Theorem</a>, we have $$ \int_0^R e^{iz^2}\,\mathrm{d}z =\int_0^{(1+i)R} e^{iz^2}\,\mathrm{d}z+\int_{(1+i)R}^R e^{iz^2}\,\mathrm{d}z\tag1 $$ where, using the parameterization $z=R(1+it)$, we have the estimate $$ \begin{align} \left|\,\int_{(1+i)R}^R e^{iz^2}\,\mathrm{d}z\,\right| &amp;\le R\int_0^1e^{-2R^2t}\,\mathrm{d}t\\ &amp;\le\frac1{2R}\tag2 \end{align} $$ and using the reparameterization $z\mapsto(1+i)z$, $$ \begin{align} \int_0^{(1+i)R}e^{iz^2}\,\mathrm{d}z &amp;=(1+i)\int_0^Re^{-2z^2}\,\mathrm{d}z\tag3 \end{align} $$ Combining $(1)$, $(2)$, and $(3)$, while letting $R\to\infty$, validates the following <strong>change of variables: $\boldsymbol{\color{#C00}{z\mapsto(1+i)z}}$</strong>. $$ \begin{align} \int_0^\infty\left(\cos\left(z^2\right)+i\sin\left(z^2\right)\right)\mathrm{d}z &amp;=\boldsymbol{\color{#C00}{\int_0^\infty e^{iz^2}\,\mathrm{d}z}}\\ &amp;\boldsymbol{\color{#C00}{=(1+i)\int_0^\infty e^{-2z^2}\,\mathrm{d}z}}\tag4 \end{align} $$ Since the real and imaginary parts of $(4)$ are the same, we get that $$ \int_0^\infty\cos\left(z^2\right)\,\mathrm{d}z =\int_0^\infty\sin\left(z^2\right)\,\mathrm{d}z\tag5 $$</p>
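<p>The common value that both Fresnel integrals reduce to via the change of variables in $(4)$ is $\int_0^\infty e^{-2z^2}\,\mathrm{d}z=\sqrt{\pi/8}$, which is easy to confirm numerically (a sketch using a plain trapezoidal rule; the tail beyond $z=10$ is negligible):</p>

```python
import math

def trapezoid(fn, a, b, n):
    """Composite trapezoidal rule with n subintervals."""
    h = (b - a) / n
    s = 0.5 * (fn(a) + fn(b)) + sum(fn(a + k * h) for k in range(1, n))
    return s * h

val = trapezoid(lambda z: math.exp(-2 * z**2), 0.0, 10.0, 200_000)
print(val, math.sqrt(math.pi / 8))   # both ~ 0.62666
```

This only checks the Gaussian evaluation, of course; the equality of the two Fresnel integrals themselves is exactly what the contour argument above establishes.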
959,201
<p>I am confused about the following.</p> <p>Could you explain to me why, if $A=\varnothing$, then $\cap A$ is the set of all sets?</p> <p>Definition of $\cap A$:</p> <p>For $A \neq \varnothing$:</p> <p>$$x \in \cap A \leftrightarrow (\forall b \in A)\; x \in b$$</p> <p><strong>EDIT</strong>:</p> <p>I want to prove that $\cap \varnothing$ is not a set.</p> <p>To do that, do I have to begin by supposing that it is a set?</p>
Community
-1
<p>Usually, $\cap A$ is defined as the class of all things that are in every element of $A$.</p> <p>No matter what $x$ is, $\forall y \in \varnothing: x \in y$ is vacuously true, therefore, <em>all</em> sets are members of the class $\cap \varnothing$.</p>
959,201
<p>I am confused about the following.</p> <p>Could you explain to me why, if $A=\varnothing$, then $\cap A$ is the set of all sets?</p> <p>Definition of $\cap A$:</p> <p>For $A \neq \varnothing$:</p> <p>$$x \in \cap A \leftrightarrow (\forall b \in A)\; x \in b$$</p> <p><strong>EDIT</strong>:</p> <p>I want to prove that $\cap \varnothing$ is not a set.</p> <p>To do that, do I have to begin by supposing that it is a set?</p>
Thomas Andrews
7,933
<p>Intuitively, if $A\subseteq B$ then $\bigcap B\subseteq \bigcap A$. Now, for any set $X$, let $B=\{\{X\}\}$. Then $\emptyset = A\subseteq B$ and $\{X\}=\bigcap B \subseteq \bigcap A$, so $X\in\bigcap A$. </p> <p>But that definition cannot actually be done - there is no set of all sets.</p>
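<p>One way to see this behaviour concretely (a sketch: there is no set of all sets, but relative to a fixed finite universe $U$, the intersection over an empty family of subsets is all of $U$, by the same vacuous-truth argument):</p>

```python
from functools import reduce

U = {0, 1, 2, 3, 4}                      # a fixed small "universe" of elements

def intersection(family):
    """Intersection of a family of subsets of U, relative to U."""
    return reduce(set.intersection, family, set(U))

print(intersection([{0, 1, 2}, {1, 2, 3}]))  # {1, 2}
print(intersection([]))                      # all of U: the condition holds vacuously
```

Passing `set(U)` as the initializer to `reduce` is exactly the choice of "ambient class" that makes the empty intersection everything; without a fixed universe there is no set-sized answer, which is the point of the question.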
25,363
<p>In what way and with what utility is the law of excluded middle usually disposed of in intuitionistic type theory and its descendants? I am thinking here of topos theory and its ilk, namely synthetic differential geometry and the use of topoi in algebraic geometry (this is a more palatable restructuring, perhaps), where free use of these "¬⊨P∨¬P" theories is necessarily everywhere--freely utilized at every turn, one might say. But why and how are such theories first formulated, and what do they look like in the purely logical sense?</p> <p>You will have to forgive me; I began as a student in philosophy (not even that of mathematics), and the law of excluded middle is something that was imbibed with my mother's milk, as it were. This is more of a philosophical issue than a mathematical one, but being the renaissance guys/gals that you all are, I thought that perhaps this could generate some fruitful discussion. </p>
Charles Matthews
6,153
<p>I don't know whether this will be helpful, but here goes. There used to be things called the "Laws of Thought", and they used to be equated (tendentiously) with sort-of axioms for rationality, when "axiom" still meant self-evident. After Leibniz there were four basic Laws of Thought, of which you have referenced two. </p> <p>Now, someone should probably write a book about the subsequent fate of these Laws, but for a mathematical analogy, look at another similar thing, the Principle of Continuity. This was big in the eighteenth century, was questioned in the nineteenth century, and eventually dissolved at the hands of Weierstrass into the epsilon-delta proof technique, i.e. the standard approach of mathematical analysis. </p> <p>Excluded middle underwent a somewhat parallel development, though it is not as if this is taught as mainstream mathematics. The intuitionists objected to it: basically from a constructive point of view, proof by case analysis is not good unless there is a computable criterion for which case you are in, and excluded middle is what happens with two cases. When intuitionistic logic was written down as a formal system (not the first idea of Brouwer), the structure of propositions came out as a Heyting algebra, not a Boolean algebra. </p> <p>When the logic of topos theory was recognised to be intuitionistic (not the first idea of Grothendieck!) a bit more could be said. The truth-values (more accurately the subobject classifier) would be a Heyting algebra. The case of "classical logic" of "classical set theory" would be the truth values being the Boolean algebra with two elements. Usually the subobject classifier would be something much more complicated. (As has been pointed out, the "law of non-contradiction" or first Law of Thought is about the truth values not being reduced to just one, which is not the same thing as various other statements.) 
The result, overall, including the Axiom of Choice because topos theory is a type of set theory and not just a propositional logic, is a very sophisticated range of models. "Classical logic" is seen as a very particular form of intuitionistic logic. If the question is about how disjunction actually works in a topos, or how negation works in intuitionistic logic, there are answers: the technicalities will dispel any "mysteries". But it's not <em>au revoir</em> at all: excluded middle is an option and one can say exactly how it fits in.</p>
392,835
<p>In a concrete category (i.e., where the morphisms are functions between sets), I define a <strong>base</strong> of an object <span class="math-container">$A$</span> to be a set of elements <span class="math-container">$M$</span> of <span class="math-container">$A$</span> such that for any morphisms <span class="math-container">$F,G:A\to B$</span> that coincide on <span class="math-container">$M$</span>, we have <span class="math-container">$F=G$</span>.</p> <p><strong>Question:</strong> Is there an established name for a <strong>base</strong> in that sense?</p> <p><strong>Examples:</strong> In the category of vectors spaces, generating sets are bases. In the category of sets, <span class="math-container">$A$</span> is the only base of <span class="math-container">$A$</span>.</p> <p><strong>Note:</strong> The above definition does not really need a concrete category (an initial object is enough), but I decided to formulate it in a concrete category for simplicity.</p>
Dominique Unruh
101,775
<p>At least in the context of von Neumann algebras, <em>separating</em> is used for this concept. Confer [Takesaki], Definition II.3.16 (slightly reformulated):</p> <p><strong>Definition.</strong> Let <span class="math-container">$\mathcal M$</span> be a von Neumann algebra on <span class="math-container">$\mathfrak H$</span>. A subset <span class="math-container">$\mathfrak U$</span> of <span class="math-container">$\mathfrak H$</span> is called <em>separating</em> for <span class="math-container">$\mathcal M$</span> iff for all <span class="math-container">$a\in\mathcal M$</span>, <span class="math-container">$a\xi=0$</span> for all <span class="math-container">$\xi\in\mathfrak U$</span> implies <span class="math-container">$a=0$</span>.</p> <p>(But note also the definition of a <em>separating set</em> in <a href="https://ncatlab.org/nlab/show/separator" rel="nofollow noreferrer">nLab</a>, which is a related but different concept.)</p> <p>[Takesaki] <em>Takesaki, Masamichi</em>, <em>Theory of operator algebras I</em>, Springer-Verlag, New York–Heidelberg–Berlin, 1979. <a href="https://zbmath.org/?q=an:0436.46043" rel="nofollow noreferrer">ZBL0436.46043</a>.</p>
1,600,597
<p>I'm currently going through Spivak's calculus, and after a lot of effort, I still can't seem to figure this one out.</p> <p>The problem states that you need to prove that $x = y$ or $x = -y$ if $x^n = y^n$.</p> <p>I tried to use the formula derived earlier for $x^n - y^n$, but that leaves either $x-y = 0$ or $x^{n-1}+x^{n-2}y+\cdots+xy^{n-2}+y^{n-1}=0$, and I'm not sure how to proceed from there.</p>
Américo Tavares
752
<p>Let $n=2p$. For convenience let us denote $y=a$. From the algebraic identities</p> <p>\begin{eqnarray} x^{2p}-a^{2p} &amp;=&amp;(x-a)\sum_{k=0}^{2p-1}a^{k}x^{2p-1-k}, \tag{1} \\ \sum_{k=0}^{2p-1}a^{k}x^{2p-1-k} &amp;=&amp;(x+a)\sum_{k=0}^{p-1}a^{2k}x^{2p-2-2k},\tag{2} \end{eqnarray}</p> <p>we conclude that</p> <p>\begin{equation} x^{2p}-a^{2p}=(x-a)(x+a)\sum_{k=0}^{p-1}a^{2k}x^{2p-2-2k}. \tag{3} \end{equation}</p> <p>Since for $a\neq 0$ the polynomial $\sum_{k=0}^{p-1}a^{2k}x^{2p-2-2k}$ on the right-hand side of (3) has no real roots, it follows that the equation $x^{2p}-y^{2p}=0$ is equivalent to $(x-y)(x+y)=0$, thus proving that if $x^{n}=y^{n}$ and $ n $ is even, then $x=y$ or $x=-y$.</p> <p>The identities $(1)$ and $(2)$ can be justified by applying <a href="https://en.wikipedia.org/wiki/Ruffini%27s_rule" rel="nofollow"><em>Ruffini's Rule</em></a> twice: for identity $(1)$</p> <p>$$ \begin{array}{c|cccccccc} &amp; 1 &amp; 0 &amp; 0 &amp; \ldots &amp; 0 &amp; 0 &amp; &amp; -a^{2p} \\ a &amp; &amp; a &amp; a^2 &amp; \ldots &amp; a^{2p-2} &amp; a^{2p-1} &amp; &amp; a^{2p} \\ \hline &amp; 1 &amp; a &amp; a^{2} &amp; \ldots &amp; a^{2p-2} &amp; a^{2p-1} &amp; | &amp; 0 \end{array} $$</p> <p>\begin{equation*} x^{2p}-a^{2p}=(x-a)(x^{2p-1}+ax^{2p-2}+a^{2}x^{2p-3}+\cdots +a^{2p-2}x+a^{2p-1}), \end{equation*}</p> <p>and for identity $(2)$</p> <p>$$ \begin{array}{c|cccccccc} &amp; 1 &amp; a &amp; a^{2} &amp; a^{3} &amp; \ldots &amp; a^{2p-2} &amp; &amp; a^{2p-1} \\ -a &amp; &amp; -a &amp; 0 &amp; -a^{3} &amp; \ldots &amp; 0 &amp; &amp; -a^{2p-1} \\ \hline &amp; 1 &amp; 0 &amp; a^{2} &amp; 0 &amp; \ldots &amp; a^{2p-2} &amp; | &amp; 0 \end{array} $$</p> <p>$x^{2p-1}+ax^{2p-2}+\cdots +a^{2p-2}x+a^{2p-1}$ $$=(x+a)(x^{2p-2}+a^{2}x^{2p-4}+a^{4}x^{2p-6}+\cdots +a^{2p-4}x^{2}+a^{2p-2})$$</p>
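<p>The factorizations $(1)$–$(3)$ are easy to sanity-check numerically (a sketch, here for $p=3$, i.e. $n=6$, at random real points):</p>

```python
import random

random.seed(1)
p = 3                                   # so n = 2p = 6
for _ in range(100):
    x, a = random.uniform(-2, 2), random.uniform(-2, 2)
    lhs = x ** (2 * p) - a ** (2 * p)
    s = sum(a ** (2 * k) * x ** (2 * p - 2 - 2 * k) for k in range(p))  # even sum in (3)
    rhs = (x - a) * (x + a) * s
    assert abs(lhs - rhs) < 1e-9 * (1 + abs(lhs))
    assert s > 0                        # the even-degree factor stays positive here
print("identity (3) holds at 100 random points for n = 6")
```

The positivity of the remaining factor at the sampled points illustrates the key step of the proof: for $a\neq 0$ it has no real roots, so the real solutions of $x^{2p}=a^{2p}$ come only from $(x-a)(x+a)=0$.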
1,600,597
<p>I'm currently going through Spivak's calculus, and after a lot of effort, I still can't seem to figure this one out.</p> <p>The problem states that you need to prove that $x = y$ or $x = -y$ if $x^n = y^n$.</p> <p>I tried to use the formula derived earlier for $x^n - y^n$, but that leaves either $x-y = 0$ or $x^{n-1}+x^{n-2}y+\cdots+xy^{n-2}+y^{n-1}=0$, and I'm not sure how to proceed from there.</p>
Ennar
122,131
<p>We have that $x\mapsto x^n\colon \mathbb R_{\geq 0} \to \mathbb R_{\geq 0}$ is strictly increasing function and thus injective. Now,</p> <p>$$x^n = y^n \implies |x|^n = |y|^n \implies |x| = |y| \implies x=\pm y\stackrel{\text{$n$ is even}}\implies x^n = y^n$$ therefore, $x^n = y^n\iff x =\pm y$.</p>
2,871,949
<p>Let $X_1, X_2, X_3, X_4$ be independent Bernoulli random variables. Then \begin{align} Pr[X_i=1]=Pr[X_i=0]=1/2. \end{align} I want to compute the following probability \begin{align} Pr( X_1+X_2+X_3=2, X_2+X_4=1 ). \end{align} My solution: Suppose that $X_1+X_2+X_3=2$ and $X_2+X_4=1$. Then $(X_2, X_4)=(0,1)$ or $(1,0)$. The probabilities of $(X_2, X_4)=(0,1)$ and $(1,0)$ are both $1/2 \times 1/2 = 1/4$. If $(X_2, X_4)=(0,1)$, then $X_1+X_2+X_3=X_1+X_3=2$. Therefore $X_1 = X_3 =1$. This happens with probability $1/4$. If $(X_2, X_4)=(1,0)$, then $X_1+X_2+X_3=X_1+1+X_3=2$. Therefore $(X_1, X_3) \in \{(1,0), (0,1)\}$. This happens with probability $1/4$. Therefore \begin{align} Pr( X_1+X_2+X_3=2, X_2+X_4=1 ) = 1/16+2/16=3/16. \end{align} Is this correct? Thank you very much.</p>
tortue
140,475
<p>Yes, your solution is indeed correct! </p> <hr> <p>Minor comment: in the last sentence before the final expression, "this" probably refers to each of the outcomes $(1, 0)$ and $(0, 1)$ rather than to the event $(X_1, X_3) \in \{ (1, 0), (0, 1) \}$.</p>
2,871,949
<p>Let $X_1, X_2, X_3, X_4$ be independent Bernoulli random variables. Then \begin{align} Pr[X_i=1]=Pr[X_i=0]=1/2. \end{align} I want to compute the following probability \begin{align} Pr( X_1+X_2+X_3=2, X_2+X_4=1 ). \end{align} My solution: Suppose that $X_1+X_2+X_3=2$ and $X_2+X_4=1$. Then $(X_2, X_4)=(0,1)$ or $(1,0)$. The probabilities of $(X_2, X_4)=(0,1)$ and $(1,0)$ are both $1/2 \times 1/2 = 1/4$. If $(X_2, X_4)=(0,1)$, then $X_1+X_2+X_3=X_1+X_3=2$. Therefore $X_1 = X_3 =1$. This happens with probability $1/4$. If $(X_2, X_4)=(1,0)$, then $X_1+X_2+X_3=X_1+1+X_3=2$. Therefore $(X_1, X_3) \in \{(1,0), (0,1)\}$. This happens with probability $1/4$. Therefore \begin{align} Pr( X_1+X_2+X_3=2, X_2+X_4=1 ) = 1/16+2/16=3/16. \end{align} Is this correct? Thank you very much.</p>
Giulio Scattolin
580,201
<p>Let's verify your result by simulation using a <em>Python</em> script:</p>

<pre><code>import numpy as np

N = 10**5  # number of trials

# list of N 4-tuples (X1, X2, X3, X4)
XX = [np.random.randint(2, size=4) for n in np.arange(N)]

# indicator of each trial's outcome: X1+X2+X3 == 2 and X2+X4 == 1
P = list(map(lambda X: (X[0]+X[1]+X[2]==2)&amp;(X[1]+X[3]==1), XX))

# fraction of successes
np.mean(P)  # ~ 0.1875
</code></pre>

<p>Since $\frac{3}{16} = 0.1875$ agrees with the simulated value, this confirms your result.</p>
3,069,262
<p>Given some quadrilateral <span class="math-container">$Q \subset \mathbb R^2$</span> defined by the vertices <span class="math-container">$P_i = (x_i,y_i), i=1,2,3,4$</span> (you can assume they are in positive orientation), is there a function <span class="math-container">$f: \mathbb R^2 \to \mathbb R^2$</span> that is particularly easy to compute which satisfies </p> <p><span class="math-container">$$f(Q) \subseteq U \quad \text{ and } \quad f(\mathbb R^2 \setminus Q) \subseteq \mathbb R^2 \setminus U?$$</span></p> <p>Here <span class="math-container">$U = [0,1]\times[0,1]$</span> denotes the unit square (or <span class="math-container">$U=[-1,1]\times[-1,1]$</span> if you prefer)</p> <p>My first attempt was using a function <span class="math-container">$f(x,y) := (a + bx + cy +dxy, a' + b'x+c'y +d'xy)$</span> (known as <s>the 2d perspective transformation</s> bilinear interpolation), but determining the coefficients <span class="math-container">$a,b,c,\ldots, d'$</span> requires inverting a <span class="math-container">$4\times 4$</span> matrix to solve two linear systems of equations.</p> <p>EDIT: The actual 2d perspective transformation as described <a href="https://math.stackexchange.com/a/339033/109451">here</a> does only produce the desired result if <span class="math-container">$Q$</span> is <em>convex</em>, which is not necessarily the case.</p>
symchdmath
626,816
<p>This integral is less complicated than it looks and only requires comfort with the hyperbolic trigonometric definitions. My initial instinct is to look at the expression inside the <span class="math-container">$\cosh$</span> to see if I can simplify it. In fact we have by the definition of the hyperbolic trigonometric functions,</p> <p><span class="math-container">$$\tanh^{-1}(x) = \frac{1}{2} \ln \left(\frac{1+x}{1-x}\right) $$</span></p> <p>Now using log laws we have that,</p> <p><span class="math-container">$$\tanh^{-1}(3x) - \tanh^{-1}(x) = \frac{1}{2} \ln \left(\frac{(1+3x)(1-x)}{(1-3x)(1+x)} \right), \ \ (*)$$</span></p> <p>We note at this point that,</p> <p><span class="math-container">$$\sqrt{36x^4 - 40x^2 + 4} = 2\sqrt{(1-9x^2)(1-x^2)} = 2\sqrt{(1-3x)(1+3x)(1-x)(1+x)}$$</span></p> <p>There is a lot in common between the above two equations, which motivates us to press forward. We now use the following identity, which we can prove just with the definitions of the hyperbolic trig functions,</p> <p><span class="math-container">$$\cosh(x+y) = \cosh(x) \cosh(y) + \sinh(x) \sinh(y)$$</span></p> <p>We use it to simplify the <span class="math-container">$\cosh$</span> in the integral, finding that,</p> <p><span class="math-container">$$\cosh(3x + (\tanh^{-1}(3x) - \tanh^{-1}(x))) = \cosh(3x) \cosh(\tanh^{-1}(3x) - \tanh^{-1}(x)) + \sinh(3x) \sinh(\tanh^{-1}(3x) - \tanh^{-1}(x)) $$</span></p> <p>Using the following definitions of the hyperbolic functions,</p> <p><span class="math-container">$$\cosh(x) = \frac{e^x + e^{-x}}{2} $$</span></p> <p><span class="math-container">$$\sinh(x) = \frac{e^x - e^{-x}}{2} $$</span></p> <p>We find that using <span class="math-container">$(*)$</span>, leaving the details to you,</p> <p><span class="math-container">$$\cosh(\tanh^{-1}(3x) - \tanh^{-1}(x)) = \cosh\left(\frac{1}{2}\ln \left(\frac{(1+3x)(1-x)}{(1-3x)(1+x)} \right)\right) = \frac{1-3x^2}{\sqrt{(1-9x^2)(1-x^2)}}$$</span></p> <p>Similarly, we have</p> <p><span
class="math-container">$$\sinh(\tanh^{-1}(3x) - \tanh^{-1}(x)) = \frac{2x}{\sqrt{(1-9x^2)(1-x^2)}} $$</span></p> <p>Putting this all together,</p> <p><span class="math-container">$$I = \int_{-1/3}^{1/3} \sqrt{36x^4 - 40x^2 + 4} \cosh(3x + \tanh^{-1}(3x) - \tanh^{-1}(x)) \ \mathrm{d}x $$</span></p> <p><span class="math-container">$$I = \int_{-1/3}^{1/3} 2\sqrt{(1-9x^2)(1-x^2)} \left(\cosh(3x) \frac{1-3x^2}{\sqrt{(1-9x^2)(1-x^2)}} + \sinh(3x)\frac{2x}{\sqrt{(1-9x^2)(1-x^2)}} \right) \ \mathrm{d}x$$</span></p> <p>Finally the integral simplifies to,</p> <p><span class="math-container">$$I = 2 \int_{-1/3}^{1/3} (1-3x^2)\cosh(3x) \ \mathrm{d}x + 2 \int_{-1/3}^{1/3} 2x \sinh(3x) \ \mathrm{d}x $$</span></p> <p>Which is a lot easier to compute with IBP and I will leave that to you to complete, this does indeed give the correct answer.</p>
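<p>Before trusting the two closed forms above, they can be spot-checked numerically; the short standard-library sketch below (not part of the original answer) compares each closed form against a direct evaluation at a few points of $(-1/3, 1/3)$:</p>

```python
import math

def closed_forms(x):
    """Direct values and closed forms of cosh/sinh of atanh(3x) - atanh(x)."""
    d = math.atanh(3 * x) - math.atanh(x)
    root = math.sqrt((1 - 9 * x * x) * (1 - x * x))
    return (math.cosh(d), (1 - 3 * x * x) / root,   # cosh: direct, closed form
            math.sinh(d), 2 * x / root)             # sinh: direct, closed form

max_err = max(
    max(abs(c - c_cf), abs(s - s_cf))
    for x in (-0.3, -0.15, 0.1, 0.25, 0.33)
    for c, c_cf, s, s_cf in [closed_forms(x)]
)
# max_err sits at the level of double-precision rounding
```
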
4,321,675
<p>I'm struggling to derive the Finsler geodesic equations. The books I know either skip the computation or use the length functional directly. I want to use the energy. Let <span class="math-container">$(M,F)$</span> be a Finsler manifold and consider the energy functional <span class="math-container">$$E[\gamma] = \frac{1}{2}\int_I F^2_{\gamma(t)}(\dot{\gamma}(t))\,{\rm d}t\tag{1}$$</span>evaluated along a (regular) curve <span class="math-container">$\gamma\colon I \to M$</span>. We use tangent coordinates <span class="math-container">$(x^1,\ldots,x^n,v^1,\ldots, v^n)$</span> on <span class="math-container">$TM$</span> and write <span class="math-container">$g_{ij}(x,v)$</span> for the components of the fundamental tensor of <span class="math-container">$(M,F)$</span>. We may take for granted (using Einstein's convention) that <span class="math-container">$$F^2_x(v) = g_{ij}(x,v)v^iv^j, \quad \frac{1}{2}\frac{\partial F^2}{\partial v^i}(x,v) = g_{ij}(x,v)v^j, \quad\frac{\partial g_{ij}}{\partial v^k}(x,v)v^k = 0.\tag{2} $$</span></p> <p>Setting <span class="math-container">$L(x,v) = (1/2) F_x^2(v)$</span>, and writing <span class="math-container">$(\gamma(t),\dot{\gamma}(t)) \sim (x(t),v(t))$</span>, the Euler-Lagrange equations are <span class="math-container">$$0 = \frac{{\rm d}}{{\rm d}t}\left(\frac{\partial L}{\partial v^k}(x(t),v(t))\right) -\frac{\partial L}{\partial x^k}(x(t),v(t)),\quad k=1,\ldots, n=\dim(M).\tag{3}$$</span>It's easy to see (omitting application points) that <span class="math-container">$$\frac{\partial L}{\partial x^k} = \frac{1}{2}\frac{\partial g_{ij}}{\partial x^k}\dot{x}^i\dot{x}^j\quad\mbox{and}\quad \frac{\partial L}{\partial v^k} = g_{ik}\dot{x}^i,\tag{4}$$</span>so <span class="math-container">$$\frac{\rm d}{{\rm d}t}\left(\frac{\partial L}{\partial v^k}\right) = \frac{\partial g_{ik}}{\partial x^j}\dot{x}^j\dot{x}^i +{\color{red}{ \frac{\partial g_{ik}}{\partial v^j} \ddot{x}^j\dot{x}^i }}+ g_{ik}\ddot{x}^i\tag{5}$$</span> 
<strong>Problem:</strong> I cannot see for the life of me how to get rid of these <span class="math-container">$v^j$</span>-derivatives indicated in red, even using the last relation in (2), as the indices simply don't match. I am surely missing something obvious. Once we know that this term does vanish, then (4) and (5) combine to give <span class="math-container">$$ g_{ik}\ddot{x}^i + \left(\frac{\partial g_{ik}}{\partial x^j} - \frac{1}{2}\frac{\partial g_{ij}}{\partial x^k}\right)\dot{x}^i\dot{x}^j =0\tag{6}$$</span><a href="https://en.wikipedia.org/wiki/Finsler_manifold#Canonical_spray_structure_on_a_Finsler_manifold" rel="nofollow noreferrer">as in the Wikipedia page</a>.</p>
Qmechanic
11,127
<p>OP's red term vanishes <span class="math-container">$$\frac{\partial g_{ik}}{\partial v^j} \ddot{x}^j\dot{x}^i ~\stackrel{\rm EOM}{\approx}~ v^i\frac{\partial g_{ik}}{\partial v^j} \dot{v}^j ~\stackrel{(C)}{=}~v^i\frac{\partial g_{jk}}{\partial v^i} \dot{v}^j ~\stackrel{(B)}{=}~0\tag{A}$$</span> because the metric <span class="math-container">$g_{jk}$</span> has homogeneity weight 0: <span class="math-container">$$ v^i\frac{\partial g_{jk}}{\partial v^i} ~=~0 .\tag{B}$$</span> Eq. (B) is a consequence of the definition of the metric <span class="math-container">$$ g_{ij}~:=~\frac{1}{2}\frac{\partial^2 (F^2)}{\partial v^i\partial v^j},\tag{C}$$</span> and that <span class="math-container">$F$</span> has homogeneity weight 1: <span class="math-container">$$v^i\frac{\partial F}{\partial v^i}~=~F ,\tag{D}$$</span> cf. the homogeneity property of the <a href="https://en.wikipedia.org/wiki/Finsler_manifold" rel="nofollow noreferrer">definition</a>. [Eqs. (C) &amp; (D) also imply the 1st equality in OP's eq. (2).]</p>
1,530,848
<p>Let $F(\mathbb{R})$ be the set of all functions $f : \mathbb{R} \to \mathbb{R}$. Define pointwise addition and multiplication as follows. For any $f$ and $g$ in $F(\mathbb{R})$ let:</p> <p>(i) $(f + g)(x) = f(x) + g(x)$ for all $x \in \mathbb{R}$</p> <p>(ii) $(f · g)(x) = f(x) · g(x)$ for all $x \in \mathbb{R}$</p> <p>Prove that $F(\mathbb{R})$ forms a ring under these two operations.</p> <p>I know the axioms of rings that I need to show hold for this to be true. My question is, when I am showing closure for instance,</p> <p>$(f + g)(x) +(f + g)(y) = f(x) + g(x)+ f(y) + g(y) \in \mathbb{R}$ and</p> <p>$(f · g)(x)·(f · g)(y) = f(x) · g(x) · f(y) · g(y) \in \mathbb{R}$</p> <p>is this the correct way to apply the axioms? My book was very general about the definition of a ring. The whole pointwise notation versus the definition is confusing me. Any help would be appreciated.</p>
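<p>The pointwise operations are easy to experiment with concretely; the hedged sketch below (function names are mine, not from the question) builds $f+g$ and $f\cdot g$ as new functions and checks one ring axiom, distributivity, at sample points:</p>

```python
def ptwise_add(f, g):
    # (f + g)(x) = f(x) + g(x)
    return lambda x: f(x) + g(x)

def ptwise_mul(f, g):
    # (f * g)(x) = f(x) * g(x)
    return lambda x: f(x) * g(x)

f = lambda x: 2 * x + 1
g = lambda x: x * x
h = lambda x: x - 3

# distributivity f*(g + h) = f*g + f*h, inherited pointwise from the reals
lhs = ptwise_mul(f, ptwise_add(g, h))
rhs = ptwise_add(ptwise_mul(f, g), ptwise_mul(f, h))
distributive = all(lhs(x) == rhs(x) for x in range(-5, 6))
```

<p>Closure is automatic here: <code>ptwise_add(f, g)</code> is again a single function $\mathbb{R}\to\mathbb{R}$, which is exactly what closure asks for; each ring axiom then reduces to the corresponding axiom of $\mathbb{R}$ applied at every point.</p>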
Daniel R. Collins
266,243
<p>Practice. </p> <p>I might say that the general problem-solving sequence is (per Polya): (1) Read a natural-language problem carefully, (2) Translate to math equation(s); (3) Solve the equation(s); (4) Translate back to natural language and check for reasonability. </p> <p>Now the truth is that in step #3 you usually do want to be manipulating symbols directly. There is usually some step-by-step algorithm or strategy for a given field of problems; and the whole point of the symbolic writing is that this becomes the most concise and fastest way of solving such a problem. Alfred North Whitehead wrote:</p> <blockquote> <p>"By the aid of symbolism, we can make transitions in reasoning almost mechanically, by the eye, which otherwise would call into play the higher faculties of the brain. [...] It is a profoundly erroneous truism, repeated by all copy-books and by eminent people when they are making speeches, that we should cultivate the habit of thinking of what we are doing. The precise opposite is the case. Civilisation advances by extending the number of important operations which we can perform without thinking about them. Operations of thought are like cavalry charges in a battle — they are strictly limited in number, they require fresh horses, and must only be made at decisive moments."</p> </blockquote> <p>But! Don't forget to also practice <em>at the end</em> of the symbolic process translating back to natural language and double-checking if the meaning is reasonable (which can be by substitution, estimation, and/or quick graphing). Failing to do that last part does leave a person with a rather empty and brittle set of symbolic-pushing skills. </p> <p>As usual, the answer is: "You need both". </p>
2,214,137
<p>How many positive integer solutions does the equation $a+b+c=100$ have if we require $a&lt;b&lt;c$?</p> <p>I know how to solve the problem if it was just $a+b+c=100$ but the fact it has the restriction $a&lt;b&lt;c$ is throwing me off.</p> <p>How would I solve this?</p>
Ziad Fakhoury
295,839
<p>Since $a$ is the smallest, the largest number it can be is $32$, so $a$ ranges from $1$ to $32$. The remaining sum $b+c$ must be equal to $100 - a$ and since $b$ is smaller than $c$ then $b$ ranges from $a+1$ to $\lfloor \frac{100-a}{2}\rfloor $. So for a given $a$ there are $\lfloor \frac{100-a}{2}\rfloor - a -1$ different $b$s one can choose. Therefore the number of solutions is </p> <p>$$\sum_{a=1}^{32} \lfloor \frac{100-a}{2}\rfloor - a -1$$</p> <p>$$=\sum_{a=1}^{32} \lfloor \frac{100-a}{2}\rfloor - \sum_{a=1}^{32}a -32 $$ $$ = 2\sum_{a=34}^{49}a - \sum_{a=1}^{32}a -32$$ $$=49(50) - 34(33) - \frac{32(33)}{2} -32= 768$$</p> <p>EDIT: </p> <p>Following the comments of Vik, it is quite clear that since we are including the case $b = a +1$ the expression should change to $$\sum_{a=1}^{32} \lfloor \frac{100-a}{2}\rfloor - a $$ However, as Vik also mentioned, this is an overestimate, as it also counts the combination $b = c = (100-a)/2$, which occurs when $a$ is even. Since $b$ is strictly less than $c$, we need to subtract $16$ from our expression, as there are $16$ even values of $a$ in range. Therefore, we get $$\sum_{a=1}^{32}( \lfloor \frac{100-a}{2}\rfloor - a) - 16= 784$$</p>
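<p>The corrected count of $784$ is easy to confirm by brute force (a quick check, not part of the original answer):</p>

```python
# count triples of positive integers with a < b < c and a + b + c = 100
count = sum(
    1
    for a in range(1, 100)
    for b in range(a + 1, 100)
    if (c := 100 - a - b) > b
)
print(count)  # 784
```
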
2,114,276
<p>How to show that $(x^{1/4}-y^{1/4})(x^{3/4}+x^{1/2}y^{1/4}+x^{1/4}y^{1/2}+y^{3/4})=x-y$</p> <p>Can anyone explain how to solve this question for me? Thanks in advance. </p>
S.C.B.
310,930
<p>This follows from the fact that $$x^4-y^4=(x^2-y^2)(x^2+y^2)=(x-y)(x+y)(x^2+y^2)=(x-y)(x^3+x^2y+xy^2+y^3)$$ Now just replace $x,y$ with $x^{\frac{1}{4}}$ and $y^{\frac{1}{4}}$.</p> <p>It is known, in general, that $$x^n-y^n=(x-y)(x^{n-1}+x^{n-2}y+\dots+xy^{n-2}+y^{n-1})$$ As can be seen <a href="https://en.wikipedia.org/wiki/Factorization#Sum.2Fdifference_of_two_nth_powers" rel="nofollow noreferrer">here</a>. </p>
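<p>For positive $x$ and $y$ the identity can also be sanity-checked numerically with plain floats (a quick check, not part of the original answer):</p>

```python
import math

def lhs(x, y):
    # (x^{1/4} - y^{1/4})(x^{3/4} + x^{1/2} y^{1/4} + x^{1/4} y^{1/2} + y^{3/4})
    a, b = x ** 0.25, y ** 0.25
    return (a - b) * (a ** 3 + a ** 2 * b + a * b ** 2 + b ** 3)

telescopes = all(
    math.isclose(lhs(x, y), x - y, rel_tol=1e-12, abs_tol=1e-12)
    for x, y in [(16.0, 1.0), (2.0, 3.0), (7.5, 7.5), (0.3, 9.0)]
)
```
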
615,275
<p>So I'm making a starship bridge game where the game is rendered using a 2-D Cartesian grid for positioning logic. The player has only the attributes of position and an arbitrary look-at angle (currently degrees). A "view-port" determines if a planet is within the angular difference of $45^\circ$ so that it can render the planet. My problem is finding the formula in order to find the appropriate x-coordinate on the "view-port". So far I have </p> <p>$x = \frac{\text{View Width}}{2} - \frac{\text{View Width}}{2}\times (\text{Angular Difference})$</p> <p>where the angular difference is converted to a rational number between 0.0 and 1.0 and can be negative or positive</p> <p><img src="https://i.stack.imgur.com/oBgfJ.jpg" alt="enter image description here"></p>
Michael Albanese
39,599
<p>First of all, you said that $\sin x = \frac{\sqrt{2}}{2}$ when $x = \frac{\pi}{4}, \frac{3\pi}{4}$ then concluded that $\sin x &gt; \frac{\sqrt{2}}{2}$ when $\frac{\pi}{4} &lt; x &lt; \frac{3\pi}{4}$. While this is true, you should give some explanation here as it could be the case that $\sin x &lt; \frac{\sqrt{2}}{2}$ for $\frac{\pi}{4} &lt; x &lt; \frac{3\pi}{4}$.</p> <p>As $\sin x &gt; \frac{\sqrt{2}}{2}$ for $\frac{\pi}{4} &lt; x &lt; \frac{3\pi}{4}$, $\sin(x - \frac{\pi}{3}) &gt; \frac{\sqrt{2}}{2}$ for $\frac{\pi}{4} &lt; x - \frac{\pi}{3} &lt; \frac{3\pi}{4}$. By adding $\frac{\pi}{3}$ to each term in the inequality, we have $\frac{7\pi}{12} &lt; x &lt; \frac{13\pi}{12}$. </p> <p>So, for every $k \in \mathbb{Z}$, we have $2\sin(x-\frac{\pi}{3}) - \sqrt{2} &gt; 0$ for $\frac{7\pi}{12}+2k\pi &lt; x &lt; \frac{13\pi}{12}+2k\pi$. </p> <p>For $k = -1$ we have $-\frac{17\pi}{12} &lt; x &lt; -\frac{11\pi}{12}$, and for $k = 0$ we have $\frac{7\pi}{12} &lt; x &lt; \frac{13\pi}{12}$. As we are looking for $x$ which satisfy $-\pi &lt; x &lt; \pi$, these are the only $x$ we need to consider (for any other $k$, the corresponding inequalities do not allow for $x$ which also satisfy $-\pi &lt; x &lt; \pi$). </p> <p>If $x$ satisfies $-\frac{17\pi}{12} &lt; x &lt; -\frac{11\pi}{12}$ and $-\pi &lt; x &lt; \pi$, then $-\pi &lt; x &lt; -\frac{11\pi}{12}$.</p> <p>If $x$ satisfies $\frac{7\pi}{12} &lt; x &lt; \frac{13\pi}{12}$ and $-\pi &lt; x &lt; \pi$, then $\frac{7\pi}{12} &lt; x &lt; \pi$. </p> <p>Therefore, for $-\pi &lt; x &lt; \pi$, $2\sin(x-\frac{\pi}{3}) - \sqrt{2} &gt; 0$ for $-\pi &lt; x &lt; -\frac{11\pi}{12}$ and $\frac{7\pi}{12} &lt; x &lt; \pi$.</p>
615,275
<p>So I'm making a starship bridge game where the game is rendered using a 2-D Cartesian grid for positioning logic. The player has only the attributes of position and an arbitrary look-at angle (currently degrees). A "view-port" determines if a planet is within the angular difference of $45^\circ$ so that it can render the planet. My problem is finding the formula in order to find the appropriate x-coordinate on the "view-port". So far I have </p> <p>$x = \frac{\text{View Width}}{2} - \frac{\text{View Width}}{2}\times (\text{Angular Difference})$</p> <p>where the angular difference is converted to a rational number between 0.0 and 1.0 and can be negative or positive</p> <p><img src="https://i.stack.imgur.com/oBgfJ.jpg" alt="enter image description here"></p>
lab bhattacharjee
33,337
<p>We need $$2\sin\left(x-60^\circ\right)-\sqrt2&gt;0$$</p> <p>But as $\sin45^\circ=\frac1{\sqrt2},$ it essentially implies and is implied by $$2\sin\left(x-60^\circ\right)-2\sin45^\circ&gt;0$$ </p> <p>using <a href="http://mathworld.wolfram.com/ProsthaphaeresisFormulas.html" rel="nofollow">Prosthaphaeresis Formula</a>s,</p> <p>$$\sin\left(x-60^\circ\right)-\sin45^\circ=2\sin\dfrac{x-105^\circ}2\cos\frac{x-15^\circ}2\ \ \ \ (1)$$</p> <p>Now, $$\sin\frac{x-105^\circ}2&gt;0\iff n360^\circ&lt;\frac{x-105^\circ}2&lt;n360^\circ+180^\circ$$ $$\iff n720^\circ+105^\circ&lt;x&lt;n720^\circ+465^\circ$$</p> <p>Setting $n=0$ gives $105^\circ&lt;x&lt;465^\circ,$</p> <p>and setting $n=-1$ gives $(-720+105)^\circ&lt;x&lt;(-720+465)^\circ\iff -615^\circ&lt;x&lt;-255^\circ,$ which lies entirely outside $-180^\circ&lt;x&lt;180^\circ$.</p> <p>As we have $-180^\circ&lt;x&lt;180^\circ,$</p> <p>$\displaystyle\sin\frac{x-105^\circ}2&gt;0\iff105^\circ&lt;x&lt;180^\circ,$ and $\displaystyle\sin\frac{x-105^\circ}2&lt;0$ for $-180^\circ&lt;x&lt;105^\circ\ \ \ \ (2)$</p> <p>Again, $$\cos\frac{x-15^\circ}2&gt;0\iff m360^\circ-90^\circ&lt;\frac{x-15^\circ}2&lt;m360^\circ+90^\circ$$</p> <p>$$m720^\circ-165^\circ&lt;x&lt;m720^\circ+195^\circ$$</p> <p>Setting $m=0$, as $-180^\circ&lt;x&lt;180^\circ,$</p> <p>$\displaystyle \cos\frac{x-15^\circ}2&gt;0\iff -165^\circ&lt;x&lt;180^\circ,$ and $\displaystyle\cos\frac{x-15^\circ}2&lt;0$ for $-180^\circ&lt;x&lt;-165^\circ \ \ \ \ (3)$</p> <p>So, $(1)$ will be $&gt;0$ </p> <p>if $105^\circ&lt;x&lt;180^\circ$ (where both factors in $(1)$ are positive, by $(2)$ and $(3)$) </p> <p>or if $-180^\circ&lt;x&lt;-165^\circ$ (where both factors are negative) </p>
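<p>Both the Prosthaphaeresis factorisation and the final sign pattern are easy to sanity-check numerically (a quick standard-library check, not part of the original answer; a quarter-degree grid keeps the sample points away from the boundary angles):</p>

```python
import math

deg = math.radians
SQ2 = math.sqrt(2.0)

# factorisation: sin(x - 60°) - sin 45° = 2 sin((x - 105°)/2) cos((x - 15°)/2)
fact_ok = all(
    math.isclose(
        math.sin(deg(x - 60)) - math.sin(deg(45)),
        2 * math.sin(deg((x - 105) / 2)) * math.cos(deg((x - 15) / 2)),
        abs_tol=1e-12,
    )
    for x in range(-179, 180, 7)
)

# sign pattern: 2 sin(x - 60°) - sqrt(2) > 0 exactly on (-180°, -165°) U (105°, 180°)
grid = [k / 2 + 0.25 for k in range(-360, 359)]  # -179.75° .. 179.25°
sign_ok = all(
    (2 * math.sin(deg(x - 60)) - SQ2 > 0) == (-180 < x < -165 or 105 < x < 180)
    for x in grid
)
```
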
27,965
<p>I'm looking at <a href="https://math.stackexchange.com/questions/2669893/calculating-the-sums-of-series">this question</a>. I gave the answer that was accepted. Please bear in mind that, when I answered this question, it was a different edit. In particular, there were more parts to the question.</p> <p>The reason I'm here is because my answer got a few up votes as well as a few down votes, attaining a score of $-1$ at its nadir. I was wondering, was I justified in giving this answer?</p> <p>I'm aware of the guidelines about good questions for the site, and also aware that this question fails them. I've also seen <a href="https://math.meta.stackexchange.com/questions/27259/is-it-acceptable-to-answer-a-poor-quality-question">this meta post</a>.</p> <p>The reason I gave my answer is because I believed the asker really did have little clue about how to go about these questions. I figured that all the asker needed was one good, fully justified worked example, and they could do the rest on their own. The asker's comment at the end vindicated this view, and they changed the question so that my answer would more comfortably fit it!</p> <p>I understand that it's generally not preferable to reward poor quality questions on the site. I don't want the quality of questions to (further) drop. But, I do think that, for many people, these first steps are the most daunting. It's very hard for a new student to produce the work they've done, when they haven't done any. First and foremost, I want to help people on this site, as I'm sure most other people here do as well, and part of me finds it hard to accept that these people, who are genuinely confused at the first hurdle, cannot get help.</p> <p>I know there are ways around it too. I know that a more savvy user of the site knows that they can mitigate this, for example, by copying out the relevant formulae they know. However, this is not something that new users pick up easily.
I don't think it necessarily indicates laziness, or a lack of due consideration of a problem, merely an unfamiliarity with the subtler workings of this site.</p> <p>Bear in mind, I only answered one part of the question too. I wasn't giving out all the answers for all the parts, for the asker to present as their work. The asker would still be required to do most of the work in order to answer the question. I was just using one part as an example to impart the necessary tools for the asker to answer the question.</p> <p>So, with all that said, was I justified in giving this answer? Or were the downvotes justified instead?</p>
Community
-1
<blockquote> <p>I figured that all the asker needed was one good, fully justified worked example, and they could do the rest on their own.</p> </blockquote> <p>Doing the OP's exercises isn't the only way to achieve this goal.</p> <p>Instead, you could find (or create) a well-posed reference question that has the desired good, fully justified worked example, and then vote to close the given question as a duplicate of the reference question.</p> <hr> <p>Over mse history, there have been a number of drives to create repositories of reference questions (<a href="https://math.meta.stackexchange.com/questions/1868/list-of-generalizations-of-common-questions">such as this one</a>) to be used for precisely this purpose; the key phrase for searching on this topic is <a href="https://math.meta.stackexchange.com/search?q=abstract%20duplicate">abstract duplicate</a>. </p>
191,210
<p>Let $R$ be the smallest $\sigma$-algebra containing all compact sets in $\mathbb R^n$. I know that, by definition, the minimal $\sigma$-algebra containing the closed (or open) sets is the Borel $\sigma$-algebra. But how can I prove that $R$ is actually the Borel $\sigma$-algebra?</p>
William
13,579
<p>Let $\mathcal{B}$ denote the $\sigma$-algebra of Borel sets, i.e. the smallest $\sigma$-algebra containing the closed sets. Let $\mathcal{C}$ be the $\sigma$-algebra generated by all the compact subsets of $\mathbb{R}^n$. </p> <p>As you mentioned in your previous question, <a href="https://math.stackexchange.com/questions/191178/are-all-compact-sets-in-bbb-rn-g-delta-sets">Are all compact sets in $ \Bbb R^n$, $G_\delta$ sets?</a> , all compact sets are closed. Hence $\mathcal{C} \subset \mathcal{B}$. </p> <p>My answer to your previous question showed that all closed sets are $G_\delta$. Hence all open sets are $F_\sigma$. Let $U$ be an arbitrary open set. Let $(F_n)$ be a sequence of closed sets such that $U = \bigcup_{n \in \mathbb{N}} F_n$. Let $\bar{B}_k$ be the closed ball of radius $k$ centered at the origin. Define $C_{n,k} = F_n \cap \bar{B}_k$; this is a closed and bounded subset of $\mathbb{R}^n$, hence a compact set. So $U = \bigcup_{n,k} C_{n,k}$. Thus $U$ is a countable union of compact sets. So $\mathcal{C}$ contains all the open sets, and therefore $\mathcal{B} \subset \mathcal{C}$. </p> <p>Finally, $\mathcal{B} = \mathcal{C}$. The two $\sigma$-algebras are identical. </p>
24,318
<p>I have an expression as below:</p> <pre><code>Equations = 2.0799361919940695` x[1] + 3.3534325557330327` x[1]^2 - 4.335179297091139` x[1] x[2] + 1.1989715511881491` x[2]^2 - 3.766597877399148` x[1] x[3] - 0.33254815073371535` x[2] x[3] + 1.9050048836042945` x[3]^2 + 1.1386715715291826` x[1] x[4] + 2.802846492104668` x[2] x[4] - 0.6210244597295915` x[3] x[4] + 4.943369095158792` x[4]^2
</code></pre> <p>I want to write it in an output file. So I use the below code:</p> <pre><code>removebracketvar[x_] := StringReplace[ StringReplace[ ToString[x], {"[" -&gt; "", "]" -&gt; "", "," -&gt; "", "*^" -&gt; "e", ".*" -&gt; ".0*"}], Whitespace -&gt; ""];

SetDirectory["C:\\folder"];

WriteString["eqfile.txt", removebracketvar[ ToString[Equations , InputForm, NumberMarks -&gt; False]] ];

Close["eqfile.txt"]
</code></pre> <p>The slight problem with the code for me is that it writes the floating-point numbers with up to 16 digits of precision. I just want them rounded to about 10 digits. When I use <code>SetPrecision[Equations,10]</code>, it weirdly changes <code>x[1]</code> etc. to <code>x[1.0000000]</code>, etc.! I want to leave the variables as they are but want to print the floating-point numbers with fewer digits after the decimal point. What would be the best way of doing this?</p>
rcollyer
52
<p>I would simplify your code a bit, merging everything into the <code>Map</code> statement, and move everything into a function, as follows:</p> <pre><code>process[func_, xvals_] := Block[{points}, points = Map[ With[{val = func@#}, UnitStep[val] val]&amp;, xvals]; Transpose[{xvals, points}] ] </code></pre> <p>and then for your functions, you can simply run</p> <pre><code>process[func1, Range[0, 500, 2.5]] </code></pre> <p>Or, if you prefer to bury your <code>xvals</code> inside your function, just do this, instead:</p> <pre><code>process[func_, xvals_:Range[0, 500, 2.5]] := Block[{points}, points = Map[ With[{val = func@#}, UnitStep[val] val]&amp;, xvals]; Transpose[{xvals, points}] ] </code></pre>
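<p>For readers not working in Mathematica, the same evaluate-clip-pair pattern is compact in Python as well (a hedged sketch, not part of the original answer; <code>func</code> and the grid are stand-ins):</p>

```python
def process(func, xvals):
    # evaluate func on the grid, clip negative values to 0
    # (the role UnitStep[val] val plays above), and pair each
    # abscissa with its clipped value
    return [(x, max(0.0, func(x))) for x in xvals]

pts = process(lambda x: x - 2.0, [0.0, 1.0, 2.0, 3.0, 4.0])
# pts == [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (3.0, 1.0), (4.0, 2.0)]
```
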
1,647,157
<p>How can I solve this using only 'simple' algebraic tricks and asymptotic equivalences? No l'Hospital.</p> <p>$$\lim_{x \rightarrow0} \frac {\sqrt[3]{1+\arctan{3x}} - \sqrt[3]{1-\arcsin{3x}}} {\sqrt{1-\arctan{2x}} - \sqrt{1+\arcsin{2x}}} $$</p> <p>Rationalizing the numerator and denominator gives</p> <p>$$ \lim_{x \rightarrow0} \frac {A(\arctan{3x}+\arcsin{3x})} {B(\arctan{2x} + \arcsin{2x})} $$ where $\lim_{x \rightarrow 0} \frac{A}{B} = -\frac{2}{3} $</p>
Disintegrating By Parts
112,478
<p>The orthogonal complement of $Y$ consists of all $g$ such that $$ 0 = (f,g) = \int_{-\pi}^{0}f(t)\overline{g(t)}+\int_{0}^{\pi}f(t)\overline{g(t)}dt \\ = \int_{0}^{\pi}f(t-\pi)\overline{g(t-\pi)}+f(t)\overline{g(t)}dt \\ = \int_{0}^{\pi}f(t)\overline{\{g(t-\pi)+g(t)\}}dt,\;\;\; f \in Y. $$ It follows that $$ Y^{\perp} = \{ g \in L^2 : g(t-\pi)=-g(t) \}. $$ To find $g\in Y$ as stated, it is necessary and sufficient that $$ (t-g(t))\perp Y \\ (t-g(t)) \in Y^{\perp} \\ (t-\pi-g(t-\pi))=-(t-g(t)),\;\;\; t \in [0,\pi] \\ 2t-\pi = g(t-\pi)+g(t),\;\;\; t \in [0,\pi] \\ 2t-\pi = 2g(t),\;\;\; t\in [0,\pi] \\ g(t) = t-\frac{\pi}{2},\;\;\; t \in [0,\pi] \\ g(t-\pi)=g(t),\;\;\; t \in [-\pi,0). $$ Therefore the square distance is $$ \|t-g\|^2 = \int_{-\pi}^{0}|t-g(t)|^2dt+\int_{0}^{\pi}|t-g(t)|^2dt \\ =\int_{0}^{\pi}|t-\pi-g(t-\pi)|^2dt+\int_{0}^{\pi}|t-(t-\pi/2)|^2dt \\ =\int_{0}^{\pi}|t-\pi-g(t)|^2dt+ \pi\frac{\pi^2}{4} \\ =\int_{0}^{\pi}|t-\pi-(t-\pi/2)|^2dt+\frac{\pi^3}{4} \\ =\frac{2\pi^3}{4}. $$ Hence, as you expected, the distance from $t$ to $Y$ is $\sqrt{\pi^3/2}$.</p>
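<p>The minimiser and the distance can be double-checked numerically: extend $g$ to $[-\pi,0)$ by $g(t)=t+\pi/2$ (the condition $g(t-\pi)=g(t)$ applied to $g(t)=t-\pi/2$ on $[0,\pi]$) and integrate $|t-g(t)|^2$ with a crude midpoint rule (a quick check, not part of the original answer):</p>

```python
import math

def g(t):
    # the minimiser found above, extended to [-pi, 0) via g(t - pi) = g(t)
    return (t - math.pi / 2) if t >= 0 else (t + math.pi / 2)

n = 20_000
h = 2 * math.pi / n
dist_sq = sum(
    (t - g(t)) ** 2 * h
    for t in (-math.pi + (k + 0.5) * h for k in range(n))
)
# dist_sq is (numerically) pi^3 / 2, so the distance is sqrt(pi^3 / 2)
```
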