2,437,026
<p>Suppose that:</p> <p>$$ X \sim Bern(p) $$</p> <p>Then, intuitively $X^2 = X \sim Bern(p)$. However, when I try to think of it logically, it doesn't make any sense. </p> <p>As an example, $X$ is $1$ with probability $p$ and $0$ with probability $1-p$. Then, $X^2 = X\cdot X$ is $1$ only when both $X$'s are $1$, which occur with probability $p^2$, and so it doesn't seem like $X^2 = X$. Can someone tell me what is wrong here?</p>
Graham Kemp
135,106
<blockquote> <p>As an example, $X$ is $1$ with probability $p$ and $0$ with probability $1-p$. Then, $X^2 = X\cdot X$ is $1$ only when both $X$'s are $1$, which occur with probability $p^2$, and so it doesn't seem like $X^2 = X$. Can someone tell me what is wrong here?</p> </blockquote> <p>There are not "both $X$'s". &nbsp; $X^2$ is the product of $X$ and <em>itself</em>. </p> <p><em>Whenever</em> $X$ equals $1$, then $X^2$ must equal $1^2$. &nbsp; So $\{X=1\}$ and $\{X^2=1^2\}$ are the <em>exact same event</em>, and thus cannot have anything except the same probability measure. $$\mathsf P(X^2=1^2)~=~\mathsf P(X=1)~=~p$$</p> <p>Likewise, $\{X=0\} = \{X^2=0^2\}$.$$\mathsf P(X^2=0^2)~=~\mathsf P(X=0)~=~1-p$$</p>
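Since $X$ takes only the values $0$ and $1$, the identity $X^2=X$ holds pointwise, not merely in distribution; a quick simulation (a sketch, with an arbitrary $p=0.3$) illustrates this:

```python
import random

random.seed(0)
p = 0.3
samples = [1 if random.random() < p else 0 for _ in range(100_000)]

# 0^2 = 0 and 1^2 = 1, so squaring fixes every sample: X^2 = X pointwise
assert all(x * x == x for x in samples)

# consequently the empirical P(X^2 = 1) coincides with the empirical P(X = 1)
freq = sum(x * x for x in samples) / len(samples)
print(freq)  # close to p = 0.3
```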
1,133,817
<p>Using Legendre polynomial generating function \begin{equation} \sum_{n=0}^\infty P_n (x) t^n = \frac{1}{\sqrt{(1-2xt+t^2)}} \end{equation} Or $$ P_n(x)=\frac{1}{2^n n!} \frac{d^n}{dx^n} [(x^2-1)^n] $$ </p> <p>Show$$ P_{2n}(0)=\frac{(-1)^n (2n)!}{(4)^n (n!)^2} $$ And $$ P_{2n+1}(0)=0$$</p> <p>I expressed $$(x^2 -1)^{2n}= \sum_{k=0}^{2n} {2n \choose k}x^{4n-2k}(-1)^k$$ And using second formula given, the only term that remains after differentiating 2n times and substituting x=0 is where $$4n-2k=2n$$ so $$2n=2k$$ k=n i.e $$(-1)^n \frac{(2n)!}{(n!)^2} $$ but multiplying this by$$ \frac{1}{2^{2n} (2n)!} $$ doesn't give desired solution. Where have I gone wrong?</p>
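For reference, the stated closed forms can be checked numerically with the standard three-term recurrence $(k+1)P_{k+1}(x)=(2k+1)xP_k(x)-kP_{k-1}(x)$, which at $x=0$ reduces to $P_{k+1}(0)=-\frac{k}{k+1}P_{k-1}(0)$ (a sketch in exact arithmetic; it confirms the target identity, though it will not locate the algebra slip):

```python
from fractions import Fraction
from math import factorial

def legendre_at_zero(n):
    # P_0(0) = 1, P_1(0) = 0; then (k+1) P_{k+1}(0) = -k P_{k-1}(0)
    p_prev, p = Fraction(1), Fraction(0)
    if n == 0:
        return p_prev
    for k in range(1, n):
        p_prev, p = p, Fraction(-k, k + 1) * p_prev
    return p

for n in range(6):
    closed = Fraction((-1)**n * factorial(2 * n), 4**n * factorial(n)**2)
    assert legendre_at_zero(2 * n) == closed   # P_{2n}(0) matches the claim
    assert legendre_at_zero(2 * n + 1) == 0    # P_{2n+1}(0) = 0
```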
Joe
107,639
<p>Assume $\lim_{x\to0}f(x)$ exists. If by contradiction $\lim_{x\to0}f(x)=a\neq0$ then $\lim_{x\to0}\frac{f(x)}{x}=\infty$.</p>
238,076
<p>I was thinking about this when flying on the plane which was approaching and slowing down.</p> <p>Assume an object is approaching its target which is at a certain initial distance d at time t0.</p> <p>It starts at a speed that will allow it to reach the target in exactly one hour (e.g. d=100km, it starts at 100 km/h).</p> <p>Incrementally, it will slow down so that at every point in time, it will be exactly one hour far from reaching the target (after it had travelled 40 km being 60 km away, it will be travelling at 60 km/h).</p> <p>After reaching a predefined minimum speed (10 km/h), it will keep its velocity constant (and it will need exactly one additional hour to reach the target).</p> <p>How long will it take to reach the target?</p> <p>I somehow assume the answer should be 2 hours, independently of initial distance, but it does not fit (because then it would not matter what the original distance is (which it should not since we are travelling faster in the beginning), but any shorter path is included in the longer path and the resulting equation cannot possibly be true (or can it be?)) </p> <p>I think I am missing the mathematical apparatus that is needed to solve this (is it differential equations?)</p> <p>Can you please advise how to solve this?</p>
Ross Millikan
1,827
<p>Until the final speed, if $x$ is the distance to the target, we have $\frac {dx}{dt}=-x$. This is solved by $x=x_0e^{-t}$ where $x_0$ is the distance at time $0$. Then we look how long it takes to get to $10$, so we solve $10=100e^{-t}, -t=\ln \frac 1{10}, t=\ln 10$. Then, as you say, it takes an hour to get there so the total time is $1+\ln 10 \approx 3.303$ hours</p>
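The two phases can be restated numerically (a sketch, using $d=100$ km and the 10 km/h floor from the question):

```python
from math import log

# Phase 1: speed equals remaining distance, dx/dt = -x, from 100 km down to 10 km
t_phase1 = log(100 / 10)   # = ln 10 hours
# Phase 2: constant 10 km/h over the remaining 10 km
t_phase2 = 10 / 10         # = 1 hour

total = t_phase1 + t_phase2
print(total)  # ≈ 3.3026 hours
```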
2,510,723
<p>Does the series $\displaystyle\sum_{n=1}^{\infty} \dfrac{19+(n+5)!}{(n+7)!}$ converge or diverge?</p> <blockquote> <p>I tried using the ratio test, but it gave me this, which I don't see how to evaluate. $$\lim_{n\to\infty} \left|\frac{a_{n+1}}{a_{n}}\right| = \lim_{n\to\infty}\frac{19+(n+6)!}{\left(19+(n+5)!\right)(n+8)}$$</p> </blockquote> <p>Wolfram Alpha told me to use the comparison test, but I can't for the love of god see which series I'd compare it to.</p>
Nosrati
108,128
<p>$$\sum_{n=1}^{\infty} \frac{19+(n+5)!}{(n+7)!}=19\sum_{n=1}^{\infty} \frac{1}{(n+7)!}+\sum_{n=1}^{\infty} \frac{1}{(n+7)(n+6)}$$ and $$ \frac{1}{(n+7)!}&lt;\dfrac{1}{n^2}~~~,~~~\frac{1}{(n+7)(n+6)}&lt;\dfrac{1}{n^2}$$</p>
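The split (using $\frac{(n+5)!}{(n+7)!}=\frac{1}{(n+6)(n+7)}$) and the $1/n^2$ domination can be verified term by term in exact arithmetic (a sketch):

```python
from fractions import Fraction
from math import factorial

for n in range(1, 40):
    term = Fraction(19 + factorial(n + 5), factorial(n + 7))
    split = Fraction(19, factorial(n + 7)) + Fraction(1, (n + 6) * (n + 7))
    assert term == split                                        # the decomposition
    assert Fraction(1, factorial(n + 7)) < Fraction(1, n * n)   # 1/(n+7)! < 1/n^2
    assert Fraction(1, (n + 6) * (n + 7)) < Fraction(1, n * n)  # 1/((n+6)(n+7)) < 1/n^2
```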
1,624,221
<p>For the former one, I am aware that if we let $F(x)=\int_a^x f(t)dt$, then it also equals $\int_0^x f(t)dt-\int_0^a f(t)dt$, so $F'(x)= f(x)-0=f(x)$. But who can tell me why $\int_0^a f(t)dt$ is $0$?</p>
Jimmy R.
128,037
<p>Since $$\frac{1}{n^2-1}=\frac{1}{(n-1)(n+1)}=\frac12\left(\frac{1}{n-1}-\frac{1}{n+1}\right)$$ as you already have, then write for $N\in \mathbb N$<br> \begin{align}\sum_{n=2}^{N}\left(\frac{1}{n-1}-\frac{1}{n+1}\right)&amp;=\left(1-\frac13\right)+\left(\frac12-\frac14\right)+\left(\frac13-\frac15\right)+\dots+\left(\frac1{N-1}-\frac1{N+1}\right)\\[0.2cm]&amp;=1+\frac12+\left(\frac13-\frac13\right)+\dots+\left(\frac1{N-1}-\frac1{N-1}\right)-\frac1N-\frac1{N+1}\\[0.2cm]&amp;=\frac32-\frac1N-\frac1{N+1}\end{align} In other words, this sum <em>telescopes.</em> Now let $N\to \infty$ (and of course do not forget 1/2 in front of the sum) to conclude that $$\sum_{n=2}^{+\infty}\frac{1}{n^2-1}=\lim_{N\to+\infty} \frac12\sum_{n=2}^N\left(\frac{1}{n-1}-\frac1{n+1}\right)=\frac12\lim_{N\to+\infty}\left(\frac32-\frac1N-\frac1{N+1}\right)=\frac34$$</p>
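Numerically, the partial sums behave exactly as the closed form above predicts (a quick check):

```python
N = 10_000
# partial sum of (1/2) * sum_{n=2}^{N} (1/(n-1) - 1/(n+1))
partial = 0.5 * sum(1.0 / (n - 1) - 1.0 / (n + 1) for n in range(2, N + 1))
# the telescoped closed form (1/2)(3/2 - 1/N - 1/(N+1))
closed = 0.5 * (1.5 - 1.0 / N - 1.0 / (N + 1))

print(partial)  # ≈ 0.7499, approaching 3/4
assert abs(partial - closed) < 1e-9
assert abs(partial - 0.75) < 1e-3
```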
2,125,018
<blockquote> <p>You toss a fair coin 3x, events:</p> <p>A = "first flip H"</p> <p>B = "second flip T"</p> <p>C = "all flips H"</p> <p>D = "at least 2 flips T"</p> <p><strong>Q:</strong> Which events are independent?</p> </blockquote> <p>From the informal def. it is where one doesn't affect the other.</p> <p>So in this case, $A$ and $B$ seem independent? Any others?</p>
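The informal definition can be made precise by enumerating the $8$ equally likely outcomes and testing $P(X\cap Y)=P(X)\,P(Y)$ for every pair (a sketch):

```python
from itertools import product, combinations
from fractions import Fraction

outcomes = list(product("HT", repeat=3))  # 8 equally likely flip sequences
events = {
    "A": {o for o in outcomes if o[0] == "H"},                # first flip H
    "B": {o for o in outcomes if o[1] == "T"},                # second flip T
    "C": {o for o in outcomes if all(c == "H" for c in o)},   # all flips H
    "D": {o for o in outcomes if o.count("T") >= 2},          # at least 2 T
}

def prob(e):
    return Fraction(len(e), len(outcomes))

independent = [
    (x, y) for x, y in combinations("ABCD", 2)
    if prob(events[x] & events[y]) == prob(events[x]) * prob(events[y])
]
print(independent)  # [('A', 'B')]
```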
L Parker
410,830
<p>I retain the relationship $\alpha = \pi - \gamma$ as proposed by JeanMarie; allow me to explain the coordinates of $a$ using 1 variable only: </p> <p>Let slope of $L_1 = \tan \theta_0$. We have</p> <p>$$ \begin{align} a&amp;= (L_1 \cos \theta_0, L_1 \sin\theta_0) \\ b&amp;= (L_3 \cos \theta, L_3 \sin\theta) \end{align} $$</p> <p>Finding the slope of $ab$,</p> <p>$ \tan \alpha = \frac{ L_3 \sin\theta - L_1 \sin\theta_0 }{ L_3 \cos\theta - L_1 \cos\theta_0 } $ -- (*)</p> <p>Hence $ \cos^2 \alpha = \frac{(L_3 \cos\theta - L_1 \cos\theta_0)^2 }{(L_3 \sin\theta - L_1 \sin\theta_0 )^2+(L_3 \cos\theta - L_1 \cos\theta_0 )^2 } = \frac{ (L_3 \cos\theta - L_1 \cos\theta_0)^2 }{ L_1^2 + L_3^2 - 2L_1 L_3 \cos(\theta_0-\theta) } $</p> <p>Differentiate both sides of (*),</p> <p>$ \sec^2 \alpha \text d \alpha = \frac{ L_3^2 - L_1L_3 \cos(\theta_0-\theta) }{(L_3 \cos\theta - L_1 \cos\theta_0)^2 } \text d \theta $</p> <p>$ \frac{\text d \alpha }{\text d \theta } = \frac{ L_3^2 - L_1L_3 \cos(\theta_0-\theta) }{(L_3 \cos\theta - L_1 \cos\theta_0)^2 } \cos^2 \alpha = \frac{ L_3^2 - L_1L_3 \cos(\theta_0-\theta) }{(L_3 \cos\theta - L_1 \cos\theta_0)^2 } \frac{ (L_3 \cos\theta - L_1 \cos\theta_0)^2 }{ L_1^2 + L_3^2 - 2L_1 L_3 \cos(\theta_0-\theta) } = \frac{L_3^2 - L_1L_3 \cos(\theta_0-\theta) }{L_1^2 + L_3^2 - 2L_1 L_3 \cos(\theta_0-\theta) } $</p> <p>So from $\alpha = \pi - \gamma$, $\frac{\text d \gamma }{\text d \theta } =- \frac{\text d \alpha }{\text d \theta } $</p> <p>Hence</p> <p>$$\frac{\text d \gamma }{\text d \theta } = \frac{L_1L_3 \cos(\theta_0-\theta)-L_3^2 }{L_1^2 + L_3^2 - 2L_1 L_3 \cos(\theta_0-\theta) } $$</p> <p>You can see the value as per asked in the question varies with the size of $\theta$, so there is no definite value. </p> <p><strong>Edit</strong></p> <p>By cosine law, $L_1^2 + L_3^2 - 2L_1 L_3 \cos(\theta_0-\theta) = L_2^2$</p> <p>I am reluctant in putting $L_2$ in the expression as it is yet another variable. 
</p> <p>My objective was to show that $\text d \gamma / \text d \theta$ is not a constant. </p> <p>Introducing $L_2$ might be confusing as it appears to be a constant like $L_1$ or $L_3$, which isn't in reality.</p>
42,881
<p>In short, my question is the same as my previous one except that everything is now wrapped up in a module.</p> <p>The relevant code I'm working with is:</p> <pre><code>getinter[a_, b_, u0_, k_, m_, hbar_, Nu_, Np_, up_] := Module[{ekp, ms, LUs, env, eenv, envpart, f, kppart, g, approx, approx1, papprox, approx2, hard, ereal, psiparts, real, real1, realp, real2, er, inter}, ekp = energies[a, b, u0, k, 1, m, hbar, 0.001, 10^-15][[1]]; ms = effmass[a, b, u0, k, 0.2, 1, m, hbar, 0.001, 10^-15]; LUs = BuildLUs[a, b, Nu, Np, u0, up]; env[x_] := Abs[Det[envfunc[ms, hbar, x, up, Nu, Np, a, b]]]^2; eenv = zeros[env, up + 0.000001, 0.01, 0, 1, 10^-15, 0.02][[1]]; envpart = getpsipieces[LUs[[1]], LUs[[2]], eenv, ms, hbar]; f[x_] := Evaluate@Piecewise[envpart]; Return[f[0.231]]; ] </code></pre> <p>envpart is a properly formatted object for piecewise and the variable it uses is named x. When I evaluate this, it's returning the structure I put into piecewise but now formatted as a case structure as opposed to an array.</p> <p>What I'd like it to do is to return f[0.231]='some number' like you would expect. Also, f[x_]:= Evaluate@Piecewise[envpart]; works fine if I take it out of the module.</p> <p>Thanks for the help. Also, I get the sense that the solution to this problem, if I fully understand it, will allow me to work out a lot of other problems I encounter, so I'd really appreciate any suggestions on where I can learn about the issues at play.</p> <p>Edited for slightly less awful formatting.</p>
freddieknets
631
<p>You are having a scoping problem here, because inside a <code>Module</code> your variable x will be renamed internally.</p> <p>Just replace your temporary definition of <code>f</code> with a <code>ReplaceAll</code>. I don't have your definitions for <code>energies</code>, <code>effmass</code>, etc, so I'm not 100% sure but normally this should work:</p> <pre><code>getinter[a_, b_, u0_, k_, m_, hbar_, Nu_, Np_, up_] := Module[{ekp, ms, LUs, env, eenv, envpart, f, kppart, g, approx, approx1, papprox, approx2, hard, ereal, psiparts, real, real1, realp, real2, er, inter}, ekp = energies[a, b, u0, k, 1, m, hbar, 0.001, 10^-15][[1]]; ms = effmass[a, b, u0, k, 0.2, 1, m, hbar, 0.001, 10^-15]; LUs = BuildLUs[a, b, Nu, Np, u0, up]; env[x_] := Abs[Det[envfunc[ms, hbar, x, up, Nu, Np, a, b]]]^2; eenv = zeros[env, up + 0.000001, 0.01, 0, 1, 10^-15, 0.02][[1]]; envpart = getpsipieces[LUs[[1]], LUs[[2]], eenv, ms, hbar]; Piecewise[envpart] /. x -&gt; 0.231 ] </code></pre> <p>You don't need to use a <code>Return</code> in the end of a <code>Module</code>, just leave out the last <code>;</code></p>
3,991,572
<p>I need to solve the following problem: <span class="math-container">$\lim_{x\to 3}(x-3) \cot{\pi x}$</span>. Can anyone give me a hint? I have no idea.</p>
Parcly Taxel
357,390
<p>There is no need to cancel terms across both series because of the <span class="math-container">$(3/4)^n$</span>. Evaluating both series individually (using the Maclaurin series of <span class="math-container">$\log(1-x)$</span>) gives <span class="math-container">$$S=\frac{16}{\log4}-\left(-\frac{33}2+\frac{16}{\log4}\right)=\frac{33}2$$</span></p>
3,991,572
<p>I need to solve the following problem: <span class="math-container">$\lim_{x\to 3}(x-3) \cot{\pi x}$</span>. Can anyone give me a hint? I have no idea.</p>
lab bhattacharjee
33,337
<p>Hint:</p> <p>Let <span class="math-container">$\dfrac{7n+32}{n(n+2)}\cdot\left(\dfrac34\right)^n=f(n)-f(n-2)$</span></p> <p>where <span class="math-container">$f(m)=\dfrac a{m+2}\cdot\left(\dfrac34\right)^m$</span></p> <p>so that for <span class="math-container">$r\ge4,$</span> <span class="math-container">$$\sum_{n=2}^r\dfrac{7n+32}{n(n+2)}\cdot\left(\dfrac34\right)^n=\sum_{n=2}^r(f(n)-f(n-2))=f(r)+f(r-1)-f(1)-f(0)$$</span></p> <p><span class="math-container">$$f(n)-f(n-2)=\left(\dfrac34\right)^n\left(\dfrac a{n+2}-\dfrac an\cdot\dfrac{16}9 \right)$$</span></p> <p>We need <span class="math-container">$\dfrac{7n+32}{n(n+2)}=\dfrac a{n+2}-\dfrac an\cdot\dfrac{16}9=-a\cdot\dfrac{7n+32}{9n(n+2)}\implies a=-9$</span></p>
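With $a=-9$, the telescoping identity and the partial-sum formula in the hint check out in exact arithmetic (a sketch):

```python
from fractions import Fraction

def f(m):
    # f(m) = a/(m+2) * (3/4)^m with a = -9
    return Fraction(-9, m + 2) * Fraction(3, 4)**m

def term(n):
    return Fraction(7 * n + 32, n * (n + 2)) * Fraction(3, 4)**n

# term(n) = f(n) - f(n-2), so the sum from 2 to r collapses
for n in range(2, 30):
    assert term(n) == f(n) - f(n - 2)

for r in range(4, 30):
    assert sum(term(n) for n in range(2, r + 1)) == f(r) + f(r - 1) - f(1) - f(0)
```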
3,812,432
<p>For <span class="math-container">$a,b,c&gt;0.$</span> Prove<span class="math-container">$:$</span> <span class="math-container">$$4\Big(\dfrac{1}{a^2}+\dfrac{1}{b^2}+\dfrac{1}{c^2} \Big)+\dfrac{81}{(a+b+c)^2}\geqslant{\dfrac {7(a+b+c)}{abc}}$$</span></p> <p>My proof uses SOS<span class="math-container">$:$</span></p> <p><span class="math-container">$${c}^{2}{a}^{2} {b}^{2}\Big( \sum a\Big)^2 \sum a^2 \Big\{ 4\Big(\dfrac{1}{a^2}+\dfrac{1}{b^2}+\dfrac{1}{c^2} \Big)+\dfrac{81}{(a+b+c)^2}-{\dfrac {7(a+b+c)}{abc}} \Big\}$$</span> <span class="math-container">$$=\dfrac{1}{2} \sum {a}^{2}{b}^{2} \left( {a}^{2}+{b}^{2}-2\,{c}^{2} +5bc-10ab+5\, ac \right) ^{2} +\dfrac{1}{2} \prod (a-b)^2 \left( 7\sum a^2 +50\sum bc \right) \geqslant 0.$$</span></p> <p>From this we see that the inequality is true for all <span class="math-container">$a,b,c \in \mathbb{R};ab+bc+ca\geqslant 0.$</span></p> <p>But we also have this inequality for <span class="math-container">$a,b,c \in \mathbb{R}$</span>, which I verified with Maple.</p> <p>I tried and found a proof, but I'm not sure<span class="math-container">$:$</span></p> <p>If we replace <span class="math-container">$(a,b,c)$</span> by <span class="math-container">$(-a,-b,-c)$</span> we get the same inequality.</p> <p>So we may assume <span class="math-container">$a+b+c\geqslant 0$</span> (because if <span class="math-container">$a+b+c&lt;0$</span> we can let <span class="math-container">$a=-x,b=-y,c=-z$</span> where <span class="math-container">$x+y+z \geqslant 0$</span> and the inequality is the same!)</p> <p>Let <span class="math-container">$a+b+c=1,ab+bc+ca=\dfrac{1-t^2}{3} \quad (t\geqslant 0), r=abc.$</span> Need to prove<span class="math-container">$:$</span></p> <p><span class="math-container">$$f(r) =81\,{r}^{2}-15\,r+\dfrac{4}{9} \left( t-1 \right) ^{2} \left( t+1 \right) ^{2 }\geqslant 0.$$</span></p> <p>It's easy to see that when <span class="math-container">$r$</span> increases, <span class="math-container">$f(r)$</span> decreases. 
Since <span class="math-container">$r\leqslant \dfrac{1}{27} \left( 2\,t+1 \right) \left( t-1\right) ^{2} \quad$</span>(see <a href="https://artofproblemsolving.com/community/c1101515h2255918_helpful_lemma_for_the_homogeneous_inequality" rel="nofollow noreferrer">here</a>). We get<span class="math-container">$:$</span></p> <p><span class="math-container">$$f(r)\geqslant f\Big(\dfrac{1}{27} \left( 2\,t+1 \right) \left( t-1\right) ^{2}\Big)=\dfrac{1}{9} {t}^{2} \left( 2\,t-1 \right) ^{2} \left( t-1 \right) ^{2} \geqslant 0.$$</span></p> <p>Done.</p> <p>Could you check it for me? Who have a proof for <span class="math-container">$a,b,c \in \mathbb{R}$</span>?</p>
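A random spot-check of the original inequality over positive triples (a sketch; supporting evidence only, not a proof):

```python
import random

random.seed(1)

def gap(a, b, c):
    # LHS minus RHS of the inequality; nonnegative iff the inequality holds
    s = a + b + c
    return 4 * (1/a**2 + 1/b**2 + 1/c**2) + 81 / s**2 - 7 * s / (a * b * c)

assert gap(1, 1, 1) == 0          # equality at a = b = c
for _ in range(10_000):
    a, b, c = (random.uniform(0.01, 10) for _ in range(3))
    assert gap(a, b, c) > -1e-7   # holds across the random sample
```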
tkf
117,974
<p>Given a string that avoids <span class="math-container">$000,111,222$</span> add <span class="math-container">$i$</span> modulo <span class="math-container">$3$</span> to the <span class="math-container">$i$</span>'th digit to obtain a string that avoids <span class="math-container">$012,120,201$</span>. This is clearly a bijection.</p> <p>Let's consider all legitimate strings of length <span class="math-container">$3$</span> (i.e. those that avoid <span class="math-container">$000,111,222$</span>):</p> <p><span class="math-container">$011, 022, 100,122,200,211$</span></p> <p>and</p> <p><span class="math-container">$001,010,020,002,012,021,101,110,120,102,112,121,201,210,220,202,212,221$</span>.</p> <p>Notice I have written them in two lists: the first <span class="math-container">$6$</span> end <span class="math-container">$aa$</span>, whilst the other <span class="math-container">$18$</span> end <span class="math-container">$ab$</span>. In my notation <span class="math-container">$a_3=18, b_3=6$</span>. Here <span class="math-container">$a_n$</span> is the number of legitimate strings which end <span class="math-container">$ab$</span> and <span class="math-container">$b_n$</span> is the number of legitimate strings which end <span class="math-container">$aa$</span>.</p> <p>Consider a string which ends <span class="math-container">$aa$</span> such as <span class="math-container">$011$</span>. What can I add to the end, so that it is still legitimate? I cannot add <span class="math-container">$1$</span>, but if I add <span class="math-container">$0$</span> or <span class="math-container">$2$</span> I get a string of length <span class="math-container">$4$</span> and ending <span class="math-container">$ab$</span>: <span class="math-container">$0110, 0112$</span>.</p> <p>What about a string ending <span class="math-container">$ab$</span>? Consider <span class="math-container">$001$</span>. 
If I add <span class="math-container">$0$</span> or <span class="math-container">$2$</span> then I get a string of length <span class="math-container">$4$</span> ending <span class="math-container">$ab$</span>: <span class="math-container">$0010,0012$</span>. On the other hand if I add a <span class="math-container">$1$</span> I get <span class="math-container">$0011$</span> which ends <span class="math-container">$aa$</span>.</p> <p>This is summed up in the diagram:</p> <p><a href="https://i.stack.imgur.com/J5Vgp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/J5Vgp.png" alt="enter image description here" /></a></p> <p>So for each string of length <span class="math-container">$n$</span> ending <span class="math-container">$ab$</span>, we can get <span class="math-container">$2$</span> strings of length <span class="math-container">$n+1$</span> of type <span class="math-container">$ab$</span> by adding a number at the end, whilst for each string of length <span class="math-container">$n$</span> ending <span class="math-container">$aa$</span> we can get two strings of length <span class="math-container">$n+1$</span> ending <span class="math-container">$ab$</span>, by adding a number at the end. Thus for <span class="math-container">$n\geq 3$</span> we have: <span class="math-container">$$a_{n+1}=2a_n+2b_n.$$</span></p> <p>Similarly, for each string of length <span class="math-container">$n$</span> ending <span class="math-container">$ab$</span> we get one string of length <span class="math-container">$n+1$</span> ending <span class="math-container">$aa$</span>. You cannot add any number to a string of length <span class="math-container">$n$</span> ending <span class="math-container">$aa$</span>, to get a string of length <span class="math-container">$n+1$</span> ending <span class="math-container">$aa$</span> (as <span class="math-container">$aaa$</span> not allowed). 
Thus for <span class="math-container">$n\geq 3$</span> we have: <span class="math-container">$$b_{n+1}=a_n$$</span></p> <p>Combining the two equations, for <span class="math-container">$n\geq4$</span> we get:<span class="math-container">$$a_{n+1}=2a_n+2a_{n-1}$$</span></p> <p>The total number of legitimate strings of length <span class="math-container">$n$</span> is <span class="math-container">$$a_n+b_n=a_n+a_{n-1}.$$</span></p> <p>We know <span class="math-container">$a_2=6,a_3=18$</span>. From the recursion we have <span class="math-container">$a_4=48, a_5=132, a_6=360$</span>.</p> <p>Thus there are <span class="math-container">$132+360=492$</span> legitimate strings.</p>
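The final count of $492$ agrees with a brute-force enumeration of all $3^6=729$ ternary strings (a sketch, counting strings with no three consecutive equal digits):

```python
from itertools import product

def legitimate(s):
    # avoid 000, 111, 222 as substrings, i.e. no three consecutive equal digits
    return all(not (s[i] == s[i + 1] == s[i + 2]) for i in range(len(s) - 2))

count = sum(legitimate(s) for s in product("012", repeat=6))
print(count)  # 492
```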
179,886
<p>I have series of values which, by visual inspection, appear to be sums of certain constants, not divisible by each other, with rational weights. I want to convert these sums to vectors of weights for a specific basis vector.</p> <p>I have an initial solution based on <code>FindInstance</code>, which works reasonably but I think is not necessarily elegant:</p> <pre><code>ClearAll[splitSumCoefficients]; splitSumCoefficients[sum_, basis_] := With[{cvals = c /@ basis}, cvals/d /. # &amp; /@ FindInstance[ d sum == basis.cvals &amp;&amp; d != 0, {Sequence @@ cvals, d}, Integers]]; </code></pre> <p>It works fine for a case where vector values could actually be extracted using expression rewriting:</p> <pre><code># -&gt; splitSumCoefficients[#, {1, 1/E, E}] &amp; /@ Table[TrigExpand@ SeriesCoefficient[ Sin[x] + Sqrt[1 - x^2] Sinh[Sqrt[1 - x^2]]/(x + 1), {x, 0, n}], {n, 0, 5}] </code></pre> <blockquote> <p>$$\begin{array}{c} \frac{e}{2}-\frac{1}{2 e}\to \left( \begin{array}{ccc} 0 &amp; -\frac{1}{2} &amp; \frac{1}{2} \\ \end{array} \right) \\ 1+\frac{1}{2 e}-\frac{e}{2}\to \left( \begin{array}{ccc} 1 &amp; \frac{1}{2} &amp; -\frac{1}{2} \\ \end{array} \right) \\ -\frac{1}{2 e}\to \left( \begin{array}{ccc} 0 &amp; -\frac{1}{2} &amp; 0 \\ \end{array} \right) \\ \frac{1}{2 e}-\frac{1}{6}\to \left( \begin{array}{ccc} -\frac{1}{6} &amp; \frac{1}{2} &amp; 0 \\ \end{array} \right) \\ \frac{e}{16}-\frac{7}{16 e}\to \left( \begin{array}{ccc} 0 &amp; -\frac{7}{16} &amp; \frac{1}{16} \\ \end{array} \right) \\ \frac{1}{120}+\frac{7}{16 e}-\frac{e}{16}\to \left( \begin{array}{ccc} \frac{1}{120} &amp; \frac{7}{16} &amp; -\frac{1}{16} \\ \end{array} \right) \\ \end{array}$$</p> </blockquote> <p>It also works in a case where expression rewriting would already be a little more tricky:</p> <pre><code># -&gt; splitSumCoefficients[#, {1, Sinh[1], Cosh[1]}] &amp; /@ Table[SeriesCoefficient[ Sin[x] + Sqrt[1 - x^2] Sinh[Sqrt[1 - x^2]]/(x + 1), {x, 0, n}], {n, 0, 5}] </code></pre> <blockquote> 
<p>$$\begin{array}{c} \sinh (1)\to \left( \begin{array}{ccc} 0 &amp; 1 &amp; 0 \\ \end{array} \right) \\ 1-\sinh (1)\to \left( \begin{array}{ccc} 1 &amp; -1 &amp; 0 \\ \end{array} \right) \\ \frac{1}{2} (\sinh (1)-\cosh (1))\to \left( \begin{array}{ccc} 0 &amp; \frac{1}{2} &amp; -\frac{1}{2} \\ \end{array} \right) \\ \frac{1}{6} (-1-3 \sinh (1)+3 \cosh (1))\to \left( \begin{array}{ccc} -\frac{1}{6} &amp; -\frac{1}{2} &amp; \frac{1}{2} \\ \end{array} \right) \\ \frac{\sinh (1)}{2}-\frac{3 \cosh (1)}{8}\to \left( \begin{array}{ccc} 0 &amp; \frac{1}{2} &amp; -\frac{3}{8} \\ \end{array} \right) \\ \frac{1}{120} (1-60 \sinh (1)+45 \cosh (1))\to \left( \begin{array}{ccc} \frac{1}{120} &amp; -\frac{1}{2} &amp; \frac{3}{8} \\ \end{array} \right) \\ \end{array}$$</p> </blockquote> <p>(This code can also convert above expressions between $1/sinh/cosh$ and $1/\frac{1}{e}/e$ basis automatically.)</p> <p>Would there be a more practical solution than <code>FindInstance</code> for this problem?</p> <p><strong>EDIT</strong>:</p> <p>A failing example:</p> <pre><code>splitSumCoefficients[#, {1, Sinh[1], Cosh[1]}] &amp;@ SeriesCoefficient[ Sin[x] + Sqrt[1 - x^2] Sinh[Sqrt[1 - x^2]]/(x + 1), {x, 0, 143}] </code></pre> <blockquote> <p>{}</p> </blockquote> <p>(That is, no solutions from <code>FindInstance</code>.)</p>
kirma
3,056
<p>This is really an extended comment to @DanielLichtblau's answer, touching the subject of <code>WorkingPrecision</code>. I arrived at this completely separately...</p> <p>Essentially the improvement here is calculating necessary <code>$MaxExtraPrecision</code> on basis of input:</p> <pre><code>ClearAll[splitSumCoefficients]; splitSumCoefficients[sum_, basis_] := Block[{$MaxExtraPrecision = (Length[basis] + 1) Ceiling[ Length[basis] + Log10[Max[1, #] &amp;@ Max@Cases[Expand@Simplify@sum, x_Integer | x_Rational :&gt; Max[Abs@Numerator@x, Denominator@x], Infinity]/ Min@Abs@basis]]}, Quiet@FindIntegerNullVector[ Append[-basis, sum]] /. {_FindIntegerNullVector -&gt; {}, {most__: 1, last_} :&gt; {{most}/last}}]; </code></pre> <p>(Returning <code>{}</code> on failure, and handling special cases gracefully like one -element <code>basis</code> and no-integer/rational <code>sum</code> too.)</p> <p>Testing it:</p> <pre><code>FullSimplify[# == {1, Sinh[1], Cosh[1]}.First@ splitSumCoefficients[#, {1, Sinh[1], Cosh[1]}]] &amp;@ SeriesCoefficient[ Sin[x] + Sqrt[1 - x^2] Sinh[Sqrt[1 - x^2]]/(x + 1), {x, 0, 143}] </code></pre> <blockquote> <p>True</p> </blockquote> <p>Examples of use:</p> <pre><code>splitSumCoefficients[5/3, {1}] </code></pre> <blockquote> <p>{{5/3}}</p> </blockquote> <pre><code>splitSumCoefficients[1 + 5 E, {1, E}] </code></pre> <blockquote> <p>{{1, 5}}</p> </blockquote> <pre><code>splitSumCoefficients[E, {1, Sqrt[2]}] </code></pre> <blockquote> <p>{}</p> </blockquote> <pre><code>splitSumCoefficients[E, {Sinh[1], Cosh[1]}] </code></pre> <blockquote> <p>{{1, 1}}</p> </blockquote> <p>($e=\sinh (1)+\cosh (1)$)</p>
2,303,795
<p>So I know that by Euler's homogeneous function theorem $m$ is a positive number, but why is it an integer? And how to prove that $f$ is polynomial of degree $m$?</p>
RRL
148,510
<p>Note that</p> <p>$$\int_0^x \sin^2(t +1/n) \, dt = \frac{x}{2} - \frac{1}{4}\sin(2x+2/n) + \frac{1}{4} \sin (2/n),$$</p> <p>and</p> <p>$$\int_0^x \sin^2(t) \, dt = \frac{x}{2} - \frac{1}{4}\sin(2x) .$$</p> <p>Hence,</p> <p>$$|f_n(x) - f(x)| \leqslant \frac{1}{4}\left|\sin(2x+2/n) - \sin(2x) \right|+ \frac{1}{4} |\sin(2/n)| \leqslant \frac{1}{2n} + \frac{1}{4}|\sin(2/n)|$$</p> <p>We have uniform convergence on $(0,\infty)$.</p>
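The two antiderivative formulas and the resulting uniform bound can be spot-checked numerically (a sketch, sampling $x$ over $[0,1000]$):

```python
from math import sin

def f_n(x, n):
    # closed form of ∫_0^x sin²(t + 1/n) dt
    return x / 2 - sin(2 * x + 2 / n) / 4 + sin(2 / n) / 4

def f(x):
    # closed form of ∫_0^x sin²(t) dt
    return x / 2 - sin(2 * x) / 4

xs = [i * 0.01 for i in range(100_000)]
for n in (1, 5, 50, 500):
    bound = 1 / (2 * n) + abs(sin(2 / n)) / 4
    worst = max(abs(f_n(x, n) - f(x)) for x in xs)
    assert worst <= bound + 1e-12   # uniform in x, shrinking as n grows
```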
3,131,516
<p>I would like to know if this differential equation can be transformed into the hypergeometric differential equation:</p> <p><span class="math-container">$ 4 (u-1) u \left((u-1) u \varphi_1''(u)+(u-2) \varphi_1'(u)\right)+\varphi_1(u) \left((u-1) u \omega ^2-u (u+4)+8\right)=0$</span></p>
Abhinav
646,522
<p>Being a high school student, I can help you out. <span class="math-container">$$2x^2+3x-2$$</span> <span class="math-container">$$=2x^2+4x-x-2$$</span> <span class="math-container">$$=2x(x+2)-(x+2)$$</span> <span class="math-container">$$=(2x-1)(x+2)$$</span></p>
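A quick check that the factorization reproduces the original quadratic (a sketch):

```python
# the expanded factorization matches 2x² + 3x − 2 on a range of integers,
# and the roots of the two linear factors are roots of the quadratic
assert all((2*x - 1) * (x + 2) == 2*x**2 + 3*x - 2 for x in range(-100, 101))
for r in (0.5, -2.0):
    assert 2*r**2 + 3*r - 2 == 0
```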
2,885,754
<p>I just sat a real analysis exam and this was a question in it that I couldn't answer...</p> <p>Prove that $\left|e^\frac{-x^2}{2t}-e^\frac{-y^2}{2t}\right| \leq \frac{|x-y|}{t}$ for $x,y \in [-1,1] ,t&gt;0$</p> <p>I ended up trying to set $f(x,y)=e^\frac{-x^2}{2t}-e^\frac{-y^2}{2t}$, then attempted trying $f(-1,-1) =f(1,1)$ but never ended up getting anywhere.</p> <p>Any tips on how this is actually solved? I've never seen an inequality problem like this before.</p>
tmaths
584,390
<p>You can write when $|x| \leqslant |y|$ and using triangle inequality for integrals :</p> <p>\begin{align*} \left| e^{-x^2/2t} - e^{-y^2/2t} \right| = \left| \int_{x}^{y}- \frac{u}{t} e^{-u^2/2t} du\right| &amp;\leqslant \int_{|x|}^{|y|} \frac{|u|}{t} du \\ &amp;\leqslant \int_{|x|}^{|y|} \frac{u}{t} du \\ &amp;= \frac{1}{2t}(y^2 - x^2)\\ &amp;= \frac{1}{2t}(|y| - |x|) (|x|+|y|) \\ &amp; \leqslant \frac{|y-x|}{t} \end{align*}</p> <p>You can conclude with an argument of symmetry.</p>
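A random spot-check of the target inequality (a sketch; it samples $x,y\in[-1,1]$ and a range of $t>0$, and is evidence rather than proof):

```python
import random
from math import exp

random.seed(2)
for _ in range(100_000):
    x, y = random.uniform(-1, 1), random.uniform(-1, 1)
    t = random.uniform(1e-3, 100.0)
    lhs = abs(exp(-x * x / (2 * t)) - exp(-y * y / (2 * t)))
    # the claimed bound |x - y| / t, with a small float tolerance
    assert lhs <= abs(x - y) / t + 1e-12
```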
2,885,754
<p>I just sat a real analysis exam and this was a question in it that I couldn't answer...</p> <p>Prove that $\left|e^\frac{-x^2}{2t}-e^\frac{-y^2}{2t}\right| \leq \frac{|x-y|}{t}$ for $x,y \in [-1,1] ,t&gt;0$</p> <p>I ended up trying to set $f(x,y)=e^\frac{-x^2}{2t}-e^\frac{-y^2}{2t}$, then attempted trying $f(-1,-1) =f(1,1)$ but never ended up getting anywhere.</p> <p>Any tips on how this is actually solved? I've never seen an inequality problem like this before.</p>
Rigel
11,776
<p>Observe that, by the mean value theorem, $$ |e^a - e^b| \leq |a-b|, \qquad \forall a,b \leq 0. $$ Hence, if $x,y\in [-1,1]$, $$ \left| e^{-x^2/2t} - e^{-y^2/2t} \right| \leq \left| {x^2/2t} - {y^2/2t} \right| \leq |x+y| \, \frac{|x-y|}{2t} \leq \frac{|x-y|}{t}\,. $$</p>
2,716,585
<p>Evaluate $\sum_{k=1}^{n-3}\frac{(k+3)!}{(k-1)!}$. </p> <p>My strategy is defining a generating function, $$g(x) = \frac{1}{1-x} = 1 + x + x^2...$$ then shifting it so that we get, $$f(x)=x^4g(x) = \frac{x^4}{1-x}= x^4+x^5+...$$ and then taking the 4th derivative of f(x). Calculating the fourth derivative is going to be a little tedious but it won't be as bad compared to the partial fraction decomposition I will end up doing. What is a better way to evaluate the sum using generating functions?</p>
zwim
399,263
<p>If you consider $U_4(k)= k(k+1)(k+2)(k+3)(k+4)$</p> <p>Then $U_4(k)-U_4(k-1)=k(k+1)(k+2)(k+3)\bigg((k+4)-(k-1)\bigg)=5U_3(k)$</p> <p>Thus you get a telescoping series and</p> <p>$$\sum\limits_{k=1}^{n-3}k(k+1)(k+2)(k+3)=\dfrac 15\bigg(U_4(n-3)-U_4(0)\bigg)=\dfrac 15U_4(n-3)\\=\dfrac{(n-3)(n-2)(n-1)(n)(n+1)}5$$</p>
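The telescoped closed form can be confirmed directly, using $\frac{(k+3)!}{(k-1)!}=k(k+1)(k+2)(k+3)$ (a sketch):

```python
from math import factorial

def closed(n):
    # (n-3)(n-2)(n-1)n(n+1)/5; five consecutive integers, so division is exact
    return (n - 3) * (n - 2) * (n - 1) * n * (n + 1) // 5

for n in range(4, 60):
    s = sum(factorial(k + 3) // factorial(k - 1) for k in range(1, n - 2))
    assert s == closed(n)

print(closed(4))  # 24, the single k = 1 term
```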
2,716,585
<p>Evaluate $\sum_{k=1}^{n-3}\frac{(k+3)!}{(k-1)!}$. </p> <p>My strategy is defining a generating function, $$g(x) = \frac{1}{1-x} = 1 + x + x^2...$$ then shifting it so that we get, $$f(x)=x^4g(x) = \frac{x^4}{1-x}= x^4+x^5+...$$ and then taking the 4th derivative of f(x). Calculating the fourth derivative is going to be a little tedious but it won't be as bad compared to the partial fraction decomposition I will end up doing. What is a better way to evaluate the sum using generating functions?</p>
N. Shales
259,568
<p>When you differentiate $g(x)$ $4$ times all powers of $x$ initially less than $4$ disappear anyway. </p> <p>Then:</p> <p>$$g^{(4)}(x)=4!(1-x)^{-5}=\sum_{k\ge 0}\frac{(k+4)!}{k!}x^k$$</p> <p>Operating with $(1-x)^{-1}=1+x+x^2+x^3+\cdots$ on both sides gives a new expansion with terms which are partial sums of the coefficients of $x^k$ in the previous expansion:</p> <p>$$(1-x)^{-1}g^{(4)}(x)=\sum_{r\ge 0}\left(\sum_{k=0}^{r}\frac{(k+4)!}{k!}\right)x^r$$</p> <p>so</p> <p>$$4!(1-x)^{-6}=\sum_{r\ge 0}\left(\sum_{k=0}^{r}\frac{(k+4)!}{k!}\right)x^r$$</p> <p>$$\implies [x^r]4!(1-x)^{-6}=4!\binom{r+5}{5}=\sum_{k=0}^{r}\frac{(k+4)!}{k!}$$</p> <p>but $r=n-4$ to match up with $n$ in the question, so</p> <blockquote> <p>$$\sum_{k=0}^{n-4}\frac{(k+4)!}{k!}=4!\binom{n+1}{5}\tag{Answer}$$</p> </blockquote> <p>This summation is the same as the one in the question with only a shift in summation index.</p> <p>Of course you may notice that this is just <a href="https://en.m.wikipedia.org/wiki/Hockey-stick_identity" rel="nofollow noreferrer">Pascal's hockey stick rule</a>.</p>
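The final identity can be confirmed against direct summation (a sketch):

```python
from math import comb, factorial

# sum_{k=0}^{n-4} (k+4)!/k! should equal 4! * C(n+1, 5)
for n in range(4, 60):
    s = sum(factorial(k + 4) // factorial(k) for k in range(0, n - 3))
    assert s == factorial(4) * comb(n + 1, 5)
```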
69,542
<p>My first question here would fall into the 'ask Johnson' category if there was one (no pressure Bill). I'm interested in constructing a uniformly convex Banach space with conditional structure without using interpolation. The constructions of Ferenczi and Maurey-Rosenthal both use interpolation. </p> <p>Using existing methods for constructing spaces with conditional structure I think it is possible to construct a hereditarily indecomposable space whose natural basis satisfies a lower $\ell_2$ estimate on any $n$ disjointly supported block vectors supported after the $n^{th}$ position on the basis and an upper $\ell_2$ estimate on all finite block sequences. The space $X$ is sure to be reflexive and probably doesn't contain $\ell_\infty$ finitely represented. </p> <p>I would like to have some way of showing that $X$ is uniformly convex and this is where I'm stuck. Perhaps one could show that $\ell_1$ is not finitely represented in $X$ but as far as I can see this is not good enough (or is it?). </p> <p>My question: If a space is reflexive and does not contain $\ell_1$ finitely represented is it necessarily uniformly convex? </p> <p>I suspect the answer is no but I don't have a counterexample. </p> <p>Another question: Are there any known conditions on a basis, which (1) do not imply the basis is unconditional and (2) do imply the space is uniformly convex? </p>
Bill Johnson
2,554
<p>Kevin, there are non reflexive spaces with non trivial type--even of type 2. James constructed the first one; his argument is very complicated. Later Pisier-Xu did it much more simply using interpolation between $\ell_1$ and $\ell_\infty$, but using the universal non weakly compact operator instead of the formal identity between the two spaces. See</p> <p>Random series in the real interpolation spaces between the spaces vp. Geometrical aspects of functional analysis (1985/86), 185–209, Lecture Notes in Math., 1267, Springer, Berlin, 1987.</p> <p>For a reflexive space with non-trivial type that is not superreflexive take the $\ell_2$ sum of all finite dimensional subspaces of the Pisier-Xu space.</p>
249,074
<p>I am trying to solve the following problem:</p> <blockquote> <p>Show that a unit-speed curve $\gamma$ with nowhere vanishing curvature is a geodesic on the ruled surface $\sigma(u,v)=\gamma(u)+v\delta(u)$, where $\delta$ is a smooth function of $u$, if and only if $\delta$ is perpendicular to the principal normal of $\gamma$ at $\gamma(u)$ for all values of $u$. </p> </blockquote> <p>Edit (rather large): My professor wrote the question down wrong. I fixed it on here. Sadly, even with it right, I can't get either direction.</p> <p>Any help would be appreciated. Thanks!</p>
robjohn
13,854
<p>A unit-speed curve $\gamma(u)$ (i.e. parametrized by arc-length) is a geodesic in a surface $S$ iff $\gamma''(u)$ is perpendicular to $S$.</p> <p>The normal to $\sigma(u,v)=\gamma(u)+v\delta(u)$ is parallel to $$ \frac{\partial\sigma}{\partial u}\times\frac{\partial\sigma}{\partial v} =(\gamma'+v\delta')\times\delta\tag{1} $$ On $\gamma$, $v=0$. Thus, $\gamma$ is a geodesic iff $\gamma''\times(\gamma'\times\delta)=0$. Using <a href="http://en.wikipedia.org/wiki/Triple_product#Vector_triple_product" rel="nofollow">Lagrange's formula</a> and the fact that $\gamma''\cdot\gamma'=0$, we get $$ \gamma''\cdot\delta\gamma'-\gamma''\cdot\gamma'\delta=0 \Leftrightarrow\gamma''\cdot\delta=0\tag{2} $$ Thus, $\gamma$ is a geodesic iff $\gamma''\cdot\delta=0$, where $\gamma''$ is parallel to the principal normal since $\gamma$ is unit-speed.</p>
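As a sanity check of criterion (2), here is a short Python sketch with a concrete example of my own choosing (not part of the answer): $\gamma$ is the unit circle traversed at unit speed, whose principal normal is $(-\cos u,-\sin u,0)$, and $\delta=(0,0,1)$ is perpendicular to it; the ruled surface is then a cylinder, on which the horizontal circle should indeed be a geodesic.

```python
# Concrete sanity check (my own example): gamma = unit-speed unit circle,
# delta = (0, 0, 1), perpendicular to the principal normal (-cos u, -sin u, 0).
# The ruled surface is a cylinder; gamma should be a geodesic, i.e. gamma''
# is parallel to the surface normal gamma' x delta at v = 0.
import math

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

ok = True
for k in range(12):
    u = 2 * math.pi * k / 12
    g1 = (-math.sin(u), math.cos(u), 0.0)    # gamma'(u)
    g2 = (-math.cos(u), -math.sin(u), 0.0)   # gamma''(u)
    delta = (0.0, 0.0, 1.0)
    normal = cross(g1, delta)                # surface normal along gamma
    if max(abs(t) for t in cross(g2, normal)) > 1e-12:
        ok = False                           # gamma'' not parallel to normal
    if abs(sum(p * q for p, q in zip(g2, delta))) > 1e-12:
        ok = False                           # gamma''.delta != 0
print(ok)  # True
```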
1,512,515
<p>I have tested all the primes up to 50,000,000 and did not find a single prime which satisfies the condition "sum of digits of prime number written in base7 divides by 3". E.g. </p> <ul> <li>13 (Base10) = 16 (Base7) --> 7 (sum of digits in base 7)</li> <li>1021 (Base10) = 2656 (Base7) --> 19</li> <li>823541 (Base10) = 6666665 (Base7) --> 41</li> <li>46941953 (Base10) = 1110000002 (Base7) --> 5</li> </ul> <p>Here you can see the distribution of sums in base 7:</p> <p><a href="http://s12.postimg.org/lcf3tntzx/prime_sum_in_base7_distribution.png" rel="nofollow">http://s12.postimg.org/lcf3tntzx/prime_sum_in_base7_distribution.png</a></p> <ul> <li>COUNT(*) - the number of occurrences</li> <li>SUM7 - sum of digits in base7</li> <li>MIN(PRIME) - minimal prime in base10</li> <li>MAX(PRIME) - maximal prime in base10</li> </ul> <p>As you can see sum7 of 9, 15, 21, 27, 33 are missing in the list, though other valid sums are widely represented. By 'valid sum' I mean that sum must be odd, because of "In an odd base, a number is odd if and only if it has an odd number of odd digits."</p> <p>So what is the least prime whose sum of digits written in base7 divides by 3? Or is it possible to prove that all primes have such a feature?</p>
phuclv
90,333
<p>Do you see that in decimal, divisibility by 9 or 3 can be checked via <em>divisibility of the sum of the digits by 9 or 3</em>? Similarly you can check divisibility by 15, 5 and 3 in hexadecimal, by 7 in octal, by 3 in base 4, and by 2, 3, 6 in base 7, because <strong>this works for all factors of <span class="math-container">$b-1$</span> in base <span class="math-container">$b$</span>.</strong></p> <p>Let's say we have a number <span class="math-container">$N_b$</span> in base b</p> <p><span class="math-container">$\begin{aligned} N_b &amp;= \overline{a_na_{n-1} \dots a_1a_0}_b = a_nb^n + a_{n-1}b^{n-1} + \dots + a_1b^1 + a_0b^0 \\ &amp;= (a_n + a_{n-1} + \dots + a_0) + \Big[ a_n(b^n-1) + a_{n-1}(b^{n-1}-1) + \dots + a_2(b^2-1) + a_1(b-1) \Big] \end{aligned}$</span></p> <p>The latter part has <span class="math-container">$b^k-1$</span> in each term, thus is divisible by <span class="math-container">$b-1$</span>. Hence it's also divisible by all factors of <span class="math-container">$b-1$</span>. As a result we just need to check the first part, which is the sum of all digits, to see whether it's divisible by that factor of <span class="math-container">$b-1$</span> or not</p> <p>So in base 7 a number is divisible by 6/3/2 if and only if the sum of the digits is divisible by 6/3/2 respectively. Therefore a prime (other than <span class="math-container">$3$</span> itself) written in base 7 can't have digits summing to a multiple of 3.</p>
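The argument can be spot-checked numerically; this Python sketch (mine, not part of the answer) sieves the primes below a bound and confirms that the only prime whose base-7 digit sum is divisible by 3 is 3 itself, since such a digit sum forces the number to be divisible by 3.

```python
# Spot-check: sieve primes and look for any whose base-7 digit sum is
# divisible by 3.  By the argument above such a prime must itself be
# divisible by 3, so the only candidate is 3 (written "3" in base 7).
def digit_sum(n, base=7):
    s = 0
    while n:
        s += n % base
        n //= base
    return s

def primes_up_to(limit):
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, is_p in enumerate(sieve) if is_p]

offenders = [p for p in primes_up_to(100_000) if digit_sum(p) % 3 == 0]
print(offenders)  # [3]
```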
772,665
<p><strong>Question:</strong></p> <blockquote> <p>For any $a,b\in \mathbb{N}^{+}$, if $a+b$ is a square number, then $f(a)+f(b)$ is also a square number. Find all such functions.</p> </blockquote> <p><strong>My try:</strong> It is clear that the function $$f(x)=x$$ satisfies the given conditions, since: $$f(a)+f(b)=a+b.$$</p> <p>But is it the only function that fits our needs? </p> <p>It's one of my friends that gave me this problem, maybe this is a Mathematical olympiad problem. Thank you for you help.</p>
Community
-1
<p>It's not a complete answer, but as mentioned in comments, this problem probably missed some restrictions, and so has too many solutions. Thus I decided to answer this question for the case that $f$ has constant value on an infinite (or, with little changes, finite) partition of $\mathbb N$.<br> I expect other answers for the remaining cases, e.g. when $f$ is an increasing function (polynomial case mentioned in comments). </p> <p>Let $A$ be an infinite subset of $\mathbb N$, not containing two numbers with square sum (like <a href="https://oeis.org/A203988" rel="noreferrer">https://oeis.org/A203988</a> except elements of the form $\frac{(2k)^2}{2}$ in this sequence) and $A'=\mathbb N -A$. Suppose $A_1,A_2,...$ is an infinite non-empty partition of $A$; now $f$ could be defined as below<br> $$ f(n) = \begin{cases} a=\frac{(2k)^2}{2}&amp; \quad \text{if } n \in A' \\a_1^2-a &amp; \quad \text{if } n \in A_1\\a_2^2-a &amp; \quad \text{if } n \in A_2\\.\\.\\. \end{cases} $$ where $k$ and $a_i \in \mathbb N$. </p> <p>Now if $x,y \in \mathbb N$ and $x+y$ is a perfect square, then both $x$ and $y$ are contained in $A'$, or one of them is in $A'$ and the other one is in $A$ (and so contained in one of the $A_i$); in both cases $f(x)+f(y)$ is a perfect square. </p>
383,063
<p>I need some help with this exercise:</p> <p>Suppose $A\subseteq{G}$ is abelian, and $|G:A|$ is a prime power. Show that $G'\lt{G}$</p> <p>Thank you very much in advance.</p>
Jack Schmidt
583
<p>Hint: Reduce to a simple group and apply theorem 3.9.</p> <p>Reduction to simple group:</p> <blockquote class="spoiler"> <p> Suppose by way of contradiction that $G'=G$. If $G$ is not simple, then $G$ has a proper non-identity normal subgroup $N$. If $AN=G$, then $G/N = AN/N \cong A/A\cap N$ is abelian, so $G=G' \leq N$, contradicting $N$ being proper. Hence $\bar G = G/N$ is a finite group with abelian subgroup $\bar A = AN/N \leq \bar G$, and $[\bar G:\bar A]$ (which divides $[G:A]$) is a prime power. However, $\bar G' = \bar G$, so we have a smaller counterexample. Continuing in this way, we may assume $G$ is simple (lest we find a new $N$).</p> </blockquote> <p>Final contradiction:</p> <blockquote class="spoiler"> <p>However, for any non-identity element $a \in A$, $A \leq C_G(a)$ since $A$ is abelian, so $[G:C_G(a)]$ divides $[G:A]$, a prime power. This contradicts theorem 3.9 which says that the only $a \in G$ (for $G$ simple) with $[G:C_G(a)]$ a prime power is $a=1$ with $[G:C_G(a)]=1$. $\square$</p> </blockquote>
204,150
<p>If I had a list of let's say 20 elements, how could I split it into two separate lists that contain every other 5 elements of the initial list?</p> <p>For example:</p> <pre><code>list={1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20} function[list] (* {1,2,3,4,5,11,12,13,14,15} {6,7,8,9,10,16,17,18,19,20} *) </code></pre> <p>Follow-up question:</p> <p>Thanks to the numerous answers! Is there a way to revert this process? Say we start from two lists and I would like to end up with the <code>list</code>above:</p> <pre><code>list1={1,2,3,4,5,11,12,13,14,15} list2={6,7,8,9,10,16,17,18,19,20} function[list1,list2] (* {1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20} *) </code></pre>
Alx
35,574
<pre><code>Flatten /@ {#[[1 ;; ;; 2]], #[[2 ;; ;; 2]]} &amp;@Partition[list, 5] </code></pre> <p>gives the desired </p> <pre><code>{{1, 2, 3, 4, 5, 11, 12, 13, 14, 15}, {6, 7, 8, 9, 10, 16, 17, 18, 19, 20}} </code></pre>
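For the follow-up question, here is a sketch of both the split and its inverse in Python (my own illustration, outside Mathematica; the helper names are mine):

```python
# Split a list into two lists made of alternating 5-element blocks,
# and merge them back (the inverse asked in the follow-up question).
def split_blocks(lst, size=5):
    blocks = [lst[i:i + size] for i in range(0, len(lst), size)]
    first = [x for blk in blocks[0::2] for x in blk]
    second = [x for blk in blocks[1::2] for x in blk]
    return first, second

def merge_blocks(l1, l2, size=5):
    blocks1 = [l1[i:i + size] for i in range(0, len(l1), size)]
    blocks2 = [l2[i:i + size] for i in range(0, len(l2), size)]
    # assumes both lists hold the same number of complete blocks
    return [x for pair in zip(blocks1, blocks2) for blk in pair for x in blk]

lst = list(range(1, 21))
l1, l2 = split_blocks(lst)
print(l1)  # [1, 2, 3, 4, 5, 11, 12, 13, 14, 15]
print(l2)  # [6, 7, 8, 9, 10, 16, 17, 18, 19, 20]
print(merge_blocks(l1, l2) == lst)  # True
```

In Mathematica itself, the inverse should presumably be expressible as `Flatten[Riffle[Partition[list1, 5], Partition[list2, 5]]]`.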
1,761,668
<p>Wikipedia says about logical consequence:</p> <blockquote> <p>A formula Ο† is a logical consequence of a formula ψ if every interpretation that makes ψ true also makes Ο† true. In this case one says that Ο† is logically implied by ψ.</p> </blockquote> <p>But if Ο† and ψ are both true under some interpretations, then aren't they on equal footing? Why is one the logical consequence of the other? </p> <p>In extension, if we have a set of expressions $S = \{X_{1}, X_{2}, X_{3}, X_{4}\}$ and this set is satisfied by an interpretation $I$, so that every expression $X$ in $S$ is satisfied by $I$, then couldn't we just choose the subset $S' = \{X_2, X_{3}\}$ and claim that $X_{1}$ and $X_{4}$ are logical consequences of $S'$? It seems to my (naive) eyes that all expressions in X stand in mutual entailment, which somehow seems wrong. </p>
Martin Argerami
22,857
<p>Sure. Let $x\in M$. Put $\delta=1-\|x\|$. If we show that the ball of radius $\delta$ around $x$ is contained in $M,$ this implies that $M$ is open. </p> <p>The key fact is that $\|z\|=(z_1^2 +\ldots+z_{n+1}^2)^{1/2}$ is a norm. In particular, it satisfies the triangle inequality. </p> <p>So, if $\|z-x\|&lt;\delta$, then $$\|z\|=\|z-x+x\|\leq\|z-x\|+\|x\|&lt;\delta+\|x\|=1.$$</p>
4,481,695
<p>I tried substituting <span class="math-container">$x+3$</span> to see if I could simplify in any way, but couldn't think of anything. Also tried using <span class="math-container">$\ln$</span> and <span class="math-container">$\exp$</span>, but in the end just got to <span class="math-container">$\ln(0)$</span>. Can someone give me a tip?</p>
Frank W
552,735
<p>If you prefer a proof without using derivatives: Observe that our limit is also equal to</p> <p><span class="math-container">$$\lim\limits_{x\to-3}\frac {4^{(x+3)/5}-1}{x+3}=\frac 15\lim\limits_{x\to0}\frac {4^x-1}x$$</span></p> <p>Where the substitution <span class="math-container">$x\mapsto\tfrac 15(x+3)$</span> was made. The resulting limit can be tackled by using another substitution</p> <p><span class="math-container">$$n=4^x-1\qquad\implies\qquad x=\log_4(n+1)$$</span></p> <p>Therefore, when <span class="math-container">$x$</span> tends towards zero, <span class="math-container">$n$</span> also tends towards zero. Substituting gives</p> <p><span class="math-container">$$\begin{align*}\lim\limits_{x\to0}\frac {4^x-1}x &amp; =\lim\limits_{n\to0}\frac {n}{\log_4(n+1)}\\ &amp; =\lim\limits_{n\to0}\frac 1{\log_4(n+1)^{1/n}}\end{align*}$$</span></p> <p>Now use the limit definition of <span class="math-container">$e$</span></p> <p><span class="math-container">$$e=\lim\limits_{n\to0}(n+1)^{1/n}$$</span></p> <p>To get</p> <p><span class="math-container">$$\lim\limits_{x\to0}\frac {4^x-1}x=\frac 1{\log_4 e}=\log 4$$</span></p> <p>Note that <span class="math-container">$\log(\cdot)$</span> is the natural logarithm. Our original limit is <span class="math-container">$1/5$</span> of that, so we get</p> <p><span class="math-container">$$\lim\limits_{x\to-3}\frac {4^{(x+3)/5}-1}{x+3}\color{blue}{=\frac 15\log 4}$$</span></p>
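The result $\frac15\log 4 \approx 0.2773$ can be checked numerically with a quick sketch (my addition, not part of the answer):

```python
# Numerically check  lim_{x -> -3} (4^((x+3)/5) - 1)/(x + 3) = (1/5) log 4.
import math

def f(x):
    return (4 ** ((x + 3) / 5) - 1) / (x + 3)

target = math.log(4) / 5
vals = [f(-3 + h) for h in (1e-3, 1e-5, -1e-5)]
print(vals, target)  # all values close to target
```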
1,801,112
<p>Find the simplest solution:</p> <p>$y' + 2y = z' + 2z$ I think proper notation is not sure, y' means first derivate of y. ($\frac{dy}{dt}+ 2y = \frac{dz}{dt} + 2z$)</p> <p>$y(0)=1$</p> <p>I got kind of confused, is $y=z=1$ a proper solution here? Or is disqualified because a constant is not reliant on time and something like $e^t$ is the simplest solution?</p> <p>You can choose z and y however you like.</p>
Aaron Meyerowitz
84,560
<p>The answer you propose certainly can't be faulted.</p> <p>There is not a unique accepted criterion of "simplest" in mathematics. Certainly we expect people to agree in many specific cases. Many items in high stakes tests are of the form "find the next term in the sequence" but what is the simplest sequence starting $1,2,4,..?$ We reasonably expect most people to say "powers of $2$ so $1,2,4,8,16,...$" But maybe the differences should be $1,2,3,4,..$ so the quadratic fit $1,2,4,7,11,...$ is simpler?</p> <p>Certainly one criterion is "a polynomial if possible and, if so, one of lowest degree." That would give $1,2,4,7,11,...$</p> <p>Consider the differential equation $w'(t)=-2w(t).$ The most general solution is $w=ke^{-2t}$ where $k=w(0)$. Any initial condition uniquely specifies a solution. If I denied you any initial condition and asked for the simplest answer, most people might say $w(t)=e^{-2t}$ though the criterion above prefers $w(t)=0.$ The other might qualify as the simplest non-trivial solution. All these are strictly positive or strictly negative. The general positive solution is $e^{-2t+C}$ and $C=0$ seems simplest.</p> <p>Your problem could be cast as </p> <blockquote> <p>$y(t)$ and $z(t)$ are functions such that $y(0)=1$ and $w(t)=z(t)-y(t)$ satisfies $w'(t)=-2w(t).$ Find the <em>simplest</em> solution.</p> </blockquote> <p>The must general solution is to have $y(t)$ be any function with $y(0)=1$ and $w(t)$ be any function with $w'(t)=-2w(t).$ By the reasonable criterion I suggested, $y(t)=1$ and $w(t)=0$ is indeed simplest.</p> <p>BUT I might leave out $w(t)$ and say "How simple can $z(t)$ be? What about $z(t)=0?$ That forces $y(t)=e^{-2t}.$"</p>
4,066,512
<p>Find <span class="math-container">$\sum_{j=0}^{n}\sum_{i=j}^{n} {n \choose i}{i \choose j}$</span>.</p> <p>I don't know how to handle double summations like this very well. Can someone expand this to show how the <span class="math-container">$i=j$</span> thing works?</p> <p>I tried the following: <span class="math-container">${n \choose i}{i \choose j}={n \choose j}{n-j \choose i-j}$</span></p> <p><span class="math-container">$\sum_{j=0}^{n}\sum_{i=j}^{n} {n \choose i}{i \choose j}=\sum_{j=0}^{n}\sum_{i=j}^{n} {n \choose j}{n-j \choose i-j}=\sum_{j=0}^{n} {n \choose j}{n-j \choose 0}=2^n$</span></p> <p>Where am I going wrong?</p>
Especially Lime
341,019
<p>See robjohn's answer for the problem in your attempt.</p> <p>However, a good way to start these problems is to stop and think what the sum actually represents. <span class="math-container">$\binom ni\binom ij$</span> is the number of ways to choose a committee of <span class="math-container">$i$</span> people out of <span class="math-container">$n$</span>, and then a subcommittee of <span class="math-container">$j$</span> people from those <span class="math-container">$i$</span>. If you sum this over all possible values <span class="math-container">$i\geq j$</span> you get the number of ways to divide <span class="math-container">$n$</span> people into three groups (those on the subcommittee, those on the main committee only, those on neither) with groups allowed to be any size (including empty). What is a simpler way to calculate that?</p>
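Before committing to an answer for that simpler count, one can at least confirm it numerically: the double sum matches $3^n$, consistent with assigning each of the $n$ people independently to one of the three groups (a sketch of mine, not part of the answer):

```python
# The double sum counts assignments of n people to three groups, so it
# should equal 3^n; check the first few n directly.
from math import comb

def double_sum(n):
    return sum(comb(n, i) * comb(i, j)
               for j in range(n + 1) for i in range(j, n + 1))

print([double_sum(n) for n in range(6)])  # [1, 3, 9, 27, 81, 243]
```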
4,287,733
<blockquote> <p>How can we show <span class="math-container">$$\frac{1-x^n}{1-c^n} + \left(1-\frac{1-x}{1-c}\right)^n \leq 1 $$</span> for all <span class="math-container">$n \in \mathbb{N}$</span>, <span class="math-container">$0 \leq c \leq x \leq 1, c \neq 1$</span>?</p> </blockquote> <p>The context is <a href="https://math.stackexchange.com/questions/4287507/on-the-conditioned-tail-of-a-maximum-of-a-sequence-of-random-variables/4287567#4287567">this</a> probability problem, but of course this problem might be of independent interest to inequality enthusiasts.</p> <p>In my attempt, I have <a href="https://www.desmos.com/calculator/rbxehr6quf" rel="nofollow noreferrer">graphed</a> the inequality on Desmos. We might note that the derivative changes sign in the range <span class="math-container">$c &lt; x &lt; 1$</span> so it may be unlikely differentiation would be of help.</p>
Ivan Kaznacheyeu
955,514
<p>If <span class="math-container">$x=c$</span> or <span class="math-container">$n=1$</span> then the inequality holds. So consider <span class="math-container">$x &gt; c, n &gt; 1$</span>:</p> <p><span class="math-container">$$\frac{1-x^n}{1-c^n}+\left(\frac{x-c}{1-c}\right)^n\leq 1\Leftrightarrow 1-x^n+(1-c^n)\left(\frac{x-c}{1-c}\right)^n\leq 1-c^n\Leftrightarrow$$</span> <span class="math-container">$$(1-c^n)\left(\frac{x-c}{1-c}\right)^n\leq x^n-c^n\Leftrightarrow \frac{1-c^n}{(1-c)^n} \leq \frac{x^n-c^n}{(x-c)^n}$$</span></p> <p>Consider <span class="math-container">$f(x)=\frac{x^n-c^n}{(x-c)^n}$</span>, then <span class="math-container">$$f'(x)=\frac{n x^{n-1}}{(x-c)^n}-\frac{n(x^n-c^n)}{(x-c)^{n+1}}=\frac{nc(c^{n-1}-x^{n-1})}{(x-c)^{n+1}}&lt;0$$</span></p> <p><span class="math-container">$$f'(x)&lt;0, x\leq 1 \Rightarrow f(1)\leq f(x)\Rightarrow \frac{1-c^n}{(1-c)^n} \leq \frac{x^n-c^n}{(x-c)^n}$$</span></p>
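A grid spot-check of the original inequality over the admissible range (a sketch of mine, not part of the proof) agrees, with equality attained at $x=1$ and at $c=0$:

```python
# Grid spot-check of (1 - x^n)/(1 - c^n) + ((x - c)/(1 - c))^n <= 1
# over 0 <= c <= x <= 1 (c != 1) and n = 1..7.
def lhs(x, c, n):
    return (1 - x ** n) / (1 - c ** n) + ((x - c) / (1 - c)) ** n

ok = True
for n in range(1, 8):
    for ci in range(10):          # c = 0.0, 0.1, ..., 0.9
        c = ci / 10
        for xi in range(ci, 11):  # x = c, ..., 1.0
            if lhs(xi / 10, c, n) > 1 + 1e-12:
                ok = False
print(ok)  # True
```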
3,831,387
<p><span class="math-container">$X,Y\sim N(0,1)$</span> and are independent; consider <span class="math-container">$X+Y$</span> and <span class="math-container">$X-Y$</span>.</p> <p>I can see why <span class="math-container">$X+Y$</span> and <span class="math-container">$X-Y$</span> are independent based on the fact that their joint distribution is equal to the product of their marginal distributions. It's just that I'm having trouble understanding <em>intuitively</em> why this is so.</p> <p>This is how I see it: When you look at <span class="math-container">$X+Y=u$</span>, the set <span class="math-container">$\{(x,u-x)|x\in\mathbb{R}\}$</span> is the list of possibilities for <span class="math-container">$X$</span> and <span class="math-container">$Y$</span>.</p> <p>And intuitively, I understand independence of two random variables <span class="math-container">$A$</span> and <span class="math-container">$B$</span> as the probability of the event <span class="math-container">$A=a$</span> being completely unaffected by the event <span class="math-container">$B=b$</span> happening.</p> <p>But when you look at <span class="math-container">$X+Y=u$</span> given that <span class="math-container">$X-Y=v$</span>, the set of possibilities has only one value <span class="math-container">$(\frac{u+v}{2},\frac{u-v}{2})$</span>.</p> <p>So, <span class="math-container">$\mathbb{P}(X+Y=u|X-Y=v)\neq \mathbb{P}(X+Y=u)$</span>.</p> <p>Doesn't this mean that <span class="math-container">$X+Y$</span> is affected by the occurrence of <span class="math-container">$X-Y$</span>? So, they would have to be dependent? I'm sorry if this comes off as really stupid; it has been driving me crazy. Even though I am sure that they are independent, it just doesn't feel right.</p> <p>Thank you.</p>
tommik
791,458
<p>For a very intuitive bit of brainstorming, let's start with <span class="math-container">$X,Y$</span> iid with <span class="math-container">$N(\theta;1)$</span> distribution.</p> <p>You probably know that <span class="math-container">$X+Y$</span> is a <em>&quot;complete sufficient statistic&quot;</em> for <span class="math-container">$\theta$</span> while <span class="math-container">$X-Y\sim N(0;2)$</span> is independent of <span class="math-container">$\theta$</span> so it is <em>&quot;ancillary&quot;</em></p> <p>That is, <span class="math-container">$X+Y$</span> contains all the information about <span class="math-container">$\theta$</span> while <span class="math-container">$X-Y$</span> has no useful information: its distribution no longer depends on <span class="math-container">$\theta$</span></p> <p>So they are independent</p> <hr /> <p>This intuitive brainstorming is, roughly speaking, <a href="https://en.wikipedia.org/wiki/Basu%27s_theorem" rel="nofollow noreferrer">Basu's Theorem</a></p>
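A simulation (my own illustration, not a proof) is consistent with this: for standard normal $X,Y$ the empirical correlation between $X+Y$ and $X-Y$ is near zero, and so is that between their squares, as full independence predicts.

```python
# Simulation: for independent standard normals, the empirical correlation
# between X+Y and X-Y is ~0, and so is that between their squares,
# as full independence predicts (illustration only, not a proof).
import random

random.seed(0)
N = 200_000
s, d = [], []
for _ in range(N):
    x, y = random.gauss(0, 1), random.gauss(0, 1)
    s.append(x + y)
    d.append(x - y)

def corr(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b)) / n
    va = sum((u - ma) ** 2 for u in a) / n
    vb = sum((v - mb) ** 2 for v in b) / n
    return cov / (va * vb) ** 0.5

print(corr(s, d))                                    # ~ 0
print(corr([u * u for u in s], [v * v for v in d]))  # ~ 0
```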
3,597,172
<h2>The problem</h2> <p>Let <span class="math-container">$f: \mathbb{R} \to \mathbb{R}$</span></p> <p>Determine <span class="math-container">$f(x)$</span> knowing that </p> <p><span class="math-container">$ 3f(x) + 2 = 2f(\left \lfloor{x}\right \rfloor) + 2f(\{x\}) + 5x $</span>, where <span class="math-container">$ \left \lfloor{x}\right \rfloor $</span> is the floor function and <span class="math-container">$\{x\} = x - \left \lfloor{x}\right \rfloor$</span> (also known as the fractional part)</p> <h2>My thoughts</h2> <p>We can observe that for <span class="math-container">$x = 0$</span> we obtain <span class="math-container">$f(0) = 2$</span>.</p> <p>Considering <span class="math-container">$f(\left \lfloor{x}\right \rfloor)$</span> we get <span class="math-container">$ 3f(\left \lfloor{x}\right \rfloor) + 2 = 2f(\left \lfloor\left \lfloor{x}\right \rfloor\right \rfloor) + 2f(\{\left \lfloor{x}\right \rfloor\}) + 5\left \lfloor{x}\right \rfloor $</span></p> <p>And for <span class="math-container">$f(\{x\})$</span> we get <span class="math-container">$ 3f(\{x\}) + 2 = 2f(\left \lfloor\{x\}\right \rfloor) + 2f(\{\{x\}\}) + 5\{x\} $</span></p> <p>I did this in the hope of defining <span class="math-container">$f(\left \lfloor{x}\right \rfloor)$</span> and <span class="math-container">$f(\{x\})$</span> and thus replacing them in the initial condition.</p>
Nitin Uniyal
246,221
<p>For surjectivity of a linear map <span class="math-container">$A: U\rightarrow V$</span>, you must have <span class="math-container">$rank(A)=dim(V)$</span>.</p> <p>In your case <span class="math-container">$rank(A)=2=dim(V)$</span> so <span class="math-container">$A$</span> is ....</p>
308,117
<p>I have the matrix $$A := \begin{bmatrix}6&amp; 9&amp; 15\\-5&amp; -10&amp; -21\\ 2&amp; 5&amp; 11\end{bmatrix}.$$ Can anyone please tell me how to find the eigenspaces both by hand and by using the Nullspace command in Maple? Thanks.</p>
Mikasa
8,581
<p>You know that the solution sets of homogeneous linear systems $Ax=0$ provide an important source of vector spaces, called nullspaces. Here, $\det(A)\neq0$, so the above system has only one triple in $\mathbb R^3$ as its solution $$0^*=(0,0,0)$$ so the nullspace is the trivial vector subspace of $\mathbb R^3$ $$\langle 0^*\rangle$$</p>
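The determinant claim is easy to verify; a small Python sketch of mine (cofactor expansion along the first row):

```python
# Check det(A) != 0, so Ax = 0 has only the trivial solution.
A = [[6, 9, 15], [-5, -10, -21], [2, 5, 11]]

def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

print(det3(A))  # 12
```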
2,903,557
<p>I am having trouble proving the following identity: </p> <p>$$\frac{\sinh \tau +\sinh i\sigma }{\cosh \tau +\cosh i\sigma }=-\coth \left(i \frac{\sigma +i\tau }{2}\right)$$</p> <p>I have tried using identities and the definitions but haven't had much luck. This is a missing step in inverting the bipolar coordinates. Any assistance is appreciated.</p>
user
505,767
<p>We have that</p> <p>$$T_r=\frac{n-r+1}{r(r+1)(r+2)}=\frac{n+1}{r(r+1)(r+2)}-\frac{r}{r(r+1)(r+2)}=$$$$=(n+1)\left(\frac{1}{2r}+\frac{1}{2(r+2)}-\frac{1}{r+1}\right)+\frac1{r+2}-\frac1{r+1}$$</p> <p>and by telescoping we can see that the first term gives</p> <p>$$(n+1)\left(\frac12+\frac14+\frac12\cdot 2\sum_{r=3}^n\left(\frac1r\right)+\frac12\frac1{n+1}+\frac12\frac1{n+2}-\sum_{r=2}^n\left(\frac1r\right)-\frac1{n+1}\right)$$</p> <p>$$(n+1)\left(\frac{1}{4} -\frac12\frac1{n+1}+\frac12\frac1{n+2}\right)$$</p> <p>$$(n+1)\left(\frac{(n+1)(n+2)-2(n+2)+2(n+1)}{4(n+1)(n+2)}\right)$$</p> <p>$$\frac{n^2+3n}{4(n+2)}$$</p> <p>and the second one</p> <p>$$\frac1{n+2}-\frac12$$</p> <p>therefore</p> <p>$$\sum_{r=1}^n T_r=\frac{n^2+3n}{4(n+2)}+\frac1{n+2}-\frac12=\frac{n^2+3n+4-2(n+2)}{4(n+2)}=\frac{n^2+n}{4(n+2)}$$</p>
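The closed form can be verified exactly with rational arithmetic (my sketch, not part of the derivation):

```python
# Exact check of  sum_{r=1}^n (n-r+1)/(r(r+1)(r+2)) = (n^2+n)/(4(n+2))
# using exact fractions.
from fractions import Fraction

def S(n):
    return sum(Fraction(n - r + 1, r * (r + 1) * (r + 2))
               for r in range(1, n + 1))

checks = [S(n) == Fraction(n * n + n, 4 * (n + 2)) for n in range(1, 30)]
print(all(checks))  # True
```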
1,052,073
<p><strong>Assume $V$ is a real $n$-dimensional vector space, and $v,w \in V $. Define $ T \in L(V)$ by $ T(u) = u - (u,v)w$. Find a formula for Trace(T)</strong></p> <p>All I know about this is that trace is sum of the diagonal entries of the matrix. So how do I find the diagonal entries? I don't really know what steps to follow. </p>
Henry
6,460
<p>Experimentally it seems that something close to the construction using a regular pentagon from Hagen von Eitzen's comment</p> <img src="https://i.stack.imgur.com/7C0ZP.png" alt="enter image description here"> <p>works for all obtuse triangles. Here is an example, with two isosceles triangles cutting off the sharp angles, and then choosing a suitable point inside the remaining pentagon to give $7$ acute angled triangles in all. Not all pairs of isosceles triangles allow there to be a suitable interior point, but I could not find an obtuse angled triangle without some solution.</p> <img src="https://i.stack.imgur.com/EzhIk.png" alt="enter image description here"> <p>I would be surprised if there was a solution with fewer acute angled triangles.</p>
1,052,073
<p><strong>Assume $V$ is a real $n$-dimensional vector space, and $v,w \in V $. Define $ T \in L(V)$ by $ T(u) = u - (u,v)w$. Find a formula for Trace(T)</strong></p> <p>All I know about this is that trace is sum of the diagonal entries of the matrix. So how do I find the diagonal entries? I don't really know what steps to follow. </p>
Hagen von Eitzen
39,174
<p>Let $ABC$ be a triangle with angles $\alpha=\angle BAC&lt;\frac\pi2,\beta=\angle CBA&lt;\frac\pi2,\gamma=\angle ACB$. Let $F$ be the orthogonal projection of $C$ to $AB$ (which is between $A$ and $B$). For any point $P$ between $C$ and $F$, let $\ell $ be the line through $P$ parallel to $AB$. Let $E$ be the intersection of $\ell$ and $AC$, $D$ the intersection of $\ell$ and $BC$. We may assume that we picked $P$ sufficiently close to $C$ such that $DE&lt;PF$ holds (indeed, $DP, PE\to0$ and $FP\to FC$ as $P\to C$, so a continuity argument applies). Let $G,H$ be the projections of $D,E$ to $AB$. Let $I$ be the intersection of $AB$ with the line through $D$ perpendicular to $BC$. Let $J$ be the intersection of $AB$ with the line through $E$ perpendicular to $AC$. Then both $I$ and $F$ are between $A$ and $G$. Let $K$ be a point that is both between $G$ and $F$ and between $G$ and $I$. Similarly, let $L$ be a point that is both between $H$ and $F$ and between $H$ and $J$. <img src="https://i.stack.imgur.com/RMISp.png" alt="enter image description here"> Then $ABC$ is partitioned into seven - almost acute - triangles as follows:</p> <ul> <li>$ALE$ is acute: $\angle LAE=\alpha$, $\angle AEL&lt;\angle AEJ=\frac\pi2$, $\angle ELA&lt;\angle EHA=\frac\pi2$.</li> <li>$BDK$ is acute by the same reasoning</li> <li>$CPD$ is a right triangle with $\angle DPC=\frac\pi2$</li> <li>$CEP$ is a right triangle with $\angle CPE=\frac\pi2$</li> <li>$DPK$ is acute: $\angle PDK&lt;\angle PDG=\frac\pi2$, $\angle KPD&lt;\angle FPD=\frac\pi2$, $\angle DKP$ is also acute because it is opposed to the shortest side: $DP&lt;DE&lt;PF&lt;\min\{KP,KD\}$</li> <li>$ELP$ is acute by the same reasoning</li> <li>$KPL$ is acute: $\angle PKL&lt;\angle PFL=\frac\pi2$, $\angle KLP&lt;\angle KFP=\frac\pi2$, $\angle LPK$ is also acute because it is opposed to the shortest side: $KL&lt;GH=DE&lt;PF&lt;\min\{PK,PL\}$</li> </ul> <p>The Thales circles over $CD$ and $CE$ intersect in $C$ and $P$. 
Hence for any $Q$ between $P$ and $F$, the angles $\angle DQC$ and $\angle CQE$ are acute. As long as $Q$ is sufficiently close to $P$, the triangles $CQD$, $CEQ$, $DQK$, $ELQ$, $KQL$ are acute, thus giving us a partition of $ABC$ into seven acute triangles. <img src="https://i.stack.imgur.com/B5Er4.png" alt="enter image description here"></p>
2,099,516
<p>For independent Gamma random variables $G_1, G_2 \sim \Gamma(n,1)$, $\frac{G_1}{G_1+G_2}$ is independent of $G_1+G_2$. Does this imply that $G_1+G_2$ is independent of $G_1-G_2$? Thanks!</p>
mbe
100,502
<p>Since a rigorous proof as well as a simulation study have been provided already, I thought I'd throw in a slightly $\textbf{heuristic}$ argument on why they are not independent. </p> <p>Given a realisation of $G_1-G_2$, set $Y=|G_1-G_2|$. Then we know at least that $G_1\geq Y$ or $G_2\geq Y$, so that necessarily the distribution of $G_1+G_2|Y$ is concentrated above the value $Y&gt;0$. In contrast, the unconditional distribution of $G_1+G_2$ is concentrated on the whole $\mathbb{R}_+$. </p>
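The heuristic can be made concrete by simulation (my own sketch; shape parameter 3 is an arbitrary choice of mine): $G_1+G_2$ is strongly positively correlated with $(G_1-G_2)^2$, which already rules out independence of the sum and the difference.

```python
# Simulation of the heuristic: for G1, G2 ~ Gamma(3, 1) independent,
# G1+G2 is clearly positively correlated with (G1-G2)^2, so the sum and
# the difference cannot be independent.
import random

random.seed(1)
N = 100_000
s, d2 = [], []
for _ in range(N):
    g1 = random.gammavariate(3, 1.0)
    g2 = random.gammavariate(3, 1.0)
    s.append(g1 + g2)
    d2.append((g1 - g2) ** 2)

def corr(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b)) / n
    va = sum((u - ma) ** 2 for u in a) / n
    vb = sum((v - mb) ** 2 for v in b) / n
    return cov / (va * vb) ** 0.5

print(corr(s, d2))  # clearly positive, far from 0
```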
2,403,608
<p>I was asked to solve for the <span class="math-container">$\theta$</span> shown in the figure below.</p> <p><a href="https://i.stack.imgur.com/3Yxqv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3Yxqv.png" alt="enter image description here" /></a></p> <p>My work:</p> <p>The <span class="math-container">$\Delta FAB$</span> is an equilateral triangle, having interior angles of <span class="math-container">$60^o.$</span> I don't think <span class="math-container">$\Delta HIG$</span> and <span class="math-container">$\Delta DEC$</span> are right triangles.</p> <p>So far, that's all I know. I'm confused on how to get <span class="math-container">$\theta.$</span> How do you get the <span class="math-container">$\theta$</span> above?</p>
Xander Henderson
468,350
<p>First, some general advice</p> <ul> <li>Often when you are presented a problem like this, the thing that you are looking for is something that you can't get at directly. <strong>Be prepared to work indirectly.</strong> You have to make some kind of round-about argument, and determine a lot of intermediate unknowns. In this case, you instinct to look at $\triangle GIH$ and $\triangle ECD$ was good. That was absolutely the right thing to try!</li> <li><strong>Remember some basics.</strong> In a triangle, the angles add to $180^{\circ}$. If two angles are complementary (i.e. they combine to make a right angle), then they add up to $90^{\circ}$. If two angles are supplementary (i.e. they combine to make a straight line), then they add to up to make $180^{\circ}$. The two angles opposite the congruent sides of an isosceles triangle are congruent. Et cetera.</li> <li><strong>Look for relations.</strong> Try to find figures that are congruent (or similar). Remember the basic congruence relations for triangles (e.g. SAS, as used below). Try to use the scaling relations of similar objects.</li> </ul> <p>With that said, an argument follows, assuming that the large quadrilateral is a square.</p> <hr> <p>Indeed, $\triangle HIG$ and $\triangle DEC$ are not right triangles.</p> <p>Observe that $\angle I$ and $\angle A$ are complementary (i.e. they add up to a right angle). But $\angle A$ measures $60^{\circ}$, and so $\angle I$ must measure $30^{\circ}$. By similar reasoning, $\angle C$ also measures $30^{\circ}$.</p> <p>By the side-angle-side congruence relation (SAS), you know that $\triangle GIH \cong \triangle ECD$. We just argued that the middle angles are $30^{\circ}$, and from the diagram, we have that $GI = IH = EC = CD = x$. Indeed, not only are the two triangle congruent, they are both isosceles! This means that $$ m\angle G = m\angle H \qquad\text{and}\qquad m\angle G + m\angle H + m\angle I = 180^{\circ},$$ since the angle sum of a triangle is $180^{\circ}$. 
Combining these, and solving for either $m\angle G$ or $m\angle H$, we obtain $$ m\angle G = m\angle H = 75^{\circ}.$$ By similar reasoning $$ m\angle E = m\angle D = 75^{\circ}.$$ The two unlabeled angles are equal, as they are each the complement of an angle that is $75^{\circ}$. Thus the two unlabeled angles each measure $15^{\circ}$. Finally, in the unlabeled triangle, we know that the angle sum must, again, be $180^{\circ}$. Since we know that two of the angles are $15^{\circ}$ each, we solve $$ \theta + 15^{\circ} + 15^{\circ} = 180^{\circ}$$ in order to obtain $\theta = 150^{\circ}$.</p> <hr> <p>If the figure is not a square, then things are slightly more complicated (though not really):</p> <p><a href="https://i.stack.imgur.com/PvefR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PvefR.png" alt="enter image description here"></a></p> <p>Since the quadrilateral $ABDC$ is equilateral (i.e. all four sides are of the same length), it must be a rhombus, so opposite sides are parallel. This implies that $$ \alpha + \beta + 120^{\circ} = 180^{\circ} $$ (make sure you understand why). From this, it follows that $$ \frac{1}{2}(\alpha + \beta) = 30^{\circ}.$$ Note that $\triangle EAC$ is isosceles (it has two sides that are congruent), from which it follows that the two remaining unknown angles have measure $$ 90^{\circ} - \frac{\alpha}{2}. $$ Similarly, the two unlabeled angles in $\triangle EBD$ have measure $$ 90^{\circ} - \frac{\beta}{2}. $$ Again using the fact that $\overline{AC} \parallel \overline{BD}$, it follows that $$ \left( 90^{\circ} - \frac{\alpha}{2} \right) + \left( 90^{\circ} - \frac{\beta}{2} \right) + \gamma + \delta = 180^{\circ}. 
$$ Because this turns out to be useful, we can rearrange this to get $$ \gamma + \delta = \frac{1}{2} (\alpha + \beta) = 30^{\circ}$$ But the angle sum of a triangle is $180^{\circ}$, and so $$ \theta = 180^{\circ} - (\gamma + \delta) = 180^{\circ} - 30^{\circ} = 150^{\circ}.$$ Note that this is the same answer we got above, which goes to show that it is often possible to get the right answer for reasons that are not quite right.</p>
1,293,725
<p>Here I have a question:</p> <p>Solve for real value of $x$: $$|x^2 -2x -3| &gt; |x^2 +7x -13|$$</p> <p>I got the answer as $x = (-\infty, \frac{1}{4}(-5-3\sqrt{17}))$ and $x=(\frac{10}{9},\frac{1}{4}(3\sqrt{17}-5)$</p> <p>Please verify it if it is correct or not. Thanks</p>
DeepSea
101,504
<p><strong>hint</strong>: Use $|A| &gt; |B| \iff A^2 &gt; B^2 \iff (A-B)(A+B) &gt; 0$. Apply this property to $A = x^2-2x-3, B = x^2+7x-13$</p>
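A quick numeric cross-check (my own Python sketch, not part of the hint) that the factorization test agrees with the original inequality, and that the endpoints proposed in the question — $(-5\pm3\sqrt{17})/4$ and $10/9$ — separate the solution set correctly:

```python
import math

# Left- and right-hand sides of |x^2 - 2x - 3| > |x^2 + 7x - 13|.
def holds(x):
    return abs(x * x - 2 * x - 3) > abs(x * x + 7 * x - 13)

# Equivalent sign condition from the hint: (A - B)(A + B) > 0.
def hint_holds(x):
    A = x * x - 2 * x - 3
    B = x * x + 7 * x - 13
    return (A - B) * (A + B) > 0

# Boundary points claimed in the question.
r1 = (-5 - 3 * math.sqrt(17)) / 4   # left interval's right endpoint
r2 = 10 / 9                          # second interval's left endpoint
r3 = (3 * math.sqrt(17) - 5) / 4     # second interval's right endpoint

# Sample inside and outside the claimed solution set.
samples = [r1 - 1, r1 + 0.1, (r2 + r3) / 2, r3 + 1]
expected = [True, False, True, False]
results = [holds(x) for x in samples]
```

If the claimed intervals are right, `results` matches `expected`, and the hint's test agrees with the raw inequality at every sample point.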
3,492,155
<p>I am looking for the values <span class="math-container">$ x \in R $</span> which satisfy the following equation :</p> <p><span class="math-container">$ e^{-\alpha x} = \frac{a}{x - c} $</span></p> <p>Where <span class="math-container">$ \alpha $</span>, <span class="math-container">$ a $</span> and <span class="math-container">$ c $</span> are real valued constants.</p> <p>If <span class="math-container">$ c = 0 $</span>, we get <span class="math-container">$ x = - \frac{W(-a \alpha)}{\alpha} $</span>, where <span class="math-container">$ W$</span> denotes the <a href="https://en.wikipedia.org/wiki/Lambert_W_function" rel="nofollow noreferrer">Lambert W function</a>, but with <span class="math-container">$ c \neq 0 $</span> I don't see an obvious solution.</p> <p>Otherwise, could I find an approximate solution with numerical methods in limited time ? (my system needs to be running in real time)</p>
GEdgar
442
<p>Step by step: <span class="math-container">$$ e^{-\alpha x} = \frac{a}{x-c} \\ (x-c)e^{-\alpha x} = a \\ (x-c)e^{-\alpha x} e^{\alpha c} = a e^{\alpha c} \\ (x-c)e^{-\alpha (x-c)} = a e^{\alpha c} \\ -\alpha(x-c)e^{-\alpha (x-c)} = -a\alpha e^{\alpha c} \\ -\alpha(x-c) = W(-a\alpha e^{\alpha c}) \\ x-c = -\frac{W(-a\alpha e^{\alpha c})}{\alpha} \\ x = c-\frac{W(-a\alpha e^{\alpha c})}{\alpha} $$</span></p>
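Since the asker needs something that runs in real time, here is a minimal pure-Python sketch (my own addition; the function name `lambert_w` and the constants `alpha`, `a`, `c` are arbitrary choices, picked so the argument of $W$ stays above $-1/e$) that evaluates the principal branch of $W$ by Newton iteration and checks the closed form above against the original equation:

```python
import math

def lambert_w(z, tol=1e-14):
    """Principal branch of W (solves w * exp(w) = z) via Newton, for z >= -1/e.
    The crude starting point below is adequate away from the branch point."""
    w = 0.0 if z > -0.25 else -0.5
    for _ in range(100):
        ew = math.exp(w)
        step = (w * ew - z) / (ew * (w + 1.0))
        w -= step
        if abs(step) < tol:
            break
    return w

# Arbitrary example constants (an assumption, not from the question).
alpha, a, c = 1.0, 0.1, 0.5

# Closed form from the answer: x = c - W(-a*alpha*e^(alpha*c)) / alpha.
x = c - lambert_w(-a * alpha * math.exp(alpha * c)) / alpha

# Residual of the original equation e^(-alpha x) = a / (x - c).
residual = math.exp(-alpha * x) - a / (x - c)
```

Newton converges quadratically here, so a handful of iterations is plenty for a real-time loop.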
286,930
<p>I have been assigned this problem for homework:</p> <blockquote> <p>Show that, if $a &lt; b + \epsilon$ for every $\epsilon \gt 0$, then $a\le b$.</p> </blockquote> <p>I have tried to go about this using Induction, but I don't know what the base case would be. It is obvious to me in my mind, but I don't know how to put it into mathematical terms on paper. any hints?</p>
lamb_da_calculus
54,044
<p>I'm assuming this is from a first proof-based course, so apologies if this comes off as condescending. </p> <p>Induction is harder to use in this case for precisely the reason you stated: what is the base case? We don't have any equivalent of some smallest nonzero number 1 with which to start like we do in $\mathbb{N}$. Maybe we could start with 0.0001? No, 0.00001 is smaller. And 0.000001 is smaller still. This sort of idea is known as the <a href="http://en.wikipedia.org/wiki/Well-ordering_principle" rel="nofollow">Well-Ordering Principle</a> and is maybe not immediately clear, but it's worth thinking about. In particular, $\mathbb{R}$ (the real numbers) is not well-ordered, since there is no smallest element in $(0,1)$.</p> <p>But in statements involving inequalities it's often a good idea to try to do things by contradiction. In that case, we'd assume $a &gt; b$ and try to show that this is impossible. Well, if $a &gt; b$ then $a - b &gt; 0$. But we are given that $a &lt; b + \epsilon$ for any $\epsilon &gt; 0$. So what value should we choose for $\epsilon$ to find our contradiction?</p>
286,930
<p>I have been assigned this problem for homework:</p> <blockquote> <p>Show that, if $a &lt; b + \epsilon$ for every $\epsilon \gt 0$, then $a\le b$.</p> </blockquote> <p>I have tried to go about this using Induction, but I don't know what the base case would be. It is obvious to me in my mind, but I don't know how to put it into mathematical terms on paper. any hints?</p>
icurays1
49,070
<p>I'm a snob, so I like using contrapositive instead of contradiction whenever possible. The contrapositive would go like this: "if $a&gt;b$, then there exists an $\epsilon&gt;0$ such that $a\geq b+\epsilon$." Finding this epsilon should be easy with a number line. </p> <p>Edit: the original poster figured it out in the comments, so here's the full argument: if $a&gt;b$, then certainly $a-b&gt;0$. Hence set $\epsilon=a-b$; then, since $b+\epsilon=b+a-b=a$, we certainly have $a\geq b+\epsilon$ (indeed $a=b+\epsilon$), which is exactly the negation of $a&lt;b+\epsilon$.</p>
2,518,305
<p><a href="http://www.mit.edu/%7Esame/pdf/qualifying_round_2017_answers.pdf" rel="noreferrer">This is a problem from MIT integration bee 2017.</a></p> <p><span class="math-container">$$\int_0^{\pi/2} \frac 1 {1+\tan^{2017} x} \, dx$$</span></p> <p>I have tried substitution method, multiplying numerator and denominator with <span class="math-container">$\sec^2x$</span>, breaking the numerator in terms of linear combination of the denominator and the derivative of it. None of these methods work.</p> <p>Some hints please?</p>
jonsno
310,635
<p>Try using $$\int_a^b f(x) \, dx = \int_a^b f(a+b-x) \, dx$$</p> <p>and the fact that $\tan(\frac{\pi}{2}-x) = \cot(x)$ to get the following: $$I = \int_0^{\pi / 2} \frac 1 {1+ \tan^{2017}(x)} \, dx = \int_0^{\pi / 2} \frac 1 {1+ \tan^{2017}(\pi/2-x)} \, dx \\ = \int_0^{\pi / 2} \frac 1 {1+ \cot^{2017}(x)} \, dx = \int_0^{\pi / 2} \frac{\tan^{2017}(x)}{1+ \tan^{2017}(x)} \, dx$$</p> <p>Hence</p> <p>$$2I = \int_{0}^{\pi / 2} dx = \frac{\pi}{2},$$</p> <p>so that $I = \dfrac{\pi}{4}$.</p>
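A numeric sanity check (my own sketch): the integrand is essentially a smoothed step dropping from $1$ to $0$ at $x=\pi/4$, and a midpoint rule reproduces $I=\pi/4$. The branch on $\tan x > 1$ avoids floating-point overflow from the huge exponent.

```python
import math

N_EXP = 2017

def integrand(x):
    t = math.tan(x)
    if t > 1.0:
        s = (1.0 / t) ** N_EXP      # tiny; may underflow to 0.0 harmlessly
        return s / (s + 1.0)
    return 1.0 / (1.0 + t ** N_EXP)

# Composite midpoint rule on [0, pi/2]; interior nodes avoid the endpoint.
N = 200_000
h = (math.pi / 2) / N
I = h * sum(integrand((k + 0.5) * h) for k in range(N))
```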
260,865
<p>I am fairly new to mathematica and I am trying to plot a 3D curve defined by multiple formulas. I have the curve <span class="math-container">$K$</span> from the point <span class="math-container">$(\frac{1}{2},-\frac{1}{2}\sqrt{3},0)$</span> to <span class="math-container">$(\frac{1}{2},\frac{1}{2}\sqrt{3},2\sqrt{3})$</span> given by, <br /> <span class="math-container">$\begin{cases}x^{2}+y^{2}=1,\\ z=\frac{y}{x}+\sqrt{3}\\ x\geq\frac{1}{2} \end{cases}$</span><br /> I would like to see this curve plotted somehow. I just can't find a function on mathematica which allows this. Does anyone know if this can be done in a simple way?</p>
Bob Hanlon
9,362
<p>You could also use <a href="https://reference.wolfram.com/language/ref/ParametricPlot3D.html" rel="nofollow noreferrer"><code>ParametricPlot3D</code></a>. The curve is made up of two segments as a function of <code>x</code></p> <pre><code>ParametricPlot3D[{ {x, Sqrt[1 - x^2], Sqrt[1 - x^2]/x + Sqrt[3]}, {x, -Sqrt[1 - x^2], -Sqrt[1 - x^2]/x + Sqrt[3]}}, {x, 1/2, 1}, BoxRatios -&gt; {1, 1, 1}] </code></pre> <p><a href="https://i.stack.imgur.com/qP70T.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qP70T.png" alt="enter image description here" /></a></p>
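A language-agnostic cross-check (a Python sketch of my own, independent of the Mathematica code) that the two branches used above really parametrize the curve, and that they meet the endpoints $(\frac12, \mp\frac{\sqrt3}{2}, 0 \text{ or } 2\sqrt3)$ stated in the question:

```python
import math

def branch(x, sign):
    """Point on the curve: y = +/- sqrt(1 - x^2), z = y/x + sqrt(3)."""
    y = sign * math.sqrt(1 - x * x)
    return (x, y, y / x + math.sqrt(3))

# Endpoints claimed in the question.
p_lo = branch(0.5, -1.0)   # should be (1/2, -sqrt(3)/2, 0)
p_hi = branch(0.5, +1.0)   # should be (1/2,  sqrt(3)/2, 2 sqrt(3))

# Every sampled point on both branches satisfies both defining equations.
ok = all(
    abs(x * x + y * y - 1) < 1e-12 and abs(z - (y / x + math.sqrt(3))) < 1e-12
    for sign in (-1.0, 1.0)
    for (x, y, z) in (branch(0.5 + 0.5 * k / 100, sign) for k in range(100))
)
```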
260,865
<p>I am fairly new to mathematica and I am trying to plot a 3D curve defined by multiple formulas. I have the curve <span class="math-container">$K$</span> from the point <span class="math-container">$(\frac{1}{2},-\frac{1}{2}\sqrt{3},0)$</span> to <span class="math-container">$(\frac{1}{2},\frac{1}{2}\sqrt{3},2\sqrt{3})$</span> given by, <br /> <span class="math-container">$\begin{cases}x^{2}+y^{2}=1,\\ z=\frac{y}{x}+\sqrt{3}\\ x\geq\frac{1}{2} \end{cases}$</span><br /> I would like to see this curve plotted somehow. I just can't find a function on mathematica which allows this. Does anyone know if this can be done in a simple way?</p>
Daniel Huber
46,318
<p>Here is an analytical solution:</p> <pre><code>eq = {x^2 + y^2 == 1 &amp;&amp; z == Sqrt[3] + y/x, x &gt; 1/2}; Reduce[eq, z, Reals] </code></pre> <p><a href="https://i.stack.imgur.com/rS8Af.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rS8Af.png" alt="enter image description here" /></a></p> <p>We may take x as a parameter, and plot the 2 branches using ParametricPlot3D:</p> <pre><code>ParametricPlot3D[{{x, Sqrt[1 - x^2], 1/x (Sqrt[3] x + Sqrt[1 - x^2])}, {x, -Sqrt[1 - x^2], 1/x (Sqrt[3] x - Sqrt[1 - x^2])}}, {x, 1/2, 1}, BoxRatios -&gt; {1, 1, 1}] </code></pre> <p><a href="https://i.stack.imgur.com/BwuzM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BwuzM.png" alt="enter image description here" /></a></p>
260,865
<p>I am fairly new to mathematica and I am trying to plot a 3D curve defined by multiple formulas. I have the curve <span class="math-container">$K$</span> from the point <span class="math-container">$(\frac{1}{2},-\frac{1}{2}\sqrt{3},0)$</span> to <span class="math-container">$(\frac{1}{2},\frac{1}{2}\sqrt{3},2\sqrt{3})$</span> given by, <br /> <span class="math-container">$\begin{cases}x^{2}+y^{2}=1,\\ z=\frac{y}{x}+\sqrt{3}\\ x\geq\frac{1}{2} \end{cases}$</span><br /> I would like to see this curve plotted somehow. I just can't find a function on mathematica which allows this. Does anyone know if this can be done in a simple way?</p>
cvgmt
72,111
<p>Use the hint by @J.M.can'tdealwithit</p> <pre><code>sol = Solve[{x == Cos[t], y == Sin[t], z == y/x + Sqrt[3], x &gt;= 1/2, -Pi &lt;= t &lt;= Pi}, {x, y, z}]

plot = ParametricPlot3D[{x, y, z} /. sol[[1]], {t, -Pi, Pi}, PlotStyle -&gt; {Thick, Red}]
</code></pre> <p><a href="https://i.stack.imgur.com/tYzh7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tYzh7.png" alt="enter image description here" /></a> <a href="https://i.stack.imgur.com/qSogL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qSogL.png" alt="enter image description here" /></a></p>
2,970,234
<p>So, I'm given the following query:</p> <p>Write the Taylor series centered at <span class="math-container">$z_0 = 0$</span> for each of the following complex-valued functions:</p> <p><span class="math-container">$$f(z) = z^2\sin(z),\quad g(z) = z\sin(z^2)$$</span></p> <p>Then, use these series to help you compute the following limit:</p> <p><span class="math-container">$$\lim_{z \to 0} \frac{z^2\sin(z)-z\sin(z^2)}{z^5}$$</span></p> <p>So, the first part wasn't so bad. I simply noticed that</p> <p><span class="math-container">$$\sin(z) = \sum_{n=0}^{\infty} \frac{(-1)^nz^{2n+1}}{(2n+1)!}$$</span> and <span class="math-container">$$\sin(z^2) = \sum_{n=0}^{\infty} \frac{(-1)^nz^{4n+2}}{(2n+1)!}$$</span></p> <p>With a little bit of simplifying, I obtained:</p> <p><span class="math-container">$$f(z) = \sum_{n=0}^{\infty} \frac{(-1)^nz^{2n+3}}{(2n+1)!}, \quad g(z) = \sum_{n=0}^{\infty} \frac{(-1)^nz^{4n+3}}{(2n+1)!}$$</span></p> <p>However, I'm not quite sure how to deal with the limit. Does anyone have any advice on how I could approach it? I've tried expressing the entire limit as a series, but this doesn't seem to simplify much, and the limit cares about what happens as <span class="math-container">$z \to 0$</span> instead of as <span class="math-container">$n \to \infty$</span> (as we usually think about with series).</p>
Nosrati
108,128
<p><span class="math-container">\begin{align} \lim_{z\to0} \frac{z^2\sin(z)-z\sin(z^2)}{z^5} &amp;= \lim_{z\to0} \frac{\sum_{n=0}^{\infty} \dfrac{(-1)^nz^{2n+3}}{(2n+1)!}- \sum_{n=0}^{\infty} \dfrac{(-1)^nz^{4n+3}}{(2n+1)!}}{z^5} \\ &amp;= \lim_{z\to0} \frac{\left(z^3-\dfrac{z^5}{3!}+\dfrac{z^7}{5!}-\cdots\right)-\left(z^3-\dfrac{z^7}{3!}+\dfrac{z^{11}}{5!}-\cdots\right)}{z^5} \\ &amp;= \lim_{z\to0} -\dfrac{1}{3!}+\left(\dfrac{1}{5!}+\dfrac{1}{3!}\right)z^2+P(z) \\ &amp;= -\dfrac{1}{3!} \end{align}</span> where <span class="math-container">$P(z)$</span> is an analytic function with <span class="math-container">$a_0=0$</span>, collecting the terms of order <span class="math-container">$z^4$</span> and higher.</p>
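The value $-1/3! = -1/6$ can be confirmed numerically (my own sketch; the sample points are arbitrary). The cancellation in the numerator is mild at these scales, so double precision suffices.

```python
import math

def f(z):
    return (z * z * math.sin(z) - z * math.sin(z * z)) / z ** 5

# Approach 0 along a few small values; f(z) = -1/6 + O(z^2).
vals = [f(10.0 ** (-k)) for k in range(1, 4)]   # z = 0.1, 0.01, 0.001
limit = -1.0 / 6.0
errs = [abs(v - limit) for v in vals]
```

The errors shrink roughly by a factor of $100$ per step, consistent with the quadratic next term in the expansion.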
2,502,711
<p>If the cross ratio $Z_1, Z_2, Z_3$ and $Z_4$ is real, then</p> <p>which of the following statement is true? </p> <p>1)$Z_1, Z_2$ and $Z_3$ are collinear</p> <p>2)$Z_1, Z_2$ and $Z_3$ are concyclic</p> <p>3)$Z_1, Z_2$ and $Z_3$ are collinear when atleast one $Z_1, Z_2$ or $Z_3$ is real </p> <p>My attempt : By theorem: <a href="https://math.stackexchange.com/questions/482166/cross-ratio-is-real-on-image-of-real-axis">Cross ratio is real on image of real axis</a></p> <p>I am confused due to $Z_4$ is not being given. Please help me.</p>
doraemonpaul
30,938
<p>Hint:</p> <p>$\dfrac{d^2z}{dx^2}=be^{-az}$</p> <p>$\dfrac{dz}{dx}\dfrac{d^2z}{dx^2}=be^{-az}\dfrac{dz}{dx}$</p> <p>$\int\dfrac{dz}{dx}\dfrac{d^2z}{dx^2}~dx=\int be^{-az}\dfrac{dz}{dx}~dx$</p> <p>$\int\dfrac{dz}{dx}~d\left(\dfrac{dz}{dx}\right)=\int be^{-az}~dz$</p> <p>$\dfrac{1}{2}\left(\dfrac{dz}{dx}\right)^2=-\dfrac{be^{-az}}{a}+c$</p> <p>$\left(\dfrac{dz}{dx}\right)^2=\dfrac{C_1-2be^{-az}}{a}\qquad(C_1=2ac)$</p> <p>$\dfrac{dz}{dx}=\pm\dfrac{\sqrt{C_1-2be^{-az}}}{\sqrt a}$</p>
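A numeric check of the first integral obtained above (my own sketch; the parameters and initial data are arbitrary choices): along any solution of $z'' = be^{-az}$, the quantity $\tfrac12 (z')^2 + \tfrac{b}{a}e^{-az}$ should stay constant, and an RK4 integration confirms it does.

```python
import math

# Arbitrary parameters and initial data for illustration.
a, b = 1.0, 1.0
z, v = 0.0, 0.3          # z(0) and z'(0)

def accel(z):
    return b * math.exp(-a * z)   # z'' = b e^(-a z)

def energy(z, v):
    # First integral: (1/2) z'^2 + (b/a) e^(-a z) = const.
    return 0.5 * v * v + (b / a) * math.exp(-a * z)

E0 = energy(z, v)

# Classical RK4 on the first-order system (z, v) over x in [0, 1].
h, steps = 1e-3, 1000
for _ in range(steps):
    k1z, k1v = v, accel(z)
    k2z, k2v = v + 0.5 * h * k1v, accel(z + 0.5 * h * k1z)
    k3z, k3v = v + 0.5 * h * k2v, accel(z + 0.5 * h * k2z)
    k4z, k4v = v + h * k3v, accel(z + h * k3z)
    z += h * (k1z + 2 * k2z + 2 * k3z + k4z) / 6
    v += h * (k1v + 2 * k2v + 2 * k3v + k4v) / 6

drift = abs(energy(z, v) - E0)
```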
3,965,668
<p>I started off my proof by of course stating that the different right triangles I would be comparing should have the same area A. I was able to show that what the question is asking is true visually and computationally using the Pythagorean Theorem, and even using the triangle inequality, but I don't really know how to set it up in such a way that I can use the Euler-Lagrange equation in order to prove that this is true.</p> <p>Equations:</p> <p><span class="math-container">$$\frac{1}{2}xy=A$$</span></p> <p>So</p> <p><span class="math-container">$$y(x) = 2A/x \Longrightarrow y'(x) = -\frac{2A}{x^2}$$</span></p>
Martin R
42,969
<p>Yes. For all <span class="math-container">$t \in [0, 1]$</span> is <span class="math-container">$$ e^{\phi(t) - f(\gamma(t))} = \frac{e^{\phi(t)}}{e^{f(\gamma(t))}} = \frac{\gamma(t)}{\gamma(t)} = 1 $$</span> because <span class="math-container">$e^{f(w)} = w$</span> holds for all <span class="math-container">$w \in \Omega$</span>. It follows that the continuous function <span class="math-container">$$ [0, 1] \ni t \mapsto \frac{1}{2 \pi i }(\phi(t) - f(\gamma(t))) $$</span> takes only integer values on the connected interval <span class="math-container">$[0, 1]$</span>, and is therefore constant. But <span class="math-container">$\phi(0) = f(\gamma(0))$</span> per the construction, so that <span class="math-container">$\phi(t) = f(\gamma(t))$</span> holds for all <span class="math-container">$t$</span>.</p>
1,408,467
<p>I've found a few papers that deal with removing redundant inequality constraints for linear programs, but I'm only trying to find the non-redundant constraints that define a feasible region (i.e. I have no objective function), given a set of possibly redundant inequality constraints.</p> <p>For instance, if I have:</p> <p>$$ 0x_1 + x_2 \leq -1\\ 0x_1 - x_2 \leq -1\\ -x_1 + 0x_2 \leq -2\\ x_1 + 0x_2 \leq -2\\ x_1 + 0x_2 \leq -6 $$</p> <p>Is there a robust technique that could detect that the last constraint is redundant?</p>
Robert Israel
8,508
<p>You could try maximizing $x_1 + 0 x_2$ subject to the first $4$ constraints (i.e. all constraints except the one being tested). The constraint is redundant iff the optimal objective value is $\le -6$. </p>
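Note that the system in the question is actually infeasible as written (constraints 1 and 2 force $x_2 \le -1$ and $x_2 \ge 1$ simultaneously), so the sketch below (my own addition, not part of the answer) applies the same test — maximize the candidate constraint's left-hand side subject to the remaining constraints — to a small hypothetical feasible system. The brute-force vertex enumeration is adequate only for tiny bounded 2-variable examples, not a general LP solver.

```python
# Constraints a . x <= b, stored as rows (a1, a2, b).
# Hypothetical system: 0 <= x <= 1, 0 <= y <= 1, plus the candidate
# x + y <= 3, which is redundant (max of x + y over the box is 2).
cons = [
    ( 1.0,  0.0, 1.0),
    ( 0.0,  1.0, 1.0),
    (-1.0,  0.0, 0.0),
    ( 0.0, -1.0, 0.0),
    ( 1.0,  1.0, 3.0),   # candidate to test
]

def feasible(p, rows, eps=1e-9):
    return all(a1 * p[0] + a2 * p[1] <= b + eps for a1, a2, b in rows)

def max_over_vertices(obj, rows):
    """Maximize obj . x over the bounded polygon defined by rows,
    by enumerating pairwise intersections of constraint boundaries."""
    best = None
    for i in range(len(rows)):
        for j in range(i + 1, len(rows)):
            a1, a2, b1 = rows[i]
            c1, c2, b2 = rows[j]
            det = a1 * c2 - a2 * c1
            if abs(det) < 1e-12:
                continue                      # parallel boundaries
            x = (b1 * c2 - b2 * a2) / det     # Cramer's rule
            y = (a1 * b2 - c1 * b1) / det
            if feasible((x, y), rows):
                val = obj[0] * x + obj[1] * y
                best = val if best is None else max(best, val)
    return best

k = 4                                    # index of the candidate constraint
others = cons[:k] + cons[k + 1:]
m = max_over_vertices(cons[k][:2], others)
redundant = m is not None and m <= cons[k][2] + 1e-9
```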
32,294
<p>I sort of asked a version of this question before and it was unclear; I will now try to make an honest attempt to state everything clearly.</p> <p>I am trying to evaluate the following, namely $\nabla w = \nabla |\vec{a} \times \vec{r}|^n$, where $\vec{a}$ is a constant vector and $\vec{r}$ is the vector $&lt;x_1,x_2,\ldots x_n&gt;$. Now say that I use the chain rule, first say by setting $\vec{u}$ to be equal to the cross product of $\vec{a}$ and $\vec{r}$. </p> <p>Now here's the part that I'm confused about. How do we extend the chain rule over when dealing with the gradient? Do I take $\nabla |\vec{a} \times \vec{r}|^n$ to be equal to $\nabla |\vec{u}|^n$ $\times$ $\nabla (\vec{a} \times \vec{r})$, where $\times$ denotes the cross product?</p> <p>The first bit $\nabla |\vec{u}|^n$ is easy, it just evaluates to $n|\vec{a} \times \vec{r}|^{n-2} (\vec{a} \times \vec {r})$, remembering that $\vec{u} = \vec{a} \times \vec{r}$.</p> <p>I am guessing that $\nabla |\vec{a} \times \vec{r}|^n$ $\neq$ $\nabla |\vec{u}|^n$ $\times$ $\nabla (\vec{a} \times \vec{r})$, as to even speak about $\nabla (\vec{a} \times \vec{r})$, i.e. the gradient of a vector, we would have to talk about either the cross product or dot product of the gradient and $\nabla (\vec{a} \times \vec{r})$ </p> <p>By the way, I am told the answer given is $\nabla |\vec{a} \times \vec{r}|^n$ = $n|\vec{a} \times \vec{r}|^{n-2} \Big(\vec{a} \times (\vec{r} \times \vec{a})\Big)$.</p> <p>So let's say that I try a component wise approach, i.e. we look first at $\frac{\partial w}{\partial x_1}$. Then is it true (I could be wrong) that:</p> <p>$\frac{\partial w}{\partial x_1} = n|\vec{a} \times \vec{r}|^{n-2} \quad \vec{u_1} \times \frac{\partial}{\partial x_1} \Big(\vec{a} \times \vec{r}\Big) = n|\vec{a} \times \vec{r}|^{n-2} \quad \vec{u_1} \times \Big(\vec{a} \times \frac{\partial\vec{r}}{\partial x_1}\Big)$, as $\vec{a}$ is a constant vector? 
Here $\vec{u_1}$ denotes the first component of the vector $\vec{a} \times \vec{r}$.</p> <p>I would really appreciate an interpretation of this, it is just that I am confused about what to take and the meanings of these operations.</p>
Hans Lundmark
1,242
<p>Let $f(r)=|a \times r|$. (I'll be lazy and omit the vector arrows.)</p> <p>Then, to begin with, $\nabla(f(r)^n) = n f(r)^{n-1} \nabla f(r)$ by the chain rule. So it only remains to compute $\nabla f(r)$, and if you can't think of any clever way, then you just do it by brute force in terms of coordinates: take $a=(a_1, a_2, a_3)$ and $r=(x_1,x_2,x_3)$ (the cross product only exists in three dimensions, so your $x_n$ doesn't make sense), compute the components of $u=a \times r=(u_1,u_2,u_3)$, then compute the length $f(r)=\sqrt{u_1^2 + u_2^2 + u_3^2}$, and compute the derivatives of that with respect to $x_1$, $x_2$ and $x_3$. Done!</p>
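The stated answer can also be checked numerically against central finite differences (my own sketch; the vectors and the exponent $n$ are arbitrary choices), using the BAC-CAB expansion $a\times(r\times a) = (a\cdot a)\,r - (a\cdot r)\,a$:

```python
import math

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    return sum(p * q for p, q in zip(u, v))

a = (1.0, 2.0, 3.0)
n = 3                                   # arbitrary exponent

def w(r):
    return dot(cross(a, r), cross(a, r)) ** (n / 2)   # |a x r|^n

def grad_formula(r):
    m = math.sqrt(dot(cross(a, r), cross(a, r)))      # |a x r|
    # a x (r x a) = (a.a) r - (a.r) a   (BAC-CAB rule)
    s, t = dot(a, a), dot(a, r)
    triple = tuple(s * ri - t * ai for ri, ai in zip(r, a))
    return tuple(n * m ** (n - 2) * ti for ti in triple)

r = (0.4, -0.2, 0.7)
h = 1e-6
grad_fd = tuple(
    (w(tuple(r[j] + (h if j == i else 0.0) for j in range(3)))
     - w(tuple(r[j] - (h if j == i else 0.0) for j in range(3)))) / (2 * h)
    for i in range(3)
)
err = max(abs(p - q) for p, q in zip(grad_formula(r), grad_fd))
```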
1,238,481
<p>Let $\vec p, \vec q$ and $\vec r$ are three mutually perpendicular vectors of the same magnitude. If a vector $\vec x$ satisfies the equation $\begin{aligned} \vec p \times ((\vec x - \vec q) \times \vec p)+\vec q \times ((\vec x - \vec r) \times \vec q)+\vec r \times ((\vec x - \vec p) \times \vec r)=0\end{aligned}$, then find $\vec x$.</p> <p>I considered $p^2 = q^2 = r^2 = a^2$ (say) and since $\vec p, \vec q$ and $\vec r$ are three mutually perpendicular vectors, therefore $\vec p \cdot \vec q = \vec q \cdot \vec r = \vec p \cdot \vec r = 0$. Then I solved the given equation using these data and got:</p> <p>$3a^2 \vec x +((\vec p\cdot\vec x)- a^2)\vec p +((\vec q\cdot\vec x)- a^2)\vec q+((\vec r\cdot\vec x)- a^2)\vec r= 0$</p> <p>How should I proceed after this to find $\vec x$ ? Thanks.</p>
Asaf Karagila
622
<p>Let me first clarify the issue with an inaccessible, since it seems to me that my words were somewhat misunderstood.</p> <ol> <li><p>I only mentioned inaccessible cardinals, since it is a plausible axiom, and if you believe it to be consistent, then of course there will be a (standard) model of <span class="math-container">$\sf ZFC$</span>. And in that case we cannot prove <span class="math-container">$\sf\lnot\operatorname{Con}(ZFC)$</span>.</p> <p>What I didn't say is that &quot;If the consistency of <span class="math-container">$\sf ZFC$</span> implies the consistency of <span class="math-container">$\sf ZFC+I$</span> then ...&quot; because we can easily show that this cannot be the case, unless <span class="math-container">$\sf ZFC$</span> was inconsistent to begin with.</p> </li> <li><p>It is not an open question whether or not inaccessible cardinals are consistent. We know that this statement cannot be proved. Whether or not inaccessible cardinals are <em>inconsistent</em> is an open question, but I don't think any set theorist today believes that.</p> </li> </ol> <p>The point is that in some mathematical universes, <span class="math-container">$\sf ZFC$</span> is consistent, but <span class="math-container">$\sf ZFC+\operatorname{Con}(ZFC)$</span> is not consistent. In those universes, <span class="math-container">$\sf ZFC$</span> <em>proves</em> its own inconsistency, which sounds weird, and indeed those universes will necessarily disagree with the meta-theory about the integers, so as far as models of set theory go, they will be non-standard models.</p> <p>But in such mathematical universe, <span class="math-container">$\sf ZFC\vdash\lnot\operatorname{Con}(ZFC)$</span>. And therefore, for <em>any</em> statement <span class="math-container">$\varphi$</span>, <span class="math-container">$\sf ZFC\vdash\operatorname{Con}(ZFC)\rightarrow\varphi$</span>. 
Simply because a false premise implies everything.</p> <p>Finally, we can relax the assumption on <span class="math-container">$\sf\operatorname{Con}(ZFC)$</span> and essentially require that <span class="math-container">$\sf\operatorname{Con}(ZFC)$</span> is not refutable. This follows from both <span class="math-container">$\omega$</span>-consistency and <span class="math-container">$\Sigma^0_1$</span>-soundness.</p>
32,809
<p>Is it possible to define new graphics directives?</p> <p>For example, suppose I want to be able to use the following code:</p> <pre><code>Graphics[{ BigPointSize[0.07], SmallPointSize[0.04], Red, BigPoint[{1,1}], BigPoint[{1,3}], SmallPoint[{3,1}], Blue, SmallPoint[{2,2}], SmallPoint[{3,2}], BigPoint[{0,0}] }] </code></pre> <p>Is there any way to define <code>BigPointSize</code>, <code>SmallPointSize</code>, <code>BigPoint</code>, and <code>SmallPoint</code> so that this code will work as intended? Ideally <code>BigPointSize</code> and <code>SmallPointSize</code> should have all of the functionality of other graphics directives, e.g. scoping inside of lists, and the ability to call the command multiple times within the same list.</p> <p>(Obviously it's possible to draw these points in other ways, but I'm curious whether it's possible to get this <em>syntax</em> to work.)</p> <p><strong>Edit:</strong> Just to clarify, I would like <code>BigPointSize</code> and <code>SmallPointSize</code> to work the same way as PointSize and other graphics directives. For example, the code</p> <pre><code>Graphics[{ BigPointSize[0.1], { BigPointSize[0.05], BigPoint[{0,0}] }, BigPoint[{1,0}] }] </code></pre> <p>should produce one point of size <code>0.05</code> and one point of size <code>0.1</code>.</p>
Vahagn Poghosyan
9,567
<pre><code>BigPointSizeValue = 1;
BigPointSize[s_] := (BigPointSizeValue = s;)
BigPoint[p_] := {PointSize[BigPointSizeValue], Point[p]}

SmallPointSizeValue = 1;
SmallPointSize[s_] := (SmallPointSizeValue = s;)
SmallPoint[p_] := {PointSize[SmallPointSizeValue], Point[p]}
</code></pre>
32,809
<p>Is it possible to define new graphics directives?</p> <p>For example, suppose I want to be able to use the following code:</p> <pre><code>Graphics[{ BigPointSize[0.07], SmallPointSize[0.04], Red, BigPoint[{1,1}], BigPoint[{1,3}], SmallPoint[{3,1}], Blue, SmallPoint[{2,2}], SmallPoint[{3,2}], BigPoint[{0,0}] }] </code></pre> <p>Is there any way to define <code>BigPointSize</code>, <code>SmallPointSize</code>, <code>BigPoint</code>, and <code>SmallPoint</code> so that this code will work as intended? Ideally <code>BigPointSize</code> and <code>SmallPointSize</code> should have all of the functionality of other graphics directives, e.g. scoping inside of lists, and the ability to call the command multiple times within the same list.</p> <p>(Obviously it's possible to draw these points in other ways, but I'm curious whether it's possible to get this <em>syntax</em> to work.)</p> <p><strong>Edit:</strong> Just to clarify, I would like <code>BigPointSize</code> and <code>SmallPointSize</code> to work the same way as PointSize and other graphics directives. For example, the code</p> <pre><code>Graphics[{ BigPointSize[0.1], { BigPointSize[0.05], BigPoint[{0,0}] }, BigPoint[{1,0}] }] </code></pre> <p>should produce one point of size <code>0.05</code> and one point of size <code>0.1</code>.</p>
Vahagn Poghosyan
9,567
<pre><code>Protect[BigPointSize, BigPoint, SmallPointSize, SmallPoint];

UserGraphics[gr_, opt : OptionsPattern[Graphics]] :=
 Module[{BigPointSizeStack, BigPointSizeExec, BigPointExec,
   SmallPointSizeStack, SmallPointSizeExec, SmallPointExec,
   TempList, TempListExec},
  BigPointSizeStack = {0.1}; (* default value *)
  BigPointSizeExec[s_] := (BigPointSizeStack[[-1]] = s; {});
  BigPointExec[p_] := {PointSize[BigPointSizeStack[[-1]]], Point[p]};
  SmallPointSizeStack = {0.1}; (* default value *)
  SmallPointSizeExec[s_] := (SmallPointSizeStack[[-1]] = s; {});
  SmallPointExec[p_] := {PointSize[SmallPointSizeStack[[-1]]], Point[p]};
  TempListExec[x___] := Module[{retval},
    AppendTo[BigPointSizeStack, BigPointSizeStack[[-1]]];
    AppendTo[SmallPointSizeStack, SmallPointSizeStack[[-1]]];
    retval = {x} /. {
       BigPointSize -&gt; BigPointSizeExec,
       SmallPointSize -&gt; SmallPointSizeExec,
       BigPoint -&gt; BigPointExec,
       SmallPoint -&gt; SmallPointExec};
    BigPointSizeStack = Delete[BigPointSizeStack, -1];
    SmallPointSizeStack = Delete[SmallPointSizeStack, -1];
    retval];
  Graphics[(gr //. x_List :&gt; TempList @@ x) /. TempList :&gt; TempListExec, opt]]

gr = {Line[{{0, -1}, {6, 0}}], BigPoint[{1, 0}], BigPointSize[0.02],
   BigPoint[{2, 0}], {BigPointSize[0.03], BigPoint[{3, 0}], Red,
    BigPointSize[0.04], BigPoint[{4, 0}]}, BigPoint[{5, 0}]};
UserGraphics[gr, Frame -&gt; True, PlotRange -&gt; {{0, 6}, {-1, 1}}]
</code></pre> <p>I could hardly believe it, but I wrote it, and it works!</p> <p>I tried to use obvious tricks and commands, which can easily be found in the documentation. The function <code>UserGraphics</code> supports all options and properties of Graphics. It also supports the <code>BigPointSize</code>, <code>SmallPointSize</code> directives, and the <code>BigPoint</code>, <code>SmallPoint</code> primitives.</p> <p>This is a first version, so any bugs/questions/comments/remarks found are welcome!</p>
4,020,986
<blockquote> <p>For every <span class="math-container">$n \in \mathbb{N}$</span> denote <span class="math-container">$x_n=(n,0) \in \mathbb{R^2}.$</span> Show that the set <span class="math-container">$\mathbb{R^2}Β \setminus \{x_n \mid n \in \mathbb{N} \}$</span> is an open subset of the plane.</p> </blockquote> <p>The set <span class="math-container">$\mathbb{R^2}Β \setminus \{x_n \mid n \in \mathbb{N} \}$</span> is the plane excluding the positive <span class="math-container">$x-$</span>axis? It seems that I cannot use the definition of a ball here to conclude that the set would be open. Other definition that I know states that the union of open sets is open also, but it doesn't seem applicable here also. What other definitions might I use here?</p>
lhf
589
<p>Consider the function <span class="math-container">$\mathbb R^2 \to \mathbb R^3$</span> given by <span class="math-container">$f(x,y)=(\sin(x \pi),x,y)$</span>. Then <span class="math-container">$f$</span> is continuous.</p> <p>Let <span class="math-container">$C=\{0\} \times [0,\infty) \times \{0\}$</span>. Then <span class="math-container">$C$</span> is closed in <span class="math-container">$\mathbb R^3$</span>.</p> <p>Therefore, <span class="math-container">$f^{-1}(C)=\mathbb N \times \{0\}$</span> is closed in <span class="math-container">$\mathbb R^2$</span> and so its complement is open.</p> <p>(I assume that <span class="math-container">$0 \in \mathbb N$</span>. Otherwise, take <span class="math-container">$[1,\infty)$</span> instead of <span class="math-container">$[0,\infty)$</span>.)</p>
4,020,986
<blockquote> <p>For every <span class="math-container">$n \in \mathbb{N}$</span> denote <span class="math-container">$x_n=(n,0) \in \mathbb{R^2}.$</span> Show that the set <span class="math-container">$\mathbb{R^2}Β \setminus \{x_n \mid n \in \mathbb{N} \}$</span> is an open subset of the plane.</p> </blockquote> <p>The set <span class="math-container">$\mathbb{R^2}Β \setminus \{x_n \mid n \in \mathbb{N} \}$</span> is the plane excluding the positive <span class="math-container">$x-$</span>axis? It seems that I cannot use the definition of a ball here to conclude that the set would be open. Other definition that I know states that the union of open sets is open also, but it doesn't seem applicable here also. What other definitions might I use here?</p>
Henno Brandsma
4,280
<p>As <span class="math-container">$\Bbb N$</span> and <span class="math-container">$\{0\}$</span> are closed in <span class="math-container">$\Bbb R$</span>, <span class="math-container">$\Bbb N \times \{0\}$</span> is closed in <span class="math-container">$\Bbb R^2$</span> and so your set, which is the complement of that set, is open.</p>
3,001,747
<p>I'd need some help evaluating this limit:</p> <p><span class="math-container">$$\lim_{x \to 0} \frac{\ln\sin mx}{\ln \sin x}$$</span></p> <p>I know it's supposed to equal 1 but I'm not sure how to get there.</p>
user
505,767
<p><strong>HINT</strong></p> <p>We need <span class="math-container">$x&gt;0$</span> and <span class="math-container">$m&gt;0$</span>; then by standard limits</p> <p><span class="math-container">$$\frac{\ln\sin mx}{\ln \sin x}=\frac{\ln\frac{\sin mx}{mx}+\ln (mx)}{\ln \frac{\sin x}x+\ln x}=\frac{\ln\frac{\sin mx}{mx}+\ln x+\ln m}{\ln \frac{\sin x}x+\ln x}$$</span></p> <p>then recall that</p> <ul> <li><span class="math-container">$\frac{\sin mx}{mx} \to 1 \implies \ln\frac{\sin mx}{mx}\to 0$</span></li> <li><span class="math-container">$\frac{\sin x}{x} \to 1 \implies \ln\frac{\sin x}{x}\to 0$</span></li> </ul> <p>and then both numerator and denominator are dominated by the <span class="math-container">$\ln x$</span> term (which tends to <span class="math-container">$-\infty$</span>), so the ratio tends to <span class="math-container">$1$</span>.</p>
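The convergence here is only logarithmic — from the decomposition above, the ratio behaves like $1 + \ln m/\ln x$ — so a numeric check (my own sketch, with $m=5$ chosen arbitrarily) needs extremely small $x$ to get close to $1$:

```python
import math

def ratio(x, m):
    return math.log(math.sin(m * x)) / math.log(math.sin(x))

m = 5.0
# sin x equals x to double precision at these scales, so this is safe to evaluate.
vals = [ratio(10.0 ** (-p), m) for p in (5, 50, 300)]
```

Even at $x = 10^{-50}$ the ratio is only about $0.986$; at $x = 10^{-300}$ it is within $1\%$ of the limit.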
3,001,747
<p>I'd need some help evaluating this limit:</p> <p><span class="math-container">$$\lim_{x \to 0} \frac{\ln\sin mx}{\ln \sin x}$$</span></p> <p>I know it's supposed to equal 1 but I'm not sure how to get there.</p>
Mostafa Ayaz
518,023
<p>According to L'Hôpital's rule we have <span class="math-container">$$\lim_{x \to 0} \frac{\ln\sin mx}{\ln \sin x}=\lim_{x \to 0} \frac{m\cos mx}{\cos x}\cdot\frac{\sin x}{\sin mx}=m \cdot 1 \cdot \frac{1}{m}=1$$</span> since <span class="math-container">$\frac{\sin x}{\sin mx} \to \frac{1}{m}$</span> as <span class="math-container">$x \to 0$</span>.</p>
347,226
<p>If $V$ is a vector space over the field $F$ then verify that </p> <p>$$(\alpha_1+ \alpha_2)+(\alpha_3+\alpha_4)=[\alpha_2+(\alpha_3+\alpha_1)]+\alpha_4$$</p> <p>for all the vectors $\alpha_1,\alpha_2,\alpha_3,\alpha_4$ in $V$?</p> <p>i think we will use properties of vector spaces like associative,cummutative,etc,,,, but i dont how to get it</p>
cthl
69,820
<p>This is just the associativity and commutativity of the addition: In your example you would use the associativity in the following way: $(\alpha_1+\alpha_2)+(\alpha_3+\alpha_4)=((\alpha_1+\alpha_2)+\alpha_3)+\alpha_4=[\alpha_1+(\alpha_2+\alpha_3)]+\alpha_4=(\ast)$. Now we use commutativity and get $(\ast)=[\alpha_1+(\alpha_3+\alpha_2)]+\alpha_4=(\ast\ast)$ and associativity to get $(\ast\ast)=[(\alpha_1+\alpha_3)+\alpha_2]+\alpha_4=(\ast\ast\ast)$. Now we just have to use commutativity twice to get the final result $(\ast\ast\ast)=[(\alpha_3+\alpha_1)+\alpha_2]+\alpha_4=[\alpha_2+(\alpha_3+\alpha_1)]+\alpha_4$. I hope that helps a bit.</p>
347,226
<p>If $V$ is a vector space over the field $F$ then verify that </p> <p>$$(\alpha_1+ \alpha_2)+(\alpha_3+\alpha_4)=[\alpha_2+(\alpha_3+\alpha_1)]+\alpha_4$$</p> <p>for all the vectors $\alpha_1,\alpha_2,\alpha_3,\alpha_4$ in $V$?</p> <p>i think we will use properties of vector spaces like associative,cummutative,etc,,,, but i dont how to get it</p>
Eleven-Eleven
61,030
<p>$(\alpha_1+\alpha_2)+(\alpha_3+\alpha_4)$ = $[(\alpha_1+\alpha_2)+\alpha_3]+\alpha_4$ = $[(\alpha_2+\alpha_1)+\alpha_3]+\alpha_4$ = $[\alpha_2+(\alpha_1+\alpha_3)]+\alpha_4$ = $[\alpha_2+(\alpha_3+\alpha_1)]+\alpha_4$</p>
57,213
<p>Let <span class="math-container">$A \in \mathbb{Q}^{6 \times 6}$</span> be the block matrix below:</p> <p><span class="math-container">$$A=\left(\begin{array}{rrrr|rr} -3 &amp;3 &amp;2 &amp;2 &amp; 0 &amp; 0\\ -1 &amp;0 &amp;1 &amp;1 &amp; 0 &amp; 0\\ -1&amp;0 &amp;0 &amp;1 &amp; 0 &amp; 0\\ -4&amp;6 &amp;4 &amp;3 &amp; 0 &amp; 0\\ \hline 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp;1 \\ 0 &amp; 0 &amp; 0 &amp; 0 &amp; -9 &amp;6 \end{array}\right).$$</span></p> <p>I found out that the minimal polynomial of <span class="math-container">$A$</span> is <span class="math-container">$(x-3)^3(x+1)^2$</span>, and now let</p> <p><span class="math-container">$$f(x)=2x^9+x^8+5x^3+x+a$$</span></p> <p>be a polynomial, <span class="math-container">$a\in \mathbb{N}$</span>. I need to find out for which <span class="math-container">$a$</span> the matrix <span class="math-container">$f(A)$</span> is invertible.</p> <p>It has some similarity to <a href="https://math.stackexchange.com/questions/57123/prove-that-if-gt-is-relatively-prime-to-the-characteristic-polynomial-of-a">my last question</a>, but I still can't understand and solve it. Thanks again.</p>
Pierre-Yves Gaillard
660
<p><strong>Hint</strong></p> <p>Let $A$ be a square matrix with coefficients in a field $K$, and let $g$ be its minimal polynomial. </p> <p>Then the epimorphism $K[X]\to K[A]$, $f\mapsto f(A)$ induces an isomorphism $K[X]/(g)\to K[A]$. </p> <p>Assume that $g$ splits over $K$. </p> <p>Then the Chinese Remainder Theorem says that this algebra is isomorphic to the product of the $K[X]/(X-\lambda)^{m(\lambda)}$, where $\lambda$ is a root of $g$ and $m(\lambda)$ its multiplicity. </p> <p>Moreover the natural morphism from $K[X]$ to $K[X]/(X-\lambda)^{m(\lambda)}$ attaches to $f\in K[X]$ its Taylor polynomial of degree $&lt; m(\lambda)$ at $\lambda$. </p> <p><strong>EDIT 1.</strong> The interest of the above observation (which is of course entirely classical) is that it gives you a formula for $f(A)$. [If $K=\mathbb C$ the formula holds also for the functions which are holomorphic on the spectrum --- like the exponential.]</p> <p>[Technical point: In positive characteristic, $$\frac{f^{(n)}}{n!}$$ is <strong>not</strong> defined as $f^{(n)}$ divided by $n!$ (Here $f$ is in $K[X]$.)]</p> <p><strong>EDIT 2.</strong> This is to explain how Andrea's very nice answer can be obtained in this setting. Once you've noticed the isomorphism $K[A]=K[X]/(g)$, it's clear that $f(A)$ is invertible iff $f$ is invertible mod $g$, iff $f$ is prime to $g$. </p>
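To make the hint concrete for this particular problem: since the minimal polynomial is $(x-3)^3(x+1)^2$, $f(A)$ is invertible iff $f$ is prime to it, i.e. iff $f(3)\neq 0$ and $f(-1)\neq 0$. A quick sketch in Python, used here only as a calculator (the scan range for $a$ is an arbitrary choice):

```python
def f(x, a):
    # f(x) = 2x^9 + x^8 + 5x^3 + x + a over the integers
    return 2 * x**9 + x**8 + 5 * x**3 + x + a

# f(A) is singular exactly when f vanishes at a root of the minimal polynomial
bad = [a for a in range(1, 10**5) if f(3, a) == 0 or f(-1, a) == 0]
print(bad)  # → [7], since f(-1, a) = a - 7 while f(3, a) = 46065 + a > 0
```

So among natural numbers only $a=7$ makes $f(A)$ singular.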
1,364,430
<p><strong>Problem</strong></p> <p>How many of the numbers in $A=\{1!,2!,...,2015!\}$ are square numbers?</p> <p><strong>My thoughts</strong></p> <p>I have no idea where to begin. I see no immediate connection between a factorial and a possible square. Much less for such ridiculously high numbers as $2015!$.</p> <p>Thus, the only one I can immediately see is $1! = 1^2$, which is trivial to say the least.</p>
ThePortakal
137,487
<p><strong>Hint:</strong> (for example) $13!, 14!, \dots , 25!$ are all nonsquare numbers because all of them are divisible by $13$ only once. (because $13$ is a prime)</p> <p>Similarly, $17!, 18!, \dots, 33!$ are nonsquare numbers.</p> <p>Go on like this.</p>
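Following the hint to its conclusion, only $1!$ is a perfect square; a brute-force confirmation (a computational check, not a proof) with exact integer arithmetic:

```python
from math import isqrt

count = 0
f = 1
for n in range(1, 2016):
    f *= n                 # f = n!
    r = isqrt(f)
    if r * r == f:         # exact perfect-square test for big integers
        count += 1
print(count)  # → 1 (only 1! = 1 is a square)
```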
117,619
<p>I need to evaluate the following real convergent improper integral using residue theory (it is vital that I use residue theory, so other methods are not needed here). I also need to use the following contour (specifically a keyhole contour to exclude the branch cut):</p> <p><a href="https://i.stack.imgur.com/4wwwj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4wwwj.png" alt=""></a></p> <p>$$\int_0^\infty \frac{\sqrt{x}}{x^3+1}\ \mathrm dx$$</p>
Amir Alizadeh
53,185
<p>Closed form for this type of integral: $$ \int_0^{\infty} x^{\alpha-1}Q(x)dx =\frac{\pi}{\sin(\alpha \pi)} \sum_{i=1}^{n} \,\text{Res}_i\big((-z)^{\alpha-1}Q(z)\big) $$ $$ I=\int_0^\infty \frac{\sqrt{x}}{x^3+1} dx \rightarrow \alpha-1=\frac{1}{2} \rightarrow \alpha=\frac{3}{2}$$ $$ g(z) =(-z)^{\alpha-1}Q(z) =\frac{(-z)^{\frac{1}{2}}}{z^3+1} =\frac{i \sqrt{z}}{z^3+1}$$ $$ z^3+1=0 \rightarrow \hspace{8mm }z^3=-1=e^{i \pi} \rightarrow \hspace{8mm }z_k=e^{i\frac{\pi+2k \pi}{3}} $$ $$z_k= \begin{cases} k=0 &amp; z_1=e^{i \frac{\pi}{3}}=\frac{1}{2}+i\frac{\sqrt{3}}{2} \\ k=1 &amp; z_2=e^{i \pi}=-1 \\k=2 &amp; z_3=e^{i \frac{5 \pi}{3}}=\frac{1}{2}-i\frac{\sqrt{3}}{2} \end{cases}$$ $$R_1=\text{Residue}\big(g(z),z_1\big)=\frac{i \sqrt{z_1}}{(z_1-z_2)(z_1-z_3)}$$ $$R_2=\text{Residue}\big(g(z),z_2\big)=\frac{i \sqrt{z_2}}{(z_2-z_1)(z_2-z_3)}$$ $$R_3=\text{Residue}\big(g(z),z_3\big)=\frac{i \sqrt{z_3}}{(z_3-z_2)(z_3-z_1)}$$ $$ I=\frac{\pi}{\sin\left( \frac{3}{2} \pi\right)} (R_1+R_2+R_3) = \frac{\pi}{-1} \left(\frac{-1}{3}\right)=\frac{\pi}{3}$$</p> <p>Matlab program:</p> <pre><code>syms x
f=sqrt(x)/(x^3+1);
int(f,0,inf)

ans =
pi/3
</code></pre> <p>Compute R1, R2, R3 with Matlab:</p> <pre><code>z1=exp(i*pi/3);
z2=exp(i*pi);
z3=exp(5*i*pi/3);
R1=i*sqrt(z1)/((z1-z2)*(z1-z3));
R2=i*sqrt(z2)/((z2-z1)*(z2-z3));
R3=i*sqrt(z3)/((z3-z2)*(z3-z1));
I=(-pi)*(R1+R2+R3);
</code></pre>
283,747
<p>Let $BG$ denote the classifying space of a finite group $G$. For which group cohomology classes $c\in H^2(G;\mathbb{Z}/2)$ does there exist a real vector bundle $E$ over $BG$ such that $w_2(E)=c$?</p>
Matthias Wendt
50,846
<p>I don't know of a general criterion, but here are two relevant references.</p> <p>1) Several statements concerning the relation between the cohomology ring and the subring of Stiefel-Whitney classes are discussed in: </p> <ul> <li>P. Guillot. The computation of Stiefel-Whitney classes. Annales de l'institut Fourier, Volume 60 (2010) no. 2 , p. 565-606.</li> </ul> <p>The basic punchline seems to be that for most groups, the cohomology ring is generated by Stiefel-Whitney classes of real representations.</p> <p>2) There is also </p> <ul> <li>J. Gunarwardena, B. Kahn and C.B. Thomas. Stiefel-Whitney classes of real representations of finite groups. J. Algebra 126 (1989), 327-347. </li> </ul> <p>This paper contains some example calculations. In particular, on pp. 337-338, there is an explicit example group where $H^2$ is not generated by Stiefel-Whitney classes of real representations. </p> <p>There are some further papers, by Gunarwardena, Kahn, (and probably more that I'm not aware of right now) computing Stiefel-Whitney classes of the regular representation. But again, I don't know of a general criterion. Probably, if you have a specific group, not too large, the methods in the above papers might help deciding the question if all of $H^2$ is generated by Stiefel-Whitney classes.</p>
2,647,123
<p>I'm asked to to find a $3\times3$ matrix, in which no entry is $0$ but $A^2=0$. </p> <p>The problem is if I I brute force it, I am left with a system of 6 equations (Not all of which are linear...) and 6 unknowns. Whilst I could in theory solve that, is there more intuitive way of solving this problem or am I going to have to brute force the solution?</p> <p>Any suggestions would be greatly appreciated.</p>
Focus
516,040
<p>There's an idea. We can try to put $A = \alpha \beta'$, where $\alpha,\beta$ are $3 \times 1$ matrix.</p> <p>Then, $A^2 = \alpha \beta' \alpha \beta'$, so if $\alpha,\beta \ne 0$(all the components are not $0$),$\beta' \alpha = 0$ things will be solved.</p> <p>So,we could let</p> <p>$\alpha = \begin{pmatrix} 1 \\ 1\\ 1 \end{pmatrix}, \beta = \begin{pmatrix} 1 \\ 1 \\ -2 \end{pmatrix}$</p> <p>and $A = \begin{pmatrix} 1 &amp; 1 &amp; -2 \\ 1 &amp; 1 &amp; -2 \\ 1 &amp; 1 &amp; -2 \end{pmatrix}$</p>
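A quick sanity check of this construction, in plain Python (no libraries assumed):

```python
A = [[1, 1, -2],
     [1, 1, -2],
     [1, 1, -2]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

A2 = matmul(A, A)
assert all(entry != 0 for row in A for entry in row)   # no zero entries in A
print(A2)  # → [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
```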
2,873,520
<p>I want to find out how interference of two sine waves can affect the output phase of the interfered wave. </p> <p>Consider two waves,</p> <p>$$ E_1 = \sin(x) \\ E_2 = 2 \sin{(x + \delta)} $$</p> <p>First off, I don't know how to prove it, but I can see visually (plotting numerically) that the sum of these waves looks like a new sine wave. </p> <p>I want to find out what the phase of $E_1 + E_2 $ looks like. First I tried using functions like ArcSin() and Ln() but ran into trouble for both methods. For example when I try ArcSin(Sin[x] + 2*Sin(x - $\delta$)), I get answers that disagree with my numerical answers. </p> <p>Numerically, I solve for zeros and find the one with a positive derivative in a 2-pi region. </p> <p>Now I plot the phase-shift of $E_3$ as a function of $\delta$ (in blue) and compare it to $E_2$ (in purple): </p> <p><a href="https://i.stack.imgur.com/P6Jsc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/P6Jsc.png" alt="enter image description here"></a></p> <p>Is there a "formula" I can use to find this answer without having to trace through one of the zeros? I think the key when using something like ArcSin is using the right normalization (I think it only works for sine of amplitude 1), but I'm not sure exactly the proper way of doing it. </p>
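For what it's worth, the standard phasor-addition identity gives such a phase in closed form: $\sin x + 2\sin(x+\delta) = R\sin(x+\varphi)$ with $R=\sqrt{(1+2\cos\delta)^2+(2\sin\delta)^2}$ and $\varphi=\operatorname{atan2}(2\sin\delta,\,1+2\cos\delta)$. A numeric sketch of that claim (the value of $\delta$ here is arbitrary):

```python
from math import sin, cos, atan2, sqrt

delta = 0.7
# phasor addition: sin(x) + 2 sin(x + delta) = R sin(x + phi)
R = sqrt((1 + 2 * cos(delta))**2 + (2 * sin(delta))**2)
phi = atan2(2 * sin(delta), 1 + 2 * cos(delta))

for i in range(100):
    x = 0.1 * i
    assert abs((sin(x) + 2 * sin(x + delta)) - R * sin(x + phi)) < 1e-12
print(R, phi)
```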
Siong Thye Goh
306,553
<p>$$L= a_0(x) + a_1(x)\frac{d}{dx}+ \ldots + a_n(x) \frac{d^n}{dx^n} $$</p> <p>is known as a linear <a href="https://en.wikipedia.org/wiki/Differential_operator" rel="nofollow noreferrer">differential operator</a>.</p> <p>We have $$Ly= a_0(x)y + a_1(x)\frac{dy}{dx}+ \ldots + a_n(x) \frac{d^ny}{dx^n} $$</p> <p>$$a_0(x)y+a_1(x)y'+\ldots a_n(x)y^{(n)}+b(x)=0$$ is a <a href="https://en.wikipedia.org/wiki/Linear_differential_equation#Linear_differential_operator" rel="nofollow noreferrer">linear differential equation</a>.</p> <p>when the $a_i(x)$ is independent of $x$, we describe them as constant coefficients.</p>
3,239,540
<p><span class="math-container">$$S=\sum_{k=2}^{n}\frac{k^{2}-2}{k!}, n\geq 2$$</span></p> <p>I got <span class="math-container">$S=\sum_{k=2}^{n}\frac{1}{(k-2)!}+\frac{1}{(k-1)!}-\frac{1}{k!}-\frac{1}{k!}$</span></p> <p>I plug in values of <span class="math-container">$k$</span>, but not all terms vanish. I am left with <span class="math-container">$\frac{1}{1!}+\frac{1}{2!}+...+\frac{1}{(n-2)!}$</span> </p> <p>The sum should be <span class="math-container">$2-e+\frac{1}{1!}+\frac{1}{2!}+...+\frac{1}{(n-2)!}$</span></p>
P. Quinton
586,757
<p>For the sum until the index <span class="math-container">$n$</span>, you can apply the same trick: <span class="math-container">\begin{align*} \sum_{k=2}^n \frac{k^2-2}{k!} &amp;= \sum_{k=2}^n \left[\frac{k (k-1)}{k!} + \frac{k}{k!}-\frac{2}{k!} \right]\\ &amp;= \color{blue}{\sum_{k=2}^n \frac{k (k-1)}{k!}} + \color{orange}{\sum_{k=2}^n\frac{k}{k!}}-\color{green}{\sum_{k=2}^n\frac{2}{k!}}\\ &amp;= \color{blue}{\sum_{k=0}^{n-2} \frac{1}{k!}} + \color{orange}{\sum_{k=0}^{n-2}\frac{1}{k!} - 1 + \frac{1}{(n-1)!}}- \color{green}{2\sum_{k=0}^{n-2}\frac{1}{k!} - 2 - 2 + \frac{2}{n!} + \frac{2}{(n-1)!}}\\ &amp;= 3-\frac{2}{n!}-\frac{1}{(n-1)!} \end{align*}</span></p> <p>When <span class="math-container">$n$</span> tends to <span class="math-container">$\infty$</span>, this converges to <span class="math-container">$3$</span>.</p>
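The closed form can be confirmed exactly with rational arithmetic; a small sketch:

```python
from fractions import Fraction
from math import factorial

def lhs(n):
    return sum(Fraction(k * k - 2, factorial(k)) for k in range(2, n + 1))

def rhs(n):
    return 3 - Fraction(2, factorial(n)) - Fraction(1, factorial(n - 1))

assert all(lhs(n) == rhs(n) for n in range(2, 30))
print(float(lhs(29)))  # approaches 3 as n grows
```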
1,456,444
<p>How can I go about solving this Pigeonhole Principle problem? </p> <p>So I think the possible numbers would be: $[3+12], [4+11], [5+10], [6+9], [7+8]$</p> <p>I am trying to put this in words...</p>
Yiyuan Lee
104,919
<p>The sentence you want is probably something along the lines of : "We need to pick $6$ distinct numbers from $5$ pairs of numbers. By the pigeonhole principle, both numbers in at least one of the pairs must be picked."</p>
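Assuming the underlying exercise is the usual one, namely choose $6$ distinct numbers from $\{3,\dots,12\}$ and show two of them sum to $15$ (which is what the five listed pairs suggest), an exhaustive check mirrors the pigeonhole argument:

```python
from itertools import combinations

pairs = [(3, 12), (4, 11), (5, 10), (6, 9), (7, 8)]   # the 5 pigeonholes

# every 6-element selection (6 pigeons) must contain a complete pair
ok = all(any(a in pick and b in pick for a, b in pairs)
         for pick in map(set, combinations(range(3, 13), 6)))
print(ok)  # → True
```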
2,218,716
<p>Explain why 1/i is βˆ’ i. </p> <p>(That is: explain why the multiplicative inverse of i is the complex number βˆ’ i.)</p> <p>And then the hint that I was given was, what property defines the multiplicative inverse?</p> <p>I know how to algebraically prove 1/i = -i, but need help writing the proof.</p>
Isaac Browne
429,987
<p>$i^2=-1$ is the definition of $i$. Thus, $$i\cdot(-i)=-(-1)=1.$$ The multiplicative inverse is simply whatever you multiply by to get the identity, thus it is proved!</p>
3,249,735
<p>My question relates to this problem:</p> <p>Prove by induction that 54 divides <span class="math-container">$2^{2k+1}-9k^2+3k-2$</span>. </p> <p>My solving so far gives this (after all calculations):</p> <p><span class="math-container">$2^{2(k+1)+1}-9(k+1)^2+3(k+1)-2= 4\left(2^{2k+1}-9k^2+3k-2\right)+27k^2-27k$</span></p> <p>It is obvious that <span class="math-container">$27=\frac{1}{2}\cdot 54$</span> divides <span class="math-container">$27k^2-27k$</span>, but how do I figure out whether 54 divides it too? The end result is correct (checked!) </p>
gt6989b
16,192
<p><strong>HINT</strong></p> <p>Note that the first and last terms are even and hence are divisible by 2. The middle two are really <span class="math-container">$$ 3k-9k^2 = 3k(1-3k), $$</span> and the factors always have different parity, hence one of them is always even as well.</p>
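Both the divisibility claim and the parity observation in the hint are easy to spot-check numerically; a sketch:

```python
# 54 | 2^(2k+1) - 9k^2 + 3k - 2, spot-checked for small k
assert all((2**(2 * k + 1) - 9 * k * k + 3 * k - 2) % 54 == 0
           for k in range(1, 200))

# the hint's parity claim: 3k(1 - 3k) is always even
assert all((3 * k * (1 - 3 * k)) % 2 == 0 for k in range(200))
print("ok")
```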
2,627,131
<blockquote> <p>let <span class="math-container">$f(x)= 1+\sqrt{x+k+1}-\sqrt{x+k} \ \ k \in \mathbb{R}$</span> Number of answers :</p> <p><span class="math-container">$$f(x)=f^{-1}(x) \ \ \ \ :f^{-1}(f(x))=x$$</span></p> </blockquote> <p>MY Try :</p> <p><span class="math-container">$$y=1+\sqrt{x+k+1}-\sqrt{x+k} \\( y-1)^2=x+k+1-x-k-2\sqrt{(x+k+1)(x+k)}\\(y-1)^2+k-1=-2\sqrt{(x+k+1)(x+k)}\\ ((y-1)^2+k-1)^2=4(x^2+x(2k+1)+k^2+k)$$</span></p> <p>now what do i do ?</p>
xpaul
66,420
<p>Let $y=f(x)=f^{-1}(x)$ and then \begin{eqnarray} y=1+\sqrt{x+k+1}-\sqrt{x+k},\tag{1}\\ x=1+\sqrt{y+k+1}-\sqrt{y+k}.\tag{2} \end{eqnarray} Subtracting (1) from (2) gives \begin{eqnarray} x-y&amp;=&amp;(\sqrt{y+k+1}-\sqrt{x+k+1})-(\sqrt{y+k}-\sqrt{x+k})\\ &amp;=&amp;\frac{y-x}{\sqrt{y+k+1}+\sqrt{x+k+1}}-\frac{y-x}{\sqrt{y+k}+\sqrt{x+k}} \end{eqnarray} and hence $$ (x-y)\left[1+\frac{1}{\sqrt{y+k+1}+\sqrt{x+k+1}}-\frac{1}{\sqrt{y+k}+\sqrt{x+k}}\right]=0.$$ Thus one has either $$ x=y $$ or $$ 1+\frac{1}{\sqrt{y+k+1}+\sqrt{x+k+1}}=\frac{1}{\sqrt{y+k}+\sqrt{x+k}}\tag{3}. $$ But (1) and (2) force $x&gt;1$ and $y&gt;1$, so $$\frac{1}{\sqrt{y+k}+\sqrt{x+k}}&lt;\frac{1}{2\sqrt{1+k}}\le 1$$ at least for $k\ge-\frac{3}{4}$; hence (3) cannot hold and one must have $x=y$, i.e. $$ x=1+\sqrt{x+k+1}-\sqrt{x+k}. $$ Squaring twice shows that $x$ is then a real root of $$x(x^3-8x^2+12x-4)-4k(x-1)^2=0.\tag{4}$$ The number of answers depends on how many real solutions of (4) satisfy the original equation: either 1, 2, 3 or 4, depending on $k$.</p>
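A numerical cross-check that the solutions of $f(x)=f^{-1}(x)$ found this way really are fixed points of $f$; here $k=1$ and the root is located by simple bisection (a sketch, not a proof):

```python
from math import sqrt

k = 1.0
def f(x):
    return 1 + sqrt(x + k + 1) - sqrt(x + k)

# bisection on g(x) = f(x) - x, which changes sign on [1, 2]
lo, hi = 1.0, 2.0
for _ in range(80):
    mid = (lo + hi) / 2
    if (f(lo) - lo) * (f(mid) - mid) <= 0:
        hi = mid
    else:
        lo = mid
x = (lo + hi) / 2
assert abs(f(x) - x) < 1e-9       # fixed point of f ...
assert abs(f(f(x)) - x) < 1e-9    # ... hence a solution of f(x) = f^(-1)(x)
print(x)  # ≈ 1.3 for k = 1
```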
2,174,912
<p>I' just so stumped right now. I want to get $x^{n}$ to equal $x^{2n+1}$. Right now I have that: $$(\sqrt{x})^{2n} = x^n$$ But I don't know what to do to x to get: $$x^n = \{\text{something done to $x$}\}^{2n+1}$$</p>
Community
-1
<p>$x^n=\left(x^{\frac{n}{2n+1}}\right)^{2n+1}$ for all $n\in\Bbb R\setminus\{-\frac{1}{2}\}$ (and for $x&gt;0$, so that the fractional power is defined).</p>
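A floating-point spot check of the intended identity $x^n=\bigl(x^{\frac{n}{2n+1}}\bigr)^{2n+1}$ for $x>0$; a sketch only:

```python
for x in (0.5, 2.0, 7.3):
    for n in (1, 2, 5, 10):
        lhs = x**n
        rhs = (x**(n / (2 * n + 1)))**(2 * n + 1)
        assert abs(lhs - rhs) <= 1e-9 * max(1.0, lhs)
print("ok")
```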
2,197,967
<p>Can someone explain how the RHS is obtained? I checked with sample numbers and it is all correct, but I can't figure out how C(12,6) comes into play. $$ \binom{12}{0} + \binom{12}{1} + \binom{12}{2} + \binom{12}{3} + \binom{12}{4} + \binom{12}{5} = (2^{12} - \binom{12}{6}) / 2 $$</p>
Mythomorphic
152,277
<p>Suppose we toss a fair coin $12$ times. The number of possible outcomes is $2^{12}$.</p> <p>Now notice that the number of outcomes $N(H&gt;T)$ with more Heads than Tails is the same as the number with more Tails than Heads, $N(H&lt;T)$, by symmetry.</p> <p>We know $N(H&gt;T)$ is the number of outcomes with fewer than $6$ Tails, which is $$N=\binom{12}{0} + \binom{12}{1} + \binom{12}{2} + \binom{12}{3} + \binom{12}{4} + \binom{12}{5}$$</p> <p>We also know that the number of outcomes with equally many Heads and Tails is ${12}\choose{6}$. </p> <p>Since every outcome falls into exactly one of these three cases, $$2N+{{12}\choose{6}}=2^{12}$$</p> <p>The claim follows.</p>
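The identity itself is quick to confirm with exact integer arithmetic:

```python
from math import comb

lhs = sum(comb(12, k) for k in range(6))
rhs = (2**12 - comb(12, 6)) // 2
print(lhs, rhs)  # → 1586 1586
assert lhs == rhs
```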
4,548,329
<p>Find the first derivative of <span class="math-container">$$y=\sqrt[3]{\dfrac{1-x^3}{1+x^3}}$$</span></p> <p>The given answer is <span class="math-container">$$\dfrac{2x^2}{x^6-1}\sqrt[3]{\dfrac{1-x^3}{1+x^3}}$$</span> It is nice and neat, but I am really struggling to write the result exactly in this form. We have <span class="math-container">$$y'=\dfrac13\left(\dfrac{1-x^3}{1+x^3}\right)^{-\frac23}\left(\dfrac{1-x^3}{1+x^3}\right)'$$</span> The derivative of the &quot;inner&quot; function (the last term in <span class="math-container">$y'$</span>) is <span class="math-container">$$\dfrac{-3x^2(1+x^3)-3x^2(1-x^3)}{\left(1+x^3\right)^2}=\dfrac{-6x^2}{(1+x^3)^2},$$</span> so for <span class="math-container">$y'$</span> <span class="math-container">$$y'=-\dfrac13\dfrac{6x^2}{(1+x^3)^2}\left(\dfrac{1+x^3}{1-x^3}\right)^\frac23=-\dfrac{2x^2}{(1+x^3)^2}\left(\dfrac{1+x^3}{1-x^3}\right)^\frac23$$</span> Can we actually leave the answer this way?</p>
Átila Correia
953,679
<p><strong>HINT</strong></p> <p>I would start with noticing that</p> <p><span class="math-container">\begin{align*} y^{3} = \frac{1 - x^{3}}{1 + x^{3}} = -1 + \frac{2}{1 + x^{3}} \Rightarrow 3y^{2}y' = -\frac{6x^{2}}{(1 + x^{3})^{2}} \end{align*}</span></p> <p>Can you take it from here?</p>
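From here one lands on the book's closed form $y'=\frac{2x^2}{x^6-1}\,y$; a finite-difference spot check for $|x|<1$ (where the cube-root argument is positive):

```python
def y(x):
    return ((1 - x**3) / (1 + x**3)) ** (1 / 3)

def dy_closed(x):
    return 2 * x**2 / (x**6 - 1) * y(x)

x, h = 0.5, 1e-6
numeric = (y(x + h) - y(x - h)) / (2 * h)   # central difference
assert abs(numeric - dy_closed(x)) < 1e-6
print(numeric, dy_closed(x))
```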
3,888,259
<p>The special linear group of invertible matrices is defined as the kernel of the determinant map:</p> <p><span class="math-container">$$\det:GL(n,\mathbb{R}) \to \mathbb{R}^*$$</span></p> <p>In my mind the kernel of a linear map is the set of vectors that are mapped to the zero vector. So the kernel above would contain all the matrices that have determinant zero (which doesn't make sense, since the codomain of the function excludes zero)? But isn't the special linear group made of matrices with determinant 1?</p>
JosΓ© Carlos Santos
446,262
<p>But <span class="math-container">$\det$</span> is <em>not</em> a linear map. It is a group homomorphism. Its kernel is<span class="math-container">$$\left\{M\in GL(n,\Bbb R)\,\middle|\,\det(M)=1\right\},$$</span>since <span class="math-container">$1$</span> is the identity element of <span class="math-container">$\bigl(\Bbb R\setminus\{0\},\times\bigr)$</span>.</p>
4,639,011
<p>I'm interested in finding the asymptotic at <span class="math-container">$n\to\infty$</span> of <span class="math-container">$$b_n:= \frac{e^{-n}}{(n-1)!}\int_0^\infty\prod_{k=1}^{n-1}(x+k)\,e^{-x}dx=e^{-n}\int_0^\infty\frac{e^{-x}}{x\,B(n;x)}dx$$</span> Using a consecutive application of Laplace' method, I managed to get <a href="https://artofproblemsolving.com/community/c7h3012973_integral_inequality_concepts" rel="nofollow noreferrer">(here)</a> <span class="math-container">$$b_n\sim(e-1)^{-n}$$</span> but this approach is not rigorous, and I cannot find even next asymptotic term, let alone a full asymptotic series.</p> <p>So, my questions are:</p> <ul> <li>how we can handle beta-function in this (and similar) expressions at <span class="math-container">$n\to\infty$</span></li> <li>whether we can get asymptotic in a rigorous way ?</li> </ul>
Svyatoslav
869,237
<p>This is an attempt to generalize the asymptotic. Namely, to consider a bit more general case: <span class="math-container">$$b_n(\lambda)=e^{-n}\int_0^\infty\frac{x^\lambda \,e^{-x}}{x\,B(n;x)}dx\,,\,\,\lambda\geqslant0\tag{1}$$</span> Using the second approach proposed by @Gary, we get <span class="math-container">$$\sum\limits_{n = 1}^\infty {b_n z^n } = z\int_0^{ + \infty } {\frac{{x^\lambda}}{{(e - z)^{x + 1} }}} dx = \frac{\Gamma(1+\lambda)\,z}{{(e - z)\log^{\lambda+1} (e - z)}}\tag{2}$$</span> For <span class="math-container">$\lambda=0, 1, 2,...$</span> we can obtain the asymptotic in a closed form.</p> <p>For example, for <span class="math-container">$\lambda=1$</span>, denoting <span class="math-container">$\epsilon=e-1-z$</span> and keeping only the singular part <span class="math-container">$$\frac{z}{{(e - z)\log^2 (e - z)}}=\frac{e-1}{\epsilon^2}-\frac{1}{\epsilon}+H_1(\epsilon)\tag{3}$$</span> where <span class="math-container">$H_1(\epsilon)$</span> is analytical in the disk <span class="math-container">$\,|\epsilon|&lt;e-1$</span>.</p> <p>Expanding the singular part of (3) into the series, <span class="math-container">$$[z^n]\left(\frac{e-1}{(e-1-z)^2}-\frac{1}{e-1-z}\right)=\frac{1}{(e-1)^{n+1}n!}\big((n+1)!-n!\big)=\frac{n}{(e-1)^{n+1}}\tag{4}$$</span> It is not clear whether we can find a good approximation for non-integer <span class="math-container">$\lambda$</span>, though the main asymptotic term for arbitrary <span class="math-container">$\lambda$</span> is still <span class="math-container">$\displaystyle\frac{n^\lambda}{(e-1)^{n+\lambda}}$</span>.</p> <p>But for <span class="math-container">$\lambda=k=0, 1, 2, ...$</span> such closed form does exist. To get it, it is convenient to use the first approach proposed by @Gary. 
<span class="math-container">$$b_n (k)= \frac{{e^{ - n} }}{\Gamma (n)}\int_0^{ + \infty } {\frac{\Gamma (x + n)\,x^ke^{ - x}}{\Gamma (x + 1)} dx}=\frac{\partial^k}{\partial\alpha^k}\,\bigg|_{\alpha=0}\frac{{e^{ - n} }}{\Gamma (n)}\int_0^{ + \infty } {\frac{\Gamma (x + n)\,x^ke^{ - x(1-\alpha)}}{\Gamma (x + 1)} dx}\tag{5}$$</span> Changing the order of integration and using Ramanujan's formula <span class="math-container">$$\frac{{e^{ - n} }}{\Gamma (n)}\int_0^{ + \infty } {\frac{\Gamma (x + n)\,x^k}{\Gamma (x + 1)}e^{ - x(1-\alpha)} dx}=\frac{e^{-n}}{\Gamma(n)}\int_0^\infty e^{-s}s^{n-1}ds\bigg(\int_0^\infty\frac{\Big(se^{-(1-\alpha)}\Big)^x}{\Gamma(1+x)}dx\bigg)$$</span> <span class="math-container">$$=\big(e-e^\alpha\big)^{-n}-\int_{-\infty}^\infty\frac{1}{(e+e^{\alpha+y})^n}\frac{dy}{y^2+\pi^2}\tag{6}$$</span> The second term in (6) is exponentially small vs. the first one.</p> <p>For example, for <span class="math-container">$k=1$</span> <span class="math-container">$$\frac{\partial}{\partial\alpha}\,\bigg|_{\alpha=0}\int_{-\infty}^\infty\frac{1}{(e+e^{\alpha+y})^n}\frac{dy}{y^2+\pi^2}$$</span> <span class="math-container">$$=-n\int_{-\infty}^\infty\frac{1}{(e+e^y)^n}\frac{dy}{y^2+\pi^2}+ne\int_{-\infty}^\infty\frac{1}{(e+e^y)^{n+1}}\frac{dy}{y^2+\pi^2}\sim O\left(\frac{n}{e^n}\right)$$</span> It mean that the full asymptotics (if we drop exponentially small terms) is given by <span class="math-container">$$\boxed{\,\,b_n(k)=e^{-n}\int_0^\infty\frac{x^k \,e^{-x}}{x\,B(n;x)}dx\sim\frac{\partial^k}{\partial\alpha^k}\,\bigg|_{\alpha=0}\big(e-e^\alpha\big)^{-n}\,,\,\,k=0, 1, 2, ...\,\,}$$</span> where all terms are valid and should be kept.</p> <p>For several first <span class="math-container">$k$</span> we get <span class="math-container">$$\qquad\qquad b_n(1)=\frac{n}{(e-1)^{n+1}}\,\,\big(in \,agreement \,with \,(4)\,\big)$$</span> <span class="math-container">$$b_n(2)=\frac{n(n+e)}{(e-1)^{n+2}}$$</span> <span 
class="math-container">$$b_n(3)=\frac{n\Big(n^2+3e(n+1)+e^2\Big)}{(e-1)^{n+3}}$$</span> etc.</p> <p>The numeric check is in perfect agreement with the approximation. Exactly the same answer can also be obtained by double application of Laplace' method, though in this case we cannot evaluate the error.</p>
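As an independent check of the $\lambda=1$ case, one can integrate representation (5) numerically with only the standard library and compare with $b_n(1)=n/(e-1)^{n+1}$; here $n=20$ (a sketch, with grid and cutoff chosen by eye):

```python
from math import lgamma, exp, e

n = 20
def integrand(x):
    # e^(-n) Gamma(x+n) x e^(-x) / (Gamma(n) Gamma(x+1)), i.e. (5) with k = 1
    return x * exp(-n + lgamma(x + n) - lgamma(n) - lgamma(x + 1) - x)

# composite Simpson rule on [0, 80]; the integrand is negligible beyond that
N, a, b = 4000, 0.0, 80.0
h = (b - a) / N
s = integrand(a) + integrand(b)
s += 4 * sum(integrand(a + i * h) for i in range(1, N, 2))
s += 2 * sum(integrand(a + i * h) for i in range(2, N, 2))
numeric = s * h / 3

asymptotic = n / (e - 1) ** (n + 1)
assert abs(numeric / asymptotic - 1) < 1e-2
print(numeric, asymptotic)
```

The exponentially small corrections are invisible at this accuracy.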
119,876
<pre><code>Module[{x},
 f@x_ = x;
 p@x_ := x;
 {x, x_, x_ -&gt; x, x_ :&gt; x}
]
?f
?p
</code></pre> <p>gives</p> <pre><code>{x$17312, x$17312_, x_ -&gt; x, x_ :&gt; x}
f[x_]=x
p[x_]:=x
</code></pre> <p>but I'd like to get</p> <pre><code>{x$17312, x$17312_, x$17312_ -&gt; x$17312, x$17312_ :&gt; x$17312}
f[x$17312_]=x$17312
p[x$17312_]:=x$17312
</code></pre> <p>I thought <code>Module[{x}, body_]</code> operates something like the following, which would do what I want:</p> <pre><code>module[{x_Symbol}, body_] := ReleaseHold[Hold@body /. x -&gt; Unique@x];
SetAttributes[module, HoldAll];
module[{x},
 f@x_ = x;
 p@x_ := x;
 {x, x_, x_ -&gt; x, x_ :&gt; x}
]
?f
?p
</code></pre> <p>I guess there are some cases with nested scoping constructs that need to be considered for special treatment, but why can't it do the replacement in <code>Set, SetDelayed, Rule, RuleDelayed</code>?</p> <hr> <p>Motivation</p> <p>I want to use <code>f@x_ = Integrate[y^2, {y, 0, x}]</code> instead of <code>f@x_ := Evaluate@Integrate[y^2, {y, 0, x}]</code> and to be safe I want to scope the variable/pattern label <code>x</code> to something unique.</p> <p>See also <a href="https://mathematica.stackexchange.com/questions/119878/why-does-syntax-highlighting-in-set-and-rule-not-color-pattern-names-on-the">Why does syntax highlighting in `Set` and `Rule` not color pattern names on the RHS?</a></p>
Kuba
5,478
<p>While <code>Set</code> isn't a scoping construct (SC), it is considered one by other SCs outer to it. <a href="http://reference.wolfram.com/language/ref/Set.html" rel="nofollow noreferrer"><strong>ref / Set / Details[[-3]]</strong></a> (thanks to <a href="https://mathematica.stackexchange.com/users/280/alexey-popkov">Alexey Popkov</a> for correcting me).</p> <p>Here it is inner to the <code>Module</code> and <code>Module</code> decides not to interfere in this case (don't know why), but you can trick it:</p> <pre><code>Module[{x},
 Set @@ {f[x_], Integrate[y^2, {y, 0, x}]};
]
?f
</code></pre> <blockquote> <pre><code>f[x$301_]=x$301^3/3
</code></pre> </blockquote> <p>Further reading: <a href="https://mathematica.stackexchange.com/a/20776/5478">Enforcing correct variable bindings and avoiding renamings for conflicting variables in nested scoping constructs</a></p>
402,802
<p>I have read that $$y=\lvert\sin x\rvert+ \lvert\cos x\rvert $$ is periodic with fundamental period $\frac{\pi}{2}$.</p> <p>But <a href="http://www.wolframalpha.com/input/?i=y%3D%7Csinx%7C%2B%7Ccosx%7C" rel="nofollow">Wolfram</a> says it is periodic with period $\pi$.</p> <p>Please tell what is correct.</p>
WiseStrawberry
61,705
<p>If you look at the graphs you can see the difference. What <strong>is</strong> a fundamental period? It is the smallest positive period, and for this function that is $\pi/2$, not $\pi$.</p>
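To make that concrete, a numerical check: $\pi/2$ is a period of $|\sin x|+|\cos x|$ while $\pi/4$ is not, which supports $\pi/2$ as the fundamental period ($\pi$ is then merely a period, not the smallest one). A sketch:

```python
from math import sin, cos, pi

def f(x):
    return abs(sin(x)) + abs(cos(x))

xs = [0.001 * i for i in range(7000)]
assert all(abs(f(x + pi / 2) - f(x)) < 1e-12 for x in xs)  # pi/2 is a period
assert any(abs(f(x + pi / 4) - f(x)) > 0.1 for x in xs)    # pi/4 is not
print("pi/2 is a period, pi/4 is not")
```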
397,347
<p>I'm trying to figure out how to evaluate the following: $$ J=\int_{0}^{\infty}\frac{x^3}{e^x-1}\ln(e^x - 1)\,dx $$ I'm tried considering $I(s) = \int_{0}^{\infty}\frac{x^3}{(e^x-1)^s}\,dx\implies J=-I'(1)$, but I couldn't figure out what $I(s)$ was. My other idea was contour integration, but I'm not sure how to deal with the logarithm. Mathematica says that $J\approx24.307$. </p> <p>I've asked a <a href="https://math.stackexchange.com/questions/339711/find-the-value-of-j-int-0-infty-fracx3ex-1-lnx-dx">similar question</a> and the answer involved $\zeta(s)$ so I suspect that this one will as well. </p>
Felix Marin
85,343
<p><span class="math-container">$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\on}[1]{\operatorname{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$</span> <span class="math-container">\begin{align} J &amp; \equiv \bbox[5px,#ffd]{\int_{0}^{\infty}{x^{3} \over \expo{x} - 1}\ln\pars{\expo{x} - 1}\,\dd x} \\[5mm] &amp; = \left.\partiald{}{\nu}\int_{0}^{\infty}x^{3} \pars{\expo{x} - 1}^{\nu}\,\dd x \,\right\vert_{\,\nu\ =\ -1} \\[5mm] &amp; = \left.\partiald{}{\nu}\int_{0}^{\infty}x^{3}\expo{\nu x} \pars{1 - \expo{-x}}^{\nu}\,\dd x \,\right\vert_{\,\nu\ =\ -1} \\[5mm] &amp; = \left.\partiald{}{\nu}\sum_{k = 0}^{\infty}{\nu \choose k} \pars{-1}^{k}\int_{0}^{\infty}x^{3}\expo{-\pars{k - \nu}x} \,\,\dd x\,\right\vert_{\,\nu\ =\ -1} \\[5mm] &amp; = \left.\partiald{}{\nu}\sum_{k = 0}^{\infty} {-\nu + k - 1\choose k}\,{6 \over \pars{k - \nu}^{4}}\,\right\vert_{\,\nu\ =\ -1} \\[5mm] &amp; = 24\sum_{k = 0}^{\infty}{1 \over \pars{k + 1}^{5}} - 6\sum_{ k = 0}^{\infty}{H_{k} \over \pars{k + 1}^{4}} \\[5mm] &amp; = 30\,\underbrace{\sum_{k = 1}^{\infty}{1 \over k^{5}}} _{\ds{\zeta\pars{5}}}\ -\ 6\ \underbrace{\sum_{ k = 1}^{\infty}{H_{k} \over k^{4}}} _{\ds{3\zeta\pars{5} - \pi^{2}\zeta\pars{3}/6}} \\[5mm] &amp; = \bbx{\pi^{2}\,\zeta\pars{3} + 12\,\zeta\pars{5}} \approx 24.3070 \\ &amp; \end{align}</span> <span class="math-container">$\ds{\sum_{ k = 1}^{\infty}{H_{k} 
\over k^{4}}}$</span>: <a href="https://mathworld.wolfram.com/HarmonicNumber.html" rel="nofollow noreferrer">See <span class="math-container">$\ds{\pars{20}}$</span> in MW</a>.</p>
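The closed form $\pi^2\zeta(3)+12\zeta(5)\approx 24.307$ agrees with direct numerics using only the standard library (Simpson's rule plus truncated series for $\zeta(3)$ and $\zeta(5)$); a sketch:

```python
from math import expm1, log, pi

def integrand(x):
    t = expm1(x)                 # e^x - 1 without cancellation
    return x**3 * log(t) / t

# Simpson's rule on [1e-3, 40]; the integrand behaves like x^2 log x near 0,
# so the omitted piece is negligible at this accuracy
N, a, b = 20000, 1e-3, 40.0
h = (b - a) / N
s = integrand(a) + integrand(b)
s += 4 * sum(integrand(a + i * h) for i in range(1, N, 2))
s += 2 * sum(integrand(a + i * h) for i in range(2, N, 2))
J = s * h / 3

zeta3 = sum(1.0 / k**3 for k in range(1, 200000))
zeta5 = sum(1.0 / k**5 for k in range(1, 200000))
closed = pi**2 * zeta3 + 12 * zeta5
assert abs(J - closed) < 1e-3
print(J, closed)  # ≈ 24.3074
```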
2,971,143
<p>Let me choose <span class="math-container">$n=1$</span> for my induction basis: <span class="math-container">$2 &gt; 1$</span>, true.</p> <p>Induction Step : <span class="math-container">$2^n &gt; n^2 \rightarrow 2^{n+1} &gt; (n+1)^2 $</span></p> <p><span class="math-container">$2^{n+1} &gt; (n+1)^2 \iff$</span></p> <p><span class="math-container">$2\cdot 2^n &gt; n^2 + 2n + 1 \iff$</span></p> <p><span class="math-container">$0 &gt; n^2 + 1 + 2n - 2\cdot 2^n \iff$</span></p> <p><span class="math-container">$0 &gt; n^2 -2^n + 1 + 2n - 2^n \iff$</span> IH: <span class="math-container">$0 &gt; n^2 - 2^n$</span></p> <p><span class="math-container">$0 &gt; 1 + 2n - 2^n &gt; n^2 - 2^n + 1 + 2n - 2^n \iff$</span></p> <p><span class="math-container">$2^n &gt; 1 + 2n &gt; n^2$</span>, which can be proved with induction for <span class="math-container">$n \geq 3$</span></p> <p><span class="math-container">$2^n &gt; n^2$</span>, true by assumption</p> <p>I have showed that, based from the induction basis, I can conclude the general statement. But like I have said in the headline the identity is not fulfilled for <span class="math-container">$n=2$</span>, so something must be wrong in the proof. </p>
user
505,767
<p>You have checked the <strong>base case</strong></p> <ul> <li><span class="math-container">$n=1 \implies 2&gt;1$</span></li> </ul> <p>which is correct, and for the <strong>induction step</strong> <span class="math-container">$P(n) \implies P(n+1)$</span> you have found that it works only for <span class="math-container">$n\ge 3$</span>.</p> <p>To complete the proof we need to go back to the base case and find a value <span class="math-container">$n\ge 3$</span> for which the statement itself holds: the inequality fails for <span class="math-container">$n=2,3,4$</span> but holds for <span class="math-container">$n=5$</span>, so the induction gives the result for all <span class="math-container">$n\ge 5$</span> (with <span class="math-container">$n=1$</span> as an isolated true case).</p>
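Concretely, the inequality holds at $n=1$, fails at $n=2,3,4$, and holds from $n=5$ on, so $n=5$ is the base case to use; a one-line check:

```python
holds = [n for n in range(1, 30) if 2**n > n**2]
print(holds[:8])  # → [1, 5, 6, 7, 8, 9, 10, 11]
assert holds == [1] + list(range(5, 30))
```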
76,778
<p>I'm reading Yao's unpredictability -> pseudorandomness construction and Goldreich/Levin's pseudorandom permutation -> pseudorandom generator construction.</p> <p>My question is:</p> <p>Is there a direct way to show that:</p> <p>given a pseudorandom function, we can construct a pseudorandom permutation out of it?</p> <p>[Or is this question open?]</p> <p>Thanks!</p>
Steve
18,168
<p>That would be the celebrated Luby–Rackoff result.</p>
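For intuition, here is a toy version of that result: a few Feistel rounds turn a length-preserving keyed function into a permutation (three rounds are what the Luby–Rackoff proof needs for pseudorandomness). This is an illustrative sketch on 8-bit halves; the round function, keys, and sizes are all made up, and nothing here is secure:

```python
import hashlib

def prf(key: bytes, half: int) -> int:
    # stand-in for a PRF on 8-bit half-blocks (illustration only)
    return hashlib.sha256(key + bytes([half])).digest()[0]

def feistel_encrypt(keys, block):
    left, right = block >> 8, block & 0xFF
    for k in keys:                      # (L, R) -> (R, L xor F_k(R))
        left, right = right, left ^ prf(k, right)
    return (left << 8) | right

def feistel_decrypt(keys, block):
    left, right = block >> 8, block & 0xFF
    for k in reversed(keys):            # invert the rounds in reverse order
        left, right = right ^ prf(k, left), left
    return (left << 8) | right

keys = [b"k1", b"k2", b"k3"]            # 3 rounds, as in Luby-Rackoff
outs = {feistel_encrypt(keys, b) for b in range(1 << 16)}
assert len(outs) == 1 << 16             # a permutation of 16-bit blocks
assert all(feistel_decrypt(keys, feistel_encrypt(keys, b)) == b
           for b in range(0, 1 << 16, 257))
print("3-round Feistel gives a permutation")
```

Each round is invertible regardless of what the round function outputs, which is exactly why a PRF yields a permutation.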
1,908,844
<p>The following example is taken from the book "Introduction to Probability Models" by Sheldon M. Ross (Chapter 5, example 5.4).</p> <blockquote> <p>The dollar amount of damage involved in an automobile accident is an exponential random variable with mean 1000. Of this, the insurance company only pays that amount exceeding (the deductible amount of) 400. Find the expected value and the standard deviation of the amount the insurance company pays per accident."</p> </blockquote> <p>In the solution, the author states that: </p> <blockquote> <p>By the lack of memory property of the exponential, it follows that if a damage amount exceeds 400, then the amount by which it exceeds it is exponential with mean 1000.</p> </blockquote> <p>After reading several implications of this property, I easily map this statement to something like: if you have been waiting for 400s without seeing the bus, then the expected time until the next bus is always 1000s. (Please correct me if I'm wrong)</p> <p>If I've understood correctly, what confuses me is this next equation:</p> <p>$$ E[Y|I=1] = 1000 $$</p> <p>where:</p> <p>$X$: the dollar amount of damage resulting from an accident</p> <p>$Y=(X-400)^+$: the amount paid by the insurance company (where $a^+$ is $a$ if $a&gt;0$ and 0 if $a&lt;=0$).</p> <p>$I = 1*(X &gt; 400) + 0*(X&lt;=400)$</p> <p>I don't get why that equality holds given the memoryless property. Naively, because of the subtraction of 400, I think it should be something like: $E[Y|I] = 1000 - 400 = 600$ (or some other value). 
Can anyone give me an explanation about this?</p> <p>In case you are not clear about my description, please refer to this <a href="https://books.google.ca/books?id=A3YpAgAAQBAJ&amp;pg=PA281&amp;lpg=PA281&amp;dq=probability%20model%20dollar%20amount%20of%20damage%20exponential&amp;source=bl&amp;ots=CaFTvM6Rtw&amp;sig=t0nrAFc-6hX0ByxD3bAD-E3M7EM&amp;hl=en&amp;sa=X&amp;ved=0ahUKEwiA4oaN4enOAhUGfxoKHRZHDEYQ6AEIHDAA#v=onepage&amp;q=probability%20model%20dollar%20amount%20of%20damage%20exponential&amp;f=false" rel="nofollow">link</a> with <strong>example 5.4</strong>.</p>
Alex R.
22,064
<p>If you want a generating function approach, your sum is equivalent to finding the $x^k$ coefficient of $(1+x)^n(1+x)^m=(1+x)^{m+n}.$ This is easily seen to be $\binom{n+m}{k}$. </p>
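The coefficient identity used in the last step is Vandermonde's convolution, $\sum_j\binom{n}{j}\binom{m}{k-j}=\binom{n+m}{k}$; a quick check:

```python
from math import comb

def vandermonde(n, m, k):
    return sum(comb(n, j) * comb(m, k - j) for j in range(k + 1))

assert all(vandermonde(n, m, k) == comb(n + m, k)
           for n in range(8) for m in range(8) for k in range(n + m + 1))
print("ok")
```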
2,916,246
<p>I've found this to be difficult to solve:</p> <p>$$ \frac{d^2 x }{dt^2} + (a x + b) \frac{dx}{dt} = 0 $$</p> <p>I've done some reading, and I guess I could write this as:</p> <p>$$ \frac{d^2 x }{dt^2} + b \frac{dx}{dt} + ax \frac{dx}{dt} = 0 $$</p> <p>If I then treat $v(x) = \frac{dx}{dt}$ as an independent variable, I would get:</p> <p>$$ \frac{dv}{dt} + bv + axv = 0 $$</p> <p>This is sort of like a nonhomogeneous equation. If I take the homogeneous solution, I would get:</p> <p>$$ v(t) = A e^{-bt}$$</p> <p>I think.... I'm not sure where to go from here though. </p>
user577215664
475,762
<p>$$\frac{d^2 x }{dt^2} + (a x + b) \frac{dx}{dt} = 0$$ $$x'' + (a x + b) x' = 0$$ $$x'' + a xx' + bx' = 0$$ $$x'' + \frac a 2 (x^2)' + bx' = 0$$ Integrate $$x'+ \frac a 2 x^2 + bx = K$$ $$(x+\frac ba)'+ \frac a 2 (x^2 + \frac {2b}ax+\frac {b^2}{a^2}) = C$$ Substitute $z=x+\frac ba$ $$z'+ \frac a 2 z^2 = C$$ This last equation is separable</p>
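The key step above is that $x' + \frac a2 x^2 + bx = K$ is a first integral. One way to sanity-check this numerically (my own sketch; the values $a=1$, $b=2$ and the initial data are arbitrary) is to integrate the original ODE and confirm that this quantity stays constant along the trajectory:

```python
# Integrate x'' = -(a x + b) x' with classic RK4 and check that the
# first integral K = x' + (a/2) x^2 + b x is conserved.
a, b = 1.0, 2.0

def deriv(x, v):
    return v, -(a * x + b) * v

x, v = 0.5, 1.0                      # arbitrary initial data
K_start = v + 0.5 * a * x * x + b * x

h = 1e-3
for _ in range(5000):                # integrate up to t = 5
    k1x, k1v = deriv(x, v)
    k2x, k2v = deriv(x + 0.5 * h * k1x, v + 0.5 * h * k1v)
    k3x, k3v = deriv(x + 0.5 * h * k2x, v + 0.5 * h * k2v)
    k4x, k4v = deriv(x + h * k3x, v + h * k3v)
    x += h * (k1x + 2 * k2x + 2 * k3x + k4x) / 6
    v += h * (k1v + 2 * k2v + 2 * k3v + k4v) / 6

K_end = v + 0.5 * a * x * x + b * x
print(f"drift in first integral: {abs(K_end - K_start):.2e}")
```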
2,111,402
<p>Simple exercise 6.2 in Hammack's Book of Proof. "Use proof by contradiction to prove"</p> <p>"Suppose $n$ is an integer. If $n^2$ is odd, then $n$ is odd"</p> <p>So my approach was:</p> <p>Suppose instead, IF $n^2$ is odd THEN $n$ is even</p> <p>Alternatively, then you have the contrapositive, IF $n$ is not even ($n$ is odd), then $n^2$ is not odd ($n^2$ is even).</p> <p>$n = 2k+1$ where $k$ is an integer. (definition of odd)</p> <p>$n^2 = (2k+1)^2$</p> <p>$n^2 = 4k^2 + 4k + 1$</p> <p>$n^2 = 2(2k^2 + 2k) + 1$</p> <p>$n^2 = 2q + 1$ where $q = 2k^2 + 2k$</p> <p>therefore $n^2$ is odd by definition of odd.</p> <p>Therefore we have a contradiction. Contradictory contrapositive proposition said $n^2$ is not odd, but the derivation says $n^2$ is odd. Therefore the contradictory contrapositive is false, therefore the original proposition is true.</p> <p>Not sure if this was the efficient/correct way to prove this using Proof-By-Contradiction.</p>
Nitin Uniyal
246,221
<p>The contrapositive of the original statement i.e. "If $n$ is even the $n^2$ is even" is easy to prove. </p> <blockquote> <p>Let $n=2k$, $k\in\mathbb Z$ then $n^2=(2k)^2=2(2k^2)$ is even.</p> </blockquote>
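The equivalence being proved, $n^2$ odd iff $n$ odd, can be brute-checked over a range of integers; a trivial sketch of mine, purely as a sanity check:

```python
# n^2 is odd exactly when n is odd; check both directions on a sample range.
for n in range(-1000, 1001):
    assert (n * n % 2 == 1) == (n % 2 != 0)
print("equivalence holds on [-1000, 1000]")
```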
26,083
<p>I have a data set in form of:(this is just an example)</p> <pre><code>1324501020 3241030205 4332020134 </code></pre> <p>the data are stored in a text file (e.g. data.txt) but I need to convert them into a matrix format such that each number be place in a cell like this:</p> <pre><code>1 3 2 4 5 0 1 0 2 0 3 2 4 1 0 3 0 2 0 5 4 3 3 2 0 2 0 1 3 4 </code></pre> <p>or in terms of <code>List</code> in Mathematica, I need to have</p> <p><code>{{1,3,2,4,5,0,1,0,2,0},{3,2,4,1,0,3,0,2,0,5},{4,3,3,2,0,2,0,1,3,4}}</code></p> <p>in other words, the final data set supposed to be a matrix of numbers. Any idea??</p>
george2079
2,079
<pre><code>MatrixForm[ Read /@ StringToStream /@ Characters[#] &amp; /@ {"1234" , "5678"}] </code></pre> <p><img src="https://i.stack.imgur.com/aCR3S.jpg" alt="enter image description here"> </p> <p>perhaps a bit cleaner..</p> <pre><code> IntegerDigits /@ Read /@ StringToStream /@ {"1234", "5678"} </code></pre>
26,083
<p>I have a data set in form of:(this is just an example)</p> <pre><code>1324501020 3241030205 4332020134 </code></pre> <p>the data are stored in a text file (e.g. data.txt) but I need to convert them into a matrix format such that each number be place in a cell like this:</p> <pre><code>1 3 2 4 5 0 1 0 2 0 3 2 4 1 0 3 0 2 0 5 4 3 3 2 0 2 0 1 3 4 </code></pre> <p>or in terms of <code>List</code> in Mathematica, I need to have</p> <p><code>{{1,3,2,4,5,0,1,0,2,0},{3,2,4,1,0,3,0,2,0,5},{4,3,3,2,0,2,0,1,3,4}}</code></p> <p>in other words, the final data set supposed to be a matrix of numbers. Any idea??</p>
Hans
7,695
<p>Try:</p> <pre><code>TableForm[IntegerDigits[{1324501020, 3241030205, 4332020134}]] (* -&gt; 1 3 2 4 5 0 1 0 2 0 3 2 4 1 0 3 0 2 0 5 4 3 3 2 0 2 0 1 3 4 *) </code></pre> <p>This will make the list look like a matrix table form. <a href="https://mathematica.stackexchange.com/users/6648/hypergroups">HyperGroups</a> is correct in trying <code>IntegerDigits</code>. What kinds of operations will you be performing on the results or is this purely formatting for display purposes? </p>
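Outside Mathematica, the same digit-splitting can be sketched in a couple of lines (a Python translation of mine, with the example data hard-coded in place of reading <code>data.txt</code>):

```python
# Each line of the data set is a string of digits; split it into a row of ints.
lines = ["1324501020", "3241030205", "4332020134"]
matrix = [[int(ch) for ch in line] for line in lines]
print(matrix[0])  # [1, 3, 2, 4, 5, 0, 1, 0, 2, 0]
```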
3,408,082
<blockquote> <p><span class="math-container">$\textbf{Definition}$</span>: We say <span class="math-container">$f:\mathbb{R} \to \mathbb{R}$</span> is <em>intersecting</em> if for every nonempty <span class="math-container">$A \subset \mathbb{R}$</span>, <span class="math-container">$f[A] \cap A \neq \varnothing$</span>.</p> </blockquote> <p>There is only one intersecting function: the identity. The reason for this is that <span class="math-container">$f[\{x\}] \cap \{x\} \neq \varnothing$</span> forces <span class="math-container">$f(x)=x$</span>.</p> <p>If we impose restrictions on the subsets <span class="math-container">$A$</span> we consider (say, for instance, we rule out singletons), must <span class="math-container">$f$</span> still be the identity? Let's try this. We can first introduce some definitions.</p> <blockquote> <p><span class="math-container">$\textbf{Definition:}$</span> For any cardinal <span class="math-container">$\ell$</span>, we say <span class="math-container">$f:\mathbb{R} \to \mathbb{R}$</span> is <span class="math-container">$\ell$</span>-<em>intersecting</em> if for every nonempty <span class="math-container">$A \subset \mathbb{R}$</span> with <span class="math-container">$|A| \geq \ell$</span>, <span class="math-container">$f[A] \cap A \neq \varnothing$</span>.</p> <p><span class="math-container">$\textbf{Definition:}$</span> For any <span class="math-container">$f:\mathbb{R} \to \mathbb{R}$</span>, its <em>deviation</em> is the cardinality of the set <span class="math-container">$\{x \in \mathbb{R} : f(x) \neq x\}$</span>.</p> </blockquote> <p>I'm going to write down some results for the <span class="math-container">$2$</span>-intersecting case. Suppose <span class="math-container">$f$</span> has the property that for every <span class="math-container">$A \subset \mathbb{R}$</span> with <span class="math-container">$|A| \geq 2$</span> that <span class="math-container">$f[A] \cap A \neq \varnothing$</span>. 
There indeed exists a non-identity example here, one with deviation <span class="math-container">$3$</span>, in fact. Let <span class="math-container">$f(x) = x$</span> for <span class="math-container">$x \in \mathbb{R} \setminus \{1,2,3\}$</span> and let <span class="math-container">$f(1)=2, f(2)=3, f(3)=1$</span>. Is there a <span class="math-container">$2$</span>-intersecting <span class="math-container">$f$</span> with deviation <span class="math-container">$\geq 4$</span>? No. The argument for this is combinatorial. Suppose an <span class="math-container">$f$</span> with this property existed, and let <span class="math-container">$A=\{x_1, x_2, x_3, x_4\}$</span> be a set of four elements with <span class="math-container">$f(x_i) \neq x_i$</span> for each <span class="math-container">$x_i \in A$</span>. We first note that <span class="math-container">$f$</span> restricted to <span class="math-container">$A$</span> defines a function <span class="math-container">$f_{A}:A \to A$</span>. To justify this claim, we can argue by contradiction. Suppose for some <span class="math-container">$x_i \in A$</span> that <span class="math-container">$f(x_i) = y \notin A$</span>. Let <span class="math-container">$x_j, x_k$</span> be distinct elements in <span class="math-container">$A$</span>, also both distinct from <span class="math-container">$x_i$</span>. Then since <span class="math-container">$f[\{x_i, x_j\}] \cap \{x_i, x_j\} \neq \varnothing$</span>, and <span class="math-container">$f[\{x_i, x_k\}] \cap \{x_i, x_k\} \neq \varnothing$</span>, it's easy to see this implies <span class="math-container">$f(x_k) = f(x_j) = x_i$</span> , which implies <span class="math-container">$f[\{x_k, x_j\}] \cap \{x_k, x_j\} = \varnothing$</span>, contradiction. We also note the restricted function <span class="math-container">$f_{A}$</span> is injective. 
To see this, suppose, say, <span class="math-container">$f(x_i) = x_{m}$</span> and <span class="math-container">$f(x_j) = x_{m}$</span>, and each of <span class="math-container">$i,j, m$</span> are distinct. Then <span class="math-container">$f[\{x_i, x_j\}] \cap \{x_i, x_j\} = \varnothing$</span>, contradiction. Since <span class="math-container">$A$</span> is finite, this implies it's bijective too. Hence <span class="math-container">$f_{A}$</span> corresponds to a permutation in <span class="math-container">$S_4$</span> without fixed points. It cannot have two cycles, since if <span class="math-container">$x_j, x_i$</span> are in distinct two-cycles, then <span class="math-container">$f[\{x_i, x_j\}] \cap \{x_i, x_j\} = \varnothing$</span>. Hence it's cyclic. But if it's cyclic, then <span class="math-container">$f[\{x_{i_2}, x_{i_4}\}] \cap \{x_{i_2}, x_{i_4}\} = \varnothing$</span>, where <span class="math-container">$x_{i_2}, x_{i_4}$</span> are respectively the second and fourth elements of the cycle. This establishes the contradiction.</p> <p>What eludes me is the general case. The combinatorics seem to get harder if <span class="math-container">$\ell \geq 3$</span>.</p> <blockquote> <p><span class="math-container">$\textbf{Problem 1:}$</span> The finite case. Let <span class="math-container">$\ell$</span> be a finite cardinal (i.e., a positive integer). Then is it true that any <span class="math-container">$\ell$</span>-intersecting map <span class="math-container">$f:\mathbb{R} \to \mathbb{R}$</span> has finite deviation? If this is true (which I suspect), is there a closed form for the largest possible deviation in terms of <span class="math-container">$\ell$</span>?</p> <p><span class="math-container">$\textbf{Problem 2:}$</span> The infinite case. What is the maximum deviation of an <span class="math-container">$\aleph_{0}$</span>-intersecting map <span class="math-container">$f:\mathbb{R} \to \mathbb{R}$</span>? 
What is the maximum deviation of a <span class="math-container">$\mathfrak{c}$</span>-intersecting map <span class="math-container">$f:\mathbb{R} \to \mathbb{R}$</span>? In particular, are there such functions with infinite deviation?</p> </blockquote> <p>You will notice that we don't really use any of the analytic or algebraic structure of <span class="math-container">$\mathbb{R}$</span> here, so really these notions can be generalized to functions <span class="math-container">$f:X \to Y$</span> for arbitrary sets <span class="math-container">$X,Y$</span>. We could, however, ask different kinds of questions similar to the ones described above which make use of the structure of <span class="math-container">$\mathbb{R}$</span>. Instead of considering subsets <span class="math-container">$A$</span> with sufficiently large cardinality, we could alternatively consider subsets <span class="math-container">$A$</span> which are (nondegenerate) intervals, as has been suggested in the comments, or possibly subsets which are nonempty open sets. In a broad sense, we're interested in finding &quot;highly non-identity&quot; functions <span class="math-container">$f$</span> satisfying <span class="math-container">$f[A] \cap A \neq \varnothing$</span> for <span class="math-container">$A$</span> in some 'large' collection of subsets of <span class="math-container">$\mathbb{R}$</span>. If you have the solution to a different problem, but one which is similar in the broad sense described above, you're free to share it.</p>
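Both claims in the $2$-intersecting discussion above, that the $3$-cycle example works and that no map with deviation $\geq 4$ can work, can be verified by brute force on a small finite model (my own sketch; the universe $\{1,\dots,8\}$ stands in for $\mathbb{R}$, and for the impossibility half I only need to try the fixed-point-free bijections of a $4$-set, which the argument shows are the only candidates):

```python
from itertools import combinations, permutations

def is_2_intersecting(f, universe):
    # f[A] intersects A for every subset A with |A| >= 2.
    return all(
        any(f[x] in A for x in A)
        for r in range(2, len(universe) + 1)
        for A in map(set, combinations(universe, r))
    )

U = list(range(1, 9))

# The deviation-3 example: f(1)=2, f(2)=3, f(3)=1, identity elsewhere.
f3 = {x: x for x in U}
f3[1], f3[2], f3[3] = 2, 3, 1
assert is_2_intersecting(f3, U)

# Every fixed-point-free bijection of {1,2,3,4}, extended by the identity,
# fails to be 2-intersecting.
for p in permutations([1, 2, 3, 4]):
    f = {x: x for x in U}
    for x, y in zip([1, 2, 3, 4], p):
        f[x] = y
    if all(f[x] != x for x in [1, 2, 3, 4]):
        assert not is_2_intersecting(f, U)
print("both claims check out on the finite model")
```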
ℋolo
471,959
<p>A complete answer for <span class="math-container">$\ell$</span>-intersecting for infinite <span class="math-container">$\ell$</span>:</p> <p>Let <span class="math-container">$f:Q\to Q$</span>, define <span class="math-container">$D(f) = \{x \in Q: f(x) \neq x\}$</span> and <span class="math-container">$\ell(f)$</span> be the minimum <span class="math-container">$k$</span> such that <span class="math-container">$f$</span> is <span class="math-container">$k$</span>-intersecting.</p> <blockquote> <p>Theorem: If <span class="math-container">$\ell(f)$</span> is infinite, then <span class="math-container">$|D(f)|&lt;\ell(f)$</span> and nothing else can be said.</p> </blockquote> <p>Let <span class="math-container">$O_f(A)=\{x\in A\mid f(x)\notin A\}$</span>; if <span class="math-container">$|O_f(D(f))|=\ell(f)$</span> then we are done, since <span class="math-container">$f[O_f(D(f))]\cap O_f(D(f))=\emptyset$</span>. If not, let <span class="math-container">$X_0=O_f(D(f))$</span> and look at <span class="math-container">$O_f(O_f(D(f)))$</span>; repeating this process we get a sequence <span class="math-container">$(X_i)_{i\in\omega}$</span>; let <span class="math-container">$X=\bigcap_{i\in\omega} X_i$</span>.</p> <p>Note 1: <span class="math-container">$f[X]\subseteq X$</span></p> <p>Note 2: <span class="math-container">$f[X_i]\subseteq X_{i-1}$</span></p> <p>If <span class="math-container">$|X|\geq\ell(f)$</span>, then well-order it: <span class="math-container">$\langle x_i\mid i\in X\rangle$</span>. If <span class="math-container">$f[X]$</span> is bounded by <span class="math-container">$x_\gamma$</span>, let <span class="math-container">$B=\{x_\alpha\in X\mid \alpha&gt;\gamma\}$</span>; then <span class="math-container">$f[B]\cap B=\emptyset$</span>. If not, we can inductively collect elements into a set: assuming all the elements below <span class="math-container">$\alpha$</span> have been chosen, take as the next element an <span class="math-container">$\alpha$</span> larger than <span class="math-container">$f(\beta)$</span> for all 
<span class="math-container">$\beta&lt;\alpha$</span> such that <span class="math-container">$f(\alpha)&gt;\beta$</span> for all <span class="math-container">$\beta&lt;\alpha$</span>; that set has cardinality <span class="math-container">$|X|$</span> and is disjoint from its image under <span class="math-container">$f$</span></p> <p>If <span class="math-container">$|X|&lt;\ell(f)$</span>, then either <span class="math-container">$E=\bigcup_{i\in\omega} X_{2i}$</span> or <span class="math-container">$O=\bigcup_{i\in\omega} X_{2i+1}$</span> has cardinality <span class="math-container">$\geq\ell(f)$</span>; take <span class="math-container">$B=\max(E,O)$</span> and we get <span class="math-container">$f[B]\cap B=\emptyset$</span></p> <p>Both cases give a contradiction.</p> <p>Now, assume that <span class="math-container">$\kappa&lt;\ell(f)$</span>, take any function without a fixed point <span class="math-container">$g:\kappa\to\kappa$</span>, and well-order a set <span class="math-container">$A\subseteq Q$</span> with <span class="math-container">$|A|=\kappa$</span>: <span class="math-container">$\{A_i\mid i\in\kappa\}$</span>. Then define <span class="math-container">$f:Q\to Q$</span> by <span class="math-container">$f(x)=x$</span> if <span class="math-container">$x\neq A_i$</span> for all <span class="math-container">$i$</span>, and <span class="math-container">$f(x)=A_{g(i)}$</span> if <span class="math-container">$x=A_i$</span>. Then <span class="math-container">$|D(f)|=\kappa$</span>.</p> <p>Together with @antkam and @Misha Lavrov the answer is completely finished.</p>
2,781,827
<p>I need to find a symmetric matrix of real values (not the zero matrix) of any order that is orthogonal to any diagonal matrix of real values. Any hints?</p>
Community
-1
<p><em>Hint:</em> Using the definition of orthogonality of matrices to be $\text { tr}(AB^t)=0$ (see <a href="https://math.stackexchange.com/a/1262311">here</a>), all that matters is what happens on the diagonal of the product matrix $AB^t$.</p> <p>Think zeros on the diagonal... since $B$ is diagonal, this will result in $AB^t$ having zeros on the diagonal...</p>
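A quick numerical illustration of the hint (my own sketch): any symmetric matrix with zeros on its diagonal satisfies $\text{tr}(AB^t)=0$ for every diagonal $B$.

```python
import random

random.seed(1)
n = 4

# Symmetric matrix with zeros on the diagonal: a[i][j] = a[j][i], a[i][i] = 0.
a = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        a[i][j] = a[j][i] = random.uniform(-1, 1)

# Arbitrary diagonal matrix B, stored as its diagonal; B^t = B.
d = [random.uniform(-1, 1) for _ in range(n)]

# tr(A B^t) = sum_i (A B)_{ii} = sum_i a[i][i] * d[i], which is 0 here.
trace = sum(a[i][i] * d[i] for i in range(n))
print(trace)
```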
3,082,779
<p>I love watches, and I had an idea for a weird kind of watch movement (all of the stuff that moves the hands). It is made up of a central wheel, with one of the hands connected to it (in this case, it will be the hour hand). This hand goes through a pivot, and then displays the time. I attached a video of a 3d mock up <a href="https://youtu.be/5-OmzIeNhwg" rel="noreferrer">here</a>, because it is kinda hard to explain. My question is, are there any functions that would be able to graph the movement of the end of the hand? I don't want to make the real prototype just yet.</p>
Giuseppe Negro
8,157
<blockquote> <p><strong>SUMMARY</strong>. This is formally wrong, as Ivo already showed. <strong>If the metric is <span class="math-container">$\delta_{\mu \nu}dx^\mu\otimes dx^\nu$</span></strong>, then it is morally correct. Otherwise, it is not; see the example at the end of this post.</p> </blockquote> <p>My understanding of these things is that you can raise or lower indices in the <strong>coordinates</strong> of a tensor, not in the tensor itself. To explain what I mean, let <span class="math-container">$\mu_0\in \{1, 2, \ldots n\}$</span> be fixed. The 1-form <span class="math-container">$dx^{\mu_0}$</span> is the tensor whose coordinates in the basis <span class="math-container">$dx^1, dx^2, \ldots, dx^n$</span> are <span class="math-container">$$ (\delta^{\mu_0}{}_{\nu}\ :\ \nu=1, 2, \ldots, n), $$</span> and we denote them by <span class="math-container">$\delta^{\mu_0}{}_{\nu}$</span>. (This <span class="math-container">$\delta$</span> is the Kronecker symbol, which equals <span class="math-container">$1$</span> if both indices agree and <span class="math-container">$0$</span> otherwise. The <a href="https://math.stackexchange.com/q/73171/8157">spacing in the indices is not important</a>). </p> <p>Dually, the vector field <span class="math-container">$\frac{\partial}{\partial x^{\mu_0}}$</span> is the tensor whose coordinates are <span class="math-container">$\delta_{\mu_0}{}^\nu$</span>. And this is the end of the story unless we introduce a Riemannian metric. </p> <p>If we introduce a such metric <span class="math-container">$g_{\mu\nu}$</span> (or <span class="math-container">$g_{\mu\nu}dx^\mu\otimes dx^\nu$</span>, if you prefer the extended version), then we can raise or lower indices by contracting with this metric, which is another definition of the "musical isomorphisms" of Ivo's answer. 
In the case of the tensor we introduced previously, <strong>if <span class="math-container">$g_{\mu\nu}=\delta_{\mu\nu}$</span></strong>, then raising the second index in the first tensor we obtain the second; <span class="math-container">$$ \delta^{\mu_0}{}_\nu \delta^{\nu \rho}=\delta^{\mu_0\rho}. $$</span> </p> <hr> <p><strong>WARNING!</strong> If the metric is not <span class="math-container">$\delta_{\mu \nu}dx^\mu\otimes dx^\nu$</span>, then the above fails, and (in the language of Ivo's answer) it is <strong>not</strong> true that <span class="math-container">$$ \left(\frac{\partial}{\partial x^\mu}\right)_b =dx^\mu, \quad \frac{\partial}{\partial x^\mu}=(dx^\mu)^{\#}.$$</span> Let me make an explicit example: parametrize <span class="math-container">$\mathbb S^2$</span> (minus a semicircle) as <span class="math-container">$$ (\sin \theta\cos \phi, \sin\theta\sin\phi, \cos\theta), \qquad \text{where }\theta\in(0, \pi),\phi\in(0, 2\pi).$$</span> Then the metric tensor of <span class="math-container">$\mathbb S^2$</span> is <span class="math-container">$$ g = d\theta^2 + \sin^2\theta\, d\phi^2,$$</span> hence the matrix <span class="math-container">$g_{\mu \nu}$</span> is <span class="math-container">$$ \begin{bmatrix} 1 &amp; 0 \\ 0 &amp; \sin^2\theta\end{bmatrix}. $$</span> In particular, <span class="math-container">$g_{\mu \nu}$</span> is <strong>not</strong> the Kronecker symbol. </p> <p>Now, let us compute <span class="math-container">$\left(\frac{\partial}{\partial \phi}\right)_b$</span>. The components of this vector field, in the basis <span class="math-container">$\frac{\partial}{\partial\theta}, \frac{\partial}{\partial\phi}$</span>, are <span class="math-container">$(0, 1)$</span>. We should then contract with the metric, which amounts to computing <span class="math-container">$$ (0, 1)\begin{bmatrix} 1 &amp; 0 \\ 0 &amp; \sin^2\theta\end{bmatrix} = (0, \sin^2\theta).$$</span> Hence <span class="math-container">$$ \left(\frac{\partial}{\partial \phi}\right)_b= \sin^2\theta \, d\phi \neq d\phi.$$</span></p>
2,756,686
<p>I have a second derivative that I need to use to find inflection points to create a graph. The second derivative is $$f^{\prime\prime}(x)=-4\pi^2\cos(\pi(x-1))$$</p> <p>So I set the equation to $0$ and solve for $x$</p> <p>$$-4\pi^2\cos(\pi(x-1))=0$$</p> <p>I divide by the constant $-4\pi^2$ and get </p> <blockquote> <p>$$\cos(\pi(x-1))=0$$</p> </blockquote> <p>But I am basically stuck at this point. I know I need to take the inverse cosine of both sides. The result I am getting is $x=3/2$, but the answer in the book is $x=1/2$, $3/2$. Can someone help me figure out how to solve the last steps of this problem?</p>
Greg
495,411
<p>Assuming you're looking within $0 \le x \le 2$, i.e. $-\pi \le \pi(x-1) \le \pi$.</p> <p>Where does $\cos t = 0$ on that range?<br> At $t = \pm\pi/2$ (more generally, at odd multiples of $\pi/2$).<br> Set $\pi(x-1)$ equal to each of these and solve for $x$. You'll get $3/2$ and $1/2$.</p>
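Numerically (a quick check of mine, not part of the original answer), both candidates do annihilate the second derivative:

```python
from math import cos, pi

def f2(x):
    # The second derivative from the question.
    return -4 * pi**2 * cos(pi * (x - 1))

# Both values are zero up to floating-point noise.
print(f2(0.5), f2(1.5))
```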
262,425
<p>I'm trying to integrate a function that involves a <em>finite</em> sum:</p> <p><span class="math-container">$$\int_{-\infty}^{\infty}\sum_{j=1}^n (e^{-b t^2}r_j) \,dt$$</span></p> <p>I think it should be possible to take the exponent <em>outside</em> the sum:</p> <p><span class="math-container">$$\int_{-\infty}^{\infty}\left(e^{-b t^2} \sum_{j=1}^n r_j \right)dt=\sum_{j=1}^n r_j \times \int_{-\infty}^{\infty}e^{-b t^2} dt$$</span></p> <p>I write it in Mathematica like this:</p> <pre><code>$Assumptions=_\[Element]Reals Assuming[ b&gt;0, Integrate[Sum[Exp[-b t^2]*r[j],{j,1,n}],{t,-\[Infinity],+\[Infinity]}] ] </code></pre> <p>This, however, simply returns the integral unchanged:</p> <p><span class="math-container">$$\int_{-\infty }^{\infty } \left(\sum _{j=1}^n e^{-b t^2} r(j)\right)\, dt$$</span></p> <p>If I specify a number for <span class="math-container">$n$</span>, I get the expected result:</p> <p><span class="math-container">$$\frac{\sqrt{\pi } (r(1)+r(2)+r(3)+r(4)+r(5))}{\sqrt{b}}$$</span></p> <hr /> <p>How do I extract <span class="math-container">$e^{-bt^2}$</span> outside the sum? Alternatively, how do I bring the integral inside the sum? More generally, how do I integrate this?</p>
Bob Hanlon
9,362
<p>Use a replacement <a href="https://reference.wolfram.com/language/ref/Rule.html" rel="nofollow noreferrer"><code>Rule</code></a> to swap the order when appropriate.</p> <pre><code>Clear[&quot;Global`*&quot;] swap = Integrate[Sum[f_, iter1_, opts1___], iter2_, opts2___] :&gt; Sum[Integrate[f, iter2, opts2], iter1, opts1]; expr[n_Integer?Positive] = Assuming[b &gt; 0, (Integrate[ Sum[Exp[-b t^2]*r[j], {j, 1, n}], {t, -∞, +∞}] /. swap)] (* Sum[(Sqrt[Pi]*r[j])/Sqrt[b], {j, 1, n}] *) expr[5] // Simplify (* (Sqrt[π] (r[1] + r[2] + r[3] + r[4] + r[5]))/Sqrt[b] *) </code></pre>
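The per-term value $\sqrt{\pi/b}$ that the swap produces can be confirmed by direct numerical quadrature; here is a stdlib-only sketch of mine with a few arbitrary values of $b$:

```python
from math import exp, pi, sqrt

def gauss_integral(b, half_width=50.0, steps=200_000):
    # Midpoint rule for the integral of exp(-b t^2) over a wide symmetric
    # window; the tail beyond +-50/sqrt(b) is utterly negligible.
    lo = -half_width / sqrt(b)
    h = -2.0 * lo / steps
    return h * sum(exp(-b * (lo + (k + 0.5) * h) ** 2) for k in range(steps))

for b in (0.5, 1.0, 3.0):
    print(b, gauss_integral(b), sqrt(pi / b))
```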
713,098
<p>The answer to my question might be obvious to you, but I have difficulty with it. </p> <p>Which equations are correct:</p> <p>$\sqrt{9} = 3$</p> <p>$\sqrt{9} = \pm3$</p> <p>$\sqrt{x^2} = |x|$</p> <p>$\sqrt{x^2} = \pm x$</p> <p>I'm confused. When is it right to take an absolute value? When do we have only one value, and why? When two, and why? </p> <p>Thank you very much in advance for your help!</p>
MPW
113,214
<p>The confusion about the sign is understandable. The square root symbol applied to a positive number always yields a positive number (disregarding the case of zero for the sake of simplicity here). The problem arises when you don't know ahead of time whether $x$ is positive or not. It is true that one of the numbers $x$ and $-x$ must be positive, though. So you can write with certainty that $$\sqrt {x^2}=|x|$$ since $|x|$ is precisely the one of these two numbers that is positive--it's just another way to say the same thing more concisely.</p> <p>It is also true that "either $\sqrt{x^2}=x$ or $\sqrt{x^2}=-x$" is true, which is often abbreviated as "$\sqrt{x^2}=\pm x$". But be very careful what this says. It is a disjunction, a compound statement that at least one of the two component statements must be true. It does <em>not</em> say that both must be true. So it is also correct to write $$\sqrt{x^2}=\pm x$$ if you understand that it means "or" but not necessarily "and".</p> <p>So to answer your question: they are all correct.</p>
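The identity $\sqrt{x^2}=|x|$ is easy to spot-check (a trivial sketch of mine):

```python
from math import sqrt

# sqrt always returns the nonnegative root, so sqrt(x^2) agrees with |x|
# whether x is negative, zero, or positive.
for x in (-3.0, -0.5, 0.0, 2.0, 9.0):
    print(x, sqrt(x * x), abs(x))
```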
441,374
<p>Let $K_{\alpha}(z)$ be the <a href="https://en.wikipedia.org/wiki/Bessel_function#Modified_Bessel_functions:_I.CE.B1_.2C_K.CE.B1" rel="nofollow noreferrer">modified Bessel function of the second kind of order $\alpha$</a>.</p> <p>I need to compute the following integral:</p> <p>$$\int_0^\infty\;\;K_0\left(\sqrt{a(k^2+b)}\right)dk$$ </p> <p>where $a&gt;0$ and $b&gt;0$. </p> <p>I have tried several substitutions and played around a lot in Mathematica, and can't seem to solve this. Perhaps an integral representation of $K_{0}(z)$ would be helpful here.</p> <p>Even if this can't be done exactly, a sensible approximation strategy would also be useful. </p> <p>Any advice would be greatly appreciated. Thanks in advance for your time!</p>
Ron Gordon
53,268
<p>OK, I think I have a way to make an approximation that holds up pretty well by using Laplace's method twice. </p> <p>Begin by using the representation</p> <p>$$K_0(u) = \int_0^{\infty} dt \, e^{-u\cosh{t}}$$</p> <p>Then the integral you seek may be written, when we reverse the order of integration, as</p> <p>$$\int_0^{\infty} dk \, K_0\left(\sqrt{a (k^2+b)}\right) = \int_0^{\infty} dt \, \int_0^{\infty} dk \, e^{-\sqrt{a} \cosh{t} \sqrt{k^2+b}}$$</p> <p>This is justified in that both integrals clearly converge absolutely. Now, to apply Laplace's method on the integral over $k$, we may assume that $a \cosh{t}$ is sufficiently large, and we use the approximation that </p> <p>$$\sqrt{k^2+b} \sim \sqrt{b} + \frac{k^2}{2 \sqrt{b}}$$</p> <p>We end up with a familiar integral that may be evaluated immediately, and we are left with a single integral:</p> <p>$$\int_0^{\infty} dk \, K_0\left(\sqrt{a (k^2+b)}\right) \sim \sqrt{\frac{\pi \sqrt{b}}{2 \sqrt{a}}} \int_0^{\infty} \frac{dt}{\sqrt{\cosh{t}}} e^{-\sqrt{a b} \cosh{t}} \quad (a \to \infty)$$</p> <p>At this point I should say that I am not providing an error estimate, although deriving one should be fairly straightforward. (This involves providing an additional term of the Taylor expansion of the square root above, and then Taylor expanding the exponential in the integrand.)</p> <p>Now we are left with the problem of evaluating the above integral, which looks only a little less problematic than the original one. Nevertheless, we may apply Laplace's method again, as we are claiming that $a$ is sufficiently large, and $b \gt 0$. In this case, we use the Taylor expansion of $\cosh{t} \sim 1+t^2/2$ and we again end up with a familiar integral. 
The final result is</p> <p>$$\int_0^{\infty} dk \, K_0\left(\sqrt{a (k^2+b)}\right) \sim \frac{\pi}{2 \sqrt{a}} e^{-\sqrt{a b}} \quad (a \to \infty)$$</p> <p>Some random numerical samples - even with moderate values of $a=1$ and $b=1$ in WA have verified the correctness of this approximation.</p>
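The claimed asymptotics can be probed numerically with stdlib-only quadrature, using the same integral representation of $K_0$; this is my own rough sketch, with ad hoc grids and the arbitrary test values $a=25$, $b=1$:

```python
from math import cosh, exp, pi, sqrt

def k0(u, t_max=8.0, steps=2000):
    # K_0(u) = integral_0^inf exp(-u cosh t) dt, midpoint rule on [0, t_max];
    # the integrand is astronomically small past t_max for u >= 5.
    h = t_max / steps
    return h * sum(exp(-u * cosh((i + 0.5) * h)) for i in range(steps))

def lhs(a, b, k_max=4.0, steps=200):
    # integral_0^inf K_0(sqrt(a (k^2 + b))) dk, midpoint rule on [0, k_max].
    h = k_max / steps
    return h * sum(k0(sqrt(a * (((j + 0.5) * h) ** 2 + b)))
                   for j in range(steps))

a, b = 25.0, 1.0
approx = pi / (2 * sqrt(a)) * exp(-sqrt(a * b))
numeric = lhs(a, b)
print(numeric, approx)
```

Already at $\sqrt{ab}=5$ the two values agree closely.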
598,635
<p>Prove the two Identities for $-1 &lt; r &lt; 1$</p> <p>$$\sum_{n=0}^{\infty} r^n\cos n\theta =\frac{1-r\cos\theta}{1-2r\cos\theta+r^2}$$</p> <p>$$\sum_{n=0}^{\infty} r^n\sin{n\theta}=\frac{r \sin\theta }{1-2r\cos\theta+r^2}$$</p> <p>Sorry could not figure out how to format equations</p>
DonAntonio
31,254
<p>Hint:</p> <p>$$\sum_{n=0}^\infty \left(re^{\theta i}\right)^n=\frac1{1-re^{i\theta }}\;,\;\;\text{as long as}\;\;|r|&lt;1$$</p> <p>But</p> <p>$$\frac1{1-re^{i\theta}}=\frac{1-re^{-i\theta}}{|1-re^{i\theta}|^2}$$</p> <p>and now just take the real and the imaginary parts...and remember, of course, that a complex sequence converges iff its real and imaginary parts converge each (and what's the relation of this with series convergence?)</p> <p><strong>Added on request:</strong> By definition, $\;e^{i\theta}=\cos\theta+i\sin\theta\;,\;\;\theta\in\Bbb R\;$ . Sometimes, in particular at high school level, this is denoted by cis$\,\theta\;$ .</p> <p>Now, it's an easy exercise, using the polar form for complex numbers, to show that if $\;z\in\Bbb C\;$ , then</p> <p>$$z^{-1}=\overline z\iff |z|=1\iff z=e^{i\theta}$$</p> <p>and from here, doing the usual and multiplying a complex fraction by the denominator's conjugate in the form of $\;1\;$, we get:</p> <p>$$\frac1{1-re^{i\theta}}=\frac1{1-re^{i\theta}}\frac{1-re^{-i\theta}}{1-re^{-i\theta}}=\frac{1-re^{-i\theta}}{|1-re^{i\theta}|^2}=\frac{1-re^{-i\theta}}{(1-r\cos\theta)^2+r^2\sin^2\theta}=$$</p> <p>$$=\frac{1-r\cos\theta+ir\sin\theta}{1-2r\cos\theta+r^2}$$</p> <p>because $\;\forall\,z\in\Bbb C\;,\;\;z\overline z=|z|^2=\text{Re}\,(z)^2+\text{Im}\,(z)^2\;$</p> <p>Finally, if we have an infinite geometric series (real or complex) $\;a,ar,ar^2,ar^3,...,ar^n,...\;$ , with $\;|r|&lt;1\;$ , then</p> <p>$$\sum_{n=0}^\infty ar^n=\frac{a}{1-r}$$</p> <p>since</p> <p>$$\sum_{k=0}^n ar^k=a\frac{1-r^{n+1}}{1-r}\;,\;\;\text{and}\;\;r^n\xrightarrow[n\to\infty]{}0\;\;\text{for}\;\;|r|&lt;1$$</p> <p>I'm assuming the OP knows the basics of complex numbers, e.g. de Moivre's formula:</p> <p>$$(\cos\theta+i\sin\theta)^n=\cos n\theta+i\sin n\theta$$</p> <p>which becomes pretty trivial if we use the exponential form $\;\left(e^{i\theta}\right)^n=e^{in\theta}\;$</p> <p>Hope this helps. Any other doubt write back.</p>
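Both closed forms can be confirmed numerically by truncating the series; a quick sketch of mine, with arbitrary $r$ and $\theta$:

```python
from math import cos, sin

r, theta = 0.7, 1.3      # any |r| < 1 and any angle
N = 200                  # the tail is geometric, so 200 terms is plenty

cos_sum = sum(r**n * cos(n * theta) for n in range(N))
sin_sum = sum(r**n * sin(n * theta) for n in range(N))

# Compare with the two closed forms from the question.
denom = 1 - 2 * r * cos(theta) + r**2
print(cos_sum, (1 - r * cos(theta)) / denom)
print(sin_sum, r * sin(theta) / denom)
```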
2,011,236
<p>I was reading <a href="https://web.williams.edu/Mathematics/lg5/Hindman.pdf" rel="nofollow noreferrer">this</a> discussion of Hindman's Theorem by Leo Goldmakher, and was tripped up by his introduction of a topology on $U(\mathbb N)$. (He is using $U(\mathbb N)$ to denote the space of ultrafilters on the natural numbers.) Midway through page three, Goldmakher says, "There is a natural topology on $U(\mathbb N)$, given by the basis of open sets $\{\mathcal U\in U(\mathbb N):A\in\mathcal U\text{ for some }A\subseteq\mathbb N\}$," and this is all he says about the matter until he uses this topology in the proof of Theorem 3.1 several pages later.</p> <p>I have two questions about this:</p> <p>(1): Am I correct in thinking that the topology Goldmakher is defining here is the one generated by the basis $\{\{\mathcal U\in U(\mathbb N):A\in\mathcal U\}:A\subseteq\mathbb N\}$? The set $\{\mathcal U\in U(\mathbb N):A\in\mathcal U\text{ for some }A\subseteq\mathbb N\}$ is (as I am currently interpreting it) just the set $U(\mathbb N)$; but that doesn't make sense in context, and, besides, if he meant $U(\mathbb N)$ that's what he would have written.</p> <p>(2): I know this is vague (so feel free to ignore it), but are there any useful ways of thinking about this topology that might help me understand it?</p>
Noah Schweber
28,111
<p>The answer to your first question is yes.</p> <hr> <p>For your second question, I like the following. Think of a set $A\subseteq\mathbb{N}$ as a question you're asking about some number $N$ that I know and you're trying to figure out (a la "Twenty Questions" but for large values of twenty) - the question is, "Is $N\in A$?" For instance, you might ask "Is $N$ even?" That is, "Is $N\in \{2, 4, 6, ...\}$?"</p> <p>Now, you can think of a nonprincipal ultrafilter as a convincing way to cheat. That is, maybe I don't actually have an $N$ in mind - each time you ask a question, I just make something up. That way you never figure out what "$N$" is, since there <em>is</em> no actual $N$!</p> <p>In order to fool you, I need my answers to be consistent. That is:</p> <ul> <li><p>If you ask "Is $N\in A$?" and I say "Yes," and you then ask "Is $N\in B$?" for some $B\supseteq A$, I'd better say "Yes."</p></li> <li><p>If you ask "Is $N\in A$?" and "Is $N\in B$?" and I say "Yes" to each, then I'd better say "Yes" when you ask "Is $N\in A\cap B$?"</p></li> <li><p>If you ask "Is $N\in A$?" and I say "No," then I'd better say "Yes" when you ask "Is $N\in \overline{A}$?" (and conversely).</p></li> <li><p>If you ask "Is $N\in F$?" for some <em>finite</em> set $F$, I'd better say "no", since otherwise you'll be able to pin me down to a single number (I'm trying to cheat, remember?).</p></li> </ul> <p>So you can think of an arbitrary ultrafilter as a <strong>strategy</strong> - that is, a way for me to play this game, without ever <em>obviously</em> lying. Sometimes (principal ultrafilters) I'm not cheating, while other times (nonprincipal ultrafilters) I am. In this context, a basic open set corresponds to a question: the set of ultrafilters containing $A$ is, essentially, the set of strategies which make me answer "Yes" when you ask "Is $N\in A$?" </p>
An ultrafilter is basically a "pseudo-number": it behaves like a natural number in the context of the game above. Specifically, we can identify a <strong>number</strong> with the set of <strong>questions</strong> about it whose answer is "yes" - this is just the principal ultrafilter generated by the singleton containing the number!</em></p> <hr> <p>This is often a useful approach to thinking about the more "logic-y" topological spaces: the basic open sets are usually those of the form "All points which do have property $P$", for some reasonable property $P$. If the properties we're looking at are closed under negation (e.g. in $\beta\mathbb{N}$ asking "Does $\mathcal{U}$ not contain $A$?" is the same as asking "Does $\mathcal{U}$ contain $\overline{A}$?"), then the corresponding space is totally disconnected. Sometimes this intuition just makes things messier - e.g. I don't think it's generally useful for understanding the usual topology on $\mathbb{R}$ - but other times it's quite helpful, and I tend to think that this is one of them.</p>
2,189,445
<p>I am trying to solve this: $$ \frac{\partial^{2} I}{\partial b \partial a} = I. $$ I guessed $ I = C e^{a+b} $, but that is not the general solution. How can I find the general solution?</p>
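A quick numerical sanity check (sample points chosen arbitrarily; this is not a derivation) shows that every separable function $I(a,b) = e^{\lambda a + b/\lambda}$ with $\lambda \neq 0$ satisfies the equation, which is why the single guess $Ce^{a+b}$ (the case $\lambda = 1$) cannot be the general solution:

```python
import math

def mixed_partial(f, a, b, h=1e-3):
    # central finite-difference approximation of d^2 f / (db da)
    return (f(a + h, b + h) - f(a + h, b - h)
            - f(a - h, b + h) + f(a - h, b - h)) / (4 * h * h)

def candidate(lam):
    # separable guess I(a, b) = exp(lam*a + b/lam), lam a free parameter
    return lambda a, b: math.exp(lam * a + b / lam)

for lam in (1.0, 2.0, 0.5):
    f = candidate(lam)
    # the PDE d^2 I / (db da) = I should hold at arbitrary points
    for (a, b) in [(0.3, 0.7), (-1.0, 0.2)]:
        assert abs(mixed_partial(f, a, b) - f(a, b)) < 1e-4 * f(a, b)
```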
skyking
265,767
<p>The statement (premise) is not a paradox, it's a lie - a false statement. If we let $xSy$ mean $x$ shaves $y$ and let $b$ denote the barber, the statement becomes:</p> <p>$$\forall x(bSx\Leftrightarrow \neg xSx)$$</p> <p>In particular this means $bSb\Leftrightarrow \neg bSb$, which is formally false; expressed mathematically, we have that $\neg(\phi\Leftrightarrow\neg\phi)$.</p> <p>Now, does the barber shave himself or not? Here comes the power of a false premise - from a false premise any conclusion follows. We can prove both that:</p> <p>$$\forall x(bSx\Leftrightarrow \neg xSx)\vdash bSb$$ $$\forall x(bSx\Leftrightarrow \neg xSx)\vdash \neg bSb$$</p> <p>That is, the premise leads to a contradiction. But this isn't really a problem, because since the premise is false the consequence does not need to be true anyway - one of them may in fact be true and the other false (which is how we would want things to be).</p> <p>Also note that forming a contradiction is what RAA is about. When using that we assume a premise and form a contradiction (a statement and its negation both being consequences). And from that we conclude that the premise is false (i.e. its negation is proven).</p>
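The unsatisfiability can also be confirmed by brute force (a small illustration, with the barber arbitrarily taken to be person 0): enumerating every possible shaving relation on a finite town shows that none of them satisfies $\forall x(bSx\Leftrightarrow \neg xSx)$:

```python
from itertools import product

def satisfies_barber_rule(shaves, people, barber):
    # shaves[(x, y)] is True when x shaves y
    return all(shaves[(barber, x)] == (not shaves[(x, x)]) for x in people)

def has_model(town_size):
    people = list(range(town_size))
    pairs = [(x, y) for x in people for y in people]
    # enumerate all 2^(n^2) possible shaving relations
    for bits in product([False, True], repeat=len(pairs)):
        if satisfies_barber_rule(dict(zip(pairs, bits)), people, barber=0):
            return True
    return False

# the instance x = b already gives bSb <=> not bSb, so no model can exist
assert not has_model(1) and not has_model(2) and not has_model(3)
```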
3,086,878
<p>Recently I came across this general integral, <span class="math-container">$$\int \frac {dx}{(x^2-2ax+b)^n}$$</span> Putting <span class="math-container">$x^2-2ax+b=0$</span> we have, <span class="math-container">$$x = a \pm \sqrt {a^2-b} = a \pm \sqrt {\Delta}$$</span> Hence the integrand can be written as, <span class="math-container">$$ \frac {1}{(x^2-2ax+b)^n} = \frac {1}{(x-a-\sqrt \Delta)^n(x-a+\sqrt \Delta)^n} $$</span> Resolving into partial fractions we have, <span class="math-container">$$ \frac {1}{(x^2-2ax+b)^n} = \sum \frac {A_r}{(x-a-\sqrt \Delta)^r} + \sum \frac {B_r}{(x-a+\sqrt \Delta)^r} $$</span> Putting <span class="math-container">$-\frac {1}{2\sqrt \Delta} = D$</span>, I could produce a table of the coefficients <span class="math-container">$A$</span> and <span class="math-container">$B$</span> for different <span class="math-container">$n$</span>. For <span class="math-container">$n=1$</span>, <span class="math-container">$$A_1=-D , B_1=D$$</span> For <span class="math-container">$n=2$</span>, <span class="math-container">$$A_1=2D^3 , B_1=-2D^3$$</span> <span class="math-container">$$A_2=D^2 , B_2 = D^2$$</span> For <span class="math-container">$n=3$</span>, <span class="math-container">$$A_1=-6D^5 , B_1=6D^5$$</span> <span class="math-container">$$A_2=-3D^4 , B_2 = -3D^4$$</span> <span class="math-container">$$A_3=-D^3, B_3=D^3$$</span> For <span class="math-container">$n=4$</span>, <span class="math-container">$$A_1=20D^7, B_1=-20D^7$$</span> <span class="math-container">$$A_2=10D^6 , B_2 = 10D^6$$</span> <span class="math-container">$$A_3=4D^5, B_3=-4D^5$$</span> <span class="math-container">$$A_4=D^4, B_4=D^4$$</span> For <span class="math-container">$n=5$</span>, <span class="math-container">$$A_1=-70D^9, B_1=70D^9$$</span> <span class="math-container">$$A_2=-35D^8, B_2 = -35D^8$$</span> <span class="math-container">$$A_3=-15D^7, B_3=15D^7$$</span> <span class="math-container">$$A_4=-5D^6, B_4=-5D^6$$</span> <span class="math-container">$$A_5=-D^5, B_5=D^5$$</span> Yet I am unable to deduce a general formula for the coefficients. If I have the coefficients, the integral is almost solved, for then I shall have a logarithmic term and a rational function in <span class="math-container">$x$</span>. More directly, I seek a result of the form, <span class="math-container">$$\kappa \log \left( \frac {x-a-\sqrt \Delta}{x-a+\sqrt \Delta}\right) + \frac {P(x)}{Q(x)}$$</span> Any help would be greatly appreciated.</p> <h1>Conjecture 1 (Proved below)</h1> <p><span class="math-container">$$A(n,r)= (-1)^n \binom {2n-r-1}{n-1} D^{2n-r}$$</span> <span class="math-container">$$B(n,r)= (-1)^{n-r} \binom {2n-r-1}{n-1} D^{2n-r}$$</span></p>
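Conjecture 1 can be tested exactly with rational arithmetic. The sketch below (with parameters $a=0$, $\sqrt\Delta=2$ chosen so that everything stays rational) evaluates the conjectured expansion and compares it with $1/(x^2-2ax+b)^n$ at several sample points for $n=1,\dots,5$:

```python
from fractions import Fraction
from math import comb

def conjectured_sum(n, a, sqrt_delta, x):
    """Evaluate the conjectured partial-fraction expansion at x.

    Uses A(n,r) = (-1)^n     C(2n-r-1, n-1) D^(2n-r),
         B(n,r) = (-1)^(n-r) C(2n-r-1, n-1) D^(2n-r),
    with D = -1/(2 sqrt(Delta)).
    """
    D = Fraction(-1, 2 * sqrt_delta)
    total = Fraction(0)
    for r in range(1, n + 1):
        c = comb(2 * n - r - 1, n - 1) * D ** (2 * n - r)
        A, B = (-1) ** n * c, (-1) ** (n - r) * c
        total += A / Fraction(x - a - sqrt_delta) ** r
        total += B / Fraction(x - a + sqrt_delta) ** r
    return total

a, sqrt_delta = 0, 2                  # Delta = 4, so b = a^2 - Delta = -4
b = a * a - sqrt_delta ** 2
for n in range(1, 6):
    for x in (0, 1, 3, 5):            # sample points avoiding the roots +-2
        direct = Fraction(1) / Fraction(x * x - 2 * a * x + b) ** n
        assert conjectured_sum(n, a, sqrt_delta, x) == direct
```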
Aleksas Domarkas
562,074
<p>Let <span class="math-container">$b\neq a^2$</span>, <span class="math-container">$$S(n)=\int \frac {dx}{(x^2-2ax+b)^n}$$</span> Using the method of undetermined coefficients we find the formula <span class="math-container">$$S(n)=\frac{Ax+B}{(x^2-2ax+b)^{n-1}}+CS(n-1)$$</span> We get <span class="math-container">$$1=-\left( 2 A n-C-3 A\right) \, {{x}^{2}}-\left( \left( 2 B-2 A a\right) n+\left( 2 C+4 A\right) a-2 B\right) x\\+2 B a n-\left( -C-A\right) b-2 B a$$</span> <span class="math-container">$$A=\frac{1}{2 \left( b-{{a}^{2}}\right) \, \left( n-1\right) },B=-\frac{a}{2 \left( b-{{a}^{2}}\right) \, \left( n-1\right) },\\C=\frac{2 n-3}{2 \left( b-{{a}^{2}}\right) \, \left( n-1\right) }$$</span> Then <span class="math-container">$$S(n)=\frac{x-a}{2(n-1)(b-a^2)(x^2-2ax+b)^{n-1}}+ \frac{2n-3}{2(n-1)(b-a^2)}S(n-1), \; n&gt;1$$</span> <span class="math-container">$$S(1)=\int \frac {dx}{x^2-2ax+b}$$</span></p>
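As a numerical cross-check of the reduction formula (a sketch assuming $b &gt; a^2$, so that $S(1) = \frac{1}{\sqrt{b-a^2}}\arctan\frac{x-a}{\sqrt{b-a^2}}$), one can unwind the recursion into an antiderivative and compare definite integrals against Simpson's rule:

```python
import math

def antiderivative(n, a, b, x):
    """S(n) built from the reduction formula; assumes b > a^2."""
    if n == 1:
        d = math.sqrt(b - a * a)
        return math.atan((x - a) / d) / d
    k = 2 * (n - 1) * (b - a * a)
    q = x * x - 2 * a * x + b
    return (x - a) / (k * q ** (n - 1)) + (2 * n - 3) / k * antiderivative(n - 1, a, b, x)

def simpson(f, lo, hi, m=2000):
    """Composite Simpson rule with m (even) subintervals."""
    h = (hi - lo) / m
    s = f(lo) + f(hi)
    for i in range(1, m):
        s += (4 if i % 2 else 2) * f(lo + i * h)
    return s * h / 3

a, b = 1.0, 3.0                      # b - a^2 = 2 > 0
for n in (2, 3, 4):
    via_recursion = antiderivative(n, a, b, 2.0) - antiderivative(n, a, b, 0.0)
    direct = simpson(lambda x: (x * x - 2 * a * x + b) ** -n, 0.0, 2.0)
    assert abs(via_recursion - direct) < 1e-8
```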
3,086,878
Przemo
99,778
<p>Let <span class="math-container">$a$</span>, <span class="math-container">$b$</span> and <span class="math-container">$x$</span> be real and <span class="math-container">$n$</span> be a positive integer. Let <span class="math-container">$\Delta:= a^2-b$</span>. then the following formula holds: <span class="math-container">\begin{eqnarray} \frac{1}{(x^2-2 a x+b)^n} &amp;=&amp; \sum\limits_{l_1=1}^n \binom{2 n-1-l_1}{n-1} \frac{(-1)^n}{(x-a-\sqrt{\Delta})^{l_1}} \cdot \frac{1}{(-2 \sqrt{\Delta})^{2 n-l_1}} + \\ &amp;&amp; \sum\limits_{l_1=1}^n \binom{2 n-1-l_1}{n-1} \frac{(-1)^n}{(x-a+\sqrt{\Delta})^{l_1}} \cdot \frac{1}{(+2 \sqrt{\Delta})^{2 n-l_1}} \end{eqnarray}</span> The result follows from the second formula from the top in my answer to <a href="https://math.stackexchange.com/questions/3184487/how-to-quickly-solve-partial-fractions-equation/3184910#3184910">How to quickly solve partial fractions equation?</a> .</p>
737,915
<p>I'm reading Calculus: Basic Concepts for High School Students and am trying to digest the definition of 'limit of function'. There are two details that I am struggling to fully accept:</p> <ol> <li><p>If you are supposed to pick an interval $(a - \delta, a + \delta)$ but $a$ can be an undefined point at the end of the domain, what happens to the other half of the interval? Is it just ignored/irrelevant?</p></li> <li><p>The function used as an example is $f(x) = \sqrt{x}$ and from what I can tell the limit of the point $a$ always matches the value of $f(a)$. I cannot see how this would be different for other functions given the way that the limit is calculated - if someone could share an example of a function that has a different limit at $x = a$ than the value $f(a)$ I would be grateful.</p></li> </ol>
Rustyn
53,783
<p>Any function that has a jump discontinuity is an example. Take for example the function defined by: $f(x) = x$ if $x\ne 4$, $f(x) = 5$ if $x=4$. Then $$\lim_{x\to 4}f(x) = 4$$ but $f(4) = 5$. </p>
4,159,771
<p>I understand the geometric intuition behind determinants but what is the real life use of it? I'm not looking for answers along the lines of &quot;it helps to find solutions to linear systems&quot; etc, unless this is one of those concepts that is useful because it allows us to do &quot;more math&quot;. I'm more interested in knowing practical applications of determinants in science, engineering, computer graphics etc.</p>
umby
839,300
<p>Simplifying, in the deformation theory of bodies, the determinant of the deformation gradient tensor gives the ratio between the final volume, in the deformed state, and the initial volume, in the undeformed state. Then, if det &gt; 1 (&lt; 1), you have a deformation causing an expansion (contraction), whereas if det = 1 there is no volume change during the deformation, a situation usually called distortion (an isochoric deformation). This cannot be tied to a particular reference system, so it has to be described by an invariant, as the determinant is.</p>
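A tiny numerical illustration: for a diagonal deformation gradient the determinant is the product of the principal stretches, i.e. exactly the ratio of deformed to undeformed volume:

```python
def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def diag(a, b, c):
    return [[a, 0.0, 0.0], [0.0, b, 0.0], [0.0, 0.0, c]]

assert det3(diag(2.0, 2.0, 2.0)) == 8.0     # uniform stretch: volume x8
assert det3(diag(0.5, 0.5, 0.5)) == 0.125   # uniform contraction
assert det3(diag(2.0, 0.5, 1.0)) == 1.0     # volume-preserving distortion
```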
3,121,361
<p>Given <span class="math-container">$G$</span> has elements in the interval <span class="math-container">$(-c, c)$</span>. Group operation is defined as: <span class="math-container">$$x\cdot y = \frac{x + y}{1 + \frac{xy}{c^2}}$$</span></p> <p>How to prove closure property to prove that G is a group?</p>
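Not a proof, but a quick numerical sanity check (taking $c=1$ for concreteness) that the operation stays inside $(-c, c)$, that $0$ acts as the identity, and that $-x$ inverts $x$:

```python
def op(x, y, c=1.0):
    # the candidate group operation on the interval (-c, c)
    return (x + y) / (1 + x * y / (c * c))

samples = [i / 10 for i in range(-9, 10)]   # points in (-1, 1)
for x in samples:
    for y in samples:
        assert -1.0 < op(x, y) < 1.0        # closure
    assert abs(op(x, 0.0) - x) < 1e-12      # 0 is the identity
    assert abs(op(x, -x)) < 1e-12           # -x is the inverse of x
```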
Calum Gilhooley
213,690
<p>If <span class="math-container">$x \geqslant y &gt; 0$</span>, and <span class="math-container">$n$</span> is a positive integer, then <span class="math-container">$$ x^n - y^n = (x - y)(x^{n-1} + x^{n-2}y + \cdots + y^{n-1}) \geqslant n(x - y)y^{n-1}. $$</span> Therefore, for <span class="math-container">$n &gt; 1$</span>, <span class="math-container">\begin{align*} a_n - a_{n-1} &amp; = \left(1+\frac{1}{n}\right)^n \! - \left(1+\frac{1}{n-1}\right)^{n-1} \\ &amp; = \frac{1}{n}\left(1+\frac{1}{n}\right)^{n-1} \!\! - \left[ \left(1+\frac{1}{n-1}\right)^{n-1} \!\! - \left(1+\frac{1}{n}\right)^{n-1}\right] \\ &amp; \leqslant \frac{1}{n}\left(1+\frac{1}{n}\right)^{n-1} \!\! - \frac{1}{n}\left(1+\frac{1}{n}\right)^{n-2} \\ &amp; = \frac{1}{n^2}\left(1+\frac{1}{n}\right)^{n-2} \\ &amp; = \frac{a_n}{(n+1)^2}, \end{align*}</span> whence <span class="math-container">$$ a_n \leqslant a_{n-1}\left(1 - \frac{1}{(n+1)^2}\right)^{-1} \quad (n &gt; 1). $$</span> By induction on <span class="math-container">$n$</span>, <span class="math-container">$$ a_n \leqslant 2\left(1 - \frac{1}{9}\right)^{-1}\!\! \left(1 - \frac{1}{16}\right)^{-1}\!\!\cdots \left(1 - \frac{1}{(n+1)^2}\right)^{-1} \quad (n &gt; 1). $$</span> Writing <span class="math-container">$c_n = (n+1)^{-2}$</span> and <span class="math-container">$s_n = c_2+c_3+\cdots+c_n$</span> (<span class="math-container">$n &gt; 1$</span>), we have <span class="math-container">$$ s_n &lt; \frac{1}{2\cdot3} + \frac{1}{3\cdot4} + \cdots + \frac{1}{n(n+1)} &lt; \frac{1}{2} \quad (n &gt; 1). $$</span> By the Weierstrass Product Inequality (the very simple proof by induction on <span class="math-container">$n$</span> is given <a href="https://proofwiki.org/wiki/Weierstrass_Product_Inequality" rel="nofollow noreferrer">here</a>, but it could be left as an exercise), <span class="math-container">$$ (1 - c_2)(1 - c_3)\cdots(1 - c_n) \geqslant 1 - s_n \quad (n &gt; 1). 
$$</span> So we have, finally, <span class="math-container">$$ a_n \leqslant 2(1 - c_2)^{-1}(1 - c_3)^{-1}\cdots(1 - c_n)^{-1} \leqslant 2(1 - s_n)^{-1} &lt; 4. $$</span></p>
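A numerical cross-check of the chain of estimates (sample values only, not a proof): $a_n = (1+1/n)^n$ stays below $4$, $s_n &lt; 1/2$, and the intermediate bound $a_n \leqslant 2(1-s_n)^{-1}$ holds:

```python
def a(n):
    return (1 + 1 / n) ** n

def s(n):
    # s_n = sum_{k=2}^{n} 1/(k+1)^2
    return sum(1 / (k + 1) ** 2 for k in range(2, n + 1))

for n in range(2, 200):
    assert a(n) < 4
    assert s(n) < 0.5
    # tiny tolerance for floating-point rounding (equality holds at n = 2)
    assert a(n) <= 2 / (1 - s(n)) + 1e-12
```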
2,555,499
<p>Let $v_1=(1,1)$ and $v_2=(-1,1)$ be vectors in $\mathbb{R}^2$. They are <strong>clearly linearly independent</strong> since neither is a scalar multiple of the other. The following information about a linear transformation $f: \mathbb{R}^2 \to \mathbb{R}^2$ is given: $$f(v_1)=10 \cdot v_1 \text{ and } f(v_2)=4 \cdot v_2$$</p> <ol> <li><p><strong>Give the transformation matrix $_vF_v$ with respect to ordered basis $\mathcal{B}=(v_1,v_2)$</strong></p></li> <li><p><strong>Give the transformation matrix $_eF_e$ with respect to the ordered standard basis $e=(e_1,e_2)$ of $\mathbb{R}^2$</strong></p></li> </ol> <p>Recall that $$ \begin{bmatrix} 1 &amp; -1 \\ 1 &amp; 1 \end{bmatrix}^{-1}=\begin{bmatrix} \frac{1}{2} &amp; \frac{1}{2} \\ -\frac{1}{2} &amp; \frac{1}{2} \end{bmatrix} $$ We need a matrix $_eF_e$ such that: $$_eF_e\begin{bmatrix} 1 &amp; -1 \\ 1 &amp; 1 \end{bmatrix}=\begin{bmatrix} 10 &amp; -4 \\ 10 &amp; 4 \end{bmatrix}=\begin{bmatrix} 1 &amp; -1 \\ 1 &amp; 1 \end{bmatrix}\begin{bmatrix} 10 &amp; 0 \\ 0 &amp; 4 \end{bmatrix}$$ then $$_eF_e=\begin{bmatrix} 10 &amp; -4 \\ 10 &amp; 4 \end{bmatrix}\begin{bmatrix} 1 &amp; -1 \\ 1 &amp; 1 \end{bmatrix}^{-1} =\begin{bmatrix} 10 &amp; -4 \\ 10 &amp; 4 \end{bmatrix}\begin{bmatrix} \frac{1}{2} &amp; \frac{1}{2} \\ -\frac{1}{2} &amp; \frac{1}{2} \end{bmatrix}=\begin{bmatrix} 7 &amp; 3 \\ 3 &amp; 7 \end{bmatrix}$$ Okay so I'm pretty sure that $$_eF_e=_eF_v \cdot _vF_v \cdot _vF_e$$ And I figured I could find $_eF_e=\begin{bmatrix} ? &amp; ? \\ ? &amp; ? \end{bmatrix}$ in the following equation $$\begin{bmatrix} ? &amp; ? \\ ? &amp; ? 
\end{bmatrix} \text{ } \begin{bmatrix} 1 &amp; -1 \\ 1 &amp; 1 \end{bmatrix}= \begin{bmatrix} 10 &amp; -4 \\ 10 &amp; 4 \end{bmatrix} \\ \Rightarrow \begin{bmatrix} 10 &amp; -4 \\ 10 &amp; 4 \end{bmatrix} \text{ } \begin{bmatrix} \frac{1}{2} &amp; \frac{1}{2} \\ -\frac{1}{2} &amp; \frac{1}{2} \end{bmatrix}= \begin{bmatrix} 7 &amp; 3 \\ 3 &amp; 7 \end{bmatrix} \\ \Rightarrow {}_eF_e=\begin{bmatrix} 7 &amp; 3 \\ 3 &amp; 7 \end{bmatrix} $$</p> <p>Now, how can I find ${}_v{F}_v$? I got a feeling that I'm making it more difficult than necessary</p>
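Whatever the cleanest route to ${}_vF_v$, the computed ${}_eF_e$ can at least be sanity-checked numerically: multiplying $\begin{bmatrix} 7 &amp; 3 \\ 3 &amp; 7 \end{bmatrix}$ by $v_1$ and $v_2$ must reproduce $10v_1$ and $4v_2$, matching the given data:

```python
def matvec(M, v):
    # product of a 2x2 matrix with a 2-vector
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

F = [[7, 3], [3, 7]]
v1, v2 = [1, 1], [-1, 1]

assert matvec(F, v1) == [10, 10]   # = 10 * v1
assert matvec(F, v2) == [-4, 4]    # =  4 * v2
```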
Benji Altman
398,014
<p>This is where definitions become necessary. How I was taught order of operations was that division and multiplication have the same precedence, and that whether written as $ab$ or $a\cdot b$ or even $a\times b$, it's all the same thing (assuming $a$ and $b$ are numbers and not vectors or something). Based on what I just said you would be right and you would read this left to right and evaluate no matter what "feels" most natural.</p> <p>That being said it is perfectly possible the book could choose to define $ab$ somewhat differently, giving it higher precedence, and to back this up, try putting the string "$a/bc$" or "$a\div bc$" into <a href="https://www.wolframalpha.com/" rel="nofollow noreferrer">Wolfram Alpha</a> and you will find it reads it as $$\frac a{bc}$$</p>
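Python follows the same convention described here: `/` and `*` share precedence and associate left to right, with no special rule for implicit multiplication, so the $\frac a{bc}$ reading has to be forced with parentheses:

```python
a, b, c = 12.0, 3.0, 2.0

left_to_right = a / b * c       # parsed as (a / b) * c
grouped = a / (b * c)           # the "a/(bc)" reading needs parentheses

assert left_to_right == 8.0
assert grouped == 2.0
```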