7,223
<p>I want to produce a <em>Mathematica</em> Computable Document in which <code>N</code> appears as a variable in my formulae. But <code>N</code> is a reserved word in the <em>Mathematica</em> language. Is there a way round this other than using a different symbol? It seems a severe limitation if you cannot use <em>Mathematica</em> to generate papers in which <code>N</code> is employed as a variable.</p>
István Zachar
89
<p>As is often voiced here, modifying built-in symbols is not a good idea most of the time, especially in the case of such a fundamental symbol as <code>N</code>. It is used heavily throughout millions of lines of underlying code, and you can never know where your change will cause mischief (or catastrophe).</p> <p>On the other hand, the name of the variable and the way it is displayed can be completely different. You can use <code>Format</code>, for example, to give <code>n</code> a specific formatting whenever it is printed:</p> <pre><code>Clear[n]; n[x_] := N[x/5]; Format[n] := Style["N", Red, Italic, Bold]; n </code></pre> <p><img src="https://i.stack.imgur.com/mjeu7.png" alt="Mathematica graphics"></p> <pre><code>n[12] </code></pre> <p>2.4</p> <p>(Note that since <code>Format</code> does not have attribute <code>HoldFirst</code> or <code>HoldAll</code>, <code>n</code> should not have any <code>OwnValues</code>, otherwise the assignment won't happen.)</p>
3,952,702
<p>The problem is to make the following integral stationary: <span class="math-container">$$ \int_{x_1}^{x_2} \frac{\sqrt{1+y'^2}}{y^2}dx $$</span> To simplify the Euler equation, I tried to change the independent variable: <span class="math-container">$$ \int_{y_1}^{y_2} \frac{\sqrt{1+x'^2}}{y^2}dy, \: \: \: \: \: \: \: \: x=x\left( y \right),\: x'=\frac{dx}{dy} $$</span> with the corresponding Euler equation: <span class="math-container">$$ \frac{d}{dy}\frac{\partial F}{\partial x'}-\frac{\partial F}{\partial x}=0 $$</span> thus <span class="math-container">$$ \begin{aligned} \frac{\partial F}{\partial x}=0 \Rightarrow \frac{\partial F}{\partial x'} &amp;= k\\ \frac{x'}{y^2 \sqrt{1+x'^2}} &amp;= k\\ \frac{dx}{dy} = x' &amp;= \frac{ky^2}{\sqrt{1-k^2y^4}} \end{aligned} $$</span> and I get: <span class="math-container">$$ x = \int \frac{ky^2}{\sqrt{1-k^2y^4}} dy $$</span> Now, can I change <span class="math-container">$ ky^2 $</span> to a new single arbitrary variable to simplify the integrand? Or is there a more effective method?</p>
Claude Leibovici
82,404
<p>As said in comments and answer, using <span class="math-container">$y=\frac{u}{\sqrt{k}}$</span> you will end up with <span class="math-container">$$\int \frac{ky^2}{\sqrt{1-k^2y^4}}\, dy=\frac{E\left(\left.\sin ^{-1}(u)\right|-1\right)-F\left(\left.\sin ^{-1}(u)\right|-1\right)}{\sqrt{k}}$$</span></p> <p>If you need to solve for <span class="math-container">$u$</span> the equation <span class="math-container">$$f(u)=E\left(\left.\sin ^{-1}(u)\right|-1\right)-F\left(\left.\sin ^{-1}(u)\right|-1\right)=a$$</span> you could use the series expansion <span class="math-container">$$f(u)=\sum_{n=0}^\infty b_n \,u^{4n+3}$$</span> where the <span class="math-container">$b_n$</span>'s form the sequence <span class="math-container">$$\left\{\frac{1}{3},\frac{1}{14},\frac{3}{88},\frac{1}{48},\frac{35}{2432},\frac{63}{5888},\frac{77}{9216},\frac{429}{63488},\frac{1287}{229376},\frac{935}{196608},\cdots\right\}$$</span> Using the above coefficients, the fit is quite good: the absolute error is</p> <ul> <li><span class="math-container">$&lt;0.1$</span>% if <span class="math-container">$u \lt 0.921$</span></li> <li><span class="math-container">$&lt;0.01$</span>% if <span class="math-container">$u \lt 0.874$</span></li> <li><span class="math-container">$&lt;0.001$</span>% if <span class="math-container">$u \lt 0.829$</span></li> </ul> <p>Using series reversion <span class="math-container">$$u=t \left(1-\sum_{n=1}^\infty c_n \,t^{4n} \right)\qquad \text{where} \qquad t=\sqrt[3]{3a}$$</span> the first <span class="math-container">$c_n$</span>'s form the sequence <span class="math-container">$$\left\{\frac{1}{14},\frac{15}{4312},\frac{61}{181104},\frac{85325}{2119641216},\frac{1214019}{227508157184}\right\}$$</span></p> <p>For a test example, using <span class="math-container">$a=0.5$</span> the above would give the <em>estimate</em> <span class="math-container">$u=0.99010375$</span>, while the &quot;exact&quot; solution is <span class="math-container">$u=0.99010360$</span>.</p>
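The listed coefficients can be reproduced exactly: since $f(u)=\int_0^u \frac{t^2}{\sqrt{1-t^4}}\,dt$, expanding $(1-t^4)^{-1/2}$ by the binomial series and integrating term by term gives $b_n=\binom{2n}{n}\frac{1}{4^n(4n+3)}$. (This closed form is my own reading of the expansion, not stated in the answer; exact rational arithmetic confirms it matches the sequence above.)

```python
from fractions import Fraction
from math import comb

# b_n = C(2n, n) / (4^n (4n + 3)), from integrating the binomial series
# of (1 - t^4)^(-1/2) term by term
def b(n):
    return Fraction(comb(2 * n, n), 4 ** n * (4 * n + 3))

listed = [Fraction(1, 3), Fraction(1, 14), Fraction(3, 88), Fraction(1, 48),
          Fraction(35, 2432), Fraction(63, 5888), Fraction(77, 9216),
          Fraction(429, 63488), Fraction(1287, 229376), Fraction(935, 196608)]
computed = [b(n) for n in range(10)]
```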
4,181,442
<p>I know this is a dumb question but...</p> <p>Is <span class="math-container">$x ≠ 1,3,5$</span> the same as saying <span class="math-container">$x$</span> does not belong to {<span class="math-container">$1,3,5$</span>}, for example?</p> <hr /> <p>Sorry for the formatting.</p> <p>By the way, does anyone have a link to a list of MathJax commands?</p> <hr /> <p>Thanks in advance.</p>
Community
-1
<p><span class="math-container">$x \neq 1, 3, 5$</span> isn't super standard notation, but if someone wrote it out at random I'd assume they meant <span class="math-container">$x \neq 1$</span> and <span class="math-container">$x \neq 3$</span> and <span class="math-container">$x \neq 5,$</span> or <span class="math-container">$x \not\in \{1, 3, 5\}$</span> for short.</p>
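For what it's worth, the two readings are logically equivalent, as a trivial brute-force check (my own sketch) confirms over a small range:

```python
# "x != 1, 3, 5" read as a conjunction vs. read as non-membership in {1, 3, 5}
all_agree = all(
    ((x != 1) and (x != 3) and (x != 5)) == (x not in {1, 3, 5})
    for x in range(-10, 11)
)
```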
631,053
<p>I have a container of 100 yellow items.</p> <p>I choose 2 at random and paint each of them blue.</p> <p>I return the items to the container.</p> <p>If I repeat this process, on average how many cycles will I make before all 100 items are painted?</p> <p>It is obviously 50 (100/2) if there is no replacement. But in this case, the items are returned to the container, so the same item could be chosen often.</p> <p>What if we choose 3?</p>
Zur Luria
117,481
<p>If you choose 1 item each time, then the expected time until all the items are painted is about $n \ln(n)$; this is the coupon collector's problem.</p> <p><a href="http://en.wikipedia.org/wiki/Coupon_collector%27s_problem" rel="nofollow">http://en.wikipedia.org/wiki/Coupon_collector%27s_problem</a></p> <p>If you take 2 at random each time, it should take about half that time, because you speed up the process by a factor of two, so about $\frac{n \ln(n)}{2} $.</p>
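A Monte Carlo sketch (my own code, not part of the answer) agrees with this estimate. Note that the exact single-item expectation is $nH_n$ rather than $n\ln n$, so for $n=100$ the batch-of-2 answer comes out near $\tfrac{100\,H_{100}}{2}\approx 259$ rather than $\tfrac{100\ln 100}{2}\approx 230$:

```python
import random

def rounds_until_all_painted(n=100, batch=2, rng=random):
    """Each round: pick `batch` distinct items, paint them, return them."""
    painted = [False] * n
    remaining, rounds = n, 0
    while remaining > 0:
        rounds += 1
        for i in rng.sample(range(n), batch):
            if not painted[i]:
                painted[i] = True
                remaining -= 1
    return rounds

random.seed(1)
trials = 300
mean_rounds = sum(rounds_until_all_painted() for _ in range(trials)) / trials
```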
1,009,082
<p>Given is a linear map f from V to W, where V has dimension n and W has dimension m.</p> <p>Now, given n > m, can the map be injective, surjective or invertible? And what about the same questions, given that m > n?</p> <p>My thoughts so far: </p> <ul> <li><p>Invertibility should be possible in the second case, if we think of the map e.g. as a square matrix which maps from V to a subspace of W, but not in the first case (the matrix would not be square).</p></li> <li><p>Injectivity could be possible in the second case - otherwise basically a vector in W would have to have two pre-images.</p></li> <li><p>Surjectivity should be possible in the first case - the image of a linear map cannot have a bigger dimensionality than the space V.</p></li> </ul> <p>Is this correct?</p>
anomaly
156,999
<p>Extending $f$ to a continuous, even function on $[-1, 1]$, we then have $\int_{-1}^1 x^n f = 0$ for all $n$, as $\int_{-1}^1 x^{2n} f = 2\int_0^1 x^{2n}f = 0$. Thus $f = 0$. For the case of odd $n$, consider $g(x) = x f(x)$.</p>
2,291,852
<p>$$\int_\pi^\infty{\frac{x \cos x}{x^2-1}dx}$$</p> <p>So the only thing I came up with was to take the absolute value of ${\frac{x \cos x}{x^2-1}}$, and by the comparison test the integral does not converge absolutely. </p> <p>But I see it's not very close to the solution, so what should I do?</p>
Hagen von Eitzen
39,174
<p>The function $x\mapsto x^2+e^x$ is continuous, hence your set (as inverse image of the closed set $\{1\}$) is closed. It is also bounded because $e^x&gt;0$ and so $x^2&lt;1$ (i.e., $-1&lt;x&lt;1$) for points in the set.</p>
1,424,198
<p>My mathematical logic textbook defines $\{x \ | \ \text {_} x \text {_} \ \}$, but I'm not sure what the $\text {_} x \text {_}$ means. </p> <p>Do the _ just mean 'for any expression involving $x$', or is there something I'm missing?</p>
Mauro ALLEGRANZA
108,274
<p>You can supplement Enderton's explanation with some examples from:</p> <ul> <li>Herbert Enderton, <a href="https://books.google.it/books?id=LXA_avkJAv8C&amp;pg=PA4" rel="nofollow">Elements of set theory</a> (1977), page 4:</li> </ul> <blockquote> <p>The notation used for the set of all objects $x$ such that the condition $\text {__} x \text {__}$ holds is </p> <p>$$\{x \ | \ \text {__} x \text {__} \}.$$</p> <p>For example: </p> <ol> <li>$\mathcal PA$ is the set of all objects $x$ such that $x$ is a subset of $A$. Here "$x$ is a subset of $A$" is the entrance requirement that $x$ must satisfy in order to belong to $\mathcal PA$. We can write </li> </ol> <p>$$\mathcal PA = \{ x \ | \ x \text { is a subset of } A \}$$ </p> <p>$$= \{ x \ | \ x \subseteq A \}.$$</p> <ol start="3"> <li><p>The set $\{ z \ | \ z \ne z \}$ equals $\emptyset$, because the entrance requirement "$z \ne z$" is not satisfied by any object $z$.</p></li> <li><p>The set $\{ n \ | \ n \text { is an even prime number} \}$ is the same as the set $\{ 2 \}$. </p></li> </ol> </blockquote> <p>Thus, the set-builder expression $\{x \ | \ \text {__} x \text {__} \}$ needs a condition "$\text {__} x \text {__}$" to be specified; all and only those objects $x$ satisfying the condition will belong to the corresponding set.</p>
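Incidentally, the same "entrance requirement" idea appears verbatim in programming languages; Python's set comprehensions (a side illustration of mine, not from Enderton) mirror the notation directly:

```python
# {z | z != z} is empty: no object meets the entrance requirement
empty = {z for z in range(100) if z != z}

def is_prime(n):
    return n >= 2 and all(n % d != 0 for d in range(2, int(n ** 0.5) + 1))

# {n | n is an even prime number} = {2}
even_primes = {n for n in range(100) if n % 2 == 0 and is_prime(n)}
```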
1,484,736
<p><img src="https://i.stack.imgur.com/i5iPj.jpg" alt="enter image description here"></p> <p>$$\lim_{x\to 0}f\big(f(x)\big)$$</p> <p>(Original image <a href="https://imgur.com/a/Uogqk" rel="nofollow noreferrer">here</a>.)</p> <p>I don't need an answer, I just want to know how I can calculate the limit based on the given information.</p>
Faraz Masroor
163,745
<p>I want to disagree with Stefan. You got it right that $\lim_{x\to 0}f(x)$ is $2$, but notice that it approaches $2$ from below, going from below $2$ towards $2$. Therefore you need the one-sided limit $\lim_{y\to 2^-}f(y)$, taken as the argument approaches $2$ from the negative direction, which is $-2$. </p>
2,163,494
<p>Let $f: A\to B; \ g,h:B\to A$ and $f\circ g = I_B$ and $h \circ f = I_A$</p> <p>I want to simply state that for any function $f$, if $f \circ h = I_A$ then it must be that $h = f^{-1}$, but that seems incomplete to me. What can I do to fix this?</p>
Yiorgos S. Smyrlis
57,021
<p>If $f(A)$ is not bounded, then there exists a sequence $\{x_n\}\subset A$, such that $|f(x_n)|\to\infty$. But, as $\{x_n\}$ is bounded, it possesses a convergent subsequence $x_{k_n}\to x$. Since $f$ is continuous, $f(x_{k_n})\to f(x)$, which contradicts the fact that $|f(x_n)|\to\infty$.</p>
4,292,427
<p><span class="math-container">$$ \frac{d^{2}y}{dt^2}+ 2t \frac{dy}{dt}+ t y=0 ~~ \tag{1} $$</span></p> <p>At least I know that this kind of ODE can be solved by finding 2 particular solutions.</p> <p>Once those 2 particular solutions are known, the general solution of this ODE can be written in the form below.</p> <p><span class="math-container">$$ y\left(t\right)=C_{1}y_{1}\left(t\right)+ C_{2} y_{2}\left(t\right) $$</span></p> <p>I've been completely struggling to find them.</p> <p>Is there some cheat sheet which can be used to find particular solutions, so that I don't have to ask about it on MSE?</p> <p>Or can you give me some hint(s) so that I can find the particular solutions?</p> <p>I've found <a href="https://slideplayer.com/slide/13806843/" rel="nofollow noreferrer">one (at the 13th page)</a> (however, it is not applicable to this ODE).</p> <p><strong>I assume a computer cannot be used and this problem is solved in an offline paper test.</strong></p> <p>Wolfram showed the result below, but what are Ai and Bi?</p> <p><a href="https://i.stack.imgur.com/zdkyN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zdkyN.png" alt="enter image description here" /></a></p>
Cesareo
397,348
<p>You can ask for series solutions. In this case, proposing <span class="math-container">$y_n = \sum_{k=0}^n a_k t^k$</span> and substituting into the ODE, we have</p> <p><span class="math-container">$$ \left(\sum_{k=0}^n a_k t^k\right)''+2t\left(\sum_{k=0}^n a_k t^k\right)'+t\sum_{k=0}^n a_k t^k=0 $$</span></p> <p>Grouping the powers of <span class="math-container">$t$</span>, we have</p> <p><span class="math-container">$$ 2a_2+(a_0+2a_1+6a_3)t+(a_1+4a_2+12a_4)t^2+\cdots = 0 $$</span></p> <p>For this to vanish identically, the coefficient of each power of <span class="math-container">$t$</span> must be zero. The first condition thus is <span class="math-container">$a_2 = 0$</span>. After that, as we need two independent initial conditions, we solve for all the remaining coefficients in terms of <span class="math-container">$a_0, a_1$</span>. Thus, after setting <span class="math-container">$a_2=0$</span>, we have</p> <p><span class="math-container">$$ \cases{ a_0+2 a_1+6 a_3 = 0\\ a_1+12 a_4= 0\\ 6 a_3+20 a_5=0\\ a_3+8 a_4+30 a_6=0\\ a_4+10 a_5+42 a_7=0\\ \vdots } $$</span></p> <p>and solving in terms of <span class="math-container">$a_0,a_1$</span> we have</p> <p><span class="math-container">$$ \cases{ a_3=-\frac{1}{6} \left(a_0+2 a_1\right) \\ a_4=-\frac{a_1}{12} \\ a_5=\frac{1}{20} \left(a_0+2 a_1\right) \\ a_6=\frac{1}{180} \left(a_0+6 a_1\right) \\ \vdots } $$</span></p> <p>thus we obtain the whole solution.</p>
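Grouping powers of $t$ amounts to the recurrence $(k+2)(k+1)\,a_{k+2}+2k\,a_k+a_{k-1}=0$ for $k\ge 1$, with $a_2=0$ from the constant term. A quick check with exact rational arithmetic (my own sketch, not part of the answer) reproduces the coefficients above:

```python
from fractions import Fraction

def series_coeffs(a0, a1, n_max=6):
    """a_{k+2} = -(2k a_k + a_{k-1}) / ((k+2)(k+1)), with a_2 = 0."""
    a = [Fraction(a0), Fraction(a1), Fraction(0)]
    for k in range(1, n_max - 1):
        a.append(-(2 * k * a[k] + a[k - 1]) / ((k + 2) * (k + 1)))
    return a

a0, a1 = Fraction(2), Fraction(5)  # arbitrary initial data
a = series_coeffs(a0, a1)
```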
76,420
<p>The following code segment shows what I'd like to do. I'm a procedural programmer trying to learn the Mathematica functional style. Any help on this would be appreciated.</p> <pre><code>B = {{1, 3}, {1, 5}, {4, 2}, {5, 2}, {5, 5}} u = SparseArray[{{1, 2} -&gt; 5, {1, 3} -&gt; 9, {1, 4} -&gt; 6, {1, 5} -&gt; Infinity, {3, 2} -&gt; 2, {3, 4} -&gt; 4, {4, 2} -&gt; 9, {5, 2} -&gt; Infinity, {5, 3} -&gt; Infinity, {5, 4} -&gt; Infinity, {5, 5} -&gt; Infinity}]; r = {3, 0, -2, 10, 19} (* Want to convert this python-looking code to functional Mathematica code for {i, j} in B : r[i] = r[i] - u[[i, j]] r[j] = r[j] + u[[i, j]] *) </code></pre>
Dr. belisarius
193
<p>What Bill posted as a comment, or for example:</p> <pre><code>{r[[#1]] -= #3, r[[#2]] += #3} &amp; @@@ (Transpose@Join[Transpose@B, {Extract[u, B]}]); r (* {-∞, ∞, 7, 1, Indeterminate} *) </code></pre> <p>The <strong>Indeterminate</strong> thingy comes from <code>∞ - ∞</code></p>
76,420
<p>The following code segment shows what I'd like to do. I'm a procedural programmer trying to learn the Mathematica functional style. Any help on this would be appreciated.</p> <pre><code>B = {{1, 3}, {1, 5}, {4, 2}, {5, 2}, {5, 5}} u = SparseArray[{{1, 2} -&gt; 5, {1, 3} -&gt; 9, {1, 4} -&gt; 6, {1, 5} -&gt; Infinity, {3, 2} -&gt; 2, {3, 4} -&gt; 4, {4, 2} -&gt; 9, {5, 2} -&gt; Infinity, {5, 3} -&gt; Infinity, {5, 4} -&gt; Infinity, {5, 5} -&gt; Infinity}]; r = {3, 0, -2, 10, 19} (* Want to convert this python-looking code to functional Mathematica code for {i, j} in B : r[i] = r[i] - u[[i, j]] r[j] = r[j] + u[[i, j]] *) </code></pre>
Karsten 7.
18,476
<p>Purely functional</p> <pre><code>func[lastr_, {i_, j_}] := MapAt[# + u[[i, j]] &amp;, MapAt[# - u[[i, j]] &amp;, lastr, i], j] Fold[func, r, B] </code></pre> <blockquote> <p><code>{-∞, ∞, 7, 1, Indeterminate}</code></p> </blockquote> <p>or</p> <pre><code>Fold[Function[{lastr, ind}, MapAt[# + u[[Sequence @@ ind]] &amp;, MapAt[# - u[[Sequence @@ ind]] &amp;, lastr, First@ind], Last@ind]], r, B] </code></pre>
2,466,527
<p>Let $A$ be the matrix of $T:P_2\to P_2$ with respect to basis $B=\{v_1,v_2,v_3\}$. Find $T(v_1)$</p> <p>$$A=\begin{bmatrix} 1 &amp; 3 &amp; -1 \\ 2 &amp; 0 &amp; 5 \\ 6 &amp; -2 &amp; 4 \end{bmatrix}$$</p> <p>$v_1=3x+3x^2$</p> <p>$v_2=-1+3x+2x^2$</p> <p>$v_3=3+7x+2x^2$</p> <hr> <p>The first part of the question asks to find $T(v_1)_B$.</p> <p>$$T(v_1)_B=[T]_{P_2}^{B}\cdot (v_1)_{P_2}=\begin{bmatrix} 1 &amp; 3 &amp; -1 \\ 2 &amp; 0 &amp; 5 \\ 6 &amp; -2 &amp; 4 \end{bmatrix}\cdot\begin{bmatrix} 0 \\ 3 \\ 3 \end{bmatrix}=\begin{bmatrix} 6 \\ 15 \\ 6 \end{bmatrix}$$</p> <p>Where $v_1$ is our vector, $B$ is the basis, $P_2=\{1,x,x^2\}$, $A=[T]_{P_2}^{B}$</p> <p>Assuming I did this part of the question correctly, the next part asks me to find $T(v_1)$, with respect to <strong>no basis</strong>.</p> <hr> <p>I think this makes sense: since we already found $T(v_1)_B$, that means that the vector $[6,15,6]$ can be written as a linear combination of the vectors in the basis $B$. Hence we have:</p> <p>$$6+15x+6x^2=r_1(3x+3x^2)+r_2(-1+3x+2x^2)+r_3(3+7x+2x^2)$$</p> <p>Cleaning up a bit we get:</p> <p>$$6+15x+6x^2=(-r_2+3r_3)1+(3r_1+3r_2+7r_3)x+(3r_1+2r_2+2r_3)x^2$$</p> <p>Giving us the system of equations:</p> <p>$$\color{red}{-r_2+3r_3=6}, \color{green}{3r_1+3r_2+7r_3=15}, \color{blue}{3r_1+2r_2+2r_3=6}$$</p> <p>Solving for $r_1,r_2,r_3$, we should have $T(v_1)$. Is this correct?</p>
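As a sanity check on the last step (my own sketch, using exact rational arithmetic and Cramer's rule): the system built from the polynomial identity $6+15x+6x^2=r_1v_1+r_2v_2+r_3v_3$ is consistent, with unique solution $(r_1,r_2,r_3)=\left(1,-\tfrac{3}{8},\tfrac{15}{8}\right)$:

```python
from fractions import Fraction

def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

# columns are the coordinates of v1, v2, v3 in the basis {1, x, x^2}
M = [[0, -1, 3],
     [3, 3, 7],
     [3, 2, 2]]
rhs = [6, 15, 6]  # coordinates of 6 + 15x + 6x^2

def cramer(M, rhs):
    d = det3(M)
    sol = []
    for i in range(3):
        Mi = [row[:] for row in M]
        for r in range(3):
            Mi[r][i] = rhs[r]
        sol.append(Fraction(det3(Mi), d))
    return sol

r = cramer(M, rhs)
```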
marty cohen
13,079
<p>Suppose $x_n = ux_{n-1}+v $.</p> <p>If $u=1$, this is $x_n = x_{n-1}+v $, so $x_n = x_0+nv$.</p> <p>From now on, I will assume that $u \ne 1$.</p> <p>I now apply a standard trick and divide by $u^{n}$. This becomes $\dfrac{x_n}{u^n} = \dfrac{x_{n-1}}{u^{n-1}}+\dfrac{v}{u^n} $.</p> <p>Now, let $y_n = \dfrac{x_n}{u^n} $. This becomes $y_n = y_{n-1}+\dfrac{v}{u^n} $, or $y_n- y_{n-1}=\dfrac{v}{u^n} $.</p> <p>Summing this from $1$ to $m$, we get</p> <p>$\begin{array}\\ y_m-y_0 &amp;=\sum_{n=1}^{m} (y_n- y_{n-1})\\ &amp;=\sum_{n=1}^{m} \dfrac{v}{u^n}\\ &amp;=v\sum_{n=1}^{m} (1/u)^n\\ &amp;=v\dfrac{1/u-1/u^{m+1}}{1-1/u}\\ \text{or}\\ \dfrac{x_m}{u^m}-x_0 &amp;=v\dfrac{1/u-1/u^{m+1}}{1-1/u}\\ &amp;=v\dfrac{1-1/u^{m}}{u-1}\\ \text{or}\\ x_m &amp;=x_0u^m+v\dfrac{u^m-1}{u-1}\\ &amp;=u^m\left(x_0+\dfrac{v}{u-1}\right)-\dfrac{v}{u-1}\\ \end{array} $</p>
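A quick check with exact rational arithmetic (my own sketch, with arbitrary sample values), iterating the recurrence directly and comparing with the closed form $x_m=x_0u^m+v\,\dfrac{u^m-1}{u-1}$ for $u\ne1$:

```python
from fractions import Fraction

def iterate(x0, u, v, m):
    """Apply x_n = u x_{n-1} + v, m times, starting from x0."""
    x = Fraction(x0)
    for _ in range(m):
        x = u * x + v
    return x

def closed_form(x0, u, v, m):
    return x0 * u ** m + v * (u ** m - 1) / (u - 1)

u, v, x0 = Fraction(5, 2), Fraction(7), Fraction(3)
matches = all(iterate(x0, u, v, m) == closed_form(x0, u, v, m) for m in range(8))
```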
817,934
<p>How to prove</p> <p>$$\int\frac{12x\sin^{-1}x}{9x^4+6x^2+1}dx=-\frac{2\sin^{-1}x}{3x^2+1}+\tan^{-1}\left(\frac{2x}{\sqrt{1-x^2}}\right)+C$$</p> <p>where $\sin^{-1}x$ and $\tan^{-1}x$ are inverse of trig functions. I don't know how to find the integral because of inverse of trig functions. I missed calc class twice. Please help me. Thanks.</p>
Claude Leibovici
82,404
<p><strong>Hint</strong></p> <p>As said in comments, integration by parts is the only way to get rid of the inverse trigonometric function.</p> <p>So, let $$u=\sin ^{-1}(x)$$ $$v'=\frac{12x}{9x^4+6x^2+1}=\frac{12x}{(3x^2+1)^2}$$ Then $$u'=\frac{1}{\sqrt{1-x^2}}$$ $$v=-\frac{2}{3 x^2+1}$$ Now, the answer which is given to you suggests the change of variable to be used.</p> <p>I am sure that you can take it from here.</p>
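A numerical sanity check of the target identity (my own sketch: differentiate the claimed antiderivative by a central difference and compare with the integrand at a few sample points):

```python
from math import asin, atan, sqrt

def F(x):  # claimed antiderivative (constant C omitted)
    return -2 * asin(x) / (3 * x ** 2 + 1) + atan(2 * x / sqrt(1 - x ** 2))

def integrand(x):
    return 12 * x * asin(x) / (9 * x ** 4 + 6 * x ** 2 + 1)

h = 1e-6
max_err = max(abs((F(x + h) - F(x - h)) / (2 * h) - integrand(x))
              for x in (0.2, 0.4, 0.6, 0.8))
```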
619,370
<p>Let $C$ and $Q$ be the field of complex numbers and the field of rational numbers, respectively, and let $f(x),g(x)\in Q[x]$.</p> <p>If $g(x)|f(x)$ in $C[x]$, show that $$g(x)|f(x)$$ in $Q[x]$.</p> <p>My try: since $g(x)|f(x)$, we have $$f(x)=g(x)h(x)$$ where $h(x)\in C[x]$. But I can't prove that we also have $$g(x)|f(x)$$ in $Q[x]$.</p> <p>Thank you</p>
Prahlad Vaidyanathan
89,789
<p>Induct on the degree of $h$: if $deg(h) = 0$, then $h = a_0$, a constant, which must be rational (compare leading coefficients in $f = a_0g$).</p> <p>If the result is true for $deg(h) \leq n-1$, assume $deg(h) = n$, then $$ h(x) = a_0 + a_1x + \ldots + a_nx^n $$ Then, compare the leading coefficients on both sides to conclude that $a_n \in \mathbb{Q}$. Now consider $$ f_1(x) = f(x) - a_nx^ng(x) \in \mathbb{Q}[x],\text{ and } h_1(x) = h(x) - a_nx^n $$ Then $deg(h_1) &lt; n$, and $$ f_1(x) = g(x)h_1(x) $$ and $f_1, g \in \mathbb{Q}[x]$. By induction, it follows that $a_i \in \mathbb{Q}$ for all $i$.</p>
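The induction mirrors ordinary polynomial long division, which never leaves $\mathbb{Q}$: each new quotient coefficient is a quotient of rational numbers. A small sketch with exact rational arithmetic (the example polynomials are my own):

```python
from fractions import Fraction

def poly_divmod(f, g):
    """Divide f by g (coefficient lists, lowest degree first) over Q."""
    f = [Fraction(c) for c in f]
    g = [Fraction(c) for c in g]
    q = [Fraction(0)] * (len(f) - len(g) + 1)
    r = f[:]
    for i in reversed(range(len(q))):
        c = r[i + len(g) - 1] / g[-1]  # rational / rational stays rational
        q[i] = c
        for j, gj in enumerate(g):
            r[i + j] -= c * gj
    return q, r

# Build f = g * h with g = 3 - x + x^3 and h = 1/2 + x^2, all in Q[x]
g = [Fraction(3), Fraction(-1), Fraction(0), Fraction(1)]
h = [Fraction(1, 2), Fraction(0), Fraction(1)]
f = [Fraction(0)] * (len(g) + len(h) - 1)
for i, gi in enumerate(g):
    for j, hj in enumerate(h):
        f[i + j] += gi * hj

q, r = poly_divmod(f, g)
```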
2,485,997
<p><a href="https://i.stack.imgur.com/OkhoU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OkhoU.png" alt="enter image description here"></a></p> <p>Question: What if we consider (1,4,5) and (1,5,4) as non-distinct possibilities, then what should we do?</p> <p>$${{9}\choose{2}}-2\cdot\frac{{9}\choose{2}}{3}$$</p>
Christian Blatter
1,303
<p>Since solutions obtained by a permutation are considered the same we may assume $1\leq a\leq b\leq c$ to begin with. We therefore put $$a=1+x_1,\quad b=a+x_2=1+x_1+x_2,\quad c=b+x_3=1+x_1+x_2+x_3\ ,$$ whereby $x_i\geq0$ $\&gt;(1\leq i\leq3)$ and $a+b+c=3+3x_1+2x_2+x_3=10$, or $$3x_1+2x_2+x_3=7\ .$$ If $x_1=0$ then $2x_2+x_3=7$, which leads to the four solutions $(0,0,7)$, $(0,1,5)$, $(0,2,3)$, $(0,3,1)$.</p> <p>If $x_1=1$ then $2x_2+x_3=4$, which leads to the three solutions $(1,0,4)$, $(1,1,2)$, $(1,2,0)$.</p> <p>If $x_1=2$ we obtain the single solution $(2,0,1)$.</p> <p>In terms of $a$ $b$, $c$ we get the $8$ solutions listed by N.F. Taussig. It remains unclear how your text arrived at $36$ solutions.</p>
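A brute-force enumeration (my own sketch) confirms the count of $8$, and also suggests where the $36$ comes from: $\binom92=36$ is the number of <em>ordered</em> triples of positive integers summing to $10$:

```python
from itertools import product
from math import comb

target = 10
# unordered solutions: 1 <= a <= b <= c
unordered = [(a, b, c)
             for a in range(1, target)
             for b in range(a, target)
             for c in range(b, target)
             if a + b + c == target]
# ordered solutions: compositions of 10 into 3 positive parts
ordered = [t for t in product(range(1, target), repeat=3) if sum(t) == target]
```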
300,531
<p>Prove that: $$ \gamma=-\int_0^{1}\ln \ln \left ( \frac{1}{x} \right) \ \mathrm{d}x.$$</p> <p>where $\gamma$ is Euler's constant ($\gamma \approx 0.57721$).</p> <hr> <p>This integral was mentioned in <a href="http://en.wikipedia.org/wiki/Euler%E2%80%93Mascheroni_constant">Wikipedia</a> as well as in <a href="http://mathworld.wolfram.com/Euler-MascheroniConstant.html">MathWorld</a>, but the solutions I've got use corollaries from <a href="http://en.wikipedia.org/wiki/Harmonic_number">this theorem</a>. Can you give me a simple solution (not using very advanced theorems), or at least some hints.</p>
Argon
27,624
<p>$$I = \int_0^1 \log (-\log x)\,dx = \int_0^\infty e^{-x} \log(x)\,dx$$</p> <p>Noting that</p> <p>$$\Gamma(s) = \int_0^\infty e^{-x} x^{s-1}\, dx$$</p> <p>we find that </p> <p>$$\Gamma'(1) = I = -\gamma$$</p>
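Numerically, the conclusion $\Gamma'(1)=-\gamma$ is easy to sanity-check with a central difference on the gamma function (my own sketch):

```python
from math import gamma

h = 1e-5
gamma_prime_at_1 = (gamma(1 + h) - gamma(1 - h)) / (2 * h)
euler_gamma = 0.5772156649015329  # Euler-Mascheroni constant
```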
300,531
<p>Prove that: $$ \gamma=-\int_0^{1}\ln \ln \left ( \frac{1}{x} \right) \ \mathrm{d}x.$$</p> <p>where $\gamma$ is Euler's constant ($\gamma \approx 0.57721$).</p> <hr> <p>This integral was mentioned in <a href="http://en.wikipedia.org/wiki/Euler%E2%80%93Mascheroni_constant">Wikipedia</a> as well as in <a href="http://mathworld.wolfram.com/Euler-MascheroniConstant.html">MathWorld</a>, but the solutions I've got use corollaries from <a href="http://en.wikipedia.org/wiki/Harmonic_number">this theorem</a>. Can you give me a simple solution (not using very advanced theorems), or at least some hints.</p>
Pedro
23,350
<p>You can see a proof <a href="https://math.stackexchange.com/questions/126713/proof-that-1/127717#127717">here</a> where we use that $$\Gamma(z) = \frac{\exp{(-\gamma z)}}{z}\prod\limits_{n=1}^\infty\frac{\exp \left({\frac z n}\right)}{1+\dfrac z n }$$</p> <p>There is another proof <a href="https://math.stackexchange.com/questions/183428/trying-to-prove-that-lim-n-rightarrow-infty-frac-gamma-n1n-logn/183435#183435">here</a> where we use $$\gamma=\lim\limits_{n\to\infty}\left( H_n-\log n\right)$$</p>
474,807
<p>I am studying a text on permutation groups, which has the following example in a section on regular normal subgroups: </p> <blockquote> <p>If $Z(N)=1$, then $N \cong \mathrm{Inn}(N)$, the group of inner automorphisms of $G$, and the semidirect product $N \rtimes \mathrm{Inn}(N)$ is the diagonal group $N^*=N \times N$ described earlier.</p> </blockquote> <p>My question is on why the group $N \rtimes \mathrm{Inn}(N)$ "is" the group $N \times N$? What would be the group isomorphism? I presume it is not necessary that $N$ be regular and normal since this is not stated explicitly in this example, and I'm not sure why a $G$ enters the picture (is this a typo - should this be an $N$ instead?). </p> <p>Let me recall a couple facts described earlier in the text: </p> <p>(a) If $G$ is a group, then $G^* := G \times G$ acts on $G$ by the rule $\mu(x,(g,h)):=g^{-1} xh$. Thus, $G^*$ is the product action that contains the left and right regular actions as normal subgroups. </p> <p>(b) If $N$ is a regular normal subgroup of $G$, then $G=N \rtimes G_1$; and conversely, if $N$ is any group and $H \le \mathrm{Aut}(N)$, then $N \rtimes H$ acts as a permutation group on $N$, with $N$ as a regular normal subgroup and $H$ as the stabilizer of the identity; the element $hn$ of $N \rtimes H$ acts on $N$ by the rule $\mu(x,hn)=x^h n$. [The group operation in $N \rtimes H$ is not specified in the text, but we do get an action if the binary operation is assumed to be $(n_1,h_1)(n_2,h_2)=(n_1^{h_2} n_2, h_1 h_2)$.]</p> <p>Since $N/Z(N) \cong \mathrm{Inn}(N)$, the assumption $Z(N)=1$ implies $N \cong \mathrm{Inn}(N)$. To get an isomorphism from $N^*:= N \times N$ to $N \rtimes \mathrm{Inn}(N)$, I tried the map $(g,h) \mapsto (g, c(h))$, where $c(h)$ denotes conjugation by $h$. 
But this map does not seem to be a homomorphism: $\phi((g_1,h_1)(g_2,h_2))= \phi((g_1 g_2, h_1 h_2)) = (g_1 g_2, c(h_1 h_2))$, but $\phi((g_1,h_1)) \phi((g_2,h_2)) = (g_1, c(h_1))(g_2,c(h_2)) = (g_1 ^{c(h_2)} g_2, c(h_1 h_2))$. </p>
Derek Holt
2,820
<p>In the semidirect product $N \rtimes \operatorname{Inn} N$, the normal subgroup $N \times 1$ is centralized by the subgroup $\{ (h^{-1},h) \mid h \in N \}$, which is isomorphic to $N$. So this subgroup plays the role of the second factor $1 \times N$ in the direct product $N \times N$.</p> <p>The isomorphism $N \times N \to N \rtimes \operatorname{Inn} N$ is given by $(g,h) \to (h^{-1}g,h)$.</p>
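Since the claim is finitely checkable for a specific centerless $N$, here is a small sketch of mine with $N=S_3$, using the conventions from the question ($n^h=h^{-1}nh$, semidirect multiplication $(n_1,h_1)(n_2,h_2)=(n_1^{h_2}n_2,\,h_1h_2)$, and $\operatorname{Inn}N$ identified with $N$ since $Z(N)=1$):

```python
from itertools import permutations

N = list(permutations(range(3)))  # S_3 as permutation tuples

def mul(a, b):  # group product: (a b)[i] = a[b[i]]
    return tuple(a[b[i]] for i in range(3))

def inv(a):
    r = [0] * 3
    for i, ai in enumerate(a):
        r[ai] = i
    return tuple(r)

def conj(n, h):  # n^h = h^{-1} n h
    return mul(mul(inv(h), n), h)

def sd_mul(p, q):  # multiplication in N x| Inn(N)
    (n1, h1), (n2, h2) = p, q
    return (mul(conj(n1, h2), n2), mul(h1, h2))

def phi(g, h):  # claimed isomorphism N x N -> N x| Inn(N)
    return (mul(inv(h), g), h)

pairs = [(g, h) for g in N for h in N]
is_hom = all(phi(mul(g1, g2), mul(h1, h2)) == sd_mul(phi(g1, h1), phi(g2, h2))
             for g1, h1 in pairs for g2, h2 in pairs)
is_bijective = len(set(phi(g, h) for g, h in pairs)) == len(pairs)

e = (0, 1, 2)
# the subgroup {(h^{-1}, h)} centralizes N x 1, as claimed
centralizes = all(sd_mul((n, e), (inv(h), h)) == sd_mul((inv(h), h), (n, e))
                  for n in N for h in N)
```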
103,970
<p>Here's a very bizarre inconsistency I've just struggled with and I'm wondering why it exists or if I'm missing something.</p> <p>I have some noisy data and I wish to make a framed plot of the data but allow the data to extend outside the vertical limits of the frame (for stylistic reasons). Like so:</p> <pre><code> xs = Range[0, 0.5, 0.005]; data = Transpose[{xs, Sin[Pi xs]^2 + 0.05 RandomReal[{-1, 1}, Length[xs]]}]; opts = Sequence[PlotRange -&gt; {{0, 0.5}, {0, 1}}, Frame -&gt; True, PlotRangeClipping -&gt; False, ImagePadding -&gt; 20]; ListLinePlot[{}, opts, Prolog -&gt; First@ListLinePlot[data, PlotRange -&gt; All] ] </code></pre> <p><a href="https://i.stack.imgur.com/vjuqv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vjuqv.png" alt="enter image description here"></a></p> <p>However, my dataset contains points outside my desired x limits, thus my data is more accurately given by:</p> <pre><code> xs = Range[-0.1, 0.6, 0.005]; data = Transpose[{xs, Sin[Pi xs]^2 + 0.05 RandomReal[{-1, 1}, Length[xs]]}]; </code></pre> <p>Which when plotted with exactly the same code gives:</p> <p><a href="https://i.stack.imgur.com/XstNd.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XstNd.png" alt="enter image description here"></a></p> <p>which obviously extends in both the x and y directions when I only want the extension in the y. </p> <p>My solution is to change the value of <code>PlotRange -&gt; All</code> in the 'Prolog' 'ListLinePlot'. 
However, this only works in the y-direction, observe:</p> <pre><code> Grid[{{ ListLinePlot[{}, opts, Prolog -&gt; First@ListLinePlot[data, PlotRange -&gt; {All, {0, 1}}] ], ListLinePlot[{}, opts, Prolog -&gt; First@ListLinePlot[data, PlotRange -&gt; {{0, 1}, All}] ], ListLinePlot[{}, opts, Prolog -&gt; First@ListLinePlot[data, PlotRange -&gt; {{0, 1}, {0, 1}}] ] }}] </code></pre> <p><a href="https://i.stack.imgur.com/YuJ95.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YuJ95.png" alt="enter image description here"></a></p> <p>As you can see, the data never obeys the PlotRange in the x direction! Looking into the content of the <code>First@ListLinePlot[data, ...]</code> we can see that the graphics items describing the data do get clipped in the y-direction:</p> <p><a href="https://i.stack.imgur.com/Gn4wW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Gn4wW.png" alt="enter image description here"></a></p> <p>You can see there are several instances near the beginning where the <code>Line</code>'s y-coordinate has been clipped to <code>0.</code> and many at the end where it is <code>1.</code>. </p> <p>However, if we try to restrict the graphics in the x-direction:</p> <p><a href="https://i.stack.imgur.com/F0Zdy.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/F0Zdy.png" alt="enter image description here"></a></p> <p>We see no such clipping occurs, leading to the problems described above.</p> <p>Why is this happening? Is there an elegant workaround? My current method to circumvent this problem is to create an <code>Interpolation</code> of my data and then use <code>Plot</code> as opposed to <code>ListLinePlot</code> in the <code>Prolog</code>, but this seems like overkill for what should be a simple fix.</p> <p>I note that merely taking a subset of my data won't work for my real data as the x-values do not lie nicely on my coordinates, i.e. I might not have a value at 0, so I would be left with a gap on either side. 
</p> <p>Many thanks. </p>
bbgodfrey
1,063
<p>I, too, could not find a simple, transparent solution, so here is a simple but not transparent solution.</p> <pre><code>ListLinePlot[Reverse[data, 2], PlotRange -&gt; {All, {0, 0.5}}]; Reverse[Cases[%, Line[{z__}] -&gt; z, Infinity], 2]; ListLinePlot[{}, opts, Prolog -&gt; First@ListLinePlot[%, PlotRange -&gt; All]] </code></pre> <p><a href="https://i.stack.imgur.com/iGR0o.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iGR0o.png" alt="enter image description here"></a></p> <p>Because <code>ListLinePlot</code> appears to treat <code>PlotRange</code> differently for the two coordinates, plot the data with the coordinates reversed, extract the <code>Line</code>, reverse the coordinates again, and plot the result.</p> <p><strong>Addendum</strong></p> <p>A more stringent test is to use data for which no points lie precisely at the edges of the plot in <code>x</code>. For instance, with <code>xs = Range[-0.15, 0.6, 0.1];</code>, the plot still works properly. (A coarse set of data is used for visual clarity.)</p> <p><a href="https://i.stack.imgur.com/Hgqdo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Hgqdo.png" alt="enter image description here"></a></p> <p>Approaches that, directly or indirectly, simply delete data points outside <code>PlotRange</code> would have blank space at either end of the plot.</p>
220,139
<p>Is there a way to either make <code>FindMinimum</code> do an exact computation or <code>Minimize</code> find also the local minima? Or other ideas to find local minima exactly?</p> <p><strong>Example:</strong> find all local minima (exact values, not approximations) of <span class="math-container">$f(x,y)=x^2 − x + 2y^2$</span> on <span class="math-container">$E=\{(x,y)\in \mathbb{R}^2 : x^2+y^2\le 1\}$</span>.</p>
rmw
57,128
<p><strong>Extrema on the edge.</strong></p> <pre><code>f = x^2 - x + 2 y^2; g = x^2 + y^2 - 1; L = f + \[Lambda] g; pts = Solve[{Grad[L, {x, y}] == 0, g == 0}, {x, y, \[Lambda]}]; points = pts[[All, 1 ;; 2]]; critpts = Thread@{f /. points, points}; </code></pre> <p>bordered Hessian:</p> <pre><code>hesseMatrix = {{0, D[g, x], D[g, y]}, {D[g, x], D[L, x, x], D[L, x, y]}, {D[g, y], D[L, y, x], D[L, y, y]}} /. pts; detHesse = Det /@ hesseMatrix {-4, 6, 6, -12} max = Cases[Thread@{Sign /@ detHesse, critpts}, {1, _}][[All, 2]]; min = Cases[Thread@{Sign /@ detHesse, critpts}, {-1, _}][[All, 2]]; sol = Association[&quot;Minima&quot; -&gt; min, &quot;Maxima&quot; -&gt; max] </code></pre> <p><a href="https://i.stack.imgur.com/2ylWj.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2ylWj.gif" alt="enter image description here" /></a></p> <p><strong>Extrema within the region.</strong></p> <pre><code>sol2 = Solve[{Grad[f, {x, y}] == 0, x^2 + y^2 &lt; 1}, {x, y}]; globalMin = Thread@{f /. sol2, sol2} </code></pre> <p><a href="https://i.stack.imgur.com/rtyqz.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rtyqz.gif" alt="enter image description here" /></a></p> <pre><code>Eigenvalues@D[f, {{x, y}, 2}] /. sol2 {4, 2} </code></pre> <p>Eigenvalues are positive; this is the global minimum.</p> <pre><code>reg = ImplicitRegion[x^2 + y^2 &lt;= 1, {x, y}]; Show[ Plot3D[f, {x, y} \[Element] reg, Mesh -&gt; 10, MeshFunctions -&gt; {#^2 + #2^2 &amp;}, PlotTheme -&gt; &quot;FilledSurface&quot;], Graphics3D[{ Red, PointSize@0.03, Point[{x, y, f} /. sol[&quot;Minima&quot;][[All, 2]]], Green, PointSize@0.03, Point[{x, y, f} /. sol[&quot;Maxima&quot;][[All, 2]]], Blue, PointSize@0.03, Point[{x, y, f} /. globalMin[[1, 2]]]}] ] </code></pre> <p><a href="https://i.stack.imgur.com/BtJMp.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BtJMp.gif" alt="enter image description here" /></a></p>
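As an independent cross-check outside <em>Mathematica</em> (a grid-search sketch of my own): working the stationary points by hand gives an interior minimum $f=-\tfrac14$ at $\left(\tfrac12,0\right)$ and a boundary maximum $f=\tfrac94$ at $\left(-\tfrac12,\pm\tfrac{\sqrt3}{2}\right)$, and a coarse grid over the disk agrees:

```python
def f(x, y):
    return x ** 2 - x + 2 * y ** 2

# coarse grid over the unit disk x^2 + y^2 <= 1
grid = [(i / 100, j / 100)
        for i in range(-100, 101) for j in range(-100, 101)
        if (i / 100) ** 2 + (j / 100) ** 2 <= 1]
f_min = min(f(x, y) for x, y in grid)
f_max = max(f(x, y) for x, y in grid)
```

The grid maximum stays slightly below $9/4$ because no lattice point lands exactly on $\left(-\tfrac12,\pm\tfrac{\sqrt3}{2}\right)$.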
1,812,468
<p>Let $x=\{a,b\}$ be a set. Is it then true that $x\in\{a,b\}$?</p> <p>I think the answer is yes, but why?</p>
Fnacool
318,321
<p>Begin with </p> <p>$$E [e^{a W_s} ] = e^{\frac{a^2s}{2}}.$$ </p> <p>This is true for all complex-valued $a$. Now assume $a$ is real. </p> <p>Set $X_s^a = e^{a W_s}$. Note that $(X_s^a)^2 =X_s^{2a}$. </p> <p>Therefore for $T&lt;\infty$</p> <p>$$ E [\int_0^T (X_s^a)^2 ds ] =\int_0^T E [ e^{2aW_s} ]ds = \int_0^T e^{2a^2 s} ds=\frac{1}{2a^2} (e^{2a^2T} -1)&lt;\infty.$$</p> <p>This implies $ \int_0^T (X_s^a)^2\, ds &lt;\infty$ a.s. and answers 2. (specifically: a.s. the integral is finite for all rational $t$, which implies that a.s. the integral is finite for all $t$). As for 1., take the limit $T\to\infty$ and apply monotone convergence to obtain that for any $a$ (including the trivial case $a=0$), $E [ \int_0^\infty (X_s^a)^2 ds]=\infty$. </p> <p>Think about what would happen if $a$ were purely imaginary. </p>
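<p>Not in the original post, but the closed form used above is easy to sanity-check numerically. A quick quadrature of $\int_0^T e^{2a^2 s}\,ds$ against $(e^{2a^2 T}-1)/(2a^2)$, with illustrative values of $a$ and $T$:</p>

```python
import math

a, T = 0.7, 2.0   # illustrative values, not from the post

# trapezoid-rule quadrature of E[(X_s^a)^2] = e^{2 a^2 s} over [0, T]
n = 200000
h = T / n
quad = h * (0.5 * (1.0 + math.exp(2*a*a*T))
            + sum(math.exp(2*a*a*(i*h)) for i in range(1, n)))

# the closed form (e^{2 a^2 T} - 1) / (2 a^2)
closed = (math.exp(2*a*a*T) - 1) / (2*a*a)
```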
239,863
<p>I have to study this series:</p> <p>$$\sum_{n=1}^\infty e^{\sqrt n\,x}$$ </p> <p>My teacher wrote that, by asymptotic comparison with the series</p> <p>$$\sum_{n=1}^\infty\frac{1}{n^2},$$<br> my series converges for every </p> <p>$$x&lt;0.$$</p> <p>I don't understand the reasoning, and I hope someone can enlighten me!</p> <p>=) Thanks. Leonardo.</p>
Hui Yu
19,811
<p>Let $X$ be a general Banach space and $X^*$ its dual. Then the weak topology on a bounded subset of $X$ is determined by a dense subset of $X^*$ in the following sense:</p> <blockquote> <p>If $D\subset X^*$ is dense and $(x_\alpha)\subset X$ is uniformly bounded in norm, then $x_\alpha$ converges to $x$ weakly if and only if $x_\alpha(d)\to x(d)$ for all $d\in D$.</p> </blockquote> <p>Now apply this, assuming 2). The first condition says the $x_n$ are uniformly bounded in norm, while the second says $x_n$ converges pointwise on some dense subset of $\ell^q$ (I am pretty sure you can figure out this dense subset). So the result follows.</p>
Focus
254,076
<p>We claim that <span class="math-container">$\textrm{Ball}(\ell^p)$</span> is weakly compact and weakly metrizable.</p> <p>Since <span class="math-container">$1&lt;p&lt;\infty$</span>, we know <span class="math-container">$\ell^p$</span> is reflexive, so by Alaoglu's theorem, <span class="math-container">$\textrm{Ball}(\ell^p)$</span> is weakly compact. Also, <span class="math-container">$\textrm{Ball}(\ell^p)$</span> is weakly metrizable by the separability of <span class="math-container">$\ell ^q$</span> where <span class="math-container">$1/p +1/q =1$</span>. Q.E.D.</p> <p>Now, we can assume that all <span class="math-container">$x_n \in \textrm{Ball}(\ell^p)$</span>. Consider a subsequence <span class="math-container">$(x_{n_{k}})_k$</span>. By the above claim, there is a sub-subsequence <span class="math-container">$(x_{n_{k_{l}}})_l$</span>and an element <span class="math-container">$y$</span> such that <span class="math-container">$x_{n_{k_{l}}} \rightharpoonup y$</span>. It is easy to check that <span class="math-container">$y \equiv x$</span>, which implies that <span class="math-container">$x_n \rightharpoonup x$</span>.</p>
347,315
<p>How can I find </p> <ol> <li>The image of the upper half plane $\mathrm{Im}(z)&gt;0$ under the linear fractional transformation $w=\dfrac{3z + i}{-iz + 1}$.</li> <li>The image of the set $\{z\in\mathbb{C}\setminus\{0\} : \mathrm{Im}(z) = \mathrm{Re}(z)\}$ under $w=z + \dfrac{1}{z}$.</li> </ol> <p>For 1., I consider $y&gt;0, x \in \mathbb{R}$ and then find the inverse $z=\dfrac{w-i}{3+iw}$. This gave me $x=\dfrac{2u}{{(3-v)}^2+u^2}$ and $y=\dfrac{-v^2-u^2+4v-3}{(3-v)^2+u^2}$. Then since $y&gt;0$, ${-v^2-u^2+4v-3}&gt;0$, which gave me ${(v-2)}^2+u^2&lt;1$. I therefore knew that the image is the interior of the unit disk centered at $(0,2)$ and drew it. So I need confirmation. But for the second one, I am confused about what the image should be, because if $\mathrm{Im}(z)=\mathrm{Re}(z)$ and $w=z+\dfrac{1}{z}$, then $z$ should not be $0$, but I thought the line $\mathrm{Im}(z)=\mathrm{Re}(z)$ must pass through the origin. I am waiting for help!</p>
Andreas Caranti
58,401
<p>$\operatorname{PGL}_{2}(q)$ is centreless, while $\operatorname{SL}_{2}(q)$ has a centre $D_{2}$ of order $2$ for $q$ odd.</p> <p>As a reference, see the <a href="http://en.wikipedia.org/wiki/Projective_special_linear_group#Finite_fields" rel="nofollow">Wikipedia article about the projective special linear group</a>, or Steve's post ;-)</p>
Stephen
146,439
<p>There are natural maps $\mathrm{SL}_2(q) \hookrightarrow \mathrm{GL}_2(q) \twoheadrightarrow \mathrm{PGL}_2(q)$; composing these gives a map from $\mathrm{SL}_2(q)$ to $\mathrm{PGL}_2(q)$. For $q$ not a power of $2$ this map definitely has a kernel, the subgroup of order $2$ generated by the diagonal matrix $-1$. For $q$ a power of $2$ the kernel is trivial, since in that case $a^2=1$ implies $a=1$ for $a \in F$ a field of char. $2$, so your calculation of the orders shows that over a field of char. $2$ (for $q$ any power of $2$) the groups are isomorphic.</p> <p>In general, the center of $\mathrm{SL}_2(q)$ consists of scalar matrices $a$ for which $a^2=1$; meanwhile, the group $\mathrm{PGL}_2(q)$ has trivial center: if $g$ commutes with every matrix modulo scalars, $g h g^{-1}=a h$ for some scalar $a$, then $g$ in particular normalizes the torus and hence is either diagonal or anti-diagonal, but a quick calculation shows that $g$ cannot be anti-diagonal and then another that it must itself be scalar.</p> <p>Added: Andreas just beat me to this, with a much pithier answer.</p>
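<p>Not part of the original answer, but for a small field the order and center claims can be verified by brute force. A Python sketch for $q=3$ (matrices as 4-tuples over GF(3)):</p>

```python
from itertools import product

q = 3  # brute force over the prime field GF(3)

def mul(A, B):
    (a, b, c, d), (e, f, g, h) = A, B
    return ((a*e + b*g) % q, (a*f + b*h) % q,
            (c*e + d*g) % q, (c*f + d*h) % q)

def det(A):
    return (A[0]*A[3] - A[1]*A[2]) % q

GL = [M for M in product(range(q), repeat=4) if det(M) != 0]
SL = [M for M in GL if det(M) == 1]

# center of SL_2(3): elements commuting with everything
center_SL = [M for M in SL if all(mul(M, N) == mul(N, M) for N in SL)]

def scalar_class(M):
    # the image of M in PGL_2(q): M identified with s*M, s a nonzero scalar
    return frozenset(tuple((s*x) % q for x in M) for s in range(1, q))

# central in PGL iff it commutes with every matrix up to a scalar
central = {scalar_class(M) for M in GL
           if all(mul(M, N) in scalar_class(mul(N, M)) for N in GL)}
pgl_order = len({scalar_class(M) for M in GL})
```

This confirms $|\mathrm{GL}_2(3)|=48$, $|\mathrm{SL}_2(3)|=24=|\mathrm{PGL}_2(3)|$, that the center of $\mathrm{SL}_2(3)$ is $\{\pm I\}$, and that $\mathrm{PGL}_2(3)$ is centerless.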
69,961
<p>I want to determine the set of natural numbers that can be expressed as the sum of some non-negative number of 3s and 5s.</p> <p>$$S=\{3k+5j∣k,j∈\mathbb{N}∪\{0\}\}$$</p> <p>I want to check whether that would be: 0, 3, 5, 6, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, and so on.</p> <p>Meaning that it would include 0, 3, 5, and 6, and then every natural number from 8 on. But how would I explain it as a set? Or prove that these are the numbers in the set?</p>
Austin Mohr
11,245
<p>By inspection, you can see how to represent 0, 3, 5, and 8. Now, given any integer $n \geq 9$, classify $n$ into one of the following cases.</p> <p>Case $n \equiv 0$ (mod 3): In this case, $n = 3k$ for some $k$, so there is nothing to show (it is already a certain multiple of 3).</p> <p>Case $n \equiv 1$ (mod 3): In this case, $n = 3k + 1$ for some $k \geq 3$ (since $n \geq 9$). We can write $n = 3(k-3) + 5 \cdot 2$.</p> <p>Case $n \equiv 2$ (mod 3): In this case, $n = 3k + 2$, for some $k \geq 3$. We can write $n = 3(k-1) + 5$.</p> <p>So, not only do all those numbers belong to the set, but there is an easy way to figure out how they are represented.</p>
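<p>Not in the original answer, but the case analysis is easy to confirm by brute force in Python:</p>

```python
def representable(n):
    # can n be written as 3k + 5j with k, j >= 0?
    return any((n - 5*j) % 3 == 0 for j in range(n // 5 + 1))

hits = [n for n in range(100) if representable(n)]
gaps = [n for n in range(100) if not representable(n)]
```

The only gaps below 100 are 1, 2, 4, and 7, matching the classification above.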
Jyrki Lahtonen
11,619
<p>This type of a question falls under the umbrella of <a href="http://en.wikipedia.org/wiki/Numerical_semigroup" rel="nofollow">numerical semigroups.</a> The numerical semigroup generated by a set of positive integers consists of their linear combinations with non-negative integer coefficients. So your set is the numerical semigroup generated by $\{3,5\}$.</p> <p>I have encountered these as sets of 'pole numbers' of algebraic curves. In that context the 'gaps' (the numbers missing from the list) are related to the genus of the curve. </p> <p>For example, it is known that, if $(a,b)=1$, then the number of gaps of the numerical semigroup generated by $\{a,b\}$ equals $(a-1)(b-1)/2$. This checks out in your case. You have $a=3,b=5$, so the number of gaps should be $(3-1)(5-1)/2=4$, and you have found all of them: $1,2,4,7$. It is also known (and not hard to prove) that the largest gap is $(a-1)(b-1)-1$. In your case $(3-1)(5-1)-1=7$.</p>
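<p>Not in the original answer, but both stated formulas (the number of gaps and the largest gap) can be checked computationally for a few coprime pairs:</p>

```python
from math import gcd

def gap_set(a, b):
    # numbers up to a*b not representable as a*k + b*j with k, j >= 0
    limit = a * b
    rep = {a*k + b*j for k in range(limit // a + 1) for j in range(limit // b + 1)}
    return sorted(n for n in range(limit + 1) if n not in rep)

g35, g47 = gap_set(3, 5), gap_set(4, 7)
```

For $(3,5)$ the gaps are exactly $\{1,2,4,7\}$: four of them, as $(3-1)(5-1)/2$ predicts, with largest gap $7=(3-1)(5-1)-1$.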
2,864,585
<p>I tried to calculate the Hessian matrix of the linear least squares problem (L-2 norm), in particular:</p> <p>$$f(X) = \|AX - B \|_2$$ where $f:\mathbb{R}^{11\times 2}\rightarrow \mathbb{R}$</p> <p>Can someone help me?<br> Thanks a lot.</p>
littleO
40,119
<p>Let $f:\mathbb R^n \to \mathbb R$ be defined by $$ f(x)=\frac12 \|Ax-b\|^2. $$ Notice that $f(x)=g(h(x))$, where $h(x)=Ax-b$ and $g(y) = \frac12 \|y\|^2$. The derivatives of $g$ and $h$ are given by $$ g'(y)=y^T, \quad h'(x)=A. $$ The chain rule tells us that $$ f'(x)=g'(h(x))h'(x) = (Ax-b)^T A. $$ If we use the convention that the gradient is a column vector, then $$ \nabla f(x)=f'(x)^T=A^T(Ax-b). $$</p> <p>The Hessian $Hf(x)$ is the derivative of the function $x \mapsto \nabla f(x)$, so: $$ Hf(x)= A^T A. $$</p>
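<p>Not part of the original answer, but the result $Hf(x)=A^TA$ can be verified numerically with finite differences. A sketch in Python (random small problem, pure standard library):</p>

```python
import random

random.seed(0)
m, n = 5, 3
A = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(m)]
b = [random.uniform(-1, 1) for _ in range(m)]

def f(x):
    # f(x) = 0.5 * ||A x - b||^2
    r = [sum(A[i][j]*x[j] for j in range(n)) - b[i] for i in range(m)]
    return 0.5 * sum(v*v for v in r)

def hess_fd(x, h=1e-4):
    # central finite-difference Hessian of f at x (exact for quadratics,
    # up to roundoff)
    H = [[0.0]*n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            def fpp(si, sj):
                y = list(x); y[i] += si*h; y[j] += sj*h
                return f(y)
            H[i][j] = (fpp(1, 1) - fpp(1, -1) - fpp(-1, 1) + fpp(-1, -1)) / (4*h*h)
    return H

AtA = [[sum(A[k][i]*A[k][j] for k in range(m)) for j in range(n)] for i in range(n)]
H = hess_fd([0.3, -0.2, 0.5])
```

Since $f$ is quadratic, the numerical Hessian matches $A^TA$ at any point.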
4,108,996
<p>I am a little confused about how the author came to this discriminant of this quadratic polynomial. I understand that the discriminant comes from solving for a variable, and the discriminant is whatever is under the square root. Below is information directly from the PDF:</p> <p>(1) <span class="math-container">$f(y) = (y^2 − v^2)(y − u)$</span>,</p> <p>with rational u, v. Indeed, to get all the roots of f rational, it is enough to assume that u, v are rational.</p> <p>The quadratic polynomial:</p> <p>(2) <span class="math-container">$f'(y) = 3y^2 − 2yu − v^2$</span></p> <p>has rational roots, if and only if its discriminant <span class="math-container">$4(u^2 + 3v^2)$</span> is the square of a rational number.</p> <p>How is the discriminant <span class="math-container">$4(u^2 + 3v^2)$</span>? When I solve for <span class="math-container">$y$</span> in equation (2), I get <span class="math-container">$(u^2 + 3v^2)$</span> under the square root. I am guessing that the author is assuming that <span class="math-container">$4(u^2 + 3v^2)$</span> is the discriminant?</p>
Amirali
794,843
<p>We have <span class="math-container">$f'(y)=3y^2-2uy-v^2$</span>. it is a quadratic equation in <span class="math-container">$y$</span>. so discriminant is <span class="math-container">$$\Delta=(-2u)^2-4(3)(-v^2)=4u^2+4(3v^2)=4(u^2+3v^2)$$</span></p>
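<p>Not part of the original answer, but the identity is quick to spot-check for a few sample values of $u$ and $v$:</p>

```python
# For f'(y) = 3y^2 - 2uy - v^2 (so a = 3, b = -2u, c = -v^2), check that
# b^2 - 4ac equals 4(u^2 + 3v^2) on a few integer samples.
samples = [(1, 1), (2, 3), (-4, 5), (7, -2)]
deltas = [(-2*u)**2 - 4*3*(-v**2) for u, v in samples]
expected = [4*(u*u + 3*v*v) for u, v in samples]
```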
Octonions
914,583
<p>There are many ways to find the roots of a quadratic polynomial. The first thing you want to figure out is how many real roots it has. How to do that? The discriminant tells you this exact type of information.</p> <p>First, I would like to figure out the roots of the polynomial <span class="math-container">$f(x) = ax^2 + bx + c$</span>. How can we figure out the roots? We can easily do that by setting <span class="math-container">$f(x) = 0$</span>. Thus: <span class="math-container">$$\begin{align*} f(x) = 0\\ a x^2 + b x + c = 0 \end{align*}$$</span></p> <p>Solving for <span class="math-container">$x$</span> in there can be done in many ways, but I will go with the completing the square idea. First step is to divide the entire equation by <span class="math-container">$a$</span> and then move constant terms to the right hand side: <span class="math-container">$$\begin{align*} a x^2 + b x + c &amp; = 0\\ x^2 + \frac{b}{a} x + \frac{c}{a} &amp; = 0\\ x^2 + \frac{b}{a} x &amp; = -\frac{c}{a} \end{align*}$$</span></p> <p>Now let us think about <span class="math-container">$(u + v)^2$</span>. If you expand it, you will get: <span class="math-container">$$\begin{align*} (u + v)^2 &amp; = u^2 + uv + uv + v^2\\ (\color{green}u + \color{blue}v)^2 &amp; = \color{green}u^2 + 2\color{green}u\color{blue}v + \color{blue}v^2 \end{align*}$$</span></p> <p>I will now highlight a few terms in the equation for the roots and also do a small manipulation. Maybe you will feel more comfortable about noticing a few things. <span class="math-container">$$\begin{align*} \color{green}x^2 + \color{blue}{\frac{b}{a}}\color{green}x &amp; = -\frac{c}{a}\\ \color{green}x^2 + \color{red}2\color{green}x\color{blue}{\frac{b}{\color{red}{2}a}} &amp; = -\frac{c}{a}\\ (\color{green}x)^2 + 2(\color{green}x)\left(\color{blue}{\frac{b}{2a}}\right) &amp; = -\frac{c}{a} \end{align*}$$</span></p> <p>If you look at the equation we got above, you will notice it is almost a perfect square. However, it is missing a term and we can add it to both sides. <span class="math-container">$$\begin{align*} (\color{green}x)^2 + 2(\color{green}x)\left(\color{blue}{\frac{b}{2a}}\right) \color{red}{+ \left(\frac{b}{2a}\right)^2} &amp; = -\frac{c}{a} \color{red}{+ \left(\frac{b}{2a}\right)^2}\\ (\color{green}x)^2 + 2(\color{green}x)\left(\color{blue}{\frac{b}{2a}}\right) + \left(\color{blue}{\frac{b}{2a}}\right)^2 &amp; = -\frac{c}{a} + \left(\color{blue}{\frac{b}{2a}}\right)^2\\ \left(\color{green}x + \color{blue}{\frac{b}{2a}}\right)^2 &amp; = \left(\color{blue}{\frac{b}{2a}}\right)^2 - \frac{c}{a} \end{align*}$$</span></p> <p>We can simplify the above but first we must notice that the solutions to <span class="math-container">$x^2 = n^2$</span> can be both <span class="math-container">$x = -n$</span> and <span class="math-container">$x = n$</span>. Thus: <span class="math-container">$$\begin{align*} x^2 = n^2\\ x = \pm \sqrt{n^2} \end{align*}$$</span></p> <p>Going back to our problem: <span class="math-container">$$\begin{align*} \left(x + \frac{b}{2a}\right)^2 &amp; = \left(\frac{b}{2a}\right)^2 - \frac{c}{a}\\ \left(x + \frac{b}{2a}\right)^2 &amp; = \frac{b^2}{4a^2} - \frac{c}{a}\\ \left(x + \frac{b}{2a}\right)^2 &amp; = \frac{b^2}{4a^2} - \frac{\color{red}{4a}c}{\color{red}{4a}a}\\ \left(x + \frac{b}{2a}\right)^2 &amp; = \frac{b^2}{4a^2} - \frac{4ac}{4a^2}\\ \left(x + \frac{b}{2a}\right)^2 &amp; = \frac{b^2 - 4ac}{4a^2}\\ x + \frac{b}{2a} &amp; = \pm \sqrt{\frac{b^2 - 4ac}{4a^2}}\\ x + \frac{b}{2a} &amp; = \pm \frac{\sqrt{b^2 - 4ac}}{2a}\\ x &amp; = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a} \end{align*}$$</span></p> <p>We have successfully found what gives us the roots of a quadratic polynomial. However, there is something which tells us the nature of these roots: the discriminant (<span class="math-container">$\Delta$</span>).</p> <p>In the formula we derived, whether the roots are real or not is determined by the expression under the square root. 
Thus, we define the inside of the square root as <strong>the discriminant of the polynomial</strong>. <span class="math-container">$$\boxed{\Delta = b^2 - 4ac}$$</span></p> <p>In the formula you provided, we have: <span class="math-container">$$\begin{align*} f(y) &amp; = 3y^2 - 2uy - v^2\\ f(y) &amp; = \color{red}{3}y^2 \color{red}{-2u}y \color{red}{-v^2}\\ a &amp; = 3\\ b &amp; = -2u\\ c &amp; = -v^2 \end{align*}$$</span></p> <p>Therefore, your discriminant <span class="math-container">$\Delta$</span> will be: <span class="math-container">$$\begin{align*} \Delta &amp; = (-2u)^2 - 4(3)(-v^2)\\ \Delta &amp; = 4u^2 + 4(3v^2)\\ \Delta &amp; = 4(u^2 + 3v^2) \end{align*}$$</span></p> <p>Which is what you originally asked.</p>
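<p>Not part of the original answer, but the derived formula is easy to test numerically on the polynomial from the question:</p>

```python
import math

def roots(a, b, c):
    # the formula derived above; assumes a != 0 and b^2 - 4ac >= 0
    d = math.sqrt(b*b - 4*a*c)
    return ((-b - d) / (2*a), (-b + d) / (2*a))

# apply it to f'(y) = 3y^2 - 2uy - v^2 with sample values u = 2, v = 1
u, v = 2, 1
a, b, c = 3, -2*u, -v**2
r1, r2 = roots(a, b, c)
resid = max(abs(a*r*r + b*r + c) for r in (r1, r2))
```

Both returned values are roots of the polynomial, and the discriminant agrees with $4(u^2+3v^2)$.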
814,020
<p><strong>Preamble</strong></p> <p>The <a href="http://mathworld.wolfram.com/CassiniOvals.html" rel="nofollow">Cassinian curves</a> are the pre-images of concentric circles (centered at $1+0\,i$) under the map $z\mapsto z^2$. Using this fact and the fact that complex polynomials are conformal we can deduce that the orthogonal trajectories to the Cassinian curves map to straight lines passing through the point $1+0\,i$. These lines can be written as</p> <p>$$w(\lambda) = 1 + \lambda \,e^{i\theta}$$</p> <p>where $\theta$ is the angle the line makes with the real axis. Applying the function $z\mapsto \sqrt{z}$ to these lines then gives us back the orthogonal trajectories to the Cassinian curves.</p> <p><strong>Question</strong></p> <p>The book I am working through asks the reader to use this to show that the orthogonal trajectories are <em>hyperbolae</em> but I can't seem to make much headway with it. How does one show this?</p> <p><strong>Attempt</strong></p> <p>I tried expanding the real and imaginary components of the line ($w=u+i\,v$) and its image ($z=x+i\,y$), equating $z=\sqrt{w}$ and squaring both sides, which gives</p> <p>$$x^2-y^2 + 2 i\,xy = \underbrace{1+\lambda\,\cos(\theta)}_u + i\,\underbrace{\lambda\sin(\theta)}_v$$</p> <p>but apart from the fact we have $x^2-y^2$ in there this doesn't seem so useful to me. It might also be useful to note that the orthogonal trajectories must pass through $z=\pm 1$.</p>
Daniel Fischer
83,702
<p>The preimage of the real line under $f(z) = z^2$ is not a hyperbola, it's the limiting case, the two coordinate axes.</p> <p>For all other straight lines passing through $1$, let us describe them by an equation: $L_c = \{ u+i v : u-1 = cv\}$, and let $H_c = f^{-1}(L_c)$. For $H_c$, we then obtain the describing equation</p> <p>$$x^2-y^2-1 = c(2xy)\qquad \text{resp.} \qquad x^2 - 2cxy - y^2 = 1.\tag{1}$$</p> <p>That is the equation of a hyperbola, but not yet in standard form (except for $c = 0$). We obtain an equation in standard form by rotating the coordinate system.</p> <p>Let $\alpha = \frac{1}{2} \arctan c$. Then we can write $(1)$ as</p> <p>$$\begin{align} x^2 - 2xy\tan 2\alpha - y^2 &amp;= 1\\ x^2\cos 2\alpha - 2xy\sin 2\alpha - y^2\cos 2\alpha &amp;= \cos 2\alpha\\ (x\cos\alpha - y\sin\alpha)^2 - (x\sin\alpha + y\cos\alpha)^2 &amp;= \cos 2\alpha \end{align}$$</p> <p>and in the rotated coordinates</p> <p>$$\begin{pmatrix}\xi\\ \eta\end{pmatrix} = \begin{pmatrix}\cos\alpha &amp; -\sin\alpha\\ \sin\alpha &amp;\cos\alpha \end{pmatrix} \begin{pmatrix} x \\ y\end{pmatrix}$$</p> <p>we have the standard form</p> <p>$$\xi^2 - \eta^2 = \cos 2\alpha.\tag{2}$$</p> <p>Thus we obtain</p> <p>$$H_{c} = e^{\large -\frac{i}{2}\arctan c}\cdot \left\{\xi + i\eta : \xi^2-\eta^2 = \frac{1}{\sqrt{1+c^2}}\right\}.$$</p>
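<p>Not part of the original answer, but equation $(1)$ is easy to confirm numerically: take points on the line $u-1=cv$, pull them back through the principal square root, and check that they satisfy $x^2-2cxy-y^2=1$. A sketch in Python (the value of $c$ is illustrative):</p>

```python
import cmath

c = 0.75  # slope parameter of the line u - 1 = c*v (illustrative)
errs = []
for v in [-2.0, -0.5, 0.3, 1.7]:
    w = complex(1 + c*v, v)   # a point on the line L_c
    z = cmath.sqrt(w)         # its preimage under f(z) = z^2
    x, y = z.real, z.imag
    errs.append(abs(x*x - 2*c*x*y - y*y - 1))  # residual of equation (1)
```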
4,627,334
<p>To my understanding, a primitive triple has legs $x$ and $y$ that can be written as $x = q^2 - p^2$ and $y=2pq$ for relatively prime $q &gt; p$ of opposite parity, so the area can be calculated as $\frac{1}{2}xy = pq(q^2 - p^2) = pq(q+p)(q-p)$. Am I missing something obvious that helps prove that a Pythagorean triple triangle cannot have an odd area?</p>
Hagen von Eitzen
39,174
<p>If <span class="math-container">$a^2+b^2=c^2$</span> with integers <span class="math-container">$a,b,c$</span>, recall that the square of an even number is <span class="math-container">$\equiv 0\pmod4$</span> and the square of an odd number is <span class="math-container">$\equiv1\pmod8$</span>. Therefore, either <span class="math-container">$a,b,c$</span> are even (and then <span class="math-container">$\frac12ab$</span> is even) or <span class="math-container">$c$</span> and one of <span class="math-container">$a,b$</span> are odd and the other is a multiple of 4 (because its square is a multiple of 8), so again <span class="math-container">$\frac12ab$</span> is even.</p>
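<p>Not part of the original answer, but the mod-8 argument can be double-checked by brute force over all triples with small hypotenuse:</p>

```python
# brute-force check: every Pythagorean triple with hypotenuse up to 120
# has even area a*b/2
triples = 0
odd_areas = 0
for c in range(1, 121):
    for a in range(1, c):
        for b in range(a, c):
            if a*a + b*b == c*c:
                triples += 1
                if (a * b // 2) % 2 == 1:   # area = a*b/2 (a*b is always even)
                    odd_areas += 1
```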
186,240
<p>I need some background in topology (I'm especially interested in boundary points and open sets), a few examples of solved exercises about limits of functions ($f:\mathbb{R}^{n}\rightarrow \mathbb{R}^m$) using $\epsilon, \delta$, and also some theory of continuous functions. Please give me some links or names of books which can help me. </p> <p>Thanks :) </p>
user642796
8,348
<p>If you don't want to go straight to general topology, you could look at a book more specifically about metric spaces, like Mícheál O'Searcoid's <a href="http://books.google.com/books?id=aP37I4QWFRcC&amp;printsec=frontcover&amp;hl=de#v=onepage&amp;q&amp;f=false" rel="nofollow"><em>Metric Spaces</em></a>.</p>
1,278,860
<p>Use the process of implicit differentiation to find $dy/dx$ given that:</p> <p>$$x^2e^y − y^2e^x=0 $$</p> <p>I am trying first to find $y$, </p> <p>$$y^2e^x = x^2e^y$$</p> <p>$$y^2 = (x^2e^y)/e^x$$</p> <p>$$y = \sqrt{(x^2e^y)/e^x}$$</p> <p>Is this correct? I have the feeling it is not.</p>
Autolatry
25,097
<p>So, implicitly differentiating: $$x^{2}\frac{dy}{dx}e^{y}+2x e^{y}-\left(y^{2}e^{x}+2y\frac{dy}{dx}e^{x}\right)=0$$ Collecting like terms $$\frac{dy}{dx}\left(x^{2}e^{y}-2ye^{x} \right)=y^{2}e^{x}-2xe^{y}$$ From which it is readily seen that $$\frac{dy}{dx}=\frac{y^{2}e^{x}-2xe^{y}}{x^{2}e^{y}-2ye^{x}}$$</p>
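<p>Not part of the original answer, but the result can be sanity-checked numerically. With $F(x,y)=x^2e^y-y^2e^x$, the implicit function theorem gives $dy/dx=-F_x/F_y$; also $y=x$ is visibly a branch of solutions of $F=0$, and the formula should give slope $1$ along it. A sketch in Python:</p>

```python
import math

def F(x, y):
    return x*x*math.exp(y) - y*y*math.exp(x)

def dydx(x, y):
    # the result obtained above
    return (y*y*math.exp(x) - 2*x*math.exp(y)) / (x*x*math.exp(y) - 2*y*math.exp(x))

# along the branch y = x the slope should be exactly 1 (away from x = 0, 2)
diag = [dydx(t, t) for t in (0.5, 1.5, 3.0)]

# and dydx should match -F_x / F_y by central finite differences on F
h = 1e-6
x0, y0 = 0.8, 1.7
Fx = (F(x0 + h, y0) - F(x0 - h, y0)) / (2*h)
Fy = (F(x0, y0 + h) - F(x0, y0 - h)) / (2*h)
```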
1,302,990
<p>I want to ask a basic question. In our mathematics classes, while teaching the Fourier series and transform topic, the professor says that when the signal is periodic we should use the Fourier series, and the Fourier transform for aperiodic signals.</p> <p>My question is: can't we use the Fourier transform formula in the case of a periodic signal? Also, is the Fourier series always used for periodic signals and the Fourier transform for aperiodic signals only?</p>
Gyu Eun Lee
52,450
<p>A Fourier series is only defined for functions defined on an interval of finite length, including periodic signals, as you can see from the definition of the Fourier coefficients (in the basis $\{e^{inx}\}_{n\in\mathbb{Z}}$) $$ a_n = \frac{1}{2\pi}\int_{-\pi}^\pi f(x)e^{-inx}~dx. $$</p> <p>You can't define an aperiodic signal on an interval of finite length (if you try, you'll lose information about the signal), so one must use the Fourier transform for such a signal.</p> <p>In principle, one could take a Fourier transform of a periodic signal in the sense that one could extend the signal outside the interval $[-\pi,\pi]$ by zero. But the resulting Fourier transform would look like $$ \widehat{f}(p) = \frac{1}{2\pi}\int_{-\pi}^\pi f(x)e^{-ipx}~dx $$ and this isn't particularly different from the Fourier coefficients. Moreover, being able to express a periodic signal as a <em>discrete</em> sum of frequencies is a stronger statement than expressing it as a continuous sum via the inversion formula.</p> <p><strong>Edit:</strong> In response to the comment.</p> <p>I mean "stronger" rather loosely, not in the mathematical sense of one statement implying another.</p> <p>Taking the Fourier transform of a periodic signal by extending it to $0$ outside $[-\pi,\pi]$ gives no information that the usual Fourier transform would not. Furthermore, the inversion formula still holds, and therefore we see that $f$ can be recovered from its Fourier transform as a continuous sum over all frequencies.</p> <p>What is remarkable about periodic functions is that one does not actually need all this information to recover the function; out of uncountably many values $\widehat{f}(p)$, one only needs the integer values $\widehat{f}(n)$ (i.e. the Fourier coefficients) to reconstruct the function.</p> <p>So in this sense, for a periodic function, there is more to say than what the Fourier transform alone provides.</p>
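<p>Not part of the original answer, but the "discrete sum of frequencies suffices" point can be illustrated numerically: compute the coefficients $a_n$ of a simple periodic signal and reconstruct the signal from the integer frequencies alone. A sketch in Python (the test signal is an arbitrary choice):</p>

```python
import math

def f(x):
    # a simple 2π-periodic test signal (arbitrary choice)
    return math.cos(3*x) + 0.5*math.sin(x)

N = 4096  # sample points over one period

def coeff(n):
    # a_n = (1/2π) ∫_{-π}^{π} f(x) e^{-inx} dx via the periodic trapezoid
    # rule, which is essentially exact for a band-limited signal
    h = 2*math.pi / N
    s = 0j
    for k in range(N):
        x = -math.pi + k*h
        s += f(x) * complex(math.cos(n*x), -math.sin(n*x))
    return s * h / (2*math.pi)

coeffs = {n: coeff(n) for n in range(-5, 6)}

def recon(x):
    # reconstruct f from the discrete sum over integer frequencies only
    return sum(c * complex(math.cos(n*x), math.sin(n*x))
               for n, c in coeffs.items()).real
```

Only the coefficients at $n=\pm1,\pm3$ are nonzero, and the discrete sum reproduces the signal.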
306,011
<p>Does anyone have a proof for $$\int_0^{\infty}\frac{\sin(x^2)}{x^2}\,dx=\sqrt{\frac{\pi}{2}}.$$ I tried to get it from contour integrating $$\frac{e^{iz^2}-1}{z^2},$$ but failed. Thanks.</p>
Jack D'Aurizio
44,121
<p>Since $\mathcal{L}(\sin x)(s)=\frac{1}{1+s^2}$ and $\mathcal{L}^{-1}\left(\frac{1}{x\sqrt{x}}\right)=\frac{2}{\sqrt{\pi}}\sqrt{s}$ we have $$ \int_{0}^{+\infty}\frac{\sin(x^2)}{x^2}\,dx = \frac{1}{2}\int_{0}^{+\infty}\frac{\sin x}{x\sqrt{x}}\,dx = \frac{1}{\sqrt{\pi}}\int_{0}^{+\infty}\frac{\sqrt{s}}{1+s^2}\,ds=\frac{1}{\sqrt{\pi}}\int_{-\infty}^{+\infty}\frac{u^2}{1+u^4}\,du $$ and by the residue theorem $\int_{-\infty}^{+\infty}\frac{u^2}{1+u^4}\,du=\frac{\pi}{\sqrt{2}}$, so the original integral equals $\sqrt{\frac{\pi}{2}}$.</p>
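<p>Not part of the original answer, but the residue-theorem value can be confirmed by direct quadrature of the rational integral (with a simple analytic tail estimate):</p>

```python
import math

# check ∫_{-∞}^{∞} u²/(1+u⁴) du = π/√2 by the trapezoid rule plus a tail term
L, n = 60.0, 120000
h = L / n

def f(u):
    return u*u / (1 + u**4)

s = 0.5 * (f(0.0) + f(L)) + sum(f(k*h) for k in range(1, n))
integral = 2 * h * s        # double: the integrand is even
integral += 2 / L           # tail: ∫_L^∞ u²/(1+u⁴) du ≈ 1/L per side
```

Dividing by $\sqrt{\pi}$ recovers the original value $\sqrt{\pi/2}$.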
2,161,294
<p>I was wondering... $1$, $\phi$ and $\frac{1}{\phi}$ have something in common: each shares its decimal part with its inverse. And here comes the question:</p> <p>Are these numbers unique? How many other members are in the set, if they exist? If there are more than three elements: is the set finite or infinite? Is it dense? Is it countable? Are its members irrational numbers?</p> <p><strong>Many thanks in advance!!</strong></p>
Donald Splutterwit
404,247
<p>There is a pair for every $n \in \mathbb{N}$ \begin{eqnarray*} x_{\pm} = \frac{n \pm \sqrt{n^2+4}}{2} \end{eqnarray*}</p> <p>whence \begin{eqnarray*} 1/(x_{\pm}) = \frac{-n \pm \sqrt{n^2+4}}{2} = x_{\pm}-n \end{eqnarray*}</p> <p>E.g. $n=2$ ... $x_+=2.414 \cdots$ &amp; $1/x_+ = x_+ - 2 = 0.414 \cdots$.</p>
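<p>Not part of the original answer, but the family $x_+=\frac{n+\sqrt{n^2+4}}{2}$ (the "metallic means") is easy to check numerically: $1/x = x - n$, so $x$ and $1/x$ share their fractional part:</p>

```python
import math

errs = []
for n in range(1, 8):
    x = (n + math.sqrt(n*n + 4)) / 2          # x_+ for this n
    errs.append(abs(1/x - (x - n)))           # 1/x = x - n
    # hence x and 1/x have the same fractional part
    errs.append(abs((x - math.floor(x)) - (1/x - math.floor(1/x))))
```

For $n=1$ this is the golden ratio $\phi$ from the question.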
2,455,408
<p>I am encountering questions like this below.</p> <p>$$\frac{dP}{dt}=(a-b\cos t ) \left(P+ \frac{P^2}{M}\right)$$</p> <p>Then there is information stating $M$ is a positive integer and $a$ and $b$ are positive. That when $t=0$, $P=P_0$.</p> <p>It wants me to solve the differential equation and show the process.</p> <p>How am I supposed to derive this? I suspect it is a separation of variables question but I don't know how to proceed. Can anyone point me in the direction of an exercise or explanation video.</p>
Alex S
484,637
<p>The simplest way to solve this problem is to first draw a picture. Here's a plot of the curves described in your problem:</p> <p><a href="https://i.stack.imgur.com/wndiNm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wndiNm.png" alt="enter image description here"></a></p> <p>The region bounding the finite area is a square whose sides are of length $\pi$. Calculating this area is simple. Now for the trickier bit: calculating the area beneath a sine wave. This can be solved by applying the methods of integration. </p> <p>\begin{align*} \int_0^\pi\sin(x)\,dx&amp;=[-\cos(x)]_{0}^\pi\\ &amp;=-\cos(\pi)+\cos(0)\\ &amp;=2 \end{align*}</p> <p>Since there are four of these sine waves, $A=\pi^2-4\int_0^\pi \sin(x)\,dx=\pi^2-8= 1.8696\ldots$</p>
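<p>Not part of the original answer, but the arithmetic is easy to confirm with a quick numerical integration of one sine arch:</p>

```python
import math

# area under one sine arch on [0, π], by the midpoint rule
n = 100000
h = math.pi / n
arch = h * sum(math.sin((k + 0.5) * h) for k in range(n))

area = math.pi**2 - 4 * arch   # the π×π square minus the four arches
```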
3,754,548
<p>Suppose there is a strictly convex continuous function <span class="math-container">$f$</span>: <span class="math-container">$R^n$</span> <span class="math-container">$\rightarrow$</span> <span class="math-container">$R$</span>.</p> <p>Is the supremum of <span class="math-container">$f$</span> always infinity? How can we prove it?</p> <p>I am trying to come up with proof. If <span class="math-container">$x$</span> and <span class="math-container">$y$</span> are two points in <span class="math-container">$R^n$</span>, strictly convex implies <span class="math-container">$f(\alpha x_1 + (1-\alpha) x_2) $</span> &lt; <span class="math-container">$\alpha f(x_1) + (1-\alpha)f(x_2)$</span> . Suppose <span class="math-container">$f$</span> is bounded.</p> <p>Case 1: The bound is attained at a point, say <span class="math-container">$x_0$</span>. Then for some <span class="math-container">$\alpha$</span>, some <span class="math-container">$x_1$</span> and <span class="math-container">$x_2$</span> s.t. <span class="math-container">$ (\alpha x_1 + (1-\alpha) x_2) = x_0$</span>:<br /> <span class="math-container">$f(x_0)$</span>&lt; <span class="math-container">$\alpha f(x_1) + (1-\alpha)f(x_2)$</span><br /> Therefore a contradiction.<br /> Case 2: The bound is not attained. Since the function is strictly convex, we know <span class="math-container">$f(x)$</span> approaches this bound as <span class="math-container">$x$</span> approaches <span class="math-container">$ \infty $</span><br /> I don't know how to proceed after this step. Where can I find a contradiction in this case?</p>
Z Ahmed
671,540
<p>You can do it by the tabulated integral <span class="math-container">$\int e^{kx} dx=\frac{e^{kx}}{k}$</span> as below <span class="math-container">$$I=\int e^{ax} \sin bx dx= \Im \left( \int e^{(a+ib)x}~dx\right)= \Im \left( \frac{e^{(a+ib)x}}{a+ib} \right).$$</span> <span class="math-container">$$\implies I=\frac{e^{ax}}{a^2+b^2}[a\sin bx-b \cos bx]+C$$</span></p>
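<p>Not part of the original answer, but the antiderivative can be checked by differentiating it numerically and comparing with the integrand (the constants $a$, $b$ are illustrative):</p>

```python
import math

a, b = 1.3, 2.1  # illustrative constants, not from the post

def F(x):
    # the antiderivative obtained above, with C = 0
    return math.exp(a*x) * (a*math.sin(b*x) - b*math.cos(b*x)) / (a*a + b*b)

def f(x):
    return math.exp(a*x) * math.sin(b*x)

# F'(x) should equal f(x); compare using central differences
h = 1e-6
errs = [abs((F(x + h) - F(x - h)) / (2*h) - f(x))
        for x in (-1.0, 0.0, 0.4, 2.0)]
```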
Koro
266,435
<p><span class="math-container">$I=\int f(x)g(x) dx$</span>. Suppose that <span class="math-container">$f$</span> is differentiable and <span class="math-container">$g$</span> is integrable. <br/> Applying integration by parts: <br/></p> <blockquote> <p><span class="math-container">$I=f(x) \int g(x)dx-\int f'(x)(\int g(x)dx) dx=f(x)\int g(x)dx-[f'(x)\int (\int g(x) dx)dx-\int (f''(x) \int (\int g(x) dx)dx)dx]$</span> <br/> etc.</p> </blockquote> <p>In general, if <span class="math-container">$I$</span> and <span class="math-container">$II$</span> are &quot;well-behaved&quot; functions, we can write: <br/> <span class="math-container">$\int (I)(II)dx=I \int (II)dx-\text{(1st derivative of I)}\text{(integration of II)}+\text {(2nd derivative of I)(integration of integration of II)}-...$</span> <br/></p> <p>For this we require that some <span class="math-container">$n$</span>th derivative of <span class="math-container">$I$</span> must be zero. For example: <br/> <span class="math-container">$\int x^3 \sin x dx=x^3 (-\cos x)-(3x^2)(-\sin x)+(6x)(\cos x)-(6)(\sin x)+(0)(-\cos x)$</span> <br/></p> <p>Note that here the <span class="math-container">$3$</span>rd derivative of <span class="math-container">$x^3$</span> is zero, hence the tabular method terminates. <br/></p> <p><strong>In your case, both <span class="math-container">$e^{ax}$</span> and <span class="math-container">$\sin (bx)$</span> are infinitely differentiable &amp; integrable and their derivatives never become identically zero. That's why the tabular method doesn't terminate.</strong></p>
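<p>Not part of the original answer, but the tabular result for $\int x^3\sin x\,dx$ can be checked by differentiating it numerically:</p>

```python
import math

def F(x):
    # antiderivative of x^3 sin x read off the tabular expansion above
    return -x**3*math.cos(x) + 3*x*x*math.sin(x) + 6*x*math.cos(x) - 6*math.sin(x)

# F'(x) should equal x^3 sin x; compare with central differences
h = 1e-6
errs = [abs((F(x + h) - F(x - h)) / (2*h) - x**3 * math.sin(x))
        for x in (-2.0, 0.0, 1.0, 3.0)]
```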
315,844
<p>What is the probability P(X>Y), given that X and Y are uniformly distributed on [0,1]?</p>
gt6989b
16,192
<p>A more general approach is based on symmetry -- since $X,Y$ have the same distribution and $\mathbb{P}[X=Y] = 0$ (here, because both are continuous random variables), any outcome $(x,y)$ where $x&gt;y$ is just as likely as the outcome $(y,x)$, so $X&gt;Y$ exactly half the time.</p> <p>Note that this is independent of distribution - as long as $\mathbb{P}[X=Y] = 0$.</p>
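The symmetry argument is easy to confirm empirically; a minimal Monte Carlo sketch (the trial count and seed are arbitrary choices):

```python
import random

def estimate_p_x_gt_y(trials=200_000, seed=0):
    # X, Y independent Uniform(0,1); count how often X > Y
    rng = random.Random(seed)
    wins = sum(1 for _ in range(trials) if rng.random() > rng.random())
    return wins / trials
```

The estimate lands very close to $1/2$, as the symmetry argument predicts.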
2,814,703
<p>I am reading <a href="https://en.wikipedia.org/wiki/Lower_limit_topology" rel="nofollow noreferrer">lower limit topology</a> on Wikipedia, which states that the lower limit topology </p> <blockquote> <p>[...] is the topology generated by the basis of all half-open intervals $[a,b)$, where a and b are real numbers. [...] The lower limit topology is finer (has more open sets) than the standard topology on the real numbers (which is generated by the open intervals). The reason is that every open interval can be written as a (countably infinite) union of half-open intervals.</p> </blockquote> <p>I cannot see how to write $(a,b)$ as a countably infinite union of half-open intervals.</p>
CiaPan
152,299
<p>Here's an example of a union of disjoint intervals:</p> <p>$$ (a,b) = \bigcup_{n=1}^\infty\left[a+\frac{b-a}{n+1}, a+\frac{b-a}n\right) $$</p> <p>I wonder why nobody has presented it yet.</p>
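One can check directly that every $t\in(a,b)$ lies in exactly one of these intervals: $t$ belongs to the $n$-th interval precisely when $n < \frac{b-a}{t-a} \le n+1$. A small sketch of that check (the function names are illustrative):

```python
def interval_index(t, a, b):
    # the unique n >= 1 with a + (b-a)/(n+1) <= t < a + (b-a)/n
    s = (b - a) / (t - a)          # s > 1 for t strictly inside (a, b)
    return int(s) - 1 if s == int(s) else int(s)

def in_interval(t, a, b, n):
    # is t in the n-th half-open interval of the union?
    return a + (b - a) / (n + 1) <= t < a + (b - a) / n
```

For each sample point, a brute-force scan finds exactly one interval containing it, and it is the predicted one.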
866,654
<p>Reading various betting forum I came across different threads claiming <strong><em>betting multiple is worse than betting on single events</em></strong>.</p> <p>Could you explain why?</p> <p>[Clairification for the ones not familiar with betting: Betting on a single event: predict the outcome of a single match. Betting on multiple events: predict the outcome of multiple matches, you win only if you guess correctly the outcome of all the events, even if you only make one mistake you win nothing.]</p> <p>I think that the key point is the commission taken by bookmakers on each match (they take something like 5% on every bet).</p> <p>For example if you bet 100dollars on a tennis match where each player has P(win) = 0.5, You will win something like 195 instead of 200, due to commissions.</p>
Graham Kemp
135,106
<p>Assume the outcomes of all the matches are independent, with $p_k$ being the probability of a favorable outcome of the $k^{th}$ match, for matches numbered $1$ to $n$. Then the probability of favourable outcomes occurring in all matches will be:$$P_{\text{all}}=\prod\limits_{k=1}^n p_k$$</p> <p>Since $0\leq p_j\leq 1$ for all $j$, it follows that $p_k\geq \prod\limits_{j=1}^n p_j$ for all $k$.</p>
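A quick simulation confirming the product rule for an accumulator of independent bets (the probabilities, trial count and seed are illustrative placeholders):

```python
import random

def parlay_win_prob(ps, trials=100_000, seed=1):
    # estimate the probability that ALL bets with win probabilities ps succeed
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        results = [rng.random() < p for p in ps]
        if all(results):
            wins += 1
    return wins / trials
```

For `ps = [0.5, 0.6, 0.7]` the estimate comes out close to the exact product $0.5\cdot0.6\cdot0.7=0.21$.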
2,489,498
<p>A={a,b,c,d}</p> <p>R={(a,b),(a,c),(c,b)}</p> <p>According to the definition of a transitive relation, if there are (a,b) and (b,c), there should be (a,c).</p> <p>In the above relation there are (a,c) and (c,b), as well as (a,b). Shouldn't it be transitive?</p>
Mark Viola
218,419
<blockquote> <p><strong>I thought it would be instructive to present a way forward that relies on elementary, pre-calculus tools only. To that end we proceed.</strong></p> </blockquote> <hr> <p>To show that $\sqrt x\log(x) \to 0$ as $x\to 0$, we simply exploit the inequality</p> <p>$$\frac{x-1}{x}\le\log(x)\le x-1 \tag1$$ </p> <p>which I showed using only the limit definition of the exponential function and Bernoulli's inequality in <a href="http://math.stackexchange.com/questions/1589429/how-to-prove-that-logxx-when-x1/1590263#1590263">THIS ANSWER</a>. Note for $0&lt;x\le 1$, we have from $(1)$</p> <p>$$\frac{x-1}{x}\le \log(x)\le 0 \tag2$$</p> <p>Replacing $x$ with $x^\alpha$ in $(2)$, multiplying by $\sqrt x$, and using $\log(x^\alpha)=\alpha \log(x)$, we find that for $0&lt;x\le 1$ and $\alpha &gt;0$</p> <p>$$\frac{\sqrt x-x^{1/2-\alpha}}\alpha\le \sqrt x\log(x)\le 0$$</p> <p>Restricting $0&lt;\alpha&lt;1/2$ (e.g., take $\alpha=1/4$), applying the squeeze theorem yields the coveted limit </p> <p>$$\lim_{x\to 0}\sqrt x\log(x)=0$$</p> <p>as was to be shown!</p> <hr> <p>Note that using integration by parts, we find that </p> <p>$$\begin{align} \lim_{\epsilon\to 0}\int_{\epsilon}^1 \sqrt{x}\log(x)\,dx&amp;=\lim_{\epsilon\to 0}\left.\left(\frac29 x^{3/2}(3\log(x)-2)\right)\right|_{\epsilon}^1\\\\ &amp;=\lim_{\epsilon\to 0}\left(-\frac49 -\frac29 \epsilon^{3/2}(3\log(\epsilon)-2)\right)\\\\ &amp;=-\frac49 \end{align}$$</p> <blockquote> <p>So, the integral converges and the value to which it converges is $-\frac49$.</p> </blockquote>
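Both conclusions are easy to spot-check numerically; a small sketch (the grid size is an arbitrary choice):

```python
import math

def f(x):
    return math.sqrt(x) * math.log(x)

def midpoint_integral(n=100_000):
    # midpoint Riemann sum for the integral of f over (0, 1]
    h = 1.0 / n
    return sum(f((k + 0.5) * h) for k in range(n)) * h
```

`f(x)` is already tiny for small `x`, consistent with the squeeze argument, and the Riemann sum lands close to $-4/9$.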
2,505,863
<p>I have to find one affine transformation that maps the point P=(1,1,1) to P'=(-1,-1,-1), the point (-1,-1,-1) to (1,1,1), and the point Q=(0,0,0) to Q'=(2,2,2). I started with a sketch and think that it is not possible to map all three points with one affine transformation, but I must somehow prove that. So I take the formula x' = a + Ax and started to fill in what we know. We know that a = (2,2,2) in order to map Q, and we are looking for a matrix that can also transform P to P'. So I filled in P as x, and P' as x'.</p> <p>$\begin{pmatrix}-1\\-1\\-1\end{pmatrix}=\begin{pmatrix}2\\2\\2\end{pmatrix} + \begin{pmatrix}a_{11}&amp;a_{12}&amp;a_{13}\\a_{21}&amp;a_{22}&amp;a_{23}\\a_{31}&amp;a_{32}&amp;a_{33}\end{pmatrix}\begin{pmatrix}1\\1\\1\end{pmatrix}$</p> <p>Then I get 3 equations with 9 variables, so I'm not sure how to solve it:</p> <p>I: $a_{11} + a_{12} + a_{13} = -3$</p> <p>II: $a_{21} + a_{22} + a_{23} = -3$</p> <p>III: $a_{31} + a_{32} + a_{33} = -3$</p> <p>Am I on the right track? How can I solve these equations?</p>
lab bhattacharjee
33,337
<p>Let $\sqrt{2x+1}-3=u\implies2x=u^2+6u+8$</p> <p>and $\sqrt{x-2}-\sqrt2=v\implies x= v^2+2\sqrt2v+4$</p> <p>and as $x\to4$, $u,v\to0$</p> <p>Equating the two expressions for $2x$, $2v^2+4\sqrt2v=u^2+6u\iff v(2v+4\sqrt2)=u(u+6)$</p> <p>$$\lim_{x\to 4} \frac{\sqrt{2x+1}-3}{\sqrt{x-2}-\sqrt{2}}=\lim_{u,v\to0}\dfrac uv= \lim_{u,v\to0}\dfrac{2v+4\sqrt2}{u+6}=?$$</p>
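Plugging $u=v=0$ into the last fraction suggests the value $\frac{4\sqrt2}{6}=\frac{2\sqrt2}{3}$; a quick numerical check of the original expression near $x=4$ (a sketch, not part of the hint):

```python
import math

def g(x):
    return (math.sqrt(2 * x + 1) - 3) / (math.sqrt(x - 2) - math.sqrt(2))

expected = 2 * math.sqrt(2) / 3   # the value the hint leads to
```

Evaluating `g` just above and just below 4 agrees with `expected` to several digits.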
3,778,024
<p>Let <span class="math-container">$(\Omega, \mathcal{F}, P)$</span> be a probability space, <span class="math-container">$X$</span> a random variable and <span class="math-container">$F(x) = P(X^{-1}(]-\infty, x]))$</span>. The statement I am trying to prove is</p> <blockquote> <p>The distribution function <span class="math-container">$F$</span> of a random variable <span class="math-container">$X$</span> is right continuous, non-decreasing and satisfies <span class="math-container">$\lim_{x \to \infty}F(x) = 1$</span>, <span class="math-container">$\lim_{x \to -\infty} F(x) = 0$</span>.</p> </blockquote> <p>As <span class="math-container">$F(x + \delta) = F(x) + P(X^{-1}(]x, x + \delta]))$</span>, we have that <span class="math-container">$F$</span> is non-decreasing, but is the measure of an interval bounded by its length? In that case we would have right continuity as well.</p> <p>For the limits, we have <span class="math-container">$F(x) + P(X^{-1}(]x, \infty[)) = P(\Omega) = 1$</span>, so <span class="math-container">$F(x) = 1 - P(X^{-1}(]x, \infty[))$</span>, so it suffices for <span class="math-container">$P(X^{-1}(]x, \infty[))$</span> to get small as <span class="math-container">$x$</span> gets large and to get large as <span class="math-container">$x$</span> gets small. This is not true for general measures, take the Lebesgue measure for example, but maybe because we need <span class="math-container">$P(X^{-1}(\mathbb{R}))$</span> to be <span class="math-container">$1$</span>?</p>
Masacroso
173,262
<p>Let <span class="math-container">$P_X:=P\circ X^{-1}$</span>, then <span class="math-container">$(\mathbb{R},\mathcal{B}(\mathbb{R}),P_X)$</span> is a probability space (that is, <span class="math-container">$P_X$</span> is a probability measure in the Borel <span class="math-container">$\sigma $</span>-algebra of the standard topology on <span class="math-container">$\mathbb{R}$</span>). Now pick any sequence <span class="math-container">$(x_k)\to -\infty $</span>, then from the reversed Fatou's lemma we have that</p> <p><span class="math-container">$$ \lim_{k\to \infty }F(x_k)=\limsup_{k\to\infty }P_X[(-\infty ,x_k]]\leqslant P_X\left[\limsup_{k\to\infty}(-\infty ,x_k]\right]=P_X[\emptyset ]=0 $$</span> Therefore <span class="math-container">$\lim_{x\to -\infty }F(x)=0$</span>. Similarly you can show that <span class="math-container">$\lim_{x\to\infty }F(x)=1$</span> using the standard Fatou's lemma, continuity from the right follows also easily using the dominated convergence theorem, and the increasing nature of <span class="math-container">$F$</span> is a simple consequence of <span class="math-container">$P_X$</span> being a measure.</p>
3,778,024
<p>Let <span class="math-container">$(\Omega, \mathcal{F}, P)$</span> be a probability space, <span class="math-container">$X$</span> a random variable and <span class="math-container">$F(x) = P(X^{-1}(]-\infty, x]))$</span>. The statement I am trying to prove is</p> <blockquote> <p>The distribution function <span class="math-container">$F$</span> of a random variable <span class="math-container">$X$</span> is right continuous, non-decreasing and satisfies <span class="math-container">$\lim_{x \to \infty}F(x) = 1$</span>, <span class="math-container">$\lim_{x \to -\infty} F(x) = 0$</span>.</p> </blockquote> <p>As <span class="math-container">$F(x + \delta) = F(x) + P(X^{-1}(]x, x + \delta]))$</span>, we have that <span class="math-container">$F$</span> is non-decreasing, but is the measure of an interval bounded by its length? In that case we would have right continuity as well.</p> <p>For the limits, we have <span class="math-container">$F(x) + P(X^{-1}(]x, \infty[)) = P(\Omega) = 1$</span>, so <span class="math-container">$F(x) = 1 - P(X^{-1}(]x, \infty[))$</span>, so it suffices for <span class="math-container">$P(X^{-1}(]x, \infty[))$</span> to get small as <span class="math-container">$x$</span> gets large and to get large as <span class="math-container">$x$</span> gets small. This is not true for general measures, take the Lebesgue measure for example, but maybe because we need <span class="math-container">$P(X^{-1}(\mathbb{R}))$</span> to be <span class="math-container">$1$</span>?</p>
Kavi Rama Murthy
142,385
<p>It is a basic fact that for any finite measure <span class="math-container">$\mu$</span> the condition <span class="math-container">$A_n$</span> decreasing to <span class="math-container">$A$</span> implies that <span class="math-container">$\mu (A_n) \to \mu (A)$</span>. [Lebesgue measure is an infinite measure and this property fails for Lebesgue measure]. This follows from the fact that <span class="math-container">$\mu(A_n^{c}) \to \mu(A^{c})$</span> since <span class="math-container">$A_n^{c}$</span> increases to <span class="math-container">$A^{c}$</span> and <span class="math-container">$\mu (E^{c})=\mu (\Omega)-\mu (E)$</span>. With this result in hand it should be easy for you to complete your arguments.</p> <p>Note that <span class="math-container">$(x,x+\delta]$</span> decreases to the empty set as <span class="math-container">$\delta$</span> decreases to <span class="math-container">$0$</span>, and <span class="math-container">$(x, \infty)$</span> decreases to the empty set as <span class="math-container">$x$</span> increases to <span class="math-container">$\infty$</span>.</p>
2,243,083
<p>I'm writing an advanced interface, but I don't yet have a concept of derivatives or integrals, and I don't have an easy way to construct infinitely many functions (which could effectively delay or tween their frame's contributing distance [difference between B and A] over the next few frames).</p> <p>I can store values for a frame, and I can also consume them or previous values and map into A.</p> <p>For example, each frame could calculate the distance between B and A, then add that distance to A, but they would be perfectly in sync.</p> <p>I can keep track of the last N frames' distances and constantly shift old distances off, but this would create a delay, not an elastic effect. Somehow, the function that's popping off past-frames' distances needs to adjust for how long it's been for those 10 frames.</p> <hr> <p><strong>Is there any function I can rewrite each frame, which picks up the progress from its predecessor, and contributes its correct delta amount, adjusting for the new total distance between B and A?</strong></p> <p>Does this question make sense? How can I achieve behavior where A is constantly catching up to B in a non-linear, exponential way?</p> <hr>
Community
-1
<p>You can truly simulate the physics of a damped spring, which leads to a differential equation.</p> <p>$$m\ddot x_A+d\dot x_A+k(x_A-x_B)=0$$</p> <p>where $x$ is the position, $m$ the mass, $d$ a damping coefficient and $k$ the stiffness constant of the spring.</p> <p>As a qualitatively realistic solution is probably enough for your purposes, there is no need for a sophisticated ODE solver and the Euler method should be good enough.</p> <p>Turn the equation into a system of two first-order equations:</p> <p>$$\begin{cases}\dot v_A=\dfrac{kx_B-kx_A-dv_A}m,\\\dot x_A=v_A.\end{cases}$$</p> <p>Hence you will iterate</p> <p>$$\begin{cases}v_{A,n+1}=v_{A,n}+\dfrac{kx_{B,n}-kx_{A,n}-dv_{A,n}}m\Delta t,\\x_{A,n+1}=x_{A,n}+ v_{A,n}\Delta t\end{cases}$$</p> <p>for some time increment $\Delta t$. This way you can compute a position from the previous one, but you also need to remember the speed.</p> <p>By varying the damping and stiffness coefficients, you can achieve more or less tight following of $B$, with an oscillatory behavior or not.</p> <p>If you want a 2D simulation, $x$ will be a vector, and the shape of the equations remains the same.</p>
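A minimal sketch of this iteration (the constants `m`, `k`, `d` and the time step are placeholder values to tune; here B is held fixed so the steady state can be checked):

```python
def simulate_spring(x_b=1.0, m=1.0, k=10.0, d=2.0, dt=0.01, steps=2000):
    # explicit Euler iteration of the damped-spring system; A starts at rest at 0
    x_a, v_a = 0.0, 0.0
    for _ in range(steps):
        v_next = v_a + (k * x_b - k * x_a - d * v_a) / m * dt
        x_a = x_a + v_a * dt
        v_a = v_next
    return x_a
```

With these under-damped constants A overshoots B and oscillates before settling on it; increasing `d` removes the oscillation and gives a smooth catch-up.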
1,362,860
<p>$$\frac{1}{3}+\frac{1}{13}+\frac{1}{23}+\frac{1}{31}+\frac{1}{37}+\frac{1}{43}\cdots$$ Intuitively, I feel that this sum converges, but I really don't know why, (or if I am correct). Can I have a somewhat rigorous proof of whether or not this sum converges or diverges? Thank you lots for any help.</p>
paw88789
147,810
<p>The sum of reciprocals of all positive integers without the digit $3$ converges. (See for instance <a href="https://math.stackexchange.com/questions/387/sum-of-reciprocals-of-numbers-with-certain-terms-omitted">Sum of reciprocals of numbers with certain terms omitted</a>)</p> <p>Hence the sum of reciprocals of all primes without the digit $3$ also converges.</p> <p>But the sum of all prime reciprocals diverges. </p> <p>Hence the sum of prime reciprocals with the digit $3$ must diverge.</p>
3,808,575
<p>Assuming I have the statement ∀x(∀y¬Q(x,y)∨P(x)), can I pull the universal quantifier ∀y out of the parenthesis? Meaning, is this statement equivalent to ∀x∀y(¬Q(x,y)∨P(x)) ?</p> <p>An approach I tried so far:</p> <ol> <li>∀x((∃y Q(x,y) ) =&gt; P(x)). (original eq.)</li> <li>∀x((∀y¬Q(x,y))∨P(x)) (De Morgan's application)</li> <li>∀x∀y(¬Q(x,y)∨P(x)). (Working off the assumption that taking out the ∀y is a valid operation).</li> <li>∀x∀y (Q(x,y) =&gt; P(x)) (Going backwards from the ¬P v Q definition of implication)</li> </ol> <p>Statement 4 does not seem to be equivalent to statement 1, which suggests that pulling out the universal quantifier is not acceptable. I would greatly appreciate any confirmation of whether this is the case, and if so, what governs when quantifiers can be brought to the outside of the parenthesis.</p>
Shaun
104,041
<p>They're equivalent.</p> <p>Here is a proof:</p> <p><img src="https://i.stack.imgur.com/dIKSM.jpg" alt="enter image description here" /></p> <p>This tree was generated <a href="https://www.umsu.de/trees/#(%E2%88%80x(%E2%88%80y%C2%ACQ(x,y)%E2%88%A8P(x)))%E2%86%94(%E2%88%80x%E2%88%80y(%C2%ACQ(x,y)%E2%88%A8P(x)))" rel="nofollow noreferrer">here</a>.</p>
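The equivalence can also be checked by brute force over a small finite domain: since $y$ does not occur free in $P(x)$, the quantifier $\forall y$ distributes over the disjunction. A sketch over a two-element domain (evidence, not a proof for all domains):

```python
from itertools import product

D = [0, 1]

def formula1(P, Q):
    # forall x ( (forall y, not Q(x,y)) or P(x) )
    return all(all(not Q[(x, y)] for y in D) or P[x] for x in D)

def formula2(P, Q):
    # forall x forall y ( not Q(x,y) or P(x) )
    return all(not Q[(x, y)] or P[x] for x in D for y in D)

def agree_everywhere():
    # iterate over every interpretation of P and Q on the domain D
    for p_vals in product([False, True], repeat=len(D)):
        P = dict(zip(D, p_vals))
        for q_vals in product([False, True], repeat=len(D) ** 2):
            Q = dict(zip(product(D, D), q_vals))
            if formula1(P, Q) != formula2(P, Q):
                return False
    return True
```

Both formulas agree on all 64 interpretations of `P` and `Q` over this domain.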
3,684,917
<p>Let <span class="math-container">$C_{1}$</span> and <span class="math-container">$C_{2}$</span> be polytopes in <span class="math-container">$\mathbb{R}^{n}$</span> such that <span class="math-container">$C_{1}=conv\left( V\right) $</span> with <span class="math-container">$V$</span> being a set of vertices. If <span class="math-container">$V\subseteq C_{2}$</span>, my question is <span class="math-container">$C_{1}\subseteq C_{2}$</span>?</p>
JMP
210,189
<p>Given the range <span class="math-container">$[a,b]$</span> and the congruence <span class="math-container">$k \mod n$</span>, first subtract <span class="math-container">$k$</span> from each of <span class="math-container">$a$</span> and <span class="math-container">$b$</span> to create a new range <span class="math-container">$[a-k,b-k]$</span>.</p> <p>This doesn't change the size of any of the congruence classes modulo <span class="math-container">$n$</span>.</p> <p>We wish to find the size of each congruence class for the residues <span class="math-container">$0,\dots,n-1$</span>, and we start with <span class="math-container">$a-k \mod n$</span>, which has <span class="math-container">$\lfloor\frac{b-k-(a-k)}{n}\rfloor+1=\lfloor\frac{b-a}{n}\rfloor+1$</span> elements in it.</p> <p>The next residue we look at is <span class="math-container">$a-k+1 \mod n$</span>, which has <span class="math-container">$\lfloor\frac{b-k-(a-k+1)}{n}\rfloor+1=\lfloor\frac{b-a-1}{n}\rfloor+1$</span> elements in it.</p> <p>And continue through to residue <span class="math-container">$a-k+n-1 \mod n$</span>, which has <span class="math-container">$\lfloor\frac{b-k-(a-k+n-1)}{n}\rfloor+1=\lfloor\frac{b-a-n+1}{n}\rfloor+1$</span> elements in it.</p>
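The counting above can be wrapped into a closed-form function and checked against brute force (a sketch; the function names and test ranges are arbitrary):

```python
def count_congruent(a, b, k, n):
    # number of integers x in [a, b] with x = k (mod n),
    # via the shift in the answer: count multiples of n in [a-k, b-k]
    lo, hi = a - k, b - k
    first = -((-lo) // n) * n      # smallest multiple of n that is >= lo
    return 0 if first > hi else (hi - first) // n + 1

def count_brute(a, b, k, n):
    return sum(1 for x in range(a, b + 1) if x % n == k % n)
```

The closed form matches the brute-force count on ranges with and without negative endpoints, including empty classes.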
997,602
<blockquote> <p>Prove that the function <span class="math-container">$x \mapsto \dfrac 1{1+ x^2}$</span> is uniformly continuous on <span class="math-container">$\mathbb{R}$</span>.</p> </blockquote> <p>Attempt: By definition a function <span class="math-container">$f: E →\Bbb R$</span> is uniformly continuous iff for every <span class="math-container">$ε &gt; 0$</span>, there is a <span class="math-container">$δ &gt; 0$</span> such that <span class="math-container">$|x-a| &lt; δ$</span> and <span class="math-container">$x,a$</span> are elements of <span class="math-container">$E$</span> implies <span class="math-container">$|f(x) - f(a)| &lt; ε.$</span></p> <p>Then suppose <span class="math-container">$x, a$</span> are elements of <span class="math-container">$\Bbb R. $</span> Now <span class="math-container">\begin{align} |f(x) - f(a)| &amp;= \left|\frac1{1 + x^2} - \frac1{1 + a^2}\right| \\&amp;= \left| \frac{a^2 - x^2}{(1 + x^2)(1 + a^2)}\right| \\&amp;= |x - a| \frac{|x + a|}{(1 + x^2)(1 + a^2)} \\&amp;≤ |x - a| \frac{|x| + |a|}{(1 + x^2)(1 + a^2)} \\&amp;= |x - a| \left[\frac{|x|}{(1 + x^2)(1 + a^2)} + \frac{|a|}{(1 + x^2)(1 + a^2)}\right] \end{align}</span></p> <p>I don't know how to simplify more. Can someone please help me finish? Thank very much.</p>
Community
-1
<p>First, $f(x)=\frac{1}{1+x^2}\implies f'(x)=-\frac{2x}{(1+x^2)^2}$.</p> <p>Then observe that for $|x|\le1$ $$\frac{|x|}{(1+x^2)^2} \le\frac{1}{(1+x^2)^2}\le 1$$</p> <p>and for $|x|\ge1$ $$|x|\le x^2 \le (1+x^2)^2\implies \frac{|x|}{(1+x^2)^2} \le 1$$</p> <p>Hence, using the mean value theorem for the last implication,</p> <p>$$|f'(x)|=\frac{2|x|}{(1+x^2)^2} \le 2\implies |f(x)-f(y)|\le 2|x-y|$$</p> <blockquote> <p>This shows that $f$ is Lipschitz, which implies that $f$ is uniformly continuous.</p> </blockquote>
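The Lipschitz bound $|f(x)-f(y)|\le 2|x-y|$ is easy to spot-check numerically (a sketch; the sampling range, trial count and seed are arbitrary):

```python
import random

def f(x):
    return 1 / (1 + x * x)

def lipschitz_holds(bound=2.0, trials=10_000, seed=2):
    # sample random pairs and test |f(x) - f(y)| <= bound * |x - y|
    rng = random.Random(seed)
    for _ in range(trials):
        x, y = rng.uniform(-100, 100), rng.uniform(-100, 100)
        if abs(f(x) - f(y)) > bound * abs(x - y) + 1e-12:
            return False
    return True
```

No sampled pair violates the bound; in fact the true best Lipschitz constant is smaller than 2, so the bound holds with plenty of room.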
2,745,570
<p>Use mathematical induction to show that $H_{2^n}\le n+1$</p> <p>here $H$ denotes the harmonic numbers, i.e. $H_m=1+\frac{1}{2}+\frac{1}{3}+\cdots+\frac{1}{m}$</p> <p><strong>my idea</strong></p> <p>so for $n=0$ L.H.S=R.H.S</p> <p>Suppose this is true for $n$</p> <p>we prove it for $n+1$</p> <p>So $H_{2^{n+1}}=1+\frac{1}{2}+\frac{1}{3}+...\frac{1}{2^n}+\frac{1}{2^n+1}+....\frac{1}{2^{n+1}}\\ =H_{2^n}+\frac{1}{2^n+1}+.......\frac{1}{2^{n+1}}$</p> <p>How to continue from here?</p>
Peter Szilas
408,605
<p>Rephrasing a bit:</p> <p>1) $n=0$, ok.</p> <p>2) Hypothesis: $H_{2^n} \le n+1$.</p> <p>3) Step: $n+1$:</p> <p>$H_{2^{n+1}} =$</p> <p>$H_{2^n} + \dfrac{1}{2^n +1}+.....\dfrac{1}{2^{n+1}} =$ </p> <p>$H_{2^n} + \dfrac{1}{2^n+1}+...\dfrac{1}{2^n+ 2^n} \le$</p> <p>$H_{2^n} + 2^n \dfrac{1}{2^n+1} \lt $</p> <p>$H_{2^n} +1 \le (n+1) +1$, where </p> <p>in the second last line $\dfrac{2^n}{2^n+1} &lt;1$,</p> <p>and in the last line the hypothesis has been used.</p>
2,745,570
<p>Use mathematical induction to show that $H_{2^n}\le n+1$</p> <p>here $H$ denotes the harmonic numbers, i.e. $H_m=1+\frac{1}{2}+\frac{1}{3}+\cdots+\frac{1}{m}$</p> <p><strong>my idea</strong></p> <p>so for $n=0$ L.H.S=R.H.S</p> <p>Suppose this is true for $n$</p> <p>we prove it for $n+1$</p> <p>So $H_{2^{n+1}}=1+\frac{1}{2}+\frac{1}{3}+...\frac{1}{2^n}+\frac{1}{2^n+1}+....\frac{1}{2^{n+1}}\\ =H_{2^n}+\frac{1}{2^n+1}+.......\frac{1}{2^{n+1}}$</p> <p>How to continue from here?</p>
heropup
118,193
<p>$$H_n = \sum_{k=1}^n \frac{1}{k}.$$ Then $$\begin{align*} H_{2^{n+1}} &amp;= \sum_{k=1}^{2^{n+1}} \frac{1}{k} \\ &amp;= \sum_{k=1}^{2^n} \frac{1}{k} + \sum_{k=2^n + 1}^{2^{n+1}} \frac{1}{k} \\ &amp;= H_{2^n} + \sum_{j=1}^{2^{n+1} - 2^n} \frac{1}{2^n + j} \\ &amp;\overset{\text{i.h.}}{\le} (n+1) + \sum_{j=1}^{2^n} \frac{1}{2^n + j} \\ &amp;&lt; (n+1) + \sum_{j=1}^{2^n} \frac{1}{2^n} \\ &amp;= (n+1) + 2^n \cdot \frac{1}{2^n} \\ &amp;= (n+1) + 1 \\ &amp;= n+2, \end{align*}$$ where the inequality with "i.h." indicates the step where the induction hypothesis was used.</p>
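A quick exact-arithmetic check of the bound for small $n$ (an illustration, not part of the induction):

```python
from fractions import Fraction

def harmonic(m):
    # H_m = 1 + 1/2 + ... + 1/m, computed exactly with rational arithmetic
    return sum(Fraction(1, k) for k in range(1, m + 1))
```

The bound $H_{2^n} \le n+1$ holds comfortably for every small $n$ tested.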
2,509,308
<p>My friend asked me what he called a modified version of the <a href="https://math.stackexchange.com/questions/96826/the-monty-hall-problem#">Monty Hall problem</a>. But I find the description a bit spooky and maybe someone here can enlighten us as to what is the problem with the description, or maybe it is me missing something.</p> <p>Imagine exactly the same setting as in the Monty Hall problem, with one difference. Imagine I randomly pick one card. Now, as opposed to the original problem, the show host doesn't know where the car is, and of the remaining two cards, he randomly picks a card: if it happens to be a goat the host shows us this card (as in the original game); if it happens to be a car, however, my friend said you "quit" the game (or don't consider such a case). Now the question is the same as in the original problem: is it better for me to change my initial choice?</p> <p>One problem with this description is mainly with the "quit" part. Does not this hinder the process of calculating whether we should switch or not? How do you model the case where the host quit the game (in calculating whether you should switch or not)?</p> <p>He claimed that in such a case, switching my initial choice doesn't give me any advantage anymore, and made a simulation program as follows. He made 10000 experiments, where he was skipping the part where the host chose a car, so of course he was left with 2/3 of the experiment test cases, and he claimed that now, since the number of matches the user made (1/3rd of 10000) is half of the number of remaining test cases (2/3rd of 10000 - the ones where the host didn't choose a car), it is not advantageous anymore to change the card.</p> <p>I am failing to find a flaw in either the description of the problem or the implementation I mentioned above.
Can someone help me figure out what is wrong with either the problem description or the implementation?</p> <p>I would appreciate some help because I got confused overall with the whole thing now (whereas I understand the original Monty Hall problem well).</p> <p>PS. Here is the Java implementation actually: <a href="http://codepad.org/rt7fOqei" rel="nofollow noreferrer">http://codepad.org/rt7fOqei</a>, where he deduces that since <code>match</code> is one half of <code>count</code>, it makes no sense to change my initial choice.</p>
Dean
393,411
<p>In the modified game, identify the 3 cards as (1) the card you choose at random (2) the card the host chooses at random from the remaining two cards (3) the other card. It is assumed that the two random selections are done without any information about which card is the winning card, unlike in the actual Monty Hall problem.</p> <p>Prior to showing (2), all have equal probability of being the winning card, by symmetry. If (2) is shown to not be the winning card, the remaining cards (1) and (3) continue to have equal probability of being the winning card, again by symmetry. This is also the case if (2) is shown to be the winning card, but since the probabilities equal zero, you might as well quit in that case.</p>
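A direct simulation of the variant (a Python sketch mirroring the friend's setup; the trial count and seed are arbitrary): among the games that are not discarded, switching wins about half the time.

```python
import random

def switch_win_rate(trials=100_000, seed=3):
    rng = random.Random(seed)
    kept = wins = 0
    for _ in range(trials):
        car = rng.randrange(3)
        pick = rng.randrange(3)
        host = rng.choice([c for c in range(3) if c != pick])  # host is ignorant
        if host == car:
            continue  # the "quit" case: game discarded
        kept += 1
        switch_to = next(c for c in range(3) if c != pick and c != host)
        if switch_to == car:
            wins += 1
    return wins / kept
```

About a third of the games are discarded, and among the rest the switch-win rate sits at roughly 1/2, confirming the symmetry argument.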
2,509,308
<p>My friend asked me what he called a modified version of the <a href="https://math.stackexchange.com/questions/96826/the-monty-hall-problem#">Monty Hall problem</a>. But I find the description a bit spooky and maybe someone here can enlighten us as to what is the problem with the description, or maybe it is me missing something.</p> <p>Imagine exactly the same setting as in the Monty Hall problem, with one difference. Imagine I randomly pick one card. Now, as opposed to the original problem, the show host doesn't know where the car is, and of the remaining two cards, he randomly picks a card: if it happens to be a goat the host shows us this card (as in the original game); if it happens to be a car, however, my friend said you "quit" the game (or don't consider such a case). Now the question is the same as in the original problem: is it better for me to change my initial choice?</p> <p>One problem with this description is mainly with the "quit" part. Does not this hinder the process of calculating whether we should switch or not? How do you model the case where the host quit the game (in calculating whether you should switch or not)?</p> <p>He claimed that in such a case, switching my initial choice doesn't give me any advantage anymore, and made a simulation program as follows. He made 10000 experiments, where he was skipping the part where the host chose a car, so of course he was left with 2/3 of the experiment test cases, and he claimed that now, since the number of matches the user made (1/3rd of 10000) is half of the number of remaining test cases (2/3rd of 10000 - the ones where the host didn't choose a car), it is not advantageous anymore to change the card.</p> <p>I am failing to find a flaw in either the description of the problem or the implementation I mentioned above.
Can someone help me figure out what is wrong with either the problem description or the implementation?</p> <p>I would appreciate some help because I got confused overall with the whole thing now (whereas I understand the original Monty Hall problem well).</p> <p>PS. Here is the Java implementation actually: <a href="http://codepad.org/rt7fOqei" rel="nofollow noreferrer">http://codepad.org/rt7fOqei</a>, where he deduces that since <code>match</code> is one half of <code>count</code>, it makes no sense to change my initial choice.</p>
fleablood
280,126
<p>The "quit" is simply throwing away one third of the games and not considering them. This is because you are "viewing" the problem at a specific point of time: after the host shows his card and before you the player decide to switch.</p> <p>Imagine that instead of quitting the host takes out an anti-matter vial and destroys the entire universe if he gets the car. So before the game, what is the probability earth will continue to exist the next morning? The answer is $\frac 23$. So the host turns over a card at random and ... <em>thank god</em> ... it's a goat! <em>NOW</em>, given that, what is the probability of anything happening in this timeline <em>after</em> an event with probability $\frac 23$ <em>did</em> happen?</p> <p>This is conditional probability and a very simple case.</p> <p>There were three equally likely outcomes for where the car was: $1,2,3$.</p> <p>For each of these there are three equally likely outcomes for which card you pick, for $9$ equally likely outcomes. </p> <p>In each of those there are $2$ equally likely outcomes for which card the host picks, for $18$ equally likely events.</p> <p>In $1/3$ of the cases we picked the car and the probability the host picks the car as well is $0$. But in $2/3$ of the cases we didn't pick the car and the host had a $1/2$ probability of picking the car (and the universe ends); this is a $\frac 12*\frac 23= \frac 13$ probability of us all ending in oblivion.</p> <p>Here are the $18$ cases spelled out (as [car : your pick : host's pick]): </p> <p>$[1:1:2], [1:1:3], \color{red}{[1:2:1]}, [1:2:3], \color{red}{[1:3:1]},[1:3:2], \color{red}{[2:1:2]}, [2:1:3], [2:2:1], [2:2:3], [2:3:1],\color{red}{[2:3:2]}, [3:1:2],\color{red}{[3:1:3]}, [3:2:1], \color{red}{[3:2:3]}, [3:3:1], [3:3:2]$.</p> <p>In $6$ of them the universe ends. Of the $12$ equally likely ones that remain, in each it's equally likely that we switch or stay.
The possibilities spelled out are:</p> <p>$\color{blue}{[1:1:2:\text{stay with car #1}]},[1:1:2:\text{switch to goat #3}], \color{blue}{[1:1:3:\text{stay with car #1}]},[1:1:3:\text{switch to goat #2}], \color{red}{[1:2:1:\text{the universe ended; we don't have a choice}]}, [1:2:3:\text{stay with goat #2}],\color{blue}{[1:2:3:\text{switch to car #1}]}, \color{red}{[1:3:1:\text{the universe ended; we don't have a choice}]}, [1:3:2:\text{stay with goat #3}],\color{blue}{[1:3:2:\text{switch to car #1}]}, \color{red}{[2:1:2:\text{the universe ended; we don't have a choice}]}, [2:1:3:\text{stay with goat #1}],\color{blue}{[2:1:3:\text{switch to car #2}]}, \color{blue}{[2:2:1:\text{stay with car #2}]},[2:2:1:\text{switch to goat #3}], \color{blue}{[2:2:3:\text{stay with car #2}]},[2:2:3:\text{switch to goat #1}], [2:3:1:\text{stay with goat #3}],\color{blue}{[2:3:1:\text{switch to car #2}]}, \color{red}{[2:3:2:\text{the universe ended; we don't have a choice}]}, [3:1:2:\text{stay with goat #1}],\color{blue}{[3:1:2:\text{switch to car #3}]}, \color{red}{[3:1:3:\text{the universe ended; we don't have a choice}]}, [3:2:1:\text{stay with goat #2}],\color{blue}{[3:2:1:\text{switch to car #3}]}, \color{red}{[3:2:3:\text{the universe ended; we don't have a choice}]}, \color{blue}{[3:3:1:\text{stay with car #3}]},[3:3:1:\text{switch to goat #2}], \color{blue}{[3:3:2:\text{stay with car #3}]},[3:3:2:\text{switch to goat #1}]$.</p> <p>The red cases could have happened but they didn't. They are off the table. Of the 24 remaining equally likely cases, the 12 blue ones have favorable outcomes and the 12 black ones have unfavorable outcomes.</p>
1,123,050
<p>This is the same problem asked here - <a href="https://math.stackexchange.com/questions/1105927/next-step-to-take-to-reach-the-contradiction">Next step to take to reach the contradiction?</a> Here it is again.</p> <p><img src="https://i.stack.imgur.com/onqzq.png" alt="enter image description here"></p> <p>I understand the solution - how you want to get to the fact 100 divides n^2 and then go through all the possibilities to show that the integer k for which n^2 * k = 100 is not n + 1.</p> <p>Here is my instructor's solution <img src="https://i.stack.imgur.com/wu6xx.png" alt="enter image description here"></p> <p>The n^2 + n^3 > n^3 part of the proof makes sense to me. After all, adding a positive square makes the expression larger. How did she get to 5^3 >= n^3 -> n &lt; 5 though? I can't figure out the algebra step she took to get to that conclusion. I tried setting n^2 + n^3 equal to n^3 but that just got me zero. I am just trying to understand different ways of doing a proof.</p>
jdods
212,426
<p>I actually struggled with this concept in grad school since I was studying applied math and was sort of thrust into higher level theory without building it up rigorously like I assume would be done in a pure math program.</p> <p>If we are integrating over a space <span class="math-container">$X$</span>, I sometimes prefer the notation <span class="math-container">$\int_X f(x)\mu(dx)$</span>. I like to think of it as splitting the space we are integrating over into infinitesimal pieces, but we have to take the measure of those infinitesimal pieces as they may not all be identical under <span class="math-container">$\mu$</span>. I'm used to working with probability measures, and at least in that case, you can often think of it similar to the way Lebesgue and Riemann integration are developed.</p> <p>Create a disjoint partition <span class="math-container">$X=\cup_{k=1}^N A_k$</span>, and define the sum which will approximate the integral using appropriately chosen sample points <span class="math-container">$x_k\in A_k$</span>.</p> <p><span class="math-container">$$\int_X f(x)\mu(dx) \approx \sum_{k=1}^N f(x_k) \mu(A_k).$$</span></p> <p>Ideally, taking the limit as <span class="math-container">$N\rightarrow\infty$</span> (carefully refining the partition and choosing appropriate sample points as <span class="math-container">$N$</span> increases) will make the sum converge to the integral. What we are doing here is effectively approximating the function <span class="math-container">$f$</span> with a simple function which is constant on a finite collection of measurable sets.</p>
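For a concrete instance of this approximation, take $X=[0,1]$, $\mu$ = Lebesgue measure, the $A_k$ equal subintervals and $x_k$ their midpoints; with $f(x)=x^2$ the sums approach $\int_0^1 x^2\,dx = 1/3$. A sketch (the choice of $f$ and of the sample points is illustrative):

```python
def partition_sum(f, n):
    # X = [0,1] split into A_k = [k/n, (k+1)/n); mu(A_k) = 1/n under Lebesgue measure
    # sample point x_k = midpoint of A_k
    return sum(f((k + 0.5) / n) * (1.0 / n) for k in range(n))
```

Refining the partition (larger `n`) drives the sum toward the value of the integral, which is the convergence described above.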
1,351,350
<p>Assume that the probability of $A$ is $0.6$ and the probability of $B$ is at least $0.75$. Then how do I calculate the probability of both $A$ and $B$ happening together?</p>
JP McCarthy
19,352
<p>If $A$ and $B$ are two events then</p> <p>$$\mathbb{P}[A\cap B]=\mathbb{P}[A]\cdot \mathbb{P}[B\,|\, A]=\mathbb{P}[B]\cdot \mathbb{P}[A\,|\,B],$$</p> <p>where $A\cap B$ is the event $A$ AND $B$ and $\mathbb{P}[A\,|\,B]$ is the probability of $A$ <em>given that $B$ is true</em>.</p> <p>When $A$ and $B$ are <em>independent</em> (the truth of $A$ does not affect the truth of $B$) --- $\mathbb{P}[B\,|\,A]=\mathbb{P}[B]$ and so we have:</p> <p>$$\mathbb{P}[A\cap B]=\mathbb{P}[A]\cdot \mathbb{P}[B].$$</p> <p>I am guessing you might be coming from a finite-kind of a place so let us give an example where we can see where the multiplication comes from. </p> <p>Let $X_1$ and $X_2$ be the outcomes of two dice rolls (these are independent). Now what is the probability of getting $X_1=1$ and $X_2=1$? Now you can list out 36 possibilities --- 6 TIMES 6 --- for the two dice rolls and only one of them corresponds to $X_1=X_2=1$. Now let $C$ be the event of a double one, $A$ be the event $X_1=1$ and $B$ be the event $X_2=1$. Note $C=A\cap B$ and </p> <p>$$\mathbb{P}[C]=\frac{\text{# favourable outcomes}}{\text{# possible outcomes}}=\frac{1}{36}=\frac{1}{6}\cdot \frac{1}{6}=\mathbb{P}[A]\cdot \mathbb{P}[B].$$</p>
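<p>To see where the $\frac{1}{36}=\frac16\cdot\frac16$ comes from, one can literally enumerate the 36 outcomes; a quick sketch of that counting argument:</p>

```python
# Enumerate the 36 equally likely outcomes of two dice rolls and count the
# events A = {X1 = 1}, B = {X2 = 1}, and their intersection (a double one).
from fractions import Fraction
from itertools import product

outcomes = list(product(range(1, 7), repeat=2))          # 6 * 6 = 36 outcomes
p_a    = Fraction(sum(1 for x1, x2 in outcomes if x1 == 1), len(outcomes))
p_b    = Fraction(sum(1 for x1, x2 in outcomes if x2 == 1), len(outcomes))
p_both = Fraction(sum(1 for x1, x2 in outcomes if x1 == 1 and x2 == 1),
                  len(outcomes))
# p_both is 1/36, which equals p_a * p_b: the multiplication rule for
# independent events.
```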
1,618,753
<p>Trying to expand $f(x)=\cot(x)$ to Taylor series (Maclaurin, actually). But I keep "adding up" infinities when using the formula. (Because of $\cot(0)=\infty$) Could you perhaps give me a hint on how to proceed?</p>
GaussTheBauss
104,620
<p>The function $\cot x$ is not defined at zero (it has a pole there), and therefore has no power series around zero.</p> <p>If you know complex analysis, you should look for the Laurent series of $\cot z$ at $z=0$ instead. </p>
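<p>For reference, that Laurent expansion starts $\cot z=\frac1z-\frac z3-\frac{z^3}{45}-\cdots$ (coefficients quoted from the standard expansion); a quick numerical sanity check of the leading terms, added as an illustration:</p>

```python
# Compare cot(z) against the truncated Laurent series 1/z - z/3 - z^3/45.
# The omitted terms start at order z^5, so the error shrinks rapidly as z -> 0.
import math

def cot(z):
    return math.cos(z) / math.sin(z)

def cot_laurent(z):
    return 1 / z - z / 3 - z ** 3 / 45   # truncated after the z^3 term

# At z = 0.1 the two values already agree to roughly seven decimal places.
```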
1,500,827
<p>A composite number $n$ is a Fermat pseudoprime to base $a$ if</p> <p>$$a^{n-1}\equiv 1\ (\mathrm{mod}\ n)$$</p> <p>If $n-1=2^s\times t$ with $t$ odd, $n$ is a strong $a$-PRP if either $a^t\equiv 1\ (\mathrm{mod}\ n)$ or there is a number $u$ with $0\le u&lt;s$ and $\large a^{2^u\times t}\equiv -1\ (\mathrm{mod}\ n)$</p> <p>I want to find the composite numbers that are strong PRPs to the bases $2,3,5$ up to, let's say, $10^9$ in an efficient way (faster than just checking all the composite numbers).</p> <p>Is this possible? If yes, how?</p>
mrprottolo
84,266
<p>From the fact that $a_n=1+\frac{1}{2^{n-1}}$ we have that $a_{n+1}=1+\frac{1}{2^{n}}$. Now notice that $$\frac{a_n}{2}=\frac{1}{2}+\frac{1}{2^{n}}=a_{n+1}-\frac{1}{2}.$$ Therefore we get the following recurrence relation $$a_{n+1}=a_n/2+1/2$$</p>
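<p>The derived recurrence is easy to confirm numerically (a quick illustration only):</p>

```python
# Check that a_n = 1 + 1/2^(n-1) satisfies the recurrence a_{n+1} = a_n/2 + 1/2.
def a(n):
    return 1 + 1 / 2 ** (n - 1)

residuals = [a(n + 1) - (a(n) / 2 + 1 / 2) for n in range(1, 25)]
# every residual is exactly 0, since powers of two are exact in floating point
```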
3,484,052
<p>Let's say you have a series that looks like <span class="math-container">$\sum^\infty_{n=N}f(n)$</span>, where <span class="math-container">$f(n)$</span> is some <span class="math-container">$n$</span>-dependent thing. If you take the limit of this series as <span class="math-container">$N$</span> approaches infinity, what kind of stuff can you use to figure out what the limit is? For instance, I read somewhere else <span class="math-container">$\sum^\infty_{n=N}\frac{1}{n^2} \rightarrow 0$</span> as <span class="math-container">$N \rightarrow \infty$</span>, but why?</p>
Peter Szilas
408,605
<p>Hopefully not too trivial:</p> <p>Assume <span class="math-container">$\lim_{N \rightarrow \infty} \sum_{k=N}^{\infty}f(k)=0$</span>:</p> <p>Let <span class="math-container">$\epsilon/2$</span> be given.</p> <p>There is an <span class="math-container">$N_0$</span> s.t. for <span class="math-container">$N&gt;N_0$</span></p> <p><span class="math-container">$|\sum_{k=N}^{\infty}f(k)|&lt;\epsilon/2$</span>.</p> <p>For <span class="math-container">$m \ge n &gt;N_0$</span></p> <p><span class="math-container">$|\sum_{k=n}^{m}f(k)|=$</span></p> <p><span class="math-container">$|\sum_{k=n}^{\infty}f(k)-\sum_{k=m+1}^{\infty}f(k)|\le$</span></p> <p><span class="math-container">$|\sum_{k=n}^{\infty}f(k)|+|\sum_{k=m+1}^{\infty}f(k)| &lt;$</span></p> <p><span class="math-container">$\epsilon/2+\epsilon/2=\epsilon$</span>, i.e.</p> <p><span class="math-container">$S_n:=\sum_{k=1}^{n}f(k)$</span> is Cauchy, hence convergent.</p>
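<p>The hypothesis $\lim_{N\to\infty}\sum_{k=N}^\infty f(k)=0$ can be seen numerically for $f(k)=1/k^2$, the example from the question; a rough sketch (truncating the infinite tail at $10^6$, a cutoff I chose for illustration):</p>

```python
# Tails T(N) = sum_{k >= N} 1/k^2 of the convergent series shrink toward 0,
# roughly like 1/N.  The infinite sum is truncated at a large cutoff, which
# introduces an error of about 1/cutoff (around 1e-6 here).
def tail(N, cutoff=10 ** 6):
    return sum(1.0 / (k * k) for k in range(N, cutoff))

t10, t100, t1000 = tail(10), tail(100), tail(1000)
# roughly 0.105, 0.010 and 0.001 respectively: decreasing toward 0 like 1/N
```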
1,722,964
<p>Expression :$$(p\rightarrow q)\leftrightarrow(\neg q\rightarrow \neg p)$$ What does the symbol $\leftrightarrow$ mean ? Please explain by drawing the truth table for this expression and also with other examples if possible. <strong>I'm in a desperate situation so I'd really appreciate a quick response !</strong></p>
Rubicon
318,714
<p>All the answers above are great, and should help you. I will show you how I would do the full truth table for the logical statement: $$(p \Rightarrow q)\Longleftrightarrow(\neg q \Rightarrow \neg p)$$</p> <p>\begin{array}{cc|c|c|c|c} p &amp; q &amp; \neg q &amp; \neg p &amp; \neg q \Rightarrow \neg p &amp; p \Rightarrow q\\\hline T &amp; T &amp; F &amp; F &amp; T &amp; T\\ T &amp; F &amp; T &amp; F &amp; F &amp; F\\ F &amp; T &amp; F &amp; T &amp; T &amp; T\\ F &amp; F &amp; T &amp; T &amp; T &amp; T \end{array}</p>
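<p>The same table can be generated mechanically, using the standard encoding $p\Rightarrow q\equiv\lnot p\lor q$ (a small sketch):</p>

```python
# Build the truth table for (p -> q) <-> (~q -> ~p) over all four rows.
from itertools import product

def implies(a, b):
    return (not a) or b   # p -> q  is  (not p) or q

rows = [(p, q, implies(p, q), implies(not q, not p))
        for p, q in product([True, False], repeat=2)]
# In every row the last two columns agree, so the biconditional
# (p -> q) <-> (~q -> ~p) is a tautology.
```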
3,347,342
<blockquote> <p><span class="math-container">$$\left(\frac{2}{5}\right)^{\frac{6-5x}{2+5x}}&lt;\frac{25}{4}$$</span></p> </blockquote> <p>I can write this as <span class="math-container">$$\left(\frac25\right)^{\frac{6-5x}{2+5x}} &lt;\left(\frac25\right)^{-2}$$</span> Therefore <span class="math-container">$$\frac{6-5x}{2+5x}&lt;-2$$</span> Solving it, we get <span class="math-container">$x\in (-2, -\frac 25)$</span></p> <p>The correct answer is <span class="math-container">$x\in (-\infty , -2)\cup (-\frac 25 , \infty)$</span></p> <p>I feel it’s got something to do with signs. Maybe in the part where I wrote <span class="math-container">$\left(\frac 25\right)^{-2}$</span> </p> <p>If I change the inequality there, I arrive at the answer. But I disagree. I haven’t changed the number at all, so the sign should not change. What’s the right answer?</p>
Community
-1
<p>Because <span class="math-container">$\frac25&lt;1$</span>, higher exponents will result in smaller numbers. That is why the inequality needs to be reversed when you switch to the inequality with just the exponents.</p>
2,811,870
<p>This is a question from Brilliant.org</p> <blockquote> <p>The triangle $ABC$ has $AB = 9$ and $AC:BC = 40:41$. What is the maximum possible area of $ABC$?</p> </blockquote> <p>For this question, I considered the equation $A=\frac 12ab\sin\theta$.</p> <p>Since $\sin\theta\le 1$, then $A$ is maximised when $\sin\theta = 1$.</p> <p>This meant $ABC$ was a right-angled triangle, after some working I got the answer as 180. However, it was wrong, Brilliant said it is not necessarily a right-angled triangle and then used Heron's formula to find the maximised area, 820.</p> <p>I checked other posts which were similar to my question such as, <a href="https://math.stackexchange.com/q/2096023">How to maximize the area of a triangle, given two sides?</a>. However, they followed the same method I did</p> <p>I am interested in why a right-angled triangle would <em>not</em> maximise the area in this case and what is wrong with my logic?</p>
Alex R.
22,064
<p>Your confusion stems from the fact that the area is maximized at $\theta=90$, <em>if you keep the side lengths $BC,AC$ fixed</em>. However in this problem $BC,AC$ can vary in length. </p>
2,811,870
<p>This is a question from Brilliant.org</p> <blockquote> <p>The triangle $ABC$ has $AB = 9$ and $AC:BC = 40:41$. What is the maximum possible area of $ABC$?</p> </blockquote> <p>For this question, I considered the equation $A=\frac 12ab\sin\theta$.</p> <p>Since $\sin\theta\le 1$, then $A$ is maximised when $\sin\theta = 1$.</p> <p>This meant $ABC$ was a right-angled triangle, after some working I got the answer as 180. However, it was wrong, Brilliant said it is not necessarily a right-angled triangle and then used Heron's formula to find the maximised area, 820.</p> <p>I checked other posts which were similar to my question such as, <a href="https://math.stackexchange.com/q/2096023">How to maximize the area of a triangle, given two sides?</a>. However, they followed the same method I did</p> <p>I am interested in why a right-angled triangle would <em>not</em> maximise the area in this case and what is wrong with my logic?</p>
g.kov
122,782
<p>Since we have expressions for the side lengths as $a,b=px,c=qx$ ($a=9,p=40,q=41$), we can ignore all the angles and use Heron’s formula for the area \begin{align} S&amp;=\tfrac14\sqrt{4a^2b^2-(a^2+b^2-c^2)^2} \tag{1}\label{1} ,\\ S(x)&amp;=\tfrac14\sqrt{2x^2a^2(q^2+p^2)-x^4(p-q)^2(q+p)^2-a^4} \tag{2}\label{2} . \end{align} </p> <p>The $x$ that maximizes the area $S(x)$, also maximizes $S(x)^2$, so we can consider \begin{align} S_2(x)&amp;=S(x)^2=\tfrac1{16}(2x^2a^2(q^2+p^2)-x^4(p-q)^2(q+p)^2-a^4) ,\\ S_2'(x)&amp;= \tfrac14x(a^2(q^2+p^2)-(p-q)^2(q+p)^2x^2) \end{align}</p> <p>And the sought maximum can be found at</p> <p>\begin{align} x_{\max}&amp;= \frac{a\sqrt{q^2+p^2}}{q^2-p^2} =\tfrac19\sqrt{3281} \approx 6.36 ,\\ S_{\max}&amp;= S(x_{\max})=820 . \end{align}</p>
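<p>A quick numerical cross-check of the closed form, reusing formula (1) (a sketch added for illustration):</p>

```python
# Heron-type area from formula (1) with a = 9, b = 40x, c = 41x; the maximum
# should occur at x = sqrt(3281)/9 with area 820.
import math

def area(x, a=9.0, p=40.0, q=41.0):
    b, c = p * x, q * x
    val = 4 * a * a * b * b - (a * a + b * b - c * c) ** 2
    return 0.25 * math.sqrt(val) if val > 0 else 0.0   # 0 if no valid triangle

x_max = math.sqrt(3281) / 9
best_on_grid = max(area(k / 1000) for k in range(1, 9000))
# area(x_max) equals 820 up to floating point; the coarse grid search agrees
```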
2,811,870
<p>This is a question from Brilliant.org</p> <blockquote> <p>The triangle $ABC$ has $AB = 9$ and $AC:BC = 40:41$. What is the maximum possible area of $ABC$?</p> </blockquote> <p>For this question, I considered the equation $A=\frac 12ab\sin\theta$.</p> <p>Since $\sin\theta\le 1$, then $A$ is maximised when $\sin\theta = 1$.</p> <p>This meant $ABC$ was a right-angled triangle, after some working I got the answer as 180. However, it was wrong, Brilliant said it is not necessarily a right-angled triangle and then used Heron's formula to find the maximised area, 820.</p> <p>I checked other posts which were similar to my question such as, <a href="https://math.stackexchange.com/q/2096023">How to maximize the area of a triangle, given two sides?</a>. However, they followed the same method I did</p> <p>I am interested in why a right-angled triangle would <em>not</em> maximise the area in this case and what is wrong with my logic?</p>
farruhota
425,072
<p>Let the sides $AC=40x, BC=41x$. Using Heron's formula: $$S=\sqrt{\frac{81x+9}{2}\cdot \frac{81x-9}{2}\cdot \frac{9+x}{2}\cdot \frac{9-x}{2}}=\frac{81}{4}\sqrt{\left(x^2-\frac 1{81}\right)\left(81-x^2\right)}\overbrace{\le}^{GM-AM} \\ \frac{81}{4}\cdot \frac{\left(x^2-\frac 1{81}\right)+\left(81-x^2\right)}{2}=820,$$ equality occurs when $x^2-\frac1{81}=81-x^2 \Rightarrow x=\frac{\sqrt{3281}}{9}\approx 6.36$.</p> <p>Note: $GM-AM$ is the Geometric Mean-Arithmetic Mean inequality.</p>
672,744
<p>Find the surface area obtained by rotating $y= 1+3 x^2$ from $x=0$ to $x = 2$ about the $y$-axis.</p> <p>Having trouble evaluating the integral: </p> <p>Solved for $x$:</p> <ul> <li>$x=0, y=1$</li> <li>$x=2, y=13$</li> </ul> <p>$$\int_1^{13} 2\pi\sqrt{\frac{y-1}{3}} \cdot \sqrt{1+\left(\left(\sqrt{\tfrac{y-1}{3}}\right)'\right)^2}\,dy$$</p> <p>I got stuck at $$2\pi\sqrt{\frac{y-1}{3}} \cdot \sqrt{1+\frac{1}{12(y-1)}}$$</p> <p>Any help would be great, thanks!</p>
robjohn
13,854
<p>This can be done using <a href="http://en.wikipedia.org/wiki/Pappus%27s_centroid_theorem" rel="nofollow">Pappus's Theorem</a> and integrating in $x$: $$ \begin{align} \int_0^22\pi x\,\overbrace{\sqrt{y'^2+1}\,\mathrm{d}x}^{\mathrm{d}s} &amp;=\int_0^22\pi x\sqrt{36x^2+1}\,\mathrm{d}x\\ &amp;=\frac\pi{36}\int_0^2\sqrt{36x^2+1}\,\mathrm{d}(36x^2+1)\\ &amp;=\frac\pi{36}\int_1^{145}\sqrt{u}\,\mathrm{d}u\\ &amp;=\frac\pi{36}\left[\frac23u^{3/2}\right]_1^{145}\\[4pt] &amp;=\frac\pi{54}\left(145^{3/2}-1\right) \end{align} $$ where we used the substitution $u=36x^2+1$.</p>
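<p>As a sanity check, the closed form can be compared with a direct numerical quadrature of the surface-area integral (a rough midpoint rule, added purely as an illustration):</p>

```python
# Midpoint-rule quadrature of S = int_0^2 2*pi*x*sqrt(36x^2 + 1) dx,
# compared with the closed form (pi/54)(145^(3/2) - 1).
import math

def quadrature(n=100000):
    h = 2.0 / n
    total = 0.0
    for k in range(n):
        x = (k + 0.5) * h                      # midpoint of each subinterval
        total += 2 * math.pi * x * math.sqrt(36 * x * x + 1) * h
    return total

approx = quadrature()
closed_form = math.pi / 54 * (145 ** 1.5 - 1)
# both are approximately 101.52 and agree to many decimal places
```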
2,710,681
<p>If I have a function of three variables and I want to create a new function in which it equals the other function squared, could I literally just square the other function or does this violate any rules? Would this also mean its gradient vector is just squared at a certain point?</p>
Emilio Novati
187,568
<p>A function is not characterized only by the number of independent variables in the domain, but also by the number of dependent variables in the codomain, and this makes some difference in defining the ''square'' of a function.</p> <p>If we have a function $f:\mathbb{R}^3 \to \mathbb{R}$ then its value is a real number and we can easily define the square as $f^2(x)=[f(x)]^2$ , i.e. the function whose value is the square of the value of the function $f(x)$.</p> <p>If we have a function $f:\mathbb{R}^3 \to \mathbb{R}^n$ the value of the function has $n$ components: $f(x,y,z)=(f_1(x,y,z),f_2(x,y,z),f_3(x,y,z), \cdots, f_n(x,y,z))$ so we can define the square function as the function that has as components the squares of the components: $f^2(x,y,z)=(f_1^2(x,y,z),f_2^2(x,y,z),f_3^2(x,y,z), \cdots, f_n^2(x,y,z))$.</p> <p>In any case the derivative (or the derivatives) can be found using the chain rule as in: $$ \frac {\partial f^2(x,y,z)}{\partial x}=2f(x,y,z)\frac {\partial f(x,y,z)}{\partial x} $$</p>
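<p>The chain-rule identity at the end can be checked numerically; a small sketch with an arbitrary example function and evaluation point (both my own choices, purely for illustration):</p>

```python
# Check d/dx [f^2] = 2 f * df/dx by central finite differences for the
# example function f(x, y, z) = x*y + z^2 at an arbitrary point.
def f(x, y, z):
    return x * y + z ** 2

def partial_x(g, x, y, z, h=1e-6):
    # central difference approximation of the partial derivative in x
    return (g(x + h, y, z) - g(x - h, y, z)) / (2 * h)

x0, y0, z0 = 1.3, -0.7, 2.1
lhs = partial_x(lambda x, y, z: f(x, y, z) ** 2, x0, y0, z0)  # d/dx of f^2
rhs = 2 * f(x0, y0, z0) * partial_x(f, x0, y0, z0)            # chain rule
# both sides evaluate to 2*(x*y + z^2)*y = -4.9 at this point
```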
3,670,240
<p>It's not a physics question, just... coincidence ;) (I'm concerned about the mathematical rightness of it)</p> <p>Let's consider <span class="math-container">$U,T,S,P,V\in\mathbb{R_{&gt;0}}$</span> such that <span class="math-container">$$dU=TdS-PdV$$</span></p> <ul> <li>Based on this, how can we rigorously prove that <span class="math-container">$U=U(S,V)$</span>?</li> </ul> <hr> <p>Attempt 1: (probably inconclusive, see 'Attempt 2')</p> <p>Let us consider <span class="math-container">$$A, X, Y \in \mathbb{R}\;\;\mid\;\; A=A(X,Y)\;\;\;\wedge\;\;\; dA=dU$$</span></p> <p>Then <span class="math-container">$$dA=\frac{\partial A}{\partial X}\bigg|_Y\,dX+\frac{\partial A}{\partial Y}\bigg|_X\,dY$$</span> Requirement <span class="math-container">$dA=dU$</span> implies <span class="math-container">$$\frac{\partial A}{\partial X}\bigg|_Y\,dX+\frac{\partial A}{\partial Y}\bigg|_X\,dY=TdS-PdV$$</span> or <span class="math-container">$$\frac{\partial A}{\partial X}\bigg|_Y\,dX+\frac{\partial A}{\partial Y}\bigg|_X\,dY-TdS+PdV=0$$</span> Now, since <span class="math-container">$dX, dY, dS$</span> and <span class="math-container">$dV$</span> are arbitrary, to make the sum null, what they multiply must be zero, and since <span class="math-container">$T,P$</span> are not null by definition, the only possibilities are that <span class="math-container">$$X=S\;\wedge\;Y=V \qquad\text{or}\qquad Y=S\;\wedge\;X=V$$</span> in either case, we obtain <span class="math-container">$$\frac{\partial A}{\partial S}\bigg|_V=T,\qquad\frac{\partial A}{\partial V}\bigg|_S=-P$$</span> (I've considered <span class="math-container">$A$</span> being just a function of two variables <span class="math-container">$X,Y$</span>, but this is not restrictive, since if more than two variables were present in <span class="math-container">$A$</span>'s dependencies, the result wouldn't change, as the additional partial derivatives appearing in the <span class="math-container">$dA$</span> expansion would have been necessarily set
to <span class="math-container">$0$</span>, thus eliminating their dependency in <span class="math-container">$A$</span>)</p> <p>It also follows that </p> <p><span class="math-container">$$A=A(S, V)$$</span></p> <p>Then, being <span class="math-container">$dA=dU\,[..]\Rightarrow\,U=U(S,V)$</span></p> <p>Some questions about this attempt:</p> <ol> <li>How to properly carry out the last step, if all was correct so far? (Simply saying that <span class="math-container">$A$</span> and <span class="math-container">$U$</span> differ by a constant as a consequence of the mean value theorem? But how can we say this if we still don't know <span class="math-container">$U$</span>'s dependencies?)</li> <li>Does it make sense to look for <span class="math-container">$A$</span> such that <span class="math-container">$dA=dU$</span> if <span class="math-container">$A$</span> is not initially a function of the same variables as <span class="math-container">$U$</span>?</li> <li>It seems that to make the above reasoning work, <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> have to be independent of each other, but what if we cannot require this for <span class="math-container">$S$</span> and <span class="math-container">$V$</span>?</li> </ol> <hr> <p>Attempt 2: (also inconclusive, see 'Attempt 3')</p> <p>From <span class="math-container">$dU=TdS-PdV$</span>, we have <span class="math-container">$$\frac{dU}{dS}=T-P\,\frac{dV}{dS}\qquad\text{and}\qquad\frac{dU}{dV}=T\,\frac{dS}{dV}-P$$</span> Then <span class="math-container">$$\frac{dU}{dS}\bigg|_{V}=\Bigg(T-P\,\frac{dV}{dS}\Bigg)\Bigg|_{V}=T\qquad\text{and}\qquad\frac{dU}{dV}\bigg|_{S}=\Bigg(T\,\frac{dS}{dV}-P\Bigg)\Bigg|_{S}=-P$$</span> Eventually <span class="math-container">$$dU=\frac{dU}{dS}\bigg|_{V}\,dS+\frac{dU}{dV}\bigg|_{S}\,dV$$</span></p> <p>But here arises the problem: if I were sure that <span class="math-container">$U$</span> would just depend on <span class="math-container">$S,\,V$</span>, we could
have written (you can check the <a href="https://en.wikipedia.org/wiki/Partial_derivative#Basic_definition" rel="nofollow noreferrer">Wikipedia page on this</a>) <span class="math-container">$$dU=\frac{\partial U}{\partial S}\,dS+\frac{\partial U}{\partial V}\,dV$$</span> and maybe arrive at the conclusion <span class="math-container">$U=U(S,V)$</span> in some way, but since the reasoning would be circular we cannot do so.</p> <p>So this way also seems inconclusive... I wrote it in the hope of sparking some ideas in the answerer. Thanks!</p> <hr> <p>Attempt 3: posted in an answer</p>
MasterYoda
418,429
<p>The OP's solution is rigorous and sound. Here I would like to point out that <span class="math-container">$U=U(S,V)$</span> <em>by construction</em>.</p> <p>The most complete equation for the internal energy would read</p> <p><span class="math-container">$$dU = d(TS) - d(PV)$$</span></p> <p>In this case, the internal energy is dependent on all four variables with <span class="math-container">$d(TS)$</span> being the change in heat of the system while <span class="math-container">$d(PV)$</span> is the change in pressure-volume work done by the system. The sign of <span class="math-container">$d(PV)$</span> is a convention and changes between chemistry and physics uses. Now apply the product rule</p> <p><span class="math-container">$$dU = TdS + SdT - PdV - VdP$$</span></p> <p>It just so happens, it is a heck of a lot easier to design experimental systems that are isothermal (constant temperature, <span class="math-container">$T$</span>) as opposed to isentropic (constant entropy, <span class="math-container">$S$</span>) and isobaric (constant pressure, <span class="math-container">$P$</span>) systems are much safer to handle than isochoric (constant volume, <span class="math-container">$V$</span>) systems. Hence, the most applicable equation for internal energy would be a system that is at constant temperature and pressure thereby obeying the formula</p> <p><span class="math-container">$$dU = TdS - PdV$$</span></p> <p>You could, on the other hand, study a system that is isothermal and isochoric (like a rigid jar with the top on). That would yield a new equation</p> <p><span class="math-container">$$dU = TdS - VdP$$</span></p> <p>In this case <span class="math-container">$U=U(S,P)$</span>, making the variable dependencies by construction. OP's proof should work for every case chosen for <span class="math-container">$U$</span>.</p>
476,899
<p>Does someone know a proof that $\{1,e,e^2,e^3\}$ is linearly independent over $\mathbb{Q}$?</p> <p>The proof should not use that $e$ is transcendental.</p> <p>$e:$ Euler's number.</p> <p><a href="http://paramanands.blogspot.com/2013/03/proof-that-e-is-not-a-quadratic-irrationality.html#.Uhv87tJFUnl">$\{1,e,e^2\}$ is linearly independent over $\mathbb{Q}$</a></p> <p>Any hints would be appreciated.</p>
abnry
34,692
<p>Since I've spent enough time thinking about this, yet not getting a proof, I might as well show what I've got. Others can comment on whether or not more can be done.</p> <p>Your problem is solved if you can show that for any integers $a, b, c$, we have $$\sum^\infty_{n=0} \frac{1}{n!} (a + b 2^n + c 3^n)$$ irrational (using Taylor series).</p> <p>WLOG, assume $c&gt;0$. Pick $N$ large so that $(a+b2^n+c3^n) &gt; 0$ for all $n \geq N$. Then our problem is equivalent to showing that the series with strictly positive terms $$\sum^\infty_{n=N} \frac{1}{n!} (a + b 2^n + c 3^n)$$ is irrational. Suppose it were not, and equal to $p/q$. Now we try to mimic the proof of irrationality of $e$.</p> <p>Define</p> <p>$$x = q!\left(\frac{p}{q} - \sum_{n=N}^{q} \frac{1}{n!} (a + b 2^n + c 3^n) \right).$$ One easily sees by distributing that it is an integer, and because our original series contains only positive terms, $x&gt;0$.</p> <p>Note that we can also write $$x = \sum_{n=q+1}^\infty \frac{q!}{n!} (a + b 2^n + c 3^n).$$</p> <p>Now if $b=c=0$, then using $q!/n! &lt; 1/(q+1)^{n-q}$ gives a geometric series bound that gives $x &lt; 1/q$. Then we can get $x&lt;1$, which contradicts that $x$ is an integer.</p> <p>The terms $2^n$ and $3^n$ grow too fast for this same trick to work. You'd get bounds of $2^q/q$ and $3^q/q$ respectively. Since $q!/n! &lt; 1/(q+1)^{n-q}$ is not tight, it is still possible that we can get our sum under 1. Or maybe we can monkey with our original definition for $x$.</p> <p>I think what really needs to be copied are proofs of the irrationality of $e^2$ and $e^3$, but I am not aware of such proofs. Googling, I found a very algebraic proof of the irrationality of $e^2$, but I didn't read it carefully. This suggests proofs of the irrationality of $e^2$ may not easily generalize, and hence you aren't really proving that $e$ is transcendental at the same time.</p>
476,899
<p>Does someone know a proof that $\{1,e,e^2,e^3\}$ is linearly independent over $\mathbb{Q}$?</p> <p>The proof should not use that $e$ is transcendental.</p> <p>$e:$ Euler's number.</p> <p><a href="http://paramanands.blogspot.com/2013/03/proof-that-e-is-not-a-quadratic-irrationality.html#.Uhv87tJFUnl">$\{1,e,e^2\}$ is linearly independent over $\mathbb{Q}$</a></p> <p>Any hints would be appreciated.</p>
ParaH2
164,924
<p>Using algebra, let $D$ be the differentiation operator on $C^{\infty}$ functions. </p> <p>So with $$f_i(x)=e^{\lambda_i x}$$</p> <p>Then $$\forall i \in \{1,...,n\}, \forall x \in \mathbb{R}, D(f_i(x))=\lambda_i \cdot f_i(x) $$</p> <p>If all the $\lambda_i$ are distinct and $n$ is the space's dimension, then the family $\left(f_i\right)_{1\le i \le n}$ is free, i.e. linearly independent (because $f_i$ is an eigenvector associated with the eigenvalue $\lambda_i$).</p> <p>We conclude with $\lambda_i=i$ ($i$ may start at $0$ if you need it) and $x=1$.</p>
489,907
<p>I've only got the following parts of a triangle:</p> <ul> <li>Line A to B </li> <li>Line B to C</li> </ul> <p>And optionally the line from A to C, if needed.</p> <p>I'm trying to get the point X.</p> <p><a href="https://i.stack.imgur.com/f6leO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/f6leO.png" alt="Triangle"></a></p> <p>Now the problem is, I've got absolutely no idea of trigonometry, so I don't even know how to search for it. Thanks in advance!</p>
Pocho la pantera
92,051
<p>$X=B+\frac{(A-B)\cdot (C-B)}{||C-B||^2} (C-B)$</p> <p>Here $\cdot$ is the dot product, so $X$ is the orthogonal projection of $A$ onto the line through $B$ and $C$, i.e. the foot of the perpendicular dropped from $A$.</p>
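<p>In coordinates the formula works out as follows (a small sketch; the sample points are my own, chosen so the result is easy to check by eye):</p>

```python
# X = B + ((A-B).(C-B) / |C-B|^2) (C-B): the foot of the perpendicular
# dropped from A onto the line through B and C.
def project(A, B, C):
    abx, aby = A[0] - B[0], A[1] - B[1]
    cbx, cby = C[0] - B[0], C[1] - B[1]
    t = (abx * cbx + aby * cby) / (cbx * cbx + cby * cby)
    return (B[0] + t * cbx, B[1] + t * cby)

X = project(A=(3, 4), B=(0, 0), C=(10, 0))
# X is (3.0, 0.0) up to floating point: directly below A, since BC is horizontal
```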
4,051,403
<p>I'm not a math major, but a philosophy major that likes to know that he knows what he's talking about. This may seem like a super stupid question, but here I go.</p> <p>So Euclid made a lot of sense when he gave the example of the nature of multiplication. For example. &quot;2 x 3&quot; is really 2 added to itself 3 times, and vice versa, and this is different from 2 + 3, which is five, two units added to three.</p> <p>But, what about division?</p> <p>It seems to be the opposite of multiplication like a mirror image from 3rd grade, but I'm starting to doubt that after digging deeper.</p> <p>For example, <a href="https://www.mathwarehouse.com/dictionary/D-words/definition-of-divide.php" rel="nofollow noreferrer">this site</a> gave the following diagram for division: <a href="https://i.stack.imgur.com/KzDiq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KzDiq.png" alt="enter image description here" /></a></p> <p>Based on this image, for 6/3, division seems to be the counting of groups, not the inverse of multiplication according to Euclid.</p> <p>Can someone explain the true nature of division in light of the euclid example I gave?</p>
Lutz Lehmann
115,115
<p>If you have a separable situation like the present one, then the connection between first integral/Hamiltonian/energy and the potential is <span class="math-container">$$ H(x,y)=\frac12y^2+P(x) $$</span> So in this case you can choose <span class="math-container">$P(x)=\frac14(x^2-λ)^2$</span>.</p>
2,066,455
<p>I ask this question mainly to resolve (hopefully) an error with the following problem. </p> <p>The United States Court consists of $3$ women and $6$ men. In how many ways can a $3$-member committee be formed if each committee must have at least one woman?</p> <p>My approach: Since each group needs at least one woman, there are $\binom31$ ways to pick the first one. After that, both men and women are allowed. There are $\binom82$ ways of doing this. </p> <p>This amounts to $\binom31\times\binom82 = 84$ ways, but the answer is $64$. I don't see how this is the answer for two reasons. The first is that there should be a factor of $3$ somewhere, but $64$ has no factor of $3$. The second is that I wrote a counter in Java to look at all strings from the list $a,b,c,d,e,f,g,h,$ and $i$ chosen $3$ at a time and count how many words contained $a,b,$ or $c$. </p> <p>What seems to be the problem?</p>
Kiran
82,744
<p>In how many ways can ten people be arranged in a line if neither of two particular people can sit on either end of the row?</p> <p>Without any restrictions, there are $10!$ ways to arrange them.</p> <p>$9!\times 2 $ ways where the first person is at an end, <br> $9!\times 2 $ ways where the second person is at an end,</p> <p>$8!\times 2 $ ways where both of them are at the ends.</p> <p>Then use inclusion-exclusion for the answer: <br> $10! - (9!\times 2 + 9!\times 2) + 8!\times 2 $</p>
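<p>The inclusion-exclusion count can be cross-checked against a direct count: seat the two ends first (they must come from the other $8$ people, $8\times 7$ ways), then arrange the remaining $8$ in the middle ($8!$ ways). A small sketch of both counts:</p>

```python
# Inclusion-exclusion  n! - 2*(2*(n-1)!) + 2*(n-2)!  versus the direct count
# (n-2)*(n-3)*(n-2)!  for "neither of 2 particular people at either end".
from math import factorial

def via_inclusion_exclusion(n):
    return factorial(n) - 2 * (2 * factorial(n - 1)) + 2 * factorial(n - 2)

def via_direct(n):
    # pick the two end seats from the other n-2 people, then fill the middle
    return (n - 2) * (n - 3) * factorial(n - 2)

# for n = 10 both formulas give 2257920
```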
88,565
<p>Today I had an argument with my math teacher at school. We were answering some simple True/False questions and one of the questions was the following:</p> <p><span class="math-container">$$x^2\ne x\implies x\ne 1$$</span></p> <p>I immediately answered true, but for some reason, everyone (including my classmates and math teacher) is disagreeing with me. According to them, when <span class="math-container">$x^2$</span> is not equal to <span class="math-container">$x$</span>, <span class="math-container">$x$</span> also can't be <span class="math-container">$0$</span> and because <span class="math-container">$0$</span> isn't excluded as a possible value of <span class="math-container">$x$</span>, the sentence is false. After hours, I am still unable to understand this ridiculously simple implication. I can't believe I'm stuck with something so simple.<br><br> <strong>Why I think the logical sentence above is true:</strong><br> My understanding of the implication symbol <span class="math-container">$\implies$</span> is the following: If the left part is true, then the right part must be also true. If the left part is false, then nothing is said about the right part. In the right part of this specific implication nothing is said about whether <span class="math-container">$x$</span> can be <span class="math-container">$0$</span>. Maybe <span class="math-container">$x$</span> can't be <span class="math-container">$-\pi i$</span> too, but as I see it, it doesn't really matter, as long as <span class="math-container">$x \ne 1$</span> holds. And it always holds when <span class="math-container">$x^2 \ne x$</span>, therefore the sentence is true.</p> <h3>TL;DR:</h3> <p><strong><span class="math-container">$x^2 \ne x \implies x \ne 1$</span>: Is this sentence true or false, and why?</strong></p> <p>Sorry for bothering such an amazing community with such a simple question, but I had to ask someone.</p>
The Chaz 2.0
7,850
<p>First, some general remarks about logical implications/conditional statements. </p> <ol> <li><p>As you know, $P \rightarrow Q$ is true when $P$ is false, or when $Q$ is true. </p></li> <li><p>As mentioned in the comments, the <em>contrapositive</em> of the implication $P \rightarrow Q$, written $\lnot Q \rightarrow \lnot P$, is logically equivalent to the implication. </p></li> <li><p>It is possible to write implications with merely the "or" operator. Namely, $P \rightarrow Q$ is equivalent to $\lnot P\text{ or }Q$, or in symbols, $\lnot P\lor Q$.</p></li> </ol> <p>Now we can look at your specific case, using the above approaches. </p> <ol> <li>If $P$ is false, i.e., if $x^2 \neq x$ is false (so $x^2 = x$), then the statement is true, so we assume that $P$ is true. So, as a statement, $x^2 = x$ is false. Your teacher and classmates are rightly convinced that $x^2 = x$ is equivalent to ($x = 1$ or $x =0\;$), and we will use this here. If $P$ is true, then ($x=1\text{ or }x =0\;$) is false. In other words, ($x=1$) AND ($x=0\;$) are both false. I.e., ($x \neq 1$) and ($x \neq 0\;$) are true. I.e., if $P$, then $Q$. <br></li> <li>The contrapositive is $x = 1 \rightarrow x^2 = x$. True.</li> <li>We use the "sufficiency of or" to write our conditional as: $$\lnot(x^2 \neq x)\lor x \neq 1\;.$$ That is, $x^2 = x$ or $x \neq 1$, which is $$(x = 1\text{ or }x =0)\text{ or }x \neq 1,$$ which is $$(x = 1\text{ or }x \neq 1)\text{ or }x = 0\;,$$ which is $$(\text{TRUE})\text{ or }x = 0\;,$$ which is true. </li> </ol>
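<p>The claim can also be checked mechanically over sample values, encoding $P\Rightarrow Q$ as $\lnot P\lor Q$ (a sketch; exact rationals avoid floating-point noise, and the sample set is my own choice):</p>

```python
# Check  x^2 != x  =>  x != 1  at many rational sample points.
from fractions import Fraction

samples = [Fraction(n, d) for n in range(-12, 13) for d in (1, 2, 3, 4)]

def implication_holds(x):
    P = x * x != x       # hypothesis
    Q = x != 1           # conclusion
    return (not P) or Q  # standard encoding of P => Q

results = [implication_holds(x) for x in samples]
# every sample satisfies the implication; at x = 0 it holds vacuously,
# because there P is false (0^2 = 0)
```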
88,565
<p>Today I had an argument with my math teacher at school. We were answering some simple True/False questions and one of the questions was the following:</p> <p><span class="math-container">$$x^2\ne x\implies x\ne 1$$</span></p> <p>I immediately answered true, but for some reason, everyone (including my classmates and math teacher) is disagreeing with me. According to them, when <span class="math-container">$x^2$</span> is not equal to <span class="math-container">$x$</span>, <span class="math-container">$x$</span> also can't be <span class="math-container">$0$</span> and because <span class="math-container">$0$</span> isn't excluded as a possible value of <span class="math-container">$x$</span>, the sentence is false. After hours, I am still unable to understand this ridiculously simple implication. I can't believe I'm stuck with something so simple.<br><br> <strong>Why I think the logical sentence above is true:</strong><br> My understanding of the implication symbol <span class="math-container">$\implies$</span> is the following: If the left part is true, then the right part must be also true. If the left part is false, then nothing is said about the right part. In the right part of this specific implication nothing is said about whether <span class="math-container">$x$</span> can be <span class="math-container">$0$</span>. Maybe <span class="math-container">$x$</span> can't be <span class="math-container">$-\pi i$</span> too, but as I see it, it doesn't really matter, as long as <span class="math-container">$x \ne 1$</span> holds. And it always holds when <span class="math-container">$x^2 \ne x$</span>, therefore the sentence is true.</p> <h3>TL;DR:</h3> <p><strong><span class="math-container">$x^2 \ne x \implies x \ne 1$</span>: Is this sentence true or false, and why?</strong></p> <p>Sorry for bothering such an amazing community with such a simple question, but I had to ask someone.</p>
Phira
9,325
<p>The short answer is: Yes, it is true, because the contrapositive just expresses the fact that $1^2=1$.</p> <p>But in controversial discussions of these issues, it is often (but not always) a good idea to try out non-mathematical examples:</p> <hr> <p>"If a nuclear bomb drops on the school building, you die."</p> <p>"Hey, but you die, too."</p> <p>"That doesn't help you much, though, so it is still true that you die." </p> <hr> <p>"Oh no, if the supermarket is not open, I cannot buy chocolate chips cookies."</p> <p>"You cannot buy milk and bread, either!"</p> <p>"Yes, but I prefer to concentrate on the major consequences."</p> <hr> <p>"If you sign this contract, you get a free pen."</p> <p>"Hey, you didn't tell me that you get all my money."</p> <p>"You didn't ask."</p> <hr> <p>Non-mathematical examples also explain the psychology behind your teacher's and classmates' thinking. In real-life, the choice of consequences is usually a loaded message and can amount to a lie by omission. So, there is this lingering suspicion that the original statement suppresses information on 0 on purpose. </p> <p>I suggest that you learn about some nonintuitive probability results and make bets with your teacher.</p>
3,051,480
<p>Now we have the equation <span class="math-container">$$\sum_{i}(x_i-\hat x_i)^2,$$</span> where <span class="math-container">$x_i$</span> is the observed value of a data sample <span class="math-container">$S$</span>. Here is the question:</p> <blockquote> <p>Why does this expression get its minimum value when <span class="math-container">$\hat x_i$</span> is the average of the data sample <span class="math-container">$S$</span> ?</p> </blockquote> <p>I tried to take the derivatives of that equation and make it to zero, but it seems there's something wrong, because <span class="math-container">$\hat x_i$</span> is kind of multi-variable. Can anyone help me out? Thanks a lot!</p>
marty cohen
13,079
<p>This can be solved without calculus.</p> <p>Let <span class="math-container">$f(z) =\sum_{i}(x_i-z)^2 $</span>.</p> <p>Then, since <span class="math-container">$\sum_{i}x_i =n\hat x$</span>,</p> <p><span class="math-container">$\begin{array}\\ f(z)-f(\hat x) &amp;=\sum_{i}(x_i-z)^2-\sum_{i}(x_i-\hat x)^2\\ &amp;=\sum_{i}((x_i-z)^2-(x_i-\hat x)^2)\\ &amp;=\sum_{i}(x_i^2-2x_iz+z^2-(x_i^2-2x_i\hat x+\hat x^2))\\ &amp;=\sum_{i}(2x_i(\hat x-z)+z^2-\hat x^2)\\ &amp;=2n\hat x(\hat x-z)+n(z^2-\hat x^2)\\ &amp;=2n\hat x(\hat x-z)+n(z-\hat x)(z+\hat x)\\ &amp;=n(\hat x-z)(2\hat x-(z+\hat x))\\ &amp;=n(\hat x-z)(\hat x-z)\\ &amp;=n(\hat x-z)^2\\ &amp;\ge 0 \quad \text{with equality iff } z=\hat x\\ \end{array} $</span></p>
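The identity $f(z)-f(\hat x)=n(\hat x-z)^2$ derived above is easy to sanity-check numerically (the sample and trial points below are made up):

```python
# Numerical check of f(z) - f(xbar) == n * (xbar - z)^2
# for f(z) = sum_i (x_i - z)^2, with xbar the sample mean.
xs = [2.0, 3.5, 7.0, 7.5, 10.0]   # arbitrary sample
n = len(xs)
xbar = sum(xs) / n

def f(z):
    return sum((x - z) ** 2 for x in xs)

for z in [-3.0, 0.0, xbar, 5.0, 12.0]:
    assert abs((f(z) - f(xbar)) - n * (xbar - z) ** 2) < 1e-9
```

In particular the minimum of $f$ over the trial points is attained at the mean, as the algebra predicts.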
2,960,501
<p><span class="math-container">$(0^n 1)^* \ \ , n\geq 0 $</span></p> <p>According to wiki</p> <blockquote> <p>If V is a set of strings, then V* is defined as the smallest superset of V that contains the empty string ε and is closed under the string concatenation operation</p> <p>If V is a set of symbols or characters, then V* is the set of all strings over symbols in V, including the empty string ε.</p> </blockquote> <p>So this language accepts all strings over <span class="math-container">$\Sigma^*$</span> which must be regular. Also regular languages are closed under kleene star.</p> <p>But again on wiki</p> <blockquote> <p><span class="math-container">$V^* = \bigcup\limits_{i\geq 0}^{} V_i = {\epsilon} \ \cup \ V_1 \ \cup V_2 \ \cup \ V_3 \ \cup .....$</span></p> </blockquote> <p>Now according to this definition strings such as <span class="math-container">$01001$</span> cannot be a part of given language so <span class="math-container">$0$</span>'s prior of every 1 are compared within a string, so this can't be regular.</p> <p>But according to the former definition <span class="math-container">$01001$</span> is a part of language because it can be formed with symbols <span class="math-container">$01$</span> and <span class="math-container">$001$</span> both are part of <span class="math-container">$0^n 1$</span>.</p> <p>Can someone help me in identifying the class of these types of languages</p>
vonbrand
43,946
<p>I take it that you mean <span class="math-container">$L = \bigcup_{n \ge 0} \mathcal{L}((1^n 0)^*)$</span>, i.e., arbitrary repeats of <span class="math-container">$1^n 0$</span> for each <span class="math-container">$n$</span>. If you try to dream up a DFA to recognize this, you'll see it would need to record <span class="math-container">$n$</span> somehow to check the others, and as <span class="math-container">$n$</span> is not limited, that won't work. So we suspect it isn't regular.</p> <p>For a proof, use the pumping lemma. Assume your language <span class="math-container">$L$</span> is regular; then there is a constant <span class="math-container">$N$</span> such that each word <span class="math-container">$\sigma \in L$</span> with <span class="math-container">$\lvert \sigma \rvert \ge N$</span> can be written <span class="math-container">$\sigma = \alpha \beta \gamma$</span> with <span class="math-container">$\lvert \alpha \beta \rvert \le N$</span> and <span class="math-container">$\beta \ne \epsilon$</span>, such that for all <span class="math-container">$k \ge 0$</span> we have <span class="math-container">$\alpha \beta^k \gamma \in L$</span>. Pick <span class="math-container">$\sigma = 1^N 0 1^N 0 \in L$</span>, <span class="math-container">$\lvert \sigma \rvert = 2 N + 2 \ge N$</span>. But then <span class="math-container">$\beta$</span> is formed just by <span class="math-container">$1$</span>s (say <span class="math-container">$\lvert \beta \rvert = u$</span>, <span class="math-container">$u &gt; 0$</span>), and if you pick <span class="math-container">$k = 2$</span> you get <span class="math-container">$\alpha \beta^2 \gamma = 1^{N + u} 0 1^N 0 \notin L$</span>. This contradiction shows <span class="math-container">$L$</span> is not regular.</p>
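The "would need to record $n$" intuition can be made concrete in a few lines of Python; the membership test below is my own encoding of $L$, and the pairs it distinguishes mirror the pumping-lemma argument:

```python
def in_L(w):
    # Membership in L = union over n >= 0 of (1^n 0)^*:
    # w must decompose into blocks "1"*n + "0" that all share one fixed n.
    if w == "":
        return True
    if set(w) - {"0", "1"} or not w.endswith("0"):
        return False
    blocks = w.split("0")[:-1]            # runs of 1s between the 0s
    return len({len(b) for b in blocks}) == 1

# 1^j 0 1^i 0 is in L exactly when i == j, so the words 1^i 0 are pairwise
# distinguishable: exactly the unbounded "memory of n" a DFA cannot have.
for i in range(1, 6):
    for j in range(1, 6):
        assert in_L("1" * j + "0" + "1" * i + "0") == (i == j)
```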
1,931,754
<p>I am trying to show that the interval $[0,1)$ is a closed subset of $(-1,1)$ by using the definition that a closed subset contains all of its limit points. So for a convergent sequence $\{x_n\}$ in $[0,1)$ we have that $0 \leq x_{n} &lt; 1$ for all $n \in \mathbf{N}$. How can I show that $\lim_{n \rightarrow \infty}x_{n} = x$ implies that $x \in [0,1)$? I understand that 1 cannot be a limit point of $[0,1)$ since $1 \not\in (-1,1)$ but I'm having a hard time saying that in a formal way?</p>
Amarildo
307,377
<p>$$\lim _{n\to \infty }\:\left(1+\log \left(\frac{n}{n-1}\right)\right)^n\: = \lim _{n\to \infty }\:\left(e^{n\cdot \ln\left(1+\log \left(\frac{n}{n-1}\right)\right)}\right)\: \approx$$</p> <p>$$\lim _{n\to \infty }\:\left(e^{n\cdot \ln\left(\frac{n}{n-1}\right)}\right)\: \approx \lim _{n\to \infty }\:\left(e^{n\cdot \left(\frac{n-n+1}{n-1}\right)}\right)\: = \color{red}{e}$$</p> <p><strong>Note:</strong> the first $\approx$ uses $\ln(1+u) \approx u$ for small $u$; for the second, the argument of the logarithm satisfies $\frac{n}{n-1} \to 1$ as $n \to \infty$, so $\ln t \approx t-1$ near $t=1$ gives $$\color{red}{\ln\left(\frac{n}{n-1}\right) \approx \frac{n-\left(n-1\right)}{n-1} = \frac{1}{n-1}}$$</p>
1,931,754
<p>I am trying to show that the interval $[0,1)$ is a closed subset of $(-1,1)$ by using the definition that a closed subset contains all of its limit points. So for a convergent sequence $\{x_n\}$ in $[0,1)$ we have that $0 \leq x_{n} &lt; 1$ for all $n \in \mathbf{N}$. How can I show that $\lim_{n \rightarrow \infty}x_{n} = x$ implies that $x \in [0,1)$? I understand that 1 cannot be a limit point of $[0,1)$ since $1 \not\in (-1,1)$ but I'm having a hard time saying that in a formal way?</p>
Student
255,452
<p>Let the limit be equal to $L$. Then $\lim n\ln(1+ \log(n/(n-1)))=\ln L$. Since $\ln(1+u)\sim u$ as $u\to 0$, this implies: $$\ln L=\lim\frac{\log(1/(1-1/n))}{1/n}$$ Let $y=1/n$. By L'Hôpital's rule, $$\lim_{y\to 0}\frac{-\log(1-y)}{y}=\lim_{y\to 0} \frac{1/(1-y)}{1}=1$$ But $1=\ln L$, which implies $L=e$.</p>
1,596,264
<p>Let $p$ be prime and $d \ge 2$. I want to show that $$ \frac{(p^d - 1)(p^{d-1} - 1)}{(p-1)(p^2 - 1)} \equiv 1 \pmod{p}. $$ I have a proof, but I think it is complicated, and the statement appears in a book as if it is very easy to see. So is there any easy argument to see it?</p> <p>My proof uses $$ \frac{p^n - 1}{p-1} = 1 + p + \ldots + p^{n-1}. $$ So $$ \frac{(p^d - 1)(p^{d-1} - 1)}{(p-1)(p^2 - 1)} = \frac{(1 + p + \ldots + p^{d-1})(1 + p + \ldots + p^{d-2})}{p+1} $$ If $d - 2$ is odd, set $u := (1 + p + \ldots + p^{d-1})$ and then proceed \begin{align*} &amp; \frac{u(1 + p + \ldots + p^{d-2})}{p+1} \\ &amp; = \frac{u(1+p) + u(p^2 + \ldots + p^{d-2})}{p+1} \\ &amp; = u + p^2\cdot \frac{u(1+p+\ldots + p^{d-4})}{p+1} \\ &amp; = u + p^2\cdot \left( \frac{u(1+p) + u(p^2 + \ldots + p^{d-4})}{p+1} \right) \\ &amp; = u + p^2\cdot \left( u + p^2\cdot \frac{u(1+p+\ldots + p^{d-6})}{p+1} \right) \\ &amp; \quad \qquad\qquad \vdots \\ &amp; = u + p^2\cdot \left( u + p^2 \left( u + p^2\left( u + \ldots + p^2 \frac{u(1+p)}{1+p} \right) \right) \right) \\ &amp; = u + p^2\cdot ( u + p^2 \cdot ( u + p^2 ( u + \ldots + p^2 u ))) \\ &amp; \equiv u \pmod{p} \\ &amp; \equiv 1 \pmod{p} \end{align*} and similarly if $d-1$ is odd then successively multiply $(1+p+\ldots + p^{d-1})$ out with $v := (1+p+\ldots + p^{d-2})$. But as said, this seems too complicated to me, so is there another easy way to see this?</p>
Redundant Aunt
109,899
<p>You need to prove that $p\mid\frac{(p^{d-1}-1)(p^d-1)}{(p-1)(p^2-1)}-1$</p> <p>It is not difficult to see that $p\mid (p^{d-1}-1)(p^d-1)-(p-1)(p^2-1)$. Furthermore, we have that $(p-1)(p^2-1)\mid (p^{d-1}-1)(p^d-1)$ because either $d$ or $d-1$ is even and so either $p^{d}-1$ or $p^{d-1}-1$ is divisible by $p^2-1$ and the other factor is then divisible by $p-1$.</p> <p>So to sum up, we have proven that: $$ p\mid (p^{d-1}-1)(p^d-1)-(p-1)(p^2-1) $$ and $$ (p-1)(p^2-1)\mid (p^{d-1}-1)(p^d-1)-(p-1)(p^2-1) $$ Now, because $\gcd(p,(p-1)(p^2-1))=1$, this implies: $$ p(p-1)(p^2-1)\mid (p^{d-1}-1)(p^d-1)-(p-1)(p^2-1)\implies\\ p\mid \frac{(p^{d-1}-1)(p^d-1)}{(p-1)(p^2-1)}-1 $$</p>
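Both divisibility claims can be brute-force checked for small primes; a quick Python sketch:

```python
# Check that (p-1)(p^2-1) divides (p^(d-1)-1)(p^d-1) exactly, and that
# the integer quotient is congruent to 1 mod p, for small primes p and d >= 2.
for p in [2, 3, 5, 7, 11, 13]:
    for d in range(2, 8):
        num = (p ** (d - 1) - 1) * (p ** d - 1)
        den = (p - 1) * (p ** 2 - 1)
        assert num % den == 0            # the fraction is an integer
        assert (num // den) % p == 1     # and it is 1 mod p
```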
1,596,264
<p>Let $p$ be prime and $d \ge 2$. I want to show that $$ \frac{(p^d - 1)(p^{d-1} - 1)}{(p-1)(p^2 - 1)} \equiv 1 \pmod{p}. $$ I have a proof, but I think it is complicated, and the statement appears in a book as if it is very easy to see. So is there any easy argument to see it?</p> <p>My proof uses $$ \frac{p^n - 1}{p-1} = 1 + p + \ldots + p^{n-1}. $$ So $$ \frac{(p^d - 1)(p^{d-1} - 1)}{(p-1)(p^2 - 1)} = \frac{(1 + p + \ldots + p^{d-1})(1 + p + \ldots + p^{d-2})}{p+1} $$ If $d - 2$ is odd, set $u := (1 + p + \ldots + p^{d-1})$ and then proceed \begin{align*} &amp; \frac{u(1 + p + \ldots + p^{d-2})}{p+1} \\ &amp; = \frac{u(1+p) + u(p^2 + \ldots + p^{d-2})}{p+1} \\ &amp; = u + p^2\cdot \frac{u(1+p+\ldots + p^{d-4})}{p+1} \\ &amp; = u + p^2\cdot \left( \frac{u(1+p) + u(p^2 + \ldots + p^{d-4})}{p+1} \right) \\ &amp; = u + p^2\cdot \left( u + p^2\cdot \frac{u(1+p+\ldots + p^{d-6})}{p+1} \right) \\ &amp; \quad \qquad\qquad \vdots \\ &amp; = u + p^2\cdot \left( u + p^2 \left( u + p^2\left( u + \ldots + p^2 \frac{u(1+p)}{1+p} \right) \right) \right) \\ &amp; = u + p^2\cdot ( u + p^2 \cdot ( u + p^2 ( u + \ldots + p^2 u ))) \\ &amp; \equiv u \pmod{p} \\ &amp; \equiv 1 \pmod{p} \end{align*} and similarly if $d-1$ is odd then successively multiply $(1+p+\ldots + p^{d-1})$ out with $v := (1+p+\ldots + p^{d-2})$. But as said, this seems too complicated to me, so is there another easy way to see this?</p>
Stella Biderman
123,230
<p>There are even easier proofs than the answers others have supplied. If you look at the numerator, one of the terms is guaranteed to be divisible by $p^2-1$, whichever has an even exponent. This is because $(x^2-1)\mid(x^{2n}-1)$. The division clearly results in a polynomial with constant term $1$, as does the division of the other term by $p-1$. Then their product does as well, and so taking the expression mod $p$ gives $1$.</p> <p>Alternatively, we observe that $p^d\equiv 0 \pmod p$, and then we are essentially done. This means that the fraction trivially simplifies to $$\frac{(-1)^2}{(-1)^2}=1$$</p>
70,603
<p>We were shown in class this next calculation: (Here, $V_n(RB^n)$ is the volume of an $n$ dimensional ball of radius $R$, likewise $S_{n-1}$ is the surface area of the $n$ dimensional sphere in $\mathbb{R}^n$. $rS^{n-1}$ denotes the $n$ dimensional sphere of radius $r$ and integrating $d\textbf{S}$ means a surface integral.) $$V_n(RB^n)=\int_{RB^n}1dx=\int_0^R\int_{rS^{n-1}}1d\textbf{S}dr=\int_0^R\int_{S^{n-1}}r^{n-1}d\textbf{S}dr=$$$$=\int_0^Rr^{n-1}\int_{S^{n-1}}1d\textbf{S}dr=\int_0^Rr^{n-1}S_{n-1}dr=\frac{R^n}{n}S_{n-1}$$ and finally $V_n=\frac{1}{n}S_{n-1}$ since $V_n(RB^n)=R^nV_n$. My problem is with the 3rd equality. The first is obvious and the second is the coarea formula. I assume the third equality is a result of a change of variables, but since this is taking place in $\mathbb{R}^n$ I'd expect the change of variables to be $x\mapsto rx$ which gives the Jacobian of $r^n$ - not the $r^{n-1}$ we see after the third equality.</p> <p>It'd be easier for me to assume the teacher had a mistake here, had she not used this result later on in her lectures... So my question is, was she wrong in the change of variables there or am I missing something about surface integrals?</p>
Christian Blatter
1,303
<p>The third equality comes from the fact that the map $$f:\quad{\mathbb R}^n\to{\mathbb R}^n, \quad u\mapsto x:=r\,u$$ ($r$ is constant here) multiplies volume elements by its Jacobian $r^n$ but multiplies $(n-1)$-dimensional surface elements by $r^{n-1}$.</p> <p>While we are at it: The set $B^n:=\{x\in{\mathbb R}^n\ |\ |x|&lt;1\}$ is the ($n$-dimensional) unit ball in ${\mathbb R}^n$. On the other hand its boundary $\{x\in{\mathbb R}^n\ |\ |x|=1\}$ is not the $n$-dimensional unit sphere, but the $(n-1)$-dimensional unit sphere $S^{n-1}$. The latter has an $(n-1)$-dimensional surface area which you might call $\omega(S^{n-1})$ or similar, but certainly not $S_n$. Therefore the proper way to write your formula would be $${\rm vol}(R\,B^n)={R^n\over n}\omega(S^{n-1})\ .$$</p>
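The relation ${\rm vol}(B^n)=\omega(S^{n-1})/n$ is consistent with the closed forms $V_n=\pi^{n/2}/\Gamma(\frac n2+1)$ and $\omega(S^{n-1})=2\pi^{n/2}/\Gamma(\frac n2)$; a short Python check:

```python
import math

# Closed forms for the unit-ball volume and unit-sphere surface area,
# used to confirm vol(B^n) = omega(S^{n-1}) / n.
def ball_volume(n):
    return math.pi ** (n / 2) / math.gamma(n / 2 + 1)

def sphere_area(n_minus_1):
    n = n_minus_1 + 1
    return 2 * math.pi ** (n / 2) / math.gamma(n / 2)

for n in range(1, 12):
    assert abs(ball_volume(n) - sphere_area(n - 1) / n) < 1e-12
```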
1,406,878
<p>Given is the following sequence:</p> <p>$a_{n+1} = a_n - \frac{a_n - v}{s}$</p> <p>I found out that</p> <p>$\forall a_0, v, s \in \mathbb{R}, s&gt;0: \lim\limits_{n \to \infty}a_n=v$</p> <p>But I do not know why. I tried to write down $a_2$, $a_3$, but the expressions become very long and complex, and they don't help me to see why the limit is $v$.</p>
user1337
62,839
<p>This is a first order linear recurrence relation, and the solution can be found explicitly:</p> <p>$$a_n=a_0 \left(\frac{s-1}{s}\right)^n-v \left(\frac{s-1}{s}\right)^n+v.$$</p> <p>Can you see that the limit doesn't necessarily exist?</p>
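A quick numerical experiment (sample values of $a_0$, $v$, $s$ are made up) confirms both the closed form and the caveat about the limit:

```python
# Iterate a_{n+1} = a_n - (a_n - v)/s and compare with the closed form
# a_n = (a_0 - v) * ((s-1)/s)^n + v.
def iterate(a0, v, s, n):
    a = a0
    for _ in range(n):
        a = a - (a - v) / s
    return a

def closed_form(a0, v, s, n):
    return (a0 - v) * ((s - 1) / s) ** n + v

a0, v = 10.0, 3.0
for s in [4.0, 0.75, 0.4]:
    for n in range(12):
        assert abs(iterate(a0, v, s, n) - closed_form(a0, v, s, n)) < 1e-8

# For s = 4 the ratio (s-1)/s = 0.75 has modulus < 1, so a_n -> v ...
assert abs(iterate(a0, v, 4.0, 60) - v) < 1e-6
# ... but for s = 0.4 the ratio is -1.5 and the iterates diverge.
assert abs(iterate(a0, v, 0.4, 60) - v) > 1e6
```

So the limit is $v$ exactly when $\left|\frac{s-1}{s}\right|<1$, i.e. $s>\frac12$ (or trivially when $a_0=v$).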
592,912
<p>I need to describe the minimal field extension $\mathbb Q(\sqrt[3] {2})$ of the rational numbers $\mathbb Q$ that contains $\sqrt[3] {2}$.</p> <p>$\mathbb Q(\sqrt[3] {2}) =\{a+b\sqrt[3] {2}+c(\sqrt[3] {2})^2|a,b,c \in \mathbb{Q}\}$.</p> <p>I tried to use the rationalization of $x^3 + y^3 + z^3 - 3xyz$?</p>
C-star-W-star
79,762
<p>Systematic Approach:</p> <p><a href="https://i.stack.imgur.com/K9ANo.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/K9ANo.jpg" alt="Extensions"></a></p> <p><em>(Comment for questions!)</em></p>
459,374
<p>Let $X$ be the random variable which denotes the number of times a die has been rolled till each side has appeared. The order does not matter. We are trying to find $E[X]$.</p> <p>Let $X_i$ be a random variable which denotes how many times a die has to be rolled till side $i$ has appeared.</p> <p>So,</p> <p>$$E[X]= E[X_1+X_2+X_3+X_4+X_5+X_6] = E[X_1]+E[X_2]+E[X_3]+E[X_4]+E[X_5]+E[X_6]$$</p> <p>$$E[X_1]=E[X_2]=E[X_3]=E[X_4]=E[X_5]=E[X_6]=6$$</p> <p>$$E[X]=36$$?</p> <p>Why is this solution wrong?</p>
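For what it's worth, the claimed $E[X]=36$ can be sanity-checked by simulation: the coupon-collector value is $6\left(1+\frac12+\cdots+\frac16\right)=14.7$, and a quick Monte Carlo sketch agrees with that rather than with $36$:

```python
import random

# Simulate rolling a fair die until all six faces have appeared,
# and average the number of rolls over many trials.
random.seed(0)

def rolls_until_all_faces():
    seen, rolls = set(), 0
    while len(seen) < 6:
        seen.add(random.randint(1, 6))
        rolls += 1
    return rolls

trials = 20000
avg = sum(rolls_until_all_faces() for _ in range(trials)) / trials
# Theoretical value: 6 * (1 + 1/2 + 1/3 + 1/4 + 1/5 + 1/6) = 14.7
assert 14.0 < avg < 15.4
```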
HK Lee
37,116
<p>[intuitive answer]</p> <p>If $y$ is fixed and if $x$ is a point in the interior, we draw a small ball $B(x,r)$. </p> <p>Then there exists a point $x'\in \partial B(x,r)$ such that </p> <p>$$ d(x',y) =d(x,y)+r $$</p>
1,860,459
<blockquote> <p>Prove that $4k &lt; 2^k$ by induction.</p> </blockquote> <p>It holds for $k = 5$. Assume $ k = n + 1 $. Then</p> <p>$4(n+1) &lt; 2^{(n+1)}$</p> <p>$4n + 4 &lt; 2^n * 2$</p> <p>$2n + 2 \leq 2^n$</p> <p>Now I just need to show that</p> <p>$2n + 2 \leq 4n$</p> <p>$n + 1 \leq 2n$</p> <p>$1 \leq n$</p> <p>And because I chose $n = 5$ which is greater than $1$, this should prove that the formula holds for $n \geq 5$.</p> <p>Is this correct?</p>
Henrik supports the community
193,386
<p>Your choice of $n$ should only be for the basis of the induction, not for the inductive step.</p> <p>What you need to do is to show that if $4n&lt;2^n$ then $4(n+1)&lt;2^{n+1}$. One way of doing that is to say that $4(n+1)=4n+4&lt;2^n+4$ and then argue that $x+4&lt;2x$ for the relevant values.</p>
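A brute-force check of the claim and of the helper inequality used in the inductive step (over a finite range, so of course not a substitute for the induction):

```python
# 4k < 2^k for k >= 5, and the helper inequality x + 4 < 2x for x > 4
# (applied in the step with x = 2^n >= 32).
for k in range(5, 40):
    assert 4 * k < 2 ** k
for x in range(5, 100):
    assert x + 4 < 2 * x
```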
4,071,619
<blockquote> <p>There are two German couples, two Japanese couples and one unmarried person. If all 9 persons are to be interviewed one by one then the total number of ways of arranging their interviews such that no wife gives an interview before her husband is?</p> </blockquote> <p>I tried using the string method, but that only counts the cases where the wife speaks just after her husband.</p> <p>There are too many elements to take care of, unlike a similar question which involved only two people, so by symmetry the answer is half of the total number of ways of arranging those <span class="math-container">$n$</span> people.</p> <p>Then how is this one solved?</p>
Bill Dubuque
242
<p><a href="https://math.stackexchange.com/q/87383/242">Recall</a> a simple form (with simple proof) of LTE = Lifting The Exponent is</p> <p><span class="math-container">$$a\equiv b\!\!\! \pmod{kn}\,\Rightarrow\, a^k\equiv b^k\!\!\!\! \pmod{k^2n}$$</span></p> <p>Applied inductively for <span class="math-container">$\,k=3\,$</span> (i.e. repeatedly cubing) yields the claim as follows <span class="math-container">$$\begin{align} &amp;10^{\large 3^\color{#c00}0}\! \equiv 1\!\!\! \pmod{3^{2+\color{#c00}0}}\\ \Rightarrow\ &amp;10^{\large 3^\color{#c00}1}\! \equiv 1\!\!\! \pmod{3^{2+\color{#c00}1}}\ \ \rm by\ \ LTE\\ \Rightarrow\ &amp;10^{\large 3^\color{#c00}2}\! \equiv 1\!\!\! \pmod{3^{2+\color{#c00}2}}\ \ \rm by\ \ LTE\\ &amp;\ \ \ \ \ \ \ \ \ \vdots\\ \Rightarrow\ &amp;10^{\large 3^\color{#c00}k}\! \equiv 1\!\!\! \pmod{3^{2+\color{#c00}k}}\ \ \rm by\ \ LTE\\[.1em] \Rightarrow\ &amp;9\cdot 3^k\mid 10^{3^k}\!\!-1\\[.1em] \Rightarrow\ &amp; 9\cdot n\ \mid 10^n-1 \end{align}\qquad$$</span></p>
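Both the LTE step and the final divisibility are easy to spot-check in Python with modular exponentiation:

```python
# Spot-check the LTE step: a ≡ b (mod k*n)  =>  a^k ≡ b^k (mod k^2 * n).
for k in [2, 3, 5]:
    for n in [1, 4, 7]:
        for b in range(1, 12):
            a = b + k * n                  # any a congruent to b mod k*n
            assert pow(a, k, k * k * n) == pow(b, k, k * k * n)

# Conclusion: 10^(3^k) ≡ 1 (mod 3^(k+2)), i.e. 9 * 3^k divides 10^(3^k) - 1.
for k in range(10):
    assert pow(10, 3 ** k, 9 * 3 ** k) == 1
```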
4,071,619
<blockquote> <p>There are two German couples, two Japanese couples and one unmarried person. If all 9 persons are to be interviewed one by one then the total number of ways of arranging their interviews such that no wife gives an interview before her husband is?</p> </blockquote> <p>I tried using the string method, but that only counts the cases where the wife speaks just after her husband.</p> <p>There are too many elements to take care of, unlike a similar question which involved only two people, so by symmetry the answer is half of the total number of ways of arranging those <span class="math-container">$n$</span> people.</p> <p>Then how is this one solved?</p>
cansomeonehelpmeout
413,677
<p>The statement is equivalent to proving <span class="math-container">$$10^{3^k}-1\equiv_{3^{k+2}}0$$</span> Notice that <span class="math-container">$\phi(3^{k+2})=3^{k+2}-3^{k+1}=2\cdot 3^{k+1}$</span>, so that for a unit <span class="math-container">$a$</span> we have <span class="math-container">$$a^{2\cdot 3^{k+1}}\equiv_{3^{k+2}}1$$</span> We also have</p> <blockquote> <p><strong>Claim</strong>: <span class="math-container">$$10^{3^k}\equiv_{3^{k+2}}10^{3^{k+1}}$$</span></p> </blockquote> <p><em>Proof</em>: <span class="math-container">$$10^{3^k}\equiv_{3^{k+2}}10^{3^{k+1}-2\cdot 3^{k}}\equiv_{3^{k+2}}10^{3^{k+1}}$$</span></p> <p>Let <span class="math-container">$t=10^{3^k}$</span>, then the above claim may be written as <span class="math-container">$$t^3\equiv_{3^{k+2}}t\iff t(t+1)(t-1)\equiv_{3^{k+2}}0$$</span> Both <span class="math-container">$t,t+1$</span> are units since <span class="math-container">$t,t+1\not\equiv_{3}0$</span>, therefore we must have <span class="math-container">$t-1\equiv_{3^{k+2}}0$</span>.</p>
2,558,870
<p>Suppose $f:[0,1]\to \mathbb{R}$ is uniformly continuous, and $(p_n)_{n\in\mathbb{N}}$ is a sequence of polynomial functions converging uniformly to $f$.</p> <p>Does it follow that $\mathcal{F}=\{p_n\mid n\in\mathbb{N}\}\cup \{f\}$ is equicontinuous?</p> <p>Also, if $C_n$ are the Lipschitz constants of the polynomials $p_n$, does it follow that $C_n&lt;\infty$ for all $n$, and $\lim_{n\to\infty} C_n=\infty?$</p> <p>I'm preparing for a test, but I'm not sure how to go about answering these two questions. Any hints or tips as to what to look for would be appreciated. </p>
Karn Watcharasupat
501,685
<p>Actually this is a combination of a few different results that are more easily proven separately but just for this I will prove everything in one go.</p> <p>Suppose $e_i$ is an eigenvector of $(A - pI)^{-1}$ then \begin{align} (A - pI)^{-1}e_i&amp;=(q_i - p)^{-1}e_i\\ (A - pI)(A - pI)^{-1}e_i&amp;=(A - pI)(q_i - p)^{-1}e_i\\ (q_i - p)e_i&amp;=(A - pI)e_i\\ q_ie_i - pe_i&amp;=Ae_i - pe_i\\ Ae_i&amp;=q_ie_i\\ \end{align}</p> <p>Clearly, $e_i$ is an eigenvector of $A$ with corresponding eigenvalue $q_i$.</p> <hr> <p><strong>The results I used in this derivation:</strong></p> <blockquote> <p>If $\lambda$ is an eigenvalue of $M$ with corresponding eigenvector $v$, then $\lambda^{-1}$ is an eigenvalue of $M^{-1}$ with corresponding eigenvector $v$, provided $M$ is invertible.</p> </blockquote> <p>\begin{align} Mv&amp;=\lambda v\\ M^{-1}Mv&amp;=M^{-1}\lambda v\\ Iv&amp;=\lambda M^{-1}v\\ \lambda^{-1}v&amp;=M^{-1}v \end{align}</p> <blockquote> <p>If $\lambda$ is an eigenvalue of $M$ with corresponding eigenvector $v$, then $\lambda+k$ is an eigenvalue of $M+kI$ with corresponding eigenvector $v$.</p> </blockquote> <p>\begin{align} Mv&amp;=\lambda v\\ (M+kI)v&amp;=Mv+kIv\\ &amp;=\lambda v+kv\\ &amp;=(\lambda+k)v \end{align}</p> <p><strong>Bonus:</strong></p> <blockquote> <p>If $\lambda$ is an eigenvalue of $M$ with corresponding eigenvector $v$, then $n\lambda$ is an eigenvalue of $nM$ with corresponding eigenvector $v$.</p> </blockquote> <p>\begin{align} Mv&amp;=\lambda v\\ nMv&amp;=n\lambda v\\ \end{align}</p>
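A concrete $2\times 2$ numerical check of the shift-invert relation (the matrix, shift, and helper functions are made up for illustration):

```python
# For A = [[2,1],[1,2]] (eigenpairs q=3, v=(1,1) and q=1, v=(1,-1))
# and shift p = 0.5, check (A - pI)^{-1} v = (q - p)^{-1} v.
def matvec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

def inv2(M):
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[ M[1][1] / det, -M[0][1] / det],
            [-M[1][0] / det,  M[0][0] / det]]

A, p = [[2.0, 1.0], [1.0, 2.0]], 0.5
B = inv2([[A[0][0] - p, A[0][1]], [A[1][0], A[1][1] - p]])   # (A - pI)^{-1}

for v, q in [([1.0, 1.0], 3.0), ([1.0, -1.0], 1.0)]:
    got = matvec(B, v)
    want = [vi / (q - p) for vi in v]
    assert all(abs(g - w) < 1e-12 for g, w in zip(got, want))
```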
2,970,370
<p>For <span class="math-container">$f \in C^0([0,1])$</span>, I have the following differential equation:</p> <p><span class="math-container">$$u''(x) = f(x)$$</span> in <span class="math-container">$\Omega = (0,1)$</span> <span class="math-container">$$u'(0) = u'(1) = 0$$</span></p> <p>Why is this equation not well posed in <span class="math-container">$H^1(\Omega)$</span>?</p> <p>--&gt; Is it because of the missing boundary conditions? </p>
Giuseppe Negro
8,157
<p>That problem can be interpreted in a <em>weak formulation</em>, that is, a <em>solution</em> to that equation is a <span class="math-container">$u\in H^1(\Omega)$</span> such that <span class="math-container">$$ \int_\Omega - u'\phi' =\int_{\Omega} f\phi,\qquad \forall \phi \in H^1(\Omega).$$</span> Thus, there are no regularity problems. The lack of well-posedness comes from a much more trivial issue; there is no uniqueness, because if <span class="math-container">$u$</span> is a solution, then so is <span class="math-container">$u+C$</span> for any constant <span class="math-container">$C\in\mathbb R$</span>.</p> <p>See <a href="https://www.springer.com/gp/book/9780387709130" rel="nofollow noreferrer">the book of Brezis</a>, section 8.4.</p>
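The same non-uniqueness shows up after discretization: the standard finite-difference Neumann Laplacian annihilates constant vectors. A small illustrative Python sketch (grid size and scaling chosen arbitrarily):

```python
# Finite-difference 1D Neumann Laplacian on m points (up to a 1/h^2 factor):
# every row sums to zero, so applying it to a constant vector gives zero,
# the discrete analogue of "u and u + C solve the same problem".
m = 8
A = [[0.0] * m for _ in range(m)]
for i in range(m):
    A[i][i] = -2.0
    if i > 0:
        A[i][i - 1] = 1.0
    if i < m - 1:
        A[i][i + 1] = 1.0
A[0][0] = A[m - 1][m - 1] = -1.0        # Neumann ends: u'(0) = u'(1) = 0

ones = [1.0] * m
Au = [sum(A[i][j] * ones[j] for j in range(m)) for i in range(m)]
assert all(x == 0.0 for x in Au)
```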
1,116,496
<blockquote> <p>Let $H$ be a Hilbert space with a countable basis $B$. Does it mean that any vector $x\in H$ can be expressed as a <strong>finite</strong> linear combination of elements from $x$, or as an <strong>infinite</strong> linear combination?</p> </blockquote> <p>Thanks in advance</p>
Math1000
38,584
<p>Take $\ell^2$ for example, i.e. the square-summable sequences of complex numbers with inner product $$\langle x,y\rangle = \sum_{n=1}^\infty x_n\overline{y_n}. $$ This has the countable orthonormal basis $$\{(1,0,0,\ldots), (0,1,0,\ldots), (0,0,1,0,\ldots),\ldots\}. $$ As $$\sum_{n=1}^\infty 2^{-n} = 1&lt;\infty, $$ we have $$(2^{-1}, 2^{-2}, 2^{-3},\ldots)\in\ell^2, $$ and it is clear that this element cannot be written as a finite linear combination of the basis elements.</p>
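A numeric illustration of why no finite linear combination works: the $\ell^2$ distance from $x=(2^{-1},2^{-2},\dots)$ to its $N$-term truncation is $\sqrt{4^{-N}/3}$, positive for every $N$ (the tail length below is an arbitrary numerical cutoff):

```python
# The squared l^2 distance between x and its truncation to the first N
# coordinates is sum_{n>N} 4^-n = 4^-N / 3 > 0 for all N.
for N in range(0, 30):
    tail = sum(4.0 ** -n for n in range(N + 1, N + 200))   # numeric tail
    exact = 4.0 ** -N / 3.0
    assert abs(tail - exact) < 1e-12 * exact
    assert exact > 0.0
```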
982,780
<p>I have the following system of <span class="math-container">$M$</span> linear equations in <span class="math-container">$N$</span> unknowns.</p> <p><span class="math-container">$$ \begin{bmatrix} 3 &amp; 0 &amp; 1 &amp; 0 &amp; -1 &amp; -3 &amp; 2\\ 1 &amp; 2 &amp; 0 &amp; 4 &amp; 0 &amp; 0 &amp; -1\\ 1 &amp; 1 &amp; 0 &amp; 0 &amp; -1 &amp; -1 &amp; -2\\ 0 &amp; 0 &amp; 1 &amp; 0 &amp; -3 &amp; -1 &amp; 1 \\ \end{bmatrix} \begin{bmatrix} x_{1}\\ x_{2}\\ x_{3}\\ x_{4} \\ x_{5} \\ x_{6} \\ x_{7} \\ \end{bmatrix} = \begin{bmatrix} 1\\ 0\\ 0\\ -1\\ \end{bmatrix}$$</span></p> <p>Is there an algorithm for finding solutions of these equations with <span class="math-container">${x_{i} \ge 0}$</span>?</p> <p><strong>Comment</strong>: I just want $x_i \ge 0$.</p> <p>The augmented matrix can be row-reduced to</p> <p><span class="math-container">$$ \begin{bmatrix} 1 &amp; 0 &amp; 0 &amp; 0 &amp; 2/3 &amp; -2/3 &amp; 1/3 &amp; 2/3\\ 0 &amp; 1 &amp; 0 &amp; 0 &amp; -5/3 &amp; -1/3 &amp; -7/3 &amp; -2/3 \\ 0 &amp; 0 &amp; 1 &amp; 0 &amp; -3 &amp; -1 &amp; 1 &amp; -1 \\ 0 &amp; 0 &amp; 0 &amp; 1 &amp; 2/3 &amp; 1/3 &amp; 5/6 &amp; 1/6 \\ \end{bmatrix} $$</span></p>
Tomasz Kania
17,929
<p>I'd use the fact that $F\subset X$ is closed if and only if for each strictly increasing (possibly transfinite) sequence $(x_\alpha)_{\alpha&lt;\lambda}$ with entries in $F$ we have</p> <p>$$\big( \lim_{\alpha&lt;\lambda} x_\alpha = \big) \sup_{\alpha&lt;\lambda}x_\alpha\in F.$$</p> <p>Now, if $F$ is closed in $X$, by the above criterion $F$ has a greatest element. It is well-ordered itself so there is an increasing bijection $\Theta$ from $F$ onto some ordinal which itself has a greatest element. But for such an interval you've already proved the theorem, and $\Theta$ is also a homeomorphism.</p>
2,569,557
<p>I'm still confused by the use of &nbsp;$\Rightarrow$&nbsp; in (ε,δ)-definition of limit. <br/> Take for example the definition of $\underset{x\rightarrow x_{0}}{\lim}f\left(x\right)=l$ :<br/></p> <blockquote> <p>$$\forall\varepsilon&gt;0,\;\exists\delta&gt;0\quad\mathrm{such\:that\quad}\forall x\in\mathrm{dom}\,f,\;0&lt;\left|x-x_{0}\right|&lt;\delta\;\Rightarrow\;\left|f\left(x\right)-l\right|&lt;\varepsilon$$</p> </blockquote> <p>My questions are:</p> <p>Why is $\left|f\left(x\right)-l\right|&lt;\varepsilon$ not a sufficient condition for $0&lt;\left|x-x_{0}\right|&lt;\delta\;$? <br/></p> <p>Or, stated in another way, shouldn't $\left|f\left(x\right)-l\right|&lt;\varepsilon\;\Rightarrow\;0&lt;\left|x-x_{0}\right|&lt;\delta\;$ also be true ? If $f\left(x\right)$ becomes arbitrarily close to $l$, doesn't $x$ becomes arbitrarily close to $x_0$?</p>
hmakholm left over Monica
14,366
<p>Changing the definition in that way would mean that a <em>constant function</em> cannot have a limit, for example.</p> <p>Or as a less trivial example, consider for example $\lim\limits_{x\to 1}\frac1x$. Intuitively this ought to be $1$, but with your addition to the definition the limit would not exist. Namely, if I choose $\varepsilon=2$ then you can't find any $\delta$ such that $|\frac1x-1|&lt;2$ is only true when $|x-x_0|&lt;\delta$.</p>
2,569,557
<p>I'm still confused by the use of &nbsp;$\Rightarrow$&nbsp; in (ε,δ)-definition of limit. <br/> Take for example the definition of $\underset{x\rightarrow x_{0}}{\lim}f\left(x\right)=l$ :<br/></p> <blockquote> <p>$$\forall\varepsilon&gt;0,\;\exists\delta&gt;0\quad\mathrm{such\:that\quad}\forall x\in\mathrm{dom}\,f,\;0&lt;\left|x-x_{0}\right|&lt;\delta\;\Rightarrow\;\left|f\left(x\right)-l\right|&lt;\varepsilon$$</p> </blockquote> <p>My questions are:</p> <p>Why is $\left|f\left(x\right)-l\right|&lt;\varepsilon$ not a sufficient condition for $0&lt;\left|x-x_{0}\right|&lt;\delta\;$? <br/></p> <p>Or, stated in another way, shouldn't $\left|f\left(x\right)-l\right|&lt;\varepsilon\;\Rightarrow\;0&lt;\left|x-x_{0}\right|&lt;\delta\;$ also be true ? If $f\left(x\right)$ becomes arbitrarily close to $l$, doesn't $x$ becomes arbitrarily close to $x_0$?</p>
Ennar
122,131
<p>Already given example of constant function should (in my opinion) be enough to shoot the whole idea down in a blazing glory, but parabola might be more convincing visually:</p> <p><a href="https://i.stack.imgur.com/clerf.png" rel="noreferrer"><img src="https://i.stack.imgur.com/clerf.png" alt="enter image description here"></a></p> <p>As we can see, $\lim_{x\to -2} x^2 = \lim_{x\to 2} x^2 = 4$, and when we are getting close to the limit $4$ on the $y$-axis, it could be that we are either close to $2$ or $-2$, but most definitely we can't get close to both at the same time.</p>
3,995,492
<p>I have no clue how to do this. I managed to get that <span class="math-container">$11^{36} \equiv 1 \pmod{13}$</span>, but I can't get anywhere from there.</p>
Joffan
206,402
<p>The exercise here is to calculate the <a href="https://en.wikipedia.org/wiki/Multiplicative_inverse" rel="nofollow noreferrer">multiplicative inverse</a> of <span class="math-container">$11$</span>, written <span class="math-container">$11^{-1}$</span>, in <span class="math-container">$\bmod 13$</span> arithmetic. Then we can get that since <span class="math-container">$11^{36}\equiv 1$</span>, we will have <span class="math-container">$11^{35}\equiv 11^{36}\cdot 11^{-1}\equiv 11^{-1}\bmod 13$</span></p> <p>Now <span class="math-container">$13$</span> and <span class="math-container">$11$</span> are coprime (of course, since <span class="math-container">$13$</span> is prime) so it's guaranteed that there is a value <span class="math-container">$a$</span> such that <span class="math-container">$11a \equiv 1 \bmod 13$</span>. For a small number like <span class="math-container">$13$</span>, it's not too hard to find this by guessing, but the <a href="https://en.wikipedia.org/wiki/Extended_Euclidean_algorithm" rel="nofollow noreferrer">extended Euclidean algorithm</a> will find the value rapidly for less straightforward cases.</p> <p>To show the logic of this algorithm, we find progressively smaller results which we can make with linear combinations of <span class="math-container">$13$</span> &amp; <span class="math-container">$11$</span> until we make <span class="math-container">$1$</span>: <span class="math-container">\begin{align}1\cdot 13 + 0\cdot 11 &amp;= 13 \tag{setup: 1}\\ 0\cdot 13 + 1\cdot 11 &amp;= 11 \tag{setup: 2}\\ 1\cdot 13 + -1\cdot 11 &amp;= 2 \tag{(1)-(2): 3 }\\ -5\cdot 13 + 6\cdot 11 &amp;= 1 \tag{(2)-5(3): result }\\\end{align}</span> gives us that <span class="math-container">$6\cdot 11 \equiv 1$</span> so <span class="math-container">$11^{-1}\equiv 6 \bmod 13$</span>.</p>
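The same computation can be done mechanically; below is a generic extended-Euclidean sketch in Python (function name and layout are my own, not from the answer):

```python
def ext_gcd(a, b):
    # Returns (g, x, y) with a*x + b*y == g == gcd(a, b).
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

g, x, y = ext_gcd(13, 11)
assert g == 1 and 13 * x + 11 * y == 1      # here x = -5, y = 6
inv = y % 13                                 # 11^{-1} ≡ 6 (mod 13)
assert (11 * inv) % 13 == 1
assert pow(11, 35, 13) == inv                # so 11^35 ≡ 6 (mod 13)
```

In Python 3.8+ the built-in `pow(11, -1, 13)` computes the same inverse directly.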
214,766
<p>Is there an efficient way to check a number x and remove all prime factors in the number which are less than some n? For example for n = 200:</p> <pre><code>x=88984589931961415442566827779929187431222364934742868664124547963532933 FactorInteger[x] {{29, 2}, {31, 1}, {37, 2}, {269, 1}, {271, 1}, {34200471605536976187361939984030218061598132568100785528233, 1}} </code></pre> <p>After removing all prime factors &lt; n from x gives:</p> <pre><code>2493180179572040027082498062895818866472442266081979164222657467 </code></pre> <p>I'd like to use as large n as possible and then use PrimeQ to check the remaining number, which is faster than checking for large prime factors.</p> <p>I made this code which works but may be slow:</p> <pre><code>x=2*53*6571*18313*31259 n=20000; n=PrimePi[n]; listWithSmallPrimeFactorsRemoved={}; AppendTo[listWithSmallPrimeFactorsRemoved,x]; For[i=1,i&lt;=n,i++, z=Last[listWithSmallPrimeFactorsRemoved]; a=IntegerExponent[z,Prime[i]]; z=z/(Prime[i]^a); AppendTo[listWithSmallPrimeFactorsRemoved,z]; ] CountDistinct[listWithSmallPrimeFactorsRemoved]-1 (*count of how many prime factors were removed*) Last[listWithSmallPrimeFactorsRemoved] (*the remaining number after removing prime factors \[LessEqual] n*) </code></pre> <p>cheers, Jamie</p>
Fraccalo
40,354
<p>It's still not fully clear what exactly you are looking for, but this is a piece of code that might help you (it gives you the first set of lists you give in your question). It can easily be readapted for the other cases:</p> <pre><code>list = {{1, {0}}, {2, {0}}, {3, {-2, 0, 2}}, {4, {-2, 0, 2}}, {5, {-2,0, 2}}}; ind = DeleteDuplicates@Flatten[list[[;; , 2]]] </code></pre> <blockquote> <p>{0, -2, 2}</p> </blockquote> <pre><code>{list1, list2, list3} = Cases[list, {a_, b_} /; MemberQ[b, #] -&gt; {a, #}] &amp; /@ ind </code></pre> <blockquote> <p>{{{1, 0}, {2, 0}, {3, 0}, {4, 0}, {5, 0}}, {{3, -2}, {4, -2}, {5, -2}}, {{3, 2}, {4, 2}, {5, 2}}}</p> </blockquote>
214,766
<p>Is there an efficient way to check a number x and remove all prime factors in the number which are less than some n? For example for n = 200:</p> <pre><code>x=88984589931961415442566827779929187431222364934742868664124547963532933 FactorInteger[x] {{29, 2}, {31, 1}, {37, 2}, {269, 1}, {271, 1}, {34200471605536976187361939984030218061598132568100785528233, 1}} </code></pre> <p>After removing all prime factors &lt; n from x gives:</p> <pre><code>2493180179572040027082498062895818866472442266081979164222657467 </code></pre> <p>I'd like to use as large n as possible and then use PrimeQ to check the remaining number, which is faster than checking for large prime factors.</p> <p>I made this code which works but may be slow:</p> <pre><code>x=2*53*6571*18313*31259 n=20000; n=PrimePi[n]; listWithSmallPrimeFactorsRemoved={}; AppendTo[listWithSmallPrimeFactorsRemoved,x]; For[i=1,i&lt;=n,i++, z=Last[listWithSmallPrimeFactorsRemoved]; a=IntegerExponent[z,Prime[i]]; z=z/(Prime[i]^a); AppendTo[listWithSmallPrimeFactorsRemoved,z]; ] CountDistinct[listWithSmallPrimeFactorsRemoved]-1 (*count of how many prime factors were removed*) Last[listWithSmallPrimeFactorsRemoved] (*the remaining number after removing prime factors \[LessEqual] n*) </code></pre> <p>cheers, Jamie</p>
Alucard
18,859
<pre><code>List@@Flatten/@ (list /. {a_, {b_, 0, d_}} -&gt; { a, 0} ) List@@Flatten/@ (list /. {a_, {b_, 0, d_}} -&gt; { a, b} ) List@@Flatten/@ (list /. {a_, {b_, 0, d_}} -&gt; { a, d} ) </code></pre>
3,738,579
<blockquote> <p>What is the cardinality of set <span class="math-container">$\big\{(x,y,z)\mid x^2+y^2+z^2= 2^{2018}, xyz\in\mathbb{Z} \big\}$</span>?</p> </blockquote> <p>Since I have very limited knowledge in number theory, I tried using logarithms and then manipulating the equation so that we get <span class="math-container">$$10^{2018}+2=x^2+y^2+z^2.$$</span> Then setting one of <span class="math-container">$x,y,z$</span> equal to <span class="math-container">$\sqrt{2}$</span> we find all values of <span class="math-container">$x$</span> and <span class="math-container">$y$</span> where <span class="math-container">$$2x^2+y^2=10^{2018}.$$</span> Finally we use combinatorics to get the required answer. However this led to no-where.</p> <p>What is the correct way to solve this problem?</p>
Alan
696,271
<p>Note that <span class="math-container">$2^{2n+1}=2\cdot 2^{2n}=2\cdot 4^n$</span>. Now, <span class="math-container">$$\frac{2}{3}(4^n-1)+2^{2n+1}=\frac{2}{3}(4^n-1)+2\cdot 4^n =\frac{8}{3}\cdot 4^n -\frac{2}{3} = \frac{2}{3}(4^{n+1}-1)$$</span></p>
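A quick exact-arithmetic check of the identity (note that $3 \mid 4^n - 1$, so the integer divisions below are exact):

```python
# Verify (2/3)(4^n - 1) + 2^(2n+1) == (2/3)(4^(n+1) - 1) for many n.
for n in range(0, 50):
    assert (4 ** n - 1) % 3 == 0            # the division by 3 is exact
    lhs = 2 * (4 ** n - 1) // 3 + 2 ** (2 * n + 1)
    rhs = 2 * (4 ** (n + 1) - 1) // 3
    assert lhs == rhs
```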