| qid | question | author | author_id | answer |
|---|---|---|---|---|
971,997 | <p>How do we find
$$\Re\left[\int_0^{\large\frac{\pi}{2}} e^{\Large e^{i\theta}}d\theta\right]$$</p>
<p>In the shortest and easiest possible manner?</p>
<p>I cannot think of anything good.</p>
| Semiclassical | 137,524 | <p>An expression in Bessel functions for a generalization of this integral can be developed using the Jacobi-Anger identities $$e^{z \cos \theta}=\sum_{n=-\infty}^\infty I_n(z) e^{i n \theta},\quad e^{i z \sin \theta}=\sum_{n=-\infty}^\infty J_n(z)e^{i n \theta}.$$</p>
<p>Then $e^{z e^{i \theta}}=e^{z\cos \theta}e^{z i \sin \theta}=\sum_{n,m}I_n(z)J_m(z)e^{i(n+m)\theta}$, so if $z\in \mathbb{R}$ then</p>
<p>\begin{align}
\Re\int_0^{\pi/2}e^{z e^{i \theta}}\,d\theta
&=\Re \sum_{n+m\neq 0}\frac{I_n(z)J_m(z)}{n+m}\left[i^{n+m-1}+i\right]+\Re \sum_{n}\frac{\pi}{2}I_n(z)J_{-n}(z)\\
&=\sum_{n+m\text{ odd}}\frac{I_n(z)J_m(z)}{n+m}(-1)^{(n+m-1)/2}+\frac{\pi}{2}\sum_{n}(-1)^{n}I_n(z)J_n(z)
\end{align}</p>
<p>Taking $z=1$ recovers the case of interest. I'll see if I can cross-check these results with known properties of Bessel functions to relate this to the simple exponential-integral results found above.</p>
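<p>As a quick numerical cross-check (a sketch, assuming SciPy's <code>iv</code>/<code>jv</code> Bessel functions), one can compare the direct integral at <span class="math-container">$z=1$</span> with a truncated version of the double sum:</p>

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import iv, jv  # modified / ordinary Bessel functions I_n, J_n

z = 1.0

# Direct evaluation: Re e^{z e^{i t}} = e^{z cos t} cos(z sin t).
direct, _ = quad(lambda t: np.exp(z * np.cos(t)) * np.cos(z * np.sin(t)),
                 0.0, np.pi / 2)

# Truncated double sum: sum_{n,m} I_n(z) J_m(z) * Re int_0^{pi/2} e^{i(n+m)t} dt.
N = 15
total = 0.0
for n in range(-N, N + 1):
    for m in range(-N, N + 1):
        if n + m == 0:
            piece = np.pi / 2
        else:
            piece = ((1j ** (n + m) - 1) / (1j * (n + m))).real
        total += iv(n, z) * jv(m, z) * piece

assert abs(direct - total) < 1e-8
```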
|
2,174 | <p>As a teenager I was given this problem, which took me a few years to solve. I'd like to know if it has ever been published. When I presented my solution, I was told that it was similar to one of several the problem's poser had seen.</p>
<p>The problem:</p>
<p>For an <span class="math-container">$n$</span>-dimensional space, develop a formula for the maximum number of <span class="math-container">$n$</span>-dimensional regions into which the space is divided by <span class="math-container">$k$</span> <span class="math-container">$(n-1)$</span>-dimensional (hyper)planes.</p>
<p>Example: a line is partitioned by points: <span class="math-container">$1$</span> point, <span class="math-container">$2$</span> line segments; <span class="math-container">$10$</span> points, <span class="math-container">$11$</span> line segments; and so on.</p>
| Robin Chapman | 226 | <p>I presume you are asking for the number of regions into which $n$-space
is divided by $k$ hyperplanes "in general position".
See the notes of Richard Stanley on <a href="http://math.mit.edu/~rstan/arrangements/arr.html" rel="nofollow">hyperplane arrangements</a>.
The answer to your problem is Proposition 2.4, and there are lots
more goodies too!</p>
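<p>(If I am reading Proposition 2.4 correctly, the closed form is <span class="math-container">$\sum_{i=0}^{n}\binom{k}{i}$</span>; a quick sketch checking it against the one-dimensional example from the question:)</p>

```python
from math import comb

def regions(n, k):
    """Maximum number of regions of n-space cut by k hyperplanes in general position."""
    return sum(comb(k, i) for i in range(n + 1))

# n = 1: k points cut a line into k + 1 segments.
assert regions(1, 1) == 2
assert regions(1, 10) == 11
# n = 2: the "lazy caterer" numbers 1, 2, 4, 7, 11, ...
assert [regions(2, k) for k in range(5)] == [1, 2, 4, 7, 11]
```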
|
2,610,048 | <p>OK, so I'm trying to prove the statement in the header. I have read the following discussion of it, but I can't seem to follow it all the way through:</p>
<p><a href="https://math.stackexchange.com/questions/1198735/proving-sum-k-1n-k-k-n1-1/1198743#1198743?newreg=abbcf872d9904cbfa76cd161c4fecdd0">Proving $\sum_{k=1}^n k k!=(n+1)!-1$</a></p>
<p>I like mfl's answer, but I get hung up on the last step. They say: </p>
<p>and we need to show</p>
<blockquote>
<p>$$\sum_{k=1}^{n+1} kk!=(n+2)!−1.$$</p>
</blockquote>
<p>Just write</p>
<blockquote>
<p>$$\sum_{k=1}^{n+1} kk!=\sum_{k=1}^n kk! + (n+1)(n+1)!$$</p>
</blockquote>
<p>How do they get from the first step stated above, to the following step? I'm stuck.</p>
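<p>(To convince myself numerically, assuming I have the statement right, here is a quick check of both the split and the closed form:)</p>

```python
from math import factorial

def s(n):
    """Left-hand side: sum of k * k! for k = 1..n."""
    return sum(k * factorial(k) for k in range(1, n + 1))

for n in range(1, 10):
    # the step in question: split off the last term of the (n+1)-term sum
    assert s(n + 1) == s(n) + (n + 1) * factorial(n + 1)
    # the identity being proved
    assert s(n) == factorial(n + 1) - 1
```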
| S. Gates | 519,648 | <p>You have 2 pairs of dice (4 dice) and you roll the dice once (1) and 3 times (2), and are looking for snake eyes (two 1’s). What is the probability (P) of rolling snake eyes exclusively (3) and inclusively (4)? Note that there will be 4 answers at the end.</p>
<p>Answer: (can this be explained step by step and not just plugged into an equation?)
First of all, the rolling of four dice can be modeled as xxxx, where each x is a base-6 digit (i.e. the faces are numbered 1 to 6) and the four x’s represent four columns. The possible combinations are 6^4 or 1296 possibilities (to find the number of possibilities you take base^(# of columns)).</p>
<p>P(no 1’s) + P(one 1) + P(two 1’s) + P(three 1’s) + P(four 1’s) = 1 ; (All the possibilities mean P = 1)</p>
<p>P(no 1’s) = (cases without 1’s)/(# of possibilities) = (5/6)^4 = 625/1296 ; the P of not rolling a 1 on one die is 5/6, on two dice (5/6)*(5/6), on four dice (5/6)^4. P(four 1’s) = 1/1296 (“1111” only occurs once). Although we eliminated the cases without 1’s, we can still count the 1’s in the whole system: there are 1296/6 in the leftmost column (the 6^3 column), 1296/6 in the next column (6^2), 1296/6 in the third column (6^1) and 1296/6 in the last column (6^0), since each column treats the digits of a number system equally. So we have (1296/6)*4 = 864 1’s total. We have only identified P(four 1’s) so far, which had 1 case, which means it used up 4 1’s and we have 860 left. </p>
<p>P(one 1) = (4C1)*(5^3) = (4!/1!3!)(125) = 4(125) = 500 (there are 500 cases with only one 1). For the case where you have “one 1” there are (4C1) combinations that can occur. (4C1) = 4 which are 1xxx, x1xx, xx1x, and xxx1. Since all columns have the same quantity of 1’s and x cannot be a “1” there are 3 columns of x’s with 5 choices (2,3,4,5,6) (you cannot use 1) so we multiply 4 by 5^3.</p>
<p>P(two 1’s) = (4C2)*(5^2) = (4!/2!2!)(25) = 6(25) = 150 (there are 150 cases with only two 1’s). For the case where you have “two 1’s” there are (4C2) combinations that can occur. (4C2) = 6 which are 11xx, x11x, xx11, 1x1x, 1xx1, and x1x1. Since all columns have the same quantity of 1’s and x cannot be a “1” there are 2 columns of x’s with 5 choices (2,3,4,5,6) (you cannot use 1) so we multiply 6 by 5^2.</p>
<p>P(three 1’s) = (4C3)*(5^1) = (4!/3!1!)(5) = 4(5) = 20 (there are 20 cases with only three 1’s). For the case where you have “three 1’s” there are (4C3) combinations that can occur. (4C3) = 4 which are 111x, x111, 1x11, and 11x1. Since all columns have the same quantity of 1’s and x cannot be a “1” there is 1 column of x’s with 5 choices (2,3,4,5,6) (you cannot use 1) so we multiply 4 by 5^1.</p>
<p>P(no 1’s) + P(one 1) + P(two 1’s) + P(three 1’s) + P(four 1’s) = 1 ; (All the possibilities mean P = 1)
625/1296 +500/1296 +150/1296 +20/1296 +1/1296 =1 (works!) 500(1) + 150(2) +20(3) +1(4) = 864 (the number of 1’s) (works!)</p>
<ol>
<li>Answer: P(two 1’s) = 150/1296 = 11.57%</li>
<li>Answer: P(two 1’s) + P(three 1’s) + P(four 1’s) = 171/1296 = 13.19%
If we now roll the 4 dice 3 times, what is the probability of getting snake eyes? We are not looking for the probability of all three rolls succeeding (P(A)^3) but for at least one to occur, which is the complement of nothing occurring (1 – (P(A)’)^3).</li>
<li>Answer: (1 – (1 - 150/1296)^3) = 0.3086 = 30.86%</li>
<li>Answer (1 – (1 - 171/1296)^3) = 0.3459 = 34.59%</li>
</ol>
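<p>(A brute-force check of the counts above, enumerating all 6^4 outcomes; this is a verification sketch, not part of the derivation.)</p>

```python
from itertools import product

rolls = list(product(range(1, 7), repeat=4))  # all outcomes of four dice
assert len(rolls) == 1296

# number of outcomes with exactly k ones, k = 0..4
exactly = [sum(1 for r in rolls if r.count(1) == k) for k in range(5)]
assert exactly == [625, 500, 150, 20, 1]

p_exclusive = exactly[2] / 1296            # exactly two 1's on one roll
p_inclusive = sum(exactly[2:]) / 1296      # at least two 1's on one roll
assert abs(1 - (1 - p_exclusive) ** 3 - 0.3086) < 1e-3   # answer (3)
assert abs(1 - (1 - p_inclusive) ** 3 - 0.3459) < 1e-3   # answer (4)
```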
|
1,255,629 | <p>Show that </p>
<p>$$\sin\left(\frac\pi3(x-2)\right)$$ </p>
<p>is equal to </p>
<p>$$\cos\left(\frac\pi3(x-7/2)\right)$$</p>
<p>I know that $\cos(x + \frac\pi2) = -\sin(x)$, but I'm not sure how I can apply it to this question.</p>
| egreg | 62,967 | <p>The equality $\sin \alpha=\cos\beta$ can be written
$$
\cos(\pi/2-\alpha)=\cos\beta
$$
which is satisfied when either
$$
\beta=\frac{\pi}{2}-\alpha+2k\pi \qquad(k \text{ integer})
$$
or
$$
\beta=\alpha-\frac{\pi}{2}+2k\pi \qquad(k \text{ integer})
$$
These can be rewritten respectively as
$$
\alpha+\beta=\frac{\pi}{2}+2k\pi \qquad(k \text{ integer})
$$
or
$$
\alpha-\beta=\frac{\pi}{2}+2k\pi \qquad(k \text{ integer})
$$
Now try with $\alpha=(\pi/3)(x-2)$ and $\beta=(\pi/3)(x-7/2)$; is one of the two equalities true?</p>
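<p>(A quick symbolic check of the second equality, e.g. with SymPy:)</p>

```python
import sympy as sp

x = sp.symbols('x')
alpha = sp.pi / 3 * (x - 2)
beta = sp.pi / 3 * (x - sp.Rational(7, 2))

# alpha - beta = (pi/3)(3/2) = pi/2, i.e. the second equality holds with k = 0
assert sp.simplify(alpha - beta) == sp.pi / 2

# spot-check sin(alpha) = cos(beta) at an arbitrary point
diff = (sp.sin(alpha) - sp.cos(beta)).subs(x, sp.Float('1.234'))
assert abs(float(diff)) < 1e-12
```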
|
83,648 | <p><a href="http://reference.wolfram.com/language/ref/FullGraphics.html" rel="nofollow noreferrer"><code>FullGraphics</code></a> hasn't fully worked for a long time, and the situation appears to be getting worse instead of better. In <em>Mathematica</em> 10.0, 10.1, 11.3, 12.3 up to 13.1, a simple usage throws numerous errors and returns a graphic without ticks and with the wrong aspect ratio:</p>
<pre><code>Plot[Sin[x], {x, 0, 10}] // FullGraphics
</code></pre>
<blockquote>
<p>Axes::axes: {{False,False},{False,False}} is not a valid axis
specification. >></p>
<p>Ticks::ticks: {Automatic,Automatic} is not a valid tick specification. >></p>
<p>(* etc. etc. *)</p>
</blockquote>
<p>This may be caused by or related to <a href="https://mathematica.stackexchange.com/questions/68937/">More Ticks::ticks errors in AbsoluteOptions in v10</a>.</p>
<p>It seems that I must go back to version 5 functionality if I want this function to work right:</p>
<pre><code><< Version5`Graphics` (* load old graphics subsystem *)
Plot[Sin[x], {x, 0, 10}] // FullGraphics
</code></pre>
<p><img src="https://i.stack.imgur.com/2ldzm.png" alt="enter image description here" /></p>
<p>I wonder at this point if there is any indication that <code>FullGraphics</code> and perhaps also <code>AbsoluteOptions</code> are still supported? Or has something to the contrary been written (Wolfram blog, a developer's comment, etc.) that indicates these should be removed from the documentation now?</p>
<p>With <code>FullGraphics</code> broken is there a method that can take its place for producing proper <code>Graphics</code> directives that may be further manipulated and combined, not merely vectorized outlines?</p>
| Alexey Popkov | 280 | <h2>Basic implementation</h2>
<p>After working on <a href="https://mathematica.stackexchange.com/a/271580/280">this</a> answer, I realized that the undocumented <a href="https://mathematica.stackexchange.com/a/1411/280"><code>FrontEnd`ExportPacket</code></a> with <code>"InputForm"</code> as second argument may be considered as an alternative to <code>FullGraphics</code>. First, a basic definition:</p>
<pre><code>fullGraphicsBasic[expr_, dist_ : 1.2] := Block[{e = dist}, ToExpression[
First@FrontEndExecute[
FrontEnd`ExportPacket[Cell[BoxData@ToBoxes[expr]], "InputForm"]]]]
</code></pre>
<p>Here is a demonstration of how <code>fullGraphicsBasic</code> works (<em>Mathematica</em> 13.1.0):</p>
<pre><code>Plot[Sin[x], {x, 0, 6 Pi}]
fullGraphicsBasic[%]
ResourceFunction["ShortInputForm"][%]
</code></pre>
<blockquote>
<p><a href="https://i.stack.imgur.com/xxTh7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xxTh7.png" alt="plot" /></a></p>
</blockquote>
<blockquote>
<p><a href="https://i.stack.imgur.com/Cy3WK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Cy3WK.png" alt="fullGraphicsBasic@plot" /></a></p>
</blockquote>
<blockquote>
<p><a href="https://i.stack.imgur.com/y54QT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/y54QT.png" alt="screenshot" /></a></p>
</blockquote>
<p>The axes and ticks look lighter than they are on the original figure due to the default <code>Antialiasing -> True</code> applied to the graphics primitives (by default <code>Axes</code> and <code>Frame</code> are rendered with <code>Antialiasing -> False</code>). We can fix this, for example, as follows:</p>
<pre><code>fullGraphicsBasic[Plot[Sin[x], {x, 0, 6 Pi}]] /.
AbsoluteThickness[0.2`] :> Sequence[AbsoluteThickness[0.2`], Antialiasing -> False]
</code></pre>
<blockquote>
<p><a href="https://i.stack.imgur.com/0bgDV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0bgDV.png" alt="plot" /></a></p>
</blockquote>
<hr />
<h2>Discussion and better implementation</h2>
<p>I was forced to wrap the main code of <code>fullGraphicsBasic</code> in <code>Block</code>, because for some reason it returns an expression with an undefined variable <code>e</code>, which determines the distance between the minus sign and the number. The minus signs are drawn separately as <code>Style["-", FontFamily -> "MathematicaSans"]</code>, while the numbers are drawn with <code>FontFamily -> "Arial"</code>. Surprisingly, the "Hyphen-Minus" characters <code>"-"</code> look the same (at least on Windows 10 x64) in these fonts, so this complication is completely redundant. But it causes trouble, because the correct value for this variable seemingly depends on where the number is placed on the plot. The same problem applies to some other mathematical characters, for example <code>~</code>, <code>±</code>, <code>=</code>, etc. Applying <code>PrivateFontOptions -> {"OperatorSubstitution" -> False}</code> doesn't change this behavior.</p>
<p>This feature/bug is present in all versions I checked: 8.0.4, 12.3.1 and 13.1.0, except version 5.2, where the method itself doesn't quite work. It is interesting that if I specify <code>"Output"</code> as a style for the <code>Cell</code> in the definition above, this undefined variable <code>e</code> disappears and the result looks as if it were replaced with <code>1.8</code> (as a consequence, the minus sign runs over the number). These observations indicate that this functionality of <code>FrontEnd`ExportPacket</code> is still incomplete.</p>
<p>As a workaround for this horrible feature, one can replace the "Hyphen-Minus" character with a zero before applying <code>FrontEnd`ExportPacket</code>, and then replace it back afterwards. Since we are mostly interested in reproducing the <code>Ticks</code> and <code>FrameTicks</code>, we can proceed as follows:</p>
<pre><code>Clear[fullGraphics]
fullGraphics[expr_] := Module[{e},
e = expr /.
g_Graphics :>
Show[g, AbsoluteOptions[g, {Ticks, FrameTicks}] /.
s_String :>
StringReplace[
s, {StartOfString ~~ "-." -> "00.", StartOfString ~~ "-" -> "0"}]];
ToExpression[
First@FrontEndExecute[
FrontEnd`ExportPacket[Cell[BoxData@ToBoxes[e]], "InputForm"]]] /.
Text[Style[s_String, opts__], c__] :>
Text[Style[Row[{Invisible["0"],
StringReplace[
s, {StartOfString ~~ "0" ~~ d : DigitCharacter :> "\[Minus]" <> d}]}], opts],
c]];
</code></pre>
|
2,927,374 | <p>In a right triangle, relative to a <span class="math-container">$45^\circ$</span> angle, if we have
<span class="math-container">$$\text{adjacent} = 1$$</span>
<span class="math-container">$$\text{opposite} = 1$$</span></p>
<p>then
<span class="math-container">$$\text{hypotenuse}=\sqrt{o^2+a^2}=\sqrt{1^2+1^2}=\sqrt{2}$$</span>
so that
<span class="math-container">$$\sin 45^\circ=\frac{1}{\sqrt{2}}$$</span></p>
<p>But, when
<span class="math-container">$$\text{hypotenuse} = 1$$</span>
<span class="math-container">$$\text{opposite} = \text{adjacent}$$</span>
then (writing <span class="math-container">$o$</span> for <span class="math-container">$\text{opposite}$</span> and <span class="math-container">$a$</span> for <span class="math-container">$\text{adjacent}$</span>)
<span class="math-container">$$\begin{align}
o &= \sqrt{h^2-a^2} = \sqrt{1^2-o^2} \\[4pt]
\implies \quad o^2 &= h^2-o^2\\
&=1-o^2 \\[4pt]
\implies\quad 2o^2&=1 \\
\implies\quad o&=\sqrt{\frac{1}{2}}
\end{align}$$</span>
so that
<span class="math-container">$$\sin 45^\circ =\frac{\sqrt{\frac{1}{2}}}{1}$$</span></p>
<p>What went wrong?</p>
| Lucas | 478,855 | <p>Nothing went wrong.
<span class="math-container">$$ \frac{\sqrt{\frac{1}{2}}}{1} = \sqrt{\frac12}=\frac1{\sqrt2} $$</span></p>
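<p>(A quick numeric sanity check:)</p>

```python
import math

assert math.isclose(math.sqrt(0.5), 1 / math.sqrt(2))            # same number
assert math.isclose(math.sin(math.radians(45)), math.sqrt(0.5))  # both equal sin 45 degrees
```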
|
710,146 | <p>Conversely, is it true that if every sequence of pointwise equicontinuous functions from $M$ to $\mathbb{R}$ is uniformly equicontinuous, then $M$ is compact?</p>
| Community | -1 | <p>This is not necessarily true. To see why, let us construct a counterexample as follows: consider the family of constant functions $\{f_n \mid n\in \mathbb{N}\}$, where $f_n(x)=n$ for all $x\in M$. This family is uniformly equicontinuous. However, the sequence $(f_n)_n$ has no convergent subsequence in any of the usual norms or metrics.</p>
|
1,313,056 | <p>Got this matrix: </p>
<p>\begin{bmatrix} 1 & 2 \\ -2 & 5 \end{bmatrix}</p>
<p>I should determine whether the matrix is diagonalizable or not.
I found the eigenvalues (only one): $3$.
My eigenvector is then \begin{bmatrix} 1 \\ 1 \end{bmatrix}
This matrix is not diagonalizable (from my teacher's notes), but I don't know why. Can someone explain this? </p>
| Kolibrie | 202,998 | <p>To answer your comment about the eigenvectors for this matrix:
$$ A = \left[
\begin{array}{cc}
1&0\\
0&1
\end{array}
\right] $$
The characteristic polynomial given by $det(A - rI)$ where I is the n by n identity matrix will have roots equal to the eigenvalues of the matrix.<br>
(Note: $r$ is eigenvalue of A such that $Ax = rx$, where x is not the zero vector and has n components. This x is called the corresponding eigenvector to the eigenvalue $r$.)</p>
<p>$det(A-rI) = r^2-2r+1 = (r-1)^2$, so we have the eigenvalue $r = 1$ (multiplicity two).</p>
<p>Now construct the eigenvectors corresponding to $r_1 = r_2 = 1$:</p>
<p>Since $r$ is a scalar such that $Ax = rx$ for a non-trivial vector $x$, we can find an eigenvector corresponding to $r$ by solving $(A - rI)x = 0$.</p>
<p>This means to find $Null( A - rI )$, the set of vector(s) $t$ such that $(A - rI)*t = 0$</p>
<p>Since
$$ (A - 1I) = I - I = \left[
\begin{array}{cc}
0&0\\
0&0
\end{array}
\right] $$
In fact every vector satisfies $(A - rI)*t = 0$ here, and any nonzero such vector is an eigenvector. So a valid $t$ is in</p>
<p>$$ Span
\left\{
\begin{array}{c}
\begin{pmatrix}
1\\
0
\end{pmatrix},
\begin{pmatrix}
0\\
1
\end{pmatrix}
\end{array}
\right\}
$$</p>
<p>The vectors in the set above are the eigenvectors associated with the eigenvalue $r = 1$.</p>
<p>(Any identity matrix diagonalizes to the identity matrix).</p>
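<p>(Back to the matrix from the question, a quick numeric check with NumPy: the eigenvalue <span class="math-container">$3$</span> is repeated but its eigenspace is only one-dimensional, which is exactly why that matrix is defective, i.e. not diagonalizable.)</p>

```python
import numpy as np

A = np.array([[1.0, 2.0], [-2.0, 5.0]])

vals, vecs = np.linalg.eig(A)
assert np.allclose(vals, [3.0, 3.0])          # repeated eigenvalue 3

# rank(A - 3I) = 1, so dim Null(A - 3I) = 2 - 1 = 1 < 2: only one
# independent eigenvector, spanned by (1, 1).
assert np.linalg.matrix_rank(A - 3 * np.eye(2)) == 1
v = np.array([1.0, 1.0])
assert np.allclose(A @ v, 3 * v)
```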
|
4,285,189 | <p>I am aware of the following identity</p>
<p><span class="math-container">$\det\begin{bmatrix}A & B\\ C & D\end{bmatrix} = \det(A)\det(D - CA^{-1}B)$</span></p>
<p>When <span class="math-container">$A = D$</span> and <span class="math-container">$B = C$</span> and when <span class="math-container">$AB = BA$</span> the above identity becomes</p>
<p><span class="math-container">$\det\begin{bmatrix}A & B\\ B & A\end{bmatrix} = \det(A)\det(A - BA^{-1}B) = \det(A^2 - B^2) = \det(A-B)\det(A+B)$</span>.</p>
<p>However, I couldn't prove this identity for the case where <span class="math-container">$AB \neq BA$</span>.</p>
<p><strong>EDIT:</strong> Based on @Trebor 's suggestion.</p>
<p>I think I could do the following.</p>
<p><span class="math-container">$\det\begin{bmatrix}A & B\\ B & A\end{bmatrix} =
\det\begin{bmatrix}A & B\\ B-A & A-B\end{bmatrix} = \det\begin{bmatrix}A+B & B\\ 0 & A-B\end{bmatrix} = \det(A-B)\det(A+B)$</span> (subtract the first block row from the second, then add the second block column to the first).</p>
| achille hui | 59,379 | <p>Let's say <span class="math-container">$A, B$</span> are <span class="math-container">$n \times n$</span> matrices with entries from a field with characteristic <span class="math-container">$\ne 2{}^{\color{blue}{[1]}}$</span>.</p>
<p>Let <span class="math-container">$I$</span> be the <span class="math-container">$n \times n$</span> identity matrix and <span class="math-container">$J = \begin{bmatrix}I & I \\ -I & I\end{bmatrix}$</span>. Since <span class="math-container">$\det J = 2^n \ne 0$</span>, <span class="math-container">$J$</span> is invertible.</p>
<p>Notice <span class="math-container">$$
J \begin{bmatrix}A & B \\ B & A\end{bmatrix}
= \begin{bmatrix}A+B & A+B \\ B-A & A-B\end{bmatrix}
= \begin{bmatrix}A+B & 0 \\ 0 & A-B\end{bmatrix} J
$$</span>
We have
<span class="math-container">$$\det\begin{bmatrix}A & B \\ B & A\end{bmatrix}
= \det\begin{bmatrix}A+B & 0 \\ 0 & A-B\end{bmatrix}
= \det(A+B)\det(A-B)\tag{*1}
$$</span></p>
<p><strong>Notes</strong></p>
<ul>
<li><p><span class="math-container">$\color{blue}{[1]}$</span> - As demonstrated by @Just a user's answer,
the requirement that entries from a field with characteristic <span class="math-container">$\ne 2$</span> can be dropped. <span class="math-container">$(*1)$</span> continues to work when entries of <span class="math-container">$A,B$</span> take values from any commutative ring.</p>
<p>Aside from using row/column operations as in @Just a user's answer, we can use the fact that LHS and RHS of <span class="math-container">$(*1)$</span> are polynomials with integer coefficients in entries of <span class="math-container">$A,B$</span>. Since they are equal as a polynomial, it remains equal when we substitute the entries by elements from any commutative ring.</p>
</li>
</ul>
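<p>(A random-matrix spot check of <span class="math-container">$(*1)$</span> with non-commuting <span class="math-container">$A,B$</span>:)</p>

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(100):
    A = rng.standard_normal((3, 3))
    B = rng.standard_normal((3, 3))
    assert not np.allclose(A @ B, B @ A)      # generically non-commuting
    M = np.block([[A, B], [B, A]])
    lhs = np.linalg.det(M)
    rhs = np.linalg.det(A + B) * np.linalg.det(A - B)
    assert np.isclose(lhs, rhs)
```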
|
1,237,618 | <p>I really need Your help.</p>
<p>I need to prove that Euclidean norm is strictly convex. I know that a function is strictly convex if $f ''(x)>0$. Can I use it for Euclidean norm and how? $||x||''=\frac{||x||^2-x^2}{||x||^3}$</p>
<p>Thank You!</p>
| Brenton | 226,184 | <p>It is easier to work with the definition that a function $f$ is convex if for any $\lambda \in [0,1]$ and for all $x,y$ in its (convex) domain:
$$ f((1-\lambda)x + \lambda y) \leq (1-\lambda)f(x) + \lambda f(y)$$</p>
<p>Then:</p>
<p>$$ ||(1-\lambda)x + \lambda y ||^2 = \sum_{i} |(1-\lambda) x_i + \lambda y_i|^2 \leq \sum_{i} (1-\lambda) |x_i|^2 + \lambda |y_i|^2 = (1-\lambda)||x||^2 + \lambda ||y||^2 $$ </p>
|
1,237,618 | <p>I really need Your help.</p>
<p>I need to prove that Euclidean norm is strictly convex. I know that a function is strictly convex if $f ''(x)>0$. Can I use it for Euclidean norm and how? $||x||''=\frac{||x||^2-x^2}{||x||^3}$</p>
<p>Thank You!</p>
| Mercy King | 23,304 | <p>The function
$$
\lVert \mbox{ }\rVert:\mathbb{R}^n\to \mathbb{R},\, f(x)=\lVert x\rVert
$$
is certainly convex. In fact, given $t\in [0,1]$ and $x,y\in \mathbb{R}^n$ we have:
$$
\lVert tx+(1-t)y\rVert\le \lVert tx\rVert+\lVert (1-t)y\rVert=t\lVert x\rVert+(1-t)\lVert y\rVert.
$$
But is it <strong><em>STRICTLY CONVEX</em></strong>? i.e. if $t\in (0,1)$ and $x,y \in \mathbb{R}^n$, with $x\ne y$, then
$$
\lVert tx+(1-t)y\rVert<t\Vert x\rVert+(1-t)\lVert y\rVert.
$$
The answer is <strong><em>NO</em></strong>. In fact, if $t\in (0,1)$, and $y=sx$, with $0<s \ne 1$, and $x\in \mathbb{R}^n$, we certainly have $x\ne y$. But
\begin{eqnarray}
\lVert tx+(1-t)y\rVert&=&\lVert tx+(1-t)sx\rVert=\lVert (t+(1-t)s)x\rVert=(t+(1-t)s)\lVert x\rVert\\
&=&t\lVert x\rVert+(1-t)\lVert sx\rVert=t\lVert x\rVert+(1-t)\lVert y\rVert
\end{eqnarray}
Hence the function $\lVert\mbox{ } \rVert$ is <strong><em>CONVEX</em></strong> but <strong><em>NOT STRICTLY CONVEX</em></strong>.</p>
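<p>(A numeric illustration of the failure: on a ray <span class="math-container">$y=sx$</span> the inequality is an equality.)</p>

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = 2.0 * x            # y = s x with s = 2, so x != y
t = 0.3

lhs = np.linalg.norm(t * x + (1 - t) * y)
rhs = t * np.linalg.norm(x) + (1 - t) * np.linalg.norm(y)
assert np.isclose(lhs, rhs)    # equality, so no strict convexity along a ray
```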
|
80,787 | <p>Hello,</p>
<p>I am very new to the field of approximation theory, and since
an extended search on the Internet did not provide answers for
two rather basic questions, I decided to ask them here. </p>
<p>1) From my understanding upper bounds for</p>
<p>$$ \inf_{q} \int_{-1}^{1} |f(x) - q(x)|^{2p} dx $$</p>
<p>with $f$ continuous and $q$ a polynomial of degree $n$, are expressed
in terms of the $L^p$ smoothness of $f$ and in terms of the degree $n$.
Could somebody point me to a proof
of such a result?</p>
<p>2) Heuristically, what kind of information do lower bounds for the
above infimum contain? (For example, suppose that I can
give a lower bound of $p!$ for the above infimum as $p \rightarrow \infty$.) </p>
<p>My last question might not be well-posed, so if it doesn't make sense please ignore
it.</p>
<p>Thank you.</p>
| Ben Adcock | 19,011 | <p>A good introductory reference for 1) (and similar problems) is the book "Spectral Methods: Fundamentals in Single Domains" by Canuto, Hussaini, Quarteroni & Zang; Chapter 5, in particular. Equation (5.4.16) gives a bound for the $L^p$-norm approximation problem in terms of the $L^p$ smoothness of $f$ and its derivatives:
$$
\inf_{q \in \mathbb{P}_n} \| f - q \| _{L^p} \leq C\, n^{-m} \left ( \sum^{m} _{k=\min(m,n+1)} \| f^{(k)} \|^p _{L^p} \right )^{\frac{1}{p}}
$$
According to the bibliographical notes section (p.291) a proof can be found in this <a href="http://www.springerlink.com/content/42m3284x30mv5307/" rel="nofollow">paper</a>.</p>
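<p>(A small numerical illustration of this type of bound, using NumPy's Chebyshev least-squares fit as a proxy for the best approximation; this is a sketch of the qualitative behavior, not the sharp estimate:)</p>

```python
import numpy as np

xs = np.linspace(-1.0, 1.0, 2001)
f = np.exp(xs)                      # a smooth f, so the error decays fast in n

errs = []
for n in (2, 4, 8):
    coeffs = np.polynomial.chebyshev.chebfit(xs, f, n)
    q = np.polynomial.chebyshev.chebval(xs, coeffs)
    errs.append(np.max(np.abs(f - q)))

# higher degree, smaller error
assert errs[0] > errs[1] > errs[2]
```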
|
918,049 | <p>Let $f: X \to Y$ be a map of sets. We are given that $X$ is a topological space. We are to show that there is a topology on $Y$ making $f$ continuous, and moreover, determine if this topology is unique. </p>
<p>Should I read it as "$X$ is just a set on which there is given a topology, i.e sets that are open in $X$ are predetermined" or "$X$ is a collection of subsets that form a topological space"?</p>
<p>I am not asking for assistance on the exercise itself.</p>
| mathlove | 78,967 | <p>HINT : For the $n=p+1$ step, all you need is to prove $$\frac{1+\sqrt{4\alpha +1}}{2}\gt \sqrt{\frac{1+\sqrt{4\alpha +1}}{2}}.$$</p>
|
3,215,553 | <p>Recently I have been reading a lot about <span class="math-container">$\mathbb{Z}_2$</span>-actions on topological spaces. Mainly I was focused on surfaces such as the sphere, torus and Klein bottle and here the existence of a nontrivial <span class="math-container">$\mathbb{Z}_2$</span>-action is rather simple. But I was wondering if a general topological space always admits a nontrivial continuous <span class="math-container">$\mathbb{Z}_2$</span>-action? If not, then more specific, does a manifold always admit a nontrivial continuous <span class="math-container">$\mathbb{Z}_2$</span>-action? </p>
<p>For a manifold <span class="math-container">$M$</span> I was thinking about the fact that we can embed <span class="math-container">$M$</span> into <span class="math-container">$\mathbb{R}^N$</span> for some <span class="math-container">$N >0$</span> and then <span class="math-container">$M$</span> can inherit a <span class="math-container">$\mathbb{Z}_2$</span>-action from <span class="math-container">$\mathbb{R}^N$</span> but then when one looks at the spiral in <span class="math-container">$\mathbb{R}^2$</span> we see that this spiral does not inherit for example the antipodality of <span class="math-container">$\mathbb{R}^2$</span>.</p>
<p>Extra: I was also wondering that if there are spaces that do admit a nontrivial continuous <span class="math-container">$\mathbb{Z}_2$</span>-action, do these space then also admit a <strong>free</strong> <span class="math-container">$\mathbb{Z}_2$</span>-action? By free I mean that the action is fixed point free.</p>
<p>If anyone knows some basic examples of spaces that do not admit a nontrivial continuous (free) <span class="math-container">$\mathbb{Z}_2$</span>-action, please do share. I seem to be unable to find one.</p>
<p>Thank you in advance! </p>
| hmakholm left over Monica | 14,366 | <p><span class="math-container">$X=\mathbb R$</span> easily admits a nontrivial <span class="math-container">$\mathbb Z_2$</span>-action, but it cannot be free -- we can always find a fixed point with the intermediate value theorem. </p>
|
3,818,445 | <p>I'm trying to compute <span class="math-container">$\operatorname{Ext}_{\mathbb{Z}}^1(\mathbb{Z}[1/p],\mathbb{Z})\cong \mathbb{Z}_p/\mathbb{Z}$</span>.</p>
<p>Now I have the projective resolution
<span class="math-container">$$0\rightarrow \bigoplus_{i\geq 1}\mathbb{Z}\xrightarrow{\alpha} \mathbb{Z}\oplus \bigoplus_{i\geq 1}\mathbb{Z}\xrightarrow{\beta} \mathbb{Z}[1/p]\rightarrow 0 .$$</span> The map <span class="math-container">$\alpha$</span> is given by <span class="math-container">$(a_i)_{i\geq 1}\mapsto (-\Sigma a_i, (a_ip^i)_{i\geq 1})$</span> and <span class="math-container">$\beta$</span> is given by <span class="math-container">$(b_i)_{i\geq 0}\mapsto \Sigma_{i\geq 0} b_i/p^i$</span>.</p>
<p>Now apply <span class="math-container">$\operatorname{Hom}(-,\mathbb{Z})$</span>; I want to calculate the image of the dualised map <span class="math-container">$\prod_{i\geq 1}\mathbb{Z}\xleftarrow{\alpha^*} \mathbb{Z}\times \prod_{i\geq 1} \mathbb{Z}$</span>, which is given by <span class="math-container">$(f_0,0,\dots)\mapsto f_0'$</span> and <span class="math-container">$(0,\dots,f_i,\dots )\mapsto (0,\dots,p^if_i,\dots )$</span>, where <span class="math-container">$f_0': \bigoplus_{i\geq 1}\mathbb{Z}\to \mathbb{Z}$</span>, <span class="math-container">$f_0'((a_i))=f_0(-\Sigma a_i)$</span>. Is there any way to see what this image is, and how the quotient of <span class="math-container">$\prod_{i\geq 1} \mathbb{Z}$</span> by this image is <span class="math-container">$\mathbb{Z}_p/\mathbb{Z}$</span>?</p>
| Fabio Lucchini | 54,738 | <p><span class="math-container">$\newcommand\ZZ{\mathbb{Z}}$</span>
<span class="math-container">$\DeclareMathOperator\Hom{Hom}$</span>
<span class="math-container">$\DeclareMathOperator\Ext{Ext}$</span>
Another approach using a free resolution of <span class="math-container">$\ZZ[1/p]$</span>.
First recall that we have a ring isomorphism:
<span class="math-container">$$\mathbb Z[1/p]\cong\frac{\mathbb Z[x]}{(px-1)\mathbb Z[x]}$$</span>
where <span class="math-container">$x$</span> is an indeterminate.
Then we get the following exact sequence:
<span class="math-container">$$\{0\}\to\mathbb Z[x]\to\mathbb Z[x]\to\mathbb Z[1/p]\to\{0\}$$</span>
where the map <span class="math-container">$\mathbb Z[x]\to\mathbb Z[x]$</span> is the multiplication by <span class="math-container">$px-1$</span>.
This is, in fact, a free resolution of <span class="math-container">$\ZZ[1/p]$</span>.
Then we get the exact sequence:
<span class="math-container">$$\Hom(\ZZ[x],\ZZ)\to\Hom(\ZZ[x],\ZZ)\to\Ext(\ZZ[1/p],\ZZ)\to\{0\}$$</span>
We claim that also the following sequence is exact:
<span class="math-container">$$\Hom(\ZZ[x],\ZZ)\to\Hom(\ZZ[x],\ZZ)\xrightarrow\zeta\hat\ZZ_p/\ZZ\to\{0\}$$</span>
Here <span class="math-container">$\zeta$</span> is explicitly given by
<span class="math-container">\begin{align}
\zeta&:\Hom(\ZZ[x],\ZZ)\to\hat\ZZ_p/\ZZ&
\psi&\mapsto\sum_{n=0}^\infty\psi(x^n)p^n+\ZZ
\end{align}</span>
If <span class="math-container">$\psi\in\Hom(\ZZ[x],\ZZ)$</span> belongs to the image of <span class="math-container">$\Hom(\ZZ[x],\ZZ)\to\Hom(\ZZ[x],\ZZ)$</span>, then there exists <span class="math-container">$\varphi\in\Hom(\ZZ[x],\ZZ)$</span> such that <span class="math-container">$\psi(f)=\varphi((xp-1)f)$</span>
for every polynomial <span class="math-container">$f\in\ZZ[x]$</span>.
In particular, for every <span class="math-container">$k\in\mathbb N$</span>, we have:
<span class="math-container">$$\psi(x^k)=p\varphi(x^{k+1})-\varphi(x^k)\tag{1}$$</span>
Consequently, we obtain:
<span class="math-container">$$\sum_{k=0}^{n-1}\psi(x^k)p^k=p^n\varphi(x^n)-\varphi(1)\xrightarrow{n\to\infty}-\varphi(1)$$</span>
in <span class="math-container">$\hat\ZZ_p$</span>, hence <span class="math-container">$\psi\in\operatorname{Ker}\zeta$</span>.
Conversely, assume <span class="math-container">$\psi\in\operatorname{Ker}\zeta$</span>.
We claim that <span class="math-container">$\psi$</span> is the image of <span class="math-container">$\varphi\in\Hom(\ZZ[x],\ZZ)$</span> satisfying:
<span class="math-container">$$\varphi(x^k)=\sum_{n=k}^{\infty}\psi(x^n)p^{n-k}$$</span>
for every <span class="math-container">$k\in\mathbb N$</span>.
By assumption, <span class="math-container">$\varphi(1)\in\ZZ$</span>.
Then <span class="math-container">$\varphi(x^k)\in\ZZ$</span> for every <span class="math-container">$k\in\mathbb N$</span> follows by induction on <span class="math-container">$k$</span>, and since <span class="math-container">$(1)$</span> is clearly satisfied, this proves the assertion.</p>
|
3,969,679 | <p><strong>The letters in the word GUMTREE and KOALA are rearranged to form a 12-letter word where KOALA appears precisely in order but not necessarily together. How many ways can this happen?</strong></p>
<p>So I attmepted it via this method:</p>
<p>Firstly, arrange like this, since KOALA must be in order but not necessarily together (let the full stops (.) be spots for the letters in GUMTREE). There are <span class="math-container">$7$</span> letters in GUMTREE and <span class="math-container">$6$</span> full stops, so <span class="math-container">$6^7$</span>.</p>
<p>.K.O.A.L.A.</p>
<p>But there are two E's, so <span class="math-container">$\frac{6^7}{2!}$</span>. The letters in KOALA are fixed, so they have <span class="math-container">$1$</span> way each, except for A (there are two, so the first A has two choices and the second A has one choice).</p>
<p>Therefore, <span class="math-container">$$2\cdot\frac{6^7}{2!}=279936$$</span></p>
<p>But the answer is 1995840 arrangements</p>
<p>I believe my method is very close, but I am forgetting to multiply by something. Can someone point out my logical flaw? Otherwise, the worked solutions propose <span class="math-container">$\frac{12!}{5!2!}$</span>, but I don't get why you divide by <span class="math-container">$5!$</span> for the KOALA, since they are not identical letters... regardless, it would be great to understand both ways!</p>
<p>Thanks</p>
| Math Lover | 801,574 | <p>Here is the problem with your method of counting -</p>
<p>You are giving each letter of GUMTREE a choice of being in any of the <span class="math-container">$6$</span> places. So far so good.</p>
<p>But <span class="math-container">$6^7$</span> does not include permutations of "GUMTREE". For example, in the arrangement where all letters of GUMTREE go into the last slot, as in KOALA {GUMTREE}, where are you counting the permutations of GUMTREE?</p>
<p>So you may say we can multiply by <span class="math-container">$\frac{7!}{2!}$</span>, but that brings in a different problem: you overcount, and using P.I.E. or finding a multiplication factor for each type of arrangement separately is going to be a lot more work. Why does it overcount? Take an example,</p>
<p><em>K</em> (G) <em>O</em> (UM) <em>A</em> (TR) <em>L</em> (EE) <em>A</em></p>
<p>Now if you multiply as we mentioned above, it will also count cases,</p>
<p><em>K</em> (U) <em>O</em> (GM) <em>A</em> (TR) <em>L</em> (EE) <em>A</em> but those have already been counted in <span class="math-container">$6^7$</span>.</p>
<p>So it is easier to treat the letters of KOALA as identical: among the <span class="math-container">$12$</span> letters we then have <span class="math-container">$2$</span> <span class="math-container">$E$</span>'s and <span class="math-container">$5$</span> <span class="math-container">$X$</span>'s, giving an answer of <span class="math-container">$\frac{12!}{5! \, 2!}$</span>.</p>
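<p>As a quick numerical check of both counts (a Python sketch using only the standard library):</p>

```python
from math import factorial

# Correct count: K, O, A, L, A act as 5 indistinguishable placeholders among
# the 12 letters (their relative order is forced), and the two E's are identical.
total = factorial(12) // (factorial(5) * factorial(2))

# The flawed count from the question, for comparison: 2 * 6^7 / 2!
flawed = 2 * 6 ** 7 // factorial(2)
print(total, flawed)  # 1995840 279936
```

<p>This reproduces both the book's 1995840 and the question's 279936.</p>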
|
758,158 | <p>I am trying to use <span class="math-container">$f(x)=x^3$</span> as a counterexample to the following statement. </p>
<p>If <span class="math-container">$f(x)$</span> is strictly increasing over <span class="math-container">$[a,b]$</span> then for any <span class="math-container">$x\in (a,b), f'(x)>0$</span>. </p>
<p>But how can I show that <span class="math-container">$f(x)=x^3$</span> is strictly increasing?</p>
| Mariano Suárez-Álvarez | 274 | <p>Well, you know that $f$ is strictly increasing in $[0,+\infty)$ and in $(-\infty,0]$. Moreover, $f$ is positive in the first interval and negative in the second one!</p>
|
725,241 | <p>Where s=circumcenter, H= orthocenter, and A'= midpoint of one side of triangle.
<img src="https://i.stack.imgur.com/Pg6rv.png" alt="enter image description here"></p>
<p>How can can I determine the location of the three vertices of the triangle? </p>
| user133281 | 133,281 | <p>Rewrite the right-hand side as $$\sum_{i=1}^k \binom{k-1}{i-1} \binom{n}{n-i}.$$</p>
<p>Now imagine we have $n$ men and $k-1$ women, and we want to count the number of ways to choose $n-1$ of these people. The left-hand side is the number of ways to do this, $\binom{n+k-1}{n-1}$ (we choose $n-1$ out of $n+k-1$ people). The right-hand side also counts the number of ways to do this, since the summand for $i$ is the number of ways to choose $i-1$ women and $n-i$ men. Since both sides count the same thing, equality must hold.</p>
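<p>A small brute-force check of the identity over a range of values (a Python sketch; terms with $i>n$ vanish since $\binom{n}{n-i}=0$ there):</p>

```python
from math import comb

# Check C(n+k-1, n-1) == sum_{i=1}^{k} C(k-1, i-1) * C(n, n-i)
for n in range(1, 8):
    for k in range(1, 8):
        lhs = comb(n + k - 1, n - 1)
        rhs = sum(comb(k - 1, i - 1) * (comb(n, n - i) if i <= n else 0)
                  for i in range(1, k + 1))
        assert lhs == rhs, (n, k)
print("identity verified for 1 <= n, k <= 7")
```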
|
2,845,427 | <p>I posted concerning this question a little while ago, not asking for the answer but for an understanding of the setup. Well, I thought I would solve it but I seem to be unable to obtain the required answer. If anyone could help I'd be very grateful.</p>
<p>Here's the question again:</p>
<p><a href="https://i.stack.imgur.com/v7GOe.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/v7GOe.png" alt="enter image description here"></a></p>
<p>Here's my attempt:</p>
<p>We are told that, with the weight attached to the unstretched string, AC = $\frac{4a}{3}$ and CB = $\frac{4a}{7}$, which means that AB = $\frac{4a}{3}+\frac{4a}{7}=\frac{40a}{21}$ = natural length of the string.</p>
<p>Below is my diagram of the final situation:</p>
<p>My method will be by consideration of energy.</p>
<p>Once the string is stretched over the bowl, but before the weight falls to touch the inner surface, it will have elastic potential energy due to having been stretched. I'll calculate this.</p>
<p>Then I'll calculate the energy stored in the string due to both its stretching over the diameter AND the weight having fallen.</p>
<p>The difference in these two energies will be equal to the loss in potential energy of the weight.</p>
<p>So, natural length = $\frac{40a}{21}$</p>
<p>Length once stretched over the diameter is $2a =\frac{42a}{21}$</p>
<p>and so extension due to this is $\frac{2a}{21}$ and so the elastic potential energy in the string due to only the stretching over the diameter is $\frac{\lambda(\frac{2}{21})^2a^2}{2*\frac{40}{21}a}=\frac{4}{441}\lambda a*\frac{21}{80}$</p>
<p>I'll leave it in this form for convenience later on.</p>
<p>Ok. from the diagram:</p>
<p><a href="https://i.stack.imgur.com/zsk6q.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zsk6q.png" alt="enter image description here"></a></p>
<p>$x\,cos\,30+y\,cos\,60=2a$ therefore</p>
<p>$\frac{\sqrt{3}}{2}x+\frac{y}{2}=2a$ therefore</p>
<p>$\sqrt{3}x+y=4a$ call this equation 1</p>
<p>also</p>
<p>$h=x\,cos\,60=\frac{x}{2}$ and $h=y\,cos\,30=\frac{\sqrt{3}}{2}y$</p>
<p>and so $x=2h$ and $y=\frac{2h}{\sqrt{3}}$</p>
<p>And so, from equation 1, we have</p>
<p>$2\sqrt{3}h+\frac{2h}{\sqrt{3}}=4a$ therefore</p>
<p>h = $\frac{\sqrt{3}a}{2}$</p>
<p>and so</p>
<p>$x=\sqrt{3}a$ and $y=\frac{2}{\sqrt{3}}*\frac{\sqrt{3}}{2}a=a$</p>
<p>So the new length, due to stretching over diameter AND falling of weight = $a(1+\sqrt{3})$</p>
<p>So the extension is now:</p>
<p>$a(1+\sqrt{3})-\frac{40a}{21} = (\sqrt{3}-\frac{19}{21})a$</p>
<p>So energy now stored in string is:</p>
<p>$\frac{\lambda(\sqrt{3}-\frac{19}{21})^2a^2}{2\frac{40a}{21}}$ = $\frac{21}{80}\lambda(\sqrt{3}-\frac{19}{21})^2a$</p>
<p>So, the change in elastic potential energy in going from just stretched across the diameter to stretched across the diameter AND the weight having fallen is:</p>
<p>$\frac{21}{80}\lambda a[(\sqrt{3}-\frac{19}{21})^2-\frac{4}{441}]$</p>
<p>and this can be equated to the loss in gravitational potential energy of the weight $w$, giving:</p>
<p>$\frac{21}{80}\lambda a[(\sqrt{3}-\frac{19}{21})^2-\frac{4}{441}]=wh=\frac{\sqrt{3}}{2}wa$</p>
<p>So I have $\lambda$ in terms of w BUT I do not have $\lambda=w$</p>
<p>Is my physical reasoning incorrect?</p>
<p>If not, have I made a mathematical mistake(s)?</p>
<p>Thanks for any help,
Mitch.</p>
| Cesareo | 397,348 | <p>Assuming Hooke's law</p>
<p>$$
F = \lambda\left(\frac{l-l_0}{l_0}\right)
$$</p>
<p>Calling</p>
<p>$$
|AC|_0 = a\frac 43\\
|CB|_0 = a\frac 47\\
|AC| = 2a\sin(\frac{\pi}{3})\\
|CB| = 2a\sin(\frac{\pi}{6})\\
F_{AC} = \lambda\left(\frac{|AC|-|AC|_0}{|AC|_0}\right)(-\cos(\frac{\pi}{6}),\sin(\frac{\pi}{6}))\\
F_{CB} = \lambda \left(\frac{|CB|-|CB|_0}{|CB|_0}\right)(\cos(\frac{\pi}{3}),\sin(\frac{\pi}{3}))\\
R = r(-\cos(\frac{\pi}{3}),\sin(\frac{\pi}{3}))\\
W = w(0,-1)
$$</p>
<p>we have in equilibrium</p>
<p>$$
F_{AC}+F_{CB}+R+W=0
$$</p>
<p>or</p>
<p>$$
\left\{
\begin{array}{c}
-\frac{3 \sqrt{3} \left(\sqrt{3} a-\frac{4 a}{3}\right) \lambda }{8 a}+\frac{3 \lambda }{8}-\frac{r}{2}=0 \\
\frac{3 \left(\sqrt{3} a-\frac{4 a}{3}\right) \lambda }{8 a}+\frac{3 \sqrt{3} \lambda }{8}+\frac{\sqrt{3} r}{2}-w=0 \\
\end{array}
\right.
$$</p>
<p>now solving for $\lambda, r$ we have</p>
<p>$$
\begin{array}{c}
r=\left(\sqrt{3}-\frac{3}{2}\right) w \\
\lambda =w \\
\end{array}
$$</p>
<p>NOTE</p>
<p>$R = $ normal surface reaction force</p>
<p>$\lambda = $ string elastic modulus.</p>
<p>$|\cdot| = $ stretched length.</p>

<p>$|\cdot|_0 = $ unstretched length.</p>
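<p>As a numerical cross-check of the algebra, the same $2\times 2$ linear system can be solved with Cramer's rule (a Python sketch; the factor $a$ cancels, and we take $w=1$ so the claim is $\lambda=1$ and $r=\sqrt 3-\frac 32$):</p>

```python
from math import sqrt, isclose

s3 = sqrt(3.0)
w = 1.0
# Coefficients of lambda and r in the two equilibrium equations:
# a1*lam + b1*r = 0,   a2*lam + b2*r = w
a1 = -3 * s3 * (s3 - 4 / 3) / 8 + 3 / 8
b1 = -1 / 2
a2 = 3 * (s3 - 4 / 3) / 8 + 3 * s3 / 8
b2 = s3 / 2

det = a1 * b2 - a2 * b1
lam = (0 * b2 - b1 * w) / det   # Cramer's rule, first unknown
r = (a1 * w - a2 * 0) / det     # Cramer's rule, second unknown
print(lam, r)  # lam ~ 1 (= w),  r ~ sqrt(3) - 3/2
```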
|
1,703,169 | <p>At a recent maths competition one of the questions was to find for which $x,y,z$ this equation holds true:</p>
<p>$$\sqrt{x}-\sqrt{z+y}=\sqrt{y}-\sqrt{z+x}=\sqrt{z}-\sqrt{x+y}$$</p>
<p>where $x,y,z \in \mathbb{R}^{+} \cup \{0\}$. So how am I supposed to approach this problem?</p>
<p>Also sorry for not explaining my personal progress on this problem, but there simply is none.</p>
| fleablood | 280,126 | <p>$x \ge 0; y \ge 0; z \ge 0$.</p>
<p>Suppose $z = 0$</p>
<p>Then $\sqrt{x} - \sqrt{y} = \sqrt{y} - \sqrt{x} = - \sqrt{x+y} \implies \sqrt{x} = \sqrt{y} \implies x = y; x + y = 0 \implies x = y = z = 0$.</p>
<p>Likewise if $x = 0$ or $y = 0$ then $x = y = z = 0$ by the same argument so either they all equal 0 or none do.</p>
<p>So assume no $x, y, $ or $z$ = 0:</p>
<p>$\sqrt{x}-\sqrt{z+y}=\sqrt{y}-\sqrt{z+x}=\sqrt{z}-\sqrt{x+y} \implies$</p>
<p>$(\sqrt{x}-\sqrt{z+y})^2=(\sqrt{y}-\sqrt{z+x})^2=(\sqrt{z}-\sqrt{x+y})^2 \implies$</p>
<p>$x+y+z -2(\sqrt{x}\sqrt{z+y})=x+y+z -2(\sqrt{y}\sqrt{z+x})=x+y+z -2(\sqrt{z}\sqrt{x+y}) \implies$</p>
<p>${x}({z+y})={y}({z+x})={z}({x+y}) \implies$</p>
<p>$xz + xy = yz + xy = xz + yz \implies$</p>
<p>$xz = yz; xy = xz; xy = yz $</p>
<p>Thus $x = y; y = z$ and $x = z$.</p>
<p>So any $x = y = z \ge 0$ will work.</p>
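<p>A quick numeric spot-check that any $x=y=z\ge 0$ makes all three expressions equal (a Python sketch):</p>

```python
from math import sqrt

for t in [0.0, 0.5, 1.0, 4.0, 9.0]:
    x = y = z = t
    e1 = sqrt(x) - sqrt(z + y)
    e2 = sqrt(y) - sqrt(z + x)
    e3 = sqrt(z) - sqrt(x + y)
    assert e1 == e2 == e3  # identical expressions when x = y = z
print("all three expressions agree for every sampled x = y = z >= 0")
```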
|
3,758 | <p>When implicitly finding the derivative of: </p>
<blockquote>
<p>$xy^3 - xy^3\sin(x) = 1$</p>
</blockquote>
<p>How do you find the implicit derivative of:</p>
<blockquote>
<p>$xy^3\sin(x)$</p>
</blockquote>
<p>Is it using a <em>triple</em> product rule of sorts?</p>
| Pierre-Yves Gaillard | 660 | <p>Here is a simple way to compute the $n$-th derivative of a product of $k$ functions. To ease the notation, stick to the case $k=3$. First follow your instinct, and then you'll see that the justification presents no difficulty. Write (with all summation indices varying from 0 to $\infty$)
$$\sum_n\frac{(fgh)^{(n)}}{n!}X^n
=\sum_i\frac{f^{(i)}}{i!}X^i\ \sum_j\frac{g^{(j)}}{j!}X^j\ \sum_k\frac{h^{(k)}}{k!}X^k$$
$$=\sum_{i,j,k}\frac{f^{(i)}g^{(j)}h^{(k)}}{i!\ j!\ k!}X^{i+j+k}
=\sum_n\ X^n\sum_{i+j+k=n}\frac{f^{(i)}g^{(j)}h^{(k)}}{i!\ j!\ k!}\quad.$$</p>
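<p>A quick numerical sanity check of the resulting rule $(fgh)^{(n)}=\sum_{i+j+k=n}\frac{n!}{i!\,j!\,k!}f^{(i)}g^{(j)}h^{(k)}$, using $f=e^{2x}$, $g=e^{3x}$, $h=e^{5x}$, whose derivatives are known in closed form (a Python sketch):</p>

```python
from math import factorial, exp, isclose

a, b, c, x0, n = 2.0, 3.0, 5.0, 0.7, 4

# Since f^(i)(x) = a^i e^{ax} etc., we have (fgh)^(n)(x) = (a+b+c)^n e^{(a+b+c)x}.
lhs = (a + b + c) ** n * exp((a + b + c) * x0)

# The generalized Leibniz rule for three factors: sum over i + j + k = n.
rhs = sum(
    factorial(n) // (factorial(i) * factorial(j) * factorial(n - i - j))
    * a ** i * b ** j * c ** (n - i - j) * exp((a + b + c) * x0)
    for i in range(n + 1) for j in range(n + 1 - i)
)
print(lhs, rhs)  # the two values agree
```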
<p>In categorical language: To each $\mathbb Q$-algebra $A$ is attached the $\mathbb Q$-algebra $A':=A[[X]]$ equipped with the derivation $d/dX$ (the letter $X$ being an indeterminate). Then $A\mapsto A'$ is a <strong>right adjoint</strong> to the forgetful functor from differential $\mathbb Q$-algebras to $\mathbb Q$-algebras. </p>
|
3,557,398 | <p>Here are two second-order differential equations.</p>
<p><span class="math-container">$$ y''+9y=\sin(2t) \tag 1 $$</span></p>
<p><span class="math-container">$$ y'' +4y =\sin(2t) \tag 2 $$</span></p>
<p>I am told to use undetermine coefficients method to solve.</p>
<p>For 1), I use <span class="math-container">$y_p=A \cos(2t)+B \sin(2t)$</span> to get <span class="math-container">$A=0$</span> and <span class="math-container">$B=\frac{1}{5}$</span>, and get <span class="math-container">$y_p=\frac{1}{5} \sin(2t)$</span>.</p>
<p>For 2), I realize that that method doesn't work and am told to use <span class="math-container">$y_p=t(A \cos(2t)+B \sin(2t))$</span>. Why does it work then?</p>
| mjw | 655,367 | <p>The statement is true, and could be proved by considering each case:</p>
<p><span class="math-container">$$0<x<y<z$$</span></p>
<p><span class="math-container">$$0<x<z<y$$</span></p>
<p><span class="math-container">$$0<z<x<y$$</span></p>
<p><span class="math-container">$$\textit{etc.}$$</span></p>
<p>Actually, just these three are enough. The statement is symmetric under exchanging <span class="math-container">$x$</span> and <span class="math-container">$y$</span>.</p>
|
84,897 | <pre><code>Integrate[a/(Sin[t]^2 + a^2), {t, 0, 2 Pi}]
</code></pre>
<p>$$\int_0^{2 \pi } \frac{a}{a^2+\sin ^2(t)} \, dt$$</p>
<p>gives $0$</p>
<p>This cannot be true. What is going on?</p>
<p>If I insert a number into <code>a</code>, it gives a reasonable result:</p>
<pre><code>NIntegrate[2/(Sin[t]^2 + 4), {t, 0, 2 Pi}]
</code></pre>
<p>give <code>2.80993</code></p>
| cheap_slut95 | 29,881 | <p>I confirmed the result in both <em>Mathematica</em> 9 and 10 on Linux.</p>
<p>Using an assumption seems to work:</p>
<pre><code>Integrate[a/(Sin[t]^2 + a^2), {t, 0, 2 π}, Assumptions -> a > 0]
</code></pre>
<blockquote>
<pre><code>(2 π)/Sqrt[1 + a^2]
</code></pre>
</blockquote>
<p>which agrees with <code>NIntegrate</code>. </p>
<p>I'm not sure about the source of the discrepancy though. It could be a bug or the way the assumptions are made, i.e. by default <code>a</code> is considered complex, so either the integral is zero for complex <code>a</code> (didn't do the calcs) or <code>Integrate</code> gets confused somehow.</p>
<p>In any case, always remember to play around with the assumptions and numerically test the results.</p>
<p>Cheers</p>
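<p>For an additional numeric cross-check outside <em>Mathematica</em>, one can compare the closed form $2\pi/\sqrt{1+a^2}$ with a direct quadrature (a Python sketch; <code>simpson</code> is a hand-rolled composite Simpson's rule, not a library call):</p>

```python
from math import sin, pi, sqrt, isclose

def simpson(f, lo, hi, n=2000):
    # composite Simpson's rule; n must be even
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for i in range(1, n):
        s += f(lo + i * h) * (4 if i % 2 else 2)
    return s * h / 3

a = 2.0
numeric = simpson(lambda t: a / (sin(t) ** 2 + a * a), 0.0, 2 * pi)
closed = 2 * pi / sqrt(1 + a * a)
print(numeric, closed)  # both ~ 2.80993, matching NIntegrate
```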
|
3,810,856 | <p>I'm having trouble understanding how this form of the principle (<a href="https://en.wikipedia.org/wiki/Inclusion%E2%80%93exclusion_principle" rel="nofollow noreferrer">on wiki</a>) results in the form below.</p>
<p>Wiki form:
<a href="https://i.stack.imgur.com/6IXQZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6IXQZ.png" alt="wiki form" /></a></p>
<p>Using three sets:
<a href="https://i.stack.imgur.com/6maIp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6maIp.png" alt="example" /></a></p>
<p>My confusion is the last <span class="math-container">$(-1)^{n-1} |A_1 \cap \cdots \cap A_n|$</span>. Where does this last form appear in the three set example?</p>
<p>Also, if using two sets vs three, is the <span class="math-container">$\sum_{1 \le i \lt j \lt k \le n}$</span> unsatisfiable because there are no three <span class="math-container">$i, j, k$</span> terms to satisfy it? In that case this summation term disappears?</p>
| user | 505,767 | <p>We can cancel a factor <span class="math-container">$\sin \frac{\pi}{x}>0$</span> from both sides to obtain</p>
<p><span class="math-container">$$\frac{\pi}{\sin^2 \frac{\pi}{x}} \gt \frac{x}{\tan \frac{\pi}{x}}\iff \frac{\pi}{\sin^2 \frac{\pi}{x}} \gt \frac{x}{\frac{\sin\frac{\pi}{x}}{\cos\frac{\pi}{x}}}$$</span></p>
<p><span class="math-container">$$\iff \frac{\pi}{\sin \frac{\pi}{x}} \gt \frac{x}{\frac{1}{\cos\frac{\pi}{x}}} \iff \frac \pi x> \sin \frac{\pi}{x}\cos \frac{\pi}{x}=\frac12 \sin \frac{2\pi}{x}$$</span></p>
<p><span class="math-container">$$\iff \sin \frac{2\pi}{x}< \frac{2\pi}{x}$$</span></p>
<p>a well known inequality which holds for <span class="math-container">$\frac{2\pi}{x}>0$</span> and therefore for <span class="math-container">$x>2$</span>.</p>
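<p>The chain of equivalences can be spot-checked numerically at a few points $x>2$ (a Python sketch):</p>

```python
from math import sin, tan, pi

for x in [2.1, 3.0, 5.0, 10.0, 100.0]:
    lhs = pi / sin(pi / x) ** 2
    rhs = x / tan(pi / x)
    assert lhs > rhs, x
    assert sin(2 * pi / x) < 2 * pi / x  # the reduced form of the same inequality
print("inequality holds at all sampled x > 2")
```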
<p>Refer to the related</p>
<ul>
<li><a href="https://math.stackexchange.com/q/1806387/505767">Prove $\ \sin(x) < x \ \ \ \forall x \in(0, 2\pi)$</a></li>
<li><a href="https://math.stackexchange.com/q/75130/505767">How to prove that $\lim\limits_{x\to0}\frac{\sin x}x=1$?</a></li>
</ul>
|
1,614,899 | <p>Construct a context-free grammar for integers. An integer can begin with + or -, followed by a non-empty string of digits. An integer must not contain unnecessary leading zeros, and zero should not be preceded by + or -.
For example: 0; 123; -15; +9999 are correct, but +0; 01; +-3; +09; + are incorrect.</p>
<p>I have something like this:</p>
<p>(number) ::= (unsigned number) | (sign)(unsigned number)</p>
<p>(sign) ::= + | – </p>
<p>(unsigned number) ::= (digit) | (unsigned number)(digit) </p>
<p>(digit) ::= 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9</p>
<p>Is it okay? ;)</p>
| egreg | 62,967 | <p>I'll work under the assumption that a <em>cut</em> $\gamma$ is a subset of $\mathbb{Q}$ such that</p>
<ol>
<li>$\gamma\ne\emptyset$;</li>
<li>for all $a,b\in\mathbb{Q}$, if $a<b$ and $b\in\gamma$, then $a\in\gamma$;</li>
<li>for all $a\in\gamma$, there exists $b\in\gamma$ with $a<b$;</li>
<li>there exists $c\in\mathbb{Q}$ such that $c\notin\gamma$.</li>
</ol>
<p>The order relation is $\gamma<\delta$ if and only if $\gamma\subsetneq\delta$.</p>
<p>A subset $A$ of $\mathbb{R}$ is bounded if there exists $\varepsilon\in\mathbb{R}$ such that $\gamma<\varepsilon$, for every $\gamma\in A$.</p>
<p>In particular, if $A$ is bounded, then $\delta=\bigcup_{\gamma\in A}\gamma\ne\mathbb{Q}$, because by definition, $\gamma\subseteq\varepsilon$, for every $\gamma\in A$ and so $\delta$ is a subset of $\varepsilon$.</p>
<p>Proving properties 1–3 of $\delta$ is straightforward, so $\delta$ is indeed a cut and, of course, $\gamma\le\delta$, for every $\gamma\in A$.</p>
<p>Denote by $\mathbf{0}$ the cut
$$
\mathbf{0}=\{a\in\mathbb{Q}:a<0\}
$$
and consider $A=\{\gamma\in\mathbb{R}:\gamma<\mathbf{0}\}$. In particular, $\mathbf{0}\notin A$. Let's look at
$$
\delta=\bigcup_{\gamma\in A}\gamma
$$
Since every $\gamma\in A$ consists of negative rational numbers, also $\delta$ consists of negative rational numbers; in particular $\delta\subseteq\mathbf{0}$. Let $c\in\mathbb{Q}$, $c<0$. Then $c/2<0$ and
$$
\gamma_0=\{a\in\mathbb{Q}:a<c/2\}
$$
is a cut; moreover $\gamma_0<\mathbf{0}$ and $c\in\gamma_0$, so $c\in\delta$. Thus every negative rational number belongs to $\delta$ and we have so proved that $\delta=\mathbf{0}$.</p>
<p>This provides the required counterexample.</p>
|
312,012 | <p>What is the exact difference between $\arg\max$ and $\max$ of a function?</p>
<p>Is it right to say the following?</p>
<blockquote>
<p>$\arg\max f(x)$ is nothing but the value of $x$ for which the value of the function is maximal. And $\max f(x)$ means value of $f(x)$ for which it is maximum.</p>
</blockquote>
<p>More precisely,</p>
<blockquote>
<p>$\arg \max$ returns a value from the domain of the function and $\max$ returns from the range of the function?</p>
</blockquote>
| Michael Hardy | 11,667 | <p>Suppose $f(x) = 100 - (x-6)^2$.</p>
<p>Then $\max\limits_x f(x) = 100$ and $\operatorname*{argmax}\limits_x f(x) = 6$.</p>
|
2,112,259 | <p>I want to prove or disprove $\min{S_1}=\min{S_2}\Longleftrightarrow\min{S_1}\in S_2\land\min{S_2}\in S_1$ but I don't know where to start.</p>
<p>Maybe take cases like $\min{S_1}<\min{S_2}$, $\min{S_1}>\min{S_2}$ and $\min{S_1}=\min{S_2}$ and see where it is true?</p>
| Erick Wong | 30,402 | <p>You've already done the hard part: you now know that $p^2 - q^2 \equiv 1 \pmod 5$. But the only non-zero quadratic residues mod $5$ are $\pm 1$, so there are no solutions unless one of $p$ or $q$ is divisible by $5$. There are not many options for primes divisible by $5$.</p>
|
3,838,889 | <p>What is the probability that a hand of 10 cards contains exactly one ace given that it contains exactly two queens?</p>
<p>I am struggling to solve this maths problem. I know that the number of hands of <span class="math-container">$10$</span> cards dealt from a normal pack of <span class="math-container">$52$</span> is <span class="math-container">$52\choose 10$</span> <span class="math-container">$=$</span> <span class="math-container">$15820024220$</span>. And I know that there are <span class="math-container">$4$</span> queens in a normal pack of <span class="math-container">$52$</span>; the problem states that there are exactly two queens, which can be chosen in <span class="math-container">$4\choose2$</span> <span class="math-container">$= 6$</span> ways.</p>
<p>Any help is much appreciated - thanks!</p>
| tommik | 791,458 | <p>You can use the definition of conditional probability.</p>
<p><span class="math-container">$$P(A|B)=\frac{P(A\cap B)}{P(B)}$$</span></p>
<ul>
<li><p><span class="math-container">$P(A \cap B)$</span> is the probability to have 1A and 2Q in 10 cards. So you have</p>
<ul>
<li><span class="math-container">$\binom{4}{1}$</span> choices to get the Aces</li>
<li><span class="math-container">$\binom{4}{2}$</span> choices to get two Queens</li>
<li><span class="math-container">$\binom{44}{7}$</span> choices to get 7 cards different from Queens and Aces</li>
</ul>
</li>
</ul>
<p>Thus the probability of <span class="math-container">$P(A \cap B)=\frac{\binom{4}{1}\binom{4}{2}\binom{44}{7}}{\binom{52}{10}}$</span></p>
<p>Do the same reasoning to calculate the probability of having exactly 2Q, and then divide the two results:</p>
<p>Numerator: #cases to have exactly 1A and 2Q</p>
<p>Denominator: #cases to have exactly 2Q</p>
<p><span class="math-container">$$\frac{\binom{4}{1}\binom{4}{2}\binom{44}{7}}{\binom{4}{2}\binom{48}{8}}=\frac{3952}{9729}\approx 40,62\%$$</span></p>
|
501,993 | <p>I have never encountered "proof" questions before in my career, but this question in the textbook troubled me and I have totally no clue where to start.</p>
<p>Prove that the limit
$$\lim_{x\rightarrow0} \frac{1-\sqrt{1-x^2}}{x^2} = \frac{1}{2}$$</p>
| MathAdam | 266,049 | <p>L'Hôpital's rule to the rescue: </p>
<p>$$\lim_{x\rightarrow0} \frac{1-\sqrt{1-x^2}}{x^2} =\lim_{x\rightarrow0} \frac{\frac{d}{dx}\left(1-\sqrt{1-x^2}\right)}{\frac{d}{dx}\left(x^2\right)} = \lim_{x\rightarrow0}\frac{x/\sqrt{1-x^2}}{2x} = \lim_{x\rightarrow0}\frac{1}{2\sqrt{1-x^2}}$$</p>
<p>Let $x:=0$</p>
<p>$$\frac{1}{2\sqrt{1-0^2}}=\frac{1}{2}$$</p>
|
551,873 | <p>I have to solve the following problem:
Using $\exists$ Introduction prove that PA $\vdash x\leq y \wedge y\leq z \longrightarrow x\leq z$. I used that if $x\leq y$ then $\exists\, r\ x+r=y$ and in the same way $\exists\, t\ y+t=z$, but in logical terms I don't know how to use these equalities and the Peano axioms to get the result.</p>
| CopyPasteIt | 432,081 | <p>See <a href="https://en.wikipedia.org/wiki/Peano_axioms" rel="nofollow noreferrer">Wikipedia Peano Axioms</a>.</p>
<p>The point of this answer is to show that you can define an ordering of the natural numbers before you define addition.</p>
<p>Proposition 1: If $n$ is any number except $0$, there exist an $m$ such that</p>
<p>$n = S(m)$</p>
<p>Moreover, if the successor of both $m$ and $m^{'}$ is equal to $n$, then $m = m^{'}$.</p>
<p>Proof: Exercise</p>
<hr>
<p>Definitions:</p>
<p>The Decrement mapping $D$ </p>
<p>$ n \mapsto D(n)$ </p>
<p>sends any nonzero number $n$ to $m$, where $S(m) = n$.</p>
<p>We also define $1$ to be $S(0)$. Note that $D(1) = 0$</p>
<p>Note for any nonzero $n$ we have the 'keep decrementing unless you can't' number expressions:</p>
<p>(1) $D(n)$, $D(D(n))$, $D(D(D(n)))$, ...</p>
<p>It is understood that this list might terminate since you can't apply $D$ to $0$.</p>
<hr>
<p>Proposition 2: If $n$ is any nonzero number, the decrementing sequence (1) ends in the number $0$.</p>
<p>Proof: It is true for $n$ = 1.</p>
<p>Assume it is true for $n$, and consider $S(n)$. But $D(S(n)) = n$, so that for $S(n)$, compared to the terms for $n$, you simply get a new starting term. </p>
<p>QED</p>
<p>These logical expressions,</p>
<p>$m \ne n \;\; iff \;\; D(m) \ne D(n)$</p>
<p>are true provided both $m$ and $n$ are nonzero. By repeatedly applying Proposition 2 you will find that either $m$ or $n$ gets decremented to $0$ first.</p>
<p>Definition: If $m$ and $n$ are two distinct nonzero numbers, we say that $m$ is less than $n$, written $m \lt n$. if $m$ is decremented to zero before $n$.</p>
<p>We define the relation $\le$ in the obvious fashion and agree that for all natural numbers $n$, $\;0 \le n$.</p>
<p>Proposition 3: $x\leq y \wedge y\leq z \longrightarrow x\leq z$</p>
<p>Proof: Assume $x, y, z$ are three distinct nonzero numbers.</p>
<p>$x\leq y \wedge y\leq z \longrightarrow x\leq z$</p>
<p>IF AND ONLY IF</p>
<p>$D(x)\leq D(y) \wedge D(y)\leq D(z) \longrightarrow D(x)\leq D(z)$</p>
<p>As before, you see that $x$ is decremented to zero before both $y$ and $z$, so that $x \le z$.</p>
<p>We leave it to the reader to finish the proof.</p>
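<p>The decrement-race definition of the ordering can be sketched directly in code (a Python sketch with hypothetical helper names; ordinary integers stand in for the successor structure):</p>

```python
def D(n):
    # decrement: defined only for nonzero numbers (Proposition 1)
    assert n != 0
    return n - 1

def less_than(m, n):
    # m < n iff m is decremented to zero strictly before n
    while m != 0 and n != 0:
        m, n = D(m), D(n)
    return m == 0 and n != 0

# spot-check transitivity: x <= y and y <= z implies x <= z
for x, y, z in [(1, 2, 3), (2, 5, 9), (4, 4, 7), (0, 3, 3)]:
    if not less_than(y, x) and not less_than(z, y):
        assert not less_than(z, x)
print("transitivity holds on the samples")
```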
|
1,717,333 | <p>Feeling unsure, want to check if I am thinking correctly.</p>
<p>The exercise says: Let $n>1$ be a square-free integer. Show that $\mathbb{Z}[\sqrt{n}]/\langle \sqrt{n}\rangle\simeq\mathbb{Z}_n$.</p>

<p>My take: Let's construct a suitable homomorphism $\mathbb{Z}[\sqrt{n}]\rightarrow\mathbb{Z}_n$. Let $x=a+\sqrt{n}b$, which is the general form of $x\in\mathbb{Z}[\sqrt{n}]$.</p>

<p>Then an appropriate homomorphism would be $f(a+\sqrt{n}b)=a+0b\mod{n}(=a\mod{n})$. Its kernel is all elements with $a=0$ and.. and.. I don't know what "and". I think I am done, but feel pretty unsure.</p>

<p>A little help is appreciated. My 'homomorphism' doesn't look like a homomorphism to me.</p>
| learner | 228,313 | <p>Your proof's okay.</p>
<p>The map you define is indeed a homomorphism from $(\Bbb Z[\sqrt n],+,*)$ to $(\Bbb Z_n,+_n,*_n)$ since you have,</p>
<p>$$\begin{align}f((a+\sqrt n b)+(c+\sqrt n d))&=f((a+c)+\sqrt n(b+d))\\&=(a+c)~\bmod~n\\&=(a~\bmod~n)+_n(b~\bmod~n)=f(a+\sqrt n b)+_nf(c+\sqrt n d)\end{align}$$</p>
<p>$$\begin{align}f((a+\sqrt n b)(c+\sqrt n d))&=f((ac+nbd)+\sqrt n(bc+ad))\\&=(ac+nbd)~\bmod~n\\&=(ac)~\bmod~n\\&=(a~\bmod~n)*_n (c~\bmod~n)=f(a+\sqrt n b)*_n f(c+\sqrt n d)\end{align}$$</p>
<p>Now, since the kernel of $f$ is given by,</p>
<p>$\begin{align}\textrm{Ker}(f)=\{nd+\sqrt n b\mid d,b\in\Bbb Z\}&=\{(b+\sqrt n d)\sqrt n\mid d,b\in\Bbb Z\}\\&=\{m\sqrt n\mid m\in\Bbb Z[\sqrt n]\}\\&=\langle \sqrt n\rangle\end{align}$</p>
<p>and $f$ is surjective ($\forall~x\in\Bbb Z_n~,~\exists y=x+\sqrt n\in\Bbb Z[\sqrt n]$ such that $f(y)=x$), so $f$ is an epimorphism and by the first isomorphism theorem, we conclude our result.</p>
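<p>The homomorphism can also be spot-checked computationally, representing $a+\sqrt n\, b$ as the pair $(a,b)$ (a Python sketch; $n=5$ is an arbitrary square-free choice):</p>

```python
import random

n = 5  # any square-free integer > 1

def add(u, v):
    return (u[0] + v[0], u[1] + v[1])

def mul(u, v):
    # (a + sqrt(n) b)(c + sqrt(n) d) = (ac + n b d) + sqrt(n)(ad + bc)
    a, b = u
    c, d = v
    return (a * c + n * b * d, a * d + b * c)

def f(u):
    return u[0] % n  # a + sqrt(n) b  ->  a mod n

random.seed(0)
for _ in range(200):
    u = (random.randint(-50, 50), random.randint(-50, 50))
    v = (random.randint(-50, 50), random.randint(-50, 50))
    assert f(add(u, v)) == (f(u) + f(v)) % n
    assert f(mul(u, v)) == (f(u) * f(v)) % n
print("f respects + and * on all sampled elements")
```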
|
2,862,166 | <p>In a paper I've been studying it says:</p>
<blockquote>
<p>Let $x$ in the cone $\mathbb R_+^n$ of all vectors in $\mathbb R^n$ with nonnnegative components ($n\in\mathbb N$)</p>
</blockquote>
<p>Can somebody tell me what it means, please? $\mathbb R_+^n$ should be $[0,\infty)\times\dots\times [0,\infty)$ ($n$ times), but I don't understand why <em>the cone $\mathbb R_+^n$</em>. Maybe <em>the cone $\mathbb R_+^n$</em> is different from $[0,\infty)\times\dots\times [0,\infty)$?</p>
| Kavi Rama Murthy | 142,385 | <p>A cone $C$ is a set in a vector space with two properties: $x+y \in C$ whenever $x,y \in C$, and $tx \in C$ whenever $x\in C$ and $t \geq 0$. Since $[0,\infty ) \times [0,\infty ) \times ...[0,\infty ) $ has these properties, it is a cone. </p>
|
1,637,237 | <p>I'm studying differential calculus, but one of the questions involves solving an inequality:</p>
<p>$$(x-2)e^x < 0$$</p>
<p>I intend to go deeper into solving inequalities later, but I just want to understand how the teacher got the following solution in order to advance in these lectures:
$$x-2 < 0$$
$$x < 2$$</p>
<p>Where did the $e^x$ go? There's some rule to solve these inequalities involving $e$?</p>
| 5xum | 112,884 | <p>For all values of $x$, $e^x>0$ is true. This means that $a\cdot e^x > 0$ is true if and only if $a>0$.</p>
|
4,114,360 | <p><strong>Question</strong> :</p>
<blockquote>
<p>Andy and Beth are playing a game worth $100. They take turns flipping
a penny. The first person to get 10 heads will win.</p>
<p>But they just realized that they have to be in math class right away
and are forced to stop the game.</p>
<p>Andy had four heads and Beth had seven heads. How should they divide
the pot?</p>
</blockquote>
<p>This is a question from the book <strong>Probability: With Applications and R by Robert P. Dobrow</strong> .Now the given answer is :</p>
<blockquote>
<p>P(Andy wins) = 0.1445. Andy gets \$14.45 and Beth gets \$85.55.</p>
</blockquote>
<p>But I am not getting this answer. So can someone check my solution and tell me where I am wrong, or else provide a better solution? Thanks in advance.</p>
<p><strong>Proposed Solution</strong> : Let Andy be A and Beth B: A needs 6 heads to win, and B needs 3 heads to win.</p>
<p>Let <span class="math-container">$X_a$</span> be the number of times A needs to flip the coin to get 6 wins , and <span class="math-container">$X_b$</span> be the number of flips for B to get 3 wins . Both <span class="math-container">$X_a$</span> and <span class="math-container">$X_b$</span> are Negative Binomial Variables with the <span class="math-container">$X_a$</span> having parameters 6 and <span class="math-container">$p=\frac{1}{2}$</span> and <span class="math-container">$X_b$</span> having parameters 3 and <span class="math-container">$p=\frac{1}{2}$</span>.</p>
<p>Now suppose A needs <span class="math-container">$k$</span> flips to win. Then the required probability is:</p>
<p><span class="math-container">$P(A \;wins) = P(X_a = k)*P(X_b > k)$</span>(i.e A wins in <span class="math-container">$k$</span> moves and B gets 3 heads after <span class="math-container">$k$</span> flips)</p>
<p><span class="math-container">$P(X_a = k)=(^{k-1}C_5)*(\frac{1}{2})^6(\frac{1}{2})^{k-6}=(^{k-1}C_5)*(\frac{1}{2})^k$</span>(Negative Binomial Distribution formula)</p>
<p><span class="math-container">$P(X_b > k) =$</span> The 3rd head should come after <span class="math-container">$k$</span> trials, therefore in the first <span class="math-container">$k$</span> trials only <span class="math-container">$0\;or\;1\;or\;2$</span> heads can occur.</p>
<p>Therefore : <span class="math-container">$P(X_b>k) = \sum_{i=0}^2{^{k}C_i}*(\frac{1}{2})^i(\frac{1}{2})^{k-i}=\sum_{i=0}^2{^{k}C_i}*(\frac{1}{2})^k$</span></p>
<p>So <span class="math-container">$P(A\;wins) = (^{k-1}C_5*(\frac{1}{2})^k) * (\sum_{i=0}^2{^{k}C_i}*(\frac{1}{2})^k) $</span> . Now minimum 6 tries are required for A to win, so <span class="math-container">$k:6 \to \infty$</span></p>
<p><span class="math-container">$\begin{equation}
P(A\;wins) = \sum_{k=6}^{\infty}\biggl( (^{k-1}C_5*(\frac{1}{2})^k) * (\sum_{i=0}^2{^{k}C_i}*(\frac{1}{2})^k) \biggr)
\\
= \sum_{k=6}^{\infty}\biggl((\frac{1}{2})^{2k}*(^{k-1}C_5)*(^kC_0+^kC_1+^kC_2)) \biggr)
\\
=\sum_{k=6}^{\infty}\biggl(\frac{1}{2^{2k+1}}*(^{k-1}C_5)*(k^2+k+2)\biggr)
\end{equation}$</span></p>
<p>Now I calculated the sum on my calculator for <span class="math-container">$k : 6 \to 150$</span> and it is converging to 0.052, which is very far off from the given answer of <span class="math-container">$P(Andy\; wins) = 0.1445$</span>. So is my method wrong? Or is the convergence of the sum slow?</p>
<p>Can someone solve this question, employing an entirely different method if necessary.</p>
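<p>For reference, the book's figure coincides with the classical Pascal–Fermat "problem of points" computation, in which the game is decided within at most $6+3-1=8$ further rounds and each round gives one player a point with probability $\frac12$ (a Python sketch; note this is a different model from the per-player negative binomials used above):</p>

```python
from math import comb

need_a, need_b = 6, 3              # heads still needed by Andy and Beth
rounds = need_a + need_b - 1       # 8 more rounds always decide the game
p_andy = sum(comb(rounds, j) for j in range(need_a, rounds + 1)) / 2 ** rounds
print(p_andy)  # 37/256 = 0.14453125
```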
| Mark Saving | 798,694 | <p>Consider the set <span class="math-container">$S = \{x | \forall n \in \mathbb{N}, x \neq n + \frac{1}{2n}\}$</span>. I claim that this set is open.</p>
<p>For consider some <span class="math-container">$x \in S$</span>. Then by the Archimedean property, we can find some <span class="math-container">$n$</span> such that <span class="math-container">$n > x$</span>. Then consider <span class="math-container">$\delta = \min\limits_{1 \leq m \leq n} |x - (m + \frac{1}{2m})|$</span>. Then we see that <span class="math-container">$B_{\delta}(x) \subseteq S$</span>. For suppose <span class="math-container">$y \in B_{\delta}(x)$</span>. And suppose <span class="math-container">$y = k + \frac{1}{2k}$</span>. Then <span class="math-container">$|x - y| = |x - (k + \frac{1}{2k})| < \delta$</span>, so it must be the case that <span class="math-container">$k > n$</span>. But then <span class="math-container">$k + \frac{1}{2k} > k \geq n + 1 > n + \frac{1}{2n} > x$</span>, so <span class="math-container">$\delta > |x - (k + \frac{1}{2k})| = k + \frac{1}{2k} - x > n + \frac{1}{2n} - x = |x - (n + \frac{1}{2n})| \geq \delta$</span>. This is a contradiction.</p>
<p>And clearly, the complement of <span class="math-container">$S$</span> is <span class="math-container">$\{n + \frac{1}{2n} | n \in \mathbb{N}\}$</span>.</p>
|
3,452,518 | <p>I will call an <em>average</em> any continuous function <span class="math-container">$f(v_1,\dots,v_n)$</span> of <span class="math-container">$n$</span> arguments such that it lies in the closed interval <span class="math-container">$[\min(v_{1},\dots ,v_{n});\max(v_{1},\dots ,v_{n})]$</span>, is symmetric for all permutations of arguments, and is <a href="https://en.wikipedia.org/wiki/Homogeneous_function" rel="nofollow noreferrer">homogeneous</a> with the degree <span class="math-container">$1$</span>.</p>
<p>Question: Can an average <em>not</em> tend to infinity when one of the arguments tends to infinity and the remaining arguments are fixed nonnegative reals? (We can assume <span class="math-container">$n\geq 2$</span>.)</p>
<p>My original question had a trivial solution <span class="math-container">$\min(v_{1},\dots ,v_{n})$</span>, so let's add an additional requirement: the average should also <em>not</em> tend to zero when one of the arguments tends to zero and the remaining arguments are fixed positive reals. (We can assume <span class="math-container">$n\geq 2$</span>.)</p>
| Andrei | 331,661 | <p>Not sure this matches your needs, but <span class="math-container">$f(v_1,...,v_n)=\min(v_1,...,v_n)$</span> might obey all your requirements. It is homogeneous of degree 1, symmetric under all permutations of arguments, and if only one argument goes to infinity, it will not change its value. Also, it lies in the closed interval you want.</p>
|
256,415 | <p>I need to store values of a function of x and y defined as <code>f[x_, y_] := Sin[Pi*x/3]*Sin[Pi*y/3]</code> at the following points <code>xval = {0, 0.5, 1, 1.5, 2, 2.5, 3, 3.5, 4, 4.5, 5, 5.5, 6, 6.5, 7, 7.5, 8, 8.5, 9, 9.5, 10, 10.5, 11, 11.5, 12, 12.5, 13, 13.5, 14, 14.5, 15};</code>, same for y.</p>
<p>I need it to be a matrix of dimensions <code>{961,3}</code>, being the first two columns the values of X and Y and the third, the value of f(x,y) at said points.</p>
<p>I have tried to use <a href="https://reference.wolfram.com/language/ref/Outer.html" rel="nofollow noreferrer">outer</a> as <code>Outer[f, xval, yval];</code>, but it provides a result whose dimensions are <code>{31, 31}</code>. Is there a way to reshape the dimensions or an alternative to <code>Outer</code> which allows for the data to be stored as I need?</p>
| Nasser | 70 | <p>another alternative is to use <code>meshgrid</code></p>
<pre><code>ClearAll[f, x, y];
f[x_, y_] := Sin[Pi*x/3]*Sin[Pi*y/3];
meshgrid[x_List, y_List] := {ConstantArray[x, Length[x]],
Transpose@ConstantArray[y, Length[y]]}
{xx, yy} = meshgrid[Range[0, 15, .5], Range[0, 15, .5]];
u = f[xx, yy];
pts = Flatten[{xx, yy, u}, {2, 3}];
ListPlot3D[pts, PlotRange -> All, AxesLabel -> Automatic,
ImagePadding -> 20, Mesh -> 35, InterpolationOrder -> 2,
ColorFunction -> "Rainbow", Boxed -> False]
</code></pre>
<p><img src="https://i.stack.imgur.com/emI3S.png" alt="Mathematica graphics" /></p>
<p>But if you really need to have them all in one matrix, then</p>
<pre><code>meshgrid[x_List, y_List] := {ConstantArray[x, Length[x]],
Transpose@ConstantArray[y, Length[y]]}
{xx, yy} = meshgrid[Range[0, 15, .5], Range[0, 15, .5]];
x0 = Flatten[xx]; y0 = Flatten[yy];
mat = Transpose[{x0, y0, f[x0, y0]}];
Dimensions[mat]
</code></pre>
<p><img src="https://i.stack.imgur.com/6nXEW.png" alt="Mathematica graphics" /></p>
<p>Reference <a href="https://mathematica.stackexchange.com/questions/30701/simulate-matlabs-meshgrid-function/30706#30706">Simulate MATLAB's meshgrid function</a></p>
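<p>For readers cross-checking outside Mathematica, here is a rough Python analogue of the flattened <code>{961, 3}</code> matrix (a sketch only; the exact row ordering produced by <code>Flatten</code> is an assumption and may differ):</p>

```python
import math

def f(x, y):
    # same test function as in the question: Sin[Pi x/3] Sin[Pi y/3]
    return math.sin(math.pi * x / 3) * math.sin(math.pi * y / 3)

xval = [i * 0.5 for i in range(31)]   # 0, 0.5, ..., 15 (31 values)
# one row [x, y, f(x, y)] per grid point, 31*31 = 961 rows of 3 columns
mat = [[x, y, f(x, y)] for x in xval for y in xval]

assert len(mat) == 961
assert all(len(row) == 3 for row in mat)
```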
|
4,500,550 | <p>I have a very basic question about cohomology of sheaves. Suppose <span class="math-container">$\mathcal{F}$</span> is a sheaf of abelian groups over a topological space <span class="math-container">$X$</span>. Then <span class="math-container">$\mathcal{F}$</span> itself is a topological space with a continuous map <span class="math-container">$\sigma : \mathcal{F} \rightarrow X$</span>.</p>
<p>How are the sheaf cohomology groups <span class="math-container">$H^i(X,\mathcal{F})$</span> of <span class="math-container">$X$</span> related to <span class="math-container">$H^i(\mathcal{F},\mathbb{Z})$</span>, the singular cohomology groups of the space <span class="math-container">$\mathcal{F}$</span>?</p>
<p>Can we also consider "relative" cohomology groups <span class="math-container">$H^i(\mathcal{F}, X)$</span> using the zero section and relate it to the other cohomology groups, perhaps with a spectral sequence?</p>
| C.D. | 553,510 | <p>I had some fun playing with examples of the espace étalé in different settings. I'll write <span class="math-container">$X_\mathcal{F}$</span> for the espace étalé of <span class="math-container">$\mathcal{F}$</span> on <span class="math-container">$X$</span>.
Summary:</p>
<ol>
<li>In differential/topological settings, the cohomology of the espace étalé will almost always be extremely pathological.</li>
<li>The singular cohomology of <span class="math-container">$X_{\mathcal{F}}$</span> is more closely related to the singular cohomology of the base <span class="math-container">$X$</span>, often a very different beast than the <span class="math-container">$\mathcal{F}$</span>-sheaf cohomology. So the comparison is a bit apples-to-oranges.</li>
</ol>
<p><strong>Differential-type Settings</strong></p>
<p>Let's first look at sheaves that are fine (roughly, where bump functions exist -- think topological and differential-geometric settings). Take for example <span class="math-container">$X = S^1$</span> and <span class="math-container">$\mathcal{F} = C^\infty(X)$</span> to be smooth real-valued functions. Since the sheaf is fine, <span class="math-container">$H^1(X, \mathcal{F}) = 0$</span> (this is a standard Cech cohomology fact, similar to the proof that flabby sheaves are acyclic).</p>
<p>To get some intuition for what <span class="math-container">$X_{\mathcal{F}}$</span> looks like, let's think about its fundamental group. Pick two distinct points <span class="math-container">$p, q$</span> on <span class="math-container">$X$</span>, and fix a germ <span class="math-container">$f_p$</span> at <span class="math-container">$p$</span> (hence, a point in <span class="math-container">$X_\mathcal{F}$</span>). Given <em>any</em> <span class="math-container">$g_q$</span> germ at <span class="math-container">$q$</span>, we can smoothly interpolate
from <span class="math-container">$f_p$</span> to <span class="math-container">$g_q$</span> and then around the other side of the circle to complete a loop in the espace étalé. None of these loops for distinct <span class="math-container">$g_q$</span> can be deformed into each other: roughly, this is because the espace étalé equips the fiber over <span class="math-container">$q$</span> with the discrete topology: any deformation of our loop has to give rise to a continuously moving point in that fiber, which therefore has to be constant. All of these nonequivalent loops live over a generator for <span class="math-container">$\pi_1(S^1)$</span>. In general, we can specify arbitrary germs at an arbitrary finite sequence of points along with signs specifying which way around to get there, and there will be uncountably many nonequivalent loops satisfying that data. Yikes! Note also that this space is definitely NOT locally simply connected.</p>
<p>Okay, but all that was about the fundamental group. There is some hope that <span class="math-container">$H_1 = \pi_1^{ab}$</span> might be much smaller, since the group described above is highly noncommutative, but I am not optimistic. In particular, since any commutator of loops lies in the kernel of the map <span class="math-container">$\pi_1(X_{\mathcal{F}}) \to \pi_1(S^1)$</span>, we at least have a surjective map <span class="math-container">$H_1(X_{\mathcal{F}}) \to \pi_1(S^1) = H_1(S^1) = \mathbb{Z}$</span>, so we can still see the first homology of the base.</p>
<p>Moral: <em>the singular (co)homology of <span class="math-container">$X_{\mathcal{F}}$</span> will always be able to see the singular (co)homology of the base space</em>, while the sheaf cohomology can't, in general.</p>
<p>Of course, if we take <span class="math-container">$\mathcal{F} = \underline{\mathbb{Z}}$</span> the constant sheaf on <span class="math-container">$X = S^1$</span>, then <span class="math-container">$H^i(X_{\mathcal{F}}, \mathbb{Z}) = \bigoplus_{i \in \mathbb{Z}} H^i(X, \mathbb{Z})$</span>, and so the two are not so far apart. But you can see this is a uniquely convenient choice: we are still comparing singular cohomology of the base to its sheaf cohomology, and for <span class="math-container">$\underline{\mathbb{Z}}$</span> these happen to coincide.</p>
<p><strong>Rigid Settings</strong></p>
<p>Things are less huge in rigid (e.g. algebraic or complex-analytic settings), because a germ at a point determines the function uniquely on its full domain, but here the espace étalé isn't very useful or interesting for the same reason. Given <span class="math-container">$X$</span> an (integral) algebraic variety or complex manifold over <span class="math-container">$\mathbb{C}$</span> and <span class="math-container">$\mathcal{F} = \mathcal{O}$</span> (algebraic or holomorphic functions), each function <span class="math-container">$f$</span> determines a connected component of <span class="math-container">$X_{\mathcal{F}}$</span> which is just homeomorphic to <span class="math-container">$U$</span>, the maximal domain of <span class="math-container">$f$</span>. So the espace étalé ends up just being a bunch of disjoint copies of open subsets of <span class="math-container">$X$</span>, indexed by all the rational/meromorphic functions on <span class="math-container">$X$</span>. Not very interesting, and in particular its singular cohomology can't see anything except topological properties of the variety, while <span class="math-container">$H^i(X, \mathcal{O})$</span> sees more geometric/analytic information.</p>
|
404,659 | <p>The following is copied verbatim from a book (I. Protasov, <em>Combinatorics of numbers</em>, p. 14):</p>
<blockquote>
<p>Suppose that to each point <span class="math-container">$x$</span> of a set <span class="math-container">$X$</span> a collection <span class="math-container">$\mathcal{B}(x)$</span> of subsets of <span class="math-container">$X$</span>, which are called <em>neighborhoods of</em> <span class="math-container">$x$</span>, is assigned so that the following conditions are satisfied:</p>
<p>(B1) <span class="math-container">$x\in U$</span> for every neighborhood <span class="math-container">$U \in \mathcal{B}(x)$</span>;<br>(B2) if <span class="math-container">$U \subseteq V, U \in \mathcal{B}(x)$</span>, then <span class="math-container">$V\in \mathcal{B}(x)$</span>;<br>(B3) if <span class="math-container">$U_1, \dots, U_n \in \mathcal{B}(x)$</span>, then <span class="math-container">$U_1 \cap \dots \cap U_n\in \mathcal{B}(x)$</span>;<br>(B4) if <span class="math-container">$U\in \mathcal{B}(x)$</span>, then there is a neighborhood <span class="math-container">$V\in\mathcal{B}(x)$</span> such that <span class="math-container">$U\in \mathcal{B}(y)$</span> for every <span class="math-container">$y\in V$</span>.</p>
<p>A subset <span class="math-container">$A\subseteq X$</span> is defined to be <em>open</em>, if <span class="math-container">$A$</span> is a neighborhood of each of its points, i.e. <span class="math-container">$A\in\mathcal{B}(x)$</span> for every <span class="math-container">$x\in A$</span>. Evidently, open sets satisfy the following properties:</p>
<p>(O1) <span class="math-container">$X, \varnothing$</span> are open sets;<br>(O2) if <span class="math-container">$U_1, \dots, U_n$</span> are open sets, then <span class="math-container">$U_1 \cap \dots \cap U_n$</span> is an open set;<br>(O3) if <span class="math-container">$U_\alpha, \alpha\in J$</span>, is a collection of open sets, then <span class="math-container">$\bigcup\{U_\alpha : \alpha \in J\}$</span> is an open set.</p>
</blockquote>
<p>I don't see why (O1) is true. More generally, I don't see what guarantees that <span class="math-container">$\mathcal{B}(x) \neq \varnothing$</span> for any <span class="math-container">$x \in X$</span>. In fact, if we set <span class="math-container">$\mathcal{B}(x) = \varnothing$</span> for all <span class="math-container">$x \in X$</span>, conditions (B1)-(B4) are all satisfied vacuously. In this case there could be no open sets, so (O1) could not hold.</p>
<p>Am I missing something?</p>
<p>If not, there must be some error in the book, and I'd like to know how to fix it such that what results agrees with the standard way of defining open sets in terms of neighborhoods.</p>
| kjo | 13,675 | <p>I think I finally got it. All we need to do is replace B3 with</p>
<p>(B3) each collection $\mathcal{B}(x)$ is closed with respect to finite (including empty) intersections;</p>
<p>Then the empty intersection (namely $X$) will belong to every $\mathcal{B}(x)$.</p>
<p>(It could be argued that the original (B3) already implies this.)</p>
|
2,025,711 | <p>What is the sum of this series? $$\sum_{n\geq 1}(-1)^{n+1}\cdot\frac{2n+1}{n(n+1)}$$
I don't know how to solve it, especially with that $(-1)^{n+1}$ factor.</p>
| Fred | 380,717 | <p>Hints: </p>
<p>1. $(-1)^{n+1}\cdot\frac{2n+1}{n(n+1)}=(-1)^{n+1}\left(\frac{1}{n}+\frac{1}{n+1}\right)$ </p>
<p>and </p>
<ol start="2">
<li>$ \sum_{n \ge 1}(-1)^{n+1}\frac{1}{n}= \ln (2)$</li>
</ol>
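<p>A quick numeric sanity check of these hints (a Python sketch, not part of the original answer): the decomposition in hint 1 telescopes, giving partial sums $S_N = 1 + \frac{(-1)^{N+1}}{N+1}$, so the series converges to $1$.</p>

```python
from fractions import Fraction

def partial_sum(N):
    # S_N = sum_{n=1}^{N} (-1)^{n+1} (2n+1) / (n(n+1)), computed exactly
    return sum(Fraction((-1) ** (n + 1) * (2 * n + 1), n * (n + 1))
               for n in range(1, N + 1))

# hint 1 telescopes: S_N = 1 + (-1)^(N+1)/(N+1), so S_N -> 1
for N in (1, 2, 10, 11, 1000):
    assert partial_sum(N) == 1 + Fraction((-1) ** (N + 1), N + 1)
```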
|
2,699,842 | <p>I've been looking at this question for hours now, but I can't grasp it: </p>
<p>Let A be a symmetric matrix with minimum eigenvalue $\lambda_{\text{min}}$ . Give a bound on the largest element of $A^{−1}$.</p>
<p>I was looking at the spectral decomposition $A=VDV^T$ with $D$ a diagonal matrix; however, I don't think this is the right way, since in this case it is not always true that $A^{-1}=A^T$.</p>
| mechanodroid | 144,766 | <p>If a matrix $M$ is symmetric then the operator norm of $M$ is given by </p>
<p>$$\|M\| = \max_{\lambda \in \sigma(M)} |\lambda|$$</p>
<p>The $(i,j)$-th entry of a matrix $M$ is given by $\langle Me_j, e_i\rangle$ where $\{e_1, \ldots, e_n\}$ is the canonical basis.</p>
<p>Therefore $$\left|\langle Me_i, e_j\rangle\right| \stackrel{CSB}{\le} \|Me_i\|\|e_j\| \le \|M\|$$</p>
<p>If your matrix $A$ is symmetric, then $A^{-1}$ is also symmetric. Furthermore, its spectrum is given by $$\sigma(A^{-1}) = \frac{1}{\sigma(A)} = \left\{\frac1\lambda : \lambda \in \sigma(A)\right\}$$</p>
<p>If $\lambda_{\text{min}}$ is the smallest (by absolute value) eigenvalue of $A$, then $\displaystyle\frac1{\lambda_{\text{min}}}$ is the largest (by absolute value) eigenvalue of $A^{-1}$.</p>
<p>Hence:</p>
<p>$$\left|\langle A^{-1}e_i, e_j\rangle\right| \le \|A^{-1}\| = \frac{1}{|\lambda_\text{min}|}$$</p>
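<p>A tiny numeric illustration of the bound (a Python sketch with a hand-picked matrix, not a proof): $A=\begin{pmatrix}2&1\\1&2\end{pmatrix}$ is symmetric with eigenvalues $1$ and $3$, so every entry of $A^{-1}$ should be at most $1/|\lambda_{\text{min}}| = 1$.</p>

```python
# A = [[2, 1], [1, 2]] is symmetric with eigenvalues 1 and 3, so
# lambda_min = 1 and the bound says max |(A^{-1})_{ij}| <= 1/|lambda_min| = 1.
A = [[2.0, 1.0], [1.0, 2.0]]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
A_inv = [[ A[1][1] / det, -A[0][1] / det],
         [-A[1][0] / det,  A[0][0] / det]]
lam_min = 1.0
max_entry = max(abs(v) for row in A_inv for v in row)
assert max_entry <= 1 / abs(lam_min)   # here max_entry = 2/3
```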
|
749,171 | <p>Right now I'm taking a 3-part course on probability and statistics using DeGroot & Schervish's <em>Probability and Statistics</em>, and it is just not helpful. For the first part, which was on probability, I used <em>A First Course in Probability</em> by Sheldon Ross, which was extremely helpful; unfortunately it doesn't have much statistics. So is there a similar textbook that is focused on statistics? </p>
| Community | -1 | <p>$$\frac{f(2)-f(0)}{2-0}=\frac{9-1}{2}=4$$ So solve $$f'(c)=6c-2=4$$ to get $$c=1$$</p>
|
215,227 | <p>I'm looking at a past homework solution and there is a part of it I don't understand. Specifically I'm talking about number 2 in <a href="http://www.math.cmu.edu/~af1p/Teaching/Combinatorics/F08/hw5a.pdf" rel="nofollow">this problem set</a>:</p>
<blockquote>
<p>Let $m=\lfloor(8/7)^{n/3}\rfloor$. Show that there exist distinct sets $A_1,A_2,\dots,A_m\subseteq[n]$ such that for all distinct $i,j,k\in[m]$ we have $A_i\cap A_j\nsubseteq A_k$.</p>
</blockquote>
<p>Given $A_1, A_2, A_3,..., A_m$, which are subsets of $ [n] $, can someone explain in detail why the probability that $A_i \cap A_j \subseteq A_k$ is equal to $(\frac7 8)^{n}$? </p>
| Ross Millikan | 1,827 | <p>The sets $A_i$ were constructed by taking each element of $[n]$ with probability $\frac 12$. An element $p$ can render $A_i \cap A_j \subseteq A_k$ false if it is in $A_i$ and $A_j$ but not in $A_k$. This requires three choices to be right, so happens in $\frac 18$ of the cases. To have $A_i \cap A_j \subseteq A_k$ true, none of the elements can render it false, so the chance is $(\frac 78)^n$</p>
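<p>The per-element count in this argument can be checked by brute force (a small Python sketch of the reasoning above):</p>

```python
from itertools import product

# For a single element p, the 8 equally likely membership patterns are
# (p in A_i?, p in A_j?, p in A_k?); the containment A_i ∩ A_j ⊆ A_k
# fails at p exactly when p is in A_i and A_j but not A_k -- 1 pattern of 8.
safe = sum(1 for i, j, k in product((0, 1), repeat=3)
           if not (i and j and not k))
assert safe == 7

# independence over the n elements then gives P(A_i ∩ A_j ⊆ A_k) = (7/8)^n
n = 5
assert abs((safe / 8) ** n - (7 / 8) ** n) < 1e-15
```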
|
2,864 | <p>Is there a way to detect double click events? I did not find anything on the doc page of <a href="http://reference.wolfram.com/mathematica/ref/EventHandler.html" rel="nofollow"><code>EventHandler</code></a>.</p>
<p>Use case: I want to re-implement the <code>Crop Image...</code> functionality available form the context menu when clicking images. The cropping GUI appears much too slowly with even moderately large images, such as screenshots of the full screen. Although I could use a button to crop, I'd prefer using double click, just like in the original implementation.</p>
| Mr.Wizard | 121 | <p>You could do something like this:</p>
<pre><code>DynamicModule[{col = Green, time = AbsoluteTime[]},
EventHandler[
Style["text", FontColor -> Dynamic[col]],
"MouseClicked" :>
If[AbsoluteTime[] - time > 0.25,
time = AbsoluteTime[],
col = col /. {Red -> Green, Green -> Red}]
]]
</code></pre>
<p>Implementing limited tolerance for mouse position will be more complicated.</p>
|
2,864 | <p>Is there a way to detect double click events? I did not find anything on the doc page of <a href="http://reference.wolfram.com/mathematica/ref/EventHandler.html" rel="nofollow"><code>EventHandler</code></a>.</p>
<p>Use case: I want to re-implement the <code>Crop Image...</code> functionality available form the context menu when clicking images. The cropping GUI appears much too slowly with even moderately large images, such as screenshots of the full screen. Although I could use a button to crop, I'd prefer using double click, just like in the original implementation.</p>
| kglr | 125 | <p>You can use a combination of <code>MouseDown</code> and <code>MouseClickCount</code> as in the following examples: </p>
<p>example 1: double-click increments the value of <code>j</code>:</p>
<pre><code>j = 1; EventHandler[Panel[Dynamic[j]],
"MouseDown" :> If[CurrentValue["MouseClickCount"] == 2, ++j]]
</code></pre>
<p>example 2: double-click toggles the text color:</p>
<pre><code>DynamicModule[{col = Green},
EventHandler[
Style["text", FontColor -> Dynamic[col]],
{"MouseDown" :>
If[CurrentValue["MouseClickCount"] == 2,
(col = col /. {Red -> Green, Green -> Red})]}
]]
</code></pre>
|
71,670 | <pre><code>TableForm[Table[i/j + 4*Boole[j > i] // N, {i, 3}, {j, 4}], TableHeadings -> {{"Row1", "Row2", "Row3"}, {"Col1", "Col2", "Col3", "Col4"}}]
</code></pre>
<p>Produces the following table:</p>
<p><img src="https://i.stack.imgur.com/Dqnmr.png" alt="enter image description here"></p>
<p>How do I select the maximum value in each row and make it bold?</p>
<p>So that <code>4.5</code>, <code>4.66667</code> and <code>4.75</code> are bold in the 1st, 2nd and 3rd row.</p>
<p>Thanks</p>
| kglr | 125 | <p>Using <code>Position</code> with <code>MapAt</code>:</p>
<pre><code>ClearAll[sF];
sF = With[{pos = Thread[{Range@Length@#, Position[#, Max@#][[1, 1]] & /@ #}]},
MapAt[Style[#, Bold, Red] &, #, pos]] &;
t = Table[i/j + 4*Boole[j > i] // N, {i, 3}, {j, 4}];
h = {{"Row1", "Row2", "Row3"}, {"Col1", "Col2", "Col3", "Col4"}};
TableForm[sF@t, TableHeadings -> h]
</code></pre>
<p><img src="https://i.stack.imgur.com/D90vm.png" alt="enter image description here"></p>
<p>... with <code>ReplacePart</code>:</p>
<pre><code>ClearAll[sF2];
sF2 = With[{m=#, pos = Thread[{Range@Length@#, Position[#, Max@#][[1, 1]] & /@ #}]},
ReplacePart[m, (# -> Style[m[[## & @@ #]], Bold, Red]) & /@ pos]] &;
TableForm[sF2@t, TableHeadings -> h]
(* same picture *)
</code></pre>
<p>... with <code>Part</code> assignment:</p>
<pre><code>ClearAll[sF3];
sF3 = Module[{r=#, p = Position[#, Max@#][[1]]},
(r[[#]] = Style[r[[#]], Bold, Red]) & /@ p; r] & /@ # &;
TableForm[sF3@t, TableHeadings -> h]
(* same picture *)
</code></pre>
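<p>For comparison, the row-wise maximum selection done by <code>Position[#, Max@#]</code> above is easy to mirror in other languages; a Python sketch (0-based indices):</p>

```python
# rebuild the table i/j + 4*Boole[j > i] for i = 1..3, j = 1..4
table = [[i / j + (4 if j > i else 0) for j in range(1, 5)]
         for i in range(1, 4)]

# index of the maximum in each row, analogous to Position[#, Max@#]
argmax = [max(range(len(row)), key=row.__getitem__) for row in table]

# the bolded entries 4.5, 4.66667, 4.75 sit in columns 2, 3, 4 (1-based)
assert argmax == [1, 2, 3]
```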
|
4,132,907 | <p>I'm trying to figure out the identity above though I'm having difficulties towards figuring it out and would kindly appreciate your support!</p>
<p><span class="math-container">$n\binom{n-1}{r-1} = r\binom{n}{r}$</span></p>
<p>What I have tried:
Given that</p>
<p><span class="math-container">$$\binom{n}{r}=\binom{n-1}{r-1}+\binom{n-1}{r}$$</span>
Then by rearranging for <span class="math-container">$\binom{n-1}{r-1}$</span> I get</p>
<p><span class="math-container">$$\binom{n}{r}-\binom{n-1}{r}=\binom{n-1}{r-1}$$</span></p>
<p>Which simplifies to:</p>
<p><span class="math-container">$$n\left[\frac{n!}{(n-r)!r!}-\frac{(n-1)!}{(n-r)!r!}\right]=n\cdot\frac{n!-(n-1)!}{(n-r)!r!}$$</span></p>
<p>I'm stuck here on how to simply this any further to get the result I'm after.</p>
| I am a person | 806,777 | <p><span class="math-container">$\binom{n - 1}{r - 1} = \frac{(n - 1)!}{(r - 1)!(n - r)!}$</span>.</p>
<p><span class="math-container">$n*\frac{(n - 1)!}{(r - 1)!(n - r)!} = \frac{n!}{(r - 1)!(n - r)!}.$</span></p>
<p>Dividing the LHS and RHS by <span class="math-container">$r$</span>, we want to prove that <span class="math-container">$\frac{n!}{(r - 1)!(n - r)!(r)} = \binom{n}{r}$</span>.</p>
<p>Expanding the RHS, we get <span class="math-container">$\binom{n}{r} = \frac{n!}{r!(n - r)!}$</span></p>
<p>Simplifying the LHS, we get <span class="math-container">$\frac{n!}{r!(n - r)!}$</span>.</p>
<p>RHS = <span class="math-container">$\frac{n!}{r!(n - r)!}$</span> = LHS.</p>
<p>So, we are done.</p>
|
4,132,907 | <p>I'm trying to figure out the identity above though I'm having difficulties towards figuring it out and would kindly appreciate your support!</p>
<p><span class="math-container">$n\binom{n-1}{r-1} = r\binom{n}{r}$</span></p>
<p>What I have tried:
Given that</p>
<p><span class="math-container">$$\binom{n}{r}=\binom{n-1}{r-1}+\binom{n-1}{r}$$</span>
Then by rearranging for <span class="math-container">$\binom{n-1}{r-1}$</span> I get</p>
<p><span class="math-container">$$\binom{n}{r}-\binom{n-1}{r}=\binom{n-1}{r-1}$$</span></p>
<p>Which simplifies to:</p>
<p><span class="math-container">$$n\left[\frac{n!}{(n-r)!r!}-\frac{(n-1)!}{(n-r)!r!}\right]=n\cdot\frac{n!-(n-1)!}{(n-r)!r!}$$</span></p>
<p>I'm stuck here on how to simply this any further to get the result I'm after.</p>
| Bernard | 202,857 | <p>From the formula for binomial coefficients, but the other way: what has to be proved is
<span class="math-container">$$ \binom nr=\frac nr \binom{n-1}{r-1}.$$</span>
Now, we have
<span class="math-container">$$\binom nr=\frac{n!}{r!(n-r)!}=\frac nr\frac{(n-1)!}{(r-1)!(n-r)!}=\frac nr\binom{n-1}{r-1}$$</span>
since <span class="math-container">$\: n-r=(n-1)-(r-1)$</span>.</p>
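<p>The identity is also easy to confirm exhaustively for small values (a Python check using <code>math.comb</code>):</p>

```python
from math import comb

# n * C(n-1, r-1) == r * C(n, r) for all 1 <= r <= n
for n in range(1, 13):
    for r in range(1, n + 1):
        assert n * comb(n - 1, r - 1) == r * comb(n, r)
```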
|
41,940 | <p>For example, the square can be described with the equation $|x| + |y| = 1$. So is there a general equation that can describe a regular polygon (in the 2D Cartesian plane?), given the number of sides required?</p>
<p>Using the Wolfram Alpha site, this input gave an almost-square:
<code>PolarPlot(0.75 + ArcSin(Sin(2x+Pi/2))/(Sin(2x+Pi/2)*(Pi/4))) (x from 0 to 2Pi)</code></p>
<p>This input gave an almost-octagon:
<code>PolarPlot(0.75 + ArcSin(Sin(4x+Pi/2))/(Sin(4x+Pi/2)*Pi^2)) (x from 0 to 2Pi)</code></p>
<p>The idea is that as the number of sides in a regular polygon goes to infinity, the regular polygon approaches a circle. Since a circle can be described by an equation, can a regular polygon be described by one too? For our purposes, this is a regular convex polygon (triangle, square, pentagon, hexagon and so on).</p>
<p>It can be assumed that the centre of the regular polygon is at the origin $(0,0)$, and the radius is $1$ unit.</p>
<p>If there's no such equation, can the non-existence be proven? If there <em>are</em> equations, but only for certain polygons (for example, only for $n < 7$ or something), can those equations be provided?</p>
| that one guy | 137,947 | <p>Simply enough:</p>
<p>$$r(\theta) = \sec\left(\left(\theta \bmod \frac{\pi}{n'}\right) - \frac{\pi}{n}\right)$$</p>
<p>where $n' = n/2$ and $\bmod$ is the modulus operator.</p>
<p>It would work just as well with cosecant as with secant.</p>
<p>Also, the apothem would be $1$ and the circumradius $\sec(\pi/n)$ (equivalently $\sec(-\pi/n)$, since $\sec$ is even).</p>
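<p>A numeric check of this formula (a Python sketch; it assumes one vertex sits at angle $0$): every point with $0 \le \theta < 2\pi/n$ should lie on the supporting line $x\cos(\pi/n) + y\sin(\pi/n) = 1$ of the first edge, and the apothem and circumradius come out as stated.</p>

```python
import math

def r(theta, n):
    # polar radius of the regular n-gon with apothem 1 (pi/n' = 2*pi/n above)
    return 1 / math.cos((theta % (2 * math.pi / n)) - math.pi / n)

n = 6
for t in (0.0, 0.3, 0.7, 1.0):          # all within the first edge's sector
    x = r(t, n) * math.cos(t)
    y = r(t, n) * math.sin(t)
    # the point lies on the line x cos(pi/n) + y sin(pi/n) = 1
    assert abs(x * math.cos(math.pi / n) + y * math.sin(math.pi / n) - 1) < 1e-9

assert abs(r(math.pi / n, n) - 1) < 1e-9                    # apothem
assert abs(r(0.0, n) - 1 / math.cos(math.pi / n)) < 1e-9    # circumradius
```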
|
41,940 | <p>For example, the square can be described with the equation $|x| + |y| = 1$. So is there a general equation that can describe a regular polygon (in the 2D Cartesian plane?), given the number of sides required?</p>
<p>Using the Wolfram Alpha site, this input gave an almost-square:
<code>PolarPlot(0.75 + ArcSin(Sin(2x+Pi/2))/(Sin(2x+Pi/2)*(Pi/4))) (x from 0 to 2Pi)</code></p>
<p>This input gave an almost-octagon:
<code>PolarPlot(0.75 + ArcSin(Sin(4x+Pi/2))/(Sin(4x+Pi/2)*Pi^2)) (x from 0 to 2Pi)</code></p>
<p>The idea is that as the number of sides in a regular polygon goes to infinity, the regular polygon approaches a circle. Since a circle can be described by an equation, can a regular polygon be described by one too? For our purposes, this is a regular convex polygon (triangle, square, pentagon, hexagon and so on).</p>
<p>It can be assumed that the centre of the regular polygon is at the origin $(0,0)$, and the radius is $1$ unit.</p>
<p>If there's no such equation, can the non-existence be proven? If there <em>are</em> equations, but only for certain polygons (for example, only for $n < 7$ or something), can those equations be provided?</p>
| amd | 265,466 | <p>A cheap trick: Take an equation that represents the union of the extensions of the polygon’s sides and add a domain-limiting term to both sides. </p>
<p>For example, the equation <span class="math-container">$$(x-C)(x+y-\sqrt2C)(y-C)(x-y+\sqrt2C)(x+C)(x+y+\sqrt2C)(y+C)(x-y-\sqrt2C)+\sqrt{1-x^2-y^2}=\sqrt{1-x^2-y^2}$$</span> with <span class="math-container">$C=\cos{\frac\pi8}$</span> describes a regular octagon. For a regular polygon with an even number of sides, you can pair up opposite sides to halve the number of terms: <span class="math-container">$$(x^2-C^2)((x+y)^2-2C^2)(y^2-C^2)((x-y)^2-2C^2)+\sqrt{1-x^2-y^2}=\sqrt{1-x^2-y^2}.$$</span> </p>
<p>This obviously generalizes to polygons that are affine images of cyclic polygons, i.e., one whose vertices all lie on an ellipse.</p>
|
1,631,851 | <p>Let $I_1, \dots, I_n$ be any open intervals in $\mathbb{R}$ for $n \geq 2$. How can I show that $(I_1 \times \cdots \times I_n) \setminus \{a\}$ is open and connected for any $a \in \mathbb{R}^n$?</p>
| egreg | 62,967 | <p>The set $I_1\times I_2\times\dots\times I_n$ is open by definition of product topology. The set $\{a\}$ is closed because $\mathbb{R}^n$ is Hausdorff. So
$$
(I_1\times I_2\times\dots\times I_n)\setminus\{a\}=
(I_1\times I_2\times\dots\times I_n)\cap(\mathbb{R}^n\setminus\{a\})
$$
is open because it's the intersection of two open sets.</p>
<p>If $a\notin I_1\times I_2\times\dots\times I_n$, then the set is connected, being the product of connected spaces (it's actually convex and open, so connected by arcs).</p>
<p>If $a\in I_1\times I_2\times\dots\times I_n$, then the set is still connected by arcs. Take two points $b$ and $c$ in $(I_1\times I_2\times\dots\times I_n)\setminus\{a\}$. If the segment joining $b$ and $c$ doesn't pass through $a$ we are done. If it passes through $a$, then consider $\varepsilon>0$ such that the $n$-sphere with center $a$ and radius $\varepsilon$ is contained in $I_1\times I_2\times\dots\times I_n$.</p>
<p>Pick the point $b'$ where the segment $ba$ meets the $n$-sphere and fix $c'$ similarly; consider a point $d$ inside the $n$-sphere such that neither the segment $b'd$ nor the segment $dc'$ passes through $a$. Then the polygonal path $bb'dc'c$ joins $b$ and $c$ and is contained in $(I_1\times I_2\times\dots\times I_n)\setminus\{a\}$.</p>
|
343,993 | <p>Let $I$ be an ideal of $\mathbb{C}[x,y]$ such that its zero set in $\mathbb{C}^2$ has cardinality $n$. Is it true that $\mathbb{C}[x,y]/I$ is an $n$-dimensional $\mathbb{C}$-vector space (and why)?</p>
| Georges Elencwajg | 3,217 | <p>The answer is no, but your very interesting question leads to about the most elementary motivation for the introduction of scheme theory in elementary algebraic geometry. </p>
<p>You see, if the common zero set $X_{\mathrm{classical}}=V_{\mathrm{classical}}(I)$ consists set-theoretically (I would even say <em>physically</em>) in $n$ points, then $N:=\dim_\mathbb C \mathbb{C}[x,y]/I\geq n$.<br>
If $N\gt n$, this is an indication that some interesting geometry is present: $X_{\mathrm{classical}}$ is described by equations that are not transversal enough, so that morally they describe a variety bigger than the naked physical set.<br>
The most elementary example is given by $I=\langle y,y-x^2\rangle$: we have $V_{\mathrm{classical}}(I)=\{(0,0)\}=\{O\}$<br>
Everybody feels that it is a bad idea to describe the origin as $V(I)$, i.e. as the intersection of a parabola and one of its tangents: a better description would be to describe it by the ideal $J=\langle x,y\rangle,$ in other words as the intersection of two <em>transversal</em> lines.<br>
However the ideal $I$ describes an interesting structure, richer than a naked point, and this structure is called a <em>scheme</em>.<br>
This is all reflected in the strict inequality $$\dim_\mathbb C \mathbb{C}[x,y]/I=2=N\gt \dim_\mathbb C \mathbb{C}[x,y]/J=1=n=\text { number of physical points}.$$ Scheme theory in its most elementary incarnation disambiguates these two cases by adding the relevant algebra in the algebro-geometric structure, now defined as pairs consisting of a physical set plus an algebra: $$V_{\mathrm{scheme}}(J)=(\{O\},\mathbb{C}[x,y]/J )\subsetneq V_{\mathrm{scheme}}(I)= (\{O\},\mathbb{C}[x,y]/I ).$$</p>
<p><strong>Bibliography</strong><br>
Perrin's <a href="http://books.google.fr/books/about/Algebraic_Geometry.html?id=Vn1yR9qPvlMC&redir_esc=y" rel="nofollow"><em>Algebraic Geometry</em></a> is the most elementary introduction to this down-to-earth vision of schemes (cf. beginning of Chapter VI). </p>
|
79,040 | <p>Let <span class="math-container">$X$</span> be a compact Riemann surface, and <span class="math-container">$f$</span> a meromorphic function on X.
There's a theorem telling us that <span class="math-container">$\deg(\mathrm{div}(f)) = 0$</span>.</p>
<p>But is the inverse statement also true? I mean, is it true that:</p>
<blockquote>
<p>if <span class="math-container">$D$</span> is a divisor on <span class="math-container">$X$</span> with <span class="math-container">$\deg(D) = 0$</span>, then exists a meromorphic
function <span class="math-container">$f$</span> on <span class="math-container">$X$</span> such that <span class="math-container">$D = \mathrm{div}(f)$</span>?</p>
</blockquote>
<p>Thanks!</p>
| Lucas Kaufmann | 12,501 | <p>Rafael's answer shows that if every degree zero divisor on <span class="math-container">$X$</span> is principal then <span class="math-container">$X$</span> is isomorphic to the Riemann sphere <span class="math-container">$\mathbb{C}_{\infty}$</span>.</p>
<p>The converse also holds true, i.e., every degree zero divisor <span class="math-container">$D$</span> on <span class="math-container">$\mathbb{C}_{\infty}$</span> is principal. To see this just note that if <span class="math-container">$D = \sum n_i \cdot z_i + n_{\infty}\cdot \infty$</span> and <span class="math-container">$n_\infty = - \sum n_i$</span> then <span class="math-container">$f(z) = \Pi (z-z_i)^{n_i} $</span> is a rational function (hence meromorphic on <span class="math-container">$\mathbb{C}_{\infty}$</span>) such that div<span class="math-container">$(f) = D$</span>.</p>
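<p>The behavior at <span class="math-container">$\infty$</span> can be sanity-checked numerically (a Python sketch with a made-up divisor <span class="math-container">$D = 2\cdot(1) + 1\cdot(-3) - 3\cdot(\infty)$</span>, chosen for illustration): the rational function <span class="math-container">$f(z) = (z-1)^2(z+3)$</span> grows like <span class="math-container">$z^3$</span>, i.e. has a pole of order <span class="math-container">$3 = \sum n_i$</span> at <span class="math-container">$\infty$</span>, so <span class="math-container">$\deg(\operatorname{div}(f)) = 0$</span>.</p>

```python
def f(z):
    # f = (z - 1)^2 (z + 3): zeros of orders 2 and 1 at z = 1 and z = -3
    return (z - 1) ** 2 * (z + 3)

total = 2 + 1          # sum of the finite orders n_i
for R in (1e3, 1e6):
    # f(z) ~ z^total for large |z|, so f has a pole of order `total` at inf,
    # and the divisor 2*(1) + 1*(-3) - 3*(inf) indeed has degree zero
    assert abs(f(R) / R ** total - 1) < 1e-2
```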
|
1,950,381 | <p>If $H$ is a subgroup of $S_n$ and $H$ is not contained in $A_n$, prove that precisely one half of the elements of $H$ are even permutations. I know that multiplying two odd permutations gives an even permutation, and that an even and an odd permutation give an odd one, but I have no idea where to go from there. </p>
| p Groups | 301,282 | <p>Every subgroup of $S_n$ contains an even permutation (which one?). Then the hypothesis says that $H$ contains an odd permutation also. Then consider the map
$$\sigma:H\rightarrow \{1,-1\}, h\mapsto \mbox{sign}(h).$$
Can you complete the proof?</p>
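<p>A brute-force illustration (a Python sketch with the toy case $H = S_3$, which is not contained in $A_3$): counting parities via inversions shows exactly half the elements are even.</p>

```python
from itertools import permutations

def is_even(p):
    # a permutation is even iff its number of inversions is even
    inversions = sum(1 for i in range(len(p)) for j in range(i + 1, len(p))
                     if p[i] > p[j])
    return inversions % 2 == 0

H = list(permutations(range(3)))        # H = S_3, not contained in A_3
evens = sum(1 for p in H if is_even(p))
assert evens == len(H) // 2             # exactly half are even
```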
|
2,480,493 | <p>I have a problem with what looks like a very easy equation to solve: $\left|\frac{1 + z}{1- i\overline z}\right| = 1$. ($z$ is a complex number, $\overline z$ is the conjugate of $z$.) I got stuck at the point when, after replacing $\overline z = a-bi $ and $z = a +bi$ and getting rid of the absolute value, I end up with $a^2 +a-b =0$. I have no idea how to follow this up or whether I should take a totally different approach from the beginning. I'd be very grateful if someone could guide me to the right solution.</p>
| xpaul | 66,420 | <p>Let $z=x+yi$ and then from
$$ |1+z|=|1-i\bar{z}| $$
it is easy to obtain
$$ (x+1)^2+y^2=(-y+1)^2+x^2, $$
So
$$ x=-y $$
and hence $z=(1-i)x$.</p>
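<p>A quick numeric check of this conclusion (a Python sketch): every $z = x(1-i)$ does satisfy $|1+z| = |1 - i\bar z|$.</p>

```python
for x in (-2.0, -0.5, 0.0, 1.3, 4.0):
    z = x * (1 - 1j)                   # points on the line x_2 = -x_1
    lhs = abs(1 + z)                   # |1 + z|
    rhs = abs(1 - 1j * z.conjugate())  # |1 - i conj(z)|
    assert abs(lhs - rhs) < 1e-12
```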
|
3,515,656 | <p>Given <span class="math-container">$V(x) = -5x_1^2 (x_1^2 - 1) - 4x_1x_2$</span>.
Show that there exist <span class="math-container">$x_0 \in \mathbb{R}^2$</span> arbitrarily close to <span class="math-container">$x = 0$</span> such that <span class="math-container">$V(x_0) > 0$</span>.
I have no idea how to prove this. I was thinking of plugging in some small numbers <span class="math-container">$x_1, x_2$</span> that satisfy the condition, but this is not a formal proof. Can you help me solve this? Thank you.</p>
| José Carlos Santos | 446,262 | <p>Note that<span class="math-container">$$V(x,x)=-5x^2(x^2-1)-4x^2=x^2(1-5x^2),$$</span>which is greater than <span class="math-container">$0$</span> if <span class="math-container">$x\in\left(0,\sqrt{\frac15}\right)$</span>.</p>
|
207,515 | <p>Consider the upper half space $\mathbb{R}^n_{+} = \{x = (x_1,..,x_n) \in \mathbb{R}^n : x_n \geq 0\}$. Consider the Laplacian on this space with either the Dirichlet boundary condition or the Neumann boundary condition. My question is: can the Laplacian $\Delta$ under such boundary conditions be treated as a pseudodifferential operator of order 2? I can understand that the usual trick of Fourier inversion does not work, but if we still forcibly write
$$-\Delta f(x) = \int p(x, \xi)\hat{f}(\xi)e^{ix.\xi}d\xi$$ and assume that $p(x, \xi)$ belongs to some symbol class $S^m_{\rho, \delta}$, what can go wrong?</p>
| Christian Remling | 48,839 | <p>The formula will work just fine (with no <em>pseudo</em> required), provided you interpret $\widehat{f}$ suitably. Since $f$ is only a function on a half space, we have to make an agreement on what exactly we mean by $\widehat{f}$.</p>
<p>To obtain the Dirichlet Laplacian, define $\widehat{f}$ as the Fourier transform of the <em>odd</em> extension $f(x,-y)=-f(x,y)$ of $f$ (I use the notation $x=(x_1,\ldots ,x_{n-1})$, $y=x_n$). Then
$$
\widehat{-\Delta f} = |\xi|^2 \widehat{f} ,
$$
as expected. Moreover, the domains also get handled correctly by this formula: $f\in D(-\Delta)=W^2_0$ precisely if the odd extension of $f$ satisfies $|\xi|^2\widehat{f}\in L^2(\mathbb R^n)$, or, equivalently, if (in $L^2$ sense)
$$
f(x,y) = \int_{\mathbb R^{n-1}} d\xi' \int_0^{\infty} dk\, g(\xi',k) e^{i\xi' x}\sin ky
$$
for some $g\in L^2$ with $(|\xi'|^2+k^2)g\in L^2$.</p>
<p>For the Neumann Laplacian, work with even extensions instead.</p>
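<p>A toy one-dimensional finite-difference illustration of the Dirichlet case (my addition; the grid size is an arbitrary choice): sine modes vanish at the boundary and are exact eigenvectors of the discrete $-d^2/dy^2$, with eigenvalues tending to $k^2$, mirroring how the odd-extension Fourier transform diagonalizes $-\Delta$.</p>

```python
import math

n = 198                      # interior grid points on (0, pi)
h = math.pi / (n + 1)
ys = [(j + 1) * h for j in range(n)]

def neg_laplacian(v):
    """Discrete -d^2/dy^2 with zero (Dirichlet) boundary values."""
    out = []
    for j in range(n):
        left = v[j - 1] if j > 0 else 0.0
        right = v[j + 1] if j < n - 1 else 0.0
        out.append((2 * v[j] - left - right) / h**2)
    return out

eigs = {}
for k in (1, 2, 3):
    v = [math.sin(k * y) for y in ys]
    w = neg_laplacian(v)
    ratios = [w[j] / v[j] for j in range(n)]
    # Each sine mode is an exact eigenvector of the discrete operator;
    # its eigenvalue (2 - 2 cos(k h)) / h^2 tends to k^2 as h -> 0.
    assert max(ratios) - min(ratios) < 1e-8
    eigs[k] = ratios[0]

print({k: round(val, 2) for k, val in eigs.items()})  # {1: 1.0, 2: 4.0, 3: 9.0}
```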
|
2,658,368 | <p>I know that one can use Category Theory to formulate polynomial equations by modeling solutions as limits. For example, the sphere is the equalizer of the functions
\begin{equation}
s,t:\mathbb{R}^3\rightarrow\mathbb{R},\qquad s(x,y,z):=x^2+y^2+z^2,~t(x,y,z)=1.
\label{equalizer}
\end{equation}
One could then find out more about the solution set by mapping the equalizer diagram into other categories. More generally, solution sets of polynomial equations (and more generally, <a href="https://en.wikipedia.org/wiki/Algebraic_variety" rel="noreferrer">algebraic varieties</a>) are a central study object of algebraic geometry.</p>
<p>As differential equations are central to all areas of physics, I assume that many attempts have been made to generalise these ideas to the solution sets of differential equations. However, I do not yet have a lot of knowledge about algebraic geometry, topos theory or synthetic differential geometry. Thus I would be grateful if someone could explain roughly where and how Category Theory is used to study differential equations. </p>
<p>Can Category Theory really <strong>help to solve</strong> differential equations (for example by mapping diagrams of equations to other categories, similarly to how problems of topology are often solved by mapping topological spaces to algebraic ones in algebraic topology) or can it "only" provide schemes for generalising differential equations to other spaces/categories?</p>
<p>I am particularly interested in the names of areas I have to look into if I want to understand this better. Literature recommendations would also be very welcome.
<br><br><br>
<strong>EDIT</strong>: I found a book by Vinogradov called <a href="http://books.google.dk/books?id=XIve9AEZgZIC" rel="noreferrer">Cohomological Analysis of Partial Differential Equations and Secondary Calculus</a> where "the main result [...] is Secondary Calculus on diffieties". </p>
<p>However, the material is very deep and thus I am still not completely able to say whether these "new geometrical objects which are analogs of algebraic varieties" can be used to help solving PDEs or if they serve to structure the theory of PDEs or result in other applications I am not aware of. Thus further information would be very appreciated!</p>
| Nicolas Hemelsoet | 491,630 | <p>In another direction, one can use differential equations to solve a categorical problem.</p>
<p>If one wants to deform-quantize a Poisson-Lie group, a crucial step is to deform an infinitesimally braided monoidal category into a braided monoidal category, and this boils down to finding a Drinfeld associator, that is, a power series in two (non-commuting) variables $X,Y$ satisfying some conditions I won't write here. </p>
<p>It turns out that one can build an associator from the solution of the equation $\frac{d \psi}{dz} = \psi \cdot \alpha$ where $\alpha = \sum_{a,b} d(log(z_a - z_b))t_{a,b} \in H^1(X_A, \mathbb R) \otimes \mathfrak t_A$ where $X_A$ is the configuration space of $A$ ($A$ is a finite set) points in $\mathbb C$, and $\mathfrak t_A$ is the Drinfeld-Kohno Lie algebra. </p>
<p>This is a very vast domain; a good start would be this <a href="https://ncatlab.org/nlab/show/Drinfeld-Kohno+Lie+algebra" rel="noreferrer">ncatlab link</a> defining the Drinfeld-Kohno Lie algebra, and then browsing should give you a good overview. There are also good articles on the subject, but unfortunately I don't know a very complete survey, so it may be best to gather different sources. A short but nice survey is given in these <a href="http://www.maths.gla.ac.uk/~ukraehmer/Warszawa0409.pdf" rel="noreferrer">slides</a>. Finally, a good reference is also the end of the <a href="https://link.springer.com/book/10.1007%2F978-1-4612-0783-2?page=1#toc" rel="noreferrer">book</a> by Kassel; see the chapters "quantum groups and monodromy". </p>
|
112,889 | <p>I am new to the Wolfram Language and I need help making a simple program. Let's suppose we have a list:</p>
<pre><code>list = {{1, 11}, {2, 7}, {4, 2}, {7, 9}, {-2, 3}, {-1, 10}};
</code></pre>
<p>Now, I need to collect the first elements of sublists, but not all of them, only those whose second elements are larger than 8. </p>
<p>Thanks in advance. </p>
| Jason B. | 9,490 | <pre><code>list = {{1, 11}, {2, 7}, {4, 2}, {7, 9}, {-2, 3}, {-1, 10}};
Cases[list, {a_, b_} /; b > 8 :> a]
(* {1, 7, -1} *)
</code></pre>
<p>What I'm doing above is to use <a href="http://reference.wolfram.com/language/ref/Cases.html"><code>Cases</code></a> to select only those sublists whose second element is greater than 8,</p>
<pre><code>Cases[list, {a_, b_} /; b > 8]
(* {{1, 11}, {7, 9}, {-1, 10}} *)
</code></pre>
<p>The <code>/;</code> notation defines a <a href="http://reference.wolfram.com/language/ref/Condition.html"><code>Condition</code></a>. Then I'm applying a <code>:></code> to make a replacement, in this case keeping only the first element, look at <a href="http://reference.wolfram.com/language/ref/RuleDelayed.html"><code>RuleDelayed</code></a></p>
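<p>As an aside (my addition), the same filter-and-project pattern in Python is a single list comprehension:</p>

```python
lst = [(1, 11), (2, 7), (4, 2), (7, 9), (-2, 3), (-1, 10)]

# Keep the first element of each pair whose second element exceeds 8,
# mirroring Cases[list, {a_, b_} /; b > 8 :> a].
firsts = [a for a, b in lst if b > 8]
print(firsts)  # [1, 7, -1]
```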
|
112,889 | <p>I am new to the Wolfram Language and I need help making a simple program. Let's suppose we have a list:</p>
<pre><code>list = {{1, 11}, {2, 7}, {4, 2}, {7, 9}, {-2, 3}, {-1, 10}};
</code></pre>
<p>Now, I need to collect the first elements of sublists, but not all of them, only those whose second elements are larger than 8. </p>
<p>Thanks in advance. </p>
| ubpdqn | 1,997 | <p>Mathematica/Wolfram Language provides a lot of flexibility. Some ways you could do this:</p>
<pre><code>Cases[list, {x_, _?(# > 8 &)} :> x]
Select[list, #[[2]] > 8 &][[;; , 1]]
Pick[list, #[[2]] > 8 & /@ list][[;; , 1]]
True /. GroupBy[list, (#[[2]] > 8 &) -> (#[[1]] &)]
Flatten[Last@Reap[Sow @@@ list, _?(# > 8 &), #2 &]]
</code></pre>
<p>I encourage you to look at the documentation and play. You can find the way that suits your aims and/or style.</p>
<p>and some more ridiculous:</p>
<pre><code>Cases[{#1, Sign[#2 - 8]} & @@@ list, {x_, 1} :> x]
f[x_, y_] := x /; y > 8
f[__] := Sequence[]
f @@@ list
</code></pre>
|
4,547,151 | <p>Say I have a set of ordered points, like in a polygon, I can name a point in this set <span class="math-container">$P_i$</span>.</p>
<p>Say I have a high-dimensional point and I want to denote its x coordinate; I can write <span class="math-container">$P_x$</span>, and if I want its ith coordinate I can write <span class="math-container">$P_i$</span>.</p>
<p>Say I want the jth coordinate of the ith point, or the x coordinate of the ith point. How do I write that down?</p>
<p><span class="math-container">$P_{i, j}$</span>? or <span class="math-container">$P_{i}[j]$</span> or what other convention is there?</p>
| Abel Wong | 1,090,313 | <p>Imagine cutting the mug into circular disc slices perpendicular to the z-axis.
The radius of each disc is x, so its area is <span class="math-container">$\pi x^2$</span>. The range of z for the mug is from z=0 to z=1; these are the limits you mentioned.</p>
<p>Then you can integrate.</p>
<p>volume = <span class="math-container">$\int^1_0 \pi x^2 dz$</span></p>
<p>Put in the equation <span class="math-container">$z=x^4$</span>.</p>
<p>volume = <span class="math-container">$\int^1_0 \pi \sqrt{z} dz$</span>.</p>
<p>And you can finish the rest and get the <span class="math-container">$\frac{2}{3} \pi$</span>.</p>
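<p>A numerical cross-check of the last integral (my addition; composite Simpson's rule, standard library only):</p>

```python
import math

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))
    return s * h / 3

# volume = integral_0^1 pi * sqrt(z) dz, which should be 2*pi/3.
volume = simpson(lambda z: math.pi * math.sqrt(z), 0.0, 1.0, 4000)
print(abs(volume - 2 * math.pi / 3) < 1e-4)  # True
```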
|
2,713,853 | <p>This is the question:
"Let X and Y be independent random variables representing the lifetime (in 100 hours) of Type A and Type B light bulbs, respectively. Both variables have exponential distributions, and the mean of X is 2 and the mean of Y is 3.
What is the variance of the total lifetime of two Type A bulbs and one Type B bulb? "</p>
<p>This is the answer:
$$\operatorname{Var}(2X+Y)=2^2 \operatorname{Var}(X)+\operatorname{Var}(Y)=4×4+9=25$$</p>
<p>I understand $\operatorname{Var}(2X)=2^2 \operatorname{Var}(X)$, but why are $\operatorname{Var}(X) = 4$ and $\operatorname{Var}(Y)=9$? I also know that $\operatorname{Var}(X) = \sigma^2$. However, I also observed that $\operatorname{Var}(X) = \mu^2$ in this case.</p>
| Green | 357,732 | <p>The identity $\operatorname{Var}(X) = \sigma^2$ is just notation for the variance of any distribution; it does not by itself give a value. Note that in this question $X$ and $Y$ are stated to have exponential distributions with means of $2$ and $3$ respectively.</p>
<p>For an exponential distribution, the mean is equal to $β$ and the variance is equal to $β^2$. That's the reason you observe that $Var(x)$ equals the square of the mean in this case.</p>
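<p>To make the arithmetic concrete, here is a small Python sketch (my addition; note that <code>random.expovariate</code> takes the rate, the reciprocal of the mean):</p>

```python
import random

def exp_var(mean):
    # For an exponential distribution with mean beta, the variance is beta**2.
    return mean ** 2

# Var(2X + Y) = 4 Var(X) + Var(Y) for independent X, Y.
analytic = 4 * exp_var(2) + exp_var(3)
print(analytic)  # 25

# Seeded Monte Carlo confirmation with a loose tolerance.
random.seed(0)
samples = [2 * random.expovariate(1 / 2) + random.expovariate(1 / 3)
           for _ in range(200_000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / (len(samples) - 1)
print(abs(var - 25) < 1.5)  # True
```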
<p>For more information on exponential distribution:
<a href="https://en.wikipedia.org/wiki/Exponential_distribution" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Exponential_distribution</a></p>
|
427,663 | <p>If $|z|=|w|=1$, and $1+zw \neq 0$, then $ {{z+w} \over {1+zw}} \in \Bbb R $</p>
<p>I found one link that had a similar problem.
<a href="https://math.stackexchange.com/questions/343982/prove-if-z-1-and-w-1-then-1-zw-neq-0-and-z-w-over-1">Prove if $|z| < 1$ and $ |w| < 1$, then $|1-zw^*| \neq 0$ and $| {{z-w} \over {1-zw^*}}| < 1$</a></p>
| Jyrki Lahtonen | 11,619 | <p>Draw vectors on the complex plane from the origin to a point on the unit circle. Notice that the argument of the sum of two such vectors is the average of the arguments of the two vectors (observe that the latter is defined only modulo a multiple of $\pi$). This follows from the properties of a rhombus (=parallelogram with equal sides).</p>
<p>Let $\arg z=\alpha$ and $\arg w=\beta$. By the above observation
$\arg(z+w)=(\alpha+\beta)/2\pmod\pi$. It is well known that $\arg zw=\alpha+\beta$. As $\arg1=0$ another application of the above observation gives that
$\arg(1+zw)=(\alpha+\beta)/2\pmod \pi$. </p>
<p>So $z+w$ and $1+zw$ share the same argument modulo an integer multiple of $\pi$. This means that the argument of their ratio is a multiple of $\pi$, and hence the ratio itself is real.</p>
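<p>A numerical sanity check of this conclusion (my addition; the sample angles are arbitrary, chosen so that $1+zw\neq 0$):</p>

```python
import cmath

def ratio(alpha, beta):
    z = cmath.exp(1j * alpha)
    w = cmath.exp(1j * beta)
    assert abs(1 + z * w) > 1e-9      # stay away from 1 + zw = 0
    return (z + w) / (1 + z * w)

angles = [(0.3, 1.1), (2.0, -0.7), (-1.2, 2.5)]
# The imaginary part of (z + w)/(1 + zw) vanishes up to rounding.
print(all(abs(ratio(a, b).imag) < 1e-12 for a, b in angles))  # True
```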
|
2,790,767 | <p>Let $X_i$ a countable collection of independent random variables with symmetric distribution i.e. $P(X_i\in A)=P(X_i \in -A)$ for all $i\geq 1$. If $\lambda\in\mathbb{R}$ is such that $E(e^{\lambda X_i})$ is finite for all $i$ I want to prove the following Kolmogorov like inequality
$$P\left(\max_{1\leq k\leq n} \left|\sum_{i=1}^{k}X_{i}\right|>t\right)\leq e^{-\lambda t}\prod_{i=1}^{n}E(e^{\lambda X_i})$$
Do you have a hint for the solution?</p>
| Wanshan | 530,208 | <p>I think this bound is incorrect, because the right hand side is too small. For example, if we let $X_i$ to be i.i.d. Rademacher variable, i.e., $P(X_i = 1) = P(X_i = -1) = 1/2$, then any $\lambda\in \mathbb{R}$ can make $E(e^{\lambda X_i})<\infty$. Take $t = 1/2$ and $n=1$, then $P(|X_1|>t) = 1$, but the right hand side is
$$
\frac{1}{2}e^{-\lambda/2}(e^{\lambda} + e^{-\lambda}) = \frac{1}{2}[e^{\lambda/2} + e^{-3\lambda/2}].
$$
If we plot $\frac{1}{2}[e^{\lambda/2} + e^{-3\lambda/2}]$, we find that it can take values smaller than $1$. Specifically, its minimum, attained at $\lambda=\frac{1}{2}\ln 3$, is $0.87738<1$.
<a href="https://i.stack.imgur.com/YVuOM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YVuOM.png" alt="Plot of function \frac{1}{2}[e^{\lambda/2} + e^{-3\lambda/2}]$$"></a></p>
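<p>The claimed minimum is easy to confirm numerically (my addition):</p>

```python
import math

def f(lam):
    return 0.5 * (math.exp(lam / 2) + math.exp(-3 * lam / 2))

lam_star = 0.5 * math.log(3)      # critical point, from f'(lam) = 0
print(round(f(lam_star), 5))      # 0.87738

# So the proposed right-hand side dips below 1 = P(|X_1| > 1/2).
assert f(lam_star) < 1
```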
<p>In the following part, I will prove a similar bound using martingale theory. This bound is a little bit worse than the original bound and needs a stronger condition $E[e^{\lambda|X_i|}]<\infty$.</p>
<p>Since the distribution of $X_i$ is symmetric, $E(e^{\lambda X_i}) = E(e^{-\lambda X_i})$. Thus we can assume that $\lambda>0$. By Jensen inequality, $e^{\lambda{E}X_i}\leq E(e^{\lambda X_i})<\infty$, so $EX_i<\infty$, and this together with the fact that $X_i$ has symmetric distribution implies that $EX_i=0$. Therefore $S_k = \sum_{i=1}^kX_i$ is a martingale.</p>
<p>Since $f(x) = e^{|\lambda x|}$ is a convex function, we know that $e^{ |\lambda S_k|}$ is a submartingale. Now by the Doob's inequality for submartingale (<a href="http://services.math.duke.edu/~rtd/PTE/PTE4_1.pdf" rel="nofollow noreferrer">Theorem 5.4.2</a> at <em>Probability: Theory and Examples</em> by Rick Durrett), we have
$$
P(\max_{1\leq k\leq n}e^{|\lambda S_k|}>e^{\lambda t})\leq e^{-\lambda t}Ee^{|\lambda S_n|}.
$$
Furthermore, if we can assume $E[e^{\lambda|X_i|}]<\infty$, then we can use independence to get</p>
<p>$$
P(\max_{1\leq k\leq n}e^{|\lambda S_k|}> e^{\lambda t}) \leq e^{-\lambda t}Ee^{|\lambda S_n|} \leq e^{-\lambda t}\cdot E e^{\sum_{i=1}^n \lambda|X_i|} = e^{-\lambda t}\prod_{i=1}^n(Ee^{\lambda|X_i|})
$$</p>
<p>The last point is to notice that
$$
P(\max_{1\leq k\leq n}e^{|\lambda S_k|}>e^{\lambda t}) = P(\max_{1\leq k\leq n}|\lambda S_k|>{\lambda t}) = P(\max_{1\leq k\leq n}| S_k|>{t}).
$$
Therefore the upper bound I can get is
$$
P(\max_{1\leq k\leq n}| S_k|>{t})\leq e^{-\lambda t}Ee^{|\lambda S_n|}\leq e^{-\lambda t}\prod_{i=1}^n(Ee^{\lambda|X_i|}).
$$</p>
|
3,715,674 | <p>Let <span class="math-container">$X$</span> be a scheme and <span class="math-container">$U$</span> an open subset of <span class="math-container">$X$</span>. Let <span class="math-container">$s \in \Gamma(U, O_X)$</span> and <span class="math-container">$x \in U$</span>.<br>
I am confused about the difference between
<span class="math-container">$s_x$</span> and <span class="math-container">$s(x)$</span>... Are they the same thing? </p>
| Henry Lee | 541,220 | <p>An exact differential equation is defined as an equation which has a solution of the form:
<span class="math-container">$$du(x,y)=P(x,y)dx+Q(x,y)dy$$</span> if the DE is defined as:
<span class="math-container">$$P(x,y)dx+Q(x,y)dy=0$$</span> leading to the general solution of:
<span class="math-container">$$u(x,y)=C$$</span>
It may be called an exact equation because it rests on the requirement of continuous partial derivatives, and because the constant can be worked out exactly, in contrast to the numerical methods often used to solve PDEs, which are effectively good approximations.</p>
|
54,395 | <p>A generic question: </p>
<p>Are there any spectral techniques to estimate the genus of a graph? I am interested in complete balanced multipartite graphs.</p>
| Igor Rivin | 11,142 | <p>Yes, there are techniques. For graphs of fixed genus and $n$ vertices, the second lowest eigenvalue of the Laplacian is of order $O(1/\sqrt{n}),$ where the hidden constant depends on the genus in an explicit way; this follows from the Cheeger inequality and the separator theorems of Lipton and Tarjan (see, e.g., the paper of Spielman and Teng, "Spectral partitioning works"). The dependence on the genus can be made quite explicit, so if you do that, you will get a lower bound on the genus in terms of the size of the graph and $\lambda_2.$</p>
|
2,755 | <p>As Akhil had great success with his <a href="https://mathoverflow.net/questions/1291/a-learning-roadmap-for-algebraic-geometry">question</a>, I'm going to ask one in a similar vein. So representation theory has kind of an intimidating feel to it for an outsider. Say someone is familiar with algebraic geometry enough to care about things like G-bundles, and wants to talk about vector bundles with structure group G, and so needs to know representation theory, but wants to do it as geometrically as possible.</p>
<p>So, in addition to the algebraic geometry, lets assume some familiarity with representations of finite groups (particularly symmetric groups) going forward. What path should be taken to learn some serious representation theory?</p>
| Chuck Hague | 1,528 | <p>All of these recommendations are very good, and I'd like to add that the book <a href="https://doi.org/10.1007/978-0-8176-4523-6" rel="nofollow noreferrer">D-Modules, Perverse Sheaves, and Representation Theory</a> (which you can download at the provided link if you have institutional access; otherwise you can get it from, say, Amazon) contains some very good introductory chapters (chapters 9, 10, and 11) on the various sorts of things one would want to know in representation theory and algebraic geometry. The whole book is quite good if you're interested in the D-modules/perverse sheaves side of the story, but even if you're not interested in that, those particular chapters might be of interest.</p>
|
2,925,320 | <p>I was wondering how did Spectral graph theory, this multidisciplinary area between Linear Algebra and Graph theory start?</p>
<p>How did it emerge? </p>
<p>Was there a certain problem (maybe graph isomorphism problem?) that evoked its formation? </p>
| Chris Godsil | 16,143 | <p>The oldest work on spectral graph theory that I am aware of is the work by chemists on Hueckel molecular orbital theory (<a href="https://en.wikipedia.org/wiki/H%C3%BCckel_method" rel="noreferrer">https://en.wikipedia.org/wiki/Hückel_method</a>). One principal goal of this work was to relate chemical properties of aromatic hydrocarbons to spectral properties of the underlying graphs.
The work of Coulson was very influential.</p>
<p>Chemists also studied the matching polynomial before graph theorists became interested in the topic. (I view
the matching polynomial as a topic in spectral graph theory.)</p>
<p>Another starting point would be the original paper by Hoffman and Singleton on Moore graphs. It seems hard to overestimate the significance of this work. One thing it did was establish the connection between problems in extremal graph theory and graph spectra.</p>
<p>Certainly there was always the hope that one might use spectral information for graph isomorphism but it does not appear that lead to the development of spectral graph theory.</p>
|
1,856,225 | <p>Right, so as the final step of my project draws near, and after having previously posted a badly laid-out question, I am posting a new one that is clear and unambiguous. I need to find this specific definite integral, which Mathematica could not solve:</p>
<p>$$ \int_{x=0}^\pi \frac{\cos \frac{x}{2}}{\sqrt{1 + A \sin^2 \frac{x}{2}}} \sin \left( \frac{1+A}{\sqrt{A}} \omega \tanh^{-1} \frac{\cos \frac{x}{2}}{\sqrt{1 + A \sin^2 \frac{x}{2}}} \right) \, dx.$$</p>
<p>where $ 0<A<1 $ and $ \omega > 0 $ are parameters of the problem.
I tried substituting for the argument of the inverse hyperbolic tangent, but that seemed to make it worse. I am posting here in the hope that someone can tell me whether it is at all possible to solve analytically via some trick or clever substitution, or whether it is perhaps an elliptic integral in disguise. I thank all helpers.</p>
<p>** My question on the Melnikov integral can be found <a href="https://math.stackexchange.com/questions/1853921/finding-a-specific-improper-integral-on-a-solution-path-to-a-2-dimensional-syste">here</a>; I just used trig identities to soften it up.</p>
| Yuriy S | 269,624 | <p>I highly doubt this integral has any useful analytic form. I suggest you either use numerical integration or simplify your initial model to obtain a solvable integral.</p>
<p>Using @tired 's suggestion, we can make you integral much easier. First, let's make the following substitutions:</p>
<p>$$y=\sin \frac{x}{2}$$</p>
<p>$$B=\frac{1+A}{\sqrt{A}} \omega$$</p>
<p>Then we obtain:</p>
<p>$$I(A,B)=2 \int_0^1 \sin \left( B~ \text{arctanh}~\sqrt{\frac{1-y^2}{1+Ay^2}} \right) \frac{dy}{\sqrt{1+Ay^2}}$$</p>
<p>For the most simple case Mathematica gives:</p>
<p>$$B=1,~~A=0$$</p>
<p>$$I(0,1)=2\pi \frac{\sinh (\pi/2)}{\sinh \pi}$$</p>
<p>Mathematica can't take this integral even with $B=1,~A=1$. Neither with $A=0$ and $B$ not defined.</p>
<p>This is why I think numerical methods are the only way for you. Numerically, the integral is very nice (and depends <strong>very weakly</strong> on $A$, by the way).</p>
<p>See the plots below for $I(A,B)$, for $A \in (0,1)$ and $B \in (0,4)$:</p>
<p><a href="https://i.stack.imgur.com/CEL11.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CEL11.png" alt="enter image description here"></a></p>
<blockquote>
<p>I'm aware this is not even close to answering your question, but you might find this useful.</p>
</blockquote>
<p>And yes, there might be some connection with elliptic integrals, as you can see by the argument of the inverse hyperbolic tangent.</p>
<hr>
<p><strong>Edit</strong></p>
<p>I was wrong - this integral is workable. See @MariuszIwaniuk 's much better answer!</p>
<p>And see @achillehui's answer for the final solution. With permission, I add the 3D plot of the exact solution to compare with numerical results:</p>
<p><a href="https://i.stack.imgur.com/jFEDt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jFEDt.png" alt="enter image description here"></a></p>
<p>All credit goes to achille hui!</p>
|
1,856,225 | <p>Right, so as the final step of my project draws near, and after having previously posted a badly laid-out question, I am posting a new one that is clear and unambiguous. I need to find this specific definite integral, which Mathematica could not solve:</p>
<p>$$ \int_{x=0}^\pi \frac{\cos \frac{x}{2}}{\sqrt{1 + A \sin^2 \frac{x}{2}}} \sin \left( \frac{1+A}{\sqrt{A}} \omega \tanh^{-1} \frac{\cos \frac{x}{2}}{\sqrt{1 + A \sin^2 \frac{x}{2}}} \right) \, dx.$$</p>
<p>where $ 0<A<1 $ and $ \omega > 0 $ are parameters of the problem.
I tried substituting for the argument of the inverse hyperbolic tangent, but that seemed to make it worse. I am posting here in the hope that someone can tell me whether it is at all possible to solve analytically via some trick or clever substitution, or whether it is perhaps an elliptic integral in disguise. I thank all helpers.</p>
<p>** My question on the Melnikov integral can be found <a href="https://math.stackexchange.com/questions/1853921/finding-a-specific-improper-integral-on-a-solution-path-to-a-2-dimensional-syste">here</a>; I just used trig identities to soften it up.</p>
| Mariusz Iwaniuk | 276,773 | <p>This is not a full answer. Following the answer of user <code>You're In My Eye</code>, first let's make the following substitutions:</p>
<p>$$y=\sin \frac{x}{2}$$</p>
<p>$$B=\frac{1+A}{\sqrt{A}} \omega$$</p>
<p>$$2 \int_0^1 \sin \left( B~ \text{arctanh}~\sqrt{\frac{1-y^2}{1+Ay^2}} \right) \frac{dy}{\sqrt{1+Ay^2}}$$
substitutions:
$$y={\frac { \sqrt{- \left( A{t}^{2}+1 \right) \left( {t}^{2}-1
\right) }}{A{t}^{2}+1}}
$$
Then we obtain:
$$2\, \sqrt{A+1}\int_{0}^{1}\!{\frac {\sin \left( B{\rm arctanh} \left(t
\right) \right) t}{ \left( A{t}^{2}+1 \right) \sqrt{-{t}^{2}+1}}}
\,{\rm d}t
$$
substitutions:
$$t=\tanh \left( k \right) $$
have :
$$2\,\sqrt {A+1}\int_{0}^{\infty }\!{\frac {\sin \left( Bk \right) \sinh
\left( k \right) }{A \left( \cosh \left( k \right) \right) ^{2}+
\left( \cosh \left( k \right) \right) ^{2}-A}}\,{\rm d}k
$$
trig identity:
<code>cosh(k)^2-sinh(k)^2 = 1</code>
and <code>A+1=m</code>
$$2\, \sqrt{A+1}\int_{0}^{\infty }\!{\frac {\sin \left( Bk \right) \sinh
\left( k \right) }{1+ \left( A+1 \right) \left( \sinh \left( k
\right) \right) ^{2}}}\,{\rm d}k \tag{1}
$$
I have a simple form of integral:
$$2\, \sqrt{m}\int_{0}^{\infty }\!{\frac {\sin \left( B k \right) \sinh
\left( k \right) }{1+m \left( \sinh \left( k \right) \right) ^{2}}}
\,{\rm d}k
$$
Substitutions back $$B=\frac{1+A}{\sqrt{A}} \omega$$ to equation <code>1</code>.
I have:
$$2\, \sqrt{A+1}\int_{0}^{\infty }\!{\frac {\sin \left(\frac{1+A}{\sqrt{A}} \omega k \right) \sinh
\left( k \right) }{1+ \left( A+1 \right) \left( \sinh \left( k
\right) \right) ^{2}}}\,{\rm d}k
$$</p>
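<p>As a numerical sanity check on this chain of substitutions (my addition): setting $m=1$ and $B=1$, i.e. the case $I(0,1)$ treated in another answer, the last form becomes $2\int_0^\infty \sin(k)\sinh(k)/\cosh^2(k)\,dk$, which should agree with the closed form $2\pi\sinh(\pi/2)/\sinh\pi$ quoted there. Truncating the exponentially decaying integrand at $k=40$:</p>

```python
import math

def simpson(f, a, b, n):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))
    return s * h / 3

# 2 * integral_0^inf sin(B k) sinh(k) / (1 + m sinh(k)^2) dk with m = B = 1;
# since 1 + sinh^2 = cosh^2, the integrand decays like 2 e^{-k}, so the
# tail beyond k = 40 is negligible.
integrand = lambda k: math.sin(k) * math.sinh(k) / math.cosh(k) ** 2
numeric = 2 * simpson(integrand, 0.0, 40.0, 16000)

closed = 2 * math.pi * math.sinh(math.pi / 2) / math.sinh(math.pi)
print(abs(numeric - closed) < 1e-8)  # True
```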
<p>Mathematica can find solution for this integral.</p>
<pre><code> A = 1/4;
omega = 1;
int = Normal[2*Sqrt[A + 1]*Integrate[(Sin[(1 + A)/Sqrt[A]*omega*k]*Sinh[k])/(
1 + (A + 1)*Sinh[k]^2), {k, 0, Infinity}]]
(*(1/1769)2 Sqrt[5] ((305 -
122 I) ((1/5 + (3 I)/20) Hypergeometric2F1[1, 1 - (5 I)/2,
2 - (5 I)/2, -((1 + 2 I)/Sqrt[5])] + (1/5 - (3 I)/
20) Hypergeometric2F1[1, 1 - (5 I)/2,
2 - (5 I)/2, -((1 - 2 I)/Sqrt[5])] + (1/5 - (3 I)/
20) Hypergeometric2F1[1, 1 - (5 I)/2, 2 - (5 I)/2, (1 - 2 I)/
Sqrt[5]] + (1/5 + (3 I)/20) Hypergeometric2F1[1, 1 - (5 I)/2,
2 - (5 I)/2, (1 + 2 I)/Sqrt[5]]) + (5 +
2 I) (61 ((1/5 + (3 I)/20) Hypergeometric2F1[1, 1 + (5 I)/2,
2 + (5 I)/2, -((1 + 2 I)/Sqrt[5])] + (1/5 - (3 I)/
20) Hypergeometric2F1[1, 1 + (5 I)/2,
2 + (5 I)/2, -((1 - 2 I)/Sqrt[5])] + (1/5 - (3 I)/
20) Hypergeometric2F1[1, 1 + (5 I)/2, 2 + (5 I)/2, (
1 - 2 I)/Sqrt[5]] + (1/5 + (3 I)/20) Hypergeometric2F1[1,
1 + (5 I)/2, 2 + (5 I)/2, (1 + 2 I)/Sqrt[5]]) + (2 +
5 I) ((6 +
5 I) ((1/5 + (3 I)/20) Hypergeometric2F1[1, 3 - (5 I)/2,
4 - (5 I)/2, -((1 + 2 I)/Sqrt[5])] + (1/5 - (3 I)/
20) Hypergeometric2F1[1, 3 - (5 I)/2,
4 - (5 I)/2, -((1 - 2 I)/Sqrt[5])] + (1/5 - (3 I)/
20) Hypergeometric2F1[1, 3 - (5 I)/2, 4 - (5 I)/2, (
1 - 2 I)/Sqrt[5]] + (1/5 + (3 I)/20) Hypergeometric2F1[
1, 3 - (5 I)/2, 4 - (5 I)/2, (1 + 2 I)/Sqrt[5]]) - (6 -
5 I) ((1/5 + (3 I)/20) Hypergeometric2F1[1, 3 + (5 I)/2,
4 + (5 I)/2, -((1 + 2 I)/Sqrt[5])] + (1/5 - (3 I)/
20) Hypergeometric2F1[1, 3 + (5 I)/2,
4 + (5 I)/2, -((1 - 2 I)/Sqrt[5])] + (1/5 - (3 I)/
20) Hypergeometric2F1[1, 3 + (5 I)/2, 4 + (5 I)/2, (
1 - 2 I)/Sqrt[5]] + (1/5 + (3 I)/20) Hypergeometric2F1[
1, 3 + (5 I)/2, 4 + (5 I)/2, (1 + 2 I)/Sqrt[5]]))))*)
</code></pre>
|
298,066 | <p>Let $A$ be a commutative ring, and consider the category of bimodules over $A$.</p>
<p>An $A$-bimodule $M$ is called symmetric if $a\cdot m = m \cdot a$ for all $a \in A$, $m \in M$.</p>
<p>Is the category of symmetric bimodules over $A$ closed under extensions?</p>
<p>Namely, given an exact sequence of $A$-bimodules</p>
<p>$0 \to M \to N \to K \to 0$</p>
<p>where $M,K$ are symmetric, must $N$ also be a symmetric $A$-module?</p>
| Johannes Hahn | 3,041 | <p>Easier counterexample than the one Mare gave:</p>
<p>Let $A=k[X]$ and $N:=k^2$ where $X$ acts as $id_N$ on the right and as $\begin{pmatrix}1&1\\&1\end{pmatrix}$ on the left. Then $M:=\langle e_1\rangle$ is a symmetric sub-bimodule ($X$ acts as $id$ on both sides) and the quotient $K:=N/M=\langle e_2+M\rangle$ is a symmetric bimodule (again, $X$ acts as $id$ on both sides), but $N$ is not symmetric.</p>
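<p>The claims can be verified mechanically (my addition; the left action of $X$ is encoded by the matrix <code>L_act</code>, the right action by the identity <code>R_act</code>):</p>

```python
L_act = [[1, 1], [0, 1]]   # left action of X on N = k^2
R_act = [[1, 0], [0, 1]]   # right action of X (identity)

def apply(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

e1, e2 = [1, 0], [0, 1]

# M = <e1> is a sub-bimodule on which both actions agree:
assert apply(L_act, e1) == apply(R_act, e1) == e1

# N itself is not symmetric: the actions differ on e2 ...
assert apply(L_act, e2) == [1, 1] != apply(R_act, e2)

# ... but the discrepancy (L - R) e2 = e1 lies in M, so the actions
# agree modulo M: the quotient K = N/M is symmetric.
diff = [apply(L_act, e2)[i] - apply(R_act, e2)[i] for i in range(2)]
assert diff == e1
print("N is a non-symmetric extension of symmetric M by symmetric K")
```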
|
165,791 | <p>I'm looking for a maximum accuracy quadrature formula:</p>
<p>$$
\int_{-1} ^{1} \sqrt{\frac {1-x}{1+x}} f(x)dx = A_1f(x_1)+A_2f(x_2)+R(f)
$$</p>
<p>I don't know exactly whether this is the trapezoidal rule (which has degree of accuracy one), Simpson's rule (degree three), or a Gauss rule (these three I have studied deeply, but there might be others). I would also like to prove it using numerical-analysis methods, so any help in this direction will be highly appreciated.</p>
| Brendan McKay | 9,025 | <p>Federico's answer gives you the theory. Since you only ask for two abscissae, it is easy to solve this case from first principles: put in the abscissae (and weights) as variables and do the integrals. For the constant weight $w\equiv 1$, for example, this yields $f(-3^{-1/2})+f(3^{-1/2})$, which gives the right answer for cubic polynomials and not for quartic polynomials. I hope this isn't homework.</p>
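<p>Following this first-principles route for the given weight $\sqrt{(1-x)/(1+x)}$ (my own sketch, not part of the answer): compute the moments via the substitution $x=\cos t$, take the nodes to be the roots of the monic degree-2 orthogonal polynomial for this weight, and fit the weights to the first two moments; the resulting two-point rule is then exact through degree $3$ but not degree $4$.</p>

```python
import math

def simpson(f, a, b, n=2000):
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))
    return s * h / 3

# Moments m_k = integral_{-1}^{1} sqrt((1-x)/(1+x)) x^k dx; with x = cos(t)
# the weight times dx becomes (1 - cos t) dt, removing the singularity.
m = [simpson(lambda t, k=k: (1 - math.cos(t)) * math.cos(t) ** k, 0.0, math.pi)
     for k in range(6)]

# Monic p2(x) = x^2 + b x + c orthogonal to 1 and x:
#   m2 + b m1 + c m0 = 0,  m3 + b m2 + c m1 = 0.
det = m[1] ** 2 - m[0] * m[2]
b = (m[0] * m[3] - m[1] * m[2]) / det
c = (m[2] ** 2 - m[1] * m[3]) / det
x1 = (-b + math.sqrt(b * b - 4 * c)) / 2   # nodes = roots of p2
x2 = (-b - math.sqrt(b * b - 4 * c)) / 2

# Weights A1, A2 from exactness on 1 and x.
A2 = (m[1] - x1 * m[0]) / (x2 - x1)
A1 = m[0] - A2

rule = lambda k: A1 * x1 ** k + A2 * x2 ** k
exact_through = [abs(rule(k) - m[k]) < 1e-6 for k in range(4)]
print(exact_through, abs(rule(4) - m[4]) > 1e-3)  # [True, True, True, True] True
```

<p>Numerically the nodes come out as $(-1\pm\sqrt5)/4$, i.e. $\cos(2\pi/5)$ and $\cos(4\pi/5)$, as one would expect from a Gauss rule for this Chebyshev-like weight.</p>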
|
3,258,256 | <p>I know that if a matrix A on an n-dimensional space V has n distinct eigenvalues, we can write it as <span class="math-container">$X^{-1}DX$</span> where D is a diagonal matrix with the eigenvalues of A on its diagonal and X is a matrix of eigenvectors. </p>
<p>It seems that what we are doing is using the fact that we have n distinct eigenvalues to find a basis of eigenvectors of our space. In this basis, the linear transformation which corresponds to A simply rescales each eigenvector so A can be written as a diagonal matrix in this basis.</p>
<p>I understand that we need the matrix X to transform a vector defined in a basis <span class="math-container">$f_1,...,f_n$</span>that was used to define A to the eigenbasis <span class="math-container">$e_1,...e_n$</span>. However, I do not understand why we have to use X as the linear map that does this. Why can we not just define a map <span class="math-container">$L:V \to V$</span> where <span class="math-container">$L(f_i) = e_i$</span> <span class="math-container">$\forall 1\leq i \leq n $</span> and then X is just the identity matrix.</p>
| symplectomorphic | 23,611 | <p>Your question makes little sense: it is not clear what you mean when you say that under your choice of <span class="math-container">$L$</span>, “<span class="math-container">$X$</span> is just the identity matrix.” But perhaps these comments will clarify the situation.</p>
<p>Diagonalization is really about linear operators, not about the matrices that represent them. Say you have a finite-dimensional space <span class="math-container">$V$</span> of dimension <span class="math-container">$n$</span> and an operator <span class="math-container">$L:V\to V$</span>. The question is: does there exist a basis of <span class="math-container">$V$</span> so that the matrix that represents <span class="math-container">$L$</span> with respect to that basis is diagonal? The answer (not hard to show, just unraveling what a diagonal representation means) is: such a basis exists if and only if <span class="math-container">$L$</span> has <span class="math-container">$n$</span> linearly independent eigenvectors. Such a basis is called an eigenbasis.</p>
<p>Now, let’s translate these basis-free ideas into basis-dependent matrices. Say <span class="math-container">$L_e$</span> be the matrix that represents <span class="math-container">$L$</span> with respect to the standard basis <span class="math-container">$e$</span> (or really any old basis you wish), and let <span class="math-container">$L_b$</span> be the matrix that represents <span class="math-container">$L$</span> with respect to the eigenbasis <span class="math-container">$b$</span>. Then <span class="math-container">$L_b$</span> is diagonal. The question is, how are these two matrices <span class="math-container">$L_e$</span> and <span class="math-container">$L_b$</span> related?</p>
<p>The answer is that they are similar, of course; they represent the same transformation with respect to different bases. Say that <span class="math-container">$X$</span> is the matrix that represents the transformation that sends the basis <span class="math-container">$e$</span> to the eigenbasis <span class="math-container">$b$</span>, <em>with respect to the basis <span class="math-container">$e$</span></em>. Then elementary change of basis calculations give the formula</p>
<p><span class="math-container">$$L_e=XL_bX^{-1}$$</span></p>
|
89,666 | <p>I am trying to find the solution of the simultaneous ordinary differential equations </p>
<pre><code> b1'[z]-1I*beta1*b1[z]-C1*b2[z]==0,
b2'[z]-1I*beta2*b2[z]+C1*b1[z]==0
</code></pre>
<p>with boundary conditions </p>
<pre><code>b1[1.581825567*10^-6]==0.876212, b2[1.581825567*10^-6]==0.481925
</code></pre>
<p>By running the following command</p>
<pre><code>S = DSolve[{b1'[z]-1I*beta1*b1[z]-C1*b2[z]==0, b2'[z]-1I*beta2*b2[z]+C1*b1[z]==0,
b1[1.581825567*10^-6]==0.876212, b2[1.581825567*10^-6]==0.481925,
(b1[z])^2 (b2[z])^2==1}, {b1, b2}, z]
</code></pre>
<p>I am getting the values of <code>b1[z]</code> and <code>b2[z]</code>. My main problem is that, whatever values of <code>b1[z]</code> and <code>b2[z]</code> I obtain by solving the above equations, they should satisfy <code>|b1|^2+|b2|^2 <= 1</code>, which is my third boundary condition. Can anyone suggest how I can run the above equations with this third boundary condition, so that I obtain values of <code>b1[z]</code> and <code>b2[z]</code> that satisfy it? I have tried to run the program with this third boundary condition, but it shows an error, a snapshot of which I am attaching below. Please, if possible, try this problem in Mathematica before answering.</p>
<p><a href="https://i.stack.imgur.com/FlqDj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FlqDj.png" alt="enter image description here"></a></p>
| Michael E2 | 4,999 | <p>Your ODE is linear, so whether it can satisfy your constraint <code>|b1|^2+|b2|^2 <= 1</code> depends primarily on the eigenvalues of the coefficient matrix, as well as the initial condition.</p>
<p>The coefficient matrix is the second array returned by <a href="http://reference.wolfram.com/language/ref/CoefficientArrays.html" rel="nofollow"><code>CoefficientArrays</code></a>.</p>
<pre><code>ode = {b1'[z] - 1 I*beta1*b1[z] - C1*b2[z] == 0,
b2'[z] - 1 I*beta2*b2[z] + C1*b1[z] == 0} /. Equal -> Subtract
Normal@CoefficientArrays[ode, {b1[z], b2[z]}]
(*
{-I beta1 b1[z] - C1 b2[z] + b1'[z], C1 b1[z] - I beta2 b2[z] + b2'[z]}
{{b1'[z], b2'[z]}, derivatives
{{ -I beta1, -C1 }, linear coefficient matrix
{ C1, -I beta2 }}}
*)
</code></pre>
<p>Here are the <a href="http://reference.wolfram.com/language/ref/Eigenvalues.html" rel="nofollow"><code>Eigenvalues</code></a>:</p>
<pre><code>lambda = Eigenvalues@CoefficientArrays[ode, {b1[z], b2[z]}][[2]]
(*
{-(1/2) I (beta1 + beta2 - I Sqrt[-beta1^2 + 2 beta1 beta2 - beta2^2 - 4 C1^2]),
-(1/2) I (beta1 + beta2 + I Sqrt[-beta1^2 + 2 beta1 beta2 - beta2^2 - 4 C1^2])}
*)
</code></pre>
<p>There are conditions on the eigenvalues and the initial conditions that have to be satisfied for the constraint to be met. The conditions on the eigenvalues are relatively simple and will be shown below. Given that the system has appropriate eigenvalues, the initial condition has to be chosen such that the maximum of <code>|b1|^2+|b2|^2</code> is less than or equal to <code>1</code>. The conditions for this might be complicated in terms of the symbolic parameters <code>beta1</code>, <code>beta2</code>, and <code>C1</code>. It shouldn't be too hard to determine for numeric parameters. I'll leave that to the reader.</p>
<p>With respect to the eigenvalues, for the solution to meet the constraint for all time, the real parts of the eigenvalues need to be zero. We can use <a href="http://reference.wolfram.com/language/ref/Reduce.html" rel="nofollow"><code>Reduce</code></a> to simplify the resulting restrictions on the parameters in the ODE:</p>
<pre><code>Reduce[Thread[Re[lambda] == 0]]
(* Im[beta1 + beta2] == 0 && Re[Sqrt[-beta1^2 + 2 beta1 beta2 - beta2^2 - 4 C1^2]] == 0 *)
</code></pre>
<p>If the constraint is to be met only for all <strong>future</strong> time (from the initial condition onward), then the real part of the eigenvalues need only be less than or equal to zero. The constraints on the parameters then simplify to the following:</p>
<pre><code>Reduce[Thread[Re[lambda] <= 0]]
(*
(Im[beta1 + beta2] < 0 &&
Im[beta1 + beta2] <=
Re[Sqrt[-beta1^2 + 2 beta1 beta2 - beta2^2 - 4 C1^2]] <=
-Im[beta1 + beta2]) ||
(Im[beta1 + beta2] == 0 &&
Re[Sqrt[-beta1^2 + 2 beta1 beta2 - beta2^2 - 4 C1^2]] == 0)
*)
</code></pre>
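<p>A quick numerical sketch of this condition (the sample real parameter values below are my own): writing the system as <code>b' = M b</code> with <code>M = {{I beta1, C1}, {-C1, I beta2}}</code>, the eigenvalues of <code>M</code> are purely imaginary whenever <code>beta1</code>, <code>beta2</code>, and <code>C1</code> are real, so <code>|b1|^2+|b2|^2</code> is conserved:</p>

```python
import cmath

def eigenvalues_2x2(m):
    """Eigenvalues of a 2x2 matrix via the characteristic polynomial."""
    (a, b), (c, d) = m
    tr, det = a + d, a * d - b * c
    s = cmath.sqrt(tr * tr - 4 * det)
    return (tr + s) / 2, (tr - s) / 2

# Coefficient matrix of b' = M b from the ODE (sample real parameters)
beta1, beta2, C1 = 0.7, -1.3, 2.5
M = [[1j * beta1, C1], [-C1, 1j * beta2]]

for lam in eigenvalues_2x2(M):
    assert abs(lam.real) < 1e-12   # purely imaginary => norm-preserving
```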
|
4,013,630 | <blockquote>
<p>The population of bacteria triples each day on a petri dish. If it takes 20 days for the population of bacteria to fill the entire dish, how many days will it take bacteria to fill half of the petri dish?</p>
</blockquote>
<ul>
<li><p>My doubt 1: if we don't have an initial population, can we solve this problem?
Like <span class="math-container">$A(d)$</span> = initial population times <span class="math-container">$3^{d}$</span> after <span class="math-container">$d$</span> days, but as I don't have the initial population I am stuck.</p>
</li>
<li><p>My doubt 2: Also, can we assume a number for the population at day 20 and work our way backwards?</p>
</li>
</ul>
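<p>Regarding doubt 2: yes, and it works because the answer does not depend on the initial population. A quick sketch, normalizing the full dish to <span class="math-container">$1$</span> (an arbitrary normalization chosen for illustration):</p>

```python
import math

# Normalize: population = 1 (full dish) at day 20; tripling daily means
# population(d) = 3**(d - 20), regardless of the initial population.
def population(d):
    return 3.0 ** (d - 20)

# Solve 3**(d - 20) = 1/2  =>  d = 20 - log(2)/log(3)
d_half = 20 - math.log(2) / math.log(3)
assert abs(population(d_half) - 0.5) < 1e-12
assert 19.3 < d_half < 19.4   # about 19.37 days
```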
| Community | -1 | <p><span class="math-container">$$f(n):=\begin{cases}\text{even}(n)&\to-3,\\\text{odd}(n)&\to\ \ \ 7\end{cases}$$</span> is a valid answer.</p>
<hr />
<p>If you really want a "formula",</p>
<p><span class="math-container">$$2-5\cos(n\pi)$$</span>
or
<span class="math-container">$$(10n)\bmod20-3.$$</span></p>
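<p>Both closed forms are easy to check against the piecewise definition:</p>

```python
import math

# Verify both formulas reproduce: even n -> -3, odd n -> 7
for n in range(20):
    expected = -3 if n % 2 == 0 else 7
    assert abs((2 - 5 * math.cos(n * math.pi)) - expected) < 1e-9
    assert (10 * n) % 20 - 3 == expected
```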
|
2,332,544 | <p>I've been messing around with a quadratic that popped out of a problem I'm working on, and noticed it has a property I haven't seen before. The form is as follows:</p>
<p>$$ ax^2 + bx + c = 0$$
where
$$a = r_{1}^{2} + r_{2}^{2} - r_{3}^{2}$$
$$b = 2\left( r_{1} t_1 + r_{2} t_2 - r_{3} t_3 \right)$$
$$c = t_{1}^{2} + t_{2}^{2} - t_{3}^{2}$$
If we now define two vectors
$$ \underline u = [r_1, r_2, i r_3] \quad \underline v = [t_1, t_2, i t_3]$$
The quadratic can be re-written as
$$(\underline u \cdot \underline u) x^2 + 2(\underline u \cdot \underline v) x + (\underline v \cdot \underline v) = 0$$
and then
$$(\underline u x + \underline v) \cdot (\underline u x + \underline v) = 0$$</p>
<p>Does the fact that the equation possesses this property tell us anything useful? Can it be exploited in any way? is it a common property?</p>
<p>Thanks!</p>
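<p>The identity is easy to test numerically (the values of $r_i$, $t_i$ below are arbitrary samples of mine), using the non-conjugating dot product $\underline w \cdot \underline w = \sum_k w_k^2$: for either root $x$ of the quadratic, $(\underline u x + \underline v)\cdot(\underline u x + \underline v)$ vanishes:</p>

```python
import cmath

r1, r2, r3 = 1.0, 2.0, 3.0   # arbitrary sample values
t1, t2, t3 = 2.0, 1.0, 2.0

a = r1**2 + r2**2 - r3**2
b = 2 * (r1 * t1 + r2 * t2 - r3 * t3)
c = t1**2 + t2**2 - t3**2

u = [r1, r2, 1j * r3]
v = [t1, t2, 1j * t3]

for sign in (+1, -1):
    x = (-b + sign * cmath.sqrt(b * b - 4 * a * c)) / (2 * a)
    # Non-conjugating dot product of (u x + v) with itself
    residual = sum((uk * x + vk) ** 2 for uk, vk in zip(u, v))
    assert abs(residual) < 1e-9
```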
| Goa'uld | 336,195 | <p>Here is another fun proof. Consider the map</p>
<p>$$ f \colon S^{\infty} \to \mathbb{R} $$</p>
<p>$$ (x_0,x_1,\ldots) \mapsto \sum n x_n $$</p>
<p>This map is well defined since any element of $S^{\infty}$ has only a finite number of nonzero $x_i$. And it is continuous because the restriction to each $S^n$ is continuous. The images of the points $(0,\ldots,0,1,0, \ldots)$ and $(0,\ldots,0,-1,0, \ldots)$ are precisely the integers, so all the integers belong to the image. Since $S^{\infty}$ is connected and the map is continuous, the image is an interval containing every integer, hence all of $\mathbb{R}$; that is, the map is surjective. But $\mathbb{R}$ is not compact, and a continuous image of a compact space is compact, so $S^{\infty}$ cannot be compact. </p>
|
1,902,407 | <p>Expand given function $f$ as Taylor series around $c=3$
$$f(x) = \frac{x-3}{(x-1)^2}+\ln{(2x-4)} $$</p>
<p>and find the open interval on which that series converges. What is the radius of convergence?</p>
<p>This is what I have so far. We know that $\ln{(1+x)} = \sum_{n=1}^{\infty} (-1)^{n+1}\frac{x^n}{n}$ and $\frac{1}{1-x}=\sum_{n=0}^\infty x^n ,\ |x| \lt 1$.</p>
<p>We can rewrite $\ln{(2x-4)}$ as $\ln{(-\frac{1}{4})}+\ln{(-x/2+1)}$ and then expand the last expression, but I don't know what to do with $\ln{(-1/4)}$. We can split the given fraction into partial fractions as $\frac{x-3}{(x-1)^2} = \frac{1}{x-1}-\frac{2}{(x-1)^2}$. The first fraction we can expand easily, but I don't know how to expand the fraction with a binomial in the denominator.</p>
| barak manos | 131,263 | <p>$3$ is obviously the second prime.</p>
<p>And $11$ is the third, which you can easily prove by induction for $10^{2n}-1$:</p>
<ul>
<li>Base case: $10^2-1=99$ is divisible by $11$</li>
<li>Inductive step: $10^{2(n+1)}-1=100\cdot(10^{2n}-1)+99$, and both terms on the right are divisible by $11$</li>
</ul>
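<p>The induction step can also be checked numerically, using the identity $10^{2(n+1)}-1 = 100\,(10^{2n}-1) + 99$, in which both terms on the right are divisible by $11$:</p>

```python
for n in range(1, 10):
    assert 10 ** (2 * (n + 1)) - 1 == 100 * (10 ** (2 * n) - 1) + 99
    assert (10 ** (2 * n) - 1) % 11 == 0
```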
|
1,902,407 | <p>Expand given function $f$ as Taylor series around $c=3$
$$f(x) = \frac{x-3}{(x-1)^2}+\ln{(2x-4)} $$</p>
<p>and find the open interval on which that series converges. What is the radius of convergence?</p>
<p>This is what I have so far. We know that $\ln{(1+x)} = \sum_{n=1}^{\infty} (-1)^{n+1}\frac{x^n}{n}$ and $\frac{1}{1-x}=\sum_{n=0}^\infty x^n ,\ |x| \lt 1$.</p>
<p>We can rewrite $\ln{(2x-4)}$ as $\ln{(-\frac{1}{4})}+\ln{(-x/2+1)}$ and then expand the last expression, but I don't know what to do with $\ln{(-1/4)}$. We can split the given fraction into partial fractions as $\frac{x-3}{(x-1)^2} = \frac{1}{x-1}-\frac{2}{(x-1)^2}$. The first fraction we can expand easily, but I don't know how to expand the fraction with a binomial in the denominator.</p>
| ijbalazs | 403,552 | <p>Let us examine $41$. We will show that $41 \mid 10^{100} - 1$. First we consider the fifth power: $10^5 = 2439 \cdot 41 + 1$, so that $41 \mid 10^5 - 1$, and further $10^5 - 1 \mid (10^5)^{20} - 1 = 10^{100} - 1$.</p>
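<p>Both divisibility claims are easy to verify with exact integer arithmetic:</p>

```python
assert (10 ** 5 - 1) % 41 == 0       # 41 | 10^5 - 1 (indeed 99999 = 2439 * 41)
assert (10 ** 100 - 1) % 41 == 0     # hence 41 | 10^100 - 1
assert pow(10, 100, 41) == 1         # same fact via modular exponentiation
```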
|
2,202,498 | <p>Test whether the series $\sum_{n=1}^\infty\frac{1+\sin(n)}{2^n}$ converges or diverges. I used the limit comparison method in order to test its convergence. I chose $a_n = \frac{1+\sin(n)}{2^n}$ and $b_n = \frac{1}{2^n}$.</p>
<p>$$
\lim_{n \rightarrow \infty}\frac{\frac{1+\sin(n)}{2^n}}{\frac{1}{2^n}} = \lim_{n \rightarrow \infty}({1+\sin(n)}) = \infty
$$</p>
<p>However, WolframAlpha suggests that the series is in fact convergent. If this is the case, then is the way I set up the limit comparison test incorrect or is there another test that proves this series is convergent? Any help would be appreciated!</p>
| Alex Provost | 59,556 | <p>In $\mathbb{Z}_7$, we have $-2 = 5$ and $-4 = 3$. Therefore, if we let $A = \left(\begin{matrix}
0&2 \\
2 & 2
\end{matrix}\right)$, the equality $A^2 = 4I + 2A$ with $\mathbb{Z}_7$ coefficients is equivalent to $$0 = A^2 - 2A - 4I = A^2 + 5A + 3I = \phi(X^2 + 5X + 3).$$</p>
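<p>The matrix identity is quick to verify with arithmetic mod $7$:</p>

```python
def matmul_mod(A, B, p):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) % p for j in range(2)]
            for i in range(2)]

p = 7
A = [[0, 2], [2, 2]]
I = [[1, 0], [0, 1]]

A2 = matmul_mod(A, A, p)
rhs = [[(4 * I[i][j] + 2 * A[i][j]) % p for j in range(2)] for i in range(2)]
assert A2 == rhs   # A^2 = 4I + 2A in Z_7

# Equivalently, A satisfies X^2 + 5X + 3 = 0 over Z_7
phi = [[(A2[i][j] + 5 * A[i][j] + 3 * I[i][j]) % p for j in range(2)] for i in range(2)]
assert phi == [[0, 0], [0, 0]]
```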
|
574,220 | <p><strong>Question 1:</strong></p>
<p>Let the polynomial <span class="math-container">$f(x)=\displaystyle\sum_{i=0}^{3}a_{i}x^i$</span> have three real roots, where <span class="math-container">$a_{i}>0,\ i=0,1,2,3$</span>.</p>
<blockquote>
<p>show that:
<span class="math-container">$$g(x)=\sum_{i=0}^{3}a^m_{i}x^i$$</span> has only real roots, where <span class="math-container">$m\in \mathbb{R},\ m\ge 1$</span></p>
</blockquote>
<p><strong>My try:</strong></p>
<p><strong>case (1):</strong></p>
<p>suppose that <span class="math-container">$f(x)$</span> has a zero of multiplicity 3, then we assume</p>
<blockquote>
<p><span class="math-container">$$f(x)=(x+p)^3=x^3+3x^2p+3xp^2+p^3$$</span>
then
<span class="math-container">$$g(x)=x^3+(3p)^mx^2+(3p^2)^mx+(p^3)^{m}=(x+p^m)[x^2+(3^mp^m-p^m)x+p^{2m}]$$</span></p>
</blockquote>
<p>then</p>
<blockquote>
<p><span class="math-container">$$h(x)=x^2+p^m(3^m-1)x+p^{2m}\Longrightarrow \Delta =(p^m(3^m-1))^2-4p^{2m}\ge 0,$$</span>
since <span class="math-container">$3^m-1\ge 2$</span> for <span class="math-container">$m\ge 1$</span>; so in this case <span class="math-container">$g(x)$</span> has only real roots.</p>
</blockquote>
<p><strong>for case (2):</strong></p>
<p>let <span class="math-container">$f(x)=(x+p)^2(x+q)$</span>,</p>
<p>I can't prove it,</p>
<p><strong>and the case (3):</strong></p>
<p><span class="math-container">$$f(x)=(x+p)(x+q)(x+r)$$</span>
and this case I can't prove it too.</p>
<p><strong>I hope someone can help solve this nice problem ;</strong></p>
<p>Thank you very much!</p>
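<p>(A numerical sanity check, not a proof, for cases (2) and (3): the cubic $x^3+bx^2+cx+d$ has all roots real exactly when its discriminant $\Delta = 18bcd - 4b^3d + b^2c^2 - 4c^3 - 27d^2$ is nonnegative. The sample values of $p,q,r$ and $m$ below are arbitrary choices of mine.)</p>

```python
def disc(b, c, d):
    # Discriminant of x^3 + b x^2 + c x + d; nonnegative iff all roots are real
    return 18*b*c*d - 4*b**3*d + b*b*c*c - 4*c**3 - 27*d*d

def coeffs(p, q, r):
    # (x+p)(x+q)(x+r) = x^3 + b x^2 + c x + d
    return p + q + r, p*q + p*r + q*r, p*q*r

for (p, q, r) in [(1, 1, 2), (0.5, 0.5, 3), (1, 2, 5), (0.25, 1.5, 1.5)]:
    for m in [1, 1.5, 2, 3, 5]:
        b, c, d = coeffs(p, q, r)
        assert disc(b**m, c**m, d**m) >= -1e-9
```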
| mercio | 17,445 | <p>As I explained in my comment, it is enough to do case (2).</p>
<p>Let $x,y \ge 0$. Then $(X+x)(X+x)(X+y) = X^3+(2x+y)X^2 + (x^2+2xy)X + x^2y$.
The discriminant of $X^3+bX^2+cX+d$ is, according to Wikipedia, $\Delta(b,c,d) = b^2c^2-4c^3-4b^3d-27d^2+18bcd$.</p>
<p>The derivative of $m \mapsto \Delta(b^m,c^m,d^m)$ at $m=1$ is
$$\delta(b,c,d) \\
= (2b^2c^2-12b^3d+18bcd)\log b + (2b^2c^2-12c^3+18bcd)\log c + (18bcd-4b^3d-54d^2)\log d
$$</p>
<p>Plugging $b = 2x+y,c = x^2+2xy,d = x^2y$ we compute and obtain
$$ \delta(x,y) =
4x^6(\frac yx -1)^3\left(-(2+ \frac yx)\log \frac {(2x+y)^2}{x^2+2xy} + \frac yx \log \frac {(2x+y)^3} {x^2y}\right)$$</p>
<p>$\delta(x,y)/4x^6$ is actually a function of $t = \frac yx$ and has the same sign as $\delta(x,y)$ so it is enough to study the sign of the function $$g(t) = (t-1)\left(-(2+t)\log \frac{(2+t)^2}{1+2t} + t \log \frac {(2+t)^3}t\right) \\
= (t-1)\left((t-4)\log(2+t) + (2+t)\log(1+2t) - t\log t\right)
$$</p>
<p>Letting $g(t) = (t-1)h(t)$, we compute $h'(t) = \log(9+(t- t^{-1})^2) + 2\frac{(t-1)^2}{2t^2+5t+2} \ge \log 9 > 0 $. Since $h(1) = -3\log3+3\log3-\log1 = 0$, we have $h(t)>0$ for $t>1$, $h(t)<0$ for $t<1$, and $h(1)=0$. </p>
<p>Going back to $g$ we learn that $g(t)>0$ except when $t=1$, which means that $\delta(x,y) > 0$ except when $x=y$, which is what we wanted.</p>
|
<p>I've been doing some research on ranking algorithms, and I've read the research with interest. The aspects of the Schulze algorithm that appeal to me are that respondents do not have to rank all options, the ranking only has to be ordinal, and ties can be resolved. However, upon implementation, I'm having trouble showing that ties do not occur when using the algorithm.</p>
<p>I've put together a real-life example below, in which the result looks to be a tie. I can't figure out what I'm doing wrong. Is the solution simply to select a winner at random when there is a tie? Could someone please have a look and offer any advice?</p>
<p>Thanks very much,</p>
<p>David</p>
<p>Who do you think are the best basketball players?</p>
<p>_ Kevin Garnett</p>
<p>_ Lebron James</p>
<p>_ Josh Smith</p>
<p>_ David Lee</p>
<p>_ Tyson Chandler</p>
<p>5 users (let’s just call them User A, User B, User C, User D and User E) answer the question in the following way:</p>
<pre><code>User A: Kevin Garnett > Tyson Chandler > Josh Smith > David Lee > Lebron James
User B: Kevin Garnett > David Lee > Lebron James > Tyson Chandler > Josh Smith
User C: David Lee > Josh Smith > Kevin Garnett > Lebron James > Tyson Chandler
User D: Lebron James > Josh Smith > Tyson Chandler > David Lee > Kevin Garnett
User E: Tyson Chandler > David Lee > Kevin Garnett > Lebron James > Josh Smith
</code></pre>
<p>If you put together the matrix of pairwise preferences, it would look like:</p>
<pre><code>Each entry = number of voters who prefer the row candidate to the column candidate.

                 Garnett  Chandler  Lee  James  Smith
Kevin Garnett       -        3       2     4      3
Tyson Chandler      2        -       3     2      3
David Lee           3        2       -     4      3
Lebron James        1        3       1     -      3
Josh Smith          2        2       2     2      -
</code></pre>
<p>Now, the strongest paths are as follows (link strengths in parentheses; the weakest link determines each path's strength):</p>

<pre><code>From Kevin Garnett:
  to Tyson Chandler: Tyson Chandler (3)
  to David Lee:      Tyson Chandler (3) – David Lee (3)
  to Lebron James:   Lebron James (4)
  to Josh Smith:     Josh Smith (3)

From Tyson Chandler:
  to Kevin Garnett:  David Lee (3) – Kevin Garnett (3)
  to David Lee:      David Lee (3)
  to Lebron James:   David Lee (3) – Lebron James (4)
  to Josh Smith:     Josh Smith (3)

From David Lee:
  to Kevin Garnett:  Kevin Garnett (3)
  to Tyson Chandler: Kevin Garnett (3) – Tyson Chandler (3)
  to Lebron James:   Lebron James (4)
  to Josh Smith:     Josh Smith (3)

From Lebron James:
  to Kevin Garnett:  Tyson Chandler (4) – David Lee (3) – Kevin Garnett (3)
  to Tyson Chandler: Tyson Chandler (4)
  to David Lee:      Tyson Chandler (4) – David Lee (3)
  to Josh Smith:     Josh Smith (3)

From Josh Smith: no paths to any other candidate (strength 0)
</code></pre>
<p>So the new strongest paths grid is:</p>
<pre><code>Each entry = strength of the strongest path from the row candidate to the column candidate.

                 Garnett  Chandler  Lee  James  Smith
Kevin Garnett       -        3       3     4      3
Tyson Chandler      3        -       3     3      3
David Lee           3        3       -     4      3
Lebron James        3        4       3     -      3
Josh Smith          0        0       0     0      -
</code></pre>
<p>Kevin Garnett and David Lee would tie in this case.</p>
| Jay | 2,389 | <p>You can see that ties are not avoidable with a simpler example. Consider this election with three candidates and three votes:</p>
<pre><code>A>B>C (1 single vote)
B>C>A (1 single vote)
C>A>B (1 single vote)
</code></pre>
<p>The result will be a cycle, the Smith set contains all candidates, and you will need to break the tie.</p>
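<p>For completeness, here is a short plain-Python sketch (my own illustration, not the asker's code) of the Schulze strongest-path computation on the example in the question. One caveat: computing from the stated pairwise matrix gives a Lebron-to-Tyson path strength of 3 (the direct link has strength 3), not the 4 in the hand-worked grid above, so the computed result is actually a three-way tie; either way, a tie occurs and the method itself does not break it:</p>

```python
candidates = ["Garnett", "Chandler", "Lee", "James", "Smith"]
# d[i][j] = number of voters preferring candidate i to candidate j
d = [[0, 3, 2, 4, 3],
     [2, 0, 3, 2, 3],
     [3, 2, 0, 4, 3],
     [1, 3, 1, 0, 3],
     [2, 2, 2, 2, 0]]
n = len(d)

# Schulze: widest-path (Floyd-Warshall variant) over pairwise victories
p = [[d[i][j] if d[i][j] > d[j][i] else 0 for j in range(n)] for i in range(n)]
for i in range(n):
    for j in range(n):
        if i != j:
            for k in range(n):
                if k != i and k != j:
                    p[j][k] = max(p[j][k], min(p[j][i], p[i][k]))

# Potential winners: beaten-or-tied by nobody on path strength
winners = {candidates[a] for a in range(n)
           if all(p[a][b] >= p[b][a] for b in range(n) if b != a)}
assert winners == {"Garnett", "Chandler", "Lee"}   # a genuine tie
```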
|
2,877,032 | <p>$\int y^{2}dx+x^{2}dy$ where $C\left( t\right) =\left(a \cos t,b\sin t\right)$, $0<t< \pi$</p>
<p>Could someone give me a hint to evaluate this integral?</p>
<p>My efforts:
$C'\left( t\right)=\left(-a \sin t,b\cos t\right)$, so $\left\| C'\left( t\right) \right\| =\sqrt {a^{2}\sin ^{2}t+b^{2}\cos ^{2}t}$, but I didn't get anywhere from there. Should I employ Green's theorem?</p>
| Nosrati | 108,128 | <p>Let $x=a\cos t$ and $y=b\sin t$ and substitute into the integral. Therefore
\begin{align}
\int y^{2}dx+x^{2}dy
&= \int_0^\pi (b\sin t)^2(-a\sin t)\ dt + (a\cos t)^2(b\cos t)\ dt \\
&= ab\int_0^\pi (a\cos^3t-b\sin^3t)\ dt \\
&= \color{blue}{-\dfrac43ab^2}
\end{align}</p>
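<p>A quick numerical cross-check (sample values <code>a = 2</code>, <code>b = 3</code> of my own choosing, for which $-\frac43 ab^2 = -24$):</p>

```python
import math

a, b = 2.0, 3.0
N = 100000
h = math.pi / N

# Trapezoidal rule for the parametrized line integral
total = 0.0
for i in range(N + 1):
    t = i * h
    integrand = a * b * (a * math.cos(t) ** 3 - b * math.sin(t) ** 3)
    w = 0.5 if i in (0, N) else 1.0
    total += w * integrand * h

assert abs(total - (-4 / 3 * a * b * b)) < 1e-6
```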
|
2,071,054 | <p>Let $f(x)$ be continuous in $[a,b]$</p>
<p>Let $A$ be the set defined as:
$$A = \{ x \in [a,b] \mid f(x) = f(a) \}$$</p>
<ol>
<li><p>Does $A$ have a maximum? I guess it is the largest value in $[a,b]$ which $f$ sends to $f(a)$, but I don't know how to prove it.</p></li>
<li><p>Would it still have a maximum if $[a,b]$ would turn into $[a,b)$?</p></li>
</ol>
| Zaid Alyafeai | 87,813 | <p>$$\sum\nolimits_{i=0}^{\infty }{\sum\nolimits_{j=0}^{\infty }{\sum\nolimits_{k=0}^{\infty }{{{3}^{-\left( i+j+k \right)}}}}} = \sum\nolimits_{i=0}^{\infty }{3}^{-i}{\sum\nolimits_{j=0}^{\infty }{3}^{-j}{\sum\nolimits_{k=0}^{\infty }{3}^{-k}}}\\ =\left(\sum_{i=0}^\infty \frac{1}{3^i}\right)^3 = \left(\frac{1}{1-1/3}\right)^3 = \left(\frac{3}{2}\right)^3 = \frac{27}{8}$$</p>
|
1,379,878 | <blockquote>
<p>Let $M_n(\mathbb{C})$ denote the vector space over $\mathbb{C}$ of all $n\times n$ complex matrices. Prove that if $M$ is a complex $n\times n$ matrix then $C(M)=\{A\in M_n(\mathbb{C}) \mid AM=MA\}$ is a subspace of dimension at least $n$.</p>
</blockquote>
<p>My Try:</p>
<p>I proved that $C(M)$ is a subspace. But how can I show that it is of dimension at least $n$. No idea how to do it. I found similar questions posted in MSE but could not find a clear answer. So, please do not mark this as duplicate.</p>
<p>Can somebody please help me how to find this? </p>
<p>EDIT: Non of the given answers were clear to me. I would appreciate if somebody check my try below:</p>
<p>If $J$ is a Jordan Canonical form of $A$, then they are similar. Similar matrices have same rank. $J$ has dimension at least $n$. So does $A$. Am I correct?</p>
| user24142 | 208,255 | <p>In the case that $M$ has $n$ unique eigenvalues and eigenvectors, there is a fairly straightforward argument. For the eigenvalue $\lambda$, there are vectors $v_L$ and $v_R$ which are the left and right eigenvectors. Construct the $n\times n$ matrix $C_\lambda = v_L v_R^T$: $C_\lambda$ is such that all its rows are scalar multiples of $v_R$ and all its columns are scalar multiples of $v_L$.</p>
<p>Therefore $M C_\lambda = C_\lambda M = \lambda C_\lambda$.</p>
<p>I'd expect to be able to make good headway working with generalised eigenvectors.</p>
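<p>A tiny numerical illustration (a 2x2 example of my own): for the symmetric matrix $M$ with rows $(2,1)$ and $(1,2)$, the left and right eigenvectors coincide, and the two matrices $C_\lambda$ commute with $M$ and are linearly independent, giving $\dim C(M) \ge n$ here:</p>

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def outer(u, v):
    return [[ui * vj for vj in v] for ui in u]

M = [[2.0, 1.0], [1.0, 2.0]]   # symmetric, eigenvalues 3 and 1

C3 = outer([1, 1], [1, 1])     # built from the eigenvectors for eigenvalue 3
C1 = outer([1, -1], [1, -1])   # built from the eigenvectors for eigenvalue 1

for lam, C in [(3, C3), (1, C1)]:
    assert matmul(M, C) == matmul(C, M) == [[lam * x for x in row] for row in C]

# C3 and C1 are linearly independent, so dim C(M) >= 2 = n in this example
assert C3[0][1] * C1[0][0] != C1[0][1] * C3[0][0]
```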
|
1,685,621 | <p>I know that we can define two vectors to be orthogonal only if they are elements of a vector space with an inner product. </p>
<p>So, if $\vec x$ and $\vec y$ are elements of $\mathbb{R}^n$ (as a real vector space), we can say that they are orthogonal iff $\langle \vec x,\vec y\rangle=0$, where $\langle \vec x,\vec y\rangle $ is an inner product.</p>
<p>Usually the inner product is defined with respect to the standard basis $E=\{\hat e_1,\hat e_2 \}$ (for $n=2$ to simplify notations), the standard definition is:
$$
\langle \vec x,\vec y\rangle_E=x_1y_1+x_2y_2
$$
Where
$$
\begin{bmatrix}
x_1\\x_2
\end{bmatrix}
=[\vec x]_E
\qquad
\begin{bmatrix}
y_1\\y_2
\end{bmatrix}
=[\vec y]_E
$$
are the components of the two vectors in the standard basis and, by definition of the inner product, $\hat e_1$ and $\hat e_2$ are ortho-normal. </p>
<p>Now, if $\vec v_1$ and $\vec v_2$ are linearly independent the set $V=\{\vec v_1,\vec v_2\}$ is a basis and we can express any vector in this basis with a couple of components:
$$
\begin{bmatrix}
x'_1\\x'_2
\end{bmatrix}
=[\vec x]_V
\qquad
\begin{bmatrix}
y'_1\\y'_2
\end{bmatrix}
=[\vec y]_V
$$
from which we can define an inner product:
$$
\langle \vec x,\vec y\rangle_V=x'_1y'_1+x'_2y'_2
$$</p>
<p>Obviously we have:
$$
[\vec v_1]_V=
\begin{bmatrix}
1\\0
\end{bmatrix}
\qquad
[\vec v_2]_V=
\begin{bmatrix}
0\\1
\end{bmatrix}
$$
and $\{\vec v_1,\vec v_2\}$ are orthogonal (and normal) for the inner product $\langle \cdot,\cdot\rangle_V$.</p>
<p>This means that any two linearly independent vectors are orthogonal with respect to a suitable inner product defined by a suitable basis. So orthogonality seems a ''coordinate dependent'' concept. </p>
<p>The question is: is my reasoning correct? And, if yes, what makes the usual standard basis so special that we choose it for the usual definition of orthogonality? </p>
<hr>
<p>I add something to better illustrate my question.</p>
<p>If my reasoning is correct then, for any basis of a vector space, there is an inner product such that the vectors of the basis are orthogonal. If we think of vectors as oriented segments (in a purely geometric sense), this seems to contradict our intuition of what ''orthogonal'' means, and also a geometric definition of orthogonality. So why does what we call the ''standard basis'' seem to be in accord with intuition while other bases do not? </p>
| Quarky Quanta | 320,443 | <p>Let me try to add my personal intuition to this conversation and see if that helps you a bit. If you take the standard basis vectors and draw them in the Cartesian coordinate system, you will note that they satisfy our intuition of what perpendicular vectors look like. We further note that any two vectors that are orthogonal as per the inner product in this basis do in fact match the same intuition of being perpendicular. If you pick a different basis, let's say $\{v_1, v_2\}$, then as you noted these two vectors will be orthogonal as per the inner product defined in their basis. In the standard basis, the angle between them is $\theta = \arccos\left(\frac{v_1 \cdot v_2}{|v_1||v_2|}\right)$. Any set of vectors that are orthogonal in this new basis will in fact have the same angle $\theta$ between them in the standard basis. </p>
<p>To summarize, if you pick any arbitrary basis, the angle between the basis vectors, when drawn in the standard basis, is the same as the angle between any two vectors (again, when drawn in the standard basis) that are orthogonal in this new basis. So it makes sense to pick the standard basis as the standard, because the basis vectors there are in fact at 90 degrees (fitting well with our intuition). Consequently, all the vectors that are orthogonal in that basis will also have a 90 degree angle between them. </p>
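<p>This can be made concrete with a small computation (the basis below is my own example): take $v_1=(1,0)$, $v_2=(1,1)$, which are not orthogonal in the standard inner product, and define $\langle x,y\rangle_V$ via coordinates in the basis $V=\{v_1,v_2\}$. Then $\langle v_1,v_2\rangle_V = 0$:</p>

```python
# Basis V = {v1, v2}; not orthogonal for the standard inner product
v1, v2 = (1.0, 0.0), (1.0, 1.0)

def coords(x):
    """Coordinates of x in the basis {v1, v2} (solving a 2x2 system)."""
    a, c = v1[0], v2[0]   # change-of-basis matrix [[a, c], [b, d]]
    b, d = v1[1], v2[1]   # with v1, v2 as its columns
    det = a * d - c * b
    return ((d * x[0] - c * x[1]) / det, (-b * x[0] + a * x[1]) / det)

def inner_V(x, y):
    xp, yp = coords(x), coords(y)
    return xp[0] * yp[0] + xp[1] * yp[1]

assert sum(s * t for s, t in zip(v1, v2)) != 0   # not orthogonal in standard i.p.
assert abs(inner_V(v1, v2)) < 1e-12              # orthogonal in <.,.>_V
```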
|
416,497 | <p>I was recently trying to find a numerical solution to a thermodynamics problem and the expression <span class="math-container">$x\ln x$</span> appeared in one of the computations. I did not have to find its value very near <span class="math-container">$0$</span>, so the computer managed fine, but it got me thinking - can one make a stable numerical algorithm to compute <span class="math-container">$x\ln x$</span> for values near 0?</p>
<p>It is easy to prove that <span class="math-container">$\lim\limits_{x\to 0^+} x\ln x=0$</span>. However, simply multiplying <span class="math-container">$x$</span> by <span class="math-container">$\ln x$</span> is not a good solution for small <span class="math-container">$x$</span>. The problem seems to be that we are multiplying a small number (<span class="math-container">$x$</span>) by a large number (<span class="math-container">$\ln x$</span>).</p>
<p>My first thought would be to approximate it somehow. But I quickly saw that Taylor series wouldn't work, as the derivative is <span class="math-container">$\ln x + 1$</span>, which blows up (or rather down :-)) to <span class="math-container">$-\infty$</span>. Some kind of iterative method like Newton's method does not seem to be the solution either, because the operations needed seem to be even more messy than what we are trying to compute.</p>
<p>So my question is - is there some numerically stable method to compute <span class="math-container">$x\ln x$</span> for small values of <span class="math-container">$x$</span>? And preferably one that is more general, so that it could be used on functions like <span class="math-container">$x^n \ln x$</span> - but these at least have a finite first derivative at <span class="math-container">$0$</span> for <span class="math-container">$n>1$</span>.</p>
| Robert Israel | 13,650 | <p>If <span class="math-container">$x$</span> is represented in floating-point as <span class="math-container">$y \times 10^{-d}$</span>, <span class="math-container">$0.1 < y \le 1$</span>, <span class="math-container">$d \in \mathbb N$</span>, note that</p>
<p><span class="math-container">$$ x \ln(x) = y (\ln(y) - d \ln(10)) \times 10^{-d} $$</span></p>
<p>which shouldn't be a problem to evaluate accurately.</p>
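<p>A small sketch of this in code (my own illustration; extracting the decimal exponent with <code>log10</code> and <code>ceil</code> is a simplification and may pick the neighboring decade at exact powers of ten, which does not affect the value):</p>

```python
import math

def xlnx_split(x):
    """Evaluate x*ln(x) as y*(ln(y) - d*ln(10))*10**-d with x = y*10**-d."""
    d = -math.ceil(math.log10(x))
    y = x * 10.0 ** d
    return y * (math.log(y) - d * math.log(10)) * 10.0 ** (-d)

for x in [2.3e-5, 7.0e-40, 1.0e-200, 0.3]:
    direct = x * math.log(x)
    assert abs(xlnx_split(x) - direct) <= 1e-12 * abs(direct)
```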
|
416,497 | <p>I was recently trying to find a numerical solution to a thermodynamics problem and the expression <span class="math-container">$x\ln x$</span> appeared in one of the computations. I did not have to find its value very near <span class="math-container">$0$</span>, so the computer managed fine, but it got me thinking - can one make a stable numerical algorithm to compute <span class="math-container">$x\ln x$</span> for values near 0?</p>
<p>It is easy to prove that <span class="math-container">$\lim\limits_{x\to 0^+} x\ln x=0$</span>. However, simply multiplying <span class="math-container">$x$</span> by <span class="math-container">$\ln x$</span> is not a good solution for small <span class="math-container">$x$</span>. The problem seems to be that we are multiplying a small number (<span class="math-container">$x$</span>) by a large number (<span class="math-container">$\ln x$</span>).</p>
<p>My first thought would be to approximate it somehow. But I quickly saw that Taylor series wouldn't work, as the derivative is <span class="math-container">$\ln x + 1$</span>, which blows up (or rather down :-)) to <span class="math-container">$-\infty$</span>. Some kind of iterative method like Newton's method does not seem to be the solution either, because the operations needed seem to be even more messy than what we are trying to compute.</p>
<p>So my question is - is there some numerically stable method to compute <span class="math-container">$x\ln x$</span> for small values of <span class="math-container">$x$</span>? And preferably one that is more general, so that it could be used on functions like <span class="math-container">$x^n \ln x$</span> - but these at least have a finite first derivative at <span class="math-container">$0$</span> for <span class="math-container">$n>1$</span>.</p>
| Brendan McKay | 9,025 | <p>Modern mathematics libraries should be able to find <span class="math-container">$\log x$</span> precisely for all floating-point numbers, as the algorithms for doing that have long been known and adopted. My experiments on a fairly recent Intel chip with gnu mathematics library and gcc 10 compiler confirm that.</p>
<p>Multiplication is even more definite: correct rounding of the last bit is guaranteed (though there can be different options available for what "correct rounding" means).</p>
<p>It might appear from the above that precise computation of <span class="math-container">$x\log x$</span> is guaranteed for any small <span class="math-container">$x>0$</span>. However there is a reason why that doesn't happen for really tiny <span class="math-container">$x$</span> and there is no way to avoid it.</p>
<p>Floating-point numbers are usually stored with the mantissa normalized (leading bit 1, sometimes implicit). However, when the number is too small it may be impossible to normalize the number without the exponent underflowing. In this case (usually) the number is kept in unnormalized form and the number of bits of precision is reduced. This situation is called <em>partial underflow</em> and such numbers are <em>subnormal</em> or <em>denormalized</em>.</p>
<p>So, if you try to compute <span class="math-container">$x\log x$</span> when <span class="math-container">$x\log x$</span> is in the partial underflow range, <span class="math-container">$\log x$</span> will be computed precisely but the multiplication by <span class="math-container">$x$</span> cannot produce more bits of precision than numbers of the size of <span class="math-container">$x\log x$</span> have. Short of using different floating-point numbers, there is no solution.</p>
<p>If <span class="math-container">$x\log x$</span> is in the partial underflow range, then <span class="math-container">$x$</span> will be too, or maybe it will be so small as to underflow to 0. In practice <span class="math-container">$x$</span> will come from some earlier computation and the partial underflow means it may not be so precise as thought, which is another source of error. It isn't the fault of the function <span class="math-container">$x\log x$</span>.</p>
<p>Incidentally I tested this explanation empirically and behaviour was exactly as predicted.</p>
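<p>The subnormal range is easy to poke at directly; in IEEE-754 binary64, <code>5e-324</code> is the smallest positive subnormal (a Python sketch):</p>

```python
import math
import sys

tiny = 5e-324                      # smallest positive subnormal binary64 double
assert tiny > 0
assert tiny / 2 == 0.0             # halfway to 0, rounds to 0: no bits left below
assert tiny < sys.float_info.min   # below the smallest *normal* double
assert math.isfinite(math.log(tiny))   # log itself is fine (about -744.44)

# x*log(x) here lands deep in the subnormal range, where only a few
# significand bits survive; the precision loss is in the representation,
# not in log or in the multiplication.
product = tiny * math.log(tiny)
assert product < 0 and math.isfinite(product)
```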
|
1,183,643 | <p>Given an integer $h$</p>
<blockquote>
<p>What is $N(h)$, the number of full binary trees of height less than $h$?</p>
</blockquote>
<p><img src="https://i.stack.imgur.com/XcNVi.jpg" alt="enter image description here"></p>
<p>For example $N(0)=1,N(1)=2,N(2)=5, N(3)=21$ (as pointed out by <a href="https://math.stackexchange.com/users/212738/travisj">TravisJ</a> in his partial answer). I can't find an expression for $N(h)$, nor a reasonable upper bound.</p>
<p><strong>Edit</strong> In a full binary tree (sometimes called proper binary tree) every node other than the leaves has two children.</p>
| Marko Riedel | 44,883 | <p>Here is my contribution to this interesting discussion. Introduce $T_{\le n}(z)$ the OGF of full binary trees of height at most $n$ by the number of nodes. Now a tree of height at most $n$ either has height at most $n-1$ or height exactly $n.$ The latter tree has a subtree of height $n-1$ on the left or the right of the root node or the root has two children of height $n-1.$ This gives
$$T_{\le n} = T_{\le n-1} + 2z (T_{\le n-1}-T_{\le n-2})T_{\le n-2} + z(T_{\le n-1}-T_{\le n-2})^2$$
where $T_{\le 0} = 1$ and $T_{\le 1} = z+1.$
Observe that $$T_{=n} = T_{\le n} - T_{\le n-1}.$$
Some algebra produces the simplified form
$$T_{\le n} = T_{\le n-1} + z T_{\le n-1}^2 - z T_{\le n-2}^2.$$
This produces e.g. the following generating function for trees of height at most four by the number of nodes:
$${z}^{15}+8\,{z}^{14}+28\,{z}^{13}+60\,{z}^{12}+94\,{z}^{11}
+116\,{z}^{10}\\+114\,{z}^{9}+94\,{z}^{8}+69\,{z}^{7}
+44\,{z}^{6}+26\,{z}^{5}+14\,{z}^{4}+5\,{z}^{3}+2\,{z}^{2}+z+1.$$
For the count compute the sequence $T_{\le n}(1)$ which yields
$$1, 2, 5, 26, 677, 458330, 210066388901, 44127887745906175987802,\\
1947270476915296449559703445493848930452791205,\ldots$$
This is <a href="https://oeis.org/A003095" rel="nofollow noreferrer">OEIS A003095</a> and has closed form recurrence
$$t_n = t_{n-1} + t_{n-1}^2-t_{n-2}^2$$
with $t_0=1$ and $t_1=2.$
The number of trees of height <strong>exactly</strong> $n$ is given by
$$t_n-t_{n-1}$$ which gives the sequence
$$1, 3, 21, 651, 457653, 210065930571, 44127887745696109598901,\\
1947270476915296449559659317606103024276803403,\ldots$$
which is <a href="https://oeis.org/A001699" rel="nofollow noreferrer">OEIS A001699</a>.</p>
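<p>The simplified recurrence and both sequences are easy to check by direct computation; here is a quick Python sketch (an illustration added for checking, not part of the original derivation) that reproduces the values quoted above:</p>

```python
# Count full binary trees of height at most n via t_n = t_{n-1} + t_{n-1}^2 - t_{n-2}^2
def bounded_height_counts(n):
    t = [1, 2]  # t_0 = 1, t_1 = 2
    for _ in range(2, n + 1):
        t.append(t[-1] + t[-1] ** 2 - t[-2] ** 2)
    return t[: n + 1]

t = bounded_height_counts(6)
print(t)  # OEIS A003095: 1, 2, 5, 26, 677, 458330, 210066388901

# Trees of height exactly n: consecutive differences, OEIS A001699
print([b - a for a, b in zip(t, t[1:])])  # 1, 3, 21, 651, 457653, ...
```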
<p><strong>Remark.</strong> Reviewing this post several years later we see that we have not employed the definition of a full binary tree as shown e.g. at <a href="https://en.wikipedia.org/wiki/Binary_tree" rel="nofollow noreferrer">Wikipedia</a>, because we admit trees where one of the children is a leaf. But the OEIS says we have the right values; how can this be explained? We can count full binary trees by not admitting leaves (trees of size zero) so that $T_{\le 0}(z)=0$ and $T_{\le 1}(z) = z.$ This gives starting values for the recurrence as $0$ and $1$ with the next value being $2.$ However, $1$ and $2$ are the starting values for ordinary binary trees, which explains the matching values (shift the sequence and prepend a zero term). Hence the above calculation goes through. (Note: we take height zero to be the height of a leaf and height one the height of a singleton.)</p>
|
1,986,007 | <p>What is the intuitive explanation of the Inverse Function Theorem (and generalized to multiple dimensions)?</p>
| Gyu Eun Lee | 52,450 | <p>The moral of all differential calculus is this:</p>
<blockquote>
<p>Differentiable functions behave locally like their linearizations.</p>
</blockquote>
<p>The degree to which this statement holds can be quantitatively formulated, for example in terms of error estimates for first-order Taylor expansions. It also manifests qualitatively; for an easy example, if $f:\mathbb{R}\to\mathbb{R}$ has a linearization $\ell_p(t) = f(p) + f'(p)(t-p)$ which is increasing (i.e. $f'(p)>0$), then $f$ is also locally increasing.</p>
<p>The moral of the inverse function theorem goes like this. First, let $f$ be differentiable on a neighborhood of $p$, and suppose for simplicity that $f(p) = 0$, so that the linearization of $f$ at $p$ is $\ell_p(x) = D_pf(x-p)$. Then:</p>
<blockquote>
<p>If a function $f$ is differentiable on a neighborhood of a point $p$ and its linearization $D_pf$ at $p$ is invertible, then $f$ is locally invertible. Moreover, the inverse of the linearization $(D_pf)^{-1}$ is precisely the linearization of the inverse $f^{-1}$ of $f$ (which we know to be locally defined).</p>
</blockquote>
<p>So all we're saying is that invertibility is one of the properties that a function $f$ can copy from its linearization $D_pf$. But because a given linear approximation is only accurate (i.e. has small error) in a small neighborhood of the point of approximation, and you can only expect $f$ to behave like $D_pf$ when the approximation is accurate, you can only conclude that $f$ is invertible on a small neighborhood.</p>
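<p>To make the last statement concrete in one dimension, here is a small numerical sketch (the function $f(x)=x^3+x$ and the bisection inversion are my own illustration, not part of the theorem): since $f'(p)=3p^2+1$ is never zero, $f$ is invertible, and the derivative of the numerically computed inverse at $f(p)$ agrees with $1/f'(p)$.</p>

```python
def f(x):
    return x ** 3 + x  # strictly increasing, so globally invertible

def f_inverse(y, lo=-10.0, hi=10.0):
    # Invert f by bisection; f is increasing on [lo, hi].
    for _ in range(200):
        mid = (lo + hi) / 2
        if f(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

p, h = 0.5, 1e-5
# Central-difference approximation of (f^{-1})'(f(p))
inv_deriv = (f_inverse(f(p) + h) - f_inverse(f(p) - h)) / (2 * h)
print(inv_deriv, 1 / (3 * p ** 2 + 1))  # both ≈ 0.5714...
```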
|
951,524 | <p>How do I prove, that $(\mathbb R×\mathbb R;+,*) $ is a ring, but not a field, where the $+$ and $*$ operations are: $(a,b)+(c,d):=(a+c,b+d)$ and $(a,b)*(c,d):=(ac,bd)$?</p>
<p>For the solution: so I would have to first show, that $(\mathbb R×\mathbb R;+,*) $ is a ring, I have to prove that using the definition of a ring:</p>
<ol>
<li>$(\mathbb R×\mathbb R;+)$ has to be commutative group</li>
<li>$(\mathbb R×\mathbb R;*)$ has to be a semigroup</li>
<li>$*$ must be distributive over $+$ (from both sides)</li>
</ol>
<hr>
<p>If 1. 2. and 3. can be proven, then $(\mathbb R×\mathbb R;+,*) $ is a ring.</p>
<ol>
<li>here I have to prove that $(\mathbb R×\mathbb R;+) $ is an algebraic structure where $+$ is associative and commutative; the identity element is $0$ and that all elements have an inverse</li>
<li>$*$ has to be associative</li>
<li>? (How do I prove distributivity in this particular example?)</li>
</ol>
<hr>
<p>Now I have to prove that $(\mathbb R×\mathbb R;+,*) $ is not a field. (How do I do that?)
Also how can I show whether or not $(\mathbb R×\mathbb R;+,*) $ is a <em>commutative</em> ring?</p>
| robjohn | 13,854 | <p>The function
$$
\frac{180/z}{z^{180}-1}
$$
has residue $1$ at each root of $z^{180}-1$ $\left(\text{i.e. }e^{k\pi i/90}\text{ for }k=0\dots179\right)$ and residue $-180$ at $z=0$.</p>
<p>On $|z|=1$,
$$
\tan(\theta/2)=-i\frac{z-1}{z+1}
$$
Integrating
$$
f(z)=-\left(\frac{z-1}{z+1}\right)^2\frac{180/z}{z^{180}-1}
$$
around a large circle gives $0$, since the integrand is approximately $|z|^{-181}$. Thus, the sum of the residues is
$$
2\sum_{k=0}^{89}\tan^2\left(\frac{k\pi}{180}\right)+\operatorname*{Res}_{z=0}f(z)+\operatorname*{Res}_{z=-1}f(z)=0
$$
Since $\operatorname*{Res}\limits_{z=0}f(z)=180$ and $\operatorname*{Res}
\limits_{z=-1}f(z)=-\frac{32402}{3}$, we get
$$
\begin{align}
\sum_{k=0}^{89}\tan^2\left(\frac{k\pi}{180}\right)
&=\frac12\left(\frac{32402}{3}-180\right)\\
&=\frac{15931}{3}
\end{align}
$$</p>
<hr>
<p>This same method also gives
$$
\sum_{k=0}^{89}\tan^4\left(\frac{k\pi}{180}\right)=\frac{524560037}{45}
$$
and
$$
\sum_{k=0}^{89}\tan^6\left(\frac{k\pi}{180}\right)=\frac{4855740968543}{135}
$$</p>
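<p>These closed forms are easy to sanity-check in floating point; a short Python sketch (an added check, not part of the residue argument):</p>

```python
import math

# Sums over k = 0, ..., 89 of tan^2 and tan^4 at k*pi/180
s2 = sum(math.tan(k * math.pi / 180) ** 2 for k in range(90))
s4 = sum(math.tan(k * math.pi / 180) ** 4 for k in range(90))
print(s2 - 15931 / 3)        # ≈ 0
print(s4 - 524560037 / 45)   # ≈ 0
```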
|
1,552,501 | <p><a href="https://i.stack.imgur.com/zFz40.jpg" rel="nofollow noreferrer">My Question</a></p>
<p>First I let u = y' and employed the Chain Rule to obtain du/dx = du/dy * u</p>
<p>But I am not sure where to go from there. Any tips, suggestions, or solutions to the problem would be much appreciated!</p>
<p>I suspect there may also be different families of solutions to this problem.</p>
| amd | 265,466 | <p>They’re not saying that they’re equal to the angle, only that they’re approximately equal. </p>
<p>The Taylor series for $\sin\theta$ around $0$ is $\theta-\frac{\theta^3}6+\dots$, and that for $\tan\theta$ is $\theta+\frac{\theta^3}3+\dots$, so for very small $\theta$ we can drop the higher-order terms as an approximation. If $\sin^2\theta\tan\theta$ is small, then so is $\theta$, so you can use these approximations.</p>
|
1,552,501 | <p><a href="https://i.stack.imgur.com/zFz40.jpg" rel="nofollow noreferrer">My Question</a></p>
<p>First I let u = y' and employed the Chain Rule to obtain du/dx = du/dy * u</p>
<p>But I am not sure where to go from there. Any tips, suggestions, or solutions to the problem would be much appreciated!</p>
<p>I suspect there may also be different families of solutions to this problem.</p>
| marty cohen | 13,079 | <p>This is because
of the only
trig limit that matters:</p>
<p>$\lim_{x \to 0} \frac{\sin x}{x}
= 1
$.</p>
<p>This follows from the expansion
$\sin(x)
=x-\frac{x^3}{6}+...
$.</p>
<p>From this you can deduce that
$\lim_{x \to 0} \frac{1-\cos(x)}{x^2}
=\frac12
$
and
$\lim_{x \to 0} \frac{\tan x}{x}
= 1
$.</p>
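<p>All three limits can be checked numerically by plugging in a small argument; a quick Python sketch (an added illustration, not part of the answer):</p>

```python
import math

x = 1e-4
print(math.sin(x) / x)            # ≈ 1
print((1 - math.cos(x)) / x**2)   # ≈ 0.5
print(math.tan(x) / x)            # ≈ 1
```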
|
1,918,781 | <p>Suppose that $\phi:\mathbb R^3\to\mathbb R$ is a harmonic function. I am asked to show that for any sphere centered at the origin, the average value of $\phi$ over the sphere is equal to $\phi(0)$. I am also directed to use Green's second identity: for any smooth functions $f,g:\mathbb R^3\to \mathbb R$, and any sphere $S$ enclosing a volume $V$, $$\int_S (f\nabla g-g\nabla f)\cdot dS=\int_V (f\nabla^2g-g\nabla^2 f)dV.$$</p>
<p>Here is what I have tried: left $f=\phi$ and $g(\mathbf r)=|\mathbf r|$ (distance from the origin). Then $\nabla g=\mathbf{\hat r}$, $\nabla^2 g=\frac1r$, and $\nabla^2 f=0$. Note also that $\int_Sg\nabla f\cdot dS=r\int_S\nabla f\cdot dS=0.$ Then $$\frac{1}{4\pi r^2}\int_S f(x)\,dA=\frac{1}{4\pi r^2}\int_S f(x)\nabla g(x)\cdot dS=\frac{1}{4\pi r^2}\int_S (f\nabla g-g\nabla f)\cdot dS.$$ Using Green's identity, this is equal to $$\frac{1}{4\pi r^2}\int_V (f\nabla^2g-g\nabla^2 f)\,dV=\frac{1}{4\pi r^2}\int_V\frac{f}{r}\,dV.$$ This reminds me of the Cauchy integral formula. If there is indeed some sort of identity that I can use to show that the last integral is equal to $f(0)$? Or is there another way to solve this problem?</p>
| Plutoro | 108,709 | <p>I appreciate the other answers, but I came up with my own answer which, in my humble opinion, is a bit simpler.</p>
<p>For posterity's sake, here is the answer: Let <span class="math-container">$g=1/|\mathbf r|$</span> and <span class="math-container">$f=\phi$</span>. Then the following hold: <span class="math-container">$$\int_S g(\mathbf r)\nabla f(\mathbf r)\cdot d\mathbf S=\frac{1}{R}\int_S \nabla f\cdot d\mathbf S=0,$$</span> (the last equality by the divergence theorem, since <span class="math-container">$f$</span> is harmonic), <span class="math-container">$$\nabla^2 f(\mathbf r)=0\quad\quad\text{for } \mathbf r\in V,$$</span> <span class="math-container">$$\nabla g(\mathbf r)=-\frac{\mathbf{\hat r}}{r^2},$$</span> <span class="math-container">$$\nabla^2 g(\mathbf r)=-4\pi \delta(\mathbf r).$$</span> Thus, the average value of <span class="math-container">$\phi$</span> on <span class="math-container">$S$</span> is
<span class="math-container">\begin{align*}
\frac{1}{4\pi R^2}\int_S \phi(\mathbf r)\,dS&=\frac{1}{4\pi}\int_S f(\mathbf r)\left(\frac{\hat r}{R^2}\right)\cdot d\mathbf S\\
&=-\frac{1}{4\pi}\int_S f(\mathbf r)\nabla g(\mathbf r)\cdot d\mathbf S\\
&=-\frac{1}{4\pi}\int_S (f(\mathbf r)\nabla g(\mathbf r)-g(\mathbf r)\nabla f(\mathbf r))\cdot d\mathbf S\\
&=-\frac{1}{4\pi}\int_V(f(\mathbf r)\nabla^2 g(\mathbf r)-g(\mathbf r)\nabla^2 f(\mathbf r))\,dV\\
&=\frac{4\pi}{4\pi}\int_V\phi(\mathbf r) \delta(\mathbf r)\,dV\\
&=\phi(\mathbf 0).
\end{align*}</span></p>
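<p>The mean value property is easy to sanity-check by Monte Carlo; here is a Python sketch (the harmonic function and the sampling scheme are my own choices for illustration): for <span class="math-container">$\phi(x,y,z)=2+x+x^2-y^2$</span>, the average over a sphere of any radius centered at the origin should be <span class="math-container">$\phi(\mathbf 0)=2$</span>.</p>

```python
import math, random

def phi(x, y, z):
    # Harmonic: the Laplacian of x + x**2 - y**2 is 0 + 2 - 2 = 0.
    return 2 + x + x ** 2 - y ** 2

random.seed(0)
R, N, total = 1.5, 100_000, 0.0
for _ in range(N):
    # Uniform point on the sphere of radius R via normalized Gaussians.
    gx, gy, gz = random.gauss(0, 1), random.gauss(0, 1), random.gauss(0, 1)
    norm = math.sqrt(gx * gx + gy * gy + gz * gz)
    total += phi(R * gx / norm, R * gy / norm, R * gz / norm)

print(total / N)  # ≈ phi(0, 0, 0) = 2
```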
|
18,174 | <p>I am an undergraduate secondary math education major. In <span class="math-container">$2$</span> weeks I have to give a <a href="https://www.mathmammoth.com/lessons/number_talks.php" rel="nofollow noreferrer">Number Talk</a> in my math ed class on the problem "<span class="math-container">$3.9$</span> times <span class="math-container">$7.5$</span>". I need to come up with as many different solution methods as possible. </p>
<p>Here is what I have come up with so far:</p>
<ol>
<li><p>The most common way: multiply the two numbers "vertically", ignoring the decimal, to get <span class="math-container">$2925$</span>:
<span class="math-container">\begin{array}
{}\hfill {}^6{}^439\\
\hfill \times\ 75 \\\hline
\hfill {}^1 195 \\
\hfill +\ 273\phantom{0} \\\hline
\hfill 2925
\end{array}</span> Since there are two digits to the right of the decimal points in the two factors, place the decimal after the <span class="math-container">$9$</span> to get the answer <span class="math-container">$29.25$</span>.</p></li>
<li><p>Write both numbers as improper fractions: <span class="math-container">$$3.9= \dfrac{39}{10}$$</span> and <span class="math-container">$$7.5=\dfrac{75}{10}$$</span>Then multiply <span class="math-container">$$\dfrac{39}{10}\cdot\dfrac{75}{10}$$</span> to get <span class="math-container">$\dfrac{2925}{100}$</span> which simplifies to 29.25.</p></li>
<li><p>Use lattice multiplication. This is a very uncommon method that I doubt the students will use, and I need to review it myself before I consider it.</p></li>
<li><p>Since <span class="math-container">$3.9$</span> is very close to <span class="math-container">$4$</span>, we could instead do <span class="math-container">$4\cdot7.5=30$</span> and then subtract <span class="math-container">$0.1\cdot7.5=0.75$</span> to get <span class="math-container">$30 - 0.75=29.25$</span></p></li>
<li><p>Similarly, since <span class="math-container">$7.5$</span> rounds up to <span class="math-container">$8$</span>, we can do <span class="math-container">$3.9\cdot 8=31.2$</span> and then subtract <span class="math-container">$0.5\cdot 3.9=1.95$</span> to get <span class="math-container">$31.2-1.95=29.25$</span> </p></li>
</ol>
<p>Are there any other possible methods the students might use? (<strong>Note:</strong> they are junior college math ed students.) Thanks!</p>
| Joey Lakerdas-Gayle | 13,846 | <p>You can do questions like this in your head pretty easily, which is a great skill to teach students. Since <span class="math-container">$3.9$</span> is close to <span class="math-container">$4$</span> and <span class="math-container">$7.5$</span> is easy to double, write <span class="math-container">$3.9\times 7.5 = 4\times 7.5 - 0.1\times7.5$</span>.
<span class="math-container">$4\times 7.5$</span> is just doubling twice: <span class="math-container">$7.5\times 2=15$</span>, and <span class="math-container">$15\times 2=30$</span>. Then subtract <span class="math-container">$0.1\times 7.5=0.75$</span> to get <span class="math-container">$29.25$</span>.
A similar process can be used to do most questions like this in your head.</p>
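<p>All of the decompositions above can be checked exactly with rational arithmetic; a quick Python sketch (an added check, not part of the answer):</p>

```python
from fractions import Fraction

a, b = Fraction(39, 10), Fraction(75, 10)
exact = a * b                      # improper-fraction method
print(exact, float(exact))         # 117/4 29.25

# Round 3.9 up to 4, then compensate
print(4 * b - Fraction(1, 10) * b)
# Round 7.5 up to 8, then compensate
print(8 * a - Fraction(5, 10) * a)
```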
|
377,720 | <p>Let $u$ be the distribution on $\mathbb{R}$ given by $$u=\delta_0-\delta_1 $$ </p>
<p>(a) show there exists a continuous function $f$ such that $f''=u$ and indicate such one. I thought of doing this with Fourier-transforms: $$F(f'')=s^2F(f)=F(u) = 1-e^{-2\pi i s}$$ </p>
<p>which would imply $f= F^{-1}(s^{-2}(1- e^{-2\pi i s}))$. My problem is that it is not continuous (by wolframalpha calculations). </p>
<p>So how to approach this?</p>
<p>(b) Show there exists a triple of continuous functions $g_0,g_1,g_2$ on $\mathbb{R}$ with compact support such that $$u= g_0+g_1'+g_2''$$
and indicate such triple. </p>
<p>Thanks for any help or suggestions! </p>
| Sangchul Lee | 9,340 | <p><strong>Answer to (a)</strong> : Intuitively, 1-dimensional Dirac delta function is the derivative of a unit step function (also called the Heaviside step function). Thus $f'' = \delta_0 - \delta_1$ means that $f'$ has a step (jump) of size 1 at $x = 0$ and a step of size -1 at $x = 1$ as follows:</p>
<p>$$ f'(x) = \begin{cases} a & x < 0 \\ a+1 & 0 \leq x < 1 \\ a & 1 \leq x \end{cases} $$</p>
<p>Integrating, we have</p>
<p>$$ f(x) = ax + b + x \chi_{[0, 1)}(x) + \chi_{[1, \infty)}(x), \tag{1}$$</p>
<p>where $\chi_{E}$ is the characteristic (indicator) function of $E \subset \Bbb{R}$. Confirming that this is our desired function is also easy: For any test function $\varphi \in C_{c}^{\infty}(\Bbb{R})$,</p>
<p>\begin{align*}
\int_{\Bbb{R}} f\varphi'' \, dx
&= \int_{\Bbb{R}} (ax + b + x \chi_{[0, 1]} + \chi_{[1, \infty)}) \varphi'' \, dx \\
&= \int_{\Bbb{R}} (ax + b) \varphi'' \, dx + \int_{0}^{1} x \varphi''(x) \, dx + \int_{1}^{\infty} \varphi''(x) \, dx \\
&= 0 + \left[ x \varphi'(x) \right]_{0}^{1} - \int_{0}^{1} \varphi'(x) \, dx - \varphi'(1) \\
&= \varphi(0) - \varphi(1).
\end{align*}</p>
<p>and therefore (1) is proved.</p>
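<p>The integration-by-parts computation can also be sanity-checked numerically. Take $a=b=0$ in (1), and use the $C^3$ test function $\varphi(x)=(4-x^2)^4$ on $[-2,2]$ (zero outside), which is my own choice for illustration; then $\int f\varphi''\,dx$ should equal $\varphi(0)-\varphi(1)=256-81=175$. A Python sketch:</p>

```python
def phi(x):
    return (4 - x * x) ** 4 if abs(x) <= 2 else 0.0

def phi2(x):
    # Second derivative of (4 - x^2)^4, computed by hand:
    # phi' = -8x(4-x^2)^3, phi'' = -8(4-x^2)^3 + 48x^2(4-x^2)^2
    if abs(x) > 2:
        return 0.0
    u = 4 - x * x
    return -8 * u ** 3 + 48 * x * x * u ** 2

def simpson(g, lo, hi, n=2000):  # composite Simpson's rule, n even
    h = (hi - lo) / n
    s = g(lo) + g(hi)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(lo + i * h)
    return s * h / 3

# With a = b = 0, f(x) = x on [0,1) and 1 on [1,inf); phi'' vanishes past 2.
# Split the integral at the kinks of f so Simpson's rule converges fast.
integral = simpson(lambda x: x * phi2(x), 0, 1) + simpson(phi2, 1, 2)
print(integral, phi(0) - phi(1))  # both ≈ 175
```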
|
112,575 | <p>If we set $\exp(x)=\sum x^k/k!$, then $\exp(x+y)=\exp(x)\cdot \exp(y)$. In terms of coefficients it means that $(x+y)^n=\sum \frac{n!}{k!(n-k)!} x^ky^{n-k}$, i.e. just binomial expansion.</p>
<p>Now consider logarithm. Set $L(x):=\sum_{k>0} x^k/k$, then $L(x)=-\log(1-x)$ in a sense, and hence $$L(u+v-uv)=L(u)+L(v),$$ i.e. $\sum (u+v-uv)^n/n=\sum (u^n+v^n)/n$, or, if we pass to coefficients of $u^av^b$ ($a,b\geq 1$), we get
$$
\sum_k (-1)^k\frac{(a+b-k-1)!}{(a-k)!(b-k)!k!}=0
$$</p>
<p>The question is what is combinatorial meaning of this identity. Maybe, it is some exclusion-inclusion formula, as it is usual for alternating sums?</p>
| David E Speyer | 297 | <p>I don't know a combinatorial interpretation, but here is a quick proof. Let $1 \leq a \leq b$. Write
$$\sum_{0 \leq k \leq a} (-1)^k \frac{(a+b-k-1)!}{(a-k)! (b-k)! k!} = \frac{1}{a!} \sum_{0 \leq k \leq a} (-1)^k \frac{(a+b-k-1)!}{(b-k)!} \binom{a}{k}$$
$$= \frac{1}{a!} \sum_{0 \leq k \leq a} (-1)^k f(k) \binom{a}{k}$$
where
$$f(k) = (a-1+b-k)(a-2+b-k) \cdots (2+b-k)(1+b-k).$$
Since $f(k)$ is a polynomial of degree $a-1$, its $a$-th difference is zero. (We used the assumption $0 \leq k \leq a \leq b$ to make sure that $(a+b-k-1)! / (b-k)!$ is never $0/0$.)</p>
<p>A combinatorial interpretation may be difficult because the summands are not always integers. EG: $a=b=2$ gives $3/2 - 2+1/2=0$.</p>
|
260,240 | <p><img src="https://i.stack.imgur.com/3WeQw.png" alt="enter image description here"></p>
<p>In the last bullet, it says l must be even and provides an explanation. I don't understand the explanation, however. Why does it have to be even?</p>
| A.Schulz | 35,875 | <p>They mean that in order to have $a=\sqrt{2}\sqrt{l}$ an integer, we must have that $l$ is an even number, say $2r$. In this case $a=\sqrt{2}\sqrt{2r}=2\sqrt{r}$. If $l$ is odd, we cannot <em>split off the $\sqrt{2}$ from $\sqrt{l}$</em>, thus the $\sqrt{2}$ in $a$ remains, and $a$ will be irrational.</p>
|
155,897 | <p>I would like to exclude non-western characters and words from a text file. I do not know how to insert the text file here, but I suppose you can do without it. All your suggestions will be much appreciated.</p>
<p>Update: I have used:</p>
<pre><code>Alphabet["Russian"]
Select[dict21, Not@StringContainsQ[#, Alternatives @@ dict24Russion] &]
</code></pre>
<p>The problem is that there are several alphabets in the text (even unknown). There must be some solution of kind "include only <code>Alphabet[]</code>". What do you think?</p>
| C. E. | 731 | <p>It is especially easy to express the character range <code>Alphabet[]</code> that you mention using <code>RegularExpression</code>:</p>
<pre><code>StringDelete["adeфfgh12cа34", RegularExpression["[^a-z]+"]]
</code></pre>
<blockquote>
<p><code>"adefghc"</code></p>
</blockquote>
<p>It will also be faster than other options. (Thanks to Alexey for pointing me towards <code>StringDelete</code>.)</p>
<p>To do the same for other alphabets I would suggest just looking at what you need beyond the a-z range, then just add that. So in the case of the Swedish alphabet it would be:</p>
<pre><code>StringDelete["adeфfgh12cа34", RegularExpression["[^a-zåäö]+"]]
</code></pre>
<p>Also note that if you need to match capital letters as well then you should use</p>
<pre><code>StringDelete["adeфfgh12cа34", RegularExpression["[^A-Za-zÅÄÖåäö]+"]]
</code></pre>
<p>Furthermore, you might want to add special characters which, while not in the alphabet, are used in Swedish texts:</p>
<pre><code>str = "Spörj, forskare, så långt du gitter,
vad residens som själen har.
Det bästa svar blir Dumboms svar:
\"Min vän, hon sitter där hon sitter.\"";
StringDelete[str, RegularExpression["[^A-Za-zÅÄÖåäö\n., –\"]+"]]
(* Out:
"Spörj, forskare, så långt du gitter,
vad residens som själen har.
Det bästa svar blir Dumboms svar
\"Min vän, hon sitter där hon sitter.\""
*)
</code></pre>
<p><code>\n</code> corresponds to a new line.</p>
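<p>For readers outside Mathematica: the same character-class idea carries over directly to any regex engine. Here is a rough Python equivalent of the examples above (my own sketch, not part of the original answer):</p>

```python
import re

# Keep only lowercase a-z, deleting runs of everything else
print(re.sub(r"[^a-z]+", "", "adeфfgh12cа34"))  # -> adefghc

# Variant that also keeps Swedish letters and punctuation
# (\u2013 is the en dash used in the poem above)
print(re.sub(r'[^A-Za-zÅÄÖåäö\n., \u2013"]+', "", "Spörj, forskare!"))  # -> Spörj, forskare
```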
|
1,458,464 | <p>Let</p>
<ul>
<li>$(\Omega,\mathcal A)$ and $(E,\mathcal E)$ be measurable spaces</li>
<li>$I\subseteq[0,\infty)$ be at most countable and closed under addition with $0\in I$</li>
<li>$X=(X_t)_{t\in I}$ be a stochastic process on $(\Omega,\mathcal A)$ with values in $(E,\mathcal E)$</li>
<li>$\mathbb F=(\mathcal F_t)_{t\in I}$ be the filtration generated by $X$</li>
<li>$\tau$ be a $\mathbb F$-stopping time</li>
<li>$f:E^I\to\mathbb R$ be bounded and $\mathcal E^{\otimes I}$-measurable</li>
</ul>
<p>Clearly, $$Y_s:=1_{\left\{\tau=s\right\}}\operatorname E\left[f\circ\left(X_{s+t}\right)_{t\in I}\mid\mathcal F_\tau\right]$$ is $\mathcal F_s$-measurable. Thus,</p>
<p>\begin{equation}
\begin{split}
\operatorname E\left[f\circ\left(X_{\tau+t}\right)_{t\in I}\mid\mathcal F_\tau\right]&=&\sum_{s\in I}Y_s\\&=&\sum_{s\in I}\operatorname E\left[Y_s\mid\mathcal F_s\right]\\&\color{red}=&\color{red}{\sum_{s\in I}\operatorname E\left[1_{\left\{\tau=s\right\}}\operatorname E\left[f\circ\left(X_{\tau+t}\right)_{t\in I}\mid\mathcal F_s\right]\mid\mathcal F_\tau\right]}\;,
\end{split}
\end{equation}</p>
<p>but I don't understand why the $\color{red}{\text{red}}$ part is true. It looks like the <em>tower property</em>, but we shouldn't be able to use it unless $\mathcal F_\tau\subseteq\mathcal F_s$, which is obviously wrong. So, how do we need to argue?</p>
| John Dawkins | 189,130 | <p>The red part is equal to the line just above it because (i) the event $\{\tau=s\}$ is both ${\mathcal F}_s$ measurable and ${\mathcal F}_\tau$ measurable, and (ii) those two $\sigma$-algebras coincide on $\{\tau=s\}$, in the sense that if $A$ is an event and $s\in I$ is fixed then $A\cap\{\tau=s\}\in{\mathcal F}_s$ if and only if $A\cap\{\tau=s\}\in{\mathcal F}_\tau$.</p>
|
58,009 | <p>Let $f: X \to Y$ be a morphism of varieties such that its fibres are isomorphic to $\mathbb{A}^n$. Since the definition of a vector bundle stipulates that $f$ be locally the projection $U \times \mathbb{A}^n \to U$, it is likely that there exist morphisms that are not locally of that form, but I can't come up with an example.</p>
<p>So the question is: what is an example of a morphism with fibres $\mathbb{A}^n$ that is not locally trivial? not locally isotrivial? </p>
<p>UPDATE: what if one assumes vector space structure on the fibres?</p>
| Angelo | 4,790 | <p>In Jack's example the fiber is not scheme-theoretically $\mathbb A^1$. You can get a counterexample by taking $Y$ to be a nodal curve, $Y'$ its normalization, with one of the two points in the inverse image of the node removed, and $X = Y' \times \mathbb A^1$.</p>
<p>If we assume that the map is smooth, this becomes quite subtle. It is false in positive characteristic. Let $k$ be a field of characteristic $p > 0$. Take $Y = \mathbb A^1 = \mathop{\rm Spec}k[t]$, $Y' = \mathop{\rm Spec} k[t,x]/(x^p - t)$. Of course $Y' \simeq \mathbb A^1$, but the natural map $Y' \to Y$ is an inseparable homeomorphism. Now embed $Y'$ in $\mathbb P^1 \times Y$ over $Y$, and take $X$ to be the difference $(\mathbb P^1 \times Y) \smallsetminus Y'$.</p>
<p>On the other hand, it is not so hard to show that in characteristic 0 the answer is positive for $n = 1$ (if $Y$ is reduced), and I believe it is known to be true for $n = 2$. The general case seems extremely hard.</p>
<p>I am afraid that Sasha's argument does not work; if the fiber does not have a vector space structure, there is no reason that choosing points gives you a trivialization.</p>
<p>[Edit] The question has been updated with "what if one assumes vector space structure on the fibres?"?</p>
<p>Well, $\mathbb A^n$ can always be given a vector space structure. In my first example, the fibers are canonically isomorphic to $\mathbb A^1$, so they have a natural vector space structure.</p>
<p>However, if the map $X \to Y$ is smooth, and the vector space stucture is allowed to "vary algebraically" that is, if the zero section $Y \to X$ is a regular function, the addition gives a regular function $X \times_Y X \to X$, and scalar multiplication gives a regular function $\mathbb A^1 \times X \to X$, then $X$ is in fact a vector bundle. The proof uses some machinery: one uses smoothness to construct bases locally in the étale topology, showing that $X$ is étale locally trivial over $Y$, and descent theory to show that in fact $X$ is Zariski locally trivial.</p>
|
3,427,640 | <p>This is a problem that came up as I was learning Hermitian/skew-Hermitian transformations:</p>
<hr>
<p>Let <span class="math-container">$T: V\rightarrow E$</span> be a linear transformation where <span class="math-container">$V$</span> is a subspace of a complex Euclidean space <span class="math-container">$E$</span> and define a scalar-valued function <span class="math-container">$Q$</span> on <span class="math-container">$V$</span> such that <span class="math-container">$\forall x \in V$</span>:</p>
<p><span class="math-container">$$Q(x) = (T(x),x)$$</span></p>
<p>where <span class="math-container">$()$</span> denotes inner product. </p>
<p>Show that if <span class="math-container">$Q(x)$</span> is real for all <span class="math-container">$x$</span>, then <span class="math-container">$T$</span> is Hermitian i.e. <span class="math-container">$(T(x),y) = (x,T(y))$</span></p>
<hr>
<p><strong>My work:</strong></p>
<p>Since <span class="math-container">$Q(x)$</span> is real, we know that <span class="math-container">$\forall x \in V$</span>, <span class="math-container">$Q(x) = \overline{Q(x)}$</span></p>
<p><span class="math-container">$Q(x+ty) = (T(x)+tT(y), x+ty) = Q(x) + t(T(y),x) + \bar{t}(T(x),y) + t\bar{t}Q(y)$</span></p>
<p><span class="math-container">$\overline{Q(x+ty)} = (x+ty, T(x)+tT(y)) = (x,T(x)) + t(y,T(x)) + \bar{t}(x,T(y)) + t\bar{t}(y,T(y)) = Q(x) + t(y,T(x)) + \bar{t}(x,T(y)) + t\bar{t}Q(y)$</span></p>
<p>Putting the two equations together, I get: <span class="math-container">$$t(T(y),x) + \bar{t}(T(x),y) = t(y,T(x)) + \bar{t}(x,T(y))$$</span></p>
<p>I feel like this is close, but I'm not sure how to continue. Does anyone have any ideas?</p>
| Mathew Westfields | 643,848 | <p>Actually you are done by comparing coefficients in the last equation: it holds for every scalar <span class="math-container">$t$</span>. Taking <span class="math-container">$t=1$</span> gives <span class="math-container">$(T(y),x)+(T(x),y)=(y,T(x))+(x,T(y))$</span>, and taking <span class="math-container">$t=i$</span> (then dividing by <span class="math-container">$i$</span>) gives <span class="math-container">$(T(y),x)-(T(x),y)=(y,T(x))-(x,T(y))$</span>. Subtracting the second equation from the first yields <span class="math-container">$2(T(x),y)=2(x,T(y))$</span>, i.e. <span class="math-container">$(T(x),y)=(x,T(y))$</span>, so <span class="math-container">$T$</span> is Hermitian.</p>
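<p>A small numerical illustration of the statement (a Python sketch with <span class="math-container">$2\times 2$</span> matrices of my own choosing, not from the answer): for a Hermitian <span class="math-container">$T$</span>, <span class="math-container">$Q(x)=(T(x),x)$</span> is real for every <span class="math-container">$x$</span>, while a non-Hermitian <span class="math-container">$T$</span> produces non-real values for some <span class="math-container">$x$</span>.</p>

```python
def Q(T, x):
    # Q(x) = (T x, x) with the inner product (u, v) = sum_i u_i * conj(v_i)
    Tx = [sum(T[i][j] * x[j] for j in range(2)) for i in range(2)]
    return sum(Tx[i] * x[i].conjugate() for i in range(2))

hermitian = [[2, 1 + 1j], [1 - 1j, 3]]  # equals its conjugate transpose
not_hermitian = [[1, 2], [0, 1]]        # real but not symmetric

x = [1, 1j]
print(Q(hermitian, x))      # (3+0j): real, imaginary part 0
print(Q(not_hermitian, x))  # (2+2j): nonzero imaginary part
```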
|