| qid | question | author | author_id | answer |
|---|---|---|---|---|
47,143 | <p>I want to create a table of replacement rules. </p>
<pre><code>g[a_, b_] := a -> b
t1 = Table[10 i + j, {i, 5}, {j, 3}]
t2 = Table[ i + j, {i, 5}, {j, 3}]
g[ # & @@@ t1, # & @@@ t2 ]
</code></pre>
<p>The correct output is below:</p>
<pre><code>{{11 -> 2, 12 -> 3, 13 -> 4},
{21 -> 3, 22 -> 4, 23 -> 5},
{31 -> 4, 32 -> 5, 33 -> 6},
{41 -> 5, 42 -> 6, 43 -> 7},
{51 -> 6, 52 -> 7, 53 -> 8}}
</code></pre>
<p>Instead I am getting:</p>
<pre><code> {11, 21, 31, 41, 51} -> {2, 3, 4, 5, 6}
</code></pre>
<p>This shows two concepts I am still trying to wrap my mind around in <em>Mathematica</em>.</p>
<p>In a 2-d list, how do you select each row and then perform an operation on each element of that row (in this case, take that element and make a replacement rule a -> b), iterating through every row?</p>
| gwr | 764 | <h2>Excursion: Using Modelica within WL</h2>
<p>I would like to mention this possibility here as importing and running <a href="https://specification.modelica.org/master/" rel="nofollow noreferrer">Modelica</a> has since been implemented within the <a href="https://reference.wolfram.com/language/guide/SystemModelingOverview.html" rel="nofollow noreferrer">System Modeling Functionality</a> in the Wolfram Language. The above system of ODEs can be entered as a Modelica <code>model</code> using the following code:</p>
<pre><code>codeString = "
model MSE47141
Real x(start = 0.);
discrete Real lambda(start = 1.);
algorithm
when der(x) < 0.25 then
lambda := x;
end when;
equation
0 = der(x) + (x - lambda);
end MSE47141;
";
model = ImportString[ codeString, "MO" ];
model["ModelicaDisplay"]
</code></pre>
<p><a href="https://i.stack.imgur.com/Oolir.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Oolir.png" alt="Modelica code" /></a></p>
<p>The code display is indicative of a correct interpretation of the Modelica code, which should be rather self-explanatory, if one accepts that <code>when</code> statements run in <code>algorithm</code> sections.</p>
<p>A couple of things to note:</p>
<ul>
<li>Modelica will not allow using <code>der(x) == 0.25</code> as opposed to <code>NDSolve</code>; we have to think in terms of <a href="https://mbe.modelica.university/behavior/discrete/decay/" rel="nofollow noreferrer">crossing-functions</a>.</li>
<li>The prefix <code>discrete</code> is given for clarity; the discreteness of <code>lambda</code> will automatically be deduced from its appearing in a <code>when</code> statement, should the prefix be missing.</li>
</ul>
<p>The thing to <em>like</em> about the System Modeling Functionality is the added convenience, imo: fast access to nice plots and great flexibility for querying a system's properties.</p>
<pre><code>sim = SystemModelSimulate[ model, All, {0, 5},
Method -> { "RungeKutta" }
];
SystemModelPlot[ sim, All ]
</code></pre>
<p><a href="https://i.stack.imgur.com/jn1GZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jn1GZ.png" alt="Simulation Plot" /></a></p>
<p>For some reason, the <em>adaptive step methods</em> (<code>DASSL</code>, <code>CVODES</code>) currently have difficulties detecting the event.</p>
|
268,635 | <p>Consider the triangle formed by randomly distributing three points on a circle. What is the probability that the center of the circle is contained within the triangle?</p>
| amWhy | 9,003 | <p>The probability is, in fact, $\large\frac14$.</p>
<p>Wherever the first point is chosen, the diameter on which it lies (the diameter being determined by the circle center and the first chosen point) divides the circle into two symmetric semi-circles, so the second and third points (assuming they are distinct) must necessarily be placed on opposite halves of the circle. </p>
<p>The line connecting the second and third points must then also lie above the center (with respect to the first point - or below the center, if the first point was on "top"); so if the second point is at a distance $x$ from the first point along the perimeter of the circle (in units of the length of the perimeter), there's a range of length $\frac12-x$ in which to place the third point. Thus, computing the probability gives:</p>
<p>$$2\int_0^{\large\frac12}\left(\frac12-x\right)\,dx\;=\;2\int_0^{\large\frac12}x\,dx\;=\;\frac14$$</p>
<p>where we multiply the integral by $2$ to cover the fact that we can interchange the second and third point.</p>
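<p>A quick Monte Carlo sanity check (a Python sketch, not part of the original argument) agrees with $\frac14$; it uses the fact that the center lies inside the triangle exactly when no arc between consecutive points is a semicircle or more:</p>

```python
import math
import random

random.seed(1)

def center_in_triangle(trials=200_000):
    """Estimate P(the circle's center lies inside the triangle on 3 uniform points)."""
    hits = 0
    for _ in range(trials):
        # Three i.i.d. uniform angles on the unit circle, sorted.
        a, b, c = sorted(random.uniform(0, 2 * math.pi) for _ in range(3))
        # The center is inside iff every arc between consecutive points is < pi.
        arcs = (b - a, c - b, 2 * math.pi - (c - a))
        if max(arcs) < math.pi:
            hits += 1
    return hits / trials

print(center_in_triangle())  # close to 0.25
```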
|
268,635 | <p>Consider the triangle formed by randomly distributing three points on a circle. What is the probability that the center of the circle is contained within the triangle?</p>
| Linos | 553,526 | <p>The triangle must be acute in order to contain the center.
Then, if say $X$ is the biggest angle of the triangle, we need to calculate
$$P(X<90^\circ \mid X>60^\circ)=\frac{P(60^\circ <X <90^\circ)}{P(X>60^\circ)}=\frac{30/180}{120/180}=\frac{1}{4}.$$</p>
|
250,484 | <p>For a fixed set $X$ and a finite collection $E_1,E_2,\ldots,E_k\subseteq X$, define the binary relation <em>adjacency</em> as follows: $E_i,E_j$ are adjacent
if their intersection is nonempty.
We term the transitive closure of this relation by
<em>transitive adjacency</em>
and define the <em>adjacent union</em> by
$$
\tilde\cup(E_1,\ldots,E_k):=
\begin{cases}
\bigcup_{i=1}^k E_i, & \text{the $(E_i)$ are transitively adjacent}
\\
\emptyset, & \text{else}
.
\end{cases}
$$</p>
<p>Are there standard terms for <em>transitive adjacency</em> and <em>adjacent union</em>?</p>
| Vinicius dos Santos | 8,193 | <p>You can think in terms of <a href="https://en.wikipedia.org/wiki/Intersection_graph" rel="nofollow noreferrer">intersection graphs</a>. Two sets are then <em>transitively adjacent</em> exactly when the corresponding vertices lie in the same connected component. The <em>adjacent union</em> is then empty if and only if two of $E_1,\ldots,E_k$ are in distinct connected components.</p>
|
1,906,332 | <p>I know every complex differentiable function is continuous. I would like to know if the converse is true. If not, could someone give me some counterexamples?</p>
<p><strong>Remark:</strong> I know this is not true for the real functions (e.g. $f(x)=|x|$ is a continuous function in $\mathbb R$ which is not differentiable at the origin).</p>
| Thomas | 128,832 | <p>No, of course not. Complex differentiability is a very stringent property.</p>
<p>For examples, just look at any pair of continuous but not (real) analytic functions $a,b:\mathbb{C}\rightarrow \mathbb{R}$ and let $F= a+ib$.</p>
|
2,472,313 | <blockquote>
<p>Suppose $x$ and $y$ are real numbers and $x^2+9y^2-4x+6y+4=0$. Then find the maximum value of $\left(\frac{4x-9y}{2}\right)$</p>
</blockquote>
| Raffaele | 83,382 | <p>Lagrange multipliers method</p>
<p>Maximize
$$k \left(x^2-4 x+9 y^2+6 y+4\right)+\frac{1}{2} (4 x-9 y)$$</p>
<p>$
\left\{
\begin{array}{l}
k (2 x-4)+2=0 \\
k (18 y+6)-\frac{9}{2}=0 \\
x^2-4 x+9 y^2+6 y+4=0 \\
\end{array}
\right.
$</p>
<p>$$\left(x=\frac{6}{5},\;y=-\frac{2}{15}\right);\;\left(x=\frac{14}{5},\;y=-\frac{8}{15}\right)$$</p>
<p>$$\frac{1}{2} (4 x-9 y)=8 \text{ is maximum for }x= \frac{14}{5},y= -\frac{8}{15}$$</p>
<p>$\color{red}{Second \;method}$</p>
<p>$r:\frac{1}{2} (4 x-9 y)=h$</p>
<p>$h$ is maximum (or minimum) when the line is tangent to the ellipse</p>
<p>$
\left\{
\begin{array}{l}
\frac{1}{2} (4 x-9 y)=h \\
x^2-4 x+9 y^2+6 y+4=0 \\
\end{array}
\right.
$</p>
<p>$y =-\frac{2}{9} (h - 2 x)$</p>
<p>substitute</p>
<p>$$\frac{4}{9} (h-2 x)^2-\frac{4}{3} (h-2 x)+x^2-4 x+4=0$$
Expand and collect</p>
<p>$25x^2-4(4h+3)x+4h^2-12h+36=0$</p>
<p>to be tangent the discriminant must be zero</p>
<p>$\Delta=(4(4h+3))^2-100(36-12h+4h^2)=0$</p>
<p>$-144 \left(h^2-11 h+24\right)=0\to h_1=3;\;h_2=8$</p>
<p>so $\frac{1}{2} (4 x-9 y)=h$ is maximum when $h=8$</p>
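<p>As a numerical cross-check (a sketch, not part of the original answer): completing the square rewrites the constraint as $(x-2)^2+9\left(y+\frac{1}{3}\right)^2=1$, which can be parametrized and scanned:</p>

```python
import math

# Parametrize (x - 2)^2 + 9*(y + 1/3)^2 = 1 by
#   x = 2 + cos t,  y = -1/3 + sin(t)/3,
# and scan the objective (4x - 9y)/2 over a fine grid of t.
best = -math.inf
n = 200_000
for k in range(n):
    t = 2 * math.pi * k / n
    x = 2 + math.cos(t)
    y = -1 / 3 + math.sin(t) / 3
    best = max(best, (4 * x - 9 * y) / 2)

print(round(best, 6))  # 8.0
```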
|
1,972,002 | <p>Show an open set $A \subseteq \mathbb{R}$ contains no isolated points.</p>
<p>My attempt:
well if we can show an arbitrary element $a \in A$ is a limit point then we will be done. So since $A$ is open, there is an $\varepsilon > 0$ such that $B_\varepsilon(a) = (a - \varepsilon, a + \varepsilon) \subseteq A$. To show $a$ is a limit point, we need to construct a sequence $(a_n)$ in $A$ with $a_n \neq a$ for every $n \in \mathbb{N}$ and such that $(a_n) \rightarrow a$. I'm confused on how to construct such a sequence. Or is it easier to prove this statement by contradiction? Thanks.</p>
| RJM | 376,273 | <p>Hint: About every point there must exist a neighborhood of radius greater than $0$ that is a subset of $A$. Take an arbitrary point $x$ and a neighborhood $B$ of radius $h$ such that $B \subset A$. Let $N\epsilon \gt h$, where $N \in \mathbb{N}$ and $\epsilon \gt 0$; then for each $n \ge N$ you can find a point of $B$ in the smaller neighborhood of $x$ of radius $\frac{h}{n}$.</p>
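<p>Concretely, one explicit sequence completing the hint (in the notation of the question) is $a_n = a + \frac{\varepsilon}{n+1}$: every term lies in $B_\varepsilon(a) \subseteq A$, each $a_n \neq a$, and $(a_n) \to a$, so $a$ is a limit point.</p>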
|
3,956,392 | <p>So the question is as follows:</p>
<blockquote>
<p>An urn contains m red balls and n blue balls. Two balls are drawn uniformly at random
from the urn, without replacement.</p>
</blockquote>
<blockquote>
<p>(a) What is the probability that the first ball drawn is red?</p>
</blockquote>
<blockquote>
<p>(b) What is the probability that the second ball drawn is red?*</p>
</blockquote>
<p>The answer to (a) quite clearly works out to be <span class="math-container">$\frac{m}{m+n}$</span>, but the answer to (b) turns out to be the same, and my tutor said this is intuitive by a symmetry argument.</p>
<p>i.e. that <span class="math-container">$P(A_1)$</span> = <span class="math-container">$P(A_2)$</span> where <span class="math-container">$A_i$</span> is the event that a red ball is drawn on the ith turn. However I am struggling to see how this is evident, can anyone explain this?</p>
| Allawonder | 145,126 | <p>The quantity</p>
<p><span class="math-container">$$\frac{m-1}{m+n-1}$$</span></p>
<p>is the <em>conditional</em> probability that the second ball is red <em>given</em> that the first one was. Unconditionally, <span class="math-container">$$P(A_2)=\frac{m}{m+n}\cdot\frac{m-1}{m+n-1}+\frac{n}{m+n}\cdot\frac{m}{m+n-1}=\frac{m}{m+n},$$</span> which is the same as <span class="math-container">$P(A_1)$</span>, just as the symmetry argument predicts.</p>
|
23,937 | <p>I am trying to use the <code>Animate</code> command to vary a parameter of the Lorenz Equations in 3-D phase space and I'm not having much luck.</p>
<p>The equations are:</p>
<blockquote>
<p>$\begin{align*}
\dot{x} &= \sigma(y-x)\\
\dot{y} &= rx-y-xz\\
\dot{z} &= xy-bz
\end{align*}$</p>
</blockquote>
<p>Where $\sigma, r, b > 0$ are parameters to be varied.</p>
<p>Insofar, I am using the <code>NDSolve</code> command to numerically integrate these equations, then <code>ParametricPlot3D</code> and the <code>Evaluate</code> command to plot them. </p>
<p>Just for starters, I am trying to create an animate command to vary $\sigma$ for example from 0 to 10. Can anyone guide me in the right direction? My code looks like this so far:</p>
<hr>
<pre><code>σ = 10;
NDSolve[{x'[t] == σ (y[t] - x[t]),
y'[t] == 28 x[t] - y[t] - x[t] z[t], z'[t] == x[t] y[t] - 8/3 z[t],
x[0] == z[0] == 0, y[0] == 2}, {x, y, z}, {t, 0, 25}]
Animate[ParametricPlot3D[
Evaluate[{x[t], y[t], z[t]} /. solution], {t, 0, 25}], {σ, 0, 25},
AnimationRunning -> False]
</code></pre>
<hr>
<p>This will generate an animated plot but obviously as <code>σ</code> varies, nothing is changing since I am not implementing new <code>NDSolve</code> commands. Can anyone guide me as to how I can implement successive <code>NDSolve</code>'s inside the animate command? Thank you</p>
<p>EDIT: I am using $r=28$ and $b=\frac83$ in place of <code>r</code> and <code>b</code> in my code.</p>
| halirutan | 187 | <p>You can always define the recursive function yourself and use <em>memoizing</em> to speed up computation:</p>
<pre><code>g[n_, 0] := g[n, 0] = n;
g[n_, 1] := g[n, 1] = n^2;
g[n_, k_] := g[n, k] = g[n + 1, k - 1] + g[n + 2, k - 2];
Table[g[n, k], {k, 0, 10}, {n, 0, 10}] // TableForm
</code></pre>
<p><img src="https://i.stack.imgur.com/yii9u.png" alt="enter image description here"></p>
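<p>For comparison (an illustrative translation, not part of the original answer), the same memoized recursion in Python, where <code>functools.lru_cache</code> plays the role of the <em>memoizing</em> definitions above:</p>

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def g(n, k):
    """g(n, 0) = n, g(n, 1) = n^2, else the two-term recursion."""
    if k == 0:
        return n
    if k == 1:
        return n * n
    return g(n + 1, k - 1) + g(n + 2, k - 2)

# Same 11 x 11 table as above, rows indexed by k, columns by n.
table = [[g(n, k) for n in range(11)] for k in range(11)]
print(table[2][0])  # g(0, 2) = g(1, 1) + g(2, 0) = 1 + 2 = 3
```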
|
23,937 | <p>I am trying to use the <code>Animate</code> command to vary a parameter of the Lorenz Equations in 3-D phase space and I'm not having much luck.</p>
<p>The equations are:</p>
<blockquote>
<p>$\begin{align*}
\dot{x} &= \sigma(y-x)\\
\dot{y} &= rx-y-xz\\
\dot{z} &= xy-bz
\end{align*}$</p>
</blockquote>
<p>Where $\sigma, r, b > 0$ are parameters to be varied.</p>
<p>Insofar, I am using the <code>NDSolve</code> command to numerically integrate these equations, then <code>ParametricPlot3D</code> and the <code>Evaluate</code> command to plot them. </p>
<p>Just for starters, I am trying to create an animate command to vary $\sigma$ for example from 0 to 10. Can anyone guide me in the right direction? My code looks like this so far:</p>
<hr>
<pre><code>σ = 10;
NDSolve[{x'[t] == σ (y[t] - x[t]),
y'[t] == 28 x[t] - y[t] - x[t] z[t], z'[t] == x[t] y[t] - 8/3 z[t],
x[0] == z[0] == 0, y[0] == 2}, {x, y, z}, {t, 0, 25}]
Animate[ParametricPlot3D[
Evaluate[{x[t], y[t], z[t]} /. solution], {t, 0, 25}], {σ, 0, 25},
AnimationRunning -> False]
</code></pre>
<hr>
<p>This will generate an animated plot but obviously as <code>σ</code> varies, nothing is changing since I am not implementing new <code>NDSolve</code> commands. Can anyone guide me as to how I can implement successive <code>NDSolve</code>'s inside the animate command? Thank you</p>
<p>EDIT: I am using $r=28$ and $b=\frac83$ in place of <code>r</code> and <code>b</code> in my code.</p>
| Vitaliy Kaurov | 13 | <p>Note, this can be solved in general form. Start as</p>
<pre><code>RSolve[{G[n, k] == G[n + 1, k - 1] + G[n + 2, k - 2]}, G[n, k], {n, k}]
</code></pre>
<p><img src="https://i.stack.imgur.com/q71oO.png" alt="enter image description here"></p>
<p>You have two unknown functions C(1)[x] and C(2)[x] that you can find using your boundary conditions. </p>
<p>Apply your initial conditions G[n,0]:</p>
<pre><code>A[n_] = C[1][n] /.
Solve[n == (-(1/2) - Sqrt[5]/2)^n C[1][n] + (-(1/2) + Sqrt[5]/2)^
n C[2][n], C[1][n]][[1]]
</code></pre>
<p>Apply your initial conditions G[n,1]:</p>
<pre><code>B[n_] = C[1][1 + n] /.
Solve[n^2 == (-(1/2) - Sqrt[5]/2)^
n C[1][n + 1] + (-(1/2) + Sqrt[5]/2)^n C[2][n + 1],
C[1][n + 1]][[1]]
</code></pre>
<p>Combine the two above to find function C(2)[n] - I rename it S2[n]:</p>
<pre><code>S2[n_] = C[2][1 + n] /. Solve[A[n + 1] == B[n], C[2][1 + n]][[1]] /.
n -> n - 1 // FullSimplify
</code></pre>
<p>Substitute this in A[n] to find C(1)[n] - I rename it S1[n]</p>
<pre><code>S1[n_] = (-(1/2) - Sqrt[5]/2)^-n (n - (-(1/2) + Sqrt[5]/2)^n S2[n]) //FullSimplify
</code></pre>
<p>Finally substitute both in the very original solution to find the final function:</p>
<pre><code>SolvedG[n_, k_] = (-(1/2) - Sqrt[5]/2)^n S1[k + n] + (-(1/2) + Sqrt[5]/2)^
n S2[k + n] // FullSimplify;
</code></pre>
<p>So here - you got it - the beauty of math:</p>
<pre><code>SolvedG[n, k] // TraditionalForm
</code></pre>
<p><img src="https://i.stack.imgur.com/YdNMl.png" alt="enter image description here"></p>
<p>Now verify against @halirutan's table: identical!</p>
<pre><code>Table[SolvedG[n, k] // FullSimplify, {k, 0, 10}, {n, 0, 10}] // TableForm
</code></pre>
<p><img src="https://i.stack.imgur.com/9ADEy.png" alt="enter image description here"></p>
|
2,372 | <p>How can I find the number of <span class="math-container">$k$</span>-permutations of <span class="math-container">$n$</span> objects, where there are <span class="math-container">$x$</span> types of objects, and <span class="math-container">$r_1, r_2, r_3, \cdots , r_x$</span> give the number of each type of object?</p>
<p>I'm still looking for the solution to this more general problem out of interest.</p>
<p>Here is an example with <span class="math-container">$n = 20, k = 15, x = 4,$</span> <span class="math-container">$ r_1 = 4 \quad r_2 = 5 \quad r_3 = 8 \quad r_4 = 3$</span>.</p>
<blockquote>
<p>I have 20 letters from the alphabet. There are some duplicates - 4 of them are <em>a</em>, 5 of them are <em>b</em>, 8 of them are <em>c</em>, and 3 are <em>d</em>. How many unique 15-letter permutations can I make?</p>
</blockquote>
<h2>Edits:</h2>
<p>I've done some more work on this problem but haven't really come up with anything useful. Intuition tells me that as Douglas suggests below there will probably not be an easy solution. However, I haven't been able to prove that for sure - does anyone else have any ideas?</p>
<p>I've now re-asked this question on <a href="https://mathoverflow.net/questions/37211/permutations-with-identical-objects">MO</a>.</p>
| Community | -1 | <p>There are $20!/(4!\,5!\,8!\,3!)$ arrangements of the 20 letters. Consider the action of $S_5$ on the arrangements by permuting the last 5 positions. You want to compute the number of orbits. You can then try to do it by <a href="http://en.wikipedia.org/wiki/Burnside%27s_lemma" rel="nofollow">Burnside's formula</a>. It is not difficult to do it at least for this case, since if a permutation of $S_5$ fixes an arrangement, it means that each cycle of the permutation corresponds to a single letter.</p>
<p>For the general case, I suspect that Polya's enumeration theorem can do it, though I'm not certain.</p>
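<p>Independently of the Burnside/Polya route, one concrete way to compute such counts (a Python sketch of the standard exponential-generating-function approach; the helper name is my own) is to multiply the truncated series <span class="math-container">$\sum_{j=0}^{r_i} x^j/j!$</span> for each letter type and read off <span class="math-container">$k!\,[x^k]$</span>:</p>

```python
from fractions import Fraction
from itertools import permutations
from math import factorial

def k_permutations(reps, k):
    """Distinct k-permutations of a multiset whose multiplicities are `reps`."""
    # Coefficients of prod_i (1 + x + x^2/2! + ... + x^{r_i}/r_i!) as exact fractions.
    poly = [Fraction(1)]
    for r in reps:
        factor = [Fraction(1, factorial(j)) for j in range(r + 1)]
        new = [Fraction(0)] * (len(poly) + r)
        for i, a in enumerate(poly):
            for j, b in enumerate(factor):
                new[i + j] += a * b
        poly = new
    return int(poly[k] * factorial(k)) if k < len(poly) else 0

# Brute-force check on the small multiset {a, a, b, c}:
assert k_permutations([2, 1, 1], 3) == len(set(permutations("aabc", 3)))

# The n = 20, k = 15 example from the question:
print(k_permutations([4, 5, 8, 3], 15))
```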
|
2,015,809 | <blockquote>
<p>If $$f(x)=3 f(1-x)+1$$
for all $x$, what is the value of $f(2016)$?</p>
</blockquote>
<p>I am not sure how to do this, because I see two "$f$"s.</p>
<p>All I could try is substituting,</p>
<p>$$f(x)=3(1-2016)+1\\
=-6044$$</p>
<p>Which I am pretty sure wrong.</p>
<p>How do I deal with this question when there is $f$ around a bracket?</p>
<p>Thank you</p>
| Hermès | 127,149 | <p><strong>Hint:</strong> $f(2016) = 3f(-2015) + 1$. But $f(-2015)= 3f(2016)+1$. Conclude</p>
|
2,015,809 | <blockquote>
<p>If $$f(x)=3 f(1-x)+1$$
for all $x$, what is the value of $f(2016)$?</p>
</blockquote>
<p>I am not sure how to do this, because I see two "$f$"s.</p>
<p>All I could try is substituting,</p>
<p>$$f(x)=3(1-2016)+1\\
=-6044$$</p>
<p>Which I am pretty sure wrong.</p>
<p>How do I deal with this question? when there is $f$ around a braket?</p>
<p>Thank you</p>
| fleablood | 280,126 | <p>$f(2016)= 3f(-2015)+1$</p>
<p>$f(-2015)=3f(1-(-2015))+1=3f(2016)+1$.</p>
<p>So $f(2016)=3(3f(2016)+1)+1=9f(2016)+4$.</p>
<p>So $-8f(2016)=4$.</p>
<p>So $f(2016)=-\frac{1}{2}$.</p>
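<p>The two substitutions form a tiny linear system; a quick consistency check (a Python sketch, not part of the original answer):</p>

```python
# From f(2016) = 3 f(-2015) + 1 and f(-2015) = 3 f(2016) + 1,
# substitution gives u = 9u + 4, so u = -1/2.
u = -1 / 2             # candidate value of f(2016)
v = 3 * u + 1          # then f(-2015) = -1/2 as well
assert u == 3 * v + 1  # both instances of the functional equation hold
print(u, v)  # -0.5 -0.5
```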
<p>Now the real question is how to solve for <strong>all</strong> $x$.</p>
|
1,519,251 | <p>This question was in a school maths challenge.
I don't know how to approach this one; any help would be appreciated.</p>
| fleablood | 280,126 | <p>Ending in 100 zeros means that $5^{100}$ is a divisor of $n!$. In $n!$ there are always at least as many factors of $2$ as factors of $5$, so if $5^{100}$ is a divisor then so is $2^{100}$, and hence so is $10^{100}$. So having $5^{100}$ as a divisor is <em>sufficient</em> to end in 100 zeros. </p>
<p>To end in <em>exactly</em> 100 zeros means that the numbers in $\{1,2, \ldots, n\}$ contribute precisely 100 factors of $5$, counting the higher powers of $5$ with multiplicity. Let $5M \le n < 5(M+1)$. There are $M$ multiples of 5; $1/5$ of those will be multiples of 25, $1/25$ will be multiples of 125, etc.</p>
<p>In total, $(5^m)!$ will have $5^{m-1}$ multiples of 5, $5^{m-2}$ multiples of 25, etc., so $(5^m)!$ will have $5^{m-1} + 5^{m-2} + 5^{m-3}+ \cdots + 1$ trailing zeros. So $125!$ has 25 multiples of 5, 5 multiples of 25, and 1 multiple of 125, and thus ends with exactly 31 zeros. </p>
<p>So $375!$ will end with exactly 93 zeros (as there are 75 multiples of 5, 15 multiples of 25, and 3 multiples of 125). We need 7 more zeros: 6 more multiples of 5, one of which is also a multiple of 25. So $405!$ has exactly 100 zeros (81 multiples of 5, 16 multiples of 25, and 3 multiples of 125).</p>
<p>So $405 \le n < 410$. As $8\mid n$, $n = 408$. </p>
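<p>Legendre's formula makes this easy to verify mechanically (a Python sketch, not part of the original answer):</p>

```python
def factorial_trailing_zeros(n):
    """Trailing zeros of n!, i.e. the exponent of 5 in n! (Legendre's formula)."""
    count, power = 0, 5
    while power <= n:
        count += n // power
        power *= 5
    return count

assert factorial_trailing_zeros(125) == 31
assert factorial_trailing_zeros(375) == 93
assert factorial_trailing_zeros(405) == 100
assert factorial_trailing_zeros(409) == 100
assert factorial_trailing_zeros(410) == 101
assert 408 % 8 == 0
print(factorial_trailing_zeros(408))  # 100
```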
|
275,527 | <pre><code>Sqrt[Matrix[( {
{0, 1},
{-1, 0}
} )]] /. f_[Matrix[x__]] :> Matrix[MatrixFunction[f, x]]
</code></pre>
<p><code>Matrix</code> is an undefined symbol but I want to define some substitutions with it.</p>
| userrandrand | 86,543 | <p>As explained by @Mikado, <code>Sqrt</code> is converted to <code>Power</code>, which takes two arguments. One solution is to define:</p>
<pre><code>sqrt=Inactive[Sqrt]
</code></pre>
<p>and then use <code>sqrt</code> instead like</p>
<pre><code>sqrt[Matrix[({{0, 1}, {-1, 0}})]] /.
f_[Matrix[x__]] :> Matrix[MatrixFunction[Activate@f, x]]
</code></pre>
<p>If it is too late and you already used <code>Sqrt</code> at multiple parts in the notebook then you can use:</p>
<pre><code>Hold[Sqrt[Matrix[({{0, 1}, {-1, 0}})]]] /.
f_[Matrix[x__]] :> Matrix[MatrixFunction[f, x]] // ReleaseHold
</code></pre>
<p>Or</p>
<pre><code>Sqrt[Matrix[({{0, 1}, {-1, 0}})]] /. Sqrt[s_] -> Inactive[Sqrt][s] /.
f_[Matrix[x__]] :> Matrix[MatrixFunction[Activate@f, x]]
</code></pre>
<p><strong>Notice</strong> that I used <code>Sqrt[s_] -> Inactive[Sqrt][s]</code> instead of <code>Sqrt -> Inactive[Sqrt]</code> because I am allowing <code>Sqrt[s_]</code> to be converted to</p>
<p><code>Power[Pattern[s,Blank[]],Rational[1,2]]</code></p>
<p>If I had (unwisely) used <code>HoldPattern</code>:</p>
<pre><code>Sqrt[Matrix[({{0, 1}, {-1, 0}})]] /.
HoldPattern[Sqrt[s_]] -> Inactive[Sqrt][s]
</code></pre>
<p>Then it would not work, because it would not convert to <code>Power</code>. If the <code>Sqrt</code> were a <code>Cos</code>, then <code>HoldPattern</code> would work:</p>
<pre><code>Cos[Matrix[({{0, 1}, {-1, 0}})]] /.
HoldPattern[Cos[s_]] -> Inactive[Cos][s]
</code></pre>
<p>because there is nothing to worry about concerning hidden transformations in that case (there can be in other cases because of the parity of <code>Cos</code> which leads to argument reordering).</p>
|
3,173,349 | <p>The question:</p>
<p>Suppose G is a finite group of Isometries in the plane. Suppose p is fixed by G. Prove that G is conjugate into O(2).</p>
<p>What exactly does "conjugate into O(2)" mean and how would I proceed?</p>
| Will Jagy | 10,400 | <p>Lemma: if integers <span class="math-container">$a,b > 0$</span> and <span class="math-container">$ab = k^2$</span> and <span class="math-container">$\gcd(a,b) = 1,$</span> then both <span class="math-container">$a,b$</span> are squares. This is by unique factorization.</p>
<p>No need to consider negative <span class="math-container">$n,$</span> as <span class="math-container">$n \leq -3$</span> gives a negative product. With <span class="math-container">$n \geq 1:$</span> As <span class="math-container">$n^2 + 2n = (n+1)^2 - 1,$</span> we find
<span class="math-container">$$ \gcd(n+1, n^2 + 2n) = 1. $$</span>
If the product were a square, the lemma would require <span class="math-container">$n^2 + 2n$</span> to be a square. However, <span class="math-container">$n^2 + 2n+1$</span> really is a square; the only consecutive squares are <span class="math-container">$0,1$</span>, which forces <span class="math-container">$n=0.$</span> Done.</p>
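<p>A brute-force scan (illustrative only; the argument above already settles it) finds no positive <span class="math-container">$n$</span> up to <span class="math-container">$10^5$</span> for which <span class="math-container">$(n+1)\left(n^2+2n\right)$</span> is a perfect square:</p>

```python
from math import isqrt

def is_square(m):
    r = isqrt(m)
    return r * r == m

# n = 0 gives the trivial square 0; the claim is that no positive n works.
hits = [n for n in range(1, 100_000) if is_square((n + 1) * (n * n + 2 * n))]
print(hits)  # []
```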
|
3,173,349 | <p>The question:</p>
<p>Suppose G is a finite group of Isometries in the plane. Suppose p is fixed by G. Prove that G is conjugate into O(2).</p>
<p>What exactly does "conjugate into O(2)" mean and how would I proceed?</p>
| Ehsaan | 78,996 | <p>A positive integer <span class="math-container">$n$</span> is a square if and only if all of its prime factors occur with <em>even</em> multiplicity.</p>
<p>So this would be true of the integer <span class="math-container">$d:=n(n+1)(n+2)$</span> if it is a square. Since consecutive integers are coprime, and "twins" (<em>i.e.</em> <span class="math-container">$n$</span> and <span class="math-container">$n+2$</span>) have only <span class="math-container">$2$</span> as a common factor, you can analyze the (odd) primes <span class="math-container">$p$</span> occurring in <span class="math-container">$d$</span>: if <span class="math-container">$p|d$</span>, then <span class="math-container">$p$</span> divides one of <span class="math-container">$n$</span>, <span class="math-container">$n+1$</span>, or <span class="math-container">$n+2$</span>, but notice it has to occur in <em>exactly</em> one of these unless <span class="math-container">$p=2$</span>. If <span class="math-container">$p=2$</span> then either it occurs in both <span class="math-container">$n$</span> and <span class="math-container">$n+2$</span>, or only occurs in <span class="math-container">$n+1$</span>. So you have a few cases.</p>
<p>Since these three consecutive integers can't share (odd) prime factors, this shows that all of <span class="math-container">$n$</span>, <span class="math-container">$n+1$</span>, and <span class="math-container">$n+2$</span> would have to be squares. But this is impossible.</p>
|
4,461,482 | <p>In <span class="math-container">$ΔABC$</span>, if <span class="math-container">$(a+b+c)(a−b+c)=3ac$</span>, then which of the following is <span class="math-container">$\color{green}{\text{True}}$</span>?</p>
<ul>
<li><span class="math-container">$\angle B=60^\circ $</span></li>
<li><span class="math-container">$\angle B=30^\circ $</span></li>
<li><span class="math-container">$\angle C=60^\circ $</span></li>
<li><span class="math-container">$\angle A + \angle B=120^\circ $</span></li>
</ul>
| insipidintegrator | 1,062,486 | <p><span class="math-container">$$(a+b+c)(a-b+c)=3ac$$</span>
<span class="math-container">$$(a+c)^2 - b^2 = 3ac$$</span>
<span class="math-container">$$\frac{a^2+c^2-b^2}{2ac} = \frac{1}{2}$$</span>
Hence, by the cosine rule for triangles,
<span class="math-container">$\cos B = \frac{1}{2}$</span>, which implies that <span class="math-container">$B=60^\circ$</span> because <span class="math-container">$B$</span> cannot exceed <span class="math-container">$180^\circ$</span>. Thus <span class="math-container">$\angle A+\angle C = 120^\circ$</span>, and not <span class="math-container">$\angle A+\angle B$</span>.</p>
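<p>A quick numeric spot-check (a Python sketch with an arbitrarily chosen triangle, not part of the original answer): build a triangle with <span class="math-container">$B=60^\circ$</span> via the cosine rule and confirm the identity:</p>

```python
import math

a, c = 3.0, 5.0
B = math.radians(60)
b = math.sqrt(a * a + c * c - 2 * a * c * math.cos(B))  # cosine rule

lhs = (a + b + c) * (a - b + c)  # equals (a + c)^2 - b^2
assert math.isclose(lhs, 3 * a * c)
print(lhs)  # 45, up to rounding, matching 3ac
```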
|
255,902 | <p>I have a double lattice sum and I was wondering how I could calculate this with Mathematica. In particular, I have a function <span class="math-container">$F:\mathbb{R}^2 \times \mathbb{R}^2 \to \mathbb{R}$</span> which takes as arguments a pair of points <span class="math-container">$x,y\in\mathbb{R}^2$</span>, and I want to evaluate the sum</p>
<p><span class="math-container">$$
\sum_{x\in L}\sum_{y\in L} F(x,y)
$$</span></p>
<p>where <span class="math-container">$L\subset\mathbb{R}^2$</span> is some finite lattice, i.e. a collection of points in <span class="math-container">$\mathbb{R}^2$</span>.</p>
<p>Now, I have tried using <code>Outer</code> for this via</p>
<p><code>Total[Flatten@Outer[F[#1,#2] &, L, L]]</code></p>
<p>but with this, I obtain a sum of <span class="math-container">$F$</span> evaluated at real pairs <span class="math-container">$s,t\in\mathbb{R}$</span></p>
<p><code>F[s,t] + ...</code></p>
<p>which doesn't make sense, instead of <span class="math-container">$F$</span> evaluated at real vector pairs <span class="math-container">$x=(x_1,x_2),y=(y_1,y_2)\in\mathbb{R}^2$</span></p>
<p><code>F[{x_1,x_2},{y_1,y_2}] + ...</code></p>
<p>as it should be. I'm not sure how to remedy this, but any help with this would be much appreciated.</p>
<p>I am also open to recommendations for any alternative/better ways to perform such a double sum.</p>
<p><strong>Extra Information</strong>
My function <code>F</code> is defined as a conditional, i.e.</p>
<p><code>F[{x_,y_},{s_,t_}] = If[Abs[g[{x, y}]] < eps || Abs[g[{s, t}]] < eps, 0, W[{x - s, y - t}] * g[{x, y}] * g[{s, t}]]</code></p>
<p>for some other lattice functions <code>g</code> and <code>W</code>.</p>
<p>Thanks to answer below, I am able to evaluate the summation, but I obtain as an answer</p>
<p><code>If[Abs[\[Piecewise] 1 {-(27/2),(7 Sqrt[3])/2}==0&&0<={-27,7 Sqrt[3]}<=2 Sqrt[3]&&{-(27/2)+8 Sqrt[3],Sqrt[3]/2}==0 0 True ]<2.22045*10^-14,0,W[{{-(27/2),(7 Sqrt[3])/2},{-(27/2),(7 Sqrt[3])/2}}] g[{{-(27/2),(7 Sqrt[3])/2},{-(27/2),(7 Sqrt[3])/2}}]] +...</code></p>
<p>It seems that the double summation has not been executed entirely.</p>
<p><strong>Edit</strong></p>
<p>Thank you for all the solutions and comments, they have been helpful and I shall surely use those in the future. Sadly, due to the nature of my functions, they didn't work perfectly, but that was just because of how my functions were defined.</p>
<p>Although, I have found a solution that seems to work: To evaluate the double sum, I used the Map function twice</p>
<p><code>Total[Flatten@Map[Function[y, Map[Function[x, F[x, y]], L]], L]]</code></p>
<p>I found this solution in another post given by <a href="https://mathematica.stackexchange.com/questions/15480/how-do-i-designate-arguments-in-a-nested-map">@halirutan</a></p>
| kglr | 125 | <pre><code>sum1 = Total[F @@@ Tuples[Range[0, 3], {2, 2}]]
sum1 // Short
</code></pre>
<blockquote>
<pre><code>F[{0, 0}, {0, 0}] + F[{0, 0}, {0, 1}] + F[{0, 0}, {0, 2}] + << 250 >> +
F[{3, 3}, {3, 1}] + F[{3, 3}, {3, 2}] + F[{3, 3}, {3, 3}]
</code></pre>
</blockquote>
<pre><code>sum2 = Array[F[{#, #2}, {##3}] &, {4, 4, 4, 4}, 0, Plus];
sum2 == sum1
</code></pre>
<blockquote>
<pre><code>True
</code></pre>
</blockquote>
|
<p>In geometry class, it is usually first shown that the medians of a triangle intersect at a single point. Then it is explained that this point is called the centroid and that it is the balance point and center of mass of the triangle. Why is that the case?</p>
<p>This is the best explanation I could think of. I hope someone can come up with something better.</p>
<p>Choose one of the sides of the triangle. Construct a thin rectangle with one side coinciding with the side of the triangle and extending into it. The center of mass of this rectangle is near the midpoint of the side of the triangle. Continue constructing thin rectangles, with each one on top of the previous one and having the lower side meet the two other sides of the triangle. In each case the centroid of the rectangle is near a point on the median. Making the rectangles thinner, in the limit all the centroids are on the median, and therefore the center of mass of the triangle must lie on the median. This follows because the center of mass of the combination of two regions lies on the segment joining the centroids of the two regions.</p>
| Kenneth Luo | 1,066,156 | <p>I have a fairly intuitive proof, though not a very rigorous one. Firstly, any median of a triangle divides it into two triangles of equal area: the median goes from one corner and bisects the opposing side, producing two triangles with equal bases (each being one half of the original side) and the same height (they share the corner from which the median was drawn). This means the centre of mass lies on this line, since the two sides balance along it. Drawing a second median and making the same conclusion shows that the centre of mass must be at their intersection. It follows that all 3 medians intersect at one point, since there is only one centre of mass.</p>
|
158,469 | <p>This is another exercise from Golan's book.</p>
<p><strong>Problem:</strong> Let $V$ be an inner product space over $\mathbb{C}$ and let $\alpha$ be an endomorphism of $V$ satisfying $\alpha^*=-\alpha$, where $\alpha^*$ denotes the adjoint. Show that every eigenvalue of $\alpha$ is purely imaginary.</p>
<p>My proposed solution is below.</p>
| Potato | 18,240 | <p>Suppose $c$ is an eigenvalue of $\alpha$ and $v$ is a corresponding eigenvector. We have</p>
<p>$$c \langle v,v\rangle= \langle \alpha(v),v\rangle =\langle v, \alpha^*(v)\rangle =\langle v,-\alpha(v)\rangle=\langle v, -cv\rangle=\overline{\langle -cv, v\rangle} = \overline{-c\langle v,v\rangle}$$</p>
<p>Because $\langle v,v\rangle$ is real and nonzero, as it is the squared norm of an eigenvector, we may cancel it to get $c=\overline{-c}$, which immediately shows $c$ is purely imaginary (write $c$ out in terms of real and imaginary parts to see this).</p>
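<p>A concrete instance (an illustrative Python sketch, not part of the original proof): the real skew-symmetric matrix <span class="math-container">$\begin{pmatrix}0&1\\-1&0\end{pmatrix}$</span> satisfies <span class="math-container">$\alpha^*=-\alpha$</span>, and its eigenvalues come out purely imaginary:</p>

```python
import cmath

# A real skew-symmetric matrix: its adjoint (conjugate transpose) is -A.
A = [[0.0, 1.0],
     [-1.0, 0.0]]

# For a 2x2 matrix, the eigenvalues are the roots of
#   lambda^2 - tr(A) * lambda + det(A) = 0.
tr = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
disc = cmath.sqrt(tr * tr - 4 * det)
eigs = [(tr + disc) / 2, (tr - disc) / 2]

for lam in eigs:
    assert abs(lam.real) < 1e-12  # purely imaginary, as the proof predicts
print(eigs)
```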
|
<p>Suppose <span class="math-container">$f:X\rightarrow Y$</span> is a homeomorphism. Show that if <span class="math-container">$X$</span> is Hausdorff then so is <span class="math-container">$Y$</span>.</p>
<p>My attempt:
Let <span class="math-container">$y_1,y_2\in Y$</span> be distinct. By bijectivity of <span class="math-container">$f$</span>, there exist distinct <span class="math-container">$x_1,x_2 \in X$</span> such that <span class="math-container">$f(x_1)=y_1$</span> and <span class="math-container">$f(x_2)=y_2$</span>. Since <span class="math-container">$X$</span> is Hausdorff, there exist disjoint open subsets <span class="math-container">$V_1$</span> and <span class="math-container">$V_2$</span> of <span class="math-container">$X$</span> containing <span class="math-container">$x_1$</span> and <span class="math-container">$x_2$</span> respectively. Since <span class="math-container">$f$</span> is a homeomorphism, it is an open map, hence <span class="math-container">$f(V_1)$</span> and <span class="math-container">$f(V_2)$</span> are open. Since <span class="math-container">$f$</span> is injective, <span class="math-container">$f(V_1) \cap f(V_2) = f(V_1 \cap V_2) = \varnothing$</span>, and <span class="math-container">$f(V_1),f(V_2)$</span> contain <span class="math-container">$y_1,y_2$</span> respectively. Thus <span class="math-container">$Y$</span> is Hausdorff.</p>
<p>Is it correct?</p>
| Community | -1 | <p>Homeomorphisms preserve all topological properties, and the result follows. </p>
|
484,095 | <p>When taking the log of a matrix we have various choices, but fixing a particular choice, we should have</p>
<p>$$P^{-1}\log{(A)} P = \log(P^{-1}AP),$$</p>
<p>right? (Here $P \in GL$.)</p>
<p>It is supported by the notion that we can exponentiate both sides and it comes out true. Is there some snag here that I'm missing?</p>
<p>Also, once we choose a branch of logarithm, do we always have a bijection between $M_n(\mathbb{C})$ and $GL(n, \mathbb{C})$ given by $A \mapsto e^A$?</p>
<p>More specifically, given a Jordan block associated with a nonzero eigenvalue $\lambda$</p>
<p>$$\left( \begin{matrix} \lambda & 1 & & & \\ & \lambda &1 & & \\ &&\ddots&\ddots \\ &&&\ddots&1 \\ &&&&\lambda \end{matrix} \right)$$</p>
<p>We can choose a basis so that the above linear operator has the form</p>
<p>$$\left( \begin{matrix} \lambda & \lambda & \lambda/2! & \dotsb & \lambda/n!\\ & \lambda &\lambda & \ddots & \vdots \\ &&\ddots&\ddots& \lambda/2! \\ &&&\ddots&\lambda \\ &&&&\lambda \end{matrix} \right)$$</p>
<p>and then a log of such a matrix is </p>
<p>$$\left( \begin{matrix} \log{\lambda} & 1 & & & \\ & \log{\lambda} &1 & & \\ &&\ddots&\ddots \\ &&&\ddots&1 \\ &&&&\log{\lambda} \end{matrix} \right),$$</p>
<p>where we have to choose a value for $\log{\lambda}$ (in principle we could choose a different value of $\log{\lambda}$ for each diagonal entry).</p>
<p>Once we fix a choice of branch of log, it seems that we can get a unique log for each matrix in $GL_n(\mathbb{C})$. </p>
| Jim | 56,747 | <p>We have $b \equiv 1$ modulo $b - 1$, so for any $i$ we have $b^i \equiv 1$ modulo $b - 1$, but this <strong>doesn't</strong> come from "dividing" the summation; remember that
$$\sum x_iy_i$$
means
$$x_1y_1 + x_2y_2 + x_3y_3 + \cdots$$
so you can't divide out $\sum x_i$.</p>
<p>Anyway, $b^i \equiv 1$ modulo $b - 1$ is trivially true and you aren't going to get any information about the $n_i$ from it. The only simplification you could make is to combine the summations:
$$\sum_{i = 0}^kn_i(b^i - 1) \equiv 0 \pmod{b - 1}$$
but again $b^i - 1 \equiv 0$ modulo $b - 1$ so this won't give you any information about the $n_i$.</p>
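<p>A quick numerical illustration of this congruence (my own sketch, not part of the answer): since each $b^i \equiv 1 \pmod{b-1}$, a number is congruent to its base-$b$ digit sum modulo $b-1$.</p>

```python
# Sketch: check numerically that sum(n_i * b**i) ≡ sum(n_i) (mod b - 1),
# because each power b**i is congruent to 1 modulo b - 1.
def digits(x, b):
    """Base-b digits of x, least significant first."""
    ds = []
    while x:
        x, r = divmod(x, b)
        ds.append(r)
    return ds

for b in (2, 7, 10, 16):
    for x in range(1, 2000):
        assert x % (b - 1) == sum(digits(x, b)) % (b - 1)
```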
|
1,563,004 | <p>Assume that we calculate the average of some measurements, $x=\dfrac {x_1 + x_2 + x_3 + x_4} 4$. What if we don't include $x_3$ and $x_4$, but instead use $x_2$ in place of $x_3$ and $x_4$? Then we get the following expression: $v=\dfrac {x_1 + x_2 + x_2 + x_2} 4$.</p>
<p>How do I know if $v$ is an unbiased estimate of $x$?</p>
<p>I am not sure how to approach this problem, any ideas are appreciated!</p>
| Zhanxiong | 192,408 | <p>Use the facts</p>
<blockquote>
<p>$$(a^x)' = a^x \ln a, \quad (\log_a(x))' = \frac{1}{x\ln a}$$
for $a > 0$ and $a \neq 1$.</p>
</blockquote>
<p>Then use the product rule for differentiation to get
\begin{align}
& (10^x \log_{10}x)' \\
= & (10^x)'\log_{10}x + 10^x (\log_{10}x)' \\
= & 10^x \ln (10) \log_{10}x + 10^x \frac{1}{x\ln 10}.
\end{align}
If you want to express your final result in natural logs, then you can simplify by writing
$$\log_{10}x = \frac{\ln x}{\ln 10},$$
in this way the answer would agree with the Wolfram output exactly.</p>
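<p>As a sanity check (my own sketch; the sample points and tolerances are arbitrary), one can compare the closed form above with a finite-difference derivative:</p>

```python
import math

def f(x):
    return 10**x * math.log10(x)

def fprime(x):
    # Closed form from above: 10^x ln(10) log10(x) + 10^x / (x ln 10)
    return 10**x * math.log(10) * math.log10(x) + 10**x / (x * math.log(10))

h = 1e-6
for x in (0.5, 1.0, 2.0, 3.0):
    numeric = (f(x + h) - f(x - h)) / (2 * h)  # central difference
    assert abs(numeric - fprime(x)) < 1e-3 * max(1.0, abs(fprime(x)))
```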
|
1,563,004 | <p>Assume that we calculate the average of some measurements, $x=\dfrac {x_1 + x_2 + x_3 + x_4} 4$. What if we don't include $x_3$ and $x_4$, but instead use $x_2$ in place of $x_3$ and $x_4$? Then we get the following expression: $v=\dfrac {x_1 + x_2 + x_2 + x_2} 4$.</p>
<p>How do I know if $v$ is an unbiased estimate of $x$?</p>
<p>I am not sure how to approach this problem, any ideas are appreciated!</p>
| zahbaz | 176,922 | <p>Your answer is correct, but you can simplify it further using the change of base formula. Where $\log x$ is in base $10$ and $\ln x$ is the natural log, you have</p>
<p>$$10^x\cdot \ln10\cdot \log x+\frac{1}{x\cdot \ln10}\cdot 10^x$$</p>
<p>$$=10^x\cdot \left(\ln10\cdot \color{green}{\log x}+\frac{1}{x\cdot \ln 10}\right)$$</p>
<p>$$=10^x\cdot \left(\ln10\cdot \color{green}{\frac{\ln x}{\ln 10}}+\frac{1}{x\cdot \ln 10}\right)$$</p>
<p>$$=10^x\cdot \left(\ln x+\frac{1}{x\cdot \ln 10}\right)$$</p>
|
971,333 | <p>Given $\int \int dxdy$, I want to find the area bounded by $y=\ln\left(x\right)$ and $y=e+1-x$, and the $x$ axis. </p>
<p>I think the limits of integration in the $y$ direction run from $y=\ln\left(x\right)$ to $y=e+1-x$, so $\int \limits_{y=\ln\left(x\right)}^{e+1-x}dy\int \:dx$, but I don't know how to find the limits of integration along the $x$ axis, since the region is also bounded by the $x$ axis. Please give me a clue to solve this problem.</p>
| georg | 144,937 | <p>HINT:</p>
<p>y = ln(x) and y = e+1-x intersect where ln(x) = e+1-x.</p>
<p>One can guess the solution x = e, y = 1; there is no other real solution.</p>
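<p>A quick numerical check of this guess (my own sketch, not part of the hint): at $x = e$ both curves equal $1$, and $\ln(x) - (e + 1 - x)$ is increasing, so the intersection is unique.</p>

```python
import math

e = math.e
# Both curves pass through (e, 1): ln(e) = 1 and e + 1 - e = 1.
assert abs(math.log(e) - (e + 1 - e)) < 1e-12

def g(x):
    # g(x) = ln(x) - (e + 1 - x) is strictly increasing for x > 0,
    # so its unique zero x = e is the only intersection point.
    return math.log(x) - (e + 1 - x)

xs = [0.5, 1.0, 2.0, e, 3.0, 4.0]
vals = [g(x) for x in xs]
assert all(a < b for a, b in zip(vals, vals[1:]))  # increasing on samples
assert abs(g(e)) < 1e-12
```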
|
17,285 | <p>I have problems understanding a few concepts of elementary set theory. I've chosen a couple of problems (from my problem set) which should help me understand these concepts. To be clear: it's not homework; I'm just trying to understand elementary set theory concepts by reading solutions.
<hr/>
<strong>Problem 1</strong></p>
<p>(I don't understand this; I mean - not at all)</p>
<p>Let $f: A \to B$, where $A,B$ are non-empty sets, and let $R$ be an equivalence relation in set $B$. We define equivalence relation $S$ in $A$ set by condition:</p>
<p>$aSb \Leftrightarrow f(a)R f(b)$
Determine, which inclusion is always true:</p>
<p>(a) $f([a]_S) \subseteq [f(a)]_R$</p>
<p>(b) $[f(a)]_R \subseteq f([a]_S)$</p>
<p>Notes:</p>
<p>$[a]_S$ is an equivalence class
<hr/>
<strong>Problem 2</strong></p>
<p>(I suppose, that (a) is true & (b) is false)</p>
<p>Which statement is true, and which is false (+ proof):</p>
<p>(a) If $f: A \xrightarrow{1-1} B$ and $f(A) \not= B$ then $|A| < |B|$</p>
<p>(b) If $|A| < |B|$ and $C \not= \emptyset$ then $|A \times C| < |B \times C|$
<hr/>
<strong>Problem 3</strong></p>
<p>(I don't know how to think about $\mathbb{Q}^{\mathbb{N}}$ and $\{0,1\}^∗$.)</p>
<p>Which sets have the same cardinality:</p>
<p>$P(\mathbb{Q}), \mathbb{R}^{\mathbb{N}},\mathbb{Z}, \mathbb{Q}^{\mathbb{N}}, \mathbb{R} \times \mathbb{R}, \{ 0,1 \}^*, \{ 0,1 \}^{\mathbb{N}},P(\mathbb{R})$</p>
<p>where $\{ 0,1 \}^*$ means all finite sequences/words that contain $1$ and $0$, for example $000101000100$ or $1010101010101$, etc. $P(A)$ is the power set.
<hr/>
<strong>Problem 4</strong></p>
<p>(I don't understand this; I mean - not at all)</p>
<p>What are: maximum/minimum/greatest/lowest elements in set:</p>
<p>$\{\{2; 3; 3; 5; 2\}; \{1; 2; 3; 4; 6\}; \{3\}; \{2; 1; 2; 1\}; \{1; 2; 3; 4; 5\}; \{3; 4; 2; 4; 1\}; \{2; 1; 2; 2; 1\}\}$</p>
<p>ordered via subset inclusion
<hr/>
<strong>Problem 5</strong></p>
<p>How many equivalence relations are there on $\mathbb{N}$ which are also partial orders?
<hr/>
These are simple problems, but I really need to understand how to solve these kinds of problems. I would appreciate your help.</p>
| Arturo Magidin | 742 | <p><strong>Problem 1.</strong> Here's an example: take $A=\mathbb{N}$, $B=\{0,1,2,3\}$, and let $f\colon A\to B$ map the natural numbers to their remainder when divided by $4$ (so $f(3) = 3$, $f(15) = 3$, $f(21) = 1$, etc). Let $R$ be the equivalence relation on $B$ given by $aRb$ if and only if $a^2\equiv b^2 \pmod{4}$.</p>
<p>The relation $S$ is defined as follows: first map to $B$, then check $R$ for the images. If the images are related, then the original elements are related; if the images are not related, then the originals are not related. For example, to see whether $3S10$ holds, we take $f(3)=3$ and $f(10)=2$, and check whether $3R2$ holds or not; since $3^2 \not\equiv 2^2\pmod{4}$, then $3\not R 2$; that is, $f(3)\not R f(10)$, so $3\not S 10$. On the other hand, $7S9$ holds: because $f(7) = 3$, $f(9) = 1$, and $f(7)^2 = 9 \equiv 1 = f(9)^2\pmod{4}$, so $f(7)Rf(9)$ is true.</p>
<p>That's the idea.</p>
<p>Now, for a general case. First convince yourself that $S$ is in fact an equivalence relation on $A$.</p>
<p>Once you do that, in order to check (a), you want to see if $f([a]_S)\subseteq [f(a)]_R$. Well, take an element in $f([a]_S)$; this is a $b\in B$ such that $b=f(x)$ for some $x\in[a]_S$; that is, $xSa$ is true, and $f(x)=b$. So you want to see whether $b\in[f(a)]_R$; that is, whether $bRf(a)$. So, from assuming $xSa$ holds, you want to see whether you can always conclude that $f(x)Rf(a)$.</p>
<p>For (b), you proceed similarly; now take $b\in [f(a)]_R$, and you want to see whether $b\in f([a]_S)$. That is, assume that $bRf(a)$; must there exist an $x\in A$ such that $xSa$ and $f(x)=b$? If so, prove it; if not, give a specific counterexample with specific $A$, $B$, $f$, $R$, and $S$, and an exlicit $a$ and $b$.</p>
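<p>For readers who like to experiment, the concrete example above ($f(x) = x \bmod 4$, with $aRb$ iff $a^2 \equiv b^2 \pmod 4$) can be checked mechanically; this is my own sketch, with names of my choosing:</p>

```python
# Sketch of the concrete example: A = naturals, B = {0,1,2,3},
# f(x) = x mod 4, a R b iff a^2 ≡ b^2 (mod 4), and S pulls R back along f.
def f(x):
    return x % 4

def R(a, b):
    return (a * a - b * b) % 4 == 0

def S(a, b):
    return R(f(a), f(b))

assert S(7, 9)        # f(7) = 3, f(9) = 1, and 9 ≡ 1 (mod 4)
assert not S(3, 10)   # f(3) = 3, f(10) = 2, and 9 ≢ 4 (mod 4)

# Inclusion (a): every element of f([a]_S) lies in [f(a)]_R (on a sample).
N = 50
for a in range(N):
    cls_S = [x for x in range(N) if S(x, a)]   # sampled equivalence class [a]_S
    assert all(R(f(x), f(a)) for x in cls_S)   # f([a]_S) ⊆ [f(a)]_R
```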
<p><strong>Problem 2.</strong> (a) If your sets are finite, then your guesses are right; if your sets are not necessarily finite, you're in trouble. Consider $A=B=\mathbb{N}$ and $f(n) = 2n$. </p>
<p>For (b); you know there is a $1-1$ function from $A$ to $B$; see if you can construct a $1-1$ function from $A\times C$ to $B\times C$ (and see if you can spot why you need $C\neq\emptyset$). This will show that $|A|\leq|B|$ implies $|A\times C|\leq|B\times C|$. If you also know that there is no surjection between $A$ and $B$, then again you are going to have two different cases, depending on whether $C$ is "really big" (compared to $A$ and $B$) or not.</p>
<p><strong>Problem 3.</strong> The set $A^B$ is the set of all functions from $B$ to $A$. So $\mathbb{Q}^{\mathbb{N}}$ is the set of all functions from $\mathbb{N}$ to $\mathbb{Q}$; this is the set of all <em>rational sequences</em> (sequences, each term a rational number). The set $\{0,1\}^{\mathbb{N}}$ is the set of all <em>binary sequences</em> (sequences, each term either $0$ or $1$). <em>Hint.</em> Think about binary expansion of real numbers between $0$ and $1$, or even better, the ternary expansion of the elements of the Cantor set to deal with $\{0,1\}^{\mathbb{N}}$. For $\{0,1\}^*$, can you count how many sequences of length $n$ there are, for each $n$? Add them all up then. For $\mathbb{Q}^{\mathbb{N}}$, can you exhibit at least as many sequences as there are real numbers?</p>
<p><strong>Problem 4.</strong> You are comparing sets by inclusion. We say set $A$ is "smaller than or equal to" set $B$ if and only if $A\subseteq B$. So, for example, the set $\{3\}$ is smaller than or equal to the set $\{1,2,3,4,5\}$, but not smaller than the set $\{1,2\}$. When using this order, note that it is possible for you to have two sets, neither of which is smaller than or equal to the other (for example, neither of $\{3\}$ and $\{1,2\}$ is smaller than the other; they are <em>incomparable</em>). The maximum among the sets you are given would be a set that is greater than or equal (contains) all of the sets you are given; a minimum would be a set that is smaller than or equal (is contained in) all of the sets you are given. They may or may not exist. "Greatest" is the same as "maximum" in this context; "smallest" the same as minimum.</p>
<p><strong>Problem 5.</strong> An equivalence relation is a collection of ordered pairs that is reflexive, symmetric, and transitive. A partial order is a collection of ordered pairs that is reflexive, <em>anti</em>symmetric, and transitive.</p>
<p>So the question is: how many equivalence relations on $\mathbb{N}$ are also antisymmetric? Or: how many relations are all of reflexive, transitive, symmetric, <em>and</em> antisymmetric? Think about what having both symmetry and antisymmetry means. </p>
|
41,175 | <p>Suppose that I have two linear functions</p>
<pre><code>f[x_] := f0 + f1 x
g[x_] := g0 + g1 x
</code></pre>
<p>and a (possibly rather complicated) set of conditional expressions, obtained through Reduce. For example, we might have something like this:</p>
<pre><code>conditions = (f0 == f1 && g0 == 0) || (f0 == g1 && g0 == f1)
</code></pre>
<p>What I would like to do is write something like</p>
<pre><code>{f[x],g[x]} /. conditions
</code></pre>
<p>and receive as output the set of pairs of $f$ and $g$ adhering to that formula. In this case we'd have</p>
<pre><code>{{a + ax, bx}, {a + bx, b + ax}}
</code></pre>
<p>(or maybe <code>{{f0 + f0x, g1x}, {f0 + f1x, f1 + f0x}}</code> to stick with original variable names).</p>
<p>How can I do this?</p>
| Kuba | 5,478 | <pre><code>Assuming[#, Simplify[{f[x], g[x]}]] & /@ List @@ conditions
</code></pre>
<blockquote>
<pre><code>{{f1 (1 + x), g1 x}, {g1 + g0 x, g0 + g1 x}}
</code></pre>
</blockquote>
<p>Which is, up to the renaming of constants, technically what was desired.</p>
|
975 | <p>The usual <code>Partition[]</code> function is a very handy little thing:</p>
<pre><code>Partition[Range[12], 4]
{{1, 2, 3, 4}, {5, 6, 7, 8}, {9, 10, 11, 12}}
Partition[Range[13], 4, 3]
{{1, 2, 3, 4}, {4, 5, 6, 7}, {7, 8, 9, 10}, {10, 11, 12, 13}}
</code></pre>
<p>One application I'm working on required me to write a particular generalization of <code>Partition[]</code>'s functionality, which allowed the generation of sublists of unequal lengths, for as long as the lengths were appropriately commensurate. (Let's assume for the purposes of this question that the list lengths being commensurate is guaranteed, but you're welcome to generalize further to the incommensurate case.) Here's my generalization in action:</p>
<pre><code>multisegment[lst_List, scts_List] := Block[{acc},
acc = Prepend[Accumulate[PadRight[scts, Length[lst]/Mean[scts], scts]], 0];
Inner[Take[lst, {#1, #2}] &, Most[acc] + 1, Rest[acc], List]]
multisegment[CharacterRange["a", "x"], {3, 1, 2}]
{{"a", "b", "c"}, {"d"}, {"e", "f"}, {"g", "h", "i"}, {"j"}, {"k", "l"},
{"m", "n", "o"}, {"p"}, {"q", "r"}, {"s", "t", "u"}, {"v"}, {"w", "x"}}
</code></pre>
<p>(Thanks to halirutan for optimization help with <code>multisegment[]</code>.)</p>
<p>The problem I've hit into is that I wanted <code>multisegment[]</code> to also support offsets, just like in <code>Partition[]</code>. I want to be able to do something like the following:</p>
<pre><code>multisegment[Range[14], {4, 3}, {3, 1}]
{{1, 2, 3, 4}, {4, 5, 6}, {5, 6, 7, 8},
{8, 9, 10}, {9, 10, 11, 12}, {12, 13, 14}}
</code></pre>
<p>How might a version of <code>multisegment[]</code> with offsets be accomplished?</p>
| Heike | 46 | <p>This solution uses a <code>NestWhile</code> construction in combination with <code>Sow</code> and <code>Reap</code>.</p>
<pre><code>partitions[list_, {parts_List, offsets_List}] :=
Reap[
NestWhile[{RotateLeft[#[[1]]], RotateLeft[#[[2]]],
Sow[#[[3]][[;; #[[1, 1]]]]]; ArrayPad[#[[3]], {-#[[2, 1]], 0}]} &,
{parts, offsets, list},
(Length[#[[3]]] >= #[[1, 1]]) &];
][[2, 1]]
partitions[list_, p : {__?NumericQ}] := partitions[list, {p, p}]
</code></pre>
<p>Example</p>
<pre><code>list = CharacterRange["a", "z"];
partitions[list, {{3, 4}, {1, 3}}]
</code></pre>
<blockquote>
<pre><code> {{"a", "b", "c"}, {"b", "c", "d", "e"}, {"e", "f", "g"},
{"f", "g", "h", "i"}, {"i", "j", "k"}, {"j", "k", "l", "m"},
{"m", "n", "o"}, {"n", "o", "p", "q"}, {"q", "r", "s"},
{"r", "s", "t", "u"}, {"u", "v", "w"}, {"v", "w", "x", "y"}}
</code></pre>
</blockquote>
|
2,540,640 | <p>Say I pick a random $n\times n$ matrix with entries uniformly chosen from $[0, 1]$.</p>
<p>a) What is the probability it is nilpotent?</p>
<p>b) What is the probability that, given it is nilpotent, it is similar to a given matrix?</p>
<p>To clarify part (b), there are finitely many possibilities for it when written in Jordan normal form. What are the probabilities of each of these? (e.g. is there one of them that comes up with probability $1$?)</p>
<p>Edit: as Qiaochu pointed out, the probability it's nilpotent is $0$. So for part (b), can we say anything about the relative measure of the space of matrices with different Jordan blocks? For example, even if they both have measure 0, is it possible to say that the set of matrices similar to $\bigl( \begin{smallmatrix}0 & 1\\ 0 & 0\end{smallmatrix}\bigr)$ is larger in some concrete sense than those similar to $\bigl( \begin{smallmatrix}0 & 0\\ 0 & 0\end{smallmatrix}\bigr)$?</p>
| Robert Israel | 8,508 | <p>The nilpotent matrices form a variety of (I think) dimension $n^2-n$ in the $n \times n$ matrices (based on the fact that the coefficients of $\lambda^j$ for $j=0,\ldots,n-1$ in the characteristic polynomial must be $0$). You can then consider $n^2-n$-dimensional Hausdorff measure on this variety. Fix some nonzero vector $v$. If $A^k v \ne 0$ for $k = 1, \ldots, n-1$ then the Jordan form consists of a single block, and I'm pretty sure this will be the case for almost every nilpotent matrix.</p>
|
2,540,640 | <p>Say I pick a random $n\times n$ matrix with entries uniformly chosen from $[0, 1]$.</p>
<p>a) What is the probability it is nilpotent?</p>
<p>b) What is the probability that, given it is nilpotent, it is similar to a given matrix?</p>
<p>To clarify part (b), there are finitely many possibilities for it when written in Jordan normal form. What are the probabilities of each of these? (e.g. is there one of them that comes up with probability $1$?)</p>
<p>Edit: as Qiaochu pointed out, the probability it's nilpotent is $0$. So for part (b), can we say anything about the relative measure of the space of matrices with different Jordan blocks? For example, even if they both have measure 0, is it possible to say that the set of matrices similar to $\bigl( \begin{smallmatrix}0 & 1\\ 0 & 0\end{smallmatrix}\bigr)$ is larger in some concrete sense than those similar to $\bigl( \begin{smallmatrix}0 & 0\\ 0 & 0\end{smallmatrix}\bigr)$?</p>
| Community | -1 | <p>Let $N$ be the set of $n\times n$ complex nilpotent matrices. $N$ is an algebraic set, but it is not a pure variety because its dimension is locally constant but not globally constant (apart from possible singularities). Let $A\in N$ and let $U$ be its Jordan form. Note that the group $Gl_n$ acts as follows: </p>
<p>$(P,Z)\in GL_n\times M_n\rightarrow P.Z=P^{-1}ZP\in M_n$. $A$ is in the orbit $O_U$ of $U$; $O_U$ is a variety of dimension $dim(Gl_n)-dim(S_U)$, where $S_U$ is the stabilizer $\{P;P^{-1}UP=U\}=comm(U)\cap GL_n$. Then the maximal local dimension of $N$ is reached when $dim(comm(U))$ is minimal, that is, when $U$ is cyclic, that is, when its minimal polynomial is $x^n$. The only possible form for such a $U$ is $J_n$, the nilpotent Jordan block of dimension $n$, and then $dim(comm(J_n))=n$. Therefore</p>
<p>Proposition 1. $N$ is an algebraic set of dimension $n^2-n$ and $O_{J_n}$ is the sole component of maximal dimension.</p>
<p>Proposition 2. $O_{J_n}$ is a subset of $N$ that is Zariski open, then dense.</p>
<p>Proof. $O_{J_n}$ is characterized in $N$ by $X^{n-1}\not= 0$.</p>
<p>Another way of seeing things: we can go from $J_n$ to the other Jordan forms of nilpotent matrices by tending to $0$ some $1's$ of the matrix $J_n$.</p>
<p>Remark. The last proposition can be interpreted as follows. We randomly choose </p>
<p>i) a stricly upper triangular matrix $T$.</p>
<p>ii) a matrix $P$.</p>
<p>Then, "always", $P$ is invertible and $P^{-1}TP$ is a nilpotent matrix that is similar to $J_n$.</p>
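<p>The Remark can be illustrated numerically (my own sketch in pure Python, for $n=5$): a randomly chosen strictly upper triangular $T$ satisfies $T^{n-1} \ne 0$ and $T^n = 0$, so it is similar to the full Jordan block $J_n$.</p>

```python
import random

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matpow(A, p):
    n = len(A)
    R = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    for _ in range(p):
        R = matmul(R, A)
    return R

random.seed(0)
n = 5
# Random strictly upper triangular matrix: nilpotent by construction.
T = [[random.random() if j > i else 0.0 for j in range(n)] for i in range(n)]
# T^(n-1) has the single entry t_12 t_23 ... t_(n-1)n in position (1, n),
# which is nonzero almost surely, while T^n vanishes identically.
assert any(abs(x) > 1e-12 for row in matpow(T, n - 1) for x in row)
assert all(x == 0.0 for row in matpow(T, n) for x in row)
```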
|
1,806,778 | <p>I'm currently studying for my logic exam, and looking into examples on DFA construction. </p>
<p>Assume the alphabet is {a, b}, and the language to be constructed is defined as follows: </p>
<pre><code>{w | w has at least three a's}
</code></pre>
<p>All models solutions I have come across look like the following: </p>
<p><a href="https://i.stack.imgur.com/sCwmj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sCwmj.png" alt="enter image description here"></a></p>
<p>However, is this the only solution for the aforementioned language? For instance, would we still have a valid DFA if we omitted some of the b-transitions, as in the following case: </p>
<p><a href="https://i.stack.imgur.com/I7m3Y.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/I7m3Y.png" alt="enter image description here"></a></p>
<p>After all, the language would still consist of "at least three a's". However, if the latter model is not valid, why is that?</p>
<p>Thanks in advance!</p>
| kennytm | 171 | <p>I'm not sure why there is a $(-2+\gamma+\ln 2)$, because $\left|\ln(x+1) - \Psi(x+\frac32)\right| < 0.0365$ itself is already satisfied.</p>
<p>The approximation comes from the <a href="https://en.wikipedia.org/wiki/Digamma_function#Computation_and_approximation" rel="nofollow">series expansion of $\exp\left(\Psi(x+\frac12)\right)$</a>, which states, for $x > 1$,</p>
<p>\begin{align}
\Psi\left(x + \frac12\right) = \ln\left( x + \frac{1}{4!\cdot x} - \frac{37}{8\cdot 6!\cdot x^3} + o\left(\frac1{x^5}\right) \right)
\end{align}
and thus $\Psi(x+\frac32)$ is approximately $\ln(x+1)$. The largest difference happens at $x=0$ which is $\Psi(\frac32) = 2 - \ln 4 - \gamma \approx 0.03648997$.</p>
|
1,806,778 | <p>I'm currently studying for my logic exam, and looking into examples on DFA construction. </p>
<p>Assume the alphabet is {a, b}, and the language to be constructed is defined as follows: </p>
<pre><code>{w | w has at least three a's}
</code></pre>
<p>All models solutions I have come across look like the following: </p>
<p><a href="https://i.stack.imgur.com/sCwmj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sCwmj.png" alt="enter image description here"></a></p>
<p>However, is this the only solution for the aforementioned language? For instance, would we still have a valid DFA if we omitted some of the b-transitions, as in the following case: </p>
<p><a href="https://i.stack.imgur.com/I7m3Y.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/I7m3Y.png" alt="enter image description here"></a></p>
<p>After all, the language would still consist of "at least three a's". However, if the latter model is not valid, why is that?</p>
<p>Thanks in advance!</p>
| gammatester | 61,216 | <p>I doubt your approximation. First, I guess you missed a factor of $2$ in front of $\ln 2$; the approximation should be
$$f(x) =\ln(x+1)-\Psi(x+3/2)-2+\gamma+2\ln 2 .$$
If you omit the $2$ you have
$$f(0)= 0 -(2-\gamma-2\ln2) -2+\gamma+ \ln 2 = -4 + 2\gamma + 3\ln 2 \approx -0.766$$
while with the factor $2$ you have
$$f(0)= 0 -(2-\gamma-2\ln2) -2+\gamma+ 2\ln 2 = -4 + 2\gamma + 4\ln 2 \approx -0.0723$$
which is much smaller but still violates your given bound.</p>
<p>PS: A Taylor series at $x=0$ computed with Maple is
$$\ln(x+1)-\Psi(x+\tfrac{3}{2}) = (-2+\gamma+2\ln2)+(5-\tfrac{1}{2}\pi^2)x+(-\tfrac{17}{2}+7\zeta(3))x^2
+O(x^3)$$</p>
|
297,251 | <p>In his well-known <a href="http://matwbn.icm.edu.pl/ksiazki/fm/fm101/fm101110.pdf" rel="nofollow noreferrer">paper</a> Bellamy constructs an indecomposable continuum with exactly two composants. The setup is as follows:</p>
<p>We have an inverse-system $\{X(\alpha); f^\alpha_\beta: \beta,\alpha < \omega_1\}$ of metric indecomposable continua and retractions. For each $X(\alpha)$ a composant $C(\alpha) \subset X(\alpha)$ is specified and each $C(\alpha)$ maps into $C(\beta) $. </p>
<p>The inverse limit $X$ has exactly two composants. The first is the union
$\bigcup\{X(\beta): \beta < \omega_1\}$ where we identify $X(\beta)$ with the set of sequences $(x_\alpha) \in X$ with $x_\beta = x_{\beta+1} = x_{\beta+2} = \ldots . $ The second composant is the inverse limit $\{C(\alpha); f^\alpha_\beta: \beta,\alpha < \omega_1\}$.
Observe there is no reason <em>a priori</em> for the second composant to be nonempty. However, I do not believe an example is known.</p>
<p>My question is an easier one. Can you think of an example of a metric continuum $M$ and a $\omega_1$-indexed decreasing nest of dense semicontinua <strong>with empty intersection</strong>? We call the set $S \subset M$ a <em>semicontinuum</em> to mean for each $x,y \in S$ there exists a continuum $K$ with $\{x,y\} \subset K \subset S$.</p>
<p>If the second composant was empty the family $\{f^\alpha_0(C(\alpha)): \alpha < \omega_1\}$ would be such a nest for $M = X_0$.</p>
<p>If we index by $\omega$ instead, an example is easy to come by. Let $M$ be the unit disc and $Q = \{q_1,q_2, \ldots\}$ an enumeration of the rational points on the boundary. Let $S(n)$ be formed by drawing the straight line segment from each element of $\{q_n,q_{n+1}, \ldots\}$ to each rational point of $(0,1/n) \times \{0\}$. Then add in $(0,1/n) \times \{0\}$ itself to make the space a semicontinuum.</p>
<p>Indexing by $\omega_1$ must somehow get around the fact that any $\omega_1$ decreasing nest of closed subsets of a metric continuum eventually stabilizes.</p>
<p>It feels like this would be easier if we assume the Continuum Hypothesis.</p>
| Joseph O'Rourke | 6,094 | <p>Not sure if this will help...</p>
<blockquote>
<p>Süß, Hendrik. "Fano Threefolds with 2-Torus Action: A Picture Book." <em>Documenta Mathematica</em> 19 (2014): 905-940.
(<a href="https://www.math.uni-bielefeld.de/documenta/vol-19/30.pdf" rel="nofollow noreferrer">PDF download</a>.)
<br />
<strong>Abstract</strong>. ...we give
a combinatorial description for smooth Fano threefolds admitting a
$2$-torus action.</p>
</blockquote>
<p><hr />
<a href="https://i.stack.imgur.com/ouRpP.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ouRpP.jpg" alt="BlowUp"></a></p>
<hr />
|
2,529,088 | <p>Well, I'm learning olympiad inequalities, and most of the books go like this:</p>
<blockquote>
<p>Memorize holder-cauchy-jensen-muirhead, then just dumbass (i.e bash and homogenize) and use the above theorems.</p>
</blockquote>
<p>Needless to say, this approach leaves me feeling extremely uninterested in inequalities. </p>
<p>What are some good books which make inequalities interesting? (Not involving only extremely elementary things like AM-GM and Cauchy, but also more advanced results like Jensen, Muirhead, Karamata, and Schur.)</p>
<hr>
<p><em>To clarify what I consider interesting: as of now, the only inequality problem that I have found interesting is that the following statement for positive reals</em></p>
<blockquote>
<p>$\displaystyle \sum_{i=0}^{n}\frac{a_i}{a_{i-1}+a_{(i+1) \bmod n}} \leq \frac{1}{2}$</p>
</blockquote>
<p><em>only holds for finitely many $n$. Another thing that I find interesting is bounds that cannot be found analytically but are known to exist.</em></p>
| Jaideep Khare | 421,580 | <p>$$\underbrace{(2^n)}_{\text{even}}\underbrace{( 2^{m-n}-1)}_{\text{odd}}=112=\underbrace{2^4}_{\text{even}}\times \underbrace{7}_{\text{odd}}$$</p>
<p>Thus, $$2^n=2^4 \implies \color{blue}{n=4}$$ and $$2^{m-4}-1=7 \implies 2^{m-4}=2^3 \implies \color{blue}{m=7}$$</p>
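<p>As a quick confirmation (my own sketch), a brute-force search shows this is the only solution with $m > n \ge 1$; for $m \ge 8$ we already have $2^m - 2^n \ge 2^{m-1} > 112$, so a small range suffices:</p>

```python
# Sketch: enumerate all (m, n) with m > n >= 1 solving 2^m - 2^n = 112.
solutions = [(m, n)
             for m in range(1, 64)
             for n in range(1, m)
             if 2**m - 2**n == 112]
assert solutions == [(7, 4)]  # 128 - 16 = 112
```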
|
2,529,088 | <p>Well, I'm learning olympiad inequalities, and most of the books go like this:</p>
<blockquote>
<p>Memorize holder-cauchy-jensen-muirhead, then just dumbass (i.e bash and homogenize) and use the above theorems.</p>
</blockquote>
<p>Needless to say, this approach leaves me feeling extremely uninterested in inequalities. </p>
<p>What are some good books which make inequalities interesting? (Not involving only extremely elementary things like AM-GM and Cauchy, but also more advanced results like Jensen, Muirhead, Karamata, and Schur.)</p>
<hr>
<p><em>To clarify what I consider interesting: as of now, the only inequality problem that I have found interesting is that the following statement for positive reals</em></p>
<blockquote>
<p>$\displaystyle \sum_{i=0}^{n}\frac{a_i}{a_{i-1}+a_{(i+1) \bmod n}} \leq \frac{1}{2}$</p>
</blockquote>
<p><em>only holds for finitely many $n$. Another thing that I find interesting is bounds that cannot be found analytically but are known to exist.</em></p>
| Robert Z | 299,698 | <p>Hint. Note that $m>n\geq 1$ and
$$2^n\cdot (2^{m-n} -1)=112=2^4\cdot 7.$$</p>
|
4,290,960 | <p>In machine learning (and not only), it is very common to see concatenation of different feature vectors into a single one of higher dimension which is then processed by some function. For example, feature vectors computed for an image at different scales are concatenated to form a multi-scale feature vector which is then further processed.</p>
<p>However, combining vectors by concatenation seems somehow artificial to me (we simply stack them and then use a function that operates on a higher-dimensional space):</p>
<p><span class="math-container">$$\mathbf{z} = \mathbf{v} \oplus \mathbf{w} = [v_1, \dots, v_n]^T \oplus [w_1, \dots, w_m]^T = [v_1,\dots, v_n, w_1,\dots, w_m]^T \in \mathbb{R}^{n+m},$$</span></p>
<p><span class="math-container">$$f(\mathbf{z}): \mathbb{R}^{n+m} \to \mathbb{R}^{k}.$$</span></p>
<p>First, I would like to ask if there is a formal definition of concatenation as a mapping to a higher-dimensional space (perhaps in the form of a matrix multiplication). What can be said about the space where the concatenated vectors live? In particular, if the second vector is fixed, the points represented by the first vector will be mapped to a higher-dimensional space, but they will be confined to the subspace <span class="math-container">$\mathbb{R}^{n}\subset \mathbb{R}^{n+m}$</span> perpendicular to the remaining axes <span class="math-container">$n+1,\dots,n+m$</span>. It's like a manifold embedding.</p>
<p>Finally, I was wondering if there are alternatives to concatenation for effective combination of feature vectors?</p>
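<p>For concreteness, here is one way (my own sketch, with names of my choosing) to realize concatenation as a sum of two linear embeddings, $\mathbf{z} = E_v\mathbf{v} + E_w\mathbf{w}$ with $E_v = [I_n;\,0]$ and $E_w = [0;\,I_m]$. With $\mathbf{w}$ fixed, $\mathbf{v} \mapsto \mathbf{v} \oplus \mathbf{w}$ is then an affine map whose image is an $n$-dimensional affine subspace of $\mathbb{R}^{n+m}$:</p>

```python
# Sketch: concatenation written as a sum of two linear embeddings.
def embed_first(v, m):
    # E_v v = [I_n; 0] v : pad v with m trailing zeros.
    return list(v) + [0.0] * m

def embed_second(w, n):
    # E_w w = [0; I_m] w : pad w with n leading zeros.
    return [0.0] * n + list(w)

def concat(v, w):
    n, m = len(v), len(w)
    return [a + b for a, b in zip(embed_first(v, m), embed_second(w, n))]

v, w = [1.0, 2.0, 3.0], [4.0, 5.0]
assert concat(v, w) == [1.0, 2.0, 3.0, 4.0, 5.0]
```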
| Will Jagy | 10,400 | <p>Let me make this Community Wiki; people sometimes dislike computed lists, but they are crucial in exploring Diophantine equations and looking for patterns.</p>
<p>Alright, there are always examples with repeats; for any positive <span class="math-container">$b$</span>, the triple with <span class="math-container">$a=1, c=b$</span> is a solution. Furthermore, Vieta jumping takes this to <span class="math-container">$(2b^2 - 1, b, b)$</span>.</p>
<p>Here is a list of triples with small maximum, deliberately <strong>leaving out the solutions where one of the variables is equal to <span class="math-container">$1$</span></strong>. All triples are written in <em><strong>decreasing order</strong></em>.</p>
<p><span class="math-container">$$ \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc $$</span></p>
<pre><code>7 2 2
17 3 3
26 7 2
31 4 4
49 5 5
71 6 6
97 7 7
97 26 2
99 17 3
127 8 8
161 9 9
199 10 10
241 11 11
244 31 4
287 12 12
337 13 13
362 26 7
362 97 2
391 14 14
449 15 15
485 49 5
511 16 16
577 17 17
577 99 3
647 18 18
721 19 19
799 20 2
846 71 6
881 21 21
967 22 22
1057 23 23
1151 24 24
1249 25 25
1351 26 26
1351 97 7
1351 362 2
1457 27 27
1567 28 28
1681 29 29
1799 30 30
1921 31 31
1921 244 4
2024 127 8
2047 32 32
2177 33 33
2311 34 34
2449 35 35
2591 36 36
2737 37 37
2887 38 38
2889 161 9
3041 39 39
3199 40 40
3361 41 41
3363 99 17
3363 577 3
</code></pre>
<p><span class="math-container">$$ \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc $$</span></p>
|
3,219,187 | <p>The cancellation law for the multiplication of natural numbers is:</p>
<p><span class="math-container">$$\forall m, n\in\mathbb N, \forall p\in\mathbb N-\{0\}, m\cdot p=n\cdot p\Rightarrow m=n.$$</span></p>
<p>Is it possible to show this using induction?</p>
<p>I tried to define <span class="math-container">$$X=\{n\in\mathbb N: \forall m\in\mathbb N, \forall p\in\mathbb N-\{0\}, m\cdot p=n\cdot p\Rightarrow m=n\}.$$</span></p>
<p>It is easy to verify that <span class="math-container">$0\in X$</span> but I'm not able to show that if <span class="math-container">$n\in X$</span> then <span class="math-container">$n+1\in X$</span>. </p>
<p>My attempt: Suppose <span class="math-container">$$m\cdot p=n\cdot p\Rightarrow m=n$$</span>
for all <span class="math-container">$m\in\mathbb N$</span> and <span class="math-container">$p\in\mathbb N-\{0\}$</span>. Supposing</p>
<p><span class="math-container">$$m\cdot p=(n+1)\cdot p$$</span> we should show <span class="math-container">$m=n+1$</span>. But:</p>
<p><span class="math-container">$$m\cdot p=(n+1)\cdot p=n\cdot p+p,$$</span> and I can't see how to use the induction hypothesis.</p>
<p>Well, I could try to prove this using trichotomy. If it were the case that <span class="math-container">$m\neq n+1$</span> then <span class="math-container">$m>n+1$</span> or <span class="math-container">$m<n+1$</span>. In the first case we would have <span class="math-container">$m=n+1+r$</span> for some <span class="math-container">$r\in\mathbb N-\{0\}$</span>. Then:</p>
<p><span class="math-container">$m\cdot p=(n+1+r)\cdot p=(n+1)\cdot p+r\cdot p.$</span>Since <span class="math-container">$r, p\in \mathbb N-\{0\}$</span> it would follow <span class="math-container">$$m\cdot p>(n+1)\cdot p,$$</span> which contradicts <span class="math-container">$$m\cdot p=(n+1)\cdot p.$$</span> Is there another way to do this proof?</p>
| Mauro ALLEGRANZA | 108,274 | <p><em>Hint</em></p>
<p>You have : <span class="math-container">$m⋅p=(n+1)⋅p$</span>.</p>
<p>But you know that <span class="math-container">$(n+1) \ne 0$</span> and the assumption is that <span class="math-container">$p \ne 0$</span>. Thus <span class="math-container">$m⋅p=(n+1)⋅p \ne 0$</span> and from it and <span class="math-container">$p \ne 0$</span> we have that <span class="math-container">$m \ne 0$</span>.</p>
<p>To say that <span class="math-container">$m \ne 0$</span> means that <span class="math-container">$m=z+1$</span> for some <span class="math-container">$z$</span>. Thus, from <span class="math-container">$m⋅p=(n+1)⋅p$</span> we get :</p>
<blockquote>
<p><span class="math-container">$(z+1)⋅p=zp+p=np+p$</span>.</p>
</blockquote>
<p>Now we apply the (previously proved, by induction) cancellation law for addition to get: <span class="math-container">$zp=np$</span>.</p>
<p>To it we apply the induction hypothesis to conclude:</p>
<blockquote>
<p><span class="math-container">$z=n$</span>.</p>
</blockquote>
<p>From it we get : <span class="math-container">$z+1=n+1$</span>, i.e.</p>
<blockquote>
<blockquote>
<p><span class="math-container">$m=n+1$</span>.</p>
</blockquote>
</blockquote>
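<p>The induction above can also be cross-checked mechanically. The following is only a finite brute-force sanity check of the statement over a small range of naturals — not a proof (the induction is the proof):</p>

```python
# Brute-force check of the cancellation law on a finite range:
# for all m, n in [0, bound) and p in [1, bound),
# m*p == n*p should force m == n.
def cancellation_holds(bound):
    for m in range(bound):
        for n in range(bound):
            for p in range(1, bound):
                if m * p == n * p and m != n:
                    return False
    return True

print(cancellation_holds(30))  # True
```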
|
180,121 | <p>Let $f:X\rightarrow Y$ be a morphism of schemes and let $\mathcal{F}$ be a sheaf on $Y$. Then there is a natural map $$\Psi:\mathcal{F} \rightarrow f_{\ast}f^{\ast}(\mathcal{F})$$ and localizing $$\Psi_p:\mathcal{F}_p \rightarrow f_{\ast}f^{\ast}(\mathcal{F})_p=f_{p,\ast}f_p^{\ast}(\mathcal{F}_p)$$</p>
<p>I'm interested in learning under what conditions on $X,Y,f,\mathcal{F},p$ is $\Psi_p$ an isomorphism. E.g. is it an isomorphism if $\mathcal{F}$ is quasi-coherent and $f$ is étale?</p>
<p>Thanks in advance for any insight.</p>
| Benjamin Schmidt | 14,385 | <p>The bounded derived category of coherent sheaves is a nice technical tool to understand this question more in depth. Assume that $A$ and $B$ are bounded complexes of coherent sheaves. Then by using derived functors you get a general version of the projection formula
$$Rf_*(B) \otimes^L A \cong Rf_*(B \otimes^L Lf^* A).$$
In order to deal with your situation you can set $B = \mathcal{O}_X$ and $A = Lf^*\mathcal{F}$. Then you get
$$Rf_* (\mathcal{O}_X) \otimes^L Lf^*\mathcal{F} \cong Rf_* Lf^* \mathcal{F}.$$</p>
<p>In order to get the special cases Sasha told you about, you just need to use the fact that $f^*$ is exact if $f$ is flat or that generally $f^*$ is exact on flat sheaves.</p>
|
9,880 | <p>The ban on edits that change only a few characters can be quite annoying in some cases. I don't have a problem with trying to keep trivial spelling corrections and such out of the peer review queue, but sometimes a small edit is semantically significant. For example, it makes perfect sense to edit an otherwise good question/answer to change a $0$ to a $1$ or to change $a\implies b\implies c$ to $(a\implies b)\implies c$.</p>
| Carl Mummert | 630 | <p>There is no absolute ban on small edits. It's up to each person who has enough rep to edit to decide whether a change should be made. </p>
<p>The main reason to avoid small changes is that an edit can bump a question onto the front page. This can be particularly problematic if someone edits a few dozen questions in a row. So for "nitpicking" changes, particularly to older questions, it may be better to just let it be, if there is little chance of confusion. It is less likely for someone to make large numbers of substantial edits quickly, because these require more thought. </p>
<p>Another reason to avoid small changes is that the person who wrote the question might feel irritable about having someone point out a trivial mistake, just as a speaker may be irritable if someone stops the presentation to point out a trivial typo on their slides. </p>
<p>In both cases, the question to think about is whether the benefit of the edit outweighs any potential negative effects. If there is a substantial benefit, there is no reason to avoid the edit. If there is little benefit, particularly for older questions, then the edit might be better if not done. You can always leave a comment instead.</p>
|
481,673 | <p>Find all functions $g:\mathbb{R}\to\mathbb{R}$ with $g(x+y)+g(x)g(y)=g(xy)+g(x)+g(y)$ for all $x,y$.</p>
<p>I think the solutions are $0, 2, x$. If $g(x)$ is not identically $2$, then $g(0)=0$. I'm trying to show if $g$ is not constant, then $g(1)=1$. I have $g(x+1)=(2-g(1))g(x)+g(1)$. So if $g(1)=1$, we can show inductively that $g(n)=n$ for integer $n$. Maybe then extend to rationals and reals.</p>
| Aldo Guzmán Sáenz | 92,139 | <p>About showing that $g(1)=1$: If we assume that $g(0)=0$ and $g(2)\neq 0$, we can prove that $g(1)=1$ as follows:</p>
<p>We take $x=y=2$, and substitute:</p>
<p>$g(2+2)+g(2)g(2)=g(2\cdot 2)+g(2)+g(2)$</p>
<p>thus</p>
<p>$g(4)+g(2)^2=g(4)+2g(2)$, and since we are assuming that $g(2)\neq 0$, then</p>
<p>$g(2)=2$.</p>
<p>Now, if we have $x\neq 0$, then taking $y=x^{-1}$ and substituting on the original equation gives us</p>
<p>$g(x+x^{-1})+g(x)g(x^{-1})=g(1)+g(x)+g(x^{-1})$.</p>
<p>In particular, taking $x=1$,</p>
<p>$g(2)+g(1)g(1)=g(1)+g(1)+g(1)$.</p>
<p>The assumption $g(2)\neq 0$ and this equation in particular imply that $g(1)\neq 0$. And in fact we have</p>
<p>$3g(1)-g(1)^2=g(2)=2$, which gives $g(1)=1$ <strong>or $2$</strong>.</p>
<hr>
<p>Added later (to rule out $g(1)=2$):</p>
<p>Suppose that $g(1)=2$. Taking $x=1,y=-1$ we have</p>
<p>$g(1+(-1))+g(1)g(-1)=g(1\cdot (-1))+g(1)+g(-1)$, that is</p>
<p>$2g(-1)=g(-1)+2+g(-1)$, and so</p>
<p>$0=2$, which is absurd.</p>
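<p>As a quick numerical cross-check, the three candidates <span class="math-container">$g=0$</span>, <span class="math-container">$g=2$</span> and <span class="math-container">$g(x)=x$</span> from the question can be tested against the functional equation at randomly sampled points (the tolerance only absorbs floating-point noise):</p>

```python
import random

# Verify g(x+y) + g(x)g(y) = g(xy) + g(x) + g(y) at random points
# for the three candidate solutions.
candidates = [lambda x: 0, lambda x: 2, lambda x: x]

def satisfies(g, trials=1000, tol=1e-9):
    for _ in range(trials):
        x, y = random.uniform(-10, 10), random.uniform(-10, 10)
        if abs(g(x + y) + g(x) * g(y) - (g(x * y) + g(x) + g(y))) > tol:
            return False
    return True

print([satisfies(g) for g in candidates])  # [True, True, True]
```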
|
2,086,598 | <p>Let's start with a well known limit:
$$\lim_{n \to \infty} \left(1 + {1\over n}\right)^n=e$$</p>
<p>As $n$ approaches infinity, the expression evaluates to Euler's number $e\approx 2.7182$. Why that number? What properties does the limit have that lead it to 2.71828$\dots$ and why is this number important? Where can we find $e$, and in what branches of mathematics?</p>
| MoebiusCorzer | 283,812 | <p>As Walter Rudin puts it in his famous "Real and Complex Analysis":</p>
<blockquote>
<p>This is the most important function in mathematics. It is defined, for every complex number <span class="math-container">$z$</span>, by the formula</p>
<p><span class="math-container">$$\exp(z)=\sum_{n=0}^{\infty}\frac{z^{n}}{n!}$$</span></p>
</blockquote>
<p>There are alternative definitions, and the "problem" of this one is that it requires notions about convergence of series, etc. But one big advantage - and it is why Rudin introduces it this way - is that it allows us to define diverse important functions directly from this one: <span class="math-container">$\sin$</span>, <span class="math-container">$\cos$</span>, <span class="math-container">$\log$</span>, etc., not to mention the famous Euler identity. From the exponential function, we can also define the power of any positive number, which is not trivial (indeed, why would there be a number <span class="math-container">$3^{\sqrt{2}}$</span> for example?). It can also be used to define <span class="math-container">$\pi$</span>.</p>
<p>It also appears that it is, up to a constant factor, the only function equal to its own derivative, which is a practical property in many situations.</p>
<p>In mathematics, one often wants to define things by making links with existing ones, but above all to define useful things for our purposes. The ubiquity of the exponential comes from the fact that it appears to fulfill this very goal of being useful. The fact that it is equal to <span class="math-container">$2.7\dots$</span> has no real importance (what is important is that it is greater than <span class="math-container">$1$</span>, say).</p>
<p>If you have some knowledge of calculus (convergence of series), the introduction of Rudin's book can be a good read. Actually, even if you don't have any knowledge of series, it can be convincing that it is indeed "the most important function in mathematics". (Of course, there are other important functions and a "function" is a very general concept, but in analysis, topology, differential geometry, it is quite important).</p>
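<p>The defining series quoted above is easy to experiment with. A minimal sketch summing partial terms, using a complex argument to recover the Euler identity mentioned in the answer:</p>

```python
import math

# Partial sums of exp(z) = sum_{n>=0} z^n / n!, valid for complex z.
def exp_series(z, terms=40):
    total, term = 0, 1
    for n in range(terms):
        total += term
        term = term * z / (n + 1)  # next term z^(n+1)/(n+1)!
    return total

print(abs(exp_series(1.0) - math.e) < 1e-9)      # True
print(abs(exp_series(1j * math.pi) + 1) < 1e-9)  # True: e^{i*pi} = -1
```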
|
3,663,267 | <p>In a linear algebra class I was given a theorem: If <span class="math-container">$\left\{\boldsymbol{u}_{1}, \boldsymbol{u}_{2}, \ldots, \boldsymbol{u}_{n}\right\}$</span> is a linearly independent subset of <span class="math-container">$\mathbb{R}^{n}$</span> then
<span class="math-container">$
\mathbb{R}^{n}=\operatorname{span}\left\{\boldsymbol{u}_{1}, \boldsymbol{u}_{2}, \ldots, \boldsymbol{u}_{n}\right\}
$</span></p>
<p>I understand what this means but it got me thinking - what do we really mean when we say a set is equal to <span class="math-container">$\mathbb{R}^{n}$</span>? If we have the two vectors (1,0,0) and (0,1,0), then they are linearly independent and their span gives us a plane. Now, would we call this spanning set equal to <span class="math-container">$\mathbb{R}^{2}$</span> or is it just isomorphic to <span class="math-container">$\mathbb{R}^{2}$</span>? If we only have the latter, then is any subset of <span class="math-container">$\mathbb{R}^{3}$</span> actually equal to <span class="math-container">$\mathbb{R}^{2}$</span> or can we only talk about a set being equal to <span class="math-container">$\mathbb{R}^{2}$</span> when we have not explicitly defined vectors that can live in <span class="math-container">$\mathbb{R}^{3}$</span> (as in vectors that have 3 coordinates)?</p>
| Dabouliplop | 426,049 | <p>What you are trying to do is to understand why, if <span class="math-container">$i$</span>, <span class="math-container">$j$</span>, <span class="math-container">$k$</span> are three distinct variables (as in programming languages), the single substitution <span class="math-container">$j \leftarrow j + abi$</span> has the same effect as the following list of substitutions:</p>
<p><span class="math-container">$k \leftarrow k + a i$</span><br>
<span class="math-container">$j \leftarrow j + b k$</span><br>
<span class="math-container">$k \leftarrow k - ai$</span><br>
<span class="math-container">$j \leftarrow j - bk$</span></p>
<p>We see that the variable <span class="math-container">$k$</span> will indeed not be changed by this list of substitutions, the third one cancelling the first. After the second step, the variable <span class="math-container">$j$</span> holds <span class="math-container">$j + bk + bai$</span> in terms of the initial values, and the final step removes the <span class="math-container">$bk$</span>.</p>
<p>But this explanation is the same thing as expanding the expression you gave, except it is more informal. When you expand <span class="math-container">$(I + a e_{ik})(I + b e_{kj})$</span>, you get the matrix describing the values after the second step in terms of the initial values. When you multiply on the right this matrix by <span class="math-container">$(I-ae_{ik})$</span>, you get the description of the values of the variables at the third step. Etc. I wouldn't call this a massive expression though.</p>
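<p>The bookkeeping above can be replayed directly on integer "registers". A small sketch (the concrete values of <code>i0, j0, k0, a, b</code> are arbitrary illustrations):</p>

```python
# Apply the four substitutions in order and compare with the single
# update j <- j + a*b*i; k should come back unchanged.
def apply_substitutions(i, j, k, a, b):
    k = k + a * i
    j = j + b * k
    k = k - a * i
    j = j - b * k
    return i, j, k

i0, j0, k0, a, b = 5, 7, 11, 3, 4
i1, j1, k1 = apply_substitutions(i0, j0, k0, a, b)
print(j1 == j0 + a * b * i0, k1 == k0)  # True True
```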
|
2,796,854 | <p>What happens if both the first and second derivatives at a certain point are $0$? Is it an inflection point, an extremum point, or neither? Can we say anything at all about a point in such a case? </p>
| Surb | 154,545 | <p>You can't say anything. Examples: $$x\longmapsto x^3\quad x\longmapsto x^4\quad \text{and}\quad x\longmapsto\begin{cases} x^3\sin\left(\frac{1}{x}\right)&x\neq 0\\ 0&x=0\end{cases}.$$</p>
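<p>The contrast between these examples can also be seen numerically. A rough finite-difference sketch for <span class="math-container">$x^3$</span> and <span class="math-container">$x^4$</span> (the step size <code>h</code> is chosen only for illustration):</p>

```python
# Central differences: both x^3 and x^4 have f'(0) ~ 0 and f''(0) ~ 0,
# yet 0 is an inflection point of x^3 and a minimum of x^4.
h = 1e-4
def d1(f, x): return (f(x + h) - f(x - h)) / (2 * h)
def d2(f, x): return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

cube, quart = lambda x: x**3, lambda x: x**4
print(abs(d1(cube, 0)) < 1e-6, abs(d2(cube, 0)) < 1e-6)    # True True
print(abs(d1(quart, 0)) < 1e-6, abs(d2(quart, 0)) < 1e-6)  # True True
print(cube(-h) < cube(0) < cube(h))     # True: x^3 increases through 0
print(quart(-h) > quart(0) < quart(h))  # True: x^4 has a local minimum
```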
|
1,154,690 | <p>We have instituted random drug testing at our company. I was charged with writing the code to generate the weekly random list of employees. We've gotten some complaints because of people getting picked more frequently than they expected and our lawyer wants some evidence that these events fall within the bell curve of randomness. </p>
<p>I'm very confident in our code. I have now written several Monte Carlo simulations that back up the results we've had. That said, all my Monte Carlo simulations (each written from scratch, completely independently) also show a phenomenon that I can't explain and I'm hoping others can.</p>
<p>Here are the parameters I'm using: 4550 employees, of which 91 (2%) are picked each week at random.</p>
<p>The phenomenon we're encountering is this: </p>
<p>Over the first 20 weeks, we expect (according to the Monte Carlo simulations) roughly 32 people to be picked 3+ times (and around 2.7 people to be picked 4+ times, but let's just stick with the people picked 3+ times). And we've had the program going for about 20 weeks and the numbers seem to agree so far.</p>
<p>Over the first 40 weeks, the number of people picked 3+ times shoots up to 207 (more than 6 times as many in twice the time). </p>
<p>Over the first 52 weeks, the number shoots up again to 390 (30% more time than 40 weeks, but 90% more people picked 3 times).</p>
<p>Maybe I've written all my Monte Carlo simulations wrong, but I'm pretty sure I haven't. I've looked at all this a bunch of different ways and I'm convinced this phenomenon is real, but I need to be able to explain it to the VP of HR and I'm not sure why the number of people picked 3 times rises so fast from, say, 40 to 52 weeks (and this holds for all of the counts: the number of people picked 4+ times, the number of people picked 5+ times, etc).</p>
<p>I do understand that say, in the first 4 weeks, you can't possibly have anyone picked 5 times, so the first week where that would be possible would be week 5. So after 10 weeks, you have 5 times as many opportunities for someone to be picked 5 times as you do in 5 weeks (500% increase in odds over increase 100% in time).</p>
<p>But I'm not sure that explains the 40 week to 52 week changes. Or does it?</p>
<p>I've also ruled out any issues with the random number generator (I get roughly the same results using the basic one as I do using the random number generator from the cryptography library). </p>
<p>Thanks to anyone who can explain this in a way that I can take back to HR and our legal guys.</p>
<p><strong>Update</strong></p>
<p>To expound a bit on the process, here's an example: I have a database table that I've created called DrugTest. It has 2 columns: TestRun and Employee. Both columns are integers.</p>
<p>So for 52 weeks, I have TestRun values of 1 to 52 and then I have 91 random employee numbers (numbers between 0 and 4549) for each TestRun value. No employee can be picked twice in the same week (the primary key is (TestRun, Employee), ensuring unique employee numbers for each TestRun value).</p>
<p>For a sample run, I loaded up 52 weeks of data. Then I execute the following query:</p>
<pre><code>select employee, count(*) as cnt
from DrugTest
where TestRun <= 52
group by employee
having count(*) = 3
order by 2
</code></pre>
<p>The above query returns 313 results</p>
<pre><code>select employee, count(*) as cnt
from DrugTest
where TestRun <= 40
group by employee
having count(*) = 3
order by 2
</code></pre>
<p>The above query returns 178 results</p>
<pre><code>select employee, count(*) as cnt
from DrugTest
where TestRun <= 20
group by employee
having count(*) = 3
order by 2
</code></pre>
<p>The above query returns 34 results</p>
| TravisJ | 212,738 | <p>Let $n$ be the number of weeks you've sampled. Then for any individual, the probability that they are chosen in any given week is $p=.02$. The probability that after $n$ weeks any individual has been chosen at least 3 times is: </p>
<p>\begin{align*} \mathbb{P}(\text{chosen } \geq 3 \text{ times} )
&= 1 - \mathbb{P}(\text{chosen}\leq 2 \text{ times}) \\
&= 1 - \left(\sum_{i=0}^{2} \binom{n}{i}p^{i}(1-p)^{n-i}\right) \\
&= 1 - (1-p)^{n} - np(1-p)^{n-1} - \frac{n(n-1)p^2}{2}(1-p)^{n-2}
\end{align*}</p>
<p>Just picking a few values of $n$ we have that, for $n=10$, any individual has a .08% chance of being chosen at least 3 times. The expected number of employees who would be tested at least 3 times would be 3.9. For reference, call this triple $(10, .08, 3.9)$. The next few values are:</p>
<p>$(10, .08, 3.9)$, $(15, .3, 13.8)$, $(20, .7, 32.2)$, $(25, 1.32, 60.2)$, $(30, 2.17, 98.8)$, $(35, 3.25, 148)$, $(40, 4.57, 207.8)$. It looks like your values are a little high, but what I've put here is just what one expects. It is very unlikely to get exactly "what's expected", but it is also unlikely to deviate far from it. The question is, is your deviation unexpectedly large? I'm not a statistician, so I cannot answer that well enough to satisfy. I would, though, check your code and make sure you're not doing something bad with seeding the random number generator you are using. I would also be certain that you are not using your own "home-made" random number generator. Also, random number generators do not generate truly random numbers, let alone "uniform" random numbers. It may be useful to implement something that makes sure that people who've been chosen in the last week or two have a lower probability of being chosen again.</p>
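<p>The binomial computation above is easy to reproduce. A short sketch using only the standard library, with the question's parameters (4550 employees, <span class="math-container">$p=0.02$</span> per week):</p>

```python
from math import comb

# Expected number of employees chosen at least 3 times after n weeks:
# staff * P(Binomial(n, p) >= 3).
def expected_3_plus(n, p=0.02, staff=4550):
    p_at_most_2 = sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(3))
    return staff * (1 - p_at_most_2)

for n in (20, 40, 52):
    print(n, round(expected_3_plus(n), 1))
# approximately: 20 -> 32.2, 40 -> 207.8, 52 -> 391.0
```

<p>The value for 52 weeks lands close to the 390 reported in the question, which suggests the simulations were behaving as expected.</p>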
|
3,407,174 | <p>I'm dealing with a specific polynomial function. The first derivative of it is displayed below. As you can see, it has the shape of an asymmetric function. But what does this tell us about the initial function in general, and in terms of finding a maximum or a minimum for it? I know it is a general question, but I find hardly any documentation on this, and I think it's good if someone can give general information on when the first derivative takes this form.</p>
<p>Any information is appreciated.</p>
<p><a href="https://i.stack.imgur.com/ECkgk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ECkgk.png" alt="enter image description here"></a></p>
<p>edit: when I have different inputs in the function, the first derivative looks like this:</p>
<p><a href="https://i.stack.imgur.com/Cfv3Z.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Cfv3Z.png" alt="enter image description here"></a></p>
| J.G | 293,121 | <p>Note that <span class="math-container">$ee^T$</span> is rank-one with a single nonzero eigenvalue <span class="math-container">$e^Te=m$</span> (with eigenvector <span class="math-container">$e$</span>). So <span class="math-container">$(1+1/\gamma)I-ee^T/m$</span> has eigenvalues <span class="math-container">$1+1/\gamma$</span> with multiplicity <span class="math-container">$m-1$</span> and <span class="math-container">$1/\gamma$</span> with multiplicity one.</p>
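<p>The spectral claim can be verified directly, assuming <span class="math-container">$e$</span> is the all-ones vector in <span class="math-container">$\mathbb{R}^m$</span> (so that <span class="math-container">$e^Te=m$</span>). A pure-Python sketch with illustrative values <span class="math-container">$m=4$</span>, <span class="math-container">$\gamma=2$</span>:</p>

```python
# A = (1 + 1/gamma) I - e e^T / m acts as multiplication by 1/gamma on e
# and by 1 + 1/gamma on any vector orthogonal to e.
m, gamma = 4, 2.0
A = [[(1 + 1 / gamma) * (r == c) - 1 / m for c in range(m)] for r in range(m)]

def matvec(M, v):
    return [sum(M[r][c] * v[c] for c in range(m)) for r in range(m)]

e = [1.0] * m
print(matvec(A, e))        # [0.5, 0.5, 0.5, 0.5] = (1/gamma) e
v = [1.0, -1.0, 0.0, 0.0]  # orthogonal to e
print(matvec(A, v))        # [1.5, -1.5, 0.0, 0.0] = (1 + 1/gamma) v
```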
|
988,581 | <p>Our teacher challenged us with a question, and it goes like this:
the derivative of a function is defined as </p>
<p>$$\begin{cases} 1 & \text{if } x>0 \\ 2 & \text{if } x=0 \\ -1 &\text{if } x<0\end{cases} $$</p>
<p>Now he said that "$2$, if $x=0$" is not possible. He asked us to prove that the derivative at $x=0$ does not exist, without using the concept of limits or graphical intuition such as the slope of a tangent line, but using only verbal reasoning.</p>
| Lucian | 93,448 | <ul>
<li><p>On one hand, the series is strictly increasing.</p></li>
<li><p>On the other hand, <span class="math-container">$n^3+2\le2n^3$</span> and <span class="math-container">$n^4+3n^2+1\ge n^4$</span>, implying <span class="math-container">$t_n\le\ldots~\Big($</span>I'll let you fill in the dots, and draw the conclusion for yourself<span class="math-container">$\Big)$</span>.</p></li>
</ul>
|
381,309 | <p>Let <span class="math-container">$T\in B(\mathcal{H} \otimes \mathcal{H})$</span> where <span class="math-container">$\mathcal{H}$</span> is a Hilbert space. We can define operators
<span class="math-container">$$T_{[12]}= T \otimes 1;\quad T_{[23]}= 1 \otimes T$$</span>
and if <span class="math-container">$\Sigma: \mathcal{H} \otimes \mathcal{H} \to \mathcal{H} \otimes \mathcal{H}$</span> is the "flip" map, then we can define
<span class="math-container">$$T_{[13]}= \Sigma_{[23]}T_{[12]}\Sigma_{[23]}= \Sigma_{[12]}T_{[23]}\Sigma_{[12]}$$</span></p>
<p><strong>Question</strong>: Given <span class="math-container">$S,T \in B(\mathcal{H} \otimes \mathcal{H})$</span>, is it true that
<span class="math-container">$$(ST)_{[13]}= S_{[13]}T_{[13]}?$$</span></p>
<p>I attempted this as follows:</p>
<p>We know that the algebraic tensor product <span class="math-container">$B(\mathcal{H}) \odot B(\mathcal{H})$</span> is weak<span class="math-container">$^*$</span>-dense (= <span class="math-container">$\sigma$</span>-weakly dense) in <span class="math-container">$B(\mathcal{H} \otimes \mathcal{H})$</span>. It is easy to see that the identity holds for <span class="math-container">$S,T \in B(\mathcal{H}) \odot B(\mathcal{H})$</span>.</p>
<p>Can I conclude from this that the equality holds for all <span class="math-container">$S,T \in B(\mathcal{H}) \overline{\otimes} B(\mathcal{H})= B(\mathcal{H}\otimes \mathcal{H})$</span> (here, the first tensorproduct is the von-Neumann algebraic tensor product).</p>
<p>It is natural to try to use results involving weak<span class="math-container">$^*$</span>-continuity and Kaplansky-density-like results, but I'm having trouble finishing the proof. Any ideas?</p>
| Gerald Edgar | 454 | <p>Not an answer (does not use translation invariance)</p>
<p>Another non-measurable set, which may generalize more easily, is the <a href="https://math.stackexchange.com/questions/169714/whats-application-of-bernstein-set">Bernstein set</a> ... That is, a set <span class="math-container">$E$</span> such that for every uncountable closed set A, we have <span class="math-container">$A \cap E \ne \varnothing$</span> and <span class="math-container">$A \setminus E \ne \varnothing$</span> .</p>
<p>[With AC] we can prove that any uncountable Polish space admits a Bernstein set (indeed, <span class="math-container">$\mathfrak c$</span> disjoint Bernstein sets).
If <span class="math-container">$\mu$</span> is any atomless Borel measure on an uncountable Polish space,
then a Bernstein set is not <span class="math-container">$\mu$</span>-measurable.</p>
|
3,777,049 | <p>Suppose that <span class="math-container">$(H_i)_{i \in I}$</span> is a collection of closed <strong>orthogonal</strong> subspaces of the Hilbert space <span class="math-container">$H$</span>. Suppose that <span class="math-container">$\sum_{i \in I} \Vert x_i \Vert^2 < \infty$</span>. Prove that <span class="math-container">$\sum_{i \in I} x_i$</span> converges in <span class="math-container">$H$</span>.</p>
<p>Here <span class="math-container">$\sum_{i \in I} x_i$</span> is the norm-limit of the net <span class="math-container">$(\sum_{i \in J} x_i)$</span> where <span class="math-container">$J$</span> ranges over all finite subsets of <span class="math-container">$I$</span>, ordered by inclusion.</p>
<p><strong>Attempt</strong>:</p>
<p>It suffices to check that <span class="math-container">$(\sum_{i \in J} x_i)_J$</span> is a Cauchy net in <span class="math-container">$H$</span>. So, let <span class="math-container">$\epsilon > 0$</span>. Since <span class="math-container">$\sum_{i \in I} \Vert x_i \Vert^2 < \infty$</span>, we have that <span class="math-container">$(\sum_{i \in J} \Vert x_i \Vert^2)_J$</span> is a Cauchy net. Thus, there is a finite subset <span class="math-container">$J_0 \subseteq I$</span> such that if <span class="math-container">$K,L$</span> are finite subsets of <span class="math-container">$I$</span> containing <span class="math-container">$J_0$</span>, then <span class="math-container">$$\sum_{K \triangle L} \Vert x_i \Vert^2 = |\sum_K \Vert x_i \Vert^2 - \sum_L \Vert x_i\Vert^2 | < \epsilon$$</span></p>
<p>Here <span class="math-container">$K \triangle L = (K \setminus L) \cup (L \setminus K)$</span> is the symmetric difference.</p>
<p>Consequently, for <span class="math-container">$K,L$</span> as above
<span class="math-container">$$\Vert \sum_K x_i - \sum_L x_i \Vert ^2 = \Vert \sum_{K\triangle L} x_i \Vert ^2 = \sum_{K \triangle L}
\Vert x_i \Vert^2 < \epsilon$$</span></p>
<p>Hence <span class="math-container">$(\sum_{i \in J} x_i)_J$</span> is a Cauchy net in <span class="math-container">$H$</span> and we are done.</p>
<p>Is this correct? I think the step with the <span class="math-container">$\triangle $</span> might be flawed.</p>
| Kavi Rama Murthy | 142,385 | <p>Some mistakes have already been pointed out. So I will give a valid proof.</p>
<p><span class="math-container">$ \sum \|x_i\|^{2} <\infty$</span> implies that <span class="math-container">$x_i=0$</span> for all but countably many <span class="math-container">$i$</span>. Hence the result reduces to the case of a countable family <span class="math-container">$(H_i)_{i \geq 1}$</span>.</p>
<p>In this case <span class="math-container">$\|\sum\limits_{k=n}^{m} x_i\|^{2}=\sum\limits_{k=n}^{m} \|x_i\|^{2}$</span> by orthogonality and hence <span class="math-container">$(\sum\limits_{k=n}^{m} x_i)$</span> is Cauchy. This finishes the proof since <span class="math-container">$H$</span> is complete.</p>
<p>[ If <span class="math-container">$\|x_{i_j}\| >\frac 1 n$</span> for <span class="math-container">$j=1,2,..,N$</span> then <span class="math-container">$\sum \|x_i||^{2} \geq \frac N {n^{2}}$</span> proving that <span class="math-container">$N \leq \sum \|x_i\|^{2}n^{2}$</span>. This proves that there are only finitely many <span class="math-container">$x_i$</span>'s with <span class="math-container">$\|x_i\| >\frac 1 n$</span> and taking union over <span class="math-container">$n$</span> we see that there are at most countably many <span class="math-container">$i$</span>'s with <span class="math-container">$\|x_i\|>0$</span>].</p>
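<p>The key step in the argument is the Pythagorean identity <span class="math-container">$\|\sum_k x_{i_k}\|^2=\sum_k \|x_{i_k}\|^2$</span> for pairwise orthogonal vectors. A finite-dimensional numerical sketch, using multiples of the standard basis of <span class="math-container">$\mathbb{R}^5$</span> (which are pairwise orthogonal):</p>

```python
import math
import random

# Pythagorean identity for pairwise orthogonal vectors:
# ||sum x_i||^2 equals sum ||x_i||^2.
dim = 5
coeffs = [random.uniform(-3, 3) for _ in range(dim)]
# x_i = c_i * e_i, a multiple of the i-th standard basis vector
vectors = [[c if j == i else 0.0 for j in range(dim)]
           for i, c in enumerate(coeffs)]

total = [sum(v[j] for v in vectors) for j in range(dim)]
lhs = sum(t * t for t in total)                    # ||sum x_i||^2
rhs = sum(sum(x * x for x in v) for v in vectors)  # sum ||x_i||^2
print(math.isclose(lhs, rhs))  # True
```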
|
3,777,049 | <p>Suppose that <span class="math-container">$(H_i)_{i \in I}$</span> is a collection of closed <strong>orthogonal</strong> subspaces of the Hilbert space <span class="math-container">$H$</span>. Suppose that <span class="math-container">$\sum_{i \in I} \Vert x_i \Vert^2 < \infty$</span>. Prove that <span class="math-container">$\sum_{i \in I} x_i$</span> converges in <span class="math-container">$H$</span>.</p>
<p>Here <span class="math-container">$\sum_{i \in I} x_i$</span> is the norm-limit of the net <span class="math-container">$(\sum_{i \in J} x_i)$</span> where <span class="math-container">$J$</span> ranges over all finite subsets of <span class="math-container">$I$</span>, ordered by inclusion.</p>
<p><strong>Attempt</strong>:</p>
<p>It suffices to check that <span class="math-container">$(\sum_{i \in J} x_i)_J$</span> is a Cauchy net in <span class="math-container">$H$</span>. So, let <span class="math-container">$\epsilon > 0$</span>. Since <span class="math-container">$\sum_{i \in I} \Vert x_i \Vert^2 < \infty$</span>, we have that <span class="math-container">$(\sum_{i \in J} \Vert x_i \Vert^2)_J$</span> is a Cauchy net. Thus, there is a finite subset <span class="math-container">$J_0 \subseteq I$</span> such that if <span class="math-container">$K,L$</span> are finite subsets of <span class="math-container">$I$</span> containing <span class="math-container">$J_0$</span>, then <span class="math-container">$$\sum_{K \triangle L} \Vert x_i \Vert^2 = |\sum_K \Vert x_i \Vert^2 - \sum_L \Vert x_i\Vert^2 | < \epsilon$$</span></p>
<p>Here <span class="math-container">$K \triangle L = (K \setminus L) \cup (L \setminus K)$</span> is the symmetric difference.</p>
<p>Consequently, for <span class="math-container">$K,L$</span> as above
<span class="math-container">$$\Vert \sum_K x_i - \sum_L x_i \Vert ^2 = \Vert \sum_{K\triangle L} x_i \Vert ^2 = \sum_{K \triangle L}
\Vert x_i \Vert^2 < \epsilon$$</span></p>
<p>Hence <span class="math-container">$(\sum_{i \in J} x_i)_J$</span> is a Cauchy net in <span class="math-container">$H$</span> and we are done.</p>
<p>Is this correct? I think the step with the <span class="math-container">$\triangle $</span> might be flawed.</p>
| Danny Pak-Keung Chan | 374,270 | <p>Firstly, let us clarify the meaning of the symbol <span class="math-container">$\sum_{i\in I}x_{i}$</span>.
Let <span class="math-container">$\mathcal{C}$</span> be the collection of all finite subsets of <span class="math-container">$I$</span>.
Then <span class="math-container">$(\mathcal{C},\subseteq)$</span> is a directed system in the following
sense:</p>
<p>(1) For any <span class="math-container">$J\in\mathcal{C}$</span>, <span class="math-container">$J\subseteq J$</span>,</p>
<p>(2) For any <span class="math-container">$J_{1},J_{2},J_{3}\in\mathcal{C}$</span>, if <span class="math-container">$J_{1}\subseteq J_{2}$</span>
and <span class="math-container">$J_{2}\subseteq J_{3}$</span>, then <span class="math-container">$J_{1}\subseteq J_{3}$</span>,</p>
<p>(3) For any <span class="math-container">$J_{1},J_{2}\in\mathcal{C}$</span>, there exists <span class="math-container">$J_{3}\in\mathcal{C}$</span>
such that <span class="math-container">$J_{1}\subseteq J_{3}$</span> and <span class="math-container">$J_{2}\subseteq J_{3}$</span>.</p>
<hr />
<p>Define a map <span class="math-container">$\theta:\mathcal{C}\rightarrow H$</span> by <span class="math-container">$\theta(J)=\sum_{j\in J}x_{j}$</span>.
Then <span class="math-container">$(\mathcal{C},\subseteq,\theta)$</span> is a net on the Hilbert space.
We say that the net converges to some <span class="math-container">$x\in H$</span> if for any <span class="math-container">$\varepsilon>0$</span>,
there exists <span class="math-container">$J_{0}\in\mathcal{C}$</span> such that <span class="math-container">$||\theta(J)-x||<\varepsilon$</span>
whenever <span class="math-container">$J_{0}\subseteq J$</span>. If such <span class="math-container">$x$</span> exists, it is unique (because
the norm topology on <span class="math-container">$H$</span> is Hausdorff) and we denote it by the symbol
<span class="math-container">$\sum_{i\in I}x_{i}$</span>.</p>
<hr />
<p>Go back to our question. Let <span class="math-container">$I_{0}=\{i\in I\mid x_{i}\neq0\}$</span>. We
firstly show that <span class="math-container">$I_{0}$</span> is at most countable. Prove by contradiction.
Suppose that <span class="math-container">$I_{0}$</span> is uncountable. Observe that <span class="math-container">$I_{0}=\cup_{n\in\mathbb{N}}\{i\in I\mid||x_{i}||^{2}>\frac{1}{n}\},$</span>
so there exists <span class="math-container">$n$</span> such that <span class="math-container">$\{i\in I\mid||x_{i}||^{2}>\frac{1}{n}\}$</span>
is uncountable. Denote <span class="math-container">$I'=\{i\in I\mid||x_{i}||^{2}>\frac{1}{n}\}$</span>,
then
<span class="math-container">\begin{eqnarray*}
\sum_{i\in I}||x_{i}||^{2} & \geq & \sum_{i\in I'}||x_{i}||^{2}\\
& \geq & \sum_{i\in I'}\frac{1}{n}\\
& = & \infty,
\end{eqnarray*}</span>
which is a contradiction. In other words, in the formal sum <span class="math-container">$\sum_{i\in I}x_{i}$</span>,
there are at most countably many of terms are non-zero. Fix an enumeration
for <span class="math-container">$I_{0}$</span>, for example, <span class="math-container">$I_{0}=\{i_{n}\mid n\in\mathbb{N}\}$</span>.
(Note that if <span class="math-container">$I_{0}$</span> is a finite set, we simply set <span class="math-container">$x=\sum_{i\in I_{0}}x_{i}$</span>
and prove that <span class="math-container">$x$</span> is the limit of the net <span class="math-container">$(\mathcal{C},\subseteq,\theta)$</span>
defined in above. We skip this simple case.)</p>
<p>For each <span class="math-container">$n$</span>, define <span class="math-container">$s_{n}=\sum_{k=1}^{n}x_{i_{k}}$</span>. We now show
that <span class="math-container">$(s_{n})$</span> is a Cauchy sequence in <span class="math-container">$H$</span>. Note that <span class="math-container">$\sum_{k=1}^{\infty}||x_{i_{k}}||^{2}=\sum_{i\in I}||x_{i}||^{2}<\infty$</span>.
Let <span class="math-container">$\varepsilon>0$</span> be given, then there exists <span class="math-container">$N$</span> such that for
any <span class="math-container">$N\leq m<n$</span>, we have <span class="math-container">$\sum_{k=m+1}^{n}||x_{i_{k}}||^{2}<\varepsilon$</span>.
Let <span class="math-container">$m,n\in\mathbb{N}$</span> be arbitrary such that <span class="math-container">$N\leq m<n$</span>. Then
<span class="math-container">\begin{eqnarray*}
& & ||s_{n}-s_{m}||^{2}\\
& = & \sum_{k=m+1}^{n}||x_{i_{k}}||^{2}\\
& < & \varepsilon.
\end{eqnarray*}</span>
By completeness of <span class="math-container">$H$</span>, there exists <span class="math-container">$x\in H$</span> such that <span class="math-container">$s_{n}\rightarrow x$</span>.
Finally we show that the net <span class="math-container">$(\mathcal{C},\subseteq,\theta)$</span>
converges to <span class="math-container">$x$</span>. Let <span class="math-container">$\varepsilon>0$</span> be given. Choose <span class="math-container">$N\in\mathbb{N}$</span>
such that <span class="math-container">$\sum_{k=N+1}^{\infty}||x_{i_{k}}||^{2}\leq\varepsilon^{2}$</span>.
For any <span class="math-container">$n>N$</span>, we have
<span class="math-container">\begin{eqnarray*}
& & ||s_{N}-s_{n}||^{2}\\
& = & \sum_{k=N+1}^{n}||x_{i_{k}}||^{2}\\
& \leq & \sum_{k=N+1}^{\infty}||x_{i_{k}}||^{2}\\
& \leq & \varepsilon^{2}.
\end{eqnarray*}</span>
Letting <span class="math-container">$n\rightarrow\infty$</span>, we have <span class="math-container">$||s_{N}-x||\leq\varepsilon$</span>.
Define <span class="math-container">$J_{0}=\{i_{1},i_{2},\ldots,i_{N}\}\in\mathcal{C}$</span>. Let <span class="math-container">$J\in\mathcal{C}$</span>
be such that <span class="math-container">$J_{0}\subseteq J$</span>. We have the estimate:
<span class="math-container">\begin{eqnarray*}
& & ||\theta(J)-x||\\
& \leq & ||\theta(J)-\theta(J_{0})||+||\theta(J_{0})-x||\\
& = & ||s_{N}-x||+||\theta(J)-\theta(J_{0})||.
\end{eqnarray*}</span>
Observe that <span class="math-container">$\theta(J)-\theta(J_{0})=\sum_{i\in I_{0}\cap(J\setminus J_{0})}x_{i}$</span>.
Hence,
<span class="math-container">\begin{eqnarray*}
& & ||\theta(J)-\theta(J_{0})||^{2}\\
& = & \sum_{i\in I_{0}\cap(J\setminus J_{0})}||x_{i}||^{2}\\
& \leq & \sum_{k=N+1}^{\infty}||x_{i_{k}}||^{2}\\
& \leq & \varepsilon^{2}.
\end{eqnarray*}</span>
It is now clear that <span class="math-container">$||\theta(J)-x||\leq2\varepsilon$</span>. That is,
the net <span class="math-container">$(\mathcal{C},\subseteq,\theta)$</span> converges to <span class="math-container">$x$</span>.</p>
<hr />
<hr />
<p>An important application to orthonormal bases and Fourier expansions:
Let <span class="math-container">$\{e_{i}\mid i\in I\}$</span> be an orthonormal basis for <span class="math-container">$H$</span>. For each
<span class="math-container">$i\in I$</span>, let <span class="math-container">$H_{i}=\{\alpha e_{i}\mid\alpha\in\mathbb{R}\}$</span>. Clearly
the <span class="math-container">$H_{i}$</span> are mutually orthogonal closed subspaces of <span class="math-container">$H$</span>. Let <span class="math-container">$x\in H$</span>.
Define <span class="math-container">$\alpha_{i}=\langle x,e_{i}\rangle$</span>. For any finite subset
<span class="math-container">$J\subseteq I$</span>, observe that <span class="math-container">$x=(x-\sum_{j\in J}\alpha_{j}e_{j})+\sum_{j\in J}\alpha_{j}e_{j}$</span>
and <span class="math-container">$(x-\sum_{j\in J}\alpha_{j}e_{j})$</span> is orthogonal to <span class="math-container">$\sum_{j\in J}\alpha_{j}e_{j}$</span>.
Therefore
<span class="math-container">\begin{eqnarray*}
||x||^{2} & = & ||x-\sum_{j\in J}\alpha_{j}e_{j}||^{2}+||\sum_{j\in J}\alpha_{j}e_{j}||^{2}\\
& \geq & ||\sum_{j\in J}\alpha_{j}e_{j}||^{2}\\
& = & \sum_{j\in J}\alpha_{j}^{2}.
\end{eqnarray*}</span>
Since <span class="math-container">$J$</span> is arbitrary, it follows that <span class="math-container">$\sum_{i\in I}||\alpha_{i}e_{i}||^{2}\leq||x||^{2}<\infty$</span>.
(Actually equality holds, but we do not need this.) By the above result,
<span class="math-container">$\sum_{i\in I}\alpha_{i}e_{i}$</span> converges to <span class="math-container">$y$</span>, for some <span class="math-container">$y\in H$</span>.
From the construction of <span class="math-container">$y$</span>, we can prove that, for each <span class="math-container">$i\in I$</span>,
<span class="math-container">$\langle y,e_{i}\rangle=\alpha_{i}=\langle x,e_{i}\rangle$</span>. Hence,
<span class="math-container">$\langle x-y,e_{i}\rangle=0$</span> for each <span class="math-container">$i$</span>. Since <span class="math-container">$\{e_{i}\mid i\in I\}$</span>
is a maximal orthonormal set, it follows that <span class="math-container">$x-y=0.$</span> That is, <span class="math-container">$x=\sum_{i\in I}\alpha_{i}e_{i}$</span>.</p>
|
3,745,097 | <p>In my general topology textbook there is the following exercise:</p>
<blockquote>
<p>If <span class="math-container">$F$</span> is a non-empty countable subset of <span class="math-container">$\mathbb R$</span>, prove that <span class="math-container">$F$</span> is not an open set, but that <span class="math-container">$F$</span> may or may not be a closed set depending on the choice of <span class="math-container">$F$</span>.</p>
</blockquote>
<p>I already proved that <span class="math-container">$F$</span> is not open in the Euclidean topology, but why is the second part true?</p>
<p>If <span class="math-container">$F$</span> is countable then <span class="math-container">$F \sim \mathbb N$</span>. This means that we can list the elements of <span class="math-container">$F$</span>, so we can write: <span class="math-container">$F=\{f_1,...,f_k,...\}$</span></p>
<p><span class="math-container">$\mathbb R \setminus F= (-\infty, f_1) \cup \bigcup \limits _{i=1}^{\infty}(f_i,f_{i + 1})$</span></p>
<p>We have that <span class="math-container">$(-\infty, f_1) \in \tau$</span> and that every <span class="math-container">$(f_i,f_{i + 1}) \in \tau$</span>. Because the union of elements of <span class="math-container">$\tau$</span> is also an element of <span class="math-container">$\tau$</span>, we have that <span class="math-container">$(-\infty, f_1) \cup \bigcup \limits _{i=1}^{\infty}(f_i,f_{i + 1}) \in \tau$</span>, then <span class="math-container">$F$</span> is closed.</p>
<p>Is this correct, because the statement says that "may or may not be a closed set depending on the choice of <span class="math-container">$F$</span>"?</p>
| Ross Millikan | 1,827 | <p>Finite unions of closed sets are closed, but infinite unions need not be. Look for a set of points that approaches a limit. If that limit is not in the set, the set is not closed.</p>
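To make the "may or may not" concrete, here is a pair of standard examples (added for illustration; the choice of sets is mine):

```latex
% Two countable subsets of \mathbb{R} realizing both possibilities:
% F_1 = \mathbb{Z} is closed: its complement \bigcup_{n} (n, n+1) is open.
% F_2 = \{1/n : n \in \mathbb{N}\} is not closed: 0 is a limit point, 0 \notin F_2.
F_1 = \mathbb{Z},
\qquad
F_2 = \Bigl\{ \tfrac{1}{n} : n \in \mathbb{N} \Bigr\}
```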
|
3,962,058 | <p>Let <span class="math-container">$A$</span> be a commutative ring with identity, and <span class="math-container">$P$</span> a prime ideal of <span class="math-container">$A$</span>. I want to show that <span class="math-container">$\lim_{f \notin P} A_{f} = A_{P}$</span>. <span class="math-container">$A_f$</span> is the localization of <span class="math-container">$A$</span> at <span class="math-container">$\{1\} \cup \{f^{n}: n \in \mathbb N\}$</span>, and <span class="math-container">$A_P$</span> is the localization of <span class="math-container">$A$</span> at <span class="math-container">$A \setminus P$</span>. There is a natural map <span class="math-container">$A_{f} \to A_P$</span> if <span class="math-container">$f$</span> is not in <span class="math-container">$P$</span>, so by the universal property of the direct limit, we get a map <span class="math-container">$\Phi : \lim_{f \notin P} A_{f} \to A_{P}$</span>. Then it remains to show that this canonical map is an isomorphism. Surjectivity can be proven just using the universal property without knowing what <span class="math-container">$\Phi$</span> looks like. However, I cannot prove the injectivity using the universal property. It seems that one has to use the explicit form of <span class="math-container">$\Phi$</span> in order to achieve that.</p>
| Joshua P. Swanson | 86,777 | <p>It seems to me you're assuming the direct limit exists, but justifying that requires some sort of explicit construction anyway, which you're trying to avoid.</p>
<p>I think what you really what you want to do is to show that <span class="math-container">$A_P$</span> (with the system of natural maps <span class="math-container">$\phi_f \colon A_f \to A_P$</span>) satisfies the universal property in question, using only the universal property of localization.</p>
<p>For that, suppose we have <span class="math-container">$Y$</span> with natural maps <span class="math-container">$\psi_f \colon A_f \to Y$</span>. We need a unique map <span class="math-container">$u \colon A_P \to Y$</span> where <span class="math-container">$\psi_f = u \circ \phi_f$</span>. Fix <span class="math-container">$f \not\in P$</span>. For all <span class="math-container">$g \not\in P$</span>, the map <span class="math-container">$\psi_f \colon A_f \to Y$</span> factors through the map <span class="math-container">$A_f \to A_{fg}$</span>, and <span class="math-container">$g$</span> is a unit in <span class="math-container">$A_{fg}$</span>, so <span class="math-container">$\psi_f(g)$</span> is a unit in <span class="math-container">$Y$</span>. Hence there exists a unique map <span class="math-container">$u_f \colon A_P \to Y$</span> such that <span class="math-container">$\psi_f = u_f \circ \phi_f$</span>.</p>
<p>All that's left is to show <span class="math-container">$u_f$</span> is independent of <span class="math-container">$f$</span>, which is essentially formal. Write <span class="math-container">$\alpha_{f,g} \colon A_f \to A_g$</span> (when this makes sense). For all <span class="math-container">$f, g \not\in P$</span>,
<span class="math-container">$$\psi_f = \psi_{fg} \circ \alpha_{f, fg} = u_{fg} \circ \phi_{fg} \circ \alpha_{f, fg} = u_{fg} \circ \phi_f.$$</span>
By the uniqueness of <span class="math-container">$u_f$</span>, we have <span class="math-container">$u_{fg} = u_f$</span>. Hence <span class="math-container">$u_f = u_{fg} = u_{gf} = u_g$</span>.</p>
|
1,527,675 | <p>I'm trying to prove that the subset $H$ of $G$, together with $\circ$, is also a group, i.e. a subgroup, and I'm stuck in the proof that $(H,\circ)$ is associative. </p>
| Paul Orland | 42,566 | <p>Let $a$, $b$, and $c$ be in $H$. Then since $H$ is a subset of $G$, we know $a$, $b$, and $c$ are in $G$ and satisfy the associative law: $a\circ(b\circ c) = (a\circ b)\circ c$. Therefore $\circ$ is associative for $H$.</p>
<p>Now, not every subset $H$ of $G$ is a subgroup, but your question seems to imply you've got the other criteria figured out.</p>
|
1,527,675 | <p>I'm trying to prove that the subset $H$ of $G$, together with $\circ$, is also a group, i.e. a subgroup, and I'm stuck in the proof that $(H,\circ)$ is associative. </p>
| lhf | 589 | <p>By definition, a subset $H$ of a group $G$ is a subgroup when $H$ is a group using the operation of $G$.</p>
<p>In principle, this would require you to check all group properties for $H$:
$H$ is closed under the operation, there is a neutral element, there exists an inverse for each element in $H$, and that the operation is associative.</p>
<p>But since the operation of $H$ is the same as the operation of $G$, just restricted to $H$, it inherits its properties and you only have to check that:</p>
<ul>
<li><p>$H$ is closed under the operation</p></li>
<li><p>$H$ contains the neutral element of $G$ — you don't need to prove that it is a neutral element for $H$</p></li>
<li><p>$H$ contains the inverse in $G$ of each of its element — you don't need to prove that this inverse is an inverse in $H$</p></li>
<li><p>You don't need to prove that the operation is associative in $H$ because it is associative in $G$ </p></li>
</ul>
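To see the three bullet points in action, here is a tiny mechanical check (the choice of $G$, $H$, and the modulus is my own toy example):

```python
# Check the three subgroup criteria for H = {0, 3, 6, 9} inside
# G = Z/12Z under addition mod 12 (an arbitrary illustrative example).
H = {0, 3, 6, 9}

def op(a, b):
    return (a + b) % 12

closed = all(op(a, b) in H for a in H for b in H)
has_identity = 0 in H                          # neutral element of G lies in H
has_inverses = all((-a) % 12 in H for a in H)  # inverse in G of each element of H

print(closed, has_identity, has_inverses)  # True True True
```

Associativity is not on the checklist, precisely because it is inherited from $G$.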
|
2,010,329 | <p>The function under consideration is:</p>
<p>$$y = \int_{x^2}^{\sin x}\cos(t^2)\mathrm d t$$</p>
<p>The question asks to find the derivative of this function. I let $u=\sin(x)$ and then $\tfrac{\mathrm d u}{\mathrm d x}=\cos(x)$. I solved accordingly, but got the answer
$$ \cos(x)\cos(\sin^2(x))-\cos(x)\cos(x^4) $$
but the answer is given as:
$$ \cos(x)\cos(\sin^2(x))-2x\cdot\cos(x^4) $$</p>
<p>May I know where I went wrong? Is my substitution wrong in the first place?</p>
| layman | 131,740 | <p>Notice that \begin{split} \int \limits_{x^{2}}^{\sin{x}} \cos{t^{2}}\,dt &= \int \limits_{x^{2}}^{a} \cos{t^{2}}\,dt + \int \limits_{a}^{\sin{x}} \cos{t^{2}}\,dt \\ &= \underbrace{-\int \limits_{a}^{x^{2}} \cos{t^{2}}\,dt}_{A} + \underbrace{\int \limits_{a}^{\sin{x}} \cos{t^{2}}\,dt}_{B} \end{split}</p>
<p>Then $\frac{d}{dx} \left( \int \limits_{x^{2}}^{\sin{x}} \cos{t^{2}}\,dt \right ) = \underbrace{-\cos(x^{4})2x}_{\text{chain rule applied to A}} + \underbrace{\cos(\sin^{2}(x))\cos(x)}_{\text{chain rule applied to B}}$.</p>
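As a sanity check of this result (my own addition, not part of the answer), SymPy can apply the Leibniz rule symbolically:

```python
import sympy as sp

x, t = sp.symbols('x t')

# y(x) = integral from x**2 to sin(x) of cos(t**2) dt, left unevaluated
y = sp.Integral(sp.cos(t**2), (t, x**2, sp.sin(x)))

# Differentiate under the integral sign (Leibniz rule for variable limits)
dy = sp.diff(y, x).doit()

expected = sp.cos(x)*sp.cos(sp.sin(x)**2) - 2*x*sp.cos(x**4)
print(sp.simplify(dy - expected))  # 0
```

The printed difference being $0$ confirms the book's answer $\cos(x)\cos(\sin^2 x)-2x\cos(x^4)$.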
|
2,724,866 | <p>This question caught my eye from "How to integrate it" by Sean M. Stewart. I have attempted it for fun and am now stuck. </p>
<p><strong>Question</strong></p>
<p>Let $f$ and $g$ be continuous bounded functions on some interval $[a,b]$ such that $\int_a^b|f(x)-g(x)|dx=0$. Show that $\int_a^b|f(x)-g(x)|^2dx=0$.</p>
<p><strong>Attempt</strong></p>
<p>Note: $0\le |\int_a^bf(x)-g(x)dx |\le\int_a^b|f(x)-g(x)|dx=0$</p>
<p>We conclude $|\int_a^bf(x)-g(x)dx |=0$</p>
<p>Multiply both sides by $|\int_a^bf(x)-g(x)dx |$ giving $|\int_a^bf(x)-g(x)dx |^2=0$</p>
<p>This is where I'm stuck. Specifically how do I get both the square and the modulus back past the pesky $\int dx$</p>
<p>Possibly
$|\left(\int_a^bf(x)-g(x)dx\right)^2|=0$ leading to $0=|\left(\int_a^bf(x)-g(x)dx\right)^2|\le?$ </p>
<p>I feel I'm missing something with obvious with modulus but it is escaping me. Of course I could be on the wrong track altogether. </p>
<p>Side question: the afore mention book only has solutions to selected problems so if anyone knew of a full solution set somewhere that'd greatly help.</p>
<p>Thanks in advance for your help.</p>
<p><strong>update</strong></p>
<p>I now realise I had discarded $f(x)=g(x)$ because I was thinking (incorrectly) of $\int_a^b f(x)-g(x)dx=0$, whereby I imagined a situation such as $f(x)=-x$ and $g(x)=x$ over an interval such as $[-1,1]$. I have accepted the answer that includes the case for discontinuous functions but appreciate all other efforts.</p>
| Ingix | 393,096 | <p>Are you sure that the condition is for $f$ and $g$ to be continuous? In that case, if $f$ and $g$ differ at one point $x_0$, then $f-g$ has the same sign and $|f-g| > \epsilon > 0 $ in some small interval around $x_0$, so the integral can't be zero. In other words, under that condition we get the much better result that $f=g$ in $[a,b]$.</p>
<p>Not assuming continuity, we have that $|f| < M$ and $|g| < N$ for some constants $M,N$ (in $[a,b]$), so we have $|f-g| < M+N$ in that interval. This leads to $\int_a^b |f(x)-g(x)|^2dx = \int_a^b |f(x)-g(x)| \times |f(x)-g(x)|dx \le \int_a^b |f(x)-g(x)|(M+N)dx = (M+N)\int_a^b |f(x)-g(x)|dx = 0.$</p>
|
2,724,866 | <p>This question caught my eye from "How to integrate it" by Sean M. Stewart. I have attempted it for fun and am now stuck. </p>
<p><strong>Question</strong></p>
<p>Let $f$ and $g$ be continuous bounded functions on some interval $[a,b]$ such that $\int_a^b|f(x)-g(x)|dx=0$. Show that $\int_a^b|f(x)-g(x)|^2dx=0$.</p>
<p><strong>Attempt</strong></p>
<p>Note: $0\le |\int_a^bf(x)-g(x)dx |\le\int_a^b|f(x)-g(x)|dx=0$</p>
<p>We conclude $|\int_a^bf(x)-g(x)dx |=0$</p>
<p>Multiply both sides by $|\int_a^bf(x)-g(x)dx |$ giving $|\int_a^bf(x)-g(x)dx |^2=0$</p>
<p>This is where I'm stuck. Specifically how do I get both the square and the modulus back past the pesky $\int dx$</p>
<p>Possibly
$|\left(\int_a^bf(x)-g(x)dx\right)^2|=0$ leading to $0=|\left(\int_a^bf(x)-g(x)dx\right)^2|\le?$ </p>
<p>I feel I'm missing something with obvious with modulus but it is escaping me. Of course I could be on the wrong track altogether. </p>
<p>Side question: the afore mention book only has solutions to selected problems so if anyone knew of a full solution set somewhere that'd greatly help.</p>
<p>Thanks in advance for your help.</p>
<p><strong>update</strong></p>
<p>I now realise I had discarded $f(x)=g(x)$ because I was thinking (incorrectly) of $\int_a^b f(x)-g(x)dx=0$, whereby I imagined a situation such as $f(x)=-x$ and $g(x)=x$ over an interval such as $[-1,1]$. I have accepted the answer that includes the case for discontinuous functions but appreciate all other efforts.</p>
| GNUSupporter 8964民主女神 地下教會 | 290,189 | <p>Replace $f- g$ with $h$. It suffices to show that
$$\int_a^b |h|=0 \implies \int_a^b |h|^2=0 \quad \forall\, h \in C([a,b])$$
As @KingTut points out, this follows trivially from
$$\forall\, h \in C([a,b]), \int_a^b |h|=0 \implies h \equiv 0.$$
Its contrapositive form has a straightforward proof: let $h(x_0) \ne 0$ at some point $x_0 \in [a,b]$. There exists $\delta > 0$ such that $|h|$ is positive in the $\delta$-neighbourhood of $x_0$.
$$\text{i.e.} \forall\, x \in [a,b] \cap (x_0-\delta, x_0+\delta), |h(x)| > 0$$
$$\therefore \int_a^b |h| \ge \int_{[a,b] \cap (x_0-\delta, x_0+\delta)} |h| > 0$$</p>
<hr>
<p>In fact, you can replace "continuous" with "measurable". The given condition shows that $f = g$ a.e. in $[a,b]$, so $|f-g|^2 = 0$ a.e. in $[a,b]$.</p>
<hr>
<p>(Addded in response to OP's edit)</p>
<p>In your example $\int_{-1}^1 (f-g) = \int_{-1}^1 (-2x) dx = 0$, but $\int_{-1}^1 |f-g| = \int_{-1}^1 |-2x| dx = 4 \int_0^1 x\, dx = 2 > 0$, so the condition $\int_a^b|f(x)-g(x)|dx=0$ is <em>not</em> satisfied. As a result, $\int_a^b|f(x)-g(x)|^2dx=\int_{-1}^1 |-2x|^2 dx=\frac83 > 0$.</p>
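For completeness, a symbolic check of these three integrals (my own verification sketch using SymPy):

```python
import sympy as sp

x = sp.symbols('x', real=True)
f, g = -x, x  # the example from the update: f(x) = -x, g(x) = x on [-1, 1]

signed = sp.integrate(f - g, (x, -1, 1))          # 0 by cancellation, not equality
absolute = sp.integrate(sp.Abs(f - g), (x, -1, 1))
squared = sp.integrate((f - g)**2, (x, -1, 1))

print(signed, absolute, squared)  # 0 2 8/3
```

The signed integral vanishing while the absolute one does not is exactly the distinction the update is about.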
|
4,180,344 | <p>I started learning from Algebra by Serge Lang. In page 5, he presented an equation<br><i></p>
<blockquote>
<p>Let <span class="math-container">$G$</span> be a commutative monoid, and <span class="math-container">$x_1,\ldots,x_n$</span> elements of <span class="math-container">$G$</span>. Let <span class="math-container">$\psi$</span> be a bijection of the set of integers <span class="math-container">$(1,\ldots,n)$</span> onto itself. Then</i>
<span class="math-container">$$\prod_{\nu = 1}^n x_{\psi(\nu)} = \prod_{\nu=1}^n x_\nu$$</span></p>
</blockquote>
<p>In this equation, if the mapping is <span class="math-container">$\psi(\nu) = \nu $</span>, the product comes out to the same value.
I don't understand why <span class="math-container">$x_{\psi(\nu)}$</span> bothers to index with a mapping, rather than a number.</p>
| Bernard | 202,857 | <p><span class="math-container">$\psi(\nu)$</span> is not a mapping, but the value of the map <span class="math-container">$\psi $</span> at <span class="math-container">$\nu$</span>. In other words, the <span class="math-container">$n$</span>-uple <span class="math-container">$(x_{\psi(1)},x_{\psi(2)},\dots,x_{\psi(n)})\:$</span> is the <span class="math-container">$n$</span>-uple <span class="math-container">$(x_1, x_2,\dots,x_n)$</span> enumerated in <em>another order</em>.</p>
<p>For instance, suppose <span class="math-container">$n=5$</span> and <span class="math-container">$\psi$</span> is the cycle <span class="math-container">$(1\,3\,4\,2\,5)$</span>, then the product
<span class="math-container">$$x_1\, x_2\, x_3\, x_4\, x_5 \enspace\text{becomes the product}\enspace x_3\,x_5\,x_4\, x_2\, x_1. $$</span></p>
|
189,308 | <p>I have this problem: Find integration limits and compute the following integral.</p>
<p>$$\iiint_D(1-z)\,dx\,dy\,dz \\ D = \{\;(x, y, z) \in R^3\;\ |\;\; x^2 + y^2 + z^2 \le a^2, z\gt0\;\}$$</p>
<p>I can compute this as an indefinite integral but finding integration limits beats me. As indefinite integral the result looks like this (hopefully without any careless mistakes):</p>
<p>$$\iiint(1-z)\,dx\, dy\, dz \\ = \iint(x(1-z) + C_x)\,dy\, dz\\ = \iint (x - xz + C_x)\,dy\, dz \\ = \int (xy - xyz + yC_x + C_y)\,dz\\ = xyz - \frac{xyz^2}{2} + yzC_x + zC_y + C_z$$</p>
| Santosh Linkha | 2,199 | <p>Change the limits as
$$ 8*\int_0^a \int_0^{\sqrt{a^2 - x^2}} \int_0^{\sqrt{a^2 - x^2 - y^2}} (1 - z) \,dz \,dy \,dx $$
The analogous integral with integrand $1$ gives the volume of the sphere. Note that it is easy to remember the limits.
$$ 8*\int_0^a \int_0^{\sqrt{a^2 - x^2}} \int_0^{\sqrt{a^2 - x^2 - y^2}} 1 \cdot \,dz \,dy \,dx = {4 \over 3} \pi a^3$$</p>
<p><strong>EDIT::</strong> for $z > 0$ we have $$ 4*\int_0^a \int_0^{\sqrt{a^2 - x^2}} \int_0^{\sqrt{a^2 - x^2 - y^2}} (1 - z) \cdot \,dz \,dy \,dx $$</p>
<p>More exactly, the limits of integration are
$$ \int_{-a}^a \int_{-\sqrt{a^2 - x^2}}^{\sqrt{a^2 - x^2}} \int_0^{\sqrt{a^2 - x^2 - y^2}} (1 - z) \cdot \,dz \,dy \,dx $$</p>
<p>This can easily be done by changing into <a href="http://en.wikipedia.org/wiki/Spherical_coordinate_system" rel="nofollow">spherical polar coordinates</a>,
with $z = r \cos \theta$ and $\theta$ ranging from $0$ to ${\pi \over 2}$.</p>
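As a cross-check of the spherical-coordinates route (my own verification, assuming the standard volume element $r^2\sin\theta$), the integral can be computed symbolically:

```python
import sympy as sp

r, theta, phi, a = sp.symbols('r theta phi a', positive=True)

# Upper half ball of radius a in spherical coordinates:
# z = r*cos(theta), volume element r**2*sin(theta) dr dtheta dphi
I_spherical = sp.integrate(
    (1 - r*sp.cos(theta)) * r**2 * sp.sin(theta),
    (r, 0, a), (theta, 0, sp.pi/2), (phi, 0, 2*sp.pi),
)

# Expected: (volume of the half ball) minus (integral of z over it)
expected = sp.Rational(2, 3)*sp.pi*a**3 - sp.pi*a**4/4
print(sp.simplify(I_spherical - expected))  # 0
```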
|
3,379,756 | <p>Why does the closed form of the summation
<span class="math-container">$$\sum_{i=0}^{n-1} 1$$</span> equal <span class="math-container">$n$</span> instead of <span class="math-container">$n-1$</span>?</p>
| user | 505,767 | <p>We have that</p>
<p><span class="math-container">$$\sum_{i=0}^{n-1}1= \overbrace{{1+1+1+\cdots+1}}^\color{red}{\text{n terms from 0 to n-1}}=n$$</span></p>
|
2,322,481 | <p>Look at this limit. I think this equality is true, but I'm not sure.</p>
<p>$$\lim_{k\to\infty}\frac{\sum_{n=1}^{k} 2^{2\times3^{n}}}{2^{2\times3^{k}}}=1$$
For example, for $k=3$ the ratio is about $1.000000000014$</p>
<blockquote>
<p>Is this limit <strong>mathematically correct</strong>?</p>
</blockquote>
| José Carlos Santos | 446,262 | <p>If <span class="math-container">$n<k$</span>, then<span class="math-container">\begin{align}\frac{2^{2\times3^n}}{2^{2\times3^k}}&=4^{3^n-3^k}\\&\leqslant4^{3^{k-1}-3^k}\\&=4^{-2\times3^{k-1}}\\&\leqslant4^{-(k-1)},\end{align}</span>because <span class="math-container">$2\times3^{k-1}\geqslant k-1$</span>. But then<span class="math-container">$$1\leqslant\frac{\sum_{n=1}^k2^{2\times3^n}}{2^{2\times3^k}}\leqslant\frac{k-1}{4^{k-1}}+1$$</span>So, yes, your limit is equal to <span class="math-container">$1$</span>.</p>
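A quick exact computation with rational arithmetic (my own check) illustrates how fast the ratio approaches $1$:

```python
from fractions import Fraction

def ratio(k):
    """Exact value of (sum_{n=1}^{k} 2**(2*3**n)) / 2**(2*3**k)."""
    numerator = sum(2**(2 * 3**n) for n in range(1, k + 1))
    return Fraction(numerator, 2**(2 * 3**k))

for k in (1, 2, 3):
    print(k, float(ratio(k)))
# the ratios shrink toward 1 extremely fast, matching the bound (k-1)/4**(k-1) + 1
```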
|
3,407,474 | <p>I can understand that the set <span class="math-container">$M_2(2\mathbb{Z})$</span> of <span class="math-container">$2 \times 2$</span>
matrices with even integer entries is an infinite non-commutative ring. But why doesn't it have a unity? I think <span class="math-container">$I_{2\times2}$</span> is a unity for this ring.
Where is my misunderstanding about <span class="math-container">$I_{2\times2}$</span> being a unity?</p>
| John Hughes | 114,036 | <p>The <span class="math-container">$2\times 2$</span> identity has entries <span class="math-container">$1,0,0,1$</span>. Are all of those even numbers? </p>
|
3,407,474 | <p>I can understand that the set <span class="math-container">$M_2(2\mathbb{Z})$</span> of <span class="math-container">$2 \times 2$</span>
matrices with even integer entries is an infinite non-commutative ring. But why doesn't it have a unity? I think <span class="math-container">$I_{2\times2}$</span> is a unity for this ring.
Where is my misunderstanding about <span class="math-container">$I_{2\times2}$</span> being a unity?</p>
| Chris Leary | 2,933 | <p>If your matrix ring had an identity element <span class="math-container">$I,$</span> then the square of its determinant would have to be <span class="math-container">$1$</span> (or <span class="math-container">$0$</span>), since <span class="math-container">$I^2=I.$</span> The determinant cannot be <span class="math-container">$0$</span> since <span class="math-container">$I$</span> is invertible (or, non-singular, if you prefer). But the determinant of a <span class="math-container">$2 \times 2$</span> matrix over <span class="math-container">$2\mathbb{Z}$</span> is even (<span class="math-container">$ad-bc$</span> is a difference of products of even numbers). So, you cannot have an identity element in this ring.</p>
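A brute-force companion to the determinant argument (my own sketch; the window of entries searched is arbitrary): no matrix with small even entries even acts as a left identity on the single matrix $2I$.

```python
import itertools
import numpy as np

# For E to be a unity of M_2(2Z), we would need E @ A == A for every A.
# Testing only A = [[2, 0], [0, 2]] already forces E to be the identity
# matrix, whose entries 1, 0, 0, 1 are not all even.
A = np.array([[2, 0], [0, 2]])
entries = range(-6, 7, 2)  # even entries only, in an arbitrary small window

found = [E for E in (np.array(t).reshape(2, 2)
                     for t in itertools.product(entries, repeat=4))
         if np.array_equal(E @ A, A)]
print(found)  # [] -- no even-entried matrix acts as an identity on A
```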
|
1,424,297 | <p>Why is $a\overline bc + ab = ab + ac$? I think it has something to do with the rule $a + \overline a = 1$, right?</p>
| André Nicolas | 6,312 | <p>Multiply top and (missing) bottom by $1+\cos(1/n)$. After using the identity $1-\cos^2 t=\sin^2 t$, we get that we want
$$\lim_{n\to\infty}\left(\frac{1}{1+\cos(1/n)}\cdot \frac{\sin^2(1/n)}{(1/n)^2}\right).$$
The rest should follow from a limit you know. </p>
|
1,424,297 | <p>Why is $a\overline bc + ab = ab + ac$? I think it has something to do with the rule $a + \overline a = 1$, right?</p>
| Hagen von Eitzen | 39,174 | <p>Using $$\cos x-\cos y =2\sin\frac{x+y}{2}\sin\frac{y-x}{2}$$
we find with $x=0$ and $y=\frac1n$
$$ 1-\cos\frac1n=2\sin^2\frac1{2n}$$
Now if you know $\lim_{x\to 0}\frac{\sin x}x=1$, you also get $\lim_{n\to\infty}2n\sin\frac1{2n}=1$ so ultimately
$$\lim_{n\to\infty} n^2\left(1-\cos \frac1n\right) =\lim_{n\to\infty}2n^2\sin^2\frac1{2n}=\frac12\left(\lim_{n\to\infty}2n\sin\frac1{2n}\right)^2=\frac12.$$</p>
|
1,424,297 | <p>Why is $a\overline bc + ab = ab + ac$? I think it has something to do with the rule $a + \overline a = 1$, right?</p>
| Bernard | 202,857 | <p><em>With equivalents:</em></p>
<p>It's a classic fact that $\;1-\cos\dfrac1n\sim_\infty\dfrac1{2n^2}$ (for a proof, see the order-$2$ Taylor expansion of $\cos$), hence $$n^2\Bigl(1-\cos \frac1n\Bigr)\sim_\infty\frac{n^2}{2n^2}=\frac12.$$</p>
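A quick numerical check of this limit (my own addition):

```python
import math

# n**2 * (1 - cos(1/n)) should approach 1/2 as n grows
for n in (10, 100, 1000, 10000):
    print(n, n**2 * (1 - math.cos(1/n)))
```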
|
2,597,126 | <p>Given the following problem:</p>
<blockquote>
<p>Applying the division algorithm, prove that all whole numbers that are at the same time a square and a cube have the form $7k$ or $7k+1$.</p>
</blockquote>
<p>I am unable to interpret what it is asking of me and therefore I am unable to provide any solutions. Could someone explain what exactly the problem is asking and how to solve it? </p>
| Dr. Sonnhard Graubner | 175,066 | <p>A whole number that is both a square and a cube is a sixth power, so it suffices to check sixth powers. We have $$x\equiv 0,1,2,3,4,5,6\mod 7,$$ and then
$$x^6\equiv 0,1,1,1,1,1,1 \mod 7;$$ you should check this. Hence every sixth power has the form $7k$ or $7k+1$.</p>
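A short exhaustive check of this claim (my own addition; the range of the spot-check is arbitrary):

```python
# A whole number that is both a square and a cube is a sixth power,
# so it suffices to check x**6 mod 7 over all residues x mod 7.
residues = {x**6 % 7 for x in range(7)}
print(residues)  # {0, 1}

# Spot-check on actual square-and-cube numbers: 1, 64, 729, 4096, ...
sixth_powers = [m**6 for m in range(1, 50)]
print(all(n % 7 in (0, 1) for n in sixth_powers))  # True
```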
|
3,571,637 | <p>I was reading <em>Linear Algebra</em> by Hoffman and Kunze, and encountered this in Theorem 8 of the chapter on Coordinates; the theorem is stated below: </p>
<p><a href="https://i.stack.imgur.com/6xFlJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6xFlJ.png" alt="coo"></a></p>
<p>What I do not get is the bottom line (uniqueness), where it says </p>
<p><span class="math-container">$"... it\: is\: clear\: that $</span></p>
<p><span class="math-container">$$ \alpha'_{j}=\sum_{i=1}^{n} P_{ij}\alpha_{i}." \tag{a}$$</span></p>
<p>Is there any easy way to see why it is clear? I tried many ways, and what I found was that we start from scratch (starting the same way as the proof, with <span class="math-container">$\scr \overline{B}$</span>), and then we find an invertible matrix, say <span class="math-container">$Q$</span>, which we don't know to be equal to <span class="math-container">$P$</span> or not, such that property (a) above holds with <span class="math-container">$Q_{ij}$</span> instead of <span class="math-container">$P_{ij}$</span>. Then what we are left to show is that <span class="math-container">$P$</span> = <span class="math-container">$Q$</span>, and we currently have:</p>
<p><span class="math-container">$$ x_{i}=\sum_{j=1}^{n} P_{ij} x'_{j}\tag{from (i)}$$</span> </p>
<p>and,</p>
<p><span class="math-container">$$ x_{i}=\sum_{j=1}^{n} Q_{ij} x'_{j}$$</span></p>
<p>So together, </p>
<p><span class="math-container">$$ \sum_{j=1}^{n} Q_{ij} x'_{j}=\sum_{j=1}^{n} P_{ij} x'_{j}$$</span></p>
<p><span class="math-container">$$ \sum_{j=1}^{n} (Q_{ij} - P_{ij}) x'_{j}= 0$$</span></p>
<p>The way I showed that this implies <span class="math-container">$P$</span> = <span class="math-container">$Q$</span> is by assuming that <span class="math-container">$P - Q \neq 0^{n\times n}$</span> and finding a contradiction. Suppose <span class="math-container">$A := P - Q \neq 0^{n\times n}$</span>; then we can choose a row <span class="math-container">$r$</span> whose <span class="math-container">$k$</span>-th entry is non-zero. Plugging in <span class="math-container">$0$</span> for every <span class="math-container">$x'_{j}$</span> other than the <span class="math-container">$k$</span>-th, we are left with something non-zero equal to <span class="math-container">$0$</span>, which is a contradiction, so that <span class="math-container">$A = 0^{n\times n}$</span> and <span class="math-container">$P = Q$</span>; and since <span class="math-container">$(a)$</span> holds for <span class="math-container">$Q$</span> and <span class="math-container">$P = Q$</span>, it follows that <span class="math-container">$(a)$</span> holds for <span class="math-container">$P$</span>. </p>
<p>I'm pretty sure that the proof is correct, since we can plug in various values for <span class="math-container">$x'_{1}, ... , x'_{n} \in F$</span>; since <span class="math-container">$F$</span> is a field, it certainly contains <span class="math-container">$0$</span> and <span class="math-container">$1$</span>. But this proof seems lengthy and is not as clear as Hoffman and Kunze make it sound. I think I'm missing something here, and would be very thankful for a good explanation. Thanks!</p>
| justadzr | 672,528 | <p><strong>Theorem 8</strong>: this theorem actually starts with the set <span class="math-container">$\mathcal B'$</span> which is supposed to be our aimed basis. Therefore, Kunze says: let's define a set of vectors <span class="math-container">$\mathcal B'$</span> where its elements are defined by
<span class="math-container">$$\alpha'_j=\sum_{i=1}^n P_{ij}\alpha_i$$</span> and then prove that <span class="math-container">$\mathcal{B}'$</span> is a basis for <span class="math-container">$V$</span>. So the goal is to produce a set with <span class="math-container">$P$</span> and to prove that it is a basis.</p>
<hr>
<p><strong>Uniqueness</strong>: I believe Kunze's Theorem 7 in this chapter proves that if <span class="math-container">$\mathcal{B}$</span> and <span class="math-container">$\mathcal{B}'$</span> are two bases for vector space <span class="math-container">$V$</span> over <span class="math-container">$F$</span>, there exists a unique <span class="math-container">$n\times n$</span> matrix <span class="math-container">$P$</span> which suffices
<span class="math-container">$$[\alpha]_\mathcal{B}=P[\alpha]_{\mathcal{B}'}$$</span>
and
<span class="math-container">$$[\alpha]_{\mathcal{B}'}=P^{-1}[\alpha]_{\mathcal{B}}$$</span>
The uniqueness of <span class="math-container">$P$</span> could be simply proved by regarding it as a collection of unique scalars. <strong>Since every vector in basis <span class="math-container">$\mathcal B'$</span> can be written uniquely as a linear combination of vectors in basis <span class="math-container">$\mathcal B$</span>, the coefficients are unique in the following picture</strong> (and thus the uniqueness of <span class="math-container">$P$</span> followers immediately):
<a href="https://i.stack.imgur.com/gOWHW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gOWHW.png" alt="Screenshot"></a></p>
|
740,323 | <p>This might be a little open-ended, but I was wondering: are there any natural connections between geometry and the prime numbers? Put differently, are there any specific topics in either field which might entertain relatively close connections?</p>
<p><strong>PS:</strong> feel free to interpret the term <em>natural</em> in a broad sense; I only included it to avoid answers along the lines of "take [fact about the primes] $\to$ [string of connections between various areas of mathematics] $\to$ [geometry!]"</p>
| Jack M | 30,481 | <p>The <a href="http://en.wikipedia.org/wiki/Gauss-Wantzel_theorem" rel="noreferrer">Gauss-Wantzel theorem</a> on constructible polygons immediately springs to mind. This states that a regular $n$-gon is constructible with a straightedge and compass iff $n$ is the product of a power of $2$ and a collection of distinct <a href="http://en.wikipedia.org/wiki/Fermat_prime" rel="noreferrer">Fermat primes</a>.</p>
<p>The power of $2$ is only there because if you can construct an $n$-gon, you can easily construct a $2n$-gon by constructing an isosceles triangle on each side of the $n$-gon. Doing this repeatedly, you can get a $2^mn$-gon. So really, this is about the nature of Fermat primes.</p>
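<p>To make the criterion concrete, here is a small Python sketch (my addition, not part of the original answer) that tests the Gauss–Wantzel condition by brute force; the function name and the hard-coded list of the five known Fermat primes are my own choices.</p>

```python
KNOWN_FERMAT_PRIMES = [3, 5, 17, 257, 65537]  # the only Fermat primes known

def is_constructible(n):
    """True iff n is a power of 2 times a product of distinct known Fermat primes."""
    while n % 2 == 0:              # strip the power of 2
        n //= 2
    for p in KNOWN_FERMAT_PRIMES:  # each Fermat prime may be used at most once
        if n % p == 0:
            n //= p
    return n == 1

constructible = [n for n in range(3, 21) if is_constructible(n)]
```

<p>For $3 \le n \le 20$ this reproduces the classical list $3, 4, 5, 6, 8, 10, 12, 15, 16, 17, 20$.</p>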
|
740,323 | <p>This might be a little open-ended, but I was wondering: are there any natural connections between geometry and the prime numbers? Put differently, are there any specific topics in either field which might entertain relatively close connections?</p>
<p><strong>PS:</strong> feel free to interpret the term <em>natural</em> in a broad sense; I only included it to avoid answers along the lines of "take [fact about the primes] $\to$ [string of connections between various areas of mathematics] $\to$ [geometry!]"</p>
| chris williams | 219,521 | <p>Constructing polygons with a prime number of sides is more complex than constructing those with a composite number of sides; a polygon with a composite number of sides can easily be constructed by repeatedly splitting each arc in half.
For example, a circle in half = 2,
a circle in quarters = 4,
and quarters in half again = 8 (notice 6 is missing in this sequence, because a circle is divided into six by its own radius; an example would be sacred geometry, i.e. creating the flower of life).</p>
<p>Here is my layout of prime numbers...
01 03 05 07 11 13 17 19... 02 06 10 14... 04 09 15... 08 12 20... 16 18...</p>
<p>Notice the first numbers going across are prime.
Notice the first numbers going down are composite and are easy to use when constructing polygons (which is how this relates to geometry); search for "polygon" for further understanding.</p>
<p>I've noticed that prime numbers (apart from $2$) are never even, though I have not studied this further; I only checked up to $20$.</p>
|
838,631 | <p>This question may be far too easy for this site but I always seem to get stuck when it comes to the triangle inequality.</p>
<p>For example, I am trying to prove that differentiability implies Lipschitz continuity. Ok, I don't really have a hard time with this except for the step when using the triangle inequality.</p>
<p>Let $| x -x_0| < \epsilon$. Suppose $f$ is differentiable at $x_0$. Then</p>
<p>$ |f(x)-f(x_0)-f'(x_0)(x-x_0) | < | x- x_0|$</p>
<p>The next step is where I am not sure how the triangle inequality allows this,</p>
<p>$|f(x)-f(x_0)| \le |f'(x_0)(x-x_0)|+|x-x_0|$</p>
<p>Why am I allowed to add the $|f'(x_0)(x-x_0)|$ to the right hand side??</p>
<p>Sorry that this question is most likely trivial but I always get stuck with this operation. </p>
| Surb | 154,545 | <p>Seems that the <a href="http://en.wikipedia.org/wiki/Triangle_inequality#Reverse_triangle_inequality" rel="nofollow">reverse triangle inequality</a> is used in your case: $| \ |a|-|b|\ | \leq |a-b|$ for any real numbers $a,b$.</p>
<p>Applying this in the setting of your problem:</p>
<p>$$ |f(x)-f(x_0)|-|f'(x_0)(x-x_0)| \leq \left| |f(x)-f(x_0)|-|f'(x_0)(x-x_0)| \right| \\ \leq |f(x)-f(x_0)-f'(x_0)(x-x_0)| \leq |x-x_0|$$</p>
<p>add $|f'(x_0)(x-x_0)|$ on both sides to recover the expression you want.</p>
|
1,981,553 | <p>When a person asks: "What is the smallest number (natural numbers) with two digits?"</p>
<p>You answer: "10".</p>
<p>But by which convention 04 is no valid 2 digit number?</p>
<p>Thanks alot in advance</p>
| Soham | 242,402 | <p>The simple answer is that a leading $0$ carries no value. So $04, 004, 0004, 000000000004$ are all the same number. Since this leading $0$ has no meaning or significance in the representation of the number, it is not counted as a digit.</p>
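<p>A tiny illustration (my addition, using Python's integer parsing, which accepts leading zeros in strings):</p>

```python
# A leading zero carries no place value, so "04" and "4" denote the same integer.
values = [int(s) for s in ("4", "04", "004", "0004")]
digits = len(str(values[1]))  # the canonical decimal form of that integer has one digit
```
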
|
369,104 | <p>How do I take an algebraic expression and construct a tree out of it?</p>
<p>Sample equation:</p>
<p>((2 + x) - (x * 3)) - ((x - 2) * (3 + y))</p>
<p>If somebody can teach me in steps, that would be really helpful!</p>
| vonbrand | 43,946 | <p>You are looking for a <a href="http://en.wikipedia.org/wiki/Abstract_syntax_tree" rel="nofollow">syntax tree</a>, and that is part of what <a href="http://en.wikipedia.org/wiki/Parsing" rel="nofollow">parsing</a> is all about. This is more suited for <a href="http://cs.stackexchange.com">http://cs.stackexchange.com</a> or perhaps <a href="http://www.stackoverflow.com">http://www.stackoverflow.com</a>. The "compiler-compilers" (parser generators) are programs that given a grammar construct programs that (essentially) create such trees. There are plenty...</p>
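<p>As a concrete illustration (not from the original answer), here is a minimal recursive-descent sketch in Python that builds such a tree for the sample expression. It assumes fully parenthesized binary operations, ignores operator precedence, and represents each node as a tuple <code>(op, left, right)</code>; all names are ad hoc.</p>

```python
def tokenize(s):
    out, i = [], 0
    while i < len(s):
        c = s[i]
        if c.isspace():
            i += 1
        elif c in "()+-*/":
            out.append(c)
            i += 1
        else:                                  # a number or a variable name
            j = i
            while j < len(s) and s[j].isalnum():
                j += 1
            out.append(s[i:j])
            i = j
    return out

def parse_primary(tokens):
    """Parse a leaf or a parenthesized group; return (node, remaining tokens)."""
    if tokens[0] == "(":
        node, rest = parse_primary(tokens[1:])
        while rest[0] != ")":                  # fold 'expr op expr' left to right
            op, (right, rest) = rest[0], parse_primary(rest[1:])
            node = (op, node, right)
        return node, rest[1:]                  # drop the closing ')'
    return tokens[0], tokens[1:]               # leaf token

def parse_expr(s):
    node, rest = parse_primary(tokenize(s))
    while rest:                                # top level may lack outer parentheses
        op, (right, rest) = rest[0], parse_primary(rest[1:])
        node = (op, node, right)
    return node

tree = parse_expr("((2 + x) - (x * 3)) - ((x - 2) * (3 + y))")
```

<p>Here <code>tree</code> comes out as <code>('-', ('-', ('+', '2', 'x'), ('*', 'x', '3')), ('*', ('-', 'x', '2'), ('+', '3', 'y')))</code>, which is exactly the syntax tree of the sample equation.</p>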
|
3,868,523 | <blockquote>
<p>Given the bases <em>a</em> = {(0,2),(2,1)} and <em>b</em> = {(1,0),(1,1)} compute the change of coordinate matrix from basis <em>a</em> to <em>b</em>.<br />
Then, given the coordinates of <em>z</em> with respect to the basis <em>a</em> as (2,2), use the previous question to compute the coordinates of <em>z</em> with respect to the basis <em>b</em>.</p>
</blockquote>
<p>The way I understood the first part was that I have to multiply the vectors of <em>b</em> by the coordinates of the vectors of <em>a</em> to compute the change of coordinate matrix from <em>a</em> to <em>b</em>. This gives me the following matrix: <span class="math-container">\begin{bmatrix}2&3\\2&1\end{bmatrix}</span></p>
<p>For the second part, I then have to take the inverse of the matrix I got from above and then multiply it by the coordinates of <em>z</em> to get the coordinates of <em>z</em> with respect to the basis <em>b</em>. The inverse of the matrix is: <span class="math-container">\begin{bmatrix}-1/4&3/4\\1/2&-1/2\end{bmatrix}</span> which I then multiply by (2,2) to get the coordinates of <em>z</em> with respect to basis <em>b</em></p>
<p>I am not sure that is correct however.</p>
| Community | -1 | <p>You need to change the coordinates from the basis <span class="math-container">$a$</span> to the canonical basis (multiplying by <span class="math-container">$A$</span>), then change from the canonical basis to the base <span class="math-container">$b$</span> (multiplying by <span class="math-container">$B^{-1}$</span> on the left). Finally you apply to <span class="math-container">$z$</span>,</p>
<p><span class="math-container">$$B^{-1}Az.$$</span></p>
<hr />
<p>As a shortcut, you can compute <span class="math-container">$Az$</span> and solve the system</p>
<p><span class="math-container">$$Bx=Az,$$</span> this spares computation (but does not give you the matrix).</p>
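<p>A quick numerical check of this recipe on the concrete bases from the question (my addition; the little $2\times 2$ helpers are ad hoc, written in exact rational arithmetic):</p>

```python
from fractions import Fraction as F

A = [[F(0), F(2)], [F(2), F(1)]]   # columns are the basis-a vectors (0,2) and (2,1)
B = [[F(1), F(1)], [F(0), F(1)]]   # columns are the basis-b vectors (1,0) and (1,1)

def matvec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

def inv2(M):
    d = M[0][0] * M[1][1] - M[0][1] * M[1][0]   # determinant
    return [[M[1][1] / d, -M[0][1] / d],
            [-M[1][0] / d, M[0][0] / d]]

z_a = [F(2), F(2)]          # coordinates of z with respect to basis a
z = matvec(A, z_a)          # z in the canonical basis
z_b = matvec(inv2(B), z)    # coordinates of z with respect to basis b
```

<p>This gives $z$ in the canonical basis as $(4,6)$ and the coordinates with respect to $b$ as $(-2,6)$; indeed $-2\,(1,0)+6\,(1,1)=(4,6)$.</p>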
|
1,470,378 | <p>I am not sure i formatted it right but I was studying for calculus and came across a problem I couldn't compute. </p>
<p>$$\int \frac{x^2}{x^2-2}dx$$</p>
<p>I have not learned partial fractions yet so if this is a case where that is used, the techniques might not work for me. </p>
<p>What I have tried. to do integration by parts, and substitution. I put the denominator to the -1 power. I got 1/2ln|x^2-2|*3/2x^3 but I am certain that isn't right. </p>
| Aditya | 278,230 | <p>Use the partial fraction method.
Add and subtract $2$ in the numerator, and the integration boils down to a simple one.</p>
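<p>As a sanity check (my addition, not part of the hint): writing $x^2/(x^2-2) = 1 + 2/(x^2-2)$ and integrating gives the candidate antiderivative $F(x) = x + \tfrac{1}{\sqrt 2}\ln\left|\frac{x-\sqrt 2}{x+\sqrt 2}\right|$, which can be verified numerically by differentiating $F$ with a central difference:</p>

```python
import math

def f(x):
    """The integrand x^2 / (x^2 - 2)."""
    return x**2 / (x**2 - 2)

def F(x):
    """Candidate antiderivative obtained from x^2/(x^2-2) = 1 + 2/(x^2-2)."""
    r = math.sqrt(2)
    return x + (1 / r) * math.log(abs((x - r) / (x + r)))
```

<p>Away from the singularities at $x=\pm\sqrt 2$, the central difference of <code>F</code> reproduces <code>f</code> to high accuracy.</p>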
|
1,299,630 | <p>I've been wondering if there is any use to defining a set that is isomorphic to $\mathbb{Q}^2$ (in the same way that $\mathbb{C}$ is isomorphic to $\mathbb{R}^2$).</p>
<p>I immediately see a problem with <em>e.g.</em> Cauchy's theorem (there's no $\pi$ available) and perhaps even defining a "analytical function" concept because there are series defined in $\mathbb{Q}$ that do not converge within $\mathbb{Q}$.</p>
<p>Still, I wonder if there's a way around these difficulties.</p>
| Fallen Apart | 201,873 | <p>A continuous mapping $F:X\rightarrow Y$ induces a continuous map $F^{**}:X^{**}\rightarrow Y^{**}$ given by the formula
$$[F^{**}(x^{**})](y^*):=x^{**}(F^*(y^*)).$$
This assignment is functorial, i.e. it has two properties:</p>
<p>$(id_X)^{**}=id_{X^{**}}$ and $(G\circ F)^{**}=G^{**}\circ F^{**}.$</p>
<p>Additionally, for every normed space $X$ we have a continuous mapping $J_X:X\rightarrow X^{**}$ given by the formula $J_X(x)(x^*)=x^*(x).$ This assignment is such that for every mapping $F:X\rightarrow Y$ we have $J_{Y}\circ F=F^{**}\circ J_X,$ i.e. the following diagram commutes
\begin{array}{cc}\phantom{\dfrac{a}{b}}X & \xrightarrow{\Large J_X} & X^{**}\phantom{\dfrac{a}{b}}\\ F \downarrow && \downarrow F^{**}\\ Y & \xrightarrow{ \Large J_Y }& Y^{**}\end{array}</p>
<p>You have that $X$ and $Y$ are isomorphic and $X$ is reflexive. Hence there exists an isomorphism $F:X\rightarrow Y$, and $J_X$ is an isomorphism as well. From functoriality we get that $F^{**}$ is an isomorphism (because $(F^{-1})^{**}=(F^{**})^{-1}$), and from the diagram you get that $J_Y$ is a composition of isomorphisms, hence an isomorphism as well. This means that $Y$ is reflexive.</p>
|
4,117,377 | <p>In calculating the integral</p>
<p><span class="math-container">$$\int_{-\infty}^{\infty} \frac{\sin x}{x}dx$$</span></p>
<p>by contour integration, we use</p>
<p><span class="math-container">$$\int_{-\infty}^{\infty} \frac{\sin x}{x}dx = \int_{-\infty}^{\infty} \frac{\operatorname{Im}(e^{ix})}{x}dx =\operatorname{Im}\left(\int_{-\infty}^{\infty} \frac{e^{ix}}{x}dx \right)$$</span></p>
<p>but in the process, we have gone from an integral which is well-defined with no real singularities to one with a real singularity which in fact is just undefined as an improper integral. Therefore, in the source I am reading, we take the cauchy principal value (CPV) of the integral on the RHS instead of treating it as an improper integral. This principal value is calculated by use of the Residue Theorem.</p>
<p>My question: There are different ways to treat singularities in integrals. How do we know that this one (the CPV) will give us the correct result for <span class="math-container">$\int_{-\infty}^{\infty} \frac{\sin x}{x}dx$</span>? Of course, knowing the answer by other methods, we can compare and see it was correct, but I'm looking to understand why the reasoning is valid.</p>
<p><strong>Response to 1st answer</strong>: Simply saying that the integral converges is not enough. We need some way to know that in particular the CPV is the correct notion of integration for the exponential integral. Clearly, not any notion of integration which converges must give the correct result.</p>
<p><strong>Response to 2nd answer</strong>: The question I ask is: by what reasoning is the notion of CPV in <span class="math-container">$\int_{-\infty}^{\infty} \frac{\sin x}{x}dx =\operatorname{Im}\left(CPV\int_{-\infty}^{\infty} \frac{e^{ix}}{x}dx \right)$</span> justified. Of course the first integral is the same as an improper integral or CPV, this just doesn't answer the question.</p>
| Thusle Gadelankz | 840,795 | <p>In order to say that the Cauchy principal value agrees with the ordinary value of an improper integral you only need to know that the integral converges, which you may show using a convergence test. This is because the Cauchy principal value is simply a particularly easy way of taking the limit. The Cauchy principal value takes the limit symmetrically. If the limit exists, it means you can take it in any manner you like, for example symmetrically.</p>
|
4,117,377 | <p>In calculating the integral</p>
<p><span class="math-container">$$\int_{-\infty}^{\infty} \frac{\sin x}{x}dx$$</span></p>
<p>by contour integration, we use</p>
<p><span class="math-container">$$\int_{-\infty}^{\infty} \frac{\sin x}{x}dx = \int_{-\infty}^{\infty} \frac{\operatorname{Im}(e^{ix})}{x}dx =\operatorname{Im}\left(\int_{-\infty}^{\infty} \frac{e^{ix}}{x}dx \right)$$</span></p>
<p>but in the process, we have gone from an integral which is well-defined with no real singularities to one with a real singularity which in fact is just undefined as an improper integral. Therefore, in the source I am reading, we take the cauchy principal value (CPV) of the integral on the RHS instead of treating it as an improper integral. This principal value is calculated by use of the Residue Theorem.</p>
<p>My question: There are different ways to treat singularities in integrals. How do we know that this one (the CPV) will give us the correct result for <span class="math-container">$\int_{-\infty}^{\infty} \frac{\sin x}{x}dx$</span>? Of course, knowing the answer by other methods, we can compare and see it was correct, but I'm looking to understand why the reasoning is valid.</p>
<p><strong>Response to 1st answer</strong>: Simply saying that the integral converges is not enough. We need some way to know that in particular the CPV is the correct notion of integration for the exponential integral. Clearly, not any notion of integration which converges must give the correct result.</p>
<p><strong>Response to 2nd answer</strong>: The question I ask is: by what reasoning is the notion of CPV in <span class="math-container">$\int_{-\infty}^{\infty} \frac{\sin x}{x}dx =\operatorname{Im}\left(CPV\int_{-\infty}^{\infty} \frac{e^{ix}}{x}dx \right)$</span> justified. Of course the first integral is the same as an improper integral or CPV, this just doesn't answer the question.</p>
| Community | -1 | <p>Since this is a question about rigor, we should state what integral we are using. By (1), this integral is not Lebesgue integrable, as it is not absolutely convergent; it is, however, an improper Riemann integral, so we will use the Riemann integral.</p>
<p>Let us have a visual of the function being integrated</p>
<p><a href="https://i.stack.imgur.com/G0tOf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/G0tOf.png" alt="enter image description here" /></a></p>
<hr />
<p>When we state the purely real integral <span class="math-container">$$I_1 = \int_{-\infty}^{\infty} \frac{\sin x}{x}dx$$</span> this is short-hand for <span class="math-container">$$I_1 = \lim_{a \to -\infty} \lim_{b \to \infty} \int_{a}^{b} f(x) dx$$</span> where <span class="math-container">$$f(x) = \begin{cases}
\frac{\sin(x)}{x} & x \not = 0 \\
1 & x = 0 \\
\end{cases}$$</span></p>
<p>The convergence of this integral must be addressed. It does not converge absolutely so we should attempt to cancel parts of the integral with itself. We will show that the right side (from <span class="math-container">$2\pi$</span> to <span class="math-container">$\infty$</span>) converges by chopping it up and subtracting the negative part of each sine wave from the positive part. For each segment we have</p>
<p><span class="math-container">$$\begin{align}
&\, \left|\int_{\pi 2 k}^{\pi (2 k + 2)} \frac{\sin(x)}{x} dx\right| \\
\le&\, \left|\int_{\pi 2 k}^{\pi (2 k + 1)} \frac{\sin(x)}{x} - \frac{\sin(x)}{x + \pi} dx\right| \\
\le&\, \left|\int_{\pi 2 k}^{\pi (2 k + 1)} \frac{\sin(x)}{\pi (2 k + 1)} - \frac{\sin(x)}{\pi (2 k + 2)} dx\right| \\
\le&\, \left|\int_{\pi 2 k}^{\pi (2 k + 1)} \frac{\sin(x)}{\pi} \left[\frac{1}{2 k + 1} - \frac{1}{2 k + 2}\right] dx\right| \\
\end{align}$$</span></p>
<p>and the alternating harmonic series converges. Note that each segment is positive and their sum converges absolutely. Note that we showed the integral from <span class="math-container">$0$</span> to <span class="math-container">$\infty$</span> converges independently of the integral from <span class="math-container">$-\infty$</span> to <span class="math-container">$0$</span>.</p>
<p>The finite part from <span class="math-container">$0$</span> to <span class="math-container">$2 \pi$</span> easily converges as we always have <span class="math-container">$\sin(x)/x \le 1$</span>.</p>
<hr />
<p>When we state a contour integral like</p>
<p><span class="math-container">$$I_2 = \mathrm{p.v.} \int_\gamma \frac{e^{i z} - e^{-i z}}{2 i \cdot z} dz$$</span></p>
<p>(<span class="math-container">$\gamma$</span> denoting a line from <span class="math-container">$-\infty$</span> to <span class="math-container">$\infty$</span>).</p>
<p>the meaning (2) is that the we delete from the contour <span class="math-container">$\gamma$</span> an <span class="math-container">$\epsilon$</span> sized ball around the singularity and take the limit of <span class="math-container">$\epsilon$</span> to 0.</p>
<p>So <span class="math-container">$$I_2 = \lim_{\epsilon \to 0} \lim_{a \to -\infty} \lim_{b \to \infty} \left(\int_a^{-\epsilon} + \int_{\epsilon}^b\right) \frac{e^{i z} - e^{-i z}}{2 i \cdot z} dz$$</span></p>
<p>Regarding convergence of <span class="math-container">$I_2$</span>, this integral (or half of it, to be precise) drops out algebraically from an application of Cauchy's Theorem (that a contour integral around a closed curve not containing poles gives 0) to the meromorphic function <span class="math-container">$e^{iz}/z$</span>. The details are <a href="https://math.stackexchange.com/questions/4146333/prove-that-int-0-infty-frac-sinxxdx-frac-pi2-using-contour-in/4146869#4146869">here</a>.</p>
<hr />
<p>Now to show <span class="math-container">$I_1 = I_2$</span>. The single exceptional point at <span class="math-container">$x=0$</span> has no bearing on the value of the integral <span class="math-container">$I_1$</span> so we may split <span class="math-container">$I_1$</span> around 0 and add a <span class="math-container">$\lim_{\epsilon \to 0}$</span> on it. Now <span class="math-container">$I_1 - I_2 = 0$</span> can be shown. Rewrite <span class="math-container">$I_2$</span> to use the complex <span class="math-container">$\sin$</span> function.</p>
<p><span class="math-container">$$\begin{align}
I_1 - I_2 =& \left[\lim_{a \to -\infty} \lim_{b \to \infty} \int_{a}^{b} f(x) dx \right] - \left[\lim_{\epsilon \to 0} \lim_{a \to -\infty} \lim_{b \to \infty} \left(\int_a^{-\epsilon} + \int_{\epsilon}^b\right) \frac{e^{i z} - e^{-i z}}{2 i \cdot z} dz\right] \\
=& \left[\lim_{\epsilon \to 0} \lim_{a \to -\infty} \lim_{b \to \infty} \left(\int_a^{-\epsilon} + \int_{\epsilon}^b\right) \frac{\sin(x)}{x} dx \right] - \left[\lim_{\epsilon \to 0} \lim_{a \to -\infty} \lim_{b \to \infty} \left(\int_a^{-\epsilon} + \int_{\epsilon}^b\right) \frac{\sin(z)}{z} dz\right] \\
=& \lim_{\epsilon \to 0} \lim_{a \to -\infty} \lim_{b \to \infty} \left(\int_a^{-\epsilon} + \int_{\epsilon}^b\right) \left[\frac{\sin(x)}{x} - \frac{\sin(x)}{x} \right] dx \\
=& 0
\end{align}$$</span></p>
<hr />
<ul>
<li>(1) <a href="https://en.wikipedia.org/wiki/Dirichlet_integral" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Dirichlet_integral</a></li>
<li>(2) <a href="https://en.wikipedia.org/wiki/Cauchy_principal_value" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Cauchy_principal_value</a></li>
<li>(3): Theorem 15.2 of Complex Analysis - M.W. Wong <a href="https://books.google.co.uk/books?id=7OKiVxvselkC&pg=PA94&redir_esc=y#v=onepage&q&f=false" rel="nofollow noreferrer">https://books.google.co.uk/books?id=7OKiVxvselkC&pg=PA94&redir_esc=y#v=onepage&q&f=false</a> proves that if the improper integral exists then the cauchy p.v. exists and is equal.</li>
</ul>
|
4,117,377 | <p>In calculating the integral</p>
<p><span class="math-container">$$\int_{-\infty}^{\infty} \frac{\sin x}{x}dx$$</span></p>
<p>by contour integration, we use</p>
<p><span class="math-container">$$\int_{-\infty}^{\infty} \frac{\sin x}{x}dx = \int_{-\infty}^{\infty} \frac{\operatorname{Im}(e^{ix})}{x}dx =\operatorname{Im}\left(\int_{-\infty}^{\infty} \frac{e^{ix}}{x}dx \right)$$</span></p>
<p>but in the process, we have gone from an integral which is well-defined with no real singularities to one with a real singularity which in fact is just undefined as an improper integral. Therefore, in the source I am reading, we take the cauchy principal value (CPV) of the integral on the RHS instead of treating it as an improper integral. This principal value is calculated by use of the Residue Theorem.</p>
<p>My question: There are different ways to treat singularities in integrals. How do we know that this one (the CPV) will give us the correct result for <span class="math-container">$\int_{-\infty}^{\infty} \frac{\sin x}{x}dx$</span>? Of course, knowing the answer by other methods, we can compare and see it was correct, but I'm looking to understand why the reasoning is valid.</p>
<p><strong>Response to 1st answer</strong>: Simply saying that the integral converges is not enough. We need some way to know that in particular the CPV is the correct notion of integration for the exponential integral. Clearly, not any notion of integration which converges must give the correct result.</p>
<p><strong>Response to 2nd answer</strong>: The question I ask is: by what reasoning is the notion of CPV in <span class="math-container">$\int_{-\infty}^{\infty} \frac{\sin x}{x}dx =\operatorname{Im}\left(CPV\int_{-\infty}^{\infty} \frac{e^{ix}}{x}dx \right)$</span> justified. Of course the first integral is the same as an improper integral or CPV, this just doesn't answer the question.</p>
| robjohn | 13,854 | <p><strong>Why the Cauchy Principal Value Gives the Proper Value</strong></p>
<p>Your question boils down to the following
<span class="math-container">$$
\begin{align}
&\mathrm{Im}\left(\mathrm{PV}\int_{-\infty}^\infty\frac{e^{ix}}x\,\mathrm{d}x\right)\\
&=\mathrm{Im}\left(\mathrm{PV}\int_{-\infty}^\infty\frac{\cos(x)}x\,\mathrm{d}x+i\,\mathrm{PV}\int_{-\infty}^\infty\frac{\sin(x)}x\,\mathrm{d}x\right)\tag{1a}\\
&=\mathrm{Im}\left(\lim_{\epsilon\to0}\int_{|x|\gt\epsilon}\frac{\cos(x)}x\,\mathrm{d}x+i\int_{-\infty}^\infty\frac{\sin(x)}x\,\mathrm{d}x\right)\tag{1b}\\
&=\mathrm{Im}\left(0+i\int_{-\infty}^\infty\frac{\sin(x)}x\,\mathrm{d}x\right)\tag{1c}\\
&=\int_{-\infty}^\infty\frac{\sin(x)}x\,\mathrm{d}x\tag{1d}
\end{align}
$$</span>
Explanation:<br />
<span class="math-container">$\text{(1a)}$</span>: <span class="math-container">$e^{ix}=\cos(x)+i\sin(x)$</span> and <span class="math-container">$\mathrm{PV}$</span> is linear<br />
<span class="math-container">$\text{(1b)}$</span>: definition of <span class="math-container">$\mathrm{PV}$</span> on the cosine integral<br />
<span class="math-container">$\phantom{\text{(1b):}}$</span> <span class="math-container">$\mathrm{PV}$</span> of a convergent integral is that integral<br />
<span class="math-container">$\text{(1c)}$</span>: the integral of an odd function over a domain<br />
<span class="math-container">$\phantom{\text{(1c):}}$</span> symmetric about the origin is <span class="math-container">$0$</span><br />
<span class="math-container">$\text{(1d)}$</span>: take the imaginary part</p>
<p>Actually, it doesn't even matter what the Principal Value of the cosine integral is. As long as it exists and is real, it is eliminated by taking the imaginary part.</p>
<hr />
<p><strong>Avoid Singularities Altogether</strong></p>
<p>There is no singularity whatsoever in
<span class="math-container">$$
\int_{-\infty}^\infty\frac{\sin(x)}x\,\mathrm{d}x\tag2
$$</span>
The way to compute this integral and never come close to a singularity is to note that
<span class="math-container">$$
\lim_{R\to\infty}\int_{\gamma_R}\frac{\sin(z)}z\,\mathrm{d}z=0\tag3
$$</span>
where <span class="math-container">$\gamma_R=[-R,R]\cup[R,R-i]\cup[R-i,-R-i]\cup[-R-i,-R]$</span>. This is because there are no singularities inside this contour for any <span class="math-container">$R$</span>.</p>
<p>Furthermore, the integral vanishes on <span class="math-container">$[R,R-i]$</span> and <span class="math-container">$[-R-i,-R]$</span> as <span class="math-container">$R\to\infty$</span> since <span class="math-container">$|\sin(z)|\le\cosh(1)$</span> and <span class="math-container">$|z|\ge R$</span> and the length of each path is <span class="math-container">$1$</span>. Thus, the integral over both paths is no bigger than <span class="math-container">$\frac{2\cosh(1)}R$</span>.</p>
<p>Discounting the integrals which vanish, <span class="math-container">$(3)$</span> becomes
<span class="math-container">$$
\int_{-\infty}^\infty\frac{\sin(x)}x\,\mathrm{d}x=\int_{-\infty-i}^{\infty-i}\frac{\sin(z)}z\,\mathrm{d}z\tag4
$$</span>
The path of integration on the right side of <span class="math-container">$(4)$</span> passes nowhere near a singularity. Use <span class="math-container">$\sin(z)=\frac{e^{iz}-e^{-iz}}{2i}$</span> to evaluate the right side of <span class="math-container">$(4)$</span>:
<span class="math-container">$$
\begin{align}
\int_{-\infty-i}^{\infty-i}\frac{\sin(z)}z\,\mathrm{d}z
&=\frac1{2i}\int_{-\infty-i}^{\infty-i}\frac{e^{iz}}z\,\mathrm{d}z-\frac1{2i}\int_{-\infty-i}^{\infty-i}\frac{e^{-iz}}z\,\mathrm{d}z\tag{5a}\\
&=\frac1{2i}\lim_{R\to\infty}\int_{\gamma_R^+}\frac{e^{iz}}z\,\mathrm{d}z-\frac1{2i}\lim_{R\to\infty}\int_{\gamma_R^-}\frac{e^{-iz}}z\,\mathrm{d}z\tag{5b}
\end{align}
$$</span>
where <span class="math-container">$\gamma_R^+=[-R-i,R-i]\cup Re^{i[0,\pi]}-i$</span> and <span class="math-container">$\gamma_R^-=[-R-i,R-i]\cup Re^{i[0,-\pi]}-i$</span>. <span class="math-container">$\text{(5b)}$</span> follows because the integrals along the huge arcs go to <span class="math-container">$0$</span>; <span class="math-container">$e^{iz}$</span> vanishes exponentially in the upper half-plane and <span class="math-container">$e^{-iz}$</span> vanishes exponentially in the lower half-plane. In fact, the integrals along those arcs are bounded by
<span class="math-container">$$
\begin{align}
\int_0^\pi\overbrace{e^{-R\sin(\theta)+1}\vphantom{\frac RR}}^{\large e^{\pm iz}}\overbrace{\ \frac{R\,\mathrm{d}\theta}{R-1}\ }^{\mathrm{d}z/z}
&\le\frac{2eR}{R-1}\int_0^{\pi/2}e^{-2R\theta/\pi}\,\mathrm{d}\theta\tag{6a}\\
&\le\frac{2eR}{R-1}\frac\pi{2R}\tag{6b}\\[3pt]
&=\frac{e\pi}{R-1}\tag{6c}
\end{align}
$$</span>
Since <span class="math-container">$\gamma_R^-$</span> contains no singularities, the integral on the right-hand side of <span class="math-container">$\text{(5b)}$</span> is <span class="math-container">$0$</span>. Since <span class="math-container">$\gamma_R^+$</span> contains the singularity at <span class="math-container">$0$</span>, whose residue is <span class="math-container">$1$</span>, we get that the integral on the left-hand side of <span class="math-container">$\text{(5b)}$</span> is <span class="math-container">$2\pi i$</span>.</p>
<p>Putting together <span class="math-container">$(2)-(5)$</span>, we conclude
<span class="math-container">$$
\int_{-\infty}^\infty\frac{\sin(x)}x\,\mathrm{d}x=\pi\tag7
$$</span></p>
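<p>A purely numerical cross-check (my addition, not part of the argument above): truncating the integral symmetrically at $\pm R$ and applying the composite trapezoid rule approaches $\pi$, with an oscillating truncation error of order $1/R$.</p>

```python
import math

def sinc_integral(R, n=200_001):
    """Composite trapezoid rule for the integral of sin(x)/x over [-R, R]."""
    dx = 2.0 * R / (n - 1)
    total = 0.0
    for i in range(n):
        x = -R + i * dx
        y = 1.0 if x == 0.0 else math.sin(x) / x   # sin(x)/x extends to 1 at x = 0
        total += 0.5 * y if i in (0, n - 1) else y # endpoints get half weight
    return total * dx
```

<p>With $R=1000$ the result agrees with $\pi$ to within about $4/R$.</p>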
|
1,075,505 | <p>I have the following function:</p>
<p>$$f(x+iy) = x^2+iy^2$$</p>
<p>My textbook says the function is only differentiable along the line $x = y$, can anyone please explain to me why this is so? What rules do we use to check where a function is differentiable?</p>
<p>I know the Cauchy-Riemann equations, and that $u=x^2$ and $v=y^2$ here. </p>
| Harriet Grigg | 390,879 | <p>A perhaps more accessible perspective is to understand that the definition of the complex derivative, as in the real case, relies on the existence and uniqueness of a certain limit. In particular, given a point and a function, one considers evaluation of the function in a small neighbourhood of the point in question in the domain of the function. The limit in question is of course the ratio of the difference in the value of the function between the distinguished point and close neighbours to the difference in the argument. In the familiar real case, the limit must be uniquely defined, regardless of the direction from which the point is approached (from above or below). For example, a piecewise linear function has an upper derivative and a lower derivative everywhere; it only has a full derivative where they are equal (which may not be true at the boundaries of the pieces). Failure to meet this condition corresponds to a loss of regularity of the function; it is both intuitively and rigorously meaningless to assign a value to the derivative or slope of such a function at points where the full derivative does not exist. </p>
<p>The intuition is identical in the complex case; the difference is that the distinguished point may be approached from an infinity of directions. In common with the real case, the derivative is defined iff the limit agrees regardless of this choice. Similarly, the intuition behind this is captured by the idea that the "slope" - here generalised to include the phase as well as the magnitude of the defining ratio, since a ratio of complex numbers is in general complex - must agree at a point, independent of the direction from which it is approached. This is reflected in the structure of the Cauchy-Riemann equations. A slightly counterintuitive aspect of complex analysis is that this condition can easily fail even if the real and imaginary parts of the function under consideration are smooth as real functions; agreement with the complex structure is a rigid constraint on differentiability. Try evaluating the limit from several directions for your function and this will persuade you that the limit is not independent of the choice. If the gradient of this function represented a force on a particle in the complex plane, which direction would it move in? It wouldn't be able to make its mind up. </p>
|
2,234,737 | <p>$f:\mathbb{R}^m\to\mathbb{R}$ differentiable such that $f(x/2) = f(x)/2$, show $f$ is linear.</p>
<p>I tried to do:</p>
<p>$f(a+b) = f(2(a+b)/2) = f(2(a+b))/2$</p>
<p>but that wouldn't turn into $f(a)+f(b)$</p>
<p>In the same way: $f(a)/2 + f(b)/2 = f((a+b)/2)$ which won't help.</p>
<p>Also, I need to use the differentiability. I know that there must exist $r(v)$ such that</p>
<p>$$f(a+v) = f(a) + grad(f)\cdot v + r(v)$$</p>
<p>where $\lim_{v\to 0}\frac{r(v)}{|v|} = 0$</p>
| Breno Pinheiro | 740,066 | <p>Let <span class="math-container">$f:\mathbb{R}^n\rightarrow \mathbb{R}$</span> be a differentiable function, then f linear <span class="math-container">$\Leftrightarrow$</span> <span class="math-container">$\dfrac{\partial f(a)}{\partial v}=f(v)$</span></p>
<p><span class="math-container">$\Rightarrow$</span> <span class="math-container">$\dfrac{\partial f(a)}{\partial v}=\lim_{t \to 0} \dfrac{f(a+tv)-f(a)}{t}=\lim_{t \to 0} \dfrac{f(a)+tf(v)-f(a)}{t}=f(v)$</span></p>
<p><span class="math-container">$\Leftarrow$</span> <span class="math-container">$f(v)=\dfrac{\partial f(a)}{\partial v}=\langle \nabla f(a), v\rangle$</span> thence, <span class="math-container">$f(a+tv)=\langle \nabla f(a), a+tv\rangle = \langle \nabla f(a), a\rangle+ t\langle \nabla f(a), v\rangle=f(a)+tf(v)$</span>. <span class="math-container">$\square$</span></p>
<p>Now, <span class="math-container">$f(0)=f(0)/2\rightarrow f(0)=0$</span>. We also have that <span class="math-container">$f(x/2^n)=f(x)/2^n$</span> (easy by induction). Then</p>
<p><span class="math-container">$$\dfrac{\partial f(0)}{\partial v}=\lim_{t \to 0} \dfrac{f(0+tv)-f(0)}{t}=\lim_{t \to 0} \dfrac{f(tv)}{t}$$</span></p>
<p>suppose <span class="math-container">$t=\dfrac{1}{2^n}$</span>, If <span class="math-container">$t \to 0$</span>, then <span class="math-container">$n \to \infty$</span></p>
<p><span class="math-container">$$\dfrac{\partial f(0)}{\partial v}=\lim_{t \to 0} \dfrac{f(0+tv)-f(0)}{t}=\lim_{t \to 0} \dfrac{f(tv)}{t}=\lim_{n \to \infty} \dfrac{f(\dfrac{1}{2^n}v)}{\dfrac{1}{2^n}}=\lim_{n \to \infty} \dfrac{\dfrac{1}{2^n}f(v)}{\dfrac{1}{2^n}}=f(v)$$</span></p>
<p>Showing the result. <span class="math-container">$\blacksquare$</span></p>
|
567,668 | <p>In my analysis class we are discussing integrable functions and I need help thinking through something. I don't want any answers, just what should I be looking at? (hints)</p>
<p><img src="https://i.stack.imgur.com/qKte6.png" alt="enter image description here"></p>
| Berci | 41,488 | <p><strong>Hints:</strong> If $N$ is a normal subgroup of $G_1\times G_2$, as the projections $p_i:G_1\times G_2\to G_i$ are surjective, they map $N$ onto a normal subgroup.</p>
<p>For the case both $p_i(N)=G_i$, let $M:=\{g_1\mid (g_1,1)\in N \}$. Then $M$ is again a normal subgroup (of $G_1$). If $M=G_1$ we are ready soon. <br>
Finally, if $M=\{1\}$ (by symmetry it also means $\{g_2\mid (1,g_2)\in N\}=\{1\}$), conclude that $N$ is a graph of an isomorphism (e.g. $(g_1,g_2)\in N$ and $(g_1,g_2')\in N$ implies $g_2=g_2'$), in other words $N$ is just the <em>diagonal</em> of $G_1\times G_2$ applying the isomorphism $G_1\cong G_2$.</p>
<p>On the other hand, unless $G$ is commutative, the diagonal of $G\times G$ for a simple group is <em>not a normal</em> subgroup.</p>
|
1,522,062 | <p>Identify for which values of $x$ there is subtraction of nearly equal numbers, and find an alternate form that avoids the problem:
$$E = \frac{1}{1+x} - \frac{1}{1-x} = -\frac{2x}{1-x^2} = \frac{2x}{x^2-1} $$
How come $-2x/(1-x^2)$ can be changed to $2x/(x^2 - 1)$ according to the homework solutions? Why does the denominator change its digits/variables places? Shouldn't it be just $2x/(1-x^2)$ ? Thanks!</p>
| team-erdos | 288,344 | <p>In general, for numbers $a,b$ and $c$, we may write
$$
\frac a{b-c}=\frac a{-1(c-b)}=\frac 1{-1} \cdot \frac a{c-b}=\frac {-1}{1} \cdot \frac a{c-b}=\frac{-a}{c-b}.
$$</p>
<p>In your case, we have $a=-2x$, $b=1$, and $c=x^2$. So,
$$
\frac{-2x}{1-x^2}=\frac{-2x}{-1(x^2-1)}=\frac 1{-1}\cdot \frac{-2x}{x^2-1}=\frac {-1}1\cdot \frac{-2x}{x^2-1}=\frac{2x}{x^2-1}.
$$</p>
<p>Hope that helps.</p>
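<p>A sketch of an exact-arithmetic spot-check of this sign manipulation, using Python's <code>fractions</code> module:</p>

```python
from fractions import Fraction

# Spot-check: -2x/(1-x^2) == 2x/(x^2-1) for a few rational x != +-1.
for x in (Fraction(1, 2), Fraction(3), Fraction(-5, 4)):
    lhs = -2 * x / (1 - x * x)
    rhs = 2 * x / (x * x - 1)
    assert lhs == rhs
print("identity verified")
```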
|
1,686,904 | <blockquote>
<p>How can we prove that <span class="math-container">$$\binom{n}{1}\binom{n}{2}^2\binom{n}{3}^3\cdots \binom{n}{n}^n \leq \left(\frac{2^n}{n+1}\right)^{\binom{n+1}{2}}$$</span></p>
</blockquote>
<p><span class="math-container">$\bf{My\; Try::}$</span> Using <span class="math-container">$\bf{A.M\geq G.M\;,}$</span> we get</p>
<p><span class="math-container">$$\binom{n}{0}+\binom{n}{1}+\binom{n}{2} + \cdots+\binom{n}{n}\geq (n+1)\cdot \left[\binom{n}{0}\cdot \binom{n}{1}\cdot \binom{n}{2} \cdots \binom{n}{n}\right]^{\frac{1}{n+1}}$$</span></p>
<p>So <span class="math-container">$$2^n\geq (n+1)\left[\binom{n}{1}\cdot \binom{n}{2}\cdots \binom{n}{n}\right]^{\frac{1}{n+1}}$$</span></p>
<p>How can I solve it after that? Help me.</p>
<p>Thanks.</p>
| DeepSea | 101,504 | <p><strong>Hint</strong>: You are close to the answer. Here is the right way: Apply the AM-GM:</p>
<p>$(a_1+2a_2+\cdots + na_n)^{\binom{n+1}{2}} \geq \binom{n+1}{2}^{\binom{n+1}{2}}a_1a_2^2\cdots a_n^n$, with $a_k = \binom{n}{k}$, and the left side is a popular sum that can be calculated by several methods one of which is using derivative of $(1+x)^n$</p>
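<p>Both sides of the inequality overflow quickly, so a numeric sanity check (a sketch, not a proof) is easiest in the log domain:</p>

```python
import math
from math import comb

# Compare log(LHS) = sum_k k*log(C(n,k)) with log(RHS) = C(n+1,2)*(n*log 2 - log(n+1)).
def log_sides(n):
    lhs = sum(k * math.log(comb(n, k)) for k in range(1, n + 1))
    rhs = comb(n + 1, 2) * (n * math.log(2) - math.log(n + 1))
    return lhs, rhs

for n in range(1, 12):
    lhs, rhs = log_sides(n)
    assert lhs <= rhs + 1e-9
print("inequality holds numerically for n = 1..11")
```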
|
244,489 | <p>Given:</p>
<p>${AA}\times{BC}=BDDB$</p>
<p>Find $BDDB$:</p>
<ol>
<li>$1221$</li>
<li>$3663$</li>
<li>$4884$</li>
<li>$2112$</li>
</ol>
<p>The way I solved it:</p>
<p>First step - expansion & dividing by constant ($11$):
$AA\times{BC}$=$11A\times{BC}$</p>
<ol>
<li>$1221$ => $1221\div11$ => $111$</li>
<li>$3663$ => $3663\div11$ => $333$</li>
<li>$4884$ => $4884\div11$ => $444$</li>
<li>$2112$ => $2112\div11$ => $192$</li>
</ol>
<p>Second step - each result is now equal to $A\times{BC}$. We're choosing multipliers $A$ and $BC$ manually and in accordance with initial condition. It takes <strong>a lot</strong> of time to pick up a number and check whether it can be a multiplier.</p>
<p>That way I get two pairs:</p>
<p>$22*96$=$2112$</p>
<p>$99*37$=$3663$</p>
<p>Of course $99*37$=$3663$ is the right one.</p>
<p>Is there a more efficient way to do this? Am I missing something?</p>
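<p>The divide-by-11 casework above is small enough to brute-force. A sketch in Python (note it additionally constrains $B$ to be the leading digit of $BDDB$, which the manual search above did not):</p>

```python
# Brute-force the cryptarithm AA x BC = BDDB over the four candidate values,
# with B pinned to the leading digit of the candidate.
solutions = []
for bddb in [1221, 3663, 4884, 2112]:
    b = bddb // 1000
    for a in range(1, 10):      # AA = 11 * A
        for c in range(10):     # BC = 10 * B + C
            if 11 * a * (10 * b + c) == bddb:
                solutions.append((11 * a, 10 * b + c, bddb))
print(solutions)
```

<p>With that extra constraint the search is instant; it also turns up $88\times24=2112$ alongside $99\times37=3663$, so the digit pattern alone does not single out one candidate.</p>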
| Patrick M | 48,933 | <p>So, assume that $0 \leq a \leq 1$ and $0\leq b\leq 1$. I'll address the boundary conditions later.[1] You have square A, inside of a unit square. When the square A is touching a corner, you have the most space available for square B.[2] Assuming that the square is touching a corner, you are left with two rectangles of space of size $(1-a)$ X 1. (With the understanding that these two rectangles of space overlap.) Now, a square is restricted in size by the smaller of the two dimensions, so the max size for $b$ is $(1-a)$, or:</p>
<p>$0 \leq b \leq (1-a) \leq 1 \Rightarrow [\text{add $a$ to everything}] \Rightarrow$<br>
$a \leq b+a \leq 1 < 1+a \Rightarrow$<br>
[removing the terms on the ends gives us a weaker statement]<br>
$b+a \leq 1$</p>
<p>[1] So, lets take the two edge cases first:<br>
1) $a=1$: If a=1, a takes up all the room in the square. Honestly, I don't know my definitions well enough to know if a square B can exist in zero space, with a length of zero size. You have to prove this, to prove the statement is correct.<br>
2) $a=0$: Square B can fit in the box, and be any size 0<b<=1, but again, I don't remember my geometry well enough to know if you can have a square of zero size.</p>
<p>[2] <s>If you want to prove that the corner is optimal, for allowing the biggest rectangles, and squares, you can prove it by finding an expressing for the free space on each side of a square, based on the square's position, and assuming the size is a constant C. Then you can show the free space is maximized then the position of the square is in a corner.</s></p>
<p><b>Edit:</b> This is harder to prove than originally stated, because it neglects to account for what happens if the rectangles are rotated.</p>
|
3,364,317 | <p>Points A and B are fixed, and point C moves on a circle such that ABC is an acute triangle. <span class="math-container">$AT = BT$</span> and <span class="math-container">$TM \perp AC, \, TN \perp BC$</span>. How can I prove that all the middle perpendiculars (perpendicular bisectors) to <span class="math-container">$MN$</span> pass through a fixed point?</p>
| sirous | 346,566 | <p>Hint: Let's mark the perpendicular bisector as (PB).
If C coincides with A or B, then M and N coincide with A and B respectively, and the (PB) of MN is exactly the (PB) of AB. Also, when the triangle is isosceles, MN is parallel to AB, so again the (PB)s of MN and AB are coincident. That is, the fixed point lies on the (PB) of AB. To find this point, extend the (PB) of MN to cross the (PB) of AB at P. It can be shown that the ratio <span class="math-container">$R=\frac{TP}{AB}$</span> is independent of the position of C, i.e. constant. The angle <span class="math-container">$(\alpha)$</span> between PT and the (PB) of MN is always equal to the angle <span class="math-container">$(\beta)$</span> between AB and MN (or their extensions), because their rays are perpendicular. If <span class="math-container">$\angle CAB$</span> or <span class="math-container">$\angle CBA$</span> is <span class="math-container">$90^o$</span>, then M or N lies on A or B respectively. Let's mark the intersection of the (PB) of MN and AB as Q. The right triangles ABC and PQT are similar. Since AB is constant, TP must be constant by the law of sines. The equality of the angles <span class="math-container">$(\alpha)$</span> and <span class="math-container">$(\beta)$</span> thus requires that the (PB) of MN always pass through the point P. </p>
|
2,812,472 | <p>I am trying to show that</p>
<p>$$
\frac{d^n}{dx^n} (x^2-1)^n = 2^n \cdot n!,
$$ for $x = 1$. I tried to prove it by induction but I failed because I lack axioms and rules for this type of derivative. </p>
<p>Can someone give me a hint?</p>
| Chinny84 | 92,628 | <p>$$
(x^2-1)^n = \sum_{k=0}^n\left(\matrix{n\\k}\right)(-1)^{n-k}(x^2)^k
$$
or your problem becomes
$$
\frac{d^n}{dx^n}\sum_{k=0}^n\left(\matrix{n\\k}\right)(-1)^{n-k}(x^2)^k = \sum_{k=0}^n\left(\matrix{n\\k}\right)(-1)^{n-k}\frac{d^n}{dx^n}x^{2k}
$$
Can you take it from here?</p>
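<p>A quick standard-library check of this route, using the fact that $\frac{d^n}{dx^n}x^{2k}$ evaluated at $x=1$ is the falling factorial $(2k)(2k-1)\cdots(2k-n+1)$:</p>

```python
from math import comb, factorial, prod

def nth_deriv_at_1(n):
    # d^n/dx^n (x^2-1)^n at x=1, via the binomial expansion above.
    total = 0
    for k in range(n + 1):
        # d^n/dx^n x^{2k} at x=1 is the falling factorial (2k)(2k-1)...(2k-n+1);
        # it is 0 whenever 2k < n (one factor hits zero), as expected.
        falling = prod(2 * k - j for j in range(n))
        total += comb(n, k) * (-1) ** (n - k) * falling
    return total

for n in range(1, 8):
    assert nth_deriv_at_1(n) == 2 ** n * factorial(n)
print("verified for n = 1..7")
```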
|
2,812,472 | <p>I am trying to show that</p>
<p>$$
\frac{d^n}{dx^n} (x^2-1)^n = 2^n \cdot n!,
$$ for $x = 1$. I tried to prove it by induction but I failed because I lack axioms and rules for this type of derivative. </p>
<p>Can someone give me a hint?</p>
| Rhys Hughes | 487,658 | <p>We have:
$$y=(x^2-1)^n$$
Using the chain rule: $$y=a[f(x)]^n\to\frac{dy}{dx}=an[f'(x)][f(x)]^{n-1}$$
We get:
$$\frac{dy}{dx}=2xn(x^2-1)^{n-1}$$
Then, keeping at each step only the dominant term (the product-rule terms discarded below retain a factor of $x^2-1$ and so vanish at $x=1$):
$$\frac{d^2y}{dx^2}=2xn\cdot2x(n-1)\cdot(x^2-1)^{n-2}=n(n-1)(2x)^2(x^2-1)^{n-2}$$
$$\frac{d^3y}{dx^3}=2xn\cdot2x(n-1)\cdot2x(n-2)\cdot(x^2-1)^{n-3}=(2x)^3[n(n-1)(n-2)][(x^2-1)^{n-3}]$$
Can you show it from here?</p>
|
1,060,213 | <p>I have searched the site quickly and have not come across this exact problem. I have noticed that a Pythagorean triple <code>(a,b,c)</code> where <code>c</code> is the hypotenuse and <code>a</code> is prime, is always of the form <code>(a,b,b+1)</code>: The hypotenuse is one more than the non-prime side. Why is this so?</p>
| poetasis | 546,655 | <p>The subset of triples where the GCD of A,B,C is an odd square and where <span class="math-container">$C-B=(2n-1)^2$</span> is also a group of distinct sets of triples as shown below.</p>
<p><span class="math-container">$$\begin{array}{c|c|c|c|c|}
\text{$Set_n$}& \text{$Triple_1$} & \text{$Triple_2$} & \text{$Triple_3$} & \text{$Triple_4$}\\ \hline
\text{$Set_1$} & 3,4,5 & 5,12,13& 7,24,25& 9,40,41\\ \hline
\text{$Set_2$} & 15,8,17 & 21,20,29 &27,36,45 &33,56,65\\ \hline
\text{$Set_3$} & 35,12,37 & 45,28,53 &55,48,73 &65,72,97 \\ \hline
\text{$Set_{25}$} &2499,100,2501 &2597,204,2605 &2695,312,2713 &2793,424,2825\\ \hline
\end{array}$$</span></p>
<p>Normally, triples are generated using Euclid's formula where
<span class="math-container">$$A=m^2-n^2\quad B=2mn\quad C=m^2+n^2$$</span>
We can get the subset if, instead of using <span class="math-container">$(m,n)$</span>, we use <span class="math-container">$(2m-1+n,n)$</span> which expands to</p>
<p><span class="math-container">$$A=(2m-1)^2+2(2m-1)n\qquad B=2(2m-1)n+2n^2\qquad C=(2m-1)^2+2(2m-1)n+2n^2$$</span></p>
<p><span class="math-container">$Set_1$</span> contains <span class="math-container">$only$</span> primitives and <span class="math-container">$only$</span> in <span class="math-container">$Set_1$</span> does <span class="math-container">$C-B=1$</span></p>
<p>If we let <span class="math-container">$m=1 (Set_1)$</span>, the formula reduces to <span class="math-container">$A=2n+1\quad B=2n^2+2n\quad C=2n^2+2n+1$</span></p>
<p>We can see by inspection that the difference between <span class="math-container">$B$</span> and <span class="math-container">$C$</span> is always <span class="math-container">$1$</span>.</p>
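<p>A short sketch of this parametrization (Euclid's formula with $(m,n)\mapsto(2m-1+n,\,n)$), with assertions that each output is a Pythagorean triple whose $C-B$ is the odd square $(2m-1)^2$:</p>

```python
# A = (2m-1)^2 + 2(2m-1)n,  B = 2(2m-1)n + 2n^2,  C = (2m-1)^2 + 2(2m-1)n + 2n^2
def triple(m, n):
    q = 2 * m - 1
    return q * q + 2 * q * n, 2 * q * n + 2 * n * n, q * q + 2 * q * n + 2 * n * n

for m in range(1, 4):
    for n in range(1, 5):
        a, b, c = triple(m, n)
        assert a * a + b * b == c * c           # always a Pythagorean triple
        assert c - b == (2 * m - 1) ** 2        # C - B is the odd square
print(triple(1, 1), triple(2, 1), triple(3, 1))
```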
|
3,554,555 | <p>Let <span class="math-container">$S_n=1-2^{-n}$</span>. Prove that the sequence <span class="math-container">$S_n$</span> converges to <span class="math-container">$1$</span> as <span class="math-container">$n$</span> approaches infinity. </p>
<p>I let <span class="math-container">$N = \lceil\log_2 \varepsilon\rceil + 1$</span>. Then there exists <span class="math-container">$k\geq 1$</span> such that <span class="math-container">$\varepsilon \geq 1/2^k$</span>. Then, set <span class="math-container">$N=k+1$</span>. Let <span class="math-container">$n\geq N$</span>. </p>
<p>I am having trouble proving this though. any help would be greatly appreciated. thank you!</p>
| P. Lawrence | 545,558 | <p>Expansion of <span class="math-container">$2^n=(1+1)^n$</span> by the binomial theorem shows that <span class="math-container">$2^n>n.$</span> Thus <span class="math-container">$2^{-n}<\frac {1}{n}$</span> so <span class="math-container">$$|1-(1-2^{-n})|=2^{-n}< \frac {1}{n}$$</span>. You can take it from there with <span class="math-container">$\epsilon$</span> and <span class="math-container">$\delta .$</span></p>
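<p>A trivial numeric confirmation of the bound used here:</p>

```python
# Check 2**n > n (hence 2**-n < 1/n) over a range of n.
for n in range(1, 60):
    assert 2 ** n > n
    assert 2.0 ** -n < 1.0 / n
print("bound holds for n = 1..59")
```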
|
1,115,389 | <p>Given: </p>
<p>$$\sum_{n = 0}^{\infty} a_nx^n = f(x)$$</p>
<p>where:</p>
<p>$$a_{n+2} = a_{n+1} - \frac{1}{4}a_n$$</p>
<p>is the recurrence relationship for $a_2$ and above ($a_0$ and $a_1$ are also given).</p>
<p>Is there a nice closed form to this pretty recurrence relationship?</p>
| Alex R. | 22,064 | <p>If you think about it, placing 9 rooks on a 9x9 chessboard so that they don't attack each other is equivalent to asking how many permutations there are of the numbers $1,2,\cdots,9$. This is because each allowed rook configuration corresponds to a $9\times 9$ permutation matrix $M$, with $M_{ij}=1$ if there's a rook at $(i,j)$ and 0 otherwise: there cannot be two 1's on the same row or column. So the answer is immediately $9!/\binom{81}{9}$. </p>
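<p>For concreteness, this probability can be evaluated with the standard library:</p>

```python
from math import comb, factorial

# Favourable placements = 9! permutation matrices; total = C(81, 9).
p = factorial(9) / comb(81, 9)
print(factorial(9), comb(81, 9))
print(p)  # roughly 1.4e-6
```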
|
3,677,303 | <p>Is there an example of topological space being both sequentially compact and Lindelof but noncompact?</p>
| nessy | 785,381 | <p>No.</p>
<p>Let <span class="math-container">$X$</span> be a sequentially compact and Lindelöf topological space. Then <span class="math-container">$X$</span> is countably compact, since it is sequentially compact. (*) By definition, a countably compact and Lindelöf space is compact.</p>
<p>(*) Countable compactness is equivalent to the following: "For an arbitrary sequence <span class="math-container">$(x_n)_{n\in\mathbb{N}}$</span>, the set <span class="math-container">$\bigcap_{n\in\mathbb{N}}\overline{\{ x_n, x_{n+1},\cdots \}}$</span> is not empty." (This can be proven immediately from the definition of countable compactness, using an easy set-theoretic calculation.) You can now show that sequential compactness implies countable compactness.</p>
|
801,574 | <p>Let $A$ be a complex $n\times m$ matrix ($n<m$), which is row full rank. For any $m\times m$ matrix $M$, such that $AM$ is still row full rank, can we find an invertible matrix $X$, such that $XA=AM$?</p>
| Community | -1 | <p>Some basic facts that can be used to reason about the problem:</p>
<ul>
<li>If $X$ is invertible, the rowspace of $XA$ is the rowspace of $A$.</li>
<li>Every row of $AM$ is in the row space of $M$.</li>
</ul>
|
330,488 | <p>On the complex plane <span class="math-container">$\mathbb C$</span> consider the half-open square <span class="math-container">$$\square=\{z\in\mathbb C:0\le\Re(z)<1,\;0\le\Im(z)<1\}.$$</span> </p>
<p>Observe that for every <span class="math-container">$z\in \mathbb C$</span> and <span class="math-container">$p\in\{0,1,2,3\}$</span> the set <span class="math-container">$(z+i^p\cdot\square)$</span> is the shifted and rotated square <span class="math-container">$\square$</span> with a vertex at <span class="math-container">$z$</span>.</p>
<blockquote>
<p><strong>Problem.</strong> Is it true that for any function <span class="math-container">$p:\mathbb C\to\{0,1,2,3\}$</span> there is a subset <span class="math-container">$Z\subset\mathbb C$</span> such that the union of the squares
<span class="math-container">$$\bigcup_{z\in Z}(z+i^{p(z)}\cdot\square)$$</span>is not Borel in <span class="math-container">$\mathbb C$</span>?</p>
</blockquote>
<hr>
<p><strong>Added in Edit.</strong> As @YCor observed in his comment, the answer to this problem is affirmative under <span class="math-container">$\neg CH$</span>.</p>
<p>An affirmative answer to Problem would follow from an affirmative answer to another intriguing </p>
<blockquote>
<p><strong>Problem'.</strong> Is it true that for any partition <span class="math-container">$\mathbb C=A\cup B$</span> either <span class="math-container">$A$</span> contains an uncountable strictly increasing function or <span class="math-container">$B$</span> contains an uncountable strictly decreasing function? </p>
</blockquote>
<p>Here by a <em>function</em> I understand a subset <span class="math-container">$f\subset \mathbb C$</span> such that for any <span class="math-container">$x\in\mathbb R$</span> the set <span class="math-container">$f(x)=\{y\in\mathbb R:x+iy\in f\}$</span> contains at most one element.</p>
<hr>
<p><strong>Added in the Next Edit.</strong> In the discussion with @YCor we came to the conclusion that under CH the answer to both problems is negative. Therefore, both problems are independent of ZFC. Very strange.</p>
| Taras Banakh | 61,536 | <p>Both problems have negative answer under CH and positive answer under <span class="math-container">$\neg$</span>CH. The proofs can be found <a href="https://arxiv.org/abs/1905.02243" rel="nofollow noreferrer">here</a>.</p>
|
2,149,006 | <p>While learning the power rule, one thing popped up in my mind which is confusing me. We know what the power rule states :</p>
<p>$$\frac{\mathrm{d}}{\mathrm{d}x}(x^n) = nx^{n-1}$$ where $n$ is a real number.</p>
<blockquote>
<p>But instead of $n$, if we have a trig function like $\sin(x)$, <strong>will the power rule still apply?</strong></p>
</blockquote>
<p>Eg. We have a function $y = x^{\sin(x)}$, and thus by the power rule;</p>
<p>$$\frac{dy}{dx} = \sin(x)x^{\sin(x)-1}$$. </p>
<p>Is this possible? Please tell me if even the function I wrote above really does exist or not.</p>
<p>I know this may seem a stupid question to many, but please help because I cannot find any explanation to this. </p>
| Community | -1 | <p>Let me answer for any variable exponent $n(x)$: no,</p>
<p>$$(x^{n(x)})'=n(x) x^{n(x)-1}$$ doesn't hold.</p>
<p>As a counterexample, consider the function</p>
<p>$$n(x):=\frac1{\ln(x)}$$</p>
<p>remarking that $$x^{n(x)}=x^{1/\ln(x)}=e^{\ln(x)/\ln(x)}=e.$$</p>
<p>The wrong rule would yield</p>
<p>$$\frac1{\ln(x)}x^{1/\ln(x)-1}=\frac1{\ln(x)}e^{1-\ln(x)}.$$ </p>
<p>But on another hand, the derivative of a constant is $0$.</p>
<hr>
<p>The correct rule can be found by means of logarithms:</p>
<p>$$(x^{n(x)})'=(e^{\ln(x)n(x)})'=(\ln(x)n(x))'e^{\ln(x)n(x)}=\left(\frac{n(x)}x+\ln(x)n'(x)\right)x^{n(x)}.$$</p>
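<p>The counterexample is easy to check numerically: with $n(x)=1/\ln x$ the function is the constant $e$, so a central difference gives $\approx 0$, while the naive power rule predicts a nonzero value. A sketch (the test point $x_0=2$ is arbitrary):</p>

```python
import math

def f(x):
    return x ** (1.0 / math.log(x))  # identically e for x > 0, x != 1

x0, h = 2.0, 1e-6
numeric = (f(x0 + h) - f(x0 - h)) / (2 * h)                  # ~ 0
naive = (1 / math.log(x0)) * x0 ** (1 / math.log(x0) - 1)    # nonzero
print(numeric, naive)
```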
|
2,149,006 | <p>While learning the power rule, one thing popped up in my mind which is confusing me. We know what the power rule states :</p>
<p>$$\frac{\mathrm{d}}{\mathrm{d}x}(x^n) = nx^{n-1}$$ where $n$ is a real number.</p>
<blockquote>
<p>But instead of $n$, if we have a trig function like $\sin(x)$, <strong>will the power rule still apply?</strong></p>
</blockquote>
<p>Eg. We have a function $y = x^{\sin(x)}$, and thus by the power rule;</p>
<p>$$\frac{dy}{dx} = \sin(x)x^{\sin(x)-1}$$. </p>
<p>Is this possible? Please tell me if even the function I wrote above really does exist or not.</p>
<p>I know this may seem a stupid question to many, but please help because I cannot find any explanation to this. </p>
| Narasimham | 95,860 | <p>No, you cannot temporarily assume one of the two variables to be constant when differentiating with respect to the exponent or the base separately.</p>
<p>To handle both at the same time, take logarithm on both sides</p>
<p>$$\ln y =\sin x \ln x$$ and by the chain rule we get</p>
<p>$$\frac{y'}{y}=\cos x \ln x+\sin x\cdot \frac1{x}$$</p>
<p>$$ y^{\prime}= \left( \cos x \ln x+\sin x\cdot \frac1{x} \right) x^ {\sin x} $$</p>
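<p>A numerical spot-check of this formula against a central difference (the test point $x_0=1.7$ is an arbitrary choice):</p>

```python
import math

def y(x):
    return x ** math.sin(x)

def dy(x):
    # (cos x * ln x + sin x / x) * x**sin(x), from logarithmic differentiation
    return (math.cos(x) * math.log(x) + math.sin(x) / x) * y(x)

x0, h = 1.7, 1e-6
numeric = (y(x0 + h) - y(x0 - h)) / (2 * h)
print(numeric, dy(x0))
```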
|
2,372,035 | <p>$ABCD$ is a square with side-length $1$ and equilateral triangles $\Delta AYB$ and $\Delta CXD$ are inside the square. What is the length of $XY$?</p>
| Michael Rozenberg | 190,319 | <p>Just $2\cdot\frac{\sqrt3}{2}-1=\sqrt3-1$: each apex lies at height $\frac{\sqrt3}{2}$ above its own base, the bases are opposite sides of the unit square, so the two heights overlap by exactly $\sqrt3-1$.</p>
|
629,887 | <p>I was going through one of the topics, "Introduction to Formal Proof". In one example, while it explained "hypothesis" and "conclusion", I got confused.</p>
<p>The example is as follows:</p>
<blockquote>
<p>If $x\geq4$, then $2^x \geq x^2$.</p>
</blockquote>
<p>While deriving the conclusion, the article said: "As $x$ grows larger than $4$, the LHS $2^x$ doubles as $x$ increases by $1$, and the RHS grows by the ratio $\left(\frac{x+1}{x}\right)^2$".</p>
<p>Thanks in advance.</p>
| Sasha | 11,069 | <p>Using
$$
\binom{n}{k} \frac{k}{n} = \binom{n-1}{k-1}
$$
Then
$$
\sum_{k=1}^n \frac{k}{n} \binom{n}{k} t^k (1-t)^{n-k} = \sum_{k=1}^{n} \binom{n-1}{k-1} t^{k} (1-t)^{n-k} = t \sum_{k=0}^{n-1} \binom{n-1}{k} t^{k} (1-t)^{n-1-k} = (t+1-t)^{n-1} t = t
$$</p>
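<p>The identity says the mean of a Binomial$(n,t)$ variable is $nt$; a quick numeric check:</p>

```python
from math import comb

# sum_{k=1}^n (k/n) C(n,k) t^k (1-t)^(n-k) should equal t.
def bin_mean_over_n(n, t):
    return sum(k / n * comb(n, k) * t ** k * (1 - t) ** (n - k)
               for k in range(1, n + 1))

for n in (3, 7, 12):
    for t in (0.2, 0.5, 0.9):
        assert abs(bin_mean_over_n(n, t) - t) < 1e-12
print("identity verified numerically")
```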
|
4,219,492 | <p>This is a long question/explanation, where I more or less have a method to the madness; however, I've stumbled upon how I perform the calculation and I feel it is quite crude. This question is based off <a href="https://math.stackexchange.com/questions/2631501/finding-the-distribution-of-the-sum-of-three-independent-uniform-random-variable#">another post</a> about the sum of three variables. Since I'm am unsure of my method, I didn't want to make this an answer there but rather my own question for critiquing. Reproduced here, consider three random variables <span class="math-container">$X$</span>, <span class="math-container">$Y$</span>, and <span class="math-container">$Z$</span>, all drawn from Uniform<span class="math-container">$(0,1)$</span>. What is the distribution of the sum of the three random variables, <span class="math-container">$W = X+Y+Z$</span>?</p>
<h2>Part 1: Convolution of two variables</h2>
<p>First define the random variable of the sum of two variables, <span class="math-container">$S = X+Y$</span>. We find the pdf of <span class="math-container">$S$</span> by convolution of the two distributions</p>
<p><span class="math-container">$$\begin{align}
f_S(s) &= \int_{-\infty}^\infty\! f_X(s-t) f_Y(t)\ \textrm{d}t,\\
&= \int_0^1\! f_X(s-t)\ \textrm{d}t,\\
&= \int_{s-1}^s f_X(u)\ \textrm{d}u.
\end{align}$$</span></p>
<p>Here I used the fact that <span class="math-container">$f_Y(t) = 1$</span> when <span class="math-container">$0 \leq t \leq 1$</span>, and is otherwise <span class="math-container">$0$</span>. Likewise, we know that <span class="math-container">$f_X(s-t)$</span> has the same properties, such that another bound is <span class="math-container">$0 \leq s-t \leq 1$</span>. This may be transformed into <span class="math-container">$s-1 \leq t \leq s$</span>, which is "coincidentally" the bounds over the variable <span class="math-container">$u$</span> (perhaps this "coincidence" is where my enlightenment awaits).</p>
<p><span class="math-container">$\textbf{Here}$</span> is where I make a leap of faith of sorts. We now have two bounds for for our integral over <span class="math-container">$t$</span></p>
<ol>
<li><span class="math-container">$0 \leq t \leq 1$</span> from the pdf of <span class="math-container">$Y$</span>, and</li>
<li><span class="math-container">$s-1 \leq t \leq s$</span> from the shifted pdf of <span class="math-container">$X$</span>.</li>
</ol>
<p>I therefore, "mix-and-match" my bounds to arrive at</p>
<p><span class="math-container">$$\begin{align}
f_S(s) =
\cases{
\displaystyle\int_0^s \mathrm{d}t = s, & $0 \leq s < 1$\\
\displaystyle\int_{s-1}^1 \mathrm{d}t = 2-s, & $1 \leq s \leq 2$
}.
\end{align}$$</span></p>
<p>I can see the mixing of bounds either from naively swapping them (say the upper bounds). I can also appreciate from visualizing the convolution that there are two "behaviors" and therefore a piecewise function is appropriate. However, this feeling is not systematized and is just an intuition at best. I've seen these bounds explained via proof by "what else could it be", i.e., what makes sense to keep the integral within the domain <span class="math-container">$[0,1]$</span>, where we take that <span class="math-container">$s \in [0,2]$</span> since <span class="math-container">$S = X+Y$</span>... After arriving at the answer that seems fine, but not very satisfactory.</p>
<h3>Aside about how I visualize the convolution</h3>
<p>Imagining the convolution of the two unit boxes, fixing one distribution and sliding the other, there are three "moments"/instances of interest</p>
<ol>
<li>When the distributions begin to overlap, the leading edge of the sliding distribution is at <span class="math-container">$s=0$</span>.</li>
<li>When the distributions completely overlap, <span class="math-container">$s=1$</span>. This moment also coincides with when the distributions begin to diverge.</li>
<li>When the distributions last touch <span class="math-container">$s=2$</span>.</li>
</ol>
<p>Since "moment" two is instantaneous there are really only two "event," when the distributions are increasing their overlapping region and when the are decreasing it. From this, I argue there are two cases: <span class="math-container">$0 \leq s < 1$</span> and <span class="math-container">$1 \leq s \leq 2$</span>.</p>
<h2>Part 2: Convolution of three variables</h2>
<p>We now want <span class="math-container">$W = X+Y+Z = S+Z$</span>, meaning we will use our results from the convolution of two variables to do another convolution. Similar to before</p>
<p><span class="math-container">$$\begin{align}
f_W(w) &= \int_{-\infty}^\infty\! f_S(w-t) f_Z(t)\ \textrm{d}t,\\
&= \int_0^1\! f_S(w-t)\ \textrm{d}t,\\
&= \int_{w-1}^w f_S(u)\ \textrm{d}u.
\end{align}$$</span></p>
<p>Here again I have restricted <span class="math-container">$0 \leq t \leq 1$</span> from the distribution of <span class="math-container">$Z$</span>. Now this time <span class="math-container">$f_S(w-t)$</span> is a little more complicated since it is not a uniform distribution</p>
<p><span class="math-container">$$\begin{align}
f_S(w-t) =
\cases{
w-t, & $0 \leq w-t < 1\ \longrightarrow\ w-1 \leq t \leq w$\\
2-w+t, & $1 \leq w-t \leq 2\ \longrightarrow\ w-2 \leq t \leq w-1$
}.
\end{align}$$</span></p>
<p>Now I can repeat the "mixing-and-matching" of <span class="math-container">$0 \leq t \leq 1$</span> with the bounds of <span class="math-container">$f_S$</span> to arrive at a list of</p>
<ol>
<li><span class="math-container">$0 \leq t \leq w$</span></li>
<li><span class="math-container">$w-1 \leq t \leq 1$</span></li>
<li><span class="math-container">$0 \leq t \leq w-1$</span></li>
<li><span class="math-container">$w-2 \leq t \leq 1$</span>.</li>
</ol>
<p>Now I have to lean on my visualization of the convolution to say there are three cases of interest. These are, with the bounds I associated with them: when the distributions begin to overlap (1), while they overlap (2 and 3), and when they begin to stop overlapping (4). Thus, the first half of <span class="math-container">$f_S$</span> shows up by itself while they begin to overlap, both parts of <span class="math-container">$f_S$</span> contribute while the distributions are completely overlapping, and only the latter part of <span class="math-container">$f_S$</span> contributes when they begin to diverge. This leads me to the result that</p>
<p><span class="math-container">$$\begin{align}
f_W(w) =
\cases{
\displaystyle\int_0^w (w-t)\ \mathrm{d}t = \frac{w^2}{2}, & $0 \leq w < 1$\\
\displaystyle\int_{w-1}^1 (w-t)\ \mathrm{d}t + \int_0^{w-1} (2-w+t)\ \mathrm{d}t= -w^2 + 3w - \frac{3}{2}, & $1 \leq w \leq 2$\\
\displaystyle\int_{w-2}^1 (2-w+t)\ \mathrm{d}t= \frac{(w-3)^2}{2}, & $2 \leq w \leq 3$
}.
\end{align}$$</span></p>
<p>This is the correct answer from the original post. My intuition hasn't gotten me the wrong answer at the very least.</p>
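<p>As an aside, this piecewise density is easy to sanity-check mechanically; a small standard-library sketch confirms continuity at the breakpoints and that it integrates to $1$:</p>

```python
def f_W(w):
    if 0 <= w < 1:
        return w * w / 2
    if 1 <= w < 2:
        return -w * w + 3 * w - 1.5
    if 2 <= w <= 3:
        return (w - 3) ** 2 / 2
    return 0.0

# Continuity at the breakpoints w = 1 and w = 2 (both sides give 1/2):
assert abs(f_W(1 - 1e-9) - f_W(1)) < 1e-6
assert abs(f_W(2 - 1e-9) - f_W(2)) < 1e-6

# Midpoint-rule integral over [0, 3] should come out ~1:
N = 30_000
total = sum(f_W((i + 0.5) * 3 / N) for i in range(N)) * 3 / N
print(round(total, 6))
```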
<h2>Part 3: Convolution of four variables</h2>
<p>I'll skip the majority of the explanation, and summarize we now want <span class="math-container">$K = X+Y+Z+J$</span>, where <span class="math-container">$J$</span> is another random variable drawn from Uniform<span class="math-container">$(0,1)$</span>.</p>
<p><span class="math-container">$$\begin{align}
f_K(k) &= \int_{-\infty}^\infty\! f_W(k-t) f_J(t)\ \textrm{d}t,\\
&= \int_0^1\! f_W(k-t)\ \textrm{d}t,\\
&= \int_{k-1}^k f_W(u)\ \textrm{d}u.
\end{align}$$</span></p>
<p>Writing the shifted convolved three variable distribution and solving for its bounds</p>
<p><span class="math-container">$$\begin{align}
f_W(k-t) =
\cases{
\frac{(k-t)^2}{2}, & $0 \leq k-t < 1\ \longrightarrow k-1 \leq t < k$\\
-(k-t)^2 + 3(k-t) - \frac{3}{2}, & $1 \leq k-t \leq 2\ \longrightarrow k-2 \leq t < k-1$\\
\frac{(k-t-3)^2}{2}, & $2 \leq k-t \leq 3\ \longrightarrow k-3 \leq t < k-2$
}.
\end{align}$$</span></p>
<p>Then setting up the integrals and solving I arrive at</p>
<p><span class="math-container">$$\begin{align}
f_K(k) =
\cases{
\displaystyle\int_0^k \frac{(k-t)^2}{2}\ \mathrm{d}t = \frac{k^3}{6}, & $0 \leq k < 1$\\
\displaystyle\int_{k-1}^1 \frac{(k-t)^2}{2}\ \mathrm{d}t + \int_0^{k-1} \left(-(k-t)^2+3(k-t)-\frac{3}{2}\right)\ \mathrm{d}t= -\frac{k^3}{2} + 2 k^2 - 2k + \frac{2}{3}, & $1 \leq k < 2$\\
\displaystyle\int_{k-2}^1 \left(-(k-t)^2+3(k-t)-\frac{3}{2}\right)\ \mathrm{d}t + \int_0^{k-2} \frac{(k-t-3)^2}{2}\ \mathrm{d}t = \frac{k^3}{2} -4k^2 +10k - \frac{22}{3}, & $2 \leq k < 3$\\
\displaystyle\int_{k-3}^1 \frac{(k-t-3)^2}{2}\ \mathrm{d}t = -\frac{(k-4)^3}{6}, & $3 \leq k \leq 4$\\
}.
\end{align}$$</span></p>
<p>Now this is where my earlier visualization intuition failed me, or at least evidently needed to be tweaked. Rather than the events: beginning to overlap, overlapping, and beginning to diverge; the real importance is the coupling of different regions. Meaning, the first piecewise in the 2 variable sum came from coupling the void (0) with the first half of the uniform distribution, and the void with the second half—this seems to be the biggest subtlety in my altered thinking. However, it is then straightforward to see why we have 3 pieces for the 3 sum, and 4 for the 4 sum, and why each middle section is the sum of two integrals as we are coupling adjacent regions. In fact all pieces are the sum of two integrals, it is just that the first and last section are integrals over 0!</p>
<p>Now I have no reference for whether these are correct, but it is what I get from following my method. Moreover, when I plot it in Mathematica, I get a reasonable looking distribution. By chance, if this process is repeated ad infinitum, does this converge to a scaled Normal distribution (perhaps rather that of convolving the Uniform<span class="math-container">$(-1,1)$</span>)? I'd suppose not exactly, as the convolution of the normal with a uniform is not strictly the normal distribution from what I've seen.</p>
<p><a href="https://i.stack.imgur.com/voXsf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/voXsf.png" alt="Sum of 4 uniform random variables" /></a></p>
<h2>Question/Challenge</h2>
<p>With my thinking made clear, I am having difficulty convolving the triangle distribution with itself, i.e., it should be our 4 uniform variable sum. Now I want to see that <span class="math-container">$K = S+S$</span> such that</p>
<p><span class="math-container">$$\begin{align}
f_K(k) &= \int_{-\infty}^\infty\! f_S(k-t) f_S(t)\ \textrm{d}t,\\
&= \int_0^1\! t f_S(s-t)\ \textrm{d}t + \int_1^2 (2-t) f_S(s-t)\ \textrm{d}t.
\end{align}$$</span></p>
<p>How are the bound picked, and how do I systematically know which integrals combine with one another? I can get the limits <span class="math-container">$0 \leq t \leq 2$</span>, <span class="math-container">$z-1 \leq t \leq z$</span> and <span class="math-container">$z-2 \leq t \leq z-1$</span>. Naively, I can end up with 8 integrals, four for each term in <span class="math-container">$f_K(k)$</span> where I get four from mixing and matching bounds of the shifted triangle distribution. If there are really 8 integrals, which ones belong to which section of the piecewise function?</p>
| Novice C | 81,628 | <p>Since this was an ill posed question, and I tried to throw in a concrete question at the end, I've answered it myself (as to not waste anyone's time further). The 8 integrals I got are:</p>
<p><span class="math-container">$$\begin{align}
1.& \int_0^z t(z-t)\ \textrm{d}t, &0 \leq z < 1,\\
2.& \int_{z-1}^1 t(z-t)\ \textrm{d}t, &1 \leq z < 2,\\
3.& \int_0^{z-1} t(2-z+t)\ \textrm{d}t, &1 \leq z < 2,\\
4.& \int_{z-2}^1 t(2-z+t)\ \textrm{d}t, &2 \leq z < 3,\\
5.& \int_1^z (2-t)(z-t)\ \textrm{d}t, &1 \leq z < 2,\\
6.& \int_{z-1}^2 (2-t)(z-t)\ \textrm{d}t, &2 \leq z < 3,\\
7.& \int_1^{z-1} (2-t)(2-z+t)\ \textrm{d}t, &2 \leq z < 3,\\
8.& \int_{z-2}^2 (2-t)(2-z+t)\ \textrm{d}t, &3 \leq z \leq 4.
\end{align}$$</span></p>
<p>The first four come from the first term, while the last four come from the second term of the convolution. These integrals were got from mixing and matching, and which part of the piecewise function they belonged to were argued from what else could they be, i.e., the only values that are valid. This indeed reduces to the sum of 4 uniform random variables. Hopefully, the pattern is evident to anyone who stumbles across this post hoping to understand convolution.</p>
<p>I suppose I am satisfied that I can do convolutions; however, I will more than happily accept an answer that gives better insight than "what else could they be."</p>
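<p>For what it's worth, a midpoint-rule convolution of the triangle density with itself can be checked against the standard Irwin–Hall closed form for the sum of four Uniform$(0,1)$ variables (the coefficients below are the textbook ones, quoted here as an assumption):</p>

```python
# Triangle density of S = X + Y, the two integrand factors used above.
def tri(t):
    if 0 <= t <= 1:
        return t
    if 1 < t <= 2:
        return 2 - t
    return 0.0

def conv(z, N=4000):
    # midpoint-rule convolution of tri with itself at the point z
    h = 2.0 / N
    return sum(tri(t) * tri(z - t) for t in [(i + 0.5) * h for i in range(N)]) * h

def irwin_hall4(z):
    # closed form for the density of the sum of four U(0,1) variables
    if 0 <= z < 1:
        return z**3 / 6
    if 1 <= z < 2:
        return -z**3 / 2 + 2 * z**2 - 2 * z + 2 / 3
    if 2 <= z < 3:
        return z**3 / 2 - 4 * z**2 + 10 * z - 22 / 3
    if 3 <= z <= 4:
        return -(z - 4)**3 / 6
    return 0.0

for z in (0.5, 1.5, 2.5, 3.5):
    assert abs(conv(z) - irwin_hall4(z)) < 1e-3
print("eight-integral convolution matches Irwin-Hall(4)")
```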
|
321,255 | <p>I've been looking for the definition of projective ideal but haven't found anything, all I've seen is the definition of projective module (but I don't know how these are related, if they are ¿?). Does anyone know a book where I can look up basic facts about projective ideals?</p>
| Boris Novikov | 62,565 | <p>A (left) projective ideal is just a projective submodule of the regular module ${}_RR$ (i.e. the ring $R$ considered as a left $R$-module). So if ${}_RR$ can be decomposed into a direct sum, then every summand is a projective ideal.</p>
|