| qid | question | author | author_id | answer |
|---|---|---|---|---|
3,192,068 | <p><span class="math-container">\begin{matrix}
1 & 2 & 0 & 1 \\
2 & 4 & 1 & 4 \\
3 & 6 & 3 & 9 \\
\end{matrix}</span>
I have tried to transpose it and then reduce it to row echelon form, and I get zeros in the last two rows. But I can't grasp whether I should be doing that or doing it another way.</p>
| Mathbeginner | 506,526 | <p>In general, the length of this curve is given by
<span class="math-container">$\int_{-\pi}^{\pi} \sqrt{(x')^{2}+(y')^{2}}dt$</span>, which in this case leads to
<span class="math-container">$$\int_{-\pi}^{\pi} \sqrt{(x')^{2}+(y')^{2}}dt = \int_{-\pi}^{\pi} \sqrt{(-3\cos(3t))^{2}+(2)^{2}}dt = \int_{-\pi}^{\pi} \sqrt{9\cos^{2}(3t)+4}dt.$$</span> I don't know if you need this precise answer or a rounded value, which any computer could easily do.</p>
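A quick numeric sanity check of the last integral (a Python sketch of my own; a composite Simpson's rule is written out by hand, and the step count is an arbitrary choice):

```python
import math

def simpson(f, a, b, n=10_000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

# Speed integrand sqrt(9 cos^2(3t) + 4) from the answer, over [-pi, pi]
length = simpson(lambda t: math.sqrt(9 * math.cos(3 * t) ** 2 + 4),
                 -math.pi, math.pi)
print(round(length, 4))  # roughly 17.9
```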
|
1,700,608 | <p>In <a href="https://math.stackexchange.com/a/1700505/132192">this</a> answer the value of $1 - \cos(x)$ has to be evaluated in order to find its upper limit, if it exists.</p>
<p>In particular, $x = 2 \pi / n$. The answer is related to the length of a side of a regular $n$-gon inscribed in a circle of unit radius; because the perimeter of the $n$-gon is always less than $2 \pi$, the single side must always be less than $2 \pi / n$.</p>
<p>The inequality</p>
<p>$$1 - \cos (x) \leq \frac{x^2}{2}\tag{1}$$</p>
<p>is used and the proof is completed with </p>
<p>$$2(1 - \cos(x)) \leq (2 \pi / n)^2$$</p>
<p>$$\sqrt{2(1 - \cos(x))} \leq 2 \pi / n$$</p>
<p>But it is well known that the cosine is a function $f(x) \in [-1,1]$, so $1 - \cos (x) \in [0,2]$. By using this information, we would obtain</p>
<p>$$1 - \cos (x) \leq 2\tag{2}$$</p>
<p>The proof would provide</p>
<p>$$2(1 - \cos(x)) \leq 4$$</p>
<p>$$\sqrt{2(1 - \cos(x))} \leq 2$$</p>
<p>which is a completely different result.</p>
<ul>
<li>Why, in that case, is it preferable to use (1) instead of (2)?</li>
<li>How does one choose when it is convenient to use (1) and when to use (2) in a proof?</li>
</ul>
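For what it is worth, the two bounds can be compared numerically. The side of the inscribed $n$-gon equals $\sqrt{2(1-\cos(2\pi/n))} = 2\sin(\pi/n)$, and only bound (1) shrinks with $n$ (a Python sketch of my own; the sampled values of $n$ are arbitrary):

```python
import math

def side(n):
    """Side length of a regular n-gon inscribed in the unit circle."""
    return math.sqrt(2 * (1 - math.cos(2 * math.pi / n)))

for n in (4, 10, 100, 1000):
    bound1 = 2 * math.pi / n   # via (1): sqrt(2(1 - cos x)) <= x, with x = 2*pi/n
    bound2 = 2.0               # via (2): sqrt(2(1 - cos x)) <= 2
    print(n, round(side(n), 6), round(bound1, 6), bound2)
```

Bound (2) stays fixed at $2$, so it says nothing as $n$ grows, while bound (1) tracks the actual side length.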
| ncmathsadist | 4,154 | <p>Let $x\gt 0$. You know that $\sin(x) < x$ for such $x$. Integrating this inequality from $0$ to $x$ gives
$$ 1 - \cos(x) =\int_0^x \sin(t)\, dt < \int_0^x t\,dt = {x^2\over 2}$$</p>
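A quick numeric check of the resulting inequality $1-\cos(x) < x^2/2$ for $x>0$ (a Python sketch; the sample grid is an arbitrary choice of mine):

```python
import math

# Verify 1 - cos(x) < x^2 / 2 on a grid of positive x
for k in range(1, 1001):
    x = k / 100                          # x in (0, 10]
    assert 1 - math.cos(x) < x * x / 2
print("1 - cos(x) < x^2/2 holds on the sampled grid")
```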
|
194,664 | <p>How to generate a list of fixpoint free permutations of n elements in mathematica?</p>
| Roman | 26,598 | <p>As your equation is <code>Sinc[a] == x</code> the formal solution is</p>
<pre><code>A = InverseFunction[Sinc];
</code></pre>
<p>You can plot it with</p>
<pre><code>Plot[A[x], {x, -0.21723362821122166`, 1}]
</code></pre>
<p>but as you see in the result you get a random branch of the solution:</p>
<p><a href="https://i.stack.imgur.com/tj3KK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tj3KK.png" alt="enter image description here"></a></p>
<p>Better to use something numeric: the <span class="math-container">$i^{\text{th}}$</span> branch is found numerically by starting a <code>FindRoot</code> at the quadratic approximation of the relevant branch (only positive branches <span class="math-container">$a>0$</span>):</p>
<pre><code>Clear[B];
B[0, x_?NumericQ] := a /. FindRoot[Sinc[a] == x, {a, Sqrt[6 (1 - x)]}]
B[i_?OddQ, x_?NumericQ] := a /. FindRoot[Sinc[a] == x,
{a, ((3+2i)π((3+2i)^2*π^2-2(6+Sqrt[-12+(3+2i)π(8x+(3+2i)π(2-(3+2i)π*x))])))/(-16+2(3+2i)^2*π^2)}]
B[i_?EvenQ, x_?NumericQ] := a /. FindRoot[Sinc[a] == x,
{a, ((1+2i)π((π+2i*π)^2+2(-6+Sqrt[-12+(1+2i)π((2+4i)π+8x-(π+2i*π)^2*x)])))/(2(-8+(π+2i*π)^2))}]
Table[B[i, 0.03], {i, 0, 10}]
</code></pre>
<blockquote>
<p>{3.04997, 6.47879, 9.14681, 12.9659, 15.2333, 19.4735, 21.298, 26.0288, 27.3139, 32.8091, 33.1041}</p>
</blockquote>
<pre><code>With[{z = 0.03},
Plot[Sinc[a], {a, 0, 35}, GridLines -> {None, {z}},
Epilog -> {Red, Table[Point[{B[i, z], z}], {i, 0, 10}]}]]
</code></pre>
<p><a href="https://i.stack.imgur.com/LWQ9b.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LWQ9b.png" alt="enter image description here"></a></p>
|
194,664 | <p>How to generate a list of fixpoint free permutations of n elements in mathematica?</p>
| cphys | 63,840 | <p>In order to find all of the roots for <span class="math-container">$x$</span> in the range <span class="math-container">$[0,1]$</span> (without placing a limit on the range of <span class="math-container">$a$</span>), you should use <a href="https://reference.wolfram.com/language/ref/Reduce.html" rel="nofollow noreferrer"><code>Reduce</code></a>. Because the user specifically wrote the function as <span class="math-container">$\sin(a)/a = x$</span>, we do not convert the equation to <span class="math-container">$\text{sinc}(a)$</span>, and we specifically exclude the point <span class="math-container">$a=0$</span>, which would put a zero in the denominator. </p>
<pre><code>f[yMax_, x_] := f[yMax, x] =
If[x != 1,
If[x != 0 ,
{ToRules[N[Reduce[Sin[a]/a == x, a, Reals]]]},
DeleteCases[Flatten[Table[ FullSimplify[Solve[Sin[a]/a == 0, a, Reals], a != 0 && C[1] \[Element] Integers] /. C[1] -> iConst, {iConst, -IntegerPart[yMax], IntegerPart[yMax]}], 1], {a -> 0}]],
{}]
</code></pre>
<p>Writing all of the solutions for a particular value of x, as an array of ordered pairs gives,</p>
<pre><code>finalFunction[yMax_, x_] := If[f[yMax, x] != {},
{x, a} /. f[yMax, x],
Nothing]
</code></pre>
<p>We can list all of the roots for values <span class="math-container">$x$</span> in the range <span class="math-container">$[0,1]$</span> to an arbitrary resolution in values of <span class="math-container">$x$</span> using the function,</p>
<pre><code>listAllRoots[yMax_, resolution_] := listAllRoots[yMax, resolution] = SortBy[Flatten[ParallelTable[N[finalFunction[yMax, x]], {x, -1,1,1/resolution}], 1], Last]
</code></pre>
<p>Plotting all of these values at different scales of <span class="math-container">$a(x)$</span> using the function,</p>
<pre><code>finalPlot[yMax_, resolution_] := ListLinePlot[
listAllRoots[yMax,resolution],
AspectRatio -> .75,
PlotRange -> {{Automatic,1.0554}, {-yMax - .1*yMax, +yMax + .1*yMax}},
LabelStyle -> {FontFamily -> "Latex", FontSize -> 25},
FrameLabel -> {"x", "a(x)"},
FrameTicks -> {{Table[Round[i, 1], {i, -yMax, yMax, yMax/3}], None},{Automatic, None}},
PlotTheme -> "Scientific", ImageSize -> 450]
</code></pre>
<p>Here we have plotted all of the roots to the transcendental equation. Note that at <span class="math-container">$x=0$</span> the full solution is <span class="math-container">$a=n\pi$</span> where <span class="math-container">$n$</span> is any positive or negative integer. Thus there are infinite solutions at <span class="math-container">$x=0$</span>, so here we have used the parameter <span class="math-container">$\text{yMax}$</span> to only solve for values <span class="math-container">$a=\{-\text{yMax},\text{yMax}\}$</span>, which are within the plotting window. This value may be arbitrarily adjusted to any value. </p>
<pre><code>GraphicsRow[{finalPlot[3, 500], finalPlot[30, 500], finalPlot[90, 500]}]
</code></pre>
<p><a href="https://i.stack.imgur.com/QK7oc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QK7oc.png" alt="enter image description here"></a></p>
|
2,061,363 | <p>I have the complex power series $ \sum_{k=1}^{\infty}(\frac{z^4}{4} - \frac{\pi}{7})^k$. </p>
<p>Through algebraic manipulation I obtain $ \sum_{k=1}^{\infty}(\frac{1}{4})^k(z^4 - \frac{4}{7}\pi)^k$. I now argue that this is a power series around $\frac{4}{7}\pi$ with radius of convergence $R = 4$, using the root test, asserting all the while that the power $4$ in the argument $z$ doesn't affect those two quantities. However, I'm not sure how to prove or even heuristically show that last bit. In fact, I'm not even sure I'm right in asserting that the power doesn't matter; it's really more of a hunch. Does anyone know how to handle this scenario? </p>
| Learnmore | 294,365 | <p>Put $t=z^4$ so the series is $\sum_{k=1}^\infty \dfrac{1}{4^k}(t-t_0)^k$ where $t_0=\dfrac{4\pi}{7}$</p>
<p>Then the radius of convergence $R=\lim\limits_{k\to\infty} \dfrac{a_k}{a_{k+1}}=4$ where $a_k=\dfrac{1}{4^k}$</p>
<p>Hence the series converges $\forall t$ such that $|t-t_0|<4$, i.e. for all $z$ with $|z^4-t_0|<4$</p>
<p><strong>Note</strong>: the radius of convergence of a power series remains unaltered if you change the point about which the series is given. So here $t_0$ has no role to play. It is needed only when you need the circle of convergence, which is not needed here.</p>
<p>So the radius of convergence is $\sqrt 2$</p>
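Since the series is geometric in $w=\frac{z^4}{4}-\frac{\pi}{7}$, convergence can be sanity-checked numerically against the closed form $\sum_{k\ge1}w^k = \frac{w}{1-w}$ (a Python sketch of my own; the test points are arbitrary):

```python
import cmath

def w_of(z):
    """The common ratio w = z^4/4 - pi/7 of the geometric series."""
    return z ** 4 / 4 - cmath.pi / 7

def partial_sum(z, K=200):
    """Partial sum of w^k for k = 1..K."""
    w = w_of(z)
    return sum(w ** k for k in range(1, K + 1))

z_in = 1.2 + 0.1j              # |w| < 1 here, so the series converges
w = w_of(z_in)
print(abs(partial_sum(z_in) - w / (1 - w)))   # tiny: matches the closed form

z_out = 1.7                    # |w| > 1 here, so the partial sums blow up
print(abs(partial_sum(z_out, 50)) < abs(partial_sum(z_out, 100)))
```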
|
3,102,336 | <p>I have been looking for fixed points of <a href="https://simple.wikipedia.org/wiki/Riemann_zeta_function" rel="nofollow noreferrer">Riemann Zeta function</a> and find something very interesting, it has two fixed points in <span class="math-container">$\mathbb{C}\setminus\{1\}$</span>.</p>
<p>The first fixed point is in the right half-plane, viz. <span class="math-container">$\{z\in\mathbb{C}:Re(z)>1\}$</span>, and it lies precisely on the real axis (value: <span class="math-container">$1.83377$</span> approx.).</p>
<p><strong>Question:</strong> I want to show that Zeta function has no other fixed points in the right half complex plane excluding the real axis, <span class="math-container">$D=\{z\in\mathbb{C}:Im(z)\ne 0,Re(z)>1\}$</span>.</p>
<p><strong>Tried:</strong> In <span class="math-container">$D$</span> the Zeta function is defined as <span class="math-container">$\displaystyle\zeta(s)=\sum_{n=1}^\infty\frac{1}{n^s}$</span>. Suppose it has a fixed point, say <span class="math-container">$z=a+ib\in D$</span>. Then, <span class="math-container">$$\zeta(z)=z\\ \implies\sum_{n=1}^\infty\frac{1}{n^z}=z\\ \implies \sum_{n=1}^\infty e^{-z\log n}=z\\ \implies \sum_{n=1}^\infty e^{-(a+ib)\log n}=a+ib$$</span>
Equating real and imaginary part we get,
<span class="math-container">$$\sum_{n=1}^\infty e^{-a\log n}\cos(b\log n)=a...(1) \\
\sum_{n=1}^\infty e^{-a\log n}\sin(b\log n)=-b...(2)$$</span>
where <span class="math-container">$b\ne 0, a>1$</span>.</p>
<p><strong>Problem:</strong> How am I supposed to show that relation (2) can <strong>NOT</strong> hold?</p>
<p>Any hint/answer/link/research paper/note will be highly appreciated. Thanks in advance.</p>
<p>Please visit <a href="https://math.stackexchange.com/questions/3145277/counting-numbers-of-fixed-point-of-zeta-function-by-argument-principle">this</a>.</p>
| George Lamprou | 689,811 | <p>Hmm... I did a run on my computer because I found your question about fixed points interesting, so:</p>
<p>the only result I got is this, for <span class="math-container">$a=1.8337719154395\cdots$</span> and for <span class="math-container">$b=0$</span>:</p>
<p><span class="math-container">$\zeta(1.8337719154395\cdots)=1.8337719154395\cdots$</span></p>
<p>Note: this is an engineer's approach; I'm no mathematician.</p>
<p>I made this for you <a href="https://www.desmos.com/calculator/hoyjifa8wn" rel="nofollow noreferrer">https://www.desmos.com/calculator/hoyjifa8wn</a></p>
<p>second version added (3/5/2021): <a href="https://www.desmos.com/calculator/t52rv9aevc" rel="nofollow noreferrer">https://www.desmos.com/calculator/t52rv9aevc</a></p>
<p>Also, the code for the real part when a>1 and b=0 (for ζ(a)=a), in the other program I made, was a trick of input and output: if ζ(x0)=x1, then repeat the rule ζ((x0+x1)/2)=x2, again feeding the output back in as input, so that ζ((((x0+x1)/2)+x2)/2)=x3, and so on, n times. The more iterations the better, because it minimizes the ''error'': ζ(s)=s with s ≈ 1.83377265168027... = a. To be more accurate, though, you would have to find a mathematical theorem that does that for you.</p>
<p>I can't find the code I had made, but if I remember correctly it was like this:
<a href="https://imgur.com/wHfvIqp" rel="nofollow noreferrer">https://imgur.com/wHfvIqp</a></p>
<p>For the starting condition a=1 and b=1, using the above rule, ζ(z)=z gives the solution z ≈ -0.295905005575214, but it turns out that b=0 again after many iterations.</p>
<p>For the starting condition a=2 and b=1, ζ(z)=z ≈ 1.83377265168027; it again makes b=0 after many iterations.</p>
<p>After watching it many times with other numbers, it appears that introducing b≠0 makes the system rotate away from the initial conditions, so there is no way for an input with a>1 and b≠0 to stay close to a fixed point; trying to minimize the error between input and output forces b=0. So we get θ=0 with z ≈ 1.83377265168027, and θ=180 degrees with z ≈ -0.295905005575214. There are also infinitely many solutions of this kind when b=0, all lying on a straight line on the x-axis, as long as a>1.</p>
<p>The conclusion I reached is this: if you look for fixed points with z=a+bi, a>1 and b=0, then the solutions are real numbers on the x-axis; but if b≠0 then ζ(z)≠z due to the rotation in the system. So ζ(z)=w is another complex number and w-z≠0 is also a complex number with real and imaginary parts, which means using it in the denominator of a fraction like 1/(ζ(z)-z) is no problem, as long as b≠0 and z is not real in general!</p>
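The input/output averaging trick described above can be sketched in a few lines. The tail-corrected partial sum used for $\zeta(s)$ below is my own assumption (a standard Euler–Maclaurin-style correction), not part of the original program:

```python
def zeta(s, N=2000):
    """Approximate zeta(s) for real s > 1: partial sum plus tail corrections."""
    return (sum(n ** -s for n in range(1, N))
            + N ** (1 - s) / (s - 1)      # integral tail of the series
            + 0.5 * N ** -s)              # first Euler-Maclaurin correction

# Repeatedly replace x by the average of input and output: x <- (x + zeta(x)) / 2
x = 1.5
for _ in range(60):
    x = (x + zeta(x)) / 2
print(x)   # converges to about 1.83377...
```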
|
1,241,695 | <p>I was reading A First Course in Probability by Sheldon Ross. I think I quite understood the below problem but I still feel fuzzy.</p>
<blockquote>
<p><strong>Problem</strong>: In answering a question on a multiple choice test, a student either knows the answer or guesses. Let p be the probability that the student knows the answer and 1-p be the probability that the student guesses. Assume that a student who guesses at the answer will be correct with probability 1/m, where m is the number of multiple choice alternatives. What is the conditional probability that a student knew the answer to a question, given that he or she answered it correctly?</p>
<p><strong>Solution as given in the book</strong>:
Let C be the event that the student answered the question correctly. And
Let K be the event that the student actually knows the answer.
<span class="math-container">$$P(K|C) = \frac{P(KC)}{P(C)}$$</span>
<span class="math-container">$$ = \frac{P(C|K)P(K)}{P(C|K)P(K)+P(C|K^C)P(K^C)}$$</span>
<span class="math-container">$$ = \frac{p}{p+(1/m)(1-p)}$$</span></p>
</blockquote>
<p>Now this seems reasonable; the only confusing part is how:</p>
<ol>
<li>Probability that the student knows the answer <span class="math-container">$= P(KC) = p$</span> but not <span class="math-container">$P(C|K)$</span></li>
<li>While on the other hand the probability that a student who guesses the answer will be correct <span class="math-container">$= P(C|K^C) = 1/m$</span> but this time not <span class="math-container">$P(CK^C)$</span> or <span class="math-container">$P(K^CC)$</span>.</li>
</ol>
<p>I think its just that I am finding it difficult to determine the probabilities relation from the sentence formations.</p>
<ol start="3">
<li>Is there any other simpler, non fuzzy approach to such problems?</li>
</ol>
| paw88789 | 147,810 | <p>Imagine that the test consists of $N$ questions, each with the same parameter $p$ of the student knowing the right answer; and assume that knowing the right answer on any question is independent of knowing the right answer on any other question.</p>
<p>In this scenario, each question will fall into one of three categories: </p>
<p>(A) questions where the student knew the answer (and hence answered correctly);</p>
<p>(B) questions where the student didn't know but answered correctly; and</p>
<p>(C) questions where the student didn't know and answered incorrectly.</p>
<p>We would expect $pN$ questions to fall into category (A); and $(1-p)\cdot N\cdot \frac1m$ to fall into category (B). </p>
<p>Thus we expect $pN$ of the correctly answered questions to have been known. And so the probability that a correctly answered question was one the student knew is
$$\frac{\#\text{known}}{\#\text{answered correctly}}=\frac{\#(A)}{\#(A)+\#(B)}=\frac{pN}{pN+(1-p)N/m}=\frac{p}{p+(1-p)/m}$$ </p>
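This counting picture is easy to check by simulation (a Python sketch of my own; the values of $p$, $m$, and the number of questions are arbitrary choices):

```python
import random

random.seed(0)
p, m, N = 0.6, 4, 200_000        # P(know), number of choices, number of questions

knew_and_correct = correct = 0
for _ in range(N):
    knew = random.random() < p
    if knew or random.random() < 1 / m:   # guessers are right with probability 1/m
        correct += 1
        knew_and_correct += knew
print(knew_and_correct / correct)         # simulated P(K | C)
print(p / (p + (1 - p) / m))              # the formula's value
```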
|
1,241,695 | <p>I was reading A First Course in Probability by Sheldon Ross. I think I quite understood the below problem but I still feel fuzzy.</p>
<blockquote>
<p><strong>Problem</strong>: In answering a question on a multiple choice test, a student either knows the answer or guesses. Let p be the probability that the student knows the answer and 1-p be the probability that the student guesses. Assume that a student who guesses at the answer will be correct with probability 1/m, where m is the number of multiple choice alternatives. What is the conditional probability that a student knew the answer to a question, given that he or she answered it correctly?</p>
<p><strong>Solution as given in the book</strong>:
Let C be the event that the student answered the question correctly. And
Let K be the event that the student actually knows the answer.
<span class="math-container">$$P(K|C) = \frac{P(KC)}{P(C)}$$</span>
<span class="math-container">$$ = \frac{P(C|K)P(K)}{P(C|K)P(K)+P(C|K^C)P(K^C)}$$</span>
<span class="math-container">$$ = \frac{p}{p+(1/m)(1-p)}$$</span></p>
</blockquote>
<p>Now this seems reasonable; the only confusing part is how:</p>
<ol>
<li>Probability that the student knows the answer <span class="math-container">$= P(KC) = p$</span> but not <span class="math-container">$P(C|K)$</span></li>
<li>While on the other hand the probability that a student who guesses the answer will be correct <span class="math-container">$= P(C|K^C) = 1/m$</span> but this time not <span class="math-container">$P(CK^C)$</span> or <span class="math-container">$P(K^CC)$</span>.</li>
</ol>
<p>I think its just that I am finding it difficult to determine the probabilities relation from the sentence formations.</p>
<ol start="3">
<li>Is there any other simpler, non fuzzy approach to such problems?</li>
</ol>
| wltrup | 232,040 | <p>Apologies for the long post but this material can be confusing so I tried to be clear by explaining every step.</p>
<ol>
<li>Probability that the student knows the answer $= P(KC) = p$ but not $P(C|K)$.</li>
</ol>
<p>It's not. The probability that the student knows the answer is $P(K)$, not $P(KC)$. The probability of the joint events <em>student knows the answer</em> and <em>student answers correctly</em> is the numerator in the first equality, $P(KC)$, and that equals $P(CK)$ which, in turn, equals the probability that she answers correctly <em>given</em> that she knows the answer, $P(C|K)$, times the probability that she knows the answer, $P(K)$. That's the numerator in the second equality. Now think about it for a moment. What should the probability that she answers correctly <em>given</em> that she knows the answer, $P(C|K)$, be? If she knows the answer then she'll always answer correctly, right? That means $P(C|K) = 1$.</p>
<ol start="2">
<li>While on the other hand the probability that a student who guesses the answer will be correct $=P(C|K^c)=1/m$ but this time not $P(CK^c)$ or $P(K^cC)$.</li>
</ol>
<p>I think you're confusing the roles of the various probabilities. Think of it this way. The probability of the joint events <em>knows the answer</em> (K) and <em>answers correctly</em> (C) is $P(KC)$ and that's also equal to $P(CK)$, since the order doesn't matter (both events happen simultaneously). So,</p>
<p>$P(KC) = P(CK)$</p>
<p>Now, express the two probabilities above as</p>
<p>$P(K|C)P(C) = P(C|K)P(K)$</p>
<p>The LHS is the probability of knowing the answer given that she answered correctly times the unconditional probability of answering correctly. The RHS is the probability of answering correctly given that she knows the answer times the unconditional probability of knowing the answer.</p>
<p>What actual information about the various probabilities do we know?</p>
<p>We know that the (unconditional) probability the students knows the answer, $P(K)$, is $p$, so $P(K) = p$. We also know that a student who guesses at the answer will be correct (with probability $P(C|K^c)$, the probability of being correct given that she does <strong>not</strong> know the answer) with probability $1/m$, so $P(C|K^c) = 1/m$. Finally, we also know that she will answer correctly every time she knows the answer, so $P(C|K) = 1$.</p>
<p>What we want to compute is $P(K|C)$, the probability of knowing the answer given that she answered correctly. What we don't have is $P(C)$.</p>
<p>But $P(C)$, being the unconditional probability of answering correctly, is</p>
<p>$P(C) = P(C|K)P(K) + P(C|K^c)P(K^c)$</p>
<p>What does this really say? Since she either knows the answer or she doesn't, you can decompose the unconditional probability that she answered correctly in only two ways:</p>
<ol>
<li>She knows the answer (with probability $P(K)$) and answered correctly (with probability $P(C|K)$)</li>
</ol>
<p>or</p>
<ol start="2">
<li>She <strong>doesn't</strong> know the answer (with probability $P(K^c)$) but still answered correctly (with probability $P(C|K^c)$).</li>
</ol>
<p>These two events are disjoint, so $P(C)$ is the sum of the probabilities of those two events. Each of them, in turn, has a probability equal to the product of the probabilities I mentioned in each case.</p>
<p>Putting it all together, we have</p>
<p>$P(K|C) = \frac{\displaystyle P(C|K)P(K)}{\displaystyle P(C)} = \frac{\displaystyle P(C|K)P(K)}{\displaystyle P(C|K)P(K) + P(C|K^c)P(K^c)}$</p>
<p>Finally, plug in the values $P(K)=p$, $P(C|K^c) = 1/m$, $P(C|K) = 1$, and $P(K^c) = 1-p$ to get</p>
<p>$P(K|C) = \frac{\displaystyle p}{\displaystyle p + (1-p)/m}$.</p>
|
1,265,026 | <p>Suppose I have some vector field
\begin{align}
\vec{F}\left(x\left(t\right),y\left(t\right),z\left(t\right)\right)&=G\textbf{i}+H\textbf{j}+T\textbf{k}.\tag{1}
\end{align}
Would it be correct for me to say
\begin{align}
\mathbb{R}^3\overset{\vec{F}}{\longrightarrow}\mathbb{R}^3\;?\tag{2}
\end{align}</p>
| Thomas | 26,188 | <p>Writing
$$
\mathbb{R}^3\overset{\vec{F}}{\longrightarrow}\mathbb{R}^3
$$
I would think that $\vec{F}$ is a function with domain $\mathbb{R}^3$, and it looks like you have a function with domain $\mathbb{R}$. So the notation isn't good. I also don't think it is a good idea to write $\vec{F}_t: \mathbb{R}^3 \to \mathbb{R}^3$, because this makes it look as if for each fixed $t$ you get a function with domain $\mathbb{R}^3$. So I would write, for example, $G: \mathbb{R}^3 \to \mathbb{R}$. Again, writing $G_t$ makes it look like you have a function for each fixed $t$.</p>
<p>I am guessing you want
$$
\mathbb{R}\overset{\vec{F}}{\longrightarrow}\mathbb{R}^3.
$$
And I am guessing that the $x, y$, and $z$ are then functions from $\mathbb{R}$ to $\mathbb{R}$. If you want to have the functions $G, H$ and $T$, then these would be functions from $\mathbb{R}^3$ to $\mathbb{R}$.</p>
|
1,271,942 | <p>I am a little bit confused with the definition of finitely presented modules. In Lang's <em>Algebra</em> he defines a module <span class="math-container">$M$</span> to be finitely presented if and only if there is a exact sequence <span class="math-container">$F'\to F\to M \to 0$</span> such that both <span class="math-container">$F', F$</span> are free. However the standard definition I have seen elsewhere only demands <span class="math-container">$F'$</span> be finitely generated. Are these two definitions equivalent?</p>
<p>Looking at the situation of a non-principal ideal of a ring, say <span class="math-container">$(x, y)$</span> of <span class="math-container">$\mathbb{R}[x, y]$</span>, it appears that this is finitely presented, by the usual definition, but I don't see any way of making it finitely presented by Lang's definition.</p>
| Martin Brandenburg | 1,650 | <p>If $F' \to F \to M \to 0$ is exact and $F'$ is finitely generated, choose some finitely generated free module $F''$ which maps <em>onto</em> $F'$. Then $F'' \to F \to M \to 0$ is exact.</p>
<p>This shows: A finitely generated module is finitely related iff it is finitely presented.</p>
<p>Of course, this fails for modules which are not finitely generated.</p>
|
2,207,572 | <p>Imagine an undirected graph $G = (V,E)$ with $|V| = n$ nodes. Its unweighted edges $E$ are the union of $h$ random Hamiltonian cycles through all nodes, each generated iid uniformly at random from the set of all Hamiltonian cycles.</p>
<p>What is the expected diameter $D$ of $G$?</p>
<p>The case $h=1$ is trivial and not interesting.</p>
<p>Clearly, $D$ grows strictly monotonically with $n$ as well as with $h^{-1}$. However, I'm not sure of the exact relationship of these variables. I suspect a relationship along the lines of $D = O(\log(n)/h)$.</p>
| D.W. | 14,578 | <p>Purely <em>heuristically</em>, I expect the answer to be $O(\log(n)/\log(h))$.</p>
<p>Why? We can imagine that each vertex has an edge to $2h$ randomly chosen other vertices. Then heuristically we can imagine that there are about $(2h)^d$ vertices at distance $\le d$ from a fixed vertex $v$ (as long as $(2h)^d$ is small compared to $n$). Thus if $(2h)^d \approx n$, we can expect that any fixed pair of vertices $v,w$ are likely connected by some path of length $\le d$. This equation is satisfied when $d \approx \log_{2h}(n) \sim \log(n)/\log(h)$. When $d$ is a small constant factor larger than that, we can heuristically expect there to be an overwhelming probability that any fixed pair of vertices $v,w$ are connected by a path of length $\le d$. Taking a union bound over all pairs of vertices, we can expect that there is $d=O(\log(n)/\log(h))$ such that with overwhelming probability the diameter will be $\le d$.</p>
<p>This is not a proof -- this is just a hand-wavy back-of-the-envelope heuristic estimate.</p>
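The heuristic is easy to probe by simulation: build the union of $h$ random Hamiltonian cycles from random permutations and compute the diameter by BFS (a Python sketch of my own; $n$, $h$, and the seed are arbitrary choices):

```python
import random
from collections import deque

random.seed(1)
n, h = 200, 3

# Union of h uniformly random Hamiltonian cycles on n vertices
adj = [set() for _ in range(n)]
for _ in range(h):
    perm = list(range(n))
    random.shuffle(perm)                      # a uniform random cyclic order
    for i in range(n):
        a, b = perm[i], perm[(i + 1) % n]
        adj[a].add(b)
        adj[b].add(a)

def eccentricity(src):
    """Largest BFS distance from src (the graph is connected: each cycle spans it)."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        v = queue.popleft()
        for u in adj[v]:
            if u not in dist:
                dist[u] = dist[v] + 1
                queue.append(u)
    return max(dist.values())

diameter = max(eccentricity(v) for v in range(n))
print(diameter)   # small: on the order of log(n)/log(2h)
```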
|
2,207,572 | <p>Imagine an undirected graph $G = (V,E)$ with $|V| = n$ nodes. Its unweighted edges $E$ are the union of $h$ random Hamiltonian cycles through all nodes, each generated iid uniformly at random from the set of all Hamiltonian cycles.</p>
<p>What is the expected diameter $D$ of $G$?</p>
<p>The case $h=1$ is trivial and not interesting.</p>
<p>Clearly, $D$ grows strictly monotonically with $n$ as well as with $h^{-1}$. However, I'm not sure of the exact relationship of these variables. I suspect a relationship along the lines of $D = O(\log(n)/h)$.</p>
| Misha Lavrov | 383,078 | <p>At least for <span class="math-container">$h$</span> constant, taking the union of <span class="math-container">$h$</span> uniformly random Hamiltonian cycles is maybe kind of equivalent to taking a uniformly random <span class="math-container">$2h$</span>-regular graph, whose properties as <span class="math-container">$n \to\infty$</span> we know quite well.</p>
<p>One result in this direction is the following. Let <span class="math-container">$\mathcal G_{n,d}$</span> denote the uniform probability space over random <span class="math-container">$d$</span>-regular graph on <span class="math-container">$n$</span> vertices. By <a href="http://www.sciencedirect.com/science/article/pii/S0095895600919919" rel="nofollow noreferrer">a result of Kim and Wormald</a>, we have:</p>
<blockquote>
<p>If <span class="math-container">$d\ge4$</span> is even, then <span class="math-container">$G \in \mathcal G_{n,d}$</span> a.a.s. (asymptotically almost surely) has a complete Hamiltonian decomposition.</p>
</blockquote>
<p>In other words, with probability tending to <span class="math-container">$1$</span> as <span class="math-container">$n \to\infty$</span>, a uniformly random <span class="math-container">$2h$</span>-regular graph is the union of <span class="math-container">$h$</span> edge-disjoint Hamiltonian cycles.</p>
<p>Of course, if we just take <span class="math-container">$h$</span> uniformly random Hamiltonian cycles, they will probably not be disjoint. But they are not too far off either. If <span class="math-container">$X_{ij}$</span> is the number of edges shared between the <span class="math-container">$i$</span>-th and <span class="math-container">$j$</span>-th Hamiltonian cycle, then <span class="math-container">$X_{ij} \sim \operatorname{Poisson}(2)$</span> asymptotically. So as long as <span class="math-container">$h$</span> is constant, the number of overlapping edges is <span class="math-container">$O(1)$</span> a.a.s., and with constant probability there are none.</p>
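The $\operatorname{Poisson}(2)$ claim for overlaps matches a quick simulation: each cycle has $n$ edges, and a fixed edge lies in a uniform random Hamiltonian cycle with probability $n/\binom{n}{2} = \frac{2}{n-1}$, so the expected overlap is $n\cdot\frac{2}{n-1}\approx 2$ (a Python sketch of my own; $n$ and the trial count are arbitrary):

```python
import random

random.seed(0)

def cycle_edges(n):
    """Edge set of a uniformly random Hamiltonian cycle on n vertices."""
    p = list(range(n))
    random.shuffle(p)
    return {frozenset((p[i], p[(i + 1) % n])) for i in range(n)}

n, trials = 50, 4000
mean_shared = sum(len(cycle_edges(n) & cycle_edges(n))
                  for _ in range(trials)) / trials
print(mean_shared)   # close to 2, consistent with the Poisson(2) heuristic
```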
<p>Another reason not to care about the overlaps is that I'm pretty sure that a different result is also true: if <span class="math-container">$\mathcal G_{n,d}'$</span> is the corresponding probability space to <span class="math-container">$\mathcal G_{n,d}$</span> of <span class="math-container">$d$</span>-regular loopless multigraphs (allowing parallel edges), then for even <span class="math-container">$d$</span>, <span class="math-container">$G \in \mathcal G_{n,d}'$</span> a.a.s. has a decomposition into Hamiltonian cycles that are no longer edge-disjoint. (The paper above mentions this for <span class="math-container">$d=4$</span>, but doesn't say anything one way or the other about larger <span class="math-container">$d$</span>; I think the same methods would solve that problem.)</p>
<p>Since all unions of <span class="math-container">$h$</span> Hamiltonian cycles are equally probable outcomes of sampling from <span class="math-container">$\mathcal G_{n,2h}'$</span>, this would tell us at results true a.a.s. of <span class="math-container">$\mathcal G_{n,2h}'$</span> are also true a.a.s. of this random graph model. This is nice, because many proofs about <span class="math-container">$\mathcal G_{n,d}$</span> go through multigraphs first anyway, and then take into account the probability that the graph is simple. In particular, this is true of the result below.</p>
<p><a href="https://link.springer.com/article/10.1007%2FBF02579310" rel="nofollow noreferrer">A result of Bollobás and de la Vega</a> gets the following bounds on the diameter of <span class="math-container">$\mathcal G_{n,r}$</span> (switching notation, they use <span class="math-container">$r$</span> for degree):</p>
<blockquote>
<p><strong>Theorem 1.</strong> Let <span class="math-container">$r \ge 3$</span> and <span class="math-container">$\epsilon>0$</span> be fixed and define <span class="math-container">$d=d(n)$</span> as the least integer satisfying <span class="math-container">$$(r-1)^{d-1} \ge (2+\epsilon) rn \log n.$$</span> Then a.e. <span class="math-container">$r$</span>-regular graph has diameter at most <span class="math-container">$d$</span>.</p>
<p><strong>Theorem 3.</strong> The diameter of a.e. <span class="math-container">$r$</span>-regular graph of order <span class="math-container">$n$</span> is at least <span class="math-container">$$\lfloor \log_{r-1} n\rfloor + \left\lfloor\log_{r-1} \log n - \log_{r-1}\frac{6r}{r-2} \right\rfloor + 1.$$</span></p>
</blockquote>
<p>Set <span class="math-container">$r = 2h$</span> and that's that.</p>
|
2,634,791 | <blockquote>
<p>How can I show that the map $f: GL_n(\mathbb R)\to GL_n(\mathbb R)$ defined by $f(A)=A^{-1}$ is continuous?</p>
</blockquote>
<p>The space $GL_n(\mathbb R)$ is given the operator norm and so I want to show for all $\epsilon$ there exists $\delta$ such that $\|A-B\|<\delta \implies \|A^{-1}-B^{-1}\|<\epsilon$.</p>
| C. Falcon | 285,416 | <p>One has a formula for the inverse, namely:
$$A^{-1}=\frac{1}{\det(A)}\operatorname{adj}(A),$$
where $\operatorname{adj}(A)$ is the <a href="https://en.wikipedia.org/wiki/Adjugate_matrix" rel="nofollow noreferrer">adjugate</a> of $A$. Whence, each entry of $A^{-1}$ is a rational function of the entries of $A$ with nonvanishing denominator $\det(A)$, hence continuous on $GL_n(\mathbb R)$.</p>
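Concretely, in the $2\times2$ case the adjugate formula makes each entry of the inverse an explicit quotient of polynomials in the entries; a small numeric illustration of the resulting continuity (a Python sketch; the helper name and the perturbation size are my own choices):

```python
def inv2(a, b, c, d):
    """Inverse of [[a, b], [c, d]] via the adjugate formula adj(A)/det(A)."""
    det = a * d - b * c
    return (d / det, -b / det, -c / det, a / det)

A      = (2.0, 1.0, 1.0, 1.0)
A_pert = (2.0 + 1e-8, 1.0, 1.0, 1.0)   # a tiny perturbation of A
change = max(abs(x - y) for x, y in zip(inv2(*A), inv2(*A_pert)))
print(inv2(*A))   # (1.0, -1.0, -1.0, 2.0)
print(change)     # tiny: the inverse moved only slightly
```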
<hr>
<p>Another approach could be the following, though it is not exactly rigorous as it assumes that if $(A_k)_k$ converges, then the sequence $({A_k}^{-1})_k$ converges too.</p>
<p>Let $A$ be invertible and let $(A_k)_k$ be a sequence of invertible matrices converging toward $A$, then one has:
$${A_k}{A_k}^{-1}=I_n,$$
therefore, by bilinearity of the matrix product and hence its continuity, one has:
$$A\lim_{k\to+\infty}{A_k}^{-1}=I_n,$$
so that by uniqueness of the inverse, one has $\lim\limits_{k\to+\infty}{A_k}^{-1}=A^{-1}$, which is the desired result.</p>
|
1,993,217 | <p>Let $\left\{f_{n}\right\}$ be a sequence of equicontinuous functions where $f_n: [0,1] \rightarrow \mathbf{R}$. If $\{f_n(0)\}$ is bounded, why is $\left\{f_{n}\right\}$ uniformly bounded?</p>
| Eugene Zhang | 215,082 | <p>Since $[0,1]$ is compact, $f_n$ is uniformly continuous. So given $\epsilon>0$, there is a $\delta'>0$ such that for any $x,y\in [0,1]$ with $|x-y|<\delta'$,
$$
|f_n(x)-f_n(y)|<\epsilon\tag1
$$
Since $f_n$ is equicontinuous, $(1)$ holds for all $n$. </p>
<p>Take $\delta=\delta'/2$. The intervals $(x-\delta, x+\delta)$, $x\in[0,1]$, form an open cover of $[0,1]$; since $[0,1]$ is compact, finitely many of them, $\bigcup_{i\in \{1, \cdots, l\}}(x_i-\delta, x_i+\delta)$, cover $[0,1]$. Say $|f_n(0)|<M'$ for all $n$, and assume $0\in (x_1-\delta, x_1+\delta)$. Then by $(1)$, for any $x\in (x_1-\delta, x_1+\delta)$ and any $n$
$$
|f_n(x)|<|f_n(0)|+\epsilon<M'+\epsilon
$$
Similarly for any $x\in (x_2-\delta, x_2+\delta)$ and any $n$
$$
|f_n(x)|<|f_n(y)|+\epsilon<M'+2\epsilon
$$
where $y\in (x_1-\delta, x_1+\delta)$. Repeat this process and we have, for any $x\in (x_l-\delta, x_l+\delta)$ and any $n$
$$
|f_n(x)|<|f_n(y)|+\epsilon<M'+l\epsilon
$$
where $y\in (x_{l-1}-\delta, x_{l-1}+\delta)$.</p>
<p>Take $M=M'+l\epsilon$. Then $|f_n(x)|<M$ on $[0,1]$ for any $n$. So $\{f_n\}$ is uniformly bounded.</p>
|
130,028 | <p>I often want to have the same code at the beginning of every new notebook. Is it possible to configure Mathematica so that, whenever you create a new notebook, some user-defined code is automatically included with the new document?</p>
<p>E.g. commonly used plot configurations, packages, directory setting etc.</p>
<pre><code>Needs["PolygonPlotMarkers"]
Needs["TwoAxisListPlot"]
fm[name_, size_: 7] :=
Graphics[{EdgeForm[], PolygonMarker[name, Offset[size]]}]
PlotStyles = {Frame -> True, FrameStyle -> Directive[Black, Thin],
Axes -> False, ImageSize -> 350, AspectRatio -> 1.0};
</code></pre>
<p>At the beginning of every new notebook.</p>
| c186282 | 4,515 | <p>Place the code in your init.m file. It will then be run each time the kernel is started. On Linux the init.m file is in ~/.Mathematica/Kernel/. I forget where it is on Windows but just look at the output of <code>$Path</code> and you will be able to find the "Kernel" directory within your account directories. </p>
|
3,883,164 | <p>I evaluated the following limit with a Taylor series, but for practice I am trying to evaluate it using L'Hopital's Rule:</p>
<p><span class="math-container">$$\lim_{x\to 0}\frac{\sinh x-x\cosh x+\frac{x^3}3}{x^2\tan^3x}=\lim_{x\to0}\cfrac{f(x)}{g(x)}$$</span>
<span class="math-container">$f(x)=\sinh x-x\cosh x+\frac{x^3}3 ,f(0)=0$</span></p>
<p><span class="math-container">$f'(x)=-x\sinh x+x^2, f'(0)=0$</span></p>
<p><span class="math-container">$f''(x)=-\sinh x-x\cosh x+2x, f''(0)=0$</span></p>
<p><span class="math-container">$f'''(x)=-2\cosh x-x\sinh x+2, f'''(0)=0$</span></p>
<p>It seems it is going to be <span class="math-container">$0$</span> for further derivatives.</p>
<p>Also for <span class="math-container">$g(x)=x^2\tan^3x$</span>, Wolfram Alpha gives this result:</p>
<p><a href="https://i.stack.imgur.com/vPSik.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vPSik.png" alt="enter image description here" /></a></p>
<p>It seems we have <span class="math-container">$g^{(n)}(0)=0$</span> too.</p>
<p>So Is there any way to evaluate the limit applying L'Hopital's Rule?</p>
| Community | -1 | <p>The denominator is of the fifth degree (after linearizing the tangent), so if there is a finite answer you will need five successive applications of L'Hospital.</p>
<p>The numerator is easy:</p>
<p><span class="math-container">$$\sinh x-x\cosh x+\frac{x^3}3,$$</span></p>
<p><span class="math-container">$$-x\sinh x+x^2,$$</span></p>
<p><span class="math-container">$$-\sinh x-x\cosh x+2x,$$</span></p>
<p><span class="math-container">$$-2\cosh x-x\sinh x+2$$</span></p>
<p><span class="math-container">$$-3\sinh x-x\cosh x,$$</span>
<span class="math-container">$$-4\cosh x-x\sinh x.$$</span></p>
<p>Every time, you need to check that the expression tends to zero (otherwise the limit will not exist because of the zero denominator).</p>
<p>For the denominator, it is really worth to rewrite</p>
<p><span class="math-container">$$x^2\tan^3x=x^5\frac{\tan^3x}{x^3}$$</span> and take the fraction away. Then the fifth derivative is <span class="math-container">$5!$</span> and the requested ratio</p>
<p><span class="math-container">$$-\frac1{30}.$$</span></p>
<hr />
<p>For info, keeping the denominator as is, we get</p>
<p><span class="math-container">$$2520x^2(\tan(x))^8+6600x^2(\tan(x))^6+3600x(\tan(x))^7+36x^2(\sec(x))^2(\tan(x))^2+\\
5772x^2(\tan(x))^4+8160x(\tan(x))^5+1200(\tan(x))^6+120x\tan(x)(\sec(x))^2+
\\120x^2(\sec(x))^2+1692x^2(\tan(x))^2+5640x(\tan(x))^3+2280(\tan(x))^4+
\\1080x\tan(x)+120(\sec(x))^2+1080(\tan(x))^2$$</span></p>
<p>and the only nonzero term is <span class="math-container">$120\sec^2x$</span>.</p>
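<p>A numerical spot check (a sketch, not part of the answer): evaluating the original quotient near $0$ agrees with the value $-\frac{1}{30}$ obtained above.</p>

```python
import math

def quotient(x):
    # The original expression, evaluated directly near x = 0.
    num = math.sinh(x) - x * math.cosh(x) + x ** 3 / 3
    den = x ** 2 * math.tan(x) ** 3
    return num / den

print(quotient(0.01))   # close to -1/30 = -0.0333...
```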
|
1,392,576 | <p>How can the following question be solved algebraically?</p>
<p>A certain dealership has a total of 100 vehicles consisting of cars and trucks. 1/2 of the cars are used and 1/3 of the trucks are used. If there are 42 used vehicles altogether, how many trucks are there?</p>
| haqnatural | 247,767 | <p>$$\lim _{ x\rightarrow 0 }{ \frac { { x }^{ 2 }+2\sqrt { { x }^{ 2 } } }{ x } } =\lim _{ x\rightarrow 0 }{ \frac { { x }^{ 2 }+2\left| x \right| }{ x } } =\lim _{ x\rightarrow 0 }{ \left( x+2\frac { \left| x \right| }{ x } \right) } \\ \lim _{ x\rightarrow 0- }{ \left( x+2\frac { \left| x \right| }{ x } \right) =-2 } \\ \\ \lim _{ x\rightarrow 0+ }{ \left( x+2\frac { \left| x \right| }{ x } \right) =2 } $$</p>
|
1,392,576 | <p>How can the following question be solved algebraically?</p>
<p>A certain dealership has a total of 100 vehicles consisting of cars and trucks. 1/2 of the cars are used and 1/3 of the trucks are used. If there are 42 used vehicles altogether, how many trucks are there?</p>
| LeastSquaresWonderer | 233,263 | <p>$$ \lim_{ x \to 0} \frac{x^2 + 2\sqrt{x^2}}{x}. $$</p>
<p>$$ \lim_{ x \to 0} \frac{x^2 + 2|x|}{x}. $$</p>
<p>We will look at the limits aproaching 0 from both sides</p>
<p>from the left -> 0^-: since $|x| = -x$ for $x<0$,
$$ \lim_{ x \to 0^-} \frac{x^2 - 2x}{x}. $$</p>
<p>$$ \lim_{ x \to 0^-} (x - 2) = -2 $$</p>
<p>from the right -> 0^+
$$ \lim_{ x \to 0^+} \frac{x^2 + 2x}{x}. $$</p>
<p>$$ \lim_{ x \to 0^+} (x + 2) = 2 $$</p>
<p>Since $-2 \neq 2$, the limit does not exist.</p>
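<p>A quick numerical illustration of the two one-sided limits (a sketch, not part of the original answer):</p>

```python
def g(x):
    # The original function, with sqrt(x^2) written as |x|.
    return (x * x + 2 * abs(x)) / x

print(g(1e-8), g(-1e-8))   # approaches 2 from the right, -2 from the left
```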
|
73,238 | <p>How can I calculate the solid angle that a sphere of radius R subtends at a point P? I would expect the result to be a function of the radius and the distance (which I'll call d) between the center of the sphere and P. I would also expect this angle to be 4π when d < R, and 2π when d = R, and less than 2π when d > R.</p>
<p>I think what I really need is some pointers on how to solve the integral (taken from <a href="http://en.wikipedia.org/wiki/Solid_angle" rel="nofollow">wikipedia</a>) $\Omega = \iint_S \frac { \vec{r} \cdot \hat{n} \,dS }{r^3}$ given a parameterization of a sphere. I don't know how to start to set this up so any and all help is appreciated!</p>
<p>Ideally I would like to derive the answer from this surface integral, not geometrically, because there are other parametric surfaces I would like to know the solid angle for, which might be difficult if not impossible to solve without integration.</p>
<p>*I reposted this from mathoverflow because this isn't a research-level question.</p>
| J. M. ain't a mathematician | 498 | <p>For this answer, I make a few (not too drastic) assumptions:</p>
<ol>
<li><p>The circle you are interested in is centered at the origin (thus, the plane your circle lies in has to pass through the origin).</p></li>
<li><p>The plane your circle lies in is already in <a href="http://mathworld.wolfram.com/HessianNormalForm.html" rel="nofollow">Hessian normal form</a>, $\mathbf{\hat n}\cdot\langle x\;y\;z\rangle=0$, where $\mathbf{\hat n}$ is a <em>unit</em> normal vector to your plane.</p></li>
<li><p>The plane is not any one of the coordinate planes (and in those cases, you wouldn't need to go through this route).</p></li>
</ol>
<p>What you can use to derive the parametric equations for your circle is the <a href="http://mathworld.wolfram.com/RodriguesRotationFormula.html" rel="nofollow">Rodrigues rotation formula</a>, which is a rotation matrix used for rotating by an angle $\varphi$ about an arbitrary axis $\mathbf{\hat n}=\langle n_x\;n_y\;n_z\rangle$. Letting</p>
<p>$$\mathbf W=\begin{pmatrix}0&-n_z&n_y\\n_z&0&-n_x\\-n_y&n_x&0\end{pmatrix}$$</p>
<p>the Rodrigues rotation matrix is</p>
<p>$$\mathbf R(\varphi)=\mathbf I+\sin\,\varphi\mathbf W+2\sin^2\frac{\varphi}{2}\mathbf W^2$$</p>
<p>Thus, to assemble the parametric equations for your circle: pick any point in your plane whose distance from the origin is equal to the radius of your circle, and then apply the Rodrigues rotation formula to that point. The axis to use is the unit normal vector in the Hessian normal form of your plane, and the rotation angle is the varying parameter in your parametric equations. That is, if $\mathbf p$ is a point at a distance $r$ from the origin, and satisfies $\mathbf{\hat n}\cdot\mathbf p=0$, then $\mathbf r(t)=\mathbf R(t)\cdot\mathbf p$ is the vector equation for your circle.</p>
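<p>A sketch of the recipe above in code (the helper names are mine, not the answer's): build $\mathbf R(\varphi)$ for a unit normal $\mathbf{\hat n}$ and rotate a point $\mathbf p$ with $|\mathbf p|=r$ and $\mathbf{\hat n}\cdot\mathbf p=0$. The image traces the circle, staying at distance $r$ from the origin and inside the plane.</p>

```python
import math

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def mat_vec(A, v):
    return [sum(A[i][k] * v[k] for k in range(3)) for i in range(3)]

def rodrigues(n, phi):
    """R(phi) = I + sin(phi) W + 2 sin^2(phi/2) W^2 for a unit axis n."""
    nx, ny, nz = n
    W = [[0.0, -nz, ny], [nz, 0.0, -nx], [-ny, nx, 0.0]]
    W2 = mat_mul(W, W)
    s, c = math.sin(phi), 2.0 * math.sin(phi / 2.0) ** 2
    I = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
    return [[I[i][j] + s * W[i][j] + c * W2[i][j] for j in range(3)]
            for i in range(3)]

# Plane through the origin with unit normal (1,1,1)/sqrt(3); radius r = 2.
n = [1.0 / math.sqrt(3.0)] * 3
p = [math.sqrt(2.0), -math.sqrt(2.0), 0.0]      # |p| = 2 and n . p = 0

for t in (0.3, 1.7, 4.0):
    q = mat_vec(rodrigues(n, t), p)
    radius = math.sqrt(sum(x * x for x in q))
    off_plane = sum(ni * qi for ni, qi in zip(n, q))
    print(radius, off_plane)   # radius stays near 2, off_plane near 0
```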
|
73,238 | <p>How can I calculate the solid angle that a sphere of radius R subtends at a point P? I would expect the result to be a function of the radius and the distance (which I'll call d) between the center of the sphere and P. I would also expect this angle to be 4π when d < R, and 2π when d = R, and less than 2π when d > R.</p>
<p>I think what I really need is some pointers on how to solve the integral (taken from <a href="http://en.wikipedia.org/wiki/Solid_angle" rel="nofollow">wikipedia</a>) $\Omega = \iint_S \frac { \vec{r} \cdot \hat{n} \,dS }{r^3}$ given a parameterization of a sphere. I don't know how to start to set this up so any and all help is appreciated!</p>
<p>Ideally I would like to derive the answer from this surface integral, not geometrically, because there are other parametric surfaces I would like to know the solid angle for, which might be difficult if not impossible to solve without integration.</p>
<p>*I reposted this from mathoverflow because this isn't a research-level question.</p>
| SadMakesMeCalcface | 84,846 | <p>If your circle has a unit normal vector < cos(a), cos(b), cos(c)> then depending on c, you have:</p>
<p>< cos(t-arcsin(cos(b)/sin(c)))/sqrt(sin(t)^2 +cos(t)^2*sec(c)^2), sin(t-arcsin(cos(b)/sin(c)))/sqrt(sin(t)^2 +cos(t)^2*sec(c)^2), cos(t)*sin(c)/sqrt(cos(t)^2+sin(t)^2*cos(c)^2> 0
<p>< sin(t)*sin(a), -sin(t)*cos(a), cos(t)> cos(c)=0</p>
<p>and of course, < cos(t), sin(t), 0> sin(c)=0</p>
<p>The first two will have t=0 as the maximum point</p>
|
858,494 | <p>Where does the definition of the $L_\infty$ norm come from?</p>
<p>$$\|x\|_\infty=\max \{|x_1|,\dots,|x_k|\}$$</p>
| Oria Gruber | 76,802 | <p>The $p$-norm of a vector is defined as follows:</p>
<p>$\|x\|_p = (\sum_{i=1}^{n}|x_i|^p)^\frac{1}{p}$.</p>
<p>Notice that when $p=2$ this is the simple euclidean norm.</p>
<p>You asked about the infinity norm.</p>
<p>When $p$ tends to infinity, we can see that:</p>
<p>$$\lim_ {p \to \infty} \|x\|_p = \lim_ {p \to \infty} (\sum_{i=1}^{n}|x_i|^p)^\frac{1}{p}$$</p>
<p>Convince yourself that if $a>b>0$ then:</p>
<p>$\lim_{p \to \infty} (a^p+b^p)^{1/p}=a$ </p>
<p>Combine the two statements to reach the desired results. </p>
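<p>A small numerical sketch (mine) of the limit: for a fixed vector, the $p$-norm approaches the max norm as $p$ grows.</p>

```python
def p_norm(x, p):
    # The p-norm (sum of |x_i|^p)^(1/p).
    return sum(abs(xi) ** p for xi in x) ** (1.0 / p)

x = [3.0, -4.0, 1.0]
for p in (1, 2, 8, 32, 128):
    print(p, p_norm(x, p))   # tends to max(|x_i|) = 4
```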
|
10,974 | <p>Is the following true: If two chain complexes of free abelian groups have isomorphic homology modules then they are chain homotopy equivalent.</p>
| Tyler Lawson | 360 | <p>Yes, this is true. Suppose $C_*$ is such a chain complex of free abelian groups.</p>
<p>For each $n$, choose a splitting of the boundary map $C_n \to B_{n-1}$, so that $C_n \cong Z_n \oplus B_{n-1}$. (You can do this because $B_{n-1}$, as a subgroup of a free abelian group, is free abelian.) For all $n$, you then have a sub-chain-complex $\cdots \to 0 \to B_n \to Z_n \to 0 \to \cdots$ concentrated in degrees $n$ and $n+1$, and $C_*$ is the direct sum of these chain complexes.</p>
<p>Given two such chain complexes $C_*$ and $D_*$, you get a direct sum decomposition of each, and so it suffices to show that any two complexes $\cdots \to 0 \to R_i \to F_i \to 0 \to \cdots$, concentrated in degrees $n$ and $n+1$, which are resolutions of the same module $M$ are chain homotopy equivalent; but this is some variant of the fundamental theorem of homological algebra.</p>
<p>This is special to abelian groups and is false for modules over a general ring.</p>
|
3,329,363 | <p>Sorry for the long text; this is a nebulous question that has always been in the back of my mind, and I've had trouble putting into a short form.</p>
<hr>
<p><strong>"Natural" Definition</strong></p>
<p>If someone on the street hears the word "permutation," I think they will naturally assume that a permutation:</p>
<ul>
<li>Involves rearranging objects</li>
<li>The order in which the objects are written, before and after the permutation is performed, is of crucial importance (it's really the essence of the permutation)</li>
</ul>
<p>I would naturally expect a permutation to be an instruction, or an action. For example, I would expect a permutation to look something like</p>
<p><span class="math-container">$$\sigma = \text{interchange the first two entries.}$$</span> </p>
<p>Then, if we apply <span class="math-container">$\sigma$</span> to <span class="math-container">$(A, B, C)$</span> we get <span class="math-container">$(B, A, C);$</span> if we apply it to <span class="math-container">$(4, 6, 9)$</span> we get <span class="math-container">$(6, 4, 9).$</span> To me this is a very satisfying (informal) definition of a permutation, because it captures exactly what many people (or at least me) would think a permutation "should be."</p>
<p>Another way to define "permutation" (to me, less satisfactory than the previous, but still more satisfactory than the official definition) could be to just say that "the 3-tuple <span class="math-container">$(B, A, C)$</span> is a permutation of <span class="math-container">$(A, B, C).$</span>" (In fact, I think this is the definition used in elementary Statistics books.)</p>
<p><strong>Percieved Weaknesses of the Official Definition</strong></p>
<ul>
<li>It makes little sense to "permute" your set of objects. If you have a set of objects <span class="math-container">$\{4, 6, 8 \},$</span> and while you are not in the room someone applies a permutation to your set, you will never know; the output of your permutation is still <span class="math-container">$\{4, 6, 8 \}.$</span> Even if they only apply the permutation to a subset, you only <em>might</em> be able to tell.</li>
<li>Permutations seem to have <em>nothing</em> to do with the order that your objects are in, either before or after doing the permutation. This, like I mentioned above, seems to violate <em>the whole point</em> of a permutation.</li>
</ul>
<p>I call these weaknesses because they seem to violate the "person on the street" understanding of a permutation, and I know that generally mathematicians try really hard to not distort the meaning of common English words too much.</p>
<p><strong>My Question</strong></p>
<p>Is there really such a big disconnect between the "Natural" and Official definitions of permutations? Even if there is not, and there is a way to tediously link the natural definition with the official definition (which I'm sure there is), why does the Official definition deserve to be called a permutation more than the natural one? Is there a name for the Natural definition? </p>
<p>Thanks.</p>
| robjohn | 13,854 | <p>"A bijective map from a set to itself" does not require the set to be ordered, but when applied to an ordered set, this map acts to reorder the set.</p>
<p>This definition is therefore a generalization of the idea of "reordering an ordered set" to a more general setting.</p>
<p>Often, in mathematics, a name lifts with a generalization.</p>
|
1,627,357 | <p>Is there a simple way to prove $$\frac{1}{\sqrt{1-x}} \le e^x$$ on $x \in [0,1/2]$?</p>
<p>Some of my observations from plots, etc.:</p>
<ul>
<li>Equality is attained at $x=0$ and near $x=0.8$.</li>
<li>The derivative is positive at $x=0$, and zero just after $x=0.5$. [I don't know how to find this zero analytically.]</li>
<li>I tried to work with Taylor series. I verified with plots that the following is true on $[0,1/2]$:
$$\frac{1}{\sqrt{1-x}} = 1 + \frac{x}{2} + \frac{3x^2}{8} + \frac{3/4}{(1-\xi)^{5/2}} x^3 \le 1 + \frac{x}{2} + \frac{3}{8} x^2 + \frac{5 \sqrt{2} x^3}{6} \le 1 + x + \frac{x^2}{2} + \frac{x^3}{6} \le e^x,$$
but proving the last inequality is a bit messy.</li>
</ul>
| RRL | 148,510 | <p>We have</p>
<p>$$-2 \ln \sqrt{1-x}=-\ln(1-x)= \int_{1-x}^1\frac{dt}{t} \leqslant \frac{x}{1-x}.$$</p>
<p>For $0 \leqslant x \leqslant 1/2$, we have $2(1-x) \geqslant 1$ and </p>
<p>$$-\ln \sqrt{1-x} < \frac{x}{2(1-x)} \leqslant x.$$</p>
<p>Hence,</p>
<p>$$\frac{1}{\sqrt{1-x}} = \exp[-\ln(\sqrt{1-x})]\leqslant e^x.$$</p>
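<p>A numerical spot check of the inequality on $[0,1/2]$ (a sketch, not part of the proof):</p>

```python
import math

# Minimum of e^x - 1/sqrt(1-x) over a fine grid of [0, 1/2].
gap = min(math.exp(k / 2000.0) - 1.0 / math.sqrt(1.0 - k / 2000.0)
          for k in range(0, 1001))
print(gap >= 0)   # True: e^x dominates 1/sqrt(1-x) on [0, 1/2]
```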
|
3,026,097 | <p>I was studying an article where I encountered <span class="math-container">$\mathbb{R}^E_{\gt 0}$</span>. I couldn't find out what this notation means exactly.
I'm sorry if my question is basic, I searched this community but I didn't find the answer to my question.</p>
<p>Here is a part of that article:</p>
<blockquote>
<p>Given an undirected graph <span class="math-container">$G=(V, E)$</span> with positive edge lengths <span class="math-container">$l\in\mathbb{R}^E_{\gt 0}$</span> on which one desires to compute the shortest path from a vertex <span class="math-container">$s$</span> to <span class="math-container">$t$</span>, ...</p>
</blockquote>
| Felix Marin | 85,343 | <p><span class="math-container">$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,}
\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,}
\newcommand{\ic}{\mathrm{i}}
\newcommand{\mc}[1]{\mathcal{#1}}
\newcommand{\mrm}[1]{\mathrm{#1}}
\newcommand{\pars}[1]{\left(\,{#1}\,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,}
\newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}}
\newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$</span>
<span class="math-container">\begin{align}
&\bbox[10px,#ffd]{\int_{0}^{\infty}{\sin\pars{\pi x} \over
\left\lceil{x}\right\rceil^{2} + \left\lceil{x}\right\rceil}
\,\dd x} =
\sum_{k = 0}^{\infty}\int_{k}^{k + 1}
{\sin\pars{\pi x} \over
\pars{k + 1}^{2} + \pars{k + 1}}\,\dd x
\\[5mm] = &\
{2 \over \pi}\sum_{k = 0}^{\infty}
{\pars{-1}^{k} \over \pars{k + 1}\pars{k + 2}} =
{2 \over \pi}\bracks{%
\sum_{k = 0}^{\infty}{\pars{-1}^{k} \over k + 1} -
\sum_{k = 0}^{\infty}{\pars{-1}^{k} \over k + 2}}
\\[5mm] = &\
{2 \over \pi}\bracks{%
-\sum_{k = 1}^{\infty}{\pars{-1}^{k} \over k} -
\sum_{k = 2}^{\infty}{\pars{-1}^{k} \over k}}
\\[5mm] = &\
-\,{2 \over \pi}\braces{%
\sum_{k = 1}^{\infty}{\pars{-1}^{k} \over k} +
\bracks{1 + \sum_{k = 1}^{\infty}{\pars{-1}^{k} \over k}}} \\[5mm] = &\
-\,{2 \over \pi}\bracks{%
1 + 2\sum_{k = 1}^{\infty}{\pars{-1}^{k} \over k}} =
\bbx{4\ln\pars{2} - 2 \over \pi} \approx 0.2459
\end{align}</span></p>
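<p>A numerical check of the series obtained in the derivation (a sketch): the partial sums of $\frac{2}{\pi}\sum_{k}\frac{(-1)^{k}}{(k+1)(k+2)}$ approach $\frac{4\ln 2 - 2}{\pi}\approx 0.2459$.</p>

```python
import math

# Partial sum of the alternating series from the derivation.
s = (2.0 / math.pi) * sum((-1) ** k / ((k + 1) * (k + 2))
                          for k in range(200000))
closed = (4.0 * math.log(2.0) - 2.0) / math.pi
print(s, closed)   # both approximately 0.2459
```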
|
323,128 | <p>Show that in every (not necessarily connected) graph there is a path from every vertex $u$ of odd degree to some other vertex $v$ ($u \neq v$), also of odd degree.</p>
| joriki | 6,622 | <p>If there isn't then $u$ is in a connected component consisting of itself and vertices of even degree. But then the sum of degrees in that connected component is odd, which it can't be, since it counts every edge twice.</p>
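<p>A small sketch (mine) of the counting fact used above: summing degrees counts every edge twice, so the number of odd-degree vertices is even, both in the whole graph and in each connected component.</p>

```python
edges = [(0, 1), (1, 2), (2, 0), (0, 3),   # component {0, 1, 2, 3}
         (4, 5)]                           # component {4, 5}

deg = {}
for u, v in edges:
    deg[u] = deg.get(u, 0) + 1
    deg[v] = deg.get(v, 0) + 1

odd = [v for v, d in deg.items() if d % 2 == 1]
print(sum(deg.values()) == 2 * len(edges), len(odd))   # True 4
```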
|
323,128 | <p>Show that in every (not necessarily connected) graph there is a path from every vertex $u$ of odd degree to some other vertex $v$ ($u \neq v$), also of odd degree.</p>
| hardmath | 3,111 | <p>The conclusion holds in finite graphs, but no finiteness assumption was stated. For an infinite counterexample consider the natural numbers with edges connecting each number and its successor. Only one vertex has odd degree.</p>
|
3,757,763 | <p>Let <span class="math-container">$T: V\rightarrow V$</span> be a linear operator of the vector space <span class="math-container">$V$</span>.</p>
<p>We write <span class="math-container">$V=U\oplus W$</span>, for subspaces <span class="math-container">$U,W$</span> of <span class="math-container">$V$</span>, if <span class="math-container">$U\cap W=\{0\}$</span> and <span class="math-container">$V=U+W$</span>.</p>
<p>If we assume <span class="math-container">$\dim V<\infty$</span>, then by the <a href="https://en.wikipedia.org/wiki/Rank%E2%80%93nullity_theorem" rel="nofollow noreferrer">rank-nullity</a> theorem, <span class="math-container">$\ker T\cap {\rm Im}\,T=\{0\}$</span> implies <span class="math-container">$V=\ker T\oplus {\rm Im}\,T$</span>.</p>
<blockquote>
<p>However, my question is about the case <span class="math-container">$\dim V$</span> is infinite. Is it still true? What if <span class="math-container">$T$</span> has a minimal polynomial?</p>
</blockquote>
<p>Thanks.</p>
| Batominovski | 72,152 | <p>Let <span class="math-container">$\mathbb{K}$</span> be the base field. If <span class="math-container">$T:V\to V$</span> is such that <span class="math-container">$\ker(T)\cap\text{im}(T)=0$</span> and there exists <span class="math-container">$p(X)\in\mathbb{K}[X]$</span> such that <span class="math-container">$p(T)=0$</span>, then <span class="math-container">$$V=\ker(T)\oplus\text{im}(T)\,.$$</span>
By choosing <span class="math-container">$p(X)$</span> to be the monic polynomial of the lowest possible degree, we may assume that <span class="math-container">$0$</span> is a simple root of <span class="math-container">$p(X)$</span> (this is due to the assumption that <span class="math-container">$\ker(T)\cap\text{im}(T)=0$</span>, and if the minimal polynomial of <span class="math-container">$T$</span> is not divisible by <span class="math-container">$X$</span>, which is possible, then we simply multiply the minimal polynomial of <span class="math-container">$T$</span> by <span class="math-container">$X$</span>). That is,
<span class="math-container">$$p(X)=X^n+a_{n-1}X^{n-1}+a_{n-2}X^{n-2}+\ldots+a_2X^2+a_1X$$</span>
for some <span class="math-container">$a_1,a_2,\ldots,a_{n-2},a_{n-1}\in\mathbb{K}$</span> with <span class="math-container">$a_1\neq 0$</span>.</p>
<p>Write <span class="math-container">$q(X):=X^{n-1}+a_{n-1}X^{n-2}+a_{n-2}X^{n-3}+\ldots+a_2X+a_1$</span>. Note that
<span class="math-container">$$1=\frac{1}{a_1}\,q(X)+r(X)\,X\,,$$</span>
where
<span class="math-container">$$r(X):=-\frac{1}{a_1}\,X^{n-2}-\frac{a_{n-1}}{a_1}\,X^{n-3}-\frac{a_{n-2}}{a_1}\,X^{n-4}-\ldots-\frac{a_3}{a_1}\,X-\frac{a_2}{a_1}\,.$$</span>
Therefore,
<span class="math-container">$$\text{id}_V=\frac{1}{a_1}\,q(T)+r(T)\,T\,.$$</span>
Fix <span class="math-container">$v\in V$</span>. We get
<span class="math-container">$$v=\text{id}_V(v)=\left(\frac{1}{a_1}\,q(T)+r(T)\,T\right)v=\frac{1}{a_1}\,q(T)v+r(T)\,Tv\,.$$</span>
Observe that <span class="math-container">$q(T)v\in \ker(T)$</span> and <span class="math-container">$Tv\in\ker\big(q(T)\big)$</span> (as <span class="math-container">$X\,q(X)=q(X)\,X=p(X)$</span> is the minimal polynomial of <span class="math-container">$T$</span>). This implies
<span class="math-container">$$V=\ker(T)\oplus\ker\big(q(T)\big)\,.$$</span></p>
<p>We want to prove that
<span class="math-container">$$\text{im}(T)=\ker\big(q(T)\big)\,.$$</span>
The direction <span class="math-container">$\text{im}(T)\subseteq\ker\big(q(T)\big)$</span> is clear because <span class="math-container">$q(X)\,X=p(X)$</span>. We shall prove the reversed inclusion. Suppose that <span class="math-container">$v\in\ker\big(q(T)\big)$</span>. Thus,
<span class="math-container">$$T^{n-1}v+a_{n-1}\,T^{n-2}v+a_{n-2}\,T^{n-3}v+\ldots+a_2Tv+a_1v=0\,.$$</span>
This gives
<span class="math-container">$$v=T\left(-\frac{1}{a_1}\,T^{n-2}v-\frac{a_{n-1}}{a_1}\,T^{n-3}v-\frac{a_{n-2}}{a_1}\,T^{n-4}v-\ldots-\frac{a_2}{a_1}\,v\right)\in \text{im}(T)\,.$$</span></p>
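<p>A concrete finite-dimensional instance of the decomposition above (my example, not the answer's): take $T$ a non-orthogonal projection, so $p(X)=X^2-X$ with $a_1=-1$, $q(X)=X-1$, $r(X)=1$, and the identity $v=\frac{1}{a_1}\,q(T)v+r(T)\,Tv$ reads $v=(I-T)v+Tv$ with $(I-T)v\in\ker(T)$ and $Tv\in\text{im}(T)$.</p>

```python
T = [[1, 1],
     [0, 0]]           # T^2 = T; ker T = span{(1,-1)}, im T = span{(1,0)}

def apply(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

v = [5, 7]
tv = apply(T, v)                    # the component in im T
kv = [v[0] - tv[0], v[1] - tv[1]]   # (I - T)v, the component in ker T

assert apply(T, kv) == [0, 0]       # kv really lies in ker T
assert apply(T, tv) == tv           # tv is fixed by T, i.e. lies in im T
print(kv, tv)
```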
|
917,302 | <p>If $p(x)$ is a polynomial of degree 4 such that $p(2)=p(-2)=p(-3)=-1$ and $p(1)=p(-1)=1$, then find $p(0)$.</p>
| lab bhattacharjee | 33,337 | <p>Let $\displaystyle p(x)=(Ax+B)(x-2)(x+2)(x+3)-1,$ where $A,B$ are arbitrary finite constants</p>
<p>$$p(0)=B(-2)(+2)(+3)-1=-12B-1$$</p>
<p>Set $x=1,-1$ one by one to find $B$</p>
|
917,302 | <p>If $p(x)$ is a polynomial of degree 4 such that $p(2)=p(-2)=p(-3)=-1$ and $p(1)=p(-1)=1$, then find $p(0)$.</p>
| Kelenner | 159,886 | <p>Put $q(x)=p(x)-p(-x)$. Then $q$ is an odd polynomial of degree $\leq 3$. Hence $q(x)=x\,r(x^2)$ with degree of $r\leq 1$, say $r(y)=ay+b$. Since $q(1)=q(2)=0$, we have $r(1)=r(4)=0$; as the degree of $r$ is $\leq 1$, $r=0$, so $q=0$ and $p(x)=p(-x)$. We get $p(3)=p(-3)=-1$, and so $p(x)=c(x^2-4)(x^2-9)-1$ for a constant $c$, and we finish easily.</p>
|
917,302 | <p>If $p(x)$ is a polynomial of degree 4 such that $p(2)=p(-2)=p(-3)=-1$ and $p(1)=p(-1)=1$, then find $p(0)$.</p>
| Community | -1 | <p>For later simplification, consider $p_1(x)=p(x)+1$, which has three known roots. $p_1(x)$ is an even function (its odd part has degree $\leq 3$ and vanishes at $\pm1$ and $\pm2$, hence is zero), so that $p_1(x)=ax^4+bx^2+c=a(x^2)^2+bx^2+c=q(x^2)$.</p>
<p>$q(x^2)$ is of the <strong>second degree</strong> in $x^2$, and much easier to interpolate.
We have $q(1)=2$, $q(4)=q(9)=0$, hence by Lagrange $q(0)=2\frac{(0-4)(0-9)}{(1-4)(1-9)}=3=p_1(0)$, and $p(0)=\color{blue}2$.</p>
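<p>The determination above can be checked exactly (a sketch of mine): $p(1)=1$ forces $c=\frac{1}{12}$ in $p(x)=c(x^2-4)(x^2-9)-1$, and then every stated value, and $p(0)=2$, come out right.</p>

```python
from fractions import Fraction

c = Fraction(1, 12)                      # from p(1) = 24c - 1 = 1

def p(x):
    return c * (x * x - 4) * (x * x - 9) - 1

assert [p(t) for t in (2, -2, 3, -3)] == [-1, -1, -1, -1]
assert p(1) == 1 and p(-1) == 1
print(p(0))   # 2
```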
|
2,531,714 | <p>Given: </p>
<p>$(1-x^2)\dfrac{d^2y(x)}{dx^2} + 2x\dfrac{dy(x)}{dx} - 2y(x) = 0 $</p>
<p>The Solution is: </p>
<p>$y(x) =C_1x + C_2(x^2+1)$ </p>
<p>How do I factor the $x$ out in order to get it into a normal "linear" form that contains only coefficients to show that the solution is valid? </p>
<p>Edit: The equation should be both Linear and Homogeneous</p>
| Felix Marin | 85,343 | <p>$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,}
\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,}
\newcommand{\ic}{\mathrm{i}}
\newcommand{\mc}[1]{\mathcal{#1}}
\newcommand{\mrm}[1]{\mathrm{#1}}
\newcommand{\pars}[1]{\left(\,{#1}\,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,}
\newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}}
\newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$</p>
<blockquote>
<p>Hereafter, $\ds{\bracks{\cdots}}$ is an Iverson Bracket. Namely, $\ds{\bracks{P} = 1}$ whenever $\ds{P\ \mbox{is}\ \color{red}{\texttt{true}}}$ and $\ds{0\ \color{red}{\mbox{otherwise}}}$.</p>
</blockquote>
<p>\begin{align}
\left.\sum_{n = 1}^{\infty}H_{n}^{2}x^{n}\,\right\vert_{\ \verts{x}\ <\ 1} & =
\sum_{n = 1}^{\infty}\overbrace{\braces{\sum_{i = 1}^{\infty}{\bracks{i \leq n} \over i}}}^{\ds{H_{n}}}\
\overbrace{\braces{\sum_{j = 1}^{\infty}{\bracks{j \leq n} \over j}}}^{\ds{H_{n}}}\ x^{n}
\\[5mm] & =
\sum_{i = 1}^{\infty}{1 \over i}\sum_{j = 1}^{\infty}{1 \over j}
\sum_{n = 1}^{\infty}\bracks{n \geq i}\bracks{n \geq j}x^{n}
\\[5mm] & =
\sum_{i = 1}^{\infty}{1 \over i}\sum_{j = 1}^{\infty}{1 \over j}\braces{%
\bracks{i \leq j}\sum_{n = j}^{\infty}x^{n} +
\bracks{i > j}\sum_{n = i}^{\infty}x^{n}}
\\[5mm] & =
\sum_{i = 1}^{\infty}{1 \over i}\sum_{j = 1}^{\infty}{1 \over j}\braces{%
\bracks{j \geq i}\,{x^{j} \over 1 - x} +
\bracks{j < i}\,{x^{i} \over 1 - x}}
\\[5mm] & =
{1 \over 1 - x}\sum_{i = 1}^{\infty}{1 \over i}\pars{%
\sum_{j = i}^{\infty}{x^{j} \over j} + \sum_{j = 1}^{i - 1}{x^{i} \over j}} =
{1 \over 1 - x}\pars{\sum_{i = 1}^{\infty}{1 \over i}
\sum_{j = i}^{\infty}{x^{j} \over j} +
\sum_{i = 1}^{\infty}{x^{i} \over i}\overbrace{\sum_{j = 1}^{i - 1}{1 \over j}}
^{\ds{H_{i - 1}}}}
\\[5mm] & =
{1 \over 1 - x}\pars{\sum_{j = 1}^{\infty}{x^{j} \over j}\
\overbrace{\sum_{i = 1}^{j}{1 \over i}}^{\ds{H_{j}}}\ +
\sum_{i = 1}^{\infty}{x^{i + 1} \over i + 1}\,H_{i}} =
{1 \over 1 - x}\sum_{i = 1}^{\infty}
\pars{{x^{i} \over i} + {x^{i + 1} \over i + 1}}H_{i}
\\[5mm] & =
{1 \over 1 - x}\sum_{i = 1}^{\infty}H_{i}
\pars{x^{i}\int_{0}^{1}t^{i - 1}\,\dd t + x^{i + 1}\int_{0}^{1}t^{i}\,\dd t}
\\[5mm] & =
{1 \over 1 - x}
\pars{\int_{0}^{1}{1 \over t}\sum_{i = 1}^{\infty}H_{i}\pars{xt}^{i}\,\dd t + x\int_{0}^{1}\sum_{i = 1}^{\infty}H_{i}\pars{xt}^{i}\,\dd t}
\\[5mm] & =
{1 \over 1 - x}\int_{0}^{1}{1 + xt \over t}
\bracks{-\,{\ln\pars{1 - xt} \over 1 - xt}}\,\dd t =
-\,{1 \over 1 - x}\int_{0}^{x}{1 + t \over t\pars{1 - t}}\,
\ln\pars{1 - t}\,\dd t
\end{align}</p>
<blockquote>
<p>because $\ds{\sum_{i = 1}^{\infty}H_{i}z^{i} =
-\,{\ln\pars{1 - z} \over 1 - z}:\ \pars{~H_{i}\ Generating\ Function~}}$.</p>
</blockquote>
<p>Then,
\begin{align}
\left.\sum_{n = 1}^{\infty}H_{n}^{2}x^{n}\,\right\vert_{\ \verts{x}\ <\ 1} & =
-\,{1 \over 1 - x}\bracks{%
2\
\underbrace{\int_{0}^{x}{\ln\pars{1 - t} \over 1 - t}\,\dd t}
_{\ds{-\,{1 \over 2}\,\ln^{2}\pars{1 - x}}}\ +\
\underbrace{\int_{0}^{x}\overbrace{{\ln\pars{1 - t} \over t}}
^{\ds{-\,\mrm{Li}_{2}'\pars{x}}}\ \,\dd t}
_{\ds{-\,\mrm{Li}_{2}\pars{x}}}}
\\[5mm] & =
\bbx{\ln^{2}\pars{1 - x} + \mrm{Li}_{2}\pars{x} \over 1 - x}
\end{align}</p>
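<p>A numerical check of the closed form at $x=\frac12$ (a sketch; $\mrm{Li}_2$ is summed from its defining series $\sum_{k\geq 1}x^k/k^2$):</p>

```python
import math

x = 0.5
H, lhs = 0.0, 0.0
for n in range(1, 200):
    H += 1.0 / n              # harmonic number H_n
    lhs += H * H * x ** n     # partial sum of sum H_n^2 x^n

li2 = sum(x ** k / k ** 2 for k in range(1, 200))   # dilogarithm series
rhs = (math.log(1.0 - x) ** 2 + li2) / (1.0 - x)
print(lhs, rhs)   # the two values agree to many digits
```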
|
3,884,581 | <p>Please don't just throw an answer at me; please explain how you arrived at it, because I've been fiddling with this for the past 30 minutes...</p>
| Deepak | 151,732 | <p>Hints:</p>
<p><span class="math-container">$a^2 + b^2 + 2ab = (a+b)^2$</span></p>
<p>and <span class="math-container">$a^2 + b^2 - 2ab = (a-b)^2$</span></p>
<p>You can easily find both <span class="math-container">$a+b$</span> and <span class="math-container">$a-b$</span> and you're left with only linear simultaneous equations to solve. Don't forget to consider both signs when taking the square roots.</p>
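<p>Since the original problem's numbers are not shown above, here is the hinted method run on made-up values $a^2+b^2=13$ and $ab=6$ (both the numbers and the choice of positive roots are my assumptions):</p>

```python
import math

sum_sq, prod = 13, 6
s = math.sqrt(sum_sq + 2 * prod)   # a + b = 5  (taking the + root)
d = math.sqrt(sum_sq - 2 * prod)   # a - b = 1  (taking the + root)

# Solve the linear system a + b = s, a - b = d.
a, b = (s + d) / 2, (s - d) / 2
print(a, b)   # 3.0 2.0
```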
|
2,861,949 | <blockquote>
<p>Use the method of least squares in order to find the best approximation
to a solution for the system
$$3x + y = 1\\
x − y = 2\\
x + 3y = −1$$</p>
</blockquote>
<p><strong>My Try:</strong>
$$Ax=B$$
$$\begin{bmatrix} 1 & 1 \\ 1 & -1 \\ 1 & 3 \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix}=\begin{bmatrix} 1 \\ 2 \\ -1 \end{bmatrix}$$</p>
<p>First I found $A^TA$ and then $A^TB$</p>
<p>$$A^TA=\begin{bmatrix} 1 & 1 & 1 \\ 1 & -1 & 3 \end{bmatrix}\begin{bmatrix} 1 & 1 \\ 1 & -1 \\ 1 & 3 \end{bmatrix}=\begin{bmatrix} 3 & 3 \\ 3 & 11 \end{bmatrix}$$
and
$$A^TB=\begin{bmatrix} 1 & 1 & 1 \\ 1 & -1 & 3 \end{bmatrix}\begin{bmatrix} 1 \\ 2 \\ -1 \end{bmatrix}=\begin{bmatrix} 2 \\ -4 \end{bmatrix}$$
Then I used Gaussian elimination on $\begin{bmatrix} 3 & 3 & 2 \\ 3 & 11 & -4 \end{bmatrix}$ and got $x=\dfrac{17}{12},\ y=-\dfrac{3}{4}$.</p>
<p>Is this method of least squares correct?</p>
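<p>The arithmetic of the attempt can be checked exactly (a sketch; note that the matrix $A$ below is the one written in the attempt, whose first row $(1,1)$ does not match the stated first equation $3x+y=1$):</p>

```python
from fractions import Fraction as F

A = [[1, 1], [1, -1], [1, 3]]     # matrix as written in the attempt
b = [1, 2, -1]

AtA = [[sum(A[k][i] * A[k][j] for k in range(3)) for j in range(2)]
       for i in range(2)]
Atb = [sum(A[k][i] * b[k] for k in range(3)) for i in range(2)]

# Solve the 2x2 normal equations A^T A [x, y]^T = A^T b by Cramer's rule.
det = AtA[0][0] * AtA[1][1] - AtA[0][1] * AtA[1][0]
x = F(Atb[0] * AtA[1][1] - AtA[0][1] * Atb[1], det)
y = F(AtA[0][0] * Atb[1] - Atb[0] * AtA[1][0], det)
print(AtA, Atb, x, y)   # [[3, 3], [3, 11]] [2, -4] 17/12 -3/4
```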
| John Bentin | 875 | <p>Analysis of the posted text is complicated by it being a faulty translation from the original Italian. However, some features appear to be a fair representation of the author's writing, and some of those are nonstandard. The notation $f : X ⊆ S → T$ is nonstandard. We would normally write $f : X → T$, with $X ⊆ S$ either being obvious from the context or stated separately. In English, $f(X)$ is called the <em>range</em> (or <em>image</em>, in some contexts) of $f$, and $T$ is the <em>codomain</em>. Calling the range "codomain" is simply contrary to normal usage. The term <em>domain</em> is correctly used, but <em>definition set</em> has little currency in English. The term <em>value set</em> (or the grammatically incorrect form <em>values set</em>) is also unconventional in English.</p>
<p>In mathematical writing, some unconventionality may improve on the standard form, or may be justified by the particular context; but generally it makes life harder for readers—and for students in particular. It's difficult to assess whether this author's terminological creativity will cause real problems of understanding for you. While I can see some logic in it, I wouldn't recommend teaching students usage that conflicts with what they will encounter elsewhere.</p>
<p>In English we might write "a function from a set $X$ to a set $T$, where $X\subseteq S$ " or "a function on a set $X$ with values in a set $T$, where $X\subseteq S$ ". The abbreviated form "a function from a set $X\subseteq S$ to a set $T$ " would perhaps be considered acceptable by enough people, but the purely symbolic form $f : X ⊆ S → T$ is not normal. Viewed syntactically, the arrow points from the $S$ to the $T$. But $S$ has nothing to do with the function $f$; it just happens to be the case that $X\subseteq S$. The notation seems to conflate <em>function</em> with <em>partial function</em>.</p>
<p>I would edit the rest of the translated text to:</p>
<p>"The set $X$ is called the <em>definition set</em> or <em>domain</em> of the function $f$. The subset of $T$ comprising the values $f(x)$ ($x\in X$) is called the <em>value set</em> or <em>codomain</em> [sic] of $f$ and is denoted by $f(X)$."</p>
|
2,861,949 | <blockquote>
<p>Use the method of least squares in order to find the best approximation
to a solution for the system
$$3x + y = 1\\
x − y = 2\\
x + 3y = −1$$</p>
</blockquote>
<p><strong>My Try:</strong>
$$Ax=B$$
$$\begin{bmatrix} 1 & 1 \\ 1 & -1 \\ 1 & 3 \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix}=\begin{bmatrix} 1 \\ 2 \\ -1 \end{bmatrix}$$</p>
<p>First I found $A^TA$ and then $A^TB$</p>
<p>$$A^TA=\begin{bmatrix} 1 & 1 & 1 \\ 1 & -1 & 3 \end{bmatrix}\begin{bmatrix} 1 & 1 \\ 1 & -1 \\ 1 & 3 \end{bmatrix}=\begin{bmatrix} 3 & 3 \\ 3 & 11 \end{bmatrix}$$
and
$$A^TB=\begin{bmatrix} 1 & 1 & 1 \\ 1 & -1 & 3 \end{bmatrix}\begin{bmatrix} 1 \\ 2 \\ -1 \end{bmatrix}=\begin{bmatrix} 2 \\ -4 \end{bmatrix}$$
Then I used Gaussian elimination for $\begin{bmatrix} 3 & 3 & 2 \\ 3 & 11 & -4 \end{bmatrix}$ and got $x=\dfrac{17}{12},y=-\dfrac{3}{4}$. </p>
<p>Is this method of least squares correct?</p>
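For anyone who wants to double-check the numbers, here is a small self-contained Python sketch that redoes the normal equations in exact rational arithmetic (it uses the matrix $A$ exactly as written in this post, and the helper code is just an illustration):

```python
from fractions import Fraction

# Normal equations A^T A v = A^T b, with A and b as written above.
A = [[1, 1], [1, -1], [1, 3]]
b = [1, 2, -1]

# Build A^T A and A^T b exactly.
AtA = [[sum(A[k][i] * A[k][j] for k in range(3)) for j in range(2)] for i in range(2)]
Atb = [sum(A[k][i] * b[k] for k in range(3)) for i in range(2)]

# Solve the 2x2 system by Cramer's rule with exact fractions.
det = Fraction(AtA[0][0] * AtA[1][1] - AtA[0][1] * AtA[1][0])
x = Fraction(Atb[0] * AtA[1][1] - AtA[0][1] * Atb[1]) / det
y = Fraction(AtA[0][0] * Atb[1] - Atb[0] * AtA[1][0]) / det
print(x, y)  # 17/12 -3/4
```

This reproduces $A^TA=\begin{bmatrix}3&3\\3&11\end{bmatrix}$, $A^TB=\begin{bmatrix}2\\-4\end{bmatrix}$ and the values $x=17/12$, $y=-3/4$.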
| trying | 309,917 | <p>You need not be scared by terms used unconventionally as long as those terms are unambiguously defined by the Author. Yes, today <em>codomain</em> is a term that all agree to define in the same way, but still some 30 years ago in non-English speaking countries <em>codomain</em> could be defined as the <em>image</em> or as the <em>target set</em> depending on the author's preference. It is the same kind of ambiguity that still persists today for <em>range</em> in English-speaking countries.</p>
<p>As to <em>biunivocal</em>, which you mention in a comment, it is not really clear what you mean, because this term is not much used internationally in math. Probably you mean <em>one-to-one</em>. Note that a <em>one-to-one function</em> is called an <em>injective function</em>, while a function that is a <em>one-to-one correspondence</em> is called a <em>bijective function</em>. So be warned about such a subtlety. In Italian, <em>one-to-one</em> is not usually translated word for word; rather, it is translated with a word that indeed resembles "biunivocal", but there is no difference between a "biunivocal" function and a function that is a "biunivocal" correspondence in Italian: they both mean <em>bijective function</em>. So stick to the terms <em>injective</em> and <em>bijective</em>, which pose no translation problems.</p>
<p>But again this is not important, as long as a definition is given with no ambiguity. If you deeply learn concepts, you will not feel uneasy changing terminology when a new context occurs.</p>
<p>Furthermore, with the notation:
$$f\colon X\subseteq S\to T$$
the Author says what he means, that is, he defines it unambiguously (reread the definition, if not clear).</p>
<p>I've never read Nicola Fedele, but I know that he is an authority, pupil of <a href="https://en.wikipedia.org/wiki/Federico_Cafiero" rel="nofollow noreferrer" title="Wikipedia link">Federico Cafiero</a>. So I would not discard his book (still in widespread use) just for the unconventional terminology.</p>
<p>Just as an example, Herstein in his beautiful "Topics in Algebra" (second edition, 1975) used <em>isomorphism into</em> and <em>isomorphism onto</em> instead of the now-common <em>monomorphism</em> and <em>isomorphism</em>, respectively. But this is a book that is still worth reading, even though at the beginning one feels uneasy because of that: at least it helps you develop a critical approach when reading math, in terms of paying the utmost attention to every single word.</p>
|
2,231,092 | <p>I am reading the <a href="http://people.ucalgary.ca/~rzach/static/open-logic/open-logic-complete.pdf" rel="nofollow noreferrer">Open Logic Textbook</a>, in which there is a proposition (6.12) about extensionality of first-order sentences. It goes like this: </p>
<p>Let $\phi$ be a sentence, and $M$ and $M'$
be structures.
If $c_M = c_{M'}$
, $R_M = R_{M'}$
, and $f_M = f_{M'}$
for every constant symbol $c$,
relation symbol $R$, and function symbol $f$ occurring in $\phi$, then $ M \models \phi$ iff $M' \models \phi$</p>
<p>Does this statement implicitly imply that the domain is exactly the same set, since $f_M = f_{M'}$? I am confused by this statement: does it mean $f_M = f_{M'}$ only on the domain of constant values (or other covered terms)?</p>
| user12345 | 394,065 | <p><strong>Hint</strong>: First, find the equation of the line in Cartesian form ($y=mx+b$; this shouldn't be very difficult). You know the slope is $\frac{1}{7}=m$. They gave you another point that the line goes through/satisfies. Use that to find $b$ and you'll have the equation of your line in Cartesian coordinates. Then, use the relationship between Cartesian and polar coordinates to get the polar equation (maybe $x=r\cos(\theta)$ and $y=r\sin(\theta)$ will ring a bell? Try just drawing a triangle and review what polar coordinates and Cartesian coordinates mean). Good luck.</p>
|
2,576,344 | <p>This problem is about expected value, and it's a real world problem.</p>
<p>I know so far that $f$ is strictly increasing, if that makes the proof more concise (but if you can also prove it without this assumption, that would be awesome). Find all solutions for $f$ when $f(P_a \cdot a+P_b \cdot b)=P_a \cdot f(a)+P_b \cdot f(b)$ for all $a,b \in R$, and for $P_a,P_b \in R_+$. I'm 80% sure $f$ must be $f(x)=mx$ but I don't know how to prove it.</p>
| Przemysław Scherwentke | 72,361 | <p>HINT: For all $P_a$, $P_b\in\mathbb{R}$ and all $\lambda\in[0,1]$ your condition gives that the function is convex and concave, so its graph is an interval with $x$-ends $P_a$ and $P_b$.</p>
|
1,301,522 | <p>Many texts will define a manifold as "a second-countable Hausdorff space that is locally homeomorphic to Euclidean space".
By definition of homeomorphism, shouldn't this really and officially read as "locally homeomorphic to a <em>subset</em> of Euclidean space"?</p>
| nullUser | 17,459 | <p>Note that $B(0,1) \simeq \mathbb{R}^n$. Can you answer your own question now?</p>
|
279,707 | <p>$p \land \lnot q \lor q \land \lnot r \lor \lnot p \lor r $
$\equiv$$(p \lor \lnot p) \land (\lnot q \lor q) \land (\lnot r \lor r)$</p>
<p>Is this move "legal"? Or can you only apply the associative property on like operators? </p>
| mjqxxxx | 5,546 | <p>No, mixed expressions like this are not associative; instead, they obey distributive laws:
$$
(a\wedge b)\vee c \equiv (a\vee c) \wedge (b \vee c)
$$
and
$$
a \wedge (b\vee c) \equiv(a\wedge b) \vee (a \wedge c).
$$</p>
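Both laws are easy to confirm mechanically; a small Python truth-table sweep (just an illustration, not part of the argument) checks all eight assignments:

```python
from itertools import product

checks = []
for a, b, c in product([False, True], repeat=3):
    # (a ∧ b) ∨ c  ≡  (a ∨ c) ∧ (b ∨ c)
    checks.append(((a and b) or c) == ((a or c) and (b or c)))
    # a ∧ (b ∨ c)  ≡  (a ∧ b) ∨ (a ∧ c)
    checks.append((a and (b or c)) == ((a and b) or (a and c)))
print(all(checks))  # True
```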
|
279,707 | <p>$p \land \lnot q \lor q \land \lnot r \lor \lnot p \lor r $
$\equiv$$(p \lor \lnot p) \land (\lnot q \lor q) \land (\lnot r \lor r)$</p>
<p>Is this move "legal"? Or can you only apply the associative property on like operators? </p>
| amWhy | 9,003 | <p><strong>Associativity</strong> applies only when the connectives involved are exclusively $\land$ or exclusively $\lor$:</p>
<p>$$p \land q \land r \equiv (p \land q)\land r \equiv p \land (q\land r)$$</p>
<p>$$p \lor q \lor r \equiv (p \lor q)\lor r \equiv p \lor (q\lor r)$$</p>
<p>Because of associativity of $\lor$ and $\land$, parentheses are not necessary to define expressions like those above.</p>
<p>Your statement, however:</p>
<p>$$p \land \lnot q \lor q \land \lnot r \lor \lnot p \lor r \tag{given}$$</p>
<p>has <em>mixed connectives</em>, and so <em>associativity does not apply</em> across all possible groupings. </p>
<p><em><strong>Please note</strong></em>: as stated, your (given) expression <strong><em>is not well-defined without parentheses</em></strong>. That is, without parentheses, it is ambiguous; it can be read any number of ways, most of which are not equivalent. Does it mean grouping from left to right?</p>
<p>$$(((((p\land \lnot q) \lor q) \land) \lnot r)\lor\lnot p) \lor r\;?\tag{1}$$</p>
<p>Or does it mean this?</p>
<p>$(p \land \lnot q) \lor (q \land \lnot r) \lor (\lnot p \lor r)\;?\tag{2}$</p>
<p>or any number of other possible ways of grouping with parentheses?</p>
<hr>
<p>In general, when you have an expression like $(2)$ above, you need to apply the <strong>Distributive Laws</strong> to distribute <em>over</em> another connective:</p>
<p>For example $$p \land (q \lor r) \equiv (p \land q) \lor (p \land r)$$
$$p \lor (q\land r) \equiv (p \lor q) \land (p\lor r)$$</p>
|
1,919,159 | <p>I don't get this step in the proof of <a href="https://en.wikipedia.org/wiki/Carath%C3%A9odory%27s_theorem_(convex_hull)" rel="nofollow">Carathéodory's theorem (convex hull)</a>.
Why:</p>
<blockquote>
<p>Suppose k > d + 1 (otherwise, there is nothing to prove). Then, the points $x_2 − x_1, ..., x_k − x_1$ are linearly dependent</p>
</blockquote>
<p>Why is this true?</p>
<p>How can we say these points are linearly dependent?</p>
| Surb | 154,545 | <p>What is the cardinality of $\{x_2 − x_1, ..., x_k − x_1\}$? Now remember that there is no linearly independent set of cardinality greater than $d$ in $\Bbb R^d$.</p>
|
1,969,903 | <blockquote>
<p>a) Evaluate the one-dimensional Gaussian integral</p>
<p><span class="math-container">$I(a)$</span> = <span class="math-container">$\int_R exp(-ax^2)dx$</span>, <span class="math-container">$a>0$</span></p>
<p>b) evaluate the two-dimensional Gaussian integral using a)</p>
<p><span class="math-container">$I_2(a,b)$</span> = <span class="math-container">$\int_{R^2} exp(-ax^2 -by^2)dxdy, a,b>0$</span></p>
</blockquote>
<p>For a) I have done the following:</p>
<p><span class="math-container">$\int_{-\infty}^{+\infty}$</span> <span class="math-container">$e^{-ax^2}dx = 2\int_0^{\infty}e^{-ax^2}dx $</span></p>
<p><span class="math-container">$I^2$</span> = 4<span class="math-container">$\int_0^\infty$$\int_0^\infty$$e^{-a(x^2 + y^2)}dydx$</span></p>
<p>...</p>
<p><span class="math-container">$I^2$</span> = <span class="math-container">$\sqrt{\pi a}$</span></p>
<p><span class="math-container">$I$</span> = <span class="math-container">$\pi$$\sqrt a$</span></p>
<p>I am having difficulties understanding how to solve b) any help or guide will be appreciated.</p>
| Karl | 880,798 | <p>Note that the suggested answer in the original question is wrong. It should be:
<span class="math-container">$$I = \sqrt{\frac{\pi}{a}}$$</span></p>
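As a quick numerical sanity check of $I(a)=\sqrt{\pi/a}$ (a throwaway Python sketch, not part of the derivation), one can approximate the integral with a midpoint rule on a truncated interval; the tails beyond $\pm 30$ are negligible for the values of $a$ tried here:

```python
import math

def gauss_integral(a, n=120000, L=30.0):
    # Midpoint rule for the integral of exp(-a x^2) over [-L, L].
    h = 2 * L / n
    return h * sum(math.exp(-a * (-L + (k + 0.5) * h) ** 2) for k in range(n))

for a in (0.5, 1.0, 3.0):
    # agrees with sqrt(pi / a) to high accuracy
    assert abs(gauss_integral(a) - math.sqrt(math.pi / a)) < 1e-6
```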
|
40,572 | <p>Dummit and Foote, p. 204</p>
<p>They suppose that $G$ is simple with a subgroup of index $k = p$ or $p+1$ (for a prime $p$), and embed $G$ into $S_k$ by the action on the cosets of the subgroup. Then they say</p>
<p>"Since now Sylow $p$-subgroups of $S_k$ are precisely the groups generated by a $p$-cycle, and distinct Sylow $p$-subgroups intersect in the identity"</p>
<p>Am I correct in assuming that these statements follow because the Sylow $p$-subgroups of $S_k$ must be cyclic of order $p$? They calculate the number of Sylow $p$-subgroups of $S_k$ by counting (number of $p$-cycles)/(number of $p$-cycles in a Sylow $p$-subgroup). They calculate the number of $p$-cycles in a Sylow $p$-subgroup to be $p(p-1)$, which I don't see.</p>
| JavaMan | 6,491 | <p>First and foremost, your notation $c(9,39)$ should read $c(39,9)$. That is presumably a typo. Now:</p>
<p>The inclusion-exclusion principle helps you find the cardinality of the sets $A \cup B$, $A \cup B \cup C$, $A \cup B \cup C \cup D$, etc. In your case, you have to find the cardinality of the set $A \cup B$, where $A$ is the event that you draw exactly four spades and $B$ is the event that you draw exactly four diamonds. If $|A|$ denotes the cardinality (or number of elements) in the set $A$, then</p>
<p>$$
|A \cup B| = |A| + |B| - |A \cap B|.
$$</p>
<p>This makes intuitive sense, since if you want to count the number of elements in the set $A \cup B$, you count the number of elements in $A$, you count the number of elements in $B$ and you subtract the number of elements in $A \cap B$, since they were counted twice.</p>
<p>To find $|A|$ or the number of $13$ card hands which have exactly four spades, you need to choose $4$ spades from the possible $13$ and $9$ non spades from the rest of the deck. The number of ways of doing this is</p>
<p>$$
c(13,4) \cdot c(39,9) = 151519319380.
$$</p>
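These binomial coefficients are easy to verify with Python's `math.comb`; the sketch below also carries the inclusion–exclusion through to the end (the intersection term — hands with four spades <em>and</em> four diamonds — is my own addition for illustration, not part of the computation above):

```python
from math import comb

A = comb(13, 4) * comb(39, 9)   # hands with exactly four spades
B = A                           # exactly four diamonds, by symmetry
# |A ∩ B|: choose 4 spades, 4 diamonds, and 5 of the remaining 26 cards
AB = comb(13, 4) * comb(13, 4) * comb(26, 5)

assert A == 151519319380
print(A + B - AB)               # |A ∪ B| by inclusion–exclusion
```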
|
2,221,897 | <p>Show that </p>
<p>$$\lim_{n \to \infty} \sum_{k=3}^n \frac{2k}{k^2+n^2+1} = \ln(2)$$</p>
<p>How many ways are there to prove it?</p>

<p>Is there a standard way?</p>
<p>I was thinking about making it a Riemann sum.
Or telescoping.</p>
<p>What is the easiest way?
What is the shortest way?</p>
| RRL | 148,510 | <p>Note that</p>
<p>$$\sum_{k=3}^n \frac{2k}{k^2+n^2+1} = \sum_{k=1}^n \frac{2k}{k^2+n^2+1} - \frac{2}{2+n^2} - \frac{4}{5+n^2}.$$</p>
<p>We can ignore the last two terms since they converge to $0$.</p>
<p>Consider</p>
<p>$$\sum_{k=1}^n \frac{2k}{k^2+n^2+1} = \frac{1}{n}\sum_{k=1}^n \frac{2(k/n)}{1+(k/n)^2 + (1/n^2)}. $$</p>
<p>This is almost a Riemann sum for $\int_0^1 2x/(1 +x^2) \, dx$ except for the annoying term $1/n^2$.</p>
<p>However, it is valid to take a limit of a double sequence</p>
<p>$$S_{mn} =\frac{1}{n}\sum_{k=1}^n \frac{2(k/n)}{1+(k/n)^2 + (1/m^2)}, $$</p>
<p>as</p>
<p>$$\lim_{n \to \infty}S_{nn} = \lim_{n \to \infty} \lim_{m \to \infty} S_{mn},$$</p>
<p>since the inner limit on the RHS exhibits uniform convergence.</p>
<p>Thus,</p>
<p>$$\lim_{n \to \infty}\sum_{k=3}^n \frac{2k}{k^2+n^2+1} = \lim_{n \to \infty} \lim_{m \to \infty}\frac{1}{n}\sum_{k=1}^n \frac{2(k/n)}{1+(k/n)^2 + (1/m^2)} \\ = \lim_{n \to \infty} \frac{1}{n}\sum_{k=1}^n \frac{2(k/n)}{1+(k/n)^2} \\ = \int_0^1 \frac{2x}{1+ x^2} \, dx \\ = \log 2$$</p>
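The convergence to $\log 2$ can also be observed numerically; this short Python sketch (just an empirical check, not a proof) evaluates the partial sums for growing $n$:

```python
import math

def partial_sum(n):
    return sum(2 * k / (k * k + n * n + 1) for k in range(3, n + 1))

for n in (100, 1000, 10000):
    print(n, partial_sum(n))  # approaches log(2) ≈ 0.6931 as n grows
```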
|
995,775 | <p>So we have the regular $\delta$-$\epsilon$ definition of continuity as: </p>
<p>(1) For all $\epsilon>0$, there exists a $\delta>0$ such that, if $|x-a|<\delta$, then $|f(x)-f(a)|<\epsilon$.</p>
<p>My question is why is the following definition incorrect?</p>
<p>(2) There exists a $\delta>0$ such that, if $|x-a|<\delta$, then $|f(x)-f(a)|<\epsilon$ for all $\epsilon>0$.</p>
<p>The obvious response is that $\forall x$ $\exists y$ $\neq$ $\exists y$ $\forall x$ (or rather, they are not always equal), but look harder at the grammar: is that necessarily what is going on?</p>
<p>Let us define $p:=$ "There exists a $\delta>0$ such that, if $|x-a|<\delta$, then $|f(x)-f(a)|<\epsilon$"</p>
<p>So we now have the statement:</p>
<p>"$p$ is true for all $\epsilon>0$"</p>
<p><strong>Isn't that sentence identical <em>to the English sentence</em></strong>: "for all $\epsilon>0$, $p$ is true"?</p>
<p>In which case you would have (1):</p>
<p>For all $\epsilon>0$, there exists a $\delta>0$ such that, if $|x-a|<\delta$, then $|f(x)-f(a)|<\epsilon$.</p>
<p>The counterargument that $\forall x$ $\exists y$ $\neq$ $\exists y$ $\forall x$ makes sense <em>if you look at it immediately from a logical view,</em> but <em>because of the way English sentences work</em> (and their vagueness, in a case like this), the "for all $\epsilon>0$" clause can be <em>placed</em> anywhere without changing the meaning of the statement <em>in English</em>.</p>
<p>To illustrate better what I'm talking about, let us imagine that we write our definition as follows:</p>
<p>There exists a $\delta>0$ such that, if $|x-a|<\delta$, then $|f(x)-f(a)|<\epsilon$ (for all $\epsilon>0$).</p>
<p>Or, maybe:</p>
<p>There exists a $\delta>0$ such that, if $|x-a|<\delta$, then $|f(x)-f(a)|<\epsilon$ (N.B. we're talking about all $\epsilon>0$ here!).</p>
<p>Aren't these parentheticals sort of "overlying" the whole statement? Is the fault in my reasoning that $\epsilon$ is being "bound" to an if-then statement <em>before</em> it has been defined? Or am I just blatantly incorrect and this <em>is</em> a case of $\forall x$ $\exists y$ $\neq$ $\exists y$ $\forall x$.</p>
<p><strong>The essential point of all of this</strong> is that, if you have the statement: $$ \forall A \exists B : {B \subseteq A} $$
Then in English, this is equivalent to saying, <strong><em>both</em></strong> "For all A, there exists a B such that if B then A", <strong><em>and</em></strong>, "There exists a B such that if B then A, for all A".</p>
| hmakholm left over Monica | 14,366 | <p>The English sentence</p>
<blockquote>
<p>There exists $A$ such that $P(A,B)$ for all $B$.</p>
</blockquote>
<p>is <strong>ambiguous</strong> -- it can either mean $\exists A\forall B\,P(A,B)$ or $\forall B\exists A\,P(A,B)$, and there is no generally observed convention about which of them it ought to be understood as.</p>
<p>Therefore, for the sake of clarity sentences of this shape should be avoided in mathematical writing.</p>
<p>In symbolic logic, there is no ambiguity because the quantifier should always come <em>first</em>. Some beginning students (and, sadly, some teachers) write things like "$P(x) \,\forall x$", as a kind of shorthand for the English wording "$P$ of $x$ holds for all $x$", but this is not proper syntax for predicate logic. If you use symbols, you should stick to the grammar that goes with them.</p>
<hr>
<p>On the other hand, symbolic logic regrettably has a <em>different</em> ambiguity, namely whether $\forall x\;P(x)\Rightarrow Q$ means $(\forall x\;P(x))\Rightarrow Q$ or $\forall x\;(P(x)\Rightarrow Q)$. There are conventions for which one to choose here, but unfortunately they give different results! Explicit parentheses are recommended for both of these meanings, unless you're really really sure that your readers know and follow the same convention that you do.</p>
|
943,048 | <p><strong>Question:</strong></p>
<blockquote>
<p>Let $x_{i}=1$ or $-1$, $i=1,2,\cdots,1990$. Show that
$$x_{1}+2x_{2}+\cdots+1990x_{1990}\neq 0$$</p>
</blockquote>
<p>This problem seems easy, but I think it is not. </p>
<p>I think note
$$1+2+3+\cdots+1990\equiv \pmod { 1990}?$$</p>
| Petite Etincelle | 100,564 | <p>Suppose all the $x_i$ are $1$; then $$1+2+3+\cdots+1990 = \frac{1991\times 1990}{2},$$ which is odd.</p>
<p>Then each time you change one of the $x_i$ from $1$ to $-1$, you change the sum by an even number, so the sum is always odd and in particular can never equal $0$.</p>
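The same parity argument can be brute-forced on a miniature case; $n=6$ also has $1+2+\cdots+n = n(n+1)/2$ odd, just like $n=1990$ ($1990\cdot 1991/2 = 995\cdot 1991$, a product of two odd numbers). A quick Python sketch:

```python
from itertools import product

assert 1990 * 1991 // 2 % 2 == 1  # the n = 1990 sum is odd

n = 6
results = [sum(s * k for s, k in zip(signs, range(1, n + 1)))
           for signs in product([1, -1], repeat=n)]
print(all(r % 2 == 1 for r in results))  # True: every signed sum is odd, so never 0
```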
|
33,153 | <p>Here is one definition of a differential equation:</p>
<blockquote>
<p>"An equation containing the derivatives of one or more dependent variables, with respect to one or more independent variables, is said to be a differential equation (DE)" <em>(Zill - A First Course in Differential Equations)</em></p>
</blockquote>
<p>Here is another:</p>
<blockquote>
<p>"A differential equation is a relationship between a function of time & its derivatives" <em>(Braun - Differential equations and their applications)</em></p>
</blockquote>
<p>Here is another:</p>
<blockquote>
<p>"Equations in which the unknown function or the vector function appears under the sign of the derivative or the differential are called differential equations" <em>(L. Elsgolts - Differential Equations & the Calculus of Variations)</em></p>
</blockquote>
<p>Here is another:</p>
<blockquote>
<p>"Let <span class="math-container">$f(x)$</span> define a function of <span class="math-container">$x$</span> on an interval <span class="math-container">$I: a < x < b$</span>. By an ordinary differential equation we mean an equation involving <span class="math-container">$x$</span>, the function <span class="math-container">$f(x)$</span> and one or more of its derivatives" <em>(Tenenbaum/Pollard - Ordinary Differential Equations)</em></p>
</blockquote>
<p>Here is another:</p>
<blockquote>
<p>"A differential equation is an equation that relates in a nontrivial way an unknown function & one or more of the derivatives or differentials of an unknown function with respect to one or more independent variables." <em>(Ross - Differential Equations)</em></p>
</blockquote>
<p>Here is another:</p>
<blockquote>
<p>"A differential equation is an equation relating some function <span class="math-container">$f$</span> to one or more of its derivatives." <em>(Krantz - Differential equations demystified)</em></p>
</blockquote>
<p>Now, you can see that while there is just some tiny variation between them,
calling <span class="math-container">$f(x)$</span> the function instead of <span class="math-container">$f$</span> or calling it a function instead of an
equation but generally they all hint at the same thing.</p>
<p>However:</p>
<blockquote>
<p>"Let <span class="math-container">$U$</span> be an open domain of n-dimensional euclidean space, & let <span class="math-container">$v$</span> be a vector field in <span class="math-container">$U$</span>. Then by the differential equation determined by the vector field <span class="math-container">$v$</span> is meant the equation <span class="math-container">$x' = v(x), x \in U$</span>.</p>
<p>Differential equations are sometimes said to be equations containing unknown functions and their derivatives. This is false. For example, the equation <span class="math-container">$\frac{dx}{dt} = x(x(t))$</span> is not a differential equation." <em>(Arnold - Ordinary Differential Equations)</em></p>
</blockquote>
<p>This is quite different and the last comment basically says that all of the
above definitions, in all of the standard textbooks, are in fact incorrect.</p>
<p>Would anyone care to expand upon this point if it is of interest as some of you
might know about Arnold's book & perhaps be able to give some clearer examples than
<span class="math-container">$\frac{dx}{dt} = x(x(t))$</span>, I honestly can't even see how to make sense of <span class="math-container">$\frac{dx}{dt} = x(x(t))$</span>.
The more explicit (and with more detail) the better!</p>
<p>A second question I would really appreciate an answer to would be -
is there any other book that takes the view of differential equations
that Arnold does? I can't find any elementary book that starts by
defining differential equations in the way Arnold does and then goes on
to work in phase spaces etc. Multiple references welcomed.</p>
| Qiaochu Yuan | 232 | <p>Arnold simply means that most books are not being precise. A slightly more precise version of the first few definitions is that a differential equation (in one variable) is an equation of the form $f(t, x, x', x'', ...) = 0$. This rules out Arnold's example. </p>
|
31,502 | <p>This is probably a trivial question, but I don't see the answer, and I haven't found it on <a href="http://en.wikipedia.org/wiki/Cartesian_closed_category" rel="nofollow noreferrer">Wikipedia</a>, <a href="http://ncatlab.org/nlab/show/cartesian+closed+category" rel="nofollow noreferrer">nLab</a>, nor <a href="https://mathoverflow.net/questions/19004/is-the-category-commutative-monoids-cartesian-closed">MathOverflow</a>.</p>
<p>Let $\text{ComAlg}$ denote the category whose objects are commutative algebras over a fixed field $\mathbb K$ and whose morphisms are homomorphisms of algebras, and let $\text{ComAlg}^{\rm op}$ denote its opposite category. Given commutative algebras $A,B$, let $\operatorname{hom}(A,B)$ denote the set of algebra homomorphisms $A\to B$, so that $\operatorname{hom}$ is the usual functor $\text{ComAlg}^{\rm op} \times \text{ComAlg} \to \text{Set}$. The short version of my question:</p>
<blockquote>
<p>Is $\text{ComAlg}^{\rm op}$ Cartesian closed?</p>
</blockquote>
<p>The long version of my question (if I've gotten all the signs right):</p>
<blockquote>
<p>Is there a functor $[,] : \text{ComAlg} \times \text{ComAlg}^{\rm op} \to \text{ComAlg}$ such that there is an adjunction (natural in $A,B,C$, i.e. an isomorphism of functors $\text{ComAlg}^{\rm op} \times \text{ComAlg} \times \text{ComAlg} \to \text{Set}$) of the form:
$$ \operatorname{hom}([A,B],C) \cong \operatorname{hom}(A,B\otimes C) ?$$</p>
</blockquote>
<p>Recall: $\otimes$ is the coproduct in $\text{ComAlg}$, hence the product in $\text{ComAlg}^{\rm op}$.</p>
<p>Motivation: $\text{ComAlg}^{\rm op}$ is complete and cocomplete, and so many constructions that make sense in $\text{Set}$ and $\text{Top}$ transfer verbatim to the algebraic setting. I would like to know how many.</p>
| BCnrd | 3,927 | <p>Set $A = B = k[x]$ and figure out for yourself why that is a counterexample. (Hint: rigorously prove that there's no "universal polynomial" over $k$-algebras.)</p>
|
2,706,776 | <p>In solving the wave equation
$$u_{tt} - c^2 u_{xx} = 0$$
it is commonly 'factored'</p>
<p>$$u_{tt} - c^2 u_{xx} =
\bigg( \frac{\partial }{\partial t} - c \frac{\partial }{\partial x} \bigg)
\bigg( \frac{\partial }{\partial t} + c \frac{\partial }{\partial x} \bigg)
u = 0$$</p>
<p>to get
$$u(x,t) = f(x+ct) + g(x-ct).$$</p>
<p><strong>My question is: is this legitimate?</strong></p>
<p>The partial differentiation operators are not variables, but here in 'factoring' they are treated as such.</p>
<p>Also it does not seem that both factors can individually be set to zero to obtain the solution--either one or the other, or both might be zero.</p>
| Community | -1 | <p>Let us define $$ f(u)=\frac{\partial u}{\partial t}-c\frac{\partial u}{\partial x},\ g(u)=\frac{\partial u}{\partial t}+c\frac{\partial u}{\partial x} $$</p>
<p>Then the wave equation can be expressed as a composition of the two functions.</p>
<p>$$ f(g(u))=g(f(u))=\frac{\partial^2 u}{\partial t^2}-c^2\frac{\partial^2 u}{\partial x^2}=0 $$</p>
<p>This equation holds for every function $u$, so that</p>
<p>$$ g(u)=f(u)=0 $$ </p>
<p>I hope this helps you. '^'</p>
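The claim that any $u(x,t)=f(x+ct)+g(x-ct)$ satisfies the wave equation can also be spot-checked numerically. The Python sketch below (with arbitrarily chosen smooth $f$, $g$ and $c=2$ — my choices, purely for illustration) compares second differences in $t$ and $x$:

```python
import math

c = 2.0  # arbitrary wave speed for the check

def u(x, t):
    # arbitrary smooth travelling waves: f(s) = sin(s), g(s) = exp(-s^2)
    return math.sin(x + c * t) + math.exp(-(x - c * t) ** 2)

def second_diff(fun, z, h=1e-4):
    # centered finite-difference approximation of the second derivative
    return (fun(z + h) - 2.0 * fun(z) + fun(z - h)) / (h * h)

residuals = []
for x, t in [(0.3, 0.7), (-1.2, 0.25), (2.0, -0.5)]:
    u_tt = second_diff(lambda s: u(x, s), t)
    u_xx = second_diff(lambda s: u(s, t), x)
    residuals.append(abs(u_tt - c * c * u_xx))
print(max(residuals))  # tiny: u_tt - c^2 u_xx ≈ 0 at every sample point
```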
|
1,071,433 | <p>Consider the region bounded by $ \displaystyle y=4{{x}^{2}}$ and $ \displaystyle 2x+y=6$. What is the volume of the solid of revolution about the $\displaystyle x$-axis?</p>
<p>What do you think about setting up the integral as follows?</p>
<p>I split the region into two parts</p>
<p>$\displaystyle V=4\pi \int\limits_{0}^{4}{y\left( 1-\frac{\sqrt{y}}{2} \right)\,dy}+2\pi \int\limits_{4}^{9}{y\left( \frac{6-y}{2}-\frac{\sqrt{y}}{2} \right)\,dy}$</p>
<p>Will it work?</p>
| MathMajor | 113,330 | <p>Below is the plot for $y = 4x^2$, and $y=6-2x:$</p>
<p><img src="https://i.stack.imgur.com/1JuF6.png" alt="enter image description here"></p>
<p>There is absolutely no need to separate $\mathfrak{R}$ into two regions. Find the intersection of the two curves and use washers, with outer radius $r_{out} = 6-2x$ and inner radius $r_{in} = 4x^2$. Recall that </p>
<p>$$
V = \int_{x_0}^{x_1} \pi \left( r_{out}^2 - r_{in}^2 \right) \, dx,
$$</p>
<p>where $x_0$ is the leftmost intersection and $x_1$ is the rightmost.</p>
|
1,071,433 | <p>Consider the region bounded by $ \displaystyle y=4{{x}^{2}}$ and $ \displaystyle 2x+y=6$. What is the volume of the solid of revolution about the $\displaystyle x$-axis?</p>
<p>What do you think about setting up the integral as follows?</p>
<p>I split the region into two parts</p>
<p>$\displaystyle V=4\pi \int\limits_{0}^{4}{y\left( 1-\frac{\sqrt{y}}{2} \right)\,dy}+2\pi \int\limits_{4}^{9}{y\left( \frac{6-y}{2}-\frac{\sqrt{y}}{2} \right)\,dy}$</p>
<p>Will it work?</p>
| Ivo Terek | 118,056 | <p>I don't see why you need to split the region. Notice that $y = 4x^2$ and $y = 6 - 2x$ intersect where $$4x^2 = 6-2x \implies 2x^2 + x - 3 =0 \implies x = \frac{-1\pm\sqrt{1+24}}{4} \implies x = \frac{-1\pm 5}{4},$$ so $x = -3/2$ and $x = 1$. Since the volume of revolution of $y = f(x)$ around the $x$ axis is $$V = \pi \int_{a}^b f(x)^2 \ {\rm d}x$$ and $6 - 2x > 4x^2$ in $(-3/2, 1)$, the volume you're looking for is: $$V = \pi\int_{-3/2}^1 (6-2x)^2 \ {\rm d}x - \pi \int_{-3/2}^1 (4x^2)^2 \ {\rm d}x.$$</p>
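Carrying out these two integrals by hand, I get $\frac{250\pi}{3}$ (my own arithmetic, worth double-checking). A short, self-contained Python midpoint-rule sketch agrees:

```python
import math

def washer_volume(n=100000):
    a, b = -1.5, 1.0          # the intersection points found above
    h = (b - a) / n
    total = 0.0
    for k in range(n):
        x = a + (k + 0.5) * h
        # outer radius 6 - 2x, inner radius 4x^2
        total += ((6 - 2 * x) ** 2 - (4 * x * x) ** 2) * h
    return math.pi * total

print(washer_volume(), 250 * math.pi / 3)  # the two values agree closely
```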
|
4,032,983 | <p>I would like to know math websites that are useful for students, PhD students and researchers (useful in the sense most of the students or researchers—of a particular area—are using it). Maybe you can share which math websites you sometime use and why you use it.</p>
<p>Let me give my websites and why I use them:</p>
<p>MathOverflow and Stack Exchange: Very good to look up questions about how to do mathematics, what mathematics is, how it evolves, and of course to have an exchange with smart people, who can help you if you are stuck at something.</p>
<p>arXiv: I don't use it now, but I think that's the main website to read papers and to publish (research level)</p>
<p>The Stacks Project: It is very useful for me to look up algebraic things with more explanation than in some lectures...</p>
<p>MathSciNet: To search for papers that may help you.</p>
<p>Number Theory Web: Since I want to become a number theorist, this site helps me to see which kind of things exist.</p>
<p>Do you have more websites that help you understand mathematics, or just to connect with other mathematicians?</p>
<p>Everything is welcome. Just post it as an answer and not a comment.</p>
| Amarjeet Jayanthi | 250,743 | <p><a href="http://www.algebra.com" rel="nofollow noreferrer">http://www.algebra.com</a> is a very good site for algebra problems and solutions.</p>
|
870,240 | <p>Which number is larger? $\underbrace{888\cdots8}_\text{19 digits}\times\underbrace{333\cdots3}_\text{68 digits}$ or $\underbrace{444\cdots4}_\text{19 digits}\times\underbrace{666\cdots67}_\text{68 digits}$? Why? How much is it larger?</p>
| cirpis | 152,276 | <p>Note that $$\underbrace{888\cdots8}_\text{19 digits}=2*\underbrace{444\cdots4}_\text{19 digits}$$
and $$2*\underbrace{333\cdots3}_\text{68 digits}=\underbrace{666\cdots6}_\text{68 digits}$$
further
$$\underbrace{666\cdots6}_\text{68 digits}+1=\underbrace{666\cdots7}_\text{68 digits}$$
Thus
$$\underbrace{888\cdots8}_\text{19 digits}*\underbrace{333\cdots3}_\text{68 digits}=\underbrace{444\cdots4}_\text{19 digits}*2*\underbrace{333\cdots3}_\text{68 digits}=
\underbrace{444\cdots4}_\text{19 digits}*\underbrace{666\cdots6}_\text{68 digits}$$
which is smaller by $\underbrace{444\cdots4}_\text{19 digits}$ than
$$\underbrace{444\cdots4}_\text{19 digits}*\underbrace{666\cdots7}_\text{68 digits}=\underbrace{444\cdots4}_\text{19 digits}*\underbrace{666\cdots6}_\text{68 digits}+\underbrace{444\cdots4}_\text{19 digits}$$</p>
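Python's arbitrary-precision integers make this easy to confirm directly (a one-off check, not part of the argument):

```python
a = int("8" * 19) * int("3" * 68)        # 888...8 (19 digits) * 333...3 (68 digits)
b = int("4" * 19) * int("6" * 67 + "7")  # 444...4 (19 digits) * 666...67 (68 digits)

# the second product is larger, by exactly 444...4 (19 digits)
assert b > a
assert b - a == int("4" * 19)
```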
|
4,115,069 | <p>I understand 'functionals' as functions of functions, for example:</p>
<p><span class="math-container">$$ S[y(x)]= \int_{t_1}^{t_2} \sqrt{1+(y')^2} dx$$</span></p>
<p>Which is the famous arc length integral</p>
<p>Now, in a similar way, we can write a limit as:</p>
<p><span class="math-container">$$L(a, [y(x)] ) = \lim_{x \to a} y(x) \tag{1}$$</span></p>
<p>In this way, we can think of a limit as a function of a 'function' and a 'number'. So, would it be correct to call the above object a functional? Why/why not?</p>
<p>Examples of (1):</p>
<p><span class="math-container">$$L(0,\frac{\sin x}{x}) = \lim_{x \to 0} \frac{\sin x}{x} = 1$$</span></p>
<p><span class="math-container">$$L(0,e^x) = 1$$</span></p>
<p>etc</p>
<hr />
<p>This doubt mainly emerged while I was answering <a href="https://math.stackexchange.com/questions/4114333/is-l-a-function-of-a/4115070#4115070">this post</a>.</p>
| Son Gohan | 865,323 | <p>Why not? There are lots of examples in which old concepts can be seen as functionals or operators from some abstract space to another abstract space or to the real numbers. Indeed, this now seems obvious to us, but only because we are used to the functional-analytic point of view, where it is natural to consider functions simply as points of a space; at the time, though, this was a great revolution in analysis.</p>
<p>The only thing one needs to check is that the definition is <em>well defined</em>. Just as one defines the derivative as an operator that associates to every function (with some regularity) another function (its derivative), one can define the operator in the following way:</p>
<p><span class="math-container">$$L : \mathbb{\overline{R}} \times V \rightarrow \mathbb{\overline{R}}$$</span> defined in the following way (note we allow it to take values in the extended real line, since we want to allow the cases of <span class="math-container">$+\infty $</span> and <span class="math-container">$ - \infty$</span>):</p>
<p><span class="math-container">$$ L (y, f) := \lim_{x \rightarrow y}f(x)$$</span></p>
<p>Now, to have a <em>well-defined</em> operator <span class="math-container">$L$</span> we need the limit to be unique, and this is not an issue in the real line since the real line is a Hausdorff space. Nevertheless, we need to be sure the limit exists, otherwise what would be the real number that the <span class="math-container">$L$</span> operator gives us in the case we consider the point <span class="math-container">$(\infty, \sin(x))$</span>? It would be:</p>
<p><span class="math-container">$$ L (\infty, \sin(x)) := \lim_{x \rightarrow \infty}\sin(x)$$</span> which does not exist!</p>
<p>Once this problem is solved, there is no obstacle to viewing the limit as an operator. The only caveat is that, to resolve this existence issue, one needs to impose strong restrictions either on the vector space <span class="math-container">$V$</span> from which we take functions or on the space from which we pick our points (perhaps a compact subset of <span class="math-container">$\mathbb{\overline{R}}$</span>).</p>
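<p>As a toy illustration of this operator point of view (my own sketch, not part of the argument above): one can approximate the operator numerically at a finite point, keeping in mind that the naive approximation below is only meaningful when the limit actually exists, and cannot handle points like <span class="math-container">$\infty$</span>.</p>

```python
import math

def L(a, f, h=1e-7):
    """Naive numerical 'limit operator': average of f just left and right of a.
    Only meaningful when the limit actually exists."""
    return (f(a + h) + f(a - h)) / 2

print(round(L(0, lambda x: math.sin(x) / x), 6))  # 1.0
print(round(L(0, math.exp), 6))                   # 1.0
```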
|
16,627 | <p>Yesterday, I wrote <a href="https://math.stackexchange.com/a/904777">this answer</a>, but then realized that the OP had considered breaking things into specific cases, so I deleted my answer. Right after I deleted my answer, I saw that the OP had accepted my answer. I undeleted my answer and commented to the OP, asking if they indeed found the answer useful. They said yes, so I left the answer.</p>
<p>The answer is accepted, and shows as such in the <a href="https://math.stackexchange.com/users/13854/robjohn?tab=answers">list of my answers</a>. However, I never received the reputation for the acceptance and it does not show up in the <a href="https://math.stackexchange.com/users/13854/robjohn?tab=reputation">record of my reputation</a>.</p>
<p>I'm not so worried about the 15 points, but I am curious about why this happened and hope that it can be fixed so that others won't miss reputation they've earned.</p>
| Willie Wong | 1,543 | <p>As a side note:</p>
<blockquote>
<p>... and hope that it can be fixed so that others won't miss reputation they've earned.</p>
</blockquote>
<p>Deleting an <strong>accepted</strong> answer is a mod-only power. So that this potential bug only affects moderators. (Remember, except in very compelling circumstances we never delete accepted answers!)</p>
|
1,448,416 | <p>It states that the $n$th difference of a polynomial of degree $n$ is constant; thus the $(n+1)$th difference will be zero.</p>
<ul>
<li>How can I show that the $n$th difference is constant?</li>
<li>The forward difference of a constant is zero, but how can I prove it?</li>
</ul>
| R.N | 253,742 | <p>Hint: i) Use induction and $f^{(n)}(x)=n!$ where $f(x)=x^n$</p>
<p>ii) For $f(x)=c$ you have $f'(x_0)=\lim_{h\to 0}\frac{f(x_0+h)-f(x_0)}{h}=\lim_{h\to 0}\frac{c-c}{h}=0$</p>
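<p>A quick numerical illustration of both points (my own sketch, added alongside the hint; the helper name is mine): the $n$th forward difference of $x^n$ is the constant $n!$, and one further difference gives all zeros.</p>

```python
def forward_diff(values):
    """One forward-difference pass: (Δf)(x) = f(x+1) - f(x)."""
    return [b - a for a, b in zip(values, values[1:])]

n = 4
values = [x ** n for x in range(10)]   # f(x) = x^4 sampled at x = 0..9
for _ in range(n):
    values = forward_diff(values)      # apply Δ four times
print(values)                          # [24, 24, 24, 24, 24, 24] — constant 4!
print(forward_diff(values))            # [0, 0, 0, 0, 0] — the 5th difference vanishes
```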
|
3,098,587 | <p>I have no answers to refer to, hence it would be great if someone could check whether my procedure for solving the following problem is correct. Also, I am struggling with event B in part b) - any tips would be much appreciated! I'm preparing for an exam, so this is of vital importance. Thank you!</p>
<p>Consider the following function <span class="math-container">$f$</span> on <span class="math-container">$I=[2,∞): f(x)=ax^{-3}$</span> where <span class="math-container">$a$</span> is a certain constant.</p>
<p>a) Find <span class="math-container">$a$</span> such that <span class="math-container">$f$</span> is a probability density function.</p>
<blockquote>
<p>By rule it must hold that: <span class="math-container">$P(I)=\int_If(x)dx=1$</span></p>
</blockquote>
<p>From here follows:</p>
<blockquote>
<p><span class="math-container">$P(I)=\int_If(x)dx=\int^{\infty}_2ax^{-3}\,dx=\left[-\cfrac{a}{2}x^{-2}\right]^{x=\infty}_{x=2}=\lim_{x\to\infty}\left(-\cfrac{a}{2}x^{-2}\right)-\left(-\cfrac{a\cdot 2^{-2}}{2}\right)$</span> The first part goes to
0 as <span class="math-container">$x$</span> goes to <span class="math-container">$\infty$</span>, so we are left with the second part:
<span class="math-container">$\cfrac{a\cdot 2^{-2}}{2}=1 \Rightarrow a=8$</span></p>
</blockquote>
<p>b) Determine the probabilities of the events <span class="math-container">$A=(4,\infty)$</span> and <span class="math-container">$B=\{3\}$</span>.</p>
<blockquote>
<p><span class="math-container">$P[(4,\infty)]=\int^{\infty}_4 8x^{-3}\,dx=\left[-4x^{-2}\right]^{x=\infty}_{x=4}=\lim_{x\to\infty}\left(-4x^{-2}\right)-\left(-4\cdot 4^{-2}\right)$</span> Once again, as <span class="math-container">$x$</span> goes to infinity the first part goes to
0, so we are left with the second part only: <span class="math-container">$4\cdot 4^{-2}=1/4$</span></p>
</blockquote>
<p>For event B, though, I have absolutely no idea what to do. What limits do we put on the integral, both upper and lower limit 3? But that would lead us to an equation that equals 0 so I'm confused.</p>
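<p>Here is a quick numerical cross-check of part a) and event <span class="math-container">$A$</span> (my own sketch; the helper name is made up). For the single-point event <span class="math-container">$B=\{3\}$</span> both integration limits coincide, so the integral, and hence the probability, is <span class="math-container">$0$</span>:</p>

```python
def tail_prob(a, c):
    """P([c, ∞)) for f(x) = a·x^(-3): the antiderivative is -a·x^(-2)/2, so a/(2c^2)."""
    return a / (2 * c ** 2)

a = 8
print(tail_prob(a, 2))                    # 1.0  -> total mass is 1, so a = 8 works
print(tail_prob(a, 4))                    # 0.25 -> P(A) = 1/4
print(tail_prob(a, 3) - tail_prob(a, 3))  # 0.0  -> P({3}) = 0 (limits coincide)
```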
| Kulisty | 170,765 | <p>I'm quite positive rotation does not involve Euclid's fifth postulate. I'll try to present strict definition of rotation in synthetic setting.</p>
<p><strong>Definition 1</strong>. An ordered pair <span class="math-container">$(A,B)$</span> of halflines having the same origin will be called a directed angle. Also denote <span class="math-container">$AB:=(A,B)$</span></p>
<p><strong>Definition 2</strong>. We introduce a relation <span class="math-container">$\simeq$</span> among directed angles: We say <span class="math-container">$AB\simeq CD$</span> iff these angles are congruent (or equivalently have the same measure) and have the same orientation (in case <span class="math-container">$AB$</span> and <span class="math-container">$CD$</span> are not <span class="math-container">$0$</span> or <span class="math-container">$180$</span> degrees). Having the same orientation roughly speaking means that going from <span class="math-container">$A$</span> to <span class="math-container">$B$</span> and from <span class="math-container">$C$</span> to <span class="math-container">$D$</span> are both clockwise or both counterclockwise.</p>
<p>It can be proven that</p>
<p><strong>Proposition 1</strong>. <span class="math-container">$\simeq$</span> is an equivalence relation.</p>
<p><strong>Proposition 2</strong>. Given a halfline <span class="math-container">$A$</span> of origin <span class="math-container">$o$</span> and a directed angle <span class="math-container">$PQ$</span>, there is a unique halfline <span class="math-container">$B$</span> of origin <span class="math-container">$o$</span> such that <span class="math-container">$AB\simeq PQ$</span>.</p>
<p>Now if we fix a directed angle <span class="math-container">$PQ$</span> and a point <span class="math-container">$o$</span> on a plane, we can define a rotation around <span class="math-container">$o$</span> by a directed angle <span class="math-container">$PQ$</span> the following way: Obviously we map <span class="math-container">$o$</span> to <span class="math-container">$o$</span>. Take <span class="math-container">$a\neq o$</span>. By proposition 2 we can find a halfline <span class="math-container">$B$</span> of origin <span class="math-container">$o$</span> such that <span class="math-container">$AB\simeq PQ$</span>, where <span class="math-container">$A$</span> is a halfline <span class="math-container">$\overrightarrow{oa}$</span>. Next we find a unique point <span class="math-container">$b$</span> on <span class="math-container">$B$</span> such that <span class="math-container">$oa\equiv ob$</span>. Finally we map <span class="math-container">$a$</span> to <span class="math-container">$b$</span>.</p>
<p>What is the most important thing about rotations is that it can be proven in neutral geometry that they are isometries. It follows quite easily from lemma</p>
<p><strong>Lemma</strong>. Let halflines <span class="math-container">$P,Q,P_1,Q_1$</span> all have the same origin. If <span class="math-container">$PP_1\simeq QQ_1$</span>, then <span class="math-container">$PQ\simeq P_1Q_1$</span>.</p>
<p>To prove that rotations are isometries you have to consider few cases but basically you apply the lemma and SAS congruence rule.</p>
|
240,699 | <p>I have the following equation which I want to solve:</p>
<p><span class="math-container">$$
I_D = \operatorname{Li}_2(-e^{V_D-I_D})-\operatorname{Li}_2(e^{I_D})
$$</span></p>
<p>Here <span class="math-container">$\operatorname{Li}_2(x)$</span> is the PolyLog function of order <span class="math-container">$2$</span>. Is there a way to solve this equation iteratively in Mathematica to get <span class="math-container">$I_D$</span> as a function of <span class="math-container">$V_D$</span>.</p>
<p>Edit: I want to solve this equation numerically for real values of <span class="math-container">$I_D$</span> and <span class="math-container">$V_D$</span>.</p>
| Stephen Luttrell | 1,393 | <p>You can investigate what your function does when you iterate it by plotting how it updates points in the complex <span class="math-container">$I_D$</span> plane, and I find that using <code>VectorPlot</code> to plot a vector field is a useful way of visualising this.</p>
<p>Define your function <span class="math-container">$f(I_D,V_D)$</span> - i.e. the righthand side of your equation.</p>
<pre><code>f[id_, vd_] := PolyLog[2, -Exp[vd - id]] - PolyLog[2, Exp[id]];
</code></pre>
<p>Create an animation to explore what happens as you vary <span class="math-container">$V_D$</span>.</p>
<pre><code>With[{f0 = 5, v0 = 3, dv = 0.5, p = 25},
Animate[
VectorPlot[
ReIm[f[idR + I idI, vd] - (idR + I idI)],
{idR, -f0, f0}, {idI, -f0, f0},
VectorPoints -> p
],
{vd, -v0, v0, dv}
]
]
</code></pre>
<p>This plots a vector field of the update <span class="math-container">$f(I_D,V_D)-I_D$</span> in the complex <span class="math-container">$I_D$</span> plane, using arrows to show update direction, and arrow colour to show update magnitude. For <span class="math-container">$V_D=0$</span> this looks like the plot below.</p>
<p><a href="https://i.stack.imgur.com/YiE1x.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YiE1x.png" alt="enter image description here" /></a></p>
<p>Points in the complex <span class="math-container">$I_D$</span> plane that map to themselves - i.e. are solutions of your equation - show up prominently in this sort of plot. For instance, in the plot above you can see a couple of "vortices", where the flow circulates around points that map to themselves.</p>
<p>This and other clues can give you a feel for how your function moves points around in the complex plane, and in particular the location of points that map to themselves - i.e. the various solution-branches that you seek.</p>
|
14,340 | <blockquote>
<p><strong>Possible Duplicate:</strong><br>
<a href="https://mathematica.stackexchange.com/questions/3247/consistent-plot-styles-across-multiple-mma-files-and-data-sets">Consistent Plot Styles across multiple MMA files and data sets</a> </p>
</blockquote>
<p>So, here's my problem: I have a lot of data that is shown in different plots.
I want all the plots to have the same options (<code>PlotStyle</code>, <code>Axes</code>, <code>BaseStyle</code>, <code>FrameTicks</code>, etc...).
I also want to be able to modify these options (because the size and <code>FontSize</code> change depending on where I want to use the plots, in my thesis or in a presentation) and do that without having to change each <code>Plot</code> function by hand.</p>
<p>I guess what I'm looking for is something like this:</p>
<pre><code>optionPacket = PlotStyle -> {RGBColor[1, 0, 0]}, Frame -> True,
BaseStyle -> {FontSize -> 20};
</code></pre>
<p>and then use it like this:</p>
<pre><code>ListPlot[mydata, optionPacket]
ListPlot[mydata2, optionPacket]
</code></pre>
<p>Is there any way to accomplish this? (What I just posted obviously doesn't work or I wouldn't be asking).</p>
| b.gates.you.know.what | 134 | <p>Another possibility is to add your settings to the options :</p>
<pre><code>SetOptions[Plot, PlotStyle -> {RGBColor[1, 0, 0], Frame -> True, BaseStyle -> {FontSize -> 20}}]
Plot[Sin[x], {x, -1, 1}]
</code></pre>
<p><img src="https://i.stack.imgur.com/QrLNY.png" alt="plot1"></p>
<p>but</p>
<pre><code>ParametricPlot[{x, Sin[x]}, {x, -1, 1}]
</code></pre>
<p><img src="https://i.stack.imgur.com/FMamZ.png" alt="plot2"></p>
|
3,389,659 | <p>Let <span class="math-container">$S_n=\{a_1,a_2,a_3,\ldots,a_{2n}\}$</span>, where <span class="math-container">$a_1,a_2,a_3,\ldots,a_{2n}$</span> are all distinct integers. Denote by <span class="math-container">$T$</span> the product
<span class="math-container">$$T=\prod_{1\le i<j\le 2n}(a_i-a_j).$$</span> Prove that <span class="math-container">$2^{(n^2-n+2[\frac{n}{3}])}\times \operatorname{lcm}(1,3,5,\ldots,(2n-1))$</span> divides <span class="math-container">$T$</span> (where <span class="math-container">$[\,\cdot\,]$</span> is the floor function).</p>
<p>I have tried many approaches. I tried using the fact that the number of odd integers or the number of even integers is <span class="math-container">$\ge n+1$</span>, and that in <span class="math-container">$T$</span> every integer is paired with every other integer exactly once. Since even-even and odd-odd differences are both even, <span class="math-container">$2^{\frac{n(n+1)}{2}}$</span> divides <span class="math-container">$T$</span>. As for the <span class="math-container">$\operatorname{lcm}(1,3,5,\ldots,(2n-1))$</span> factor, I have no idea what to do. Please help; I am quite new to number theory. Thank you.</p>
| richrow | 633,714 | <p>Let me give a sketch of the solution.</p>
<p>Well, there is a following fact:</p>
<blockquote>
<p><strong>Proposition.</strong> Given <span class="math-container">$n$</span> distinct positive integers <span class="math-container">$a_1, a_2, \ldots, a_n$</span>. Then,
<span class="math-container">$$
\prod_{i>j}\frac{a_i-a_j}{i-j}\in \mathbb{Z}.
$$</span>
In other words, <span class="math-container">$A_n$</span> divides <span class="math-container">$\prod\limits_{i>j}(a_i-a_j)$</span>, where <span class="math-container">$A_n=\prod\limits_{i>j}(i-j)$</span>.</p>
</blockquote>
<p>This fact can be proved by showing that, for every prime <span class="math-container">$p$</span> and positive integer <span class="math-container">$s$</span>, the number of factors of the product <span class="math-container">$\prod\limits_{i>j}(a_i-a_j)$</span> which are divisible by <span class="math-container">$p^s$</span> is greater than or equal to the number of factors of the product <span class="math-container">$\prod\limits_{i>j}(i-j)$</span> which are divisible by <span class="math-container">$p^s$</span>.</p>
<p>It's clear that in the statement of the proposition above constant <span class="math-container">$A_n$</span> is sharp (because we can simply consider <span class="math-container">$a_k=k$</span> for <span class="math-container">$k=\overline{1, n}$</span>).</p>
<p>Thus, it's sufficient to prove that <span class="math-container">$A_{2n}$</span> is divisible by <span class="math-container">$2^{(n^2-n+2[\frac{n}{3}])}\times\text{lcm}(1,3,5,...,(2n-1))$</span>. Since <span class="math-container">$A_{2n}$</span> is divisible by <span class="math-container">$1, 3, \ldots, 2n-1$</span> we need to show that
<span class="math-container">$$
2^{n^2-n+2[\frac{n}{3}]}|A_{2n}.
$$</span>
This can be done in the way which is described in the previous answer.</p>
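<p>A small numerical check of the quoted Proposition and of the theorem's divisor for small <span class="math-container">$n$</span> (my own sketch; the sample sets and helper names are arbitrary):</p>

```python
import math
from itertools import combinations

def pair_product(vals):
    """Product of (a_j - a_i) over all index pairs i < j."""
    return math.prod(b - a for a, b in combinations(vals, 2))

A_4 = pair_product(range(1, 5))                    # A_4 = 1!·2!·3! = 12
print(pair_product([-7, -3, 2, 5]) % A_4)          # 0: the Proposition holds
# theorem's divisor for n = 2: 2^(4-2+0) * lcm(1,3) = 12
print(pair_product([-7, -3, 2, 5]) % 12)           # 0
# theorem's divisor for n = 3: 2^(9-3+2) * lcm(1,3,5) = 256 * 15 = 3840
print(pair_product([-9, -4, 0, 3, 8, 14]) % 3840)  # 0
```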
|
3,915,771 | <p>I'm given the series:</p>
<p><span class="math-container">$$\sum_{n=2}^{\infty} \frac{n^2}{n^4-n-3}$$</span></p>
<p>I know it converges; however, I'm meant to show that by the comparison test. What would be a good choice here? <span class="math-container">$\frac{1}{k^2}$</span> and <span class="math-container">$\frac{1}{k}$</span> don't work here, obviously.</p>
| hamam_Abdallah | 369,188 | <p>For <span class="math-container">$ n\ge 3$</span>, we have</p>
<p><span class="math-container">$$n\le \frac{n^4}{3}$$</span>
and</p>
<p><span class="math-container">$$3\le \frac{n^4}{3}$$</span></p>
<p>thus</p>
<p><span class="math-container">$$n^4-n-3\ge \frac{n^4}{3}$$</span></p>
<p>and</p>
<p><span class="math-container">$$0\le \frac{n^2}{n^4-n-3}\le \frac{3}{n^2}$$</span></p>
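<p>A quick numerical check of this bound (my own addition): the terms are squeezed between <span class="math-container">$0$</span> and <span class="math-container">$3/n^2$</span> for every <span class="math-container">$n\ge 3$</span>.</p>

```python
total = 0.0
for n in range(3, 10_000):
    term = n ** 2 / (n ** 4 - n - 3)
    assert 0 <= term <= 3 / n ** 2   # the comparison derived above
    total += term
print(total)   # partial sum, bounded by 3 * sum 1/n^2
```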
|
413,108 | <p>Given a commutative ring <span class="math-container">$ R $</span> and a multiplicatively closed subset <span class="math-container">$ S $</span> of <span class="math-container">$ R $</span>, there are two ways to consturct <span class="math-container">$ S^{-1}R $</span>:</p>
<ol>
<li><p>define an equivalence relation <span class="math-container">$ \sim $</span> on <span class="math-container">$ R\times S $</span> and then take <span class="math-container">$ S^{-1}R := (R\times S)/\sim $</span>.</p>
</li>
<li><p>Let <span class="math-container">$ R[S] $</span> be the free commutative algebra over <span class="math-container">$ R $</span> generated by <span class="math-container">$ S $</span> and <span class="math-container">$ i:S\to R[S] $</span> the canonical embedding and <span class="math-container">$ I $</span> the ideal of <span class="math-container">$ R[S] $</span> generated by <span class="math-container">$ i(s)s-1 $</span> for each <span class="math-container">$ s $</span> in <span class="math-container">$ S $</span> and then take <span class="math-container">$ S^{-1}R := R[S]/I $</span>.</p>
</li>
</ol>
<p>Suppose <span class="math-container">$ S $</span> doesn't contain zero divisors. If one works in the first way, it's easy to prove that the canonical map <span class="math-container">$ j:R\to S^{-1}R $</span> is injective. But if one works in the second way, this is equivalent to the statement that <span class="math-container">$ I \cap R = \{ 0 \}$</span>. Can we directly prove this without considering <span class="math-container">$ (R\times S)/\sim $</span>?</p>
| Johannes Hahn | 3,041 | <p>Wlog assume <span class="math-container">$n=m$</span>. Set <span class="math-container">$S:=\{a_1^{k_1}\cdots a_n^{k_n} \mid k_i\in\mathbb{N}\}$</span>. Then <span class="math-container">$R[x_1,\ldots,x_n]/I$</span> is precisely the localisation <span class="math-container">$S^{-1} R$</span> that inverts <span class="math-container">$a_1, \ldots, a_n$</span> and your question is equivalent to asking whether or not the canonical map <span class="math-container">$R\to S^{-1} R$</span> is injective.</p>
<p>The kernel of the canonical map is well-known to be equal to <span class="math-container">$\{r\in R \mid \exists s\in S: sr=0\}$</span>. In particular: It is zero iff none of the elements <span class="math-container">$a_1,\ldots,a_n$</span> are zero divisors of <span class="math-container">$R$</span>.</p>
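<p>A toy example of this kernel description (my own addition, not part of the argument above): in <span class="math-container">$R=\mathbb{Z}/6$</span>, inverting <span class="math-container">$2$</span> kills <span class="math-container">$3$</span>, since <span class="math-container">$2\cdot 3=0$</span>.</p>

```python
# R = Z/6; S = multiplicative set generated by 2 (note 2 is a zero divisor here).
R = range(6)
S = {pow(2, k, 6) for k in range(1, 5)}               # powers of 2 mod 6: {2, 4}
kernel = [r for r in R if any((s * r) % 6 == 0 for s in S)]
print(kernel)                                          # [0, 3]
```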
|
118,763 | <p>Hello,</p>
<p>Let $X'X$ be a positive definite matrix and let $\mathbf{1}$ denote the vector of ones. </p>
<p>I'm hoping to construct a positive, diagonal matrix $W$ such that
$$(W X'X W) \mathbf{1} = \mathbf{1}$$</p>
<p>$X$ and $W$ are all assumed to have real-valued entries, and $X'$ denotes the transpose of $X$.</p>
<p>I don't yet have a proof that such a matrix $W$ always exists, but I strongly suspect it does. Any ideas on algorithms, proofs, or counter-examples would be gratefully received.</p>
<p>The problem arises from work in statistics. </p>
<p>thanks,</p>
<p>David.</p>
| David Bryant | 30,601 | <p>The relevant reference is </p>
<p>Marshall, A. and Olkin, I. Scaling of Matrices to Achieve Specified Row and Column Sums. Numerische Mathematik 12, 83-90 (1968)</p>
<p>who prove the result in the affirmative for positive definite matrices (and some generalizations). The proof is elegant and constructive: the diagonal matrix can be found by solving a particular constrained minimization problem.</p>
<p>There is a good discussion of the problem and its generalizations in</p>
<p>Johnson, C.R. and Reams, R. Scaling of symmetric matrices by positive diagonal congruence. Linear and Multilinear Algebra, 57(2) (2009) 123-140.</p>
<p>-David.</p>
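<p>For what it's worth, here is a small numerical illustration (my own sketch, <em>not</em> the Marshall-Olkin construction): a naive damped fixed-point iteration that finds such a $W$ for a small positive definite example, where <code>A</code> stands for $X'X$.</p>

```python
import math

# A plays the role of the positive definite matrix X'X from the question.
A = [[4.0, 1.0, 0.5],
     [1.0, 3.0, 1.0],
     [0.5, 1.0, 2.0]]

w = [1.0, 1.0, 1.0]                       # diagonal of W, iterated
for _ in range(200):
    Aw = [sum(A[i][j] * w[j] for j in range(3)) for i in range(3)]
    # damped fixed point of w_i * (A w)_i = 1
    w = [math.sqrt(w[i] / Aw[i]) for i in range(3)]

rows = [w[i] * sum(A[i][j] * w[j] for j in range(3)) for i in range(3)]
print([round(r, 6) for r in rows])        # each entry close to 1
```

This simple iteration is only a heuristic; Marshall-Olkin's optimization argument is what guarantees existence.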
|
15,237 | <p><a href="https://matheducators.stackexchange.com/questions/176/knowing-mathematics-does-not-translate-to-knowing-to-teach-mathematics-why">A question</a> has been asked about why great mathematicians are not necessarily great teachers. On the other hand, I am wondering if knowing more mathematics actually helps with one's teaching of lower level courses in mathematics. For example, I believe that a good student with bachelor's degree in mathematics should have sufficient knowledge to teach calculus. However, how does having a master's or doctorate degree help one's teaching in calculus, if any at all?</p>
<p>I am teaching calculus now and I do not understand commutative algebra; I took a course on commutative algebra a long time ago, did poorly in it, and now can hardly recall anything from it. If I invest a substantial amount of time studying this subject well now, will it help me in my calculus course in any sense?</p>
| Michael Joyce | 1,397 | <p>I'll give an answer by analogy. When you are a kid, you should be exposed to playing with other kids or participating in activities such as team sports. It's natural to ask, "Does playing soccer at seven years old help a child become a better adult?" And the answer seems to be that, in general, yes it does help. Not because of soccer per se, but because it is important for the child to learn how to interact with others, work with others in a cooperative environment, learn about the importance of practice and hard work, and learn how to listen to their coach to improve their skills. Of course, that's not what the child notices. Hopefully, they're just having fun and focusing on the immediate task in front of them.</p>
<p>And it doesn't mean that if a kid plays soccer, then they'll automatically be better at those bigger goals than a kid who doesn't play soccer, but on average playing soccer will be a positive influence. Those experiences tend to accumulate and help shape who we become as adults.</p>
<p>I think studying advanced math and getting a Ph.D. work the same way. Most of the time is hopefully spent enjoying the opportunity to learn mathematics and focus on the problem in front of us. At the same time, we have an opportunity to learn a plethora of skills in the process. Among them are skills that help with our teaching - the ability to think about problems in greater depth, to question which assumptions are really needed, to better understand the essence of an argument or a calculation and to hopefully improve our ability to communicate mathematical insights to others. Again, not all Ph.D. recipients acquire all of these skills, nor is it the only possible route to acquire them, but on average, math Ph.D.'s are much more skilled in these areas than people whose math education stopped much earlier.</p>
<p>Documenting exactly <em>how</em> an advanced degree can make you a better teacher is non-trivial. It's not clear the extent to which the experienced process (which takes years to complete) can be summarized succinctly. But experience shows that those with a Ph.D. in mathematics (or related fields) are generally - but again not universally - much more prepared to teach the content required for a math course such as calculus.</p>
<p>I think it's much more debatable if math Ph.D. recipients are better at the other aspects of teaching calculus (how they deliver the content) than others. And it's debatable how much relative importance good presentation of the material has compared to a good choice of content. But without a strong understanding of the content, no amount of brilliant presentation can help a teacher who either makes a poor choice of how to present specific content or, worse, presents that material incorrectly because they lack a full understanding of the relevant content and the proper perspective to view it in. </p>
|
285,227 | <p>I am trying to prove $\exp(x+y) = \exp(x) \exp(y)$.</p>
<p>I may use that $$\exp(x) = \sum_{n=0}^\infty \frac {x^n}{n!}$$
I further know how to multiply two power series about the same point, i.e. if $f(x) = \sum_{n=0}^\infty c_n(x-a)^n$ and $g(x) = \sum_{n=0}^\infty d_n(x-a)^n$ then
$$
f(x)g(x) = \sum_{n=0}^\infty e_n(x-a)^n
$$
with
$$
e_n = \sum_{m=0}^n c_md_{n-m}
$$</p>
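<p>As a quick numerical sanity check of the identity to be proved (my own addition; the helper name is mine), truncations of the series already satisfy the product rule up to rounding:</p>

```python
import math

def exp_series(x, terms=30):
    """Truncated exponential series: sum of x^n / n! for n < terms."""
    return sum(x ** n / math.factorial(n) for n in range(terms))

x, y = 0.7, -1.3
print(abs(exp_series(x + y) - exp_series(x) * exp_series(y)) < 1e-9)  # True
```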
| David Zhang | 80,762 | <p>This can actually be done without writing a single sum. Consider the function $$ f(x, y) = \frac{e^x e^y}{e^{x+y}}. $$ Observe that $$ \frac{\partial f}{\partial x} = \frac{e^x e^y e^{x+y} - e^x e^y e^{x+y}}{(e^{x+y})^2} = 0. $$
Similarly, $$ \frac{\partial f}{\partial y} = \frac{e^x e^y e^{x+y} - e^x e^y e^{x+y}}{(e^{x+y})^2} = 0. $$ This shows that $f$ is a constant function. Now, we need only to use the series definition to show $f(0, 0) = 1$. Then, by rearrangement, we have the desired result: $$ e^{x+y} = e^x e^y. $$</p>
|
1,860,134 | <p>In <em>The logic of provability</em>, by G. Boolos, there is a remark in chapter 7 saying that $\diamond^{m} \top\implies \diamond^{n} \top$ is false if $m<n$ <strong>(unless $PA$ is 1-inconsistent)</strong>.</p>
<p>Now, it seems to me that the parenthetical expression is not necessary, since earlier in the chapter we saw a demonstration of the arithmetical completeness theorem of $GL$ for constant sentences (sentences without sentence letters), which did require simple consistency of $PA$ to work, instead of 1-consistency.</p>
<p>Since the expression in question is letterless, and it is easy to see that it is not a theorem of $GL$, attending to its trace, then we should be able to conclude that $PA$ doesn't prove that either appealing to mere consistency.</p>
<p>Am I right? Is there something I am misunderstanding?</p>
| Jsevillamol | 240,304 | <p>Nevermind, I just realized that in the proof of arithmetical completeness the author also needs to assume 1-consistency (To show that Bew(False) is not provable).</p>
|
7,268 | <p>I'm a private tutor working with a 7th grader who is struggling with solving equations. Given a simple equation, he is able to solve it using a formulaic procedure, but it is very obvious that he has no idea what the solution really means. Hence, if he gets a problem that's slightly different from ones he's solved before, he's completely lost. </p>
<p>Working with him yesterday, I realized that he doesn't understand what a variable is, or what he's doing when he's solving an equation -- he's just following the steps his teacher told him for that specific problem. </p>
<p>I'm trying to think of a way to visually demonstrate what's happening when he's solving an equation -- something visual that he can see. Kind of an algebraic equivalent to putting four coins on the table and adding one more to show 4 + 1 = 5. </p>
<p>Any ideas? Thanks!
-Ian </p>
| Karl | 4,668 | <p>I would backtrack and revisit substitution; solving equations should fall out as a by-product, or else the student hasn't fully grasped that topic. With regard to solving equations, I believe choosing numbers that provide a rich example is extremely important. Consider for example $$\frac{x}{4}=12$$</p>
<p>This is far more likely to be answered incorrectly than $$\frac{x}{3}=7$$</p>
<p>Why? Multiplication is the only 'sensible' option for the student in the second question, while the opposite is true in the first example. Force them to confront such attractive number bonds head-on.</p>
|
3,692,435 | <p>prove the following identity:</p>
<p><span class="math-container">$\displaystyle\sum_{k=0}^{n}\frac{1}{k+1}\binom{2k}{k}\binom{2n-2k}{n-k} = \binom{2n+1}{n}$</span></p>
<p>what I tried:</p>
<p>I figured that: <span class="math-container">$\displaystyle\binom{2n+1}{n} = (2n+1) C_n$</span>
and <span class="math-container">$\displaystyle\sum_{k=0}^{n}\frac{1}{k+1}\binom{2k}{k}\binom{2n-2k}{n-k}= \sum_{k=0}^{n}C_k\binom{2n-2k}{n-k}$</span></p>
<p>From here I tried simplifying <span class="math-container">$\displaystyle\binom{2n-2k}{n-k}$</span> into something I could work with, but did not succeed.</p>
<p>I also know that <span class="math-container">$\displaystyle C_n = \sum_{k=0}^{n-1}C_k C_{n-k-1}$</span> so I tried to prove: <span class="math-container">$\displaystyle\sum_{k=0}^{n}C_k\binom{2n-2k}{n-k}= C_n + \sum_{k=0}^{n-1}C_k\binom{2n-2k}{n-k} = C_n + 2n\sum_{k=0}^{n-1}C_kC_{n-k-1}$</span>, but that approach also failed (I couldn't prove the last equality).</p>
<p>any suggestions?</p>
| Brian M. Scott | 12,042 | <p>Here’s a combinatorial argument. <span class="math-container">$\binom{2n+1}n$</span> is the number of lattice paths from <span class="math-container">$\langle 0,0\rangle$</span> to <span class="math-container">$\langle n,n+1\rangle$</span> using <span class="math-container">$n$</span> right steps and <span class="math-container">$n+1$</span> up steps. Every such path must rise above the line <span class="math-container">$y=x$</span>; let <span class="math-container">$\langle k,k+1\rangle$</span> be the first point at which it does so. There are <span class="math-container">$C_k=\frac1{k+1}\binom{2k}k$</span> paths from <span class="math-container">$\langle 0,0\rangle$</span> to <span class="math-container">$\langle k,k\rangle$</span> that do not rise above the line <span class="math-container">$y=x$</span>, any of which can be combined with any of the <span class="math-container">$\binom{2n-2k}{n-k}$</span> unrestricted lattice paths from <span class="math-container">$\langle k,k+1\rangle$</span> to <span class="math-container">$\langle n,n+1\rangle$</span>, so there are <span class="math-container">$\frac1{k+1}\binom{2k}k\binom{2n-2k}{n-k}$</span> paths from <span class="math-container">$\langle 0,0\rangle$</span> to <span class="math-container">$\langle n,n+1\rangle$</span> that first rise above the line <span class="math-container">$y=x$</span> at <span class="math-container">$\langle k,k+1\rangle$</span>. Summing over <span class="math-container">$k$</span> yields the desired result.</p>
|
1,521,518 | <p>Determine if the given vectors span $\mathbb{R}^4$</p>
<p>$\{(1, 1, 1, 1), (0, 1, 1, 1), (0, 0, 1, 1), (0, 0, 0, 1)\}$.</p>
<p>I'm completely confused by this question. My textbook gives a different problem, but in $\mathbb{R}^3$. How would I go about this?</p>
| uniquesolution | 265,735 | <ol>
<li>Arrange the vectors as rows of a matrix.</li>
<li>Compute the determinant. It is particularly easy to compute as the matrix is upper triangular, so the determinant is just the product of the diagonal entries. It is equal to $1$.</li>
<li>Conclude that the rows of the matrix are linearly independent.</li>
<li>Any four linearly independent vectors span $\mathbb{R}^4$.</li>
</ol>
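<p>Steps 1-2 above can be sketched as follows (my own illustration): since the matrix is triangular, the determinant is just the product of the diagonal entries.</p>

```python
rows = [(1, 1, 1, 1), (0, 1, 1, 1), (0, 0, 1, 1), (0, 0, 0, 1)]
det = 1
for i in range(4):
    det *= rows[i][i]        # triangular matrix: det = product of diagonal
print(det)                   # 1 (nonzero), so the four rows span R^4
```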
|
1,559,906 | <p>Is there an equivalent in mathematical language to the modulo (or <code>mod</code>) function in computing? </p>
| Mike Pierce | 167,197 | <p>I have seen the percent sign (%) represent <em>mod</em> in computing. So what you are thinking of as <code>x % n</code>, a mathematician would equivalently write as $x \bmod n$. </p>
<p>There is a subtlety here, though. When you are using <code>mod</code> in computing, you are thinking of it as a function that returns the <em>least non-negative value that is the remainder after performing the division of $x$ by $n$.</em> Typically in mathematics, though, we think of <em>modulo</em> as a restriction on the set of numbers we are currently working with. You will hear the phrase "<em>we are working $\bmod n$ right now,</em>" and we'll have the modulo $\pmod n$ written off the side in parenthesis almost like a reminder.</p>
<p>You'll usually often see the notation $y \equiv x \pmod n$ and sometimes more you'll see the more compact notation $y \equiv_n x$. These both would be read as "$x$ and $y$ are <em>congruent</em> $\bmod n$". We choose to say that $x$ and $y$ are <em>congruent</em> (rather than <em>equivalent</em>) because they are just members of the same equivalence class $\bmod n$.</p>
<p>For a concrete example we can look at the integers modulo $7$, more concisely written as $\mathbb{Z}_7$. Now $\mathbb{Z}_7$ consists of $7$ elements, $\{0,1,2,3,4,5,6\}$. But these elements aren't exactly numbers. Each of these elements in $\mathbb{Z}_7$ represents an <em>equivalence class</em>. The element $3$ in $\mathbb{Z}_7$ represents the <em>equivalence class</em> of all integers that are congruent to $3\pmod 7$, so it is the integers in the set $\{\dotsc, -11,-4,3,10,17, \dotsc\}$.</p>
|
235,430 | <p>Suppose that a bounded sequence of real numbers $s_i$ ($i\in\omega$) has a limit $\alpha$ along some ultrafilter $\mu_1\in \beta{\Bbb N}\setminus{\Bbb N}$. Then given another ultrafilter $\mu_2\in \beta{\Bbb N}\setminus{\Bbb N}$, surely there exists some rearrangement $s_{r(i)}$ of $s_i$ that has the same limit $\alpha$.</p>
<p>One can easily extend this simple observation to a countable family of sequences. </p>
<p>Now given $s_{i;j}$ ( $i,j\in \omega$; values bounded for each fixed $j$) with limits $\alpha_j$ along a fixed $\mu_1\in \beta{\Bbb N}\setminus{\Bbb N}$, and given another ultrafilter $\mu_2\in \beta{\Bbb N}\setminus{\Bbb N}$, there exists a <em>simultaneous</em> rearrangement $s_{r(i);j}$ having the same limits $\alpha_j$ along $\mu_2$. </p>
<p>All this fails if we pass to size $c=2^\omega$ families of sequences. Indeed $s_{i;j}$ could then enumerate all bounded sequences. But all the limits $\alpha_j$ together would determine $\mu_1$. Taking limits of a simultaneous rearrangement of all the sequences amounts, equivalently, to taking limits of the original sequences along an ultrafilter $\mu_2'$ in the orbit of $\mu_2$ under the action of the symmetric group of $\Bbb N$ extended to $\beta\Bbb N$. Equality of all those limits thus forces $\mu_1=\mu_2'$, and that places $\mu_1$ and $\mu_2$ in the same orbit of the symmetric group action, a severe restriction on $\mu_2$.</p>
<p><strong>Question</strong>: If CH fails, what happens for a size $\omega_1$ family of sequences?</p>
| MJ73550 | 89,993 | <p>Since $Y$ is diagonal (with nonnegative diagonal entries, so that it is positive semidefinite) there is an $M$ such that $M^TM=Y$, so $x^TYx=|Mx|^2$.</p>
<p>Let's say $X_1$ is centered.</p>
<p>then $$E(X^T Y X)=\sum_{i=1}^n\text{Var}(\sum_{j=1}^n M_{ij}X_j)$$</p>
<p>Then, since the $X_j$ are independent, you get:
$$\text{Var}\Big(\sum_{j=1}^n M_{ij}X_j\Big)=\sum_{j=1}^n M_{ij}^2\,\text{Var}(X_j)$$
and, because they all have the same law,
$$\text{Var}\Big(\sum_{j=1}^n M_{ij}X_j\Big)=\text{Var}(X_1)\sum_{j=1}^n M_{ij}^2.$$
Now sum over $i$ and you get
$$E(X^T Y X)=\text{Var}(X_1)\,\mathbf{1}^T Y \mathbf{1}$$
where $\mathbf{1}$ is the vector full of ones.</p>
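<p>This identity is easy to check exactly on a toy example (the values below are made up for illustration; taking the $X_j$ i.i.d. uniform on $\{-1,0,1\}$ makes them centered with $\text{Var}(X_1)=2/3$):</p>

```python
# Exact check of E(X^T Y X) = Var(X_1) * 1^T Y 1 for a diagonal Y,
# by enumerating all equally likely outcomes (no sampling error).
from itertools import product
from fractions import Fraction

y = [2, 5, 3]                    # diagonal entries of Y (illustrative)
n = len(y)
support = [-1, 0, 1]             # X_j i.i.d. uniform on {-1, 0, 1}, centered

total = sum(
    sum(y[i] * x[i] ** 2 for i in range(n))   # X^T Y X for diagonal Y
    for x in product(support, repeat=n)
)
expectation = Fraction(total, len(support) ** n)

var_x1 = Fraction(2, 3)          # E[X_1^2] for uniform {-1, 0, 1}
print(expectation)               # 20/3
print(var_x1 * sum(y))           # 20/3, i.e. Var(X_1) * 1^T Y 1
```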
|
3,033,344 | <p>Question: Tom only has 2 types of coins: 4 cents and 5 cents. Write a proof by induction that every amount n ≥ a can indeed be paid with Tom's coins</p>
<p>1) Base Case: Tom can pay \$12, \$13, \$14, \$15, \$16 and \$17</p>
<p>2) Inductive step: Let n >= 17 and suppose that Tom can pay every amount k with 12 <= k < n </p>
<p>3) Proof of claim: I am confused now...</p>
<p>edit: it's a normal induction, not strong induction</p>
| davidlowryduda | 9,754 | <p>What primes do you need to worry about? The primes that divide <span class="math-container">$360$</span> are <span class="math-container">$2,3,5$</span>, so you really want to find those numbers which are not divisible by <span class="math-container">$2, 3$</span>, or <span class="math-container">$5$</span>.</p>
<p>Now</p>
<ul>
<li><span class="math-container">$360/2 = 180$</span> numbers up to <span class="math-container">$360$</span> are divisible by <span class="math-container">$2$</span>.</li>
<li><span class="math-container">$360/3 = 120$</span> numbers up to <span class="math-container">$360$</span> are divisible by <span class="math-container">$3$</span>.</li>
<li><span class="math-container">$360/5 = 72$</span> numbers up to <span class="math-container">$360$</span> are divisible by <span class="math-container">$5$</span>.</li>
</ul>
<p><em>But some numbers are divisible by both <span class="math-container">$2$</span> and <span class="math-container">$3$</span>, or by <span class="math-container">$3$</span> and <span class="math-container">$5$</span>, or <span class="math-container">$2$</span> and <span class="math-container">$5$</span></em>. So we've overcounted. (This is an inclusion-exclusion argument in progress).</p>
<ul>
<li>There are <span class="math-container">$360/6 = 60$</span> divisible by <span class="math-container">$2$</span> and <span class="math-container">$3$</span>.</li>
<li>There are <span class="math-container">$360/15 = 24$</span> divisible by <span class="math-container">$3$</span> and <span class="math-container">$5$</span>.</li>
<li>There are <span class="math-container">$360/10 = 36$</span> divisible by <span class="math-container">$2$</span> and <span class="math-container">$5$</span>.</li>
</ul>
<p><em>But we've overcounted how much we've overcounted! There are also numbers divisible by <span class="math-container">$2$</span>, <span class="math-container">$3$</span>, and <span class="math-container">$5$</span></em>.</p>
<ul>
<li>There are <span class="math-container">$360/(2 \cdot 3 \cdot 5) =12$</span> numbers up to <span class="math-container">$360$</span> that are divisible by <span class="math-container">$2\cdot3\cdot5 = 30$</span>.</li>
</ul>
<p>Thus in total there will be </p>
<p><span class="math-container">$$ 360 - (180 + 120 + 72 - (60 + 24 + 36 - (12))) = 96.$$</span></p>
<hr>
<p>In fact, this is one way of understanding the expression for <span class="math-container">$\varphi(n)$</span> given by
<span class="math-container">$$ \varphi(n) = n\prod_{p \mid n} \left( 1 - \frac{1}{p} \right),$$</span>
which encodes this inclusion-exclusion argument within it.</p>
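<p>Both the step-by-step count and the closed formula are easy to confirm directly:</p>

```python
# Brute-force count of 1..360 divisible by none of 2, 3, 5 ...
count = sum(1 for k in range(1, 361) if k % 2 and k % 3 and k % 5)
print(count)  # 96

# ... and the same value from n * prod(1 - 1/p) over the primes p | n.
from fractions import Fraction
phi = Fraction(360)
for p in (2, 3, 5):
    phi *= 1 - Fraction(1, p)
print(phi)  # 96
```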
|
1,309,578 | <blockquote>
<p>Prove\Disprove:<br>
$A$ is bounded from above $\iff$ $A\cap \mathbb{Z}$ is bounded from above.</p>
</blockquote>
<p>Let $A=\{a\in \mathbb{Q} \setminus \mathbb{Z}: a<0\}$. Then $A$ is bounded from above, $A\cap \mathbb{Z}=\emptyset$, and $\emptyset$ is not bounded from above.</p>
<p>Is it a valid contradiction? </p>
| acradis | 229,012 | <p>The empty set is trivially bounded </p>
|
1,309,578 | <blockquote>
<p>Prove\Disprove:<br>
$A$ is bounded from above $\iff$ $A\cap \mathbb{Z}$ is bounded from above.</p>
</blockquote>
<p>Let $A=\{a\in \mathbb{Q} \setminus \mathbb{Z}: a<0\}$. Then $A$ is bounded from above, $A\cap \mathbb{Z}=\emptyset$, and $\emptyset$ is not bounded from above.</p>
<p>Is it a valid contradiction? </p>
| drhab | 75,923 | <p>I presume you are working in $\mathbb R$ or $\mathbb Q$.</p>
<p>If $s$ serves as upper bound of $A$ then it also serves as upper bound of any subset of $A$. So if $A$ is bounded from above then any subset of $A$, including $A\cap\mathbb Z$, is bounded from above as well. </p>
<p>If e.g. $A=\mathbb Q-\mathbb Z$ then $A$ is evidently not bounded from above while $A\cap\mathbb Z=\varnothing$ is bounded from above. So the converse of the statement is not true.</p>
|
1,879,076 | <p>How to show that $(1+x/n)^n\geq (1+x/10)^{10}$? (for $n\geq 10$)</p>
<p>I see that if I consider $n\to\infty$, the LHS approaches $e^x$, but that's all I could really see.</p>
<p>Please assume that $x\in[0,\infty)$</p>
| parsiad | 64,601 | <p>Since the two expressions are equal at $n=10$, it is sufficient to
show that $$\frac{\partial}{\partial n}(1+x/n)^{n}\geq0 \text{ for } n\geq1 \text{ and } x\geq 0.$$</p>
<p>You can check that this is equivalent to showing $$\log((n+x)/n)\geq x/(n+x)\text{ for } n\geq1 \text{ and } x\geq 0.$$</p>
<p>Moreover, this inequality follows immediately from <a href="https://proofwiki.org/wiki/Lower_Bound_of_Natural_Logarithm" rel="nofollow">a well-known lower bound on the natural logarithm</a>.</p>
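<p>A quick numerical sanity check (not a proof, just a spot-check of the claim on a grid of values):</p>

```python
# Spot-check: (1 + x/n)^n is nondecreasing in n for x >= 0, so it never
# drops below its value at n = 10, and it stays below e^x.
import math

for x in [0.0, 0.5, 1.0, 3.0, 10.0]:
    base = (1 + x / 10) ** 10
    for n in range(10, 201):
        assert (1 + x / n) ** n >= base - 1e-12
    assert (1 + x / 200) ** 200 <= math.exp(x) + 1e-9
print("inequality holds on the tested grid")
```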
|
1,765,222 | <p>I have proven this by the induction method but would like to know if it can be proven using an alternative method.</p>
| Michael Burr | 86,421 | <p>Using a little linear algebra, we can write:
$$
\frac{n(n^4-1)}{5}=24\binom{n}{5}+48\binom{n}{4}+30\binom{n}{3}+6\binom{n}{2}.
$$
Since the binomial coefficients are always integers, this exhibits that your fraction is always an integer.</p>
<p>This method is kinda silly, I prefer the Fermat's little theorem approach. However, in general, any polynomial whose value on $\mathbb{Z}$ is always an integer can be written as an integer linear combination of binomial coefficients. </p>
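<p>The binomial-coefficient identity above is a finite check away (here for the first two hundred non-negative integers):</p>

```python
# Verify n(n^4 - 1)/5 = 24*C(n,5) + 48*C(n,4) + 30*C(n,3) + 6*C(n,2),
# which shows the left side is an integer since each C(n,k) is.
from math import comb

for n in range(200):
    lhs = n * (n ** 4 - 1)
    rhs = 24 * comb(n, 5) + 48 * comb(n, 4) + 30 * comb(n, 3) + 6 * comb(n, 2)
    assert lhs == 5 * rhs
print("identity verified for 0 <= n < 200")
```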
|
62,177 | <p>One of the most mind boggling results in my opinion is, with the axiom of choice/well-ordering principle, there exist such things as uncountable well-ordered sets $(A,\leq)$. </p>
<p>With this in mind, does there exist some well ordered set $(B,\leq)$ with some special element $b$ such that the set of all elements smaller than $b$ is uncountable, but for any element besides $b$, the set of all elements smaller is countable (by countable I include finite too). </p>
<p>More formally stated, how can one show the existence of a well ordered set $(B,\leq)$ such that there exists a $b\in B$ such that $\{a\in B\mid a\lt b\}$ is uncountable, but $\{a\in B\mid a\lt c\}$ is countable for all $c\neq b$?</p>
<p>It seems like this $b$ would have to "be at the very end of the order." </p>
| William | 13,579 | <p>Let $\aleph_1$ be the first uncountable ordinal. It can be defined to be the smallest ordinal which is neither finite nor in bijection with $\omega$. This set exists using the ZF axioms. The ordinal $\aleph_1 + 1 $ is defined as $\aleph_1 \cup \{\aleph_1\}$. Hence $\aleph_1 + 1$ is the desired set, which contains an element, namely $\aleph_1$, such that $\{x \in \aleph_1 + 1 : x < \aleph_1\}$ is uncountable, but the initial segment determined by any other element is countable.</p>
<p>To elaborate on how to show $\aleph_1$ exists: With the axiom of choice, let $Z$ be an uncountable ordinal. Let $A = \{x \in Z : x \text{ is in bijection with } \omega\}$, which can be written more formally in the language of sets, and which exists by separation. $\aleph_1$ is then $\bigcup A$, which exists by the axiom of union. </p>
|
1,291,511 | <p>This may seem like a silly question, but I just wanted to check. I know there are proofs that if $f(x)=f'(x)$ then $f(x)=Ae^x$. But can we 'invent' another function that obeys $f(x)=f'(x)$ which is <strong>non-trivial</strong>?</p>
| Ivo Terek | 118,056 | <p>No. Suppose that $f \not\equiv 0$. We <strong>solve</strong> the ODE: $$f(x) = f'(x) \implies \frac{f'(x)}{f(x)} = 1 \implies \int \frac{f'(x)}{f(x)}\,{\rm d}x = \int \,{\rm d}x \implies \ln |f(x)| = x+c, \quad c \in \Bbb R$$With this, $|f(x)| = e^{x+c} = e^ce^x$. Call $e^c = A > 0$. Eliminating the absolute value, we have $f(x) = Ae^x$, with $A \in \Bbb R \setminus \{0\}$. Since $f \equiv 0$ also satisfies the equation, we can write all at once: $$f(x) = Ae^x, \quad A \in \Bbb R.$$</p>
|
1,291,511 | <p>This may seem like a silly question, but I just wanted to check. I know there are proofs that if $f(x)=f'(x)$ then $f(x)=Ae^x$. But can we 'invent' another function that obeys $f(x)=f'(x)$ which is <strong>non-trivial</strong>?</p>
| Michael Hardy | 11,667 | <p>OK, let's try a plodding pedestrian approach:
$$
f'(x)-f(x) = 0
$$
The idea of an exponential multiplier $m(x)$ is that we write
$$
0 = m(x)f'(x) + (-m(x))f(x) = m(x)f'(x)+m'(x)f(x) = \Big( m(x)f(x)\Big)'.
$$
For this to work we would need $-m(x)=m'(x)$. We <b>do not need <em>all</em></b> functions $m$ satisfying this, but only one. So there's no need to wonder whether any non-trivial solutions to $m'=-m$ exist; we only need a trivial one. $m(x)=e^{-x}$ does it. So
$$
0 = \Big( m(x)f(x)\Big)'.
$$
Thus, by the mean value theorem, $m(x)f(x)$ is constant, so $f(x)=(\text{constant}\cdot e^x$).</p>
|
863,364 | <p><img src="https://i.stack.imgur.com/w6R2g.jpg" alt="enter image description here"></p>
<p>I am missing the 3D graph for the equation $x^2+2z^2=1$.</p>
| Andrew D | 55,458 | <p>What happens depends on what variable(s) you are applying the Fourier transform to; if we suppose we are making the Fourier transform with regards to $x \rightsquigarrow \xi$, then if we are using the one-dimensional convention that</p>
<p>$$ \hat{f}(\xi) = \int^{\infty}_{-\infty}f(x)e^{-i\xi x} dx$$</p>
<p>so in higher dimensions we generalise to</p>
<p>$$ \hat{f}(\xi,y,t) = \int^{\infty}_{-\infty}f(x,y,t)e^{-i\xi x} dx $$</p>
<p>it follows that we get that (letting $\mathcal{F}$ represent the Fourier transform operator)</p>
<p>$$\begin{array}{c}
\mathcal{F} \left( \frac{\partial^n u(x,y,t)}{\partial t^n} \right) = \frac{\partial^n \hat u(\xi,y,t)}{\partial t^n}\\
\mathcal{F} \left( \frac{\partial^n u(x,y,t)}{\partial x^n} \right) = (i\xi)^n \hat u(\xi,y,t)\\
\mathcal{F} \left( \frac{\partial^n u(x,y,t)}{\partial y^n} \right) = \frac{\partial^n \hat u(\xi,y,t)}{\partial y^n}\\
\end{array}
$$</p>
<p>Therefore, for example, if you wanted to find $\mathcal{F}(\partial_{x,y}u(x,y,t))$, we would get that it equals</p>
<p>$$ \frac{\partial}{\partial y} \mathcal{F}\left(\frac{\partial u(x,y,t)}{\partial x}\right) = i\xi \frac{\partial \hat u(\xi,y,t)}{\partial y}$$</p>
<p>If you end up making Fourier transforms with respect to more than one variable, then just use these rules, but do them a variable at time - so then, if we were to take $x \rightsquigarrow \xi$ and $y \rightsquigarrow \zeta$, we would get that the answer to the above would now be</p>
<p>$$ \mathcal{F}\left(\frac{\partial^2 u(x,y,t)}{\partial x\,\partial y}\right) = i\xi \cdot i\zeta \, \hat u(\xi,\zeta,t) = -\xi\zeta \, \hat u(\xi,\zeta,t) $$</p>
|
1,175,297 | <p>Note: The following definitions from my book, Discrete Mathematics and Its Applications [7th ed, 598].</p>
<p>This is my book's definition for a reflexive relation
<img src="https://i.stack.imgur.com/og5wE.png" alt="enter image description here"></p>
<p>This is my book's definition for a anti symmetric relation
<img src="https://i.stack.imgur.com/OaZGk.png" alt="enter image description here"></p>
<p>Is a reflexive relation just the same as an anti symmetric relation? From what I've seen, the only way to meet that antisymmetric requirement is to have the same ordered pair, say for an element a from Set A, (a,a). If you have anything other than the same ordered pair, like (1,2) and (2,1), it will not meet the antisymmetric requirement. But the overall definition of a reflexive relation is that it contains those same ordered pairs (a,a). Are they just two ways of saying the same thing? Is it possible to have one and not the other?</p>
| N. S. | 9,176 | <p>They are not the same thing. For example, on $\mathbb R$ the strict inequality is anti-symmetric but not reflexive.... If you want one which is not vacuous, change it to be $\leq $ if at least one of the numbers is negative.</p>
<p>On $\mathbb Z$, if we define $(x,y) \in R$ if and only if $x-y$ is even, then this is reflexive but not anti-symmetric.</p>
|
1,175,297 | <p>Note: The following definitions from my book, Discrete Mathematics and Its Applications [7th ed, 598].</p>
<p>This is my book's definition for a reflexive relation
<img src="https://i.stack.imgur.com/og5wE.png" alt="enter image description here"></p>
<p>This is my book's definition for a anti symmetric relation
<img src="https://i.stack.imgur.com/OaZGk.png" alt="enter image description here"></p>
<p>Is a reflexive relation just the same as an anti symmetric relation? From what I've seen, the only way to meet that antisymmetric requirement is to have the same ordered pair, say for an element a from Set A, (a,a). If you have anything other than the same ordered pair, like (1,2) and (2,1), it will not meet the antisymmetric requirement. But the overall definition of a reflexive relation is that it contains those same ordered pairs (a,a). Are they just two ways of saying the same thing? Is it possible to have one and not the other?</p>
| ajotatxe | 132,456 | <p>No.</p>
<p>First, you can have a reflexive relation which is not antisymmetric. For example: an integer number $a$ is related to another integer $b$ if and only if $a$ and $b$ have the same parity. This is not antisymmetric, because $2R4$ and $4R2$, but $2\neq 4$. But it is reflexive, since every integer has the same parity as itself.</p>
<p>Second, you can have an antisymmetric relation that is not reflexive. The empty relation on a non-empty set, for example: that is, a non-empty set in which no element is related to any element (not even to itself).</p>
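<p>Both counterexamples can be checked mechanically on a small carrier set (Python used here just as a checker):</p>

```python
# Same-parity relation: reflexive, but not antisymmetric.
carrier = range(6)
parity = {(a, b) for a in carrier for b in carrier if (a - b) % 2 == 0}
assert all((a, a) in parity for a in carrier)             # reflexive
assert (2, 4) in parity and (4, 2) in parity and 2 != 4   # antisymmetry fails

# Empty relation on a non-empty set: antisymmetric (vacuously), not reflexive.
empty = set()
assert all(a == b for (a, b) in empty if (b, a) in empty)  # vacuously true
assert not all((a, a) in empty for a in carrier)           # reflexivity fails
print("both counterexamples verified")
```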
|
154,757 | <p>I have this data:</p>
<ul>
<li><p>$a=6$</p></li>
<li><p>$b=3\sqrt2 -\sqrt6$ </p></li>
<li><p>$\alpha = 120°$</p></li>
</ul>
<p><strong>How to calculate the area of this triangle?</strong></p>
<p>there is picture:</p>
<p><img src="https://i.stack.imgur.com/hr2Cp.jpg" alt=""></p>
| Isaac | 72 | <p>Because the angle at $A$ is obtuse, the given information uniquely determines a triangle. To find the area of a triangle, we might want:</p>
<ul>
<li>the length of a side and the length of the altitude to that side (we don't have altitudes)</li>
<li>all three side lengths (we're short 1)</li>
<li>two side lengths and the measure of the angle between them (we don't have the other side that includes the known angle or the angle between the known sides)</li>
</ul>
<p>(There are other ways to find the area of a triangle, but the three that use the above information are perhaps the most common.)</p>
<p>Let's find the angle between the known sides (since we'd end up finding that angle anyway if we were trying to find the unknown side). The Law of Sines tells us that $\frac{\sin A}{a}=\frac{\sin B}{b}$, so $$\frac{\sin120^\circ}{6}=\frac{\sin B}{3\sqrt{2}-\sqrt{6}},$$ which can be solved for $B$ (since $A$ is obtuse, $0^\circ<B<90^\circ$, so there is a unique solution). Once we have $B$, we can use $A+B+C=180^\circ$ to get $C$ and then the area of the triangle is $$\frac{1}{2}ab\sin C.$$</p>
<p><sup>See also <a href="https://math.stackexchange.com/a/106540/72">my answer here on general techniques for triangle-solving</a>.</sup></p>
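<p>Carrying the plan out numerically (degrees used for readability) shows the angles come out exactly: $B=15^\circ$, $C=45^\circ$, and the area is $9-3\sqrt3$:</p>

```python
# Solve the triangle from a, b, and the obtuse angle A = 120 degrees,
# then compute the area as (1/2) a b sin C.
import math

a = 6.0
b = 3 * math.sqrt(2) - math.sqrt(6)
A = math.radians(120)

sinB = math.sin(A) * b / a       # Law of Sines
B = math.asin(sinB)              # B must be acute since A is obtuse
C = math.pi - A - B
area = 0.5 * a * b * math.sin(C)

print(math.degrees(B))           # 15.000... degrees
print(area)                      # 3.8038..., i.e. 9 - 3*sqrt(3)
```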
|
3,222,871 | <p>Let <span class="math-container">$P(x, y, 1)$</span> and <span class="math-container">$Q(x, y, z)$</span> lie on the curves <span class="math-container">$$\frac{x^2}{9}+\frac{y^2}{4}=4$$</span> and <span class="math-container">$$\frac{x+2}{1}=\frac{y-\sqrt{3}}{\sqrt{3}}=\frac{z-1}{2}$$</span> respectively. Then find the square of the minimum distance between <span class="math-container">$P$</span> and <span class="math-container">$Q$</span>.</p>
<p>My Attempt is:</p>
<p>I tried to find the minimum distance between the points $(-2,\sqrt{3})$ and $(6\cos \theta,4\sin \theta)$.</p>
| Cesareo | 397,348 | <p>We can solve this problem proposing a lagrangian. So calling</p>
<p><span class="math-container">$$
d^2 = (x_1-x_2)^2+(y_1-y_2)^2+(1-z_2)^2\\
g_1 = \frac{x_1^2}{9}+\frac{y_1^2}{4}-4\\
g_2 = x_2+2-\lambda\\
g_3 = y_2-\sqrt 3-\sqrt 3\lambda\\
g_4 = z_2-1-2\lambda
$$</span></p>
<p>and forming</p>
<p><span class="math-container">$$
L(x_1,y_1,x_2,y_2,z_2,\lambda,\mu_1,\mu_2,\mu_3,\mu_4) = d^2+\sum_i\mu_i g_i
$$</span></p>
<p>the stationary condition gives </p>
<p><span class="math-container">$$
\nabla L = 0 = \left\{
\begin{array}{c}
\frac{2 \mu_1 x_1}{9}+2 (x_1-x_2) \\
\frac{\mu_1 y_1}{2}+2 (y_1-y_2) \\
\mu_2-2 (x_1-x_2) \\
\mu_3-2 (y_1-y_2) \\
\mu_4-2 (1-z_2) \\
\frac{x_1^2}{9}+\frac{y_1^2}{4}-4 \\
-\lambda +x_2+2 \\
-\sqrt{3} \lambda +y_2-\sqrt{3} \\
-2 \lambda +z_2-1 \\
-\mu_2-\sqrt{3} \mu_3-2 \mu_4 \\
\end{array}
\right.
$$</span></p>
<p>Solving this system we get</p>
<p><span class="math-container">$$
\left(
\begin{array}{ccccccccccc}
x_1&y_1&x_2&y_2&z_2&\mu_1&\mu_2&\mu_3&\mu_4&\lambda&d^2\\
-5.96291 & -0.444062 & -2.96651 & 0.0580128 & -0.933013 & -4.52256 & -5.99281 & -1.00415 & 3.86603 & -0.966506 & 12.9671 \\
-5.07051 & -2.13853 & -3.22182 & -0.384201 & -1.44364 & -3.28137 & -3.69739 & -3.50865 & 4.88727 & -1.22182 & 12.4667 \\
-1.7813 & 3.81965 & -1.52068 & 2.56225 & 1.95863 & -1.31677 & -0.521237 & 2.51481 & -1.91727 & 0.479317 & 2.56796 \\
5.72047 & -1.20669 & -1.6712 & 2.30155 & 1.6576 & -11.6293 & 14.7833 & -7.01649 & -1.31521 & 0.328802 & 67.377 \\
\end{array}
\right)
$$</span></p>
<p>so the minimum distance is <span class="math-container">$d = \sqrt{2.56796}$</span> with</p>
<p><span class="math-container">$$
p_1 = ( -1.7813, 3.81965, 1) \in P\\
q_1 = (-1.52068, 2.56225; 1.95863) \in Q
$$</span></p>
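<p>The smallest stationary value can be cross-checked without the multipliers: parametrize the first curve as $(6\cos t, 4\sin t, 1)$ and, for each such point, minimize over the line in closed form (orthogonal projection). A dense scan over $t$ then reproduces $d^2\approx 2.56796$:</p>

```python
# Independent check of the minimum d^2: project each curve point onto the
# line q0 + s*v, where the optimal projection parameter s has a closed form.
import math

q0 = (-2.0, math.sqrt(3), 1.0)   # point on the line (lambda = 0)
v = (1.0, math.sqrt(3), 2.0)     # direction vector of the line
vv = sum(c * c for c in v)       # |v|^2 = 8

def dist2(t):
    p = (6 * math.cos(t), 4 * math.sin(t), 1.0)
    w = tuple(pi - qi for pi, qi in zip(p, q0))
    s = sum(wi * vi for wi, vi in zip(w, v)) / vv
    return sum((wi - s * vi) ** 2 for wi, vi in zip(w, v))

best = min(dist2(2 * math.pi * k / 100000) for k in range(100000))
print(best)  # ~2.56796, matching the smallest d^2 in the table above
```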
|
1,639,081 | <p>I have been unable to solve the following question, </p>
<p>If $$\sin(2x) - \tan(x) = 0$$</p>
<p>Find $x$ , $-\pi\le x\le \pi$</p>
<p>So far my workings have been
Use following identity: </p>
<p>$$\sin(2x) = 2\sin(x)\cos(x)\\2\sin(x)\cos(x) - \tan(x) = 0\\2\sin(x)\cos(x) - \frac{\sin(x)}{\cos(x)} = 0\\
2\frac{\sin(x)\cos(x)}{1} - \frac{\sin(x)}{\cos(x)} = 0$$</p>
<p>Then cross multiply to give :</p>
<p>$$-\sin x+((2\cos(x)\sin(x))\cos(x))/\cos(x) = 0$$</p>
<p>$$-\sin x+(2\cos^2(x)\sin(x))/ \cos(x) = 0$$</p>
<p>However, I have been unable to get any further.</p>
<p>If someone could help me find a solution to this question it would be very much appreciated. Thank you.</p>
| 100001 | 310,245 | <p>Once you got to
$$2\sin(x)\cos(x) - \frac{\sin(x)}{\cos(x)} = 0$$
you can pull out a factor of $\sin (x)$ to get
$$\sin(x)\left[2\cos(x)- \frac{1}{\cos(x)}\right]=0$$
Now, either $\sin(x) = 0$ or $2\cos(x)- \frac{1}{\cos(x)} = 0$. For $\sin(x) = 0$, we have $-180$, $0$, and $180$. For $2\cos(x)- \frac{1}{\cos(x)} = 0$, we can multiply through by $\cos(x)$ to get $2\cos^2(x)-1 = 0$. Solving gives $$\cos(x) = \pm\frac{\sqrt{2}}{2}$$
so $x = -135$, $-45$, $45$, and $135$.</p>
|
1,639,081 | <p>I have been unable to solve the following question, </p>
<p>If $$\sin(2x) - \tan(x) = 0$$</p>
<p>Find $x$ , $-\pi\le x\le \pi$</p>
<p>So far my workings have been
Use following identity: </p>
<p>$$\sin(2x) = 2\sin(x)\cos(x)\\2\sin(x)\cos(x) - \tan(x) = 0\\2\sin(x)\cos(x) - \frac{\sin(x)}{\cos(x)} = 0\\
2\frac{\sin(x)\cos(x)}{1} - \frac{\sin(x)}{\cos(x)} = 0$$</p>
<p>Then cross multiply to give :</p>
<p>$$-\sin x+((2\cos(x)\sin(x))\cos(x))/\cos(x) = 0$$</p>
<p>$$-\sin x+(2\cos^2(x)\sin(x))/ \cos(x) = 0$$</p>
<p>However, I have been unable to get any further.</p>
<p>If someone could help me find a solution to this question it would be very much appreciated. Thank you.</p>
| vrugtehagel | 304,329 | <p>The equation we want to solve is $$\sin(2x)-\tan(x)=0$$
You deduced correctly that we now have to solve $$2\sin(x)\cos(x)-\frac{\sin(x)}{\cos(x)}=0$$ which we can rewrite to $$2\sin(x)\cos(x)=\frac{\sin(x)}{\cos(x)}$$
or $$2\sin(x)\cos(x)^2=\sin(x)$$
Now, either $\sin(x)=0$ (in which case $x\in\{-180,0,180\}$, given that $x\in[-180,180]$), or we may divide both sides by $\sin(x)$ to get
$$2\cos(x)^2=1$$ So $\cos(x)=\pm\sqrt{\frac12}=\pm\frac12\sqrt{2}$, of which we know the solutions to be $x\in\{-135,-45,45,135\}$, and therefore the final solution is $$x\in\{-180,-135,-45,0,45,135,180\}$$</p>
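<p>A quick check that all seven angles really satisfy the original equation:</p>

```python
# Verify sin(2x) = tan(x) at each claimed solution (angles in degrees).
import math

solutions_deg = [-180, -135, -45, 0, 45, 135, 180]
for deg in solutions_deg:
    x = math.radians(deg)
    assert abs(math.sin(2 * x) - math.tan(x)) < 1e-9
print("all 7 solutions satisfy sin(2x) = tan(x)")
```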
|
204,218 | <p>I can't bear an expression containing radicals of imaginary numbers,
when it can be expressed in terms of radicals of real numbers only.</p>
<p>For example, I can't bear the expression</p>
<pre><code>Sqrt[2 + I]
</code></pre>
<p>because
it can be expressed as </p>
<pre><code>Sqrt[1/2 (2 + Sqrt[5])] + Sqrt[1/2 (-2 + Sqrt[5])] I
</code></pre>
<p>But it seems there is no easy way to do it in Mathematica. I have tried many commands (in Mathematica) but all in vain.</p>
<p>Is there a systematic way to do such job ?</p>
<p><code>Sqrt[2 + I]</code> was a very simple example. I hope the method work for much more complicated expression.</p>
<p>P.S
I know that there are many algebraic numbers that cannot be expressed in terms of radicals, for example <code>Root[#^5 + # - 1 &, 1]</code>.</p>
| Bill Watts | 53,121 | <p>Try</p>
<pre><code>Sqrt[2 + I] // ComplexExpand // FunctionExpand
</code></pre>
<p>It may not create a result simplified in the exact form you want, but it will be closer.</p>
|
204,218 | <p>I can't bear an expression containing radicals of imaginary numbers,
when it can be expressed in terms of radicals of real numbers only.</p>
<p>For example, I can't bear the expression</p>
<pre><code>Sqrt[2 + I]
</code></pre>
<p>because
it can be expressed as </p>
<pre><code>Sqrt[1/2 (2 + Sqrt[5])] + Sqrt[1/2 (-2 + Sqrt[5])] I
</code></pre>
<p>But it seems there is no easy way to do it in Mathematica. I have tried many commands (in Mathematica) but all in vain.</p>
<p>Is there a systematic way to do such job ?</p>
<p><code>Sqrt[2 + I]</code> was a very simple example. I hope the method work for much more complicated expression.</p>
<p>P.S
I know that there are many algebraic numbers that cannot be expressed in terms of radicals, for example <code>Root[#^5 + # - 1 &, 1]</code>.</p>
| Bob Hanlon | 9,362 | <pre><code>{1, I}.(Sqrt[2 + I] // ReIm // ComplexExpand // FunctionExpand //
FullSimplify)
(* I Sqrt[1/2 (-2 + Sqrt[5])] + Sqrt[1/2 (2 + Sqrt[5])] *)
</code></pre>
<p>or</p>
<pre><code>% // Simplify
(* (I Sqrt[-2 + Sqrt[5]] + Sqrt[2 + Sqrt[5]])/Sqrt[2] *)
</code></pre>
<p>Verifying that these are equivalent to the original form</p>
<pre><code>Sqrt[2 + I] === (% // FullSimplify) === (%% // FullSimplify)
(* True *)
</code></pre>
|
1,400,394 | <p>Given that $u(x,y)$ can someone please explain to me how the result as asked in the question is achieved? Steps would be really appreciated, thanks.</p>
| Ian Taylor | 236,192 | <p>It's analogous to $$\frac{d^2y}{dx^2}=0$$. The solution to this is obtained by saying $\frac{dy}{dx}=k_1$ (a constant, not a function of $x$).
So $$y=k_1x+k_2$$ where $k_2$ is another constant, not a function of $x$.
In your example with partial derivatives $f(y)$ and $g(y)$ are like the constants $k_1$ and $k_2$, since their partial derivatives with respect to $x$ are zero.</p>
|
2,431,861 | <p>Let $P(z)=\displaystyle \sum_{0\le k\le n}a_kz^k$ a complex polynomial. What conditions must satisfy the coefficients $a_k$ to have $$P(z)=-\overline{P(\overline z)}\space \space ?$$</p>
| Gabriel Romon | 66,096 | <p>Note that $\overline{P(\overline z)}=\overline{\sum_{k=0}^na_k\overline z^k}=\sum_{k=0}^n\overline{a_k}z^k$, hence $P(z)=-\overline{P(\overline z)}$ if and only if $\sum_{k=0}^n(a_k+\overline{a_k})z^k=0$.</p>
<p>A polynomial is $0$ if and only if its coefficients are $0$, hence $P(z)=-\overline{P(\overline z)}$ if and only if $\forall k, Re(a_k)=0$.</p>
|
4,492,930 | <p>Let <span class="math-container">$\mathfrak{c}$</span> denote the cardinality of the continuum. I sketch an intuitive but non-rigorous argument that <span class="math-container">$|\mathbb{R}^\mathbb{N}| = \mathfrak{c}$</span>, with the question:</p>
<p><strong>Question</strong>: can this argument be made rigorous?</p>
<hr />
<p><strong>Sketch "proof"</strong>: Let <span class="math-container">$f : \mathbb{R} \to \mathbb{R}, a \in \mathbb{R}$</span>. It is a standard result that <span class="math-container">$f$</span> is continuous at <span class="math-container">$a$</span> iff:</p>
<ol>
<li><span class="math-container">$\forall\varepsilon>0 \; \exists \delta>0$</span> s.t. <span class="math-container">$|x-a| < \delta \implies |f(x)-f(a)| < \varepsilon$</span></li>
<li><span class="math-container">$\forall (x_n)$</span> s.t. <span class="math-container">$x_n \to a$</span>, <span class="math-container">$f(x_n) \to f(a)$</span></li>
</ol>
<p>(1) requires something to be true for every element of a set of cardinality <span class="math-container">$\mathfrak{c}$</span>, while (2) requires something to be true for every element of:</p>
<p><span class="math-container">$$S := \{(x_n) | x_n \to a\}$$</span></p>
<p>But since (1) and (2) are equivalent, we may deduce that <span class="math-container">$|S| = \mathfrak{c}$</span>. It follows that <span class="math-container">$|\mathbb{R}^\mathbb{N}| = \mathfrak{c}$</span>.</p>
<p>(This argument has inaccuracies, at least some of which can be fixed with observations made below.)</p>
<hr />
<p><strong>General idea</strong>: If two logical statements are equivalent and each "depends"(?) on sets of cardinality <span class="math-container">$C$</span> and <span class="math-container">$C'$</span> respectively, then we would expect that <span class="math-container">$C = C'$</span>.</p>
<p>I outline two potential issues that I spotted with this type of reasoning below.</p>
<hr />
<p><strong>Objection A</strong>: Suppose we let <span class="math-container">$T$</span> = <span class="math-container">$\{S\}$</span> i.e a set with <span class="math-container">$S$</span> as an element. Then, we can rephrase (2) as:</p>
<ol start="3">
<li>For every <span class="math-container">$s \in t$</span> of every <span class="math-container">$t \in T$</span>, <span class="math-container">$f(s) \to f(a)$</span> (where <span class="math-container">$f(s)$</span> is interpreted in the obvious way).</li>
</ol>
<p>But then <span class="math-container">$T$</span> is a set of cardinality <span class="math-container">$1$</span>, and there's an issue because <span class="math-container">$\mathfrak{c} \neq 1$</span>. I think this issue could be remedied with a more rigorous approach, but I don't know any of the set theory which I expect is needed to do so.</p>
<hr />
<p><strong>Objection B</strong>: Consider the following two statements. We have that <span class="math-container">$x \le 0$</span> iff:</p>
<ol start="4">
<li><span class="math-container">$x \le a$</span>, <span class="math-container">$\; \forall a \in A_1$</span> where <span class="math-container">$A_1 = \{0\}$</span></li>
<li><span class="math-container">$x \le a$</span>, <span class="math-container">$\; \forall a \in A_2$</span> where <span class="math-container">$A_2 = \mathbb{R}^+ \cup \{0\}$</span></li>
</ol>
<p>Then, (4),(5) are equivalent, but depend on sets of wildly different cardinalities <span class="math-container">$1 \neq \mathfrak{c}$</span> again.</p>
<p>The solution to this is a bit more obvious. It is clear that most of <span class="math-container">$A_2$</span> is redundant, so we could argue that it has "effective cardinality" <span class="math-container">$1$</span>, since it suffices to know simply whether or not <span class="math-container">$x \le a$</span> for <span class="math-container">$a=0 \in A_2$</span>. But that doesn't solve the issue of (6):</p>
<ol start="6">
<li><span class="math-container">$x < a$</span>, <span class="math-container">$\; \forall a \in A_3$</span> where <span class="math-container">$A_3 = \mathbb{R}^+$</span></li>
</ol>
<p>which is also equivalent to (4),(5) but appears to have countably infinite "effective cardinality". (Notably, <span class="math-container">$\mathbb{R}^+$</span> is not closed; I think that is key here.)</p>
<hr />
<p>I expect that for both (A) and (B), there would need to be some kind of condition on what "types" of elements are allowed for the sets in question, and also the types of sets allowed.</p>
<p>Is there a way to reconcile all of these issues and make this a valid direction of argument? Is there any truth to the general idea described above, and if not, is there a clear error that can be pinpointed?</p>
| sadman-ncc | 942,091 | <p>Inner products, by definition, are linear in each position. By linearity in the first position, we have:
<span class="math-container">$$ \langle Ax ,x \rangle + \langle Bx, x \rangle=\langle Ax+Bx,x \rangle\\
= \langle (A+B)x,x \rangle = \langle \lambda x,x \rangle .$$</span> So, your statement is always right.</p>
|
1,579,781 | <blockquote>
<p>If $x+y+z=6$ and $xyz=2$, then find the value of $$\cfrac{1}{xy}
+\cfrac{1}{yz}+\cfrac{1}{zx}$$</p>
</blockquote>
<p>I've started by simply looking for a form which involves the given known quantities ,so:</p>
<p>$$\cfrac{1}{xy} +\cfrac{1}{yz} +\cfrac{1}{zx}=\cfrac{yz\cdot zx +xy \cdot zx +xy \cdot yz}{(xyz)^2}$$</p>
<p>Now this might look nice since I know the value of the denominator but if I continue to work on the numerator I get looped :</p>
<p>$$\cfrac{yz\cdot zx +xy \cdot zx +xy \cdot yz}{(xyz)^2}=\cfrac{4\left(\cfrac{1}{xy}+\cfrac{1}{zy}+\cfrac{1}{zy}\right)}{(xyz)^2}=\cfrac{4\left(\cfrac{(\cdots)}{(xyz)^2}\right)}{(xyz)^2}$$</p>
<p>How do I deal with such continuous fraction ?</p>
| Kushal Bhuyan | 259,670 | <p>$$\cfrac{1}{xy} +\cfrac{1}{yz} +\cfrac{1}{zx}=\cfrac{yz\cdot zx +xy \cdot zx +xy \cdot yz}{(xyz)^2}=\cfrac{xyz(x+y+z)}{(xyz)^2}=\cfrac{x+y+z}{xyz}=\cfrac{6}{2}=3$$</p>
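<p>As a sanity check, one concrete triple meeting both constraints is $x=1$, $y,z=(5\pm\sqrt{17})/2$ (so that $y+z=5$ and $yz=2$):</p>

```python
# Numerical check: with x + y + z = 6 and xyz = 2, the sum of the
# reciprocal pairwise products equals (x + y + z)/(xyz) = 3.
import math

x = 1.0
y = (5 + math.sqrt(17)) / 2
z = (5 - math.sqrt(17)) / 2
assert abs(x + y + z - 6) < 1e-12 and abs(x * y * z - 2) < 1e-12

value = 1/(x*y) + 1/(y*z) + 1/(z*x)
print(value)  # 3.0 (up to rounding)
```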
|
928,644 | <blockquote>
<p>Let $f,g$ be $\mathcal E$-$\mathcal B(\mathbb R)$-measurable functions. I want to show piecewise function $h$ of $f$ and $g$ is also measurable.</p>
</blockquote>
<p>Suppose $(X, \mathcal E)$ is a measure space, let $f,g$ be $\mathcal E$-$\mathcal B(\mathbb R)$-measurable functions and let $A \in \mathcal E$.</p>
<p>I want to show $h: X \rightarrow \mathbb R$ given by $ h(x) = \left\{
\begin{array}{lr}
f(x) : x \in A\\
g(x) : x \in A^C
\end{array}
\right.\\$ is again a $\mathcal E$-$\mathcal B(\mathbb R)$-measurable function.</p>
<p>I've tried writing $(-\infty, a]$ as two disjoint sets $A_1, A_2$ such that $A_1 \cup A_2 = (-\infty, a]$, but then $f^{-1}((-\infty, a]) = f^{-1}(A_1 \cup A_2) = f^{-1}(A_1) \cup f^{-1}(A_2)$ and I can't say whether this is an element of $\mathcal E$. Also, I don't use that $A \in \mathcal E$.</p>
<p>Can anyone help ? </p>
| Stefan Hansen | 25,632 | <p>For any Borel set $B$ one has
$$
h^{-1}(B)=\big(f^{-1}(B)\cap A\big)\cup \big(g^{-1}(B)\cap A^c\big).
$$
Since $f$ and $g$ are measurable and $A, A^c \in \mathcal E$, both sets on the right-hand side belong to $\mathcal E$, and hence so does their union.</p>
|
4,186,743 | <p>Is there a single trigonometric function equal to <span class="math-container">$\cos(x)+\sin(x)$</span>? If not, how can <span class="math-container">$\cos(x)+\sin(x)$</span> be written as a function of <span class="math-container">$\cos(x)$</span>?</p>
| Sayan Dutta | 943,723 | <p>We know that <span class="math-container">$\cos{x}+\sin{x}=\cos{x}+\cos{(\frac\pi2-x)}$</span>.</p>
<p>You may keep it as it is, or you may apply the formula for <span class="math-container">$\cos{C}+\cos{D}$</span> according to your needs.</p>
<p>Another representation may also be <span class="math-container">$\cos{x}+\sin{x}=\cos{x}+\sqrt{1-\cos^2{x}}$</span>. But in this case, as N.F. Taussig pointed out, we will have cases (depending on which quadrant <span class="math-container">$x$</span> lies in) determining when to take the negative root and when to take the positive one.</p>
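<p>A quick numerical check of these representations (my own sketch; the last identity, the standard amplitude-phase form <span class="math-container">$\sqrt2\sin(x+\pi/4)$</span>, is not stated in the answer but is a common single-function rewriting):</p>

```python
import numpy as np

x = np.linspace(-10, 10, 1001)
lhs = np.cos(x) + np.sin(x)

# cos x + sin x = cos x + cos(pi/2 - x) everywhere
assert np.allclose(lhs, np.cos(x) + np.cos(np.pi/2 - x))
# Standard amplitude-phase form (assumed addition, well known)
assert np.allclose(lhs, np.sqrt(2) * np.sin(x + np.pi/4))

# The sqrt(1 - cos^2 x) form only matches where sin(x) >= 0:
mask = np.sin(x) >= 0
assert np.allclose(lhs[mask], np.cos(x[mask]) + np.sqrt(1 - np.cos(x[mask])**2))
```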
<p>Does that help?</p>
|
3,946,591 | <blockquote>
<p>The sides of a triangle are on the lines <span class="math-container">$2x+3y+4=0$</span>, <span class="math-container">$ \ \ x-y+3=0$</span>, and <span class="math-container">$5x+4y-20=0$</span>. Find the equations of the altitudes of the triangle.</p>
</blockquote>
<p>Should I find the vertices first, or is there a direct way? I tried finding the vertices by substitution, but I find it hard to turn the result into the equations of the altitudes.</p>
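<p>The vertex-first approach can be sketched with sympy (the helper functions and variable names here are my own illustration): intersect the sides pairwise to get the vertices, then pass the line through each vertex along the normal of the opposite side.</p>

```python
from sympy import symbols, solve, Eq

x, y = symbols('x y')
L1 = 2*x + 3*y + 4        # side 1
L2 = x - y + 3            # side 2
L3 = 5*x + 4*y - 20       # side 3

def intersect(f, g):
    # Solve the 2x2 linear system; sympy returns a dict {x: ..., y: ...}
    s = solve([f, g], [x, y])
    return s[x], s[y]

# Each vertex is the intersection of the two sides it lies on
A = intersect(L2, L3)     # vertex opposite side L1
B = intersect(L1, L3)     # vertex opposite side L2
C = intersect(L1, L2)     # vertex opposite side L3

def altitude(P, a, b):
    # Altitude through P perpendicular to the line a*x + b*y + c = 0:
    # it runs along that line's normal (a, b), so b*(x-px) - a*(y-py) = 0.
    px, py = P
    return Eq(b*(x - px) - a*(y - py), 0)

alt_A = altitude(A, 2, 3)
alt_B = altitude(B, 1, -1)
alt_C = altitude(C, 5, 4)
print(alt_A, alt_B, alt_C, sep='\n')
```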
| Robert Shore | 640,080 | <p>The answer is</p>
<p><span class="math-container">$$3^n+ \sum_{k=1}^n 3^{k-1}\cdot 3^{n-k}=n3^{n-1}+3^n.$$</span></p>
<p>There are <span class="math-container">$3^n$</span> possible words with no occurrence of <span class="math-container">$b$</span>. Assume the first <span class="math-container">$b$</span> occurs at position <span class="math-container">$k$</span>. Then each of the first <span class="math-container">$k-1$</span> positions has three possible choices (<span class="math-container">$a, c, \text{ or } d$</span>), and each of the last <span class="math-container">$n-k$</span> positions has three possible choices (<span class="math-container">$b, c, \text{ or } d$</span>).</p>
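<p>The count can be verified by brute force for small <span class="math-container">$n$</span>. Note the question text above got separated from this answer in the dump, so the reading assumed here, words of length <span class="math-container">$n$</span> over <span class="math-container">$\{a,b,c,d\}$</span> in which no <span class="math-container">$a$</span> occurs after a <span class="math-container">$b$</span>, is my reconstruction from the casework:</p>

```python
from itertools import product

# Count words over {a, b, c, d} with no 'a' anywhere after a 'b'.
def count(n):
    total = 0
    for w in product('abcd', repeat=n):
        if not any(w[i] == 'b' and 'a' in w[i+1:] for i in range(n)):
            total += 1
    return total

# Compare against the closed form n*3^(n-1) + 3^n from the answer.
for n in range(1, 7):
    assert count(n) == n * 3**(n-1) + 3**n
```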
|
3,231,271 | <blockquote>
<p>Suppose <span class="math-container">$X$</span> is Banach and <span class="math-container">$T\in B(X)$</span> (i.e. <span class="math-container">$T$</span> is a linear and continuous map and <span class="math-container">$T:X \to X$</span>). Also, suppose <span class="math-container">$\exists c > 0$</span>, s.t. <span class="math-container">$\|Tx\| \ge c\|x\|, \forall x\in X$</span>. Prove <span class="math-container">$T$</span> is a compact operator if and only if <span class="math-container">$X$</span> is finite dimensional.</p>
</blockquote>
<p>"<span class="math-container">$X$</span> is finite dimensional <span class="math-container">$\implies$</span> <span class="math-container">$T$</span> is compact" is easy to show. To prove the other side, at first, I made a mistake, thinking <span class="math-container">$X$</span> is reflexive. Under that assumption the proof goes through easily, using the fact that any bounded sequence in a reflexive space has a weakly convergent subsequence and that <span class="math-container">$T$</span> is completely continuous (since <span class="math-container">$T$</span> is compact). But that is not the situation here. </p>
<blockquote>
<p>So how to prove "<span class="math-container">$T$</span> is compact <span class="math-container">$\implies X$</span> is finite dimensional"?</p>
</blockquote>
| Matematleta | 138,929 | <p>Here is an idea: let <span class="math-container">$T(X) \ni y_n=T(x_n)\to y\in \overline{T(X)}$</span>. Then <span class="math-container">$(y_n)$</span> is Cauchy, and since <span class="math-container">$\|T(x_n-x_m)\| \ge c\|x_n-x_m\|$</span>, the sequence <span class="math-container">$(x_n)$</span> is Cauchy as well, so <span class="math-container">$x_n\to x\in X$</span> by completeness. Continuity of <span class="math-container">$T$</span> then implies <span class="math-container">$y_n=T(x_n)\to T(x)$</span>, and so <span class="math-container">$y=T(x)$</span>. Therefore, <span class="math-container">$T$</span> has closed range.</p>
<p><span class="math-container">$T(B_X)$</span> contains a ball in the Banach space <span class="math-container">$T(X)$</span>, by the open mapping theorem, and since <span class="math-container">$\overline {T(B_X)}$</span> is compact, <span class="math-container">$T(X)$</span> is locally compact, hence finite dimensional. But <span class="math-container">$T$</span> is injective, so <span class="math-container">$X$</span> must be finite dimensional too. </p>
|
81,982 | <p>I am beginning to do some work with cubical sets and thought that I should have an understanding of the various extra structures one may put on cubical sets (for the purposes of this question, connections). I know that cubical sets behave more nicely when one has an extra set of degeneracies called connections. The question is: why these particular relations? Why do they show up? Precise references would be greatly appreciated.</p>
| Ronnie Brown | 19,949 | <p>The point I want to make was that the notion of connection on cubical set was forced on us in the following way. </p>
<p>Go back to</p>
<p><a href="http://groupoids.org.uk/pdffiles/brown-spencerCTDC_1977_18_4_450_0.pdf" rel="nofollow">[21]</a>. (with C.B. SPENCER), ``Double groupoids and crossed modules'', Cah. Top. Géom. Diff. 17 (1976) 343-362.</p>
<p>The problem we started with in 1971 was: since double groupoids were putative codomains for a 2-d van Kampen type theorem, were there interesting examples of double groupoids? </p>
<p>We easily found functors </p>
<p>(1) (double groupoids) $\to $ (crossed modules) </p>
<p>We eventually found a functor </p>
<p>(2) (crossed modules) $\to$ (double groupoids)</p>
<p>which nicely tied in double groupoids with classical ideas, but which double groupoids arose in this way? A concurrent question was: what is a commutative cube in a double groupoid? (An answer was needed for the conjectured proof of the 2-d vKT.) </p>
<p>It was great that both questions were resolved with the notion of connection! (our first perhaps rambling exposition was turned down by JPAA as a result of negative referee reports, and because the 2-d van Kampen theorem, an explicit aim, was not yet achieved). As explained in [21] the transport law was borrowed from a paper of Virsik on path connections, hence the name `connection', see also [21] for a general definition. </p>
<p>It was not too hard to formulate the higher dimensional laws on connections, since they involved the monoid structure max in the unit interval, but the verification of the equivalence corresponding to (2) was carried out by Philip Higgins, (phew!), stated in </p>
<p>[22]. (with P.J. HIGGINS), ``Sur les complexes crois\'es,
$\omega$-groupo\"{\i}des et T-complexes'', <em>C.R. Acad. Sci.
Paris S\'er. A.</em> 285 (1977) 997-999.</p>
<p>and published in </p>
<p>[31]. (with P.J. HIGGINS), ``On the algebra of cubes'', <em>J. Pure
Appl. Algebra</em> 21 (1981) 233-260.</p>
<p>I hope the early pages of the book partially titled <a href="http://groupoids.org.uk/pdffiles/brown-spencerCTDC_1977_18_4_450_0.pdf" rel="nofollow">Nonabelian Algebraic Topology</a> (EMS Tract vol 15, 2011) (pdf with hyperref downloadable from my web site, with permission of EMS) will help to explain the background. Look at particularly the notion of <em>algebraic inverse to subdivision</em>, which necessitated the cubical approach. </p>
<p>Another relevant paper is </p>
<p>Higgins, P.J. Thin elements and commutative shells in cubical
$\omega$-categories.
<em>Theory Appl. Categ.</em> 14 (2005) No. 4, 60--74.</p>
|
3,023,726 | <p>I'm trying to solve a problem that I can't seem to work out.</p>
<blockquote>
<p><span class="math-container">$f$</span> is an entire function. Prove that <span class="math-container">$|f^{(n)}(0)|< n!n^n$</span> for at least 1 <span class="math-container">$n$</span>. </p>
</blockquote>
<p>I've been thinking of using the Cauchy estimates somehow, but there's no reason for me to believe that <span class="math-container">$f$</span> is bounded.</p>
<p>Any help is appreciated.</p>
| achille hui | 59,379 | <p>Let <span class="math-container">$M = \sup \{ |f(z)| : |z| = 1\}$</span>. Since <span class="math-container">$f$</span> is entire, for all <span class="math-container">$n \ge 0$</span>, we have</p>
<p><span class="math-container">$$f^{(n)}(0) = \frac{n!}{2\pi i}\int_{|z|=1} \frac{f(z)}{z^{n+1}}dz$$</span>
Whenever <span class="math-container">$n > \max\{ 1, M \}$</span>, this leads to
<span class="math-container">$$|f^{(n)}(0)| \le \frac{n!}{2\pi} \int_{|z|=1} |f(z)| |dz| \le
\frac{n!}{2\pi} \int_{|z|=1} M |dz| = n!M
< n! n < n!n^n
$$</span></p>
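<p>As a numerical sanity check of the Cauchy estimate (my own sketch, for the concrete entire function <span class="math-container">$f(z)=e^z$</span>): the Taylor coefficients <span class="math-container">$c_n=f^{(n)}(0)/n!$</span> can be recovered from samples of <span class="math-container">$f$</span> on the unit circle via the FFT, and the estimate says <span class="math-container">$|c_n|\le M$</span>.</p>

```python
import numpy as np

N = 64
z = np.exp(2j * np.pi * np.arange(N) / N)  # N-th roots of unity on |z| = 1
vals = np.exp(z)                            # f(z) = e^z sampled on the circle
coeffs = np.fft.fft(vals) / N               # coeffs[n] ~ c_n = f^(n)(0)/n!
M = np.abs(vals).max()                      # sup of |f| on the circle (= e here)

for n in range(N // 2):
    assert abs(coeffs[n]) <= M + 1e-9       # i.e. |f^(n)(0)| <= n! * M
```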
|
198,739 | <p>I'm actually doing an exercise where I have to draw graphs of functions.
I understand r=|s| but not |r|=|s|.
Are they the same?</p>
| Emily | 31,475 | <p>Let $r = 4$ and $s = -4$. Then, $r = |s|$ and $|r| = |s|$.</p>
<p>Then, switch it around. Let $r = -4$ and $s = 4$. Then, $|r| = |s|$ but $r \neq |s|$.</p>
<p>So they are not always the same.</p>
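<p>The two examples above can be checked in a couple of lines of plain Python (my own sketch): $r=|s|$ forces $r$ to be nonnegative, while $|r|=|s|$ does not.</p>

```python
def r_equals_abs_s(r, s):
    return r == abs(s)

def abs_r_equals_abs_s(r, s):
    return abs(r) == abs(s)

# r = 4, s = -4: both relations hold
assert r_equals_abs_s(4, -4) and abs_r_equals_abs_s(4, -4)
# r = -4, s = 4: |r| = |s| holds but r = |s| fails
assert not r_equals_abs_s(-4, 4) and abs_r_equals_abs_s(-4, 4)
```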
|
258,226 | <p>As an algebraist, I have some strong intuitions about what it means for an algebraic result to be true. In particular, my intuition would lead me to believe that if I cannot construct a counter-example to a claim, then the claim must be true. This is what motivated <a href="https://mathoverflow.net/questions/258132/platonic-truth-and-1st-order-predicate-logic">my previous question</a>, where it became clear that my intuition needed some tweaking. (In other words, in some contexts it just doesn't hold true!) The answers given there are excellent, and I recommend people read them before continuing.</p>
<p>So here is a follow-up question to see just how far we can take the intuition.</p>
<p>Let $T_0$ be the theory ${\rm PA}$ over a countable language. Let $T_1$ be the extended theory obtained by adding as new axioms all $\Pi_1^0$ statements which are independent of ${\rm PA}$. This new theory is consistent and sound, assuming that the standard model of the natural numbers exists.</p>
<p>I would guess that this new theory $T_1$ is already not effectively computable. At any rate, suppose now that we let $T_2$ be the extended theory of $T_1$ obtained by adding as new axioms all $\Pi_2^0$ statements which are independent of $T_1$.</p>
<p>Is $T_2$ consistent? If so, is the theory $T_n$ (defined in the obvious recursive way) consistent? If so, is $\bigcup_{n\in \mathbb{N}}T_n$ the true theory of the standard model?</p>
<p>If $T_2$ is not consistent, is there some natural way to fix the problem?</p>
<p>Finally, what happens if we repeat these ideas for ${\rm ZFC}$ instead?</p>
| Joel David Hamkins | 1,946 | <p>Yes, your theory is the same as true arithmetic. In
particular, yes, it is consistent.</p>
<p>I claim that at stage $n$, your theory $T_n$ consists of PA plus
the collection of true $\Pi^0_n$ sentences (that is, true in the
standard model). This starts out true with $T_0$. If $T_n$ is like
that, then consider $T_{n+1}$. If $\psi$ is a true $\Pi^0_{n+1}$
statement, then in particular, it is consistent with $T_n$, since
both are true in the standard model. So either it is provable from
$T_n$, in which case it is already there, or else it is
independent, in which case you add it at stage $n+1$. Conversely,
if $\psi$ is a $\Pi^0_{n+1}$ statement that is independent of
$T_n$, then in particular, it asserts $\forall x\ \phi(x)$, where
$\phi(x)$ is $\Sigma_n$. Since $\psi$ is true in a model of $T_n$,
it follows that $\phi(m)$ must hold of every standard $m$, since if $\phi(m)$ failed then $\neg\phi(m)$ would be a true $\Pi^0_n$ sentence and hence part of $T_n$, preventing $\psi$ from being consistent with $T_n$. So
$\psi$ is true.</p>
<p>Thus, by induction, you are adding all and only the true $\Pi^0_n$
statements at each stage, and you've got the theory of true
arithmetic.</p>
|