| qid | question | author | author_id | answer |
|---|---|---|---|---|
11,172 | <p>As a TA who led calculus* 1 and 2 discussion sections and held office hours** last year, I heard the following (wrong) arguments several times.</p>
<blockquote>
<ol>
<li><p><span class="math-container">$\displaystyle \lim_{x\to \infty} \sqrt{x+1}-\sqrt{x}=0$</span> because <span class="math-container">$\infty-\infty=0$</span>.</p>
</li>
<li><p><span class="math-container">$\displaystyle \lim_{x\to \infty} x^{1/x}=1$</span> because <span class="math-container">$\infty^0=1$</span>.</p>
</li>
<li><p><span class="math-container">$\int_1^{\infty}f(x)dx$</span> and <span class="math-container">$\int_1^{\infty}g(x)dx$</span> both diverge, so <span class="math-container">$\int_1^{\infty}f(x)+g(x)dx$</span> diverges.</p>
</li>
</ol>
</blockquote>
<p>I usually explain that the arguments are not true in general by providing a (very trivial) counterexample, for example:</p>
<blockquote>
<ol>
<li><p><span class="math-container">$\displaystyle \lim_{x\to \infty} f(x)=\infty$</span> and <span class="math-container">$\displaystyle \lim_{x\to \infty} g(x)=\infty$</span> does not guarantee <span class="math-container">$\displaystyle \lim_{x\to \infty} f(x)-g(x)=0$</span>, for example, <span class="math-container">$f(x)=x+1$</span> and <span class="math-container">$g(x)=x$</span>.</p>
</li>
<li><p><span class="math-container">$\displaystyle \lim_{x\to \infty} f(x)=\infty$</span> and <span class="math-container">$\displaystyle \lim_{x\to \infty} g(x)=0$</span> does not guarantee <span class="math-container">$\displaystyle \lim_{x\to \infty} f(x)^{g(x)}=1$</span>, for example, <span class="math-container">$f(x)=2^x$</span> and <span class="math-container">$g(x)=1/x$</span>.</p>
</li>
<li><p>False in general, for example <span class="math-container">$f(x)=-g(x)=1$</span></p>
</li>
</ol>
</blockquote>
<p>After giving explanations like that I sometimes heard "But in your examples you can cancel the expression/formula..." and I was not sure how to continue. I tried the following methods; none of them seemed to work very well.</p>
<p>a. Provide a much more complicated counter example which requires a few minutes of calculation to get the answer. This often leads to further confusion.</p>
<p>b. Just say that is the wrong way to do it. It sounds like "I'm the teacher so believe me." and doesn't do too much.</p>
<p>c. Show them the correct way to do their problems. This is almost like b (Why is your way the right way and mine is the wrong way?).</p>
<p>I'm looking for a better way to deal with questions like these.</p>
<p>*<span class="math-container">$\epsilon-\delta$</span> definition is not introduced.
** Office hours are held in the tutoring center, where I'm also responsible for students taking the class from professors I'm not TA'ing for.</p>
| leftaroundabout | 6,902 | <p>I find it in general a bit backwards if a teacher has to explain <em>why such and such claim isn't true</em>. For any given statement that can't be proven, the default assumption should be that it's not true<sup>†</sup>.</p>
<p>The entire confusion could be avoided by never talking about “infinite results” in the first place. If you replace $\lim f(x) = +\infty$ with simply <em>the limit doesn't exist</em>, then it's not possible to forge that into an unsound expression like $\infty^0$.</p>
<p>Now, of course what we don't want to do is make it more difficult to arrive at <em>correct</em> results, such as l'Hospital's rule. But IMO, it's actually <em>helpful</em> if one doesn't start with infinite limits of some $f$ and $g$ for such theorems: the theorem is all the more impressive if it doesn't just give you a way to <em>calculate</em> some particular result, but actually “generates” that result, which you previously had no reason to assume existed!</p>
<p>This kind of proof-oriented thinking was never really explained to me at school, and I only started really enjoying maths when I understood it at university.</p>
<hr>
<p><sup>†</sup><sub>Of course, one should not assume that it's <em>wrong</em> either. It's often good <a href="https://en.wikipedia.org/wiki/Constructivism_(mathematics)" rel="nofollow">not to think in a true/false dichotomy</a>...</sub></p>
|
11,172 | <p>As a TA who led calculus* 1 and 2 discussion sections and held office hours** last year, I heard the following (wrong) arguments several times.</p>
<blockquote>
<ol>
<li><p><span class="math-container">$\displaystyle \lim_{x\to \infty} \sqrt{x+1}-\sqrt{x}=0$</span> because <span class="math-container">$\infty-\infty=0$</span>.</p>
</li>
<li><p><span class="math-container">$\displaystyle \lim_{x\to \infty} x^{1/x}=1$</span> because <span class="math-container">$\infty^0=1$</span>.</p>
</li>
<li><p><span class="math-container">$\int_1^{\infty}f(x)dx$</span> and <span class="math-container">$\int_1^{\infty}g(x)dx$</span> both diverge, so <span class="math-container">$\int_1^{\infty}f(x)+g(x)dx$</span> diverges.</p>
</li>
</ol>
</blockquote>
<p>I usually explain that the arguments are not true in general by providing a (very trivial) counterexample, for example:</p>
<blockquote>
<ol>
<li><p><span class="math-container">$\displaystyle \lim_{x\to \infty} f(x)=\infty$</span> and <span class="math-container">$\displaystyle \lim_{x\to \infty} g(x)=\infty$</span> does not guarantee <span class="math-container">$\displaystyle \lim_{x\to \infty} f(x)-g(x)=0$</span>, for example, <span class="math-container">$f(x)=x+1$</span> and <span class="math-container">$g(x)=x$</span>.</p>
</li>
<li><p><span class="math-container">$\displaystyle \lim_{x\to \infty} f(x)=\infty$</span> and <span class="math-container">$\displaystyle \lim_{x\to \infty} g(x)=0$</span> does not guarantee <span class="math-container">$\displaystyle \lim_{x\to \infty} f(x)^{g(x)}=1$</span>, for example, <span class="math-container">$f(x)=2^x$</span> and <span class="math-container">$g(x)=1/x$</span>.</p>
</li>
<li><p>False in general, for example <span class="math-container">$f(x)=-g(x)=1$</span></p>
</li>
</ol>
</blockquote>
<p>After giving explanations like that I sometimes heard "But in your examples you can cancel the expression/formula..." and I was not sure how to continue. I tried the following methods; none of them seemed to work very well.</p>
<p>a. Provide a much more complicated counter example which requires a few minutes of calculation to get the answer. This often leads to further confusion.</p>
<p>b. Just say that is the wrong way to do it. It sounds like "I'm the teacher so believe me." and doesn't do too much.</p>
<p>c. Show them the correct way to do their problems. This is almost like b (Why is your way the right way and mine is the wrong way?).</p>
<p>I'm looking for a better way to deal with questions like these.</p>
<p>*<span class="math-container">$\epsilon-\delta$</span> definition is not introduced.
** Office hours are held in the tutoring center, where I'm also responsible for students taking the class from professors I'm not TA'ing for.</p>
| Benoît Kloeckner | 187 | <p>I think your first strategy, giving counter-examples, is a good start; you may have more success by <strong>giving counter-examples that carry meaning</strong> and <strong>relate to things the students already know</strong>. In the case at hand, you can also try to make the point that <strong>limits are not about arithmetic, but about asymptotic behavior.</strong></p>
<p>I will consider the first fallacy you mention for illustration.</p>
<hr>
<p><strong>To give counter-examples that carry meaning</strong>, you can draw two graphs of functions and have the student guess what their limits are (which you have taken to be $+\infty$). Then point to (or ask him or her to show you) the difference between the two functions on the graph: it is represented by the vertical space between the two graphs. Now you can draw examples where both functions go to $+\infty$, but the gap between them goes to zero, or $+\infty$, or any given number, or does not have a limit. Then you can ask (or explain how) to turn these drawings into precise counter-examples. Hopefully, symbols will start to carry meaning in the student's mind rather than being purely abstract.</p>
<p>One of the things we teachers too often forget to tell our students is the process that made us propose a counter-example. We are like magicians pulling a rabbit out of our hat, while our true goal is to attract their attention to the trick. By the way, I much prefer to use hand-drawn graphs, because a computer-generated graph requires you to think of the formula first, feed it to the computer, and then observe. What we really do is think of the general form of the graph, and then come up with a formula. Also, students are far more likely to feel they can use this method of reasoning themselves with hand-drawn graphs than with a computer, and you want them to be able to debunk their next fallacies themselves.</p>
<hr>
<p><strong>To relate the fallacies to things they already know,</strong> you can ask them if it is true that when two (positive) functions go to $+\infty$, their quotient goes to $1$. They may know that the answer is negative, and you can get them to explain it to you (to themselves, really). Then this refresher can be used in the case at hand, which is in fact awfully close.</p>
<hr>
<p>To make the point that <strong>limits are not about arithmetic, but about asymptotic behavior</strong>, you can try to separate two things: the meaning of the limit (what is the behavior of the function at a certain point or in a certain direction?) and the tools used to determine limits. I guess that students often confuse the two (in a wide array of topics, not only limits; as an example, many students end up considering eigenvalues as being <em>defined</em> as the roots of the characteristic polynomial, leaving them unable to prove that the vector with all entries $1$ is an eigenvector of the matrix with all entries $1$: almost all of my students have failed this question because they wanted to compute the characteristic polynomial first).</p>
<p>You can thus try to rephrase the first fallacy: "if two functions go to $+\infty$ as the argument goes to $+\infty$, then the gap between the two <em>must</em> go to zero". They may realize they have no reason to believe that (though the drawings above may be necessary to get there). Relate this to the true fact that when the common limit is not $+\infty$ but a real number, then the difference must go to zero. Have them understand that the use of arithmetic is made possible there because in that situation, the conclusion is inevitable. Even if they don't have the tools to prove it, they can see it. There you are at the root of the problem (they applied a rule they learned in a situation it does not apply to, without seeing that there is a missing hypothesis; such reasoning by similarity is very common). Once they realize the difference between the two situations (finite and infinite common limit), they may accept to at least doubt what they first said. The confusion might have been entrenched in their minds because they have seen the case when one of the two functions goes to $\infty$ and the other has a finite limit, so make them look at this case too (without formulas at first, just graphs).</p>
|
2,442,233 | <p>let $A= \{x^2 \mid 0 < x <1\}$ and $B =\{x^3 \mid 1 < x < 2 \}$.</p>
<p>Which of the statement is true?</p>
<p>1.there is a one to one, onto function from $A$ to $B$.</p>
<p>2.there is no onto function from $A$ to $B$</p>
<p>My attempt: there will be no onto function from $A$ to $B$ because the order of $A$ is $2$ and the order of $B$ is $3$, so there is no one-to-one function from $A$ to $B$.</p>
<p>So my answer is option 2.
Is my answer correct or not? Please tell me the solution.</p>
| John Griffin | 466,397 | <p>"...because the order of $A$ is $2$ and the order of $B$ is $3$, so..."
What do you mean by the order of a set? How does this relate to the functions it admits?</p>
<p>Note that $A=(0,1)$ and $B=(1,8)$. Both of these sets have the same cardinality as $\mathbb{R}$. Thus there are bijections $f:A\to\mathbb{R}$ and $g:B\to\mathbb{R}$. Using $f$ and $g$, can you construct a bijection from $A$ onto $B$?</p>
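Following up on the final question: since $A=(0,1)$ and $B=(1,8)$ are both open intervals, one does not even need to route through $\mathbb{R}$; a single affine map already works. A small Python sketch (the particular map `h` below is just one convenient choice, not the unique answer):

```python
def h(x):
    """A bijection from A = (0, 1) onto B = (1, 8): scale by 7, shift by 1."""
    return 7 * x + 1

def h_inv(y):
    """The inverse map, from B back onto A."""
    return (y - 1) / 7

# h is strictly increasing (hence one-to-one) and maps (0, 1) onto (1, 8),
# so a one-to-one, onto function from A to B exists: option 1 is correct.
```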
|
467,279 | <p>I'm reading Intro to Topology by Mendelson.</p>
<p>I'm in the section titled "Compact Metric Spaces".</p>
<p>The problem is in the title.</p>
<p>My attempt at the proof is as follows:</p>
<p>Let $\{a_n\}_{n=1}^\infty$ be a Cauchy sequence in $X$. We will show that $\{a_n\}_{n=1}^\infty$ converges to a point in $X$. Consider the set $S=\{a_n:n\in\mathbb{N}\}$. Then there are two cases to consider, $S$ finite and $S$ infinite. If $S$ is finite then there exists some $N\in\mathbb{N}$ such that $a_n=a$ for some $a\in S$ and so $\{a_n\}_{n=1}^\infty\to a$. Suppose now that $S$ is infinite. Then $S$ has at least one accumulation point in $X$, call it $a$. Thus, the neighborhood $B(a;\frac{1}{n})$ contains a point $a_n\in S$ and $\lim\limits_{n\to\infty} a_n=a$.</p>
<p>My concern with this proof is no where did I use the fact that the sequence was Cauchy, other than supposing it was. I know this is a flaw in my proof since I have to use the hypothesis some where. </p>
<p>I was also considering looking at the $\text{sup} S$, but I'm not sure how to go about using that fact or whether or not that's the right approach.</p>
<p>Thanks for any help or feedback!</p>
| taylorlan | 226,447 | <p>Any sequence of points in a compact metric space has a convergent subsequence. We also know that a <em>Cauchy</em> sequence that has a convergent subsequence is in fact convergent. So this easily shows that a compact metric space is complete. </p>
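The second step relies on a standard lemma that is worth making explicit, since it is exactly where the Cauchy hypothesis gets used; a sketch of its proof:

```latex
\textbf{Lemma.}\ \text{A Cauchy sequence with a convergent subsequence converges.}\\
\text{Let } (a_n) \text{ be Cauchy with } a_{n_k}\to a. \text{ Given } \varepsilon>0,
\text{ choose } N \text{ so that } d(a_m,a_n)<\varepsilon/2 \text{ for all } m,n\ge N,\\
\text{and choose } k \text{ with } n_k\ge N \text{ and } d(a_{n_k},a)<\varepsilon/2.
\text{ Then for } n\ge N:\quad d(a_n,a)\le d(a_n,a_{n_k})+d(a_{n_k},a)<\varepsilon.
```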
|
2,895,923 | <p>I am trying to solve this simple geometry problem but I am always tangled in so many equations it makes my head spin. I tried solving it via similar triangles but i cant seem to eliminate all the unwanted variables. Please help.</p>
<p>I have to prove $ r_1\times r_3=(r_2)^2$</p>
<p><a href="https://i.stack.imgur.com/5KQru.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5KQru.png" alt="Question Image"></a></p>
<p>Thank you</p>
| g.kov | 122,782 | <p><a href="https://i.stack.imgur.com/6Lf4q.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6Lf4q.png" alt="enter image description here"></a></p>
<p>We have
\begin{align}
\triangle AOB:\quad
|AO| &= \frac{r_1}{\sin\phi}
,\\
|AP|&=|AO|+|OP|=\frac{r_1}{\sin\phi}+r_1+r_2
,\\
\triangle APC:\quad
|AP| &= \frac{r_2}{\sin\phi}
,
\end{align}<br>
which gives
\begin{align}
r_2&=
\frac{r_1(1+\sin\phi)}{1-\sin\phi}
.
\end{align} </p>
<p>Similarly,</p>
<p>\begin{align}
|AQ| &= |AP|+|PQ|=\frac{r_1}{\sin\phi}+r_1+2r_2+r_3
\\
&=
\frac{r_1}{\sin\phi}+r_1+
\frac{2r_1(1+\sin\phi)}{1-\sin\phi}
+r_3
,\\
\triangle AQD:\quad
|AQ| &= \frac{r_3}{\sin\phi}
,\\
\end{align}</p>
<p>hence</p>
<p>\begin{align}
r_3&=
\frac{r_1(1+\sin\phi)^2}{(1-\sin\phi)^2}
,\\
\end{align}</p>
<p>and </p>
<p>\begin{align}
r_1r_3&=r_2^2
\end{align}</p>
<p>follows.</p>
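With this many substitutions it is easy to slip a sign, so the final identity can be double-checked symbolically; a small sketch using SymPy, assuming only the two displayed formulas for $r_2$ and $r_3$ derived above:

```python
import sympy as sp

r1, phi = sp.symbols('r1 phi', positive=True)
s = sp.sin(phi)

# intermediate results from the derivation above
r2 = r1 * (1 + s) / (1 - s)
r3 = r1 * (1 + s)**2 / (1 - s)**2

# the claimed identity: r1 * r3 - r2^2 simplifies to zero
check = sp.simplify(r1 * r3 - r2**2)
```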
|
875,924 | <p>For what values of $x\in\mathbb{R}$ is $f$ continuous?</p>
<p>$f(x) = \left\{
\begin{array}{lr}
0 & \text{if}\, x \in \Bbb Q\\
1 & \text{if}\, x \notin \Bbb Q
\end{array}
\right.$</p>
<p>The solution I found:</p>
<p>$f(x)$ is continuous nowhere. For, given any number $a$ and any $\delta>0$, the interval $(a-\delta, a+ \delta)$ contains infinitely many rational numbers and infinitely many irrational numbers.</p>
<p>Since $f(a) = 0$ or $1$, there are infinitely many numbers $x$ with $0<|x-a|< \delta $, and $|f(x) - f(a)| = 1$</p>
<p>Thus, $\lim_{x \to a}f(x) \neq f(a)$</p>
<p>My question: I'm having a hard time visualizing that you can't have a rational number $x$ with rational numbers on both sides of that $x$ such that $\lim_{x \to a}f(x) = f(a)$(is that because you can go as 'deep' in to the interval as you like to reach the irrational number?). Can anybody attempt to give me some insight in to how this works? </p>
<p>I have never given it much thought that you can have infinitely many numbers even in the smallest of intervals, and it's pretty overwhelming for me to even imagine.</p>
| Shabbeh | 165,678 | <p>When ${x \to a}$, $f(x)$ doesn't tend to any specific number, because there are infinitely many irrational and rational numbers near $a$.</p>
<p>but $f(a)$ is either $0$ or $1$ </p>
<p>and thus (as you noticed) $\lim_{x \to a}f(x) \neq f(a)$ </p>
<p>so $f(x)$ is not continuous.</p>
|
875,924 | <p>For what values of $x\in\mathbb{R}$ is $f$ continuous?</p>
<p>$f(x) = \left\{
\begin{array}{lr}
0 & \text{if}\, x \in \Bbb Q\\
1 & \text{if}\, x \notin \Bbb Q
\end{array}
\right.$</p>
<p>The solution I found:</p>
<p>$f(x)$ is continuous nowhere. For, given any number $a$ and any $\delta>0$, the interval $(a-\delta, a+ \delta)$ contains infinitely many rational numbers and infinitely many irrational numbers.</p>
<p>Since $f(a) = 0$ or $1$, there are infinitely many numbers $x$ with $0<|x-a|< \delta $, and $|f(x) - f(a)| = 1$</p>
<p>Thus, $\lim_{x \to a}f(x) \neq f(a)$</p>
<p>My question: I'm having a hard time visualizing that you can't have a rational number $x$ with rational numbers on both sides of that $x$ such that $\lim_{x \to a}f(x) = f(a)$(is that because you can go as 'deep' in to the interval as you like to reach the irrational number?). Can anybody attempt to give me some insight in to how this works? </p>
<p>I have never given it much thought that you can have infinitely many numbers even in the smallest of intervals, and it's pretty overwhelming for me to even imagine.</p>
| Anupam | 84,126 | <p>Let $x_0\in \mathbb Q$. Then $(x_n)$, where $x_n=x_0+\frac{\sqrt{2}}{n}$ is a sequence of irrationals such that $x_n\to x_0$. But $f(x_n)=1$ for all $n\in \mathbb N$ and $f(x_0)=0$. Thus $(f(x_n))$ can not converge to $f(x_0)$. Thus $f$ is not continuous at any point of $\mathbb Q$. </p>
<p>Similarly it can be shown that $f$ is not continuous at any point of $\mathbb R\setminus \mathbb Q$.</p>
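This sequence argument can even be checked by a computer algebra system; a small SymPy sketch (the choice $x_0=\tfrac12$ is arbitrary):

```python
import sympy as sp

x0 = sp.Rational(1, 2)                          # a rational point
terms = [x0 + sp.sqrt(2) / k for k in range(1, 6)]

# rational + irrational is irrational, so f(x_n) = 1 for every term ...
all_irrational = all(t.is_rational is False for t in terms)

# ... yet x_n -> x0, where f(x0) = 0
n = sp.Symbol('n', positive=True)
lim = sp.limit(x0 + sp.sqrt(2) / n, n, sp.oo)
```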
|
951,914 | <p>Here's my question. We know that the absolute value of $x$ looks like:
<img src="https://i.stack.imgur.com/ePQo9.png" alt="Absolute Value of x"></p>
<p>Clearly, we can see, since the absolute value of x is always greater than or equal to 0, the area under the curve is always positive. Why then does it integrate to the following? Where the integral function takes negative values? I do understand, though, that the derivative of the following function works out to be what one would expect: |x|</p>
<p><strong>EDIT</strong>: I guess this question is a little bit stupid. I am confusing definite integral with the indefinite integral. I do notice that if I take any two points and take the difference between the values of the indefinite integral evaluated at these points, I get a positive value for the area. </p>
<p><img src="https://i.stack.imgur.com/FCtT3.png" alt="Integral of Absolute Value of x"></p>
| davidlowryduda | 9,754 | <p>Your intuition seems to be telling you that the antiderivative of an always-positive function should be always positive. But this is not correct, and this is a counterexample. Integrating <span class="math-container">$x^2 + 1$</span> is another example: its antiderivative is <span class="math-container">$\frac{x^3}{3} + x + C$</span>, which is not always positive.</p>
<p>Instead, the correct property that we should expect is for the function to be always <em>increasing</em>. Starting with a positive function <span class="math-container">$f(x)$</span>, we know that <span class="math-container">$\displaystyle \int_a^b f(x) dx > 0$</span>. In particular, this should mean that <span class="math-container">$\displaystyle F(x) = \int_0^x f(t) dt$</span>, which is the antiderivative, to be a strictly <em>increasing</em> function.</p>
<p>For instance, <span class="math-container">$\int_a^b f(x) dx > 0 \iff F(b) - F(a) > 0$</span>, so that we see that <span class="math-container">$F(x)$</span> must be strictly increasing.</p>
<p>In this case, <span class="math-container">$\frac{1}{2}x^2 \text{sgn}(x)$</span> is a strictly increasing function, so that it might be the antiderivative of a positive function (like it is).</p>
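The "increasing but not always positive" behaviour is easy to confirm numerically; a quick sketch (assuming NumPy):

```python
import numpy as np

def F(x):
    """An antiderivative of |x|, namely F(x) = (x^2 / 2) * sgn(x)."""
    return 0.5 * x**2 * np.sign(x)

xs = np.linspace(-3.0, 3.0, 1001)
vals = F(xs)

# F dips below zero for x < 0, yet it is strictly increasing, so any
# definite integral F(b) - F(a) with b > a (an area under |x|) is positive
negative_somewhere = vals.min() < 0
strictly_increasing = bool(np.all(np.diff(vals) > 0))
```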
|
322,448 | <p>I'm supposed to show that: </p>
<p>$$y=\frac{5(x-1)(x+2)}{(x-2)(x+3)} = P + \frac{Q}{(x-2)} + \frac{R}{(x+3)}$$ </p>
<p>and the required answers are: $$ P=5, Q=4, R=-4 $$ </p>
<p>I tried to solve this with partial fractions like so: </p>
<p>$$5(x-1)(x+2) = A(x+3) + B(x-2)$$ </p>
<p>$\implies$ $A$=4, $B$=-4<br>
$\implies$ $Q$=4, R=-4 </p>
<p>But where does $P$=5 come from?</p>
<p>Or should I have first multiplied out the numerator and denominator and then used long division to solve?</p>
| achille hui | 59,379 | <p>You don't need to do any long division. In the expression</p>
<p>$$y = \frac{N(x)}{D(x)} = \frac{5(x-1)(x+2)}{(x-2)(x+3)}$$</p>
<p>The numerator $N(x) = 5(x-1)(x+2)$ and denominator $D(x) = (x-2)(x+3)$
are polynomials of degree $2$. Since the roots of $D(x)$ are simple,
$y$ can be rewritten in the form:</p>
<p>$$y = P(x) + \frac{Q}{(x-2)} + \frac{R}{(x+3)}$$</p>
<p>where $P(x)$ is a polynomial of degree $\deg N(x) - \deg D(x) = 2 - 2 = 0$, i.e. a constant. To evaluate the 3 coefficients, you can evaluate both sides at 3 different
values of $x$: the two roots of $D(x)$ and $\infty$:</p>
<p>$$\begin{align}
P &= \lim_{x\to\infty} \frac{N(x)}{D(x)} = 5\\
Q &= \lim_{x\to 2} (x-2)\frac{N(x)}{D(x)} = \lim_{x\to 2}\frac{5(x - 1)(x+2)}{x+3} = \frac{5(2 - 1)(2+2)}{2+3} = 4\\
R &= \lim_{x\to -3}(x+3)\frac{N(x)}{D(x)} = \lim_{x\to-3}\frac{5(x-1)(x+2)}{x-2} = \frac{5(-3-1)(-3+2)}{-3-2} = -4
\end{align}$$</p>
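A computer algebra system can reproduce both the decomposition and the three limit evaluations; a sketch using SymPy (`apart` performs partial-fraction decomposition):

```python
import sympy as sp

x = sp.symbols('x')
y = 5 * (x - 1) * (x + 2) / ((x - 2) * (x + 3))

# full partial-fraction decomposition in one call
decomposition = sp.apart(y, x)

# the three limits used above for P, Q, R
P = sp.limit(y, x, sp.oo)
Q = sp.limit((x - 2) * y, x, 2)
R = sp.limit((x + 3) * y, x, -3)
```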
|
13,274 | <p>I have searched Stack Overflow (and comparable pages) for quite a while now (I got redirected from there to this specialized stack), and I surrender. I am trying to numerically evaluate an expression whose final value is small. <br></p>
<p>Example:</p>
<pre>Log[Log[Log[6^5^4^3^2^1]]]=12.9525...<br></pre>
<p>WolframAlpha has no problem evaluating these values for any (even a very high) number of exponents (I tried it up to 20). I guess it is possible to achieve this in Mathematica as well?</p>
<p>I tried Hold, Defer etc, as described in <br>
<a href="https://stackoverflow.com/questions/1616592/mathematica-unevaluated-vs-defer-vs-hold-vs-holdform-vs-holdallcomplete-vs-etc">https://stackoverflow.com/questions/1616592/mathematica-unevaluated-vs-defer-vs-hold-vs-holdform-vs-holdallcomplete-vs-etc</a>
<br>
However, none of these did what I hoped for. Is it a matter of explaining the rules of logarithms to Mathematica?</p>
<pre>FullSimplify[Log[x^b], x>0 && b>0]</pre>
<p>expands it nicely; however, that is not what I want (I have explicit numbers).
Is there any way to perform the calculations WolframAlpha performs with Mathematica (obviously avoiding the WolframAlpha Output Operator ;))?</p>
<p>Is there some Option/Assumption etc I have overlooked?</p>
<hr>
<p>For this specific question there is a recursive algebraic solution:
$$
n^{(n-1)^{...^1}}=e^{\log(n)*(n-1)^{...^1}}
$$
and so on; remove a bunch of $e$'s at the end.
I guess Wolfram|Alpha uses this. I would still like to know if there's a true Mathematica solution to this.</p>
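Carried out numerically, this recursion never has to form the gigantic power, so ordinary machine floats suffice; a hedged Python sketch of the same idea (natural logarithms, matching Mathematica's <code>Log</code>):

```python
import math

# exponent tower above the 6: 2^1 = 2, 3^2 = 9, 4^9 = 262144,
# so the number is 6^(5^262144), far too large to form directly
e1 = 4 ** (3 ** (2 ** 1))                       # 262144

# log(6^(5^e1)) = 5^e1 * log 6, hence
# log(log(6^(5^e1))) = e1*log(5) + log(log(6))
second_log = e1 * math.log(5) + math.log(math.log(6))

# one final log gives the sought value, about 12.9525
third_log = math.log(second_log)
```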
| Dr. belisarius | 193 | <p>For example you could do:</p>
<pre><code>rules = {Log[x_ y_] :> Log[x] + Log[y], Log[x_^k_] :> k Log[x]};
N[Defer@Log[Log[Log[6^5^4^3^2^1]]] //. rules]
(*
--> 12.9525
*)
</code></pre>
<p>But you should be aware that it doesn't work ad-infinitum because your expression stops transforming after you get to </p>
<p>$\log \left(\log (5) 4^{3^{2^1}}+\log (\log (6))\right)$</p>
<p><strong>Edit</strong></p>
<p>I posted a related question in <a href="https://math.stackexchange.com/q/216937/2731">Mathematics</a> (no full answer yet)</p>
|
102,280 | <p>What are the usual tricks in proving that a group is not simple? (Perhaps a link to a list?)</p>
<p>Also, I may well be being stupid, but why, if the number of Sylow $p$-subgroups is $n_p=1$, do we have a normal subgroup?</p>
| Norbert | 19,538 | <p>Consider the equality
$$
1+x+x^2+...+x^n=\frac{x^{n + 1} -1}{x-1}
$$
Differentiate it by $x$, then multiply by $x$:
$$
x+2x^2+3x^3+...+n x^n=\frac{nx^{n+2} - (n + 1)x^{n+1} + x}{(x-1)^2}
$$
Now we can substitute $x=\frac{1}{2}$ and obtain
$$
\sum\limits_{i=0}^n\frac{i}{2^i}=n\left(\frac{1}{2}\right)^{n} - (n + 1)\left(\frac{1}{2}\right)^{n-1} + 2
$$</p>
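The closed form is easy to sanity-check against the direct sum; a small Python sketch:

```python
def closed_form(n):
    """n*(1/2)^n - (n+1)*(1/2)^(n-1) + 2, from the derivation above."""
    return n * 0.5**n - (n + 1) * 0.5**(n - 1) + 2

def direct_sum(n):
    """Sum of i / 2^i for i = 0..n, computed term by term."""
    return sum(i / 2**i for i in range(n + 1))

# the two agree for every n tested; the sum also tends to 2 as n grows
mismatches = [n for n in range(1, 40)
              if abs(closed_form(n) - direct_sum(n)) > 1e-12]
```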
|
2,210,372 | <p>I need to identify which of the curves $A_1,...,A_5$ are related to algorithms whose run times are proportional to $n, \log(n), n^2, n^3$ and $1.1^n$:</p>
<p>(Note that the first figure of the $A_5$ column should be $0.015$, not $0.025$.)</p>
<p><a href="https://i.stack.imgur.com/70I2d.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/70I2d.png" alt="enter image description here"></a></p>
| zwim | 399,263 | <p><a href="https://i.stack.imgur.com/5DXu3.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5DXu3.jpg" alt="enter image description here"></a></p>
<p>As Marcelo suggests, you can identify some of the curves, if not all, at sight.</p>
<h3>linear</h3>
<p>$A_4$ is a straight line, so this is a linear algorithm in $\Theta(n)$.</p>
<h3>log</h3>
<p>$A_2$ is concave, so among the suggested algorithms, only the logarithm fits $\Theta(\log(n))$.</p>
<h3>exponential</h3>
<p>Now only $n^2$, $n^3$ and $1.1^n$ remain, for $A_1$, $A_3$ and $A_5$; these are not easy to distinguish by sight. </p>
<p>But from the array, you can divide consecutive terms, and hopefully the exponential $\frac{C\mu^{n}}{C\mu^{n-5}}=\mu^5$ will emerge naturally.</p>
<p>And effectively, when you do this you find that $A_3$ fits the model $\mu^5\simeq 1.6\iff \mu\simeq 1.1$, so $A_3$ appears to be $\Theta(1.1^n)$.</p>
<h3>polynomial</h3>
<p>The two blue curves $A_1$ and $A_5$ remain.
From the graphic you can notice that $A_5$ is catching up on $A_1$, so $A_1$ should be $\Theta(n^2)$ and $A_5$ should be $\Theta(n^3)$.</p>
<p>To help further identification you can also convert your data to $\log$ scale (this is $\ln(n)$ abscissa), and now if you trace the curves, they are straight lines.</p>
<p>You can measure the slope $\alpha$, and the equation is $\ln(a(n))=\alpha\ln(n)+\beta$.</p>
<p>But this is equivalent in the normal scale to $a(n)=e^{\beta}n^\alpha=Cn^\alpha$ with a constant $C$.</p>
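The consecutive-ratio test from the "exponential" step above is easy to automate; a sketch with synthetic samples at $n=5,10,\dots$ (real measured timings would be noisier, so in practice one looks for the ratio to roughly stabilize):

```python
def ratios(samples):
    """Ratios of consecutive samples taken at n, n+5, n+10, ..."""
    return [b / a for a, b in zip(samples, samples[1:])]

ns = [5, 10, 15, 20, 25, 30]
exponential = [1.1**n for n in ns]   # model C * mu^n with mu = 1.1
cubic = [n**3 for n in ns]           # model Theta(n^3)

exp_ratios = ratios(exponential)     # constant, equal to 1.1^5 ~ 1.61
cubic_ratios = ratios(cubic)         # decreasing toward 1 as n grows
```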
|
2,666,421 | <p>Consider a linear regression model, i.e., $Y = \beta_0 + \beta_1 x_i + \epsilon_i$, where $\epsilon_i$ satisfies the classical assumptions. The estimation method of the coefficients $(\beta_0 , \beta)$ is the least-squared method. What would be an intuitive explanation of why the sum of residuals is $0$? I know the way of showing this algebraically, however I cannot seem to understand the concept and intuition behind it. Any explanations?</p>
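For intuition, it may help to first see the fact numerically: when the model includes an intercept, the least-squares normal equations $X^\top(y - X\hat\beta) = 0$ contain the row $\mathbf{1}^\top \hat\epsilon = 0$, which is exactly the statement that the residuals sum to zero. A small sketch with synthetic data (assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 2.0 + 3.0 * x + rng.normal(size=200)    # synthetic data

# design matrix with an intercept column of ones
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
residuals = y - X @ beta

# the ones-column row of the normal equations forces sum(residuals) = 0
total = residuals.sum()
```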
| Georges Elencwajg | 3,217 | <p>The simplest example is when $X$ is a smooth projective curve, say over $\mathbb C$.<br>
Then a prime divisor is just a single point $P\in X$ and the group morphism $$j:\mathbb Z\to \operatorname {Pic(X)}:n\mapsto [nP]$$ is injective.<br>
<strong>Reason:</strong> any principal divisor $$\operatorname {div}f=\sum n_iP_i\in \operatorname {Prin}(X)\subset \operatorname {Div(X)}$$ has degree $\sum n_i=0$, so that $nP$ can be principal only if $n=0.$<br>
But is $j$ surjective?</p>
<p>$\bullet$If the genus of $X$ is zero, i.e. if $X=\mathbb P^1$, we get an isomorphism $j:\mathbb Z\stackrel{\cong}{\to}\operatorname {Pic(X)}$<br>
<strong>Reason:</strong> $\operatorname {Pic(U)}=\operatorname {Pic(\mathbb A^1)}=0$ because $\mathcal O(\mathbb A^1)=\mathbb C[T]$ and a UFD has zero Picard group. </p>
<p>$\bullet \bullet$ If $X$ has positive genus $g$, then $j$ is not surjective.<br>
<strong>Reason:</strong> $\operatorname {Pic(X)}$ is non denumerable. More precisely: the group $\operatorname {Pic(X)}$ can be endowed with the structure of a (non-connected) smooth complex variety of dimension $g$ (this is a non-trivial result).</p>
<p><strong>Remarks</strong><br>
1) Note carefully that the injectivity of $j$ prevents (for any value of the genus $g$) the group morphism $\operatorname {Pic(X)}\to \operatorname {Pic(U)}$ from being bijective (contrary to what you thought according to the ninth line of your question).<br>
2) If $X$ is not a smooth projective curve $j$ has no reason to be injective.<br>
The simplest counterexample is for $X=\mathbb A^1$: as I mentioned above $\operatorname {Pic}(\mathbb A^1)=0$, so that $j:\mathbb Z\to \operatorname {Pic}(\mathbb A^1)=0$ is definitely not injective!</p>
|
114,340 | <p>Let $M\subset \mathbb C^2$ be a hypersurface defined by $F(z,w)=0$. Then at some point $p\in M$, I have
$$\text{ rank of }\left(
\begin{array}{ccc}
0 &\frac{\partial F}{\partial z} &\frac{\partial F}{\partial w} \\
\frac{\partial F}{\partial z} &\frac{\partial^2 F}{\partial ^ 2z} &\frac{\partial^2 F}{\partial z\partial w} \\
\frac{\partial F}{\partial w} &\frac{\partial^2 F}{\partial w\partial z} & \frac{\partial^2 F}{\partial w^2} \\
\end{array}
\right)_{\text{ at p}}=2.$$</p>
<p>What does it mean geometrically? Can anyone give a geometric picture near $p$? </p>
<p>Any comment, suggestion, please.</p>
<p>Edit: Actually I was reading about Levi-flat points and pseudoconvex domains, and I want to understand the relation between these two concepts. A point $p$ for which the rank of the above matrix is 2 is called Levi-flat. If the surface is everywhere Levi-flat then it is locally equivalent to $(0,1)\times \mathbb{C}^n$, so I have many examples... but what happens for others? For example, take the three-sphere in $\mathbb{C}^2$ given by $F(z,w)=|z|^2+|w|^2−1=0$; this doesn't satisfy the rank 2 condition. Can I have precisely these two situations?</p>
| John | 221,643 | <p>The product of the transpose of an adjacency matrix with the adjacency matrix itself is a measure of similarity between nodes. For instance, take the non-symmetric directed adjacency matrix A = </p>
<pre><code>1, 0, 1, 0
0, 1, 0, 1
1, 0, 0, 0
1, 0, 1, 0
</code></pre>
<p>then the product <span class="math-container">$A^T$</span>A (a Gram matrix) gives the un-normalized similarity between column i and column j, which is the symmetric matrix:</p>
<pre><code>3, 0, 2, 0
0, 1, 0, 1
2, 0, 2, 0
0, 1, 0, 1
</code></pre>
<p>This is much like the Gram matrix of a linear kernel in an SVM. An alternate version of the kernel is the RBF kernel. The RBF kernel is simply a measure of similarity between two datapoints that can be looked up in the nxn matrix. Likewise, so is the linear kernel.</p>
<p>A Gram matrix is simply the product of a matrix's transpose with the matrix itself.</p>
<p>Now say you have matrix B which is also a non-symmetric directed adjacency matrix. B =</p>
<pre><code>1, 0, 1, 0
1, 0, 0, 0
1, 0, 0, 0
1, 0, 0, 1
</code></pre>
<p>So <span class="math-container">$A^T$</span>B is a non-symmetric matrix:</p>
<pre><code>3, 0, 1, 1
1, 0, 0, 0
2, 0, 1, 1
1, 0, 0, 0
</code></pre>
<p>Matrix A col i and matrix B col j are proportionately similar according to the above matrix. Thus the product of the transpose of the first matrix with the second matrix is a measure of similarity of nodes characterized by their edges.</p>
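Both worked matrices above can be reproduced in a couple of lines; a NumPy sketch:

```python
import numpy as np

A = np.array([[1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 0, 0],
              [1, 0, 1, 0]])
B = np.array([[1, 0, 1, 0],
              [1, 0, 0, 0],
              [1, 0, 0, 0],
              [1, 0, 0, 1]])

gram = A.T @ A    # symmetric: column-vs-column similarity within A
cross = A.T @ B   # generally non-symmetric: columns of A vs columns of B
```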
|
103,084 | <p><strong>Bug introduced in 8.0 or earlier and fixed in 11.0.0</strong></p>
<p>Case number: 305932.</p>
<hr>
<p>In the front-end notebook (Mathematica 9.0 and 10.2):</p>
<pre><code>g[x: Optional[_,default]]:=x;
</code></pre>
<p>has no problems, and works correctly (<code>g[]</code> outputs <code>default</code>, and <code>g[1]</code> outputs <code>1</code>).</p>
<p>However, when the following package is constructed in an <code>.m</code> file called <code>Dummy.m</code>:</p>
<pre><code>BeginPackage["Dummy`"];
f
g; Thing;
Begin["`Private`"];
g[x: Optional[_,default]]:=x; (*Minimal case causing problem*)
f[x: Optional[Thing->{_},Thing->{1}]]:=x; (*I need this in my package*)
End[];
EndPackage[];
</code></pre>
<p>The same line of code throws a <code>General::patop</code> error upon package initialization. (The definition for <code>f</code> is the structure of the optional argument I need to achieve in my package.)</p>
<pre><code><<Dummy`
</code></pre>
<p><a href="https://i.stack.imgur.com/VidwS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VidwS.png" alt="enter image description here"></a></p>
<p>Is this a bug? And how can I define <code>f</code> in my package?</p>
| m_goldberg | 3,066 | <p>I rewrote your package like this</p>
<pre><code>BeginPackage["Dummy`"]
Clear[f,g,Thing]
Begin["`Private`"]
g[x_:default]:=x
f[x:(Thing->{_}):(Thing->{1})]:=x
End[]
EndPackage[]
</code></pre>
<p>and saved it to <code>/Users/oldmg/Desktop/Dummy.m</code> and ran the following code</p>
<pre><code>Quiet @ Remove["Dummy`*"]
<< "/Users/oldmg/Desktop/Dummy.m" (* *)
dummy = 42;
g[] && g[2]
</code></pre>
<blockquote>
<p><code>Dummy`Private`default && 2</code></p>
</blockquote>
<pre><code>f[] && f[Thing -> {42}] && f[42]
</code></pre>
<blockquote>
<p><code>(Thing -> {1}) && (Thing -> {42}) && f[42]</code></p>
</blockquote>
<p>I believe this is the behavior you are looking for in <code>Dummy.m</code>, so you might use this as a guide when you are writing your non-toy package code.</p>
|
1,403,228 | <blockquote>
<p><strong>Question:</strong></p>
<p>Let <span class="math-container">$m, n, q, r \in \mathbb Z$</span>. If <span class="math-container">$m = qn + r$</span>, show that <span class="math-container">$\gcd(m, n) = \gcd(n, r)$</span>. Hence justify the Euclidean Algorithm.</p>
</blockquote>
<p>I found this question in a past test paper, but cannot seem to find a reference in my textbook that indicates how I can go about "proving" the above statement. Can anyone please point me in the right direction?</p>
| 727 | 126,929 | <p><strong>Hint:</strong> </p>
<p>Suppose $d$ divides both $m$ and $n$ (it needn't be the $\gcd$). Then $d$ divides $m+kn$ for any $k \in \mathbb{Z}$.</p>
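<p>The hint can be sanity-checked numerically; a small sketch of the invariant and of the Euclidean algorithm it justifies (Python used here only as a checking tool):</p>

```python
# Check: if m = q*n + r, then gcd(m, n) = gcd(n, r), which is exactly the
# invariant that makes the Euclidean algorithm work.
from math import gcd

def euclid(m, n):
    # Euclidean algorithm, justified by gcd(m, n) = gcd(n, m % n).
    while n != 0:
        m, n = n, m % n
    return abs(m)

for m in range(1, 60):
    for n in range(1, 60):
        q, r = divmod(m, n)
        assert m == q * n + r
        assert gcd(m, n) == gcd(n, r)      # the statement being proved
        assert euclid(m, n) == gcd(m, n)   # hence the algorithm is correct
print("gcd invariant verified")
```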
|
2,845,001 | <p>If I know the length of a chord of a circle and the length of the corresponding arc, but do not know the circle's radius, is there a formula by which I can calculate the length of the sagitta?</p>
| paulwal222 | 1,040,297 | <p>These other answers are too complicated. Here is a simple way to approximate it, where s=sagitta, a=arc length, c=chord length:</p>
<p><span class="math-container">$$s = 0.42 \sqrt{ a^2 - c^2}$$</span></p>
<p>If the arc is more like a full semi-circle, then the constant will be a little closer to 0.41. If the arc is more flat, then the constant will be closer to 0.43. But these are minor differences for approximations.</p>
<p>If you have only the sagitta and arc length and want to approximate the chord length, then you can use:</p>
<p><span class="math-container">$$c = \sqrt{ a^2 - \left(\frac{s}{0.42}\right)^2 }$$</span></p>
<p>If you have only the sagitta and chord length and want to approximate the arc length:</p>
<p><span class="math-container">$$a = \sqrt{ c^2 + \left(\frac{s}{0.42}\right)^2 }$$</span></p>
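<p>A quick numeric sanity check of the 0.42 rule of thumb. For a unit circle with half-angle t, the arc is 2t, the chord is 2 sin t, and the sagitta is 1 - cos t; the radius cancels out of the ratio:</p>

```python
import math

# Check that s / sqrt(a^2 - c^2) really stays near 0.42 (between the quoted
# 0.41 for a semicircle and 0.43 for a nearly flat arc).
for deg in range(5, 91, 5):
    t = math.radians(deg)
    a, c, s = 2 * t, 2 * math.sin(t), 1 - math.cos(t)
    ratio = s / math.sqrt(a * a - c * c)
    assert 0.41 < ratio < 0.44, (deg, ratio)
print("constant stays between 0.41 and 0.44 for all tested arcs")
```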
|
1,870,675 | <blockquote>
<p>Let $R$ be a commutative ring with unity, and let $R^{\times}$ be the group of units of $R$. Then is it true that $(R,+)$ and $(R^{\times},\ \cdot)$ are not isomorphic as groups ? </p>
</blockquote>
<p>I know that the statement is true in general for fields. And it is trivially true for any finite ring (as $|R^{\times}| \le |R|-1<|R|$, so there cannot even be a bijection between them).</p>
<p>I can show that the groups are not isomorphic whenever $\operatorname{char} R \ne 2$ , but I am unable to deal with $\operatorname{char} R=2$ case ... Please help. Thanks in advance.</p>
| Community | -1 | <p>The conjecture is false. Here is a counterexample.</p>
<p>Suppose $R$ is a ring with the property that every $r \in R^\times$ satisfies $r^2 = 1$.</p>
<p>Then both $(R,+)$ and $(R^\times,\cdot)$ are abelian groups of exponent $2$; that is, they are vector spaces over $\mathbf{F}_2$.</p>
<p>If $B$ is a basis for a vector space $V$ over $\mathbf{F}_2$, then the elements of $V$ can be identified with finite subsets of $B$. If $B$ is infinite, it has the same cardinality as its set of finite subsets. Consequently, $(R,+)$ and $(R^\times, \cdot)$ are isomorphic if and only if $R$ and $R^\times$ have the same cardinality.</p>
<p>Let $X$ be a set of indeterminates, and define the ring</p>
<p>$$ T[X] = \mathbf{F}_2[X] / \langle x^2 - 1 \mid x \in X \rangle $$</p>
<p>$(T[X], +)$ is a vector space whose basis is the set of all finite subsets of $X$. For any $v \in T[X]$, let $\deg(v)$ be the sum of the coefficients of $v$.</p>
<p>For every $v \in T[X]$, $v^2 = \deg(v)$.</p>
<p>Therefore, for every $v \in T[X]$, either $v$ is a zero divisor ($v^2 = 0$) or $v$ is a unit (with inverse $v$). Thus, $T[X]^\times$ is the set of all elements with $\deg(v) = 1$.</p>
<p>If $X$ is infinite, then $T[X]$ and $T[X]^\times$ have the same cardinality, and therefore $(T[X],+)$ is isomorphic to $(T[X]^\times, \cdot)$ as abelian groups.</p>
|
221,403 | <p>I have a series of functions <span class="math-container">$f_d$</span> in <span class="math-container">$d$</span> variables and would like compute the sum of each one evaluated at each lattice point within the <span class="math-container">$d$</span>-sphere of radius <span class="math-container">$R$</span>; that is, at each point <span class="math-container">$(x_1,x_2 \dots x_d) \in \mathbb{Z}^d:\sqrt{x_1^2+x_2^2 \dots +x_d^2} \leq R$</span>.</p>
<p>I have no problem evaluating the functions <span class="math-container">$f_d$</span> at these points; I think the best way would be to use <code>Part</code> within <code>Sum</code> on the list of lattice points. <strong>What I am unsure of</strong> is how to generate <span class="math-container">$P_R$</span>, the list of lattice points not outside the <span class="math-container">$d$</span>-sphere of radius <span class="math-container">$R$</span>. </p>
| David G. Stork | 9,735 | <pre><code>pts[dim_Integer, num_Integer] :=
Select[RandomVariate[UniformDistribution[Table[{-1, 1}, dim]], num],
Norm[#] <= 1 &]
</code></pre>
<p>Then</p>
<pre><code>pts[5,20]
</code></pre>
<p>(Note that this requests 20 random sample points, but only the subset lying within the unit ball is kept.)</p>
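<p>If you want the exact lattice-point list the question asks for, rather than a random sample, a direct enumeration works for moderate R and d. Here is a sketch in Python (in Mathematica the analogous filter could presumably be built from <code>Tuples</code> and <code>Select</code>):</p>

```python
from itertools import product

def lattice_points(d, R):
    """All integer points (x1,...,xd) with x1^2 + ... + xd^2 <= R^2."""
    r = int(R)
    return [p for p in product(range(-r, r + 1), repeat=d)
            if sum(x * x for x in p) <= R * R]

print(len(lattice_points(2, 1)))  # 5: the origin and the four unit vectors
print(len(lattice_points(2, 2)))  # 13
```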
|
2,228,328 | <p>Is the function $ f(z)=\dfrac 1 {x^2+y^2} + i \dfrac 1 {x^2+y^2} $ is differentiable and holomorphic somewhere?</p>
<p>We have $z=x+iy$ and $f(z)= \dfrac{1+i}{|z|^2}$. Now, $f'(0)= \lim_{\delta z \rightarrow 0} \frac{f(0+\delta z) - f(0)}{\delta z}$ is undefined, so $f(z)$ is not differentiable at the origin. Also, since $|z|^2$ is not differentiable anywhere except the origin, $f(z)$ is not differentiable and hence not holomorphic. But I am not sure, please help me.</p>
| WB-man | 420,588 | <p>You can check for sure using the Cauchy-Riemann equations. If we have a complex function $f(x+yi) = u(x,y) + iv(x,y)$, then for $f$ to be differentiable at a point $z_0 = x_0 + y_0 i$, both
$$
\left\{
\begin{aligned}
\frac{\partial u}{\partial x} &= \frac{\partial v}{\partial y} \\
\frac{\partial u}{\partial y} &= -\frac{\partial v}{\partial x}
\end{aligned} \right.
$$
must be satisfied at $z_0$. Let's check the first equation:
\begin{align}
\frac{\partial u}{\partial x} = -\frac{2x}{(x^2+y^2)^2} \overset{?}{=} -\frac{2y}{(x^2+y^2)^2} = \frac{\partial v}{\partial y}
\end{align}
We can cancel all the common factors to obtain
$$ x = y $$
So the first of the CR equations holds whenever $x = y$. As for the second equation:$\newcommand{\AND}{\ \ {\rm{\small{AND}}}\ \ }$
$$ \frac{\partial u}{\partial y} = -\frac{2y}{(x^2+y^2)^2} \overset{?}{=} \frac{2x}{(x^2+y^2)^2} = -\frac{\partial v}{\partial x} $$
This simplifies to
$$ -y = x $$
So <em>both</em> CR equations are satisfied only when $x = y \AND -y = x$. This only happens when $x = y = 0$, which is already precluded because $f$ is not defined there. So the CR equations are satisfied nowhere, and hence $f$ is differentiable nowhere, and is therefore holomorphic nowhere.</p>
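<p>A numeric cross-check of the computation above (a sketch; the partial derivatives are hard-coded from the formulas for u = v = 1/(x^2 + y^2)):</p>

```python
# The first CR equation forces x = y, the second forces x = -y; away from
# the (excluded) origin they can never hold at the same time.

def partials(x, y):
    d = (x * x + y * y) ** 2
    ux, uy = -2 * x / d, -2 * y / d   # du/dx, du/dy
    vx, vy = -2 * x / d, -2 * y / d   # v = u, so the same partials
    return ux, uy, vx, vy

def cr_holds(x, y, tol=1e-12):
    ux, uy, vx, vy = partials(x, y)
    return abs(ux - vy) < tol and abs(uy + vx) < tol

pts = [(i / 4, j / 4) for i in range(-8, 9) for j in range(-8, 9)
       if (i, j) != (0, 0)]
assert not any(cr_holds(x, y) for x, y in pts)
print("CR equations fail at every tested nonzero point")
```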
|
85,622 | <p>I'm trying to understand how this works in terms of Galois theory and local class field theory. Assume we have an extension of local fields $E/L/K$ s.t. $E/L$ and $L/K$ are abelian. I'm interested in recognizing when $E/K$ is Galois. Clearly, $E/K$ is Galois if and only if $E$ is always fixed by an extension of an $L/K$ automorphism to $E$, but this is tricky to compute.</p>
<p>I overheard a brief conversation that this can be done through Galois groups and some group actions that occur in the tower, but I haven't found anything explicit through google. We should be able to see from how $Gal(L/K)$ acts on something whether or not the extension is Galois. I'm having trouble seeing what the action should be. I hope someone who knows what I'm talking about could write it down explicitly. Since the Galois groups should correspond through local class field theory to very concrete objects which are quotients of $E^\times$, $L^\times$ and $K^\times$. I was wondering how this action on the Galois side is expressed on the local field side?</p>
<p>I'm interested in this since it clearly would provide a tool for constructing some solvable extensions of e.g. $\mathbb{Q}_p$. I apologize for being fuzzy, but I don't know how to be more explicit.</p>
| Chandan Singh Dalawat | 2,821 | <p>When you have a finite abelian extension $E|L$ of a finite galoisian extension $L|K$ of local fields, the extension $E|K$ is going to be galoisian if and only if the subgroup $N_{E|L}(E^\times)\subset L^\times$ is $Gal(L|K)$-stable.</p>
<p>This is somewhat similar to the following purely algebraic fact : Let $K$ be any field, $L$ a finite galoisian extension of $K$ of group $G=\mathrm{Gal}(L|K)$, and suppose that $L^\times$ has an element of order $n$ for some $n>1$. Then abelian extensions $E$ of $L$ of exponent dividing $n$ correspond bijectively to subgroups $H$ of $L^\times/L^{\times n}$ under the map $E\mapsto\mathrm{Ker}(L^\times/L^{\times n}\to E^\times/E^{\times n})$, the reciprocal being $H\mapsto L(\root n\of H)$ (Kummer theory). In such a situation, the abelian extension $E=L(\root n\of H)$ of $L$ is galoisian over $K$ if and only if the subgroup $H\subset L^\times/L^{\times n}$ is $G$-stable.</p>
<p>One has a similar (purely algebraic) statement in Artin-Schreier theory of abelian extension of exponent $p$ in characteristic $p$, and indeed in Witt's theory of abelian extensions of exponent dividing $p^n$.</p>
|
759,683 | <p>Given an undirected connected simple graph $G=(V,E)$ there are $2^{|E|}$ orientations. How many of these orientations are acyclic? </p>
| mjachi | 838,358 | <p>Very old, but for the sake of completeness, yes, it is well known that we can count</p>
<p><span class="math-container">$$ \text{AO}(G) = (-1)^{|V(G)|}\mathcal{X}_G(-1) = T_G(2,0)$$</span></p>
<p>where <span class="math-container">$\mathcal{X}_G$</span> denotes the chromatic polynomial of <span class="math-container">$G$</span>, and <span class="math-container">$T_G$</span> denotes the <em>Tutte polynomial</em> of <span class="math-container">$G$</span>. You can find a number of papers discussing these results, but the proof of the first equality is usually done separately from the evaluation of the Tutte polynomial at <span class="math-container">$(2,0)$</span>. The Tutte polynomial is usually defined on matroids and worked backwards (since any graph <span class="math-container">$G$</span> admits a graphic matroid <span class="math-container">$M(G)$</span>, anything true for <span class="math-container">$M(G)$</span> is true for <span class="math-container">$G$</span>; i.e., matroids are more general); there are a number of well-known Tutte-Grothendieck (TG) invariants, of which your question asks about just one. Another is <span class="math-container">$T_G(0,2)$</span>, which counts the number of <em>totally cyclic orientations</em> instead.</p>
<p>The proofs for each are scattered about, but I can dig them up on request.</p>
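<p>For a concrete sanity check, here is a brute-force count on a small example graph (the 4-cycle, chosen purely for illustration; its chromatic polynomial is <span class="math-container">$(k-1)^4+(k-1)$</span>) compared against <span class="math-container">$(-1)^{|V|}\mathcal{X}_G(-1)$</span>:</p>

```python
from itertools import product

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]   # the 4-cycle C4
n = 4

def is_acyclic(arcs):
    # Kahn-style peel: repeatedly remove vertices with no incoming arc;
    # if none exists while vertices remain, the orientation has a cycle.
    arcs = set(arcs)
    verts = set(range(n))
    while verts:
        sources = {v for v in verts if not any(b == v for (a, b) in arcs)}
        if not sources:
            return False
        verts -= sources
        arcs = {(a, b) for (a, b) in arcs if a in verts and b in verts}
    return True

count = sum(is_acyclic([(u, v) if d else (v, u) for (u, v), d in zip(edges, ds)])
            for ds in product([0, 1], repeat=len(edges)))

chi = lambda k: (k - 1) ** 4 + (k - 1)     # chromatic polynomial of C4
print(count, (-1) ** n * chi(-1))          # both 14
```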
|
3,563,238 | <p>I try to understand the relation of likelihood to cross-entropy by reading <a href="https://en.wikipedia.org/wiki/Cross_entropy" rel="nofollow noreferrer">cross-entropy</a>.</p>
<p>The problem is I cannot understand the formula for the likelihood in the article. The likelihood is defined as follows</p>
<p><span class="math-container">$$\prod_{i}^{}q_i^{Np_i}$$</span></p>
<p>where</p>
<p><span class="math-container">$q_i$</span> is the estimated probability of outcome <span class="math-container">$i$</span>, <span class="math-container">$p_i$</span> is the empirical probability of outcome <span class="math-container">$i$</span> and <span class="math-container">$N$</span> is the size of the training set. </p>
<p>I haven't seen a formulation of the likelihood like this before, one that combines estimated and empirical probabilities. Why does <span class="math-container">$p_i$</span> appear in the formula? What's the motivation behind this formulation?</p>
| Mini | 743,701 | <p>This is because in a block of length <span class="math-container">$N$</span>, outcome <span class="math-container">$i$</span> appears about <span class="math-container">$Np_i$</span> times and the probability of each of them is estimated as <span class="math-container">$q_i$</span>. This is why for each outcome it is equal to <span class="math-container">$q_i^{Np_i}$</span>.</p>
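<p>A consequence worth checking numerically: the per-sample log of this likelihood is exactly minus the cross entropy, so maximizing one is minimizing the other. A sketch (the two distributions below are made-up examples):</p>

```python
import math

# log( prod_i q_i^(N*p_i) ) / N  ==  sum_i p_i*log(q_i)  ==  -H(p, q)
p = [0.5, 0.3, 0.2]          # empirical distribution
q = [0.4, 0.4, 0.2]          # model's estimated distribution
N = 1000

log_likelihood = sum(N * pi * math.log(qi) for pi, qi in zip(p, q))
cross_entropy = -sum(pi * math.log(qi) for pi, qi in zip(p, q))

assert abs(log_likelihood / N + cross_entropy) < 1e-12
print("maximizing the likelihood = minimizing cross entropy")
```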
|
306,178 | <p>Given
$$
y_n=\left(1+\frac{1}{n}\right)^{n+1}\hspace{-6mm},\qquad n \in \mathbb{N}, \quad n \geq 1.
$$
Show that $\lbrace y_n \rbrace$ is a decreasing sequence. Can anyone help? I considered the ratio $\frac{y_{n+1}}{y_n}$ but got stuck.</p>
| Mikasa | 8,581 | <p>I think it needs just some basic algebraic manipulations: $$\frac{y_{n}}{y_{n+1}}=\frac{(1+\frac{1}n)^{n+1}}{(1+\frac{1}{n+1})^{n+2}}$$ You can show that the latter fraction is equal to $$(1+\frac{1}{n^2+2n})^{n+1}\times\frac{1}{1+1/(n+1)}$$ But $$(1+\frac{1}{n^2+2n})^{n+1}\ge {1+1/(n+1)}$$ Note that if $x\ge-1$ then $(1+x)^n\ge 1+nx.$</p>
|
2,213,047 | <p>Let $F$ be the set of all functions $f : \mathbb{R} \to \mathbb{R}$. A relation $c$ is defined on $F$ by
$f c g$ if and only if $f(x) ≤ g(x)$ for all $x ∈ \mathbb{R} $.
Prove that '$c$' is a partial order.</p>
<p>I have proved that the relation is <em>reflexive</em>. Now I must prove that it is <em>antisymmetric</em> (and later <em>transitive</em>). </p>
<p>My working thus far: </p>
<p>Suppose $a, b ∈ F$. We must prove that if $a c b$ and $b c a$ then $a = b$. Now, if $a c b$ and $b c a$ then $f(a) ≤ f(b)$ and $f(b) ≤ f(a)$. Therefore, $f(a) = f(b)$. </p>
<p><em>Now, how to prove that $a = b$? There is no indication that $f$ is an injective function.</em> </p>
| phunfd | 361,692 | <p>Keep in mind that this is a relation on the functions, not the elements. You seem to be getting confused with this difference. If $f \prec g$ and $g \prec f$ then $f(x) \leq g(x)$ and $f(x) \geq g(x)$ for all $x \in R$. This means $f=g$. No need for injectivity :)</p>
|
2,916,168 | <p>I am a beginner in proofs and, unfortunately, I cannot wrap my mind around how to prove the simplest things, so I need a bit of help getting started. This is the proof that I am dealing with:</p>
<p>$\text{If }x< y< 0\text{, then }x^{2}> y^{2}\text{.}$</p>
<p>Thank you in advance.</p>
| mechanodroid | 144,766 | <p>If $x < y < 0$ then $|x| > |y|$ so $$x^2 = |x|^2 > |y|^2 = y^2$$</p>
|
226,488 | <p>I am currently working on a challenge problem where I need to show that there is a point $x \in \mathbb{R_+}$ such that $\cos(x) = 0$ using only a few properties of the cosine function. In particular, the only properties of the cosine function that I can use are:</p>
<ul>
<li><p>$\cos(x)$ is continuous</p></li>
<li><p>$\cos(x) = Re(\exp(z))$ for $z \in \mathbb{C}$</p></li>
<li>$\cos^2(x) + \sin^2(x)=1$</li>
<li>$\displaystyle \cos(x) = \sum_{n=0}^\infty \frac{(-1)^n}{(2n)!}x^{2n}$</li>
<li>$\displaystyle \exp(x) = \sum_{n=0}^\infty \frac{x^n}{n!}$</li>
</ul>
<p>My strategy is to use the intermediate value theorem on the interval $[0,a]$ since it's easy to show that $\cos(0) = 1$. If I could show that there is a point $a \in \mathbb{R}_+$ s.t. $\cos(a) < 0$, then the IVT and the continuity of the cosine function would allow me to conclude that there has to be some $x\in [0,a]$ such that $\cos(x) = 0$.</p>
| EuYu | 9,246 | <p>If you have $n$ points uniformly distributed on a circle then the angle between the points is $\frac{2\pi}{n}$. If the distance between adjacent points is $m$ then the chord of the sector with angle $\frac{2\pi}{n}$ has length $m$. </p>
<p>Trigonometry then yields
$$r = \frac{m\sin\left(\frac{(n-2)\pi}{2n}\right)}{\sin\left(\frac{2\pi}{n}\right)}$$</p>
<p>Alternatively, this formula will probably be simpler
$$r = \frac{m}{2\sin\left(\frac{\pi}{n}\right)}$$</p>
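<p>The second formula is easy to verify numerically: place $n$ equally spaced points on a circle of radius $r$ and measure the distance between adjacent points:</p>

```python
import math

# Check r = m / (2*sin(pi/n)) for several polygon sizes.
for n in range(3, 12):
    r = 5.0
    pts = [(r * math.cos(2 * math.pi * k / n), r * math.sin(2 * math.pi * k / n))
           for k in range(n)]
    m = math.dist(pts[0], pts[1])          # adjacent-point distance
    assert abs(r - m / (2 * math.sin(math.pi / n))) < 1e-9
print("formula verified for n = 3..11")
```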
|
2,673,078 | <p>Generalize the Monty Hall problem where there are $n \geq 3$ doors, of
which Monty opens $m$ goat doors, with $1 \leq m \leq n$.<br><strong>Original Monty Hall Problem:</strong> There are $3$ doors, behind one of which there is a car (which you want), and behind the other two of which there are goats (which you don’t want). Initially, all possibilities are equally likely for where the car is. You choose a door, which for concreteness we assume is Door $1$. Monty Hall then opens a door to reveal a goat, and offers you the option of switching. Assume that Monty Hall knows which door has the car, will always open a goat door and offer the option of switching, and as above assume that if Monty Hall has a choice between opening Door $2$and Door $3$, he chooses Door $2$ and Door $3$ with equal probability .<br>Find the probability that the strategy of always switching succeeds, given that Monty opens Door $2$.</p>
<p><strong>My approach:</strong><br>
Let $C_i$ be the event that car is behind the door $i$,
$O_i$ be the event that Monty opened door $i$ and $X_i$ be the event that intially I chose door $i$. Here $i=1,2,3,...,n$.<br>
Let's start with case where I chose $X_1$. Then:<br>
$P(O_{j_1, j_2, ..., j_m}|C_1, X_1) = {{n-1}\choose{m}}(\frac{1}{n-1})^m$, here $j \in$ {$m$ doors out of $n-1$, i.e., exclude Door$1$ }<br>
$P(O_{k_1, k_2, ..., k_m}|C_t, X_1) = {{n-2}\choose{m}}(\frac{1}{n-2})^m$,
here $k \in$ {$m$ doors out of $n-2$, i.e., exclude Door$1$ & Door$t$}, $t \in$ {$2,3, ..., n$}<br>
Also, $P(C_r|X_s) = \frac{1}{n}$, here $r,s \in$ {$1,2,...,n$}</p>
<p>Probability of winning by switching is,</p>
<p>$$P(C_3 | O_{k_1, k_2, ..., k_m}, X_1) = \frac{P(O_{k_1, k_2, ..., k_m}|C_3, X_1).P(C_3|X_1)}{P(O_{m-doors}|X_1)}$$</p>
<p>$$= \frac{P(O_{k_1, k_2, ..., k_m}|C_3, X_1).P(C_3|X_1)}{P(O_{j_1, j_2, ..., j_m}|C_1, X_1).P(C_1|X_1) + \sum_{t=2}^n(P(O_{k_1, k_2, ..., k_m}|C_t, X_1).P(C_t|X_1))}$$</p>
<p>$$ = \frac{{{n-2}\choose{m}}(\frac{1}{n-2})^m.\frac{1}{n}}{(\frac{1}{n}).({{n-1}\choose{m}}(\frac{1}{n-1})^m + {{n-2}\choose{m}}(\frac{1}{n-2})^m.(n-1))}$$</p>
<p>$$= \frac{(n-m-1)(n-1)^{m-1}}{(n-2)^m + (n-m-1)(n-1)^m}$$</p>
<p>However, the correct answer is $\frac{(n-1)}{(n-m-1).n}$. Any insights to what I have done wrong.</p>
| angryavian | 43,949 | <p>If your initial door is the car, the switching strategy fails.</p>
<p>$\frac{n-1}{n}$ of the time, your initial door is not the car.
Then excluding your initial door and the $m$ goat doors, the car must be in one of the $n-m-1$ remaining doors, and presumably the switching strategy picks uniformly at random. So, the probability that switching succeeds is $\frac{n-1}{n} \cdot \frac{1}{n-m-1}$.</p>
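<p>This closed form can be confirmed by exact enumeration of the game mechanics (a sketch in rational arithmetic; the switcher is modeled as picking uniformly among the unopened, non-initial doors):</p>

```python
from fractions import Fraction
from itertools import combinations

def p_switch_wins(n, m):
    win = Fraction(0)
    doors = range(n)
    initial = 0                               # by symmetry, fix the initial pick
    for car in doors:                         # car placed uniformly
        goats = [d for d in doors if d != initial and d != car]
        opts = list(combinations(goats, m))   # Monty opens m goat doors
        for opened in opts:
            remaining = [d for d in doors if d != initial and d not in opened]
            # switcher picks uniformly among the remaining doors
            win += (Fraction(1, n) * Fraction(1, len(opts))
                    * Fraction(sum(d == car for d in remaining), len(remaining)))
    return win

for n in range(3, 8):
    for m in range(1, n - 1):
        assert p_switch_wins(n, m) == Fraction(n - 1, n * (n - m - 1))
print("closed form confirmed for all tested (n, m)")
```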
|
3,129,192 | <p>I've been trying to integrate
<span class="math-container">$$\int\frac{dx}{(x-a)(x-b)}$$</span></p>
<p>By using the substitution
<span class="math-container">$$x=a \cos^2 \theta + b \sin^2 \theta$$</span></p>
<p>The only problem here is I arrived at the result
<span class="math-container">$$\frac{2}{a-b} \ln |\csc 2\theta - \cot 2\theta|+c$$</span></p>
<p>and I am having trouble with how to substitute back from <span class="math-container">$\theta$</span> to <span class="math-container">$x$</span>.</p>
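<p>For reference, the identity <span class="math-container">$\csc 2\theta-\cot 2\theta=\tan\theta$</span> together with <span class="math-container">$\tan^2\theta=\frac{x-a}{b-x}$</span> (which follows from the substitution, since <span class="math-container">$x-a=(b-a)\sin^2\theta$</span> and <span class="math-container">$b-x=(b-a)\cos^2\theta$</span>) turns the result into <span class="math-container">$\frac{1}{a-b}\ln\left|\frac{x-a}{x-b}\right|+c$</span>. A numeric check that this differentiates back to the integrand:</p>

```python
import math

# F(x) = ln|(x - a)/(x - b)| / (a - b) should have F'(x) = 1/((x - a)(x - b)).
a, b = 1.0, 4.0   # sample values, chosen arbitrarily

def F(x):
    return math.log(abs((x - a) / (x - b))) / (a - b)

def integrand(x):
    return 1.0 / ((x - a) * (x - b))

h = 1e-6
for x in [-3.0, 0.5, 2.0, 3.5, 7.0]:   # points avoiding x = a and x = b
    deriv = (F(x + h) - F(x - h)) / (2 * h)   # central finite difference
    assert abs(deriv - integrand(x)) < 1e-6, x
print("antiderivative verified numerically")
```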
| Paul Frost | 349,785 | <p>Identifying <span class="math-container">$\mathbb{R}^2$</span> with <span class="math-container">$\mathbb{C}$</span>, we have <span class="math-container">$A = \{ z \in \mathbb{C} \mid 1 \le \lvert z \rvert \le 2 \}$</span>. Define
<span class="math-container">$$q : I^2 \to A, q(x,y) = 2(x+1)e^{2\pi iy} .$$</span>
This is a well-defined continuous map because <span class="math-container">$\lvert q(x,y) \rvert = 2(x+1) \in [1,2]$</span>. It is a closed map because <span class="math-container">$I^2$</span> is compact and <span class="math-container">$A$</span> is Hausdorff. Moreover, <span class="math-container">$q$</span> is surjective: If <span class="math-container">$z \in A$</span>, then <span class="math-container">$z = \lvert z \rvert e^{2\pi it}$</span> for some <span class="math-container">$t \in [0,1]$</span>. Hence <span class="math-container">$(\frac{\lvert z \rvert -1}{2},t) \in I^2$</span> and <span class="math-container">$q(\frac{\lvert z \rvert -1}{2},t) = z$</span>.</p>
<p>So <span class="math-container">$q$</span> is a continuous closed surjection, hence a quotient map. Obviously we have <span class="math-container">$q(x,y) = q(x',y')$</span> if and only if <span class="math-container">$x = x'$</span> and either <span class="math-container">$y = y'$</span> or <span class="math-container">$\{y, y' \} = \{0, 1 \}$</span>. Note that the latter implies <span class="math-container">$(x,y) \sim (x',y')$</span>.</p>
<p>Let <span class="math-container">$p : A \to T = A/\approx$</span> denote the quotient map. Then <span class="math-container">$r = p \circ q : I^2 \to T$</span> is a quotient map. We claim that <span class="math-container">$r(x,y) = r(x',y') \Leftrightarrow (x,y) \sim (x',y')$</span>.</p>
<p>"<span class="math-container">$\Leftarrow$</span>" We have <span class="math-container">$q(x,0) = q(x,1)$</span>, hence <span class="math-container">$r(x,0) = r(x,1)$</span>, and we have <span class="math-container">$q(0,y) = e^{2\pi iy}, q(1,y) = 2e^{2\pi iy} = 2q(0,y)$</span>, hence <span class="math-container">$r(0,y) = r(1,y)$</span>.</p>
<p>"<span class="math-container">$\Rightarrow$</span>" In case <span class="math-container">$q(x,y) = q(x',y')$</span> we are done. So let <span class="math-container">$q(x,y) \ne q(x',y')$</span>. Hence w.l.o.g. (1) <span class="math-container">$\lvert q(x,y) \rvert = 1$</span> and (2) <span class="math-container">$q(x',y') = 2q(x,y)$</span>. But (2) implies (3) <span class="math-container">$\lvert q(x',y') \rvert = 2$</span> and (4) <span class="math-container">$y = y'$</span> or <span class="math-container">$\{y, y' \} = \{0, 1 \}$</span>. From (1) and (3) we conclude <span class="math-container">$x = 0$</span> and <span class="math-container">$x' = 1$</span>. Then (4) shows that <span class="math-container">$(x,y) \sim (x',y')$</span>.</p>
<p>Our above claim proves that <span class="math-container">$r$</span> induces a homeomorphism <span class="math-container">$\hat{r} : I^2 /\sim \phantom{.} \to A/\approx$</span>. See also <a href="https://math.stackexchange.com/q/3064037">When is a space homeomorphic to a quotient space?</a>.</p>
|
1,326,525 | <p>Suppose I have some functions $f,g$ such that
$$f:\Bbb{R} \mapsto \Bbb{R}^2$$
$$g:\Bbb{R}^2 \mapsto \Bbb{R}^n$$</p>
<p><strong>My Question:</strong></p>
<p>For some $c \in \Bbb{R}$, is $g(f(c))$ a function of one variable? If so, why is the fact that $g$ requires two arguments irrelevant? </p>
<p>EDIT: </p>
<p>Specifically, I am considering a function
$$\mathbf{F}(\mathbf{y}) = \nabla f(\mathbf{x}) -\nabla (\lambda g(\mathbf{x}))$$
where $$\mathbf{y} = [\lambda,\mathbf{x}]$$
I want to write
$$\mathbf{F}(c,\mathbf{y}) = 0$$
and use the implicit function theorem to show that $\lambda$ and $\mathbf{x}$ can be parametrized by $c$, but I am confused why/if I can write $\mathbf{F}$ in terms of $c$ and $\mathbf{y}$ given that the first derivative of my Lagrangian multiplier equation isn't a function of $c$. </p>
| Mathxx | 221,836 | <p>As @AlexR said, $P(\text{dog} \cap \neg \text{cat}) = P(\text{dog}) - P(\text{dog} \cap \text{cat})$</p>
<p>So you'll get 0.1 for P(Dog and not cat).</p>
<p>$P(\text{Not cat} \mid \text{Dog})=\frac{0.1}{0.25}=0.4$</p>
|
1,212,177 | <p>Does $i\arg(e^{2z})=2iy?$ If it does I have solved my problem, and hence it seems like it must be the case, but I don't see it.</p>
<p>$$i\arg(e^{2z})=i\arg(e^{2x+2iy})=i\arg(e^{2x}e^{2iy})\implies \theta=2y(?)$$ Why does the $2x$ get 'ignored'?</p>
| dustin | 78,317 | <p>A property of the argument is $\arg(z_1z_2) = \arg(z_1)+\arg(z_2)$. Therefore,
$$
\arg(e^{2x}e^{2iy}) = \arg(e^{2x})+\arg(e^{2iy})
$$
For $\arg\in(-\pi,\pi)$, what is $ \arg(e^{2x})$?</p>
|
2,735,854 | <p>How many permutations of the word $STRESSLESSNESS$ begin OR end with an $E$? </p>
<p>Correct me if I'm wrong, but you would have to subtract the permutations where $E$ begins AND ends the permutation?</p>
| Leyla Alkan | 459,554 | <ul>
<li><p>$E\underbrace{------------}_{12 \text {letter}}E$<br>
Result: $\frac{12!}{7!}$, since there are seven $S$'s among the remaining letters (STRSSLSSNESS) we divide $12!$ by $7!$ to eliminate overcounting</p></li>
<li><p>$E\underbrace{-------------}_{13 \text {letter}}$<br>
Result: $\frac{13!}{2!7!}$, since there are seven $S$'s and two $E$'s among the remaining letters (STRSSLESSNESS) we divide $13!$ by $7!2!$ to eliminate overcounting</p></li>
<li><p>$\underbrace{-------------}_{13 \text {letter}}E$<br>
Result: $\frac{13!}{2!7!}$, again since there are seven $S$'s and two $E$'s among the remaining letters (STRSSLESSNESS) we divide $13!$ by $7!2!$ to eliminate overcounting</p></li>
</ul>
<p>But remember that the $2nd$ and $3rd$ cases contain the $1st$ case as well. So, we need to handle that overcounting too by subtracting the $1st$ result from the $2nd$ and $3rd$ results.</p>
<p>So, in total: $\frac{12!}{7!}+(\frac{13!}{2!7!}-\frac{12!}{7!})+ (\frac{13!}{2!7!}-\frac{12!}{7!})$</p>
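<p>A quick numeric check of these counts (STRESSLESSNESS has 14 letters: seven $S$'s, three $E$'s, and one each of $T$, $R$, $L$, $N$), plus a symmetry cross-check: a uniformly random arrangement starts with $E$ with probability $3/14$:</p>

```python
from math import factorial as f

both   = f(12) // f(7)             # E...E: 12 letters left, 7 S's, 1 E
starts = f(13) // (f(7) * f(2))    # E...:  13 letters left, 7 S's, 2 E's
ends   = starts                    # ...E: same count by symmetry
total  = starts + ends - both      # inclusion-exclusion: begin OR end with E

distinct = f(14) // (f(7) * f(3))  # all distinct arrangements
assert starts * 14 == distinct * 3 # starts-with-E fraction is 3/14
print(total)
```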
|
186,310 | <p>I am trying to evaluate: </p>
<pre><code>For[i = 1, i < 100000, i++, h[0] = hs;
h[i] = hi /. Check[FindRoot[(Xfnew[a, w, hi, mxs + i*step]*92*10^23) == 1198/10000, {hi, h[i - 1]}], {hi -> 25}];
If [h[i] == 25, Break[], AppendTo[hin, {mxs + i*step, h[i]}]]];
</code></pre>
<p>where mxs , hs, and step has some values, and hin is initially empty list.</p>
<p>The reason I am using Findroot in a loop, is that my function of Xfnew is seriously complicated, and Findroot can only find solutions very close to the actual solution. So I cannot evaluate Findroot over a wide range. So I am using small steps, and using the previous step's result as the starting point for FindRoot.</p>
<p>If FindRoot cannot evaluate this, then it returns some error message, and the Check option sets the solution to 25, in which case I break the for loop, otherwise it keeps adding to the list.</p>
<p>I want to know if there is a faster way to do the same thing in Mathematica, (like using Table, or other functions which generally people recommend using, over for loop).</p>
<p>Thank you for your help. </p>
| John Doty | 27,989 | <p>You're creating pairs <code>{m,h}</code> where each pair really only depends on the previous pair. That's perfect for <code>NestWhileList</code>. <code>i</code> is irrelevant, an artifact of procedural formulation: you don't need it. You need two functions. The first transforms an <code>{m,h}</code> pair into a new pair, something like:</p>
<pre><code>f[{m_,h_}]:={m+step,(whatever makes the next h)}
</code></pre>
<p>Then, you need a test function, something like:</p>
<pre><code>tf[{m_,h_}]:= h!=25
</code></pre>
<p>(although I'd use something like <code>False</code> or <code>$Failed</code> as a termination token rather than a number). Then,</p>
<pre><code>NestWhileList[f,{mxs,hs},tf,1,100000]
</code></pre>
<p>will make a list of your results.</p>
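<p>The same warm-start pattern, sketched in Python for readers who want to experiment outside Mathematica (the root problem below is a stand-in, not the original <code>Xfnew</code>):</p>

```python
import math

def newton(g, dg, x0, tol=1e-12, maxit=100):
    x = x0
    for _ in range(maxit):
        step = g(x) / dg(x)
        x -= step
        if abs(step) < tol:
            return x
    return None                    # failure token, like Check / $Failed

def advance(pair):
    # Step the parameter m, then solve x*exp(m*x) = 1 for x,
    # seeding Newton with the root found at the previous m.
    m, h = pair
    m += 0.1
    g = lambda x: x * math.exp(m * x) - 1.0
    dg = lambda x: math.exp(m * x) * (1.0 + m * x)
    return (m, newton(g, dg, h))

results = [(0.0, 1.0)]             # at m = 0 the root of x*e^0 = 1 is x = 1
while results[-1][1] is not None and len(results) < 50:
    results.append(advance(results[-1]))
print(len(results), results[-1])
```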
|
2,698,284 | <p>Here is the problem: </p>
<p>Let A be a set with n elements. Find an expression for S(n, 2), the number of partitions of A into exactly two subsets. You can either start with the general recurrence for S(n, k), or count S(n, 2) directly. </p>
<p>I'm having trouble understanding what exactly it wants me to do. So far I know that it wants me to find an expression that gives the number of combinations of A into sets like (x, y). I got the equation: (n - 1) * 2 for the number of partitions of A into two parts, but I don't understand what the problem means by creating two subsets. What am I doing wrong?</p>
| Ashwin Trisal | 343,481 | <p>First, the problem isn't actually asking you to divide up $A$ into sets with exactly two elements -- for something like $|A|=3$, this is impossible. Instead, what they want is to consider all the ways that $A=B\cup C$, where $B\cap C=\emptyset$.</p>
<p>So how can we approach this? Generally, we do it by induction if we're going to do it directly. Or, if you have the recurrence for $S(n,k)$, you may as well use that.</p>
|
2,655,178 | <p>I am asked to find the equation of a cubic function that passes through the origin. It also passes through the points $(1, 3), (2, 6),$ and $(-1, 10)$. </p>
<p>I have walked through many answers for similar questions that suggest to use a substitution method by subbing in all the points and writing in terms of variables. I have tried that but I don't really know where to take it from there or what variables to write it as. </p>
<p>If anyone could provide their working out for this problem it would be extremely enlightening. </p>
| Ranjeev Grewal | 484,668 | <p>the general cubic equation is $$y=ax^3+bx^2+cx+d.$$Plug in the coordinates of the points for x and y, and you end up with a system of four equations in four variables, namely $a, b, c$ and $d$. Hope that helps!</p>
|
2,655,178 | <p>I am asked to find the equation of a cubic function that passes through the origin. It also passes through the points $(1, 3), (2, 6),$ and $(-1, 10)$. </p>
<p>I have walked through many answers for similar questions that suggest to use a substitution method by subbing in all the points and writing in terms of variables. I have tried that but I don't really know where to take it from there or what variables to write it as. </p>
<p>If anyone could provide their working out for this problem it would be extremely enlightening. </p>
| Mostafa Ayaz | 518,023 | <p>Hint:</p>
<p>It is proper to use the <strong>Lagrange interpolation formula</strong>, as follows:$$f(x)=\sum_{cyc} y_1\,\dfrac{(x-x_2)(x-x_3)(x-x_4)}{(x_1-x_2)(x_1-x_3)(x_1-x_4)}$$</p>
|
2,655,178 | <p>I am asked to find the equation of a cubic function that passes through the origin. It also passes through the points $(1, 3), (2, 6),$ and $(-1, 10)$. </p>
<p>I have walked through many answers for similar questions that suggest to use a substitution method by subbing in all the points and writing in terms of variables. I have tried that but I don't really know where to take it from there or what variables to write it as. </p>
<p>If anyone could provide their working out for this problem it would be extremely enlightening. </p>
| csar | 446,038 | <p>This problem is a special case in which all known values differ by fixed <span class="math-container">$\Delta x$</span> (here <span class="math-container">$\Delta x = 1$</span>). This is a very common problem when you have discrete equispaced values and you want to interpolate to find the "exact" maximum or minimum value, between the endpoints. (You then assume that a cubic polynomial passes through these points, find the polynomial and then its extremum). It is also known as "cubic interpolation".</p>
<p>In this case, the cubic polynomial is
<span class="math-container">$$f(x) = a_0 + a_1\epsilon(x) + a_2\epsilon^2(x) + a_3\epsilon^3(x)$$</span>
where
<span class="math-container">$$ \epsilon = \frac{x-n\Delta x}{\Delta x},\quad n=-1,0,1,2$$</span>.
<span class="math-container">$\epsilon$</span> is actually shifting and scaling the polynomial, so that the known points sit at <span class="math-container">$-1, 0, 1, 2$</span>. In your case, <span class="math-container">$\Delta x=1$</span> and you can take <span class="math-container">$n=0$</span>, therefore
<span class="math-container">$$f(x) = a_0 + a_1 x + a_2x^2 + a_3x^3$$</span>
and
<span class="math-container">\begin{align}
a_0 & = f(0),\\
a_1 & = \frac{-2f(-1) - 3f(0) + 6f(1)-f(2)}{6},\\
a_2 & = \frac{f(-1) -2f(0) + f(1)}{2},\\
a_3 & = \frac{-f(-1) + 3f(0) - 3f(1) + f(2)}{6}.\\
\end{align}</span></p>
<p>Note that in the case where the known points are not <span class="math-container">$-1, 0, 1, 2$</span>, the formulas for <span class="math-container">$a_0,\ldots, a_3$</span> are similar, the only difference being that if the points are <span class="math-container">$x_{-1}, x_0, x_1, x_2$</span>, then in the above equations you just use <span class="math-container">$f(x_{-1})$</span> instead of <span class="math-container">$f(-1)$</span>, <span class="math-container">$f(x_0)$</span> instead of <span class="math-container">$f(0)$</span>, and so on.</p>
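<p>These coefficient formulas are easy to sanity-check against the question's data (a cubic through <span class="math-container">$(0,0)$</span>, <span class="math-container">$(1,3)$</span>, <span class="math-container">$(2,6)$</span>, <span class="math-container">$(-1,10)$</span>). The Python sketch below is only an illustration of the formulas above; the variable names are ad hoc, and exact rational arithmetic avoids rounding noise.</p>

```python
from fractions import Fraction as F

# Known values f(-1), f(0), f(1), f(2): cubic through the origin
# and the points (1, 3), (2, 6), (-1, 10) from the question.
fm1, f0, f1, f2 = 10, 0, 3, 6

# The coefficient formulas quoted above.
a0 = F(f0)
a1 = F(-2 * fm1 - 3 * f0 + 6 * f1 - f2, 6)
a2 = F(fm1 - 2 * f0 + f1, 2)
a3 = F(-fm1 + 3 * f0 - 3 * f1 + f2, 6)

def f(x):
    """Evaluate the interpolating cubic at x (exactly, as a Fraction)."""
    return a0 + a1 * x + a2 * x ** 2 + a3 * x ** 3

# The cubic must reproduce all four data points.
for x, y in [(-1, 10), (0, 0), (1, 3), (2, 6)]:
    assert f(x) == y

print(a0, a1, a2, a3)  # 0 -4/3 13/2 -13/6
```

<p>So the requested cubic is <span class="math-container">$f(x) = -\frac{4}{3}x + \frac{13}{2}x^2 - \frac{13}{6}x^3$</span>.</p>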
|
1,504,438 | <blockquote>
<p>the parallel sides of a trapezoid have lengths 7 cm and 15 cm. The two lower base angles are 30 and 60 degrees. The area of the trapezoid is..?</p>
</blockquote>
<p>Two 30-60-90 degree triangles form on each side of the trapezoid. If I determine the height of one triangle, I can easily calculate the area of the two triangles and rectangle. But to obtain the height I must first obtain another side length. <strong>So my question is: how do I obtain a side length of any one of the triangles?</strong></p>
<p><a href="https://i.stack.imgur.com/E9hZ0.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/E9hZ0.gif" alt="enter image description here"></a></p>
<p>Note that in the image, both triangles seem identical. I put it there for a little reference, while the real measurements/angles are based off the question.</p>
| André Nicolas | 6,312 | <p>Hint: Note that $h\cot (30^\circ)+h\cot(60^\circ)$ is equal to the difference $15-7$ of the parallel side lengths.</p>
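<p>Carrying the hint through numerically (helper names below are my own): solving <span class="math-container">$h\cot(30^\circ)+h\cot(60^\circ)=15-7$</span> for the height <span class="math-container">$h$</span>, then applying the standard trapezoid area formula:</p>

```python
import math

# Data from the question: parallel sides 7 and 15, base angles 30° and 60°.
b_short, b_long = 7, 15
ang1, ang2 = math.radians(30), math.radians(60)

# The hint: h*cot(30°) + h*cot(60°) equals the difference of the bases.
h = (b_long - b_short) / (1 / math.tan(ang1) + 1 / math.tan(ang2))

# Standard trapezoid area: (sum of parallel sides)/2 * height.
area = (b_short + b_long) / 2 * h

print(h, area)  # h = 2*sqrt(3) ≈ 3.464, area = 22*sqrt(3) ≈ 38.105
```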
|
1,504,438 | <blockquote>
<p>the parallel sides of a trapezoid have lengths 7 cm and 15 cm. The two lower base angles are 30 and 60 degrees. The area of the trapezoid is..?</p>
</blockquote>
<p>Two 30-60-90 degree triangles form on each side of the trapezoid. If I determine the height of one triangle, I can easily calculate the area of the two triangles and rectangle. But to obtain the height I must first obtain another side length. <strong>So my question is: how do I obtain a side length of any one of the triangles?</strong></p>
<p><a href="https://i.stack.imgur.com/E9hZ0.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/E9hZ0.gif" alt="enter image description here"></a></p>
<p>Note that in the image, both triangles seem identical. I put it there for a little reference, while the real measurements/angles are based off the question.</p>
| David | 119,775 | <p><strong>Hint</strong>. If the base angles are $30^\circ$ and $60^\circ$, then the trapezium can be visualised as a $30$–$60$–$90$ triangle with (horizontally placed) hypotenuse $15$, minus a similar triangle with hypotenuse $7$.</p>
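<p>For what it's worth, this decomposition checks out numerically. The sketch below assumes, as the hint implies, that the apex angle is <span class="math-container">$90^\circ$</span>, so a <span class="math-container">$30$</span>–<span class="math-container">$60$</span>–<span class="math-container">$90$</span> triangle with hypotenuse <span class="math-container">$H$</span> has legs <span class="math-container">$H/2$</span> and <span class="math-container">$H\sqrt{3}/2$</span>, hence area <span class="math-container">$H^2\sqrt{3}/8$</span>:</p>

```python
import math

def area_30_60_90(hyp):
    """Area of a 30-60-90 triangle with the given hypotenuse:
    legs hyp/2 and hyp*sqrt(3)/2, so area = hyp^2 * sqrt(3)/8."""
    return hyp ** 2 * math.sqrt(3) / 8

# Trapezium = big triangle (hypotenuse 15) minus the similar small one
# cut off by the parallel side of length 7.
area = area_30_60_90(15) - area_30_60_90(7)
print(area)  # 22*sqrt(3) ≈ 38.105, agreeing with the cot-based computation
```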
|
1,098,587 | <blockquote>
<p>Prove that for a bipartite graph $G$ on $n$ vertices the number of edges in $G$ is at most $\frac{n^2}{4}$. </p>
</blockquote>
<p>I used induction on $n$. </p>
<p>Induction hypothesis: Suppose for a bipartite graph with less than $n$ vertices the result holds true. </p>
<p>Now take a bipartite graph on $n$ vertices. Let $x,y$ be two vertices in $G$ such that an edge exists between $x$ and $y$. Now remove these two vertices from $G$ and call the resulting graph $G'$. $G'$ has at most $\frac{(n-2)^2}{4}$ edges. Add these two vertices back. Then the number of edges $G$ can have is at most </p>
<p>$$|E(G')|+d(x)+d(y)-1$$ </p>
<p>My question is in my proof I took $d(x) + d(y) \le n$, where $d(x)$ denotes the degree of vertex $x$. Can I consider $d(x)+d(y) \leq n$? I thought the maximum number of edges is obtained at the situation $K_{\frac n 2,\frac n 2}$</p>
| Asinomás | 33,907 | <p>The sum of the degrees of vertices $x$ and $y$ is indeed less than or equal to $n$. One reason for this is that if a vertex $v$ is adjacent to $x$ it cannot be adjacent to $y$ since $y$ and $v$ would be in the same part.</p>
<hr>
<p>I would like to share my proof, please tell me what you think of it: The bipartite graph on $n$ vertices with the most edges is clearly a complete bipartite graph, otherwise we could take that graph, add an edge and we would have a graph with more edges.</p>
<p>In a complete bipartite graph with parts of size $k,n-k$ the vertices in the part with $k$ vertices have degree $n-k$ and the vertices in the part with $n-k$ vertices have degree $k$. The sum of the degrees is then $2k(n-k)$, so by the handshaking lemma the number of edges is $k(n-k)$.</p>
<p>Now write $k$ as $\frac{n}{2}-j$. Then $k(n-k)=(\frac{n}{2}-j)(\frac{n}{2}+j)=\frac{n^2}{4}-j^2\leq \frac{n^2}{4}$.</p>
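<p>The final inequality can also be confirmed by brute force in exact integer arithmetic (the cutoff below is an arbitrary choice):</p>

```python
# Check k*(n - k) <= n^2/4 for every split of n vertices into two parts,
# with equality only when n is even and k = n/2.
for n in range(1, 200):
    for k in range(n + 1):
        edges = k * (n - k)          # edge count of the complete bipartite K_{k, n-k}
        assert 4 * edges <= n * n    # i.e. k(n-k) <= n^2/4, kept in integers
        if 4 * edges == n * n:
            assert n % 2 == 0 and k == n // 2
print("bound verified for all n < 200")
```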
|
1,098,587 | <blockquote>
<p>Prove that for a bipartite graph $G$ on $n$ vertices the number of edges in $G$ is at most $\frac{n^2}{4}$. </p>
</blockquote>
<p>I used induction on $n$. </p>
<p>Induction hypothesis: Suppose for a bipartite graph with less than $n$ vertices the result holds true. </p>
<p>Now take a bipartite graph on $n$ vertices. Let $x,y$ be two vertices in $G$ such that an edge exists between $x$ and $y$. Now remove these two vertices from $G$ and call the resulting graph $G'$. $G'$ has at most $\frac{(n-2)^2}{4}$ edges. Add these two vertices back. Then the number of edges $G$ can have is at most </p>
<p>$$|E(G')|+d(x)+d(y)-1$$ </p>
<p>My question is in my proof I took $d(x) + d(y) \le n$, where $d(x)$ denotes the degree of vertex $x$. Can I consider $d(x)+d(y) \leq n$? I thought the maximum number of edges is obtained at the situation $K_{\frac n 2,\frac n 2}$</p>
| bof | 111,012 | <p>Suppose $p,q$ are nonnegative integers with $p+q=n,$ and that $K_{p,q}$ has the maximum number of edges among all bipartite graphs with $n$ vertices.</p>
<p>If we move one vertex from the side with $p$ vertices to the side with $q$ vertices, we lose $q$ edges and gain $p-1$ new edges. Since the number of edges was already maximized, we must have $p-1\le q,$ that is, $p-q\le1.$ Symmetrically, we also have $q-p\le1,$ and so $|p-q|\le1.$</p>
<p>If $n$ is even, then from $p+q=n$ and $|p-q|\le1$ it follows that $p=q=\frac n2,$ and $pq=\frac{n^2}4=\left\lfloor\frac{n^2}4\right\rfloor.$</p>
<p>If $n$ is odd, then $\{p,q\}=\left\{\frac{n-1}2,\frac{n+1}2\right\},$ and $pq=\frac{n^2-1}4=\left\lfloor\frac{n^2}4\right\rfloor.$</p>
|
17,819 | <p>$\mathfrak{sl}_2(\mathbb{C})$ is usually given a basis $H, X, Y$ satisfying $[H, X] = 2X, [H, Y] = -2Y, [X, Y] = H$. What is the origin of the use of the letter $H$? (It certainly doesn't stand for "Cartan.") My guess, based on similarities between these commutator relations and ones I have seen mentioned when people talk about physics, is that $H$ stands for "Hamiltonian." Is this true? Even if it's not, is there a connection? </p>
| Jim Humphreys | 4,231 | <p>Both terminology and notation in Lie theory have varied over time, but as far
as I know the letter H comes up naturally (in various fonts) as the next
letter after G in the early development of Lie groups. Lie algebras came
later, being viewed initially as "infinitesimal groups" and having labels like
those of the groups but in lower case Fraktur. Many traditional names are
not quite right: "Cartan subalgebras" and such arose in work of Killing,
while the "Killing form" seems due to Cartan (as explained by Armand Borel,
who accidentally introduced the terminology). It would take a lot of work to
track the history of all these things. In his book, Thomas Hawkins is more
concerned about the history of ideas. Anyway, both (h,e,f) and (h,x,y) are
widely used for the standard basis of a 3-dimensional simple Lie algebra,
but I don't know where either of these first occurred. Certainly h belongs to
a Cartan subalgebra.</p>
<p>My own unanswered question along these lines is the source of the now
standard lower case Greek letter rho for the half-sum of positive
roots (or sum of fundamental weights). There was some competition from
delta, but other kinds of symbols were also used by Weyl, Harish-Chandra, ....</p>
<p>ADDED: Looking back as far as Weyl's four-part paper in Mathematische Zeitschrift (1925-1926), but not as far back as E. Cartan's Paris thesis, I can see clearly in part III the prominence of the infinitesimal "subgroup" $\mathfrak{h}$ in the structure theory of infinitesimal groups which he laid out there following Lie, Engel, Cartan. (Part IV treats his character formula
using integration over what we would call the compact real form of the semisimple Lie group in the background. But part III covers essentially the Lie algebra structure.) The development starts with a <em>solvable</em> subgroup $\mathfrak{h}$ and its "roots" in a Fitting decomposition of a general Lie algebra, followed by Cartan's criterion for semisimplicity and then the more familiar root space decomposition. Roots start out more generally as "roots" of the characteristic polynomial of a "regular" element. Jacobson follows a similar line in his 1962 book, reflecting his early visit at IAS and the lecture notes he wrote there inspired by Weyl. </p>
<p>In Weyl you also see an equation of the type $[h e_\alpha] = \alpha \cdot e_\alpha$, though his notation is quite variable in different places and sometimes favors lower case, sometimes upper case for similar objects. Early in the papers you see an infinitesimal group $\mathfrak{a}$ with subgroup $\mathfrak{b}$.
All of which confirms my original view that H is just the letter of the alphabet following G, as often encountered in modern group theory. (No mystery.)</p>
|
2,885,441 | <p>Prove that</p>
<p>$$\sum_{n=-\infty}^{\infty} \frac{\cos n}{n^2+1} = \frac{\pi \cosh (\pi -1)}{\sinh \pi }$$</p>
<p>I already have a solution using Fourier expansion of the exponential function. I'm interested in a complex analysis approach. It would be natural to consider the function</p>
<p>$$f(z) = \frac{\pi \cot \pi z \cos z}{z^2+1}$$</p>
<p>and integrate it around an appropriate contour $\Gamma_N$ ( say a square ) The residues at $z=i$ and $z=-i$ are equal;</p>
<p>$$\mathfrak{Res}_{z=i} f(z) = \mathfrak{Res}_{z=-i} f(z) = - \frac{\pi \cosh 1 \coth \pi}{2}$$</p>
<p>Thus,</p>
<p>$$\frac{1}{2\pi i } \oint \limits_{\Gamma_N} f(z) \, \mathrm{d}z = \sum_{n=-N}^{N} \mathfrak{Res}_{z=n} f(z) + \mathfrak{Res}_{z=i} f(z) + \mathfrak{Res}_{z=-i} f(z)$$</p>
<p>If we let $N \rightarrow +\infty$ the contour would go to $0$ and we would pick the result. However, this is not the case. Something's missing. The main question is why is that? Can you suggest an appropriate kernel function as well as a contour? </p>
| pisco | 257,943 | <p>Hint: try to integrate
$$f(z)=\frac{{\pi {e^{iz}}}}{{({z^2} + 1)({e^{2\pi iz}} - 1)}}$$
around the rectangular contour with vertices $\pm R \pm Ri$, with $R$ a half integer. You might want to check carefully that the integral indeed tends to $0$. </p>
<p>The sum of residues at $z=\pm i$ is $\frac{i}{2}\frac{{\pi \cosh (\pi - 1)}}{{\sinh \pi }}$, giving
$$ - \frac{i}{2}\sum\limits_{n = - \infty }^\infty {\frac{{{e^{in}}}}{{{n^2} + 1}}} + \frac{i}{2}\frac{{\pi \cosh (\pi - 1)}}{{\sinh \pi }} = 0$$</p>
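<p>Before chasing the contour estimate, it is reassuring to confirm the claimed value numerically; a partial-sum sketch (the cutoff $N$ is arbitrary, and the tail is $O(1/N)$):</p>

```python
import math

N = 100_000
# Symmetric partial sum of cos(n)/(n^2 + 1); the n and -n terms coincide,
# and the n = 0 term contributes 1.
s = 1.0 + 2.0 * sum(math.cos(n) / (n * n + 1) for n in range(1, N + 1))

closed = math.pi * math.cosh(math.pi - 1) / math.sinh(math.pi)
print(s, closed)  # both ≈ 1.1739
```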
|
2,885,441 | <p>Prove that</p>
<p>$$\sum_{n=-\infty}^{\infty} \frac{\cos n}{n^2+1} = \frac{\pi \cosh (\pi -1)}{\sinh \pi }$$</p>
<p>I already have a solution using Fourier expansion of the exponential function. I'm interested in a complex analysis approach. It would be natural to consider the function</p>
<p>$$f(z) = \frac{\pi \cot \pi z \cos z}{z^2+1}$$</p>
<p>and integrate it around an appropriate contour $\Gamma_N$ ( say a square ) The residues at $z=i$ and $z=-i$ are equal;</p>
<p>$$\mathfrak{Res}_{z=i} f(z) = \mathfrak{Res}_{z=-i} f(z) = - \frac{\pi \cosh 1 \coth \pi}{2}$$</p>
<p>Thus,</p>
<p>$$\frac{1}{2\pi i } \oint \limits_{\Gamma_N} f(z) \, \mathrm{d}z = \sum_{n=-N}^{N} \mathfrak{Res}_{z=n} f(z) + \mathfrak{Res}_{z=i} f(z) + \mathfrak{Res}_{z=-i} f(z)$$</p>
<p>If we let $N \rightarrow +\infty$ the contour would go to $0$ and we would pick the result. However, this is not the case. Something's missing. The main question is why is that? Can you suggest an appropriate kernel function as well as a contour? </p>
| Felix Marin | 85,343 | <p><span class="math-container">$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,}
\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,}
\newcommand{\ic}{\mathrm{i}}
\newcommand{\mc}[1]{\mathcal{#1}}
\newcommand{\mrm}[1]{\mathrm{#1}}
\newcommand{\pars}[1]{\left(\,{#1}\,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,}
\newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}}
\newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$</span>
With the <a href="https://dlmf.nist.gov/1.8.E14" rel="nofollow noreferrer">Poisson's Sumation Formula</a>:
<span class="math-container">\begin{align}
&\bbox[10px,#ffd]{\sum_{n = -\infty}^{\infty}{\cos\pars{n} \over
n^{2} + 1}} =
\sum_{n = -\infty}^{\infty}\int_{-\infty}^{\infty}{\cos\pars{t} \over t^{2} + 1}\expo{-2\pi\ic nt}\,\dd t
\\[5mm] & =
{1 \over 2}\sum_{n = -\infty}^{\infty}\bracks{\int_{-\infty}^{\infty}
{\expo{\pars{1 - 2n\pi}\ic t} \over t^{2} + 1}\,\dd t +
\int_{-\infty}^{\infty}
{\expo{\pars{-1 - 2n\pi}\ic t} \over t^{2} + 1}\,\dd t}
\end{align}</span></p>
<hr>
However,
<span class="math-container">$\ds{\left.\int_{-\infty}^{\infty}
{\expo{\ic kt} \over t^{2} + 1}\,\dd t\,\right\vert_{\ k\ \in\ \mathbb{R}} = \pi\expo{-\verts{k}}}$</span>.
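<p>(That standard integral can be spot-checked by crude numerical quadrature; in the sketch below the truncation radius and step size are arbitrary choices, and the odd imaginary part of the integrand is dropped by symmetry.)</p>

```python
import math

def fourier_lorentzian(k, R=1000.0, h=0.01):
    """Trapezoidal estimate of the integral of e^{ikt}/(t^2+1) over [-R, R].
    Only the even real part cos(kt)/(t^2+1) is integrated; the odd
    imaginary part vanishes by symmetry."""
    n = int(2 * R / h)
    total = 0.0
    for i in range(n + 1):
        t = -R + i * h
        w = 0.5 if i in (0, n) else 1.0  # trapezoid endpoint weights
        total += w * math.cos(k * t) / (t * t + 1)
    return total * h

for k in (0.0, 1.0, -2.0):
    # columns: k, numerical estimate, pi * exp(-|k|)
    print(k, fourier_lorentzian(k), math.pi * math.exp(-abs(k)))
```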
<hr>
Then,
<span class="math-container">\begin{align}
&\bbox[10px,#ffd]{\sum_{n = -\infty}^{\infty}{\cos\pars{n} \over
n^{2} + 1}} =
{1 \over 2}\,\pi\sum_{n = -\infty}^{\infty}\bracks{%
\expo{-\verts{2n\pi - 1}} + \expo{-\verts{2n\pi + 1}}}
\\[5mm] & =
\bbx{\pi\cosh\pars{\pi - 1} \over \sinh\pars{\pi}} \approx 1.1739
\\ &
\end{align}</span>
|
1,041,623 | <p>I have been suggested to read the <a href="http://press.princeton.edu/chapters/gowers/gowers_VIII_6.pdf"><em>Advice to a Young Mathematician</em> section </a> of the <em>Princeton Companion to Mathematics</em>, the short paper <a href="http://alumni.media.mit.edu/~cahn/life/gian-carlo-rota-10-lessons.html"><em>Ten Lessons I wish I had been Taught</em></a> by Gian-Carlo Rota, and the <a href="http://terrytao.wordpress.com/career-advice/"><em>Career Advice</em></a> section of Terence Tao's blog, and I am amazed by the intelligence of the pieces of advice given in these pages. </p>
<p>Now, I ask to the many accomplished mathematicians who are active on this website if they would mind adding some of their own contributions to these already rich set of advice to novice mathematicians. </p>
<p>I realize that this question may be seen as extremely opinion-based. However, I hope that it will be well-received (and well-answered) because, as Timothy Gowers put it,</p>
<blockquote>
<p>"The most important thing that a young mathematician needs to learn is
of course mathematics. However, it can also be very valuable to learn
from the experiences of other mathematicians. The five contributors to
this article were asked to draw on their experiences of mathematical
life and research, and to offer advice that they might have liked to
receive when they were just setting out on their careers."</p>
</blockquote>
| Georges Elencwajg | 3,217 | <p>My advice would be:<br>
$\bullet $ Do many calculations<br>
$\bullet \bullet$ Ask yourself concrete questions whose answer is a number.<br>
$\bullet \bullet \bullet$ Learn a reasonable number of formulas by heart. (Yes, I know this is not fashionable advice!)<br>
$\bullet \bullet \bullet \bullet$ Beware the illusion that nice general theorems are the ultimate goal in your subject. </p>
<p>I have answered many questions tagged <em>algebraic geometry</em> on this site and I was struck by the contrast between the excellent quality of the beginners in that field and the nature of their questions: they would know and really understand abstract results (like, say, the equivalence between the category of commutative rings and that of affine schemes) but would have difficulties answering more down-to-earth questions like: "how many lines cut four skew lines in three-dimensional projective space ?" or "give an example of a curve of genus $17$". </p>
<p>In summary the point of view of some quantum physicists toward the philosophy of their subject<br>
<strong>Shut up and calculate !</strong> contains more than a grain of truth for mathematicians too (although it could be formulated more gently...) </p>
<p><strong>Nota Bene</strong><br>
The above exhortation is probably <a href="http://www.physicsandmore.net/resources/Shutupandcalculate.pdf" rel="noreferrer">due to David Mermin</a>, although it is generally misattributed to Richard Feynman. </p>
<p><strong>Edit</strong><br>
Since @Mark Fantini asks for more advice in his comment below, here are some more (maybe too personal!) thoughts:<br>
$\bigstar$ Learn mathematics pen in hand but after that go for a stroll and think about what you have just learned. This helps classifying new material in the brain, just as sleep is well known to do.<br>
$\bigstar \bigstar$ Go to a tea-room with a mathematician friend and scribble mathematics for a few hours in a relaxed atmosphere.<br>
I am very lucky to have had such a friend since he and I were beginners and we have been working together in public places ( also in our shared office, of course) ever since.<br>
$\bigstar \bigstar \bigstar$ If you don't understand something, teach it!<br>
I had wanted to learn scheme theory for quite a time but I backed down because I feared the subject.<br>
One semester I agreed to teach it to graduate students and since I had burned my vessels I really had to learn the subject in detail and invent simple examples to see what was going on.<br>
My students did not realize that I was only one or two courses ahead of them and my teaching was maybe better in that the material taught was as new and difficult for me as it was for them.<br>
$\bigstar \bigstar \bigstar \bigstar$ <strong>Last not least: use this site!</strong><br>
Not everybody has a teaching position, but all of us can answer here.<br>
I find using this site and MathOverflow the most efficient way of learning or reviewing mathematics . The problems posed are often quite ingenious, incredibly varied and the best source for questions necessitating explicit calculations (see points $\bullet$ and $\bullet \bullet$ above). </p>
<p><strong>New Edit (December 9th)</strong><br>
Here are a few questions posted in the last 12 days which I find are in the spirit of what I recommend in my post: <a href="https://math.stackexchange.com/questions/1057967/does-the-projective-fermat-curve-have-singular-points/1058071#1058071">a)</a>, <a href="https://math.stackexchange.com/questions/1057642/describe-the-topology-of-spec-mathbbrx/1057864#1057864">b)</a>, <a href="https://math.stackexchange.com/questions/1057157/non-examples-for-krull-schmidt-azumaya/1057415#1057415">c)</a>, <a href="https://math.stackexchange.com/questions/1056533/trouble-showing-flatness/1057134#1057134">d)</a>, <a href="https://math.stackexchange.com/questions/1056191/intersection-numbers-of-plane-curves-constructing-a-counterexample/1056425#1056425">e)</a>, <a href="https://math.stackexchange.com/questions/1055302/m-times-n-orientable-if-and-only-if-m-n-orientable/1055522#1055522">f)</a>, <a href="https://math.stackexchange.com/questions/1046640/problem-in-proving-that-mathbba2-is-not-homeomorphic-to-mathbbp2/1047028#1047028">g)</a>, <a href="https://math.stackexchange.com/questions/1050035/how-to-picture-a-projective-variety/1050229#1050229">h)</a>. </p>
<p><strong>Newer Edit(December 17th)</strong><br>
<a href="https://math.stackexchange.com/q/1070860/3217">Here</a> is a fantastic question, brilliantly illustrating how to aggressively tackle mathematics, asked a few hours ago by Clara: very concrete, low-tech and naïve but quite disconcerting.<br>
This question also seems to me absolutely original : I challenge everybody to find it in any book or any on-line document !</p>
|
1,041,623 | <p>I have been suggested to read the <a href="http://press.princeton.edu/chapters/gowers/gowers_VIII_6.pdf"><em>Advice to a Young Mathematician</em> section </a> of the <em>Princeton Companion to Mathematics</em>, the short paper <a href="http://alumni.media.mit.edu/~cahn/life/gian-carlo-rota-10-lessons.html"><em>Ten Lessons I wish I had been Taught</em></a> by Gian-Carlo Rota, and the <a href="http://terrytao.wordpress.com/career-advice/"><em>Career Advice</em></a> section of Terence Tao's blog, and I am amazed by the intelligence of the pieces of advice given in these pages. </p>
<p>Now, I ask to the many accomplished mathematicians who are active on this website if they would mind adding some of their own contributions to these already rich set of advice to novice mathematicians. </p>
<p>I realize that this question may be seen as extremely opinion-based. However, I hope that it will be well-received (and well-answered) because, as Timothy Gowers put it,</p>
<blockquote>
<p>"The most important thing that a young mathematician needs to learn is
of course mathematics. However, it can also be very valuable to learn
from the experiences of other mathematicians. The five contributors to
this article were asked to draw on their experiences of mathematical
life and research, and to offer advice that they might have liked to
receive when they were just setting out on their careers."</p>
</blockquote>
| SQL Injection | 211,243 | <p>$\bullet$ Patience!</p>
<p>$\bullet\bullet$ Persistence.</p>
<p>$\bullet\bullet\bullet$ Work hard.</p>
<p>$\bullet\bullet\bullet\bullet$ Learn things very well. (in detail)</p>
<p>$\bullet\bullet\bullet\bullet\bullet$ Ask yourself lots of questions, even stupid ones! (when does this lemma work? when it doesn't? is there a generalization of it? is there a similar lemma about ...)</p>
<p>$\bullet\bullet\bullet\bullet\bullet\bullet$ Don’t base career decisions on glamour or fame.</p>
<p>$\bullet\bullet\bullet\bullet\bullet\bullet\bullet$ Think about the “big picture”.</p>
<p>$\bullet\bullet\bullet\bullet\bullet\bullet\bullet\bullet$ Professional mathematics is not a sport (in sharp contrast to mathematics competitions).</p>
|
331,543 | <blockquote>
<p>Given a fair six-sided die. Find the probability generating functions for the number of the throw on which the rth six appears. Hence find the probability that the fifth six occurs on the 20th throw. </p>
</blockquote>
<p>For the rth six, is it true that $X \sim \operatorname{Geo}((1/6)^r)$, so the PGF is $(1/6)^rt+(1/6)^r\times(1-(1/6)^r)t^2$ etc.? And for the 20th throw do I look for the coefficient of $t^{20}$?</p>
| user65384 | 65,384 | <p>See "negative binomial". With parameters r and p, a negative binomial random variable represents the trial number of the rth success when each trial is independently a success with probability p.</p>
<p>r = 5<br>
p = 1/6<br>
i = 20</p>
<p>$$P(i=20) = \binom{19}{4}\left(\frac{1}{6}\right)^5\left(\frac{5}{6}\right)^{15}$$</p>
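<p>The number can be evaluated exactly with nothing but the Python standard library (<code>comb</code> and <code>Fraction</code>):</p>

```python
from fractions import Fraction
from math import comb

# P(the 5th six occurs on the 20th roll): exactly 4 sixes among the
# first 19 rolls, then a six on roll 20.
p = comb(19, 4) * Fraction(1, 6) ** 5 * Fraction(5, 6) ** 15

print(p, float(p))  # ≈ 0.0324
```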
|
331,543 | <blockquote>
<p>Given a fair six-sided die. Find the probability generating functions for the number of the throw on which the rth six appears. Hence find the probability that the fifth six occurs on the 20th throw. </p>
</blockquote>
<p>For the rth six, is it true that $X \sim \operatorname{Geo}((1/6)^r)$, so the PGF is $(1/6)^rt+(1/6)^r\times(1-(1/6)^r)t^2$ etc.? And for the 20th throw do I look for the coefficient of $t^{20}$?</p>
| Martin Hansen | 646,413 | <p>A straightforward application of the Negative Binomial Distribution, where the number of trials <span class="math-container">$n$</span> is fixed and made up of <span class="math-container">$k$</span> failures and <span class="math-container">$r$</span> successes, the <span class="math-container">$r$</span>th success occurring on the <span class="math-container">$n$</span>th trial:
<span class="math-container">$$X \sim NB(k;r,p)=\binom {k+r-1}{r-1}p^r(1-p)^k$$</span></p>
<p><span class="math-container">$k$</span> is the number of failures within the <span class="math-container">$n$</span> trials, the number of times a six was not rolled</p>
<p><span class="math-container">$r$</span> is the number of successes within the <span class="math-container">$n$</span> trials, the number of times a six was rolled</p>
<p><span class="math-container">$p$</span> is the probability of success, of rolling a six
<span class="math-container">$$X \sim NB(15;5,\frac{1}{6})=\binom {19}{4}\bigr(\frac{1}{6}\bigr)^5\bigr(\frac{5}{6}\bigr)^{15}$$</span></p>
<p>With a little effort, Marko Riedel's answer could be generalised to prove the result I've quoted</p>
|
979,947 | <p>If $(A, B, C)$ are distinct integers $> 1$, and $$f(A, B, C) = \frac{\frac{A^2-1}{A} + \frac{B^2-1}{B}}{\frac{C^2-1}{C}},$$ then for what (if any) triplets $(A, B, C)$ is $f(A, B, C)$ an integer?</p>
<p>UPDATE: I've done some more work and have come up with the following: Based on the equations I combined to arrive at $f(A,B,C)$, it follows that: $$f(A,B,C)>2\Rightarrow A>B>C>1$$ and $$f(A,B,C)=2\Rightarrow A>C>B>1$$</p>
<p>My further work: </p>
<p>Let $f(A,B,C)=k$ where $k>1$ is an integer. Rewrite $f(A,B,C)$ as $$k=\frac{A-\frac{1}{A}+B-\frac{1}{B}}{C-\frac{1}{C}}$$</p>
<p>Thus $$A-\frac{1}{A}+B-\frac{1}{B}-k(C-\frac{1}{C})=A-\frac{1}{A}+B-\frac{1}{B}-kC+\frac{k}{C}=0$$</p>
<p>Note that $\frac{1}{A}+\frac{1}{B}<1$ for all integer $A,B$ such that $A>B>1$.</p>
<p>Now we consider three cases: $1)$ $k<C$, $2)$ $k=C$, $3)$ $k>C$.</p>
<p>Case $1$: $k<C$</p>
<p>Since $\frac{k}{C}<1$ it follows that these equations hold:$$A+B=kC$$ $$\frac{1}{A}+\frac{1}{B}=\frac{A+B}{AB}=\frac{k}{C}$$ </p>
<p>By substitution $$\frac{kC}{AB}=\frac{k}{C}\Rightarrow AB=C^2$$</p>
<p>This is only true if $A=np^2$, $B=nq^2$, and $C=npq$ for integers $n,p,q\geq 1$ and $p>q$. From this we have $$p^2>pq; pq>q^2\Rightarrow p^2>pq>q^2\Rightarrow np^2>npq>nq^2\Rightarrow A>C>B>1\Rightarrow k\leq2$$ This last condition, $k\leq2$, follows from the conditions stated above and leaves us with two possibilities: $k=2$ or $k=1$.</p>
<p>If $k=1$ we have $A+B=C$ which violates the condition $A>C$.</p>
<p>If $k=2$ we have $A+B=2C\Rightarrow \frac{A+B}{2}=C$. By substitution we have $$\frac{A+B}{AB}=\frac{2}{\frac{A+B}{2}}$$ $$4AB=(A+B)^2\Rightarrow A^2-2AB+B^2=0\Rightarrow (A-B)^2=0$$</p>
<p>However, this leads to the result that $A=B$ which violates that stated conditions.</p>
<p>Case $1$, $k<C$, fails.</p>
<p>Case $2$: $k=C$</p>
<p>Since $\frac{k}{C}=1$ it follows that $$A+B+1=C^2$$ $$\frac{1}{A}+\frac{1}{B}=0$$</p>
<p>This later equation is clearly false and thus Case $2$, $k=C$ fails.</p>
<p>Case $3$: $k>C$</p>
<p>Let $p,q$ be integers with $C>q\geq0$ and $p>1$. From this we can write $k$ as $k=pC+q$.</p>
<p>Note that $q=0 \Rightarrow A+B+p=C^2$ and $\frac{1}{A}+\frac{1}{B}=0$ (as above) which is clearly a contradiction, so $q\geq1$.</p>
<p>Substituting in $k$ above we have $$A-\frac{1}{A}+B-\frac{1}{B}-pC^2-qC+p+\frac{q}{C}=0$$ </p>
<p>Applying the same logic as above we have the following equations $$\frac{A+B}{AB}=\frac{q}{C}$$ $$A+B=pC^2+qC-p$$</p>
<p>By substitution we have the following system of equations (one cubic and one quadratic) $$pC^3+qC^2-pC=qAB\Rightarrow pC^3+qC^2-pC-qAB=0$$ $$pC^2+qC-p-A-B=0$$</p>
<p>Since at least one $C$ that works must solve both equations we denote the roots of the cubic as $x_1,x_2,x_3$ and the roots of the quadratic as $x_1,y_2$.</p>
<p>By Vieta's formulas we have $$x_1+x_2+x_3=-\frac{q}{p}$$ $$x_1 x_2+x_1 x_3+x_2 x_3=-1$$ $$x_1 x_2 x_3=\frac{qAB}{p}$$</p>
<p>and $$x_1+y_2=-\frac{q}{p}$$ $$x_1 y_2= -\frac{p+A+B}{p}$$</p>
<p>From the first equations (setting $x_1+x_2+x_3=x_1+y_2$) we have $y_2=x_2+x_3$.</p>
<p>Thus $$x_1 y_2=x_1 x_2+x_1 x_3=-\frac{p+A+B}{p}\Rightarrow x_2 x_3-\frac{p+A+B}{p}=-1\Rightarrow x_2 x_3=\frac{A+B}{p}$$ $$x_2 x_3=\frac{A+B}{p}\Rightarrow x_1\frac{A+B}{p}=\frac{qAB}{p}\Rightarrow x_1=\frac{qAB}{A+B}$$</p>
<p>Since $x_1=C$ and $B>C$ we have $$B>\frac{qAB}{A+B}\Rightarrow B^2>(q-1)AB$$</p>
<p>Since $A>B$ it follows $AB>B^2$ and so $q-1<1\Rightarrow q<2$. Since $q\neq0$ (see above) we have $q=1$ and so $C=\frac{AB}{A+B}$.</p>
<p>Substituting for $C$ in the quadratic (although the cubic has the same result) we have $$0=p(\frac{AB}{A+B})^2+\frac{AB}{A+B}-p-A-B$$ Which, after a fair bit of rearranging, becomes $$p=\frac{(A+B)^3-AB(A+B)}{(AB)^2-(A+B)^2}$$ Thus $f(A,B,C)$ is an integer (with $A,B,C$ constrained as above) iff $$p=\frac{(A+B)^3-AB(A+B)}{(AB)^2-(A+B)^2}$$ has integer solutions $p>0$ and $A>B>2$.</p>
<p>Any help with solving this last bit would be greatly appreciated. So far guesswork and Wolfram Alpha have failed to produce results.</p>
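<p>As a last resort, a brute-force scan can at least gather evidence. The sketch below (the bound and names are arbitrary choices of mine; a run of it proves nothing beyond the range scanned) reports every pair $A>B>2$ up to a cutoff for which $p$ comes out a positive integer:</p>

```python
from fractions import Fraction

def p_of(A, B):
    """p = ((A+B)^3 - AB(A+B)) / ((AB)^2 - (A+B)^2), as an exact rational."""
    s, m = A + B, A * B
    return Fraction(s ** 3 - m * s, m * m - s * s)

# Scan A > B > 2 up to an arbitrary cutoff; keep pairs where p is a
# positive integer (denominator 1 after reduction).
hits = [(A, B, q)
        for A in range(4, 301)
        for B in range(3, A)
        if (q := p_of(A, B)).denominator == 1 and q > 0]

print(hits)  # any hits found below the cutoff (an empty list means none)
```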
| Mastombord | 185,395 | <p>Your proof is right. In 4 cases out of 5 you get $-1/4$; in 1 case out of 5 you get $+1$. Mathematically speaking, this means that in the long run it makes no difference whether you guess or leave an answer blank. But that means guessing is exactly as good, not at least as good: on a single test where you don't know any answers, it might turn out worse or better.</p>
<p>Following this line of thought, I can think of some cases where it might actually be wise to guess: when your only chance to pass is luck. But you are usually not going to be in that situation.</p>
|
278,637 | <p>After spending more than 1 hr on this, and looking at many questions, I give up as I am not able to figure a solution.</p>
<p>I have my current package in the standard location given by <code>FileNameJoin[{$UserBaseDirectory, "Applications"}]</code> which on windows is</p>
<pre><code>C:\Users\Owner\AppData\Roaming\Mathematica\Applications
</code></pre>
<p>My package is <code>nma</code>, so I have in the above the standard set up of <code>nma.m</code> and <code>kernel\init.m</code> where init.m loads all the files using <code>Get</code>.</p>
<pre><code>C:\Users\Owner\AppData\Roaming\Mathematica\Applications\nma\nma.m
C:\Users\Owner\AppData\Roaming\Mathematica\Applications\nma\A.m
C:\Users\Owner\AppData\Roaming\Mathematica\Applications\nma\B.m
C:\Users\Owner\AppData\Roaming\Mathematica\Applications\nma\kernel\init.m
</code></pre>
<p>The above is all working fine. I can do</p>
<pre><code> <<nma`
</code></pre>
<p>From any notebook, and all the files in the package are loaded just fine.</p>
<p>But I do not like to keep my software on the C drive. My backup software only backups another drive. So I wanted to move <code>C:\Users\Owner\AppData\Roaming\Mathematica\Applications\nma\</code> to say <code>G:\</code> drive and the whole tree to say <code>G:\mathematica_version\code\nma\</code>. But when I did this, and made sure to open the notebook in same folder and made sure to do <code>SetDirectory[NotebookDirectory[]]</code> so that current directory is the above where all the files including <code>kernel</code> folder are, now when I do</p>
<pre><code> <<nma`
</code></pre>
<p>It no longer loads all the files in the package, it only loads <code>nma.m</code>, because I assume it does not use <code>init.m</code> in the <code>kernel</code> folder anymore, where <code>init.m</code> was loading all the other .m files.</p>
<p>It seems <code>application_name/kernel/init.m</code> is only used when the package lives in the standard location <code>C:\Users\Owner\AppData\Roaming\Mathematica\Applications</code> ? Is this true?</p>
<p><strong>My question is</strong>, how to copy <code>Applications\nma\...</code> from C drive to any other location and have</p>
<pre><code><<nma`
</code></pre>
<p>work the same as before? i.e. as if the package was in the standard location?</p>
<p>I tried other things, like <code>AppendTo[$Path,"G:\\mathematica_version\\code\\nma"]</code>, but this had no effect.</p>
<p>So currently after moving the package to the different location, I have to now manually do a <code>Get</code> on each file. This is something that <code>init.m</code> was doing before. i.e. I am doing this now</p>
<pre><code> SetDirectory[NotebookDirectory[]]
Get["nma.m"]
Get["A.m"]
Get["B.m"]
etc...
</code></pre>
<p>for each .m file. I would prefer to have init.m do this as before if possible.</p>
<p><strong>Update</strong></p>
<p>I also tried the following. I changed <code>$UserBaseDirectory</code> to point to the location on the other disk, removed the <code>Application\</code> folder, and moved it to the new location. But this did not work. When I did</p>
<pre><code><<nma`
</code></pre>
<p>It did not load the package. To change <code>$UserBaseDirectory</code> I had to do the following</p>
 <pre><code>Unprotect[$UserBaseDirectory]
$UserBaseDirectory = "new path here"
Protect[$UserBaseDirectory]
</code></pre>
<p>So I copied the Application folder back to where it was. I thought that maybe moving the whole Application tree and changing <code>$UserBaseDirectory</code> would make it work. Maybe I did not do it right; I looked at the option inspector and did not see where <code>$UserBaseDirectory</code> is defined. Is it OK to change <code>$UserBaseDirectory</code> to a new location? If so, what is the correct way to do it?</p>
| lericr | 84,894 | <p>There is a built-in function for checking whether an expression is atomic: <code>AtomQ</code>. So, if your definition of atomic matches Mathematica's, you could do something like:</p>
<pre><code>Scalarize[list_List] :=
With[
{flat = Flatten[list]},
If[
MatchQ[flat, {_?AtomQ}],
First@flat,
list]];
Scalarize[other_] := other;
</code></pre>
<p>You don't really need the <code>With</code>, I just don't like repeating expressions.</p>
<p>Your examples are all with numbers, so if <code>AtomQ</code> doesn't match your semantics, maybe <code>NumberQ</code> would.</p>
<p>Or, based on your update, maybe some combination of <code>NumberQ</code> and <code>StringQ</code>.</p>
|
3,133,831 | <p>I know that supremum means least upper bound. If I have a sequence of events, <span class="math-container">$\{A_n\}_{n=1}^\infty$</span></p>
<p>then <span class="math-container">$$\limsup_{n\rightarrow \infty} A_n = \lim_{n\rightarrow \infty} \sup_{j\geq n} A_j$$</span></p>
<p>I'm having trouble understanding this statement: </p>
<p>"The supremum of a collection of elements in a partially ordered set is its least upper bound, so <span class="math-container">$\sup_{j\geq n} A_j$</span> should be a set and it should hold that <span class="math-container">$A_j \subset \sup_{j\geq n} A_j$</span> for all <span class="math-container">$j \geq n $</span>. Because the supremum should also be the smallest upper bound it is not hard to see that <span class="math-container">$$\sup_{j\geq n} A_j= \bigcup_{j=n}^\infty A_j$$</span>"</p>
<p>Why does <span class="math-container">$j \geq n $</span> matter? Also, I don't understand what set is partially ordered, and also why the supremum of <span class="math-container">$A_j$</span> is itself a set. Shouldn't it just be an element? And I also don't see how the supremum is the same thing as the union.</p>
| Berci | 41,488 | <p>Probability events are (the measurable) subsets of the probability space <span class="math-container">$\Omega$</span>, and the partial order among them is simply the relation 'being subset of'. So, for any countable set of events <span class="math-container">$B_i$</span>, their supremum is just <span class="math-container">$\bigcup_i B_i$</span> (which is still measurable).</p>
<p>By the way, we also have
<span class="math-container">$$\limsup_n A_n\ =\ \lim_n\sup_{j\ge n} A_j\ =\ \bigcap_n\bigcup_{j\ge n} A_j$$</span></p>
|
245,789 | <p>Let $f$ be a function defined on the interval $(−1, 1)$ such that for all $x, y \in (−1, 1)$, $$f(x + y) = \frac{{f(x) + f(y)}}{{1 - f(x)f(y)}}.$$ Suppose that $f$ is differentiable at $x = 0$, show that $f$ is differentiable on $(−1, 1)$.</p>
<p>Need your kind guidance and help. Thanks in advance!</p>
| Pedro | 23,350 | <p>Assuming differentiability at $x=0$, we have that</p>
<p>$$\lim\limits_{h\to 0}\frac{f(h)-f(0)}{h}=\ell$$</p>
<p>exists.</p>
<p>Now, note that from $$f(x + y) = \frac{{f(x) + f(y)}}{{1 - f(x)f(y)}}.$$ we get that</p>
<p>$$f(x +h)-f(x) = \frac{{f(x) + f(h)}}{{1 - f(x)f(h)}}-f(x)$$</p>
<p>$$f(x +h)-f(x) = \frac{{f(x) + f(h)}}{{1 - f(x)f(h)}}-\frac{{f(x)({1 - f(x)f(h)})}}{{1 - f(x)f(h)}}$$</p>
<p>$$f(x +h)-f(x) = \frac{{f(x) + f(h)}}{{1 - f(x)f(h)}}-\frac{{{f(x) - f(x)^2f(h)}}}{{1 - f(x)f(h)}}$$</p>
<p>$$f(x +h)-f(x) = \frac{{f(x) + f(h)-f(x) + f(x)^2f(h)}}{{1 - f(x)f(h)}}$$</p>
<p>$$ \frac{f(x +h)-f(x)}h = \frac{f(h)}{h}\frac{{ 1 + f(x)^2}}{{1 - f(x)f(h)}}$$</p>
<p>Thus</p>
<p>$$\lim\limits_{h\to0} \frac{f(x +h)-f(x)}h =\lim\limits_{h\to0} \frac{f(h)}{h}\frac{{ 1 + f(x)^2}}{{1 - f(x)f(h)}}$$</p>
<p>All you need now is to show $f(0)=0$. The functional equation gives </p>
<p>$$f(0) = \frac{{2f(0)}}{{1 - f(0)^2}}.$$</p>
<p>Note that $f(0)^2\neq 1$ since the RHS wouldn't make sense. Thus</p>
<p>$$(1 - f(0)^2)f(0) = 2f(0)$$</p>
<p>$$f(0) - f(0)^3 = 2f(0)$$</p>
<p>$$f(0) + f(0)^3 = 0$$
$$f(0)(1 + f(0)^2) = 0\implies f(0)=0$$</p>
<p>Thus $f'(0)=\lim\limits_{h\to 0} f(h)/h$, and since $f$ is differentiable at $x=0$ it is continuous there, whence $\lim\limits_{h\to 0} f(h)=f(0)=0$. Finally</p>
<p>$$\lim\limits_{h\to0} \frac{f(x +h)-f(x)}h =\lim\limits_{h\to0} \frac{f(h)}{h}\frac{{ 1 + f(x)^2}}{{1 - f(x)f(h)}}$$</p>
<p>$$f'(x) =f'(0)\frac{{ 1 + f(x)^2}}{{1 - 0}}$$</p>
<p>$$f'(x) =f'(0)\left( 1 + f(x)^2\right)$$</p>
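<p>As a quick numeric sanity check (not part of the proof), note that the tangent function satisfies this functional equation, and its derivative does have the derived form <span class="math-container">$f'(x) = f'(0)(1+f(x)^2)$</span> with <span class="math-container">$f'(0)=1$</span>:</p>

```python
import math

# tan satisfies f(x + y) = (f(x) + f(y)) / (1 - f(x) f(y))
x, y = 0.3, 0.4
lhs = math.tan(x + y)
rhs = (math.tan(x) + math.tan(y)) / (1 - math.tan(x) * math.tan(y))
assert abs(lhs - rhs) < 1e-12

# and f'(x) = f'(0) (1 + f(x)^2) with f'(0) = 1, checked by central differences
h = 1e-6
for x in (0.0, 0.5, -0.7):
    numerical = (math.tan(x + h) - math.tan(x - h)) / (2 * h)
    assert abs(numerical - (1 + math.tan(x) ** 2)) < 1e-5
```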
|
3,760,115 | <p>I have these constraints on a cost function</p>
<p><span class="math-container">$$
c = A+Bx=A+B\text{vec}\ (q^*q^\top),
$$</span>
where <span class="math-container">$(c,A)\in\mathbb{R}^{100}$</span>, <span class="math-container">$B\in\mathbb{C}^{100\times 81}$</span>, <span class="math-container">$x\in\mathbb{C}^{81}$</span> and <span class="math-container">$q\in\mathbb{C}^9$</span>. So <span class="math-container">$x=\text{vec}\ (q^*q^\top)$</span>, which is the vectorization operator. I want to speed up my optimizer and therefore i require the gradient of the constraints (with respect to <span class="math-container">$q$</span>). This is how far i have come:</p>
<p><span class="math-container">$$
\begin{aligned}
dc = Bdx &= Bd\text{vec}\ (q^*q^\top)\\
&=B\text{vec}\ (q^*dq^\top+dq^*q^\top) \\
&=B\text{vec}\ (q^H:dq)+B\text{vec}\ (q^\top:dq^*)
\end{aligned}
$$</span></p>
<p>However, i cannot seem to get rid of the <span class="math-container">$\text{vec}$</span> operator. If i "matricize" the left side to remove the vectorization at the right side, i cannot get to <span class="math-container">$\frac{\partial c}{\partial q}$</span> anymore. Anyone got some brilliance for me?</p>
<p><strong>Update</strong>: The last line of my derivation is incorrect, I think. <span class="math-container">$q^H\in\mathbb{C}^{12}$</span> while <span class="math-container">$dq\in\mathbb{C}^{1\times 12}$</span>, so you cannot use the Frobenius product here.</p>
| Ben Grossmann | 81,360 | <p>So far, you have
<span class="math-container">$$
dc = B \operatorname{vec}(q^*\,dq^\top) + B\operatorname{vec}(dq^*\, q^\top).
$$</span>
To obtain the components of the gradient, it suffices to plug in <span class="math-container">$dq = e_j$</span> (where <span class="math-container">$e_1,\dots,e_9$</span> denote the standard basis vectors). So, we have
<span class="math-container">$$
\frac{\partial c}{\partial q_j} = B \operatorname{vec}(q^*\,e_j^\top) + B\operatorname{vec}(e_j\, q^\top).
$$</span>
We could rewrite this in terms of the Kronecker product to "unvectorize". Note that <span class="math-container">$\operatorname{vec}(v w^T) = w \otimes v$</span>, so that
<span class="math-container">$$
\frac{\partial c}{\partial q_j} = B (e_j \otimes q^*) + B(q \otimes e_j).
$$</span></p>
<hr />
<p>Another option is to go in the opposite direction: instead of vectorizing, unvectorize everything. Suppose that we have
<span class="math-container">$$
B = \sum_{j=1}^k P_j \otimes Q_j,
$$</span>
with <span class="math-container">$P_j,Q_j$</span> of size <span class="math-container">$10 \times 9$</span> (such a decomposition can be computed with reshaping and either SVD or rank factorization; the factor sizes must multiply out to <span class="math-container">$100 \times 81$</span>). We then have
<span class="math-container">$$
B \operatorname{vec}(q^*q^T) = \sum_{j=1}^k P_j \otimes Q_j \operatorname{vec}(q^*q^T) \\
= \sum_{j=1}^k \operatorname{vec}(Q_jq^*q^TP_j^T) \\
= \sum_{j=1}^k \operatorname{vec}(Q_jq^*(P_jq)^T).
$$</span>
In other words, if we unvectorize <span class="math-container">$c$</span> into the <span class="math-container">$10 \times 10$</span> matrix <span class="math-container">$C$</span>, then we have
<span class="math-container">$$
C = [\text{const.}] + \sum_{j=1}^k (Q_jq^*(P_jq)^T).
$$</span></p>
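<p>The identity used in the last step, <span class="math-container">$\operatorname{vec}(Q X P^\top) = (P \otimes Q)\operatorname{vec}(X)$</span> with column-major <span class="math-container">$\operatorname{vec}$</span>, can be spot-checked numerically (a sketch with arbitrary sizes, not the sizes of the question):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
P = rng.standard_normal((4, 3))
Q = rng.standard_normal((5, 2))
X = rng.standard_normal((2, 3))   # so that Q @ X @ P.T is defined

vec = lambda M: M.flatten('F')    # column-major (Fortran-order) vectorization
assert np.allclose(np.kron(P, Q) @ vec(X), vec(Q @ X @ P.T))
```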
|
947 | <p>I'm looking for the algorithm that efficiently locates the "loneliest person on the planet", where "loneliest" is defined as:</p>
<p>Maximum minimum distance to another person — that is, the person for whom the closest other person is farthest away.</p>
<p>Assume a (admittedly miraculous) input of the list of the exact latitude/longitude of every person on Earth at a particular time.</p>
<p>Also take as provided a function <span class="math-container">$d(p_1, p_2)$</span> that returns the distance on the surface of the earth between <span class="math-container">$p_1$</span> and <span class="math-container">$p_2$</span> - I know this is not trivial, but it's "just spherical geometry" and not the important (to me) part of the question.</p>
<p>What's the most efficient way to find the loneliest person?</p>
<p>Certainly one solution is to calculate <span class="math-container">$d(\ldots)$</span> for every pair of people on the globe, then sort every person's list of distances in ascending order, take the first item from every list and sort those in descending order and take the largest. But that involves <span class="math-container">$n(n-1)$</span> invocations of <span class="math-container">$d(\ldots)$</span>, <span class="math-container">$n$</span> sorts of <span class="math-container">$n-1$</span> items and one last sort of <span class="math-container">$n$</span> items. Last I checked, <span class="math-container">$n$</span> in this case is somewhere north of six billion, right? So we can do better?</p>
| user80022 | 80,022 | <p>Just divide the points into a mesh of triangles. The entire sphere, including the surface points, can be divided as such if the population (call it P) is divisible by 3. (I have written a publication for the other cases in Deutschen Mathematica vol. 127.6.)
Then, not to describe every kilobyte in that paper: you can now use calculus on X vector spaces, with each vector a 'distance' from another person. You can then find the <strong>maximum</strong> such triangle (3 vectors).</p>
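<p>For what it is worth, one standard way to beat the all-pairs <span class="math-container">$n(n-1)$</span> computation (a sketch of a different approach than the triangulation above; the helper name is mine) is a spatial index: chordal distance in 3-D is monotone in great-circle distance on a sphere, so a k-d tree on the embedded points finds every person's nearest neighbor in roughly <span class="math-container">$O(n \log n)$</span> time overall:</p>

```python
import numpy as np
from scipy.spatial import cKDTree

def loneliest(latlon_deg):
    """Index of the person whose nearest other person is farthest away.

    latlon_deg: (n, 2) array of latitude/longitude in degrees.  Chordal
    (straight-line 3-D) distance is monotone in great-circle distance on
    a sphere, so the argmax is the same for both metrics."""
    lat, lon = np.radians(np.asarray(latlon_deg, dtype=float)).T
    xyz = np.column_stack([np.cos(lat) * np.cos(lon),
                           np.cos(lat) * np.sin(lon),
                           np.sin(lat)])
    d, _ = cKDTree(xyz).query(xyz, k=2)   # d[:, 1] = distance to nearest other
    return int(np.argmax(d[:, 1]))

# three people clustered near the equator, one alone far to the north
assert loneliest([[0, 0], [0, 1], [0, 2], [45, 90]]) == 3
```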
|
2,198,293 | <p>Having a bit of trouble in what way I am suppose to go about solving this problem. Any guidance would be great.</p>
<p>Show that there is at least one real solution to:$$x^5 - x^2 - 4 = 0 $$</p>
<p>Thanks in advance.</p>
| Dr. Sonnhard Graubner | 175,066 | <p>Set $$f(x)=x^5-x^2-4.$$ Then we have $$f(1)=-4<0$$ and $$f(2)=2^5-4-4=24>0,$$ and the function $$f(x)$$ is continuous; thus there is at least one solution in the interval $(1,2)$.</p>
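<p>The same sign change also lets bisection locate the root numerically; a quick illustration in Python (not needed for the proof itself):</p>

```python
def f(x):
    return x**5 - x**2 - 4

a, b = 1.0, 2.0
assert f(a) < 0 < f(b)          # sign change: IVT guarantees a root in (1, 2)
for _ in range(60):             # halve the bracket 60 times
    m = (a + b) / 2
    a, b = (m, b) if f(m) < 0 else (a, m)
root = (a + b) / 2
assert 1.43 < root < 1.44 and abs(f(root)) < 1e-9
```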
|
1,842,781 | <p>Suppose we have an alphabet of the following allowed characters: </p>
<ul>
<li>the lowercase letters $a$ through $z$ (26)</li>
<li>the uppercase letters $A$ through $Z$ (26)</li>
<li>the numerals $0$ through $9$ (10)</li>
<li>the common punctuation marks: $.,;:'"()!?-$ (11)</li>
<li>the white space (1)</li>
</ul>
<p>Making a total of 74 options. </p>
<p>Each character $c$ in our 10-character sequence can be a member of the set of the allowed characters, $C$. </p>
<p>Suppose we have a given ten-character sequence, $S$. (For the sake of example, let $S = Abc123.,!$)</p>
<p>What is the probability that a randomly constructed sequence, $S'$, (with each character being randomly selected from $C$) matches $S$?</p>
| heropup | 118,193 | <p>Let the numbers be arranged as follows: $$\begin{array}{ccccc} & a & & b & \\ 0 & & c & & 1000 \\ & d & & e & \end{array}$$ Then we have the relationships $$\begin{align*} 3a &= b + c \\ 3d &= c + e \\ 3b &= a + c + 1000 \\ 3e &= c + d + 1000 \\ 6c &= a + b + d + e + 1000. \end{align*}$$ If we add up the first four equations, we obtain $$3(a+b+d+e) = 4c + (a+b+d+e) + 2000,$$ or $$2c = a+b+d+e - 1000.$$ Subtracting this result from the fifth equation yields $4c = 2000$, or $c = 500$. Consequently, $$\begin{align*} 3a &= b + 500 \\ 3b &= a + 1500. \end{align*}$$ Their sum gives $2(a+b) = 2000$, or $a+b = 1000$, hence $3a - (1000 - a) = 500$, or $a = 375$, consequently $b = 625$. Since the equations for $d$ and $e$ have the same form as those for $a$ and $b$, we also find $d = a = 375$ and $e = b = 625$.</p>
<hr>
<p>It is not difficult to generalize this solution so that if in place of $0$ and $1000$ we write $x$ and $y$ respectively, then $$(a,b,c,d,e) = \left(\frac{5x+3y}{8}, \frac{3x+5y}{8}, \frac{x+y}{2}, \frac{5x+3y}{8}, \frac{3x+5y}{8}\right).$$</p>
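<p>As a quick check (an illustration, not part of the argument), the five equations can also be handed to a linear solver:</p>

```python
import numpy as np

# unknowns ordered (a, b, c, d, e); rows are the five equations above
A = np.array([
    [ 3, -1, -1,  0,  0],   # 3a = b + c
    [ 0,  0, -1,  3, -1],   # 3d = c + e
    [-1,  3, -1,  0,  0],   # 3b = a + c + 1000
    [ 0,  0, -1, -1,  3],   # 3e = c + d + 1000
    [-1, -1,  6, -1, -1],   # 6c = a + b + d + e + 1000
], dtype=float)
rhs = np.array([0, 0, 1000, 1000, 1000], dtype=float)
assert np.allclose(np.linalg.solve(A, rhs), [375, 625, 500, 375, 625])
```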
|
2,912,401 | <p>I'm trying to prove that the sequence $\left(\frac{1}{2},\frac{1}{3},\frac{2}{3},\frac{1}{4},\frac{2}{4},\frac{3}{4},\frac{1}{5},\frac{2}{5},\frac{3}{5},\frac{4}{5},\cdots\right)$ is not a Cauchy sequence. </p>
<p>I know that a sequence of real numbers is <em>not</em> Cauchy if there exists an $\epsilon>0$ such that, for all $N\in\mathbb{N}$, there exists $m,n>N$ such that $|x_{m}-x_{n}|\geq\epsilon$. It intuitively makes sense to me that the sequence cannot be Cauchy, as the distance between points where the denominator changes (like $\cdots\frac{99}{101},\frac{100}{101},\frac{1}{102},\cdots$) keeps growing larger. However, I'm not sure how to find indices $m$ and $n$ in general with $|x_{m}-x_{n}|\geq\epsilon$. Thanks in advance for any help!</p>
| ℋolo | 471,959 | <p>Cauchy sequence: $$\forall\varepsilon>0\exists N\forall n,m\quad n,m>N\implies|a_n-a_m|<\varepsilon$$</p>
<p>We want to show the negation:$$\exists\varepsilon>0\forall N\exists n,m\quad n,m>N\land |a_n-a_m|\ge\varepsilon$$</p>
<p>Take $\varepsilon=1/10$. Then, for all $N$, take $n=\sum_{i=1}^{N+1} i=\frac{(N+1)^2+(N+1)}{2}$ and $m=n+1$, so that $a_n$ is an element of the form $\frac{k}{k+1}$ and $a_m$ is of the form $\frac{1}{k+2}$ (with $k=N+1$). Their difference is $\frac{k}{k+1}-\frac{1}{k+2}\ge\frac{2}{3}-\frac{1}{4}=\frac{5}{12}\ge\varepsilon$.</p>
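<p>A short script can confirm these witness indices on the actual sequence (a sketch; the helper name is mine):</p>

```python
from fractions import Fraction

def farey_like(count):
    """First `count` terms of 1/2, 1/3, 2/3, 1/4, 2/4, 3/4, ..."""
    out, d = [], 2
    while len(out) < count:
        out.extend(Fraction(k, d) for k in range(1, d))
        d += 1
    return out[:count]

a = farey_like(200)
eps = Fraction(1, 10)
for N in range(1, 15):
    n = (N + 1) * (N + 2) // 2      # 1-based index of the term (N+1)/(N+2)
    m = n + 1                       # the next term is 1/(N+3)
    assert n > N and m > N
    assert a[n - 1] == Fraction(N + 1, N + 2) and a[m - 1] == Fraction(1, N + 3)
    assert abs(a[n - 1] - a[m - 1]) >= eps
```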
|
1,025,588 | <p>I just started doing AM-GM inequalities for the first time about two hours ago. In those two hours, I have completed exactly two problems. I am stuck on this third one! Here is the problem:</p>
<p>If $a, b, c \gt 0$ prove that $$ a^3 +b^3 +c^3 \ge a^2b +b^2c+c^2a.$$</p>
<p>I am going crazy over this! A hint or proof would be much appreciated. Also any general advice for proving AM-GM inequalities would bring me happiness to my heart. Thank you!</p>
| Aditya Hase | 190,645 | <p>Using <a href="http://en.wikipedia.org/wiki/Rearrangement_inequality" rel="noreferrer">rearrangement inequality</a> with $a^2,b^2,c^2$ and $a,b,c$ we get
$$aa^2+bb^2+cc^2\ge a^2b +b^2c+c^2a$$</p>
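<p>A random spot-check of the inequality (no substitute for the proof, of course):</p>

```python
import random

random.seed(0)
for _ in range(10_000):
    a, b, c = (random.uniform(0, 100) for _ in range(3))
    lhs = a**3 + b**3 + c**3
    rhs = a**2 * b + b**2 * c + c**2 * a
    assert lhs >= rhs - 1e-9 * lhs   # tiny slack for floating-point rounding
```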
|
4,014,022 | <p>In example <span class="math-container">$1.5$</span> of <em>Cracking the GRE Subject Test</em>, the authors make the following calculation in one step with no additional commentary:</p>
<blockquote>
<p>we interchange <span class="math-container">$x$</span> and <span class="math-container">$y$</span> and solve for <span class="math-container">$y$</span>:
<span class="math-container">\begin{align}
\vdots\\
xy^2 + y-x &= 0\\
y &= \frac{-1 \pm \sqrt{1+4x^2}}{2x}
\end{align}</span></p>
</blockquote>
<p>What technique allows such a breezy solution?</p>
| 19aksh | 668,124 | <p>Using the quadratic formula for <span class="math-container">$ay^2 + by +c = 0$</span></p>
<p><span class="math-container">$$y = \frac{-b\pm\sqrt{b^2-4ac}}{2a}$$</span></p>
<p>Here <span class="math-container">$a = x$</span>, <span class="math-container">$b = 1$</span>, <span class="math-container">$c = -1$</span></p>
|
73,112 | <blockquote>
<p>Find <span class="math-container">$e^{At}$</span>, where <span class="math-container">$$A = \begin{bmatrix} 1 & -1 & 1 & 0\\ 1 & 1 & 0 & 1\\ 0 & 0 & 1 & -1\\ 0 & 0 & 1 & 1\\ \end{bmatrix}$$</span></p>
</blockquote>
<hr />
<p>So, let me just find <span class="math-container">$e^{A}$</span> for now and I can generalize later. I notice right away that I can write</p>
<p><span class="math-container">$$A = \begin{bmatrix} B & I_{2} \\ 0_{22} & B \end{bmatrix}$$</span></p>
<p>where</p>
<p><span class="math-container">$$B = \begin{bmatrix} 1 & -1\\ 1 & 1\\ \end{bmatrix}$$</span></p>
<p>I'm sort of making up a method here and I hope it works. Can someone tell me if this is correct?</p>
<p>I write:</p>
<p><span class="math-container">$$A = \mathrm{diag}(B,B) + \begin{bmatrix}0_{22} & I_{2}\\ 0_{22} & 0_{22}\end{bmatrix}$$</span></p>
<p>Call <span class="math-container">$S = \mathrm{diag}(B,B)$</span>, and <span class="math-container">$N = \begin{bmatrix}0_{22} & I_{2}\\ 0_{22} & 0_{22}\end{bmatrix}$</span>. I note that <span class="math-container">$N^2$</span> is <span class="math-container">$0_{44}$</span>, so</p>
<p><span class="math-container">$$e^{N} = \frac{N^{0}}{0!} + \frac{N}{1!} + \frac{N^2}{2!} + \cdots = I_{4} + N + 0_{44} + \cdots = I_{4} + N$$</span></p>
<p>and that <span class="math-container">$e^{S} = \mathrm{diag}(e^{B}, e^{B})$</span> and compute:</p>
<p><span class="math-container">$$e^{A} = e^{S + N} = e^{S}e^{N} = \mathrm{diag}(e^{B}, e^{B})\cdot[I_{4} + N]$$</span></p>
<p>This reduces the problem to finding <span class="math-container">$e^B$</span>, which is much easier.</p>
<p>Is my logic correct? I just started writing everything as a block matrix and proceeded as if nothing about the process of finding the exponential of a matrix would change, but I don't really know the theory behind this; I'm just guessing how it would work.</p>
| Sasha | 11,069 | <p>Consider $M(t) = \exp(t A)$; as you noticed, it has the block upper-triangular form
$$
M(t) = \left(\begin{array}{cc} \exp(t B) & n(t) \\ 0_{2 \times 2} & \exp(t B) \end{array} \right).
$$
Notice that $M^\prime(t) = A \cdot M(t)$, and this results in the following differential equation for the matrix $n(t)$:
$$
n^\prime(t) = \mathbb{I}_{2 \times 2} \cdot \exp(t B) + B \cdot n(t)
$$
which translates into
$$
\frac{\mathrm{d}}{\mathrm{d} t} \left( \exp(-t B) n(t) \right) = \mathbb{I}_{2 \times 2}
$$
which, since $n(0) = 0$, is to say that $n(t) = t \exp(t B)$.</p>
|
73,112 | <blockquote>
<p>Find <span class="math-container">$e^{At}$</span>, where <span class="math-container">$$A = \begin{bmatrix} 1 & -1 & 1 & 0\\ 1 & 1 & 0 & 1\\ 0 & 0 & 1 & -1\\ 0 & 0 & 1 & 1\\ \end{bmatrix}$$</span></p>
</blockquote>
<hr />
<p>So, let me just find <span class="math-container">$e^{A}$</span> for now and I can generalize later. I notice right away that I can write</p>
<p><span class="math-container">$$A = \begin{bmatrix} B & I_{2} \\ 0_{22} & B \end{bmatrix}$$</span></p>
<p>where</p>
<p><span class="math-container">$$B = \begin{bmatrix} 1 & -1\\ 1 & 1\\ \end{bmatrix}$$</span></p>
<p>I'm sort of making up a method here and I hope it works. Can someone tell me if this is correct?</p>
<p>I write:</p>
<p><span class="math-container">$$A = \mathrm{diag}(B,B) + \begin{bmatrix}0_{22} & I_{2}\\ 0_{22} & 0_{22}\end{bmatrix}$$</span></p>
<p>Call <span class="math-container">$S = \mathrm{diag}(B,B)$</span>, and <span class="math-container">$N = \begin{bmatrix}0_{22} & I_{2}\\ 0_{22} & 0_{22}\end{bmatrix}$</span>. I note that <span class="math-container">$N^2$</span> is <span class="math-container">$0_{44}$</span>, so</p>
<p><span class="math-container">$$e^{N} = \frac{N^{0}}{0!} + \frac{N}{1!} + \frac{N^2}{2!} + \cdots = I_{4} + N + 0_{44} + \cdots = I_{4} + N$$</span></p>
<p>and that <span class="math-container">$e^{S} = \mathrm{diag}(e^{B}, e^{B})$</span> and compute:</p>
<p><span class="math-container">$$e^{A} = e^{S + N} = e^{S}e^{N} = \mathrm{diag}(e^{B}, e^{B})\cdot[I_{4} + N]$$</span></p>
<p>This reduces the problem to finding <span class="math-container">$e^B$</span>, which is much easier.</p>
<p>Is my logic correct? I just started writing everything as a block matrix and proceeded as if nothing about the process of finding the exponential of a matrix would change, but I don't really know the theory behind this; I'm just guessing how it would work.</p>
| Rodrigo de Azevedo | 339,790 | <p>Using induction, for <span class="math-container">$k \geq 1$</span>,</p>
<p><span class="math-container">$${\rm A}^k = \begin{bmatrix} {\rm B}^k & k \, {\rm B}^{k-1}\\ {\rm O} & {\rm B}^k\end{bmatrix}$$</span></p>
<p>and, thus,</p>
<p><span class="math-container">$$\exp\left( t \, {\rm A} \right) = \begin{bmatrix} {\rm I} & {\rm O}\\ {\rm O} & {\rm I}\end{bmatrix} + \sum_{k=1}^{\infty} \frac{t^k}{k!} \begin{bmatrix} {\rm B}^k & k \, {\rm B}^{k-1}\\ {\rm O} & {\rm B}^k\end{bmatrix} = \begin{bmatrix} \exp\left( t \, {\rm B} \right) & \square\\ {\rm O} & \exp\left( t \, {\rm B} \right)\end{bmatrix}$$</span></p>
<p>where</p>
<p><span class="math-container">$$\square = \sum_{k=1}^{\infty} \frac{t^k}{k!} k \, {\rm B}^{k-1} = t \sum_{k=1}^{\infty} \frac{t^{k-1}}{(k-1)!} {\rm B}^{k-1} = t \sum_{k=0}^{\infty} \frac{t^k}{k!} {\rm B}^k = t \, \exp\left( t \, {\rm B} \right)$$</span></p>
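<p>The closed form can be sanity-checked numerically against SciPy's matrix exponential (a quick check, not part of the argument):</p>

```python
import numpy as np
from scipy.linalg import expm

B = np.array([[1., -1.], [1., 1.]])
A = np.block([[B, np.eye(2)], [np.zeros((2, 2)), B]])

for t in (0.5, 1.0, 2.3):
    etB = expm(t * B)
    expected = np.block([[etB, t * etB], [np.zeros((2, 2)), etB]])
    assert np.allclose(expm(t * A), expected)
```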
|
14,527 | <p>The <a href="https://en.wikipedia.org/wiki/Robinson%E2%80%93Schensted%E2%80%93Knuth_algorithm" rel="nofollow noreferrer">Robinson-Schensted correspondence</a> is a bijection between elements of the symmetric group and ordered pairs of standard tableaux of the same shape.</p>
<p>Some simple operations on tableaux correspond to simple operations on the group: switching the two tableaux corresponds to inversion in the group.</p>
<blockquote>
<p>What about taking the transpose of the tableaux? Does that correspond to something easily described on permutations?</p>
</blockquote>
<p>In order to rule out easy guesses, let me describe this on <span class="math-container">$S_3$</span>:</p>
<ul>
<li>the identity switches with the involution 321.</li>
<li>the transpositions 213 and 132 switch.</li>
<li>the 3-cycles 312 and 231 switch.</li>
</ul>
<p>In general, this operation preserves being order <span class="math-container">$\leq 2$</span> (since this is equivalent to the P- and Q-symbols being the same).</p>
| Igor Pak | 4,040 | <p>When you conjugate diagrams and apply RSK or other Young tableau bijections, the answer is typically bad, with some rare exceptions. The right way to think of RSK is to think of row length being continuous while columns still integer (see e.g. my "Geometric proof of the hook-length formula" paper and refs therein). </p>
<p>An exception: there is a "hidden symmetry" for LR-coefficients when you conjugate all three diagrams - see Hanlon-Sundaram paper (1992). Once you know this bijection, there are natural connections to RSK as described in the long Pak-Vallejo paper and a followup by Azenhas-Conflitti-Mamede (search the web for a paper and a ppt presentation). Together, these do give a complete description of your involution, but making sense of it might require quite a bit of work. </p>
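<p>For what it is worth, the <span class="math-container">$S_3$</span> pairs listed in the question can be reproduced with a small row-insertion RSK implementation (a sketch, with permutations in one-line notation):</p>

```python
from bisect import bisect_right

def rsk(perm):
    """Row-insertion RSK; returns (P, Q) as lists of rows."""
    P, Q = [], []
    for step, x in enumerate(perm, start=1):
        row = 0
        while True:
            if row == len(P):
                P.append([x]); Q.append([step]); break
            j = bisect_right(P[row], x)     # leftmost entry > x
            if j == len(P[row]):
                P[row].append(x); Q[row].append(step); break
            x, P[row][j] = P[row][j], x     # bump the entry into the next row
            row += 1
    return P, Q

def transpose(T):
    return [[row[j] for row in T if j < len(row)] for j in range(len(T[0]))]

# transposing both tableaux swaps these pairs, as listed in the question
for u, v in [((1, 2, 3), (3, 2, 1)), ((2, 1, 3), (1, 3, 2)), ((3, 1, 2), (2, 3, 1))]:
    Pu, Qu = rsk(u)
    assert (transpose(Pu), transpose(Qu)) == rsk(v)
```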
|
434,685 | <p>Suppose that $X^*$ is the dual space of a normed space $X$. If we renorm the space $X^*$ with a new norm equivalent to the first one, is this new normed space the dual of $X$ as well?
(I think it suffices to prove that a functional $f$ is continuous with respect to norm 1 if and only if it is continuous with respect to norm 2, where norm 1 and norm 2 are two equivalent norms. This seems to be obvious!)</p>
<p>Thanks for the help.</p>
| fuglede | 10,816 | <p>For the last statement, suppose $f$ is continuous with respect to $\lVert \cdot \rVert_1$, and that $k,K > 0$ are so that $k \lVert v \rVert_1 < \lVert v \rVert_2 < K\lVert v \rVert_1$ for all $v \in X$. Let $\varepsilon > 0$ be given, and choose $\delta_0$ so that $\lvert f(v) - f(w) \rvert < \varepsilon$ whenever $\lVert v - w \rVert_1 < \delta_0$. Put $\delta = k\delta_0$. Then for $\lVert v - w \rVert_2 < \delta$ you have $\lVert v - w \rVert_1 < \lVert v - w \rVert_2 / k < \delta/k = \delta_0$, so $\lvert f(v)-f(w) \rvert < \varepsilon$, so $f$ is continuous with respect to $\lVert \cdot \rVert_2$.</p>
<p>Note that this does not use linearity of $f$ and is a general fact about continuous maps between metric spaces.</p>
|
1,289,112 | <p>I am trying to prove $\exists n_0 > 0: \forall n > n_0: \log n < \sqrt n$. My attempt uses the series representation of the exponential function, but it does not seem to accomplish the proof:</p>
<p>$$
\log n < \sqrt n \\
\Leftrightarrow n < e^{\sqrt n} = \sum_{k=0}^\infty \frac{(\sqrt n)^k}{k!} = 1 + \sqrt n + \frac n 2 + \sum_{k=3}^\infty \frac{(\sqrt n)^k}{k!} \\
\Leftrightarrow \frac n 2 < 1 + \sqrt n + \sum_{k=3}^\infty \frac{(\sqrt n)^k}{k!} \\
\Leftrightarrow ???
$$</p>
<p>How could I complete this proof? Or would a different strategy e.g. via monotonicity be more convenient?</p>
| André Nicolas | 6,312 | <p>Hint: The series representation for the exponential will work. Use the fact that if $x$ is positive then
$$e^x\gt \frac{x^3}{3!}.$$</p>
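<p>One way to finish from the hint (a sketch): with <span class="math-container">$x=\sqrt n$</span>, the inequality gives <span class="math-container">$$e^{\sqrt n} > \frac{(\sqrt n)^3}{3!} = \frac{n^{3/2}}{6} \ge n \quad\text{whenever } \sqrt n \ge 6,$$</span> i.e. <span class="math-container">$n < e^{\sqrt n}$</span> for all <span class="math-container">$n \ge 36$</span>; taking logarithms gives <span class="math-container">$\log n < \sqrt n$</span> there, so <span class="math-container">$n_0 = 36$</span> works.</p>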
|
1,289,112 | <p>I am trying to prove $\exists n_0 > 0: \forall n > n_0: \log n < \sqrt n$. My attempt uses the series representation of the exponential function, but it does not seem to accomplish the proof:</p>
<p>$$
\log n < \sqrt n \\
\Leftrightarrow n < e^{\sqrt n} = \sum_{k=0}^\infty \frac{(\sqrt n)^k}{k!} = 1 + \sqrt n + \frac n 2 + \sum_{k=3}^\infty \frac{(\sqrt n)^k}{k!} \\
\Leftrightarrow \frac n 2 < 1 + \sqrt n + \sum_{k=3}^\infty \frac{(\sqrt n)^k}{k!} \\
\Leftrightarrow ???
$$</p>
<p>How could I complete this proof? Or would a different strategy e.g. via monotonicity be more convenient?</p>
| zhw. | 228,045 | <p>In fact, $\ln x < \sqrt x$ for all $x>0.$ Let $f(x) = \sqrt x - \ln x.$ Then $f'(x) = 1/(2\sqrt x) - 1/x.$ This tells us $f'<0$ on $(0,4), f'(4)=0,$ and $f'>0$ on $(4,\infty).$ Thus $f(4)$ is the absolute minimum of $f$ on $(0,\infty).$ It is quite simple to check that $f(4) = 2 - \ln 4 > 0$ (since $\ln 4 < 2 = \ln e^2$), hence $f> 0$ on $(0,\infty).$ </p>
|
689,921 | <p>There are two teams. Two games were played. There are three possible outcomes, which are win, lose or draw. How many permutations are there?</p>
| amWhy | 9,003 | <p>Teams A, B</p>
<p>In any game, if team A wins, team B loses. If team A loses, team B wins...etc. So the outcome for any one team determines the outcome for the other.</p>
<p>So we need only consider the outcomes for team A, given team A plays two games against B.</p>
<p>$1\quad2$ : Game<br>
$W \;\;W$<br>
$W \;\; L$<br>
$W\;D $<br>
$L\;\;W$<br>
$L\;\;L$<br>
$L\;\;D$<br>
$D\;\;W$<br>
$D\;\;L$<br>
$D\;\;D$<br></p>
<p>In all, there are $9$ possible outcomes when two teams play two games.</p>
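<p>The listing above is just the Cartesian product $\{W,L,D\}^2$; the same count in Python:</p>

```python
from itertools import product

outcomes = list(product("WLD", repeat=2))   # team A's result in each of the 2 games
assert len(outcomes) == 3 ** 2 == 9
```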
|
153,999 | <p>How to solve: $|\sqrt{x-1}-2| + |\sqrt{x-1}-3|=1$.</p>
<p>I would like to know how to solve an absolute value equation when there is a square root sign inside.</p>
| Cameron Buie | 28,900 | <p>I recommend the following approach, in this case. Let $y=\sqrt{x-1}$. Now you need only solve the equation $|y-2|+|y-3|=1$--which should have a closed interval's worth of solutions--then given any of the solutions, say $\alpha$, solve the equation $\sqrt{x-1}=\alpha$. In the end, you will obtain a closed interval's worth of solutions.</p>
<p>It is important to note that this approach <strong><em>will not always work</em></strong>! If the equation you'd started with had been $$\left|\sqrt{x+1}-2\right|+\left|\sqrt{x}-3\right|=1,$$ we could not have made the substitution as above.</p>
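<p>Concretely, $|y-2|+|y-3|=1$ holds exactly for $y\in[2,3]$, i.e. $x\in[5,10]$; a quick numeric confirmation (a sketch):</p>

```python
import math

def g(x):
    y = math.sqrt(x - 1)
    return abs(y - 2) + abs(y - 3)

assert all(abs(g(x) - 1) < 1e-12 for x in (5, 6.3, 8, 10))  # inside [5, 10]
assert g(4.9) > 1 and g(10.1) > 1                            # strictly outside
```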
|
10,677 | <p>I have a question about a transformation of a matrix. Let's say we have the following matrix</p>
<p>$
M =
\left[ {\begin{array}{cc}
4 & 3 \\
4 & 3 \\
\end{array} } \right]
$
</p>
<p>Then I want to transform all numbers in the matrix with the following function:</p>
<p>$f(x) = (x - 4)^2 + 2x -4$ </p>
<p>What would be the correct notation for this?</p>
| Mitchell Watt | 3,564 | <p>You could use the "Hadamard product"... I don't know whether there is a consistently accepted notation for such a product; however, a few examples <a href="https://planetmath.org/HadamardProduct" rel="nofollow noreferrer">here</a> and on Wikipedia seem to use an open circle.</p>
<p>In that sense, your expression would take the form</p>
<p><span class="math-container">$(M-4O)\circ(M-4O)+2M-4O$</span> where <span class="math-container">$O= \left( \begin{array} {cc} 1 &1 \\ 1&1 \end{array} \right) $</span></p>
<p>In MATLAB, use .* for element-wise multiplication.</p>
<p>There are probably other possibilities.</p>
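<p>In NumPy, array arithmetic is elementwise by default (the analogue of MATLAB's <code>.*</code>), so the whole transformation is just <code>f(M)</code>; amusingly, for this particular $f$ and $M$ the result equals $M$ itself, since $f(4)=4$ and $f(3)=3$:</p>

```python
import numpy as np

M = np.array([[4, 3], [4, 3]])
f = lambda x: (x - 4) ** 2 + 2 * x - 4   # evaluated elementwise on arrays

assert (f(M) == M).all()   # f(4) = 4 and f(3) = 3, so M is fixed by f
```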
|
10,677 | <p>I have a question about a transformation of a matrix. Let's say we have the following matrix</p>
<p>$
M =
\left[ {\begin{array}{cc}
4 & 3 \\
4 & 3 \\
\end{array} } \right]
$
</p>
<p>Then I want to transform all numbers in the matrix with the following function:</p>
<p>$f(x) = (x - 4)^2 + 2x -4$ </p>
<p>What would be the correct notation for this?</p>
| Agustí Roig | 664 | <p>Maybe I don't understand your question, but: what's wrong with this?</p>
<p>$$
\begin{pmatrix}
f(4) & f(3) \\\
f(4) & f(3)
\end{pmatrix}
$$
</p>
<p>More generally, if you have a matrix $(a^i_j)$, you could write the resulting matrix as $(f(a^i_j))$.</p>
|
1,552,747 | <p>When we were introduced to integration, we were told about some basic results that, we were told, could not be proved at our level of sophistication. They are as follows:</p>
<ol>
<li>$$\int_a^b \! f(x) \, \mathrm{d}x=\phi(b)-\phi(a),$$ where $\phi$ is a primitive of $f$ in $[a,b]$</li>
<li>$$\int_b^a \! f(x) \, \mathrm{d}x=-\int_a^b \! f(x) \, \mathrm{d}x$$</li>
</ol>
<p>When I learned Riemann Integral, I thought I would be able to prove them, as they didn't seem to be too out-of-the-earth type. So, my question is what is the concept of a primitive according to Riemann Integration? And how can it be used to prove $(1)$? If there isn't any, then where should I look?</p>
<p>Also, how can I even conceptualise $(2)$? I mean, Riemann integration is defined using sums. How can I sum in the opposite way? Does it even matter? And how does it become negative?</p>
<p>I'm just a curious high school student, familiar with the basic concepts of Riemann Integration. So, spare me if my questions are too dumb. And it'd be great if you can suggest some study material where I should look for these type of concepts. Thank you in advance.</p>
| Siddharth Joshi | 288,487 | <p>You <strong>cannot associate any</strong> physical interpretation to $\int_b^a f(x) \ dx$</p>
<p>You're right, the sum would be the same irrespective of whether you sum it forward or backward. But $\int_b^a f(x) dx$ doesn't connote taking the sum in the opposite manner. If it were then $\int_b^a f(x) dx$ would have been equal to $\int_a^bf(a+b-x)dx$ which doesn't happen.</p>
<p>However, this is a useful concept since it allows us to write $\int_a^bf(x) dx = \int_a^cf(x) dx + \int_c^bf(x) dx$ even when $a < b < c$</p>
|
1,552,747 | <p>When we were introduced to integration, we were told about some basic results that, we were told, could not be proved at our level of sophistication. They are as follows:</p>
<ol>
<li>$$\int_a^b \! f(x) \, \mathrm{d}x=\phi(b)-\phi(a),$$ where $\phi$ is a primitive of $f$ in $[a,b]$</li>
<li>$$\int_b^a \! f(x) \, \mathrm{d}x=-\int_a^b \! f(x) \, \mathrm{d}x$$</li>
</ol>
<p>When I learned Riemann Integral, I thought I would be able to prove them, as they didn't seem to be too out-of-the-earth type. So, my question is what is the concept of a primitive according to Riemann Integration? And how can it be used to prove $(1)$? If there isn't any, then where should I look?</p>
<p>Also, how can I even conceptualise $(2)$? I mean, Riemann integration is defined using sums. How can I sum in the opposite way? Does it even matter? And how does it become negative?</p>
<p>I'm just a curious high school student, familiar with the basic concepts of Riemann Integration. So, spare me if my questions are too dumb. And it'd be great if you can suggest some study material where I should look for these type of concepts. Thank you in advance.</p>
| Dorebell | 59,428 | <p>The first question is the fundamental theorem of calculus: it says that if there is a differentiable function $F$ with $\frac{dF}{dx} = f$, and $f$ is Riemann integrable on $[a,b]$, then $\int_{a}^b f(x) dx = F(b) - F(a)$. This can be proved from the definitions relatively easily, but not obviously. It will be covered in any book on elementary real analysis, a good book on calculus, or <a href="https://en.wikipedia.org/wiki/Fundamental_theorem_of_calculus#Proof_of_the_second_part" rel="nofollow">Wikipedia</a> (NB: When I was a curious high school student, Wikipedia was a godsend for explaining things that didn't make sense or weren't explained in calc class!)</p>
<p>The second statement should be considered a definition. For $a < b$, we define the integral $\int_{a}^{b} f(x) dx$ to be the integral of $f$ with respect to the interval $[a,b]$ (note that the definition of the integral only really depends on the interval!). Then, for any $a,b,c$ with $a < b < c$, we have $\int_a^b f(x) dx + \int_b^c f(x) dx = \int_a^c f(x) dx$ (which is easy to see from the definitions). Now, we want to assign meaning to the symbol $\int_b^a f(x) dx$ with $a \leq b$. We can do this in a unique way such that the addition formula above holds for any $a,b,c$. First, plugging in $b = c$, the formula reads $\int_a^b f(x) dx + \int_b^b f(x) dx = \int_a^b f(x) dx$, so $\int_b^b f(x) dx = 0$. Then, if $a < b$, we can plug in $c = a$ to get $\int_a^b f(x) dx + \int_b^a f(x) dx = \int_a^a f(x) dx = 0$, so $\int_b^a f(x) dx = - \int_a^b f(x) dx$. </p>
|
1,037,530 | <p>What is the basic difference between a mapping and a function? Many say they are the same, but opposing views also appear. Is a mapping a restricted version of a function?</p>
| Cameron Williams | 22,551 | <p>I think a map and function are usually synonymous amongst mathematicians. Often you will see "let $f$ be a map" or "let $f$ be a function" used interchangeably.</p>
|
4,363,384 | <p>Let <span class="math-container">$A_i$</span>, <span class="math-container">$i \in I$</span>, be subsets of a space <span class="math-container">$X$</span> with topology <span class="math-container">$\tau$</span>. Show that <span class="math-container">$\overline{\bigcup_i A_i }\subset \bigcup_i\overline{ A_i }$</span> does not necessarily hold:</p>
<p>My counterexample is in <span class="math-container">$\Bbb N$</span> with the co-finite topology and <span class="math-container">$A_k=\{2k\}=\overline{ A_k }$</span> because singletons are closed. Hence, <span class="math-container">$\bigcup_i\overline{ A_i } = 2\Bbb N$</span>. However, <span class="math-container">$\overline{\bigcup_i A_i }$</span> is the smallest closed set containing <span class="math-container">$2\Bbb N$</span>, since closed sets are finite or equal to <span class="math-container">$\Bbb N$</span>, then <span class="math-container">$\overline{\bigcup_i A_i }=\Bbb N$</span> so <span class="math-container">$\overline{\bigcup_i A_i }\nsubseteq \bigcup_i\overline{ A_i }$</span>.</p>
<p>But here, since <span class="math-container">$I$</span> is countable, it seems to disagree with the fact that <span class="math-container">$\overline{A \cup B}=\overline{A} \cup \overline{B}$</span>. Is my counterexample wrong ? Should I find a case with <span class="math-container">$I$</span> uncountable ?</p>
| José Carlos Santos | 446,262 | <p>Your example is fine. The equality <span class="math-container">$\overline{A\cup B}=\overline A\cup\overline B$</span> holds for pairs of subsets of <span class="math-container">$X$</span> and, more generally, for finite sets of subsets of <span class="math-container">$X$</span>. But you have provided an example with a <em>countable</em> set of subsets of <span class="math-container">$X$</span>.</p>
<p>Another example would consist in taking <em>any</em> non-closed subset <span class="math-container">$A$</span> of a <span class="math-container">$T_1$</span> topological space, and then noticing that <span class="math-container">$\bigcup_{a\in A}\overline{\{a\}}=\bigcup_{a\in A}\{a\}=A$</span>, but <span class="math-container">$\overline{\bigcup_{a\in A}\{a\}}=\overline A\supsetneq A$</span>.</p>
|
1,675,411 | <p>So far I have this:</p>
<p>First consider $n = 5$. In this case $(5)^2 < 2^5$, or $25 < 32$. So the inequality holds for $n = 5$.</p>
<p>Next, suppose that $n^2 < 2^n$ and $n \geq 5$. Now I have to prove that $(n+1)^2 < 2^{(n+1)}$.</p>
<p>So I started with $(n+1)^2 = n^2 + 2n + 1$. Because $n^2 < 2^n$ by the hypothesis, $n^2 + 2n + 1$ < $2^n + 2n + 1$. As far as I know, the only way I can get $2^{n+1}$ on the right side is to multiply it by $2$, but then I get $2^{n+1} + 4n + 2$ on the right side and don't know how to get rid of the $4n + 2$. Am I on the right track, or should I have gone a different route?</p>
| Adriano | 76,987 | <p>There are, of course, many ways to force $2^{n + 1}$ to show up on the RHS. I like to bound the lower order terms as multiples of the leading term by repeatedly applying the given inequality $n \geq 5$ so that I can apply the inductive hypothesis in one shot. Indeed, observe that:
\begin{align*}
(n + 1)^2
&= n^2 + 2n + 1 \\
&< n^2 + 4n + 5 \\
&\leq n^2 + 4n + n &\text{since } n \geq 5 \\
&= n^2 + 5n \\
&\leq n^2 + (n)n &\text{since } n \geq 5 \\
&= 2(n^2) \\
&< 2(2^n) &\text{by the inductive hypothesis} \\
&= 2^{n + 1}
\end{align*}
as desired.</p>
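The base case and every bound in the chain can be machine-checked for small $n$ (a finite sanity check only; the induction is what covers all $n \geq 5$):

```python
# Verify n^2 < 2^n and each termwise bound used in the inductive step,
# (n+1)^2 = n^2+2n+1 < n^2+4n+5 <= n^2+5n <= 2n^2 < 2*2^n = 2^(n+1),
# for a range of n >= 5.
for n in range(5, 64):
    assert n ** 2 < 2 ** n
    assert (n + 1) ** 2 == n ** 2 + 2 * n + 1
    assert n ** 2 + 2 * n + 1 < n ** 2 + 4 * n + 5 <= n ** 2 + 5 * n <= 2 * n ** 2 < 2 ** (n + 1)
```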
|
390,662 | <p><strong>Question:</strong> Is there a simple method for calculating the Fourier transform of a holomorphic complex function <span class="math-container">$f(z):\Omega\to\mathbb{C}$</span>?</p>
<p>In order for my question to be well-posed, I define a holomorphic function <span class="math-container">$f:\Omega\to\mathbb{C}$</span> to possess continuous first partial derivatives and satisfy the Cauchy-Riemann equations in a simply connected domain <span class="math-container">$\Omega\subseteq\mathbb{C}$</span> without any singularities. I am quite familiar with a Fourier transform for a real, periodic function <span class="math-container">$f:\mathbb{R}\to\mathbb{R}$</span> that uses complex exponentials as a basis of eigenfunctions to generate an expansion <span class="math-container">$f(x)=\sum_{n=-\infty}^{\infty}A_{n}e^{inx}$</span>.</p>
<p>Given that all functions satisfying the Cauchy-Riemann equations are harmonic, I wondered if the Laplace PDE with homogeneous Dirichlet boundary conditions <span class="math-container">$\Delta f(z)=0\ \forall z\in\Omega$</span> and <span class="math-container">$f(z)\mid_{\partial\Omega}=0$</span> could be used to generate a class of harmonic functions in <span class="math-container">$\mathbb{R}^{2}$</span>. Admittedly, none would be guaranteed to correspond with analytic functions, let alone approximate a desired function <span class="math-container">$f(z)$</span> within a sufficiently small error bound.</p>
<p>Next, I considered the viability of taking a Fourier decomposition of the real and imaginary components separately, which could be superposed to recover the original function. While this approach merits consideration for sufficiently simple functions, I noticed that it would fail for cases where separability is more enigmatic. For an example, I turn to the Schwarz-Christoffel transform.</p>
<p><span class="math-container">$$f(z)=\int_{z_{0}}^{z}\frac{A\,dz}{\prod_{j=1}^{n}\left(z-x_{j}\right)^{k_{j}}}+B$$</span></p>
<p>In the above, <span class="math-container">$A,B\in\mathbb{C}$</span> are both taken to be constants. Given the integral representation of the formula, I find that it would present a particular challenge to separate the components for an arbitrary choice of <span class="math-container">$x_{j}$</span>.</p>
| Sam Nead | 1,650 | <p>Suppose that <span class="math-container">$M$</span> is finite volume hyperbolic. Then the mapping class group is finite and there are algorithms to build its multiplication table. On the other hand, there is definitely no overall pattern to these groups - they are “just” the finite quotients of torsion free lattices in <span class="math-container">$\mathrm{SL}(2, \mathbb{C})$</span>.</p>
<p>And this is only the tip of the iceberg... the mapping class groups of surfaces appear when thinking about Seifert fibered spaces, outer automorphism groups of free groups show up when dealing with the doubles of handlebodies, and if you connect sum the above together you get more craziness...</p>
|
4,126,527 | <p>Suppose <span class="math-container">$w_1$</span>, <span class="math-container">$w_2$</span>, ..., <span class="math-container">$w_{n-1}$</span> are the complex roots not equal to <span class="math-container">$1$</span> of <span class="math-container">$z^n-1=0$</span>, where <span class="math-container">$n$</span> is odd.</p>
<p>Show: <span class="math-container">$\frac{1-\bar{w}}{1+w}+\frac{1-w}{1+\bar{w}}=2-w-\bar{w}$</span>.</p>
<p>Hello, I am struggling with this complex number proof. Any help is welcomed; thank you in advance.</p>
<p>I have tried expanding the LHS and comparing it to the RHS but am not able to make them equal; I got this far:
<span class="math-container">$\frac{2-w^2-\bar{w}^2}{2+w+\bar{w}}=2-w-\bar{w}$</span></p>
<p>Part 2 of the question:
Hence show: <span class="math-container">$$\sum_{k=1}^{n-1} \frac{1-\bar{w_k}}{1+w_k} = n$$</span></p>
<p>Any suggestions?</p>
| FeedbackLooper | 423,711 | <p>The idea is that any root of unity satisfies $w\bar{w}=1$, and the main manipulation trick you need is:
<span class="math-container">$$
(w+\bar{w})^2 = w^2 + 2w\bar{w} + \bar{w}^2 = 2 + w^2+\bar{w}^2
$$</span>
so that <span class="math-container">$2-w^2-\bar{w}^2 = 2 - (w^2+\bar{w}^2) = 2 - ( (w+\bar{w})^2-2 ) = 2^2-(w+\bar{w})^2 $</span>
which is a difference of squares. Can you fill the remaining details?</p>
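Both parts can be spot-checked numerically (the choice <span class="math-container">$n=9$</span> is arbitrary; this is a check, not a proof):

```python
# Check the identity (1-conj(w))/(1+w) + (1-w)/(1+conj(w)) = 2 - w - conj(w)
# for each n-th root of unity w != 1, and that the Part 2 sum equals n.
import cmath

n = 9  # any odd n works; -1 is never a root, so no denominator vanishes
roots = [cmath.exp(2j * cmath.pi * k / n) for k in range(1, n)]

for w in roots:
    wc = w.conjugate()
    lhs = (1 - wc) / (1 + w) + (1 - w) / (1 + wc)
    assert abs(lhs - (2 - w - wc)) < 1e-12

total = sum((1 - w.conjugate()) / (1 + w) for w in roots)
assert abs(total - n) < 1e-12
```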
|
1,928,892 | <p>I'm solving a graduate entrance examination problem.
We are required to establish the inequality $\frac{x}{y+z}+\frac{y}{z+x}+\frac{z}{x+y}\ge\frac{3}{2}$ for $x,y,z>0$, using the following result:</p>
<p>for $x,y > 0$, $\frac{x}{y} + \frac{y}{x} \geq 2$ (1), which is easy to prove as it is equivalent to $(x - y)^2 \geq 0$.</p>
<p>But when it comes to an inequality combining $x, y, z$, I got stuck as I've tried to develop the expression into one single fraction and obtain something irreducible.</p>
<p>Any hints ? My intuition tells me that for $x,y,z >0$, any fraction of the form $\frac{x}{y+z}$ is greater than 1/2. As there are three fractions of this kind with mute variables playing symmetrical roles, we get: $1/2 + 1/2 + 1/2 = 3/2$.</p>
<p>I just don't figure out how to play with the result (1).</p>
| Bart Michels | 43,288 | <p>Using the <a href="https://en.wikipedia.org/wiki/Rearrangement_inequality" rel="nofollow">Rearrangement Inequality</a>:</p>
<p>$x,y,z$ and $\frac1{y+z},\frac1{z+x},\frac1{x+y}$ have the same ordering, so
$$(x,y,z)\left(\frac1{y+z},\frac1{z+x},\frac1{x+y}\right)\geq(x,y,z)\left(\frac1{x+y},\frac1{y+z},\frac1{z+x}\right)$$
and
$$(x,y,z)\left(\frac1{y+z},\frac1{z+x},\frac1{x+y}\right)\geq(x,y,z)\left(\frac1{z+x},\frac1{x+y},\frac1{y+z}\right).$$
Adding both inequalities gives
$$2(x,y,z)\left(\frac1{y+z},\frac1{z+x},\frac1{x+y}\right)\geq3.$$</p>
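The inequality $\frac{x}{y+z}+\frac{y}{z+x}+\frac{z}{x+y}\ge\frac32$ established here can be spot-checked on random positive triples (a finite random check with my own choice of ranges, not a proof; equality holds at $x=y=z$):

```python
# Random spot-check of Nesbitt's inequality for positive reals.
import random

random.seed(0)
for _ in range(1000):
    x, y, z = (random.uniform(0.01, 10) for _ in range(3))
    s = x / (y + z) + y / (z + x) + z / (x + y)
    assert s >= 1.5 - 1e-12

# Equality case x = y = z: each fraction is 1/2.
assert abs(3 * (1 / (1 + 1)) - 1.5) < 1e-12
```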
|
2,487,044 | <p>Why are integral elements defined in terms of monic polynomials? Why do we wish to split the cases between non monic polynomials and monic polynomials? I.e, algebraic elements and integral elements. Motivation? </p>
| Ted | 15,012 | <p>Any answer might be a bit arguing from hindsight. One could say that the reason algebraic integers are important is because a lot of theory has been successfully developed using algebraic integers as a concept. </p>
<p>But one way to look at it is that the ring of integers of a number field is a ring which is a <em>finitely generated</em> module over $\mathbb{Z}$, which would not be the case if you included the roots of non-monic polynomials. (@reuns comment above gives an example of this.) Finite-generation is kind of like being discrete, which is an important difference between $\mathbb{Z}$ and $\mathbb{Q}$.</p>
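Ted's finite-generation point can be illustrated by a toy computation (the examples are my own: $\sqrt{2}$, a root of the monic $x^2-2$, versus $1/2$, a root of the non-monic $2x-1$):

```python
# Powers of sqrt(2) all live in the rank-2 Z-module with basis {1, sqrt(2)},
# while powers of 1/2 need ever larger denominators, so Z[1/2] is not a
# finitely generated Z-module.
from fractions import Fraction

# Represent a + b*sqrt(2) as the integer pair (a, b).
def mul_sqrt2(p, q):
    a, b = p
    c, d = q
    return (a * c + 2 * b * d, a * d + b * c)

power = (1, 0)
for _ in range(20):
    power = mul_sqrt2(power, (0, 1))                # multiply by sqrt(2)
    assert all(isinstance(t, int) for t in power)   # coefficients stay integers
assert power == (1024, 0)                           # sqrt(2)^20 = 2^10

denominators = {(Fraction(1, 2) ** n).denominator for n in range(20)}
assert len(denominators) == 20                      # 1, 2, 4, ... grow without bound
```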
|
1,435,718 | <blockquote>
<p>Find all pairs of prime numbers $p , q$ for which:
$$p^2 \mid q^3 + 1 \tag{A}$$
and
$$q^2 \mid p^6 − 1 \tag{B}$$</p>
</blockquote>
<p>The question is from the Bulgaria National Olympiad 2014.</p>
<p><em>I'm looking for any solution I may have missed, and generally any alternative method that might reduce the case work (could combine cases 2.1 and 3.1, I suppose).</em></p>
<hr>
<p>I will split the work into three cases:</p>
<ol>
<li>$p=q\ge2$. </li>
<li>$p>q\ge2$.</li>
<li>$q>p\ge2$.</li>
</ol>
<h2>Case 1</h2>
<p>It is clear that neither (A) nor (B) are met.</p>
<h2>Case 2</h2>
<p>First consider two subcases:</p>
<ol>
<li>$p>q$ and $q\in\{2,3\}$.</li>
<li>$p>q\ge5$.</li>
</ol>
<h3>Case 2.1</h3>
<p>$$\begin{align}
q=2 &\implies p^2\mid9 &\implies p^2=9 &\implies p=3 \\
q=3 &\implies p^2\mid28 &\implies p^2=4 &\implies\text{ no solution, since } p=2<q \\
\end{align}$$</p>
<p>So $\boxed{(p,q)=(3,2)}$ is the only solution for this case.</p>
<h3>Case 2.2</h3>
<p>(A) factorises as $p^2 \mid (q+1)(q^2-q+1)$. Now</p>
<p>$$q^2-q+1=(q+1)(q-2)+3 \implies \gcd(q+1,q^2-q+1)=
\begin{cases} 3,\quad\text{if }3\mid q+1\\1,\quad\text{otherwise}\end{cases}$$</p>
<p>Since $p>5$ is prime, we must have either $p^2\mid q+1$ or $p^2\mid q^2-q+1$. But this is impossible because $p>q\ge5 \implies p^2>q+1\text{ and }p^2>q^2>q^2-q+1$. So there are no solutions here.</p>
<h2>Case 3</h2>
<p>Consider two subcases:</p>
<ol>
<li>$q>p$ and $p\in\{2,3\}$.</li>
<li>$q>p\ge5$.</li>
</ol>
<h3>Case 3.1</h3>
<p>$$\begin{align}
p=2 &\implies q^2\mid63 &\implies q^2=9 &\implies q=3 \\
p=3 &\implies q^2\mid728 &\implies q^2=4 &\implies\text{ no solution, since } q=2<p \\
\end{align}$$</p>
<p>So $\boxed{(p,q)=(2,3)}$ is the only solution for this case.</p>
<h3>Case 3.2</h3>
<p>(B) factorises as $q^2 \mid (p^3+1)(p^3-1)$. Now</p>
<p>$$p^3+1=(p^3-1)+2 \implies \gcd(p^3+1,p^3-1)=
\begin{cases} 2,\quad\text{if }p\text{ is odd}\\1,\quad\text{otherwise}\end{cases}$$</p>
<p>Since $q>5$ is prime, we must have either $q^2\mid p^3+1$ or $q^2\mid p^3-1$. If $q^2\mid p^3+1$, then the same arguments as in case 2.2 can be applied to show that $q^2\mid p+1$ or $q^2\mid p^2-p+1$, neither of which is possible when $q>p$. </p>
<p>So the only remaining possibility is $q^2\mid p^3-1$. This factorises as $q^2\mid (p-1)(p^2+p+1)$. Now</p>
<p>$$p^2+p+1=(p-1)(p+2)+3 \implies \gcd(p-1,p^2+p+1)=
\begin{cases} 3,\quad\text{if }3\mid p-1\\1,\quad\text{otherwise}\end{cases}$$</p>
<p>Since $q>5$ is prime, we must have either $q^2\mid p-1$ or $q^2\mid p^2+p+1$. But this is impossible because $q>p\ge5 \implies q^2>p-1\text{ and }q^2\ge (p+1)^2=p^2+2p+1>p^2+p+1$. So there are no solutions here either.</p>
| Batominovski | 72,152 | <p>Here is a shorter solution. If $p=2$, then $q^2\mid p^6-1=63$ implies that $q=3$. If $p=3$, then $q^2\mid 3^6-1=728$ leads to $q=2$. If $q=2$, then $p^2\mid q^3+1=9$ means $p=3$. If $q=3$, then $p^2\mid q^3+1=28$ gives $p=2$. That is, solutions with $p\leq 3$ or $q\leq 3$ are $(p,q)=(2,3)$ and $(p,q)=(3,2)$. From now on, suppose that $p,q>3$.</p>
<p>Observe that any common factor of two of the numbers $p-1$, $p+1$, $p^2-p+1$, and $p^2+p+1$ divides $6$; since $q>3$, the prime $q$ can divide at most one of them. Thus, $$q^2\mid p^6-1=(p-1)(p+1)(p^2-p+1)(p^2+p+1)$$ implies that $q^2$ divides one of the four numbers $p-1$, $p+1$, $p^2-p+1$, and $p^2+p+1$. First, we assume that $q^2$ divides either $p-1$ or $p+1$. Then,
$$q^2\leq p+1\text{ so that }q<p\,.$$
Now, from $p^2\mid q^3+1=(q-1)(q^2-q+1)$ and $q-1$ is relatively prime to $q^2-q+1$, then $$p^2\mid q-1\text{ or }p^2\mid q^2-q+1\,,\tag{*}$$ but this is impossible as $q<p$. Hence, $q^2$ must divide either $p^2-p+1$ or $p^2+p+1$.</p>
<p>Now, we have $$q^2\leq p^2+p+1<(p+1)^2\,,\text{ whence }q\leq p\,$$
Again, from (*), we obtain
$$p^2\leq q^2-q+1 <q^2\leq p^2\,,$$
which is a contradiction. Hence, there are no other solutions than the two listed in the first paragraph.</p>
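A finite computer search is consistent with this conclusion (the bound $200$ is my own choice; this checks finitely many pairs, while the proof covers all):

```python
# Search all prime pairs (p, q) below 200 satisfying p^2 | q^3 + 1
# and q^2 | p^6 - 1; only (2, 3) and (3, 2) should appear.
def primes_upto(n):
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, ok in enumerate(sieve) if ok]

ps = primes_upto(200)
solutions = [
    (p, q)
    for p in ps
    for q in ps
    if (q ** 3 + 1) % (p * p) == 0 and (p ** 6 - 1) % (q * q) == 0
]
assert solutions == [(2, 3), (3, 2)]
```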
|
76,474 | <p>Use Newton's method to approximate the indicated root of the equation correct to six decimal places. The negative root of $e^x = 4-x^2$</p>
<p>I do not know what a negative root is, nor do I really know what I am supposed to do. I am guessing I should take $\log_e$ of everything.</p>
| ROSE | 71,470 | <p>$e^x=4-x^2$<br>
<strong>Rearrange into root-finding form</strong><br>
$f(x)=4-x^2-e^x=0$<br>
<strong>Take the derivative</strong><br>
$f'(x)=-2x-e^x$</p>
<p>Plug this formula into the graphing calculator:</p>
<p>$x-\dfrac{4-x^2-e^x}{-2x-e^x}$<br><br> (NOTE - this formula can be used with any Newton's method problem when needing to find roots: $x-\dfrac{f(x)}{f'(x)}$)</p>
<p>From the graphing screen, hit 2nd + Trace, select Value, and enter $x=-1$. The $y$-value given on the screen is the "new" $x$-value. Now do 2nd + Trace again, this time typing in the "new" $x$-value; the new $y$-value is the next $x$-value. Keep doing this until the $y$-values stop changing in the sixth decimal place. </p>
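The calculator procedure above is just Newton iteration, $x_{\text{new}} = x - f(x)/f'(x)$. A sketch in Python (the starting guess $x=-1$ matches the suggestion above; the loop count and tolerance are my own choices):

```python
# Newton's method for the negative root of e^x = 4 - x^2.
import math

def f(x):
    return 4 - x * x - math.exp(x)

def fprime(x):
    return -2 * x - math.exp(x)

x = -1.0                        # initial guess
for _ in range(50):
    x_new = x - f(x) / fprime(x)
    if abs(x_new - x) < 1e-10:  # six decimals have long since stabilized
        break
    x = x_new

assert abs(f(x)) < 1e-9         # x is a root to high accuracy
assert -2.0 < x < -1.9          # and it is the negative root
```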
|
390,662 | <p>Let <span class="math-container">$X$</span> be a compact Riemann surface. I would like to find a somehow complete reference for the proof of the so called non-Abelian Hodge correspondence relating Dolbeaut, Betti and Higgs bundle moduli spaces.</p>
<p>I've tried to read the original articles by Hitchin (1987) or Simpson (1990) but it seems to me that I've not found somehow a complete reference showing a precise proof of all the statements involved (for the case of a curve).</p>
<p>Does anyone know a book or notes related to this?</p>
| Sean Lawton | 12,218 | <p>I personally like the notes by Eugene Xia: <a href="https://arxiv.org/pdf/1404.5025.pdf" rel="noreferrer">Abelian and Non-Abelian Cohomology</a> to build intuition.</p>
<p>But for a definitive source, I would read Simpson: <a href="http://www.numdam.org/article/PMIHES_1994__79__47_0.pdf" rel="noreferrer">Moduli of representations of the fundamental group of a smooth projective variety I</a> and <a href="http://www.numdam.org/article/PMIHES_1994__80__5_0.pdf" rel="noreferrer">Moduli of representations of the fundamental group of a smooth projective variety II</a></p>
<p>True, the above results of Simpson assume the Lie group is <span class="math-container">$\mathrm{GL}(n,\mathbb{C})$</span> and the surface is closed for genus at least 2, but understanding those cases of the full non-Abelian Hodge theorem goes a long way. After that, you can navigate the current literature to find that the correspondence holds in the parabolic setting for arbitrary reductive Lie groups <span class="math-container">$G$</span> (look at writings by Oscar Garcia Prada or Peter Gothen for example).</p>
<p>I hope that helps!</p>
<p>P.S. For a well-written detailed exposition of the correspondence in the simplest case see:
<em><a href="https://arxiv.org/pdf/math/0402429.pdf" rel="noreferrer">Rank One Higgs Bundles and Representations of Fundamental Groups of Riemann Surfaces</a></em> by Goldman & Xia. Many of the general features of the theory are already present in this case.</p>
|
27,865 | <p>Both the Laplace transform and the Fourier transform in some sense decode the "spectrum" of a function. The Laplace transform gives a power-series decomposition whereas the Fourier transform gives a harmonic (or loop-based) decomposition.</p>
<p>Are there deep connections between these two transforms? <a href="https://math.stackexchange.com/questions/7301/connection-between-fourier-transform-and-taylor-series">The formulaic connection</a> is clear, but is there something deeper?</p>
<p>(Maybe the answer will involve spectral theory?)</p>
| asmaier | 27,609 | <p>The Laplace transform and the Fourier transform are both special cases of the <a href="http://en.wikipedia.org/wiki/Linear_canonical_transformation" rel="nofollow">linear canonical transformation</a>.</p>
|
12,047 | <p>How to prove the following trigonometric identities ?</p>
<p>1) If $\displaystyle \tan (\alpha) \cdot \tan(\beta) = 1 \text{ then } \alpha + \beta = \frac{\pi}{2}$
</p>
<p>I tried to prove it by using the formula for $\tan(\alpha + \beta)$, but isn't it valid only when $\alpha + \beta \neq \frac{\pi}{2}$?</p>
<p>2) $\displaystyle\sec\theta + \tan \theta = \frac{1}{ \sec\theta - \tan \theta}, \theta \neq (2n+1)\frac{\pi}{2}, n \in \mathbb{Z} $
</p>
<p>For this one I tried substituting the ratios with the sides of a right triangle, but was not able to reach the final result.</p>
<p>These are not my homework, I am trying to learn maths almost on my own, so ...</p>
| Quixotic | 2,109 | <p>For (2)</p>
<p>$$\sec \theta + \tan \theta = \frac{1}{\cos \theta} + \frac{\sin \theta}{\cos \theta}= \frac{1 + \sin \theta}{\cos \theta}$$ $$ = \frac{(1 + \sin \theta)\cdot (1 - \sin \theta)}{\cos \theta \cdot(1 - \sin \theta)}\text{ [multiplying numerator and denominator by } (1 - \sin \theta)\text{]} $$
$$ = \frac{\cos^2 \theta}{ \cos \theta - \sin \theta \cdot \cos \theta} = \frac{1}{\sec \theta - \tan \theta} \text{ (Q.E.D) }$$</p>
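Both identities can be spot-checked numerically (sample angles are arbitrary; for (1), the converse direction $\tan\alpha\tan(\frac{\pi}{2}-\alpha)=1$ is checked):

```python
# Spot-check identity (2), avoiding theta = (2n+1)*pi/2 where sec and tan
# are undefined, and the converse of identity (1).
import math

for theta in [0.1, 0.7, 1.3, 2.5, -0.9, 4.0]:
    sec = 1 / math.cos(theta)
    tan = math.tan(theta)
    assert abs((sec + tan) - 1 / (sec - tan)) < 1e-9

for a in [0.2, 0.5, 1.0, 1.4]:
    assert abs(math.tan(a) * math.tan(math.pi / 2 - a) - 1) < 1e-9
```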
|
838,854 | <p>In page 4 of the following <a href="http://web.mat.bham.ac.uk/D.Kuehn/RamseyGreg.pdf" rel="nofollow">http://web.mat.bham.ac.uk/D.Kuehn/RamseyGreg.pdf</a> the text says</p>
<blockquote>
<p>In any graph the number of vertices with odd degree must be even. For this reason there cannot exist a red 5-regular subgraph of $K_9$ or a blue 3-regular subgraph of $K_9$. This implies that in a complete 2-coloured graph of order nine there must be at least one vertex which is incident to at least six red or at least four blue edges.</p>
</blockquote>
<p>I don't understand why "there cannot exist..." and why in turn "This implies that...". For example, if I take inside $K_9$ a $K_4$ and paint it all blue, don't I get a blue $3$-regular subgraph? It might be that I don't understand what the author means by "blue $3$-regular subgraph". Could someone clarify? </p>
| Vladimir | 154,757 | <p>Hint: (1) $Ax\cdot Ax=|Ax|^2$;</p>
<p>(2) One has
$$
Ax\cdot Ay=\frac14 (A(x+y)\cdot A(x+y)-A(x-y)\cdot A(x-y)).
$$</p>
|
838,854 | <p>In page 4 of the following <a href="http://web.mat.bham.ac.uk/D.Kuehn/RamseyGreg.pdf" rel="nofollow">http://web.mat.bham.ac.uk/D.Kuehn/RamseyGreg.pdf</a> the text says</p>
<blockquote>
<p>In any graph the number of vertices with odd degree must be even. For this reason there cannot exist a red 5-regular subgraph of $K_9$ or a blue 3-regular subgraph of $K_9$. This implies that in a complete 2-coloured graph of order nine there must be at least one vertex which is incident to at least six red or at least four blue edges.</p>
</blockquote>
<p>I don't understand why "there cannot exist..." and why in turn "This implies that...". For example, if I take inside $K_9$ a $K_4$ and paint it all blue, don't I get a blue $3$-regular subgraph? It might be that I don't understand what the author means by "blue $3$-regular subgraph". Could someone clarify? </p>
| goblin GONE | 42,339 | <p>Let $A$ denote an $n \times n$ real matrix.</p>
<blockquote>
<p><strong>Definitions.</strong> Call $A$ <em>dot-product preserving</em> iff $x \cdot y = Ax \cdot Ay$ for all relevant vectors $x$ and $y$. On the other hand, let
us call $A$ <em>length-preserving</em> iff $|x| = |Ax|$ for all relevant
vectors $x$.</p>
</blockquote>
<p>Now suppose $A$ is dot-product preserving. Let $x$ be fixed-but-arbitrary. Then $x \cdot x = Ax \cdot Ax$. Hence $|x|^2 = |Ax|^2$. So $|x|=|Ax|.$</p>
<p>On the other hand, suppose $A$ is length-preserving. Let $x$ and $y$ be fixed-but-arbitrary. Using the <a href="http://en.wikipedia.org/wiki/Polarization_identity" rel="nofollow">polarization identity</a> twice, we have</p>
<p>$$x \cdot y = \frac{1}{4}\left(|x+y|^2-|x-y|^2\right)$$</p>
<p>$$Ax \cdot Ay = \frac{1}{4}\left(|Ax+Ay|^2-|Ax-Ay|^2\right)$$</p>
<p>But since $A$ is length-preserving, we have that the two RHS's are equal (make sure you can show this, it should only take a few steps; note $Ax+Ay=A(x+y)$). Hence so too are the LHS's, i.e. $x \cdot y = Ax \cdot Ay$.</p>
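A numerical illustration of the argument using a rotation matrix, which is length-preserving (the matrix and test vectors are my own data, not from the problem):

```python
# Check that the polarization identity recovers the dot product from
# lengths alone, and that a rotation (length-preserving) therefore
# preserves dot products.
import math, random

random.seed(1)
t = 0.73
A = [[math.cos(t), -math.sin(t)], [math.sin(t), math.cos(t)]]  # rotation

def mat_vec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1], M[1][0] * v[0] + M[1][1] * v[1]]

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

for _ in range(100):
    x = [random.uniform(-5, 5), random.uniform(-5, 5)]
    y = [random.uniform(-5, 5), random.uniform(-5, 5)]
    Ax, Ay = mat_vec(A, x), mat_vec(A, y)
    xpy = [x[0] + y[0], x[1] + y[1]]
    xmy = [x[0] - y[0], x[1] - y[1]]
    pol = 0.25 * (dot(xpy, xpy) - dot(xmy, xmy))   # polarization identity
    assert abs(pol - dot(x, y)) < 1e-9
    assert abs(dot(Ax, Ay) - dot(x, y)) < 1e-9     # dot product preserved
```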
|
2,303,339 | <p>How does this proof work?</p>
<p><strong>Theorem.</strong>$\quad$Let $G$ be a group. Then $G$ has a unique identity. </p>
<p><strong>Proof.</strong>$\quad$Assume that $e$ and $f$ are two identities in $G$. Since $e$ is
an identity, $ef=f$; and since $f$ is an identity, $ef=e$. Thus $e=ef=f$. </p>
<p>I think need to get my understanding of variables sorted out, because when I read the first line of the proof I picture $e$ as an object different from $f$ and it's confusing to then read the conclusion that $e$ and $f$ are equal. Also, how does this show that $G$ has a unique identity?</p>
| user1551 | 1,551 | <p>Here is an answer that is light on computation but conceptually somewhat obscure.</p>
<p>If we perturb $A$ by some $\delta I_2$, both sides of the equality change by the same amount. The similar holds if we perturb $B$ by a scalar matrix. So, we may assume that both $A$ and $B$ are traceless. In this case, the equality reduces to
$$
AB+BA=\operatorname{tr}(AB)I_2\text{ when }\operatorname{tr}(A)=\operatorname{tr}(B)=0.\tag{1}
$$
As both sides are bilinear in $(A,B)$, it suffices to prove $(1)$ on any basis of the subspace of all traceless $2\times2$ matrices. </p>
<p>We want to show that $(1)$ is a <em>universal identity</em>. So, it suffices to consider only the case $\mathbb F=\mathbb C$ (for the reason, see e.g. p.4 of <a href="http://www.math.uconn.edu/~kconrad/blurbs/linmultialg/univid.pdf" rel="nofollow noreferrer">the handout written by Keith Conrad</a>). In particular, it suffices to prove it when $A$ and $B$ are some nonzero scalar multiples of <a href="https://en.wikipedia.org/wiki/Pauli_matrices" rel="nofollow noreferrer">Pauli matrices</a> $\sigma_x,\sigma_y,\sigma_z$. Since the real linear span of $\{-i\sigma_x,-i\sigma_y,-i\sigma_z\}$ is isomorphic to the real algebra of quaternions under the isomorphism $-i\sigma_x\mapsto i,\ -i\sigma_y\mapsto j,\ -i\sigma_z\mapsto k$, equality $(1)$ further reduces to
$$
ab+ba=2\operatorname{Re}(ab),\quad a,b\in\{i,j,k\},\tag{2}
$$
which is completely trivial.</p>
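Identity $(1)$ can also be spot-checked numerically for random traceless $2\times2$ matrices (a sanity check of the universal identity, not a proof):

```python
# Verify AB + BA = tr(AB) * I for random traceless real 2x2 matrices.
import random

random.seed(2)

def mul(A, B):
    return [[A[0][0] * B[0][0] + A[0][1] * B[1][0], A[0][0] * B[0][1] + A[0][1] * B[1][1]],
            [A[1][0] * B[0][0] + A[1][1] * B[1][0], A[1][0] * B[0][1] + A[1][1] * B[1][1]]]

def trace(A):
    return A[0][0] + A[1][1]

for _ in range(100):
    a, b, c, d, e, f = (random.uniform(-3, 3) for _ in range(6))
    A = [[a, b], [c, -a]]          # traceless
    B = [[d, e], [f, -d]]          # traceless
    AB, BA = mul(A, B), mul(B, A)
    t = trace(AB)
    for i in range(2):
        for j in range(2):
            target = t if i == j else 0.0
            assert abs(AB[i][j] + BA[i][j] - target) < 1e-9
```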
|
578,086 | <p>Hey, I've been having trouble finding the limit of the sequence below:</p>
<p>$\lim_{n\to\infty}\frac{\sqrt[n]{n!}}{n}$</p>
<p>Thanks!</p>
| Claude Leibovici | 82,404 | <p>Use the simplest form of Stirling's approximation, that is to say $\log n! \approx n \log n - n$<br>
<a href="http://en.wikipedia.org/wiki/Stirling%27s_approximation" rel="nofollow">http://en.wikipedia.org/wiki/Stirling%27s_approximation</a>.<br>
Taking logarithms, you will easily show that the limit is $1/e$, as shown by Dedalus.</p>
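A numerical check of the slow approach to $1/e$, using <code>lgamma</code> to evaluate $\ln n!$ without overflow (sample values of $n$ are my own choices):

```python
# (n!)^(1/n) / n = exp(ln(n!)/n - ln n) should tend to 1/e.
import math

def term(n):
    return math.exp(math.lgamma(n + 1) / n - math.log(n))

assert abs(term(10_000) - 1 / math.e) < 0.001
# Convergence is slow but monotone in the error:
assert abs(term(100) - 1 / math.e) > abs(term(10_000) - 1 / math.e)
```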
|
2,963,560 | <blockquote>
<p>Given <span class="math-container">$a,b,c\in \Bbb Z$</span>, pairwise distinct, and <span class="math-container">$n\in \Bbb N\setminus\{0\}$</span> prove that
<span class="math-container">$$S(n)=\frac{a^n}{(a-b)(a-c)}+\frac{b^n}{(b-a)(b-c)}+\frac{c^n}{(c-a)(c-b)}\in \Bbb Z.$$</span></p>
<p>Source: tagged as Kurshar 1959 in a text with problems from math contests</p>
</blockquote>
<p><strong>My attempt</strong>: I approached the problem trying first to solve a particular instance, such as <span class="math-container">$n=1$</span>, to get some insight. </p>
<p>In this particular case (<span class="math-container">$n=1$</span>), the proof is straightforward:
<span class="math-container">$$S(1)=\frac{a(b-c)-b(a-c)+c(a-b)}{(a-b)(a-c)(b-c)}$$</span>
<span class="math-container">$$S(1)=\frac{ab-ac-ab+bc+ac-bc}{(a-b)(a-c)(b-c)}=0\in \Bbb Z$$</span>
Then, I tried the avenue of an induction proof, considering that the proposition is true for <span class="math-container">$S(1)$</span> so that assuming that it is also true for <span class="math-container">$S(n-1)$</span> it would imply it is true for <span class="math-container">$S(n)$</span>. But I couldn't make this step to work. </p>
<p>Hints and answers, not necessarily with induction will be appreciated. But if possible with induction, that would be nice. Sorry if this is a dup.</p>
| Hw Chu | 507,264 | <p>Similar to your <span class="math-container">$S(1)$</span> case, write </p>
<p><span class="math-container">$$
S(n) = \frac{a^n(b-c) - b^n(a-c) + c^n(a-b)}{(a-b)(a-c)(b-c)}.
$$</span></p>
<p>Now look at the numerator.</p>
<p><span class="math-container">$$
a^n(b-c) - b^n(a-c) + c^n(a-b) = ab(a^{n-1}-b^{n-1}) - c(a^n-b^n) + c^n(a-b).
$$</span></p>
<p>So <span class="math-container">$(a-b)$</span> is a factor of the numerator. By symmetry so should <span class="math-container">$(b-c)$</span> and <span class="math-container">$(c-a)$</span>.</p>
|
1,179,959 | <p>Taking $v$ as the $x$-axis and $u$ as the $y$-axis, I would like to know whether ${1\over v}+{1\over u}={1\over f}$ has a graph of the form $xy=c^2$.</p>
| Narasimham | 95,860 | <p>Yes, but note that $u=v=0$ at the origin, and that the axes of the $xy=c^2$ form are displaced to the point $(f,f)$.</p>
<p>In fact, K. F. Gauss got rid of the reciprocals to neatly introduce the displaced-axes $x\cdot y$ form, which is useful in optics:</p>
<p>$$ \frac{1}{u} + \frac{1}{v}=\frac{1}{f} $$
$$ u\, v = ( u+v) f $$
$$ ( u-f) (v-f) = f^2 $$
$$ u_1 v_1 = f^2$$</p>
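A quick numerical check of the displaced-axes form (the sample values are arbitrary test data):

```python
# If 1/u + 1/v = 1/f, then (u - f)(v - f) = f^2.
f = 2.0
for u in [3.0, 5.0, 7.5, 100.0]:
    v = 1.0 / (1.0 / f - 1.0 / u)       # solve the lens equation for v
    assert abs((u - f) * (v - f) - f * f) < 1e-9
```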
|
554,003 | <p>How can I find a closed form for the following sum?
$$\sum_{n=1}^{\infty}\left(\frac{H_n}{n}\right)^2$$
($H_n=\sum_{k=1}^n\frac{1}{k}$).</p>
| Felix Marin | 85,343 | <p>$\newcommand{\+}{^{\dagger}}
\newcommand{\angles}[1]{\left\langle\, #1 \,\right\rangle}
\newcommand{\braces}[1]{\left\lbrace\, #1 \,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\, #1 \,\right\rbrack}
\newcommand{\ceil}[1]{\,\left\lceil\, #1 \,\right\rceil\,}
\newcommand{\dd}{{\rm d}}
\newcommand{\down}{\downarrow}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,{\rm e}^{#1}\,}
\newcommand{\fermi}{\,{\rm f}}
\newcommand{\floor}[1]{\,\left\lfloor #1 \right\rfloor\,}
\newcommand{\half}{{1 \over 2}}
\newcommand{\ic}{{\rm i}}
\newcommand{\iff}{\Longleftrightarrow}
\newcommand{\imp}{\Longrightarrow}
\newcommand{\isdiv}{\,\left.\right\vert\,}
\newcommand{\ket}[1]{\left\vert #1\right\rangle}
\newcommand{\ol}[1]{\overline{#1}}
\newcommand{\pars}[1]{\left(\, #1 \,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\pp}{{\cal P}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\vphantom{\large A}\,#2\,}\,}
\newcommand{\sech}{\,{\rm sech}}
\newcommand{\sgn}{\,{\rm sgn}}
\newcommand{\totald}[3][]{\frac{{\rm d}^{#1} #2}{{\rm d} #3^{#1}}}
\newcommand{\ul}[1]{\underline{#1}}
\newcommand{\verts}[1]{\left\vert\, #1 \,\right\vert}
\newcommand{\wt}[1]{\widetilde{#1}}$
$\ds{\sum_{n = 1}^{\infty}\pars{H_{n} \over n}^{2}:\ {\large ?}}$</p>
<blockquote>
<p>$$
\mbox{Note that}\quad
H_{n}=\int_{0}^{1}{1 - t^{n} \over 1 - t}\,\dd t
=-n\int_{0}^{1}\ln\pars{1 - t}t^{n - 1}\,\dd t
$$</p>
</blockquote>
<p>Then,
\begin{align}
&\color{#c00000}{\sum_{n = 1}^{\infty}\pars{H_{n} \over n}^{2}}
=\sum_{n = 1}^{\infty}\bracks{\int_{0}^{1}\ln\pars{1 - x}x^{n - 1}\,\dd x}
\bracks{\int_{0}^{1}\ln\pars{1 - y}y^{n - 1}\,\dd y}
\\[3mm]&=\int_{0}^{1}\int_{0}^{1}
\ln\pars{1 - x}\ln\pars{1 - y}\sum_{n =1}^{\infty}\pars{xy}^{n - 1}\,\dd y\,\dd x
\\[3mm]&=\int_{0}^{1}\ln\pars{1 - x}
\color{#00f}{\int_{0}^{1}{\ln\pars{1 - y} \over 1 - xy}\,\dd y}\,\dd x\tag{1}
\end{align}</p>
<blockquote>
<p>\begin{align}&\color{#00f}{\int_{0}^{1}{\ln\pars{1 - y} \over 1 - xy}\,\dd y}
=\int_{0}^{1}{\ln\pars{y} \over 1 - x\pars{1 - y}}\,\dd y
=\int_{0}^{1}{\ln\pars{y} \over 1 - x + xy}\,\dd y
\\[3mm]&=-\,{1 \over x}\int_{0}^{1}{\ln\pars{y} \over 1 - xy/\pars{x - 1}}\,{x\,\dd y \over x - 1}
=-\,{1 \over x}\int_{0}^{x/\pars{x - 1}}
{\ln\pars{\bracks{x - 1}y/x} \over 1 - y}\,\dd y
\\[3mm]&=-\,{1 \over x}\int_{0}^{x/\pars{x - 1}}{\ln\pars{1 - y} \over y}\,\dd y
={1 \over x}\int_{0}^{x/\pars{x - 1}}{{\rm Li}_{1}\pars{y} \over y}\,\dd y
\end{align}
where $\ds{{\rm Li_{s}}\pars{z}}$ is the <a href="http://en.wikipedia.org/wiki/Polylogarithm">polylogarithm function</a> and
$\ds{{\rm Li_{1}}\pars{z} = -\ln\pars{1 - z}}$. Hereafter, we'll use well-known properties of polylogarithms, as reported in the link above:
\begin{align}&\color{#00f}{\int_{0}^{1}{\ln\pars{1 - y} \over 1 - xy}\,\dd y}
={1 \over x}\int_{0}^{x/\pars{x - 1}}{\rm Li}_{2}'\pars{y}\,\dd y
={1 \over x}\,{\rm Li}_{2}\pars{x \over x - 1}
\end{align}</p>
</blockquote>
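(Added spot-check: expanding $1/(1-xy)$ as a geometric series gives $\int_0^1\frac{\ln(1-y)}{1-xy}\,dy=-\sum_{n\ge0}x^{n}\frac{H_{n+1}}{n+1}$, and the closed form $\frac1x\,\mathrm{Li}_2\!\left(\frac{x}{x-1}\right)$ can be evaluated from the defining series of $\mathrm{Li}_2$. The test value $x=0.3$ is my arbitrary choice; both series converge there.)

```python
def harmonic(n):
    return sum(1.0 / k for k in range(1, n + 1))

def lhs(x, terms=200):
    # series form of int_0^1 ln(1-y)/(1-x*y) dy, valid for |x| < 1
    return -sum(x**n * harmonic(n + 1) / (n + 1) for n in range(terms))

def li2(z, terms=400):
    # dilogarithm series sum_{k>=1} z^k / k^2, valid for |z| <= 1
    return sum(z**k / k**2 for k in range(1, terms + 1))

x = 0.3
left = lhs(x)
right = li2(x / (x - 1)) / x
print(left, right)
```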
<p>Replacing the last result in expression $\pars{1}$:
\begin{align}
&\color{#c00000}{\sum_{n = 1}^{\infty}\pars{H_{n} \over n}^{2}}
=\int_{0}^{1}\ln\pars{1 - x}\,{1 \over x}\,{\rm Li}_{2}\pars{x \over x - 1}\,\dd x
=-\int_{0}^{1}{\rm Li}_{2}'\pars{x}{\rm Li}_{2}\pars{x \over x - 1}\,\dd x
\\[3mm]&=-\int_{0}^{1}{\rm Li}_{2}'\pars{1 - x}
{\rm Li}_{2}\pars{1 - {1 \over x}}\,\dd x
=-\int_{0}^{1}{\rm Li}_{2}'\pars{1 - x}\bracks{-{\rm Li}_{2}\pars{1 - x}
-\half\,\ln^{2}\pars{x}}\,\dd x
\end{align}
where we used
<a href="http://en.wikipedia.org/wiki/Polylogarithm#Dilogarithm">Landen Identity</a>.
\begin{align}
&\color{#c00000}{\sum_{n = 1}^{\infty}\pars{H_{n} \over n}^{2}}
=\half\,{\rm Li}_{2}^{2}\pars{1}
+\half\int_{0}^{1}{\rm Li}_{2}'\pars{1 - x}\ln^{2}\pars{x}\,\dd x
\\[3mm]&={\pi^{4} \over 72}
-\half\color{#00f}{\int_{0}^{1}{\ln^{3}\pars{x} \over 1 - x}\,\dd x}
\quad\mbox{since}\quad{\rm Li}_{2}\pars{1} = {\pi^{2} \over 6}\tag{2}
\end{align}</p>
<blockquote>
<p>Finally, we have to evaluate the integral
\begin{align}&\color{#00f}{\int_{0}^{1}{\ln^{3}\pars{x} \over 1 - x}\,\dd x}
=\int_{0}^{1}\ln\pars{1 - x}\,\bracks{3\ln^{2}\pars{x}\,{1 \over x}}\,\dd x
=-3\int_{0}^{1}{\rm Li}_{2}'\pars{x}\ln^{2}\pars{x}\,\dd x
\\[3mm]&=3\int_{0}^{1}{\rm Li}_{2}\pars{x}\bracks{2\ln\pars{x}\,{1 \over x}}\,\dd x
=6\int_{0}^{1}{\rm Li}_{3}'\pars{x}\ln\pars{x}\,\dd x
\\[3mm]&=-6\int_{0}^{1}{\rm Li}_{3}\pars{x}\,{1 \over x}\,\dd x
=-6\int_{0}^{1}{\rm Li}_{4}'\pars{x}\,\dd x=-6{\rm Li}_{4}\pars{1}
=-6\zeta\pars{4}=-6\,{\pi^{4} \over 90}=\color{#00f}{-\,{\pi^{4} \over 15}}
\end{align}</p>
</blockquote>
<p>Replacing in $\pars{2}$:
\begin{align}
&\color{#66f}{\large\sum_{n = 1}^{\infty}\pars{H_{n} \over n}^{2}}
={\pi^{4} \over 72} - \half\,\pars{-\,{\pi^{4} \over 15}}
=\color{#66f}{\large{17 \over 360}\,\pi^{4}}
\end{align}</p>
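(Added numeric check of the closed form; the cutoff $N=2\times10^{5}$ is an arbitrary choice of mine, and the slowly decaying tail of the partial sum is of order $\ln^{2}N/N$.)

```python
from math import pi

def partial_sum(N):
    """Partial sum of sum_{n>=1} (H_n / n)^2, with H_n updated incrementally."""
    total, h = 0.0, 0.0
    for n in range(1, N + 1):
        h += 1.0 / n
        total += (h / n) ** 2
    return total

approx = partial_sum(200_000)
exact = 17 * pi**4 / 360
print(approx, exact)  # both close to 4.599
```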
|
3,556,334 | <p>Given a rectangle with dimensions <span class="math-container">$10$</span> cm and <span class="math-container">$6$</span> cm, show that for any <span class="math-container">$3$</span> points in the interior of the rectangle, the area of the triangle they determine is less than <span class="math-container">$30$</span> cm<span class="math-container">$^2$</span>.</p>
<p>I draw the diagonals but now I am stuck.</p>
| Aniket Gupta | 751,375 | <p>The area of the rectangle is <span class="math-container">$60 cm^2$</span>. Consider a triangle whose base is one of the sides of the rectangle, with the third vertex located on the opposite side. Clearly the area of this triangle is <span class="math-container">$30cm^2$</span>. But since all vertices of the triangle need to be inside the rectangle, the distance from the base to the opposite vertex (the height) will be reduced, resulting in an area less than <span class="math-container">$30cm^2$</span>. You can extend this argument to a triangle whose base is one diagonal and whose opposite vertex is a vertex of the rectangle: the area of this triangle is also <span class="math-container">$30cm^2$</span>, but the base of an interior triangle has to be shorter than the diagonal, again resulting in a smaller area.</p>
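(Not a proof, just an added numeric illustration of the bound: sampling random triangles with vertices inside the $10\times 6$ rectangle and measuring them with the shoelace formula never produces an area of $30$ cm$^2$ or more. The sample size and seed are arbitrary choices of mine.)

```python
import random

def triangle_area(p, q, r):
    # shoelace formula for the area of a triangle with given vertices
    (x1, y1), (x2, y2), (x3, y3) = p, q, r
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2

random.seed(0)
areas = []
for _ in range(10_000):
    pts = [(random.uniform(0, 10), random.uniform(0, 6)) for _ in range(3)]
    areas.append(triangle_area(*pts))

max_area = max(areas)
print(max_area)  # strictly below 30, i.e. half the rectangle's area
```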
|
479,671 | <p>Let $p$ and $q$ be two distinct primes. Prove that $$p^{q-1} + q^{p-1} \equiv 1 \pmod{pq}.$$ I tried to use Fermat's little theorem and obtained the congruence $p^q + q^p \equiv 0 \pmod{pq}$. From this I don't know how to relate it to the congruence in the question.</p>
| André Nicolas | 6,312 | <p><strong>Hint:</strong> It is congruent to $1$ modulo $p$, and also modulo $q$. Modulo $p$, the first part is congruent to $0$, and the second part is congruent to $1$, by Fermat's Theorem.</p>
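(Added verification sketch: modular exponentiation makes the congruence cheap to check for many prime pairs. The prime list is my arbitrary choice; `pow(b, e, m)` is Python's built-in modular exponentiation.)

```python
def check(p, q):
    """Verify p^(q-1) + q^(p-1) ≡ 1 (mod pq) for distinct primes p, q."""
    return (pow(p, q - 1, p * q) + pow(q, p - 1, p * q)) % (p * q) == 1

primes = [2, 3, 5, 7, 11, 13, 17, 19, 23]
results = [check(p, q) for p in primes for q in primes if p != q]
print(all(results))  # True
```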
|
63,534 | <p>I found in an article <a href="http://dx.doi.org/10.1103/PhysRev.105.776" rel="nofollow">"Imperfect Bose Gas with Hard-Sphere Interaction"</a>, <em>Phys. Rev.</em> 105, 776–784 (1957) the following integral, but I don't know how to solve it. Any hints?</p>
<p>$$\int_0^\infty {\int_0^\infty {\mathrm dp\mathrm dq\frac{\sinh(upq)}{q^2 - p^2}pq} } e^{-vq^2 - wp^2} = \frac{\pi}{4}\frac{u(w - v)}{\left[(w + v)^2-u^2 \right]\left(4wv-u^2\right)^{1/2}}$$</p>
<p>for $u,v,w > 0$.</p>
| Did | 6,179 | <p>Call $I(u,v,w)$ the integral to compute and note that this can be defined only when $4vw>u^2$. Using the definition of $\sinh$ and the parity of the function to be integrated one sees that
$$
4I(u,v,w)=\int_{-\infty}^\infty\int_{-\infty}^\infty{\mathrm dp\mathrm dq}\,\frac{e^{upq}}{q^2 - p^2}pq\, e^{-vq^2 - wp^2},
$$
that is,
$4I(u,v,w)=\partial_uJ(u,v,w)$ with
$$
J(u,v,w)=\iint{\mathrm dp\mathrm dq}\,\frac{e^{upq}}{q^2 - p^2}\, e^{-vq^2 - wp^2}.
$$
The function $J(u,\cdot,\cdot)$ is symmetric and
$$
\partial_wJ(u,v,w)-\partial_vJ(u,v,w)=\iint{\mathrm dp\mathrm dq}\,e^{upq}\,e^{-vq^2-wp^2}.
$$
The exponent in the exponential is a quadratic form in $(p,q)$ and one knows that
$$
\iint e^{-\frac12\xi^*C\xi}\,\text{d}\xi=2\pi\det(C)^{-1/2},
$$
hence
$$
\partial_wJ(u,v,w)-\partial_vJ(u,v,w)=\frac{2\pi}{\sqrt{4vw-u^2}}.
$$
This is enough to recover $J(u,v,w)$, hence $I(u,v,w)$. Since $J(u,\frac12(v+w),\frac12(v+w))=0$ by symmetry, one gets $J(u,v,w)$ as an integral of $\partial_tJ(u,\frac12(v+w)-t,\frac12(v+w)+t)$, that is,
$$
J(u,v,w)=\int\limits_{0}^{(w-v)/2}\frac{2\pi \text{d}t}{\sqrt{4\left(\frac12(v+w)+t\right)\left(\frac12(v+w)-t\right)-u^2}},
$$
which is
$$
J(u,v,w)=\int\limits_{0}^{w-v}\frac{\pi \text{d}t}{\sqrt{s^2-t^2}},\quad
s^2=(v+w)^2-u^2.
$$
Hence,
$$
J(u,v,w)=\pi\text{Arcsin}\left(\frac{w-v}{s}\right).
$$
Differentiating this with respect to $u$ yields finally
$$
4I(u,v,w)=\frac{\pi(w-v)u}{((v+w)^2-u^2)\sqrt{4vw-u^2}}.
$$</p>
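(Added spot-check of the key Gaussian step, $\iint e^{upq-vq^2-wp^2}\,\mathrm dp\,\mathrm dq = 2\pi/\sqrt{4vw-u^2}$, by brute-force quadrature on a truncated grid. The test values $u=v=w=1$ and the grid parameters are arbitrary choices of mine; the integrand is negligible outside $[-L,L]^2$.)

```python
from math import exp, pi, sqrt

def gaussian_double_integral(u, v, w, L=8.0, n=320):
    """Trapezoidal-grid approximation of the double integral of
    exp(u*p*q - v*q^2 - w*p^2) over [-L, L]^2; needs 4*v*w > u^2."""
    h = 2 * L / n
    total = 0.0
    for i in range(n + 1):
        p = -L + i * h
        for j in range(n + 1):
            q = -L + j * h
            total += exp(u * p * q - v * q * q - w * p * p)
    return total * h * h

u, v, w = 1.0, 1.0, 1.0
numeric = gaussian_double_integral(u, v, w)
closed = 2 * pi / sqrt(4 * v * w - u * u)
print(numeric, closed)  # both ≈ 2*pi/sqrt(3)
```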
|
375,025 | <p>How do I solve this simultaneous equation? I know it has multiple solutions, but would anyone be able to show me the exact steps in working these out for future reference? </p>
<p>$$
\begin{cases}
2x +y = 4\\
-6x - 3y = -12
\end{cases}
$$</p>
<p>Thank you...</p>
| lab bhattacharjee | 33,337 | <p>As $2x+y=4-6x-3y=-12$</p>
<p>So, $2x+y=-12,y=-12-2x$</p>
<p>Putting $y=-12-2x$ in $4-6x-3y=-12$</p>
<p>$4-6x-3\{-12-2x\}=-12$</p>
<p>or $40=-12$, which is impossible</p>
<p>So, the simultaneous equation will not have any solution</p>
<p><strong>EDIT</strong> :
Due to the change in the question to $2x+y=4,-6x-3y=-12$</p>
<p>Again, from the 1st equation, $y=4-2x$ </p>
<p>Putting $y=4-2x$ in $-6x-3y=-12$ we get $-12=-12$, which is true for all values of $x$</p>
<p>So, $(x,4-2x)$ will satisfy the given simultaneous equations for all values of $x$</p>
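(Added check: the second equation is $-3$ times the first, so the system is degenerate, and every point on the line $2x+y=4$ solves both equations. A quick sketch over sample integer values:)

```python
def satisfies(x, y):
    # both equations of the system
    return 2 * x + y == 4 and -6 * x - 3 * y == -12

# every point (x, 4 - 2x) on the first line also solves the second equation
results = [satisfies(x, 4 - 2 * x) for x in range(-10, 11)]
print(all(results))  # True
```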
|
1,784,469 | <p>I roll a biased dice 9000 times. the probability of seeing 1,2,3,4,5,6 are $\frac{1}{3}, \frac{1}{12}, \frac{1}{12}, \frac{1}{6}, \frac{1}{6}, \frac{1}{6}$ respectively. I need to use the Central limit theorem to estimate the probability that the total number of 1s that I see is within [2970,3040].</p>
<p>So far, I only know the fact that the random variables Xi of of CLT are each rolls. I'm not sure how I'm supposed to apply the information given. Please help this hopeless soul :(( </p>
| Steven | 315,088 | <p>Define $X_i$ as the indicator of a 1 on the $i^{th}$ toss: $X_i=1$ if the $i^{th}$ toss is a 1, and $X_i=0$ otherwise.</p>
<p>By the central limit theorem, the sum $X=\sum_{i=1}^{9000}X_i$ is approximately normally distributed, with $E(X)=9000\times\frac{1}{3}=3000$.</p>
<p>$Var(X)=9000\times(\frac{1}{3}-(\frac{1}{3})^2)=2000$</p>
<p>$\sigma=\sqrt{Var(X)}=\sqrt{2000}$</p>
<p>Let $T=\frac{X-\mu}{\sigma}$, so that $T\sim N(0,1)$ approximately.</p>
<p>So we want to calculate:</p>
<p>$P(2970<X<3040)=P(\frac{2970-3000}{\sqrt{2000}}<T<\frac{3040-3000}{\sqrt{2000}})$</p>
<p>Using R studio, with command: </p>
<p>pnorm((3040-3000)/sqrt(2000))-pnorm((2970-3000)/sqrt(2000))</p>
<p>We get $P(2970<X<3040)=P(\frac{2970-3000}{\sqrt{2000}}<T<\frac{3040-3000}{\sqrt{2000}})=0.5632858$</p>
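(The same computation in Python, added for readers without R; `math.erf` gives the standard normal CDF directly, so no table lookup is needed.)

```python
from math import erf, sqrt

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

mu, sigma = 3000.0, sqrt(2000.0)
p = phi((3040 - mu) / sigma) - phi((2970 - mu) / sigma)
print(p)  # ≈ 0.5633, matching R's pnorm result
```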
|
1,784,469 | <p>I roll a biased dice 9000 times. the probability of seeing 1,2,3,4,5,6 are $\frac{1}{3}, \frac{1}{12}, \frac{1}{12}, \frac{1}{6}, \frac{1}{6}, \frac{1}{6}$ respectively. I need to use the Central limit theorem to estimate the probability that the total number of 1s that I see is within [2970,3040].</p>
<p>So far, I only know the fact that the random variables Xi of of CLT are each rolls. I'm not sure how I'm supposed to apply the information given. Please help this hopeless soul :(( </p>
| callculus42 | 144,421 | <p>The probability of seeing a one in a single roll is $\frac{1}{3}$. The probability of $\texttt{not}$ seeing a one in a single roll is $1-\frac{1}{3}=\frac{2}{3}$.</p>
<p>Each roll $i$ is <a href="https://en.wikipedia.org/wiki/Bernoulli_distribution" rel="nofollow">Bernoulli distributed</a>. Let's denote this random variable by $X_i$. </p>
<p>Therefore the sum is binomially distributed: $\sum X_i=Y\sim Bin(n,\frac{1}{3})$ with $n=9000$. </p>
<p>Now the approxmation is in general:</p>
<p>$P(Y\leq y)=\Phi\left(\frac{y-\mu}{\sigma}\right)$, where $\Phi(\cdot )$ is the cdf of the standard normal distribution.</p>
<p>$P(2970\leq Y\leq 3040)=P(Y\leq 3040)-P(Y\leq 2969)$</p>
<p>And $\mu=\frac{1}{3}\cdot 9000=3000$, $\sigma=\sqrt{n\cdot p \cdot (1-p)}=\sqrt{9000\cdot\frac{1}{3}\cdot (1-\frac{1}{3})}=\sqrt{2000}$</p>
<p>$P(Y\leq 3040)=\Phi\left(\frac{3040-3000}{\sqrt{2000}}\right)=\Phi(0.8944)$</p>
<p>$P(Y\leq 2969)=\Phi\left(\frac{2969-3000}{\sqrt{2000}}\right)=\Phi(-0.6931$)</p>
<p>You can use <a href="http://www.stat.ufl.edu/~athienit/Tables/Ztable.pdf" rel="nofollow">this table</a> to evaluate the corresponding values (probabilities). </p>
<p>Then subtract the second value from the first.</p>
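(Added cross-check: with `math.lgamma` the exact binomial probability is cheap to compute, and it lands near the normal approximation; the small gap comes mostly from the continuity correction that the plain approximation omits.)

```python
from math import lgamma, exp, log

def binom_pmf(n, k, p):
    """Exact Binomial(n, p) pmf via log-gamma, avoiding factorial overflow."""
    logc = lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
    return exp(logc + k * log(p) + (n - k) * log(1 - p))

n, p = 9000, 1 / 3
exact = sum(binom_pmf(n, k, p) for k in range(2970, 3041))
print(exact)  # ≈ 0.57, close to the normal-approximation value
```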
|
805,902 | <p>I'm struggling to figure out what I'm exactly required to do. The problem states</p>
<p>"Determine the lines through the point $(1, 0)$ that are tangent to the parabola defined by $y = x^2$."</p>
<p>I believe it's a simple question however I've been going around this for quite a bit. </p>
<p>I'll appreciate any kind of help! </p>
<p>Thank you!</p>
| Michael Hardy | 11,667 | <p>Certainly the $x$-axis is one such tangent line, since it passes through the point $(1,0)$ and is tangent to the parabola at its vertex. That much you get just from looking at the picture.</p>
<p>Classical geometry says that if you draw lines parallel to the axis of a parabola through the point of intersection of two tangent lines and through the two points of tangency, then the middle line (through the point of intersection) is exactly halfway between the other two lines parallel to the axis (through the points of tangency).</p>
<p>The $x$-coordinate of the point of intersection is $1$.</p>
<p>The $x$-coordinate of one of the points of tangency is $0$.</p>
<p>So $1$ is halfway between $0$ and what? Clearly $2$.</p>
<p>So the other tangent line is tangent to the parabola at $x=2$ and $y=2^2$.</p>
|
2,781,702 | <p>Honestly, I have no idea if I put the correct tag on this question, and I don't even know where to begin to solve an equation like this:
$$
f(d,n)=\sum_{i=1}^n\binom{d}{i}.
$$</p>
<p>Could someone explain what the "d" over the "i" inside the parenthesis means? I'm attempting to solve for when d is equal to 20, n is equal to 3, but I can't work out what I'm supposed to do here.</p>
<p>Thanks all!</p>
| Ian | 83,396 | <p>${d \choose i}$ is read as "$d$ choose $i$" and is called a binomial coefficient. For $0 \le i \le d$ it is given by the formula $\frac{d!}{i!\,(d-i)!}$.</p>
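(Added worked example for the asker's values $d=20$, $n=3$, using Python's built-in `math.comb`:)

```python
from math import comb

def f(d, n):
    """Sum of binomial coefficients C(d, i) for i = 1..n."""
    return sum(comb(d, i) for i in range(1, n + 1))

value = f(20, 3)
print(value)  # C(20,1) + C(20,2) + C(20,3) = 20 + 190 + 1140 = 1350
```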
|
4,619,164 | <p>So one way to solve this ordinary differential equation is by computing the integral of both sides of <span class="math-container">$$\frac{y'}{y} = 1$$</span>
However, I did it another way and I think I made a mistake, but where is it? What I did is:
<span class="math-container">$$y' = y $$</span>
<span class="math-container">$$ \int y'dx = \int y dx $$</span>
<span class="math-container">$$ \int \frac{dy}{dx}dx = \int y dx $$</span>
<span class="math-container">$$ \int dy = \int y dx $$</span>
<span class="math-container">$$ y = yx + C $$</span>
<span class="math-container">$$ y = \frac{C}{1-x}$$</span></p>
| Átila Correia | 953,679 | <p>Just out of curiosity, you can also apply the integrating factor method:
<span class="math-container">\begin{align*}
y' = y & \Longleftrightarrow y' - y = 0\\\\
& \Longleftrightarrow e^{-x}y' - e^{-x}y = 0\\\\
& \Longleftrightarrow (e^{-x}y)' = 0\\\\
& \Longleftrightarrow e^{-x}y = k\\\\
& \Longleftrightarrow y(x) = ke^{x}
\end{align*}</span>
where <span class="math-container">$k\in\textbf{R}$</span>.</p>
<p>Hopefully this helps :-)</p>
|
1,148,674 | <p>I have always asked myself why this happens. </p>
<p>If $x = 4$, then $\sqrt{x} = 2$, but if I search for the $\sqrt{4}$, I get $2$ & $-2$.</p>
| layman | 131,740 | <p>The problem you are having is in how to interpret square roots. You have probably been taught that when we say $\sqrt{4}$, we are thinking about the "number that when you square it, you get $4$". If you are thinking about it this way, then of course $\sqrt{4}$ could be $2$ or $-2$ since when you square each of those, you get $4$.</p>
<p>But you <strong>should not</strong> think about square root this way. The square root is a <strong>positive number</strong>. So $\sqrt{4}$ is the <strong>positive number</strong> that you square to get $4$, which means $\sqrt{4} = 2$.</p>
<p>Now, when you have the equation $x^{2} = 4$, and you want to <em>solve</em> this equation for $x$, <strong>this is the equation where you ask yourself: "what number squared gives me $4$?"</strong> And in this case, it's both $2$ and $-2$, i.e., $+ \sqrt{4}$ and $- \sqrt{4}$. Notice that here, $\sqrt{4}$ is a <strong>positive number</strong>. When we want to express the $-2$ answer, we write $- \sqrt{4}$. This is because $\sqrt{4} = 2$.</p>
<p>So don't forget that the square root is a positive number. For example, $\sqrt{9} = 3$. But the solutions to $x^{2} = 9$ are $+ \sqrt{9}$ and $- \sqrt{9}$.</p>
|