| qid | question | author | author_id | answer |
|---|---|---|---|---|
2,529,387 | <p><a href="https://i.stack.imgur.com/3M7iS.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3M7iS.jpg" alt="enter image description here"></a></p>
<p>My question is about the fourth problem: how can I do it?</p>
<blockquote>
<p>Q: Given that $a, b \in \mathbb{Z}$ and $a$ is even, show that if $a^2b$ is not divisible by $8$, then $b$ is odd.</p>
</blockquote>
| Dipak Kambale | 505,137 | <p>Since $a$ is an even number, let us write $a=2n$. Hence $a^2\cdot b = (2n)^2\cdot b = 4n^2\cdot b$.</p>
<p>Now, if $b$ were even, say $b=2m$, then $a^2\cdot b = 8n^2m$ would be divisible by $8$. Hence, if $a^2b$ is not divisible by $8$, then $b$ must be odd.</p>
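Since the statement is elementary, a quick brute-force scan builds confidence; this is a minimal Python sketch (my own, not part of the original answer):

```python
# Claim: for even a, if a^2*b is not divisible by 8, then b is odd.
for a in range(-20, 21, 2):            # even values of a
    for b in range(-20, 21):
        if (a * a * b) % 8 != 0:       # a^2*b not divisible by 8 ...
            assert b % 2 == 1, (a, b)  # ... forces b odd

# Note the implication is one-way: b odd does not prevent divisibility by 8;
# e.g. a = 4, b = 1 gives a^2*b = 16, which is divisible by 8.
assert (4 * 4 * 1) % 8 == 0
```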
|
2,529,387 | <p><a href="https://i.stack.imgur.com/3M7iS.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3M7iS.jpg" alt="enter image description here"></a></p>
<p>My question is about the fourth problem: how can I do it?</p>
<blockquote>
<p>Q: Given that $a, b \in \mathbb{Z}$ and $a$ is even, show that if $a^2b$ is not divisible by $8$, then $b$ is odd.</p>
</blockquote>
| Mark Bennet | 2,906 | <p>HINT: I would argue by contradiction, as I think this is easier here: suppose both $a$ and $b$ are even ... can you show that $a^2b$ is then divisible by $8$? </p>
<p>So what gives if $a^2b$ is not divisible by $8$?</p>
|
3,091,090 | <p>I came across this question the other day and have been trying to solve it using some simple algebraic manipulation, without really delving into L'Hospital's Rule or power series, as I have just started learning limit calculations.
We needed to find:
<span class="math-container">$$\lim_{x \to 0} \frac {x\cos x - \sin x}{x^2\sin x}$$</span>
I approached this problem in two different ways and know what the flaw is, however I have been unable to justify why this is so.</p>
<p>Let <span class="math-container">$$f(x) = \frac {x\cos x - \sin x}{x^2\sin x}$$</span>
Therefore, dividing by <span class="math-container">$x$</span>,
<span class="math-container">$$f(x) = \frac {\cos x - \frac{\sin x}{x}}{x\sin x}$$</span>
Using standard limit properties,
<span class="math-container">$$\lim_{x \to 0}f(x) = \frac{\lim_{x \to 0}\cos x - \lim_{x \to 0}\frac{\sin x}{x}}{\lim_{x \to 0}x\sin x}$$</span>
Since <span class="math-container">$$\lim_{x \to 0} \frac {\sin x}{x}=1$$</span>
<span class="math-container">$$\lim_{x \to 0}f(x)= \frac{\lim_{x \to 0}\cos x-1}{\lim_{x \to 0}x\sin x}$$</span></p>
<p>Rewriting the above as <span class="math-container">$$\lim_{x \to 0}\frac{(\cos x -1)x}{x^2\sin x}$$</span> and using the fact that <span class="math-container">$\lim_{x \to 0} \frac {\sin x}{x}=1$</span> and <span class="math-container">$\lim_{x \to 0}\frac{\cos x -1}{x^2}= -\frac{1}{2}$</span>,
we get <span class="math-container">$$\lim_{x \to 0}f(x)=-\frac{1}{2}$$</span></p>
<p>I know that the answer is wrong although I am not able to understand why.
I believe it is because I cannot combine the numerator and denominator into a single limit function. Using a similar trick, I also obtained the limit to be <span class="math-container">$-\frac{3}{8}$</span>.</p>
<p>Questions:</p>
<p>1) Could someone please explain why combining the numerator and denominator into a single limit is wrong? (The reason I even went ahead with such a manipulation was, we are allowed to separate the numerator and denominator while expanding the limit of a rational function so I felt that the reverse should also work). </p>
<p>2) As you can notice, I have not used L'Hospital's Rule or the power series expansions of <span class="math-container">$\sin x$</span> and <span class="math-container">$\cos x$</span>. When I used L'Hospital's Rule, I noticed that I needed to go up to the third or fourth derivative to get rid of the <span class="math-container">$\frac{0}{0}$</span> indeterminate form. So would there be a better way of approaching such limits? </p>
<p>Thank You.</p>
| user3482749 | 226,174 | <p>First, note that both of your answers are incorrect: the actual answer is <span class="math-container">$\frac{-1}{3}$</span>. </p>
<p>For your questions:</p>
<ol>
<li>The problem is actually earlier than that: it is not necessarily the case that <span class="math-container">$\lim \frac{f(x)}{g(x)} = \frac{\lim f(x)}{\lim g(x)}$</span> unless both limits exist, <em>and <span class="math-container">$\lim g(x) \neq 0$</span></em>. In this case, that latter condition isn't satisfied, so this method does not work. In particular, when you write:</li>
</ol>
<p><span class="math-container">$$\lim_{x\to 0}f(x) = \dfrac{\lim_{x\to 0}\cos(x) - \lim_{x\to 0}\frac{\sin(x)}{x}}{\lim_{x\to 0} x\sin(x)},$$</span></p>
<p>what you've actually written is <span class="math-container">$\lim\limits_{x\to 0}f(x) = \frac{\mbox{[something]}}{0}$</span>, which is obviously nonsense, and so everything after that is necessarily flawed. </p>
<ol start="2">
<li>L'Hôpital's rule (he's not a hospital!) is how I'd approach this. There might be a trick to do it quickly, but it's not obvious if there is. </li>
</ol>
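A quick numerical check (a Python sketch of mine, standard library only) is consistent with the value <span class="math-container">$-\frac13$</span>:

```python
import math

def f(x):
    return (x * math.cos(x) - math.sin(x)) / (x**2 * math.sin(x))

# Taylor expansion: x cos x - sin x = -x^3/3 + O(x^5), and x^2 sin x = x^3 + O(x^5),
# so the ratio tends to -1/3 as x -> 0.
for x in (1e-1, 1e-2, 1e-3):
    print(x, f(x))

assert abs(f(1e-3) + 1/3) < 1e-4
```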
|
2,708,071 | <p>Question: Suppose $|x_n - x_k| \le n/k^2$ for all $n$ and $k$. Show that $\{x_n\}_{n=1}^{\infty}$ is Cauchy. </p>
<p>Attempt: To prove this, I have to find $M \in \mathbb N$ such that for $\varepsilon >0$, $n/k^2 < \varepsilon$ for $n,k \ge M$. </p>
<p>Let $\varepsilon > 1/M$. </p>
<p>Then, $n/k^2 \le M/M^2$ (#) $= 1/M < \varepsilon$ for $n,k \ge M$.</p>
<p>I feel (#) is not necessarily true. Is there any way to show (#) is correct? Or could you give me a hint regarding this question?</p>
| Prasun Biswas | 215,900 | <p><strong>Hint:</strong></p>
<p>With $n$ fixed, the map $k\mapsto\dfrac n{k^2}$ is strictly decreasing with limit $0$ as $k\to\infty$. This means, given $\epsilon\gt 0$, for a fixed $n$, there exists $N\in\Bbb N$ such that $\dfrac n{k^2}\lt\epsilon~\forall~k\geq N$</p>
|
2,785,993 | <p>Let</p>
<ul>
<li><p><span class="math-container">$k(0)=11$</span></p>
</li>
<li><p><span class="math-container">$k(1)=1101$</span></p>
</li>
<li><p><span class="math-container">$k(2)=1101001$</span></p>
</li>
<li><p><span class="math-container">$k(3)=11010010001$</span></p>
</li>
<li><p><span class="math-container">$k(4)=1101001000100001$</span></p>
</li>
<li><p>And So on....</p>
<p>I've checked it up to <span class="math-container">$k(120)$</span>, and I didn't find any more primes of this form. Are there any more prime numbers of that form? (I just realized that only <span class="math-container">$k(6n+5)$</span> could be a prime (?))</p>
</li>
</ul>
| Mr Pie | 477,343 | <p>This is not really an answer but might be of some help.</p>
<hr>
<p>According to the "<a href="http://www.softschools.com/math/topics/the_divisibility_rules_3_6_9/" rel="nofollow noreferrer">Divisibility by $3$ Rule</a>," if $n\equiv 1\pmod 3$ then $k(n)$ will not be prime as it will be divisible by $3$. And, it will be <a href="https://www.math.hmc.edu/funfacts/ffiles/10013.5.shtml" rel="nofollow noreferrer">divisible by $11$</a> if $n$ is even.</p>
<p>That leaves only the odd numbers for $n$ (and among those, only $n\not\equiv 1\pmod 3$).</p>
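Both divisibility claims can be spot-checked in a few lines; here is a Python sketch of mine that builds $k(n)$ as a decimal integer (the helper name <code>k</code> is my own, not from the post):

```python
def k(n):
    """k(0) = 11, k(1) = 1101, ...: at step i, append '0' * i followed by '1'."""
    s = "11"
    for i in range(1, n + 1):
        s += "0" * i + "1"
    return int(s)

assert [k(i) for i in range(3)] == [11, 1101, 1101001]

for n in range(40):
    if n % 3 == 1:
        assert k(n) % 3 == 0    # digit sum is n + 2, divisible by 3
    if n % 2 == 0:
        assert k(n) % 11 == 0   # alternating digit sum vanishes
```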
|
2,785,993 | <p>Let</p>
<ul>
<li><p><span class="math-container">$k(0)=11$</span></p>
</li>
<li><p><span class="math-container">$k(1)=1101$</span></p>
</li>
<li><p><span class="math-container">$k(2)=1101001$</span></p>
</li>
<li><p><span class="math-container">$k(3)=11010010001$</span></p>
</li>
<li><p><span class="math-container">$k(4)=1101001000100001$</span></p>
</li>
<li><p>And So on....</p>
<p>I've checked it up to <span class="math-container">$k(120)$</span>, and I didn't find any more primes of this form. Are there any more prime numbers of that form? (I just realized that only <span class="math-container">$k(6n+5)$</span> could be a prime (?))</p>
</li>
</ul>
| XinBae | 562,933 | <p>Here is a Mathematica search to answer your question:</p>
<pre><code>b[1] = 1;
b[2] = 2;
b[n_] := b[n] = b[n - 1] + n - 1;  (* positions of the 1s: 1, 2, 4, 7, 11, ... *)
list[t_] := b /@ Range[t];
Reap@Do[
  a = ReplacePart[Array[0 &, b[t]], Transpose[{list[t]}] -> 1];  (* digit list *)
  c = FromDigits[a, 2];  (* note: the digits are read in base 2 here, not base 10 *)
  If[PrimeQ[c], Sow@c], {t, 100}]
</code></pre>
<p>which yields</p>
<p><code>{Null, {{3, 13,
2713027506953773210808498184692097546276033420315106938029407997308\
25845099036699701989532948734015220469369753358523432961}}}</code></p>
|
3,873,138 | <p>Since we have variable coefficients we will use the Cauchy–Euler method to solve this DE. First we substitute <span class="math-container">$y=x^m$</span> into our given DE. This then gives</p>
<p><span class="math-container">$9x(m(m-1)x^{m-2}) + 9mx^{m-1} = 0$</span></p>
<p>Note that:</p>
<p><span class="math-container">$x^{m-2} = x^{m-1}x^{-1}$</span></p>
<p>Then</p>
<p><span class="math-container">$9x(m(m-1)x^{m-1}x^{-1}) + 9mx^{m-1} = 0$</span></p>
<p><span class="math-container">$9(m(m-1)x^{m-1}) + 9mx^{m-1} = 0$</span></p>
<p><span class="math-container">$9mx^{m-1}((m-1)) + 9mx^{m-1} = 0 \Rightarrow m-1=0$</span></p>
<p><span class="math-container">$m_{1} = 1$</span> so <span class="math-container">$y_{1}=c_{1}x$</span> is our solution, and using reduction of order we get our second solution, which is <span class="math-container">$y_{2}=c_{2}x\ln(x)$</span>; by superposition for homogeneous equations we get our general solution</p>
<p><span class="math-container">$y = c_{1}x + c_{2}x\ln(x)$</span></p>
<p>However I am told that this is wrong and that the answer is <span class="math-container">$y=c_{1} + c_{2}\ln(x)$</span>.</p>
<p>What happened to the factor x?</p>
| E.H.E | 187,799 | <p>Hint:</p>
<p>You can multiply by <span class="math-container">$x$</span> to make it an Euler–Cauchy differential equation; see <a href="https://en.wikipedia.org/wiki/Cauchy%E2%80%93Euler_equation" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Cauchy%E2%80%93Euler_equation</a></p>
|
2,452,084 | <ul>
<li><em>I'm having trouble understanding why the arbitrariness of $\epsilon$ allows us to conclude that $d(p,p')<0$. It seems we could just as well
conclude a value such as $\frac {\epsilon}{100}$, couldn't we? The
other idea that would normally work is the limit (as $n$ approaches
$\infty$, $p$ approaches $p'$), but that would mean we are working with
another sequence, which would have the same problem. Thanks in advance.</em></li>
</ul>
<p>Definition of Convergence</p>
<blockquote>
<p>A sequence $\{ p_n \}$ in a metric space $X$ is said to converge if there is a point $p \in X$ with the following property: For every $ \epsilon>0$ there is an integer $N$ such that $n \ge N$ implies that the distance function satisfies $d(p_n, p) < \epsilon.$</p>
</blockquote>
<p><a href="https://i.stack.imgur.com/tDMkx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tDMkx.png" alt="enter image description here"></a></p>
<p><a href="https://i.stack.imgur.com/TIFil.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TIFil.png" alt="enter image description here"></a></p>
| fleablood | 280,126 | <p>$d(p,p') \ge 0$.</p>
<p>Case 1: $d(p,p') > 0$.</p>
<p>Let $\epsilon = d(p,p') > 0$. Then $d(p,p') < \epsilon = d(p,p')$. That is a contradiction.</p>
<p>Case 2: $d(p,p') = 0$.</p>
<p>Then..... $d(p,p') = 0$.....</p>
|
2,312,096 | <p>How do I compute the following integral for $a^2<1$?
$$\int_0^{2\pi} \dfrac{\cos 2x}{1-2a\cos x+a^2}dx=? $$
I think that:
$$\cos2x =\dfrac{e^{i2x}+e^{-2ix}}{2},
\qquad\cos x =\dfrac{e^{ix}+e^{-ix}}{2}$$
But I cannot. Please help me.</p>
| Chappers | 221,811 | <p>One can factorise the denominator:
$$ 1-2a\cos{x}+a^2 = (e^{ix}-a)(e^{-ix}-a). $$
Then using partial fractions gives
$$ \frac{1}{2}\frac{e^{2ix}+e^{-2ix}}{(e^{ix}-a)(e^{-ix}-a)} = -\frac{a^2+1}{2 a^2} + \frac{a^4+1}{2 a (a^2-1)}\left( -\frac{1}{e^{i x}-a}-\frac{1}{1-a e^{i x}} \right)-\frac{e^{i x}+e^{-i x}}{2 a} $$
Now, since $a^2<1$, the remaining fractions can be expanded in a power series such as $\sum_{k=0}^{\infty} (ae^{ix})^n $. But
$$ \int_0^{2\pi} e^{inx} \, dx = \begin{cases} 2\pi & n=0 \\ 0 & n \neq 0 \end{cases}, $$
so only the constant terms contribute, and hence the integral is
$$ 2\pi \left( -\frac{a^2+1}{2 a^2} + \frac{a^4+1}{2 a (a^2-1)}\left( \frac{1}{a}-1 \right) \right) = \frac{2\pi a^2}{1-a^2}. $$</p>
<hr>
<p>It may be easier to use the formula
$$ \frac{1-a^2}{1-2a\cos{x}+a^2} = 1+2\sum_{k=0}^{\infty} a^k\cos{kx} $$
for $\lvert a \rvert <1$.</p>
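The closed form is easy to confirm numerically; here is a small Python check of mine, using a midpoint rule (which converges very fast for smooth periodic integrands):

```python
import math

def integrand(x, a):
    return math.cos(2 * x) / (1 - 2 * a * math.cos(x) + a * a)

def integral(a, n=20000):
    # Midpoint rule over one period; spectrally accurate for periodic integrands.
    h = 2 * math.pi / n
    return h * sum(integrand((j + 0.5) * h, a) for j in range(n))

for a in (0.1, 0.5, -0.7):
    closed_form = 2 * math.pi * a * a / (1 - a * a)
    assert abs(integral(a) - closed_form) < 1e-8, a
```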
|
949,052 | <p>I am having a hard time understanding the definition of dimension. The dimension of a finite-dimensional vector space is defined as the length of any basis of the vector space. The definition seems easy to understand, but I need examples to get a better feel for dimension, so if anyone can explain more about it, that would be much appreciated!</p>
<p>Thanks!</p>
| Andrew Maurer | 51,043 | <p>Dimension is actually quite a difficult notion to make rigorous -- we want to have a down-and-dirty definition, but it usually turns out that in a sufficiently general context, mathematical objects behave unexpectedly.</p>
<p>When dealing with real numbers, it seems natural to define the dimension of the unit interval to be 1, and the image under any continuous map should somehow "shrink" the dimension. (For instance, if the interval is mapped to a point.) However, Peano showed that there exist "space-filling curves" that map $[0,1]$ continuously <em>onto</em> the unit box $[0,1]^2$. Weirder still, there are spaces (e.g. fractals) that only seem to have a correctly defined dimension (called the Hausdorff dimension) when we allow arbitrary real values (as opposed to integer values). The key fact here is that it's not the <em>size</em> of the set that matters, but how it sits in Euclidean space.</p>
<p>In the world of algebra, functions aren't quite as free as in analysis, so you actually can define dimension in a way that aligns with intuition quite nicely, although the definition is weird. There's the Krull dimension, defined for any ring (and doesn't make sense until you see the geometry).</p>
<p>So all in all, it's a difficult notion to define and this answer (originally a comment) is just to give you a few words that are google-able.</p>
|
1,956,338 | <p>Let $f\in C_b(\mathbb{R})$ and let $g$ be any continuous function on $\mathbb{R}$ (or maybe there has to be a restriction on $g$?). Now let $K$ be a compact subset of $\mathbb{R}$ and $U$ an open subset of $\mathbb{R}$. Then I want to show that the set $$\Phi:=\{t\in\mathbb{R}_+:(e^{t\cdot g}f)(K)\subseteq U\}$$ is open. Hence I have to find for every $t\in\Phi$ an $\varepsilon>0$ such that $s\in\Phi$ whenever $|s-t|<\varepsilon$. Why is this true? Maybe the argument is simple but I don't see it. Thank you in advance.</p>
| s.harp | 152,424 | <p>Let $t\in\Phi$ and write $h=e^{tg}f$. Note $h$ is continuous so bounded on $K$. For $x\in K$:</p>
<p>$$|e^{(t+a) g(x)}f(x)-e^{tg(x)}f(x)|=|e^{ag(x)}h(x)-h(x)|≤\|h\|_K\cdot |1-e^{ag(x)}|≤\|h\|_K |1-e^{|a|\cdot \|g\|_K }|\tag{1}$$
So it is clear, for every $\epsilon>0$ one can choose a $\delta$ independent of $x$ so that if $a\in(-\delta,\delta)$ that then the expression in $(1)$ is smaller than $\epsilon$.</p>
<p>What we will do now is show that there exists an $\epsilon>0$ so that if $y\in h(K)$ then $B_\epsilon(y)\subset U$. Combined with the previous estimate, this implies that there exists a $\delta$ so that $(t-\delta,t+\delta)\subset\Phi$, and we would be finished.</p>
<p>This follows from $\mathbb R-U$ being closed and disjoint from the compact $h(K)$. Namely the infimum:
$$\inf\{d(x,y)\mid x\in h(K), y\in \mathbb R-U\}$$
is realised and actually a minimum. The minimum cannot be zero, since the sets are disjoint.</p>
<hr>
<p>To see that the minimum is realised, consider an $A$ so that $[-A,A]\supset h(K)$ and $[-A,A]$ contains points of $\mathbb R -U$. Then every point outside of $[-2A,2A]$ has a greater distance from $h(K)$ than some point in $[-A,A]$ has from $h(K)$. So we can restrict to the minimal distance being realised on $[-2A,2A]\cap(\mathbb R-U)$, and since that set is compact it must be realised.</p>
|
217,429 | <blockquote>
<p>Let $A\subset\mathbb R$. Show for each of the following statements that it is either true or false.</p>
<ol>
<li>If $\min A$ and $\max A$ exist then $A$ is finite.</li>
<li>If $\max A$ exists then $A$ is infinite.</li>
<li>If $A$ is finite then $\min A$ and $\max A$ do exist.</li>
<li>If $A$ is infinite then $\min A$ does not exist.</li>
</ol>
</blockquote>
<hr>
<p>My attempts so far:</p>
<ol>
<li><p>This statement is wrong. Let $A=[a;b]\cap\mathbb Q\subset\mathbb R$ with $a<b$, where $a,b\in\mathbb Q$. It is obvious that $\min A=a$ and $\max A=b$. Assume now that $A$ is finite; then we would be able to enumerate every element of $A$. However, $\mathbb Q$ is dense in $\mathbb R$, and therefore for all $x_k,y_k\in A$ with $x_k<y_k$ we can find an $r_k\in A$ such that $x_k<r_k<y_k$. E.g. this can be achieved with the arithmetic mean $r_k=(x_k+y_k)/2$. Starting with $[x_1=a;y_1=b]$ one can create infinitely many nested intervals $[x_k;r_k]$ and can therefore find infinitely many elements, in contrast to the assumption that $A$ is finite.</p></li>
<li><p>This statement is wrong. Let $A=(0;n]\cap\mathbb N\subset\mathbb R$ with $n\in\mathbb N$; then $A$ is bounded by $n$ with $\max A=n$. Assume $A$ were infinite; since $|A|=n$, it follows that $A$ is finite, contrary to the assumption that it is infinite.</p></li>
<li><p>???</p></li>
<li><p>This statement is wrong. Let $A=\mathbb N\subset\mathbb R$, which we know is infinite. However, $A$ has a lower bound and even a minimal element, namely $\min A=1$, in contrast to the assumption that it does not exist.</p></li>
</ol>
<hr>
<p>I would like to know whether my attempts for (1), (2) and (4) are plausible or incomplete. Furthermore I need some hints on how to prove (3).</p>
| fgp | 42,986 | <p>All points on the unit circle obey the equation $x^2 + y^2 = 1$. You know $y=-\frac{7}{25}$, and can thus solve for $x$.</p>
<p>In case you forgot, in general, the circle around $(x_c,y_c)$ with radius $r$ is described by $$
(x - x_c)^2 + (y - y_c)^2 = r^2
$$</p>
|
2,592,477 | <p>I'm trying to solve the following exercise but I appear to be stuck. </p>
<p>Prove that every finite solvable group $G$ contains a fully invariant abelian $p$-subgroup for some prime $p$.</p>
<p>I know that the next-to-last derived subgroup is abelian and fully invariant, plus I know that every finite solvable group has a composition series in which every quotient group is of prime order, but I cannot combine these two to get what I want. Any help?</p>
| Nicky Hekster | 9,605 | <p>This solution is for "characteristic" instead of "fully invariant", which is slightly different - see the remark by Prof. D. Holt. <br>The trick is to look at a <em>minimal</em> normal subgroup of $G$. <br>If $M$ is minimal normal and non-trivial then, being a subgroup of $G$, it is also solvable, and we must have $M' \lt M$. Since $M'$ char $M \lhd G$, we have $M' \lhd G$, and by the minimality of $M$ we get $M'=1$; that is, $M$ is abelian. <br>Now let $p$ be a prime dividing $|M|$; then $H=\{m \in M: m^p=1 \}$ is a non-trivial (Cauchy!) characteristic subgroup of $M$ (use that $M$ is abelian!). Again we can conclude that $H=M$, so $M$ must be elementary abelian. Finally, the group you are looking for is the <em>product</em> of all such $M$ for a fixed $p$. This subgroup is clearly characteristic.</p>
|
2,592,477 | <p>I'm trying to solve the following exercise but I appear to be stuck. </p>
<p>Prove that every finite solvable group $G$ contains a fully invariant abelian $p$-subgroup for some prime $p$.</p>
<p>I know that the next-to-last derived subgroup is abelian and fully invariant, plus I know that every finite solvable group has a composition series in which every quotient group is of prime order, but I cannot combine these two to get what I want. Any help?</p>
| Derek Holt | 2,820 | <p>All terms in the derived series of a solvable group are fully invariant. In particular, the last nontrivial term, $H$ say, in the derived series is abelian and fully invariant.</p>
<p>Now, if $H$ is finite, then it is easy to see that any Sylow $p$-subgroup of $H$ is fully invariant in $G$, and is an abelian $p$-group.</p>
|
82,279 | <p>Consider a ten-digit sequence of integers 0 - 9.<br>
The 1st, 4th, and 5th digits are either 7 or 9.<br>
The 3rd and 10th digits are either 2 or 4.<br>
Somewhere in the phone number are 2 zeros, and the sum of all the digits equals 42.<br>
What are all possible such sequences of 10 digits?</p>
<p>Just going by the total number of ways of arranging said numbers, we have $2^5 \cdot 10^3 \cdot 5!$ (for each of the 5 specified digit slots there are 2 choices; of the 5 slots left, 2 digits must be zero, so there is 1 way of picking those, $10^3$ ways of picking the remaining unspecified digits, and $5!$ ways of arranging those digits). From that I guess we can use the method of complements: find the set of all arrangements that do NOT add up to 42 and subtract its size from the total count, but my problem is calculating the cardinality of that set. Is the only way of doing so directly/explicitly? If so then it seems like I have a lot of calculating to do..</p>
| Arturo Magidin | 742 | <p>For the first, it is true that $h$ is differentiable. But if the conclusion is true, then take an example in which <em>neither</em> $f$ nor $g$ are ever equal to $0$. Then the conclusion would be that $f/g$ and $g/f$ are <em>both</em> differentiable increasing. But if $f/g$ is increasing, then $g/f$ has to be decreasing! So the conclusion cannot be always true.</p>
<p>For the second, consider the derivative as a limit, and use the fact that $f(a)=f(a+c)$.</p>
<p>The third one is false. $C'(0)$ represents the marginal cost to start producing something; i.e., how much more it will cost you to produce 1 item than it costs you to produce no items. That's not the fixed costs.</p>
|
242,773 | <p>Did I tackle this implicit differentiation correctly?</p>
<p>$$5x^2+3xy-y^2=5$$</p>
<p>$$10x+3x\dfrac{dy}{dx}+3y-2y\dfrac{dy}{dx}=0$$
$$\dfrac{dy}{dx}(y-2y)=-10x-3z-3y$$</p>
<p>$$\dfrac{dy}{dx}=\dfrac{-10x-3x-3y}{y-2y}$$</p>
| CCC | 258,418 | <p>I am not sure where your $z$ is coming from. </p>
<p>Steps 1 and 2 are correct,</p>
<p>but step 3 onwards, you group the terms incorrectly.</p>
<p>It should be $\frac {dy}{dx}(3x-2y)$</p>
<p>Therefore, it should be $\frac{dy}{dx}=\frac{-10x-3y}{3x-2y}$.</p>
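One way to gain confidence in the corrected formula is a numerical spot check on a branch of the curve through $(1,3)$; this is an illustrative Python sketch of mine (the explicit branch <code>y_plus</code> comes from the quadratic formula, not from the answer):

```python
import math

def y_plus(x):
    # Upper branch of 5x^2 + 3xy - y^2 = 5, via the quadratic formula:
    # y^2 - 3xy + (5 - 5x^2) = 0  =>  y = (3x + sqrt(29x^2 - 20)) / 2
    return (3 * x + math.sqrt(29 * x * x - 20)) / 2

def dydx(x, y):
    # The implicit derivative derived in the answer.
    return (-10 * x - 3 * y) / (3 * x - 2 * y)

x0 = 1.0
y0 = y_plus(x0)                 # (1, 3) lies on the curve
assert abs(y0 - 3.0) < 1e-12

h = 1e-6
numeric = (y_plus(x0 + h) - y_plus(x0 - h)) / (2 * h)   # central difference
assert abs(numeric - dydx(x0, y0)) < 1e-5               # both approximate 19/3
```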
|
377,471 | <p>If $F[x]$ is a polynomial ring, and $f(x), g(x), h(x)$ and $r(x)$ are four polynomials in it, then is it always true that $f(x)=h(x)g(x)+r(x)$ where $\deg(r(x))<\deg(g(x))$, or is this true only when $F[x]$ is a Euclidean domain?</p>
<p>Please note that this question has been edited heavily to make it more coherent. </p>
<p>Thanks in advance.</p>
| Will Orrick | 3,736 | <p>I think that your question gets the logic backwards. I'll assume that you mean for $F$ to be a field and for $F[x]$ to be the ring of polynomials over that field. That $F$ is a field guarantees that we can perform <a href="http://en.wikipedia.org/wiki/Euclidean_division_of_polynomials#Euclidean_division" rel="nofollow">Euclidean division</a> of elements of $F[x]$ using, for example, <a href="http://en.wikipedia.org/wiki/Polynomial_long_division" rel="nofollow">polynomial long division</a>. That, in turn, implies that $F[x]$ is a Euclidean domain.</p>
<p>So in $F[x],$ where $F$ is any field, it is the case that given polynomials $a$ and $b\ne0,$ one can find unique polynomials $q$ and $r$ such that $a=bq+r$ and either $r=0$ or $\deg(r)<\deg(b)$. This property makes $F[x]$ a Euclidean domain. These statements are proved in just about any algebra textbook. See, for example, Section 3.9 in Herstein's <em>Topics in Algebra</em>.</p>
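The existence half of the statement is constructive: long division only ever divides by the leading coefficient of $b$, which is invertible because $F$ is a field. Here is a minimal Python sketch over $F=\mathbb Q$ (the function name and the lowest-degree-first list convention are my own):

```python
from fractions import Fraction

def poly_divmod(a, b):
    """Euclidean division in Q[x]: return (q, r) with a = b*q + r and
    deg(r) < deg(b). Polynomials are coefficient lists, lowest degree first."""
    a = [Fraction(c) for c in a]
    b = [Fraction(c) for c in b]
    q = [Fraction(0)] * max(len(a) - len(b) + 1, 1)
    r = a[:]
    while len(r) >= len(b) and any(r):
        d = len(r) - len(b)
        coef = r[-1] / b[-1]          # b[-1] != 0 is invertible: F is a field
        q[d] = coef
        for i, bc in enumerate(b):
            r[i + d] -= coef * bc
        while r and r[-1] == 0:       # drop the now-zero leading term
            r.pop()
    return q, r

# x^3 + 2x + 1 divided by x^2 + 1: quotient x, remainder x + 1
q, r = poly_divmod([1, 2, 0, 1], [1, 0, 1])
assert q == [0, 1] and r == [1, 1]
```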
|
3,209,722 | <p>I saw in another post on the website a simple proof that <span class="math-container">$$\lim_{n\to\infty} \left( 1+\frac{x}{n} \right)^n = \lim_{m\to\infty} \left( 1+\frac{1}{m} \right)^{mx}$$</span></p>
<p>which consists of substituting <span class="math-container">$n$</span> by <span class="math-container">$mx$</span>. I can see how the equality then holds for positive real numbers <span class="math-container">$x$</span>, yet it isn't obvious to me why it holds for negative <span class="math-container">$x$</span>.</p>
| fGDu94 | 658,818 | <p>For <span class="math-container">$x$</span> negative, we must substitute <span class="math-container">$n$</span> with <span class="math-container">$-mx$</span> with <span class="math-container">$m$</span> positive. We can use the fact that we know what the limit <span class="math-container">$(1-\frac{1}{m})^{-m}$</span> is, and equate this with the usual definition of <span class="math-container">$e$</span>.</p>
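Numerically the identity also checks out for negative <span class="math-container">$x$</span>; a small Python sanity check (my own sketch):

```python
import math

x = -2.0
n = 10**6
m = int(n / -x)                       # substitute n = -m*x, with m > 0

lhs = (1 + x / n) ** n                # (1 + x/n)^n
rhs = ((1 - 1 / m) ** (-m)) ** x      # ((1 - 1/m)^(-m))^x  ->  e^x

assert abs(lhs - math.exp(x)) < 1e-4
assert abs(lhs - rhs) < 1e-4
```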
|
1,466,618 | <p>Find the equation of the locus of the intersection of the lines below<br>
$y=mx+\sqrt{m^2+2}$<br>
$y=-\frac{1}{m}x+\sqrt{\frac{1}{m^2+2}}$</p>
<hr>
<p>By <a href="https://www.desmos.com/calculator/rpif4yiotx" rel="nofollow">graphing</a>, I have got an ellipse as locus : $x^2+\dfrac{y^2}{2}=1$.<br>
The given lines form tangent and normal to above ellipse.</p>
<p>Is there any other nice way to eliminate $m$ without graphing?<br>
I have tried eliminating $m$ by solving the intersection point, but it's looking very messy. Thanks!</p>
| Claude Leibovici | 82,404 | <p>If you solve the intersection point of the two lines, you should get $$x=-\frac{m}{\sqrt{m^2+2}}$$ $$y=\frac{2}{\sqrt{m^2+2}}$$ and then the result you found.</p>
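To double-check without any algebra, one can solve the two lines numerically for several slopes and verify that the intersection lies on the ellipse (a Python sketch; the helper name is my own):

```python
import math

def intersection(m):
    # Solve y = m x + sqrt(m^2 + 2) and y = -x/m + sqrt(1/(m^2 + 2)) for (x, y).
    c1 = math.sqrt(m * m + 2)
    c2 = math.sqrt(1 / (m * m + 2))
    x = (c2 - c1) / (m + 1 / m)   # set the right-hand sides equal and solve for x
    y = m * x + c1
    return x, y

for m in (0.5, 1.0, 2.0, -3.0):
    x, y = intersection(m)
    assert abs(x * x + y * y / 2 - 1) < 1e-12, m
```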
|
4,196,868 | <p><span class="math-container">$(f_n)$</span> is a sequence of continuous, real valued functions on a metric space <span class="math-container">$M$</span>.</p>
<p>It converges pointwise to a <strong>continuous</strong> function <span class="math-container">$f$</span>.</p>
<p>Suppose that <span class="math-container">$(y_m)$</span> is a sequence of points in <span class="math-container">$M$</span>, and it converges to point <span class="math-container">$y\in M$</span>.</p>
<p>Then, <span class="math-container">$\lim_{y_m\to y}[\lim_{n\to\infty} (f_n(y_m))] = \lim_{n\to\infty}[\lim_{y_m\to y} (f_n(y_m))] = f(y)$</span></p>
<p>I think it follows from continuity of <span class="math-container">$f$</span> and the functions <span class="math-container">$f_n$</span>, and the definitions of those limits.</p>
<p>Is this right?</p>
| David | 471,540 | <p>As far as I know, you can only solve this numerically. It has infinite solutions, of course. It's equivalent to solving</p>
<p><span class="math-container">$$x = \tan x,$$</span></p>
<p>and in physics it arises in a number of contexts that we want to solve</p>
<p><span class="math-container">$$x = \tan k x.$$</span></p>
<p>for some real <span class="math-container">$k$</span>. When it does, we usually just plot <span class="math-container">$x, \tan k x$</span> as functions of <span class="math-container">$x$</span> look to see where the curves intersect. You can use numerical methods to find different intersections. For some reasons <span class="math-container">$x = \cot x$</span> doesn't seem to arise as much, but it seems to function pretty similarly.</p>
|
1,397,576 | <p>To me there is a hierarchy where vectors $\subset$ sequences $\subset$ functions $\subset$ operators</p>
<ul>
<li><p>All vectors are sequences, but not all sequences are vectors because
sequences are infinite dimensional</p></li>
<li><p>All sequences are functions, but not all functions are sequences
because functions can do more than just map $\mathbb{N} \to A$ where
$A$ is some set</p></li>
<li><p>All functions are operators, but not all operators are functions
because an operator can map functions to functions, but a function can only map numbers to numbers</p></li>
</ul>
<p>Can someone check if my ideas are reasonable? Does there exist such a hierarchy?</p>
| user251257 | 251,257 | <p>To summarize the comments:</p>
<p>The concepts in their general meaning do not admit a linear hierarchy.</p>
<ul>
<li><p>Vectors are elements of a <em>vector space</em>. For example, $\mathbb R^n$, the set of $\mathbb R$ valued function over a non empty set $X$, $\mathbb R$ as $\mathbb Q$ vector space.</p></li>
<li><p>Sequences are function form $\mathbb N$ into any non empty set $X$. If $X$ is a $K$ vector space, then the $X$ valued sequences become a $K$ vector space by point wise operations.</p></li>
<li><p>Functions are the most general concept. Sequences are functions. Particular function spaces are vector spaces in a natural way.</p></li>
<li><p>Operators are functions between vector spaces (usually over the same scalar field). </p></li>
</ul>
|
2,668,275 | <p>I have two independent, exponential RVs $X$ and $Y$ that both have the same parameter. I am trying to find the distribution of $Y$ given that $X>Y$. So far, I have: </p>
<p>$$P(Y|X>Y) = P(Y=y|X>Y) = P(Y=y, X>Y)/P(X>Y)$$</p>
<p>and I don't know how to proceed at this point. I understand how to get the denominator, but the numerator is really confusing me. Could someone give me a hint?</p>
| Pedro | 23,350 | <p>Let $S$ denote the topologist sine curve, which is a union of a sine graph and an interval. Adjoin to this interval a circle. The points $x$ on the interval and on the circle have $\pi(X,x)=\mathbb Z$, while those points $x'$ on the sine graph have $\pi(X,x') = 0$. </p>
|
2,852,020 | <p>Five identical (indistinguishable) balls are to be randomly distributed into $4$ distinct boxes. What is the probability that exactly one box remains empty?</p>
<p>we can choose the empty box in $4$ ways.
We should fit the $5$ balls now into $3$ distinct boxes in $^7C_5$ ways.
The correct answer must be: $\frac37$ (mcq)
Can you tell me how should I proceed then?</p>
| Phil H | 554,494 | <p>I couldn't see how the previous answer came up with a solution so I did it differently.</p>
<p>$0$ boxes empty has $1,1,1,2$ box contents times $\frac{5!}{2!}$ ball arrangements $= \frac{4!}{3!}\cdot \frac{5!}{2!} = 240$</p>
<p>$1$ box empty has $1,1,3$ and $1,2,2$ box contents and $\frac{5!}{3!}$ and $\frac{5!}{2!\cdot 2!}$ ball arrangements $= (\frac{3!}{2!}\cdot \frac{5!}{3!}+\frac{3!}{2!}\cdot \frac{5!}{2!\cdot 2!})\cdot ^4C_1 = 600$</p>
<p>$2$ boxes empty has $1,4$ and $2,3$ box contents and $\frac{5!}{4!}$ and $\frac{5!}{3!\cdot 2!}$ ball arrangements $=(2!\cdot \frac{5!}{4!}+2!\cdot \frac{5!}{3!\cdot 2!})\cdot ^4C_2 = 180$</p>
<p>$3$ boxes empty has all $5$ in any one of $4$ boxes $= 4$</p>
<p>$P(1~\text{empty}) = \frac{600}{240+600+180+4} = \frac{600}{1024} = 0.58594$</p>
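Under this model (each ball independently choosing one of the $4$ boxes, $4^5 = 1024$ equally likely outcomes), the counts above can be verified by brute force; a quick Python check of mine:

```python
from itertools import product
from fractions import Fraction

counts = {0: 0, 1: 0, 2: 0, 3: 0}
for assignment in product(range(4), repeat=5):   # each ball picks a box: 4^5 = 1024
    empty = 4 - len(set(assignment))             # boxes receiving no ball
    counts[empty] += 1

assert counts == {0: 240, 1: 600, 2: 180, 3: 4}
assert Fraction(counts[1], 4 ** 5) == Fraction(75, 128)   # = 0.58594...
```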
|
107,915 | <p>I randomly place $k$ rooks on an (arbitrarily sized) $N$ by $M$ chessboard. Until only one rook remains, for each of $P$ time intervals we move the pieces as follows:</p>
<p>(1) We choose one of the $k$ rooks on the board with uniform probability. </p>
<p>(2) We choose a direction for the rook, $(N, W, E, S)$, with uniform probability. </p>
<p>(3) We choose a number of squares in which to move the rook along the direction chosen in [2] with uniform probability over the interval consisting of the rook's current position to the edge of the board.</p>
<p>(4) If the rook being moved collides with another piece while being translated in [3], just as in regular chess it will annihilate that piece and remain at the piece's former position.</p>
<p>NOTE - An alternative way of stating [2], [3], and [4] would be to say that the chosen rook samples all possible sets of moves, with uniform probability, and is unable to bypass other rooks without annihilating them and stopping at their former positions.</p>
<p>NOTE 2 - Gerhard Paseman is correct in suggesting that the original formulation for [2] and [3] will bias the rook towards shorter path lengths. This is in part due to the choice of direction in [2] not being weighted by the resulting possible number of choices in [3], and also the over-counting of positions in [3] due to the lack of consideration that there may be a collision. There are also problems with [2] near the board's boundaries where a direction can be chosen in which no move can take place. Instead of [2] and [3], I'll suggest that a better method would be to number all possible position that the chosen rook from [1] can occupy (keeping the collision constraint from [4] in mind), and then use a PRNG to select the next position. </p>
<p>What does the distribution look like for the number of time intervals, $P$, necessary for only a single rook to remain on the board?</p>
| Joseph O'Rourke | 6,094 | <p>To supplement Per Alexandersson's and Aaron Golden's data,
here are two distributions for $4$ rooks on a $4 \times 4$
board (mean: 21.5 moves), and $8$ rooks on an $8 \times 8$ board (mean: 73.6 moves):
<br /> <img src="https://i.stack.imgur.com/bokjE.jpg" alt="4x4"><br />
<br /> <img src="https://i.stack.imgur.com/IczIm.jpg" alt="8x8"><br />
10,000 trials each. Now updated to count only moves that actually move a rook!
Here is a $4 \times 4$ example that took $19$ moves.
<br /> <img src="https://i.stack.imgur.com/SbhBC.jpg" alt="Example"><br /></p>
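For readers who prefer code to experiment with, here is a rough Python Monte Carlo sketch of the same process, following the amended move rule from NOTE 2 in the question (the chosen rook moves to a destination square picked uniformly from all squares it can reach, capturing if that square is occupied). The function and parameter names are mine, not taken from the Mathematica code above:

```python
import random

def simulate(n=4, m=4, k=4, rng=random):
    # place k rooks on distinct squares of an n x m board
    rooks = rng.sample([(r, c) for r in range(n) for c in range(m)], k)
    moves = 0
    while len(rooks) > 1:
        r, c = rooks[rng.randrange(len(rooks))]
        # all reachable squares along the rook's row and column; the first
        # occupied square in each direction is reachable (a capture) but
        # blocks everything beyond it
        dests = []
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            while 0 <= rr < n and 0 <= cc < m:
                dests.append((rr, cc))
                if (rr, cc) in rooks:
                    break
                rr, cc = rr + dr, cc + dc
        dest = rng.choice(dests)
        if dest in rooks:
            rooks.remove(dest)           # capture
        rooks[rooks.index((r, c))] = dest
        moves += 1
    return moves

random.seed(1)
trials = [simulate() for _ in range(2000)]
print(sum(trials) / len(trials))  # roughly 21-22 for 4 rooks on a 4x4 board
```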
|
1,199,044 | <p>What is the general solution $h: \mathbb{R}^2 \rightarrow \mathbb{C}$ (or maybe $h: \mathbb{C}^2 \rightarrow \mathbb{C}$ if necessary), $(x, y) \mapsto h(x, y)$ to the non-linear partial differential equation
$$
h_x^2 + h_y^2 = -1
$$
where the subscript denotes a partial derivative. I tried to approach the problem by looking at limiting cases and using ansatz, but nothing interesting came out.</p>
| Chinny84 | 92,628 | <p>Changing coords
$$
x\to iv\\
y\to iu
$$
We find
$$
\partial_x = -i\partial_v\\
\partial_y = -i\partial_u
$$
Thus you get
$$
\left(-i\partial_vh\right)^2 + \left(-i\partial_uh\right)^2 = -\left(h_v^2 + h_u^2\right) = -1
$$
Thus you get
$$
h_v^2 + h_u^2 = 1
$$
Solutions of the above have the form given by
$$
h^2 = (v-c_1)^2 + (u-c_2)^2
$$
where
$$
c_1^2 +c_2^2 = 1
$$
Leading to
$$
h(x,y) =\sqrt{ (-ix-c_1)^2 + (-iy-c_2)^2}\\
h(x,y) =\pm i\sqrt{(x-ic_1)^2 +(y-ic_2)^2}\\
$$</p>
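One can sanity-check this family numerically with finite differences (a small Python sketch; the sample constants and helper names are mine, and the check works for either sign convention on the constants):

```python
import cmath

C1, C2 = 0.6, 0.8  # sample constants with C1**2 + C2**2 == 1

def h(x, y):
    return 1j * cmath.sqrt((x + 1j*C1)**2 + (y + 1j*C2)**2)

def residual(x, y, eps=1e-6):
    # central finite differences for h_x and h_y
    hx = (h(x + eps, y) - h(x - eps, y)) / (2 * eps)
    hy = (h(x, y + eps) - h(x, y - eps)) / (2 * eps)
    return hx**2 + hy**2   # should be close to -1

print(residual(1.3, 0.7))
```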
|
1,199,044 | <p>What is the general solution $h: \mathbb{R}^2 \rightarrow \mathbb{C}$ (or maybe $h: \mathbb{C}^2 \rightarrow \mathbb{C}$ if necessary), $(x, y) \mapsto h(x, y)$ to the non-linear partial differential equation
$$
h_x^2 + h_y^2 = -1
$$
where the subscript denotes a partial derivative. I tried to approach the problem by looking at limiting cases and using ansatz, but nothing interesting came out.</p>
| doraemonpaul | 30,938 | <p>$h_x^2+h_y^2=-1$</p>
<p>$h_x^2=-h_y^2-1$</p>
<p>$h_x=\pm i\sqrt{h_y^2+1}$</p>
<p>$h_{xy}=\pm\dfrac{ih_yh_{yy}}{\sqrt{h_y^2+1}}$</p>
<p>Let $u=h_y$ ,</p>
<p>Then $u_x=\pm\dfrac{iuu_y}{\sqrt{u^2+1}}$</p>
<p>$u_x\mp\dfrac{iuu_y}{\sqrt{u^2+1}}=0$</p>
<p>Follow the method in <a href="http://en.wikipedia.org/wiki/Method_of_characteristics#Example" rel="nofollow">http://en.wikipedia.org/wiki/Method_of_characteristics#Example</a>:</p>
<p>$\dfrac{dx}{dt}=1$ , letting $x(0)=0$ , we have $x=t$</p>
<p>$\dfrac{du}{dt}=0$ , letting $u(0)=u_0$ , we have $u=u_0$</p>
<p>$\dfrac{dy}{dt}=\mp\dfrac{iu}{\sqrt{u^2+1}}=\mp\dfrac{iu_0}{\sqrt{u_0^2+1}}$ , letting $y(0)=f(u_0)$ , we have $y=\mp\dfrac{iu_0t}{\sqrt{u_0^2+1}}+f(u_0)=\mp\dfrac{iux}{\sqrt{u^2+1}}+f(u)$ , i.e. $u=F\left(y\pm\dfrac{iux}{\sqrt{u^2+1}}\right)$</p>
|
4,261,763 | <p>I was working with this problem:</p>
<p><a href="https://i.stack.imgur.com/AgVpu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AgVpu.png" alt="enter image description here" /></a></p>
<p><span class="math-container">$f(x) = 1, x \in(-\infty,0)\cup(0,\infty)\\
f(x) = -1, x = 0 $</span></p>
<p>The absolute value would yield us the function: <span class="math-container">$|f(x)| = 1, \forall x\in(-\infty,\infty)$</span></p>
<p>Hence, it'd become continuous everywhere.</p>
<p>Thank you for any feedback. And as a side question, is there a way to find more such functions, in an easy way?</p>
| Neptune | 825,557 | <p>In general, take any function <span class="math-container">$g: x \in \Bbb X\mapsto g(x)$</span> whose absolute value is continuous (whether or not <span class="math-container">$g$</span> itself is), a constant <span class="math-container">$c$</span> with <span class="math-container">$|c|=1,\ c\not = 1$</span>, and some proper nonempty subset <span class="math-container">$\Bbb Y$</span> of <span class="math-container">$\Bbb X$</span> such that <span class="math-container">$g$</span> is not identically <span class="math-container">$0$</span> on either <span class="math-container">$\Bbb Y$</span> or <span class="math-container">$\Bbb X \setminus \Bbb Y$</span>. Then you can make your function <span class="math-container">$f$</span> have this property by defining it to be:</p>
<p><span class="math-container">$f(x) = \begin{cases}
g(x) & \text{if $x\in \Bbb{Y}$} \\
cg(x) & \text{if $x \in \Bbb{X \setminus Y}$} \\ \end{cases}$</span></p>
<p>If you are working only in the real numbers, it will be <span class="math-container">$\Bbb X =\Bbb R $</span> and your <span class="math-container">$c$</span> will be <span class="math-container">$-1$</span>.</p>
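As a concrete real-valued instance of this construction (a minimal Python sketch; here $g \equiv 1$, $c=-1$, and $\Bbb Y = \Bbb R\setminus\{0\}$, matching the function in the question):

```python
def f(x):
    # discontinuous at 0: jumps from 1 to -1 and back
    return -1.0 if x == 0 else 1.0

xs = [-1.0, -0.5, 0.0, 0.5, 1.0]
print([f(x) for x in xs])       # [1.0, 1.0, -1.0, 1.0, 1.0]
print([abs(f(x)) for x in xs])  # [1.0, 1.0, 1.0, 1.0, 1.0] -- constant, hence continuous
```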
|
4,528,489 | <p>I hope I could get clarification on a minor detail in the proof to theorem 3.11 (b) in Rudin's Principles of Mathematical Analysis. The theorem and proof are as follows.
<br>Theorem:</p>
<blockquote>
<p>If X is a compact metric space and if {<span class="math-container">$p_n$</span>} is a Cauchy sequence in X, then {<span class="math-container">$p_n$</span>} converges to some point of X.</p>
</blockquote>
<p>Proof:</p>
<blockquote>
<p>Let {<span class="math-container">$p_n$</span>} be a Cauchy sequence in the compact space X. For N = 1,2,3..., let <span class="math-container">$E_N$</span> be the set consisting of <span class="math-container">$p_N$</span>, <span class="math-container">$p_{N+1}$</span>, <span class="math-container">$p_{N+2}$</span>, ... Then <span class="math-container">$$ \lim \limits_{N \to \infty} diam \overline {E_N} = 0$$</span>, by Definition 3.9 and Theorem 3.10 (a). Being a closed subset of the compact space X , each <span class="math-container">$\overline {E_N}$</span> is compact (Theorem 2.35). Also <span class="math-container">$E_N \supset E_{N+1}$</span>, so that <span class="math-container">$\overline{E_N} \supset \overline{E_{N+1}}$</span>...</p>
</blockquote>
<p>Here I am trying to find the reasoning behind the statement that <span class="math-container">$E_N \supset E_{N+1}$</span> implies <span class="math-container">$\overline{E_N} \supset \overline{E_{N+1}}$</span>. My reasoning behind <span class="math-container">$E_N \supset E_{N+1}$</span> is that since <span class="math-container">$ \lim \limits_{N \to \infty} diam E_N = 0$</span>, and since the diameter of <span class="math-container">$E_N$</span> captures how big the set is, as N gets bigger the set becomes smaller. And the reason why <span class="math-container">$\overline{E_N} \supset \overline{E_{N+1}}$</span> is that <span class="math-container">$diam \overline{E_N} = diam E_N$</span> (theorem 3.10 a). Are my reasonings correct?</p>
| Mark Bennet | 2,906 | <p>Well you might begin by observing that if <span class="math-container">$x^3\equiv y^3$</span> then <span class="math-container">$x^3-y^3=(x-y)(x^2+xy+y^2)\equiv 0$</span></p>
<p>Then <span class="math-container">$pq$</span> cannot be a factor of <span class="math-container">$x-y$</span> because <span class="math-container">$x$</span> and <span class="math-container">$y$</span> are too small by hypothesis.</p>
<p>So this means that if <span class="math-container">$x^3\equiv y^3$</span> then either <span class="math-container">$p$</span> or <span class="math-container">$q$</span> (or both) must be a factor of <span class="math-container">$x^2+xy+y^2$</span>.</p>
<p>You haven't said what you already know, but there is an extra condition given on <span class="math-container">$p$</span> and <span class="math-container">$q$</span> and there are standard mathematical techniques for dealing with a quadratic (taking out the factor <span class="math-container">$x-y$</span> has reduced the degree). So you may be able to take it from there.</p>
|
976,617 | <p>Find the supremum and infimum of the set
$B=\left\{ \frac{x}{1+|x|} : x\in \mathbb{R}\right\}$.
To me it seems clear that they will be $1$ and $-1$ respectively, but how do I prove it properly?</p>
| graydad | 166,967 | <p>You can do both in one move. Show for any $x \in \Bbb{R}$ that $$\left |\frac{x}{1+|x|} \right| \leq 1 $$ Then suppose there is some $c>0$ such that $$\left |\frac{x}{1+|x|} \right|\leq c < 1 $$ for all $x$. Show this case is a contradiction, as you can always find some $x$ where $$\left|\frac{x}{1+|x|} \right | >c$$</p>
|
976,617 | <p>Find the supremum and infimum of the set
$B=\left\{ \frac{x}{1+|x|} : x\in \mathbb{R}\right\}$.
To me it seems clear that they will be $1$ and $-1$ respectively, but how do I prove it properly?</p>
| CLAUDE | 118,773 | <p>It is completely apparent that the magnitude of this function is less than one. Now we want to prove that its supremum is $1$, for this issue, suppose that the supremum is $a$, which according to the previous statement it must be less than or equal to 1. Hence for all values of $x$, $\frac{x}{1+|x|}\leq a\rightarrow \lim_{x\rightarrow \infty}\frac{x}{1+|x|}\leq a\rightarrow 1\leq a\rightarrow a=1$. For the infimum the procedure is similar. </p>
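A quick numerical illustration of both facts, that the magnitude stays below $1$ while the function gets arbitrarily close to $\pm 1$ (a Python sketch):

```python
def g(x):
    return x / (1 + abs(x))

# |g(x)| < 1 everywhere, yet g gets arbitrarily close to 1 and -1
print([g(10**k) for k in range(5)])    # 0.5, 0.909..., 0.990..., 0.999..., ...
print([g(-10**k) for k in range(5)])   # the mirror image, approaching -1
assert all(abs(g(x)) < 1 for x in range(-10**6, 10**6, 997))
```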
|
3,496,594 | <p>I have to factorize the polynomial <span class="math-container">$P(X)=X^5-1$</span> into irreducible factors in <span class="math-container">$\mathbb{C}$</span> and in <span class="math-container">$\mathbb{R}$</span>, this factorisation happens with the <span class="math-container">$5$</span>th roots of the unity. </p>
<p>In <span class="math-container">$\mathbb{C}[X]$</span> we have <span class="math-container">$P(X)=\prod_{k=0}^4 (X-e^\tfrac{2ki\pi}{5})$</span>.</p>
<p>In <span class="math-container">$\mathbb{R}[X]$</span> the solution states that by gathering all complex conjugate roots we find that <span class="math-container">$P(X)=(X-1)(X^2-2\cos(\frac{2\pi}{5})X+1)(X^2-2\cos(\frac{4\pi}{5})X+1)$</span>, but I can't figure out how. Another problem I ran into was trying to figure out where the <span class="math-container">$2\cos(\frac{2\pi}{5})$</span> and <span class="math-container">$2\cos(\frac{4\pi}{5})$</span> come from, so I tried these two methods:
The sum of the roots of unity is zero so we have:
<span class="math-container">$1+e^\tfrac{2i\pi}{5}+e^\tfrac{4i\pi}{5}+e^\tfrac{6i\pi}{5}+e^\tfrac{8i\pi}{5}=0$</span></p>
<p>On the circle of <span class="math-container">$5$</span>th roots of unity, P3 and P4 are reflections of each other across the x-axis, and the same goes for P2 and P5; therefore <span class="math-container">$e^\tfrac{6i\pi}{5}=e^\tfrac{-4i\pi}{5}$</span> and <span class="math-container">$e^\tfrac{8i\pi}{5}=e^\tfrac{-2i\pi}{5}$</span>. Afterwards, by using Euler's formula, we find <span class="math-container">$1+2\cos(\frac{2\pi}{5})+2\cos(\frac{4\pi}{5})=0$</span>.</p>
<p>Another method is that <span class="math-container">$\cos(6\pi/5) = \cos(-6\pi/5) = \cos(-6\pi/5 + 2\pi) = \cos(4\pi/5)$</span> <span class="math-container">$\cos(8\pi/5) = \cos(-8\pi/5) = \cos(-8\pi/5 + 2\pi) = \cos(2\pi/5)$</span> </p>
<p>therefore <span class="math-container">$1 + \cos(2\pi/5) + \cos(4\pi/5) + \cos(4\pi/5) + \cos(2\pi/5) = 0$</span> and we find <span class="math-container">$1+2\cos(\frac{2\pi}{5})+2\cos(\frac{4\pi}{5})=0$</span></p>
<p>I don't know if both of these methods are correct on their own, and I don't know if they will help in the factorisation, since I don't know how to go from there and find <span class="math-container">$P(X)=(X-1)(X^2-2\cos(\frac{2\pi}{5})X+1)(X^2-2\cos(\frac{4\pi}{5})X+1)$</span></p>
| Dave | 334,366 | <p>In <span class="math-container">$\mathbb C[x]$</span> we can write it as <span class="math-container">$$p(x)=(x-1)(x-\color{blue}{e^{2\pi i/5}})(x-\color{blue}{e^{-2\pi i/5}})(x-\color{red}{e^{4\pi i/5}})(x-\color{red}{e^{-4\pi i/5}})$$</span> where the blue and red are conjugate root pairs. Multiplying these conjugate root pairs together gives (i.e. multiply the second and third terms together, and then the fourth and fifth together)</p>
<p><span class="math-container">$$\begin{align}p(x)&=(x-1)(x^2-(e^{2\pi i/5}+e^{-2\pi i/5})x+e^{2\pi i/5}e^{-2\pi i/5})(x^2-(e^{4\pi i/5}+e^{-4\pi i/5})x+e^{4\pi i/5}e^{-4\pi i/5}) \\&=(x-1)\left(x^2-2\cos\left(\frac{2\pi}{5}\right)x+1\right)\left(x^2-2\cos\left(\frac{4\pi}{5}\right)x+1\right) \end{align}$$</span></p>
<p>using Euler's formula for <span class="math-container">$e^{2k\pi i/5}+e^{-2k\pi i/5}=2\cos\left(\frac{2k\pi}{5}\right)$</span>.</p>
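The real factorization can be checked numerically against $x^5-1$ at a few sample points, real and complex (a Python sketch):

```python
import math

C2 = 2 * math.cos(2 * math.pi / 5)
C4 = 2 * math.cos(4 * math.pi / 5)

def p(x):
    # real factorization: (x-1)(x^2 - 2cos(2pi/5) x + 1)(x^2 - 2cos(4pi/5) x + 1)
    return (x - 1) * (x**2 - C2*x + 1) * (x**2 - C4*x + 1)

for x in [0.3, -1.7, 2.5, 1 + 2j]:
    assert abs(p(x) - (x**5 - 1)) < 1e-9
print("factorization verified")
```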
|
538,811 | <p>Suppose A(.) is a subroutine that takes as input a number in binary, and takes linear time (that is, O(n), where n is the length (in bits) of the number).
Consider the following piece of code, which starts with an n-bit number x.</p>
<pre><code>while x>1:
    call A(x)
    x=x-1
</code></pre>
<p>Assume that the subtraction takes O(n) time on an n-bit number.</p>
<p>(a) How many times does the inner loop iterate (as a function of n)? Leave your answer in big-O form.</p>
<p>(b) What is the overall running time (as a function of n), in big-O form?</p>
<p>(a) O($n^2$)</p>
<p>(b) O($n^3$)</p>
<p>Is this correct? Can someone concur, please? The way I think about it is that the loop has to compute two steps each time it cycles through, and it will cycle through x times, each time subtracting 1 from the n-bit number until x reaches 0. And for part (b), since A(.) takes time O(n), we multiply that by the time it takes to execute the loop, and we then have the overall running time. If I reasoned incorrectly or did the problem wrong, can someone please correct me and tell me what I did wrong?</p>
| ashley | 50,188 | <p>Supposing your code is as follows-- so that the inner-loop is both lines: </p>
<pre><code>while (x>1) {
call A(x)
x=x-1
}
</code></pre>
<p>the statements <strong>A(x)</strong> & <strong>x=x-1</strong> run <strong>at O(n)</strong> time each, at each step
of the loop-- asymptotically adding up to <strong>O(n)</strong>. </p>
<p>The loop iterates <strong>x</strong> times-- that is directly proportional to the representation of <strong>n</strong> in decimal-- so the <strong>while</strong> statement iterates <strong>O(n)</strong>-- times. </p>
<p>The entire code executes in <strong>O(n)*O(n)=O($n^2$)</strong>. </p>
|
2,070,739 | <p>The following paragraph appears on page 42 in the book <em>Rational Number Theory in the 20th Century: From PNT to FLT</em> (Par Wladyslaw Narkiewicz):</p>
<blockquote>
<p>The fact that the strip <span class="math-container">$0<\Re{s}<1/2$</span> contains infinitely many zeros of the zeta-function follows from the formula for the number of zeros lying in the rectangle <span class="math-container">$0<\Re{s}<1/2$</span>, <span class="math-container">$0<\Im{s}<T$</span>, conjectured by Riemann and established by H. von Mangoldt in 1895: <span class="math-container">$$N(T)=\frac{1}{2\pi}T\log\left(\frac{T}{2\pi}\right)-\frac{T}{2\pi}+R(T)$$</span> with <span class="math-container">$R(T)=O(\log^2T)$</span>.</p>
<p>(<a href="https://i.stack.imgur.com/lI1pL.png" rel="noreferrer">image of page</a>)</p>
</blockquote>
<p>Wouldn't this contradict the Riemann hypothesis?</p>
| Semiclassical | 137,524 | <p>Looking at some other sources, it appears that to be a typo: It should be "$0<\Re s<1$" not "$0<\Re s<1/2$", i.e. the number of zeros in the critical strip. </p>
<p>In fact, no non-trivial zeros of the Riemann Zeta function occur outside the critical strip, so this restriction is superfluous i.e. the formula gives the number of zeroes in $0 <\Im s<T$. It is in this form that <a href="https://en.wikipedia.org/wiki/Riemann-von_Mangoldt_formula" rel="nofollow noreferrer">Wikipedia</a> and <a href="http://mathworld.wolfram.com/Riemann-vonMangoldtFormula.html" rel="nofollow noreferrer">Mathworld</a> state the Riemann-von Mangoldt formula. Technical sources can be found in both links.</p>
|
2,070,739 | <p>The following paragraph appears on page 42 in the book <em>Rational Number Theory in the 20th Century: From PNT to FLT</em> (Par Wladyslaw Narkiewicz):</p>
<blockquote>
<p>The fact that the strip <span class="math-container">$0<\Re{s}<1/2$</span> contains infinitely many zeros of the zeta-function follows from the formula for the number of zeros lying in the rectangle <span class="math-container">$0<\Re{s}<1/2$</span>, <span class="math-container">$0<\Im{s}<T$</span>, conjectured by Riemann and established by H. von Mangoldt in 1895: <span class="math-container">$$N(T)=\frac{1}{2\pi}T\log\left(\frac{T}{2\pi}\right)-\frac{T}{2\pi}+R(T)$$</span> with <span class="math-container">$R(T)=O(\log^2T)$</span>.</p>
<p>(<a href="https://i.stack.imgur.com/lI1pL.png" rel="noreferrer">image of page</a>)</p>
</blockquote>
<p>Wouldn't this contradict the Riemann hypothesis?</p>
| Chappers | 221,811 | <p>Yes, if it were correct, the Riemann hypothesis would be false. It's definitely a typo, however. You can see, for example, <a href="http://www.math.umn.edu/~garrett/m/mfms/notes_2013-14/02c_counting_zeros_of_zeta.pdf" rel="nofollow noreferrer">here</a> for a fairly detailed proof; suffice to say that nothing can be done to cut the region of validity down effectively, due to the difficult nature of the $\zeta$-function's behaviour in the critical strip.</p>
|
633,799 | <p>I am a little confused about the basic definition of inclusion.</p>
<p>I understand that, for example, $\{4\}\subset\{4\}$.</p>
<p>I also understand that $4\in\{4\}$, and that it is false to say that $\{4\}\in\{4\}$.</p>
<p>However, is it possible to say that $4\subset\{4\}$?</p>
| Alex R. | 119,906 | <p>No, because $4$ is not a set, but an element.</p>
|
73,550 | <p>I am learning a bit about <a href="http://en.wikipedia.org/wiki/PGF/TikZ" rel="nofollow noreferrer">TikZ</a> and found a nice feature in its graphics, that I am having hard time duplicating with <code>Graphics3D</code>. It is making a <code>Cylinder</code>, where the bottom will have part of its edge, that is behind the current view, show up as dashed lines. Ofcourse the <code>Cylinder</code> will have to be a little transparent to see the edge (using <code>Opacity</code>).</p>
<p>Another obstacle I found, is that one can't <code>Inset</code> 3D object inside another 3D object in <em>Mathematica</em>. I hope this will become possible in future versions.</p>
<p>Here is the <a href="https://tex.stackexchange.com/questions/182976/creating-cylinder-with-bottom-node-shape-in-tikz-pgf">latex question</a> that shows how this is done in TikZ. Here is screen shot of the final shape (the letter <code>A</code> is not needed)</p>
<p><img src="https://i.stack.imgur.com/tiycv.png" alt="Mathematica graphics"></p>
<p>This is what I tried: Make cylinder, remove its EdgeForm, make another cylinder to use for the bottom part, which is very thin, and have its edge be dashed. </p>
<p>But this does not really solve the problem, as I need to have the edge that is "behind" the current view be dashed. </p>
<p>The reason for asking, is not just for fun, but this will actually make the disk appear more real if it is possible to make the behind view edge dashed (or other color) from the front facing edge. So this can be useful feature for many other 3D objects making.</p>
<pre><code>g1 = Graphics3D[{Opacity[.5], EdgeForm[], Yellow,
Cylinder[{{0, 0, 0}, {0, 0, 1}}, 1]}]
</code></pre>
<p><img src="https://i.stack.imgur.com/HetxL.png" alt="Mathematica graphics"></p>
<pre><code>g2 = Graphics3D[{EdgeForm[Directive[Thin, Dashed, Red]], FaceForm[],
Cylinder[{{0, 0, 0}, {0, 0, .01}}, 1]}]
</code></pre>
<p><img src="https://i.stack.imgur.com/xIILw.png" alt="Mathematica graphics"></p>
<p>Now combined</p>
<pre><code> Graphics3D[{First@g1, First@g2}]
</code></pre>
<p><img src="https://i.stack.imgur.com/CP2y5.png" alt="Mathematica graphics"></p>
<p>Close, but no cigar. </p>
| Michael E2 | 4,999 | <p>Well, it was fairly quick for such a simple figure. David Carraher's solution is simpler, except that the envelope is not drawn.</p>
<pre><code>DynamicModule[{vv = {0, 0, 1}, vp = {3.2, 2., 4.}},
Graphics3D[{{Opacity[.5], EdgeForm[], Yellow,
Cylinder[{{0, 0, 0}, {0, 0, 1}}, 1]},
{Directive[Thick, Dashed, Red],
Line[Table[{Cos[t], Sin[t], 0}, {t, 0, 2 Pi, 2 Pi/60}]],
Line[Table[{Cos[t], Sin[t], 1}, {t, 0, 2 Pi, 2 Pi/60}]]},
{Directive[Thick, Red],
Dynamic@
With[{angle = ArcTan[vp[[1]], vp[[2]]], width = ArcCos[1/Norm[vp]]},
{If[vp[[3]] > 0,
Line[Table[{Cos[t], Sin[t], 0}, {t, angle - width, angle + width, width/30}]],
Line[Table[{Cos[t], Sin[t], 0}, {t, 0, 2 Pi, 2 Pi/60}]]],
If[vp[[3]] < 0,
Line[Table[{Cos[t], Sin[t], 1}, {t, angle - width, angle + width, width/30}]],
Line[Table[{Cos[t], Sin[t], 1}, {t, 0, 2 Pi, 2 Pi/60}]]],
Line[Append[Through[{Cos, Sin}[angle - width]], #] & /@ {0, 1}],
Line[Append[Through[{Cos, Sin}[angle + width]], #] & /@ {0, 1}]}
]
}
}, ViewPoint -> Dynamic[vp], ViewVertical -> Dynamic[vv]]
]
</code></pre>
<p><img src="https://i.stack.imgur.com/Bps9D.png" alt="Mathematica graphics"></p>
<p>The dashed lines change as the object is rotated in the front end.</p>
|
2,604,206 | <p>Can anyone provide links to a concrete proof? Intuitively, the two-dimensional real space is infinite. so there should be infinitely many subspaces. But how do I go about a proof?</p>
| Community | -1 | <p>You should use your intuition from Euclidean geometry.</p>
<p>If you fix a point $O$ in a geometrical ($2$-dimensional!) plane, and identify every geometrical vector $v=\vec{OA}$ with its ending point $A$, then:</p>
<ul>
<li>The whole $2$-dimensional vector space is identified with the whole plane;</li>
<li>Any $1$-dimensional subspace is identified with a <em>line</em> through $O$. (That is why it is obviously infinitely many of them.)</li>
<li>The unique $0$ dimensional subspace is identified with the $1$-element set consisting of $O$ as its only element.</li>
</ul>
<p>Hope this clears matters up a bit.</p>
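In coordinates, the infinitude of these lines through $O$ is easy to exhibit (a small Python sketch; direction vectors $(1,t)$ for distinct slopes $t$ span pairwise distinct $1$-dimensional subspaces, since two vectors span the same line exactly when their determinant vanishes):

```python
from fractions import Fraction

def same_line(u, v):
    # (a,b) and (c,d) span the same 1-dimensional subspace iff ad - bc = 0
    return u[0] * v[1] - u[1] * v[0] == 0

dirs = [(Fraction(1), Fraction(t)) for t in range(100)]
distinct = all(not same_line(dirs[i], dirs[j])
               for i in range(len(dirs)) for j in range(i + 1, len(dirs)))
print(distinct)  # True: 100 slopes give 100 distinct lines, and so on without bound
```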
|
637,898 | <p>I have to find the number of irreducible factors of $x^{63}-1$ over $\mathbb F_2$ using the $2$-cyclotomic cosets modulo $63$.</p>
<p>Is there a way to see how many cyclotomic cosets there are and what their cardinalities are that is faster than direct computation?</p>
<p>Thank you.</p>
| user91500 | 91,500 | <p>Note that $x^{p^n}-x\in\mathbb{Z}_p[x]$ equals the product of all monic irreducible polynomials over $\mathbb{Z}_p$ whose degree $d$ satisfies $d\mid n$. Suppose $w_p(d)$ is the number of monic irreducible polynomials of degree $d$ over $\mathbb{Z}_p$; then we have
$$p^n=\sum_{d|n}dw_p(d)$$
now use <strong>Mobius Inversion Formula</strong> to obtain
$$w_p(n)=\frac1{n}\sum_{d|n}\mu(\frac{n}{d})p^d.$$
use above identity to obtain
$$w_p(1)=p$$
$$w_p(q)=\frac{p^q-p}{q}$$
$$w_p(rs)=\frac{p^{rs}-p^r-p^s+p}{rs}$$
where $q$ is a prime number and $r,s$ distinct prime numbers. </p>
<p>Now you need to calculate $w_2(1)+w_2(2)+w_2(3)+w_2(6)\color{#ff0000}{-{1}}$ (the $-1$ accounts for the factor $x$, since $x^{63}-1=(x^{64}-x)/x$). By using the above formulas you can see that the final answer is $13$. </p>
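Alternatively, the count can be read off directly from the $2$-cyclotomic cosets modulo $63$, as the question asks: the number of cosets equals the number of irreducible factors of $x^{63}-1$ over $\mathbb{F}_2$, and the coset sizes give the factor degrees. A short Python sketch:

```python
def cyclotomic_cosets(n, q=2):
    # partition {0, ..., n-1} into orbits under multiplication by q mod n
    seen, cosets = set(), []
    for s in range(n):
        if s in seen:
            continue
        coset, t = [], s
        while t not in coset:
            coset.append(t)
            seen.add(t)
            t = (t * q) % n
        cosets.append(coset)
    return cosets

cosets = cyclotomic_cosets(63)
print(len(cosets))                     # 13 irreducible factors
print(sorted(len(c) for c in cosets))  # factor degrees: [1, 2, 3, 3, 6, ...]
```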
|
1,811,109 | <p>How can we cause this relation to be true?</p>
<blockquote>
<p>$$x \sin\theta + y \cos\theta = \sqrt{ x^2 + y^2 } \tag{$\star$}$$</p>
</blockquote>
<p>I know the identity</p>
<p>$$x \sin\theta + y \cos\theta = \sqrt{x^2+y^2}\; \sin\left(\theta + \operatorname{atan}\frac{y}{x}\right)$$
What can make the sine part "$1$" (or just approximately "$1$") so that $(\star)$ holds?</p>
| Ng Chung Tak | 299,599 | <p>The equality holds if $$(\sin \theta, \cos \theta)=\left( \frac{x}{\sqrt{x^2+y^2}},\frac{y}{\sqrt{x^2+y^2}} \right)$$</p>
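Numerically, this choice of $\theta$ can be written with <code>atan2</code> (a Python sketch; for $(x,y)\neq(0,0)$, $\theta=\operatorname{atan2}(x,y)$ gives $\sin\theta = x/\sqrt{x^2+y^2}$ and $\cos\theta = y/\sqrt{x^2+y^2}$):

```python
import math

def theta_for(x, y):
    # atan2(x, y) is the angle whose sine is proportional to x and cosine to y
    return math.atan2(x, y)

x, y = 3.0, 4.0
t = theta_for(x, y)
print(x * math.sin(t) + y * math.cos(t), math.hypot(x, y))  # both 5.0
```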
|
304,209 | <p>I am trying to learn weak derivatives. In that, we call <span class="math-container">$\mathbb{C}^{\infty}_{c}$</span> functions as test functions and we use these functions in weak derivatives. I want to understand why these are called <em>test functions</em> and why the functions with these properties are needed. I have some idea about these but couldn't understand them properly.</p>
<p>Also, I'll be happy if anyone can suggest a good reference on this topic and on Sobolev spaces.</p>
| liuyao | 133,582 | <p>Specifically <em>why</em> it is called test function. (Essentially not different from another answer.)</p>
<p>Strichartz liked to introduce the concept with physical intuition. </p>
<p><a href="https://books.google.com/books?id=T7vEOGGDCh4C&lpg=PP1&dq=strichartz%20distribution&pg=PA1#v=onepage&q=temperature&f=false" rel="nofollow noreferrer">https://books.google.com/books?id=T7vEOGGDCh4C&lpg=PP1&dq=strichartz%20distribution&pg=PA1#v=onepage&q=temperature&f=false</a></p>
<p>Suppose you want to measure the temperature at a point in the room. You hold out your thermometer, but the tip of it is actually a blob of glass with some liquid inside, so are you really measuring the temperature at any actual point? At best it is a weighted average in a neighborhood of the point.</p>
|
987,054 | <p>Prove that the sequence
$$b_n=\left(1+\frac{1}{n}\right)^{n+1}$$
is decreasing.</p>
<p>I have calculated $b_n/b_{n-1}$ and obtained:
$$\left(1+\frac{1}{n}\right)\left(1-\frac{1}{n^2}\right)^{n}$$
But I can't go on.</p>
<p>Any suggestions please?</p>
| Eclipse Sun | 119,490 | <p>Since the GM-HM inequality holds for any positive numbers:$$\sqrt[n]{{a_1}{a_2}\cdots {a_n}}\ge \frac{n}{a_1^{-1}+a_2^{-1}+\cdots +a_n^{-1}},$$we have:$$\sqrt[n+2]{\underbrace{\frac{n+1}n\frac{n+1}n\cdots \frac{n+1}n}_{n+1\text{ times}}\cdot 1}\ge \frac{n+2}{\frac n{n+1}+\frac n{n+1}+\cdots +\frac n{n+1}+1}.$$Hence, raising both sides to the $(n+2)$-th power,$$\left(1+\frac 1n\right)^{n+1} \ge \left(1+\frac 1{n+1}\right)^{n+2}.$$</p>
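The claimed monotonicity is easy to check numerically for small $n$ (a Python sketch):

```python
def b(n):
    return (1 + 1/n) ** (n + 1)

vals = [b(n) for n in range(1, 60)]
print(vals[:4])  # 4.0, 3.375, 3.160..., 3.051... -- strictly decreasing toward e
assert all(vals[i] > vals[i + 1] for i in range(len(vals) - 1))
```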
|
1,620,512 | <p>$\mathcal P_3(\mathbb R)$ is the set of polynomials with degree at most $3$ with coefficients in $\mathbb R$.</p>
<p>In the last paragraph it says $U$ cannot be extended to a basis of $\mathcal P_3(\mathbb R)$. I do not understand why not. Why would we get a list with length greater than $4$? Why can't we add $(x-5)$ to the list?</p>
<p><a href="https://i.stack.imgur.com/xzsED.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xzsED.png" alt="enter image description here"></a></p>
<p>From Linear Algebra Done Right</p>
| Andrew D'Angelo | 295,634 | <p>As we have already determined that $U$ is not equal to $\mathcal{P}_3(\mathbf{R})$, and that $U$ is a subspace of $\mathcal{P}_3(\mathbf{R})$, we know that $U$ cannot have dimension 4 (the same dimension as $\mathcal{P}_3(\mathbf{R})$).</p>
<p>The only way that $U$ would have the same dimension as $\mathcal{P}_3(\mathbf{R})$ would be if the two were equal (i.e. the basis vectors for $U$ would also span $\mathcal{P}_3(\mathbf{R})$), as we can <em>always</em> add vectors to the basis of a strict subspace to obtain a basis of the vector space.</p>
<p>Edit: In <em>Linear Algebra Done Right</em> 3e, theorem 2.33 states "Every linearly independent list of vectors in a finite-dimensional vector space can be extended to a basis of the vector space", with the proof following. The basis for $U$ is indeed a linearly independent list of vectors.</p>
|
1,620,512 | <p>$\mathcal P_3(\mathbb R)$ is the set of polynomials with degree at most $3$ with coefficients in $\mathbb R$.</p>
<p>In the last paragraph it says $U$ cannot be extended to a basis of $\mathcal P_3(\mathbb R)$. I do not understand why not. Why would we get a list with length greater than $4$? Why can't we add $(x-5)$ to the list?</p>
<p><a href="https://i.stack.imgur.com/xzsED.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xzsED.png" alt="enter image description here"></a></p>
<p>From Linear Algebra Done Right</p>
| Pablo Herrera | 135,689 | <p>We know that if $\dim U = 4 $ and $U \subseteq \mathcal{P}_3$, then $U = \mathcal{P}_3$.</p>
<p>But it is easy to exhibit a polynomial $p$ in $\mathcal{P}_3$ such that $p'(5)\neq 0 $ (for example $p(x)= x-3$).</p>
<p>That means $U$ isn't $\mathcal{P}_3$; hence $\dim U < 4$.</p>
|
911,075 | <p>This is one of my first proofs about fields.
Please feed back and criticise in every way (including style and details).</p>
<p>Let $(F, +, \cdot)$ be a field.
Non-trivially, $\textit{associativity}$ implies that any parentheses are meaningless.
Therefore, we will not use parentheses.
Therefore, we will not use $\textit{associativity}$ explicitly.</p>
<p>By $\textit{identity element}$, $F \ne \emptyset$.
Now, let $a \in F$.
It remains to prove that $0a = 0$.
\begin{equation*}
\begin{split}
0a &= 0a + 0 && \quad \text{by }\textit{identity element }(+ ) \\
&= 0a + a + -a && \quad \text{by }\textit{inverse element }(+ ) \\
&= 0a + 1a + -a && \quad \text{by }\textit{identity element }(\cdot) \\
&= (0 + 1)a + -a && \quad \text{by }\textit{distributivity } \\
&= (1 + 0)a + -a && \quad \text{by }\textit{commutativity }(+ ) \\
&= 1 a + -a && \quad \text{by }\textit{identity element }(+ ) \\
&= a + -a && \quad \text{by }\textit{identity element }(\cdot) \\
&= 0 && \quad \text{by }\textit{inverse element }(+ )
\end{split}
\end{equation*}
QED</p>
<p>PS: Is "Let $(F, +, \cdot)$ be a field." ok?
Besides, I would not want to call $F$ a field, because $F$ is just a set.
Also, what do you think about using adverbs like "Now"? How would you have said the associativity-thing?</p>
| Adriano | 76,987 | <p>Your proof is fine. As the comments have pointed out, the proof can be shortened. If you still want to do it in a single chain of equalities, then you can do something like this:</p>
<p>\begin{equation*}
\begin{split}
0a &= 0a + 0 && \quad \text{by }\textit{identity element }(+ ) \\
&= 0a + 0a + (-0a) && \quad \text{by }\textit{inverse element }(+ ) \\
&= (0 + 0)a + (-0a) && \quad \text{by }\textit{distributivity } \\
&= 0a + (-0a) && \quad \text{by }\textit{identity element }(+ ) \\
&= 0 && \quad \text{by }\textit{inverse element }(+ )
\end{split}
\end{equation*}</p>
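If you enjoy this kind of axiom-level bookkeeping, proof assistants automate exactly this check. Below is a rough sketch of the same argument in Lean 4 with Mathlib (an assumption-laden sketch: Mathlib already provides this fact as <code>zero_mul</code>, and the exact lemma names may differ between versions):

```lean
-- sketch: 0 * a = 0 from the ring axioms, via additive cancellation
example {F : Type*} [Field F] (a : F) : 0 * a = 0 := by
  have h : 0 * a + 0 * a = 0 * a + 0 := by
    rw [← add_mul, add_zero, add_zero]
  exact add_left_cancel h
```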
|
1,990,670 | <blockquote>
<p>Assume that $0 < \theta < \pi$. Solve the following equation for $\theta$. $$\frac{1}{(\cos \theta)^2} = 2\sqrt{3}\tan\theta - 2$$ </p>
</blockquote>
<p><a href="https://i.stack.imgur.com/SoU8A.png" rel="nofollow noreferrer">Question and Answer</a></p>
<p>Regarding the attached image, which shows the question and the answer:</p>
<p>How could I solve this question and what are the steps to follow to reach the answer?</p>
| Community | -1 | <p>Multiply both members by $\cos^2\theta$:</p>
<p>$$1=2\sqrt3\sin\theta\cos\theta-2\cos^2\theta=\sqrt3\sin2\theta-1-\cos2\theta.$$</p>
<p>Then</p>
<p>$$\frac{\sqrt3}2\sin2\theta-\frac12\cos2\theta=1,$$
$$\sin\left(2\theta-\frac\pi6\right)=1.$$</p>
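It follows that $2\theta-\frac{\pi}{6}=\frac{\pi}{2}$, i.e. $\theta=\frac{\pi}{3}$, the only solution with $0<\theta<\pi$. This can be verified numerically (a Python sketch):

```python
import math

theta = math.pi / 3  # from sin(2*theta - pi/6) = 1 with 0 < theta < pi
lhs = 1 / math.cos(theta) ** 2
rhs = 2 * math.sqrt(3) * math.tan(theta) - 2
print(lhs, rhs)  # both 4.0 (up to rounding)
```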
|
3,850,320 | <p>If a graph is Eulerian (i.e. has an Eulerian tour), then do we immediately assume that it is connected?</p>
<p>The reason I ask is because I came across this question:</p>
<p><a href="https://math.stackexchange.com/questions/1689726/graph-and-its-line-graph-that-both-contain-eulerian-circuits">Graph and its line Graph that both contain Eulerian circuits</a></p>
<p>And the solution seems to assume that the graph is connected, before using the result that a connected graph is Eulerian if and only if every vertex has even degree.</p>
| Arthur | 15,500 | <p>We are given that the original graph has an Eulerian circuit. So each edge must be connected to each other edge, regardless of whether the graph itself is connected. Thus the line graph must be connected. Technically this ought to have been pointed out in the answer post you linked, yes.</p>
|
508,059 | <p>How is it possible to considerably shorten the list of properties that define a vector space by using definitions from abstract algebra?</p>
| Loki Clock | 58,287 | <p>Let a $K$-algebra be a ring with a homomorphism from $K$. Then a $K$-vector space $V$ is the underlying addition of a $K$-algebra $R$, together with a product $K \times V \rightarrow V$ given by restriction of the multiplication of $R$.</p>
|
11,175 | <p>This question is a follow-up from <a href="https://mathematica.stackexchange.com/questions/11171/functional-programming-and-do-loops">here</a>. I have a function that generates a list of correlations between some random variables:</p>
<pre><code>varparams = Table[i, {i, 0.1, 0.9, 0.1}];
var[i_] :=
RandomChoice[{varparams[[i]], 1 - varparams[[i]]} -> {1, 0}, 10]
corrcheck[i_, j_, n_] :=
Table[BlockRandom[SeedRandom[k];
Correlation[var[i], var[j]]], {k, n}]
</code></pre>
<p>Now, if by chance one or both of <code>var[i]</code>,<code>var[j]</code> is a list of all 1's, then <code>Correlation[var[i],var[j]]</code> is undefined. I want to improve <code>corrcheck[i_, j_, n_]</code> so that it automatically throws away all such <code>vars</code>. My idea was to do something like this:</p>
<pre><code>corrcheck[i_, j_, n_] :=
Table[BlockRandom[SeedRandom[k];
If[Mean@var[i] != 1 && Mean@var[j] != 1,
Correlation[var[i], var[j]], "N/A"]], {k, n}]
</code></pre>
<p>Why doesn't this work? Have I missed a better alternative?</p>
| kglr | 125 | <p>A minor issue with your current test is that it ignores the case where <code>Mean[var[i]]=0</code>. You need to use <code>Variance[var[i]]</code> in your condition. </p>
<p>However, this does not fix the real problem. The real issue with the current formulation is that randomization is performed every time <code>var[i]</code> is used. Because of this, when you check <code>Variance[var[i]] != 0</code> and then later use <code>var[i]</code> in <code>Correlation[var[i], var[j]]</code> there are actually two different <code>var[i]</code>s, and sometimes the first one passes your check but the second one inside <code>Correlation</code> has zero standard deviation. To fix this you need to use something like</p>
<pre><code> ClearAll[corrcheck];
corrcheck[i_, j_, n_] :=
Table[BlockRandom[SeedRandom[k]; With[{v1 = var[i], v2 = var[j]},
If[Variance@v1 != 0 && Variance@v2 != 0, Correlation[v1, v2], "N/A"]]], {k, n}]
</code></pre>
<p>where in each iteration one random value is generated to be used for checking non-zero variance <strong>and</strong> computing the correlation for each <code>var[i]</code>.</p>
<p>EDIT: Few more alternative ways to define <code>corrcheck</code></p>
<pre><code>ClearAll[corrcheck2, corrcheck3];
corrcheck2[i_, j_, n_] := Table[BlockRandom[SeedRandom[k];
Module[{v1, v2}, If[Variance@v1 != 0 && Variance@v2 != 0, Correlation[v1, v2],
"N/A"] /. {v1 -> var[i], v2 -> var[j]}]], {k, n}];
corrcheck3[i_, j_, n_] := Table[BlockRandom[SeedRandom[k];
If[Variance@#1 != 0 && Variance@#2 != 0, Correlation[#1, #2],
"N/A"] &[var[i], var[j]]], {k, n}];
corrcheck[1, 2, 10]
(* {"N/A","N/A",-0.21821789,"N/A",0.21821789,-0.408248290,
-0.1111111,-0.3333333,-0.218217,"N/A"}*)
corrcheck2[1, 2, 10] == corrcheck3[1, 2, 10] == corrcheck[1, 2, 10]
(* True *)
</code></pre>
<p>EDIT: Cleanest approach is to make use of <code>Correlation</code>'s internal testing for zero standard deviation -- using <code>Check</code> as in Mark's answer but placing it just outside <code>Correlation</code>:</p>
<pre><code>corrcheckX[i_, j_, n_] := Table[BlockRandom[SeedRandom[k];
Quiet[ Check[Correlation[var[i], var[j]], "N/A", Correlation::zerosd]]], {k, n}];
corrcheck2[1, 2, 10]==corrcheck3[1, 2, 10]==corrcheck[1, 2, 10] == corrcheckX[1, 2, 10]
(* True *)
</code></pre>
|
4,113,376 | <p>Given following predicates:</p>
<p><span class="math-container">$$
F_1 = (\forall x)(F(x) \leftrightarrow G(x)) \text{ and } F_2 = (\forall x)F(x) \leftrightarrow (\forall x)G(x)
$$</span></p>
<p>I think that they are not equivalent, but if it possible to prove that?</p>
| esoteric-elliptic | 425,395 | <p>Consider <span class="math-container">$a,b\in\mathbb R$</span>.
<span class="math-container">$$\forall x(ax + b = 0) \leftrightarrow \forall x (a = b = 0)$$</span>
certainly holds, but
<span class="math-container">$$\forall x (ax + b = 0 \leftrightarrow a = b = 0)$$</span>
does not!</p>
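<p>A brute-force confirmation over a two-element domain (my addition, not part of the original answer): enumerating all interpretations of two unary predicates shows that <span class="math-container">$F_2$</span> can hold while <span class="math-container">$F_1$</span> fails.</p>

```python
from itertools import product

# Enumerate all predicate pairs (F, G) over the two-element domain {0, 1}
# and look for interpretations where
#   F2: (forall x F(x)) <-> (forall x G(x))   holds, but
#   F1: forall x (F(x) <-> G(x))              fails.
domain = [0, 1]
counterexamples = []
for f_vals, g_vals in product(product([False, True], repeat=2), repeat=2):
    F = dict(zip(domain, f_vals))
    G = dict(zip(domain, g_vals))
    f2 = all(F[x] for x in domain) == all(G[x] for x in domain)
    f1 = all(F[x] == G[x] for x in domain)
    if f2 and not f1:
        counterexamples.append((f_vals, g_vals))

# F = {0: True, 1: False}, G = {0: False, 1: True} is one such pair:
# neither predicate holds universally, so F2 is true, yet F1 fails pointwise.
print(len(counterexamples))
```

<p>Six of the sixteen interpretation pairs satisfy <span class="math-container">$F_2$</span> but not <span class="math-container">$F_1$</span>, so the two formulas are not equivalent.</p>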
|
751,138 | <p>Show that if $3\mid(a^2+1)$ then $3$ does not divide $(a+1)$.</p>
<p>Using proof by contradiction.</p>
<p>Can someone prove this using the contradiction method, please?</p>
| mookid | 131,738 | <p><strong>Hint:</strong> just consider the 3 cases
$$a = 3k, a=3k+1, a=3k-1
$$</p>
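<p>A brute-force check of the three cases (my addition, not part of the hint):</p>

```python
# Check the claim over a range of integers: whenever 3 | (a^2 + 1),
# we must not have 3 | (a + 1).  Working the three cases a = 3k,
# a = 3k+1, a = 3k-1 shows that a^2 + 1 is congruent to 1 or 2 (mod 3),
# so the hypothesis 3 | (a^2 + 1) is in fact never satisfied.
residues = sorted({(a * a + 1) % 3 for a in range(-300, 301)})
violations = [a for a in range(-300, 301)
              if (a * a + 1) % 3 == 0 and (a + 1) % 3 == 0]
print(residues, violations)   # [1, 2] []
```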
|
293,921 | <p>The problem I am working on is:</p>
<p>An ATM personal identification number (PIN) consists of four digits, each a 0, 1, 2, . . . 8, or 9, in succession.</p>
<p>a.How many different possible PINs are there if there are no restrictions on the choice of digits?</p>
<p>b.According to a representative at the author’s local branch
of Chase Bank, there are in fact restrictions on the choice
of digits. The following choices are prohibited: (i) all four
digits identical (ii) sequences of consecutive ascending or
descending digits, such as 6543 (iii) any sequence starting with 19 (birth years are too easy to guess). So if one of the PINs in (a) is randomly selected, what is the probability that it will be a legitimate PIN (that is, not be one of the prohibited sequences)?</p>
<p>c. Someone has stolen an ATM card and knows that the first
and last digits of the PIN are 8 and 1, respectively. He has
three tries before the card is retained by the ATM (but
does not realize that). So he randomly selects the $2^{nd}$ and $3^{rd}$
digits for the first try, then randomly selects a different pair of digits for the second try, and yet another randomly selected pair of digits for the third try (the
individual knows about the restrictions described in (b)
so selects only from the legitimate possibilities). What is
the probability that the individual gains access to the
account?</p>
<p>d.Recalculate the probability in (c) if the first and last digits are 1 and 1, respectively. </p>
<h2>---------------------------------------------</h2>
<p>For part a): The total number of pins without restrictions is $10,000$</p>
<p>For part b): The number of pins in either ascending or descending order is $10 \cdot 1 \cdot 1 \cdot 1$, because once the first digit is known, then the three other spots containing digits are already spoken for. The number of pins where each slot contains the same digit is $10 \cdot 1 \cdot 1 \cdot 1$, because once the first digit is known there is only one option left for the rest of the slots. The number of pins that have their first and second slot occupied by 1 and 9, respectively, is $1 \cdot 1 \cdot 10 \cdot 10$. So, if R is the set that contains these restricted pins, then $|R| = 130$; and if N is the set that contains the non-restricted ones, meaning R and N are complementary sets, then $|N| = 10,000 - 130$. <strong>Hence, the probability is then $P(N) = 9870/10000 = 0.9870.$ However, the answer is $0.9876$. What did I do wrong?</strong></p>
<p>For part c): The sample space, containing all of the outcomes of the experiment that will take place, is $|N|=9870$. When it says that the thief won't use the same pair of digits in each try, does that not allow him trying the pin 8 <strong>5 2</strong> 1 in one try and the pin 8 <strong>2 5</strong> 1 in another try?</p>
| Metin Y. | 49,793 | <p>For d): We have the case: $1$ * * $1$. </p>
<p>But the thief knows, by prohibition (i), it cannot be $1111$. Thus he eliminates $1$ possibility.</p>
<p>Also he knows, by (iii), the second digit cannot be $9$. There are exactly $10$ numbers of the form $19$ * $1$, namely $1901$, $1911$, $1921$, ... So, at this stage he eliminates $10$ possibilities.</p>
<p>All in all, if he had no restrictions, there would be $100$ choices of the form $1$ * * $1$. But he excludes $10 + 1 = 11$ of them, leaving 89 possible choices. Since he has 3 chances, the resulting probability is $3/89$.</p>
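<p>The count of $89$ legitimate choices can be confirmed by brute force; this check is my own addition:</p>

```python
# Verify the count of legitimate PINs of the form 1**1, applying the
# prohibitions from part (b):
#   (i)   all four digits identical,
#   (ii)  consecutive ascending or descending digits,
#   (iii) any PIN starting with 19.
def prohibited(pin):
    d = [int(c) for c in pin]
    if len(set(d)) == 1:
        return True
    if all(d[i + 1] == d[i] + 1 for i in range(3)):
        return True
    if all(d[i + 1] == d[i] - 1 for i in range(3)):
        return True
    return pin.startswith("19")

legit = [f"1{x}{y}1" for x in range(10) for y in range(10)
         if not prohibited(f"1{x}{y}1")]
print(len(legit), 3 / len(legit))   # 89 and the probability, about 0.0337
```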
|
1,177,349 | <p>Let $\gamma = e^{2 \pi i/5} + (e^{2 \pi i/5})^4$.</p>
<p>I am looking for the basis for $[\mathbb{Q}(\gamma):\mathbb{Q}] = 2$, and then looking for a dependence between $\gamma^2,\gamma$, and $1$. </p>
<p>I've worked all of this out by numerically but I am not sure how to do this through the basis.</p>
| String | 94,971 | <p>Multiply $x<y$ by the positive numbers $x$ and $y$ in turn to get $xx<xy$ and $xy<yy$ so that
$$
xx<xy<yy
$$</p>
|
1,177,349 | <p>Let $\gamma = e^{2 \pi i/5} + (e^{2 \pi i/5})^4$.</p>
<p>I am looking for the basis for $[\mathbb{Q}(\gamma):\mathbb{Q}] = 2$, and then looking for a dependence between $\gamma^2,\gamma$, and $1$. </p>
<p>I've worked all of this out by numerically but I am not sure how to do this through the basis.</p>
| marty cohen | 13,079 | <p>Using the definition of "$>$":</p>
<p>If $y > x$,
there is a $d > 0$
such that
$y = x+d$.</p>
<p>Then
$y^2-x^2
=(x+d)^2-x^2
=x^2+2xd+d^2-x^2
= 2xd+d^2
> 0
$
since both
$2xd$ and $d^2$
are positive.</p>
|
4,114,034 | <p>In Linear Algebra Done Right by Axler, there are two sentences he uses to describe the uniqueness of Linear Maps (3.5) which I cannot reconcile. Namely, whether the uniqueness of Linear Maps is determined by the choice of 1) <em>basis</em> or 2) <em>subspace</em>. These two seem like very different statements to me given there can be a many-to-one relationship between basis and subspace. In otherwords, saying a Linear Map is "unique on a subspace" seems like a stronger statement than saying it is "unique on a basis".</p>
<p>This first sentence he writes before proving the theorem (3.5):</p>
<blockquote>
<p>The uniqueness part of the next result means that a linear map is completely determined by its
values on a <strong>basis</strong>.</p>
</blockquote>
<p>This second sentence he writes at the end after proving the uniqueness of a linear map:</p>
<blockquote>
<p>Thus <span class="math-container">$T$</span> is uniquely determined on <span class="math-container">$span(v_1, \dots, v_n)$</span> by the equation above.
Because <span class="math-container">$v_1, \dots, v_n$</span> is a basis of <span class="math-container">$V$</span>, this implies that <span class="math-container">$T$</span> is <strong>uniquely determined
on <span class="math-container">$V$</span></strong>.</p>
</blockquote>
<p>My question is, if <span class="math-container">$T$</span> is uniquely determined on <span class="math-container">$V$</span>, doesn't that imply that the choice of basis for <span class="math-container">$V$</span> doesn't matter (since each basis of <span class="math-container">$V$</span> spans <span class="math-container">$V$</span>)? But if so, a key part of the theorem requires explicitly choosing <span class="math-container">$T$</span> such that <span class="math-container">$T(v_j)=w_j$</span>, meaning if we choose a different basis <span class="math-container">$a_1, \dots, a_n$</span> of <span class="math-container">$V$</span> and then select <span class="math-container">$T$</span> such that <span class="math-container">$T(a_j)=w_j$</span>, we would get a different <span class="math-container">$T$</span>.</p>
<p>I've found these questions <a href="https://math.stackexchange.com/questions/3263742/proving-a-linear-transformation-is-unique">here</a> and <a href="https://math.stackexchange.com/questions/1272797/insights-about-tv-j-w-j-the-linear-maps-and-basis-of-domain?rq=1">here</a> that are tangentially related but don't address my question specifically. On the other hand the questions <a href="https://math.stackexchange.com/questions/1873360/linear-maps-uniqueness-proof-difference-between-uniquely-determined-on-spanv">here</a> and <a href="https://math.stackexchange.com/questions/3059263/linear-map-uniquely-determined-by-span-of-basis">here</a> get a little closer, but the answer in the first suggests that the choice of basis is arbitrary where as the answer in the second suggests the basis must be the same.</p>
<p>For more information, I've included the complete statement of the theorem plus the last paragraph of the proof.</p>
<p><strong>Theorem 3.5 Linear maps and basis of domain</strong></p>
<blockquote>
<p>Suppose <span class="math-container">$v_1, \dots, v_n$</span> is a basis of <span class="math-container">$V$</span> and <span class="math-container">$w_1, \dots, w_n \in W$</span>. Then there exists a unique linear map <span class="math-container">$T: V \to W$</span> such that
<span class="math-container">$Tv_j=w_j$</span> for each <span class="math-container">$j = 1, \dots, n$</span>.</p>
</blockquote>
<p>Last paragraph of proof:</p>
<blockquote>
<p>To prove uniqueness, now suppose that <span class="math-container">$T \in \mathcal{L}(V,W)$</span>; and that <span class="math-container">$Tv_j=w_j$</span> for each <span class="math-container">$j = 1, \dots ,n$</span>. Let <span class="math-container">$c_1, \dots, c_n \in F$</span>. The homogeneity of <span class="math-container">$T$</span> implies that <span class="math-container">$T(c_jv_j) = c_jw_j$</span> for each <span class="math-container">$j=1, \dots, n$</span>. The additivity of <span class="math-container">$T$</span> now implies that <span class="math-container">$T(c_1v_1 + \cdots + c_nv_n) = c_1w_1 + \cdots + c_nw_n$</span>. Thus <span class="math-container">$T$</span> is uniquely determined on <span class="math-container">$span(v_1, \dots, v_n)$</span> by the equation above. Because <span class="math-container">$v_1, \dots, v_n$</span> is a basis of <span class="math-container">$V$</span>, this implies that <span class="math-container">$T$</span> is uniquely determined on <span class="math-container">$V$</span>.</p>
</blockquote>
| D_S | 28,556 | <p>Let <span class="math-container">$f$</span> be a function with domain <span class="math-container">$D$</span> and range <span class="math-container">$R$</span>. What does it mean for a point <span class="math-container">$x \in D$</span> to be a fixed point? It means that if you plug <span class="math-container">$x$</span> into <span class="math-container">$f$</span>, you get back <span class="math-container">$x$</span> again. In other words, <span class="math-container">$x$</span> is a fixed point if <span class="math-container">$f(x) = x$</span>.</p>
<p>Your function is <span class="math-container">$\phi(x) = 2x^2$</span> with domain <span class="math-container">$[-1,1]$</span> and range <span class="math-container">$[0,2]$</span>. What does it mean for a point <span class="math-container">$x \in [-1,1]$</span> to be a fixed point? It means that when you plug <span class="math-container">$x$</span> into <span class="math-container">$\phi$</span>, you get <span class="math-container">$x$</span> back. In other words, to say that <span class="math-container">$x$</span> is a fixed point is to say that</p>
<p><span class="math-container">$$2x^2 = x.$$</span></p>
<p>Thus, a fixed point for <span class="math-container">$\phi$</span> is a number between <span class="math-container">$-1$</span> and <span class="math-container">$1$</span> that solves the quadratic equation <span class="math-container">$2x^2 - x = 0$</span>. You can solve this quadratic equation and find the two fixed points.</p>
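<p>A quick numerical scan (my addition) confirms that <span class="math-container">$x=0$</span> and <span class="math-container">$x=1/2$</span> are the only fixed points in <span class="math-container">$[-1,1]$</span>:</p>

```python
# Fixed points of phi(x) = 2x^2 on [-1, 1] are the solutions of
# 2x^2 = x, i.e. x(2x - 1) = 0, giving x = 0 and x = 1/2.
phi = lambda x: 2 * x * x

fixed = [x / 1000 for x in range(-1000, 1001)
         if abs(phi(x / 1000) - x / 1000) < 1e-12]
print(fixed)   # [0.0, 0.5]
```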
|
1,271,935 | <p>Let $(e_n)$ (where $ e_n $ has a 1 in the $n$-th place and zeros otherwise) be unit standard vectors of $\ell_\infty$. </p>
<p>Why is $(e_n)$ not a basis for $\ell_\infty$?</p>
<p>Thanks.</p>
| ナウシカ | 213,845 | <p>If a space has a Schauder basis, then it is separable: the finite linear combinations of basis vectors with rational coefficients form a countable dense subset. But $\ell^\infty$ is not separable, i.e. it contains no countable dense subset.</p>
<p>Let us prove that no such set can exist:</p>
<p>By contradiction, assume that $D$ was a dense countable subset of $\ell^\infty$. Now consider the set $S = \{s\mid s_n \in \{0,1\}\}$, that is, all sequences only consisting of $0$ and $1$.</p>
<p>If we consider balls of radius ${1\over 3}$ around each $s \in S$, then every such ball must contain some $d \in D$, since $D$ is dense. But any two distinct $s, s' \in S$ satisfy $\|s - s'\|_\infty = 1$, so these balls are pairwise disjoint, and $D$ would need a distinct element for each of the uncountably many $s \in S$. Since $D$ is countable, this is impossible. Hence $\ell^\infty$ cannot contain a dense countable subset.</p>
<p>In particular, $e_n$ cannot be a Schauder basis. </p>
|
28,195 | <p>So yesterday I came across a question. Something seemed suspicious (a badly worded question and an incorrect answer accepted), so I did some snooping. It appears that every question from the OP has been answered by the same user within minutes of posting, and subsequently upvoted and accepted. </p>
<p>I suspect the questioner and answerer may be one and the same (looking to increase their reputation for whatever reason), and I am pretty certain this is behavior the MSE community does not want. My questions are:</p>
<p>Assuming I am correct, is this against some "rule" of the MSE community (and if so, where can I find it)? Is there any way of determining this? And if there is no way for a plebeian like me, how could I go about getting the proper moderator intervention? </p>
<p>I say proper moderator intervention because I flagged the answer as poor quality, but it was then pointed out that this is not intended for incorrect answers.</p>
| Jay | 144,901 | <p>Assuming that you are correct about what happened, and it's not just a coincidence or some such ...</p>
<p>(a) This seems really pointless. So a person boosts his reputation score by cheating. Why? What did he gain? Unless there's something going on that I don't know about, there are no cash prizes for high reputation scores. Do guys with high stack exchange scores get all the girls?</p>
<p>(b) Seems like this would just clutter up the site with lots of useless questions and answers. I suppose if he posted interesting, useful questions, and then posted his own answers to them, who cares if it's all a trick? It's still interesting questions added to the knowledge base. But if they're worthless questions, then it's creating clutter.</p>
|
4,409,290 | <p>Can I have a matrix <span class="math-container">$Q$</span> which is orthogonal because each of the column vectors dot products with each other is 0? Or must only satisfy <span class="math-container">$QQ^T=I$</span>. For example consider the following matrix <span class="math-container">$Q$</span>:</p>
<p><span class="math-container">$$Q=\begin{pmatrix}
2 & 1 & -2 \\
-2 & 2 & -1 \\
1 & 2 & 2
\end{pmatrix}$$</span></p>
<ol>
<li>Calculating the dot product of each column pair:</li>
</ol>
<p><span class="math-container">$$\begin{pmatrix}
2 \\
-2 \\
1
\end{pmatrix}\cdot\begin{pmatrix}
1 \\
2 \\
2
\end{pmatrix}=0 ,\quad \begin{pmatrix}
2 \\
-2 \\
1
\end{pmatrix}\cdot\begin{pmatrix}
-2 \\
-1 \\
2
\end{pmatrix}=0 , \quad and \begin{pmatrix}
1 \\
2 \\
2
\end{pmatrix}\cdot\begin{pmatrix}
-2 \\
-1 \\
2
\end{pmatrix}=0 $$</span></p>
<p>Suggests all vectors are orthogonal to each other.</p>
<ol start="2">
<li>But <span class="math-container">$det(Q)=27$</span> and <span class="math-container">$QQ^T=9I$</span>.</li>
</ol>
<p>So is it or is it not orthogonal?</p>
| A. P. | 1,027,216 | <p>By definition a matrix <span class="math-container">$Q\in \mathcal{M}_{n}(\mathbb{R})$</span> is orthogonal <strong>if</strong>:</p>
<ol>
<li><span class="math-container">$Q$</span> is invertible.</li>
<li><span class="math-container">$Q^{T}=Q^{-1}$</span>.</li>
</ol>
<p>Moreover, we know that <span class="math-container">$Q\in \mathcal{M}_{n}(\mathbb{R})$</span> is orthogonal <strong>iff</strong> the columns of <span class="math-container">$Q$</span> form an orthonormal basis for <span class="math-container">$\mathbb{R}^{n}$</span>.</p>
<p><strong>Remark:</strong></p>
<p>A set <span class="math-container">$\beta=\{c_{1},c_{2},\ldots,c_{n}\}\subseteq \mathbb{R}^{n}$</span> is a orthonormal basis <strong>iff</strong>:</p>
<ol>
<li><span class="math-container">$\beta$</span> is a basis for <span class="math-container">$\mathbb{R}^{n}$</span>.</li>
<li>The inner product between <span class="math-container">$c_{i}$</span> with <span class="math-container">$c_{j}$</span> denoted by <span class="math-container">$\langle c_{i},c_{j}\rangle$</span> for <span class="math-container">$i\not=j$</span> is zero, i.e., <span class="math-container">$\langle c_{i},c_{j}\rangle_{i\not=j}=0$</span>.</li>
<li>The norm for each <span class="math-container">$c_{j}\in \beta$</span> satisfies <span class="math-container">$||c_{j}||=1$</span>.</li>
</ol>
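<p>Putting the definition and the remark together for the matrix in the question: the columns are mutually orthogonal but each has norm <span class="math-container">$3$</span>, so <span class="math-container">$Q$</span> is not an orthogonal matrix, while <span class="math-container">$Q/3$</span> is. A small numerical check (my addition):</p>

```python
# The matrix Q from the question: its columns are mutually orthogonal
# but each has norm 3, so Q Q^T = 9 I rather than I.  Q itself is
# therefore not orthogonal, while Q/3 is.
Q = [[2, 1, -2],
     [-2, 2, -1],
     [1, 2, 2]]

def times_transpose(A):
    """Compute A A^T for a 3x3 matrix stored as nested lists."""
    return [[sum(A[i][k] * A[j][k] for k in range(3)) for j in range(3)]
            for i in range(3)]

QQt = times_transpose(Q)
print(QQt)   # [[9, 0, 0], [0, 9, 0], [0, 0, 9]]

R = [[x / 3 for x in row] for row in Q]
RRt = times_transpose(R)
is_identity = all(abs(RRt[i][j] - (1 if i == j else 0)) < 1e-12
                  for i in range(3) for j in range(3))
print(is_identity)   # True
```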
|
311,849 | <p>How to evaluate:
$$ \int_0^\infty e^{-x^2} \cos^n(x) dx$$</p>
<p>Someone has posted this question on fb. I hope it's not a duplicate.</p>
| Ayman Hourieh | 4,583 | <p>Using the <a href="http://en.wikipedia.org/wiki/List_of_trigonometric_identities#Power-reduction_formula" rel="nofollow">power reduction formula</a> for $\cos$ for odd powers:</p>
<p>\begin{align}
I &= \int_0^\infty e^{-x^2} \cos^n(x) \, dx \\
&= \int_0^\infty e^{-x^2} \left( \frac{2}{2^n} \sum_{k=0}^{(n-1)/2} \binom{n}{k} \cos{((n-2k)x)} \right) \, dx \\
&= \frac{2}{2^n} \sum_{k=0}^{(n-1)/2} \binom{n}{k} \int_0^\infty e^{-x^2} \cos{((n-2k)x)} \, dx \\
\end{align}</p>
<p>The inner integral is a generalization of the Gaussian integral and can be <a href="http://planetmath.org/GeneralisationOfGaussianIntegral.html" rel="nofollow">evaluated using differentiation under the integral sign</a>:
$$
\int_0^\infty e^{-x^2} \cos{((n-2k)x)} \, dx = \frac{1}{2} \sqrt{\pi} e^{-\frac{1}{4}(n-2 k)^2}
$$</p>
<p>Therefore, we have:</p>
<p>$$
I = \frac{\sqrt{\pi}}{2^n} \sum_{k=0}^{(n-1)/2} \binom{n}{k} e^{-\frac{1}{4}(n-2 k)^2} \qquad n \text{ odd}
$$</p>
<p>I don't think there is a nice closed form for this sum.</p>
<hr>
<p>As for even powers, the same method yields:</p>
<p>\begin{align}
I &= \int_0^\infty e^{-x^2} \cos^n(x) \, dx \\
&= \int_0^\infty e^{-x^2} \left( \frac{1}{2^n} \binom{n}{n/2} + \frac{2}{2^n} \sum_{k=0}^{n/2-1} \binom{n}{k} \cos{((n-2k)x)} \right) \, dx \\
&= \frac{\sqrt{\pi}}{2^{n+1}} \binom{n}{n/2} + \frac{2}{2^n} \sum_{k=0}^{n/2-1} \binom{n}{k} \int_0^\infty e^{-x^2} \cos{((n-2k)x)} \, dx
\end{align}</p>
<p>Therefore:</p>
<p>$$
I = \frac{\sqrt{\pi}}{2^{n+1}} \binom{n}{n/2} + \frac{\sqrt{\pi}}{2^n} \sum_{k=0}^{n/2-1} \binom{n}{k} e^{-\frac{1}{4}(n-2 k)^2} \qquad n \text{ even}
$$</p>
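<p>The odd-<span class="math-container">$n$</span> closed form can be spot-checked numerically; this sketch is my addition:</p>

```python
import math

# Numerical spot-check of the odd-n closed form
#   I = sqrt(pi)/2^n * sum_{k=0}^{(n-1)/2} C(n,k) * exp(-(n-2k)^2/4)
# against a midpoint-rule evaluation of the integral, truncated at x = 10
# (the tail beyond that point is smaller than exp(-100)).
def closed_form(n):
    return math.sqrt(math.pi) / 2**n * sum(
        math.comb(n, k) * math.exp(-(n - 2 * k) ** 2 / 4)
        for k in range((n - 1) // 2 + 1))

def numeric(n, upper=10.0, steps=200_000):
    h = upper / steps
    return h * sum(math.exp(-((i + 0.5) * h) ** 2) * math.cos((i + 0.5) * h) ** n
                   for i in range(steps))

checks = [abs(closed_form(n) - numeric(n)) for n in (1, 3, 5)]
print(all(err < 1e-6 for err in checks))   # True
```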
|
1,044,910 | <blockquote>
<p>Prove that $$\sum_{i = 2^{n-1} + 1}^{2^n}\frac{1}{a + ib} \ge \frac{1}{a + 2b}$$</p>
</blockquote>
<p>I tried to to prove the above statement using the AM-HM inequality:</p>
<p>$$\begin{align}\frac{1}{2^n - 2^{n-1}}\sum_{i = 2^{n-1} + 1}^{2^n}\frac{1}{a + ib} &\ge \frac{2^n - 2^{n-1}}{\sum_{i = 2^{n-1} + 1}^{2^n}(a + ib)}\\
\sum_{i = 2^{n-1} + 1}^{2^n}\frac{1}{a + ib} &\ge \frac{(2^n - 2^{n-1})^2}{\frac{2^n -2^{n-1}}{2}(2a + (2^n + 2^{n-1} + 1)b)}\\
&=\frac{2^{n+1} - 2^n}{2a + (2^n + 2^{n-1} + 1)b}\end{align}$$</p>
<p>after which I am more or less stuck. How can I continue on from here, or is there another method?</p>
| Kim Jong Un | 136,641 | <p>For $a,b,x>0$, the function $f(x)=\frac{1}{a+bx}$ is convex. So by Jensen's inequality
$$
\frac{1}{2^{n-1}}\sum_{i=2^{n-1}+1}^{2^n}f(i)\geq f\left(\frac{1}{2^{n-1}}\sum_{i=2^{n-1}+1}^{2^n}i\right)=f(0.5+3\times2^{n-2})=\frac{1}{a+(0.5+3\times2^{n-2})b}.
$$
It remains to show $2^{n-1}$ times the rightmost expression above is greater than or equal to the RHS of the desired inequality. This amounts to computing:
$$
2^{n-1}(a+2b)-(a+(0.5+3\times2^{n-2})b)=a(2^{n-1}-1)+b(2^n-0.5-3\times2^{n-2})\\
=a(2^{n-1}-1)+b(2^{n-2}-0.5)=(2^{n-1}-1)(a+b/2)\geq 0.
$$</p>
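<p>A numerical spot-check of the inequality for positive <span class="math-container">$a,b$</span> (my addition); note that at <span class="math-container">$n=1$</span> the sum reduces to the single term <span class="math-container">$1/(a+2b)$</span>, so equality holds there.</p>

```python
# Spot-check: sum_{i=2^{n-1}+1}^{2^n} 1/(a + i b) >= 1/(a + 2b)
# over a grid of positive a, b and several values of n.
def lhs(a, b, n):
    return sum(1.0 / (a + i * b) for i in range(2 ** (n - 1) + 1, 2 ** n + 1))

ok = all(lhs(a, b, n) + 1e-12 >= 1.0 / (a + 2 * b)
         for a in (0.5, 1.0, 3.0)
         for b in (0.1, 1.0, 10.0)
         for n in (1, 2, 3, 6))
print(ok)   # True
```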
|
3,055,649 | <p>Can there be more than four different types of polygons meeting at a vertex? How?
(The polygons must be convex, regular and different)</p>
<p>There are two ways to fit 5 regular polygons around a vertex, what are they?
(The polygons must be regular, they may not be of different types)</p>
| user26872 | 26,872 | <p><span class="math-container">$\def\b{\beta}$</span><span class="math-container">\begin{align*}
\newcommand\cmt[1]{{\small\textrm{#1}}}
I_n(\b) &= \int_0^{\pi/2}
\left(\frac{\sin (2n+1)x}{\sin x}\right)^\b dx \\
&= \frac 1 4 \int_0^{2\pi}
\left(\frac{\sin (2n+1)x}{\sin x}\right)^\b dx
& \cmt{begin similar to user630708} \\
&= \frac{1}{4i} \oint_\gamma
\left(\frac{z^{4n+2}-1}{z^2-1}\right)^\b
\frac{dz}{z^{2n\b+1}}
& \cmt{let $z=e^{ix}$} \\
&= \frac{1}{4i} \oint_\gamma
\left(\sum_{k=0}^{2n}z^{2k}\right)^\b
\frac{dz}{z^{2n\b+1}}
& \cmt{partial sum of geometric series} \\
&= \left.\frac{1}{4i} \frac{2\pi i}{(2n\b)!}
\left(\frac{d}{dz}\right)^{2n\b}
\left(\sum_{k=0}^{2n}z^{2k}\right)^\b \right|_{z=0}
& \cmt{Cauchy integral formula} \\
&= \left.\frac{\pi}{2} \frac{1}{(2n\b)!}
\left(\frac{d}{dz}\right)^{2n\b}
\sum_{\sum x_k=\b} \frac{\b!}{\prod x_k!}
\prod (z^{2k})^{x_k}
\right|_{z=0}
& \cmt{multinomial expansion, $k=0,1,\ldots,2n$} \\
&= \left.\frac{\pi}{2} \frac{1}{(2n\b)!}
\left(\frac{d}{dz}\right)^{2n\b}
\sum_{\sum x_k=\b} \frac{\b!}{\prod x_k!}
z^{2\sum k x_k}
\right|_{z=0} \\
&= \frac{\pi}{2}
\sum_{\sum x_k=\b \atop \sum k x_k = n\b}
\frac{\b!}{\prod x_k!}
& \cmt{only surviving terms have $\sum k x_k = n\b$} \\
&= \frac{\pi}{2}
\sum_{\sum x_k=\b \atop \sum (n-k) x_k = 0}
\frac{\b!}{\prod x_k!}
\end{align*}</span>
In the last line note that
<span class="math-container">$\sum_{k=0}^{2n} n x_k=n\b$</span> and so
<span class="math-container">$\sum_{k=0}^{2n} (n-k)x_k = 0$</span>.
By inspection one can see that
<span class="math-container">$$\sum_{\sum_{k=0}^{2n} x_k=\b \atop \sum_{k=0}^{2n} (n-k) x_k = 0}
\frac{\b!}{\prod x_k!}
= \textrm{number of arrays of $\b$ integers in $-n,\ldots,n$ with sum equal to 0,}$$</span>
i.e.,
<span class="math-container">$$I_n(\b) = \frac{\pi}{2} T(\b,n),$$</span>
where <span class="math-container">$T(\b,n)$</span> is <a href="http://oeis.org/A201552" rel="nofollow noreferrer">OEIS A201552</a>, as pointed out by James Arathoon in the comments.
(On that page we also find an integral form of <span class="math-container">$T(\b,n)$</span> which, after a simple substitution, gives <span class="math-container">$I_n(\b) = \frac{\pi}{2} T(\b,n)$</span>.)</p>
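<p>For small parameters the identity can be cross-checked numerically; the following sketch is my own addition:</p>

```python
import math
from itertools import product

# Compare a midpoint-rule evaluation of
#   I_n(beta) = integral_0^{pi/2} (sin((2n+1)x)/sin(x))^beta dx
# with (pi/2) * T(beta, n), where T(beta, n) counts beta-tuples of
# integers in {-n, ..., n} that sum to zero.
def T(beta, n):
    return sum(1 for t in product(range(-n, n + 1), repeat=beta)
               if sum(t) == 0)

def I(n, beta, steps=100_000):
    h = (math.pi / 2) / steps
    return h * sum((math.sin((2 * n + 1) * x) / math.sin(x)) ** beta
                   for x in ((i + 0.5) * h for i in range(steps)))

cases = [(1, 2), (1, 4), (2, 2)]          # (n, beta)
errs = [abs(I(n, beta) - math.pi / 2 * T(beta, n)) for n, beta in cases]
print([T(beta, n) for n, beta in cases], max(errs) < 1e-3)   # [3, 19, 5] True
```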
|
315,457 | <p>I am trying to evaluate $\cos(x)$ at the point $x=3$ with $7$ decimal places to be correct. There is no requirement to be the most efficient but only evaluate at this point.</p>
<p>Currently, I am thinking to first write $x=\pi+x'$ where $x'=-0.14159265358979312$, and then use the Taylor series $\cos(x)=\sum_{n=0}^\infty(-1)^n\frac{x^{2n}}{(2n)!}$ together with the error bound $\frac{1}{(n+1)!}$ for $\cos(x)$ when $x\in[-1,1]$ to decide the best $n$. Using Wolfram Alpha I got $n=11$. Thus I need to use the first $11$ terms of the Taylor series of $\cos(x)$. Does this seem a reasonable approach?</p>
<p>If I am using some programming languages which don't contain $\pi$ as a constant, should I just define $\pi$ first and use the above method? Is there any other approach to this?</p>
<p>If I want to evaluate $\sin(\cos(x))$ at the point $x=3$, should I use above method to evaluate $\cos(x)$ first and then $\sin(\cos(x))$? Is there any other approach to this?</p>
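<p>A sketch of this plan in code (added for illustration; the hand-defined value of <span class="math-container">$\pi$</span> reflects the scenario of a language without a built-in constant):</p>

```python
import math

# Evaluate cos(3) by writing 3 = pi + xp with |xp| small, using
# cos(pi + t) = -cos(t) and the Taylor series of cos around 0.
PI = 3.14159265358979323846     # defined by hand, as the question contemplates
xp = 3.0 - PI                   # about -0.14159265358979312

def cos_taylor(t, terms=11):
    # Partial sum of cos(t) = sum_{k>=0} (-1)^k t^(2k) / (2k)!
    s, term = 0.0, 1.0
    for k in range(terms):
        s += term
        term *= -t * t / ((2 * k + 1) * (2 * k + 2))
    return s

approx = -cos_taylor(xp)        # cos(3) = cos(pi + xp) = -cos(xp)
print(abs(approx - math.cos(3.0)) < 1e-10)   # True
```

<p>Comparing against the library value shows the result is accurate well beyond seven decimal places; for <span class="math-container">$\sin(\cos(x))$</span> one would compose a similar Taylor evaluation of <span class="math-container">$\sin$</span> with this result.</p>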
| dineshdileep | 41,541 | <p>The best place to look is this <a href="http://en.wikipedia.org/wiki/Perron%E2%80%93Frobenius_theorem" rel="nofollow noreferrer">wiki link</a>. To add to the other answer, another equivalent condition is that for every index $[i,j]$, there should be a $m$ such that $(A^m)_{ij}>0$ which is naturally satisfied if the matrix entries are all positive. If it is non-negative, then one needs to check other things. </p>
|
1,101,371 | <p>Any book that I find on abstract algebra is somehow advanced and not OK for self-learning. I am high-school student with high-school math knowledge. Please someone tell me a book can be fine on abstract algebra? Thanks a lot. </p>
| user153330 | 153,330 | <p>I'm astonished that this wasn't mentioned:</p>
<blockquote>
<p><a href="http://rads.stackoverflow.com/amzn/click/1939512018" rel="nofollow noreferrer"><em>Learning Modern Algebra</em>
<strong>From Early Attempts to Prove</strong>
<strong><em>Fermat’s Last Theorem</em></strong> By Al Cuoco and Joseph J. Rotman</a></p>
</blockquote>
<p>This book is designed for prospective and practicing high school mathematics teachers, but it can serve as a text for standard abstract algebra courses as well. The presentation is organized historically: the Babylonians introduced Pythagorean triples to teach the Pythagorean theorem; these were classified by Diophantus, and eventually this led Fermat to conjecture his Last Theorem. The text shows how much of modern algebra arose in attempts to prove this; it also shows how other important themes in algebra arose from questions related to teaching. Indeed, modern algebra is a very useful tool for teachers, with deep connections to the actual content of high school mathematics, as well as to the mathematics teachers use in their profession that doesn't necessarily "end up on the blackboard."</p>
<p>The focus is on number theory, polynomials, and commutative rings. Group theory is introduced near the end of the text to explain why generalizations of the quadratic formula do not exist for polynomials of high degree, allowing the reader to appreciate the more general work of Galois and Abel on roots of polynomials. Results and proofs are motivated with specific examples whenever possible, so that abstractions emerge from concrete experience. Applications range from the theory of repeating decimals to the use of imaginary quadratic fields to construct problems with rational solutions. While such applications are integrated throughout, each chapter also contains a section giving explicit connections between the content of the chapter and high school teaching.</p>
<p>Table of Contents</p>
<ol>
<li>Early Number Theory</li>
<li>Induction</li>
<li>Renaissance</li>
<li>Modular Arithmetic</li>
<li>Abstract Algebra</li>
<li>Arithmetic of Polynomials</li>
<li>Quotients, Fields, and Classical Problems</li>
<li>Cyclotomic Integers</li>
<li>Epilog References</li>
</ol>
<p><img src="https://i.stack.imgur.com/Wvulb.jpg" alt="enter image description here"></p>
|
291,684 | <p>Linear ODE systems $x'=Ax$ are well understood. Suppose I have a quadratic ODE system where each component satisfies $x_i'=x^T A_i x$ for given matrix $A_i$. What resources, textbooks or papers, are there that study these systems thoroughly? My guess is that they aren't completely understood, but it would be good to know more about what has been done.</p>
| Artem | 29,547 | <p>You are right, these systems are not completely understood even in the simple case of dimension $n=2$ (i.e., on the plane). You can check out <a href="http://rads.stackoverflow.com/amzn/click/1441940243" rel="noreferrer">this book</a> on planar quadratic differential equations. In general you can consider systems of the form
$$
\dot x_i=x_i(c_i+(Ax)_i),
$$
which is slightly less general than you are asking for (the notation $(Ax)_i$ means the $i$th element of the vector $Ax$). These are the so-called Lotka--Volterra systems in mathematical ecology. There is an equivalent system defined on the simplex
$$
\dot p_i=p_i((Ap)_i-p\cdot Ap),
$$
which is called the replicator equation. There are a lot of open questions about these equations. A good book to look for some existing theory is <a href="http://rads.stackoverflow.com/amzn/click/052162570X" rel="noreferrer">Evolutionary games and population dynamics</a> (or just google the names). </p>
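<p>As a concrete illustration (my addition, not taken from the cited books), the replicator equation can be integrated with a simple forward-Euler scheme; the payoff matrix below is a hypothetical example:</p>

```python
# Forward-Euler integration of the replicator equation
#   p_i' = p_i * ((A p)_i - p . A p)
# for a hypothetical rock-paper-scissors-style payoff matrix A.
A = [[0.0, 1.0, -1.0],
     [-1.0, 0.0, 1.0],
     [1.0, -1.0, 0.0]]

p = [0.5, 0.3, 0.2]
dt = 0.001
for _ in range(10_000):
    Ap = [sum(A[i][j] * p[j] for j in range(3)) for i in range(3)]
    avg = sum(p[i] * Ap[i] for i in range(3))
    p = [p[i] * (1.0 + dt * (Ap[i] - avg)) for i in range(3)]

# The dynamics stay on the simplex: components remain positive and
# (up to floating-point rounding) sum to 1.
print(round(sum(p), 9), all(x > 0 for x in p))   # 1.0 True
```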
|
701,122 | <p>Part 1
Let $f(x) = ax^n$, where $a$ is any real number. Prove that $f$ is even if $n$ is an even integer. (Integers can be negative too)</p>
<p>Part 2
Prove that if you add any two even functions, you get an even function</p>
<p>I'm confused as to how you would prove adding two even functions would get you an even function.</p>
| Community | -1 | <p>Part 1)
$$f(x)=ax^n$$
So, let $n=2k$. Then,
$$f(x)=ax^n=a(x^2)^k$$
Obviously, $x^2$ is an even function ($(-x)^2=x^2$). So,
$$f(x)=ax^n=a(x^2)^k$$
is an even function.</p>
<p>Part 2)
Let $f(x),g(x)$ be two even functions. Then,
$$f(x)=f(-x),\qquad g(x)=g(-x)$$
Adding the two gives
$$f(x)+g(x)=h(x)$$
Using the above relation then gives
$$f(x)+g(x)=h(x)=f(-x)+g(-x)$$
The last expression is $h(-x)$. We have therefore shown that $h(x)$ is even. Q.E.D.</p>
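<p>A quick numerical spot-check of both parts (my addition):</p>

```python
# f(x) = a x^n is even for even n (including negative even n), and the
# sum of two even functions is even.  x = 0 is excluded from the sample
# points so that negative powers are defined.
xs = [0.25, 0.5, 1.0, 2.0, 3.5]

def f(x): return 3.0 * x ** (-2)    # a x^n with a = 3, n = -2 (even)
def g(x): return 5.0 * x ** 4       # another even function
def h(x): return f(x) + g(x)        # their sum

ok_f = all(abs(f(-x) - f(x)) < 1e-12 for x in xs)
ok_h = all(abs(h(-x) - h(x)) < 1e-12 for x in xs)
print(ok_f, ok_h)   # True True
```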
|
3,055,904 | <p>Everyone, please help me!</p>
<p>Find all positive <span class="math-container">$x$</span>
such that
<span class="math-container">$x^2+5x+2$</span> is the perfect square of an integer.
First I supposed <span class="math-container">$x^2+5x+2= y^2$</span> and subtracted <span class="math-container">$y^2$</span> from both sides,
which gave <span class="math-container">$(x-y)(x+y)=-5x-2$</span>,
but I can't work through it case by case.</p>
| clathratus | 583,016 | <p>Not yet a complete answer</p>
<p><span class="math-container">$$P=\prod_{n=1}^{\infty}(1-e^{-18\pi n})$$</span>
I don't think it is possible without special functions, but it is worthwhile to recall the definition of the Euler function
<span class="math-container">$$\phi(q)=\prod_{n=1}^{\infty}(1-q^n)$$</span>
Immediately we recognize the product in question as
<span class="math-container">$$P=\phi(e^{-18\pi})$$</span>
Which may help, because Ramanujan found that
<span class="math-container">$$\phi(e^{-\pi})=\frac{e^{\pi/24}}{2^{7/8}}Y$$</span>
<span class="math-container">$$\phi(e^{-2\pi})=\frac{e^{\pi/12}}{2}Y$$</span>
<span class="math-container">$$\phi(e^{-4\pi})=\frac{e^{\pi/6}}{2^{11/8}}Y$$</span>
<span class="math-container">$$\phi(e^{-8\pi})=\frac{e^{\pi/3}}{2^{29/16}}(\sqrt{2}-1)^{1/4}Y$$</span>
Where <span class="math-container">$$Y=\frac{\Gamma(1/4)}{\pi^{3/4}}$$</span>
There may be a pattern to these closed forms, so there could still be hope...</p>
<p>Also we may see that
<span class="math-container">$$
\begin{align}
\log P=&\sum_{n=1}^{\infty}\log(1-e^{-18\pi n})\\
=&\sum_{n=1}^{\infty}\log\bigg(\frac{e^{18\pi n}-1}{e^{18\pi n}}\bigg)\\
=&\sum_{n=1}^{\infty}\log(e^{18\pi n}-1)-18\pi n\\
\end{align}
$$</span>
Which may or may not help</p>
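As a numeric sanity check (not part of the question, just standard-library floating point), one can verify the first of the quoted special values, <span class="math-container">$\phi(e^{-\pi})=\frac{e^{\pi/24}}{2^{7/8}}\cdot\frac{\Gamma(1/4)}{\pi^{3/4}}$</span>, to machine precision:

```python
import math

def euler_phi(q, terms=60):
    """Partial product for the Euler function phi(q) = prod_{n>=1} (1 - q^n)."""
    prod = 1.0
    for n in range(1, terms + 1):
        prod *= 1.0 - q**n
    return prod

q = math.exp(-math.pi)   # q ~ 0.0432, so 60 terms are far more than enough
numeric = euler_phi(q)
closed_form = (math.exp(math.pi / 24) / 2**(7 / 8)) * math.gamma(0.25) / math.pi**0.75
assert abs(numeric - closed_form) < 1e-9
```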
|
3,055,904 | <p>Everyone help me please!</p>
<p>Find all positive <span class="math-container">$x$</span> such that <span class="math-container">$x^2+5x+2$</span> is the perfect square of an integer.
First I supposed <span class="math-container">$x^2+5x+2= y^2$</span> and subtracted <span class="math-container">$y^2$</span> from both sides,
getting <span class="math-container">$(x-y)(x+y)=-2x-5$</span>,
but I can't work through it case by case.</p>
| Paramanand Singh | 72,031 | <p>You need to go step by step. Let <span class="math-container">$$F(q) =\prod_{n=1}^{\infty} (1-q^{2n})\tag{1}$$</span> where <span class="math-container">$0<q<1$</span>. Then you wish to seek the value of <span class="math-container">$F(q^9)$</span> where <span class="math-container">$q=e^{-\pi} $</span>. Fortunately Ramanujan gave relations between <span class="math-container">$F(q), F(q^3)$</span> and <span class="math-container">$F(q^9)$</span> so that <span class="math-container">$F(q^9)$</span> can be evaluated in closed form if the value of <span class="math-container">$F(q) $</span> is known.</p>
<p>The value of <span class="math-container">$F(q) $</span> for <span class="math-container">$q=e^{-\pi} $</span> is well known and can be obtained using the link between such functions and elliptic integrals. Thus if <span class="math-container">$$\eta(q) =q^{1/12}F(q)\tag{2}$$</span> and <span class="math-container">$k$</span> is the elliptic modulus corresponding to nome <span class="math-container">$q$</span> and <span class="math-container">$K$</span> the corresponding complete elliptic integral then we have <span class="math-container">$$\eta(q) =2^{-1/3}\sqrt{\frac{2K}{\pi}}(kk')^{1/6}\tag{3}$$</span> If <span class="math-container">$q=e^{-\pi} $</span> then <span class="math-container">$k=k'=1/\sqrt{2}$</span> and <span class="math-container">$K=\Gamma^2(1/4)/4\sqrt{\pi}$</span> and thus <span class="math-container">$$\eta(q) =\frac{\Gamma(1/4)}{2\pi^{3/4}}$$</span> and hence <span class="math-container">$$F(q) =e^{\pi/12}\cdot \frac{\Gamma(1/4)}{2\pi^{3/4}}\tag{4}$$</span> If <span class="math-container">$L, l $</span> correspond to <span class="math-container">$q^3$</span> then we have <span class="math-container">$$\eta(q^3)=2^{-1/3}\sqrt{\frac{2L}{\pi}}(ll')^{1/6}$$</span> From <a href="https://math.stackexchange.com/a/2596065/72031">this answer</a> the values <span class="math-container">$L, l, l'$</span> are known and we have thus <span class="math-container">$$\eta(q^{3})=\frac{\Gamma(1/4)}{\pi^{3/4}}\frac{\sqrt[4]{3+2\sqrt{3}}\sqrt[3]{2-\sqrt{3}}}{2\sqrt{3}}$$</span> You should check that the above expression matches with the value given in <a href="https://en.wikipedia.org/wiki/Dedekind_eta_function" rel="nofollow noreferrer">Wikipedia</a> for <span class="math-container">$\eta(3i)$</span> and we thus have <span class="math-container">$$\eta(q^3)=\frac{\Gamma (1/4)}{2\sqrt[3]{3}\sqrt[12]{3+2\sqrt{3}}\pi^{3/4}}$$</span> so that <span class="math-container">$$F(q^3)=e^{\pi/4}\cdot \frac{\Gamma 
(1/4)}{2\sqrt[3]{3}\sqrt[12]{3+2\sqrt{3}}\pi^{3/4}}$$</span> Now we need to use Ramanujan's identity <span class="math-container">$$1+9q^2\cdot\frac{F^3(q^9)}{F^3(q)}=\left(1+27q^2\cdot\frac{F^{12}(q^3)}{F^{12}(q)}\right) ^{1/3}\tag{5}$$</span> and we can get the value <span class="math-container">$F(q^9)$</span> in closed form.</p>
<p>We have <span class="math-container">$$1+27q^2\frac {F^{12}(q^3)}{F^{12}(q)}=\frac{2\sqrt{3}+6}{9}$$</span> and hence <span class="math-container">$$F(q^9)=e^{3\pi/4}\cdot\frac{\sqrt[3]{\sqrt[3]{18+6\sqrt{3}}-3}}{6}\cdot\frac{\Gamma (1/4)}{\pi^{3/4}}$$</span> Note that the first term of your product is practically equal to <span class="math-container">$1$</span> and the above complicated closed form expression is thus approximately equal to <span class="math-container">$1$</span>. To be more precise equating the above expression with <span class="math-container">$1$</span> gives the value of <span class="math-container">$\Gamma(1/4)$</span> correct to 23 places of decimals as <span class="math-container">$$\Gamma(1/4)=3.625609908221908311930686156\dots$$</span> whereas the correct value is <span class="math-container">$$\Gamma(1/4)=3.625609908221908311930685155\dots$$</span></p>
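A quick floating-point check of the final closed form (my addition, using only the standard library): the product itself differs from <span class="math-container">$1$</span> by about <span class="math-container">$3\times10^{-25}$</span>, so in double precision both sides should agree to machine precision.

```python
import math

# Product F(q^9) = prod_{n>=1} (1 - e^{-18*pi*n}); in double precision this is
# exactly 1.0, since e^{-18*pi} ~ 3e-25 is far below one ulp of 1.0.
product = 1.0
for n in range(1, 20):
    product *= 1.0 - math.exp(-18 * math.pi * n)

# Closed form from the answer above.
closed_form = (math.exp(3 * math.pi / 4)
               * ((18 + 6 * math.sqrt(3)) ** (1 / 3) - 3) ** (1 / 3) / 6
               * math.gamma(0.25) / math.pi ** 0.75)
assert abs(product - closed_form) < 1e-9
```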
|
1,007,309 | <p><img src="https://i.stack.imgur.com/xJKtU.png" alt="enter image description here"></p>
<p>I cannot understand part ii) in this solution. I cannot see the significance of points arbitrarily close to $0$ for which $|\sin(\frac{1}{x_n})|=1$.</p>
| Mr. Barrrington | 73,431 | <p>Well, if $g^{-1}(B)$ were open, it should contain an $\varepsilon$-ball with center $0$ for some $\varepsilon>0$, since $0\in g^{-1}(B)$. But for all $\varepsilon>0$ you find a point in this ball which maps to $1$, hence the ball cannot lie in $g^{-1}((-\frac{1}{2},\frac{1}{2}))$!</p>
|
144,364 | <p>I have been in a debate over <a href="http://9gag.com/" rel="noreferrer">9gag</a> with this new comic: <a href="http://9gag.com/gag/4145133" rel="noreferrer">"The Origins"</a></p>
<p><img src="https://i.stack.imgur.com/BOEnP.jpg" alt=""-1 doesn't have a square root?" "Here come imaginary numbers""></p>
<p>And I thought, "haha, that's funny, because I know $i = \sqrt{-1}$".</p>
<p>And then, this comment cast a doubt:</p>
<blockquote>
<p>There is no such thing as sqrt(-1). The square root function is only
defined for positive numbers. Sorry...</p>
</blockquote>
<p>This wasn't by ignorance of complex numbers. In the short round of arguing that happened there, the guys insisted that negative numbers don't have square roots, and that $i$ is defined such that $-1 = i^2$, which is not the same as $i = \sqrt{-1}$. In their opinion, too, no respectable math textbook should <em>ever</em> say that $i = \sqrt{-1}$; which is precisely <a href="http://mathworld.wolfram.com/ImaginaryUnit.html" rel="noreferrer">how Wolfram MathWorld defines $i$</a>.</p>
<p>Is there a consensus over if negative numbers have a square root? Are there some gotchas or disputes with the definitions I should be aware of in the future?</p>
| Kaz | 28,530 | <p>The square root of a number $k$ may be regarded as a root of the polynomial $z^2 - k$. The fundamental theorem of algebra tells us that we can write $z^2 - k$ as $(z - m)(z - n)$, where $m$ and $n$ are the roots which this polynomial necessarily has, being of degree two. $z$, $k$, $m$ and $n$ are all complex. It's possible that $m$ and $n$ are the same: that is, there is a root with multiplicity two. For $z^2 - k$ this happens when $k = 0$.</p>
<p>So basically, for all $k$ everywhere on the complex plane, $z^2 - k$ has two roots, meaning that $k$ has two square roots, except for $k = 0$, where $k$ has two roots that are both equal to zero.</p>
<p>Square root is defined exactly as much for the real numbers as for the complex numbers. There are two square roots of four, namely ${-2, 2}$ and there are two square roots of $-1$, namely ${-i, i}$.</p>
<p>It is irrationally inconsistent to accept that there is a defined square root over the non-negative real number line, but not elsewhere.</p>
<p>(That kind of thinking seems like a remnant of the suspicion that imaginary numbers themselves are some kind of fraud, a view that is outdated by several hundred years.)</p>
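The two-roots picture is easy to check numerically (a sketch using Python's `cmath`, which returns one of the two roots as its principal value; the helper name is my own):

```python
import cmath

def square_roots(k):
    """Both roots of z**2 - k over the complex numbers."""
    r = cmath.sqrt(k)          # principal square root
    return r, -r

# Two square roots exist for every k, real or complex.
for k in (4, -1, 2 + 3j):
    r1, r2 = square_roots(k)
    assert abs(r1 * r1 - k) < 1e-12 and abs(r2 * r2 - k) < 1e-12

assert abs(square_roots(-1)[0] - 1j) < 1e-15   # principal root of -1 is i
```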
|
995,029 | <p>I want to find the power series of $\frac{1}{3!}$ in the field $\mathbb{Q}_3$.</p>
<p>In order to do this, do I have to solve the congruence $3!x \equiv 1 \pmod{3^n} \Rightarrow 6x \equiv 1 \pmod 3$?</p>
<p>If so, for $n=1$, we get $0 \equiv 1 \pmod 3$, that is a contradiction.</p>
<p>What does this mean?</p>
| Daniel Fischer | 83,702 | <p>It's not a power series but a Laurent series. You can express elements of $\mathbb{Q}_p$ as series</p>
<p>$$\sum_{k=m}^\infty a_k p^k,$$</p>
<p>where $m\in\mathbb{Z}$ and each $a_k$ is an integer between $0$ and $p-1$ inclusive.</p>
<p>For $\frac{1}{3!}$ in $\mathbb{Q}_3$, the shortcut way to find the series is to write</p>
<p>$$\frac{1}{3!} = \frac{1}{3}\cdot \frac{1}{2} = \frac{1}{3}\left(1 + \frac{1}{-2}\right) = \frac{1}{3}\left(1+\frac{1}{1-3}\right).$$</p>
<p>That form should give you an idea on how to continue.</p>
|
995,029 | <p>I want to find the power series of $\frac{1}{3!}$ in the field $\mathbb{Q}_3$.</p>
<p>In order to do this, do I have to solve the congruence $3!x \equiv 1 \pmod{3^n} \Rightarrow 6x \equiv 1 \pmod 3$?</p>
<p>If so, for $n=1$, we get $0 \equiv 1 \pmod 3$, that is a contradiction.</p>
<p>What does this mean?</p>
| Lubin | 17,760 | <p>Can I cut through the fog with a somewhat different approach? You can write elements of $\mathbb Z_p$ as infinite $p$-ary expansions going to the left, a typical one would look like
$$
\cdots a_4a_3a_2a_1a_0;
$$
which means $a_0+pa_1+p^2a_2+\cdots=\sum_{i\ge0}a_ip^i$, and as appropriate for a $p$-ary expansion, all the $a_i$ are integers between $0$ and $p-1$ inclusive. You can add, subtract, and multiply any two of these things using <em>exactly</em> the methods you learned in elementary school, the carries proceeding to the left.</p>
<p>In this notation, $-1$ has the expansion where all the entries are $p-1$. In particular, in $\mathbb Z_3$, $-1=\cdots22222;\,$. You can check that this is right by applying the formula for geometric series, $a/(1-r)$, where in this case $a=2$ and $r=3$, it evaluates to $-1$.</p>
<p>In particular, $-1/2=\cdots11111;\,$, and you get $1/2$ by adding $1$ to this expansion to get $1/2=\cdots111112;\,$.</p>
<p>Your question was to expand $1/6$, which is $3^{-1}\times1/2$. But you know how to divide by the radix $3$ in $3$-ary expansion: move the radix point one to the left, just as you divide by $10$ in decimal by moving the decimal point one to the left. So your expansion is $1/6=\cdots11111;2\,$. You can check that this is right by noticing that your expansion claims that $1/6=2/3-1/2$, which is just right.</p>
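The digit pattern is easy to confirm with modular arithmetic (a sketch of my own; `pow(2, -1, m)` requires Python 3.8+, and $1/6$ is the same digit string with the radix point moved one step: $\cdots11111;2$):

```python
def padic_digits(x_mod, p, k):
    """Base-p digits (least significant first) of an integer 0 <= x_mod < p**k."""
    digits = []
    for _ in range(k):
        digits.append(x_mod % p)
        x_mod //= p
    return digits

k = 10
half = pow(2, -1, 3**k)      # 1/2 in Z/3^k, i.e. the first k digits of 1/2 in Z_3
assert (2 * half) % 3**k == 1
assert padic_digits(half, 3, k) == [2] + [1] * (k - 1)   # ...111112;
```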
|
962,287 | <p>I am trying to isolate x in the equation $$(x-20)^{2} = -(y-40)^{2} - 525.$$ How can I do it?</p>
| Deepak | 151,732 | <p>$$(x-20)^{2} = -(y-40)^{2} - 525$$</p>
<p>Square root both sides:</p>
<p>$$x-20= \pm \sqrt{-(y-40)^{2} - 525}$$</p>
<p>Rearrange:</p>
<p>$$x= 20 \pm \sqrt{-(y-40)^{2} - 525}$$</p>
<p>Or equivalently,</p>
<p>$$x= 20 \pm i\sqrt{(y-40)^{2} + 525}$$</p>
|
4,572,517 | <p>Given two 3x3 matrix:</p>
<p><span class="math-container">$$
V=
\begin{bmatrix}
1 & 0 & 9 \cr
6 & 4 & -18 \cr
-3 & 0 & 13 \cr
\end{bmatrix}\quad
W=
\begin{bmatrix}
13 & 9 & 3 \cr
-14 & -8 & 2 \cr
5 & 3 & -1 \cr
\end{bmatrix}
$$</span></p>
<p>Is there any way to predict that <span class="math-container">$ V * W = W * V $</span>
without actually calculating both multiplications</p>
| egreg | 62,967 | <p>Your proof is fine. The hypothesis that <span class="math-container">$V$</span> is finite dimensional ensures the existence of the polynomial <span class="math-container">$p$</span>. But the argument doesn't need such existence. <em>If</em> such a polynomial exists, then its roots are eigenvalues.</p>
<p>Indeed, if <span class="math-container">$\dim V=n$</span>, then the vectors <span class="math-container">$v,Tv,T^2v,\dots,T^nv$</span> aren't linearly independent, so there are scalars <span class="math-container">$a_0,a_1,\dots,a_n$</span> not all zero such that
<span class="math-container">$$
a_0v+a_1Tv+\dots+a_nT^nv=0
$$</span>
So if <span class="math-container">$q(z)=a_0+a_1z+\dots+a_nz^n$</span>, we have <span class="math-container">$q(T)v=0$</span>. A nonzero polynomial exists, so there's a nonzero one of minimal degree.</p>
<p>It's not difficult to find a linear map <span class="math-container">$T\colon V\to V$</span> that has no eigenvalue (with <span class="math-container">$V$</span> infinite dimensional, of course), so for no vector <span class="math-container">$v\ne0$</span> such a polynomial exists.</p>
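A concrete finite-dimensional illustration of the argument (my own sketch with a small integer matrix, no linear-algebra library needed):

```python
# For T = [[0, 1], [1, 1]] and v = (1, 0), the vectors v, Tv, T^2v are
# linearly dependent: T^2 = T + I, so q(z) = z^2 - z - 1 satisfies q(T)v = 0.
def mat_vec(m, v):
    return (m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1])

T = ((0, 1), (1, 1))
v = (1, 0)
Tv = mat_vec(T, v)
TTv = mat_vec(T, Tv)

# q(T)v = T^2 v - T v - v
qTv = tuple(TTv[i] - Tv[i] - v[i] for i in range(2))
assert qTv == (0, 0)
```

The roots of $q(z)=z^2-z-1$ (the golden ratio and its conjugate) are indeed the eigenvalues of this $T$.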
|
19,586 | <p>I am looking for resources for teaching math modeling to high school teachers with rusty math background. It will be a 6-week course. Some tips/directions on simple projects would be helpful. I would want to introduce coding for numerically solving systems of ODEs.</p>
| ncr | 1,537 | <p>The information and handbooks at <a href="https://m3challenge.siam.org/resources" rel="nofollow noreferrer">SIAM Mathworks Modelling Challenge</a> are pretty helpful. One handbook introduces the reader to the modeling process and the second one introduces the reader to the communication of computation related to modeling (including ODEs via spreadsheets--you could probably generalize what they discuss for the logistic equation to systems). There are also links to sample problems from previous competitions.</p>
|
2,666,772 | <blockquote>
<p>$W$ = $\begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & -1 & 1 & -1 \\ 1 & 1 & -1 & -1 \\ 1 & -1 & -1 & 1 \end{bmatrix}$. Use W to build an 8x8 matrix encoding an orthonormal basis in $R^8$ by scaling A = $\begin{bmatrix} W & W \\ W & -W \end{bmatrix}$ in the right way.</p>
</blockquote>
<p>Am I wrong here, but is it not just this matrix beside each other 4 times? Doesn't that work?</p>
| Martin Argerami | 22,857 | <p>You have
$$
A^TA=\begin{bmatrix} W^T & W^T \\ W^T & -W^T \end{bmatrix}\begin{bmatrix} W & W \\ W & -W \end{bmatrix}=\begin{bmatrix}2W^TW&0\\ 0&2W^TW \end{bmatrix}.
$$
Since $W^TW=4I$, we get $A^TA=8I$. Then
$$
\frac1{2\sqrt2}\,\begin{bmatrix} W & W \\ W & -W \end{bmatrix}
$$
is an orthogonal matrix. </p>
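The scaling can be verified numerically (a short check of my own that $A^{\mathsf T}A = 8I$, so the columns of $A/(2\sqrt2)$ are orthonormal):

```python
W = [[1, 1, 1, 1],
     [1, -1, 1, -1],
     [1, 1, -1, -1],
     [1, -1, -1, 1]]

# Block matrix A = [[W, W], [W, -W]], built row by row.
A = [W[i] + W[i] for i in range(4)] + [W[i] + [-x for x in W[i]] for i in range(4)]

# A^T A, entry (i, j) = column i of A dotted with column j of A.
AtA = [[sum(A[k][i] * A[k][j] for k in range(8)) for j in range(8)] for i in range(8)]
assert all(AtA[i][j] == (8 if i == j else 0) for i in range(8) for j in range(8))
```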
|
4,005,522 | <p>Let <span class="math-container">$U$</span> be a connected open set of <span class="math-container">$\mathbb{R}^{n}$</span>. Consider <span class="math-container">$C^{\infty}(U)$</span> with the sup norm; we say <span class="math-container">$A\subset C^{\infty}(U)$</span> is a minimal dense subalgebra of <span class="math-container">$C^{\infty}(U)$</span> if and only if any subalgebra properly contained in <span class="math-container">$A$</span> is not dense in <span class="math-container">$C^{\infty}(U)$</span>. I want to ask: are the polynomial algebras of <span class="math-container">$U$</span>, either with coefficients in <span class="math-container">$\mathbb{Q}$</span> or in <span class="math-container">$\mathbb{R}\setminus\mathbb{Q}$</span>, the only two minimal dense subalgebras of <span class="math-container">$C^{\infty}(U)$</span>?</p>
| Ulli | 821,203 | <p>Concerning the third bullet:
It should definitely read as "whenever X is extremally disconnected <strong>and compact</strong>". Otherwise, <span class="math-container">$\mathbb{N}$</span> is extremally disconnected, but <span class="math-container">$\mathbb{N} \setminus \{1\}$</span> is not almost compact.
Furthermore, if <span class="math-container">$p$</span> is isolated in <span class="math-container">$X$</span>, then <span class="math-container">$X \setminus \{p\}$</span> is compact, hence not almost compact as defined above.
(Although almost compactness should arguably include compactness; to my knowledge, that is the most common understanding.)</p>
<p>Anyway, if <span class="math-container">$X$</span> is extremally disconnected, compact and <span class="math-container">$p\in X$</span>, <span class="math-container">$p$</span> not isolated in <span class="math-container">$X$</span>, then
<span class="math-container">$X \setminus \{p\}$</span> is almost compact:</p>
<p>We have to show, that <span class="math-container">$\beta (X \setminus \{p\}) = X$</span>, i.e. that if <span class="math-container">$A, B \subset X \setminus \{p\}$</span> are completely separated in <span class="math-container">$X \setminus \{p\}$</span>,
then <span class="math-container">$A, B$</span> are completely separated in <span class="math-container">$X$</span>:</p>
<p>There exist open, disjoint subsets <span class="math-container">$U, V$</span> of <span class="math-container">$X \setminus \{p\}$</span> such that <span class="math-container">$A \subset U$</span> and <span class="math-container">$B \subset V$</span>.
<span class="math-container">$U, V$</span> are also open in <span class="math-container">$X$</span>. Since <span class="math-container">$X$</span> is extremally disconnected, the closures of <span class="math-container">$U, V$</span> are disjoint and compact, hence completely separated.
Hence also <span class="math-container">$A, B$</span> are completely separated in <span class="math-container">$X$</span>.</p>
|
335,929 | <p>Let $A=\{1, 2,\dots,n\}$
What is the maximum possible number of subsets of $A$ with the property that any two of them have exactly one element in common ?</p>
<p>I strongly suspect the answer is $n$, but can't prove it.</p>
| Sean Eberhard | 23,805 | <p>This is a well known type of problem in combinatorics. (Try googling "exact intersections".) The (slight) generalisation of your claim, in which we require $|A\cap B|=\ell>0$ whenever $A\neq B$, is apparently due to Fisher.</p>
<p>Let $\mathcal{A}$ be a family of subsets of $\{1,2,\dots,n\}$ such that for every two distinct $A,B\in\mathcal{A}$ we have $|A\cap B|=\ell$. We claim $|\mathcal{A}|\leq n$. Certainly we're done if some $A\in\mathcal{A}$ has size $\ell$ (this is where we use $\ell>0$), so assume otherwise.</p>
<p>For each $A\in\mathcal{A}$ consider the "indicator vector" $1_A\in\mathbf{R}^n$ given by $1_A(x)=1$ if $x\in A$ and $0$ otherwise. I claim that $\{1_A : A\in\mathcal{A}\}$ is a linearly independent set, so $|\mathcal{A}|\leq n$.</p>
<p>Suppose $\lambda_A\in\mathbf{R}$ are some coefficients such that</p>
<p>$$\sum_{A\in\mathcal{A}} \lambda_A 1_A = 0.$$</p>
<p>Taking the scalar product with $1_B$, noting $1_A\cdot 1_B = \ell$ when $A\neq B$, we have</p>
<p>$$\lambda_B |B| + \sum_{A\neq B} \lambda_A \ell = 0.$$</p>
<p>Rearranging slightly,</p>
<p>$$\lambda_B (|B| - \ell) = - \ell\sum_{A\in\mathcal{A}} \lambda_A.$$</p>
<p>Conclusion: either $\sum\lambda_A = 0$, in which case every $\lambda_B=0$ (since $|B|>\ell$ for all $B\in\mathcal{A}$), or $\sum\lambda_A\neq 0$, in which case all $\lambda_B$ are nonzero and opposite in sign to $\sum\lambda_A$, impossible.</p>
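For small $n$ the bound can also be confirmed by brute force (my own exhaustive search for $n=4$, $\ell=1$, over subsets encoded as bitmasks; this is exponential, so only feasible for tiny $n$):

```python
n = 4
subsets = list(range(1, 1 << n))   # nonempty subsets of {1,...,n} as bitmasks

def popcount(x):
    return bin(x).count("1")

def largest_family(family, start):
    """Max size of a family with pairwise intersections of size exactly 1,
    extending `family` with subsets of index >= start."""
    best = len(family)
    for i in range(start, len(subsets)):
        s = subsets[i]
        if all(popcount(s & t) == 1 for t in family):
            best = max(best, largest_family(family + [s], i + 1))
    return best

# The bound |A| <= n is tight: e.g. {1}, {1,2}, {1,3}, {1,4}.
assert largest_family([], 0) == n
```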
|
564,195 | <p>Using continuity I was able to show the sequence $x_0 = 1$, $x_{n+1} = sin(x_n)$ converges to 0, but I was wondering if there was a way to prove it using only properties and theorems related to sequences and series, without using continuity.</p>
<p>So far, I know the sequence is monotonically decreasing and bounded below by 0, so it must converge to its infimum. From here I'm not exactly sure how to show 0 is the infimum of this set of numbers.</p>
<p>Alternatively, I could check convergence to $0$ by comparison, but no sequences come to mind that are greater than the given sequence for all $n \geq N$ and converge to $0$.</p>
<p>I've already seen the answers at the following:</p>
<p><a href="https://math.stackexchange.com/questions/45283/lim-n-to-infty-sin-sin-sin-n">Compute $ \lim\limits_{n \to \infty }\sin \sin \dots\sin n$</a> </p>
<p><a href="https://math.stackexchange.com/questions/524638/prove-that-sin-sin-sinx-converges-asymptotically-to-zero">Prove that $\sin(\sin...(\sin(x))..)$ converges asymptotically to zero</a> </p>
| Suraj M S | 85,213 | <p>Since the sequence is decreasing and bounded below, it converges, say to $L$. Letting $n \to \infty$ in $x_{n+1}=\sin(x_n)$ gives
$$\sin(L) =L$$
The only real solution is $L=0$ (for $L \neq 0$ we have $|\sin L| < |L|$), so
$$L=0$$</p>
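Numerically the iteration behaves exactly as the monotone-convergence argument predicts: strictly decreasing, bounded below by $0$, and tending to $0$ (roughly like $\sqrt{3/n}$). A quick check of my own:

```python
import math

x = 1.0
prev = float("inf")
for n in range(100000):
    assert 0 < x < prev     # strictly decreasing, bounded below by 0
    prev = x
    x = math.sin(x)

assert x < 0.01             # consistent with x_n ~ sqrt(3/n) -> 0
```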
|
41,883 | <p>Let $G$ be an abelian group, $A$ a trivial $G$-module. We know that $\text{Ext}(G,A)$ classifies abelian extensions of $G$ by $A$, whereas $H^2(G,A)$ classifies central extensions of $G$ by $A$. So we have a canonical inclusion $\text{Ext}(G,A)\hookrightarrow H^2(G,A)$. Is there some naturally arising exact sequence/spectral sequence which realizes this injection?</p>
<p>Usually this kind of thing can be explained by constructing a clever short exact sequence, but here I have no idea how one might compare $R^1\text{Hom}_\mathbb{Z}(G,\underline{\quad})$ with $R^2\text{Hom}_G(\mathbb{Z},\underline{\quad})$.</p>
| Francesco Polizzi | 7,460 | <p>Let $G$ and $A$ be groups and assume that $R \rightarrowtail F \twoheadrightarrow G$ is a presentation of $G$.</p>
<p>Then, by a general theorem of MacLane, there is an exact sequence</p>
<p>$\textrm{Hom}(F_{ab}, A) \to \textrm{Hom}(R/[R,F], A) \to H^2(G, A) \to 0$.</p>
<p>Moreover, if $G$ and $A$ are Abelian there is an exact sequence of Abelian groups</p>
<p>$\textrm{Hom}(F_{ab}, A) \to \textrm{Hom}(R/[F,F], A) \to \textrm{Ext}(G, A) \to 0$.</p>
<p>Therefore the inclusion $\textrm{Ext}(G, A) \to H^2(G,A)$ is induced by the natural homomorphism</p>
<p>$\textrm{Hom}(R/[F,F], A) \to \textrm{Hom}(R/[R,F], A)$,</p>
<p>i.e. by the natural homomorphism</p>
<p>$R/[R, F] \to R/[F, F]$.</p>
<p>See [Robinson, A Course in the Theory of Groups, Chapter 11] for more details.</p>
|
3,963,479 | <p>In a quadrilateral <span class="math-container">$ABCD$</span>, there is an inscribed circle centered at <span class="math-container">$O$</span>. Let <span class="math-container">$F,N,E,M$</span> be the points on the circle that touch the quadrilateral, such that <span class="math-container">$F$</span> is on <span class="math-container">$AB$</span>, <span class="math-container">$N$</span> is on <span class="math-container">$BC$</span>, and so on. It is known that <span class="math-container">$AF=5$</span> and <span class="math-container">$EC=3$</span>. Let <span class="math-container">$P$</span> be the intersection of <span class="math-container">$AC$</span> and <span class="math-container">$MN$</span>. Find the ratio <span class="math-container">$AP:PC$</span>.</p>
<p>I know that <span class="math-container">$AM=AF=5$</span> and <span class="math-container">$CN=CE=3.$</span> The answer is equivalent to the ratio of the areas <span class="math-container">$[ADP]:[DPC]$</span>. I cannot continue on from this point. Would anyone please help?<a href="https://i.stack.imgur.com/LXdXy.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LXdXy.png" alt="enter image description here" /></a></p>
| Quanto | 686,284 | <p><a href="https://i.stack.imgur.com/ceiEW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ceiEW.png" alt="enter image description here" /></a></p>
<p>Construct the line through $C$ parallel to the chord $MN$, meeting the extension of $AD$ at $X$. Then $XMNC$ is an isosceles trapezoid, implying $XM = CN = CE$. Note that the triangles $AMP$ and $AXC$ are similar, leading to</p>
<p><span class="math-container">$$\frac {AP}{PC} =\frac {AM}{MX} = \frac {AF}{CE} = \frac 53
$$</span></p>
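The ratio can be confirmed numerically by building an arbitrary tangential quadrilateral with the given tangent lengths (a sketch of my own; the incircle radius and the split of the remaining angle between $B$ and $D$ are arbitrary choices, and the ratio comes out $5:3$ regardless):

```python
import math

r = 2.0                                   # arbitrary incircle radius
u_A = math.atan(5 / r)                    # half-angle at A: AF = AM = r*tan(u_A) = 5
u_C = math.atan(3 / r)                    # half-angle at C: CN = CE = 3
rest = math.pi - u_A - u_C
u_B, u_D = 0.6 * rest, 0.4 * rest         # arbitrary split between B and D

# Tangency points in circular order F, N, E, M (on AB, BC, CD, DA).
phi_F = 0.0
phi_N = phi_F + 2 * u_B
phi_E = phi_N + 2 * u_C
phi_M = phi_E + 2 * u_D

def on_circle(phi):
    return (r * math.cos(phi), r * math.sin(phi))

def vertex(phi1, phi2):
    """Intersection of the tangent lines at angles phi1 < phi2."""
    mid, half = (phi1 + phi2) / 2, (phi2 - phi1) / 2
    d = r / math.cos(half)
    return (d * math.cos(mid), d * math.sin(mid))

A = vertex(phi_M, phi_F + 2 * math.pi)
C = vertex(phi_N, phi_E)
M, N = on_circle(phi_M), on_circle(phi_N)

# Intersect line AC with line MN: solve A + t*(C - A) = M + s*(N - M) for t.
ax, ay = C[0] - A[0], C[1] - A[1]
bx, by = M[0] - N[0], M[1] - N[1]
cx, cy = M[0] - A[0], M[1] - A[1]
t = (cx * by - cy * bx) / (ax * by - ay * bx)

ratio = t / (1 - t)                       # AP : PC
assert abs(ratio - 5 / 3) < 1e-9
```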
|
1,697,206 | <p>In the figure, $BG=10$, $AG=13$, $DC=12$, and $m\angle DBC=39^\circ$.</p>
<p>Given that $AB=BC$, find $AD$ and $m\angle ABC$.</p>
<p>Here is the figure:</p>
<p><a href="https://i.stack.imgur.com/u05wa.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/u05wa.jpg" alt="enter image description here"></a></p>
<p>I am inclined to say that since $\overline{AB}\simeq \overline{BC}$, both triangles share side $\overline{BD}$, and they also have a $90^\circ$ angle in common, then $AD=DC$ and $m\angle ABD=m\angle DBC=39^\circ$. However, I am not making the connection of exactly why my conclusion is true. </p>
<p>How can I show this is true without using trigonometry?</p>
<p>Thank you!</p>
| assefamaru | 262,896 | <p>Since triangle $ABC$ is isosceles, with $AB=BC$,</p>
<ol>
<li>To find $\angle ABC$, notice that $\angle BCD=51^\circ=\angle DAB$, hence $\angle ABD=39^\circ$, and $\angle ABC=39^\circ+39^\circ=78^\circ$.</li>
<li>To find $AD$: since $ABC$ is an isosceles triangle, and since $\angle ABD=\angle DBC$, the line $BD$ divides $AC$ into two equal parts, so $AD=DC$. Since $DC=12$, we get $AD=12$.</li>
</ol>
|
3,112,682 | <p>I was looking at</p>
<blockquote>
<p><em>Izzo, Alexander J.</em>, <a href="http://dx.doi.org/10.2307/2159282" rel="nofollow noreferrer"><strong>A functional analysis proof of the existence of Haar measure on locally compact Abelian groups</strong></a>, Proc. Am. Math. Soc. 115, No. 2, 581-583 (1992). <a href="https://zbmath.org/?q=an:0777.28006" rel="nofollow noreferrer">ZBL0777.28006</a>.</p>
</blockquote>
<p>which proves existence of the Haar-measure for locally compact abelian groups using the Markov-Kakutani theorem. </p>
<p>What I find strange is that the Haar measure is constructed as an element of the dual of <span class="math-container">$C_c(X)$</span>. But for noncompact <span class="math-container">$X$</span> (such as <span class="math-container">$X$</span> being the real numbers <span class="math-container">$\Bbb R$</span>) this must be an unbounded functional (as the Lebesgue-measure on <span class="math-container">$\Bbb R$</span> is not finite). It seems like the author has no problem with this, and (without mentioning it further) goes on to define a weak-* topology for this case and even uses Banach-Alaoglu.</p>
<p>I have not seen this being done this way before, am I misunderstanding something or can one define a weak-* topology on the algebraic dual of a TVS without any problems?</p>
| Bumblebee | 156,886 | <p>Assume the required quantity is <span class="math-container">$2\cos n\theta$</span> (you can see this by squaring the given identity). Then apply induction using the identity <span class="math-container">$$x^{n+1}+\dfrac{1}{x^{n+1}}=\left(x+\dfrac1x\right)\left(x^{n}+\dfrac{1}{x^{n}}\right)-\left(x^{n-1}+\dfrac{1}{x^{n-1}}\right).$$</span> </p>
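Numerically, with $x=e^{i\theta}$ one has $x^n + x^{-n} = 2\cos n\theta$, and the recursion reproduces it (a quick `cmath` check of my own):

```python
import cmath, math

theta = 0.7
x = cmath.exp(1j * theta)

# p_n = x^n + x^{-n}; the recursion above reads p_{n+1} = p_1 * p_n - p_{n-1}.
p_prev, p_curr = 2.0, x + 1 / x           # p_0 = 2, p_1 = 2*cos(theta)
for n in range(1, 10):
    assert abs(p_curr - 2 * math.cos(n * theta)) < 1e-12
    p_prev, p_curr = p_curr, (x + 1 / x) * p_curr - p_prev
```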
|
273,219 | <p>I have been studying for the AP BC Calculus exam (see this <a href="https://math.stackexchange.com/questions/272628/how-know-which-direction-a-particle-is-moving-on-a-polar-curve">previous question</a>) and most of the questions that deal with the first derivative in polar coordinates say that if ${dr\over d\theta}<0$ and $r>0$, then the graph (in polar coordinates) is moving closer to the origin. </p>
<p>What about $r = 4-\theta$, which has $\frac{\mathrm{d}r}{\mathrm{d}\theta} = -1$? </p>
<p>Here is the graph: </p>
<p><img src="https://i.stack.imgur.com/4L1Ct.gif" alt="wolfram alpha"></p>
<p>Does this disprove the statement?</p>
| Thomas Andrews | 7,933 | <p>Just looking at finite simple groups, the <a href="http://en.wikipedia.org/wiki/Classification_of_finite_simple_groups" rel="nofollow">classification is fairly complex</a>, and I doubt you'd call all of the examples, "familiar."</p>
<p>In particular, the odd examples are the so-called <a href="http://en.wikipedia.org/wiki/Sporadic_group" rel="nofollow">Sporadic Groups</a>. These are $26$ specific finite groups that do not fit into broad infinite classes. Indeed, some of them were initially discovered during the attempt to classify the finite simple groups - they were not initially found in terms of the groups of symmetries of any mathematical object.</p>
<p>One of the stranger results is that almost all groups of order less than 2000 have order 1024. There are $49,487,365,422$ distinct groups of order $1024$. That is more than 99% of the distinct groups of order less than 2000. Certainly, <em>some</em> of these must not be "familiar."</p>
|
3,082,873 | <p>Consider the set of all boolean square matrices of order <span class="math-container">$3 \times 3$</span> as shown below where a,b,c,d,e,f can be either 0 or 1.</p>
<p><span class="math-container">$\begin{bmatrix}
a&b&c\\
0&d&e\\
0&0&f
\end{bmatrix}$</span></p>
<p>Out of all possible boolean matrices of above type, a matrix is chosen at random.The probability that the matrix is singular is?</p>
<p>My Work</p>
<p>The above matrix is a triangular matrix and it's determinant can be 0, if either one or more from a,d and f are 0 in which case 0 will be an eigen value of the matrix and hence determinant 0.</p>
<p>Number of ways in which I can set <span class="math-container">$a,d,f$</span> to zero are: <span class="math-container">$\binom{3}{1}+\binom{3}{2}+\binom{3}{3}=7$</span> ways.</p>
<p>Now, total given boolean matrices possible are</p>
<p><span class="math-container">$2^6=64$</span></p>
<p>So, the required probability must be <span class="math-container">$\frac{7}{64}$</span></p>
<p>Is my answer correct?</p>
| Siong Thye Goh | 306,553 | <p>To be singular, we need <span class="math-container">$a=0$</span> or <span class="math-container">$d=0$</span> or <span class="math-container">$f=0$</span>.</p>
<p>To be non-singular, we need <span class="math-container">$a=d=f=1$</span>.</p>
<p>Hence, the probability is <span class="math-container">$$1-\frac{1}{2^3}=\frac{7}{8}.$$</span></p>
<p>Note that the values that <span class="math-container">$b,c,e$</span> takes are irrelevant. Number of ways to get a singular matrix is <span class="math-container">$7 \times 8=56$</span>.</p>
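A brute-force enumeration over all $64$ matrices confirms the count (a sketch of my own, using the cofactor expansion of the $3\times3$ determinant rather than the triangular shortcut, as an independent check):

```python
from itertools import product

def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

singular = 0
for a, b, c, d, e, f in product((0, 1), repeat=6):
    m = [[a, b, c], [0, d, e], [0, 0, f]]
    if det3(m) == 0:
        singular += 1

assert singular == 56          # probability 56/64 = 7/8
```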
|
1,229,467 | <p>I tried substituting the $4$ into the $3x-5$ equation, so my slope would be represented as $3(4)-5 = 7$. Then my equation for the line would be $y-3 = 7(x-4)$. That means the equation of the line would be $y = 7x - 25$. However, I'm trying to submit this answer online and it comes back as incorrect. Any help would be appreciated, thank you.</p>
| user46372819 | 86,425 | <p>You have the slope is $7$ at the point $(4,3)$ which is correct. But, what you found was the equation of the tangent line to the curve at the point $(4,3)$.</p>
<p><strong>SOLUTION:</strong> We need to integrate $\frac{dy}{dx}$ to get the function of the original curve. So,
$$y=\int (3x-5) \ dx=\frac{3}{2}x^2-5x+C.$$</p>
<p>Now, we plug in the point $(4,3)$ to find $C$.
$$3=\frac{3}{2}(4^2)-5(4)+C.$$</p>
<p>So, $C=-1$ and $$y=\frac{3}{2}x^2-5x-1.$$</p>
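A quick check of my own that the recovered curve passes through $(4,3)$ and has slope $3x-5$ (numeric differentiation with a small central-difference step):

```python
def y(x):
    return 1.5 * x**2 - 5 * x - 1

assert y(4) == 3.0                          # passes through (4, 3)

h = 1e-6
for x in (-2.0, 0.0, 1.5, 4.0):
    slope = (y(x + h) - y(x - h)) / (2 * h)
    assert abs(slope - (3 * x - 5)) < 1e-6  # dy/dx = 3x - 5
```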
|
682,741 | <p>Use the Mean Value Theorem to prove that if $p>1$ then $(1+x)^p>1+px$ for $x \in (-1,0)\cup (0,\infty)$</p>
<p>How do I go about doing this?</p>
| Paramanand Singh | 72,031 | <p><strong>Hint</strong>: Try $f(x) = (1 + x)^{p}$ and then use $f(x) - f(0) = xf'(c)$. Clearly $f'(x) = p(1 + x)^{p - 1}$.</p>
<blockquote class="spoiler">
<p> If $x > 0$ then clearly we have $0 < c < x$ and therefore $(1 + c) > 1$ and that implies $f'(c) = p(1 + c)^{p - 1} > p$ so that $f(x) - f(0) = xf'(c) > px$. If $-1 < x < 0$ then $-1 < x < c < 0$ so that $0 < (1 + c) < 1$ and hence $f'(c) = p(1 + c)^{p - 1} < p$. Since $x < 0$ we get $xf'(c) > px$ and thus we get $f(x) - f(0) > px$ in this case also. Since $f(0) = 1$ it follows that we have $(1 + x)^{p} = f(x) > 1 + px$ for all $x \in (-1, 0)\cup(0, \infty)$.</p>
</blockquote>
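The strict inequality is easy to spot-check numerically over both intervals (samples only, not a proof; my own addition):

```python
# (1 + x)^p > 1 + p*x for p > 1 and x in (-1, 0) or (0, infinity).
for p in (1.5, 2.0, 3.7):
    for x in (-0.9, -0.5, -0.1, 0.1, 1.0, 10.0):
        assert (1 + x)**p > 1 + p * x
```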
|
129,530 | <p>This book, which needs to be returned quite soon, has a problem I don't know where to start. How do I find a 4 parameter solution to the equation</p>
<p>$x^2+axy+by^2=u^2+auv+bv^2$</p>
<p>The title of the section this problem comes from is entitled (as this question is titled) "Numbers of the Form $x^2+axy+by^2$", yet it deals almost exclusively with numbers of the form $x^2+y^2$. It looks like almost an afterthought or a preview of what's to come where it gives the formula</p>
<p>$(m^2+amn+bn^2)(p^2+apq+bq^2)=r^2+ars+bs^2,r=mp-bnq,s=np+mq+anq$</p>
<p>Then 6 of the 7 problems use this form. The first few involve solving the form $z^k=x^2+axy+by^2$, which I quickly figured out are solved by letting $z=u^2+auv+bv^2$, then using the above formula to get higher powers. So for $z^2$ for example, I set $m=p=u$ and $n=q=v$ to get $x$ and $y$ in terms of $u$ and $v$. But for this problem, I'm drawing a blank.</p>
| Will Jagy | 10,400 | <p>Oh, well. This is from page 57 of <em>Binary Quadratic Forms</em> by Duncan A. Buell. As long as $$ \gcd(a_1, a_2, B) = 1 $$ we have
$$ (a_1 x_1^2 + B x_1 y_1 + a_2 C y_1^2) (a_2 x_2^2 + B x_2 y_2 + a_1 C y_2^2) = a_1 a_2 X^2 + B X Y + C Y^2, $$
where we take
$$ X = x_1 x_2 - C y_1 y_2, \; \; \; \; Y = a_1 x_1 y_2 + a_2 x_2 y_1 + B y_1 y_2. $$
This is Dirichlet's "united forms" recipe for composition, with
$$ \langle a_1,B, a_2 C \rangle \; \langle a_2,B, a_1 C \rangle \; = \; \langle a_1 a_2,B, C \rangle $$ in the form class group. At least, it is a group when
$$ B^2 - 4 a_1 a_2 C $$ is not a square.</p>
<p>So a four parameter identity with parameters $x_1, y_1, x_2, y_2$ would be
$$ (a_1 x_1^2 + B x_1 y_1 + a_2 C y_1^2) (a_2 x_2^2 + B x_2 y_2 + a_1 C y_2^2) = a_1 a_2 (x_1 x_2 - C y_1 y_2)^2 + B(x_1 x_2 - C y_1 y_2)(a_1 x_1 y_2 + a_2 x_2 y_1 + B y_1 y_2) + C (a_1 x_1 y_2 + a_2 x_2 y_1 + B y_1 y_2)^2. $$</p>
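A brute-force verification of the identity in exact integer arithmetic (my own script, with $X=x_1x_2-Cy_1y_2$ and $Y=a_1x_1y_2+a_2x_2y_1+By_1y_2$ as in Dirichlet's recipe; the parameter ranges are arbitrary):

```python
# Check Dirichlet's united-forms identity over a small integer grid.
# Everything is exact integer arithmetic, so any counterexample in the
# grid would be caught by the assertion.

def composes(a1, a2, B, C, x1, y1, x2, y2):
    lhs = (a1*x1**2 + B*x1*y1 + a2*C*y1**2) * (a2*x2**2 + B*x2*y2 + a1*C*y2**2)
    X = x1*x2 - C*y1*y2
    Y = a1*x1*y2 + a2*x2*y1 + B*y1*y2
    return lhs == a1*a2*X**2 + B*X*Y + C*Y**2

rng = range(-2, 3)
for a1 in (1, 2):
    for a2 in (1, 3):
        for B in (0, 1, 5):
            for C in (1, 2):
                for x1 in rng:
                    for y1 in rng:
                        for x2 in rng:
                            for y2 in rng:
                                assert composes(a1, a2, B, C, x1, y1, x2, y2)
```

Expanding both sides shows this is in fact a polynomial identity in all eight variables; the condition $\gcd(a_1,a_2,B)=1$ matters only for the class-group interpretation.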
|
3,649,221 | <blockquote>
<p>Suppose <span class="math-container">$\{f_n\}$</span> is an equicontinuous sequence of functions defined on <span class="math-container">$[0,1]$</span> and <span class="math-container">$\{f_n(r)\}$</span> converges <span class="math-container">$∀r ∈ \mathbb{Q} \cap [0, 1]$</span>. Prove that <span class="math-container">$\{f_n\}$</span> converges uniformly on
<span class="math-container">$[0, 1]$</span>.</p>
</blockquote>
<p>Since I know that <span class="math-container">$\mathbb{Q} \cap [0, 1]$</span> is not compact, I am a bit stuck on my proof.</p>
<p>So far I have:</p>
<p>Let <span class="math-container">$f_n \to f$</span> pointwise on <span class="math-container">$\mathbb{Q} \cap [0, 1]$</span> </p>
<p>Since <span class="math-container">$\{f_n\}_n$</span> is equicontinuous and point-wise bounded (it’s pointwise convergent, so in particular), there exists a subsequence <span class="math-container">$\{f_{n_k}\}_k$</span> such that <span class="math-container">$f_{n_k} \to f$</span> uniformly. </p>
<p>Since each <span class="math-container">$f_n$</span> is continuous, <span class="math-container">$f$</span> is then continuous.</p>
<p>Now take <span class="math-container">$\varepsilon > 0$</span>. Using equicontinuity of <span class="math-container">$\{f_n\}_n$</span>, we find <span class="math-container">$\delta_1 > 0$</span> such that if <span class="math-container">$d(x, y) < δ_1$</span>, <span class="math-container">$x, y \in K$</span>, then
<span class="math-container">$|f_n(x) − f_n(y)| < \varepsilon/3$</span> for all <span class="math-container">$n ∈ \mathbb{Z}^+$</span>. </p>
<p>Using continuity of <span class="math-container">$f$</span>, for each <span class="math-container">$x \in K$</span>, let <span class="math-container">$\delta_2 = \delta_2(x) > 0$</span> be such that if <span class="math-container">$|x − y| < \delta_2(x)$</span>, <span class="math-container">$y \in \mathbb{Q} \cap [0, 1]$</span>, then <span class="math-container">$|f(x) − f(y)| < \varepsilon/3$</span>. For <span class="math-container">$x \in \mathbb{Q} \cap [0, 1]$</span>, let <span class="math-container">$\delta(x) = \min(\delta_1, \delta_2(x)) > 0$</span></p>
<p>I am not sure how to continue nor am I too sure I am on the right path.</p>
| Reveillark | 122,262 | <p>Step 1: For every <span class="math-container">$x\in [0,1]$</span>, <span class="math-container">$\{f_n(x)\}_n$</span> converges. </p>
<p>Fix <span class="math-container">$\varepsilon >0$</span>. Pick <span class="math-container">$\delta>0$</span> witnessing the definition of equicontinuity for <span class="math-container">$\varepsilon/3$</span>. Pick a rational number <span class="math-container">$r$</span> with <span class="math-container">$|x-r|<\delta$</span>. Fix <span class="math-container">$N$</span> such that <span class="math-container">$|f_n(r)-f_m(r)|<\varepsilon/3$</span> for every <span class="math-container">$n,m\ge N$</span>. </p>
<p>If <span class="math-container">$n,m\ge N$</span>, then
<span class="math-container">$$
|f_n(x)-f_m(x)|\le |f_n(x)-f_n(r)|+|f_n(r)-f_m(r)|+|f_m(r)-f_m(x)| < \varepsilon
$$</span>
Thus <span class="math-container">$\{f_n(x)\}_n$</span> is a Cauchy sequence, and it converges since <span class="math-container">$\mathbb{R}$</span> is complete. </p>
<p>Let <span class="math-container">$f(x):=\lim_n f_n(x)$</span>. </p>
<p>Step 2: The convergence is uniform. See <a href="https://math.stackexchange.com/questions/649588/equicontinuity-on-a-compact-metric-space-turns-pointwise-to-uniform-convergence">this answer</a></p>
|
3,649,221 | <blockquote>
<p>Suppose <span class="math-container">$\{f_n\}$</span> is an equicontinuous sequence of functions defined on <span class="math-container">$[0,1]$</span> and <span class="math-container">$\{f_n(r)\}$</span> converges <span class="math-container">$∀r ∈ \mathbb{Q} \cap [0, 1]$</span>. Prove that <span class="math-container">$\{f_n\}$</span> converges uniformly on
<span class="math-container">$[0, 1]$</span>.</p>
</blockquote>
<p>Since I know that <span class="math-container">$\mathbb{Q} \cap [0, 1]$</span> is not compact, I am a bit stuck on my proof.</p>
<p>So far I have:</p>
<p>Let <span class="math-container">$f_n \to f$</span> pointwise on <span class="math-container">$\mathbb{Q} \cap [0, 1]$</span> </p>
<p>Since <span class="math-container">$\{f_n\}_n$</span> is equicontinuous and point-wise bounded (it’s pointwise convergent, so in particular), there exists a subsequence <span class="math-container">$\{f_{n_k}\}_k$</span> such that <span class="math-container">$f_{n_k} \to f$</span> uniformly. </p>
<p>Since each <span class="math-container">$f_n$</span> is continuous, <span class="math-container">$f$</span> is then continuous.</p>
<p>Now take <span class="math-container">$\varepsilon > 0$</span>. Using equicontinuity of <span class="math-container">$\{f_n\}_n$</span>, we find <span class="math-container">$\delta_1 > 0$</span> such that if <span class="math-container">$d(x, y) < δ_1$</span>, <span class="math-container">$x, y \in K$</span>, then
<span class="math-container">$|f_n(x) − f_n(y)| < \varepsilon/3$</span> for all <span class="math-container">$n ∈ \mathbb{Z}^+$</span>. </p>
<p>Using continuity of <span class="math-container">$f$</span>, for each <span class="math-container">$x \in K$</span>, let <span class="math-container">$\delta_2 = \delta_2(x) > 0$</span> be such that if <span class="math-container">$|x − y| < \delta_2(x)$</span>, <span class="math-container">$y \in \mathbb{Q} \cap [0, 1]$</span>, then <span class="math-container">$|f(x) − f(y)| < \varepsilon/3$</span>. For <span class="math-container">$x \in \mathbb{Q} \cap [0, 1]$</span>, let <span class="math-container">$\delta(x) = \min(\delta_1, \delta_2(x)) > 0$</span></p>
<p>I am not sure how to continue nor am I too sure I am on the right path.</p>
| WannaBeRealAnalysist | 319,730 | <p>This is actually a simple case of applying the <em><strong>Arzela-Ascoli Propagation Theorem</strong></em> which states:</p>
<p><em>Point-wise convergence of an equicontinuous sequence of functions on a dense subset of the domain propagates to uniform
convergence on the whole domain.</em></p>
<p>The rational numbers, <span class="math-container">$\mathbb{Q}$</span>, are dense in the interval <span class="math-container">$[0,1]\subset \mathbb{R}$</span>. So if <span class="math-container">$\{f_n(r)\}$</span> converges for every <span class="math-container">$r\in\mathbb{Q}\cap[0,1]$</span>, the pointwise convergence will propagate to uniform convergence on all of <span class="math-container">$[0,1]$</span>.</p>
<p>A proof of this theorem can be found on page 227 of <em>Real Mathematical Analysis</em> by <strong>Charles Pugh</strong>.</p>
|
3,056,616 | <p><span class="math-container">$P(x) = 0$</span> is a polynomial equation having <strong>at least one</strong> integer root, where <span class="math-container">$P(x)$</span> is a polynomial of degree five and having integer coefficients. If <span class="math-container">$P(2) = 3$</span> and <span class="math-container">$P(10)= 11$</span>, then prove that the equation <span class="math-container">$P(x) = 0$</span> has <strong>exactly one</strong> integer root.</p>
<p>I tried by assuming a fifth degree polynomial but got stuck after that.</p>
<p>The question was asked by my friend.</p>
| Eric Wofsey | 86,856 | <p>The assumption that <span class="math-container">$P$</span> has degree <span class="math-container">$5$</span> is irrelevant and unhelpful.</p>
<p>If <span class="math-container">$r$</span> is a root of <span class="math-container">$P$</span>, we can write <span class="math-container">$P(x)=(x-r)Q(x)$</span> for some polynomial <span class="math-container">$Q$</span>. If <span class="math-container">$r$</span> is an integer, then <span class="math-container">$Q$</span> will also have integer coefficients (polynomial division never requires dividing coefficients if you are dividing by a monic polynomial). So, for any integer <span class="math-container">$a$</span>, <span class="math-container">$P(a)=(a-r)Q(a)$</span> must be divisible by <span class="math-container">$a-r$</span>. Taking <span class="math-container">$a=2$</span> and <span class="math-container">$a=10$</span>, we see easily that the only possible value of <span class="math-container">$r$</span> is <span class="math-container">$-1$</span>.</p>
<p>Moreover, we can say that <span class="math-container">$P$</span> only has one integer root even counting multiplicity, because if <span class="math-container">$-1$</span> were a root of higher multiplicity, we could write <span class="math-container">$P(x)=(x+1)^2R(x)$</span> where <span class="math-container">$R(x)$</span> again has integer coefficients, so <span class="math-container">$P(2)$</span> would need to be divisible by <span class="math-container">$(2+1)^2=9$</span>.</p>
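A brute-force companion to the argument (my own script, not from the answer): an integer root $r$ must satisfy $(2-r)\mid 3$ and $(10-r)\mid 11$, which pins it down:

```python
# If r is an integer root of P, then (a - r) divides P(a) for every
# integer a. Apply this with P(2) = 3 and P(10) = 11 over a window of
# candidate roots (r = 2 and r = 10 are excluded: P is nonzero there).

def candidate_roots(lo=-100, hi=100):
    return [r for r in range(lo, hi + 1)
            if r not in (2, 10) and 3 % (2 - r) == 0 and 11 % (10 - r) == 0]

print(candidate_roots())  # [-1]
```

Only $r=-1$ survives both divisibility conditions, matching the argument above.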
|
594,975 | <p>What will be the basis of vector space <span class="math-container">$\Bbb C$</span> over a field of rational numbers <span class="math-container">$\Bbb Q$</span>?</p>
<p>I think it will be an infinite basis! I think it will be <span class="math-container">$B=\{r_1+r_2i \mid r_1, r_2 \in \Bbb Q^{c}\}\cup\{1,i\}$</span>. But this generator is an uncountable set. Can a basis of a vector space be that big? If it is true does it mean that <span class="math-container">$\Bbb Q$</span>-module (Vector space) <span class="math-container">$\Bbb C$</span> is free?</p>
| tomasz | 30,222 | <p>Note first that for any ring $R$ and any $R$-module $M$, if the cardinality $\lvert M\rvert$ is infinite and greater than $\lvert R \rvert$, then any generating set of $M$ (in particular, any basis) must have cardinality equal to $\lvert M\rvert$ (this is simple combinatorics).</p>
<p>In particular, any basis of $\bf C$ over $\bf Q$ must have cardinality of the continuum, which is rather uncountable no matter how you look at it.</p>
<p>Secondly, any module over a field (i.e. any vector space) is free (it's a basic theorem in linear algebra that every vector space has a basis, assuming axiom of choice).</p>
<p>On the other hand, in ZF alone (without axiom of choice), it is consistent that $\bf C$ has no basis over $\bf Q$ (because such a basis can't be measurable, and there are models of ZF where every set is measurable). Therefore, loosely speaking, you can't write down such a basis. A little more precisely, it's impossible to write a formula which will provably define a basis of $\bf C$ over $\bf Q$ without some strong set-theoretic assumptions like $V=L$.</p>
<p>The set you've written down certainly generates $\bf C$ as a vector space, but it's not hard to see that it is not linearly independent.</p>
|
2,752,073 | <p>Looking for a finite recursive formula for the constant e preferably using standard operators (ones a calculator could carry out)</p>
<p>i.e. a formula of the form $ x_{n+1}=f(x_n)$ where $\lim_{n\rightarrow\infty}(x_n)=e$ for some $x_1$</p>
<p>Bonus points if it works for any $x_1$</p>
| N. S. | 9,176 | <p>$$x_{n+1}= \left( 1+ \frac{\max\{ |x_n|,1 \} }{n}\right)^{\frac{n}{\max\{ |x_n|,1 \}}}$$</p>
|
2,752,073 | <p>Looking for a finite recursive formula for the constant e preferably using standard operators (ones a calculator could carry out)</p>
<p>i.e. a formula of the form $ x_{n+1}=f(x_n)$ where $\lim_{n\rightarrow\infty}(x_n)=e$ for some $x_1$</p>
<p>Bonus points if it works for any $x_1$</p>
| Brian Tung | 224,454 | <p>One approach is to use Newton's method on a function known to have a zero at $e$—for instance, $f(x) = \ln x - 1$. Then $f'(x) = 1/x$, and we can iterate as</p>
<p>$$
x_{n+1} = x_n (2-\ln x_n)
$$</p>
<p>Of course, this isn't very satisfying, since the natural log is sitting right there in the expression. (I also haven't looked at the interval of convergence.) But it is available on most scientific calculators.</p>
<p>I'll have to give this more thought to see if something clever can be done to avoid the natural log (or exp, for that matter).</p>
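A quick numeric check (mine, not part of the answer) that the iteration $x_{n+1}=x_n(2-\ln x_n)$ does converge to $e$ from the arbitrary starting point $x_1=2$:

```python
import math

# Newton's method on f(x) = ln(x) - 1 (root: e); convergence is
# quadratic, so a handful of steps reach machine precision.
x = 2.0
for _ in range(6):
    x = x * (2 - math.log(x))

assert abs(x - math.e) < 1e-12
```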
<hr>
<p>Also, as I mentioned in the comments, this question has been looked at <a href="https://math.stackexchange.com/questions/1171572/a-recursive-formula-to-approximate-e-prove-or-disprove">before</a> on Math.SE, with the nice result</p>
<p>$$
x_{n+1} = x_n - \frac1n x_{n-1}
$$</p>
<p>with $x_1 = x_2 = 1$. This is not <em>exactly</em> in the form desired by the OP, but it's pretty nice all the same.</p>
|
4,015,203 | <p>How would you prove the following property about covariances?</p>
<p><a href="https://i.stack.imgur.com/y18wd.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/y18wd.png" alt="enter image description here" /></a></p>
<p>I found it here:</p>
<p><a href="https://www.probabilitycourse.com/chapter5/5_3_1_covariance_correlation.php" rel="nofollow noreferrer">https://www.probabilitycourse.com/chapter5/5_3_1_covariance_correlation.php</a></p>
| drhab | 75,923 | <p>It is enough to prove for suitable random variables <span class="math-container">$X,Y,Z$</span> and constant <span class="math-container">$a$</span> that:</p>
<ul>
<li><span class="math-container">$\mathsf{Cov}(X+Y,Z)=\mathsf{Cov}(X,Z)+\mathsf{Cov}(Y,Z)$</span></li>
<li><span class="math-container">$\mathsf{Cov}(aX,Y)=a\mathsf{Cov}(X,Y)$</span></li>
<li><span class="math-container">$\mathsf{Cov}(X,Y)=\mathsf{Cov}(Y,X)$</span></li>
</ul>
<p>which can be done on base of definition of <span class="math-container">$\mathsf{Cov}$</span>.</p>
<p>The first two bullets show that <span class="math-container">$\mathsf{Cov}$</span> is linear on first argument.</p>
<p>The third bullet makes it possible to show that this is also the case for the second argument.</p>
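A small numerical illustration (my own, not from the answer): the sample covariance $\overline{XY}-\overline{X}\,\overline{Y}$ satisfies the same three bullet points, so the bilinearity can be seen on concrete data (the sample vectors below are arbitrary):

```python
# Verify additivity/homogeneity in the first argument and symmetry
# for a sample covariance on concrete data.

def mean(xs):
    return sum(xs) / len(xs)

def cov(xs, ys):
    return mean([x * y for x, y in zip(xs, ys)]) - mean(xs) * mean(ys)

X = [1.0, 2.0, 4.0, 8.0]
Y = [3.0, -1.0, 0.0, 2.0]
Z = [5.0, 5.0, -2.0, 1.0]
a = 3.0

lhs = cov([a * x + y for x, y in zip(X, Y)], Z)   # Cov(aX + Y, Z)
rhs = a * cov(X, Z) + cov(Y, Z)                   # a*Cov(X,Z) + Cov(Y,Z)
assert abs(lhs - rhs) < 1e-9
assert cov(X, Y) == cov(Y, X)                     # symmetry
```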
|
2,368,179 | <p>The answer should be in radians,
like π/4 (45°) or π/2 (90°).
I used the $\tan(A+B)$ formula and got $5/7$ as the answer, but that's obviously wrong.</p>
| MCCCS | 357,924 | <p>Another approach:</p>
<blockquote>
<p>$\tan a + \tan b = \frac{\sin(a+b)}{\cos a \cos b}$</p>
</blockquote>
<blockquote>
<p>$\cos (\tan^{-1}x) = \frac{1}{\sqrt{x^2+1}}$</p>
</blockquote>
<p>$-\frac{1}{2} + -\frac{1}{3} = \frac{\sin(a+b)}{\frac{1}{\sqrt{(\frac{-1}{2})^2+1}}\cdot \frac{1}{\sqrt{(\frac{-1}{3})^2+1}}}$</p>
<p>$-\frac{5}{6} = \frac{\sin(a+b)}{\frac{6}{\sqrt{50}}}$</p>
<p>$\sin(a+b) = -\frac{5}{\sqrt{50}} = -\frac{1}{\sqrt{2}}$</p>
<p>$\sin^{-1}\frac{-\sqrt2}{2}= \frac{-\pi}{4}$</p>
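Judging from the answer, the original sum was $\tan^{-1}(-\frac{1}{2})+\tan^{-1}(-\frac{1}{3})$; assuming that, here is a quick floating-point confirmation of the result (my addition, not part of the answer):

```python
import math

a = math.atan(-1 / 2)
b = math.atan(-1 / 3)

# The sum is -pi/4, and sin(a + b) is -1/sqrt(2), matching the answer.
assert abs((a + b) + math.pi / 4) < 1e-12
assert abs(math.sin(a + b) + 1 / math.sqrt(2)) < 1e-12
```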
|
2,762,237 | <blockquote>
<p>Does the sequence $(x_n)_{n=1}^\infty$ with $x_{n+1}=2\sqrt{x_n}$ converge?</p>
</blockquote>
<p>I'm almost positive this converges but I am not entirely sure how to go about this. The square root is really throwing me off as I haven't dealt with it at all up until now.</p>
| marty cohen | 13,079 | <p>First, find the limit.</p>
<p>Suppose the sequence has a limit $L$. Then $L=2\sqrt{L}$, so $L=0$ or $L=4$; for a starting value $x_1>0$ the relevant fixed point is $L=4$. Note: this does not show that the limit exists; it shows only what the limit must be if it does exist.</p>
<p>Then, see how the limit is approached.</p>
<p>If $y=2\sqrt{x}$, then $y-4=2\sqrt{x}-4=2(\sqrt{x}-2)$, so
$$\frac{y-4}{x-4}=\frac{2(\sqrt{x}-2)}{x-4}=\frac{2(\sqrt{x}-2)}{(\sqrt{x}-2)(\sqrt{x}+2)}=\frac{2}{\sqrt{x}+2}.$$</p>
<p>Therefore $\left|\frac{y-4}{x-4}\right| < 1$ for every $x>0$, so the iterations converge to the limit $4$.</p>
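A quick numerical companion (mine, not the answer's) showing the contraction toward the fixed point $4$ from several starting values:

```python
import math

# Iterate x_{n+1} = 2*sqrt(x_n); the ratio 2/(sqrt(x)+2) < 1 for x > 0
# drives the error to zero, so all positive starts approach 4.

def iterate(x0, n):
    x = x0
    for _ in range(n):
        x = 2 * math.sqrt(x)
    return x

for x0 in (0.01, 1.0, 100.0):
    assert abs(iterate(x0, 60) - 4.0) < 1e-9
```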
|
2,420,255 | <p>If the price of an article is increased by percent $p$, then the decrease in percent of sales must not exceed $d$ in order to yield the same income. The value of $d$ is:
$\textbf{(A)}\ \frac{1}{1+p} \qquad \textbf{(B)}\ \frac{1}{1-p} \qquad \textbf{(C)}\ \frac{p}{1+p} \qquad \textbf{(D)}\ \frac{p}{p-1}\qquad \textbf{(E)}\ \frac{1-p}{1+p}$</p>
<p>My try :</p>
<p>Suppose that original price=$x$ , then price after increase =$x+px =x (1+p)$</p>
<p>Sales before increase = $x.n$ </p>
<p>Sales after increase =$ x.n. (1+p)$</p>
<p>For the sales to be equal $ \to $</p>
<p>$x.n=x.n. (1+p).d \to d=\frac {1}{p+1}$</p>
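Not part of the original post — a numeric sanity check that $d=\frac{p}{1+p}$ (option (C)) keeps the income unchanged, treating $p$ and $d$ as fractions; the sample values are arbitrary:

```python
# Income before: x * n. Income after raising the price by the fraction p
# and losing the fraction d of sales: x*(1+p) * n*(1-d).
# With d = p/(1+p), the factor (1+p)*(1-d) collapses to 1.

def incomes_match(x, n, p):
    d = p / (1 + p)
    return abs(x * n - x * (1 + p) * n * (1 - d)) < 1e-9

assert all(incomes_match(10.0, 50.0, p) for p in (0.1, 0.25, 1.0, 3.0))
```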
| Intelligenti pauca | 255,730 | <p>Your construction can be rephrased in a more geometrical way as follows: given a triangle $ABC$, we construct a triangle $DBE$ having angle $\angle B$ in common with $ABC$ (in your case $\angle B$ is a right angle, but that is not necessary) and $AD=BD-BA=BC-BE=EC$ (see diagram below). Lines through $A$ and $D$, parallel to $BC$, meet lines though $E$ and $C$, parallel to $AB$, at $G$ and $F$ respectively. We want to prove that point $H$, where $AC$ and $DE$ meet, lies on line $FG$.</p>
<p><a href="https://i.stack.imgur.com/uzjLb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uzjLb.png" alt="enter image description here"></a></p>
<p>From point $H$ draw lines $HI$ and $HJ$, parallel to $BC$ and $AB$. By the <a href="https://en.wikipedia.org/wiki/Intercept_theorem" rel="nofollow noreferrer">intercept theorem</a>, if we prove that $AI=EJ$ then $H$ lies on line $FG$.</p>
<p>Observe now that triangles $AIH$ and $HJC$ are similar, as well as triangles $DIH$ and $HJE$. We have then:
$$
AI:HI=HJ:JC
\quad\hbox{and}\quad
DI:HI=HJ:JE,
$$
whence:
$$
HI\cdot HJ=AI\cdot JC=DI\cdot JE.
$$
From that we get:
$$
AI\cdot (JE+EC)=(AI+AD)\cdot JE,
\quad\hbox{that is:}\quad
AI\cdot EC=AD\cdot JE.
$$
From the last equality, recalling that $EC=AD$, we obtain then $AI=JE$, which is what we wanted to prove.</p>
|
1,388,434 | <p>I was brushing up on some calculus and I was thinking about the following function:</p>
<blockquote>
<p>Let$$f(x,y) =
\begin{cases}
\frac{yx^{6} +y^{3}+x^{3}y}{x^{6}+y^{2}} & \text{for $(x,y)$ $\neq (0,0)$,} \\
0 & \text{for $(x,y)$ $=$ $(0,0)$. } \\
\end{cases}$$The function $f$ is continuous and differentiable everywhere except possibly at the origin.</p>
</blockquote>
<p>For some arbitrary nonzero $\xi=(u,v)$, I was able to compute the directional derivative ${\xi}f(0,0)=\lim_{h\to 0}\frac{1}{h}f(hu,hv)=v$. In particular, for $\xi=\frac{\partial}{\partial{x}}=(1,0)$ we have $\frac{\partial{f}}{\partial{x}}(0,0)=0$, and for $\xi=\frac{\partial}{\partial{y}}=(0,1)$ we have $\frac{\partial{f}}{\partial{y}}(0,0)=1$. From here we see that the directional derivative depends linearly on the vector we differentiate along, meaning $(t{\xi})f(0,0)=(tu,tv)f(0,0)=tv=t({\xi}f(0,0))$.</p>
<p><strong>This function is not differentiable at the origin, in fact it is not even continuous at the origin. I was having trouble seeing exactly why</strong>. I've forgotten lots of calculus, but I was wondering if the fact that $\frac{\partial{f}}{\partial{y}}(0,0)=1$ while $\frac{\partial{f}}{\partial{x}}(0,0)=0$ implies discontinuity at $(0,0)$.</p>
<p>Also as a side note, there is the big theorem in differential calculus which says if all the directional derivatives ${\xi}f: \mathbb{R}^{2} \rightarrow \mathbb{R}$ are continuous at a point $p$, then $f$ is differentiable at the point $p$. <strong>I'm confused as to how the partial derivatives above are not continuous.</strong></p>
<p>Any help is much appreciated.</p>
| ignoramus | 155,096 | <p><strong>Hint:</strong> consider approaching the origin along the path $(x,x^3)$</p>
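Expanding the hint numerically (my own illustration, not part of it): along the path $(x,x^3)$ the function reduces to $\frac{2x^9+x^6}{2x^6}=\frac{1}{2}+x^3\to\frac{1}{2}\neq f(0,0)=0$, so $f$ is discontinuous at the origin:

```python
# Evaluate f along the path (x, x^3); the values approach 1/2, not 0.

def f(x, y):
    return (y * x**6 + y**3 + x**3 * y) / (x**6 + y**2)

for x in (1e-1, 1e-2, 1e-3):
    assert abs(f(x, x**3) - 0.5) < 0.1   # in fact f(x, x^3) = 1/2 + x^3
```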
|
73,912 | <p>For my work, I am examining the values of a complex function as I vary the input according to a real parameter, and I want to both give the general plot and the plot of specific points, with labels (so one sees the direction of increase). </p>
<p>I knew from the documentation that <code>Point</code> and <code>Epilog</code> together allow you to label points on graphs; e.g., </p>
<pre><code>ourF[z_] := z^2;
parts[z_] := {Re[z], Im[z]}
ParametricPlot[parts[ourF[x + I/4]], {x, -3 , 3},
Epilog -> {{PointSize[Medium],
Point[Table[parts[ourF[ j + I/4]], {j, -3, 3}]]}}]
</code></pre>
<p>This produces:</p>
<p><img src="https://i.stack.imgur.com/7bIh8.png" alt="A plot with labeled points, just as in the documentation"></p>
<p>Looking at the answers to this site's <a href="https://mathematica.stackexchange.com/questions/1854/adding-labels-to-points-in-listplot">Question 1854</a> (especially Jacob Jurmain's), <code>Listplot</code> in newer versions of Mathematica has a <code>Labeled</code> option that labels the points in the <code>Listplot</code>. Indeed, I can get what I want by making a <code>Plot</code> and <code>Listplot</code> separately and then <code>Show</code>ing them together. e.g.</p>
<pre><code>ourF[z_] := z^2;
parts[z_] := {Re[z], Im[z]}
plotOne = ParametricPlot[parts[ourF[x + I/4]], {x, -3 , 3}];
plotTwo = ListPlot[Table[Labeled[parts[ourF[ j + I/4]],
Row[{"x = ", j}], Right
], {j, -3, 3}]];
Show[plotOne, plotTwo]
</code></pre>
<p>which yields</p>
<p><img src="https://i.stack.imgur.com/bAYwK.png" alt="A plot with labeled points"></p>
<p>as required. </p>
<p>My first attempt, however, was to simply put the <code>ListPlot</code> in the <code>Epilog</code>, e.g.</p>
<pre><code>ourF[z_] := z^2;
parts[z_] := {Re[z], Im[z]}
ParametricPlot[parts[ourF[x + I/4]], {x, -3 , 3},
Epilog -> {ListPlot[Table[Labeled[parts[ourF[ j + I/4]],
Row[{"x = ", 13/10 + j/10}], Right
], {j, -3, 3}]]}]
</code></pre>
<p>This yields the error:</p>
<pre><code>Graphics is not a Graphics primitive or directive.
</code></pre>
<p>I guess <code>ParametricPlot</code> calls <code>Graphics</code>, and <code>Listplot</code> is now calling <code>Graphics</code> inside the other <code>Graphics</code>, hence the issue. </p>
<p>I also tried putting <code>Labeled</code> in the <code>Point</code> variation, but <code>Point</code> doesn't know what to do with the label, and the error message becomes</p>
<pre><code>Coordinate Labeled[{8.9375, -1.5}, Row[{"x = ", 1}], Right] should be a pair of numbers, or a Scaled or Offset form.
</code></pre>
<hr>
<p><strong>Q:</strong> Is there any way of putting it all in one plotting command?</p>
<hr>
<p><em>P.S.:</em> An answer in the vein of, "You have acceptable output, stop worrying about it" would also be reasonable. I am new to Mathematica, but it seems to the newcomer as though Mathematica puts in one line what I would use 5-10 lines in MATLAB to set up, and [after having debugged] the fewest number of lines is the best.</p>
| kglr | 125 | <p>You are five keystrokes (<code>[[1]]</code>) away from something that works (See also: <a href="https://mathematica.stackexchange.com/a/71997/125">this</a> and <a href="https://mathematica.stackexchange.com/a/69940/125">this</a>)</p>
<pre><code>ParametricPlot[parts[ourF[x + I/4]], {x, -3, 3}, PlotRangeClipping -> False, ImagePadding -> 30,
Epilog -> ListPlot[Table[Labeled[parts[ourF[j + I/4]], Row[{"x = ", 13/10 + j/10}], Right],
{j, -3, 3}]][[1]]]
</code></pre>
<p><img src="https://i.stack.imgur.com/UOUQM.png" alt="enter image description here"></p>
<p>Alternatively, you can use <code>ParametricPlot[...][[1]]</code> as the <code>Epilog</code> setting in <code>ListPlot</code>:</p>
<pre><code>ListPlot[Table[Labeled[parts[ourF[j + I/4]], Row[{"x = ", 13/10 + j/10}], Right], {j, -3, 3}],
Epilog -> ParametricPlot[parts[ourF[x + I/4]], {x, -3, 3}][[1]]]
</code></pre>
<p><strong>Update:</strong> Using <code>MeshFunctions</code> and <code>Mesh</code>:</p>
<pre><code>ParametricPlot[parts[ourF[x + I/4]], {x, -3, 3},
PlotRangePadding -> {{0, 1}, {1, 1}}, ImageSize -> 500,
BaseStyle -> {PointSize[Large], Blue}, MeshFunctions -> {#2 &},
Mesh -> {Table[{Im[ourF[j + I/4]],
Text[Style[Row[{"x = ", 13/10 + j/10}], 12, Black], parts[ourF[j + I/4]], {-2, 0}]},
{j, -3, 3}]}]
</code></pre>
<p><img src="https://i.stack.imgur.com/gHhZ8.png" alt=""></p>
|
268,091 | <p><a href="https://i.stack.imgur.com/PAO6T.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PAO6T.jpg" alt="enter image description here" /></a></p>
<p>I have to solve the ODE <span class="math-container">$x'(t)=\frac{1}{2}x(t)-t$</span> with a given initial value <span class="math-container">$x(0)$</span>.
The existence of solutions of this IVP is equivalent to finding a fixed point of the integral operator <span class="math-container">$T:C[0,1]\to C[0,1]$</span> defined by
<span class="math-container">$$T(x)(t)=x(0)+\int_0^t\left(\tfrac{1}{2}x(\tau)-\tau\right)\,d\tau.$$</span>
My problem is how to define <span class="math-container">$T(x)(t)$</span> in Mathematica.</p>
| I.M. | 26,815 | <p>Can be also solved with <code>AsymptoticDSolveValue</code></p>
<pre><code>AsymptoticDSolveValue[{x'[t] == 1/2 x[t] - t, x[0] == 0}, x[t], t, {t, 0, 4}]
(* -(t^2/2)-t^3/12-t^4/96 *)
</code></pre>
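As a cross-check outside Mathematica (my own sketch, not part of the answer): matching powers of $t$ in $x'(t)=\frac{1}{2}x(t)-t$, $x(0)=0$ gives the recurrence $(k+1)a_{k+1}=\frac{a_k}{2}-[k{=}1]$ for the Taylor coefficients, reproducing the series above:

```python
from fractions import Fraction

# Taylor coefficients a_k of the solution: (k+1)*a_{k+1} = a_k/2 - [k == 1].
a = [Fraction(0)]                 # a_0 = x(0) = 0
for k in range(4):
    rhs = a[k] / 2 - (1 if k == 1 else 0)
    a.append(rhs / (k + 1))

print([str(c) for c in a])        # ['0', '0', '-1/2', '-1/12', '-1/96']
```

The nonzero coefficients $-\frac12,-\frac1{12},-\frac1{96}$ match the `AsymptoticDSolveValue` output.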
|
1,676,347 | <p>Suppose that $V$ and $W$ are vector spaces, $g:V\rightarrow W$ a linear map. Show that g is surjective $\iff$ For any vector space $U$ the map $g^{\ast}:Hom(W,U) \rightarrow Hom(V,U)$ defined by $g^{\ast}(f) = f\circ g$ is injective.</p>
<p>I've failed too much trying to solve this problem. Any hint could be useful, thank you.</p>
| Luiz Cordeiro | 58,818 | <p>I think the notation can become quite heavy at some point, so let's try to make things easy. You can show that the map $g^*$ is always linear.</p>
<p>Suppose $g$ is surjective. Let $U$ be any vector space. Let's show that the map $g^*$ described above is injective. Suppose $f:W\to U$ satisfies $g^*(f)=0$, that is, $fg=0$. We need to show that $f=0$. Let any $w\in W$. Since $g$ is surjective, $w=g(x)$ for some $x\in V$, so $f(w)=f(g(x))=0$. This proves that $f=0$, and therefore $\ker g^*=0$, which means that $g^*$ is injective.</p>
<p>For the other direction we use the contrapositive: Suppose $g$ is not surjective, and let $v\in W\setminus g(V)$. Let $U=W$. Let's show that $g^*$ is not injective.</p>
<p>We can construct a linear map $f:W\to W$ satisfying $f(v)=v$ and $f(g(V))=0$. For this, take a basis $\mathcal{B}$ of $g(V)$, so $\mathcal{B}\cup\left\{v\right\}$ is LI in $W$, and can be extended to a basis $\mathcal{C}$ of $W$. We define $f$ on $W$ by setting $f(v)=v$ and $f(x)=0$ for all other elements $x$ of the basis $\mathcal{C}$. Since $v\neq 0$, this yields $f\neq 0$.</p>
<p>But by definition, we have $f(x)=0$ for all $x\in\mathcal{B}$, so $f(y)=0$ for all $y\in g(V)$, which means that $f(g(u))=0$ for all $u\in V$, so $g^*(f)=f\circ g=0$.</p>
<p>Therefore, $f$ is a nonzero function in $\operatorname{ker} g^*$, so $g^*$ is non-injective.</p>
|