1,335,698
<p>For this problem do I use the distance formula that I would use between two regular points? </p> <p>$d=\sqrt{(x_2−x_1)^2+(y_2−y_1)^2}$</p> <p>The distance between points $u$ and $v$ on the $x$-axis is given by $|u-v|$. Solve $|x-5|+|x-6|=1$ (think geometrically).</p>
Steven Alexis Gregory
75,410
<p>Write it as $|5-x|+|x-6|=|5-6|$.</p> <p>If $d(x,y)$ means the distance between points $x$ and $y$ on the number line, then this can be interpreted as $d(5,x) + d(x,6) = d(5,6)$, which means that $x$ is between $5$ and $6$. That is, $5 \le x \le 6$.</p> <p><img src="https://i.stack.imgur.com/bcGpj.jpg" alt="betweenness"></p>
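A quick numerical companion (my own, not part of the answer above): $|x-5|+|x-6|=1$ should hold for every $x$ in $[5,6]$ and fail outside that interval.

```python
import math

# Numerical check: |x-5| + |x-6| = 1 exactly on [5, 6], and nowhere else.
def satisfies(x):
    return math.isclose(abs(x - 5) + abs(x - 6), 1.0)

inside = [5 + k / 100 for k in range(101)]   # samples covering [5, 6]
outside = [4.9, 6.1, 0.0, 10.0]              # samples outside [5, 6]

all_inside_ok = all(satisfies(x) for x in inside)
all_outside_fail = all(not satisfies(x) for x in outside)
```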
1,937,826
<p>Ok, this seems obvious to me, but how would one prove it?</p> <p>Let $&lt;f(t),g(t)&gt;$ and $&lt;h(t),p(t)&gt;$ be parametrized arcs in the Cartesian plane. If $f,g,h,p$ are all continuous and the arcs don't intersect, then there will be a line segment between the two arcs that realizes the shortest distance. Prove this segment is normal to both arcs.</p> <p>Is this proof nontrivial? It seems so obvious, but I am not sure how it would be done.</p>
dxiv
291,201
<p>(The following is more of a comment on the context and intuition behind the result, rather than a formal answer - which has been largely provided by @cjackal.)</p> <p>Starting with a restatement of the problem: let $C_1, C_2$ be two smooth curves in the $2D$ plane. Assume that, among all segments $AB$ with $A \in C_1, B \in C_2$, there exists a shortest one $PQ$. Further assume that the endpoints of this segment are interior points of the respective curves i.e. $P \in \text{int} \;C_1, Q \in \text{int}\;C_2$. Then $PQ$ is normal to both curves at the points of contact, which is equivalent to the tangents at $P,Q$ respectively to $C_1,C_2$ being parallel.</p> <ul> <li><p>The smoothness assumption ensures that normality and tangency are well defined.</p></li> <li><p>The existence of $P,Q$ is a necessary assumption since a shortest segment does not always exist, even for bounded curves. For example, there is no shortest distance between open segments $(0,0)$ and $(1, 0)$ on the real axis.</p></li> <li><p>The assumption that $P,Q$ are <em>interior</em> points is required to eliminate cases where the minimum distance would be otherwise achieved at an endpoint of either curve. For example, the shortest distance between two closed segments which are not parallel does exist, but is not normal to at least one of them. <br> (As a side note, the <em>interior</em> assumption is slightly more general than the assumption of $C_1,C_2$ being non-self-intersecting and open used in @cjackal's proof, for example it includes the case of two circles, but that proof itself could be adjusted fairly easily to relax those assumptions.)</p></li> </ul> <p>Now back to the problem itself, draw the tangents at $P,Q$ to the respective curves. Since $P,Q$ are interior points, each has an open vicinity on the respective curve. 
For small enough such vicinities, they will each be on opposite sides of the respective tangents, and for infinitesimally small such vicinities the tangents virtually approximate the curves around the two points. If the tangents are not parallel, then slightly shifting the points towards the intersection of the two tangents would yield two new points $P',Q'$ with $|P'Q'| \lt |P Q|$ but that would contradict the assumption of $P,Q$ being the shortest distance. Thus the tangents must be parallel, and $P Q$ must be normal to both.</p>
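The tangent argument above can be illustrated numerically. Below is my own sketch (the two curves are arbitrarily chosen unit circles centered at $(0,0)$ and $(5,0)$, not curves from the question): a grid search finds the shortest segment $PQ$, and we then check that the tangents at $P$ and $Q$ are parallel and that $PQ$ is normal to them.

```python
import math

# Grid-search the shortest segment between two unit circles C1 (center (0,0))
# and C2 (center (5,0)), then verify the normality claim.
N = 360
angles = [2 * math.pi * k / N for k in range(N)]
P_pts = [(math.cos(a), math.sin(a)) for a in angles]
Q_pts = [(5 + math.cos(b), math.sin(b)) for b in angles]

d, ia, ib = min(
    (math.hypot(px - qx, py - qy), i, j)
    for i, (px, py) in enumerate(P_pts)
    for j, (qx, qy) in enumerate(Q_pts)
)
a, b = angles[ia], angles[ib]
tP = (-math.sin(a), math.cos(a))          # tangent direction to C1 at P
tQ = (-math.sin(b), math.cos(b))          # tangent direction to C2 at Q
seg = (Q_pts[ib][0] - P_pts[ia][0], Q_pts[ib][1] - P_pts[ia][1])
cross = tP[0] * tQ[1] - tP[1] * tQ[0]     # ~0  <=>  tangents parallel
dot = seg[0] * tP[0] + seg[1] * tP[1]     # ~0  <=>  PQ normal to C1
```

The minimum is the segment from $(1,0)$ to $(4,0)$ of length $3$, and both the cross and dot products vanish, as the argument predicts.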
1,392,257
<p><strong>The definition of a conjugate element</strong> </p> <p>We say that $x$ is conjugate to $y$ in $G$ if $y = g^{-1}xg $ for some $g \in G$.</p> <p>Now for the group $G=Q_8$, we have the group presentation $$Q_8 = \big&lt;a,b: a^4 =1,b^2 = a^2, b^{-1}ab = a^{-1} \big&gt;$$</p> <p>Now the elements of $Q_8$ are $\{1,a,a^2,a^3,ab,a^2b,a^3b,b\}$, and after some calculation we get $5$ different conjugacy classes, namely $a^G = \{a,a^3\}$, where $a^G$ denotes the conjugacy class of $a$ in $G = Q_8$;</p> <p>also we have </p> <p>$1^G = \{1 \}$, ${a^2}^G = \{ a^2 \}$, ${(a^2b)}^G = \{a^2b,b \}$ and ${(ab)}^G = \{ab,a^3b\}$</p> <p>Of course, there is no surprise that for every element $x \in G$ we have $x \in x^G$, because $x = 1^{-1}x1$. However, we see that every conjugacy class of $Q_8$ contains each element together with its inverse: $a^{-1} = a^3$, ${(a^2)}^{-1} = a^2$, ${(a^2b)}^{-1} = b$, and so on.</p> <p>My question is: does this hold true for all groups?</p> <p>More formally: is it true that for every element $x \in G$ we have $x,x^{-1} \in x^G$?</p>
Nicky Hekster
9,605
<p>There is a connection to the character theory of finite groups: a group is said to be <em>ambivalent</em> if every element is conjugate to its inverse. For a finite group, this is equivalent to every complex character of the group being <em>real-valued</em>. It is easily seen that all symmetric groups $S_n$ are ambivalent. However, the alternating groups $A_n$ are ambivalent only for $n \in \{1,2,5,6,10,14\}$.</p>
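Both claims can be checked by brute force on small cases (my own illustration, representing permutations as tuples): $S_4$ is ambivalent, while a $7$-cycle is not conjugate to its inverse inside $A_7$ (consistent with $7 \notin \{1,2,5,6,10,14\}$).

```python
from itertools import permutations

# Brute-force conjugacy checks in small permutation groups.
def compose(p, q):
    return tuple(p[q[i]] for i in range(len(q)))   # (p ∘ q)(i) = p[q[i]]

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def sign(p):
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

def conjugate_to_inverse(x, G):
    xinv = inverse(x)
    return any(compose(inverse(g), compose(x, g)) == xinv for g in G)

S4 = list(permutations(range(4)))
s4_ambivalent = all(conjugate_to_inverse(x, S4) for x in S4)

A7 = [p for p in permutations(range(7)) if sign(p) == 1]
seven_cycle = (1, 2, 3, 4, 5, 6, 0)                # the cycle (0 1 2 3 4 5 6)
cycle_ambivalent_in_A7 = conjugate_to_inverse(seven_cycle, A7)
```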
2,883,370
<p>If I want to determine whether a sequence, ${a_n}$, is bounded above $\forall n \in \Bbb{N} $, is it enough to find a sequence that is larger than $a_n$, and show that it converges and is therefore bounded? For example:</p> <p>$\forall n \in \Bbb{N}, let,$</p> <p>$$ a_n = \frac{1}{n+1} + \frac{1}{n+2} + ...+\frac{1}{2n}\\ \frac{1}{n+1} + \frac{1}{n+2} + ...+\frac{1}{2n} \leq \frac{1}{n} + \frac{1}{n} + ...+\frac{1}{n} = n\cdot\frac{1}{n}=1 $$ and since $\lim\limits_{n\to\infty}1 = 1$, then 1 must be an upper bound for $a_n$. Is this correct? Thanks.</p>
Babelfish
485,123
<p>Yes, this is enough. If $(b_n)$ dominates $(a_n)$ and $b_n \rightarrow b$, then there is $S\in \mathbb R$ with $b_n&lt;S$ for all $n \in \mathbb N$. Since $(b_n)$ dominates, we get $S&gt;b_n \geq a_n$, so $(a_n)$ is bounded as well.</p>
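A numerical companion (my own) to the comparison above: $a_n = \frac{1}{n+1}+\dots+\frac{1}{2n}$ indeed stays below the bound $1$, and in fact it approaches $\ln 2$.

```python
import math

# Check the bound a_n <= 1 from the comparison with 1/n + ... + 1/n,
# and the (well-known) limit a_n -> ln 2.
def a(n):
    return sum(1 / (n + k) for k in range(1, n + 1))

values = [a(n) for n in range(1, 200)]
bounded_by_one = all(v <= 1 for v in values)
limit_close = abs(a(10000) - math.log(2)) < 1e-3
```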
61,106
<p>Let <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> be Poisson random variables with means <span class="math-container">$\lambda$</span> and <span class="math-container">$1$</span>, respectively. The difference of <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> is a <a href="http://en.wikipedia.org/wiki/Skellam_distribution" rel="nofollow noreferrer">Skellam random variable</a>, with probability mass function <span class="math-container">$$\mathbb P(X - Y = k) = \mathrm e^{-\lambda - 1} \lambda^{k/2} I_k(2\sqrt{\lambda}) =: S(\lambda, k),$$</span> where <span class="math-container">$I_k$</span> denotes the <a href="http://en.wikipedia.org/wiki/Bessel_function#Modified_Bessel_functions_:_I.CE.B1.2C_K.CE.B1" rel="nofollow noreferrer">modified Bessel function of the first kind</a>. Let <span class="math-container">$F(\lambda)$</span> denote the probability that <span class="math-container">$X$</span> is larger than <span class="math-container">$Y$</span>: <span class="math-container">$$F(\lambda) := \mathbb P(X &gt; Y) = \sum_{k=1}^{\infty} S(\lambda, k) = \mathrm e^{-\lambda - 1} \sum_{k=1}^\infty \lambda^{k/2} I_k(2\sqrt{\lambda}).$$</span> According to Mathematica, the graph of the function <span class="math-container">$F$</span> looks like</p> <p><img src="https://i.stack.imgur.com/m0mPi.png"><br/><sub>(source: <a href="https://cims.nyu.edu/~lagatta/F.png" rel="nofollow noreferrer">nyu.edu</a>)</sub><br/></p> <p>My questions:</p> <ul><li>Is there a closed-form expression for the function <span class="math-container">$F$</span>?</li><li>If not, what are <span class="math-container">$\lim_{\lambda \to 0} F'(\lambda)$</span> and <span class="math-container">$F'(1)$</span>? What is the asymptotic behavior as <span class="math-container">$\lambda \to \infty$</span>?</li></ul>
Ori Gurel-Gurevich
1,061
<p>Regarding the asymptotic behavior when $\lambda \to \infty$: since $F(\lambda) \to 1$, the quantity to estimate is $1 - F(\lambda) = \mathbb{P}(X \le Y)$.</p> <p>To get an estimate one finds that the dominating element among $$A_k= \mathbb{P}(Y=k+1)\mathbb{P}(X=k) = e^{-1-\lambda} \frac{\lambda^k}{k!(k+1)!}$$ is $A_k$ for $k \approx \sqrt{\lambda}$, which gives roughly $e^{2\sqrt{\lambda}-\lambda}$. Probably there's also a $\text{Poly}(\lambda)$ factor here.</p> <p>Oh, and $F'(\lambda)$ is simply $\mathbb{P}(X=Y)$, since the probability of adding $1$ when increasing the intensity of a Poisson RV by $\epsilon$ is $\epsilon$, to first order. In the special case $\lambda=1$ we get $$F'(1)=e^{-2} \sum_{k=0}^\infty \frac{1}{(k!)^2}$$</p>
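The value of $F'(1)$ can be cross-checked numerically (my own check, using truncated Poisson sums and a central difference):

```python
import math

# F(lambda) = P(X > Y) for X ~ Poisson(lambda), Y ~ Poisson(1),
# computed from truncated Poisson tails; compare the numerical derivative
# at lambda = 1 against e^{-2} * sum_k 1/(k!)^2.
def F(lam, N=60):
    pX = [math.exp(-lam) * lam**i / math.factorial(i) for i in range(N)]
    pY = [math.exp(-1.0) / math.factorial(j) for j in range(N)]
    return sum(pX[i] * pY[j] for i in range(N) for j in range(i))

h = 1e-4
deriv = (F(1 + h) - F(1 - h)) / (2 * h)        # central difference at lambda = 1
exact = math.exp(-2) * sum(1 / math.factorial(k) ** 2 for k in range(20))
```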
3,896,817
<blockquote> <p>Solve <span class="math-container">$x^2 \equiv 12 \pmod {13}$</span></p> </blockquote> <p>By guessing I can say that the solutions are <span class="math-container">$5$</span> and <span class="math-container">$8$</span>, but is there another way to find the solution besides guessing?</p>
CopyPasteIt
432,081
<p>In the field <span class="math-container">${\displaystyle \mathbb {Z} /13\mathbb {Z}}$</span>, <span class="math-container">$\,[1] + [1] \ne [0]$</span>, and therefore there are either zero or two distinct <span class="math-container">$\text{modulo-}13$</span> solutions for,</p> <p><span class="math-container">$\tag 1 x^2 \equiv 12 \pmod{13}$</span></p> <p>When one solution <span class="math-container">$[u]$</span> has been found the other solution is <span class="math-container">$-[u]$</span>.</p> <p>We have</p> <p><span class="math-container">$\; \large x^2 \equiv 12 \pmod{13} \; \text{ iff }$</span><br> <span class="math-container">$\; \large x^2 \equiv 2^{-2} \cdot (2^2 \cdot 12) \pmod{13} \; \text{ iff }$</span><br> <span class="math-container">$\; \large x^2 \equiv 2^{-2} \cdot (3^2) \pmod{13} \; \text{ iff }$</span><br> <span class="math-container">$\; \large x^2 \equiv \bigr(2^{-1} \cdot 3^1\bigr)^2 \pmod{13}$</span><br></p> <p>Now the inverse of <span class="math-container">$[2]$</span> is easily calculated,</p> <p><span class="math-container">$\quad [2]^{-1} = [\frac{13 + 1}{2}] = [7]$</span></p> <p>and so one solution to <span class="math-container">$\text{(1)}$</span> is given by</p> <p><span class="math-container">$\quad x \equiv 7 \cdot 3 \equiv 8 \pmod{13}$</span></p> <p>The other solution is given by</p> <p><span class="math-container">$\quad x \equiv -8 \equiv 5 \pmod{13}$</span></p> <hr /> <p>Note: Examining this question resulted in a <a href="https://math.stackexchange.com/q/3901694/432081">conjecture</a>,</p> <p><span class="math-container">$\quad$</span> A new method for finding a solution (when they exist) to <span class="math-container">$x^2 = a \pmod p$</span>?</p>
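A brute-force companion (my own) to the answer above: enumerating residues modulo $13$ recovers the two square roots, and the computed inverse of $2$ checks out.

```python
# Enumerate x with x^2 ≡ 12 (mod 13), and verify the inverse-of-2 step.
solutions = sorted(x for x in range(13) if (x * x) % 13 == 12)
inv2 = pow(2, -1, 13)        # modular inverse (Python 3.8+)
candidate = (inv2 * 3) % 13  # 2^{-1} * 3, one of the two square roots
```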
2,732,562
<p>I'm trying to figure out how to find the number of ternary strings of length $n$ that have 3 or more consecutive 2's. So far I've been able to establish that there are $n(2^{n-1})$ with a single 2. And I think (but am not certain) that this can be extrapolated to the number of strings with a single group of 2's of length $x$ by: $$\bigl(n-(x-1)\bigr)(2^{n-x})$$ What I'm getting caught on is the 'or more' part; any help would be greatly appreciated.</p>
G Cab
317,234
<p>Following another approach, consider a binary word, where a one stands for the ternary digit $2$ and a zero stands for either $0$ or $1$.</p> <p>Take a binary word of length $n$, having $s$ ones and $n-s$ zeros in total, with <em>no more than</em> $r$ consecutive ones.</p> <p>Now the number of such binary words is given by<br> $$ \eqalign{ &amp; M_b (s,r,n) = N_b (s,r,n - s + 1)\quad \left| {\;0 \le {\rm integers }s,n,r} \right.\quad = \cr &amp; = \sum\limits_{\left( {0\, \le } \right)\,\,k\,\,\left( { \le \,\min \left( {{s \over {r + 1}}\,,\,n - s + 1} \right)} \right)} {\left( { - 1} \right)^k \left( \matrix{ n - s + 1 \cr k \cr} \right)\left( \matrix{ n - k\left( {r + 1} \right) \cr s - k\left( {r + 1} \right) \cr} \right)} \cr} $$ as explained in <a href="http://math.stackexchange.com/questions/2045496/number-of-occurrences-of-k-consecutive-1s-in-a-binary-string-of-length-n-conta/2045737#2045737">this related post</a>.</p> <p>Coming back to the ternary words, we just have to multiply $N_b$ by the number of ways to pad the $n-s$ zeros with $0$ or $1$. 
So in general we have $$ \eqalign{ &amp; M_t (s,r,n) = 2^{\,n - s} M_b (s,r,n) = \cr &amp; = 2^{\,n - s} \sum\limits_{\left( {0\, \le } \right)\,\,k\,\,\left( { \le \,\min \left( {{s \over {r + 1}}\,,\,n - s + 1} \right)} \right)} {\left( { - 1} \right)^k \left( \matrix{ n - s + 1 \cr k \cr} \right)\left( \matrix{ n - k\left( {r + 1} \right) \cr s - k\left( {r + 1} \right) \cr} \right)} \cr} $$ which summed over $s$ gives<br> $T(r,n)=$ <em>the number of ternary strings of length $n$ having at most $r$ consecutive twos</em> $$ \bbox[lightyellow] { T(r,n) = \sum\limits_{\left( {0\, \le } \right)\,\,s\,\,\left( { \le \,n} \right)} {2^{\,n - s} \sum\limits_{\left( {0\, \le } \right)\,\,k\,\,\left( { \le \,\min \left( {{s \over {r + 1}}\,,\,n - s + 1} \right)} \right)} {\left( { - 1} \right)^k \left( \matrix{ n - s + 1 \cr k \cr} \right)\left( \matrix{ n - k\left( {r + 1} \right) \cr s - k\left( {r + 1} \right) \cr} \right)} } }$$</p> <p>Note that it can be easily extended to quaternary, quinary, ... words.</p> <p>For your particular case then, the number is given by $3^n-T(2,n)$, which checks with the formulas already given in the preceding answers.</p>
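The count can be cross-checked without the closed form (my own sketch, using a small run-length DP against exhaustive enumeration):

```python
from itertools import product

# Count ternary strings of length n containing "222":
# 3^n minus the strings whose runs of 2s all have length <= 2.
def count_with_222(n):
    # dp[j] = number of valid prefixes ending in a run of exactly j twos
    dp = [1, 0, 0]
    for _ in range(n):
        dp = [2 * (dp[0] + dp[1] + dp[2]),  # append 0 or 1: run resets
              dp[0],                        # append 2 after a non-2
              dp[1]]                        # append 2: run length becomes 2
    return 3**n - sum(dp)

def brute(n):
    return sum(1 for s in product("012", repeat=n) if "222" in "".join(s))

checks = all(count_with_222(n) == brute(n) for n in range(1, 12))
```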
789,458
<p>If one day we finally prove the normality of $\pi $, would we be able to say that we have ourselves a sure-fire <em>truly random</em> number generator?</p>
Asimov
137,446
<p>Not random. Never random. Evenly distributed and pseudo-random, yes. Pi is defined before you calculate it, so it's not random, just unknown before you calculate each digit. If you pick digits that have not been calculated or formulated (or are simply not known to you), then for all practical purposes they are random.</p> <p>TL;DR version:</p> <p>No, but you can use it as such for simple applications.</p>
1,712,289
<p>If from twice the greater of two numbers 17 is subtracted, the result is half the other number. If from half the greater number 1 is subtracted, the result is two-thirds of the smaller number.</p> <p>$$2x - 17 = \frac{ 1 }{2}y$$</p> <p>$$\frac{ x }{2} - 1 = \frac{ 2 }{3}y$$</p> <p>$$-17 - 4 = \frac{ 1 }{2}y - \frac{ 8 }{3}y$$</p> <p>I'm so off. I need a little help.</p>
Brian Tung
224,454
<p><strong>Basic approach.</strong> Your equations look right to me, except that it should be $x/2$ in the second equation, not $x$.</p> <p>Now, if you multiply both sides of the upper equation by $2$, and both sides of the lower equation by $3$, you will get two expressions for $y$ in terms of $x$ that can be equated with each other. Using them, you can determine the value of $x$, and from that, determine the value of $y$.</p>
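Carrying out the hint with exact arithmetic (my own computation): from the first equation $y = 4x - 34$; substituting into the second and clearing denominators gives $3x - 6 = 16x - 136$, so $13x = 130$.

```python
from fractions import Fraction as Fr

# Solve 2x - 17 = y/2 and x/2 - 1 = (2/3) y exactly, then verify.
x = Fr(130, 13)
y = 4 * x - 34
eq1_holds = (2 * x - 17 == y / 2)
eq2_holds = (x / 2 - 1 == Fr(2, 3) * y)
```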
2,653,708
<p>In this question I am using the Euclidean metric to determine the distance between two points.</p> <p>I want to make a function $f(x)=$ the minimum distance between $y=x$ and $y=e^x$ at each given point $x$; is there an efficient way of doing this?</p> <p>A second, related question: if I knew $e^x$ was the shortest distance between $g(x)$ and $y = x$, could I figure out a closed-form solution for $g(x)$?</p>
BoolHool
532,226
<p>Okay so, for your first question: all points on the line $y=x$ have the form $(a,a)$ (so this is our best alternative; keep in mind, however, that the $x$'s in $y=x$ are not the same as the $x$'s in $y=e^x$, as discussed in the comments). As you mentioned, you are using the Euclidean metric, so we need to find the minimum distance between $(a,a)$ and $(x,e^x)$. We can start at: $$g(x) = \sqrt{(a-x)^2+(a-e^x)^2}$$ We would like to find $x$ such that this distance is minimal for a given $a$. We can take the derivative of $g$, treating $a$ as a constant, to find that: $$g'(x) = \frac{-2e^x(a-e^x)-2(a-x)}{2\sqrt{(a-e^x)^2+(a-x)^2}}$$ So we set $g'(x) = 0$; solving for $a$ then gives us the closest point of the form $(a,a)$ for a given point $(x,e^x)$. We do the math and we find that: $$a = \frac{e^{2x}+x}{e^x+1}$$ Since this $a$ minimizes the distance $g$, we just plug it back in. Therefore, the function that represents the minimum distance for each point of the form $(a,a)$ to the curve $y=e^x$, as you said, "for every $x$", is: $$g(x) = \sqrt{\left(\frac{e^{2x}+x}{e^x+1}-x\right)^2+\left(\frac{e^{2x}+x}{e^x+1}-e^x\right)^2}$$</p> <p>Your second question is too unclear, and I'd have to make too many assumptions to answer it.</p>
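A sanity check (my own algebra): setting the numerator of $g'(x)$ to zero gives $(a-x)+e^x(a-e^x)=0$, and solving this for $a$ yields $a=(e^{2x}+x)/(e^x+1)$. The residual of the stationarity condition should then vanish for any $x$.

```python
import math

# Verify that a = (e^{2x} + x) / (e^x + 1) solves the stationarity
# condition (a - x) + e^x (a - e^x) = 0 from g'(x) = 0.
def stationarity_residual(x):
    a = (math.exp(2 * x) + x) / (math.exp(x) + 1)
    return (a - x) + math.exp(x) * (a - math.exp(x))

max_residual = max(abs(stationarity_residual(x))
                   for x in (-1.0, 0.0, 0.5, 1.0, 2.0))
```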
3,016,169
<p>I want to prove or disprove that <span class="math-container">$C^1([a,b], \mathbb{R}^n)$</span> equipped with the norm <span class="math-container">$||x||=\underset{t\in[a,b]}{\sup}|x(t)|_{\mathbb{R}^n}+\underset{t\in[a,b]}{\sup}|\dot{x}(t)|_{\mathbb{R}^n}$</span> is a reflexive Banach space. </p> <p>I figured out that it is indeed a Banach space, but I'm stuck on the reflexive part. I know that a Banach space <span class="math-container">$X$</span> is reflexive if the mapping <span class="math-container">$F: X \rightarrow X''$</span> is surjective, where <span class="math-container">$X''$</span> is the second dual space of <span class="math-container">$X$</span>. </p> <p>However, I have the feeling that this space is not reflexive, but I'm not sure how to prove that.</p> <p>Is it indeed true that this space is not reflexive? </p>
Nate Eldredge
822
<p>There's a standard theorem that for any Banach space <span class="math-container">$X$</span>, if <span class="math-container">$X'$</span> is separable then so is <span class="math-container">$X$</span>. Hence, if <span class="math-container">$X''$</span> is separable then so is <span class="math-container">$X'$</span>.</p> <p>It is not so hard to show that if <span class="math-container">$X = C^1([a,b])$</span>, then <span class="math-container">$X'$</span> is not separable. (I'll take <span class="math-container">$n=1$</span>.) For instance, for any <span class="math-container">$t \in [a,b]$</span>, consider the linear functional <span class="math-container">$\ell_t(x) = \dot{x}(t)$</span>. You can check that it is continuous, and that <span class="math-container">$\|\ell_t - \ell_s\|_{X'} = 2$</span> for any <span class="math-container">$s \ne t$</span>. Hence there is an uncountable closed discrete set in <span class="math-container">$X'$</span>.</p> <p>You can also check that <span class="math-container">$X$</span> itself is separable (using some version of Stone-Weierstrass). So if <span class="math-container">$X$</span> were reflexive, then <span class="math-container">$X'' = X$</span> would be separable, hence <span class="math-container">$X'$</span> would also be separable, but it isn't.</p>
222,555
<p>I would like to find a simple equivalent of:</p> <p>$$ u_{n}=\frac{1}{n!}\int_0^1 (\arcsin x)^n \mathrm dx $$</p> <p>We have:</p> <p>$$ 0\leq u_{n}\leq \frac{1}{n!}\left(\frac{\pi}{2}\right)^n \rightarrow0$$</p> <p>So $$ u_{n} \rightarrow 0$$</p> <p>Clearly:</p> <p>$$ u_{n} \sim \frac{1}{n!} \int_{\sin(1)}^1 (\arcsin x)^n \mathrm dx $$</p> <p>But is there a simpler equivalent for $u_{n}$?</p> <p>Using integration by parts:</p> <p>$$ \int_0^1 (\arcsin x)^n \mathrm dx = \left(\frac{\pi}{2}\right)^n - n\int_0^1 \frac{x(\arcsin x)^{n-1}}{\sqrt{1-x^2}} \mathrm dx$$</p> <p>But the relation </p> <p>$$ u_{n} \sim \frac{1}{n!} \left(\frac{\pi}{2}\right)^n$$</p> <p>seems to be wrong...</p>
Did
6,179
<p>The change of variable $x=\cos\left(\frac{\pi s}{2n}\right)$ yields $$ u_n=\frac1{n!}\left(\frac\pi2\right)^{n+2}\frac1{n^2}v_n, $$ with $$ v_n=\int_0^n\left(1-\frac{s}n\right)^n\,\frac{2n}\pi \sin\left(\frac{\pi s}{2n}\right)\,\mathrm ds. $$ When $n\to\infty$, $\left(1-\frac{s}n\right)^n\mathbf 1_{0\leqslant s\leqslant n}\to\mathrm e^{-s}$ and $\frac{2n}\pi \sin\left(\frac{\pi s}{2n}\right)\mathbf 1_{0\leqslant s\leqslant n}\to s$. Both convergences are monotonic hence $v_n\to\int\limits_0^\infty\mathrm e^{-s}\,s\,\mathrm ds=1$. Finally, $$ u_n\sim\frac1{(n+2)!}\left(\frac\pi2\right)^{n+2}. $$</p>
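A numerical check (my own) of the equivalent $u_n\sim\frac1{(n+2)!}\left(\frac\pi2\right)^{n+2}$: the substitution $x=\sin t$ turns $n!\,u_n$ into $I_n=\int_0^{\pi/2}t^n\cos t\,\mathrm dt$, a smooth integrand that the midpoint rule handles well.

```python
import math

# I_n = ∫_0^{pi/2} t^n cos t dt by the midpoint rule; then
# u_n (n+2)! / (pi/2)^{n+2}  =  I_n (n+1)(n+2) / (pi/2)^{n+2}  ->  1.
def I(n, steps=200_000):
    h = (math.pi / 2) / steps
    return h * sum(((k + 0.5) * h) ** n * math.cos((k + 0.5) * h)
                   for k in range(steps))

n = 50
ratio = I(n) * (n + 1) * (n + 2) / (math.pi / 2) ** (n + 2)
```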
2,698,121
<p>Came across a question about CDF and PDF in my homework:</p> <p><a href="https://i.stack.imgur.com/bYqvr.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bYqvr.jpg" alt="enter image description here"></a></p> <p>a. What is the cumulative distribution function?</p> <p>Knowing that the pdf is the derivative of the cdf, I integrated the piecewise function to: 0 if x &lt; 0, (3/4)x if 0 ≤ x ≤ 1, 3/4 + 1/4(x-3) if 3 ≤ x ≤ 4, and 1 for 4 &lt; x. However, I am given that the answer for 1 ≤ x ≤ 3 is 3/4, but I do not know how to find it. </p> <p>b. What is P(X&gt;1)?</p> <p>I know that for P(X &lt; u) I will sub in the value of u into the cdf function and find the answer. However, I don't think the same will work for P(X &gt; u). Do I need to integrate the pdf from 0 to 2? </p> <p>c. What is E(X)?</p> <p>I know that E(X) is the integral of xf(x) from negative infinity to infinity, but since this is a piecewise function, do I need to take the integral of each interval and find their sum?</p> <p>Sorry for asking so many questions at once, I am really lost in the whole CDF and PDF idea.</p> <p>Thanks!</p>
Siong Thye Goh
306,553
<p>a) </p> <p>if $x \in [1,3]$,</p> <p>\begin{align}Pr(X \le x) &amp;= \int_{-\infty}^xf(t) \, dt \\ &amp;=\int_0^1f(t) \, dt + \int_1^x f(t) \, dt\\ &amp;= \int_0^1 \frac34 \, dt + \int_1^x 0 \, dt \\ &amp;= \frac34\end{align}</p> <p>Geometrically, just find the area under the graph of the density up to $x$.</p> <p>The area stops increasing after $1$ and starts increasing again after $3$.</p> <p>b) $$P(X&gt;1) = 1-P(X \le 1) $$ Use your answer to part $a$ to handle this.</p> <p>c) Evaluate the following: $$E[X]=\int_0^1 \frac{3x}4 \, dx+\int_3^4 \frac{x}4\, dx$$ </p>
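An exact-arithmetic companion (my own, assuming from the answer above that the density is $3/4$ on $[0,1]$ and $1/4$ on $[3,4]$, zero elsewhere):

```python
from fractions import Fraction as Fr

# The integrals are elementary: ∫_0^1 x dx = 1/2 and ∫_3^4 x dx = 7/2.
P_X_le_1 = Fr(3, 4)                                 # area of the first block
P_X_gt_1 = 1 - P_X_le_1                             # part (b)
E_X = Fr(3, 4) * Fr(1, 2) + Fr(1, 4) * Fr(7, 2)     # ∫ x f(x) dx = 3/8 + 7/8
```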
1,089,635
<blockquote> <p>Is it possible to display a result in Mathcad as a function of $\pi$? </p> </blockquote> <p>I'm studying physics and I need to show exact results at the exams. I know I can set Mathcad to give me the result in decimals or in fractions, but none of them are good enough. </p> <p><strong>Example: calculating 400/200 $\pi$</strong></p> <p>What mathcad can give me: $(400/200) \cdot \pi = 6.283$ or $(400/200) \cdot \pi = 10838702/1725033 $</p> <p>What I want: $(400/200) \cdot \pi = 2 \pi$</p>
Community
-1
<p>Solution (found by the OP): one can use "CTRL + ." instead of " = " in Mathcad. That gives the desired form of result.</p>
4,164,553
<p>Can anybody enlighten me about the applications of <a href="https://en.wikipedia.org/wiki/Intuitionistic_logic" rel="noreferrer">intuitionistic logic</a>? I am familiar with this system only by <a href="https://www.maa.org/publications/maa-reviews/proof-theory" rel="noreferrer">G.Takeuti's book</a>, where it is described as one of the examples of axiomatic systems of logic. Is it possible to explain to a non-specialist why this system deserves studying?</p> <p>I suspect that this is easier to explain on the example of <a href="https://encyclopediaofmath.org/wiki/Intuitionistic_propositional_calculus" rel="noreferrer">propositional intuitionistic calculus</a>, so if somebody could cast a light on this, this would be especially valuable.</p>
HallaSurvivor
655,547
<p>One fun answer is &quot;programming languages&quot;. It turns out a good way to study programming languages is by viewing them as proof systems, and the programming languages you get in this way tend to be intuitionistic proof systems. You <em>can</em> build programming languages which correspond to classical logic (using <code>letcc</code> and <a href="https://en.wikipedia.org/wiki/Continuation" rel="noreferrer">continuations</a>, for instance. See <a href="https://www.cs.cmu.edu/%7Erwh/pfpl/supplements/letcc.pdf" rel="noreferrer">here</a>), but they're less common.</p> <p>The correspondence goes both ways, too! Anything you prove intuitionistically has some &quot;computational content&quot; to it. So there's a very real sense in which you can run your proof like a program, and in fact most proof systems work in this way. Much ink has been spilled on this subject, so I won't go into it in this answer (see <a href="https://en.wikipedia.org/wiki/Curry%E2%80%93Howard_correspondence" rel="noreferrer">here</a>, <a href="https://www.cs.cornell.edu/courses/cs3110/2017fa/l/21-curryhoward/lec.pdf" rel="noreferrer">here</a>, and <a href="https://www.cs.cmu.edu/%7Efp/courses/15317-f08/lectures/04-pap.pdf" rel="noreferrer">here</a> for instance. Plus <a href="http://math.andrej.com/2021/05/18/computing-an-integer-using-a-sheaf-topos/" rel="noreferrer">here</a> for an example of this being done &quot;in the wild&quot;).</p> <p>If you want a great reference for building programming languages with logic, I heartily recommend Harper's <em>Practical Foundations for Programming Languages</em> (lovingly referred to as <span class="math-container">$\mathsf{PFPL}$</span> in at least a few of the things I've linked so far).</p> <hr /> <p>I hope this helps ^_^</p>
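A tiny illustration (my own, in plain Python rather than a proof assistant) of the proofs-as-programs reading sketched above: a proof of $(A \to B) \to ((B \to C) \to (A \to C))$ is literally function composition, and "running the proof" computes.

```python
from typing import Callable, TypeVar

A = TypeVar("A")
B = TypeVar("B")
C = TypeVar("C")

def syllogism(f: Callable[[A], B]) -> Callable[[Callable[[B], C]], Callable[[A], C]]:
    # Intuitionistic proof term: λf. λg. λa. g(f(a))
    return lambda g: lambda a: g(f(a))

# "Run the proof" with concrete evidence: str -> int, then int -> bool.
result = syllogism(len)(lambda n: n > 3)("logic")
```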
71,608
<p>Consider the following question:</p> <p>Is there a family $\mathcal{F}$ of subsets of $\aleph_\omega$ that satisfies the following properties?</p> <p>(1) $|\mathcal{F}|=\aleph_\omega$</p> <p>(2) For all $A\in \mathcal{F}$, $|A|&lt;\aleph_\omega$</p> <p>(3) For all $B\subset \aleph_\omega$, if $|B|&lt;\aleph_\omega$, then there exists some $B'\in \mathcal{F}$ such that $B\subset B'$.</p> <p>I am not sure if there is anything special about $\aleph_\omega$, but this was the example that came up. </p> <p>Any help?</p>
Andy Voellmer
14,794
<p>I think the following diagonalization will show that there is no such set $\mathcal{F}$.</p> <p>Suppose there were such an $\mathcal{F}$. Then we could split it up into $\omega$ many chunks $( \mathcal{F}_i ) _{i \in \omega} $ such that each $\mathcal{F} _i$ had exactly the sets of size $\aleph_i $ or smaller that were in $\mathcal{F}$. Now for each $\mathcal{F}_i$ we will construct a countable set $S_i \subset \aleph _\omega$ such that every set $A \in \mathcal{F} _i$ has only finite intersection with $S_i$. If we can make such an $S_i$, then by unioning together all the $S_i$ for every $i \in \omega$, we will get a countable set which is not contained in any $A \in \mathcal{F}$.</p> <p>So: for a given $\mathcal{F} _i$, if there are fewer than $\aleph _\omega $ sets in it, then it's easy to make our set $S_i$, since the union of all the sets in $\mathcal{F} _i$ is smaller than $\aleph _\omega $. Now suppose there are $\aleph _\omega$ many sets in $\mathcal{F} _i$. Break $\mathcal{F} _i$ up into $\omega$ many chunks $( \mathcal{G}_j ) _{j \in \omega} $ such that each $ \mathcal{G}_j $ has size $\aleph _j $ and such that if $m &lt; n$ then $\mathcal{G}_m \subset \mathcal{G}_n $. Note that the union of each $ \mathcal{G}_j $ has size less than $\aleph _\omega$. So now we can construct our set $S_i$ as follows: pick the $j$-th element to be something outside the union of $ \mathcal{G}_j $. Then $S_i$ has only finite intersection with any $A \in \mathcal{F}_i $. </p>
1,465,229
<p>At p. 388 of Calculus, Spivak gives a formula:</p> <p>$$\frac{1}{1+t^2} = 1 - t^2 + t^4 - ... + (-1)^nt^{2n} + \frac{(-1)^{n+1}t^{2n+2}}{1+t^2}$$</p> <p>Which can be integrated to find $\arctan(x)$.</p> <p>I don't understand where this formula comes from, but I found it up to $(-1)^nt^{2n}$ by considering the geometric series for $\frac{1}{1-x}$ and replacing $x$ by $-x^2$ to get the series for $\frac{1}{1+x^2}$. I don't see the term $\frac{(-1)^{n+1}x^{2n+2}}{1+x^2}$ though, because the series I got this way is $\frac{1}{1+x^2} = \sum_{n=0}^{\infty}(-1)^nx^{2n}$.</p>
skyking
265,767
<p>The term $(-x^2)^{n+1}/(1+x^2)$ is just the remainder term:</p> <p>$${1\over 1 - (-x^2)} = \sum_0^\infty (-x^2)^k = \sum_0^n (-x^2)^k + \sum_{n+1}^\infty (-x^2)^k$$</p> <p>where</p> <p>$$\sum_{n+1}^\infty (-x^2)^k = (-x^2)^{n+1} \sum_0^\infty (-x^2)^k = {(-x^2)^{n+1}\over1 - (-x^2)}$$</p> <p>Alternatively, you can work from the RHS and use the formula for a finite geometric sum (so you won't have to resort to infinite series):</p> <p>$$\sum_0^n(-x^2)^k + {(-x^2)^{n+1}\over 1 - (-x^2)} = {1 - (-x^2)^{n+1}\over 1- (-x^2)} + {(-x^2)^{n+1}\over 1 - (-x^2)} = {1\over 1+x^2}$$</p>
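A numerical check (my own) of the finite identity behind the derivation: $\frac{1}{1-y} = \sum_{k=0}^{n} y^k + \frac{y^{n+1}}{1-y}$, here with $y=-t^2$; note the finite identity holds for every $t$, not just $|t|<1$.

```python
# Compare both sides of 1/(1+t^2) = sum_{k<=n} (-t^2)^k + (-t^2)^{n+1}/(1+t^2).
def lhs(t):
    return 1 / (1 + t * t)

def rhs(t, n):
    y = -t * t
    return sum(y**k for k in range(n + 1)) + y ** (n + 1) / (1 - y)

max_err = max(abs(lhs(t) - rhs(t, n))
              for t in (-1.5, -0.3, 0.0, 0.7, 2.0) for n in range(6))
```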
1,465,229
<p>At p. 388 of Calculus, Spivak gives a formula:</p> <p>$$\frac{1}{1+t^2} = 1 - t^2 + t^4 - ... + (-1)^nt^{2n} + \frac{(-1)^{n+1}t^{2n+2}}{1+t^2}$$</p> <p>Which can be integrated to find $\arctan(x)$.</p> <p>I don't understand where this formula comes from, but I found it up to $(-1)^nt^{2n}$ by considering the geometric series for $\frac{1}{1-x}$ and replacing $x$ by $-x^2$ to get the series for $\frac{1}{1+x^2}$. I don't see the term $\frac{(-1)^{n+1}x^{2n+2}}{1+x^2}$ though, because the series I got this way is $\frac{1}{1+x^2} = \sum_{n=0}^{\infty}(-1)^nx^{2n}$.</p>
egreg
62,967
<p>Consider $$ 1+x+x^2+\dots+x^n=\frac{1-x^{n+1}}{1-x}=\frac{1}{1-x}-\frac{x^{n+1}}{1-x} $$ which can also be written $$ \frac{1}{1-x}=1+x+x^2+\dots+x^n+\frac{x^{n+1}}{1-x} $$ Now substitute $x=-t^2$, which gives $$ \frac{1}{1+t^2}=1+(-t^2)+(-t^2)^2+\dots+(-t^2)^n+\frac{(-t^2)^{n+1}}{1+t^2} $$ and now it's just a matter of observing that $(-t^2)^k=(-1)^kt^{2k}$.</p>
3,055,272
<p><strong>Background</strong></p> <p>A connected graph has an <em>Eulerian circuit</em> if every vertex has even degree. </p> <p>I am thinking about a certain classification of connected graphs where, for a connected graph <span class="math-container">$G$</span>, every <a href="https://en.wikipedia.org/wiki/Cut_(graph_theory)" rel="nofollow noreferrer">cut</a> splits (intersects) an <em>even</em> number of edges.</p> <p>For example, while the following graph does not have an Eulerian circuit, the displayed cut 'splits an <em>even</em> number of edges':</p> <p><a href="https://i.stack.imgur.com/eS6vOs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/eS6vOs.png" alt="Graph cut"></a></p> <blockquote> <p>Claim: A connected graph <span class="math-container">$G$</span> has an Eulerian circuit iff every cut of <span class="math-container">$G$</span> splits an even number of edges.</p> </blockquote> <p>Proof attempt:</p> <p>(<span class="math-container">$\rightarrow$</span>) If <span class="math-container">$G$</span> has an Eulerian circuit then ... ?</p> <p>(<span class="math-container">$\leftarrow$</span>) Suppose every cut splits an even number of edges. Then we can make a cut around each individual vertex that will split an even number of edges. Hence the degree of each vertex is even. Therefore <span class="math-container">$G$</span> has an Eulerian circuit. </p> <p>I'm lost in the forwards direction of the proof. Any hint would be appreciated. </p>
Peter Taylor
5,676
<p>Given: <span class="math-container">$\forall v \in V: \textrm{deg}(v) \equiv 0 \pmod 2$</span>. Partition <span class="math-container">$V$</span> into <span class="math-container">$S$</span> and <span class="math-container">$T$</span>. We desire to show that the number of edges split by the cut is even.</p> <p>By induction on the size of <span class="math-container">$T$</span>.</p> <p>Base case: when <span class="math-container">$|T| = 0$</span>, there are <span class="math-container">$0$</span> edges split by the cut, which is even.</p> <p>Inductive step: if it holds for all <span class="math-container">$|T| = n$</span>, transfer a vertex <span class="math-container">$u$</span> from <span class="math-container">$S$</span> to <span class="math-container">$T$</span>. Since <span class="math-container">$u$</span> has an even number of edges in total, its edges to <span class="math-container">$S$</span> and its edges to <span class="math-container">$T$</span> have the same parity, so adding the edges <span class="math-container">$u-S$</span> to the cut and removing the edges <span class="math-container">$u-T$</span> from the cut doesn't change the parity of the number of edges split by the cut.</p> <p>QED.</p>
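A brute-force check (my own) of the statement on a small example: in $K_5$ every vertex has even degree $4$, so every cut $(S, V\setminus S)$ should split an even number of edges.

```python
from itertools import combinations

# Check every cut of K5 splits an even number of edges.
V = list(range(5))
E = list(combinations(V, 2))          # K5 edge list, 10 edges

def cut_size(S):
    return sum(1 for u, v in E if (u in S) != (v in S))

all_cuts_even = all(cut_size(set(S)) % 2 == 0
                    for r in range(len(V) + 1)
                    for S in combinations(V, r))
```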
1,150,805
<p>An unfair 3-sided die is rolled twice. The probability of rolling a 3 is $0.5$, the probability of rolling a 1 is $0.25$, and the probability of rolling a 2 is $0.25$. Let $X$ be the outcome of the first roll and $Y$ the outcome of the second.</p> <ul> <li><p>Find the Joint Distribution of $X$ and $Y$ in a Table.</p> <p>The outcome of $X = \{1,2,3\}$.</p> <p>The outcome of $Y = \{1,2,3\}$.</p> <p>Would I just make a table of all the roll possibilities?</p></li> <li><p>Find the Probability $\mathrm{P}(X+Y \geq 5)$.</p> <p>The only roll that will make this is a 3 or a 2. Should I just take the same of every possible roll to find this probability?</p></li> </ul>
Asaf Karagila
622
<p>For induction you have to define some explicit base case, what is the smallest finite set? The empty set. Define its power set explicitly.</p> <p>Now suppose that you defined the power set of a set with $n$ elements, and $A=\{a_0,\ldots,a_n,a_{n+1}\}$, find a way to write $A$ as the union of $A'$ and a singleton, with $A'$ having $n$ elements; now ask yourself, what sort of subsets of $A$ are there, and how do they relate to subsets of $A'$ and that additional singleton?</p> <hr> <p>Since you're a computer science student (according to the user's profile), here is another way to think about it.</p> <p>Recall that there is a canonical identification between subsets of $\{0,\ldots,n-1\}$ and binary strings of length $n$. Namely, $A\subseteq\{0,\ldots,n-1\}$ is mapped to the string $\langle a_0,\ldots,a_{n-1}\rangle$ such that $a_i=0$ if and only if $i\notin A$.</p> <p>Now you can think about this as defining a string of length $n+1$ as a concatenation of a string of length $n$ with either $0$ or $1$.</p>
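The inductive definition sketched above translates directly into a recursive procedure: the subsets of $A = A' \cup \{a_{n+1}\}$ are exactly the subsets of $A'$, together with those same subsets with $a_{n+1}$ added. A sketch (function name mine):

```python
def power_set(s):
    """Subsets of s, built by the induction described above."""
    s = list(s)
    if not s:                      # base case: P(empty set) = { empty set }
        return [set()]
    *rest, last = s
    smaller = power_set(rest)      # subsets of A' (the first n elements)
    # each subset of A either omits the new element or contains it
    return smaller + [sub | {last} for sub in smaller]

print(len(power_set({1, 2, 3})))   # 8 == 2**3
```

The two branches of the return also mirror the binary-string view: the last bit of the string is 0 (omit the new element) or 1 (include it).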
4,036,049
<p>In order for <span class="math-container">$n^5 - n$</span> to be divisible by <span class="math-container">$5$</span>, we need <span class="math-container">$n^5 - n = 5 x$</span> (for some <span class="math-container">$x$</span>, where <span class="math-container">$x$</span> is a natural number). I factored <span class="math-container">$(n^5 - n) = n(n^2+1)(n+1)(n-1)$</span> and I do not know what to do next.</p> <p>And I tried something else like <span class="math-container">$(n^5 - n) = n (n^4 - 1)$</span>, but now I need to show <span class="math-container">$(n^4 - 1)$</span> is divisible by <span class="math-container">$5$</span>; how do I do that? Thanks in advance.</p>
Deepak
151,732
<p>You're solving a cubic here. While there is a general solution, it's no picnic. For most &quot;elementary&quot; (school-style) problems, you won't be expected to apply the full cubic solution. Instead you'll be expected to apply some combination of Rational Root Theorem, Factor Theorem and Polynomial Long Division (or a shortcut like Synthetic Division) to reduce the degree to a quadratic you can solve with methods you already know. But this requires the cubic to have at least one &quot;nice&quot; rational root, which often (but not always) turns out to be an integer.</p> <p>Let's proceed on the assumption that this is a problem specifically designed to be solvable at an elementary level.</p> <p>Rearrange the cubic and make it monic (lead coefficient one):</p> <p><span class="math-container">$t^3 - 15t^2 - 200t + 1250 = 0$</span></p> <p>By <a href="https://en.m.wikipedia.org/wiki/Rational_root_theorem" rel="nofollow noreferrer">Rational Root Theorem</a>, any rational root (if it exists) will need to divide <span class="math-container">$1250$</span>. You'll have to consider both positive and negative possibilities for each number.</p> <p><span class="math-container">$1250 = (2^1)(5^4)$</span></p> <p>This means it has a total of <span class="math-container">$(1+1)(4+1)= 10$</span> factors, considering only the positive ones. Double this for the negatives, and you're looking at <span class="math-container">$20$</span>.</p> <p>At this point, you're going to have to &quot;appeal&quot; to the good sense of the question setter and hope the root you're looking for is within that narrow range they stipulated. Which means you'll start by testing <span class="math-container">$1, 2, 5, 10$</span>. 
You don't even need to get to the end of that list because you'll hit paydirt at <span class="math-container">$5$</span>.</p> <p>Once you know <span class="math-container">$t=5$</span> is a root, you can immediately deduce that <span class="math-container">$(t-5)$</span> is a factor of the cubic polynomial (this is Factor Theorem). You can then divide the cubic by <span class="math-container">$(t-5)$</span> and get a quadratic you can easily solve by any method you've learned (factorisation by inspection, completing the square or quadratic formula).</p> <p>The division can be simplified by <a href="https://www.purplemath.com/modules/synthdiv.htm" rel="nofollow noreferrer">Ruffini's method of synthetic division</a>. The quadratic you get as the result will be <span class="math-container">$t^2 - 10t - 250$</span>, for which the roots (by formula) are <span class="math-container">$5(1 \pm \sqrt{11})$</span>. Neither of these values is in the required range, so you can reject them after finding them. But finding them is a must, for rigour.</p> <p>Now, what would've happened if, in the original application of Rational Root Theorem, you couldn't find a root among the candidates <span class="math-container">$1,2,5,10$</span>? Technically you have a few choices. You can go on to test the remaining <span class="math-container">$20-4 = 16$</span> possibilities. It is possible you'll get a nice rational root outside the stipulated range that, after polynomial division, gets you to a quadratic which <em>does</em> have one or two root(s) in the stipulated range. But that would have to be a pretty sadistic question setter. Not impossible, just unpleasant.</p> <p>But it's still a more pleasant undertaking than actually solving the cubic using the general solution, which I won't be going into. It's far too tedious and involved.</p> <p>Anyway, the only solution for your question in the required range is <span class="math-container">$t=5$</span>.</p>
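The candidate-testing step described above can be automated. A sketch (helper names mine) that applies the Rational Root Theorem search to the monic cubic: for a monic integer polynomial, any rational root must be an integer dividing the constant term.

```python
from itertools import chain

def divisors(n):
    n = abs(n)
    return [d for d in range(1, n + 1) if n % d == 0]

def rational_integer_roots(coeffs):
    """Integer roots of a monic integer polynomial (coefficients, highest degree first)."""
    def value(t):
        acc = 0
        for c in coeffs:
            acc = acc * t + c          # Horner evaluation
        return acc
    candidates = chain.from_iterable((d, -d) for d in divisors(coeffs[-1]))
    return sorted({t for t in candidates if value(t) == 0})

# t^3 - 15 t^2 - 200 t + 1250
print(rational_integer_roots([1, -15, -200, 1250]))   # [5]
```

This confirms that $t=5$ is the only rational root; the other two roots $5(1\pm\sqrt{11})$ are irrational and so never show up in the divisor search.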
4,036,049
<p>In order for <span class="math-container">$n^5 - n$</span> to be divisible by <span class="math-container">$5$</span>, we need <span class="math-container">$n^5 - n = 5 x$</span> (for some <span class="math-container">$x$</span>, where <span class="math-container">$x$</span> is a natural number). I factored <span class="math-container">$(n^5 - n) = n(n^2+1)(n+1)(n-1)$</span> and I do not know what to do next.</p> <p>And I tried something else like <span class="math-container">$(n^5 - n) = n (n^4 - 1)$</span>, but now I need to show <span class="math-container">$(n^4 - 1)$</span> is divisible by <span class="math-container">$5$</span>; how do I do that? Thanks in advance.</p>
Barry Cipra
86,747
<p>This is mainly an extended comment on the answers that have already been posted, but it may be of help in approaching problems of this type, where you can assume the problem poser has carefully constructed things to have a nice answer.</p> <p>As the other answers observe, the cubic simplifies initially to</p> <p><span class="math-container">$$t^3-15t^2-200t+1250=0$$</span></p> <p>which you can then attack by hoping it has an <em>integer</em> root. The key observation to make is that since the coefficients <span class="math-container">$15$</span>, <span class="math-container">$200$</span>, and <span class="math-container">$1250$</span> are all divisible by <span class="math-container">$5$</span>, then <span class="math-container">$t^3$</span>, if it's an integer, <em>must also be divisible by <span class="math-container">$5$</span></em>. So if we let <span class="math-container">$t=5u$</span>, the equation becomes <span class="math-container">$125u^3-15\cdot25u^2-1000u+1250=0$</span>, which simplifies to</p> <p><span class="math-container">$$u^3-3u^2-8u+10=0$$</span></p> <p>The smaller coefficients here make this a much easier cubic to think about, and it might even jump right out at you that <span class="math-container">$u=1$</span> is a root, since <span class="math-container">$3+8=1+10$</span>. This leads to the solution <span class="math-container">$t=5u=5$</span>.</p>
1,627,713
<p>This is maybe math $101$ question:</p> <p>Let $z_1=1+i$.</p> <p>I know that $r=\sqrt 2$ and $\theta=\arctan(1/1)=\pi/4$ so $$z_1=\color{blue}{\sqrt 2e^{i\pi/4}} .$$</p> <p>But now if I take a look at</p> <p>$z_2=-1-i$,</p> <p>I know that $r=\sqrt 2$ and $\theta=\arctan(-1/-1)=\pi/4$ so $$z_1=\color{blue}{\sqrt 2e^{i\pi/4}}.$$</p> <p>But $z_2$ should be equal to $$\color{red}{\sqrt 2 e^{5i\pi/4}} .$$</p> <p>Why in $z_2$ should I add $\pi$ in the power of the exponent?</p>
Neal
20,569
<p>That's actually correct. For any $a,b\in\Bbb{R}^2 - \Bbb{E}$, you need to find such a path. My hint to finding such a path is, try to think about what the space $\Bbb{R}^2 - \Bbb{E}$, the points with at least one irrational coordinate, might look like. Then see if you can visualize how you might go from one point with at least one irrational coordinate to another point with at least one irrational coordinate so that you always have at least one irrational coordinate.</p>
827,154
<p>I need help with the definition of "within 1":</p> <ul> <li><p>If $x = 8$ and $y = 7$, then $x$ is "within 1" of $y$. </p></li> <li><p>If $x = 8$ and $y = 9$, then $x$ is "within 1" of $y$.</p></li> <li><p>If $x = 8$ and $y = 8$, is $x$ still "within 1" of $y$?</p></li> </ul> <p>It's my understanding that this would still be true, but I'm being asked for something to back up my assumption, so I guess I'm looking for a second opinion.</p>
Earl Grey
193,393
<p>I have come across a similar query around the definition of "within". The dictionary definition (from www.oed.com) is "That which is within or inside".<br> Given the "or" in the definition, "within 1 of" includes the value 1.</p>
878,373
<p>Often I have heard about the link between Algebra (in particular Representations of Groups and Algebras) and some "indefinite" field of Physics.</p> <p>I have a good preparation in Algebra and Representation Theory (in particular about Representations of Lie Algebras), and I'm fascinated with Physics. My idea is try to understand this link and eventually study it with more depth.</p> <p>Hence I'm looking for an introductory book that emphasizes the applications of Algebra in Physics from a comprehensible and mathematical point of view.</p> <p>Does anyone have an idea for a book with these requisites?</p> <p>Thank you! </p>
Salix Liu
834,147
<p>If your <span class="math-container">$f$</span> is continuously differentiable, then this Itô integral is a pathwise Riemann–Stieltjes (RS) integral, so of course you can find a bound.</p>
118,298
<p>I'm trying to work through "Elements of Functional Languages" by Martin Henson. On p. 17 he says:</p> <blockquote> <p>$v$ occurs free in $v$, $(\lambda v.v)v$, $vw$ and $(\lambda w.v)$ but not in $\lambda v.v$ or in $\lambda v.w$. And $v$ occurs bound in $\lambda v.v$ and in $(\lambda v.v)v$ but not in $v$, $vw$ or $(\lambda w.v)$. Note that free and bound variables are not opposites. A variable may occur free and bound in the same expression.</p> </blockquote> <p>Can someone explain what this means?</p>
Community
-1
<p>Since you implied you're comfortable on the computer side of the house... it's talking about the scope of a variable. "$\lambda x.$" introduces a new scope which lasts for the length of the lambda expression, and $x$ is a local variable in that scope.</p> <p>A free variable is one not local to the expression. e.g. in $\lambda x. xy$, $y$ is free.</p> <p>Some syntaxes for lambda calculus allow a local variable to shadow a global one, just as in common programming languages. In $(\lambda x.x)x$, the lambda expression introduces a local variable $x$ which shadows the 'global' variable by the same name. I like to use colors when dealing with expressions like this, to help distinguish them: the expression is colored $(\lambda {\color{red}x}.{\color{red} x})\color{green}{x}$. The red and green $x$'s are different variables.</p> <p>Note that this isn't lambda-calculus specific. Quantifiers ($\forall x:$, $\exists x:$) do the same thing. So does integral notation: $\int \ldots \, dx$ (Leibniz notation for derivatives too... sort of...). Such a thing is also usually implied when defining functions pointwise, as in</p> <p>$$f(x) := x^2$$</p> <p>As usually meant, $x$ is a variable local to the expression. And $f$ is a global variable! (or maybe a global constant, depending on the context and how one likes to set up the low-level details of syntax)</p> <p>Do keep in mind that people aren't always consistent about distinguishing between syntactic details. Particularly on topics like whether $x^2$ is an expression that denotes a real number (the square of $x$) or an expression denoting a function in one indeterminate variable (the function that squares its input) to which $x$ is bound.</p>
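The scope rules described above are easy to make precise with a tiny interpreter-style function. A sketch that computes the free variables of a lambda term (the tuple encoding of terms is my own choice, not standard notation):

```python
def free_vars(term):
    kind = term[0]
    if kind == 'var':                        # a variable occurs free in itself
        return {term[1]}
    if kind == 'app':                        # application: union of both sides
        return free_vars(term[1]) | free_vars(term[2])
    if kind == 'lam':                        # lambda v. body binds v inside body
        return free_vars(term[2]) - {term[1]}
    raise ValueError(kind)

# (lambda v. v) v : the outer v is free; the bound v is removed by the lambda
term = ('app', ('lam', 'v', ('var', 'v')), ('var', 'v'))
print(free_vars(term))                         # {'v'}
print(free_vars(('lam', 'v', ('var', 'w'))))   # {'w'}
```

Note how the same name `v` occurs both bound (inside the lambda) and free (as the argument) in the first example, which is exactly the point the quoted text makes.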
2,010,069
<p>I am looking on the solution to this problem presented in the book <em>"Fifty Challenging Problems in Probability with Solutions"</em> by Mosteller (p.18-19).</p> <blockquote> <p>On average, how many times must a die be thrown until one gets a 6?</p> </blockquote> <p>There are many ways to solve this problem as this is simple example of a geometric distribution, but I don't quite get the trick the author did it with the $qm$. <br> <strong>I am looking for explanation/interpretation of this trick (transition (2) (3) )</strong>? <br> <br> The first expression is clear, it is just the expansion of the expected value definition</p> <p><br> (p be the probability of a 6 on a given trial) <br> $$ m = p + 2 pq + 3pq^2 + 4pq^3 + ... \quad \quad \quad (1)$$ <br> Further the "trick" with qm has been used <br> $$qm =\ \ \ \ \ \ \ \ pq + 2pq^2 + 3pq^3 + ... \quad \quad \quad (2)$$ so that $$m - qm = p + pq + pq^2 + ... \quad \quad \quad (3)$$ $$m(1-q) = 1 \quad \quad \quad (4)$$ $$m=\frac{1}{p} \quad \quad \quad (5)$$</p>
user26872
26,872
<p>We have $x = x_i e_i = x'_i e'_i$ where $e_i$ and $e'_i$ are bases related by nonsingular linear transformation. Note that ${e'}_i^T e'_j = g_{ij}$, where $g$ is invertible. Thus, ${e'}_i^T e_j x_j = {e'}_i^T e'_j x'_j = g_{ij}x'_j$ or $$x'_i = (g^{-1})_{ij}{e'}_j^T e_k x_k = (g^{-1})_{ij}{e'}_j^T x.$$ This gives us two good pieces of intuition. First, for a nonsingular linear transformation $A$ we can think of the elements of $A$ as being given by $$a_{ij} = (g^{-1})_{ik}{e'}_k^T e_j,$$ that is, by the dot product of a certain linear combination of the transformed basis vectors with the untransformed basis vectors. Second, to find the result of applying $A$ to $x$ we simply dot the same linear combination of the transformed basis vectors with the vector $x$.</p> <p>For <em>orthogonal transformations</em> we find $g_{ij} = \delta_{ij}$ and so $$x'_i = {e'}_i^T e_j x_j = {e'}_i^T x \hspace{5ex}\textrm{and}\hspace{5ex} a_{ij} = {e'}_i^T e_j.$$</p> <p><em>Note</em>: We use Einstein's summation convention, $x = x^i e_i \equiv \sum_i x^i e_i$. For this problem the dual basis is $e^i = e_i^T$. The dual of $x$ is $x^T$, so $x_i e^i = x^i e_i^T$. We need not distinguish between $x_i$ and $x^i$ and so we write $x = x_i e_i$. </p> <p><strong>Example</strong></p> <p>Let $$A = \left(\begin{array}{cc}\cos\theta &amp; \sin\theta \\ -\sin\theta &amp; \cos\theta \end{array}\right).$$ Then $$\left(\begin{array}{cc}\cos\theta &amp; \sin\theta \\ -\sin\theta &amp; \cos\theta \end{array}\right) \left(\begin{array}{c}x \\ y\end{array}\right)$$ will give the components of $x$ in the new basis $e'_i$, where $[e_i]_j = \delta_{ij}$ is the standard basis. (This is a passive, rather than active, transformation.) 
It is straightforward to show that $e'_i = A^{-1}e_i = A^T e_i,$ so $$e'_1 = \left(\begin{array}{c}\cos\theta \\ \sin\theta\end{array}\right) \hspace{5ex}\textrm{and}\hspace{5ex} e'_2 = \left(\begin{array}{c}-\sin\theta \\ \cos\theta\end{array}\right).$$ One can then easily check that the elements of $A$ are given by $a_{ij} = {e'}_i^T e_j$. Note that, $$x'_1 = {e'}_1^T e_j x_j = \left(\begin{array}{cc}\cos\theta &amp; \sin\theta\end{array}\right) \left(\begin{array}{c}x \\ y\end{array}\right)$$ and $$x'_2 = {e'}_2^T e_j x_j = \left(\begin{array}{cc}-\sin\theta &amp; \cos\theta\end{array}\right) \left(\begin{array}{c}x \\ y\end{array}\right),$$ as expected. </p>
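For the orthogonal case, the relations $a_{ij} = {e'}_i^T e_j$ and $x'_i = {e'}_i^T x$ are easy to verify numerically. A sketch with NumPy (the particular $\theta$ and $x$ are arbitrary test values, not from the answer):

```python
import numpy as np

theta = 0.7
A = np.array([[np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])

# transformed basis vectors e'_1, e'_2 as given in the example
e1p = np.array([np.cos(theta), np.sin(theta)])
e2p = np.array([-np.sin(theta), np.cos(theta)])

x = np.array([2.0, -1.0])
xp = A @ x                                   # components of x in the new basis

assert np.allclose(A, np.array([e1p, e2p]))  # rows of A are the e'_i, so a_ij = e'_i . e_j
assert np.allclose(xp, [e1p @ x, e2p @ x])   # x'_i = e'_i . x
print(xp)
```

Since this is a passive rotation, the length of the component vector is unchanged, which makes a convenient extra check.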
2,010,069
<p>I am looking on the solution to this problem presented in the book <em>"Fifty Challenging Problems in Probability with Solutions"</em> by Mosteller (p.18-19).</p> <blockquote> <p>On average, how many times must a die be thrown until one gets a 6?</p> </blockquote> <p>There are many ways to solve this problem as this is simple example of a geometric distribution, but I don't quite get the trick the author did it with the $qm$. <br> <strong>I am looking for explanation/interpretation of this trick (transition (2) (3) )</strong>? <br> <br> The first expression is clear, it is just the expansion of the expected value definition</p> <p><br> (p be the probability of a 6 on a given trial) <br> $$ m = p + 2 pq + 3pq^2 + 4pq^3 + ... \quad \quad \quad (1)$$ <br> Further the "trick" with qm has been used <br> $$qm =\ \ \ \ \ \ \ \ pq + 2pq^2 + 3pq^3 + ... \quad \quad \quad (2)$$ so that $$m - qm = p + pq + pq^2 + ... \quad \quad \quad (3)$$ $$m(1-q) = 1 \quad \quad \quad (4)$$ $$m=\frac{1}{p} \quad \quad \quad (5)$$</p>
David Reed
444,890
<p>After reading your question further, I believe I misunderstood initially what you were asking.</p> <p>In terms of orthogonality, what it tells you is that the row space is the orthogonal complement to the Nul space. Thus every vector can be written uniquely as the sum of a vector in the row space of A and in the Nul space of A. This is a fundamental direct product relationship: $$\mathbb{R}^n = \mathrm{Row}\left(\mathbf{A}\right) \ \oplus \mathrm{Nul}\left(\mathbf{A}\right)$$</p> <p>That is, if you let $\hat{\mathbf y}$ be the projection of $\mathbf{y}$ onto $\mathrm{Nul}\left(\mathbf{A}\right)$, then $\mathbf{y} - \mathbf{\hat{y}} \in \mathrm{Row}(\mathbf{A})$</p> <p>To see this, let $$ \mathbf{A} = \begin{bmatrix} \mathbf{a_1}^T \\ \mathbf{a_2}^T \\ \vdots \\ \mathbf{a_m}^T \end{bmatrix} $$</p> <p>$ \\ $</p> <p>Then</p> <p>$$ \mathbf{Ax} = \mathbf{A} = \begin{bmatrix} \mathbf{a_1}^T \\ \mathbf{a_2}^T \\ \vdots \\ \mathbf{a_m}^T \end{bmatrix} * \mathbf{x} = \begin{bmatrix} \mathbf{a_1}^T \mathbf{x} \\ \mathbf{a_2}^T \mathbf{x}\\ \vdots \\ \mathbf{a_m}^T \mathbf{x} \end{bmatrix} $$</p> <p>Therefore $\mathbf{Ax} = \mathbf{0}$ if and only if $\mathbf{x}$ is orthogonal to each row of $\mathbf{A}$, and hence to the entire row space of A. Ironically, I have a book that refers to this and one other similar statement as the Fundamental Theorem of Linear Algebra. I don't think most would agree but it is an important result.</p>
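The direct-sum decomposition can be demonstrated numerically: $A^{+}A$ (pseudoinverse times $A$) is the orthogonal projector onto $\mathrm{Row}(A)$, a standard property of the Moore–Penrose pseudoinverse. A sketch (the matrix and vector are arbitrary examples of mine):

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [2., 4., 6.]])          # rank 1, so Nul(A) is 2-dimensional

y = np.array([1., 1., 1.])

P_row = np.linalg.pinv(A) @ A         # orthogonal projector onto Row(A)
y_row = P_row @ y                     # component of y in the row space
y_nul = y - y_row                     # component of y in the null space

assert np.allclose(A @ y_nul, 0)      # y_nul really lies in Nul(A)
assert abs(y_row @ y_nul) < 1e-12     # the two pieces are orthogonal
print(y_row + y_nul)                  # recovers y
```

This is exactly the unique splitting $\mathbb{R}^n = \mathrm{Row}(\mathbf{A}) \oplus \mathrm{Nul}(\mathbf{A})$ from the answer, computed for one concrete $y$.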
119,481
<p>I need to prove the following trigonometric identity: $$ \frac{\sin^2(\frac{5\pi}{6} - \alpha )}{\cos^2(\alpha - 4\pi)} - \cot^2(\alpha - 11\pi)\sin^2(-\alpha - \frac{13\pi}{2}) =\sin^2(\alpha)$$</p> <p>I cannot express $\sin(\frac{5\pi}{6}-\alpha)$ as a function of $\alpha$. Could it be a textbook error?</p>
Blue
409
<p>Since all the trig values are squared, it seems as though the exercise is simply playing with shifts by odd or even multiples of $\pi/2$.</p> <p>Loosely,</p> <blockquote> <ul> <li>Shifting by "$\frac{\pi}{2} \cdot \text{odd}$" switches "sin" and "cos" (and possibly affects the sign)</li> <li>Shifting by "$\frac{\pi}{2} \cdot \text{even}$" ($=$ "$\pi \cdot \text{any}$") preserves "sin" and "cos" (and possibly affects sign)</li> <li>Negating the argument preserves "sin" and "cos" (and possibly affects sign)</li> </ul> </blockquote> <p>Since squaring eliminates sign considerations, we have, simply:</p> <blockquote> <p>$$\begin{align} \mathrm{trig}^2\left( \pm \; \theta \pm \frac{\pi}{2} \text{odd} \right) &amp;= \mathrm{cotrig}^2\theta \\ \mathrm{trig}^2\left( \pm \; \theta \pm \frac{\pi}{2} \text{even} \right) &amp;= \mathrm{trig}^2\left( \pm \; \theta \pm \pi \cdot \text{any} \right) = \mathrm{trig}^2\theta \end{align}$$</p> </blockquote> <p>where each "$\pm$" is independent, "any" means (of course) "any <em>integer</em>", and "trig" can in fact be any of the six trig functions.</p> <p>This makes pretty quick work of the simplification process ... $$\begin{align} \frac{\sin^2\left(\frac{5\pi}{6}-\alpha\right)}{\cos^2\left(\alpha-4\pi\right)} - \cot^2\left(\alpha-11\pi\right) \; \sin^2\left(-\alpha-\frac{13\pi}{2}\right) &amp;\stackrel{?}{=} \sin^2\alpha \\[1em] \frac{\sin^2\left(\frac{5\pi}{6}-\alpha\right)}{\cos^2\alpha} - \cot^2\alpha \; \cos^2\alpha &amp;\stackrel{?}{=} \sin^2\alpha \end{align}$$ ... right up to the point at which the process shudders to a halt.</p> <p>Given the nature of all the other terms (and @Adam's comment that sum and difference identities are not allowed), I <em>suspect</em> that "$\frac{5\pi}{6}$" is a typo of "$\frac{5\pi}{2}$", which would get us a little further ...</p> <p>$$\frac{\cos^2\alpha}{\cos^2\alpha} - \cot^2\alpha \; \cos^2\alpha = 1 - \cot^2\alpha \;\cos^2\alpha \stackrel{?}{=} \sin^2\alpha$$</p> <p>... 
but we hit another snag. Could it be that "$\sin^2\left(-\alpha-\frac{13\pi}{2}\right)$" is a typo of "$\cos^2(...)$"? If so, then that factor should've simplified to "$\sin^2\alpha$", and we'd have</p> <p>$$1 - \cot^2\alpha \;\sin^2\alpha = 1 - \cos^2\alpha = \sin^2\alpha$$</p> <p>as desired.</p> <p>(It's also possible that, instead of a sin-cos typo, "$\cot$" is a typo for "$\tan$", but it seems like that would be an easier one for the OP to notice.)</p>
2,028,703
<p>I'm having this example for a simple <a href="https://en.wikipedia.org/wiki/Binary_symmetric_channel" rel="nofollow noreferrer">binary symmetric channel</a> (BSC) to bound the mutual information of $X$ and $Y$ as</p> <p>\begin{align*} I(X;Y) &amp;= H(Y) - H(Y|X)\\ &amp;= H(Y) - \sum p(x) H(Y \mid X = x) \\ &amp;= H(Y) - \sum p(x) H(p) \\ &amp;= H(Y) - H(p) \\ &amp;\leq 1 - H(p) \end{align*}</p> <p>However, as the title states, I don't really understand why I can write</p> <p>\begin{align*} \sum p(x) H(Y \mid X = x) = \sum p(x) H(p) \end{align*}</p> <p>I know that</p> <p>\begin{align*} \mathbb{P}[Y = 0 \mid X = 0 ] &amp;= 1 - p \\ \mathbb{P}[Y = 1 \mid X = 0 ] &amp;= p \\ \mathbb{P}[Y = 1 \mid X = 1 ] &amp;= p \\ \mathbb{P}[Y = 0 \mid X = 1 ] &amp;= 1 - p \end{align*}</p> <p>but let's assume I set $p = \frac{1}{3}$, would that mean that I have</p> <p>\begin{align*} I(X;Y) \leq 1- H(p) = 1- H(\frac{1}{3}) \approx 0.4716 \text{ bit} \end{align*}</p> <p>I ask because if this is the case, why is it not</p> <p>\begin{align*} I(X;Y) \leq 1- H(1-p) = 1- H(\frac{2}{3}) \approx 0.61 \text{ bit} \end{align*}</p> <p>instead? </p> <p>Or, and this would make the most sense to me, it's actually $p = (p_{error}, 1-p_{error})= (\frac{1}{3}, \frac{2}{3})$ and thus we have</p> <p>\begin{align*} I(X;Y) \leq 1- H(p) = 1- H(\frac{1}{3}, \frac{2}{3}) \approx 0.0817 \text{ bit} \end{align*}</p>
disaster
280,592
<p>It seems to me that you are confusing $H(X) = - \sum_i Pr(x_i) \log ( Pr(x_i) )$ (the entropy) and $H(p) = -p \log_2(p) - (1-p) \log_2(1-p)$, which is called the binary entropy function.</p> <p>The difference is whether the input is a random variable or a probability.</p> <p>Note that $H(\frac 1 3) = H(\frac 2 3)$</p>
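To make the distinction concrete, here is the binary entropy function in code, confirming both the symmetry $H(p) = H(1-p)$ and the $\approx 0.0817$ bit figure from the question (a minimal sketch):

```python
from math import log2

def H(p):
    """Binary entropy function, in bits."""
    if p in (0, 1):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

print(H(1/3))                          # ~ 0.9183
print(abs(H(1/3) - H(2/3)) < 1e-12)    # True: H(p) = H(1-p)
print(1 - H(1/3))                      # ~ 0.0817 bit, as in the question
```

So for a BSC with crossover probability $p = 1/3$, the bound is $I(X;Y) \leq 1 - H(1/3) \approx 0.0817$ bit, and it makes no difference whether one writes $H(1/3)$ or $H(2/3)$.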
163,873
<p>With the exception of a few miscellaneous cases, the axioms (and/or schemata) of ZFC can roughly be divided into two kinds:</p> <ol> <li><p>Those that guarantee the existence of more complicated sets, given that simpler sets are already around (e.g. separation, replacement schema). Also, uniqueness of these entities immediately follows, which is very satisfying.</p></li> <li><p>Those that guarantee the existence of larger sets, given that smaller sets are already around (e.g. powerset, union). These also tend to satisfy uniqueness properties; in particular, powersets and unions are indeed unique.</p></li> </ol> <p>Of course, this is a gross oversimplification (e.g. replacement is needed to prove the existence of $\beth_\omega,$ a kind of "large" cardinal, albeit a very small one). However, the point is that we can also apply the above categorization scheme to $\in$-sentences that aren't theorems of ZFC, especially to proposed axioms for set theory. In particular, large cardinal axioms are (by definition) of the latter variety.</p> <blockquote> <p><strong>Question.</strong> Have any axioms or axiom schemata of the former variety (i.e. those guaranteeing the existence of more complicated sets) been proposed or otherwise considered?</p> </blockquote> <p>I'm especially interested in:</p> <ul> <li><p>axioms and/or schemata that legitimize non-mainstream ways of defining and/or constructing things. For example, I'd be interested to hear of a schema asserting that certain definable (proper-class) functions always have greatest and/or least fixed points. Or an axiom asserting that a particular class of self-referential definitions do indeed define unique functions. etc.</p></li> <li><p>axioms or axiom schemata that guarantee the existence of entities whose uniqueness can then be proved (or which assert not only existence but also uniqueness). For example, Martin's axiom does not have this property.</p></li> </ul>
Andreas Blass
6,794
<p>I like to use the following very mild extension of ZFC. Add a predicate "Sat"; add axioms saying that this predicate satisfies the expected inductive clauses to define "satisfaction of $\in$-formulas in the full universe of sets"; and add replacement (or separation and collection) axioms for formulas in which Sat occurs. (By $\in$-formulas, I mean formulas in the usual language of ZFC, whose only non-logical symbol is $\in$; thus these formulas do not involve Sat, and so this theory does not run afoul of Tarski's theorem on undefinability of truth.) </p> <p>This extension of ZFC proves the consistency of ZFC; just show that all the ZFC axioms are true in the universe, and first-order deduction preserves truth. In fact, this extension of ZFC proves the existence of what Montague and Vaught called natural models of ZFC, i.e., models of the form $V_\alpha$ (an initial segment of the cumulative hierarchy) with the standard membership relation. That implies also the existence of countable standard models of ZFC. </p> <p>Where most set-theorists work with countable elementary submodels of $H_\theta$, the collection of sets of hereditary cardinality $&lt;\theta$, for some unspecified but sufficiently large $\theta$, this extension of ZFC allows me to work with countable elementary submodels of the universe --- which is what one usually really wants anyway, $\theta$ being just a circumlocution to avoid using Sat.</p> <p>Basically, the use of this extension of ZFC as my metatheory removes most of the headaches that arise when I want to work with proper classes but feel compelled (by honesty) to say things that make sense in ZFC.</p>
1,476,313
<p>I want to simplify this fraction</p> <p>$$ \frac{\sqrt{6} + \sqrt{10} + \sqrt{15} + 2}{\sqrt{6} - \sqrt{10} + \sqrt{15} - 2} $$</p> <p>I've tried to group up the denominator members like $ (\sqrt{6} + \sqrt{15}) - (\sqrt{10} + 2) $ and then amplify with $ (\sqrt{6} + \sqrt{15}) + (\sqrt{10} + 2) $ </p>
mathlove
78,967
<p>HINT : </p> <p>$$\sqrt 6\pm\sqrt{10}+\sqrt{15}\pm 2=\sqrt 3(\sqrt 5+\sqrt 2)\pm\sqrt{2}(\sqrt 5+\sqrt 2)$$ $$=(\sqrt 5+\sqrt 2)(\sqrt 3\pm\sqrt 2)$$</p>
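A quick numerical check of where this hint leads: the common factor $(\sqrt 5+\sqrt 2)$ cancels, leaving $\dfrac{\sqrt 3+\sqrt 2}{\sqrt 3-\sqrt 2}$, which rationalizes to $(\sqrt 3+\sqrt 2)^2 = 5+2\sqrt 6$:

```python
from math import sqrt

num = sqrt(6) + sqrt(10) + sqrt(15) + 2
den = sqrt(6) - sqrt(10) + sqrt(15) - 2

# after cancelling (sqrt5 + sqrt2), the ratio is (sqrt3 + sqrt2)/(sqrt3 - sqrt2) = 5 + 2 sqrt6
print(abs(num / den - (5 + 2 * sqrt(6))) < 1e-12)   # True
```

Floating point only confirms the algebra here; the exact simplification comes from the factorization in the hint.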
1,346,286
<p>Why is $\int_{0}^{\pi}{1\over 1-\sin x}dx=2\int_{0}^{\pi\over 2}{1\over 1-\sin x}dx$, or to be accurate: why is $\int_{\pi\over 2}^{\pi}{1\over 1-\sin x}dx=\int_{0}^{\pi\over 2}{1\over 1-\sin x}dx$? </p> <p>At the very best, I know that the area $\sin x$ covers between $0$ to $\pi\over 2$ has the same magnitude between $\pi\over 2$ to $\pi$ but I fail to see how it formally leads to the identity aforementioned. Can you help me with understanding this?</p>
Mark Viola
218,419
<p>The integral of interest $\int_0^{\pi}\frac{dx}{1-\sin x}$ does not converge. Rather, it diverges since </p> <p>$$1-\sin x=\frac12 (x-\pi/2)^2+O\left((x-\pi/2)^4\right)$$</p> <p>Even if we interpret the integral in the sense of a Cauchy Principal Value, then we have</p> <p>$$\begin{align} \text{P.V.}\int_0^{\pi}\frac{dx}{1-\sin x}&amp;=\lim_{\epsilon \to 0}\left(\int_0^{\pi/2-\epsilon}\frac{dx}{1-\sin x}+\int_{\pi/2+\epsilon}^{\pi}\frac{dx}{1-\sin x}\right)\\\\ &amp;=\lim_{\epsilon \to 0}\left(\int_0^{\pi/2-\epsilon}\frac{dx}{1-\sin x}+\int_{0}^{\pi/2-\epsilon}\frac{dx}{1-\sin x}\right)\\\\ &amp;=2\lim_{\epsilon \to 0}\int_0^{\pi/2-\epsilon}\frac{dx}{1-\sin x}\\\\ &amp;=\infty \end{align}$$</p> <p>So, although we can write</p> <p>$$\int_{\pi/2+\epsilon}^{\pi}\frac{dx}{1-\sin x}=\int_{0}^{\pi/2-\epsilon}\frac{dx}{1-\sin x}$$</p> <p>and therefore write</p> <p>$$\int_0^{\pi/2-\epsilon}\frac{dx}{1-\sin x}+\int_{\pi/2+\epsilon}^{\pi}\frac{dx}{1-\sin x}=2\int_0^{\pi/2-\epsilon}\frac{dx}{1-\sin x}$$</p> <p>the limit as $\epsilon$ goes to zero does not exist and we may not equate $\int_0^{\pi}\frac{dx}{1-\sin x}$ as asserted.</p>
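The divergence is easy to see numerically: an antiderivative of $1/(1-\sin x)$ is $\tan x + \sec x$ (check by differentiating), so the truncated integral up to $\pi/2-\varepsilon$ can be evaluated in closed form and grows like $2/\varepsilon$. A sketch:

```python
from math import cos, tan, pi

def truncated(eps):
    """Integral of dx/(1 - sin x) over [0, pi/2 - eps], via the antiderivative tan x + sec x."""
    b = pi / 2 - eps
    return (tan(b) + 1 / cos(b)) - (tan(0.0) + 1 / cos(0.0))

for eps in (1e-1, 1e-3, 1e-6):
    print(eps, truncated(eps))   # grows roughly like 2/eps
```

Doubling this (for the symmetric right half) matches the principal-value computation above: the symmetric sum still blows up as $\varepsilon \to 0$.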
2,793,983
<p>For example I find myself wanting to write $x$ is an element of the integers from $1$ to $50$,</p> <p>Is this the quickest way? </p> <p>$x\in \left[ 1,50\right] \cap \mathbb{N} $</p> <p>Also is this standard on here? $\mathbb{N} = \{0, 1, 2,\dotsc \}$, $\mathbb{ℤ}_+ = \{1, 2, \dotsc \}$.</p>
Community
-1
<p>Anyone will understand</p> <p>$$n\in\{1,2,\dots50\}$$ or even</p> <p>$$n\in\{1,\dots50\}$$ without toil.</p> <p>If it is clear from context that $n$ is an integer,</p> <p>$$n\in[1,50]$$ is good enough (and is very compact from the standpoint of LaTeX formatting :) ).</p> <p>Following @EspeciallyLime, $[50]$ is a good option, though you should introduce the notation. This remains compatible with more general intervals like $[11,50]$.</p>
2,567,607
<p>$$\arctan 2x +\arctan 3x = \left(\frac{\pi}{4}\right)$$ $$\arctan \left(\frac{2x+3x}{1-2x*3x}\right)=\frac {\pi}{4}$$ $$\frac {5x}{1-6x^2}=\tan \frac{\pi}{4}=1$$ $$6x^2 + 5x -1 = 0$$ $$(6x-1)(x+1)=0$$ $$x=-1, \frac{1}{6}$$</p> <p>The answer however rejects the solution $x=-1$ saying that it makes the L.H.S of the equation negative. I don't understand this, I don't see how $x=-1$ makes the L.H.S. negative.</p>
Botond
281,471
<p>$\tan(x)$ is negative in the interval $\left(-\frac{\pi}{2},0\right)$, so its inverse on the interval $(-\infty,0)$ will give a value from the interval $\left(-\frac{\pi}{2},0\right)$.</p>
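A numerical check makes the rejection concrete (a quick sketch): for $x=-1$ both arctangents are negative, so the left side equals $\arctan(-2)+\arctan(-3) = -\frac{3\pi}{4}$, not $\frac{\pi}{4}$; the tangent-addition step silently conflated two angles that differ by $\pi$.

```python
from math import atan, pi

def lhs(x):
    return atan(2 * x) + atan(3 * x)

print(abs(lhs(1/6) - pi/4) < 1e-12)         # True: x = 1/6 satisfies the equation
print(abs(lhs(-1) - (-3 * pi / 4)) < 1e-12) # True: x = -1 gives -3*pi/4, not pi/4
```

So only $x = \frac16$ survives; $x=-1$ is a spurious root introduced by taking tangents of both sides.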
1,307,269
<p>In my textbook, before introducing the epsilon delta definition, they gave a working definition of what a limit is. The definition sounded something like this "$\lim \limits_{x \to a}f(x) = L$, if when $x$ gets closer to $a$, $f(x)$ gets closer to $L$" </p> <hr> <p>But is that always the case with limits? What if $f(x) = 4,$ then we have $\lim \limits_{x \to 2}f(x) = 4$, but it is never true that when x gets closer to 2, f(x) gets closer to 4. Maybe instead we should say: "$\lim \limits_{x \to a}f(x) = L$, if when $x$ gets closer to $a$, $ f(x)$ gets closer to or equals $L$".</p> <hr> <p>Please correct me if I'm wrong. I'm pretty new to this stuff. Btw, i understand that the epsilon delta definition has the constant function limit case covered, but I'm more interested in the working definition.</p>
Michael Hardy
11,667
<blockquote> <blockquote> <p>when $x$ gets closer to $a$, $f(x)$ gets closer to $L$</p> </blockquote> </blockquote> <p>That is wrong. Consider two examples:</p> <ul> <li><p>Let $f(x) = 6-(x-4)^2$. Clearly $f(x)$ never gets bigger than $6$, so the limit cannot be $7$, but $f(x)$ gets closer to $7$ (and to all numbers that are bigger than $6$) as $x$ gets closer to $4$, but $\lim\limits_{x\to4}f(x)$ is $6$, not $7$.</p></li> <li><p>Suppose that as $x$ approaches $4$, $g(x)$, depending continuously on $x$, goes up to $10+0.1$, then down to $10-0.1$, then up to $10+0.01$, then down to $10-0.01$, then up to $10+0.001$, then down to $10-0.001$, etc. Then $\lim\limits_{x\to4} g(x)=10$. But $g(x)$ does not keep getting closer to $10$, but gets alternately closer and farther away: As it's going down from $10+0.1$ to $10$ it's getting closer to $10$, and as it continues going downward from $10$ to $10-0.1$, it's getting farther from $10$.</p></li> </ul>
1,307,269
<p>In my textbook, before introducing the epsilon delta definition, they gave a working definition of what a limit is. The definition sounded something like this "$\lim \limits_{x \to a}f(x) = L$, if when $x$ gets closer to $a$, $f(x)$ gets closer to $L$" </p> <hr> <p>But is that always the case with limits? What if $f(x) = 4,$ then we have $\lim \limits_{x \to 2}f(x) = 4$, but it is never true that when x gets closer to 2, f(x) gets closer to 4. Maybe instead we should say: "$\lim \limits_{x \to a}f(x) = L$, if when $x$ gets closer to $a$, $ f(x)$ gets closer to or equals $L$".</p> <hr> <p>Please correct me if I'm wrong. I'm pretty new to this stuff. Btw, i understand that the epsilon delta definition has the constant function limit case covered, but I'm more interested in the working definition.</p>
CiaPan
152,299
<p>The 'working definition' is actually NOT working and you should NOT use it. To make it work, replace it with </p> <blockquote> <p>$f$ gets <strong>arbitrarily</strong> close to $L$ if $x$ <em>sufficiently</em> approaches $a$</p> </blockquote> <p>where 'gets arbitrarily close' does not mean merely that $f$ <em>may</em> get closer to $L$, but rather that it certainly <em>will not stay apart</em> from $L$ by more than any arbitrarily chosen distance.</p> <p>In other words</p> <blockquote> <p>$f$ gets <strong>arbitrarily</strong> close to $L$ and <strong>stays there</strong></p> </blockquote>
3,047,686
<p>Prove that: <span class="math-container">$$(p - 1)! \equiv p - 1 \pmod{p(p - 1)}$$</span></p> <p>In the text it's not mentioned that <span class="math-container">$p$</span> is prime, but I checked and this doesn't hold for non-primes, so I guess <span class="math-container">$p$</span> is prime. I know that <span class="math-container">$(p - 1)! \equiv -1 \equiv p - 1 \pmod p$</span> and <span class="math-container">$(p - 1)! \equiv 0 \equiv (p - 1) \pmod{p - 1}$</span>; the problem is that I don't know how to combine these. Is it true that then we have <span class="math-container">$(p - 1)! \equiv (p - 1)^2 = p^2 - p - p + 1 \equiv - p + 1 = - (p - 1) \pmod{p(p - 1)}$</span>, which is not the desired result? Or do I need to multiply both sides of the congruences? What are the laws for congruences that allow me to do this? (I hope you get the idea of what I'm trying to ask.)</p>
user10354138
592,552
<p>Apply <a href="https://en.wikipedia.org/wiki/Chinese_remainder_theorem" rel="nofollow noreferrer">Chinese remainder theorem</a> to the coprime moduli <span class="math-container">$p$</span> and <span class="math-container">$p-1$</span>.</p> <p>However, if you can't use CRT, you can simply observe <span class="math-container">$(p-1)!-(p-1)$</span> is divisible by <span class="math-container">$p-1$</span> (obvious factor) and by <span class="math-container">$p$</span> (Wilson's), so it is divisible by their least common multiple <span class="math-container">$\operatorname{lcm}(p,p-1)=p(p-1)$</span>.</p>
3,929,089
<p>In the construction of <span class="math-container">$\operatorname{Frac}(R)$</span>, where <span class="math-container">$R$</span> is a domain, we define an equivalence relation (a partition) on <span class="math-container">$R \times R^\times$</span>, where <span class="math-container">$R^\times:= R \setminus \{0\}$</span>; the quotient in turn becomes a field containing <span class="math-container">$R$</span>.</p> <p>Taking <span class="math-container">$R:=\mathbb{Q}[x]$</span>, can we tell that <span class="math-container">$\frac{x^2+1}{x-1} \in \mathbb{Q}(x)$</span>? As <span class="math-container">$x-1$</span> is not the zero polynomial, it is a member of <span class="math-container">$\mathbb{Q}[x]^\times$</span>, hence <span class="math-container">$\frac{x^2+1}{x-1}$</span> should be a member of <span class="math-container">$\mathbb{Q}(x)$</span>. But this is undefined when 1 is plugged in for <span class="math-container">$x$</span>.</p> <p>Where am I going wrong?</p>
Vercassivelaunos
803,179
<p>Your understanding has a gap starting at the ring of polynomials <span class="math-container">$\mathbb Q[X]$</span>, not at the field of fractions. So I'm going to focus on the former, not the latter.</p> <p>To start with, polynomials are <em>not</em> functions. Memorize this sentence and never forget it: Polynomials are not functions.</p> <p>What polynomials <em>really</em> are should be understood through first understanding what their purpose is. A polynomial's purpose is to be a template for expressions that involve some element <span class="math-container">$X$</span> of a ring using only the operations provided by the ring: addition, subtraction and multiplication. We are allowed to multiply <span class="math-container">$X$</span> by an element of the ring or by itself, and we can add or subtract the results of these multiplications. The end result is a polynomial expression in <span class="math-container">$X$</span> with coefficients in the underlying ring. This expression is a template into which we want to plug in any object for which the ring operations addition, subtraction and multiplication are meaningful. For instance, real matrices can be multiplied by themselves or by real numbers, and they can be added. So matrices are a kind of object we might want to plug into a polynomial. But then the result is a matrix, so it wouldn't make the least bit of sense to define the polynomial to be a function mapping numbers to numbers. On the other hand, we might as well plug in numbers, and then the results <em>are</em> numbers. So we also can't define polynomials to be functions mapping matrices to matrices. Rather, polynomials should be something that allows us to plug in a variety of objects before we even know what kind of object we want to plug in. 
What kind of map we get shouldn't be predefined.</p> <p>The takeaway is that polynomials do not serve as maps, but as templates for maps involving all kinds of objects, where we're free to adjust the domain to our liking: we could make it a matrix space, or the ring itself, or something entirely different. The same goes for the fraction field. Its elements are templates for functions whose domain we can adjust dynamically. Simply don't use an expression like <span class="math-container">$\frac{1}{X}$</span> as a template for functions whose domain contains <span class="math-container">$0$</span> and you're good.</p>
422,761
<blockquote> <p>Prove that there is no integer $x$ such that $x≡2 \pmod 6$ and $x≡3 \pmod 9$ are both true.</p> </blockquote> <p>How should I approach this question?<br> I attempted a proof by contradiction, so $x=6p+2$ and $x=9q+3$ where $p,q$ are integers.<br> Then $6p+2=9q+3$. </p>
rurouniwallace
35,878
<p>I'll go ahead and answer what I think you were <em>trying</em> to ask. You asked if the series converges. The series you typed is a finite series, but if it were an infinite series it would diverge because:</p> <p>$$\lim_{n\to\infty}\sum_{k=0}^n{\ln^n(3)}=\lim_{n\to\infty}(n+1)\ln^n(3)=\infty$$</p> <p>Also, if the series were written as follows:</p> <p>$$\sum_{k=0}^\infty\ln^k(3)$$</p> <p>It would still diverge, because it is a geometric series, and a geometric series converges only if the absolute value of the common ratio is less than $1$. $\ln(3)&gt;1$, so the above series diverges.</p>
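For the geometric version, the divergence can also be read off from the closed form of the partial sums, since the ratio $r=\ln(3)\approx 1.0986$ exceeds $1$:

```latex
\sum_{k=0}^n \ln^k(3)=\frac{\ln^{n+1}(3)-1}{\ln(3)-1}
\xrightarrow[n\to\infty]{}\infty
\qquad\text{because } \ln(3)>1 .
```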
2,349,982
<p>According to the <a href="http://www.fftw.org/fftw3_doc/1d-Real_002dodd-DFTs-_0028DSTs_0029.html#g_t1d-Real_002dodd-DFTs-_0028DSTs_0029" rel="nofollow noreferrer">FFTW Website</a>, the Fourier Sine Transform (FST) returns:</p> <p>$$Y_k = 2 \sum_{j=0}^{N-1} X_j \sin [\pi (j+1)(k+1)/(N+1)]$$</p> <p><a href="http://reference.wolfram.com/language/ref/FourierSinTransform.html" rel="nofollow noreferrer">WolframAlpha</a> defines the Fourier Sine Transform as follows: $2\sqrt\frac{\lvert b \rvert}{(2\pi)^{1-a}} \int_0^\infty f(t)\sin(b\omega t)\mathrm{d}t$</p> <p>Taking $a=1$ and $b=\pi$ this becomes: $F^{W}_{sin} = 2\sqrt \pi \int_0^\infty f(t)\sin(\pi\omega t)\mathrm{d}t$. </p> <p>Comparing the two definitions one can write:</p> <p>$$Y_k = 2 \sum_{j=0}^{N-1} X_j \sin [\pi (j+1)(k+1)/(N+1)] \approx \frac{1}{\sqrt\pi} F^{W}_{s}$$</p> <p>Setting $f(t) = t\mathrm{e}^{-t^2}$ and performing FourierSinTransform, Wolframalpha returns:</p> <p>$$FST\{f(t)\}= \frac{1}{2}\pi^2\omega\mathrm{e}^{-(1/4)\pi^2\omega^2}$$</p> <p>I implemented this in my code and I was puzzled by the results: the analytical and numerical solutions look similar, but I would have expected higher precision. What is the reason for this? Am I making an error in reasoning?</p> <p>Any help is appreciated.</p> <pre><code>// RosenbluthFourier.cpp : Defines the entry point for the console application.
#include "stdafx.h"
#include &lt;fftw3.h&gt;
#include &lt;iostream&gt;
#include &lt;cmath&gt;
using namespace std;

int main()
{
    double const Pi = 3.14159265359;   // was "int const Pi", which truncates pi to 3
    int N;
    double sup;
    cout &lt;&lt; "enter N of points ";
    cin &gt;&gt; N;
    cout &lt;&lt; "enter sup ";
    cin &gt;&gt; sup;
    cout &lt;&lt; "Interval runs from 0 to " &lt;&lt; sup &lt;&lt; " and will be divided into " &lt;&lt; N &lt;&lt; " intervals." &lt;&lt; endl;
    double T, Df;
    T = sup / (N - 1);
    Df = 1 / sup;
    cout &lt;&lt; "Sampling interval T = " &lt;&lt; T &lt;&lt; endl;
    cout &lt;&lt; "Frequency spacing df = " &lt;&lt; Df &lt;&lt; endl &lt;&lt; endl;
    double *X = new double[N];
    double *Y = new double[N];
    for (int i = 0; i &lt; N; i++) {      // was i &lt;= N: out-of-bounds write
        X[i] = T * i;
        Y[i] = X[i] * exp(-pow(X[i], 2));
        cout &lt;&lt; "X[" &lt;&lt; i &lt;&lt; "] = " &lt;&lt; X[i] &lt;&lt; "  Y[" &lt;&lt; i &lt;&lt; "] = " &lt;&lt; Y[i] &lt;&lt; endl;
    }
    cout &lt;&lt; endl &lt;&lt; "Analytically transformed function" &lt;&lt; endl &lt;&lt; endl;
    double *f = new double[N];
    double *Yt = new double[N];
    for (int k = 0; k &lt; N; k++) {      // was k &lt;= N: out-of-bounds write
        // calc pi*w
        f[k] = Pi * k * Df;
        Yt[k] = (1. / 2.) * Pi * f[k] * exp(-pow(f[k] / 2., 2));
        cout &lt;&lt; "f[" &lt;&lt; k &lt;&lt; "] = " &lt;&lt; f[k] &lt;&lt; "  Yt[" &lt;&lt; k &lt;&lt; "] = " &lt;&lt; Yt[k] &lt;&lt; endl;
    }
    cout &lt;&lt; endl &lt;&lt; "FFTW-transformed function" &lt;&lt; endl &lt;&lt; endl;
    fftw_plan p;
    p = fftw_plan_r2r_1d(N, Y, Yt, FFTW_RODFT00, FFTW_ESTIMATE);
    fftw_execute(p);
    for (int k = 0; k &lt; N; k++) {      // was k &lt;= N: out-of-bounds read/write
        Yt[k] = Yt[k] * T * sqrt(Pi);
        cout &lt;&lt; "f[" &lt;&lt; k &lt;&lt; "] = " &lt;&lt; f[k] &lt;&lt; "  Yt[" &lt;&lt; k &lt;&lt; "] = " &lt;&lt; Yt[k] &lt;&lt; endl;
    }
    fftw_destroy_plan(p);
    delete[] X; delete[] Y; delete[] f; delete[] Yt;
    return 0;
}
</code></pre>
Michael Hardy
11,667
<p>$\newcommand{\E}{\operatorname{E}}\newcommand{\v}{\operatorname{var}}\newcommand{\c}{\operatorname{cov}}$First, note that $h\sum_{i=1}^n x_i^2+k\left(\sum_{i=1}^nx_i\right)^2$ is merely a number, so it cannot be biased or unbiased, but $h\sum_{i=1}^n X_i^2 + k \left( \sum_{i=1}^n X_i\right)^2$ is a random variable, and it is an <em>observable</em> random variable, otherwise called a statistic, so it can be biased or unbiased for a particular quantity of interest.</p> <p>\begin{align} &amp; \E\left( h\sum_{i=1}^n X_i^2 + k\left( \sum_{i=1}^n X_i \right)^2 \right) \tag 1 \\[10pt] = {} &amp; h\sum_{i=1}^n \E(X_i^2) + k \E\left(\left( \sum_{i=1}^n X_i \right)^2 \right) \text{ by linearity of expectation} \\[10pt] = {} &amp; h \sum_{i=1}^n \left( \v(X_i) + \left( \E X_i \right)^2 \right) + k \left( \v\left( \sum_{i=1}^n X_i \right) + \left( \E\left( \sum_{i=1}^n X_i \right) \right)^2 \right) \\[10pt] = {} &amp; hn(\sigma^2+\mu^2) + k\left( \v\left( \sum_{i=1}^n X_i \right) + (n\mu)^2 \right) \\[10pt] = {} &amp; hn\sigma^2 + n(h + kn)\mu^2 + k\v\left( \sum_{i=1}^n X_i \right). \end{align}</p> <p>Then we have \begin{align} &amp; \v\left( \sum_{i=1}^n X_i \right) \\[10pt] = {} &amp; \left( \sum_{i=1}^n \v(X_i) \right) + \left( \sum_{i=1}^n \sum_{\substack{1\le j \le n \\ j\ne i}} \c(X_i,X_j) \right) \\[10pt] = {} &amp; n\sigma^2 + n(n-1)p. \end{align}</p> <p>So the expected value on line $(1)$ is $$ hn\sigma^2 + n(h + kn)\mu^2 + kn\sigma^2 + kn(n-1)p. \tag 2 $$ The problem now is to choose $h$ and $k$ so as to make line $(2)$ remain equal to $\sigma^2$ regardless of the values of $\sigma^2,$ $\mu,$ and $p.$</p> <p>For this to work, line $(2)$ must be equal to $0$ when $\sigma^2=0$ and to $1$ when $\sigma^2=1.$ Thus we have \begin{align} 0 &amp; = n(h + kn)\mu^2 + kn(n-1)p, \\ 1 &amp; = hn + n(h + kn)\mu^2 + kn + kn(n-1)p. 
\end{align} Both of these are linear in $h$ and linear in $k,$ so they can be readily solved for $h$ and $k.$</p>
1,740,458
<p>I saw somewhere on Math Stack that there was a way of finding integrals in the form $$\int \frac{dx}{a+b \cos x}$$ <strong>without</strong> using Weierstrass substitution, which is the usual technique. </p> <p>When $a,b=1$ we can just multiply the numerator and denominator by $1-\cos x$ and that solves the problem nicely. </p> <p>But I remember that the technique I saw was a nice way of evaluating these even when $a,b\neq 1$.</p>
Justin Benfield
297,916
<p>If $a=b$ then you can modify the technique for $a=b=1$ slightly to obtain:</p> <p>$\int \frac{dx}{b+b\cos x}=\int\frac{b-b\cos x}{(b+b\cos x)(b-b\cos x)}dx$</p> <p>$=\int\frac{b-b\cos x}{b^2-b^2\cos^2 x}dx=\int\frac{b-b\cos x}{b^2(1-\cos^2 x)}dx=\frac{1}{b}\int\frac{1-\cos x}{\sin^2 x}dx$</p> <p>Splitting the numerator, and further simplifying:</p> <p>$\frac{1}{b}\int\frac{1}{\sin^2 x}dx-\frac{1}{b}\int\frac{\cos x}{\sin^2 x}dx=\frac{1}{b}\int\csc^2 x\:dx-\frac{1}{b}\int\frac{\cos x}{\sin^2 x}dx$</p> <p>The integral on the left is $-\cot x$ and the one on the right is an easy $u$-sub with $u=\sin x$.</p> <p>So what happens with $a\neq b$?</p> <p>$\int \frac{dx}{a+b\cos x}=\int\frac{a-b\cos x}{(a+b\cos x)(a-b\cos x)}dx=\int\frac{a-b\cos x}{a^2-b^2\cos^2 x}dx$</p> <p>Now, add and subtract $b^2$ to the denominator and group the $+b^2$ with $-b^2\cos^2x$.</p> <p>$=\int\frac{a-b\cos x}{a^2-b^2+b^2-b^2\cos^2 x}dx=\int\frac{a-b\cos x}{(a^2-b^2)+b^2(1-\cos^2 x)}dx$</p> <p>Split the numerator again, and use pythagorean identity.</p> <p>$\int\frac{a-b\cos x}{(a^2-b^2)+b^2(\sin^2 x)}dx$</p> <p>and here I am stuck.</p>
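A possible way to finish (my own continuation, not from the original thread, assuming $a>b>0$): substitute $u=\sin x$ in the cosine piece and $t=\tan x$ in the remaining piece, using $(a^2-b^2)(1+t^2)+b^2t^2=(a^2-b^2)+a^2t^2$. On intervals where $\tan x$ is defined this gives

```latex
\int\frac{a-b\cos x}{(a^2-b^2)+b^2\sin^2 x}\,dx
=\frac{1}{\sqrt{a^2-b^2}}\left[
\arctan\frac{a\tan x}{\sqrt{a^2-b^2}}
-\arctan\frac{b\sin x}{\sqrt{a^2-b^2}}
\right]+C,
```

which can be checked by differentiation; for instance, at $x=0$ with $a=2$, $b=1$ the derivative is $\tfrac23-\tfrac13=\tfrac13=\tfrac{1}{a+b}$, as it should be.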
1,740,458
<p>I saw somewhere on Math Stack that there was a way of finding integrals in the form $$\int \frac{dx}{a+b \cos x}$$ <strong>without</strong> using Weierstrass substitution, which is the usual technique. </p> <p>When $a,b=1$ we can just multiply the numerator and denominator by $1-\cos x$ and that solves the problem nicely. </p> <p>But I remember that the technique I saw was a nice way of evaluating these even when $a,b\neq 1$.</p>
Adhvaitha
228,265
<p>If the integral is a definite integral (typically from $0$ to $\pi/2$ or some other variants of this), then we can <a href="https://www.dropbox.com/s/imfywebvbabebj0/1_over_a_b_sin.pdf?dl=0" rel="nofollow">follow the technique here</a> to obtain the integral. The key ingredient is to write $\dfrac1{a+b\cos(x)}$ as a geometric series in $\cos(x)$ and evaluate the integral of the sum by swapping the integral and the summation.</p>
19,325
<p>I'm looking for a simple way to define mathematics to primary/elementary school teachers and explain some of the confusion children have.</p> <p>I'm hoping some Algebraist could help me properly state the following:</p> <blockquote> <p>A number in and of itself has no true meaning. True in the sense that it relates to an existing object within our world. The question we need to ask is how do we teach children meaning if that meaning is not at first grounded within something concrete.</p> </blockquote> <blockquote> <p>Numbers in and of themselves represent abstract notions and in the pure study of mathematics we study mostly patterns: the various patterns that emerge from these abstract notions and the various means through which some relation is developed or expressed between them. Meaning that between the value 1 and 2 for instance there is no relation except when explicitly defined for example as some additive operation, in general an addition of multiples of the unit element.</p> </blockquote>
Rusty Core
7,930
<p>Forget New Math, it does not work with young kids, and it is not needed for learning arithmetic in elementary school.</p> <p>Instead, first teach the concepts of &quot;same as&quot;, &quot;more than&quot;, &quot;less than&quot; by lining up real objects like apples or backpacks or horses one against another, making pairs. Then you ask, how do we figure out &quot;more&quot; or &quot;less&quot; or &quot;same&quot; without bringing my horses and your horses to the river? We can abstract horses into counters and line up counters, mine against yours, and compare them. Then you ask, can we abstract other things like cows or women or children with counters? Do they have to be the same counters? Can we use different ones? Can we use sticks instead of stones? Can we compare without lining up counters? Well, we write counters on paper instead of actually carrying stones or sticks. A picture of counters is an abstraction of physical counters. When we get too many things, it becomes unwieldy; can we do better? Well, we can &quot;count&quot; the counters - a whole new idea and a process - and come up with - ta-da! - numbers. Comparing 3 and 5 is the same as comparing ||| and |||||, just a different representation. How is it better? Without introducing place value, not much. So you explain how place value works, and how with a limited number of digits we can now represent any countable number, another level of abstraction.</p> <p>But please, don't tell kids that numbers don't exist, that they represent the abstract notion blah-blah-blah. They tried it sixty years ago, and it did not work.</p> <p>Let me quote <em>Why Johnny Can't Add</em> by Morris Kline:</p> <blockquote> <p>&quot;Is 7 a number,&quot; asks a teacher. The students, taken aback by the simplicity of the question, hardly deem it necessary to answer; but the sheer habit of obedience causes them to reply affirmatively. The teacher is aghast. 
&quot;If I asked you who you are, what would you say?&quot;</p> <p>The students are now wary of replying, but one more courageous youngster does do so: &quot;I am Robert Smith.&quot;</p> <p>The teacher looks incredulous and says chidingly, &quot;You mean that you are the name Robert Smith? Of course not. You are a person and your name is Robert Smith. Now let us get back to my original question: Is 7 a number? Of course not! It is the name of a number. 5 + 2, 6 + 1, and 8 - 1 are names for the same number. The symbol 7 is a numeral for the number.&quot;</p> <p>The teacher sees that the students do not appreciate the distinction and so she tries another tack. &quot;Is the number 3 half of the number 8?&quot; she asks. Then she answers her own question: &quot;Of course not! But the numeral 3 is half of the numeral 8, the right half.&quot;</p> <p>The students are now bursting to ask, &quot;What then is a number?&quot; However, they are so discouraged by the wrong answers they have given that they no longer have the heart to voice the question. This is extremely fortunate for the teacher, because to explain what a number really is would be beyond her capacity and certainly beyond the capacity of the students to understand it. And so thereafter the students are careful to say that 7 is a numeral, not a number. Just what a number is they never find out.</p> </blockquote>
19,325
<p>I'm looking for a simple way to define mathematics to primary/elementary school teachers and explain some of the confusion children have.</p> <p>I'm hoping some Algebraist could help me properly state the following:</p> <blockquote> <p>A number in and of itself has no true meaning. True in the sense that it relates to an existing object within our world. The question we need to ask is how do we teach children meaning if that meaning is not at first grounded within something concrete.</p> </blockquote> <blockquote> <p>Numbers in and of themselves represent abstract notions and in the pure study of mathematics we study mostly patterns: the various patterns that emerge from these abstract notions and the various means through which some relation is developed or expressed between them. Meaning that between the value 1 and 2 for instance there is no relation except when explicitly defined for example as some additive operation, in general an addition of multiples of the unit element.</p> </blockquote>
jfkoehler
7,003
<p>I would suggest taking it easy to start, but maybe something like base numeration is a way in here. There is no reason we have to use base 10, and students' use of different bases can be very important to understanding operations that require regroupings, and that weird alignment in the typical presentation of multiplication algorithms.</p> <p>One example to check out is Roger Howe's essay <a href="https://www.maa.org/sites/default/files/pdf/pmet/resources/PVHoweEpp-Nov2008.pdf" rel="nofollow noreferrer">here</a>. I found activities where teachers count, add, subtract, and multiply using base-4 blocks to be very effective in my elementary mathematics teaching courses.</p>
4,257,962
<p>By definition, a real number is algebraic if it is a root of a non-zero polynomial equation with rational coefficients. What does non-zero polynomial equation mean?</p> <p>Well, an equation f(x) = x - 5 becomes zero when x = 5, so this is a zero polynomial equation. Is the definition saying that the equation should not equal zero in any case?</p> <p>Can someone clarify this?</p>
Rounak Sarkar
831,748
<p>A zero polynomial is a polynomial which gives <span class="math-container">$0$</span> for all values of <span class="math-container">$x$</span>.</p> <p>Basically <span class="math-container">$P(x)=0$</span> means that no matter what <span class="math-container">$x$</span> you put in there, the result will always be zero. For your example, <span class="math-container">$P(x)=x-5$</span> is not a zero polynomial because there is only a finite number of <span class="math-container">$x$</span>'s for which the polynomial will be <span class="math-container">$0$</span>. In this case there is only one such <span class="math-container">$x$</span>, and that is <span class="math-container">$5$</span>.</p> <p>And a real number is algebraic if there exists a non-<span class="math-container">$\color{green}{\textrm{constant}}$</span> polynomial with rational coefficients, such that the real number is one of the roots of the polynomial.</p>
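To see why "zero for all values of $x$" is so restrictive: over $\mathbb{Q}$ (or any field), a nonzero polynomial has only finitely many roots,

```latex
P\neq 0,\ \deg P=d \;\Longrightarrow\; \#\{x : P(x)=0\}\le d ,
```

so a polynomial vanishing at infinitely many points (in particular, at every $x$) must be the zero polynomial; $x-5$, having degree $1$, has exactly one root.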
3,970,959
<blockquote> <p><span class="math-container">$S = \frac{1}{1001} + \frac{1}{1002}+ \frac{1}{1003}+ \dots+\frac{1}{3001}$</span>.</p> </blockquote> <blockquote> <p>Prove that <span class="math-container">$\dfrac{29}{27}&lt;S&lt;\dfrac{7}{6}$</span>.<br></p> </blockquote> <p>My Attempt:<br> <span class="math-container">$S&lt;\dfrac{500}{1000} + \dfrac{500}{1500}+ \dfrac{500}{2000}+ \dfrac{500}{2500}+\dfrac{1}{3000} =\dfrac{3851}{3000}$</span></p> <p>(Taking 250 terms together involves many fractions and is difficult to calculate by hand.)</p> <p>Using AM-HM inequality gave me <span class="math-container">$S &gt; 1$</span>, but the bounds are weak.</p> <p><a href="https://math.stackexchange.com/questions/688432">Prove that $1&lt;\frac{1}{1001}+\frac{1}{1002}+\frac{1}{1003}+\dots+\frac{1}{3001}&lt;\frac43$</a> <br><br> <a href="https://math.stackexchange.com/questions/605111">Inequality with sum of inverses of consecutive numbers</a> <br><br> The answers to these questions are nice, but the bounds are weak.</p> <p>Any help without calculus and without calculations involving calculators would be appreciated.</p> <p>(I encountered this question when I was preparing for a contest which neither allows calculators nor calculus(Only high-school mathematics.))</p>
Claude Leibovici
82,404
<p>We can find an <em>approximation</em> of the result.</p> <p>Consider <span class="math-container">$$\sum_{i=1}^{2001}\frac1{1000+i}=\sum_{i=1}^{3001}\frac1{i}-\sum_{i=1}^{1000}\frac1{i}$$</span> and we shall use the fact that <span class="math-container">$$\sum_{i=1}^{n}\frac1{i}=H_n$$</span></p> <p>Now, the asymptotics of harmonic numbers <span class="math-container">$$H_n=\gamma +\log \left({n}\right)+\frac{1}{2 n}-\frac{1}{12 n^2}+O\left(\frac{1}{n^4}\right)$$</span> Using it twice and you will have sharper bounds.</p>
3,970,959
<blockquote> <p><span class="math-container">$S = \frac{1}{1001} + \frac{1}{1002}+ \frac{1}{1003}+ \dots+\frac{1}{3001}$</span>.</p> </blockquote> <blockquote> <p>Prove that <span class="math-container">$\dfrac{29}{27}&lt;S&lt;\dfrac{7}{6}$</span>.<br></p> </blockquote> <p>My Attempt:<br> <span class="math-container">$S&lt;\dfrac{500}{1000} + \dfrac{500}{1500}+ \dfrac{500}{2000}+ \dfrac{500}{2500}+\dfrac{1}{3000} =\dfrac{3851}{3000}$</span></p> <p>(Taking 250 terms together involves many fractions and is difficult to calculate by hand.)</p> <p>Using AM-HM inequality gave me <span class="math-container">$S &gt; 1$</span>, but the bounds are weak.</p> <p><a href="https://math.stackexchange.com/questions/688432">Prove that $1&lt;\frac{1}{1001}+\frac{1}{1002}+\frac{1}{1003}+\dots+\frac{1}{3001}&lt;\frac43$</a> <br><br> <a href="https://math.stackexchange.com/questions/605111">Inequality with sum of inverses of consecutive numbers</a> <br><br> The answers to these questions are nice, but the bounds are weak.</p> <p>Any help without calculus and without calculations involving calculators would be appreciated.</p> <p>(I encountered this question when I was preparing for a contest which neither allows calculators nor calculus(Only high-school mathematics.))</p>
Erik Satie
698,573
<p>Well, not an answer but an idea to inspire someone:</p> <p>I remember Gauss's solution of the first case of Faulhaber's formula:</p> <p>We have :</p> <p>$$\frac{1}{1001}+\frac{1}{3001}&gt;\frac{1}{1002}+\frac{1}{3000}&gt;\cdots&gt;\frac{1}{2002}+\frac{1}{2000}$$</p> <p>Which gives :</p> <p>$$S&lt;\frac{1000}{1001}+\frac{1000}{3001}+\frac{1}{2001}$$</p> <p>Now if we divide (3001-1001) by four we get :</p> <p>$$\frac{500}{2500}+\frac{500}{2502}+\frac{1}{2501}+\frac{499}{1499}+\frac{499}{1501}+\frac{1}{1500}&lt;S&lt;\frac{500}{1001}+\frac{500}{2000}+\frac{1}{1500}+\frac{499}{2001}+\frac{499}{3001}+\frac{1}{2500}$$</p>
56,847
<p>What are the angles formed at the center of a tetrahedron if you draw lines to the vertices?</p> <p>I'm trying to make these:</p> <p><img src="https://i.stack.imgur.com/FRUi8.jpg" alt="caltrop"> </p> <p>I need to know what angles to bend the metal.</p>
J. M. ain't a mathematician
498
<p>The required tetrahedral angle is $\arccos\left(-\frac13\right)\approx109.5^\circ$. You can use the law of cosines to show this... or more transparently, you can exploit the fact that a tetrahedron is easily embedded inside a cube:</p> <p><img src="https://i.stack.imgur.com/stZDK.gif" alt="tetrahedron in a cube"></p> <hr> <p>I suppose now's as good a time as any to post the synthetic proof.</p> <p>One can use the Pythagorean theorem to show that a square with unit edge length has a diagonal of length $\sqrt 2$. The Pythagorean theorem can be used again to show that a right triangle with leg lengths $1$ and $\sqrt 2$ will have a hypotenuse of length $\sqrt 3$ (corresponding to the triangle formed by an edge, a face diagonal, and a cube diagonal). We know that the diagonals of a rectangle bisect each other; this can be used to show that the diagonals of a cube bisect each other. From this, we find that the side lengths of the (isosceles!) triangle formed by two half-diagonals of the cube (corresponding to two of the arms of your caltrops) and a face diagonal are $\frac{\sqrt 3}{2}$, $\frac{\sqrt 3}{2}$, and $\sqrt{2}$. From the law of cosines, we have</p> <p>$$2=\frac34+\frac34-2\frac34\cos\theta$$</p> <p>where $\theta$ is the obtuse angle whose measure we are seeking. Algebraic manipulation yields $\cos\,\theta=-\frac13$.</p>
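As a cross-check (my own computation, using the same cube embedding with the cube's center at the origin): take two tetrahedron vertices at $v_1=(1,1,1)$ and $v_2=(1,-1,-1)$; then

```latex
\cos\theta=\frac{v_1\cdot v_2}{\lVert v_1\rVert\,\lVert v_2\rVert}
=\frac{1-1-1}{\sqrt3\cdot\sqrt3}=-\frac13,
\qquad
\theta=\arccos\left(-\tfrac13\right)\approx 109.47^\circ .
```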
56,847
<p>What are the angles formed at the center of a tetrahedron if you draw lines to the vertices?</p> <p>I'm trying to make these:</p> <p><img src="https://i.stack.imgur.com/FRUi8.jpg" alt="caltrop"> </p> <p>I need to know what angles to bend the metal.</p>
Narasimham
95,860
<p>If the tetrahedron is regular, we can use static equilibrium of four equal isotropic forces with angle $ \theta$ between any two of them. Projecting onto the direction of any one force,</p> <p>$$ F \cos \theta + F \cos \theta + F \cos \theta + F = 0$$</p> <p>$$ \cos \theta = -\frac13 $$</p> <p>To generalize, for the forces in all $i$ directions projected onto a particular $Z$ direction, the vector dot product sum can be used:</p> <p>$$ \Sigma F_i \cdot Z =0 $$</p>
56,847
<p>What are the angles formed at the center of a tetrahedron if you draw lines to the vertices?</p> <p>I'm trying to make these:</p> <p><img src="https://i.stack.imgur.com/FRUi8.jpg" alt="caltrop"> </p> <p>I need to know what angles to bend the metal.</p>
tclayton2k
428,796
<p>Using the illustration of the tetrahedron embedded into the cube, you can lay out some dimensions. Assign a length of 2 to the sides of the cube, and then, according to an old guy named Pythagoras, the diagonal of each face will be 2*√2. Since the center of the tetrahedron is also the center of the cube, you can draw an isosceles triangle using the face diagonal and the center that will be 2*√2 at the base and 1 unit high. Split that in half to make two right triangles, each with a base of √2 and a height of 1. Now comes the real trig. Tan = Opposite / Adjacent, or √2/1, or just √2. Grab your calculator and you'll find that the angle you're looking for at the top of the right triangle is 54.735°. Put those two triangles back together, and you get an angle of 109.47°. For fabrication, I think you'd be fine with an angle of 110°.</p>
56,847
<p>What are the angles formed at the center of a tetrahedron if you draw lines to the vertices?</p> <p>I'm trying to make these:</p> <p><img src="https://i.stack.imgur.com/FRUi8.jpg" alt="caltrop"> </p> <p>I need to know what angles to bend the metal.</p>
Roy
894,041
<p>Another possible solution:</p> <p>Consider the equilateral triangle, edge length x, formed by any 3 tips of a regular tetrapod (a face on its corresponding tetrahedron).</p> <p>The 3 medians of this equilateral triangle section it into 6 congruent right triangles with hypotenuse length y, and angles of 60° between lines around the centroid.</p> <p>sin60° = (1/2x) / y</p> <p>therefore</p> <p>y = x / 2sin60°</p> <p>Now consider a line running orthogonal to the previously identified triangle, from its centroid to the tip of the opposing tetrapod leg. This line is coincident with that opposing tetrapod leg and forms a right triangle with the previously identified y, having a hypotenuse equal in length to x.</p> <p>Back to our tetrapod, any two legs form an isosceles triangle with base length x. Therefore, the angle between any two legs, A, can be expressed:</p> <p>A = 180° - 2θ</p> <p>where</p> <p>θ = arcsin (y / x)</p> <p>θ = arcsin ([x / 2sin60°] / x)</p> <p>θ = arcsin (1 / 2sin60°)</p> <p>therefore</p> <p>A = 180° - 2[arcsin (1 / 2sin60°)]</p> <p>A ≈ 109.47°</p> <p>A more spatial approach, and certainly not the most direct, but it got me out of a pinch during a grade 12 chemistry test years ago. We were supposed to have the angles of molecular structures memorised and of course I blanked mid-test, so I found myself deriving something to this effect in the margins.</p> <p>This was on mobile so please forgive the formatting and lack of diagrams which surely would have made this explanation easier to follow.</p>
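Plugging in $\sin 60^\circ=\frac{\sqrt3}{2}$ gives $\frac{1}{2\sin 60^\circ}=\frac{1}{\sqrt3}$, so the final formula evaluates to

```latex
A = 180^\circ - 2\arcsin\frac{1}{\sqrt3}
\approx 180^\circ-2(35.26^\circ)\approx 109.47^\circ,
```

matching the value $\arccos\left(-\tfrac13\right)$ obtained by other methods.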
3,001,335
<p>I know little formal math terminology and don't understand much of anything about complex analysis. Also, if this isn't a good starting point for complex integration feel free to say (I'm learning about it partly for Cauchy's residue theorem). </p> <p>My first and intuitive idea of residue has to do with remainder, subtraction, division, etc., But more generally something that's leftover, extra, or unused by an operation or something. Do these ideas tie together? </p>
Masacroso
173,262
<p>Yes, it is. The residue of the Laurent series of a function in an annulus is the coefficient <span class="math-container">$c_{-1}$</span> of this series, that is what is left after integration of the function in a loop on the annulus (that is, after the integration of the Laurent series that represent the function in this annulus).</p> <p>Note that</p> <p><span class="math-container">$$\oint z^{-k}\, dz=\begin{cases}0,&amp; k\in\Bbb Z\setminus\{1\}\\2\pi i,&amp; k=1\end{cases}$$</span></p> <p>Hence (assuming a Laurent series around a pole in the origin for simplicity) we have that</p> <p><span class="math-container">$$\oint f(z)\, dz=\oint\sum_{k\in\Bbb Z}c_k z^k\, dz=\sum_{k\in\Bbb Z}c_k \oint z^k\, dz=c_{-1}2\pi i$$</span></p>
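For integer $k$, the displayed values follow from parametrizing the loop as the unit circle $z=e^{i\theta}$ (any loop around the origin gives the same result):

```latex
\oint z^{-k}\,dz
=\int_0^{2\pi} e^{-ik\theta}\, i e^{i\theta}\,d\theta
= i\int_0^{2\pi} e^{i(1-k)\theta}\,d\theta
=\begin{cases}0, & k\in\Bbb Z\setminus\{1\}\\ 2\pi i, & k=1\end{cases}
```

since the exponential integrates to zero over a full period whenever $1-k\ne 0$.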
234,945
<p>Let $a \in \mathbb R$, what values of $t$ solve the equation $at + \sin(t) = 0$?</p>
Berci
41,488
<p>First, $t=0$ is a solution. Otherwise, it can be written as $$-a=\frac{\sin t}t.$$ One can find the bounds of $\frac{\sin t}t$ so that if $-a$ is out of that range, there is no solution. But, in general, it seems only numerical solutions could be found, see also <a href="http://www.wolframalpha.com/input/?i=%28sin%20t%29/t" rel="nofollow">its graph</a>. </p>
2,734,338
<p>I would like to show that $$\forall n\in\mathbb{N}^*, \quad \sqrt{\frac{n}{n+1}}\notin \mathbb{Q}$$</p> <p>I'm interested in more ways of proving this.</p> <p>My method:</p> <p>Suppose that $\sqrt{\frac{n}{n+1}}\in \mathbb{Q}$; then there exists $(p,q)\in\mathbb{Z}\times \mathbb{N}^*$ such that $\sqrt{\frac{n}{n+1}}=\frac{p}{q}$, thus </p> <p>$$\dfrac{n}{n+1}=\dfrac{p^2}{q^2} \implies nq^2=(n+1)p^2 \implies n(q^2-p^2)=p^2$$ since $p\neq q\implies p^2\neq q^2$, then</p> <p>$$n=\dfrac{p^2}{(q^2-p^2)}$$</p> <p>since $n\in \mathbb{N}^*$ then $n\in \mathbb{Q}$</p> <ul> <li>I'm stuck here and I would like to see different ways to prove $\sqrt{\frac{n}{n+1}}\notin \mathbb{Q}$</li> </ul>
fleablood
280,126
<p>Finishing your proof</p> <p>$n = \frac {p^2}{q^2 - p^2} = \frac {p^2}{(q-p)(q+p)}$</p> <p>Every prime factor of $q-p$ and of $q+p$ must be a prime factor of $p^2$, hence of $p$, and so a common factor of both $p$ and $q$. But $p$ and $q$ are presumed to be relatively prime. So $q-p$ and $q+p$ have no prime factors at all, which forces $q^2 - p^2 = 1$, and that... is not possible.</p> <p>We can assume $q &gt; p$ (else $\frac pq \ge 1 &gt;\sqrt{\frac n{n+1}}$), so let $q = p + k$ with $k \ge 1$; then $q^2 = p^2 + 2pk + k^2 &gt; p^2 + 1$.</p> <p>....</p> <p>Basically it is well known that $\sqrt n$ is rational for integer $n$ only if $n$ is a perfect square. It can be verified that if $\gcd(a,b)= 1$ and $\sqrt{\frac ab} = \frac pq$ with $\gcd(p,q) = 1$, then $\frac ab = \frac {p^2}{q^2}$; both $\frac ab$ and $\frac {p^2}{q^2}$ are in lowest terms, so $a = p^2$ and $b = q^2$. As $\gcd(n,n+1) = 1$, for $\sqrt{\frac n{n+1}}$ to be rational both $n$ and $n+1$ would have to be perfect squares.</p> <p>But there are <em>many</em> ways to show that's impossible; mainly, $(n+k)^2 - n^2 = 2nk + k^2$, which for $n, k \ge 1$ is greater than $1$. Or: if $m &gt; n \ge 1$ then $m^2 - n^2 = (m-n)(m+n) \ge m+n&gt; 1$.</p>
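The key fact used at the end — consecutive positive integers are never both perfect squares — is easy to confirm by brute force (a sketch):

```python
import math

def is_square(k):
    r = math.isqrt(k)
    return r * r == k

# n and n+1 are never both perfect squares for n >= 1
both_square = [n for n in range(1, 100000) if is_square(n) and is_square(n + 1)]
```

The list stays empty, consistent with the gap argument (m+1)² - m² = 2m + 1 > 1.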
959,219
<p>let $a,b,c&gt;0$, and such $$a^2+b^2+c^2&lt;2ab+2bc+2ca$$</p> <p>show that $$a^4+b^4+c^4+6(a^2b^2+b^2c^2+a^2c^2)+4abc(a+b+c)&lt;4(ab+bc+ac)(a^2+b^2+c^2)$$</p> <p>I know this indentity: $$a^2+b^2+c^2-2(ab+bc+ac) =-(\sqrt{a}+\sqrt{b}+\sqrt{c})(-\sqrt{a}+\sqrt{b}+\sqrt{c})(\sqrt{a}-\sqrt{b}+\sqrt{c})(\sqrt{a}+\sqrt{b}-\sqrt{c})$$</p>
Macavity
58,320
<p>Let $p = a+b+c, q = ab+bc+ca, r = abc$; then the inequality we need to show is equivalent to: $$p^4+16q^2 &lt; 8p^2q+4pr$$ </p> <p>Using the condition, we know $p^2 &lt; 4q$ and it is well known that $3q \le p^2$. Thus we have $(4q-p^2)(p^2-3q) \ge 0 \implies 7p^2q \ge p^4+12q^2$.</p> <p>Using this, it is enough to show that $4q^2 &lt; p^2q+4pr$. But we always have $p^2q +3pr \ge 4q^2$, and since $pr &gt; 0$ this gives $p^2q + 4pr &gt; 4q^2$, so this is true.</p> <hr> <p>To show that $p^2q + 3pr \ge 4q^2$, i.e. $$(a+b+c)^2(ab+bc+ca)+3(a+b+c)abc \ge 4(ab+bc+ca)^2$$ $$\iff \sum_{sym}a^3b \ge \sum_{sym} a^2b^2$$ which is evident using Muirhead or AM-GM as $a^3b+ab^3 \ge 2a^2b^2$.</p>
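A randomized numerical check of the statement (a sketch; triples failing the hypothesis are skipped, and the seed and sample range are arbitrary):

```python
import random

random.seed(12345)
checked = 0
violations = 0
for _ in range(20000):
    a, b, c = (random.uniform(0.01, 10.0) for _ in range(3))
    if a*a + b*b + c*c >= 2*(a*b + b*c + c*a):
        continue  # hypothesis not satisfied; nothing to check
    checked += 1
    lhs = (a**4 + b**4 + c**4
           + 6*(a*a*b*b + b*b*c*c + a*a*c*c)
           + 4*a*b*c*(a + b + c))
    rhs = 4*(a*b + b*c + c*a)*(a*a + b*b + c*c)
    if lhs >= rhs:
        violations += 1
```

No counterexamples turn up, as the p, q, r argument predicts.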
3,836,695
<p>Is there a way to evaluate <span class="math-container">$\int\cos^2xdx$</span> without using <span class="math-container">$\cos^2x=\frac12(1+\cos(2x))$</span>?</p>
rash
650,763
<p><span class="math-container">$$I=\int \cos^2 x\,dx= x\cos^2 x+\int x(2\sin x\cos x)\,dx \text{ (Integration by Parts)} $$</span> <span class="math-container">$$=x\cos^2 x+\int x\sin 2x\,dx$$</span> Applying Integration by Parts to <span class="math-container">$\int x\sin 2x\,dx$</span>, <span class="math-container">$$\int x\sin 2x\,dx =-x\times\frac{\cos 2x}{2}+\frac{1}{2}\int \cos 2x\,dx$$</span> <span class="math-container">$$= -x\times\frac{\cos 2x}{2}+\frac{1}{4}\sin 2x$$</span> Substituting this into <span class="math-container">$I$</span>, <span class="math-container">$$I=x\cos^2 x -\frac{x\cos 2x}{2}+\frac{1}{4}\sin 2x +C $$</span></p>
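As a numerical sanity check (a sketch), differentiating the antiderivative found here should return cos²x:

```python
import math

def F(x):
    # antiderivative obtained above, with C = 0
    return x * math.cos(x) ** 2 - x * math.cos(2 * x) / 2 + math.sin(2 * x) / 4

# central finite difference of F versus the integrand at a few sample points
h = 1e-5
max_err = max(
    abs((F(x + h) - F(x - h)) / (2 * h) - math.cos(x) ** 2)
    for x in (-2.0, -0.5, 0.3, 1.0, 2.7)
)
```

The discrepancy is at finite-difference noise level, confirming F'(x) = cos²x.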
3,836,695
<p>Is there a way to evaluate <span class="math-container">$\int\cos^2xdx$</span> without using <span class="math-container">$\cos^2x=\frac12(1+\cos(2x))$</span>?</p>
Aadhaar Murty
826,105
<p>You could use the complex definitions of sine and cosine -</p> <p><span class="math-container">$$\cos x = \frac {e^{ix}+ e^{-ix}}{2}, \sin x = \frac {e^{ix}- e^{-ix}}{2i}$$</span></p> <p><span class="math-container">$$I = \int \frac {e^{2ix}+e^{-2ix} + 2}{4} dx = \frac {1}{4}\left(2x + \frac {e ^{2ix}}{2i} - \frac {e^{-2ix}}{2i}\right) = \frac {x}{2} + \frac {1}{4}\sin(2x) =$$</span></p> <p><span class="math-container">$$\boxed {\frac {x +\sin x \cdot \cos x }{2} } $$</span></p>
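A numeric cross-check (a sketch) that this boxed form agrees with the antiderivative obtained by integration by parts in the other answer; in fact the two expressions coincide identically, not just up to a constant:

```python
import math

def F_parts(x):
    # antiderivative from the integration-by-parts answer (C = 0)
    return x * math.cos(x) ** 2 - x * math.cos(2 * x) / 2 + math.sin(2 * x) / 4

def F_boxed(x):
    # boxed form above (C = 0)
    return (x + math.sin(x) * math.cos(x)) / 2

max_diff = max(abs(F_parts(x) - F_boxed(x)) for x in (-2.0, -0.3, 0.7, 1.9, 3.1))
```

Expanding cos 2x = 2cos²x - 1 in the first form reduces it to the second exactly.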
227,873
<p>I am looking for robust generalizations of matrix rank.</p> <p>Think of the the following problem: A big matrix of low rank is perturbed by random noise, such that it becomes a full-rank matrix. Is there a generalization of matrix rank that still 'sees' that the perturbed matrix is close to a low-rank matrix?</p>
Carlo Beenakker
11,260
<p><A HREF="http://www.sciencedirect.com/science/article/pii/016501149090208N" rel="nofollow">The rank of a fuzzy matrix and its evaluation</A></p> <blockquote> <p>A new type of matrix rank, which is called margin rank in this article, is introduced to a fuzzy matrix defined to be rectangular array of fuzzy numbers. The new rank is, in general, a real number and consistent with the conventional integer-valued rank, defined for the crisp matrix. The margin rank indicates the margin of retaining the rank of the mean matrix, which enables us to represent the grade of some characteristics described by the ordinary rank of a matrix. In this article the definition of the new rank and a procedure for its evaluation are shown with several examples.</p> </blockquote>
4,305,538
<p>I am trying to compute the Fourier transform of <span class="math-container">$(x_{1}+ix_{2})^{-1}$</span> in <span class="math-container">$S'(\mathbb{R}^{2})$</span>. i.e. as a tempered distribution.</p> <p>It might be useful to note that for <span class="math-container">$\mu \in S'(\mathbb{R}^{2})$</span> and <span class="math-container">$\psi \in S(\mathbb{R}^{2})$</span> we define <span class="math-container">$\langle\hat{\mu},\psi\rangle=\langle\mu,\hat{\psi}\rangle$</span>.</p> <p>In my attempt, I noted that the definition of the Fourier transform in <span class="math-container">$S(\mathbb{R}^{2})$</span>:</p> <p><span class="math-container">$$ \hat{f}(\lambda)=\int_{\mathbb{R}^{2}}f(x)e^{-i \lambda \cdot x} dx, $$</span> gave us that <span class="math-container">$\widehat{(-i \partial_{1}+\partial_{2}) \delta}=x_{1}+ix_{2}$</span>. I'm not sure how to use this fact to help me complete the problem. Any help would be greatly appreciated.</p>
Svyatoslav
869,237
<p>There is a comprehensive solution by @Ninad Munshi. We can also try another approach.</p> <p>Let's denote <span class="math-container">$x=\sqrt{x_1^2+x_2^2}$</span> and <span class="math-container">$\lambda=\sqrt{\lambda_1^2+\lambda_2^2}$</span>. We consider the case <span class="math-container">$\lambda\neq0$</span>.</p> <p>First, we note that we can arbitrarily choose the system of coordinates <span class="math-container">$(x_1, ix_2)$</span> in the complex plane. We can direct the axis <span class="math-container">$X$</span> along <span class="math-container">$\vec\lambda$</span>. With this choice, <span class="math-container">$\vec\lambda= (\lambda_1;i\lambda_2)=(\lambda, 0)$</span>. We also note that in this case <span class="math-container">$\lambda_1x_1+\lambda_2x_2=\lambda x_1=\lambda x\cos\phi\,$</span> and <span class="math-container">$\,x_1+ix_2=xe^{i\phi}$</span>. <span class="math-container">$$I=\int_{\mathbb{R}^{2}}\frac{e^{-i (\lambda_1 x_1+\lambda_2 x_2)}}{x_1+ix_2} dx_1dx_2$$</span> Switching into the polar system of coordinates, <span class="math-container">$$I=\int_0^\infty xdx\int_0^{2\pi}d\phi\,\frac{e^{-i\lambda x\cos\phi}}{xe^{i\phi}}$$</span></p> <p>Bearing in mind that <span class="math-container">$\lambda\neq0$</span>, substitute <span class="math-container">$t=\lambda x$</span> and write <span class="math-container">$e^{-i\phi}=\cos\phi-i\sin\phi$</span>. Under the symmetry <span class="math-container">$\phi\to2\pi-\phi$</span> the factor <span class="math-container">$e^{-it\cos\phi}$</span> is unchanged while <span class="math-container">$\sin\phi$</span> changes sign, so only the <span class="math-container">$\cos\phi$</span> part survives: <span class="math-container">$$I=\frac{1}{\lambda}\int_0^\infty dt\int_0^{2\pi} e^{-i t\cos\phi}\cos\phi \,d\phi=\frac{1}{\lambda}\int_0^\infty \bigl(-2\pi i\, J_1(t)\bigr)\,dt=-\frac{2\pi i}{\lambda},$$</span> where we used the Bessel integral <span class="math-container">$\int_0^{2\pi} e^{-it\cos\phi}\cos\phi\,d\phi=2\pi i\,J_1(-t)=-2\pi i\,J_1(t)$</span> (from the Jacobi–Anger expansion) and <span class="math-container">$\int_0^\infty J_1(t)\,dt=1$</span>. Remembering that this was a special choice of coordinate system, and that for an arbitrary choice <span class="math-container">$\vec\lambda=(\lambda_1;i\lambda_2)=\lambda e^{i\psi};\ \psi=\tan^{-1}\frac{\lambda_2}{\lambda_1}$</span>, the general answer is <span class="math-container">$$I(\lambda_1;\lambda_2)=-\frac{2\pi i}{\lambda e^{i\psi}}=-\frac{2\pi i}{\lambda_1+i\lambda_2}$$</span> This sign agrees with the distributional computation: since <span class="math-container">$\bar\partial\frac{1}{\pi z}=\delta$</span> and <span class="math-container">$\widehat{\bar\partial E}=\frac{i}{2}(\lambda_1+i\lambda_2)\hat E$</span>, we get <span class="math-container">$\widehat{\frac1z}=-\frac{2\pi i}{\lambda_1+i\lambda_2}$</span>.</p>
61,047
<p>I can add the value of a slider to the right of it using the Appearance-->Labelled option, but what if I want to add text after the automatic label. How can I do that?</p> <p>Normally I want to do this to show the units of the value. For example, if the slider label is "4.7", I might want it to read "4.7 meters".</p>
eldo
14,254
<pre><code>Framed[ Row[{ "Interval", IntervalSlider[Dynamic[m], {0, 3, 0.1}, Appearance -&gt; "Markers"], Dynamic[m], " meters"}], Background -&gt; GrayLevel@0.9, FrameMargins -&gt; 15] </code></pre> <p><img src="https://i.stack.imgur.com/dJpfT.jpg" alt="enter image description here"></p> <p>Thanks to <strong>alancalvitty</strong>'s comment: Much nicer is:</p> <pre><code>Framed[ Row[{ "Interval", IntervalSlider[ Dynamic[m], {0, 3, 0.1}, Appearance -&gt; "Markers"], Dynamic[m], " meters"}], BaseStyle -&gt; Directive[FontFamily -&gt; "Helvetica", FontSize -&gt; 14], RoundingRadius -&gt; 10, Background -&gt; GrayLevel @ 0.9, FrameMargins -&gt; 15, FrameStyle -&gt; GrayLevel @ 0.7] </code></pre> <p><img src="https://i.stack.imgur.com/NHFLK.jpg" alt="enter image description here"></p>
1,712,256
<p>I'm given some equations.</p> <p>The first one, $y = x^3+2x^2-8x+1$, asks me to find the tangent line at $x=2$.</p> <p>The second, $y = x^{1.5} - x^{1/2}$, asks me to find the tangent line at $x=4$.</p> <p>How would I go about solving this algebraically? I have to be able to prove the answers are $y=12x-23$ and $y=2.75x-5$ respectively, but I don't know how to start.</p>
Randy
310,503
<p>The tangent line can be written as $$ y = m x + b $$ Here, $m$ is the slope, and at the point of tangency the slope equals the derivative of the function in question. Thus for your first function, the derivative at $x=2$ is $$ \frac{d}{dx}\left(x^3+2x^2-8x+1\right)\Big|_{x=2} = \left(3x^2+4x-8\right)\Big|_{x=2}= 12. $$ Thus, the tangent line will take the form $$ y = 12x+b. $$ Now, to find $b$, notice that the tangent line touches the function at $x=2$. The function value at $x=2$ is $2^3+2(2^2)-8(2)+1 = 1$, so we need $(x,y)= (2,1)$ to satisfy the tangent line equation, which holds when $$ 1 = 12(2)+b, $$ i.e. $b=-23$. Thus the equation of the tangent line is $$ y = 12x-23. $$ Similar arguments work for the other question.</p>
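The slope and intercept can be cross-checked with a finite-difference derivative (a sketch):

```python
def f(x):
    return x ** 3 + 2 * x ** 2 - 8 * x + 1

# central difference approximates f'(2); intercept follows from the point (2, f(2))
h = 1e-6
slope = (f(2 + h) - f(2 - h)) / (2 * h)
intercept = f(2) - slope * 2
```

The numbers land on the claimed tangent line y = 12x - 23.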
2,502,224
<p>I don't quite understand <a href="https://math.stackexchange.com/a/1866099/185631">this example given by Mike Haskel</a>. I want to find an example about</p> <p><span class="math-container">$$\operatorname{Hom}_R\left ( M ,\bigoplus_{i\in I} N_{i}\right )\not \cong\bigoplus_{i\in I} \operatorname{Hom}_R\left ( M ,N_{i}\right ).$$</span></p> <p>Mike Haskel's example is that:</p> <blockquote> <p>It's not true when <span class="math-container">$I$</span> is infinite, due exactly to the problem you encounter. Consider the case where <span class="math-container">$R = \mathbb{R}$</span>, <span class="math-container">$M$</span> is an infinite dimensional vector space, and each <span class="math-container">$N_i$</span> is <span class="math-container">$\mathbb{R}$</span>, with <span class="math-container">$I$</span> infinite. Convince yourself that <span class="math-container">$\operatorname{Hom}(M,\bigoplus_i N_i)$</span> corresponds to infinite matrices whose columns each have finitely many nonzero entries, while <span class="math-container">$\bigoplus_i \operatorname{Hom}(M,N_i)$</span> corresponds to infinite matrices with only a finite number of nonzero rows.</p> </blockquote> <p>It's not quite clear to me why &quot;<span class="math-container">$\bigoplus_i \operatorname{Hom}(M,N_i)$</span> corresponds to infinite matrices with only a finite number of nonzero rows&quot;.</p> <p>I don't know how to write a formal rigorous proof that <span class="math-container">$\operatorname{Hom}_R\left ( M ,\bigoplus_{i\in I} N_{i}\right )\not \cong\bigoplus_{i\in I} \operatorname{Hom}_R\left ( M ,N_{i}\right )$</span> in this case.</p>
Andreas Blass
48,510
<p>I think you're right that the vector spaces in this example will be isomorphic when $I$ is countably infinite. Probably what Mike Haskel had in mind is that the canonical map from the sum-of-Homs to the Hom-into-sum is not an isomorphism. </p> <p>The example becomes correct if $I$ is chosen more carefully. What one needs is $|I|=\kappa$ for some cardinal $\kappa&gt;|\mathbb R|$ with the additional property that $\kappa^{\dim M}&gt;\kappa$. For simplicity, let me take $\dim M=\aleph_0$ and all $N_i=\mathbb R$ for this example. The required cardinals $\kappa$ exist; indeed the inequality $\kappa^{\aleph_0}&gt;\kappa$ holds whenever the cofinality of $\kappa$ is $\aleph_0$.</p> <p>Having chosen $\kappa$ this way, we get for the dimension of the Hom-into-sum, $(|\mathbb R|\cdot\kappa)^{\aleph_0}=\kappa^{\aleph_0}$, while the sum-of-Homs has dimension $\kappa\cdot(|\mathbb R|^{\aleph_0})=\kappa\cdot|\mathbb R|=\kappa$. So the dimensions are different.</p>
512,118
<p>Suppose $X$ is a metric space, $z$ is in $X$ and $(x_n)$ is a sequence in $X$. </p> <p>Then what does it mean to say that, $z$ is in the "<em>closure of every tail of $(x_n)$</em>."</p> <p>What does "<em>closure</em>" of every tail, mean ?</p>
Brian M. Scott
12,042
<p>First, if $m\in\Bbb N$, the $m$-tail of the sequence is the subsequence $\langle x_n:n\ge m\rangle$ consisting of all of the terms from $x_m$ on. Now instead of viewing this as a sequence, consider just the set of terms in the $m$-tail: that set is $T_m=\{x_n:n\ge m\}$. It’s just a subset of $X$, so it has a closure, $$\operatorname{cl}T_m=\operatorname{cl}\{x_n:n\ge m\}\;.$$</p> <p>A point $x\in X$ is therefore in the closure of every tail of $\langle x_n:n\in\Bbb N\rangle$ if</p> <p>$$x\in\bigcap_{m\in\Bbb N}\operatorname{cl}T_m=\bigcap_{m\in\Bbb N}\operatorname{cl}\{x_n:n\ge m\}\;.$$</p> <p>This means that if you throw away any finite number of terms at the beginning of the sequence, $x$ is still in the closure of the set of remaining terms. This will be the case if there is a subsequence of $\langle x_n:n\in\Bbb N\rangle$ converging to $x$.</p>
3,445,768
<p>I am trying to solve a nonlinear differential equation of the first order that comes from a geometric problem ; <span class="math-container">$$x(2x-1)y'^2-(2x-1)(2y-1)y'+y(2y-1)=0.$$</span></p> <p>edit1 <strong><em>I am looking for human methods to solve the equation</em></strong> </p> <p>edit2 the geometric problem was discussed on this french forum <a href="http://www.les-mathematiques.net/phorum/read.php?8,1779080,1779080" rel="nofollow noreferrer">http://www.les-mathematiques.net/phorum/read.php?8,1779080,1779080</a> </p> <p>We can see the differential equation here <a href="http://www.les-mathematiques.net/phorum/read.php?8,1779080,1780782#msg-1780782" rel="nofollow noreferrer">http://www.les-mathematiques.net/phorum/read.php?8,1779080,1780782#msg-1780782</a></p> <p>edit 3 I do not trust formal computer programs: look at Wolfram's answer when asked to calculate the cubic root of -1 <a href="https://www.wolframalpha.com/input/?i=%7B%5Csqrt%5B3%5D%7B-1%7D%7D%29+" rel="nofollow noreferrer">https://www.wolframalpha.com/input/?i=%7B%5Csqrt%5B3%5D%7B-1%7D%7D%29+</a>.</p>
user577215664
475,762
<p>Not a complete solution, but you can try a polynomial solution approach: <span class="math-container">$$y(x)=\sum_{n=0}^{m}a_nx^n$$</span> Plug <span class="math-container">$y(x)=a_mx^m$</span> into the equation to find the degree of the polynomial: <span class="math-container">$$2m^2a_m^2-4a_m^2m+2a_m^2=0$$</span> <span class="math-container">$$(m-1)^2=0 \implies m=1$$</span> Then, <span class="math-container">$$y(x)=a_1x+a_2$$</span> Plug that solution into the equation to find <span class="math-container">$a_1,a_2$</span>: <span class="math-container">$$x(2x-1)y'^2-(2x-1)(2y-1)y'+y(2y-1)=0.$$</span> <span class="math-container">$$x(2x-1)a_1^2-(2x-1)(2a_1x+2a_2-1)a_1+(a_1x+a_2)(2a_1x+2a_2-1)=0.$$</span> <strong>Edit:</strong> The <span class="math-container">$x^2$</span> terms cancel, so this is a polynomial of degree one in <span class="math-container">$x$</span>; write it as <span class="math-container">$$\alpha x+\beta=0$$</span> Since this must be true for all <span class="math-container">$x$</span>, <span class="math-container">$$\implies \begin{cases}\alpha=a_1(a_1+1)=0 \\ \beta=2a_1a_2-a_1+2a_2^2-a_2 =0 \end{cases} $$</span> The solutions of the system are: <span class="math-container">$$S=\{(0,0),(0,\tfrac 1 2),(-1,\tfrac 1 2 ),(-1,1)\}$$</span> These give the linear solutions of the DE: <span class="math-container">$y(x)=0$</span>, <span class="math-container">$y(x)=\frac 1 2$</span>, <span class="math-container">$y(x)=-x+\frac 1 2$</span>, and <span class="math-container">$y(x)=-x+1$</span>.</p>
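All four parameter pairs in S can be substituted back into the ODE numerically (a sketch):

```python
def residual(a1, a2, x):
    """Left-hand side of the ODE evaluated on the line y = a1*x + a2."""
    y = a1 * x + a2
    yp = a1  # y' is constant for a line
    return x * (2 * x - 1) * yp ** 2 - (2 * x - 1) * (2 * y - 1) * yp + y * (2 * y - 1)

candidates = [(0.0, 0.0), (0.0, 0.5), (-1.0, 0.5), (-1.0, 1.0)]
max_res = max(
    abs(residual(a1, a2, x))
    for a1, a2 in candidates
    for x in (-3.0, -1.0, 0.0, 0.5, 1.0, 4.0)
)
```

All residuals vanish at the sample points, so each pair in S does define a solution.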
2,704,247
<p>I was given this question to prepare for an exam:</p> <p><em>Show that the set of all functions $f(x)$ such that $f''(x)$ = -3 on ($-\infty$, $\infty$) is uncountable.</em></p> <p>I know that this gives me a set of parabolas $f(x) = -\frac{3}{2}x^{2} + ax + b$, but I'm unsure of how to show this set is uncountable. I thought I could find a correspondence between the roots of the parabola and the real numbers, but there's no guarantee there are any roots. Would a and b determine a unique parabola, or is this an insufficient number of points? </p> <p>Could anyone give me a hint to figure this out, but not the solution?</p> <p>edit: Thank you for your help, I have constructed the following bijection to show the set of these functions is equivalent to the real plane.</p> <p>Let z : $\mathbb{R^2}$ $\to$ A, where A is the set of these functions and z(a,b) = $ -\frac{3}{2}x^{2} + ax + b$. Then z is clearly a bijection and thus A is uncountable.</p>
grontim
283,574
<p>You have $f(x) = -\frac{3}{2}x^2+ax+b$. Note that every pair $(a,b)$ gives you a different function $f$. Why? Because a polynomial function is uniquely determined by its coefficients, i.e., if two polynomials $f$ and $g$ have the same values everywhere, then they have the same coefficients. (This is true for polynomials with coefficients in an infinite field.) So now you have a bijection from $\mathbb R^2$ to your set of polynomials. Since $\mathbb R^2$ is uncountable, your set of polynomials is also uncountable.</p>
3,746,402
<p>For example, if we define <span class="math-container">$F(x)=\int^x_a f(t)dt$</span>, where <span class="math-container">$f$</span> is Riemann integrable, then <span class="math-container">$F(x)$</span> is a function. Or for a 2 variables real-valued integrable function <span class="math-container">$f(x, y)$</span>, <span class="math-container">$G(x)=\int^b_a f(x,y)dy$</span>, then <span class="math-container">$G(x)$</span> is a function. But for <span class="math-container">$\int_a^b f$</span>, is it just a number rather than a function?</p>
Xander Henderson
468,350
<h3>The Riemann integral is a number.</h3> <p>If <span class="math-container">$f : [a,b]\to\mathbb{R}$</span> is a Riemann integrable function then, by definition, the <em>Riemann integral</em> of <span class="math-container">$f$</span> is given by <span class="math-container">$$ \int_{a}^{b} f(x)\,\mathrm{d}x = \lim_{\delta\to 0} \sum_{n=1}^{N} f(x_n^*) (x_{n}-x_{n-1}), $$</span> where <span class="math-container">$a = x_0 &lt; x_1 &lt; \dotsb &lt; x_N = b$</span> is any partition of <span class="math-container">$[a,b]$</span> with <span class="math-container">$x_n - x_{n-1} &lt; \delta$</span> for all <span class="math-container">$n$</span>, and <span class="math-container">$x_n^* \in [x_{n-1}, x_n]$</span>.</p> <p>For any partition of <span class="math-container">$[a,b]$</span>, the sum on the right will be a real number (since <span class="math-container">$f(x_n^*)$</span> and <span class="math-container">$x_n-x_{n-1}$</span> are real for every <span class="math-container">$n$</span>). Thus the term on the right is the limit of a sequence of real numbers. The limit of a sequence of real numbers is, again, a real number, hence the Riemann integral is a real number. Since the variable is not really important here, the above notation will often be abbreviated to <span class="math-container">$$ \int_{a}^{b} f, $$</span> but this notation indicates the same number discussed above.</p> <p>That being said, there is something else going on, which might explain your confusion. 
Roughly speaking, the Fundamental Theorem of Calculus tells us that if there is a function <span class="math-container">$F$</span> with derivative <span class="math-container">$f$</span>, then <span class="math-container">$$ F(x) = \int_{a}^{x} f(t)\,\mathrm{d}t, $$</span> where <span class="math-container">$a$</span> is an arbitrarily chosen constant in the domain of <span class="math-container">$F$</span>, and <span class="math-container">$x$</span> is <em>any</em> element in the domain of <span class="math-container">$F$</span>. Note that, even here, the Riemann integral is a real number: first, we fix some point <span class="math-container">$x$</span> in the domain of <span class="math-container">$F$</span>, and then we determine the value of <span class="math-container">$F$</span> at that point (that is, we determine <span class="math-container">$F(x)$</span>) by evaluating a Riemann integral. This process can be used to <em>define</em> a function—it gives us a way of evaluating <span class="math-container">$F$</span> at every point in its domain—but the integral itself is not a function.</p> <p>Finally, there is one other notational wrinkle: if the derivative of <span class="math-container">$F$</span> is <span class="math-container">$f$</span>, then we say that <span class="math-container">$F$</span> is an <em>antiderivative</em> of <span class="math-container">$f$</span>.<sup>[1]</sup> The Fundamental Theorem of Calculus could be restated as &quot;If <span class="math-container">$F$</span> is an antiderivative of <span class="math-container">$f$</span>, then <span class="math-container">$$ F(x) = \int_{a}^{x} f(t)\,\mathrm{d}t, $$</span> where <span class="math-container">$a$</span> and <span class="math-container">$x$</span> are as above.&quot; Because of this link between derivatives, antiderivatives, and (Riemann) integrals, it is common to write <span class="math-container">$$ F = \int f $$</span> in order to indicate that <span 
class="math-container">$F$</span> is an antiderivative of <span class="math-container">$f$</span>. In this notation, the &quot;integrals&quot; have no limits, and <span class="math-container">$\int f$</span> doesn't really refer to an integral at all (though it is often called the &quot;indefinite integral&quot;). In this case, it would be fair to say that <span class="math-container">$\int f$</span> is a function,<sup>[2]</sup> but <span class="math-container">$\int f$</span> no longer denotes a Riemann integral.</p> <hr /> <p>[1] Note the use of the indefinite article: <span class="math-container">$F$</span> is <strong>an</strong> antiderivative, not <strong>the</strong> antiderivative. This is important, because if <span class="math-container">$F'(x) = f(x)$</span> for all appropriate <span class="math-container">$x$</span>, then <span class="math-container">$$ \frac{\mathrm{d}}{\mathrm{d}x} \bigl(F(x) + C\bigr) = f(x) $$</span> for any constant <span class="math-container">$C$</span>. Hence antiderivatives are not unique; both <span class="math-container">$x \mapsto F(x)$</span> and <span class="math-container">$x\mapsto F(x) + C$</span> are antiderivatives of <span class="math-container">$f$</span>.</p> <p>[2] It may also be worth noting that it is a little bit of a lie to say that <span class="math-container">$$ \int f $$</span> is a function—in reality, it is probably better to think of it as an <em>equivalence class</em> of functions, which are equivalent up to addition of a constant. But this distinction is typically irrelevant in elementary courses, and is easily dealt with in more advanced settings.</p>
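A small numeric sketch of the definition above: each Riemann sum is just a real number, and the sums settle toward a single real number (here the integral of x² over [0, 1], which is 1/3):

```python
def riemann_sum(f, a, b, n):
    """Left Riemann sum of f over [a, b] with a uniform partition into n pieces."""
    dx = (b - a) / n
    return sum(f(a + k * dx) * dx for k in range(n))

# finer partitions give better approximations of the single number 1/3
approximations = [riemann_sum(lambda x: x * x, 0.0, 1.0, n) for n in (10, 100, 10000)]
```

Each entry of the list is a plain float; only the limiting value is called "the integral".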
1,362,220
<p>My question is regarding the validity of the following statement:</p> <p>$$ (\forall a (\phi \implies \psi)) \equiv (\phi \implies \forall a \psi ),$$</p> <p>provided, of course, there are no free occurrences of $a$ in $\phi$.</p> <p>I am using the axiom system from <a href="http://rads.stackoverflow.com/amzn/click/0415126002" rel="nofollow">Hughes and Cresswell</a>, namely,</p> <p>(US) $\forall a \phi \implies \phi [a / b]$ (N.B. $\phi[a/b]$ denotes a bound alphabet variant of $\phi$ with no bound $b$, then replacing every free $a$ with free $b$).</p> <p>(UG) From $\phi \implies \psi$ infer $\phi \implies \forall a \psi$, provided $a$ is not free in $\phi$.</p> <p>(MP) Modus Ponens. </p> <p>I also have some modal axioms in play but I assume they are irrelevant. In the book they list as a theorem:</p> <p>$$ (\forall a (\phi \implies \psi)) \implies (\phi \implies \forall a \psi ),$$</p> <p>provided there are no free occurrences of $a$ in $\phi$. (Which is clearly a straightforward application of (1)+MP and then (2).) I believe the other direction should follow from a rather similar argument, but seeing as the book did not list such a equivalence as a theorem, but merely a one sided implication, I am second guessing myself. Anyway, a sketch:</p> <p>$$(\phi \implies \forall a \psi ) \quad [1: Assumption]$$ $$(\forall a \psi \implies \psi[a/a] )\quad [2: US]$$ $$(\phi \implies \psi[a/a]) \quad [3: 1+2+MP]$$ $$(\phi \implies \forall a \psi ) \implies (\phi \implies \psi[a/a]) \quad [4: 1+3]$$ $$\forall a(\phi \implies \psi[a/a]) \quad [5: 4+UG]$$</p> <p>Then since bound alphabetic variants are material equivalents, this delivers the result. Now, I'm a bit new to the whole logic thing so any errors or omissions would be very helpful.</p>
Trung Ta
214,606
<p>My sequences of transformation is as follow:</p> <p>$$ \forall a (\phi \Rightarrow \psi) $$</p> <p>$$ \equiv~ \forall a (\neg\phi \vee \psi) $$ $$ \equiv~ \neg\phi \vee \forall a (\psi) $$ $$ \equiv~ \phi \Rightarrow \forall a (\psi) $$</p> <p>Note that from step 2 to 3, since $a$ does not occur free in $\phi$, hence $\forall a (\neg\phi \vee \psi) ~~ \equiv ~~ \neg\phi \vee \forall a (\psi)$.</p>
2,919,561
<p>What is the following limit ? How do I solve it ? $$\lim \limits_{x\to0}\frac{6\sin x}{x-3\tan x}$$</p>
Cesareo
397,348
<p>$$ \frac{6\sin x}{x-3\tan x} = \frac{6\sin x\cos x}{x(\cos x-3\frac{\sin x}{x})} = 6\cos x\left(\frac{\sin x}{x}\right)\frac{1}{\cos x-3\left(\frac{\sin x}{x}\right)} $$</p> <p>hence</p> <p>$$ \lim_{x\to 0}\frac{6\sin x}{x-3\tan x} = \lim_{x\to 0}6\cos x\left(\frac{\sin x}{x}\right)\frac{1}{\cos x-3\left(\frac{\sin x}{x}\right)} = \frac{6}{1-3} = -3 $$</p> <p>NOTE</p> <p>We used the fact </p> <p>$$ \lim_{x\to 0}\frac{\sin x}{x} = 1 $$</p>
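Numerically, the ratio indeed approaches -3 as x tends to 0 (a sketch):

```python
import math

def g(x):
    return 6 * math.sin(x) / (x - 3 * math.tan(x))

# sample the ratio at x = 0.1, 0.01, ..., 1e-6
values = [g(10.0 ** (-k)) for k in range(1, 7)]
```

The samples march toward the algebraic limit 6/(1 - 3) = -3.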
2,683,032
<blockquote> <p>Show the sum of the first $n$ positive even integers is $n^2 + n$ using strong induction.</p> </blockquote> <p>I can't solve the above problem using strong induction. It will be very helpful if I can get the solution. Thanks in advance.</p>
Graham Kemp
135,106
<p>In your Attempt A, the "bigram" you discuss is not $\mathsf P(w_i\mid w_{1:i-1})$ but rather $\mathsf P(w_i\cap w_{1:i-1})$ - the <em>joint probability</em> for the $i$th word choice and the preceding sequence of $i-1$ word choices. This is, of course, $\mathsf P(w_{1:i})$.</p> <p>What $\mathsf P(w_i\mid w_{1:i-1})$ actually is, is the probability for the $i$th word choice <em>when given</em> a preceding sequence of $i-1$ word choices. &nbsp; By definition: $$\mathsf P(w_i\mid w_{1:i-1})=\dfrac{\mathsf P(w_i\cap w_{1:i-1})}{\mathsf P(w_{1:i-1})}=\dfrac{\mathsf P(w_{1:i})}{\mathsf P(w_{1:i-1})}$$</p> <p>And so in declaring that $\mathsf P(w_{1:i})=\mathsf P(w_i\cap w_{1:i-1})=1/\lvert V\rvert^i$ you are <em>asserting</em> that the $i$th word choice is <em>independent</em> from the preceding $i-1$ word choices. $$\mathsf P(w_i\mid w_{1:i-1})=\mathsf P(w_i)=1/\lvert V\rvert$$</p>
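The definitional identity can be illustrated with exact rational arithmetic (a sketch; the vocabulary size V = 7 and position i = 4 are made-up values):

```python
from fractions import Fraction

V = 7  # hypothetical vocabulary size
i = 4  # position of the word being predicted

joint = Fraction(1, V ** i)         # P(w_{1:i}) under the uniform independence model
prefix = Fraction(1, V ** (i - 1))  # P(w_{1:i-1})
conditional = joint / prefix        # P(w_i | w_{1:i-1}) by the definition above
```

The conditional probability collapses to 1/|V|, exactly the independence assertion described above.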
3,896,314
<p>I am trying to show that <span class="math-container">$\log 15$</span> and <span class="math-container">$\log 3$</span> + <span class="math-container">$\log 5$</span> is irrational.</p> <p>For <span class="math-container">$\log 15$</span> I feel like I have no issues showing this is irrational.</p> <p>By contradiction assume <span class="math-container">$\log 15$</span> is rational then</p> <p><span class="math-container">$\log 15 = \frac{m}{n}$</span> where <span class="math-container">$m,n\in\mathbb{N}$</span> and <span class="math-container">$n\not= 0$</span></p> <p><span class="math-container">$15 = 10^{\frac{m}{n}}\Rightarrow 15^n = 10^m \Rightarrow 3^n5^n = 5^m2^m$</span></p> <p>The left side of the equation has an odd multiple of <span class="math-container">$5$</span> but the right side has an even multiple of <span class="math-container">$5$</span> and we have arrived at a contradiction.</p> <p>But for <span class="math-container">$\log 3$</span> + <span class="math-container">$\log 5$</span> I do not even know where to begin, I am assuming I cannot use the fact that <span class="math-container">$\log 15 = \log 3$</span> + <span class="math-container">$\log 5$</span> as that would be trivial. Should I show they are individually irrational? But that leads me to my next thought as the sum of two irrational numbers is not always irrational (even though in this case I know the sum is irrational). And then would that lead me to breaking it into a case by case where I assume one is rational and the other is not, etc. I am still an undergrad so I do not know much advanced number theory so I would appreciate a more intuitive approach if possible.</p>
José Carlos Santos
446,262
<p>Since <span class="math-container">$\log_{10}3+\log_{10}5=\log_{10}(3\times5)=\log_{10}15$</span> and since <span class="math-container">$\log_{10}15$</span> is irrational…</p> <p>On the other hand, the problem with the equality <span class="math-container">$3^n5^n=5^m2^m$</span> lies in the fact that <span class="math-container">$3^n5^n$</span> is odd, whereas <span class="math-container">$5^m2^m$</span> is even.</p>
3,896,314
<p>I am trying to show that <span class="math-container">$\log 15$</span> and <span class="math-container">$\log 3$</span> + <span class="math-container">$\log 5$</span> is irrational.</p> <p>For <span class="math-container">$\log 15$</span> I feel like I have no issues showing this is irrational.</p> <p>By contradiction assume <span class="math-container">$\log 15$</span> is rational then</p> <p><span class="math-container">$\log 15 = \frac{m}{n}$</span> where <span class="math-container">$m,n\in\mathbb{N}$</span> and <span class="math-container">$n\not= 0$</span></p> <p><span class="math-container">$15 = 10^{\frac{m}{n}}\Rightarrow 15^n = 10^m \Rightarrow 3^n5^n = 5^m2^m$</span></p> <p>The left side of the equation has an odd multiple of <span class="math-container">$5$</span> but the right side has an even multiple of <span class="math-container">$5$</span> and we have arrived at a contradiction.</p> <p>But for <span class="math-container">$\log 3$</span> + <span class="math-container">$\log 5$</span> I do not even know where to begin, I am assuming I cannot use the fact that <span class="math-container">$\log 15 = \log 3$</span> + <span class="math-container">$\log 5$</span> as that would be trivial. Should I show they are individually irrational? But that leads me to my next thought as the sum of two irrational numbers is not always irrational (even though in this case I know the sum is irrational). And then would that lead me to breaking it into a case by case where I assume one is rational and the other is not, etc. I am still an undergrad so I do not know much advanced number theory so I would appreciate a more intuitive approach if possible.</p>
peterwhy
89,922
<p>Where does the restriction end? Without using the log identity directly, I can still assume <span class="math-container">$\log 3 + \log 5 = \frac mn$</span> and then</p> <p><span class="math-container">$$10^{\log 3+\log 5} = 10^{m/n} \implies 10^{\log 3}\cdot 10^{\log 5} = 10^{m/n} \implies 3\cdot 5 = 10^{m/n}$$</span></p>
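As a quick numerical sanity check (an editor's addition, not a proof, using only the Python standard library), one can confirm both the identity used above and the parity obstruction from the question:

```python
import math

# 10^(log 3 + log 5) = 3 * 5 = 15, up to floating-point error.
assert math.isclose(10 ** (math.log10(3) + math.log10(5)), 15)

# Parity obstruction: 15^n is odd, 10^m is even, so they can never be equal.
assert all(15 ** n != 10 ** m for n in range(1, 40) for m in range(1, 40))
print("checks pass")
```

Of course no finite search proves irrationality; this only illustrates the two facts the proofs rest on.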
1,311,466
<p>My concept of real numbers is not very clear. Please also explain the logic behind the question. The expression is true for 19; is it true for all the multiples? </p>
Asinomás
33,907
<p>You want to prove $\frac{(3n)!}{6^n}$ is an integer. Just use $\frac{(3n)!}{6^n}=\frac{1\cdot2\cdot3}{6}\cdot\frac{4\cdot 5\cdot 6}{6}\cdot\frac{7\cdot8\cdot 9}{6}\cdots \frac{(3n-2)(3n-1)(3n)}{6}$, and each fraction is an integer, since $k(k+1)(k+2)$ is always a multiple of $2$ and of $3$: three consecutive integers always contain a multiple of $3$ and an even number. Alternatively, because <a href="https://math.stackexchange.com/questions/12065/the-product-of-n-consecutive-integers-is-divisible-by-n-factorial">the product of $n$ consecutive numbers is always divisible by $n!$</a>.</p> <hr> <p>I had misunderstood your question as: prove $\frac{(3n)!}{n!^3}$ is an integer. Here is a solution to that problem.</p> <p>Solution 1:</p> <p>$$\frac{(3n)!}{n!^3}=\frac{1\times2\times\dots\times n}{n!}\cdot\frac{(n+1)\times (n+2)\times\dots\times(2n)}{n!}\cdot\frac{(2n+1)\times (2n+2)\times\dots\times(3n)}{n!}$$</p> <p>All three factors are integers, since <a href="https://math.stackexchange.com/questions/12065/the-product-of-n-consecutive-integers-is-divisible-by-n-factorial">the product of $n$ consecutive integers is divisible by $n!$</a>.</p> <hr> <p>Solution $2$:</p> <p>$$\frac{(3n)!}{n!^3}=\frac{(3n)!}{n!\,(2n)!}\cdot\frac{(2n)!}{n!\,n!}\cdot\frac{n!}{n!}=\binom{3n}{n}\binom{2n}{n}\binom{n}{n}$$</p> <p>Look up binomial coefficient.</p> <hr> <p>Solution $3$:</p> <p>$$\frac{(3n)!}{n!^3}=\binom{3n}{n,n,n}$$</p> <p>Look up multinomial coefficient.</p>
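A quick brute-force check of these identities (an editor's addition, standard library only; a spot-check, not a proof):

```python
from math import comb, factorial

for n in range(1, 30):
    m = factorial(3 * n)
    # (3n)! / (n!)^3 is an integer ...
    assert m % factorial(n) ** 3 == 0
    # ... and equals binom(3n,n) * binom(2n,n) * binom(n,n), as in Solution 2.
    assert m // factorial(n) ** 3 == comb(3 * n, n) * comb(2 * n, n) * comb(n, n)
print("verified for n = 1..29")
```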
1,311,466
<p>My concept of real numbers is not very clear. Please also explain the logic behind the question. The expression is true for 19; is it true for all the multiples? </p>
Shailesh
241,153
<p>Very simply, the expression is the number of ways of arranging $ 3n $ objects, of which there are $n$ distinct groups of $3$ alike objects, e.g. the number of ways of arranging 12 balls of which 3 are blue, 3 are red, 3 are green and 3 are yellow. Since this expression is a number of ways, it is an integer alright. More of an intuitive way of looking at it.</p>
1,311,466
<p>My concept of real numbers is not very clear. Please also explain the logic behind the question. The expression is true for 19; is it true for all the multiples? </p>
miniparser
99,402
<p>Prove ${(3n)!\over{6^n}}$ is an integer. No need for induction, etc.: $6^n=2^n3^n$.</p> <p>$(3n)!=(3n)(3n-1)(3n-2)\cdots(3)(2)(1)$ has $3n$ total factors. Half of those factors are even, so at least $n$ of them are, which takes care of $2^n$. For $3^n$, every $3$rd factor going back from $3n$ is divisible by $3$:</p> <p>$\{3n,3n-3,3n-6,\dots,9,6,3\}$</p> <p>Divide all of those by $3$: $\{n,n-1,n-2,\dots,3,2,1\}$; there are $n$ of them. This takes care of $3^n$.</p> <p>So $(2^n)(3^n)=6^n\mid(3n)!$, making ${(3n)!\over{6^n}}$ an integer. </p>
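The grouping-into-triples version of the same claim is easy to verify mechanically; here is a small stdlib-only check (an editor's addition):

```python
from math import factorial, prod

for n in range(1, 40):
    triples = [(3 * k - 2) * (3 * k - 1) * (3 * k) for k in range(1, n + 1)]
    # Each product of 3 consecutive integers is divisible by 6 ...
    assert all(t % 6 == 0 for t in triples)
    # ... so dividing each triple by 6 and multiplying gives (3n)!/6^n exactly.
    assert prod(t // 6 for t in triples) == factorial(3 * n) // 6 ** n
print("verified for n = 1..39")
```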
3,484,136
<blockquote> <p>Show that <span class="math-container">$n^2+n$</span> is even for all <span class="math-container">$n\in\mathbb{N}$</span> by contradiction.</p> </blockquote> <p>My attempt: assume that <span class="math-container">$n^2+n$</span> is odd, then <span class="math-container">$n^2+n=2k+1$</span> for all <span class="math-container">$k\in\mathbb{N}$</span>.</p> <p>We have that <span class="math-container">$$n^2+n=2k+1 \Leftrightarrow \left(n+\frac{1}{2}\right)^2-\frac{1}{4}=2k+1\Leftrightarrow n=\sqrt{2k+\frac{5}{4}}-\frac{1}{2}.$$</span></p> <p>Choosing <span class="math-container">$k=1$</span>, we have that <span class="math-container">$n=\sqrt{2+\frac{5}{4}}-\frac{1}{2}\notin\mathbb{N}$</span>, so we have a contradiction because <span class="math-container">$n^2+n=2k+1$</span> should be verified for all <span class="math-container">$n\in\mathbb{N}$</span> and for all <span class="math-container">$k\in\mathbb{N}$</span>.</p> <p>Is this correct or wrong? If wrong, can you tell me where and why? Thanks.</p>
Robert Lewis
67,071
<p>I hate to be the bearer of bad tidings, but the presented proof attempt is flawed. There is no guarantee that <em>any</em> <span class="math-container">$k$</span> corresponds to a given <span class="math-container">$n^2 + n$</span>, so the assumption that</p> <p><span class="math-container">$\forall n \in \Bbb N, \exists k \in \Bbb Z, \; n^2 + n = 2k + 1 \tag 1$</span></p> <p>is erroneous. </p> <p>I prove it like this: if <span class="math-container">$n$</span> is odd, then <span class="math-container">$n + 1$</span> is even; so whether <span class="math-container">$n$</span> is odd or even, we have <span class="math-container">$2 \mid n(n + 1)$</span>, i.e., <span class="math-container">$n(n + 1)$</span> is even.</p>
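The consecutive-integers argument is trivial to spot-check (an editor's addition; a finite check, not the proof itself):

```python
# n^2 + n = n(n+1): one of any two consecutive integers is even.
for n in range(1, 10_000):
    assert (n * n + n) % 2 == 0
print("n^2 + n is even for n = 1..9999")
```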
55,737
<p>Let $G$ be a group with two generators. Suppose that all non-trivial words of length less or equal $n$ in the generators and their inverses define non-trivial elements in $G$.</p> <blockquote> <p><strong>Question:</strong> How many of the $4\cdot 3^{n}$ words of length $n+1$ in the generators and their inverses can at most be trivial in $G$?</p> </blockquote> <p>I am interested in the growth of this number as $n$ grows. From what I understand from the Gromov's theory of random groups, a choice of relations of length $n+1$ will (almost surely as $n \to \infty$) not enforce shorter relations if one chooses $$3^{\left(\frac{1}2 - \varepsilon\right) \cdot n}$$ relations of length $n+1$ at random. (This result is related to small cancellation theory which applies to randomly choosen relations. The exponent $1/2$ which appears is related to the birthday paradox. It ensures that with high probability one does not chose relations which have large overlap.) However, I would not know how to prove it or even locate it in the literature. Can someone confirm this?</p> <blockquote> <p><strong>Question:</strong> Can one do better than $3^{\left(\frac{1}2 - \varepsilon\right)\cdot n}$ (as $n \to \infty$) with a concrete sequence of groups rather than using random groups?</p> </blockquote>
David Ben-Zvi
582
<p>Since there's no response to this question so far I can break out my broken record and point out such theorems (sheaves on the product are tensor product of categories of sheaves on the factors) are true, and in fact extremely easy, in the derived setting (modulo the appropriate $\infty$-categorical technology). First for any quasicompact quasiseparated scheme we know the derived category is compactly generated (thanks to Thomason-Trobaugh --in fact by a single object, see Bondal-van den Bergh). Thus the (enhanced) derived category is just dg modules over the ext algebra of this generator (or over the small category of generators). Now it's easy to see that the external product of generators is a generator for the product. Hence sheaves on the product are modules for the tensor product of the dg algebras, ie sheaves on the product are the tensor product of categories of sheaves on each factor.</p> <p>(This result is due to Toen in his great Inventiones paper on Morita theory. Nowadays it's much easier, thanks to Lurie foundations, literally this easy argument can be found in my paper with Francis and Nadler, modulo quoting DAG a lot..)</p> <p>I would be curious to see the answer to your original question though --- my gut feeling is it's not nearly as easy if true.. but maybe an argument for schemes with ample sequences of line bundles can be made to imitate this (ie if you know you have generators with enough exactness you might have a shot..)</p>
55,737
<p>Let $G$ be a group with two generators. Suppose that all non-trivial words of length less or equal $n$ in the generators and their inverses define non-trivial elements in $G$.</p> <blockquote> <p><strong>Question:</strong> How many of the $4\cdot 3^{n}$ words of length $n+1$ in the generators and their inverses can at most be trivial in $G$?</p> </blockquote> <p>I am interested in the growth of this number as $n$ grows. From what I understand from the Gromov's theory of random groups, a choice of relations of length $n+1$ will (almost surely as $n \to \infty$) not enforce shorter relations if one chooses $$3^{\left(\frac{1}2 - \varepsilon\right) \cdot n}$$ relations of length $n+1$ at random. (This result is related to small cancellation theory which applies to randomly choosen relations. The exponent $1/2$ which appears is related to the birthday paradox. It ensures that with high probability one does not chose relations which have large overlap.) However, I would not know how to prove it or even locate it in the literature. Can someone confirm this?</p> <blockquote> <p><strong>Question:</strong> Can one do better than $3^{\left(\frac{1}2 - \varepsilon\right)\cdot n}$ (as $n \to \infty$) with a concrete sequence of groups rather than using random groups?</p> </blockquote>
Daniel Schäppi
1,649
<p>I have recently proved a result that gives a partial answer to this question, see <a href="http://arxiv.org/abs/1211.3678" rel="nofollow">here</a> (I need to make some assumptions on $X$ and $Y$). Let $X$ and $Y$ be noetherian schemes with the resolution property (that is, every coherent sheaf is a quotient of a locally free sheaf of finite rank). This includes all separated regular noetherian schemes and all projective noetherian schemes. I don't know a quasi-compact semi-separated scheme which does not have the resolution property.</p> <p>If $X \times Y$ is also noetherian, then the category of coherent sheaves on $X \times Y$ is given by the Deligne tensor product of the categories $\operatorname{Coh}(X)$ and $\operatorname{Coh}(Y)$ of coherent sheaves on the two factors: There is an equivalence</p> <p>$\operatorname{Coh}(X\times Y) \simeq \operatorname{Coh}(X) \boxtimes \operatorname{Coh}(Y)$</p> <p>of symmetric monoidal $R$-linear categories. </p> <p>There is an analogous result without the noetherian assumptions, but you have to consider the categories of finitely presentable quasi-coherent sheaves instead. </p> <p>If $X$ and $Y$ have the strong resolution property (which means that the locally free sheaves of finite rank form a generator of the respective categories of quasi-coherent sheaves), then there is an equivalence</p> <p>$\operatorname{QCoh}_{fp}(X\times Y) \simeq \operatorname{QCoh}_{fp}(X) \boxtimes \operatorname{QCoh}_{fp}(Y)$</p> <p>of symmetric monoidal $R$-linear categories, where now $\boxtimes$ denotes Kelly's tensor product of finitely cocomplete $R$-linear categories. To prove this, I also show that this tensor product gives a bicategorical coproduct in the 2-category of "right exact symmetric monoidal categories," that is, symmetric monoidal finitely cocomplete $R$-linear categories for which tensoring with a fixed object is right exact. 
By passing to categories of ind-objects you find that the category of all quasi-coherent sheaves on the product is equivalent to the desired bicategorical coproduct.</p>
55,737
<p>Let $G$ be a group with two generators. Suppose that all non-trivial words of length less or equal $n$ in the generators and their inverses define non-trivial elements in $G$.</p> <blockquote> <p><strong>Question:</strong> How many of the $4\cdot 3^{n}$ words of length $n+1$ in the generators and their inverses can at most be trivial in $G$?</p> </blockquote> <p>I am interested in the growth of this number as $n$ grows. From what I understand from the Gromov's theory of random groups, a choice of relations of length $n+1$ will (almost surely as $n \to \infty$) not enforce shorter relations if one chooses $$3^{\left(\frac{1}2 - \varepsilon\right) \cdot n}$$ relations of length $n+1$ at random. (This result is related to small cancellation theory which applies to randomly choosen relations. The exponent $1/2$ which appears is related to the birthday paradox. It ensures that with high probability one does not chose relations which have large overlap.) However, I would not know how to prove it or even locate it in the literature. Can someone confirm this?</p> <blockquote> <p><strong>Question:</strong> Can one do better than $3^{\left(\frac{1}2 - \varepsilon\right)\cdot n}$ (as $n \to \infty$) with a concrete sequence of groups rather than using random groups?</p> </blockquote>
Martin Brandenburg
2,841
<p>More generally, I have proven that for quasi-compact and quasi-separated schemes <span class="math-container">$\mathrm{Qcoh}(X \times_S Y)$</span> is the bicategorical pushout of <span class="math-container">$\mathrm{Qcoh}(X)$</span> and <span class="math-container">$\mathrm{Qcoh}(Y)$</span> over <span class="math-container">$\mathrm{Qcoh}(S)$</span> in the bicategory of cocomplete linear tensor categories. The technique of the proof has many other applications as well.</p> <p><a href="https://arxiv.org/abs/2002.00383" rel="nofollow noreferrer">Localizations of tensor categories and fiber products of schemes</a> (arXiv:2002.00383)</p>
4,090,259
<p>I began watching Gilbert Strang's lectures on Linear Algebra and soon realized that I lacked an intuitive understanding of matrices, especially as to why certain operations (e.g. matrix multiplication) are defined the way they are. Someone suggested to me 3Blue1Brown's video series (<a href="https://youtube.com/playlist?list=PLZHQObOWTQDPD3MizzM2xVFitgF8hE_ab" rel="nofollow noreferrer">https://youtube.com/playlist?list=PLZHQObOWTQDPD3MizzM2xVFitgF8hE_ab</a>) and it has helped immensely. However, it seems to me that they present matrices in completely different ways: 3Blue1Brown explains that they represent linear transformations, while Strang depicts matrices as systems of linear equations. What's the connection between these two different ideas?</p> <p>Furthermore, I understand why operations on matrices are defined the way they are when we think of them as linear maps, but this intuition breaks when matrices are thought of in different ways. Since matrices are used to represent all sorts of things (linear transformations, systems of equations, data, etc.), how come operations that are seemingly defined for use with linear maps the same across all these different contexts?</p>
user20672
512,659
<p>Consider the system of equations</p> <p><span class="math-container">$$ Ax = b $$</span></p> <p>In this system, <span class="math-container">$A$</span> is a linear transformation. When that transformation is applied to <span class="math-container">$x$</span> it equals <span class="math-container">$b$</span>. In that sense, it is a system of equations.</p> <p>For all of the contexts you mention, what is described is always a system of equations for which the value on one side (<span class="math-container">$b$</span>) is the linear transformation (<span class="math-container">$A$</span>) of some other quantity (<span class="math-container">$x$</span>).</p>
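A minimal sketch of the two readings (an editor's illustration; the particular matrix and vectors are made up):

```python
# The same 2x2 matrix A, read two ways.
A = [[2.0, 1.0],
     [1.0, 3.0]]
b = [5.0, 10.0]

def apply(A, x):
    """View 1: A as a linear transformation acting on the vector x."""
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

# View 2: the system  2x + y = 5,  x + 3y = 10.  Solving the system means
# finding the vector x that the transformation A sends to b.
x = [1.0, 3.0]
assert apply(A, x) == b
print("A maps (1, 3) to (5, 10)")
```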
4,090,259
<p>I began watching Gilbert Strang's lectures on Linear Algebra and soon realized that I lacked an intuitive understanding of matrices, especially as to why certain operations (e.g. matrix multiplication) are defined the way they are. Someone suggested to me 3Blue1Brown's video series (<a href="https://youtube.com/playlist?list=PLZHQObOWTQDPD3MizzM2xVFitgF8hE_ab" rel="nofollow noreferrer">https://youtube.com/playlist?list=PLZHQObOWTQDPD3MizzM2xVFitgF8hE_ab</a>) and it has helped immensely. However, it seems to me that they present matrices in completely different ways: 3Blue1Brown explains that they represent linear transformations, while Strang depicts matrices as systems of linear equations. What's the connection between these two different ideas?</p> <p>Furthermore, I understand why operations on matrices are defined the way they are when we think of them as linear maps, but this intuition breaks when matrices are thought of in different ways. Since matrices are used to represent all sorts of things (linear transformations, systems of equations, data, etc.), how come operations that are seemingly defined for use with linear maps the same across all these different contexts?</p>
Ethan Bolker
72,858
<p>You ask</p> <blockquote> <p>Since matrices are used to represent all sorts of things (linear transformations, systems of equations, data, etc.), how come operations that are seemingly defined for use with linear maps the same across all these different contexts?</p> </blockquote> <p>Other answers and comments address the connection between linear transformations and systems of equations. You can turn questions about systems of equations into questions about their matrices of coefficients and then about linear transformations, and vice versa. The sum and product operations on matrices correspond to addition and composition of linear transformations. That back and forth illuminates both areas.</p> <p>Abstractly, a matrix is just a rectangular array of numbers. That can be a useful abstraction in contexts other than systems of equations. It's the underlying abstraction in a spreadsheet. For example, the percentages of urban, suburban and rural areas by state will be a <span class="math-container">$50 \times 3$</span> matrix. In that context matrix addition is not likely to be useful.</p>
2,417,356
<p>I found the following very nice post yesterday which presented the conditional expectation in a way which I found intuitive:</p> <p><a href="https://math.stackexchange.com/questions/1492306/conditional-expectation-with-respect-to-a-sigma-algebra?noredirect=1&amp;lq=1">Conditional expectation with respect to a $\sigma$-algebra</a>.</p> <p>I wonder if there is a way to see that $E(X\mid \mathcal{F}_n)(\omega)=\frac 1 {P(E_i)} \int_{E_i}X \, dP$ if $\omega \in E_i$ could be regarded as a Radon-Nikodym derivative. I can't formally connect the dots with, for example, the Wikipedia discussion,</p> <p><a href="https://en.wikipedia.org/wiki/Conditional_expectation" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Conditional_expectation</a>.</p> <p>I am missing the part where the measure gets "weighted", i.e. something analogous to $\frac 1 {P(E_i)}$, in the Wikipedia article.</p> <p>Update</p> <p>It just hit me that if one divides the defining relation by the measure of the set, then one has</p> <p>$\frac{1}{P(E_{i})} \int_{E_i}X \, dP=\frac{1}{P(E_{i})} \int_{E_i}E(X\mid \mathcal{F}_n) \, dP$ for all $E_{i}\in \mathcal{F}_{n}$. This looks like we have a function which agrees with the averages of $X$ a.e. on every set of the algebra $\mathcal{F}_{n}$. This is not quite what @martini writes, but maybe it is a reasonable way to look at it as well? This looks like something which would fit into the Wikipedia discussion better, though. But it doesn't sit right on some other occasions.</p> <p>So the question remains,</p> <p>How do I think about this in the right way? If the second way is wrong, then how is the first consistent with the Wikipedia article?</p> <p>My comment on Sangchul's answer will also be of help in understanding my troubles!</p>
H. H. Rugh
355,946
<p>Your intuition and formula make sense when each $E_i$ is an elementary event in ${\cal F} $ which cannot be decomposed into 2 disjoint events, both of positive probability. If it can be decomposed then there is a mismatch, as in general $E(X|{\cal F})(\omega)$ is not constant over $\omega\in E_i$ while your formula is.</p> <p>Suppose that $\Omega$ may be partitioned into a countable family of disjoint measurable events $E_i$, $i\geq 1$. It suffices to keep only the events with strictly positive probability, as they will carry the total probability. The $\sigma$-algebra ${\cal F}$ generated by this partition simply consists of all unions of elements in this family. A measurable function w.r.t ${\cal F}$ is precisely a linear combination of $\chi_{E_i}$, the characteristic functions on our disjoint family of events. We may thus write: $$ E(X|{\cal F}) (\omega) = \sum_j c_j \chi_{E_j}(\omega)$$ The constants may be computed from the fact that $\int_{E_i} E(X|{\cal F}) dP = c_i P(E_i) = \int_{E_i} X\; dP$. We get: $$ E(X|{\cal F}) (\omega) = \sum_j\chi_{E_j}(\omega) \frac{1}{P(E_j)} \int_{E_j} X\; dP $$ corresponding to the formula you mentioned. By writing down the defining equation you see that this indeed is the Radon-Nikodym derivative of $\nu(E)=\int_E X \; dP$, $E\in {\cal F}$ with respect to $P_{|{\cal F}}$.</p> <p>Conditional expectation, however, becomes less intuitive when ${\cal F}$ is no longer generated by a countable partition, although sometimes you may find a tweak to get around. Example: Let $P$ be a probability on ${\Bbb R}$ having density wrt Lebesgue $f\in L^1({\Bbb R})$, $dP(x) = f(x) dx$. </p> <p>We will consider a sub-$\sigma$-algebra generated by symmetric subsets of the Borel $\sigma$-algebra. Thus $A\in {\cal F}$ iff $x\in A \Leftrightarrow -x\in A$. </p> <p>A measurable function wrt ${\cal F}$ is now any function which is symmetric, i.e. $Y(x)=Y(-x)$ for all $x$.
This time an elementary event consists of a symmetric couple $\{x,-x\}$ which has zero probability. And you can not throw all these away when calculating conditional expectation. So going back to the definition, given an $X\in L^1(dP)$ you need to find a symmetric integrable function $Y$ so that for any measurable $I\subset (0,+\infty)$ you have: $$ \int_{I\cup (-I)} Y\; dP = \int_{I\cup (-I)} X \; dP $$ Using that $Y$ is symmetric and a change of variables this becomes: $$ \int_I Y(x) (f(x)+f(-x)) \; dx = \int_I (X(x) f(x) + X(-x) f(-x)) \; dx $$ On the set $\Lambda = \{ x\in {\Bbb R} : f(x)+f(-x)&gt;0 \}$ which has full probability we may then solve this by defining: $$ Y(x) = \frac{X(x) f(x) + X(-x) f(-x) }{f(x) + f(-x) }, \; x\in \Lambda. $$ On the complement $Y$ is not defined but the complement has zero probability. $Y$ is then symmetric and has the same expectation as $X$ on symmetric events. Again $Y(x)$ is the Radon-Nikodym derivative of $\nu(E) = \int_E X \; dP$ wrt $P(E)$ with $E\in {\cal F}$.</p> <p>Our luck here is that there is a simple symmetry, i.e. $x\mapsto -x$, describing the events in ${\cal F}$ and that the probability measure transforms nicely under this symmetry. In more general situations you may not be able to describe $E(X|{\cal F})$ explicitly in terms of values of $X$ and you are stuck with just the defining properties for conditional expectation [which, on the other hand, may suffice for whatever computation you need to carry out].</p>
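The closed form for the symmetric case can be checked numerically. The density and test function below are an editor's choices ($f(x)=(1+x)/2$ on $[-1,1]$, $X(x)=x+x^2$, both assumptions for illustration); the defining property $\int_{I\cup(-I)} Y\,dP = \int_{I\cup(-I)} X\,dP$ is verified by midpoint-rule integration:

```python
def f(x):            # density on [-1, 1] (not symmetric)
    return (1 + x) / 2

def X(x):            # an integrable, non-symmetric test function
    return x + x * x

def Y(x):
    """E(X | symmetric events), via the closed form derived above."""
    return (X(x) * f(x) + X(-x) * f(-x)) / (f(x) + f(-x))

def integral(g, a, b, n=20_000):
    """Midpoint-rule approximation of the integral of g over [a, b]."""
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

# For sample symmetric events I ∪ (−I) with I = [a, b] in (0, 1):
for a, b in [(0.2, 0.7), (0.0, 1.0), (0.5, 0.9)]:
    lhs = integral(lambda x: Y(x) * f(x), a, b) + integral(lambda x: Y(x) * f(x), -b, -a)
    rhs = integral(lambda x: X(x) * f(x), a, b) + integral(lambda x: X(x) * f(x), -b, -a)
    assert abs(lhs - rhs) < 1e-9
print("defining property holds on sample symmetric events")
```

Note that $Y$ is symmetric by construction, and with this $f$ the denominator $f(x)+f(-x)$ equals $1$ on all of $[-1,1]$, so no division-by-zero issues arise on the sampled points.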
1,036,636
<p>The following statement makes sense intuitively, but is there a way to prove it mathematically? (This is something we make use of in applied optimization in calculus.)</p> <blockquote> <p>If $f$ is continuous on an interval $I$ and $x_0$ is the <strong>only</strong> relative (local) extremum, then $x_0$ is actually an <strong>absolute (global)</strong> extremum on $I$.</p> </blockquote>
Jeffrey Rolland
329,781
<p>Suppose $x_0$ is a local maximum. Assume $x_0$ is not an absolute maximum. Then $\exists x_1 \in I$ with $f(x_1) &gt; f(x_0)$. Let $I'$ be the closed interval from $x_0$ to $x_1$. Let $x'$ be the global minimum of $f$ on $I'$.</p> <p>Then we can choose $x' \ne x_0$, as $x_0$ is a local maximum for $f$, so there is a small open interval $V$ about $x_0$ in $I$ with $f(v) \le f(x_0)$ for $v \in V$.</p> <p>Also, clearly $x' \ne x_1$.</p> <p>So, $x'$ is an interior global minimum point for $f$ on $I'$, so $x'$ is a local minimum for $f$ on $I'$ and hence for $f$ on $I$, contradicting the status of $x_0$ as the unique local extremum for $f$ on $I$.</p> <p>The case for $x_0$ a local minimum is similar.</p>
2,419,753
<p>How would you approach solving questions like:</p> <p>$\sin(z)=\sin(2)$, where $z$ is an arbitrary complex number.</p>
user577215664
475,762
<p>When exponents are at the same level you only deal with addition and multiplication of the exponents. So, starting from $x^{10n} \left( x^5 \left(x^{-5}\right)^n \right) = x^{-10}$, multiply $n$ and $-5$:</p> <p>$x^{10n} x^5 x^{-5n} = x^{-10}$</p> <p>Adding $5$ and $-5n$:</p> <p>$x^{10n} x^{5-5n} = x^{-10}$</p> <p>$x^{10n+5-5n} = x^{-10}$</p> <p>$5n+5 = -10$</p> <p>$5n = -15$</p> <p>$n = -3$</p>
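A quick sanity check of the exponent arithmetic (an editor's addition; the sample base is arbitrary):

```python
# The exponent equation 10n + 5 - 5n = -10 gives 5n = -15, i.e. n = -3.
n = -3
assert 10 * n + 5 - 5 * n == -10

# Spot-check the original identity at n = -3 for a sample base x.
x = 1.3
assert abs(x ** (10 * n) * (x ** 5 * (x ** -5) ** n) - x ** -10) < 1e-12
print("n = -3 works")
```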
4,050,336
<p>I am familiar with the Negative Binomial distribution <span class="math-container">$NB(p, k)$</span>, which gives the number of failures before <span class="math-container">$k$</span> successes occur in a Bernoulli process with parameter <span class="math-container">$p$</span>. I am wondering, however, if there the distribution of the number of failures before getting <span class="math-container">$k$</span> <em>distinct</em> successes is a well-known distribution.</p> <p>More precisely, suppose we have some set of <span class="math-container">$N$</span> elements; without loss of generality, we may assume this set is <span class="math-container">$[N] = \{1, \cdots, N\}$</span>. We have a subset <span class="math-container">$S \subseteq [N]$</span> which contains the successes, while every element in <span class="math-container">$[N] \setminus S$</span> is a failure. Now in our process, we select a random element uniformly at random from <span class="math-container">$N$</span>, and keep on doing this until we have attained some <span class="math-container">$k \leq |S|$</span> successes. Let <span class="math-container">$X$</span> be the total number of trials performed before we stop the process. <em>What is the distribution of <span class="math-container">$X$</span>, in terms of <span class="math-container">$k$</span>, N, and p = <span class="math-container">$|S| / N$</span>?</em> If we can't concisely describe this distribution, can we at least come up with some (relatively tight) tail bounds? And how valid is the approximation <span class="math-container">$X \approx NB(p, k)$</span>?</p> <p>I know this problem can simplify to the coupon collector's problem in the case that <span class="math-container">$S = [N]$</span> and <span class="math-container">$k = N$</span>, but I'm more interested in the case where <span class="math-container">$0 &lt; p &lt; 1$</span> and <span class="math-container">$k \ll |S| = pN$</span>.</p>
Wuestenfux
417,848
<p>Hint: <span class="math-container">$x^2+1$</span> has a unique factorization in an appropriate extension field, say <span class="math-container">$\Bbb Q(i)$</span>. Then <span class="math-container">$x^2+1 = (x+i)(x-i)$</span>.</p> <p>You have to show that <span class="math-container">$i\not\in \Bbb Q(\sqrt 2 i)$</span>.</p>
4,050,307
<p>You have a black box function to which you can give real number inputs and from which you can receive real number outputs. <strong>How would you test whether it is likely to be a polynomial?</strong></p> <p>One expensive idea is to use finite differences:</p> <ol> <li>Choose a maximum degree <em>n</em> of the &quot;polynomial&quot; you are testing.</li> <li>Choose a consecutive sequence with random step size, and evaluate the function there to get an output sequence. E.g., <span class="math-container">$$[ 2, 2.3, 2.6, 2.9,\dots] \to [ 4.81, 5.02, 5.05, 4.90,\dots]$$</span></li> <li>Using the output sequence as S[0], define S[n] so that its k^th entry S[n][k] = S[n-1][k+1]-S[n-1][k]. E.g. S[1] = [5.02-4.81,5.05-5.02,4.90-5.05,...] = [0.21,0.03,-0.15,...]</li> <li>If the function is a polynomial (of degree at most <em>n</em>), then the sequence S[n+1] should be all zeros.</li> </ol> <p>Some issues about programming this method:</p> <ul> <li>Would be <a href="https://en.wikipedia.org/wiki/Finite_difference#Higher-order_differences" rel="nofollow noreferrer">expensive for large <em>n</em></a></li> <li>If S[0] has <em>large</em> values, computer arithmetic will produce bad results for S[1] and beyond.</li> </ul>
preferred_anon
27,150
<p>I don't know much about &quot;likely&quot;. But <span class="math-container">$N+1$</span> distinct points uniquely determine a polynomial of degree <span class="math-container">$\le N$</span> by <a href="https://en.wikipedia.org/wiki/Polynomial_interpolation#Interpolation_theorem" rel="nofollow noreferrer">Lagrange interpolation</a>. If the obtained points are <span class="math-container">$(x_i, y_i)$</span>, for <span class="math-container">$i=0, \ldots, N$</span>, then first define:</p> <p><span class="math-container">$$L_{N, j}(x) = \prod_{\substack{0 \le k \le N \\ k \ne j}}\frac{x-x_k}{x_j-x_k}$$</span> (which is clearly a polynomial of degree <span class="math-container">$N$</span>). The unique polynomial passing through those points is <span class="math-container">$$p(x) = \sum_{j=0}^{N}y_j L_{N, j}(x),$$</span> which is a polynomial of degree at most <span class="math-container">$N$</span>. If you compute one <em>more</em> point from your black box, then you can compare it with this polynomial to see whether it fits.</p> <p>By the same argument, there is no way to deterministically rule out polynomials given a finite set of input values, since the polynomial described above can always be constructed.</p> <p>If you choose a constant step size (like <span class="math-container">$1$</span>, so that <span class="math-container">$x_i = i$</span>), your <span class="math-container">$p(x_{N+1})$</span> may be easy to calculate. In that case, you would need to only compute</p> <p><span class="math-container">$$p(x_{N+1}) = \sum_{j=0}^{N}y_{j}\prod_{k \ne j}\frac{x_{N+1} - x_k}{x_j - x_k} = \sum_{j=0}^{N}y_{j}\prod_{k \ne j}\frac{N+1-k}{j-k}$$</span></p> <p>which doesn't sound over-complex.</p>
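Here is one way this test could be sketched in Python (an editor's implementation, not from the answer; it uses exact rational arithmetic via `fractions.Fraction` and step size 1 so that $x_i = i$):

```python
from fractions import Fraction

def lagrange_eval(pts, x):
    """Evaluate the unique degree <= N polynomial through pts at x (exactly)."""
    total = Fraction(0)
    for j, (xj, yj) in enumerate(pts):
        term = Fraction(yj)
        for k, (xk, _) in enumerate(pts):
            if k != j:
                term *= Fraction(x - xk, xj - xk)
        total += term
    return total

def looks_polynomial(black_box, N):
    """Interpolate N+1 points, then test whether one extra point fits."""
    pts = [(i, black_box(i)) for i in range(N + 1)]
    return lagrange_eval(pts, N + 1) == black_box(N + 1)

assert looks_polynomial(lambda x: 3 * x**3 - x + 7, 5)   # a real polynomial
assert not looks_polynomial(lambda x: 2 ** x, 5)         # 2^x fails the test
print("polynomial test behaves as expected")
```

As the answer notes, passing this test does not prove the black box is a polynomial; it only rules out polynomials of degree at most $N$ when the extra point disagrees.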
2,359,455
<blockquote> <p>Let <span class="math-container">$M$</span> be the largest subset of <span class="math-container">$\{1,\dots,n\}$</span> such that for each <span class="math-container">$x\in M$</span>, <span class="math-container">$x$</span> divides at most one other element in <span class="math-container">$M$</span>. Prove that<span class="math-container">$$ |M|\leq \left\lceil \frac{3n}4\right\rceil. $$</span></p> </blockquote> <p><strong>My attempt:</strong></p> <p>Let <span class="math-container">$M_1 = \{x\in M;\;\exists !y\in M:\; x|y \} $</span> and <span class="math-container">$M_0 = M\setminus M_1$</span>. Obviously both <span class="math-container">$M_0$</span> and <span class="math-container">$M_1$</span> are an antichains in <span class="math-container">$M$</span> and they make a partition of <span class="math-container">$M$</span>. And this is all I can find. I was thinking about Dilworth theorem but... Also, I was thinking, what if I add a new element <span class="math-container">$k\in M^C$</span> to <span class="math-container">$M$</span>. Then <span class="math-container">$M\cup \{k\}$</span> is no more ''good'' set...</p>
Sarvesh Ravichandran Iyer
316,409
<p>You are right. We first divide $M$ into two subsets, $M_1$ and $M_2$, where the former contains all $x$ such that $x$ divides exactly one other element, and $M_2$ contains all the others.</p> <p>It is easy to see that $M_1$ is an antichain with respect to two elements being linked if one divides the other. This is because if $x$ and $y$ in $M_1$ are linked, then one of them divides two elements in the group, a contradiction. Similarly, it's easy to see that $M_2$ is also an antichain.</p> <p>It's clear that the maximum size of an antichain is at most $\lceil n/2\rceil$, since if we take the greatest odd divisor of each element in the antichain, these will be distinct, but there are only $\lceil n/2\rceil$ of these. An example of a maximal antichain is $\lfloor n/2\rfloor + 1 , \dots, n$.</p> <p>Suppose that $|M| \geq \frac {3n}4 + 1$. Then, take the greatest odd divisor of each element of $M$. No three elements can share this greatest odd divisor (can you see why?), and there are only $\lceil n/2\rceil$ of them, so by the pigeonhole principle, there must be at least $\frac n4$ pairs of numbers in $M$ having the same greatest odd divisor. Out of this list of greatest odd divisors, pick the biggest, say $d$. Since there are at least $\frac n4$ distinct odd divisors in the above list, it follows that $d &gt; \frac n2$; but then, of the pair of numbers with $d$ as the greatest odd divisor, one of them must be greater than or equal to $2d &gt; n$, giving the contradiction. Hence, the claim follows.</p>
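The bound itself is easy to confirm by brute force for small $n$ (an editor's addition; the search is exponential, so only tiny $n$ are feasible):

```python
from itertools import combinations
from math import ceil

def valid(M):
    # Each x in M divides at most one *other* element of M.
    return all(sum(1 for y in M if y != x and y % x == 0) <= 1 for x in M)

def largest(n):
    """Size of the largest valid subset of {1, ..., n}, by exhaustive search."""
    for size in range(n, 0, -1):
        if any(valid(c) for c in combinations(range(1, n + 1), size)):
            return size
    return 0

for n in range(1, 13):
    assert largest(n) <= ceil(3 * n / 4)
print("bound holds for n = 1..12")
```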
1,700,608
<p>In <a href="https://math.stackexchange.com/a/1700505/132192">this</a> answer the value of $1 - \cos(x)$ has to be evaluated in order to find its upper limit, if it exists.</p> <p>In particular, $x = 2 \pi / n$. The answer is related to the length of a side of a regular $n$-gon inscribed into a unit-radius circumference; because the perimeter of the $n$-gon is always less than $2 \pi$, the single side must always be less than $2 \pi / n$.</p> <p>The inequality</p> <p>$$1 - \cos (x) \leq \displaystyle \frac{x^2}{2}$$ (1)</p> <p>is used and the proof is completed with </p> <p>$$2(1 - \cos(x)) \leq (2 \pi / n)^2$$</p> <p>$$\sqrt{2(1 - \cos(x))} \leq 2 \pi / n$$</p> <p>But it is well known that the cosine is a function $f(x) \in [-1;1]$, so $1 - \cos (x) \in [0,2]$. By using this information, we would obtain</p> <p>$$1 - \cos (x) \leq 2$$ (2)</p> <p>The proof would provide</p> <p>$$2(1 - \cos(x)) \leq 4$$</p> <p>$$\sqrt{2(1 - \cos(x))} \leq 2$$</p> <p>which is a completely different result.</p> <ul> <li>Why in that case it is preferable to use (1) instead of (2)?</li> <li>How to choose when it is convenient to use (1) and when to use (2) in a proof?</li> </ul>
Mark Viola
218,419
<p>The bound <span class="math-container">$-1\le \cos(x)\le 1$</span> is correct, but rather a crude one.</p> <p>We can easily obtain tighter bounds using the inequality from elementary geometry</p> <p><span class="math-container">$$|\sin(\theta)|\le |\theta| \tag 1$$</span></p> <p>for all <span class="math-container">$\theta$</span>, and the half-angle formula for the sine function</p> <p><span class="math-container">$$1-\cos(\theta)=2\sin^2(\theta/2) \tag 2$$</span></p> <p>Squaring both sides of <span class="math-container">$(1)$</span>, substituting <span class="math-container">$\theta =x/2$</span>, and using <span class="math-container">$(2)$</span> reveals</p> <p><span class="math-container">$$\begin{align} \sin^2(x/2)&amp;=\frac{1-\cos(x)}{2}\\\\ &amp;\le x^2/4 \tag 3 \end{align}$$</span></p> <p>whereupon we find for all <span class="math-container">$x$</span></p> <p><span class="math-container">$$\bbox[5px,border:2px solid #C0A000]{1-\cos(x)\le \frac12 x^2} \tag 4$$</span></p> <p>For values of <span class="math-container">$x&lt;2$</span>, <span class="math-container">$\frac12 x^2&lt;2$</span> and <span class="math-container">$(4)$</span> provides a tighter bound than <span class="math-container">$1-\cos(x)\le 2$</span>. For <span class="math-container">$x\ge 2$</span>, it is still correct that <span class="math-container">$1-\cos(x)\le \frac12 x^2$</span>, but the inequality <span class="math-container">$1-\cos(x)\le 2$</span> is obviously tighter. Therefore, we can write</p> <p><span class="math-container">$$1-\cos(x)\le \begin{cases}\frac12 x^2&amp;,x&lt;2\\\\2&amp;,x\ge 2\tag 5\end{cases}$$</span></p> <p>So, <span class="math-container">$(5)$</span> provides a guideline for the appropriate use of the bounds for <span class="math-container">$1-\cos(x)$</span>.</p>
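For a quick numerical check of $(4)$ and the piecewise bound $(5)$ (my own sketch, not part of the answer):

```python
import math

for i in range(1, 1001):
    x = i / 100                                # sample x in (0, 10]
    lhs = 1 - math.cos(x)
    assert lhs <= x * x / 2 + 1e-12            # bound (4)
    assert lhs <= min(x * x / 2, 2) + 1e-12    # piecewise bound (5)
```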
1,700,608
<p>In <a href="https://math.stackexchange.com/a/1700505/132192">this</a> answer the value of $1 - \cos(x)$ has to be evaluated in order to find its upper limit, if it exists.</p> <p>In particular, $x = 2 \pi / n$. The answer is related to the length of a side of a regular $n$-gon inscribed into a unit-radius circumference; because the perimeter of the $n$-gon is always less than $2 \pi$, the single side must always be less than $2 \pi / n$.</p> <p>The inequality</p> <p>$$1 - \cos (x) \leq \displaystyle \frac{x^2}{2}$$ (1)</p> <p>is used and the proof is completed with </p> <p>$$2(1 - \cos(x)) \leq (2 \pi / n)^2$$</p> <p>$$\sqrt{2(1 - \cos(x))} \leq 2 \pi / n$$</p> <p>But it is well known that the cosine is a function $f(x) \in [-1;1]$, so $1 - \cos (x) \in [0,2]$. By using this information, we would obtain</p> <p>$$1 - \cos (x) \leq 2$$ (2)</p> <p>The proof would provide</p> <p>$$2(1 - \cos(x)) \leq 4$$</p> <p>$$\sqrt{2(1 - \cos(x))} \leq 2$$</p> <p>which is a completely different result.</p> <ul> <li>Why in that case it is preferable to use (1) instead of (2)?</li> <li>How to choose when it is convenient to use (1) and when to use (2) in a proof?</li> </ul>
ajk68
558,962
<p>ncmathsadist's technique can be extended for a tighter bound as described below. It also extends the region over which the bound is better than <span class="math-container">$1 - \cos (x) \leq 2$</span> per Mark Viola's answer.</p> <p>Cutting off the Taylor series for <span class="math-container">$\sin(t)$</span> before a negative term (the terms alternate sign), one gets a polynomial bound on <span class="math-container">$\sin(t)$</span> that can be integrated.</p> <p>For example <span class="math-container">$\sin (t) \leq t-t^3/3!+t^5/5!$</span> for <span class="math-container">$t\ge 0$</span>, therefore</p> <p><span class="math-container">$$1-\cos (x)=\int_0^x \sin (t)\ dt \leq \int_0^x t-\frac{t^3}{3!}+\frac{t^5}{5!}\ dt = \frac{x^2}{2!} - \frac{x^4}{4!} + \frac{x^6}{6!} $$</span></p> <p>The cut-off approximation is an upper bound: the cut-off residual is always negative, as can be shown by summing the remaining terms pairwise. Since the series for <span class="math-container">$\sin (t)$</span> is absolutely convergent, there is no issue with the ordering of terms.</p> <p>A bound which is better than <span class="math-container">$1 - \cos (x) \leq 2$</span> over almost all of <span class="math-container">$[0,2\,\pi]$</span> is: <span class="math-container">$$1-\cos (x) \leq \frac{x^2}{2!} - \frac{x^4}{4!} + \frac{x^6}{6!} - \frac{x^{8}}{8!} + \frac{x^{10}}{10!} - \frac{x^{12}}{12!} + \frac{x^{14}}{14!}$$</span> (Strictly speaking, the right-hand side creeps above <span class="math-container">$2$</span> by about <span class="math-container">$4\times10^{-6}$</span> near <span class="math-container">$x=\pi$</span>, so it is only bounded by <span class="math-container">$2$</span> up to that tiny error.)</p> <p>You can then see that all this really says is: <span class="math-container">$$\cos (x) \geq 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!} + \frac{x^{8}}{8!} - \frac{x^{10}}{10!} + \frac{x^{12}}{12!} - \frac{x^{14}}{14!}$$</span> ...or <span class="math-container">$\cos (x)$</span> is greater than its Taylor expansion with the cut-off starting at a positive term. Really not that impressive a result. :-(</p>
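A numerical check of the upper-bound property of these truncations (my own sketch; it tests only that each partial sum ending on a positive term bounds $1-\cos x$ from above on $[0,2\pi]$):

```python
import math

def partial(x, terms):
    # sum of the first `terms` terms of the series for 1 - cos x,
    # i.e. x^2/2! - x^4/4! + ...; an odd number of terms ends on a positive term
    s, t = 0.0, x * x / 2.0
    for k in range(1, terms + 1):
        s += t
        t *= -x * x / ((2 * k + 1) * (2 * k + 2))
    return s

for i in range(0, 629):                # x in [0, 2*pi]
    x = i / 100
    for terms in (1, 3, 5, 7):         # odd term counts give upper bounds
        assert 1 - math.cos(x) <= partial(x, terms) + 1e-12
```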
2,646,251
<p>Given the linear system in $\mathbb{Z_3}$:</p> <p>$$ \left\{ \begin{array}{c} a+b+c+d=1 \\ b+c+e=2 \\ a+2e=0 \end{array} \right. $$</p> <p>I used the row reduction with matrices and I got:</p> <p>$$ \left\{ \begin{array}{c} a+b+c+d=1 \\ b+c+e=2 \\ d=2 \end{array} \right. $$</p> <p>But now I don't know how to find the solutions.</p>
Martin Argerami
22,857
<p>What you probably did is add the last two equations to get $$ 2=a+b+c, $$ since $3e=0$. Now the first equation gives you $d=1-2=2$. Going back to the system (and again using $2=-1$), $$ \left\{ \begin{array}{c} a+b+c=2 \\ b+c+e=2 \\ a-e=0 \end{array} \right. $$ and we already know $d=2$, $a=e$. Now the first two equations both reduce to $a+b+c=2$, and so we are free to prescribe two of $a,b,c$. So, say, if you prescribe $b=t$, $c=s$, you have $$ a=2+2t+2s,\ \ d=2,\ \ e=a. $$</p>
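Since $\mathbb{Z}_3^5$ is tiny, one can brute-force all solutions and compare with this parametrization (my own sketch):

```python
from itertools import product

# brute force all (a, b, c, d, e) in Z_3^5 satisfying the original system
sols = {(a, b, c, d, e)
        for a, b, c, d, e in product(range(3), repeat=5)
        if (a + b + c + d) % 3 == 1
        and (b + c + e) % 3 == 2
        and (a + 2 * e) % 3 == 0}

# the parametrization above: b = t, c = s, a = e = 2 + 2t + 2s, d = 2
param = {((2 + 2 * t + 2 * s) % 3, t, s, 2, (2 + 2 * t + 2 * s) % 3)
         for t, s in product(range(3), repeat=2)}

assert sols == param and len(sols) == 9
```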
1,265,026
<p>Suppose I have some vector field \begin{align} \vec{F}\left(x\left(t\right),y\left(t\right),z\left(t\right)\right)&amp;=G\textbf{i}+H\textbf{j}+T\textbf{k}.\tag{1} \end{align} Would it be correct for me to say \begin{align} \mathbb{R}^3\overset{\vec{F}}{\longrightarrow}\mathbb{R}^3\;?\tag{2} \end{align}</p>
Kyle Miller
172,988
<p>It looks like your vector field is also parameterized by time, so writing $\vec{F}_t:\mathbb{R}^3\to\mathbb{R}^3$ might be better.</p> <p>For more notation: each of the component for the vector field are functions $G_t,H_t,T_t:\mathbb{R}^3\to\mathbb{R}$.</p>
450,785
<p>I want to obtain the formula for binomial coefficients in the following way: elementary ring theory shows that $(X+1)^n\in\mathbb Z[X]$ is a degree $n$ polynomial, for all $n\geq0$, so we can write</p> <p>$$(X+1)^n=\sum_{k=0}^na_{n,k}X^k\,,\ \style{font-family:inherit;}{\text{with}}\ \ a_{n,k}\in\mathbb Z\,.$$</p> <p>Using the fact that $(X+1)^n=(X+1)^{n-1}(X+1)$ for $n\geq1$ and the definition of product of polynomials, we obtain the following recurrence relation for all $n\geq1$:</p> <p>$$a_{n,0}=a_{n,n}=1;\ a_{n,k}=a_{n-1,k}+a_{n-1,k-1}\,,\ \style{font-family:inherit;}{\text{for}}\ k=1,\dots,n-1\,.$$</p> <p>I want to know if there is a way to manipulate this recurrence in order to obtain directly the values of the coefficients $a_{n,k}$, namely $a_{n,k}=\binom nk=\frac{n!}{k!(n-k)!}$. </p> <p>Note that the usual approach via generating functions definitely will not work, at least <strong>in the spirit of my question</strong>, because this method only works when we do know in advance the coefficients of the generating function (either by the "number of $k$-subsets" argument, or Maclaurin series for $(X+1)^n$, or anything else) and this is <em>precisely</em> what I intend to avoid.</p> <p>This question is closely related to a recent <a href="https://math.stackexchange.com/questions/449834/explicit-formula-for-bernoulli-numbers-by-using-only-the-recurrence-relation">question of mine</a>. Actually the same question, with Bernoulli numbers instead of binomial coefficients.</p> <p><strong>EDIT</strong></p> <p>I do not consider as a valid manipulation the following "magical" argument: 'the sequence $(b_{n,k})$ given by $b_{n,k}=\frac{n!}{k!(n-k)!}$ obeys the same recurrence and initial conditions as $(a_{n,k})$, so $a_{n,k}=b_{n,k}$ for all $n,k$. How did you obtain the formula for the $b_{n,k}$ at the first place? Okay, you can go through the "counting subsets" argument, but this is precisely what I don't want to do. 
The same applies to my related question about Bernoulli numbers.</p>
user136023
136,023
<p>I think Mathematica's command for solving recurrence relations, <code>RSolve</code>, can be used:</p> <pre><code>RSolve[{a[n, k] == a[n - 1, k] + a[n - 1, k - 1], a[n, 0] == 1, a[n, n] == 1}, a[n, k], {n, k}]</code></pre>
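This does not derive the closed form by hand, which is what the question asks for, but as a sanity check the recurrence and boundary values alone already pin down the numbers $\binom nk$ (my own sketch):

```python
from math import comb  # comb(n, k) = n! / (k! (n-k)!)

# build a_{n,k} purely from the recurrence a_{n,k} = a_{n-1,k} + a_{n-1,k-1}
N = 12
a = [[0] * (N + 1) for _ in range(N + 1)]
for n in range(N + 1):
    a[n][0] = a[n][n] = 1
    for k in range(1, n):
        a[n][k] = a[n - 1][k] + a[n - 1][k - 1]

assert all(a[n][k] == comb(n, k) for n in range(N + 1) for k in range(n + 1))
```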
1,135,045
<p>I need to compute \begin{align} S = \sum_{k=-\infty}^j \sum_{m=-1}^2 w_{k,m} f_{k+m-1} \end{align} but I only want to access the elements of $f$ once, so I would prefer something like \begin{align} \sum_k f_k \sum_m ... \end{align} Here is what I did: substitute $l=m-1+k$ to get \begin{align} S &amp;= \sum_{k=-\infty}^j \sum_{m=-1}^2 w_{k,m} f_{k+m-1} \\ &amp;=\sum_{k=-\infty}^j \sum_{l+1-k=-1}^2 w_{k,l+1-k} f_{l} \\ &amp;=\sum_{k=-\infty}^j \sum_{l=k-2}^{k+1} w_{k,l+1-k} f_{l} \end{align} But when I try to get $f_l$ out of the inner sum I'm messing something up. Can anyone produce the correct sum for looping over $f$ only once? Thanks in advance.</p> <p>Since the highest index of $f$ that is accessed is $j+1$, I assume that the outer sum should be \begin{align} \sum_{k=-\infty}^{j+1} f_k \end{align}</p>
Mark Fischler
150,362
<p>Step 1: break up the inner sum into 4 parts corresponding to four values of $m$. If the range were larger you could do this for arbitrary upper limit, but I will illustrate this for the specific case of upper limit $u=2$: $$ S(j,2) = \sum_{k=-\infty}^j \left[ w_{k,-1}f_{k-2}+ w_{k,0}f_{k-1} + w_{k,1}f_k+w_{k,2}f_{k+1} \right] $$ The next (and key) step is that we can shift the summation index, but we have to do a different shift for each of the (4 in our case) values of $m$. This leads to different upper limits, and that is where I think you went wrong. I'm going to let $n=k-2$ in the first sum below, $n=k-1$ in the second sum, and so forth to get $f_n$ in each sum. But because we want to recombine, I will stop all the sums at $n=j-2$, and explicitly write out the left over terms. $$ S(j,2) = \sum_{n=-\infty}^{j-2} f_n w_{n+2,-1} + \\ \sum_{n=-\infty}^{j-2}f_n w_{n+1,0} + f_{j-1}w_{(j-1)+1,0} +\\ \sum_{n=-\infty}^{j-2}f_n w_{n,1} + f_{j-1}w_{(j-1),1} + f_{j}w_{j,1} + \\ \sum_{n=-\infty}^{j-2}f_n w_{n-1,2} + f_{j-1}w_{(j-1)-1,2} + f_j w_{j-1,2} +f_{j+1}w_{(j+1)-1,2} $$ And now we can group and simplify: $$ S = \sum_{n=-\infty}^{j-2} f_n \left[ \sum_{p=-1}^2 w_{n+p,1-p} \right] + f_{j-1}w_{j,0} + f_{j-1}w_{j-1,1} + f_{j}w_{j,1} + f_{j-1}w_{j-2,2} + f_j w_{j-1,2}+f_{j+1}w_{j,2} \\ S = \sum_{n=-\infty}^{j-2} f_n \left[ \sum_{p=-1}^2 w_{n+p,1-p} \right] + f_{j-1}\left( w_{j,0} + w_{j-1,1} + w_{j-2,2} \right) + f_{j} \left( w_{j,1} + w_{j-1,2} \right) +f_{j+1}w_{j,2} $$ </p>
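One can verify the regrouped formula numerically by truncating the $-\infty$ limit (my own sketch; `K0` stands in for $-\infty$, and $w$ is taken to vanish outside the truncated range):

```python
import random

random.seed(0)
j, K0 = 5, -8      # K0 truncates the -infinity lower limit for the test
w = {(k, m): random.random() for k in range(K0, j + 1) for m in (-1, 0, 1, 2)}
f = {k: random.random() for k in range(K0 - 2, j + 2)}

W = lambda k, m: w.get((k, m), 0.0)   # w vanishes outside the truncated range
F = lambda k: f.get(k, 0.0)

# original double sum
S1 = sum(W(k, m) * F(k + m - 1)
         for k in range(K0, j + 1) for m in (-1, 0, 1, 2))

# regrouped form: one pass over f, plus the three leftover boundary terms
S2 = sum(F(n) * sum(W(n + p, 1 - p) for p in (-1, 0, 1, 2))
         for n in range(K0 - 2, j - 1))                    # n = K0-2, ..., j-2
S2 += F(j - 1) * (W(j, 0) + W(j - 1, 1) + W(j - 2, 2))
S2 += F(j) * (W(j, 1) + W(j - 1, 2))
S2 += F(j + 1) * W(j, 2)

assert abs(S1 - S2) < 1e-12
```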
1,271,942
<p>I am a little bit confused with the definition of finitely presented modules. In Lang's <em>Algebra</em> he defines a module <span class="math-container">$M$</span> to be finitely presented if and only if there is a exact sequence <span class="math-container">$F'\to F\to M \to 0$</span> such that both <span class="math-container">$F', F$</span> are free. However the standard definition I have seen elsewhere only demands <span class="math-container">$F'$</span> be finitely generated. Are these two definitions equivalent?</p> <p>Looking at the situation of a non-principal ideal of a ring, say <span class="math-container">$(x, y)$</span> of <span class="math-container">$\mathbb{R}[x, y]$</span>, it appears that this is finitely presented, by the usual definition, but I don't see any way of making it finitely presented by Lang's definition.</p>
Bernard
202,857
<p>In Lang's definition, both $F$ and $F'$ must be finitely generated (and free). This definition is equivalent to: ‘There exists a short exact sequence: $$0\to K\to F\to M\to 0$$ such that $F$ is a finitely generated free module, and the module of relations $K$ is finitely generated.’ It is equivalent because there exists a surjective homomorphism from a <em>finitely generated</em> free module onto $K$.</p>
2,634,791
<blockquote> <p>How can I show that the map $f: GL_n(\mathbb R)\to GL_n(\mathbb R)$ defined by $f(A)=A^{-1}$ is continuous?</p> </blockquote> <p>The space $GL_n(\mathbb R)$ is given the operator norm and so I want to show for all $\epsilon$ there exists $\delta$ such that $\|A-B\|&lt;\delta \implies \|A^{-1}-B^{-1}\|&lt;\epsilon$.</p>
Giuseppe Negro
8,157
<p>C.Falcon gave the simplest and most elegant answer with the algebraic formula involving the determinant. That formula actually implies that $A^{-1}$ is an <strong>analytic</strong> function of $A$. We can prove this fact with a direct, determinant-free proof that extends to the infinite dimensional case (see also the answer of Martín-Blas). </p> <p>The key step is the geometric series formula $$ f(I-Y)=(I-Y)^{-1}=\sum_{n=0}^\infty Y^n, $$ which holds for all matrices $Y$ such that $\|Y\|&lt;1$, where $\|\cdot\|$ is a matrix norm that we fix once and for all. This shows that $f$ is analytic in a neighborhood of $I$. If $M$ is an invertible matrix, then since $M-X=M(I-M^{-1}X)$, $$ f(M-X)=(I-M^{-1}X)^{-1}M^{-1}=\left[\sum_{n=0}^\infty (M^{-1}X)^n\right]M^{-1}, $$ provided that $$\tag{*} \|M^{-1}X\|&lt;1.$$ Since $\|\cdot\|$ is a matrix norm it satisfies the inequality $\|AB\|\le \|A\|\|B\|$, and in particular, the property (*) is satisfied if $$ \|X\|&lt;\frac{1}{\|M^{-1}\|}.$$ We conclude that $f$ is analytic in a neighborhood of $M$, and so it is analytic on the whole set of invertible matrices. </p> <p>In particular, $f$ is continuous.</p>
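The geometric (Neumann) series step can be checked numerically; here is a small self-contained Python sketch of mine with a hand-rolled $2\times2$ inverse:

```python
# 2x2 matrices as nested tuples; no external dependencies
def mul(X, Y):
    return tuple(tuple(sum(X[i][k] * Y[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def add(X, Y):
    return tuple(tuple(X[i][j] + Y[i][j] for j in range(2)) for i in range(2))

def inv(X):
    (a, b), (c, d) = X
    det = a * d - b * c
    return ((d / det, -b / det), (-c / det, a / det))

I = ((1.0, 0.0), (0.0, 1.0))
Y = ((0.1, 0.2), (-0.05, 0.15))     # small entries, so the series converges

# partial sums of the geometric series I + Y + Y^2 + ...
S, P = I, I
for _ in range(100):
    P = mul(P, Y)
    S = add(S, P)

I_minus_Y = ((1.0 - 0.1, -0.2), (0.05, 1.0 - 0.15))
target = inv(I_minus_Y)             # (I - Y)^{-1} computed directly
assert all(abs(S[i][j] - target[i][j]) < 1e-12
           for i in range(2) for j in range(2))
```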
2,634,791
<blockquote> <p>How can I show that the map $f: GL_n(\mathbb R)\to GL_n(\mathbb R)$ defined by $f(A)=A^{-1}$ is continuous?</p> </blockquote> <p>The space $GL_n(\mathbb R)$ is given the operator norm and so I want to show for all $\epsilon$ there exists $\delta$ such that $\|A-B\|&lt;\delta \implies \|A^{-1}-B^{-1}\|&lt;\epsilon$.</p>
Community
-1
<p>The crucial inequality is this one:</p> <p>$$\|A^{-1}-B^{-1}\|=\|-A^{-1}(A-B)B^{-1}\|\le\|A^{-1}\|\|A-B\|\|B^{-1}\|\le \|A^{-1}\|\|A-B\|(\|A^{-1}\|+\|A^{-1}-B^{-1}\|)$$</p> <p>The rest is just finding how small to make $\|A-B\|$ to make $\|A^{-1}-B^{-1}\|\lt\varepsilon$. I've done this calculation, and it turns out that it is enough to take:</p> <p>$$\delta=\min\left(\frac{1}{\|A^{-1}\|},\frac{\varepsilon}{\|A^{-1}\|^2+\|A^{-1}\|\varepsilon}\right)$$</p> <p>Namely, if $\|A-B\|\lt\delta$, then $\|A-B\|\lt\frac{1}{\|A^{-1}\|}$, i.e. $1-\|A^{-1}\|\|A-B\|\gt 0$, and also: $\|A-B\|\lt\frac{\varepsilon}{\|A^{-1}\|^2+\|A^{-1}\|\varepsilon}$, i.e. $\|A^{-1}\|^2\|A-B\|+\|A-B\|\|A^{-1}\|\varepsilon\lt\varepsilon$, i.e. $\|A^{-1}\|^2\|A-B\|\lt (1-\|A^{-1}\|\|A-B\|)\varepsilon$, i.e.</p> <p>$$\frac{\|A^{-1}\|^2\|A-B\|}{1-\|A^{-1}\|\|A-B\|}\lt\varepsilon$$</p> <p>On the other hand, if you define $h=\|A^{-1}-B^{-1}\|$, then the first inequality above can be rewritten as:</p> <p>$$h\le\|A^{-1}\|\|A-B\|(\|A^{-1}\|+h)$$</p> <p>i.e. $h(1-\|A^{-1}\|\|A-B\|)\le\|A^{-1}\|^2\|A-B\|$, i.e. $h\le\frac{\|A^{-1}\|^2\|A-B\|}{1-\|A^{-1}\|\|A-B\|}$.</p> <p>Thus $\|A^{-1}-B^{-1}\|=h\lt\varepsilon$, as needed.</p>
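A quick numerical check of the final bound $\|A^{-1}-B^{-1}\|\le \frac{\|A^{-1}\|^2\|A-B\|}{1-\|A^{-1}\|\|A-B\|}$ (my own sketch; it uses the Frobenius norm, which is submultiplicative as the argument requires, on $2\times2$ matrices):

```python
import random

random.seed(3)

def inv(X):
    (a, b), (c, d) = X
    det = a * d - b * c
    return ((d / det, -b / det), (-c / det, a / det))

def sub(X, Y):
    return tuple(tuple(X[i][j] - Y[i][j] for j in range(2)) for i in range(2))

def norm(X):
    # Frobenius norm: submultiplicative, so the derivation applies
    return sum(e * e for row in X for e in row) ** 0.5

A = ((3.0, 1.0), (0.5, 2.0))
Ainv = inv(A)
for _ in range(200):
    E = tuple(tuple(random.uniform(-1e-3, 1e-3) for _ in range(2))
              for _ in range(2))
    B = sub(A, E)                      # a small perturbation of A
    d = norm(sub(A, B))
    assert norm(Ainv) * d < 1          # inside the regime where the bound holds
    h = norm(sub(Ainv, inv(B)))
    assert h <= norm(Ainv) ** 2 * d / (1 - norm(Ainv) * d) + 1e-12
```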
1,288,812
<p><a href="https://en.wikipedia.org/wiki/Automorphisms_of_the_symmetric_and_alternating_groups#Exotic_map">This section</a> says:</p> <blockquote> <blockquote> <p>There is a subgroup (indeed, $6$ conjugate subgroups) of $S_6$ which are abstractly isomorphic to $S_5$,</p> </blockquote> </blockquote> <p>At this point I'm thinking: certainly: the group of all permutations of $\{a,b,c,d,e,f\}$ that leave the letter $a$ fixed is isomorphic to $S_5$. And there are six groups like it, since one can choose any of the six letters as the one that will remain fixed. But the section continues:</p> <blockquote> <blockquote> <p>There is a subgroup (indeed, $6$ conjugate subgroups) of $S_6$ which are abstractly isomorphic to $S_5$, and transitive as subgroups of $S_6$.</p> </blockquote> </blockquote> <p>But the groups I identify above do not act transitively on $\{a,b,c,d,e,f\}$, so this must be about some other subgroups. What are they? Are they images of the six groups I mention above under an outer automorphism?</p>
Josh B.
4,308
<p>Note that $S_5$ contains a subgroup of order $20$ (generated by, say, $(1,2,3,4,5)$ and $(1,3,4,2)$ ). The action of $S_5$ on the $6$ cosets of a subgroup of order $20$ provides a permutation representation of $S_5$ on $6$ points. And yes, an outer automorphism maps this sort of $S_5$ to the first type you were thinking of.</p>
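Here is a short Python sketch of mine that builds this permutation representation explicitly and checks it is faithful and transitive. The generator choices are assumptions consistent with the answer: `c` is the 5-cycle and `m` (multiplication by 3 mod 5) plays the role of the 4-cycle, so together they generate a subgroup of order 20.

```python
from itertools import permutations

# permutations of {0,...,4} as tuples; composition (p*q)(i) = p[q[i]]
compose = lambda p, q: tuple(p[q[i]] for i in range(5))

c = (1, 2, 3, 4, 0)   # the 5-cycle x -> x+1 (mod 5)
m = (0, 3, 1, 4, 2)   # x -> 3x (mod 5), order 4; <c, m> has order 20

def closure(gens):
    # subgroup generated by gens (finite, so left-multiplication closure suffices)
    G = {tuple(range(5))}
    frontier = list(G)
    while frontier:
        new = []
        for g in frontier:
            for s in gens:
                h = compose(s, g)
                if h not in G:
                    G.add(h)
                    new.append(h)
        frontier = new
    return G

H = closure([c, m])
assert len(H) == 20

S5 = list(permutations(range(5)))
cosets, seen = [], set()
for g in S5:
    if g not in seen:
        coset = frozenset(compose(g, h) for h in H)
        cosets.append(coset)
        seen |= coset
assert len(cosets) == 6

def act(g):
    # the permutation of the 6 cosets induced by left multiplication by g
    return tuple(next(j for j, C in enumerate(cosets)
                      if compose(g, min(cosets[i])) in C)
                 for i in range(6))

image = {act(g) for g in S5}
assert len(image) == 120                        # the representation is faithful
assert {p[0] for p in image} == set(range(6))   # and transitive on the 6 points
```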
1,288,812
<p><a href="https://en.wikipedia.org/wiki/Automorphisms_of_the_symmetric_and_alternating_groups#Exotic_map">This section</a> says:</p> <blockquote> <blockquote> <p>There is a subgroup (indeed, $6$ conjugate subgroups) of $S_6$ which are abstractly isomorphic to $S_5$,</p> </blockquote> </blockquote> <p>At this point I'm thinking: certainly: the group of all permutations of $\{a,b,c,d,e,f\}$ that leave the letter $a$ fixed is isomorphic to $S_5$. And there are six groups like it, since one can choose any of the six letters as the one that will remain fixed. But the section continues:</p> <blockquote> <blockquote> <p>There is a subgroup (indeed, $6$ conjugate subgroups) of $S_6$ which are abstractly isomorphic to $S_5$, and transitive as subgroups of $S_6$.</p> </blockquote> </blockquote> <p>But the groups I identify above do not act transitively on $\{a,b,c,d,e,f\}$, so this must be about some other subgroups. What are they? Are they images of the six groups I mention above under an outer automorphism?</p>
David Wheeler
23,285
<p>I will show you in principle how this is done, but the actual mechanics are tedious. Let:</p> <p>$$P_1 = \{e,(1\ 2\ 3\ 4\ 5),(1\ 3\ 5\ 2\ 4), (1\ 4\ 2\ 5\ 3),(1\ 5\ 4\ 3\ 2)\}\\ P_2 = \{e, (1\ 2\ 3\ 5\ 4),(1\ 3\ 4\ 2\ 5),(1\ 5\ 2\ 4\ 3),(1\ 4\ 5\ 3\ 2)\}\\ P_3 = \{e,(1\ 2\ 4\ 3\ 5),(1\ 4\ 5\ 2\ 3), (1\ 3\ 2\ 5\ 4), (1\ 5\ 3\ 4\ 2)\}\\ P_4 = \{e, (1\ 2\ 4\ 5\ 3),(1\ 4\ 3\ 2\ 5), (1\ 5\ 2\ 3\ 4), (1\ 3\ 5\ 4\ 2)\}\\ P_5 = \{e,(1\ 2\ 5\ 3\ 4),(1\ 5\ 4\ 2\ 3), (1\ 3\ 2\ 4\ 5), (1\ 4\ 3\ 5\ 2)\}\\ P_6 = \{e,(1\ 2\ 5\ 4\ 3), (1\ 5\ 3\ 2\ 4), (1\ 4\ 2\ 3\ 5), (1\ 3\ 4\ 5\ 2)\}$$</p> <p>If we pick an element $\sigma$ of $S_5$, say $(1\ 2\ 3)(4\ 5)$, we find that:</p> <p>$$\sigma P_1\sigma^{-1} = P_5\\ \sigma P_2\sigma^{-1} = P_3\\ \sigma P_3\sigma^{-1} = P_6\\ \sigma P_4\sigma^{-1} = P_2\\ \sigma P_5\sigma^{-1} = P_4\\ \sigma P_6\sigma^{-1} = P_1$$</p> <p>That is, if $\phi:S_5 \to S_6$ is our (exotic) embedding, then:</p> <p>$\phi((1\ 2\ 3)(4\ 5)) = (1\ 5\ 4\ 2\ 3\ 6)$ (note this indeed takes an element of order $6$ to an element of order $6$). It should be clear that if $\sigma \in S_5$ is a $5$-cycle, it fixes the sylow $5$-subgroup it belongs to, and permutes the rest, pick one at random, and verify it creates a $5$-cycle in $S_6$.</p> <p>Note that we only need the $3$-cycles of $S_5$ to show this action is indeed transitive; for example, to send $P_1 \to P_5$, we can conjugate by the $3$-cycle $(3\ 5\ 4)$ (I deliberately chose the $5$-cycle generators to start with $(1\ 2\ \dots)$ to make this clear).</p>
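The table of conjugates can be verified mechanically. Here is a Python sketch of mine that encodes each $P_i$ by the generator listed first in its set above and checks that conjugation by $\sigma=(1\ 2\ 3)(4\ 5)$ induces exactly the mapping $P_1\to P_5$, $P_2\to P_3$, $P_3\to P_6$, $P_4\to P_2$, $P_5\to P_4$, $P_6\to P_1$:

```python
def cyc5(c):
    # the 5-cycle (c0 c1 c2 c3 c4) on {1,...,5}, as a tuple p with p[i-1] = image of i
    p = list(range(1, 6))
    for a, b in zip(c, c[1:] + c[:1]):
        p[a - 1] = b
    return tuple(p)

compose = lambda p, q: tuple(p[q[i] - 1] for i in range(5))

def cyclic(g):
    # the cyclic subgroup generated by g
    e = tuple(range(1, 6))
    G, x = {e}, g
    while x != e:
        G.add(x)
        x = compose(g, x)
    return frozenset(G)

P = [cyclic(cyc5(c)) for c in
     [(1, 2, 3, 4, 5), (1, 2, 3, 5, 4), (1, 2, 4, 3, 5),
      (1, 2, 4, 5, 3), (1, 2, 5, 3, 4), (1, 2, 5, 4, 3)]]

sigma = (2, 3, 1, 5, 4)        # (1 2 3)(4 5)
sigma_inv = (3, 1, 2, 5, 4)

conj = lambda g: compose(compose(sigma, g), sigma_inv)
mapping = [P.index(frozenset(conj(g) for g in Pi)) + 1 for Pi in P]
assert mapping == [5, 3, 6, 2, 4, 1]   # P1->P5, P2->P3, P3->P6, P4->P2, P5->P4, P6->P1
```

Note that the mapping $[5, 3, 6, 2, 4, 1]$ is exactly the 6-cycle $(1\ 5\ 4\ 2\ 3\ 6)$ claimed for $\phi((1\ 2\ 3)(4\ 5))$.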
2,232,095
<blockquote> <p>Let $a, b, c, p, q$ be real numbers. Suppose $\{α, β\}$ are the roots of the equation $x^2 + 2px+ q = 0$ and $\{α,\frac{1}{β}\}$ are the roots of the equation $ax^2 + 2bx+ c = 0$, where $β \notin \{−1, 0, 1\}$.</p> </blockquote> <p>STATEMENT-1 : $(p^2 − q)(b^2 − ac) ≥ 0$</p> <p>STATEMENT 2: $b \neq pa$ or $c \neq qa$</p> <blockquote> <p>My Attempt:</p> </blockquote> <p>The second statement means that $α+\dfrac{1}{β} \neq α+β$ or $α\dfrac{1}{β} \neq αβ$, which is true as long as $α\neq0$, since $β \notin \{−1, 0, 1\}$. </p> <p><strong>But I have two questions:</strong></p> <ul> <li><p>Is assuming $\alpha \neq 0$ wrong?</p></li> <li><p>Does statement $2$ imply statement $1$ ?</p></li> </ul>
Twenty-six colours
424,197
<p>This is a reply to the comment </p> <blockquote> <p>"Does statement 2 imply statement 1?" </p> </blockquote> <p>I don't have enough rep to comment.<br> The answer is yes.<br> Statement 1 was derived from the condition that $\beta \notin \{-1,0,1\}.$<br> Since statement 2 also requires that condition on $\beta$, then having that condition present implies both of them, so we can say that if statement 2 is present, statement 1 must also be present.<br> (For $\alpha \neq 0$ of course.)</p>
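In fact Statement 1 holds identically: by Vieta's formulas, $p^2-q=\frac{(\alpha-\beta)^2}{4}$ and $b^2-ac=\frac{a^2(\alpha-1/\beta)^2}{4}$, so the product is a square. A quick numerical spot check (my own sketch, reconstructing the coefficients from random roots):

```python
import random

random.seed(1)
for _ in range(1000):
    alpha = random.uniform(-5, 5)
    beta = random.choice([-1, 1]) * random.uniform(1.5, 5)   # beta not in {-1, 0, 1}
    a = random.uniform(0.5, 2)
    # coefficients reconstructed from the roots via Vieta
    p, q = -(alpha + beta) / 2, alpha * beta                 # x^2 + 2px + q
    b, c = -a * (alpha + 1 / beta) / 2, a * alpha / beta     # ax^2 + 2bx + c
    assert (p * p - q) * (b * b - a * c) >= -1e-9
```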
3,695,868
<p>In right triangle <span class="math-container">$ABC,$</span> <span class="math-container">$\angle C = 90^\circ.$</span> Let <span class="math-container">$P$</span> and <span class="math-container">$Q$</span> be points on <span class="math-container">$\overline{AC}$</span> so that <span class="math-container">$AP = PQ = QC.$</span> If <span class="math-container">$QB = 67$</span> and <span class="math-container">$PB = 76,$</span> find <span class="math-container">$AB.$</span></p> <p><a href="https://i.stack.imgur.com/BIPQ0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BIPQ0.png" alt="enter image description here"></a></p> <p>How do I use ratios and given side lengths to create a proportion to solve for <span class="math-container">$AB$</span>? Is there any other way to solve this?</p> <p>I would think the best way to approach this is to relate <span class="math-container">$QB/CB = AB/CB$</span>, though that would make <span class="math-container">$CB$</span> for both the same. I guess the relation of <span class="math-container">$AB/AC = QB/QC$</span> can also be used.</p>
g.kov
122,782
<p><a href="https://i.stack.imgur.com/RIWYl.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RIWYl.png" alt="enter image description here"></a></p> <p>Let <span class="math-container">$|BC|=a$</span>, <span class="math-container">$|AB|=c$</span>, <span class="math-container">$|AC|=b$</span>, <span class="math-container">$|AP|=|PQ|=|QC|=\tfrac b3$</span>, <span class="math-container">$|BP|=u=76$</span>, <span class="math-container">$|BQ|=v=67$</span>. </p> <p>Applying <a href="https://en.wikipedia.org/wiki/Stewart%27s_theorem" rel="nofollow noreferrer">Stewart’s Theorem</a>, we have</p> <p><span class="math-container">\begin{align} \triangle AQB:\quad |AB|^2\,|PQ|+|BQ|^2\,|AP|-|AQ|\,(|BP|^2+|AP|\,|PQ|) &amp;=0 ,\\ c^2\cdot\tfrac b3+v^2\cdot\tfrac b3 -2\cdot\tfrac b3\cdot(u^2+(\tfrac b3)^2) &amp;=0 ,\\ 9c^2-2b^2-18u^2+9v^2 &amp;=0 \tag{1}\label{1} , \end{align}</span> </p> <p><span class="math-container">\begin{align} \triangle PCB:\quad |BP|^2\,|QC|+|BC|^2\,|PQ|-|PC|\,(|BQ|^2+|PQ|\,|QC|) &amp;=0 ,\\ u^2\cdot\tfrac b3+a^2\cdot\tfrac b3 -2\cdot\tfrac b3\cdot(v^2+(\tfrac b3)^2) &amp;=0 ,\\ -9a^2+2b^2-9u^2+18v^2 &amp;=0 ,\\ -9c^2+11b^2-9u^2+18v^2 &amp;=0 \tag{2}\label{2} . \end{align}</span> </p> <p>From \eqref{1},\eqref{2} we have</p> <p><span class="math-container">\begin{align} b^2 &amp;= 3(u^2-v^2)=3861 , \\ c^2 &amp;= \tfrac13\,( 8u^2-5v^2 ) =7921 . \end{align}</span></p> <p>Hence <span class="math-container">$|AB|=c=\sqrt{7921}=89$</span>.</p>
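As a sanity check, one can rebuild the triangle in coordinates from the derived values (my own sketch; the labels follow the answer, with the right angle at $C$):

```python
from math import sqrt, isclose

u, v = 76.0, 67.0                     # |BP|, |BQ|
b2 = 3 * (u ** 2 - v ** 2)            # |AC|^2 = 3861
c2 = (8 * u ** 2 - 5 * v ** 2) / 3    # |AB|^2 = 7921
a2 = c2 - b2                          # |BC|^2, by Pythagoras

b, a = sqrt(b2), sqrt(a2)
C, A, B = (0.0, 0.0), (b, 0.0), (0.0, a)     # right angle at C
P, Q = (2 * b / 3, 0.0), (b / 3, 0.0)        # P, Q trisect AC, with AP = PQ = QC

dist = lambda X, Y: sqrt((X[0] - Y[0]) ** 2 + (X[1] - Y[1]) ** 2)
assert isclose(dist(B, P), u)
assert isclose(dist(B, Q), v)
assert isclose(dist(A, B), 89.0)
```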