2,469,720
<p>Math problem:</p> <blockquote> <p>Find $x$, given that $ \, 2^2 \times 2^4 \times 2^6 \times 2^8 \times \ldots \times 2^{2x} = \left( 0.25 \right)^{-36}$</p> </blockquote> <p>To solve this question, I changed the left side of the equation to $2^{2+4+6+ \ldots + 2x}$ and the right side to $\frac{2^{74}}{3^{36}}$.</p> <p>My question is: how can $3$ to a power (in this case $36$) be changed to $2$ to a power, algebraically and without a calculator?</p> <p>By checking with a calculator and taking $\log$s, I found that the exponent is not a whole number, and therefore that this is the wrong method for this question.</p>
Community
-1
<p>Take the base-2 logarithm of both sides, and you get</p> <p>$$2+4+6+\cdots+2x=(-2)(-36)$$</p> <p>or</p> <p>$$1+2+3+\cdots+x=36.$$</p> <p>$36$ is the eighth triangular number.</p> <hr> <p>Even though this is irrelevant to the given problem, you convert a power of $2$ to a power of $3$ by writing</p> <p>$$2^a=3^b,$$ and taking the logarithm (in any base),</p> <p>$$a\log2=b\log 3$$ or</p> <p>$$b=a\frac{\log 2}{\log 3}.$$</p> <p>For a nonzero integer $a$, $b$ can never be an integer (nor a rational).</p>
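To make the conclusion explicit (an added worked step, not part of the original answer), the triangular-number identity pins down $x$:

```latex
1+2+3+\cdots+x=\frac{x(x+1)}{2}=36
\;\Longrightarrow\; x^2+x-72=0
\;\Longrightarrow\; (x-8)(x+9)=0,
```

so $x=8$, and indeed $2^{2+4+\cdots+16}=2^{72}=(2^{-2})^{-36}=(0.25)^{-36}$.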
370,007
<p>A river boat can travel 20 km per hour in still water. The boat travels 30 km upstream against the current, then turns around and travels the same distance back with the current. If the total trip took 7.5 hours, what is the speed of the current? Solve this question algebraically as well as graphically.</p> <p>I started the algebra solution with $x=(V_{\text{still}}-V_{\text{current}})t_1$ (when it goes upstream) and $x=(V_{\text{still}}+V_{\text{current}})t_2$ (when it goes back downstream)...</p> <p>I have the same question on a quiz in 1 hour and I need to know how to do this, please show a solution :D thanks</p>
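Completing the setup the asker started (an editorial sketch, not the original poster's work): with current speed $c$, the two travel times must add to 7.5 hours,

```latex
\frac{30}{20-c}+\frac{30}{20+c}=7.5
\;\Longrightarrow\; 30(20+c)+30(20-c)=7.5\,(400-c^2)
\;\Longrightarrow\; 1200=3000-7.5c^2,
```

so $c^2=240$ and $c=4\sqrt{15}\approx 15.5$ km/h (taking the positive root).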
Alexander Gruber
12,952
<p><em>Hint/roadmap:</em></p> <p>If $|a|$ and $|b|$ are coprime <strong>and $a$ and $b$ commute</strong>, then $|ab|=|a||b|$. In particular, this holds for all pairs of elements in abelian groups.</p> <p>This follows from these facts:</p> <ol> <li><p><em>if $ab=ba$</em>, then $(ab)^x=a^xb^x$, $\hspace{17pt}$(<em>note:</em> this is the step which fails without commutativity)</p></li> <li><p>if $g$ is an element in any group $G$ and $g^x=\operatorname{id}_G$, then $|g|$ divides $x$,</p></li> <li><p>if $|a|$ and $|b|$ are coprime, the smallest number divisible by both $|a|$ and $|b|$ (the <em>least common multiple,</em> one might say) is $|a||b|$.</p></li> </ol> <p>From these facts, one may show that the smallest number $x$ for which $(ab)^x$ is the identity is $|a||b|$, hence $|ab|=|a||b|$.</p>
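A concrete instance of the hint (an added example, not from the original answer): in the abelian group $\mathbb{Z}_{12}$, written additively,

```latex
|6| = 2,\qquad |4| = 3,\qquad \gcd(2,3)=1
\;\Longrightarrow\; |6+4| = |10| = 6 = |6|\cdot|4|.
```

Here commutativity is automatic, and the order of $10$ is indeed the least common multiple $2\cdot 3=6$.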
1,356,367
<p>Is it true that a projection is a normal matrix? It's clear that an orthogonal projection is, but what about a non-orthogonal projection?</p> <p>By a normal matrix, I mean a matrix $A$ such that $AA' = A'A$.</p>
James Pak
187,056
<p>$$\frac{n!}{n_1!n_2!},\;\; n_1+ n_2=n,$$</p> <p>counts the permutations of the sequence $$\underbrace{p_1...p_1}_{n_1}\underbrace{p_2...p_2}_{n_2}.$$</p> <p>Coincidentally, $$\frac{n!}{n_1!n_2!}=\frac{n!}{k!(n-k)!}={n \choose k}, \;\; n_1=k=\text{number of successes}.$$</p> <p>Therefore, ${n \choose k}$, disguised as counting combinations, actually counts permutations.</p>
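For instance (an added illustration), with $n=4$ and $n_1=k=2$:

```latex
\frac{4!}{2!\,2!}=6:\qquad ppqq,\; pqpq,\; pqqp,\; qppq,\; qpqp,\; qqpp,
```

which is exactly $\binom{4}{2}$, the number of ways to place the two successes among four trials.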
2,572,302
<p>I want to calculate the limit: $$ \lim_{x\to +\infty}(1+e^{-x})^{2^x \log x}$$ The limit presents itself in a $1^\infty$ indeterminate form. I tried to write the function as $e$ raised to its own logarithm:</p> <p>$$\lim_{x\to +\infty} (1+e^{-x})^{2^x \log x} = \lim_{x\to +\infty} e^{\log((1+e^{-x})^{2^x \log x})} = e^{\lim_{x\to +\infty} 2^x \log x \cdot \log(1+e^{-x}) }$$</p> <p>And then rewrite the exponent as a fraction, to get an $\frac{\infty}{\infty}$ form:</p> <p>$$\lim_{x\to +\infty} \frac{2^x \log x}{\frac{1}{\log(1+e^{-x})}} $$</p> <p>But I don't know how to apply a growth-comparison technique here, and even applying de l'Hôpital seems to lead nowhere...</p> <p>Could you guys give me some help?</p> <p>Furthermore: is there a way to calculate this limit without using series expansions or other advanced mathematical tools?</p> <p>Thank you very much in advance.</p> <p>P.S. Wolfram says this limit goes to 1, but I still really want to know how.</p>
user
505,767
<p>We have:</p> <p>$$(1+e^{-x})^{2^x \log x}=e^{2^x \log x\log(1+e^{-x})}\to e^0 = 1$$</p> <p>indeed</p> <p>$$2^x \log x\log(1+e^{-x})=\frac{2^x\log x}{e^{x}}\log\left[\left(1+\frac{1}{e^x}\right)^{e^x}\right]\to 0\cdot \log e=0$$</p> <p>$$\frac{2^x\log x}{e^{x}}=\frac{\log x}{\left(\frac{e}{2}\right)^{x}}\stackrel{\text{l'Hospital}}=\frac{\frac{1}{x}}{\left(\frac{e}{2}\right)^{x}\log{\frac{e}{2}}}=\frac{1}{x\left(\frac{e}{2}\right)^{x}\log{\frac{e}{2}}}\to 0$$</p>
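As a numerical sanity check (an editorial addition, not part of the original answer), the function can be evaluated through its exponent, so the huge product $2^x\log x$ never overflows:

```python
import math

def f(x):
    # evaluate (1 + e^(-x))^(2^x * log x) as exp of its logarithm;
    # log1p(e^(-x)) keeps precision when e^(-x) is tiny
    exponent = (2.0 ** x) * math.log(x) * math.log1p(math.exp(-x))
    return math.exp(exponent)

for x in (5, 10, 20, 30):
    print(x, f(x))
```

The printed values decrease monotonically toward $1$, matching Wolfram's answer.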
353,087
<p>Solve the interior Dirichlet Problem</p> <p>$$(r^2u_r)_r+\dfrac{1}{\sin\phi}(\sin\phi~u_\phi)_\phi+\dfrac{1}{\sin^2\phi}u_{\theta\theta}=0\,, \,\,\,\,\,\,\, 0&lt;r&lt;1 $$</p> <p>where $u(1,\phi)=\cos3\phi$</p>
doraemonpaul
30,938
<p>$(r^2u_r)_r+\dfrac{1}{\sin\phi}(\sin\phi~u_\phi)_\phi+\dfrac{1}{\sin^2\phi}u_{\theta\theta}=0$</p> <p>$r^2u_{rr}+2ru_r+u_{\phi\phi}+\cot\phi~u_\phi+\csc^2\phi~u_{\theta\theta}=0$</p> <p>Note that this PDE is separable.</p> <p>Let $u(r,\phi,\theta)=f(r)g(\phi)h(\theta)$ ,</p> <p>Then $r^2f''(r)g(\phi)h(\theta)+2rf'(r)g(\phi)h(\theta)+f(r)g''(\phi)h(\theta)+\cot\phi~f(r)g'(\phi)h(\theta)+\csc^2\phi~f(r)g(\phi)h''(\theta)=0$</p> <p>$\dfrac{r^2f''(r)+2rf'(r)}{f(r)}+\dfrac{g''(\phi)+\cot\phi~g'(\phi)}{g(\phi)}+\dfrac{\csc^2\phi~h''(\theta)}{h(\theta)}=0$</p> <p>$\dfrac{r^2f''(r)+2rf'(r)}{f(r)}=-\dfrac{g''(\phi)+\cot\phi~g'(\phi)}{g(\phi)}-\dfrac{\csc^2\phi~h''(\theta)}{h(\theta)}=\dfrac{4s^2-1}{4}$</p> <p>$\begin{cases}\dfrac{r^2f''(r)+2rf'(r)}{f(r)}=\dfrac{4s^2-1}{4}\\-\dfrac{g''(\phi)+\cot\phi~g'(\phi)}{g(\phi)}-\dfrac{\csc^2\phi~h''(\theta)}{h(\theta)}=\dfrac{4s^2-1}{4}\end{cases}$</p> <p>$\begin{cases}r^2f''(r)+2rf'(r)-\dfrac{4s^2-1}{4}f(r)=0\\\dfrac{\csc^2\phi~h''(\theta)}{h(\theta)}=-\dfrac{g''(\phi)+\cot\phi~g'(\phi)}{g(\phi)}-\dfrac{4s^2-1}{4}\end{cases}$</p> <p>$\begin{cases}r^2f''(r)+2rf'(r)-\dfrac{4s^2-1}{4}f(r)=0\\\dfrac{h''(\theta)}{h(\theta)}=-\dfrac{\sin^2\phi~g''(\phi)+\sin\phi\cos\phi~g'(\phi)}{g(\phi)}-\dfrac{4s^2-1}{4}\sin^2\phi=-t^2\end{cases}$</p> <p>$\begin{cases}r^2f''(r)+2rf'(r)-\dfrac{4s^2-1}{4}f(r)=0\\\dfrac{h''(\theta)}{h(\theta)}=-t^2\\-\dfrac{\sin^2\phi~g''(\phi)+\sin\phi\cos\phi~g'(\phi)}{g(\phi)}-\dfrac{4s^2-1}{4}\sin^2\phi=-t^2\end{cases}$</p> <p>$\begin{cases}r^2f''(r)+2rf'(r)-\dfrac{4s^2-1}{4}f(r)=0\\h''(\theta)+t^2h(\theta)=0\\\sin^2\phi~g''(\phi)+\sin\phi\cos\phi~g'(\phi)+\biggl(\dfrac{4s^2-1}{4}\sin^2\phi-t^2\biggr)g(\phi)=0\end{cases}$</p> <p>$\begin{cases}f(r)=c_1(s)r^{s-\frac{1}{2}}+c_2(s)r^{-s-\frac{1}{2}}\\h(\theta)=c_3(t)\sin\theta t+c_4(t)\cos\theta t\\g(\phi)=c_5(s,t)P_{s-\frac{1}{2}}^t(\cos\phi)+c_6(s,t)Q_{s-\frac{1}{2}}^t(\cos\phi)\end{cases}$</p> <p>$\therefore 
u(r,\phi,\theta)=\int_t\int_sC_1(s,t)r^{s-\frac{1}{2}}\sin\theta t~P_{s-\frac{1}{2}}^t(\cos\phi)~ds~dt+\int_t\int_sC_2(s,t)r^{s-\frac{1}{2}}\sin\theta t~Q_{s-\frac{1}{2}}^t(\cos\phi)~ds~dt+\int_t\int_sC_3(s,t)r^{s-\frac{1}{2}}\cos\theta t~P_{s-\frac{1}{2}}^t(\cos\phi)~ds~dt+\int_t\int_sC_4(s,t)r^{s-\frac{1}{2}}\cos\theta t~Q_{s-\frac{1}{2}}^t(\cos\phi)~ds~dt+\int_t\int_sC_5(s,t)r^{-s-\frac{1}{2}}\sin\theta t~P_{s-\frac{1}{2}}^t(\cos\phi)~ds~dt+\int_t\int_sC_6(s,t)r^{-s-\frac{1}{2}}\sin\theta t~Q_{s-\frac{1}{2}}^t(\cos\phi)~ds~dt+\int_t\int_sC_7(s,t)r^{-s-\frac{1}{2}}\cos\theta t~P_{s-\frac{1}{2}}^t(\cos\phi)~ds~dt+\int_t\int_sC_8(s,t)r^{-s-\frac{1}{2}}\cos\theta t~Q_{s-\frac{1}{2}}^t(\cos\phi)~ds~dt$</p> <p>The condition about the information of $\theta$ is not clear, so I stop here until OP has properly clarified.</p>
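For what it's worth, an editorial sketch of the natural specialization (assuming the intended solution is $\theta$-independent and regular at $r=0$, which the boundary condition $u(1,\phi)=\cos 3\phi$ suggests): then $t=0$, $s-\tfrac12=n$ a nonnegative integer, and the bounded solutions reduce to

```latex
u(r,\phi)=\sum_{n\ge 0}A_n\,r^n P_n(\cos\phi),\qquad
\cos 3\phi = 4\cos^3\phi-3\cos\phi
           = \tfrac{8}{5}P_3(\cos\phi)-\tfrac{3}{5}P_1(\cos\phi),
```

so matching coefficients at $r=1$ gives $u(r,\phi)=\tfrac{8}{5}\,r^3P_3(\cos\phi)-\tfrac{3}{5}\,r\,P_1(\cos\phi)$.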
2,189,832
<p>Take the matrix $$ \begin{matrix} 1 &amp; 1 &amp; 1 &amp; 1 \\ 1 &amp; 1 &amp; 1 &amp; 1 \\ 1 &amp; 1 &amp; 1 &amp; 1 \\ 1 &amp; 1 &amp; 1 &amp; 1 \\ \end{matrix} $$</p> <p>I tried to calculate the eigenvalues of this matrix and got to a point where I found that the eigenvalues are $1, 0, 2$, but Wolfram says it has only $4$ and $0$. I have no idea why. Wolfram: <a href="https://i.stack.imgur.com/5Ukn1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5Ukn1.png" alt="enter image description here"></a></p> <p>In addition, can this matrix be diagonalized?</p> <p>Thanks!</p> <p>my calculations:</p> <p>I computed the characteristic polynomial from the determinant of $A - tI$: \begin{matrix} 1-t &amp; 1 &amp; 1 &amp; 1 \\ 1 &amp; 1-t &amp; 1 &amp; 1 \\ 1 &amp; 1 &amp; 1-t &amp; 1 \\ 1 &amp; 1 &amp; 1 &amp; 1-t \\ \end{matrix} I got $\det(tI - A) = t(1-t)^2(t-2)$, which means that the eigenvalues are $1, 0, 2$. Where did I go wrong?</p>
Scientifica
164,983
<p>This means that you made a mistake while computing the eigenvalues, so you should check your work and find what you did wrong.</p> <p>Actually, you can solve this problem pretty easily. We're working on a four-dimensional vector space, and I guess that's $\mathbb{R}^4$ (or $\mathbb{C}^4$, not much difference in this case). If you consider the operator $f\in\mathcal{L}(\mathbb{R}^4)$ whose matrix in the canonical basis $(e_1,e_2,e_3,e_4)$ of $\mathbb{R}^4$ is your matrix (call it $A$), we have $$4=\dim{\mathbb{R}^4}=\dim\ker{f}+\dim\text{Im}f$$ </p> <p>Notice that all the columns are the same: the image of each vector of the basis is $e_1+e_2+e_3+e_4$, which means that $\text{Im}f=\text{span}\{e_1+e_2+e_3+e_4\}$. Thus $\dim\text{Im}f=1$. Hence $\dim\ker{f}=3$. Thus $0$ is an eigenvalue and there exist three linearly independent vectors $v_1,v_2,v_3\in\mathbb{R}^4$ such that $Av_1=Av_2=Av_3=0$. $v_1$, $v_2$ and $v_3$ form a linearly independent list of eigenvectors corresponding to the eigenvalue $0$.</p> <p>Finally, by inspection, you can see that $4$ is an eigenvalue. Indeed, notice that $f(e_i)=e_1+e_2+e_3+e_4$ for all $i\in\{1,2,3,4\}$. The image of each $e_i$ contributes $1$ to each coordinate. So what happens if you take the sum? $f(e_1+e_2+e_3+e_4)=4(e_1+e_2+e_3+e_4)$. This shows that $4$ is an eigenvalue. Call $v_4=e_1+e_2+e_3+e_4$.</p> <p>$(v_1,v_2,v_3,v_4)$ form a basis of $\mathbb{R}^4$ since they are $4$ linearly independent vectors ($v_4$ is linearly independent from the rest because it is an eigenvector with a different eigenvalue). This shows that $0,4$ are the only eigenvalues and that your matrix is diagonalizable.</p> <p><strong>Edit</strong>: As amd said in comments, we can solve the problem more easily: we found three linearly independent eigenvectors of $A$ corresponding to the eigenvalue $0$, so there is at most one additional eigenvalue. The trace of $A$ is $4$, and the trace equals the sum of the eigenvalues, but the three zeros contribute nothing to that sum. 
So $4$ must be an eigenvalue and thus $0$ and $4$ are the eigenvalues of $A$. Take any eigenvector $v_4$ corresponding to the eigenvalue $4$. $(v_1,v_2,v_3,v_4)$ is linearly independent, thus a basis of $\mathbb{R}^4$. Hence $A$ is diagonalizable.</p>
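A quick numerical sanity check (an editorial addition, in plain Python rather than any tool from the thread): multiply the all-ones matrix against the claimed eigenvectors.

```python
# 4x4 all-ones matrix
A = [[1] * 4 for _ in range(4)]

def matvec(A, v):
    # matrix-vector product by rows
    return [sum(a * x for a, x in zip(row, v)) for row in A]

# v4 = (1,1,1,1) should satisfy A v4 = 4 v4
v4 = [1, 1, 1, 1]
assert matvec(A, v4) == [4 * x for x in v4]

# three independent kernel vectors for the eigenvalue 0
for w in ([1, -1, 0, 0], [0, 1, -1, 0], [0, 0, 1, -1]):
    assert matvec(A, w) == [0, 0, 0, 0]

print("eigenvalue checks passed")
```

This confirms the spectrum $\{4, 0, 0, 0\}$, consistent with the trace argument.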
3,553,644
<p>I am taking an Introduction to Calculus course and am struggling to understand how derivatives can represent tangent lines.</p> <p>I learned that derivatives are the rate of change of a function but that they can also represent the slope of the tangent at a point. I also learned that a derivative will always be an order lower than the original function.</p> <p>For example: <span class="math-container">$f(x) = x^3$</span> and <span class="math-container">$f'(x) = 3x^2$</span></p> <p>What I fail to understand is how <span class="math-container">$3x^2$</span> can represent the slope of the tangent line if it is not a linear function.</p> <p>Wouldn't this example mean that the slope or the tangent itself is a parabola?</p>
José Carlos Santos
446,262
<p>What happens is that, for each <span class="math-container">$a$</span> in the domain of <span class="math-container">$f$</span>, <span class="math-container">$f'(a)$</span> is the <em>slope</em> of the tangent to the graph of <span class="math-container">$f$</span> at the point <span class="math-container">$\bigl(a,f(a)\bigr)$</span>.</p> <p>So, if <span class="math-container">$f(x)=x^3$</span>, since <span class="math-container">$f'(x)=3x^2$</span>, the slope of the tangent to the graph of <span class="math-container">$f$</span> at the point <span class="math-container">$(1,1)$</span> is <span class="math-container">$3$</span>, and therefore that tangent is the line <span class="math-container">$y=3(x-1)+1\bigl(=3x-2\bigr)$</span>.</p>
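The same recipe at a second point (an added illustration): at $(2,8)$ the slope is $f'(2)=3\cdot 2^2=12$, so the tangent there is

```latex
y = 12(x-2) + 8 = 12x - 16.
```

Each point of the graph gets its own tangent line, but the slope at every point is read off from the single formula $3x^2$.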
3,553,644
<p>I am taking an Introduction to Calculus course and am struggling to understand how derivatives can represent tangent lines.</p> <p>I learned that derivatives are the rate of change of a function but that they can also represent the slope of the tangent at a point. I also learned that a derivative will always be an order lower than the original function.</p> <p>For example: <span class="math-container">$f(x) = x^3$</span> and <span class="math-container">$f'(x) = 3x^2$</span></p> <p>What I fail to understand is how <span class="math-container">$3x^2$</span> can represent the slope of the tangent line if it is not a linear function.</p> <p>Wouldn't this example mean that the slope or the tangent itself is a parabola?</p>
Community
-1
<ul> <li>The formula defining the derivative function is not itself the equation of the tangent; this formula gives you, for each tangent (one tangent for each point <span class="math-container">$(x, f(x))$</span> of the graph of <span class="math-container">$f$</span>), the slope of this line. And a slope is a <em>number</em>.</li> </ul> <p>The main point here is that the derivative function is a function that sends back <strong><em>numbers</em></strong> as outputs (not lines, not tangents). One and only one number for every permissible input <span class="math-container">$x$</span>. </p> <ul> <li>To understand this, remember that for every point <span class="math-container">$(x, f(x))$</span> of the graph (such that there is a tangent to the graph at this point), this tangent will have the form: </li> </ul> <p><span class="math-container">$$y = mx + b$$</span> </p> <p>The number <span class="math-container">$m$</span> is the <strong><em>slope</em></strong> of the tangent. You may think of it as a percentage (in the same way as we ordinarily think of the slope of a road in terms of %). </p> <p>For example, the line <span class="math-container">$y = 0.5x + 2$</span> has slope <span class="math-container">$0.5$</span>, that is, <span class="math-container">$50$</span>%. The line <span class="math-container">$y = 6x + 10$</span> has slope <span class="math-container">$6$</span>, that is, <span class="math-container">$600$</span>%. The slope of <span class="math-container">$y = 0x + 5 = 5$</span> is <span class="math-container">$0$</span> (= <span class="math-container">$0$</span>%). The slope of <span class="math-container">$y = -2x + 40$</span> is <span class="math-container">$-2$</span> = <span class="math-container">$-200$</span>% (these are arbitrary examples, not related to the <span class="math-container">$x^3$</span> function). 
</p> <ul> <li><p>So, for each input <span class="math-container">$x$</span>, <strong><em>the derivative gives as output the number <span class="math-container">$m$</span></em></strong> (that is, the slope) of the tangent to the graph at the point <span class="math-container">$(x, f(x))$</span>. </p></li> <li><p>The beauty is that, although the tangents will (ordinarily) have various slopes, and although the outputs of the function <span class="math-container">$f'(x)$</span> will be different for different <span class="math-container">$x$</span> values (inputs), <strong><em>we are often able to find a rule defining a constant numerical relation between the value of <span class="math-container">$x$</span> and the corresponding slope</em></strong>. For example, for <span class="math-container">$f(x)=x^2$</span>, it can be proved that <span class="math-container">$f'(x)$</span> (the slope of the tangent to the graph of <span class="math-container">$f$</span> at <span class="math-container">$(x, f(x))$</span>) is always double the value of <span class="math-container">$x$</span>! This is what the differentiation rule <span class="math-container">$\frac {d} {dx}x^2$</span> <span class="math-container">$=$</span> <span class="math-container">$2\times x$</span> means. </p></li> </ul> <p>Note: this number that is sent back as output is formally defined as a limit, namely, the <strong><em>limit</em></strong>, as <span class="math-container">$h$</span> approaches <span class="math-container">$0$</span>, of the <strong><em>ratio</em></strong></p> <p><span class="math-container">$\frac {f(x+h) - f(x)} { (x+h) - x}$</span> = <span class="math-container">$\frac{\text{change in } y}{\text{change in } x}$</span></p> <p>This shows that the slope of the tangent happens to be identical to the instantaneous rate of growth of the original function <span class="math-container">$f$</span> at the point <span class="math-container">$(x, f(x))$</span>. This is why, in fact, we are interested in these slopes. 
</p> <p>Note: you can use the number <span class="math-container">$f'(a)$</span> to find the equation of the tangent at a given point <span class="math-container">$(a, f(a))$</span>. Since <span class="math-container">$f'(a)$</span> is "the <span class="math-container">$m$</span> (= slope) of this tangent", the equation of this line will have the form <span class="math-container">$y = f'(a)x + b$</span>. The fact that you also know one point of this tangent, namely the point <span class="math-container">$(a, f(a))$</span>, allows you (with some algebra) to recover the number <span class="math-container">$b$</span>, and finally the whole equation of the tangent at this point <span class="math-container">$(a, f(a))$</span>. </p> <ul> <li>Examples with <span class="math-container">$f(x)= x^3$</span> and consequently <span class="math-container">$f'(x)= 3x^2$</span>: </li> </ul> <p>For <span class="math-container">$x= 1$</span>, the slope is <span class="math-container">$f'(1)$</span> = <span class="math-container">$3\times1^2$</span> = <span class="math-container">$3$</span> = <span class="math-container">$300$</span>%</p> <p>So, at <span class="math-container">$(1, f(1))$</span>, the slope of the tangent to the graph of <span class="math-container">$f$</span> is <span class="math-container">$300$</span>%. Quite a big slope. </p> <p>For <span class="math-container">$x= 2$</span>, the slope is <span class="math-container">$f'(2)$</span> = <span class="math-container">$3\times2^2$</span> = <span class="math-container">$12$</span> = <span class="math-container">$1200$</span>%</p> <p>So, at <span class="math-container">$(2, f(2))$</span>, the slope of the tangent to the graph of <span class="math-container">$f$</span> is <span class="math-container">$1200$</span>%. A huge slope! </p> <p><a href="https://i.stack.imgur.com/jj7TY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jj7TY.png" alt="enter image description here"></a></p>
2,062,706
<p>I have the following function:</p> <p>\begin{equation} f(q,p) = q \sqrt{p} + (1-q) \sqrt{1 - p} \end{equation}</p> <p>Here, $q \in [0,1]$ and $p \in [0,1]$.</p> <p>Now, given some value $q \in [0,1]$ what value should I select for $p$ in order to maximize $f(q,p)$? That is, I need to define some function $g(q)$ such that $f(q, g(q))$ is a local maximum.</p> <p>I've been thinking about this problem for days and I don't know where to begin. Any help will be greatly appreciated.</p>
mfl
148,513
<p>For a fixed $q\in(0,1)$, consider $g(p)=q\sqrt{p} + (1-q) \sqrt{1 - p}$. We have that</p> <p>$$g'(p)=\frac{q}{2\sqrt p}-\frac{1-q}{2\sqrt{1-p}},\qquad g''(p)=-\frac{q}{4p^{3/2}}-\frac{1-q}{4(1-p)^{3/2}}&lt;0,$$</p> <p>so $g$ is strictly concave on $(0,1)$ and its unique stationary point is the maximum. Solving $g'(p)=0$, that is $q\sqrt{1-p}=(1-q)\sqrt{p}$, gives</p> <p>$$p^*=\frac{q^2}{q^2+(1-q)^2},\qquad f(q,p^*)=q\sqrt{p^*}+(1-q)\sqrt{1-p^*}=\sqrt{q^2+(1-q)^2}.$$</p> <p>The edge cases agree with this formula: $q=0$ gives $p^*=0$ with $f(0,0)=1$, and $q=1$ gives $p^*=1$ with $f(1,1)=1$. So you may take</p> <p>$$g(q)=\frac{q^2}{q^2+(1-q)^2}.$$</p>
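As a numerical cross-check (an editorial addition), a brute-force grid search over $p$ for a fixed $q$ can be compared with the stationary point $p^*=q^2/(q^2+(1-q)^2)$ of $p\mapsto f(q,p)$:

```python
import math

def f(q, p):
    return q * math.sqrt(p) + (1 - q) * math.sqrt(1 - p)

def p_star(q):
    # stationary point of p -> f(q, p), from q*sqrt(1-p) = (1-q)*sqrt(p)
    return q ** 2 / (q ** 2 + (1 - q) ** 2)

q = 0.3
best = max(f(q, i / 10000) for i in range(10001))  # grid search on [0, 1]
print(best, f(q, p_star(q)))
```

Both values agree with $\sqrt{q^2+(1-q)^2}$ to grid precision.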
2,048,054
<p>I need to find the signed distance from a point to the intersection of 2 hyperplanes. I was quite sure this is something every mathematician does twice a week :) But I have not found any good solution or explanation for this problem.</p> <p>In my case each hyperplane is defined as $y = w'*x + x_0$, but it is OK to define it with a set of points if there is no other way to solve it.</p> <p>The only solution I found is a method to find points of the intersection from points on the hyperplanes, here: <a href="https://www.mathworks.com/matlabcentral/fileexchange/50181-affinespaceintersection-intersection-of-lines-planes-volumes-etc" rel="nofollow noreferrer">https://www.mathworks.com/matlabcentral/fileexchange/50181-affinespaceintersection-intersection-of-lines-planes-volumes-etc</a></p> <p>But I am stuck on how to find the signed distance after that.</p> <p>I have a strong feeling that there is an easy solution, but I don't know the correct keywords.</p> <p>It would be great to see formulas and an implementation in any language. But for sure any help is highly appreciated.</p> <p>Thank you.</p>
N. S.
9,176
<p>$$[0,1] \cup [2,3]$$ is metric, disconnected and has infinitely many elements.</p> <p>Note: if there are infinitely many <strong>components</strong> take each component as an open set and use this open cover to show that the space is not compact.</p>
1,704,410
<p>If we have two groups <span class="math-container">$G,H$</span> the construction of the direct product is quite natural. If we think about the most natural way to make the Cartesian product <span class="math-container">$G\times H$</span> into a group it is certainly by defining the multiplication</p> <p><span class="math-container">$$(g_1,h_1)(g_2,h_2)=(g_1g_2,h_1h_2),$$</span></p> <p>with identity <span class="math-container">$(1,1)$</span> and inverse <span class="math-container">$(g,h)^{-1}=(g^{-1},h^{-1})$</span>.</p> <p>On the other hand we have the construction of the semidirect product which is as follows: consider <span class="math-container">$G$</span>,<span class="math-container">$H$</span> groups and <span class="math-container">$\varphi : G\to \operatorname{Aut}(H)$</span> a homomorphism, we define the semidirect product group as the Cartesian product <span class="math-container">$G\times H$</span> together with the operation</p> <p><span class="math-container">$$(g_1,h_1)(g_2,h_2)=(g_1g_2,h_1\varphi(g_1)(h_2)),$$</span></p> <p>and we denote the resulting group as <span class="math-container">$G\ltimes H$</span>.</p> <p>We then show that this is a group and show many properties of it. My point here is the intuition.</p> <p>This construction doesn't seem quite natural to make. There are many operations to turn the Cartesian product into a group. The one used when defining the direct product is the most natural. Now, why do we give special importance to this one?</p> <p>What is the intuition behind this construction? What are we achieving here and why this particular way of making the Cartesian product into a group is important?</p>
Cameron Williams
22,551
<p>I take a different point of view than most people on this, I think, which is due to the way I first encountered it (along the lines of mathematical physics). To me the direct product is lacking in much structure. You effectively just slap two groups together and call it a day - much like when you combine two subspaces via direct sums.</p> <p>The semi-direct product is a simple way to really mix two groups together. Let's consider matrices. If we have two groups of square matrices (with the same dimension), say $G$ and $H$, then if we were to multiply elements, we'd have $g_1 h_1 g_2 h_2$. This would be like the product $(g_1,h_1)(g_2,h_2)$ in direct product notation. If $H$ commutes with $G$, then we could rewrite this as $g_1 g_2 h_1 h_2$ which we could realize as being similar to $(g_1g_2,h_1h_2)$ in direct product notation. The two groups don't really see each other in this setting.</p> <p>We know however that this is not always the case. Instead what you might have is that $H$ acts on $G$ in some way so that if you try to repackage the product $g_1 h_1 g_2 h_2$ in the form $g_1 g_2 h_1 h_2$, $g_2$ gets mixed up a bit by $h_1$. Since we don't want to leave the group $G$, we would need that $h_1$ acting on $g_2$ gives us another element in $G$. Moreover $h_1 I = I h_1$ so $h_1$ would have to permute the elements of $G$ while leaving the identity fixed. 
Moreover, if we had $g_1 h_1 g_2g_3 h_3$, then acting $h_1$ on each of $g_2$ and $g_3$ independently (and moving over) should give the same result as acting on their product (and moving over) - this is just the homomorphism property.</p> <p>This does not give rise to the <em>auto</em>morphism aspect, but this can be seen by noting that $h^{-1}hg = g$ and so if $h$ mapped $g$ to the identity, you would have a contradiction (unless $g = I$ of course).</p> <p>In summary: if you had two matrix groups $G$ and $H$ that did not necessarily commute but attempted to reorder the elements in a direct product kind of way, you necessarily need that $H$ acts on $G$ by automorphisms.</p>
1,013,484
<p>I have this function: $f(x,y)= \dfrac{(1+x^2)x^2y^4}{x^4+2x^2y^4+y^8}$ for $(x,y)\ne (0,0)$ and $f(0,0)=0$.</p> <p>Does it admit directional derivatives at the origin?</p>
2'5 9'2
11,123
<p>A directional derivative needs a direction. Maybe you travel vertically straight through $(0,0)$, along the line $x=0$. Then your function is constantly $0$, so of course that particular directional derivative exists. </p> <p>Otherwise, travel in the direction of the line $y=kx$. Then away from $(0,0)$ your function is $$f(x)=\frac{k^4(1+x^2)x^6}{x^4+2k^4x^6+k^8x^8}=\frac{k^4(1+x^2)x^2}{1+2k^4x^2+k^8x^4}$$ which is either identically $0$ if $k=0$, or otherwise is quadratic in nature as $x\to0$. In both cases, the derivative is $0$ at $x=0$.</p> <p>So all directional derivatives for this function exist at $(0,0)$, and they are all equal to $0$.</p> <hr> <p>Note though what happens if you approach $(0,0)$ along the nonlinear path $x=y^2$...</p>
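The answer's punchline can be checked numerically (an editorial addition): along straight lines through the origin the function vanishes rapidly, but along the parabola $x=y^2$ it tends to $1/4$, so the function is not continuous at the origin despite having all directional derivatives.

```python
def f(x, y):
    if (x, y) == (0, 0):
        return 0.0
    return (1 + x**2) * x**2 * y**4 / (x**4 + 2 * x**2 * y**4 + y**8)

# along any straight line y = k*x the value is ~ k^4 * t^2, tiny for small t
for k in (0.0, 1.0, -2.0):
    assert abs(f(1e-4, k * 1e-4)) < 1e-6

# along the parabola x = y^2 the value is (1 + y^4)/4, which tends to 1/4
print(f(1e-8, 1e-4))
```

The last line prints a value close to $0.25$, not $0$.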
4,192,869
<p>What is the difference between a set being an element of a <span class="math-container">$\sigma$</span>-algebra compared to being a subset of a <span class="math-container">$\sigma$</span>-algebra?</p>
user6247850
472,694
<p>A subset of a <span class="math-container">$\sigma$</span>-algebra is a set of sets, but an element of a <span class="math-container">$\sigma$</span>-algebra is a set of elements of the underlying space. For example, if we take <span class="math-container">$\Omega = \{1,2,3\}$</span> and the <span class="math-container">$\sigma$</span>-algebra <span class="math-container">$\mathcal F := \{\emptyset, \{1,2,3\}, \{1\},\{2,3\} \}$</span> then <span class="math-container">$\{1\}$</span> and <span class="math-container">$\{1,2,3\}$</span> are elements of <span class="math-container">$\mathcal F$</span>, while <span class="math-container">$\{\emptyset, \{1\}\}$</span> or <span class="math-container">$\{\{1\}, \{2,3\}\}$</span> are subsets of <span class="math-container">$\mathcal F$</span>.</p>
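The distinction can be mirrored in Python (an editorial illustration; `frozenset` stands in for a measurable set so that sets of sets are hashable):

```python
# underlying space and a sigma-algebra on it, modeled as a set of frozensets
omega = frozenset({1, 2, 3})
F = {frozenset(), frozenset({1, 2, 3}), frozenset({1}), frozenset({2, 3})}

# elements of F are sets of points of omega
assert frozenset({1}) in F

# subsets of F are collections of such sets
assert {frozenset({1}), frozenset({2, 3})} <= F

# {1} is an element of F, but not a subset of F (1 is not itself in F)
assert not (frozenset({1}) <= F)
```

The `in` test is element membership; the `<=` test is set inclusion, exactly the two relations the answer contrasts.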
1,034,697
<p>The exercise goes like this:</p> <p>-Let $W= \{(x,y,z)\mid 2x+3y-z=0\}$. Then $W\subseteq\mathbb{R}^3$; find the dimension of $W$.</p> <p>-Find the dimension of $\mathbb{R}^3/W$.</p> <p>This was a problem from my algebra exam, it was a team exam and this problem was solved by another member of the team (we...he had it right), his solution goes like this:</p> <p>Let's take the natural projection $\mathbb{R}^3\to \mathbb{R}^3/W$. We have $\dim \mathbb{R}^3=3$ and $\dim \ker= \dim W=2$. Because the projection is surjective, by the dimension theorem we have $\dim (\mathbb{R}^3/W)=1$.</p> <p>I don't understand the solution (I think he used things we haven't seen in class, he's a little bit ahead). I can see why $\dim \mathbb{R}^3=3$ but not why $\dim \ker= \dim W=2$ and why $\dim (\mathbb{R}^3/W)=1$. What does being surjective have to do with it?</p> <p>Can someone please explain what is used here? We already got an A in the exam but I'd like to understand how the problem was solved.</p> <p>I know it's probably something very basic, but I don't know much.</p> <p>Thanks</p>
JohnD
52,893
<p>To show $W$ is a subspace, use the Subspace Theorem: $0\in W$, $W$ is closed under addition and scalar multiplication.</p> <p>To compute the dimension of $W$, just note that $2x+3y-z=0$ implies $z=2x+3y$. So you have two free variables: $x$ and $y$. Thus, $W$ is two dimensional. </p> <p>Since $\mathbb{R}^3$ is three dimensional and $W$ is two dimensional, then $\mathbb{R}^3/W$ is one dimensional.</p>
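Concretely (an added remark), the two free variables give an explicit basis:

```latex
W=\{(x,\,y,\,2x+3y)\mid x,y\in\mathbb{R}\}
 =\operatorname{span}\{(1,0,2),\,(0,1,3)\},
```

and since $(0,0,1)\notin W$, its equivalence class spans the one-dimensional quotient $\mathbb{R}^3/W$.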
426,114
<p>What does the term "two point support" mean in this lemma?</p> <p><img src="https://i.stack.imgur.com/fecF5.png" alt="enter image description here"></p>
coffeemath
30,316
<p><strong>Hint:</strong> If $c(x)=c_0 \in (0,1)$ then it may be that $f$ has a value exceeding $f(c_0)$ which occurs at some $x&lt;c_0$. In this case a small movement of $x$ will only move $c(x)$ near $c_0$, so the max will stay the same. </p> <p>On the other hand it may be that $f$ has its maximum value on the interval $[0,c_0]$ at $f(c_0)$, i.e. it is maximal at $c(x_0)$. In this case, as $x$ moves near $x_0$ it may either cause $c(x)$ to go back down below $c(x_0)$, or move $c(x)$ above $c_0$, but either way the continuity of $f$ should keep the max from jumping.</p>
426,114
<p>What does the term "two point support" mean in this lemma?</p> <p><img src="https://i.stack.imgur.com/fecF5.png" alt="enter image description here"></p>
Stefan Hamcke
41,672
<p>Note that the set $C:=\{(x,t)\mid x\in\Bbb R,\ t\in[0,c(x)]\}$ is closed and contains the graph of $c$, the set $G(c) = \{(x,c(x))\mid x\in\Bbb R\}$, which is closed in $\Bbb R \times [0,1]$.</p> <p>Show that the preimage of an open subbase set $(-\infty,a)$ under $F$ is open: This is the set $\Bbb R-\{x\mid \exists t\le c(x),f(x,t)\ge a\}$. The complement can be expressed as $\pi_{\Bbb R}(f^{-1}([a,\infty))\cap C)$. Since $[a,\infty)$ is closed, its preimage under $f$ is closed. But if we now apply the projection $\pi_{\Bbb R}$, we get a closed set again. This is because the projection $X\times Y\to X$ is closed if $Y$ is compact, so it is in the case $\Bbb R\times[0,1]\to\Bbb R$.</p> <p>Still, there is a problem if we want to apply the same argument to an open subbase set $(a,\infty)$ since the restriction of an open map, like the projection, to a closed subset isn't necessarily open. In this case, however, $\pi_{\Bbb R}|_C$ is indeed open. This is obvious on $C\setminus G(c)$. And on $G(c)$, we have $\pi_{\Bbb R}((U\times V)\cap G(c))=c^{-1}(V)\cap U$.</p>
244,433
<p>I have a list:</p> <pre><code>data = {{2.*10^-9, 0.0025}, {4.*10^-9, 0.0025}, {6.*10^-9, 0.0025}, {8.*10^-9, 0.0025}, {1.*10^-8, 0.0025}, {7.*10^-9, 0.0023}, {3.*10^-9, 0.0025},...} </code></pre> <p>And I wanted to remove every third pair and get</p> <pre><code> newdata = {{2.*10^-9, 0.0025}, {4.*10^-9, 0.0025}, {8.*10^-9, 0.0025}, {1.*10^-8, 0.0025}, {3.*10^-9, 0.0025},...} </code></pre>
Nasser
70
<p>I am sure there are many ways to do this. One direct way could be to build the index and use it to select the entries.</p> <pre><code>data = {{2.*10^-9, 0.0025}, {4.*10^-9, 0.0025}, {6.*10^-9, 0.0025}, {8.*10^-9, 0.0025}, {1.*10^-8, 0.0025}, {7.*10^-9, 0.0023}, {3.*10^-9, 0.0025}}; </code></pre> <p>And now</p> <pre><code>idx = Table[If[Mod[n, 3] != 0, n, Nothing], {n, 1, Length[data]}]; </code></pre> <p><img src="https://i.stack.imgur.com/0z2dT.png" alt="Mathematica graphics" /></p> <p>And now use the new index</p> <pre><code>data[[idx]] </code></pre> <p><img src="https://i.stack.imgur.com/Lj6MK.png" alt="Mathematica graphics" /></p>
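As a cross-language aside (an editorial addition, not from the original answer), the same "keep positions not divisible by 3" selection reads naturally as a Python comprehension:

```python
data = [(2e-9, 0.0025), (4e-9, 0.0025), (6e-9, 0.0025),
        (8e-9, 0.0025), (1e-8, 0.0025), (7e-9, 0.0023),
        (3e-9, 0.0025)]

# keep entries whose 1-based position is not a multiple of 3,
# mirroring the Mod[n, 3] != 0 test above
newdata = [pair for i, pair in enumerate(data, start=1) if i % 3 != 0]
print(newdata)
```

The third and sixth pairs are dropped, matching the `newdata` in the question.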
244,433
<p>I have a list:</p> <pre><code>data = {{2.*10^-9, 0.0025}, {4.*10^-9, 0.0025}, {6.*10^-9, 0.0025}, {8.*10^-9, 0.0025}, {1.*10^-8, 0.0025}, {7.*10^-9, 0.0023}, {3.*10^-9, 0.0025},...} </code></pre> <p>And I wanted to remove every third pair and get</p> <pre><code> newdata = {{2.*10^-9, 0.0025}, {4.*10^-9, 0.0025}, {8.*10^-9, 0.0025}, {1.*10^-8, 0.0025}, {3.*10^-9, 0.0025},...} </code></pre>
Pillsy
531
<p>There are many ways to do this, but my favorite is to use the stride and end arguments in <a href="http://reference.wolfram.com/language/ref/Drop.html" rel="noreferrer"><code>Drop</code></a>:</p> <pre><code>Drop[data, {3, -1, 3}] === newdata (* True *) </code></pre>
6,431
<p>I hate to sound like a broken record, but closing <a href="https://math.stackexchange.com/q/219906/12042">this question</a> as <em>not constructive</em> makes no sense to me. The canned explanation reads in relevant part:</p> <blockquote> <p>We expect answers to be supported by facts, references, or specific expertise, but this question will likely solicit debate, arguments, polling, or extended discussion.</p> </blockquote> <p>In fact the question has a short, simple answer that can be supported by facts: the information given is not sufficient to answer the question, and this can easily be illustrated with a couple of examples.</p> <p><em>Not a real question</em> comes a little closer to being a legitimate reason for closure; that was the reason chosen by those who closed the OP’s <a href="https://math.stackexchange.com/q/218843/12042">one previous question</a>. But even that isn’t really accurate, since it seems quite clear what the question is asking, and indeed I provided an answer before the question was closed.</p> <p>I said nothing when that question was closed, because it <em>had</em> been answered. This one has not, even in the comments, and I can see no reason to have closed it instead of answering it, let alone closing it with a specious reason. I should really like to know the thinking behind doing so. This is MSE, after all, not MO.</p> <p>I’m not suggesting that the question should be re-opened, by the way: it now has an answer in the comments that is at least marginally adequate. I do think, though, that the OP has been treated rather shabbily.</p>
Emily
31,475
<p>I personally voted close for the reason that the "question is not a good fit for our Q&amp;A format." I feel as though a user, especially one who has asked previous questions, should understand that MSE is not a dumping site for homework problems on which they wish to avoid actual work. In particular, because the user had other questions (and rep), even given the short time frame between questions, they were obviously active on the site, and consequently had opportunity to read any comments, etc. on their other questions, or other questions in general.</p> <p>Personally, I don't feel like "Too Localized" is an appropriate reason for closure for any homework-tagged or otherwise obvious homework problem. Homework problems are always going to be too localized.</p> <p>I also don't feel as though NARQ is a sufficient reason, since there was a clear question being asked.</p> <p>To reiterate, I think that the use of MSE as a dumping ground for homework problems on which the OP has literally put no effort is by definition "not constructive" -- not to the community or to the OP. Finally, when these questions do get posed -- as they frequently do -- they are often met with as many sarcastic responses as earnest efforts to help the user understand the MSE community. (The post in question indeed contains such a response). I feel that closing the question is more productive in such a case.</p>
1,876,133
<p>Let x be an object that is not a set.</p> <p>Let S be a set.</p> <p>Would the following statement:</p> <p>x ⊆ S</p> <p>evaluate to False, or be considered not a well-formed statement (as x is not even a set)?</p>
Hagen von Eitzen
39,174
<p>A more general claim is true:</p> <p><strong>Claim.</strong> Let $f\colon \Bbb R\to\Bbb R$ be differentiable such that $f'(x)\ge 0$ for all $x\in \Bbb R$. Also assume that between any two points, there is at least one point with non-zero derivative. Then $f$ is strictly increasing. <em>(So in principle many more than just one point is allowed to have zero derivative)</em></p> <p><em>Proof.</em> Let $a,b\in \Bbb R$ with $a&lt;b$. We want to show that $f(a)&lt;f(b)$. By assumption, there exists $c$ with $a&lt;c&lt;b$ and $f'(c)&gt;0$. By definition of derivative, we have $\frac{f(c+h)-f(c)}h\approx f'(c)$ for small $h$, or more precisely and in particular: For all sufficiently small $h&gt;0$, we have $\left|\frac{f(c+h)-f(c)}h-f'(c)\right|&lt;\frac12|f'(c)|$ and therefore $f(c+h)&gt;f(c)$. We may assume that $h&lt;b-c$. So with $d=c+h&lt;b$ we have $f(d)&gt;f(c)$. By the mean value theorem, $f(c)-f(a)=(c-a)f'(\xi)$ and $f(b)-f(d)=(b-d)f'(\eta)$ with $a&lt;\xi&lt;c$ and $d&lt;\eta&lt;b$. This implies $$f(b)\ge f(d)&gt;f(c)\ge f(a)$$ as desired.</p>
1,876,133
<p>Let x be an object that is not a set.</p> <p>Let S be a set.</p> <p>Would the following statement:</p> <p>x ⊆ S</p> <p>evaluate to False, or be considered not a well-formed statement (as x is not even a set)?</p>
Community
-1
<p>Because $f'(x) &gt; 0$ for all $x \gt x_0$, $f$ is strictly increasing on $(x_0, \infty)$. Now, because $f$ is continuous, it follows that $f$ is strictly increasing on $[x_0, \infty)$. Similarly, $f$ is strictly increasing on $(-\infty, x_0]$, and therefore $f$ is strictly increasing on $(-\infty, \infty)$.</p>
3,526,586
<p>1) Let <span class="math-container">$A \in \mathbb{R}^{n \times n}$</span> be a matrix with nonzero determinant. Show that there exists <span class="math-container">$c&gt;0$</span> so that for every <span class="math-container">$v \in \mathbb{R}^{n},\|A v\| \geq c\|v\|$</span></p> <p>My attempt: Since <span class="math-container">$A$</span> is invertible, we have <span class="math-container">$\frac{\|Av\|}{\|v\|}&gt;0$</span> for all <span class="math-container">$v \neq 0$</span>. But how can we fix a constant <span class="math-container">$c&gt;0$</span> ?</p>
Akshya Kumar
1,009,890
<p>$U=\operatorname{span}\{(1,1,1),(-1,2,-1)\}$. Observe that $(0,1,0)=\frac13(1,1,1)+\frac13(-1,2,-1)$. Since $(0,1,0)$ belongs to $U$, the orthogonal projection of $(0,1,0)$ onto $U$ is $(0,1,0)$ itself. Hence, option 1 is correct.</p>
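<p>As a quick sanity check (a sketch, not part of the original answer), the linear combination can be verified with exact rational arithmetic:</p>

```python
from fractions import Fraction as F

u1, u2 = (1, 1, 1), (-1, 2, -1)
# (1/3) u1 + (1/3) u2 should equal (0, 1, 0)
combo = tuple(F(1, 3) * a + F(1, 3) * b for a, b in zip(u1, u2))
assert combo == (0, 1, 0)  # (0,1,0) lies in U, so it projects to itself
```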
3,202,797
<p>Why is solving the system of equations <span class="math-container">$$1+x-y^2=0$$</span> <span class="math-container">$$y-x^2=0$$</span> the same as minimizing <span class="math-container">$$f(x,y)=(1+x-y^2)^2 + (y-x^2)^2$$</span></p> <p>Originally I thought it was because if you take the partial derivatives of <span class="math-container">$f(x,y)$</span> and set them equal to zero that is what you are doing in the system. But when I worked out the partial derivatives it was not clear that that is what was going on. </p> <p>Can someone clarify why they are equivalent?</p>
Mohammad Riazi-Kermani
514,496
<p>Note that <span class="math-container">$$ f(x,y)=(1+x-y^2)^2 + (y-x^2)^2$$</span> is sum of two squares which is always non-negative.</p> <p>The minimum value of <span class="math-container">$f(x,y)$</span> is zero which is attained when both squares are zero.</p> <p>Your system of equations are simply making the squares equal zero and finding the points at which the minimum is attained.</p>
39,684
<p>In order to have a good view of the whole mathematical landscape one might want to know a deep theorem from the main subjects (I think my view is too narrow so I want to extend it).</p> <p>For example in <em>natural number theory</em> it is good to know quadratic reciprocity and in <em>linear algebra</em> it's good to know the Cayley-Hamilton theorem (to give two examples).</p> <p>So, what is one (<strong>per post</strong>) deep and representative theorem of each subject that one can spend a couple of months or so to learn about? (In Combinatorics, Graph theory, Real Analysis, Logic, Differential Geometry, etc.)</p>
Gadi A
1,818
<p>It might be <strong>too</strong> obvious, but <a href="http://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_theorems" rel="nofollow">Gödel's incompleteness theorems</a> are certainly landmarks, not only because of their historical significance and "popularity" outside mathematics (where usually they are quoted in a wrong way, in a wrong context, in order to claim a wrong claim) but also because their ideas are actually the roots of Computability theory - and understanding both the theorems and their connections to computability might take some time indeed (but I wouldn't say a couple of months...)</p> <p>In complexity theory I'd say the <a href="http://en.wikipedia.org/wiki/Cook%E2%80%93Levin_theorem" rel="nofollow">Cook-Levin</a> theorem, which leads to the entire theory on NP-completeness, is a good representative. But then again, the theorem itself is mainly technical and not very illuminating - it's the implications that are important and need to be studied. </p> <p>Yet another main landmark of modern complexity theory is the <a href="http://en.wikipedia.org/wiki/PCP_theorem" rel="nofollow">PCP theorem</a> (actually PCP theorems - there are many versions). This is a very surprising and powerful result.</p>
39,684
<p>In order to have a good view of the whole mathematical landscape one might want to know a deep theorem from the main subjects (I think my view is too narrow so I want to extend it).</p> <p>For example in <em>natural number theory</em> it is good to know quadratic reciprocity and in <em>linear algebra</em> it's good to know the Cayley-Hamilton theorem (to give two examples).</p> <p>So, what is one (<strong>per post</strong>) deep and representative theorem of each subject that one can spend a couple of months or so to learn about? (In Combinatorics, Graph theory, Real Analysis, Logic, Differential Geometry, etc.)</p>
Arturo Magidin
742
<p>For <strong>Field Theory</strong>, we have <strong>The Fundamental Theorem of Galois Theory</strong>. It is a basic tool not only for fields, but for algebraic number theory among other fields, and it establishes a beautiful connection between group theory and field theory.</p>
410,905
<p>If A is real, symmetric, regular, positive definite matrix in $R^{n.n}$ and $x,h\in R^n$, why is it $\langle Ah,x\rangle = \langle h,A^T x\rangle =\langle Ax,h\rangle$? Is there some rule or theorem for this?</p>
DonAntonio
31,254
<p>I think this follows straightforward from definitions:</p> <p>$\,A\,$ is symmetric and real$\;\implies A^*=A^t=A\;$ , so since the inner product is real we get</p> <p>$$\langle x,y\rangle =\langle y,x\rangle \;\;\text{ and}\;\;\langle y,A^tx\rangle=\langle A^tx,y\rangle=\langle Ax,y\rangle\;\ldots $$</p>
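<p>A small numerical sketch of the identity (plain Python, illustrative only): for a symmetric real matrix $A$, the two sides $\langle Ah,x\rangle$ and $\langle h,Ax\rangle$ agree:</p>

```python
import random

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def matvec(A, v):
    return [dot(row, v) for row in A]

n = 3
random.seed(0)
M = [[random.random() for _ in range(n)] for _ in range(n)]
# A = M + M^T is symmetric by construction
A = [[M[i][j] + M[j][i] for j in range(n)] for i in range(n)]

h = [random.random() for _ in range(n)]
x = [random.random() for _ in range(n)]
# <Ah, x> = <h, A^T x> = <h, Ax> since A = A^T
assert abs(dot(matvec(A, h), x) - dot(h, matvec(A, x))) < 1e-12
```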
58,209
<p>Question: Of the following, which is the best approximation of $$\sqrt{1.5}(266)^{3/2}$$</p> <p>$$(A)~1,000~~~~(B)~2,700~~~~(C)~3,200~~~~(D)~4,100~~~~(E)~5,300$$</p> <p>I used $1.5\approx1.44=1.2^2$ and $266\approx256=16^2$. Therefore the approximation by me is $4096$, so I chose $(D)$ which is wrong. The correct answer is $(E)$.</p> <p>How should I find it out?</p>
Jacob
825
<p>$$\sqrt{1.5}(266)^{3/2} \approx \sqrt{\frac{16}{9}}(256)^{3/2} \approx \frac{4}{3} \times 4096 \approx 5460 $$</p> <p>Hence, $(E)$</p> <p><strong>N.B.</strong> <a href="https://math.stackexchange.com/questions/58209/how-to-do-this-approximation/58212#58212">kuch nahi's answer</a> is probably the "right" one ; this seemed more intuitive to me since I didn't need to compute $16^3$.</p>
1,416,041
<blockquote> <p>How many ways are there to place n indistinguishable balls into n urns so that exactly one urn is empty?</p> </blockquote> <p>Why is the answer to this question n(n-1)?</p>
Bamboo
155,045
<p>It does not matter which urn is empty, and there are $n$ urns, so we have $n$ choices of which urn to leave empty.</p> <p>Next, every other urn must have at least one ball in it (since we can only have one empty urn), so must distribute $n-1$ of the balls into these $n-1$ urns. We have one ball left and we can choose any of those $n-1$ urns to put the last ball into (it cannot go into the empty urn, or it would no longer be empty).</p> <p>Thus, we have $n$ choices of which urn is empty, and then $n-1$ choices of where to place the extra ball, giving $n \cdot (n-1)$ total ways to accomplish this.</p>
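<p>The count $n(n-1)$ can be confirmed by brute force for small $n$; the sketch below enumerates all distributions of $n$ indistinguishable balls into $n$ labeled urns (illustrative only):</p>

```python
from itertools import product

def count_exactly_one_empty(n):
    # a distribution is a tuple (c_1, ..., c_n) of ball counts summing to n;
    # we require exactly one urn to hold zero balls
    return sum(1 for c in product(range(n + 1), repeat=n)
               if sum(c) == n and c.count(0) == 1)

for n in range(2, 6):
    assert count_exactly_one_empty(n) == n * (n - 1)
```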
1,416,041
<blockquote> <p>How many ways are there to place n indistinguishable balls into n urns so that exactly one urn is empty?</p> </blockquote> <p>Why is the answer to this question n(n-1)?</p>
NeitherNor
262,655
<p>Hint: what is the maximal number of balls in a urn when the condition is satisfied, and how many urns will have the maximal number? Say you choose the urn having zero balls first. How many different choices do you have? Then, given that you have already chosen the urn with zero balls: how many choices do you have for the urns with the maximal amount of balls? </p>
1,840,352
<blockquote> <p>For every $A \subset \mathbb R^3$ we define $\mathcal{P}(A)\subset \mathbb R^2$ by $$ \mathcal{P}(A) := \{ (x,y) \mid \exists_{z \in \mathbb R}:(x,y,z) \in A \} \,. $$ Prove or disprove that $A$ closed $\implies$ $\mathcal{P}(A)$ closed and $A$ compact $\implies$ $\mathcal{P}(A)$ compact.</p> </blockquote> <p>This $\mathcal{P}(A)$ seems to me the projection on the $xy$-plane, so intuitively both make sense to me.</p> <p><strong>My try</strong></p> <p>For the first one, to prove something is closed seems easiest to do with sequences, so for all sequences $(x^{(n)})$ in $A$ with $x^{(n)} \to x$ we have that $x \in A$. But I do not know how to progress further in a useful direction. For the second one I don't even know where to start.</p> <p><strong>Update</strong></p> <p>For the first one I tried to argue that for any sequence going to $x \in A$, a projection of that sequence on the $xy$-plane will also go to the projection of $x$, so $x \in \mathcal{P}(A)$, so hence $\mathcal{P}(A)$ must be closed. </p>
DanielWainfleet
254,665
<p>Let $A=\{(x,0,z): xz=1\},$ which is closed in $R^3.$ Then $P(A)=\{(x,0):x\ne 0\},$ which is not closed in $R^2.$</p> <p>Observe that $P$ is Lipschitz-continuous, as $\|P(u)-P(v)\|=\|P(u-v)\|\leq \|u-v\|.$ The continuous image of a compact space is compact. So if $A\subset R^3$ is compact then $P(A)$ is compact.</p>
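<p>A numerical illustration of the first counterexample (a sketch): points of $A$ project arbitrarily close to the origin, yet $(0,0)\notin P(A)$, so $P(A)$ is not closed:</p>

```python
# A = {(x, 0, z) : xz = 1} is closed in R^3; sample it at x = 1/n
pts_in_A = [(1.0 / n, 0.0, float(n)) for n in range(1, 10001)]
assert all(abs(x * z - 1.0) < 1e-9 for (x, _, z) in pts_in_A)

# the projections (1/n, 0) converge to (0, 0), which is not in P(A)
projected = [(x, y) for (x, y, z) in pts_in_A]
assert abs(projected[-1][0]) < 1e-3 and projected[-1][0] != 0.0
```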
4,112,958
<p>This is a Number Theory problem about the extended Euclidean Algorithm I found:</p> <p>Use the extended Euclidean Algorithm to find all numbers smaller than <span class="math-container">$2040$</span> so that <span class="math-container">$51 | 71n-24$</span>.</p> <p>As the eEA always involves two variables so that <span class="math-container">$ax+by=gcd(a,b)$</span>, I am not entirely sure how it is applicable in any way to this problem. Can someone point me to a general solution to this kind of problem by using the extended Euclidean Algorithm? Also, is there maybe any other more efficient way to solve this than using the eEA?</p> <p>(Warning: I'm afraid I'm fundamentally not getting something about the eEA, because that section of the worksheet features a number of similiar one variable problems, which I am not able to solve at all.)</p> <p>I was thinking about using <span class="math-container">$71n-24=51x$</span>, rearranging that into <span class="math-container">$$71n-51x=24.$$</span> It now looks more like the eEA with <span class="math-container">$an+bx=gcd(a,b)$</span>, but <span class="math-container">$24$</span> isn‘t the <span class="math-container">$gcd$</span> of <span class="math-container">$71$</span> and <span class="math-container">$51$</span>...</p>
hamam_Abdallah
369,188
<p><strong>Other way</strong></p> <p><span class="math-container">$$71n\equiv 24\pmod {51}\iff$$</span> <span class="math-container">$$20n\equiv 24\pmod {51}\iff$$</span> <span class="math-container">$$5n\equiv 6\pmod {51}\iff$$</span> <span class="math-container">$$5n\equiv -45\pmod {51}\iff$$</span> <span class="math-container">$$n\equiv -9 \pmod {51}$$</span></p>
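<p>The conclusion $n\equiv -9\equiv 42\pmod{51}$ is easy to confirm by brute force over the range $0\le n&lt;2040$ (a check, not part of the derivation):</p>

```python
# 51 | 71n - 24  <=>  n ≡ -9 ≡ 42 (mod 51); note 2040 = 40 * 51
sols = [n for n in range(2040) if (71 * n - 24) % 51 == 0]
assert sols == list(range(42, 2040, 51))
assert len(sols) == 40
```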
644,935
<p>I'm having trouble integrating $3^x$ using the $px + q$ rule. Can someone please walk me through this?</p> <p>Thanks</p>
JPi
120,310
<p>If I'm correct in what you think the $px+q$ rule is then take $f(x)=e^x$, $p=\log 3$, and $q=0$, with $\log$ the natural logarithm (of course!).</p>
12,878
<p>I wish the comments didn't have a lower bound for characters. Many times all I want to say is "yes". Can someone explain to me what the purpose of this lower bound is?</p>
Alexander Gruber
12,952
<p>Situations warranting a simple "yes" or "no" exist, but they should be in the extreme minority. Discussions here should be more in depth than that. Comments are not a chat room. The limit exists to remind users to write good content, and the inconvenience of being unable to leave short comments pales, in my view, to a potential decrease in overall quality due to its removal.</p>
7,647
<p>Given a polyhedron consists of a list of vertices (<code>v</code>), a list of edges (<code>e</code>), and a list of surfaces connecting those edges (<code>s</code>), how to break the polyhedron into a list of tetrahedron?</p> <p>I have a convex polyhedron.</p>
Hugh Thomas
468
<p>If I understand your question correctly, you're saying that the given information is the face structure of a 3-dimensional convex polytope, and you would like a subdivision of the polytope into tetrahedra. </p> <p>Here is one way to proceed. First, subdivide all the faces into triangles. Then pick your favourite vertex $v_0$. Connect $v_0$ to each triangle belonging to a face not containing $v_0$. This subdivides your polytope into tetrahedra. </p>
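<p>As an illustration of this procedure (a sketch, assuming the faces have already been triangulated), the following subdivides the unit cube into six tetrahedra coned from the apex $v_0=(0,0,0)$ and checks that their volumes sum to the cube's volume:</p>

```python
# Unit cube vertices, indexed so that idx(x, y, z) = x*4 + y*2 + z
verts = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]

def idx(x, y, z):
    return x * 4 + y * 2 + z

# The three faces NOT containing v0 = (0,0,0), each split into 2 triangles
faces_away = [
    [(idx(1, 0, 0), idx(1, 1, 0), idx(1, 1, 1)),   # face x = 1
     (idx(1, 0, 0), idx(1, 1, 1), idx(1, 0, 1))],
    [(idx(0, 1, 0), idx(1, 1, 0), idx(1, 1, 1)),   # face y = 1
     (idx(0, 1, 0), idx(1, 1, 1), idx(0, 1, 1))],
    [(idx(0, 0, 1), idx(1, 0, 1), idx(1, 1, 1)),   # face z = 1
     (idx(0, 0, 1), idx(1, 1, 1), idx(0, 1, 1))],
]

def tet_volume(a, b, c, d):
    # |det of edge vectors from a| / 6
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    w = [d[i] - a[i] for i in range(3)]
    det = (u[0] * (v[1] * w[2] - v[2] * w[1])
           - u[1] * (v[0] * w[2] - v[2] * w[0])
           + u[2] * (v[0] * w[1] - v[1] * w[0]))
    return abs(det) / 6.0

v0 = verts[idx(0, 0, 0)]
vol = sum(tet_volume(v0, verts[i], verts[j], verts[k])
          for face in faces_away for (i, j, k) in face)
# six tetrahedra fill the unit cube
assert abs(vol - 1.0) < 1e-12
```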
4,059,420
<p>If f is holomorphic at every point on the open disc <span class="math-container">$$D=\{z:|z|\lt1\}$$</span> I want to show that f is constant</p>
johnnyb
298,360
<p>The symbol <span class="math-container">$\infty$</span>, as is currently used, simply means that the value is beyond the range supported by the real numbers. It is not itself a number. For instance, let's say that we only knew single-digit numbers. We might have values, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, and <span class="math-container">$\infty$</span>, to represent a value that is outside the numbers we were familiar with. That's what <span class="math-container">$\infty$</span> does. You can't do any sort of arithmetic with it, because it isn't actually specific enough to be a value. It's like when your calculator says &quot;error&quot;. You can't actually add two &quot;error&quot; values together, because they aren't really numbers.</p> <p>However, there are number systems that are beyond the real numbers, and include larger values than are included in the real numbers. Some of these systems include Cantor's transfinite numbers, the surreal numbers, and various hyperreal systems.</p> <p>Hyperreal numbers are my favorite, as you can basically treat infinities as ordinary algebraic objects. You simply have a symbol to represent some particular infinite value (since we can't actually count to infinity, I can't actually tell you <em>which</em> infinite value it represents), and then treat it algebraically. I usually use <span class="math-container">$\omega$</span> for this.</p> <p>So, in this system, <span class="math-container">$\omega + 1$</span> is a distinct number from <span class="math-container">$\omega$</span>, as is <span class="math-container">$2\omega$</span> or <span class="math-container">$\omega^2$</span>. You can get infinitesimal numbers by dividing by <span class="math-container">$\omega$</span>. <span class="math-container">$\frac{1}{\omega}$</span> is closer to zero than any real number, but is still non-zero. 
It's really easy to work with, and also simplifies many cases where real numbers have to have a lot of theorems to decide if the value can be contained by the real numbers. In hyperreal numbers, all numbers essentially act as &quot;finite&quot; numbers even if they are infinite, so all of theorems around deciding if a value is finite or not are largely moot.</p>
398,857
<p>Please help me solve this and please tell me how to do it..</p> <p>$12345234 \times 23123345 \pmod {31} = $?</p> <p>edit: please show me how to do it on a calculator not a computer thanks:)</p>
André Nicolas
6,312
<p>We want to replace these big numbers by much smaller ones that have the same remainder on division by $31$.</p> <p>Take your first big number $12345234$. Divide by $31$ with the calculator. My display reads $398233.3548$. Subtract $398233$. My calculator shows $0.354838$. Multiply by $31$. The calculator gives $10.999978$. If it were perfect, it would say $11$. </p> <p>Do the same with the other big number. My calculator again says $10.99978$, and if perfect would say $11$. </p> <p>Multiply $11$ by $11$, and find the remainder when $121$ is divided by $31$. Again, we could use the calculator. But it can be done in one's head. The remainder when we divide by $31$ is $28$. </p> <p><strong>Remark:</strong> It can be fairly important not to write down an intermediate calculator result, and rekey. The reason is that the calculator internally keeps <em>guard digits</em>, that is, it is calculating to greater precision than it displays. If you rekey, typing in only digits that you <em>see</em>, you will lose some of the built-in accuracy that you paid for. For similar reasons, it is useful to learn to make use of the <em>memory</em> features of your calculator.</p> <p>Let's see why the procedure we used works. When we divide $12345234$ by $31$, the stuff before the decimal point is the <em>quotient</em>. The remainder is unfortunately not given directly, but what's "left over" is (approximately) $0.354838$. This is a decimal representation of $\frac{r}{31}$, where $r$ is the (integer) remainder. To recover $r$, we multiply $0.354838$ by $31$. Because of internal roundoff, we usually don't get an integer, but most of the time, if the divisor (here $31$) is not too large, we get an answer that is very close to an integer, so we can quickly decide what $r$ must be. </p>
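<p>For reference, the exact integer computation that the calculator only approximates can be sketched as:</p>

```python
a, b, m = 12345234, 23123345, 31
# both big numbers leave remainder 11 on division by 31
assert a % m == 11 and b % m == 11
# so the product leaves the same remainder as 11 * 11 = 121
assert (a * b) % m == 121 % 31 == 28
```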
679,544
<p>How to prove this for positive real numbers? $$a+b+c\leq\frac{a^3}{bc}+\frac{b^3}{ca}+\frac{c^3}{ab}$$</p> <p>I tried AM-GM, CS inequality but all failed.</p>
Michael Rozenberg
190,319
<p>Another way.</p> <p>By Rearrangement <span class="math-container">$$\sum_{cyc}\left(\frac{a^3}{bc}-a\right)=\sum_{cyc}\frac{a}{bc}(a^2-bc)\geq\frac{1}{3}\sum_{cyc}\frac{a}{bc}\sum_{cyc}(a^2-bc)=\frac{1}{6}\sum_{cyc}\frac{a}{bc}\sum_{cyc}(a-b)^2\geq0.$$</span></p>
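<p>A randomized numerical check of the inequality (illustrative only, not a proof):</p>

```python
import random

random.seed(1)
for _ in range(10000):
    a, b, c = (random.uniform(0.01, 10.0) for _ in range(3))
    lhs = a + b + c
    rhs = a ** 3 / (b * c) + b ** 3 / (c * a) + c ** 3 / (a * b)
    # allow a tiny tolerance for floating-point error near equality a=b=c
    assert lhs <= rhs * (1 + 1e-12) + 1e-12
```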
552,238
<p>Reasons that $(\mathbb R^n, +, \mathcal Z)$ is not a topological group: Given any two distinct points $\vec{p},\vec q \in \mathbb R ^n$ let $P$ be the unique hyperplane through $\vec p$ which is perpendicular to the vector $\vec p- \vec q$. $P$ is closed with respect to $\mathcal Z$ and hence its complement is an open set containing $\vec q$ but not $\vec p$. Therefore the space is $T_0$. If it were a topological group, this would imply it were Hausdorff, which is not the case.</p> <p>Reasons that $(\mathbb R^n, +, \mathcal Z)$ should be a topological group. It is both a group and a topological space. Moreover all polynomials $(\mathbb R^n, \mathcal Z) \to (\mathbb R, \mathcal Z)$ are continuous. So any map of the form $\vec x \mapsto (p_1(\vec x), p_2(\vec x), \ldots, p_n(\vec x)) $ or $(\vec x, \vec y ) \mapsto (p_1(\vec x, \vec y), p_2(\vec x, \vec y), \ldots, p_n(\vec x, \vec y)) $, where the $p_i$ are polynomials, is continuous, and the operations of addition and negation are both of this form.</p> <p>Could someone point out where my misapprehension lies?</p>
Alex Youcis
16,497
<p>The problem is that the product topology is not the same as the Zariski topology. So, while polynomial maps $\mathbb{R}^{2n}\to\mathbb{R}$ are continuous, this is with respect to the Zariski topology on $\mathbb{R}^{2n}$, not the product topology.</p> <p>In fact, every $T_0$ topological group is already Hausdorff. Every variety over $\mathbb{R}$ is $T_0$ but is not Hausdorff.</p> <p><strong>EDIT:</strong> Let me emphasize precisely what goes wrong. In your argument, you claim that the map $\mathbb{R}^n\times\mathbb{R}^n\to \mathbb{R}^n$ given by $(x,y)\mapsto x-y$ is continuous, since it has polynomial coefficients, where $\mathbb{R}^n\times\mathbb{R}^n$ is given the product topology. If this were true, then the preimage of $0$ would be closed, since $0$ is closed in $\mathbb{R}^n$ with the Zariski topology. But, the preimage is precisely the diagonal $\Delta_{\mathbb{R}^n}\subseteq\mathbb{R}^n\times\mathbb{R}^n$. But, the diagonal being closed is equivalent to being Hausdorff, and since $\mathbb{R}^n$ is not Hausdorff with the Zariski topology, this is a contradiction. Thus, the map $(x,y)\mapsto x-y$ is <b>not</b> continuous. </p>
552,238
<p>Reasons that $(\mathbb R^n, +, \mathcal Z)$ is not a topological group: Given any two distinct points $\vec{p},\vec q \in \mathbb R ^n$ let $P$ be the unique hyperplane through $\vec p$ which is perpendicular to the vector $\vec p- \vec q$. $P$ is closed with respect to $\mathcal Z$ and hence its complement is an open set containing $\vec q$ but not $\vec p$. Therefore the space is $T_0$. If it were a topological group, this would imply it were Hausdorff, which is not the case.</p> <p>Reasons that $(\mathbb R^n, +, \mathcal Z)$ should be a topological group. It is both a group and a topological space. Moreover all polynomials $(\mathbb R^n, \mathcal Z) \to (\mathbb R, \mathcal Z)$ are continuous. So any map of the form $\vec x \mapsto (p_1(\vec x), p_2(\vec x), \ldots, p_n(\vec x)) $ or $(\vec x, \vec y ) \mapsto (p_1(\vec x, \vec y), p_2(\vec x, \vec y), \ldots, p_n(\vec x, \vec y)) $, where the $p_i$ are polynomials, is continuous, and the operations of addition and negation are both of this form.</p> <p>Could someone point out where my misapprehension lies?</p>
Martin Brandenburg
1,650
<p>Let me mention that the correct "replacement" for the usual euclidean space in algebraic geometry is the affine space. In particular, this means that you don't have the product topology of the underlying topological spaces (which wouldn't make much sense anyway since this doesn't incorporate the algebraic structure). Also, group objects can be defined in any category with products (actually you don't need products). Then $(\mathbb{R}^n,+)$ is a group object in $\mathrm{Top}$ i.e. a <em>topological group</em>, whereas $(\mathbb{A}^n,+)$ is a group object in $\mathrm{Var}$, i.e. an <em>algebraic group</em>. You can write this group as $(\mathbb{G}_a)^n$ (in order to distinguish it from $\mathbb{A}^n$ which usually has no group structure attached to it). It should not be confused with the $n$-dimensional torus, which is the multiplicative group $(\mathbb{G}_m)^n$.</p>
2,031,964
<p>I am required to prove that the sequence $$a_1=0, a_{n+1}=(a_n+1)/3, n \in N$$ is bounded from above and is monotonically increasing through induction, and to calculate its limit. Proving that it's monotonically increasing was simple enough, but I don't quite understand how I can prove that it's bounded from above through induction, although I can see that it is bounded. It's a fairly new topic to me. I would appreciate any help on this.</p>
hamam_Abdallah
369,188
<p><strong>Hint</strong></p> <p>Let $f(x)=\frac{x+1}{3}$</p> <p>$f$ has one fixed point $L=\frac{1}{2}=f(L)$</p> <p>Now, you can prove by induction that</p> <p>$$\forall n\geq 0\;\;a_n\leq L$$</p> <p>using the fact that $f$ is increasing at $\; \mathbb R$ and $a_{n+1}=f(a_n)$.</p>
2,031,964
<p>I am required to prove that the sequence $$a_1=0, a_{n+1}=(a_n+1)/3, n \in N$$ is bounded from above and is monotonically increasing through induction, and to calculate its limit. Proving that it's monotonically increasing was simple enough, but I don't quite understand how I can prove that it's bounded from above through induction, although I can see that it is bounded. It's a fairly new topic to me. I would appreciate any help on this.</p>
Simply Beautiful Art
272,831
<p>See that for any $a_n&lt;\frac12$, we have</p> <p>$$a_{n+1}=\frac{a_n+1}3&lt;\frac{\frac12+1}3=\frac12$$</p> <p>Thus, since $a_1&lt;\frac12$, it follows that $a_2&lt;\frac12$, etc., by induction.</p> <hr> <p>We choose $\frac12$ since, when solving $a_{n+1}=a_n$, we obtain $a_n=\frac12$, the limit of our sequence.</p>
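<p>This behaviour is easy to confirm with exact rational arithmetic (a sketch, not part of the proof): the iterates increase strictly and converge to the fixed point $\frac12$:</p>

```python
from fractions import Fraction as F

a, half = F(0), F(1, 2)          # a_1 = 0
for _ in range(50):
    nxt = (a + 1) / 3
    assert a < nxt < half        # strictly increasing, bounded above by 1/2
    a = nxt
# the error shrinks by a factor of 3 each step, so convergence is fast
assert half - a < F(1, 10 ** 20)
```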
399,804
<p>The Question was:</p> <blockquote> <p>Express $2\cos{X} = \sin{X}$ in terms of $\sin{X}$ only.</p> </blockquote> <p>I have had dealings with similar problems but for some reason, due to I believe a minor oversight, I am terribly vexed.</p>
Double AA
22,307
<p>Try using $\cos^2{x} + \sin^2{x} =1$ or some other known identity, but solving for $\cos{x}$ in terms of $\sin{x}$. Then just substitute.</p>
1,057,675
<p>I was asked to prove that $\lim\limits_{x\to\infty}\frac{x^n}{a^x}=0$ when $n$ is some natural number and $a&gt;1$. However, taking second and third derivatives according to L'Hôpital's rule didn't bring any fresh insights nor did it clarify anything. How can this be proven? </p>
Luiz Cordeiro
58,818
<p>In some cases, it is interesting (or simply makes notation simpler) to consider the extended real numbers: $\overline{\mathbb{R}}=[-\infty,\infty]$.</p> <p>Formally, pick your favorite objects which are not real numbers, and let's conveniently call them $-\infty$ and $\infty$. Define the set $\overline{\mathbb{R}}=\mathbb{R}\cup\left\{-\infty,\infty\right\}$. Define the following order on $\overline{\mathbb{R}}$: If $a$ and $b$ are real numbers, $a\leq b$ is the usual order in $\mathbb{R}$. For all $a,b\in\overline{\mathbb{R}}$, we also define $-\infty\leq b$ and $a\leq\infty$. This says, intuitively, that $\infty$ is "infinitely large" and $-\infty$ is "infinitely negative" (not small, because this may cause some confusion).</p> <p>If we were working only with $\mathbb{R}$, we have to specify what the notation $(a,\infty)$ means: the symbol $\infty$ alone does not have any meaning in $\mathbb{R}$. However, in $\overline{\mathbb{R}}$, $\infty$ has a meaning, and the interval $(a,\infty)$ is simply $\left\{b\in\overline{\mathbb{R}}:a&lt;b&lt;\infty\right\}$. Similarly, we may speak of the intervals $(a,\infty]$, $[-\infty,b)$, etc...</p> <p>The space of extended real numbers has several nice properties. Specifically, it is isomorphic (in almost every analytical sense) to the interval $[0,1]$, which is certainly one of the nicest spaces out there.</p>
58,772
<p>Is there any general way to find out the coefficients of a polynomial?</p> <p>Say for e.g.<br> $(x-a)(x-b)$: the constant term is $ab$, the coefficient of $x$ is $-(a+b)$ and the coefficient of $x^2$ is $1$.</p> <p>I have a polynomial $(x-a)(x-b)(x-c)$. What if the product is extended to $n$ terms?</p>
Geoff Robinson
13,147
<p>This question leads naturally to the elementary symmetric polynomials. If we have a polynomial such as $p(x) = (x - t_1)(x-t_2) \ldots (x-t_n) = \prod_{j=1}^{n}(x-t_i)$, we may write it as $x^{n} + \sum_{j=1}^{n} (-1)^{j}x^{n-j}s_{j}(t_1,\ldots ,t_n).$ Here $t_1,t_2,\ldots ,t_n$ can be elements of any (commutative) ring $R,$ and $x^{0}$ is read as 1.The term $s_{j}(t_1,t_2, \ldots,t_n)$ is a sum of $\left(\begin{array}{clcr} n\\j \end{array} \right)$ terms, and is the sum of all possible products of any $j$ of the $t_i$'s.</p> <p>Another interesting direction to pursue (though I will not do it here) is that of Newton's identities, which relates the power sum polynomials to the elementary symmetric polynomials. The $k$-th power sum polynomial of the $t_i$ is $\sum_{j=1}^{n} t_{j}^{k}$. The power sum polynomials are expressible in terms of the elementary symmetric polynomials, and if division by every integer can be performed within the ring $R,$ then by Newton's identities, the elementary symmetric polynomials can be expressed (as polynomials) in terms of the power sums. For example, $t_1 t_2 = \frac{1}{2}[(t_1 +t_2)^2 - (t_1^2 + t_2 ^2)].$ </p>
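<p>The correspondence between the coefficients and the elementary symmetric polynomials can be checked numerically; the sketch below expands $\prod_{j}(x-t_j)$ and compares each coefficient with $(-1)^j s_j$, plus the small Newton-identity example from the last paragraph:</p>

```python
from itertools import combinations
from math import prod

def poly_from_roots(roots):
    """Expand prod_j (x - t_j); coefficients listed from x^n down to x^0."""
    coeffs = [1]
    for t in roots:
        nxt = coeffs + [0]          # multiply current polynomial by x
        for i, c in enumerate(coeffs):
            nxt[i + 1] -= t * c     # ... and subtract t times it
        coeffs = nxt
    return coeffs

def elem_sym(roots, j):
    # s_j: the sum of all products of j of the t_i's
    return sum(prod(c) for c in combinations(roots, j))

ts = [2, 3, 5, 7]
cs = poly_from_roots(ts)
# p(x) = x^4 + sum_{j>=1} (-1)^j s_j x^{4-j}
assert cs[0] == 1
assert all(cs[j] == (-1) ** j * elem_sym(ts, j) for j in range(1, 5))

# Newton-identity example: t1*t2 = ((t1+t2)^2 - (t1^2+t2^2)) / 2
t1, t2 = 2, 3
assert t1 * t2 == ((t1 + t2) ** 2 - (t1 ** 2 + t2 ** 2)) // 2
```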
2,208,113
<p>Let $x$ and $y \in \mathbb{R}^{n}$ be non-zero column vectors, and form the matrix $A=xy^{T}$, where $y^{T}$ is the transpose of $y$. Then the rank of $A$ is ?</p> <hr> <p>I am getting $1$, but need confirmation .</p>
John Wayland Bales
246,513
<p>If there is a solution, the symmetry of the equations suggests we investigate the possibility that $x=y=z$.</p> <p>So assume that $x=y=z=a$.</p> <p>$$2a=\sqrt{4a-1}$$</p> <p>Thus $4a^2-4a+1=0$, or $(2a-1)^2=0$. Thus $x=y=z=\frac{1}{2}$.</p> <p>This is a solution. But, as per the comment of dxiv below, this does not rule out solutions where $x\ne y$, $y\ne z$, $z\ne x$.</p>
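<p>A quick check of the symmetric candidate (a sketch; it verifies the value, not uniqueness):</p>

```python
from math import sqrt

a = 0.5
# x = y = z = 1/2 satisfies 2a = sqrt(4a - 1), both sides equal 1
assert 2 * a == sqrt(4 * a - 1) == 1.0
# and a = 1/2 is the double root of 4a^2 - 4a + 1 = (2a - 1)^2
assert 4 * a ** 2 - 4 * a + 1 == 0
```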
1,556,298
<p>If we have $p\implies q$, then the only case the logical value of this implication is false is when $p$ is true, but $q$ false.</p> <p>So suppose I have a broken soda machine - it will never give me any can of coke, no matter if I insert some coins in it or not.</p> <p>Let $p$ be 'I insert a coin', and $q$ - 'I get a can of coke'.</p> <p>So even though the logical value of $p \implies q$ is true (when $p$ and $q$ are false), it doesn't mean the implication itself is true, right? As I said, $p \implies q$ has a logical value $1$, but implication is true when it matches the truth table of implication. And in this case, it won't, because $p \implies q$ is <strong>false</strong> for true $p$ (the machine doesn't work).</p> <p>That's why I think it's not right to say the implication is true based on only one row of the truth table. Does it make sense?</p>
skyking
265,767
<p>You have to distinguish between cases where the truth values of $p$ and $q$ are given and cases where they're not. If the truth values of both $p$ and $q$ are given, then only that one row of the truth table is relevant. If, on the other hand, they are not given (or only partially given), then every row consistent with what is known is relevant.</p> <p>For example, if all we know is that $q$ is true, you have to consider both rows in which $q$ is true, and $p\rightarrow q$ happens to be true in both of them.</p> <p>In your example, suppose you don't insert a coin; then it's known that $p$ is false (and $q$ too), so you only need to consider that row of the truth table.</p> <p>$$\begin{matrix} p &amp; q &amp; p\rightarrow q\\ \hline F &amp; F &amp; T \\ F &amp; T &amp; T \\ T &amp; F &amp; F \\ T &amp; T &amp; T \\ \end{matrix}$$</p>
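A mechanical way to see which rows matter is to enumerate the whole table and filter by what is known; a tiny Python illustration (my own addition, not from the answer):

```python
from itertools import product

def implies(p, q):
    # material implication: false only when p is true and q is false
    return (not p) or q

# the full truth table for p -> q, in the order (F,F), (F,T), (T,F), (T,T)
table = [(p, q, implies(p, q)) for p, q in product([False, True], repeat=2)]

# if all we know is that q is true, only the rows with q True are relevant
relevant = [row for row in table if row[1]]
```

Here `relevant` contains exactly the two rows in which $q$ is true, and $p\rightarrow q$ comes out true in both, as the answer says.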
1,413,145
<p>I would like to find a way to show that the sequence $a_n=\big(1+\frac{1}{n}\big)^n+\frac{1}{n}$ is eventually increasing.</p> <p>$\hspace{.3 in}$(Numerical evidence suggests that $a_n&lt;a_{n+1}$ for $n\ge6$.)</p> <p>I was led to this problem by trying to prove by induction that $\big(1+\frac{1}{n}\big)^n\le3-\frac{1}{n}$, as in</p> <p>$\hspace{.4 in}$ <a href="https://math.stackexchange.com/questions/1087545/a-simple-proof-that-bigl1-frac1n-bigrn-leq3-frac1n">A simple proof that $\bigl(1+\frac1n\bigr)^n\leq3-\frac1n$?</a></p>
Arin Chaudhuri
404
<p>Let $a_n = (1 + 1/n)^n.$ </p> <p>We want to show $a_{n+1} - a_{n} \geq \dfrac{1}{n(n+1)}$ for large $n$. </p> <p>$\dfrac{a_{n+1}}{a_n} = \left(1 + \dfrac{1}{n}\right) \left(1 - \dfrac{1}{(n+1)^2}\right)^{n+1}.$</p> <p>The RHS can be expanded as</p> <p>$\left(1 + \dfrac{1}{n}\right) \left(1 - \dfrac{1}{(n+1)^2}\right)^{n+1} = \dfrac{n+1}{n} \times \left( \underbrace{1 - \dfrac{1}{n+1}} + \dfrac{1}{2!(n+1)^2}\underbrace{\left(1 - \dfrac{1}{n+1}\right)} - \dfrac{1}{3!(n+1)^3}\underbrace{\left(1 - \dfrac{1}{n+1}\right)} \left(1 - \dfrac{2}{n+1} \right) + \dots (-1)^{n+1} \dfrac{1}{(n+1)!(n+1)^{n+1}}\underbrace{\left(1 - \dfrac{1}{n+1}\right)} \left(1 - \dfrac{2}{n+1} \right) \dots\left(1 - \dfrac{n}{n+1}\right)\right).$</p> <p>Since $ \dfrac{n+1}{n} \times (1-\dfrac{1}{n+1}) = 1$, we have $\dfrac{a_{n+1}}{a_n} = 1 + \dfrac{1}{2!(n+1)^2} - \dfrac{1}{3!(n+1)^3} \left(1 - \dfrac{2}{n+1} \right) + \dots$</p> <p>So,</p> <p>$|\dfrac{a_{n+1}}{a_n} - 1 - \dfrac{1}{2(n+1)^2}| \leq \dfrac{1}{3!(n+1)^3} + \dfrac{1}{4!(n+1)^4} + \dots \leq \dfrac{1}{6(n+1)^2n}.$</p> <p>This implies $$ (n+1)^2 \left( \dfrac{a_{n+1}}{a_n} - 1 \right) \to 1/2$$ so upon multiplying the above by $na_n/(n+1)$ $$ n(n+1)(a_{n+1} - a_n) \to e/2 &gt; 1.$$ Hence, $a_{n+1} - a_n \geq 1/n(n+1)$ for all large n.</p>
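A quick numerical sanity check of the final inequality (here $a_n$ denotes $(1+1/n)^n$ as in this answer, without the question's extra $1/n$ term): since $\frac1n-\frac1{n+1}=\frac1{n(n+1)}$, the claim $n(n+1)(a_{n+1}-a_n)>1$ is exactly the statement that the question's sequence increases from $n$ to $n+1$. A small Python check (illustration only):

```python
def a(n):
    # this answer's a_n = (1 + 1/n)^n, without the question's extra 1/n
    return (1 + 1/n) ** n

def gap(n):
    # n(n+1)(a_{n+1} - a_n); the answer shows this tends to e/2 > 1
    return n * (n + 1) * (a(n + 1) - a(n))
```

Numerically `gap(5)` is just below $1$ while `gap(n) > 1` for $6 \le n \le 500$, matching the observed threshold $n \ge 6$, and `gap(n)` approaches $e/2 \approx 1.359$ for large $n$.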
1,734,419
<p>I have tried to show that $2730 \mid n^{13}-n$ using Fermat's little theorem, but I can't succeed, nor at least write $2730$ in the form $n^p-n$.</p> <p><strong>My question here</strong>: How do I show that $2730$ divides $n^{13}-n$ for every integer $n$?</p> <p>Thank you for any help.</p>
André Nicolas
6,312
<p>Hint: Use Fermat's Theorem, working separately modulo $2$, $3$, $5$, $7$, and $13$. (Note that $2730=2\cdot 3\cdot 5\cdot 7\cdot 13$.)</p>
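Since $n^{13}-n \bmod 2730$ depends only on $n \bmod 2730$, the claim can also be confirmed exhaustively by machine (a verification, not a proof in the spirit of the hint). Note why Fermat applies: $2730 = 2\cdot3\cdot5\cdot7\cdot13$, and $p-1$ divides $12$ for each of these primes. A brute-force Python check:

```python
def divides_all(m, k=13):
    # check n^k = n (mod m) for every residue n mod m;
    # this settles the claim for all integers n
    return all((pow(n, k, m) - n) % m == 0 for n in range(m))
```

`divides_all(2730)` returns `True`, while e.g. `divides_all(11)` is `False` (because $11-1=10$ does not divide $12$), so $2730$ is, in this sense, as large as the argument allows.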
8,695
<p>I have a parametric plot showing a path of an object in x and y (position), where each is a function of t (time), on which I would like to put a time tick, every second let's say. This would be to indicate where the object is moving fast (widely spaced ticks) or slow (closely spaced ticks). Each tick would just be short line that crosses the plot at that point in time, where that short line is normal to the plotted curve at that location.</p> <p>I'm sure I can figure out a way to do it using lots of calculations and graphics primitives, but I'm wondering if there is something built-in that I have missed in the documentation that would make this easier.</p> <p>(Note: this is about ticks on the plotted curve itself -- this doesn't have anything to do with the ticks on the axes or frame.)</p>
jVincent
1,194
<p>One way to do it would be to work out the tick segments analytically yourself, and then create a <code>Graphics[]</code> with all the lines. Here is a rough working example that's specific to this one expression and scales the ticks according to the derivative of the function, but it outlines the gist of the idea. </p> <pre><code>Show[ Line /@ Table[ Evaluate[ {Cos[t^2], Sin[t^2]} + (0.1/2 tickpoint D[{Cos[t^2], Sin[t^2]}, t] // {#[[2]], -#[[1]]} &amp;) ] , {t, 0, 2.5, 0.05}, {tickpoint, {-1, 1}}] // Graphics, ParametricPlot[{Cos[t^2], Sin[t^2]}, {t, 0, 2.5}] ] </code></pre> <p><img src="https://i.stack.imgur.com/3t3Zy.png" alt="ParametricPlot combined with ticks showing regular intervals of the parameter"></p>
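The underlying geometry is independent of Mathematica: at each tick time $t$, take the velocity $(x'(t),y'(t))$, rotate it by $90^\circ$ to get the normal direction, and draw a short segment through the curve point. A Python sketch of just the tick computation for this curve $(\cos t^2,\sin t^2)$ (my own illustration, no plotting; unlike the answer's version it normalizes the ticks to a fixed length rather than scaling them by speed):

```python
import math

def tick_segment(t, half_len=0.05):
    """Endpoints of a short tick normal to (cos t^2, sin t^2) at parameter t."""
    p = (math.cos(t**2), math.sin(t**2))
    # velocity: d/dt (cos t^2, sin t^2) = (-2 t sin t^2, 2 t cos t^2)
    v = (-2 * t * math.sin(t**2), 2 * t * math.cos(t**2))
    n = (v[1], -v[0])                 # rotate the velocity by 90 degrees
    s = math.hypot(*n) or 1.0         # normalize (guard against t = 0)
    n = (n[0] / s, n[1] / s)
    return ((p[0] - half_len * n[0], p[1] - half_len * n[1]),
            (p[0] + half_len * n[0], p[1] + half_len * n[1]))
```

Each returned segment is centered on the curve point and perpendicular to the motion there, which is exactly what the normal-tick construction requires.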
28,532
<p><code>MapIndexed</code> is a very handy built-in function. Suppose that I have the following list, called <code>list</code>:</p> <pre><code>list = {10, 20, 30, 40}; </code></pre> <p>I can use <code>MapIndexed</code> to map an arbitrary function <code>f</code> across <code>list</code>:</p> <pre><code>{f[10, {1}], f[20, {2}], f[30, {3}], f[40, {4}]} </code></pre> <p>where the second argument to <code>f</code> is the part specification of each element of the list.</p> <p>But, now, what if I would like to use <code>MapIndexed</code> only at certain elements? Suppose, for example, that I want to apply <code>MapIndexed</code> to only the second and third elements of <code>list</code>, obtaining the following:</p> <pre><code>{10, f[20, {2}], f[30, {3}], 40} </code></pre> <p>Unfortunately, there is no built-in "<code>MapAtIndexed</code>", as far as I can tell. What is a simple way to accomplish this? Thanks for your time.</p>
Kuba
5,478
<p>I'm sure one can improve the following solution</p> <pre><code>SetAttributes[mapIndexedAt, HoldRest]; mapIndexedAt[f_, list_, pos_] := Do[list = MapAt[f[#, pos[[i]]] &amp;, list, pos[[i]]] , {i, Length@pos}] l = {1, 1, 1, 1}; mapIndexedAt[(#1 + #2) &amp;, l, {2, 3}] </code></pre> <blockquote> <pre><code>{1, 3, 4, 1} </code></pre> </blockquote> <p>It does not look good, but at least it is not scanning through the list.</p> <p>A little variation with <code>Fold</code>:</p> <pre><code>f = #1 + #2 &amp; Fold[ReplacePart[#1, #2 -&gt; f[#1[[#2]], #2]] &amp;, l, {2, 3}] </code></pre> <blockquote> <pre><code>{1, 3, 4, 1} </code></pre> </blockquote>
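For comparison, the same "map-indexed-at" idea expressed in Python (a hypothetical helper of my own, using 1-based positions to match Mathematica's indexing):

```python
def map_indexed_at(f, lst, positions):
    """Apply f(value, index) only at the given 1-based positions."""
    pos = set(positions)
    return [f(v, i) if i in pos else v
            for i, v in enumerate(lst, start=1)]
```

With `f = lambda v, i: v + i` on `[1, 1, 1, 1]` at positions `[2, 3]` this reproduces the `{1, 3, 4, 1}` output above.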
113,797
<p>I'm trying to extract every 21st character from this text, s (given below), to create new strings of all 1st characters, 2nd characters, etc.</p> <p>I have already separated the long string into substrings of 21 characters each using</p> <pre><code> splitstring[String : str_, n_] := StringJoin @@@ Partition[Characters[str], n, n, 1, {}] </code></pre> <p>giving:</p> <pre><code> {"ALJJJEAQZJMZKOZDKEHBL", "XLPXNEHZCSEJVVLWHTUDJ", \ "WFYXKKMWNNTNPHDTMGIOP", "OOSYPXGTLOHOPHTDHBHWO", \ "MWGSKXSTNNEYSQHRSGPKP", "CJBNVIYCZHIVPFSWCKFPJ", \ "OZQLNGPTLCIALHMBIGUOP", "ESYNDGACTURTALHLSGFBR", \ "LPRMYKFQFXTEEZQHIUMOC", "CLSUIKWYLRPRRJZWCKUOW", \ "WJLLVYEJXNIEMDQYQTDFC", "HJQLYKFQJTKUCICIRKOWX", \ "PHLWYCCDRKVPAHCYPZFRL", "CNYBIOEWWGHQQBCDHGPRW", \ "WHWIUOQTYOJLHGLFRTTVL", "CQCIUEHZYJKJEWHOVMYOM", \ "JOBTIHCOSGCZVZJFEYHAC", "JQCQFSTWLOHYPXZDHBHTW", \ "TLGIUJIWEXSJGKLTMKAFF", "PYGYICGSPRPLPOZNKPUSH", \ "NSMQCGPWHILVXLZARZLIS", "POBDNEXJYMESQDTAQWKWL", \ "WPBIDCCJXKHQCOTAJXBVZ", "WWDNCCCCWACQHZJREEHRO", \ "LQSSRCVPFHCJCGLURBHFM", "POYKFKWRHKVGLLSYRTISC", \ "ESYKXCEYFNSAHUHYIBJVL", "WZLKPJHQJEQQVHNHSEHTO", \ "TNFJCRMZCCIEXKSALTMFP", "YTLGCORTMRIAILOISWKRJ", \ "DTMRXGVYHEHRSIPRIWKOX", "SLBQVJHSZTMZMIHRAFFTJ", \ "CTMSYGADQEHQXUZIQHLZJ", "OOGRZGVHLKPTIUEHHKHKT", \ "CHCNCMMYLIMBWWHNKXBQP", "DEWQCIVZRNETVVPFCVYST", \ "RTYZYEPWZTMQHLNHSGPRO", "TNFJCRRLNNPRHGYARRJVW", \ "HJBIFTAPWRVUCZODYYLGF", "COBDWKMDTGJCEBDTVRDRO", \ "HJQLPLVHJYKNJSLGEBZDL", "OOWKROWOOOJRXKRAMYMMM", \ "FOBFTGTSLHIGLQLWVVLTL", "TDYBEGVNJLEAQDPRQXKRH", \ "WOGIUCPLCJEAJBYGLTSCY", "OCUDECCQCURAELOAPEHKP", \ "YJBIVOPWZTEVHJHNEYNMX", "CFSHYKPPWCGUMEWYKNHZW", \ "JQSWCRANSOALVIJLPRZDL", "YOFDJVCDHTAVAHTRMTBHP", "RJZBIOEOSCR"} </code></pre> <p>How can I make strings of all the first characters, second characters, etc.?</p> <pre><code> s=ALJJJEAQZJMZKOZDKEHBLXLPXNEHZCSEJVVLWHTUDJWFYXKKMWNNTNPHDTMGIOPOOSYPXG\ TLOHOPHTDHBHWOMWGSKXSTNNEYSQHRSGPKPCJBNVIYCZHIVPFSWCKFPJOZQLNGPTLCIALH\ 
MBIGUOPESYNDGACTURTALHLSGFBRLPRMYKFQFXTEEZQHIUMOCCLSUIKWYLRPRRJZWCKUOW\ WJLLVYEJXNIEMDQYQTDFCHJQLYKFQJTKUCICIRKOWXPHLWYCCDRKVPAHCYPZFRLCNYBIOE\ WWGHQQBCDHGPRWWHWIUOQTYOJLHGLFRTTVLCQCIUEHZYJKJEWHOVMYOMJOBTIHCOSGCZVZ\ JFEYHACJQCQFSTWLOHYPXZDHBHTWTLGIUJIWEXSJGKLTMKAFFPYGYICGSPRPLPOZNKPUSH\ NSMQCGPWHILVXLZARZLISPOBDNEXJYMESQDTAQWKWLWPBIDCCJXKHQCOTAJXBVZWWDNCCC\ CWACQHZJREEHROLQSSRCVPFHCJCGLURBHFMPOYKFKWRHKVGLLSYRTISCESYKXCEYFNSAHU\ HYIBJVLWZLKPJHQJEQQVHNHSEHTOTNFJCRMZCCIEXKSALTMFPYTLGCORTMRIAILOISWKRJ\ DTMRXGVYHEHRSIPRIWKOXSLBQVJHSZTMZMIHRAFFTJCTMSYGADQEHQXUZIQHLZJOOGRZGV\ HLKPTIUEHHKHKTCHCNCMMYLIMBWWHNKXBQPDEWQCIVZRNETVVPFCVYSTRTYZYEPWZTMQHL\ NHSGPROTNFJCRRLNNPRHGYARRJVWHJBIFTAPWRVUCZODYYLGFCOBDWKMDTGJCEBDTVRDRO\ HJQLPLVHJYKNJSLGEBZDLOOWKROWOOOJRXKRAMYMMMFOBFTGTSLHIGLQLWVVLTLTDYBEGV\ NJLEAQDPRQXKRHWOGIUCPLCJEAJBYGLTSCYOCUDECCQCURAELOAPEHKPYJBIVOPWZTEVHJ\ HNEYNMXCFSHYKPPWCGUMEWYKNHZWJQSWCRANSOALVIJLPRZDLYOFDJVCDHTAVAHTRMTBHP\ RJZBIOEOSCR </code></pre>
JungHwan Min
35,945
<p><code>StringTake</code> and <code>Span</code> would be useful.</p> <p>For example, to get the second characters:</p> <pre><code>StringTake[s, 2 ;; ;; 21] (* "LLFOWJZSPLJJHNHQOQLYSOPWQOSZNTTLTOHETNJOJOODOCJFQOJ" *) </code></pre> <p>To get all of the strings:</p> <pre><code>StringTake[s, Array[# ;; ;; 21&amp;, 21]] </code></pre> <p>Another approach would be a combination of <code>Characters</code> and <code>Part</code>:</p> <pre><code>StringJoin[Characters[s][[2 ;; ;; 21]]] (* "LLFOWJZSPLJJHNHQOQLYSOPWQOSZNTTLTOHETNJOJOODOCJFQOJ" *) </code></pre> <p>Again, to get all of the strings:</p> <pre><code>Array[StringJoin[Characters[s][[# ;; ;; 21]]]&amp;, 21] </code></pre>
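The same extended-slice idea in Python, shown on a short demo string rather than the full `s`: `s[k::width]` collects every `width`-th character starting at 0-based offset `k` (this is my own addition for comparison):

```python
def columns(s, width):
    """Split s into `width` strings: all 1st chars, all 2nd chars, etc."""
    return [s[k::width] for k in range(width)]
```

For example, `columns("ABCDEFGHIJKL", 3)` gives `["ADGJ", "BEHK", "CFIL"]`; applying `columns(s, 21)` to the question's string reproduces the 21 strings above.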
81,715
<p>I am a graduate student in physics trying to learn differential geometry on my own, out of a book written by Fecko.</p> <p>He defines the gradient of a function as:</p> <p>$ \nabla f = \sharp_g df = g^{-1}(df, \cdot ) $</p> <p>This makes enough sense to me. However, when I try to calculate the gradient of a function in spherical coordinates:</p> <p>$ g^{-1} (df, \cdot) = g^{ij} \partial_i(df) \otimes \partial_j = g^{ij} \partial_i f \partial_j $</p> <p>So the $j^{th}$ component of the gradient of f is:</p> <p>$ g^{ij} \partial_if $</p> <p>The coefficients of the metric tensor are:</p> <p>$ g = \begin{pmatrix} 1 &amp; 0 &amp; 0 \\ 0 &amp; r^2 &amp; 0 \\ 0 &amp; 0 &amp; r^2 \sin^2{\theta} \end{pmatrix} $</p> <p>So the inverse of a diagonal matrix ($g^{-1}$) is just a diagonal matrix whose entries are the reciprocals of the original matrix:</p> <p>$ g^{-1} = \begin{pmatrix} 1 &amp; 0 &amp; 0 \\ 0 &amp; r^{-2} &amp; 0 \\ 0 &amp; 0 &amp; r^{-2} \csc^2{\theta} \end{pmatrix} $</p> <p>So it seems our expression doesn't match the vector calculus definition of the gradient in spherical coordinates. For instance, differential geometry gives us a $\hat{\theta}$ component of $ r^{-2} \partial_\theta f$ but vector calculus tells us this is $ r^{-1} \partial_\theta f$.</p> <p>Where is my mistake?</p>
Hans Lundmark
1,242
<p>The components that the formula $g^{ij} \partial_j f$ refers to are taken with respect to the natural tangent space basis induced by the coordinate system; these vectors are often denoted by $(\partial/\partial r, \partial/\partial \theta, \partial/\partial \phi)$, and they differ from the orthonormal frame $(\hat{r}, \hat{\theta}, \hat{\phi})$ by the usual normalization factors $1$, $r$, $r \sin\theta$, respectively.</p> <p>EDIT: Let's think in terms of vector calculus. In that case, your manifold could be a surface in $\mathbf{R}^3$, or the whole space $\mathbf{R}^3$ (but described in a curvilinear coordinate system). The position vector of a point in the manifold is written as $\mathbf{r}(s,t)$ (for a parametrized surface) or $\mathbf{r}(r,\theta,\phi)$ (for the whole space in spherical coordinates). The tangent vectors to the surface are $\partial\mathbf{r}/\partial s$ and $\partial\mathbf{r}/\partial t$. For the whole space, you have a frame of vector fields $(\partial\mathbf{r}/\partial r, \partial\mathbf{r}/\partial \theta, \partial\mathbf{r}/\partial \phi)$ which are orthogonal at each point (this is what it means when we say that spherical coordinates are an orthogonal coordinate system), but they are not normalized. It is these un-normalized vectors that in differential geometry are referred to as $(\partial/\partial r, \partial/\partial \theta, \partial/\partial \phi)$. (In the abstract setting, the manifold is not embedded in a Euclidean space, so it doesn't make sense to talk about a position vector, and thus the $\mathbf{r}$ is omitted from the notation. Also, as you've probably seen, vectors are often defined as first order differential operators, and this notation conforms to that way of thinking.) You get $\hat{r}$ etc. by normalizing these vectors.</p>
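Spelling out the normalization explicitly (this is the reconciliation described above, added for completeness):

```latex
\nabla f
  = g^{ij}\,\partial_i f\,\partial_j
  = \partial_r f\,\frac{\partial}{\partial r}
  + \frac{1}{r^2}\,\partial_\theta f\,\frac{\partial}{\partial\theta}
  + \frac{1}{r^2\sin^2\theta}\,\partial_\phi f\,\frac{\partial}{\partial\phi}
  = \partial_r f\,\hat{r}
  + \frac{1}{r}\,\partial_\theta f\,\hat{\theta}
  + \frac{1}{r\sin\theta}\,\partial_\phi f\,\hat{\phi},
```

using $\hat{r} = \partial/\partial r$, $\hat{\theta} = \frac{1}{r}\,\partial/\partial\theta$ and $\hat{\phi} = \frac{1}{r\sin\theta}\,\partial/\partial\phi$, which recovers the familiar vector-calculus formula.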
3,710,710
<p>Suppose <span class="math-container">$A(t,x)$</span> is a <span class="math-container">$n\times n$</span> matrix that depends on a parameter <span class="math-container">$t$</span> and a variable <span class="math-container">$x$</span>, and let <span class="math-container">$f(t,x)$</span> be such that <span class="math-container">$f(t,\cdot)\colon \mathbb{R}^n \to \mathbb{R}^n$</span>.</p> <p>Is there a chain rule for <span class="math-container">$$\frac{d}{dt} A(t,f(t,x))?$$</span></p> <p>It should be something like <span class="math-container">$A_t(t,f(t,x)) + ....$</span>, what is the other term?</p>
peek-a-boo
568,204
<p>Yes, there is a chain rule for such functions. Before getting to that, let's just briefly discuss partial derivatives for multivariable functions.</p> <blockquote> <p>Let <span class="math-container">$V_1, \dots, V_n, W$</span> be Banach spaces (think finite-dimensional if you wish, such as <span class="math-container">$\Bbb{R}^{k_1}, \dots \Bbb{R}^{k_n}, \Bbb{R}^m$</span>, or spaces of matrices, like <span class="math-container">$M_{m \times n }(\Bbb{R})$</span> if you wish). Let <span class="math-container">$\phi: V_1 \times \dots \times V_n \to W$</span> be a map, and <span class="math-container">$a = (a_1, \dots, a_n) \in V_1 \times \dots \times V_n$</span>. We say the function <span class="math-container">$\phi$</span> has an <span class="math-container">$i^{th}$</span> partial derivative at the point <span class="math-container">$a$</span>, if the function <span class="math-container">$\phi_i:V_i \to W$</span> defined by <span class="math-container">\begin{align} \phi_i(x) := \phi(a_1, \dots, a_{i-1}, x, a_{i+1}, \dots a_n) \end{align}</span> is differentiable at the point <span class="math-container">$a_i$</span>. In this case, we define the <span class="math-container">$i^{th}$</span> partial derivative of <span class="math-container">$\phi$</span> at <span class="math-container">$a$</span> to be the derivative of <span class="math-container">$\phi_i$</span> at the point <span class="math-container">$a_i$</span>. In symbols, we write: <span class="math-container">\begin{align} (\partial_i\phi)_a := D(\phi_i)_a \in \mathcal{L}(V_i,W). \end{align}</span> We may also use notation like <span class="math-container">$\dfrac{\partial \phi}{\partial x^i}(a)$</span> or anything else which resembles the classical notation. 
The important thing to keep in mind is that <span class="math-container">$(\partial_i \phi)(a) \equiv \dfrac{\partial \phi}{\partial x^i}(a)$</span> is by definition a linear map <span class="math-container">$V_i \to W$</span>.</p> </blockquote> <p>Note that this is almost word for word the same definition you might have seen before (or at least if you think about it for a while, you can convince yourself it's very similar). The idea is of course that we fix all but the <span class="math-container">$i^{th}$</span> variable, and then consider the derivative of that function at the point <span class="math-container">$a_i$</span>. Next, we need one last bit of background.</p> <blockquote> <p>One very important special case which deserves mention is if the domain of the function is <span class="math-container">$\Bbb{R}$</span>. So, now, suppose we have a function <span class="math-container">$\psi: \Bbb{R} \to W$</span>. Then, we have two notions of differentiation. The first is the familiar one as the limit of difference quotients: <span class="math-container">\begin{align} \dfrac{d\psi}{dt}\bigg|_t \equiv \dot{\psi}(t) \equiv \psi'(t) := \lim_{h \to 0} \dfrac{\psi(t+h) - \psi(t)}{h}. \end{align}</span> (the limit on the RHS being taken with respect to the norm on <span class="math-container">$W$</span>). The second notion of derivative is that since <span class="math-container">$\Bbb{R}$</span> is a vector space and <span class="math-container">$W$</span> is also a vector space, <span class="math-container">$\psi: \Bbb{R} \to W$</span> is a map between normed vector spaces. So, we can consider the derivative <span class="math-container">$D \psi_t: \Bbb{R} \to W$</span> as a linear map.</p> <p>How are these two notions related? Very simple. 
Note that <span class="math-container">$\mathcal{L}(\Bbb{R},W)$</span> is naturally isomorphic to <span class="math-container">$W$</span> (because <span class="math-container">$\Bbb{R}$</span> is one-dimensional), and the isomorphism is <span class="math-container">$T \mapsto T(1)$</span>. So, there is a theorem which says that <span class="math-container">$\psi'(t)$</span> (the limit of difference quotients) exists if and only if <span class="math-container">$D\psi_t$</span> exists (a linear map from <span class="math-container">$\Bbb{R} \to W$</span>), and in this case, <span class="math-container">\begin{align} \psi'(t) = D\psi_t(1). \end{align}</span> Henceforth, whenever I use <span class="math-container">$\dfrac{d}{dt}$</span> notation or <span class="math-container">$\dfrac{\partial}{\partial t}$</span> notation, where the <span class="math-container">$t$</span> refers to the fact that the domain is <span class="math-container">$\Bbb{R}$</span>, I shall always mean the vector in <span class="math-container">$W$</span> obtained by the limit of the difference quotient (which you now know is simply the evaluation of the linear map on the element <span class="math-container">$1 \in \Bbb{R}$</span>).</p> <p>See Loomis and Sternberg's <a href="http://people.math.harvard.edu/~shlomo/docs/Advanced_Calculus.pdf" rel="nofollow noreferrer">Advanced Calculus</a>, Section <span class="math-container">$3.6-3.8$</span> (<span class="math-container">$3.7, 3.8$</span> mainly) for more information.</p> </blockquote> <hr> <p>Anyway, the chain rule in this case is as follows: <span class="math-container">\begin{align} \dfrac{d}{dt} A(t, f(t,x)) &amp;= \dfrac{\partial A}{\partial t}\bigg|_{(t,f(t,x))} + \dfrac{\partial A}{\partial x}\bigg|_{(t,f(t,x))} \left[ \dfrac{\partial f}{\partial t}\bigg|_{(t,x)}\right] \tag{$*$} \end{align}</span> What does this mean? 
Well, on the LHS, we have a function <span class="math-container">$\psi: \Bbb{R} \to M_{n \times n}(\Bbb{R})$</span>, defined by <span class="math-container">\begin{align} \psi(t):= A(t,f(t,x)) \end{align}</span> and we're trying to compute <span class="math-container">$\psi'(t)$</span>. On the RHS, note that <span class="math-container">$A: \Bbb{R} \times \Bbb{R}^n \to M_{n \times n}(\Bbb{R})$</span>. So, the first term is <span class="math-container">$\dfrac{\partial A}{\partial t}\bigg|_{(t,f(t,x))} \in M_{n \times n}(\Bbb{R})$</span>, which is exactly what you predicted.</p> <p>Now, how do we understand the second term? Again, note that <span class="math-container">$A$</span> maps <span class="math-container">$\Bbb{R} \times \Bbb{R}^n \to M_{n\times n}(\Bbb{R})$</span>. So, <span class="math-container">$\dfrac{\partial A}{\partial x}\bigg|_{(t,f(t,x))}$</span> is the partial derivative of <span class="math-container">$A$</span> with respect to the variables in <span class="math-container">$\Bbb{R}^n$</span> (i.e. we're considering <span class="math-container">$V_1 = \Bbb{R}$</span> and <span class="math-container">$V_2 = \Bbb{R}^n$</span>, so it's the <span class="math-container">$2$</span>nd partial derivative of <span class="math-container">$A$</span>), calculated at the point <span class="math-container">$(t,f(t,x)) \in \Bbb{R} \times \Bbb{R}^n$</span> of its domain. Note that this by definition is a linear map <span class="math-container">$\Bbb{R}^n \to M_{n \times n}(\Bbb{R})$</span>. We are now evaluating this linear transformation on the vector <span class="math-container">$\dfrac{\partial f}{\partial t}\bigg|_{(t,x)} \in \Bbb{R}^n$</span> to finally end up with the matrix <span class="math-container">$\dfrac{\partial A}{\partial x}\bigg|_{(t,f(t,x))} \left[ \dfrac{\partial f}{\partial t}\bigg|_{(t,x)}\right] \in M_{n \times n}(\Bbb{R})$</span>. 
This is how to read the notation in <span class="math-container">$(*)$</span>.</p> <hr> <p>If for some reason you don't like to think in terms of linear transformations, here's an alternative approach, in a simplified case, using Jacobian matrices (but I just don't like such a presentation). Suppose that <span class="math-container">$A$</span> is a function <span class="math-container">$A : \Bbb{R} \times \Bbb{R}^n \to \Bbb{R}^m$</span>, and <span class="math-container">$f: \Bbb{R} \times \Bbb{R}^n \to \Bbb{R}^n$</span>. Then, we can say <span class="math-container">\begin{align} \dfrac{d}{dt} A(t, f(t,x)) &amp;= (\text{Jac}_{\Bbb{R}} A){(t,f(t,x))} + (\text{Jac}_{\Bbb{R}^n}A)(t, f(t,x)) \cdot \dfrac{\partial f}{\partial t}\bigg|_{(t,x)}\\ &amp;=\dfrac{\partial A}{\partial t}\bigg|_{(t,f(t,x))} + (\text{Jac}_{\Bbb{R}^n}A)(t, f(t,x)) \cdot \dfrac{\partial f}{\partial t}\bigg|_{(t,x)}. \end{align}</span> Note that the Jacobian matrix of <span class="math-container">$A: \Bbb{R}\times \Bbb{R}^n \to \Bbb{R}^m$</span> evaluated at the point <span class="math-container">$(t,f(t,x)) \in \Bbb{R}\times \Bbb{R}^n$</span>, denoted by <span class="math-container">$(\text{Jac }A)(t, f(t,x))$</span> is an <span class="math-container">$m \times (1 +n)$</span> matrix. So, when I say <span class="math-container">$(\text{Jac}_{\Bbb{R}}A)(t, f(t,x))$</span>, I mean the <span class="math-container">$m \times 1$</span> submatrix obtained by taking the first column (so that we only keep track of the derivative with respect to the <span class="math-container">$\Bbb{R}$</span> variable, i.e with respect to <span class="math-container">$t$</span>). You see, this is just a vector in <span class="math-container">$\Bbb{R}^m$</span>. 
Next, when I say <span class="math-container">$(\text{Jac}_{\Bbb{R}^n}A)(t, f(t,x))$</span>, I mean the <span class="math-container">$m \times n$</span> submatrix obtained by ignoring the first column (so that we only keep track of the derivative with respect to the <span class="math-container">$\Bbb{R}^n$</span> variables). Then, we multiply this <span class="math-container">$m \times n$</span> matrix by the <span class="math-container">$n \times 1$</span> vector <span class="math-container">$\dfrac{\partial f}{\partial t}\bigg|_{(t,x)}$</span>, to finally get a <span class="math-container">$m \times 1$</span> matrix, or simply a vector in <span class="math-container">$\Bbb{R}^m$</span>.</p> <p>The reason I don't like this approach is because in your case, the target space is <span class="math-container">$M_{n \times n}(\Bbb{R})$</span>, so it is not natural to think of it as <span class="math-container">$\Bbb{R}^m$</span>. I mean sure, you could construct an isomorphism to <span class="math-container">$\Bbb{R}^{n^2}$</span>, but this requires a certain choice of basis in order to "vectorize" a matrix. But then in the end you will probably want to "undo" the vectorization, and then the whole thing is just a mess. Doable, but I think it's very ad hoc, and that it's much cleaner to treat everything as linear transformations, because then it doesn't matter what the domain or target space are... it's pretty much linear algebra from here.</p> <p>To hopefully convince you more about the generality (and simplicity) of the linear transformations approach, let <span class="math-container">$V,W$</span> be normed vector spaces, <span class="math-container">$A: \Bbb{R} \times V \to W$</span> be a differentiable map, and let <span class="math-container">$f: \Bbb{R} \times V \to V$</span> be differentiable. 
Then, <span class="math-container">\begin{align} \dfrac{d}{dt} \bigg|_t A(t,f(t,x)) &amp;= \dfrac{\partial A}{\partial t} \bigg|_{(t,f(t,x))} + \dfrac{\partial A}{\partial x} \bigg|_{(t,f(t,x))}\left[ \dfrac{\partial f}{\partial t}\bigg|_{(t,x)}\right] \in W \end{align}</span> i.e, the formula for the chain rule stays exactly the same, regardless of what vector spaces <span class="math-container">$V,W$</span> are. But if you insist on thinking of everything in terms of Jacobian matrices, you're going to have a tough time first constructing isomorphisms <span class="math-container">$V \cong \Bbb{R}^n$</span> and <span class="math-container">$W \cong \Bbb{R}^m$</span>, and then doing everything in the cartesian spaces, and then "undoing" the isomorphisms, to reexpress everything back in terms of the spaces <span class="math-container">$V$</span> and <span class="math-container">$W$</span>.</p> <hr> <p>Or of course, another way to think of it is to express everything in terms of component functions of the matrix-valued function <span class="math-container">$A$</span>: <span class="math-container">\begin{align} \dfrac{d}{dt}A_{ij}(t,f(t,x)) &amp;= \dfrac{\partial A_{ij}}{\partial t}\bigg|_{(t,f(t,x))} + \sum_{k=1}^n\dfrac{\partial A_{ij}}{\partial x_k}\bigg|_{(t,f(t,x))} \cdot \dfrac{\partial f_k}{\partial t}\bigg|_{(t,x)} \end{align}</span> (all these partial derivatives being real numbers). But of course, for obvious reasons, this component-by-component approach can get very tedious very quickly (and doesn't generalize well), and also it didn't seem to be what you really wanted to ask about, which is why I'm mentioning it at the end.</p>
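One way to convince yourself of the formula concretely is a finite-difference check on a small example. Everything below (the particular $A$, $f$, step size and tolerance) is made up purely for illustration:

```python
h = 1e-6  # finite-difference step

def A(t, x):
    # a toy 2x2 matrix-valued map A(t, x), with x in R^2
    x1, x2 = x
    return [[t * x1, x1 * x2],
            [x2 * x2, t + x2]]

def f(t, x):
    # a toy f : R x R^2 -> R^2
    x1, x2 = x
    return (x1 + t * t, x2 * t)

def madd(M, N):
    return [[a + b for a, b in zip(r, s)] for r, s in zip(M, N)]

def mscale(c, M):
    return [[c * a for a in r] for r in M]

def chain_rule_check(t, x):
    y = f(t, x)
    # LHS: d/dt of the composite t |-> A(t, f(t, x)), by central difference
    lhs = mscale(1 / (2 * h),
                 madd(A(t + h, f(t + h, x)),
                      mscale(-1, A(t - h, f(t - h, x)))))
    # RHS: dA/dt at (t, y) ...
    rhs = mscale(1 / (2 * h), madd(A(t + h, y), mscale(-1, A(t - h, y))))
    # ... plus (dA/dx_k at (t, y)) weighted by df_k/dt, summed over k
    dfdt = [(p - m) / (2 * h) for p, m in zip(f(t + h, x), f(t - h, x))]
    for k in range(2):
        e = [0.0, 0.0]; e[k] = h
        yp = (y[0] + e[0], y[1] + e[1])
        ym = (y[0] - e[0], y[1] - e[1])
        dAdxk = mscale(1 / (2 * h), madd(A(t, yp), mscale(-1, A(t, ym))))
        rhs = madd(rhs, mscale(dfdt[k], dAdxk))
    return lhs, rhs
```

Since $\partial A/\partial x$ applied to a vector $v$ equals $\sum_k v_k\, \partial A/\partial x_k$, the two sides computed here approximate the two sides of the chain rule, and they agree to within numerical error at any sample point.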
78,423
<p>How far can one go in proving facts about projective space using just its universal property?</p> <p>Can one prove Serre's theorem on generation by global sections, calculate cohomology, classify all invertible line bundles on projective space?</p> <p>I don't find many proofs of some basic technical facts very aesthetic, because one has to consider homogeneous prime ideals, homogeneous localizations, etc. Do there exist nice clean conceptual proofs which avoid the above unpleasantries?</p> <p>If you include references in your answer it would be very helpful, thanks.</p>
Anton Geraschenko
1
<p>I think the answer is "probably not." The reason is that projective space has <em>two</em> universal properties which are used to prove different kinds of things about it. One of these is the slick universal property you like, and the other is the clunky one which results in unpleasantries.</p> <p>Though each universal property implies the other (since it uniquely identifies projective space), it seems unlikely to me that you can effectively do anything if you try to avoid one of them altogether.</p> <hr> <p>One universal property makes it easy to understand maps <em>to</em> projective space:</p> <p>$$ Hom(T,\mathbb P^n) = \{\mathcal O_T^{n+1}\twoheadrightarrow \mathcal L| \mathcal L\text{ a line bundle}\} $$</p> <p>Without bending over backwards (i.e.~reproducing the usual theory), I'd be surprised if you could use this universal property to even prove that there are no non-constant regular functions on $\mathbb P^n$.</p> <p>I expect constructions that naturally pull back along morphisms (e.g. line bundles, regular functions) to behave like morphisms <em>from</em> projective space, so it would be strange if you could attack such constructions with this universal property.</p> <hr> <p>Another universal property makes it easy to understand maps <em>from</em> projective space: $Hom(\mathbb P^n,T)$ is the equalizer of the two restriction maps $Hom(\coprod_{i=0}^n \mathbb A^n,T)\rightrightarrows Hom(\coprod_{i,j}\mathbb A^{n-1}\times (\mathbb A-0),T)$.</p> <p>I guess this is the one that you don't like, but we're lucky to have it since it actually makes it possible to make sense of projective space having Zariski local properties (e.g. being smooth, $n$-dimensional, etc.), and thereby makes it possible to do geometry on it.</p>
4,383,557
<p>This question came up in an oral exam. During the course we studied a bit of the theory of lie algebras and some representation theory.</p> <p>The question: show that the lie algebra <span class="math-container">$\mathfrak{g_2}$</span> has a dimension <span class="math-container">$14$</span> representation, where dimension <span class="math-container">$14$</span> means the vector space <span class="math-container">$V$</span> where the representation is defined has dimension <span class="math-container">$14$</span> over <span class="math-container">$k$</span>.</p> <p>Why is this true? I think it has to do with <span class="math-container">$\mathfrak{g_2}$</span> having <span class="math-container">$12$</span> roots (and maybe the maximal toral subalgebra has dimension <span class="math-container">$2$</span>? But why?).</p> <p>I'll be glad if someone can enlighten me.</p>
Torsten Schoeneberg
96,384
<p>Hint 1: What is that Lie algebra's dimension?</p> <p>Hint 2: Surely in your class the concept of the <em>adjoint representation</em> was mentioned?</p>
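For completeness (stop reading here if you want to work it out from the hints yourself), the dimension count is

```latex
\dim \mathfrak{g}_2
  \;=\; \operatorname{rank}\mathfrak{g}_2 + \#\{\text{roots}\}
  \;=\; 2 + 12
  \;=\; 14,
```

so the adjoint representation $\operatorname{ad}\colon \mathfrak{g}_2 \to \mathfrak{gl}(\mathfrak{g}_2)$, $\operatorname{ad}(x)(y) = [x,y]$, is itself a $14$-dimensional representation.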
464,426
<p>Evaluate the limit $$\lim_{x\to 1}\frac{x^{1/5}-1}{x^{1/3}-1}$$</p> <p>How should I approach it? I tried to use L'Hopital's Rule, but it just keeps giving me $0/0$.</p>
Pedro
23,350
<p><strong>Hint</strong> Take $x=t^{15}$. Then use long division, or $$\frac{t^a-1}{t-1}=1+t+t^2+\cdots+t^{a-1}$$</p> <p>You can also think about derivatives of $x^{1/3},x^{1/5}$ at $x=1$.</p>
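Carrying the substitution through (added for completeness): with $x = t^{15}$ we have $x^{1/5} = t^3$ and $x^{1/3} = t^5$, so

```latex
\lim_{x\to 1}\frac{x^{1/5}-1}{x^{1/3}-1}
  \;=\; \lim_{t\to 1}\frac{t^{3}-1}{t^{5}-1}
  \;=\; \lim_{t\to 1}\frac{1+t+t^{2}}{1+t+t^{2}+t^{3}+t^{4}}
  \;=\; \frac{3}{5},
```

consistent with the derivative viewpoint: $\dfrac{(x^{1/5})'}{(x^{1/3})'}\Big|_{x=1} = \dfrac{1/5}{1/3} = \dfrac{3}{5}$.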
405,866
<p>Original question: For compact metric spaces, plenty of subtly different definitions converge to the same concept. Overtness can be viewed as a property dual to compactness. So is there a similar story for overt metric spaces?</p> <p>Edit: Since overtness is trivially true assuming the Law of the Excluded Middle, clearly the question is primarily interesting when we do not assume the LEM.</p> <p>Edit 2: It looks like it is extremely difficult for a metric space to not be overt even in constructive settings. So editing the question to ask if there is ANY model where metric spaces are not overt.</p> <p>Edit 3: For these reasons I changed the question again, from &quot;Is there any model of mathematics where there exists a metric space that is not overt?&quot;. PT</p>
Paul Taylor
2,733
<p>David Roberts has rubbed the magic lamp and the genie appears!</p> <p>Even though the notion of overtness does depend on the strength of the ambient logic, I believe the question here is with the notion of metric space, rather than the choice of a model of mathematics.</p> <p>The natural answer is that any metric space has enough points and is therefore necessarily overt, in any sensible logical setting.</p> <p>Indeed, I am inclined to think that any <em>whole</em> space in topology is in practice overt and the interesting question is what <strong>overt <em>sub</em>spaces</strong> look like.</p> <p>Andrej has already pointed out that the set <span class="math-container">$A\subset{\mathbb N}$</span> of programs that don't terminate (with the two-valued metric) is not overt.</p> <p>We can do better than this. <a href="https://www.cs.bham.ac.uk/%7Esjv/papersfull.php" rel="nofollow noreferrer">Steve Vickers</a> has an alternative to the Cauchy completion in locale theory and formal topology. Like any metric topology, it has a basis of <em>balls</em> <span class="math-container">$B(x,r)$</span>, where we may take the radii <span class="math-container">$r$</span> to be dyadic rationals and the centres <span class="math-container">$x$</span> to be (for example) points with dyadic rational coordinates.</p> <p>This construction still gives an overt space, because the set (overt discrete space) of centres is dense.</p> <p>(Since I mention Steve, in general he is interested in the <em>hyper</em>spaces of <em>all</em> overt or compact subspaces, which are called the lower and upper <em>powerdomains</em>. My interest, in constrast, is with <em>individual</em> overt subspaces.)</p> <p>To the general mathematician, the definition of overtness using an <em>operator</em> <span class="math-container">$\lozenge$</span> that takes unions of open subspaces to the existential quantifier is not very familiar. 
However, it has a very natural equivalent form when we're working in a metric space constructed in the above way.</p> <p>Define <span class="math-container">$(d(x)&lt; r) \equiv \lozenge B(x,r)$</span>. It is easy to show that this satisfies</p> <p><span class="math-container">$$ d(x)&lt; r'&lt; r \Longrightarrow d(x)&lt; r $$</span> <span class="math-container">$$ d(x)&lt; r \Longrightarrow \exists r'.d(x)&lt; r'&lt; r $$</span> <span class="math-container">$$ d(x,y)&lt; r \;\land\; d(y)&lt; s \Longrightarrow d(x)&lt; r+s $$</span> <span class="math-container">$$ d(x)&lt; r \;\land\; \epsilon\gt 0 \Longrightarrow \exists y.d(x,y)&lt; r \;\land\; d(y)&lt; \epsilon $$</span> for any <span class="math-container">$\epsilon&gt;0$</span></p> <p>What this means is that <span class="math-container">$d:X\to\overline{\mathbb R}$</span> is an <em>upper semicontinuous</em> function, or alternatively one that is valued in the <em>upper real numbers</em>.</p> <p>This is the essence of the equivalence between overt and <strong>located</strong> subspaces (the latter are used in Bishop-style constructive analysis), which was stated by <a href="https://arxiv.org/pdf/math/0703561.pdf" rel="nofollow noreferrer">Bas Spitters</a>. Unfortunately, he only considered the case of <em>closed</em> overt/located subspaces, which are characterised by <span class="math-container">$d$</span> being valued in the ordinary (Euclidean, Dedekind, ...) real numbers.</p> <p>The more general case is covered in my draft paper <a href="http://www.paultaylor.eu/ASD/overtrn" rel="nofollow noreferrer"><em>Overt Subspaces of <span class="math-container">${\mathbb R}^n$</span></em></a>.</p> <p>The third condition above is the triangle law. 
Under suitable conditions, the <strong>Newton--Raphson algorithm</strong> yields a function <span class="math-container">$\Delta(x)\equiv |f(x)/\dot f(x)|$</span> that satisfies all the other conditions and a <span class="math-container">$d$</span> obeying all of them can easily be derived from it.</p> <p>My intuition is that <strong>an overt subspace is the solution-space of an algorithm</strong>. To justify this we need more examples from numerical analysis like Newton--Raphson, but that is very much not my subject.</p> <p>On the other hand, Newton--Raphson actually yields more information than the <span class="math-container">$d$</span> function.</p> <p>There are two possible responses to this:</p> <ul> <li>Maybe we should replace overtness with something more <em>quantitative</em>; or</li> <li>Maybe an algorithm could be <em>derived</em> from the <em>formula</em> for <span class="math-container">$\lozenge$</span> or <span class="math-container">$d$</span> together with the <em>proof</em> that it has the appropriate properties.</li> </ul> <p>The second is not completely unreasonable: An overt subspace is a generalisation of a point defined by a Dedekind cut or a completely prime filter. Andrej Bauer pioneered some ideas for <a href="https://mapcommunity.github.io/ictp/presentation_files/Bauer_P.pdf" rel="nofollow noreferrer"><em>Efficient computation with Dedekind reals</em></a> and had a prototype calculator called Marshall.</p> <p>Given how widely used the notions of overt, located or recursively enumerable subspaces now are in the different constructive cults, really we ought to have a better story than &quot;overtness is dual to compactness but classically invisible&quot;. 
There ought to be a way of explaining the idea to &quot;ordinary&quot; (classical) mathematicians, in particular numerical analysts.</p> <p>I have been trying to do this for more than a decade, but I think I'm the wrong person to do it, and probably we can't do it from the constructive side: somehow we have to kidnap a numerical analyst and inculcate them with this idea.</p> <p>I still have this <em>draft</em> paper (above). Probably I should just stop fussing and publish it. Comments towards that are welcome.</p>
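As a toy illustration of the Newton--Raphson remark above (my own sketch, not from the cited paper): for $f(x)=x^2-2$ the quantity $\Delta(x)=|f(x)/\dot f(x)|$ is comparable to the distance from $x$ to the solution set $\{\pm\sqrt2\}$ near a root, and the Newton iteration drives it to zero.

```python
import math

def f(x):
    return x * x - 2.0

def fprime(x):
    return 2.0 * x

def delta(x):
    # Delta(x) = |f(x)/f'(x)|, the Newton-Raphson step size
    return abs(f(x) / fprime(x))

# Near the root sqrt(2), Delta(x) is comparable to the true distance
root = math.sqrt(2.0)
for x in [1.3, 1.5, 1.6]:
    assert 0.5 * abs(x - root) <= delta(x) <= 2.0 * abs(x - root)

# The Newton iteration x -> x - f(x)/f'(x) drives Delta to zero
x = 1.5
for _ in range(8):
    x = x - f(x) / fprime(x)
assert delta(x) < 1e-12
assert abs(x - root) < 1e-12
```

This is only meant to suggest how an algorithm can carry the "distance to the solution space" data that the axioms for $d$ describe.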
147,661
<p>Let $R$ be a (noncommutative) ring and $a \in R$ such that $a(1-a)$ is nilpotent. Why is $1+a(t-1)$ a unit in $R[t,t^{-1}]$? Probably one just has to write down an inverse element, but I could not find it. Perhaps there is a trick related to the geometric series which motivates the choice of the inverse element, which can be actually made into a formal proof because the series is finite since $a(1-a)$ is nilpotent?</p> <p>Here is a proof that the two-sided ideal generated by $1+a(t-1)$ is $R[t,t^{-1}]$ (which already finishes the proof when $R$ is commutative): Let $A$ be the quotient, we have to show $A=0$. Now, $A$ contains elements $a,t$ such that $a(1-a)$ is nilpotent, $t$ is invertible and $(1-a) + ta = 0$. If we multiply this equation by $a^u (1-a)^v$, we see that $a^{u+1}(1-a)^v=0 \Rightarrow a^u (1-a)^v = 0$ as well as $a^u (1-a)^{v+1} = 0 \Rightarrow a^u (1-a)^v = 0$ for all $u,v \geq 0$. Since $a(1-a)$ is nilpotent, we get by induction that $a$ and $1-a$ are nilpotent. But $a$ nilpotent implies that $1-a$ is a unit, so it can only be nilpotent when $A=0$.</p> <p>As mt_ mentions below, the general case may be reduced to the commutative case by working with the commutative subring $\mathbb{Z}[a] \subseteq R$. Anyway, I would like to know if there is a short proof which just writes down the inverse.</p> <p>EDIT: I would also like to know why the converse is true: When $1+a(t-1)$ is a unit, why is $a(1-a)$ nilpotent? Rosenberg claims this in his book without proof (he only says "by the same reasoning as in ...", but this doesn't work the same!). Here, we cannot assume that $R$ is commutative, since the inverse <em>may</em> contain coefficients which do not lie in $\mathbb{Z}[a]$.</p> <p>Background: This is needed in the proof of the Bass-Heller-Swan Theorem.</p>
wxu
4,396
<p><strong>Should be read first:</strong> If $R$ is commutative, then both directions are easy. Pick any prime ideal $\mathfrak{p}$ of $R$; then $1+a(t-1)=1-a+at$ is a unit in $R/\mathfrak{p}[t,t^{-1}]$ if and only if either $1-a=0$ or $a=0$ in $R/\mathfrak{p}$. That is, if and only if $(1-a)a\in \mathfrak{p}$.</p> <hr> <p>Since $(1+a(t-1))(1+a(t^{-1}-1))=1+a(1-a)(t+t^{-1}-2)$ and $a(1-a)$ is nilpotent, it follows that $1+a(t-1)$ is a unit.</p> <p>Conversely, if $1+a(t-1)$ is a unit, then $1+a(t^{-1}-1)$ is also a unit (mapping $t\to t^{-1}$ gives a ring automorphism), so their product $1+a(1-a)(t+t^{-1}-2)$ is a unit. Now, applying the ring map $R[t,t^{-1}]\to R[t,t^{-1}]$ sending $t$ to $t^2$, we know $1+a(1-a)(t^2+t^{-2}-2)=1+a(1-a)(t-\frac{1}{t})^2$ is a unit. We want to show $x=a(1-a)$ is nilpotent.</p> <p>Just write down the inverse of $1+a(1-a)(t-\frac{1}{t})^2$, say $s=b_{-k}t^{-k}+\ldots+b_0+\ldots+b_nt^n$ is the inverse. Then mapping $t\to t^{-1}$ (and likewise $t\to -t$), we see that the inverse has a very useful symmetry: $b_{-i}=b_i$ for all $i$ and no odd terms appear, i.e., $s$ is in fact a polynomial in $t-1/t$. </p> <p>Now consider the ring map $R[Y]\to R[t,t^{-1}]$ sending $Y$ to $t-1/t$. We are reduced to the case where $1+xY^2$ is a unit of $R[Y]$, and we want to show $x$ is a nilpotent element. This is true of course.</p> <hr> <p><strong>Edit:</strong> It seems I have made it complicated. Return to the step where $1+x(t+t^{-1}-2)=1-2x+x(t+t^{-1})$ is a unit. Mapping $t$ to $t^{-1}$, we see the inverse of $1-2x+x(t+t^{-1})$ is a polynomial in $t+t^{-1}$. Now $R[Y]\to R[t,t^{-1}]$ sending $Y\to t+t^{-1}$ is injective. We are reduced to the case where $1-2x+xY$ is a unit in $R[Y]$. </p> <p>Finally, if we are in the world of commutative rings, then reduce modulo every prime ideal of $R$ to see that $x$ is nilpotent. If we are not in the world of commutative rings, just write down the inverse and compare the coefficients to see $x$ is nilpotent.</p>
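A concrete sanity check of the forward direction (my own illustration; the helper `lmul` is hypothetical): in $R=\mathbb{Z}/4\mathbb{Z}$ take $a=2$, so $a(1-a)\equiv 2$ is nilpotent. Representing Laurent polynomials as exponent-to-coefficient dictionaries, $1+a(t-1)=3+2t$ is indeed a unit; here it even happens to be its own inverse.

```python
MOD = 4  # work in R = Z/4Z, where a = 2 gives nilpotent a(1-a) = 2

def lmul(p, q):
    # multiply Laurent polynomials given as {exponent: coefficient} dicts
    out = {}
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            out[e1 + e2] = (out.get(e1 + e2, 0) + c1 * c2) % MOD
    return {e: c for e, c in out.items() if c != 0}

a = 2
u = {0: (1 - a) % MOD, 1: a % MOD}   # 1 + a(t - 1) = 3 + 2t

# 1 + a(t-1) is a unit: in this example it is its own inverse
print(lmul(u, u))  # {0: 1}, i.e. the constant 1
assert lmul(u, u) == {0: 1}

# and the identity (1+a(t-1))(1+a(t^{-1}-1)) = 1 + a(1-a)(t + t^{-1} - 2)
v = {0: (1 - a) % MOD, -1: a % MOD}  # 1 + a(t^{-1} - 1)
x = (a * (1 - a)) % MOD
rhs = {e: c for e, c in {0: (1 - 2 * x) % MOD, 1: x, -1: x}.items() if c != 0}
assert lmul(u, v) == rhs
```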
4,463,373
<p>If <span class="math-container">$\frac{\partial f}{\partial x}(0,0) = \frac{\partial f}{\partial y}(0,0) = 0$</span>, then <span class="math-container">$f'((0,0);d)=0$</span> (directional derivative) for every direction <span class="math-container">$d \in \mathbb{R}^n$</span>.</p> <p>Is this true? I'm trying to find a counterexample to prove it false, but nothing comes to mind.</p>
sadman-ncc
942,091
<p>Your derivative calculation is correct but can be simplified in a different way that makes the differentiability of <span class="math-container">$f$</span> much more obvious. For <span class="math-container">$x\neq 0$</span>, <span class="math-container">$$\frac{d}{dx}(|x|)=\frac{x}{|x|}.$$</span> Now, if <span class="math-container">$f(x)=x|x^3|$</span>, we can rewrite as <span class="math-container">$$f(x)=x|x^2 \cdot x|=x\cdot x^2\cdot |x|=x^3|x|.$$</span> Taking the derivative, we have <span class="math-container">\begin{align*} f'(x)&amp;=3x^2 |x| + x^3\cdot\frac{x}{|x|}\\ &amp;=\frac{3x^2\cdot|x|^2 + x^4}{|x|}\\ &amp;=\frac{4x^4}{|x|}\\ &amp;=\frac{4x^2\cdot|x|^2}{|x|}\\ &amp;=4x^2 |x|. \end{align*}</span> So it is now clear that the derivative exists at <span class="math-container">$x=0$</span> as well: the difference quotient there is <span class="math-container">$f(h)/h=|h^3|\to 0$</span>, matching the value of <span class="math-container">$4x^2|x|$</span> at <span class="math-container">$x=0$</span>. But as I said, one could arrive at this result from your calculation as follows: <span class="math-container">\begin{align*} \frac{4x^6}{|x^3|} &amp;= \frac{4\cdot|x^3|^2}{|x^3|}\\ &amp;= 4|x^3|\\ &amp;= 4x^2|x|. \end{align*}</span></p>
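A quick numerical check of the closed form (my own sketch): symmetric difference quotients of $f(x)=x|x^3|$ match $4x^2|x|$, including at $x=0$.

```python
def f(x):
    return x * abs(x ** 3)

def fprime(x):
    # the closed form derived above
    return 4 * x ** 2 * abs(x)

h = 1e-6
for x in [-2.0, -0.5, 0.0, 0.5, 2.0]:
    quotient = (f(x + h) - f(x - h)) / (2 * h)
    assert abs(quotient - fprime(x)) < 1e-4
print("difference quotients agree with 4x^2|x|")
```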
435,079
<p>This is exercise from my lecturer, for IMC preparation. I haven't found any idea.</p> <p>Find the value of</p> <p>$$\lim_{n\rightarrow\infty}n^2\left(\int_0^1 \left(1+x^n\right)^\frac{1}{n} \, dx-1\right)$$</p> <p>Thank you</p>
marty cohen
13,079
<p>I get an answer that differs from that of user17762. This is because the error term is not one term of order $\frac{x^{2n}}{n^2}$ but a number of such terms.</p> <p>I get that the limit is between 3/4 and 7/8, but only have an infinite series for the value.</p> <p>My complete analysis follows.</p> <p>$(1+x^n)^{1/n} =\sum_{k=0}^{\infty} \binom{1/n}{k}x^{kn} $.</p> <p>We first look at $\binom{1/n}{k}$.</p> <p>$\begin{align} \binom{1/n}{k} &amp;=\frac1{k!}\prod_{i=0}^{k-1}(\frac{1}{n}-i)\\ &amp;=\frac1{k!n^k}\prod_{i=0}^{k-1}(1-in)\\ &amp;=\frac{(-1)^k}{k!n^k}(-1)\prod_{i=1}^{k-1}(in-1)\\ &amp;=\frac{(-1)^{k+1}}{k!n^k}\prod_{i=1}^{k-1}(in-1)\\ \end{align} $</p> <p>so</p> <p>$\begin{align} \big|\binom{1/n}{k}\big| =\frac1{k!n^k}\prod_{i=1}^{k-1}(in-1) &lt;\frac{1}{k!n^k}\prod_{i=1}^{k-1}(in) =\frac{n^{k-1}(k-1)!}{k!n^k} =\frac1{kn} \end{align} $</p> <p>and</p> <p>$\begin{align} \frac{\binom{1/n}{k+1}}{\binom{1/n}{k}} &amp;=\frac{(-1)^{k+2}}{(k+1)!n^{k+1}}\frac{k!n^k}{(-1)^{k+1}}\frac{\prod_{i=1}^{k}(in-1)}{\prod_{i=1}^{k-1}(in-1)}\\ &amp;=\frac{-1}{(k+1)n}(kn+1)\\ &amp;=-\frac{kn+1}{kn+n}\\ \end{align} $</p> <p>We now look at $\int_0^v (1+x^n)^{1/n}\, dx$ to see what happens as $v \to 1$.</p> <p>$\begin{align} \int_0^v (1+x^n)^{1/n}\, dx &amp;=\sum_{k=0}^{\infty} \binom{1/n}{k} \int_0^v x^{kn}\, dx\\ &amp;=\sum_{k=0}^{\infty} \binom{1/n}{k} \frac{v^{kn+1}}{kn+1}\\ &amp;=v+\frac{v^{n+1}}{n(n+1)}+\sum_{k=2}^{\infty} \binom{1/n}{k} \frac{v^{kn+1}}{kn+1}\\ &amp;=v+\frac{v^{n+1}}{n(n+1)}+v^{2n+1}\sum_{k=2}^{\infty} \binom{1/n}{k} \frac{v^{(k-2)n}}{kn+1}\\ \end{align} $</p> <p>This means that the terms in $\sum_{k=2}^{\infty} \binom{1/n}{k} \frac{v^{(k-2)n}}{kn+1} $ decrease in absolute value and, since they alternate in sign, the series converges. 
and converges even at $v=1$ because of the $\frac1{kn+1}$.</p> <p>Let $f(v, n) = \sum_{k=2}^{\infty} \binom{1/n}{k} \frac{v^{(k-2)n}}{kn+1} $.</p> <p>Since $\big|\binom{1/n}{k} \frac{1}{kn+1}\big| &lt; \frac1{kn(kn+1)} $, $|f(v, n)| &lt;\sum_{k=2}^{\infty} \frac1{kn(kn+1)} &lt;\frac1{n^2}\sum_{k=2}^{\infty} \frac1{k^2} &lt;\frac1{n^2} $.</p> <p>The first term of $f(v, n)$ is $\binom{1/n}{2}\frac{1}{2n+1} =\frac{(1/n)(1/n-1)}{2}\frac{1}{2n+1} =-\frac{n-1}{2n^2(2n+1)} $ and this is between $-\frac1{8n^2}$ and $-\frac1{4n^2}$ for $n &gt; 3$.</p> <p>Therefore the first two terms of the expansion of $\int_0^v (1+x^n)^{1/n}\, dx$ are both of order $1/n^2$, so we have to consider the whole sum, not just the first term (after $1$).</p> <p>Since $\int_0^v (1+x^n)^{1/n}\, dx =v+\frac{v^{n+1}}{n(n+1)}+v^{2n+1}f(v, n) $ and all the terms exist as $v \to 1$, $\int_0^1 (1+x^n)^{1/n}\, dx -1-\frac{1}{n(n+1)}=f(1, n) $.</p> <p>Since $f(1,n)$ is between $-\frac1{8n^2}$ and $-\frac1{4n^2}$, $-\frac1{4n^2} &lt;\int_0^1 (1+x^n)^{1/n}\, dx -1-\frac{1}{n(n+1)} &lt;-\frac1{8n^2} $, $-\frac1{4} &lt;n^2\big(\int_0^1 (1+x^n)^{1/n}\, dx -1\big)-\frac{n}{n+1} &lt;-\frac1{8} $ so $1-\frac1{4} &lt;\lim_{n \to \infty} n^2 \big(\int_0^1 (1+x^n)^{1/n}\, dx -1\big) &lt; 1-\frac1{8} $.</p>
435,079
<p>This is exercise from my lecturer, for IMC preparation. I haven't found any idea.</p> <p>Find the value of</p> <p>$$\lim_{n\rightarrow\infty}n^2\left(\int_0^1 \left(1+x^n\right)^\frac{1}{n} \, dx-1\right)$$</p> <p>Thank you</p>
Sangchul Lee
9,340
<p>By integration by parts,</p> <p>\begin{align*} \int_{0}^{1} (1 + x^{n})^{\frac{1}{n}} \, dx &amp;= \left[ -(1-x)(1+x^{n})^{\frac{1}{n}} \right]_{0}^{1} + \int_{0}^{1} (1-x)(1 + x^{n})^{\frac{1}{n}-1}x^{n-1} \, dx \\ &amp;= 1 + \int_{0}^{1} (1-x) (1 + x^{n})^{\frac{1}{n}-1} x^{n-1} \, dx \end{align*}</p> <p>so that we have</p> <p>\begin{align*} n^{2} \left( \int_{0}^{1} (1 + x^{n})^{\frac{1}{n}} \, dx - 1 \right) &amp;= \int_{0}^{1} n^{2} (1-x) (1 + x^{n})^{\frac{1}{n}-1} x^{n-1} \, dx. \end{align*}</p> <p>Let $a_{n}$ denote this quantity. By the substitution $y = x^{n}$, it follows that</p> <p>\begin{align*} a_{n} &amp;= \int_{0}^{1} n \left(1-y^{1/n}\right) (1 + y)^{\frac{1}{n}-1} \, dy = \int_{0}^{1} \int_{y}^{1} t^{\frac{1}{n}-1} (1 + y)^{\frac{1}{n}-1} \, dtdy \end{align*}</p> <p>Since $0 \leq t (1 + y) \leq 2$ and $ \int_{0}^{1} \int_{y}^{1} t^{-1}(1+y)^{-1} \, dtdy &lt; \infty$, an obvious application of the dominated convergence theorem shows that</p> <p>\begin{align*} \lim_{n\to\infty} a_{n} = \int_{0}^{1} \int_{y}^{1} \frac{dtdy}{t(1+y)} &amp;= - \int_{0}^{1} \frac{\log y}{1+y} \, dy \\ &amp;= \sum_{m=1}^{\infty} (-1)^{m} \int_{0}^{1} y^{m-1} \log y \, dy = \sum_{m=1}^{\infty} \frac{(-1)^{m-1}}{m^2} = \frac{\pi^2}{12}. \end{align*}</p>
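The value can be confirmed numerically (a rough check of my own; `simpson` is a hypothetical helper): for moderately large $n$, Simpson's rule applied to $n^2\int_0^1\left((1+x^n)^{1/n}-1\right)dx$ is already close to $\pi^2/12\approx 0.8225$.

```python
import math

def g(x, n):
    # (1 + x^n)^(1/n) - 1, computed stably via log1p/expm1
    return math.expm1(math.log1p(x ** n) / n)

def simpson(fn, a, b, m):
    # composite Simpson's rule with m (even) subintervals
    h = (b - a) / m
    s = fn(a) + fn(b)
    for k in range(1, m):
        s += (4 if k % 2 else 2) * fn(a + k * h)
    return s * h / 3

n = 400
approx = n ** 2 * simpson(lambda x: g(x, n), 0.0, 1.0, 40000)
print(approx, math.pi ** 2 / 12)
assert abs(approx - math.pi ** 2 / 12) < 0.02
```

Subtracting $1$ inside the integrand (via `expm1`) avoids the catastrophic cancellation of computing $\int_0^1(1+x^n)^{1/n}dx-1$ directly.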
335,116
<p>As a PhD student, if I want to do something algebraic / linear-algebraic such as representation theory as well as do PDEs, in both the theoretical and numerical aspects of PDEs, would this combination be compatible and / or useful? Is it feasible?</p> <p>I'd be grateful for an online resource to look into.</p> <p>Thanks,</p>
Robert Furber
61,785
<p>This goes back to the beginning of the subject of unitary representations of locally compact noncompact groups. Wigner was looking for all possible generalizations of the Dirac equation to higher spin, and developing the representation theory of the Poincaré group is how he obtained his results (Bargmann did this independently, so they published together). See here: <a href="https://www.pnas.org/content/34/5/211" rel="noreferrer">https://www.pnas.org/content/34/5/211</a></p>
335,116
<p>As a PhD student, if I want to do something algebraic / linear-algebraic such as representation theory as well as do PDEs, in both the theoretical and numerical aspects of PDEs, would this combination be compatible and / or useful? Is it feasible?</p> <p>I'd be grateful for an online resource to look into.</p> <p>Thanks,</p>
RBega2
127,803
<p>Peter Olver has an interesting book on <a href="https://www.springer.com/gp/book/9781468402742" rel="noreferrer">Symmetry and PDEs</a>. Another area to consider (that is particularly important for geometric PDEs) are exterior differential systems. <a href="https://services.math.duke.edu/~bryant/MSRI_Lectures.pdf" rel="noreferrer">Here</a> are some notes on the subject by Robert Bryant (who sometimes posts here).</p>
1,275,461
<p>I am trying to proof that $-1$ is a square in $\mathbb{F}_p$ for $p = 1 \mod{4}$. Of course, this is really easy if one uses the Legendre Symbol and Euler's criterion. However, I do not want to use those. In fact, I want to prove this using as little assumption as possible.</p> <p>What I tried so far is not really helpful:</p> <p>We can easily show that $\mathbb{F}_p^*/(\mathbb{F}_p^*)^2 = \{ 1\cdot (\mathbb{F}_p^*)^2, a\cdot (\mathbb{F}_p^*)^2 \}$ where $a$ is not a square (this $a$ exists because the map $x \mapsto x^2$ is not surjective). Now $-1 = 4\cdot k = 2^2 \cdot k$ for some $k\in \mathbb{F}_p$.</p> <p>From here I am trying to find some relation between $p =1 \mod{4}$ and $-1$ not being a multiple of a square and a non-square.</p>
Thomas Andrews
7,933
<p>You can use Wilson's theorem: $(p-1)!\equiv-1\pmod p$ and then show that $$(p-1)!\equiv 1\cdot 2\cdots \frac{p-1}{2} \left(-\frac{p-1}{2}\right)\cdots(-2)(-1) = \left(\left(\frac{p-1}{2}\right)!\right)^2(-1)^{\frac{p-1}{2}}\pmod p$$</p> <p>This gives an exact formula for a solution of $a^2=-1$, although not an efficient one: $a=\left(\frac{p-1}{2}\right)!$.</p> <p>Wilson's theorem can be shown pretty directly by comparing $x^{p-1}-1$ and $(x-1)(x-2)\cdots(x-(p-1))$.</p> <p>More generally, if $\mathbb F_q$ is a field with $q$ elements, $q$ odd, then the product of all the elements of $\mathbb F_q^\times$ is $-1$, because we can pair $a$ with $a^{-1}$ except for $a=-1$ and $a=1$. So you can show that $-1$ is a square in $\mathbb F_q$ if $q\equiv 1\pmod 4$ using the same argument.</p> <p>Or you can factor $$x^{p-1}-1 = \left(x^{\frac{p-1}{2}}-1\right)\left(x^{\frac{p-1}{2}}+1\right)$$</p> <p>The left side has $p-1$ roots, and thus $x^{\frac{p-1}{2}}+1$ must have a root, let's call it $a$. Then $(a^{\frac{p-1}{4}})^2=-1$ if $a$ is such a root.</p>
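The formula $a=\left(\frac{p-1}{2}\right)!$ can be tested directly (my own sketch):

```python
def sqrt_minus_one(p):
    # a = ((p-1)/2)! mod p satisfies a^2 = -1 mod p when p = 1 (mod 4)
    a = 1
    for k in range(2, (p - 1) // 2 + 1):
        a = a * k % p
    return a

for p in [5, 13, 17, 29, 101, 1009]:
    assert p % 4 == 1
    a = sqrt_minus_one(p)
    assert a * a % p == p - 1  # i.e. a^2 = -1 (mod p)
print("((p-1)/2)! is a square root of -1 mod p for each tested p = 1 mod 4")
```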
745,674
<p>Let $E$ be a complex vector space of dimension 3. Let $f$ be a non zero endomorphism such that $f^2=0$. I want to show that there is a basis $B=\{b_1,b_2,b_3\}$ of $E$ such that $$f(b_1)=0, f(b_2)=b_1,f(b_3)=0$$</p> <p><strong>Edit</strong> Here is how i see the answer now: </p> <p>$f$ being non zero there exists $x_0\in E$ such that $f(x_0)\not =0$. </p> <p>Let $M=span\{f(x_0),x_0\}$. Since $f^2=0$ we show easily that $f(x_0)$ and $x_0$ are linearly independent hence they form a basis for $M$. </p> <p>We take $b_1=f(x_0)$, $b_2=x_0$. </p> <p>Take any $z\not \in M$. </p> <p>If $z\in \ker f$ then take $b_3=z$. </p> <p>If $z\not \in \ker f$ then there exists $\beta \not = 0$ such that $f(z)=\beta f(x_0)$ (because $\dim(Im(f))=1$ hence it is spanned by any non zero vector, we take $f(x_0)$ as a spanning vector). Take $z'=\dfrac{1}{\beta}z-f(x_0)$ hence $z'\in \ker f$ and we take $b_3=z'$.</p>
Valentin Waeselynck
141,752
<p>You're not doing it in the right order. Choose $b_2$ so that $f(b_2) \neq 0$. Then $b_1 = f(b_2)$ automatically satisfies $f(b_1)=f^2(b_2)=0$, and you can choose any $b_3 \in Ker(f)$ that is independent of $b_1$ (possible, since $f^2=0$ forces $\operatorname{rank} f=1$ and hence $\dim Ker(f)=2$).</p>
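For a concrete instance (my own sketch; $f$ is given by the matrix $N$ below, which satisfies $N^2=0$), the recipe runs mechanically:

```python
# f is represented by N (so f(x) = N x); note N @ N = 0
N = [[0, 2, 3],
     [0, 0, 0],
     [0, 0, 0]]

def apply(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def det3(c1, c2, c3):
    # determinant of the matrix with columns c1, c2, c3
    return (c1[0] * (c2[1] * c3[2] - c2[2] * c3[1])
          - c2[0] * (c1[1] * c3[2] - c1[2] * c3[1])
          + c3[0] * (c1[1] * c2[2] - c1[2] * c2[1]))

b2 = [0, 1, 0]           # f(b2) = (2, 0, 0) != 0
b1 = apply(N, b2)
b3 = [0, 3, -2]          # in ker f: 2*3 + 3*(-2) = 0

assert apply(N, b1) == [0, 0, 0]   # f(b1) = 0   (uses f^2 = 0)
assert apply(N, b2) == b1          # f(b2) = b1
assert apply(N, b3) == [0, 0, 0]   # f(b3) = 0
assert det3(b1, b2, b3) != 0       # b1, b2, b3 form a basis
print("basis found:", b1, b2, b3)
```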
3,037,793
<p>I have to write the following sentence "If professors are unhappy all students fail their exams" in logic and my answer is:</p> <p>∀x [Prof(x) ∧ Unhappy(x)] ⇒ [∀y stud(y) ⇒ fail_exam(x,y)]</p> <p>However, the answer of my teacher is:</p> <p>∀x ∀y( prof(x) ∧ unhappy(x) ∧ stud(y) ) ⇒ fail exam(x, y))</p> <p>Can someone helps me?</p>
Bram28
256,001
<p>They are equivalent ... although to show that, I will first insist on adding a few parentheses so as to indicate the proper scope of the quantifiers, giving us:</p> <p><span class="math-container">$\forall x ((Prof(x) \land Unhappy(x)) \rightarrow \forall y (Stud(y) \rightarrow FailExam(x,y)))$</span></p> <p>and</p> <p><span class="math-container">$\forall x \forall y ((Prof(x) \land Unhappy(x) \land Stud(y)) \rightarrow FailExam(x,y))$</span></p> <p>Now, to show these are equivalent, let us first note the following general 'Prenex Law', which is an equivalence that allows you to 'take out' quantifiers and broaden their scope to include ('move over') other parts of the formula:</p> <p><span class="math-container">$\psi \rightarrow \forall x \ \varphi(x) \Leftrightarrow \forall x (\psi \rightarrow \varphi(x))$</span></p> <p>Here, the formula <span class="math-container">$\psi$</span> cannot include any free occurrences of the variable <span class="math-container">$x$</span>.</p> <p>Well, we can apply this Prenex law to the first formula, and take out the <span class="math-container">$\forall y$</span>, since the antecedent of the conditional we are moving it over does not contain any free occurrences of <span class="math-container">$y$</span>. Thus, we get:</p> <p><span class="math-container">$\forall x \forall y ((Prof(x) \land Unhappy(x)) \rightarrow (Stud(y) \rightarrow FailExam(x,y)))$</span></p> <p>Ok, and now we can apply a second general equivalence principle that mAuro alluded to in the comments, called eXportation:</p> <p><span class="math-container">$P \rightarrow (Q \rightarrow R) \Leftrightarrow (P \land Q) \rightarrow R$</span></p> <p>Applied to the previous formula, we thus obtain the second of your two formulas, thus showing that they are equivalent.</p>
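Since both formulas quantify only over $x$ and $y$, their agreement can also be brute-force checked over a small finite domain (my own sketch; of course this only gives evidence for, not a proof of, equivalence over all domains):

```python
from itertools import product

D = [0, 1]  # a two-element domain

def formula1(prof, unhappy, stud, fail):
    # forall x ((Prof(x) and Unhappy(x)) -> forall y (Stud(y) -> Fail(x,y)))
    return all((not (prof[x] and unhappy[x]))
               or all((not stud[y]) or fail[x][y] for y in D)
               for x in D)

def formula2(prof, unhappy, stud, fail):
    # forall x forall y ((Prof(x) and Unhappy(x) and Stud(y)) -> Fail(x,y))
    return all((not (prof[x] and unhappy[x] and stud[y])) or fail[x][y]
               for x in D for y in D)

bools = [False, True]
count = 0
for prof in product(bools, repeat=2):
    for unhappy in product(bools, repeat=2):
        for stud in product(bools, repeat=2):
            for flat in product(bools, repeat=4):
                fail = [flat[0:2], flat[2:4]]
                assert formula1(prof, unhappy, stud, fail) == \
                       formula2(prof, unhappy, stud, fail)
                count += 1
print(count, "interpretations checked; the formulas agree on all of them")
```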
3,037,793
<p>I have to write the following sentence "If professors are unhappy all students fail their exams" in logic and my answer is:</p> <p>∀x [Prof(x) ∧ Unhappy(x)] ⇒ [∀y stud(y) ⇒ fail_exam(x,y)]</p> <p>However, the answer of my teacher is:</p> <p>∀x ∀y( prof(x) ∧ unhappy(x) ∧ stud(y) ) ⇒ fail exam(x, y))</p> <p>Can someone helps me?</p>
Jorge Adriano Branco Aires
74,765
<p>You've been answered that they are equivalent. It's also worth knowing that the transformation from <span class="math-container">$(P\land Q) \rightarrow R$</span> to <span class="math-container">$P\rightarrow Q \rightarrow R$</span> is known by the name of <em><a href="https://en.wikipedia.org/wiki/Currying" rel="nofollow noreferrer">"currying"</a></em>, and its inverse <em>"uncurrying"</em>. It shows up not only in logic, but in any domain that forms a <a href="https://en.wikipedia.org/wiki/Cartesian_closed_category" rel="nofollow noreferrer">Cartesian Closed Category</a>.</p>
3,999,996
<p>I know how you can show this geometrically, but is there any way to prove this algebraically?</p>
jjuma1992
715,368
<p>You can use the following two known inequalities <span class="math-container">$$x \leq |x|$$</span> and <span class="math-container">$$\left|\int_a^b f(x)dx\right|\leq \int_a^b|f(x)|dx.$$</span> We have <span class="math-container">\begin{align*} \int_0^1x\sin^{-1}xdx &amp;\leq \left|\int_0^1x\sin^{-1}xdx\right|\\ &amp;\leq \int_0^1x|\sin^{-1}x|dx\\ &amp;\leq \frac{\pi}{2}\int_0^1xdx=\frac{\pi}{4}. \end{align*}</span></p>
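Numerically (my own check; `simpson` is a hypothetical helper), $\int_0^1 x\sin^{-1}x\,dx$ works out to $\pi/8\approx 0.3927$, comfortably below the bound $\pi/4$ derived above.

```python
import math

def simpson(fn, a, b, m):
    # composite Simpson's rule with m (even) subintervals
    h = (b - a) / m
    s = fn(a) + fn(b)
    for k in range(1, m):
        s += (4 if k % 2 else 2) * fn(a + k * h)
    return s * h / 3

val = simpson(lambda x: x * math.asin(x), 0.0, 1.0, 20000)
print(val, math.pi / 8)
assert val <= math.pi / 4
assert abs(val - math.pi / 8) < 1e-4
```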
1,246,705
<p>I was doing some linear algebra exercises and came across the following tough problem :</p> <blockquote> <p>Let $M_{n\times n}(\mathbf{R})$ denote the set of all the matrices whose entries are real numbers. Suppose $\phi:M_{n\times n}(\mathbf{R})\to M_{n\times n}(\mathbf{R})$ is a nonzero linear transform (i.e. there is a matrix $A$ such that $\phi(A)\neq 0$) such that for all $A,B\in M_{n\times n}(\mathbf{R})$ $$\phi(AB)=\phi(A)\phi(B).$$ Prove that there exists a invertible matrix $T\in M_{n\times n}(\mathbf{R})$ such that $$\phi(A)=TAT^{-1}$$ for all $A\in M_{n\times n}(\mathbf{R})$.</p> </blockquote> <p>This is an exercise from my textbook and I am all thumbs when I attempted to solve it .</p> <p>Can someone tell me as to how should I , at least , start the problem ? </p>
Gabriel Romon
66,096
<p>Since $\varphi$ is linear, it's only natural to investigate how it acts on the canonical basis of $M_n(k)$. Let $E_{ij}$ denote the matrix with a single one for the entry $(i,j)$ and $0$'s elsewhere. It is standard that $E_{ij}E_{kl} = \delta_{jk}E_{il}$, hence $\varphi(E_{ij})\varphi(E_{kl}) = \delta_{jk}\varphi(E_{il})$.</p> <p>Let $e_k$ denote the $k$-th vector of the canonical basis of $k^n$.Note that if $\varphi(E_{ij}) = CE_{ij}C^{-1}$, then $\varphi(E_{ij})Ce_k = \delta_{jk}Ce_i$. With $j=k$, $$\varphi(E_{ij})Ce_j=Ce_i$$ which yields a construction of $C$ (fix $j$ and let $i$ vary), as follows.</p> <p>Let $I$ denote the identity matrix. Since $\varphi \neq 0$, there is some $A$ such that $\varphi(A)\neq 0$, hence $\varphi(A)\varphi(I)\neq 0$, hence $\varphi(I)\neq 0$. Since $I=\sum_{k=1}^n E_{kk}$, there exists some $k$ such that $\varphi(E_{kk})\neq 0$. WLOG suppose $k=1$. <br/> Since $\varphi(E_{11})\neq 0$, there exists $e_1'\in \operatorname{im}\varphi(E_{11})\setminus\{0\}$. Define $e_i':=\varphi(E_{i1})(e_1')$. It's easy to check that every $e_i'$ is non-zero and that $(e_1',\ldots, e_n')$ is linearly independent, hence a basis. Define $C$ the matrix with columns $(e_1',\ldots, e_n')$. $C$ is invertible by construction. It is easily checked that for fixed $(i,j)$, the following holds: $$\forall k, \varphi(E_{ij})e_k' = CE_{ij}C^{-1}e_k'$$ thus $$\varphi(E_{ij}) = CE_{ij}C^{-1}$$</p> <p>Hence by linearity, $\varphi(A) = CAC^{-1}$ for all $A$.</p>
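The construction can be run mechanically on a small example (my own sketch, using exact rational arithmetic; here the $0$-indexed `E(0, 0)` plays the role of $E_{11}$): start from a known conjugation $\varphi(A)=CAC^{-1}$ on $2\times 2$ matrices, rebuild a conjugating matrix $C'$ from the values $\varphi(E_{ij})$, and verify $\varphi(E_{ij})=C'E_{ij}C'^{-1}$.

```python
from fractions import Fraction as F

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(A):
    d = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [[A[1][1] / d, -A[0][1] / d], [-A[1][0] / d, A[0][0] / d]]

def E(i, j):
    return [[F(1) if (r, c) == (i, j) else F(0) for c in range(2)]
            for r in range(2)]

C = [[F(1), F(1)], [F(0), F(1)]]           # the "hidden" conjugator
phi = lambda A: mul(mul(C, A), inv2(C))    # phi(A) = C A C^{-1}

# Reconstruction: e1' = a nonzero column of phi(E11), e2' = phi(E21) e1'
P = phi(E(0, 0))
col = 0 if any(P[r][0] != 0 for r in range(2)) else 1
e1 = [P[0][col], P[1][col]]
Q = phi(E(1, 0))
e2 = [sum(Q[r][k] * e1[k] for k in range(2)) for r in range(2)]
Cp = [[e1[0], e2[0]], [e1[1], e2[1]]]      # columns e1', e2'

Cpi = inv2(Cp)
for i in range(2):
    for j in range(2):
        assert phi(E(i, j)) == mul(mul(Cp, E(i, j)), Cpi)
print("recovered conjugator C' =", Cp)
```

In general $C'$ need only agree with $C$ up to a scalar, since $e_1'$ was an arbitrary nonzero vector in the image.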
2,764,381
<p>one thing I don't understand is what is sin(0) and sin(1) exactly? I am alright with the concept of radian (pi) but don't understand 0 and 1. What does it mean?</p>
fhorrobin
558,456
<p>I am not completely sure what you are asking but I will try to answer. In the context of the question in the title, $[0,1] $ means that they are talking about $y=\sin x $ for values of $x $ on the closed interval from 0 to 1 (rather than, say, all of the real numbers). </p> <p>In terms of what this means, the sine function just maps a real number to another real number, so though $\sin x$ may not always be rational, it is always just a real number. </p>
3,505,397
<p>well, I have to find the Taylor polynomial of <span class="math-container">$f(x,y)=\sin(x)\sin(y)$</span> at <span class="math-container">$(0,\pi/4)$</span>. I found:</p> <p>Is <span class="math-container">$T_3(x,y)=-\frac{1}{12}\sqrt{2}x(16x^2+48y^2-24\pi+3\pi^2)$</span> correct?</p>
Quanto
686,284
<p>Use the identity <span class="math-container">$\cos^{-1}x+\sin^{-1}x=\frac\pi2 $</span>,</p> <p><span class="math-container">$$\cos^{-1}\left(\sin\frac{16\pi}{7}\right) =\frac\pi2 - \sin^{-1}\left(\sin\frac{16\pi}{7}\right) =\frac\pi2 - \sin^{-1}\left(\sin\frac{2\pi}{7}\right) =\frac\pi2-\frac{2\pi}{7}=\frac{3\pi}{14} $$</span> </p>
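A one-line numerical confirmation (my own check):

```python
import math

lhs = math.acos(math.sin(16 * math.pi / 7))
rhs = 3 * math.pi / 14
print(lhs, rhs)
assert abs(lhs - rhs) < 1e-9
```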
578,337
<p>For $n=1,2,3,\dots,$ and $|x| &lt; 1$ I need to prove that $\frac{x}{1+nx^2}$ converges uniformly to the zero function. How? For $|x| &gt; 1$ it is easy. </p>
Sugata Adhya
36,242
<p>$f_n(0)=0\to0$ and for $x\ne0,$ $|f_n(x)|=\left|\dfrac{x}{1+nx^2}\right|\le\left|\dfrac{1}{nx}\right|\to0$ as $n\to\infty$</p> <p>So, $f(x)=\displaystyle\lim_{n\to\infty}f_n(x)\equiv0$ on $(-1,1).$ For all $n\ge1,$ let $$M_n=\displaystyle\sup_{|x|&lt;1}\left|\dfrac{x}{1+nx^2}-0\right|=\displaystyle\sup_{|x|&lt;1}\left|\dfrac{x}{1+nx^2}\right|=\displaystyle\sup_{|x|&lt;1}\dfrac{|x|}{1+nx^2}$$</p> <p>By the AM-GM inequality, for $x\ne0,$ $$\dfrac{\dfrac{1}{|x|}+\dfrac{nx^2}{|x|}}{2}\ge\sqrt n\implies0\le\dfrac{|x|}{1+nx^2}\le\dfrac{1}{2\sqrt n}\to0$$</p> <p>Consequently, by the squeeze theorem, $M_n\to0.$</p> <p>Thus $f_n\rightrightarrows f$ on $(-1,1).$</p>
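The bound $M_n\le\frac{1}{2\sqrt n}$ is easy to observe numerically (my own sketch): sample a fine grid in $(-1,1)$.

```python
import math

def f_n(x, n):
    return x / (1 + n * x * x)

for n in [1, 4, 25, 100, 10000]:
    grid = [-1 + 2 * k / 20000 for k in range(1, 20000)]
    sup = max(abs(f_n(x, n)) for x in grid)
    assert sup <= 1 / (2 * math.sqrt(n)) + 1e-12
print("sup |f_n| <= 1/(2*sqrt(n)), which tends to 0")
```

The grid maximum sits at $|x|=1/\sqrt n$, where the bound is attained exactly.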
489,109
<p>I've been stumped on this problem for hours and cannot figure out how to do it from tons of tutorials.</p> <p>Please note: This is an intro to calculus, so we haven't learned derivatives or anything too complex.</p> <p>Here's the question: </p> <p>Let $f(x) = x^5 + x + 7$. Find the value of the inverse function at a point. $f^{-1}(1035) = $_<strong>__</strong>?</p> <p>I tried setting $f(x)$ as $y$.. and solving for $x$. Clearly that doesn't help lol. I've tried many different approaches and cannot figure out the answer. I used wolframalpha, my textbook, notes, examples, and tons of Google searches and nothing makes sense. Can someone please help? Thanks!!</p>
Henry Swanson
55,540
<p>In general, polynomials won't have an inverse. This one happens to have one, but it's not fun to express, as far as I know.</p> <p>Since you only need to find the inverse at a particular number, not any $y$, just plug it in and rearrange until something looks nice: $x^5 + x + 7 = 1035$ means $x(x^4 + 1) = 1028$. The factors of $1028$ are a good place to start.</p>
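Following the hint (my own check): among small integers, $x=4$ gives $4\cdot(4^4+1)=4\cdot 257=1028$, so $f(4)=1035$ and hence $f^{-1}(1035)=4$ (unique, since $f$ is strictly increasing).

```python
def f(x):
    return x ** 5 + x + 7

# search for x with x*(x^4 + 1) = 1028, i.e. f(x) = 1035
solutions = [x for x in range(-20, 21) if x * (x ** 4 + 1) == 1028]
print(solutions)       # [4]
assert solutions == [4]
assert f(4) == 1035    # hence f^{-1}(1035) = 4
```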
1,407,683
<p>I am new to differential geometry. It is surprising to find that the linear connection is not a tensor, namely, not coordinate-independent. </p> <p>Can we bypass this ugly object? Only intrinsic quantities should appear in a textbook. </p>
Amitai Yuval
166,201
<p>In contrast to what is written in the question and some of the comments, a connection is <em>independent</em> of coordinates.</p> <p>The intrinsic way to define connections is the following. For simplicity, we treat only connections on the tangent bundle, even though a similar definition can be applied for any vector bundle. Let $M$ be a smooth manifold. Let $TM$ denote the tangent bundle to $M$, let $\Gamma(TM)$ denote the space of vector fields on $M$, and let $\mathrm{end}(TM)$ denote the space of vector bundle morphisms from $TM$ to itself. A connection on $M$ is a map$$\nabla:\Gamma(TM)\to\mathrm{end}(TM),$$which is additive and satisfies the Leibniz rule. Additivity should be clear enough, and the Leibniz rule means that we have $$\nabla fX=df\cdot X+f\nabla X,$$for any smooth function $f$ and a vector field $X$.</p> <p>It is customary to use the notation $\nabla_YX$ for $\nabla(X)(Y)$. Using this notation, additivity means $$\nabla_Y(X_1+X_2)=\nabla_YX_1+\nabla_YX_2,$$while the Leibniz rule takes the form$$\nabla_YfX=df(Y)\cdot X+f\nabla_YX.$$As written above, this is perfectly intrinsic. Now, in some schools this intrinsic approach is neglected, and connections are introduced only via the Christoffel symbols. Naturally, the Christoffel symbols depend on coordinates, but this actually means nothing.</p>
1,407,683
<p>I am new to differential geometry. It is surprising to find that the linear connection is not a tensor, namely, not coordinate-independent. </p> <p>Can we bypass this ugly object? Only intrinsic quantities should appear in a textbook. </p>
jxnh
132,834
<p>As mentioned in a previous answer, connections are quite intrinsic. I will take the slightly more pedestrian view that a connection is a collection of maps from type $(k,l)$ tensor fields to type $(k,l+1)$ fields that is linear, satisfies a Leibniz rule, commutes with contraction, agrees with the differential for smooth functions and is usually assumed to be torsion free in the sense that $\nabla_a\nabla_bf = \nabla_b \nabla_a f$, where small Latin indices are used in the abstract index sense (apologies to those who use precisely the opposite), so that second covariant derivatives of smooth functions are symmetric. </p> <p>Now suppose we have two connections $\nabla$ and $\tilde{\nabla}$. We already know that $(\nabla - \tilde \nabla) f = 0$ for any smooth function. Now suppose we have a one-form $\omega_a$. Then we can compute that $$ (\nabla_b - \tilde \nabla_b)(f\omega_a) = f[(\nabla_b - \tilde \nabla_b)(\omega_a)] $$ as a result of the Leibniz rule. This means that $\omega_a \to (\nabla_b - \tilde \nabla_b)(\omega_a)$ is linear over $C^\infty(M)$, and thus is characterized by some type $(1,2)$ tensor $C^c_{ab}$ such that $(\nabla_b - \tilde \nabla_b)(\omega_a) = \omega_c C^c_{ab}$. The difference between the two, acting on arbitrary fields, can now be expressed by various signed sums of contractions with $C^c_{ab}$.</p> <p>Now, there are particular connections $\nabla$ that we want to be able to compute with, e.g. Levi-Civita, which are defined in terms of intrinsic properties of $M$. But the only connections we really know how to write down easily are the flat connections $\tilde\nabla$ associated to a fixed coordinate chart. In this case, we often write $C^c_{ab} = \Gamma^c_{ab}$, which are the Christoffel symbols. 
Note that the "ugliness" of these symbols has to do with the fact that the flat connections are not coordinate-independent; thus for different coordinate charts one generally has a different correction tensor $C^c_{ab}$, which leads to the saying, which rather peeved a physics professor of mine, that the Christoffel symbols are not a tensor. </p> <p>Additionally, it seems rather strange to say that we should never present anything extrinsic. I suppose it is possible that everything could be derived from purely intrinsic computation, but that seems strange. In particular, differential geometry has many practical uses where we really would like to be able to numerically compute, say, the path of a geodesic.</p>
2,104,984
<p>For every $x\in \mathbb{R}$, $f(x+6)+f(x-6)=f(x)$ is satisfied. What may be the period of $f(x)$? I tried writing several $f$ values but I couldn't get something like $f(x+T)=f(x)$.</p>
Simply Beautiful Art
272,831
<p>It follows then that</p> <p>$$\begin{align}f(x)&amp;=f(x+6)+f(x-6)\\&amp;=f(x+12)+f(x)+f(x-6)\\\implies0&amp;=f(x+12)+f(x-6)\\\implies f(x+12)&amp;=-f(x-6)\\\implies f(x+18)&amp;=-f(x)\\\implies f(x+36)&amp;=-f(x+18)=f(x)\end{align}$$</p> <blockquote> <p>$$f(x+36)=f(x)$$</p> </blockquote>
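As a sanity check (my addition; the particular $f$ is chosen for illustration): $f(x)=\cos(\pi x/18)$ satisfies the functional equation, since $\cos(a+\pi/3)+\cos(a-\pi/3)=2\cos(a)\cos(\pi/3)=\cos(a)$, and its period is indeed $36$.

```python
import math

def f(x):
    # One concrete solution of f(x+6) + f(x-6) = f(x).
    return math.cos(math.pi * x / 18)

xs = [x / 10.0 for x in range(-200, 201)]
# Residual of the functional equation over a grid of sample points.
eq_residual = max(abs(f(x + 6) + f(x - 6) - f(x)) for x in xs)
# Residual of the claimed period 36.
period_residual = max(abs(f(x + 36) - f(x)) for x in xs)
```

Both residuals are at floating-point noise level, consistent with the derivation above.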
1,443,812
<p>Suppose that $X$ and $Y$ have joint p.d.f.</p> <p>$$ f(x, y) = 3x, \; 0 &lt; y &lt; x &lt; 1.$$ </p> <p>Find $f_X(x)$, the marginal p.d.f. of $X$.</p> <p>This is what I got:</p> <p>$$f_X(x) = \int_0^x f(x, y)dy = \int_0^x 3x dy = 3x^2$$ for $0 &lt; x &lt; 1$.</p> <p>However, if I want to know whether $X$ and $Y$ are independent or not, how can I do it?</p>
drhab
75,923
<p><strong>Hint</strong>:</p> <p>The pdf takes value $0$ on $\{\langle x,y\rangle\mid x&lt;y\}$ so that $P(X&lt;Y)=0$.</p> <p>Then $P(X&lt;c\wedge Y&gt;c)=0$ for each $c$. </p> <p>If $X$ and $Y$ are indeed independent then: $$P(X&lt;c\wedge Y&gt;c)=P(X&lt;c)P(Y&gt;c)$$ for each $c$.</p> <p>So to prove that $X$ and $Y$ are <em>not</em> independent it is enough to find some $c$ with: $$P(X&lt;c)P(Y&gt;c)&gt;0$$</p>
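To make the hint concrete, here is a small numeric illustration (my addition; the midpoint-grid approximation and the choice $c=\tfrac12$ are mine): for the density $f(x,y)=3x$ on $0<y<x<1$, the event $\{X<c,\ Y>c\}$ is impossible, while the product of the marginal probabilities is positive.

```python
# Midpoint Riemann sums for the density f(x, y) = 3x on 0 < y < x < 1.
N = 200
h = 1.0 / N
c = 0.5

def f(x, y):
    return 3.0 * x if 0.0 < y < x < 1.0 else 0.0

pts = [((i + 0.5) * h, (j + 0.5) * h) for i in range(N) for j in range(N)]

total = sum(f(x, y) for x, y in pts) * h * h              # should be close to 1
p_joint = sum(f(x, y) for x, y in pts if x < c and y > c) * h * h
p_x_lt_c = sum(f(x, y) for x, y in pts if x < c) * h * h  # exact value: c^3 = 1/8
p_y_gt_c = sum(f(x, y) for x, y in pts if y > c) * h * h  # exact value: 5/16
```

Since $P(X<c \wedge Y>c)=0$ while $P(X<c)\,P(Y>c)=\tfrac18\cdot\tfrac{5}{16}>0$, independence fails.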
4,041,758
<p>If you have a line passing through the middle of a circle, does it create a right angle at the intersection of the line and curve?</p> <p>More generally, is it valid to define an angle created between a line and a curve? Is the tangent to the curve at the point of intersection a valid interpretation (i.e. a semicircle has 2 right angles)?</p> <p>I saw it in a debate thread and it got me curious.</p>
Directions In Physics
888,931
<blockquote> <p>More generally, is it valid to define an angle created between a line and a curve?</p> </blockquote> <p>Yes. It can be done with a little calculus. Calculus gives us a workable definition of a unique tangent vector at each point on a curve. So, we can use calculus to translate questions about curves into equivalent questions about lines.</p> <p>The line <span class="math-container">$\overline{AH}$</span> crosses the circle at <span class="math-container">$A$</span> and <span class="math-container">$H$</span>. It is intuitively plausible that the line <span class="math-container">$\overline{GH}$</span> from the point <span class="math-container">$G$</span> on the circle to the point <span class="math-container">$H$</span> becomes perpendicular to <span class="math-container">$\overline{AH}$</span> as <span class="math-container">$G$</span> approaches <span class="math-container">$H$</span>. If there is a line through the circle perpendicular to <span class="math-container">$\overline{AH}$</span> then it must be the line that <span class="math-container">$\overline{GH}$</span> approaches as <span class="math-container">$G$</span> goes to <span class="math-container">$H$</span>. 
It is possible to rigorously define such a line using the mathematically precise definitions of limit and derivative from calculus.</p> <p>The equation for the curve of the circle in polar coordinates is</p> <p><span class="math-container">$(x(\theta), y(\theta))=(r\cos(\theta), r\sin(\theta))$</span> with <span class="math-container">$r$</span> constant.</p> <p>The equation for the line through the circle in polar coordinates is</p> <p><span class="math-container">$(x(r), y(r))=(r\cos(\theta), r\sin(\theta))$</span> with <span class="math-container">$\theta$</span> constant.</p> <p>The vector tangent to the circle at <span class="math-container">$\theta$</span> is <span class="math-container">$\vec{a}=\lim_{h\rightarrow 0} \frac{(x(\theta+h), y(\theta+h))-(x(\theta), y(\theta))}{h}$</span>.</p> <p>This is a mathematically rigorous definition of the notion of the tangent to the curve at a point.</p> <p><span class="math-container">$\vec{a}=\frac{d(x(\theta),y(\theta))}{d\theta}=\frac{d(r\cos(\theta),\, r\sin(\theta))}{d\theta}=(-r\sin\theta , r\cos\theta)$</span></p> <p>Similarly, the vector tangent to the line at <span class="math-container">$r$</span> is</p> <p><span class="math-container">$\vec{b}=\frac{d(x(r),y(r))}{dr}=\frac{d(r\cos(\theta),\, r\sin(\theta))}{dr}=(\cos \theta , \sin\theta)$</span></p> <hr /> <p>The Dot Product.</p> <p><span class="math-container">$\vec{a} \cdot \vec{b}= |\vec{a}| |\vec{b}| \cos(\phi)$</span> where <span class="math-container">$\phi$</span> is the angle between <span class="math-container">$\vec{a}$</span> and <span class="math-container">$\vec{b}$</span>.</p> <p><span class="math-container">$\implies$</span></p> <p><span class="math-container">$\vec{a} \cdot \vec{b}=0 \rightarrow \vec{a} \perp \vec{b}$</span>.</p> <hr /> <p><span class="math-container">$\vec{a} \cdot \vec{b} = -r \sin \theta \cos\theta + r \cos \theta \sin \theta=0$</span></p> <p>Therefore, the vectors are orthogonal.</p> <p>Assume the line is 
crossing the unit circle where <span class="math-container">$\theta=0$</span>; then</p> <p><span class="math-container">$a=(0,1)|_{\text{where the line crosses}}$</span> and <span class="math-container">$b=(1,0)|_{\text{where the line crosses}}$</span>.</p> <p>And you can always define any circle to be a unit circle by changing your units of measurement. So this particular choice of <span class="math-container">$a$</span> and <span class="math-container">$b$</span> really holds for any circle.</p>
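A quick numeric restatement of the dot-product computation (my addition; the sample radius and angles are arbitrary): the two tangent vectors are orthogonal at every angle, not just at $\theta=0$.

```python
import math

def circle_tangent(r, theta):
    # d/dtheta of (r cos(theta), r sin(theta))
    return (-r * math.sin(theta), r * math.cos(theta))

def ray_tangent(theta):
    # d/dr of (r cos(theta), r sin(theta))
    return (math.cos(theta), math.sin(theta))

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

# Largest |a . b| over a sweep of angles, at radius 2.5.
worst = max(abs(dot(circle_tangent(2.5, k / 7.0), ray_tangent(k / 7.0)))
            for k in range(100))
```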
1,912,570
<p>There is a proof from Stein for this assertion,</p> <p><a href="https://i.stack.imgur.com/fqRKU.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fqRKU.jpg" alt="enter image description here"></a> <a href="https://i.stack.imgur.com/fhZmE.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fhZmE.jpg" alt="enter image description here"></a></p> <p>My question is why $\sum_{j \in J_1}|Q_j|+\sum_{j \in J_2}|Q_j| \leqslant \sum_{j =1}^\infty|Q_j|$ ?</p> <p>I have a feeling they are not equal since the order of addition has been changed, but it is not changed in the way that we can apply the proposition that absolute convergence implies that the order of summation does not matter.</p>
Reveillark
122,262
<p>They are not equal because the cubes may intersect neither $E_1$ nor $E_2$, so you might be counting 'excessive' cubes in the large sum.</p> <p>Here's a hint to see that the inequality holds, using the following:</p> <blockquote> <p><strong>Proposition</strong>: Let $\{a_n\}$ be a sequence of non-negative terms. Then $$\sum_n a_n&lt;\infty\qquad (*)$$ if and only if, for some (and hence all) sequences $0=r_0&lt;r_1&lt;r_2&lt;\dots$ $$\sum_{i=0}^\infty\sum_{n=r_i+1}^{r_{i+1}}a_n&lt;\infty\qquad (**)$$ <em>Hint of proof:</em> The partial sums of $(**)$ (with respect to the leftmost series) are nothing more than a subsequence of the sequence of partial sums of $(*)$, which is monotonically increasing. </p> </blockquote> <p>In other words, an insertion of parentheses $$(a_1+\dots+a_{r_1})+(a_{r_1+1}+\dots+a_{r_2})+\dots$$ does not alter the convergence. </p> <p>Using this we can write, for any set of indices $J$:</p> <p>$$\sum_{n\in J}a_n=\sum_{n=1}^\infty b_n$$</p> <p>where $$b_n=\begin{cases}a_n &amp; \text{if } n\in J\\ 0 &amp; \text{otherwise}\end{cases}$$</p> <p>Can you see how to proceed?</p>
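A tiny numeric illustration of the proposition (my addition; the sequence $a_n = 1/n^2$ and the block boundaries are arbitrary choices): the partial sums of the blocked series are exactly a subsequence of the ordinary partial sums.

```python
a = [1.0 / n ** 2 for n in range(1, 1001)]   # a_1, ..., a_1000
r = [0, 3, 10, 50, 200, 1000]                # block boundaries r_0 < r_1 < ...

# Ordinary partial sums: plain[k-1] = a_1 + ... + a_k.
plain = []
s = 0.0
for x in a:
    s += x
    plain.append(s)

# Partial sums of (a_1+...+a_{r_1}) + (a_{r_1+1}+...+a_{r_2}) + ...
blocked = []
s = 0.0
for i in range(len(r) - 1):
    s += sum(a[r[i]:r[i + 1]])               # a_{r_i + 1} + ... + a_{r_{i+1}}
    blocked.append(s)
```

Each `blocked[i]` matches `plain[r[i+1] - 1]` up to floating-point reassociation.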
215,983
<p>I was expecting to get the answer to the above by doing either of the following:</p> <pre><code>Options[Graph[{1 -&gt; 2, 2 -&gt; 1}], GraphLayout] AbsoluteOptions[Graph[{1 -&gt; 2, 2 -&gt; 1}], GraphLayout] </code></pre> <p>However, the result of both is</p> <blockquote> <p>GraphLayout-&gt;Automatic</p> </blockquote> <p>How do I find out the option setting for <code>GraphLayout</code> chosen by the Automatic method?</p>
Bob Hanlon
9,362
<p>Using <a href="https://reference.wolfram.com/language/ref/Rasterize.html" rel="nofollow noreferrer"><code>Rasterize</code></a> to do a "visual" comparison with some suspects</p> <pre><code>grR = Rasterize@Graph[{1 -&gt; 2, 2 -&gt; 1}]; Select[{#, Rasterize@ Graph[{1 -&gt; 2, 2 -&gt; 1}, GraphLayout -&gt; #]} &amp; /@ {"CircularEmbedding", "SpiralEmbedding", "SpringEmbedding", "SpringElectricalEmbedding", "HighDimensionalEmbedding", "LayeredDigraphEmbedding", "LayeredEmbedding"}, #[[2]] === grR &amp;] </code></pre> <p><a href="https://i.stack.imgur.com/gaP8J.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gaP8J.png" alt="enter image description here"></a></p>
4,417,896
<p>I have only found information regarding doing this by integration by parts. By differentiating under the integral sign, I let <span class="math-container">$$I_n = \int_0^\infty x^n e^{-\lambda x} dx $$</span> and get <span class="math-container">$\frac{dI_n}{d\lambda} = -I_{n+1} $</span> and therefore <span class="math-container">$\frac{dI_n}{d\lambda} = -\frac{n+1}{\lambda} I_n$</span>. Proceeding from here I solve the ODE to get <span class="math-container">$I_n = Ae^{-\frac{n+1}{\lambda}x}$</span>.</p> <p>This is clearly wrong. What went wrong? I am unsure how to proceed with this differentiation of the integral approach to solve this problem.</p>
Quanto
686,284
<p>Differentiate under the integral sign <span class="math-container">$n$</span> times as follows <span class="math-container">$$\int_0^\infty x^n e^{-\lambda x} dx=(-1)^n\frac{d^n}{d\lambda^n} \int_0^\infty e^{-\lambda x} dx= (-1)^n\frac{d^n}{d\lambda^n}\frac1\lambda=\frac{n!}{\lambda^{n+1}} $$</span></p>
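A numeric check of the result (my addition; the truncation point and step count are ad-hoc choices): for $n=3$, $\lambda=2$ the integral should equal $3!/2^4 = 0.375$.

```python
import math

def integral(n, lam, upper=60.0, steps=200000):
    # Midpoint rule for int_0^upper x^n e^(-lam x) dx; the tail beyond
    # `upper` is negligible for these parameters.
    h = upper / steps
    return h * sum(((k + 0.5) * h) ** n * math.exp(-lam * (k + 0.5) * h)
                   for k in range(steps))

approx = integral(3, 2.0)
exact = math.factorial(3) / 2.0 ** (3 + 1)   # n! / lam^(n+1)
```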
1,017,965
<p>Just as the title says, my question is: what is the matrix representation of the Radon transform (Radon projection matrix)? I want to have an exact matrix for the Radon transformation. </p> <p>(I want to implement some electron tomography algorithms by myself, so I need to use the matrix representation of the Radon transformation.)</p> <p>(An introduction to the Radon transformation: <a href="http://en.wikipedia.org/wiki/Radon_transform" rel="nofollow noreferrer">click here</a>.)</p>
Guillermo González Sánchez
739,077
<p>The Radon transform consists of a rotation plus a projection onto the X axis.</p> <p>The final <code>np.allclose</code> check in this code evaluates to True for any square matrix where there is an inscribed circle containing the target values.</p> <pre><code>from skimage.transform import radon, rotate import numpy as np # img: any square 2D numpy array whose support lies inside the inscribed circle def get_image_center(img): center = (img.shape[0] * 0.5, img.shape[1] * 0.5) return center def rotate_image(img, deg): rotated_img = rotate(img, deg, center=get_image_center(img), preserve_range=True) return rotated_img theta_test = np.linspace(0, 360, 100) sinogram = radon(img, theta=theta_test, circle=True) def perform_radon(img, angle_vector): radon_result_list = [] x_projection_matrix = np.ones(img.shape[0]) for angle in angle_vector: rotated_img = rotate_image(img, - angle) radon_transform = np.dot(x_projection_matrix, rotated_img) radon_result_list.append(radon_transform) arr = np.vstack(radon_result_list).T return arr manual_radon = perform_radon(img, theta_test) np.allclose(manual_radon, sinogram) </code></pre> <p>The last thing needed to get a linear transformation that represents the Radon transform is a linear operator that represents the rotation.</p>
2,307,785
<p>Things I understand:</p> <p>Shannon entropy- </p> <ul> <li>is the expected amount of information in an event from that distribution. </li> <li>In a game of 20 questions to guess an item, it is the lower bound on the number of questions one could ask.</li> </ul> <p>Doubt:</p> <p>It gives the lower bound on the number of bits needed on average to encode symbols drawn from a distribution.</p> <p>I don't understand why it says on <strong>average</strong>. Isn't it just a lower bound? Also, please elaborate on the lower bound if possible.</p>
leonbloy
312
<blockquote> <p>In a game of 20 questions to guess an item, it is the lower bound on the number of questions one could ask.</p> </blockquote> <p>That's incorrect. One could be lucky and get the right answer on the first question. The entropy gives a lower bound on the <em>average</em> number of questions one needs to ask.</p> <p>For example, consider a distribution with three values $(A,B,C)$ with probabilities $(1/2, 1/4, 1/4)$. In this case, the optimal sequence of questions is to first ask "Is it A?" If no, ask "Is it B?" To discover the element we need sometimes 1 question, sometimes 2 questions. On average, we need $3/2$ questions. This (you can check) coincides with the entropy. In this case, the bound is achieved. </p>
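The $(1/2, 1/4, 1/4)$ example can be verified in a few lines (my addition):

```python
import math

p = {"A": 0.5, "B": 0.25, "C": 0.25}

# Shannon entropy in bits.
entropy = -sum(q * math.log2(q) for q in p.values())

# Expected number of yes/no questions under the strategy in the answer:
# ask "Is it A?" first (1 question), otherwise ask "Is it B?" (2 questions).
questions = {"A": 1, "B": 2, "C": 2}
avg_questions = sum(p[s] * questions[s] for s in p)
```

Both quantities come out to $3/2$, so the entropy bound is met with equality here.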
2,961,971
<blockquote> <p><span class="math-container">$$\sum_{n=1}^\infty\frac{(2n)!}{2^{2n}(n!)^2}$$</span></p> </blockquote> <p>Can I have a hint for whether this series converges or diverges using the comparison tests (direct and limit) or the integral test or the ratio test?</p> <p>I tried using the ratio test but it failed because I got 1 as the ratio. The integral test seems impossible to use here.</p>
Nosrati
108,128
<p>Every term of the series is <span class="math-container">$$\frac{(2n)!}{2^{2n}(n!)^2}=\dfrac{\Gamma(2n+1)}{2^{2n}\Gamma^2(n+1)}=\dfrac{\Gamma(n+\frac12)}{n\Gamma(n)\sqrt{\pi}}\ge\dfrac{1}{2n}$$</span> by the formula <span class="math-container">$$\dfrac{\Gamma(n)}{\Gamma(2n)}=\dfrac{\sqrt{\pi}}{2^{2n-1}\Gamma(n+\frac12)}$$</span> with equality only at $n=1$, so the series diverges by comparison with the harmonic series.</p>
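A numeric check of the Gamma-function identity and the comparison bound (my addition; the range of $n$ is arbitrary):

```python
import math

def term(n):
    # (2n)! / (2^(2n) (n!)^2)
    return math.factorial(2 * n) / (4 ** n * math.factorial(n) ** 2)

def gamma_form(n):
    # Gamma(n + 1/2) / (n Gamma(n) sqrt(pi))
    return math.gamma(n + 0.5) / (n * math.gamma(n) * math.sqrt(math.pi))

ns = range(1, 30)
max_rel_err = max(abs(term(n) - gamma_form(n)) / term(n) for n in ns)
bound_ok = all(term(n) >= 1.0 / (2 * n) - 1e-12 for n in ns)
```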
2,961,971
<blockquote> <p><span class="math-container">$$\sum_{n=1}^\infty\frac{(2n)!}{2^{2n}(n!)^2}$$</span></p> </blockquote> <p>Can I have a hint for whether this series converges or diverges using the comparison tests (direct and limit) or the integral test or the ratio test?</p> <p>I tried using the ratio test but it failed because I got 1 as the ratio. The integral test seems impossible to use here.</p>
user
505,767
<p><strong>HINT</strong></p> <p>We have that</p> <p><span class="math-container">$$\frac{(2n)!}{2^{2n}(n!)^2} =\frac1{4^n}\binom{2n}{n} \sim \frac{1}{\sqrt{2\pi n}}$$</span></p> <p>indeed recall that by <a href="https://en.wikipedia.org/wiki/Binomial_coefficient#Bounds_and_asymptotic_formulas" rel="nofollow noreferrer">bounds and asymptotic formulas</a> for the binomial coefficient</p> <p><span class="math-container">$$\binom{2n}{n} \sim \frac{4^n}{\sqrt{2\pi n}}$$</span></p> <p>then we can refer to the limit comparison test.</p> <p>Refer also to the related</p> <ul> <li><a href="https://math.stackexchange.com/q/320846/505767">Show that <span class="math-container">$\lim_{n\to\infty}\sqrt[n]{\binom{2n}{n}} = 4$</span></a></li> </ul>
16,795
<p>Consider a finite simple graph $G$ with $n$ vertices, presented in two different but equivalent ways:</p> <ol> <li>as a logical formula $\Phi= \bigwedge_{i,j\in[n]} \neg_{ij}\ Rx_ix_j$ with $\neg_{ij} = \neg$ or $ \neg\neg$ </li> <li>as an (unordered) set $\Gamma = \lbrace [n],R \subseteq [n]^2\rbrace$ </li> </ol> <p>In each case the complement $G'$ of $G$ is easily presented and is of course <em>not</em> isomorphic to $G$ (in the usual sense) generally:</p> <ol> <li>$ \Phi' = \bigwedge_ {i,j} \neg \neg_{ij}\ R x_i x_j $ </li> <li>$\Gamma' = \lbrace [n],[n]^2 \setminus R\rbrace$</li> </ol> <p>Let's state for the moment that the presentation as a logical formula is the more "flexible" one: we can easily omit single literals, leaving it open whether $Rx_ix_j$ or not. But this can be mimicked for set presentation by making it from a pair to a triple $\lbrace[n],R,\neg R \subseteq [n]^2 \setminus R\rbrace$. </p> <p>Let's call a presentation <em>complete</em>, if it leaves nothing open, i.e. no omitted literal and $\neg R = [n]^2 \setminus R$, resp.</p> <p>Now, let a graph be given in complete set presentation $\lbrace[n],R,\neg R = [n]^2 \setminus R\rbrace$. Since order in this set should not matter, any sensible definition of "graph isomorphism" should make any graph isomorphic to its complement.</p> <blockquote> <p>Where and how do I run into trouble when I assume - following this line of reasoning, contrary to the usual line of thinking - that every (finite) graph is isomorphic to its complement?</p> </blockquote>
Greg Kuperberg
1,450
<p>It is a strange question, but maybe a useful answer can make it a bit better.</p> <p>Certainly for many purposes a graph will look totally different from its complement. For instance, a graph and its complement have completely different spectra, diameter, perfect matchings, etc. So that side of the question is kind-of lame, I agree.</p> <p>On the other hand, for some purposes a graph is much the same as its complement. One obvious case is when you are interested in the automorphism group of a graph, or in the computational problem of graph isomorphism. Then you might as well think of a graph as a bicoloring of the edges of a complete graph. It then has a natural extra automorphism given by switching the two colors. More generally a colored graph with $n$ colors is equivalent, for graph isomorphism and graph automorphism questions, to a complete graph with $n+1$ colors. This viewpoint is more useful than you might first think, because a natural partial algorithm and preparatory step in the graph isomorphism and automorphism problems is to recolor every vertex by its valence, then color every edge by the colors of its vertices, etc., until the recoloring process stabilizes. Many graphs can be completely identified this way in practice. Recognizing the equivalence between a graph and its complement makes it easier to understand what these algorithms are really doing.</p> <p>Even some specific graphs, such as the Higman-Sims graph, are mainly used for their automorphisms and similar purposes, and it might be better to think of them as colorings of a complete graph.</p> <p>A much deeper example is the <a href="http://en.wikipedia.org/wiki/Perfect_graph">perfect graph theorem</a> of Lovasz. The theorem is that a graph is perfect if only if its complement is perfect. For perfect graphs, taking the graph complement is closely related to the dual or polar polytope of a convex polytope.</p>
16,795
<p>Consider a finite simple graph $G$ with $n$ vertices, presented in two different but equivalent ways:</p> <ol> <li>as a logical formula $\Phi= \bigwedge_{i,j\in[n]} \neg_{ij}\ Rx_ix_j$ with $\neg_{ij} = \neg$ or $ \neg\neg$ </li> <li>as an (unordered) set $\Gamma = \lbrace [n],R \subseteq [n]^2\rbrace$ </li> </ol> <p>In each case the complement $G'$ of $G$ is easily presented and is of course <em>not</em> isomorphic to $G$ (in the usual sense) generally:</p> <ol> <li>$ \Phi' = \bigwedge_ {i,j} \neg \neg_{ij}\ R x_i x_j $ </li> <li>$\Gamma' = \lbrace [n],[n]^2 \setminus R\rbrace$</li> </ol> <p>Let's state for the moment that the presentation as a logical formula is the more "flexible" one: we can easily omit single literals, leaving it open whether $Rx_ix_j$ or not. But this can be mimicked for set presentation by making it from a pair to a triple $\lbrace[n],R,\neg R \subseteq [n]^2 \setminus R\rbrace$. </p> <p>Let's call a presentation <em>complete</em>, if it leaves nothing open, i.e. no omitted literal and $\neg R = [n]^2 \setminus R$, resp.</p> <p>Now, let a graph be given in complete set presentation $\lbrace[n],R,\neg R = [n]^2 \setminus R\rbrace$. Since order in this set should not matter, any sensible definition of "graph isomorphism" should make any graph isomorphic to its complement.</p> <blockquote> <p>Where and how do I run into trouble when I assume - following this line of reasoning, contrary to the usual line of thinking - that every (finite) graph is isomorphic to its complement?</p> </blockquote>
Theo Johnson-Freyd
78
<p>OK, I'll bite.</p> <p>I've never liked the word "graph". Some people use it to mean "set $V$ of 'vertices' and a collection $E$ of size-two subsets of $V$". Some mean "set $V$ and a symmetric $V\times V$ matrix valued in nonnegative integers, i.e. a map $V\times V \to \mathbb N$, telling you how often a vertex is connected to another vertex". Some people mean "(finite) one-dimensional CW complex". I tend to mean "finite set $H$ of 'half edges', a partition $E$ of $H$ into blocks of size $2$, and a partition $V$ of $H$ which is allowed to have blocks of arbitrary size (including empty blocks)". The reason I like the last one is that it's the definition that gets the correct notion of "automorphism" for the purposes of evaluating Feynman-diagrammatic integrals. But it's the wrong one for many other applications. Note that for this definition, as for "one-dimensional CW complexes", it doesn't make sense to talk about the "complement".</p> <p>So, anyway, I'm perfectly happy thinking about the category for which a generic object is a set $V$ along with a partition of $V \choose 2$ (which is the set of subsets of $V$ of size $2$) into precisely two blocks. I know, more or less, what a morphism of such objects is. A map $V \to V'$ almost induces a map ${V \choose 2} \to {V'\choose 2}$ &mdash; the problem is just the pairs that are identified, which I guess I can forget. Then I can ask that the partition of $V \choose 2$ pushes forward along this map to subsets of the two sets in the partition of $V' \choose 2$. Note that if I order the two blocks of the partition, then this category is equivalent to the category of graphs following one of the definitions above; without ordering the blocks of the partition, I think I'm getting your category.</p> <p>But is it a useful category? Does it capture some intuition we (mathematicians collectively) have? To the latter question, the answer is clearly yes: you've suggested an isomorphism based on some mathematical intuition. 
To the former question, Greg K's answer gives some indication. But my feeling is that really, this category does not deserve to be called the category of <em>graphs</em>, because there are so many interesting graph-theoretic questions that mathematicians want to ask that are not isomorphism-invariant in this category.</p>
3,434,242
<p>If I need to get some variable values from a vector in MATLAB, I could do, for instance, </p> <pre><code>x = A(1); y = A(2); z = A(3); </code></pre> <p>or I think I remember I could do something like</p> <blockquote> <p>[x, y, z] = A;</p> </blockquote> <p>However, MATLAB is not recognizing this format. What is the correct syntax? Thanks!!</p>
Thales
382,892
<p>The right way to do it is:</p> <pre><code>x = A(1); y = A(2); z = A(3); </code></pre> <p>This <code>[x, y, z] = A</code> makes no sense because A is not a function. You can use multiple variables on the left-hand side of an assignment only when they are the outputs of a function. If <code>A</code> is a plain variable, then you just can't do it.</p> <p>Edit: @horchler's suggestion was to use a cell array to do it. Perhaps then you can write:</p> <pre><code>A = 1:3; A = num2cell(A); [x,y,z]=A{:} </code></pre> <p>And that will work. First you <a href="https://www.mathworks.com/help/matlab/ref/num2cell.html" rel="nofollow noreferrer">convert your array to a cell</a>, then use the assignment to define the x, y and z variables. It is worth mentioning that this will do the trick, but it is not efficient: a cell array in MATLAB is much slower than a plain array, so assigning individual elements of the array to separate variables directly will run faster.</p>
435,936
<p>Does anyone know when $x^2-dy^2=k$ is solvable in $\mathbb{Z}_n$ with $(n,k)=1$ and $(n,d)=1$? I'm interested in the case $n=p^t$.</p>
Brian M. Scott
12,042
<p>Yes. Let $A$ be the set (still to be constructed), and for $n\in\Bbb Z^+$ let $A_n=\{k\in A:k\le n\}$. Suppose that you’ve constructed $A_n$. For any $\epsilon&gt;0$ choose $m\in\Bbb Z^+$ large enough so that $\frac{n}m&lt;\epsilon$, and let $A_m=A_n$. (In other words, omit from $A$ every integer $k$ such that $n&lt;k\le m$.) Then $\frac{|A_m|}m=\frac{|A_n|}m\le\frac{n}m&lt;\epsilon$. Now choose $r\in\Bbb Z^+$ large enough so that $\frac{m}r&lt;\epsilon$, and let $$A_r=A_m\cup\{k\in\Bbb Z^+:m&lt;k\le r\}\;;$$ then $\frac{|A_r|}r\ge\frac{r-m}r=1-\frac{m}r&gt;1-\epsilon$. By alternating in this fashion, using a sequence of $\epsilon$’s converging to $0$, you can construct a set $A$ such that $\underline{d}(A)=0$ and $\overline{d}(A)=1$.</p>
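A small simulation of the construction (my addition; the particular $\varepsilon$ values and the starting block are arbitrary): alternately withholding and including whole blocks of integers makes the ratio $|A_n|/n$ dip below each $\varepsilon$ and then climb above $1-\varepsilon$, which is exactly what forces $\underline{d}(A)=0$ and $\overline{d}(A)=1$ in the limit.

```python
def build_blocks(eps_list):
    # Start from A_1 = {1}; alternately "starve" (add nothing on (n, m]) and
    # "flood" (add everything on (m, r]), recording (endpoint, |A_endpoint|).
    size, n = 1, 1
    history = []
    for eps in eps_list:
        m = int(n / eps) + 1      # starve: |A_m| / m <= n / m < eps
        history.append((m, size))
        r = int(m / eps) + 1      # flood: |A_r| / r >= 1 - m / r > 1 - eps
        size += r - m
        n = r
        history.append((r, size))
    return history

hist = build_blocks([0.5, 0.1, 0.01])
ratios = [cnt / end for end, cnt in hist]
```

The ratios alternate below $\varepsilon$ and above $1-\varepsilon$ at the recorded endpoints.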
2,032,241
<p>In Euler's (number theory) theorem one line reads: since $d|ai$ and $d|n$ and $\gcd(a,n)=1$ then $d|i$. I've been staring at this for over an hour and I am not convinced why this is true; could anyone explain why? I have tried all sorts of lemmas I've seen before but I honestly just can't see it and I feel I'm going round in circles. Could someone just explain it to me? Not seeing it is making me feel stupid. Here is the full proof for context, with the lines I don't get highlighted. Thanks! (Also hcf=gcd, as I know that confuses some people.)</p> <p><a href="https://i.stack.imgur.com/JEcNA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JEcNA.png" alt="enter image description here"></a></p>
Bill Dubuque
242
<p>Clearer: since $\,a\,$ and $\,i\,$ are coprime to $\,n\,$ so too is their product $\,ai\,$ (by Euclid's Lemma). By the Euclidean algorithm $\ 1 = \gcd(ai,n) = \gcd(\color{#c00}{ai\bmod n},\,n) = \gcd(\color{#c00}{s_i},n).\,$ So $\,s_i\,$ is coprime to $\,n\, $ and $\,0\le s_i &lt; n,\,$ hence $\,s_i\in S,\,$ by the definition of $\,S.$</p>
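A quick numeric illustration of the step (my addition; the values $n=36$, $a=5$ are chosen arbitrarily): each $s_i = ai \bmod n$ is again coprime to $n$, and in fact $i \mapsto s_i$ permutes the units mod $n$.

```python
from math import gcd

n, a = 36, 5                 # gcd(a, n) = 1
units = [i for i in range(n) if gcd(i, n) == 1]

# Each a*i mod n is coprime to n ...
all_coprime = all(gcd(a * i % n, n) == 1 for i in units)
# ... and multiplication by a permutes the set of units mod n.
is_permutation = sorted(a * i % n for i in units) == units
```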
239,653
<p>It is usually said that Birch and Swinnerton-Dyer developed their famous conjecture after studying the growth of the function $$ f_E(x) = \prod_{p \le x}\frac{|E(\mathbb{F}_p)|}{p} $$ as $x$ tends to $+\infty$ for elliptic curves $E$ defined over $\mathbb{Q}$, where the product runs over the primes $p$ at which $E$ has good reduction. Namely, this function should grow on the order of $$ \log(x)^r $$ when $x$ tends to $+\infty$, where $r$ is the (algebraic) rank of $E$.</p> <p><strong>Question 1.</strong> Why is it natural to look at this kind of product?</p> <p>Nowadays, people usually state the BSD conjecture as the equality $$ r = \text{ord}_{s=1}L(E,s)\text{.} $$</p> <p><strong>Question 2.</strong> Are these two statements equivalent?</p>
Joe Silverman
11,926
<p>I believe that the original experimental observation was that the product seems to converge to $\infty$ if $E(\mathbb Q)$ is infinite, and to a finite value if $E(\mathbb Q)$ is finite. Also, I may be mis-remembering, but I thought they looked at $$\prod_p \frac{p}{\#E(\mathbb F_p)},$$ with the limit conjecturally being $0$ if and only if $\#E(\mathbb Q)=\infty$.</p> <p>Here's an intuitive reason to study this quantity. We know from Hasse that $\#E(\mathbb F_p)=p+1-a_p$ with $|a_p|\le2\sqrt p$. So $$\log \left(\prod_p \frac{\#E(\mathbb F_p)}{p} \right)\approx -\sum_p \frac{a_p}{p}.$$ (This is just a heuristic, of course.) Anyway, this quantity diverges to $+\infty$ if the $a_p$ tend to be negative more often than they are positive. So one expects the product to equal $\infty$ if and only if $\#E(\mathbb F_p)$ is biased towards being larger than its "expected" value of $p+1$. On the other hand, if $E(\mathbb Q)=\infty$, then one might hope (guess?) that the image of $E(\mathbb Q)$ in $E(\mathbb F_p)$ tends to make $E(\mathbb F_p)$ somewhat larger than would be expected at random. Ergo, one expects that the product is large if and only if $E(\mathbb Q)$ is large. Then one does experiments to see whether "large" means "infinite".</p>
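The partial products described above are easy to experiment with (my addition; the naive Legendre-symbol point count and the test curve $y^2 = x^3 - x$ are my choices, not part of the original answer):

```python
def count_points(A, B, p):
    # #E(F_p) for E: y^2 = x^3 + A x + B, p an odd prime of good reduction:
    # #E(F_p) = p + 1 + sum_x chi(x^3 + A x + B), chi the Legendre symbol.
    def chi(v):
        v %= p
        if v == 0:
            return 0
        return 1 if pow(v, (p - 1) // 2, p) == 1 else -1
    return p + 1 + sum(chi(x * x * x + A * x + B) for x in range(p))

def primes_up_to(limit):
    sieve = [True] * (limit + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [i for i, ok in enumerate(sieve) if ok]

# E: y^2 = x^3 - x, which has good reduction away from p = 2.
partial_product = 1.0
for p in primes_up_to(500):
    if p > 2:
        partial_product *= p / count_points(-1, 0, p)
```

Swapping in other curves and raising the prime bound lets one watch the contrast the answer describes between curves with finitely and infinitely many rational points.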
24,593
<p>Traditionally, I have always taught evaluating expressions before teaching linear equations. But, I was recently given a remedial class of students that have to cover the bare minimums (and we have until mid-December to finish). Luckily, I have great flexibility with what I can do to the syllabus, so for the first time ever, I have completely cut out evaluating expressions since they won't even be tested on this on the final exam.</p> <p>My question is more if anyone else has done this, or thinks this is not a good way to go. Most of my students in that particular class have ZERO to little formal math background, a lot of them did not even finish high school, and they barely get by with mean, median, mode, rounding, etc. I started equations with them today, and they seemed &quot;fine&quot; for the most part. Of course, I also have spent the past week emphasizing positive and negative integer operations, so they are pretty OK with that so far. The textbook itself does not cover linear equations until after the section on evaluating expressions.</p> <p>EDIT: Upon request for &quot;non-native&quot; English speakers:</p> <p>Evaluating expressions simply means in the US to plug in numbers for the given variable values of the algebraic expression. Thus, for example, an exercise would be:</p> <p>&quot;Evaluate a + b + c if a = 1, b = 2, c = 3.&quot;</p> <p>The expressions can be as simple as that, or very much more complicated/interesting/beautiful. But, you get the picture.</p> <p>Linear equations simply mean basic equations where you solve for an unknown variable.</p> <p>Example: x + 5 = 10. What is x? Or 2x + 20 = 40, what is x? Etc.</p>
Tommi
2,083
<p>Your context is students who have not learned much from the mathematics education they have been exposed to thus far. I guess, but can not be sure, that they have already been exposed to someone showing them algorithms and asking them to repeat. Because they are performing badly now, likely the teachers have tried to take this into account by teaching these things to them very slowly and thoroughly.</p> <p>How has it been working thus far?</p> <p>Given this, it is a tempting idea that you should do something, anything, else. What exactly this something else is seems to vary a bit, but the literature I have been exposed to recently has been all about open questions and questions with low threshold to doing something but with lots of room to expand.</p> <p>Some that might be relevant for your students are (and this is speculation on my part, so a truckful of salt etc.)</p> <ul> <li><a href="https://en.wikipedia.org/wiki/Figurate_number" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Figurate_number</a> : have a pattern, ask them to continue it, figure out a rule, evaluate to check. This should practice evaluation of expressions, though mostly with positive integers. But then you could ask if we can make sense of the two-and-halft figure or zeroth figure or minus tenth figure; what might such mean?</li> <li>how complex an equation can you make that has a given answer?</li> <li>how much do different telephone contracts cost in terms of fixed costs and use of data, calls etc.? Which ones are cheapest for particular amounts of use? At which point do the costs of different contracts break even? Maybe use the contracts the students have. Or take something more relevant for their everyday life. Obviously, make sure to explicitly draw the connection to the more abstract mathematics, or the skills won't translate outside the context. 
Note that this problem trains both of the skills you mention.</li> </ul> <p>This perspective would suggest approaching the issue more in terms of interesting and open questions and less from the perspective of a staircase of simple skills.</p>
801,668
<p>Sources: <a href="https://rads.stackoverflow.com/amzn/click/0495011665" rel="nofollow noreferrer"><em>Calculus: Early Transcendentals</em> (6 edn 2007)</a>. p. 206, Section 3.4. Question 95.<br> <a href="https://rads.stackoverflow.com/amzn/click/0470383348" rel="nofollow noreferrer"><em>Elementary Differential Equations</em></a> (9 edn 2008). p. 165, Section 3.3, Question 34.</p> <blockquote> <p><strong>Quandary:</strong> If $y=f(x)$, and $x = u(t)$ is a new independent variable, where $f$ and $u$ are twice differentiable functions, what's $\dfrac{d^{2}y}{dt^{2}} $?</p> <p><strong>Answer:</strong> By the chain rule, $\dfrac{dy}{dt} = \dfrac{dy}{dx} \dfrac{dx}{dt} $. Then by the product rule, </p> <p>$\begin{align} \dfrac{d^{2}y}{dt^{2}} &amp; \stackrel{\mathrm{defn}}{=}\dfrac{d}{dt} \dfrac{dy}{dt} \\ &amp; = \color{forestgreen}{ \dfrac{d}{dt} \dfrac{dy}{dx} } &amp; \dfrac{dx}{dt} &amp; +\frac{dy}{dx} \quad \dfrac{d}{dt} \dfrac{dx}{dt} \\ &amp; = \color{red}{\frac{d^{2}y}{dx^{2}}(\frac{dx}{dt})} &amp; \dfrac{dx}{dt} &amp; + \frac{dy}{dx} \quad \frac{d^{2}x}{dt^{2}} \\ &amp; = \frac{d^{2}y}{dx^{2}}(\frac{dx}{dt})^{2} &amp; &amp;+\frac{dy}{dx} \quad \frac{d^{2}x}{dt^{2}} \end{align} $</p> </blockquote> <p>Why $\color{forestgreen}{ \dfrac{d}{dt} \dfrac{dy}{dx} } = \color{red}{\dfrac{d^{2}y}{dx^{2}}(\dfrac{dx}{dt})} $? Please explain informally and intuitively.</p>
Nick Peterson
81,839
<p>Think of $\frac{dy}{dx}$ as a function of $x$; say it is $f(x)$. Then $x$ is also a function of $t$. So, by the <em>exact same</em> chain rule argument that you gave, $$ \frac{d}{dt}\left[\frac{dy}{dx}\right]=\frac{d}{dt}[f(x)]=\frac{df}{dx}\cdot\frac{dx}{dt}. $$ But $$ \frac{df}{dx}=\frac{d}{dx}\left[\frac{dy}{dx}\right]. $$</p>
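<p>To make this concrete, here is a small numerical check with $f(x)=x^3$ and $x=\sin t$ (a pair chosen purely for illustration, not taken from the question): the formula $\dfrac{d^{2}y}{dt^{2}}=\dfrac{d^{2}y}{dx^{2}}\left(\dfrac{dx}{dt}\right)^{2}+\dfrac{dy}{dx}\dfrac{d^{2}x}{dt^{2}}$ agrees with a finite-difference second derivative of $y(t)=\sin^3 t$.</p>

```python
import math

def y(t):
    # y = f(x) with f(x) = x**3 and x = sin(t), so y(t) = sin(t)**3
    return math.sin(t) ** 3

def d2y_formula(t):
    # d2y/dt2 = f''(x)*(dx/dt)**2 + f'(x)*d2x/dt2
    # here f'(x) = 3x**2, f''(x) = 6x, dx/dt = cos(t), d2x/dt2 = -sin(t)
    x = math.sin(t)
    return 6 * x * math.cos(t) ** 2 + 3 * x ** 2 * (-math.sin(t))

def d2y_numeric(t, h=1e-4):
    # central second-difference approximation of d2y/dt2
    return (y(t + h) - 2 * y(t) + y(t - h)) / h ** 2

for t in (0.3, 1.1, 2.5):
    assert abs(d2y_formula(t) - d2y_numeric(t)) < 1e-5
print("chain-rule formula matches finite differences")
```

<p>The agreement is within the finite-difference error, which is what the chain-rule/product-rule derivation predicts.</p>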
3,040,110
<p>What is the range of <span class="math-container">$5|\sin x|+12|\cos x|$</span>?</p> <p>I entered the expression at desmos.com and got the range <span class="math-container">$[5,13]$</span>.</p> <p>Using <span class="math-container">$\sqrt{5^2+12^2} =13$</span>, I am able to get the maximum value but am not able to find the minimum.</p>
arctic tern
296,782
<p>Without loss of generality we may assume <span class="math-container">$0\le \theta\le\pi/2$</span> so <span class="math-container">$\cos \theta$</span> and <span class="math-container">$\sin \theta$</span> are positive.</p> <p>You can solve this with calculus (set the derivative equal to zero and solve for <span class="math-container">$\theta$</span>), but here is a geometric way to think about the problem which avoids calculus.</p> <p>The level curves of the scalar function <span class="math-container">$f(x,y)=5y+12x$</span> are the parallel lines</p> <p><span class="math-container">$$ 5y+12x=C $$</span></p> <p>for various real values <span class="math-container">$C$</span>. One may "parametrize" such lines by using a perpendicular line through the origin, namely the line <span class="math-container">$y=\frac{5}{12}x$</span>. Since the slope is less than <span class="math-container">$1$</span>, the perpendicular line is tilted towards the <span class="math-container">$x$</span>-axis, so the "first" of level curves to intersect the unit circle arc <span class="math-container">$0\le\theta\le\frac{\pi}{2}$</span> does so at the point <span class="math-container">$(0,1)$</span>, which corresponds to an output of <span class="math-container">$f(0,1)=5$</span>. The "last" of the level curves to intersect the arc occurs where the perpendicular line intersects it, so solve <span class="math-container">$x^2+(\frac{5}{12}x)^2=1$</span> to get <span class="math-container">$x=\frac{12}{13}$</span> and <span class="math-container">$y=\frac{5}{13}$</span>, which yields a maximum of <span class="math-container">$f(\frac{12}{13},\frac{5}{13})=13$</span>.</p> <p>Therefore the range is <span class="math-container">$[5,13]$</span>.</p>
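<p>A quick brute-force sample in Python (a numerical sanity check of the endpoints only, not a proof; the sample count is an arbitrary choice) agrees with the range <span class="math-container">$[5,13]$</span>:</p>

```python
import math

# sample g(theta) = 5|sin theta| + 12|cos theta| densely over one full period
ts = [2 * math.pi * k / 100000 for k in range(100001)]
samples = [5 * abs(math.sin(t)) + 12 * abs(math.cos(t)) for t in ts]

lo, hi = min(samples), max(samples)
print(round(lo, 3), round(hi, 3))  # 5.0 13.0
```

<p>The minimum is hit at <span class="math-container">$\theta=\pi/2$</span> and the maximum near <span class="math-container">$\theta=\arctan(5/12)$</span>, matching the geometric argument above.</p>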
2,279,120
<p>How do we prove that for any complex number $z$ the minimum value of $|z|+|z-1|$ is $1$ ? $$ |z|+|z-1|=|z|+|-(z-1)|\geq|z-(z-1)|=|z-z+1|=|1|=1\\\implies|z|+|z-1|\geq1 $$</p> <p>But, when I do as follows $$ |z|+|z-1|\geq|z+z-1|=|2z-1|\geq2|z|-|1|\geq-|1|=-1 $$ Since LHS can never be less than 0, $|z|+|z-1|\geq0$</p> <p>Why do I seem to get different result compared to the first method ?</p> <p>ie. 1st method $\implies (|z|+|z-1|)_{min}=1\\$</p> <p>2nd method$\implies (|z|+|z-1|)_{min}=0\\$</p> <p>What is going wrong in the second approach ?</p>
egreg
62,967
<p>The triangle inequality tells you that $$ |z|+|z-1|=|z|+|1-z|\ge|z+1-z|=1 $$ which is what you did. Taking into account that for $z=1/2$ you have equality, the minimum is $1$.</p> <p>The other approach simply gives you a different lower bound, which is not very informative: $$ |z|+|z-1|\ge|z+z-1|=|2z-1|\ge0 $$ is already known.</p>
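<p>For anyone who wants to see this numerically, a crude grid search (a sketch only; the grid bounds and spacing are arbitrary choices) finds the minimum $1$, attained on the segment $[0,1]$, e.g. at $z=1/2$:</p>

```python
# brute-force grid search for the minimum of |z| + |z - 1|
# over the square [-2, 2] x [-2, 2] in steps of 0.01
best = float("inf")
for a in range(-200, 201):
    for b in range(-200, 201):
        z = complex(a / 100, b / 100)
        best = min(best, abs(z) + abs(z - 1))
print(round(best, 9))  # 1.0
```
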
25,485
<p>Not sure if this is more appropriate for here or for Math.SE, but here goes:</p> <p>How does one who is self-studying mathematics determine if a textbook is too hard for you?</p> <p>Math is hard in general, but when does a textbook cross that line from being challenging to being nearly intractable?</p> <p>Sometimes I can't tell if I'm just being challenged when I have to re-read one paragraph ten times to understand what the author is saying (even if I understand all the components of their statement individually), or if the book is simply not at the right level for my current background.</p>
Rusi
17,672
<p>Lovely question; I await better answers than mine; meanwhile here's my provisional one(s)...</p> <h1>tl;dr</h1> <p>(a) tempo (b) flow (c) practice-to-success</p> <p>Note 1: These are related but different enough to merit separate discussions.<br /> Note 2: The commonality and differences are clearer in music/arts than in math so I'll use that as a running example.<br /> [Yeah when I was in school I dangled for quite a while choosing between STEM and music]</p> <h1>Longer Version</h1> <h2>Tempo</h2> <p>Think of a normal person walking, an old disabled person, an infant just beginning to try and an Olympic champion at their peak. Each has vastly different details of movement, but that's too reductionistic. Each has a very different <em>tempo</em>. You need to have a sense of the perimeter of your tempo and not fall wildly outside.</p> <p>Note at the start this almost always will require some downscaling. Even the math prodigy will not be able to read math with the speed of novels, newspapers etc. So at the start you set the limits and then you work within that, pushing against (ultimately yourself) a little at a time. Little is important as I show below!!</p> <h2>Flow</h2> <p>Compare <a href="https://www.youtube.com/watch?v=Qt45HCsDljk" rel="nofollow noreferrer">Dinnerstein</a> with <a href="https://www.youtube.com/watch?v=HlvNKc5pYrk" rel="nofollow noreferrer">Sokolov</a> playing <em>the same</em> Bach. Do they sound the same??</p> <p>I'm not going to say which I prefer, still less which you should prefer. The reason I'm giving two very different renderings is to show that hi-tempo <em>can</em> be enjoyable but is not <em>necessarily</em> better. One can tarry along enjoying -- so to say -- the flowers by the wayside.</p> <p>When you're unable to digest a book/author X and resonate with Y, it can mean X is hard and Y is easy. But it can also mean quite simply that Y suits you and X doesn't.
Respect your own innate preferences.</p> <p>One of the biggest figures of math-in-Computer-Science, Dijkstra, towards the latter part of his career was given a festschrift that summed up his work. The title is <a href="https://link.springer.com/book/10.1007/978-1-4612-4476-9" rel="nofollow noreferrer">Beauty is our Business</a>.</p> <p>So that's flow, which is kin to tempo but different.</p> <p>But this all applies to 'the great'. What about all of us who are far from there? In music as in math... it's practice.</p> <p>However there's helpful, not so helpful, useless and outright damaging practice. The physical aspects which apply in music don't carry over much beyond specific instruments and genres. The psychological aspects are common to both music and math. I'll try and transmit a very crucial lesson I received from a music professor. And his lesson was:</p> <h2>Practice to Success not to Failure</h2> <p>[Firstly note how he adroitly changes the cliche <em>Practice to perfection</em>]<br /> He said to us: <em>When you walk the halls of a music school you'll often hear this:</em> Then he sat at the piano and demonstrated:</p> <p>He started playing the well known <a href="https://www.youtube.com/watch?v=aeEmGvm7kDk" rel="nofollow noreferrer">Mozart</a>.<br /> At first it was fine.<br /> Then he started making mistakes.<br /> Then he started showing more and more frustration.<br /> And getting worse and worse.<br /> Finally he slammed the keyboard and got up.</p> <p><sup>Of course a description does not work anything like a real demo; the closest I can think of are the pantomimes of <a href="https://www.youtube.com/watch?v=3fEy9g2T1l8" rel="nofollow noreferrer">Victor Borge</a> but this was more literal.</sup></p> <p>What he said after this was memorable and (I believe) applies to music, math, or any field where there is significant difficulty to be overcome. He said:</p> <blockquote> <p>I believe this actually damages the nervous system.
When you practice, stop when you're doing well, not when you are doing badly. <strong>Practice to success not to failure.</strong></p> </blockquote> <p>I believe this applies very much to math. A good dose of math — whether reading, proving, problem solving, whatever — should leave a residue like a delicious coffee after a good sleep. <em>It should refresh and invigorate.</em> If it's leaving a residue of frustration you're doing it wrong.</p>
4,631,463
<p>I came across the following limit:<br /> <span class="math-container">$$\lim_{x\to+\infty}\frac{1}{x}\int_0^x|\sin t|\mathrm{d}t=\frac{2}{\pi}$$</span> I wonder how to prove it.</p> <p>Using Mathematica I got the following result: <a href="https://i.stack.imgur.com/CVWgR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CVWgR.png" alt="enter image description here" /></a> Could you suggest some ideas for how to prove this? Any hints will be appreciated.</p>
Prem
464,087
<p>Let <span class="math-container">$x=A+\pi C$</span> where <span class="math-container">$A$</span> is between <span class="math-container">$0$</span> &amp; <span class="math-container">$\pi$</span>, with Integer <span class="math-container">$C$</span>.</p> <p><span class="math-container">$D(A,C) = \frac{1}{A+\pi C}\int_0^{A+\pi C}|\sin t|\mathrm{d}t$</span></p> <p><span class="math-container">$D(A,C) = \frac{1}{A+\pi C}\int_0^{\pi C}|\sin t|\mathrm{d}t + \frac{1}{A+\pi C}\int_{\pi C}^{A+\pi C}|\sin t|\mathrm{d}t$</span></p> <p><span class="math-container">$D(A,C) = \frac{1}{A+\pi C} [[ \int_0^{\pi C}|\sin t|\mathrm{d}t ]] + \frac{1}{A+\pi C}\int_0^{A}|\sin t|\mathrm{d}t$</span></p> <p><span class="math-container">$D(A,C) = \frac{1}{A+\pi C} [[ \int_0^{\pi}|\sin t|\mathrm{d}t + \int_{\pi}^{2\pi}|\sin t|\mathrm{d}t + \int_{2\pi}^{3\pi}|\sin t|\mathrm{d}t +\cdots \int_{\pi (C-1)}^{\pi (C)}|\sin t|\mathrm{d}t ]] + \frac{1}{A+\pi C}\int_0^{A}|\sin t|\mathrm{d}t$</span></p> <p>Pictorially:<br /> <a href="https://i.stack.imgur.com/7aJo2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7aJo2.png" alt="|SIN|" /></a></p> <p>Here <span class="math-container">$x$</span> is given in terms of <span class="math-container">$A$</span> &amp; <span class="math-container">$C$</span>.</p> <p>Each Purple Area is the Positive Part of a Cycle of the <span class="math-container">$\sin$</span> Curve having <span class="math-container">$Area=2$</span>.<br /> Each Green Area is the Negative Part of a Cycle of the <span class="math-container">$\sin$</span> Curve having <span class="math-container">$Area=2$</span>.<br /> The last Gray Area is the Partial Cycle having <span class="math-container">$Area$</span> between <span class="math-container">$0$</span> &amp; <span class="math-container">$2$</span>.</p> <p><span class="math-container">$D(A,C) = \frac{1}{A+\pi C} [[ 2 + 2 + 2 +\cdots 2 ]] + \frac{1}{A+\pi C}\int_0^{A}|\sin t|\mathrm{d}t$</span></p> <p><span class="math-container">$D(A,C) = \frac{1}{A+\pi C} [[ 2C ]] + \frac{1}{A+\pi C}\int_0^{A}|\sin t|\mathrm{d}t$</span></p> <p><span class="math-container">$D(A,C) = \frac{2C}{A+\pi C} + \frac{k}{A+\pi C}$</span> where <span class="math-container">$k$</span> is a number (depending on <span class="math-container">$A$</span>) between <span class="math-container">$0$</span> &amp; <span class="math-container">$2$</span></p> <p><span class="math-container">$\large{\displaystyle\lim_{x\to+\infty}\frac{1}{x}\int_0^x|\sin t|\mathrm{d}t = \lim_{C\to+\infty}D(A,C) = \frac{2}{\pi}}$</span></p> <p>The Limit will not change with <span class="math-container">$A$</span> or <span class="math-container">$k$</span> because the Gray Area is negligible when <span class="math-container">$C$</span> is very large.</p>
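<p>The same conclusion can be checked numerically with a midpoint Riemann sum (the step counts below are arbitrary choices); the running average approaches <span class="math-container">$\frac{2}{\pi}\approx 0.6366$</span> as <span class="math-container">$x$</span> grows:</p>

```python
import math

def average_abs_sin(x, n=200000):
    # midpoint Riemann sum for (1/x) * integral_0^x |sin t| dt
    h = x / n
    return sum(abs(math.sin((k + 0.5) * h)) for k in range(n)) * h / x

for x in (10.0, 100.0, 1000.0):
    print(x, average_abs_sin(x))
print("2/pi =", round(2 / math.pi, 4))  # 0.6366
```

<p>The deviation from <span class="math-container">$2/\pi$</span> shrinks like <span class="math-container">$1/x$</span>, matching the bound on the leftover gray area above.</p>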
254,126
<p>If 0 &lt; a &lt; b, where a, b $\in\mathbb{R}$, determine $\lim \bigg(\dfrac{a^{n+1} + b^{n+1}}{a^{n} + b^{n}}\bigg)$</p> <p>The answer (from the back of the text) is $\lim \bigg(\dfrac{a^{n+1} + b^{n+1}}{a^{n} + b^{n}}\bigg) = b$ but I have no idea how to get there. The course is Real Analysis 1, so it's a course on proofs. This chapter is on limit theorems for sequences and series. The squeeze theorem might be helpful. </p> <p>I can prove that $\lim \bigg(\dfrac{a^{n+1} + b^{n+1}}{a^{n} + b^{n}}\bigg) \le b$ but I can't find a way to prove that it is also at least $b$.</p> <p>Thank you!</p>
AnonymousCoward
565
<p>Let's pursue this induction: Suppose that in what follows, at each step, $M_{d-1}\cap M_1$ is $\mathbb{R}^2$ less $d$ distinct points. You have a short exact sequence: </p> <p>$$ 0 \rightarrow \Omega^*(M_1\cup M_{d-1}) \rightarrow \Omega^*(M_1)\oplus\Omega^*(M_{d-1}) \rightarrow \Omega^*(M_d) \rightarrow 0$$</p> <p>When $d = 2$, we have the induced long exact sequence </p> <p>$$ 0 \rightarrow H^0(\mathbb{R}^2)=\mathbb{R} \rightarrow H^0(M_1)\oplus H^0(M_1) = \mathbb{R}^2 \rightarrow H^0(M_2) $$</p> <p>$$ \rightarrow H^1(\mathbb{R}^2)=0 \rightarrow H^1(M_1)\oplus H^1(M_1) = \mathbb{R}^2 \rightarrow H^1(M_2) $$</p> <p>$$ \rightarrow H^2(\mathbb{R}^2)=0\rightarrow H^2(M_1)\oplus H^2(M_1) = 0 \rightarrow H^2(M_2) $$</p> <p>$$ \rightarrow 0 \cdots $$</p> <p>Where I have used the Poincaré lemma in the left column, and that $S^1\sim M_1$ in the middle column. This will show that $h^2(M_d) = 0$ by induction.</p>
4,359,019
<p>Let <span class="math-container">$P \in \mathbb{R}^{N\times N}$</span> be an orthogonal matrix and <span class="math-container">$f: \mathbb{R}^{N \times N} \to \mathbb{R}^{N \times N}$</span> be given by <span class="math-container">$f(M) := P^T M P$</span>. I am reading about random matrix theory and an exercise is to calculate the Jacobian matrix of <span class="math-container">$f$</span> and its Jacobian determinant.</p> <p><strong>Question:</strong> How to calculate the Jacobian of a matrix-valued function? How is it defined? Somehow the notation here confuses me. I suspect that <span class="math-container">$Jf = P^T P$</span> and thus <span class="math-container">$\det(Jf) = 1\cdot 1 = 1$</span>.</p>
QuantumSpace
661,543
<p>Since <span class="math-container">$F$</span> is a dense subspace of <span class="math-container">$E$</span>, it is natural to define <span class="math-container">$$\varphi: E' \to F': f \mapsto f\vert_F.$$</span> Clearly this map is linear. To see that it is isometric, you will have to invoke density of <span class="math-container">$F$</span> in <span class="math-container">$E$</span>. To see the surjectivity, use the Hahn-Banach theorem.</p>
3,095,710
<p>Factor <span class="math-container">$x^8-x$</span> in <span class="math-container">$\Bbb Z[x]$</span> and in <span class="math-container">$\Bbb Z_2[x]$</span></p> <p>Here is what I get: <span class="math-container">$x^8-x=x(x^7-1)=x(x-1)(1+x+x^2+\cdots+x^6)$</span>. What next? Help in both cases, in <span class="math-container">$\Bbb Z[x]$</span> and in <span class="math-container">$\Bbb Z_2[x]$</span>.</p> <p><strong>Edit:</strong> I think <span class="math-container">$(1+x+x^2+\cdots+x^6)$</span> is the cyclotomic polynomial for <span class="math-container">$p=7$</span>, so it is irreducible over <span class="math-container">$\Bbb Z$</span>. Now the problem remains for <span class="math-container">$\Bbb Z_2[x]$</span>.</p>
Ri-Li
152,715
<p>After my edit I finally got the answer; it was under my nose: the polynomial <span class="math-container">$1+\cdots +x^6$</span> does not have zeros at <span class="math-container">$0$</span> or <span class="math-container">$1$</span>, so it cannot be factored with a degree-one factor.</p> <p><strong>Edit:</strong> But as Lubin mentioned in the comment, over <span class="math-container">$\Bbb Z_2$</span> we have <span class="math-container">$1+\cdots +x^6=(1+x+x^3)(1+x^2+x^3)$</span>.</p>
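<p>One can confirm the factorization of <span class="math-container">$1+x+\cdots+x^6$</span> over <span class="math-container">$\Bbb Z_2$</span> by multiplying the two cubics with coefficients mod <span class="math-container">$2$</span> (a small Python sketch; the coefficient-list encoding, index = degree, is just a convenient convention):</p>

```python
def mul_gf2(p, q):
    # multiply two polynomials over Z_2; p and q are coefficient lists, index = degree
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] ^= a & b  # addition mod 2 is XOR
    return out

p = [1, 1, 0, 1]      # 1 + x + x^3
q = [1, 0, 1, 1]      # 1 + x^2 + x^3
print(mul_gf2(p, q))  # [1, 1, 1, 1, 1, 1, 1], i.e. 1 + x + x^2 + ... + x^6
```

<p>The all-ones coefficient vector is exactly <span class="math-container">$1+x+x^2+\cdots+x^6$</span>, as claimed.</p>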