2,917,848
<p>Given a real square matrix <span class="math-container">$A$</span> and a vector <span class="math-container">$v$</span>, Krylov subspaces are given by: <span class="math-container">$$\mathcal K_n(A,v) := \text{span}(v, Av, \cdots A^{n-1} v)$$</span> These spaces are known to help solve numerical linear algebra problems like approximating the eigenvalues <span class="math-container">$\lambda$</span> given by <span class="math-container">$Ax = \lambda x$</span>. Can someone explain the core idea behind this?</p>
user
505,767
<p><strong>HINT</strong></p> <p>Let us use <a href="https://en.wikipedia.org/wiki/Spherical_coordinate_system" rel="nofollow noreferrer"><strong>spherical coordinates</strong></a> with</p> <ul> <li><p>$h=r\sin \phi \cos \theta$</p></li> <li><p>$k=r\sin \phi \sin \theta$</p></li> <li><p>$t=r\cos \phi$</p></li> </ul> <p>to obtain</p> <p>$$\frac{\sqrt{hk(z+t)}}{\sqrt{h^2+k^2+t^2}}=\sqrt {\sin^2 \phi \sin \theta\cos \theta(z+r\cos \phi)}$$</p>
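A quick numeric sanity check of the substitution above (a Python sketch; the sample points are arbitrary choices with $\phi,\theta\in(0,\pi/2)$ and $z>0$ so that both radicands are nonnegative):

```python
import math

def lhs(h, k, t, z):
    # Left side: sqrt(h k (z + t)) / sqrt(h^2 + k^2 + t^2)
    return math.sqrt(h * k * (z + t)) / math.sqrt(h**2 + k**2 + t**2)

def rhs(r, phi, theta, z):
    # Right side after the spherical substitution
    return math.sqrt(math.sin(phi)**2 * math.sin(theta) * math.cos(theta)
                     * (z + r * math.cos(phi)))

# Arbitrary sample points keeping both radicands nonnegative.
for r, phi, theta, z in [(1.0, 0.7, 0.4, 2.0), (3.5, 1.2, 1.0, 0.5)]:
    h = r * math.sin(phi) * math.cos(theta)
    k = r * math.sin(phi) * math.sin(theta)
    t = r * math.cos(phi)
    assert abs(lhs(h, k, t, z) - rhs(r, phi, theta, z)) < 1e-12
print("identity verified on sample points")
```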
213,665
<p><strong>I've tried 3 methods but all of them failed.</strong></p> <p>1st Method</p> <pre><code>Apply[Flatten, {1, {2, {3, 4}, 5}, 6}, {2}] </code></pre> <p>2nd Method</p> <pre><code>Map[Flatten, {1, {2, {3, 4}, 5}, 6}, {2}] </code></pre> <p>3rd Method</p> <pre><code>Flatten[{1, {2, {3, 4}, 5}, 6}, {2}] </code></pre> <p>I want to get <code>{1, {2, 3, 4, 5}, 6}</code></p>
user1066
106
<pre><code>lst[[2]]=Flatten@lst[[2]];lst </code></pre> <blockquote> <p>{1, {2, 3, 4, 5}, 6}</p> </blockquote>
2,507,247
<p>Given a bimatrix game of <span class="math-container">$$\left(\begin{matrix}(0,-1) &amp; (0,0)\\(-90,-6)&amp;(10, -10)\end{matrix}\right)$$</span> <a href="https://i.stack.imgur.com/uY44c.jpg" rel="nofollow noreferrer">Source</a></p> <p>How to find the nash equilibrium strategy for both players?</p>
Gerhard S.
474,939
<p>I assume that the entries $(a,b)$ in the payoff-matrix are interpreted as: $a$ is the row player's payoff and $b$ is the column player's payoff. To find a mixed strategy Nash equilibrium you use the fact that for a mixed strategy to be optimal for a player, the player must be indifferent between the pure strategies over which he or she mixes.</p> <p>Denote by $x$ the probability that the row player chooses the upper row. If the column player chooses left, he or she gets $-x-6(1-x)$, if he or she chooses right, the payoff is $-10(1-x)$. Hence, it must hold that $-6+5x=-10+10x$, which yields $x=4/5$.</p> <p>Denote by $y$ the probability that the column player chooses left. An argument analogous to the one above yields the condition $0=-90y+10(1-y)$, from which we obtain $y=1/10$. </p>
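The two indifference conditions above can be checked mechanically. A minimal Python sketch for a 2×2 bimatrix game (the closed-form expressions are just the indifference equations solved for $x$ and $y$; they assume a fully mixed equilibrium exists):

```python
# Payoff matrices indexed [row][col]: A for the row player, B for the column player.
A = [[0, 0], [-90, 10]]
B = [[-1, 0], [-6, -10]]

# x = P(row plays top), chosen so the column player is indifferent:
#   x*B[0][0] + (1-x)*B[1][0] = x*B[0][1] + (1-x)*B[1][1]
x = (B[1][1] - B[1][0]) / (B[0][0] - B[1][0] - B[0][1] + B[1][1])

# y = P(column plays left), chosen so the row player is indifferent:
#   y*A[0][0] + (1-y)*A[0][1] = y*A[1][0] + (1-y)*A[1][1]
y = (A[1][1] - A[0][1]) / (A[0][0] - A[0][1] - A[1][0] + A[1][1])

print(x, y)  # 0.8 0.1, i.e. x = 4/5 and y = 1/10 as derived above
```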
4,610,394
<p>Clearly, none of the roots are in <span class="math-container">$\mathbb{Q}$</span> so <span class="math-container">$f(x) = x^4 + 1$</span> does not have any linear factors. Thus, the only thing left to check is to show that <span class="math-container">$f(x)$</span> cannot reduce to two quadratic factors.</p> <p>My proposed solution was to state that <span class="math-container">$f(x) = x^4 + 1 = (x^2 + i)(x^2 - i)$</span> but <span class="math-container">$\pm i \not\in \mathbb{Q}$</span> so <span class="math-container">$f(x)$</span> is irreducible.</p> <p>However, I stumbled across this post <a href="https://math.stackexchange.com/questions/1249143/x4-1-reducible-over-mathbbr-is-this-possible">$x^4 + 1$ reducible over $\mathbb{R}$... is this possible?</a> with a comment suggesting that <span class="math-container">$x^4 + 1 = (x^2 + \sqrt{2}x + 1)(x^2 - \sqrt{2}x + 1)$</span> which turns out to be a case that I did not fully consider. It made me realize that <span class="math-container">$\mathbb{Q}[x]$</span> being a UFD only guarantees a unique factorization into irreducible elements of <span class="math-container">$\mathbb{Q}[x]$</span> (and neither <span class="math-container">$x^2 \pm i$</span> nor <span class="math-container">$x^2 \pm \sqrt{2} x + 1$</span> lies in <span class="math-container">$\mathbb{Q}[x]$</span>), so checking a single combination of quadratic products is not sufficient.</p> <p>Therefore, what is the ideal method for checking that <span class="math-container">$x^4 + 1$</span> cannot be reduced to a product of two quadratic polynomials in <span class="math-container">$\mathbb{Q}[x]$</span>? Am I forced to just brute-force check that <span class="math-container">$x^4 + 1 = (x^2 + ax + b)(x^2 + cx + d)$</span> has no rational solutions <span class="math-container">$(a,b,c,d) \in \mathbb{Q}^4$</span>?</p>
KCd
619
<p>There is no such thing as an ideal method. Abandon the dream that there should be some kind of technique that &quot;always works&quot; unless you just want to find a general algorithm that works all the time, like the <a href="https://en.wikipedia.org/wiki/Berlekamp%E2%80%93Zassenhaus_algorithm" rel="nofollow noreferrer">Berlekamp–Zassenhaus algorithm</a> for <span class="math-container">$\mathbf Z[T]$</span>.</p> <p>In practice, for monics in <span class="math-container">$\mathbf Z[T]$</span>, reduction mod <span class="math-container">$p$</span> is the simplest technique to use (with a computer), since if it works at all then it works for infinitely many <span class="math-container">$p$</span>. Of course just one <span class="math-container">$p$</span> is enough, if that method works, but knowing there are either no such <span class="math-container">$p$</span> or a lot of them makes this technique feel more robust.</p> <p>There are reasons involving algebraic number theory that reduction mod <span class="math-container">$p$</span> need not work at all for some polynomials (it's related to the structure of the Galois group of the polynomial). See <a href="https://kconrad.math.uconn.edu/blurbs/ringtheory/reducibleallp.pdf" rel="nofollow noreferrer">here</a> for an example of a quartic in <span class="math-container">$\mathbf Z[T]$</span> that is irreducible over <span class="math-container">$\mathbf Q$</span> but it is reducible mod <span class="math-container">$p$</span> for all <span class="math-container">$p$</span> and it has no Eisenstein translate: <span class="math-container">$T^4 - 10T^2 + 1$</span>. 
The method used there to prove irreducibility over <span class="math-container">$\mathbf Q$</span> is (i) show there's no linear factor by the rational roots test and (ii) show there is no quadratic factor by finding <em>all</em> monic quadratic factorizations in the larger ring <span class="math-container">$\mathbf R[T]$</span> and showing none live in <span class="math-container">$\mathbf Q[T]$</span>: if there were a quadratic irreducible factorization in <span class="math-container">$\mathbf Q[T]$</span> then it would have to be a quadratic factorization in <span class="math-container">$\mathbf R[T]$</span>.</p> <p>A final observation: if a monic in <span class="math-container">$\mathbf Z[T]$</span> is reducible in <span class="math-container">$\mathbf Q[T]$</span> with two factors of degree <span class="math-container">$d$</span> and <span class="math-container">$d'$</span>, then it is a product of monics in <span class="math-container">$\mathbf Z[T]$</span> with degrees <span class="math-container">$d$</span> and <span class="math-container">$d'$</span>. Therefore the &quot;brute force check&quot; that you mention at the end of your post has a serious omission: you only need to consider <span class="math-container">$a, b, c, d \in \mathbf Z$</span>.</p>
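The mod-$p$ behaviour of $T^4-10T^2+1$ mentioned above can be verified by brute force: a monic quartic over $\mathbf F_p$ is reducible iff it has a root or a monic quadratic factor. A Python sketch (the helper is written only for monic quartics, and only for small primes):

```python
def reducible_mod_p(coeffs, p):
    """Reducibility test for a monic quartic over F_p.
    coeffs lists c0 + c1*x + ... + x^4, low degree first."""
    deg = len(coeffs) - 1
    # Linear factor <=> root in F_p.
    if any(sum(c * x**i for i, c in enumerate(coeffs)) % p == 0 for x in range(p)):
        return True
    # Otherwise a quartic is reducible iff some monic quadratic x^2 + a x + b divides it.
    for a in range(p):
        for b in range(p):
            r = list(coeffs)
            for i in range(deg, 1, -1):      # synthetic division by x^2 + a x + b
                q = r[i]
                r[i] = 0
                r[i - 1] = (r[i - 1] - a * q) % p
                r[i - 2] = (r[i - 2] - b * q) % p
            if r[0] == 0 and r[1] == 0:      # zero remainder
                return True
    return False

f = [1, 0, -10, 0, 1]                        # T^4 - 10 T^2 + 1
print([p for p in (2, 3, 5, 7, 11, 13) if reducible_mod_p(f, p)])
# reducible mod every prime tested, despite being irreducible over Q
```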
2,803,398
<p>We know that in the category $\mathbf{Set}$ the inverse limit is the direct product. But I am looking for a specific category in which an inverse limit does not exist. Any comments would be highly appreciated.</p>
hmakholm left over Monica
14,366
<p>You have an <em>injection</em> $(0,1)^2\to(0,1)$ -- but it is not <em>surjective</em>, because there is nothing that maps to, for example, $$ \frac{1}{99} = 0.0101010101010\ldots $$</p> <p>De-interlacing the digits of this would produce $\langle 0,\frac19\rangle$, but that is not in $(0,1)^2$.</p> <hr> <p>However, an injection is really all you need, because it is easy to find an injection in the other direction, and then the Schröder-Bernstein theorem does the work of stitching them together into a single bijection for you.</p>
2,637
<p>Trying to round-trip expressions through JSON, I'm getting unexpected errors for held expressions, and would be grateful for advice or clues. Consider, first, something that works well</p> <pre><code>Export[Environment["USERPROFILE"] &lt;&gt; "\\AppData\\Local\\test.json", {1, 2, 3},"JSON"] </code></pre> <p>and read it back in</p> <pre><code>Import[Environment["USERPROFILE"] &lt;&gt; "\\AppData\\Local\\test.json","JSON"] </code></pre> <p>producing the expected <code>{1, 2, 3}</code>. However, when I try a held expression, such as (and now you can see why I might want to do this):</p> <pre><code>Export[ Environment["USERPROFILE"] &lt;&gt; "\\AppData\\Local\\test.json", HoldComplete[myList = {1, 2, 3}], "JSON"] </code></pre> <p>and we get</p> <pre><code>Export::badval: The element Data contains invalid values. &gt;&gt; </code></pre> <p>I haven't been able to find anything useful on this error message. I suspect it's something to do with <code>HoldComplete</code> and its friends not being real expressions, but rather some kinds of special syntax in the front end or the kernel, but it's a bit surprising since one of the oft-repeated slogans in Mathematica is <em>everything is an expression</em>.</p> <p>Btw, lest we think that the assignment to <code>myList</code> is the problem, the following fails with the same message:</p> <pre><code>Export[ Environment["USERPROFILE"] &lt;&gt; "\\AppData\\Local\\test.json", HoldComplete[{1, 2, 3}], "JSON"] </code></pre>
celtschk
129
<p>The reason is that the JSON format is quite limited. It doesn't support arbitrary expressions. You can e.g. see that by trying the following:</p> <pre><code>Export["test.json",someSymbol,"JSON"] </code></pre> <p>You'll get the same error.</p> <p>If your goal is just to pass the expressions around (i.e. on the other side is another Mathematica instance to interpret them), the simplest solution is to package the expression in a string before sending:</p> <pre><code>Export["test.json",ToString[HoldComplete[myList={1,2,3}],InputForm],"JSON"] </code></pre> <p>and translating that string back into an expression on receiving:</p> <pre><code>ToExpression@Import["test.json","JSON"] (* ==&gt; HoldComplete[myList={1,2,3}] *) </code></pre> <p>Note that I used <code>InputForm</code> in order to enable passing expressions which are normally represented specially. For example, <code>Graph[{1-&gt;2,2-&gt;3}]</code> could not be passed through otherwise.</p> <p>If you want to distinguish between Mathematica expressions and other stuff, you can do the following:</p> <pre><code>Export["test.json",{"MathematicaExpression"-&gt;ToString[HoldComplete[myList={1,2,3}],InputForm]},"JSON"] </code></pre> <p>This will produce the following JSON file:</p> <pre><code>{"MathematicaExpression" : "HoldComplete[myList = {1, 2, 3}]"} </code></pre> <p>You can then read it in with</p> <pre><code>ToExpression@("MathematicaExpression"/.Import["test.json","JSON"]) </code></pre> <p>You can of course first check that the <code>Import</code> resulted in something matching <code>{"MathematicaExpression"-&gt;x_String}</code>. The following will give the Mathematica expression if a Mathematica expression was explicitly passed using that mechanism, and whatever <code>Import</code> resulted in otherwise:</p> <pre><code>Import["test.json","JSON"]/.{{"MathematicaExpression"-&gt;x_String}:&gt;ToExpression[x]} </code></pre>
3,623,432
<p>Say I have two independent normal distributions (both with <span class="math-container">$\mu=0$</span>, <span class="math-container">$\sigma=\sigma$</span>), one for only positive values and one for only negatives, so their pdfs look like:</p> <p><span class="math-container">$p(x, \sigma) = \frac{\sqrt{2}}{\sqrt{\pi} \sigma} \exp(-\frac {x^2}{2 \sigma^2}), \forall x&gt;0$</span> and</p> <p><span class="math-container">$p(y, \sigma) = \frac{\sqrt{2}}{\sqrt{\pi} \sigma} \exp(-\frac{y^2}{2 \sigma^2}), \forall y&lt;0$</span>.</p> <p>If I draw samples from both and then take the average <span class="math-container">$ = \frac{x+y}{2}$</span> I would imagine the expected value of this average to be zero, but I would imagine the variance would be less than the variance of the individual distributions, because averaging a positive and a negative number would "squeeze" the final distribution. </p> <p>I think the correct way to calculate it is using the following integral.</p> <p><span class="math-container">$Var( \frac{x+y}{2}) = \frac{2}{ \pi \sigma^2} \int^{\infty}_{0} \int^{0}_{- \infty} \frac{(x + y)^2}{4}\exp(-\frac {x^2}{2 \sigma^2}) \exp(-\frac {y^2}{2 \sigma^2}) dx dy$</span></p> <p>But I am not sure if I am over-simplifying it. Does that logic seem correct or am I missing something?</p> <p>Thank you.</p> <p>Edited to mention independence and correct formula mistakes.</p>
J.G.
56,861
<p>Your approach is workable (although the <span class="math-container">$\tfrac{1}{2\pi\sigma^2}$</span> should be <span class="math-container">$\tfrac{2}{\pi\sigma^2}$</span>), but there's a much easier way @callculus pointed out. Since <span class="math-container">$X,\,Y$</span> are independent, <span class="math-container">$\operatorname{Var}(aX+bY)=a^2\operatorname{Var}X+b^2\operatorname{Var}Y=(a^2+b^2)\operatorname{Var}X$</span>, so <span class="math-container">$\operatorname{Var}\tfrac{X+Y}{2}=\tfrac12\operatorname{Var}X=\sigma^2(\tfrac12-\tfrac{1}{\pi})$</span>. (I've used a variance from <a href="https://en.wikipedia.org/wiki/Half-normal_distribution" rel="nofollow noreferrer">here</a>.)</p>
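A Monte Carlo check of $\operatorname{Var}\tfrac{X+Y}{2}=\sigma^2(\tfrac12-\tfrac1\pi)$ (Python sketch; the sample size and seed are arbitrary choices):

```python
import math, random

random.seed(0)
sigma = 2.0
N = 200_000

# X = |N(0, sigma)| lives on (0, inf); Y = -|N(0, sigma)| lives on (-inf, 0);
# the two draws are independent.
avgs = [(abs(random.gauss(0, sigma)) - abs(random.gauss(0, sigma))) / 2
        for _ in range(N)]

mean = sum(avgs) / N
var = sum((a - mean) ** 2 for a in avgs) / N

print(var, sigma**2 * (0.5 - 1 / math.pi))  # both close to 0.727
```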
2,767,070
<p>The intuition for $E[g(Y)|Y=y]$ would be that $g(Y)$ would play the role of a constant once $Y$ is fixed to a certain $y$ value. But how to show this more formally ? I can't seem to expand the equation below.</p> <p>$E[g(Y)|Y=y]=\sum_{y} g(y)P[g(y)=y'|Y=y]$</p>
B. Mehta
418,148
<p>It looks like you've got your variables confused. On the left, $y$ is a fixed value but on the right you're summing over it. I believe you should have $$\sum_{y'} g(y')P[Y=y'|Y=y].$$ From there, think about what the probability means in the cases $y = y'$ and $y \neq y'$, and the answer should become clear.</p>
163,672
<p>Is there a characterization of boolean functions $f:\{-1,1\}^n \longrightarrow \{-1,1\}$, so that $\mathbf{Inf_i}[f]=\frac{1} {2}$, for all $1\leq i\leq n$? Is it known how many such functions there are? </p>
Aaron Meyerowitz
8,008
<p>Here are some minor comments and a few counts <strong>LATER</strong> and a (rather weak) lower bound of $$4\binom{2^{n-1}}{2^{n-2}} \approx 2^{2^{n-1}-n/2}$$</p> <p>Another way to say this is: label the $2^n$ vertices of an $n$-cube so that in each of the $n$ directions exactly half (i.e. $2^{n-2}$) of the edges are labelled $0$ and $1$. So for $n=2$ we need three corners the same and one different, giving $8$. </p> <p>This is $(2^{n-1}-1)2^{n+1}$ which is all the possibilities $\sum_{i\in I} a_i \vee \sum_{j\in J} a_j$ mentioned by Bjorn (allowing some, all or none of the variables to be replaced by their negations as well as negating the entire thing).</p> <p>For $n=3$ that would give $48$. There are actually $64$: $16$ each with two or six $1$'s and another $32$ with equally many $1$'s and $0$'s. The $16$ with two $1$'s is any pair not on a common edge (so $12$ on the same face and $4$ more antipodal pairs). The complements give the $16$ with $6$ $1$'s. With $4$ $1$'s the options are a vertex and its $3$ neighbors ($8$ ways) and a path of length $3$ not all on one face (so $1/2\cdot 8 \cdot 3 \cdot 2 \cdot 1=24$ ways)</p> <p>For $n=4$ I come up with $4128.$ This is $2^{12}+2^5$ although I don't know how significant that is. These come out to be $228$ each with $4$ or $12$ $1$'s, $1152$ with $6$ or $10$ and $1368$ with equally many $0$'s and $1$'s.</p> <p>At least $2^{n-2}$ vertices must be labelled $1$. This gives a valid function exactly when no two of those are adjacent. One (rather lazy) way to achieve this is to split the vertices in two in an alternating fashion (so one set would be those whose coordinates have even sum) and then from one class select half (i.e. $2^{n-2}$) of the vertices to get the value $1$. Since we can choose either class AND we can take the complement, we get the count above of $$4\binom{2^{n-1}}{2^{n-2}}.$$ 
Then $\binom{2t}{t} \approx \frac{2^{2t}}{\sqrt{\pi t}}$ gives the asymptotics.</p> <p>Note that </p> <ul> <li><p>The counts $2\binom{2^{n-1}}{2^{n-2}}$ enumerate for $n=2,3,4$ only $4,12$ and $140$ cases with the minimum number of $1$'s whereas we found actual counts of $8,32$ and $228.$</p></li> <li><p>The case of this few (or many) $1$'s is the marginal one. There seem to be many more possibilities with closer to an even split.</p></li> </ul>
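The small counts quoted above ($8$ for $n=2$, $64$ for $n=3$) can be reproduced by exhaustive search over truth tables, using the fact that $\mathbf{Inf}_i[f]=\tfrac12$ exactly when $2^{n-2}$ of the $n$-cube edges in direction $i$ are bichromatic (a Python sketch; $n=4$ also runs, just more slowly):

```python
from itertools import product

def count_balanced_influence(n):
    """Count f: {0,1}^n -> {0,1} with Inf_i[f] = 1/2 for all i, i.e. exactly
    2^(n-2) bichromatic edges of the n-cube in each direction."""
    target = 2 ** (n - 2)
    count = 0
    for bits in product((0, 1), repeat=2 ** n):      # f as a truth table
        if all(sum(1 for x in range(2 ** n)
                   if x < x ^ (1 << i) and bits[x] != bits[x ^ (1 << i)]) == target
               for i in range(n)):
            count += 1
    return count

print(count_balanced_influence(2), count_balanced_influence(3))  # 8 64
```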
162,293
<p>Consider a "curve" defined by a list of points in finite dimension (here, four):</p> <pre><code> pts = Table[{Cos[t], 0, Sin[2 t], Sin[t]}, {t, Subdivide[0, 1, 99]}] </code></pre> <p>I used known functions to generate <code>pts</code> but of course I am not supposed to know the parametric equation of the curve they belong to.</p> <p>What would be a good approach to compute the local curvature? Several possibilities I thought of:</p> <ul> <li>interpolating <code>pts</code> and using <code>ArcCurvature</code> (introduced in <em>Mathematica 10</em>)</li> <li>using $n+1$ consecutive points (where $n$ is the dimension), fit the circle that passes through them: that's the osculating circle, whose radius is the reciprocal of the curvature.</li> </ul> <p>Ideally, the solution should not be too sensitive to noise...</p>
Ulrich Neumann
53,677
<p>The square of the curvature is <a href="https://i.stack.imgur.com/gdrRV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gdrRV.png" alt="enter image description here"></a></p> <p>To solve your problem you only need good local approximations of the two derivatives, which you could get from <code>Interpolation[]</code>. If the points are closely spaced, you could also use common finite-difference schemes. </p> <p>Remark: if the points are closely spaced, you can calculate the circle through 3 consecutive points analytically, for every point in $R^n$...</p>
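For a uniformly sampled curve the two derivatives can be taken by central differences, and the squared curvature computed as $\kappa^2=\frac{|\gamma'|^2|\gamma''|^2-(\gamma'\cdot\gamma'')^2}{|\gamma'|^6}$ (the standard formula for a curve in $\mathbb R^n$, presumably what the linked image shows). A Python sketch, validated on a circle of radius $2$ embedded in $\mathbb R^4$, whose curvature is $1/2$:

```python
import math

def curvature(pts, dt):
    """Discrete curvature at interior samples of a uniformly sampled curve in R^n,
    using central differences and k^2 = (|g'|^2 |g''|^2 - (g'.g'')^2) / |g'|^6."""
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    dim = len(pts[0])
    ks = []
    for i in range(1, len(pts) - 1):
        d1 = [(pts[i + 1][j] - pts[i - 1][j]) / (2 * dt) for j in range(dim)]
        d2 = [(pts[i + 1][j] - 2 * pts[i][j] + pts[i - 1][j]) / dt**2
              for j in range(dim)]
        num = dot(d1, d1) * dot(d2, d2) - dot(d1, d2) ** 2
        ks.append(math.sqrt(max(num, 0.0)) / dot(d1, d1) ** 1.5)
    return ks

# Validation: a circle of radius 2 embedded in R^4 has constant curvature 1/2.
dt = 0.01
pts = [(2 * math.cos(t), 2 * math.sin(t), 1.0, 3.0)
       for t in (i * dt for i in range(200))]
print(curvature(pts, dt)[0])  # ~0.5
```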
244,333
<p>Consider this equation : </p> <p><span class="math-container">$$\sqrt{\left( \frac{dy\cdot u\,dt}{L}\right)^2+(dy)^2}=v\,dt,$$</span></p> <p>where <span class="math-container">$t$</span> varies from <span class="math-container">$0$</span> to <span class="math-container">$T$</span> , and <span class="math-container">$y$</span> varies from <span class="math-container">$0$</span> to <span class="math-container">$L$</span>. Now how to proceed ? </p> <p>This equation arises out of following problem : </p> <p>A cat sitting in a field suddenly sees a standing dog. To save its life, the cat runs away in a straight line with speed <span class="math-container">$u$</span>. Without any delay, the dog starts with running with constant speed <span class="math-container">$v&gt;u$</span> to catch the cat. Initially, <span class="math-container">$v$</span> is perpendicular to <span class="math-container">$u$</span> and <span class="math-container">$L$</span> is the initial separation between the two. If the dog always changes its direction so that it is always heading directly at the cat, find the time the dog takes to catch the cat in terms of <span class="math-container">$v, u$</span> and <span class="math-container">$L$</span>. <hr> See my solution below : </p> <p>Let initially dog be at <span class="math-container">$D$</span> and cat at <span class="math-container">$C$</span> and after time <span class="math-container">$dt$</span> they are at <span class="math-container">$D'$</span> and <span class="math-container">$C'$</span> respectively. 
Dog velocity is always pointing towards the cat.</p> <p>Let <span class="math-container">$DA = dy, \;AD' = dx$</span></p> <p>Let <span class="math-container">$CC'=u\,dt,\;DD' = v\,dt$</span>; as the interval is very small, <span class="math-container">$DD'$</span> can be taken as a straight line.</p> <p>Also, by similar triangles, we have <span class="math-container">$\frac{DA}{DC}= \frac{AD'}{ CC'}$</span>.</p> <p><span class="math-container">$\frac{dy}{L}= \frac{dx}{u\,dt}\\ dx = \frac{dy\cdot u\,dt}{L}$</span></p> <p><span class="math-container">$\sqrt{(dx)^2 + (dy)^2} = DD' = v\,dt \\ \sqrt{\left(\frac{dy\cdot u\,dt}{L}\right)^2 + (dy)^2} = v\,dt $</span></p> <p>Here <span class="math-container">$t$</span> varies from <span class="math-container">$0$</span> to <span class="math-container">$T$</span>, and <span class="math-container">$y$</span> varies from <span class="math-container">$0$</span> to <span class="math-container">$L$</span>. Now how to proceed?<img src="https://i.stack.imgur.com/Ji3Fc.jpg" alt="enter image description here"></p>
Egor Skriptunoff
50,643
<p>Let &nbsp; $\large t$ = time, &nbsp; $\large\phi$ = angle between velocities &nbsp; and &nbsp; $\large z$ = distance.<br> The system would be as follows:<br> $$\large\frac{dz}{dt}=u\cdot\cos\phi-v$$ $$\large\frac{d\phi}{dt}=-\frac{u\cdot\sin\phi}{z}$$ It can be easily proved that the following expression is an invariant of the system:<br> $$\large Inv(t, z,\phi)=t+\frac{z\cdot(v+u\cdot\cos\phi)}{v^2-u^2}$$ Thus, $$\large Inv(t_{final},0,\phi_{final})=Inv(0,L,\frac{\pi}{2})$$ which leads us to answer $$\large t_{final}=L\cdot\frac{v}{v^2-u^2}$$</p>
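The closed form $t_{final}=Lv/(v^2-u^2)$ can be checked against a direct simulation of the pursuit (a Python sketch using Euler steps; the test values $u=3$, $v=5$, $L=8$ are arbitrary, and for them the formula gives $2.5$):

```python
import math

def pursuit_time(u, v, L, dt=1e-5):
    """Euler simulation: the dog (speed v) always heads at the cat (speed u),
    starting a distance L away, with the initial velocities perpendicular."""
    cat = [0.0, L]      # cat runs along +x
    dog = [0.0, 0.0]
    t = 0.0
    while True:
        dx, dy = cat[0] - dog[0], cat[1] - dog[1]
        d = math.hypot(dx, dy)
        if d < v * dt:                  # caught within one step
            return t + d / v
        dog[0] += v * dt * dx / d
        dog[1] += v * dt * dy / d
        cat[0] += u * dt
        t += dt

u, v, L = 3.0, 5.0, 8.0
t_sim = pursuit_time(u, v, L)
print(t_sim, L * v / (v**2 - u**2))  # both close to 2.5
```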
3,327,435
<p>I have no clue for the following problem: </p> <blockquote> <p>Let <span class="math-container">$G$</span> be a finite group, <span class="math-container">$p$</span> a prime number, <span class="math-container">$S$</span> a Sylow <span class="math-container">$p$</span>-subgroup of <span class="math-container">$G$</span>. Let <span class="math-container">$N$</span> be the normalizer of <span class="math-container">$S$</span> inside <span class="math-container">$G$</span>. Let <span class="math-container">$X, Y$</span> be two subsets of <span class="math-container">$Z(S)$</span> (the center of <span class="math-container">$S$</span>) such that <span class="math-container">$\exists g \in G, gXg^{-1}= Y$</span>. Then we need to show that <span class="math-container">$\exists n \in N$</span> such that <span class="math-container">$gxg^{-1} = nxn^{-1}, \forall x \in X$</span>. </p> </blockquote> <p>So I guess first I can assume <span class="math-container">$X, Y$</span> to be subgroups by taking the smallest subgroup containing them. Then I have no clue. </p>
Arturo Magidin
742
<p>Because <span class="math-container">$X$</span> is contained in <span class="math-container">$Z(S)$</span>, it follows that <span class="math-container">$N_G(X)$</span> contains <span class="math-container">$S$</span>. That means that <span class="math-container">$N_G(Y) = N_G({}^gX)$</span> must contain <span class="math-container">${}^gS$</span>. But it also contains <span class="math-container">$S$</span>, since <span class="math-container">$Y$</span> is central in <span class="math-container">$S$</span>.</p> <p>Now, notice that both <span class="math-container">$S$</span> and <span class="math-container">${}^gS$</span> are Sylow <span class="math-container">$p$</span>-subgroups of <span class="math-container">$N_G(Y)$</span>. Can you take it from there?</p>
2,156,357
<p>If $H$ and $K$ are nonabelian simple groups, prove that:</p> <blockquote> <p>$H$ $\times$ $K$ has exactly four distinct normal subgroups. </p> </blockquote> <p>Please help me prove this.</p>
Nicky Hekster
9,605
<p>Hint: put $G = H \times K$, $M=H \times \{1\}$, and $N=\{1\} \times K$. Then $M,N \unlhd G$, $G=MN$, and $M \cap N=\{(1,1)\}$. Now if $T \unlhd G$, what can you say about $T \cap M$ and $T \cap N$?</p>
787,894
<p>Find the values of $x,y$ for which $x^2 + y^2$ takes the minimum value, where $(x+5)^2 +(y-12)^2 =14$.</p> <p>I tried Cauchy–Schwarz and AM–GM, but was unable to make progress.</p>
sirfoga
83,083
<p><strong>Hint</strong>: take a look at the picture below, and all the problem will vanish...</p> <p><img src="https://i.stack.imgur.com/tvQBv.png" alt="enter image description here"></p> <p>In fact the picture shows the circle of equation $(x+5)^2 +(y-12)^2 =14$, and the line passing through its centre and the origin. The question asks for the minimum length of the segment whose extremities are the origin and a point on the circumference... Thus you should minimize the distance of a point on the circumference from the origin, and this is done by drawing the line through the origin and the centre of the circle (trivial proof). So you get $$A\equiv \left(5\left(\frac{\sqrt{14}}{13}-1\right);12\left(1-\frac{\sqrt{14}}{13}\right)\right)\Rightarrow \\ \text{length of } \overline{OA} = \sqrt{\left(5\left(\frac{\sqrt{14}}{13}-1\right)\right)^2+\left(12\left(1-\frac{\sqrt{14}}{13}\right)\right)^2}=\\ \sqrt{(13-\sqrt{14})^2}=13-\sqrt{14}$$ and finally $$\text{minimum of } x^2+y^2=\overline{OA}^2=(13-\sqrt{14})^2=183-26\sqrt{14}$$</p>
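A numeric check of the minimum $183-26\sqrt{14}\approx 85.72$ by sampling the circle (a Python sketch; the grid resolution is an arbitrary choice):

```python
import math

# Parametrize the circle (x+5)^2 + (y-12)^2 = 14 and minimize x^2 + y^2 on a grid.
r = math.sqrt(14)
best = min((-5 + r * math.cos(t))**2 + (12 + r * math.sin(t))**2
           for t in (2 * math.pi * k / 200_000 for k in range(200_000)))

print(best, 183 - 26 * math.sqrt(14))  # both close to 85.72
```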
3,844,235
<p>Suppose a matrix <span class="math-container">$A \in \text{Mat}_{2\times 2}(\mathbb{F}_5)$</span> has characteristic polynomial <span class="math-container">$x^2 - x +1$</span>. Is <span class="math-container">$A$</span> diagonalizable over <span class="math-container">$\mathbb{F}_5$</span>?</p> <p>Normally, I would just check to see if the geometric multiplicity and algebraic multiplicity are equal for each eigenspace, but over <span class="math-container">$\mathbb{F}_5$</span>, I am not even sure what the eigenvalues are!</p>
markvs
454,915
<p>The char polynomial has no roots in <span class="math-container">$\Bbb{F}_5$</span>, so the matrix is not diagonalizable over that field. It is diagonalizable over <span class="math-container">$\Bbb{F}_{25}$</span> because the char polynomial has two different roots in that field.</p>
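Both claims are easy to verify computationally: $x^2-x+1$ has no root in $\Bbb F_5$, but two distinct roots in $\Bbb F_{25}$, modelled here as $\Bbb F_5[t]/(t^2-2)$ (a Python sketch; $2$ is a non-square mod $5$, so the quotient is a field):

```python
# Model F_25 as F_5[t]/(t^2 - 2); 2 is a non-square mod 5, so this is a field.
def mul(x, y):
    # (a + b t)(c + d t) = (ac + 2bd) + (ad + bc) t, coefficients mod 5
    a, b = x
    c, d = y
    return ((a * c + 2 * b * d) % 5, (a * d + b * c) % 5)

def f25(x):
    # x^2 - x + 1 evaluated in F_25
    x2 = mul(x, x)
    return ((x2[0] - x[0] + 1) % 5, (x2[1] - x[1]) % 5)

roots_f5 = [a for a in range(5) if (a * a - a + 1) % 5 == 0]
roots_f25 = [(a, b) for a in range(5) for b in range(5) if f25((a, b)) == (0, 0)]

print(roots_f5, roots_f25)  # no roots in F_5; two distinct roots in F_25
```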
954,130
<p>I have to prove that the function $f(n)=3n^2-n+4$ is $O(n^2)$. So I use the definition of big oh:</p> <blockquote> <p>$f(n)$ is big oh $g(n)$ if there exist an integer $n_0$ and a constant $c&gt;0$ such that for all integers $n\geq n_0$, $f(n)\leq cg(n)$.</p> </blockquote> <p>And it doesn't matter what those constants are. So I will choose $c=1$</p> <p>\begin{align} f(n)&amp;\leq cg(n)\\ 3n^2-n+4&amp;\leq 1*n^2\\ 3n^2-n+4&amp;\leq n^2\\ 0&amp;\leq n^2-3n^2+n-4\\ 0&amp;\leq -2n^2+n-4 \end{align}</p> <p>Now I am having trouble figuring out $n_0$ from here. In the book he simplified the polynomial to its roots and logically determined $n_0$. It looks like this polynomial can't be broken down into a $(a\pm b)(c\pm d)$ form. </p>
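One observation that may help: $c=1$ can never work here, since $3n^2-n+4>n^2$ for all $n\ge 1$; the definition only requires <em>some</em> $c$, and choosing a larger one avoids the root-finding entirely. A Python sketch of the standard bounding trick (this is not the book's method, just a quick check):

```python
f = lambda n: 3 * n * n - n + 4

# c = 1 fails for every n >= 1 ...
assert all(f(n) > 1 * n * n for n in range(1, 10_000))

# ... but c = 7, n0 = 1 works:  3n^2 - n + 4 <= 3n^2 + 4n^2 = 7n^2,
# using -n <= 0 and 4 <= 4n^2 for n >= 1.
assert all(f(n) <= 7 * n * n for n in range(1, 10_000))
print("f(n) = O(n^2) witnessed by c = 7, n0 = 1")
```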
Bridgeburners
166,757
<p>No proof is needed, it's true simply by definition of units. When dealing with units in any physics calculation, we generally respect the following unofficial axioms that define units:</p> <p>1) A unit is treated as a variable in any calculation</p> <p>2) Any number that is linear in a specific unit (with no intercept) is said to be a quantity of "those units".</p> <p>So if "m" is a unit, that means that we can only classify a quantity as having "units of m" if it is of the form $k \text{m}$ for some unitless number $k$. Respecting the definition above, we see that $x = e^{2 \text{m}} = 1 + 2 \text{m} + 2 \text{m}^2 + \frac{4}{3} \text{m}^3 + \cdots$ is not strictly of form $k \text{m}$, thus by the definition above, it can't be said to have "units of m". </p> <p>Aside from that definition, I don't think "units" has a formal definition in pure mathematics (perhaps someone will point me out wrong here), so without that formal definition, you can't make a proof. You need a definition like the one I wrote above.</p>
630,912
<p>In <a href="https://math.stackexchange.com/questions/626256/choose-the-branch-for-1-zeta21-2-that-makes-it-holomorphic-in-the-upper">this</a> question I brought up a passage from Stein/Shakarchi's <em>Complex Analysis</em> page 232: </p> <blockquote> <p>...We consider for $z\in \mathbb{H}$, $$f(z)=\int_0^z \frac{d\zeta}{(1-\zeta^2)^{1/2}},$$ where the integral is taken from $0$ to $z$ along any path in the closed upper half-plane. We choose the branch for $(1-\zeta^2)^{1/2}$ that makes it holomorphic in the upper half-plane and positive when $-1&lt;\zeta&lt;1$. As a result, $$(1-\zeta^2)^{-1/2}=i(\zeta^2-1)^{-1/2}\quad \text{when }\zeta&gt;1.$$</p> </blockquote> <p>One thing I'm still not quite clear on: why is there a factor of $i$ between $(1-\zeta^2)^{-1/2}$ and $(\zeta^2-1)^{-1/2}$? If we look at the argument of $(1-\zeta)(1+\zeta)$, it seems like it should change by $\pi$ when we go by $\zeta=1$, and change again by $\pi$ as we go by $\zeta=-1$. Therefore it is changing by $2\pi$ total...halve that and you get $\pi$, apply the exponential and you get a factor of $-1$. So why is the factor $i$?</p> <p>Also, explicitly what branches of ${\sqrt{1-\zeta}}$ and $\sqrt{1+\zeta}$ are we choosing to make it real and positive on $(-1,1)$?</p>
Disintegrating By Parts
112,478
<p>The defining property of $(1-\zeta^{2})^{-1/2}$ is that its square is $1/(1-\zeta^{2})$. Both of the forms you gave check out that way because $i^{2}=-1$. This is why using the traditional view of branch cuts can be confusing. If you take $f(z)=\sqrt{z}$ with the branch cut along $z= x+i0$ for $-\infty \le x \le 0$, then $\sqrt{z}$ and $i\sqrt{-z}$, while both square roots of $z$, have different branch cuts when considered as functions of $z$. Using logarithms to define the powers makes the branch cuts more obvious.</p>
630,912
<p>In <a href="https://math.stackexchange.com/questions/626256/choose-the-branch-for-1-zeta21-2-that-makes-it-holomorphic-in-the-upper">this</a> question I brought up a passage from Stein/Shakarchi's <em>Complex Analysis</em> page 232: </p> <blockquote> <p>...We consider for $z\in \mathbb{H}$, $$f(z)=\int_0^z \frac{d\zeta}{(1-\zeta^2)^{1/2}},$$ where the integral is taken from $0$ to $z$ along any path in the closed upper half-plane. We choose the branch for $(1-\zeta^2)^{1/2}$ that makes it holomorphic in the upper half-plane and positive when $-1&lt;\zeta&lt;1$. As a result, $$(1-\zeta^2)^{-1/2}=i(\zeta^2-1)^{-1/2}\quad \text{when }\zeta&gt;1.$$</p> </blockquote> <p>One thing I'm still not quite clear on: why is there a factor of $i$ between $(1-\zeta^2)^{-1/2}$ and $(\zeta^2-1)^{-1/2}$? If we look at the argument of $(1-\zeta)(1+\zeta)$, it seems like it should change by $\pi$ when we go by $\zeta=1$, and change again by $\pi$ as we go by $\zeta=-1$. Therefore it is changing by $2\pi$ total...halve that and you get $\pi$, apply the exponential and you get a factor of $-1$. So why is the factor $i$?</p> <p>Also, explicitly what branches of ${\sqrt{1-\zeta}}$ and $\sqrt{1+\zeta}$ are we choosing to make it real and positive on $(-1,1)$?</p>
Felix Marin
85,343
<p>$\newcommand{\+}{^{\dagger}}% \newcommand{\angles}[1]{\left\langle #1 \right\rangle}% \newcommand{\braces}[1]{\left\lbrace #1 \right\rbrace}% \newcommand{\bracks}[1]{\left\lbrack #1 \right\rbrack}% \newcommand{\ceil}[1]{\,\left\lceil #1 \right\rceil\,}% \newcommand{\dd}{{\rm d}}% \newcommand{\ds}[1]{\displaystyle{#1}}% \newcommand{\equalby}[1]{{#1 \atop {= \atop \vphantom{\huge A}}}}% \newcommand{\expo}[1]{\,{\rm e}^{#1}\,}% \newcommand{\fermi}{\,{\rm f}}% \newcommand{\floor}[1]{\,\left\lfloor #1 \right\rfloor\,}% \newcommand{\half}{{1 \over 2}}% \newcommand{\ic}{{\rm i}}% \newcommand{\iff}{\Longleftrightarrow} \newcommand{\imp}{\Longrightarrow}% \newcommand{\isdiv}{\,\left.\right\vert\,}% \newcommand{\ket}[1]{\left\vert #1\right\rangle}% \newcommand{\ol}[1]{\overline{#1}}% \newcommand{\pars}[1]{\left( #1 \right)}% \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\pp}{{\cal P}}% \newcommand{\root}[2][]{\,\sqrt[#1]{\,#2\,}\,}% \newcommand{\sech}{\,{\rm sech}}% \newcommand{\sgn}{\,{\rm sgn}}% \newcommand{\totald}[3][]{\frac{{\rm d}^{#1} #2}{{\rm d} #3^{#1}}} \newcommand{\ul}[1]{\underline{#1}}% \newcommand{\verts}[1]{\left\vert\, #1 \,\right\vert}$ I found some way which makes clear the integration in cases like this one: Split the integral into two pieces and translate the integrand to start both from zero. 
This is a 'brute force trick' but, in my particular case, I found this one quite clear and not confusing.</p> <p>Besides the fact that the integral is performed in the complex plane (we can always fix that): \begin{align} &amp;\int_{0}^{z}{\dd \zeta \over \pars{1 - \zeta^{2}}^{1/2}} = \half\bracks{\int_{0}^{z}{\dd \zeta \over \pars{1 - \zeta^{2}}^{1/2}} + \int_{0}^{z}{\dd \zeta \over \pars{1 - \zeta^{2}}^{1/2}}} \\[3mm]&amp;= \half\braces{\bracks{% \int_{0}^{-1}{\dd \zeta \over \pars{1 - \zeta^{2}}^{1/2}} + \int_{-1}^{z}{\dd \zeta \over \pars{1 - \zeta^{2}}^{1/2}}} + \bracks{% \int_{0}^{1}{\dd \zeta \over \pars{1 - \zeta^{2}}^{1/2}} + \int_{1}^{z}{\dd \zeta \over \pars{1 - \zeta^{2}}^{1/2}}}} \\[3mm]&amp;= \half\int_{-1}^{z}{\dd \zeta \over \bracks{\pars{1 - \zeta}\pars{1 + \zeta}}^{1/2}} + \half\int_{1}^{z}{\dd \zeta \over \bracks{\pars{1 - \zeta}\pars{1 + \zeta}}^{1/2}} + \half\int_{-1}^{1}{\dd \zeta \over \pars{1 - \zeta^{2}}^{1/2}} \\[3mm]&amp;= \half\int_{0}^{z + 1}{\xi^{-1/2}\dd \xi \over \pars{2 - \xi}^{1/2}} + \half\int_{0}^{z - 1}{\pars{-\xi}^{-1/2}\dd \xi \over \pars{2 + \xi}^{1/2}} + \half\,\pi\,,\qquad z \not= \pm 1 \end{align} where, in the last line, the substitutions $\xi = 1 + \zeta$ and $\xi = \zeta - 1$ were used in the first and second integrals, respectively. Indeed, the 'fine' procedure will involve a proper path parametrization, but I hope this illustrates the general idea.</p>
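As a quick numerical companion to the branch question above: one concrete realization of the required branch (this particular factorization, and the function name, are our choice, not taken from the book) is the product of the principal square roots of $1-\zeta$ and $1+\zeta$; it is positive on $(-1,1)$ and reproduces the factor of $i$ just above the real axis at $\zeta&gt;1$:

```python
import cmath

def sqrt_1_minus_z2(z):
    # product of principal square roots of (1 - z) and (1 + z):
    # holomorphic on the upper half-plane, positive on (-1, 1)
    return cmath.sqrt(1 - z) * cmath.sqrt(1 + z)

# positive on (-1, 1)
assert abs(sqrt_1_minus_z2(0.5) - 0.75 ** 0.5) < 1e-12

# just above the real axis at x > 1, the reciprocal picks up a factor i
x, eps = 2.0, 1e-9
lhs = 1 / sqrt_1_minus_z2(complex(x, eps))
rhs = 1j / cmath.sqrt(x * x - 1)
assert abs(lhs - rhs) < 1e-6
```

Here `cmath.sqrt` is the principal branch; its cut along the negative reals is what makes $\sqrt{1-\zeta}$ approach $-i\sqrt{\zeta-1}$ for $\zeta$ slightly above $(1,\infty)$.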
3,224,455
<p>I derived the volume of a cone using two approaches and compared the results.</p> <p>First I integrated a circle of radius <span class="math-container">$r$</span> over the height <span class="math-container">$h$</span> to get the expression: <span class="math-container">$$V_1=\frac{1}{3}\pi r^2 h$$</span></p> <p>Then I considered a polygonal pyramid with infinitely many sides.</p> <p>An n-sided polygon with apothem <span class="math-container">$r$</span> has an area of: <span class="math-container">$$A=nr^2\tan{\frac{180°}{n}}$$</span></p> <p>Integrating this over the height <span class="math-container">$h$</span> gives the expression for the volume of the n-sided polygonal pyramid as: <span class="math-container">$$V_2=\frac{1}{3}n\tan{\frac{180°}{n}}r^2 h$$</span></p> <p>Equating <span class="math-container">$V_1$</span> and <span class="math-container">$V_2$</span> implies that: <span class="math-container">$$ \lim_{n \to \infty} \left(n\tan{\frac{180°}{n}}\right) = \pi $$</span></p> <p>So is it true to say that: <span class="math-container">$$\infty\tan{\frac{180°}{\infty}} = \pi$$</span> </p> <p>But: <span class="math-container">$$\tan{\frac{180°}{\infty}}=0$$</span></p> <p>So: <span class="math-container">$$\infty (0)=\pi$$</span></p> <p>Can anyone shed some light on this surprising result?</p>
Matthew Leingang
2,785
<blockquote> <p>Equating <span class="math-container">$V_1$</span> and <span class="math-container">$V_2$</span> implies that: <span class="math-container">$$ \lim_{n \to \infty} \left(n\tan{\frac{180}{n}}\right) = \pi $$</span></p> </blockquote> <p>That is correct. 180 degrees is <span class="math-container">$\pi$</span> radians. If you change variables from <span class="math-container">$n$</span> to <span class="math-container">$x$</span> with <span class="math-container">$x = \frac{1}{n}$</span>, you get <span class="math-container">$$ \lim_{n \to \infty} \left(n\tan{\frac{180}{n}}\right) = \lim_{x \to 0^+} \frac{\tan{(\pi x)}}{x} $$</span> and that is, in fact, <span class="math-container">$\pi$</span>.</p> <blockquote> <p>So is it true to say that: <span class="math-container">$$\infty\tan{\frac{180}{\infty}} = \pi$$</span> </p> </blockquote> <p>Not at all. The limit of a product of functions is the product of the limits of the functions <em>provided those functions have limits in the first place</em>. Since <span class="math-container">$\lim_{n\to\infty} n$</span> does not exist, you cannot apply the limit laws that way.</p> <p>In short: <span class="math-container">$\infty$</span> is not a number. Treating it as a number leads to madness (or, at least, contradiction).</p>
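A quick numerical check of that limit, in radian form ($n\tan(\pi/n)\to\pi$):

```python
from math import tan, pi

# n * tan(pi / n) -> pi; only the limit of the product matters,
# not the "values at infinity" of the individual factors
values = [n * tan(pi / n) for n in (6, 60, 600, 6000)]
print(values[-1])  # close to pi

assert abs(values[-1] - pi) < 1e-5
# the error shrinks as n grows
assert all(abs(v - pi) > abs(w - pi) for v, w in zip(values, values[1:]))
```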
3,252,765
<p>We are trying to codify, as a modern algorithm, the works of the ancient Indian mathematician <em>Udayadivakara</em> (CE 1073). In his work <em>Sundari</em>, he quotes one <em>Acarya Jayadeva</em> who has given methods to solve Pell's equations. In these methods, one can find the cyclic <em>Chakravala</em> method for dealing with <span class="math-container">$X^2-DY^2=1$</span>, wrongly attributed to Bhaskara. He also gives a method to solve <span class="math-container">$X^2-DY^2=C$</span> for any integer <span class="math-container">$C$</span>. </p> <ol> <li>His algorithm starts off by finding the nearest square integer <span class="math-container">$&gt;D$</span>, named <span class="math-container">$P^2$</span>. Then <span class="math-container">$a=P^2-D$</span>.</li> <li>Now some <span class="math-container">$b$</span> is chosen in such a way that <span class="math-container">$Db^2+Ca$</span> is some perfect square <span class="math-container">$Q^2$</span>. </li> <li>Then the <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> solutions can be found by using <span class="math-container">$Y=\frac{Q\pm P b}{a}$</span> and <span class="math-container">$X=PY \mp b$</span>.</li> <li>This procedure can continue indefinitely to find all the solutions. </li> <li>Coming to the question of the fundamental solution, i.e. the solution with which Bhavana has to be performed repeatedly to get the other solutions (related to the modern automorphism group of the quadratic form), Prof. K.S. Shukla, who first translated the work from Sanskrit to English, says in his example that it should be chosen "appropriately".</li> <li><strong>Our primary question is then: what is the criterion for deriving this <em>fundamental solution</em>? 
Is there a way to derive such a criterion?</strong> </li> <li>The whole procedure seems to resemble Conway's topograph method, which has been posted here several times: <a href="https://math.stackexchange.com/questions/1719280/does-the-pell-like-equation-x2-dy2-k-have-a-simple-recursion-like-x2-dy2?noredirect=1&amp;lq=1">Does the Pell-like equation $X^2-dY^2=k$ have a simple recursion like $X^2-dY^2=1$?</a> It is quite fascinating to think that some wonderful mind came up with this algorithm about 1000 years ago, and its optimality is equally amazing.</li> </ol> <p>P.S: If anyone so wishes, we would be happy to provide a version of the original paper written by Prof. Shukla in 1950 dealing with this!</p>
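For what it's worth, steps 1–3 above can be rendered as a brute-force sketch (the search bound over $b$, the function name, and trying both signs are our assumptions, not part of the original text):

```python
from math import isqrt

def jayadeva_step(D, C, b_max=200):
    """Sketch of steps 1-3: find integer solutions of X^2 - D*Y^2 = C.
    The brute-force search over b up to b_max is our assumption."""
    P = isqrt(D) + 1                    # step 1: smallest P with P^2 > D
    a = P * P - D                       # so a > 0
    found = set()
    for b in range(1, b_max + 1):       # step 2: make D*b^2 + C*a a square Q^2
        q2 = D * b * b + C * a
        if q2 < 0:
            continue
        Q = isqrt(q2)
        if Q * Q != q2:
            continue
        for s in (b, -b):               # step 3: Y = (Q +/- P*b)/a, X = P*Y -/+ b
            if (Q + P * s) % a == 0:
                Y = (Q + P * s) // a
                X = P * Y - s
                assert X * X - D * Y * Y == C
                found.add((abs(X), abs(Y)))
    return sorted(found)

print(jayadeva_step(97, 96)[:3])   # includes (22, 2), since 22^2 - 97*2^2 = 96
```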
Dmitry Ezhov
602,207
<p>This is not an answer, only an illustration of how <strong>pari/gp</strong> works.</p> <p>gp-code:</p> <pre><code>gpell(D,C)=
{
  print("\nRoot solutions of Pell equation x^2-",D,"*y^2=",C,"\n");
  Q= iferr(bnfinit('X^2-D), E, 0);
  if(Q,
    U= iferr(Q.fu, E, 0);
    if(U,
      for(j=1, #U,
        u= U[j];
        print("Q.fu: ",u,"\n");
        N= iferr(bnfisintnorm(Q, C), E, 0);
        print("bnfisnorm: ",N,"\n");
        if(N,
          for(k=1, #N,
            n= N[k];
            for(l=0, 48,
              \\print("\n",l);
              nu= lift(n*u^l);
              \\print(nu);
              X= abs(polcoeff(nu, 0));
              Y2= (X^2-C)/D;
              if(X==floor(X)&amp;&amp;Y2==floor(Y2),
                Y= sqrtint(Y2);
                print("X= ",X," Y= ",Y," l= ",l);
                break()
              )
            )
          )
        )
      )
    )
  )
};
</code></pre> <p>Run code:</p> <pre><code>? \r gpell.gp
? gpell(97,96)

Root solutions of Pell equation x^2-97*y^2=96

Q.fu: Mod(569*X - 5604, X^2 - 97)

bnfisnorm: [25/2*X + 247/2, -5/2*X + 53/2, 47*X - 463, 6173*X - 60797, 111802*X - 1101122, 2*X + 22, -2*X + 22, 278*X - 2738, -5035*X + 49589, -661225*X + 6512311, -5/2*X - 53/2, 25/2*X - 247/2]

X= 463 Y= 47 l= 0
X= 60797 Y= 6173 l= 0
X= 1101122 Y= 111802 l= 0
X= 22 Y= 2 l= 0
X= 22 Y= 2 l= 0
X= 2738 Y= 278 l= 0
X= 49589 Y= 5035 l= 0
X= 6512311 Y= 661225 l= 0
?
?
?
?
? d=1377;gpell(d*(d-8),d*8)

Root solutions of Pell equation x^2-1885113*y^2=11016

Q.fu: Mod(1/333*X - 4, X^2 - 1885113)

bnfisnorm: [-11/74*X + 459/2, -X + 1377, -3/37*X + 153, -45/74*X + 1683/2]

X= 1377 Y= 1 l= 0
X= 74319362381115913874527035825931593 Y= 54129408430647687070089140606679 l= 36
?
</code></pre> <p>Pay close attention to the parameter "l" in the code.</p> <p>And, for contrast, here is how <a href="http://ipic.su/img/img7/fs/11.1559821930.png" rel="nofollow noreferrer">Wolfram</a> handles it.</p>
2,879,035
<p>$f(x) = \int_{1}^{\infty} \frac{2}{\sqrt{2\pi}}e^{-\frac{1}{2}x^2} dx$</p> <p>find $P(X &gt; 1)$</p> <p>This is $X$ ~ $Norm(0, 1)$.</p> <p>$P(X &gt; 1) = 1 - P(X \leq 1) = 1 - 2 \phi(1) = 1-2(1-\phi(-1)) = 1 - 2(1-0.1587) = -0.6826$. </p> <p>Yikes. Negative number. What am I doing wrong? </p>
Aaron Montgomery
485,314
<p>I assume that by $\phi(t)$ you mean the area in the left tail of a normal distribution. If so, then the $2$ highlighted in red below is incorrect and should be removed: $$P(X &gt; 1) = 1 - P(X \leq 1) = 1 - \color{red}{2}\phi(1) = \dots$$</p>
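If tables are unavailable, the corrected computation can be checked with the standard library (the helper name is ours; $\Phi$ here is the usual left-tail CDF, written via the error function):

```python
from math import erf, sqrt

def Phi(t):
    # standard normal CDF via the error function
    return 0.5 * (1 + erf(t / sqrt(2)))

p = 1 - Phi(1)        # P(X > 1): no factor of 2 anywhere
assert abs(p - 0.1587) < 1e-3
assert p > 0          # a probability, as it must be
```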
2,322,646
<p>Let $f$ and $\varphi$ be continuous real valued functions on $\mathbb{R}$. Suppose $\varphi(x)=0$ for $|x|&gt;5$ and that $\int_{\mathbb{R}}\varphi(x)\mathbb{d}x=1$. Show that $$\lim_{h\to 0}\left[\frac{1}{h}\int_{\mathbb{R}}f(x-y)\varphi\left(\frac{y}{h}\right)\mathbb{d}y\right]=f(x).$$ I don't know how to proceed. Please help.</p>
Robert Z
299,698
<p>Hint. The given limit is equivalent to $$\lim_{h\to 0^+}\frac{1}{h}\int_{-5h}^{5h}(f(x-y)-f(x))\varphi\left(\frac{y}{h}\right)dy=\lim_{h\to 0^+}\int_{-5}^{5}(f(x-th)-f(x))\varphi\left(t\right)dt=0$$ Then use the fact that $\varphi$ is bounded in $[-5,5]$ and $f$ is uniformly continuous in $[x-6,x+6]$.</p>
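A numerical illustration of the hint (the triangular choice of $\varphi$, the test function, and the sample point are ours):

```python
import math

def phi(x):
    # a continuous bump: triangle supported on [-5, 5] with integral 1
    return max(0.0, 5 - abs(x)) / 25

def mollified(f, x, h, n=20000):
    # midpoint-rule value of (1/h) * integral of f(x - y) * phi(y / h) dy
    a, b = -5 * h, 5 * h
    dy = (b - a) / n
    total = sum(f(x - (a + (i + 0.5) * dy)) * phi((a + (i + 0.5) * dy) / h)
                for i in range(n))
    return total * dy / h

for h in (1.0, 0.1, 0.01):
    print(h, mollified(math.cos, 0.3, h))   # approaches cos(0.3) as h -> 0

assert abs(mollified(math.cos, 0.3, 0.01) - math.cos(0.3)) < 1e-3
```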
4,498,263
<p>I know that a group action is transitive when there is one orbit. Say that <span class="math-container">$G$</span> is a group acting on the set <span class="math-container">$A$</span>. The identity element of <span class="math-container">$G$</span> will clearly create <span class="math-container">$|A|$</span>-many orbits. But the other elements will create each their own set of orbits. Will all of these elements of <span class="math-container">$G$</span> give the same total number of orbits?</p>
Oiler
270,500
<p>Elements of <span class="math-container">$G$</span> do <em>not</em> have orbits. Elements of <span class="math-container">$A$</span> have orbits (and orbits are subsets of <span class="math-container">$A$</span>). If a group <span class="math-container">$G$</span> is acting on a set <span class="math-container">$A$</span> and <span class="math-container">$a \in A$</span>, then we denote the <em>orbit</em> of <span class="math-container">$a$</span> by <span class="math-container">$\operatorname{orb}_{G}(a)$</span> and define <span class="math-container">$$ \operatorname{orb}_{G}(a) : = \{ g \cdot a : g \in G \} \subseteq A. $$</span></p> <p>For any <span class="math-container">$a \in A$</span> <span class="math-container">$$ |\operatorname{orb}_{G}(a)| = (G : \operatorname{stab}_{G}(a) ), $$</span> where <span class="math-container">$\operatorname{stab}_{G}(a) \leq G$</span> is the <em>stabilizer</em> of <span class="math-container">$a$</span>, i.e. <span class="math-container">$$ \operatorname{stab}_{G}(a) : = \{ g \in G : g \cdot a = a \} \subseteq G. $$</span></p> <p>So orbits can have varying cardinalities.</p> <p><strong>EDIT:</strong></p> <p>With the examples you are considering, I believe you are looking at the specific example of a subgroup <span class="math-container">$H \leq G$</span> acting on <span class="math-container">$G$</span> via left multiplication: for any <span class="math-container">$h \in H$</span> and <span class="math-container">$g \in G$</span>, we define <span class="math-container">$$ h \cdot g : = hg. $$</span> The orbit of <span class="math-container">$g \in G$</span> under the action of <span class="math-container">$H$</span> is then <span class="math-container">$$ \operatorname{orb}_{H}(g) = \{ hg : h \in H \}, $$</span> which is precisely the right coset <span class="math-container">$Hg$</span>. 
All of this is to say that the orbit of <span class="math-container">$g \in G$</span> under the action of left multiplication by <span class="math-container">$H$</span> is the right coset <span class="math-container">$Hg$</span>. It is worth mentioning a couple of things:</p> <ul> <li>The cosets <span class="math-container">$Hg$</span> partition <span class="math-container">$G$</span> and are all of the same cardinality. In particular, if <span class="math-container">$|H| &lt; |G|$</span>, then this action is not transitive.</li> <li>When <span class="math-container">$H$</span> acts by right translation, the orbits are the left cosets of <span class="math-container">$H$</span> in <span class="math-container">$G$</span>.</li> <li>When <span class="math-container">$H$</span> is a normal subgroup of <span class="math-container">$G$</span>, then the left and right cosets of <span class="math-container">$H$</span> in <span class="math-container">$G$</span> are the same and there is no distinction.</li> <li>This is a special group action that will not work all of the time. In particular, you are relying on the fact that the set you are acting on, <span class="math-container">$G$</span>, is actually a group and you know how to multiply elements of the group <span class="math-container">$H$</span> and the set <span class="math-container">$G$</span>. For instance, <span class="math-container">$S_{n}$</span> acts on the set <span class="math-container">$\{ 1 , \dots , n \}$</span>, but there is no canonical multiplication of an element of <span class="math-container">$S_{n}$</span> and an element of <span class="math-container">$\{1 , \dots , n \}$</span>.</li> </ul> <p>With the examples you are discussing in the comments, it seems that you are looking at the example where <span class="math-container">$H$</span> is the cyclic subgroup generated by some element.</p>
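For small finite examples, orbits can be computed directly by closing under the generators (a generic sketch; for invertible generators acting on a finite set, this closure already equals the full group orbit):

```python
def orbit(a, gens, act):
    """Orbit of a under the group generated by gens, by closure."""
    orb, frontier = {a}, [a]
    while frontier:
        b = frontier.pop()
        for g in gens:
            c = act(g, b)
            if c not in orb:
                orb.add(c)
                frontier.append(c)
    return orb

# the cyclic group generated by the permutation (0 1 2)(3 4) acting on {0,...,4}
p = (1, 2, 0, 4, 3)                      # p maps i to p[i]
act = lambda g, i: g[i]
assert orbit(0, [p], act) == {0, 1, 2}   # an orbit of size 3
assert orbit(3, [p], act) == {3, 4}      # an orbit of size 2, so sizes can vary
```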
29,766
<p>I'm looking for a news site for Mathematics which particularly covers recently solved mathematical problems together with the unsolved ones. Is there a good site MO users can suggest to me, or is my only bet just to google for them?</p>
sotiris
7,158
<p>There is also a list of open problems in group theory here: <a href="http://www.grouptheory.info/" rel="nofollow">http://www.grouptheory.info/</a></p>
29,766
<p>I'm looking for a news site for Mathematics which particularly covers recently solved mathematical problems together with the unsolved ones. Is there a good site MO users can suggest to me, or is my only bet just to google for them?</p>
Bogdan Grechuk
89,064
<p>Here <a href="https://theorems.home.blog/theorems-list/" rel="nofollow noreferrer">https://theorems.home.blog/theorems-list/</a> is the website you are asking for. </p> <p>It covers all recently solved mathematical problems, which are important (for example, published in a top journal) but at the same time can be understood with not too much background.</p>
174,165
<p>I have a Maths test tomorrow and was just doing my revision when I came across these two questions. Would anyone please give me a nudge in the right direction?</p> <p>$1)$ If $x$ is real and $$y=\frac{x^2+4x-17}{2(x-3)},$$ show that $|y-5|\geq2$ </p> <p>$2)$ If $a&gt;0$, $b&gt;0$, prove that $$\left(a+\frac1b\right)\left(2b+\frac1{2a}\right)\ge\frac92$$</p>
Asaf Karagila
622
<p>There are two cases:</p> <ol> <li>$n$ is even, write it as $2k$ and then you have $\cos(k\pi)$ and $\sin(k\pi)$ which you already know.</li> <li>$n$ is odd, write it as $2k+1$ and then you have $\cos(k\pi+\frac\pi2)$ and $\sin(k\pi+\frac\pi2)$. Recall that $\cos(x+\frac\pi2)=-\sin(x)$ and $\sin(x+\frac\pi2)=\cos(x)$, and deduce from the previous case what the values are.</li> </ol>
1,149,561
<p>I've tried using mods but nothing is working on this one: solve in positive integers $x,y$ the diophantine equation $7^x=3^y-2$.</p>
hjhjhj57
150,361
<p>If you see the <a href="http://en.wikipedia.org/wiki/Euler_characteristic#Surfaces" rel="nofollow">wikipedia article</a> on the subject you'll see there are some examples. For instance, if $T$ is the 2-torus, and $K$ the Klein bottle we have that, $$ \chi(T) = \chi(K) =\chi(S^1) = 0. $$ On the other hand, \begin{align} H_1(T) &amp;\simeq \mathbb{Z}\oplus\mathbb{Z}, \\ H_1(K) &amp;\simeq \mathbb{Z}\oplus\mathbb{Z}/2\mathbb{Z}, \\ H_1(S^1) &amp;\simeq \mathbb{Z}. \\ \end{align}</p>
873,434
<p>Let's say I want to find the product of $1,2,3, \dots, 10$. Do I need to do $1 \cdot 2 \cdot 3 \cdots 10$ manually or is there an easier way to do it?</p> <p>Something like the summation of $1$ to $n$ which gives $\frac{n(n+1)}{2}$.</p> <p>I tried to search but couldn't find a way to do it directly. </p>
johannesvalks
155,865
<p>It is known as the factorial and denoted $n!$.</p> <p>The case $10!$ can be reduced:</p> <p>\begin{eqnarray} 1 \cdot 2 \cdot 3 \cdots 8 \cdot 9 \cdot 10 &amp;=&amp; 10 \Big(5-4\Big) \Big(5-3\Big) \cdots \Big(5+4\Big)\\ &amp;=&amp; 50 \Big(25-16\Big) \Big(25-9\Big) \Big(25-4\Big) \Big(25-1\Big)\\ &amp;=&amp; 50 \Big(15-6\Big) \Big(15+6\Big) \Big(20-4\Big) \Big(20+4\Big)\\ &amp;=&amp; 50 \Big(225 - 36\Big) \Big(400 - 16\Big)\\ &amp;=&amp; 50 \times 189 \times 384 = 3,628,800 \end{eqnarray}</p> <hr> <p>If you have to do it in your head, collect easy factors:</p> <p>\begin{eqnarray} 1 \cdot 2 \cdot 3 \cdot 4 \cdot 5 \cdot 6 \cdot 7 \cdot 8 \cdot 9 \cdot 10 &amp;=&amp; \Big( 2 \cdot 4 \cdot 5 \cdot 10 \Big) \Big( 3 \cdot 6 \cdot 8 \cdot 9 \Big) \cdot 7\\ &amp;=&amp; 20^2 \cdot 6^4 \cdot 7\\ &amp;=&amp; 400 \times 1296 \times 7 \end{eqnarray}</p>
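Both groupings are easy to verify directly (a quick sanity check; <code>math.prod</code> needs Python 3.8+):

```python
from math import factorial, prod

# check both regroupings against 10!
assert prod(range(1, 11)) == factorial(10) == 3_628_800
assert 50 * 189 * 384 == 3_628_800        # the difference-of-squares route
assert 20 ** 2 * 6 ** 4 * 7 == 3_628_800  # the "easy factors" route
print(factorial(10))  # 3628800
```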
3,491,867
<p>I'm working on an integral used to illustrate <span class="math-container">$\pi &gt; \frac{22}{7}$</span> and I'm stuck on finding the name of a theorem for the following:</p> <p>Let <span class="math-container">$f(x)$</span> be a continuous Real Valued function on the interval <span class="math-container">$[a,b]$</span> (where <span class="math-container">$a$</span>, <span class="math-container">$b$</span> can be finite or infinite). If <span class="math-container">$f(x) \geq 0$</span> on <span class="math-container">$[a,b]$</span> then <span class="math-container">$$ \int_a^b f(x) \:dx \geq 0 $$</span></p> <p>Does anyone know what the name of this theorem is?</p>
Hans Lundmark
1,242
<p>I don't know if this helps (or if you really <em>need</em> a name for that theorem), but the property that <span class="math-container">$f \ge g$</span> on <span class="math-container">$[a,b]$</span> (where <span class="math-container">$a \le b$</span>) implies <span class="math-container">$\int_a^b f(x) \, dx \ge \int_a^b g(x) \, dx$</span> is sometimes called <strong>monotonicity</strong> of the integral.</p>
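Numerically, the reason the property holds is transparent: every Riemann sum of a nonnegative integrand is a sum of nonnegative terms, so the limiting integral is nonnegative too (the quadratic test function and the helper below are our illustration):

```python
def riemann(f, a, b, n=1000):
    # midpoint Riemann sum of f on [a, b]
    w = (b - a) / n
    return sum(f(a + (i + 0.5) * w) for i in range(n)) * w

f = lambda x: x * x              # f >= 0 everywhere
approx = riemann(f, -1.0, 2.0)
assert approx >= 0               # nonnegative integrand, nonnegative sum
assert abs(approx - 3.0) < 1e-3  # the exact integral is 3
```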
542,391
<p>I understand the processes of putting a matrix into Jordan normal form and forming the transformation matrix associated with "diagonalizing" the matrix. So here's my question:</p> <p>Why is it that when you have an eigenvalue x=0 with algebraic multiplicity greater than 1, that you don't put a 1 in the superdiagonal of the JNF matrix but when the eigenvalue is non-zero and satisfies the same properties, we put a 1 in the superdiagonal of the Jordan normal form?</p> <p>My professor posted solutions to an assignment involving finding a matrix exponential, but the JNF of a matrix had eigenvalue x=0 with algebraic multiplicity of 3, yet had no entries of 1 along the superdiagonal.</p> <p>In advance, I would like to thank you for your help.</p>
Тимофей Ломоносов
54,117
<p>I can give a counter-example concerning the $1$'s.</p> <p>Let's look at two matrices: $A=\begin{pmatrix} 1 &amp; 0 \\ 0 &amp; 1 \end{pmatrix}$ and $B=\begin{pmatrix} 1 &amp; 1 \\ 0 &amp; 1 \end{pmatrix}$.</p> <p>In both cases the eigenvalue is $1$ with multiplicity $2$, but $A$ has two independent eigenvectors $e$ with $Ae=e$ (namely $(1,0)$ and $(0,1)$), while $B$ has only one such eigenvector (namely $(1,0)$).</p> <p>We put a $1$ on the superdiagonal only when the corresponding block has only one eigenvector with that eigenvalue.</p>
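The same distinction can be checked mechanically by counting independent eigenvectors, i.e. the null-space dimension of $M-\lambda I$ (the tiny $2\times2$ helper below is ours):

```python
def null_dim_2x2(m):
    # dimension of the null space of a 2x2 matrix
    (a, b), (c, d) = m
    if a == b == c == d == 0:
        return 2
    return 1 if a * d - b * c == 0 else 0

A = [[1, 0], [0, 1]]
B = [[1, 1], [0, 1]]
shift = lambda M: [[M[0][0] - 1, M[0][1]], [M[1][0], M[1][1] - 1]]  # M - 1*I

assert null_dim_2x2(shift(A)) == 2  # two eigenvectors: the JNF is diagonal
assert null_dim_2x2(shift(B)) == 1  # one eigenvector: a 1 on the superdiagonal
```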
4,002,458
<p>I'm a geometry student. Recently we were doing all kinds of crazy circle stuff, and it occurred to me that I don't know why <span class="math-container">$\pi r^2$</span> is the area of a circle. I mean, how do I <em>really</em> know that's true, aside from just taking my teachers + books at their word?</p> <p>So I tried to derive the formula myself. My strategy was to fill a circle with little squares. But I couldn't figure out how to generate successively smaller squares in the right spots. So instead I decided to graph just one quadrant of the circle (since all four quadrants are identical, I can get the area of the easy +x, +y quadrant and multiply the result by 4 at the end) and put little rectangles along the curve of the circle. The more rectangles I put, the closer I get to the correct area. If you graph it out, my idea looks like this:</p> <p><a href="https://i.stack.imgur.com/5JMSb.png" rel="noreferrer"><img src="https://i.stack.imgur.com/5JMSb.png" alt="Approximating circle area using rectangles" /></a></p> <p>Okay, so to try this in practice I used a Python script (less tedious):</p> <pre><code>from math import sqrt, pi

# explain algo of finding top right quadrant area
# thing with graphics via a notebook

# Based on Pythagorean circle function (based on r**2 = x**2 + y**2)
def circle_y(radius, x):
    return sqrt(radius**2 - x**2)

def circleAreaApprox(radius, rectangles):
    area_approx = 0
    little_rectangles_width = 1 / rectangles * radius
    for i in range(rectangles):
        x = radius / rectangles * i
        little_rectangle_height = circle_y(radius, x)
        area_approx += little_rectangle_height * little_rectangles_width
    return area_approx * 4
</code></pre> <p>This works. 
The more rectangles I put, the wrongness of my estimate goes down and down:</p> <pre><code>for i in range(3):
    rectangles = 6 * 10 ** i
    delta = circleAreaApprox(1, rectangles) - pi  # For a unit circle area: pi * 1 ** 2 == pi
    print(delta)
</code></pre> <h3>Output</h3> <pre><code>0.25372370203838557
0.030804314363409357
0.0032533219749364406
</code></pre> <p>Even if you test with big numbers, it just gets closer and closer forever. Infinitely small rectangles <code>circleAreaApprox(1, infinity)</code> is presumably the true area. But I can't calculate that, because I'd have to loop forever, and that's too much time. How do I calculate the 'limit' of a for loop?</p> <p>Ideally, in an intuitive way. I want to reduce the magic and really understand this, not 'solve' this by piling on more magic techniques (like the <span class="math-container">$\pi \times radius^2$</span> formula that made me curious in the first place).</p> <p>Thanks!</p>
Joffan
206,402
<p>Typically this kind of issue is handled by watching a stable value emerge, to within some desired tolerance, from successively finer approximations.</p> <hr /> <p>In this case, if you double the number of rectangles, you approximately halve the error - this is what you would expect from treating each sub-section of the circle as a straight line segment, with the excess area having a rectangle taken out of it, leaving two triangles of the same area. Anyway, this gives you a way to get extra precision by combining successive estimates.</p> <p>It's actually quite interesting to look at the difference between this extrapolated estimate and a <a href="https://en.wikipedia.org/wiki/Trapezoidal_rule" rel="nofollow noreferrer">trapezoidal</a> approach, illustrated by this picture of a section of the curve:</p> <p><a href="https://i.stack.imgur.com/h05mF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/h05mF.png" alt="enter image description here" /></a></p> <p>showing that here the two-rectangles-for-one step loses the upper-right rectangle of area, so extrapolating that forward can be thought of as also losing the wedges off the top of the two rectangles. 
For comparison, the green trapezoid gives an underestimate of the area under the curve, in this case.</p> <hr /> <p>Using your base routines for the &quot;extrapolated area&quot; gives the following:</p> <pre><code>rec = 1
est = [circleAreaApprox(1, 1)]  # all estimates on unit circle
for i in range(20):
    rec *= 2
    est.append(circleAreaApprox(1, rec))
    print(&quot;rectangles= &quot;,rec, &quot;basic error=&quot;,est[-1]-pi, &quot;extrapolated error=&quot;, 2*est[-1]-est[-2]-pi)
</code></pre> <p>which calculates both <span class="math-container">$(e_k-\pi)$</span> and <span class="math-container">$(2e_k-e_{k-1}-\pi)$</span>, and gives</p> <pre><code>rectangles= 2 basic error= 0.5904581539790841 extrapolated error= 0.32250896154796127
rectangles= 4 basic error= 0.35411641451264764 extrapolated error= 0.1177746750462112
rectangles= 8 basic error= 0.19822649076738008 extrapolated error= 0.042336567022112526
rectangles= 16 basic error= 0.10666038423794832 extrapolated error= 0.015094277708516568
rectangles= 32 basic error= 0.05600976928733781 extrapolated error= 0.00535915433672729
rectangles= 64 basic error= 0.028954259189892806 extrapolated error= 0.0018987490924478045
rectangles= 128 basic error= 0.014813138806821335 extrapolated error= 0.000672018423749865
rectangles= 256 basic error= 0.007525429367436942 extrapolated error= 0.00023771992805254882
rectangles= 512 basic error= 0.0038047491298613956 extrapolated error= 8.406889228584902e-05
rectangles= 1024 basic error= 0.0019172379492267133 extrapolated error= 2.972676859203105e-05
rectangles= 2048 basic error= 0.0009638743216493495 extrapolated error= 1.0510694071985682e-05
rectangles= 4096 basic error= 0.00048379526796571426 extrapolated error= 3.7162142820790223e-06
rectangles= 8192 basic error= 0.00024255458489719217 extrapolated error= 1.3139018286700832e-06
rectangles= 16384 basic error= 0.00012150956160006388 extrapolated error= 4.6453830293557985e-07
rectangles= 32768 basic error= 6.083690070024517e-05 extrapolated error= 1.642398004264578e-07
rectangles= 65536 basic error= 3.0447484171247652e-05 extrapolated error= 5.806764225013694e-08
rectangles= 131072 basic error= 1.5234007097131297e-05 extrapolated error= 2.053002301494189e-08
rectangles= 262144 basic error= 7.62063280745906e-06 extrapolated error= 7.258517786823404e-09
rectangles= 524288 basic error= 3.8115995590892737e-06 extrapolated error= 2.566310719487319e-09
rectangles= 1048576 basic error= 1.9062534803993003e-06 extrapolated error= 9.074017093269049e-10
</code></pre> <p>and by the time you have a lot of rectangles, you are gaining <span class="math-container">$1000\times$</span> precision from extrapolation.</p>
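The 'stable value' idea in the first paragraph can be made concrete with a stopping rule (the tolerance and the rule itself are our sketch, not part of the answer above):

```python
from math import sqrt, pi

def quarter_circle_sum(rectangles):
    # left-endpoint Riemann sum for the unit quarter circle, times 4
    w = 1.0 / rectangles
    return 4 * sum(sqrt(1 - (i * w) ** 2) * w for i in range(rectangles))

def stable_estimate(tol=1e-4):
    """Double the rectangle count until successive extrapolated
    estimates agree to within tol."""
    rec = 1
    prev = quarter_circle_sum(rec)
    extrap_prev = None
    while True:
        rec *= 2
        cur = quarter_circle_sum(rec)
        extrap = 2 * cur - prev      # the 2*e_k - e_(k-1) combination above
        if extrap_prev is not None and abs(extrap - extrap_prev) < tol:
            return extrap
        prev, extrap_prev = cur, extrap

est = stable_estimate()
assert abs(est - pi) < 1e-3
```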
1,682,341
<p>While looking at another question on this site about constructible numbers I started wondering. If you can take a countable number of steps (possibly infinite), can you draw an interval of a length corresponding to a computable number?</p> <p>More precisely: if I have a unit interval, a straightedge, a compass, a finite list of instructions (which can include instructions to repeat sequences of the instructions until an event occurs, instructions to draw lines using my tools, and instructions on labeling points) and the ability to carry out a countably infinite number of actions in a finite time, can I construct an interval that corresponds to a given computable number?</p>
user21820
21,820
<p>If you can take an infinite number of steps you construct a non-constructible length?</p> <p><strong>No.</strong> If you take infinitely many steps, you never finish drawing and hence you cannot draw what you want.</p> <p>However, you might want to ask whether you can draw successive approximations that get arbitrarily close to the desired drawing (such as a segment with its length being a certain value relative to a given unit segment). In that case...</p> <p><strong>It depends.</strong> If you just ask for the existence of a sequence of successive approximations, then...</p> <p><strong>Yes even for non-computable numbers</strong>. Any real number has such a sequence because every real number is the limit of some sequence of rational numbers, and all rational numbers are constructible (by compass and straightedge).</p> <p>But if you ask for the existence of a <strong>computable</strong> sequence of approximations then...</p> <p><strong>Yes only for computable numbers</strong>. Every computable number is <strong>by definition</strong> produced digit by digit by a procedure, and every time you get a new digit you can construct the corresponding approximation that is accurate to that digit. It can certainly be done because the approximations are all rational. Conversely, if you have a computable sequence of approximations that converge to a length, then clearly the limit is computable.</p>
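For a concrete computable number, the digit-by-digit procedure mentioned above is easy to realize with exact integer arithmetic (our illustration; each prefix is a rational, hence constructible, approximation):

```python
from math import isqrt

def sqrt2_prefix(n):
    # the leading digit plus n decimal digits of sqrt(2); every prefix
    # is a rational approximation accurate to that digit
    return str(isqrt(2 * 10 ** (2 * n)))[: n + 1]

assert sqrt2_prefix(10) == "14142135623"
print(sqrt2_prefix(10))
```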
925,140
<p>$$f(x)=\frac { x }{ x+4 } $$</p> <p>I am not sure how to go about solving this but here is what I have done so far:</p> <p>$$y=\frac { x }{ x+4 } $$</p> <p>$$(x+4)y=\frac { x }{ x+4 } (x+4)$$</p> <p>$$yx+4y=x$$</p> <p>I feel stuck now. Where do I go from here?</p>
Adriano
76,987
<p>Bring the terms containing $x$ together, factor out the $x$, then divide through: \begin{align*} yx - x &amp;= -4y \\ x(y - 1) &amp;= -4y \\ x &amp;= \frac{-4y}{y - 1} \end{align*}</p>
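The resulting inverse $f^{-1}(y)=\frac{-4y}{y-1}$ (defined for $y\neq 1$, since $y=1$ is never attained) can be spot-checked numerically:

```python
def f(x):
    return x / (x + 4)

def f_inv(y):
    # the inverse derived above; undefined at y = 1
    return -4 * y / (y - 1)

for y in (0.5, -2.0, 3.0):
    assert abs(f(f_inv(y)) - y) < 1e-12
```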
3,578,191
<p>Without tables or a calculator, find the value of <span class="math-container">$\displaystyle\frac{(\sqrt5 +2)^6 - (\sqrt5 - 2)^6}{8\sqrt5}$</span>.</p> <p>I do not understand how the positive/negative signs are obtained as shown in the book; is there a formula for expanding this kind of thing (what kind of expression is it, by the way?)?</p> <p><a href="https://i.stack.imgur.com/TZjZo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TZjZo.png" alt="enter image description here"></a></p> <p>This is my solution:</p> <p><span class="math-container">$\displaystyle\frac{(\sqrt5 +2)^6 - (\sqrt5 - 2)^6}{8\sqrt5}$</span></p> <p><span class="math-container">$= \displaystyle\frac{[(\sqrt5+2)^3+(\sqrt5-2)^3][(\sqrt5+2)^3-(\sqrt5-2)^3]}{8\sqrt5}$</span></p> <p><span class="math-container">$=\displaystyle\frac{(\sqrt5+2+\sqrt5-2)[(\sqrt5+2)^2\color{red}{+}(\sqrt5+2)(\sqrt5-2)+(\sqrt5-2)^2](\sqrt5+2-\sqrt5+2)[(\sqrt5+2)^2\color{red}{-}(\sqrt5+2)(\sqrt5-2)+(\sqrt5-2)^2]}{8\sqrt5}$</span></p> <p><span class="math-container">$=\displaystyle\frac{[2\sqrt5(5+4\sqrt5+4+\color{red}{5-4}+5-4\sqrt5+4][4(5+4\sqrt5+4\color{red}{-(5-4)}+(5-4\sqrt5+4)]}{8\sqrt5}$</span></p> <p><span class="math-container">$=\displaystyle\frac{2584\sqrt5}{8\sqrt5}$</span></p> <p><span class="math-container">$=323$</span></p> <p>Because of the multiplication, I still got the same answer as given in the book. However, is the book correct, or am I, in terms of the positive/negative signs (in red)?</p>
David Diaz
431,789
<p>The book is correct. Notice the signs in the identities: <span class="math-container">$$a^3 + b^3 = (a+b)(a^2 - ab + b^2)$$</span> <span class="math-container">$$a^3 - b^3 = (a-b)(a^2 + ab + b^2)$$</span> Let <span class="math-container">$a = (\sqrt{5}+2)^2$</span> and <span class="math-container">$b = (\sqrt{5}-2)^2$</span> and plug in to the second formula to recover your equation.</p> <p>Your arithmetic happened to work by the lucky circumstance of <span class="math-container">$(18+1)(18-1)$</span> equalling <span class="math-container">$(18-1)(18+1)$</span> </p>
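A quick floating-point spot-check of the book's value $323$:

```python
# numeric check that the expression really evaluates to 323
s = 5 ** 0.5
val = ((s + 2) ** 6 - (s - 2) ** 6) / (8 * s)
assert abs(val - 323) < 1e-9
print(round(val))  # 323
```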
3,691,692
<p>Find all real values of a such that <span class="math-container">$x^2+(a+i)x-5i=0$</span> has at least one real solution. </p> <p><span class="math-container">$$x^2+(a+i)x-5i=0$$</span></p> <p>I have tried two ways of solving this and cannot seem to find a real solution.</p> <p>First if I just solve for <span class="math-container">$a$</span>, I get <span class="math-container">$$a=-x+i\frac{5-x}{x}$$</span> Which is a complex solution, not a real solution...</p> <p>Then I tried using the fact that <span class="math-container">$x^2+(a+i)x-5i=0$</span> is in quadratic form of <span class="math-container">$x^2+px+q=0$</span> with <span class="math-container">$p=(a+i)$</span> and <span class="math-container">$q=-5i$</span></p> <p>So I transform <span class="math-container">$$x^2+(a+i)x-5i=0$$</span> to <span class="math-container">$$(x+\frac{a+i}{2})^2=(\frac{a+i}{2})^2+5i$$</span></p> <p>Now it is in the form that one side is the square of the other but I don't know how to find the roots since I'm not sure if I'm supposed to convert <span class="math-container">$(\frac{a+i}{2})^2+5i$</span> to polar form since I can't take the modulus of <span class="math-container">$(\frac{a+i}{2})^2+5i$</span> (or at least I don't know how).</p> <p>At this point I feel like I'm just using the wrong method; if anyone could guide me in the right direction I would very much appreciate it. Thank you. </p>
Michael Rozenberg
190,319
<p>Now, <span class="math-container">$$\frac{(ad+bc)^2}{abcd}-4=\frac{a^2d^2-2abcd+b^2c^2}{abcd}=\frac{(ad-bc)^2}{abcd}\geq0.$$</span>Also, by C-S <span class="math-container">$$\left(\dfrac{b}{a}+\dfrac{d}{c}\right)\cdot\left(\dfrac{a}{b}+\dfrac{c}{d}\right)\geq\left(\sqrt{\frac{b}{a}\cdot\frac{a}{b}}+\sqrt{\frac{d}{c}\cdot\frac{c}{d}}\right)^2=4$$</span></p>
3,691,692
<p>Find all real values of <span class="math-container">$a$</span> such that <span class="math-container">$x^2+(a+i)x-5i=0$</span> has at least one real solution.</p> <p><span class="math-container">$$x^2+(a+i)x-5i=0$$</span></p> <p>I have tried two ways of solving this and cannot seem to find a real solution.</p> <p>First, if I just solve for <span class="math-container">$a$</span>, I get <span class="math-container">$$a=-x+i\frac{5-x}{x},$$</span> which is a complex expression, not a real one...</p> <p>Then I tried using the fact that <span class="math-container">$x^2+(a+i)x-5i=0$</span> is in the quadratic form <span class="math-container">$x^2+px+q=0$</span> with <span class="math-container">$p=(a+i)$</span> and <span class="math-container">$q=-5i$</span>.</p> <p>So I transform <span class="math-container">$$x^2+(a+i)x-5i=0$$</span> to <span class="math-container">$$\left(x+\frac{a+i}{2}\right)^2=\left(\frac{a+i}{2}\right)^2+5i.$$</span></p> <p>Now it is in the form where one side is the square of the other, but I don't know how to find the roots, since I'm not sure if I'm supposed to convert <span class="math-container">$\left(\frac{a+i}{2}\right)^2+5i$</span> to polar form, and I can't take the modulus of <span class="math-container">$\left(\frac{a+i}{2}\right)^2+5i$</span> (or at least I don't know how).</p> <p>At this point I feel like I'm just using the wrong method. If anyone could guide me in the right direction I would very much appreciate it. Thank you.</p>
Michael Hoppe
93,935
<p>Let <span class="math-container">$x=a/b$</span>, <span class="math-container">$y=c/d$</span>, you'll get <span class="math-container">$2+x/y+y/x$</span>. Now use that for any positive number the sum of that number and its reciprocal is at least <span class="math-container">$2$</span>.</p>
2,079,822
<p>I am asked to find the maximum velocity of a mass. </p> <p>I know that the equation for maximum acceleration is </p> <p>$$a = w^2A$$</p> <p>However I do not know how to find the maximum velocity. Is velocity just the same as acceleration? </p>
eyeballfrog
395,748
<p>Velocity is the rate of change of position. It's pretty much just the speed of the object, with a little extra structure to keep track of the direction it's moving.</p> <p>Acceleration is the rate of change of the velocity. It incorporates both the object speeding up and slowing down, and the object turning to move in a different direction.</p> <p>Now, from the form of the acceleration equation you gave, I'm guessing this is a mass on a spring problem. In that case, the maximum velocity can be found from either differentiating the mass's position function (which should be $x = A\sin(\omega t + \phi)$) or from conservation of energy ($m\omega^2x^2/2 + mv^2/2= m\omega^2A^2/2$).</p>
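Assuming the standard mass-on-spring setup suggested above (so the position is $x = A\sin(\omega t + \phi)$, with $w$ in the question standing for the angular frequency $\omega$), the maximum velocity falls out of one differentiation:

```latex
x(t) = A\sin(\omega t + \phi)
\;\Longrightarrow\;
v(t) = \dot{x}(t) = A\omega\cos(\omega t + \phi)
\;\Longrightarrow\;
v_{\max} = \omega A .
```

Differentiating once more gives $a(t) = -A\omega^{2}\sin(\omega t + \phi)$, which recovers the quoted $a_{\max} = \omega^{2}A$.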
1,893,540
<p>I've been asked to prove the following, if $x - ε ≤ y$ for all $ε&gt;0$ then $x ≤ y$. I tried proof by contrapositive, but I keep having trouble choosing the right $ε$. Can you guys help me out? </p>
DeepSea
101,504
<p>Suppose $x &gt; y$. Then $\epsilon = x-y &gt; 0$ and $x = y + \epsilon &gt; y + \dfrac{\epsilon}{2}$, i.e. $x - \dfrac{\epsilon}{2} &gt; y$, which contradicts the hypothesis applied with $\dfrac{\epsilon}{2}$.</p>
3,597,301
<p>We know that the formula for finding the mode of grouped data is</p> <p>Mode = <span class="math-container">$l+\frac{(f_1-f_0)}{(2f_1-f_0-f_2)}\cdot h$</span></p> <p>where <span class="math-container">$f_0$</span> is the frequency of the class preceding the modal class and <span class="math-container">$f_2$</span> is the frequency of the class succeeding it. But how do we calculate the mode when there is no class preceding or succeeding the modal class?</p>
BruceET
221,800
<p>Here is an elementary example of the use of a density estimator in R.</p> <p>First we generate a thousand observations from the gamma distribution <span class="math-container">$\mathsf{Gamma}(\mathsf{shape}=\alpha=2, \mathsf{rate} = \lambda = 1/3)$</span> and plot their histogram in such a way that the 'modal bin' includes the smallest values.</p> <pre><code>set.seed(327) x = rgamma(1000, 2, 1/3) hist(x, prob=T, br=7, col="skyblue2") </code></pre> <p><a href="https://i.stack.imgur.com/ZDdTU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZDdTU.png" alt="enter image description here"></a></p> <p>Then we find the default density estimator in R. It consists of 512 points. Plotting them imitates a smooth curve.</p> <pre><code>den.est = density(x) hist(x, prob=T, br=7, ylim=c(0,.15),col="skyblue2") lines(den.est, type="l", lwd=2, col="red") </code></pre> <p><a href="https://i.stack.imgur.com/N5WMp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/N5WMp.png" alt="enter image description here"></a></p> <p>Here is a summary of the <span class="math-container">$(x,y)$</span> points of the density estimator. We can use these points to find where the estimated density curve is at its highest points. Thus, we can locate the 'mode' of the data, as defined by the density estimator. For our simulated data it's about <span class="math-container">$3.65.$</span> (We take the 'mean' of x's with highest y-value because there may be ties.)</p> <pre><code>den.est Call: density.default(x = x) Data: x (1000 obs.); Bandwidth 'bw' = 0.8625 x y Min. :-2.480 Min. :6.260e-06 1st Qu.: 4.383 1st Qu.:2.507e-03 Median :11.247 Median :1.828e-02 Mean :11.247 Mean :3.639e-02 3rd Qu.:18.110 3rd Qu.:6.596e-02 Max. :24.974 Max. :1.203e-01 mean(den.est$x[den.est$y == max(den.est$y)]) [1] 3.644313 </code></pre> <p>Usually the point of finding the mode of a histogram is to estimate the mode of the population distribution. 
We did pretty well in this example: The gamma distribution <span class="math-container">$\mathsf{Gamma}(\alpha=2,\lambda=1/3),$</span> from which we simulated the data has its mode at <span class="math-container">$(\alpha-1)/\lambda = 1/(1/3) = 3.$</span></p> <p><em>Note:</em> By way of full disclosure: (1) With as many as <span class="math-container">$n = 1000$</span> observations, we might have used more bins in our original histogram so that the traditional formula could be used. Here is a frequency histogram of the data with more bins. (I will leave it to you to see what value the traditional method gives.)</p> <pre><code>hist(x, ylim=c(0,260), labels=T, col="skyblue2") </code></pre> <p><a href="https://i.stack.imgur.com/3W2jq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3W2jq.png" alt="enter image description here"></a></p> <p>(2) Also, if the population distribution has its mode at one end of its support, a modification of the default kernel density estimator in R may be required for a good estimate of the mode. (An exponential distribution, with its 'mode' at <span class="math-container">$0,$</span> would be an example.)</p>
1,902,455
<p>$x=e^t$, $y=te^{-t}$</p> <p>$\frac{dy}{dx}= \frac{e^{-t}(1-t)}{e^{t}}$</p> <p>$\frac{d^2y}{dx^2}= \frac{\frac{dy}{dx}}{\frac{dx}{dt}}= \frac{e^{-t}(1-t)}{e^t}$</p> <p>I don't know why it's giving me this trouble. I entered these answers into my homework system and it said it could not understand my answers and they could not be graded. Also, it asked for the interval over which the curve is concave upward, and if I'm not mistaken, from my work it would be from 0 to infinity.</p>
Community
-1
<p>A substitution of $1+y^2$ or the like is tempting, but the derivative produces a $y$ term which needs removal by a trig substitution. Thus a substitution involving both $y^2$ and $y$ is needed. The only way I can see is this substitution:</p> <p>$$ u= \frac {y}{1+y^2} \Rightarrow \frac {du}{dy}= \frac {1-y^2}{(1+y^2)^2} \\ \therefore I=\int \frac {du}{dy}\,dy =\int du = u+C=\frac {y}{1+y^2}+C$$</p> <p>But this is substituting the answer itself, which is trivial, as it works for any integral. The $\tan \theta $ substitution is cumbersome but the only solution I can see.</p>
1,902,455
<p>$x=e^t$, $y=te^{-t}$</p> <p>$\frac{dy}{dx}= \frac{e^{-t}(1-t)}{e^{t}}$</p> <p>$\frac{d^2y}{dx^2}= \frac{\frac{dy}{dx}}{\frac{dx}{dt}}= \frac{e^{-t}(1-t)}{e^t}$</p> <p>I don't know why it's giving me this trouble. I entered these answers into my homework system and it said it could not understand my answers and they could not be graded. Also, it asked for the interval over which the curve is concave upward, and if I'm not mistaken, from my work it would be from 0 to infinity.</p>
Jack Tiger Lam
186,030
<p>Divide throughout the fraction by $y^2$ to obtain:</p> <p>$$\displaystyle \int \frac{\frac{1}{y^2} -1}{\left(y+\frac{1}{y}\right)^2} \text{d}y = \int \frac{-\text{d}\left(y+\frac{1}{y}\right)}{\left(y+\frac{1}{y}\right)^2}$$</p> <p>Which yields immediately to the reverse chain rule.</p>
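To spell out the last step (with $u = y + \frac{1}{y}$, so the integral is $\int -\,du/u^{2}$):

```latex
\int \frac{-\,du}{u^{2}} \;=\; \frac{1}{u} + C
\;=\; \frac{1}{y + \frac{1}{y}} + C
\;=\; \frac{y}{y^{2}+1} + C .
```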
2,435
<p>I'm not sure we already have something similar, but I'm working on more code inspections for the IntelliJ plugin and it's always a good idea to ask the community. Since it doesn't really fit on main, I'm posting it here on Meta.</p> <p>Linting is an excellent way to point the developer to probable errors that he might have overlooked. With a dynamic language like the one of Mathematica, we are a bit restricted with what we can do, since we cannot evaluate code and since most things require evaluation to be sure if they are a bug or not. Nevertheless, there are checks we can do. For instance <code>If[a=b, ..]</code> is most likely a bug and even if the developer knew what he did, it is a bad style.</p> <p>There are trickier examples like <code>If[a&lt;5,...]</code>. This looks okay but knowing that <code>a&lt;5</code> stays unevaluated if the comparison cannot be done, it is a source of error because you end up with the unevaluated <code>If</code> expression in your wrong result and debugging might be complicated.</p> <p>In both examples, wrapping <code>TrueQ</code> around the condition resolves the issue and although there might still be a bug, at least you can be sure your <code>If</code> expression is evaluated to some branch. Other common sources of error are, e.g. <code>x_?testFunc[#]&amp;</code> or implicit multiplication through linebreaks.</p> <p><strong>Question:</strong> What are common bugs in your code and could they have been pointed out by a linter? If you like to share your thoughts, please provide one issue per answer, so that others can vote. I'm looking forward to your suggestions and see if I can implement some of them in IntelliJ.</p> <hr> <p>Example issue: With the <a href="https://mathematica.stackexchange.com/a/176489/187">alternative layout for packages</a> which was pointed out by Leonid, we can use <em>directives</em> for a static code analyzer to easily export symbols or declare them as package symbols. 
As Leonid pointed out, the directives need to be on their own source-line with nothing else on it. So for the directives</p> <pre><code>PackageScope["myFunc"] PackageExport["MyExportedFunc"] </code></pre> <p>I implemented the following rules</p> <ol> <li>They need to be on their own source line with nothing else on it</li> <li>Their string argument must be a valid identifier</li> </ol> <p><a href="https://i.stack.imgur.com/3bO61.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3bO61.gif" alt="enter image description here"></a></p>
C. E.
731
<blockquote> <h1>Status Completed</h1> </blockquote> <p>It would be neat if IntelliJ could detect this precedence issue:</p> <pre><code>Plot3D[Sin[x y], {x, 0, 3}, {y, 0, 3}, ColorFunction -&gt; Hue[#3] &amp;] </code></pre> <p>it should be</p> <pre><code>Plot3D[Sin[x y], {x, 0, 3}, {y, 0, 3}, ColorFunction -&gt; (Hue[#3] &amp;)] </code></pre> <p>This may not be as much of a problem in an IDE as in a notebook, but nevertheless:</p> <pre><code>{1, 2, 3}/.2 -&gt; 3 </code></pre> <p>There is a missing space between <code>/.</code> and <code>2</code>.</p> <h2>Comment halirutan:</h2> <p>Excellent suggestions and I implemented both. Anonymous function in IntelliJ are <em>italic</em> and it is easier to see the scope, but I included this inspection anyway</p> <p><img src="https://i.imgur.com/KDBnwKp.png" alt="function"></p> <p>The second inspection has some corner-cases, but a basic version is also available now</p> <p><img src="https://i.imgur.com/LNMD4AL.png" alt="replace"></p>
86,755
<p>I need to solve the following integrals for a problem I'm working on - </p> <p>$\displaystyle \frac{-i}{2 \pi}$ $\int_{-a}^{a} \mathrm{dt}\,\, \frac{e^{i k t}}{t + i \tau}$ and $\displaystyle \frac{-i}{2 \pi}$ $\int_{a}^{\infty} \mathrm{dt}\,\, \frac{e^{i k t}}{t + i \tau}$</p> <p>where $\tau, k \in \mathbb{R}$.</p> <p><em>Mathematica</em> can do these numerically and I get sensible plots as a function of $\tau$ for given $a$ and $k$, where by sensible I mean that the integrals go to 0 as $\tau \to \pm \infty$. </p> <p>However, I'd like to find an analytic expression for these, even if it is in terms of exponential integrals, etc. When I try to do this, <em>Mathematica</em> spits out an answer that agrees with the numerical integration at small $\tau$, but at large $\tau$ the analytic expressions break down. </p> <p>For concreteness, suppose I take $\frac{-i}{2 \pi}$ $\int_{a}^{\infty} \mathrm{dt}\,\, \frac{e^{i k t}}{t + i \tau}$ with $k = 1$ and $a = 1$. </p> <p>Doing this integral numerically and plotting the real part vs $\tau$,</p> <pre><code>Plot[Re[-I/(2 π) NIntegrate[E^(I t)/( t + I τ), {t, 1, ∞}]], {τ, -200, 200}, PlotRange -&gt; All] </code></pre> <p><img src="https://i.stack.imgur.com/rMeKZ.jpg" alt="Numerically_Integrated_Plot"></p> <p>But now, suppose I do this integral analytically.</p> <pre><code>Integrate[-I/(2 π) E^(I t)/(t + I τ), {t, 1, ∞}] </code></pre> <p>This gives an output</p> <pre><code> (E^τ (π + 2 I CosIntegral[1 + I τ] + 2 I SinhIntegral[I - τ]))/(4 π) </code></pre> <p>and when I plot the real part of this, the result looks like - </p> <p><img src="https://i.stack.imgur.com/jDOyd.jpg" alt="Analytically_Generated_Plot"></p> <p>I can't really understand why this breaks down and would like to have analytic expressions for both of those integrals for any values of $\tau, k$ and $a$.</p>
glS
27,539
<p>This is due to the sum of two very large numbers (coming from <code>CosIntegral</code> and <code>SinhIntegral</code>) being carried out without sufficient machine precision used to represent them. You can fix it giving an appropriate value of <code>WorkingPrecision</code> as an option to plot.</p> <p>You can see quite clearly that the problem comes from this by plotting the two functions (the one coming from the numerical integration and the other from the numerical evaluation of the symbolic integration) at varying values of <code>WorkingPrecision</code>. The lesser <code>WorkingPrecision</code> is the sooner the problem arises:</p> <pre><code>GraphicsRow@Table[ Plot[ { Re[E^\[Tau] (\[Pi] + 2 I CosIntegral[1 + I \[Tau]] + 2 I SinhIntegral[I - \[Tau]])]/(4 \[Pi]), Re[-I/(2 \[Pi]) NIntegrate[ E^(I t)/(t + I \[Tau]), {t, 1, \[Infinity]}]] }, {\[Tau], 0, 100}, PlotRange -&gt; All, WorkingPrecision -&gt; wp, ImageSize -&gt; Medium ], {wp, {20, 40, 80}} ]~Monitor~{wp} </code></pre> <p><img src="https://i.stack.imgur.com/SIhLl.png" alt="enter image description here"></p> <h2>EDIT1: How to obtain a specific value with wanted precision</h2> <p>To obtain a correct particular value you can try enforcing the required precision of the result with the second argument of <code>N</code>. In this case even small values of this parameter seem to work, probably because Mathematica automatically uses the required precision during the computation to obtain the requested one at the end:</p> <pre><code>f[\[Tau]_] := Re[E^\[Tau] (\[Pi] + 2 I CosIntegral[1 + I \[Tau]] + 2 I SinhIntegral[I - \[Tau]])]/(4 \[Pi]) N[f[100]] (* Out=6.72029*10^42 *) N[f[100], 2] (* Out=0.0013 *) </code></pre> <p>Indeed, you can use this approach to obtain a more reliable plot. 
It may be necessary to modify the value of <code>$MaxExtraPrecision</code> to avoid the 50 digits internal default limit of Mathematica:</p> <pre><code>f[\[Tau]_] := Re[E^\[Tau] (\[Pi] + 2 I CosIntegral[1 + I \[Tau]] + 2 I SinhIntegral[I - \[Tau]])]/(4 \[Pi]) Block[{$MaxExtraPrecision = 1000}, Show[ ListLinePlot[ Table[ {\[Tau], N[f[\[Tau]], 4]}, {\[Tau], 0, 200} ], PlotStyle -&gt; {Thick, Green}, PlotRange -&gt; All ], Plot[ Re[-I/(2 \[Pi]) NIntegrate[ E^(I t)/(t + I \[Tau]), {t, 1, \[Infinity]}]], {\[Tau], 0, 200}, PlotRange -&gt; All, ImageSize -&gt; Large, PlotStyle -&gt; {Red, Dashed} ] ] ] </code></pre> <p><img src="https://i.stack.imgur.com/NYorZ.png" alt="enter image description here"></p> <h2>EDIT 2: How to reliably evaluate the function at non-integer values of $k$</h2> <p>If more in general we are interested in evaluating the function at other values of $k$ other than 1, we must be careful in the way we give the value of $k$. If we use a value like <code>k=-1.2</code> in the definition of $f$, Mathematica will not be able to use its arbitrary precision engine, as well explained for example in <a href="https://mathematica.stackexchange.com/a/3153/27539">this answer</a>. A way around this is to <code>Rationalize</code> the value of $k$ before the evaluation of $N$. Here is a working example of the correct evaluation of $f(\tau,k=-1.2)$ for $\tau=1,...,40$:</p> <pre><code>f[\[Tau]_, k_] := f[\[Tau], k] = Re@Integrate[-I/(2 \[Pi]) E^(I k t)/(t + I \[Tau]), {t, 1, \[Infinity]}, Assumptions -&gt; {\[Tau] \[Element] Reals, k \[Element] Reals}] ListLinePlot[ Table[ {\[Tau], N[f[\[Tau], Rationalize[-1.2]], {Infinity, 4}]}, {\[Tau], 0, 40} ], PlotRange -&gt; All, ImageSize -&gt; Large ]~Monitor~{\[Tau]} </code></pre> <p>Note that this may take a while to evaluate because the larger $\tau$ gets the more digits Mathematica has to use to correctly compute the result. 
The numerical integration is definitely faster in these cases.</p> <p>You can also use this approach with <code>Plot</code>, if you are very patient. A way to get the feel of the hardness of such a computation without having to wait a very long time to get the complete result is for example using the <code>dynamicPlot</code> function given in <a href="https://mathematica.stackexchange.com/a/85325/27539">this answer</a>, which allows to see the plot while it's being drawn. To do this evaluate the function <code>dynamicPlot</code> in the linked answer and then use the following code:</p> <pre><code>f[\[Tau]_, k_] := f[\[Tau], k] = Re@Integrate[-I/(2 \[Pi]) E^(I k t)/(t + I \[Tau]), {t, 1, \[Infinity]}, Assumptions -&gt; {\[Tau] \[Element] Reals, k \[Element] Reals}] g[\[Tau]_] := N[f[\[Tau], Rationalize[-1.2]], {Infinity, 4}] dynamicPlot[ g, {x, 0, 40}, PlotRange -&gt; All, ImageSize -&gt; Large, WorkingPrecision -&gt; Infinity ] </code></pre>
957,400
<p>S: Every employee who is honest and persistent is successful or bored.</p> <p>Would this statement be the negation, converse, or contrapositive of S?</p> <p>-> All employees who are dishonest or not persistent must be unsuccessful and not bored.</p>
Platehead
29,459
<p>Sorry if my notation is unfamiliar.</p> <p>Write \begin{align*} Hx &amp;= \text{$x$ is honest}\\ Px &amp;= \text{$x$ is persistent}\\ Sx &amp;= \text{$x$ is successful}\\ Bx &amp;= \text{$x$ is bored} \end{align*}</p> <p>Then $S$ can be written $\forall x (Hx \land Px) \to (Sx \lor Bx)$.</p> <p>The next statement can be written $\forall x (\neg Hx \lor \neg Px) \to (\neg Sx \land \neg Bx)$.</p> <p>Can you apply De Morgan's laws to make sense of that?</p>
4,515,488
<p>I am making a computer program to play cards; for this algorithm to work I need to deal cards out randomly. However, I know that some people cannot have some cards due to the rules of the card game.</p> <p>To elaborate on this, imagine we have 3 players: <em>a</em>, <em>b</em> and <em>c</em>. Also, there are 4 cards left to divide: 1, 2, 3 and 4. I know from <em>c</em> that he cannot have card 1 or card 2, and I know from player <em>a</em> that he cannot have card 3. Each player is listed below with their corresponding possible cards.</p> <pre><code>a b c 1 1 2 2 3 3 4 4 4 </code></pre> <p>Also, I know that player <em>a</em> must receive 1 card, player <em>b</em> must receive 2 and player <em>c</em> must receive 1 card (for a total of 4 cards). Note that it does not matter in which order a player receives his cards.</p> <p><strong>My Question:</strong> Is there some algorithm that can deal the cards out randomly (with arbitrary amounts of cards and 3 players) such that each possible deal is equally likely?</p> <p><strong>My attempts at solving this problem:</strong> First, I listed each possible solution (note that switching the cards around in the middle column doesn't influence the relative chances of the possibilities).</p> <pre><code>a b c 1 2 3 4 1 2 4 3 2 1 3 4 2 1 4 3 4 1 2 3 </code></pre> <p>And then I noticed that every algorithm that I could think of did not sample the above possibilities with equal chances. I could, however, come up with one algorithm, which is to generate a random partition of 1, 2, 3 and 4 and check if it is valid. If it's not, I generate a random partition again and check it, and so on. Although this gives me a random sample where all options are equally likely, as we move on to more and more cards this algorithm takes up a lot of time. I have had no success in finding speedups or different algorithms to solve this problem.</p>
kodlu
66,512
<p>Since each card <em>must</em> be dealt, this can be done recursively.</p> <p>Let <span class="math-container">$P_i=\{x:$</span> x is a player that can be dealt card <span class="math-container">$i\},$</span> and <span class="math-container">$C_a=\{i:$</span> i is a card that can go to player <span class="math-container">$a\}.$</span> Also <span class="math-container">$N_a$</span> is the number of cards that player <span class="math-container">$a$</span> must receive. Here we have <span class="math-container">$P_1=\{a,b\}=P_2, P_3=\{b,c\}, P_4=\{a,b,c\},$</span> and <span class="math-container">$C_a=\{1,2,4\},C_b=\{1,2,3,4\}, C_c=\{3,4\},$</span> with <span class="math-container">$N_a=1,N_b=2,N_c=1.$</span></p> <p>List the set of <em>allowed</em> player-card pairs:</p> <p><span class="math-container">$L=\{(1,a),(2,a),(4,a),(1,b),(2,b),(3,b),(4,b),(3,c),(4,c)\}$</span></p> <p>Draw from list <span class="math-container">$L$</span> uniformly, say you get <span class="math-container">$(2,b)$</span>. 
Give card 2 to player <span class="math-container">$b.$</span> Remove all cards of the form <span class="math-container">$(2,\cdot)$</span> from the list since we no longer have card 2, to get</p> <p><span class="math-container">$L=\{(1,a),(4,a),(1,b),(3,b),(4,b),(3,c),(4,c)\}$</span></p> <p>Since player <span class="math-container">$b$</span> was given a card, reduce <span class="math-container">$N_b$</span> by 1 to <span class="math-container">$N_b=1.$</span> If <span class="math-container">$N_b$</span> had become zero, you would have removed all pairs with <span class="math-container">$b$</span> in the second coordinate from the list as well since Player <span class="math-container">$b$</span> would have got the number of required cards.</p> <p>Continue with 3 more iterations, since you had 4 cards in total to deal.</p> <p>The algorithm uniformly chooses pairs from the currently admissible set of pairs, recursively updating admissible sets, by the properties of conditional probability it samples the admissible initial space uniformly.</p>
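The procedure above can be sketched in Python (names like `deal`, `allowed`, and `quota` are my own; the answer itself gives no code). One caveat: if a sequence of draws paints itself into a corner, this sketch simply reports failure and the caller retries.

```python
import random

def deal(allowed, quota, rng):
    """One pass of the recursive scheme: repeatedly draw a uniform
    (card, player) pair from the admissible list, then prune it."""
    quota = dict(quota)
    pairs = [(c, p) for c, players in allowed.items() for p in players]
    hands = {p: [] for p in quota}
    while pairs:
        card, player = rng.choice(pairs)      # uniform over admissible pairs
        hands[player].append(card)
        quota[player] -= 1
        pairs = [(c, p) for (c, p) in pairs if c != card]
        if quota[player] == 0:                # player is full: drop their pairs
            pairs = [(c, p) for (c, p) in pairs if p != player]
    if any(q != 0 for q in quota.values()):   # dead end; caller may retry
        return None
    return hands

# The example from the question: P_1 = P_2 = {a,b}, P_3 = {b,c}, P_4 = {a,b,c}
allowed = {1: {'a', 'b'}, 2: {'a', 'b'}, 3: {'b', 'c'}, 4: {'a', 'b', 'c'}}
quota = {'a': 1, 'b': 2, 'c': 1}
rng = random.Random(0)
hands = None
while hands is None:
    hands = deal(allowed, quota, rng)
print(hands)
```

Whether this samples the five possible deals uniformly is exactly the answer's conditional-probability claim; the sketch only implements the stated procedure.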
1,719,568
<p>Can we say that $k$ grows faster than $\sqrt{k}$ when the terms are large? And what is the formal way to write it?</p>
Henricus V.
239,207
<p>We can restrict $k &gt; 4$, then it suffices to show that $$ \lim_{k \to \infty} (k - 2\sqrt{k}) = \infty $$ Using the fact that $2\sqrt{k} \in o(k)$, $$ \lim_{k \to \infty} (k - 2\sqrt{k}) = \lim_{k \to \infty} k = \infty $$</p>
1,719,568
<p>Can we say that $k$ grows faster than $\sqrt{k}$ when the terms are large? And what is the formal way to write it?</p>
Pedro
70,305
<p>Note that, for all positive integer $k$, $$\frac{1}{k-2\sqrt{k}}=\frac{\frac{1}{k}}{1-2\frac{\sqrt{k}}{k}}=\frac{\frac{1}{k}}{1-2\frac{1}{\sqrt{k}}}$$</p> <p>Thus, if you know that $$\lim_{k\to\infty}\frac{1}{k}=0\quad\text{and}\quad\lim_{k\to\infty}\frac{1}{\sqrt{k}}=0,$$ then you can conclude that $$\lim_{k\to\infty}\frac{1}{k-2\sqrt{k}}=\frac{0}{1-2\cdot 0}=0$$</p>
91,739
<p>I have 2 groups: </p> <ul> <li>the general linear group of $ k \times k $ matrices under $\cdot$</li> <li>the group of upper-triangular $ n \times n $ matrices with 1 on the main diagonal, also under $\cdot$</li> </ul> <p>Is there an isomorphism for any non-trivial $n,k$, i.e. $n \neq 2$ or $k \neq 1$, over $\mathbb{R}$ or $\mathbb{Q}$?</p> <p>If not, how can I prove it?</p>
Bill Cook
16,423
<p>Upper-triangular matrices form solvable groups, general linear groups are not solvable (for $k>1$). Thus they cannot be isomorphic.</p>
1,559,485
<p>Suppose $\sup_{x \in \mathbb{R}} f'(x) \le M$.</p> <p>I am trying to show that this is true if and only if $$\frac{f(x) - f(y)}{x - y} \le M$$</p> <p>for all $x, y \in \mathbb{R}$.</p> <p><strong>Proof</strong></p> <p>$\text{sup}_{x \in \mathbb{R}} f'(x) \le M$</p> <p>$f'(x) \le M$ for all $x \in \mathbb{R}$</p> <p>$\lim_{y \to x} \frac{f(y) - f(x)}{y - x} \le M$</p> <p>$\lim_{y \to x} \frac{f(x) - f(y)}{x - y} \le M$</p> <p>I can see geometrically why this property holds, but how do I get rid of the limit here? Or am I approaching it wrong in general?</p>
Justpassingby
293,332
<p>The 'if' part follows from the definition of the derivative as a limit: if some expression is always less than or equal to $M,$ and the limit exists, then the limit also satisfies that inequality. That, in turn, follows from the epsilon-delta definition of a limit.</p> <p>The 'only if' part is the really interesting part. As commenters have pointed out, it is (a direct consequence of) the Mean Value Theorem.</p>
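For the record, here is the 'only if' direction written out, assuming $f$ is differentiable on all of $\mathbb{R}$: for $x \neq y$ the Mean Value Theorem gives a point $c$ between $x$ and $y$ with

```latex
\frac{f(x)-f(y)}{x-y} \;=\; f'(c) \;\le\; \sup_{t \in \mathbb{R}} f'(t) \;\le\; M .
```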
434,290
<p>According to <a href="http://arxiv.org/abs/0910.5922" rel="nofollow">equation 4</a>, $$\phi(0,t)= \frac{A_0}{(1+\frac{2t^2}{R^4})^{3/4}}\cos \left(\sqrt2 t+ \frac{3}{2}\tan^{-1}\left[\frac{\sqrt2 t}{R^2}\right]\right)\tag{1}$$ what conditions make $$\cos \left(\sqrt2 t+ \frac{3}{2}\tan^{-1}\left[\frac{\sqrt2 t}{R^2}\right]\right)=1,$$ so that equation (1) becomes</p> <p>$$\phi(0,t)= \frac{A_0}{(1+\frac{2t^2}{R^4})^{3/4}}$$ The author used the <a href="http://arxiv.org/abs/hep-ph/9503217" rel="nofollow">article reference</a> to establish the equation $$\frac{1}{2} \Gamma_{lin}= \frac{1}{\tau_{linear}} \approx \frac{1.196}{\omega_{mass}} \approx \frac{.846}{R^2}$$ but I did not find the argument there; can you explain this a bit, please?</p>
Hanul Jeon
53,976
<p>Substitute $t=1/(1+x^4)$ then we get $$\int_{0}^{\infty }\frac {\ln x}{x^4+1}\ dx =\frac{1}{16}\int_0^1 \ln\left(\frac{1-t}{t}\right)(1-t)^{-3/4}t^{-1/4}dt.$$ And $\mathrm{B}(x,y)=\Gamma(x)\Gamma(y)/\Gamma(x+y)$, we get $$\frac{\partial}{\partial x}\mathrm{B}(x,y)=\mathrm{B}(x,y)[\psi(x)-\psi(x+y)]$$ where $\psi$ is <a href="http://en.wikipedia.org/wiki/Digamma_function" rel="nofollow">digamma function</a>. And by Euler integral of the first kind we get $$\frac{\partial}{\partial x}\mathrm{B}(x,y)=\int_0^1 \ln t\cdot t^{x-1}(1-t)^{y-1}dt.$$ So $$ \begin{array}{lcl} &amp;&amp;\frac{1}{16}\int_0^1 \ln\left(\frac{1-t}{t}\right)(1-t)^{-3/4}t^{-1/4}dt \\ &amp;=&amp;\frac{1}{16}\int_0^1 \ln(1-t) (1-t)^{-3/4} t^{-1/4}dt-\frac{1}{16}\int_0^1 \ln(t)\cdot (1-t)^{-3/4} t^{-1/4}dt\\ &amp;=&amp; \frac{1}{16}\int_0^1 \ln (t)\cdot t^{-3/4} (1-t)^{-1/4}dt -\frac{1}{16}\int_0^1 \ln(t)\cdot (1-t)^{-3/4} t^{-1/4}dt\\ &amp;=&amp;\frac{1}{16}\mathrm{B}\left(\frac{1}{4},\frac{3}{4}\right)\left[\psi\left(\frac{1}{4}\right)-\psi(1)\right]-\frac{1}{16}\mathrm{B}\left(\frac{1}{4},\frac{3}{4}\right)\left[\psi\left(\frac{3}{4}\right)-\psi(1)\right] \\ &amp;=&amp;\frac{1}{16}\mathrm{B}\left(\frac{1}{4},\frac{3}{4}\right)\left[\psi\left(\frac{1}{4}\right)-\psi\left(\frac{3}{4}\right) \right] \end{array} $$ And $\mathrm{B}\left(\frac{1}{4},\frac{3}{4}\right)\left[\psi\left(\frac{1}{4}\right)-\psi\left(\frac{3}{4}\right) \right]=-\pi^2\sqrt{2}$. (It can easily be derived from reflection formula of <a href="http://en.wikipedia.org/wiki/Gamma_function#General" rel="nofollow">gamma</a> and <a href="http://en.wikipedia.org/wiki/Digamma_function#Reflection_formula" rel="nofollow">digamma</a> function.)</p>
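Combining the last display with the factor $1/16$, the original integral evaluates to $\frac{1}{16}\left(-\pi^2\sqrt{2}\right) = -\frac{\pi^2\sqrt{2}}{16}$. As a quick sanity check (my own addition, not part of the answer), a plain Simpson rule after the substitution $x = e^u$ reproduces this numerically:

```python
import math

def integrand(u):
    # ln(x)/(1+x^4) dx with x = exp(u) becomes u*exp(u)/(1+exp(4u)) du
    return u * math.exp(u) / (1.0 + math.exp(4.0 * u))

def simpson(f, a, b, n):
    """Composite Simpson rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

value = simpson(integrand, -30.0, 30.0, 6000)   # tails beyond |u|=30 are negligible
target = -math.pi ** 2 * math.sqrt(2) / 16
print(value, target)                            # both about -0.8724
```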
680,205
<p>Milnor, Lemma 2, p. 34: "Any orientation preserving diffeomorphism $f$ of $R^m$ is smoothly homotopic to the identity."</p> <p>So he proves that $f\simeq df_0$, which he says is clearly homotopic to the identity. Can you explain to me why?</p> <p>Here I found two explanations I don't understand: 1) $Gl^{+}(m,\mathbb{R})$ is path connected. Why is $df_0 \in Gl^{+}(m,\mathbb{R})$? What prevents $df_{0}\in Gl^{-}(m,\mathbb{R})$?</p> <p>2) $df_0$ is an isomorphism everywhere and thus (why?) isotopic to the identity.</p> <p>Thanks</p>
Ted Shifrin
71,348
<p>First, $f$ is orientation-preserving. Second, $GL(n)^+$ is path-connected (e.g., use the $QR$ decomposition).</p>
1,417,286
<p>So I'm trying to learn about RSA and have come across various subtopics, including the discrete logarithm problem. </p> <p>This mentions primitive roots, which I do not understand.</p> <p>Essentially all I want is an answer in simple terms of what a primitive root actually is.</p> <p>Thanks</p>
lulu
252,071
<p>I sometimes find it helpful to think of primitive roots as akin to logarithms...that is, a way to change multiplication into addition. For example, let's consider powers of $3$ mod$(17)$. They are: $$\{3,9,10,13,5,15,11,16,14,8,7,4,12,2,6,1\}$$</p> <p>We note there are $16$ distinct values, so $3$ is indeed a primitive root mod$(17)$. We now ask, for each residue class $i$, what power of $3$ gives $i$ mod$(17)$? By inspection these "logarithms" are:</p> <p>$$\{16, 14, 1, 12, 5, 15, 11, 10, 2,3, 7, 13, 4,9,6,8\}$$</p> <p>That is to say, $$3^{16}=1,\;\;3^{14}=2,\;\; 3^1=3,\;\;...$$</p> <p>Now say you want to multiply $8$ by $13$ mod$(17)$. We read off that $8=3^{10}$ and $13=3^4$ so $8*13=3^{14}=2$.</p> <p>In this way, if you have a primitive root and you have a look up table for the "logarithms" then you can always reduce multiplication to addition. Of course, it isn't all that easy to find primitive roots.</p>
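The bookkeeping in this answer is easy to reproduce in a few lines of Python (my own illustration):

```python
p, g = 17, 3
powers = [pow(g, k, p) for k in range(1, p)]   # 3^1, 3^2, ..., 3^16 mod 17
assert len(set(powers)) == p - 1               # 16 distinct values: 3 is a primitive root

log = {pow(g, k, p): k for k in range(1, p)}   # "logarithm" lookup table
# multiply 8 * 13 mod 17 by adding exponents: 8 = 3^10, 13 = 3^4
product = pow(g, log[8] + log[13], p)
print(powers)                 # [3, 9, 10, 13, 5, 15, 11, 16, 14, 8, 7, 4, 12, 2, 6, 1]
print(product, (8 * 13) % p)  # 2 2
```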
815,195
<p>I am working on an old qualifying exam problem and I can't seem to really get anywhere. I would love some help. Thank you.</p> <p>Let $f$ be a polynomial such that $|f(z)| \le 1 - |z|^2 + |z|^{1000}$ for all $z \in \mathbb{C}$. Prove that $|f(0)| \le 0.2$.</p>
Community
-1
<p>This is a strange question to put on a complex analysis qual, since numerical estimates overshadow the complex analysis material (the maximum principle, as in the answer by Umberto P.). </p> <p>We need a value of $|z|$ such that $1-|z|^2+|z|^{1000}$ can be bounded by $1/5$. I'll take $$|z| = \sqrt{\frac{10}{11}}$$ so that the desired inequality becomes $$\frac{1}{11} + \left(\frac{10}{11}\right)^{500} \le \frac15$$ It suffices to show that $$\left(\frac{10}{11}\right)^{500} \le \frac1{10}$$ which follows from Bernoulli's inequality: $1.1^{500} &gt; 1+0.1\cdot 500 = 51$.</p>
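A quick numerical check of the two estimates (my own addition; not needed for the proof, just reassurance):

```python
r2 = 10 / 11                 # |z|^2 for the chosen radius |z| = sqrt(10/11)
bound = 1 - r2 + r2 ** 500   # |z|^1000 = (10/11)^500, astronomically small
print(bound)                 # about 0.0909..., comfortably below 1/5

assert bound <= 1 / 5        # the bound used on the circle
assert 1.1 ** 500 > 51       # Bernoulli's estimate holds with lots of room
```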
357,557
<p>I have a function: $f(x)=-\frac{4x^{3}+4x^{2}+ax-18}{2x+3}$ which has only one point of intersection with the $x$-axis.</p> <p>How can I find the value of $a$?</p> <p>I tried polynomial division and the discriminant, but it didn't help me.</p>
Asaf Karagila
622
<p>If $x=2^k$ then every prime number which divides $x$ has to be $2$. Of course that saying "every prime number" is an unbounded assertion, but luckily no prime number which is larger than $x$ can divide $x$, so we can instead write it as follows:</p> <blockquote> <p>$x=2^k$ if and only if $x\neq 0$ and for every $p&lt;x$, if $p$ is prime and $p\mid x$ then $p=2$.</p> </blockquote> <p>Now we need to verify that "$k$ is a prime number" and $k\mid n$ are both bounded statement, but that's not very hard:</p> <ol> <li>Recall that $k\mid n$ if and only if there exists $m&lt;n+1$ such that $k\cdot m=n$. Under this definition every number divides zero, and zero divides every number. Formally speaking we have the following bounded: $$k\mid n\iff\exists m&lt;s(n):k\cdot m=n.$$</li> <li>Recall that $p$ is a prime if whenever $k&lt;p$ and $k\mid p$ then $k=1$, but we also have that $0$ divides every number, so we actually have $p$ is a prime if $1&lt;p$ and for every $k&lt;p$, if $k\mid p$ then $k&lt;2$. Again we have only one quantifier and it is bounded, so we have the following formula: $$p\text{ is prime} \iff s(0)&lt;p\land\forall k&lt;p:k\mid p\rightarrow k&lt;s(s(0)).$$</li> </ol> <p>So now to combine everything together we have as follows:</p> <p>$$x=2^k\iff\lnot(x=0)\land\forall p&lt;x:p\text{ is prime}\land p\mid x\rightarrow p=s(s(0)).$$</p> <p>There is one quantifier which is bounded, over two formulas containing only bounded quantifiers themselves. Therefore the whole statement is bounded, that is to say $\Delta_0$.</p>
2,904,912
<p>$$24a(n)=26a(n-1)-9a(n-2)+a(n-3)$$ $$a(0)=46, a(1)=8, a(2)=1$$ $$\sum\limits_{k=3}^{\infty}a(k)=2^{-55}$$ How can I prove it?</p>
Deepesh Meena
470,829
<p>$$24r^3=26r^2-9r+1$$</p> <p>solutions $$r=\frac{1}{2},\frac{1}{3},\frac{1}{4}$$</p> <p>$$a_n=x\left(\frac{1}{2}\right)^n+y\left(\frac{1}{3}\right)^n+z\left(\frac{1}{4}\right)^n$$ Determine $x,y,z$ using base conditions</p> <p>$$a_0=46=x+y+z$$ $$a_1=8=x/2+y/3+z/4$$ $$a_2=1=x/4+y/9+z/16$$</p> <p>$$\fbox{ x=4, y=-54,z=96 }$$</p> <p>$$a_n=4\left(\frac{1}{2}\right)^n-54\left(\frac{1}{3}\right)^n+96\left(\frac{1}{4}\right)^n$$</p> <p>$$\sum\limits_{k=3}^{\infty}a(k)=\sum\limits_{k=3}^{\infty}\left(4\left(\frac{1}{2}\right)^k-54\left(\frac{1}{3}\right)^k+96\left(\frac{1}{4}\right)^k\right)$$</p> <p>$$\sum_{k=3}^{\infty}4\left(\frac{1}{2}\right)^k=4\sum_{k=3}^{\infty}\left(\frac{1}{2}\right)^k=4\cdot\frac{\left(\frac{1}{2}\right)^3}{1-\frac{1}{2}}=1$$</p> <p>$$54\sum_{k=3}^{\infty}\left(\frac{1}{3}\right)^k=54\frac{\left(\frac{1}{3}\right)^3}{1-\frac{1}{3}}=3$$ $$96\sum_{k=3}^{\infty}\left(\frac{1}{4}\right)^k=96\cdot\frac{\left(\frac{1}{4}\right)^3}{1-\frac{1}{4}}=2$$</p> <p>$$\sum\limits_{k=3}^{\infty}a_k=1-3+2=0$$</p>
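<p>A quick numerical check of both the closed form and the value of the series (illustrative Python; the tail beyond the computed terms is a negligible geometric remainder):</p>

```python
# 24 a(n) = 26 a(n-1) - 9 a(n-2) + a(n-3), with a(0)=46, a(1)=8, a(2)=1.
a = [46.0, 8.0, 1.0]
for n in range(3, 60):
    a.append((26 * a[-1] - 9 * a[-2] + a[-3]) / 24)

closed = [4 * (1 / 2) ** n - 54 * (1 / 3) ** n + 96 * (1 / 4) ** n
          for n in range(60)]
max_err = max(abs(u - v) for u, v in zip(a, closed))
partial_sum = sum(a[3:])  # should be ~0, matching 1 - 3 + 2 = 0

print(max_err, partial_sum)
```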
1,392,858
<p>Is is known that the space of symmetric matrices $\mathbb{R}_{sym}^{n \times n}$ has $\binom{n}{2}$ dimensions.</p> <p>And according to the spectral theorem every symmetric matrix $A \in \mathbb{R}_{sym}^{n \times n}$ has a spectral decomposition in terms of 1-rank matrices.</p> <p>A = $\sum_{i=1}^n \lambda_i v_i v_i^T $</p> <p>Hence we conclude the dimension space of the symmetric matrices is $n$.</p> <p>Where is the fallacy of this reasoning ?</p> <p>Thanks in advance. </p>
Eugene Zhang
215,082
<p>The linear subspace of symmetric matrices is actually of dimension $n(n+1)/2$. The linear subspace of diagonal matrices is of dimension $n$. The similarity transformation (spectral decomposition) maps the $n(n+1)/2$-dimensional space to the $n$-dimensional one. You have confused 2 different spaces with a single space. </p>
48,077
<p>First, I'm a beginner.</p>

<p>I can compute the sum of roots with the following:</p>

<pre><code>Roots[x^7 + 5 x^6 + x^5 + x + 1 == 0, x]
Plus @@ (x /. {ToRules[%]}) // Simplify
</code></pre>

<p>Of course I get, up to sign, the coefficient of $x^6$.</p>

<p>Now, is there a way to compute more elaborate symmetric functions, for example the sum of $x_i/x_j$ for all $i,j$?</p>
Daniel Lichtblau
51
<p>You might compute the defining polynomial for the root quotients directly. Since you are interested in root quotients x/y, call the result z and we have x-y*z as a new polynomial relation. Now use iterated resultants to eliminate x and y.</p> <pre><code>res = Resultant[Resultant[x^7 + 5 x^6 + x^5 + x + 1, x - y*z, x], y^7 + 5 y^6 + y^5 + y + 1, y] (* Out[45]= 1 - 5 z + z^2 - 2525 z^5 + 60994 z^6 - 70555 z^7 + 12038 z^8 - 261 z^9 + 930 z^10 - 50760 z^11 - 167510 z^12 + 224509 z^13 - 5732 z^14 + 7071 z^15 + 36884 z^16 + 26408 z^17 + 391806 z^18 - 417131 z^19 - 55251 z^20 - 42118 z^21 + 80948 z^22 - 122338 z^23 - 498110 z^24 + 498110 z^25 + 122338 z^26 - 80948 z^27 + 42118 z^28 + 55251 z^29 + 417131 z^30 - 391806 z^31 - 26408 z^32 - 36884 z^33 - 7071 z^34 + 5732 z^35 - 224509 z^36 + 167510 z^37 + 50760 z^38 - 930 z^39 + 261 z^40 - 12038 z^41 + 70555 z^42 - 60994 z^43 + 2525 z^44 - z^47 + 5 z^48 - z^49 *) </code></pre>
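<p>As a cross-check for the particular symmetric function asked about: the sum of $x_i/x_j$ over ordered pairs $i\ne j$ can also be read directly off the coefficients, since it equals $\left(\sum_i x_i\right)\left(\sum_j 1/x_j\right)-n$ and both factors come from Vieta's formulas. A sketch (Python rather than Mathematica, exact rational arithmetic):</p>

```python
from fractions import Fraction

def sum_of_root_quotients(coeffs):
    """Sum of x_i/x_j over ordered pairs i != j, for a polynomial with
    nonzero roots, given coefficients [a_n, ..., a_0] from highest degree.
    Uses Vieta: sum x_i = -a_{n-1}/a_n and sum 1/x_i = -a_1/a_0."""
    n = len(coeffs) - 1
    s = Fraction(-coeffs[1], coeffs[0])    # sum of roots
    t = Fraction(-coeffs[-2], coeffs[-1])  # sum of reciprocals of roots
    return s * t - n                       # subtract the n diagonal terms

check = sum_of_root_quotients([1, -6, 11, -6])           # roots 1, 2, 3
value = sum_of_root_quotients([1, 5, 1, 0, 0, 0, 1, 1])  # the septic above
print(check, value)  # 8 and -2
```

<p>For roots $1,2,3$ the six quotients sum to $\frac12+\frac13+2+\frac23+3+\frac32=8$, matching the formula; for the polynomial in the question the value is $-2$.</p>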
3,156,643
<blockquote>
  <p>Prove that <span class="math-container">$\sin(x) &lt; x$</span> when <span class="math-container">$0&lt;x&lt;2\pi.$</span></p>
</blockquote>

<p>I have been struggling with this problem for quite some time and I do not understand some parts of it. I am supposed to use Rolle's theorem and the mean value theorem.</p>

<p>First, using the mean value theorem I got <span class="math-container">$\cos(x) = \dfrac {\sin(x)}x$</span> and since <span class="math-container">$1 ≥ \cos x ≥ -1$</span> , <span class="math-container">$1 ≥ \dfrac {\sin(x)}x$</span> which is <span class="math-container">$x ≥ \sin x$</span> for all <span class="math-container">$x ≥ 0$</span>.</p>

<p>Here the first issue is that I didn't know how to change <span class="math-container">$≥$</span> to <span class="math-container">$&gt;$</span>. </p>

<p>The second part is proving it when <span class="math-container">$x&lt;2\pi$</span>, and for this part I have no idea.</p>

<p>I know that <span class="math-container">$2\pi &gt; 1$</span> , and <span class="math-container">$1 ≥ \sin x$</span> and my thought process ends here.</p>
Theo Bendit
248,286
<p>Using <span class="math-container">$f : [0, \infty) \to \mathbb{R}$</span>, <span class="math-container">$x \mapsto x - \sin(x)$</span> and the mean value theorem, we can solve this problem. Note that <span class="math-container">$$f'(x) = 1 - \cos(x) \ge 0.$$</span> The mean value theorem tells us that <span class="math-container">$f$</span> is therefore non-decreasing (prove the contrapositive!). So, for all <span class="math-container">$x \ge 0$</span>, we have <span class="math-container">$$x - \sin(x) = f(x) \ge f(0) = 0 \implies x \ge \sin(x)$$</span> as you deduced already. The only thing we need to do is show <span class="math-container">$f(x) &gt; 0$</span> for <span class="math-container">$x &gt; 0$</span>. Note that, for this to fail, since <span class="math-container">$f$</span> is non-decreasing, we would need to have <span class="math-container">$f(x) = 0$</span> on some interval <span class="math-container">$[0, \lambda]$</span>; indeed, as soon as <span class="math-container">$f(\lambda) = 0$</span> for some <span class="math-container">$\lambda &gt; 0$</span>, then <span class="math-container">$$x \in [0, \lambda] \implies 0 = f(0) \le f(x) \le f(\lambda) = 0 \implies f(x) = 0.$$</span> But this is not the case: such an interval of zeroes would imply that <span class="math-container">$f'(x) = 0$</span> for all <span class="math-container">$x \in (0, \lambda)$</span>, which is false, as <span class="math-container">$\cos$</span> is not constantly <span class="math-container">$1$</span> on any interval to the right of <span class="math-container">$0$</span>.</p>
1,211,287
<p>Given that the angles between the consecutive lateral edges AB, AC &amp; AD meeting at the vertex A of a tetrahedron ABCD are $ α, β, γ$ (as shown in the diagram below). Is there any set-formula to find out the solid angle subtended by the tetrahedron at the same vertex? </p> <p>Note: A tetrahedron is a solid having 4 triangular faces, 6 edges &amp; 4 vertices. Three triangular faces meet together at each of four vertices &amp; each of six edges is shared (common) by two adjacent triangular faces. </p> <p><img src="https://i.stack.imgur.com/CdCvt.jpg" alt="Tetrahedron"></p>
Harish Chandra Rajpoot
210,295
<p>The <strong>solid angle <span class="math-container">$\omega$</span> subtended at a vertex by any tetrahedron having (vertex) angles <span class="math-container">$\alpha, \beta$</span> &amp; <span class="math-container">$\gamma$</span> between consecutive lateral edges meeting at the same vertex</strong>, is given by the following <a href="https://www.academia.edu/32720371/Solid_angle_subtended_by_a_tetraheron_at_its_vertex_given_the_angles_between_the_consecutive_lateral_edges_meeting_at_that_vertex_and_solid_angle_subtended_by_a_triangle_at_the_origin_given_the_position_vectors_of_its_vertices_Application_of_HCRs_cosine_formula_" rel="nofollow noreferrer">HCR's Generalized Formula</a> <span class="math-container">$$\omega=\cos^{-1}\left(\frac{\cos\alpha-\cos\beta\cos\gamma}{\sin\beta\sin\gamma}\right)-\sin^{-1}\left(\frac{\cos\beta-\cos\alpha\cos\gamma}{\sin\alpha\sin\gamma}\right)-\sin^{-1}\left(\frac{\cos\gamma-\cos\alpha\cos\beta}{\sin\alpha\sin\beta}\right)$$</span> OR</p> <p><span class="math-container">$$\omega=\frac{\pi}{2}-\sin^{-1}\left(\frac{\cos\alpha-\cos\beta\cos\gamma}{\sin\beta\sin\gamma}\right)-\sin^{-1}\left(\frac{\cos\beta-\cos\alpha\cos\gamma}{\sin\alpha\sin\gamma}\right)-\sin^{-1}\left(\frac{\cos\gamma-\cos\alpha\cos\beta}{\sin\alpha\sin\beta}\right)$$</span></p> <p>OR <span class="math-container">$$\omega=\cos^{-1}\left(\frac{\cos\alpha-\cos\beta\cos\gamma}{\sin\beta\sin\gamma}\right)+\cos^{-1}\left(\frac{\cos\beta-\cos\alpha\cos\gamma}{\sin\alpha\sin\gamma}\right)+\cos^{-1}\left(\frac{\cos\gamma-\cos\alpha\cos\beta}{\sin\alpha\sin\beta}\right)-\pi$$</span></p> <p>It is worth noticing that the above formula has <strong>internal symmetry</strong> i.e. the vertex-angles <span class="math-container">$\alpha, \beta$</span> &amp; <span class="math-container">$\gamma$</span> can be taken in any order/sequence but the result obtained in each case remains the same.</p>
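<p>A numerical cross-check of the formula (illustrative Python; the third form above is compared against the vector expression of Van Oosterom and Strandberg, $\tan(\omega/2)=|\,\mathbf u\cdot(\mathbf v\times\mathbf w)\,|\,/\,(1+\mathbf u\cdot\mathbf v+\mathbf v\cdot\mathbf w+\mathbf w\cdot\mathbf u)$; the construction of the edge unit vectors from the three vertex angles is my own):</p>

```python
import math

def omega_hcr(a, b, g):
    # Third form above: sum of three arccos terms minus pi.
    def term(x, y, z):
        return math.acos((math.cos(x) - math.cos(y) * math.cos(z))
                         / (math.sin(y) * math.sin(z)))
    return term(a, b, g) + term(b, a, g) + term(g, a, b) - math.pi

def omega_vec(a, b, g):
    # Unit vectors u, v, w along the three edges, with angle(u,v)=a,
    # angle(v,w)=b, angle(w,u)=g.
    u = (1.0, 0.0, 0.0)
    v = (math.cos(a), math.sin(a), 0.0)
    wy = (math.cos(b) - math.cos(a) * math.cos(g)) / math.sin(a)
    w = (math.cos(g), wy, math.sqrt(1 - math.cos(g) ** 2 - wy ** 2))
    det = (u[0] * (v[1] * w[2] - v[2] * w[1])
           - u[1] * (v[0] * w[2] - v[2] * w[0])
           + u[2] * (v[0] * w[1] - v[1] * w[0]))
    dot = lambda p, q: sum(pi * qi for pi, qi in zip(p, q))
    return 2 * math.atan2(abs(det), 1 + dot(u, v) + dot(v, w) + dot(w, u))

w1 = omega_hcr(math.pi / 2, math.pi / 2, math.pi / 2)  # one octant: pi/2
w2 = omega_vec(math.pi / 2, math.pi / 2, math.pi / 2)
w3 = omega_hcr(math.pi / 3, math.pi / 3, math.pi / 3)
w4 = omega_vec(math.pi / 3, math.pi / 3, math.pi / 3)
d12 = abs(w1 - w2)
d34 = abs(w3 - w4)
oct_err = abs(w1 - 1.5707963267948966)
print(w1, w2, w3, w4)
```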
2,623,735
<p><a href="https://i.stack.imgur.com/5QfOQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5QfOQ.png" alt="enter image description here"></a></p> <p>I have proven that $S$ is also a basis. But I am not sure about the second one. Is it just identity matrix $3\times 3$ as we don't change anything?</p>
lab bhattacharjee
33,337
<p>As $\arctan(-x)=-\arctan x,$</p>

<p>Set $x=-y$ with $x\ge0,$ in $$\arccos\dfrac{1-x^2}{1+x^2}=2\arctan x$$</p>

<p>to find $$\arccos\dfrac{1-(-y)^2}{1+(-y)^2}=2\arctan(-y)=?$$</p>

<p>In $$\arcsin\dfrac{2x}{1+x^2}=2\arctan x$$ with $|x|\le1$</p>

<p>set $x=\dfrac1y$ to find $$\arcsin\dfrac{2/y}{1+(1/y)^2}=2\arctan(1/y)$$</p>

<p>Now use <a href="https://math.stackexchange.com/questions/304399/are-mathrmarccotx-and-arctan1-x-the-same-function">Are $\mathrm{arccot}(x)$ and $\arctan(1/x)$ the same function?</a></p>
422,118
<p>I'm a CS major working on social network analysis and its friends.</p> <p>In page 15 of <a href="http://open.umich.edu/sites/default/files/1446/SI508-F08-Week3.pdf" rel="nofollow">this lecture note</a>, two very interesting questions have been asked. Given a social network graph, in which cases would we find nodes with high betweenness but relatively low degree? And, which cases would cause the opposite to happen, that is, high degree but relatively low betweenness? I'm trying to understand this from an intuitive point of view. I'd really appreciate it if someone can shed some light on this.</p> <p><strong>Added notes:</strong><br/> <strong>Betweenness:</strong> intuition: how many pairs of nodes would have to go through a particular node in order to reach one another in the minimum number of hops? Check <a href="http://en.wikipedia.org/wiki/Betweenness_centrality" rel="nofollow">Betweenness centrality</a> in Wikipedia for formal definitions.</p>
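<p>A brute-force illustration on a toy graph (a hypothetical example of my own; for real networks one would use a library such as NetworkX): a "bridge" node joining two dense clusters has low degree but lies on every inter-cluster shortest path, while a node buried inside one cluster has higher degree yet zero betweenness.</p>

```python
from collections import deque

def betweenness(adj):
    """Brandes-style betweenness, counted over ordered (s, t) pairs."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        sigma = {v: 0 for v in adj}; sigma[s] = 1   # shortest-path counts
        dist = {v: -1 for v in adj}; dist[s] = 0
        preds = {v: [] for v in adj}
        order, q = [], deque([s])
        while q:
            v = q.popleft(); order.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1; q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]; preds[w].append(v)
        delta = {v: 0.0 for v in adj}
        for w in reversed(order):                   # dependency accumulation
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc

# Two K4 cliques {0,1,2,3} and {5,6,7,8} joined through the bridge node 4.
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3),
         (5, 6), (5, 7), (5, 8), (6, 7), (6, 8), (7, 8),
         (3, 4), (4, 5)]
adj = {v: set() for v in range(9)}
for u, v in edges:
    adj[u].add(v); adj[v].add(u)

bc, deg = betweenness(adj), {v: len(adj[v]) for v in adj}
print(deg[4], bc[4])  # degree 2, betweenness 32.0: low degree, high betweenness
print(deg[0], bc[0])  # degree 3, betweenness 0.0: higher degree, no betweenness
```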
hcp
12,963
<p><strong>No, it isn't.</strong></p> <p>The basic idea of the proof is this: We can define an alternative interpretation for the logic such that the logic is correct with respect to said interpretation, but $\triangledown(\top, \top)$ isn't valid in the interpretation, therefore implying that $\triangledown(\top, \top)$ can't be a theorem of the logic.</p> <p>We'll restrict ourselves to the <em>minimal normal modal logic</em> $K_2$. We need the following alternative characterization of it:</p> <p>For all modal formulas $\phi$, $\psi$ and $\rho$, we have that</p> <ol> <li>All propositional tautologies (and the result of any uniform substitution applied to them, in other words, any of their instances, including those containing modal formulas) belong to $K_2$ </li> <li>If $\phi \in K_2$ and $\phi\rightarrow\psi \in K_2$, then $\psi \in K_2$,</li> <li>$\triangledown(\phi,\rho)\rightarrow(\triangledown(\phi\rightarrow \psi,\rho)\rightarrow \triangledown(\psi,\rho))\in K_2$ and $\triangledown(\rho,\phi)\rightarrow(\triangledown(\rho,\phi\rightarrow \psi)\rightarrow \triangledown(\rho,\psi))\in K_2$,</li> <li>$\triangle(\phi,\psi)\leftrightarrow\neg\triangledown(\neg \phi,\neg \psi) \in K_2$,</li> <li>If $\phi \in K_2$ then $\triangledown(\phi,\bot) \in K_2$ and $\triangledown(\bot,\phi) \in K_2$,</li> </ol> <p>and no other formula belongs to $K_2$. </p> <p>At this point, we should prove that $K_2$ is indeed a normal modal logic, or more precisely, we should prove that $K_2$ is closed under uniform substitution, as the rest follows trivially. Note that the notion of "belonging to $K_2$" has a natural inductive structure; do an induction on that, and the proof for closure under uniform substitution follows.</p> <p>Next, let's define the alternative interpretation. First, define an <em>A-Kripke frame</em> to be a tuple $(W,R_a,R_b)$, where $W$ is a set, and $R_a,R_b$ are binary relations over $W$ (that is, $R_a,R_b \subseteq W\times W$).
An <em>A-Kripke model</em> is a tuple $(F,V)$, where $F$ is an A-Kripke frame, and $V$ is a valuation function, a function from propositional letters to sets of elements of W (that is, $V : \Phi \rightarrow P(W)$, where $\Phi$ is the set of propositional letters, and $P(W)$ is the set of all subsets of $W$).</p> <p>Next, we define the <em>A-satisfaction relation</em> $\Vdash^A$, a relation between A-Kripke models, elements of W, and modal formulas. Let $w,v\in W$, $M=((W,R_a,R_b),V)$ be an A-Kripke model, and $\phi, \psi$ be modal formulas. We have:</p> <p>$$ M,w \Vdash^A p \textrm{ iff } w \in V(p) $$ $$ M,w \Vdash^A \phi\lor\psi \textrm{ iff } M,w \Vdash^A \phi \textrm{ or } M,w \Vdash^A \psi $$ $$ M,w \Vdash^A \neg\psi \textrm{ iff } M,w \Vdash^A \psi \textrm{ isn't the case (that is, $M,w \nVdash^A \psi$) } $$ $$ M,w \Vdash^A \triangledown(\phi, \psi),\textrm{ with }\phi\neq\bot\textrm{ and }\psi\neq\bot, \textrm{ or } \phi=\psi=\bot,\textrm{ is never the case} $$ $$ M,w \Vdash^A \triangledown(\bot, \psi),\textrm{ with }\psi\neq\bot, \textrm{ iff for all worlds } v\in W \textrm{ such that } R_awv, \textrm{ we have } M,v \Vdash^A \psi $$ $$ M,w \Vdash^A \triangledown(\phi, \bot),\textrm{ with }\phi\neq\bot, \textrm{ iff for all worlds } v\in W \textrm{ such that } R_bwv, \textrm{ we have } M,v \Vdash^A \phi $$</p> <p>All other formulas can be defined in terms of those mentioned above; in particular, $\top$ is defined as $\neg\bot$, and $\triangle(\phi,\psi)$ is defined by duality, as $\neg\triangledown(\neg\phi,\neg\psi)$. 
So this gives us the satisfiability of $\triangledown(\top, \top)$ in this interpretation as impossible, since $\top\neq\bot$.</p> <p>Then all that remains is establishing the main correctness result; we'll prove that, for all modal formulas $\phi$, if $\vdash_{K_2}\phi$ (that is, $\phi\in K_2$), then for all A-Kripke models $M$ and their worlds $w$, $M,w\Vdash^A \phi$, or in more usual notation, simply $\Vdash^A \phi$ (this means: $\phi$ is a valid formula). From there, we can reason contrapositively - if $\triangledown(\top, \top)$ is not valid (in fact it is not even satisfiable!), then it can't possibly belong to $K_2$, and so doesn't belong to all modal logics; which clearly means our definition of modal logic is bogus.</p> <p>We must once again reason inductively on the structure of $K_2$; this time, let's not skip the proof (at least, not the basic structure!). Let $\phi\in K_2$ be a modal formula. Then our original hypothesis, $\vdash_{K_2}\phi$, gets divided into 5 cases, the 5 possibilities we listed as justification for a formula to belong to $K_2$ (and we have to prove $\Vdash^A \phi$ in all of them): </p> <ol> <li>$\phi$ is an instance of a propositional tautology. Our satisfaction relation agrees completely with the usual definitions for the propositional connectives; it must be the case that $\phi$ is valid.</li> <li>$\phi$ can be derived from modus ponens, that is, there exist $\psi\in K_2$ and $\psi\rightarrow\phi\in K_2$. In this case, the inductive hypothesis tells us that $\Vdash^A \psi$ and $\Vdash^A \psi\rightarrow\phi$. Again, our satisfaction relation behaves as usual regarding propositional connectives, so we do have $\Vdash^A \phi$.</li> <li>$\phi$ is one of the $K$ axioms. Then consider $\phi=\triangledown(\eta,\rho)\rightarrow(\triangledown(\eta\rightarrow \psi,\rho)\rightarrow \triangledown(\psi,\rho))$ (we'll leave the other possibility for the reader, as it is rather similar to this one).
We're trying to establish $\Vdash^A \triangledown(\eta,\rho)\rightarrow(\triangledown(\eta\rightarrow \psi,\rho)\rightarrow \triangledown(\psi,\rho))$. Then consider an A-Kripke model $M=(W,R_a,R_b,V)$, and $w\in W$. Next, suppose $M,w\Vdash^A \triangledown(\eta,\rho)$ and $M,w\Vdash^A \triangledown(\eta\rightarrow \psi,\rho)$. If with those hypotheses we prove $M,w\Vdash^A \triangledown(\psi,\rho)$, then we'll have proven what we wanted. Now, do observe that $\rho$ must be $\bot$, for if it were not, then we'd have $M,w\Vdash^A \triangledown(\eta\rightarrow \psi,\rho)$ and both subformulas would be different from $\bot$, a situation defined as impossible. Also note that, since $\rho=\bot$ then $\eta\neq\bot$, for if not then we'd have an impossible situation arising from $M,w\Vdash^A \triangledown(\eta,\rho)$ and $\eta=\rho=\bot$. Then consider a $v$ such that $R_bwv$. By our hypotheses and observations, we have $M,w\Vdash^A \triangledown(\eta,\bot)$ and $M,w\Vdash^A \triangledown(\eta\rightarrow \psi,\bot)$, with $\eta\neq\bot$. Then, by definition of the satisfaction relation, it must be the case that $M,v\Vdash^A \eta$, and $M,v\Vdash^A \eta\rightarrow\psi$. As our satisfaction relation is propositionally sound, we have $M,v\Vdash^A \psi$, and since we made no stipulation over $v$ other than $R_bwv$, we conclude that $M,w\Vdash^A \triangledown(\psi,\bot)$, which, since $\rho=\bot$, is what we wanted to conclude.</li> <li>$\phi$ is one of the duality axioms. This is trivial, since $\triangle$ is defined in terms of $\triangledown$ exactly as postulated by the axiom.</li> <li>$\phi$ was obtained from an application of a generalization rule. Let's handle one of them, again leaving the second one for the reader. So we have $\phi=\triangledown(\psi,\bot)$, and $\psi \in K_2$. This time, our inductive hypothesis is simply $\Vdash^A \psi$. 
But then we have that, for any world in any A-Kripke model, $\psi$ is satisfied there; in particular, we have that in any A-Kripke model $M=(W,R_a,R_b,V)$, for any pair of worlds $w,v$ such that $R_bwv$, $M,v\Vdash^A \psi$, and by the definition of $\Vdash^A$, $M,w\Vdash^A \triangledown(\psi,\bot)$, or simply, $M,w\Vdash^A \phi$. As this was proven for any model and world, we get $\Vdash^A \phi$.</li> </ol> <p>Therefore, $K_2$ is correct with respect to $\Vdash^A$, and as we've reasoned before, we now know that $\triangledown(\top,\top)\notin K_2$, thus proving that $\triangledown(\top,\top)$ doesn't actually belong to all modal logics as defined in this question (and by the textbook). Since $\triangledown(\top,\top)$ is indeed a validity in the usual semantics, the given definition of modal logic is <em>incomplete</em>, in the proper logical sense of the word.</p> <p>(As a final note, it is rather trivial to fix this definition. One simple way to go about it is by replacing the $\bot$ in the generalization rule with arbitrary formulas $\psi_1,...,\psi_n$, for each $\bot$ in the modal operator, though I'm not sure if this is the best one. Also, another development would be finding an actually complete semantics for this flawed definition of modal logic (or proving that there can't be one!); if someone has a link to that I'd be grateful.)</p>
677,785
<p>I have to evaluate this integral:</p>

<p>$$ \int_0^4 \int_\sqrt{y}^2 y^2 {e}^{x^7} \operatorname d\!x \operatorname d\!y\, $$</p>

<p>I have no idea what to do with $\;{e}^{x^7}$.</p>

<p>I have even <a href="http://www.wolframalpha.com/input/?i=int+e%5Ex%5E7+dx" rel="nofollow">tried $\int{e}^{x^7} dx$ with WolframAlpha</a>, but it gives me something with $\;\Gamma\;$ and I don't know what to do with that.</p>

<p>I tried substituting $\;u = x^7\;$ and doing another change of variables. I got $\;445 {e}^{128}/9408\;$, but I'm not really sure about it.</p>

<p>If anyone could at least point me in the right direction, it would be awesome! Thanks.</p>
Community
-1
<p>Change the order of integration; this leads to</p> <p>$$\int_0^2 \int_0^{x^2} y^2 e^{x^7} dy dx = \frac 1 3\int_0^2 x^6 e^{x^7} dx$$</p> <p>which is an easy integral.</p>
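<p>With the substitution $u = x^7$ (so $du = 7x^6\,dx$) this evaluates to $\frac{1}{21}\left(e^{128}-1\right)$. A quick numerical sanity check of the antiderivative on $[0,1]$, where quadrature is well behaved (illustration only):</p>

```python
import math

# Simpson's rule for (1/3) x^6 e^{x^7} on [0, 1]; exact value is (e - 1)/21.
n, lo, hi = 2000, 0.0, 1.0
h = (hi - lo) / n
f = lambda x: x ** 6 * math.exp(x ** 7) / 3.0
s = f(lo) + f(hi) + sum((4 if k % 2 else 2) * f(lo + k * h)
                        for k in range(1, n))
approx = s * h / 3.0

exact_small = (math.e - 1) / 21.0
full_value = (math.exp(128) - 1) / 21.0  # value of the original integral
err = abs(approx - exact_small)
print(approx, exact_small)
```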
120,067
<p>The <em>theta function</em> is the analytic function $\theta:U\to\mathbb{C}$ defined on the (open) right half-plane $U\subset\mathbb{C}$ by $\theta(\tau)=\sum_{n\in\mathbb{Z}}e^{-\pi n^2 \tau}$. It has the following important transformation property.</p> <blockquote> <p><strong>Theta reciprocity</strong>: $\theta(\tau)=\frac{1}{\sqrt{\tau}}\theta\left(\frac{1}{\tau}\right)$.</p> </blockquote> <p>This theorem, while fundamentally analytic&mdash;the proof is just Poisson summation coupled with the fact that a Gaussian is its own Fourier transform&mdash;has serious arithmetic significance.</p> <ul> <li><p>It is the key ingredient in the proof of the functional equation of the Riemann zeta function.</p></li> <li><p>It expresses the <em>automorphy</em> of the theta function.</p></li> </ul> <p>Theta reciprocity also provides an analytic proof (actually, the <em>only</em> proof, as far as I know) of the Landsberg-Schaar relation</p> <p>$$\frac{1}{\sqrt{p}}\sum_{n=0}^{p-1}\exp\left(\frac{2\pi i n^2 q}{p}\right)=\frac{e^{\pi i/4}}{\sqrt{2q}}\sum_{n=0}^{2q-1}\exp\left(-\frac{\pi i n^2 p}{2q}\right)$$</p> <p>where $p$ and $q$ are arbitrary positive integers. To prove it, apply theta reciprocity to $\tau=2iq/p+\epsilon$, $\epsilon&gt;0$, and then let $\epsilon\to 0$.</p> <p>This reduces to the formula for the quadratic Gauss sum when $q=1$:</p> <p>$$\sum_{n=0}^{p-1} e^{2 \pi i n^2 / p} = \begin{cases} \sqrt{p} &amp; \textrm{if } \; p\equiv 1\mod 4 \\\ i\sqrt{p} &amp; \textrm{if } \; p\equiv 3\mod 4 \end{cases}$$</p> <p>(where $p$ is an odd prime). 
From this, it's not hard to deduce Gauss's "golden theorem".</p> <blockquote> <p><strong>Quadratic reciprocity</strong>: $\left(\frac{p}{q}\right)\left(\frac{q}{p}\right)=(-1)^{(p-1)(q-1)/4}$ for odd primes $p$ and $q$.</p> </blockquote> <p>For reference, this is worked out in detail in the paper "<a href="http://www.math.kth.se/~akarl/langmemorial.pdf">Applications of heat kernels on abelian groups: $\zeta(2n)$, quadratic reciprocity, Bessel integrals</a>" by Anders Karlsson.</p> <hr> <p>I feel like there is some deep mathematics going on behind the scenes here, but I don't know what.</p> <blockquote> <p>Why should we expect theta reciprocity to be related to quadratic reciprocity? Is there a high-concept explanation of this phenomenon? If there is, can it be generalized to other reciprocity laws (like Artin reciprocity)?</p> </blockquote> <p>Hopefully some wise number theorist can shed some light on this!</p>
Stopple
6,756
<p>Going in the direction of more generality:</p> <p>With $\theta(\tau)=\sum_n\exp(\pi i n^2 \tau)$, theta reciprocity describes how the function behaves under the linear fractional transformation $[\begin{smallmatrix} 0&amp;1 \\ -1&amp;0\end{smallmatrix}]$. From this one can show it's an automorphic form (of half integral weight, on a congruence subgroup). Automorphic forms and more generally automorphic representations are linked by the Langlands program to a very general approach to a non-abelian class field theory. Your "Why should we expect ..." question is dead-on. This is very deep and surprising stuff.</p> <p>In the direction of more specificity, the connection to the heat kernel is fascinating. (In this context, Serge Lang was a great promoter of 'the ubiquitous heat kernel.') The theta function proof is also discussed in Dym and McKean's 1972 book <em>"Fourier Series and Integrals"</em> and in Richard Bellman's 1961 book <em>"A Brief Introduction to Theta Functions."</em> Bellman points out that theta reciprocity is a remarkable consequence of the fact that when the theta function is extended to two variables, both sides of the reciprocity law are solutions to the heat equation. One is, for $t\to 0$ what physicists call a 'similarity solution' while the other is, for $t\to \infty$ the separation of variables solution. By the uniqueness theorem for solutions to PDEs, the two sides must be equal!</p> <p>A special case of quadratic reciprocity is that an odd prime $p$ is a sum of two squares if and only if $p\equiv 1\bmod 4$. This can be be done via the theta function and is in fact given in Jacobi's original 1829 book <em>"Fundamenta nova theoriae functionum ellipticarum."</em></p>
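<p>For real $\tau&gt;0$ theta reciprocity is easy to watch numerically, since both series converge very rapidly (a quick illustrative check, not a proof):</p>

```python
import math

def theta(t, terms=60):
    # theta(t) = sum over n in Z of exp(-pi n^2 t), for real t > 0.
    return 1.0 + 2.0 * sum(math.exp(-math.pi * n * n * t)
                           for n in range(1, terms))

# Check theta(tau) = theta(1/tau) / sqrt(tau) at a couple of points.
d1 = abs(theta(2.0) - theta(0.5) / math.sqrt(2.0))
d2 = abs(theta(0.3) - theta(1 / 0.3) / math.sqrt(0.3))
print(d1, d2)  # both at the level of floating-point round-off
```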
684,892
<p>My progress:</p>

<p>Let's take $a \in \mathbb{Z}\left[\frac{-1 + \sqrt{-3}}2\right]$ such that $a \mid 2$, and the function $l(x) = x \bar x$.</p>

<p>$a \mid 2$ $\Rightarrow$ $2 = ab$ $\Rightarrow$ $l(ab) = l(a)l(b) = 4 = l(2)$</p>

<p>If $z \in \mathbb{Z}[\frac{-1 + \sqrt{-3}}2]$, then $z = x + y\frac{-1 + \sqrt{-3}}2$, $x, y \in \mathbb{Z}$ and $l(z) = x^2 - xy + y^2 \in \mathbb{Z}$. </p>

<p>So $l(a), l(b) \in \mathbb{Z}$. </p>

<p>Thus there are three possible values for $l(a)$:</p>

<ol>
<li>$l(a) = 1$ and $l(b) = 4$ : $a \bar a = 1$, thus $a$ is a unit.</li>
<li>$l(a) = 4$ and $l(b) = 1$ : $b$ is a unit.</li>
<li>$l(a) = 2$ and $l(b) = 2$ : If $a = x + y\frac{-1 + \sqrt{-3}}2$ , $l(a) = x^2 - xy + y^2 = 2$. </li>
</ol>

<p>If the equation $x^2 - xy + y^2 = 2$ doesn't have a solution, then $2$ is a prime. How can I show that it doesn't have a solution?</p>

<p>Any other proof would be highly appreciated.</p>
Astro Nauft
122,372
<p>Another simple proof is that your equation is a reduced binary quadratic form, and the smallest positive integer that a reduced form represents is the coefficient of $x^2$.</p>
2,616,847
<p>By definition, a function $f:\Bbb R^n \to \Bbb R^m$ is linear if</p>

<ol>
<li>$f(x+y)=f(x)+f(y) \forall x,y\in \Bbb R^n$</li>
<li>$f(ax)=af(x) \forall x\in \Bbb R^n$</li>
</ol>

<p>I want to prove that $f$ is linear iff $f(x)=Ax,A\in\Bbb R^{m\times n}$ and A is unique for any x. </p>

<p>I tried to prove it by showing $f(x)=f(x\cdot1)=xf(1)=ax$ when $f$ is linear. But $x$ and $1$ are not of the same dimension. How can I do it?</p>

<p>And to prove that $A$ is unique, it means if</p>

<p>$Ax=Bx, 0=f(x)-f(x)=Ax-Bx=(A-B)x$, then $A=B$. </p>

<p>Am I understanding it right?</p>
user
505,767
<p>We need to prove two implications</p>

<ol>
<li>if $f$ is linear $\implies$ $f(x)=Ax$</li>
<li>if $f(x)=Ax$ $\implies$ $f$ is linear</li>
</ol>

<p>To <strong>prove "1"</strong> you need to show that every $x$ can be expressed as a linear combination of a basis, $x=\sum a_i\cdot v_i$, and that $f(x)$, by linearity, is completely determined by $f(v_1), f(v_2),...,f(v_n)$. </p>

<p>The uniqueness of $A$, for a given basis, can be shown assuming that for $v\neq w$ $$f(v)=f(w) \iff f(v)-f(w)=0\iff f(v-w)=0\iff v-w=0\iff v=w$$</p>

<p>To <strong>prove "2"</strong>, assuming $$f(x)=Ax,A\in\Bbb R^{m\times n}$$</p>

<p>we need to show that</p>

<ol>
<li>$A(x+y)=Ax+Ay \quad \forall x,y\in \Bbb R^n$</li>
<li>$A(ax)=aAx \quad\forall x\in \Bbb R^n$</li>
</ol>

<p>which are true by matrix properties. Uniqueness is also trivial to prove.</p>
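<p>The content of "1" is that the $i$-th column of $A$ is $f(e_i)$, the image of the $i$-th standard basis vector. A small numeric sketch with a made-up linear map $f(x,y)=(x+2y,\;3x-y,\;y)$:</p>

```python
def f(v):
    x, y = v
    return (x + 2 * y, 3 * x - y, y)

# The columns of A are the images of the standard basis vectors e_1, e_2.
cols = [f((1, 0)), f((0, 1))]
A = [[cols[j][i] for j in range(2)] for i in range(3)]  # a 3x2 matrix

def matvec(M, v):
    return tuple(sum(M[i][j] * v[j] for j in range(len(v)))
                 for i in range(len(M)))

samples = [(1, 1), (-2, 5), (0.5, -3)]
agrees = all(matvec(A, v) == f(v) for v in samples)
print(A, agrees)
```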
2,101,750
<p>The WP article on general topology has a section titled "<a href="https://en.wikipedia.org/wiki/General_topology#Defining_topologies_via_continuous_functions">Defining topologies via continuous functions</a>," which says,</p> <blockquote> <p>given a set S, specifying the set of continuous functions $S \rightarrow X$ into all topological spaces X defines a topology [on S].</p> </blockquote> <p>The first thing that bothered me about this was that clearly this collection of continuous functions is a proper class, not a set. Is there a way of patching up this statement so that it literally makes sense, and if so, how would one go about proving it?</p> <p>The same section of the article has this:</p> <blockquote> <p>for a function f from a set S to a topological space, the initial topology on S has as open subsets A of S those subsets for which f(A) is open in X.</p> </blockquote> <p>This confuses me, because it seems that this does not necessarily define a topology. For example, let S be a set with two elements, and let f be a function that takes these elements to two different points on the real line. Then f(S) is not open, which means that S is not an open set in S, but that violates one of the axioms of a topological space. Am I just being stupid because I haven't had enough coffee this morning?</p>
Noah Schweber
28,111
<p>Your first question is easily addressed. Yes, on the face of it, there are a proper class of such maps. However, we can restrict attention to topological spaces $X$ which have cardinality no greater than $S$, since we can leave off the part of a space not in the image of $f$. Up to homeomorphism, there are only set-many such $X$ (specifically, $2^{2^{\vert S\vert}}$-many at most). And for each specific space $X$, there are only set-many maps from $S$ to $X$.</p> <p><em>Incidentally, there <strong>is</strong> still one remaining subtlety: actually picking out a set of "enough target spaces". This is potentially an issue, since each homeomorphism type contains a proper class of spaces! This can be handled - without even using choice! - by noting that every such homeomorphism type has a representative of rank (in the cumulative hierarchy sense) $\le\kappa+3$, where $\vert X\vert+\aleph_0\le \kappa$ (this is actually massive overkill but oh well); and the class of all topological spaces of a bounded rank is a set.</em></p> <p><strong>EDIT: Daron's answer gives a much slicker way to approach the problem. However, it's worth understanding the brute-force approach above, since that kind of reasoning is useful in other contexts as well where we have to deal with an apparent proper class of objects.</strong></p> <p>Re: your second question, yep, that's a pretty fundamental mistake. It should go the other way: you take the <em>preimage</em> of open sets in the target space. Specifically, the topology induced by $f$ is $$\{A\subseteq S: A=f^{-1}(U)\mbox{ for some $U$ open in the target space}\}.$$ The discrepancy, of course, is due to the fact that $f\circ f^{-1}(U)\subseteq U$, but these sets are <em>not equal in general</em>.</p>
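<p>On a finite toy example one can watch the preimage construction produce a genuine topology, while the "image" version from the question fails exactly as in the two-point example (illustrative script; the choice of spaces is my own):</p>

```python
S = {1, 2}
X = {'a', 'b', 'c'}
tau_X = {frozenset(), frozenset({'a'}), frozenset({'a', 'b'}), frozenset(X)}
f = {1: 'a', 2: 'c'}  # a map S -> X

# Initial topology: preimages of the open sets of X.
preimage_top = {frozenset(s for s in S if f[s] in U) for U in tau_X}

# Check the axioms: empty set and S present, closed under union/intersection.
is_topology = (frozenset() in preimage_top and frozenset(S) in preimage_top
               and all(a | b in preimage_top and a & b in preimage_top
                       for a in preimage_top for b in preimage_top))

# The "image" version: A is "open" iff f(A) is open in X. Then S itself is
# not "open", since f(S) = {'a', 'c'} is not open -- so it is not a topology.
image_open = [A for A in (frozenset(), frozenset({1}), frozenset({2}),
                          frozenset(S))
              if frozenset(f[s] for s in A) in tau_X]
print(is_topology, frozenset(S) in image_open)  # True False
```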
1,841,958
<p>This is a claim on Wikipedia <a href="https://en.wikipedia.org/wiki/Partially_ordered_set">https://en.wikipedia.org/wiki/Partially_ordered_set</a></p>

<p>I am not sure how to make sense of the claim.</p>

<p>What is meant by "ordered by inclusion"? Inclusion as in $\subseteq$? </p>

<p>Can someone provide a small example of a couple of subspaces being "ordered" by inclusion?</p>

<p>Is this a linear order?</p>
user247327
247,327
<p>"Ordered by inclusion" means "$A\le B$ if only if A is a subset of B". For example, the set, U, of all vectors of the form (a, b, 3a+ 2b) is a subspace of $R^3$ so is a subset so "$U\le R^3$". And the set, V, of all vectors of the form (a, 3a, 9a) is a subspace of U: $V\le U$.</p>
434,061
<p>I am given a matrix $A\in M(n\times n, \mathbb{C})$ that is normal (in matrix form, $AA^*=A^*A$) and satisfies $A^2=A$. The task is to prove that the matrix is Hermitian.</p>

<p>But when I try something like $A^*=\,\,...$ , I can't reach $A$, because I can't "get rid of the star" in the expression. Also, it is not enough to show $BA=BA^*$ for some $B$, since matrices don't form a field, and I haven't got any other thoughts.</p>

<p>Thanks in advance!</p>
tomasz
30,222
<p><strong>Hint</strong>: by spectral theorem, a normal matrix is hermitian if and only if all its eigenvalues are real. What complex numbers have the property that they are equal to their squares?</p>
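<p>A concrete sketch (hand-rolled $2\times2$ complex-matrix helpers, illustration only): the orthogonal projection $A=vv^{*}$ onto a unit vector $v$ is normal and idempotent, and indeed Hermitian; its eigenvalues $0$ and $1$ are exactly the complex numbers equal to their own squares.</p>

```python
import math

v = (1 / math.sqrt(2), 1j / math.sqrt(2))  # a unit vector in C^2
A = [[v[i] * v[j].conjugate() for j in range(2)] for i in range(2)]

def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def conj_transpose(P):
    return [[P[j][i].conjugate() for j in range(2)] for i in range(2)]

def close(P, Q, eps=1e-12):
    return all(abs(P[i][j] - Q[i][j]) < eps
               for i in range(2) for j in range(2))

idempotent = close(matmul(A, A), A)                   # A^2 = A
normal = close(matmul(A, conj_transpose(A)),
               matmul(conj_transpose(A), A))          # A A* = A* A
hermitian = close(A, conj_transpose(A))               # A = A*
print(idempotent, normal, hermitian)  # True True True
```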
1,039,141
<blockquote>
  <p>Let <span class="math-container">$X = \mathbb{R}$</span> and <span class="math-container">$Y = \{x \in \mathbb{R} :x ≥ 1\}$</span>, and define <span class="math-container">$G : X → Y$</span> by <span class="math-container">$$G(x) = e^{x^2}.$$</span> Prove that <span class="math-container">$G$</span> is onto.</p>
</blockquote>

<p>Is this going along the right path, and if so, how do I get the function to equal <span class="math-container">$y$</span>?</p>

<blockquote>
  <p><span class="math-container">$G: \mathbb{R} \to\mathbb{N}_1$</span>. Let <span class="math-container">$y \in \mathbb{N}_1$</span>.</p>
  
  <p><em>claim:</em> <span class="math-container">$\sqrt{\ln y}$</span> maps to <span class="math-container">$y$</span>.</p>
  
  <p>Does <span class="math-container">$\sqrt{\ln y}$</span> belong to <span class="math-container">$\mathbb{N}_1$</span>? Yes, because <span class="math-container">$y \in \mathbb{N}_1$</span>, and <span class="math-container">$G( \sqrt{\ln y})=e^{(\sqrt{\ln y})^2}=e^{\ln y}=y$</span>.</p>
</blockquote>
mathcounterexamples.net
187,663
<p>You should write $G$ as a composite function of onto functions. Then as an exercise, prove again that a composite function of onto functions is onto.</p>
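Concretely, writing $G$ as $\exp$ composed with $x \mapsto x^2$ and inverting each factor gives the explicit preimage the asker was reaching for; a quick numerical spot-check (a sketch of mine — the function names are invented, not from the answer):

```python
from math import exp, log, sqrt

def G(x):
    """G(x) = e^(x^2), mapping R onto [1, infinity)."""
    return exp(x * x)

def preimage(y):
    """For y >= 1 we have log(y) >= 0, so sqrt(log(y)) is a real
    number, and G sends it back to y."""
    return sqrt(log(y))
```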
3,201,797
<p>I have three points <span class="math-container">$(x_1, y_1),~ (x_2, y_2),~ (x_3, y_3)$</span> that are on the same line. How to efficiently find which is the point in between.</p> <p><a href="https://i.stack.imgur.com/e2wHq.png" rel="nofollow noreferrer">Example</a></p> <p>Also, is there any efficient way to check if 3 random points are on the same line and then find the point in between?</p>
Community
-1
<p>Sort the <span class="math-container">$x$</span> and sort the <span class="math-container">$y$</span> (sorting three items takes three comparisons). Consider the axis on which the difference between the extremes is the largest, and return the point with the intermediate coordinate.</p> <p><a href="https://i.stack.imgur.com/ba94z.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ba94z.png" alt="enter image description here"></a></p> <p>Both axes need to be compared to avoid degenerate cases. This method is the most numerically robust.</p> <hr> <p>Finding whether points are aligned is more challenging because the analytical criterion, <span class="math-container">$$P_1P_2\times P_2P_3=0$$</span> will fail in virtually all cases for numerical reasons. A more reasonable approach is to set a tolerance on the distance of one point to the line of support of the two others. And for best accuracy, the line should be built on the two most far-apart points, as found above.</p> <p>A better formula is <span class="math-container">$$\|P_1P_2\times P_2P_3\|\le\|P_1P_2\|\,\delta.$$</span></p>
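A sketch of both procedures in Python (my own reading of the answer; the function names and the default `delta` are assumptions, not from the original):

```python
def middle_point(p1, p2, p3):
    """Of three (assumed collinear) 2D points, return the one lying
    between the other two: pick the axis with the larger spread between
    extremes and take the point with the median coordinate there."""
    pts = [p1, p2, p3]
    xs = sorted(q[0] for q in pts)
    ys = sorted(q[1] for q in pts)
    axis = 0 if xs[2] - xs[0] >= ys[2] - ys[0] else 1
    return sorted(pts, key=lambda q: q[axis])[1]

def are_collinear(p1, p2, p3, delta=1e-9):
    """Cross-product test with a tolerance, as in the last formula:
    |P1P2 x P2P3| <= |P1P2| * delta."""
    ux, uy = p2[0] - p1[0], p2[1] - p1[1]
    vx, vy = p3[0] - p2[0], p3[1] - p2[1]
    cross = ux * vy - uy * vx
    return abs(cross) <= (ux * ux + uy * uy) ** 0.5 * delta
```

Comparing the spreads on both axes is what handles the degenerate (near-vertical and near-horizontal) cases mentioned above.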
3,364,316
<p>While I'm reading E. Landau's <em>Grundlagen der Analysis</em> (tr. <em>Foundations of Analysis</em>, 1966), I couldn't understand the proof of <em>Theorem 3</em> at the segment of <em>Natural Numbers</em> which I've quoted below.</p> <blockquote> <p><strong>Theorem 3:</strong> <em>If</em><br> <span class="math-container">$$x \neq 1$$</span> <em>then there exists one</em> (hence, by Axiom 4, exactly one) <span class="math-container">$u$</span> <em>such that</em><br> <span class="math-container">$$x = u'$$</span> <strong>Proof:</strong> Let <span class="math-container">$\mathbb{S}$</span> be the set consisting of the number <span class="math-container">$1$</span> and of all those <span class="math-container">$x$</span> for which there exists such a <span class="math-container">$u$</span>. (For any such <span class="math-container">$x$</span>, we have of necessity that<br> <span class="math-container">$$x \neq 1$$</span> by Axiom 3.)<br> I) <span class="math-container">$1$</span> belongs to <span class="math-container">$\mathbb{S}$</span>.<br> II) If <span class="math-container">$x$</span> belongs to <span class="math-container">$\mathbb{S}$</span>, then, with <span class="math-container">$u$</span> denoting the number <span class="math-container">$x$</span>, we have<br> <span class="math-container">$$x'=u'$$</span><br> so that <span class="math-container">$x'$</span> belongs to <span class="math-container">$\mathbb{S}$</span>.<br> By Axiom 5, <span class="math-container">$\mathbb{S}$</span> therefore contains all the natural numbers. <span class="math-container">$\square$</span> </p> </blockquote> <p>Sir Landau refers to the Axioms of Peano on proof text. Can someone explain what's going on?</p>
Ben Grossmann
81,360
<p>In fact, your lim inf is a lim. First, observe that <span class="math-container">$$ (n+1)\ln(n+1) = n \ln(n+1) + \ln(n+1) = \\ n[\ln(n+1) - \ln(n) + \ln(n)] + \ln(n+1) = \\ n \ln(n) + n \ln\left(\frac{n+1}{n}\right) + \ln(n+1) $$</span> Thus (by the prime number theorem), we have <span class="math-container">$$ \lim_{n \to \infty} \frac{p_n}{p_{n+1}} = \lim_{n \to \infty} \frac{p_n}{p_{n+1}} \cdot \frac{n \ln(n)}{n \ln(n)} \cdot \frac{(n+1)\ln(n+1)}{(n+1)\ln(n+1)}\\ = \lim_{n \to \infty} \frac{p_n}{n\ln(n)} \cdot \frac{(n+1)\ln(n+1)}{p_{n+1}} \cdot \frac{n\ln(n)}{(n+1)\ln(n+1)}\\ = \lim_{n \to \infty} \frac{n\ln(n)}{(n+1)\ln(n+1)}\\ = \left[\lim_{n \to \infty} \frac{(n+1)\ln(n+1)}{n\ln(n)}\right]^{-1}\\ = \left[\lim_{n \to \infty}1 + \frac{\ln[(n+1)/n]}{\ln(n)} + \frac{\frac{\ln(n+1)}{\ln(n)}}{n }\right]^{-1} = 1^{-1} = 1. $$</span> It follows that <span class="math-container">$$ \lim_{n \to \infty} \frac{p_n}{p_n + p_{n+1}} = \lim_{n \to \infty} \frac{1}{1 + \frac{p_{n+1}}{p_n}} = \frac{1}{1+1} = \frac 12. $$</span></p>
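A quick numerical illustration of the limit (the sieve and the choice of $n$ are mine, not part of the answer): already at $n = 1000$ the ratio $p_n/(p_n + p_{n+1})$ is within $10^{-3}$ of $1/2$.

```python
def primes_up_to(limit):
    """Sieve of Eratosthenes: all primes <= limit, in increasing order."""
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, flag in enumerate(sieve) if flag]

primes = primes_up_to(100_000)
n = 1000
p, q = primes[n - 1], primes[n]   # p_1000 = 7919, p_1001 = 7927
ratio = p / (p + q)               # already very close to 1/2
```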
3,364,316
<p>While I'm reading E. Landau's <em>Grundlagen der Analysis</em> (tr. <em>Foundations of Analysis</em>, 1966), I couldn't understand the proof of <em>Theorem 3</em> at the segment of <em>Natural Numbers</em> which I've quoted below.</p> <blockquote> <p><strong>Theorem 3:</strong> <em>If</em><br> <span class="math-container">$$x \neq 1$$</span> <em>then there exists one</em> (hence, by Axiom 4, exactly one) <span class="math-container">$u$</span> <em>such that</em><br> <span class="math-container">$$x = u'$$</span> <strong>Proof:</strong> Let <span class="math-container">$\mathbb{S}$</span> be the set consisting of the number <span class="math-container">$1$</span> and of all those <span class="math-container">$x$</span> for which there exists such a <span class="math-container">$u$</span>. (For any such <span class="math-container">$x$</span>, we have of necessity that<br> <span class="math-container">$$x \neq 1$$</span> by Axiom 3.)<br> I) <span class="math-container">$1$</span> belongs to <span class="math-container">$\mathbb{S}$</span>.<br> II) If <span class="math-container">$x$</span> belongs to <span class="math-container">$\mathbb{S}$</span>, then, with <span class="math-container">$u$</span> denoting the number <span class="math-container">$x$</span>, we have<br> <span class="math-container">$$x'=u'$$</span><br> so that <span class="math-container">$x'$</span> belongs to <span class="math-container">$\mathbb{S}$</span>.<br> By Axiom 5, <span class="math-container">$\mathbb{S}$</span> therefore contains all the natural numbers. <span class="math-container">$\square$</span> </p> </blockquote> <p>Sir Landau refers to the Axioms of Peano on proof text. Can someone explain what's going on?</p>
hunter
108,129
<p>If you're willing to use recent huge theorems in your proof, it follows from Zhang's prime gaps theorem (there is some <span class="math-container">$N$</span> and infinitely many <span class="math-container">$n$</span> with <span class="math-container">$p_{n+1} - p_n &lt; N$</span>).</p>
4,274,314
<blockquote> <p>Find all <span class="math-container">$f: \mathbb{R} \to \mathbb{R}$</span> such that <span class="math-container">$$f\bigl(xf(y)+y\bigr)+f\bigl(-f(x)\bigr)=f\bigl(yf(x)-y\bigr)+y $$</span> for all <span class="math-container">$x,y \in \mathbb{R}$</span>.</p> </blockquote> <p>Help me solve this. My expectation of the answer is <span class="math-container">$f(x) = x+1$</span>.</p> <p>My try: <span class="math-container">$$ P(x, y): f\bigl(xf(y)+y\bigr)+f\bigl(-f(x)\bigr)=f\bigl(yf(x)-y\bigr)+y \text. \\ P(x, 0): f\bigl(xf(0)\bigr)+f\bigl(-f(x)\bigr) = f(0) \text. \\ P(0, y): f(y)+f\bigl(-f(0)\bigr) = f\Bigl(y\bigl(f(0)-1\bigr)\Bigr)+y \text. \\ P(0, 0): f(0)+f\bigl(-f(0)\bigr) = f(0) \implies f\bigl(-f(0)\bigr)=0 \text. \\ \text{Assume } f(a)=1 \text. \\ P(x, a): f(x+a) + f\bigl(-f(x)\bigr)=f\bigl(af(x)-a\bigr)+a \text. \\ x = 0 \text; \ 1 = f\Bigl(a\bigl(f(0)-1\bigr)\Bigr)+a \text. \\ x = a \text; \ f(2a)+f(-1) = f(0)+a \text. \\ $$</span></p>
Knas
634,505
<p>Throughout my answer, the expression <span class="math-container">$(n)(x_1, x_2, \ldots, x_n)$</span> will mean the substitution in the equation with the number <span class="math-container">$n$</span>.</p> <p>We want to find all functions <span class="math-container">$f:\mathbb{R} \rightarrow \mathbb{R}$</span> such that for every <span class="math-container">$x, y \in \mathbb{R}$</span> following holds <span class="math-container">$$ f(xf(y)+y)+f(−f(x))=f(yf(x)−y)+y \label{eq:main} \tag*{$(1)(x, y)$}$$</span></p> <p>Substitutions <span class="math-container">$(1)(0, 0),\ (1)(0, y),\ (1)(-y, y),\ (1)(y, 0)$</span> lead us to <span class="math-container">\begin{gather} f(-f(0)) = 0 \tag*{$(2)$} \\[0.35em] f(y) = f\big(y(f(0) - 1)\big) + y &amp;&amp; \tag*{$(3)(y)$} \\[0.35em] f\big(-y(f(y) - 1)\big)+f(-f(-y)) = f\big(y(f(-y)-1)\big) + y \tag*{$(4)(y)$} \\[0.35em] f(f(0)y) + f(-f(y)) = f(0) \tag*{$(5)(y)$} \end{gather}</span></p> <p>Adding equalities <span class="math-container">$(4)(y)$</span> and <span class="math-container">$(4)(-y)$</span>, we obtain <span class="math-container">$$ f(-f(y)) = -f(-f(-y)) \tag*{$(6)(y)$} $$</span></p> <h2>Case <span class="math-container">$f(0) = 0$</span></h2> <p>If we assume that <span class="math-container">$f(0) = 0$</span>, then <span class="math-container">$(3)(y)$</span> and <span class="math-container">$(5)(y)$</span> will simplify to <span class="math-container">\begin{gather} f(y) = f(-y) + y &amp;&amp; \tag*{$(7)(y)$} \\[0.35em] f(-f(y)) = 0 \tag*{$(8)(y)$} \end{gather}</span></p> <p>Combining <span class="math-container">$(7)(f(y))$</span> and <span class="math-container">$(8)(y)$</span> we have that <span class="math-container">\begin{gather} f(f(y)) = f(y) &amp;&amp; \tag*{$(9)(y)$} \end{gather}</span></p> <p>Then <span class="math-container">$(1)(-f(x), f(y))$</span> is equivalent to <span class="math-container">\begin{gather} f\big(f(y)(1-f(x))\big) = f(y) &amp;&amp; \tag*{$(9)(y)$} \end{gather}</span></p> <p>Now 
we use together <span class="math-container">$(1)(x, f(y)),\ (7)(f(y)(f(x)-1))$</span> and <span class="math-container">$(9)(y)$</span> <span class="math-container">\begin{align*} f\big(f(y)(1+x)\big) &amp;= f\big(f(y)(f(x)-1)\big) + f(y) \\[0.35em] &amp;= f\big(f(y)(1-f(x))\big) + f(y)(f(x)-1) + f(y) \\[0.35em] &amp;= f(y) + f(y)f(x) - f(y) + f(y) \\[0.35em] &amp;= f(y)(1+f(x)) \end{align*}</span></p> <p>If we take <span class="math-container">$x = -1$</span> in last equality and note that <span class="math-container">$f(x) \not\equiv 0$</span> we get that <span class="math-container">$f(-1) = -1$</span>. From <span class="math-container">$(7)(1)$</span> follows that <span class="math-container">\begin{gather} f(1) = 0 \tag*{$(10)(y)$} \end{gather}</span></p> <p><span class="math-container">$(1)(1, -1)$</span> implies <span class="math-container">$f(-2) = -1$</span>, then <span class="math-container">$(7)(2)$</span> implies <span class="math-container">$f(2) = 1$</span> and <span class="math-container">$(9)(2)$</span> leads to contradictions <span class="math-container">$f(1)=1$</span>, so there are no functions with <span class="math-container">$f(0) = 0$</span>.</p> <h2>Case <span class="math-container">$f(0) \neq 0$</span></h2> <p>From <span class="math-container">$(5)(y)$</span> and <span class="math-container">$(6)(y)$</span> we have that: <span class="math-container">\begin{gather} f(-f(0)y) + f(-f(-y)) = f(0) \\[0.35em] f(-f(0)y) - f(-f(y)) = f(0) \\[0.35em] f(-f(0)y) + f(f(0)y) = 2f(0) \\[0.35em] f(-y) + f(y) = 2f(0) \tag*{$(11)(y)$} \end{gather}</span></p> <p>Denote set of zeros of <span class="math-container">$f$</span> as <span class="math-container">$\mathrm{U}$</span>. <span class="math-container">$\mathrm{U}$</span> nonempty due <span class="math-container">$(2)$</span>. 
For any <span class="math-container">$u \in \mathrm{U}$</span> substitution <span class="math-container">$(1)(u,u)$</span> together with <span class="math-container">$(11)(u)$</span> shows us that <span class="math-container">\begin{align*} f(0) &amp;= f(-u) + u \\[0.35em] &amp;= 2f(0) + u \\[0.35em] u &amp;= -f(0) \\[0.35em] \mathrm{U} &amp;= \Big\lbrace -f(0) \Big\rbrace \tag*{$(12)$} \end{align*}</span></p> <p>We denote by <span class="math-container">$\mathrm{Z}$</span> the set <span class="math-container">$y \in \mathbb{R}$</span> such that <span class="math-container">$f(y) = f(0)$</span>. From <span class="math-container">$(11)(y \in \mathrm{Z})$</span> we have that <span class="math-container">\begin{gather} y \in \mathrm{Z} \Rightarrow -y \in \mathrm{Z} \tag*{$(13)(y)$} \end{gather}</span> Now take any <span class="math-container">$y \in \mathrm{Z}$</span>. Then substitution <span class="math-container">$(5)(f(0)^{-1}y)$</span> and <span class="math-container">$(12)$</span> leads to <span class="math-container">\begin{gather} f(-f(f(0)^{-1}y)) = 0 \\[0.35em] -f(f(0)^{-1}y) = -f(0) \\[0.35em] f(f(0)^{-1}y) = f(0) \\[0.35em] y \in \mathrm{Z} \Rightarrow f(0)^{-1}y \in \mathrm{Z} \tag*{$(14)(y)$} \end{gather}</span></p> <p>Let <span class="math-container">$x, y \in \mathrm{Z}$</span>. Then <span class="math-container">$(1)(f(0)^{-1}x, y)$</span> with <span class="math-container">$(3)(y)$</span> implies <span class="math-container">\begin{align*} f(x + y) &amp;= f\big(y(f(0)-1)\big) + y \\[0.35em] &amp;= f(y) \\[0.35em] &amp;= f(0) \\[0.35em] x, y \in \mathrm{Z} &amp;\Rightarrow x + y \in \mathrm{Z} \tag*{$(15)(x, y)$} \end{align*}</span></p> <p><span class="math-container">$(3)(-f(0))$</span> equivalent to <span class="math-container">$-f(0)(f(0)-1) \in \mathrm{Z}$</span>. 
Then by <span class="math-container">$(13)$</span> and <span class="math-container">$(14)$</span> both <span class="math-container">$f(0)(f(0)-1)$</span> and <span class="math-container">$1-f(0)$</span> also lie in <span class="math-container">$\mathrm{Z}$</span>. By <span class="math-container">$(15)$</span>, since <span class="math-container">$(f(0)-1)^2 = f(0)(f(0)-1) + (1-f(0))$</span>, we have <span class="math-container">$(f(0)-1)^2 \in \mathrm{Z}$</span>. If we put <span class="math-container">$y=f(0)-1$</span> in <span class="math-container">$(3)$</span>, we conclude that <span class="math-container">$f(0) = 1$</span>. Then <span class="math-container">$(3)(y)$</span> simplifies to <span class="math-container">$$ f(y) = 1 + y $$</span> The function <span class="math-container">$f(y) = 1 + y$</span> satisfies equation <span class="math-container">$(1)$</span>, so this is the only solution.</p> <p>P. S. English is not my native language. I apologize for any possible mistakes when writing the answer in English.</p>
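The final claim — that $f(y) = 1 + y$ really satisfies the original equation — is easy to confirm mechanically (a check of mine, not part of the proof):

```python
def f(x):
    """The solution found above."""
    return x + 1

def equation_holds(x, y):
    """Compare both sides of f(x f(y) + y) + f(-f(x)) = f(y f(x) - y) + y."""
    lhs = f(x * f(y) + y) + f(-f(x))
    rhs = f(y * f(x) - y) + y
    return lhs == rhs
```

With $f(x)=x+1$ both sides simplify to $xy + y + 1$, so the check holds identically; the test below uses an integer grid to keep the arithmetic exact.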
2,354,467
<p>I am trying to evaluate the following \begin{equation} I(a,b) = \int_{a}^{\frac{a+b}{2}} (x-a)^{\alpha-1} \, x^n \, dx + \int_{\frac{a+b}{2}}^{b} (b-x)^{\alpha-1} \, x^n \, dx, \end{equation} where $0&lt;\alpha&lt;1$. Wolfram alpha gives no solution. I tried integration by parts without success. My problem is that I don't understand well the evaluation of the limit of the upper limit and this integrand.</p>
JJacquelin
108,514
<p>Since the wording of the question was modified, my first answer is no longer valid. So, I post a new answer to the new wording: \begin{equation} I(a,b) = \int_{a}^{\frac{a+b}{2}} (x-a)^{p-1} \, x^n \, dx + \int_{\frac{a+b}{2}}^{b} (b-x)^{p-1} \, x^n \, dx, \end{equation}</p> <p>About the convergence of the first integral:</p> <p>Since $\quad p-1&gt;-1\quad$ the integral is convergent at the lower bound: $$\int_{a}^{X\to\: a} (x-a)^{p-1} \, x^n \, dx \sim a^n\frac{(X-a)^p}{p}$$ This is easy to prove with the change of variable $\quad x=a+\epsilon$</p> <p>Obviously, there is no problem of convergence at the upper bound insofar as $b&gt;a$. So, there is no problem of convergence for the first integral.</p> <p>About the convergence of the second integral:</p> <p>Since $\quad p-1&gt;-1\quad$ the second integral is convergent at the upper bound: $$\int_{X\to \:b}^{b} (b-x)^{p-1} \, x^n \, dx\sim b^n\frac{(b-X)^p}{p}$$ This is easy to prove with the change of variable $\quad x=b-\epsilon$</p> <p>Obviously, there is no problem of convergence at the lower bound. So, there is no problem of convergence for the second integral.</p> <p>NOTE:</p> <p>These integrals cannot be expressed with a finite number of elementary functions. Some possible ways of solving are:</p> <ul> <li><p>Numerical calculus (suggested for technical applications).</p></li> <li><p>Solving in terms of infinite series for theory and limited series in practice.</p></li> <li><p>Solving in terms of special functions: the Beta and Incomplete Beta functions. $$I(a,b)=a^{n+p}\left(\text{B}_{\frac{a+b}{2a}}(n+1,p)- \text{B}(n+1,p)\right) + b^{n+p}\left( \text{B}(n+1,p)-\text{B}_{\frac{a+b}{2b}}(n+1,p)\right) $$</p></li> </ul>
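For the "numerical calculus" route, the endpoint singularities (for $0 &lt; p &lt; 1$) can be removed first with the substitutions $t=(x-a)^p$ and $t=(b-x)^p$, after which both halves merge into one smooth integrand on $[0, T]$ with $T=((b-a)/2)^p$. A stdlib-only sketch (the function name and the use of Simpson's rule are my choices, not from the answer):

```python
def I_numeric(a, b, n, p, N=2000):
    """Evaluate I(a, b) numerically.  After t = (x-a)^p (resp. t = (b-x)^p)
    each half becomes (1/p) * integral of (a + t^(1/p))^n
    (resp. (b - t^(1/p))^n) over [0, T], with no singularity left, so
    composite Simpson's rule (N even) converges quickly."""
    T = ((b - a) / 2.0) ** p

    def g(t):
        u = t ** (1.0 / p)
        return ((a + u) ** n + (b - u) ** n) / p

    h = T / N
    s = g(0.0) + g(T)
    for k in range(1, N):
        s += (4 if k % 2 else 2) * g(k * h)
    return s * h / 3.0
```

For $a=1$, $b=2$, $n=1$, $p=1/2$ the integral can be done by hand and equals $3\sqrt{2}$, which the routine reproduces.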
2,130,397
<p>If I want to find the power series representation of the following function:</p> <p>$$ \ln \frac{1+x}{1-x} $$</p> <p>I understand that it can be written as </p> <p>$$ \ln (1+x) - \ln(1-x) $$</p> <p>And I understand that if I now write in the power series representations for $ln(1+x)$ and $ln(1-x)$:</p> <p>$$\sum_{n=1}^\infty \frac{(-1)^{n-1}x^{n}}{n} - \sum_{n=1}^\infty \frac{(-1)^{n-1}(-x)^{n}}{n} $$</p> <p>My textbook solution does an odd thing where it writes it out as</p> <p>$$\sum_{n=1}^\infty \frac{x^{n}}{n} - \sum_{n=1}^\infty \frac{(-1)^{n-1}(-x)^{n}}{n} $$</p> <p>$$2\sum_{n=1}^\infty \frac{x^{2n-1}}{2n-1} $$</p> <p>I have no idea how it got from the line where I have the power series representation for $ln(1+x)$ and $ln(1-x)$ to the last two lines. If anyone could help me link my part to the textbook solution I would really appreciate it! Thank you! </p>
Community
-1
<p>We have $$I = \int_{0}^{\frac {\pi}{2}} \frac {1}{1+\cos^2 x} \mathrm{d}x = \int_{0}^{\frac {\pi}{2}} \frac {1}{\tan^2 x +2} \sec^2 x \mathrm {d}x = \int_{0}^{\infty} \frac {1}{u^2+2} \mathrm {d}u $$ by substituting $u = \tan x $. Hope you can take it from here. If you want to check, the answer is $\boxed {\frac {\pi}{2\sqrt {2}}} $.</p>
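The boxed value is easy to confirm numerically — the integrand $1/(1+\cos^2 x)$ is smooth on all of $[0,\pi/2]$ (it equals $1$ at $\pi/2$), so plain Simpson's rule applies directly (the code below is my own sketch, not part of the answer):

```python
from math import cos, pi

def integral_value(N=2000):
    """Composite Simpson approximation of the integral of
    1/(1 + cos^2 x) over [0, pi/2]; N must be even."""
    f = lambda x: 1.0 / (1.0 + cos(x) ** 2)
    h = (pi / 2) / N
    s = f(0.0) + f(pi / 2)
    for k in range(1, N):
        s += (4 if k % 2 else 2) * f(k * h)
    return s * h / 3.0
```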
2,130,397
<p>If I want to find the power series representation of the following function:</p> <p>$$ \ln \frac{1+x}{1-x} $$</p> <p>I understand that it can be written as </p> <p>$$ \ln (1+x) - \ln(1-x) $$</p> <p>And I understand that if I now write in the power series representations for $ln(1+x)$ and $ln(1-x)$:</p> <p>$$\sum_{n=1}^\infty \frac{(-1)^{n-1}x^{n}}{n} - \sum_{n=1}^\infty \frac{(-1)^{n-1}(-x)^{n}}{n} $$</p> <p>My textbook solution does an odd thing where it writes it out as</p> <p>$$\sum_{n=1}^\infty \frac{x^{n}}{n} - \sum_{n=1}^\infty \frac{(-1)^{n-1}(-x)^{n}}{n} $$</p> <p>$$2\sum_{n=1}^\infty \frac{x^{2n-1}}{2n-1} $$</p> <p>I have no idea how it got from the line where I have the power series representation for $ln(1+x)$ and $ln(1-x)$ to the last two lines. If anyone could help me link my part to the textbook solution I would really appreciate it! Thank you! </p>
MoNtiDeaD MoonDogs
408,248
<p>Note $\displaystyle \cos x=\frac{1}{\sec x}$.</p> <p>So, $$ \begin{align}I&amp;=\int\frac{1}{\cos^2x+1}dx\\ &amp;=\int\frac{\sec^2x}{\sec^2x+1}dx \\&amp;=\int\frac{\sec^2x}{\tan^2x+2}dx \tag{1}\end{align}$$</p> <p>Now let $u=\tan x\rightarrow du=\sec^2x\,dx$; substituting in $(1)$, we get</p> <p>$$\begin{align}I&amp;=\int\frac{du}{u^2+2}\\&amp;=\frac{1}{\sqrt2}\cdot\arctan{\frac{u}{\sqrt2}}+C\\&amp;=\frac{1}{\sqrt2}\cdot\arctan{\frac{\tan x}{\sqrt2}}+C\end{align}$$</p> <p>Finally, evaluate the improper integral as a limit:</p> <p>$$\begin{align}\int_0^\frac{\pi}{2}\frac{1}{\cos^2x+1}dx&amp;=\lim_{t\rightarrow\frac{\pi}{2}^-}\frac{1}{\sqrt2}\cdot\arctan{\frac{\tan t}{\sqrt2}}-0\\&amp;=\frac{1}{\sqrt2}\cdot\frac{\pi}{2}\\&amp;=\boxed{\frac{\pi}{2\sqrt2}}\end{align}$$</p>
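Returning to the power-series question quoted at the top of this entry, the textbook identity $\ln\frac{1+x}{1-x}=2\sum_{n\ge1}\frac{x^{2n-1}}{2n-1}$ can be spot-checked numerically (my own check; the function name is invented):

```python
from math import log

def series_sum(x, terms=200):
    """Partial sum of 2 * sum_{n>=1} x^(2n-1) / (2n-1), valid for |x| < 1.
    The even powers of the two log series cancel and the odd powers
    double, which is where the factor 2 and the odd exponents come from."""
    return 2.0 * sum(x ** (2 * n - 1) / (2 * n - 1)
                     for n in range(1, terms + 1))
```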
1,525,660
<p><a href="https://i.stack.imgur.com/w2y9k.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/w2y9k.png" alt="image"></a></p> <p>Problem above. (Sorry I can't embed yet and the link seems to be removed when hyperlinked)</p> <p>Hello,</p> <p>Fairly simple (I imagine) question that I am stuck on. The first two parts are fairly straightforward, however I am unsure on how to continue beyond that.</p> <p>How does one find the roots given two roots? I've tried long dividing and resulted in nothing useful; any clever ideas?</p> <p>Also for the final part, would the velocity merely be z'(t) and acceleration z''(t)? (after taking the complex conjugate of the denominator to turn it real)</p> <p>Seems too easy to be the most numerously marked question, hence I feel like something is probably going wrong. Would I take the real parts of the velocity/acceleration after finding z' and z'' to find the magnitude? Or would it be a case of just finding the magnitude in the same sense as you would of an argand diagram (root of [x^2 + y^2]) simply with the variable t in place?</p> <p>Thanks</p> <p>These questions were taken out of a Cambridge 'Maths for Natural Sciences' past exam question if any were curious</p>
Nicholas
282,542
<p>Perhaps it could be easier to expand $(1-x^2)^{-4}$ using the binomial series.</p> <p>$\begin{align} (1-x^2)^{-4} &amp;= 1+4x^2 + \frac{(-4)(-4-1)}{2}(-x^2)^2+\frac{(-4)(-4-1)(-4-2)}{3!}(-x^2)^3+ \dots\\ &amp;=1+4x^2+10x^4+20x^6+35x^8+\dots\\ &amp;=\sum_{n=1}^{\infty} \binom{n+2}{3}x^{2n-2}\\ \end{align}$</p>
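The coefficients in the last line are $\binom{n+2}{3}$ exactly as displayed; both the coefficients and the sum itself can be verified numerically (a sketch of mine, not part of the answer):

```python
from math import comb

def series_value(x, terms=100):
    """Partial sum of sum_{n>=1} C(n+2, 3) * x^(2n-2), the binomial
    series of (1 - x^2)^(-4), valid for |x| < 1."""
    return sum(comb(n + 2, 3) * x ** (2 * n - 2)
               for n in range(1, terms + 1))
```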
1,300,853
<p>Could somebody tell me the exact value of this series? $$ \sum_{k=1}^{\infty} (-1)^k\frac{H_k^{(5)}}{k} $$ where $$ H_k^{(n)}=\sum_{i=1}^{k}\frac{1}{i^n} $$</p> <p>Thanks!</p>
Olivier Oloa
118,798
<p><strong>Hint.</strong> You may write $$ \sum_{k=1}^{\infty} (-1)^k\frac{H_k^{(5)}}{k}=\sum_{k=1}^{\infty} (-1)^k\frac{H_{k-1}^{(5)}}{k}+\sum_{k=1}^{\infty} \frac{(-1)^k}{k^6}=\zeta(\bar{1},5)-\frac{31 \pi ^6}{30240}. $$ I am not sure whether the <a href="http://en.wikipedia.org/wiki/Multiple_zeta_function" rel="nofollow">Multiple Zeta Value</a> $\zeta(\bar{1},5)$ has a closed form in terms of known constants.</p>
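The second piece of the split, $\sum_{k\ge1} (-1)^k/k^6 = -\tfrac{31\pi^6}{30240}$, converges fast enough to confirm in a few lines (a numerical check of mine, not part of the hint):

```python
from math import pi

def eta6_partial(terms=2000):
    """Partial sum of sum_{k>=1} (-1)^k / k^6.  By the alternating-series
    bound, the truncation error is smaller than 1/(terms+1)^6."""
    return sum((-1) ** k / k ** 6 for k in range(1, terms + 1))
```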
24,873
<p>It is very elementary to show that $\mathbb{R}$ isn't homeomorphic to $\mathbb{R}^m$ for $m&gt;1$: subtract a point and use the fact that connectedness is a homeomorphism invariant.</p> <p>Along similar lines, you can show that $\mathbb{R^2}$ isn't homeomorphic to $\mathbb{R}^m$ for $m&gt;2$ by subtracting a point and checking if the resulting space is simply connected. Still straightforward, but a good deal less elementary.</p> <p>However, the general result that $\mathbb{R^n}$ isn't homeomorphic to $\mathbb{R^m}$ for $n\neq m$, though intuitively obvious, is usually proved using sophisticated results from algebraic topology, such as invariance of domain or extensions of the Jordan curve theorem.</p> <p>Is there a more elementary proof of this fact? If not, is there intuition for why a proof is so difficult?</p>
IV_
292,527
<p><a href="https://en.wikipedia.org/wiki/Emanuel_Sperner" rel="nofollow noreferrer">Sperner</a> showed in his doctoral thesis [Sperner 1928] that invariance of open sets, invariance of domain and invariance of dimension can be proved already with elementary combinatorial methods alone.</p> <p>see also: <a href="https://math.stackexchange.com/questions/1197640/elementary-proof-of-topological-invariance-of-dimension-using-brouwers-fixed-po/2316534#2316534">Elementary proof of topological invariance of dimension using Brouwer&#39;s fixed point and invariance of domain theorems?</a></p> <p>[Sperner 1928] Sperner, Emanuel: Neuer Beweis für die Invarianz der Dimensionszahl und des Gebietes. In: Abh. Math. Sem. Univ. Hamburg. Band 6, 1928, 265–272</p> <p>Sperner, Emanuel: Gesammelte Werke. Benz, W.; Karzel, H.; Kreuzer , A. (Hrsg.). Heldermann Verlag, Lemgo, Germany, 2005</p> <p><a href="https://encyclopediaofmath.org/wiki/Sperner_lemma" rel="nofollow noreferrer">Encyclopedia of Mathematics: Sperner lemma</a></p> <p><a href="https://encyclopediaofmath.org/wiki/Brouwer_theorem" rel="nofollow noreferrer">Encyclopedia of Mathematics: Brouwer theorem</a></p> <p><a href="https://en.wikipedia.org/wiki/Sperner%27s_lemma" rel="nofollow noreferrer">Wikipedia: Sperner's lemma</a></p>
1,299,474
<p><img src="https://i.stack.imgur.com/EyYdm.jpg" alt="enter image description here"></p> <p>Here is an attempt at a solution:</p> <p><img src="https://i.stack.imgur.com/IWSah.jpg" alt="enter image description here"></p> <p>Since $f(x)&gt;0$, $f(x)&gt;\delta$ for all x between $1$ and $2$ </p> <p>Is this correct? </p>
Paul
17,980
<p>It seems to have no mistakes. I think it is okay.</p>
1,858,297
<p>Suppose the diameter of a nonempty set $A$ is defined as </p> <p>$$\sigma(A) := \sup_{x,y \in A} d(x,y)$$</p> <p>where $d(x,y)$ is a metric.</p> <p>Is $\sigma(.)$ a 'measurement'? I.e., how do I prove the countable additivity for this particular case?</p>
drhab
75,923
<p>Observe that the diameter of a singleton is $0$, while the diameter of the set $\{x,y\}$ is $d(x,y)&gt;0$ if $x\neq y$. So $\sigma(\{x\}\cup\{y\})=d(x,y)\neq 0=\sigma(\{x\})+\sigma(\{y\})$, and there is no additivity.</p>
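The counterexample in one concrete metric space, $\mathbb{R}$ with $d(x,y)=|x-y|$ (a small demo of mine):

```python
def diam(A):
    """Diameter of a finite nonempty set of reals, with d(x, y) = |x - y|."""
    return max(abs(x - y) for x in A for y in A)

singleton_a = {0.0}
singleton_b = {1.0}
union = singleton_a | singleton_b  # diameter 1, but 0 + 0 = 0: not additive
```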
806,532
<p>This question takes place in a general metric space $X$. </p> <p>Let $x$ be an interior* point of $E \subset X$ iff there exists a deleted neighborhood of $x$ that is contained in $E$. </p> <p>This is like the normal definition of "interior point", except it uses "deleted neighborhood" instead of "neighborhood", thus allowing a point not in $E$ to be an interior* point of $E$.</p> <p>My question is: why is this not the standard definition of "interior point"? I see a couple reasons that it would make a more elegant system.</p> <ol> <li>"Limit point" and "interior* point" are both defined in terms of deleted neighborhoods ($x$ is a limit point of $E$ iff all deleted neighborhoods of $x$ include some point of $E$). This is more symmetrical.</li> <li>(Note: I do not yet have a general/categorical notion of duality) "Limit point" and "interior* point" are more adequately dual, for $x$ is a limit point of $E$ iff $x$ is not an interior* point of the complement of $E$, whereas this does not hold for "limit point" and "interior point".</li> <li>The dual notions of closure and interior are more symmetrically defined using "interior* point". The closure is defined as the <b>union</b> of $E$ and the set of limit points of $E$, and the interior is defined as the <b>intersection</b> of $E$ and the set of interior* points of $E$. The duality between closure and interior is harder to see with the standard definition of interior as the set of interior points of $E$. Also the proof that the complement of the closure of $E$ is the interior of the complement of $E$ reduces to a few applications of DeMorgan's law.</li> </ol> <p>So why do people use "interior point" and not "interior* point"? </p>
Lee Mosher
26,501
<p>A good test of a new definition is its power of expressiveness. What I mean by this is, when introducing the definition into mathematical discourse, does it help you express mathematical ideas or concepts in a way that enhances understanding, aids discovery, quickens comprehension of proofs, and so on? </p> <p>As others have said, $\text{interior}^*$ is unfamiliar, so I do not know whether it will pass this test. If you want to know whether it will, try preparing a few lectures of elementary topology using it. Who knows? Maybe it will catch on.</p>
1,070,870
<p>"Write down (say, as a power series) a holomorphic function $f(z)$ on $D(1, 1)$ which satisfies $f(z)^5 = z$ and $f(1) = 1$. What is the result of analytically continuing $f$ along a path which travels once counterclockwise around the origin, returning to the point $1$? What about if you go $N$ times counterclockwise around the origin, where $N$ is an integer?"</p> <p>For the analytic continuation, I understand how to do it in the way where I explicitly write down square root functions in successive disks around the origin in terms of polar coordinates. Is there a more general/"abstract" way to do it though?</p> <p>Thanks in advance!</p>
megas
191,170
<p>As pointed out in the comments by Franco, you need $m \ge n$. Under this assumption, for real and complex matrices, you could argue based on the (truncated) singular value decomposition.</p> <p>Let $\mathbf{A} = \mathbf{U}\mathbf{\Sigma}\mathbf{V}^{H}$ be the singular value decomposition of $\mathbf{A}$: $\mathbf{U}$ and $\mathbf{V}$ are matrices with orthonormal columns and $\mathbf{\Sigma}$ is an $n \times n$ diagonal matrix with real entries. Recall that the number of nonzero entries in $\mathbf{\Sigma}$ is equal to the rank of $\mathbf{A}$. Then, $$ \mathbf{A^{H}}\mathbf{A} = \mathbf{V}\mathbf{\Sigma}^{T}\mathbf{U}^{H} \mathbf{U}\mathbf{\Sigma}\mathbf{V}^{H} = \mathbf{V}\mathbf{\Sigma}\mathbf{\Sigma}\mathbf{V}^{H} = \mathbf{V}\mathbf{\Sigma}^{2}\mathbf{V}^{H}. $$ Note that $\mathbf{A^{H}}\mathbf{A}$ is a symmetric matrix, and that $\mathbf{V}\mathbf{\Sigma}^{2}\mathbf{V}^{H}$ is its eigenvalue decomposition: the real eigenvalues of $\mathbf{A}^{H}\mathbf{A}$ are on the diagonal of $\mathbf{\Lambda} = \mathbf{\Sigma}^{2}$. Finally, recall that the rank of $\mathbf{A}^{H}\mathbf{A}$ is equal to the number of its nonzero eigenvalues. </p> <p>Since the number of nonzero entries in $\mathbf{\Lambda}$ is exactly equal to the number of nonzero entries in $\Sigma$, we conclude that $$ \text{rank}(\mathbf{A^{H}}\mathbf{A}) = \text{rank}(\mathbf{A}). $$ Finally, note that an $n \times n$ matrix (like $\mathbf{A^{H}}\mathbf{A}$) is invertible if and only if its rank is equal to $n$.</p>
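The rank equality at the end can be checked exactly on small examples; the sketch below (all names are mine, and only real matrices are handled, so $\mathbf{A}^{H} = \mathbf{A}^{T}$) uses exact rational Gaussian elimination to avoid floating-point rank decisions:

```python
from fractions import Fraction

def rank(M):
    """Rank of a matrix (list of rows) via exact Gaussian elimination."""
    M = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        if r == rows:
            break
        pivot = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(r + 1, rows):
            factor = M[i][c] / M[r][c]
            M[i] = [a - factor * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def gram(A):
    """A^H A for a real m x n matrix A (i.e. A^T A), computed exactly."""
    m, n = len(A), len(A[0])
    return [[sum(A[k][i] * A[k][j] for k in range(m)) for j in range(n)]
            for i in range(n)]
```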
3,450,598
<blockquote> <p>Prove that <span class="math-container">$\sum_{i = m}^n a_i + \sum_{i = n + 1}^p a_i = \sum_{i = m}^p a_i$</span>, where <span class="math-container">$m ≤ n&lt;p$</span> are integers, and <span class="math-container">$a_i$</span> is a real number assigned to each integer <span class="math-container">$m ≤ i ≤ p$</span>. (Hint: you might want to use induction)</p> </blockquote> <p>Let's follow the hint and use induction on <span class="math-container">$p-m = k$</span><br> Base case: <span class="math-container">$k = 1$</span>, then <span class="math-container">$p = m + 1$</span> and <span class="math-container">$n = m$</span>. <span class="math-container">$\sum_{i = m}^n a_i + \sum_{i = n + 1}^p a_i = a_m + a_{m+1}$</span> and <span class="math-container">$\sum_{i = m}^p a_i = a_m+a_{m+1}$</span>. Therefore, the right-hand side is equal to the left-hand side.<br> Inductive step: Assume for <span class="math-container">$p-m=k$</span> the statement holds, show for <span class="math-container">$p-m = k + 1$</span>. We know that <span class="math-container">$\sum_{i = m}^n a_i + \sum_{i = n + 1}^{m+k} a_i = \sum_{i = m}^{m+k} a_i$</span>. Now, <span class="math-container">$\sum_{i = m}^n a_i + \sum_{i = n + 1}^{m+k+1} a_i = \sum_{i = m}^n a_i + \sum_{i = n + 1}^{m+k} a_i + a_{m+k+1} = \sum_{i = m}^{m+k} a_i + a_{m+k+1}$</span> by inductive hypothesis. Therefore, we get <span class="math-container">$$\sum_{i = m}^n a_i + \sum_{i = n + 1}^{m+k+1} a_i =\sum_{i = m}^{m+k+1} a_i$$</span></p> <p>Is this proof plausible? At this point, I can use the following facts about finite sums: </p> <p>if <span class="math-container">$ m &lt; n$</span>, then <span class="math-container">$\sum_{i=n}^m a_i= 0$</span><br> if <span class="math-container">$n \ge m - 1$</span>, then <span class="math-container">$\sum_{i=m}^{n+1}a_i = \sum_{i=m}^{n}a_i + a_{n+1}$</span> </p>
SARTHAK GUPTA
293,005
<p>It should be pretty straightforward. </p> <p>The left-hand side is <span class="math-container">$\sum_{i = m}^n a_i + \sum_{i = n + 1}^p a_i$</span> which after expansion (since <span class="math-container">$m,p,n$</span> are finite, we can always expand explicitly) gives us <span class="math-container">$(a_m+a_{m+1}+ \cdots+ a_{n-1}+a_n)+ (a_{n+1}+ \cdots+ a_{p-1}+a_p).$</span> </p> <p>Clubbing together all the terms, we can write it as <span class="math-container">$(a_m+a_{m+1}+ \cdots a_{n-1}+a_n+a_{n+1}+ \cdots+ a_{p-1}+a_p).$</span> which is nothing but <span class="math-container">$ \sum_{i = m}^p a_i.$</span> i.e. right-hand side.</p> <p>I think it qualifies as proof.</p>
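The identity being proved is also trivial to sanity-check by direct computation (a throwaway check of mine, with exact integer data to avoid rounding):

```python
def split_sum_holds(a, m, n, p):
    """Check sum_{i=m}^{n} a_i + sum_{i=n+1}^{p} a_i == sum_{i=m}^{p} a_i
    for a dict a mapping each integer m <= i <= p to a number."""
    left = (sum(a[i] for i in range(m, n + 1))
            + sum(a[i] for i in range(n + 1, p + 1)))
    right = sum(a[i] for i in range(m, p + 1))
    return left == right

sample = {i: i * i - 3 * i for i in range(-2, 10)}
```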
280,393
<p>I want to show that $(3, \sqrt{15})$ is not a principal ideal in the ring $ R = \mathbb{Z}[\sqrt{15}]$ with norm $N(a + b \sqrt{15}) = a^2 - 15b^2$.</p> <p>My attempt:</p> <p>Suppose $(3, \sqrt{15}) = (x) $</p> <p>Then $3 = x r_1$ and $\sqrt{15} = x r_2$, $r_1, r_2 \in R$.</p> <p>$N(3) = 9 = N(x) N(r_1)$ and $N(\sqrt{15}) = -15 = N(x)N(r_2)$, so $N(x) = 3$ or $-3$ or $1$ or $-1$.</p> <p>Any ideas on how to continue? Thanks.</p>
Gerry Myerson
8,269
<p>If the norm of $x$ is $\pm1$ then $x$ is a unit. </p> <p>If the norm of $x$ is $\pm3$, you get a contradiction from working modulo $5$. </p>
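The "working modulo 5" step can be made explicit by brute force: $a^2-15b^2 \equiv a^2 \pmod 5$, and the squares mod $5$ are $\{0,1,4\}$, which excludes both $3$ and $-3 \equiv 2$ (a small check of mine):

```python
# All residues of a^2 - 15*b^2 modulo 5, with a, b over a full residue system.
residues = {(a * a - 15 * b * b) % 5 for a in range(5) for b in range(5)}
# N(x) = 3 would need residue 3; N(x) = -3 would need residue -3 = 2 (mod 5).
```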
4,124,324
<p>I am trying to find the complex function <span class="math-container">$f(z)$</span> whose derivative equals the complex conjugate of its reciprocal:</p> <p><span class="math-container">$$\dfrac{\mathrm{d} f(z)}{\mathrm{d} z} = \dfrac{1}{f(z)^*}$$</span></p> <p>which is equivalent to</p> <p><span class="math-container">$$ f(z)' f(z)^* = 1 $$</span></p> <p>I know that for <span class="math-container">$f(z)' f(z) = 1$</span> the solution is simply <span class="math-container">$\pm \sqrt{2 z + c }$</span>. But the above turns out to be a bit trickier...</p>
José Carlos Santos
446,262
<p>There is no such function, at least if its domain is a non-empty open subset <span class="math-container">$D$</span> of <span class="math-container">$\Bbb C$</span>. Suppose otherwise. Take <span class="math-container">$w\in D$</span> and take <span class="math-container">$r&gt;0$</span> such that <span class="math-container">$D_r(w)\subset D$</span>. Then, on <span class="math-container">$D_r(w)$</span>, <span class="math-container">$f$</span> is an analytic function and so is <span class="math-container">$f'$</span>. But <span class="math-container">$f'=1\left/\overline f\right.$</span>, so <span class="math-container">$f'$</span> never vanishes and <span class="math-container">$\overline f=1\left/f'\right.$</span> is analytic too. But the only case in which both <span class="math-container">$f$</span> and <span class="math-container">$\overline f$</span> are analytic is when <span class="math-container">$f$</span> is constant. But then <span class="math-container">$f'$</span> is the null function, and therefore it cannot be equal to <span class="math-container">$1\left/\overline f\right.$</span>.</p>
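<p>For completeness, the fact used at the end (that $f$ and $\overline f$ both analytic forces $f$ to be constant) follows from the Cauchy–Riemann equations; a standard sketch:</p>

```latex
Write $f=u+iv$. Analyticity of $f$ gives the Cauchy--Riemann equations
$u_x=v_y$, $u_y=-v_x$; analyticity of $\overline f=u-iv$ gives
$u_x=-v_y$, $u_y=v_x$. Adding the corresponding equations yields
$u_x=u_y=v_x=v_y=0$ on the connected disc $D_r(w)$, so $f$ is constant there.
```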
178,319
<p>I asked this initially in <a href="https://math.stackexchange.com/questions/894399/identities-that-connect-antipode-with-multiplication-and-comultiplication">math.stackexchange</a>:</p> <p>The group algebra $k(G)$ of any group $G$ satisfies, as a Hopf algebra, the following identities: $$ (S\otimes S)\circ \Delta=\sigma\circ\Delta\circ S $$ $$ \nabla\circ (S\otimes S)=S\circ\nabla\circ\sigma $$ where $S$ is the antipode, $\Delta$, the comultiplication, $\nabla$, the multiplication, and $\sigma:x\otimes y\mapsto y\otimes x$. </p> <p>Is this valid for all Hopf algebras (in any braided monoidal category) or only for some special class?</p>
Zahlendreher
33,854
<p>This also holds in an arbitrary braided monoidal category and is not hard to see. See for example Majid: Foundations of quantum group theory. He gives a graphical calculus proof of this in Figure 9.14. Strictly speaking, you need to turn this proof upside down (i.e. dualize) to get the corresponding identity for the comultiplication. There are two notions of anti-(co)algebra homomorphism in a braided monoidal category. One involving the braiding, one its inverse. For example, $S^{-1}$ is also an anti-(co)algebra morphism but involves the inverse braiding.</p>
139,954
<p>I'm having trouble understanding this question.</p> <p>We have a path <span class="math-container">$h$</span> in <span class="math-container">$X$</span> from <span class="math-container">$x_0$</span> to <span class="math-container">$x_1$</span> and <span class="math-container">$\overline{h}$</span> its inverse path. Then a map <span class="math-container">$\beta _h:\pi_1(X,x_1)\to \pi _1(X,x_0)$</span> defined by <span class="math-container">$\beta _h[f]=\left [h\circ f\circ \overline{h}\right ]$</span>, for every path <span class="math-container">$f$</span> in <span class="math-container">$X$</span>.</p> <p>The question is to show that <span class="math-container">$\beta _h$</span> depends only on the homotopy class of <span class="math-container">$h$</span>.</p> <p>Firstly, it says for every path <span class="math-container">$f$</span> in <span class="math-container">$X$</span>, but surely <span class="math-container">$f$</span> has to be a loop or you can't form <span class="math-container">$\left [h\circ f\circ \overline{h}\right ]$</span>?</p> <p>And also, I don't understand why it depends on the homotopy class of <span class="math-container">$h$</span>, when <span class="math-container">$\left [h\circ f\circ \overline{h}\right ]$</span> is the path going from <span class="math-container">$x_0$</span> to <span class="math-container">$x_1$</span>, around <span class="math-container">$f$</span>, then back to <span class="math-container">$x_0$</span>, why does the homotopy class of <span class="math-container">$h$</span> matter? In general I don't think I fully understand what this map <span class="math-container">$\beta _h$</span> is and would like someone to help me out. Thanks.</p>
Michael Albanese
39,599
<p><em>This is effectively an extended comment.</em></p> <p>The question is for every path $h$, not $f$. As $[f] \in \pi_1(X, x_1)$, $f$ is a loop in $X$ based at $x_1$, not a path from $x_0$ to $x_1$.</p> <p>Instead of writing $h\circ f\circ\bar{h}$ you should write $h\cdot f\cdot\bar{h}$ because you are not composing the maps $h$, $f$, and $\bar{h}$, which is what the symbol $\circ$ is usually reserved for. Also, when using composition, we work right to left (i.e. $f\circ g\circ h$ means apply $h$, then apply $g$, then apply $f$), but with concatenation of paths we work left to right (i.e. $f\cdot g\cdot h$ means travel along the path $f$, then the path $g$, then the path $h$).</p> <p>As Arturo pointed out, you need to show that if $h$ and $k$ are homotopic paths from $x_0$ to $x_1$, then $h\cdot f\cdot\bar{h}$ and $k\cdot f\cdot\bar{k}$ are homotopic loops based at $x_0$.</p>
139,954
<p>I'm having trouble understanding this question.</p> <p>We have a path <span class="math-container">$h$</span> in <span class="math-container">$X$</span> from <span class="math-container">$x_0$</span> to <span class="math-container">$x_1$</span> and <span class="math-container">$\overline{h}$</span> its inverse path. Then a map <span class="math-container">$\beta _h:\pi_1(X,x_1)\to \pi _1(X,x_0)$</span> defined by <span class="math-container">$\beta _h[f]=\left [h\circ f\circ \overline{h}\right ]$</span>, for every path <span class="math-container">$f$</span> in <span class="math-container">$X$</span>.</p> <p>The question is to show that <span class="math-container">$\beta _h$</span> depends only on the homotopy class of <span class="math-container">$h$</span>.</p> <p>Firstly, it says for every path <span class="math-container">$f$</span> in <span class="math-container">$X$</span>, but surely <span class="math-container">$f$</span> has to be a loop or you can't form <span class="math-container">$\left [h\circ f\circ \overline{h}\right ]$</span>?</p> <p>And also, I don't understand why it depends on the homotopy class of <span class="math-container">$h$</span>, when <span class="math-container">$\left [h\circ f\circ \overline{h}\right ]$</span> is the path going from <span class="math-container">$x_0$</span> to <span class="math-container">$x_1$</span>, around <span class="math-container">$f$</span>, then back to <span class="math-container">$x_0$</span>, why does the homotopy class of <span class="math-container">$h$</span> matter? In general I don't think I fully understand what this map <span class="math-container">$\beta _h$</span> is and would like someone to help me out. Thanks.</p>
Dasheng Wang
616,946
<p>Actually <span class="math-container">$\beta_h$</span> does depend on the homotopy class of <span class="math-container">$h$</span>. Note that <span class="math-container">$\beta_h$</span> is an isomorphism between <span class="math-container">$\pi_1(X,x_0)$</span> and <span class="math-container">$\pi_1(X,x_1)$</span> but there could be many isomorphisms between the two groups. For example, if <span class="math-container">$X$</span> is a Torus and <span class="math-container">$x_0,x_1$</span> are two points on it, then there are two ways to move <span class="math-container">$x_0$</span> to <span class="math-container">$x_1$</span>:</p> <p><a href="https://i.stack.imgur.com/rrf0tm.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rrf0tm.jpg" alt="enter image description here" /></a></p> <p>The first one corresponds to the identity automorphism of <span class="math-container">$\mathbb Z\times \mathbb Z$</span> and the second one corresponds to the automorphism that swaps the coordinates.</p>
3,673,014
<p>If you take a <span class="math-container">$2r\times 2r\times 2r$</span> cube, divide it into 27 equal cubes, and then remove the middle cube together with all the "axis" cubes (all the cubes which are straight left, straight right, straight up, etc. from the middle cube), then divide each remaining cube into 125 equal cubes and again remove the middle and axis cubes, and repeat this process for every odd number to the power of three, you will get a nice fractal cube.</p> <p>If we try to work out the volume of this cube we get (after some short algebra):<span class="math-container">$$V=8r^3\prod_{n=1}^{\infty}\frac{(2n)^2(2n+3)}{(2n+1)^3}=8r^3\frac{2\times2\times\not5\times4\times4\times\not7\times6\times6\times\dots}{3\times3\times3\times\not5\times5\times5\times\not7\times7\times7\times\dots}$$</span> Now, we extract the Wallis product and get<span class="math-container">$$V=8r^3\frac{\pi}6,$$</span>which is the volume of a ball with radius <span class="math-container">$r$</span>(!!!)</p> <p>Now I want to know what is the fractal dimension of this sphere cube.</p>
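<p>For what it's worth, the infinite product can be checked numerically (an added illustration):</p>

```python
import math

# Partial products of (2n)^2 (2n+3) / (2n+1)^3 should approach pi/6 ≈ 0.5236;
# the relative error of the N-th partial product behaves like 3/(4N).
prod = 1.0
for n in range(1, 200_000):
    prod *= (2 * n) ** 2 * (2 * n + 3) / (2 * n + 1) ** 3

print(prod, math.pi / 6)
assert abs(prod - math.pi / 6) < 1e-4
```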
Dr. Richard Klitzing
518,676
<ul> <li><span class="math-container">$v_1, v_2, v_3$</span> belong to <span class="math-container">$x_3=0$</span></li> <li><span class="math-container">$v_1, v_2, v_4$</span> belong to <span class="math-container">$x_2=x_3$</span></li> <li><span class="math-container">$v_1, v_3, v_4$</span> belong to <span class="math-container">$x_1=0$</span></li> <li><span class="math-container">$v_2, v_3, v_4$</span> belong to <span class="math-container">$x_1+x_2=1$</span></li> </ul> <p>ain't it?</p> <p>--- rk</p>
185,766
<p>After studying a general linear algebra course, how would an advanced linear algebra course differ from the general course? </p> <p>And would an advanced linear algebra course be taught in graduate schools?</p>
Gerry Myerson
8,269
<p>Different universities will teach different things under the heading "advanced linear algebra", and at different levels. I would suggest you go to a few university websites and see what they have on offer and what the contents are. </p> <p>At my university, we teach a 3rd-year undergraduate course which is half Galois Theory, half Numerical Linear Algebra. Macquarie University, Math 338. </p>
2,317,391
<p>This question popped up somewhere on the internet and I thought it was interesting. I attempted to solve it but I don't know if it is correct.</p> <blockquote> <p>Find the derivative of $$F(x)=\int_{\cos{x^3}}^{\int_{1}^{x} {1/(1+t^2)dt}} {\sin{w} dw}$$</p> </blockquote> <p>$$\begin{align} \implies F(x) &amp; =-\cos{w}]_{\cos{x^3}}^{\arctan{x}-\frac{\pi}{4}}\\ &amp; = -\cos{(\arctan{x}-\frac{\pi}{4})}+\cos{(\cos{x^3})}\\ \implies F'(x) &amp; = \frac{1}{1+x^2} \sin{(\arctan{x}-\frac{\pi}{4}})+3x^2\sin{(x^3)} \sin{(\cos{(x^3)})}\\ \end{align}$$</p> <p>Is it this simple? Or is there something I should know before solving this that changes the normal differentiation and integration techniques?</p>
Community
-1
<p>Yes, it's that simple. Though, perhaps, you were meant to use the formula $$\frac{d}{dx}\left[\int_{a(x)}^{b(x)}f(t)\,dt\right]=b'(x)f(b(x))-a'(x)f(a(x))$$ which holds for continuous $f$ and differentiable $a$ and $b$.</p> <p>Another possible simplification is using the identities $\cos\arctan x=\frac{1}{\sqrt{1+x^2}}$ and $\sin\arctan x=\frac{x}{\sqrt{1+x^2}}$ and expanding the various $\cos(\arctan x+\alpha)$ and $\sin(\arctan x +\alpha)$.</p>
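<p>A numerical spot-check of the result (an added illustration, not part of the original answer): compare the closed form for $F$ and the claimed derivative against a central-difference approximation.</p>

```python
import math

def F(x):
    # F(x) = -cos(arctan x - pi/4) + cos(cos(x^3)), from the antiderivative of sin
    return -math.cos(math.atan(x) - math.pi / 4) + math.cos(math.cos(x ** 3))

def F_prime(x):
    # the claimed derivative
    return (math.sin(math.atan(x) - math.pi / 4) / (1 + x ** 2)
            + 3 * x ** 2 * math.sin(x ** 3) * math.sin(math.cos(x ** 3)))

h = 1e-6
for x in [0.3, 0.7, 1.2, -0.5]:
    numeric = (F(x + h) - F(x - h)) / (2 * h)
    assert abs(numeric - F_prime(x)) < 1e-6
print("closed-form derivative matches numerics")
```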
1,230,112
<p>Given a vector $\mathbf{x} \in \mathbb{R}^n$, a scalar $r\gt 0$ and an invertible matrix $\mathbf{A} \in \mathbb{R}^{n\times n}$, I'd like to maximize one of the components $x_\alpha$ constrained by $\mathbf{x}^T\mathbf{A}^T\mathbf{A}\mathbf{x}=r^2$.</p> <p>I tried to do this with tensor algebra but I'm pretty new to that and while I did get a result, I'm almost certain that it is wrong. I used some steps which I think are dodgy at best, so I have a rough idea where the problem(s) might lie, however I do not know how to fix them. Here is what I have thus far:</p> <ol> <li>rewriting $\mathbf{x}^T\mathbf{A}^T\mathbf{A}\mathbf{x}=r^2$ in Tensor notation:</li> </ol> <p>$$x_i a_{ji} a_{jk} x_k = r^2$$</p> <ol start="2"> <li>Building a Lagrangian:</li> </ol> <p>$$L\left(\mathbf{x},\lambda\right)=x_\alpha + \lambda \left(x_i a_{ji} a_{jk} x_k-r^2\right)$$</p> <p>The $x_\alpha$ here is the component I want to eventually maximize. This is where the dodginess begins: The indices aren't balanced. I add a component of a vector to a scalar. However, since I actually want a component of a vector (effectively a scalar) in the end, and not a full vector, it seemed ok to me in this case.</p> <ol start="3"> <li>taking the gradient of $L$:</li> </ol> <p>$$ \frac{\partial L}{\partial x_l} = \delta_{l\alpha}+\lambda a_{ji} a_{jk} \left( \delta_{l k} x_i + \delta_{l i} x_k \right) \\ \frac{\partial L}{\partial x_l} = \delta_{l\alpha} +\lambda a_{j l} \left( a_{j i} x_i + a_{j k} x_k \right) = 0$$</p> <ol start="4"> <li>Solving for $x_m$ (while ramping up the dodginess):</li> </ol> <p>$$ \delta_{l\alpha} +\lambda a_{j l} \left( a_{j i} \delta_{i m} x_m + a_{j k} \delta_{k m} x_m \right) = 0\\ \delta_{l\alpha} +\lambda a_{j l} \left( a_{j m} + a_{j m} \right) x_m = 0 \\ x_m = -\frac{\delta_{l\alpha}}{2\lambda a_{jl} a_{jm}}\\ x_m = -\frac{1}{2\lambda a_{j\alpha} a_{j m}}$$</p> <p>Am I actually allowed to divide by those matrix components like that? What about zero-components? 
And then, am I allowed to simplify $\frac{\delta_{i j}}{a_{j k}}=\frac{1}{a_{i k}}$?</p> <ol start="5"> <li>Plugging that into my constraint and solving for $\lambda$:</li> </ol> <p>$$ x_i a_{j i} a_{j k} x_k = r^2 \\ \left(-\frac{1}{2\lambda a_{l\alpha}a_{l i}}\right)\left(-\frac{1}{2\lambda a_{m \alpha} a_{m k}}\right)a_{ji}a_{jk}=r^2\\ \frac{a_{j i}a_{j k}}{4 \lambda^2 a_{l\alpha} a_{m\alpha} a_{l i} a_{m k}}=r^2\\ \lambda=\pm\frac{1}{2r}\sqrt{\frac{a_{ji}a_{jk}}{a_{l\alpha}a_{m\alpha}a_{li}a_{mk}}}$$</p> <ol start="6"> <li>Plugging that back into $x_\beta$</li> </ol> <p>$$x_\beta = -\frac{1}{\pm\frac{1}{r}\sqrt{\frac{a_{ji}a_{jk}}{a_{l\alpha}a_{m\alpha}a_{li}a_{mk}}} a_{n\alpha} a_{n\beta}}\\ x_\beta=\mp\frac{r}{a_{n\alpha} a_{n\beta}}\sqrt{\frac{a_{l\alpha}a_{m\alpha}a_{li}a_{mk}}{a_{ji}a_{jk}}}$$</p> <ol start="7"> <li>Sanity check, using $\alpha = \beta$ and $\mathbf{A}= \delta_{ij}$:</li> </ol> <p>$$x_\alpha=\mp\frac{r}{\delta_{n\alpha}^2}\sqrt{\frac{\delta_{l\alpha}\delta_{m\alpha}\delta_{li}\delta_{mk}}{\delta_{ji}\delta_{jk}}}\\ x_\alpha=\mp\frac{r}{\delta_{nn}}\sqrt{\frac{\delta_{lm}\delta_{li}\delta_{mk}}{\delta_{ik}}}\\ x_\alpha=\mp\frac{r}{\delta_{nn}}\sqrt{\frac{\delta_{ik}}{\delta_{ik}}}\\ x_\alpha=\mp\frac{r}{\delta_{nn}}$$</p> <p>This result is obviously incorrect: if $\mathbf{A}$ is the identity matrix, the result should simply be $x_\alpha=\pm r$ (and the other coordinates should all be $0$). Though it's so close (just incorrect by a factor dependent on the dimension of the involved vector space) that I must assume what I did isn't complete nonsense. So, where did I go off the rails and what would be the correct way to do this?</p> <p>Also, with all those indices, I have a hard time turning that result back into usual matrix notation. If that's possible, what would that look like?</p>
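<p>One way to sanity-check the calculation numerically, in matrix notation (this is my own sketch, not an established result): stationarity of $L$ gives $e_\alpha + 2\lambda \mathbf{A}^T\mathbf{A}\,\mathbf{x} = 0$, with $e_\alpha$ the $\alpha$-th standard basis vector, hence $\mathbf{x}^* \propto (\mathbf{A}^T\mathbf{A})^{-1}e_\alpha$ and $\max x_\alpha = r\sqrt{\big((\mathbf{A}^T\mathbf{A})^{-1}\big)_{\alpha\alpha}}$, which for $\mathbf{A}=\mathbf{I}$ reduces to $\pm r$ as expected. A brute-force check for a hypothetical $2\times2$ example:</p>

```python
import math

# Made-up 2x2 example: maximize x[0] subject to |A x| = r.
A = [[2.0, 1.0], [0.0, 3.0]]
r = 1.0

# M = A^T A and the (0,0) entry of its inverse, by hand for the 2x2 case.
M00 = A[0][0] ** 2 + A[1][0] ** 2
M01 = A[0][0] * A[0][1] + A[1][0] * A[1][1]
M11 = A[0][1] ** 2 + A[1][1] ** 2
det_M = M00 * M11 - M01 * M01
closed_form = r * math.sqrt(M11 / det_M)          # r * sqrt((M^{-1})_{00})

# Brute force: the constraint set is x = A^{-1} (r cos t, r sin t).
det_A = A[0][0] * A[1][1] - A[0][1] * A[1][0]
best = -float("inf")
steps = 400_000
for k in range(steps):
    t = 2 * math.pi * k / steps
    z0, z1 = r * math.cos(t), r * math.sin(t)
    x0 = (A[1][1] * z0 - A[0][1] * z1) / det_A    # first component of A^{-1} z
    best = max(best, x0)

print(best, closed_form)
assert abs(best - closed_form) < 1e-6
```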
Nico
101,332
<p>The trick I used to memorize them actually stemmed from formal logic (which you may or may not have had any exposure to):</p>

<p>The symbol $\land$ is a way to symbolize the binary connective "and". Notice it looks like a "pointy" $\cap$. Similarly $\lor$ (or) looks similar to $\cup$.</p>

<p>Now, $$x\in A\cap B$$ can be read as $$x\text{ is in } A \textbf{ and } B$$ and $$x\in A\cup B$$ is $$x\text{ is in } A \textbf{ or } B$$</p>

<p>This might actually be why the symbols look similar.</p>
1,618,042
<p>Is there any operation that makes the set of primes, i.e. $\{2, 3, 5, 7, \dots\}$, a group with identity $2$?</p>
Sean English
220,739
<p>There are two ways to interpret your question.</p>

<ol>
<li><p>If you want the group to be a subgroup of $\mathbb{Z}$ with the usual addition, then no. To see this, all we need to observe is that $3$ would have no inverse: its additive inverse $-3$ is not a prime.</p></li>
<li><p>If you just want the group to have as its underlying set the set of all primes, then yes. Since there are countably infinitely many primes, there is a bijection between the set of all primes and the set of integers, say $f$, where $f(2)=0$. Then we can define multiplication of two primes $a$ and $b$ by $$a\ast b=f^{-1}(f(a)+f(b)).$$ It isn't hard to prove this will be a group isomorphic to $(\mathbb{Z},+)$ with $2$ as the identity.</p></li>
</ol>
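<p>The transported operation can be made concrete (an illustrative implementation, not from the original answer): enumerate the primes $q_0=2, q_1=3, q_2=5, \dots$ and take $f(q_n)=g(n)$, where $g(0)=0$, $g(2k-1)=k$, $g(2k)=-k$ is a bijection from $\mathbb{N}$ to $\mathbb{Z}$, so that $f(2)=0$.</p>

```python
def primes_up_to(limit):
    # simple sieve of Eratosthenes
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            for j in range(i * i, limit + 1, i):
                sieve[j] = False
    return [p for p, is_p in enumerate(sieve) if is_p]

PRIMES = primes_up_to(10_000)                # q_0 = 2, q_1 = 3, q_2 = 5, ...
INDEX = {p: n for n, p in enumerate(PRIMES)}

def g(n):
    # bijection N -> Z with g(0) = 0, g(2k-1) = k, g(2k) = -k
    return (n + 1) // 2 if n % 2 else -(n // 2)

def g_inv(z):
    return 2 * z - 1 if z > 0 else -2 * z

def star(a, b):
    # a * b := f^{-1}(f(a) + f(b)), where f(q_n) = g(n); valid as long as
    # the result stays inside the finite prime table
    return PRIMES[g_inv(g(INDEX[a]) + g(INDEX[b]))]

assert star(2, 7) == 7 and star(7, 2) == 7           # 2 is the identity
assert star(3, 5) == 2                               # 5 is the inverse of 3
assert star(star(3, 7), 11) == star(3, star(7, 11))  # associativity (sample)
print("group axioms hold on samples")
```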
3,425,373
<p>Consider:</p> <p><span class="math-container">$$ 1+1/2^2+2/3^2+1/4^2+2/5^2+1/6^2+...$$</span></p> <p>Does this sum have a closed form?</p> <p>If all the numerators are <span class="math-container">$1$</span> then it does have a closed form. </p>
Community
-1
<p>You have </p>

<p><span class="math-container">$$1+\dfrac{2-1}{2^2}+\dfrac2{3^2}+\dfrac{2-1}{4^2}+\dfrac2{5^2}+\cdots$$</span></p>

<p>or</p>

<p><span class="math-container">$$2\left(1+\dfrac1{2^2}+\dfrac1{3^2}+\dfrac1{4^2}+\dfrac1{5^2}+\cdots\right)-1-\left(\dfrac1{2^2}+\dfrac1{4^2}+\dfrac1{6^2}+\dfrac1{8^2}+\cdots\right)$$</span></p>

<p>which is</p>

<p><span class="math-container">$$2\left(1+\dfrac1{2^2}+\dfrac1{3^2}+\dfrac1{4^2}+\cdots\right)-1-\frac1{2^2}\left(1+\dfrac1{2^2}+\dfrac1{3^2}+\dfrac1{4^2}+\cdots\right).$$</span></p>

<p>The rest is obvious: since <span class="math-container">$\sum_{n\ge1}\frac1{n^2}=\frac{\pi^2}6$</span>, the sum equals <span class="math-container">$\frac74\cdot\frac{\pi^2}6-1=\frac{7\pi^2}{24}-1$</span>.</p>
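<p>For the record, a numerical check that the decomposition gives $\frac74\cdot\frac{\pi^2}{6}-1 = \frac{7\pi^2}{24}-1$ (an added illustration):</p>

```python
import math

# numerators: 1 for n = 1 and for even n, 2 for odd n >= 3
def coeff(n):
    return 2 if (n % 2 == 1 and n >= 3) else 1

partial = sum(coeff(n) / n ** 2 for n in range(1, 1_000_000))
closed_form = 7 * math.pi ** 2 / 24 - 1

print(partial, closed_form)           # tail of the series is below 2e-6 here
assert abs(partial - closed_form) < 1e-4
```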
3,231,387
<p>I have been given the following quadratic equation and am asked to find the range of its roots <span class="math-container">$\alpha$</span> and <span class="math-container">$\beta$</span>, where <span class="math-container">$\alpha&gt;\beta$</span>: <span class="math-container">$$(k+1)x^2 - (20k+14)x + 91k +40 =0,$$</span> where <span class="math-container">$k&gt;0$</span>.<br><br> Here's my approach. <br><br> I applied the quadratic formula for the roots and got <span class="math-container">$$\alpha=\frac{(10k+7) -3\sqrt{k^2+k+1}}{k+1}$$</span> Similarly <span class="math-container">$$\beta=\frac{(10k+7)+3\sqrt{k^2+k+1}}{k+1}$$</span> But how do I find the range? Please help.</p>
Shantam Srivastava
675,373
<p>We know from the question that <span class="math-container">$ \alpha &gt; \beta $</span>, so, using the quadratic formula, the root with the positive sign will be <span class="math-container">$ \alpha $</span> and the one with the negative sign will be <span class="math-container">$ \beta $</span>, which you have reversed in your description.</p>

<p><span class="math-container">$ \alpha = \frac{10k+7 + 3\sqrt{k^2 + k + 1}}{k + 1}$</span></p>

<p>and the other root will be <span class="math-container">$\beta$</span>.</p>

<p>After that, plot the graph of <span class="math-container">$ \alpha $</span> versus <span class="math-container">$k$</span>, with <span class="math-container">$k$</span> on the x-axis. The supremum of that function for <span class="math-container">$k &gt; 0$</span> will be the upper limit of the range of the root (it is approached but not attained).</p>

<p>Similarly, plot <span class="math-container">$ \beta $</span> versus <span class="math-container">$k$</span>, with <span class="math-container">$k$</span> on the x-axis. The infimum of that function for <span class="math-container">$k &gt; 0$</span> will be the lower limit of the range of the root. </p>
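<p>Carrying this plan out numerically (an added illustration; the endpoint values $\alpha\in(10,13)$ and $\beta\in(4,7)$ come from evaluating the root formulas as $k\to0^+$ and $k\to\infty$):</p>

```python
import math

def alpha(k):
    return (10 * k + 7 + 3 * math.sqrt(k * k + k + 1)) / (k + 1)

def beta(k):
    return (10 * k + 7 - 3 * math.sqrt(k * k + k + 1)) / (k + 1)

# sample k > 0 on a log-spaced grid from 1e-4 to 1e4
ks = [10 ** (e / 100) for e in range(-400, 401)]
alphas = [alpha(k) for k in ks]
betas = [beta(k) for k in ks]

assert all(10 < a < 13 for a in alphas)
assert all(4 < b < 7 for b in betas)
# the bounds are approached at the ends of the k-range
assert min(alphas) < 10.01 and max(alphas) > 12.99
assert min(betas) < 4.01 and max(betas) > 6.99
print("alpha in (10, 13), beta in (4, 7)")
```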
3,655,545
<p>What is the asymptotic behaviour of the difference <span class="math-container">$$ c_j - c_{j+1} $$</span> for <span class="math-container">$j\rightarrow \infty$</span> if <span class="math-container">$(c_j)_{j\in\mathbb{N}}$</span> is a null sequence?</p>
Obriareos
210,676
<p>Oh..., it was obvious - now I have it.</p>

<p>Since <span class="math-container">$(c_j - c_{j+1})/c_j = 1-c_{j+1}/c_j$</span>, the difference behaves asymptotically like <span class="math-container">$c_j$</span> exactly when <span class="math-container">$c_{j+1}/c_j\rightarrow 0$</span>. For a general null sequence nothing more can be said: for <span class="math-container">$c_j = 1/j$</span>, for instance, <span class="math-container">$c_j - c_{j+1} = \frac{1}{j(j+1)}\sim c_j^2$</span>, which is much smaller than <span class="math-container">$c_j$</span>.</p>
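<p>A tiny numerical experiment (added as an illustration) contrasting the ratio $(c_j-c_{j+1})/c_j = 1-c_{j+1}/c_j$ for two sample null sequences:</p>

```python
import math

def ratio(c, j):
    # (c_j - c_{j+1}) / c_j = 1 - c_{j+1} / c_j
    return 1 - c(j + 1) / c(j)

harmonic = lambda j: 1 / j                     # a "slow" null sequence
superfast = lambda j: 1 / math.factorial(j)    # a "fast" null sequence

for j in [5, 20, 100]:
    print(j, ratio(harmonic, j), ratio(superfast, j))

assert ratio(harmonic, 100) < 0.01    # here the difference is o(c_j)
assert ratio(superfast, 100) > 0.98   # here the difference is ~ c_j
```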
3,950,098
<p>I can evaluate the limit with L'Hospital's rule:</p> <p><span class="math-container">$\lim_{n\to\infty}n(\sqrt[n]{4}-1)=\lim_{n\to\infty}\cfrac{(4^{\frac1n}-1)}{\dfrac1n}=\lim_{n\to\infty}\cfrac{\dfrac{-1}{n^2}\times 4^{\frac1n}\times\ln4}{\dfrac{-1}{n^2}}=\ln4$</span></p> <p>But is there any way to do it without using L'Hospital's rule?</p>
José Carlos Santos
446,262
<p>If <span class="math-container">$f(x)=4^x$</span>, then <span class="math-container">$f'(x)=\log(4)4^x$</span> and, in particular, <span class="math-container">$f'(0)=\log(4)$</span>. In other words,<span class="math-container">$$\lim_{h\to0}\frac{4^h-1}h=\log(4)$$</span>and therefore<span class="math-container">$$\lim_{n\to\infty}\frac{4^{1/n}-1}{1/n}=\log(4),$$</span>which is the same thing as asserting that<span class="math-container">$$\lim_{n\to\infty}n\left(\sqrt[n]4-1\right)=\log(4).$$</span>Note that all that I used was the definition of derivative together with the knowledge of <span class="math-container">$(4^x)'$</span>.</p>
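<p>A quick numerical confirmation (added as an illustration):</p>

```python
import math

for n in [10, 1_000, 1_000_000]:
    print(n, n * (4 ** (1 / n) - 1))

# for large n, n(4^{1/n} - 1) = log 4 + (log 4)^2 / (2n) + O(1/n^2)
n = 10 ** 7
assert abs(n * (4 ** (1 / n) - 1) - math.log(4)) < 1e-5
```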
2,995,643
<p>Here is a thought experiment I have. </p> <p>Say we flip a unique coin where we have a 99.999999999999% chance of it landing on heads, and a 0.000000000001% chance of it landing on tails (the two possibilities sum to 100%).</p> <p>And say we have an <em>infinite</em> number of coins flipped all at once (and only one time).</p> <p>Is it possible that none of the trials will experience the coin landing on tails?</p>
Michael Hoppe
93,935
<p>Tails will come up, and in fact infinitely often, almost surely: the flips are independent and each has the same positive probability of tails, so these probabilities sum to infinity and the second Borel–Cantelli lemma applies. In particular, the probability that no coin lands on tails is $0$, although the event is not logically impossible.</p>
3,182,532
<p>I am confused about converting a <strong>Probability Density Function</strong> from <strong>Polar coordinates</strong> to <strong>Cartesian coordinates</strong>. </p> <p>Here is an example:</p> <p>In Polar coordinates, we can have a <strong>Gaussian probability function</strong>:</p> <p><strong><span class="math-container">$P(r,\theta)=Ae^{-r^2/2\sigma^2}$</span></strong> according to the transformation: <span class="math-container">$r^2=x^2+y^2 \textrm{ and } \theta=\tan^{-1}(y/x)$</span>.</p> <p>This function in Cartesian coordinates should also be a Gaussian function:</p> <p><strong><span class="math-container">$P(x,y)=Ae^{-(x^2+y^2)/2\sigma^2}$</span></strong></p> <p><strong>But</strong> somebody told me that in this transformation, I should multiply by the absolute value of the Jacobian determinate in order to have:</p> <p><strong><span class="math-container">$P(x,y)=Ae^{-(x^2+y^2)/2\sigma^2}/\sqrt{x^2+y^2}$</span></strong></p> <p>And the result is not Gaussian anymore!</p> <p>Could someone tell me which one is correct and also the reason, please?</p>
Sharat V Chandrasekhar
400,967
<p>Your transformation takes a PDF in <span class="math-container">$r$</span> and converts it into a Joint Density Function in <span class="math-container">${x,y}$</span>. So what you really need to do is preserve the normalising property i.e., </p> <p><span class="math-container">$$\int_{x=-\infty}^{x=\infty} \int_{y=-\infty}^{y=\infty} P(x,y)dxdy =1$$</span></p> <p>See if you can take it from here.</p>
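<p>To make the normalisation point concrete, here is a numerical check (an added illustration with $\sigma=1$, where the constant $A$ is chosen so that $P(r,\theta)$ integrates to $1$ against $dr\,d\theta$): the Jacobian-corrected Cartesian density integrates to $1$, while the naive substitution does not.</p>

```python
import math

sigma = 1.0
# Normalise P(r, theta) = A exp(-r^2 / (2 sigma^2)) w.r.t. dr dtheta:
# its integral is 2*pi * A * sigma * sqrt(pi/2), so choose A accordingly.
A = 1.0 / (2 * math.pi * sigma * math.sqrt(math.pi / 2))

def total_mass(q):
    # total mass of a Cartesian density q(x, y) that depends only on r,
    # computed in polar coordinates: int q * r dr dtheta (midpoint rule)
    h, total, r = 1e-4, 0.0, 0.5e-4
    while r < 20.0:
        total += q(r) * r * h
        r += h
    return 2 * math.pi * total

naive = total_mass(lambda r: A * math.exp(-r * r / (2 * sigma ** 2)))
corrected = total_mass(lambda r: A * math.exp(-r * r / (2 * sigma ** 2)) / r)

print("naive:", naive, " corrected:", corrected)
assert abs(corrected - 1.0) < 1e-3   # the Jacobian version is a proper density
assert abs(naive - 1.0) > 0.1        # the naive substitution is not normalised
```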