Dataset schema (one row per question–answer pair):

  qid        int64         values 1 … 4.65M
  question   large_string  lengths 27 … 36.3k
  author     large_string  lengths 3 … 36
  author_id  int64         values -1 … 1.16M
  answer     large_string  lengths 18 … 63k
21,238
<p>Would someone be able to point me to a good resource explaining, step by step, the process for solving inhomogeneous recurrence relations? (i.e. something of the form $a_n = \sum_i b_i a_{n-i} + f(n)$)</p>
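Not a substitute for a reference, but the standard recipe (solve the homogeneous relation via its characteristic roots, guess a particular solution of the same shape as $f(n)$, then fit the free constants to the initial values) can be sanity-checked numerically. The sketch below works a made-up example, $a_n = a_{n-1} + 2a_{n-2} + n$ with $a_0 = a_1 = 0$; the example and all constants are illustrative choices, not from the question.

```python
# Hypothetical example: a_n = a_{n-1} + 2 a_{n-2} + n, with a_0 = a_1 = 0.
# Homogeneous roots of r^2 = r + 2 are r = 2 and r = -1.
# Particular guess p(n) = A n + B gives A = -1/2, B = -5/4.
# Fitting a_0 = a_1 = 0 then yields a_n = 2^n + (1/4)(-1)^n - n/2 - 5/4.
def closed_form(n):
    return 2**n + 0.25 * (-1)**n - 0.5 * n - 1.25

# iterate the recurrence directly and compare with the closed form
a = [0, 0]
for n in range(2, 31):
    a.append(a[n - 1] + 2 * a[n - 2] + n)

assert all(abs(a[n] - closed_form(n)) < 1e-9 for n in range(31))
```

The same fit-and-verify loop works for any constant-coefficient relation once the characteristic roots and a particular solution are in hand.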
VA.
1,784
<p>There is no general criterion, as far as I know; it is all try and see.</p> <p>Any projective birational morphism $f:X\to Y$ between varieties is the blowup of <em>some</em> sheaf of ideals $I$ on $Y$, so you can see that anything can happen.</p>
5,231
<p>I have coordinates for 4 vertices/points that define a plane and the normal/perpendicular. The plane has an arbitrary rotation applied to it.</p> <p>How can I 'un-rotate'/translate the points so that the plane has rotation 0 on x,y,z?</p> <p>I've tried to get the plane rotation from the plane's normal:</p> <pre><code>rotationX = atan2(normal.z,normal.y);
rotationY = atan2(normal.z,normal.x);
rotationZ = atan2(normal.y,normal.x);
</code></pre> <p>Is this correct?</p> <p>How do I apply the inverse rotation to the position vectors?</p> <p>I've tried to create a matrix with those rotations and multiply it with the vertices, but it doesn't look right.</p> <p>At the moment, I've written a simple test using <a href="http://processing.org/" rel="nofollow noreferrer">Processing</a>, which can be seen <a href="http://lifesine.eu/so/vertex_rotation/" rel="nofollow noreferrer">here</a>:</p> <pre><code>float s = 50.0f;//scale/unit
PVector[] face = {new PVector(1.08335042,0.351914703846,0.839020013809),
                  new PVector(-0.886264681816,0.69921118021,0.839020371437),
                  new PVector(-1.05991327763,-0.285596489906,-0.893030643463),
                  new PVector(0.909702301025,-0.63289296627,-0.893030762672)};
PVector n = new PVector(0.150384, -0.500000, 0.852869);
PVector[] clone;

void setup(){
  size(400,400,P3D);
  smooth();
  clone = unRotate(face,n,true);
}
void draw(){
  background(255);
  translate(width*.5,height*.5);
  if(mousePressed){
    rotateX(map(mouseY,0,height,0,TWO_PI));
    rotateY(map(mouseX,0,width,0,TWO_PI));
  }
  stroke(128,0,0);
  beginShape(QUADS);
  for(int i = 0 ; i &lt; 4; i++) vertex(face[i].x*s,face[i].y*s,face[i].z*s);
  endShape();
  stroke(0,128,0);
  beginShape(QUADS);
  for(int i = 0 ; i &lt; 4; i++) vertex(clone[i].x*s,clone[i].y*s,clone[i].z*s);
  endShape();
}
//get rotation from normal
PVector getRot(PVector loc,Boolean asRadians){
  loc.normalize();
  float rz = asRadians ? atan2(loc.y,loc.x) : degrees(atan2(loc.y,loc.x));
  float ry = asRadians ? atan2(loc.z,loc.x) : degrees(atan2(loc.z,loc.x));
  float rx = asRadians ? atan2(loc.z,loc.y) : degrees(atan2(loc.z,loc.y));
  return new PVector(rx,ry,rz);
}
//translate vertices
PVector[] unRotate(PVector[] verts,PVector no,Boolean doClone){
  int vl = verts.length;
  PVector[] clone;
  if(doClone) {
    clone = new PVector[vl];
    for(int i = 0; i&lt;vl;i++) clone[i] = PVector.add(verts[i],new PVector());
  }else clone = verts;
  PVector rot = getRot(no,false);
  PMatrix3D rMat = new PMatrix3D();
  rMat.rotateX(-rot.x);rMat.rotateY(-rot.y);rMat.rotateZ(-rot.z);
  for(int i = 0; i&lt;vl;i++) rMat.mult(clone[i],clone[i]);
  return clone;
}
</code></pre> <p>Any syntax/pseudo code or explanation is useful.</p> <p>What I'm trying to achieve is this: if I have a rotated plane: <img src="https://i.stack.imgur.com/bZ1fn.png" alt="rotated plane"></p> <p>How can I move the vertices to get something that has no rotation: <img src="https://i.stack.imgur.com/ogFR0.png" alt="plane with no rotations"></p> <p>Thanks!</p> <p><strong>UPDATE:</strong></p> <p>@muad</p> <p>I'm not sure I understand. I thought I was using matrices for rotations. PMatrix3D's rotateX, rotateY, rotateZ calls should do the rotations for me. Doing it manually would be declaring 3D matrices and multiplying them. Here's a little snippet to illustrate this:</p> <pre><code>PMatrix3D rx = new PMatrix3D(1,          0,          0, 0,
                             0, cos(rot.x),-sin(rot.x), 0,
                             0, sin(rot.x), cos(rot.x), 0,
                             0,          0,          0, 1);
PMatrix3D ry = new PMatrix3D( cos(rot.y), 0,cos(rot.y), 0,
                                       0, 1,         0, 0,
                             -sin(rot.y), 0,cos(rot.y), 0,
                                       0, 0,         0, 1);
PMatrix3D rz = new PMatrix3D(cos(rot.z),-sin(rot.z), 0, 0,
                             sin(rot.z), cos(rot.z), 0, 0,
                                      0,          0, 1, 0,
                                      0,          0, 0, 1);
PMatrix3D r = new PMatrix3D();
r.apply(rx);r.apply(ry);r.apply(rz);
//test
PMatrix rmat = new PMatrix3D();
rmat.rotateX(rot.x);rmat.rotateY(rot.y);rmat.rotateZ(rot.z);
float[] frmat = new float[16];rmat.get(frmat);
float[] fr = new float[16];r.get(fr);
println(frmat);println(fr);
/*
Outputs:
[0] 0.059300933
[1] 0.09312407
[2] -0.99388695
[3] 0.0
[4] 0.90466285
[5] 0.41586864
[6] 0.09294289
[7] 0.0
[8] 0.42198166
[9] -0.9046442
[10] -0.059584484
[11] 0.0
[12] 0.0
[13] 0.0
[14] 0.0
[15] 1.0
[0] 0.059300933
[1] 0.09312407
[2] -0.99388695
[3] 0.0
[4] 0.90466285
[5] 0.41586864
[6] 0.09294289
[7] 0.0
[8] 0.42198166
[9] -0.9046442
[10] -0.059584484
[11] 0.0
[12] 0.0
[13] 0.0
[14] 0.0
[15] 1.0
*/
</code></pre>
Community
-1
<p>From your comments, what I understand of your problem is that you have the coordinates of an arbitrarily oriented rectangle centred on the origin, and you want to find the rotation that will bring it to an axis-aligned rectangle on the $xy$ plane.</p> <p>Let $u$ and $v$ be the unit vectors that should be mapped to the axis-oriented unit vectors $e_x$ and $e_y$ respectively. You can get these by subtracting adjacent points of the rectangle and normalizing. Then you want a rotation $R$ which satisfies $Ru = e_x$, $Rv = e_y$, and $Rn = e_z$.</p> <p>You can express this as $R[u\;v\;n] = [e_x\;e_y\;e_z] = I$. Then $R$ equals $[u\;v\;n]^{-1}$, which is simply $[u\;v\;n]^T$ since $u$, $v$, and $n$ are an orthonormal set. To be more explicit, the rotation matrix you want is: $$R = \begin{bmatrix}u_x &amp; u_y &amp; u_z \\ v_x &amp; v_y &amp; v_z \\ n_x &amp; n_y &amp; n_z\end{bmatrix}.$$ If you really like Euler angles (<code>rotateX</code>, <code>rotateY</code>, <code>rotateZ</code>), there are ways to convert a rotation matrix like above to Euler angles, but they're ugly. You're best off using the rotation matrix explicitly.</p> <p>By the way, if your rectangle is <em>not</em> centred on the origin, and you want to perform the rotation keeping its centre (say $c$) fixed, you'll have to get the rotated coordinates of a point $p$ not simply as $Rp$ but as $R(p-c) + c$.</p>
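The recipe in this answer can be checked numerically against the vertices posted in the question. The snippet below is an illustration in plain Python (not the asker's Processing code): it builds $u$ and $v$ from adjacent edges, takes $n = u \times v$, assembles $R$ with those vectors as rows, and verifies that $R(p-c)$ flattens the rectangle into the $z=0$ plane.

```python
import math

# Four vertices of the rotated rectangle, taken from the question above.
face = [(1.08335042, 0.351914703846, 0.839020013809),
        (-0.886264681816, 0.69921118021, 0.839020371437),
        (-1.05991327763, -0.285596489906, -0.893030643463),
        (0.909702301025, -0.63289296627, -0.893030762672)]

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def norm(a):
    l = math.sqrt(dot(a, a))
    return tuple(x / l for x in a)
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

# u, v: unit vectors along two adjacent edges; n = u x v is the unit normal.
u = norm(sub(face[1], face[0]))
v = norm(sub(face[2], face[1]))
n = cross(u, v)

# R has rows u, v, n, so R u = e_x, R v = e_y, R n = e_z.
R = [u, v, n]
c = tuple(sum(p[i] for p in face) / 4 for i in range(3))  # centre of the rectangle

def apply_rot(p):
    return tuple(dot(row, p) for row in R)

# rotate about the centre: R(p - c); all z-components should be ~0
flat = [apply_rot(sub(p, c)) for p in face]
```

Note the stated normal in the question does not match $u \times v$ for these vertices (likely a different axis convention), which is why the normal is recomputed here.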
2,945,913
<p>I have a quick question about simplifying these exponents and then comparing them:</p> <p><span class="math-container">$8^{\log_2 n}, 2^{3\log_2(\log_2 n)}$</span> and <span class="math-container">$2^{(\log_2 n)^2} $</span></p> <p>I know the third one evaluates to <span class="math-container">$n^{\log_2 n}$</span>, but I'm not sure how I would simplify the other two. I do know that <span class="math-container">$2^{\log_2 n} = n$</span>, but I'm not sure how to use this here, because I don't think I'm simplifying the others correctly. I also tried rewriting 8 as <span class="math-container">$2^3$</span>, but I wasn't sure what to do from there.</p> <p>Thanks!</p>
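For reference, the simplifications being asked about are $8^{\log_2 n} = 2^{3\log_2 n} = n^3$ and $2^{3\log_2(\log_2 n)} = (\log_2 n)^3$. A quick numeric check (the value $n = 256$ is an arbitrary choice, picked so that $\log_2 n = 8$ is exact):

```python
import math

n = 256.0                 # arbitrary test value; log2(256) = 8 exactly
l = math.log2(n)

assert math.isclose(8 ** l, n ** 3)                    # 8^(log2 n) = n^3
assert math.isclose(2 ** (3 * math.log2(l)), l ** 3)   # 2^(3 log2 log2 n) = (log2 n)^3
assert math.isclose(2 ** (l ** 2), n ** l)             # 2^((log2 n)^2) = n^(log2 n)
```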
xbh
514,490
<p>Seems fine, but you have a typo in the last part. It should be <span class="math-container">$I(b) = \pi/4$</span>, not <span class="math-container">$\pi/2$</span>. Also, we could do this more quickly: <span class="math-container">$$ I = \int_0^{\pi/2} \frac {\mathrm dx} {1+\tan(x)^\pi} = \int_0^{\pi/2} \frac {\mathrm dx} {1 + \cot(x)^\pi} = \int_0^{\pi/2} \frac {\tan(x)^\pi \mathrm dx}{1+ \tan(x)^\pi} \implies 2I = \frac \pi 2 \implies I = \frac \pi 4. $$</span></p>
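The symmetry argument above can be confirmed with a crude midpoint-rule integration (the step count is an arbitrary choice):

```python
import math

# midpoint rule on (0, pi/2); the integrand tends to 1 at 0 and 0 at pi/2,
# so there is no endpoint trouble as long as we avoid x = pi/2 itself
N = 100000
h = (math.pi / 2) / N
I = h * sum(1 / (1 + math.tan((i + 0.5) * h) ** math.pi) for i in range(N))

assert abs(I - math.pi / 4) < 1e-5
```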
872,017
<p>$$\int_0^1 xe^{\sqrt{x}} dx = ? $$</p> <p>All I can think of is the integration by parts rule, where $ u = x $ and $ dv= e^{\sqrt{x}}\,dx $ $ \Rightarrow du = dx$ and $ v= e^{\sqrt{x}} $ </p> <p>The answer I get is $e^{\sqrt{x}}(x-1)$, which is wrong.</p> <p>Can anyone please explain in detail?</p>
Mathsource
12,624
<p>In fact, $$ \int xe^{\sqrt{x}}dx = 2\int x\sqrt{x}\dfrac{e^{\sqrt{x}}}{2\sqrt{x}}dx = 2\int x\sqrt{x} e^{\sqrt{x}}d(\sqrt{x}) = 2\int u^{3}e^u du = \dfrac{2}{D}(u^3e^u) $$ $$ = 2e^u \dfrac{1}{1 + D}u^3 = 2e^u(1 - D + D^2 - D^3)u^3 = 2e^u[u^3 - 3u^2 + 6u - 6] $$ $$ =2e^{\sqrt{x}}[x^{3/2} - 3x + 6\sqrt{x} - 6] + C $$</p>
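A numeric cross-check of the final antiderivative $F(x) = 2e^{\sqrt{x}}\left(x^{3/2} - 3x + 6\sqrt{x} - 6\right)$: for the definite integral in the question it should give $F(1) - F(0) = 12 - 4e$ (the midpoint rule and step count below are arbitrary choices):

```python
import math

def F(x):
    # the claimed antiderivative 2 e^sqrt(x) (x^(3/2) - 3x + 6 sqrt(x) - 6)
    s = math.sqrt(x)
    return 2 * math.exp(s) * (x * s - 3 * x + 6 * s - 6)

# midpoint rule for the definite integral of x e^sqrt(x) over [0, 1]
N = 100000
h = 1.0 / N
num = h * sum(((i + 0.5) * h) * math.exp(math.sqrt((i + 0.5) * h)) for i in range(N))

assert abs((F(1) - F(0)) - (12 - 4 * math.e)) < 1e-12   # closed form of F(1) - F(0)
assert abs((F(1) - F(0)) - num) < 1e-6                  # matches the numeric value
```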
791,719
<p>I have this inequality: $$5-3|x-6|\leq 3x -7$$</p> <p>I solved it this way: </p> <p>I said, for $x\geq6$ the expression inside the modulus is non-negative, so I made 2 cases in which the modulus gives + or -: </p> <p>1) for $x\geq6$ (positive): </p> <p>$5-3x+18\leq 3x -7\\ 6x\geq30\\ x\geq5$</p> <p>2) for $x&lt;6$ (negative): </p> <p>$5-3(-x+6)\le3x-7\\ -13\leq-7$</p> <p>But I don't understand what those $x\geq6$ and $x\geq5$ tell me about $x$. </p>
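For what it's worth, the two cases combine as follows: for $x \ge 6$ the requirement $x \ge 5$ is automatic, and for $x < 6$ the inequality reduces to the always-true $-13 \le -7$; so every real $x$ satisfies the inequality. A brute scan supports this (the sample grid is an arbitrary choice):

```python
# scan an arbitrary grid over [-1000, 1000] in steps of 0.1
xs = [x / 10 for x in range(-10000, 10001)]
assert all(5 - 3 * abs(x - 6) <= 3 * x - 7 for x in xs)
```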
pointer
121,270
<p>$$arctg(a)+arcctg(a)=\pi/2$$ and $$tg(\alpha)=1/ctg(\alpha)$$</p>
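A quick numeric check of the first identity, using $\operatorname{arccot}(a) = \arctan(1/a)$ for $a > 0$ (the sample values are arbitrary):

```python
import math

for a in (0.1, 0.5, 1.0, 3.0, 42.0):   # arbitrary positive samples
    # for a > 0, arccot(a) = arctan(1/a)
    assert abs(math.atan(a) + math.atan(1 / a) - math.pi / 2) < 1e-12
```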
3,766,042
<p>I was doing the problem</p> <blockquote> <p>Find all real solutions for <span class="math-container">$x$</span> in:</p> <p><span class="math-container">$$ 2(2^x- 1) x^2 + (2^{x^2}-2)x = 2^{x+1} -2$$</span></p> </blockquote> <p>There was a hint to prove that <span class="math-container">$2^{x} - 1$</span> has the same sign as <span class="math-container">$x$</span>; with basic math, if <span class="math-container">$2^0 - 1 = 0$</span> and <span class="math-container">$x = 0$</span>, the two expressions have the same sign. I am just puzzled about where to go next with this problem; any help would be welcome!</p> <p>Messing around with basic values, I got <span class="math-container">$\pm 1, 0$</span> as the answers, although I have not checked for extra solutions and do not know how to prove these, as it was just basic guessing and checking with the three most basic numbers for solving equations.</p>
OlympusHero
811,375
<p>Looks like you're in my class since this is the week that just ended. (on AoPS) If you got an extension, here's a hint as to my solution with some sign work.</p> <p>Dividing both sides by <span class="math-container">$2$</span>, we get <span class="math-container">$x^2(2^x-1)+x(2^{x^2-1}-1)=2^x-1$</span>. Subtracting <span class="math-container">$2^x-1$</span> from both sides gives <span class="math-container">$(x^2-1)(2^x-1)+x(2^{x^2-1}-1)=0$</span>. We can also note that for any <span class="math-container">$x&gt;1$</span>, all terms on the LHS of the equation <span class="math-container">$(x^2-1)(2^x-1)+x(2^{x^2-1}-1)=0$</span> are positive. For all <span class="math-container">$x&lt;-1$</span>, <span class="math-container">$x^2-1$</span> is positive, <span class="math-container">$2^x-1$</span> is negative, <span class="math-container">$x$</span> is negative, and <span class="math-container">$2^{x^2-1}-1$</span> is positive, so the whole expression is negative. Therefore, our solutions must be in the range <span class="math-container">$-1 \leq x \leq 1$</span>.</p> <p>This is <em>not</em> taken from the actual AoPS solution, it is entirely my own. Hope this helped!</p>
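The claimed solutions and the sign analysis can be checked numerically. Writing $g(x)$ for the left-hand side minus the right-hand side of the original equation (the sample points outside $[-1,1]$ are arbitrary choices):

```python
def g(x):
    # LHS minus RHS of 2(2^x - 1)x^2 + (2^(x^2) - 2)x = 2^(x+1) - 2
    return 2 * (2**x - 1) * x**2 + (2**(x**2) - 2) * x - (2**(x + 1) - 2)

for x in (-1, 0, 1):
    assert g(x) == 0                     # the three claimed solutions

for k in range(1, 50):                   # arbitrary sample points
    assert g(1 + 0.1 * k) > 0            # x > 1: the expression is positive
    assert g(-1 - 0.1 * k) < 0           # x < -1: the expression is negative
```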
362
<p>What do you guys think of <a href="https://math.stackexchange.com/questions/1187/whats-the-most-effective-ways-of-teaching-kids-times-tables">maths education questions</a>? I wouldn't want the site to be overwhelmed, but they are related to maths. By its nature, many education questions will be subjective and difficult to answer.</p>
BBischof
16
<p>I think they have a place, but they should follow standards that we can all agree on. One math ed question that I am particularly proud of is <a href="https://mathoverflow.net/questions/8258/whats-a-nice-argument-that-shows-the-volume-of-the-unit-n-ball-in-rn-approaches">https://mathoverflow.net/questions/8258/whats-a-nice-argument-that-shows-the-volume-of-the-unit-n-ball-in-rn-approaches</a>. It turned out to be a great question with some wonderful answers.</p>
79,292
<p>I recently realized that I don't know any diffeomorphisms of the plane (or $\mathbb{R}^n$ in general) except for linear ones, so I want to ask rather broad questions hoping to be pointed to the appropriate literature.</p> <p><strike>1) Are there simple ways of constructing autodiffeomorphisms of $\mathbb{R}^n$ that can be expressed in closed form?</strike> UPD: Ok, there's e.g. $(x, y) \mapsto (x, y + f(x))$, where $f: \mathbb{R} \to \mathbb{R}$ is smooth, so I call this one back - kind of; if you know some exciting and unusual family, feel free to share :)<br> 2) Is every autodiffeomorphism of $\mathbb{R}^n$ isotopic to a linear one? Obviously, every one is <em>homotopic</em> to any other due to $\mathbb{R}^n$ being contractible, but since $\mathrm{GL}(\mathbb{R}^n)$ is not connected (and $\mathbb{R}^n$ is a k-space), and homotopies behave nicely under differentials at a point, even some linear autodiffeomorphisms are not isotopic, if I'm not mistaken.</p>
Sanand
70,278
<p>Consider $\mathbb{R}^2$ first. Let $f$ be a smooth function on $\mathbb{R}^2$, and consider the matrix $$ \begin{bmatrix} \cos(f(x,y)) &amp; -\sin(f(x,y))\\ \sin(f(x,y)) &amp; \cos(f(x,y)) \end{bmatrix}$$ This is a nonlinear map, and for suitable $f$ (for instance $f$ a function of the invariant $x^2+y^2$, so that the inverse is rotation by $-f$) it gives a diffeomorphism of $\mathbb{R}^2$. Using these $2\times2$ blocks we can build diffeomorphisms of $\mathbb{R}^n$, just as we build rotations from $2\times2$ blocks. We can use hyperbolic cosine and sine as well: $$ \begin{bmatrix} \cosh(f(x,y)) &amp; \sinh(f(x,y))\\ \sinh(f(x,y)) &amp; \cosh(f(x,y)) \end{bmatrix}$$ Another family of nonlinear diffeomorphisms is given by $$\begin{bmatrix} 1 &amp; f(x,y)\\0 &amp; 1\end{bmatrix}$$ We can choose any nonzero constants on the diagonal instead of $1$, and lower triangular matrices work as well. Products of all these types also give nonlinear diffeomorphisms.</p>
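One safe instance of the rotation block is to let the angle depend only on the invariant $r^2 = x^2+y^2$ (a "twist map"): since rotations preserve $r^2$, the inverse is simply rotation by $-f(r^2)$. A round-trip check, with an $f$ chosen arbitrarily:

```python
import math

def f(r2):
    # arbitrary smooth "angle" function of the invariant r^2 = x^2 + y^2
    return math.sin(r2) + 0.5 * r2

def twist(x, y, sign=1):
    # rotate (x, y) by sign * f(x^2 + y^2); clearly nonlinear for nonconstant f
    t = sign * f(x * x + y * y)
    c, s = math.cos(t), math.sin(t)
    return (c * x - s * y, s * x + c * y)

# the inverse is the twist by -f(r^2), because r^2 is preserved by rotations
pts = [(0.3 * i, 0.2 * j) for i in range(-5, 6) for j in range(-5, 6)]
for (x, y) in pts:
    u, v = twist(*twist(x, y), sign=-1)
    assert abs(u - x) < 1e-9 and abs(v - y) < 1e-9
```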
2,717,007
<p>The curves are $x^2 = 4y$ and $x^2=4y-4$; these are the same parabola, but one is shifted up by one unit.</p> <p>I have been thinking of 3 possibilities that might be the answer:</p> <ol> <li><p>The area is infinite</p></li> <li><p>The area is equal to zero</p></li> <li><p>The area is undefined</p></li> </ol> <p>The answer I have concluded is probably the most correct one: the area is undefined, because:</p> <ol> <li><p>The area is not enclosed by the two curves</p></li> <li><p>Infinity is not a number</p></li> <li><p>The area is definitely not zero, since the curves do not overlap</p></li> </ol> <p>So my question is whether I answered this correctly.</p>
Andrew Li
344,419
<p>You can rewrite this as an improper integral:</p> <p>$$\int_{-\infty}^\infty \left({x^2 + 4\over 4} - {x^2\over 4}\right) \,\mathrm dx = \int_{-\infty}^\infty 1\,\mathrm dx$$</p> <p>It becomes obvious that this does not converge, so the area is not finite:</p> <p>$$\lim_{b\to\infty} x \,\Big|^b_{-b} = \lim_{b\to\infty} 2b = \infty$$</p> <p>The notion that the area is undefined because the curves do not cross is wrong. Consider the <a href="https://en.m.wikipedia.org/wiki/Gaussian_integral" rel="nofollow noreferrer">Gaussian Integral</a>, which is between the functions $f(x) = e^{-x^2}$ and $g(x) = 0$. They do not cross, yet the integral from negative infinity to positive infinity is finite:</p> <p>$$\int_{-\infty}^\infty e^{-x^2} \,\mathrm dx = \sqrt{\pi}$$</p> <p>Note that $g(x) = 0$ is technically a curve. Mathematically speaking, a curve is a generalization of a line.</p>
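Both claims can be checked with a crude midpoint rule (the interval truncation and step counts are arbitrary choices): the strip between the parabolas has constant height $1$, so its area over $[-B,B]$ is exactly $2B$ and diverges, while the Gaussian integral is finite:

```python
import math

# Gaussian integral by midpoint rule; the tail beyond |x| = 10 is < e^(-100)
N = 200000
a, b = -10.0, 10.0
h = (b - a) / N
gauss = h * sum(math.exp(-((a + (i + 0.5) * h) ** 2)) for i in range(N))
assert abs(gauss - math.sqrt(math.pi)) < 1e-6

# strip between x^2 = 4y and x^2 = 4y - 4: the height is identically 1,
# so the area over [-B, B] is exactly 2B and grows without bound
def strip(B, n=20000):
    h = 2 * B / n
    xs = (-B + (i + 0.5) * h for i in range(n))
    return h * sum(((x * x + 4) / 4 - x * x / 4) for x in xs)

assert abs(strip(50.0) - 100.0) < 1e-6
```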
3,971,025
<p>I need to find the number of permutations conjugate to (12)(34) in the symmetric group <span class="math-container">$S_6$</span> of degree 6.</p> <p>My answer is 6! = 720.</p> <p>Is this correct?</p> <p>I concluded that (12)(34)=(12)(34)(5)(6), and that the count for <span class="math-container">$S_6$</span> is 6!, as conjugate permutations need to have the same partition type.</p> <p>Edit:</p> <p>It seems the class size should be <span class="math-container">$6! / (2^2 \cdot 2! \cdot 2!) = 45$</span>, dividing by the order of the centralizer.</p>
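A brute-force count over all of $S_6$ settles the question: the conjugates of $(12)(34)$ are exactly the permutations of cycle type $2+2+1+1$, and the class size comes out to $6!/(2^2 \cdot 2! \cdot 2!) = 45$:

```python
from itertools import permutations

def cycle_type(p):
    # sorted tuple of cycle lengths of the permutation i -> p[i]
    seen, lens = set(), []
    for i in range(len(p)):
        if i not in seen:
            length, j = 0, i
            while j not in seen:
                seen.add(j)
                j = p[j]
                length += 1
            lens.append(length)
    return tuple(sorted(lens))

# (12)(34) in S_6 has cycle type 1 + 1 + 2 + 2
count = sum(1 for p in permutations(range(6)) if cycle_type(p) == (1, 1, 2, 2))
assert count == 45
```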
HallaSurvivor
655,547
<p>Here <span class="math-container">$L^+$</span> is the set of functions which take a prime number <span class="math-container">$p$</span> and output some natural number <span class="math-container">$n$</span>, with the bonus property that <span class="math-container">$f(p) = 0$</span> for all but finitely many primes.</p> <p>For instance, we may consider the function</p> <p><span class="math-container">$$ \begin{cases} g(2) = 3 \\ g(3) = 1 \\ g(5) = 0 \\ g(7) = 3 \\ g(p) = 0 &amp; \text{for all other primes} \end{cases}. $$</span></p> <p>The theorem is saying that the set of all such functions canonically corresponds to the natural numbers. How? Perhaps a definition by examples is the best option:</p> <p>To <span class="math-container">$g$</span> as defined above, we can associate the number</p> <p><span class="math-container">$$ 8232 = 2^3 \cdot 3^1 \cdot 5^0 \cdot 7^3 = 2^{g(2)} \cdot 3^{g(3)} \cdot 5^{g(5)} \cdot 7^{g(7)} $$</span></p> <p>Here we think of <span class="math-container">$g(p)$</span> as telling us what the exponent on <span class="math-container">$p$</span> should be. The fact that all but finitely many <span class="math-container">$g(p)$</span> are <span class="math-container">$0$</span> means that we get a finite number when we multiply them all together (do you see why?).</p> <p>Conversely, say we're given a number like <span class="math-container">$80864$</span>. Then we can factor it as <span class="math-container">$2^5 \cdot 7 \cdot 19^2$</span>, and so we associate it to the function</p> <p><span class="math-container">$$ \begin{cases} f(2) = 5 \\ f(7) = 1 \\ f(19) = 2 \\ f(p) = 0 &amp; \text{for all other primes} \end{cases} $$</span></p> <p>It's easy to see that these two maneuvers undo each other, and so they define a bijection.</p> <p>This is a fancy way of saying that we can identify a natural number with the exponents of its prime factors.
But <em>this</em> is a fancy way to say that every natural number admits a unique prime factorization. I.e. <a href="https://en.wikipedia.org/wiki/Fundamental_theorem_of_arithmetic" rel="nofollow noreferrer">the fundamental theorem of arithmetic</a>.</p> <hr /> <p>I hope this helps ^_^</p>
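The two maneuvers described above are easy to code up. The sketch below (dictionaries for the finitely-supported functions, trial division for factoring) checks the worked examples and the round trip:

```python
from math import prod

def to_number(g):
    # the number attached to a finitely-supported exponent function (a dict p -> e)
    return prod(p ** e for p, e in g.items())

def to_exponents(n):
    # inverse map: the prime factorisation of n, keeping nonzero exponents only
    g, p = {}, 2
    while n > 1:
        while n % p == 0:
            g[p] = g.get(p, 0) + 1
            n //= p
        p += 1
    return g

assert to_number({2: 3, 3: 1, 7: 3}) == 8232        # the worked example g
assert to_exponents(80864) == {2: 5, 7: 1, 19: 2}   # the second worked example
assert all(to_number(to_exponents(n)) == n for n in range(1, 500))
```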
383,194
<p>What are the free objects in the category of $G$-sets for a group $G$? </p> <p>After considerable deliberation (I'm not very bright), I'm pretty sure they are the $G$-sets $X$ on which $G$ acts freely, that is in such a way that only $e$ fixes any elements in $X$. I can prove it -- almost.</p> <p>Suppose $X$ is a $G$-set on which $G$ acts freely. Let's take a transversal $B$ of the set of orbits $X/G$. I will show that $X$ is a free object in the category of $G$-sets, and $B$ is its basis. Let $Y$ be an arbitrary $G$-set, and let $f:B\to Y$ be any function. Suppose $\tilde f:X\to Y$ is a $G$-set morphism extending $f$. Let $x\in X$. Then there exist $b\in B$ and $g\in G$ such that $gb=x.$ Then $$\tilde f(x)=\tilde f(gb)=g\tilde f(b)=gf(b).$$ Hence the extension $\tilde f$ is unique.</p> <p>For the existence of $\tilde f$, it's enough to check that the above is a good definition, or that it doesn't depend on the choice of $g$. Suppose $gb=x$ and $g'b=x$. Then $b=g^{-1}g'b$, and since $G$ acts freely on $X$, we must have $g=g'.$ Therefore the extension $\tilde f$ exists and is unique, and so $X$ is a free object in the category of $G$-sets.</p> <p>Now suppose $X$ is a free object in the category of $G$-sets. Suppose for a moment that the following lemma is true.</p> <p><em>Lemma.</em> If $X$ is a free object in the category of $G$-sets, then its bases are exactly the transversals of $X/G.$</p> <p>With this lemma I can show that $G$ must act freely on $X$. For let $x\in X.$ Then there exist a (unique, but I don't think I need this) $b\in B$ and $g\in G$ such that $gb=x.$ Suppose for an $h\in G$ we have $hx=x.$ This means that $$g^{-1}hgb=b.$$ Now let $f:B\to G$ be any function. Since $B$ is a basis, we can extend it uniquely to a $G$-set morphism $\tilde f$. ($G$ acts on itself by multiplication.) 
This implies that $$g^{-1}hgf(b)=f(b).$$ But $f(b)$ can be chosen arbitrarily, which means that $g^{-1}hg$ must fix every element of $G$, and so $h=e.$</p> <p>So what I need to do is prove the lemma. Actually, I just need the "exactly" part. I just need to know that if $X$ is a free object in the category $G$-sets, then its basis must be a transversal of $X/G.$</p> <p>Suppose $X$ is a free object. A transversal is a subset of $X$ such that there is exactly one element of each orbit in it. I think the uniqueness is analogous to linear independence in a vector space, and the existence is analogous to spanning the whole space. I can show the uniqueness (but as I said, I don't think I need it actually). Suppose $B$ is a basis of $X$. Suppose there are $b,b'\in B$ such that $b'=gb$ for some $g$. Again, by considering the functions $f:B\to G$, I can show that $g=e.$</p> <p>But I don't see how to show that every orbit must have a representative in $B$. (Please keep in mind that at this point I still haven't proven that $G$ acts freely on $X$.) Since this is analogous to the fact that a vector space basis must generate the whole space, I've looked at the proof of that, the one in which we generate a subspace by the basis, quotient it out and show that the quotient is $0$. Can it be done here? I don't know how to define a quotient of a $G$-set. Or maybe I just need to prove it in some other way?</p>
Martin Brandenburg
1,650
<p>Sorry, I've only read the first line of your question.</p> <p>If $X$ is a set, the free $G$-set on $X$ is $G \times X$ with the $G$-action $g(h,x)=(gh,x)$. This is because for every $G$-set $Y$ every map $\alpha : X \to Y$ of sets extends uniquely to a $G$-map $G \times X \to Y$ via $(g,x) \mapsto g\, \alpha(x)$.</p> <p>A $G$-set is free in the sense of category theory iff it is free in the usual sense: $\Rightarrow$ follows since $G$ acts freely on $G \times X$. For $\Leftarrow$ assume that $G$ acts freely on a set $X$. Let $B$ be a system of representatives of $X/G$. Then $B \hookrightarrow X$ extends to a $G$-map $G \times B \to X$, which is surjective by construction. It is also injective: If $gb=g'b'$, then $b=b'$ (by definition of $B$), and then $g=g'$ (since the action is free).</p> <p>Good that you asked this, I didn't know this until now!</p> <p>(More generally, if $T : C \to C$ is a monad in a category, we can consider the forgetful functor $\mathsf{Mod}(T) \to C$, and its left adjoint is given by $x \mapsto (T(x),\mu_x)$. For example, when $G$ is a monoid object in $C$ and $C$ has products, then $T(X) = G \times X$ is a monad and $\mathsf{Mod}(T)=\mathsf{Mod}(G)$ is the category of objects of $C$ together with a left $G$-action, for short $G$-objects. The free $G$-object on an object $X \in C$ is therefore $G \times X$ with the obvious $G$-action.)</p>
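A finite sanity check of the universal property, with $G = \mathbb{Z}/3$ and a two-element set $X$ (the choices of $G$, $X$, and $\alpha$ here are arbitrary):

```python
from itertools import product

G = [0, 1, 2]                 # the cyclic group Z/3, written additively
X = ["a", "b"]                # an arbitrary two-element set
FX = list(product(G, X))      # underlying set of the free G-set G x X

def act(g, p):                # g . (h, x) = (gh, x), written additively
    return ((g + p[0]) % 3, p[1])

# freeness: only the identity fixes a point of G x X
for g, p in product(G, FX):
    if act(g, p) == p:
        assert g == 0

# universal property: a set map alpha : X -> Y extends to the G-map
# (g, x) -> g . alpha(x); here Y = Z/3 acting on itself by addition
alpha = {"a": 2, "b": 1}      # arbitrary choice of alpha
ext = {p: (p[0] + alpha[p[1]]) % 3 for p in FX}

for x in X:
    assert ext[(0, x)] == alpha[x]                  # ext extends alpha
for g, p in product(G, FX):
    assert ext[act(g, p)] == (g + ext[p]) % 3       # ext is equivariant
```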
187,459
<p>What are all 4-regular graphs such that every edge in the graph lies in a unique 4-cycle?</p> <p>Among all such graphs, if we impose a further restriction that any two 4-cycles in the graph have at most one vertex in common, then can we characterize them in some way?</p> <p>When is it possible to draw such a graph on a plane such that every 4-cycle is of the form: (a,c)-(b,c)-(b,d)-(a,d)-(a,c) for some a,b,c,d ?</p>
Flo Pfender
12,487
<p>Here is one more construction which covers a lot of graphs: start with a $4$-regular graph with girth at least $5$, take the line graph, and delete a perfect matching in each resulting $K_4$...</p> <p>This may get us almost all answers to the second question: If we start with an answer to the second question and "fill in" all squares to make $C_4$s, we end up with a line graph of a $4$-regular graph, though not necessarily one of girth greater than $3$.</p> <p>To get all such graphs this way, you need to start with any $4$-regular graph, take the line graph, and then <em>carefully</em> delete the matchings to avoid extra squares. Describing what "carefully" entails, and deciding if it is even possible, may turn out to be difficult, though.</p>
332,772
<p>I am looking for a reference or a proof of the following fact:</p> <p>Let <span class="math-container">$X_{1}\subset X_{2}\subset\dots $</span> be a sequence of (Hausdorff) topological spaces indexed by the natural numbers such that <span class="math-container">$X_{i}\subset X_{i+1}$</span> is a closed subspace for every <span class="math-container">$i\in \mathbb{N}$</span>. We define <span class="math-container">$X=\operatorname{colim}_{i\in \mathbb{N}}X_{i}$</span>. </p> <p>Then <span class="math-container">$H_{m}(X,\mathbb{Z})=\operatorname{colim}_{i\in \mathbb{N}}H_{m}(X_{i},\mathbb{Z})$</span> for every natural number <span class="math-container">$m\in\mathbb{N}$</span>. </p>
David White
11,540
<p>This is a Theorem on page 115 of Peter May's book <a href="http://math.uchicago.edu/~may/CONCISE/ConciseRevised.pdf" rel="nofollow noreferrer">A concise course in algebraic topology</a>.</p> <p>A discussion can be found <a href="https://math.stackexchange.com/questions/1341832/when-will-homology-and-direct-limit-commute?rq=1">here</a>.</p>
222,639
<p>It seems to be true that multiply transitive permutation groups have been classified completely (using CFSG), but I am having trouble finding a reference where this classification is actually stated. Is there a canonical reference?</p>
Derek Holt
35,840
<p>Here is a list of the finite $3$-transitive groups, derived by looking through the list of $2$-transitive groups in Section 7.7 of Dixon and Mortimer and identifying those that are $3$-transitive.</p> <p>Let's first recall the structure of $G := P{\Gamma}L(2,q)$ with $q=p^e$, $p$ prime. Let $S={\rm PSL}(2,q)$. For $q$ even, $G = S \rtimes \langle \phi \rangle$ with $\phi$ acting as a field automorphism of order $e$, and $G/S \cong C_e$.</p> <p>For $q$ odd, $G = S\langle \delta,\phi \rangle$, where $\delta$ acts as a diagonal automorphism of order $2$, and $\phi$ as a field automorphism. (Note that this extension is nonsplit when $e$ is even.) We have $G/S \cong C_2 \times C_e$. The subgroup $ S \langle \phi \rangle$ of index $2$ in $G$ is denoted by $P{\Sigma}L(2,q)$.</p> <p>So now, the finite $3$-transitive groups are as follows.</p> <p>$A_n$ ($n \ge 5$), $S_n$ ($n \ge 3$), degree $n$. (There are two inequivalent actions, conjugate under an outer automorphism of $S_6$, when $n=6$.)</p> <p>$A{\Gamma}L(n,2) = {\mathbb F}_2^n \rtimes {\rm GL}(n,2)$ with $n \ge 2$, degree $2^n$.</p> <p>${\mathbb F}_2^4 \rtimes A_7$, degree $16$.</p> <p>Groups $G$ with ${\rm PSL}(2,2^e) \le G \le P{\Gamma}L(2,2^e)$, degree $2^e+1$.</p> <p>For $q$ odd, groups $G$ with ${\rm PSL}(2,q) \le G \le P{\Gamma}L(2,q)$ and $G \not\le P{\Sigma}L(2,q)$, degree $q+1$.</p> <p>The Mathieu groups $M_{11},M_{12},M_{22},M_{22}.2 = {\rm Aut}(M_{22}), M_{23},M_{24}$, degrees $11,12,22,22,23,24$.</p> <p>$M_{11}$, degree $12$.</p> <p>For completeness, the finite $4$-transitive groups are: $A_n$ ($n \ge 6$), $S_n$ ($n \ge 4$), $M_{11},M_{12},M_{23},M_{24}$.</p> <p>The $5$-transitive groups are: $A_n$ ($n \ge 7$), $S_n$ ($n \ge 5$), $M_{12},M_{24}$.</p> <p>And the finite $k$-transitive groups for $k \ge 6$ are: $A_n$ ($n \ge k+2$), $S_n$ ($n \ge k$).</p>
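Small cases of these claims can be brute-forced. The checker below tests $k$-transitivity by computing the orbit of the base tuple $(0,\dots,k-1)$; for example, $S_4$ is $4$-transitive, and $A_5$ (degree $5$) is $3$- but not $4$-transitive:

```python
from itertools import permutations

def parity(p):
    # parity of a permutation via its inversion count
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return inv % 2

def is_k_transitive(group, n, k):
    # the group is k-transitive iff the orbit of (0, ..., k-1) is all
    # n(n-1)...(n-k+1) ordered k-tuples of distinct points
    base = tuple(range(k))
    orbit = {tuple(g[i] for i in base) for g in group}
    size_needed = 1
    for i in range(k):
        size_needed *= n - i
    return len(orbit) == size_needed

S4 = list(permutations(range(4)))
A5 = [p for p in permutations(range(5)) if parity(p) == 0]

assert is_k_transitive(S4, 4, 4)        # S_n is n-transitive
assert is_k_transitive(A5, 5, 3)        # A_5 is 3-transitive (n - 2 = 3)
assert not is_k_transitive(A5, 5, 4)    # but not 4-transitive
```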
762,472
<p>Let $C$ be a circle of radius $r$ in the plane. Let $p$ be a point in the plane that lies outside of $C$. Show that there are exactly two lines through $p$ that are tangent to $C$.</p> <hr> <p>It is one of those questions that seem very intuitive but very hard to prove for me. How do I show that there are "exactly" two tangent lines? Try to construct a third one but reach a contradiction? I'd appreciate any help. Thanks.</p>
Kaj Hansen
138,538
<p><strong>Method Using Calculus</strong>: </p> <p>Say we are given any circle and any point outside of that circle. WLOG, we can translate the circle to be centered at the origin, and rotate our system so that the point is situated along the $y$-axis. </p> <p>From here, we claim that we can hit that point with exactly two tangent lines to the circle. Imagine the circle is described by the equation $x^2 + y^2 = r^2$, and consider some point outside of the circle on the $y$-axis, say $(0, b)$. </p> <p>Differentiating implicitly, we find that $\frac{dy}{dx} = -x/y$. Therefore, the tangent line through a given point on the circle, say $(m, n)$, will be described by the equation $y = (-m/n)x + b$, where $b$ is the $y$-intercept at the desired point. Plugging in to solve for $b$:</p> <p>$$n = -m^2/n + b$$ $$b = n+ m^2/n$$</p> <p>We want to find points along our circle that both satisfy the above, and also $x^2 + y^2 = r^2$. Rearranging the above, we get:</p> <p>$$m^2 + n^2 = nb$$</p> <p>In order to satisfy our circle's equation, we must take $n$ such that $nb = r^2 \Longrightarrow n = r^2/b$. From here, we have exactly two choices for $m$ (positive or negative), and hence, exactly $2$ possible tangent lines to the circle through the desired point.</p>
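The construction in this answer is easy to verify numerically for a concrete circle and external point (the values $r = 2$, $b = 5$ are arbitrary choices): $n = r^2/b$, $m = \pm\sqrt{r^2 - n^2}$, and each line $y = (-m/n)x + b$ passes through its tangency point and sits at distance $r$ from the centre.

```python
import math

r, b = 2.0, 5.0               # circle x^2 + y^2 = r^2 and external point (0, b), b > r
n = r * r / b                 # common y-coordinate of the two tangency points
points = []
for m in (math.sqrt(r * r - n * n), -math.sqrt(r * r - n * n)):
    slope = -m / n
    points.append(m)
    assert abs(m * m + n * n - r * r) < 1e-9        # (m, n) lies on the circle
    assert abs(slope * m + b - n) < 1e-9            # the line passes through (m, n)
    # distance from the centre to the line y = slope*x + b equals the radius,
    # i.e. |b| / sqrt(slope^2 + 1) = r, confirming tangency
    assert abs(b / math.sqrt(slope ** 2 + 1) - r) < 1e-9

assert len(set(points)) == 2  # exactly two distinct tangency points
```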
993,767
<p>Suppose $V$ is an inner product space over $\mathbb F$, and $u, v \in V$ satisfy $\|u\| \leq \|u + av\|$ for all $a \in \mathbb{F}$. Then I want to show that $u$ and $v$ are orthogonal. I want to prove it geometrically. Somebody please give me a hint.</p>
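A numeric illustration of the algebra behind the geometric picture (the vectors below are arbitrary choices): if $\langle u, v\rangle \neq 0$, then $a = -\langle u,v\rangle/\|v\|^2$ strictly shortens $u + av$, so the hypothesis forces $\langle u,v\rangle = 0$.

```python
import math

def dot(a, b): return sum(x * y for x, y in zip(a, b))
def norm(a): return math.sqrt(dot(a, a))

u, v = (3.0, 1.0), (1.0, 1.0)           # deliberately non-orthogonal pair
a = -dot(u, v) / dot(v, v)              # the minimising choice of a
shifted = tuple(x + a * y for x, y in zip(u, v))
assert norm(shifted) < norm(u)          # the hypothesis fails, as expected

w = (1.0, -1.0)                         # orthogonal to v
assert dot(w, v) == 0
for t in (-10, -1, -0.1, 0.1, 1, 10):   # arbitrary sample scalars
    moved = tuple(x + t * y for x, y in zip(w, v))
    assert norm(moved) >= norm(w)       # the hypothesis holds when orthogonal
```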
Did
6,179
<p>A different idea, for a more explicit example: $$a_{2^n+k}=2^{-n}\,(2k-2^n)\, n\qquad 0\leqslant k\leqslant2^n-1$$</p>
34,247
<p>We will have an election soon to elect 2 new moderators. Continuing the tradition from past elections (<a href="https://math.meta.stackexchange.com/questions/17598/2014-nominations-for-moderator-on-math-se">2014</a>, <a href="https://math.meta.stackexchange.com/questions/27078/2017-moderator-election-nominating-another-user-to-consider-running-for-a-moder">2017</a>, <a href="https://math.meta.stackexchange.com/questions/28686/the-unofficial-2018-elections-nomination-post">2018</a>, <a href="https://math.meta.stackexchange.com/questions/32123/the-unofficial-2020-elections-nomination-thread">2020</a>) we have a thread where people can nominate <em>other</em> people. Since candidacy is always voluntary this thread is only meant as a show of interest in possible candidates. You can still join the race even if you weren't named here, and you can decline participating, even if you were nominated by others.</p> <hr /> <p><strong>Guidelines</strong> (Taken from quid's 2018 thread, again):</p> <p>Some guidelines:</p> <ul> <li>Don't nominate Martin Sleziak.<sup>1</sup></li> <li>One nomination per answer.</li> <li>In case there could be confusion, link to the profile.</li> <li>Try to give some details, don't only post a name.</li> <li>Even if you do not like some nominee, try to show restraint about it. Critical points can be raised. But this is not a thread to &quot;grill&quot; potential candidates, before they even decided to run.</li> </ul> <p>Note that this is not an official thread. Everybody that wants to be a candidate must go through the official process. &quot;Accepting&quot; or &quot;declining&quot; a nomination here, does not mean anything in the end.</p> <hr /> <p><sup>1</sup> The point is, don't nominate somebody that said they do not want to be nominated. If you want to approach them do so elsewhere.</p>
Sarvesh Ravichandran Iyer
316,409
<p>In the last election, <a href="https://math.meta.stackexchange.com/users/72031/paramanand-singh"><strong>Paramanand Singh</strong></a> missed out in a closely fought race with Xander Henderson. For sure, if they are available, they should be contesting this time.</p> <p>One can see their responses in the 2020 moderator election: <a href="https://math.stackexchange.com/election/8?tab=election#post-3763870">their nomination speech</a> and <a href="https://math.meta.stackexchange.com/a/32200/316409">their answers to the 2020 questionnaire</a>. Not only is the second category of responses overwhelmingly well received in terms of votes, but their humble disposition has also been highlighted in the comments section of the questionnaire. I recall an example where they voted to close and answered the same question, for which their apology on meta was well-received.</p> <p>They are a relatively active member of CURED, have been an active site member for a long time, and write very decent mathematical answers, particularly in the field of real analysis.</p> <p>If I were to highlight possibly the most important quality that they can bring to a discussion, it is a degree of calm and a reduction of fractiousness among conflicting parties.</p> <p>This user has a <a href="https://data.stackexchange.com/math/query/1080450/whats-my-candidate-score-for-moderator-elections" rel="nofollow noreferrer">moderator candidate score</a> of <span class="math-container">$36/40$</span>, which is usually considered appropriate for a candidate who wishes to run for the election.</p> <p>With that, I nominate Paramanand Singh, in case it wasn't obvious already. 
I hope they will run!</p> <p>Note : One last but important point is that this user is based in India, a geographical location that is somewhat far-flung from the regions the current moderators are based in (Western Europe, and North and South America) which would allow for moderation duties to be executed at a time when other moderators may find a time crunch.</p>
3,756,970
<p>I know which step is wrong in the following argument, but would like to have contributors' explanations of <em>why</em> it is wrong.</p> <p>We assume below that weather forecasts always predict whether or not it is going to rain, so <em>not forecast to rain</em> means the same as <em>forecast not to rain</em>. We shall also assume that forecasts are not always right.</p> <p>It is not generally true that the probability of rain when forecast is equal to that of its having been forecast to rain when it does rain. Indeed let us assume that <span class="math-container">$$P(\text{R}|\text{F}_{\text R}) \neq P(\text{F}_{\text R}|\text{R}).$$</span> But, having been forecast to rain, it will either rain or not rain (<span class="math-container">$\bar{\text{R}}$</span>), so <span class="math-container">$$P(\text{R}|\text{F}_{\text R})+P(\overline {\text{R}}|\text{F}_{\text R})=1\ \ \ \ \ \ \mathbf{eq. 1}$$</span> Likewise, if it rains, it will either have been forecast to rain or (we are assuming) not forecast to rain (<span class="math-container">$\overline{\text{F}_{\text R}}$</span>), so <span class="math-container">$$P(\text{F}_{\text R}|\text{R})+P(\overline{\text{F}_{\text R}}|\text{R})=1 \ \ \ \ \ \ \mathbf{eq. 2}$$</span> But we know that &quot;If rain then not forecast to rain&quot; is logically equivalent to &quot;If forecast to rain then no rain&quot;. So the corresponding conditional probabilities must be equal, that is <span class="math-container">$$P(\overline{\text{F}_{\text R}}|\text{R})=P(\overline {\text{R}}|\text{F}_{\text R})\ \ \ \ \ \ \ \ \ \ \ \ \mathbf{eq. 3}$$</span> It follows immediately from <span class="math-container">$\mathbf {eqs 1,\ 2\ and\ 3}$</span> that <span class="math-container">$$P(\text{R}|\text{F}_{\text R}) = P(\text{F}_{\text R}|\text{R}).$$</span> which is contrary to our hypothesis.</p>
Moe Sarah
787,944
<p>Every set can become a metric space.</p> <p>Let <span class="math-container">$X$</span> be a set. Define <span class="math-container">$d: X\times X\rightarrow \mathbb{R}$</span> by <span class="math-container">$d(x,y)=0$</span> if <span class="math-container">$x=y$</span> and <span class="math-container">$1$</span> otherwise.</p> <p>This is called the <strong>discrete metric</strong> on <span class="math-container">$X$</span>.</p>
45,973
<p>Let $B,C,D \geq 1$ be positive integers and $(b_n)_{n\geq 0}$ be a sequence with $b_0 = 1, b_n = B b_{n-1} + C B^n + D$ for $n \geq 1$.</p> <p>Prove that </p> <p>(a) $\sum_{n\geq 0} b_n t^n$ is a rational function</p> <p>(b) identify a formula for $b_n$</p> <hr> <p>Hi!</p> <p>(a)</p> <p>As far as I know, I need to show that $\sum_{n\geq 0} b_n t^n$ can be rewritten as a fraction of two polynomials $\frac{P(t)}{Q(t)}$, where $Q(t)$ is not the zero polynomial, right?</p> <p>There is no fraction in the recurrence formula given above; how do I show that? Can't I just take $Q(t) = 1$?</p> <p>(b)</p> <p>I already have the formula $b_n = B b_{n-1} + C B^n + D$, so I need one without any $b_n$ on the right side. But how do I eliminate it? If I divided by $b_n$, I still wouldn't know how to calculate $\frac{b_{n-1}}{b_n}$ (if that would actually help).</p> <p>Any ideas how this might be done?</p> <p>Thanks in advance!</p>
André Nicolas
6,312
<p>I do not know how much help you need. Both Robert Israel and Steve have given an almost complete solution of the problem, and one of theirs should be chosen as the solution to "accept."</p> <p>You want to find a simple expression for $F(t)$, where $$F(t)=\sum_0^\infty b_nt^n$$ Substituting in the recurrence for $b_n$, we find that we want $$\sum_0^\infty (Bb_{n-1}+CB^n+D)t^n$$ Well, not quite! Note that when $n=0$, we have $n-1=-1$, and we do not have an expression for $b_{n-1}$. The following is a way around the problem. (There is a cleverer way: find it!) Note that $$F(t)=1+\sum_1^\infty b_nt^n$$ Now the substitution goes fine. $$F(t)=1 + \sum_1^\infty (Bb_{n-1}+CB^n+D)t^n$$</p> <p>Split the sum into three pieces. We want $$\sum_1^\infty Bb_{n-1}t^n+\sum_1^\infty CB^nt^n +\sum_1^\infty Dt^n$$</p> <p>Let's deal with these various sums, say in backwards order.</p> <p>So we want to calculate $\sum_1^\infty Dt^n$. There is a common factor of $Dt$, so we want $Dt\sum_0^\infty t^k$. I imagine you recognize that this is $Dt/(1-t)$.</p> <p>Now let us calculate $\sum_1^\infty CB^nt^n=C\sum_1^\infty (Bt)^n$. Again, we have an infinite geometric series, and the sum is $CBt/(1-Bt)$.</p> <p>Finally, let's deal with $\sum_1^\infty Bb_{n-1}t^n$. This is $B(b_0t+b_1t^2 + b_2t^3+\cdots)$. If we take out the common factor $Bt$, we get simply $Bt(b_0+b_1t+b_2t^2 +\cdots)$, which is $BtF(t)$.</p> <p>Put all the stuff together. We get $$F(t)= 1+ BtF(t) + \frac{CBt}{1-Bt} + \frac{Dt}{1-t}$$ Rearrange a bit. $$(1-Bt)F(t)= 1+ \frac{CBt}{1-Bt} + \frac{Dt}{1-t}$$ Now divide both sides by $1-Bt$ to obtain $$F(t)= \frac{1}{1-Bt}+ \frac{CBt}{(1-Bt)^2} + \frac{Dt}{(1-t)(1-Bt)}$$</p> <p>It remains to deal with part (b), but I already have typed too long. We want a formula for $b_n$, for $n \ge 1$. There are various ways to do this, and I don't know what the preferred way is in your course. Your function is a sum of $3$ terms. You know the coefficient of $t^n$ in the expansion of the first, and probably can handle the second, by exploiting the connection with the derivative of $1/(1-Bt)$. The third term I would handle by first calculating its "partial fraction" decomposition. </p> <p><strong>Added comment</strong>: The suggestion in the answer by Ross Millikan leads to a solution of (b) that is nicer and faster than extracting the coefficients from the generating function $F(t)$ using the ideas of the preceding paragraph. But in your course you may be expected to know how to compute the coefficients of relatively simple generating functions like $F(t)$. </p>
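Extracting coefficients (my own derivation, valid for $B\ne 1$) yields the candidate closed form $b_n=(1+Cn)B^n+D\,\frac{B^n-1}{B-1}$; a quick script can test it against the recurrence directly:

```python
def b_rec(n, B, C, D):
    """b_n straight from the recurrence b_0 = 1, b_n = B*b_{n-1} + C*B^n + D."""
    b = 1
    for k in range(1, n + 1):
        b = B * b + C * B**k + D
    return b

def b_closed(n, B, C, D):
    """Candidate closed form (my derivation, valid for B != 1)."""
    return (1 + C * n) * B**n + D * (B**n - 1) // (B - 1)

# exact integer comparison for a few parameter choices
for (B, C, D) in [(2, 3, 1), (3, 1, 5), (5, 2, 2)]:
    for n in range(12):
        assert b_rec(n, B, C, D) == b_closed(n, B, C, D)
print("closed form matches the recurrence")
```

The parameter triples here are arbitrary sample values, not anything dictated by the problem.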
2,847,419
<p>I know that if <br/> $\sigma , \delta$ are two functions, then <br/> $1)$ $\sigma \circ \delta$ is onto (resp. one-one) if both $\sigma$ and $\delta$ are onto (resp. one-one).<br/> I can prove this fact. I want to find counterexamples, for both cases, showing that the converse is not true. <br/> Any help will be appreciated.</p>
mengdie1982
560,634
<h1>Solution</h1> <p>Notice <span class="math-container">$$f'(x)=k(x+e^x)^{k-1}(1+e^x).$$</span></p> <p>Let <span class="math-container">$f'(x)=0$</span>. Since <span class="math-container">$1+e^x&gt;0$</span>, we obtain <span class="math-container">$$x+e^x=0$$</span></p> <p>This equation has no closed-form solution. But by graphing <span class="math-container">$y=-x$</span> and <span class="math-container">$y=e^x$</span>, you may intuitively find there exists only one root, lying in <span class="math-container">$(-1,0)$</span>.</p> <p><a href="https://i.stack.imgur.com/oFuPX.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oFuPX.jpg" alt="enter image description here" /></a></p> <p>Moreover, we may prove this fact. Write <span class="math-container">$g(x)=x+e^x$</span>. Since <span class="math-container">$g'(x)=1+e^x&gt;0$</span>, the function <span class="math-container">$g$</span> is strictly increasing over <span class="math-container">$(-\infty,+\infty)$</span>. Thus, there exists at most one solution of <span class="math-container">$g(x)=0$</span>. Notice that <span class="math-container">$g(-1)=-1+e^{-1}&lt;0$</span> and <span class="math-container">$g(0)=0+e^0&gt;0$</span>. Thus, by the intermediate value theorem, we may claim the fact.</p>
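Since the root has no closed form, a bisection sketch (my own addition) pins it down numerically; it is the negative of the Omega constant, about $-0.5671$:

```python
import math

def g(x):
    return x + math.exp(x)

# g is strictly increasing with g(-1) < 0 < g(0), so bisect on (-1, 0)
lo, hi = -1.0, 0.0
for _ in range(60):
    mid = (lo + hi) / 2
    if g(mid) < 0:
        lo = mid
    else:
        hi = mid
root = (lo + hi) / 2
print(root)  # about -0.5671
```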
3,520,722
<p>I have a few challenges setting up the bounds of integration for the region <span class="math-container">$$U = \{(x,y) | -1 \leq x-y \leq 1 , \quad 1 \leq xy \leq 2 \}$$</span> My ultimate goal is to solve <span class="math-container">$$\iint_U x^2y + xy^2 dxdy = \iint_U f(x,y) dxdy$$</span></p> <p>Here is a plot of the domain I have to consider:</p> <p><span class="math-container">$\hspace{4.5cm}$</span><img src="https://i.stack.imgur.com/l5qdjm.png" alt="enter image description here][1]"></p> <p>Since the region is symmetric, I decided to first focus on the first quadrant. The domain is not simple, therefore I divided it into simple domains by introducing two new bounds <span class="math-container">$x=1$</span> and <span class="math-container">$y=1$</span> yielding three simple domains: (<a href="https://www.desmos.com/calculator/8cshs9eejc" rel="nofollow noreferrer">Link to the interactive graph on Desmos</a>)</p> <p><span class="math-container">$\hspace{4.5cm}$</span><img src="https://i.stack.imgur.com/2IGJvm.png" alt="enter image description here][3]"></p> <p>Starting with the region below <span class="math-container">$y=1$</span> then the middle region, then the upper one, I came up with the following limits:</p> <p><span class="math-container">$$\int_{ y= \frac{\sqrt5 - 1}{2} }^1 \int_{\frac1y}^{y+1} \space f(x,y) \space dxdy + \int_{ x=1 }^2 \int_{1}^{\frac2x} \space f(x,y) \space dydx + \int_{ y=1 }^2 \int_{\frac1y}^{1} \space f(x,y) \space dxdy $$</span></p> <p>Now, the order turned out different for each of the summands, which is making me doubt that my setup is correct. Also, it seems to me that the points on the lines <span class="math-container">$x=1$</span> and <span class="math-container">$y=1$</span> will be ... counted twice (?) since they border two different integrals if that makes sense. Thus, could anyone kindly clarify on this.</p>
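To probe the doubt programmatically (a quick self-check I added, not part of my original attempt), one can grid-scan the proposed range of the third integral, $y\in(1,2)$, $x\in(1/y,1)$, and test membership in $U$:

```python
# scan the third integral's proposed range and test the defining inequalities of U
bad = []
steps = 200
for i in range(1, steps):
    y = 1 + i / steps                      # y in (1, 2)
    for j in range(1, steps):
        x = 1 / y + (1 - 1 / y) * j / steps  # x in (1/y, 1)
        if not (-1 <= x - y <= 1 and 1 <= x * y <= 2):
            bad.append((x, y))
print(len(bad))  # nonzero: part of that range lies outside U
```

Every flagged point violates only the constraint $x-y\ge -1$, which suggests the doubt is justified: near $y=2$ the left boundary of the upper region is the line $x-y=-1$ rather than $xy=1$.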
Claude Leibovici
82,404
<p>Starting from <span class="math-container">$$4\sum_{k=0}^{\infty} \frac{2^k k! \left(k+2\right)!}{\left(2k+4\right)!} $$</span>Consider <span class="math-container">$$4\sum_{k=0}^{\infty}\frac{k! (k+2)!}{ (2 k+4)!}(2t)^{2k}$$</span> and, now, the trick is to recognize (not so obvious) that this is <span class="math-container">$$\frac{1}{t^2}-\frac{\sqrt{1-t^2} }{t^3}\sin ^{-1}(t)$$</span> Make <span class="math-container">$t=\frac 1 {\sqrt 2}$</span> and get the result.</p>
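A quick numerical check (my addition): at $t=\frac{1}{\sqrt 2}$ the closed form evaluates to $2-\frac{\pi}{2}$ (my computation), and the partial sums of the series agree with it:

```python
import math

# partial sums of 4 * sum_{k>=0} 2^k k! (k+2)! / (2k+4)!
total = 0.0
for k in range(60):
    total += 2**k * math.factorial(k) * math.factorial(k + 2) / math.factorial(2 * k + 4)
total *= 4

target = 2 - math.pi / 2   # 1/t^2 - sqrt(1-t^2)/t^3 * arcsin(t) at t = 1/sqrt(2)
print(total, target)  # both about 0.429204
```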
1,822,160
<p>Why is the "column space" on the vertical in a matrix? In my mind the column space is that space that the vectors in the matrix have created. I mean, for example take the equations:</p> <pre><code>3x + 4y = 5 2x + 8y = 6 </code></pre> <p>Then the matrix will be:</p> <p>\begin{pmatrix} 3 &amp; 4 \\ 2 &amp; 8 \end{pmatrix}</p> <p>But why is the space defined by this matrix on the vertical?</p> <p>Aren't the two vectors:</p> <pre><code>3i + 4j </code></pre> <p>and</p> <pre><code>2i + 8j </code></pre> <p>defining the space we're working in?</p>
levap
32,262
<p>There is also a space defined by the rows of the matrix and it is called (unsurprisingly) the "row space". When you construct a matrix from a linear system of equations, you are indeed constructing a matrix "row by row" and not "column by column" in the sense that each equation defines a row and not a column and so it might seem that you shouldn't care about the columns. However, when you begin solving the equation, you see that the columns also play an important role. For example, $Ax = b$ will have a solution if and only if $b$ belongs to the span of the columns of $A$ (the column space of $A$). Both spaces are important and are related by duality and/or the fact that you can always convert rows to columns by performing the transpose operation.</p>
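As a concrete illustration (my addition) using the system from the question: $b=(5,6)$ lies in the column space of $A$ because the solution $(x,y)=(1,\tfrac12)$ expresses $b$ as a combination of the columns:

```python
# columns of A and the right-hand side b
col1, col2 = (3, 2), (4, 8)
b = (5, 6)

# elimination: 2*(3x+4y=5) - 3*(2x+8y=6) gives -16y = -8, so y = 1/2, x = 1
x, y = 1.0, 0.5
combo = (x * col1[0] + y * col2[0], x * col1[1] + y * col2[1])
print(combo)  # (5.0, 6.0) == b
```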
1,136,060
<p>What identity would I need to use to solve for $\theta$? </p> <p>$5 + \cos(\theta) = 7\sin(\theta)$ </p> <p>By plugging this into a calculator, I was able to get $\theta \approx 53.13^\circ$. </p>
Khosrotash
104,171
<p>$$a\sin x + b\cos x=\sqrt{a^2+b^2}\,\sin(x+\alpha),\qquad \tan \alpha =\frac{b}{a}$$ See an example: $$\sqrt{3}\sin x+\cos x=\sqrt{3+1}\,\sin\left(x+\arctan\left(\frac{1}{\sqrt{3}}\right)\right)=2\sin(x+30^\circ)$$</p>
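Applied to the equation in the question, $5+\cos\theta=7\sin\theta$, i.e. $7\sin\theta-\cos\theta=5$ with $a=7$, $b=-1$ (a sketch I added):

```python
import math

R = math.hypot(7, 1)          # sqrt(a^2 + b^2) = sqrt(50)
alpha = math.atan2(-1, 7)     # tan(alpha) = b/a = -1/7
t = math.asin(5 / R) - alpha  # solve R*sin(t + alpha) = 5 for one solution
deg = math.degrees(t)
print(deg)  # about 53.13, matching the calculator value
```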
3,867,197
<p>Let <span class="math-container">$A$</span> be the following matrix</p> <p><span class="math-container">$$\left( \begin{array}{ccc} 1 &amp; 0 &amp; x \\ 0 &amp; 1 &amp; y \\ x &amp; y &amp; 1 \end{array} \right)$$</span></p> <p>I have to prove that if, at least <span class="math-container">$x+y&gt;\frac{3}{2}$</span>, <span class="math-container">$A$</span> is not positive definite.</p> <p>I have tried to prove it by calculating the eigenvalues, obtaining: <span class="math-container">$$ \begin{array}{c} \lambda_1=1\\ \lambda_2=1+\sqrt{x^2+y^2} \\ \lambda_3=1-\sqrt{x^2+y^2} \end{array} $$</span></p> <p>It is obvious that <span class="math-container">$\lambda_1$</span> and <span class="math-container">$\lambda_2$</span> are always positive, so I only have to take care of <span class="math-container">$\lambda_3$</span>. The problem is that I cannot relate the given condition with <span class="math-container">$1-\sqrt{x^2+y^2}&lt;0$</span>, which would prove that the matrix is not positive definite.</p>
Martingalo
127,445
<p>By direct definition: A matrix <span class="math-container">$A$</span> is said to be positive definite if the scalar <span class="math-container">$u^TAu$</span> is strictly positive for every non-zero vector <span class="math-container">$u$</span>. Let <span class="math-container">$u=(u_1,u_2,u_3)$</span>, <span class="math-container">$u_1,u_2,u_3\in \mathbb{R}$</span>. Since <span class="math-container">$u$</span> is non-zero there must be a non-zero component, say <span class="math-container">$u_3\neq 0$</span>; then we can divide by <span class="math-container">$u_3$</span> and without loss of generality assume directly that <span class="math-container">$u_3=1$</span> (you can check the case <span class="math-container">$u_3=0$</span> separately; you will see that the result is trivially positive). Then <span class="math-container">$$u^T A u = \begin{pmatrix} u_1 &amp; u_2 &amp; 1 \end{pmatrix}\begin{pmatrix} 1 &amp; 0 &amp; x \\ 0 &amp; 1 &amp; y \\ x &amp; y &amp; 1 \end{pmatrix}\begin{pmatrix} u_1 \\ u_2 \\ 1 \end{pmatrix} = u_1^2 + u_2^2 + 1 + 2xu_1 + 2y u_2.$$</span></p> <p>Furthermore, observe that <span class="math-container">$$u_1^2 + u_2^2 + 1 + 2xu_1 + 2y u_2 = (u_1+x)^2 + (u_2 +y)^2 -x^2 -y^2 +1.$$</span> In particular, choosing <span class="math-container">$u_1=-x$</span> and <span class="math-container">$u_2=-y$</span> gives <span class="math-container">$$u^TAu = 1 - x^2 - y^2.$$</span> If <span class="math-container">$x+y&gt;\frac{3}{2}$</span>, then <span class="math-container">$x^2+y^2 \geq \frac{(x+y)^2}{2} &gt; \frac{9}{8} &gt; 1$</span>, so this value is negative and the matrix is not positive definite.</p>
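A numeric spot-check of the witness vector (my addition): with $u=(-x,-y,1)$ the quadratic form equals $1-x^2-y^2$, which is negative whenever $x+y&gt;\tfrac32$:

```python
def form(x, y):
    """u^T A u for u = (-x, -y, 1) and A = [[1,0,x],[0,1,y],[x,y,1]]."""
    u = (-x, -y, 1.0)
    A = [[1, 0, x], [0, 1, y], [x, y, 1]]
    Au = [sum(A[i][j] * u[j] for j in range(3)) for i in range(3)]
    return sum(u[i] * Au[i] for i in range(3))

for x, y in [(0.8, 0.8), (1.5, 0.1), (0.2, 1.4)]:   # arbitrary samples with x+y > 3/2
    assert x + y > 1.5
    assert abs(form(x, y) - (1 - x**2 - y**2)) < 1e-12
    assert form(x, y) < 0
print("A is not positive definite in each sampled case")
```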
1,704,555
<blockquote> <p>If $I\subseteq J$ are ideals in a polynomial ring of $n$ variables, how do I prove that $I = J$ if $\operatorname{in}_{\lt}(I)=\operatorname{in}_{\lt}(J)$, where $\lt$ is any monomial ordering?</p> </blockquote> <p>Obviously it suffices to prove that $J \subseteq I$. I'm stuck with how to go forward once I pick an arbitrary element $f \in J$ and have that $\operatorname{in}_{\lt}(f) \in \operatorname{in}_{\lt} (J)=\operatorname{in}_{\lt}(I)$.</p>
DonAntonio
31,254
<p>Take subspaces of the plane, for example:</p> <p>$$\begin{align*}&amp;x+2y+z=0\\{}\\&amp;(1,0,-1)+t(-1,1,-1)\;,\;\;t\in\Bbb R\\{}\\&amp;(1,0,-1)+t(0,1,-2)\;,\;\;t\in\Bbb R\;\;,\;\;\;etc.\end{align*}$$</p>
1,704,555
<blockquote> <p>If $I\subseteq J$ are ideals in a polynomial ring of $n$ variables, how do I prove that $I = J$ if $\operatorname{in}_{\lt}(I)=\operatorname{in}_{\lt}(J)$, where $\lt$ is any monomial ordering?</p> </blockquote> <p>Obviously it suffices to prove that $J \subseteq I$. I'm stuck with how to go forward once I pick an arbitrary element $f \in J$ and have that $\operatorname{in}_{\lt}(f) \in \operatorname{in}_{\lt} (J)=\operatorname{in}_{\lt}(I)$.</p>
copper.hat
27,978
<p>Let $P$ be the plane, then we must have $Tx = x$ for $x \in P$. Hence any subspace of $P$ is $T$ invariant. Since $\dim P = 2$, you can choose any line in $P$ through the origin.</p> <p>For example, note that $(2,-1,0)$ and $(0,5,-2)$ are in $P$. Then $v_{\alpha,\beta}=\alpha (2,-1,0) + \beta (0,5,-2) \in P$ and so $T v_{\alpha,\beta} = v_{\alpha,\beta}$, and so $\operatorname{sp} \{ v_{\alpha,\beta} \} $ is a $T$ invariant subspace.</p> <p>Now pick 5 values of $\alpha,\beta$ so that $v_{\alpha,\beta}$ lie on different lines.</p> <p>Note that $P$ is also $T$ invariant.</p>
1,221,639
<p>Consider two random variables <span class="math-container">$X$</span> and <span class="math-container">$Y$</span>. If X and Y are independent random variables, then it can be shown that: <span class="math-container">$$E(XY) = E(X)E(Y).$$</span></p> <p>Let <span class="math-container">$X$</span> be the random variable that takes each of the values <span class="math-container">$-1\!\!\!$</span>, <span class="math-container">$0$</span>, and <span class="math-container">$1$</span> with probability <span class="math-container">$1/3$</span>. Let <span class="math-container">$Y$</span> be the random variable with value <span class="math-container">$Y = X^2$</span>.</p> <blockquote> <p>Prove that <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> are not independent.</p> <p>Prove that <span class="math-container">$E(XY) = E(X)E(Y)$</span>.</p> </blockquote> <hr /> <p>I understand that <span class="math-container">$E(XY) = E(X^3)$</span> since <span class="math-container">$Y = X^2$</span> so that makes each side of the equation equal to zero.</p> <p>But I am not sure how to go about proving that <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> are not independent.</p>
Math1000
38,584
<p>Let $M_X(t)$ and $M_Y(t)$ be the moment generating functions of $X$ and $Y$, respectively. Then</p> <p>$$ \begin{align*} M_X(t) &amp;= \mathbb E[e^{tX}]\\ &amp;= e^{-t}\mathbb P(X=-1) + \mathbb P(X=0) + e^t\mathbb P(X=1)\\ &amp;= \frac13 e^{-t} + \frac13 + \frac 13 e^t\\ &amp;= \frac13(e^{-t} + 1 + e^t) \end{align*} $$ and $$ \begin{align*} M_Y(t) &amp;= \mathbb E[e^{tY}]\\ &amp;= \mathbb P(Y=0) + e^t\mathbb P(Y=1)\\ &amp;= \frac13 + \frac 23 e^t\\ &amp;= \frac13(1 + 2e^t). \end{align*} $$ Let $Z=X+Y$. Then $$ \begin{align*} \mathbb P(Z=n) &amp;= \mathbb P(X+Y= n)\\ &amp;=\mathbb P(X+X^2 = n)\\ &amp;=\mathbb P(X(X+1)=n)\\ &amp;=\begin{cases} \frac23,&amp; n=0\\ \frac13,&amp; n=2. \end{cases} \end{align*} $$ The moment generating function of $Z$ is $$ \begin{align*} M_Z(t) &amp;= \mathbb E[e^{tZ}]\\ &amp;= \frac23 + \frac13 e^{2t}\\ &amp;= \frac13(2 + e^{2t}) \end{align*} $$ while $$ \begin{align*} M_X(t)M_Y(t) &amp;= \frac13(e^{-t}+1+e^t)\frac13(1+2e^t)\\ &amp;= \frac19(e^{-t}+3+3e^t+2e^{2t}). \end{align*} $$ Since $M_Z(t)\ne M_X(t)M_Y(t)$, we conclude that $X$ and $Y$ are not independent.</p>
1,221,639
<p>Consider two random variables <span class="math-container">$X$</span> and <span class="math-container">$Y$</span>. If X and Y are independent random variables, then it can be shown that: <span class="math-container">$$E(XY) = E(X)E(Y).$$</span></p> <p>Let <span class="math-container">$X$</span> be the random variable that takes each of the values <span class="math-container">$-1\!\!\!$</span>, <span class="math-container">$0$</span>, and <span class="math-container">$1$</span> with probability <span class="math-container">$1/3$</span>. Let <span class="math-container">$Y$</span> be the random variable with value <span class="math-container">$Y = X^2$</span>.</p> <blockquote> <p>Prove that <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> are not independent.</p> <p>Prove that <span class="math-container">$E(XY) = E(X)E(Y)$</span>.</p> </blockquote> <hr /> <p>I understand that <span class="math-container">$E(XY) = E(X^3)$</span> since <span class="math-container">$Y = X^2$</span> so that makes each side of the equation equal to zero.</p> <p>But I am not sure how to go about proving that <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> are not independent.</p>
rightskewed
171,836
<p>$P(X=-1) = P(X=0) = P(X=1) =\frac{1}{3}$</p> <p>$Y = X^2$, so $P(Y=1) = \frac{2}{3}$ and $P(Y=0) = \frac{1}{3}$. $Y$ equals zero iff $X$ equals $0$, but $Y$ equals $1$ if $X$ is $1$ or $-1$.</p> <p>$$E[X] = (-1)\cdot\frac{1}{3} + 0\cdot\frac{1}{3} + 1\cdot\frac{1}{3} = 0$$ $$E[Y] = 1\cdot\frac{2}{3} + 0\cdot\frac{1}{3} = \frac{2}{3}$$</p> <p>$$E[XY] = E[X^3] = E[X] = 0$$</p> <p>The last equality holds because $X$ takes only values in $\{-1,0,1\}$. Thus, $$E[XY] =E[X]E[Y]=0$$</p> <p>But, are $X,Y$ independent?</p> <p>For $X,Y$ to be independent we need $P(X=x, Y=y) = P(X=x) P(Y=y)$ for all $x \in \{-1,0,1\}$ and $y \in \{0,1\}$. </p> <p>Let's consider $x=1$, which forces $y=1$: then $P(X=1, Y=1)=\frac{1}{3}$ while $P(X=1)P(Y=1) = \frac{1}{3}\cdot\frac{2}{3} = \frac{2}{9} \neq P(X=1, Y=1)$.</p>
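For a machine check of these numbers (my addition), one can enumerate the joint distribution exactly with rational arithmetic:

```python
from fractions import Fraction

pmf = {x: Fraction(1, 3) for x in (-1, 0, 1)}   # distribution of X

E_X  = sum(p * x    for x, p in pmf.items())
E_Y  = sum(p * x**2 for x, p in pmf.items())    # Y = X^2
E_XY = sum(p * x**3 for x, p in pmf.items())
assert E_XY == E_X * E_Y == 0                   # uncorrelated

P_X1    = pmf[1]             # P(X = 1) = 1/3
P_Y1    = pmf[-1] + pmf[1]   # P(Y = 1) = 2/3
P_X1_Y1 = pmf[1]             # X = 1 forces Y = 1
assert P_X1_Y1 != P_X1 * P_Y1                   # 1/3 != 2/9: not independent
print("uncorrelated but not independent")
```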
1,879,673
<p>I have woven the below incomplete proof of the following claim:</p> <blockquote> <p><em>Claim</em>. If $X$ is completely regular and $Y$ is a compactification of $X$, then there is a unique, continuous, surjective, closed map $g:\beta\left(X\right)\to Y$ which is the identity on $X$.</p> </blockquote> <p><em>Here, $\beta\left(X\right)$ is the Stone-Čech compactification of $X$.</em></p> <p><em>Proof</em>. Let $f:X\to Y$ be such that $x\mapsto x$. Then $f$ is continuous. Since $f$ is continuous and $Y$ is compact and Hausdorff, it is the case that $f$ extends uniquely to a continuous map $g:\beta\left(X\right)\to Y$. Then $g$ is the identity on $X$. Let $C\subseteq\beta\left(X\right)$ be closed. Since $\beta\left(X\right)$ is compact, it is the case that $C$ is compact. Since $g$ is continuous, it is the case that $g\left(C\right)$ is compact. Since $Y$ is Hausdorff, it is the case that $g\left(C\right)$ is closed. Then $g$ is closed...</p> <p>I do not know how to show that $g$ is surjective. Am I allowed to use the "maximality" of $\beta\left(X\right)$? If so, then I believe that it would follow that $Y\subseteq\beta\left(X\right)$, which would imply that $g$ is surjective. I am not sure because this "maximality" is not defined in terms of containment.</p>
DanielWainfleet
254,665
<p>You proved the existence of a continuous closed surjection $g:\beta X\to Y$ with $g|_X=id_X.$ You did not explicitly prove the uniqueness of such a $g.$ </p> <p>If $f_1:A\to B$ and $f_2:A\to B$ are continuous and $B$ is Hausdorff then $\{x\in A: f_1(x)=f_2(x)\}$ is closed in $A.$ </p> <p>In particular, if $g_1:\beta X\to Y$ is continuous and $g_1|_X=id_X,$ then the set $\{x\in \beta X:g(x)=g_1(x)\}$ is closed and contains $X,$ so $$\{x\in \beta X:g(x)=g_1(x)\}\supset Cl_{\beta X}(X)=\beta X,$$ that is, $g_1=g.$</p>
1,043,734
<p>I found the following on Wikipedia, <a href="http://en.wikipedia.org/w/index.php?title=Elementary_algebra&amp;oldid=633621020#Properties_of_inequality" rel="nofollow">on the page for Inequalities</a>:</p> <blockquote> <p>If $a&lt;b$ and $c&lt;d$ then $a+c &lt; b+d$.</p> </blockquote> <p>It references <a href="http://books.google.co.uk/books?id=b3vqad8tbiAC&amp;lpg=PA96&amp;ots=FF2OYYGNCV&amp;dq=algebra%20inequality%20properties&amp;pg=PA96#v=onepage&amp;q=algebra%20inequality%20properties&amp;f=false" rel="nofollow">Intermediate Algebra</a>, but I don't see this specific property there.</p> <p>Is there a name for this particular property? Is there a proof for this property from the other properties?</p>
Milo Brandt
174,927
<p>This holds in any ordered field (or more generally, any partially ordered group); the only properties we need to take advantage of are translation invariance and transitivity. That is, the properties that $$a&lt;b\Leftrightarrow a+c&lt;b+c$$ $$a&lt;b \text{ and } b&lt;c\Rightarrow a&lt;c$$</p> <p>Starting with $$a&lt;b$$ we can, using translation invariance, add a constant to both sides: $$a+c&lt;b+c$$ Using translation invariance again, translating $c&lt;d$ by $b$, we also get $$b+c&lt;b+d$$ and using transitivity gives $$a+c&lt;b+d$$</p>
145,950
<p>How can I show that any finite CW-space can be embedded into a Euclidean space of some dimension? Any help or reference would be greatly appreciated.</p>
Igor Rivin
11,142
<p>Well, any simplicial complex can be realized as a subset of the simplex in $\mathbb{R}^V$ (where $V$ is the number of vertices). But a CW complex can only be embedded up to homotopy, it seems (see the answer to your duplicate question on math.stackexchange)</p>
699,383
<p>I am a non-mathematician who knows some elementary calculus and I want to prove that the sequence $(x_n)$ given by</p> <p>$$ x_n=-\sqrt{n} + n\ln\Big(1+\frac{1}{\sqrt{n}}\Big) $$</p> <p>is decreasing. Is there an elegant way to show this?</p>
Artem
48,057
<ol> <li><p>As @user130512 said, you could compare $x_{n+1}$ with $x_n$, i.e. show that $x_{n+1}-x_n&lt;0$ for all $n$. Sometimes it can also help to look at the ratio between the two, that is $x_{n+1}/x_n$: for a sequence of positive terms, a ratio smaller than $1$ means the sequence is decreasing.</p></li> <li><p>You should also be able to look at the derivative of $x_n$ (treating $n$ as a continuous variable) to deduce whether it is positive or negative. Positive $\frac{d(x_n)}{dn}$ means it is increasing, while negative means it is decreasing. Note that it can be increasing on one interval but decreasing on another, so, as in your case, you would need to prove that $\frac{d(x_n)}{dn}$ is smaller than $0$ for all values of $n$, since the derivative is not a constant.</p></li> </ol> <p>The derivative in your case is:</p> <p>$$\dfrac{2(n+\sqrt{n})\log\left(1+\tfrac1{\sqrt{n}}\right)-(2\sqrt{n}+1)}{2(n+\sqrt{n})}$$</p> <p>The denominator is positive, so we only need to work with the numerator and show that $$2(n+\sqrt{n})\log\left( 1+\frac{1}{\sqrt{n}} \right)-(2\sqrt{n}+1) &lt; 0$$ for all $n\geq 1$.</p> <p>You can also see it graphically in the derivative:</p> <p><img src="https://i.stack.imgur.com/OS6fo.png" alt="enter image description here"></p>
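A direct numerical check of monotonicity (my own addition):

```python
import math

def x_term(n):
    return -math.sqrt(n) + n * math.log(1 + 1 / math.sqrt(n))

vals = [x_term(n) for n in range(1, 2001)]
assert all(b < a for a, b in zip(vals, vals[1:]))   # strictly decreasing
print(vals[0], vals[-1])  # starts near -0.307 and decreases toward -1/2
```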
699,383
<p>I am a non-mathematician who knows some elementary calculus and I want to prove that the sequence $(x_n)$ given by</p> <p>$$ x_n=-\sqrt{n} + n\ln\Big(1+\frac{1}{\sqrt{n}}\Big) $$</p> <p>is decreasing. Is there an elegant way to show this?</p>
WimC
25,313
<p>This does not answer your question but might be helpful. Let $x&gt;0$ be a real number. Then it follows from $$\log(x) = \int_1^x \frac{dt}{t}$$ and the estimates $$2-t &lt;\frac{1}{t}&lt;1-\frac{(t-1)x}{x+1}$$ for $t\in(1,1+1/x)$ that</p> <p>$$ x-\frac{1}{2} &lt; x^2\log\left(1+\frac{1}{x}\right)&lt;x-\frac{1}{2}+\frac{1}{2(x+1)}. $$</p> <p>In particular $$\lim_{x\to\infty}x^2\log\left(1+\frac{1}{x}\right)-x=-\frac{1}{2}$$ which might or might not be what you are after.</p>
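These bounds are easy to spot-check numerically (my addition):

```python
import math

# check x - 1/2 < x^2 log(1 + 1/x) < x - 1/2 + 1/(2(x+1)) at sampled x > 0
xs = [0.1, 0.5, 1.0, 3.0, 10.0, 250.0]
ok = [x - 0.5 < x * x * math.log(1 + 1 / x) < x - 0.5 + 1 / (2 * (x + 1))
      for x in xs]
print(ok)  # all True
```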
831,618
<p>Please help me to prove that this integral converges.</p> <p>$$\int_{0}^1 \frac{1}{\sqrt[3]{1-x^3}}\ dx $$</p> <p>I have no ideas so far. I tried to find a bigger function whose integral converges, and to compare with equivalent functions, but with no result yet.</p>
Lucian
93,448
<p><strong>Hint:</strong> Use the fact that $\sqrt{1-x^2}&lt;\sqrt[3]{1-x^3}&lt;1$ for $x\in(0,1)$.</p>
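To see the convergence concretely (my own addition): since $1-x^3=(1-x)(1+x+x^2)$, the substitution $1-x=u^{3/2}$ turns the integrand into the smooth function $\tfrac32(1+x+x^2)^{-1/3}$ on $[0,1]$, and the numeric value matches $\frac{2\pi}{3\sqrt3}$ (my computation via the Beta function), safely below the bound $\pi/2$ from the hint:

```python
import math

# midpoint rule for int_0^1 (3/2) * (1 + x + x^2)**(-1/3) du with x = 1 - u**1.5
N = 20000
total = 0.0
for i in range(N):
    u = (i + 0.5) / N
    x = 1 - u ** 1.5
    total += 1.5 * (1 + x + x * x) ** (-1 / 3)
total /= N

exact = 2 * math.pi / (3 * math.sqrt(3))
print(total, exact)  # both about 1.2092
```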
147,363
<blockquote> <p>If $\alpha$ is an algebraic element of $\mathbb{C}$, then there is a unique non-zero polynomial $f \in \mathbb{Q}[x]$ with leading coefficient $1$ such that $f(\alpha) = 0$, and $f$ is irreducible. </p> </blockquote> <p>The first part of this proof would be proving that $f$ is not a unit, but what does the concept of a unit mean in the set of polynomials? I can't see how a polynomial would have a $2$ sided inverse under multiplication? </p>
Bill Dubuque
242
<p><strong>Hint</strong> $\rm\:\ f\:\!$ unit (i.e. invertible) $\rm\: \Rightarrow\: 1 = f(x)g(x)\:\Rightarrow\: 1 = f(\alpha)g(\alpha) = 0\cdot g(\alpha) = 0$ </p> <p>The special case $\rm\:f(x) = x\:$ is one of my <a href="https://math.stackexchange.com/a/2523/242">most popular posts</a> (due to its <em>universal</em> view?)</p> <p><a href="https://math.stackexchange.com/q/82552/242">Generally</a> $\rm\:\sum a_i\:\!x^i\:$ is a unit in $\rm R[x] \iff a_0\:$ is a unit in $\rm\:R\:$ and $\rm\:a_1,...,a_n$ are nilpotent in $\rm R.$</p>
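A concrete instance of the general statement (my addition): in $(\Bbb Z/4)[x]$ the constant term of $1+2x$ is a unit and $2$ is nilpotent, and indeed $(1+2x)^2=1$:

```python
def polymul_mod(p, q, m):
    """Multiply coefficient lists (lowest degree first) modulo m."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] = (r[i + j] + a * b) % m
    while len(r) > 1 and r[-1] == 0:   # drop trailing zero coefficients
        r.pop()
    return r

f = [1, 2]                   # 1 + 2x in (Z/4Z)[x]
print(polymul_mod(f, f, 4))  # [1]: f is its own inverse
```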
375,094
<p>A metric space <span class="math-container">$(M,d)$</span> is <em>doubling</em> if there exists <span class="math-container">$n$</span> such that every ball of radius <span class="math-container">$r$</span> can be covered by <span class="math-container">$n$</span> balls of radius <span class="math-container">$r/2$</span>, for all <span class="math-container">$r$</span>. For which f.g. groups <span class="math-container">$G$</span> and finite symmetric generating sets <span class="math-container">$S$</span>, is <span class="math-container">$\mathrm{Cay}(G, S)$</span> doubling under the path metric? Groups like this have polynomial growth, so they are virtually nilpotent by Gromov's theorem.</p> <p>So which virtually nilpotent groups are doubling, and for which generating sets? All, I suppose, but I got cold feet trying to do it, it seemed quite difficult straight from the definitions and I don't really know the Lie group stuff well enough.</p> <blockquote> <p>If <span class="math-container">$S$</span> is a finite symmetric generating set for a group <span class="math-container">$G$</span>, is <span class="math-container">$\mathrm{Cay}(G, S)$</span> doubling precisely when <span class="math-container">$G$</span> is virtually nilpotent?</p> </blockquote> <p>I'll note that in general (undirected) graphs, doubling implies polynomial growth, but not the other way around, consider for example the comb graph with vertices <span class="math-container">$\mathbb{Z} \times \mathbb{N}$</span> and edges <span class="math-container">$\{\{(m,n), (m,n+1)\}, \{(m,0), (m+1,0)\} \;|\; m \in \mathbb{Z}, n \in \mathbb{N}\}$</span>. But could be true for vertex-transitive graphs.</p>
YCor
14,094
<p>Yes: a f.g. discrete, and more generally compactly generated locally compact group is doubling iff it has polynomial growth.</p> <p>For f.g. groups, you mentioned <span class="math-container">$\Rightarrow$</span>, and asked <span class="math-container">$\Leftarrow$</span>, which I justify below.</p> <p>Define <span class="math-container">$X$</span> to be large-scale doubling if for some <span class="math-container">$R_0,M_0$</span>, every ball of radius <span class="math-container">$R\ge R_0$</span> is a union of at most <span class="math-container">$M_0$</span> balls of radius <span class="math-container">$R/2$</span>.</p> <p>To be large-scale doubling is a QI-invariant. For a metric space in which balls of given radius have bounded cardinality, it's obviously equivalent to doubling.</p> <p>So for a f.g. group, being doubling doesn't depend on a choice of finite generating subset. Since every f.g. nilpotent group is QI to some simply connected nilpotent Lie group (Malcev), it is enough to check that every simply connected nilpotent Lie group <span class="math-container">$G$</span> is large-scale doubling. (More generally every compactly generated locally compact group of at most polynomial growth is QI to such a <span class="math-container">$G$</span>.)</p> <p>Indeed Pansu proved in 1983 that every asymptotic cone of such a Lie group <span class="math-container">$G$</span> is homeomorphic to <span class="math-container">$G$</span> and is a proper metric space. 
This implies that <span class="math-container">$G$</span> is large-scale doubling, by the following fact:</p> <p>If a space <span class="math-container">$X$</span> is not large-scale doubling, then there exists a sequence of points <span class="math-container">$(x_n)$</span>, and radii <span class="math-container">$r_n\to\infty$</span> and <span class="math-container">$M_n\to\infty$</span> such that the <span class="math-container">$2r_n$</span>-ball around <span class="math-container">$x_n$</span> contains <span class="math-container">$M_n$</span> points at pairwise distance <span class="math-container">$\ge r_n$</span>. It easily follows that the ultralimit of rescaled metric spaces <span class="math-container">$(X,x_n,\frac{1}{r_n}d)$</span>, which has a natural basepoint <span class="math-container">$o$</span>, has infinitely many points in the <span class="math-container">$2$</span>-ball around <span class="math-container">$o$</span> at pairwise distance <span class="math-container">$\ge 1$</span>, so is not a proper metric space.</p>
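For intuition (my own toy illustration, not part of the argument above): in $\mathbb Z^2$ with the standard generators the word metric is the $\ell^1$ metric, ball volumes grow polynomially, and the volume ratio $|B(2r)|/|B(r)|$ stays bounded, which is the volume-counting shadow of the covering condition:

```python
def ball_size(r):
    """Number of points of Z^2 within word distance r of the origin (L1 ball)."""
    return sum(1 for a in range(-r, r + 1)
                 for b in range(-r, r + 1) if abs(a) + abs(b) <= r)

sizes = [ball_size(r) for r in range(30)]
assert all(s == 2 * r * r + 2 * r + 1 for r, s in enumerate(sizes))  # quadratic growth

ratios = [ball_size(2 * r) / ball_size(r) for r in range(1, 15)]
print(max(ratios))  # tends to 4 from below (growth degree 2)
```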
3,029,778
<p>I asked a similar question in <a href="https://math.stackexchange.com/questions/3029766/positive-definite-matrix-implies-the-infimum-of-eigenvalues-are-positive">here</a>, but actually what I want to ask is more difficult as described below:</p> <p>Suppose <span class="math-container">$P(x): \mathbb{R} \to \mathbb{R}^{n \times n}$</span> is always a positive semi-definite matrix. Now if there is a set <span class="math-container">$\Omega \subset \mathbb{R}$</span> such that we know the infimum of the determinant of <span class="math-container">$P(x)$</span> over <span class="math-container">$\Omega$</span> is always positive, then does it imply that the infimum (over <span class="math-container">$\Omega$</span>) of the minimum eigenvalue of <span class="math-container">$P(x)$</span> is always positive? In a mathematical way:</p> <p>Is the following conclusion correct? <span class="math-container">\begin{equation} \inf_{x \in \Omega}\{\det(P(x))\}&gt;0 \implies \inf_{x \in \Omega} \{\lambda_{{\rm min}}(P(x)) \} &gt; 0 \end{equation}</span>.</p>
Michael Burr
86,421
<p>Originally, the text-part of the question asked whether we knew that, over <span class="math-container">$\Omega$</span>, the determinant of <span class="math-container">$P$</span> is always positive. This is different from the case that the infimum of the determinant is positive (the infimum condition implies that the determinant is always bounded away from zero).</p> <p>For the edited question:</p> <p>It is enough for <span class="math-container">$\Omega$</span> to be compact and <span class="math-container">$P$</span> to be continuous. Suppose that <span class="math-container">$\Omega$</span> has the desired property and the infimum of the smallest eigenvalue goes to zero. In this case, there exists a sequence <span class="math-container">$\{x_i\}$</span> so that <span class="math-container">$x_i\in\Omega$</span> for all <span class="math-container">$i$</span> and <span class="math-container">$\lambda_{\min}(P(x_i))\rightarrow 0$</span>.</p> <p>Since the determinant is bounded from below and the determinant is the product of the eigenvalues, it must be that the largest eigenvalue of <span class="math-container">$P(x_i)$</span> is growing (without bound). Since the magnitudes of the eigenvalues are bounded in terms of the maximum of the entries of the matrix, for an eigenvalue to grow without bound, the maximum of the entries of the matrix must also be growing.</p> <p>We can find a subsequence <span class="math-container">$\{x_{i_j}\}$</span> so that in one particular entry of the matrix, the magnitude of that entry is growing without bound. By sequential compactness, there is some subsequence that converges in <span class="math-container">$\Omega$</span>. However, since the magnitude of the entry is growing without bound, it follows that the corresponding entry of the limit must be <span class="math-container">$\pm\infty$</span>, which is absurd.</p>
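A small numerical illustration of the mechanism in the argument above (my own sketch, not part of the original answer; the diagonal family is a made-up example): for the family P(x) = diag(1/x, x) the determinant is identically 1, yet the smallest eigenvalue tends to 0, and this forces the largest entry to blow up — exactly the unboundedness step used in the proof.

```python
# Hypothetical family P(x) = diag(1/x, x), x in (0, 1]:
# det P(x) = 1 for all x, but lambda_min(P(x)) = x -> 0,
# which forces the entry 1/x to grow without bound.

def det(x):
    return (1.0 / x) * x  # product of the two eigenvalues

def lambda_min(x):
    return min(1.0 / x, x)

def max_entry(x):
    return max(1.0 / x, x)

xs = [10.0 ** (-k) for k in range(1, 7)]
dets = [det(x) for x in xs]
mins = [lambda_min(x) for x in xs]
maxs = [max_entry(x) for x in xs]
```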
3,248,569
<p>I have the following two parametric equations of lines:</p> <p><span class="math-container">$$\begin{cases} x = -t + 1 \\ y = t + 3 \\ z = -6t \end{cases} \quad \land \quad \begin{cases} x = 2s + 4 \\ y = -s \\ z = 2s + 1 \end{cases}$$</span></p> <p>I want to examinate their mutual position, that is, I want to find out if they intersect.</p> <p><span class="math-container">$$\begin{cases} -t + 1 = 2s + 4 \\ t + 3 = -s \\ -6t = 2s + 1 \end{cases}$$</span></p> <p><span class="math-container">$$\begin{cases} -t + 1 = 2s + 4 \\ -t - 3 = s \\ -6t = 2(-t-3) + 1 \Rightarrow -6t = -2t - 5 \Longrightarrow t = \frac{5}{4}\end{cases}$$</span></p> <p><span class="math-container">$$-\frac{5}{4} - 3 = s \Longrightarrow s = - \frac{17}{4}$$</span></p> <p>So they intersect for <span class="math-container">$t = \frac{5}{4} \lor s=-\frac{17}{4}$</span>.</p> <p>Plugging <span class="math-container">$t$</span> into first equation:</p> <p><span class="math-container">$$\begin{cases} x = -\frac{5}{4} + 1 \\ y = \frac{5}{4} + 3 \\ z = -6 * \frac{5}{4} \end{cases}$$</span></p> <p>Point <span class="math-container">$P(-\frac{1}{4}, \frac{17}{4}, -\frac{15}{2})$</span> is the intersection point. </p> <p>Thing is, I don't think that's true. I sketched those lines in parametric plotter and they look like this:</p> <p><a href="https://i.stack.imgur.com/AmItT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AmItT.png" alt="skew"></a></p> <p>They look like skew to me. What went wrong?</p>
Cesareo
397,348
<p>Hint.</p> <p>Calling <span class="math-container">$\lambda_n = \frac{y_n}{x_n}$</span> we have</p> <p><span class="math-container">$$ \lambda_n = \frac{\lambda_{n-1}+3}{\lambda_{n-1}+1} $$</span></p> <p>giving a sequence <span class="math-container">$\lambda_n$</span> with limit at</p> <p><span class="math-container">$$ \lambda = \frac{\lambda+3}{\lambda+1}\Rightarrow \lambda^2 = 3 $$</span></p>
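A quick numerical check of the hint (my own sketch; the starting value is an arbitrary assumption): iterating the fractional recurrence drives the ratio to the positive fixed point, where λ² = 3.

```python
# Iterate lambda_n = (lambda_{n-1} + 3) / (lambda_{n-1} + 1) from the hint.
def step(lam):
    return (lam + 3.0) / (lam + 1.0)

lam = 1.0  # arbitrary positive start (assumed); any positive y_0/x_0 behaves the same
for _ in range(80):
    lam = step(lam)
```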
13,882
<p>Background: When Ueno builds the fully faithful functor from Var/k to Sch/k he mentions that the variety $V$ can be identified with the rational points of $t(V)$ over $k$. I know how to prove this on affine everything and will work out the general case at some future time.</p> <p>The question that this got me thinking about was if $X$ is a $k$-scheme where $k$ is algebraically closed, then are the $k$-rational points of $X$ just the closed points? This is probably extremely well known, but I can't find it explicitly stated nor can I find a counterexample.</p> <p>For $k$ not algebraically closed, I can come up with examples where this is not true. So in general is there some relation between the closed points and rational points on schemes (everything over $k$)?</p> <p>This would give a bit more insight into what this functor does. It takes the variety and makes all the points into closed points of a scheme, then adds the generic points necessary to actually make it a legitimate scheme. General tangential thoughts on this are welcome as well.</p>
Wanderer
1,107
<p>It is certainly true for schemes of finite type over $k$ (algebraically closed) that the closed points are exactly the $k$-points. To see this, notice that if $x \in X$ is any point, then the closure $\overline{\{x\}}$, equipped with its reduced subscheme structure, is integral and has dimension equal to the transcendence degree of its function field over $k$ (Hartshorne, exercise 3.20 in chapter 2). I hope that's clear enough?</p> <p>For $k$-schemes which are not (locally) of finite type, this doesn't work, as Martin shows below.</p>
1,836,325
<p>I am trying to check whether </p> <p>$f: \mathbb{R} \to \mathbb{R}^\omega$ $f(t) = (t, 2t, 3t, \ldots)$ is continuous or not in the product and box topology. </p> <p>But I have a feeling I don't have the necessary correct concepts to do these proofs. </p> <p>I know the product topology:</p> <ol> <li><p>Is the coarsest topology such that $\pi_k: \prod\limits_{i = 1}^\infty \mathbb{R} \to \mathbb{R}$ is continuous</p></li> <li><p>Has subbasic elements: $\mathbb{R} \times \mathbb{R} \times U \times \mathbb{R} \times \ldots$, where $U$ is open in $(\mathbb{R}, \tau)$</p></li> </ol> <p>I don't think these are sufficient for me to check continuity of $f$...</p> <p>To show that $f$ is continuous, we need to show that the preimage of every open set under $f$ is open. Take a subbasic element, say: $$\mathbb{R} \times \mathbb{R} \times U \times \mathbb{R} \times \ldots$$ as above, then $f^{-1}(\mathbb{R} \times \mathbb{R} \times U \times \mathbb{R} \times \ldots) = ?$ This is too abstract.</p> <p>How about looking at just $f(t) = t$? Since $f(t)$ is continuous, we know for sure that $f^{-1}(U)$ is open.</p> <p>How about $f(t) = (t, 2t)$? Take some open set $U \times V \subset X \times Y$, then $f^{-1}(U \times V)$ is? How can I show that $f^{-1}(U \times V)$ is open? It doesn't seem to be obvious </p> <p>I think if I see how this works I can generalize it to the general infinite product case</p>
syzygy
349,357
<p>This is clear: the components of $f$ are continuous. Now use the universal property of product spaces.</p>
3,625,233
<p>We roll a dice until we get 6. Knowing that we have rolled 10 times, evaluate the probability that in the next 20 rolls there will be no 6. So in this question are we supposed to use binomial distribution or geometric distribution?</p>
Martin Argerami
22,857
<p>Because <span class="math-container">$e^{-x}$</span> and <span class="math-container">$xe^{-x}$</span> are already solutions of the homogeneous part. So if you were to take <span class="math-container">$ae^{-x}+bxe^{-x}$</span>, what you get is a solution of the homogeneous part, and you can never get <span class="math-container">$e^{-x}$</span>. In those situations, what works is "going up one degree"; that's why you try with <span class="math-container">$ax^2e^{-x}$</span> to obtain the part with <span class="math-container">$e^{-x}$</span>. </p>
3,625,233
<p>We roll a dice until we get 6. Knowing that we have rolled 10 times, evaluate the probability that in the next 20 rolls there will be no 6. So in this question are we supposed to use binomial distribution or geometric distribution?</p>
user577215664
475,762
<p><span class="math-container">$$y'' +2y' +y = e^{-x}+e^x$$</span> Rewrite it as: <span class="math-container">$$(e^xy)''=1+e^{2x}$$</span> Integrate twice.</p> <hr> <p>With the undetermined coefficients method, the guess should be: <span class="math-container">$$y=Ae^x+Bx^2e^{-x}$$</span> since <span class="math-container">$e^{-x}$</span> and <span class="math-container">$xe^{-x}$</span> are already solutions of the homogeneous equation.</p>
2,878,206
<p>Let $a_n = \frac{9^n}{n + 5^n}$.</p> <p>At large $n$ value, $a_n$ is expected to behave like $\frac{9^n}{5^n}$, therefore it diverges.</p> <p>Using the direct comparison test, how can I find $b_n$ (has to be smaller than $a_n$ to prove that $a_n$ diverges)?</p>
user
505,767
<p>We have that eventually $6^n \ge n+5^n$ therefore</p> <p>$$a_n = \frac{9^n}{n + 5^n}\ge \frac{9^n}{6^n}=\left(\frac32\right)^n\to \infty$$</p> <p>indeed by induction </p> <ul> <li><p>$n=1\implies 6\ge 1+5$</p></li> <li><p>assuming $6^n \ge n+5^n$ true we have</p></li> </ul> <p>$$6^{n+1}=6\cdot 6^n\ge 6n+6\cdot 5^n\ge (n+1)+5^{n+1}$$</p>
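A quick sanity check of the inequality and the divergence (my own sketch, not from the answer): the bound $6^n \ge n + 5^n$ can be verified with exact integer arithmetic, and the sequence $a_n$ visibly grows without bound.

```python
# Verify 6**n >= n + 5**n exactly for a range of n, and watch
# a_n = 9**n / (n + 5**n) increase without bound.
def a(n):
    return 9 ** n / (n + 5 ** n)

inequality_holds = all(6 ** n >= n + 5 ** n for n in range(1, 200))
samples = [a(n) for n in (1, 10, 30, 60)]
```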
2,057,857
<p>I am trying to complete my homework based on equivalence relations and I don't seem to understand them properly, so I need help!</p> <p>My question is: must all the elements in my set satisfy all three conditions before I can say there is an equivalence relation, or can I say there is an equivalence relation if just some of the elements satisfy the three conditions? And does the order matter: do I have to compare the elements from left to right, or can I pick randomly? E.g. my set = (0,2,4,6), relation defined as a~b if a+b>2. If I go in order I can say that 0 is not related to 2 because 0+2=2, but if I pick randomly I can state that 6~0 because 6+0>2. You can see that some of the elements satisfy the condition but not all of them, so do I still say there is an equivalence relation? I am really confused.</p> <p>Thank you</p>
Akerbeltz
351,735
<p>Let <span class="math-container">$A$</span> be an element of the set of all the finite subsets of <span class="math-container">$\mathbb{R}$</span>, which we shall denote simply by <span class="math-container">$\mathcal{P}_{&lt;\omega}(\mathbb{R})$</span></p> <p><strong>Assertion</strong>: <span class="math-container">$\mathcal{P}_{&lt;\omega}(\mathbb{R})\preccurlyeq\,^\omega\mathbb{R}$</span></p> <p>We will identify <span class="math-container">$A$</span> with an <span class="math-container">$\omega$</span>-sequence of elements of <span class="math-container">$\mathbb{R}$</span>, that is, an element of <span class="math-container">$^\omega\mathbb{R}$</span>, in the following way:</p> <p>Assume that <span class="math-container">$A=\{a_0,\dots,a_n\}$</span>. Then we can construct an <span class="math-container">$\omega$</span>-sequence <span class="math-container">$(b_k)_{k\in\omega}$</span> defined by:</p> <p><span class="math-container">$$b_k=\begin{cases} a_k\qquad\qquad\text{if }k\le n \\ a_n+1\qquad\text{ if }k&gt;n \end{cases}$$</span></p> <p>It is clear that the correspondence <span class="math-container">$A\longmapsto(b_k)_{k\in\omega}$</span> is injective.</p> <p>Now, on the one hand we have that <span class="math-container">$\mathbb{R}\preccurlyeq\mathcal{P}_{&lt;\omega}(\mathbb{R})$</span>, because the function <span class="math-container">$r\in\mathbb{R}\longmapsto\{r\}\in\mathcal{P}_{&lt;\omega}(\mathbb{R})$</span> is obviously injective.</p> <p>On the other hand, <span class="math-container">$\mathcal{P}_{&lt;\omega}(\mathbb{R})\preccurlyeq\mathbb{R}$</span>, since <span class="math-container">$\;\mathcal{P}_{&lt;\omega}(\mathbb{R})\preccurlyeq\,^\omega\mathbb{R}\;$</span> and <span class="math-container">$\;^\omega\mathbb{R}\preccurlyeq\mathbb{R}$</span>: in fact, <span class="math-container">$|^\omega\mathbb{R}|=\big(2^{\aleph_0}\big)^{\aleph_0}=2^{\aleph_0\times\aleph_0}=2^{\aleph_0}=|\mathbb{R}|$</span></p> <p>From the Cantor-Bernstein theorem, we
obtain that <span class="math-container">$|\mathcal{P}_{&lt;\omega}(\mathbb{R})|=|\mathbb{R}|=2^{\aleph_0}$</span></p>
2,885,918
<p>We consider the following random variable $X$: We have a uniform distribution of the numbers of the unit interval $[0,1]$. After a number $x$ from $[0,1]$ is chosen, numbers from $[0,1]$ are chosen until a number $y$ with $x\leq y$ pops up.</p> <p>The random variable $X$ counts the number of trials to obtain $y$. How to calculate $P(X=n)$, $n=1,2,\ldots$?</p>
uniquesolution
265,735
<p>The setting is naturally of independent draws of numbers.</p> <p>Given $x\in[0,1]$, the probability that $y&lt;x$ is just $x$, so the probability that $n-1$ numbers were chosen and all smaller than $x$ is $x^{n-1}$, and you want the next number to be at least $x$, so the probability for that is $(1-x)$. Thus:</p> <p>$$P(X=n)=x^{n-1}(1-x),$$</p> <p>if the initial $x$ is fixed, non-random, and</p> <p>$$P(X=n)=\int_0^1x^{n-1}(1-x)\,dx=\frac{1}{n}-\frac{1}{n+1},$$</p> <p>if the sample space of $X$ includes random choices of the initial $x$, and then the distribution of $X$ is of course independent of $x$.</p> <p>As written, and in lack of any additional information, the original question may be interpreted in both ways.</p>
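A Monte Carlo sketch of the random-$x$ case (my own check; the trial count and seed are arbitrary choices): the simulated frequencies should match $P(X=n)=\frac1n-\frac1{n+1}$.

```python
import random

def trial(rng):
    """Draw x, then keep drawing until a value y >= x appears; return the draw count."""
    x = rng.random()
    n = 0
    while True:
        n += 1
        if rng.random() >= x:
            return n

rng = random.Random(12345)
N = 100_000
counts = {}
for _ in range(N):
    k = trial(rng)
    counts[k] = counts.get(k, 0) + 1

p1 = counts.get(1, 0) / N  # theory above: 1/1 - 1/2
p2 = counts.get(2, 0) / N  # theory above: 1/2 - 1/3
```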
2,607,090
<p>I have a function for which I know:</p> <p>$f(2) = 2x -3y \\ f(3) = 5x - 6y \\ f(4) = 9x - 10 y \\ f(5) = 14x - 15y$</p> <p>Assuming that $f$ is a polynomial, how do I find the general expression for $f$? After many minutes of fiddling I eventually found that this general expression works:</p> <p>$f(N) = \frac{N(N+1)-2}{2}x - \frac{N(N+1)}{2}y$.</p> <p>It's easy to verify that the expression works, but I found this by trial-and-error and I don't know if it's either unique or the simplest solution.</p>
filtercoffee
455,938
<p>You know: $f(2) = 2x -3y \\ f(3) = 5x - 6y \\ f(4) = 9x - 10 y \\ f(5) = 14x - 15y$</p> <p>Which means you know the function is of the form</p> <p>$f(N) = Ax - By$</p> <p>where A and B are expressions involving N.</p> <p>You can find polynomial expressions for A and B by using <a href="https://en.wikipedia.org/wiki/Polynomial_interpolation" rel="nofollow noreferrer">polynomial interpolation</a>.</p> <p>For A, the data points you use are $(2,2),(3,5),(4,9),(5,14)$.</p> <p>For B, the data points you use are $(2,3),(3,6),(4,10),(5,15)$.</p> <p>Putting these into an <a href="https://www.dcode.fr/lagrange-interpolating-polynomial" rel="nofollow noreferrer">online Lagrange interpolation calculator</a>, it finds that $A = N^2/2 + N/2 - 1$ and $B = N^2/2 + N/2$, which is the same solution you found.</p>
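A self-contained check of the interpolation step (my own sketch; no external calculator needed): exact Lagrange interpolation over the rationals through the four data points reproduces the closed forms $A(N)=N(N+1)/2-1$ and $B(N)=N(N+1)/2$.

```python
from fractions import Fraction

def lagrange_eval(points, x):
    """Evaluate the Lagrange interpolating polynomial through `points` at x (exact)."""
    total = Fraction(0)
    for i, (xi, yi) in enumerate(points):
        term = Fraction(yi)
        for j, (xj, _) in enumerate(points):
            if j != i:
                term *= Fraction(x - xj, xi - xj)
        total += term
    return total

A_pts = [(2, 2), (3, 5), (4, 9), (5, 14)]
B_pts = [(2, 3), (3, 6), (4, 10), (5, 15)]

def A_closed(n):
    return Fraction(n * (n + 1), 2) - 1

def B_closed(n):
    return Fraction(n * (n + 1), 2)
```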
3,290,199
<p>If I throw a fair dice <span class="math-container">$12$</span> times, the expected number of <span class="math-container">$6$</span> is <span class="math-container">$2$</span>, i.e. <span class="math-container">$6$</span> is expected to appear <span class="math-container">$2$</span> times when the dice is thrown <span class="math-container">$12$</span> times. But the probability of getting <span class="math-container">$6$</span> exactly <span class="math-container">$2$</span> times is <span class="math-container">$\binom{12}{2}(1/6)^{2} (5/6)^{10}$</span> which is less than <span class="math-container">$1$</span>. </p> <p>Now my question is <strong>How can you expect the face value six to appear two times, when the possibility of that appearing two times is very low?</strong></p> <p>I am trying to give an analogy: if you are participating in a game where you can win, lose or remain undecided, how can you expect to win when you know the possibility of winning the game is very low?</p> <p>Can anyone please make me understand where I am going wrong? I am really trying hard to understand.</p>
mlchristians
681,917
<p>Continuity over the given interval is all that is required for the integral (term synonymous with <em>antiderivative</em>) to exist over that interval.</p> <p>When our interval is <span class="math-container">$[a, b]$</span> we may invoke the Fundamental Theorem of Calculus which provides the guarantee. </p> <p>On the other hand, if we have say, continuity over an unbounded interval, say <span class="math-container">$[a, \infty)$</span>, we may restructure the (improper) integral so that we take the limit as the variable, say <span class="math-container">$b$</span>, approaches infinity of the comparable definite integral over <span class="math-container">$[a,b]$</span>. This permits us the use of the FTC to find an antiderivative before taking the limit. </p>
122,293
<p>Let's consider all possible permutations of N numbers. Suppose for each permutation we calculate the sum of absolute differences between consecutive elements. Thus, for (1,2,3) one would have abs(1-2)+abs(2-3)=2. Is it possible to obtain a distribution of such sums for given N? For instance, for N=3 one would have 3!=6 permutations and possible sums as 2 and 3. The number of sums of 2 is 2 and for 3 it's equal 4. </p>
Andrew D. King
4,580
<p>Here is a histogram for length 100. It seems to be normally distributed around $100^2 / 3$, which should put you on the scent, in spite of a complete absence of proof (see edit below).</p> <p>This is not at all surprising, since if $x$ and $y$ are drawn uniformly at random from the interval $[0,1]$, the expected value of $|x-y|$ is $$ \int_{x=0}^1 \left(\frac{x^2}{2} + \frac{(1-x)^2}{2}\right) dx = \frac 13. $$ Maybe turning this fact into a proof is very straightforward; maybe it is tricky.</p> <p><strong>Edit:</strong> I should note that the expected value of $(n^2-1)/3$ actually is very easy to prove; it is just the distribution that might be tricky.</p> <p>One way to generate a random permutation $\pi$ is to choose $n$ reals $r(i)$ uniformly at random from $[0,1]$. Now, what is the expected value of $|\pi(i) - \pi(i+1)|$? It is simply the expected number of $r(j)$ between $r(i)$ and $r(i+1)$, plus one. Because the expected difference between $r(i)$ and $r(i+1)$ is $1/3$, this is $1+(n-2)/3$, or $(n+1)/3$. Since we compute this for $n-1$ consecutive pairs, linearity of expectation tells us that the expected sum is $(n-1)(n+1)/3$, or $(n^2-1)/3$.</p> <p><img src="https://i.imgur.com/wYQ3uwh.png" alt=""></p>
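An exact brute-force check of the mean $(n^2-1)/3$ for small $n$ (my own sketch, complementing the histogram): enumerate all permutations and average the adjacent-difference sums with exact rationals.

```python
from fractions import Fraction
from itertools import permutations

def avg_adjacent_sum(n):
    """Average, over all n! permutations of 1..n, of sum |p(i) - p(i+1)|."""
    total = 0
    count = 0
    for p in permutations(range(1, n + 1)):
        total += sum(abs(p[i] - p[i + 1]) for i in range(n - 1))
        count += 1
    return Fraction(total, count)

means = {n: avg_adjacent_sum(n) for n in range(2, 8)}
```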
635,195
<p>I'm trying to calculate the following limit: </p> <p>$$\mathop {\lim }\limits_{x \to {0^ + }} {\left( {\frac{{\sin x}}{x}} \right)^{\frac{1}{x}}}$$</p> <p>What I did is writing it as: </p> <p>$${e^{\frac{1}{x}\ln \left( {\frac{{\sin x}}{x}} \right)}}$$</p> <p>Therefore, we need to calculate: </p> <p>$$\mathop {\lim }\limits_{x \to {0^ + }} \frac{{\ln \left( {\frac{{\sin x}}{x}} \right)}}{x}$$</p> <p>Now, we can apply L'Hopital rule, Which I did:<br> $$\Rightarrow cot(x) - {1 \over x}$$</p> <p>But in order to reach the final limit two more application of LHR are needed. Is there a better way?</p>
Stephen Dedalus
108,592
<p>Elementary proof using well known limits and inequalities: $$\frac{{\ln \left( {\frac{{\sin x}}{x}} \right)}}{x} = \frac{{\ln \left( \left( \frac{{\sin x}}{x}-1 \right) + 1 \right)}}{\frac{{\sin x}}{x}-1 } \cdot \frac{\frac{{\sin x}}{x}-1 }{x} =\frac{{\ln \left( \left( \frac{{\sin x}}{x}-1 \right) + 1 \right)}}{\frac{{\sin x}}{x}-1 } \cdot \frac{\sin x-x }{x^2}$$ but for $x &gt;0$ we have: $x-\frac{x^3}{6} \le \sin x \le x \Rightarrow -\frac{x^3}{6} = x-\frac{x^3}{6}-x\le\sin x -x \le 0 \Rightarrow -\frac{x}{6} \le\frac{\sin x-x }{x^2} \le 0$; the rest is the squeeze theorem.</p>
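A numeric sanity check (my own sketch) that the squeezed logarithm goes to 0, so the original limit is 1:

```python
import math

def f(x):
    """The original expression (sin x / x)^(1/x)."""
    return (math.sin(x) / x) ** (1.0 / x)

def log_term(x):
    """ln(sin x / x) / x, the quantity being squeezed."""
    return math.log(math.sin(x) / x) / x

xs = [10.0 ** (-k) for k in range(1, 6)]
fvals = [f(x) for x in xs]
```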
237,838
<p>The data are for the model $T(t) = T_{s} - (T_{s}-T_{0})e^{-\alpha t}$, where $T_0$ is the temperature measured at time 0, and $T_{s}$ is the temperature at time $t=\infty$, or the environment temperature. $T_{s}$ and $\alpha$ are parameters to be determined.</p> <p>How can I fit my data against this model? I'm trying to solve $T_{s}$ by $T_{s}=(T_{0}T_{2}-T_{1}^{2})/(T_{0}+T_{2}-2T_{1})$, where $T_{1}$ and $T_{2}$ are measurements in time $\Delta t$ and $2\Delta t$, respectively.</p> <p>However, the results are varying a lot through the whole data set.</p> <p>Shall I try gradient descent for the parameters?</p>
dantopa
206,581
<p>The model is nonlinear, but one of the parameters, $T_{s}$, is linear, which means we can 'remove' it.</p> <p>Start with a crisp set of definitions: a set of $m$ measurements $\left\{ t_{k}, T_{k} \right\}_{k=1}^{m}.$ The trial function, as pointed out by @Yves Daust, is $$ T(t) = T_{s} \left( 1 - e^{-\alpha t}\right). $$ The $2-$norm minimum solution is defined as $$ \left( T_{s}, \alpha \right)_{LS} = \left\{ \left( T_{s}, \alpha \right) \in \mathbb{R}_{+}^{2} \colon r^{2} \left( T_{s}, \alpha \right) = \sum_{k=1}^{m} \left( T_{k} - T(t_{k}) \right)^{2} \text{ is minimized} \right\}. $$</p> <p>The minimization criterion $$ \frac{\partial} {\partial T_{s}} r^{2} = 0 $$ leads to $$ T_{s^{*}} = \frac{\sum T_{k} \left( 1 - e^{-\alpha t_{k}} \right)} {\sum \left( 1 - e^{-\alpha t_{k}} \right)^{2}}. $$</p> <p>Now the total error can be written in terms of the remaining parameter $\alpha$: $$ r^{2}\left( T_{s^{*}}, \alpha \right) = r_{*}^{2} ( \alpha ) = \sum_{k=1}^{m} \left( T_{k} - T_{s^{*}} \left( 1 - e^{-\alpha t_{k}} \right) \right)^{2}. $$</p> <p>This function is an absolute joy to minimize. It decreases monotonically to the lone minimum, then increases monotonically. </p>
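A minimal sketch of the procedure (my own code, with made-up data and parameter values, assuming the trial function $T(t)=T_s(1-e^{-\alpha t})$ from the answer): eliminate $T_s$ in closed form, then do a one-dimensional grid search over $\alpha$.

```python
import math

def ts_star(alpha, data):
    """Closed-form least-squares T_s for fixed alpha (the linear parameter, eliminated)."""
    num = sum(T * (1 - math.exp(-alpha * t)) for t, T in data)
    den = sum((1 - math.exp(-alpha * t)) ** 2 for t, T in data)
    return num / den

def residual(alpha, data):
    """Total squared error r*^2(alpha) with T_s substituted by its optimum."""
    Ts = ts_star(alpha, data)
    return sum((T - Ts * (1 - math.exp(-alpha * t))) ** 2 for t, T in data)

# Synthetic noiseless data with Ts = 80, alpha = 0.3 (assumed, for illustration only).
data = [(t, 80.0 * (1 - math.exp(-0.3 * t))) for t in range(1, 21)]

# One-dimensional search over the remaining parameter alpha.
r_best, alpha_best = min((residual(a / 100, data), a / 100) for a in range(5, 100))
```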
301,264
<p>Note: There is another question of the same title, but it is different and asks for group theory prerequisites in algebraic topology, while i want the topology prerequisites. </p> <p>I am a physics undergrad, and I wish to take up a course on Introduction to Algebraic Topology for the next sem, which basically teaches the first two chapters of Hatcher, on Fundamental Group and Homology. However, I don't have a formal mathematics background in point-set topology, and I don't have enough time to go though whole books such as Munkres. So What part of point set topology from Munkres is actually used in the first two chapters of Hatcher?</p> <p>More importantly, I wanted to know if the first chapter of the book <a href="http://rads.stackoverflow.com/amzn/click/1441972536">Topology, Geometry and Gauge Fields by Naber</a> or first 2 chapters of Lee's Topological Manifolds would be sufficient to provide me the necessary background for Hatcher.</p> <p>Thanks in advance!</p>
Gaston Burrull
31,167
<p>I prefer Munkres over all topology books.</p> <p>You might start with Munkres chapter 2, then read chapters 3, 4, 7 (without the " * " sections), but if you have enough time it is not a bad idea to read all of the first part: chapters 1-8 (long but fun). </p> <p>I think that chapter 1 is good for you; it is an intuitive approach to set theory. Since you are a physicist you probably won't like going too deeply into sets, so if you don't have time, skip it.</p> <p>But my biggest advice is not to worry about taking the course too quickly if you don't feel safe. I was a physicist.</p>
114,733
<p>Say you have the half-plane $\{z\in\mathbb{C}:\Re(z)&gt;0\}$. Is there a rigorous explanation why the transformation $w=\dfrac{z-1}{z+1}$ maps the half plane onto $|w|&lt;1$?</p>
Adam
44,654
<p>(This is more comment than answer, but I can't add images in comments.) <img src="https://i.stack.imgur.com/8LsvT.png" alt="Image of 2 maps"></p> <p>Here is the <a href="http://commons.wikimedia.org/wiki/File:Conformal_mapping_from_right_half_plane_to_unit_circle.svg" rel="noreferrer">image and source code</a> </p>
3,522,736
<p>I've been messing around with trying to negate this statement using DeMorgans laws and I keep ending up with incorrect answers such as (~p or ~q) and ~r. If someone could help me with the negation of compound statements. </p> <p>Thank you.</p>
Tsemo Aristide
280,301
<p>Since <span class="math-container">$f'(x)={{-1}\over{(x+1)^2}}$</span> is never <span class="math-container">$0$</span>, we deduce that <span class="math-container">$f$</span> cannot have an extremum, since the derivative is zero at an extremum.</p>
70,146
<p>I'm trying to use an image as a <code>ChartLabel</code> and I'm getting strange results.</p> <p>Here is a bar chart, with labels, that looks ok:</p> <p><img src="https://i.stack.imgur.com/YmRQp.png" alt="chart ok"></p> <p>But when I try to replace the "A" label with an image, the output is confusing:</p> <p><img src="https://i.stack.imgur.com/7XT0u.png" alt="chart busted"></p> <p>Specifically, the image overlaps the plot and is scaled weirdly.</p> <p>I'd like it to be small, and centered, as the "A" label is in the image above.</p> <p>What's the right way to use an image as a <code>ChartLabel</code>?</p>
kglr
125
<p><strong>Update:</strong> The option <a href="https://reference.wolfram.com/language/ref/LabelingSize.html" rel="nofollow noreferrer"><code>LabelingSize</code></a> provides a convenient way to size the images.</p> <pre><code>images = ExampleData[{"TestImage", #}] &amp; /@ {"Lena", "Elaine", "Mandrill"}; is = 60; bs = .2; BarChart[Thread[Labeled[{1, 2, 3}, images, Below]], BarSpacing -&gt; bs, LabelingSize -&gt; is (1 - bs), ImageSize -&gt; 1 -&gt; is] </code></pre> <p><a href="https://i.stack.imgur.com/Prltl.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Prltl.png" alt="enter image description here"></a></p> <p>With <code>is = 120</code> and <code>bs = .3</code> we get:</p> <p><a href="https://i.stack.imgur.com/rfrS8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rfrS8.png" alt="enter image description here"></a></p> <p><strong>Original answer:</strong></p> <pre><code>images = ExampleData[{"TestImage", #}] &amp; /@ {"Lena", "Elaine", "Mandrill"}; BarChart[{1, 2, 3}, ChartLabels -&gt; Placed[Thumbnail[#, Tiny] &amp; /@ images, Axis, Panel[#, FrameMargins -&gt; 0] &amp;]] </code></pre> <p><img src="https://i.stack.imgur.com/aLOn7.png" alt="enter image description here"></p> <p>Alternatively, you could use <code>Magnify</code> instead of <code>Thumbnail</code>:</p> <pre><code>BarChart[{1, 2, 3}, ChartLabels -&gt; Placed[images, Axis, Framed[Magnify[#, .3], FrameStyle -&gt; None] &amp;]] </code></pre> <p><img src="https://i.stack.imgur.com/YveTY.png" alt="enter image description here"></p> <p>Few more alternatives that produce similar pictures:</p> <pre><code>BarChart[{1, 2, 3}, ChartLabels -&gt; (Framed[Magnify[#, .3], FrameStyle-&gt;None] &amp; /@ images)] BarChart[{1, 2, 3}, ChartLabels -&gt; Placed[Pane[Magnify[#, .1]] &amp; /@ images, Axis]] BarChart[{1, 2, 3}, ChartLabels-&gt;Placed[Framed[Magnify[#, .3], FrameStyle -&gt; None]&amp; /@ images, Axis]] </code></pre>
1,610,700
<blockquote> <p>$$\int \frac{x-3}{\sqrt{1-x^2}} \mathrm dx$$</p> </blockquote> <p>I know that $\int \frac{1}{\sqrt{1-x^2}}\mathrm dx=\arcsin(x)$ but how can I continue from here? </p>
Harish Chandra Rajpoot
210,295
<p>Notice, $$\int \frac{x-3}{\sqrt {1-x^2}}\ dx$$$$=\int \frac{x}{\sqrt {1-x^2}}\ dx-\int \frac{3}{\sqrt {1-x^2}}\ dx$$ $$=-\frac{1}{2}\int \frac{(-2x)}{\sqrt {1-x^2}}\ dx-3\int \frac{1}{\sqrt {1-x^2}}\ dx$$ $$=-\frac{1}{2}\int (1-x^2)^{-1/2}\ d(1-x^2)-3\int \frac{1}{\sqrt {1-x^2}}\ dx$$ $$=-\frac{1}{2}\frac{(1-x^2)^{1/2}}{1/2}-3\sin^{-1}(x)+C$$ $$=-\sqrt{1-x^2}-3\sin^{-1}(x)+C$$</p>
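A quick numerical verification (my own check) that the antiderivative found above differentiates back to the integrand on $(-1,1)$:

```python
import math

def F(x):
    """Antiderivative found above: -sqrt(1 - x^2) - 3*arcsin(x) (constant dropped)."""
    return -math.sqrt(1 - x * x) - 3 * math.asin(x)

def integrand(x):
    return (x - 3) / math.sqrt(1 - x * x)

def dF(x, h=1e-6):
    """Central finite-difference approximation of F'."""
    return (F(x + h) - F(x - h)) / (2 * h)

points = [-0.9, -0.5, 0.0, 0.3, 0.8]
errors = [abs(dF(x) - integrand(x)) for x in points]
```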
864,920
<p>The matrices $\begin{pmatrix}1&amp;2\\0&amp;1\end{pmatrix}$ and $\begin{pmatrix}1&amp;0\\2&amp;1\end{pmatrix}$ are well-known to (freely) generate a free group. Some years ago, I read a paper that, if memory serves, sort of summarised what was known about generalisations to pairs of the form $\begin{pmatrix}1&amp;\lambda\\0&amp;1\end{pmatrix}$ and $\begin{pmatrix}1&amp;0\\\lambda&amp;1\end{pmatrix}$. (Or perhaps even more general such pairs.) I recall that there were results there describing regions in $\mathbb{C}$ from which $\lambda$ chould be chosen so that such a pair freely generated a free group, and there were plots of these regions (at least one, anyway). Unfortunately, I cannot recall the title or author(s) of the paper, and Googling didn't turn up anything that looked familiar.</p> <blockquote> <p>Does anyone recall such a paper and be able to provide a reference.</p> </blockquote> <p>(I did find a 1967 paper by Lyndon and Ullman, but that wasn't it, and I think the one I'm after was later, and probably described further progress on the question, probably with its own new results.)</p>
Andreas Caranti
58,401
<p>There's a <strong><em>1969</em></strong> paper by Lyndon and Ullman that appears to be it, plots and all.</p> <blockquote> <p>R. C. Lyndon and J. L. Ullman, <em>Groups generated by two parabolic linear fractional transformations.</em> Canad. J. Math. <strong>21</strong> (1969) 1388-1403</p> </blockquote> <p>The <a href="http://cms.math.ca/cjm/v21/cjm1969v21.1388-1403.pdf">article</a> itself is available online, but I'm not sure whether it's free, or I'm seeing it because of my University's subscription. (Most likely the former applies.)</p>
864,920
<p>The matrices $\begin{pmatrix}1&amp;2\\0&amp;1\end{pmatrix}$ and $\begin{pmatrix}1&amp;0\\2&amp;1\end{pmatrix}$ are well-known to (freely) generate a free group. Some years ago, I read a paper that, if memory serves, sort of summarised what was known about generalisations to pairs of the form $\begin{pmatrix}1&amp;\lambda\\0&amp;1\end{pmatrix}$ and $\begin{pmatrix}1&amp;0\\\lambda&amp;1\end{pmatrix}$. (Or perhaps even more general such pairs.) I recall that there were results there describing regions in $\mathbb{C}$ from which $\lambda$ chould be chosen so that such a pair freely generated a free group, and there were plots of these regions (at least one, anyway). Unfortunately, I cannot recall the title or author(s) of the paper, and Googling didn't turn up anything that looked familiar.</p> <blockquote> <p>Does anyone recall such a paper and be able to provide a reference.</p> </blockquote> <p>(I did find a 1967 paper by Lyndon and Ullman, but that wasn't it, and I think the one I'm after was later, and probably described further progress on the question, probably with its own new results.)</p>
i. m. soloveichik
32,940
<p>FREE AND NONFREE SUBGROUPS OF $PSL_2(\mathbf{C})$ GENERATED BY TWO PARABOLIC ELEMENTS by Ju A Ignatov</p> <p>1979 Math. USSR Sb. 35 49. doi:10.1070/SM1979v035n01ABEH001449</p>
2,753,548
<p>Let $F$ be a field with $7^5$ elements. $$X=\{a^7-b^7 \mid a,b \in F\}$$ I have no idea how to solve. Please help me.</p>
Dietrich Burde
83,966
<p>Since $a^7-b^7=(a-b)^7$ in $F$, and $x\mapsto x^7$ (the Frobenius) is an automorphism of $F$, we see that $X=F$.</p>
3,279,544
<blockquote> <p><span class="math-container">$50$</span> fighters are standing around in a circle. Every fighter may choose to fight with the person on its left or its right with equal probability. One person may be chosen by two different persons and another may not be chosen at all. Can you expect how many fighters were not chosen? Explain.</p> </blockquote> <p>I started trying with <span class="math-container">$4,6,8$</span> fighters to see if I could end up figuring out a pattern. And noticed that if <span class="math-container">$X$</span> represents the random variable associated with the number of people not selected then <span class="math-container">$0 \leq X \leq 25$</span>. Also one of the things I've noticed was that <span class="math-container">$P(n)=P(25-n)$</span> which may help through solving this problem. Also I know that this is an expected value probability problem, and that I have to figure out <span class="math-container">$P(X=n)$</span>, for <span class="math-container">$n$</span> going from <span class="math-container">$0$</span> to <span class="math-container">$25$</span>.</p> <p>Unfortunately, I couldn't figure it out, so can anyone help?</p>
drhab
75,923
<p><strong>Guide</strong>.</p> <p>Number the fighters and let <span class="math-container">$X_i$</span> take value <span class="math-container">$1$</span> if fighter <span class="math-container">$i$</span> is not chosen and <span class="math-container">$0$</span> if he was chosen. </p> <p>If <span class="math-container">$X$</span> denotes the number of fighters not chosen then:<span class="math-container">$$X=X_1+\cdots+X_{50}$$</span></p> <p>Now find the expectation of <span class="math-container">$X$</span> by applying linearity of expectation and symmetry.</p>
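A simulation sketch in the spirit of the guide (my own code; the trial count and seed are arbitrary choices), estimating $E[X]$ by averaging $X = X_1 + \cdots + X_{50}$ over many random rounds:

```python
import random

def simulate(n_fighters=50, trials=50_000, seed=7):
    rng = random.Random(seed)
    total_unchosen = 0
    for _ in range(trials):
        chosen = [False] * n_fighters
        for i in range(n_fighters):
            # fighter i picks the left or right neighbour with probability 1/2
            j = (i - 1) % n_fighters if rng.random() < 0.5 else (i + 1) % n_fighters
            chosen[j] = True
        total_unchosen += chosen.count(False)
    return total_unchosen / trials

mean_unchosen = simulate()
```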
135,936
<p>I need this one result to do a problem correctly.</p> <p>I want to show that for any $b \in \mathbb{C}$ and $z$ a complex variable:</p> <p>$$ |z^2 + b^2| \geq |z|^{2} - |b|^{2}$$ </p> <p>My attempts have only led me to conclude that </p> <p>$$ |z^2 + b^2| &gt; \frac{|z|^{2} + |b|^{2}}{2}$$ </p>
agt
6,752
<p>The desired inequality is an instance of the reverse triangle inequality (the Lipschitz property of the distance).</p> <p>Concretely, $$||z_1|-|z_2||\leq|z_1-z_2|.$$ This follows from the triangle inequality $$|z_1|\leq|z_1-z_2|+|z_2|$$ together with the same estimate after exchanging $z_1$ and $z_2$. Applying it with $z_1=z^2$ and $z_2=-b^2$ gives $$|z^2+b^2|=|z^2-(-b^2)|\geq|z^2|-|b^2|=|z|^2-|b|^2.$$</p>
3,419,620
<p>I'm trying to solve the equation <span class="math-container">$$(z+i)^2=(\sqrt3+i)^3$$</span> but I don't know how to extract the roots <span class="math-container">$$(z+i)^2=(\sqrt3+i)^3 \rightarrow (z+i)^2=8i \rightarrow z^2+(2i)z-(8i+1)=0$$</span></p> <p><span class="math-container">$z_{1,2}=-i \pm \sqrt{8i}$</span>.</p> <p>According to my book the solutions are <span class="math-container">$2+i$</span> and <span class="math-container">$-2-3i$</span> and I think that they are another way to write them but I don't know how to have them. Can someone help me to understand?</p>
user
505,767
<p>We have that</p> <p><span class="math-container">$$(z+i)^2=(\sqrt3+i)^3=8i=8e^{i\left(\frac \pi 2+2k\pi\right)}\implies z+i=2\sqrt 2e^{i\left(\frac \pi 4+k\pi\right)}$$</span></p> <p>for <span class="math-container">$k=0,1$</span> that is</p> <ul> <li><span class="math-container">$z+i=2\sqrt 2e^{i\frac \pi 4}=2+2i \implies z=2+i$</span></li> <li><span class="math-container">$z+i=2\sqrt 2e^{i\frac {5\pi} 4}=-2-2i \implies z=-2-3i$</span></li> </ul>
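A quick floating-point check of both roots (nothing deep, just confirming the arithmetic above):

```python
# verify that z = 2+i and z = -2-3i both satisfy (z + i)^2 = (sqrt(3) + i)^3
lhs = lambda z: (z + 1j) ** 2
rhs = (3 ** 0.5 + 1j) ** 3   # equals 8i, as derived above

for z in (2 + 1j, -2 - 3j):
    assert abs(lhs(z) - rhs) < 1e-9, z

assert abs(rhs - 8j) < 1e-9  # (sqrt(3) + i)^3 = 8i
print("both roots verified")
```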
3,419,620
<p>I'm trying to solve the equation <span class="math-container">$$(z+i)^2=(\sqrt3+i)^3$$</span> but I don't know how to extract the roots <span class="math-container">$$(z+i)^2=(\sqrt3+i)^3 \rightarrow (z+i)^2=8i \rightarrow z^2+(2i)z-(8i+1)=0$$</span></p> <p><span class="math-container">$z_{1,2}=-i \pm \sqrt{8i}$</span>.</p> <p>According to my book the solutions are <span class="math-container">$2+i$</span> and <span class="math-container">$-2-3i$</span> and I think that they are another way to write them but I don't know how to have them. Can someone help me to understand?</p>
J. W. Tanner
615,567
<p>If you note <span class="math-container">$(2+2i)^2=8i$</span>, you'll see your answer matches the book's.</p>
3,419,620
<p>I'm trying to solve the equation <span class="math-container">$$(z+i)^2=(\sqrt3+i)^3$$</span> but I don't know how to extract the roots <span class="math-container">$$(z+i)^2=(\sqrt3+i)^3 \rightarrow (z+i)^2=8i \rightarrow z^2+(2i)z-(8i+1)=0$$</span></p> <p><span class="math-container">$z_{1,2}=-i \pm \sqrt{8i}$</span>.</p> <p>According to my book the solutions are <span class="math-container">$2+i$</span> and <span class="math-container">$-2-3i$</span> and I think that they are another way to write them but I don't know how to have them. Can someone help me to understand?</p>
fleablood
280,126
<p>You have <span class="math-container">$z_{1,2} = -i + K$</span> where <span class="math-container">$K^2 = 8i$</span>.</p> <p>So find <span class="math-container">$K$</span> where <span class="math-container">$K^2 = 8i$</span>.....</p> <p>Let <span class="math-container">$K = a+bi$</span> so <span class="math-container">$K^2 = (a^2 -b^2) + 2abi = 0 + 8i$</span> so <span class="math-container">$a^2-b^2 =0$</span> and <span class="math-container">$2ab=8$</span></p> <p>So solve for <span class="math-container">$a$</span> and <span class="math-container">$b$</span>.....</p> <p><span class="math-container">$a^2 - b^2 =0$</span> means <span class="math-container">$a=\pm b$</span> and <span class="math-container">$2ab = 8$</span> means <span class="math-container">$ab=4$</span> means <span class="math-container">$a$</span> and <span class="math-container">$b$</span> are both the same sign. So <span class="math-container">$a=b$</span> and <span class="math-container">$ab =a^2=4$</span> so <span class="math-container">$a =\pm 2$</span> and <span class="math-container">$b=\pm 2$</span>.</p> <p>So <span class="math-container">$K = \pm (2 + 2i)$</span> and <span class="math-container">$z_{1,2}=-i \pm(2+2i) = \begin{cases}2+i\\-2-3i\end{cases}$</span>.</p>
351,850
<blockquote> <p>A subset $E$ contained in $\mathbb{R}^n$ is such that the function $x \mapsto \left\Vert x\right\Vert^2$ is uniformly continuous on $E$. For $r &gt; 0$, let $E_r$ denote the union of all open balls of radius $r$ contained in $E$. Prove that $E_r$ is bounded for all $r &gt; 0$. Find an example showing that $E$ itself does not have to be bounded.</p> </blockquote> <p>I have been working on this one for a while and I seem to be stumped. I know what the definitions are, but I'm having trouble getting started on this problem. </p> <p>Thanks</p>
Jyrki Lahtonen
11,619
<p>The function $f$ is uniformly continuous on the subset $E_r$ as well. </p> <p>If contrariwise $E_r$ is unbounded for some $r&gt;0$, then there is a sequence of vectors $x_n$ such that $\Vert x_n\Vert\to\infty$, and $B(x_n,r)\subseteq E$. For all $\delta\in(0,r)$ both $x_n$ and $x'_n(\delta)=x_n(1+\delta/(2\Vert x_n\Vert))$ are then in $E_r$, because $\Vert x_n-x'_n(\delta)\Vert=\delta/2&lt;r$. But $$ f(x'_n(\delta))=f(x_n)\left(1+\frac{\delta}{2\Vert x_n\Vert}\right)^2, $$ so $$ f(x'_n(\delta))-f(x_n)\ge f(x_n)\frac{\delta}{2\Vert x_n\Vert}=\frac{\delta}2 \Vert x_n\Vert. $$ This set of differences of values of $f$ is unbounded even though the arguments are within $\delta$ of each other, violating the assumption that the restriction of $f$ to $E_r$ is uniformly continuous.</p> <p>For an example of unbounded $E$ I proffer $E=\mathbb{Z}\subset\mathbb{R}$ (map this onto the $x$-axis, if you want this to work for any $n$). There are no distinct points within distance $&lt;1$ of each other, so uniform continuity of any function is automatic. Yet $E$ is unbounded.</p>
14,385
<p>I have always taught my students that the <span class="math-container">$y$</span>-intercept of a line is the <span class="math-container">$y$</span>-coordinate of the point of intersection of a line with the <span class="math-container">$y$</span>-axis, that is, for the line given by the equation <span class="math-container">$y=mx+y_0$</span>, the <span class="math-container">$y$</span>-intercept is <span class="math-container">$y_0$</span>. I emphasize that that the <span class="math-container">$y$</span>-intercept is the <em>number</em> <span class="math-container">$y_0$</span> and not the <em>point</em> <span class="math-container">$(0,y_0)$</span>.</p> <p>But I was quite surprised when I recently looked at the <a href="https://en.wikipedia.org/wiki/Intercept" rel="nofollow noreferrer">Wikipedia</a> and <a href="http://mathworld.wolfram.com/x-Intercept.html" rel="nofollow noreferrer">Wolfram</a> <a href="http://mathworld.wolfram.com/y-Intercept.html" rel="nofollow noreferrer">MathWorld</a> entries for <span class="math-container">$y$</span>-intercept because these define the intercept as a point and not as a number (&quot;the point where a line crosses the y-axis&quot; and &quot;The point at which a curve or function crosses the y-axis&quot;).</p> <p>Further investigation yielded inconsistencies: the Wikipedia entry for &quot;<a href="https://en.wikipedia.org/wiki/Line_(geometry)#On_the_Cartesian_plane" rel="nofollow noreferrer">Line (geometry)</a>&quot; states that in the equation <span class="math-container">$y=mx+b$</span>, &quot;<span class="math-container">$b$</span> is the y-intercept of the line&quot;; the Wolfram MathWorld entry for &quot;<a href="http://mathworld.wolfram.com/Line.html" rel="nofollow noreferrer">Line</a>&quot; states that &quot;The line with <span class="math-container">$y$</span>-intercept <span class="math-container">$b$</span> and slope <span class="math-container">$m$</span> is given by the slope-intercept form <span 
class="math-container">$y=mx+b$</span>.</p> <hr /> <p><sup>Edit made on February 21, 2021</sup></p> <p>According to the <em>Dictionary of Analysis, Calculus, and Differential Equations</em> (edited by Douglas N. Clark, published by CRC Press in 2000),</p> <blockquote> <p><strong>intercept</strong> The point(s) where a curve or graph of a function in <span class="math-container">$\mathbf R^n$</span> crosses one of the axes. For the graph of <span class="math-container">$y=f(x)$</span> in <span class="math-container">$\mathbf R^2$</span>, the <span class="math-container">$y$</span>-<em>intercept</em> is the point <span class="math-container">$(0,f(0))$</span> and the <span class="math-container">$x$</span>-<em>intercepts</em> are the points <span class="math-container">$(p,f(p))$</span> such that <span class="math-container">$f(p)=0$</span>.</p> </blockquote> <p>Unfortunately, the book does not consistently use that definition.</p> <blockquote> <p><strong>slope-intercept equation of line</strong> An equation of the form <span class="math-container">$y=mx+b$</span>, for a straight line in <span class="math-container">$\mathbf R^2$</span>. Here <span class="math-container">$m$</span> is the slope of the line and <span class="math-container">$b$</span> is the <span class="math-container">$y$</span>-intercept; that is, <span class="math-container">$y=b$</span>, when <span class="math-container">$x=0$</span>.</p> </blockquote> <p>Thus, even though the book defines an intercept as a point, it uses the term to denote a number.</p> <hr /> <p>Is there a trusted source targeted at mathematics educators (from, say, a government agency, an educational institution, or an organization) that defines &quot;intercept&quot; and consistently uses that definition?</p>
Chickenmancer
10,192
<p>It comes down to convention, ultimately, since specifying the unique value $y_0$ associated to $x_0$ is equivalent to specifying the ordered pair $(x_0,y_0)$ on the graph of $y=f(x)$. That is, because a function is single-valued by definition, there is no ambiguity if one requires that the intercept "$y_0$" be associated to the given input "$x_0$".</p>
3,465,945
<p>Prove that <span class="math-container">$\inf f(A) \leq f( \inf A)$</span> if <span class="math-container">$f: [-\infty, + \infty] \to \mathbb{R}$</span> is continuous and <span class="math-container">$A \neq \emptyset$</span> is a subset of <span class="math-container">$\mathbb{R}$</span>.</p> <p>Attempt;</p> <p>Put <span class="math-container">$a:= \inf A$</span>. Choose a sequence in <span class="math-container">$A$</span> such that <span class="math-container">$a_n \to a$</span>. Then</p> <p><span class="math-container">$$ \inf f(A)\leq\lim_{n \to \infty} \underbrace{f(a_n)}_{\geq \inf f(A)} = f(a) = f( \inf A)$$</span></p> <p>and we can conclude.</p> <p>Is this correct?</p>
copper.hat
27,978
<p>You have <span class="math-container">$\inf_{x \in A} f(x) \le f(y)$</span> for all <span class="math-container">$y \in A$</span> by definition. Let <span class="math-container">$y = \inf A$</span> to finish (if <span class="math-container">$\inf A \notin A$</span>, pass to the limit along a sequence in <span class="math-container">$A$</span> converging to <span class="math-container">$\inf A$</span>, using continuity, exactly as in your attempt).</p>
1,645,232
<blockquote> <p>Describe all vectors $v = \pmatrix{x\\y}$ that are orthogonal to $u = \pmatrix{a\\b}$.</p> </blockquote> <p>I know that vectors that are orthogonal will have a dot product of 0. So here's what I was thinking: \begin{align*} ax + by &amp;= 0\\ yb &amp;= -ax\\ y &amp;= -ax/b \end{align*}</p> <p>I then looked up the answer to check if I was right, and the solution says:</p> <blockquote> <p>$v$ is of the form $k\pmatrix{b\\-a}$, where $k$ is a scalar.</p> </blockquote> <p>Can anyone help me to understand how they came up with this answer?</p>
Travis Willse
155,629
<p>Note that your method implicitly assumes $b \neq 0$. Here's a more conceptual way to approach the problem, which avoids that kind of assumption, and which leads directly to the answer in the text:</p> <p><strong>Hint</strong> The map ${\bf u}^{\flat} : \Bbb R^2 \to \Bbb R$ defined by $${\bf u}^{\flat} : {\bf v} \mapsto {\bf u} \cdot {\bf v}$$ is nonzero (provided ${\bf u} \neq {\bf 0}$), so it has rank $1$. So, its kernel, which by construction is the set of vectors orthogonal to $\bf u$, $$\ker {\bf u}^{\flat} = \{{\bf v} : {\bf u} \cdot {\bf v} = 0\} ,$$ is a subspace of $\Bbb R^2$ of dimension $1$.</p> <blockquote class="spoiler"> <p>Thus, this set is spanned by any nonzero element it contains. Computing gives $${\bf u} \cdot \pmatrix{b\\-a} = \pmatrix{b\\-a} \cdot \pmatrix{a\\b} = 0,$$ and so the vectors orthogonal to $\bf u$ are precisely the vectors $$k \pmatrix{b\\-a}, \qquad k \in \Bbb R .$$</p> </blockquote>
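For a concrete sanity check of the conclusion (sample values chosen arbitrarily, nothing canonical about them):

```python
# For u = (a, b), every k*(b, -a) is orthogonal to u; for b != 0 this
# agrees with the pointwise formula y = -a*x/b from the question.
def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

a, b = 3.0, 4.0
for k in (-2.0, 0.5, 7.0):
    v = (k * b, k * -a)
    assert dot((a, b), v) == 0.0
    x, y = v
    # consistency with the slope form (valid here since b != 0)
    assert abs(y - (-a * x / b)) < 1e-12
print("all multiples of (b, -a) are orthogonal to (a, b)")
```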
3,191,345
<p>Evaluate the following definite integral : <span class="math-container">$$\int_0^{\pi/2}\cfrac{\cos x}{\sqrt{1-\sin x}} \qquad \qquad \qquad (1)$$</span> </p> <p><span class="math-container">\begin{align} &amp; = \int_0^{\pi/2}\cfrac{\cos x}{\sqrt{1-\sin x}} \ \ I \ used \ u=1-\sin x \ and \ dx= \cfrac{-du}{cosx} \\ &amp; = -\int_1^0\cfrac{du}{\sqrt u} \\ &amp; = \int_0^1\cfrac{du}{\sqrt u} \\ &amp; = 2\sqrt u |_0^1 \\ &amp; = 2-0 =2 \\ \end{align}</span> But Symbolab says that is 0, what i have done wrong in (1) ?</p>
Jesús Álvarez Lobo
632,320
<p>What you submitted is correct. The error is on Symbolab's side; I do not know the cause. Check carefully how you entered the data in Symbolab. It could also come from a bug in the program.</p>
3,191,345
<p>Evaluate the following definite integral : <span class="math-container">$$\int_0^{\pi/2}\cfrac{\cos x}{\sqrt{1-\sin x}} \qquad \qquad \qquad (1)$$</span> </p> <p><span class="math-container">\begin{align} &amp; = \int_0^{\pi/2}\cfrac{\cos x}{\sqrt{1-\sin x}} \ \ I \ used \ u=1-\sin x \ and \ dx= \cfrac{-du}{cosx} \\ &amp; = -\int_1^0\cfrac{du}{\sqrt u} \\ &amp; = \int_0^1\cfrac{du}{\sqrt u} \\ &amp; = 2\sqrt u |_0^1 \\ &amp; = 2-0 =2 \\ \end{align}</span> But Symbolab says that is 0, what i have done wrong in (1) ?</p>
Community
-1
<p>I put it into symbolab and I got the correct answer!</p> <p><a href="https://i.stack.imgur.com/hR7RV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hR7RV.png" alt="enter image description here"></a></p>
3,191,345
<p>Evaluate the following definite integral : <span class="math-container">$$\int_0^{\pi/2}\cfrac{\cos x}{\sqrt{1-\sin x}} \qquad \qquad \qquad (1)$$</span> </p> <p><span class="math-container">\begin{align} &amp; = \int_0^{\pi/2}\cfrac{\cos x}{\sqrt{1-\sin x}} \ \ I \ used \ u=1-\sin x \ and \ dx= \cfrac{-du}{cosx} \\ &amp; = -\int_1^0\cfrac{du}{\sqrt u} \\ &amp; = \int_0^1\cfrac{du}{\sqrt u} \\ &amp; = 2\sqrt u |_0^1 \\ &amp; = 2-0 =2 \\ \end{align}</span> But Symbolab says that is 0, what i have done wrong in (1) ?</p>
user0102
322,814
<p>Make the substitution <span class="math-container">$w = 1 - \sin(x)$</span>. Then you get <span class="math-container">$\mathrm{d}w = -\cos(x)\mathrm{d}x$</span>. Thus we have <span class="math-container">\begin{align*} \int_{0}^{\pi/2}\frac{\cos(x)}{\sqrt{1-\sin(x)}}\mathrm{d}x = -\int_{1}^{0}\frac{\mathrm{d}w}{\sqrt{w}} = \int_{0}^{1}\frac{\mathrm{d}w}{\sqrt{w}} = 2\sqrt{w}\,\,\biggr|_{0}^{1} = 2 \end{align*}</span></p>
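For an independent numerical confirmation, a plain midpoint rule works; the midpoints avoid the removable $0/0$ at $x=\pi/2$ (where the integrand tends to $\sqrt{2}$, since $\cos x/\sqrt{1-\sin x}=\sqrt{1+\sin x}$ on $[0,\pi/2)$):

```python
import math

def midpoint_integral(f, a, b, n=100000):
    # composite midpoint rule: sample each subinterval at its centre,
    # so neither endpoint a nor b is ever evaluated
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

f = lambda x: math.cos(x) / math.sqrt(1.0 - math.sin(x))
val = midpoint_integral(f, 0.0, math.pi / 2)
print(val)  # close to 2
```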
672,736
<p>Let $A = \begin{bmatrix}1&amp;2&amp;1\\0&amp;1&amp;0\\1&amp;3&amp;1\end{bmatrix}$. Find the eigenvalues of $A$.</p> <p>I think I got a pretty steady ground on how I approached this, I just have some difficulty getting the right answer.</p> <p>What I have done so far:</p> <p>$P(\lambda) = det(A - \lambda I)$</p> <p>$det\begin{bmatrix}1-\lambda&amp;2&amp;1\\0&amp;1-\lambda&amp;0\\1&amp;3&amp;1-\lambda\end{bmatrix} = 0$</p> <p>$=(1-\lambda)(1-\lambda)^2 - 2(0) + 1(1-\lambda) = 0$</p> <p>$= (1- \lambda) ^3 +(1-\lambda) = 0$</p> <p>But I'm not getting the right eigenvalues. The above answer gives me the eigenvalue: 1 only.</p> <p>but the right answer is: 2, 1, 0.</p>
gt6989b
16,192
<p>You did the determinant wrong; the second term is incorrect, as @FH93 hinted.</p> <p>Another way is to expand where there are the most zeros, which saves you work. Let's go by the 2nd row (the cofactor sign of the $(2,2)$ entry is $(-1)^{2+2}=+1$): $$ \det \begin{bmatrix}1-\lambda&amp;2&amp;1\\0&amp;1-\lambda&amp;0\\1&amp;3&amp;1-\lambda\end{bmatrix} = (1-\lambda)\left[(1-\lambda)^2-1\right] = 0 $$ which you can take from here...</p>
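To double-check the roots, you can evaluate $\det(A-\lambda I)$ directly with the explicit $3\times 3$ cofactor formula (no libraries needed):

```python
def det3(m):
    # cofactor expansion of a 3x3 determinant along the first row
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

A = [[1, 2, 1], [0, 1, 0], [1, 3, 1]]

def char_poly(lam):
    # det(A - lam*I), evaluated exactly in integers
    return det3([[A[i][j] - (lam if i == j else 0) for j in range(3)]
                 for i in range(3)])

assert [char_poly(lam) for lam in (0, 1, 2)] == [0, 0, 0]
assert char_poly(3) != 0  # 3 is not an eigenvalue
print("eigenvalues 0, 1, 2 confirmed")
```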
33,215
<p>There is a huge debate on the internet on the value of <span class="math-container">$48\div2(9+3)$</span>.</p> <p>I believe the answer <span class="math-container">$2$</span> as I believe it is part of the bracket operation in BEDMAS. <a href="https://www.mathway.com" rel="nofollow noreferrer">Mathway</a> yields the same answer. I also believe that if <span class="math-container">$48\div2\times(9+3)$</span> was asked it would be <span class="math-container">$288$</span> which Mathway agrees with as well.</p> <p>However, <a href="https://www.wolframalpha.com" rel="nofollow noreferrer">WolframAlpha</a> says it is <span class="math-container">$288$</span> either way.</p> <p>A friend of mine (who is better at math) told me that there is no such thing as 'implicit multiplication', only shorthand so that is in fact done after the division (going left to right, not necessarily because division occurs before multiplication. But he didn't explicitly give a reason)</p> <p>What is the answer and why?</p>
Michael Burge
5,468
<p>I would say it isn't even well-defined. In Group Theory or such, you usually pass by a statement that says &quot;associativity means that <span class="math-container">$(1 + 2) + 3$</span> is the same as <span class="math-container">$1 + (2+3)$</span>, so we can write <span class="math-container">$1 + 2 + 3$</span> without ambiguity.&quot; <span class="math-container">$\div$</span> doesn't have this property of being unambiguous.</p> <p>This is one of the advantages of using <span class="math-container">$\frac{48}{2}(9+3)$</span> or <span class="math-container">$\frac{48}{2(9+3)}$</span> - it's not quite associative, but it isn't ambiguous. I haven't seen <span class="math-container">$\div$</span> since elementary school probably for this very reason.</p>
33,215
<p>There is a huge debate on the internet on the value of <span class="math-container">$48\div2(9+3)$</span>.</p> <p>I believe the answer <span class="math-container">$2$</span> as I believe it is part of the bracket operation in BEDMAS. <a href="https://www.mathway.com" rel="nofollow noreferrer">Mathway</a> yields the same answer. I also believe that if <span class="math-container">$48\div2\times(9+3)$</span> was asked it would be <span class="math-container">$288$</span> which Mathway agrees with as well.</p> <p>However, <a href="https://www.wolframalpha.com" rel="nofollow noreferrer">WolframAlpha</a> says it is <span class="math-container">$288$</span> either way.</p> <p>A friend of mine (who is better at math) told me that there is no such thing as 'implicit multiplication', only shorthand so that is in fact done after the division (going left to right, not necessarily because division occurs before multiplication. But he didn't explicitly give a reason)</p> <p>What is the answer and why?</p>
Gerry Myerson
8,269
<p>There is no Supreme Court for mathematical notation; there were no commandments handed down on Sinai concerning operational precedence; all there is, is convention, and different people are free to adhere to different conventions. Wise people will stick in enough parentheses to make it impossible for anyone to mistake the meaning. If they mean, $(48\div2)(9+3)$, they'll write it that way; if they mean $48\div\bigl(2(9+3)\bigr)$, they'll write it that way. </p>
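For what it's worth, most programming languages settle the matter by decree: implicit multiplication does not exist, <code>/</code> and <code>*</code> share the same precedence, and evaluation runs left to right. In Python, for instance:

```python
# 48 / 2 * (9 + 3) parses as (48 / 2) * (9 + 3) -- strictly left to right
assert 48 / 2 * (9 + 3) == 288.0

# to get 2 you must write the grouping explicitly
assert 48 / (2 * (9 + 3)) == 2.0
```

Which is just the point above in executable form: the parentheses, not a convention, remove the ambiguity.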
272,846
<p>Suppose I have a List of numbers:</p> <pre><code>num = Range[5] </code></pre> <p>I want to combine the second and the third element into a sublist to get the result as {1,{2,3},4,5}.<br /> I tried using this:</p> <pre><code>MapAt[List, num, {{2}, {3}}] </code></pre> <p>which is not giving me the desired result. What changes are needed to be made?<br /> Can the same changes be applied to this code:</p> <pre><code>music = SoundNote[&quot;CSharp&quot;, 0.1, 0.2, &quot;Violin&quot;] </code></pre> <p>to get the result as SoundNote[CSharp,{0.1,0.2},Violin]?</p>
Nasser
70
<p>One possibility is to use <code>ReplacePart</code>.</p> <h2>Example 1</h2> <pre><code>lst = Range[5] idx = {2, 3}; remove = (# -&gt; Nothing) &amp; /@ idx[[2 ;; -1]] rep = First@idx -&gt; lst[[First@idx ;; Last@idx]]; ReplacePart[lst, {rep, Sequence @@ remove}] </code></pre> <p><img src="https://i.stack.imgur.com/dJmab.png" alt="Mathematica graphics" /></p> <h2>Example 2</h2> <pre><code>lst = Range[10] idx = {2, 3, 4}; remove = (# -&gt; Nothing) &amp; /@ idx[[2 ;; -1]]; rep = First@idx -&gt; lst[[First@idx ;; Last@idx]]; ReplacePart[lst, {rep, Sequence @@ remove}] </code></pre> <p><img src="https://i.stack.imgur.com/ssK5g.png" alt="Mathematica graphics" /></p> <p>This assumes, of course, that the elements to be combined into a list are sequential and have no gaps.</p>
177,574
<p>Fix $k \in \mathbb{N}$, $k \geq 1$. Let $p \in [0,1]$ and $x = (x_0, \ldots, x_k)$ be a $(k+1)$-dimensional <em>real</em> vector, and define $$S(p,x) = -x_0^2 + \sum_{i=0}^k {k \choose i} p^i (1 - p)^{k - i} \cdot (x_i - p)^2.$$ Experiments show that for small values of $k$ $$\exists x \in \mathbb{R}^{k+1} \,.\, \forall p \in [0,1] \,.\, S(p,x) = 0.$$ In other words, there are $x_i$'s such that $S(x,p)$ is identically zero as a polynomial in $p$.</p> <p>For a given $k$ we can expand $S(x,p)$ as a polynomial in $p$ and equate the coefficients to $0$. For $k = 2$ we get \begin{align*} 0&amp;=0 \\ -x_0^2-2 x_0+x_1^2&amp;=0 \\ 2 x_0-2 x_1+1&amp;=0 \\ \end{align*} and this has two solutions: $$x = (\frac{1}{2} (-1-\sqrt{2}),\frac{1}{2},\frac{1}{2} (3+\sqrt{2}))$$ and $$x = (\frac{1}{2} (-1+\sqrt{2}),\frac{1}{2},\frac{1}{2} (3-\sqrt{2})).$$ For $k = 1, 2, 3, 4, 5, 6, 7$ there are $1, 2, 4, 8, 14, 28, 48$ solutions respectively, according to Mathematica. <a href="https://oeis.org/search?q=1%2C%202%2C%204%2C%208%2C%2014%2C%2028%2C%2048" rel="nofollow">According to OEIS</a> this is <a href="https://oeis.org/A068912" rel="nofollow">A068912</a>, "the number of $n$ step walks (each step $\pm 1$ starting from $0$) which are never more than $3$ or less than $-3$." 
This is kind of interesting because the problem arises in statistics, see <a href="http://www.win-vector.com/blog/2014/07/frequenstist-inference-only-seems-easy/" rel="nofollow">John Mount's blog post</a> for background.</p> <p><strong>Question:</strong> Is there a solution for every $k$?</p> <p><strong>Addendum:</strong> John says he wants solutions in $[0,1]^{k+1}$...</p> <hr> <p>Here is the relevant Mathematica code:</p> <pre><code>s[k_, p_, x_] := Sum[Binomial[k, i] * p^i* (1 - p)^(k - i)* (Subscript[x, i] - p)^2, {i, 0, k}] - Subscript[x, 0]^2 xs[k_] := Table[Subscript[x, i], {i, 0, k}] system[k_, p_, x_] := Thread[CoefficientList[s[k, p, x], p] == 0] solutions[k_] := Solve[system[k, p, x], xs[k], Reals] </code></pre> <p>To see the system of equations for $k = 4$, type</p> <pre><code>system[4, p, x] // ColumnForm </code></pre> <p>To see the solutions for $k = 4$, type</p> <pre><code>solutions[4] </code></pre> <p>To make a table of counts of solutions up to $k = 7$, type</p> <pre><code>Table[{k, Length@solutions[k]}, {k, 1, 7}] // ColumnForm </code></pre>
John Mount
56,665
<p>Having trouble formatting. Here is a <a href="http://winvector.github.io/freq/explicitSolution.html" rel="nofollow">line of attack</a>. Also a <a href="http://winvector.github.io/freq/minimax.pdf" rel="nofollow">proof of the problem mapping</a>. Apparently I am both user 56-something and "John Mount" but have lost control of at least one of those accounts.</p>
475,005
<p>I want to check how many integers in $\big[1,10^6\big]$ include the digits $1,2,3,4,5$ and how many consist of only those digits.<br> How should I check it? Is this a problem of inclusion-exclusion? <br> I would like to get some advice!<br> Thanks!</p>
Alraxite
61,039
<p>Exclude $10^6$, and consider $[0,999999]$.</p> <p>For your first question:</p> <p>There are $10^6$ numbers in the interval $[0,999999]$. Each number in this interval can be thought of as having $6$ digits (So, $27$ would be $000027$). The numbers that do not satisfy the given property are entirely composed of $0,6,7,8,9$. So the total number of numbers to be excluded are $5^6$.</p> <p>For your second question:</p> <p>There are exactly $5$ one-digit numbers that satisfy this property. Similarly, there are $5^2$ two-digit numbers with this property. Continue and then sum all the numbers.</p> <p>Make sure to consider $10^6$ in the end.</p>
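Since a million numbers is nothing for a computer, both counts can be confirmed by brute force (plain Python; the interval is $[1,10^6]$ as in the question):

```python
digits_15 = set("12345")

at_least_one = 0   # numbers containing at least one of the digits 1..5
only_those = 0     # numbers written using only the digits 1..5

for n in range(1, 10**6 + 1):
    s = set(str(n))
    if s & digits_15:
        at_least_one += 1
    if s <= digits_15:
        only_those += 1

print(at_least_one, only_those)
# at_least_one == 10**6 - 5**6 + 1  (the "+1" is 10**6 itself, which contains a 1)
# only_those   == 5 + 5**2 + 5**3 + 5**4 + 5**5 + 5**6
```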
871,412
<p>$$I=\int_a^b \sin(\alpha-\beta x^2)\cos(x)\, dx.$$</p> <p>Can anybody tell me, how to solve this integral ? I know that this is related to <a href="http://www.it.uom.gr/teaching/linearalgebra/NumericalRecipiesInC/c6-9.pdf" rel="nofollow">Fresnel Integral</a> if the $\cos(x)$ term is absent. </p>
Claude Leibovici
82,404
<p>In almost the same spirit as user71352's answer, the antiderivative of $$I=\int \sin(\alpha-\beta x^2)\cos(x)\, dx$$ can be written as $$2 \sqrt{\frac{2\beta}{\pi}} I=$$ $$\sin \left(\alpha +\frac{1}{4 \beta }\right) \left(C\left(\frac{2 x \beta -1}{\sqrt{2 \pi\beta} }\right)+C\left(\frac{2 x \beta +1}{\sqrt{2 \pi \beta} }\right)\right)-\cos \left(\alpha +\frac{1}{4 \beta }\right) \left(S\left(\frac{2 x \beta -1}{\sqrt{2 \pi \beta} }\right)+S\left(\frac{2 x \beta +1}{\sqrt{2 \pi \beta} }\right)\right)$$ where $C$ and $S$ are the Fresnel integrals already mentioned by user71352.</p>
2,507,328
<p>A bit of a beginner question, but I've been told that there is a turning point between any 2 x-intercepts, for any polynomial of degree 2 or higher. That is true. But the controversy here is that apparently, it has to be exactly in the middle between the 2. I'm not talking about quadratics, where the only turning point is at $x = -b/2a$. I am talking about polynomials of higher degree. For example, $y= x^4−2x^2+x$. It seems that I am wrong, even for <em>most</em> polynomials: the turning point is usually not in the middle.</p> <p>1. Is this completely false? (That it has to be in the middle) Or is it the case for specific types of graphs?</p> <p>2. If it is, then what is the standard rule for finding the turning point between any 2 x-intercepts for any given polynomial?</p> <hr> <p>I am just in year 10, so please bear with me if this question is too simple.</p>
supinf
168,859
<p>First, it is easy to see that $(-\infty,q)$ is in the sigma algebra for rational $q$. Since the rational numbers are countable, you can do something like $$ (-\infty,x)=\bigcup_{q&lt;x, q\in\mathbb Q} (-\infty,q) $$</p>
1,107,317
<p>I've got this hypergeometric series</p> <p>$_2F_1 \left[ \begin{array}{ll} a &amp;-n \\ -a-n+1 &amp; \end{array} ; 1\right]$</p> <p>where $a,n&gt;0$ and $a,n\in \mathbb{N}$</p> <p>The problem is that $-a-n+1$ is negative in this case. So when I try to use Gauss's identity</p> <p>$_2F_1 \left[ \begin{array}{ll} a &amp; b \\ c &amp; \end{array} ; 1\right] = \dfrac{\Gamma(c-a-b)\Gamma(c)}{\Gamma(c-a)\Gamma(c-b)}$</p> <p>I give negative parameters to the $\Gamma$ function.</p> <p>What other identity can I use?</p> <p>I'm trying to find a closed form to this: $\sum _{i=0}^n \binom{a+i-1}{i} \binom{a-i+n-1}{n-i}$</p> <p>Wolfram Mathematica answered this as a closed form: $\frac{2^{-2 a} \Gamma \left(\frac{1}{2} (1-2 a)\right) \binom{a+n-1}{n} \Gamma (-a-n+1)}{\sqrt{\pi } \Gamma (-2 a-n+1)}$</p> <p>But I would like to have a manual solution with proof.</p> <p>This is how I got that hypergeometric series:</p> <p>$\sum _{i=0}^n \binom{a+i-1}{i} \binom{a-i+n-1}{n-i}$</p> <p>$\dfrac{t_{i+1}}{t_{i}} = \frac{\binom{a+i+1-1}{i+1} \binom{a-i+n-2}{-i+n-1}}{\binom{a+i-1}{i} \binom{a-i+n-1}{n-i}} = \frac{(a+i) (n-i)}{(i+1) (a-i+n-1)} = \frac{(a+i) (i-n)}{ (i-a-n+1)(i+1)}$</p> <p>$\sum _{i=0}^n \binom{a+i-1}{i} \binom{a-i+n-1}{n-i} = _2F_1 \left[ \begin{array}{ll} a &amp;-n \\ -a-n+1 &amp; \end{array} ; 1\right]$</p> <p><strong>UPDATE</strong></p> <p>Thanks to David H, I got closer to the solution.</p> <p>$\sum _{i=0}^n \binom{a+i-1}{i} \binom{a-i+n-1}{n-i} = _2F_1 \left[ \begin{array}{ll} a &amp;-n \\ -a-n+1 &amp; \end{array} ; 1\right]$</p> <p>$\lim\limits_{\epsilon \to0} \frac{\Gamma (-2 a-2 \epsilon +1) \Gamma (-a-n-\epsilon +1)}{\Gamma (-a-\epsilon +1) \Gamma (-2 a-n-2 \epsilon +1)} = \frac{4^{-a} \Gamma \left(\frac{1}{2}-a\right) \Gamma (-a-n+1)}{\sqrt{\pi } \Gamma (-2 a-n+1)}$</p> <p>As you can see this result is close to the expected $\frac{2^{-2 a} \Gamma \left(\frac{1}{2} (1-2 a)\right) \binom{a+n-1}{n} \Gamma (-a-n+1)}{\sqrt{\pi } \Gamma (-2 a-n+1)}$ formula. 
But the $\binom{a+n-1}{n}$ factor is still missing and I don't really understand why.</p>
David H
55,051
<p>Using <a href="http://functions.wolfram.com/HypergeometricFunctions/Hypergeometric2F1/03/06/05/0016/" rel="nofollow">this identity</a> to express the hypergeometric function as a Gegenbauer function, and <a href="http://functions.wolfram.com/HypergeometricFunctions/GegenbauerC3General/03/01/01/0002/" rel="nofollow">this identity</a> which gives the value of the Gegenbauer function evaluated at $1$, the hypergeometric function in question may then be expressed as a ratio of gamma functions whose arguments are each positive integers. These can be reorganized into binomial terms for a compact final expression: </p> <p>$$\begin{align} {_2F_1}{\left(a,-n;1-a-n;1\right)} &amp;=\frac{n!}{(a)_{n}}C_{n}^{a}{\left(1\right)}\\ &amp;=\frac{\Gamma{\left(n+1\right)}\,\Gamma{\left(a\right)}}{\Gamma{\left(a+n\right)}}\cdot\frac{\Gamma{\left(2a+n\right)}}{\Gamma{\left(2a\right)}\,\Gamma{\left(n+1\right)}}\\ &amp;=\frac{\Gamma{\left(a\right)}\,\Gamma{\left(2a+n\right)}}{\Gamma{\left(a+n\right)}\,\Gamma{\left(2a\right)}}\\ &amp;=\frac{\binom{2a+n-1}{2a-1}}{\binom{a+n-1}{a-1}}.\\ \end{align}$$</p> <p>There's likely a much more direct way to derive this without this absurd detour into exotic special functions, but maybe this response will tide you over until someone more knowledgeable in combinatorics comes around.
=)</p> <hr> <p><strong>Edit:</strong></p> <p>Here's a much nicer way of evaluating the sum using beta function machinery.</p> <p>$$\begin{align} s{(a,n)} &amp;=\sum_{k=0}^{n}\binom{a+k-1}{k}\binom{a+n-k-1}{n-k}\\ &amp;=\sum_{k=0}^{n}\frac{\Gamma{\left(a+k\right)}}{\Gamma{\left(k+1\right)}\,\Gamma{\left(a\right)}}\cdot\frac{\Gamma{\left(a+n-k\right)}}{\Gamma{\left(n-k+1\right)}\,\Gamma{\left(a\right)}}\\ &amp;=\frac{1}{\Gamma{\left(a\right)}^2}\sum_{k=0}^{n}\frac{\Gamma{\left(a+k\right)}}{\Gamma{\left(k+1\right)}}\cdot\frac{\Gamma{\left(a+n-k\right)}}{\Gamma{\left(n-k+1\right)}}\\ &amp;=\frac{\Gamma{\left(2a+n\right)}}{\Gamma{\left(a\right)}^2\,\Gamma{\left(n+1\right)}}\sum_{k=0}^{n}\frac{\Gamma{\left(n+1\right)}}{\Gamma{\left(k+1\right)}\,\Gamma{\left(n-k+1\right)}}\cdot\frac{\Gamma{\left(a+k\right)}\,\Gamma{\left(a+n-k\right)}}{\Gamma{\left(2a+n\right)}}\\ &amp;=\frac{\Gamma{\left(2a+n\right)}}{\Gamma{\left(a\right)}^2\,\Gamma{\left(n+1\right)}}\sum_{k=0}^{n}\binom{n}{k}\cdot\operatorname{B}{\left(a+k,a+n-k\right)}\\ &amp;=\frac{\Gamma{\left(2a+n\right)}}{\Gamma{\left(a\right)}^2\,\Gamma{\left(n+1\right)}}\sum_{k=0}^{n}\binom{n}{k}\int_{0}^{1}t^{a+k-1}(1-t)^{a+n-k-1}\,\mathrm{d}t\\ &amp;=\frac{\Gamma{\left(2a+n\right)}}{\Gamma{\left(a\right)}^2\,\Gamma{\left(n+1\right)}}\int_{0}^{1}t^{a-1}(1-t)^{a-1}\sum_{k=0}^{n}\binom{n}{k}t^{k}(1-t)^{n-k}\,\mathrm{d}t\\ &amp;=\frac{\Gamma{\left(2a+n\right)}}{\Gamma{\left(a\right)}^2\,\Gamma{\left(n+1\right)}}\int_{0}^{1}t^{a-1}(1-t)^{a-1}\cdot1\,\mathrm{d}t\\ &amp;=\frac{\Gamma{\left(2a+n\right)}}{\Gamma{\left(a\right)}^2\,\Gamma{\left(n+1\right)}}\operatorname{B}{\left(a,a\right)}\\ &amp;=\frac{\Gamma{\left(2a+n\right)}}{\Gamma{\left(2a\right)}\,\Gamma{\left(n+1\right)}}\\ &amp;=\binom{2a+n-1}{2a-1}.~~\blacksquare\\ \end{align}$$</p>
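The resulting closed form is easy to spot-check with exact integer arithmetic, for positive integer $a$ (using Python's <code>math.comb</code>, available in Python 3.8+):

```python
from math import comb

def lhs(a, n):
    # the original sum of products of binomial coefficients
    return sum(comb(a + k - 1, k) * comb(a + n - k - 1, n - k)
               for k in range(n + 1))

def rhs(a, n):
    # the closed form derived above
    return comb(2 * a + n - 1, 2 * a - 1)

for a in range(1, 8):
    for n in range(0, 10):
        assert lhs(a, n) == rhs(a, n), (a, n)
print("identity verified for all tested (a, n)")
```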
309,380
<p>Let me sum up my - hopefully correct - understanding of the <a href="https://en.wikipedia.org/wiki/Travelling_salesman_problem" rel="nofollow noreferrer">travelling salesman problem</a> and <a href="https://en.wikipedia.org/wiki/Complexity_class" rel="nofollow noreferrer">complexity classes</a>. It's about <a href="https://en.wikipedia.org/wiki/Decision_problem" rel="nofollow noreferrer">decision problems</a>:</p> <blockquote> <p>"[...] a decision problem is a problem that can be posed as a yes-no question of the input values. Decision problems typically appear in mathematical questions of decidability, that is, the question of the existence of an <strong>effective method</strong> to determine the existence of some object."</p> </blockquote> <p>The travelling salesman problem (<a href="https://en.wikipedia.org/wiki/Travelling_salesman_problem" rel="nofollow noreferrer"><strong>TSP</strong></a>) - as a decision problem - is to find an answer to the question:</p> <blockquote> <p>Given an $n \times n$ matrix $W = (w_{ij})$ with $w_{ij} \in \mathbb{Q}$ and a number $L\in \mathbb{Q}$.</p> <p>Is there a permutation $\pi$ of $\{1,\dots, n\}$ such that</p> <p>$$L(\pi) = \sum_{i=1}^{n} w_{\pi(i)\pi(i+1)} &lt; L?$$</p> </blockquote> <p><sub>with <a href="https://en.wikipedia.org/wiki/Modular_arithmetic" rel="nofollow noreferrer">modular</a> addition, i.e. $n+1 = 1$</sub></p> <p>The answer can be given as a specific example (the output of a constructive "problem solver") which then can be checked for correctness. For <strong>TSP</strong> we know that a specific example given by a constructive problem solver (e.g. a specific permutation $\pi$) can be checked in polynomial time for $L(\pi) &lt; L$, that means <strong>TSP</strong> <a href="https://en.wikipedia.org/wiki/NP_(complexity)" rel="nofollow noreferrer">$\in\mathcal{NP}$</a>. </p> <p>But the answer may also be given by just a boolean value <strong>YES</strong> or <strong>NO</strong> , which cannot be checked at all. 
(What would we try to check?)</p> <p>The first kind of answer is given by algorithms that are programmed to read arbitrary matrices $W$ and numbers $L$ and give an example $\pi$. These are equivalent to constructive proofs which somehow construct a $\pi$ from given $W$ and $L$, and which may be correct or not.</p> <p>The second kind of answer is given by non-constructive proofs - which nevertheless give an answer. Such a proof also "reads" some general $W$ and $L$ and makes some general considerations about them, e.g. like this: If numbers $x_1, \dots, x_n$ can be calculated from $W$ and they relate to $L$ such that $f(x_1,\dots, x_n, L) = 0$ then the answer is <strong>YES</strong> otherwise <strong>NO</strong>.</p> <p>My question is: </p> <blockquote> <p>If some day it is proved that <strong>TSP</strong> <a href="https://en.wikipedia.org/wiki/P_(complexity)" rel="nofollow noreferrer">$\not\in \mathcal{P}$</a> (because <a href="https://en.wikipedia.org/wiki/P_versus_NP_problem" rel="nofollow noreferrer">$\mathcal{P} \neq \mathcal{NP}$</a> and <strong>TSP</strong> is <a href="https://en.wikipedia.org/wiki/NP-hardness" rel="nofollow noreferrer">$\mathcal{NP}$-hard</a>), what do we learn about hypothetical non-constructive proofs that for given $W$ and $L$ there exist solutions $\pi$ with $L(\pi) &lt; L$ (<strong>YES</strong> or <strong>NO</strong>)?</p> </blockquote> <p>Or is the talk about such proofs only a chimera - because they are ill-defined or cannot exist for obvious reasons?</p> <p><strong>Remark 1:</strong> Since proofs have no run-time, the things we can learn about them may concern only their length and/or complexity (in general: structure).</p> <p><strong>Remark 2:</strong> Very short and simple algorithms may have exponential run-times.</p> <p>To think more specifically about this: Assume there is a proof that proves:</p> <blockquote> <p>If you calculate numbers $x_1(W),\dots, x_m(W)$ of a square matrix $W$ and you find that $f(x_1,\dots,x_m,L) = 0$,
then there is a permutation $\pi$ with $L(\pi) &lt; L$.</p> </blockquote> <p>What could be said about this (hypothetical!) proof, assuming that <a href="https://en.wikipedia.org/wiki/P_versus_NP_problem" rel="nofollow noreferrer">$\mathcal{P} \neq \mathcal{NP}$</a>?</p>
Timothy Chow
3,106
<p>I think you may need to formulate your question more precisely.</p> <p>Consider the following "nonconstructive proof": If you do blah-blah-blah and the result is 1 then YES, but if the result is 0 then NO, where "blah-blah-blah," upon closer inspection, amounts to running a Turing machine that exhaustively tries all possible permutations, and outputs 1 if it finds a suitable permutation, but outputs 0 if it does not.</p> <p>This "proof" is pretty short since the Turing machine is rather simple. Secretly, of course, it "constructs" the permutation, but it keeps that information private and does not output it, so from the outside it looks nonconstructive.</p> <p>If this sort of thing counts as a "nonconstructive proof" then we don't need to wait for someone to resolve the P ≠ NP question. We have the proof today.</p> <p>If you don't want this sort of thing to count then you need to specify carefully what you do and do not allow.</p> <p>I conjecture that you may be interested in <a href="https://en.wikipedia.org/wiki/Proof_complexity" rel="nofollow noreferrer">proof complexity</a>, which is related to but different from computational complexity. But this is just a conjecture.</p>
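<p>For concreteness, the "secretly constructive" decider described above can be written down directly (Python; the 3-city weight matrix, the threshold, and the name <code>tsp_decision</code> are my own illustrations). It finds a witness tour internally but only ever reports YES/NO:</p>

```python
# Brute-force TSP decider: exhaustively tries all tours but reveals only
# YES/NO, discarding the witness permutation it found along the way.
from itertools import permutations

def tsp_decision(W, L):
    """Return True iff some cyclic tour has total weight < L."""
    n = len(W)
    for pi in permutations(range(n)):
        cost = sum(W[pi[i]][pi[(i + 1) % n]] for i in range(n))
        if cost < L:
            return True  # witness found, but kept private
    return False

# A made-up symmetric 3-city instance; every tour of it costs 12.
W = [[0, 1, 9],
     [1, 0, 2],
     [9, 2, 0]]
```

On this instance the decider answers YES for $L = 13$ and NO for $L = 12$, while revealing nothing about which tour achieves the cost.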
3,430,136
<p>I would like to prove that the map <span class="math-container">$f: S^n \times S^m \to 2S^{m+n+1}: ((x_1,..,x_{n+1}), (y_1,...,y_{m+1})) \to (x_1,...,x_{n+1},y_1,...,y_{m+1})$</span> is an immersion. Here <span class="math-container">$2S^{m+n+1}$</span> is the <span class="math-container">$(m+n+1)$</span>-dimensional sphere with radius <span class="math-container">$\sqrt2$</span>. </p> <p>I know that I have to prove that the map <span class="math-container">$(f_\star)_p : T_p(S^n \times S^m) \to T_{f(p)}(2S^{m+n+1}) : [\gamma] \to [f \circ \gamma]$</span> is injective but I fail to do this. My initial idea was the following: </p> <p>Assume that <span class="math-container">$(f_\star)_p([\gamma_1]) = (f_\star)_p([\gamma_2])$</span>. It holds that <span class="math-container">$[f \circ \gamma_1] = [f \circ \gamma_2]$</span> and thus <span class="math-container">$f \circ \gamma_1 \sim f \circ \gamma_2$</span>. So there is a chart <span class="math-container">$(U,\phi: U \to \mathbb{R}^p)$</span> with <span class="math-container">$ U \subset \mathbb{R}^{n+m+2}$</span> open such that <span class="math-container">$(\phi \circ (f \circ \gamma_1))'(0) = (\phi \circ (f \circ \gamma_2))'(0)$</span>. It is easy to see that <span class="math-container">$f:S^n \times S^m \to f(S^n \times S^m)$</span> is a homeomorphism and thus <span class="math-container">$\phi \circ f$</span> is a chart for <span class="math-container">$S^n \times S^m$</span>. Therefore <span class="math-container">$\gamma_1 \sim \gamma_2$</span> and thus <span class="math-container">$[\gamma_1] = [\gamma_2]$</span>. </p> <p>I don't think that this is true because we do not know if <span class="math-container">$\phi \circ f$</span> is globally well defined. Can someone help me? </p> <p>Thanks! </p>
Ted Shifrin
71,348
<p>Working in charts on spheres is (almost?) always painful. So, as another answer suggested, let's make it easier by looking at the map on Euclidean space. We can consider <span class="math-container">$F\colon\Bbb R^{n+1}\times\Bbb R^{m+1} \to \Bbb R^{n+m+2}$</span> given by <span class="math-container">$F(x,y) = (x,0)+(0,y) = (x,y)$</span>, where we identify <span class="math-container">$\Bbb R^{n+m+2}$</span> with the product in a natural way. This is a linear map that truly is the identity map and is a diffeomorphism. </p> <p>When we restrict to <span class="math-container">$M=S^n\times S^m$</span>, <span class="math-container">$F|_M$</span> maps to <span class="math-container">$\{(x,y): \|x\|=\|y\|=1\}$</span>, and so the image is contained in a sphere <span class="math-container">$N$</span> of radius <span class="math-container">$\sqrt2$</span>, since <span class="math-container">$\|(x,y)\|^2 = \|x\|^2 + \|y\|^2$</span>.</p> <p>How do we check that <span class="math-container">$f\colon M\to N$</span> is in fact an immersion? Let <span class="math-container">$\pi\colon\Bbb R^{n+m+2}-\{0\}\to N$</span> be the obvious projection map given by <span class="math-container">$\pi(z) = \sqrt2 z/\|z\|$</span>. Then we observe that <span class="math-container">$f = \pi\circ F|_M$</span>. We want to check that <span class="math-container">$df_{(x,y)}$</span> is injective for any <span class="math-container">$(x,y)\in M$</span>. Well, in view of our earlier discussion <span class="math-container">$dF_{(x,y)}$</span> is the identity map, and restricting the identity map to the subspace <span class="math-container">$T_{(x,y)}M$</span> is injective. What is going on with <span class="math-container">$d\pi_{(x,y)}$</span>? 
It maps surjectively to <span class="math-container">$T_{(x,y)}N$</span> with kernel precisely the <span class="math-container">$1$</span>-dimensional subspace of <span class="math-container">$\Bbb R^{n+m+2}$</span> spanned by <span class="math-container">$(x,y)$</span>. So, if <span class="math-container">$v\in T_{(x,y)}M$</span> and <span class="math-container">$df_{(x,y)}(v) = d\pi_{(x,y)}dF_{(x,y)}v = d\pi_{(x,y)} v = 0$</span>, this means that <span class="math-container">$v$</span> is some scalar multiple of <span class="math-container">$(x,y)$</span>. But <span class="math-container">$(tx,ty)$</span> is tangent to <span class="math-container">$M$</span> if and only if <span class="math-container">$tx\in T_x S^n$</span> and <span class="math-container">$ty\in T_y S^m$</span>, and this happens if and only if <span class="math-container">$t=0$</span>, i.e., if <span class="math-container">$v=0$</span> to start with. Thus, <span class="math-container">$f$</span> is an immersion.</p>
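<p>A numerical sanity check of the special case $n=m=1$ (the Clifford torus inside the sphere of radius $\sqrt2$ in $\Bbb R^4$) may also help build confidence. Since the image of $F|_M$ already has norm $\sqrt2$, the projection $\pi$ fixes it, so $df$ is just the restriction of the identity to the tangent plane. The sketch below (pure Python; the helper names are mine) checks that the Gram determinant of the pushed-forward tangent basis is nonzero, i.e. the rank is $2$, at random points:</p>

```python
import math
import random

def tangent_basis(theta, phi):
    # Tangent vectors to S^1 x S^1 at (cos t, sin t, cos p, sin p) in R^4
    v1 = (-math.sin(theta), math.cos(theta), 0.0, 0.0)
    v2 = (0.0, 0.0, -math.sin(phi), math.cos(phi))
    return v1, v2

def gram_det(v1, v2):
    # det of the Gram matrix; positive iff v1, v2 are linearly independent
    a = sum(x * x for x in v1)
    b = sum(x * y for x, y in zip(v1, v2))
    c = sum(x * x for x in v2)
    return a * c - b * b

random.seed(0)
dets = [gram_det(*tangent_basis(random.uniform(0, 2 * math.pi),
                                random.uniform(0, 2 * math.pi)))
        for _ in range(100)]
```

Here the basis is in fact orthonormal, so each determinant should be exactly $1$ up to floating-point error.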
2,144,520
<p>I don't really understand Cantor's diagonal argument, so this proof is pretty hard for me. I know this question has been asked multiple times on here and I've gone through several of them; some of them don't use Cantor's diagonal argument, and I don't really understand the ones that use it. I know I'm supposed to assume that A is countable and then derive a contradiction to conclude that it is uncountable. I just don't know how to get there. Also, there's a part B. </p> <p>Here's part B if you can help: </p> <p>Prove that P(N) = {X : X ⊆ N}, the power set of the natural numbers, is uncountable by establishing a bijection between P(N) and the set A from part (a). </p> <p>(HINT: Given X ⊆ N, we can ask whether 1 ∈ X, 2 ∈ X, etc. Based on the true/false results, can you think of a way to define a unique binary sequence to go with each subset of N?)</p>
A. Salguero-Alarcón
405,514
<p>Suppose $A$ is countable. Then, we will have a bijection:</p> <p>$1 \to \phi(1)=a= \{a_1, a_2, a_3, a_4,...\}$</p> <p>$2 \to \phi(2)=b=\{b_1, b_2, b_3, b_4,...\}$</p> <p>$3 \to \phi(3)=c=\{c_1,c_2,c_3,c_4...\}$</p> <p>$4 \to \phi(4)=d=\{d_1,d_2,d_3,d_4...\}$</p> <p>and so on. Now, we're going to build a sequence that is in $A$ and is not in the list, arriving at a contradiction.</p> <p>Our sequence $x=\{x_1,x_2,x_3,x_4...\}$ is defined this way:</p> <ul> <li>$x_1\neq a_1$. That is, if $a_1=1$, then $x_1=0$, and if $a_1=0$, then $x_1=1$.</li> <li>$x_2\neq b_2$.</li> <li>In general, $x_n\neq \phi(n)_n$.</li> </ul> <p>Now, we see that $x\neq a$, because their first terms are different. Also, $x\neq b$ because their second terms are different. In general, $x$ cannot be the $n$-th element of the list $\phi(n)$, because its $n$-th term is different from the $n$-th term of $\phi(n)$.</p> <p>Now, note that there is a bijection between $A$ and $\mathcal P(\mathbb N)$. For each $S\subset \mathbb N$, let us consider the sequence $x$ that satisfies $x_n=1$ if $n\in S$ and $x_n=0$ if $n\notin S$. For example:</p> <ul> <li>if $S=\{1,3,5,7,...\}$ then $x=(1,0,1,0,...)$. </li> <li>if $S=\{1,2,3\}$, then $x=(1,1,1,0,0,0,0,0,...)$.</li> </ul> <p>It is clear that this is a bijection between $\mathcal P(\mathbb N)$ and $A$, and since $A$ is uncountable, as we proved before, $\mathcal P(\mathbb N)$ is uncountable as well.</p>
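<p>Both constructions are easy to experiment with on finite truncations (Python; the sample listing is a made-up illustration):</p>

```python
# Diagonal construction: flip the k-th bit of the k-th listed sequence.
# The result differs from every sequence in the listing in at least one spot.
def diagonal(listing):
    return [1 - seq[k] for k, seq in enumerate(listing)]

# Part (b): the indicator sequence of a subset of {1, ..., n}.
def indicator(A, n):
    return [1 if k + 1 in A else 0 for k in range(n)]

listing = [
    [0, 0, 0, 0],
    [1, 1, 1, 1],
    [0, 1, 0, 1],
    [1, 0, 0, 1],
]
x = diagonal(listing)  # differs from row k in position k
```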
2,548,942
<p>What would be the best approach to calculate the following limits </p> <p>$$ \lim_{x \rightarrow 0} \left (1+\frac {1} {\arctan x} \right)^{\sin x}, \qquad \lim_{x \rightarrow 0} \frac {\tan ^7 x} {\ln (7x+1)} $$ in a basic way, using some special limits, without L'Hospital's rule? </p>
Community
-1
<p>The second is: $$ \lim_{x\to0}\frac{\tan^7 x}{\log(7x+1)}= \lim_{x\to0}\left(\frac{\sin x}{x}\right)^{7}\frac{1}{\cos^7x}\cdot\frac{7x}{\log(7x+1)}\cdot\frac{x^6}{7}=1\cdot1\cdot1\cdot0=0,$$ using the special limits $\lim_{x\to0}\frac{\sin x}{x}=1$ and $\lim_{t\to0}\frac{\log(1+t)}{t}=1$ with $t=7x$.</p> <p>For the first, write the expression as $\exp\left(\sin x\,\log\left(1+\frac{1}{\arctan x}\right)\right)$ and note that $\sin x\,\log\left(1+\frac{1}{\arctan x}\right)\sim -x\log x\to 0$ as $x\to0^+$, so the limit is $1$.</p>
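<p>A quick numerical check of the second limit (Python; <code>g</code> is my own name for the quotient) confirms that it behaves like $x^6/7$ near $0$:</p>

```python
import math

def g(x):
    # tan(x)^7 / log(7x + 1), which behaves like x^7 / (7x) = x^6 / 7 near 0
    return math.tan(x) ** 7 / math.log1p(7 * x)

vals = [g(10.0 ** (-k)) for k in range(1, 6)]   # g at x = 0.1, 0.01, ...
ratio = g(1e-3) / ((1e-3) ** 6 / 7)             # should be close to 1
```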
117,500
<p>How would you go about finding the conjugacy classes of the nonabelian group of order 21, $G:=\left\langle x,y | x^7=e=y^3, y^{-1}xy=x^2\right\rangle$?</p>
awllower
6,792
<p>We can in fact generalize this situation as follows: <a href="https://math.stackexchange.com/questions/153381/estimates-on-conjugacy-classes-of-a-finite-group">Please see this question for reference</a></p> <blockquote> <p>Theorem:<br> Let A be a normal subgroup of G such that A is the centralizer of every non-trivial element in A. If further G/A is abelian, then G has |G:A| linear characters, and (|A|−1)/|G:A| non-linear irreducible characters of degree |G:A| which vanish off A. </p> </blockquote> <p>Notice that the non-abelian group of order $pq$ with $q \equiv 1 \pmod{p}$ satisfies the conditions there, where the subgroup $A$ is given by a normal subgroup of order $q$.<br> Since at the time of posting there was no answer which exploits characters, I post this for the sake of reference.<br> Thanks for paying attention.</p>
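<p>Since the group is small, the conjugacy classes can also be verified by brute force, matching the count of $3$ linear plus $(7-1)/3 = 2$ non-linear irreducible characters predicted by the theorem. A Python sketch (helper names are mine), writing each element $x^a y^b$ as a pair $(a,b)$ and using the relation $y^b x^c = x^{c\,2^b} y^b$ derived from $y^{-1}xy = x^2$:</p>

```python
# Nonabelian group of order 21: G = <x, y | x^7 = y^3 = e, y^{-1} x y = x^2>.
# Element x^a y^b is stored as (a, b); the relation gives
# (x^a y^b)(x^c y^d) = x^{(a + c * 2^b) mod 7} y^{(b + d) mod 3}.
def mul(g, h):
    (a, b), (c, d) = g, h
    return ((a + c * pow(2, b, 7)) % 7, (b + d) % 3)

G = [(a, b) for a in range(7) for b in range(3)]

def inv(g):
    # linear search is fine in a 21-element group
    return next(h for h in G if mul(g, h) == (0, 0))

def conj_class(g):
    return frozenset(mul(mul(h, g), inv(h)) for h in G)

classes = {conj_class(g) for g in G}
sizes = sorted(len(c) for c in classes)
```

This yields class sizes $1, 3, 3, 7, 7$, hence $5$ classes, i.e. $5$ irreducible characters, as expected.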
1,986,402
<blockquote> <p>How can I simplify $\prod \limits_{l=1}^{a} \frac{1}{4^a} \cdot 16^l$?</p> </blockquote> <p>I've tried looking at the terms and finding something in there to conclude what it might be, and I also collected the factors $16^l$ into one fraction, but that does rather the opposite of simplification.</p>
Jack D'Aurizio
44,121
<p>$$\prod_{l=1}^{a}\frac{16^l}{4^a} = \frac{1}{4^{a^2}}\prod_{l=1}^{a}16^l = 4^{-a^2} 16^{\sum_{l=1}^{a}l} = 4^{-a^2} 4^{a(a+1)}=\color{red}{4^a}.$$</p>
4,247,637
<p>There are several tea cups in the kitchen, some with handles and others without. The number of ways of selecting two cups without a handle and three with a handle is exactly <span class="math-container">$1200$</span>. What is the maximum possible number of cups in the kitchen?<br> Here's what I did:<br> I assumed cups with handle are <span class="math-container">$x$</span> and without handle are <span class="math-container">$y$</span>. Now ways of selecting three cups with a handle are <span class="math-container">$^xC_3$</span> and ways of selecting <span class="math-container">$2$</span> cups without a handle are <span class="math-container">$^yC_2$</span>. So, <span class="math-container">$^xC_3 \cdot {}^yC_2 = 1200$</span>, i.e. <span class="math-container">$x(x-1)(x-2)\,y(y-1)=14400$</span>. Now I am stuck here. How do I proceed from here? Is the only way hit and trial? Please help me out.</p>
Random Variable
16,033
<p>Using the principal branch of the logarithm, let's integrate the function <span class="math-container">$$\frac{e^{i \xi z}}{2 \sqrt{1+z^{2}}}, \quad \xi &gt;0,$$</span> around a semicircular contour in the upper half of the complex plane that is deformed around the branch cut on <span class="math-container">$[i, i\infty)$</span>.</p> <p>(It looks exactly like the contour used <a href="https://math.stackexchange.com/a/371327">here</a>.)</p> <p>According to Jordan's lemma, the integrals along the two big arcs vanish as their radii go to infinity.</p> <p>So we get <span class="math-container">$$ \int_{-\infty}^{\infty} \frac{e^{i \xi x}}{2 \sqrt{1+x^{2}}} \, \mathrm dx + \int_\infty^1 \frac{e^{i \xi (it)}}{2 \sqrt{|1+(it)^2|e^{i \pi}}} \, i\, \mathrm dt + \int_1^\infty \frac{e^{i \xi (it)}}{2 \sqrt{|1+(it)^2|e^{-i \pi}}} \, i \, \mathrm dt=0.$$</span></p> <p>And rearranging, we get <span class="math-container">$$ \begin{align} \int_{-\infty}^{\infty} \frac{e^{i \xi x}}{2 \sqrt{1+x^{2}}} \, \mathrm dx &amp;= \frac{i \left(e^{-i \pi /2}-e^{i \pi/2} \right)}{2}\int_{1}^{\infty} \frac{e^{-\xi t}}{\sqrt{t^2-1}} \, \mathrm dt \\ &amp;= \int_{1}^{\infty} \frac{e^{-\xi t}}{\sqrt{t^2-1}} \, \mathrm dt \\ &amp; = \int_{0}^{\infty} e^{- \xi \cosh u} \, \mathrm du. \end{align}$$</span></p> <hr /> <p>I ignored the small circle of radius <span class="math-container">$\epsilon$</span> about the branch point at <span class="math-container">$z=i$</span> since the absolute value of the integral on that circle is bounded above by <span class="math-container">$$2 \pi e^{-\xi(1- \epsilon)} \epsilon (2 \epsilon- \epsilon^{2})^{-1/2} = 2 \pi e^{-\xi(1- \epsilon)} \left(\frac{2}{\epsilon}-1 \right)^{-1/2}, $$</span> which vanishes as <span class="math-container">$\epsilon \to 0$</span>.</p> <p>All I did was use the ML inequality and the reverse triangle inequality.</p>
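<p>As a numerical sanity check of the last equality (both sides equal the modified Bessel function $K_0(\xi)$), one can compare the two real-integral forms by simple quadrature, after substituting $t = 1 + s^2$ to tame the endpoint singularity of the $t$-integral. This only checks the consistency of the final chain, not the contour argument itself (Python; helper names are mine):</p>

```python
import math

def quad(f, a, b, n=20000):
    # plain trapezoidal rule on a truncated interval
    h = (b - a) / n
    return h * (f(a) / 2 + sum(f(a + k * h) for k in range(1, n)) + f(b) / 2)

def I_sub(xi):
    # ∫_1^∞ e^{-ξt}/sqrt(t²-1) dt via t = 1 + s², dt = 2s ds,
    # sqrt(t²-1) = s·sqrt(2+s²), so the singularity cancels.
    return quad(lambda s: 2 * math.exp(-xi * (1 + s * s)) / math.sqrt(2 + s * s),
                0.0, 20.0)

def I_cosh(xi):
    # ∫_0^∞ e^{-ξ cosh u} du (integrand underflows harmlessly past u ≈ 10)
    return quad(lambda u: math.exp(-xi * math.cosh(u)), 0.0, 10.0)
```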
170,967
<p>Cog $A$ is at position: $Ax$, $Ay$, rotation: $Ar$ and number of teeth: $At$</p> <p>Cog $B$ is at position: $Bx$, $By$ and number of teeth $Bt$. What is Cog $B$'s rotation such that teeth between Cog $A$ and Cog $B$ line up. There will be the same number of answers as there are teeth, but a 'base angle' is desired.</p> <p><img src="https://i.imgur.com/ICrhs.jpg" alt="cog diagram"></p>
sq2
35,825
<p>The solution I have found, which may be @joriki's solution expressed in a different format:</p> <p>Angle $\alpha$ is of course required, see @joriki's solution.</p> <p>$$Br = \frac{At}{Bt}\,(-Ar) + \alpha\,\frac{At + Bt}{Bt}$$</p> <p>and if $Bt$ is even, add $\pi / Bt$ to $Br$.</p>
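<p>A direct transcription of the formula (Python; this merely encodes the formula as I read it, it does not re-derive the gear geometry):</p>

```python
import math

def base_rotation(Ar, alpha, At, Bt):
    """Base rotation Br of cog B, transcribed from the formula above.
    Ar: rotation of cog A, alpha: the angle from @joriki's solution,
    At/Bt: tooth counts (names follow the question)."""
    Br = (At / Bt) * (-Ar) + alpha * (At + Bt) / Bt
    if Bt % 2 == 0:        # half-tooth offset for an even tooth count
        Br += math.pi / Bt
    return Br
```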
514,338
<p>Okay so my algebra knowledge is pretty guff..</p> <p>I am taking a control systems class and pretty much all the questions I am expected to revise, are about doing this algebraic manipulation and I don't know what steps the tutor is taking to do it..</p> <p>Okay here goes..</p> <p>If the transfer function of a system is $G(s) = 3/(20s+1)$, then the closed loop version of that is</p> <p>$$G(s)/(G(s) + 1)$$ </p> <p>so that would be </p> <p>$$\frac{\frac{3}{20s+1}}{\frac{3}{20s+1} + 1}$$</p> <p>This is the bit I am having trouble with.. he just then cancels it all out and gives us the answer on the next line which is.. $$\frac{3}{20s+4}$$</p> <p>He gives us loads of problems to this which are all similar but I just cannot work them out as my algebra sucks so bad.. I don't know how to cancel out stuff which has a division but with an addition in the denominator..</p> <p>I have taken a screen shot of the pdf here <a href="https://i.imgur.com/t82sfYr.png" rel="nofollow noreferrer">http://i.imgur.com/t82sfYr.png</a></p> <p><a href="https://i.imgur.com/GoCBjuq.png" rel="nofollow noreferrer">And another one of another pdf which explains nearly how to do it but misses out the steps..</a></p>
Community
-1
<p>You can usually resort to doing everything straightforwardly if you don't see how the "tricks" work.</p> <p>In this case, the straightforward thing is to "compute" the sum:</p> <p>$$ \frac{3}{20s + 1} + 1 = \frac{20s + 4}{20s+1} $$</p> <p>There are lots of ways to see how to do this. You can try to find the least common denominator, but it's good enough to find <em>any</em> common denominator. i.e. you can write</p> <p>$$ \frac{a}{b} + \frac{c}{d} = \frac{ad}{bd} + \frac{bc}{bd} = \frac{ad+bc}{bd} $$</p> <p>Once you've done that, the next straightforward thing to do is to divide fractions. There are again several mnemonics for remembering this; a common one is</p> <p>$$ \frac{\frac{a}{b}}{\frac{c}{d}} = \frac{a}{b} \cdot \frac{d}{c} = \frac{ad}{bc} $$</p>
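<p>A quick numeric spot-check of the simplification (Python; plugging a few sample values of $s$ into both forms):</p>

```python
def G(s):
    return 3 / (20 * s + 1)

def closed_loop(s):
    # G / (G + 1), computed the long way
    return G(s) / (G(s) + 1)

def simplified(s):
    # the simplified form 3 / (20 s + 4)
    return 3 / (20 * s + 4)

samples = [0.1, 0.5, 1.0, 2.5, 10.0]
```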
3,047,241
<blockquote> <p>Let <span class="math-container">$X_1, X_2, \cdots, X_n$</span> be i.i.d. <span class="math-container">$\sim \text{Bernoulli}(p)$</span>. Then <span class="math-container">$\bar{x}$</span> is an unbiased estimator of <span class="math-container">$p$</span>.</p> </blockquote> <p>How should I approach this type of problem? Even a hint would help me.</p>
Ahmad Bazzi
310,385
<p>If you define <span class="math-container">$\bar{x}$</span> as the sample mean, i.e. <span class="math-container">$$\bar{x} = \frac{1}{n}\sum_{i=1}^n X_i$$</span> take expectations on both sides: <span class="math-container">$$E \bar{x} = E \frac{1}{n}\sum_{i=1}^n X_i$$</span> Since expectation is a linear operator, <span class="math-container">$$E \bar{x} = \frac{1}{n}\sum_{i=1}^n E X_i$$</span> But since <span class="math-container">$E X_i = p$</span>, then <span class="math-container">$$E \bar{x} = \frac{1}{n}\sum_{i=1}^n p$$</span> Since <span class="math-container">$p$</span> is a constant, we can pull it out: <span class="math-container">$$E \bar{x} = \frac{p}{n}\sum_{i=1}^n 1$$</span> Now since <span class="math-container">$\sum_{i=1}^n 1 = n$</span>, we get <span class="math-container">$$E \bar{x} = \frac{p}{n}n = p = \mu$$</span> where <span class="math-container">$\mu = p$</span> is the true mean of the Bernoulli distribution, hence we say that <span class="math-container">$\bar{x}$</span> is an <em>unbiased estimator</em> of <span class="math-container">$p$</span>, since on average it gives us <span class="math-container">$p$</span>.</p>
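<p>The unbiasedness can also be illustrated by simulation (Python; the sample size, repetition count, and seed are arbitrary choices of mine): averaging the estimator $\bar{x}$ over many independent samples lands close to the true $p$.</p>

```python
import random

random.seed(42)  # arbitrary seed, for reproducibility

def sample_mean(p, n):
    # mean of n i.i.d. Bernoulli(p) draws
    return sum(1 if random.random() < p else 0 for _ in range(n)) / n

p = 0.3
# Average the estimator over many independent samples of size 50:
est = sum(sample_mean(p, 50) for _ in range(2000)) / 2000
```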
83,965
<p>When students learn multivariable calculus they're typically barraged with a collection of examples of the type "given surface X with boundary curve Y, evaluate the line integral of a vector field along Y by evaluating the surface integral of the curl of the vector field over the surface X" or vice versa. The trouble is that the vector fields, curves and surfaces are pretty much arbitrary except for being chosen so that one or both of the integrals are computationally tractable.</p> <p>One more interesting application of the classical Stokes theorem is that it allows one to interpret the curl of a vector field as a measure of swirling about an axis. But aside from that I'm unable to recall any other applications which are especially surprising, deep or interesting. </p> <p>I would like to use Stokes' theorem to show my multivariable calculus students something that they will enjoy. Any suggestions?</p>
Igor Rivin
11,142
<p>One could argue that complex function theory (the fact that analytic functions integrate to zero around contours) is an application, and a nice one.</p>
83,965
<p>When students learn multivariable calculus they're typically barraged with a collection of examples of the type "given surface X with boundary curve Y, evaluate the line integral of a vector field along Y by evaluating the surface integral of the curl of the vector field over the surface X" or vice versa. The trouble is that the vector fields, curves and surfaces are pretty much arbitrary except for being chosen so that one or both of the integrals are computationally tractable.</p> <p>One more interesting application of the classical Stokes theorem is that it allows one to interpret the curl of a vector field as a measure of swirling about an axis. But aside from that I'm unable to recall any other applications which are especially surprising, deep or interesting. </p> <p>I would like to use Stokes' theorem to show my multivariable calculus students something that they will enjoy. Any suggestions?</p>
Nilima Nigam
14,740
<p>A student may also learn about the content of Stokes' theorem from instances where it <em>failed</em> to hold as expected. For example, one has to exercise care when trying to use the theorem on domains with holes. Turn this around: the failure of Stokes to hold <em>as expected</em> tells you about the cohomology of the domain. I think it is possible via concrete examples to illustrate this point in a multivariate calculus class without using the more technical phraseology. </p> <p>A similar discussion occurred at <a href="https://mathoverflow.net/questions/57025/down-to-earth-uses-of-de-rham-cohomology-to-convince-a-wide-audience-of-its-usefu">Down-To-Earth Uses of de Rham Cohomology to Convince a Wide Audience of its Usefulness</a></p> <p>For a non-standard application of the failure of the Stokes theorem, there's the odd case of the Purcell Swimmer: <a href="http://iopscience.iop.org/1367-2630/10/6/063016/fulltext/" rel="nofollow noreferrer">http://iopscience.iop.org/1367-2630/10/6/063016/fulltext/</a> Rendering this accessible to a multivariable calculus class may take some work, depending on your students. </p>
73,559
<p>I have the following problem. I'd like to add a legend to <code>MatrixPlot</code>. Each colour should have a legend entry. I used <code>PlotLegends</code>, which in principle works. However, if I use more than five colours, this doesn't work anymore.</p> <pre><code>a = RandomInteger[{1, 6}, {50}]; MatrixPlot[{a}, ColorRules -&gt; {1 -&gt; Red, 2 -&gt; Blue, 3 -&gt; Green, 4 -&gt; Gray,5-&gt;Yellow,6-&gt;Orange}] </code></pre>
C. E.
731
<p>When <code>Automatic</code> fails the same thing can, luckily, be done manually:</p> <pre><code>rules = {1 -&gt; Red, 2 -&gt; Blue, 3 -&gt; Green, 4 -&gt; Gray, 5 -&gt; Yellow, 6 -&gt; Orange}; a = RandomInteger[{1, 6}, {50}]; MatrixPlot[ {a}, ColorRules -&gt; rules, PlotLegends -&gt; SwatchLegend[rules[[All, 2]], rules[[All, 1]]] ] </code></pre> <p><img src="https://i.stack.imgur.com/nLMhV.png" alt="Example legends"></p> <p>A horizontal version (this is an update in response to Bob's new answer) can be achieved thus:</p> <pre><code>PlotLegends -&gt; Placed[ SwatchLegend[rules[[All, 2]], rules[[All, 1]]], Bottom ] </code></pre> <p><img src="https://i.stack.imgur.com/yo8rz.png" alt="Horizontal legends"></p>
134,205
<blockquote> <p>Find the expectation of a Geometric distribution using $\mathbb{E}(X)= \sum_{k=1}^\infty P(X \ge k)$. </p> </blockquote> <p>Okay I know how to find the expectation using the definition of the geometric distribution $$P(X=k)= p \cdot(1-p)^{k-1}$$ and I figured that $P(X \ge k)=(1-p)^{k-1}$ but I don't know how to show it. </p> <p>I know the expectation is $\frac{1}{p}$ but I just get $\mathbb E(X)= \frac{1}{p^2}$ using the method specified in the question.</p>
David Mitra
18,986
<p>For $|r|&lt;1$, the sum of the geometric series $\sum\limits_{k=1}^\infty r^k$ is ${ r\over 1-r}$. So, write $$\sum\limits_{k=1}^\infty P[X\ge k]= \sum\limits_{k=1}^\infty (1-p)^{k-1} = {1\over 1-p}\sum\limits_{k=1}^\infty (1-p)^{k },$$ and apply the formula with $r=1-p$ to get $${1\over 1-p}\cdot{1-p\over 1-(1-p)} = {1\over p}.$$</p>
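<p>A truncated numerical check of the identity (Python; <code>tail_sum</code> is my own name):</p>

```python
def tail_sum(p, terms=10000):
    # partial sum of Σ_{k≥1} P(X ≥ k) = Σ_{k≥1} (1-p)^{k-1},
    # which should approach 1/p as the number of terms grows
    return sum((1 - p) ** (k - 1) for k in range(1, terms + 1))
```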
3,812,087
<p>Let <span class="math-container">$(M,d)$</span> be a metric space. A set <span class="math-container">$A \subset M$</span> is said to be compact if every open cover of <span class="math-container">$A$</span> has a finite subcover.</p> <p>Why do we use this definition, rather than the other &quot;definition&quot; which holds in <span class="math-container">$\mathbb{R}^n$</span>, that is, a set is compact if it is closed and bounded? It is a more intuitive definition, and it is hard for me to think of compact sets being separate from &quot;merely&quot; closed and bounded sets (probably because I can only imagine Euclidean spaces).</p> <p>Is it simply because being closed and bounded (together) is not a topological property?</p>
freakish
340,986
<p>The question &quot;why we define something as something?&quot; is a really tricky question. There is no some universal reason not to define &quot;compact&quot; as &quot;closed and bounded&quot; or as &quot;finite&quot; or as &quot;empty&quot; or as &quot;green grass&quot;. It's just a definition, a label, nothing more.</p> <p>What we actually care about is behaviour and usefulness. In the metric world it turns out that the property &quot;every sequence has a convergent subsequence&quot; is a very strong and desired one. We called that &quot;(sequentially) compact&quot;. And by <a href="https://en.wikipedia.org/wiki/Bolzano%E2%80%93Weierstrass_theorem" rel="nofollow noreferrer">the Bolzano-Weierstrass theorem</a> this definition is equivalent to being &quot;closed and bounded&quot; <strong>but only for <span class="math-container">$\mathbb{R}^n$</span></strong>. The simplest counterexample in the metric world is <span class="math-container">$\mathbb{Q}$</span>. Indeed, not every sequence in <span class="math-container">$A=[0,1]\cap\mathbb{Q}$</span> has a convergent subsequence (e.g. approximation of <span class="math-container">$\sqrt{2}/2$</span> by rationals) even though <span class="math-container">$A$</span> is both closed and bounded (with respect to the Euclidean metric) in <span class="math-container">$\mathbb{Q}$</span>.</p> <p>So &quot;every sequence has a convergent subsequence&quot; is a great property. And in fact it is easily generalizable to non-metric spaces. In the general setup it is also known as <a href="https://en.wikipedia.org/wiki/Sequentially_compact_space" rel="nofollow noreferrer">&quot;sequential compactness&quot;</a>. But it turns out that for metric spaces there is another property that is equivalent to compactness, namely &quot;every open cover has a finite subcover&quot;. And since the definition doesn't require a metric (unlike &quot;bounded&quot; property), it is easily generalizable to non-metric world as well. 
But unfortunately outside of the metric world, this definition of compactness is not equivalent to sequential compactness (in fact neither implies the other). Comparing the two definitions, mathematicians came to the conclusion that the &quot;open cover&quot; definition is actually more useful and hence it became the standard one.</p> <blockquote> <p>It is a more intuitive definition</p> </blockquote> <p>Well, just because something is more intuitive doesn't mean it is better. Besides, nothing is intuitive until it becomes intuitive. :) I doubt that any professional mathematician nowadays would call the open cover definition counterintuitive. It is so common that they've got used to it.</p>
3,812,087
<p>Let <span class="math-container">$(M,d)$</span> be a metric space. A set <span class="math-container">$A \subset M$</span> is said to be compact if every open cover of <span class="math-container">$A$</span> has a finite subcover.</p> <p>Why do we use this definition, rather than the other &quot;definition&quot; which holds in <span class="math-container">$\mathbb{R}^n$</span>, that is, a set is compact if it is closed and bounded? It is a more intuitive definition, and it is hard for me to think of compact sets being separate from &quot;merely&quot; closed and bounded sets (probably because I can only imagine Euclidean spaces).</p> <p>Is it simply because being closed and bounded (together) is not a topological property?</p>
Michael Hardy
11,667
<p>In some commonplace metric spaces such as <span class="math-container">$\ell^2,$</span> there are sets that are closed and bounded but NOT compact. In particular, the standard orthonormal basis of <span class="math-container">$\ell^2$</span> is an example of such a set. And the closed interval from <span class="math-container">$0$</span> to <span class="math-container">$1$</span> within the space of rational numbers with the usual metric is another example. These examples are closed and bounded but not compact.</p>
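<p>The $\ell^2$ example is easy to make concrete (Python; representing each basis vector sparsely as a dict is my own convention): any two distinct standard basis vectors are at distance $\sqrt2$, so the sequence $(e_i)$ has no Cauchy (hence no convergent) subsequence, although the set is closed and bounded.</p>

```python
import math

def e(i):
    # i-th standard basis vector of l^2, stored sparsely
    return {i: 1.0}

def dist(u, v):
    # l^2 distance between two finitely-supported vectors
    keys = set(u) | set(v)
    return math.sqrt(sum((u.get(k, 0.0) - v.get(k, 0.0)) ** 2 for k in keys))

pairwise = {dist(e(i), e(j)) for i in range(10) for j in range(10) if i != j}
```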
1,201,002
<p>I'm trying to find a vector $\vec{c}$ which is orthogonal to the vectors $\vec{a}$ and $\vec{b}$.</p> <p>As far as I understood, I have to show that:</p> <p>$$\langle a,c\rangle=0 $$ $$\langle b,c\rangle=0 $$ </p> <p>So if I would like to determine a vector orthogonal to \begin{bmatrix}-1\\1\end{bmatrix} I just intuitively use $$\langle v,w\rangle=1 \cdot(-1)+1\cdot 1=0 $$ in order to arrive at \begin{bmatrix}1\\1\end{bmatrix} My problem is that I just don't know a mechanical way to solve for an orthogonal vector. It was more an educated guess.</p> <p>For example, given: $\vec{a} = \begin{bmatrix}-1\\1\\1\end{bmatrix}$ and $\vec{b} = \begin{bmatrix}\sqrt{2}\\1\\-1\end{bmatrix}$ how do I find an orthogonal vector?</p> <p>Thank you in advance.</p>
David K
139,123
<p>Given <span class="math-container">$m$</span> orthogonal vectors <span class="math-container">$v_1, v_2, \ldots, v_m$</span> in <span class="math-container">$\mathbb R^n$</span>, a vector orthogonal to them is any vector <span class="math-container">$x$</span> that solves the matrix equation</p> <p><span class="math-container">$$\begin{pmatrix}v_1^T \\ v_2^T \\ \vdots \\ v_m^T\end{pmatrix} x = 0.$$</span></p> <p>To put this a bit more concretely, suppose</p> <p><span class="math-container">$$v_1 = \begin{pmatrix}v_{11} \\ v_{12} \\ \vdots \\ v_{1n}\end{pmatrix},\quad v_2 = \begin{pmatrix}v_{21} \\ v_{22} \\ \vdots \\ v_{2n}\end{pmatrix},\ \ldots,\quad v_m = \begin{pmatrix}v_{m1} \\ v_{m2} \\ \vdots \\ v_{mn}\end{pmatrix},\ \mbox{and}\quad x = \begin{pmatrix}x_{1} \\ x_{2} \\ \vdots \\ x_{n}\end{pmatrix}$$</span></p> <p>where the numbers <span class="math-container">$v_{ij} \in \mathbb R$</span> are all known and the numbers <span class="math-container">$x_i \in \mathbb R$</span> are all unknown. Then the matrix equation above can also be written</p> <p><span class="math-container">$$ \begin{pmatrix} v_{11} &amp; v_{12} &amp; \cdots &amp; v_{1n} \\ v_{21} &amp; v_{22} &amp; \cdots &amp; v_{2n} \\ \vdots &amp; \vdots &amp; \ddots &amp; \vdots \\ v_{m1} &amp; v_{m2} &amp; \cdots &amp; v_{mn} \end{pmatrix} \begin{pmatrix}x_1 \\ x_2 \\ \vdots \\ x_n\end{pmatrix} = \begin{pmatrix}0 \\ 0 \\ \vdots \\ 0\end{pmatrix}.$$</span></p> <p>This is equivalent to the system of linear equations <span class="math-container">$$ \begin{array}{ccccccccl} v_{11}x_1 &amp;+&amp; v_{12}x_2 &amp;+&amp; \cdots &amp;+&amp; v_{1n}x_n &amp;=&amp; 0, \\ v_{21}x_1 &amp;+&amp; v_{22}x_2 &amp;+&amp; \cdots &amp;+&amp; v_{2n}x_n &amp;=&amp; 0, \\ \vdots&amp;&amp;\vdots&amp;&amp;\ddots&amp;&amp;\vdots&amp;&amp;\vdots \\ v_{m1}x_1 &amp;+&amp; v_{m2}x_2 &amp;+&amp; \cdots &amp;+&amp; v_{mn}x_n &amp;=&amp; 0. 
\end{array}$$</span></p> <p>That is, you need to solve a linear system of <span class="math-container">$m$</span> equations with <span class="math-container">$n$</span> unknowns. This is something you can do using row reduction.</p> <p>The solution will never be unique; if the vector <span class="math-container">$x$</span> is a solution then the vector <span class="math-container">$cx$</span> is also a solution, where <span class="math-container">$c$</span> is any scalar constant. If the <span class="math-container">$m$</span> vectors include fewer than <span class="math-container">$n-1$</span> independent vectors, the solution is not even unique up to a scalar constant; you can have multiple vectors in different directions that are all orthogonal to the given vectors. If <span class="math-container">$m \geq n$</span> there may not be a solution at all; the <span class="math-container">$m$</span> vectors may span <span class="math-container">$\mathbb R^n$</span>. There will, however, be solutions as long as the set of given vectors does not contain <span class="math-container">$n$</span> or more mutually independent vectors.</p> <p>In your particular case, if you are not aware of the fact that the cross-product of two independent vectors in <span class="math-container">$\mathbb R^3$</span> is orthogonal to each of those vectors, you have</p> <p><span class="math-container">$$v_1 = \begin{pmatrix}v_{11}\\v_{12}\\v_{13}\end{pmatrix} = \begin{pmatrix}-1\\1\\1\end{pmatrix} \quad \mbox{and} \quad v_2 = \begin{pmatrix}v_{21}\\v_{22}\\v_{23}\end{pmatrix} = \begin{pmatrix}\sqrt{2}\\1\\-1\end{pmatrix},$$</span></p> <p>so you could solve the system of equations</p> <p><span class="math-container">$$\begin{eqnarray} -1\cdot x_1 + 1\cdot x_2 + 1 \cdot x_3 &amp;=&amp; 0, \\ \sqrt{2}\cdot x_1 + 1\cdot x_2 - 1 \cdot x_3 &amp;=&amp; 0. 
\end{eqnarray}$$</span></p> <p>Blindly applying the methods I was taught in high school, I find this is equivalent to</p> <p><span class="math-container">$$\begin{array}{ccccccl} x_1 &amp;-&amp; x_2 &amp;-&amp; x_3 &amp;=&amp; 0, \\ &amp;&amp;\left(1+\sqrt{2}\right)x_2 &amp;+&amp;\left(-1+\sqrt{2}\right) x_3 &amp;=&amp; 0. \end{array}$$</span></p> <p>At this point we can make an arbitrary choice for <span class="math-container">$x_3$</span> and proceed to solve the equations as a system of two equations in two unknowns.</p>
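<p>As a quick numerical sanity check of the recipe above (a sketch using NumPy, not part of the original answer), one can compute the null space of the matrix whose rows are the given vectors via the SVD; right-singular vectors belonging to (near-)zero singular values are exactly the orthogonal directions <span class="math-container">$x$</span>:</p>

```python
import numpy as np

# Rows of V are the two given vectors from the question.
V = np.array([[-1.0, 1.0, 1.0],
              [np.sqrt(2.0), 1.0, -1.0]])

# SVD: rows of Vt whose singular values are (near) zero span the null space.
_, s, Vt = np.linalg.svd(V)
rank = int(np.sum(s > 1e-12))
x = Vt[rank]             # a unit vector orthogonal to both rows of V

print("x =", x)
print("V @ x =", V @ x)  # numerically zero in both entries
```

<p>Any scalar multiple of <span class="math-container">$x$</span> is again a solution, matching the non-uniqueness discussed above; for two independent vectors in <span class="math-container">$\mathbb R^3$</span> the same direction (up to scale) is given by <code>np.cross</code>.</p>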
1,649,194
<p>Let $S^n\subset\mathbb{R}^{n+1}$ denote the standard unit sphere with normal bundle $\nu$, let $N=(0,\dots,0,1)$ and $S=(0,\dots,0,-1)$. Then there are two stereographic projections $$\sigma_+\colon S^n-S\to\mathbb{R}^n $$ and $$\sigma_-\colon S^n-N\to\mathbb{R}^n$$ Both of these maps are homeomorphisms and they form an atlas for the standard smooth structure on $S^n$. I'm interested in how they should induce the standard orientation on $S^n$. (Here I orient $S^n$ by putting the standard orientation on $T S^n \oplus \nu\cong T\mathbb{R}^{n+1}|_{S^n}$ and orient $\nu$ by declaring that the outward-pointing direction is positive.)</p> <p>Let $v\in S^{n-1}\subset S^n$ so that $\sigma_+^{-1}\circ \sigma_-(v)=v$, and let $B=\{b_1,\dots,b_n\}$ be a positively-oriented orthonormal basis for $T_vS^n$ where $b_1$ points along the great circle from $N$ to $S$. Geometrically, it seems like $D_v (\sigma_+^{-1}\circ \sigma_-) (b_i)=-b_1$ if $i= 1$ and $b_i$ otherwise, suggesting that the transition function is orientation reversing, and hence exactly one of $\sigma_+$ and $\sigma_-$ is orientation reversing. My question is: which one reverses orientation and which one preserves it?</p> <p>I haven't succeeded in computing anything in terms of the formulas for stereographic projection. The Jacobian matrix I get in the 2-dimensional case for $\sigma_+$ or $\sigma_-$ is $2\times 3$, so I don't know how I'm supposed to interpret its "determinant". </p> <p>(I should point out that in general my experiences with differential geometry have been very, very, very bad, and I'm much more topologically minded. In particular my definition for an orientation of a vector bundle is a Thom class.)</p>
William
14,816
<p>It looks like, in all dimensions, $\sigma_+$ preserves orientation and $\sigma_-$ reverses it. Here is the case of $S^2$ explicitly worked out:</p> <p>For a vector space $V$ of dimension $n$, an orientation will be an element of $$ \{(b_1,\dots,b_n)\ |\ V=\langle b_1,\dots, b_n\rangle \}/GL^+(V) \cong \mathbb{Z}/2$$ and given oriented vector spaces $(V,o)$ and $(V',o')$, a linear isomorphism $L\colon V\to V'$ will be orientation preserving if $L(o)=o'$, and orientation reversing if $L(o)=-o'$.</p> <p>For the sake of clarity, we will write $\mathbb{R}^3=\langle e_1,e_2,e_3\rangle$ and $\mathbb{R}^2=\langle e_1',e_2'\rangle$, and we will give these vector spaces the natural orientations.</p> <p>Explicitly, the stereographic projections are given by $$\sigma_+(x,y,z)=(\frac{x}{1+z},\frac{y}{1+z})\text{ and }\sigma_-(x,y,z)=(\frac{x}{1-z},\frac{y}{1-z})$$ for $(x,y,z)$ in the appropriate domains. </p> <p>Consider specifically the point $e_1=(1,0,0)$, which is in the domains of both functions. An orientation of $T_{e_1}S^2\cong \langle e_2,e_3\rangle$ is an ordered basis with the property that concatenating with the outward-pointing normal vector $e_1$ gives a positively oriented basis for $\mathbb{R}^3$. (In this case it doesn't matter if the concatenation happens at the beginning or end, because they differ by an even number of transpositions.) The preferred orientation for $\mathbb{R}^3$ is $[e_1,e_2,e_3]$, so the orientation on $T_{e_1}S^2\cong \langle e_2,e_3\rangle$ is given by $[e_2,e_3]$.</p> <p>Stereographic projection sends the standard great circle $S^{1}\subset S^2$ to the unit circle $S^{1}\subset\mathbb{R}^2=\langle e_1',e_2'\rangle$, and in particular $e_1$ is sent to $e_1'$. The tangent space $T_{e_1'}\mathbb{R}^2$ is naturally isomorphic to $\langle e_1',e_2'\rangle$, and the preferred orientation is $[e_1',e_2']$.
</p> <p>In order to compute the derivative of $\sigma_{\pm}$ at $e_1$, consider the two paths $$ \gamma_2(t)=(\cos(t),\sin(t),0)\text{ and }\gamma_3(t)=(\cos(t),0,\sin(t))$$ Then $\frac{d}{dt}\gamma_i(t)|_{t=0}=e_i$, i.e. the derivatives form a basis for the tangent space at $e_1$. Compute</p> <p>$$\frac{d}{dt}\sigma_-\big(\gamma_2(t)\big)|_{t=0} = \frac{d}{dt}\big(\cos(t),\sin(t)\big)|_{t=0}=(0,1)=e_2'$$ and $$\frac{d}{dt}\sigma_-\big(\gamma_3(t)\big)|_{t=0} = \frac{d}{dt}\left(\frac{\cos(t)}{1-\sin(t)},0\right)|_{t=0}=(1,0)=e_1'$$ Hence the derivative of $\sigma_-$ at $e_1$ takes the positively oriented basis $[e_2,e_3]$ to $[e_2',e_1']=-[e_1', e_2']$, and so $\sigma_-$ reverses orientation at $e_1$. That it reverses orientation at all points in its domain should follow from continuity of $\sigma_-$ and connectedness of its domain.</p> <p>Similarly, one computes that the derivative of $\sigma_+$ at $e_1$ takes the basis $[e_2,e_3]$ to $[e_2',-e_1']=[e_1',e_2']$, and so it preserves orientation.</p> <p>A similar computation also works in higher dimensions, but I wrote out the case of $S^2$ because it still illustrates the key idea and computation. For $S^{2n+1}$ one needs to be careful about defining an oriented basis for a tangent space, because moving a normal vector from the end of an ordered basis to the beginning is an odd permutation; however, if the convention is that the normal vector is added at the end, then again we should have $\sigma_-$ orientation reversing and $\sigma_+$ orientation preserving.</p>
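<p>The sign bookkeeping above is easy to get wrong, so here is a numerical double-check (a sketch, assuming only the projection formulas as written): take central-difference directional derivatives of $\sigma_\pm$ at $e_1$ along $\gamma_2,\gamma_3$ and inspect the sign of the determinant of the resulting $2\times 2$ matrix in the basis $[e_1',e_2']$:</p>

```python
import numpy as np

def sigma_plus(p):
    x, y, z = p
    return np.array([x / (1 + z), y / (1 + z)])

def sigma_minus(p):
    x, y, z = p
    return np.array([x / (1 - z), y / (1 - z)])

# Curves through e1 = (1,0,0) whose velocities at t=0 are e2 and e3.
gamma2 = lambda t: np.array([np.cos(t), np.sin(t), 0.0])
gamma3 = lambda t: np.array([np.cos(t), 0.0, np.sin(t)])

def pushforward_det(sigma, h=1e-6):
    # Columns: derivative of sigma along gamma2 and gamma3 (central differences).
    d2 = (sigma(gamma2(h)) - sigma(gamma2(-h))) / (2 * h)
    d3 = (sigma(gamma3(h)) - sigma(gamma3(-h))) / (2 * h)
    return np.linalg.det(np.column_stack([d2, d3]))

print("sigma_plus :", pushforward_det(sigma_plus))   # positive -> preserves
print("sigma_minus:", pushforward_det(sigma_minus))  # negative -> reverses
```

<p>The determinants come out to $+1$ and $-1$ respectively (up to finite-difference error), matching the computation in the answer.</p>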
2,634,701
<p>Let $f: \mathbb{R}^n \rightarrow \mathbb{R}$ be continuous and let $a$ and $b$ be points in $\mathbb{R}^n$. Let the function $g: \mathbb{R} \rightarrow \mathbb{R}$ be defined as: $$ g(t) = f(ta+(1-t)b) $$ Show that $g$ is continuous.</p> <p>If I define a function $h(t)=ta+(1-t)b$, then I have that $g(t)=f(h(t))$. I know that $f$ is continuous, so I have to prove that $h(t)$ is continuous, since a composition of two continuous functions is also continuous.</p> <p>How do I prove that $h: \mathbb{R} \rightarrow \mathbb{R}^n$ is continuous?</p>
Peter Szilas
408,605
<p>$h: \mathbb{R} \rightarrow \mathbb{R}^n$.</p> <p>Denote the norm in $\mathbb{R}$ by $|\cdot |$ and the norm in $\mathbb{R}^n$ by $||\cdot||.$</p> <p>Let $a \not= b$ be points in $\mathbb{R}^n$ and let $t$ be real.</p> <p>Show $h(t)$ is continuous at $t=t_0.$</p> <p>Let $\epsilon \gt 0$ be given.</p> <p>$||h(t)-h(t_0)||= ||(t-t_0)(a-b)|| =|t-t_0|\, ||a-b||$</p> <p>Choose $\delta \lt \dfrac{\epsilon}{||a-b||}$.</p> <p>Then $|t-t_0|\lt \delta$ implies</p> <p>$||h(t)-h(t_0)|| =|t-t_0|\, ||a-b||$</p> <p>$\lt \delta ||a-b|| \lt \epsilon$.</p>
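<p>The estimate above actually shows that $h$ is Lipschitz with constant $||a-b||$. A small numerical illustration (a sketch with arbitrary sample points, not part of the original proof):</p>

```python
import numpy as np

a = np.array([1.0, -2.0, 0.5])   # arbitrary distinct points in R^3
b = np.array([0.0, 3.0, 1.0])

def h(t):
    return t * a + (1 - t) * b

L = np.linalg.norm(a - b)        # the constant ||a - b|| from the proof

rng = np.random.default_rng(0)
for t, t0 in rng.uniform(-5.0, 5.0, size=(100, 2)):
    # ||h(t) - h(t0)|| = |t - t0| * ||a - b||, exactly as in the estimate
    assert abs(np.linalg.norm(h(t) - h(t0)) - abs(t - t0) * L) < 1e-9
print("Lipschitz identity verified; constant =", L)
```

<p>So for a given $\epsilon$, any $\delta \lt \epsilon/||a-b||$ works, exactly as chosen in the proof.</p>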
1,727,339
<p>What am I doing wrong?</p> <p>I've been learning how to put matrices into Jordan canonical form and it was going fine until I encountered this $4 \times 4$ matrix:</p> <p>$A=\begin{bmatrix} 2 &amp; 2 &amp; 0 &amp; -1 \\ 0 &amp; 0 &amp; 0 &amp; 1 \\ 1 &amp; 5 &amp; 2 &amp; -1 \\ 0 &amp; -4 &amp; 0 &amp; 4 \\ \end{bmatrix} $</p> <p>Which has as only eigenvalue $\lambda_1=\lambda_2=\lambda_3=\lambda_4=2$ with 2 corresponding eigenvectors, which I will for now call $v_1$ and $v_2$:</p> <p>$v_1 = \pmatrix{0\\0\\1\\0}, v_2=\pmatrix{-3 \\ 1 \\ 0 \\ 2} $ </p> <p>2 eigenvectors means 2 Jordan blocks so I have 2 possibilities:</p> <p>$J= \pmatrix{2 &amp; 1 &amp; 0 &amp; 0 \\ 0 &amp; 2 &amp; 1 &amp; 0 \\ 0 &amp; 0 &amp; 2 &amp; 0 \\ 0 &amp; 0 &amp; 0 &amp; 2} $ or $ J= \pmatrix{2 &amp; 1 &amp; 0 &amp; 0 \\ 0 &amp; 2 &amp; 0 &amp; 0 \\ 0 &amp; 0 &amp; 2 &amp; 1 \\ 0 &amp; 0 &amp; 0 &amp; 2} $</p> <p>I consider the first possibility. This gives me the relations:</p> <p>$Ax_1=2x_1 \\ Ax_2=x_1+2x_2 \\ Ax_3=2x_3+x_2 \\ Ax_4=2x_4 \\ $</p> <p>where $x_1$ and $x_4$ should be $v_1$ and $v_2$. From the second relation $(A-2I)x_2=x_1$ I see</p> <p>$\pmatrix{0 &amp; 2 &amp; 0 &amp; -1 \\ 0 &amp; -2 &amp; 0 &amp; 1 \\ 1 &amp; 5 &amp; 0 &amp; -1 \\ 0 &amp; -4 &amp; 0 &amp; 2} \pmatrix{a \\ b \\ c \\ d} =\pmatrix{0 \\ 0 \\ 1 \\ 0} $ </p> <p>( $v_2= \pmatrix{ -3 \\ 1 \\ 0 \\ 2} $ will give an inconsistent system)</p> <p>Now I get that $x_2 = \pmatrix{-2 \\ 1 \\ 0 \\ 2} $</p> <p>From the third relation $(A-2I)x_3=x_2$:</p> <p>$\pmatrix{0 &amp; 2 &amp; 0 &amp; -1 \\ 0 &amp; -2 &amp; 0 &amp; 1 \\ 1 &amp; 5 &amp; 0 &amp; -1 \\ 0 &amp; -4 &amp; 0 &amp; 2} \pmatrix{e \\ f \\ g \\ h} =\pmatrix{-2 \\ 1 \\ 0 \\ 2} $ </p> <p>But this system is inconsistent as well! No matter which vectors I try in which places, when I try to generalize eigenvectors I seem to always end up with some inconsistency.</p> <p>Is there something staring me in the face that I am overlooking? 
Or am I doing it completely wrong (even though this method worked fine for me before)?</p> <p>Sorry for the lengthiness and thank you in advance.</p>
Disintegrating By Parts
112,478
<p>Take a look at $$ (A-2I)^2 = \begin{bmatrix} 0 &amp; 2 &amp; 0 &amp; -1 \\ 0 &amp; -2 &amp; 0 &amp; 1 \\ 1 &amp; 5 &amp; 0 &amp; -1 \\ 0 &amp; -4 &amp; 0 &amp; 2 \\ \end{bmatrix}^2 $$ The $a_{3,2}$ entry is non-zero. What does that tell you?</p>
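<p>To spell the hint out numerically (a sketch, not part of the original answer), one can compute the powers of $A-2I$ and their ranks; the number of Jordan blocks of size at least $k$ is $\operatorname{rank}(A-2I)^{k-1}-\operatorname{rank}(A-2I)^{k}$:</p>

```python
import numpy as np

A = np.array([[2.0, 2.0, 0.0, -1.0],
              [0.0, 0.0, 0.0,  1.0],
              [1.0, 5.0, 2.0, -1.0],
              [0.0, -4.0, 0.0, 4.0]])
M = A - 2.0 * np.eye(4)

M2 = M @ M
M3 = M2 @ M
print(M2)                                        # only row 3 is non-zero
print("rank M   =", np.linalg.matrix_rank(M))    # 2 -> two Jordan blocks
print("rank M^2 =", np.linalg.matrix_rank(M2))   # 1 -> a block of size >= 3
print("M^3 = 0  :", np.allclose(M3, 0.0))        # largest block has size 3
```

<p>By the block-count formula, the Jordan form is one $3\times 3$ block plus one $1\times 1$ block, i.e. the first of the two candidate forms in the question.</p>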
2,360,268
<p>Draw a triangle given $A-B=90^{\circ}$ and the lengths of $AC$ and $BC$.</p> <p><strong>My attempt</strong>: I thought it would be a good idea to draw a right angle, so I made the picture below:</p> <p><a href="https://i.stack.imgur.com/yikB7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yikB7.png" alt="enter image description here"></a></p> <p>But I don't know how to find the appropriate point on the circle.</p>
Michael Rozenberg
190,319
<p>By the Law of Sines for $\Delta ABC$, with $a=BC$, $b=CA$, $\beta=\angle B$ and hence $\angle A=90^{\circ}+\beta$, we obtain $$\frac{b}{\sin\beta}=\frac{a}{\sin(90^{\circ}+\beta)}=\frac{a}{\cos\beta}$$ or $$\tan\beta=\frac{b}{a},$$ and an angle with this measure is easy to construct.</p>
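<p>A quick numerical check of this construction (a sketch with arbitrary side lengths, not part of the original answer; it assumes $b\lt a$ so that all angles are positive): set $\beta=\arctan(b/a)$ and verify the Law of Sines with $\angle A=90^{\circ}+\beta$:</p>

```python
import math

a, b = 3.0, 2.0                    # given lengths BC and CA (arbitrary, b < a)
beta = math.atan2(b, a)            # angle B, from tan(beta) = b/a
A = math.pi / 2 + beta             # angle A = 90 degrees + beta
C = math.pi - A - beta             # remaining angle, positive since b < a

# Law of Sines: a / sin A must agree with b / sin B (both equal 2R).
print(a / math.sin(A), b / math.sin(beta))   # the two ratios agree
print(math.degrees(A - beta))                # 90 (up to rounding)
```
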