710,518
<p>I'm having a brainfart while trying to solve a problem for differential equations that requires me to recall some Calculus. If I have $y' = f(t, y) = 1 - t + 4y$, what is $y''$? Do I just differentiate with respect to $t$ to get $y'' = -1$?</p>
Guy
127,574
<p>I am assuming $y'=\frac{dy}{dt}$ since there are no other variables here.</p> <p>We have $\large \frac{dy}{dt} = 1 - t +4y$ </p> <p>Differentiating with respect to $t$ (your idea is right, but the final answer isn't)</p> <p>$$\frac{d^2y}{dt^2} = -1+4\frac{dy}{dt}$$</p> <p>Substituting back the original value of $\frac{dy}{dt}$</p> <p>$$\begin{align}\frac{d^2y}{dt^2} &amp;= -1+4 - 4t +16y\\ &amp;=3-4t+16y\end{align}$$</p>
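As a sanity check, the formula can be verified on an explicit solution of the ODE (a Python sketch, not part of the original answer; the closed-form solution below is a standard linear-ODE computation, and the function names are mine):

```python
import math

# Check y'' = 3 - 4t + 16y on an explicit solution of y' = 1 - t + 4y.
# The general solution is y(t) = C*exp(4t) + t/4 - 3/16; take C = 1.

def y(t):
    return math.exp(4 * t) + t / 4 - 3 / 16

def y2(t):
    # Differentiating the explicit solution twice: y'' = 16*exp(4t).
    return 16 * math.exp(4 * t)

for t in [0.0, 0.3, 1.0]:
    claimed = 3 - 4 * t + 16 * y(t)  # the formula derived above
    assert abs(claimed - y2(t)) < 1e-9
```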
196,173
<p>Recently I have stumbled upon links which are closures of braids of the form $\sigma = \tau^{n}$. Such links generalize torus links. Are there any papers studying such links? In particular, I am interested in questions like: which links appear in this way, what can we say about polynomial invariants of such links, and are there any criteria in terms of polynomial link invariants?</p>
Marco Golla
13,119
<p>If you only cared about knots, then you'd be looking at a special case of the so-called <em>periodic</em> knots, and the latter have been studied by several people.</p> <p>There is <a href="http://www.indiana.edu/~jfdavis/papers/dliv1.pdf" rel="nofollow">this paper</a> by Davis and Livingston where some results are summarised and some others are referred to in the introduction. In particular, they state an older result of Murasugi: the Alexander polynomial of the closure $K$ of $\sigma$ is determined by the (two-variable) Alexander polynomial of the link $L =\hat\tau\cup B$, where $B$ is the unknot that links the braid (the axis of the braid, as I believe it's called). Namely: $$ \delta_\ell(t)\cdot\Delta_K(t) = \prod_{j=1}^n\Delta_L(\zeta^j,t), $$ where $\ell$ is the number of strands of the braid, $\delta_\ell(t) = (1-t^\ell)/(1-t)$, $\zeta$ is a primitive $n$-th root of unity, and the first variable of $\Delta_L$ corresponds to the (meridian of) the unknotted component $B$.</p>
27,509
<p>I have a set of data of the kind {{$x_i$, $y_i$, $z_i$}, ...} at randomly chosen points $x_i, y_i$. </p> <p>The $z_i$ are supposed to be a smooth function of the $x_i$ and $y_i$. Unfortunately they turn out not to be. A few points stick out. These are local extrema - I want to find them and remove them from the data set.</p> <p>The problem is that the $x_i$ and $y_i$ are chosen randomly, so I cannot choose nearest neighbors for comparison easily. Are there any methods to find the nearest neighbors in a given list, or even directly to find the local maxima in a random grid?</p>
Szabolcs
12
<p>The idea is to construct the Delaunay triangulation and get the points that have higher values than all of their neighbours (according to this triangulation).</p> <p>Here's a worked example:</p> <p>Generate the sample data:</p> <pre><code>f[x_, y_] := Sin[x] Sin[y] points = RandomReal[{-3, 3}, {300, 2}]; zval = f @@@ points; </code></pre> <p>Find the local maxima:</p> <pre><code>&lt;&lt; ComputationalGeometry` tri = DelaunayTriangulation[points]; tests = And @@ Thread[zval[[#1]] &gt; zval[[#2]]] &amp; @@@ tri maxima = Pick[points, tests] (* ==&gt; {{1.3437, 1.68432}, {2.75095, -2.69678}, {-2.97601, 1.98261}, {-1.57433, -1.57115}} *) </code></pre> <p>(For your noisy data you might want to use a stricter criterion than just "larger than all neighbours", e.g. larger by a threshold. Also, for simplicity I used the slow ComputationalGeometry package but you'll probably want <a href="https://mathematica.stackexchange.com/questions/13437/how-to-speed-up-the-function-delaunaytriangulation">something faster</a>.)</p> <p>Visualize them:</p> <pre><code>ListDensityPlot[ArrayFlatten[{{points, List /@ zval}}], Epilog -&gt; {Red, PointSize[Large], Point[maxima]}] </code></pre> <p><img src="https://i.stack.imgur.com/1RQ67.png" alt="enter image description here"></p> <p>Two of them are on the convex hull. You can filter those out:</p> <pre><code>Complement[maxima, points[[ConvexHull[points]]]] (* ==&gt; {{-1.57433, -1.57115}, {1.3437, 1.68432}} *) </code></pre> <p>(Note: it would have been better to filter these based on the indexes of points rather than the coordinates to avoid unnecessary floating point comparisons, but I was lazy :)</p>
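The same neighbour-comparison idea can be sketched outside Mathematica; here is a rough pure-Python version that brute-forces $k$ nearest neighbours instead of a Delaunay triangulation (all names are mine, and $k$-NN is a substitute for, not equivalent to, the triangulation):

```python
import math
import random

random.seed(1)

# Sample data: random points in [-3, 3]^2 with z = sin(x) sin(y).
points = [(random.uniform(-3, 3), random.uniform(-3, 3)) for _ in range(300)]
zval = [math.sin(x) * math.sin(y) for x, y in points]

def k_nearest(i, k=6):
    """Indices of the k nearest neighbours of point i (brute force)."""
    order = sorted(range(len(points)),
                   key=lambda j: math.dist(points[i], points[j]))
    return order[1:k + 1]  # order[0] is the point itself

# A point is a local maximum if its z exceeds that of all its neighbours.
maxima = [i for i in range(len(points))
          if all(zval[i] > zval[j] for j in k_nearest(i))]
```

As in the Mathematica version, noisy data calls for a threshold ("larger by some ε") rather than a strict inequality, and large data sets call for a k-d tree instead of the O(n²) brute-force scan.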
3,232,319
<p>I need to show that <span class="math-container">$e^{-it}=\frac{1}{e^{it}}$</span>, but I don't understand what needs to be proven; it seems trivial to me. Could anyone help me? Is the claim true even if $t$ is not real? Thank you.</p>
Ahmad Bazzi
310,385
<p>Using Euler's formula,</p> <p><span class="math-container">$$\cos(x) + i\sin(x) = e^{ix}$$</span></p> <p><span class="math-container">$$\cos(x) - i\sin(x) = e^{-ix}$$</span></p> <p>Multiply the two equations and you’re done: work out the left-hand side, whose trigonometric quantities reduce to <span class="math-container">$1$</span>, while leaving the right-hand side as <span class="math-container">$e^{ix}e^{-ix}$</span>.</p>
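For completeness, the elided computation on the left-hand side:

```latex
(\cos x + i\sin x)(\cos x - i\sin x)
  = \cos^2 x - i^2 \sin^2 x
  = \cos^2 x + \sin^2 x = 1,
\qquad\text{so}\qquad
e^{ix} e^{-ix} = 1
\;\Longrightarrow\;
e^{-ix} = \frac{1}{e^{ix}} .
```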
3,232,319
<p>I need to show that <span class="math-container">$e^{-it}=\frac{1}{e^{it}}$</span>, but I don't understand what needs to be proven; it seems trivial to me. Could anyone help me? Is the claim true even if $t$ is not real? Thank you.</p>
José Carlos Santos
446,262
<p>You have<span class="math-container">$$e^{it}\cdot e^{-it}=e^{it-it}=e^0=1$$</span>and therefore<span class="math-container">$$e^{-it}=\frac1{e^{it}}.$$</span></p>
1,671,572
<p>$\|A\vec{x}\|\leq\|A\|\space\|\vec{x}\|$ where $A$ is an $m\times n$ matrix and $\vec{x}$ is an $n$-dimensional column vector. Assume that $\|A\|=\sqrt{\Sigma_{i}\Sigma_{j}a_{ij}^{2}}$</p>
Emilio Novati
187,568
<p>If you are using an <a href="https://en.wikipedia.org/wiki/Matrix_norm#Induced_norm" rel="nofollow noreferrer">induced norm</a>, the inequality is guaranteed by the definition, as noted in the other answers.</p> <p>From your comment it seems that you are using the Frobenius norm. In this case you can find a proof of the inequality here: <a href="https://math.stackexchange.com/questions/738478/consistency-of-matrix-norm-ax-2-leq-a-frobeniusx-2">Consistency of matrix norm: $||Ax||_2 \leq ||A||_{Frobenius}||x||_2$</a>.</p> <p>Note that we can also have norms (on the space of linear operators) for which the inequality is not true.</p>
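Not a proof, but the Frobenius-norm inequality is easy to check numerically (a pure-Python sketch; variable names are mine):

```python
import math
import random

random.seed(0)

m, n = 4, 3
A = [[random.gauss(0, 1) for _ in range(n)] for _ in range(m)]
x = [random.gauss(0, 1) for _ in range(n)]

# ||Ax||_2
Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
norm_Ax = math.sqrt(sum(v * v for v in Ax))

# Frobenius norm ||A||_F and Euclidean norm ||x||_2
norm_A = math.sqrt(sum(a * a for row in A for a in row))
norm_x = math.sqrt(sum(v * v for v in x))

# Guaranteed by Cauchy-Schwarz applied row by row.
assert norm_Ax <= norm_A * norm_x
```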
23,564
<p>In the <a href="http://www.ems-ph.org/journals/newsletter/pdf/2009-12-74.pdf" rel="noreferrer">December 2009 issue</a> of the <a href="http://www.ems-ph.org/journals/all_issues.php?issn=1027-488X" rel="noreferrer">newsletter of the European Mathematical Society</a> there is a very interesting interview with Pierre Cartier. On page 33, to the question</p> <blockquote> <p>What was the ontological status of categories for Grothendieck?</p> </blockquote> <p>he responds</p> <blockquote> <p>Nowadays, one of the most interesting points in mathematics is that, although all categorical reasonings are formally contradictory, we use them and we never make a mistake.</p> </blockquote> <p>Could someone explain what this actually means? (Please feel free to retag.)</p>
algori
2,349
<p>I'm not a logician at all, but since I'm using categories, I decided at some point to find out what is going on. A logician would probably give you a better answer, but in the mean time, here is my understanding.</p> <p>Grothendieck's Universe axiom (every set is an element of a Grothendieck universe) is equivalent to saying that for every cardinal there is a larger strongly inaccessible cardinal. </p> <p>(Recall that a cardinal $\lambda$ is weakly inaccessible, iff a. it is regular (i.e. a set of cardinality $\lambda$ can't be represented as a union of sets of cardinality $&lt;\lambda$ indexed by a set of cardinality $&lt;\lambda$) and b. for all cardinals $\mu&lt;\lambda$ we have $\mu^+&lt;\lambda$ where $\mu^+$ is the successor of $\mu$. Strongly inaccessible cardinals are defined in the same way, with $\mu^+$ replaced by $2^\mu$. Usually one also adds the condition that $\lambda$ should be uncountable.)</p> <p>Assuming ZFC is consistent we can show that ZFC + " there are no weakly inaccessible cardinals" is consistent. The way I understand it, this is because if we have a model for ZFC, then all sets smaller than the smallest inaccessible cardinal would still give us a model for ZFC where no inaccessible cardinals exist. See e.g. Kanamori, "The Higher Infinite", p. 18.</p> <p>So far the situation is pretty similar to e.g. the Continuum Hypothesis (CH): assuming ZFC is consistent, we can show that ZFC+not CH is consistent.</p> <p>What makes the Universe Axiom different from CH is that we cannot deduce the consistency of ZFC + IC from the consistency of ZFC (here IC stands for "there is an inaccessible cardinal"). This is because we can deduce from ZFC + IC that ZFC is consistent: basically, again all sets smaller than the smallest inaccessible cardinal give a model for ZFC. 
So if we could deduce from the consistency of ZFC that ZFC + IC is consistent, then we could prove the consistency of ZFC + IC in ZFC + IC, which violates Gödel's incompleteness theorem (see e.g. Kanamori, ibid, p. 19).</p> <p>Notice that the impossibility of deducing the consistency of ZFC + IC from the consistency of ZFC is stronger than simply the impossibility of proving IC in ZFC.</p> <p>So my guess would be that Cartier is referring to the fact that the consistency of ZFC implies the consistency of ZFC plus the negation of the Universe Axiom, but not ZFC plus the Universe Axiom itself.</p>
23,564
<p>In the <a href="http://www.ems-ph.org/journals/newsletter/pdf/2009-12-74.pdf" rel="noreferrer">December 2009 issue</a> of the <a href="http://www.ems-ph.org/journals/all_issues.php?issn=1027-488X" rel="noreferrer">newsletter of the European Mathematical Society</a> there is a very interesting interview with Pierre Cartier. On page 33, to the question</p> <blockquote> <p>What was the ontological status of categories for Grothendieck?</p> </blockquote> <p>he responds</p> <blockquote> <p>Nowadays, one of the most interesting points in mathematics is that, although all categorical reasonings are formally contradictory, we use them and we never make a mistake.</p> </blockquote> <p>Could someone explain what this actually means? (Please feel free to retag.)</p>
Dan Doel
5,880
<p>With regard to what Darij said, if Haskell is viewed as a formal system, it's quite easy to derive contradictions; they correspond to infinite loops:</p> <pre><code>loop :: forall a. a loop = loop </code></pre> <p>However, when viewed in this light, the quote becomes something like:</p> <blockquote> <p>Nowadays, one of the most interesting things in computer programming is that, although most languages allow one to write infinite loops, most programs that people write aren't infinite loops.</p> </blockquote> <p>But this isn't surprising at all, because most people construct programs with some productive activity in mind, rather than just spinning in a vicious loop. And similarly, most mathematicians presumably choose theorems and proofs that they find to be similarly productive, rather than a masquerading paradox, even if the formal system would allow the latter. Just because the formal system allows paradox doesn't mean that there aren't non-paradoxical ways to prove things in the system, or that most proofs people write wouldn't be the latter.</p> <p>Now, one might say, "people do write infinite loops." And this is true. But, the analogy with Haskell breaks down a bit at that point, because Haskell makes it <em>easy</em> to write infinite loops. It's just a given in the language, like mine above. With inconsistent formal systems, it's typically not as easy to get a proof of false. People did a lot of naive set theory without noticing that you could do it, for instance. Russell's paradox is probably the easiest of the bunch, but it's quite easy to break it. For instance, you could have a naive set theory with more typing restrictions, such that propositions like x ∉ x aren't well-formed. It will still be inconsistent, but you'll have to do more work to construct a paradox. 
As another example, a type theory with <code>Type : Type</code> (that is, there's a type of all types) is inconsistent, but proving false <a href="http://www.cs.chalmers.se/~ulfn/darcs/Agda2/test/succeed/Hurkens.agda">is a lot of work</a>.</p> <p>So, to posit a reason for why people never make mistakes with their categorical proofs, despite it being a possibility given that people typically work in a naive system: it may well be much harder to construct circular proofs (in category theory) for most theorems that people are interested in than it is to write good proofs. For one, it might be hard to prove false. For two, proving false and then inferring whatever you want is obviously bad, so the paradoxical logic would have to be disguised as a legitimate argument. And that's unlikely to be the sort of reasoning a mathematician has in mind to prove things that aren't fairly related to paradoxes.</p>
4,528,027
<p>I'm trying to prove that the function <span class="math-container">$$\begin{cases}f(x,y)=\dfrac{(2x^2y^4-3xy^5+x^6)}{(x^2+y^2)^2}, &amp; (x,y)≠0\\ 0, &amp; (x,y)=0\end{cases}$$</span> is continuous at the point (0,0) using the rigorous definition of a limit.</p> <p>Attempting to bound the function from above: <span class="math-container">$$|f(x)-f(x_0)|= \left|\frac{(2x^2y^4-3xy^5+x^6)}{(x^2+y^2)^2}-0\right|$$</span> I see the denominator is always positive, so this is equal to <span class="math-container">$\dfrac{|2x^2y^4-3xy^5+x^6|}{(x^2+y^2)^2}$</span>. Using the triangle inequality I know that this is less than or equal to <span class="math-container">$\dfrac{|(2x^2y^4)-(3xy^5)|+|x^6|}{(x^2+y^2)^2}$</span>. From here I would like to continue finding expressions which are greater than or equal to this, which allow me to cancel some terms against <span class="math-container">$(x^2+y^2)^2$</span>. I'm thinking I can write <span class="math-container">$$x^6 = (x^2)^3 ≤ (x^2+y^2)^3 $$</span> for instance, but I am unsure of how to &quot;handle&quot; <span class="math-container">$|(2x^2y^4)-(3xy^5)|$</span>. Could someone give me any pointers?</p>
Lelouch
991,491
<p>You can just use: <span class="math-container">$|x| \leq \sqrt{x^2+y^2} =r$</span> and <span class="math-container">$|y| \leq \sqrt{x^2+y^2} =r $</span></p> <p>Thus <span class="math-container">$$ 0 \leq | f(x,y) | \leq \frac{2r^6+3r^6+r^6}{r^4} = 6r^2 $$</span></p> <p>Then just let <span class="math-container">$r \to 0$</span>; this shows that <span class="math-container">$f(x,y) \to 0$</span> when <span class="math-container">${(x,y) \to (0,0)}$</span></p>
2,775,443
<p>The statement $\displaystyle \sum_{n=1}^{\infty}a_n$ converges $\implies$ $\displaystyle \sum_{n=1}^{\infty}\cfrac{1}{a_n}$ diverges looks natural, but do we have this implication? I have been checking alternating series for a counterexample but could not find one yet. What can we say about the implication?</p>
Kirk Fox
551,926
<p>The $n$-th term divergence test states that if $\lim_{n\to\infty} a_n \ne 0$, then $\sum_{n=1}^{\infty} a_n$ diverges. The contrapositive of this tells us that if $\sum_{n=1}^{\infty} a_n$ converges, then $\lim_{n\to\infty} a_n = 0$. But then $$\lim_{n\to\infty} \left|\frac{1}{a_n}\right| = \infty \ne 0,$$ so in particular $\frac{1}{a_n} \not\to 0$. So, by the $n$-th term divergence test, $\sum_{n=1}^{\infty} \frac{1}{a_n}$ diverges.</p>
502,994
<p>If the graph in the image is $f(x)$, why does $f(|x|)$ look like two triangles above the $x$-axis (basically the right side duplicated on the left)?</p> <p><img src="https://i.imgur.com/pGIdNYZ.jpg" alt="function image"></p>
William Ballinger
79,615
<p>$f(|x|)$ will equal $f(x)$ if $x$ is positive, and $f(-x)$ otherwise. Therefore, the positive part of graph of $f(|x|)$ will be identical to the positive part of the graph of $f(x)$, and the negative part will be a reflection of the positive part of the graph of $f(x)$.</p>
1,231,082
<p>Let $\text{tr}A$ be the trace of the matrix $A \in M_n(\mathbb{R})$.</p> <ul> <li>I realize that $\text{tr}A: M_n(\mathbb{R}) \to \mathbb{R}$ is obviously linear (but how can I write down a <em>formal</em> proof?). However, I am confused about how I should calculate $\text{dim}(\text{Im(tr)})$ and $\text{dim}(\text{Ker(tr)})$ and a basis for each of these subspaces according to the value of $n$. </li> <li>Also, I don’t know how to prove that $\text{tr}(AB)= \text{tr}(BA)$, and I was wondering if it is true that $\text{tr}(AB)= \text{tr}(A)\text{tr}(B)$. </li> <li>Finally, I wish to prove that $g(A,B)=\text{tr}(AB)$ is a positive definite scalar product if $A,B$ are <em>symmetric</em>; and also $g(A,B)=-\text{tr}(AB)$ is a scalar product if $A,B$ are <em>antisymmetric</em>. Can you show me how one can proceed to do this? </li> </ul> <p>I would really appreciate some guidance and help in clarifying the doubts and questions above. Thank you.</p>
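As a sanity check before proving anything: $\text{tr}(AB)=\text{tr}(BA)$ holds numerically, while $\text{tr}(AB)=\text{tr}(A)\text{tr}(B)$ already fails for $A=B=I_2$ (a Python sketch; helper names are mine):

```python
import random

random.seed(0)

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def tr(A):
    return sum(A[i][i] for i in range(len(A)))

n = 3
A = [[random.randint(-5, 5) for _ in range(n)] for _ in range(n)]
B = [[random.randint(-5, 5) for _ in range(n)] for _ in range(n)]

# tr(AB) = tr(BA): both equal the double sum of a_ij * b_ji.
assert tr(matmul(A, B)) == tr(matmul(B, A))

# tr(AB) = tr(A)tr(B) fails already for A = B = I_2: 2 != 4.
I2 = [[1, 0], [0, 1]]
assert tr(matmul(I2, I2)) == 2 and tr(I2) * tr(I2) == 4
```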
lab bhattacharjee
33,337
<p>We don't even need Fermat's Little theorem if we use <a href="https://math.stackexchange.com/questions/641443/proof-of-anbn-divisible-by-ab-when-n-is-odd">Proof of $a^n+b^n$ divisible by a+b when n is odd</a></p> <p>Now $105=3\cdot5\cdot7$</p> <p>So, check for $2^3+3^3,2^5+3^5,2^7+3^7$</p>
1,825,076
<p>Prove that $\sum_{n=1}^\infty \frac{x^{2n}}{(1 + x + \dots + x^{2n})^2}$ converges uniformly when $x \geq 0$.</p>
Jack D'Aurizio
44,121
<p>By setting $f_n(x)=\frac{x^{2n}}{(1+x+\ldots+x^{2n})^2}$ we may easily check that $f_n(x)=f_n(x^{-1})$, so it is enough to prove the series is uniformly convergent over $[1,+\infty)$ or $(0,1]$. Moreover, for any $z\in\mathbb{R}^+$ we have $z+\frac{1}{z}\geq 2$, so: $$ 0\leq f_{n}(x) = \frac{1}{\left(x^{-n}+x^{1-n}+\ldots+x^{n-1}+x^n\right)^2}\leq\frac{1}{(2n+1)^2} $$ and: $$ \sum_{n\geq 0}\frac{1}{(2n+1)^2}=\frac{\pi^2}{8}.$$</p>
2,714,738
<p>If $R$ is a noetherian ring, then $R[x]$ is also a noetherian ring, i.e. $R[x]$ is noetherian as an $R[x]$-module. Is $R[x]$ also noetherian as an $R$-module?</p>
rschwieb
29,335
<p>$R[x]\cong \bigoplus_{i=0}^\infty R$ as $R$-modules, so obviously not.</p>
2,714,738
<p>If $R$ is a noetherian ring, then $R[x]$ is also a noetherian ring, i.e. $R[x]$ is noetherian as an $R[x]$-module. Is $R[x]$ also noetherian as an $R$-module?</p>
Bernard
202,857
<p>A noetherian $R$-module has all its submodules finitely generated – in particular it has to be finitely generated as an $R$-module. If $R[x]$ were finitely generated as an $R$-module, it would be generated by a finite number of polynomials, so every element would have degree bounded by the maximal degree of those generators – which is absurd, since $x^n\in R[x]$ for every $n$.</p>
2,768,249
<p>In the middle of a proof, I have had to analyze the asymptotic behavior of $$ \mathbb{E}\left[\frac{1}{(1+X)^2}\right] = \frac{1}{2^n}\sum_{k=0}^n \binom{n}{k}\frac{1}{(1+k)^2}\tag{1} $$ where $X$ is Binomially distributed with parameters $n$ and $1/2$. (I also had to handle $\mathbb{E}\left[\frac{1}{(1+X)^4}\right]$, but let's start with the square). </p> <p>Now, it is easy to compute $\mathbb{E}\left[\frac{1}{1+X}\right], \mathbb{E}\left[\frac{1}{(1+X)(2+X)}\right]$, but (1) does not have any closed form (the hypergeometric function is <em>not</em> considered by me as a closed form). </p> <p>I know that, as $n\to\infty$, $$ \mathbb{E}\left[\frac{1}{(1+X)^2}\right] = \frac{4}{n^2} - \frac{4}{n^3} + O\left(\frac{1}{n^4}\right) \tag{2} $$ (see e.g. [1], which implies this but deals with general probability $p$ and power $r$ instead of $p=1/2$ and $r=2$), but that seems overkill. </p> <blockquote> <p>What is the simplest and most elegant way to derive (2)?</p> </blockquote> <ul> <li><p>Via a simple argument (concentration of the Binomial around $n/2+O(\sqrt{n})$) it is easy to show that it is $\Theta(1/n^2)$. Applying Jensen also shows a lower bound of $\frac{4}{n^2}+\Theta\left(\frac{1}{n^3}\right)$.</p></li> <li><p>Via a still simple argument, involving comparing it to the (explicitly computable) $\mathbb{E}\left[\frac{1}{(1+X)(2+X)}\right]$ and bounding the difference, it is not hard to show it is $\frac{4}{n^2} + \Theta\left(\frac{1}{n^3}\right)$.</p></li> </ul> <p>But that's not necessarily elegant, and also doesn't quite lead to (2) (I reckon the second approach can be made to work, but it'll get messy, and will definitely not end up being a "proof from the book")</p> <hr> <p>[1] Francisco Cribari-Neto, Nancy Lopes Garcia, and Klaus LP Vasconcellos. <em>A note on inverse moments of binomial variates.</em> Brazilian Review of Econometrics, 20(2):269–277, 2000.</p>
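As a purely numerical check of (2) — not one of the arguments asked for — the expectation can be computed exactly with rational arithmetic and compared to the expansion (a Python sketch; names are mine):

```python
from fractions import Fraction
from math import comb

def inv_sq_moment(n):
    """E[1/(1+X)^2] for X ~ Binomial(n, 1/2), computed exactly."""
    return sum(Fraction(comb(n, k), (1 + k) ** 2)
               for k in range(n + 1)) / Fraction(2) ** n

n = 200
exact = float(inv_sq_moment(n))
approx = 4 / n**2 - 4 / n**3
# The residual is O(1/n^4) (empirically about 12/n^4), far smaller
# than the 4/n^3 correction term itself.
assert abs(exact - approx) < 100 / n**4
assert abs(exact - 4 / n**2) > abs(exact - approx)
```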
Community
-1
<p>For <a href="https://math.stackexchange.com/questions/172841/explain-why-ex-int-0-infty-1-f-x-t-dt-for-every-nonnegative-rando?noredirect=1&amp;lq=1">non-negative random variables</a>, $X$, </p> <p>$$ E[X] = \int_0^\infty P(X &gt; x)dx $$</p> <p>Since $|X| \leq M$ almost surely, $\{|X| &gt; x\} \subseteq \{M &gt; x\}$. Thus, </p> <p>$$ E[|X|] = \int_0^\infty P(|X| &gt; x)dx \leq \int_0^\infty P(M &gt; x)dx = \int_0^\infty \chi_{\{M &gt; x\}}dx = M$$</p>
3,600,868
<p>An ellipse can be <a href="https://math.stackexchange.com/q/3594700/122782">perfectly packed with <span class="math-container">$n$</span> circles</a> if </p> <p><span class="math-container">\begin{align} b&amp;=a\,\sin\frac{\pi}{2\,n} \quad \text{or equivalently, }\quad e=\cos\frac{\pi}{2\,n} , \end{align}</span> </p> <p>where <span class="math-container">$a,b$</span> are the major and minor semi-axes of the ellipse and <span class="math-container">$e=\sqrt{1-\frac{b^2}{a^2}}$</span> is its eccentricity.</p> <p>Consider a triangle and any ellipse, naturally associated with it, for example, Steiner circumellipse/inellipse, Marden inellipse, <a href="https://mathworld.wolfram.com/BrocardInellipse.html" rel="nofollow noreferrer">Brocard inellipse</a>, <a href="https://mathworld.wolfram.com/LemoineInellipse.html" rel="nofollow noreferrer">Lemoine inellipse</a>, ellipse with the circumcenter and incenter as the foci and <span class="math-container">$r+R$</span> as the major axis, or any other ellipse you can come up with, which can be consistently associated with the triangle.</p> <p>The question is: provide the example(s) of triangle(s) for which the associated ellipse(s) can be perfectly packed with circles.</p> <p>Let's say that the max number of packed circles is 12, unless you can find some especially interesting case with more circles.</p> <p>For example, the Steiner inellipse of the famous <span class="math-container">$3-4-5$</span> right triangle cannot be perfectly packed. </p> <p>The example of the right triangle with the Marden inellipse, perfectly packed with six circles, is given in the self-answer below.</p>
g.kov
122,782
<p><a href="https://i.stack.imgur.com/wwA8R.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wwA8R.png" alt="enter image description here"></a></p> <p>This is an example of an ellipse, perfectly packed with three circles, inscribed in the equilateral triangle <span class="math-container">$ABC$</span>.</p> <p>Let the center of the ellipse be <span class="math-container">$M=0$</span> and its semi-axes defined as <span class="math-container">\begin{align} s_a=|DF_1|=|DF_2|&amp;=1 ,\\ s_b=|MD|=|ME|&amp;=\sin\frac\pi{2\cdot3}=\frac12 , \end{align}</span> locations of the top and bottom points are <span class="math-container">\begin{align} D&amp;=(0,-\tfrac12) ,\quad E=(0,\tfrac12) ,\\ F_1&amp;=(-\tfrac{\sqrt3}2,0) ,\quad F_2=(\tfrac{\sqrt3}2,0) . \end{align}</span></p> <p><span class="math-container">\begin{align} \text{Then the equation of the ellipse is } \quad x^2+4\,y^2&amp;=1 \end{align}</span> and for the upper arc we have <span class="math-container">\begin{align} y(x)&amp;=\tfrac12\,\sqrt{1-x^2} ,\\ y'(x)&amp;=-\tfrac12\,\frac{x}{\sqrt{1-x^2}} , \end{align}</span></p> <p>so we can find the point <span class="math-container">$K$</span> where the ellipse is tangent to the circumscribed equilateral <span class="math-container">$\triangle ABC$</span>: <span class="math-container">\begin{align} -\tfrac12\,\frac{x}{\sqrt{1-x^2}} &amp;=\tan\tfrac\pi3=\sqrt3 ,\\ x&amp;=-\tfrac{2}{13}\sqrt{39} ,\\ y(x)&amp;=\tfrac{1}{26}\sqrt{13} . 
\end{align}</span> So, the tangential points are <span class="math-container">\begin{align} K&amp;=(-\tfrac{2}{13}\sqrt{39},\tfrac{1}{26}\sqrt{13} ) ,\quad L=(\tfrac{2}{13}\sqrt{39},\tfrac{1}{26}\sqrt{13} ) , \end{align}</span> and the location of the vertices of <span class="math-container">$\triangle ABC$</span> can be easily found as <span class="math-container">\begin{align} A&amp;=(-\tfrac16\,\sqrt3\,(\sqrt{13}+1), -\tfrac12) ,\quad B=(\tfrac16\,\sqrt3\,(\sqrt{13}+1), -\tfrac12) ,\\ C&amp;=(0,\tfrac12\,\sqrt{13}) , \end{align}</span> the side length of the triangle is thus <span class="math-container">\begin{align} |AB|=|BC|=|CA|=a&amp;=\tfrac13\,\sqrt3\,(\sqrt{13}+1) , \end{align}</span></p> <p>and the tangential points <span class="math-container">$D,K,L$</span> divide the side segments as follows: <span class="math-container">\begin{align} |AK|=|BL|&amp;=\sqrt{\tfrac2{39}(7+\sqrt{13})} ,\quad |CK|=|CL|=\tfrac4{13}\,\sqrt{39} . \end{align}</span> </p> <p>This ellipse is neither Steiner nor Mandart inellipse, it is essentially a <em>generalized Steiner inellipse</em> [Linfield1920], for which the foci are the roots of the derivative of the rational function <span class="math-container">\begin{align} f(z)&amp;=(z-A)^u (z-B)^v (z-C)^w , \end{align}</span></p> <p>where <span class="math-container">$A,B,C$</span> are the coordinates of the vertices of the triangle, and the tangent points <span class="math-container">$L,K,D$</span> divide the segments <span class="math-container">$BC,CA$</span> and <span class="math-container">$AB$</span> as <span class="math-container">$v:w,w:u$</span> and <span class="math-container">$u:v$</span>, respectively. In this case <span class="math-container">\begin{align} u=v&amp;=\tfrac1{12}\,(\sqrt{13}-1) ,\\ w&amp;=\tfrac16\,(7-\sqrt{13}) . 
\end{align}</span></p> <p>Semi-axes of such ellipse, expressed in terms of the side length <span class="math-container">$a$</span> of the equilateral triangle, are</p> <p><span class="math-container">\begin{align} s_a&amp;=\frac{a\sqrt3}{12}\,(\sqrt{13}-1) ,\quad s_b=\frac{a\sqrt3}{24}\,(\sqrt{13}-1) . \end{align}</span> </p> <p>Similarly, for another orientation, </p> <p><a href="https://i.stack.imgur.com/l5L6s.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/l5L6s.png" alt="enter image description here"></a></p> <p>we have</p> <p><span class="math-container">\begin{align} s_a&amp;=\frac{a\sqrt3}{3}\,(\sqrt{7}-2) ,\quad s_b=\frac{a\sqrt3}{6}\,(\sqrt{7}-2) \end{align}</span> </p> <p>and <span class="math-container">\begin{align} u&amp;=\tfrac{11}3-\tfrac43\,\sqrt7 ,\quad v=\tfrac23\,\sqrt7-\tfrac43 ,\quad w=v . \end{align}</span></p> <p>Reference</p> <p>[Linfield1920]: Ben-Zion Linfield. “On the relation of the roots and poles of a rational function to the roots of its derivative”. In: Bulletin of the American Mathematical Society 27.1 (1920), pp. 17-21 </p>
587,162
<p>I want to prove that the following holds, where the $+$ means Minkowski sum:</p> <p>$$ conv(A+B)=conv(A)+conv(B) $$</p> <p>I tried writing a point of the convex hull of $A+B$ as $$ \sum_{j,k}\lambda_j\mu_k(a_j+b_k) $$</p> <p>but I don't know how to continue from here. </p>
Post No Bulls
111,742
<p>A set $C$ is convex iff for all $t\in (0,1)$ we have $tC+(1-t)C= C$.</p> <p>It follows that the Minkowski sum of convex sets $A, B$ is convex: $$t(A+B)+(1-t)(A+B) = (tA+(1-t)A) + (tB+(1-t)B) = A+B$$</p> <p>Therefore, for general sets $A,B$ the sum $\operatorname{conv}(A)+\operatorname{conv}(B)$ is convex; and since it contains $A+B$, it also contains the convex hull of $A+B$. One inclusion proved. </p> <p>For the opposite inclusion, pick a point in the convex hull of $A+B$. It is a convex combination of some points of $A+B$, i.e., $\sum \lambda_k (a_k+b_k)$. Since $$ \sum \lambda_k (a_k+b_k)= \sum \lambda_k a_k + \sum \lambda_k b_k \in \operatorname{conv}(A)+\operatorname{conv}(B),$$ this point lies in $\operatorname{conv}(A)+\operatorname{conv}(B)$, which proves the opposite inclusion.</p>
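The identity can also be checked concretely on finite point sets: the hull of $A+B$ has the same vertex set as the hull of (vertices of $\operatorname{conv}A$) $+$ (vertices of $\operatorname{conv}B$). A pure-Python sketch (monotone-chain hull; all names are mine):

```python
import random

random.seed(7)

def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def hull(pts):
    """Vertices of the 2-D convex hull (Andrew's monotone chain)."""
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def minkowski(A, B):
    return [(a[0] + b[0], a[1] + b[1]) for a in A for b in B]

A = [(random.randint(-9, 9), random.randint(-9, 9)) for _ in range(30)]
B = [(random.randint(-9, 9), random.randint(-9, 9)) for _ in range(30)]

# conv(A+B) and conv(conv(A)+conv(B)) have the same vertex set.
assert sorted(hull(minkowski(A, B))) == sorted(hull(minkowski(hull(A), hull(B))))
```

Integer coordinates keep the comparison exact; with floats one would need a tolerance.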
3,364,263
<p>How many 8-digit numbers (with digits labeled $a_1$ through $a_8$ from left to right) satisfy the following constraints:</p> <ol> <li><span class="math-container">$a_1 &lt; a_2 &lt; a_3 &lt; a_4$</span></li> <li><span class="math-container">$a_4 &gt; a_5$</span></li> </ol>
Dr. Sonnhard Graubner
175,066
<p>Your equation is equivalent to <span class="math-container">$$x \left( x-5 \right) \left( {x}^{2}-x+7 \right) \left( {x}^{2}-6\,x+ 12 \right) =0$$</span></p>
207,399
<p><em>Let $A$ and $B$ be non-empty sets, and let $f\,:\,A\rightarrow B$ be a function.</em> <br/></p> <hr/> <blockquote> <p>$ \color{darkred}{\bf Theorem}$: The function $f$ is injective if and only if $f\circ g=f\circ h$ implies $g=h$ for all functions $g,h:\,Y\rightarrow A$ for all sets $Y$. ($f\,:\,A\, \rightarrowtail \,B$, $f$ is a monomorphism) <br/> </p> </blockquote> <p><hr/> I want to prove this ${\bf {\it theorem}}$, but I get stuck. <br/></p> <p>$\color{darkred}{\bf proof\,\,}$: </p> <p>$\Rightarrow$) Assume that $f$ is injective. Let $g,h:\,Y\rightarrow A$ be functions such that $f\circ g=f\circ h$. Then for every $y\in Y$ we have $f(g(y))=f(h(y))$, and since $f$ is injective, $g(y)=h(y)$ for every $y\in Y$; hence $g=h$. <br/></p> <p>$\Leftarrow$) and here I get stuck, can’t figure out how to prove this. <br/></p> <p>Can someone help me with this proof?<br/></p>
Community
-1
<p>Assume $f(a) = f(b)$ with $a \neq b$. Then define $g: A \rightarrow A$ by $g(x) := b$ if $x=a$, $g(x) := a$ if $x = b$, and $g(x) := x$ otherwise, and take $h:= \operatorname{Id}_A$. Then we have $f\circ g =f \circ h$ but $g \neq h$, contradicting the hypothesis; hence $f$ is injective.</p>
4,033,453
<p>How many roots does the equation <span class="math-container">$$\left\lfloor\frac x3\right\rfloor=\frac x2$$</span> have?</p> <ol> <li><span class="math-container">$1$</span></li> <li><span class="math-container">$2$</span></li> <li><span class="math-container">$3$</span></li> <li>infinitely many</li> </ol> <p>I checked that <span class="math-container">$x=0$</span> and <span class="math-container">$x=-2$</span> are solutions, so I think the answer is <span class="math-container">$(2)$</span>, but I don't know how to solve the problem in general.</p>
VIVID
752,069
<p>First observe that <span class="math-container">$$\color{blue}{\frac x3 } \ge \left\lfloor\frac x3\right\rfloor=\color{blue}{\frac x2} \implies x \le 0 \\ \color{blue}{\frac x3 - 1} \le \left\lfloor\frac x3\right\rfloor=\color{blue}{\frac x2} \implies x \ge -6$$</span> Now, from the equation, notice that <span class="math-container">$x/2$</span> is an integer which means <span class="math-container">$x$</span> is an even number. Therefore, the candidates are <span class="math-container">$x \in \{-6, -4,-2,0\}$</span>.<br /> Checking for all these four values we get <span class="math-container">$x = -4$</span> or <span class="math-container">$x = -2$</span> or <span class="math-container">$x = 0$</span>.</p> <p>So, the equation has <span class="math-container">$3$</span> roots.</p>
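A brute-force check over a range safely containing $[-6,0]$ confirms the three roots (a short Python sketch):

```python
import math
from fractions import Fraction

# floor(x/3) = x/2 forces x/2 to be an integer, so any root is an even
# integer; the bounds above confine roots to [-6, 0], but scan wider anyway.
roots = [x for x in range(-30, 31)
         if x % 2 == 0 and math.floor(Fraction(x, 3)) == Fraction(x, 2)]
# roots == [-4, -2, 0]
```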
620,340
<p>I am stuck by this question from Liu's algebraic geometry textbook on quasi-coherent modules.</p> <p>Let X be an affine scheme $\mbox {Spec} A $. Let $\mathcal {F} $ be a quasi-coherent $\mathcal{O}_{X} $ module. Show that for any affine open subset U of X we have a canonical isomorphism</p> <p>$\mathcal {F}(X)\otimes_{A}\mathcal {O}_{X}(U)\cong\mathcal {F}(U) $. </p> <p>I have tried to reduce the problem to the really simple case where $ U=D (f) $ for $ f\in A $, which is really easy, but I can't prove it for the general case where U is just any affine open subset. Can somebody give me some hints? Thanks!</p> <p>EDIT: I am still working on it, and I realise that I might need this: Can someone tell me if this is true?</p> <p>If $ u\in U $ where U is an affine open subset as before, is</p> <p>$\mathcal {F} _{u}=\tilde {\mathcal{F}(X)}_{u}=\tilde {\mathcal {F}(U)}_{u}\cong \mathcal {F}(U)\otimes_{\mathcal{O}_{X}(U)}\mathcal{O}_{X}(U)_{u} $ </p> <p>true because $\mathcal{F}$ is quasi-coherent over affine schemes?</p>
enoughsaid05
31,754
<p>I attempted the question, but it would be too long as a comment on the preceding answer; I hope someone can help me verify it. (And because I am typing this on the phone :( )</p> <p>I need to show that $\mathcal {F}(X)\otimes_{A}\mathcal {O}_{X}(U)\cong \mathcal {F}(U) $.</p> <p>The $\mathcal{O}_{X} $-module $ \mathcal {O}_{X}|_{U}$ is quasi-coherent because $ i: U\hookrightarrow X $ corresponds to morphisms of affine schemes.</p> <p>Then localising at $ u\in U $, we have</p> <p>$ (\mathcal {F}\otimes_{\mathcal {O}_{X}}i_{*}\mathcal {O}_{X}|_{U})_{u}= \tilde {i_{*}\mathcal {F}|_{U}(X)}_{u} ,$</p> <p>which is the one that needs checking.</p> <p>Then the two sheaves are isomorphic and so we get the conclusion.</p>
2,696,281
<blockquote> <p>$\triangle ABC$ is an equilateral triangle of side $1$. $ \triangle BDC$ is an isosceles triangle with $D$ opposite to $A$ and $DB = DC$ and $\angle BDC = 120^\circ$. If points $M$ and $N$ are on $AB$ and $AC$, respectively, such that $\angle MDN = 60^\circ$, find the perimeter of $\triangle AMN$.</p> </blockquote> <p>My answer comes out to be $1$ by taking the points $M$ and $N$ in such a way that the quadrilateral $AMDN$ becomes a rhombus. This is all I could do and please help in out extending the result of the perimeter (if correct) to all points $M$ and $N$ in general.</p> <p><a href="https://www.facebook.com/Mathematics-Gems-by-Ajay-Lakhina-603105973366261/" rel="nofollow noreferrer">Question is from the Facebook page created by my teacher</a></p>
John Glenn
528,976
<p>Like Vasya and Qurultay in the comments, I too get 2.</p> <p>To find the length of $BD$, let $K$ be the midpoint of $BC$, so that $DK$ is the perpendicular bisector of $BC$ and also bisects $\angle BDC$; thus we have a right triangle $\triangle BKD$. Thus: $$\sin 60^\circ=\frac{0.5}{BD}\Rightarrow BD=\frac{0.5}{\sin 60^\circ}$$</p> <p>Note that $\angle MBD $ is a right angle${}^{\color{red}{2}}$. We know that $\angle MDB$ is $30^\circ {}^{\color{red}{1}}$. To find $MD$: $$\cos30^\circ=\frac{BD}{MD}\Rightarrow MD=\frac{BD}{\cos30^\circ}=\frac{0.5}{\cos30^\circ\sin60^\circ}=\frac{0.5}{0.75}=\frac23$$</p> <p>From this information, we can deduce that $MB=\frac13$ (you can work this out yourself) and thus $AM=\frac23$. We must note that $\triangle MDN$ is isosceles${}^{\color{red}{3}}$, and so is $\triangle AMN$; thus, knowing $AM$, the perimeter of $\triangle AMN$ is $$\mathrm{P}_{\triangle AMN}=3\cdot AM=3\cdot\frac23=2$$</p> <hr> <p>${}^{\color{red}{1}}$If $KD$ bisects $\angle BDC$, then $\angle KDB=60^\circ$ and $\angle KDM=30^\circ$, which leaves $\angle MDB=30^\circ$</p> <p>${}^{\color{red}{2}}$ <em>see comments</em>.</p> <p>${}^{\color{red}{3}}$ Since $MD=AM=\frac23$ and $\angle MAN=\angle MDN=60^\circ$, though I argue there could be a more rigorous proof than this.</p>
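<p>The claim that the perimeter is $2$ for <em>every</em> admissible $M$, not just the symmetric one, can be sanity-checked with coordinates (a Python sketch; constructing $N$ by rotating the ray $DM$ through $-60^\circ$ is one way to enforce $\angle MDN=60^\circ$ with the standard orientation):</p>

```python
import math

sqrt3 = math.sqrt(3)
A = (0.5, sqrt3 / 2)
B, C = (0.0, 0.0), (1.0, 0.0)
D = (0.5, -0.5 / sqrt3)          # DB = DC and angle BDC = 120 degrees

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def perimeter_AMN(t):
    """M = A + t*(B - A); N is where the ray from D obtained by rotating
    DM through -60 degrees meets line AC, so that angle MDN = 60 degrees."""
    M = (A[0] + t * (B[0] - A[0]), A[1] + t * (B[1] - A[1]))
    dx, dy = M[0] - D[0], M[1] - D[1]
    co, si = math.cos(-math.pi / 3), math.sin(-math.pi / 3)
    ux, uy = co * dx - si * dy, si * dx + co * dy   # DM rotated by -60 deg
    ex, ey = C[0] - A[0], C[1] - A[1]               # direction of line AC
    bx, by = A[0] - D[0], A[1] - D[1]
    det = ex * uy - ux * ey
    w = (ux * by - uy * bx) / det                   # N = A + w*(C - A)
    N = (A[0] + w * ex, A[1] + w * ey)
    return dist(A, M) + dist(A, N) + dist(M, N)
```

For several positions of $M$ on $AB$ the perimeter comes out as $2$ to machine precision.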
1,992,009
<p>It is known that $$\lim_{x \to 0}\frac{f(x)}{x} = -\frac12$$ </p> <p>Solve $$\lim_{x \to 1}\frac{f(x^3-1)}{x-1}.$$</p> <p>Beforehand, I know that I should aim to get rid of the denominator $(x-1)$ and as such I factor the numerator to get:</p> <p>$$\lim_{x \to 1}{f(x^2+x+1)}{}.$$</p> <p>Now that I factored the denominator out, I believe I can insert the 1 in to the limit and I would end up with $f(3)$. Here is where I am confused, how can I incorporate the $-\frac12$ in to this? I figured that since one is approaching $1$ and the other is approaching $0$ there is more to this problem. My guess is that I can simply multiply the two limits to get the answer of $-3/2$. </p> <p>Since the original limit is simply $f(x) / x$ , all I have to do is multiply it by $x$(in this case it is 3) to get $f(x)$ again. </p> <p>Am I on the right path? </p>
hamam_Abdallah
369,188
<p>We have</p> <p>$$\frac{f(x^3-1)}{x-1}=(x^2+x+1)\cdot\frac{f(x^3-1)}{x^3-1},$$</p> <p>and when $x\to 1\;$, your limit is $3\cdot\left(-\frac{1}{2}\right)=-\frac{3}{2}$.</p>
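<p>A numerical sanity check (a Python sketch; the two concrete choices of $f$ below are my own — any $f$ with $f(x)/x \to -1/2$ should give the same limit):</p>

```python
def f1(x):          # simplest choice with f(x)/x -> -1/2 as x -> 0
    return -x / 2

def f2(x):          # another choice; the limit should not depend on f
    return -x / 2 + x ** 2

def ratio(f, x):    # the expression whose limit at x = 1 is wanted
    return f(x ** 3 - 1) / (x - 1)
```

Evaluating close to $x=1$ from either side gives values near $-3/2$ for both choices.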
335,295
<p>How do you show that a deduction exist in the Hilbert Proof System, as used in Herbert Enderton, <em>A Mathematical Introduction to Logic</em>.</p> <p>L is a FOL (First Order Language) which contains R, where R is a single binary predicate symbol.</p> <p>a1, a2, a3 are defined as:</p> <p>a1 = $∀x∀y∀z(Rxy → (Ryz → Rxz))$</p> <p>a2 = $∀x(¬Rxx)$</p> <p>a3 = $∀x∀y(x \ne y→Rxy∨Ryx)$</p> <p>We have the theory, Γ = {a1, a2, a3} and for :</p> <p>$Γ ⊢ ∀x∀y(Rxy → (¬Ryx))$</p> <p>How does one going about showing that a proof exists?</p>
Alex Kruckman
7,062
<p>Well, there are two options.</p> <p>Option 1: You could write out a proof using the Hilbert Proof System. The details of this proof are going to depend on the details of the proof system. Since I don't have a copy of Enderton on hand, I can't help you here.</p> <p>Option 2: If you know the completeness theorem for first-order logic, you can argue semantically instead of syntactically. The completeness theorem says that all models for $\Gamma$ satisfy a sentence $\phi$ if and only if there is a proof of $\phi$ from $\Gamma$. This frees us from the formal rules of the system and allows us to argue on a higher level.</p> <p>So take a model $M\models \Gamma$. Why must the sentence $\forall x \forall y (Rxy \rightarrow \lnot Ryx)$ hold in $M$? Hint: What are the axioms a1, a2, a3 saying about $R$?</p>
106,838
<p>Suppose $n &gt; 1$ is a natural number. Suppose $K$ and $L$ are fields such that the general linear groups of degree $n$ over them are isomorphic, i.e., $GL(n,K) \cong GL(n,L)$ as groups. Is it necessarily true that $K \cong L$?</p> <p>I'm also interested in the corresponding question for the special linear group in place of the general linear group.</p> <p>NOTE 1: The statement is false for $n = 1$, because $GL(1,K) \cong K^\ast$ and non-isomorphic fields can have isomorphic multiplicative groups. For instance, all countable subfields of $\mathbb{R}$ that are closed under the operation of taking rational powers of positive elements have isomorphic multiplicative groups.</p> <p>NOTE 2: It's possible to use the examples of NOTE 1 to construct non-isomorphic fields whose additive groups are isomorphic <em>and</em> whose multiplicative groups are isomorphic.</p>
Mikhail Borovoi
4,149
<p>The answer is "yes", see below.</p> <p>Dieudonné in his book "La géométrie des groupes classiques" considers the abstract group $SL_n(K)$ for a field $K$, not necessarily commutative, and writes $PSL_n(K)$ for $SL_n(K)$ modulo the center. In Ch. IV, Section 9, he considers the question whether $PSL_n(K)$ can be isomorphic to $PSL_m(K')$ for $n\ge 2,\ m\ge 2$. He writes that they can be isomorphic only for $n=m$, except for $PSL_2(\mathbb{F}_7)$ and $PSL_3(\mathbb{F}_2)$. If $n=m&gt;2$, then the isomorphism is possible only if $K$ and $K'$ are isomorphic or anti-isomorphic. The same is true for $m=n=2$ if both $K$ and $K'$ are commutative, except for the case $K=\mathbb{F}_4$, $K'=\mathbb{F}_5$. Dieudonné gives ideas of proof and references to Schreier and van der Waerden (1928), to his paper "On the automorphisms of classical groups" in Mem. AMS No. 2 (1951) and to the paper of Hua L.-K. and Wan in J. Chinese Math. Soc. 2 (1953), 1-32.</p> <p>This answers affirmatively the question for $SL_n$, because if $SL_n(K)\cong SL_n(K')$, then $PSL_n(K)\cong PSL_n(K')$. In the case $n=2$, $K=\mathbb{F}_4$, $K'=\mathbb{F}_5$, the orders $|SL_2(\mathbb{F}_4)|=60$ and $|SL_2(\mathbb{F}_5)|=120$ are different, and therefore these groups are not isomorphic.</p> <p>This also answers affirmatively the question for $GL_n$, because $SL_n(K)$ is the commutator subgroup of $GL_n(K)$, except for $GL_2(\mathbb{F_2})$, see Dieudonné, Ch. II, Section 1. In the case $n=2$, $K=\mathbb{F}_2$, we have $|GL_2(\mathbb{F}_2)|=6$ , which is less than $|GL_2(\mathbb{F}_q)|=q(q-1)(q^2-1)$ for any $q=p^r&gt;2$, hence $GL_2(\mathbb{F}_2)\not\cong GL_2(\mathbb{F}_q)$ for $q&gt;2$.</p>
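<p>The group orders quoted in the last two paragraphs can be confirmed by brute force (a Python sketch; $\mathbb{F}_4$ is modelled as $\{x+y\omega : x,y\in\mathbb{F}_2\}$ with $\omega^2=\omega+1$):</p>

```python
from itertools import product

def sl2_order_mod_p(p):
    # count 2x2 matrices over F_p with determinant 1
    return sum((a * d - b * c) % p == 1
               for a, b, c, d in product(range(p), repeat=4))

def gl2_order_mod_p(p):
    # count invertible 2x2 matrices over F_p
    return sum((a * d - b * c) % p != 0
               for a, b, c, d in product(range(p), repeat=4))

# GF(4): elements x + y*w with x, y in GF(2) and w^2 = w + 1
def mul4(u, v):
    (a, b), (c, d) = u, v
    return ((a * c + b * d) % 2, (a * d + b * c + b * d) % 2)

F4 = list(product(range(2), repeat=2))
ONE = (1, 0)
# in characteristic 2, det = ad - bc = ad + bc
sl2_f4_order = sum(
    1 for a, b, c, d in product(F4, repeat=4)
    if tuple((mul4(a, d)[i] + mul4(b, c)[i]) % 2 for i in range(2)) == ONE)
```

This reproduces $|SL_2(\mathbb{F}_4)|=60$, $|SL_2(\mathbb{F}_5)|=120$ and $|GL_2(\mathbb{F}_2)|=6$, confirming the non-isomorphisms used above.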
3,237,094
<p>Find all differentiable functions <span class="math-container">$f\colon [0,\infty)\to [0,\infty)$</span> for which <span class="math-container">$f(0)=0$</span> and <span class="math-container">$f^{\prime}(x^2)=f(x)$</span> for any <span class="math-container">$x\in [0,\infty)$</span>. </p> <p>I have tried to reduce to the form <span class="math-container">$f(x)=f'(x^{\frac{1}{2^n}})$</span>. but it is not coming. Is there any other way?</p>
Lázaro Albuquerque
85,896
<p>First note that <span class="math-container">$f$</span> and <span class="math-container">$f'$</span> are both nondecreasing.</p> <p>By MVT, there is a <span class="math-container">$0 &lt; \lambda &lt; 1$</span> such that <span class="math-container">$f(1)=f'(\lambda)$</span>. But <span class="math-container">$f'(\lambda)=f(\sqrt{ \lambda })$</span> and since <span class="math-container">$f$</span> is nondecreasing, it should be constant on <span class="math-container">$[\sqrt{ \lambda }, 1]$</span>. Therefore, if <span class="math-container">$\sqrt{ \lambda } &lt; x &lt; 1$</span>, we have <span class="math-container">$f'(x)=0=f(\sqrt{x})$</span>. By continuity, <span class="math-container">$f(1)=0$</span>.</p> <p>Take <span class="math-container">$0 &lt; \delta &lt; 1$</span> and suppose that <span class="math-container">$f(n)=0$</span>. Then <span class="math-container">$$f(n + \delta) = \int_n^{n + \delta} f'(x)dx $$</span> <span class="math-container">$$ \le \delta f'(n + \delta) $$</span> <span class="math-container">$$ = \delta f(\sqrt{n + \delta}) $$</span> <span class="math-container">$$ \le \delta f(n + \delta) $$</span> </p> <p>The last inequality follows because for <span class="math-container">$n \ge 1$</span>, we have <span class="math-container">$1 &lt; \sqrt{n + \delta} &lt; n + \delta$</span>. So <span class="math-container">$f(n+\delta) \le 0$</span> and then <span class="math-container">$f(n + \delta)=0$</span>. By continuity, <span class="math-container">$f(n+1)=0$</span> and the induction is complete.</p> <p>Finally, <span class="math-container">$f$</span> is nondecreasing with <span class="math-container">$f(0)=f(1)=f(2)=\cdots=0$</span>, so <span class="math-container">$f$</span> vanishes on every interval <span class="math-container">$[n,n+1]$</span>; that is, <span class="math-container">$f\equiv 0$</span> is the only such function.</p>
3,625,886
<blockquote> <p>Show that if <span class="math-container">$a$</span> and <span class="math-container">$b$</span> belong to <span class="math-container">$\mathbb{Z}_+$</span> then there are divisors <span class="math-container">$c$</span> of <span class="math-container">$a$</span> and <span class="math-container">$d$</span> of <span class="math-container">$b$</span> with <span class="math-container">$(c,d)=1$</span> and <span class="math-container">$cd=\text{lcm}(a,b)$</span>.</p> </blockquote> <p>My try: </p> <p>We know that <span class="math-container">$$\text{gcd}(a,b)\cdot \text{lcm}(a,b)=ab$$</span> then, <span class="math-container">$$\text{lcm}(a,b)=\dfrac{ab}{\text{gcd}(a,b)}$$</span> Let <span class="math-container">$d=b$</span> and <span class="math-container">$c=\frac{a}{\text{gcd}(a,b)}.$</span></p> <p>Now <span class="math-container">$d|b$</span> and <span class="math-container">$c|a$</span> with <span class="math-container">$\text{lcm}(a,b)=cd$</span>.</p> <p>Now I am not sure about the following step: <span class="math-container">$$\text{gcd}(c,d)=\text{gcd}\left(\dfrac{a}{\text{gcd}(a,b)},b\right) =\text{gcd}\left(\dfrac{\text{lcm}(a,b)}{b},b\right)=1$$</span></p> <p>Tell me is this correct?</p>
Calvin Lin
54,563
<p>It is not correct.</p> <p>If <span class="math-container">$ a = 4, b = 2$</span>, then you have <span class="math-container">$ d = b = 2, c = a/ \gcd(a,b) = 4/2 = 2$</span>.<br> This doesn't satisfy <span class="math-container">$ \gcd(c,d) = 1$</span>. </p> <p>Now, figure out where your reasoning breaks down, and how to fix it. </p>
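<p>A quick computational check (a Python sketch): the proposed pair indeed fails for $a=4$, $b=2$, yet a valid pair $(c,d)$ does exist for every small $(a,b)$, so the statement itself is fine and only the construction needs repair:</p>

```python
from math import gcd
from itertools import product

def divisors(n):
    return [k for k in range(1, n + 1) if n % k == 0]

def lcm(a, b):
    return a * b // gcd(a, b)

# the counterexample above: a = 4, b = 2 gives c = d = 2
bad_c, bad_d = 4 // gcd(4, 2), 2
counterexample_fails = gcd(bad_c, bad_d) != 1        # gcd is 2, not 1

# nevertheless a valid coprime pair with c*d = lcm(a,b) always exists
all_ok = all(any(gcd(c, d) == 1 and c * d == lcm(a, b)
                 for c in divisors(a) for d in divisors(b))
             for a, b in product(range(1, 31), repeat=2))
```

(The standard fix is to split each prime power of $\operatorname{lcm}(a,b)$ into $c$ or $d$ according to which of $a$, $b$ carries the larger exponent.)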
104,186
<p>I'm having difficulty understanding how to express text in <code>Epilog</code> that is dynamically updated using <code>Log[b, x]</code>. <em>Mathematica</em> changes this to base $e$, but I would like it to be <code>Log[b, x]</code> in traditional format with base $b$, and I can't seem to make it work. I'm guessing I need to break the $\log$ apart using boxes or something, but don't know how to make a subscript that is a dynamically updated <code>b</code> value. Any ideas?</p> <pre><code>Manipulate[ Plot[{b^x, x, Log[b, x]}, {x, -10, 10}, PlotRange -&gt; {{-5, 5}, {-10, 10}}, PerformanceGoal -&gt; "Quality", ImageSize -&gt; All, Epilog -&gt; {Text[b^x, {-3, 4}], Text[Log[b, x], {3, -5}]}, GridLines -&gt; {Range[-10, 10, 1], Range[-10, 10, 1]}, GridLinesStyle -&gt; Opacity[.04]], {{b, 2, "Choose a base"}, 0.01, 4}] </code></pre>
Michael E2
4,999
<p>Here's a way that let's <em>Mathematica</em> take care of the typesetting. Use <a href="http://reference.wolfram.com/language/ref/With.html" rel="nofollow noreferrer"><code>With</code></a> to inject the value of <code>b</code> into a held expression for the logarithm (see <a href="http://reference.wolfram.com/language/ref/HoldForm.html" rel="nofollow noreferrer"><code>HoldForm</code></a>).</p> <pre><code>Epilog -&gt; {Text[b^x, {-3, 4}], Text[With[{b = b}, HoldForm@Log[b, x]], {3, -5}]} </code></pre> <p><img src="https://i.stack.imgur.com/Gjkr0.png" alt="Mathematica graphics"></p>
1,357,488
<p>Let $E$ together with $g$ be an inner product space (over the field $\mathbb R$), with $\text{dim}\,E=n&lt;\infty$, let $\{e_1,\cdots,e_n\}$ be an orthonormal basis of $E$, and let $\{e^1,\cdots,e^n\}$ be its dual basis. Now we define $\omega:=e^1\wedge\cdots\wedge e^n$ as the volume element of $E$.</p> <p>How can I prove that $\omega(u_1,\cdots,u_n)\omega(v_1,\cdots,v_n)=\det[g(u_i,v_j)] \qquad \forall u_i,v_j\in E\qquad\text{and } i=1,\cdots,n\ ?$ </p> <p>Of course, I proved that $\omega(u_1,\cdots,u_n)=\det[u_1 \cdots u_n]_{n\times n}$ (the $u_i$'s are the columns of the matrix), but I cannot see how to use it for my problem.</p>
HK Lee
37,116
<p>Hint : (1) Let $$ U=[u_1\cdots u_n],\ V=[v_1\cdots v_n]$$</p> <p>Then $$ (U^TV)_{ij} = u_i\cdot v_j =g(u_i,v_j) $$</p> <p>(2) ${\rm det} (U^TV)={\rm det}\ U {\rm det}\ V$</p>
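<p>A concrete numeric instance of the hint (a Python sketch with $n=3$; the integer matrices are arbitrary choices of mine, and exact integer arithmetic makes the check equality, not approximation):</p>

```python
def det3(M):
    # cofactor expansion along the first row
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

U = [[1, 2, 0], [0, 1, 3], [4, 0, 1]]   # columns are the vectors u_1, u_2, u_3
V = [[2, 1, 1], [0, 1, 0], [1, 0, 2]]   # columns are v_1, v_2, v_3

# G[i][j] = g(u_i, v_j) = u_i . v_j in an orthonormal basis, i.e. G = U^T V
G = [[sum(U[k][i] * V[k][j] for k in range(3)) for j in range(3)]
     for i in range(3)]
```

The identity $\omega(u)\,\omega(v)=\det[g(u_i,v_j)]$ then reduces to $\det U\cdot\det V=\det(U^TV)$, which holds exactly here.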
3,822,822
<p>Four cards are face down on a table. You are told that two are red and two are black, and you need to guess which two are red and which two are black. You do this by pointing to the two cards you’re guessing are red (and then implicitly you’re guessing that the other two are black). Assume that all configurations are equally likely, and that you do not have psychic powers. Find the probability that exactly j of your guesses are correct, for j = 0, 1, 2, 3, 4. Hint: Some probabilities are 0.</p> <p>My professor worked out this example in class and I know the answers for j = 1 and j = 3 are zero, j = 0 is 1/6, j = 2 is 2/3, and j = 4 is 1/6 but I do not understand the process or concept behind the question. I do not understand where the numbers are coming from and why only the even js have a probability but not the odd js. The j guesses and pointing is confusing me. Can someone please help explain the question, I would appreciate it.</p>
fny
28,533
<p>Here's a general answer that should provide you with insight.</p> <p>Say we have <span class="math-container">$2n$</span> cards where <span class="math-container">$n$</span> are red and <span class="math-container">$n$</span> are black. There are <span class="math-container">${2n \choose n}$</span> arrangements of those cards (you basically choose which <span class="math-container">$n$</span> are red).</p> <p>Great! So now we need count all the ways we can get <span class="math-container">$j$</span> correct. While tempting, we can't just choose <span class="math-container">$j$</span> of the <span class="math-container">$2n$</span> cards. Why? <em>Getting one card correct guarantees you'll get a card of the opposite color correct.</em></p> <p>Here's why: Say you guess 1 red card correctly. You now have <span class="math-container">$n-1$</span> red cards and <span class="math-container">$n$</span> black cards to guess, which means you could at most guess <span class="math-container">$n-1$</span> of the <span class="math-container">$n$</span> black cards incorrectly leaving one which you are guaranteed to guess correctly.</p> <p>Given that, the number of ways to guess <span class="math-container">$j$</span> cards correctly is the number of ways we can pick <span class="math-container">$j/2$</span> red cards and the number of ways we can pick <span class="math-container">$j/2$</span> black cards guaranteed by the red cards.</p> <p><span class="math-container">$$ P(j) = {{n \choose j/2}{n \choose j/2} \over {2n \choose n}} = \frac{n!^4}{(2n)!(j/2)!^2(n-j/2)!^2} $$</span></p>
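<p>The stated probabilities and the closed form can be verified by enumerating all $\binom{4}{2}=6$ equally likely arrangements (a Python sketch; by symmetry the guess can be fixed):</p>

```python
from fractions import Fraction
from itertools import combinations
from math import comb

guess_red = {0, 1}                 # fix one guess; all guesses are symmetric
counts = {j: 0 for j in range(5)}
for true_red in combinations(range(4), 2):
    # a position is guessed correctly iff its guessed color matches its true color
    correct = sum((p in guess_red) == (p in true_red) for p in range(4))
    counts[correct] += 1

probs = {j: Fraction(c, comb(4, 2)) for j, c in counts.items()}
```

The enumeration reproduces $1/6,\ 0,\ 2/3,\ 0,\ 1/6$ and matches the formula with $n=2$.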
129,993
<p>Let $p$ and $q$ be relative primes, $n$ positive integer.</p> <p>Given</p> <ul> <li>$n\bmod p$ and</li> <li>$n\bmod q$</li> </ul> <p>how do I calculate $n\bmod (pq)$ ?</p>
Bill Dubuque
242
<p>One may use the Bezout identity $\rm\:a\:p + b\:q = 1\:$ obtained by the extended Euclidean algorithm. But, in practice, it's often more convenient to use the form below, e.g. see my <a href="https://math.stackexchange.com/search?q=user%3A242+easy-CRT">Easy CRT posts.</a></p> <p><strong>Theorem (Easy CRT)</strong> $\rm\ \ $ If $\rm\ p,\:q\:$ are coprime integers then $\rm\ p^{-1}\ $ exists $\rm\ (mod\ q)\ \ $ and</p> <p>$\rm\displaystyle\quad\quad\quad\quad\quad \begin{eqnarray}\rm n&amp;\equiv&amp;\rm\ a\ (mod\ p) \\ \rm n&amp;\equiv&amp;\rm\ b\ (mod\ q)\end{eqnarray} \ \iff\ \ n\ \equiv\ a + p\ \bigg[\frac{b-a}{p}\ mod\ q\:\bigg]\ \ (mod\ p\:\!q)$</p> <p><strong>Proof</strong> $\rm\ (\Leftarrow)\ \ \ mod\ p\!:\:\ n\equiv a + p\ (\cdots)\equiv a\:,\ $ and $\rm\ mod\ q\!:\:\ n\equiv a + (b-a)\ p/p \equiv b\:.$</p> <p>$\rm\ (\Rightarrow)\ \ $ The solution is unique $\rm\ (mod\ p\!\:q)\ $ since if $\rm\ x',\:x\ $ are solutions then $\rm\ x'\equiv x\ $ mod $\rm\:p,q\:$ therefore $\rm\ p,\:q\ |\ x'-x\ \Rightarrow\ p\!\:q\ |\ x'-x\ \ $ since $\rm\ \:p,\:q\:$ coprime $\rm\:\Rightarrow\ lcm(p,q) = p\!\:q\:.\quad$ <strong>QED</strong></p>
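<p>Easy CRT translates directly into code (a Python sketch; the three-argument <code>pow</code> used for the modular inverse requires Python 3.8+):</p>

```python
def easy_crt(a, p, b, q):
    """Solve n = a (mod p), n = b (mod q) for coprime p, q,
    via n = a + p * ((b - a) / p mod q)."""
    return (a + p * (((b - a) * pow(p, -1, q)) % q)) % (p * q)
```

For example, <code>easy_crt(2, 5, 3, 7)</code> returns a residue mod $35$ that is $2$ mod $5$ and $3$ mod $7$.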
1,990,033
<p>Suppose I have a point $P(x_1, y_1$) and a line $ax + by + c = 0$. I draw a perpendicular from the point $P$ to the line. The perpendicular meets the line at point $Q(x_2, y_2)$. I want to find the coordinates of the point $Q$, i.e., $x_2$ and $y_2$.</p> <p>I searched up for similar questions where the coordinates of end points of the line segment are given. But here, I've got an equation for the line. So, I am pretty clueless how to solve this.</p> <p>Please give me a formula to arrive at my answer (if any) and show me its derivation too. I am a high school student with a basic knowledge of trigonometry. I have no idea of calculus, so please give me a simplified answer.</p> <p>Any help is highly appreciated. Thanks a lot in advance...</p>
lab bhattacharjee
33,337
<p>Using $\sin3B=3\sin B-4\sin^3B$</p> <p>$$32\sin^3x\cdot\cos^2x\cdot\cos2x=\sin x(3-4\sin^2x)$$</p> <p>If $\sin x=0,x=180^\circ n$ where $n$ is any integer</p> <p>Else using $\cos2A=2\cos^2A-1=1-2\sin^2A,$</p> <p>$$32\cdot\dfrac{1-\cos2x}2\cdot\dfrac{1+\cos2x}2\cdot\cos2x=3-2(1-\cos2x)$$</p> <p>$$\iff8(c-c^3)=1+2c\iff4c^3-3c=-\dfrac12$$</p> <p>$$\implies\cos6x=-\dfrac12,6x=360^\circ m\pm120^\circ\iff x=60^\circ m\pm20^\circ$$ where $m$ is any integer</p>
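<p>A numerical check of the solution families (a Python sketch; <code>lhs</code> and <code>rhs</code> are the two sides of the equation being solved, with $\sin 3x$ expanded as in the first line of the answer):</p>

```python
import math

def lhs(x):
    return 32 * math.sin(x) ** 3 * math.cos(x) ** 2 * math.cos(2 * x)

def rhs(x):                          # sin(x) * (3 - 4 sin^2 x) = sin(3x)
    return math.sin(x) * (3 - 4 * math.sin(x) ** 2)

deg = math.pi / 180
solutions = ([180 * n * deg for n in range(-2, 3)] +
             [(60 * m + s * 20) * deg for m in range(-3, 4) for s in (1, -1)])
```

Both families $x=180^\circ n$ and $x=60^\circ m\pm 20^\circ$ satisfy the equation to machine precision, while a generic point does not.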
1,482,152
<p>Show that four non-coplanar points in $\mathbb{R}^3$ determine a unique sphere.</p> <p>I have no idea how to solve this exercise. Thank you for your help.</p>
Emilio Novati
187,568
<p>Hint:</p> <p>The equation of a sphere of center $C=(\alpha,\beta,\gamma)$ and radius $r$ is: $$ (x-\alpha)^2+(y-\beta)^2+(z-\gamma)^2=r^2 $$ that becomes $$ x^2+y^2+z^2+ax+by+cz+d=0 $$ with $$ a=-2\alpha \quad b=-2\beta \quad c=-2\gamma \quad d=\alpha^2+\beta^2+\gamma^2-r^2 $$</p> <p>so if we have four points, substituting the coordinates of these points in the equation we find a linear system of four equations in four unknowns $a,b,c,d$ that has one solution if the four points are not coplanar (look at its determinant).</p>
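<p>The hint can be turned into a small solver (a Python sketch with a hand-rolled Gaussian elimination so it stays dependency-free; the test points below are an arbitrary non-coplanar quadruple):</p>

```python
import math

def solve_linear(M, v):
    """Gaussian elimination with partial pivoting (pure Python)."""
    n = len(M)
    A = [row[:] + [v[i]] for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for k in range(col, n + 1):
                A[r][k] -= f * A[col][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][k] * x[k] for k in range(r + 1, n))) / A[r][r]
    return x

def sphere_through(pts):
    # fit x^2 + y^2 + z^2 + a x + b y + c z + d = 0 through the four points
    M = [[x, y, z, 1.0] for (x, y, z) in pts]
    v = [-(x * x + y * y + z * z) for (x, y, z) in pts]
    a, b, c, d = solve_linear(M, v)
    center = (-a / 2, -b / 2, -c / 2)
    radius = math.sqrt(sum(t * t for t in center) - d)
    return center, radius

pts = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (0, 0, 0)]
center, radius = sphere_through(pts)
```

For these four points the unique sphere has center $(1/2,1/2,1/2)$ and radius $\sqrt{3}/2$, and all four points lie on it.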
1,708,420
<p>Here's what I am trying to figure out: Find all vectors $\vec{b}$ that are in $span \{\vec{u},\vec{v},\vec{w}\}$ where $\vec{u},\vec{v},\vec{w}$ are vectors.</p> <p>I'm given specific vectors in $\mathbb{R}^3$ for $\vec{u},\vec{v},\vec{w}$, but I really just want to ask about concepts. I can do the algebra, certainly.</p> <p>I understand what it means for something to be in the Span of a set of vectors (it is a linear combination of the vectors). And I suspect that I know what it means to find all vectors $\vec{b}$ that are in $span \{\vec{u},\vec{v},\vec{w}\}$, where $\vec{u},\vec{v},\vec{w}$ are vectors. I believe that, given vectors $\vec{u},\vec{v},\vec{w}$, I need to set those vectors with some arbitrary vector $\vec{b}$ as the augmented column, with entries $b_1,b_2,b_3$, to form the augmented matrix. Then I need to reduce the augmented matrix to reduced echelon form, and, if necessary, find the values of $b_1,b_2,b_3$ that would make the system inconsistent (i.e. a nonsense row with zeroes in the coefficient columns of a row and a non-zero number in the augmented column of the same row). Then I would define $\vec{b}$ in terms of what I have in that column after row reduction, barring those values of $b_1,b_2$ and $b_3$ that would make a nonsense row.</p> <p>Does this seem sensible to you? </p>
lab bhattacharjee
33,337
<p>HINT:</p> <p>Use $\displaystyle I=\int_a^bf(x)\ dx=\int_a^bf(a+b-x)\ dx$</p> <p>$\displaystyle I+I=\int_a^bf(x)\ dx+\int_a^bf(a+b-x)\ dx=\int_a^b[f(x)+f(a+b-x)]\ dx$</p>
961,304
<p>I am studying a book and I am stagnating on what should be a straightforward proof:</p> <p>Show that if $X$ is compact, $V\subset X$ is open and $x\in V$, then there exists an open set $U$ in $X$ with $x\in U\subset \bar U\subset V$.</p> <p>I don't know how to find the appropriate set $U$. I am guessing you need to do something like taking the intersection with $V$ of a finite subcover of $X$ and then show that its closure is contained in $V$ ...</p> <p>Could someone nudge me in the right direction?</p>
orangeskid
168,051
<p>That would be the value $y$ such that the subsets of $[a,b]$</p> <p>$\{x \in [a,b] \ | \ f(x) \le y\}$ and $\{x \in [a,b] \ | \ f(x) \ge y\}$</p> <p>have the same measure (with some care).</p> <p>Obs: if $f$ is monotonic it will be $f(\frac{a+b}{2})$.</p>
3,122,989
<p>Let <span class="math-container">$A\in\mathbb{R}^{n\times n}$</span> and define conjugation by <span class="math-container">$GL_n$</span> on <span class="math-container">$\mathbb{R}^{n\times n}$</span> in the usual way (e.g. for all <span class="math-container">$A\in\mathbb{R}^{n\times n}$</span> and <span class="math-container">$T\in GL_n$</span>, <span class="math-container">$A\mapsto T^{-1}AT$</span>). Are there any ways to derive conditions on <span class="math-container">$T$</span> so that a given subset of the entries of <span class="math-container">$A$</span> will be invariant under conjugation? </p> <p>For example, let <span class="math-container">$$ A = \begin{bmatrix}a &amp; b\\ c &amp; d \end{bmatrix} $$</span> and suppose we want <span class="math-container">$b$</span> to be invariant under conjugation. Then we are looking for the subset <span class="math-container">$U \subset GL_n$</span> such that, for all <span class="math-container">$T \in U$</span>, <span class="math-container">$$ T^{-1}AT = \begin{bmatrix}a' &amp; b \\ c' &amp; d' \end{bmatrix}. $$</span> Due to the application from which this question arises, it is also enough to be able say whether or not <span class="math-container">$U$</span> consists of just the identity matrix.</p> <p>I would also appreciate any pointers to relevant literature.</p>
Dietrich Burde
83,966
<p>Let <span class="math-container">$T=\begin{pmatrix} t_1 &amp; t_2\cr t_3 &amp; t_4\end{pmatrix}$</span> with <span class="math-container">$\det(T)=1$</span>. Then <span class="math-container">$T^{-1}AT$</span> has upper right corner equal to <span class="math-container">$b$</span> if and only if <span class="math-container">$$ at_2t_4 + bt_4^2 - ct_2^2 - dt_2t_4=b. $$</span> This follows by a direct computation. Suppose that <span class="math-container">$c\neq 0$</span>. Then we can solve the quadratic equation for <span class="math-container">$t_2$</span> over the complex numbers. This will give, in general, two non-zero solutions, hence there are more than just the multiples of the identity fixing <span class="math-container">$b$</span> under conjugation. </p>
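<p>The displayed condition can be checked numerically against direct conjugation (a Python sketch; $t_4$ is chosen to force $\det T=1$, so the inverse of $T$ has the simple adjugate form):</p>

```python
def matmul2(X, Y):
    return [[X[0][0] * Y[0][0] + X[0][1] * Y[1][0],
             X[0][0] * Y[0][1] + X[0][1] * Y[1][1]],
            [X[1][0] * Y[0][0] + X[1][1] * Y[1][0],
             X[1][0] * Y[0][1] + X[1][1] * Y[1][1]]]

def upper_right_after_conjugation(A, t1, t2, t3):
    t4 = (1 + t2 * t3) / t1           # choose t4 so that det(T) = 1
    T = [[t1, t2], [t3, t4]]
    Tinv = [[t4, -t2], [-t3, t1]]     # inverse of a determinant-1 matrix
    return matmul2(Tinv, matmul2(A, T))[0][1], t4

a, b, c, d = 1.5, -0.4, 2.0, 0.8      # arbitrary test matrix A
A = [[a, b], [c, d]]

checks = []
for t1, t2, t3 in [(1.0, 0.3, -0.7), (2.0, -1.1, 0.4), (0.5, 0.9, 1.2)]:
    val, t4 = upper_right_after_conjugation(A, t1, t2, t3)
    formula = a * t2 * t4 + b * t4 ** 2 - c * t2 ** 2 - d * t2 * t4
    checks.append(abs(val - formula) < 1e-9)
```

The upper-right entry of $T^{-1}AT$ agrees with the quadratic expression in $t_2,t_4$ in every trial.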
814,899
<p>Suppose that $\{X_n\}$ is an independent sequence and $E[X_n]=0$. If $\sum \operatorname{Var}[X_n] &lt; \infty$, then $\sum X_n$ converges with probability $1$. Is independence a necessary condition here? I am thinking of a counterexample. The intuition behind the other assumptions is clear.</p>
ajotatxe
132,456
<p>As has been said, we need to assume that $A$ is connected. Then it is possible to assume WLOG that $V$ is connected; hence $f(V)$ is connected, i.e., an interval.</p> <p>Suppose that $f(V)$ is not open. Then there exists some $y\in f(V)$ (say, $f(x)=y$) such that for every $\epsilon&gt;0$ there exists $y_\epsilon\in(y-\epsilon,y+\epsilon)\setminus f(V)$. Since $f(V)$ is an interval, this forces $y=\sup f(V)$ or $y=\inf f(V)$; that is, $f$ attains an extremum at $x$.</p>
2,717,842
<p>I want to show that if $X$ is a locally connected topological space, $A\subseteq X$ is a subspace and $f:X \rightarrow A$ is continuous such that $f|_{A} = Id_{A}$, then $A$ must be locally connected as well.</p> <p>My progress so far:</p> <p>Take $U\subseteq A$ $A-$open and $x\in U$. Since $f^{-1} (U)$ is $X-$open and $x\in f^{-1} (U)$, there exists a connected, $X-$open subset $V\subseteq X$ such that $x\in V\subseteq f^{-1}(U)$.</p> <p>Now we have $x\in V\cap A \subseteq f^{-1}(U)\cap A = U$</p> <p>My guess is that $V\cap A$ should be connected, but I am unsure if this is correct.</p> <p>Any help would be appreciated!</p>
red whisker
833,796
<p>It is easier to instead show that components of open sets are open, which is equivalent to local connectedness. Let <span class="math-container">$r\colon X\to A$</span> be the retraction and suppose <span class="math-container">$N$</span> is an open set in <span class="math-container">$A$</span> and <span class="math-container">$C$</span> is a component of <span class="math-container">$N$</span>.</p> <p>Let <span class="math-container">$p\in N$</span> and <span class="math-container">$U = r^{-1}(N)$</span>. By continuity, <span class="math-container">$U$</span> is open and by local connectedness, there is a connected open <span class="math-container">$V$</span> such that <span class="math-container">$p\in V\subseteq U$</span>. Again by continuity, <span class="math-container">$r(V)$</span> is a connected set containing <span class="math-container">$p$</span>, hence <span class="math-container">$r(V)\subseteq C$</span>.</p> <p>Now, <span class="math-container">$p\in V\cap A\subseteq r(V)$</span> and <span class="math-container">$V\cap A$</span> is open in <span class="math-container">$A$</span>, therefore, <span class="math-container">$p\in V\cap A\subseteq C$</span>, hence <span class="math-container">$C$</span> is an open set.</p> <p>Therefore, the connected components of <span class="math-container">$N$</span> are open, hence <span class="math-container">$A$</span> is locally connected.</p>
2,347,995
<p>Not sure what I'm doing wrong.</p> <p>Here's my work:</p> <p>Expressing the first part $A \setminus (B\setminus C)$ using logical symbols:</p> <p>$A \land \neg(B \land \neg C)$ becomes</p> <p>$A \land \neg B\lor C$ (De Morgan's law)</p> <p>While the second expression $(A \setminus B) \cup (A \cap C)$ is </p> <p>$(A \land \neg B) \lor (A \land C)$ which becomes</p> <p>$A \land \neg B \lor A \land C$ (Associative Law)</p> <p>or $A \land \neg B \land C$</p> <p>How are these two expressions similar? Thanks for any help in advance! </p>
user217285
217,285
<p>The inequality is obvious for $x = 0$ and the left-hand side is even in $x$ so we may assume $x \in (0,1)$. Taylor's theorem (with the Lagrange remainder) implies that $$ \cos x = 1 - \frac{x^2}{2} + \frac{\cos(\xi)\,x^4}{24}$$ for some $\xi \in (0,x)$, so $$ \frac{1- \cos x}{x^2} = \frac12 - \frac{\cos(\xi)\,x^2}{24}.$$ Hence it is enough to show that $\cos(\xi)\,x^2 &lt; 6$ for any $\xi \in (0,x)$. But this is obvious because $\lvert\cos \xi\rvert \le 1$ and $x^2 &lt; 1$.</p>
966,420
<p>How can one write $x$ is a factor of $y$ (as a constraint)? I am also not sure what else to add to meet the question quality requirements. </p>
Traklon
103,134
<p>Well $|$ is used to describe divisibility so I think it meets our requirements.</p> <p>For example, $3|6$.</p> <p>$x$ is a factor of $y$ $\Leftrightarrow$ $x|y$. </p>
2,106,983
<p><strong>Question</strong></p> <p>How to solve $\tan x-\cot x=2,$</p> <p>given that $x$ lies in $\left[\frac{-\pi} 2,\frac \pi 2 \right]$.</p> <p><em>My steps so far</em></p> <p>I converted cot into tan to obtain $\frac{\tan^2 x-1}{\tan x}=2$.</p> <p>Then I multiplied both sides by $\tan{x}$ to get $\tan^2 x-2\tan x-1=0$.</p> <p>From there I don't know where to go.</p>
Fawad
369,983
<p>$\tan x - \cot x =2$</p> <p>$\dfrac{\sin x }{\cos x} - \dfrac{\cos x}{\sin x }=2$</p> <p>$\dfrac{\sin^2 x - \cos ^2 x}{\sin x \cos x } =2$</p> <p>$-\cos 2x = 2\sin x \cos x$</p> <p>$-\cos 2x=\sin2x$</p> <p>$\tan 2x=-1$</p> <p>$2x= n\pi -\frac{\pi}{4}$</p> <p>Put in values of $n$ to get the values of $x$ in the required range: $n=0$ gives $x=-\frac{\pi}{8}$ and $n=1$ gives $x=\frac{3\pi}{8}$.</p>
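<p>A quick numeric check that these are indeed roots in the required range (a Python sketch):</p>

```python
import math

def g(x):
    # left-hand side tan(x) - cot(x); defined where sin(x)cos(x) != 0
    return math.tan(x) - 1 / math.tan(x)

roots = [n * math.pi / 2 - math.pi / 8 for n in (0, 1)]   # -pi/8 and 3*pi/8
```

Both candidates lie in $[-\pi/2,\pi/2]$ and satisfy $\tan x-\cot x=2$ to machine precision.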
1,508,753
<p>Show that for $x,y\in\mathbb{R}$ with $x,y\geq 0$, the arithmetic mean-quadratic mean inequality $$\frac{x+y}{2}\leq \sqrt{\frac{x^2+y^2}{2}}$$ holds.</p> <p>After my calculations I'll get: </p> <p>$$-x^2+2xy-y^2$$ which can't be $\leq 0$.</p>
Graham Kemp
135,106
<p>Where $I$ is an interval sufficiently close to the limit point, a limit will exist if:</p> <p>Right Sided $\forall\varepsilon &gt; 0\;\exists \delta &gt;0 \;\forall x \in I \;(0 &lt; x -2 &lt; \delta \Rightarrow \lvert \frac 1{1-x} +1 \rvert&lt;\varepsilon)$</p> <p>Left Sided $\forall\varepsilon &gt; 0\;\exists \delta &gt;0 \;\forall x \in I \;(0 &lt; 2-x &lt; \delta \Rightarrow \lvert \frac 1{1-x} +1 \rvert&lt;\varepsilon)$</p> <p>So pick an arbitrarily small number $\varepsilon$ and show the existence of some $\delta$ so that when $x$ lies within the restriction, then $\lvert \frac 1{1-x} +1 \rvert&lt;\varepsilon$.</p> <p>Or in simpler terms: that as the argument approaches the limit point the function converges towards the limit value.</p>
1,508,753
<p>Show that for $x,y\in\mathbb{R}$ with $x,y\geq 0$, the arithmetic mean-quadratic mean inequality $$\frac{x+y}{2}\leq \sqrt{\frac{x^2+y^2}{2}}$$ holds.</p> <p>After my calculations I'll get: </p> <p>$$-x^2+2xy-y^2$$ which can't be $\leq 0$.</p>
Hamed
191,425
<p>Define $f(x) = \frac{1}{1-x}$. We want to show that $f(x)$ is continuous at $x=2$, where it takes the value $-1$. With $\epsilon-\delta$ we need to show that for any $\epsilon&gt;0$ there exists $\delta&gt;0$ such that $|x-2|&lt;\delta$ yields $|f(x)+1|&lt;\epsilon$. This much is definition. Now $$f(x)+1 = \frac{1}{1-x}+1 = \frac{x-2}{x-1}$$ I claim that for a given $\epsilon$, it is enough to choose $\delta = \frac{\epsilon}{1+\epsilon}$. Let's check if it works: Note that if $|x-2|&lt;\delta$, then $-\delta&lt;x-2&lt;\delta$, so $-\delta+1&lt;x-1&lt;\delta+1$. But my $\delta&lt;1$, so $$ \frac{1}{1+\delta}&lt;\frac{1}{x-1}&lt;\frac{1}{1-\delta}\Longrightarrow \frac{1}{|x-1|}&lt;\frac{1}{1-\delta} $$ So we have $$ |f(x)+1| = \left|\frac{x-2}{x-1}\right|&lt;\frac{\delta}{1-\delta}=\frac{\frac{\epsilon}{1+\epsilon}}{1-\frac{\epsilon}{1+\epsilon}}=\epsilon $$ Done! Now since the function is continuous at $x=2$, its limit there is its value, namely $-1$. (I intentionally took a longer approach to make the $\epsilon-\delta$ method as familiar as possible.)</p> <hr> <p>If you are wondering how in the world I came up with this $\delta$, here's how you reverse engineer it (I have to repeat parts of the previous argument, sorry!): We think in the reverse direction: suppose we already know that $\delta$ exists; then what can it be? Whatever it is, we should only care about small values of $\delta$, since the limit only cares about small neighborhoods. So let's assume our unknown $\delta$ is less than one. Then if $\delta$ exists (as we are assuming) we must have (similar to above) $$ \left|\frac{x-2}{x-1}\right|&lt;\frac{\delta}{1-\delta} $$ So it only remains to choose $\delta$ so that the right-hand side is actually $\epsilon$ or less. Well, solve $\frac{\delta}{1-\delta}=\epsilon$ and find $\delta=\frac{\epsilon}{1+\epsilon}$.</p>
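<p>The choice $\delta = \epsilon/(1+\epsilon)$ can be spot-checked numerically (a Python sketch sampling points with $|x-2|&lt;\delta$ for several values of $\epsilon$):</p>

```python
def f(x):
    return 1 / (1 - x)

def works(eps, samples=999):
    """Check |f(x) + 1| < eps on a grid of points with |x - 2| < delta."""
    delta = eps / (1 + eps)
    for k in range(1, samples):
        x = 2 - delta + 2 * delta * k / samples   # strictly inside the window
        if not abs(f(x) + 1) < eps:
            return False
    return True
```

Every sampled point stays within the required tolerance, as the algebraic bound predicts.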
48,679
<p>I've been going through Fermat's proof that a rational square is never a congruent number, and I've stumbled upon something I can't see the reason for. Fermat says: ''If a square is made up of a square and the double of another square, its side is also made up of a square and the double of another square.'' I'm having difficulty understanding why this is. Can anyone help me understand it?</p>
liuyao
1,189
<p>In boundary value problems, physicists consider infinity (in space and in time) to be part of the boundary. Mathematicians know there's a distinction between compact and non-compact spaces.</p>
48,679
<p>I've been going through Fermat's proof that a rational square is never a congruent number, and I've stumbled upon something I can't see the reason for. Fermat says: ''If a square is made up of a square and the double of another square, its side is also made up of a square and the double of another square.'' I'm having difficulty understanding why this is. Can anyone help me understand it?</p>
Gene S. Kopp
8,410
<p>The use of <a href="http://en.wikipedia.org/wiki/Random_matrix">random matrix theory</a> to model energy levels of heavy nuclei and other physical systems. See also the following <a href="http://www.williams.edu/go/math/sjmiller/public_html/ntrmt10/handouts/general/Hayes_spectrum_riemannium.pdf">historical piece</a> and the pictures therein: There is striking statistical evidence that the eigenvalues of large random self-adjoint matrices, the energy levels of heavy nuclei, and the normalized zeros of $L$-functions (!) are all spaced about the same.</p>
48,679
<p>I've been going through Fermat's proof that a rational square is never a congruent number, and I've stumbled upon something I can't see the reason for. Fermat says: ''If a square is made up of a square and the double of another square, its side is also made up of a square and the double of another square.'' I'm having difficulty understanding why this is. Can anyone help me understand it?</p>
Jamahl Peavey
21,268
<p>The Yang-Mills equations are experimentally well-supported but still lack a rigorous mathematical foundation. At the Clay Mathematics Institute, the mass gap problem is worth one million dollars.</p>
1,197,056
<p>I tried to evaluate the following limits but I just couldn't succeed; basically, I can't use L'Hôpital to solve these... </p> <p>For the second limit I tried to transform it into $e^{\frac{2n\sqrt{n+3}\ln\left(\frac{3n-1}{2n+3}\right)}{(n+4)\sqrt{n+1}}}$ but still with no success...</p> <p>$$\lim_{n \to \infty } \frac{2n^2-3}{-n^2+7}\cdot\frac{3^n-2^{n-1}}{3^{n+2}+2^n}$$</p> <p>$$\lim_{n \to \infty } \left(\frac{3n-1}{2n+3}\right)^{\frac{2n\sqrt{n+3}}{(n+4)\sqrt{n+1}}}$$</p> <p>Any suggestions/help? :)</p> <p>Thanks</p>
Alex
38,873
<p>For the second one, look at the base and the exponent separately: the lower-order terms become negligible, so the base tends to $\frac{3}{2}$ and the exponent tends to $2$, and you should get $\left(\frac{3}{2}\right)^2$. </p>
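<p>A quick numerical sanity check of both limits from the question (my own sketch, plain Python; the first sequence is evaluated with exact big-integer arithmetic, the second with floats): the first tends to $-\frac{2}{9}$ and the second to $\left(\frac{3}{2}\right)^2$.</p>

```python
# Evaluate both sequences at a large n and compare with the claimed limits.
import math
from fractions import Fraction

def first_term(n: int) -> Fraction:
    # (2n^2-3)/(-n^2+7) * (3^n - 2^(n-1))/(3^(n+2) + 2^n), computed exactly
    return (Fraction(2 * n * n - 3, -n * n + 7)
            * Fraction(3**n - 2**(n - 1), 3**(n + 2) + 2**n))

def second_term(n: int) -> float:
    base = (3 * n - 1) / (2 * n + 3)
    expo = 2 * n * math.sqrt(n + 3) / ((n + 4) * math.sqrt(n + 1))
    return base ** expo

print(float(first_term(200)))  # close to -2/9
print(second_term(10**6))      # close to (3/2)^2 = 2.25
```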
3,795,655
<p>Let <span class="math-container">$H(\mu|\nu)$</span> be the relative entropy (or Kullback-Leibler divergence) defined in the usual way. I am looking for a proof of, or a reference for, the following fact: if <span class="math-container">$\mu,\nu$</span> are two-dimensional probability measures with marginals <span class="math-container">$\mu_1,\mu_2$</span> and <span class="math-container">$\nu_1,\nu_2$</span>, respectively, then <span class="math-container">$$ H(\mu|\nu)\geq H(\mu_1|\nu_1) + H(\mu_2|\nu_2). $$</span></p> <p>Any help appreciated. Thanks.</p>
E-A
499,337
<p>The answer below is false; I need to think more. Most likely the claim is false for arbitrary reference measures; I know it to be true for suitably chosen Gaussians and uniforms.</p> <blockquote> <p>This is an immediate consequence of the chain rule and the joint convexity of KL divergence. You can look at the proofs of the individual claims here: <a href="https://homes.cs.washington.edu/%7Eanuprao/pubs/CSE533Autumn2010/lecture3.pdf" rel="nofollow noreferrer">https://homes.cs.washington.edu/~anuprao/pubs/CSE533Autumn2010/lecture3.pdf</a>; the chain rule follows from writing out the joint entropy, and the convexity of KL divergence follows from the log-sum inequality. So the chain rule says: <span class="math-container">$$D(\mu(x,y) || \nu(x,y)) = D(\mu_1(x) || \nu_1(x)) + E_{\mu_1(x)} [D(\mu(y | x) || \nu(y | x))] $$</span> and convexity says that <span class="math-container">$$ E_{\mu_1(x)} [D(\mu_2(y | x) || \nu_2(y | x))] \leq D( E_{\mu_1(x)} [\mu(y | x)] || E_{\mu_1(x)} [\nu(y | x)] ) = D(\mu_2(y) || \nu_2(y) ) $$</span> P.S. Should this be called super-additivity? From this, you can derive the subadditivity of (differential) entropy by taking <span class="math-container">$\nu$</span> to be uniform on a set that contains the support of your <span class="math-container">$\mu$</span>s (Gaussian for differential entropy).</p> </blockquote>
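<p>For what it's worth, the superadditivity does hold when the reference measure is a product, $\nu=\nu_1\otimes\nu_2$, since then $D(\mu\|\nu) = D(\mu\|\mu_1\otimes\mu_2) + D(\mu_1\|\nu_1) + D(\mu_2\|\nu_2)$. A small discrete sanity check (my own sketch; the example numbers are made up):</p>

```python
# Check superadditivity of KL divergence for a PRODUCT reference measure
# on a 2x2 grid: D(mu || nu1 x nu2) >= D(mu1 || nu1) + D(mu2 || nu2).
import math

def kl(p, q):
    # discrete KL divergence, with the convention 0*log(0/q) = 0
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

mu = [[0.4, 0.1], [0.2, 0.3]]       # joint distribution mu(x, y)
nu1, nu2 = [0.5, 0.5], [0.7, 0.3]   # product reference nu = nu1 x nu2

mu1 = [sum(row) for row in mu]            # first marginal of mu
mu2 = [sum(col) for col in zip(*mu)]      # second marginal of mu
joint_mu = [m for row in mu for m in row]
joint_nu = [a * b for a in nu1 for b in nu2]

lhs = kl(joint_mu, joint_nu)
rhs = kl(mu1, nu1) + kl(mu2, nu2)
print(lhs, rhs)  # lhs >= rhs
```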
625,746
<p>The $n$th term of a series is given by $T(n)=T(n-1) + T(n-1) \times C$, where $C$ is a given constant, $T(1)=A$, and $n \ge 2$. I need to tell whether its value will be at least $G$ at or before its $m$th term.</p> <p>EXAMPLE: say the first term $A$ is $2$ and $C$ is also $2$, and we need to check whether its value is at least $10 (=G)$ at or before the 2nd term.</p> <p>Then the answer should be No.</p> <p>I just need to check whether this is possible, but without calculating the values, since all the variables can be very large (say of order $10^9$). Can anyone help?</p>
Thanos Darkadakis
105,049
<p>$T(n)=T(n-1)+T(n-1)*C$</p> <p>$T(n)=(1+C)*T(n-1)$</p> <p>$T(n)=(1+C)^{n-1}*T(1)$</p> <p>$T(n)=(1+C)^{n-1}*A$</p> <p>Do you need something more?</p>
1,129,567
<p>Using the substitution $x=\cosh (t)$ or otherwise, find $$\int\frac{x^3}{\sqrt{x^2-1}}dx$$ The correct answer is apparently $$\frac{1}{3}\sqrt{x^2-1}(x^2+2)$$ I seem to have gone very wrong somewhere; my answer is way off, can someone explain how to get this answer to me.</p> <p>Thanks.</p> <p>My working: $$\int\frac{\cosh^3t}{\sinh^2t}dt$$ $$u=\sinh t$$ $$\int\frac{1+u^2}{u^2}du$$ $$\frac{-1}{u}+u$$ $$\frac{-1}{\sinh t}+\sinh t$$ $$\frac{-1}{\sqrt{x^2-1}}+\sqrt{x^2-1}$$</p> <p>^my working, I'm pretty sure this is very wrong though.</p> <p>Edit: I've spotted my error. On the first line it should be $$\int \cosh^3t \, dt$$ </p> <p>not</p> <p>$$\int\frac{\cosh^3t}{\sinh^2t}dt$$</p>
graydad
166,967
<p>If you want to do this by using the substitution $x = \cosh(t)$ you also need $dx = \sinh(t)dt$, which means $$\int \frac{x^3}{\sqrt{x^2-1}}dx = \int \frac{\cosh^3(t)\sinh(t)dt}{\sqrt{\cosh^2(t)-1}}$$ and now use $\cosh^2(t) = 1+\sinh^2(t)$ in the numerator to get $$\int \frac{\cosh^3(t)\sinh(t)dt}{\sqrt{\cosh^2(t)-1}} = \int \frac{(1+\sinh^2(t))\cosh(t)\sinh(t)dt}{\sqrt{\cosh^2(t)-1}} \\ =\int \frac{\cosh(t)\sinh(t)+\cosh(t)\sinh^3(t)}{\sqrt{\cosh^2(t)-1}}dt \\ = \int \frac{\cosh(t)\sinh(t)}{\sqrt{\cosh^2(t)-1}}dt+\int \frac{\cosh(t)\sinh^3(t)}{\sqrt{\cosh^2(t)-1}}dt$$ The first half of that integral $\int \frac{\cosh(t)\sinh(t)}{\sqrt{\cosh^2(t)-1}}dt$ is extremely easy if you make the substitution $u = \cosh^2(t)-1$. For the second integral, use integration by parts with $$u = \sinh^2(t), \quad du = 2\sinh(t)\cosh(t)dt, \quad dv = \frac{\cosh(t)\sinh(t)}{\sqrt{\cosh^2(t)-1}}dt$$ where $v$ is the first half of the integral you already solved. Hence, $$\int \frac{\cosh(t)\sinh^3(t)}{\sqrt{\cosh^2(t)-1}}dt = \int \frac{\cosh(t)\sinh(t)}{\sqrt{\cosh^2(t)-1}}\sinh^2(t)dt \\ = uv-\int v du \\ = \sinh^2(t) \cdot (\text{Integral you already solved})-\int 2\sinh(t)\cosh(t)\cdot (\text{Integral you already solved})dt$$ For the new integral obtained through integration by parts, you should again be able to use $u =\cosh^2(t)-1$ for a pretty easy result.</p>
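<p>As a quick numerical cross-check of the book's answer (my own sketch): differentiating the claimed antiderivative $F(x)=\frac{1}{3}\sqrt{x^2-1}\,(x^2+2)$ by a central difference should reproduce the integrand $x^3/\sqrt{x^2-1}$ at any sample point with $x&gt;1$.</p>

```python
import math

def F(x):
    # the claimed antiderivative
    return math.sqrt(x * x - 1) * (x * x + 2) / 3

def integrand(x):
    return x**3 / math.sqrt(x * x - 1)

# central-difference derivative of F versus the integrand, at a few points
h = 1e-6
errors = [abs((F(x + h) - F(x - h)) / (2 * h) - integrand(x))
          for x in (1.5, 2.0, 3.0, 5.0)]
print(errors)  # all tiny
```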
2,309,527
<p>I was sincerely hoping someone could explain to me how, for the function below, I would determine its standard matrix and whether or not the function is 1-to-1 and onto. </p> <blockquote> <p>The linear function $T:\mathbb{R}^2 \to \mathbb{R}^3$ is given by $$ T(x,y) = \begin{pmatrix} x-y \\ 5x+3y \\ 2x+4y \end{pmatrix} $$</p> </blockquote> <ol> <li>I believe the standard matrix here would be $\begin{bmatrix} 1 &amp; -1 \\ 5 &amp; 3 \\ 2 &amp; 4\end{bmatrix}$, because multiplying that matrix with $(x,y)^T$ results in that function. I'm not sure about this though. </li> <li>As for the 1-to-1 question: I know that the function is 1-to-1 if any two different $x \in \mathbb{R}^2$ are mapped to different $y \in \mathbb{R}^3$. But how do I test and prove this? </li> <li>As for onto: a function is onto when its image equals its co-domain, so here that would mean every $y \in \mathbb{R}^3$ equals $T(x)$ for some $x$. But yet again, how would I test/prove this for this function? </li> </ol> <p>Some very basic questions that I tried googling, but the explanations I found so far did not help much unfortunately. For example, I found explanations for functions like $f(x,y) = x-y$, but none like the one I have here. I also did a lot of looking in the slides from the school course, but that didn't help either. </p>
Martin Argerami
22,857
<p>All the operators you list are finite-rank, so compact. </p> <p>More generally, if you define $$ Tx=(a_1x_1,a_2x_2,\ldots) $$ then $T$ is compact if and only if $\lim_{n\to\infty}a_n=0$. </p>
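<p>The criterion can be illustrated numerically (my sketch; the finite <code>horizon</code> below is only a stand-in for the supremum over an infinite tail): the distance in operator norm from $T$ to its rank-$N$ truncation is $\sup_{n&gt;N}|a_n|$, which shrinks to $0$ for $a_n=1/n$ but stays at $1$ for $a_n=1$.</p>

```python
# Finite-rank truncation errors for the diagonal operator T x = (a_n x_n):
# ||T - T_N|| = sup_{n > N} |a_n|, approximated over a finite horizon.
def tail_sup(a, N, horizon=100_000):
    return max(abs(a(n)) for n in range(N + 1, horizon + 1))

compact = [tail_sup(lambda n: 1.0 / n, N) for N in (10, 100, 1000)]
identity_like = [tail_sup(lambda n: 1.0, N) for N in (10, 100, 1000)]
print(compact)        # shrinks towards 0: T is a norm limit of finite-rank operators
print(identity_like)  # stuck at 1.0: the identity is not compact
```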
3,975,162
<p>Let <span class="math-container">$(X, \mathcal{A}, µ)$</span> be a measure space. If <span class="math-container">$\mu(X) &lt; \infty$</span> and <span class="math-container">$(A_n)_{n \in \mathbb{N}^*}$</span>, <span class="math-container">$A$</span> are measurable subsets of <span class="math-container">$X$</span>, <br /> show that if <span class="math-container">$\mu(A\bigtriangleup A_n)\rightarrow 0$</span> then <span class="math-container">$μ(A_n)\rightarrow\mu(A)$</span>.</p> <p>What I have tried so far is</p> <p>use that <span class="math-container">$A\bigtriangleup A_n = (A\setminus A_n)\cup(A_n\setminus A)$</span> <br /> and since <span class="math-container">$(A\setminus A_n)\cap(A_n\setminus A)= \emptyset$</span> then <span class="math-container">$\mu(A\bigtriangleup A_n) = \mu(A\setminus A_n)+\mu(A_n\setminus A) \rightarrow 0$</span>; <br /> now I am trying to contain <span class="math-container">$μ(A_n)$</span> in an inequality where both sides converge to <span class="math-container">$\mu(A)$</span>.</p> <p>EDIT: thanks to Thorgott's point, both <span class="math-container">$\mu(A\setminus A_n)$</span> and <span class="math-container">$\mu(A_n\setminus A)$</span> converge to <span class="math-container">$0$</span>; since <span class="math-container">$A \subseteq (A\setminus A_n) \cup A_n$</span> and <span class="math-container">$A_n \subseteq (A_n\setminus A) \cup A$</span>, we get <span class="math-container">$|\mu(A_n) - \mu(A)| \leq \mu(A\bigtriangleup A_n) \rightarrow 0$</span>.</p>
Stinking Bishop
700,480
<p>If <span class="math-container">$y$</span> divides <span class="math-container">$(y+1)^2=y^2+2y+1$</span>, knowing that <span class="math-container">$y$</span> already divides the terms <span class="math-container">$y^2$</span> and <span class="math-container">$2y$</span>, we conclude that <span class="math-container">$y\mid 1$</span>. Thus, <span class="math-container">$y=\pm 1$</span>.</p>
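<p>A brute-force check of this conclusion over a window of integers (my addition): the only nonzero integers $y$ dividing $(y+1)^2$ are $\pm 1$.</p>

```python
# Scan nonzero integers in [-1000, 1000] and keep those y with y | (y+1)^2.
solutions = {y for y in range(-1000, 1001) if y != 0 and (y + 1) ** 2 % y == 0}
print(solutions)  # {1, -1}
```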
3,975,162
<p>Let <span class="math-container">$(X, \mathcal{A}, µ)$</span> be a measure space. If <span class="math-container">$\mu(X) &lt; \infty$</span> and <span class="math-container">$(A_n)_{n \in \mathbb{N}^*}$</span>, <span class="math-container">$A$</span> are measurable subsets of <span class="math-container">$X$</span>, <br /> show that if <span class="math-container">$\mu(A\bigtriangleup A_n)\rightarrow 0$</span> then <span class="math-container">$μ(A_n)\rightarrow\mu(A)$</span>.</p> <p>What I have tried so far is</p> <p>use that <span class="math-container">$A\bigtriangleup A_n = (A\setminus A_n)\cup(A_n\setminus A)$</span> <br /> and since <span class="math-container">$(A\setminus A_n)\cap(A_n\setminus A)= \emptyset$</span> then <span class="math-container">$\mu(A\bigtriangleup A_n) = \mu(A\setminus A_n)+\mu(A_n\setminus A) \rightarrow 0$</span>; <br /> now I am trying to contain <span class="math-container">$μ(A_n)$</span> in an inequality where both sides converge to <span class="math-container">$\mu(A)$</span>.</p> <p>EDIT: thanks to Thorgott's point, both <span class="math-container">$\mu(A\setminus A_n)$</span> and <span class="math-container">$\mu(A_n\setminus A)$</span> converge to <span class="math-container">$0$</span>; since <span class="math-container">$A \subseteq (A\setminus A_n) \cup A_n$</span> and <span class="math-container">$A_n \subseteq (A_n\setminus A) \cup A$</span>, we get <span class="math-container">$|\mu(A_n) - \mu(A)| \leq \mu(A\bigtriangleup A_n) \rightarrow 0$</span>.</p>
Bill Dubuque
242
<p>Since you tagged it &quot;polynomials&quot; we highlight their key role here.</p> <p>Note that <span class="math-container">$\ y\mid y^2\!+\!1\iff y\mid (\overbrace{y^2\!+\!1\bmod y}^{\large \color{#c00}1})\!\iff y\mid \color{#c00}1$</span></p> <p><strong>Generally</strong> <span class="math-container">$\ y\mid f(y)\iff y\mid \overbrace{f(y)\bmod y}^{\large \color{#c00}{f(0)}}\iff y\mid \color{#c00}{f(0)}\,$</span> via <a href="https://math.stackexchange.com/a/94729/242">Polynomial <span class="math-container">$\rm\color{#c00}{Remainder}$</span> Theorem</a>, where <span class="math-container">$\,f(x)\,$</span> is any polynomial with <em>integer</em> coefficients.</p>
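<p>To illustrate the general fact with a made-up example (my addition): take $f(y)=y^3+4y+6$, so $f(0)=6$, and $y\mid f(y)$ should hold exactly when $y\mid 6$.</p>

```python
# f(y) = y*(y^2 + 4) + 6, so y | f(y) iff y | f(0) = 6.
def f(y):
    return y**3 + 4 * y + 6

divides_f = {y for y in range(-50, 51) if y != 0 and f(y) % y == 0}
divisors_of_6 = {y for y in range(-50, 51) if y != 0 and 6 % y == 0}
print(sorted(divides_f))  # [-6, -3, -2, -1, 1, 2, 3, 6]
```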
137,006
<p><code>FunctionDomain[(x^2-x-2)/(x^2+x-6),x]</code> </p> <p>gives </p> <blockquote> <p><code>x &lt; -3 || -3 &lt; x &lt; 2 || x &gt; 2</code>.</p> </blockquote> <p>However, when I factor the numerator and denominator the result is different:</p> <p><code>FunctionDomain[((x - 2) (x + 1))/((x - 2) (x + 3)),x]</code></p> <blockquote> <p><code>x &lt; -3 || x &gt; -3</code></p> </blockquote> <p>As I understand things, <em>Mathematica</em> first of all simplifies the argument and then applies the function. But both expressions simplify to <code>(x + 1)/(x + 3)</code>.</p> <p>Why is the output different?</p>
Szabolcs
12
<p>Evaluate <code>((x - 2) (x + 1))/((x - 2) (x + 3))</code> and see what it gives. It <em>automatically</em> simplifies to <code>(1 + x)/(3 + x)</code>. The second input you show is effectively</p> <pre><code>FunctionDomain[(1 + x)/(3 + x), x] </code></pre> <p>The different results you get are not due to <code>FunctionDomain</code>, but due to the different inputs which are passed to it.</p> <p>You may wonder if the fully automatic simplification of this fraction should be considered incorrect. There is a closely related discussion here, where I argued that such simplifications are in fact more useful than harmful and quite reasonable:</p> <ul> <li><a href="https://mathematica.stackexchange.com/q/65624/12">A one line proof that one is zero using Mathematica 10</a></li> </ul> <hr> <p><strong>Update</strong></p> <p>Per @ChipHurst's comment, wrapping the argument in <code>Hold</code> works too:</p> <pre><code>FunctionDomain[Hold[((x - 2) (x + 1))/((x - 2) (x + 3))], x] (* x &lt; -3 || -3 &lt; x &lt; 2 || x &gt; 2 *) </code></pre> <p>This appears to be an undocumented extension of <code>FunctionDomain</code>. Only <code>Hold</code> works, not other functions with the <code>HoldAll</code> attribute (not even <code>HoldForm</code>).</p>
1,606,709
<p>I am studying Kähler differentials and I tried to understand the geometric motivation behind this setting. What I do not understand is the role the diagonal plays in all this theory. The cotangent sheaf is later defined in terms of the diagonal map. Why is this geometrically interesting? I tried to write a short introduction to Kähler differentials to make the geometric nature more accessible, but I do not know if it makes sense. Here it is: </p> <blockquote> <p>Differential $ 1 $-forms are linear transformations $ \omega_{p}:T_p X\to K $ assigning an element in $ K $ to a tangent vector of the tangent space $ T_p X $ of a point $ p\in X $ in some differential manifold $ X $. Differential $ 1 $-forms can be viewed as <em>infinitesimal</em> direction vectors $ \triangle p $. In physical terms this means that the scalar $ \omega_p(\triangle t)\in K $, with $ \triangle t $ a tangent vector, represents the <em>work</em> required to move from $ x_i $ to $ x_{i+1} $ with $ p\in (x_i,x_{i+1}) $ along some curve. In other words, differential forms are cotangent vectors over some field $ K $, which give information about the work which is locally required to move along some curve. However, they can be generalized and captured by sheaf theory. To this end, we first observe that $ \triangle p $ is related to Taylor expansions. Indeed, let $ f $ be a smooth function, that is $ f\in C^{\infty}(\mathbb{C}) $, on a differential manifold $ X$ and let $\mathfrak{J}$ be the ideal of smooth functions vanishing at the point $p\in X$. The zeroth-order part of the Taylor series of a smooth function $f$ is the value of $f$ at the point $ p $, let us say $ f(p)=c $, so that $ f-c\in\mathfrak{J}$. Now the first-order derivatives of $f-c$ correspond to the first-order terms in the Taylor series and these are given by the image of $f$ in $\mathfrak{J}/\mathfrak{J}^2$.
Let us denote this map by $ d(f) $, with $ d: \mathcal{O}_X\rightarrow \mathfrak{J}/\mathfrak{J}^2 $, where $ \mathcal{O}_X $ denotes the ring of smooth functions on $ X $. Moreover, if $ f $ is constant (that is, a fixed value), then $ d(f)=0 $. Another important observation is that $ \triangle p$ is required to be nonzero, as there is no direction available for the zero vector. But $ \triangle p=0 $ is satisfied if and only if the two endpoints of the tangent vector $ \triangle p $ are chosen to be the same, which happens if and only if the point $ p\in X $ corresponds to an element on the diagonal of $ X\times X $. So we demand that we only consider elements in $ X\times X $ vanishing on the diagonal (or in the complement of the diagonal).</p> </blockquote> <p>Summing up, my two questions are the following: </p> <p>1) What geometric interpretation does the diagonal have in this context? </p> <p>2) Higher derivations seem to me to be a generalization of Kähler differentials, but what is their motivation or geometric nature (in analogy to differential geometry)? I cannot see any connection between Kähler differentials and higher derivations. </p>
Christian Blatter
1,303
<p>Begin by investigating the sequence $$x_0:=0,\quad x_{k+1}:=\sqrt{{1\over2}+x_k}\quad(k\geq0)\ .$$ It has a certain limit $\xi$. Knowing $\xi$, draw conclusions about the limit $$\lim_{n\to\infty}\&gt;\prod_{k=1}^n x_k\ .$$ </p>
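<p>Following the hint numerically (my own sketch): the fixed point satisfies $\xi^2=\frac12+\xi$, i.e. $\xi=\frac{1+\sqrt3}{2}\approx 1.366&gt;1$, so the partial products blow up.</p>

```python
import math

# Iterate x_{k+1} = sqrt(1/2 + x_k) from x_0 = 0 and track partial products.
x = 0.0
partial_product = 1.0
for k in range(60):
    x = math.sqrt(0.5 + x)
    partial_product *= x

xi = (1 + math.sqrt(3)) / 2   # the positive root of xi^2 = 1/2 + xi
print(x, xi)                  # the iterates converge to xi
print(partial_product)        # grows without bound, since xi > 1
```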
2,804,074
<p>Hello, I am self-teaching foundational math and thinking about the union of a set. Its definition assumes the set to have only elements that are also sets, or else it would break. But I started thinking: is the empty set in fact an element of any mathematical object, like, say, the integer n? In that case taking the union U {1,{2,3}} would yield {2,3}. Do I understand this correctly? Also, since the integers can be constructed as sets of empty sets etc., the empty set would be an element of any integer. But does this extend to any mathematical object?</p> <p>My question boils down to: does taking the union of a set imply that the set is composed of sets, or do we allow the empty set to be the result when asking for the elements of any mathematical object that is not an explicit set?</p>
Ishan Rai
458,220
<p>I really think the answer depends on <strong>n</strong> more than <strong>x</strong>: since it involves <strong>frac(x)</strong>, that value doesn't change even as <strong>x -&gt; infinity</strong>.</p> <p>Before giving an answer, take a look at this; maybe it gives you a hint. Sometimes graphing can actually help a lot.</p> <p><a href="https://www.desmos.com/calculator/ixvutwqnvz" rel="nofollow noreferrer">https://www.desmos.com/calculator/ixvutwqnvz</a></p> <p>Now that the question has been edited to say n-&gt;infinity:</p> <p>{x}^n will tend to zero as n-&gt;infinity due to the fact that {x}&lt;1. And hence you did get the right answer.</p>
3,247,841
<p>I'm working on an algorithm to colour a map drawn in an editor using 4 colours, as a visual demonstration of the four colour theorem. However, my (imperfect) algorithm was able to colour all maps except this one, which I also struggled to colour after giving it a go myself. I was also unable to collapse it into an 'untangled' graph, so it's possible there's some illegality about it I've not fully understood (or I'm just bad at graph theory). I'd appreciate any help with solving this, and if possible an explanation of/link to a good algorithm for solving problems of this style.</p> <p>Here's the map:</p> <p><a href="https://i.stack.imgur.com/f5yii.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/f5yii.jpg" alt="The map"></a></p>
Sudix
470,072
<p>One possible algorithm would be the following:</p> <ol> <li>Take the map, and turn it into a graph.</li> <li>Delete all nodes with degree <span class="math-container">$&lt;4$</span>.</li> <li>Turn the graph into a clause set (a special form of a logical formula) by doing the following: <br> a. For every edge <span class="math-container">$A\to B$</span> (where <span class="math-container">$A,B$</span> are vertices of the graph), we add the formula <span class="math-container">$\lnot \left((A_0\Leftrightarrow B_0) \land(A_1\Leftrightarrow B_1) \right)$</span> as a clause into the clause set.</li> <li>We run DPLL (or the simpler version, Davis-Putnam) on this clause set.</li> <li>DPLL will give us a satisfying model. The color (where our colors are <span class="math-container">$0,1,2,3$</span>) of a node <span class="math-container">$A$</span> is given by our model as <span class="math-container">$A= A_0 + 2\cdot A_1$</span></li> <li>Determine for every deleted node a possible coloring using the model.</li> </ol> <p>The underlying idea is the following:<br> Every border in the map says as much as "the bordering areas mustn't be equal" (in terms of color).</p> <p>By letting our colors be <span class="math-container">$0,1,2,3$</span>, we can represent each color in binary as <span class="math-container">$2^0\cdot z_0 + 2^1 \cdot z_1$</span>. <br> As two numbers are equal iff all their bits are equal, we get for every border in the map between two areas <span class="math-container">$A,B$</span> the logic formula: <span class="math-container">$$\lnot \left((A_0\Leftrightarrow B_0) \land(A_1\Leftrightarrow B_1) \right)$$</span></p> <p>Now a 4-coloring is exactly a coloring where this formula holds for every border.</p>
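<p>For sanity-checking the encoding on small graphs, a direct backtracking search over the colors $\{0,1,2,3\}$ is often enough (my own sketch, not DPLL; the wheel graph below is my own test case):</p>

```python
# Brute-force 4-coloring by backtracking, usable as a reference answer
# when debugging the SAT/DPLL pipeline described above.
def four_color(vertices, edges):
    """Return a vertex -> color (0..3) dict, or None if no 4-coloring exists."""
    adj = {v: set() for v in vertices}
    for u, w in edges:
        adj[u].add(w)
        adj[w].add(u)
    colors = {}

    def solve(i):
        if i == len(vertices):
            return True
        v = vertices[i]
        for c in range(4):
            if all(colors.get(u) != c for u in adj[v]):
                colors[v] = c
                if solve(i + 1):
                    return True
                del colors[v]
        return False

    return colors if solve(0) else None

# Example: a wheel over a 5-cycle, a planar graph that genuinely needs 4 colors.
verts = [0, 1, 2, 3, 4, 5]
edges = [(0, 1), (0, 2), (0, 3), (0, 4), (0, 5),
         (1, 2), (2, 3), (3, 4), (4, 5), (5, 1)]
coloring = four_color(verts, edges)
print(coloring)
```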
4,301,673
<p>Suppose we have chosen <span class="math-container">$n$</span> random points in a line segment <span class="math-container">$[0, t]$</span>, <span class="math-container">$n\leq t+1$</span>. What is the probability that the distance between each pair of adjacent points is &gt; 1? More formally: let <span class="math-container">$U_1, U_2, ..., U_n \stackrel{iid}{\sim} U(0,t), n\leq t+1$</span>, and let <span class="math-container">$U_{(i)}$</span> denote the <span class="math-container">$i^{th}$</span> order statistic. Find <span class="math-container">$P(\cap_{i=1}^{n-1} U_{(i+1)} - U_{(i)} &gt; 1)$</span>.</p>
MXXZ
966,405
<p>Answer: <span class="math-container">$\mathbb{P}(\bigcap_{i=1}^{n-1} U_{(i+1)} - U_{(i)} &gt; 1) = \frac{(t-(n-1))^n}{t^n}$</span>.</p> <p><strong>1. <span class="math-container">$\mathbb{P}(\bigcap_{i=1}^{n-1} U_{(i+1)} - U_{(i)} &gt; 1) = n! \cdot \mathbb{P} (\forall 1 \leq i \leq n-1 \colon U_i + 1 &lt; U_{i+1})$</span>:</strong></p> <p>First, note that <span class="math-container">$\mathbb{P}(\exists i,j \in \{1, \dots, n\}, i \neq j\colon U_i = U_j ) = 0$</span>. (&quot;Almost surely, all values are unique.&quot;)</p> <p>Hence, <span class="math-container">$\mathbb{P}\left(\exists \pi \in S_n\colon U_{\pi(1)} &lt; U_{\pi(2)} &lt; \dots &lt; U_{\pi(n)} \right) = \sum_{\pi \in S_n} \mathbb{P}\left(U_{\pi(1)} &lt; U_{\pi(2)} &lt; \dots &lt; U_{\pi(n)} \right) = 1$</span>, where <span class="math-container">$S_n$</span> denotes the symmetric group.</p> <p>Since the <span class="math-container">$U_i$</span>'s are i.i.d., every one of the orderings should have equal probability, i.e.</p> <p><span class="math-container">$\mathbb{P}\left(U_{\pi(1)} &lt; U_{\pi(2)} &lt; \dots &lt; U_{\pi(n)} \right) = \frac{1}{n!}$</span> for all <span class="math-container">$\pi \in S_n$</span>.</p> <p>Furthermore, <span class="math-container">$$\mathbb{P}\left(\left\{ \bigcap_{i=1}^{n-1} U_{(i+1)} - U_{(i)} &gt; 1 \right\} \cap \left\{U_{\pi(1)} &lt; U_{\pi(2)} &lt; \dots &lt; U_{\pi(n)} \right\}\right) = \mathbb{P} \left(\forall 1 \leq i \leq n-1 \colon U_{\pi (i)} + 1 &lt; U_{\pi(i+1)}\right)$$</span> for all <span class="math-container">$\pi \in S_n$</span>.</p> <p>Again, since the <span class="math-container">$U_i$</span>'s are i.i.d., each of the &quot;spaced orderings&quot; should be equally likely.</p> <p>All in all, we have</p> <p><span class="math-container">\begin{align} \mathbb{P}\left(\bigcap_{i=1}^{n-1} U_{(i+1)} - U_{(i)} &gt; 1\right) &amp;= \sum_{\pi \in S_n} \mathbb{P}\left(\left\{ \bigcap_{i=1}^{n-1} U_{(i+1)} - U_{(i)} &gt; 1 \right\} \cap \left\{U_{\pi(1)}
&lt; U_{\pi(2)} &lt; \dots &lt; U_{\pi(n)} \right\}\right) \\ &amp;= \sum_{\pi \in S_n} \mathbb{P} \left(\forall 1 \leq i \leq n-1 \colon U_{\pi (i)} + 1 &lt; U_{\pi(i+1)}\right) \\ &amp;= \sum_{\pi \in S_n} \mathbb{P} \left(\forall 1 \leq i \leq n-1 \colon U_{i} + 1 &lt; U_{i+1}\right) \\ &amp;= n! \cdot \mathbb{P} \left(\forall 1 \leq i \leq n-1 \colon U_{i} + 1 &lt; U_{i+1}\right). \end{align}</span></p> <p><strong>2. <span class="math-container">$\mathbb{P} (\forall 1 \leq i \leq n-1 \colon U_i + 1 &lt; U_{i+1}) = \frac{(t-(n-1))^n}{n! \cdot t^n}$</span></strong></p> <p>Now that we have a concrete ordering, we can calculate the probability using integrals:</p> <p>Clearly, since the <span class="math-container">$U_i$</span>'s are i.i.d., their joint probability density function is <span class="math-container">$$f(u_1, \dots, u_n) = \frac{1_{\{u_1, \dots, u_n \in [0,t]\}}}{t^n}.$$</span></p> <p>Also, it should be clear that from <span class="math-container">$U_{n-1} + 1 &lt; U_{n} \leq t$</span> we must have <span class="math-container">$U_{n-1} &lt; t - 1$</span>.</p> <p>Inductively, we get <span class="math-container">$U_i &lt; t - (n - i)$</span> (given that <span class="math-container">$ \forall 1 \leq i \leq n-1 \colon U_i + 1 &lt; U_{i+1}$</span> occurs).</p> <p>With that, we get for <span class="math-container">$p := \mathbb{P} (\forall 1 \leq i \leq n-1 \colon U_i + 1 &lt; U_{i+1})$</span> (using Fubini)</p> <p><span class="math-container">$$p = \int_0^{t-(n-1)} \int_{u_1 + 1}^{t-(n-2)} \dots \int_{u_{n-2} + 1}^{t-1} \int_{u_{n-1} + 1}^{t} f(u_1, \dots, u_n) d u_n d u_{n-1} \dots d u_2 d u_1.$$</span></p> <p>On that domain, <span class="math-container">$f(u_1, \dots, u_n) \equiv \frac{1}{t^n}$</span>.
Iteratively, we get</p> <p><span class="math-container">\begin{align} p &amp;= \int_0^{t-(n-1)} \int_{u_1 + 1}^{t-(n-2)} \dots \int_{u_{n-2} + 1}^{t-1} \int_{u_{n-1} + 1}^{t} \frac{1}{t^n} d u_n d u_{n-1} \dots d u_2 d u_1 \\ &amp;= \frac{1}{t^n} \cdot \int_0^{t-(n-1)} \int_{u_1 + 1}^{t-(n-2)} \dots \int_{u_{n-2} + 1}^{t-1} \int_{u_{n-1} + 1}^{t} d u_n d u_{n-1} \dots d u_2 d u_1 \\ &amp;= \frac{1}{t^n} \cdot \int_0^{t-(n-1)} \int_{u_1 + 1}^{t-(n-2)} \dots \int_{u_{n-2} + 1}^{t-1} (t -1 - u_{n-1}) d u_{n-1} \dots d u_2 d u_1 \\ &amp;= \frac{1}{t^n} \cdot \int_0^{t-(n-1)} \int_{u_1 + 1}^{t-(n-2)} \dots \int_{u_{n-3} + 1}^{t-2} \left[ \frac{-(t -1 - u_{n-1})^2}{2} \right]_{u_{n-2} + 1}^{t-1} d u_{n-2} \dots d u_2 d u_1 \\ &amp;= \frac{1}{t^n} \cdot \int_0^{t-(n-1)} \int_{u_1 + 1}^{t-(n-2)} \dots \int_{u_{n-4} + 1}^{t-3} \int_{u_{n-3} + 1}^{t-2} \frac{(t -2 - u_{n-2})^2}{2} d u_{n-2} d u_{n-3} \dots d u_2 d u_1 \\ &amp;= \frac{1}{t^n} \cdot \int_0^{t-(n-1)} \int_{u_1 + 1}^{t-(n-2)} \dots \int_{u_{n-4} + 1}^{t-3} \left[ \frac{-(t -2 - u_{n-2})^3}{3 \cdot 2} \right]_{u_{n-3} + 1}^{t-2} d u_{n-3} \dots d u_2 d u_1 \\ &amp;= \frac{1}{t^n} \cdot \int_0^{t-(n-1)} \int_{u_1 + 1}^{t-(n-2)} \dots \int_{u_{n-4} + 1}^{t-3} \frac{(t -3 - u_{n-3})^3}{3 \cdot 2} d u_{n-3} \dots d u_2 d u_1 \\ &amp;\vdots \\ &amp;= \frac{1}{t^n} \cdot \int_0^{t-(n-1)} \frac{(t - (n-1) - u_{1})^{n-1}}{(n-1)!} d u_1 \\ &amp;= \frac{1}{t^n} \cdot \left[ \frac{- (t - (n-1) - u_{1})^{n}}{n!} \right]_0^{t-(n-1)} \\ &amp;= \frac{(t-(n-1))^n}{n! \cdot t^n} \end{align}</span></p> <p>Thus,</p> <p><span class="math-container">$$\mathbb{P}\left(\bigcap_{i=1}^{n-1} U_{(i+1)} - U_{(i)} &gt; 1\right) = \frac{(t-(n-1))^n}{t^n}.$$</span></p>
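<p>The final formula is easy to check by Monte Carlo simulation (my addition; the parameters $t=10$, $n=3$ are chosen arbitrarily):</p>

```python
import random

# Estimate P(all gaps between consecutive order statistics exceed 1)
# and compare with the closed form (t-(n-1))^n / t^n.
def estimate(t, n, trials=200_000, seed=0):
    rng = random.Random(seed)  # fixed seed for reproducibility
    hits = 0
    for _ in range(trials):
        u = sorted(rng.uniform(0, t) for _ in range(n))
        if all(u[i + 1] - u[i] > 1 for i in range(n - 1)):
            hits += 1
    return hits / trials

t, n = 10, 3
exact = (t - (n - 1)) ** n / t ** n   # 8^3 / 10^3 = 0.512
est = estimate(t, n)
print(est, exact)
```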
121,431
<p>I need a good reference for the basic definitions of the dual of a locally compact group (not necessarily abelian), its natural topology and $\sigma$-algebra, and the Plancherel measure on it (when they are defined). This topic seems pretty standard to me, but when I needed a basic reference on it (both to check my memory and to be able to cite it in a paper I am writing), I didn't find one. </p> <p>By the way, the Wikipedia page "Plancherel measure" should be completely rewritten. There is not even a definition, just a list of examples (and the definition given in the finite case is not compatible with the one given in the compact case). I would be happy to rewrite it once I have a reference to check the details. </p>
Carlos De la Mora
40,832
<p>In my humble opinion the best reference is Dixmier's <em>$C^*$-algebras</em>. The first half of the book has a very complete explanation of what you need to know about $C^*$-algebras. In Chapter 8 he goes over the decomposition of a trace for $C^*$-algebras. Then from Chapter 13 on he goes into the theory for a locally compact group. He explains necessary and sufficient conditions for the Plancherel formula to exist (the group has to be Type I, separable, postliminal, unimodular, etc.). He also explains the topology to be given to $\widehat{G}$; in fact he gives three different topologies on this set, and shows all of them agree in the case we are interested in—it is just that beautiful of a book. Chapter 18 is the statement of the Plancherel Theorem; the proof is essentially the one in Chapter 8 for $C^*$-algebras. The English version is very good, with very few typos or print mistakes that may confuse you. I have not found a typo or a mistake of any sort in the French version. I think it is a very good book, like reading a novel. </p>
3,757,038
<h2>The problem</h2> <p>So recently in school, we had to do a task somewhat like this (roughly translated):</p> <blockquote> <p><em>Assign a system of linear equations to each drawing</em></p> </blockquote> <p>Then, there were some systems of three linear equations (SLEs), where each equation describes a plane in coordinate form, and some sketches of three planes in some relation (e.g. parallel, or intersecting at 90° angles).</p> <h2>My question</h2> <p>For some reason, I immediately knew that these planes:</p> <p><a href="https://i.stack.imgur.com/6luFl.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6luFl.png" alt="enter image description here" /></a></p> <p>belonged to this SLE: <span class="math-container">$$ x_1 -3x_2 +2x_3 = -2 $$</span> <span class="math-container">$$ x_1 +3x_2 -2x_3 = 5 $$</span> <span class="math-container">$$-6x_2 + 4x_3 = 3$$</span></p> <p>And it turned out to be true. In school, we proved this by determining the planes' intersecting lines and showing that they are parallel, but not identical.<br /> However, I believe that it must be possible to show the planes are arranged like this without a lot of calculation, since I immediately saw/&quot;felt&quot; that the planes described in the SLE must be arranged in the way they are in the picture (like a triangle). I could also determine the same &quot;shape&quot; on a similar question, so I do not believe that it was just coincidence.</p> <h2>What needs to be shown?</h2> <p>So we must show that the three planes described by the SLE cut each other in a way that I do not really know how to describe. They do not intersect each other perpendicularly (at least they don't have to in order to be arranged in a triangle), but there is no point in which all three planes intersect.
If you were to put a line in the center of the triangle, it would be parallel to all planes.</p> <p>The three planes do not share one intersecting line, as would be the case here:</p> <p><a href="https://i.stack.imgur.com/LQ5IY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LQ5IY.png" alt="enter image description here" /></a></p> <p>(which was another drawing from the task, but is not relevant to this question except that it has to be excluded)</p> <h2>My thoughts</h2> <p>If you were to look at the planes exactly from the direction in which the parallel line from the previous section leads, you would see something like this:</p> <p><a href="https://i.stack.imgur.com/eMj2x.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/eMj2x.png" alt="enter image description here" /></a></p> <p>The red arrows represent the normal of each plane (they should be perpendicular). You can see that the normals all lie in one (new) plane. This already follows from the way the planes intersect each other (as I described before). If you now were to align your coordinate system in such a way that the plane in which the normals lie is the <span class="math-container">$x_1 x_2$</span>-plane, each normal would have an <span class="math-container">$x_3$</span> value of <span class="math-container">$0$</span>.
If you were now to further align the coordinate axes so that the <span class="math-container">$x_1$</span>-axis is identical to one of the normals (let's just choose the bottom one), the normals would look something like this:</p> <p><span class="math-container">$n_1=\begin{pmatrix} a \\ 0 \\ 0 \end{pmatrix}$</span> for the bottom normal</p> <p><span class="math-container">$n_2=\begin{pmatrix} a \\ a \\ 0 \end{pmatrix}$</span> for the upper right normal</p> <p>and <span class="math-container">$n_3=\begin{pmatrix} a \\ -a \\ 0 \end{pmatrix}$</span> for the upper left normal</p> <p>Of course, the planes do not have to be arranged in a way that the vectors line up so nicely that they lie in one of the planes of our coordinate system.</p> <p>However, in the SLE, I noticed the following:</p> <p>-The three normals (we can simply read off the coefficients since the equations are in coordinate form) are <span class="math-container">$n_1=\begin{pmatrix} 1 \\ -3 \\ 2 \end{pmatrix}$</span>, <span class="math-container">$n_2=\begin{pmatrix} 1 \\ 3 \\ -2 \end{pmatrix}$</span> and <span class="math-container">$n_3=\begin{pmatrix} 0 \\ -6 \\ 4 \end{pmatrix}$</span>.</p> <p>As we can see, <span class="math-container">$n_1$</span> and <span class="math-container">$n_2$</span> have the same value for <span class="math-container">$x_1$</span>, and <span class="math-container">$x_2(n_1)=-x_2(n_2)$</span>; <span class="math-container">$x_3(n_1)=-x_3(n_2)$</span>.</p> <p>Also, <span class="math-container">$n_3$</span> is somewhat similar in that its <span class="math-container">$x_2$</span> and <span class="math-container">$x_3$</span> values are the same as the <span class="math-container">$x_2$</span> and <span class="math-container">$x_3$</span> values of <span class="math-container">$n_1$</span>, but multiplied by the factor <span class="math-container">$2$</span>.</p> <p>I also noticed that <span class="math-container">$n_3$</span> has no <span class="math-container">$x_1$</span> value (or, more accurately, the value is <span class="math-container">$0$</span>), while for <span class="math-container">$n_1$</span> and <span class="math-container">$n_2$</span>, the value for <span class="math-container">$x_1$</span> is identical (namely <span class="math-container">$1$</span>).</p> <h2>Conclusion</h2> <p>I feel like I am very close to a solution; I just don't know what to do with my thoughts/approaches regarding the normals of the planes.<br /> Any help would be greatly appreciated.</p> <p><strong>How can I show that the three planes are arranged in this triangular-like shape by using their normals, i.e. without having to calculate the planes' intersection lines?</strong> (Probably we will need more than normals, but I believe that they are the starting point.)</p> <hr /> <p><strong>Update:</strong> I posted a <a href="https://math.stackexchange.com/questions/3827387/why-are-three-vectors-linearly-dependent-when-one-of-them-is-a-combination-of-th">new question</a> that is related to this problem, but is (at least in my opinion) not the same question.</p>
Jean Marie
305,862
<p>There is a very easy-to-check necessary and sufficient condition :</p> <p>You will have the first figure (triangle) if and only if there exists a linear combination of the LHS of your system of equations (1),(2),(3) making <span class="math-container">$0$</span> <strong>without the RHS being so</strong> with the same coefficients ; precisely here :</p> <p><span class="math-container">$$\begin{cases} \text{(condition A)} \ \ &amp; \color{red}{[-1]} \times (1) + \color{red}{[1]} \times (2) + \color{red}{[1]} \times (3) &amp;=&amp; 0 \ \ \text{whereas}\\ \text{(condition B)} \ \ &amp; \color{red}{[-1]} \times -2 + \color{red}{[1]} \times 5 + \color{red}{[1]} \times 3 &amp;\neq &amp; 0\end{cases}$$</span></p> <p>We would be in the second case (triangle reduced to <span class="math-container">$0$</span> = pencil of planes) iff the RHS is <span class="math-container">$0$</span> as well.</p> <p><strong>Remark:</strong></p> <ol> <li><p>The proof of this fact, as remarked by you, is that condition A is equivalent to a linear dependency of the normals, whereas condition B amounts to the negation of the fact that for example the 3rd plane is a member of the pencil of planes defined by the first and second plane.</p> </li> <li><p>There is a more &quot;linear algebra way&quot; to express remark 1). Let me borrow for that the notations of the excellent answer by @paulinho, working this time with an augmented matrix : <span class="math-container">$$\exists ? 
\ \vec{y} \ \text{such that} \ \ \ \underbrace{\begin{bmatrix} y_1 \ \ y_2 \ \ y_3 \end{bmatrix}}_{\vec{y}}\underbrace{[A \ | \ \vec{b}]}_B=\begin{bmatrix} y_1 \ \ y_2 \ \ y_3 \end{bmatrix}\left[\begin{array}{rrr|r} 1 &amp; -3 &amp; 2 &amp; -2 \\ 1 &amp; 3 &amp; -2 &amp; 5 \\ 0 &amp; -6 &amp; 4 &amp; 3 \end{array}\right]=0 $$</span></p> </li> </ol> <p>Either rank<span class="math-container">$(B)=3$</span>, no such <span class="math-container">$\vec{y}$</span> exists and we are in the first case of the necessary and sufficient condition; otherwise, if rank<span class="math-container">$(B)&lt;3$</span> : we are in the second case.</p>
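As a quick numerical sketch of the rank criterion above (this check is mine, not part of the original answer; the arrays are the coefficient matrix and right-hand side of the question's system):

```python
import numpy as np

# Coefficient matrix A and right-hand side b of the question's system
A = np.array([[1, -3,  2],
              [1,  3, -2],
              [0, -6,  4]], dtype=float)
b = np.array([-2.0, 5.0, 3.0])

rank_A = np.linalg.matrix_rank(A)                        # rank of the normals
rank_B = np.linalg.matrix_rank(np.column_stack([A, b]))  # augmented matrix

# rank(A) = 2: the normals are linearly dependent (coplanar), so all
# pairwise intersection lines share one common direction.
# rank([A|b]) = 3 > rank(A): the system is inconsistent, so there is no
# common point -- exactly the "triangle" configuration.
print(rank_A, rank_B)  # 2 3
```

The common direction of the three intersection lines is any nonzero vector in the null space of `A`, since such a vector is orthogonal to all three normals.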
3,757,038
Ingix
393,096
<p>I guess the reason you &quot;immediately knew&quot; that the system</p> <p><span class="math-container">$$ x_1 -3x_2 +2x_3 = -2 \tag1 \label{eq1}$$</span> <span class="math-container">$$ x_1 +3x_2 -2x_3 = 5 \tag2 \label{eq2}$$</span> <span class="math-container">$$-6x_2 + 4x_3 = 3 \tag3 \label{eq3}$$</span></p> <p>behaves like that</p> <p><a href="https://i.stack.imgur.com/6luFl.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6luFl.png" alt="3 planes intersecting, intersections are parallel lines" /></a></p> <p>was that you saw (maybe subconsciously) that adding \eqref{eq2} and \eqref{eq3} and subtracting \eqref{eq1} leads to</p> <p><span class="math-container">$$ 0 = 10,$$</span></p> <p>showing that there cannot exist a point where all planes intersect.</p> <p>That can happen in several ways, the most obvious being that 2 of the planes are parallel. But parallel planes are easy to identify in algebraic form: if they are given as</p> <p><span class="math-container">$$a_1x_1+a_2x_2+a_3x_3=z_a$$</span> <span class="math-container">$$b_1x_1+b_2x_2+b_3x_3=z_b$$</span></p> <p>then being parallel means that there exists a number <span class="math-container">$f$</span> such that <span class="math-container">$b_1=fa_1, b_2=fa_2, b_3=fa_3.$</span> It's easy to see that this isn't true for any pair of planes described by \eqref{eq1},\eqref{eq2},\eqref{eq3}.</p> <p>However, that means that each of the 3 pairs of planes has a line as intersection, making 3 lines of intersection. But no two of those lines can intersect each other, because that would mean their point of intersection would lie on all 3 planes, which is impossible. Since any 2 lines of intersection lie in one of the 3 planes, that means they are parallel!</p> <p>So we've come to the conclusion that the planes described by \eqref{eq1},\eqref{eq2} and \eqref{eq3} form that picture: they each intersect pairwise, but their intersections are parallel.</p>
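The two observations in this answer (the inconsistent combination and the pairwise non-parallel normals) can be checked mechanically. A small NumPy sketch of my own, with the coefficients read off from the three equations:

```python
import numpy as np

n1, c1 = np.array([1, -3,  2]), -2   # eq (1)
n2, c2 = np.array([1,  3, -2]),  5   # eq (2)
n3, c3 = np.array([0, -6,  4]),  3   # eq (3)

# (2) + (3) - (1): the variable parts cancel but the constants do not,
# so no point can lie on all three planes ("0 = 10").
assert np.array_equal(n2 + n3 - n1, np.array([0, 0, 0]))
assert c2 + c3 - c1 == 10

# No two normals are scalar multiples of each other (cross products are
# nonzero), so each pair of planes really does meet in a line.
for u, v in [(n1, n2), (n1, n3), (n2, n3)]:
    assert np.linalg.norm(np.cross(u, v)) > 0
print("pairwise intersections exist, but there is no common point")
```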
3,243,328
<p>X is a random variable with values from <span class="math-container">$\Bbb N\setminus{0}$</span></p> <p>I am trying to show that <span class="math-container">$E[X^2]$</span> = <span class="math-container">$\sum_{n=1}^\infty (2n-1) P(X\ge n)$</span> iff <span class="math-container">$E[X^2]$</span> &lt; <span class="math-container">$\infty$</span>.</p> <p>I rewrote <span class="math-container">$P(X \ge n)$</span>:</p> <p><span class="math-container">$E[X^2]$</span> = <span class="math-container">$\sum_{n=1}^\infty (2n-1)\sum_{x=1}^\infty 1_{x \ge n}P(X=x)$</span></p> <p>Now I tried to rearrange the sums:</p> <p><span class="math-container">$E[X^2]$</span> = <span class="math-container">$\sum_{x=1}^\infty \sum_{n=1}^x (2n-1)P(X=x)$</span></p> <p>But I think that I made a mistake. Could you give me some hints?</p>
Oliver Díaz
121,671
<p>Just as you did it: <span class="math-container">\begin{align} \sum^\infty_{n=1}(2n-1)P(X\geq n) &amp;=\sum^\infty_{n=1}(2n-1)\Big(\sum^\infty_{j=n}P(X=j)\Big)\\ &amp;=\sum^\infty_{j=1}\sum^j_{n=1}P(X=j)(2n-1)\\ &amp;=\sum^\infty_{j=1}P(X=j)\Big(\sum^j_{n=1}(2n-1)\Big)\\ &amp;=\sum^\infty_{j=1}P(X=j)j^2 \end{align}</span></p> <p>The last line follows from <span class="math-container">$\sum^j_{n=1}(2n-1)=2\frac{j(j+1)}{2}-j$</span></p>
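The identity is easy to sanity-check on a distribution with finite support (the particular probabilities below are my own arbitrary choice, not from the question):

```python
from fractions import Fraction

# An arbitrary distribution on {1, ..., 5}, with exact arithmetic
p = {1: Fraction(1, 2), 2: Fraction(1, 4), 3: Fraction(1, 8),
     4: Fraction(1, 16), 5: Fraction(1, 16)}
assert sum(p.values()) == 1

E_X2 = sum(x * x * px for x, px in p.items())

# Right-hand side: sum over n of (2n - 1) * P(X >= n)
tail = lambda n: sum(px for x, px in p.items() if x >= n)
rhs = sum((2 * n - 1) * tail(n) for n in range(1, 6))

assert E_X2 == rhs
print(E_X2)  # 83/16
```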
4,577,925
<p>Which expression is larger, <span class="math-container">$$ 99^{50}+100^{50}\quad\textrm{ or }\quad 101^{50}? $$</span></p> <p>Idea is to use the Binomial Theorem:</p> <p>The right hand side then becomes <span class="math-container">$$ 101^{50}=(100+1)^{50}=\sum_{k=0}^{50}\binom{50}{k}1^{50-k}100^k=100^{50}+\sum_{k=0}^{49}\binom{50}{k}100^k $$</span></p> <p>The left hand side reads <span class="math-container">$$ 99^{50}+100^{50}=(100-1)^{50}+100^{50}=\sum_{k=0}^{50}\binom{50}{k}(-1)^{50-k}100^k+100^{50} $$</span></p> <p>Thus, since both sides have the summand <span class="math-container">$100^{50}$</span>, it remains to compare <span class="math-container">$$ \sum_{k=0}^{50}\binom{50}{k}(-1)^{50-k}100^k\quad\textrm{and}\quad \sum_{k=0}^{49}\binom{50}{k}100^k $$</span></p>
Claude Leibovici
82,404
<p>It can be instructive to compare <span class="math-container">$$(2x)^x +(2 x-1)^x \qquad \text{and} \qquad (2 x+1)^x$$</span> or, better, their logarithms.</p> <p>So, consider that you look for the zeros of the function <span class="math-container">$$f(x)=\log \left((2x)^x +(2 x-1)^x\right)-\log \left((2 x+1)^x\right)$$</span> which has a trivial solution <span class="math-container">$x=2$</span>.</p> <p>So</p> <p><span class="math-container">$$x\gt 2 \qquad \implies\qquad (2x)^x +(2 x-1)^x \lt (2 x+1)^x$$</span></p> <p>Checking <span class="math-container">$$\left( \begin{array}{cccc} x &amp;(2x)^x +(2 x-1)^x &amp; (2 x+1)^x &amp;(2x)^x +(2 x-1)^x- (2 x+1)^x\\ 1 &amp; 3 &amp; 3 &amp; 0 \\ 2 &amp; 25 &amp; 25 &amp; 0 \\ 3 &amp; 341 &amp; 343 &amp; -2 \\ 4 &amp; 6497 &amp; 6561 &amp; -64 \\ 5 &amp; 159049 &amp; 161051 &amp; -2002 \\ 6 &amp; 4757545 &amp; 4826809 &amp; -69264 \\ \end{array} \right)$$</span></p> <p>Maybe you could use induction.</p>
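For the original comparison there is no need to approximate at all: Python integers are exact, so one can simply evaluate both sides (a check I added, mirroring the table above):

```python
# Exact integer arithmetic settles the original question directly
assert 99**50 + 100**50 < 101**50

# Reproduce the table: (2x)^x + (2x-1)^x - (2x+1)^x for x = 1..6
diffs = [(2 * x)**x + (2 * x - 1)**x - (2 * x + 1)**x for x in range(1, 7)]
assert diffs == [0, 0, -2, -64, -2002, -69264]
print("99^50 + 100^50 < 101^50")
```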
1,344,690
<p>I was wondering how to find the vertices of an equilateral triangle given its center point?</p> <p>Such as:</p> <pre><code> A /\ / \ / \ / M \ B /________\ C </code></pre> <p>Provided that <code>AB, AC, BC = x</code> and <code>M = (50,50)</code> and <code>M</code> is the middle of the triangle, I want to find <code>A</code>, <code>B</code> and <code>C</code>.</p> <p>Thanks.</p>
Hagen von Eitzen
39,174
<p>Draw the circle of radius $\frac {\sqrt 3}3x$ around $M$. Pick an arbitrary point $A$ on this circle. Then intersect the circle of radius $x$ around $A$ with the first circle to determine $B,C$ as the intersection points. </p> <p>Note that $A$ could be picked anywhere on the circle, hence the result is not unique. </p>
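In coordinates, this construction amounts to placing the vertices on the circumcircle of radius $\frac{\sqrt 3}{3}x$ at $120°$ spacing. A short Python sketch of my own; the free angle `theta` reflects the non-uniqueness noted above:

```python
import math

def equilateral_vertices(M, side, theta=0.0):
    """Vertices of an equilateral triangle with center M and the given
    side length; theta is the free choice of where A sits on the circle."""
    R = side * math.sqrt(3) / 3  # circumradius, as in the answer above
    return [(M[0] + R * math.cos(theta + k * 2 * math.pi / 3),
             M[1] + R * math.sin(theta + k * 2 * math.pi / 3))
            for k in range(3)]

A, B, C = equilateral_vertices((50, 50), side=10)
dist = lambda P, Q: math.hypot(P[0] - Q[0], P[1] - Q[1])

# all three sides have the requested length, and the centroid is M
assert all(abs(dist(P, Q) - 10) < 1e-9 for P, Q in [(A, B), (B, C), (C, A)])
assert abs((A[0] + B[0] + C[0]) / 3 - 50) < 1e-9
assert abs((A[1] + B[1] + C[1]) / 3 - 50) < 1e-9
```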
3,056,031
<p>From the first chapter of Arfken's Mathematical Methods for physicists (rotation of the coordinate axis):</p> <blockquote> <p>To go on to three and, later, four dimensions, we find it convenient to use a more compact notation. Let <span class="math-container">\begin{equation} x → x_1, \textbf{ } y → x_2 \end{equation}</span></p> <p><span class="math-container">\begin{equation} a_{11} = cos\phi,\textbf{ } a_{12} = sin\phi \end{equation}</span></p> <p><span class="math-container">\begin{equation} a_{21} = −sin\phi, \textbf{ } a_{22} = cos\phi \end{equation}</span></p> <p>Then Eqs. become</p> <p><span class="math-container">\begin{equation} x′_1 = a_{11}x_1 + a_{12}x_2 \end{equation}</span></p> <p><span class="math-container">\begin{equation} x′_2 = a_{21}x_1 + a_{22}x_2. \end{equation}</span></p> <p>The coefficient <span class="math-container">$a_{ij}$</span> may be interpreted as a direction cosine, the cosine of the angle between <span class="math-container">$x'_i$</span> and <span class="math-container">$x_j$</span> ; that is,</p> </blockquote> <p>This is all good. Later, the book states:</p> <blockquote> <p>From the definition of <span class="math-container">$a_{ij}$</span> as the cosine of the angle between the positive <span class="math-container">$x′_i$</span> direction and the positive <span class="math-container">$x_j$</span> direction we may write (Cartesian coordinates):</p> <p><span class="math-container">\begin{equation} a_{ij}=\frac{\partial x'_i}{\partial x_j} \end{equation}</span></p> </blockquote> <p>Where did that come from? I can't find anything about the cosine as a partial derivative, and I don't see how that works.</p>
Renzo Vizarreta
761,926
<p>When you choose to represent the system of equations as a sum, it looks like this: </p> <p><span class="math-container">$$ x^{'}_i = \sum_{j=1}^N a_{ij}x_j $$</span></p> <p>For example, in the two-dimensional case (N=2), it follows that:</p> <p><span class="math-container">$$ x^{'}_1 = a_{11}x_1 + a_{12}x_2 $$</span> <span class="math-container">$$ x^{'}_2 = a_{21}x_1 + a_{22}x_2 $$</span></p> <p>As you can see, both equations are functions of <span class="math-container">$x_1$</span> and <span class="math-container">$x_2$</span>, so we can differentiate each of them in order to obtain the required <span class="math-container">$a_{ij}$</span>. Let's see one of them:</p> <p><span class="math-container">$$ \frac{\partial x^{'}_1}{\partial x_1} = \frac{\partial (a_{11}x_1 + a_{12}x_2)}{\partial x_1} = a_{11} $$</span></p> <p>The same procedure is done for the rest of the required cosines <span class="math-container">$a_{ij}$</span>. To generalize this, we differentiate the sum with respect to <span class="math-container">$x_j$</span>; it then looks as follows:</p> <p><span class="math-container">$$ \frac{\partial x^{'}_i}{\partial x_j} = a_{ij} $$</span></p> <p>That's where it comes from :)</p>
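Since the transformation is linear, the partial derivatives are constants, and the identity $a_{ij} = \partial x'_i/\partial x_j$ can be checked with finite differences. A small sketch of my own illustrating this:

```python
import math

phi = 0.7                       # an arbitrary rotation angle
a = [[math.cos(phi),  math.sin(phi)],
     [-math.sin(phi), math.cos(phi)]]

def transform(x1, x2):
    # x'_i = sum_j a_ij x_j
    return tuple(a[i][0] * x1 + a[i][1] * x2 for i in range(2))

h, x0 = 1e-6, (1.3, -0.4)       # step size and an arbitrary base point
for i in range(2):
    for j in range(2):
        xp = list(x0)
        xp[j] += h
        deriv = (transform(*xp)[i] - transform(*x0)[i]) / h
        # the finite difference recovers the direction cosine a_ij
        assert abs(deriv - a[i][j]) < 1e-6
print("a_ij = dx'_i/dx_j confirmed numerically")
```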
3,123,681
<p>Let us consider the following problem taken from a book:</p> <p><em>An appliance store purchases electric ranges from two companies. From company A, 500 ranges are purchased and 2% are defective. From company B, 850 ranges are purchased and 2% are defective. Given that a range is defective, find the probability that it came from company B</em></p> <p>So here, are we assuming that the probability of selecting each company is equal? That would mean that <span class="math-container">$P(A)=P(B)=\frac{1}{2}$</span>. Also, <span class="math-container">$2$</span>% defective means that the probability of selecting a defective range is <span class="math-container">$0.02$</span>; for instance, in company A the number of defective ranges is <span class="math-container">$500*0.02=10$</span>, so their probability is <span class="math-container">$\frac{10}{500}=0.02=2$</span>%.</p> <p>We know the probability of selecting a defective range is then equal to</p> <p><span class="math-container">$ \frac{1}{2} *2$</span>% + <span class="math-container">$\frac{1}{2} *2$</span>%, and the probability that a defective range came from company B will be <span class="math-container">$1/2 * 2$</span>% divided by the probability of selecting a defective range. But the book says that the answer is <span class="math-container">$0.65 $</span>; how?</p>
Felix Marin
85,343
<p><span class="math-container">$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$</span> <span class="math-container">\begin{align} &amp;\bbox[10px,#ffd]{\lim_{n \to \infty} \sum_{k = 1}^{n}\arctan\pars{1 \over n + k}} \\[5mm] = &amp;\ \lim_{n \to \infty} \sum_{k = 1}^{n}\pars{n + k} \int_{0}^{1}{\dd x \over x^{2} + \pars{n + k}^{2}} \\[5mm] = &amp;\ \lim_{n \to \infty}\int_{0}^{1}\Re\sum_{k = 0}^{n - 1} {1 \over k + n + 1 - \ic x}\,\dd x \\[5mm] = &amp;\ \lim_{n \to \infty}\Re\int_{0}^{1}\sum_{k = 0}^{\infty}\pars{% {1 \over k + n + 1 - \ic x} - {1 \over k + 2n + 1 - \ic x} }\,\dd x \\[5mm] = &amp;\ \lim_{n \to \infty}\Re\int_{0}^{1}\bracks{% \Psi\pars{2n + 1 - \ic x} - \Psi\pars{n + 1 - \ic x}}\,\dd x \\[2mm] &amp;\ \mbox{where}\ \pars{~\Psi:\ Digamma\ Function~} \\[5mm] = &amp;\ -\lim_{n \to \infty}\Im \ln\pars{{\Gamma\pars{2n + 1 - \ic} \over \Gamma\pars{2n + 1}}\, {\Gamma\pars{n + 1} \over \Gamma\pars{n + 1 - \ic}}} \\[5mm] = &amp;\ -\lim_{n \to \infty}\Im \ln\pars{\bracks{2n - \ic}!/\pars{2n}! 
\over \bracks{n - \ic}!/n!} = -\lim_{n \to \infty}\Im \ln\pars{\bracks{2n}^{-\ic} \over n^{-\ic}} \label{1}\tag{1} \\[5mm] = &amp;\ -\Im\ln\pars{2^{-\ic}} = -\Im\ln\pars{\vphantom{\Large A}\cos\pars{\ln\pars{2}} - \ic\sin\pars{\ln\pars{2}}} \\[5mm] = &amp;\ \bbx{\ln\pars{2}} \end{align}</span></p> <blockquote> <p><span class="math-container">$\ds{\Gamma}$</span> is the <em>Gamma Function</em>. Note that (&nbsp;see \eqref{1}&nbsp;)</p> </blockquote> <p><span class="math-container">\begin{align} &amp;\bbox[10px,#ffd]{\left.{\pars{\alpha n - \ic}! \over \pars{\alpha n}!}\,\right\vert_{\ \alpha\ \in\ \braces{1,2}}} \,\,\,\stackrel{\mrm{as}\ n\ \to\ \infty}{\sim}\,\,\, {\root{2\pi}\pars{\alpha n - \ic}^{\alpha n - \ic + 1/2}\expo{-\pars{\alpha n - \ic}} \over \root{2\pi}\pars{\alpha n}^{\alpha n + 1/2}\expo{-\alpha n}} \\[5mm] = &amp;\ {\pars{\alpha n}^{\alpha n - \ic + 1/2}\, \bracks{1 - \ic/\pars{\alpha n}}^{\alpha n - \ic + 1/2}\expo{\ic} \over \pars{\alpha n}^{\alpha n + 1/2}} \,\,\,\stackrel{\mrm{as}\ n\ \to\ \infty}{\sim}\,\,\, {\large\pars{\alpha n}^{-\ic}} \end{align}</span></p>
1,273,477
<blockquote> <p>What is the unit normal vector of the curve $y + x^2 = 1$, $-1 \leq x \leq 1$? </p> </blockquote> <p>I need this to calculate the flux integral of a vector field over that curve.</p>
marwalix
441
<p>The curve is given by $F(x,y)=x^2+y-1=0$. A normal vector is $\operatorname{grad}{F}=(F_x,F_y)=(2x,1)$. We now normalize to get</p> <p>$$n=\left(\frac{2x}{\sqrt{4x^2+1}},\frac{1}{\sqrt{4x^2+1}}\right)$$</p>
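A quick check of this normal (my own sketch): it should have unit length and be orthogonal to the tangent direction $(1, y') = (1, -2x)$ at every point of the curve.

```python
import math

def unit_normal(x):
    # grad F = (2x, 1) for F(x, y) = x^2 + y - 1, normalized
    g = math.sqrt(4 * x * x + 1)
    return (2 * x / g, 1 / g)

for x in (-1.0, -0.3, 0.0, 0.5, 1.0):
    nx, ny = unit_normal(x)
    assert abs(nx * nx + ny * ny - 1) < 1e-12      # unit length
    assert abs(nx * 1 + ny * (-2 * x)) < 1e-12     # n . tangent = 0
print("unit normal verified along the curve")
```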
1,273,477
E.H.E
187,799
<p>This is another way to find the unit normal vector in the plane: $$y=1-x^2$$ $$y'=-2x$$ $$v=2xi+j$$ $$|v|=\sqrt{(2x)^2+1^2}=\sqrt{4x^2+1}$$ $$n=\frac{v}{|v|}=\frac{2xi}{\sqrt{4x^2+1}}+\frac{j}{\sqrt{4x^2+1}}$$</p>
519,224
<p>Using the Squeeze Theorem, how do I find: $$\lim_{x\to 3} (x^2-2x-3)^2\cos\left(\pi \over x-3\right)$$ I thought I knew the Squeeze Theorem, but I haven't encountered anything like this yet, so I honestly have no idea how and where to start.</p> <p>I would appreciate any help I get because I really want to be able to understand these types of questions!</p>
Community
-1
<p>By these inequalities</p> <p>$$0\leq\lim_{x\to 3} |(x^2-2x-3)^2\cos\left(\pi \over x-3\right)|\leq\lim_{x\to 3} |(x^2-2x-3)^2|=0$$ we have $$\lim_{x\to 3} (x^2-2x-3)^2\cos\left(\pi \over x-3\right)=0$$</p>
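Numerically the squeeze is easy to see: $|f(x)| \le (x^2-2x-3)^2$ because $|\cos| \le 1$, and the bound itself vanishes at $x = 3$ since $x^2-2x-3 = (x-3)(x+1)$. A short check (mine, not part of the original solution):

```python
import math

def f(x):
    return (x**2 - 2*x - 3)**2 * math.cos(math.pi / (x - 3))

def bound(x):
    # (x^2 - 2x - 3)^2 = ((x - 3)(x + 1))^2 -> 0 as x -> 3
    return (x**2 - 2*x - 3)**2

for h in (1e-2, 1e-3, 1e-4):
    for x in (3 + h, 3 - h):
        assert abs(f(x)) <= bound(x) + 1e-15  # the squeeze inequality
assert abs(f(3 + 1e-4)) < 1e-6               # f is already tiny near x = 3
print("f(x) -> 0 as x -> 3")
```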
4,036,903
<p>Suppose <span class="math-container">$\sigma_1:\Delta^k \rightarrow X$</span> is a singular <span class="math-container">$k$</span>-simplex and <span class="math-container">$\sigma_2:\Delta^l \rightarrow X$</span> is a singular <span class="math-container">$l$</span>-simplex. Is there a singular <span class="math-container">$(k+l)$</span>-simplex, <span class="math-container">$\sigma: \Delta^{k+l} \rightarrow X$</span>, such that <span class="math-container">$\sigma|_{[v_1,\dots,v_k]} = \sigma_1$</span> and <span class="math-container">$\sigma|_{[v_k,\dots,v_{k+l}]}=\sigma_2$</span>?</p>
Sam Freedman
245,133
<p>If there were a common extension <span class="math-container">$\sigma : \Delta^2 \to X$</span> of 1-simplices <span class="math-container">$\sigma_1$</span> and <span class="math-container">$\sigma_2$</span>, with <span class="math-container">$\sigma|_{[v_0, v_1]} = \sigma_1$</span> and <span class="math-container">$\sigma|_{[v_1, v_2]} = \sigma_2$</span>, then it would have to be the case that <span class="math-container">$\sigma_1(v_1) = \sigma_2(v_1)$</span>. This won't always be possible: consider the case when <span class="math-container">$\sigma_1$</span> and <span class="math-container">$\sigma_2$</span> have disjoint images.</p>
3,383,672
<p>I wanted to prove <span class="math-container">$(Ma)\times(Mb)=(\det M)(M^{-1})^T(a\times b)$</span>, a formula that can be found on <a href="https://en.wikipedia.org/wiki/Cross_product#Algebraic_properties" rel="nofollow noreferrer">Wikipedia's cross product page</a>. However, I had a hard time keeping track of the indices, and I was wondering if there is an easier way to prove it. </p>
user1551
1,551
<p>They are equal because <span class="math-container">\begin{aligned} \left[(\det M)(M^{-1})^T(a\times b)\right]\cdot Mc &amp;=(\det M)(a\times b)\cdot M^{-1}Mc\\ &amp;=\det(M)(a\times b)\cdot c\\ &amp;=\det(M)\det[a,b,c]\\ &amp;=\det[Ma,Mb,Mc]\\ &amp;=(Ma\times Mb)\cdot Mc \end{aligned}</span> for every vector <span class="math-container">$c$</span>.</p>
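The identity is also easy to spot-check numerically; since it holds for all vectors, here is a direct NumPy comparison with an arbitrary invertible $M$ (my own sanity check, not part of the proof):

```python
import numpy as np

M = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 3.0],
              [1.0, 0.0, 1.0]])   # arbitrary invertible matrix, det = 5
a = np.array([1.0, -2.0, 0.5])
b = np.array([0.3, 4.0, -1.0])

lhs = np.cross(M @ a, M @ b)
rhs = np.linalg.det(M) * np.linalg.inv(M).T @ np.cross(a, b)
assert np.allclose(lhs, rhs)
print("(Ma) x (Mb) = det(M) (M^-1)^T (a x b) verified")
```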
3,383,672
J.G.
56,861
<p>By the definition of the determinant, <span class="math-container">$\epsilon_{ijk}M_{il}M_{jm}M_{kn}=(\det M)\epsilon_{lmn}$</span>. Multiplying both sides by <span class="math-container">$a_lb_m$</span>, <span class="math-container">$[M^T(Ma\times Mb)]_n=(\det M)(a\times b)_n$</span>, i.e. <span class="math-container">$M^T(Ma\times Mb)=(\det M)a\times b$</span>. The desired result then follows from <span class="math-container">$(M^T)^{-1}=(M^{-1})^T$</span>.</p>
1,210,194
<p>For example, I have the series <img src="https://i.stack.imgur.com/hV9QG.jpg" alt="enter image description here"></p> <p>Is there a numerical method to compute it? Thanks.</p>
Claude Leibovici
82,404
<p>You can write <span class="math-container">$$(1 - 4 n^2)^2 (36 n^2 + 1)=(2 n-1)^2 (2 n+1)^2 (6 n-i) (6 n+i)$$</span> Using partial fractions, the summand is <span class="math-container">$$\frac{7}{100 (2 n+1)}-\frac{7}{100 (2 n-1)}+\frac{81 i}{200 (6 n+i)}-\frac{81 i}{200 (6 n-i)}+$$</span> <span class="math-container">$$\frac{1}{40 (2 n+1)^2}+\frac{1}{40 (2 n-1)^2}$$</span></p> <p>Computing the partial sums up to <span class="math-container">$p$</span>, we have at the end for <span class="math-container">$800 S_p$</span> <span class="math-container">$$\frac{56}{2 p+1}+\frac{5}{\left(p+\frac{1}{2}\right)^2}-54 i \psi \left(p+\left(1-\frac{i}{6}\right)\right)+54 i \psi \left(p+\left(1+\frac{i}{6}\right)\right)-10 \psi ^{(1)}\left(p+\frac{1}{2}\right)+5 \pi ^2-400+54 \pi \coth \left(\frac{\pi }{6}\right)$$</span></p> <p>Now, using asymptotics and Taylor expansions <span class="math-container">$$S_p=\frac{1}{800} \left(5 \pi ^2+54 \pi \coth \left(\frac{\pi }{6}\right)-400\right)-\frac{1}{2880 p^5}+\frac{1}{1152 p^6}+O\left(\frac{1}{p^7}\right)$$</span></p> <p>Computing for <span class="math-container">$p=10$</span> <span class="math-container">$$S_{10}=\frac{89936259827237312633515635583570094}{29615662049611209795735649868626520625}$$</span> which is <span class="math-container">$0.00303678032$</span> while the above truncated expansion gives</p> <p><span class="math-container">$$S_{10}\sim -\frac{192000001}{384000000}+\frac{\pi ^2}{160}+\frac{27}{400} \pi \coth \left(\frac{\pi }{6}\right)$$</span> which is <span class="math-container">$0.00303678042$</span></p>
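Assuming the summand in the (image-only) series is $\frac{1}{(1-4n^2)^2(36n^2+1)}$, as the factorization above indicates, the closed form can be confirmed by brute-force partial summation (with $\coth$ written as $1/\tanh$):

```python
import math

def term(n):
    # summand read off from the factorization above (assumed from the image)
    return 1.0 / ((1 - 4 * n * n)**2 * (36 * n * n + 1))

partial = sum(term(n) for n in range(1, 2001))

coth = 1.0 / math.tanh(math.pi / 6)
closed = (5 * math.pi**2 + 54 * math.pi * coth - 400) / 800

# the tail beyond p = 2000 is ~ 1/(2880 p^5), negligible at double precision
assert abs(partial - closed) < 1e-9
print(closed)  # about 0.0030368
```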
2,440,785
<p>if $f: X \to X$ is continuous where $X$ is a topological space with a cofinite topology, then:</p> <p>$$(i) \ f^{-1}(x) \text{ is finite for all $x$} \\ \text{or} \\ (ii) \ f \text{ is constant}$$</p> <p>My approach:</p> <p>I couldn't build up a proper approach here to be honest. I believe we need to use the fact that inverse functions preserve differences of sets. But couldn't go on. </p> <p>Any hints?</p>
Community
-1
<p>Hint: $f^{-1}(X\setminus \{ x\}) = X\setminus f^{-1}(\{x\})$ must be open.</p>
2,440,785
Ice sea
37,787
<p>First, every singleton <span class="math-container">$\{x\}$</span> is a closed set, so the continuity of <span class="math-container">$f$</span> implies that <span class="math-container">$f^{-1}(x)$</span> is a closed set in <span class="math-container">$X$</span>. Hence, <span class="math-container">$(f^{-1}(x))^c$</span> is an open set in <span class="math-container">$X$</span>. In the cofinite topology this means <span class="math-container">$(f^{-1}(x))^c$</span> is either empty, in which case <span class="math-container">$f$</span> is constant, or cofinite, in which case <span class="math-container">$f^{-1}(x)$</span> is finite. </p>
475,151
<blockquote> <blockquote> <p>Determine the volume of $$ M:=\left\{(x,y,z)\in\mathbb{R}^3: z\in [0,2\pi],(x-\cos(z))^2+(y-\sin(z))^2\leq\frac{1}{4}\right\} $$</p> </blockquote> </blockquote> <p>My idea is to use the principle of Cavalieri, i.e. to set $$ M_z:=\left\{(x,y)\in\mathbb{R}^2: (x,y,z)\in M\right\} $$ and then calculate $$ \operatorname{vol}_3(M)=\int\limits_0^{2\pi}\operatorname{vol}_2(M_z)\, dz $$</p> <p>So I have to calculate $\operatorname{vol}_2(M_z)$.</p> <p>I set $a:=x-\cos(z)$ and $b:=y-\sin(z)$ and then the condition $a^2+b^2\leq\frac{1}{4}$ means that $$ \lvert a\rvert\leq\frac{1}{2}\Leftrightarrow \cos(z)-\frac{1}{2}\leq x\leq\cos(z)+\frac{1}{2},~~~~~\lvert b\rvert\leq\frac{1}{4}\Leftrightarrow\sin(z)-\frac{1}{2}\leq y\leq\sin(z)+\frac{1}{2}. $$</p> <p>So I used Fubini and calculated $$ \operatorname{vol}_2(M_z)=\int\limits_{\sin(z)-\frac{1}{2}}^{\sin(z)+\frac{1}{2}}\int\limits_{\cos(z)-\frac{1}{2}}^{\cos(z)+\frac{1}{2}}1\, dx\, dy=1. $$</p> <p>But then I get $$ \operatorname{vol}_3(M)=2\pi $$</p> <p>and the result should be $\frac{\pi^2}{2}$. So where is my mistake?</p>
Pratyush Sarkar
64,618
<p>You are making a mistake in concluding the following statement: $$ (x-\cos(z))^2+(y-\sin(z))^2\leq\frac{1}{4} \iff \\ \cos(z)-\frac{1}{2}\leq x\leq\cos(z)+\frac{1}{2} \,\,\,\,\, \text{and} \,\,\,\,\, \sin(z)-\frac{1}{2}\leq y\leq\sin(z)+\frac{1}{2} $$ And so you conclude the set $M_z$ is the rectangle $R_z = \left[\cos(z)-\frac{1}{2}, \cos(z)+\frac{1}{2}\right] \times \left[\sin(z)-\frac{1}{2}, \sin(z)+\frac{1}{2}\right]$ and apply Fubini's theorem.</p> <p>The mistake is $\iff$ in the above statement when in fact it is only true in one direction (the left implies the right but not the other way). To see that the right does not imply the left take $x = \cos(z)+\frac{1}{2}$ and $y = \sin(z)+\frac{1}{2}$. Then clearly $(x, y) \in R_z$, but $(x-\cos(z))^2+(y-\sin(z))^2 = (\frac{1}{2})^2 + (\frac{1}{2})^2 = \frac{1}{2} &gt; \frac{1}{4}$ which proves $(x, y) \notin M_z$.</p> <p>So, we can conclude that $M_z \subset R_z$ but not $M_z = R_z$ (in fact $M_z$ is a disk of radius $\frac{1}{2}$ contained in the rectangle $R_z$). So you were integrating over a larger area and you got a larger answer too. Using the correct slice, $\operatorname{vol}_2(M_z)=\pi\left(\frac{1}{2}\right)^2=\frac{\pi}{4}$ for every $z$, so $\operatorname{vol}_3(M)=\int_0^{2\pi}\frac{\pi}{4}\,dz=\frac{\pi^2}{2}$, as expected.</p>
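(Editor's addition, not part of the original answer.) As a quick numerical sanity check, here is a small Python Monte Carlo sketch — the function name and sampling box are our own choices — estimating the volume directly from the defining inequality:

```python
import math
import random

def estimate_volume(n_samples=200_000, seed=0):
    """Monte Carlo estimate of vol(M) for
    M = {(x, y, z) : z in [0, 2*pi], (x - cos z)^2 + (y - sin z)^2 <= 1/4}.
    Points are sampled uniformly in the bounding box
    [-1.5, 1.5] x [-1.5, 1.5] x [0, 2*pi], which contains M."""
    rng = random.Random(seed)
    box_volume = 3.0 * 3.0 * 2.0 * math.pi
    hits = 0
    for _ in range(n_samples):
        x = rng.uniform(-1.5, 1.5)
        y = rng.uniform(-1.5, 1.5)
        z = rng.uniform(0.0, 2.0 * math.pi)
        if (x - math.cos(z)) ** 2 + (y - math.sin(z)) ** 2 <= 0.25:
            hits += 1
    return box_volume * hits / n_samples
```

Each slice $M_z$ is a disk of radius $\frac12$ with area $\frac{\pi}{4}$, so the estimate should hover around $2\pi\cdot\frac{\pi}{4}=\frac{\pi^2}{2}\approx 4.93$.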
1,957,166
<p>For a given set $A$, An element such that $a \in A $ exists. </p> <p>If $A$ is a set of all natural numbers, then:</p> <p>$$ a \in A \in \mathbb{N} \subset \mathbb{Z} \subset \mathbb{R}. $$</p> <p>Would maths normally be written like this, if it is correct? </p>
Bargabbiati
352,078
<p>A couple of things: if $A$ is the empty set, then there does not exist any $a\in A$. If $A$ is the set of all natural numbers, then you have $A= \mathbb N$ (so you should write $A \subset \mathbb N$ or $A = \mathbb N$, not $A \in \mathbb N$). The inclusions $\mathbb{N}\subset\mathbb{Z}\subset\mathbb{R}$ are ok.</p>
1,957,166
<p>For a given set $A$, An element such that $a \in A $ exists. </p> <p>If $A$ is a set of all natural numbers, then:</p> <p>$$ a \in A \in \mathbb{N} \subset \mathbb{Z} \subset \mathbb{R}. $$</p> <p>Would maths normally be written like this, if it is correct? </p>
ForgotALot
295,090
<p>You have written:</p> <p>$$a \in A \in \mathbb{N} \subset \mathbb{Z} \subset \mathbb{R}$$</p> <p>and told us to assume $a\in A$ and $A=\mathbb{N}$. Under that assumption, the inclusion $A \in \mathbb{N}$ is incorrect; the set of all natural numbers is not a natural number (sorry I don't have a reference handy for this elementary fact). The other inclusions are correct. If you replace $A \in \mathbb{N}$ with $A\subset \mathbb{N}$, then everything becomes correct.</p>
1,545,092
<p>The following sum represents the number of relevant kinds of lines in an N-dimensional tic-tac-toe game, which is why I am interested in finding a closed form, but it also is the sum of all possible combinations of N unique elements when any number of the elements from 1 to N can be chosen, which is also cool, and seems like the kind of thing that would have an elegant transcendental form involving factorials and stuff.</p> <p>$$ S = \sum_{j=1}^{N} {N! \over j!(N-j)!}, N \in \mathbb{Z}_{+} $$</p> <p>So is there an easy way to find a closed form here?</p>
Aguila
288,634
<p>$S = 2^{N}-1$. You can get this by noticing that $\sum\limits_{i=0}^{N}\binom{N}{i} = (1+1)^{N} = 2^{N}$ by the binomial theorem; your sum omits only the $i=0$ term, $\binom{N}{0}=1$, so subtracting it gives $2^N-1$.</p>
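(Editor's addition, not from the original answer.) The identity is easy to spot-check in Python using `math.comb`:

```python
from math import comb

def sum_of_nonempty_subsets(n):
    """S = sum_{j=1}^{n} C(n, j): counts the non-empty subsets of an
    n-element set, one term per subset size j."""
    return sum(comb(n, j) for j in range(1, n + 1))
```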
306,732
<p>Let $R$ and $S$ be rings and $h$ and $g$ be homomorphisms from $R$ into $S$. </p> <p>Let $T=\{r| r\in R$ and $h(r)=g(r)\}$</p> <p>Prove that $T$ is a subring of $R$. </p> <p>I understand what the question is asking but I am a little confused on how to get it started. I know that I have to prove $T\neq \emptyset$; $rt\in T$ for all $r,t\in T$; and $r-t\in T$ for all $r,t\in T$ right? But I'm not really sure how I can do that. Also, I don't get what $T$ is actually being defined as. I don't want the answer just how to get started and what does $T$ actually mean? Thanks. </p>
Julien
38,053
<p>You're right. Just get started. </p> <p>First, $h(0)=0=g(0)$ so $0\in T$ and so $T\neq\emptyset$. </p> <p>Next take $r,t\in T$, i.e. $h(r)=g(r)$ and $h(t)=g(t)$.</p> <p>Then $h(rt)=h(r)h(t)=g(r)g(t)=g(rt)$. So $rt\in T$.</p> <p>Can you continue?</p>
1,057,050
<p>I am currently doing a math problem and have come across an unfamiliar notation. A mini circle between <span class="math-container">$f$</span> and <span class="math-container">$h(x)$</span></p> <p>The question ask me to find for 'the functions <span class="math-container">$f(x)=2x-1$</span> and <span class="math-container">$h(x)=3x+2$</span>'</p> <p><span class="math-container">$$f \circ h(x)$$</span></p> <p>However, I can't do this as I do not know what the circle notation denotes to. Does it mean to multiply?</p>
Eric Stucky
31,888
<p>This notation means that you take the output of $h$ and use it as the input of $f$. When we are working with a specific $x$ value, we can suggestively write $f(h(x))$ instead.</p> <p>For instance if $f(z)=1/z$ and $h(x)=2+3x$ then $$(f\circ h)(x) = f\big(h(x)\big) = f(2+3x) = \frac{1}{2+3x}.$$</p> <p>(Note: I only used $z$ as the variable for $f$ to avoid confusion; in practice the function does not care what its input variable is named.)</p>
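(Editor's addition, not from the original answer.) For the concrete functions in the question, composition is literally "feed the output of `h` into `f`"; a short Python sketch:

```python
def f(x):
    return 2 * x - 1   # f(x) = 2x - 1

def h(x):
    return 3 * x + 2   # h(x) = 3x + 2

def f_after_h(x):
    """(f o h)(x) = f(h(x)) = 2*(3x + 2) - 1 = 6x + 3."""
    return f(h(x))
```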
2,571,746
<p>$$|x+4| -4 =x $$</p> <p>I've two questions about this equation. </p> <ul> <li><p>Why do we need to build an inequality?</p></li> <li><p>If we build an inequality, in what cases do we need to analyse?</p></li> </ul> <p>Also I'm trying to find the negative values that $x$ can take.</p>
Community
-1
<p>First of all, note that this isn’t an inequality, but an equation. There are two methods to solve this:</p> <ul> <li><p>Case by case analysis: Note that the equation can be written as: $$|x+4|= x +4 $$ When $x\leq -4$, we know that $(x+4)\leq 0$, hence, $|x+4|=-4-x$. Thus, it boils down to solving: $$-4-x=x+4 \implies x = -4$$ which is valid in the range. When $x &gt; -4$, we know that $(x+4)&gt;0$, hence, $|x+4|=x+4$. Thus, it boils down to solving $$x+4=x+4 \implies \text{ Holds } \forall x&gt;-4$$</p></li> <li><p>By squaring: Note that when we square the absolute value, we can get rid of the ambiguity surrounding it, thus: $$|x+4|=x+4 \implies |x+4|^2=(x+4)^2 \implies x^2+8x+16 = x^2+8x + 16 \implies \text{ Holds } \forall x\geq -4$$ (The restriction $x\geq -4$ comes from requiring the right-hand side $x+4$ to be nonnegative, since squaring can introduce extraneous solutions.)</p></li> </ul> <p>Note that the second method is easier than the first.</p>
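(Editor's addition, not from the original answer.) A brute-force check of the conclusion — the equation holds exactly for $x \ge -4$:

```python
def equation_holds(x):
    """True when |x + 4| - 4 == x, i.e. |x + 4| == x + 4."""
    return abs(x + 4) - 4 == x
```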
1,772,675
<blockquote> <p>Writer is writing a book and he is doing 2 mistakes per page</p> </blockquote> <p>What is the probability that the 2nd mistake of the writer is in page 3?</p> <p>What I tried to do is as follows:<br> $X$~$Poi(2)$ -> 2 mistakes per page hence<br> $P(X=1)=2e^{-2}$ this is the chance to do 1 mistake in a page<br> $Y$~$NB(2,2e^{-2})$ we are looking for the first occurrence of 2 mistakes<br> So we are looking for $P(Y=3)$ which is the probability of doing the 2nd mistake in page 3 which equals to $8e^{-4}-16e^{-6}$ which is WRONG.</p> <p>What have I done wrong here?</p> <p>EDIT:<br> X is the number of mistakes per page, Y is the index of the first time we get 2 mistakes total</p>
Paul
202,111
<p>You don't need two mistakes on the third page; you just need the second mistake on the third page.</p> <p>For this problem it is best to consider TWO different Poisson processes:</p> <p>(1) One Poisson process for the number of mistakes on the first two pages, with a rate of 4 for every two pages, and</p> <p>(2) One Poisson process for the number of mistakes on the third page, with a rate of 2 on the third page.</p> <p>There are two independent possibilities to consider for the second mistake to be on the third page:</p> <p>(1) There are no mistakes on the first two pages, but at least two on the third page, and</p> <p>(2) There is one mistake on the first two pages, and at least one mistake on the third page.</p> <p>Can you take it from there?</p>
1,772,675
<blockquote> <p>Writer is writing a book and he is doing 2 mistakes per page</p> </blockquote> <p>What is the probability that the 2nd mistake of the writer is in page 3?</p> <p>What I tried to do is as follows:<br> $X$~$Poi(2)$ -> 2 mistakes per page hence<br> $P(X=1)=2e^{-2}$ this is the chance to do 1 mistake in a page<br> $Y$~$NB(2,2e^{-2})$ we are looking for the first occurrence of 2 mistakes<br> So we are looking for $P(Y=3)$ which is the probability of doing the 2nd mistake in page 3 which equals to $8e^{-4}-16e^{-6}$ which is WRONG.</p> <p>What have I done wrong here?</p> <p>EDIT:<br> X is the number of mistakes per page, Y is the index of the first time we get 2 mistakes total</p>
Med
261,160
<p>$3$ possible scenarios might happen:</p> <p>1- One error on the first page, no error on the second page, and at least one error on the third page. The probability is:</p> <p>$2e^{-2}\cdot e^{-2}\cdot\sum_{i=1}^{\infty}\frac{2^{i}e^{-2}}{i!}$</p> <p>2- No error on the first page, one error on the second page, and at least one error on the third page. The probability is the same as the previous one.</p> <p>3- No error on the first and second pages and at least two errors on the third page. The corresponding probability is:</p> <p>$e^{-2}\cdot e^{-2}\cdot\sum_{i=2}^{\infty}\frac{2^{i}e^{-2}}{i!}$</p> <p>Please note that we can multiply probabilities, because the pages are assumed to be independent. Also, we can add the three probabilities to get the final result, because the events are disjoint.</p>
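(Editor's addition, not from either answer.) The decomposition can be cross-checked in Python: the closed form below follows the page-by-page reasoning (pages 1–2 combined are Poisson(4), page 3 is Poisson(2)), and the simulation samples the pages directly with a stdlib Knuth sampler. All names here are our own.

```python
import math
import random

def poisson_sample(lam, rng):
    """Knuth's multiplication method for a Poisson(lam) variate."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while p > limit:
        k += 1
        p *= rng.random()
    return k - 1

def analytic_probability():
    """P(second mistake lands on page 3) =
    P(0 mistakes on pages 1-2) * P(>= 2 on page 3)
    + P(1 mistake on pages 1-2) * P(>= 1 on page 3)."""
    e2, e4 = math.exp(-2), math.exp(-4)
    return e4 * (1 - 3 * e2) + 4 * e4 * (1 - e2)

def simulated_probability(trials=200_000, seed=1):
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        n1 = poisson_sample(2.0, rng)
        n2 = poisson_sample(2.0, rng)
        n3 = poisson_sample(2.0, rng)
        # Second mistake is on page 3 iff pages 1-2 hold at most one
        # mistake and all three pages together hold at least two.
        if n1 + n2 <= 1 and n1 + n2 + n3 >= 2:
            hits += 1
    return hits / trials
```

Both approaches give a probability of roughly $0.074$, matching the sum of the three scenarios above.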
545,380
<p>Let $F$ be a field such that $|F|=3^{2n+1}$ and $r=3^{n+1}$. I want to find the number of $x\in F$ that satisfies the equation $x^{r+1}=1$.</p>
DonAntonio
31,254
<p>Well, since the multiplicative group of $\;\Bbb F\;$, namely $\;\Bbb F^*:=\Bbb F-\{0\}\;$, is cyclic of order $\;3^{2n+1}-1\;$, we get that for all </p> <p>$$x\in\Bbb F^*\;,\;\;x^{3^{2n+1}-1}=1\;$$</p> <p>You, though, want to count the $\;x\in\Bbb F^*\;$ s.t.</p> <p>$$x^{3^{n+1}+1}=1\;,$$</p> <p>and in a cyclic group of order $m$ the equation $x^k=1$ has exactly $\gcd(k,m)$ solutions.</p> <p>Try to take it from here.</p>
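(Editor's addition, not from the original answer.) Since $\Bbb F^*$ is cyclic of order $m=3^{2n+1}-1$, the standard fact that $x^k=1$ has exactly $\gcd(k,m)$ solutions in a cyclic group of order $m$ reduces the count to a gcd. A quick Python check (the function name is ours) suggests the count is always $2$, i.e. $x=\pm 1$:

```python
from math import gcd

def num_solutions(n):
    """Number of x in F* with x^(r+1) = 1, where |F| = 3^(2n+1) and
    r = 3^(n+1): in the cyclic group F* of order 3^(2n+1) - 1 the
    equation x^k = 1 has exactly gcd(k, |F*|) solutions."""
    k = 3 ** (n + 1) + 1
    m = 3 ** (2 * n + 1) - 1
    return gcd(k, m)
```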
231,549
<p>Is there a standard name for a linear operator $T$ on a finite dimensional vector space satisfying $T^n=T^{n+1}$ for some $n\geq 1$ or, equivalently, $T$ is a similar to a direct sum of a nilpotent matrix and an identity matrix? I am not looking so much for name suggestions, but rather for a generally accepted terminology from the literature.</p> <p><Strong>Added Motivation.</strong> In Kovacs proof that the complex algebra of the monoid of $n\times n$-matrices over a finite field is semisimple a key step is to show that the ideal of the monoid algebra spanned by the singular matrices is a unital ring. He shows that the identity is a linear combination of matrices satisfying the above property. He calls such matrices semi-idempotent. But I believe he invented the name. </p> <p>Being a semigroup theorist I don't like math terms involving "semi" and so in my book I would prefer another term, preferably one in use in the matrix theory literature. </p>
Dmitry Vaintrob
7,108
<p>I would call it projectipotent :)</p>
1,371,075
<p>$$3^x = 3 - x$$</p> <p>I have to prove that only one solution exists, and then find that one solution.</p> <p>My approach has been the following:</p> <p>$$\log 3^x = \log (3 - x)$$</p> <p>$$x\log 3 = \log (3 - x)$$</p> <p>$$\log 3 = \frac{\log (3 - x)}{x}$$</p> <p>And this is where I get stuck. Any help will be greatly appreciated, thanks in advance.</p>
Zain Patel
161,779
<p>Unfortunately, finding the solution explicitly is not possible in terms of elementary functions. You'll need to use the Lambert W function.</p> <p>You have several methods of doing so, one is to simply sketch the graphs and show that they only intersect once as done below:</p> <p><a href="https://i.stack.imgur.com/DZpEs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DZpEs.png" alt="enter image description here"></a></p> <hr> <p>A more rigorous approach would be to show that $f(x) = 3^x + x - 3$ is a strictly increasing function and show that it attains both negative and positive values. </p> <p>So $f'(x) = \ln 3\cdot 3^{x} + 1&gt; 0$ for all real $x$, so the function is strictly increasing. Secondly, we have that $f(0) = \text{negative}$ and $f(5) = \text{positive}$ so it crosses the $x$-axis exactly once and hence has only one root.</p>
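(Editor's addition, not from the original answer.) Since $f(x)=3^x+x-3$ is strictly increasing, bisection on a sign-changing bracket pins down the unique root numerically; a short sketch:

```python
def f(x):
    """f(x) = 3^x + x - 3; its unique zero solves 3^x = 3 - x."""
    return 3.0 ** x + x - 3.0

def bisect_root(lo=0.0, hi=2.0, iterations=100):
    """Bisection: f(lo) < 0 < f(hi) and f is strictly increasing,
    so the shrinking bracket always contains the single root."""
    for _ in range(iterations):
        mid = (lo + hi) / 2.0
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0
```

The root comes out near $x \approx 0.74$.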
6,534
<p>I apologize if this isn't the right place to ask this question.</p> <p>Two features of stackexchange would be very useful for a personal math blog -- Latex works great, and comments and replies can be voted upon.</p> <p>Is there any way to use the stackexchange functionality in a personal math blog?</p>
GNUSupporter 8964民主女神 地下教會
290,189
<p>Thanks to <a href="https://github.blog/2022-05-19-math-support-in-markdown/" rel="nofollow noreferrer">GitHub's recent announcement of MathJax support in GitHub Markdown</a>, you may setup a GitHub repo to write Markdown and LaTeX code to be stored in text files inside the repo. Others can give feedback through GitHub issues/discussions.</p> <p><a href="https://i.stack.imgur.com/ogFvm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ogFvm.png" alt="github markdown math support" /></a><br /> Image copied from the linked GitHub blog post.</p>
9,304
<p>I have the following question: I have a file that has structure:</p> <pre><code>x1 y1 z1 f1 x2 y2 z2 f2 ... xn yn zn fn </code></pre> <p>I can easily visualize it with <em>Mathematica</em> using <code>ListContourPlot3D</code>. But could you please tell me how I can plot contour plot for this surface? I mean with these data I have a set of surfaces corresponding to different isovalues (f) and I want to plot intersection between all these surfaces and some certain plane. I tried to Google but didn't get any results. Any help and suggestions are really appreciated. Thanks in advance!</p>
Mr.Wizard
121
<p>I'm not claiming this is a good method, I'm just getting some ink on the page:</p> <pre><code>data = Table[{x, y, z, x^2 + y^2 - z^2}, {x, -2, 2, 0.2}, {y, -2, 2, 0.2}, {z, -2, 2, 0.2}] ~Flatten~ 2; ListContourPlot3D[data, Contours -&gt; {0.5, 2}, Mesh -&gt; None] </code></pre> <p><img src="https://i.stack.imgur.com/9TOTN.png" alt="Mathematica graphics"></p> <pre><code>int = Interpolation[data]; ContourPlot3D[int[x, y, z], {x, -2, 2}, {y, -2, 2}, {z, -2, 2}, Contours -&gt; {0.5, 2}, RegionFunction -&gt; (-0.02 &lt; #2 - # &lt; 0.02 &amp;)] </code></pre> <p><img src="https://i.stack.imgur.com/1M5ye.png" alt="Mathematica graphics"></p>
9,304
<p>I have the following question: I have a file that has structure:</p> <pre><code>x1 y1 z1 f1 x2 y2 z2 f2 ... xn yn zn fn </code></pre> <p>I can easily visualize it with <em>Mathematica</em> using <code>ListContourPlot3D</code>. But could you please tell me how I can plot contour plot for this surface? I mean with these data I have a set of surfaces corresponding to different isovalues (f) and I want to plot intersection between all these surfaces and some certain plane. I tried to Google but didn't get any results. Any help and suggestions are really appreciated. Thanks in advance!</p>
Heike
46
<p>You can use the options <code>MeshFunctions</code> in combination with <code>Mesh</code> for this. </p> <p>I'm borrowing Mr.Wizard's data here for a moment:</p> <pre><code>data = Flatten[Table[{x, y, z, x^2 + y^2 - z^2}, {x, -2, 2, 0.2}, {y, -2, 2, 0.2}, {z, -2, 2, 0.2}], 2]; </code></pre> <p>Suppose you want to plot the intersection of the contours of <code>data</code> with the plane <code>x - y == 0</code>, then you could do something like</p> <pre><code>ListContourPlot3D[data, Contours -&gt; {0.5, 2}, ContourStyle -&gt; Opacity[0.3], BoundaryStyle -&gt; Opacity[0.3], MeshFunctions -&gt; {(#1 - #2) &amp;}, Mesh -&gt; {{0}}, MeshStyle -&gt; {Thick, Orange}] </code></pre> <p><img src="https://i.stack.imgur.com/Vn0Gf.png" alt="Mathematica graphics"></p>
42,411
<p>I have some products that I want to increase in value such that a 20% discount gives their current value. It's been ~25 years since college algebra and so I'm a bit rusty on setting up the equation.</p> <p>I've been trying to figure out how to solve for X being the percentage increase needed in order that 20% off would give the current value.</p> <p>For example a product worth 100. I know a 20% increase would make it 120, but 20% off of that would be 96 which isn't 100.</p> <p>I'd give a bounty for explaining the algebra and steps to figure it out, but I'm new to this exchange and am unable to award one - thanks if you spend the time to explain this to me!</p> <p>And if someone wouldn't mind tagging this appropriately for this exchange I'd appreciate it.</p>
Jonas Meyer
1,424
<p>Increasing an amount $A$ by $X$ percent means adding $A\cdot \frac{X}{100}$ to $A$, resulting in $A+A\cdot\frac{X}{100}=A\left(1+\frac{X}{100}\right)$. Decreasing an amount $B$ by $Y$ percent means subtracting $B\cdot\frac{Y}{100}$, resulting in $B-B\cdot\frac{Y}{100}=B\left(1-\frac{Y}{100}\right)$. To have an $X$ percent increase followed by a $20$ percent decrease with an initial amount $A$, you will first multiply by $1+\frac{X}{100}$ to obtain a new amount. If we call that amount $B$, then the next step is to decrease $B$ by $20$ percent by multiplying by $1-\frac{20}{100}$. At this point you will have $A\cdot \left(1+\frac{X}{100}\right)\cdot\left(1-\frac{20}{100}\right)$. For this to leave you where you started, you need to solve the equation $$A\cdot \left(1+\frac{X}{100}\right)\cdot\left(1-\frac{20}{100}\right) =A.$$ You can cancel $A$ from both sides, leaving an equation $$\frac{4}{5}\left(1+\frac{X}{100}\right)=1$$ with $X$ as the only unknown, which can then be solved by division, subtraction, and multiplication. Does that get you where you want to be?</p>
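(Editor's addition, not from the original answer.) Solving $\frac45\left(1+\frac{X}{100}\right)=1$ gives $X=25$; the round trip is easy to verify in Python (names are ours):

```python
def required_markup_percent(discount_percent):
    """X such that raising a price by X% and then discounting by
    `discount_percent`% lands back on the original:
        (1 + X/100) * (1 - d/100) = 1  =>  X = (1/(1 - d/100) - 1) * 100."""
    d = discount_percent / 100.0
    return (1.0 / (1.0 - d) - 1.0) * 100.0
```

For a 20% discount this gives a 25% markup: 100 → 125 → 100.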
3,109,036
<p>I must prove this tautology using logical equivalences but I can't quite figure it out. I know it has something to do with the fact that not p and p have opposite truth values at all times. Any help would be appreciated.</p>
J.G.
56,861
<p>Rewrite <span class="math-container">$p\to\neg q$</span> as <span class="math-container">$q\to\neg p$</span> (both are equivalent to <span class="math-container">$\neg p\lor\neg q$</span>).</p>
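(Editor's addition, not from the original answer.) The equivalence in the hint is easy to confirm by brute force over all truth assignments:

```python
from itertools import product

def implies(a, b):
    """Material implication: a -> b."""
    return (not a) or b

def contrapositive_equivalence_holds():
    """Check that p -> ~q, q -> ~p and ~p v ~q agree everywhere."""
    for p, q in product([False, True], repeat=2):
        forms = (implies(p, not q), implies(q, not p), (not p) or (not q))
        if len(set(forms)) != 1:
            return False
    return True
```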
2,523,000
<p>Three players are playing a game and have a fair six-sided die. This is an arbitrary game conditioned on the following rule:</p> <p>Player 1 rolls first, Player 2 rolls until he has a different number from Player 1, Player 3 rolls until he has a different number from Players 1 and 2. </p> <p>$\underline{Question}$: How do I go about calculating the expected value of each of the players' rolls? </p> <p>Let $X_i =$ number appearing for player $i=1,2,3$. I can get the first one but after that I get stuck in setting up the equation for the next:</p> <p>$\mathbb{E}[X_1] = \frac{1+2+...+6}{6} = 3.5$,</p> <p>$\mathbb{E}[X_2 | X_2 \ne X_1]$ =? Is this what I am looking for, and if so any help calculating it would be appreciated. </p> <p>Best wishes, I.</p>
Christian Blatter
1,303
<p>Since the numerical values of the rolls are of no relevance there could as well be six different animals on the faces of the die. </p> <p>This should make it clear that the expected value is $3.5$ for each of the three players, by symmetry.</p>
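(Editor's addition, not from the original answer.) The symmetry argument is easy to corroborate with a quick simulation — each player's long-run average roll comes out near $3.5$:

```python
import random

def average_rolls(trials=200_000, seed=2):
    """Simulate the game: player 2 re-rolls until different from
    player 1, player 3 until different from both.  Returns the three
    empirical means."""
    rng = random.Random(seed)
    totals = [0, 0, 0]
    for _ in range(trials):
        x1 = rng.randint(1, 6)
        x2 = rng.randint(1, 6)
        while x2 == x1:
            x2 = rng.randint(1, 6)
        x3 = rng.randint(1, 6)
        while x3 == x1 or x3 == x2:
            x3 = rng.randint(1, 6)
        totals[0] += x1
        totals[1] += x2
        totals[2] += x3
    return [t / trials for t in totals]
```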
1,560,192
<ol> <li><p>Compute the determinant of \begin{pmatrix} 1 &amp; 2 &amp; 3 &amp; ...&amp; n\\ -1 &amp; 0 &amp; 3 &amp; ...&amp; n\\ -1 &amp; -2 &amp; 0 &amp; ...&amp; n\\ ...&amp; ...&amp; ...&amp; ...&amp; \\ -1 &amp; -2 &amp; -3 &amp; ...&amp; n \end{pmatrix} After some elementary row operations, one can reach: \begin{pmatrix} 1 &amp; 2 &amp; 3 &amp; ...&amp; n\\ -2 &amp; -2 &amp; 0 &amp; ...&amp; 0\\ 0&amp; -2&amp; -3&amp; ...&amp; 0 \\ ...&amp; ...&amp; ...&amp; ...&amp; \\ 0 &amp; 0 &amp; 0 &amp; 1-n &amp; n \end{pmatrix} but I'm not sure how to proceed.</p></li> <li><p>Why is the determinant of the following matrix divisible by 6 without remainder? \begin{pmatrix} 2^0 &amp; 2^1 &amp; 2^2 \\ 4^0 &amp; 4^1 &amp; 4^2\\ 5^0 &amp; 5^1 &amp; 5^2 \end{pmatrix} So I know that I have to show that its determinant is divisible by $2$ and $3$, or equivalently that the sum of its digits is divisible by $3$ and its last digit is even. But I'm not sure how to start the process. </p></li> </ol> <p>Thank you.</p>
Tim Raczkowski
192,581
<p>Yes, there is a relationship. Note that $$(n+1)^2=n^2+2n+1.$$ </p> <p>So, the difference between a perfect square, $n^2$, and the next perfect square, $(n+1)^2$, is $2n+1$ which is always an odd number.</p>
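(Editor's addition, not from the original answer.) The gap formula is trivial to spot-check in Python:

```python
def square_gap(n):
    """Gap between consecutive squares: (n+1)^2 - n^2 = 2n + 1."""
    return (n + 1) ** 2 - n ** 2
```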
1,560,192
<ol> <li><p>Compute the determinant of \begin{pmatrix} 1 &amp; 2 &amp; 3 &amp; ...&amp; n\\ -1 &amp; 0 &amp; 3 &amp; ...&amp; n\\ -1 &amp; -2 &amp; 0 &amp; ...&amp; n\\ ...&amp; ...&amp; ...&amp; ...&amp; \\ -1 &amp; -2 &amp; -3 &amp; ...&amp; n \end{pmatrix} After some elementary row operations, one can reach: \begin{pmatrix} 1 &amp; 2 &amp; 3 &amp; ...&amp; n\\ -2 &amp; -2 &amp; 0 &amp; ...&amp; 0\\ 0&amp; -2&amp; -3&amp; ...&amp; 0 \\ ...&amp; ...&amp; ...&amp; ...&amp; \\ 0 &amp; 0 &amp; 0 &amp; 1-n &amp; n \end{pmatrix} but I'm not sure how to proceed.</p></li> <li><p>Why is the determinant of the following matrix divisible by 6 without remainder? \begin{pmatrix} 2^0 &amp; 2^1 &amp; 2^2 \\ 4^0 &amp; 4^1 &amp; 4^2\\ 5^0 &amp; 5^1 &amp; 5^2 \end{pmatrix} So I know that I have to show that its determinant is divisible by $2$ and $3$, or equivalently that the sum of its digits is divisible by $3$ and its last digit is even. But I'm not sure how to start the process. </p></li> </ol> <p>Thank you.</p>
costrom
271,075
<p>This result does not need calculus to be shown, just knowing how to expand $(a+b)^n$ for $n = 2$:</p> <p>$(x+1)^2 = x^2+2x+1$</p> <p>Then subtract off $x^2$ (since that is the preceding square) and you get:</p> <p>$x^2+2x+1-x^2 = 2x+1$</p> <p>so each successive difference will increase by two (as it depends on $x$, which is incrementing each time)</p>
25,784
<p>As many Americans know, the “traditional” high school sequence is:</p> <p>Algebra 1</p> <p>Geometry</p> <p>Algebra 2</p> <p>PreCalculus</p> <p>Calculus</p> <p>For those who take developmental education at the community college level, it consists of something like:</p> <p>Developmental Algebra</p> <p>Intermediate Algebra</p> <p>College Algebra</p> <p>PreCalculus</p> <p>Calculus</p> <p>While the college courses cover most of the algebra, there seems to be no Geometry in the curriculum. Why is that? If there's a good reason for it not to be covered in the Community College system, does it still have a place in high school?</p>
Argyll
20,601
<p>Typically, a geometry class in high school teaches Euclidean geometry. Depending on how much time is spent and the exact class, Euclidean geometry as rendered today explores properties of triangles, parallelograms, and circles, while following an axiomatic system, much of which was initially laid out thousands of years ago. The purpose is less to understand these objects deeply per se than to learn how a large set of conclusions can be derived from a few simple assumptions.</p> <p>In higher education past high school, modern mathematics is taught instead -- either to prepare students to research in mathematics, or to prepare students for applications of mathematics. Euclidean geometry would be rather remote from either. No current research fields draw much from old-school Euclidean geometry -- we are thousands of years past that, after all. No common applications require Euclidean geometry either.</p> <p>In the case of training students in logical rigor, an advanced degree offers many more such opportunities. Thus there is no need to spend time on Euclidean geometry. For high school, some of the more intricate machinery in modern mathematics can be too abstract. Euclidean geometry thus became the quick and more viable way to expose students to an axiomatic approach.</p> <hr /> <p>The competition for time should be a thing in high school too. So it's worth mentioning that, in Asian countries, Euclidean geometry is taught in junior high school, i.e. grades 7-9 (though there is little teaching of it in grade 9, which is mostly exam prep), where grades 1-6 are primary school. With that arrangement, any hypothesized benefits would be gained without taking away valuable time in high school that can be used for further education in math -- to teach foundations for calculus (including some basic analysis), probability, etc., which in turn can be especially helpful in preparing students who will enter fields that require applications of math yet won't have a lot of time to spare for it, such as the various engineering fields.</p>
1,550,934
<p>I want to calculate the nth derivative of <span class="math-container">$\arcsin x$</span>. I know <span class="math-container">$$ \frac{d}{dx}\arcsin x=\frac1{\sqrt{1-x^2}} $$</span> And <span class="math-container">$$ \frac{d^n}{dx^n} \frac1{\sqrt{1-x^2}} = \frac{d}{dx} \left(P_{n-1}(x) \frac1{\sqrt{1-x^2}}\right) = \left(-\frac{x}{(1-x^2)^{}} P_{n-1}(x) + \frac{dP_{n-1}}{dx}\right)\frac1{\sqrt{1-x^2}} = P_n(x) \frac1{\sqrt{1-x^2}} $$</span> Hence we have the recursive relation of <span class="math-container">$P_n$</span>: <span class="math-container">$$ P_{n}(x)=-\frac{x}{(1-x^2)^{}} P_{n-1}(x) + \frac{dP_{n-1}}{dx}, \:P_0(x) = 1 $$</span> My question is how to solve the recursive relation involving function and derivative. I think it should use the generating function, but not sure what it is. </p>
Lucian
93,448
<p>Let $~P_n(x)~=~\dfrac{2^n}{n!}~\Big(\sqrt{1-x^2}\Big)^{2n+1}~\bigg(\dfrac1{\sqrt{1-x^2}}\bigg)^{(n)}.~$ Then its coefficients form the sequence described <a href="http://oeis.org/A051288" rel="nofollow">here</a>.</p>
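(Editor's addition, not from the question or answer.) Under a slightly different normalization the recursion becomes purely polynomial, which makes it easy to iterate by machine: writing $\frac{d^n}{dx^n}\frac{1}{\sqrt{1-x^2}} = Q_n(x)\,(1-x^2)^{-(2n+1)/2}$, the product rule gives $Q_{n+1} = (1-x^2)Q_n' + (2n+1)x\,Q_n$ with $Q_0=1$. A sketch (the coefficient-list representation is our own):

```python
def next_q(q, n):
    """Given Q_n as a coefficient list (q[k] = coefficient of x^k),
    return Q_{n+1} = (1 - x^2) * Q_n' + (2n + 1) * x * Q_n."""
    out = [0] * (len(q) + 1)
    for k, c in enumerate(q):
        if k >= 1:                       # Q_n' contributes k*c*x^(k-1)
            out[k - 1] += k * c          # ... times 1
            out[k + 1] -= k * c          # ... times -x^2
        out[k + 1] += (2 * n + 1) * c    # (2n + 1) * x * Q_n term
    return out

def q_poly(n):
    """Q_n, iterated from Q_0 = 1."""
    q = [1]
    for m in range(n):
        q = next_q(q, m)
    return q
```

The first few agree with derivatives computed by hand: $f'=x(1-x^2)^{-3/2}$ gives $Q_1=x$, and $f''=(1+2x^2)(1-x^2)^{-5/2}$ gives $Q_2=1+2x^2$.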
849,486
<p>Find all <em>distinct</em> integers $x$ and $y$ that satisfy the following equation.</p> <p>$$ x\log y=y\log x. $$</p> <p>Obviously, if $x=y$, the equation is satisfied. I found $x=2$, and $y=4$. I think we cannot find all solutions (if there are many).</p> <p>P.S. The base of the $\log(\cdot)$ is not important.</p>
lab bhattacharjee
33,337
<p>For positive real $x,y$ $$x\log y=y\log x\iff \frac1y\log(y)=\frac1x\log(x)\implies y^{\dfrac1y}=x^{\dfrac1x}$$</p> <p>Now study $f(z)=z^{\dfrac1z}$: it is increasing on $(0,e)$ and decreasing on $(e,\infty)$, so a solution with $x\ne y$ must pair one value from $(1,e)$ with one from $(e,\infty)$.</p>
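(Editor's addition, not from the original answer.) A brute-force scan corroborates that $\{2,4\}$ is the only pair of distinct integers with $x\log y = y\log x$ in a reasonable range:

```python
import math

def equal_pairs(limit=100, tol=1e-9):
    """Pairs x < y of integers in [2, limit] with x*log(y) ~= y*log(x),
    i.e. x^(1/x) == y^(1/y).  (x = 1 can never pair: log y != 0.)"""
    pairs = []
    for x in range(2, limit + 1):
        for y in range(x + 1, limit + 1):
            if abs(x * math.log(y) - y * math.log(x)) < tol:
                pairs.append((x, y))
    return pairs
```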
4,127,468
<p>Suppose we have two functions <span class="math-container">$f,g:\Bbb R\rightarrow \Bbb R$</span>. The chain rule states the following about the derivative of the composition of these functions, namely that <span class="math-container">$$ (f \circ g)'(x) = f′(g(x))\cdot g′(x). $$</span> However, the equivalent expression using Leibniz notation seems to be saying something different. I know that <span class="math-container">$f'(g(x))$</span> means the derivative of <span class="math-container">$f$</span> evaluated at <span class="math-container">$g(x)$</span>, but when considering the Leibniz equivalent of the chain rule, it appears that it should really mean the derivative of <span class="math-container">$f$</span> with respect to <span class="math-container">$g(x)$</span>. If we let <span class="math-container">$z=f(y)$</span> and y=<span class="math-container">$g(x)$</span>, then <span class="math-container">$$ {\frac {dz}{dx}}={\frac {dz}{dy}}\cdot {\frac {dy}{dx}}. $$</span> Where here the <span class="math-container">$\frac{dz}{dy}$</span> corresponds to <span class="math-container">$f'(g(x))$</span>. Since <span class="math-container">$y=g(x)$</span>, I am tempted to believe that the expression <span class="math-container">$f'(u)$</span> means the derivative of <span class="math-container">$f$</span> with respect to <span class="math-container">$u$</span>; it would make sense in this case as we are treating <span class="math-container">$g(x)$</span> as the independant variable. This leaves me with the question: does <span class="math-container">$f'(g(x))$</span> mean the derivative of <span class="math-container">$f$</span> evaluated at <span class="math-container">$g(x)$</span>, <span class="math-container">$\frac{df}{dx} \Bigr\rvert_{x = g(x)}$</span>, or the derivative of <span class="math-container">$f$</span> with respect to <span class="math-container">$g(x)$</span>, <span class="math-container">$\frac{df}{dg(x)}?$</span></p>
user21820
21,820
<p>While the other answers deal with the modern definition of derivatives, it is <strong>not</strong> actually impossible to make the original Leibniz notation completely rigorous as I sketched <a href="http://math.stackexchange.com/a/2118909/21820">here (see &quot;Notes&quot;)</a>. In fact, doing so yields a generalization of the usual notion of derivatives (at least for one parameter), as shown by the examples in the linked post.</p> <p>Furthermore, we can completely explain the error in your reasoning in this framework. <span class="math-container">$ \def\lfrac#1#2{{\large\frac{#1}{#2}}} $</span></p> <p>Take any variables <span class="math-container">$x,y,z$</span> varying with parameter <span class="math-container">$t$</span> (which may well be <span class="math-container">$x$</span> or may be something else we do not care about). Then whenever <span class="math-container">$\lfrac{dz}{dy},\lfrac{dy}{dx}$</span> are defined, we have <span class="math-container">$\lfrac{dz}{dx} = \lfrac{dz}{dy} · \lfrac{dy}{dx}$</span>. If furthermore there are functions <span class="math-container">$f,g$</span> such that <span class="math-container">$z = f(y)$</span> and <span class="math-container">$y = g(x)$</span> everywhere (i.e. for every <span class="math-container">$t$</span>), then by plain substitution <span class="math-container">$\lfrac{d(f(g(x)))}{dx} = \lfrac{d(f(y))}{dy} · \lfrac{d(g(x))}{dx}$</span>, which is equivalent to <span class="math-container">$(f∘g)'(x) = f'(y) · g'(x)$</span>. Since <span class="math-container">$f'(y) = f'(g(x))$</span> everywhere, there is nothing wrong here at all!</p> <p>So what is the error? <span class="math-container">$f'(u)$</span> is <strong>not</strong> &quot;the derivative of <span class="math-container">$f$</span> with respect to <span class="math-container">$u$</span>&quot;. 
That phrase actually does not make sense, because <span class="math-container">$f$</span> is a function in the modern sense and does <strong>not</strong> have any 'independent variable'! Instead, <span class="math-container">$f'(u) = \lfrac{d(f(u))}{du}$</span> for every variable <span class="math-container">$u$</span> whose value is always in the domain of <span class="math-container">$f$</span>.</p> <p>So <span class="math-container">$f'(g(x))$</span> <strong>is</strong> the derivative of <span class="math-container">$f$</span> at <span class="math-container">$g(x)$</span> but <strong>is not</strong> what you thought. Your &quot;<span class="math-container">$\lfrac{df}{dx}|_{x=g(x)}$</span>&quot; does not make sense for two reasons: (1) Leibniz notation cannot be (correctly) mixed with (modern) functions, so &quot;<span class="math-container">$\lfrac{df}{dx}$</span>&quot; is incorrect; (2) &quot;<span class="math-container">$x=g(x)$</span>&quot; is meaningless. Instead, <span class="math-container">$f'(g(x)) = \lfrac{d(f(g(x)))}{d(g(x))}$</span>, exactly in line with the above explanation of the Leibniz chain rule.</p> <p>By the way, the reason for having variables <span class="math-container">$x,y,z$</span> possibly different from the underlying parameter <span class="math-container">$t$</span> is that in many applications it is often the case that we are interested in variables that in reality vary with respect to time <span class="math-container">$t$</span>, but have some relation that does not depend on time, such as <a href="http://math.stackexchange.com/a/2176223/21820">here</a>.</p>
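(Editor's addition, not from the original answer.) Whatever notation one prefers, the chain rule identity itself is easy to sanity-check numerically with finite differences (the example functions are our own):

```python
import math

def numeric_derivative(fn, x, h=1e-6):
    """Central-difference approximation of fn'(x)."""
    return (fn(x + h) - fn(x - h)) / (2.0 * h)

def chain_rule_gap(x):
    """|(f o g)'(x) - f'(g(x)) * g'(x)| with f = sin and g(t) = t^2,
    everything estimated by finite differences."""
    def g(t):
        return t * t

    def composed(t):
        return math.sin(g(t))

    lhs = numeric_derivative(composed, x)
    rhs = numeric_derivative(math.sin, g(x)) * numeric_derivative(g, x)
    return abs(lhs - rhs)
```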