2,957,481
<p>How many ways are there to roll a die six times such that there are more ones than twos?</p> <p>I broke this up into six cases: </p> <p><span class="math-container">$\textbf{EDITED!!!!!}$</span></p> <p><span class="math-container">$\textbf{Case 1:}$</span> One 1 and NO 2s --> 1x4x4x4x4x4 = <span class="math-container">$4^5$</span>. This can be arranged in six ways: <span class="math-container">$\dfrac{6!}{5!}$</span>. So there are <span class="math-container">$\dfrac{6!}{5!}$$4^5$</span> ways for this case. </p> <p><span class="math-container">$\textbf{Case 2:}$</span> Two 1s and One 2 OR NO 2s --> 1x1x5x4x4x4 = <span class="math-container">$5x4^3$</span>. This can be arranged in <span class="math-container">$\dfrac{6!}{2!3!}$</span> ways. So there are <span class="math-container">$(6x4^3)$$\dfrac{6!}{2!3!}$</span> ways for this case. </p> <p><span class="math-container">$\textbf{Case 3:}$</span> Three 1s and Two, One or NO 2s --> 1x1x1x5x5x4 = 4x<span class="math-container">$5^2$</span>. This can be arranged in <span class="math-container">$\dfrac{6!}{2!3!}$</span> ways. So there are <span class="math-container">$(4x5^2)$$\dfrac{6!}{2!3!}$</span> ways for this case. </p> <p><span class="math-container">$\textbf{Case 4:}$</span> Four 1s and Two, One or NO 2s --> 1x1x1x1x5x5 = <span class="math-container">$5^2$</span>. This can be arranged in <span class="math-container">$\dfrac{6!}{4!2!}$</span> ways. So there are <span class="math-container">$(5^2)$$\dfrac{6!}{4!2!}$</span> ways for this case. </p> <p><span class="math-container">$\textbf{Case 5:}$</span> Five 1s and One or NO 2s --> 1x1x1x1x1x5 = 5. This can be arranged in <span class="math-container">$\dfrac{6!}{5!}$</span> ways = 6 ways. So there are <span class="math-container">$5^3$</span> ways for this case. </p> <p><span class="math-container">$\textbf{Case 6:}$</span> Six 1s and NO 2s --> 1x1x1x1x1x1 = 1. There is only one way to arrange this so there is only 1 way for this case. 
</p> <p>With this logic...I would add the number of ways from each case to get my answer. </p>
dan_fulea
550,003
<p>If the question is "where is my error", then please ignore this answer, which gives another way to count. The idea is that there are either more <span class="math-container">$1$</span>'s, case (1), or more <span class="math-container">$2$</span>'s, case (2), or they occur equally often, case <span class="math-container">$(=)$</span>. Of course, the count of possibilities for case (1) is the same as for case (2), so we simply count the possibilities for <span class="math-container">$(=)$</span>, subtract them from the total, and divide by <span class="math-container">$2$</span>; the number we are searching for is: <span class="math-container">$$ N = \frac 12\left(\ 6^6 -\binom 60\binom 60 4^{6-0} -\binom 61\binom 51 4^{6-2} -\binom 62\binom 42 4^{6-4} -\binom 63\binom 33 4^{6-6} \ \right) \ . $$</span> The products of binomial coefficients of the shape <span class="math-container">$\binom 6k\binom{6-k}k$</span>, <span class="math-container">$k=0,1,2,3$</span> count the ways to fix the <span class="math-container">$k$</span> places among <span class="math-container">$6$</span> places for the <span class="math-container">$1$</span>'s, and then the <span class="math-container">$k$</span> places for the <span class="math-container">$2$</span>'s among the remaining <span class="math-container">$6-k$</span>. We then get <span class="math-container">$$ N = \frac 12\left(\ 6^6 - 4^6 - 6\cdot 5\cdot 4^4 - 15\cdot 6\cdot 4^2 - 20\cdot 1\cdot 4^0 \ \right) = 16710\ . $$</span> We can also check this answer with <a href="http://www.sagemath.org" rel="nofollow noreferrer">sage</a>, by enumerating all possibilities.</p> <pre><code>sage: len( [ x for x in cartesian_product( [[1..6] for _ in [1..6]] ) ....: if list(x).count(1) &gt; list(x).count(2) ] ) 16710 </code></pre>
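The same enumeration can be done in plain Python (a quick sketch mirroring the sage check above):

```python
from itertools import product

# enumerate all 6^6 = 46656 outcomes of six die rolls and
# count those with strictly more ones than twos
count = sum(
    1
    for rolls in product(range(1, 7), repeat=6)
    if rolls.count(1) > rolls.count(2)
)
print(count)  # 16710
```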
1,723,299
<p>I can work out a heuristic argument for $n=2$: (homeomorphically) turning the disc $D^2$ into something like a funnel (no pipe of course), then gradually contracting the open "mouth" of the funnel to a smaller and smaller hole which eventually vanishes (or "heals", if you imagine the hole as an open wound), we have now got an $S^2$. </p> <p>This argument unfortunately doesn't carry over into higher dimensions. So I think I have to find another approach. Hopefully this simple-looking result $D^n/S^{n-1}\cong S^n$ doesn't entail a dreadfully hard proof. So is there any easy way out, or is there any theorem from which the result is just a few steps away? Thanks in advance. </p>
Hagen von Eitzen
39,174
<p>I view $S^{n-1}$ and $D^n$ as subspaces of $\Bbb R^n$ in the obvious manner. With the abbreviations $$\begin{align}r&amp;:=\sqrt{x_1^2+\ldots +x_n^2}\\\alpha&amp;:=2\arcsin r\\h&amp;:=\begin{cases}\frac{\sin \alpha}{r}&amp;r&gt;0\\2&amp;r=0\end{cases}\end{align}$$ (which is continuous!), the map $$(x_1,\ldots,x_n)\mapsto (hx_1,\ldots,hx_n,\cos\alpha) $$ maps $D^n\to S^{n}$ while smashing precisely $S^{n-1}$ into the south pole.</p>
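A numerical sanity check of this map for $n=2$ (a sketch; the formulas are transcribed directly from the answer):

```python
import math

def smash(x):
    """Map a point x of D^n to S^n; the boundary S^{n-1} goes to the south pole."""
    r = math.sqrt(sum(t * t for t in x))
    a = 2 * math.asin(r)
    h = math.sin(a) / r if r > 0 else 2.0  # continuous extension at r = 0
    return tuple(h * t for t in x) + (math.cos(a),)

# an interior point lands on the unit sphere: |h x|^2 + cos^2(a) = sin^2(a) + cos^2(a) = 1
y = smash((0.3, 0.4))
# a boundary point (r = 1) is sent to the south pole (0, 0, -1)
z = smash((1.0, 0.0))
```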
52,364
<p>A given user can interact in multiple ways with a website. Let's simplify a bit and say a user can:</p> <ul> <li>Post a message</li> <li>Comment on a message</li> <li>"like" something on the website via Facebook</li> </ul> <p><em>(after that we could add following the site on Twitter, buying something on the site &amp; so on, but for readability's sake let's stick to these 3 cases)</em></p> <p>I'm trying to find a formula that could give me a number between 0 and 100 that accurately reflects the user's interaction with the given website.</p> <p><strong>It has to take the following into account:</strong></p> <ul> <li><p>A user with 300 posts and one with 400 should have almost the same score, very close to the maximum</p></li> <li><p>A user should see his number increase faster at the beginning. For instance a user with 1 post would have 5/100, a user with 2 would have 9/100, one with 3 would have 12/100 and so on.</p></li> <li><p>Each of these interactions has a different weight because they do not imply the same level of involvement. It would go this way: <code>Post &gt; Comment &gt; Like</code></p></li> <li><p>In the end, the distribution of data should be a bit like the following, meaning a lot of users around 0-50, and then the users really interacting with the website.</p></li> </ul> <p><img src="https://i.stack.imgur.com/65JVm.jpg" alt="enter image description here"></p> <hr> <p>This is quite specific and data-dependent, but I'm not looking for the perfect formula but more for how to approach this problem.</p>
Ilmari Karonen
9,602
<p>Well, one approach might be to just assign a fixed score for each action, sum the scores of all actions taken by the user, and then apply a saturating function like $f(x) = 1-\exp(-x)$ to the result. Of course, it may be easier to store the raw sum of scores internally and just apply $f$ when displaying it.</p> <p>To elaborate a little, let's say you use the saturating function $f(x) = 100(1-\exp(-x/100))$. This function is close to the identity when $x$ is small, so that e.g. $f(5) \approx 4.9$, $f(10) \approx 9.5$, $f(15) \approx 13.9$ and so on. It saturates at 100, so that e.g. $f(250) \approx 91.8$, $f(500) \approx 99.3$ and $f(1500) \approx f(2000) \approx 100$. If you internally award 5 points for each post, the adjusted score should look pretty much like your examples.</p>
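A minimal sketch of this scheme in Python. The 5-points-per-post figure is from the answer; the comment and like weights are made-up placeholders chosen only to respect `Post > Comment > Like`:

```python
import math

# raw points per action; only the post weight is taken from the answer,
# the other two are illustrative placeholders
WEIGHTS = {"post": 5.0, "comment": 2.0, "like": 1.0}

def score(posts=0, comments=0, likes=0):
    """Saturating engagement score in [0, 100)."""
    raw = (posts * WEIGHTS["post"]
           + comments * WEIGHTS["comment"]
           + likes * WEIGHTS["like"])
    return 100.0 * (1.0 - math.exp(-raw / 100.0))
```

With these weights, one post scores about 4.9, and 300 vs 400 posts are both indistinguishable from 100, matching the requirements in the question.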
32,111
<p>Let $p : Y \to X$ be an $n$-sheeted covering map, where $X$ and $Y$ are topological spaces. If $X$ is compact, prove that $Y$ is compact.</p> <p>I realize that this seems like a very simple problem, but I want to stress the lack of assumptions on $X$ and $Y$. For example, this is very easy to prove if we can assume that $X$ and $Y$ are metrizable, for sequential compactness is then equivalent to compactness and it is easy to lift sequential compactness from $X$ to $Y$.</p> <p>I asked three people in person this question and all of them immediately made the assumption that $X$ and $Y$ are metrizable, so I feel like I should put in this warning here that they are not.</p>
Greg Kuperberg
1,450
<p>A direct argument without the use of nets:</p> <p>Let $\mathcal{C}$ be an open cover of $Y$. For each $p \in X$, choose an open set $p \in U \subseteq X$ such that $Y$ is trivial over $U$, and such that each lift of $U$ is contained in some element of $\mathcal{C}$. This is an open cover $\mathcal{D}$ of $X$, which has a finite subcover $\mathcal{D}'$ since $X$ is compact. The lift of $\mathcal{D}'$ to $Y$ is also a finite cover, as well as a cover that refines $\mathcal{C}$. Thus $\mathcal{C}$ must have a finite subcover. (The fact that $Y$ is a finite cover is used twice, first to make each $U$, second to lift $\mathcal{D}'$.)</p>
619,607
<p>$\mathbf{(1)}$ Find $y^{\prime}$ of $y=8^{\sqrt x}$ </p> <p>My try: </p> <p>$\ln y=\ln\left(8^{\sqrt x}\right)$<br> $\dfrac{1}{y}y^{\prime}=\sqrt{x}\ln8$<br> I don't know how to proceed with the right side. </p> <p>$\mathbf{(2)}$ Find $y^{\prime}$ of $y=(t+4)(t+6)(t+7).$</p> <p>This one I have no idea what to do with, so I don't have any work to show. My text says to use logarithmic differentiation, but I still don't know how to solve it. </p> <p>Thank you. </p>
Ragnar
91,741
<p>For (2), you get $$ \ln y=\ln (t+4)+\ln (t+6)+\ln (t+7) $$ Differentiating gives $$ \frac{ y'}y=\frac1{t+4}+\frac1{t+6}+\frac1{t+7} $$ thus \begin{align} y'&amp;=y\left(\frac1{t+4}+\frac1{t+6}+\frac1{t+7}\right)\\ &amp;=(t+4)(t+6)(t+7)\left(\frac1{t+4}+\frac1{t+6}+\frac1{t+7}\right) \end{align}</p>
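A quick numerical check of this formula (a sketch; it compares the logarithmic-differentiation result against a central finite difference at a test point):

```python
def y(t):
    return (t + 4) * (t + 6) * (t + 7)

def y_prime(t):
    # logarithmic differentiation: y' = y * (1/(t+4) + 1/(t+6) + 1/(t+7))
    return y(t) * (1 / (t + 4) + 1 / (t + 6) + 1 / (t + 7))

# central finite difference as an independent check
h = 1e-6
t0 = 1.0
numeric = (y(t0 + h) - y(t0 - h)) / (2 * h)
# at t = 1 the exact derivative is 3*1 + 34 + 94/... expanded: 3t^2 + 34t + 94 -> 131
```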
619,607
<p>$\mathbf{(1)}$ Find $y^{\prime}$ of $y=8^{\sqrt x}$ </p> <p>My try: </p> <p>$\ln y=\ln(8)^{\sqrt x}$<br> $\dfrac{1}{y}y^{\prime}=\sqrt{x}\ln8$<br> I don't know how to proceed with right side. </p> <p>$\mathbf{(2)}$ Find $y^{\prime}$ of $y=(t+4)(t+6)(t+7).$</p> <p>This one I have no idea what to do so I don't have any work to show. My text says to use logarithmic differentiation, but still I don't how to solve this. </p> <p>Thank you. </p>
Ross Millikan
1,827
<p>For 1, plug in the expression you started with for $y$ and simplify. For 2, I would not normally use logarithmic differentiation; I would just use the product rule. Quick and easy. But if you must, you have the same approach: assuming $t \gt -4$, $\log y= \log (t+4) +\dots$; differentiate to get $\frac {y'}y$, replace the $y$, multiply out, and get the same answer.</p>
2,210,317
<p>Simplify $\sum^{n}_{k=1} (-1)^k(n-k)!(n+k)!$. </p> <p>I thought of interpreting a term $(n-k)!(n+k)!=\frac{(2n)!}{\dbinom{2n}{n-k}}$, but I do not know of any ways to sum this efficiently.</p>
hxthanh
58,554
<p>Note that $$(-1)^k(n-k)!(n+k)!=(-1)^k(n-k)!(n+k)!\cdot\frac{(n+k+1)+(n-k+1)}{2n+2}=\frac{g(k)-g(k+1)}{2n+2},$$ where $g(k)=(-1)^k(n+1-k)!\,(n+k)!$. The sum therefore telescopes: $$\sum_{k=1}^n(-1)^k(n-k)!(n+k)!=\frac{g(1)-g(n+1)}{2n+2}=\frac{1}{2n+2}\bigg((-1)^n(2n+1)!-n!(n+1)!\bigg)$$</p>
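The closed form is easy to test against direct evaluation (a sketch; integer division is exact here because the identity holds over the integers):

```python
from math import factorial as fact

def lhs(n):
    # direct evaluation of the alternating sum
    return sum((-1) ** k * fact(n - k) * fact(n + k) for k in range(1, n + 1))

def rhs(n):
    # closed form from the telescoping argument
    return ((-1) ** n * fact(2 * n + 1) - fact(n) * fact(n + 1)) // (2 * n + 2)
```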
309,872
<blockquote> <p>Let $a$ and $b$ be elements of a group $G$. If $|a| = 12, |b| = 22$ and $\langle a \rangle \: \cap \: \langle b\rangle \ne e$, prove that $a^6 = b^{11}$.</p> </blockquote> <p>Any ideas where to start would be helpful.</p> <p>Thanks.</p>
Phani Raj
42,031
<p>First observe that $\langle a\rangle \cap \langle b\rangle$ is a subgroup of both $\langle a\rangle$ and $\langle b\rangle$, so its order divides $\gcd(12,22)=2$. Since the intersection is not trivial, it has order exactly $2$; let $d$ be its element with $d^{2}=e$. A cyclic group has at most one element of order $2$: in $\langle a\rangle$ it is $a^{6}$, and in $\langle b\rangle$ it is $b^{11}$. Therefore $a^{6}=d=b^{11}$.</p>
1,217,805
<p>Let's say I have a set of linearly independent vectors, collected in a square matrix $\mathbf{M}$.</p> <p>I know that I could orthogonalize these vectors with the QR decomposition,</p> <p>$\mathbf{M} = \mathbf{QR}$</p> <p>where $\mathbf{Q}$ is my orthogonal set of vectors.</p> <p>I'm curious if it is possible to do the same with the SVD, but I cannot figure out how. </p> <p>I want to know how to go from $\mathbf{M}$ to $\mathbf{Q}$, using SVD instead of QR.</p> <p>If this is not possible, I'd like to know as well.</p>
Ben Grossmann
81,360
<p>If $M$ is a matrix of $k$ linearly independent columns with SVD $M = U \Sigma V^H$, then the first $k$ columns of $U$ form an orthonormal basis for the span of the columns of $M$.</p>
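A small numpy illustration (a sketch; note that, unlike $Q$ from QR, these columns are not in one-to-one correspondence with the columns of $M$, they only span the same space):

```python
import numpy as np

M = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 7.0]])          # two linearly independent columns

U, s, Vh = np.linalg.svd(M, full_matrices=False)
k = int(np.sum(s > 1e-12))          # numerical rank
Q = U[:, :k]                        # orthonormal basis for the column space

# Q has orthonormal columns, and projecting M onto span(Q) recovers M
```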
168,224
<p>I am doing a project on Hamiltonian group actions on symplectic manifolds, and my supervisor was able to list several good books on Riemannian geometry to start me off, but he didn't know of any single place to learn about Hamiltonian groups. </p> <p>I have found some books (even available online from the author!) that come highly recommended, specifically: <em>Introduction to Symplectic and Hamiltonian Geometry, A. C. da Silva.</em></p> <p>As the title suggests however, this seems to come from more of a geometric standpoint. Which books are recommended that might focus on the group-theory and topology end of this subject? The project description specifically mentions cohomological obstructions, something that I think is related to group cohomology? (At this point I'm getting all of this from Wikipedia...) </p> <p>I have had basic, introductory courses in Differential Geometry (in $\mathbb{R}^n$) and in topology (up to calculating the first fundamental group of a topological space). </p> <p>Thank you in advance for any input!</p>
Community
-1
<p>The best book about Hamiltonian actions is</p> <p><a href="http://lib.org.by/get/M_Mathematics/MD_Geometry%20and%20topology/MDdg_Differential%20geometry/Ginzburg%20V.L.,%20Guillemin%20V.,%20Karshon%20Y.%20Moment%20maps,%20cobordisms,%20and%20Hamiltonian%20group%20actions%20%28AMS,%202002%29%28ISBN%200821805029%29%28356s%29_MDdg_.pdf" rel="nofollow">Moment Maps, Cobordisms, and Hamiltonian Group Actions by Victor Guillemin, Yael Karshon, Viktor L. Ginzburg</a></p> <p>A second excellent lecture note is from Heckman:</p> <p><a href="http://www.staff.science.uu.nl/~kolk0101/SpringSchool2004/momentum.pdf" rel="nofollow">Lecture notes on Geometry of the momentum map, written with Gert Heckman</a></p> <p>Moreover, from a geometric point of view this book is excellent:</p> <p><a href="http://en.bookfi.org/book/1307294" rel="nofollow">Convexity Properties of Hamiltonian Group Actions</a> by Victor Guillemin, Reyer Sjamaar</p> <p>The following note is also good and introductory:</p> <p><a href="http://www.math.nyu.edu/~kessler/teaching/group/Talk4.pdf" rel="nofollow">Hamiltonian group actions, Sara Grundel</a></p> <p>Also the master's thesis entitled "<a href="http://math.berkeley.edu/~alanw/pirlmasters.pdf" rel="nofollow">The Momentum Map, Symplectic Reduction and an Introduction to Brownian Motion</a>", which was supervised by Alan Weinstein, is very good.</p> <p>And finally, if you know French, as Francois said: <a href="http://www.jmsouriau.com/structure_des_systemes_dynamiques.htm" rel="nofollow">Structure des systèmes dynamiques</a></p>
3,464,383
<p>I couldn't find any substantial list of 'strange infinite convergent series' so I wanted to ask the MSE community for some. By <em>strange</em>, I mean infinite series/limits that <strong>converge when you would not expect them to and/or converge to something you would not expect</strong>.</p> <p>My favorite converges to Khinchin's (sometimes Khintchine's) constant, <span class="math-container">$K$</span>. For almost all <span class="math-container">$x \in \mathbb{R}$</span> (those for which this does not hold making up a measure-zero subset) with infinite c.f. representation: <span class="math-container">$$x = a_0 + \frac{1}{a_1+\frac{1}{a_2+\frac1{\ddots}}}$$</span> We have: <span class="math-container">$$\lim_{n \to \infty} \root n \of{\prod_{i=1}^na_i} = \lim_{n \to \infty}\root n \of {a_1a_2\dots a_n} = K$$</span> Which is...wow! That it converges independent of <span class="math-container">$x$</span> really gets me.</p>
Zhant
493,028
<p>A pretty commonly mentioned one is the <a href="https://en.wikipedia.org/wiki/Kempner_series" rel="noreferrer">Kempner series</a>, which is the Harmonic series but "throwing out" (omitting) the numbers with a 9 in their decimal expansion. And 9 is not special; you can generalize to any finite sequence of digits, and the series will converge. <a href="http://mathworld.wolfram.com/KempnerSeries.html" rel="noreferrer">MathWorld</a> has approximate values for the single-digit possibilities.</p>
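The slow convergence can be poked at numerically (a sketch; the partial sums creep toward the known value of roughly 22.92 for omitted digit 9, far too slowly to reach it by brute force):

```python
def kempner_partial(N, forbidden="9"):
    """Partial sum of the harmonic series over n <= N whose decimal
    expansion avoids the forbidden digit string."""
    return sum(1.0 / n for n in range(1, N + 1) if forbidden not in str(n))

# the partial sums increase, but stay bounded by the (finite) limit
```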
3,464,383
<p>I couldn't find any substantial list of 'strange infinite convergent series' so I wanted to ask the MSE community for some. By <em>strange</em>, I mean infinite series/limits that <strong>converge when you would not expect them to and/or converge to something you would not expect</strong>.</p> <p>My favorite converges to Khinchin's (sometimes Khintchine's) constant, <span class="math-container">$K$</span>. For almost all <span class="math-container">$x \in \mathbb{R}$</span> (those for which this does not hold making up a measure-zero subset) with infinite c.f. representation: <span class="math-container">$$x = a_0 + \frac{1}{a_1+\frac{1}{a_2+\frac1{\ddots}}}$$</span> We have: <span class="math-container">$$\lim_{n \to \infty} \root n \of{\prod_{i=1}^na_i} = \lim_{n \to \infty}\root n \of {a_1a_2\dots a_n} = K$$</span> Which is...wow! That it converges independent of <span class="math-container">$x$</span> really gets me.</p>
Descartes Before the Horse
592,365
<p>To add another; I was surprised when I learned the two sums: <span class="math-container">$$\sum_{k=1}^{\infty}\frac1{k^2} = \frac{\pi^2}{6}$$</span> <span class="math-container">$$\sum_{k=1}^{\infty}\frac{(-1)^{k+1}}{k^2} = \frac{\pi^2}{12}$$</span> And thought the intuition behind the second coming from the famous first sum was neat.</p>
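Both values are easy to confirm numerically (a sketch; the alternating series is taken with a leading $+$ sign so that it sums to $+\pi^2/12$):

```python
import math

N = 100_000
# Basel problem: partial sums approach pi^2/6 with error about 1/N
basel = sum(1.0 / k**2 for k in range(1, N + 1))
# alternating version: error is bounded by the first omitted term, ~1/N^2
alternating = sum((-1) ** (k + 1) / k**2 for k in range(1, N + 1))
```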
3,464,383
<p>I couldn't find any substantial list of 'strange infinite convergent series' so I wanted to ask the MSE community for some. By <em>strange</em>, I mean infinite series/limits that <strong>converge when you would not expect them to and/or converge to something you would not expect</strong>.</p> <p>My favorite converges to Khinchin's (sometimes Khintchine's) constant, <span class="math-container">$K$</span>. For almost all <span class="math-container">$x \in \mathbb{R}$</span> (those for which this does not hold making up a measure-zero subset) with infinite c.f. representation: <span class="math-container">$$x = a_0 + \frac{1}{a_1+\frac{1}{a_2+\frac1{\ddots}}}$$</span> We have: <span class="math-container">$$\lim_{n \to \infty} \root n \of{\prod_{i=1}^na_i} = \lim_{n \to \infty}\root n \of {a_1a_2\dots a_n} = K$$</span> Which is...wow! That it converges independent of <span class="math-container">$x$</span> really gets me.</p>
Clement C.
75,808
<p>I still like the fact that <span class="math-container">$$ \sum_{n=N}^\infty \frac{1}{n\ln n \cdot \ln \ln n \cdot \ln \ln \ln n \cdot \ln \ln \ln \ln n} $$</span> diverges, but <span class="math-container">$$ \sum_{n=N}^\infty \frac{1}{n\ln n \cdot \ln \ln n \cdot \ln \ln \ln n \cdot (\ln \ln \ln \ln n)^{1.01}} $$</span> converges (where <span class="math-container">$N$</span> is a large enough constant for the denominator to be defined).</p>
294,972
<p>I am seeking a particular integral of the differential equation:</p> <p>$u^{(4)}(t) - 5u''(t) + 4u(t) - t^3 = 0$</p> <p>I am simply interested in the technique, not just an answer (Mathematica suffices for that). How would one apply undetermined coefficients in this case?</p>
Gerry Myerson
8,269
<p>The characteristic polynomial is $$z^4-5z^2+4=(z^2-1)(z^2-4)=(z-1)(z+1)(z-2)(z+2)$$ Since $z=0$ is not a root, the homogeneous equation has no polynomial solutions. Therefore, the idea David Mitra gives in the comments will work. </p>
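Carrying the method through (a sketch I worked out for illustration, not part of the original answer): trying $u_p=at^3+bt^2+ct+d$ in $u^{(4)}-5u''+4u=t^3$ gives $4a=1$, $4b=0$, $4c-30a=0$, $4d-10b=0$, i.e. $u_p(t)=\tfrac{t^3}{4}+\tfrac{15t}{8}$. A quick residual check:

```python
# candidate particular solution and its hand-computed derivatives
def u(t):   return t**3 / 4 + 15 * t / 8
def u2(t):  return 3 * t / 2    # u''
def u4(t):  return 0.0          # u'''' (vanishes for any cubic)

# residual of u'''' - 5u'' + 4u - t^3 should vanish identically
for t in (-2.0, 0.5, 3.0):
    assert abs(u4(t) - 5 * u2(t) + 4 * u(t) - t**3) < 1e-12
```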
35,579
<p>Consider the kernel of the homomorphism from the product of two copies of the free group, $F_2 \times F_2$, onto the integers sending every generator to $1$. How can one see that this subgroup is not finitely presented?</p>
Derek Holt
2,820
<p>This is actually a standard example of a finitely generated but not finitely presented group. I seem to remember that there is a homological proof. I have been trying to think of a reasonably straightforward group-theoretical proof, but I am running out of time.</p> <p>What I have done is to calculate an (infinite) presentation of the kernel $K$ in question. Let the two copies of $F_2$ be generated by $\{a,b\}$ and $\{x,y\}$. Then $K$ is generated by $B := ba^{-1}$, $X := xa^{-1}$ and $Y := ya^{-1}$. Using a reasonably straightforward Reidemeister-Schreier calculation, we get the presentation</p> <p>$\langle\, X, Y, B \mid [B,(YX^{-1})^{X^i}]\: (i \in \mathbb{Z})\, \rangle$</p> <p>of $K$, which is an HNN-extension of the free group $\langle X,Y \rangle$ with stable letter $B$, where the subgroup generated by $(YX^{-1})^{X^i}$ for $i \in \mathbb{Z}$, which is not finitely generated, is centralized by $B$.</p> <p>Edited: To show that $K$ is not finitely presentable, it is enough to show that any group $K&#39;$ defined by a presentation using a finite subset of the relators in the above presentation is unequal to $K$. Notice that $K&#39;$ is also an HNN-extension of $\langle X,Y \rangle$ by $B$, but the subgroup of $\langle X,Y \rangle$ centralized by $B$ is finitely generated, and so is a proper subgroup of the subgroup centralized by $B$ in $K$. So $K \ne K&#39;$ and hence $K$ is not finitely presentable.</p>
3,386,696
<p>I need to solve <span class="math-container">$\int_{0}^{\frac{\pi}{2}} \cos^6x\,dx$</span>. I tried to use <span class="math-container">$\cos(3x)=4\cos^3x-3\cos x$</span> but did not succeed.</p>
Kavi Rama Murthy
142,385
<p><span class="math-container">$$P(a)=P(a|b)P(b)+P(a|b')P(b')\leq \max \{P(a|b),P(a|b')\} P(b)+\max \{P(a|b),P(a|b')\} P(b')$$</span> <span class="math-container">$$=\max \{P(a|b),P(a|b')\}$$</span> since <span class="math-container">$P(b)+P(b')=1$</span>. The left hand inequality is similar.</p>
818,850
<p>I want to generate the numbers $1$ to $10$ with a uniform probability distribution. So I write the numbers $1$ to $10$ in their natural order, and I keep writing the next $10$-number block by permuting the first $10$ numbers. Can this long string be considered random with a uniform distribution? Is there a problem?</p>
Mark Bennet
2,906
<p>Here is another way of approximating the square root of two by rational numbers which doesn't depend on the decimal system.</p> <p>Suppose $p^2-2q^2=\pm 1$ so that $\left(\cfrac pq\right)^2=2\pm\cfrac 1{q^2}$; then the larger we can make $q$ the closer $\cfrac pq$ is to $\sqrt 2$.</p> <p>Consider now $(p+2q)^2-2(p+q)^2=p^2+4pq+4q^2-2p^2-4pq-2q^2=2q^2-p^2=\mp 1$ so that $\cfrac {p+2q}{p+q}$ is a better approximation.</p> <p>From this we obtain the approximations $$\frac 11, \frac 32, \frac 75, \frac {17}{12}, \frac {41}{29},\frac {99}{70} \dots$$</p> <p><a href="http://en.wikipedia.org/wiki/Square_root_of_2">The Wikipedia entry</a> also gives that if $r$ is an approximation, $\frac r2+\frac 1r$ is a better one, which picks out $1, \frac 32, \frac {17}{12}, \frac {577}{408} \dots$, a subsequence of the previous one which converges very quickly.</p> <p>This takes $$\frac pq \text{ to } \frac {p^2+2q^2}{2pq}$$</p> <p>It also gives a geometric proof which is quite visual and may help - the problem with these kinds of proofs is that they often work by some form of descent, and therefore terminate, so they don't give a sense of the never-ending.</p>
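Both recurrences are easy to run with exact rational arithmetic (a sketch):

```python
from fractions import Fraction

# (p, q) -> (p + 2q, p + q): successive rational approximations to sqrt(2)
p, q = 1, 1
pell = []
for _ in range(6):
    pell.append(Fraction(p, q))
    p, q = p + 2 * q, p + q

# r -> r/2 + 1/r picks out a rapidly converging subsequence
r = Fraction(1)
newton = [r]
for _ in range(3):
    r = r / 2 + 1 / r
    newton.append(r)
```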
818,850
<p>I want to generate the numbers $1$ to $10$ with a uniform probability distribution. So I write the numbers $1$ to $10$ in their natural order, and I keep writing the next $10$-number block by permuting the first $10$ numbers. Can this long string be considered random with a uniform distribution? Is there a problem?</p>
Arby
154,968
<p>The infinite series formed by summing the contributions of the decimal places converges, and therefore the value is finite. </p>
818,850
<p>I want to generate the numbers $1$ to $10$ with a uniform probability distribution. So I write the numbers $1$ to $10$ in their natural order, and I keep writing the next $10$-number block by permuting the first $10$ numbers. Can this long string be considered random with a uniform distribution? Is there a problem?</p>
Chalky
155,049
<p>1/3 of a pie is clearly finite. The only infinite part is the number of digits required to express it with absolute accuracy (i.e. you can't) when it is written in base 10. In base 3, it'd be 0.1: finite, but with absolute accuracy.</p>
818,850
<p>I want to generate the numbers $1$ to $10$ with a uniform probability distribution. So I write the numbers $1$ to $10$ in their natural order, and I keep writing the next $10$-number block by permuting the first $10$ numbers. Can this long string be considered random with a uniform distribution? Is there a problem?</p>
Julie in Austin
155,085
<p>Many of the above answers are quite excellent, but lack a certain experience of both the never-ending nature of the square root of 2 and its irrationality.</p> <p>Take, for example, 1/3. It's a rational number and it repeats forever. The most common method of representing that is 0.3 with a bar over the 3. The same can be done for other repeating fractions, such as 1/7, which repeats after 0.142857. Both 1/3 and 1/7 "go on forever", but neither of those just keeps on changing the way the square root of 2 does.</p> <p>This is actually the key to an irrational number -- it doesn't repeat. If it did, it would be rational.</p> <p>The best way -- I think -- to comprehend the value is to compute it using something such as the Babylonian Method for computing square roots, as described in this article -- <a href="http://en.wikipedia.org/wiki/Babylonian_method#Examples" rel="nofollow" title="Babylonian Method">Babylonian Method</a>. That will very quickly demonstrate that the square root of 2 just goes on and on and on.</p>
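The Babylonian iteration mentioned above is short enough to run by hand or in a few lines (a sketch):

```python
import math

def babylonian_sqrt(a, iterations, x0=1.0):
    """Babylonian (Heron's) method: repeatedly average x with a/x."""
    x = x0
    for _ in range(iterations):
        x = (x + a / x) / 2.0
    return x

# each step roughly doubles the number of correct digits:
# 1 -> 1.5 -> 1.41666... -> 1.4142156... -> 1.41421356237...
```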
174,676
<p>When I am faced with a simple linear congruence such as $$9x \equiv 7 \pmod{13}$$ and I am working without any calculating aid handy, I tend to do something like the following:</p> <p>"Notice" that adding $13$ on the right and subtracting $13x$ on the left gives: $$-4x \equiv 20 \pmod{13}$$</p> <p>so that $$x \equiv -5 \equiv 8 \pmod{13}.$$</p> <p>Clearly this process works and is easy to justify (apart from not having an algorithm for "noticing"), but my question is this: I have a vague recollection of reading somewhere this sort of process was the preferred method of C. F. Gauss, but I cannot find any evidence for this now, so does anyone know anything about this, or could provide a reference? (Or have I just imagined it all?)</p> <p>I would also be interested to hear if anyone else does anything similar.</p>
DonAntonio
31,254
<p>When the prime is reasonably small I'd rather find the inverse directly: $$9^{-1}=\frac{1}{9}=3\pmod {13}\Longrightarrow 9x=7\Longrightarrow x=7\cdot 9^{-1}=7\cdot 3= 21=8\pmod {13}$$ <strong>But</strong>... I try Gauss's method when the prime is big and/or evaluating inverses is messy.</p>
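In code the same computation is a one-liner; Python's built-in pow accepts a negative exponent for modular inverses (a sketch):

```python
# modular inverse via three-argument pow (Python 3.8+)
inv9 = pow(9, -1, 13)        # 3, since 9 * 3 = 27 = 1 (mod 13)
x = (7 * inv9) % 13          # solves 9x = 7 (mod 13)
```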
248,167
<p>I want to know why, when I look at the Julia sets of the quadratic family, I see only a finite number of repeating patterns, rather than a countable infinity of them.</p> <p>My question is specifically about the interaction of these three theorems:</p> <p><strong>Theorem 1</strong>: Let $z_0\in\mathbb{C}$ be a repelling periodic point of the function $f_c:z\mapsto z^2+c$. Tan Lei proved in the 90s that the filled-in Julia set $K_c$ is asymptotically $\lambda$-self-similar about $z_0$, where $\lambda$ denotes the multiplier of the orbit.</p> <p><strong>Theorem 2</strong>: (Iterated preimages are dense) Let $z\in J_c$; then the set of iterated preimages $\cup_{n\in\mathbb{N}} f^{-n}(z)$ is dense in $J_c$.</p> <p><strong>Theorem 3</strong>: $J_c$ is the closure of the repelling periodic points.</p> <p>Let's expand on Theorem 1:<br> Technically it means that the sets $(\lambda^n \tau_{-z_0} K_c)\cap\mathbb{D}_r$ approach (in the Hausdorff metric of compact subsets of $\mathbb{C}$) a set $X \cap \mathbb{D}_r$ where the limit model $X \subset \mathbb{C}$ is $\lambda$-self-similar: $X = \lambda X$.<br> Practically this means that, when one zooms into a computer generated $K_c$ about $z_0$, the image becomes, for all practical purposes, self-similar. No new information is gained by zooming again about $z_0$.</p> <p>Lei also proved that $K_c$ is asymptotically $\lambda$-self-similar about the preimages of $z_0$, with the same limit model $X$, up to rotation and rescaling. This means that zooming in at each point in the repelling cycle of $z_0$ provides basically the same spectacle, perhaps rotated, as zooming into $z_0$. Moreover, the preimages of $z_0$ are dense in $J_{c}$ (Theorem 2), meaning that this $X$ pattern can be seen throughout the Julia set.</p> <p>Now let us consider a different repelling periodic point $z_1$. 
Lei tells us that $K_c$ will be asymptotically self-similar about $z_1$ and all <em>its</em> pre-images, with an <em>a priori different</em> limit set $Y$. Since the pre-images of $z_1$ are also dense in $J_c$, we may observe the limit model $Y$ all over $J_c$.</p> <p>So, <strong><em>a priori</em></strong>, to each repelling periodic orbit there should be an associated limit model, and each of these limit models could be distinct. <em>However</em>, when I look at a computer generated Julia set, the parts of it that are asymptotically self-similar seem to approach one of a <strong><em>finite</em></strong> set of limit models (up to rotation).</p> <p>Why is it so? Maybe my eye cannot see the difference? Or the computer cannot generate all of the detail?</p> <p>Or is it the case that there are only finitely many limit models?</p> <p><a href="https://i.stack.imgur.com/JKaA9.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/JKaA9.jpg" alt="Simple Julia zoom"></a> In this image (read like a comic strip), I zoom into the neighbourhood of a point four times, then purposely "miss the center", and zoom onto a detail four more times. The patterns that emerge are very similar. Are they the same?<br> This is perhaps one of the simplest Julia sets, but the experience is </p>
inyo
73,630
<p>When you consider an infinitely renormalizable polynomial, you can see infinitely many completely different pictures. For example, there exists $c$ such that you can find periodic points $x_n$ such that $K_c\setminus \{x_n\}$ has exactly $n$ components.</p> <p>Such an infinitely renormalizable $c$ lies in an infinite nest of baby Mandelbrot sets. When you choose the baby Mandelbrot set of depth $n+1$ in the $1/n$-limb of the baby Mandelbrot set of depth $n$, then the so-called $\alpha$-fixed point $x_n$ of the $n$-th renormalization has rotation number $1/n$, so $K_c \setminus \{x_n\}$ has $n$ components.</p>
79,868
<p>Given a function $f: \mathbb{R}^+ \rightarrow \mathbb{C}$ satisfying suitable conditions (exponential decay at infinity, continuity, and bounded variation are good enough), its <em>Mellin transform</em> is defined by</p> <p>$$M(f)(s) = \int_0^{\infty} f(y) y^s \frac{dy}{y},$$</p> <p>and $f(y)$ can be recovered by the Mellin inversion formula:</p> <p>$$f(y) = \frac{1}{2\pi i} \int_{\sigma - i \infty}^{\sigma + i \infty} y^{-s} M(f)(s) ds.$$</p> <p>This is a change of variable from the Fourier inversion formula, or the Laplace inversion formula, and can be proved in the same way. This is used all the time in analytic number theory (as well as many other subjects, I understand) -- for example, if $f(y)$ is the characteristic function of $[0, 1]$ then its Mellin transform is $1/s$, and one recovers the fact (Perron's formula) that </p> <p>$$\frac{1}{2\pi i} \int_{2 - i \infty}^{2 + i \infty} n^{-s} \frac{ds}{s}$$</p> <p>is equal to 1 if $0 &lt; n &lt; 1$, and is 0 if $n &gt; 1$. (Note that there are technical issues which I am glossing over; one integrates over any vertical line with $\sigma &gt; 0$, and the integral is equal to $1/2$ if $n = 1$.)</p> <p>I use these formulas frequently, but... I find myself having to look them up repeatedly, and I'd like to understand them more intuitively. Perron's formula can be proved using Cauchy's residue formula (shift the contour to $- \infty$ or $+ \infty$ depending on whether $n &gt; 1$), but this proof doesn't prove the general Mellin inversion formula.</p> <p>My question is:</p> <blockquote> <p>What do the Mellin transform and the inversion formula mean? 
Morally, why are they true?</p> </blockquote> <p>For example, why is the Mellin transform an integral over the positive reals, while the inverse transform is an integral over the complex plane?</p> <p>I found some resources -- Wikipedia; <a href="https://mathoverflow.net/questions/383/motivating-the-laplace-transform-definition">this MO question</a> is closely related, and the first video in particular is nice; and a proof is outlined in Iwaniec and Kowalski -- but I feel that there should be a more intuitive explanation than any I have come up with so far.</p>
Frank Thorne
1,050
<p>Thanks to everyone who answered! A (CW'ed) summary of some of what I learned:</p> <p>In the first place, I now cheerfully second Greg Martin's recommendation of Chapter 5.1 of Montgomery and Vaughan. It is a rather "lowbrow", very readable treatment. (doesn't prove Mellin inversion in complete generality)</p> <p>Also, as Matt Young pointed out, for any complex $s$, the function $t \rightarrow t^s$ is a character on $\mathbb{R}^{\times}$. This is a triviality, but the importance of this fact escaped me the first time. The invariant measure on $\mathbb{R}^{\times}$ is $\frac{dx}{x}$, and so the Fourier transform of a function $f$ defined on this group is <em>exactly</em></p> <p>$$\int_{x \in \mathbb{R}^{\times}} f(x) x^s \frac{dx}{x},$$</p> <p>the Mellin transform. Once this is written down, the rest follows mechanically (from change of variables and Fourier inversion).</p> <p>Thanks to all!</p>
79,868
<p>Given a function $f: \mathbb{R}^+ \rightarrow \mathbb{C}$ that is good enough (say, continuous, of bounded variation, and with exponential decay at infinity), its <em>Mellin transform</em> is defined by</p> <p>$$M(f)(s) = \int_0^{\infty} f(y) y^s \frac{dy}{y},$$</p> <p>and $f(y)$ can be recovered by the Mellin inversion formula:</p> <p>$$f(y) = \frac{1}{2\pi i} \int_{\sigma - i \infty}^{\sigma + i \infty} y^{-s} M(f)(s) ds.$$</p> <p>This is a change of variable from the Fourier inversion formula, or the Laplace inversion formula, and can be proved in the same way. This is used all the time in analytic number theory (as well as many other subjects, I understand) -- for example, if $f(y)$ is the characteristic function of $[0, 1]$ then its Mellin transform is $1/s$, and one recovers the fact (Perron's formula) that </p> <p>$$\frac{1}{2\pi i} \int_{2 - i \infty}^{2 + i \infty} n^{-s} \frac{ds}{s}$$</p> <p>is equal to 1 if $0 &lt; n &lt; 1$, and is 0 if $n &gt; 1$. (Note that there are technical issues which I am glossing over; one integrates over any vertical line with $\sigma &gt; 0$, and the integral is equal to $1/2$ if $n = 1$.)</p> <p>I use these formulas frequently, but... I find myself having to look them up repeatedly, and I'd like to understand them more intuitively. Perron's formula can be proved using Cauchy's residue formula (shift the contour to $- \infty$ or $+ \infty$ depending on whether $n &gt; 1$), but this proof doesn't prove the general Mellin inversion formula.</p> <p>My question is:</p> <blockquote> <p>What do the Mellin transform and the inversion formula mean? 
Morally, why are they true?</p> </blockquote> <p>For example, why is the Mellin transform an integral over the positive reals, while the inverse transform is an integral over the complex plane?</p> <p>I found some resources -- Wikipedia; <a href="https://mathoverflow.net/questions/383/motivating-the-laplace-transform-definition">this MO question</a> is closely related, and the first video in particular is nice; and a proof is outlined in Iwaniec and Kowalski -- but I feel that there should be a more intuitive explanation than any I have come up with so far.</p>
Michael Rubinstein
31,582
<p>Have a look at Zagier's appendix: <a href="http://people.mpim-bonn.mpg.de/zagier/files/tex/MellinTransform/fulltext.pdf" rel="noreferrer">http://people.mpim-bonn.mpg.de/zagier/files/tex/MellinTransform/fulltext.pdf</a></p> <p>It provides a nice description of the Mellin transform when $f(x)$ is sufficiently smooth at $x=0$, and of rapid decay at infinity.</p> <p>For example, assume $f(x) = \sum_0^\infty a_n x^n$, in some neighbourhood of the origin, and decays rapidly as $x \to \infty$, then its Mellin transform has meromorphic continuation to all of $\mathbb{C}$ with simple poles of residue $a_n$ at $s=-n$, $n=0,1,2,3,\ldots$. This is nicely explained in Zagier's appendix.</p> <p>So, rate of decay issues aside, shifting the inverse Mellin transform to the left, i.e. letting $\sigma \to -\infty$, picks up the residues of the integrand at s=-n, i.e. $a_n x^n$, i.e. recovers the Taylor expansion about $x=0$ of $f(x)$.</p> <p>Of course, it only applies to a limited class of functions $f$, but, in many practical examples, this reasoning gives one explanation of why the Mellin inversion formula is true, without resorting to Fourier inversion. </p>
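As a quick numerical sanity check on the definition (my addition, not part of Zagier's appendix): for $f(x) = e^{-x}$ the Mellin transform is $\Gamma(s)$, and even a crude midpoint-rule quadrature recovers it for real $s \ge 1$:

```python
import math

def mellin(f, s, upper=60.0, n=200000):
    # Approximate M(f)(s) = ∫_0^∞ f(x) x^(s-1) dx with a midpoint rule.
    # Crude, but adequate for smooth, rapidly decaying f and real s >= 1.
    h = upper / n
    return sum(f((k + 0.5) * h) * ((k + 0.5) * h) ** (s - 1) for k in range(n)) * h

# The Mellin transform of e^{-x} is the Gamma function.
for s in (1.0, 2.0, 3.5):
    assert abs(mellin(lambda x: math.exp(-x), s) - math.gamma(s)) < 1e-4
```

The truncation at `upper=60` is harmless here because the integrand decays like $e^{-x}$; for $0 < s < 1$ the singularity of $x^{s-1}$ at the origin would need a more careful scheme.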
76,747
<pre><code>Plot[D[Abs[x], x], {x, -10, 10}, Exclusions -&gt; {0}] </code></pre> <p>gives out several lines of error messages and an empty plot.</p> <pre><code>Plot[Derivative[1][Abs[#1] &amp; ][x], {x, -10, 10}, Exclusions -&gt; {0}] </code></pre> <p>just gives out an empty plot.</p> <p>How do I plot $\left(\left|x\right|\right)^\prime$?</p>
kglr
125
<p>Using <code>PiecewiseExpand[Abs[x], x \[Element] Reals]</code> instead of <code>Abs[x]</code>:</p> <pre><code>Plot[Evaluate@ D[PiecewiseExpand[Abs[x], x \[Element] Reals], x], {x, -10, 10}, PlotStyle -&gt; Thick] </code></pre> <p><img src="https://i.stack.imgur.com/zUHAW.png" alt="enter image description here"></p> <p>You can convert <code>Abs</code> to <code>Piecewise</code> for real arguments using <code>PiecewiseExpand</code>:</p> <pre><code>absToPW[x_] := PiecewiseExpand[Abs[x], x \[Element] Reals] absToPW[z] </code></pre> <p><img src="https://i.stack.imgur.com/CtTqq.png" alt="enter image description here"></p> <p>which you can differentiate</p> <pre><code>D[absToPW[z], z] </code></pre> <p><img src="https://i.stack.imgur.com/MDN95.png" alt="enter image description here"></p> <p>and plot</p> <pre><code>Plot[Evaluate@{absToPW[x], D[absToPW[x], x]}, {x, -4, 4}, PlotStyle -&gt; Thick] </code></pre> <p><img src="https://i.stack.imgur.com/R83Oh.png" alt="enter image description here"></p>
588,725
<p>Prove the following statement:</p> <p>$$\frac{1}{x}&lt;\ln(x)-\ln(x-1)&lt;\frac{1}{x-1}$$</p> <p>Proof:</p> <p>$$\frac{-1}{x^2}&lt;\frac{1}{x(x-1)}&lt;\frac{-1}{(x-1)^2}$$</p> <p>$$e^{(\frac{-1}{x^2})}&lt;e^{(\frac{-1}{x(x-1)})}&lt;e^{(\frac{-1}{(x-1)^2})}$$</p> <p>$$\lim_{x\to\infty}e^{(\frac{-1}{x^2})}&lt;\lim_{x\to\infty}e^{(\frac{-1}{x(x-1)})}&lt;\lim_{x\to\infty}e^{(\frac{-1}{(x-1)^2})}$$</p> <p>$$e^{0}&lt;e^{0}&lt;e^{0}$$</p> <p>$$1&lt;1&lt;1$$</p> <p>therefore by the MVT we get the statement to be proven.</p> <p>Does anyone agree with the way I chose to prove the above statement? Any feedback would be appreciated; thank you in advance!</p>
Paramanand Singh
72,031
<p>By the nature of question we must have $x &gt; 1$. Now use mean value theorem to get $$\ln(x) - \ln(x - 1) = \{x - (x - 1)\}\cdot\dfrac{1}{y} = \dfrac{1}{y}$$ for some $x - 1 &lt; y &lt; x$. Thus we get $$\frac{1}{x} &lt; \frac{1}{y} &lt; \frac{1}{x - 1}$$ so that $$\frac{1}{x} &lt; \ln x - \ln (x - 1) &lt; \frac{1}{x - 1}$$</p> <p>Regarding your approach I am not sure why you have differentiated the inequality in question. And the new identity obtained by differentiating is wrong because terms on left and right are negative and middle one is positive. Whenever you see expression like $f(a) - f(b)$ one possible technique is to apply the mean value theorem directly : $f(a) - f(b) = (a - b)f'(c)$ for some $a &lt; c &lt; b$.</p>
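A quick numerical spot check of the inequality (my addition, just to build confidence in the MVT argument):

```python
import math

# For x > 1 the claim is 1/x < ln(x) - ln(x-1) < 1/(x-1).
for x in (1.5, 2.0, 10.0, 1000.0):
    diff = math.log(x) - math.log(x - 1)
    assert 1 / x < diff < 1 / (x - 1), f"fails at x={x}"
```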
1,792,422
<p>I need help with an optimization problem. I have a rectangular space being fenced. Three sides are fenced with a material costing 4 dollars and the last side costs 16 dollars. I was given that the area is 250 and asked to minimize the cost.</p> <p>I got what I think is the correct formula for the cost, where $$C(y)=\frac{5000}{y}+8y.$$ This is where I got stuck. In my class, we're allowed to use Desmos to get the optimized value, but when I plug the number into my formula I end up with negative values for the dimensions. Any help would be nice. </p>
Michael Hardy
11,667
<p>Since $f'(x) = \dfrac 1 {2\sqrt x}$, you need $$ \frac{\sqrt 4 - \sqrt 0}{4-0} = \frac{f(4) - f(0)}{4-0} = \frac{f(b)-f(a)}{b-a} = f'(c) = \frac 1 {2\sqrt c}. $$ If $\dfrac{\sqrt 4} 4 = \dfrac 1 {2\sqrt c}$ then what number is $c$?</p>
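Concretely (my own check, with hypothetical variable names): the secant slope is $\frac{\sqrt 4 - \sqrt 0}{4} = \frac12$, and solving $\frac{1}{2\sqrt c} = \frac12$ gives $c = 1$:

```python
import math

a, b = 0.0, 4.0
secant = (math.sqrt(b) - math.sqrt(a)) / (b - a)   # = 1/2

# f'(c) = 1/(2 sqrt(c)); set this equal to the secant slope and solve for c.
c = 1 / (2 * secant) ** 2
assert math.isclose(c, 1.0)
assert math.isclose(1 / (2 * math.sqrt(c)), secant)
```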
1,792,422
<p>I need help with an optimization problem. I have a rectangular space being fenced. Three sides are fenced with a material costing 4 dollars and the last side costs 16 dollars. I was given that the area is 250 and asked to minimize the cost.</p> <p>I got what I think is the correct formula for the cost, where $$C(y)=\frac{5000}{y}+8y.$$ This is where I got stuck. In my class, we're allowed to use Desmos to get the optimized value, but when I plug the number into my formula I end up with negative values for the dimensions. Any help would be nice. </p>
HighEnergy
330,386
<p>The above answers are correct; however, I would like to point out that in lieu of utilizing $$\frac{f(b)-f(a)}{b-a},$$ you utilized $f'(a)$ and $f'(b)$. Before using the <code>MVT</code>, make sure $f$ is continuous on $[a,b]$ and differentiable on $(a,b)$, which it is here.</p>
370,898
<p>Lately, I have been learning about the <strong>replication crisis</strong>, see <a href="https://www.youtube.com/watch?v=6abrUN823gY&amp;ab_channel=Skeptic" rel="noreferrer">How Fraud, Bias, Negligence, and Hype Undermine the Search for Truth</a> (good YouTube video) — by Michael Shermer and Stuart Ritchie. According to Wikipedia, the <a href="https://en.wikipedia.org/wiki/Replication_crisis" rel="noreferrer">replication crisis</a> (also known as the replicability crisis or reproducibility crisis) is</p> <blockquote> <p>an ongoing methodological crisis in which it has been found that many scientific studies are difficult or impossible to replicate or reproduce. The replication crisis affects the social sciences and medicine most severely.</p> </blockquote> <p>Has the replication crisis impacted (pure) mathematics, or is mathematics unaffected? How should results in mathematics be reproduced? How can complicated proofs be replicated, given that so few people are able to understand them to begin with?</p>
Paul Siegel
4,362
<p>Mathematics does have its own version of the replicability problem, but for various reasons it is not as severe as in some scientific literature.</p> <p>A good example is the <a href="https://en.wikipedia.org/wiki/Classification_of_finite_simple_groups" rel="noreferrer">classification of finite simple groups</a> - this was a monumental achievement (mostly) completed in the 1980's, spanning tens of thousands of pages written by dozens of authors. But over the past 20 years there has been significant ongoing effort undertaken by Gorenstein, Lyons, Solomon, and others to consolidate the proof in one place. This is partially to simplify and iron out kinks in the proof, but also out of a very real concern that the proof will be lost as experts retire and the field attracts fewer and fewer new researchers. This is one replicability issue in mathematics: some bodies of mathematical knowledge slide into folklore or arcana unless there is a concerted effort by the next generation to organize and preserve them.</p> <p>Another example is the ongoing saga of Mochizuki's proposed proof of the <a href="https://en.wikipedia.org/wiki/Abc_conjecture" rel="noreferrer">abc conjecture</a>. The proof involves thousands of pages of work that remains obscure to all but a few, and there remains <a href="https://mathoverflow.net/a/319844">serious disagreement over whether the argument is correct</a>. 
There are numerous other examples where important results are called into question because few experts spend the time and energy necessary to carefully work through difficult foundational theory - <a href="https://www.quantamagazine.org/the-fight-to-fix-symplectic-geometry-20170209/" rel="noreferrer">symplectic geometry</a> provides another recent example.</p> <p>Why do I think these issues are not as big of a problem for mathematics as analogous issues in the sciences?</p> <ol> <li>Negative results: If you set out to solve an important mathematical problem but instead discover a disproof or counterexample, this is often just as highly valued as a proof. This provides a check against the perverse incentives which motivate some empirical researchers to stretch their evidence for the sake of getting a publication.</li> <li>Interconnectedness: Most mathematical research is part of an ecosystem of similar results about similar objects, and in an area with enough activity it is difficult for inconsistencies to develop and persist unnoticed.</li> <li>Generalization: Whenever there is a major mathematical breakthrough it is normally followed by a flurry of activity to extend it and solve other related problems. This entails not just replicating the breakthrough but clarifying it and probing its limits - a good example of this is all the work in the Langlands program which extends and clarifies Wiles' work on the modularity theorem.</li> <li>Purity: social science and psychology research is hard because the results of an experiment depend on norms and empirical circumstances which can change significantly over time - for instance, many studies about media consumption before the 90's were rendered almost irrelevant by the internet. The foundations of an area of mathematics can change, but the logical correctness of a mathematical argument can't (more or less).</li> </ol>
370,898
<p>Lately, I have been learning about the <strong>replication crisis</strong>, see <a href="https://www.youtube.com/watch?v=6abrUN823gY&amp;ab_channel=Skeptic" rel="noreferrer">How Fraud, Bias, Negligence, and Hype Undermine the Search for Truth</a> (good YouTube video) — by Michael Shermer and Stuart Ritchie. According to Wikipedia, the <a href="https://en.wikipedia.org/wiki/Replication_crisis" rel="noreferrer">replication crisis</a> (also known as the replicability crisis or reproducibility crisis) is</p> <blockquote> <p>an ongoing methodological crisis in which it has been found that many scientific studies are difficult or impossible to replicate or reproduce. The replication crisis affects the social sciences and medicine most severely.</p> </blockquote> <p>Has the replication crisis impacted (pure) mathematics, or is mathematics unaffected? How should results in mathematics be reproduced? How can complicated proofs be replicated, given that so few people are able to understand them to begin with?</p>
fedja
1,131
<p><em>How can we expect that increasingly complicated proofs are replicated when so few people can understand them in the first place?</em></p> <p>My answer to that is that we do not expect them to be replicated in the usual sense of this word (repeated and included into textbooks with just minor cosmetic and stylistic changes). Rather we expect them to be gradually simplified and streamlined either through changing the proofs themselves by finding a shortcut or replacing the whole argument with a completely different one, or by building a theory that is locally trivial but proceeds in the direction of making the proof understandable and verifiable much faster than the currently existing one. The latter is exactly what Mochizuki tried to do though his goal was rather to just reduce the difficulty from &quot;totally impossible&quot; to &quot;barely feasible&quot; and the prevailing opinion is that he failed in the case of the ABC conjecture though he has succeeded in several other problems.</p> <p>The first approach is more common in analysis (broadly understood), the second is more common in algebra (also broadly understood), but you can try to play either game in either field. My own perception of what is proved and what is not borders on solipsism: I accept the fact as proven if I've read and understood the whole argument or figured it out myself. So most mathematics remains &quot;unproved&quot; to me and, apparently, will stay unproved for the rest of my life. Of course, it doesn't mean that I'm running around questioning the validity of the corresponding theorems. 
What it means is that I just never allow myself to rely in my own papers on anything that I haven't fully verified to my satisfaction, try to make my papers as self-contained as possible within practical limits, and that I consider the activity of simplifying the existing proofs as meaningful as solving open questions even in the case when the proofs are reasonably well-known and can already be classified as &quot;accessible&quot;. But not everybody works this way. Many people are completely happy to drop a nuke any time they have an opportunity to do it and there is nothing formally wrong with that: the underlying point of view is that our time is short, we have to figure out as many things as possible, and the simplifications, etc. will come later. Probably, we need a mixture of both types to proceed as efficiently as we can.</p> <p>So I would say that the mathematics is reasonably immune to this crisis in the sense that mathematicians are aware of the associated risks, take them willingly, and try to gradually build the safe ground of general accessibility under everything though the process of this building is always behind the process of the mathematical discovery itself. The same applies to physics and medicine though the gap between the &quot;front line&quot; and the &quot;safe ground&quot; there may be wider. In fact, it applies to any science that deserves to be called by that name. 
As to the so-called &quot;social sciences&quot;, they are often done at the level of alchemy and astrology today in my humble opinion (and not only mine: read Richard Feynman's critiques, for example) but we should not forget that those were the precursors to such well-respected sciences as chemistry and astronomy/cosmology, so I view the current crisis there as a part of the normal healthy process of transitioning from the prevailing general &quot;blahblahblah&quot; and weathervane behavior with respect to political winds to something more substantial.</p> <p><strong>Edit:</strong> Paul Siegel has convinced me that things have indeed changed since the time I took (obligatory) courses of Marxist philosophy and the history of the Communist Party, though this change may not be easily visible to the general public because it mainly happens outside academia and is driven primarily by company business interests, so a huge part of it occurs behind closed doors (Paul, please correct me if I misinterpreted what you said in any way). So my statement that the current social sciences are not <em>capable</em> of something beyond general blahblahblah is no longer valid and I retract it. However I still maintain the opinion that it <em>is</em> blahblahblah rather than hard data analysis or other scientific approach that drives many public political and social discussions and decisions of today (I don't know what happens here behind the closed doors, of course, and it may be that, like in advertising, what we see is just what shepherds choose to show to their sheep to drive them in the direction they want, but I prefer to think that it is not exactly the case). If somebody can convincingly challenge that, I would be quite interested.</p> <p>Apologies to everybody for switching this discussion to a sideline.</p>
370,898
<p>Lately, I have been learning about the <strong>replication crisis</strong>, see <a href="https://www.youtube.com/watch?v=6abrUN823gY&amp;ab_channel=Skeptic" rel="noreferrer">How Fraud, Bias, Negligence, and Hype Undermine the Search for Truth</a> (good YouTube video) — by Michael Shermer and Stuart Ritchie. According to Wikipedia, the <a href="https://en.wikipedia.org/wiki/Replication_crisis" rel="noreferrer">replication crisis</a> (also known as the replicability crisis or reproducibility crisis) is</p> <blockquote> <p>an ongoing methodological crisis in which it has been found that many scientific studies are difficult or impossible to replicate or reproduce. The replication crisis affects the social sciences and medicine most severely.</p> </blockquote> <p>Has the replication crisis impacted (pure) mathematics, or is mathematics unaffected? How should results in mathematics be reproduced? How can complicated proofs be replicated, given that so few people are able to understand them to begin with?</p>
Flounderer
24,993
<blockquote> <p>Has this crisis impacted (pure) mathematics, or do you believe that maths is mostly immune to it?</p> </blockquote> <p>Immune to the replication problem, yes. But not immune to the attitudes which cause scientists to do unreplicable research in the first place. Some mathematicians will announce that a particular theorem has been proven, harvest the glory based on the fact that they have proved things in the past, and then never publish their results. <a href="https://mathoverflow.net/a/357372">Rota's Conjecture</a> is one notorious example. Now we are in a situation where (a) nobody knows whether it is true and (b) nobody has worked on it for seven years, and probably (if it turns out that no proof actually exists) will not work on it for at least another decade.</p> <blockquote> <p>How should results in mathematics be reproduced?</p> </blockquote> <p>In science, it would be ideal if people dedicated research time to replicating published experimental results. This doesn't happen much because there is no glory to be gained by doing it.</p> <p>The analogue in mathematics would be for people to publish new proofs of existing results, or expositions of existing proofs, which is happily much more common. I don't mean <a href="https://arxiv.org/abs/1410.3671" rel="noreferrer">copying out well-known results in new language</a> (Tom Leinster, <em>The bijection between projective indecomposable and simple modules</em>), I mean expository papers like <a href="https://dx.doi.org/10.4310/AJM.2006.v10.n2.a2" rel="noreferrer">this</a> (Cao and Zhu, <em>A complete proof of the Poincaré and geometrization conjectures</em>, Asian J. Math. <strong>10</strong> (2006) pp. 
165–492).</p> <p>Even more noble are the <a href="https://mathoverflow.net/q/291158">people using proof assistant software to verify existing mathematics</a>.</p> <blockquote> <p>How can we expect that increasingly complicated proofs are replicated when so few people can understand them in the first place?</p> </blockquote> <p>I think our best hope is proof assistant software. Perhaps by the end of this century, we will he living in a world where no mathematician can replicate any reasonably cutting-edge proof, yet research is still happily chugging along.</p>
23,818
<p>I'm trying to define some notation so that Mathematica code would be more functional, similar to Haskell (just for fun): currying, lambdas, infix operator to function conversion, etc.. And I have some questions about it:</p> <ul> <li>Is it possible to make all Mathematica <code>h_[x1_,x2_,...]</code> functions to work as <code>h[x1][x2][..]</code>?</li> <li>Can I distinguish inside <code>Notation</code> box between <code>&lt;+1&gt;</code> and <code>&lt;1+&gt;</code>, how do I check for + there?</li> <li>How to define right-associate apply operator with highest precedence (<code>$</code>)?</li> </ul> <p>This is what I have so far:</p> <pre><code>&lt;&lt; Notation`; lapply[x_, y_] := x[y] rapply[x_, y_] := x[y] InfixNotation[ParsedBoxWrapper["\\"], lapply] InfixNotation[ParsedBoxWrapper["$"], rapply] x_ \ y_ \ z_ := x[y][z] x_ $ y_ $ z_ := x[y[z]] Notation[ParsedBoxWrapper[ RowBox[{ RowBox[{"λ", " ", "x__"}], "-&gt;", "y_"}]] ⟺ ParsedBoxWrapper[ RowBox[{"Function", "[", RowBox[{ RowBox[{"{", "x__", "}"}], ",", "y_"}], "]"}]]] Notation[ParsedBoxWrapper[ RowBox[{"〈", RowBox[{"op_", " ", "x_"}], "〉"}]] ⟺ ParsedBoxWrapper[ RowBox[{ RowBox[{"#", "op_", " ", "x_"}], "&amp;"}]]] AddInputAlias["f" -&gt; ParsedBoxWrapper[ RowBox[{"〈", RowBox[{"\[Placeholder]", "\[Placeholder]"}], "〉"}]]] Notation[ParsedBoxWrapper[ RowBox[{"{", RowBox[{ RowBox[{"x_", " ", ".."}], " ", "y_"}], "}"}]] ⟺ ParsedBoxWrapper[ RowBox[{"Range", "[", RowBox[{"x_", ",", "y_"}], "]"}]]] filter[f_][x_List] := Select[x, f] map[f_][x__List] := Map[f, x] filter\PrimeQ $ map\〈-1〉 $ map\ \ (λ x -&gt; 2^x)\ {1 .. 
100} </code></pre> <p><strong>EDIT</strong>: Also did some kinda lazy lists, soon it will be haskell inside Mathematica :)</p> <pre><code>SetAttributes[list, HoldAll] list[h_, l_][x_] := list[h, l[x]] list[x_] := list[x, list] map[f_][list] := list map[f_][list[x_, xs_]] := list[f[x], map[f][xs]] take[0][_] := list take[_][list] := list take[n_Integer][list[x_, xs_]] := list[x, take[n - 1][xs]] range[n_Integer] := range[1, n] range[m_, n_] := list[m, range[m + 1, n]] range[n_, n_] := list[n, list] show[list] := "[]" show[list[x_, l_]] := ToString[x] &lt;&gt; "," &lt;&gt; show[l] show $ (take[10] $ map\ (λ x -&gt; x^2) $ range[10000]) </code></pre>
swish
2,490
<p>That's how I finally defined Haskell operators:</p> <pre><code>rapply[x_] := x rapply[x_, y__] := x[rapply[y]] InfixNotation[ParsedBoxWrapper["|"], rapply] lapply[x_] := x lapply[x__, y_] := lapply[x][y] InfixNotation[ParsedBoxWrapper["∘"], lapply] InfixNotation[ParsedBoxWrapper["·"], Composition] </code></pre> <p>Now <span class="math-container">$\circ$</span>, <span class="math-container">$\dot{}{}$</span> and | act exactly like Haskell's space, . and $ respectively. Also if we have only a single left application then <code>@</code> is still helpful and it can be hidden with escape characters <code>:@:</code>.</p> <p>And beautiful code like <span class="math-container">$\bf{show\cdot take\ 10\cdot map\ (\lambda\ x\to x{}^{\wedge}2)\cdot range | \infty }$</span> is possible. There are invisible <code>@</code>'s between <em>take</em> and <em>10</em>, <em>map</em> and (<span class="math-container">$\lambda\ x\to x{}^{\wedge}2)$</span>. In Haskell the same would look like <span class="math-container">$\bf{show . take\ 10.map(\backslash x-&gt;x{}^{\wedge}2)$[1..]}$</span>.</p> <p>With double left application, <span class="math-container">$\circ$</span> is necessary: <span class="math-container">$\bf{map\circ(\lambda\ x\to x+1)\circ \{1,2,3\}}$</span></p> <p><strong>UPDATE</strong>: I made this <code>TextCell</code> hack to make partial infix operators:</p> <pre><code>infix[f_String] := Block[{x,y},Head[ToExpression["x" &lt;&gt; f &lt;&gt; "y"]]] 〈TextCell[s_][x_]〉 := infix[s][#, x] &amp; 〈x_[TextCell[s_]]〉 := infix[s][x, #] &amp; </code></pre> <p>We again can use invisible <code>@</code> for application. Aliases can be made with <code>TextCell</code> on the left and on the right within <code>AngleBrackets</code> to enter them conveniently. Now stuff like &lt;~Mod~10>, &lt;2^>, &lt;^3> also works.</p>
2,385,338
<p>In a bag, there are 6 white disks, 6 black disks and 8 red disks. A disk is drawn at random from the bag. The colour is recorded and the disk is returned to the bag. This process is repeated 10 times. Find the probability that less than 4 red disks are drawn. </p> <p>I approached this question with n = 8 and p = 0.4, q = 0.6. But I'm not sure about what to do with the repetition of 10 times. How do I do this question?</p>
LloydTao
470,153
<p>The probability P(R) of drawing a red disk is 8/20, which = 0.4. You have 10 trials, therefore:</p> <blockquote> <p>R ~ B(10, 0.4).</p> </blockquote> <p>Using this distribution, you want to calculate:</p> <blockquote> <p>P(R &lt; 4), or P(R ≤ 3), since it's discrete.</p> </blockquote> <p>Modern calculators will evaluate this, but since the formula only uses '=' rather than '≤', you are technically doing:</p> <blockquote> <p>P(R = 0) + P(R = 1) + P(R = 2) + P(R = 3).</p> </blockquote> <p>The formula for P(R = r), where R ~ B(n, p) is:</p> <blockquote> <p>$$ \binom{n}{r} \times p^r \times q^{n-r}, \text{ where } \binom{n}{r} = \frac{n!}{r!(n-r)!} $$</p> </blockquote> <p>Use this formula to sum the probabilities, and you should achieve the answer:</p> <blockquote class="spoiler"> <p> 0.3822806016, or 0.382 (3 S.F).</p> </blockquote>
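The sum above can be evaluated in a few lines (a sketch of the same calculation; variable names are mine):

```python
from math import comb

n, p = 10, 0.4        # 10 draws, P(red) = 8/20 = 0.4
q = 1 - p

# P(R <= 3) = P(R=0) + P(R=1) + P(R=2) + P(R=3) for R ~ B(10, 0.4)
prob = sum(comb(n, r) * p**r * q**(n - r) for r in range(4))
print(round(prob, 10))   # 0.3822806016
```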
2,022,529
<p>$$\Large \lim_{x\to0^+}\frac{1}{\sin^2x}\int_{\frac{x}{2}}^x\sin^{-1}t\,dt $$</p> <p>I am trying to calculate this limit. Using L’Hôpital’s rule, I am getting it as $1/4$, but the book says it's $3/8$. I don't know where I am doing the mistake.</p>
Learnmore
294,365
<p>Let $A$ denote the set of the two people who refuse to work together and $B$ denote the remaining group of $5$ people.</p> <p>To form the committee there are two ways:</p> <p>1. All $3$ people are from $B$, which gives $5\choose 3$ choices.</p> <p>2. $2$ people are from $B$ and one is from $A$, which gives $5\choose 2$ $2\choose 1$ choices.</p>
42,192
<p>In search of a Machian formulation of mechanics I find the following problem. In Machian mechanics absolute space does not exist, and the only real entities are the relative distances between the particles. As a consequence, the configuration space of an N-particle system is the set of the distances on a set of N elements. Usually these distances are required to be isometrically embeddable in $\mathbb{R}^3$. But if absolute space does not exist, this requirement appears to be inappropriate. The natural generalization is therefore to admit any possible distance as physically acceptable, and to find a preferred way to derive a 3-geometry, possibly non-flat, from a generic distance. </p> <p>To be more specific, consider the following simple example. Let A be a metric space with 3 elements. There are infinitely many two-dimensional Riemannian manifolds (surfaces) in which A can be isometrically embedded. There is however a preferred embedding, namely the embedding into a plane. The existence of a preferred embedding defines a preferred value for the angles between the geodesics joining the points, which in this case are simply the angles of the triangle defined by the distances between the points. </p> <p>Suppose now that A has four points. In general this metric space cannot be isometrically embedded in a 2-plane. The problem therefore is the following: is there a preferred isometric embedding of this metric space in a 2-surface, or equivalently, is there a preferred way of defining the values of the angles between the geodesics?</p> <p>In a more formal way, the problem is the following: is there a preferred isometric embedding of a finite metric space in a Riemannian manifold of given dimension?</p>
Suresh Venkat
972
<p>This is almost certainly not what you want, but it illustrates why you need a tighter specification of 'preferred'. </p> <blockquote> <p>Any $n$-point metric space can be embedded isometrically in $\ell_\infty^n$.</p> </blockquote> <p><strong>Proof</strong> (this is a well known result): Let the $r^{th}$ coordinate of $x_i$ be the distance from $x_i$ to $x_r$. By the triangle inequality, we know that for any triple $i,j,k$, $$ d(x_i,x_j) - d(x_k, x_j) \le d(x_i, x_k) $$ which bounds every coordinate difference $|d(x_i, x_r) - d(x_j, x_r)|$ by $d(x_i, x_j)$; the bound is attained at $r = j$, which establishes the correctness of the embedding. </p> <p>One interesting notion of preferred therefore might be that the target space dimension is either independent of $n$, or at the very least depends sublinearly on $n$. </p>
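The embedding in the proof is easy to exercise in code (a sketch with a made-up 4-point metric, not taken from the answer): send $x_i$ to the vector of its distances to all points and check that $\ell_\infty$ distances reproduce the metric exactly.

```python
def frechet_embed(D):
    # Point i maps to row i of the distance matrix: (d(x_i, x_1), ..., d(x_i, x_n)).
    return [list(row) for row in D]

def linf(u, v):
    return max(abs(a - b) for a, b in zip(u, v))

# A 4-point metric space: symmetric, zero diagonal, triangle inequality holds.
D = [[0, 3, 4, 2],
     [3, 0, 5, 4],
     [4, 5, 0, 3],
     [2, 4, 3, 0]]

pts = frechet_embed(D)
for i in range(4):
    for j in range(4):
        assert linf(pts[i], pts[j]) == D[i][j]
```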
1,460,704
<p>Intuitively we know that $n^2$ grows faster than $n$, thus the difference tends to negative infinity. But I have trouble proving it symbolically because of the indeterminate form $\infty - \infty$. Is there any way to do this without resorting to the epsilon-delta definition?</p>
Community
-1
<p>Note that when $n\ge 2$, </p> <p>$$n - n^2 \le n \left(\frac n2\right) - n^2 = -\frac{n^2}{2}.$$</p> <p>Since $$\lim_{n\to \infty} -\frac{n^2}{2} = -\infty,$$</p> <p>it follows that $\lim_{n\to\infty} (n-n^2) = -\infty$ as well. </p>
382,295
<p>Please help. I haven't found any text on how to prove by induction this sort of problem: </p> <p>$$ \lim_{n\to +\infty}1 + \frac{1}{4} + \frac{1}{4^2} + \cdots+ \frac{1}{4^n} = \frac{4}{3} $$</p> <p>I can't quite get how one can prove such a statement. I can prove basic divisibility "inductions" but not this. Thanks.</p>
Community
-1
<p>The below image can easily be made into an inductive proof.</p> <p><img src="https://i.stack.imgur.com/7N6Ol.png" alt="enter image description here"></p> <p>To be precise, the above image first shows that $$\dfrac12 + \dfrac14 + \dfrac18 + \cdots = 1$$ and then shows that</p> <p>$$\dfrac1{2^2} + \dfrac1{(2^2)^2} + \dfrac1{(2^3)^2} + \dfrac1{(2^4)^2} + \cdots = \dfrac13$$ which implies $$\dfrac1{4} + \dfrac1{4^2} + \dfrac1{4^3} + \dfrac1{4^4} + \cdots = \dfrac13$$</p>
382,295
<p>Please help. I haven't found any text on how to prove this sort of problem by induction: </p> <p>$$ \lim_{n\to +\infty}1 + \frac{1}{4} + \frac{1}{4^2} + \cdots+ \frac{1}{4^n} = \frac{4}{3} $$</p> <p>I can't quite see how one can prove such a statement. I can prove basic divisibility "inductions" but not this. Thanks.</p>
ngoaho91
76,219
<p>I have a solution, though it does not use mathematical induction:</p> <p>$s = 1 + \dfrac1{4} + \dfrac1{4^2} + \dfrac1{4^3} + \dfrac1{4^4} + \cdots$</p> <p>$\dfrac1{4}s = \dfrac1{4} + \dfrac1{4^2} + \dfrac1{4^3} + \dfrac1{4^4} + \dfrac1{4^5} + \cdots$</p> <p>$\dfrac1{4}s = - 1 + (1 + \dfrac1{4} + \dfrac1{4^2} + \dfrac1{4^3} + \dfrac1{4^4} + \dfrac1{4^5} + \cdots)$</p> <p>$\dfrac1{4}s = - 1 + s$</p> <p>$\dfrac3{4}s =1$</p> <p>$s = \dfrac4{3}$</p>
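<p>The same manipulation can be checked exactly on partial sums with the standard library's <code>Fraction</code> (a sketch; the closed form used below is the usual geometric-sum formula):</p>

```python
from fractions import Fraction

for N in range(10):
    s = sum(Fraction(1, 4**k) for k in range(N + 1))  # 1 + 1/4 + ... + 1/4^N
    # closed form of the partial sum, which tends to 4/3
    assert s == Fraction(4, 3) * (1 - Fraction(1, 4**(N + 1)))
    # the finite version of the trick above: s - s/4 = 1 - 1/4^(N+1)
    assert s - s / 4 == 1 - Fraction(1, 4**(N + 1))
```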
2,608,455
<p>Could someone please help me calculate the sum of the infinite series $$\sum_{n=1}^{\infty}\frac{1}{4n^{2}-1}\,?$$ I see that $$\lim_{n\rightarrow\infty}\frac{1}{4n^{2}-1}=0$$ so the series is convergent based on Cauchy's convergence test. But how do I calculate the sum? Thank you.</p>
Olivier Oloa
118,798
<p><strong>Hint</strong>. By a fraction decomposition, one gets $$ \frac{2}{4n^{2}-1}=\frac{1}{2n-1}-\frac{1}{2n+1} $$ then one may use a <a href="http://mathworld.wolfram.com/TelescopingSum.html" rel="nofollow noreferrer">telescoping sum</a>.</p>
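<p>The telescoping can be verified exactly on partial sums (a sketch in Python; the closed form $\sum_{n=1}^{N} \frac{1}{4n^2-1} = \frac{N}{2N+1}$ follows from the decomposition above):</p>

```python
from fractions import Fraction

def partial_sum(N):
    return sum(Fraction(1, 4 * n * n - 1) for n in range(1, N + 1))

for N in (1, 5, 50, 500):
    # telescoping leaves (1/2)(1 - 1/(2N+1)) = N/(2N+1)
    assert partial_sum(N) == Fraction(N, 2 * N + 1)
# so the partial sums tend to 1/2
```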
2,608,455
<p>Could someone please help me calculate the sum of the infinite series $$\sum_{n=1}^{\infty}\frac{1}{4n^{2}-1}\,?$$ I see that $$\lim_{n\rightarrow\infty}\frac{1}{4n^{2}-1}=0$$ so the series is convergent based on Cauchy's convergence test. But how do I calculate the sum? Thank you.</p>
operatorerror
210,391
<p>Your justification for the convergence of the series needs work (having a summand with limit zero does not imply that the sum converges!). It does, however, converge a priori, by noting that $$ \frac{1}{4n^2-1}=O\left(\frac{1}{n^2}\right) $$ so it converges by the $p$-test.</p> <p>You may also use the partial fraction decomposition noted in the other answers and compute the telescoping series, showing that it converges.</p>
7,725
<p>Basically the textbooks in my country are awful, so I searched the web for a precalculus book and found this one: <a href="http://www.stitz-zeager.com/szprecalculus07042013.pdf">http://www.stitz-zeager.com/szprecalculus07042013.pdf</a></p> <p>However, it does not cover convergence, limits, etc., and those topics were only briefly mentioned in my old textbooks. So what I am asking is: are these topics a prerequisite for calculus, or are they a part of the subject?</p>
Jessica B
4,746
<p>Just to throw out a different answer: my immediate reaction was 'neither'. My experience in the UK has been that limits and convergence get the slightest whisper when you first meet differentiation and integration at high school, and at university they sit firmly under the heading 'analysis'.</p>
2,610,560
<blockquote> <p>Given that $x, y, z$ are positive integers and that $$3x=4y=7z$$ find the minimum value of $x+y+z$. The options are:</p> <p>A) 33</p> <p>B) 40</p> <p>C) 49</p> <p>D) 61</p> <p>E) 84</p> </blockquote> <p>My attempt:</p> <p>$y=\frac{3}{4}x, z=\frac{3}{7}x$. </p> <p>Substituting these values into $x+y+z$, I get $\frac{117}{28}x$. I have no idea how to continue. $x$ in this case would have to be 28, meaning that the sum is $117$, which is not one of the options</p>
cd7c3
522,571
<p>You can think of the smallest common value of $3x$, $4y$, and $7z$ as the smallest multiple of $3$ that is also a multiple of $4$ and $7$, i.e. $\operatorname{lcm}(3,4,7) = 84$. So if the minimum value of $3x$ is $84$, the minimum value of $x$ is $28$. Using the same method you find that $y$ equals $21$ and $z$ equals $12$; therefore the minimum value of $x+y+z$ is $28+21+12$, which equals $61$. </p>
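<p>A one-line check with the standard library (Python 3.9+ for the multi-argument <code>math.lcm</code>):</p>

```python
from math import lcm

m = lcm(3, 4, 7)              # smallest common value of 3x = 4y = 7z
x, y, z = m // 3, m // 4, m // 7
assert (m, x, y, z) == (84, 28, 21, 12)
assert x + y + z == 61        # option D
```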
3,433,383
<p>Is it possible to write, let's say, $a^2 + 1/a^2 - 1$ in the form $(a-b)^2$?</p> <hr> <p>So I have to solve a problem (as practice, no test or assignment) that says to show whether the roots of a given equation are real.</p> <p>The equation is: </p> <p><span class="math-container">$$x^2 - 2\left(m + \frac1m \right)x + 3 = 0$$</span></p> <p>I proceeded to solve it using the discriminant formula ($b^2 - 4ac$), and what I'm left with is </p> <p><span class="math-container">$$4\left(m^2 + \frac1{m^2} -1\right) $$</span></p> <p>And I have no idea what to do next. I know that for an equation to have real roots its discriminant has to be non-negative, but I don't know how to tell whether this expression is greater or less than zero, or whether this is the correct answer to the problem.</p> <p>Thanks for the help</p>
Michael Hoppe
93,935
<p>As <span class="math-container">$x^2-2ax+3=0\iff (x-a)^2=a^2-3$</span>, we'll have two solutions iff <span class="math-container">$a^2&gt;3$</span>. Now <span class="math-container">$$\left(m+\frac1m\right)^2=\left(m-\frac1m\right)^2+4.$$</span></p>
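<p>A quick numerical spot-check of the identity and of the resulting sign of the discriminant (a sketch; the sample values of $m$ are arbitrary):</p>

```python
# The discriminant is 4((m + 1/m)^2 - 3); since (m + 1/m)^2 = (m - 1/m)^2 + 4
# is at least 4 > 3, it is positive for every real m != 0.
for m in (-3.0, -0.5, 0.1, 1.0, 2.7):
    a = m + 1 / m
    assert abs(a * a - ((m - 1 / m) ** 2 + 4)) < 1e-9  # the identity
    assert 4 * (a * a - 3) > 0                         # discriminant > 0
```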
4,094,333
<blockquote> <p>Suppose <span class="math-container">$a_1,a_2&gt;0$</span> and <span class="math-container">$a_{n+2}=2+\dfrac{1}{a_{n+1}^2}+\dfrac{1}{a_n^2}(n\ge 1)$</span>. Prove <span class="math-container">$\{a_n\}$</span> converges.</p> </blockquote> <p>First, we may show <span class="math-container">$\{a_n\}$</span> is bounded for <span class="math-container">$n\ge 3$</span>, since <span class="math-container">$$2 \le a_{n+2}\le 2+\frac{1}{2^2}+\frac{1}{2^2}=\frac{5}{2},~~~~~~ \forall n \ge 1.$$</span></p> <p>But how to go on?</p>
cybershiptrooper
719,632
<p>I'm not so sure about this one, but here's my attempt-</p> <p><span class="math-container">\begin{align} \lVert a_{n+1}-a_n \rVert &amp;= \left\lVert \frac{1}{a_n^2}-\frac{1}{a_{n-2}^2} \right\rVert\\ &amp;\leq \frac{ \lVert {a_{n-2}^2}-a_n^2 \rVert }{16} \\ &amp;= \frac{ \lVert {a_{n-2}+a_{n}} \rVert \lVert {a_{n-2}-a_{n}} \rVert }{16} \\ &amp;\leq \frac{6}{16} \lVert {a_{n-2}-a_{n}} \rVert \\ &amp;=\frac{6}{16} \left\lVert \frac{1}{a_{n-3}^2} + \frac{1}{a_{n-4}^2} - \frac{1}{a_{n-1}^2} - \frac{1}{a_{n-2}^2} \right\rVert \\ &amp;=\frac{6}{16} \left\lVert \left(\frac{1}{a_{n-3}^2} - \frac{1}{a_{n-1}^2}\right) + \left(\frac{1}{a_{n-4}^2} - \frac{1}{a_{n-2}^2}\right) \right\rVert \\ &amp;\leq \frac{6}{16} \cdot \frac{6}{16} \lVert (a_{n-1}-a_{n-3})+(a_{n-2}-a_{n-4}) \rVert\\ &amp;\ \ \vdots \\ &amp;\leq \left(\frac{6}{16}\right)^n max(|| a_4-a_2 ||,\lVert a_3-a_1||)2^n \\ &amp;=\left(\frac{12}{16}\right)^n max(|| a_4-a_2 ||,\lVert a_3-a_1||) \end{align}</span></p> <p>(Original image: <a href="https://i.stack.imgur.com/uBBqe.jpg" rel="nofollow noreferrer">https://i.stack.imgur.com/uBBqe.jpg</a> )</p> <p>Does this seem correct? I tried using Cauchy's convergence test and the fact that the tail lies between 2 and 3. If not, I hope it provides at least some clue to the correct path. The last step is a result of the observation that each term of <span class="math-container">$a_k - a_{k-2}$</span> should produce two similar terms(as seen while expanding), whose index decreases linearly.</p> <p>Edit: I have added the term <span class="math-container">$max(|| a_4-a_2 ||,\lVert a_3-a_1||)$</span> instead of the original single term to take care of the parity issue.</p>
2,888,464
<p>Let $S$ be the set of polynomials $f(x)$ with integer coefficients satisfying </p> <p>$f(x) \equiv 1$ mod $(x-1)$</p> <p>$f(x) \equiv 0$ mod $(x-3)$</p> <p>Which of the following statements are true?</p> <p>a) $S$ is empty .</p> <p>b) $S$ is a singleton.</p> <p>c)$S$ is a finite non-empty set.</p> <p>d) $S$ is countably infinite.</p> <p>My Try: I took $x =5$ then $f(5) \equiv 1$ mod $4$ and $f(5) \equiv 0$ mod $2$ . Which is impossible so $S$ is empty.</p> <p>Am I correct? Is there any formal way to solve this?</p>
A. Pongrácz
577,800
<p>The second condition basically says that $3$ is a root. But then substitute $x=3$ into the first condition: $0=f(3)\equiv 1 \pmod 2$, a contradiction.</p>
3,200,431
<p>We can say that any field <span class="math-container">$\mathbb{K}$</span> is a <span class="math-container">$1$</span>-dimensional vector space over itself: <span class="math-container">$\mathbb{K}_{\mathbb{K}}$</span>. So any vector of another finite-dimensional vector space <span class="math-container">$V_{\mathbb{K}}$</span>, after choosing some basis, can be represented as an element of the isomorphic space <span class="math-container">$\mathbb{K}_{\mathbb{K}}^{n} = \prod_{i=1}^n\mathbb{K}_i$</span>, where <span class="math-container">$n$</span> is the dimension of <span class="math-container">$V_{\mathbb{K}}$</span>. But we can define operations on elements of <span class="math-container">$\mathbb{K}_{\mathbb{K}}^{n}$</span> as on a direct product of groups: <span class="math-container">$(x_1,\dots,x_n) + (x'_1, \dots, x'_n) = (x_1 + x'_1, \dots ,x_n + x'_n)$</span>, and similarly for the second field operation: <span class="math-container">$(x_1,\dots,x_n) \times (x'_1, \dots, x'_n) = (x_1 \times x'_1, \dots ,x_n \times x'_n)$</span>.</p> <p>But usually we do this only with one field operation, <span class="math-container">$+$</span>. Why? </p>
Dirk
379,594
<p>On the vector space, there is an addition that does not depend on the choice of basis you make, sometimes called a "natural" addition.<br> However, you usually can't define a natural multiplication in your way, it would always depend on a basis and different bases will give different multiplications. Therefore, it is mostly not used when dealing with vector spaces.</p>
3,044,390
<p>As we know, a random variable associates random events with probability values. These random events belong to a specific population, and the random variable represents that population. In other words, random variables are representatives of their populations, and as such they have their own distributions.</p> <blockquote> <p>What I am wondering about is summing an adequate number of random variables.</p> </blockquote> <p>According to a resource that I read two days ago, by summing a large number of random variables we can obtain a new random variable with a <code>normal distribution</code>; this is called the <code>central limit theorem</code>. </p> <p>Actually, I investigated the central limit theorem, and I could not see the relationship between the theorem itself and summing random variables. The theorem is about the fact that a large number of data samples produces a normal distribution. </p> <blockquote> <p>Can anyone explain the relationship between summing random variables and the central limit theorem?</p> </blockquote>
Sener Ozonder
258,967
<p>Say you have a probability distribution function (PDF) which has a finite mean <span class="math-container">$\mu$</span> and standard deviation <span class="math-container">$\sigma$</span>. This PDF does not have to be a well-known PDF like Poisson or Gaussian; it can have an uncommon profile.</p> <p>1) If you generate from this PDF a large enough number of random numbers, the histogram of those random numbers will reveal this PDF itself. Of course, this is trivially true. The Central Limit Theorem is not relevant here.</p> <p>2) Now imagine you generate, say, 30 different sets (samples) from this PDF, each set containing, say, 50 random numbers. Now calculate the mean of each set to get 30 means. Now plot the histogram of those means (not the random numbers), and you will see that the histogram of those means will be a Gaussian regardless of the PDF that you used to generate the random numbers. This is what the Central Limit Theorem tells us. </p> <p>Note that 1) is about the random numbers themselves generated from the PDF while 2) is about the distribution of the "means", not the random numbers themselves. The common misconception is that if you sample large enough numbers from a PDF, you'll get a Gaussian regardless of the underlying PDF. Of course this is trivially false.</p> <p>Hope this helps.</p>
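<p>Point 2) is easy to reproduce with the standard library alone (a sketch; the choice of the exponential distribution — decidedly non-Gaussian — and the sample sizes are mine):</p>

```python
import random
import statistics

random.seed(42)
mu = sigma = 1.0                  # Exp(1) has mean 1 and standard deviation 1
n, num_sets = 50, 2000            # 2000 samples of 50 draws each

means = [statistics.fmean(random.expovariate(1.0) for _ in range(n))
         for _ in range(num_sets)]

# The sample means cluster around mu with spread close to sigma / sqrt(n),
# as the Central Limit Theorem predicts.
assert abs(statistics.fmean(means) - mu) < 0.05
assert abs(statistics.stdev(means) - sigma / n ** 0.5) < 0.05
```

<p>A histogram of <code>means</code> would look Gaussian even though the underlying exponential PDF is strongly skewed.</p>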
1,368,447
<p><img src="https://i.stack.imgur.com/TL80w.jpg" alt="enter image description here"></p> <p>So I tried the good old Calculus 1 approach and turned this into an optimization problem. The equations got REALLY hairy, but it was okay since this was the graphing calculator section of the exam. I called the longer part of the horizontal diagonal $x_2$, the shorter part of the horizontal diagonal $x_1$, and half of the vertical diagonal $v$. </p> <p>After getting $x_1$ in terms of $x_2$ and some tedious algebra, I got that $$2y=\sqrt{4 - \sqrt{x_2^2 - 21}} + \sqrt{25 - x_2^2}.$$ I then multiplied both sides of the equation by the total length of the horizontal diagonal, and graphed the right-hand side of the equation on my calculator. The logic was to find out where $(x_1 + x_2)2y$ would reach a maximum point, since the product of the diagonals of a kite equal twice the area of the kite. </p> <p>After graphing that disaster I got a somewhat reasonable answer. I got the horizontal diagonal to be equal to roughly $4.5829$, and the vertical diagonal to be equal to $4$. However, I don't have an answer key, so I don't know if I am correct. Any feedback would be appreciated!</p>
André Nicolas
6,312
<p>It is easier without a calculator. The area of the top triangle is $(1/2)(2)(5)\sin\theta$, where $\theta$ is the top angle. This is maximized when $\theta=\pi/2$. So the long diagonal has length $\sqrt{29}$. The other diagonal is now not hard to calculate, since the area of the kite is $10$.</p> <p><strong>Remark:</strong> In your notation, $x_1=\sqrt{x_2^2-21}$ and $y=\sqrt{25-x_2^2}$. So we are trying to maximize $$\sqrt{25-x_2^2}\left(\sqrt{x_2^2-21}+x_2\right).$$ From the description in the OP, it seems you may have used the wrong expression for $y$ in terms of $x_2$.</p>
865,639
<p>I was doing some computations for research purposes, which led me to this integral:</p> <p>$$I(n) = \int_0^{\infty} (t^2+t^4)^n e^{-t^2-t^4}\,dt.$$</p> <p>This is very suggestively written so as to employ a parametric differentiation technique as so:</p> <p>$$\left(\frac{\partial^n}{\partial \alpha^n}\right)\int_0^{\infty}e^{-\alpha(t^2+t^4)}\,dt.$$</p> <p>This integral has a nice, closed form expression:</p> <p>$$\int_0^{\infty} e^{-\alpha(t^2+t^4)}\,dt = \frac{1}{4} e^{\frac{\alpha}{8}}K_{\frac{1}{4}}\left(\frac{\alpha}{8}\right),$$</p> <p>where $K_{\nu}$ is the modified Bessel function of the second kind. From here, I would have to employ $n$ differentiations which would be pretty messy to work out due to product rule. $n$ applications of product rule does have a nice combinatorial expression but it is far from explicit. Moreover, Bessel functions can get pretty complicated after differentiating so this seems like a bad approach.</p> <p>Instead I decided to run some examples on Mathematica and computed the first 22 of these and noticed a very surprising pattern. 
In what follows $I_{\nu}$ is the modified Bessel function of the first kind.</p> <p>$$I(0) = \frac{1}{4} e^{\frac{1}{8}}K_{\frac{1}{4}}\left(\frac{1}{8}\right)$$</p> <p>$$I(1) = \frac{1}{32} e^{\frac{1}{8}}\left(K_{\frac{1}{4}}\left(\frac{1}{8}\right) + K_{\frac{3}{4}}\left(\frac{1}{8}\right)\right)$$</p> <p>$$I(2) = \frac{3}{128\sqrt{2}} e^{\frac{1}{8}}\pi \left(3 I_{-\frac{1}{4}} \left(\frac{1}{8}\right) + I_{\frac{1}{4}}\left(\frac{1}{8}\right) - I_{\frac{3}{4}}\left(\frac{1}{8}\right) + I_{\frac{5}{4}}\left(\frac{1}{8}\right)\right)$$</p> <p>$$I(3) = \frac{1}{256\sqrt{2}} e^{\frac{1}{8}}\pi \left(39 I_{-\frac{1}{4}} \left(\frac{1}{8}\right) + 17 I_{\frac{1}{4}}\left(\frac{1}{8}\right) - 14 I_{\frac{3}{4}}\left(\frac{1}{8}\right) + 14 I_{\frac{5}{4}}\left(\frac{1}{8}\right)\right)$$</p> <p>$$I(4) = \frac{1}{2048\sqrt{2}} e^{\frac{1}{8}} \pi \left(1029 I_{-\frac{1}{4}} \left(\frac{1}{8}\right) + 367 I_{\frac{1}{4}}\left(\frac{1}{8}\right) - 349 I_{\frac{3}{4}}\left(\frac{1}{8}\right) + 349 I_{\frac{5}{4}}\left(\frac{1}{8}\right)\right)$$</p> <p>$$I(5) = \frac{9}{8192\sqrt{2}} e^{\frac{1}{8}} \pi \left(1953 I_{-\frac{1}{4}} \left(\frac{1}{8}\right) + 619 I_{\frac{1}{4}}\left(\frac{1}{8}\right) - 643 I_{\frac{3}{4}}\left(\frac{1}{8}\right) + 643 I_{\frac{5}{4}}\left(\frac{1}{8}\right)\right)$$</p> <p>$$I(6) = \frac{1}{16384\sqrt{2}} e^{\frac{1}{8}} \pi \left(185157 I_{-\frac{1}{4}} \left(\frac{1}{8}\right) + 53131 I_{\frac{1}{4}}\left(\frac{1}{8}\right) - 59572 I_{\frac{3}{4}}\left(\frac{1}{8}\right) + 59572 I_{\frac{5}{4}}\left(\frac{1}{8}\right)\right)$$</p> <p>Repeat ad nauseum. Each of the terms in the denominator seems to be a power of $2$, the third and fourth terms seem to have the same coefficient (modulo a sign) and the signs are $+$, $+$, $-$, $+$. The "nice" output seems to suggest to me that there is a closed-form expression for $I(n)$ in general but I haven't the slightest clue as to how to come up with it. 
Can anyone shed some light on the matter?</p> <p>A PDF with more expressions can be found <a href="https://drive.google.com/file/d/0B2TnDqwzoFw6YWpWekhMQm1xcWs/edit?usp=sharing">here</a>. (Mathematica output.)</p>
Daccache
79,416
<p>Not a full answer, but a partial one:<br /> Notice that the exponents of two in the denominators increase by the repeating pattern 3, 2, 1. That is, in this case the exponents on the 2 are 2, 5, 7, 8, 11, 13, 14, 17, 19, 20.... This is sequence <a href="https://oeis.org/A047268" rel="noreferrer">A047268</a> on the OEIS website, the sequence of numbers congruent to {1, 2, 5} mod 6.</p> <p>I fiddled around with the other coefficients, however nothing useful turned out (no known sequences with the numbers themselves or their differences, products, sums...). If you absolutely need a closed form expression for it, then you could just define three new sequences <span class="math-container">$S_1$</span>, <span class="math-container">$S_2$</span> and <span class="math-container">$S_3$</span> as the coefficients of the first three modified Bessel functions of the first kind in your expression. Then the closed form expression would be:</p> <p><span class="math-container">$I(n) = 2^{-\delta_{n + 2}}e^{\frac18}(K_{\frac14}(\frac18)+nK_{\frac34}(\frac18))$</span> <span class="math-container">$ n = 0,1$</span></p> <p><span class="math-container">$I(n) = 2^{-\frac{\delta_{n + 2}}{2}}e^{\frac18}\pi(S_1I_{-\frac{1}{4}}(\frac18) + S_2I_{\frac{1}{4}}(\frac18) - S_3I_{\frac{3}{4}}(\frac18) + S_3I_{\frac{5}{4}}(\frac18))$</span> <span class="math-container">$ n \geq 2$</span>.</p> <p>Where,<br /> <span class="math-container">$\delta_n$</span> is sequence A047268,<br /> <span class="math-container">$S_1, S_2, S_3$</span> are explicitly given by you. Note that the coefficients are not simplified by being factored out, as is for example in I(2) where 3 is factored out. 
You need to multiply them back in to get the sequences so there will always be a coefficient of 1 outside the numerator.</p> <p>Finally, if you really want to get compact, you can write the second expression as such:<br /> <span class="math-container">$I(n) = 2^{-\frac{\delta_{n + 2}}{2}}e^{\frac18}\pi\sum\limits_{k = 1}^4 S_kI_{\frac{2k - 3}{2}}(\frac18)$</span>.</p> <p>Where <span class="math-container">$\delta_n$</span> and <span class="math-container">$S_1, S_2, S_3$</span> are defined as above, and <span class="math-container">$S_4 = -S_3$</span> (it appears implicitly in the summation).</p> <p>Hope this helps!</p>
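<p>The claim about the exponent pattern is easy to check mechanically (a sketch; the exponent list is read off the expressions in the question):</p>

```python
exponents = [2, 5, 7, 8, 11, 13, 14, 17, 19, 20]

# consecutive differences repeat 3, 2, 1, ...
diffs = [b - a for a, b in zip(exponents, exponents[1:])]
assert diffs == [3, 2, 1] * 3

# equivalently, every exponent is congruent to 1, 2 or 5 mod 6
assert all(e % 6 in {1, 2, 5} for e in exponents)
```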
2,331,255
<p>Let $K$ be a set of all Killing vector fields on $\mathbf R^n$ (with the Euclidean metric $\bar g$) which vanish at the origin.</p> <p>(A vector field $V$ on a Riemannian manifold $(M, g)$ is said to be a <strong><em>Killing vector field</em></strong> if the flow of $V$ acts by isometries of $M$. This is equivalent to saying that $\mathcal L_Vg=0$).</p> <p>If $V\in K$, then by using $\mathcal L_V\bar g=0$, we get that the matrix $[\partial V^i/\partial x^j]$ is anti-symmetric, where $V^i$ are the components of $V$ in the standard coordinates.</p> <p>Define a map $T:K\to \mathfrak o(n)$ as</p> <p>$$T(V)= \left[\frac{\partial V^i}{\partial x^j}(0)\right]$$ where $\mathfrak o(n)$ is the Lie algebra of $O(n)$, which is "same" as the space of $n\times n$ real anti-symmetric matrices.</p> <blockquote> <p><strong>Problem.</strong> To show that $T$ is injective.</p> </blockquote> <p>I am quite lost here.</p>
D Ford
317,912
<p>We need the following lemma, whose proof will be at the bottom of this answer. </p> <p><strong>Lemma:</strong> If $\phi, \psi$ are Riemannian isometries on open subsets $U \to V$ of $\mathbb R^n$, with $\phi(p) = \psi(p)$ and $d\phi_p = d\psi_p$ for some $p \in U$, then $\phi = \psi$. </p> <p>Now, suppose $V$ is a Killing vector field vanishing at $0$ in the kernel of the map $T : K \to \mathfrak o(n)$. Since $V$ vanishes at $0$, if $\theta$ is the flow of $V$, $\theta_t(0) = 0$ for every $t$ in the flow domain. We claim $d(\theta_t)_0$ is independent of $t.$ For then, by the lemma, $\theta_t = \mathrm{Id}$ on all of $\mathbb R^n$, and so $V = 0$. </p> <p>Let $\theta_t^i$ be the $i^\textrm{th}$ component of the flow $\theta_t$. Then $\dfrac{d}{dt}\bigg|_{t=0} \theta^i_t(x) = V^i(x)$ for $x \in \mathbb R^n$, where $V^i$ are the component functions of $V$. To prove the claim, it suffices to show that $\dfrac{\partial \theta_t^i}{\partial x^j}(0)$ is independent of $t$ for every $i,j$. This follows from Clairaut's theorem: for every $t_0$ in the flow domain, we have $$ \frac{d}{dt}\bigg|_{t=t_0} \frac{\partial}{\partial x^j} \theta_t^i(0) = \frac{\partial}{\partial x^j} \frac{d}{dt}\bigg|_{t=t_0} \theta_t^i(0) = \frac{\partial}{\partial x^j} \frac{d}{dt}\bigg|_{t=0} \theta^i_t(\theta_{t_0}(0)) = \frac{\partial}{\partial x^j} V^i(\theta_{t_0}(0)) = \frac{\partial V^i}{\partial x^j}(0) = 0 $$ since $V \in \ker(T)$. Therefore $\dfrac{\partial \theta_t^i}{\partial x^j}(0)$ is independent of $t$ for every $i,j$, hence so is $d(\theta_t)_0$, so $V=0$ by the lemma, so $T$ is injective. QED. </p> <p><em>Proof of lemma:</em> Assume $U$ is convex. Isometries take line segments to line segments, so $V$ is also convex. For $q \in U$, let $\gamma(t) = p + t(q-p)$. 
Then $\phi \circ \gamma$ and $\psi \circ \gamma$ are both line segments from $r := \phi(p) = \psi(p)$ to $\phi(q)$ and to $\psi(q)$ respectively; that is, $\phi \circ \gamma(t) = r + t(\phi(q)-r)$ and $\psi \circ \gamma(t) = r + t(\psi(q) - r)$. Since $d\phi_p = d\psi_p$, we get $$\phi(q) - r = \frac d{dt}\bigg|_{t=0} (\phi \circ \gamma)(t) = d\phi_p(\dot\gamma(0)) = d\psi_p(\dot\gamma(0)) = \frac{d}{dt}\bigg|_{t=0} (\psi \circ \gamma)(t) = \psi(q)-r $$ so $\phi(q) = \psi(q)$. If $U$ is not convex, $U$ is open, hence a union of convex open sets, on each of which $\phi = \psi$. QED. </p>
2,675,758
<p>For example:</p> <p>There are two green marbles and two red marbles in a bag. Two marbles are chosen at random, what is the probability that the two marbles which have been chosen are of the same colour?</p> <p>Ordered + distinct: <br> Marbles = $G1, G2, R1, R2$ <br> Event space = $\{|G1~G2|, |G2~G1|, |R1~R2|, |R2~R1|\} = 4$ ways <br> Sample space = $^4P_2 = 12$ ways <br> Probability = $\dfrac{4}{12} = \dfrac{1}{3}$ <br></p> <p>Unordered + distinct: <br> Marbles = $G1, G2, R1, R2$ <br> Event space = $\{|G1~G2|, |R1~R2|\} = 2$ ways <br> Sample space = $^4C_2 = 6$ ways <br> Probability = $\dfrac{2}{6} = \dfrac{1}{3}$ <br></p> <p>Ordered + identical: <br> Marbles = $G, G, R, R$ <br> Event space = $\{|G~G|, |R~R|\} = 2$ ways <br> Sample space = $\{|G~G|, |G~R|, |R~G|, |R~R|\} = 4$ ways <br> Probability = $\dfrac{2}{4} = \dfrac{1}{2}$ <br></p> <p>Unordered + identical: <br> Marbles = $G, G, R, R$ <br> Event space = $\{|G~G|, |R~R|\} = 2$ ways <br> Sample space = $\{||G~G|, |G~R|, |R~R||\} = 3$ ways <br> Probability = $\dfrac{2}{3}$</p> <p>Which approach is the correct one? And more importantly, <strong>why</strong> are the other approaches wrong?</p>
Siong Thye Goh
306,553
<p>$$(x+2)^2 = 100$$</p> <p>$$x+2 = \pm 10$$ $$x=-2\pm10=-12 \text{ or } 8$$</p> <p>Since $x&gt;0$, reject $x=-12$ as a solution and hence $x=8$.</p>
3,489,896
<p>It is clear to me that <span class="math-container">$\liminf X_n \le \limsup X_n$</span>, but every time I see that <span class="math-container">$\liminf X_n \le \sup X_n$</span> it strikes me as not so obvious. Could someone help me to understand this inequality?</p>
Boka Peer
304,326
<p>Isn't it clear from the definition? The limsup is less than or equal to the sup. </p> <p>Proof: </p> <p>The definition of limsup is the following: <span class="math-container">$\inf_n\{\sup(T_n)\}$</span>, where <span class="math-container">$T_n$</span> is the <span class="math-container">$n$</span>th tail of the sequence <span class="math-container">$x_n$</span>, that is, <span class="math-container">$T_n = \{x_k \mid k\geq n\}$</span>. Clearly, <span class="math-container">$\sup(T_n)$</span> is less than or equal to <span class="math-container">$\sup(x_n)$</span>. Now just take the infimum over <span class="math-container">$n$</span>: we get <span class="math-container">$\limsup x_n \leq \sup x_n$</span>. </p>
115,630
<p>Let $V$ and $W$ be two algebraic structures, and let $v\in V$, $w\in W$ be two arbitrary elements.</p> <p>Then what is the geometric intuition for $v\otimes w$, and, more generally, for $V\otimes W$? Please explain it to me in the most concrete way possible (for example, with $v, w$ vectors in two-dimensional vector spaces $V, W$).</p> <p>Thanks</p>
Neal
20,569
<p>You want to stay concrete, so let's let $V$ be a two-dimensional real vector space and $W = \operatorname{Hom}(V,\mathbb{R})$. Then $V = T^1_0(V)$ and $W = T^0_1(V)$, so for any $v\in V$ and $w\in W$, $v\otimes w\in T^1_1(V)$.</p> <p>Each of $v,w$ has two components; $v = v^1e_1 + v^2e_2$ and $w = w_1e^1 + w_2e^2$. Here $e_1,e_2$ is a basis for $V$ and $e^1,e^2$ is the dual basis in $V^*$. </p> <blockquote> <p>The components of $v\otimes w$ are all the components of $v$ times all the components of $w$: $$(v\otimes w)^i_j = v^iw_j.$$ </p> </blockquote> <p>To see this, observe that $(v\otimes w)(\theta, x)=v(\theta)w(x),$ so that $(v\otimes w)(e^i, e_j) = v(e^i)w(e_j).$</p> <p>More generally, if $A = A^{i_1\cdots i_p}_{j_1\cdots j_q}\in T^p_q$ and $B = B^{k_1\cdots k_r}_{l_1\cdots l_s}\in T^r_s$, then </p> <p>$$(A\otimes B)^{i_1\cdots i_pk_1\cdots k_r}_{j_1\cdots j_q l_1\cdots l_s} = A^{i_1\cdots i_p}_{j_1\cdots j_q}B^{k_1\cdots k_r}_{l_1\cdots l_s}.$$</p> <p>Note that our example is from the derived tensor algebra over a two-dimensional vector space, $T^p_q(V) = (V^*)^{\otimes p}V^{\otimes q}.$ Hopefully this helps build your intuition about the case where $V$ and $W$ are two vector spaces of potentially different dimension.</p>
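<p>For the $T^1_1$ example above, the component formula is nothing but an outer product, which can be sketched in a few lines (names mine):</p>

```python
def outer(v, w):
    """Components of the tensor product: (v (x) w)^i_j = v^i * w_j."""
    return [[vi * wj for wj in w] for vi in v]

v = [1, 2]   # v = 1*e_1 + 2*e_2
w = [3, 4]   # w = 3*e^1 + 4*e^2
assert outer(v, w) == [[3, 4], [6, 8]]
```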
3,649,125
<p>Bridge is a game of four players in which each player is dealt 13 cards from a standard 52 card deck. Bridge players (such as myself) are interested in the number of possible deals, where each player is distinct. This can be counted by</p> <p><span class="math-container">$$\binom{52}{13}\binom{39}{13}\binom{26}{13}\binom{13}{13}=5.364\times10^{28}$$</span></p> <p>However, this number is misleadingly large, since bridge players usually only care about the face cards (jack, queen, king, and ace) in each suit. We often consider the cards with denominations 2-10 as indistinguishable.<b> Supposing we distinguish only face cards, what is the number of possible deals? </b></p> <p><a href="http://www.rpbridge.net/7z74.htm" rel="nofollow noreferrer">This source</a> puts the figure at <span class="math-container">$8.110\times10^{15}$</span> based on a computer program. I am curious if there is a more elegant mathematical solution.</p>
Rayna
751,499
<p>(not enough rep to comment)</p> <p>FWIW, <a href="https://en.wikipedia.org/wiki/Contract_bridge_probabilities#Number_of_possible_deals" rel="nofollow noreferrer">Wikipedia</a> cites <a href="http://home.planet.nl/~narcis45/CountingBridgeDeals.htm" rel="nofollow noreferrer">another site</a> which gives the same number as the OP's link. Both this site and the OP's link say that there doesn't seem to be any simple formula to answer this question.</p>
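<p>For what it's worth, the exact count of unrestricted deals quoted in the question is easy to confirm with integer arithmetic (a sketch; the multinomial form $52!/(13!)^4$ is equivalent to the product of binomials):</p>

```python
from math import comb, factorial

deals = comb(52, 13) * comb(39, 13) * comb(26, 13) * comb(13, 13)
assert deals == factorial(52) // factorial(13) ** 4
assert deals == 53_644_737_765_488_792_839_237_440_000   # about 5.364e28
```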
1,200,878
<p>Integrate the following:</p> <p>$$\int \frac{\sin x (2 \cos x - \sin x)}{2\sin x + \cos x} dx$$</p> <p>I have tried integration by parts, taking $\sin x$ as the first function, but I got stuck at the next step. Please help me.</p>
Claude Leibovici
82,404
<p>You can even start from the beginning using the tangent half-angle substitution $t=\tan(\frac x2)$, which leads to $$\int \frac{\sin x (2 \cos x - \sin x)}{2\sin x + \cos x} dx=\int \frac{8t(t^2 + t - 1)}{(1+t^2)^2(t^2-4t-1)} dt$$ Now continue with a partial fraction decomposition, taking into account the fact that $t^2-4t-1=0$ has two real roots.</p> <p>This will leave you with a series of quite simple integrals.</p> <p>However, Elaqqad made the problem simpler by the first observation.</p>
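<p>The substitution can be spot-checked numerically: with $t=\tan(x/2)$ one has $dt/dx = (1+t^2)/2$, so the original integrand must equal the $t$-integrand times $dt/dx$ (a sketch; the test points are arbitrary but avoid the poles):</p>

```python
import math

def lhs(x):   # original integrand in x
    s, c = math.sin(x), math.cos(x)
    return s * (2 * c - s) / (2 * s + c)

def rhs(x):   # transformed integrand in t, times dt/dx = (1 + t^2)/2
    t = math.tan(x / 2)
    g = 8 * t * (t**2 + t - 1) / ((1 + t**2) ** 2 * (t**2 - 4 * t - 1))
    return g * (1 + t**2) / 2

for x in (0.3, 0.7, 1.1, 2.0):
    assert abs(lhs(x) - rhs(x)) < 1e-9
```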
638,971
<p>What is the definition of $R_{ijkl}$ in terms of the metric on a manifold?</p> <p>I know the definition of the Riemann tensor $R^l{}_{ijk}$. But what exactly is meant by $R_{ijkl}$?</p>
alexjo
103,399
<p>$R_{ijkl}$ is the covariant version of the curvature tensor $$ R_{ijkl} = g_{im} R^m{}_{jkl} $$</p> <p>If one defines the curvature tensor by $$ R^\ell{}_{ijk}= \frac{\partial}{\partial x^j} \Gamma^\ell{}_{ik}-\frac{\partial}{\partial x^k}\Gamma^\ell{}_{ij} +\Gamma^\ell{}_{js}\Gamma_{ik}^s-\Gamma^\ell{}_{ks}\Gamma^s{}_{ij} $$ lowering indices with $R_{\ell ijk}=g_{\ell s}R^s{}_{ijk}$ one gets $$ R_{ik\ell m}=\frac{1}{2}\left( \frac{\partial^2g_{im}}{\partial x^k \partial x^\ell} + \frac{\partial^2g_{k\ell}}{\partial x^i \partial x^m} - \frac{\partial^2g_{i\ell}}{\partial x^k \partial x^m} - \frac{\partial^2g_{km}}{\partial x^i \partial x^\ell} \right) +g_{np} \left( \Gamma^n{}_{k\ell} \Gamma^p{}_{im} - \Gamma^n{}_{km} \Gamma^p{}_{i\ell} \right). $$ The symmetries of the tensor are</p> <ul> <li>$R_{ik\ell m}=R_{\ell mik}$ </li> <li>$R_{ik\ell m}=-R_{ki\ell m}=-R_{ikm\ell}$. </li> </ul> <p>that is, it is symmetric in the exchange of the first and last pair of indices, and antisymmetric in the flipping of a pair.</p> <p>The cyclic permutation sum (<strong>First Bianchi identity</strong>) is $$R_{ik\ell m}+R_{imk\ell}+R_{i\ell mk}=0.$$ This is often written $R_{i[jk\ell]}^{}=0$, where the brackets denote the antisymmetric part on the indicated indices.</p> <p>The <strong>Second Bianchi identity</strong> is $$R_{ijk\ell;m}^{}+R_{ij\ell m;k}^{}+R_{ijmk;\ell}^{}=0$$ or equivalently, $R_{ij[k\ell;m]}^{}=0$ where the semi-colon denotes a covariant derivative. </p>
509,487
<p>Prove $1+{n \choose 1}2+{n \choose 2}4+\cdots+{n \choose n-1}2^{n-1}+{n \choose n}2^n=3^n$ using combinatorial arguments. I have no idea how to begin solving this; a nudge in the right direction would be appreciated. </p>
peterwhy
89,922
<p>Consider the set of strings of $n$ characters over the alphabet $\{a, b, c\}$; there are $3^n$ of them in total. This total is also the sum of the number of strings with $n$ $a$'s, plus the number with $n-1$ $a$'s, $\cdots$, down to the number with no $a$'s: a string with $n-k$ $a$'s is determined by choosing the $k$ non-$a$ positions (${n \choose k}$ ways) and filling each of them with $b$ or $c$ ($2^k$ ways).</p>
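<p>The identity behind this argument can be confirmed directly (a sketch using <code>math.comb</code>, Python 3.8+):</p>

```python
from math import comb

# classify length-n strings over {a, b, c} by the number k of non-'a' positions:
# choose those k positions, then fill each with 'b' or 'c'
for n in range(12):
    assert sum(comb(n, k) * 2**k for k in range(n + 1)) == 3**n
```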
2,068,249
<p>I am considering block matrices $$\begin{pmatrix} A &amp; v \\ v^T &amp; x \end{pmatrix}$$ with $A \in \mathbb{R}^{(n-1) \times (n-1)}$, $v \in \mathbb{R}^{n-1}$, $x \in \mathbb{R}.$ Is there a rational $(n-1) \times (n-1)$ expression $p(A,v,x)$ I can form in the variables $A,v,x$ such that $$\mathrm{det}\begin{pmatrix} A &amp; v \\ v^T &amp; x \end{pmatrix} = \mathrm{det}(p(A,v,x))\; ?$$</p> <p>The first thing I tried is the block matrix formula $$\mathrm{det}\begin{pmatrix} A &amp; v \\ v^T &amp; x \end{pmatrix} = x \, \mathrm{det}\Big(A - \frac{1}{x}vv^T \Big).$$ However writing this as just an $(n-1) \times (n-1)$ determinant introduces strange exponents: $$\mathrm{det}\Big( x^{1/(n-1)} A - x^{(2-n)/(n-1)} vv^T \Big)$$ which is obviously not polynomial unless $n = 2$. Since the result does not involve noninteger powers of $x$ I am hoping there is some expression $p$ that also only involves integer powers of $x$.</p>
gjh
37,021
<p>In "Algebra" by J.W.Archbold, Pitman, 4e, 1970, s24.16, p.417, it proves, </p> <p>$\mathrm{det}\begin{pmatrix} A &amp; u^T \\ v &amp; w \end{pmatrix} = -v.adj(A).u^T + w .det(A)$ </p> <p>where $u,v$ are $1$ x $n$ and $A$ is $n$ x $n$ and $w$ is $1$ x $1$. </p>
2,068,249
<p>I am considering block matrices $$\begin{pmatrix} A &amp; v \\ v^T &amp; x \end{pmatrix}$$ with $A \in \mathbb{R}^{(n-1) \times (n-1)}$, $v \in \mathbb{R}^{n-1}$, $x \in \mathbb{R}.$ Is there a rational $(n-1) \times (n-1)$ expression $p(A,v,x)$ I can form in the variables $A,v,x$ such that $$\mathrm{det}\begin{pmatrix} A &amp; v \\ v^T &amp; x \end{pmatrix} = \mathrm{det}(p(A,v,x))\; ?$$</p> <p>The first thing I tried is the block matrix formula $$\mathrm{det}\begin{pmatrix} A &amp; v \\ v^T &amp; x \end{pmatrix} = x \, \mathrm{det}\Big(A - \frac{1}{x}vv^T \Big).$$ However writing this as just an $(n-1) \times (n-1)$ determinant introduces strange exponents: $$\mathrm{det}\Big( x^{1/(n-1)} A - x^{(2-n)/(n-1)} vv^T \Big)$$ which is obviously not polynomial unless $n = 2$. Since the result does not involve noninteger powers of $x$ I am hoping there is some expression $p$ that also only involves integer powers of $x$.</p>
Arin Chaudhuri
404
<p>Using multilinearity one can rewrite $\text{det}\begin{pmatrix} A &amp; v \\ v^T &amp;x \end{pmatrix} = \text{det}\begin{pmatrix} A &amp; 0 \\ v^T &amp;x \end{pmatrix} + \text{det}\begin{pmatrix} A &amp; v \\ v^T &amp; 0 \end{pmatrix} = x \text{det}(A)+\text{det}\begin{pmatrix} A &amp; v \\ v^T &amp; 0 \end{pmatrix}.$</p> <p>Let $v^T = \begin{pmatrix} v_1 &amp; v_2 &amp; \dots &amp; v_n \end{pmatrix}$ so $v = \sum_{i=1}^{n} v_i e_i$ where $e_i$ is the $i^{\text{th}}$ canonical basis vector consisting of $1$ at the $i^{\text{th}}$ coordinate and $0$ elsewhere.</p> <p>Using multilinearity of the determinant one sees $\text{det}\begin{pmatrix} A &amp; v \\ v^T &amp; 0 \end{pmatrix} = \text{det}\begin{pmatrix} A &amp; \sum_{i=1}^{n} v_i e_i \\ \sum_{i=1}^{n} v_i e_i^T &amp; 0 \end{pmatrix} = \sum_{i=1}^{n}\sum_{j=1}^{n}v_i v_j \operatorname{det}\begin{pmatrix} A &amp; e_i \\ e_j^T &amp; 0 \end{pmatrix}.$</p> <p>Expanding $\operatorname{det}\begin{pmatrix} A &amp; e_i \\ e_j^T &amp; 0 \end{pmatrix}$ along the last row and then the last column it is easy to see $\operatorname{det}\begin{pmatrix} A &amp; e_i \\ e_j^T &amp; 0 \end{pmatrix} = (-1)^{n+1+i}\operatorname{det}\begin{pmatrix} A_{(-i)} \\ e_j^T \end{pmatrix} = {(-1)}^{n+1+i}(-1)^{n+j}M_{ij}=-C_{ij}$ where $A_{(-i)}$ is the submatrix of $A$ with the $i$th row deleted, $M_{ij}$ is the determinant of the submatrix of $A$ obtained by deleting the $i$th row and $j$th column and $C_{ij}$ is the $i,j$ th cofactor of $A$. Let $C = (C_{ij}) = \operatorname{adj}(A)^T$ then $\operatorname{det}\begin{pmatrix} A &amp; v \\ v^T &amp;x \end{pmatrix} = x \operatorname{det}(A) - v^T C v = x \operatorname{det}(A) - v^T C^T v = x \operatorname{det}(A) - v^T \operatorname{adj}(A) v.$ </p> <p>The above formula is obviously a polynomial in the entries of $A,v$ and $x$.</p>
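Here is a self-contained numerical check of the final formula $x\operatorname{det}(A)-v^T\operatorname{adj}(A)v$ (a Python sketch with exact rational arithmetic; the Leibniz-formula determinant and cofactor adjugate below are for illustration and only suitable for small matrices).

```python
from fractions import Fraction
from itertools import permutations

def det(M):
    """Leibniz-formula determinant (fine for the small matrices used here)."""
    n = len(M)
    total = Fraction(0)
    for perm in permutations(range(n)):
        sign = 1
        for i in range(n):            # sign via inversion count
            for j in range(i + 1, n):
                if perm[i] > perm[j]:
                    sign = -sign
        prod = Fraction(1)
        for i in range(n):
            prod *= M[i][perm[i]]
        total += sign * prod
    return total

def adj(M):
    """Adjugate: adj(M)[i][j] is the cofactor C_{ji}."""
    n = len(M)
    def minor(r, c):
        return [[M[i][j] for j in range(n) if j != c] for i in range(n) if i != r]
    return [[(-1) ** (i + j) * det(minor(j, i)) for j in range(n)] for i in range(n)]

# check det([[A, v], [v^T, x]]) == x*det(A) - v^T adj(A) v on a small example
A = [[Fraction(e) for e in row] for row in [[2, 1, 0], [1, 3, 1], [0, 1, 2]]]
v = [Fraction(1), Fraction(2), Fraction(-1)]
x = Fraction(5)

M = [row + [vi] for row, vi in zip(A, v)] + [v + [x]]
adjA = adj(A)
quad = sum(v[i] * adjA[i][j] * v[j] for i in range(3) for j in range(3))
assert det(M) == x * det(A) - quad
```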
680,660
<p>Recently I came across this question:</p> <p>Given a random permutation of the integers 1, 2, 3, …, n with a discrete, uniform distribution, find the expected number of local maxima. (A number is a local maximum if it is greater than the numbers before and after it.) For example, if n=4 and our permutation was 1, 4, 2, 3, then the number of local maxima would be 2 (both 4 and 3 are maxima).<br> I know the answer is (n+1)/3. I wanted to know what the answer will be when we have to consider the local minima as well, i.e. finding the expected number of local maxima and local minima together.</p>
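Here is a quick exact brute-force check of the stated $(n+1)/3$, which also suggests the answer $(2n+2)/3$ once minima are counted too (a Python sketch; endpoints count as extrema, matching the $1, 4, 2, 3$ example).

```python
from fractions import Fraction
from itertools import permutations
from math import factorial

def expected_extrema(n):
    """Exact expectations over all permutations of 1..n (endpoints count,
    as in the 1,4,2,3 example where both 4 and 3 are maxima)."""
    total_max = total_ext = 0
    for p in permutations(range(1, n + 1)):
        for i in range(n):
            bigger_left = i == 0 or p[i] > p[i - 1]
            bigger_right = i == n - 1 or p[i] > p[i + 1]
            smaller_left = i == 0 or p[i] < p[i - 1]
            smaller_right = i == n - 1 or p[i] < p[i + 1]
            total_max += bigger_left and bigger_right
            total_ext += (bigger_left and bigger_right) or (smaller_left and smaller_right)
    return Fraction(total_max, factorial(n)), Fraction(total_ext, factorial(n))

for n in range(2, 7):
    e_max, e_ext = expected_extrema(n)
    assert e_max == Fraction(n + 1, 3)       # matches the stated (n+1)/3
    assert e_ext == Fraction(2 * n + 2, 3)   # brute force suggests (2n+2)/3
```

The $(2n+2)/3$ value is consistent with linearity of expectation: each interior position is an extremum with probability $2/3$ and each endpoint with probability $1$.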
Tom M
131,247
<p>Here is a numerical method that can be used to solve the problem for small K.</p> <p>1) Brute force solve the problem for small x (say x = 3 to 12)<br> 2) Find a common denominator for your solutions (for K = 2, the common denominator is 90)<br> 3) Put each solution in terms of the common denominator<br> 4) Use the method of common differences to find the degree of the polynomial $(ax^n + bx^{n-1} + cx^{n-2} + \ldots)$<br> <a href="http://www.purplemath.com/modules/nextnumb.htm" rel="nofollow noreferrer">http://www.purplemath.com/modules/nextnumb.htm</a><br> 5) Using your solutions, set up a matrix representing the system of linear equations and then solve it, which will give you your values for $a, b, c$, ...<br> 6) Then use those values in your equation.<br> 7) Simplify as desired </p> <p>For $K = 2$, it simplifies to $E[X^2] = (40 X^2 - 144 X + 131)/90$<br> For $K = 3$, it simplifies to $E[X^3] = (280 X^3 -1344 X^2 + 2063 X - 1038)/945 $ </p>
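Steps 4)–6) amount to exact polynomial interpolation. Here is a sketch in Python with rational arithmetic (Lagrange interpolation in place of an explicit matrix solve), recovering the $K=2$ polynomial above from a few of its own values.

```python
from fractions import Fraction

def interpolate(points):
    """Exact Lagrange interpolation: returns the coefficients (constant term
    first) of the unique polynomial through the given (x, y) pairs."""
    n = len(points)
    coeffs = [Fraction(0)] * n
    for i, (xi, yi) in enumerate(points):
        basis = [Fraction(1)]   # running product of (x - xj), coefficient-wise
        denom = Fraction(1)
        for j, (xj, _) in enumerate(points):
            if j == i:
                continue
            new = [Fraction(0)] * (len(basis) + 1)
            for k, bk in enumerate(basis):
                new[k + 1] += bk       # the x * basis part
                new[k] -= xj * bk      # the -xj * basis part
            basis = new
            denom *= xi - xj
        for k in range(n):
            coeffs[k] += yi * basis[k] / denom
    return coeffs

# sample the K = 2 result above at three points and recover its coefficients
p = lambda t: Fraction(40 * t * t - 144 * t + 131, 90)
pts = [(Fraction(t), p(t)) for t in range(3, 6)]
assert interpolate(pts) == [Fraction(131, 90), Fraction(-8, 5), Fraction(4, 9)]
```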
145,156
<p>I need to fetch some journal issues, for example like this: www.sciencedirect.com/science/journal/00963003/124/1</p> <p>This part stays constant: </p> <pre><code>string = "https://www.sciencedirect.com/science/journal/00963003" </code></pre> <p>I read <a href="http://mathematica.stackexchange.com/questions/3387/fetching-data-from-html-source">here</a> that I can do something like <code>src=Import["https://www.sciencedirect.com/science/journal/00963003/124/1","HTML"]</code></p> <p>But this is only for one issue. In this example there are as many as 310 volumes, and each volume may have none, 1, 2, 2-3, 3 or 4 issues. Is it possible to write a script in Mathematica to fetch all HTML pages of issues at once, or at least with a minimal number of fetching trials? Sorry, I cannot include more than 2 links, so some links are without https://</p>
David G. Stork
9,735
<p>If the sequences can be enumerated in a Table, then try this:</p> <pre><code>Table[
 Import[
  "http://www.sciencedirect.com/science/journal/00963003/124/" &lt;&gt;
   ToString[mycounter], "HTML"],
 {mycounter, 1, 3}]
</code></pre> <p>or, if you have a list of path fragments, iterate over that list instead:</p> <pre><code>... {mylist, {"myjournal1/1", "myjournal1/2", "myjournal1/May1999", "myjournal2/August2011"}}
</code></pre> <p>etc.</p>
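The same batching idea in Python, as a sketch (the helper name and the per-volume issue lists are made up for illustration; the actual download step is left commented out since it needs network access):

```python
# The constant prefix from the question; building the URL list is kept
# separate from the (network-bound) download step.
BASE = "https://www.sciencedirect.com/science/journal/00963003"

def issue_urls(volume, issues):
    """URLs for one volume; `issues` lists the issue numbers it has
    (per the question, this varies from volume to volume)."""
    if not issues:  # a volume with no separate issues
        return [f"{BASE}/{volume}"]
    return [f"{BASE}/{volume}/{issue}" for issue in issues]

urls = issue_urls(124, [1])
assert urls == ["https://www.sciencedirect.com/science/journal/00963003/124/1"]

# The actual fetch could then be done one URL at a time, e.g.:
# import urllib.request
# html = urllib.request.urlopen(urls[0]).read()
```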
3,315,558
<p>I am having a very frustrating time with the back of the book, which says my answer is way off, but to me everything looks fine:</p> <p><span class="math-container">\begin{align*} (xy)y'&amp;= x^2+3y^2\\ y' &amp;= \frac{x^2}{xy} + \frac{3y^2}{xy}\\ y' &amp;= \frac{x}{y} + \frac{3y}{x}\\ y' &amp;= \frac{1}{v} + 3v\\ y' &amp;= \frac{1 + 3v^2}{v}\\ v+\frac{dv}{dx}x &amp;= \frac{1+3v^2}{v}\\ \frac{dv}{dx}x&amp;= \frac{1+3v^2-v^2}{v}\\ \frac{dv}{dx}x &amp;= \frac{1+2v^2}{v}\\ \int \frac{v}{2v^2+1}\,dv &amp;= \int\frac{1}{x}\,dx\\ u &amp;= 2v^2+1\\ du &amp;= 4v\,dv\\ dv &amp;= \frac{1}{4v}\,du\\ \int \frac{v}{u} \frac{1}{4v}\,du &amp;= \int \frac{1}{x} \,dx\\ \int \frac{1}{4u}\,du &amp;= \ln|x| + c\\ \frac{1}{4} \int \frac{1}{u}\,du &amp;= \ln|x| +c\\ \frac{1}{4} \ln|2v^2 + 1| &amp;= \ln |x| + c\\ \ln|2v^2 + 1|&amp;= 4\ln|x|+c\\ 2v^2 + 1 &amp;= e^{4\ln|x|}e^c\\ 2v^2 + 1 &amp;= Cx^4\\ 2v^2 &amp;= Cx^4\\ v^2 &amp;= Cx^4\\ \frac{y}{x} &amp;= \sqrt{Cx^4}\\ y &amp;= x\sqrt{Cx^4} \end{align*}</span></p> <p>However the book says the answer is <span class="math-container">$x^2 + 2y^2 = Cx^6.$</span> I am fairly sure there are no mistakes.</p>
Axion004
258,202
<p>As you set </p> <p><span class="math-container">$$v=\frac{y}{x}$$</span></p> <p>you need to substitute it back into </p> <p><span class="math-container">$$2v^2+1=Cx^4$$</span> </p> <p>which forms</p> <p><span class="math-container">$$2\Big(\frac{y}{x}\Big)^2 + 1 = Cx^4 \implies x^2+2y^2 = Cx^6$$</span></p>
3,259,475
<p>I am looking for a nice slick way to show</p> <blockquote> <p><span class="math-container">$$\int^1_0 \frac{\tanh^{-1} x}{x} \ln [(1 + x)^3 (1 - x)] \, dx = 0.$$</span></p> </blockquote> <p>So far I can only show the result using brute force as follows. Let <span class="math-container">$$I = \int^1_0 \frac{\tanh^{-1} x}{x} \ln [(1 + x)^3 (1 - x)] \, dx.$$</span> Since <span class="math-container">$$\tanh^{-1} x = \frac{1}{2} \ln \left (\frac{1 + x}{1 - x} \right ),$$</span> the above integral, after rearranging, can be rewritten as <span class="math-container">$$I = \frac{3}{2} \int^1_0 \frac{\ln^2 (1 + x)}{x} \, dx - \int^1_0 \frac{\ln (1 - x) \ln (1 + x)}{x} \, dx - \frac{1}{2} \int^1_0 \frac{\ln^2 (1 - x)}{x} \, dx.\tag1$$</span> Each of the above three integrals can be found. The results are: <span class="math-container">$$\int^1_0 \frac{\ln^2 (1 + x)}{x} \, dx = \frac{1}{4} \zeta (3).$$</span> For a proof, see <a href="https://math.stackexchange.com/questions/795867/evaluation-of-int-01-frac-log21xx-dx?noredirect=1&amp;lq=1">here</a> or <a href="https://math.stackexchange.com/questions/316745/how-to-evaluate-int-01-frac-log21xx-mathrm-dx">here</a>. <span class="math-container">$$\int^1_0 \frac{\ln (1 - x) \ln (1 + x)}{x} \, dx = -\frac{5}{8} \zeta (3).$$</span> For a proof, see <a href="https://math.stackexchange.com/questions/3085699/evaluate-int-01-dfrac-ln-1-x-ln-1-xx-dx">here</a>. And <span class="math-container">$$\int^1_0 \frac{\ln^2 (1 - x)}{x} \, dx = 2 \zeta (3).$$</span> For a proof of this last one, see <a href="https://math.stackexchange.com/questions/1882695/find-int-01-frac-ln21-xx-dx">here</a>.</p> <p>Thus (1) becomes <span class="math-container">$$\int^1_0 \frac{\tanh^{-1} x}{x} \ln [(1 + x)^3 (1 - x)] \, dx = \frac{3}{8 } \zeta (3) + \frac{5}{8} \zeta (3) - \zeta (3) = 0,$$</span> as expected. </p>
FDP
186,817
<p><span class="math-container">\begin{align}J=\int^1_0 \frac{\tanh^{-1} x}{x} \ln [(1 + x)^3 (1 - x)] \, dx\end{align}</span> Perform the change of variable <span class="math-container">$y=\dfrac{1-x}{1+x}$</span>, <span class="math-container">\begin{align}J&amp;=\int^1_0\frac{\ln\left(\frac{16x}{(1+x)^4}\right)\ln x}{1-x^2}\, dx\\ &amp;=4\ln 2\int_0^1\frac{\ln x}{1-x^2}\,dx+\int_0^1\frac{\ln^2 x}{1-x^2}\,dx-4\int_0^1\frac{\ln x\ln(1+x)}{1-x^2}\,dx\\ \end{align}</span></p> <p>Define on <span class="math-container">$[0;1]$</span> the function <span class="math-container">$R$</span> by, <span class="math-container">\begin{align}R(x)&amp;=\int_0^x \frac{\ln t}{1-t^2}\,dt\\ &amp;=\int_0^1 \frac{x\ln(tx)}{1-t^2x^2}\,dt \end{align}</span> Therefore, <span class="math-container">\begin{align}K&amp;=\int_0^1\frac{\ln x\ln(1+x)}{1-x^2}\,dx\\ &amp;=\Big[R(x)\ln(1+x)\Big]_0^1-\int_0^1\int_0^1 \frac{x\ln(tx)}{(1-t^2x^2)(1+x)}\,dt\,dx\\ &amp;=\int_0^1 \frac{\ln 2\ln t}{1-t^2}\,dt-\int_0^1\left(\int_0^1 \frac{x\ln t}{(1-t^2x^2)(1+x)}\,dx\right)\,dt-\int_0^1\left(\int_0^1 \frac{x\ln x}{(1-t^2x^2)(1+x)}\,dt\right)\,dx\\ &amp;=\ln 2\int_0^1 \frac{\ln t}{1-t^2}\,dt-\\ &amp;\frac{1}{2}\left(\int_0^1 \frac{\ln t\ln(1+t)}{1-t}\,dt-\int_0^1 \frac{\ln t\ln\left(\frac{1-t}{1+t}\right)}{t}\,dt-\int_0^1 \frac{2\ln 2\ln t}{1-t^2}\,dt+\int_0^1 \frac{\ln(1-t)\ln t}{1+t}\,dt\right)-\\ &amp;\frac{1}{2}\left(\int_0^1 \frac{\ln x\ln(1+x)}{1+x}\,dx-\int_0^1 \frac{\ln x\ln(1-x)}{1+x}\,dx\right)\\ &amp;=2\ln 2\int_0^1 \frac{\ln t}{1-t^2}\,dt-K+\frac{1}{2}\int_0^1 \frac{\ln t\ln\left(\frac{1-t}{1+t}\right)}{t}\,dt \end{align}</span> Therefore, <span class="math-container">\begin{align}K&amp;=\ln 2\int_0^1 \frac{\ln t}{1-t^2}\,dt+\frac{1}{4}\int_0^1 \frac{\ln t\ln\left(\frac{1-t}{1+t}\right)}{t}\,dt\\ &amp;=\ln 2\int_0^1 \frac{\ln t}{1-t^2}\,dt+\frac{1}{8}\left[\ln^2 t\ln\left(\frac{1-t}{1+t}\right)\right]_0^1+\frac{1}{4}\int_0^1 \frac{\ln^2 t}{1-t^2}\,dt\\ &amp;=\ln 2\int_0^1 \frac{\ln 
t}{1-t^2}\,dt+\frac{1}{4}\int_0^1 \frac{\ln^2 t}{1-t^2}\,dt\\ \end{align}</span> Therefore, <span class="math-container">\begin{align}\boxed{J=0}\end{align}</span> <strong>NB:</strong></p> <p>It's easy to deduce that, <span class="math-container">\begin{align}\int_0^1\frac{\ln x\ln(1+x)}{1-x^2}\,dx=\frac{7}{16}\zeta(3)-\frac{1}{8}\pi^2\ln 2\end{align}</span></p>
1,157,528
<p>How to find the corresponding probability density function for the following distribution function?</p> <p>$$F (x)= \left\{ \begin{array}{ll} 0 &amp; \text{if } x&lt;0 \\ x^2 &amp; \text{if } 0\leq x\leq \frac{1}{2} \\ \frac{1}{25} \left(25-3 (3-x)^2\right) &amp; \text{if }\frac{1}{2}&lt;x\leq 3 \\ 1 &amp; \text{if } x\geq 3 \end{array} \right.$$</p>
SA-255525
135,922
<p>Its pdf is $f(x)=2x$ if $0\leq x\leq \frac{1}{2}$, $f(x)=\frac{6}{25}(3-x)$ if $\frac{1}{2}&lt;x\leq 3$, and $f(x)=0$ otherwise: just differentiate $F$ on each piece.</p>
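A quick exact sanity check (a Python sketch; note the middle branch of $F$ must read $\frac{1}{25}\left(25-3(3-x)^2\right)$ for $F$ to be a continuous CDF, and that branch is exactly what differentiates to $\frac{6}{25}(3-x)$):

```python
from fractions import Fraction

def F(x):
    """The CDF, with the middle branch taken as (1/25)(25 - 3(3-x)^2)."""
    if x < 0:
        return Fraction(0)
    if x <= Fraction(1, 2):
        return x * x
    if x <= 3:
        return Fraction(1, 25) * (25 - 3 * (3 - x) ** 2)
    return Fraction(1)

half = Fraction(1, 2)

# continuity at the breakpoints
assert half ** 2 == Fraction(1, 25) * (25 - 3 * (3 - half) ** 2) == Fraction(1, 4)
assert F(3) == 1

# by the fundamental theorem of calculus, each pdf piece carries F's increment,
# and the total mass is 1
mass1 = F(half) - F(0)   # integral of 2x over [0, 1/2]
mass2 = F(3) - F(half)   # integral of (6/25)(3 - x) over (1/2, 3]
assert mass1 == Fraction(1, 4)
assert mass2 == Fraction(3, 4)
assert mass1 + mass2 == 1
```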
153,473
<p>How does one prove that if a $X$ is a Banach space and $x^*$, a continuous linear functional from $X$ to the underlying field, then $x^*$ attains its norm for some $x$ in $X$ and $\Vert x\Vert = 1$?</p> <p>My teacher gave us a hint that we should use the statement that if $X$ is a reflexive Banach space, the unit ball is weak sequentially compact, but I am not sure as to how to construct a sequence in this ball which does not converge.</p> <p>Thank you.</p>
Davide Giraudo
9,849
<p>We can use a corollary of the Hahn-Banach theorem, applied to the dual space $X'$ of $X$. We have $$\lVert x'\rVert=\max_{y\in X'',\lVert y\rVert=1}|y(x')|$$ (the maximum is reached for a $y_0$ that can be constructed thanks to the Hahn-Banach theorem). For this $y_0\in X''$, since $X$ is reflexive we can find $u\in X$ such that $J(u)=y_0$, where $J\colon X\to X''$ is the canonical embedding. Hence by definition $J(u)(x')=x'(u)=y_0(x')$, and $u$ (or $-u$) is such that $|x'(u)|=\sup_{\lVert v\rVert=1}|x'(v)|$.</p> <hr> <p>Note that this doesn't follow the hint given by your teacher. Note also that the converse is true (if each continuous linear functional attains its norm, the Banach space is reflexive). It's a difficult result, due to James.</p>
2,859,276
<p>Given a measurable space $(X,\scr{A})$ where $\scr{A}$ is a sigma algebra on $X$, I would like to know if $f:X \to \mathbb{R}$ is measurable then $cf$ where $c \in \mathbb{R}$ is also measurable.</p> <p>A function $f:X \to \mathbb{R}$ is measurable if $[f &gt; \alpha] = \{x \in X \:{:}\: f(x) &gt; \alpha \} \in \mathscr{A}$ for all $\alpha \in \mathbb{R}$.</p> <p>If $c = 0$, then $[cf &gt; \alpha] = X$ for all $\alpha &lt; 0$ and $[cf &gt; \alpha] = \emptyset$ for all $\alpha \geq 0$, so $[cf &gt; \alpha] \in \scr{A}$.</p> <p>If $c &gt; 0$, then $[cf &gt; \alpha] = [f &gt; \frac{\alpha}{c}] \in \scr{A}$.</p> <p>My question is when $c &lt; 0$, my attempt is as follows.</p> <p>Since $[cf &gt; \alpha] = [f &lt; \frac{\alpha}{c}] = \{x \in X \:{:}\: f(x) &lt; \frac{\alpha}{c} \}$ and $[f &gt; \frac{\alpha}{c}] \in \scr{A}$, the complement of ${[f &gt; \frac{\alpha}{c}]} $ which is $[f \leq \frac{\alpha}{c}] \in \scr{A}$, but as $[f &lt; \frac{\alpha}{c}] \neq [f \leq \frac{\alpha}{c}]$, I am stuck in showing $[f &lt; \frac{\alpha}{c}] \in \scr{A}$.</p> <p>Any help will be greatly appreciated. </p>
José Carlos Santos
446,262
<p>Since $\mathcal A$ is a $\sigma$-algebra, asserting that$$\left\{x\in X\,\middle|\,f(x)&lt;\frac\alpha c\right\}\in\mathcal A$$is equivalent to asserting that its complement, which is$$\left\{x\in X\,\middle|\,f(x)\geqslant\frac\alpha c\right\},\tag1$$belongs to $\mathcal A$. And$$(1)=\bigcap_{n\in\mathbb N}\left\{x\in X\,\middle|\,f(x)&gt;\frac\alpha c-\frac1n\right\}$$Therefore, $(1)\in\mathcal A$.</p>
1,927,650
<p>In the diagram below, DE is a chord of the circle that goes through c,d and e. A is the center of the circle. The perpendicular line from centre A intersects DE at B and the circle at C, </p> <p>DE=100cm BC= 10cm AB=X</p> <p>I need to calculate the length of AB / x.</p> <p>Basically i need to calculate the radius of the circle without the diameter or circumference or anything. Do you have any suggestions as to how I would do this?</p> <p><a href="https://i.stack.imgur.com/eLUHR.png" rel="nofollow noreferrer">enter image description here</a></p>
Community
-1
<p>Extend $CA$ to cut the circle again at $F$. Then $BF=10+2x$ and by the intersecting chords property, $$10(10+2x)=50\times 50$$ Thus $x = 120$</p>
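A quick check of the arithmetic, together with the equivalent right-triangle route ($r = x + 10$ and $r^2 = x^2 + 50^2$ in triangle $ABD$), as a Python sketch:

```python
from fractions import Fraction

# Intersecting chords at B: CB * BF = DB * BE, i.e. 10 * (10 + 2x) = 50 * 50.
x = Fraction(50 * 50 - 10 * 10, 2 * 10)
assert x == 120

# cross-check via the right triangle ABD: r = x + 10 and r^2 = x^2 + 50^2
r = x + 10
assert r * r == x * x + 50 * 50   # 130^2 == 120^2 + 50^2
```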
319,288
<p>Let <span class="math-container">$\mathcal{M}$</span> be the field of meromorphic functions of one (complex) variable and <span class="math-container">$w = w(z)$</span> an analytic function satisfying a polynomial equation</p> <p><span class="math-container">$P(w; z) := w^n + a_{n-1}(z) w^{n-1} + \cdots + a_1(z) w + a_0(z) = 0$</span>,</p> <p>where <span class="math-container">$a_0(z), \ldots, a_{n-1}(z)$</span> are in <span class="math-container">$\mathcal{M}$</span> (actually, it suffices to consider the case where <span class="math-container">$a_j(z)$</span> is entire for <span class="math-container">$j = 0, \ldots, n-1$</span>).</p> <p>Suppose <span class="math-container">$w(z)$</span> has finitely many branch points.</p> <p>In the (hyperelliptic) case <span class="math-container">$n=2$</span> it is clear that <span class="math-container">$\mathcal{M}(w) = \mathcal{M}(\sqrt{Q})$</span>, where <span class="math-container">$Q(z)$</span> is a polynomial.</p> <p>Is it always true that, under the above assumptions, we have <span class="math-container">$\mathcal{M}(w) = \mathcal{M}(\beta)$</span>, where <span class="math-container">$\beta(z)$</span> is some algebraic function?</p>
Alexandre Eremenko
25,510
<p>First, as you noticed, it is enough to consider the case that the equation has the form <span class="math-container">$$w^n+a_{n-1}(z)w^{n-1}+\ldots+a_0(z)=0,$$</span> where the coefficients are entire. Then <span class="math-container">$w$</span> is holomorphic on its Riemann surface; let us call this Riemann surface <span class="math-container">$S$</span>. From your condition it follows that <span class="math-container">$S$</span> is a compact Riemann surface with finitely many punctures. So it can be represented by an algebraic curve <span class="math-container">$K$</span> in <span class="math-container">$C^2$</span> given as the zero set of a polynomial <span class="math-container">$F(z,u)=0$</span>. Suppose that this curve is non-singular. Let <span class="math-container">$m=\deg_u F$</span>. We have an analytic function <span class="math-container">$w$</span> on <span class="math-container">$K$</span>, so it can be extended to the whole <span class="math-container">$C^2$</span>; so <span class="math-container">$w$</span> is the restriction to <span class="math-container">$K$</span> of an entire function <span class="math-container">$G(z,u)$</span>. Now on <span class="math-container">$K$</span> we have <span class="math-container">$$G(z,u)=\sum_{k,j}a_{k,j}z^ku^j=\sum_{j=0}^{m-1} u^j\sum_{k,i=0}^\infty a_{k,j+im}z^k=\sum_{j=0}^{m-1}b_j(z)u^j,$$</span> where the rearrangement of the infinite sum is legitimate because of the absolute convergence. This proves your statement, as <span class="math-container">$u$</span> is algebraic over <span class="math-container">$C(z)$</span>.</p>
In this case we realize <span class="math-container">$S$</span> as a non-singular curve <span class="math-container">$K$</span> in <span class="math-container">$C^n$</span> (I suppose one can take <span class="math-container">$n=3$</span> but this is irrelevant.) Let the coordinates in <span class="math-container">$C^n$</span> be <span class="math-container">$(z,u_1,\ldots,u_{n-1})$</span>. Then <span class="math-container">$w$</span> can be represented by an entire function <span class="math-container">$G(z,u_1,\ldots,u_n)$</span> and the restrictions on <span class="math-container">$K$</span> of the coordinate functions <span class="math-container">$u_1,\ldots,u_{n-1}$</span> are algebraic functions of <span class="math-container">$z$</span>, and by the theorem on the primitive element, they are all rational functions of <span class="math-container">$z$</span> and some <span class="math-container">$\beta$</span>, where <span class="math-container">$\beta$</span> is an algebraic function of <span class="math-container">$z$</span>. Then the same argument works. </p>
2,912,043
<p>How do I approach this problem? My book gives answer as $[\frac{3-\sqrt{5}}{2},\frac{3+\sqrt{5}}{2}]$. I tried forming an equation in $y$ and putting discriminant greater than or equal to zero but it didn't work. Would someone please help me?</p> <p>I get $x^2 (y-1) + 2x (y+1) + (5y-5) =0$ and discriminant gives $2y^2 - y + 2 \leq 0$, which has complex roots.</p>
Kusma
514,933
<p>You have that $\sin(t)=-\sin (t-\pi)$. Apply to the integral over $(\pi,2\pi)$ and change variables $s=t-\pi$. </p>
2,661,113
<p>Consider $n$ player numbered $1,2,\ldots,n$. If player $i$ fights against $j$ then $i$ wins with probability $i/(i+j)$. There are no ties.</p> <p>A player $i_1$ is extracted at random. Then, a second different player $i_2$ is extracted at random. They fight against each other. </p> <p>Then, we extract another player $i_3$ ($\neq i_1,i_2$). The winner of the latter round fights against $i_3$.The fights continues until all players have been extracted, hence $n-1$ fights in total.</p> <p>Now, given $i\le j$, I think that player $i$ wins the game with probability <em>at most</em> $i/j$ times the probability that player $j$ wins the game. (I can prove it manually for $n\le 4$.) </p> <blockquote> <p><strong>Question.</strong> Is it possible to prove it for all $n$?</p> </blockquote> <p>Ps. Another property of the same game has been asked <a href="https://math.stackexchange.com/questions/2656210/a-game-with-n-players">here</a>.</p> <p>Ps2. Is it a "known game" ? </p>
mjqxxxx
5,546
<p>For each subset $S\subseteq \{1,2,\ldots, n\}$, you can calculate the probability that each $i\in S$ is the winner of a random tournament of $S$. You will do this recursively. For singleton subsets, clearly the sole player is the winner. (Hooray!) For a set of size $m &gt; 1$, iterate over the $m$ players, considering each in turn to be the <em>last</em> player (which will happen with probability $1/m$). In each case, the first $m-1$ players play a random tournament, which has $m-1$ possible winners whose probabilities we already computed and cached; and in each of <em>these</em> cases, either the last player ($i$) or the winner-so-far ($j$) is the final winner, with probabilities $i/(i+j)$ and $j/(i+j)$. So you're iterating over only $m(m-1)$ cases to get the random-tournament-winner probabilities for a particular set of size $m$, once you've done it for sets of size $m-1$. At worst it's taking $n(n-1)2^n$ steps to get the random-tournament probabilities for $\{1,2,\ldots, n\}$. This is very efficient; the Python code below runs for $n=20$ in less than a minute.</p> <pre><code>def winners(S, cache={}):
    # S will be a sorted tuple
    if S in cache:
        return cache[S]
    m = len(S)
    ret = {}
    for ix in range(m):
        i = S[ix]
        T = S[:ix] + S[(ix+1):]
        if not T:
            ret[i] = 1.0
        else:
            for (j, pj) in winners(T, cache).items():
                ret[i] = ret.get(i, 0.0) + (1.0 * pj * i) / (m * (i+j))
                ret[j] = ret.get(j, 0.0) + (1.0 * pj * j) / (m * (i+j))
    cache[S] = ret
    return ret
</code></pre> <p>The results are interesting. Not only is the hypothesis true, at least that far out, but the inequality also becomes tight: the probability that $i$ wins appears to converge uniformly to $2i / (n(n+1))$ with increasing $n$. By $n=20$, each probability is within about $6\%$ of this.</p>
1,730,569
<p>All spaces are assumed Hausdorff. We call a topological space <em>compact</em> if every open cover has a finite subcover. We call it <em>Lindelöf</em> if every open cover has a countable subcover, and <em>hereditarily Lindelöf</em> if moreover every subspace is Lindelöf. </p> <p>It is obvious that every compact space is Lindelöf. We have that every closed subset of a compact space is compact, i.e. it is Lindelöf and 'so much more'. This leads to the natural question, is every open set Lindelöf, too? (this would suffice to make the space hereditarily Lindelöf; see for example <a href="https://math.stackexchange.com/questions/494918/showing-that-if-every-open-subspace-is-lindel%C3%B6f-then-the-space-is-hereditarily">here</a>). </p>
Simon_Peterson
271,607
<p>Unfortunately, this is not true. Take, for example, any uncountable set $X$ with the discrete topology (so that singletons are open). It is obviously not Lindelöf.</p> <p>Now, 'create' the space $Y$ by adding a point $*$ to $X$ with neighbourhoods of the form $U=\{*\}\cup C$, where $C$ is a co-finite subset of $X$. To be clear, $Y=X\cup\{*\}$, with every singleton $\{x\}$, $x\in X$, being open, together with the above-mentioned neighbourhoods of $*$.</p> <p>Then $Y$ is obviously compact, and $X$ is obviously an open subset of $Y$, but $X$, being uncountable, is pretty far from being Lindelöf. To be specific, the open cover $\mathcal{U}=\{\{x\}:x\in X\}$ has no countable subcover.</p>
864,816
<p>I was looking for examples of real valued functions $f$ such that $f(x)+f(-x)=f(x)f(-x)$. Preferably, I'd like them to be continuous, differentiable, etc.</p> <p>Of course, there are the constant functions $f(x)=0$ and $f(x)=2$. I also showed that $1+b^x$, where $b&gt;0$, is another solution. Are there any other nice ones?</p>
Calvin Lin
54,563
<p>Hurkyl's solution is nice, but the change of variables obscures the inherent closed orbit property, which is the crucial part of the problem.</p> <p>Observe that the functional equation only involves $x$ and $ -x$. In particular, if $h(x) = -x$, then the orbit of $x \neq 0$ is $\{x, -x\}$ and the orbit of $0$ is $\{0\}$. As such, the function is uniquely defined by the non-negative part.</p> <p>For $x = 0$, we have $2 f(0) = f(0)^2 $, which means $ f(0) = 0$ or $2$.<br> For $x \neq 0$, we have $ f(-x) = \frac{ f(x) } { f(x) - 1}$, if $ f(x) \neq 1$.<br> Note that if $ f(x) = 1$, then there is no possible value for $ f(-x)$.</p> <p>This is a necessary and sufficient condition.</p>
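A quick numerical check of the $1+b^x$ family from the question against the forced value $f(-x)=f(x)/(f(x)-1)$ derived here (a Python sketch):

```python
import math

def f(x, b=2.0):
    return 1.0 + b ** x          # the 1 + b^x family from the question

def partner(y):
    """Given y = f(x) with y != 1, the value forced at -x: y / (y - 1)."""
    return y / (y - 1.0)

for x in [-2.0, -0.5, 0.0, 1.0, 3.0]:
    lhs = f(x) + f(-x)
    rhs = f(x) * f(-x)
    assert math.isclose(lhs, rhs)            # the functional equation holds
    assert math.isclose(f(-x), partner(f(x)))  # and f(-x) is the forced value
```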
2,993,299
<p>If anyone can help me with how to go about solving this kind of equation, I would really appreciate it. :-)</p> <p><span class="math-container">$$\sqrt{36-2x^2} = 4$$</span></p> <p>Solve for $x$.</p>
egreg
62,967
<p>The idea is to write <span class="math-container">$$ \frac{x^2+1}{x-1}=\frac{x^2}{x}\frac{1+\dfrac{1}{x^2}}{1-\dfrac{1}{x}} $$</span> The limit of <span class="math-container">$$ \frac{1+\dfrac{1}{x^2}}{1-\dfrac{1}{x}} $$</span> is <span class="math-container">$1$</span> by standard rules on limits. Hence there is <span class="math-container">$M&gt;0$</span> such that, for <span class="math-container">$x&gt;M$</span>, <span class="math-container">$$ \frac{1+\dfrac{1}{x^2}}{1-\dfrac{1}{x}}&gt;\frac{1}{2} $$</span> Hence, for <span class="math-container">$x&gt;M$</span>, <span class="math-container">$$ \frac{x^2+1}{x-1}&gt;\frac{x}{2} $$</span> Thus, given <span class="math-container">$K&gt;0$</span>, you can say that, for <span class="math-container">$x&gt;\max\{M,2K\}$</span>, you have <span class="math-container">$$ \frac{x^2+1}{x-1}&gt;K $$</span> The same reasoning, with obvious changes, applies to every function of the form <span class="math-container">$$ \frac{a_nx^n+a_{n-1}x^{n-1}+\dots+a_1x+a_0}{b_mx^m+b_{m-1}x^{m-1}+\dots+b_1x+b_0} $$</span> with <span class="math-container">$a_n\ne0$</span>, <span class="math-container">$b_m\ne0$</span> and <span class="math-container">$n&gt;m$</span>.</p> <p>If, instead, <span class="math-container">$n&lt;m$</span>, the limit is <span class="math-container">$0$</span>. With <span class="math-container">$n=m$</span>, the limit is <span class="math-container">$a_n/b_n$</span>.</p>
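For a concrete sanity check, the quantified bound is easy to test numerically (a Python sketch; here $M=2$ already works, since the correction factor exceeds $1/2$ for every $x > 1$):

```python
def f(x):
    return (x * x + 1) / (x - 1)

def factor(x):
    """The correction factor (1 + 1/x^2) / (1 - 1/x), which tends to 1."""
    return (1 + 1 / x**2) / (1 - 1 / x)

for x in [2.0, 5.0, 10.0, 1e3, 1e6]:
    assert factor(x) > 0.5       # hence f(x) = x * factor(x) > x/2
    assert f(x) > x / 2

# given K, any x > max(M, 2K) forces f(x) > K  (taking M = 2)
K = 1000.0
x = max(2.0, 2 * K) + 1
assert f(x) > K
```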
3,629,379
<p>I'm trying to understand the following proof that a space <span class="math-container">$X$</span> is compact if and only if every net has a cluster point. I have a specific confusion with how cluster points relate to closure which is expanded on after the proof. </p> <p>A cluster point is defined as </p> <p><a href="https://i.stack.imgur.com/IEQUo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IEQUo.png" alt="enter image description here"></a></p> <p>Here is the proof of the implication that if every net has a cluster point then <span class="math-container">$X$</span> is compact. </p> <p><a href="https://i.stack.imgur.com/6zdRd.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6zdRd.png" alt="enter image description here"></a></p> <blockquote> <p>How does closure relate to cluster points?</p> </blockquote> <p>I do not understand why <span class="math-container">$x$</span> must be in <span class="math-container">$\overline{X \backslash U_\alpha}$</span>. As far as I understand the closure of a space is the space with all its limit points. But a cluster point need not be a limit point? Is there some other reason <span class="math-container">$x$</span> must be in the closure?</p>
Community
-1
<p>Your claim is true as soon as <span class="math-container">$|G|\ge 3$</span>, as a corollary of <a href="https://math.stackexchange.com/a/3063228/750041">this general result</a> by taking <span class="math-container">$H=\{e\}$</span>.</p>
2,643,486
<p>I have a problem in my maths book which says </p> <blockquote> <p>Find the arithmetic sequence in which $T_8 = 11$ and $T_{10}$ is the additive inverse of $T_{17}$</p> </blockquote> <p>I don't have a first term or common difference to solve it, so I managed to make two equations from which to find the first term and common difference. Here they are.</p> <p>first equation $a + 7d = 11$</p> <p>second equation since $T_{10} + T_{17} = 0$</p> <p>therefore $a+9d + a+16d = 0 ~~\Rightarrow~~ 2a + 25d = 0$</p> <p>So what I did is subtract the first equation from the second one to form this equation in two unknowns: $a – 18d = 11$</p> <p>This is what I came up with and I can't solve the equations. Any help?</p>
hmakholm left over Monica
14,366
<p>You seem to be overlooking the last three words of the question. The question doesn't want any old equivalence relation on $A$ you can come up with -- it want the <em>particular</em> equivalence relation <strong>"induced by $\pi_1$</strong>".</p> <p>Perhaps you have missed that "the equivalence relation that such-and-such partition induces" has a particular definition? The exercise is asking you to apply that definition to find which one of the many possible equivalence relations on $A$ it is speaking about.</p> <p>There are various equivalent ways to define this concept -- we can either say</p> <blockquote> <p>We say that the equivalence relation $R$ is <strong>induced by</strong> the partition $\pi$ if the elements of $\pi$ are exactly the equivalence classes under $R$.</p> </blockquote> <p>or</p> <blockquote> <p>Given a partition $\pi$, the equivalence relation <strong>induced by</strong> this partition is the relation $R_\pi$ defined by $$ x\mathrel{R_\pi}y \iff \exists P\in\pi: \{x,y\}\subseteq P $$</p> </blockquote>
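The second definition translates directly into code. Here is a Python sketch with a made-up partition (the exercise's $\pi_1$ is not reproduced in the question, so the blocks below are purely illustrative):

```python
def relation_from_partition(partition):
    """The equivalence relation R_pi induced by a partition: x R y iff some
    block contains both.  Returned as a set of ordered pairs."""
    return {(x, y) for block in partition for x in block for y in block}

# hypothetical example partition of A = {1, ..., 5}
pi = [{1, 2}, {3}, {4, 5}]
R = relation_from_partition(pi)

assert (1, 2) in R and (2, 1) in R   # symmetric
assert (3, 3) in R                   # reflexive
assert (1, 3) not in R               # elements of different blocks are unrelated

# the equivalence classes recover exactly the blocks of the partition
classes = {frozenset(y for (x, y) in R if x == a) for a in {1, 2, 3, 4, 5}}
assert classes == {frozenset(b) for b in pi}
```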
2,819,652
<p>I know that my function $f$ must satisfy the following condition for $x \geq 0:$ $$\frac{d\left(xf(x)-x\right)}{dx} \geq 0.$$ What can I say about $f?$ I am curious about its possible sign or variations with respect to $x$.</p> <p>I have investigated Grönwall's inequality to find an upper bound for $f$ but it seems useless in my case.</p>
Dave
334,366
<p>Since $\sqrt[3] 2$ is a root of $f(x)=x^3-2$, the polynomial $x-\sqrt[3] 2$ divides $f(x)$. So $f(x)=(x-\sqrt[3] 2)(x^2+ax+b)$ for some real numbers $a,b$. If you perform the long division you can obtain $a$ and $b$, and then solve the quadratic equation $x^2+ax+b=0$ using the quadratic formula, which gives the two complex roots.</p> <p>Here is a more general solution:</p> <blockquote class="spoiler"> <p> If $f(x)=x^n-a$ then the roots of $f(x)$ are $\sqrt[n] a,~ \alpha\sqrt[n] a, \ldots, ~\alpha^{n-1}\sqrt[n] a$ where $\alpha$ is a primitive $n$-th root of unity (e.g. $e^{2\pi i/n}$).</p> </blockquote>
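A numerical illustration of the general statement in the spoiler (a Python sketch): the roots of $x^3-2$ are $\sqrt[3]2\,\alpha^k$ for $\alpha=e^{2\pi i/3}$, one real root plus a conjugate pair.

```python
import cmath

def roots_of(a, n):
    """All n complex roots of x^n = a (a > 0 real), via a primitive n-th root of unity."""
    alpha = cmath.exp(2j * cmath.pi / n)
    r = a ** (1.0 / n)
    return [r * alpha**k for k in range(n)]

roots = roots_of(2.0, 3)
for z in roots:
    assert abs(z**3 - 2.0) < 1e-9      # each is a root of x^3 - 2

# one real root and a conjugate pair, as the factorisation predicts
real_roots = [z for z in roots if abs(z.imag) < 1e-9]
assert len(real_roots) == 1
```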
2,596,347
<p>Suppose $X$ is compact and Hausdorff and that $f:X \to Y$ is continuous, closed, and surjective. How can I show that $Y$ is Hausdorff?</p>
Kavi Rama Murthy
142,385
<p>Let $y_1 \neq y_2$ in Y. Pick $x_1 ,x_2$ such that $f(x_1)=y_1$ and $f(x_2)=y_2$. Claim: there exist neighborhoods U and V of $x_1$ and $x_2$ such that $f(U) \cap f(V)$ is empty. [This is where continuity is used.] If not, for any neighborhoods U and V there exist points $t_{UV}$, $s_{UV}$ in U and V such that $f(t_{UV})=f(s_{UV})$. Order the pairs (U,V) by $(U,V) \geq (S,T)$ if U is a subset of S and V is a subset of T. We get nets $\{t_{UV}\}$ and $\{s_{UV}\}$ converging to $x_1$ and $x_2$ respectively. Since f is continuous and $f(t_{UV})=f(s_{UV})$ we get $y_1=f(x_1)=f(x_2)=y_2$, which is a contradiction. This proves the claim. Next pick open sets A, B containing $x_1$ and $x_2$ such that their closures are contained in U and V respectively. Now the complements of the images of the closures of A and B are disjoint neighborhoods of $y_1$ and $y_2$. </p>
971,167
<p>I'm having issues with this problem. I have solved for the eigenvalues but am having trouble finding the bases for both eigenvalues. The pictures below contain my work for solving for the eigenvalues and I solved for one of the basis but I think it is incorrect so I stopped. Can someone please help me. Thanks</p> <p><img src="https://i.stack.imgur.com/fkEnU.jpg" alt="enter image description here"></p> <p><img src="https://i.stack.imgur.com/9Qd8f.jpg" alt="enter image description here"></p> <p><img src="https://i.stack.imgur.com/9dEJY.jpg" alt="enter image description here"></p>
Mustafa Said
90,927
<p>Your eigenvalues, $2 + i\sqrt{3}, 2 - i\sqrt{3}$ are correct. Now to solve the system in your picture above, multiply the 1st row by $\frac{1}{2}(-1-i\sqrt{3})$ and add it to the second row. This will turn the $-2$ in the second row to $0$ and you can then find your eigenvector(s).</p>
182,307
<p>I have a long flat list that needs to be partitioned. The list is formatted so the "header" is repeated, followed by the values. Essentially, it looks something like this:</p> <pre><code>list={a,a,1,2,3,b,b,5,6,c,c,1,5,a,a,7,8,9,1} </code></pre> <p>I am looking for an output of:</p> <pre><code>{{a,1,2,3},{b,5,6},{c,1,5},{a,7,8,9,1}} </code></pre> <p>The output above would then let me create the association list I need.</p> <p>Obviously <code>Partition</code> won't work because the sublists are of different lengths. I have looked at various ways to identify where the repeated "header" data is, but that doesn't help with the splits. </p>
user1066
106
<pre><code>SequenceSplit[list, {a_, a_, x:_Integer ..} -&gt; {a,x}] </code></pre> <blockquote> <p>{{a, 1, 2, 3}, {b, 5, 6}, {c, 1, 5}, {a, 7, 8, 9, 1}}</p> </blockquote>
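For readers outside Mathematica, here is a rough Python analogue of the same idea. The helper `split_on_doubled_header` is hypothetical and assumes headers are marked exactly by an element immediately repeating itself and that ordinary values never repeat back-to-back:

```python
def split_on_doubled_header(seq):
    """Partition a flat list at positions where an element is immediately
    repeated (the doubled 'header'), keeping one copy of each header."""
    groups = []
    i = 0
    while i < len(seq):
        # seq[i] == seq[i+1] marks a header pair; start a new group.
        header = seq[i]
        i += 2
        group = [header]
        while i < len(seq) and not (i + 1 < len(seq) and seq[i] == seq[i + 1]):
            group.append(seq[i])
            i += 1
        groups.append(group)
    return groups

data = ["a", "a", 1, 2, 3, "b", "b", 5, 6, "c", "c", 1, 5, "a", "a", 7, 8, 9, 1]
result = split_on_doubled_header(data)
# result: [['a', 1, 2, 3], ['b', 5, 6], ['c', 1, 5], ['a', 7, 8, 9, 1]]
```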
116,220
<p>Say two graphs are not isomorphic but are both strongly regular with the same set of parameters. Are there any parameters (other than the usual such as order, degrees, eigenvalues and multiplicities, etc.) that are determined, e.g., independence number, chromatic number, etc.?</p> <p>Thanks for any help</p>
Moh514
8,725
<p>The diameter, the energy, and the number of closed walks of each length are all determined by the parameters.</p>
3,570,084
<blockquote> <p>find the value of <span class="math-container">$f(0)$</span>, given <span class="math-container">$f(f(x))=0$</span> has roots as 1 &amp; 2, <span class="math-container">$f(x) = x^2 + \alpha x + \beta$</span></p> </blockquote> <p><strong>My attempt:</strong></p> <p>If <span class="math-container">$k_1$</span>, <span class="math-container">$k_2$</span> are the roots of <span class="math-container">$f(x)$</span>, <span class="math-container">$f(x) = k_1$</span> has roots 1 &amp; 2 <span class="math-container">$\therefore \alpha = -3, \beta-k_1 = 2 \text{ or } \beta-k_2 = 2$</span> and <span class="math-container">$f(x) = x^2 -3x+\beta$</span>. I could not proceed further with this approach</p> <p>Another line of thought is that <span class="math-container">$f(f(x))$</span> would be a quartic equation, which means that for it to have 2 roots, it will either have 2 real, 2 imaginary roots or just 2 roots (x-axis would be tangent to this curve at 1 and 2). This information doesn't seem very helpful in solving the problem.</p>
The 2nd
751,538
<blockquote> <p><span class="math-container">$$f(x) = x^2 + \alpha x + \beta$$</span></p> </blockquote> <p>Since <span class="math-container">$f(f(x))=0$</span> has roots at <span class="math-container">$x=1$</span> and <span class="math-container">$x=2$</span>, it implies <span class="math-container">$f(f(1))=0$</span> and <span class="math-container">$f(f(2))=0$</span></p> <p><span class="math-container">$$f(f(1))=0 \implies f(\alpha + \beta + 1) = 0$$</span> <span class="math-container">$$f(f(2))=0 \implies f(2 \alpha + \beta + 4) = 0$$</span></p> <p><span class="math-container">$\implies f(x)$</span> has at least <span class="math-container">$1$</span> root <span class="math-container">$\implies Δ \geq 0$</span></p> <p><strong>Case <span class="math-container">$1$</span>:</strong> <span class="math-container">$Δ = 0$</span></p> <ul> <li><p><span class="math-container">$f(x) \text{ has only 1 root} \implies \alpha + \beta + 1 = 2 \alpha + \beta + 4 \implies \alpha = -3$</span> </p></li> <li><p>Then <span class="math-container">$f(\beta - 2) = 0 \implies (\beta - 2)^2 - 3(\beta - 2) + \beta = 0$</span>, which has 2 imaginary roots: <span class="math-container">$\beta = 3 \pm i$</span>.
Therefore:</p></li> </ul> <blockquote> <p><span class="math-container">$$f(0) = \beta = 3 \pm i$$</span></p> </blockquote> <p><strong>Case <span class="math-container">$2$</span>:</strong> <span class="math-container">$Δ &gt; 0$</span></p> <ul> <li><p><span class="math-container">$f(x) \text{ has 2 distinct roots, which are } \alpha + \beta + 1 \text{ and } 2 \alpha + \beta + 4$</span> </p></li> <li><p>By <strong>Vieta's formulas</strong>:</p></li> </ul> <p><span class="math-container">$$(\alpha + \beta + 1) + (2 \alpha + \beta + 4) = -\alpha \text{ (1) }$$</span> <span class="math-container">$$(\alpha + \beta + 1) \cdot (2 \alpha + \beta + 4) = \beta \text{ (2) }$$</span></p> <p><span class="math-container">$\text{(1) } \implies 4\alpha + 2\beta + 5 = 0 \implies 2 \alpha + \beta + 4 = \frac{3}{2} \text{(1')}$</span></p> <p>Substitute <span class="math-container">$(1')$</span> into <span class="math-container">$(2)$</span>:</p> <p><span class="math-container">$\text{(2) } \implies \alpha + \beta + 1 = \frac{2}{3} \beta \implies \alpha + \frac{1}{3} \beta + 1 = 0 \text{ (2')}$</span></p> <p>Solving <span class="math-container">$(1')$</span> and <span class="math-container">$(2')$</span> gives:</p> <blockquote> <p><span class="math-container">$$\alpha = -\frac{1}{2}, \beta = -\frac{3}{2}$$</span> </p> </blockquote> <p>Therefore:</p> <blockquote> <p><span class="math-container">$$f(0) = \beta = -\frac{3}{2}$$</span> </p> </blockquote>
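As a quick numeric sanity check of the Case $2$ solution: with $\alpha = -\tfrac12$ and $\beta = -\tfrac32$, both $x=1$ and $x=2$ should indeed be roots of $f(f(x))=0$.

```python
alpha, beta = -0.5, -1.5  # Case 2 solution

def f(x):
    return x * x + alpha * x + beta

# f(f(x)) = 0 should hold at the given roots x = 1 and x = 2.
check1 = f(f(1))
check2 = f(f(2))
f0 = f(0)  # the requested value f(0)
```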
1,546,231
<p>I am working on a problem in which, for $a$, $b \gt 0$, we let $(x/a)^2 + (y/b)^2 = 1$ describe an ellipse.<br> I am required to use the method of Lagrange multipliers and the corresponding second derivative test to find $a$ and $b$ such that the ellipse passes through $(x, y) = (\sqrt 2, 2)$ and such that the area of the ellipse $A = \pi ab$ is minimised amongst all such ellipses.</p> <p>My thoughts:</p> <p>Firstly, since $(\sqrt 2, 2)$ lies on the ellipse, we must have that </p> <p>$$\frac{2}{a^2} + \frac{4}{b^2} = 1.$$</p> <p>Secondly, I let $g(a, b) = \pi ab$.<br> With the aim of minimising this function, I calculated $g_a = \pi b$ and $g_b = \pi a$ and set them both equal to $0$.<br> However, this only gives that $a = b = 0$, which is not allowed.</p> <p>I am not yet sure how to progress further with this question and would appreciate <strong>hints</strong>.</p>
Dr. Sonnhard Graubner
175,066
<p>HINT: use the Lagrange Multiplier method $$F=\pi ab+\lambda(\frac{2}{a^2}+\frac{4}{b^2}-1)$$</p>
1,546,231
<p>I am working on a problem in which, for $a$, $b \gt 0$, we let $(x/a)^2 + (y/b)^2 = 1$ describe an ellipse.<br> I am required to use the method of Lagrange multipliers and the corresponding second derivative test to find $a$ and $b$ such that the ellipse passes through $(x, y) = (\sqrt 2, 2)$ and such that the area of the ellipse $A = \pi ab$ is minimised amongst all such ellipses.</p> <p>My thoughts:</p> <p>Firstly, since $(\sqrt 2, 2)$ lies on the ellipse, we must have that </p> <p>$$\frac{2}{a^2} + \frac{4}{b^2} = 1.$$</p> <p>Secondly, I let $g(a, b) = \pi ab$.<br> With the aim of minimising this function, I calculated $g_a = \pi b$ and $g_b = \pi a$ and set them both equal to $0$.<br> However, this only gives that $a = b = 0$, which is not allowed.</p> <p>I am not yet sure how to progress further with this question and would appreciate <strong>hints</strong>.</p>
Dylan
135,643
<p>Your constraint function is $$ f(a,b) = \frac{2}{a^2} + \frac{4}{b^2} - 1 $$ and your minimization function is $g(a,b) = \pi ab$</p> <p>Using the Lagrange multiplier method, this requires $\nabla g = \lambda \nabla f $, thus we have the following system</p> <p>$$ \pi b = -\lambda\frac{4}{a^3} $$ $$ \pi a = -\lambda\frac{8}{b^3} $$ $$ f(a,b) = 0 $$</p>
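Dividing the first equation by the second eliminates $\lambda$ and gives $b^2 = 2a^2$; substituting into the constraint then yields $a = 2$, $b = 2\sqrt2$. A quick numeric verification of that solution (a Python sketch, not part of the hint itself):

```python
import math

a = 2.0
b = 2.0 * math.sqrt(2.0)

# Constraint: the ellipse passes through (sqrt(2), 2), so this should be 1.
constraint = 2 / a**2 + 4 / b**2

# Stationarity: pi*b = -lambda*4/a^3 and pi*a = -lambda*8/b^3 must hold
# for one common lambda; solve each equation for lambda and compare.
lam1 = -math.pi * b * a**3 / 4
lam2 = -math.pi * a * b**3 / 8

area = math.pi * a * b  # minimal area, 4*sqrt(2)*pi
```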
81,016
<p>I'm trying to learn about derived categories of algebraic stacks. To be honest, as of now, I don't need anything fancy nor deep. In my setup I have a scheme $X$ (well, a smooth and projective variety over $\mathbb{C}$) and a finite group $G$ acting on it. I would like to understand simple things like: what does a coherent sheaf on $[X/G]$ look like? what do the (derived) pushforward functors between the various spaces do? and the pullbacks?</p> <p>Any pointers would be greatly appreciated!</p>
Aaron Bergman
947
<p>Bridgeland, King and Reid, "<a href="http://www.ams.org/journals/jams/2001-14-03/S0894-0347-01-00368-X/S0894-0347-01-00368-X.pdf" rel="nofollow">The McKay correspondence as an equivalence of derived categories</a>" has a nice discussion expanding on David Roberts's comment.</p>
429,178
<p>Consider the following function $$ \Pi(t) = \begin{cases} 1 \ \ \text{if} \ |t| &lt; 1/2 \\ 0 \ \ \text{if} \ |t| \geq 1/2 \end{cases} $$</p> <p>This is not a periodic function. Why can we define the fourier coefficients of $\Pi(t)$ as $$c_n = \frac{1}{T} \int_{-T/2}^{T/2} e^{-2 \pi int/T} \Pi(t) \ dt $$</p> <p>$$ = \frac{1}{T} \int_{-1/2}^{1/2} e^{-2 \pi int/T} \Pi(t) \ dt $$</p> <p>if the function is not periodic? I thought fourier coefficients were only defined for periodic functions?</p>
bob.sacamento
61,250
<p>Fourier coefficients are initially explained and taught as applying to periodic functions, since they involve $\sin()$ and $\cos()$ functions, and since they find great utility among periodic functions. But the process of taking the integral</p> <p>$\displaystyle \int \exp(ikx) f(x) dx $</p> <p>has no inherent limit in period length. It can be taken for periods as long as you like, i.e. periods approaching infinity, and can therefore be applied to non-periodic functions. It's just a matter of 1) whether the integral converges and 2) whether the numbers that pop out are useful somehow, and oftentimes they are.</p>
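For the rectangle function above this is easy to check concretely: for any $T$, $c_n = \frac1T\int_{-1/2}^{1/2} e^{-2\pi i n t/T}\,dt = \frac{\sin(\pi n/T)}{\pi n}$, and numerical quadrature agrees with the closed form (a Python sketch):

```python
import cmath
import math

def c(n, T, steps=2000):
    """Midpoint-rule approximation of (1/T) * integral of
    e^{-2*pi*i*n*t/T} * Pi(t) dt, i.e. over t in [-1/2, 1/2]."""
    h = 1.0 / steps
    total = 0.0 + 0.0j
    for k in range(steps):
        t = -0.5 + (k + 0.5) * h
        total += cmath.exp(-2j * math.pi * n * t / T) * h
    return total / T

T = 4.0
n = 1
numeric = c(n, T)
closed_form = math.sin(math.pi * n / T) / (math.pi * n)
```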
128,009
<p>Hi, is there any known sequence such that the sum of one subsequence never equals the sum of another subsequence? The subsequences should have elements only from the parent sequence.</p> <p>Thanks<br> Sundi</p>
André Henriques
5,690
<p>I am tempted to answer your question by the negative:<br> I believe that there does not exist any other, simpler, way of saying "left-invariant pseudo-differential operator on $G$".</p> <p>The reason is that if you look at the subset of smoothing operators (operators with smooth integral kernel), I already don't know how to say "left-invariant smoothing operator on $G$" in any simpler (algebraic) way.</p>
155,860
<p>I often stumble over the term "Lie superalgebra" (= "Lie algebra with a $\mathbb{Z}_2$ grading"). Obvious question: What about $\mathbb{Z}_3$ grading (and so on)? Is a Lie algebra with $\mathbb{Z}_n$ grading just the special case of a quantum Lie algebra $L(q)$ with $q$ being an $n$-th root of 1 (I only looked at the commutator equation :-) or are these completely different things? And are there other generalizations of Lie algebras I should know? (Just to get concrete, what is the Lie algebra series behind the "Vogel plane" for a thing?)<br> (Sidenote: I'm also asking because I found a very special tangled graph invariant which doesn't differ from any "standard" Reshitikhine-Turaev invariant in any relevant property, but if you look closely, the adjoint splits as $6 \cdot 6=1+1+6+8+8+12$ and the metric tensor is not singular. The latter rules out non-semisimple Lie algebras and the $1+1$ semisimple ones. So my first thought was it might come from a Lie superalgebra.)</p>
Can Hatipoglu
9,469
<p>A generalization of the idea of a Lie superalgebra exists and is known as $\epsilon$-Lie algebras or color Lie algebras. They generalize Lie superalgebras in the sense that the underlying vector space is $G$-graded instead of $\mathbb{Z}_{2}$, where $G$ is an arbitrary abelian group. Of course, one modifies the skew-symmetry and skew Jacobi identities, using commutation factors. </p> <p>See for example "Scheunert, M. Generalized Lie algebras. J. Math. Phys. 20 (1979), no. 4, 712–720."</p>
2,796,711
<p>I have the following question in hand. </p> <p>If $\lambda_1,\cdots,\lambda_n$ are the eigenvalues of a given matrix $A \in M_n$, then prove that the matrix equation $AB - BA = \lambda B$ has a nontrivial solution $B \neq 0 \in M_n$, if and only if $\lambda = \lambda_i - \lambda_j$ for some $i,j$.</p>
Pedro
23,350
<p>Consider the operator $B\mapsto [A,B]$. What are its eigenvalues?</p>
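A numeric illustration of where this hint leads (a sketch, not part of the hint itself): on $\mathrm{vec}(B)$, the operator $B \mapsto [A,B]$ is represented by a Kronecker expression whose spectrum is exactly $\{\lambda_i - \lambda_j\}$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
A = rng.standard_normal((n, n))

# With row-major vec(), B -> AB - BA is represented by
# kron(A, I) - kron(I, A^T); its eigenvalues should be lambda_i - lambda_j.
op = np.kron(A, np.eye(n)) - np.kron(np.eye(n), A.T)

lam = np.linalg.eigvals(A)
ad_eigs = np.linalg.eigvals(op)
diffs = np.array([li - lj for li in lam for lj in lam])
```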
392,297
<p>For which sets of primes <span class="math-container">$P$</span> is there a finite type <span class="math-container">$\mathbb{Z}$</span>-algebra <span class="math-container">$A$</span> such that<span class="math-container">$$p\in P\iff\mathrm{Hom}(A, \mathbb{F}_p)=\emptyset?$$</span>Do all the finite <span class="math-container">$P$</span> arise this way?</p> <p><span class="math-container">$A=\mathbb{Z}/n$</span> works for the cofinite <span class="math-container">$P$</span>.</p>
R. van Dobben de Bruyn
82,179
<p>The <a href="https://mathoverflow.net/a/392404">answer</a> of RP_ is correct, so my main contribution is cleaning it up, providing more detail, and uniformising the different cases.</p> <p><strong>Definition.</strong> Let <span class="math-container">$\Omega$</span> be the set of prime numbers. For subsets <span class="math-container">$S, T \subseteq \Omega$</span>, write <span class="math-container">$S \sim T$</span> if the symmetric difference <span class="math-container">$S \mathbin\triangle T = (S \setminus T) \cup (T \setminus S)$</span> is finite; note that this is an equivalence relation.</p> <p>For a finite type <span class="math-container">$\mathbf Z$</span>-scheme <span class="math-container">$X$</span>, write <span class="math-container">$S_X$</span> for the set of primes such that <span class="math-container">$X(\mathbf F_p) \neq \varnothing$</span>. If <span class="math-container">$X = \operatorname{Spec} A$</span> is affine, write <span class="math-container">$S_A$</span> for <span class="math-container">$S_X$</span>. Finally, if <span class="math-container">$f \in \mathbf Z[x]$</span> is a polynomial, set <span class="math-container">$S_f = S_{\mathbf Z[x]/(f)}$</span>, as in RP_'s <a href="https://mathoverflow.net/a/392404">answer</a>. Define subsets of the power set <span class="math-container">$\mathcal P(\Omega)$</span> by <span class="math-container">\begin{align*} \mathcal P_{\text{sch}}(\Omega) &amp;:= \{S_X \mathrel| X \in \mathbf{Sch}_{\mathbf Z}^{\text{f.t.}}\},\\ \mathcal P_{\text{aff}}(\Omega) &amp;:= \{S_A \mathrel| A \in \mathbf{Alg}_{\mathbf Z}^{\text{f.t.}}\},\\ \mathcal P_{\text{poly}}(\Omega) &amp;:= \{S_f \mathrel| f \in \mathbf Z[x]\},\\ \mathcal P_{\text{monic}}(\Omega) &amp;:= \{S_f \mathrel| f \in \mathbf Z[x] \text{ monic}\}. 
\end{align*}</span> In the main proposition below, we will show that the first three agree, and the fourth one as well if we only consider subsets of <span class="math-container">$\Omega$</span> up to <span class="math-container">$\sim$</span>.</p> <p><strong>Example.</strong> The set of primes congruent to <span class="math-container">$1$</span> modulo <span class="math-container">$4$</span> is <span class="math-container">$S_{4x^2+1}$</span>, since <span class="math-container">$-1$</span> is a square modulo a prime <span class="math-container">$p$</span> if and only if <span class="math-container">$p = 2$</span> or <span class="math-container">$p \equiv 1 \pmod 4$</span>. In general, the Chebotarev density theorem says that elements of <span class="math-container">$\mathcal P_{\text{poly}}(\Omega)$</span> have rational density. For example, if <span class="math-container">$\mathbf Z[x]/(f) \cong \mathcal O_K$</span> is the ring of integers in a finite Galois extension <span class="math-container">$\mathbf Q \subseteq K$</span> of degree <span class="math-container">$n$</span>, then <span class="math-container">$S_f$</span> has density <span class="math-container">$\tfrac{1}{n}$</span>.</p> <p><strong>Remark.</strong> Note that <span class="math-container">$S_{X \times Y} = S_X \cap S_Y$</span> and <span class="math-container">$S_{X \amalg Y} = S_X \cup S_Y$</span>, and more generally <span class="math-container">$S_{X \cup Y} = S_X \cup S_Y$</span> if <span class="math-container">$X,Y \subseteq Z$</span> are subschemes (not necessarily disjoint). 
This shows that <span class="math-container">$\mathcal P_{\text{sch}}(\Omega)$</span> and <span class="math-container">$\mathcal P_{\text{aff}}(\Omega)$</span> are closed under finite unions and intersections.</p> <p>This also gives <span class="math-container">$S_{fg} = S_f \cup S_g$</span>, since <span class="math-container">$\operatorname{Spec} \mathbf Z[x]/(fg)$</span> is the union <span class="math-container">$\operatorname{Spec} \mathbf Z[x]/(f) \cup \operatorname{Spec} \mathbf Z[x]/(g)$</span> (the intersection between the two components need not be empty, but it doesn't matter), so <span class="math-container">$\mathcal P_{\text{poly}}(\Omega)$</span> is closed under unions. In addition, the Corollary to Lemma 2 below shows that it is also closed under intersection.</p> <p><strong>Lemma 1.</strong> <em>If <span class="math-container">$S \in \mathcal P_{\text{poly}}(\Omega)$</span> and <span class="math-container">$T \sim S$</span>, then <span class="math-container">$T \in \mathcal P_{\text{poly}}(\Omega)$</span>.</em></p> <p><em>Proof.</em> By assumption, there exists <span class="math-container">$f \in \mathbf Z[x]$</span> such that <span class="math-container">$S = S_f$</span>. It suffices to show that if <span class="math-container">$p \in \Omega$</span>, then <span class="math-container">$S \cup \{p\}$</span> and <span class="math-container">$S \setminus \{p\}$</span> are in <span class="math-container">$\mathcal P_{\text{poly}}(\Omega)$</span>. For <span class="math-container">$S_f \cup \{p\}$</span>, we may use <span class="math-container">$S_{pf}$</span>. If <span class="math-container">$f = 0$</span>, then <span class="math-container">$S_{px-1} = \Omega\setminus \{p\} = S \setminus \{p\}$</span>, showing that <span class="math-container">$S \setminus \{p\} \in \mathcal P_{\text{poly}}(\Omega)$</span>. 
If <span class="math-container">$f \neq 0$</span>, choose <span class="math-container">$a \in \mathbf Z$</span> with <span class="math-container">$f(a) \neq 0$</span>, and let <span class="math-container">$r = v_p(f(a)) \in \mathbf Z_{\geq 0}$</span>. After replacing <span class="math-container">$f(x)$</span> by <span class="math-container">$f(x-a)$</span>, we may assume <span class="math-container">$a = 0$</span>. Then <span class="math-container">$g(x) = \tfrac{f(p^{r+1}x)}{p^r}$</span> has a solution modulo a prime <span class="math-container">$q \neq p$</span> if and only if <span class="math-container">$f$</span> does, since <span class="math-container">$p^{r+1}$</span> is invertible modulo <span class="math-container">$q$</span>. This gives an integer polynomial whose terms are all divisible by <span class="math-container">$p$</span> except the constant term, so <span class="math-container">$g$</span> has no zeroes modulo <span class="math-container">$p$</span>. Therefore, <span class="math-container">$S_g = S \setminus \{p\}$</span>, showing that <span class="math-container">$S \setminus \{p\} \in \mathcal P_{\text{poly}}(\Omega)$</span>. <span class="math-container">$\square$</span></p> <p><strong>Lemma 2.</strong> <em>Let <span class="math-container">$f, g \in \mathbf Z[x]$</span>. Then there exists a monic polynomial <span class="math-container">$h \in \mathbf Z[x]$</span> such that <span class="math-container">$S_f \cap S_g \sim S_h$</span>.</em></p> <p><em>Proof.</em> Write <span class="math-container">$A = \mathbf Z[x]/(f)$</span> and <span class="math-container">$B = \mathbf Z[x]/(g)$</span>, and set <span class="math-container">$C = A \otimes B$</span>, so <span class="math-container">$S_f \cap S_g = S_C$</span>. 
Then <span class="math-container">$(C \otimes \mathbf Q)^{\text{red}}$</span> is a finite product of fields, so can be written as <span class="math-container">$\prod_{i=1}^r \mathbf Q[x]/(h_i)$</span> for monic polynomials <span class="math-container">$h_i \in \mathbf Z[x]$</span>. Then <span class="math-container">$C' = \prod_{i=1}^r\operatorname{Spec} \mathbf Z[x]/(h_i)$</span> differs from <span class="math-container">$(\operatorname{Spec} C)^{\text{red}}$</span> in finitely many closed fibres above <span class="math-container">$\operatorname{Spec} \mathbf Z$</span>, so away from the corresponding primes we have <span class="math-container">$S_{C'} = S_C$</span>. Setting <span class="math-container">$h = h_1 \dotsm h_r$</span> gives the result, since <span class="math-container">$S_{C'} = \bigcup_i S_{h_i} = S_h$</span>. <span class="math-container">$\square$</span></p> <p><strong>Corollary.</strong> <em>The set <span class="math-container">$\mathcal P_{\text{poly}}(\Omega)$</span> is closed under (finite) intersections.</em></p> <p><em>Proof.</em> If <span class="math-container">$S, T \in \mathcal P_{\text{poly}}(\Omega)$</span>, then there exists <span class="math-container">$U \in \mathcal P_{\text{monic}}(\Omega)$</span> with <span class="math-container">$U \sim S \cap T$</span> by Lemma 2. Then Lemma 1 gives <span class="math-container">$S \cap T \in \mathcal P_{\text{poly}}(\Omega)$</span>. <span class="math-container">$\square$</span></p> <p>The main claim is the following:</p> <p><strong>Proposition.</strong> <em>Let <span class="math-container">$S \subseteq \Omega$</span> be a set of primes. 
Then the following are equivalent:</em></p> <ol> <li><em><span class="math-container">$S \in \mathcal P_{\text{sch}}(\Omega)$</span>;</em></li> <li><em><span class="math-container">$S \in \mathcal P_{\text{aff}}(\Omega)$</span>;</em></li> <li><em><span class="math-container">$S \in \mathcal P_{\text{poly}}(\Omega)$</span>;</em></li> <li><em><span class="math-container">$S \sim T$</span> for some <span class="math-container">$T \in \mathcal P_{\text{monic}}(\Omega)$</span>.</em></li> </ol> <p>That is, there exists a monic polynomial <span class="math-container">$f \in \mathbf Z[x]$</span> such that <span class="math-container">$S \mathbin\triangle S_f$</span> is finite. Note that this is not quite the Boolean lattice generated by the sets <span class="math-container">$S_f$</span>, as <span class="math-container">$\mathcal P_{\text{aff}}(\Omega)$</span> is not closed under complements (e.g. it doesn't contain the set of primes congruent to <span class="math-container">$3$</span> modulo <span class="math-container">$4$</span>, right?).</p> <p><em>Proof of Proposition.</em> Note that all sets contain singletons and complements of singletons, and are closed under finite unions and intersections (for intersections in <span class="math-container">$(4)$</span> and <span class="math-container">$(3)$</span>, use Lemma 2 and its Corollary above). Implications <span class="math-container">$(3) \Rightarrow (2) \Rightarrow (1)$</span> are clear, and breaking up an arbitrary finite type <span class="math-container">$\mathbf Z$</span>-scheme into locally closed affine subschemes shows <span class="math-container">$(1) \Rightarrow (2)$</span>. Note that Lemma 2 implies <span class="math-container">$(3) \Rightarrow (4)$</span>, but we don't need this. The converse follows from Lemma 1.</p> <p>It remains to show <span class="math-container">$(2) \Rightarrow (4)$</span>, where we may assume <span class="math-container">$A$</span> is an integral domain. 
Let <span class="math-container">$K$</span> be the algebraic closure of <span class="math-container">$\mathbf Q$</span> in <span class="math-container">$\operatorname{Frac}(A)$</span>. Then <span class="math-container">$A \otimes \mathbf Q$</span> is a geometrically integral <span class="math-container">$K$</span>-algebra [Tags <a href="https://stacks.math.columbia.edu/tag/020I" rel="noreferrer">020I</a>, <a href="https://stacks.math.columbia.edu/tag/037P" rel="noreferrer">037P</a>, and <a href="https://stacks.math.columbia.edu/tag/054Q" rel="noreferrer">054Q</a>]. There is a finite set of primes <span class="math-container">$T$</span> of <span class="math-container">$K$</span> such that <span class="math-container">$A[1/T]$</span> is a flat <span class="math-container">$\mathcal O_{K,T}$</span>-algebra with geometrically integral fibres [EGA IV<span class="math-container">$_3$</span>, Prop. 8.9.4 and Thm. 9.7.7]. The Lang–Weil bound (or more precise versions coming from the Weil conjectures as proven by Deligne) then gives <span class="math-container">$$\lvert A(\kappa(\mathfrak p))\rvert \geq \lvert\kappa(\mathfrak p)\rvert^d - c \lvert\kappa(\mathfrak p)\rvert^{d-\tfrac{1}{2}}$$</span> for all prime ideals <span class="math-container">$\mathfrak p \subseteq \mathcal O_{K,T}$</span> and some <span class="math-container">$c &gt; 0$</span> that <em>does not depend on <span class="math-container">$\mathfrak p$</span></em>. In particular, for all but finitely many primes <span class="math-container">$\mathfrak p \subseteq \mathcal O_{K,T}$</span>, we get <span class="math-container">$A(\kappa(\mathfrak p)) \neq \varnothing$</span>. Therefore, for all but finitely primes <span class="math-container">$p \in \Omega$</span>, we get <span class="math-container">$A(\mathbf F_p) \neq \varnothing$</span> if and only if <span class="math-container">$\mathcal O_{K,T}$</span> has a prime with residue field <span class="math-container">$\mathbf F_p$</span>, i.e. 
if and only if <span class="math-container">$\mathcal O_{K,T}(\mathbf F_p) \neq \varnothing$</span>. In other words, <span class="math-container">$S_A \sim S_{\mathcal O_{K,T}}$</span>.</p> <p>Thus we may replace <span class="math-container">$A$</span> by <span class="math-container">$\mathcal O_{K,T}$</span>, and now we proceed as in Lemma 2: if <span class="math-container">$f \in \mathbf Z[x]$</span> is a monic polynomial such that <span class="math-container">$K \cong \mathbf Q[x]/(f)$</span>, then <span class="math-container">$A$</span> and <span class="math-container">$\mathbf Z[x]/(f)$</span> differ in finitely many closed fibres, so <span class="math-container">$S_A \sim S_f$</span>. <span class="math-container">$\square$</span></p>
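A small computational check of the Example near the top (brute force, separate from the proof): for primes up to $200$, the polynomial $4x^2+1$ has a root mod $p$ exactly when $p \equiv 1 \pmod 4$.

```python
def has_root_mod_p(coeffs, p):
    """Does the integer polynomial (coeffs, constant term first)
    have a root modulo p?  Brute force over all residues."""
    def val(x):
        return sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p
    return any(val(x) == 0 for x in range(p))

def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            for j in range(i * i, n + 1, i):
                sieve[j] = False
    return [i for i, b in enumerate(sieve) if b]

f = [1, 0, 4]  # 4x^2 + 1, constant term first
S_f = {p for p in primes_up_to(200) if has_root_mod_p(f, p)}
expected = {p for p in primes_up_to(200) if p % 4 == 1}
```

Note the factor $4$ matters: $x^2+1$ has the root $x=1$ mod $2$, while $4x^2+1 \equiv 1 \pmod 2$ never vanishes, so $p=2$ is excluded and $S_{4x^2+1}$ is exactly the primes $\equiv 1 \pmod 4$.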
1,507,859
<blockquote> <p>$$\int \tan \sqrt {x} \,dx$$</p> </blockquote> <p>I was trying to solve this, but it took a very long time and three pages. Could someone please tell me how to solve it more quickly?</p>
user21820
21,820
<p>Your way is reasonable, but uses Bernoulli's inequality. There is another more direct way.</p> <p>If the sequence is bounded from above, it has a supremum $c$ by the completeness axiom for reals, and by definition we have $a^k &gt; c/a$ for some natural number $k$ because $c/a &lt; c$ since $a &gt; 1$. Thus $a^{k+1} &gt; c$, contradicting the choice of $c$.</p> <p>Therefore the sequence is not bounded from above.</p>
73,767
<p>I have the following expressions $g^{ab+cd+...}$ or in the full form <code>Power[g,Plus[Times[a,b],Times[c,d],...]]</code></p> <p>How to convert this expression into</p> <p><code>Power[g,Times[a,b]]Power[g,Times[c,d]]...</code>?</p> <p>Here $...$ could be either nothing or one more, but of course it would be nice to get the most general expression for any number of terms.</p>
kglr
125
<pre><code>expr1 = Power[g, Plus[Times[a, b], Times[c, d]]]; Times @@ Defer /@ (expr1 /. Plus -&gt; List) (* or Times @@ HoldForm /@ (expr1 /. Plus -&gt; List) *) </code></pre> <p><img src="https://i.stack.imgur.com/hJCBp.png" alt="enter image description here"></p> <p>In version 10, you can also use</p> <pre><code>Inactive[Times]@@ (expr1 /. Plus -&gt; List) </code></pre>
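As a side note (not part of the Mathematica answer): the analogous transformation in Python's SymPy is `expand_power_exp`, which splits a sum in the exponent into a product of powers.

```python
from sympy import symbols, expand_power_exp

g, a, b, c, d = symbols("g a b c d")
expr = g ** (a * b + c * d)

# g**(a*b + c*d)  ->  g**(a*b) * g**(c*d)
split = expand_power_exp(expr)
```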
2,616,604
<p>Why is it true that every odd number $r$ that is not equivalent to $0 \bmod p$ has an odd number $s$ such that</p> <p>$rs\equiv 2p-1 \mod 2p $</p> <p>My first thought is that I should have an inverse for $r$ in the ring, so I can write $s\equiv r^{-1} (-1) \mod 2p $, but I'm not sure what in number theory tells me that $r^{-1} $ exists.</p> <p>Here $p$, as always in number theory, is prime.</p>
Dietrich Burde
83,966
<p>Indeed, we can take $-1/r=-r^{-1}$ in $\mathbb{Z}/2p$, since we know that $r\not\equiv 0\bmod 2$ and $r\not\equiv 0\bmod p$ by assumption.</p>
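A brute-force check for $p = 7$ (a Python sketch; it relies on Python 3.8+'s `pow(r, -1, m)` for modular inverses): every odd $r$ with $r \not\equiv 0 \pmod 7$ has an odd $s$ with $rs \equiv 13 \pmod{14}$.

```python
p = 7
m = 2 * p
target = 2 * p - 1  # 2p - 1 is -1 mod 2p

solutions = {}
for r in range(1, m):
    if r % 2 == 1 and r % p != 0:
        # s = -r^{-1} mod 2p; pow(r, -1, m) requires gcd(r, m) = 1,
        # which holds precisely because r is odd and not divisible by p.
        s = (-pow(r, -1, m)) % m
        solutions[r] = s
```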
1,046,582
<p>Question:</p> <p>$$\lim_{x \to 0} \frac{e^{\tan^3x}-e^{x^3}}{2\ln (1+x^3\sin^2x)}$$</p> <p>Here I tried the $\tan x$ and $\sin x$ expansions in the numerator and denominator, which are as follows:</p> <p>$$\tan x =x +\frac{x^3}{3}+\frac{2x^5}{15}+ \cdots$$</p> <p>$$\sin x = x-\frac{x^3}{3!}+\frac{x^5}{5!}-\cdots$$</p> <p>but this method is not working, and the other alternative of using L'Hospital's rule is not working on this either ... please help with how to tackle this limit problem. Thanks.</p>
Siminore
29,672
<p>It looks like a bad guy. I used Wolfram Mathematica to compute $$ \tan^3 x -x^3 = x^5+\frac{11}{15}x^7 + O(x^8) $$ and $$ e^{\tan^3 x -x^3}=1+x^5+\frac{11}{15}x^7 + O(x^8). $$ Therefore $$ e^{\tan^3 x}-e^{x^3}=x^5+\frac{11}{15}x^7 + O(x^8). $$ Since $$ \log(1+x^3 \sin^2 x) = x^5 +O(x^6), $$ the limit exists and is equal to $1/2$.</p> <p>But it can be very boring to perform these computations by hand: all the terms up to power $5$ cancel out, so you should take care of really many terms in each expansion, otherwise you'll get nothing useful.</p>
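A numeric cross-check of the value $1/2$ (a sketch, not part of the series argument). Writing the numerator as $e^{x^3}\left(e^{\tan^3 x - x^3} - 1\right)$ and using `expm1`/`log1p` avoids the catastrophic cancellation that would otherwise swamp the computation at small $x$:

```python
import math

def ratio(x):
    # e^{tan^3 x} - e^{x^3} = e^{x^3} * (e^{tan^3 x - x^3} - 1);
    # expm1/log1p keep full precision for tiny arguments.
    num = math.exp(x ** 3) * math.expm1(math.tan(x) ** 3 - x ** 3)
    den = 2.0 * math.log1p(x ** 3 * math.sin(x) ** 2)
    return num / den

approx = ratio(1e-2)  # should be close to 1/2
```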
2,536,877
<p>I have to evaluate this integral: $$\int \frac{\tan^5 x}{\cos^3{x}} dx.$$</p> <p>I guess I should use substitution, but I don't know what to substitute =(</p>
klirk
385,702
<p>If you set $\frac 1 {\cos x}=y$, then $dy = \frac {\sin x}{\cos^2 x} dx = \frac {\tan x} {\cos x} dx$.<br> Also note that $\tan^2 x = \frac 1 {\cos^2 x}-1$, so $\tan^4 x = (y^2-1)^2$. Then your integral becomes $$\int \tan^4 x \cdot \frac 1 {\cos^2 x} \cdot \frac {\tan x}{\cos x} dx =\int (y^2-1)^2 y^2 \, dy .$$</p> <p>This should be easy to evaluate.</p>
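Since $(y^2-1)^2 y^2 = y^6 - 2y^4 + y^2$, the antiderivative is $\frac{y^7}{7} - \frac{2y^5}{5} + \frac{y^3}{3}$ with $y = \frac{1}{\cos x}$. A quick numeric check that differentiating this in $x$ recovers the original integrand (not part of the answer itself):

```python
import math

def F(x):
    # Antiderivative from integrating (y^2 - 1)^2 * y^2 with y = 1/cos(x).
    y = 1.0 / math.cos(x)
    return y ** 7 / 7 - 2 * y ** 5 / 5 + y ** 3 / 3

def integrand(x):
    return math.tan(x) ** 5 / math.cos(x) ** 3

# Central-difference derivative of F at a test point.
x0, h = 0.7, 1e-5
numeric_derivative = (F(x0 + h) - F(x0 - h)) / (2 * h)
```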
1,287,236
<p>I need to calculate $\sin 15^\circ-\cos 15^\circ$ (in degrees). I got zero, but it is wrong. Though it seems to me that I was solving it correctly; I was doing it this way: $\sin (45^\circ-30^\circ) - \cos (45^\circ-30^\circ) = \frac{\sqrt{6}-\sqrt{2}}{4} - \frac{\sqrt{6}+\sqrt{2}}{4} = 0$</p>
Community
-1
<p>You just did some algebra wrong. </p> <p>$$\frac{\sqrt{6}-\sqrt{2}}{4} - \frac{\sqrt{6}+\sqrt{2}}{4}=-\frac{\sqrt{2}}{4}-\frac{\sqrt{2}}{4}=-\frac{\sqrt{2}}{2}$$</p> <p>WA confirms <a href="http://www.wolframalpha.com/input/?i=sin%2815%29-cos%2815%29" rel="nofollow">http://www.wolframalpha.com/input/?i=sin%2815%29-cos%2815%29</a></p>
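A one-line numeric confirmation in Python:

```python
import math

# sin(15 deg) - cos(15 deg) should equal -sqrt(2)/2.
value = math.sin(math.radians(15)) - math.cos(math.radians(15))
exact = -math.sqrt(2) / 2
```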
2,636,929
<p>I know that it can be obtained by simply differentiating the equation and finding the roots of the derivative, but it is a lengthy and tricky process. I am looking for a faster and more straightforward way.</p> <p>A more effective and quick way to find the answer via simple differentiation will also be appreciated.</p>
dezdichado
152,744
<p>Let $a = \arcsin x , b = \arccos x$, then you need to optimize $a^4+b^4$ with the constraint $a+b =\dfrac{\pi}{2}.$ An obvious lower bound follows from the power-mean inequality: $$a^4+b^4\geq2\left(\dfrac{a+b}{2}\right)^4 = \dfrac{(a+b)^4}{8} = \dfrac{\pi^4}{128}.$$</p> <p>In order to maximize, though, you will need to resort to Lagrange multipliers, or to the second derivative test for a single variable after eliminating one of them. </p>
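A numeric scan over $x \in [-1, 1]$ (a sketch, not part of the argument) pins down both extremes: the minimum $\frac{\pi^4}{128}$, attained at $x = \sin\frac{\pi}{4}$, and the maximum $\frac{17\pi^4}{16}$, attained at the endpoint $x = -1$ where $a = -\frac{\pi}{2}$, $b = \pi$.

```python
import math

def g(x):
    return math.asin(x) ** 4 + math.acos(x) ** 4

# Fine uniform grid over the domain [-1, 1], endpoints included.
xs = [-1 + 2 * k / 200000 for k in range(200001)]
values = [g(x) for x in xs]
g_min = min(values)
g_max = max(values)

min_expected = math.pi ** 4 / 128      # at x = sin(pi/4)
max_expected = 17 * math.pi ** 4 / 16  # at x = -1
```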
3,247,923
<p>I’ve tried splitting this into the sum of two Chinese Dumbass triangles:</p> <pre><code>      0
    -5   0
   0  15  -5
  0  -5   0   0
</code></pre> <p>and</p> <pre><code>      1
    2    2
   2  -15   2
  1   2    2   1
</code></pre> <p>And that fails. Any hint or help would be appreciated!</p>
W-t-P
181,098
<p>Let <span class="math-container">$$ f(a,b,c) := a(a-b)(a-2b)+b(b-c)(b-2c)+c(c-a)(c-2a). $$</span> The key observation is that the coefficient of <span class="math-container">$x^2$</span> in <span class="math-container">$f(a+x,b+x,c+x)$</span> is <span class="math-container">$$ -(a-b)-(b-c)-(c-a) = 0; $$</span> that is, <span class="math-container">$f(a+x,b+x,c+x)$</span> is <em>linear</em> in <span class="math-container">$x$</span>. Furthermore, the coefficient of <span class="math-container">$x$</span> is <span class="math-container">\begin{multline*} -2b(a-b) - 2c(b-c) - 2a(c-a) \\ = (a^2+b^2-2ab) + (b^2+c^2-2bc) + (c^2+a^2-2ac) \ge 0. \end{multline*}</span> It follows that <span class="math-container">$f(a+x,b+x,c+x)$</span> is an increasing function of <span class="math-container">$x$</span>. Therefore, assuming for definiteness that <span class="math-container">$\min\{a,b,c\}=a$</span>, we get <span class="math-container">$$ f(a,b,c) \ge f(0,b-a,c-a). $$</span> That is, the general case reduces to that where <span class="math-container">$a=0$</span>, and this special case is very easy to verify.</p>
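The two coefficient computations above can be verified exactly over a grid of integer triples (a Python check, not part of the proof): $g(x) = f(a+x, b+x, c+x)$ should be linear in $x$ with slope $(a-b)^2 + (b-c)^2 + (c-a)^2$, and the reduced case $a = 0$ with $b, c \geq 0$ should be nonnegative.

```python
from itertools import product

def f(a, b, c):
    return (a * (a - b) * (a - 2 * b)
            + b * (b - c) * (b - 2 * c)
            + c * (c - a) * (c - 2 * a))

def g(a, b, c, x):
    return f(a + x, b + x, c + x)

checks = []
for a, b, c in product(range(-3, 4), repeat=3):
    g0, g1, g2 = g(a, b, c, 0), g(a, b, c, 1), g(a, b, c, 2)
    sumsq = (a - b) ** 2 + (b - c) ** 2 + (c - a) ** 2
    # Second difference vanishes (linearity in x) and the slope matches.
    checks.append(g2 - 2 * g1 + g0 == 0 and g1 - g0 == sumsq)

# The base case a = 0, b, c >= 0 of the reduction should be nonnegative.
base_case = all(f(0, b, c) >= 0 for b in range(0, 8) for c in range(0, 8))
```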
3,537,549
<p>In the game of poker, five cards from a standard deck of 52 cards are dealt to each player.</p> <p>Assume there are four players and the cards are dealt five at a time around the table until all four players have received five cards.</p> <p>a. What is the probability of the first player receiving a royal flush (the ace, king, queen, jack, and 10 of the same suit)?</p> <p>b. What is the probability of the second player receiving a royal flush?</p> <p>c. If the cards are dealt one at a time to each player in turn, what is the probability of the first player receiving a royal flush?</p> <p>I know that the probability of a royal flush is 1/649,740, because 52C5 = 2,598,960 and 4/2,598,960 = 1/649,740.</p> <p>But I am struggling to understand how to determine the probability of the first player getting it, etc...</p> <p>Any help would be greatly appreciated.</p>
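As a quick check of the arithmetic in the question (a Python sketch, nothing more): there are C(52, 5) equally likely five-card hands, and exactly 4 of them are royal flushes, one per suit.

```python
from math import comb

hands = comb(52, 5)   # number of five-card hands
royal = 4             # one royal flush per suit
print(hands, royal / hands)  # 2598960 and 1/649740
```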
EugenS
588,430
<p>Thanks to the hint from @Eric Towers. With his permission, I will write out the estimate in more detail. Even though the answer is still not completely settled, the bound below is in any case valid.</p> <p>Remembering that <span class="math-container">$|e^{i t R \sin{\theta}}| = 1$</span>, rewrite the estimates: <span class="math-container">\begin{align} \left|\oint_{C_2} dz \frac{e^{z t}}{1+2 \sqrt{z}}\right| \le {}&amp;\int_{\frac{\pi}{2}}^{\pi} |d\theta| \frac{R |e^{ t R (\cos{\theta} +i \sin{\theta})}|}{1+2 \sqrt{R}} \le\\ &amp;\le \frac{R}{1+2 \sqrt{R}} \int_{\frac{\pi}{2}}^{\pi} |d\theta| |e^{ t R \cos{\theta}}|\cdot | e^{i t R \sin{\theta}}| \le\\ &amp; \le \frac{R}{1+2 \sqrt{R}} \int_{\frac{\pi}{2}}^{\pi} |d\theta| |e^{ t R \cos{\theta}}| \label{4} \tag{4} \end{align}</span> Introduce a dummy variable <span class="math-container">$$\phi = \theta - \frac{\pi}{2},$$</span> so that <span class="math-container">$$\cos{\theta} = \cos{\left(\phi + \frac{\pi}{2}\right)} = -\sin{\phi}.$$</span> Then <span class="math-container">$$\frac{R}{1+2 \sqrt{R}} \int_{\frac{\pi}{2}}^{\pi} |d\theta| |e^{ t R \cos{\theta}}| = \frac{R}{1+2 \sqrt{R}} \int_{0}^{\frac{\pi}{2}} |d\phi| |e^{ -t R \sin{\phi}}| \label{5} \tag{5}$$</span></p> <p>Taking into account that <span class="math-container">$-2\phi / \pi \geq -\sin \phi$</span> on <span class="math-container">$[0, \pi/2]$</span> (Jordan's inequality), the new estimate is: <span class="math-container">$$\frac{R}{1+2 \sqrt{R}} \int_{0}^{\frac{\pi}{2}} |d\phi| |e^{ -t R \sin{\phi}}| \leq \frac{R}{1+2 \sqrt{R}} \int_{0}^{\frac{\pi}{2}} |d\phi| |e^{ -2 t R \phi/\pi}|\label{6} \tag{6}$$</span> Introduce the dummy variable <span class="math-container">$u$</span>: <span class="math-container">\begin{align} &amp;u = -\frac{2 t R \phi}{\pi} \\ &amp;d \phi = -\frac{\pi}{2tR } d u \\ &amp;\int_{0}^{\frac{\pi}{2}} |d\phi| |e^{ -2 t R \phi/\pi}| = -\frac{\pi}{2tR } \int_{0}^{-tR} e^{u} du = \frac{\pi}{2tR }(1-e^{-tR}) \end{align}</span> Thus <span class="math-container">\begin{align} \lim_{R \to \infty}\left|\oint_{C_2} dz \frac{e^{z t}}{1+2 \sqrt{z}}\right| \le {}&amp;\lim_{R \to \infty}\int_{\frac{\pi}{2}}^{\pi} |d\theta| \frac{R |e^{ t R (\cos{\theta} +i \sin{\theta})}|}{1+2 \sqrt{R}} \le \\ &amp; \le \lim_{R \to \infty} \frac{\pi}{2t(1+2 \sqrt{R})}(1-e^{-tR}) = 0 \end{align}</span> </p>
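As a numerical sanity check of the Jordan-inequality step (a sketch only, not part of the proof): the integral of e^(−tR sin φ) over [0, π/2] should never exceed π/(2tR) · (1 − e^(−tR)).

```python
import numpy as np

def arc_integral(t, R, n=200000):
    # Midpoint-rule approximation of the integral of exp(-t R sin(phi))
    # over [0, pi/2].
    phi = (np.arange(n) + 0.5) * (np.pi / 2) / n
    return np.sum(np.exp(-t * R * np.sin(phi))) * (np.pi / 2) / n

# The bound should hold for any t, R > 0; check a few combinations.
for t in (0.5, 1.0, 2.0):
    for R in (1.0, 10.0, 100.0):
        bound = np.pi / (2 * t * R) * (1 - np.exp(-t * R))
        assert arc_integral(t, R) <= bound + 1e-9
```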
4,236,875
<p>Today, I was solving a problem which is described below.</p> <blockquote> <p>Show that the equation of any circle passing through the points of intersection of the ellipse <span class="math-container">$(x + 2)^2 + 2y^2 = 18$</span> and the ellipse <span class="math-container">$9(x - 1)^2 + 16y^2= 25$</span> can be written in the form <span class="math-container">$x^2 - 2ax + y^2 = 5 - 4a$</span>.​</p> </blockquote> <p>I tried solving it and found the intersection points of the two ellipses, which are <span class="math-container">$(2,-1)$</span> and <span class="math-container">$(2,1)$</span>. But how can I show the required result from here?</p> <p>I have never found the equation of a circle given just two points lying on it. Please help me solve this.</p> <p><a href="https://i.stack.imgur.com/uDZZk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uDZZk.png" alt="enter image description here" /></a></p>
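A quick numerical check (a Python sketch) confirming that the two intersection points stated in the question do lie on both ellipses:

```python
# Check that (2, 1) and (2, -1) satisfy both ellipse equations.
for x, y in [(2.0, 1.0), (2.0, -1.0)]:
    assert (x + 2)**2 + 2*y**2 == 18        # first ellipse
    assert 9*(x - 1)**2 + 16*y**2 == 25     # second ellipse
```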
hamam_Abdallah
369,188
<p>Let <span class="math-container">$ A(2,-1) $</span> and <span class="math-container">$ B(2,1) $</span>.</p> <p>If <span class="math-container">$ C(a,b) $</span> is the center, then</p> <p><span class="math-container">$$AC^2=BC^2 \implies$$</span></p> <p><span class="math-container">$$(a-2)^2+(b+1)^2=$$</span> <span class="math-container">$$(a-2)^2+(b-1)^2$$</span> <span class="math-container">$$\implies b=0$$</span> <span class="math-container">$$\implies C \text{ lies on the } x\text{-axis}.$$</span></p> <p>The equation of the circle will therefore be of the form</p> <p><span class="math-container">$$(x-a)^2+y^2=AC^2$$</span> <span class="math-container">$$=(a-2)^2+(0+1)^2,$$</span></p> <p>or <span class="math-container">$$x^2-2ax+y^2=5-4a.$$</span></p>
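As a quick sanity check of the final identity (a Python sketch, not part of the derivation): for any center <span class="math-container">$(a, 0)$</span>, both <span class="math-container">$A$</span> and <span class="math-container">$B$</span> satisfy the stated equation, since the parameter <span class="math-container">$a$</span> cancels.

```python
import random

# For any center (a, 0), the circle through A(2, -1) and B(2, 1)
# satisfies x^2 - 2 a x + y^2 = 5 - 4 a at both points:
# 4 - 4a + 1 = 5 - 4a identically.
random.seed(1)
for _ in range(100):
    a = random.uniform(-50, 50)
    for x, y in [(2.0, -1.0), (2.0, 1.0)]:
        assert abs(x*x - 2*a*x + y*y - (5 - 4*a)) < 1e-9
```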
1,484,880
<p>I messed up an exam yesterday. Given</p> <p>$$\mathcal{B}:=\{(a,b]:a,b \in\mathbb{R}, a \le b\},$$</p> <p>I was able to show that $\mathcal{B}$ is the base of a topology $\mathcal{T}$, and that $\mathcal{T}$ is finer than the standard topology. However, I couldn't find a countable dense subset in $(\mathbb{R},\mathcal{T})$. I thought about $\mathbb{Q}$, but is $\mathbb{Q} \in \mathcal{T}$? And how does one show that there is no countable base for $\mathcal{T}$?</p> <p>I'm just curious, and a bit afraid that there is a simple answer to this.</p>
Brian M. Scott
12,042
<p>HINT: To show that the space has no countable base, let $\mathscr{B}$ be any base for $\mathscr{T}$, and note that for each $x\in\Bbb R$ there must be some $B_x\in\mathscr{B}$ such that $x\in B_x\subseteq(x-1,x]$. Then show that if $x\ne y$, then $B_x\ne B_y$.</p>