43,690
<p>I have to apologize because this is not the normal sort of question for this site, but there have been times in the past where MO was remarkably helpful and kind to undergrads with similar types of question and since it is worrying me increasingly as of late I feel that I must ask it.</p> <p>My question is: what can one (such as myself) contribute to mathematics?</p> <p>I find that mathematics is made by people like Gauss and Euler - while it may be possible to learn their work and understand it, nothing new is created by doing this. One can rewrite their books in modern language and notation or guide others to learn it too, but I never believed this was the significant part of a mathematician's work, which would be the creation of original mathematics. It seems entirely plausible that, with all the tremendously clever people working so hard on mathematics, there is nothing left for someone such as myself (who would be the first to admit they do not have any special talent in the field) to do. Perhaps my value would be to act more like cannon fodder? Since just sending <em>enough</em> men in will surely break through some barrier.</p> <p>Anyway I don't want to ramble too much but I really would like to find answers to this question - whether they come from experiences or people's biographies or anywhere.</p> <p>Thank you.</p>
Joe Silverman
11,926
<p>It's 12:40am New Year's Day, so maybe not the best time to be writing MO responses, but I loved reading everyone's encouraging answers to this question. (BTW: @Lubin: Taking algebra, with a dollop of algebraic number theory, from you was inspiring.) Anyway, my answer to the OP would be to learn lots and to work hard on problems that interest you. Don't worry so much about whether you're proving breakthrough theorems, just try as hard as you can to understand the parts of mathematics that interest you the most. (By "understand", of course, I mean get into the guts, figure out what's really going on, and prove as much as you can.) Also don't worry that you won't solve every problem (or even a majority of the problems) on which you work, and don't worry that you won't ever feel you fully understand everything about a problem; that's why there's always more to investigate. Then, after a decade or two, feel free to look back, and I think you'll find that you have made a contribution to humanity's knowledge of mathematics. </p> <p>And even when you're doing research, it's hard (at least, I've found it hard) to decide on the significance of what you've done. I think the difficulty is that after working hard on a problem for a year or two and making enough progress to write a paper, one understands the problem so well that everything that one can prove seems trivial, while everything that's left undone seems hopeless. So maybe my saying "wait a decade or two" is a bit excessive, but it's definitely worth waiting a couple years before you decide on the quality of the work you've done.</p> <p>Happy New Year to one and all at MO.</p>
328,868
<p>It is easily shown that the function $$\begin{cases} \exp \left(\frac{1}{x^2-1} \right) &amp; |x| &lt; 1 \\ 0 &amp; \text{otherwise} \\ \end{cases}$$ is smooth and has compact support in $\mathbb R$. I tried playing with it to find a function with the following properties:</p> <p>a. $f(x)=0$ for $x \le 0$</p> <p>b. $f(x)=1$ for $x \ge 1$</p> <p>c. $f$ is monotonically increasing.</p> <p>d. $f$ is smooth.</p> <p>Is it possible to find an explicit formula for such $f$?</p>
user1337
62,839
<p>I've found this other explicit construction of a transition function (I believe it's from Spivak's Calculus on Manifolds).</p> <ol> <li><p>Let $$g(x)=\begin{cases} \exp \left( \frac{1}{x^2-1} \right) &amp; |x|&lt;1 \\ 0 &amp; |x| \geq 1 \end{cases}$$ and let $A:=\int_{-\infty}^\infty g(x) \mathrm{d}x$.</p></li> <li><p>The function $$f(x):=\frac{1}{A} \int_{-\infty}^x g(t) \mathrm{d} t$$ satisfies the requirements (up to a simple change of variables of the form $x \mapsto ax+b)$.</p></li> </ol>
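The construction above is easy to check numerically. Below is a quick Python sketch (my own illustration, not part of the answer): it approximates $A$ and $f$ with a trapezoidal rule and confirms that $f$ is $0$ on the left, $1$ on the right, and increasing in between.

```python
import math

def g(x):
    # the bump function: exp(1/(x^2 - 1)) on (-1, 1), 0 elsewhere
    return math.exp(1.0 / (x * x - 1.0)) if abs(x) < 1.0 else 0.0

def integrate(func, a, b, n=2000):
    # composite trapezoidal rule; plenty accurate for this smooth integrand
    h = (b - a) / n
    s = 0.5 * (func(a) + func(b)) + sum(func(a + i * h) for i in range(1, n))
    return s * h

A = integrate(g, -1.0, 1.0)

def f(x):
    # transition function: 0 for x <= -1, 1 for x >= 1, smooth in between
    if x <= -1.0:
        return 0.0
    if x >= 1.0:
        return 1.0
    return integrate(g, -1.0, x) / A
```

(Here the transition happens on $[-1,1]$; the change of variables $x \mapsto 2x-1$ moves it onto $[0,1]$ as in the question.)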
57,223
<p><strong>No error in 10.0.2</strong></p> <hr> <p>When using the <code>Correlation[]</code> function I sometimes get the strange warning:</p> <pre><code>CorrelationTest::nortst: "At least one of the p-values in {0.0527317}, resulting from a test for normality, is below 0.05`. The tests in \!\({\"PearsonCorrelation\"}\) require that the data is normally distributed." </code></pre> <p>But note that there is only one p-value, and it is actually greater than $0.05$:</p> <pre><code>0.0527317 &lt; 0.05 False </code></pre> <p>The code that causes this:</p> <pre><code>x = RandomReal[{-5, 5}, 100]; y = 2 x + 1 + RandomReal[{-0.1, 0.1}, 100]; X = Transpose[{x, y}]; ListPlot[X] Correlation[X] // MatrixForm CorrelationTest[X, 99995/100000, "PearsonCorrelation"] </code></pre> <p>Why does this happen? (To reproduce the issue, a few repetitions are usually required.)</p>
Andy Ross
43
<p>It appears to be a bug in the reporting mechanism. The individual components <code>x</code> and <code>y</code> are being tested for normality but the reported value is that of a joint test for multivariate normality. The conclusion is correct, the message is wrong.</p> <pre><code>SeedRandom[2154]; x = RandomReal[{-5, 5}, 100]; y = 2 x + 1 + RandomReal[{-0.1, 0.1}, 100]; X = Transpose[{x, y}]; CorrelationTest[X, 99995/100000, "PearsonCorrelation"] (* CorrelationTest::nortst: At least one of the p-values in {0.0492268}, resulting from a test for normality, is below 0.05`. The tests in {PearsonCorrelation} require that the data is normally distributed. &gt;&gt; 0.540402 *) </code></pre> <p>Marginal tests:</p> <pre><code>DistributionFitTest[x] (* 0.00121523 *) DistributionFitTest[y] (* 0.00123259 *) </code></pre> <p>Joint test for multivariate normality. Notice the p-value matches the one in the message:</p> <pre><code>DistributionFitTest[X] (* 0.0492268 *) </code></pre>
2,948,557
<p>So I've been trying to prove that <span class="math-container">$$\sum^{\infty}_{n=1}\frac{(-1)^{n}\sin(n)}{n}=-\frac{1}{2}$$</span></p> <p>I've tried putting various bounds on it to see if I can "squeeze" out the result. Say something like (one of many tried examples): <span class="math-container">$$ -\frac{1}{n}-\frac{1}{2}\leq \sum^{n}_{k=1}\frac{(-1)^{k}\sin(k)}{k}\leq \frac{1}{n}-\frac{1}{2}$$</span></p> <p>I've tried to see if I could find some periodic continuous function in order to use Parseval's theorem, but I can't come up with any that work.</p> <p>May I please get a hint or some piece of the puzzle for this problem?</p>
Siong Thye Goh
306,553
<p>Guide:</p> <p>Verify by computing the Fourier series of <span class="math-container">$f(x) = \frac{x}{\pi}, -\pi &lt; x&lt;\pi$</span></p> <p><span class="math-container">$$\frac{x}{\pi} = \frac{2}{\pi}\sum_{n=1}^\infty \frac{(-1)^{n+1}}{n}\sin(nx)$$</span></p> <p>and choose an appropriate value of <span class="math-container">$x$</span>.</p>
2,948,557
<p>So I've been trying to prove that <span class="math-container">$$\sum^{\infty}_{n=1}\frac{(-1)^{n}\sin(n)}{n}=-\frac{1}{2}$$</span></p> <p>I've tried putting various bounds on it to see if I can "squeeze" out the result. Say something like (one of many tried examples): <span class="math-container">$$ -\frac{1}{n}-\frac{1}{2}\leq \sum^{n}_{k=1}\frac{(-1)^{k}\sin(k)}{k}\leq \frac{1}{n}-\frac{1}{2}$$</span></p> <p>I've tried to see if I could find some periodic continuous function in order to use Parseval's theorem, but I can't come up with any that work.</p> <p>May I please get a hint or some piece of the puzzle for this problem?</p>
Parcly Taxel
357,390
<p>A direct approach: <span class="math-container">$$\sum_{n=1}^\infty\frac{(-1)^n\sin n}n=\sum_{n=1}^\infty\frac{(-1)^n\Im (e^{in})}n=-\Im\sum_{n=1}^\infty\frac{(-1)^{n+1}(e^i)^n}n=-\Im\ln(1+e^i)$$</span> where the Maclaurin series of <span class="math-container">$\ln(1+z)$</span> was used. As can be seen by drawing a diagram, <span class="math-container">$\arg(1+e^i)=\frac12$</span>. This is the imaginary part of the logarithm, so the original sum is <span class="math-container">$-\frac12$</span>.</p>
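The closed form is easy to sanity-check numerically. A small Python sketch (my illustration): it evaluates $-\Im\ln(1+e^i)$ directly and compares it with a long partial sum of the series.

```python
import cmath
import math

# closed form from the answer: the sum equals -Im ln(1 + e^i)
closed = -cmath.log(1 + cmath.exp(1j)).imag

# a long partial sum for comparison; convergence is slow (roughly O(1/N))
# since the series is only conditionally convergent
N = 200_000
partial = sum((-1) ** n * math.sin(n) / n for n in range(1, N + 1))
```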
3,380,479
<p>I'm doing a seminar on Galois theory, but I have a problem with the definition of a purely inseparable element and that of a purely inseparable extension. I read the definitions, but I would like to see an example of each of them. I would appreciate any help. </p>
lhf
589
<p>Instead of the extended Euclidean algorithm, you can also use Fermat's theorem.</p> <p>For instance, <span class="math-container">$1 \equiv 7^{12} = 7 \cdot 7^{11}$</span>. Therefore, the inverse of <span class="math-container">$7$</span> mod <span class="math-container">$13$</span> is <span class="math-container">$7^{11}$</span>. It's not as bad as it looks: <span class="math-container">$$ 7^{11} = 7^{8} \cdot 7^{2} \cdot 7 $$</span> Now <span class="math-container">$$ 7^2 = 49 \equiv 10 ,\quad 7^4 \equiv 100 \equiv 9 ,\quad 7^8 \equiv 81 \equiv 3 $$</span> Therefore, <span class="math-container">$$ 7^{11} = 7^{8} \cdot 7^{2} \cdot 7 \equiv 3 \cdot 10 \cdot 7 \equiv 2 $$</span> This method of finding powers is called <a href="https://en.wikipedia.org/wiki/Exponentiation_by_squaring" rel="nofollow noreferrer">exponentiation by squaring</a>.</p>
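A small Python sketch of the two ingredients (my illustration; the function names are mine): Fermat's little theorem gives the inverse as $a^{p-2} \bmod p$, and exponentiation by squaring computes that power quickly.

```python
def pow_mod(base, exp, m):
    # exponentiation by squaring: process the exponent bit by bit
    result = 1
    base %= m
    while exp > 0:
        if exp & 1:               # low bit set: fold the current power in
            result = result * base % m
        base = base * base % m    # square for the next bit
        exp >>= 1
    return result

def inv_mod_prime(a, p):
    # Fermat: a^(p-1) ≡ 1 (mod p), so a^(p-2) is the inverse of a mod p
    return pow_mod(a, p - 2, p)
```

For the example in the answer, `inv_mod_prime(7, 13)` gives `2`, and indeed $7 \cdot 2 = 14 \equiv 1 \pmod{13}$.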
2,090,452
<p>I have a signature given with the real numbers as its universe and addition and multiplication as functions. I need to write the following expression in First Order Logic.</p> <ul> <li>$x$ is a rational number</li> </ul> <p>My idea: $\varphi(x) = \exists a \, \exists b \, x = \frac{a}{b}$.</p> <p>The problem is I don't have division.</p> <p>Extra question: how can I write </p> <ul> <li>$x \geq 0$?</li> </ul>
Noah Schweber
28,111
<p>Let me strengthen what others have already said:</p> <p>The structure $\mathcal{R}=(\mathbb{R}; +, \times)$ is <em>decidable</em> (this is <a href="https://en.wikipedia.org/wiki/Real_closed_field#Model_theory:_decidability_and_quantifier_elimination" rel="nofollow noreferrer">due to Tarski</a>). This immediately rules out the possibility of defining $\mathbb{Z}$ in $\mathcal{R}$, since the theory of the integers is undecidable (by Goedel). (Incidentally, since $\mathbb{Z}$ is (nontrivially) definable in $\mathbb{Q}$, this also rules out the possibility of defining $\mathbb{Q}$ in $\mathcal{R}$.)</p> <p>But in fact more is true: Tarski showed that it is <em>o-minimal</em>, that is, every definable set is a finite union of intervals. So nothing remotely like $\mathbb{Z}$ or $\mathbb{Q}$ can be a definable subset of $\mathcal{R}$.</p>
969,871
<p>First, by definition of odd functions I have $-f(x) = f (-x)$. It would follow that $|-f(x)|= |f(-x)|$. Then given $\epsilon &gt; 0$ there exists $\delta &gt; 0$ s.t. when $0 &lt; |x| &lt; \delta$, it follows that $|f(x)-M| &lt; \epsilon$. With $M = 0$, then $|f(x)| &lt; \epsilon$. (approaching $0$ from the left). If I look at $f (-x)$, assuming identical $\epsilon$ and $\delta$ because of symmetry about the origin, I have the same conclusion that $|f(x)| &lt; \epsilon$. But I feel as though this is too general and I can't wrap my head around stating it with math fluency that leads to a solid conclusion. Or that I'm even in the correct rabbit hole. Any help would be great.</p>
Some Math Student
181,130
<p>I have the slight impression you're making your job a bit harder than it needs to be.</p> <p>First, because our function is continuous, we know that our limit does exist.</p> <p>Second, I suggest you assume your limit is anything other than zero. Without loss of generality, say it is some $\varepsilon &gt;0$. </p> <p>Now, what happens if we just move a tiny little bit to either side of zero? Can you say something about $f(-\delta),f(\delta)$? Does that somehow contradict the fact that $f$ is odd? </p> <p>So, what does happen? Let's assume that $f(0)=\varepsilon&gt;0$. Then, since $f$ is continuous, we know that there is a $\delta$ such that $|f(\pm\delta)-f(0)|&lt;\frac{\varepsilon}{2}$, which tells us that $f(\pm\delta)&gt;\frac{\varepsilon}{2}&gt;0$.</p> <p>Can you take it from there? Keep in mind, we have not yet used that $f$ is odd.</p>
1,072,750
<p>I'm looking at this solution to this problem:</p> <p><img src="https://i.stack.imgur.com/qQTTU.png" alt="enter image description here"></p> <p>I'm getting thrown off by the special case where $n = 2$. If $n = 2$, why must it be that $x = 1$? All that we then know is that $x^2 = 1$ or that $x = x^{-1}$. However, I don't see how this implies that $x = 1$. Could someone enlighten me?</p> <hr> <p><strong>Resolved</strong>: If $n = 2$, in $R/P$ we have $x^2 = x$ or $x(x-1) = 0$. Since $R/P$ is an integral domain, there are no zero divisors and so $x - 1 = 0$ or $x = 1$.</p>
rschwieb
29,335
<p>Probably what is throwing you off is that $x\in R/P$ and <em>not</em> $x\in R$.</p> <p>Of course, $x^2=x$ in $R$ without $x$ being $1$. But in the quotient, there are no zero divisors (because you're quotienting by a prime ideal.) So you must examine $x^2-x=0$.</p>
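A quick brute-force illustration in Python (mine, not from the answer): modulo a prime, $\mathbb{Z}/p$ is an integral domain and $x^2=x$ forces $x\in\{0,1\}$, while modulo a composite number extra solutions appear.

```python
def idempotents(n):
    # all solutions of x^2 = x in Z/nZ
    return [x for x in range(n) if (x * x - x) % n == 0]
```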
1,201,717
<p>Inn$(G)=\{\varphi_g \in \text{Aut}(G) \mid g \in G\}$</p> <p>If $\varphi_g, \varphi_h \in \text{Inn}(G)$, then $$\varphi_g \varphi_h (x) =\varphi_g(hxh^{-1})=ghxh^{-1}g^{-1}=ghx(gh)^{-1}=\varphi_{gh}(x)$$ so $\varphi_g\varphi_h=\varphi_{gh} \in \text{Inn}(G)$. Also, since $\varphi_g\varphi_{g^{-1}}(x)=x$ and $\varphi_{g^{-1}}\varphi_g(x)=x$, $$\varphi_g^{-1}=\varphi_{g^{-1}} \in \text{Inn}(G)$$ So, Inn$(G) \le \text{Aut}(G)$.</p> <p>Define $\psi: G \to \text{Inn}(G)$ where $\psi(x)=\varphi_x$</p> <p><strong><em>How do I show $\psi$ is well defined?</em></strong></p> <p>I need to show that if $a=b$, then $\psi(a)=\psi(b)$</p>
Hagen von Eitzen
39,174
<p>Well-definedness is not needed to be shown as there is no choice involved: Given $x\in G$ you are to exhibit some $\psi(x)\in\operatorname{Inn}(G)$. Since you are given $x\in G$, no-one can prevent you from considering the inner automorphism $\phi_x$.</p> <hr> <p>Just to make clear where we do need to show well-definedness: Suppose we wanted to define a map in the reverse direction, i.e., $\Psi\colon \operatorname{Inn}(G)\to G$, $\phi_x\mapsto x$. In <em>this</em> case, we'd have to show that $x$ can be determined from the given inner automorphism. That is, if $f\in\operatorname{Inn}(G)$, then there exists only one possible choice $x\in G$ such that $f=\phi_x$. This is in general not the case, hence the reverse map is not well-defined.</p>
1,316,661
<p>Let $\{ h_n :X \to Y\}_{n \in \mathbb{Z^+}} $ be a sequence of continuous functions from a topological space $X$ to another topological space $Y$, and for each $n$ let $U_n$ be an open subset of $Y$. </p> <p>Does $$ \bigcup_{n=1}^{\infty} {h_n}^{-1} (\overline{U_n}) \subset \overline{ \bigcup_{n=1}^{\infty} {h_n}^{-1} (U_n) } $$ hold?</p>
Rory Grice
242,544
<p>If $a$ is a real number, then you can make $x^a$ as small or as large as you like by making $a$ more and more negative or more and more positive, regardless of the value of $x$. Consider:</p> <p>1) If $x$ is between zero and one, then you can make $a$ largely positive to get a number that is very close to zero and positive, thus between zero and one.</p> <p>2) If $x$ is between zero and one, then you can make $a$ largely negative to get a number that is largely positive.</p> <p>3) If $x$ is a number greater than one, then you can make $a$ largely positive to get a number that is also largely positive.</p> <p>4) If $x$ is a number greater than one, then you can make $a$ largely negative to get a positive number that is very close to zero.</p> <p>This means that, not knowing anything about $a$ other than that it is real, knowing that $x$ is positive and knowing the value of $x^a$, there is no way of calculating the value of $x$.</p> <p>You will note that I have not mentioned what happens when $x$ is a negative number. This is not important for pre-calculus, but if mathematics is something that you find fun then it might be something worth thinking about. See if you can work it out. (This requires an understanding of complex numbers.)</p>
3,174,153
<p>Let <span class="math-container">$X_1,X_2,...X_n$</span> be a random sample from <span class="math-container">$f(x,\theta)=\frac{1}{2 \theta}e^{\frac{-|x|}{\theta}}$</span>. We know by the factorisation theorem that <span class="math-container">$\frac{\sum |X_i|}{n}$</span> is sufficient for <span class="math-container">$\theta$</span>. But can we show that <span class="math-container">$\frac{\sum X_i}{n}$</span> is not sufficient for <span class="math-container">$\theta$</span>? It is tough to show it by definition, as the distribution of <span class="math-container">$\sum X_i$</span> cannot be found explicitly. Is there any other way?</p>
Masoud
653,056
<p>Let <span class="math-container">$S$</span> be minimal sufficient. <span class="math-container">$T$</span> is not sufficient if there exist <span class="math-container">$x,y$</span> in the support with </p> <p><span class="math-container">$T(x)=T(y)$</span> but <span class="math-container">$S(x)\neq S(y)$</span> </p> <p>Let <span class="math-container">$n=2$</span>, <span class="math-container">$T(x)=x_1+x_2$</span> and <span class="math-container">$S(x)=|x_1|+|x_2|$</span> ($S$ is minimal sufficient) </p> <p><span class="math-container">$x=(2,-1)$</span> and <span class="math-container">$y=(3,-2)$</span></p> <p><span class="math-container">$T(x)=1=T(y)$</span> but <span class="math-container">$S(x)=3\neq 5=S(y)$</span>,</p> <p>so <span class="math-container">$T$</span> is not sufficient.</p> <p>In general, for arbitrary <span class="math-container">$n$</span> choose <span class="math-container">$x=(2,-1,0,\cdots ,0)$</span> and <span class="math-container">$y=(3,-2,0,\cdots ,0)$</span>.</p> <p>This method is based on the fact that a minimal sufficient statistic is a function of any sufficient statistic, and above we have shown that <span class="math-container">$S$</span> is not a function of <span class="math-container">$T$</span>. Note that <span class="math-container">$V$</span> is a function of <span class="math-container">$U$</span> if</p> <p><span class="math-container">$\forall x,y \quad U(x)=U(y) \Longrightarrow V(x)=V(y)$</span>, so </p> <p><span class="math-container">$V$</span> is not a function of <span class="math-container">$U$</span> if</p> <p><span class="math-container">$\exists x,y \quad U(x)=U(y) \quad \text{but} \quad V(x)\neq V(y)$</span></p>
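The witness pair is easy to check mechanically. A tiny Python sketch (my illustration):

```python
def T(x):
    # candidate statistic: the plain sum
    return sum(x)

def S(x):
    # minimal sufficient statistic for this Laplace scale family: sum of |X_i|
    return sum(abs(t) for t in x)

x = (2, -1)
y = (3, -2)
```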
653,451
<p>For $X$ a Banach space, let me define the space $C^0([0,T];X)$ to consist of elements $u:[0,T] \to X$ such that $$\lVert u \rVert_{C^0} := \sup_{t \in [0,T]}\lVert u(t) \rVert_X &lt; \infty$$ (a supremum, since without continuity the maximum need not be attained). So the difference is that I don't care about continuity of $u$ in $t$. This defines a norm. </p> <p>For completeness, let $u_n \in C^0$ be a Cauchy sequence. Then $\{u_n(t)\}_n$ is convergent for each $t \in [0,T]$. Let's say $u_n(t) \to u(t) \in X$ for each $t$. Then $u$ has finite $C^0$ norm, right? </p> <p>So what is the use of this space? Is there something good about it?</p>
Giorgio Mossa
11,888
<p>It's not so clear what they mean, but I guess it is this: if you consider the graph above (in which edges with different labels are different), then you cannot put on that graph the structure of a category.</p> <p>To prove that, you argue by reductio ad absurdum: if there were a category structure on that graph, there would be a law of composition such that $g \circ h = \text{id}_B$ and also $f \circ g = \text{id}_A$ (that follows from what is said in the link you posted above), and so it would also be the case that $$f = f \circ \text{id}_B = f \circ (g \circ h) = (f \circ g) \circ h = \text{id}_A \circ h = h \ .$$</p> <p>This would imply that $f=h$, but by hypothesis $f \ne h$, hence you have arrived at a contradiction, so you cannot find any composition law that gives the graph the structure of a category.</p> <p>Hope this helps.</p>
2,029,279
<blockquote> <p>Let <span class="math-container">$T:\Bbb R^2\to \Bbb R^2$</span> be a linear transformation such that <span class="math-container">$T (1,1)=(9,2)$</span> and <span class="math-container">$T(2,-3)=(4,-1)$</span>.</p> <p>A) Determine if the vectors <span class="math-container">$(1,1)$</span> and <span class="math-container">$(2,-3)$</span> form a basis.<br> B) Calculate <span class="math-container">$T(x,y)$</span>.</p> </blockquote> <p>I need help with these, please; I'm stuck and don't even know how to start them...</p>
user5216
250,930
<p>A) Hint: check linear independence. B) Write any vector $(x,y)$ as a linear combination of the basis you have and use the linearity of the operator.</p>
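Carrying out the hint with exact arithmetic (a sketch of my own; the closed form below is my computation, not part of the answer): write $(x,y)=a(1,1)+b(2,-3)$, solve for $a,b$, and apply linearity.

```python
from fractions import Fraction as F

def T(x, y):
    # (x, y) = a*(1,1) + b*(2,-3)  =>  x = a + 2b,  y = a - 3b
    b = F(x - y, 5)        # subtracting the equations gives x - y = 5b
    a = F(x) - 2 * b       # back-substitute
    # linearity: T(x,y) = a*T(1,1) + b*T(2,-3) = a*(9,2) + b*(4,-1)
    return (9 * a + 4 * b, 2 * a - b)
```

This simplifies to $T(x,y) = \left(\frac{31x+14y}{5},\, x+y\right)$, which indeed sends $(1,1)\mapsto(9,2)$ and $(2,-3)\mapsto(4,-1)$.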
4,117,954
<p>Let <span class="math-container">$l^\infty$</span> be the space of all real bounded sequences equipped with the supremum norm. Let <span class="math-container">$S$</span> be the shift operator defined on <span class="math-container">$l^\infty$</span> by <span class="math-container">$(Sx)_n=x_{n+1}$</span>, <span class="math-container">$n \in \mathbb{N}$</span> for all <span class="math-container">$x \in l^{\infty}$</span>.</p> <p>I proved using the Hahn–Banach theorem that there exists <span class="math-container">$L \in (l^\infty)^*$</span> such that</p> <p>i) <span class="math-container">$\liminf_{n \rightarrow \infty} x_n \leq L(x) \leq \limsup_{n \rightarrow \infty} x_n$</span></p> <p>ii) <span class="math-container">$L(Sx)=L(x)$</span></p> <p>I also proved that <span class="math-container">$l^\infty \cong (l^1)^*$</span>; how do I show <span class="math-container">$ L \not\in \widehat{(l^1)} $</span>?</p> <p>Can anyone give some hint for the last part?</p> <p><a href="https://i.stack.imgur.com/DkrHn.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DkrHn.jpg" alt="enter image description here" /></a></p>
Theo Bendit
248,286
<p>As I said in the comments, you need to show that no <span class="math-container">$(y_n)_n \in \ell^1$</span> exists so that <span class="math-container">$$L(x_n) = \sum_{n=1}^\infty x_n y_n$$</span> holds for all <span class="math-container">$(x_n)_n \in \ell^\infty$</span>. Suppose this were the case, and consider <span class="math-container">$e^m = (e^m_n)_n \in \ell^\infty$</span>, defined by <span class="math-container">$e^m_n = \delta_{mn}$</span> (i.e. the standard basis). Note that <span class="math-container">$e^m_n \to 0$</span> as <span class="math-container">$n \to \infty$</span>, so <span class="math-container">$L(e^m) = 0$</span> (its liminf and limsup are both <span class="math-container">$0$</span>). But then, <span class="math-container">$$0 = L(e^m) = \sum_{n=1}^\infty e^m_n y_n = \sum_{n=1}^\infty y_n\delta_{mn} = y_m,$$</span> hence <span class="math-container">$(y_n)_n$</span> must be the <span class="math-container">$0$</span> sequence. But, <span class="math-container">$L$</span> doesn't send every sequence to <span class="math-container">$0$</span> (e.g. the constantly <span class="math-container">$1$</span> sequence), so no such <span class="math-container">$(y_n)_n \in \ell^1$</span> exists.</p>
2,260,172
<p>I know that if two topological spaces $X$ and $Y$ are homeomorphic then so are their one point compactifications $X^*$ and $Y^*$. If $X$ and $Y$ (say both are smooth manifolds) are diffeomorphic what do we know about $X^*$ and $Y^*$? Are they diffeomorphic too or only homeomorphic? If the latter is true, are there further condition under which $X^*$ and $Y^*$ will be diffeomorphic?</p>
Community
-1
<p>You can't always put a differentiable structure on the one-point compactification, so your question is not well defined. For example, consider the union of several open segments $(0,1)$: the one-point compactification will be a wedge of circles, which is not a smooth manifold. I also think that the one-point compactification of $\mathbb R^2 \backslash D$, where $D$ is a union of disks, can't be a smooth manifold. </p>
280,292
<p>Find $a,c$ such that:</p> <p>$$f(x)= \begin{cases} a\frac{\exp(\tan x)}{(1+\exp(\tan x))} &amp;\text{for }|x|&lt;\pi/2 \\[2ex] \exp(cx)-2 &amp;\text{for } |x|\ge\pi/2 \end{cases}$$</p> <p>is continuous. </p> <p>How do I evaluate the left- and right-hand limits to see if they are equal?</p>
Mhenni Benghorbal
35,472
<p>To prove the function is continuous, you need to fulfill the condition </p> <p>$$ \lim_{x\to x_0}f(x)=f(x_0). $$</p> <p>So, as a first step, you need to prove that the limit exists at the point $x_0$; that is why you need to prove $$\lim_{x\to \pi/2^{-}}f(x)= \lim_{x\to \pi/2^{+}}f(x).$$ </p>
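If continuity is required at both $x=\pm\pi/2$, matching the one-sided limits gives the following (my computation, not part of the answer): at $x=-\pi/2$ the inner branch tends to $0$ (since $\tan x \to -\infty$), forcing $e^{-c\pi/2}-2=0$, i.e. $c=-\frac{2\ln 2}{\pi}$; at $x=\pi/2$ the inner branch tends to $a$, forcing $a=e^{c\pi/2}-2=-\frac{3}{2}$. A numerical sanity check in Python:

```python
import math

c = -2 * math.log(2) / math.pi       # from exp(-c*pi/2) - 2 = 0
a = math.exp(c * math.pi / 2) - 2    # = 1/2 - 2 = -3/2

def sigmoid(t):
    # e^t / (1 + e^t), written to avoid overflow for large |t|
    if t >= 0:
        return 1.0 / (1.0 + math.exp(-t))
    e = math.exp(t)
    return e / (1.0 + e)

def f(x):
    if abs(x) < math.pi / 2:
        return a * sigmoid(math.tan(x))
    return math.exp(c * x) - 2
```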
3,305,861
<p>This is an answer from Slader for <em>Understanding Analysis</em> by Abbott. The book suggests to use <span class="math-container">$a = a-b+b$</span> but the steps in this answer don't make any sense to me. For one, why do they write "use <span class="math-container">$a=a-b+b$</span>" when it looks like they used <span class="math-container">$a = a-b$</span>, which doesn't seem valid because then you don't have a true statement for all <span class="math-container">$a,b$</span> but only when <span class="math-container">$b = 0$</span>?</p> <p><a href="https://i.stack.imgur.com/0KQQn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0KQQn.png" alt=""></a></p>
Quanto
686,284
<p>WolframAlpha, though handy, is not yet advanced enough to perform certain integrals satisfactorily. For less familiar integrands, it tends to fall back to complex variables, leading often to uninteresting, perhaps only symbolically useful, results.</p> <p>The integral in question can be integrated in real variables explicitly</p> <p><span class="math-container">\begin{align} \int \tan^{-1}\frac{1}{1+x^2}\ dx &amp; = x\tan^{-1}\frac{1}{1+x^2}\\ &amp;\&gt;\&gt;\&gt;+\sqrt{2(\sqrt{2}-1)}\tan^{-1}\frac{x^2-\sqrt{2}}{x\sqrt{2(\sqrt{2}+1)}} \\ &amp; \&gt;\&gt;\&gt;- \sqrt{2(\sqrt{2}+1)}\tanh^{-1}\frac{x^2+\sqrt{2}}{x\sqrt{2(\sqrt{2}-1)}}+C \end{align}</span></p> <p>which eludes WA due to limitations in its current algorithm.</p>
2,115,451
<p>$$\lim_{n\to \infty}\frac{\frac{1}{2\ln2}+\frac{1}{3\ln3}+\ldots+\frac{1}{n\,\ln\,n}}{\ln(\ln\,n)}$$</p> <p>The result is $1$ (according to the book, though it does not show the steps, which I'm interested in).</p> <p>I've applied the theorem and it led me to an equally unhelpful limit.</p>
JoeyJoey
410,414
<p>We apply the Stolz–Cesàro theorem: $$\lim_{n\to \infty}\frac{\left(\frac{1}{2\ln 2}+\frac{1}{3\ln 3}+\dots+\frac{1}{n\ln n}+\frac{1}{(n+1)\ln(n+1)}\right)-\left(\frac{1}{2\ln 2}+\frac{1}{3\ln 3}+\dots+\frac{1}{n\ln n}\right)}{\ln(\ln(n+1))-\ln(\ln n)}=\lim_{n\to \infty}\frac{\frac{1}{(n+1)\ln(n+1)}}{\ln\left(\frac{\ln(n+1)}{\ln n}\right)}$$</p> <p>We know that: $$\ln\left(\frac{\ln(n+1)}{\ln n}\right)=\ln\left(\frac{\ln(n(1+\frac{1}{n}))}{\ln n}\right)=\ln\left(\frac{\ln n+\ln(1+\frac{1}{n})}{\ln n}\right)=\ln\left(1+\frac{\ln(1+\frac{1}{n})}{\ln n}\right)$$</p> <p>Now, $\ln(1+x)\sim x$ as $x\to 0$, so $\ln\left(1+\frac{\ln(1+\frac{1}{n})}{\ln n}\right) \sim \frac{\ln(1+\frac{1}{n})}{\ln n} \sim \frac{\frac{1}{n}}{\ln n}=\frac{1}{n\ln n}$.</p> <p>(Here "$\sim$" means "asymptotically equivalent".)</p> <p>So: $$\lim_{n\to \infty}\frac{\frac{1}{(n+1)\ln(n+1)}}{\ln\left(\frac{\ln(n+1)}{\ln n}\right)}=\lim_{n\to \infty}\frac{\frac{1}{(n+1)\ln(n+1)}}{\frac{1}{n\ln n}}=\lim_{n\to \infty}\frac{n\ln n}{(n+1)\ln(n+1)}$$ which, in turn, is equal to $$\lim_{n\to \infty} \frac{n}{n+1}\cdot\lim_{n\to \infty}\frac{\ln n}{\ln(n+1)}=1\cdot1=1$$</p> <p>So $1$ is our final answer.</p>
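A numerical illustration in Python (my addition): convergence here is extremely slow — the error decays only like $1/\ln\ln n$ — so the check below merely shows the ratio drifting down toward $1$ as $n$ grows, not getting close to it.

```python
import math

def ratio(N):
    # partial sum of 1/(n ln n), n = 2..N, divided by ln(ln N)
    s = sum(1.0 / (n * math.log(n)) for n in range(2, N + 1))
    return s / math.log(math.log(N))

r_small, r_large = ratio(10**3, ), ratio(10**6)
```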
1,139,885
<p>If $a$ and $b$ are in a group $G$ and $ab=ba$, show that $xax^{-1}$ commutes with $xbx^{-1}$ for any $x \in G$.</p> <p>So I wrote: </p> <p>WWTS: $\bf{xax^{-1} \times xbx^{-1}=xbx^{-1}\times xax^{-1} }$</p> <p>Now, the problem I have is I don't know where to start. Let's say I start with what is given:</p> <p>$ab=ba$; then am I allowed to multiply each side by $x$ and $x^{-1}$ and use the associative law, since this is a group? So for example:</p> <p>$ab=ba$ </p> <p>$xabx^{-1}=xbax^{-1}$ and then by associativity I can write:</p> <p>$xax^{-1} b=xbx^{-1} a$ and multiply by $x$ and $x^{-1}$ on the right</p> <p>$xax^{-1} \times bxx^{-1}=xbx^{-1} \times axx^{-1}$</p> <p>and use associativity again</p> <p>$xax^{-1} \times xbx^{-1}=xbx^{-1}\times xax^{-1}$</p> <p>Any ideas?</p>
Milo Brandt
174,927
<p>A very useful idea for such problems is that the map $$f(a)=xax^{-1}$$ is called "conjugation" and is a homomorphism from the group to itself. That is to say, we can prove, for all $a,b$, that: $$f(a)f(b)=f(ab).$$ This is very clear if you expand both expressions. Now, if we start with $$ab=ba$$ we can apply $f$ to both sides to get $$f(ab)=f(ba)$$ and then use that $f$ is a homomorphism to get $$f(a)f(b)=f(b)f(a)$$ which, if you expand, is the desired statement.</p>
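The homomorphism property can be checked exhaustively on a small group. A Python sketch using $S_3$ as permutation tuples (my illustration, not part of the answer):

```python
from itertools import permutations

def compose(p, q):
    # (p ∘ q)(i) = p[q[i]], permutations as tuples acting on {0, 1, 2}
    return tuple(p[i] for i in q)

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def conj(x, a):
    # f(a) = x a x^{-1}
    return compose(compose(x, a), inverse(x))

G = list(permutations(range(3)))
```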
134,424
<p>I would like to create a function where I can define which case I want to use to create a path.</p> <pre><code>p1 = {40, 48}; p2 = {50, 116}; p3 = {63, 160}; listPurple = Symbol["p" &lt;&gt; ToString[#]] &amp; /@ Range[3]; disksPurple = {Purple, Disk[#, 2] &amp; /@ listPurple}; Graphics[{disksPurple}, ImageSize -&gt; 200] </code></pre> <p><a href="https://i.stack.imgur.com/X8otFm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/X8otFm.png" alt="Imagem"></a></p> <p>I do not want two functions, as I created:</p> <pre><code>lVertical[p1_, p2_] := {p1, {p1[[1]], p2[[2]]}}; lHorizontal[p1_, p2_] := {p1, {p2[[1]], p1[[2]]}}; </code></pre> <p>With them I define whether it will be a horizontal or vertical line:</p> <pre><code>l1 = lVertical[p1, p2]; l2 = lHorizontal[p2, p1]; l3 = lHorizontal[p2, p3]; l4 = lVertical[p3, p2]; lines = Sort@Symbol["l" &lt;&gt; ToString[#]] &amp; /@ Range[4]; l = {Red, Dashed, Line[#] &amp; /@ lines}; Graphics[{l, disksPurple}, ImageSize -&gt; 200] </code></pre> <p><a href="https://i.stack.imgur.com/cl4HEm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cl4HEm.png" alt="Imagem"></a></p> <p>I would like it in a format similar to this:</p> <pre><code>f[p1_, p2_, lVertical or lHorizontal] </code></pre>
corey979
22,013
<p><strong>Function:</strong></p> <pre><code>path[points_List, directions_List] := Block[{disksPurple, pseudoCross, v, h, ints, dirs}, If[Length@directions != Length@points - 1, Print["incorrect input"]; Abort[]]; disksPurple = {Purple, Disk[#, 2] &amp; /@ points}; pseudoCross[p1_, p2_] := {{p1[[1]], p2[[2]]}, {p2[[1]], p1[[2]]}}; dirs = directions /. {v -&gt; {1, 0}, h -&gt; {0, 1}}; ints = Flatten[#, 1] &amp; @ Pick[pseudoCross @@@ Partition[points, 2, 1], dirs, 1]; Graphics[{disksPurple, {Red, Dashed, Line @ Riffle[points, ints]}}] ] </code></pre> <p><strong>Usage:</strong> list of points is the first argument; the second is a list of directions (only <em>first</em> directions) between the consecutive points. E.g., to go from <code>p1</code> to <code>p2</code> first vertically, and then horizontally, type <code>v</code>; to go first horizontally, then vertically - type <code>h</code>. The reason is that if you go vertically from <code>p1</code>, you have to go horizontally next to reach <code>p2</code>, and <em>vice versa</em>.</p> <p><strong>Examples:</strong></p> <pre><code>Clear[p1, p2, p3, p4, pts] p1 = {40, 48}; p2 = {50, 116}; p3 = {63, 160}; p4 = {80, 100}; pts = {p1, p2, p3, p4}; path[pts, {v, v, v}] </code></pre> <p><a href="https://i.stack.imgur.com/l0Rxt.png" rel="noreferrer"><img src="https://i.stack.imgur.com/l0Rxt.png" alt="enter image description here"></a></p> <pre><code>path[pts, {h, h, h}] </code></pre> <p><a href="https://i.stack.imgur.com/q3MyK.png" rel="noreferrer"><img src="https://i.stack.imgur.com/q3MyK.png" alt="enter image description here"></a></p> <pre><code>path[pts, {v, h, h}] </code></pre> <p><a href="https://i.stack.imgur.com/EdvUh.png" rel="noreferrer"><img src="https://i.stack.imgur.com/EdvUh.png" alt="enter image description here"></a></p> <pre><code>path[pts, {v, h, v}] (* this is correct, just the choice of direction is poor *) </code></pre> <p><a href="https://i.stack.imgur.com/Ma797.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Ma797.png" alt="enter image description here"></a></p> <hr> <pre><code>n = 10; pts = RandomInteger[{0, 100}, {n, 2}]; dir = RandomChoice[{v, h}, Length@pts - 1]; path[pts, dir] </code></pre> <p><a href="https://i.stack.imgur.com/o7FpR.png" rel="noreferrer"><img src="https://i.stack.imgur.com/o7FpR.png" alt="enter image description here"></a></p> <p>Works also when three consecutive points have their $x$ or $y$ coordinates the same:</p> <pre><code>path[{{100, 0}, {200, 0}, {300, 0}}, {h, h}] </code></pre> <p><a href="https://i.stack.imgur.com/7BkdJ.png" rel="noreferrer"><img src="https://i.stack.imgur.com/7BkdJ.png" alt="enter image description here"></a></p> <p>The same image is obtained with <code>{v, v}</code>, <code>{v, h}</code> and <code>{h, v}</code>; similarly for the vertical alignment.</p> <hr> <p>Incorrect input:</p> <pre><code>path[{{100, 0}, {200, 0}, {300, 0}}, {h, h, v}] </code></pre> <blockquote> <p>incorrect input</p> <p>$Aborted</p> </blockquote> <hr> <p>And finally, there's this funny behaviour when you make a typo in the <code>directions</code>:</p> <pre><code>Clear[p1, p2, p3, p4, pts, path] p1 = {40, 48}; p2 = {50, 116}; p3 = {63, 160}; p4 = {80, 100}; pts = {p1, p2, p3, p4}; path[pts, {v, v, hv}] </code></pre> <p><a href="https://i.stack.imgur.com/XENc6.png" rel="noreferrer"><img src="https://i.stack.imgur.com/XENc6.png" alt="enter image description here"></a></p>
877,850
<p>I'm trying to calculate the expected area of a random triangle with a fixed perimeter of 1. </p> <p>My initial plan was to create an ellipse where one point on the ellipse is moved around and the triangle that is formed with the foci as the two other vertices (which would have a fixed perimeter) would have all the varying areas. But then I realized that I wouldn't account for ALL triangles using that method. For example, an equilateral triangle with side lengths one third would not be included.</p> <p>Can anyone suggest how to solve this problem? Thanks.</p>
Taladris
70,123
<p>A calculus proof:</p> <p>Let $f(x)=\arctan(x)+\operatorname{arccot}(x)$ for all $x\in {\mathbb R}$. The function $f$ is differentiable and $f'(x)=\frac{1}{1+x^2}+\frac{-1}{1+x^2}=0$, so $f$ is constant. The value of the constant is $f(0)=\arctan(0)+\operatorname{arccot}(0)=0+\frac{\pi}{2}$ (remember that $\tan(0)=0$ and $\cot(\frac{\pi}{2})=0$), so $\arctan(x)+\operatorname{arccot}(x)=\frac{\pi}{2}$ for all $x\in {\mathbb R}$.</p> <p>Note: I prefer the notation $\arctan$ over $\tan^{-1}$ since it helps to avoid mistakes like $\cot^{-1}(x)=\frac{1}{\tan^{-1}(x)}$.</p> <hr> <p>Edit: the precalculus tag was added while I was writing my answer.</p>
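A quick numerical sanity check (plain Python). For $x>0$ one has $\operatorname{arccot}(x)=\arctan(1/x)$, so the identity can be spot-checked without assuming the result:

```python
import math

# For x > 0, arccot(x) = arctan(1/x); conventions differ for x <= 0,
# so we only sample positive arguments here.
def arccot(x):
    return math.atan(1.0 / x)

samples = [0.1, 0.5, 1.0, 2.0, 100.0]
values = [math.atan(x) + arccot(x) for x in samples]  # each should be pi/2
```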
200,063
<p>I am looking to evaluate</p> <p>$$\int_0^1 x \sinh (x) \ \mathrm{dx}$$</p>
Aang
33,989
<p>$2\sinh(x)=e^x-e^{-x}$ and $\int e^x(f(x)+f'(x))dx=e^xf(x)$</p> <p>Thus, $$2\int_0^1 x \sinh (x)dx=\int_0^1 xe^xdx-\int_0^1 xe^{-x}dx$$ $$=\int_0^1 e^x(x+1)dx-\int_0^1 e^{-x}(-x+1)d(-x)-\int_0^1 e^{x}dx+\int_0^1 e^{-x}d(-x)$$ $$=xe^x|_0^1-(-xe^{-x})|_0^1-e^x|_0^1+e^{-x}|_0^1$$ $$=e+\frac{1}{e}-(e-1)+(\frac{1}{e}-1)=\frac{2}{e}$$</p> <p>Hence, $$2\int_0^1 x \sinh (x)dx=\frac{2}{e}\implies \int_0^1 x \sinh (x)dx=\frac{1}{e}$$</p>
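The value $1/e$ can also be double-checked directly: one integration by parts gives the antiderivative $x\cosh x-\sinh x$, so the integral equals $\cosh 1-\sinh 1=1/e$. A short numerical check (plain Python):

```python
import math

# Integration by parts gives F(x) = x*cosh(x) - sinh(x), since
# F'(x) = cosh(x) + x*sinh(x) - cosh(x) = x*sinh(x).
def F(x):
    return x * math.cosh(x) - math.sinh(x)

closed_form = F(1) - F(0)  # = cosh(1) - sinh(1) = 1/e

# cross-check with a midpoint Riemann sum
n = 100_000
h = 1.0 / n
riemann = sum(((k + 0.5) * h) * math.sinh((k + 0.5) * h) for k in range(n)) * h
```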
3,949,525
<p>This question is from Ponnusamy and Silverman's <em>Complex Analysis</em>, and I am making some mistake in this question that I am unable to find.</p> <blockquote> <p>Show that <span class="math-container">$\prod_{n=1}^{\infty} (1-z/n) e^{z/n +5z^2/n^2}$</span> is entire.</p> </blockquote> <p>Write <span class="math-container">$1+ f_n (z)= (1-z/n) e^{z/n +5z^2/n^2}$</span>; I am looking for an upper bound on <span class="math-container">$f_n (z)$</span>. We have <span class="math-container">$1+ f_n (z) = exp( Log(1-z/n) + z/n+ 5 z^2 /n^2)$</span>. For <span class="math-container">$|z|\leq R$</span>, choose N large enough such that |z/n|&lt;1 for all <span class="math-container">$n \geq N$</span>, and use the identity <span class="math-container">$ Log(1-z/n) = -\sum_{k=1}^{\infty} 1/k (z/n)^k$</span> and the fact that |R/n|&lt;1.</p> <p>So, <span class="math-container">$|Log (-\sum_{k=1}^{\infty} 1/k (z/n)^k +z/n +5 z^2/n^2) | \leq -5 -1/2 R^2/n^2 $</span>, where I used the formula for the infinite geometric series.</p> <p>Now <span class="math-container">$|f_n(z)| \leq exp|e^{-5 -1/2 R^2 /n^2 -1}-1| exp( -5 -1/2 R^2 /n^2)$</span> -1, and using <span class="math-container">$e^x -1 \leq x e^x for all x \geq 0 $</span> we get <span class="math-container">$|f_n(z)|\leq 5+ 1/2 R^2/ n^2 $</span>, and if I sum <span class="math-container">$|f_n (z)| $</span> from N to <span class="math-container">$\infty$</span> I get infinity, as the 5 is also there.</p> <p>So I think I am making some mistake while approximating, and I am unable to find it. Can you please point it out?</p> <p>If there is some other method of proving it entire, that is also welcome.</p> <p>Thanks!</p>
reuns
276,986
<p>Because <span class="math-container">$f(z)=\frac{\log(1-z)+z}{z^2}$</span> is analytic thus continuous for <span class="math-container">$|z|&lt; 1$</span> then <span class="math-container">$$C= \sup_{|z|\le 1/2}|f(z)|&lt;\infty$$</span> gives <span class="math-container">$|\log(1-z/n)+z/n|\le Cz^2/n^2$</span> for <span class="math-container">$|z|&lt;n/2$</span>.</p>
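Not a proof, but the convergence is easy to see numerically: the log of the $n$-th factor is $\tfrac{9}{2}z^2/n^2+O(1/n^3)$ (from expanding $\log(1-z/n)+z/n+5z^2/n^2$), so the partial products stabilise away from the zeros at the positive integers. A quick illustration in plain Python:

```python
import cmath

# Partial products of prod_{n>=1} (1 - z/n) * exp(z/n + 5 z^2 / n^2)
def partial_product(z, N):
    p = complex(1.0)
    for n in range(1, N + 1):
        p *= (1 - z / n) * cmath.exp(z / n + 5 * z * z / (n * n))
    return p

z = 0.5 + 0.5j  # arbitrary test point away from the zeros at z = 1, 2, 3, ...
p1 = partial_product(z, 2000)
p2 = partial_product(z, 4000)
# the tail of sum (9/2) z^2 / n^2 predicts |p2 - p1| / |p2| of order 1e-3
```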
3,213,253
<p>We're given dynamical system:</p> <p><span class="math-container">$$ \dot x = -x + y + x (x^2 + y^2)\\ \dot y = -y -2x + y (x^2 + y^2) $$</span></p> <p>Question is what's the largest constant <span class="math-container">$r_0$</span> s.t. circle <span class="math-container">$x^2+y^2 &lt; r_0^2$</span> lies in the origins basin of attraction.</p> <p>So far with relatively easy algebra I've got:</p> <p><span class="math-container">$$ \dot r = \frac{r}{2}(-2-\sin(2 \phi)+2r^2) \\ \dot \phi = -(1+\cos^2(\phi)) $$</span></p> <p>Which immediately shows <span class="math-container">$r_0 \geq \sqrt{1/2}$</span>. How to show that there is no better bound?</p>
Maxim
491,644
<p>Since the limit cycle looks like an ellipse centered at the origin, we'll try to find a Lyapunov function <span class="math-container">$V = a x^2 + b x y + c y^2$</span> s.t. <span class="math-container">$\dot V = (A x^2 + B x y + C y^2 + D) (V - V_0)$</span>. This only requires equating the coefficients of two polynomials in <span class="math-container">$x$</span> and <span class="math-container">$y$</span>, yielding, up to a constant factor, <span class="math-container">$$V = 8 x^2 - 2 x y + 5 y^2, \\ \dot V = 2 (x^2 + y^2) (V - 6).$$</span> <span class="math-container">$V = 6$</span> is the equation of the limit cycle; the radius of the maximal circle is the minor semiaxis of the ellipse.</p>
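The identity for $\dot V$ can be verified mechanically: with $V=8x^2-2xy+5y^2$, the derivative $\nabla V\cdot(\dot x,\dot y)$ collapses to $2(x^2+y^2)(V-6)$. A numerical spot check (plain Python):

```python
import random

def V(x, y):
    return 8 * x * x - 2 * x * y + 5 * y * y

def Vdot(x, y):
    # gradient of V dotted with the vector field of the system
    r2 = x * x + y * y
    xdot = -x + y + x * r2
    ydot = -y - 2 * x + y * r2
    return (16 * x - 2 * y) * xdot + (-2 * x + 10 * y) * ydot

random.seed(0)
pts = [(random.uniform(-2, 2), random.uniform(-2, 2)) for _ in range(100)]
errs = [abs(Vdot(x, y) - 2 * (x * x + y * y) * (V(x, y) - 6)) for x, y in pts]
```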
1,498,952
<p>How does one compute the bias for the estimator given by the sample geometric mean for a gamma distribution with parameters ($\theta$,1)?</p> <p>i.e: Given $X_1,...,X_n$ are iid with distribution Gamma($\theta$,1), what is the bias of the estimator given by: $$\hat\theta = \left(\prod_{i=1}^n X_i\right)^{1/n}$$</p> <p>I have that $\mathbb{B}(\hat\theta) = \prod_{i=1}^n \mathbb{E}\left[X_i^{1/n}\right] - \theta$. Evaluating this gives an awful expression littered with gamma functions. Have I made any mistakes here?</p>
Narek Margaryan
203,307
<p>For a $\Gamma(k,\lambda)$ you have to compute the integral:</p> <p>$$\int_0^\infty \frac {X^{k-1+1/n}e^{-\lambda X}\lambda^k}{\Gamma(k)}dX$$ </p> <p>All we need is to multiply and divide the fraction by $\lambda^{1/n-1}$:</p> <p>$$\frac{1}{\lambda^{1/n}}\int_0^\infty \frac {(X\lambda)^{k+1/n-1}e^{-\lambda X}}{\Gamma(k)}d(\lambda X)$$</p> <p>which is</p> <p>$$\frac {\Gamma(k+1/n)}{\lambda^{1/n}\Gamma(k)}$$</p> <p>So the bias is</p> <p>$$\left(\frac {\Gamma(k+1/n)}{\lambda^{1/n}\Gamma(k)}\right)^n-\theta$$</p>
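A Monte Carlo check of the moment formula $\mathbb{E}[X^{1/n}]=\Gamma(k+1/n)/(\lambda^{1/n}\Gamma(k))$, using only the standard library (note `random.gammavariate` takes shape and *scale*, so the scale is $1/\lambda$):

```python
import math
import random

k, lam, n = 2.0, 1.0, 4
exact = math.gamma(k + 1 / n) / (lam ** (1 / n) * math.gamma(k))

random.seed(42)
N = 200_000
# sample mean of X^(1/n) for X ~ Gamma(shape=k, scale=1/lambda)
mc = sum(random.gammavariate(k, 1 / lam) ** (1 / n) for _ in range(N)) / N
```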
2,484,027
<p>I've been trying to solve this permutation problem. I know that it's been posted on this site before, but my question is about the specific approach I take to solving it.</p> <p>Here's what I thought I'd do : I could first figure out the total number of permutation where AT LEAST $2$ girls are together, and then subtract this from the total number of permutations.</p> <p>Now, the total number of ways in which the students can be seated is obviously $^8\mathbf{P}_8$, or $8!$.</p> <p>For the total number of permutations where at least $2$ girls are together, I first figured out that if I take $2$ girls as $1$ object, I can seat them in a total of $ 7 $ ways. </p> <p>To this I multiplied the total number of way in which $2$ out of $3$ girls can be chosen. Thus I had $ 7 \times ^3\mathbf{P}_2 $, or $ 6 \times 7$</p> <p>Finally, to this I multiplied the total number of permutations for the seating arrangement of the remaining students, to get $7 \times 6 \times ^6\mathbf{P}_6 $, or $ 6 \times 7!$.</p> <p>Finally, I subtracted the number of permutation where AT LEAST 2 girls are seated together ($6 \times 7!$), from the total number of permutations of possible seating arrangements ($8!$).</p> <p>Thus, I have $8! - 6 \times 7!$, or $7!(8-6) = 10080$.</p> <p>BUT, the answer given in my book, and online, is 14400. </p> <p>I want to know where my problem solving logic is wrong and if my calculation is wrong ?</p> <p>Thanks.</p>
Vysakh Mohan
707,497
<p>You're overcounting the number of ways "at least two" girls can sit together, because for example</p> <p>Boy1 Boy2 <em>Girl1 Girl2 Girl3</em> Boy3 Boy4 Boy5 is produced twice: once considering <strong>Girl1 Girl2</strong> as the unit, and once considering <strong>Girl2 Girl3</strong> as the unit. So the units <strong>Girl1 Girl2</strong> and <strong>Girl2 Girl3</strong> generate the same arrangement, which is therefore counted more than once.</p>
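A brute-force count over all $8!$ seatings confirms this: the number of arrangements with no two girls adjacent is $5!\cdot P(6,3)=14400$, so the true "at least two together" count is $40320-14400=25920$, not $6\cdot 7!=30240$:

```python
from itertools import permutations

people = ['B1', 'B2', 'B3', 'B4', 'B5', 'G1', 'G2', 'G3']

def no_two_girls_adjacent(arr):
    return not any(a.startswith('G') and b.startswith('G')
                   for a, b in zip(arr, arr[1:]))

total = 0
good = 0
for arr in permutations(people):
    total += 1
    if no_two_girls_adjacent(arr):
        good += 1
```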
3,244,365
<blockquote> <p>Prove that the product of any four consecutive integers is one less than a perfect square.</p> </blockquote> <p>My first idea is the let k be a member of the integers. Let <span class="math-container">$m$</span> which also belongs to the integers be equal to <span class="math-container">$k(k+1)(k+2)(k+3)$</span>. When I expanded, I ended up with <span class="math-container">$k^4 + 6k^3 + 11k^2 + 6k$</span>. Now, my only suggestion is to compare with the form of a perfect square trinomial and show that it cannot be expressed in that form of <span class="math-container">$a^2 + 2ab + b^2$</span>. Can someone help me to complete this?</p>
Dan Uznanski
167,895
<p>We know that a number that's one less than a perfect square has two factors that are two apart: <span class="math-container">$y^2-1 = (y-1)(y+1)$</span>. If we can find a way to make the factors of <span class="math-container">$x(x+1)(x+2)(x+3)$</span> combine into two factors that are two apart, we win.</p> <p><span class="math-container">$$ \begin{align} x(x+3)&amp;=x^2+3x\\ (x+1)(x+2)&amp;=x^2+3x+2 \end{align} $$</span></p> <p>That was easy. So: <span class="math-container">$x(x+1)(x+2)(x+3) = y^2 - 1$</span> has the solution <span class="math-container">$y = x^2 + 3x + 1$</span>.</p>
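The identity $x(x+1)(x+2)(x+3)+1=(x^2+3x+1)^2$ is an exact polynomial statement, so it can be checked over any range of integers, negatives included:

```python
# x(x+1)(x+2)(x+3) + 1 == (x^2 + 3x + 1)^2, checked exactly
results = []
for x in range(-50, 51):
    prod = x * (x + 1) * (x + 2) * (x + 3)
    results.append(prod + 1 == (x * x + 3 * x + 1) ** 2)
```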
356,583
<p>In the paper <a href="https://arxiv.org/abs/1811.02002" rel="nofollow noreferrer">"Finding Mixed Nash Equilibria of Generative Adversarial Networks"</a> the authors write in equation (1) on page 2:</p> <blockquote> <p>Consider the classical formulation of a two-player game with <em>finitely</em> many strategies: <span class="math-container">\begin{equation*} \tag1\label1 \min_{\boldsymbol{p} \in \Delta_m} \max_{\boldsymbol{q} \in \Delta_n} \langle \boldsymbol{q},\boldsymbol{a} \rangle - \langle \boldsymbol{q},A\boldsymbol{p} \rangle , \end{equation*}</span> where <span class="math-container">$A$</span> is a payoff matrix, <span class="math-container">$\boldsymbol a$</span> is a vector, and <span class="math-container">$ \Delta_d := \{\boldsymbol{z} \in \mathbb{R}_{\geq 0}^d \mid \sum\nolimits_{i=1}^d z_i = 1\}$</span> is the probability simplex, representing the <em>mixed strategies</em> (i.e., probability distributions) over <span class="math-container">$d$</span> pure strategies. A pair <span class="math-container">$(\boldsymbol{p}_{\text{NE}},\boldsymbol{q}_{\text{NE}})$</span> achieving the min-max value in (\ref{1}) is called a mixed NE.</p> </blockquote> <p>I was wondering:</p> <ul> <li>What does this formulation mean?</li> <li>The formulation seems to result in a pair of strategies <span class="math-container">$(\boldsymbol p, \boldsymbol q)$</span> <strong>parametrized</strong> by a vector <span class="math-container">$\boldsymbol a$</span>. What is the role of the vector <span class="math-container">$\boldsymbol a$</span> in the above equation?</li> </ul> <p>Thank you</p> <p>After further contemplation: I guess they want to align their application (GAN) with the game theory framework.
To that end, they write on page 3:</p> <blockquote> <p>[W]e consider the set of all probability distributions over <span class="math-container">$\Theta$</span> and <span class="math-container">$\mathcal{W}$</span>, and we search for the optimal distribution that solves the following program: <span class="math-container">\begin{equation*} \tag4\label4 \min_{\nu \in \mathcal{M}(\Theta)} \max_{\mu \in \mathcal{M}(\mathcal{W})} \mathbb{E}_{\boldsymbol{w} \sim \mu} \mathbb{E}_{X \sim \mathbb{P}_{real}} [f_\boldsymbol{w}(X)] - \mathbb{E}_{\boldsymbol{w} \sim \mu} \mathbb{E}_{\boldsymbol{\theta} \sim \nu} \mathbb{E}_{X \sim \mathbb{P}_{\boldsymbol{\theta}}} [f_\boldsymbol{w}(X)] . \end{equation*}</span></p> </blockquote> <p>They then show that the above can be cast as</p> <blockquote> <p><span class="math-container">\begin{equation*} \tag5\label5 \min_{\nu \in \mathcal{M}(\Theta)} \max_{\mu \in \mathcal{M}(\mathcal{W})} \langle \mu,g \rangle - \langle \mu,G\nu \rangle , \end{equation*}</span> with <span class="math-container">$g$</span> defined as <span class="math-container">$g : \mathcal{W} \rightarrow \mathbb{R}$</span> by <span class="math-container">$g(w) := \mathbb{E}_{X \sim \mathbb{P}_{real}} [f_\boldsymbol{w}(X)]$</span>, the operator <span class="math-container">$G : \mathcal{M}(\Theta) \rightarrow \mathcal{F}(\mathcal{W})$</span> as <span class="math-container">$(G\nu)(w) := \mathbb{E}_{\boldsymbol{\theta} \sim \nu} \mathbb{E}_{X \sim \mathbb{P}_{\boldsymbol{\theta}}} [f_\boldsymbol{w}(X)]$</span> and denoting <span class="math-container">$\langle \mu,h \rangle := \mathbb{E}_{\mu}h$</span> for any probability measure <span class="math-container">$\mu$</span> and function <span class="math-container">$h$</span> (where <span class="math-container">$\langle \mu,h \rangle$</span> is NOT an inner product, but a dual pairing in Banach spaces),</p> </blockquote> <p>which looks like (\ref{1}) (for <em>finitely</em> many strategies). 
Notice that (\ref{4}) has a free parameter <span class="math-container">$\mathbb{P}_{real}$</span> (hidden in <span class="math-container">$g$</span> in (\ref{5})), which <span class="math-container">$\boldsymbol{a}$</span> in (\ref{1}) seems to have been introduced to account for.</p> <p>Also, <span class="math-container">\begin{equation*} \min_{\boldsymbol{p} \in \Delta_m} \max_{\boldsymbol{q} \in \Delta_n} \langle \boldsymbol{q},\boldsymbol{a} \rangle - \langle \boldsymbol{q},A\boldsymbol{p} \rangle = \min_{\boldsymbol{p} \in \Delta_m} \max_{\boldsymbol{q} \in \Delta_n} \langle \boldsymbol{q}, (\boldsymbol{a} \otimes \boldsymbol{1} - A)\boldsymbol{p} \rangle \end{equation*}</span></p> <p>This is because <span class="math-container">$\boldsymbol{p}$</span> is a probability simplex and therefore each row <span class="math-container">$m$</span> of vector <span class="math-container">$\boldsymbol{a}$</span> increases row <span class="math-container">$m$</span> of payoff matrix <span class="math-container">$A$</span>. Therefore, the above game is equivalent to a standard zero-sum game with payoff matrix <span class="math-container">$\tilde{A}=(\boldsymbol{a} \otimes \boldsymbol{1} - A)$</span>.</p>
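The equivalence $\langle \boldsymbol q,\boldsymbol a\rangle - \langle \boldsymbol q, A\boldsymbol p\rangle = \langle \boldsymbol q, (\boldsymbol a\otimes\boldsymbol 1 - A)\boldsymbol p\rangle$ relies only on $\sum_j p_j = 1$, and can be spot-checked numerically (plain Python; here $A$ is $n\times m$, so it maps $\Delta_m$ into $\mathbb R^n$):

```python
import random

random.seed(1)
m, n = 4, 3

def simplex(d):
    # a random point of the probability simplex Delta_d
    w = [random.random() for _ in range(d)]
    s = sum(w)
    return [wi / s for wi in w]

a = [random.uniform(-1, 1) for _ in range(n)]
A = [[random.uniform(-1, 1) for _ in range(m)] for _ in range(n)]
p, q = simplex(m), simplex(n)

lhs = sum(q[i] * a[i] for i in range(n)) \
    - sum(q[i] * A[i][j] * p[j] for i in range(n) for j in range(m))
# a (x) 1 is the n x m matrix whose i-th row is constant a[i]
rhs = sum(q[i] * (a[i] - A[i][j]) * p[j] for i in range(n) for j in range(m))
```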
Eilon
64,609
<p>The classical formulation of a two player zero sum game with finitely many strategies is <span class="math-container">$\langle q, Ap \rangle$</span>. There is no need to introduce the vector <span class="math-container">$a$</span>, as it can be incorporated in the matrix <span class="math-container">$A$</span>. In the paper you mention, the authors assume that it is too costly to evaluate <span class="math-container">$A$</span>; maybe the formulation they provide is useful in their analysis.</p>
2,013,459
<p>The period of $\sin(x)$ is $2\pi$. Thus the period of $\sin(\pi x)$ will become $T_1=2$, similarly the period of $\sin(2\pi x)$ is $T_2=1$ and for $\sin(5\pi x)$, the period is $T_3=\frac{2}{5}$.</p> <p>To find the period of $\sin(\pi x)-\sin(2\pi x)+\sin(5\pi x)$, we can have $$\frac{T_1}{T_{2}}=\frac{2}{1}\Rightarrow \,\, T^*=T_1=2T_2=2$$ Now, we can also have $$\frac{T^*}{T_{3}}=\frac{2}{\frac{2}{5}}=5\Rightarrow \,\, T=T^*=5T_3=2$$ </p> <p>So the period is 2. Is this correct? </p>
String
94,971
<p>You have $$ (T_1,T_2,T_3)=\left(\tfrac{10}5,\tfrac55,\tfrac25\right) $$ where $\operatorname{LCM}(10,5,2)=10$ (least common multiple). Thus the minimal common integer multiple where those fractions coincide must be $$ \frac{10}5=2\cdot \frac 55=5\cdot\frac25=2 $$ So your suggestion is correct.</p>
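A quick numerical confirmation that $2$ is a period of the sum while $1$ is not (plain Python):

```python
import math

def g(x):
    return (math.sin(math.pi * x)
            - math.sin(2 * math.pi * x)
            + math.sin(5 * math.pi * x))

xs = [0.01 * k for k in range(500)]
period2 = max(abs(g(x + 2) - g(x)) for x in xs)  # 2 is a period: differences ~ 0
not1 = max(abs(g(x + 1) - g(x)) for x in xs)     # 1 is not a period
```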
2,648,526
<p>I really cannot figure this question out. Can anyone help me please!?</p> <blockquote> <p>Prove that the length of the median $m_a$ of obtuse triangle $△ABC$ with the obtuse $∠CAB$ is smaller than $\dfrac{1}{2}|BC|$.</p> </blockquote> <p>Thank you very much!</p>
Dr. Sonnhard Graubner
175,066
<p>By the law of cosines we have $$m_a^2=c^2+\frac{a^2}{4}-2\cdot\frac{a}{2}\cdot c\cos(\beta)$$ with $$\cos(\beta)=\frac{a^2+c^2-b^2}{2ac}$$ we get $$m_a^2=c^2+\frac{a^2}{4}-ac\left(\frac{a^2+c^2-b^2}{2ac}\right)$$ or $$m_a^2=\frac{2(c^2+b^2)-a^2}{4}.$$ Since the angle $\angle CAB$ is obtuse we have $$b^2+c^2&lt;a^2,$$ which is equivalent to $$m_a^2&lt;\frac{a^2}{4},$$ i.e. $m_a&lt;\frac{1}{2}|BC|$.</p>
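A coordinate check of the median-length formula $m_a^2 = \frac{2(b^2+c^2)-a^2}{4}$ and of the inequality $m_a < \frac{a}{2}$ for a triangle that is obtuse at $A$ (plain Python; the sample triangle is an arbitrary choice with $\vec{AB}\cdot\vec{AC}<0$):

```python
import math

# place A at the origin; AB . AC < 0 makes the angle at A obtuse
A = (0.0, 0.0); B = (1.0, 0.0); C = (-0.5, 1.0)

def dist(P, Q):
    return math.hypot(P[0] - Q[0], P[1] - Q[1])

a, b, c = dist(B, C), dist(C, A), dist(A, B)
M = ((B[0] + C[0]) / 2, (B[1] + C[1]) / 2)  # midpoint of BC
m_a = dist(A, M)                            # median from A

m_a_formula = math.sqrt((2 * (b * b + c * c) - a * a) / 4)
```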
1,540,756
<p>I need help trying to solve this question, been cracking my head for the whole week and my professor said he used an online solver but in exams we have to solve by hand!</p> <p>Given these 8 equations, we are supposed to solve for $i_0, i_1, \dots, i_7$: $$\begin{array}{rl} i_1+i_2 &amp;= 12 \\ i_2+i_5+i_6 &amp;= 0 \\ i_3+i_5+i_7 &amp;= 0 \\ i_2-i_4+i_5+i_7 &amp;= 0 \\ i_0-i_1 &amp;= 0 \\ i_1-i_2-i_3+i_5 &amp;= 0 \\ i_3-i_4-i_7 &amp;= 0 \\ i_5-i_6-i_7 &amp;= 0 \end{array}$$</p> <p>I know the answers are: $ i_0=8, i_1=8, i_2=4, i_3=2, i_4=2, i_5=2, i_6=2, i_7=0 $, but I don’t know how to solve by hand!</p>
amd
265,466
<p>The above equations can be represented by the following augmented matrix: $$\left(\begin{array}{rrrrrrrr|r} 0&amp;1&amp;1&amp;0&amp;0&amp;0&amp;0&amp;0 &amp; 12 \\ 0&amp;0&amp;1&amp;0&amp;0&amp;1&amp;1&amp;0 &amp; 0 \\ 0&amp;0&amp;0&amp;1&amp;0&amp;1&amp;0&amp;1 &amp; 0 \\ 0&amp;0&amp;1&amp;0&amp;-1&amp;1&amp;0&amp;1 &amp; 0 \\ 1&amp;-1&amp;0&amp;0&amp;0&amp;0&amp;0&amp;0 &amp; 0 \\ 0&amp;1&amp;-1&amp;-1&amp;0&amp;1&amp;0&amp;0 &amp; 0 \\ 0&amp;0&amp;0&amp;1&amp;-1&amp;0&amp;0&amp;-1 &amp; 0 \\ 0&amp;0&amp;0&amp;0&amp;0&amp;1&amp;-1&amp;-1 &amp; 0 \end{array}\right).$$ It’s pretty straightforward, although a bit tedious, to perform row-reduction on it to solve the system.</p>
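The row-reduction can be carried out exactly with fractions; a short Gauss–Jordan sketch is below. Note that solving the system exactly as written yields a unique solution with $i_5=i_6=-2$, i.e. the quoted answer up to the sign of those two values (presumably an opposite reference direction for those currents):

```python
from fractions import Fraction

# the 8x8 system from the question, columns i0..i7, right-hand side last
rows = [
    ([0, 1, 1, 0, 0, 0, 0, 0], 12),
    ([0, 0, 1, 0, 0, 1, 1, 0], 0),
    ([0, 0, 0, 1, 0, 1, 0, 1], 0),
    ([0, 0, 1, 0, -1, 1, 0, 1], 0),
    ([1, -1, 0, 0, 0, 0, 0, 0], 0),
    ([0, 1, -1, -1, 0, 1, 0, 0], 0),
    ([0, 0, 0, 1, -1, 0, 0, -1], 0),
    ([0, 0, 0, 0, 0, 1, -1, -1], 0),
]
M = [[Fraction(v) for v in r] + [Fraction(b)] for r, b in rows]

n = 8
for col in range(n):
    # find a pivot row, swap it up, normalise, then clear the column
    piv = next(r for r in range(col, n) if M[r][col] != 0)
    M[col], M[piv] = M[piv], M[col]
    M[col] = [v / M[col][col] for v in M[col]]
    for r in range(n):
        if r != col and M[r][col] != 0:
            f = M[r][col]
            M[r] = [vr - f * vc for vr, vc in zip(M[r], M[col])]

solution = [M[r][n] for r in range(n)]  # i0 .. i7
```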
3,694,719
<p>source: BMO2 2004 q4</p> <p>The real number <span class="math-container">$x$</span> between <span class="math-container">$0$</span> and <span class="math-container">$1$</span> has decimal representation <span class="math-container">$0.a_1a_2a_3a_4\dots$</span> and enjoys the following property: the number of distinct blocks of the form <span class="math-container">$a_k a_{k+1} a_{k+2} \dots a_{k+2003}$</span>, as <span class="math-container">$k$</span> ranges through the positive integers, is less than or equal to <span class="math-container">$2004$</span>. Prove that <span class="math-container">$x$</span> is rational.</p> <p>I try to prove by induction on the claim: for any positive integer <span class="math-container">$n$</span>, if there are at most <span class="math-container">$n$</span> different blocks of length <span class="math-container">$n$</span>, then the number is rational.</p> <p>When <span class="math-container">$n=1$</span>: if <span class="math-container">$a_1=a$</span>, then every digit is <span class="math-container">$a$</span> and the number <span class="math-container">$0.aaa\ldots = a \times 0.111\ldots = a/9$</span> is rational.</p> <p>Assume the claim holds for <span class="math-container">$n=m-1$</span> and consider the number <span class="math-container">$x$</span> in question. If there are fewer than <span class="math-container">$n$</span> different blocks of length <span class="math-container">$n-1$</span>, then <span class="math-container">$x$</span> is rational by the induction hypothesis. If there are at least <span class="math-container">$n$</span> different blocks of length <span class="math-container">$n-1$</span>, I don't know how to proceed...</p>
Balaji sb
213,498
<p>A reduction of the problem (not a full answer): Let us retain only the sequences of length 2004 that repeat infinitely often. Hence w.l.o.g. there exists a smallest <span class="math-container">$\ell$</span> such that <span class="math-container">$a_k,a_{k+1},...,a_{k+\ell}$</span> is one of <span class="math-container">$\ell+1$</span> sequences each of length <span class="math-container">$\ell+1$</span> for every <span class="math-container">$k$</span>, and each of these <span class="math-container">$\ell+1$</span> sequences repeats infinitely often in <span class="math-container">$0.a_1a_2...$</span>.</p> <p>Let us look at the sequences appearing. Let <span class="math-container">$s_k = a_k,a_{k+1},...,a_{k+\ell}$</span>. We write <span class="math-container">$s_k \rightarrow s_{k+1} \rightarrow s_{k+2}....$</span>. Now <span class="math-container">$s_k$</span> and <span class="math-container">$s_{k+1}$</span> have <span class="math-container">$\ell$</span> digits in common and <span class="math-container">$s_k$</span> is one of the <span class="math-container">$\ell+1$</span> sequences. Let us call these <span class="math-container">$\ell+1$</span> sequences <span class="math-container">$g_1,...,g_{\ell+1}$</span>.</p> <p>Let <span class="math-container">$s_k = g_{\sigma(k)}$</span>. Hence we have <span class="math-container">$g_{\sigma(k)} \rightarrow g_{\sigma(k+1)} \rightarrow g_{\sigma(k+2)}....$</span>. Now all the <span class="math-container">$\ell+1$</span> sequences appear in these transitions. If <span class="math-container">$\sigma(k+\ell+1) = \sigma(k)$</span> for all <span class="math-container">$k$</span>, then the digits repeat in <span class="math-container">$a_1....$</span> and hence the number is rational.
But if <span class="math-container">$\sigma(k+j) = \sigma(k)$</span> then two different transitions out of the same sequence must be possible, i.e., say both <span class="math-container">$g_1 \rightarrow g_2$</span> and <span class="math-container">$g_1 \rightarrow g_3$</span>. Hence there must be at least 2 sequences <span class="math-container">$g_2,g_3$</span> which differ in exactly the last element. Now form the set of all such so-called possible orbits <span class="math-container">$(g_m \rightarrow g_u.... g_i \rightarrow g_m)$</span> (no repetition in the transitions). The number is rational if there is exactly one possible orbit <span class="math-container">$(g_1 \rightarrow g_u.... g_i \rightarrow g_1)$</span> (no repetition in the transitions) and no other orbit is possible. &quot;In general&quot;, at a high level, we can use <span class="math-container">$2$</span> orbits satisfying certain properties and construct a number by placing these 2 orbits successively in various ways, treating the 2 orbits as mapped to 0/1, such that the number is irrational.</p> <p>Note that if there is a non-repeating orbit <span class="math-container">$(g_1 \rightarrow g_i ... \rightarrow g_m \rightarrow g_1)$</span> of length <span class="math-container">$u$</span> &lt; <span class="math-container">$(\ell+1)$</span> then each element <span class="math-container">$g_j$</span> in the orbit can be generated by a single sequence of the form <span class="math-container">$(e_0,e_1,....,e_{\ell})$</span> and the orbit is generated by <span class="math-container">$$(e_0,...,e_{\ell}) \rightarrow ... \rightarrow (e_{(i:\ell)},e_{((\ell-u+1): (\ell-u+1+i-1))}) \rightarrow ...$$</span>.</p> <p>Since any sequence is reachable from any sequence, we have that all sequences <span class="math-container">$g_1,...,g_{\ell+1}$</span> are generated from a single sequence <span class="math-container">$(e_0,e_1,....,e_{\ell})$</span>.</p> <p>For every orbit of length <span class="math-container">$u$</span> involving <span class="math-container">$g_i = (m_0,m_1,....,m_{\ell})$</span>, we have the condition <span class="math-container">$m_{(u:\ell)} = m_{(0:\ell-u)}$</span>. This implies the sequence <span class="math-container">$(m_0,m_1,....,m_{\ell})$</span> is periodic with period <span class="math-container">$u$</span>. Hence each sequence in a non-repeating orbit of length <span class="math-container">$u$</span> is periodic with period <span class="math-container">$u$</span>. Similarly for the other <span class="math-container">$g_j$</span>. Hence we have that every sequence <span class="math-container">$g_i$</span> is periodic with period <span class="math-container">$u_1,u_2,...$</span> where <span class="math-container">$u_i$</span> is the length of some non-repeating orbit involving <span class="math-container">$g_i$</span>. Note that every <span class="math-container">$g_i$</span> is generated by <span class="math-container">$(e_0,e_1,....,e_{\ell})$</span>. Hence every sequence <span class="math-container">$g_i$</span> is generated from <span class="math-container">$(e_0,e_1,....,e_{\ell})$</span> after a series of periodic shifts of length <span class="math-container">$\ell_1,\ell_2,...,\ell_q$</span> w.r.t. period <span class="math-container">$u_1,u_2,...,u_q$</span> for some <span class="math-container">$q \leq \frac{\ell+1}{2}$</span>.</p> <p>Try to proceed from this point.</p>
79,317
<p>This concerns one of those "well known" facts, referred to in a recent preprint I've been looking at. In principle it's elementary, but I can't pin down an explicit textbook reference for it. Start with two finite groups $A,B$ and their product $G:=A \times B$, working over a splitting field $K$ for the groups involved with prime characteristic dividing $|G|$. Let $S_1, \dots, S_m$ and $T_1, \dots, T_n$ be respective sets of representatives of isomorphism classes of simple modules for the group algebras $KA, KB$. In turn let the projective covers (=injective hulls) be respectively $P_i, Q_j$. These are the PIMs or indecomposable projective modules for the two group algebras.</p> <p>It's a standard observation (found in some books) that there is an obvious isomorphism between $KG$ and the tensor product algebra $KA \otimes_K KB$, while each group algebra splits into the direct sum (as a left module over itself) of the various PIMs taken with multiplicity equal to the dimension of the corresponding simple module. It's also a standard fact (found in some books) that each $S_i \otimes T_j$ is a simple module for $KG$. From these ingredients one can conclude that $P_i \otimes Q_j$ is the corresponding PIM, thereby exhausting all isomorphism classes for $KG$. </p> <blockquote> <p>Is all of this written down in a self-contained way somewhere?</p> </blockquote>
Benjamin Steinberg
15,934
<p>The paper Representations of direct products of finite groups. Burton Fein Source: Pacific J. Math. Volume 20, Number 1 (1967), 45-58.</p> <p><a href="https://projecteuclid.org/journals/pacific-journal-of-mathematics/volume-20/issue-1/Representations-of-direct-products-of-finite-groups/pjm/1102992967.full" rel="nofollow noreferrer">Link</a></p> <p>has what you are looking for and also explains what happens in the case that <span class="math-container">$K$</span> is not a splitting field. Look at Theorem 2.2 and the remark following it.</p>
9,302
<p>Say I have a symmetric matrix. I have the concept of 2-norm as defined on wikipedia. Now I want to prove (disprove?) that the norm of a symmetric matrix is maximum absolute value of its eigenvalue. I would really appreciate if this can be done only using simple concepts of linear algebra.</p> <p>I am quite new to mathematics. </p>
mpiktas
4,742
<p>Here is a simple explanation not necessarily from linear algebra. We have</p> <p>$$\|A\|_2=\max_{\|x\|=1}\|Ax\|$$</p> <p>where $\|\cdot\|$ is simple euclidean norm. This is a constrained optimisation problem with Lagrange function:</p> <p>$$L(x,\lambda)=\|Ax\|^2-\lambda(\|x\|^2-1)=x&#39;A^2x-\lambda(x&#39;x-1)$$</p> <p>here I took squares which do not change anything, but makes the following step easier. Taking derivative with respect to $x$ and equating it to zero we get</p> <p>$$A^2x-\lambda x=0$$</p> <p>the solution for this problem is the eigenvector of $A^2$. Since $A^2$ is symmetric, all its eigenvalues are real. So $x&#39;A^2x$ will achieve maximum on set $\|x\|^2=1$ with maximal eigenvalue of $A^2$. Now since $A$ is symmetric it admits representation</p> <p>$$A=Q\Lambda Q&#39;$$</p> <p>with $Q$ the orthogonal matrix and $\Lambda$ diagonal with eigenvalues in diagonals. For $A^2$ we get</p> <p>$$A^2=Q\Lambda^2 Q&#39;$$</p> <p>so the eigenvalues of $A^2$ are squares of eigenvalues of $A$. The norm $\|A\|_2$ is the square root taken from maximum $x&#39;A^2x$ on $x&#39;x=1$, which will be the square root of maximal eigenvalue of $A^2$ which is the maximal absolute eigenvalue of $A$.</p>
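A concrete illustration for a $2\times 2$ symmetric matrix, using only the standard library: $A=\begin{pmatrix}2&1\\1&2\end{pmatrix}$ has eigenvalues $1$ and $3$, and maximising $\|Ax\|$ over unit vectors $x=(\cos\theta,\sin\theta)$ on a fine grid recovers $\|A\|_2=3$:

```python
import math

A = [[2.0, 1.0], [1.0, 2.0]]  # symmetric, eigenvalues 1 and 3

def norm_Ax(theta):
    x = (math.cos(theta), math.sin(theta))  # unit vector
    y0 = A[0][0] * x[0] + A[0][1] * x[1]
    y1 = A[1][0] * x[0] + A[1][1] * x[1]
    return math.hypot(y0, y1)

# maximise ||Ax|| over the unit circle by brute force
grid = [k * math.pi / 100000 for k in range(200001)]
norm2 = max(norm_Ax(t) for t in grid)
```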
3,435,511
<p>I'm looking to write down formally that a multiset of elements contains at least two elements that differ in value, e.g., S1 = {1,1,1,1,1,1} and S2 = {1,1,1,1,0,1}: S1 has all identical elements, while S2 has at least two elements that differ in value.</p>
Robo300
718,762
<p><img src="https://i.stack.imgur.com/nBazV.jpg" alt="the work"> I believe that the first derivative at zero is zero; my justification is in the attached image. It basically boils down to defining a multivariate function <span class="math-container">$F(x,y)$</span> such that <span class="math-container">$f(x) = F(x,x)$</span>, and computing the partial derivatives of <span class="math-container">$F$</span> at <span class="math-container">$(0,0)$</span> on the contour <span class="math-container">$x=y$</span>.</p>
3,435,511
<p>I'm looking to write down formally that a multiset of elements contains at least two elements that differ in value, e.g., S1 = {1,1,1,1,1,1} and S2 = {1,1,1,1,0,1}: S1 has all identical elements, while S2 has at least two elements that differ in value.</p>
ComplexYetTrivial
570,419
<p><strong>Some integral representations</strong></p> <p>For <span class="math-container">$x &gt; 0$</span> we have <span class="math-container">\begin{align} f(x) &amp;= \int \limits_x^\infty - \log \left(1 - \mathrm{e}^{-k}\right) \sqrt{1 - \frac{x^2}{k^2}} \, \mathrm{d}k \tag{1} \\ &amp;\!\!\!\!~\stackrel{k = x u}{=} x \int \limits_1^\infty - \log \left(1 - \mathrm{e}^{-x u}\right) \sqrt{1 - \frac{1}{u^2}} \, \mathrm{d}u \tag{2} \\ &amp;\!\!\stackrel{\text{i.b.p.}}{=} \int \limits_1^\infty \frac{\operatorname{Li}_2\left(\mathrm{e}^{-x u}\right)}{u^2 \sqrt{u^2-1}} \, \mathrm{d} u \tag{3} \\ &amp;\!\!\!\!\!\!\stackrel{u = \cosh(s)}{=} \int \limits_0^\infty \frac{\operatorname{Li}_2\left(\mathrm{e}^{-x \cosh(s)}\right)}{\cosh^2(s)} \, \mathrm{d} s \tag{4} \\ &amp;\!\!\!\!\!\stackrel{\tanh(s) = \tau}{=} \int \limits_0^1 \operatorname{Li}_2\left(\mathrm{e}^{-\frac{x}{\sqrt{1-\tau^2}}}\right) \, \mathrm{d} \tau \, ,\tag{5} \end{align}</span> where <span class="math-container">$\operatorname{Li}_2$</span> is the <a href="http://mathworld.wolfram.com/Dilogarithm.html" rel="nofollow noreferrer">dilogarithm</a>. Apart from <span class="math-container">$(2)$</span> these representations are also valid for <span class="math-container">$x=0$</span> and yield <span class="math-container">$f(0) = \frac{\pi^2}{6}$</span> as expected. 
None of them gives much hope for a closed-form expression, but they can be used to study the asymptotic behaviour of <span class="math-container">$f$</span>.</p> <hr> <p><strong>The expansion for <span class="math-container">$x \to \infty$</span></strong></p> <p>Using <span class="math-container">$(5)$</span> and the series representation of the dilogarithm we find <span class="math-container">$$ f(x) = \sum \limits_{n=1}^\infty \frac{1}{n^2} \int \limits_0^1 \mathrm{e}^{- \frac{n x}{\sqrt{1-\tau^2}}} \, \mathrm{d} \tau \stackrel{\frac{1}{\sqrt{1-\tau^2}} = 1 + t}{=} \frac{1}{\sqrt{2}} \sum \limits_{n=1}^\infty \frac{\mathrm{e}^{-n x}}{n^2} \int \limits_0^\infty \frac{\mathrm{e}^{-n x t}}{\sqrt{t(1+\frac{t}{2})} (1+t)^2} \, \mathrm{d}t \, . $$</span> We can then employ <a href="https://en.wikipedia.org/wiki/Watson%27s_lemma" rel="nofollow noreferrer">Watson's lemma</a> with <span class="math-container">$\lambda = -\frac{1}{2}$</span> and <span class="math-container">$g(t) = \frac{1}{\sqrt{1+\frac{t}{2}}(1+t)^2}$</span> to obtain the asymptotic expansion <span class="math-container">$$ \int \limits_0^\infty \frac{\mathrm{e}^{-n x t}}{\sqrt{t(1+\frac{t}{2})} (1+t)^2} \, \mathrm{d}t \sim \sum \limits_{k=0}^\infty \frac{g^{(k)}(0) \operatorname{\Gamma}\left(k + \frac{1}{2}\right)}{k! (n x)^{k+\frac{1}{2}}} \, , \, x \to \infty \, , \, n \in \mathbb{N} \, . 
$$</span> Simplifying the gamma function we arrive at <span class="math-container">$$ f(x) \sim \sqrt{\frac{\pi}{2 x}} \sum \limits_{n=1}^\infty \frac{\mathrm{e}^{-n x}}{n^{5/2}} \sum \limits_{k=0}^\infty \frac{{{2k} \choose k} g^{(k)}(0)}{(4nx)^k} \, , \, x \to \infty \, .$$</span> Clearly, terms with <span class="math-container">$n &gt; 1$</span> are exponentially smaller than those with <span class="math-container">$n = 1$</span> and may be dropped, so we find the asymptotic series <span class="math-container">$$ f(x) \sim \sqrt{\frac{\pi}{2x}} \mathrm{e}^{-x} \left[1 - \frac{9}{8x} + \frac{345}{128 x^2} + \sum \limits_{k=3}^\infty \frac{{{2k} \choose k} g^{(k)}(0)}{(4x)^k} \right] \, , \, x \to \infty \, ,$$</span> which agrees with Robert Israel's result.</p> <hr> <p><strong>The value of <span class="math-container">$f'(0)$</span></strong></p> <p>Using <span class="math-container">$(3)$</span>, <span class="math-container">$(4)$</span> or <span class="math-container">$(5)$</span> it is not hard to show that <span class="math-container">$f$</span> is smooth on <span class="math-container">$(0,\infty)$</span>. The derivative at zero, however, does not exist. 
This can be seen by combining <span class="math-container">$(1)$</span> and <span class="math-container">$(2)$</span> to write (for <span class="math-container">$x &gt; 0$</span>) <span class="math-container">\begin{align} \frac{f(0) - f(x)}{x} &amp;= \int \limits_0^\infty - \log \left(1 - \mathrm{e}^{-x u}\right) \, \mathrm{d} u - \int \limits_1^\infty - \log \left(1 - \mathrm{e}^{-x u}\right) \sqrt{1 - \frac{1}{u^2}} \, \mathrm{d}u \\ &amp;= \int \limits_0^1 - \log \left(1 - \mathrm{e}^{-x u}\right) \, \mathrm{d} u + \int \limits_1^\infty - \log \left(1 - \mathrm{e}^{-x u}\right) \left[1 - \sqrt{1 - \frac{1}{u^2}}\right] \, \mathrm{d}u \\ &amp;\geq \int \limits_0^1 - \log(x u) \, \mathrm{d} u + 0 = 1 - \log(x) \stackrel{x \to 0^+}{\longrightarrow} \infty \, , \end{align}</span> which implies <span class="math-container">$f'(0) = - \infty$</span>.</p> <hr> <p><strong>An idea for <span class="math-container">$x \to 0^+$</span></strong></p> <p>In the face of this result, the expansion of <span class="math-container">$f$</span> at zero cannot be a simple Taylor series. Instead, we might want to use the <a href="https://en.wikipedia.org/wiki/Polylogarithm#Series_representations" rel="nofollow noreferrer">asymptotic expansion</a> <span class="math-container">$$ \operatorname{Li}_2 \left(\mathrm{e}^{- a}\right) = a [\log(a)-1] + \sum \limits_{k \in \mathbb{N}_0 \setminus \{1\}} \frac{\zeta(2-k)}{k!} (-a)^k = \frac{\pi^2}{6} + a \log(a) - a -\frac{a^2}{4} + \frac{a^3}{72} + \mathcal{O}\left(a^5\right) $$</span> of the dilogarithm for <span class="math-container">$0 &lt; a &lt; 2 \pi$</span>. 
Naively plugging this result into <span class="math-container">$(5)$</span> we obtain <span class="math-container">\begin{align} f(x) &amp;\stackrel{?}{\sim} \int \limits_0^1 \left[\frac{\pi^2}{6} + \frac{x}{\sqrt{1-\tau^2}}\left[\log\left(\frac{x}{\sqrt{1-\tau^2}}\right) -1\right] + \mathcal{O}\left(x^2\right)\right] \mathrm{d} \tau \\ &amp;= \frac{\pi^2}{6} - \frac{\pi}{2} x \left[-\log(2 x) + 1\right] + \mathcal{O}\left(x^2\right) \, , \, x \to 0^+ \, . \end{align}</span> While this approximation appears to be quite good numerically, it seems unlikely that it is entirely correct. The integral <span class="math-container">$\int_0^1 \frac{\mathrm{d}{\tau}}{1-\tau^2}$</span>, which is the prefactor of the <span class="math-container">$x^2$</span>-term, is divergent to begin with. This is probably related to the fact that the expansion is only valid for <span class="math-container">$\tau^2 &lt; 1 - \frac{x^2}{4 \pi^2}$</span>. I do not know (yet?) how to compute or estimate the higher-order terms rigorously, so I'll leave it at that for now.</p>
1,077,119
<p>I am starting to learn about tensor products of abelian groups.</p> <p>Why is the tensor product defined for <strong>abelian</strong> groups? In which part of the construction the commutativity of the groups is needed?</p>
Mister Benjamin Dover
196,215
<p>There is a version of the tensor product for nonabelian groups, but this notion is much more specialized. See <a href="http://www-irma.u-strasbg.fr/~loday/PAPERS/87BrownLoday%28vanKampen%29.pdf" rel="nofollow">http://www-irma.u-strasbg.fr/~loday/PAPERS/87BrownLoday%28vanKampen%29.pdf</a>, section 2. In the construction, at some point you take a quotient, which you cannot do in general if you take the free group instead of the free abelian group. (A free group on a set is nonabelian as soon as the set has cardinality $&gt;1$, so you need a <em>normal</em> subgroup to form the quotient, and the standard way around this issue is to take the normal closure. This is implicit in the paper, where they use a presentation. In the abelian case the issue disappears, since every subgroup of an abelian group is normal.)</p> <p>See also <a href="http://pages.bangor.ac.uk/~mas010/nonabtens.html" rel="nofollow">http://pages.bangor.ac.uk/~mas010/nonabtens.html</a> </p>
2,049,714
<p>Does it make sense when people say "statistically impossible"?</p>
Peter Darmis
416,428
<p>In <code>statistics</code> nothing is strictly <code>impossible</code>; you may want to change this to <code>improbable</code>, which is something completely different. Theoretically speaking, an event could have probability <strong>1&#xF7;&#x221E;&#xA0;=&#xA0;~0</strong>, but that does not mean it will actually happen, at least in our lifetime. Conversely, some of the things that we consider <code>more sure to happen</code>, just because they carry a large share of the probability, do not always turn out that way.</p>
4,030,877
<p><strong>Question:</strong> Ten different candies are given to three children <code>A</code>, <code>B</code> and <code>C</code>. each child has at least one. How many different ways are there?</p> <p>I use two different methods to solve this problem:</p> <pre><code> f[1, n_] := n; f[n_, 1] := 1; f[n_, m_] := f[n, m] = m (f[n - 1, m - 1] + f[n - 1, m]) f[10, 3] </code></pre> <pre><code> Select[Tuples[Range[3], 10], Length[Union[#]] == 3 &amp;] // Length </code></pre> <p>But the result of the first method is not equal to <code>StirlingS2[10, 3] 3!</code>. I want to know how to solve this problem correctly?</p>
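For what it's worth, both counts can be cross-checked outside Mathematica; a small Python sketch (added for illustration) compares inclusion–exclusion with brute force:

```python
from itertools import product

# Inclusion-exclusion count of surjections from 10 distinct candies onto 3 children:
# 3^10 - C(3,1)*2^10 + C(3,2)*1^10
by_formula = 3**10 - 3 * 2**10 + 3 * 1**10

# Brute force: assign a child (0, 1 or 2) to each candy and keep the
# assignments in which every child appears at least once
by_brute_force = sum(1 for t in product(range(3), repeat=10)
                     if len(set(t)) == 3)

print(by_formula, by_brute_force)  # both 55980 = StirlingS2[10, 3] * 3!
```

Both agree with <code>StirlingS2[10, 3] 3!</code>, which suggests the recursive method's base case is the culprit: for a surjection count one would expect <code>f[1, 1] = 1</code> but <code>f[1, m] = 0</code> for <code>m &gt; 1</code>, rather than <code>f[1, n_] := n</code>.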
hamam_Abdallah
369,188
<p><strong>hint</strong></p> <p><span class="math-container">$$x&gt;0\implies x^n=e^{n\ln(x)}$$</span> So, <span class="math-container">$$\lim_{n\to+\infty}x^n=0\iff \ln(x)&lt;0$$</span> <span class="math-container">$$\iff x&lt;1$$</span></p> <p>and <span class="math-container">$$x&lt;0\implies x^n=(-1)^ne^{n\ln(-x)}$$</span></p>
2,706,504
<p>In Lang's Algebra, I had a hard time with this question:</p> <blockquote> <p>Let $K$ be field with characteristic $p$ (a prime). Let $L|K$ be a finite extension of $K$, and suppose $gcd([L:K],p)=1$. Show $L$ is separable over $K$.</p> </blockquote> <p>In order to have $L$ separable over $K$, one needs $K(\alpha)|K$ separable for each $\alpha \in L$. </p> <p>We can say $L=K(\alpha_1,...,\alpha_m)$. It seems like I need to use the tower relation of these extensions and get some information about their degrees that is relevant to the gcd condition, but I couldn't make this work. A hint is welcome.</p>
Zev Chonoles
264
<p>The extension $K(\alpha)/K$ is separable when the minimal polynomial of $\alpha$ over $K$ has no repeated roots.</p> <p>An <em>irreducible</em> polynomial $f\in K[x]$ has repeated roots if and only if $f(x)=g(x^p)$ for some $g\in K[x]$.</p> <p>Do you see how to proceed from here?</p>
2,970,773
<p>Let's assume I am given a positive integer <span class="math-container">$n$</span>, as well as an upper limit <span class="math-container">$L$</span>.</p> <p>How could one find all, or at least one, of the possible solutions for <span class="math-container">$a$</span> and <span class="math-container">$b$</span> such that <span class="math-container">${a \mod b = n}$</span>, where <span class="math-container">$0 \le a, b \le L$</span>?</p>
Henno Brandsma
4,280
<p>He doesn't. You could say he defines the set <span class="math-container">$G=\{(n,m) \in \mathbb{N}^2: \overline{B_n} \subseteq B_m\}$</span>, the set of "good" pairs. Once you've fixed your enumeration of a countable base, there will be some pairs like this; a randomly chosen pair won't satisfy it, but there are many pairs that do (regularity will guarantee that, see the end of that proof where regularity is used). At least you know that <span class="math-container">$G$</span> is a countable set of pairs, and for each of them we pick such a function, and we only use those functions.</p> <p>Munkres said "for each pair where [condition]...", which means he does nothing for the pairs where this is not satisfied; he ignores those. </p>
3,412,217
<p>The category of abelian groups <span class="math-container">$\mathbb{Ab}$</span> has a monoidal (closed) structure <span class="math-container">$(\otimes, \mathbb{Z})$</span>. Moreover, it is monadic over the category of sets via the free abelian group monad <span class="math-container">$$\mathbb{Z}[\_]: \text{Set} \to \text{Set}.$$</span></p> <blockquote> <p><strong>Q:</strong> I wonder (and I believe) if it is possible to recover the monoidal structure from the monad and the underlying monoidal structure on the category of sets.</p> </blockquote> <p>The question might seem vague, an accepted answer would look like "<em>of course it is possible, <span class="math-container">$A \otimes B$</span> is some algebra structure on the coequalizer of some diagram of the form <span class="math-container">$\mathbb{Z}[\mathsf{U}A \times \mathsf{U}B] \rightrightarrows \mathbb{Z}[\mathsf{U}A \times \mathsf{U}B]$</span></em>", or something similar where the cartesian product of Set and the monad appear.</p>
Simon
716,796
<p>Let <span class="math-container">$ (x_0,y_0) $</span> be a solution of <span class="math-container">$ ax+by = n $</span>; then by elementary number theory all the integer solutions are given by <span class="math-container">$$ x=x_0+\frac{b}{(a,b)}t,\quad y = y_0-\frac{a}{(a,b)}t $$</span> where <span class="math-container">$ (a,b) $</span> is the greatest common divisor and <span class="math-container">$ t $</span> is any integer. Thus it suffices to let <span class="math-container">$$ -\frac{(a,b)x_0}{b}\leq t\leq \frac{(a,b)y_0}{a}. $$</span> Because <span class="math-container">$ -\frac{(a,b)x_0}{b} $</span> and <span class="math-container">$ \frac{(a,b)y_0}{a} $</span> may fail to be integers, there is no simple closed formula.</p> <p>But the number of solutions is approximately <span class="math-container">$\frac{n(a,b)}{ab}$</span>, because <span class="math-container">$\frac{x_0}{b}+\frac{y_0}{a}=\frac{n}{ab}$</span>.</p>
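As a concrete illustration (values chosen arbitrarily, not from the post), one can count the non-negative solutions of $6x+10y=300$ by brute force and compare with the estimate $\frac{n(a,b)}{ab}$:

```python
from math import gcd

a, b, n = 6, 10, 300  # illustrative values

# Brute-force count of solutions with x, y >= 0
count = sum(1 for x in range(n // a + 1) if (n - a * x) % b == 0)

print(count)                    # 11 solutions: x = 0, 5, 10, ..., 50
print(n * gcd(a, b) / (a * b))  # 10.0, the approximation n*(a,b)/(ab)
```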
2,001,441
<p>I am having the hardest time solving the following trigonometric equation. Can anyone help please? Thank you.</p> <p>Solve for x. [Hint: Let $u = \tan^{-1}(x)$ and $v = \tan^{-1}(2x)$. Solve the equation $u+v = \frac{\pi}{4}$ by taking the tangent of each side.]</p> <p>$\tan^{-1}(x) +\tan^{-1}(2x)= \frac{\pi}{4}$</p>
Tyler
383,143
<p><strong>Hint:</strong> Use their hint and take the tangent of each side - you get $\tan(u+v)=\tan(\frac{\pi}{4})$. The following identity may be useful: $$\tan(a+b)=\frac{\tan(a)+\tan(b)}{1-\tan(a)\tan(b)}$$</p>
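Following the hint through (a sketch with numerical verification, not part of the original answer): the identity gives $\frac{x+2x}{1-2x^2}=1$, i.e. $2x^2+3x-1=0$.

```python
from math import atan, pi, sqrt, isclose

# Taking tangents of both sides: (x + 2x)/(1 - 2x^2) = tan(pi/4) = 1,
# which rearranges to 2x^2 + 3x - 1 = 0.
x = (-3 + sqrt(17)) / 4  # the positive root, ~0.2808

print(isclose(atan(x) + atan(2 * x), pi / 4))  # True

# The other root is extraneous: its arctan sum is -3*pi/4, not pi/4.
x_bad = (-3 - sqrt(17)) / 4
print(isclose(atan(x_bad) + atan(2 * x_bad), pi / 4))  # False
```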
655,064
<p>Let $G$ be a group and $a\in G$. Define the centralizer of $a$ to be </p> <p>$\hspace{150pt} C(a)=\{g\in G : ga=ag\}$.</p> <p>That is, $C(a)$ consists of all the elements that commute with $a$. Show that $C(a)$ is a subgroup of $G$. </p> <hr> <p>Clearly $C(a)$ is nonempty. Since $G$ is a group, $\exists e\in G$ such that $ea=ae=a$, so $e\in C(a)$. It is also clear that $a,g\in C(a)$ since $aa=aa=a^2$ and $ga=ag$ by definition. </p> <p>Now suppose $(ag^{-1})a\ne a(ag^{-1})$.</p> <p>$\hspace{160pt}g(ag^{-1})a\ne ga(ag^{-1}) $</p> <p>$\hspace{160pt}(ga)g^{-1}a\ne (ga)ag^{-1} $</p> <p>$\hspace{160pt}(ag)g^{-1}a\ne (ag)ag^{-1} $</p> <p>$\hspace{160pt}a(gg^{-1})a\ne a(ga)g^{-1} $</p> <p>$\hspace{187pt}aa\ne a(ag)g^{-1} $</p> <p>$\hspace{187pt}aa\ne aa(gg^{-1}) $</p> <p>$\hspace{187pt}aa\ne aa $</p> <p>Therefore, $ag^{-1}\in C(a)$. </p> <p>$C(a)$ is a subset of $G$ by the subgroup theorem. </p> <hr> <p>I just wanted to see if I was going about this the right way. Is there maybe a better way to prove that $C(a)$ is a subgroup of $G$?</p>
Robert Lewis
67,071
<p>I had a hard time following the OP's extended calculation; I think he/she was trying to show that for $a, g \in C(a)$, $ag^{-1} \in C(a)$. It is certainly true that for a nonempty $S \subset G$, $S$ is a subgroup if and only if $s_1, s_2 \in S$ implies $s_1s_2^{-1} \in S$, but I think in the present case it is easier to proceed directly from first principles. We verify the group axioms for $C(a)$:</p> <p>1.) for $g_1, g_2 \in C(a)$, we have</p> <p>$(g_1g_2)a = g_1(g_2a) = g_1(ag_2) = (g_1a)g_2 = (ag_1)g_2 = a(g_1g_2), \tag{1}$</p> <p>showing $C(a)$ is closed under the group operation;</p> <p>2.) since $ea = ae$ for $e \in G$ the identity element, $e \in C(a)$;</p> <p>3.) if $g \in C(a)$, then $ag = ga$, whence $g^{-1}a = ag^{-1}$, showing $g^{-1} \in C(a)$ as well. </p> <p>It is clear that $C(a)$ is nonempty, since evidently both $e, a \in C(a)$. Having seen in items (1)-(3) above that $C(a)$ is closed under the group operation, contains the identity and inverses, we find that $C(a)$ is indeed a subgroup of $G$. <strong><em>QED!!!</em></strong></p> <p><em><strong>Remark:</strong></em> It is also clear that $C(a)$ contains $\langle a \rangle$, the cyclic subgroup generated by $a$.</p> <p>Hope this helps. Cheerio,</p> <p>and as always,</p> <p><em><strong>Fiat Lux!!!</strong></em></p>
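As a concrete sanity check (my own example, not from the post), the subgroup axioms for a centralizer can be verified by brute force in $S_3$:

```python
from itertools import permutations

# Elements of S3 as permutation tuples; compose(p, q) = "p after q"
elems = list(permutations(range(3)))

def compose(p, q):
    return tuple(p[q[i]] for i in range(3))

a = (1, 0, 2)  # a transposition
C = [g for g in elems if compose(g, a) == compose(a, g)]

identity = (0, 1, 2)
assert identity in C                                              # contains e
assert all(compose(g, h) in C for g in C for h in C)              # closed
assert all(any(compose(g, h) == identity for h in C) for g in C)  # inverses

print(sorted(C))  # the centralizer of a transposition in S3: just {e, a}
```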
335,611
<p>find this limit: $$\displaystyle\lim_{n\to+\infty}\left[\sum_{k=1}^{n}\left(\dfrac{1}{\sqrt{k}}- \int_{0}^{\large {1/\sqrt k}}\dfrac{t^2}{1+t^2}dt\right)-2\sqrt{n}\right]$$</p>
Ian Coley
60,524
<p>First, you can check that $$ \int_0^{1/\sqrt k} \frac{t^2}{1+t^2}\,dt=\int_0^{1/\sqrt k} 1-\frac{1}{1+t^2}\,dt=\frac{1}{\sqrt k}-\arctan\frac{1}{\sqrt k}, $$ so you may rewrite your limit as $$ \lim_{n\to\infty}\left[\sum_{k=1}^n\left(\arctan\frac{1}{\sqrt k}\right)-2\sqrt n\right]. $$</p> <p>This is at least somewhere to start.</p> <p>Additionally, we have $\arctan x + \arctan y = \arctan\frac{x + y}{1 - xy}$. This provides some hope of being able to simplify the sum on the inside.</p> <p>Further edit: No guarantee this will pan out to work, but it does seem more tractable.</p>
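A numerical check (illustrative only; it does not prove convergence, and no limit value is claimed) suggests the rewritten expression does settle down:

```python
from math import atan, sqrt

def partial(n):
    """Partial expression sum_{k=1}^n arctan(1/sqrt(k)) - 2*sqrt(n)."""
    return sum(atan(1 / sqrt(k)) for k in range(1, n + 1)) - 2 * sqrt(n)

for n in (10_000, 100_000, 200_000):
    print(n, partial(n))
# successive values agree to several decimals; the differences shrink
# roughly like 1/sqrt(n), consistent with the limit existing
```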
3,992,972
<p>I want to draw the set <span class="math-container">$M_{4}$</span>={(x, y) ; |xy|&lt; 1/4}</p> <p>My attempt is that I evaluate the expression |xy|&lt;1/4 depending on the values of x and y, i.e. depending on the quadrant.</p> <p>For the second quadrant I get the following:</p> <p>x &lt; 0 <span class="math-container">$\implies$</span> |x| = -x</p> <p>y <span class="math-container">$\ge$</span>0 <span class="math-container">$\implies$</span> |y| = y</p> <p><span class="math-container">$\implies$</span> |xy| =-xy &lt; <span class="math-container">$\frac{1}{4}$</span> <span class="math-container">$\implies$</span> y &gt; - <span class="math-container">$\frac{1}{4x}$</span></p> <p>Which does not make any sense, as the set is enclosed by the curves y = -<span class="math-container">$\frac{1}{4x}$</span> and y = <span class="math-container">$\frac{1}{4x}$</span>, and thus for the second quadrant I should get y &lt; - <span class="math-container">$\frac{1}{4x}$</span>. I don't understand where I go wrong, any help would be much appreciated!</p>
Community
-1
<p>You are right to consider the set in four different quadrants.</p> <p>For <span class="math-container">$xy&lt;0$</span>, you have <span class="math-container">$-xy&lt;\frac14$</span>.</p> <p>For <span class="math-container">$x&gt;0$</span> and <span class="math-container">$y&lt;0$</span>, <span class="math-container">$$ y&gt;\frac{-1}{4x} $$</span></p> <p>For <span class="math-container">$x&lt;0$</span> and <span class="math-container">$y&gt;0$</span>, <span class="math-container">$$ y&lt;\frac{-1}{4x} $$</span></p> <hr /> <p>Notes.</p> <p>The mistake you make is that for <span class="math-container">$x&lt;0$</span> and <span class="math-container">$y&gt;0$</span>, you should have <span class="math-container">$$ -xy&lt;\frac14\Rightarrow y\color{red}{&lt;}\frac{-1}{4x} $$</span> because you are dividing the quantity <span class="math-container">$-x$</span> on both sides and it is a positive quantity. So the direction of the inequality does not change.</p> <p><a href="https://i.stack.imgur.com/onOrr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/onOrr.png" alt="enter image description here" /></a></p>
4,343,249
<p>In a variant of Russian Roulette, you put 2 bullets in 2 adjacent chambers, like this:<a href="https://i.stack.imgur.com/2pVxr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2pVxr.png" alt="This is how it looks might look like, red circles represent bullet" /></a></p> <p>Image credit: Brilliant.org</p> <p>Now, the first person shoots and survives, and you are the second. The question is: in which scenario are you more likely to survive?</p> <ol> <li><p>You spin the barrel again, assuming a random spin (each chamber has equal probability).</p></li> <li><p>Shoot, without spinning.</p></li> </ol> <p>I have tried solving it like this: even though I got the correct result, my reasoning was incorrect. I calculated the probability of the 1st scenario and got 1/3 (2 possible ways of being killed, 6 chambers); this gives the chance of dying if I spin. Then I tried the probability of the 2nd scenario, and here I got the wrong result.</p> <p>P(dying in second scenario) = P(1st surviving)*P(you being hit) = <span class="math-container">$$ \frac{4}{6} * \frac25 = \frac4{15}$$</span> Which is approximately 0.27. Not to forget, I also realized that this answer gives P(dying), but I need P(dying given 1st survived). So I thought I needed to calculate how big the chance of this specific outcome (the 2nd dying, given the first survived) is. To do that I divided it by 2/3 - because that was the probability of the first one surviving. Also, I don't really know why we are dividing probability by probability.</p> <p>But the correct answer is 0.25. Reasoning like this: <a href="https://i.stack.imgur.com/Lq3gv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Lq3gv.png" alt="enter image description here" /></a></p> <p>There are 4 ways to get the specific result, and only 1 way this could happen.</p>
Vincent
101,420
<blockquote> <p>P(dying in second scenario) = P(1st surviving)*P(you being hit)</p> </blockquote> <p>(quote from your question)</p> <p>holds if you compute the probability of you dying <em>on forehand</em>, before the first person shoots. However, from the wording of the question I think they want you to compute the probability <em>after</em> the other person has shot and you <em>already know</em> that they survived.</p> <p>After all: that is the moment that you need to make the decision between the two scenarios and hence the most natural moment to calculate the probability.</p> <p>So you are more interested in calculating:</p> <p>P(you being hit | 1st surviving)</p>
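The two scenarios can also be enumerated exactly (a sketch I added; chambers are numbered 0–5, with the bullets in adjacent chambers 0 and 1):

```python
from fractions import Fraction

bullets = {0, 1}          # two adjacent chambers hold the bullets
starts = range(6)         # six equally likely spin positions

# Scenario 2 (don't spin): chamber s fired first; chamber (s+1) mod 6 fires next.
survived_first = [s for s in starts if s not in bullets]
p_die_no_spin = Fraction(sum((s + 1) % 6 in bullets for s in survived_first),
                         len(survived_first))

# Scenario 1 (spin again): a fresh uniformly random chamber.
p_die_respin = Fraction(len(bullets), 6)

print(p_die_no_spin, p_die_respin)  # 1/4 vs 1/3 -> not spinning is safer
```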
1,958,152
<blockquote> <p>I want to know why $\frac{\log4}{\log b}$ can't be simplified to $\frac4b$. </p> </blockquote> <p>I am a high school student. Please do not quote some theories that are too advanced for me. Thank you!</p>
OFRBG
42,793
<p>In case this hasn't already been explained to you: remember that $\log$ is the name of an operation. It's like $\cos(x)$. You can't, in general, simplify quotients of functions. So $\frac{\cos30º}{\cos45º}$ is not $\frac{30º}{45º}$. The same thing happens with logarithms.</p> <p>There are other relations you can use, but you cannot "simplify".</p> <p>For further information, dividing $\log$ by another $\log$ changes the base of the one on top. (I'm not sure if you know this, but when we write $\log$, we usually assume it is the logarithm base $e$.) Putting it together:</p> <p>$$ \frac{\log(A)}{\log(B)}=\log_B(A) $$</p> <p>where $\log_B(A)$ is no longer base $e$ (or the original one), but base $B$ instead.</p>
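A quick numerical illustration of the point (the values are chosen for illustration only):

```python
from math import log, isclose

# log(4)/log(b) happens to equal 4/b at b = 2 and b = 4 ...
assert isclose(log(4) / log(2), 4 / 2)
assert isclose(log(4) / log(4), 4 / 4)

# ... but in general it is the change of base log_b(4), not 4/b:
b = 3
print(log(4) / log(b))  # log_3(4), about 1.26
print(4 / b)            # about 1.33
```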
1,958,152
<blockquote> <p>I want to know why $\frac{\log4}{\log b}$ can't be simplified to $\frac4b$. </p> </blockquote> <p>I am a high school student. Please do not quote some theories that are too advanced for me. Thank you!</p>
Thomas Andrews
7,933
<p>The cancellation rule you know, $$\frac{ab}{ac}=\frac{b}{c}$$ is only true if $ab$ means "$a$ times $b$."</p> <p>But $\log 4$ doesn't mean multiplying something called "log" by the number $4$. It means we are applying an operation, "logarithm," to the value $4$. As notation, it looks similar, but it does not <strong>mean</strong> the same thing as multiplication.</p> <p>So the rule you know is, verbosely, written as:</p> <p>$$\frac{a\times b}{a\times c}=\frac{b}{c}$$</p> <p>When we write $ab$ to mean $a\times b$, then we can cancel like this.</p> <p>But $\log 4$ does not mean $\log \times 4$, which is meaningless.</p> <p>Now, as to why $\frac{\log 4}{\log b}\neq \frac{4}{b}$, specifically, the two are equal when $b=2$ or $b=4$, but not for any other $b$. $\frac{\log 4}{\log b}$ is not even defined when $b=1$, since $\log 1 = 0$. </p> <p>If $0&lt;b&lt;1$, then $\log b&lt;0$ and thus $\frac{\log 4}{\log b}&lt;0$, but $\frac{4}{b}&gt;0$.</p>
1,958,152
<blockquote> <p>I want to know why $\frac{\log4}{\log b}$ can't be simplified to $\frac4b$. </p> </blockquote> <p>I am a high school student. Please do not quote some theories that are too advanced for me. Thank you!</p>
Community
-1
<p>Well, suppose you <em>could</em> do such a simplification: $$ \frac{\log 4}{\log b}=\frac{4}{b}\tag{1} $$ You would end up with (do you know why?) $$ b\cdot \log 4=4\cdot\log b, $$ which implies (do you know why?) that $$ \log 4^b=\log b^4\tag{2}. $$ If (1) were true for every $b&gt;0$, then (2) would also have to be true for every $b&gt;0$, and in particular for $b=1$, which gives $\log 4=\log 1$. But that is impossible. </p>
3,755,638
<p>Given a point <span class="math-container">$A$</span>, a circle <span class="math-container">$O$</span> and conic section <span class="math-container">$e$</span>, if <span class="math-container">$BC$</span> is a moving chord of the circle <span class="math-container">$O$</span> tangent to <span class="math-container">$e$</span>, then prove that<br /> <strong>the locus of △<span class="math-container">$ABC$</span>'s circumcenters <span class="math-container">$T$</span> is a conic section.</strong><br /> The question was posted in <a href="https://tieba.baidu.com/f?kw=%E7%BA%AF%E5%87%A0%E4%BD%95" rel="noreferrer">纯几何吧</a> by TelvCohl and remained unsolved for many years but regrettably I cannot provide the link because the post was deleted by Baidu accidentally.<br /> It seems that the locus related to circumcenter is often a conic section.Another example:<br /> The directions of two sides of a triangle is fixed and the third side passes through a fixed point, then the locus of the circumcenter is a conic section.(<em>The elementary geometry of conics</em>.1883)</p> <p><a href="https://i.stack.imgur.com/bHkXD.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/bHkXD.jpg" alt="enter image description here" /></a></p>
brainjam
1,257
<p>This is an observation, not an answer, that is too long to put in a comment. It introduces a new conic, and runs the OP's construction in reverse. The diagram below is based on the original diagram, and adds a given conic <span class="math-container">$e$</span> (in black) with foci <span class="math-container">$O$</span> and <span class="math-container">$A$</span> and major axis length <span class="math-container">$r$</span>. The other given conic is <span class="math-container">$f$</span> (rose colored). The reverse construction will create the green circle and the orange conic.</p> <p>For a point <span class="math-container">$T$</span> on <span class="math-container">$f$</span>, construct its tangents to <span class="math-container">$e$</span>. Reflect point <span class="math-container">$A$</span> in both tangents, producing points <span class="math-container">$B$</span> and <span class="math-container">$C$</span>. (The tangents are the perpendicular bisectors in the original construction, whose intersection is the circumcenter.) Then as <span class="math-container">$T$</span> moves on <span class="math-container">$f$</span>, <span class="math-container">$B$</span> and <span class="math-container">$C$</span> run along the circle <span class="math-container">$c$</span>, and the moving chord <span class="math-container">$BC$</span> traces out the orange conic as its envelope.</p> <p>Circle <span class="math-container">$c$</span> has center <span class="math-container">$O$</span> and radius <span class="math-container">$r$</span> and is the orthotomic circle of <span class="math-container">$e$</span> with respect to focus <span class="math-container">$A$</span>. This is basically the <a href="https://en.wikipedia.org/wiki/Pedal_curve" rel="nofollow noreferrer">pedal curve</a> of <span class="math-container">$e$</span> with respect to <span class="math-container">$A$</span>, scaled up by 2. 
It's also known as the <a href="https://en.wikipedia.org/wiki/Focus_(geometry)#Defining_conics_in_terms_of_a_focus_and_a_directrix_circle" rel="nofollow noreferrer">directrix circle</a> of <span class="math-container">$e$</span>.</p> <p>All this said, it's not obvious that this observation will help getting the proof requested in the question.</p> <p><a href="https://i.stack.imgur.com/v7EFf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/v7EFf.png" alt="enter image description here" /></a></p>
804,414
<p>A fair die is thrown repeatedly. Let $X$ be the number of throws required to get a '$6$' and $Y$ the number of throws required to get a '$5$'. Find $$E(X|Y=5)$$</p>
Graham Kemp
135,106
<p>We cannot get both 5 and 6 on the fifth trial, so $P(X=5 \mid Y=5)=0$</p> <p>$\mathrm{\large E}(X\mid Y=5) = \sum\limits_{x=1}^{4} x\cdot\mathrm{\large P}(X=x\mid Y=5) + \sum\limits_{x=6}^{\infty} x\cdot\mathrm{\large P}(X=x\mid Y=5)$</p> <p>To unconditionally get a 5 on the fifth trial, we need four not 5 before a 5 on the fifth.$$\mathrm{\large P}(Y=5) = \left(\frac{5}{6}\right)^4 \frac 16$$ </p> <p>To get a 6 before getting a 5 on the fifth trial, we have $(x-1)$ rolls that are neither, a 6 on the $x^{th}$, $(4-x)$ rolls that are not 5, then a 5 on the fifth trial. </p> <p>$$\mathrm{\large P}(X=x\mid Y=5)|_{x&lt;5} = \dfrac{\mathrm{\large P}(X=x , Y=5)|_{x&lt;5}}{\mathrm{\large P}(Y=5)} \\ = \dfrac{\left(\frac{4}{6}\right)^{x-1}\frac{1}{6}\left(\frac{5}{6}\right)^{4-x}\frac{1}{6}}{\left(\frac{5}{6}\right)^4\frac{1}{6}} = \dfrac{4^{x-1}5^{4-x}}{6^5}\cdot\dfrac{6^5}{5^4} = \dfrac{1}{4} \cdot\dfrac{4^{x}}{5^{x}}$$</p> <p>To get a 6 after getting a 5 on the fifth trial, we have $4$ rolls that are neither, a 5 on the fifth trial, $(x-6)$ rolls that are not 6, then a 6 on the $x^{th}$ trial. 
</p> <p>$$\mathrm{\large P}(X=x\mid Y=5)|_{x&gt;5} = \dfrac{\mathrm{\large P}(X=x , Y=5)|_{x&gt;5}}{\mathrm{\large P}(Y=5)} \\ = \dfrac{\left(\frac{4}{6}\right)^{4}\frac{1}{6}\left(\frac{5}{6}\right)^{x-6}\frac{1}{6}}{\left(\frac{5}{6}\right)^4\frac{1}{6}} = \dfrac{4^45^{x-6}}{6^x}\cdot\dfrac{6^5}{5^4} = \dfrac{4^4}{5^5}\cdot\dfrac{5^{x-5}}{6^{x-5}}$$</p> <p>So $$P(X=x\mid Y=5) = \begin{cases} \dfrac{1}{4}\cdot\dfrac{4^{x}}{5^{x}} &amp; 0&lt; x &lt; 5 \\ \dfrac{4^4}{5^5}\cdot\dfrac{5^{x-5}}{6^{x-5}} &amp; x&gt;5 \\ 0 &amp; \text{elsewhere}\end{cases}$$ (As a check, these probabilities sum to $1$: the first branch contributes $\frac{369}{625}$ and the second $\frac{256}{625}$.)</p> <p>Putting it together: $$\mathrm{\large E}(X\mid Y=5) = \sum\limits_{x=1}^{4} x\cdot\mathrm{\large P}(X=x\mid Y=5)|_{x&lt;5} + \sum\limits_{x=6}^{\infty} x\cdot\mathrm{\large P}(X=x\mid Y=5)|_{x&gt;5} \\ = \dfrac{1}{4}\sum\limits_{x=1}^{4} x\left(\dfrac{4}{5}\right)^x + \dfrac{4^4}{5^5}\sum\limits_{x=6}^{\infty} x \left(\dfrac{5}{6}\right)^{x-5} \\ = \dfrac{821}{625} + \dfrac{14080}{3125} = \dfrac{18185}{3125} \approx 5.8192 $$</p>
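An independent Monte Carlo cross-check (a sketch of my own, not part of the original answer): simulate fair-die rolls, keep only the runs in which the first 5 appears on trial 5, and average the trial number of the first 6.

```python
import random

random.seed(1)

def first_occurrences():
    """Roll a fair die until both a 5 and a 6 have appeared;
    return (trial of first 5, trial of first 6)."""
    first5 = first6 = None
    t = 0
    while first5 is None or first6 is None:
        t += 1
        r = random.randint(1, 6)
        if r == 5 and first5 is None:
            first5 = t
        if r == 6 and first6 is None:
            first6 = t
    return first5, first6

xs = []
while len(xs) < 20_000:
    y, x = first_occurrences()
    if y == 5:          # rejection sampling: keep only runs with Y = 5
        xs.append(x)

print(sum(xs) / len(xs))  # Monte Carlo estimate of E(X | Y = 5)
```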
223,087
<p>Given a list of numbers in decimal form, what is the most efficient way to determine if there are any consecutive 1s in the binary forms of those numbers? My solution so far:</p> <pre><code>dim = 3; declist = Range[0, 2^dim - 1]; consecutiveOnes[binary_] := AnyTrue[Total /@ Split[binary], # &gt; 1 &amp;]; consecutiveOnes[#] &amp; /@ IntegerDigits[declist, 2] </code></pre> <p>which gives <code>{False, False, False, True, False, False, True, True}</code>, in accordance with the binary representations <code>{{0}, {1}, {1, 0}, {1, 1}, {1, 0, 0}, {1, 0, 1}, {1, 1, 0}, {1, 1, 1}}</code>.</p> <p>For <code>dim=15</code> this takes ~600ms on my machine, which seems a little high, and I just want to see if there is a cleaner way to do it. I've tried using BlockMap with Times but it was much slower.</p> <p>Two "extras":</p> <ol> <li><p>I guess as a comment, it is also acceptable if your method simply returns all decimal numbers up to some max number for which the binary representations have no consecutive 1s. In other words, I'm just going to run <code>Pick</code> on the <code>declist</code> with the negated results of this function, so if your solution just cuts out the middle man, that is great/acceptable.</p></li> <li><p>I also care about the possibility of "wrapping around", i.e. if the first and last binary digits are both 1s. Obviously I could just append the first digit to the end of each list, but perhaps this is not the most efficient way to proceed.</p></li> </ol> <p><strong>Addendum</strong>: Some great solutions! 
I took the liberty of implementing and speed testing them, with some minor modifications - hopefully I have not distorted your codes too badly:</p> <pre><code>dim = 15; declist = Range[0, 2^dim - 1]; m1[range_] := FromDigits[#, 2] &amp; /@ DeleteCases[IntegerDigits[range, 2], {___, 1, 1, ___}]; m2helper[num_] := NoneTrue[Total /@ Split[num], # &gt; 1 &amp;]; m2[range_] := Pick[declist, m2helper[#] &amp; /@ IntegerDigits[range, 2]]; m3helper[num_] := NestWhile[Quotient[#, 2] &amp;, num, # &gt; 0 &amp;&amp; BitAnd[#, 3] != 3 &amp;] &gt; 0 m3[range_] := Pick[declist, Not[m3helper[#]] &amp; /@ range]; m41 = (4^(Ceiling[dim/2]) - 1)/3; m42 = 2 m41; m4helper = Function[{n}, Evaluate[ Nor[BitAnd[BitAnd[n, m42], BitShiftLeft[BitAnd[n, m41], 1]] &gt; 0, BitAnd[BitAnd[n, m42], BitShiftRight[BitAnd[n, m41], 1]] &gt; 0]], {Listable}]; m4[range_] := Pick[declist, m4helper[range]]; Clear[m5]; m5[0] = {0}; m5[1] = {0, 1}; m5[n_?(IntegerQ[#] &amp;&amp; # &gt; 1 &amp;)] := m5[n] = Join[m5[n - 1], 2^(n - 1) + m5[n - 2]] m6[range_] := Pick[range, Thread[BitAnd[range, BitShiftRight[range, 1]] == 0]]; aa = m1[declist] // RepeatedTiming; bb = m2[declist] // RepeatedTiming; cc = m3[declist] // RepeatedTiming; dd = m4[declist] // RepeatedTiming; ee = m5[dim] // AbsoluteTiming; ff = m6[declist] // RepeatedTiming; Column[{aa[[1]], bb[[1]], cc[[1]], dd[[1]],ee[[1]],ff[[1]]}] aa[[2]] == bb[[2]] == cc[[2]] == dd[[2]] == ee[[2]]==ff[[2]] </code></pre> <p>yields</p> <pre><code>0.0464 0.619 0.322 0.0974 0.00024 0.0086 True </code></pre> <p>So the direct construction method seems clearly the fastest - still this does "skip" the actual pruning step, which is not required for me, but maybe is in other use cases. If the actual pruning list is desired, it seems like the direct <code>BitAnd</code>+<code>BitShiftRight</code> method is fastest, followed by the <code>SelectCases</code>/<code>DeleteCases</code>. But if other people have other methods, certainly share them!</p>
Nasser
70
<p>You can use <code>SequenceCases</code> to check if there is <code>1,1</code> anywhere. For example</p> <pre><code> SequenceCases[{0, 1, 1, 1, 0, 0, 1, 0, 1}, {___, 1, 1, ___}] </code></pre> <p>Then check whether the result is <code>{}</code>, since you only care if there is at least one such occurrence within.</p> <p>Here is an example</p> <pre><code>data = {#, n = IntegerDigits[#, 2]; z1 = (StringJoin[ToString[#] &amp; /@ n]); z2 = If[SequenceCases[n, {___, 1, 1, ___}] === {}, False, True]; z1, z2} &amp; /@ Range[0, 25]; Grid[data, Frame -&gt; All] </code></pre> <p><a href="https://i.stack.imgur.com/hFUcA.png" rel="noreferrer"><img src="https://i.stack.imgur.com/hFUcA.png" alt="enter image description here"></a></p> <hr> <blockquote> <p>I also care about the possibility of "wrapping around", i.e. if the first and last binary digits are both 1s.</p> </blockquote> <p>The above does not yet handle this special case, but it can easily be added with one extra check when the first test fails. 
Here is an implementation of this</p> <pre><code>check[n_Integer] := Module[{z1, z2, m}, m = IntegerDigits[n, 2]; z1 = (StringJoin[ToString[#] &amp; /@ m]); z2 = If[SequenceCases[m, {___, 1, 1, ___}] === {}, If[First[m] == 1 &amp;&amp; Last[m] == 1 &amp;&amp; Length[m] &gt; 1, True , False ] , True]; {n, z1, z2} ]; </code></pre> <p>Call it as</p> <pre><code>Grid[check[#] &amp; /@ Range[0, 25], Frame -&gt; All] </code></pre> <p><a href="https://i.stack.imgur.com/L2A3Q.png" rel="noreferrer"><img src="https://i.stack.imgur.com/L2A3Q.png" alt="enter image description here"></a></p> <hr> <p>If you want the function to just return True/False, so you can use Pick, simply change to </p> <pre><code>check[n_Integer] := Module[{m}, m = IntegerDigits[n, 2]; If[SequenceCases[m, {___, 1, 1, ___}] === {}, If[First[m] == 1 &amp;&amp; Last[m] == 1 &amp;&amp; Length[m] &gt; 1, True , False ] , True] ]; </code></pre> <p>And call it as</p> <pre><code>check[#] &amp; /@ Range[0, 25] {False, False, False, True, False, True, True, True, False, True, False, True, True, True, True, True, False, True, False, True, False, True, True, True, True, True} </code></pre>
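For comparison outside Mathematica, the same test, including a wrap-around check, can be sketched in Python using the bit-shift idea (note this uses a fixed bit width <code>dim</code>, a slightly different convention from <code>IntegerDigits</code>, which drops leading zeros):

```python
def has_adjacent_ones(n, dim):
    """True if the dim-bit word n contains two adjacent 1 bits,
    treating the bits as circular (bit 0 is a neighbour of bit dim-1)."""
    if n & (n >> 1):                     # a linear 11 pair anywhere
        return True
    # wrap-around: lowest and highest bit both set
    return dim > 1 and bool(n & 1) and bool((n >> (dim - 1)) & 1)

dim = 3
print([n for n in range(2**dim) if not has_adjacent_ones(n, dim)])
# -> [0, 1, 2, 4]
```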
223,087
<p>Given a list of numbers in decimal form, what is the most efficient way to determine if there are any consecutive 1s in the binary forms of those numbers? My solution so far:</p> <pre><code>dim = 3; declist = Range[0, 2^dim - 1]; consecutiveOnes[binary_] := AnyTrue[Total /@ Split[binary], # &gt; 1 &amp;]; consecutiveOnes[#] &amp; /@ IntegerDigits[declist, 2] </code></pre> <p>which gives <code>{False, False, False, True, False, False, True, True}</code>, in accordance with the binary representations <code>{{0}, {1}, {1, 0}, {1, 1}, {1, 0, 0}, {1, 0, 1}, {1, 1, 0}, {1, 1, 1}}</code>.</p> <p>For <code>dim=15</code> this takes ~600ms on my machine, which seems a little high, and I just want to see if there is a cleaner way to do it. I've tried using BlockMap with Times but it was much slower.</p> <p>Two "extras":</p> <ol> <li><p>I guess as a comment, it is also acceptable if your method simply returns all decimal numbers up to some max number for which the binary representations have no consecutive 1s. In other words, I'm just going to run <code>Pick</code> on the <code>declist</code> with the negated results of this function, so if your solution just cuts out the middle man, that is great/acceptable.</p></li> <li><p>I also care about the possibility of "wrapping around", i.e. if the first and last binary digits are both 1s. Obviously I could just append the first digit to the end of each list, but perhaps this is not the most efficient way to proceed.</p></li> </ol> <p><strong>Addendum</strong>: Some great solutions! 
I took the liberty of implementing and speed testing them, with some minor modifications - hopefully I have not distorted your codes too badly:</p> <pre><code>dim = 15; declist = Range[0, 2^dim - 1]; m1[range_] := FromDigits[#, 2] &amp; /@ DeleteCases[IntegerDigits[range, 2], {___, 1, 1, ___}]; m2helper[num_] := NoneTrue[Total /@ Split[num], # &gt; 1 &amp;]; m2[range_] := Pick[declist, m2helper[#] &amp; /@ IntegerDigits[range, 2]]; m3helper[num_] := NestWhile[Quotient[#, 2] &amp;, num, # &gt; 0 &amp;&amp; BitAnd[#, 3] != 3 &amp;] &gt; 0 m3[range_] := Pick[declist, Not[m3helper[#]] &amp; /@ range]; m41 = (4^(Ceiling[dim/2]) - 1)/3; m42 = 2 m41; m4helper = Function[{n}, Evaluate[ Nor[BitAnd[BitAnd[n, m42], BitShiftLeft[BitAnd[n, m41], 1]] &gt; 0, BitAnd[BitAnd[n, m42], BitShiftRight[BitAnd[n, m41], 1]] &gt; 0]], {Listable}]; m4[range_] := Pick[declist, m4helper[range]]; Clear[m5]; m5[0] = {0}; m5[1] = {0, 1}; m5[n_?(IntegerQ[#] &amp;&amp; # &gt; 1 &amp;)] := m5[n] = Join[m5[n - 1], 2^(n - 1) + m5[n - 2]] m6[range_] := Pick[range, Thread[BitAnd[range, BitShiftRight[range, 1]] == 0]]; aa = m1[declist] // RepeatedTiming; bb = m2[declist] // RepeatedTiming; cc = m3[declist] // RepeatedTiming; dd = m4[declist] // RepeatedTiming; ee = m5[dim] // AbsoluteTiming; ff = m6[declist] // RepeatedTiming; Column[{aa[[1]], bb[[1]], cc[[1]], dd[[1]],ee[[1]],ff[[1]]}] aa[[2]] == bb[[2]] == cc[[2]] == dd[[2]] == ee[[2]]==ff[[2]] </code></pre> <p>yields</p> <pre><code>0.0464 0.619 0.322 0.0974 0.00024 0.0086 True </code></pre> <p>So the direct construction method seems clearly the fastest - still this does "skip" the actual pruning step, which is not required for me, but maybe is in other use cases. If the actual pruning list is desired, it seems like the direct <code>BitAnd</code>+<code>BitShiftRight</code> method is fastest, followed by the <code>SelectCases</code>/<code>DeleteCases</code>. But if other people have other methods, certainly share them!</p>
flinty
72,682
<p><strong>Update</strong>: just use <code>BitAnd[x, BitShiftRight[x, 1]] &gt; 0</code>. It's 10x faster than below. Bit-level parallelism beats multiple shifts every time.</p> <p>This method is super fast and uses little memory all the way out to truly astronomical numbers like <span class="math-container">$2^{8192} + 2^{8191}$</span>.</p> <pre><code>hasConsecBits[x_] := NestWhile[Quotient[#, 2] &amp;, x, # &gt; 0 &amp;&amp; BitAnd[#, 3] != 3 &amp;] &gt; 0 (* hasConsecBits[2^8192 + 2^8191] == True *) (* timing, around 0.015625 seconds *) </code></pre> <p><code>AbsoluteTiming</code> for small numbers is on the order of 2.*10^-7. You can replace <code>Quotient[#,2]</code> with <code>BitShiftRight[#,1]</code> if you want - the performance gain is negligible.</p> <p>For wraparound, it's a very simple extension. Since all binary numbers x > 0 start with a 1, any number with wraparound will have the top bit and bottom bit set - i.e it's an odd number bigger than 1 or it has consecutive bits in the middle:</p> <pre><code>hasConsecBitsWithWrap[x_] := ((x &gt; 1) &amp;&amp; OddQ[x]) || hasConsecBits[x] </code></pre> <p>On my machine this takes 1 second for one million numbers:</p> <pre><code>ParallelTable[hasConsecBits[x], {x, 0, 1000000}] // Timing </code></pre>
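<p>The bit-level test translates directly to other languages. A hypothetical Python rendering of the update at the top (<code>x &amp; (x &gt;&gt; 1)</code>) and its wraparound variant:</p>

```python
def has_consec_bits(x: int) -> bool:
    # bit-level parallelism: a set bit survives the AND only if the
    # bit immediately above it is also set
    return (x & (x >> 1)) != 0

def has_consec_bits_with_wrap(x: int) -> bool:
    # wraparound means the top and bottom bits are both set,
    # i.e. x is an odd number greater than 1
    return (x > 1 and x % 2 == 1) or has_consec_bits(x)
```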
386,655
<p>Recently Ernest Davis asked me about the following computational problem: we're given as input a composite integer <span class="math-container">$n$</span>, a divisor <span class="math-container">$k$</span> of <span class="math-container">$n$</span>, and a subset <span class="math-container">$S \subset \mathbb{Z}_n$</span> of size k. The problem is to decide whether <span class="math-container">$\mathbb{Z}_n$</span> can be covered with <span class="math-container">$n/k$</span> cyclic translations of <span class="math-container">$S$</span>, i.e. sets of the form <span class="math-container">$S+a_i$</span> for various <span class="math-container">$a_i \in \mathbb{Z}_n$</span>. This is simply a special case of the NP-complete EXACT COVER problem---namely, where the available sets are all cyclic shifts of each other, and where all cyclic shifts are available. My suspicion is that the special case is already NP-complete, while Ernest suspects that an <span class="math-container">$n^{O(1)}$</span> time algorithm exists. I searched Google (and Garey&amp;Johnson) and couldn't find leads -- would appreciate any thoughts or references!</p>
Jan Kyncl
24,076
<p>Here are a few references; the right keywords seem to be &quot;tiling by translation&quot; or a &quot;factorization of a group&quot;:</p> <p>Mihail N. Kolountzakis and Máté Matolcsi, Algorithms for translational tiling, Journal of Mathematics and Music, 3:2 (2009), 85-97, <a href="https://doi.org/10.1080/17459730903040899" rel="noreferrer">https://doi.org/10.1080/17459730903040899</a></p> <p>Mihail N. Kolountzakis and Mate Matolcsi, Tilings by translation (2010), <a href="https://arxiv.org/abs/1009.3799" rel="noreferrer">https://arxiv.org/abs/1009.3799</a></p> <p>See also the following MO question about a more general problem about covering density (2013): <a href="https://mathoverflow.net/questions/137219/is-this-a-known-combinatorial-problem">Is this a known combinatorial problem?</a></p>
1,989,229
<p>My approach: I tried using integral calculus and infinite geometric series; however, the results didn't match. Any tricks?</p>
MrYouMath
262,304
<p>Hint: $$\frac{1}{1-x}=1+x+x^2+x^3+\dots$$ Differentiating with respect to $x$ and assuming $|x|&lt;1$, which guarantees uniform convergence:</p> <p>$$\dfrac{d}{dx}\left(\frac{1}{1-x}\right)=1+2x+3x^2+\dots.$$ (you should also check the radius of convergence of the resulting expression.)</p> <hr> <p>An Alternative approach is based on the assumption of absolute convergence of the series:</p> <p>$$S = 1 + x + x^2 + \dots$$ $$ + x + x^2 + \dots$$ $$ + x^2 + \dots$$ $$ \dots $$</p>
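<p>A quick numerical sanity check of the hint (a hypothetical sketch, not part of the argument): for <span class="math-container">$|x|&lt;1$</span> the partial sums of <span class="math-container">$\sum_{n\ge 1} n x^{n-1}$</span> should approach <span class="math-container">$1/(1-x)^2$</span>.</p>

```python
def partial_sum(x: float, terms: int = 200) -> float:
    # sum_{n=1}^{terms} n * x^(n-1), the differentiated geometric series
    return sum(n * x ** (n - 1) for n in range(1, terms + 1))

approx = partial_sum(0.5)
exact = 1 / (1 - 0.5) ** 2       # derivative of 1/(1-x), evaluated at x = 0.5
```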
272,752
<p><strong>Bug introduced in 12.2.0 and persisting through 13.1</strong></p> <hr /> <p>I have selected the following minimal example from the documentation. I am running v12.2.0 Win7-x64.</p> <pre><code>GeoRegionValuePlot[ EntityClass[&quot;Country&quot;, &quot;SouthAmerica&quot;] -&gt; &quot;MerchantShips&quot;] </code></pre> <p><a href="https://i.stack.imgur.com/lchfF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lchfF.png" alt="enter image description here" /></a></p> <p><strong>Question:</strong> How do I put a <code>GeoScaleBar</code> on this plot in the lower right corner?</p> <p>Thanks for your help in advance.</p>
MarcoB
27,951
<p>I am not sure why it doesn't work in the &quot;obvious&quot; way, i.e. by just adding the <code>GeoScaleBar</code> option to <code>GeoRegionValuePlot</code>, but it can be accomplished using <code>Show</code> after the fact:</p> <pre><code>Show[ GeoRegionValuePlot[EntityClass[&quot;Country&quot;, &quot;SouthAmerica&quot;] -&gt; &quot;MerchantShips&quot;], GeoScaleBar -&gt; Placed[{&quot;Imperial&quot;, &quot;Metric&quot;}, {Right, Bottom}] ] </code></pre> <p><a href="https://i.stack.imgur.com/6aVH7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6aVH7.png" alt="plot with legend and placed scale bar" /></a></p> <p>I feel that the obvious solution <em>should</em> work though, and this should be reported to Wolfram support.</p>
96,198
<p>I'm trying to prove that the variance of a RV whose values are discrete 1's or 0's is greater than the variance of a RV whose values are 0's or continuous on the domain (0,1], where any "1" in the Bernoulli RV corresponds to a value on (0,1] in the other RV. Intuitively, I think this is the case, but I'm trying to demonstrate it.</p> <p>For instance, I know that $Var[X] &gt; Var[\alpha X]$ where $0 \le \alpha &lt; 1$, but what if $\alpha$ is a RV on the interval $(0,1]$?</p> <p>The reason I'm asking is that I'm sampling a RV with values on the interval (0,1] and I'm using a binomial confidence interval as an upper bound (you can read more of the <a href="https://stats.stackexchange.com/questions/15567/putting-a-confidence-interval-on-the-mean-of-a-very-rare-event">context from an old discussion here</a>), and I need to prove that that's a reasonable upper bound on the confidence interval, since the values could all be 1's or 0's, but have the possibility of being between those values.</p> <p><strong>Edit</strong> Sasha provided a counterexample to my question as originally stated. The distribution in question is such that it's Bernoulli-like, however instead of having only 1's and 0's, the "true" values can take on values on the interval (0,1]. So, for Sasha's case of $p_{true} = 0.01$, the distribution in question would have a delta function at 0 of value $1-p_{true}$ and the rest of the distribution would be on (0,1].</p>
Michael Hardy
11,667
<p>$$ \begin{align} 1 &amp; = E(X+Y+Z\mid X+Y+Z=1) \\ \\ &amp; = E(X\mid X+Y+Z=1) + E(Y\mid X+Y+Z=1) + E(Z\mid X+Y+Z=1). \end{align} $$ If they're <b>independent</b>, you've got symmetry that justifies the conclusion that the three terms are equal. You can get by with much weaker hypotheses than independence if they justify symmetry.</p> <p>With the variance I'll just assume <b>independence</b> and leave weaker hypotheses for another occasion. We have this bivariate normal distribution: $$ \begin{bmatrix} X \\ X+Y+Z \end{bmatrix} \sim N\left( \begin{bmatrix} 0 \\ 0 \end{bmatrix}, \begin{bmatrix} 1 &amp; 1 \\ 1 &amp; 3 \end{bmatrix} \right), $$ and hence $\operatorname{cor}(X,X+Y+Z)=1/\sqrt{3}$.</p> <p>So how do we think about the bivariate normal distribution $$ \begin{bmatrix} U \\ V \end{bmatrix} \sim N\left(\begin{bmatrix} 0 \\ 0 \end{bmatrix}, \begin{bmatrix} \sigma^2, &amp; \rho\sigma\tau \\ \rho\sigma\tau, &amp; \tau^2 \end{bmatrix} \right) $$ where $\sigma,\tau$ are standard deviations and $\rho$ is the correlation?</p> <p>Conditional on $V$ being a certain number of standard deviations above its mean, $U$ is expected to be $\rho$ times that many standard deviations above <em>its</em> mean. Thus $$ E(U\mid V) = \rho\sigma \frac V\tau = \frac{\rho\sigma\tau}{\tau^2} V. $$ By the law of total variance we have $$ \begin{align} \sigma^2 = \operatorname{var}(U) &amp; = E(\operatorname{var}(U \mid V)) + \operatorname{var}(E(U\mid V)) \tag{law of total variance} \\ \\ &amp; = E(\operatorname{var}(U \mid V)) + \operatorname{var}\left( \frac{\rho\sigma\tau}{\tau^2} V \right) = E(\operatorname{var}(U \mid V)) + \rho^2\sigma^2. \end{align} $$ If you accept "homoscedasticity", i.e. the conditional variance of $U$ given $V$ does not depend on $V$, then you get $\operatorname{var}(U\mid V) = (1-\rho^2)\sigma^2$.</p> <p>Applied to the particular distribution we're considering, this says $$ \operatorname{var}(X\mid X+Y+Z) = \left(1 - \frac13\right)\cdot 1 = \frac23. $$</p>
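<p>The conclusion is easy to spot-check by simulation. A hypothetical Monte Carlo sketch, conditioning on a thin band around <span class="math-container">$X+Y+Z=1$</span> since the exact event has probability zero:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
N = 2_000_000
X, Y, Z = rng.standard_normal((3, N))
S = X + Y + Z

# approximate the zero-probability event {X+Y+Z = 1} by a thin band
band = np.abs(S - 1) < 0.05
cond_mean = X[band].mean()   # should be close to 1/3
cond_var = X[band].var()     # should be close to 2/3
```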
4,630,145
<p>We need to determine all <span class="math-container">$\mathbb{Q}$</span>-homomorphisms from <span class="math-container">$\mathbb{Q}(\sqrt[5]2)$</span> to <span class="math-container">$\mathbb{Q}(\sqrt[5]2, \omega)$</span>, where <span class="math-container">$\omega=e^{2\pi i/5}$</span>.</p> <p>I can figure it out that there are 5 injective homomorphisms because there are <span class="math-container">$5$</span> roots of <span class="math-container">$x^5-2$</span> in <span class="math-container">$\mathbb{Q}(\sqrt[5]2, \omega)$</span>.</p> <p>But how do I proceed after this to find all of them?</p>
cigar
1,070,376
<p>We need <span class="math-container">$\varphi (\sqrt[5]2)^5=\varphi (\sqrt[5]2^5)=\varphi (2)=2,$</span> because <span class="math-container">$\varphi$</span> is a homomorphism.</p> <p>Thus there are <span class="math-container">$5$</span>:</p> <p><span class="math-container">$$\sqrt [5]2\to\sqrt [5]2\omega ^i,\quad i=1,\dots, 5.$$</span></p>
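<p>A quick numerical check (hypothetical, not part of the argument) that each candidate image really is again a fifth root of <span class="math-container">$2$</span>, and that the five images are distinct:</p>

```python
import cmath

root = 2 ** (1 / 5)
omega = cmath.exp(2j * cmath.pi / 5)

# the five candidate images of 2^(1/5) under a Q-homomorphism
images = [root * omega ** i for i in range(5)]
errors = [abs(z ** 5 - 2) for z in images]
```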
1,864,939
<p>I am surprised that this question hasn't been asked on here</p> <p><strong>I need to show that</strong> </p> <blockquote> <p>$$d_2(f,g) = \sqrt{\int\limits_0^1 |f(x) - g(x)|^2 dx}$$</p> </blockquote> <p>is a metric on $C[0,1]$</p> <hr> <p><strong>Proof:</strong></p> <ul> <li><p>As usual, positive semidefiniteness and symmetry are trivial</p> <p>We want to show that </p> <p>$$d_2(f,g) \leq d_2(f,h) + d_2(h,g)$$ for some $f,g,h \in C^1$</p></li> </ul> <p>(Unfortunately, I am not sure how to approach this in the most efficient manner)</p> <p>$ d_2(f,g) = \sqrt{\int\limits_0^1 |f(x) - g(x)|^2 dx}$</p> <p>$ = \sqrt{\int\limits_0^1 |(f(x) - g(x))^2| dx}$ $ = \sqrt{\int\limits_0^1 |f^2 - 2gf + g^2| dx} $</p> <p>$ = \sqrt{\int\limits_0^1 |(f^2 -2hf-h^2 + 2hf+h^2)- 2gf + (g^2 -2hg-h^2 + 2hg+h^2)| dx}$ </p> <p>$ = \sqrt{\int\limits_0^1 |(f-h)^2 + 2hf-h^2- 2gf + (g-h)^2 + 2hg-h^2)| dx}$ </p> <p>$ \leq \sqrt{\int\limits_0^1 |(f-h)^2| dx+ \int\limits_0^1 |(g-h)^2| dx +\int\limits_0^1 |2hf-2h^2- 2gf + 2hg| dx}$</p> <p>$\leq d_2(f,h) + d_2(h,g) + \sqrt{\int\limits_0^1 |2hf-2h^2- 2gf + 2hg| dx}$</p> <p>Seems like this method cannot work....Can someone offer suggestions on how to fix this</p>
Jack D'Aurizio
44,121
<p>It is an overkill, but I think it is an interesting one. $C^0(0,1)\subset L^2(0,1)$, hence we may assume that the functions we are dealing with have a Fourier-Legendre expansion in terms of <a href="https://en.wikipedia.org/wiki/Legendre_polynomials" rel="nofollow">shifted Legendre polynomials</a>: they give an orthogonal complete base of $L^2(0,1)$ with respect to the usual inner product and by assuming $$ f(x)=\sum_{n\geq 0}c_n P_n(2x-1)\tag{1} $$ we have, by Parseval's identity: $$ \| f \|_2 = \sqrt{\sum_{n\geq 0}\frac{c_n^2}{2n+1}}\tag{2} $$ so it is enough to work in <a href="https://en.wikipedia.org/wiki/Hilbert_space#Sequence_spaces" rel="nofollow">$\ell^2$</a>. We may notice that: $$ \sum_{n\geq 0}\frac{(a_n+b_n)^2}{2n+1}=\sum_{n\geq 0}\frac{a_n^2}{2n+1}+\sum_{n\geq 0}\frac{b_n^2}{2n+1}+2\sum_{n\geq 0}\frac{a_n b_n}{2n+1}\tag{3}$$ but due to the <a href="https://en.wikipedia.org/wiki/Cauchy%E2%80%93Schwarz_inequality" rel="nofollow">Cauchy-Schwarz inequality</a>: $$ \left|\sum_{n\geq 0}\frac{a_n b_n}{2n+1}\right|\leq \sqrt{\left(\sum_{n\geq 0}\frac{a_n^2}{2n+1}\right)\cdot\left(\sum_{n\geq 0}\frac{b_n^2}{2n+1}\right)}\tag{4} $$ so, by $(3)$, $$ \| a+b \|_{\ell^2} \leq \| a\|_{\ell^2}+\| b \|_{\ell^2}\tag{5} $$ and that proves: $$ \| f+g \|_{L^2} \leq \| f\|_{L^2}+\| g \|_{L^2}\tag{6} $$ as wanted.</p>
1,864,939
<p>I am surprised that this question hasn't been asked on here</p> <p><strong>I need to show that</strong> </p> <blockquote> <p>$$d_2(f,g) = \sqrt{\int\limits_0^1 |f(x) - g(x)|^2 dx}$$</p> </blockquote> <p>is a metric on $C[0,1]$</p> <hr> <p><strong>Proof:</strong></p> <ul> <li><p>As usual, positive semidefiniteness and symmetry are trivial</p> <p>We want to show that </p> <p>$$d_2(f,g) \leq d_2(f,h) + d_2(h,g)$$ for some $f,g,h \in C^1$</p></li> </ul> <p>(Unfortunately, I am not sure how to approach this in the most efficient manner)</p> <p>$ d_2(f,g) = \sqrt{\int\limits_0^1 |f(x) - g(x)|^2 dx}$</p> <p>$ = \sqrt{\int\limits_0^1 |(f(x) - g(x))^2| dx}$ $ = \sqrt{\int\limits_0^1 |f^2 - 2gf + g^2| dx} $</p> <p>$ = \sqrt{\int\limits_0^1 |(f^2 -2hf-h^2 + 2hf+h^2)- 2gf + (g^2 -2hg-h^2 + 2hg+h^2)| dx}$ </p> <p>$ = \sqrt{\int\limits_0^1 |(f-h)^2 + 2hf-h^2- 2gf + (g-h)^2 + 2hg-h^2)| dx}$ </p> <p>$ \leq \sqrt{\int\limits_0^1 |(f-h)^2| dx+ \int\limits_0^1 |(g-h)^2| dx +\int\limits_0^1 |2hf-2h^2- 2gf + 2hg| dx}$</p> <p>$\leq d_2(f,h) + d_2(h,g) + \sqrt{\int\limits_0^1 |2hf-2h^2- 2gf + 2hg| dx}$</p> <p>Seems like this method cannot work....Can someone offer suggestions on how to fix this</p>
avs
353,141
<p>First, let's show that $||f||_2 = \sqrt{\int_{[0,1]} f(x)^2 dx}$ is a norm. I will only prove that it satisfies the triangle inequality for norms: $$ ||f + g||_2 \leq ||f||_2 + ||g||_2. $$ This follows from noting that the scalar product $$ (f, g) = \int_{[0,1]} f(x) g(x) dx $$ satisfies the Cauchy-Schwarz inequality: <a href="https://en.wikipedia.org/wiki/Triangle_inequality#Normed_vector_space" rel="nofollow">https://en.wikipedia.org/wiki/Triangle_inequality#Normed_vector_space</a></p> <p>Now, $$ d_2(f, g) = ||f - g||_2 = ||(f - h) + (h - g)||_2 \leq ||f - h||_2 + ||h - g||_2 = d_2(f, h) + d_2(h, g). $$</p>
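<p>A numerical illustration (a hypothetical sketch) of the triangle inequality $||f+g||_2 \leq ||f||_2 + ||g||_2$ on $C[0,1]$, approximating the integral with a midpoint Riemann sum:</p>

```python
import math

def l2_norm(f, n=20_000):
    # midpoint Riemann sum for sqrt(integral_0^1 f(x)^2 dx)
    h = 1.0 / n
    return math.sqrt(sum(f((k + 0.5) * h) ** 2 for k in range(n)) * h)

f = lambda x: math.sin(5 * x)
g = lambda x: x ** 2 - 0.5

lhs = l2_norm(lambda x: f(x) + g(x))     # ||f + g||_2
rhs = l2_norm(f) + l2_norm(g)            # ||f||_2 + ||g||_2
```

Since these two functions are not proportional, the inequality is strict here.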
2,270,019
<p>Is $\displaystyle \int _ a^ b \left(\int_c^d f_{X,Y}(x,y)\, dx\right)\, dy$</p> <p>a) $\displaystyle \int _ a^ b dy \int_ c^ d f(x,y)\,dx$</p> <p>or</p> <p>b) $\displaystyle \int _ a^ b dx \int_ c^ d f(x,y)\,dy$</p> <p>?</p>
Gio67
355,873
<p>You are right, it would have been better if in that reply they had separated $n$ into evens and odds. If $n=2k$ you get $$\int_{(2k-1)\pi}^{2k\pi}\frac{(\sin x)^-}{x}dx\ge \frac{1}{k\pi}$$ and so if you sum over all $k$ you get that $$\int_{0}^{\infty}\frac{(\sin x)^-}{x}dx=\infty.$$ Similarly, if you take $n=2k-1$ you get $$\int_{(2k-2)\pi}^{(2k-1)\pi}\frac{(\sin x)^+}{x}dx\ge \frac{2}{(2k-1)\pi}$$ and so if you sum over all $k$ you get that $$\int_{0}^{\infty}\frac{(\sin x)^+}{x}dx=\infty.$$ <strong>Edit</strong>: Alternatively, you need to prove that the improper Riemann integral of $\frac{\sin x}{x}$ exists. This can be done by integrating by parts: $$\int_{\pi/2}^t \frac{\sin x}{x}dx=-\frac{\cos t}{t}-\int_{\pi/2}^t \frac{\cos x}{x^2}dx.$$ Then $-\frac{\cos t}{t}\to 0$ as $t\to\infty$ and $\left\vert \frac{\cos x}{x^2}\right\vert \le \frac1{x^2}$, which is integrable in $[\pi/2,\infty)$. Hence, the improper Riemann integral exists. This together with $$\int_0^\infty \frac{|\sin x|}{x}dx=\infty$$ implies that $$\int_{0}^{\infty}\frac{(\sin x)^+}{x}dx=\int_{0}^{\infty}\frac{(\sin x)^-}{x}dx=\infty.$$ <strong>PS:</strong> Jason is right. To prove that a function is not Lebesgue integrable it suffices to prove that $\int|f|=\infty$ (but if you want to give an example for that, just use $f(x)=\frac1{x}$ in $(1,\infty)$). To prove that the Lebesgue integral of a function does not exist, you need to prove that $\int (f)^+=\int (f)^-=\infty$. This follows either by direct computation, or, if appropriate, as in this case, if the improper Riemann integral is finite and $\int|f|=\infty$.</p>
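<p>Both behaviors show up numerically (a hypothetical sketch): the signed integral $\int_0^T \frac{\sin x}{x}dx$ settles near $\pi/2$, while $\int_0^T \frac{|\sin x|}{x}dx$ keeps growing like a harmonic sum.</p>

```python
import math

def integrate(f, a, b, n=200_000):
    # simple midpoint rule
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

sinc = lambda x: math.sin(x) / x if x else 1.0

signed = integrate(sinc, 0.0, 200 * math.pi)                 # settles near pi/2
abs_small = integrate(lambda x: abs(sinc(x)), 0.0, 20 * math.pi)
abs_large = integrate(lambda x: abs(sinc(x)), 0.0, 200 * math.pi)
```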
3,525,017
<p>a) Show that the generating function by length for binary strings where every block of 0s has length at least 2, each block of ones has length at least 3 is:</p> <p><span class="math-container">$$\frac{(1-x+x^3)(1-x+x^2)}{1-2x+x^2-x^5}$$</span></p> <p>b) Give a recurrence relation and enough initial conditions to determine coefficients of power series.</p> <p>So for a), I came up with the block decomposition <span class="math-container">$((0^*00)^*(1^*111)^*)^*$</span> and found the generating function, using the fact that <span class="math-container">$0\leadsto x$</span>, <span class="math-container">$1\leadsto x$</span>, and <span class="math-container">$a^*\leadsto\frac{1}{1-a}$</span> where a is some binary string:</p> <p><span class="math-container">$$\frac{(1-x)^2}{(1-x-x^2)(1-x-x^3)}$$</span></p> <p>which, clearly, does not equal the expected result. Could someone clear up for me where I went wrong?</p> <p>Also, for b), how would you find a recurrence relation, since the degree of the numerator and denominator are the same, so there would be no general <span class="math-container">$a_n$</span> term.</p> <p>Thanks in advance for any help!</p>
Henry
6,460
<p>robjohn has given a generating function. Here is a recurrence based approach. </p> <p>Suppose <span class="math-container">$b_n$</span> is the number of strings ending with <span class="math-container">$0$</span> and <span class="math-container">$c_n$</span> the number ending with <span class="math-container">$1$</span>, and <span class="math-container">$a_n=b_n+c_n$</span> being what you want. It seems obvious that </p> <ol> <li><span class="math-container">$a_n=b_n+c_n$</span> by definition</li> <li><span class="math-container">$b_n=b_{n-1}+c_{n-2}$</span> by adding <span class="math-container">$0$</span> to an existing <span class="math-container">$0$</span> or <span class="math-container">$00$</span> to an existing <span class="math-container">$1$</span></li> <li><span class="math-container">$c_n=b_{n-1}+c_{n-1}$</span> by adding <span class="math-container">$1$</span> to an existing <span class="math-container">$0$</span> or <span class="math-container">$1$</span></li> </ol> <p>so </p> <ol start="4"> <li><span class="math-container">$c_n=a_{n-1}$</span> from (3) and (1) and so <span class="math-container">$c_{n-2}=a_{n-3}$</span></li> <li><span class="math-container">$b_{n-1}=c_n-c_{n-1}=a_{n-1}-a_{n-2}$</span> from (3) and (4) and so <span class="math-container">$b_{n}=a_{n}-a_{n-1}$</span></li> <li><span class="math-container">$a_{n}-a_{n-1}=a_{n-1}-a_{n-2}+a_{n-3}$</span> from (2) and (5) and (4) giving <span class="math-container">$$a_{n}=2a_{n-1}-a_{n-2}+a_{n-3}$$</span></li> </ol> <p>That will lead to a denominator in the generating function of <span class="math-container">$1-2x+x^2-x^3$</span></p> <p>Since <span class="math-container">$b_1=0, b_2=1, c_1=1, c_2=1$</span>, we can find <span class="math-container">$a_0=1, a_1=1, a_2=2, a_3=4, a_4=7$</span> etc. 
But <span class="math-container">$\frac{1}{1-2x+x^2-x^3}=1+2x+3x^2+5x^3+9x^4+\cdots$</span> so we need to subtract <span class="math-container">$\frac{x}{1-2x+x^2-x^3} = x+2x^2+3x^3+5x^4+\cdots$</span> and then add <span class="math-container">$\frac{x^2}{1-2x+x^2-x^3}= x^2+2x^3+3x^4+\cdots$</span> to match the coefficients, with a result of <span class="math-container">$$\frac{1-x+x^2}{1-2x+x^2-x^3}$$</span> as robjohn found more directly</p>
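<p>As the transitions (2) and (3) show, this recurrence counts binary strings in which every maximal run of 0s has length at least 2, with the runs of 1s unrestricted. A brute-force cross-check of that reading (a hypothetical sketch, not from the answer):</p>

```python
import re
from itertools import product

def valid(s: str) -> bool:
    # every maximal run of 0s must have length at least 2
    return all(len(run) >= 2 for run in re.findall(r"0+", s))

def brute(n: int) -> int:
    return sum(valid("".join(w)) for w in product("01", repeat=n))

# a_n = 2 a_{n-1} - a_{n-2} + a_{n-3}, with a_0 = 1, a_1 = 1, a_2 = 2
a = [1, 1, 2]
for _ in range(3, 13):
    a.append(2 * a[-1] - a[-2] + a[-3])
```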
415,150
<blockquote> <p>Let $x$ and $y$ be positive numbers. Let $a_0=y$, and let $$a_n=\frac{(x/a_{n-1})+a_{n-1}}{2}$$Prove that the sequence $\{a_n\}$ has limit $\sqrt{x}$. Generalize to $n$th roots.</p> </blockquote> <p>I already solved the first part <a href="https://math.stackexchange.com/questions/415125/limit-of-recursive-sequence-a-n-fracx-a-n-1a-n-12">here</a>, but I have no idea how to generalize the $n$th roots. The limit should be $\sqrt[n]{x}$, but what is the recurrence supposed to look like?</p>
Zen
72,576
<p>This is Evariste's answer translated into Spanish.</p>

<p>The integral you computed is correct, but it might go better for you if you include the limits of integration before evaluating the integral, that is, not leaving it in indefinite form like this: $$\frac{4}{25}\frac{x^{2+1}}{2+1}+x$$ </p>

<p>If you doubted whether everything you did is right, you can take advantage of <a href="http://www.wolframalpha.com/" rel="nofollow">WolframAlpha</a>. It is a very useful website that you can use to verify your computations. </p>

<p>From a strategic point of view, seeing that you had already split the integral into a sum of two integrals (which are easier to evaluate), why did you combine them again when you reached the evaluation stage? I suggest that you practice computing integrals more and adopt a systematic approach to them. That will save you a great deal of time. </p>

<p>The final figure should be 51122500/81 $m^2$, or approximately 631141.98 $m^2$.</p>

<p>Regards. </p>
4,482,285
<p>I'm currently making a game and have run into a problem I'm not quite sure how to solve, I'll try to lay it out as a maths question. None of the values are fixed, so I'm looking for an equation that solves the below question:</p> <p>A plane lies on the position vector <code>p0</code> <code>&lt;x0, y0, z0&gt;</code> and has a normal unit vector <code>n</code> <code>&lt;a, b, c&gt;</code>.</p> <p>Given a sphere with a radius of <code>R</code> forms a tangent with the plane at an unknown position vector <code>t</code> <code>&lt;tx, ty, tz&gt;</code>:</p> <p>Find the highest height of the center of a sphere, when the sphere is located at the position vector <code>s</code> <code>&lt;sx, sy, sz&gt;</code>, and <code>sx</code> and <code>sz</code> are known values, but <code>sy</code> is not.</p> <p>The known vectors are <code>p0</code>, <code>n</code>, and the other known values are <code>sx</code>, <code>sz</code>, and <code>R</code>. The position vector <code>t</code> is unknown, and the height of the sphere <code>sy</code> is also unknown. 
Should it be necessary, another point on the plane <code>p1</code> <code>&lt;x1, y1, z1&gt;</code> can be provided.</p> <p>In my case, the Y axis is the up axis, the Z axis is the forward axis, and the X axis is the right axis.</p> <p>I wanted to essentially do something similar to what is shown in this video, but in my case I know the length of the vector but not the positions: <a href="https://www.youtube.com/watch?v=zWMTTRJ0l4w" rel="nofollow noreferrer">https://www.youtube.com/watch?v=zWMTTRJ0l4w</a></p> <p>Here is 2d visualisation of the problem: <a href="https://i.stack.imgur.com/T0WbS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/T0WbS.png" alt="Sphere and plane" /></a></p> <p>I was only able to find the positions due to trial and error, but these were approximately correct.</p> <p>Here is what I do know: The vector <code>t</code> to <code>s</code> will have the same direction as the normal vector, but with the magnitude being equal to R. It could then be defined as <code>Rn</code>, given that <code>n</code> is a unit vector. Given that, the vector could be defined as a translation of the vector <code>Rn</code> along the plane.</p> <p>Beyond this point, I am a little confused, so any help would be appreciated. I would prefer to avoid cartesian form, and values can be stored in variables at later dates if necessary. I have v1:Dot(v2) and v1:Cross(v2) methods available to me, and I expect this probably will use the dot product at some point.</p> <p>Thank you.</p>
tukars
397,739
<p>It took some thinking, but I managed to find a solution to finding <code>sy</code>.</p> <p>Essentially, the length of the vector between <code>t</code> and <code>s</code>, which I will call vector R can be defined using:</p> <p><code>abs|n ⋅ (s - p0)|</code>.</p> <p>Since the length of the vector is already known, this allows us to set up an equation such that:</p> <p><code>R = abs|n · (s - p0)|</code></p> <p>Then, expanding:</p> <p><code>R = abs|&lt;nx, ny, nz&gt; · &lt;sx - x0, sy - y0, sz - z0&gt;|</code></p> <p><code>R = abs|nx(sx - x0) + ny(sy - y0) + nz(sz - z0)|</code></p> <p><code>R = abs|nx·sx + ny·sy + nz·sz - nx·x0 - ny·y0 - nz·z0|</code></p> <p>Given that it is absolute, that means that it could be <code>+R</code> or <code>-R</code>.</p> <p><code>R = nx·sx + ny·sy + nz·sz - nx·x0 - ny·y0 - nz·z0</code></p> <p>OR</p> <p><code>-R = nx·sx + ny·sy + nz·sz - nx·x0 - ny·y0 - nz·z0</code></p> <p>First I will re-arrange the positive, keeping in mind there is also a negative solution.</p> <p><code>ny·sy = R - nx·sx - nz·sz + nx·x0 + ny·y0 + nz·z0</code></p> <p>Dividing by Y:</p> <p><code>sy = ((R - nx·sx - nz·sz + nx·x0 + nz·z0) / ny) + y0</code></p> <p>Remembering there is also <code>-R</code> this gives us the two solutions:</p> <p><code>sy1 = ((R - nx·sx - nz·sz + nx·x0 + nz·z0) / ny) + y0</code></p> <p><code>sy2 = ((-R - nx·sx - nz·sz + nx·x0 + nz·z0) / ny) + y0</code></p> <p>Then, selecting the maximum of each:</p> <p><code>sy = math.max(sy1, sy2)</code></p> <p>Which then gives us our vector <code>S</code> <code>&lt;sx, sy, sz&gt;</code></p> <p>The tangent point can then be calculated by taking away <code>Rn</code> from our calculated <code>S</code> vector.</p>
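<p>The derivation can be checked numerically. Here is a hypothetical Python sketch (names invented for illustration) that picks the higher of the two candidate heights and recovers the tangent point by stepping back along the normal:</p>

```python
import math

def sphere_height(p0, n, R, sx, sz):
    # the two candidate heights come from +R and -R; take the higher one
    x0, y0, z0 = p0
    nx, ny, nz = n
    base = -nx * sx - nz * sz + nx * x0 + nz * z0
    sy1 = ( R + base) / ny + y0
    sy2 = (-R + base) / ny + y0
    return max(sy1, sy2)

# hypothetical example: tilted plane through the origin
p0 = (0.0, 0.0, 0.0)
nrm = (1 / math.sqrt(3), 1 / math.sqrt(3), 1 / math.sqrt(3))  # unit normal
R = 2.0
sx, sz = 1.5, -0.5
s = (sx, sphere_height(p0, nrm, R, sx, sz), sz)

# tangent point: take away R*n from the center
t = tuple(si - R * ni for si, ni in zip(s, nrm))

dist = abs(sum(ni * (si - pi) for ni, si, pi in zip(nrm, s, p0)))  # should be R
on_plane = sum(ni * (ti - pi) for ni, ti, pi in zip(nrm, t, p0))   # should be 0
```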
4,645,950
<p>The question is: Peter and Mary take turns rolling a fair die. If Peter rolls 1 or 2 he wins and the game stops. If Mary rolls 3,4,5, or 6, she wins and the game stops. They keep rolling until one of them wins. Suppose Peter rolls first.</p> <p>a) What is the probability that Peter wins and rolls at most 4 times?</p> <p>Here is the solution: We want to find <span class="math-container">$A=\{ \text{Peter wins and rolls at most 4 times} \}$</span>. We decompose the event into <span class="math-container">$A_k$</span>= {Peter wins on his kth roll}. Then <span class="math-container">$A=\cup^4_{k=1} A_k$</span> and since the events <span class="math-container">$A_k$</span> are mutually exclusive, <span class="math-container">$P(A)=\sum^4_{k=1} P(A_k)$</span>. My book says that the ratio of favorable alternatives over the total number of alternatives yields <span class="math-container">$$P(A_k)=\frac{(4 \cdot 2)^{k-1}\cdot 2}{(6 \cdot 6)^{k-1} \cdot 6}=\left(\frac{8}{36} \right)^{k-1}\frac{2}{6}=\left( \frac{2}{9}\right)^{k-1}\frac{1}{3}$$</span> Then the solution is found using the geometric sum.</p> <p>I am having problems understanding this answer. Why is the 2/6 not raised to the $k$th power, and why are we multiplying the probability of failure of Peter and Mary inside the k-1 power? I'd really appreciate it if someone could break down how we obtain <span class="math-container">$A_k$</span>.</p>
Pavan C.
914,078
<p>The probability that Peter wins in any given round is <span class="math-container">$\frac{2}{6}$</span>. The probability that Peter and Mary both don't win on any given round is</p> <p><span class="math-container">$$\mathbb{P}(\text{Peter loses and Mary loses}) = \mathbb{P}(\text{Peter loses}) \cdot \mathbb{P}(\text{Mary loses}) = \frac{4}{6} \cdot \frac{2}{6}.$$</span></p> <p>If we want Peter to win, then we need Mary to lose every time, and Peter to win at the end. We use independence to turn intersection (&quot;and&quot;) into multiplication.</p> <p><span class="math-container">$$\mathbb{P}(\text{Peter wins on first roll}) = \frac{2}{6} $$</span></p> <p><span class="math-container">$$ \begin{align*} \mathbb{P}(\text{Peter wins on the second roll}) &amp;= \mathbb{P}(\text{Peter and Mary both lose on round 1 and Peter wins on round 2}) \\ &amp;= \mathbb{P}(\text{Peter and Mary both lose}) \cdot \mathbb{P}(\text{Peter wins}) \\ &amp;= \frac{4}{6} \cdot \frac{2}{6} \cdot \frac{2}{6} \end{align*} $$</span></p> <p><span class="math-container">$$ \begin{align*} \mathbb{P}(\text{Peter wins on the third roll}) &amp;= \mathbb{P}(\text{Peter and Mary both lose on round 1 and 2 and Peter wins on round 3}) \\ &amp;= \mathbb{P}(\text{Peter and Mary lose})^2 \cdot \mathbb{P}(\text{Peter wins}) \\ &amp;= \left(\frac{4}{6} \cdot \frac{2}{6}\right)^2 \cdot \frac{2}{6} \end{align*}$$</span></p> <p><span class="math-container">$$ \begin{align*} \mathbb{P}(\text{Peter wins on the fourth roll}) &amp;= \mathbb{P}(\text{Peter and Mary both lose on round 1 and 2 and 3 and Peter wins on round 4}) \\ &amp;= \mathbb{P}(\text{Peter and Mary lose})^3 \cdot \mathbb{P}(\text{Peter wins}) \\ &amp;= \left(\frac{4}{6} \cdot \frac{2}{6}\right)^3 \cdot \frac{2}{6} \end{align*}$$</span></p> <p>Add up these. 
Hopefully the pattern makes sense--you multiply probability that Peter and Mary lose for the first <span class="math-container">$k-1$</span> rounds, then finally probability that Peter wins on round <span class="math-container">$k$</span>.</p>
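As a sanity check of the book's formula $P(A_k)=\left(\frac{2}{9}\right)^{k-1}\frac{1}{3}$, here is a short Python sketch (an illustration; the variable names are mine) that sums the four terms exactly:

```python
from fractions import Fraction

p_survive = Fraction(4, 6) * Fraction(2, 6)   # = 2/9: both players miss a full round
p_win = Fraction(2, 6)                        # = 1/3: Peter wins his own roll

# P(A) = sum over k = 1..4 of P(A_k) = (2/9)**(k-1) * (1/3)
p_A = sum(p_survive ** (k - 1) * p_win for k in range(1, 5))
print(p_A, float(p_A))   # 935/2187, about 0.4275
```

The geometric-sum formula $\frac13\cdot\frac{1-(2/9)^4}{1-2/9}$ gives the same value, $\frac{935}{2187}\approx 0.4275$.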
1,717,169
<p>I know that if a vector field is conservative, a potential function exists, but does this relation also hold the other way around? In other words, does a potential function exist for a certain vector field if and only if the field in question is conservative?</p>
Captain Lama
318,467
<p>EDIT: it seems you are looking for the "easy" implication that if a vector field is derived from a potential then it's conservative, which is always true. My answer actually says that what you are taking for granted (the reverse implication) is false in general. I guess you are working in a case where it's true, but it may interest you that there are restrictions.</p> <hr> <p>It depends on the geometry of the space upon which the vector field is defined. If it's defined on $\mathbb{R}^n$, then yes. If it's (for instance) $\mathbb{R}^2\setminus \{0\}$, then no.</p> <p>This is because a conservative vector field is basically a closed $1$-form, and a field that comes from a potential is basically an exact $1$-form, and closed $1$-forms are exact precisely when the first de Rham cohomology group is trivial. This happens for instance on a contractible space (such as $\mathbb{R}^n$), but not on every space.</p> <p>It's actually a purely topological condition: it's equivalent to the fact that $\pi_1(X)^{ab}$ is a torsion group, where $X$ is the space on which the vector field is defined.</p>
SchrodingersCat
278,967
<p>Say a vector field $\vec A$ has the potential $\phi$.</p> <p>So we can write that $\vec A=\nabla \phi$.</p> <p>Now, the line integral of the field from $M$ to $N$<br> $= \int_M^N \vec A \cdot \vec{dl}$ <br> $= \int_M^N \nabla \phi \cdot \vec{dl}$ <br> $= \int_M^N \left(\frac{\partial \phi}{\partial x}\hat i+ \frac{\partial \phi}{\partial y}\hat j+ \frac{\partial \phi}{\partial z}\hat k\right) \cdot (dx\hat i+dy\hat j+dz\hat k)$ <br> $= \int_M^N \left(\frac{\partial \phi}{\partial x}dx+ \frac{\partial \phi}{\partial y}dy+ \frac{\partial \phi}{\partial z}dz\right)$ <br> $= \int_M^N d\phi$ <br> $=\phi(N)-\phi(M)$</p> <p>which is independent of the path joining $M$ and $N$.</p> <p>So such <em>vector field</em>s having a <em>scalar potential</em> are <em>conservative</em>.</p>
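To make the path independence concrete, here is a small numeric sketch (an illustration of my own, using the hypothetical potential $\phi = xy + z^2$): the line integral of $\nabla\phi$ along two different paths from $(0,0,0)$ to $(1,1,1)$ gives the same value, namely $\phi$ at the endpoint minus $\phi$ at the start, which is $2$.

```python
def grad(x, y, z):
    return (y, x, 2 * z)              # gradient of phi(x, y, z) = x*y + z**2

def straight(t):                      # path r(t) and its derivative r'(t)
    return (t, t, t), (1.0, 1.0, 1.0)

def curved(t):                        # a different path with the same endpoints
    return (t**2, t**3, t), (2 * t, 3 * t**2, 1.0)

def line_integral(path, n=100_000):
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h             # midpoint rule on [0, 1]
        (x, y, z), (dx, dy, dz) = path(t)
        gx, gy, gz = grad(x, y, z)
        total += (gx * dx + gy * dy + gz * dz) * h
    return total

print(line_integral(straight), line_integral(curved))   # both close to 2.0
```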
196,155
<p>I have recently read some passage about nested radicals, and I'm deeply impressed by them. Consider simple nested radicals like $\sqrt{2+\sqrt{2}}$ and $\sqrt{3-2\sqrt{2}}$, the latter of which can be denested into $\sqrt{2}-1$. These may be easy to see through, but how can we denest such a complicated one as $\sqrt{61-24\sqrt{5}}\;(=3\sqrt{5}-4)$? And is there any way to judge whether a radical of the form $\sqrt{a+b\sqrt{c}}$ can be denested?</p> <p>Mr. Srinivasa Ramanujan even suggested some CRAZY nested radicals such as: $$\sqrt[3]{\sqrt{2}-1},\sqrt{\sqrt[3]{28}-\sqrt[3]{27}},\sqrt{\sqrt[3]{5}-\sqrt[3]{4}}, \sqrt[3]{\cos{\frac{2\pi}{7}}}+\sqrt[3]{\cos{\frac{4\pi}{7}}}+\sqrt[3]{\cos{\frac{8\pi}{7}}},\sqrt[6]{7\sqrt[3]{20}-19},...$$ Amazing, these all can be denested. I believe there must be some strategies to denest them, but I don't know how.</p> <p>I'm just a beginner, so can anyone give me some ideas? Thank you.</p>
Bill Dubuque
242
<p>There do exist <a href="https://math.stackexchange.com/a/194148/242">general denesting algorithms</a> employing Galois theory, but for the simple case of quadratic algebraic numbers we can employ a simple rule that I discovered as a teenager.</p> <hr /> <p><strong>Simple Denesting Rule</strong> <span class="math-container">$\rm \ \ \color{blue}{subtract\ out}\ \sqrt{norm}\:,\ \ then\ \ \color{brown}{divide\ out}\ \sqrt{trace} $</span></p> <p>Recall <span class="math-container">$\rm\: w = a + b\sqrt{n}\: $</span> has <strong>norm</strong> <span class="math-container">$\rm =\: w\:\cdot\: w' = (a + b\sqrt{n})\ \cdot\: (a - b\sqrt{n})\ =\: a^2 - n\, b^2 $</span></p> <p>and, <span class="math-container">$ $</span> furthermore, <span class="math-container">$\rm\ w\:$</span> has <span class="math-container">$ $</span> <strong>trace</strong> <span class="math-container">$\rm\: =\: w+w' = (a + b\sqrt{n}) + (a - b\sqrt{n})\: =\: 2a$</span></p> <hr /> <p>Here <span class="math-container">$\:61-24\sqrt{5}\:$</span> has norm <span class="math-container">$= 29^2.\:$</span> <span class="math-container">$\rm\, \color{blue}{subtracting\ out}\ \sqrt{norm}\ = 29\ $</span> yields <span class="math-container">$\ 32-24\sqrt{5}\:$</span></p> <p>and this has <span class="math-container">$\rm\ \sqrt{trace}\: =\: 8,\ \ thus,\ \ \ \color{brown}{dividing \ it \ out}\, $</span> of this yields the sqrt: <span class="math-container">$\,\pm( 4\,-\,3\sqrt{5}).$</span></p> <hr /> <p>See <a href="http://math.stackexchange.com/a/816527/242">here</a> for a simple proof of the rule, and see <a href="https://math.stackexchange.com/search?tab=votes&amp;q=user%3a242%20denesting%20subtract%20norm%20trace">here</a> for many examples of its use.</p>
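The rule is mechanical enough to code. Here is a Python sketch of it for $\sqrt{a+b\sqrt{n}}$ (the function name and the detail of trying both square roots of the norm are my additions, not part of the answer):

```python
from fractions import Fraction
from math import isqrt

def denest(a, b, n):
    """Try to denest sqrt(a + b*sqrt(n)) via the rule
    'subtract out sqrt(norm), then divide out sqrt(trace)'.
    Returns (p, q) with sqrt(a + b*sqrt(n)) = +/-(p + q*sqrt(n)),
    or None when the rule does not apply."""
    norm = a * a - n * b * b                 # (a + b*sqrt(n)) * (a - b*sqrt(n))
    if norm < 0 or isqrt(norm) ** 2 != norm:
        return None                          # norm is not a perfect square
    for s in (isqrt(norm), -isqrt(norm)):    # either square root of the norm
        trace = 2 * (a - s)                  # trace of (a - s) + b*sqrt(n)
        if trace > 0 and isqrt(trace) ** 2 == trace:
            t = isqrt(trace)                 # divide out sqrt(trace)
            return Fraction(a - s, t), Fraction(b, t)
    return None

print(denest(61, -24, 5))   # (4, -3): sqrt(61 - 24*sqrt(5)) = +/-(4 - 3*sqrt(5))
print(denest(3, -2, 2))     # (1, -1): sqrt(3 - 2*sqrt(2)) = +/-(1 - sqrt(2))
```

For $\sqrt{2+\sqrt2}$ the norm is $2$, not a perfect square, so the sketch returns `None` and the rule does not apply.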
513,239
<p>I've recently heard a riddle, which looks quite simple, but I can't solve it.</p> <blockquote> <p>A girl thinks of a number which is 1, 2, or 3, and a boy then gets to ask just one question about the number. The girl can only answer "<em>Yes</em>", "<em>No</em>", or "<em>I don't know</em>," and after the girl answers it, he knows what the number is. What is the question?</p> </blockquote> <p>Note that the girl is professional in maths and knows EVERYTHING about these three numbers.</p> <hr> <p><strong>EDIT:</strong> The person who told me this just said the correct answer is:</p> <blockquote class="spoiler"> <p> "I'm also thinking of a number. It's either 1 or 2. Is my number less than yours?"</p> </blockquote>
Blue
409
<p>"I'm also thinking of one of these numbers. Is your number, raised to my number, bigger than $2$?"</p> <p>Let $n$ be the girl's number (unknown to me), and let $m$ be my number (unknown to her).</p> <ul> <li>$n = 1 \implies $ <strong>NO</strong>: $1^m = 1 \not &gt; 2$ for all $m \in \{1,2,3\}$.</li> <li>$n = 3 \implies $ <strong>YES</strong>: $3^m \geq 3 &gt; 2$ for all $m \in \{1,2,3\}$.</li> <li>$n = 2 \implies $ <strong>I DON'T KNOW</strong>: Whether $2^m &gt; 2$ depends on $m$.</li> </ul>
SamYonnou
91,283
<p>For an unknown $x$, is your number 1 or (2 and $x$)?</p> <p><b>YES</b> means that the number is 1<br> <b>NO</b> means that the number is not 1 and not 2 -- so it's 3<br> <b>I DON'T KNOW</b> means that the number is 2 (since the (2 and $x$) clause will be true if $x=2$ and false otherwise)</p>
Minuteman
94,593
<blockquote> <p>If $k$ is your number, does each continuous mass distribution $\mu$ in $\mathbb{R}^{3k-2}$ admit an equipartition by hyperplanes?</p> </blockquote>
Kaz
28,530
<p>Does the girl know computing in addition to math? Probably. </p> <p>"Here is an array of three character strings, indexed from <code>[1]</code>:</p> <pre><code>s[] = { "Yes", "No", "I don't know" } </code></pre> <p>what is the value of <code>s[x]</code> where <code>x</code> is the number you are thinking of?"</p> <p>Basically, the space of three possible answers can be used as symbols to encode the information directly. </p> <p><strong>Justification, in the light of comments:</strong></p> <p>The other answers differ in that they employ an arithmetic and logical coding trick: arithmetic is applied and then logic to produce an answer, whose truth value or indeterminacy is then rendered to English "Yes", "No" or "I don't know".</p> <p>It is just as valid and "mathematical" to simply obtain these symbols directly without using arithmetic coding.</p> <p>Furthermore, it can still be regarded as arithmetic coding, because the answer strings are made of bits, and can therefore be coded as numbers: for instance, the bit patterns of the ASCII characters can be concatenated together and treated as large integers. <code>s</code> is then effectively just a numeric table lookup which maps the indices 1 through 3 to integer symbols which denote text when broken into 8 bit chunks and mapped to ASCII characters.</p> <p>A lookup table, though arbitrarily chosen, is a mathematical object: a function. </p> <p>Furthermore, the displacement calculation to do the array indexing is arithmetic; we are exploiting the fact that the information we are retrieving is numeric and can be used to index into a table. Otherwise we would have to specify an associative set relation instead of a function from the integer domain. ("Here is a mapping of your possible state values to the symbols I'd like you to use to send me the value.")</p> <p>This answer reveals that the <strong>question is basically uninteresting</strong>. An entity holds some information that can be in one of three states, and there is to be a three-symbol protocol for querying that information. It boils down to: give me the symbol which corresponds to your state, according to this state->symbol mapping function. I would therefore argue that <strong>the convoluted arithmetic coding is the hack answer,</strong> not this straightforward coding method. In computing, we sometimes resort to arithmetic encoding hacks when we have to use a language that isn't powerful enough to do some task directly, or simply when the resources (time, space) aren't there for the cleaner solution. </p>
Joe Z.
24,644
<p>Here's one involving probabilistic trials.</p> <p>If I were to choose a random variable $x$ distributed as $N(2, 0.01)$ but truncated to $[1.5, 2.5]$, would the number you're thinking of be higher than $x$?</p> <p>If you're thinking of $1$, your answer is "no".</p> <p>If you're thinking of $2$, your answer is "I don't know", since there's a 50-50 chance either way.</p> <p>If you're thinking of $3$, your answer is "yes".</p>
174,339
<p>This one is somewhat hard to explain, but I'll try my best to.<br> I'm trying to generate a list of numbers (containing only pi/10, 0, -pi/10). Numbers are randomly selected from these 3, where the probability of getting 0 is always 70%. But the probabilities of getting pi/10 and -pi/10 depend on the previous outcomes. </p> <p>Here's how it depends:<br/></p> <p>Initially, the probability of getting pi/10 is 25%, and of getting -pi/10 is 5%. But once I get -pi/10 as an outcome, the probabilities will be switched. (i.e., the probability of getting pi/10 will be 5%, and of getting -pi/10 will be 25%.)<br/> Similarly, when pi/10 next comes as an outcome, the probabilities will again get switched and so on...</p> <p>So, here's what I've tried, but didn't get the desired result.</p> <pre><code>rr := RandomReal[{0, 1}] x1 = \[Piecewise]{{0, # &lt; 0.7}, {-\[Pi]/10, 0.7 &lt;= # &lt; 0.75}, {\[Pi]/10, True}} &amp;@rr x2 = \[Piecewise]{{0, # &lt; 0.7}, {\[Pi]/10, 0.7 &lt;= # &lt; 0.75}, {-\[Pi]/10, True}} &amp;@rr </code></pre> <p>And then I run a for-loop:<br/> (Here, where I have used the Which function, my 3rd condition is when I get 0, and in that case, the probabilities don't get switched, so it remains the previous "a". If you have a better way to do that also, then that'll be great.)</p> <pre><code>list{} For[i = 1 ; a = 1, i &lt;= 100, i++; If[a == 0, x = x1, x = x2]; a = Which[x == \[Pi]/10, 0, x == -\[Pi]/10, 1, True, a]; AppendTo[list, x]] list </code></pre> <p>Your help will be really appreciated!!</p> <p>P.S.: There might be some syntax errors; if you could point those out too, I'll be thankful to you (as I'm new to Mathematica).</p>
mef
449
<p>If I have understood correctly, then I think the following code does what you want. (To keep things simple, I have used {-1, 0, 1} instead of {-Pi/10, 0, Pi/10}. One can easily modify the output as shown at the end.) I have compiled the function for speed.</p> <pre><code>cf = Compile[{{n, _Integer}}, Module[{ probs = {.25, .7, .05}, state = -1, current }, Table[ current = RandomChoice[ If[ state == -1, probs, Reverse[probs] ] -&gt; {-1, 0, 1} ]; If[ current == 0, current, state = current ], n] ] ] </code></pre> <p>Here is the usage (this takes about 0.4 seconds on my MacBook):</p> <pre><code>ran = cf[10^6]; </code></pre> <p>We can examine the output:</p> <pre><code>KeySort[Counts[Partition[ran, 2, 1]]] --&gt; &lt;|{-1, -1} -&gt; 37211, {-1, 0} -&gt; 104798, {-1, 1} -&gt; 7324, {0, -1} -&gt; 104713, {0, 0} -&gt; 491009, {0, 1} -&gt; 105168, {1, -1} -&gt; 7410, {1, 0} -&gt; 105082, {1, 1} -&gt; 37284|&gt; </code></pre> <p>Making the substitution takes about 0.23 seconds:</p> <pre><code>ran1 = ran /. {-1 -&gt; -Pi/10, 1 -&gt; Pi/10}; </code></pre> <p>Let me know if I have misunderstood. </p>
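For readers without Mathematica, the same switching logic can be sketched in Python (this port and its simplification are mine, not part of the answer): the nonzero value drawn last repeats with probability 0.25, while the other nonzero value appears with probability 0.05, which is exactly the swap the question describes.

```python
import random

def run(n, seed=0):
    """Draw n values from {-1, 0, 1} (stand-ins for -Pi/10, 0, Pi/10).
    P(0) = 0.7 always; the last nonzero value drawn repeats with
    probability 0.25, and the other nonzero value has probability 0.05."""
    rng = random.Random(seed)
    last = 1                      # Pi/10 starts out as the favoured outcome
    out = []
    for _ in range(n):
        r = rng.random()
        if r < 0.70:
            out.append(0)
        elif r < 0.95:
            out.append(last)      # favoured outcome, probability 0.25
        else:
            last = -last          # the rare outcome occurred: probabilities swap
            out.append(last)
    return out

sample = run(100_000)
print(sample.count(0) / len(sample))   # close to 0.7
```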
3,768,836
<blockquote> <p><strong>Exercise 16 (Stein): The Borel-Cantelli Lemma: Suppose <span class="math-container">$\{E_k\}_{k=1}^\infty$</span> is a countable family of measurable subsets of <span class="math-container">$\mathbb{R}^d$</span> and that <span class="math-container">\begin{equation*} \sum_{k=1}^\infty m(E_k) &lt; \infty. \end{equation*}</span> Let</strong> <span class="math-container">\begin{align*} E = \{x \in \mathbb{R}^d : x \in E_k, \text{ for infinitely many } k \}\\ = \limsup_{k\rightarrow\infty} E_k. \end{align*}</span> <strong>Show that <span class="math-container">$E$</span> is measurable.</strong></p> </blockquote> <p><strong>Remark:</strong> Now, after looking at some resources I understand that the proof of this is extremely simple once one remembers that the set of measurable sets is a <span class="math-container">$\sigma$</span>-algebra; however, I was curious whether the proof I have worked out is also true. Any suggestions on writing proofs in general are always welcome of course!</p> <p>Let <span class="math-container">$\epsilon &gt; 0$</span> be arbitrary. First, recall that the countable union of measurable sets is itself measurable, and notice that <span class="math-container">\begin{equation*} m\biggr(\bigcup_{k = j}^\infty E_k \biggr) \leq \sum_{k=j}^\infty m(E_k) \leq \sum_{k=1}^\infty m(E_k) &lt; \infty, \end{equation*}</span> for all <span class="math-container">$j \in \mathbb{N}$</span>. Now consider the following decreasing sequence <span class="math-container">\begin{equation*} \bigcup_{k=1}^\infty E_k \supset \bigcup_{k=2}^\infty E_k \supset \dots \supset \bigcup_{k=N}^\infty E_k \supset \dots \supset E.
\end{equation*}</span> From Corollary 3.3, since the sequence decreases to <span class="math-container">$E$</span> and <span class="math-container">$\displaystyle m\biggr(\bigcup_{k=1}^\infty E_k\biggr) &lt; \infty$</span>, it follows that <span class="math-container">\begin{equation*} m(E) = \lim_{N \rightarrow \infty} m\biggr(\bigcup_{k = N}^\infty E_k\biggr). \end{equation*}</span> Thus it follows that there exists an <span class="math-container">$N \in \mathbb{N}$</span> such that for <span class="math-container">$\epsilon' = \frac{\epsilon}{2}$</span>, <span class="math-container">\begin{equation*} m\biggr(\bigcup_{j=N}^\infty E_j \setminus E \biggr) &lt; \epsilon'. \end{equation*}</span> Moreover, notice that since each <span class="math-container">$\bigcup_{j=k}^\infty E_j$</span> is measurable for every <span class="math-container">$k \in \mathbb{N}$</span>, it follows that there exists an open set <span class="math-container">$\mathcal{O}_N \supset \bigcup_{j=N}^\infty E_j$</span> such that for <span class="math-container">$\epsilon''=\frac{\epsilon}{2}$</span> <span class="math-container">\begin{equation*} m\biggr(\mathcal{O}_N \setminus \bigcup_{j=N}^\infty E_j\biggr) &lt; \epsilon''. \end{equation*}</span> Therefore, we can conclude that there exists <span class="math-container">$\mathcal{O}_N$</span> such that, by subadditivity, <span class="math-container">\begin{equation*} m(\mathcal{O}_N \setminus E) \leq m\biggr(\mathcal{O}_N \setminus \bigcup_{j=N}^\infty E_j\biggr) + m\biggr(\bigcup_{j=N}^\infty E_j \setminus E \biggr) &lt; \epsilon' + \epsilon'' = \epsilon.
\end{equation*}</span> Since <span class="math-container">$\epsilon$</span> is arbitrary, it follows that <span class="math-container">$E$</span> is a measurable set.</p> <p><strong>Remark 2:</strong> The second part of the Borel-Cantelli Lemma (<span class="math-container">$m(E) = 0$</span>) is clear, as if this were not true, the hypothesis of the finiteness of the sum of the measures of the members of the family would not hold.</p>
QuantumSpace
661,543
<p>We have <span class="math-container">$E= \bigcap_{n=1}^\infty \bigcup_{k=n}^\infty E_k$</span> (why?) Hence, <span class="math-container">$E$</span> is trivially measurable by the axioms of <span class="math-container">$\sigma$</span>-algebra. The assumption <span class="math-container">$\sum_k m(E_k) &lt; \infty$</span> is not necessary.</p>
763,381
<p>There are 10 men and 7 women working as supervisors in a company. The company has recently decided to form a committee to represent all the employees. The committee has to consist of 3 members, all of whom must be supervisors. The members will be President, General Secretary and Coordinator respectively. Answer the following questions based on this information.</p> <p>(a) How many ways can the committee be formed from the supervisors available? <br/> (b) How many ways can the committee be formed if the General Secretary must be a female?<br/> (c) How many ways can the committee be formed if it must have at least one man and at least one woman? <br/></p> <p>(a) $\binom{17}{3}$ <br/> (b) $\binom{16}{2}$ <br/> (c) $\binom{10}{1}\binom{7}{2} + \binom{10}{2} \binom{7}{1}$. </p> <p>Do you think my answers are correct?</p>
Thomas
26,188
<p>Hint: If $x$ is a root of $f$, then $f(x) = 0$. That means that $f(x^2 +1) = f(x)g(x) = 0$. So that means that $x^2 + 1$ is also a root. But $x &lt; x^2 + 1$. You can see this by noting that $x = x^2 +1$ has no real solutions, and since $1^2 + 1 &gt; 1$, you have $x &lt; x^2 +1$ for all real numbers $x$.</p> <p>So you must now have two roots, so ... </p>
2,173,586
<p>Let $R$ be a unique factorization domain. Being an integral domain, it has a field of quotients $F$. We can consider $R[x]$ to be a subring of $F[x]$. </p> <blockquote> <p>Given any polynomial $f(x)\in F[x]$, then $f(x)=(f_0(x)/a)$, where $f_0(x)\in R[x]$ and where $a\in R$.</p> </blockquote> <p>I don't understand why $a\in R$. In the proof of the theorem that every integral domain can be embedded in a field, we used $a/b$, where $a,b$ were in the integral domain. So why, in this example above, do we have $a/b$ where $a\in R[x]$ and $b\in R$? </p>
Bram28
256,001
<p>First of all, I am not sure why you can't sum an infinite number of terms (other than summing them one by one of course). If $a_i = 0$ for all $i$, then the sum is $0$ ... no limit needed.</p> <p>But yes, typically we have to use a limit to evaluate the value of the sum.</p> <p>The difference with sets is that we are not <em>evaluating</em> anything ... we are simply defining what the elements of the union or intersection are. </p>
3,980,796
<p>Given <span class="math-container">$f: R \to (0,\infty]$</span>, twice differentiable with <span class="math-container">$f(0) = 0, f'(0) = 1, 1 + f = 1/f''$</span></p> <p>Prove that <span class="math-container">$f(1) &lt; 3/2$</span></p> <p>I found some useless (I believe) facts about <span class="math-container">$f$</span>, but nothing that gives me a light to answer it.</p> <ol> <li><span class="math-container">$f''(x) &gt; 0$</span></li> <li><span class="math-container">$f'(x) \ge 1$</span></li> <li><span class="math-container">$e^{((f')^2-1)/2} = f+1$</span> is another way to write the differential equation</li> </ol> <p>.... ?</p>
Community
-1
<p>As you've noticed, this can be written as <span class="math-container">$$\frac12(f'^2-1)=\ln(1+f).$$</span> If we want a parametric equation, we might choose <span class="math-container">$f'=t$</span> as a parameter, and then, we have (denoting the derivative with respect to <span class="math-container">$t$</span> by a dot) <span class="math-container">$$f(t)=e^{(t^2-1)/2}-1$$</span> and <span class="math-container">$$\dot x=\dot f/t=e^{(t^2-1)/2},$$</span> and since <span class="math-container">$x=0$</span> corresponds to <span class="math-container">$t=f'(0)=1$</span>, <span class="math-container">$$x(t)=\int^t_1 e^{(u^2-1)/2}\,du.$$</span> The latter may not be expressible by &quot;elementary&quot; functions, but there are a lot of rather common special functions one can use for that (Dawson's integral, error function or incomplete gamma function at some imaginary argument). So we can numerically solve for the value of <span class="math-container">$t$</span> that corresponds to <span class="math-container">$x=1$</span>, finding <span class="math-container">$t\approx1.6517245235869685402116665754535807279$</span>, and there, <span class="math-container">$f\approx1.3728623070052740075133959835820792705$</span>.</p>
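The quoted numbers can be checked with a short stdlib-only computation (an added sketch; the midpoint rule and the bisection are my choices, not part of the answer):

```python
import math

def x_of_t(t, n=4000):
    # midpoint rule for x(t) = integral from 1 to t of exp((u**2 - 1)/2) du
    h = (t - 1.0) / n
    return h * sum(math.exp(((1.0 + (i + 0.5) * h) ** 2 - 1.0) / 2.0)
                   for i in range(n))

# bisect for the parameter value t with x(t) = 1 (x is increasing in t)
lo, hi = 1.0, 2.0
for _ in range(50):
    mid = 0.5 * (lo + hi)
    if x_of_t(mid) < 1.0:
        lo = mid
    else:
        hi = mid
t1 = 0.5 * (lo + hi)
f1 = math.exp((t1 * t1 - 1.0) / 2.0) - 1.0
print(t1, f1)   # t about 1.65172, f(1) about 1.37286, safely below 3/2
```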
624,672
<blockquote> <p>Let $K$ be a cyclic group. Let $\phi,\psi: K\rightarrow Aut(H)$ be group homomorphisms such that there exists $\zeta\in Aut(H)$ satisfying $\phi(K)=\zeta \psi(K)\zeta^{-1}$. Then can we prove $H\rtimes_{\phi}K\simeq H\rtimes_{\psi}K$?</p> </blockquote> <p>My idea is to use the following</p> <p>Theorem: Let $K,H$ be groups. Let $\phi,\psi: K\rightarrow Aut(H)$ be group homomorphisms, $\zeta\in Aut(H)$, and $\alpha\in Aut(K)$ with $\psi=\sigma_{\zeta}\circ\phi\circ\alpha$, where $\sigma_{\zeta}\circ \phi\circ\alpha(k)=\zeta(\phi(\alpha(k)))\zeta^{-1}$ for all $k\in K$. Then $H\rtimes_{\phi}K\simeq H\rtimes_{\psi}K$.</p> <p>Let $K=\langle a\rangle$ be generated by $a$. From the condition $\phi(K)=\zeta \psi(K)\zeta^{-1}$, we have $\phi(a)=\zeta \psi(a^n)\zeta^{-1}$ for some integer $n$. Then how to continue? Thanks.</p>
Derek Holt
2,820
<p>The problem is that the action of $\zeta$ might not lift to the whole of $K$.</p> <p>As an example, let $H=\langle z \mid z^7=1 \rangle = C_7$ and $K = \langle x,y \mid x^3=y^7=1,x^{-1}yx=y^2 \rangle$ be the nonabelian group of order $21$.</p> <p>Define $\phi,\psi$ by $\phi(x)= (z \mapsto z^2), \phi(y)= 1$, and $\psi(x)= (z \mapsto z^4), \psi(y)= 1$.</p> <p>So $\phi(K)=\psi(K)$ and we can take $\zeta=1$.</p> <p>The two semidirect products are</p> <p>$\langle x,y,z \mid x^3=y^7=z^7=1,yz=zy,x^{-1}yx=y^2,x^{-1}zx=z^2 \rangle$, and</p> <p>$\langle x,y,z \mid x^3=y^7=z^7=1,yz=zy,x^{-1}yx=y^2,x^{-1}zx=z^4 \rangle$,</p> <p>which are not isomorphic.</p> <p><b>Added later</b>: I realize that this is not a counterexample to the question asked, because $K$ is not cyclic. But I think the example is interesting, so I will not delete it.</p>
2,768,308
<p>If there is an easier way to see this I would be thrilled to know that way. I think the trick is to make sure the maximum of $f_n$ occurs to the right of a fixed $x_0$ so that we can avoid the function converging to $\frac{1}{e}$.</p> <p><strong>Attempt.</strong></p> <p>For $x=1$ and $x=0$, $f_n(x)=0$. Fix an $x_0\in(0,1)$. Since the max of each $f_n$ occurs at $x=\frac{n}{n+1}$, choose $N$ such that $n&gt;N$ implies $\frac{n}{n+1} &gt;x_0$. Then for $n&gt;N, \exists \delta&gt;0$ such that $\frac{n}{n+1}-\delta&gt;x_0$.</p> <p>So $|f_n(x_0)|=|nx_0^n(1-x_0)|\leq |n(\frac{n}{n+1}-\delta)^n(1-(\frac{n}{n+1}-\delta))| = |n(\frac{n}{n+1}-\delta)^n(\frac{1}{n+1}+\delta)|$</p> <p>Now, $|n(\frac{n}{n+1}-\delta)^n(\frac{1}{n+1}+\delta)| \leq |n(\frac{n}{n+1}-\delta)^n(\frac{1}{n+1})| = |\frac{n}{n+1}(\frac{n}{n+1}-\delta)^n|$</p> <p>and since $\frac{n}{n+1} \leq 1$, $|f_n(x_0)| \leq (\frac{n}{n+1}-\delta)^n = (1-(\frac{1}{n+1}+\delta))^n \leq (1-\delta)^n$</p> <p>So given an $\epsilon&gt;0$, choose $n&gt;\frac{\log(\epsilon)}{\log(1-\delta)}$</p>
XPenguen
293,908
<p>$nx^n(1-x)\leq nx^n$ for all $x\in (0,1)$ <br><br> $x^n=\frac{1}{(1+y)^n}$ for some $y\gt 0$ (depending on $x$)</p> <p>By the binomial theorem, $(1+y)^n \ge \frac{n(n-1)}{2}y^2$ for $n \ge 2$, so we get $nx^n\le n\frac{2}{n(n-1)y^2}=\frac{2}{(n-1)y^2}$</p> <p>You can continue from here.</p>
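A numeric illustration (my addition) of the two phenomena at play: at a fixed $x_0$ the values $f_n(x_0)=n x_0^n(1-x_0)$ die off, while the maximum of $f_n$ over $(0,1)$, attained at $x=\frac{n}{n+1}$, tends to $1/e$:

```python
import math

x0 = 0.9   # an arbitrary fixed point of (0, 1)

# pointwise values f_n(x0) = n * x0**n * (1 - x0): these tend to 0
vals = [n * x0 ** n * (1 - x0) for n in (10, 100, 1000)]

# max of f_n over (0,1) is f_n(n/(n+1)) = (n/(n+1))**(n+1): these tend to 1/e
maxima = [(n / (n + 1)) ** (n + 1) for n in (10, 100, 1000)]

print(vals)
print(maxima, 1 / math.e)
```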
810,514
<p>How to compute the following series:</p> <p>$$\sum_{n=1}^{\infty}\frac{n+1}{2^nn^2}$$</p> <p>I tried</p> <p>$$\frac{n+1}{2^nn^2}=\frac{1}{2^nn}+\frac{1}{2^nn^2}$$</p> <p>The idea is using Riemann zeta function</p> <p>$$\zeta(s)=\sum_{n=1}^{\infty}\frac{1}{n^s}$$</p> <p>but the term $2^n$ makes complicated. I know that $$\sum_{n=1}^{\infty}\frac{1}{2^n}=1$$ using geometric series but I don't know how to use those series to answer the question. Any help would be appreciated. Thanks in advance.</p>
Tunk-Fey
123,277
<p>Consider <a href="http://en.wikipedia.org/wiki/Taylor_series#List_of_Maclaurin_series_of_some_common_functions">Maclaurin series of natural logarithm</a> $$ \ln(1-x)=-\sum_{n=1}^\infty\frac{x^n}{n}. $$ Taking $x=\dfrac12$ yields \begin{align} \ln\left(1-\frac12\right)&amp;=-\sum_{n=1}^\infty\frac{1}{2^n\ n}\\ \ln2&amp;=\sum_{n=1}^\infty\frac{1}{2^n\ n}. \end{align} Now, dividing the Maclaurin series of natural logarithm by $x$ yields \begin{align} \frac{\ln(1-x)}{x}&amp;=-\sum_{n=1}^\infty\frac{x^{n-1}}{n}, \end{align} then integrating both sides and taking the limit of integration $0&lt;x&lt;\dfrac12$. We obtain \begin{align} \int_0^{\Large\frac12}\frac{\ln(1-x)}{x}\ dx&amp;=-\int_0^{\Large\frac12}\sum_{n=1}^\infty\frac{x^{n-1}}{n}\ dx\\ &amp;=-\sum_{n=1}^\infty\int_0^{\Large\frac12}\frac{x^{n-1}}{n}\ dx\\ &amp;=-\left.\sum_{n=1}^\infty\frac{x^{n}}{n^2}\right|_{x=0}^{\Large\frac12}\\ -\int_0^{\Large\frac12}\frac{\ln(1-x)}{x}\ dx&amp;=\sum_{n=1}^\infty\frac{1}{2^n\ n^2}. \end{align} The integral in the LHS is $\text{Li}_2\left(\dfrac12\right)=\dfrac{\pi^2}{12}-\dfrac{\ln^22}{2}$, but since you are not familiar with <a href="http://en.wikipedia.org/wiki/Dilogarithm">dilogarithm function</a>, we will evaluate the LHS integral using IBP. 
Taking $u=\ln(1-x)$ and $dv=\dfrac1x\ dx$, we obtain \begin{align} \int_0^{\Large\frac12}\frac{\ln(1-x)}{x}\ dx&amp;=\left.\ln(1-x)\ln x\right|_0^{\large\frac12}+\int_0^{\Large\frac12}\frac{\ln x}{1-x}\ dx\\ &amp;=\ln^22-\lim_{x\to0}\ln(1-x)\ln x-\int_1^{\Large\frac12}\frac{\ln (1-x)}{x}\ dx\ ;\\&amp;\color{red}{\Rightarrow\text{let}\quad x=1-x}\\ \int_0^{\Large\frac12}\frac{\ln(1-x)}{x}\ dx+\int_1^{\Large\frac12}\frac{\ln (1-x)}{x}\ dx&amp;=\ln^22-0\\ -\left.\sum_{n=1}^\infty\frac{x^{n}}{n^2}\right|_{x=0}^{\Large\frac12}-\left.\sum_{n=1}^\infty\frac{x^{n}}{n^2}\right|_{x=1}^{\Large\frac12}&amp;=\ln^22\\ \sum_{n=1}^\infty\frac{1}{2^n\ n^2}+\sum_{n=1}^\infty\frac{1}{2^n\ n^2}-\sum_{n=1}^\infty\frac{1}{n^2}&amp;=-\ln^22\\ 2\sum_{n=1}^\infty\frac{1}{2^n\ n^2}-\frac{\pi^2}{6}&amp;=-\ln^22\\ \sum_{n=1}^\infty\frac{1}{2^n\ n^2}&amp;=\frac{\pi^2}{12}-\frac{\ln^22}{2}. \end{align} Thus, \begin{align} \sum_{n=1}^\infty\frac{n+1}{2^n\ n^2}&amp;=\sum_{n=1}^\infty\left(\frac{1}{2^n\ n}+\frac{1}{2^n\ n^2}\right)\\ &amp;=\large\color{blue}{\ln2+\frac{\pi^2}{12}-\frac{\ln^22}{2}}. \end{align}</p>
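The closed form can be checked numerically (a sanity check added for illustration, comparing a partial sum against $\ln 2+\frac{\pi^2}{12}-\frac{\ln^2 2}{2}$):

```python
import math

# partial sum of (n+1) / (2**n * n**2); the tail beyond n = 199 is negligible
partial = sum((n + 1) / (2**n * n**2) for n in range(1, 200))
closed = math.log(2) + math.pi**2 / 12 - math.log(2)**2 / 2
print(partial, closed)   # both about 1.2753877
```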
3,256,879
<p>I've only been exposed to basic abstract algebra (Like Definition of a group + Subgroup lemma etc) and some first year linear algebra. (We have not seen lagranges theorem, incase that is required for this question).</p> <p>I was hoping if someone could show an elementary way of doing this question:</p> <blockquote> <p>Let <span class="math-container">$H$</span> be the smallest subgroup of <span class="math-container">$GL(2,\mathbb{R})$</span> containing both <span class="math-container">$$A = \pmatrix{0 &amp; 1 \\ -1 &amp; 0} \text{ and } B =\pmatrix{0 &amp; 1 \\ 1 &amp; 0}.$$</span> <strong>Show that <span class="math-container">$H$</span> has eight elements.</strong> (Recall <span class="math-container">$GL(2,\mathbb{R})$</span> is the group of <span class="math-container">$2\times 2$</span> invertible matrices with real entries under matrix multiplication)</p> </blockquote> <p>Is there a way of doing the question without making a long 8 by 8 multiplication table? (That was my initial attempt, but it was far too tedious).</p> <p>Thanks!</p>
ArsenBerk
505,611
<p>I don't know if you will find this one short or not, but it is elementary, I can say.</p> <p>Say <span class="math-container">$A = \pmatrix{0 &amp; 1 \\ -1 &amp; 0}$</span> and <span class="math-container">$ B= \pmatrix{0 &amp; 1 \\ 1 &amp; 0}$</span>. Then, first we can multiply an element by itself until we get the identity matrix <span class="math-container">$I$</span>. For <span class="math-container">$B$</span>, we have <span class="math-container">$B^2 = I$</span>. For <span class="math-container">$A$</span>, we have <span class="math-container">$A^2 = \pmatrix{-1 &amp; 0 \\ 0 &amp; -1}$</span> so <span class="math-container">$A^2 \in H$</span>. Then, <span class="math-container">$A^3 = \pmatrix{0 &amp; -1 \\ 1 &amp; 0}$</span> so <span class="math-container">$A^3 \in H$</span>. Then, <span class="math-container">$A^4 = I$</span>. Now, we need to check <span class="math-container">$AB$</span>, <span class="math-container">$A^2B, A^3B$</span>. Can you take it from here?</p> <p>Note that this procedure comes from the closure property of groups.</p>
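To confirm the count without writing out an $8\times 8$ table, one can close $\{A,B\}$ under multiplication by machine (a sketch added for illustration; since the matrices have integer entries, exact tuples suffice):

```python
from itertools import product

def mul(M, N):
    # 2x2 integer matrix product, with matrices stored as tuples of rows
    return tuple(tuple(sum(M[i][k] * N[k][j] for k in range(2))
                       for j in range(2))
                 for i in range(2))

A = ((0, 1), (-1, 0))
B = ((0, 1), (1, 0))

# repeatedly add all pairwise products until nothing new appears
H = {A, B}
while True:
    new = {mul(X, Y) for X, Y in product(H, H)} | H
    if new == H:
        break
    H = new

print(len(H))   # 8
```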
2,839,388
<p>Given that the probability of an exactly even split in $2n$ flips is $\binom{2n}{n}\big/2^{2n}$, where $\binom{2n}{n}=\frac{(2n)!}{(n!)^2}$, the chance of obtaining an exactly even split becomes smaller as the number of flips increases. If you flip a coin $100$ times, the probability of getting exactly $50$ heads is only $0.079$. So why then, if a coin is flipped $1000$ times, do we get a distribution that is close to a $50:50$ split most of the time? </p> <p>Does the basis for that intuition come from the fact that we sum the probabilities of all combinations that we consider close to an even split (such as $485 H: 515 T$) and combine them? </p>
Joseph Eck
387,311
<p>From <a href="https://en.m.wikipedia.org/wiki/Law_of_large_numbers" rel="nofollow noreferrer">Wikipedia’s Law of Large Numbers Page</a></p> <blockquote> <p>For example, a fair coin toss is a Bernoulli trial. When a fair coin is flipped once, the theoretical probability that the outcome will be heads is equal to 1/2. Therefore, according to the law of large numbers, the proportion of heads in a &quot;large&quot; number of coin flips &quot;should be&quot; roughly 1/2. In particular, the proportion of heads after n flips will almost surely converge to 1/2 as n approaches infinity.</p> <p>Though the proportion of heads (and tails) approaches 1/2, almost surely the absolute difference in the number of heads and tails will become large as the number of flips becomes large. That is, the probability that the absolute difference is a small number, approaches zero as the number of flips becomes large. Also, almost surely the ratio of the absolute difference to the number of flips will approach zero. Intuitively, expected absolute difference grows, but at a slower rate than the number of flips, as the number of flips grows.</p> </blockquote>
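To illustrate the quoted point concretely, here is a small simulation sketch (pure Python, seeded for reproducibility; the variable names are mine): the proportion of heads settles near $1/2$ even as the absolute difference from an even split typically grows.

```python
import random

random.seed(1)  # reproducible run

heads = flips = 0
for target in (100, 10_000, 1_000_000):
    # keep flipping until we reach the next checkpoint
    while flips < target:
        heads += random.getrandbits(1)  # one fair coin flip
        flips += 1
    proportion = heads / flips
    abs_diff = abs(heads - flips / 2)
    print(flips, proportion, abs_diff)
```

On a typical run the proportion column approaches $0.5$ while the absolute-difference column tends to grow, matching the passage above.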
2,839,388
<p>Given that the probability of an exactly even split in $2n$ flips is $\binom{2n}{n}\big/2^{2n}$, where $\binom{2n}{n}=\frac{(2n)!}{(n!)^2}$, the chance of obtaining an exactly even split becomes smaller as the number of flips increases. If you flip a coin $100$ times, the probability of getting exactly $50$ heads is only $0.079$. So why then, if a coin is flipped $1000$ times, do we get a distribution that is close to a $50:50$ split most of the time? </p> <p>Does the basis for that intuition come from the fact that we sum the probabilities of all combinations that we consider close to an even split (such as $485 H: 515 T$) and combine them? </p>
user21820
21,820
<blockquote> <p>Does the basis for that intuition come from the fact that we sum the probabilities of all combinations that we consider close to an even split (such as 485 H: 515 T) and combine them? </p> </blockquote> <p>Short answer: More or less.</p> <p>Long answer: The <strong>central limit theorem</strong> tells you more precisely how the split is distributed. In particular, for any constant $c &gt; 0$ and $n \to \infty$, the probability that the number of heads obtained on flipping the coin $n$ times lies in the interval $(\frac12 n \pm c\sqrt{n})$ goes to some nonzero constant. Also, the probability for the interval $(\frac12 \pm c)n$ goes to $1$. So if you fix a threshold of closeness of the <strong>ratio</strong> to half, the ratio will indeed be more likely to be close to half as $n \to \infty$. This does not contradict the fact that the probability of getting <strong>exactly</strong> half goes to $0$.</p>
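The two limits in this answer can be checked with exact binomial arithmetic (a sketch; `prob_heads_in` is a helper name I made up): the probability of exactly half heads shrinks toward $0$, while the probability that the ratio lies within a fixed window of one half grows toward $1$.

```python
from math import comb

def prob_heads_in(n, lo, hi):
    """Exact P(lo <= #heads <= hi) for n fair coin flips."""
    return sum(comb(n, k) for k in range(lo, hi + 1)) / 2 ** n

for n in (100, 1000, 10000):
    exactly_half = prob_heads_in(n, n // 2, n // 2)
    ratio_within_1pct = prob_heads_in(n, int(0.49 * n), int(0.51 * n))
    print(n, exactly_half, ratio_within_1pct)
```

The first column of probabilities shrinks while the second grows, exactly the two limits described above.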
41,707
<p>Is there a slick way to define a partial computable function $f$ so that $f(e) \in W_{e}$ whenever $W_{e} \neq \emptyset$? (Here $W_{e}$ denotes the $e^{\text{th}}$ c.e. set.) My only solution is to start by defining $g(e) = \mu s [W_{e,s} \neq \emptyset]$, where $W_{e,s}$ denotes the $s^{\text{th}}$ finite approximation to $W_{e}$, and then set $$ f(e) = \begin{cases} \mu y [y \in W_{e, g(e)}] &amp;\text{if } W_{e} \neq \emptyset \\ \uparrow &amp;\text{otherwise}, \end{cases} $$ but this is ugly (and hence not slick).</p>
Qiaochu Yuan
232
<p>A fundamental example is the <a href="http://en.wikipedia.org/wiki/Chain_rule#The_chain_rule_in_higher_dimensions">multivariate chain rule</a>. A basic principle in mathematics is that if a problem is hard, you should try to linearize it so that you can reduce as much of it as possible to linear algebra. Often this means replacing a function with a linear approximation (its Jacobian), and then composition of functions becomes multiplication of Jacobians. But of course there are many other ways to reduce a problem to linear algebra. </p>
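To see the principle in action, here is a small numerical sketch of the rule $J_{f\circ g}(x)=J_f(g(x))\,J_g(x)$ (the maps $f$, $g$ and all names below are made-up illustrations, not from the answer): Jacobians are estimated by central differences, and the chain-rule product is compared with the Jacobian of the composite.

```python
import math

# Hypothetical example maps g, f : R^2 -> R^2
def g(x, y):
    return (x * y, x + y ** 2)

def f(u, v):
    return (math.sin(u) + v, u * v)

def jacobian(func, x, y, h=1e-6):
    """2x2 Jacobian of func at (x, y) by central differences."""
    fx = [(a - b) / (2 * h) for a, b in zip(func(x + h, y), func(x - h, y))]
    fy = [(a - b) / (2 * h) for a, b in zip(func(x, y + h), func(x, y - h))]
    return [[fx[0], fy[0]], [fx[1], fy[1]]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def comp(x, y):
    return f(*g(x, y))

x, y = 0.4, 0.9
J_chain = matmul(jacobian(f, *g(x, y)), jacobian(g, x, y))  # J_f(g(x)) J_g(x)
J_direct = jacobian(comp, x, y)                             # J_{f o g}(x)
print(J_chain)
print(J_direct)
```

The two printed matrices agree to several digits, as the chain rule predicts.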
41,707
<p>Is there a slick way to define a partial computable function $f$ so that $f(e) \in W_{e}$ whenever $W_{e} \neq \emptyset$? (Here $W_{e}$ denotes the $e^{\text{th}}$ c.e. set.) My only solution is to start by defining $g(e) = \mu s [W_{e,s} \neq \emptyset]$, where $W_{e,s}$ denotes the $s^{\text{th}}$ finite approximation to $W_{e}$, and then set $$ f(e) = \begin{cases} \mu y [y \in W_{e, g(e)}] &amp;\text{if } W_{e} \neq \emptyset \\ \uparrow &amp;\text{otherwise}, \end{cases} $$ but this is ugly (and hence not slick).</p>
Gerben
6,004
<p>High-dimensional problems in statistical physics can sometimes be solved directly using matrix multiplication, see <a href="http://en.wikipedia.org/wiki/Transfer_matrix_method" rel="noreferrer">http://en.wikipedia.org/wiki/Transfer_matrix_method</a>. The best-known example of this trick is the one-dimensional Ising model <a href="http://en.wikipedia.org/wiki/Ising_model" rel="noreferrer">http://en.wikipedia.org/wiki/Ising_model</a>, where an $N$-particle system can be 'solved' by calculating the $N$-th power of a 2x2-matrix, which is (almost) trivial; otherwise, one would have to compute a sum over $2^N$ terms to get the same result.</p>
1,832,887
<p>Consider the conjunction introduction and implication elimination rules of natural deduction:</p> <p>$$\frac{\Gamma\vdash\alpha \quad \Gamma\vdash\beta}{ \Gamma\vdash \alpha \land \beta} (\land I) \qquad \frac{ \Gamma \vdash \alpha \to \beta \quad \Gamma \vdash \alpha} {\Gamma\vdash\beta} (\to E) \qquad \text{(single)}$$</p> <p>and note that the context $\Gamma$ of both premises of $(\to E)$ and $(\land I)$ must be the same.</p> <p>Because this <em>need not be the case in general</em>, why not write those rules like this instead:</p> <p>$$\frac{\Gamma\vdash\alpha \quad \Delta\vdash\beta}{ \Gamma,\Delta\vdash \alpha \land \beta} (\land I') \qquad \frac{ \Gamma \vdash \alpha \to \beta \quad \Delta \vdash \alpha} {\Gamma, \Delta\vdash\beta} (\to E') \qquad \text{(multiple)} $$</p> <p>i.e. with the rules stated like this, one might allow premises with distinct contexts.</p> <p><strong>Questions</strong>:</p> <ol> <li><p>Should multiple premises of a natural deduction inference rule always have the same context?</p></li> <li><p>In spite of their generality, why do most (if not all) textbook or canonical presentations of the inference rules of natural deduction refrain from using $\text{(multiple)}$-like rules? Because they are less didactic?</p></li> <li><p>Aren't $\text{(multiple)}$-like rules valid as well in natural deduction? </p></li> </ol> <p>Thanks!</p>
Vineet Mangal
346,869
<p>Assume $p(x)$ is of the form $$p(x)=a\prod_{r=1}^5(x-r)+b\prod_{r=1}^4(x-r)+c\prod_{r=1}^3(x-r)+d\prod_{r=1}^2(x-r)+e(x-1)+f.$$ Now substitute the values $x=1,2,3,4,5,6$ in turn; the values of $a,b,c,d,e,f$ come out to be $\left[\frac{-1}{40},\frac{1}{12},\frac{-1}{6},\frac{1}{2},0,1\right]$ respectively. You can get these values very easily and with almost no calculation: start with $x=1$ to get the value of $f$, and then substitute the larger values one by one to get $e, d, c, b, a$.</p> <p>So $p(7)=8$.</p> <p>Hope this helps, as this method does not require solving complicated equations.</p>
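As a sanity check of the stated coefficients with exact rational arithmetic (a sketch of mine; since the original question isn't reproduced above, the values $p(1),\dots,p(6)=1,1,2,3,5,8$ printed below are simply what these coefficients imply):

```python
from fractions import Fraction as F
from math import prod

# a, b, c, d, e, f as given in the answer
a, b, c, d, e, f = F(-1, 40), F(1, 12), F(-1, 6), F(1, 2), F(0), F(1)

def p(x):
    return (a * prod(x - r for r in range(1, 6))
            + b * prod(x - r for r in range(1, 5))
            + c * prod(x - r for r in range(1, 4))
            + d * (x - 1) * (x - 2)
            + e * (x - 1)
            + f)

print([int(p(x)) for x in range(1, 8)])  # [1, 1, 2, 3, 5, 8, 8]
```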
3,796,937
<p>Prove that <span class="math-container">$2^n+1$</span> is not a cube for any <span class="math-container">$n\in\mathbb{N}$</span>.</p> <p>I managed to prove this statement, but I would like to know if there are other approaches different from mine.</p> <p>If there existed <span class="math-container">$k\in\mathbb{N}$</span> such that <span class="math-container">$2^n+1=k^3$</span>, then <span class="math-container">$k=2l+1$</span> for some <span class="math-container">$l\in\mathbb{N}$</span>. Then <span class="math-container">$(2l+1)^3=2^n+1 \iff 4l^3+6l^2+3l=2^{n-1}$</span>. As I am looking for an integer solution, from the Rational Root Theorem <span class="math-container">$l$</span> would need to be of the form <span class="math-container">$2^j$</span> for <span class="math-container">$j=1,...,n-1$</span>. But then</p> <p><span class="math-container">$$4(2^j)^3+6(2^j)^2+3\times2^j=2^{n-1} \iff 2^{2j+2}+3(2^{j+1}+1)=2^{n-1-j}$$</span></p> <p>where the LHS is odd, which forces <span class="math-container">$j=n-1$</span>, so the RHS equals <span class="math-container">$1$</span> while the LHS is greater than <span class="math-container">$1$</span>. Absurd.</p> <p>Thank you in advance.</p>
Brian M. Scott
12,042
<p>Here is a parity-based solution that avoids the rational root test.</p> <p>If <span class="math-container">$2^n+1=m^3$</span>, then <span class="math-container">$2^n=m^3-1=(m-1)(m^2+m+1)$</span>, so <span class="math-container">$m-1=2^k$</span> for some <span class="math-container">$k\le n$</span>, and</p> <p><span class="math-container">$$2^n+1=\left(2^k+1\right)^3=2^{3k}+3\cdot2^{2k}+3\cdot2^k+1\,.$$</span></p> <p>Then <span class="math-container">$2^n=2^k\left(2^{2k}+3\cdot2^k+3\right)$</span>, so <span class="math-container">$2^{n-k}=2^{2k}+3\cdot2^k+3$</span> is odd and greater than <span class="math-container">$1$</span>, which is impossible.</p> <p><strong>Added:</strong> As one can see from the comments below, there are many ways to continue this argument after the first line. I took what I think of as the follow-your-nose approach, i.e., the most obvious, straightforward one, not necessarily the neatest. (And speaking of neatest, I quite like the one by <strong>rtybase</strong>.) Then again, folks’ noses don’t always point in the same direction. :-)</p>
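Neither proof needs it, but an exhaustive check over small exponents is easy corroboration (a sketch; `icbrt` is a helper name of mine for an exact integer cube root, computed by bisection so that large powers of two do not suffer floating-point error):

```python
def icbrt(m):
    """Exact integer cube root of a nonnegative integer, by bisection."""
    lo, hi = 0, 1
    while hi ** 3 <= m:
        hi *= 2
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if mid ** 3 <= m:
            lo = mid
        else:
            hi = mid - 1
    return lo

for n in range(1, 501):
    m = 2 ** n + 1
    assert icbrt(m) ** 3 != m  # 2^n + 1 is never a perfect cube here
print("no cube among 2**n + 1 for n = 1, ..., 500")
```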
3,796,937
<p>Prove that <span class="math-container">$2^n+1$</span> is not a cube for any <span class="math-container">$n\in\mathbb{N}$</span>.</p> <p>I managed to prove this statement but I would like to know if there any other approaches different from mine.</p> <p>If existed <span class="math-container">$k\in\mathbb{N}$</span> such that <span class="math-container">$2^n+1=k^3$</span> then <span class="math-container">$k=2l+1$</span> for some <span class="math-container">$l\in\mathbb{N}$</span>. Then <span class="math-container">$(2l+1)^3=2^n+1 \iff 4l^3+6l^2+3l=2^{n-1}$</span>. As I am looking for an integer solution, from the Rational Root Theorem <span class="math-container">$l$</span> would need to be of the form <span class="math-container">$2^j$</span> for <span class="math-container">$j=1,...,n-1$</span>. But then</p> <p><span class="math-container">$$4(2^j)^3+6(2^j)^2+3\times2^j=2^{n-1} \iff 2^{2j+2}+3(2^{j+1}+1)=2^{n-1-j}$$</span></p> <p>the LHS is odd which implies that <span class="math-container">$j=n-1$</span>. Absurd.</p> <p>Thank you in advance.</p>
J. W. Tanner
615,567
<p>Invoking an argument more powerful than needed for this:</p> <p>there cannot be any solutions to <span class="math-container">$2^n+1=m^3$</span> (i.e., <span class="math-container">$m^3-2^n=1$</span>) with <span class="math-container">$m,n&gt;1$</span>, by <a href="https://en.wikipedia.org/wiki/Catalan%27s_conjecture" rel="noreferrer">Mihăilescu's theorem</a>,</p> <p>which states that <span class="math-container">$2^3$</span> and <span class="math-container">$3^2$</span> are the only two powers of natural numbers</p> <p>whose values are consecutive.</p> <p>(The theorem concerns bases and exponents greater than <span class="math-container">$1$</span>, but the remaining cases are immediate: <span class="math-container">$n=1$</span> gives <span class="math-container">$3$</span>, which is not a cube, and <span class="math-container">$m=1$</span> would force <span class="math-container">$2^n=0$</span>.)</p>
881,013
<p>I am still an undergraduate student, and so perhaps I just haven't seen enough of the mathematical world. </p> <p><strong>Question:</strong> What are some examples of mathematical logic solving open problems outside of mathematical logic?</p> <p>Note that the <a href="//en.wikipedia.org/wiki/Ax%E2%80%93Grothendieck_theorem" rel="nofollow noreferrer">Ax-Grothendieck Theorem</a> <em>would have been</em> a perfect example of this (namely, if $P$ is a polynomial function from $\mathbb{C}^n$ to $\mathbb{C}^n$ and $P$ is injective then $P$ is bijective). However, even though there is a beautiful logical proof of this result, it was first proven not specifically using mathematical logic. I'm curious as to whether there are any results where "the logicians got there first".</p> <p><strong>Edit 1 (Bonus)</strong>: I am quite curious if one can post an example from Reverse Mathematics. </p> <p><strong>Edit 2:</strong> <a href="https://math.stackexchange.com/questions/886848/why-exactly-is-whiteheads-problem-undecidable">This post</a> reminded me that the solution to <a href="http://en.wikipedia.org/wiki/Whitehead_problem" rel="nofollow noreferrer">Whitehead's Problem</a> came from logic (a problem in group theory). According to the Wikipedia article, the proof by Shelah was 'completely unexpected'. It utilizes the fact that <strong>ZFC+(V=L)</strong> implies every Whitehead group is free, while <strong>ZFC+$\neg$CH+MA</strong> implies there exists a Whitehead group which is not free. Since both of these axiom systems are consistent relative to ZFC, Whitehead's problem is undecidable in ZFC. </p> <p><strong>Edit 3:</strong> A year later, we have some more examples: </p> <p>1) Hrushovski's proof of the Mordell-Lang conjecture for function fields in all characteristics. </p> <p>2) The André-Oort conjecture by Pila and Tsimerman.</p> <p>3) Various results in o-minimality, including work by Wilkie and van den Dries (as well as others).</p> <p>4) Zilber's pseudo-exponential field as work towards Schanuel's conjecture. </p>
Conifold
152,568
<p>I was impressed by Bernstein and Robinson's 1966 proof that if some polynomial of an operator on a Hilbert space is compact then the operator has an invariant subspace. This solved a particular instance of the <a href="http://en.wikipedia.org/wiki/Invariant_subspace_problem" rel="nofollow noreferrer">invariant subspace problem</a>, one of pure operator theory without any hint of logic.</p> <p>Bernstein and Robinson used a hyperfinite-dimensional Hilbert space, a nonstandard model, and some very metamathematical tools like the transfer principle and saturation. Halmos was very unhappy with their proof and eliminated non-standard analysis from it the same year. But the fact remains that the proof was originally found through a non-trivial application of model theory.</p> <p>Another example is the solution to <a href="http://en.wikipedia.org/wiki/Hilbert%27s_tenth_problem" rel="nofollow noreferrer">Hilbert's tenth problem</a> by Matiyasevich. Hilbert asked for a procedure to determine whether a given polynomial Diophantine equation is solvable. This was a number-theoretic problem, and he did not anticipate that no such procedure could exist. Proving non-existence, though, required developing a branch of mathematical logic now called computability theory (by Gödel, Church, Turing and others) that formalizes the notion of algorithm. Matiyasevich showed that any recursively enumerable set can be the solution set of a Diophantine equation, and since not all recursively enumerable sets are computable, there can be no solvability algorithm.</p> <p>This example is typical of how logic serves all parts of mathematics by saving effort on doomed searches for impossible constructions, proofs or counterexamples. For instance, an analyst might ask if the plane can be decomposed into a union of two sets, one at most countable along every vertical line, and the other along every horizontal line. It seems unlikely, and people could spend a lot of time trying to disprove it. In vain, because Sierpinski <a href="http://www.ams.org/journals/bull/1936-42-05/S0002-9904-1936-06291-9/S0002-9904-1936-06291-9.pdf" rel="nofollow noreferrer">proved</a> that the existence of such a decomposition is equivalent to the continuum hypothesis, and Gödel showed that disproving it is impossible by an elaborate logical construction now called an <a href="http://en.wikipedia.org/wiki/Inner_model#Related_ideas" rel="nofollow noreferrer">inner model</a>. As is proving it, as Cohen showed by an even more elaborate logical construction called forcing.</p> <p>A more recent example is the <a href="http://www.ams.org/journals/jams/1996-9-03/S0894-0347-96-00202-0/" rel="nofollow noreferrer">proof of the Mordell-Lang conjecture for function fields by Hrushovski (1996)</a>. The conjecture "<em>is essentially a finiteness statement on the intersection of a subvariety of a semi-Abelian variety with a subgroup of finite rank</em>". In characteristic $0$, and for Abelian varieties or finitely generated groups, the conjecture was proved more traditionally by Raynaud, Faltings and Vojta. They inferred the result for function fields from the one for number fields using a specialization argument; another proof was found by Buium. Abramovich and Voloch proved many cases in characteristic $p$. Hrushovski gave a uniform proof for general semi-Abelian varieties in arbitrary characteristic using "<em>model-theoretic analysis of the kernel of Manin's homomorphism</em>", which involves definable subsets, Morley dimension, $\lambda$-saturated structures, model-theoretic algebraic closure, the compactness theorem for first-order theories, etc.</p>
881,013
<p>I am still an undergraduate student, and so perhaps I just haven't seen enough of the mathematical world. </p> <p><strong>Question:</strong> What are some examples of mathematical logic solving open problems outside of mathematical logic?</p> <p>Note that the <a href="//en.wikipedia.org/wiki/Ax%E2%80%93Grothendieck_theorem" rel="nofollow noreferrer">Ax-Grothendieck Theorem</a> <em>would have been</em> a perfect example of this (namely, if $P$ is a polynomial function from $\mathbb{C}^n$ to $\mathbb{C}^n$ and $P$ is injective then $P$ is bijective). However, even though there is a beautiful logical proof of this result, it was first proven not specifically using mathematical logic. I'm curious as to whether there are any results where "the logicians got there first".</p> <p><strong>Edit 1 (Bonus)</strong>: I am quite curious if one can post an example from Reverse Mathematics. </p> <p><strong>Edit 2:</strong> <a href="https://math.stackexchange.com/questions/886848/why-exactly-is-whiteheads-problem-undecidable">This post</a> reminded me that the solution to <a href="http://en.wikipedia.org/wiki/Whitehead_problem" rel="nofollow noreferrer">Whitehead's Problem</a> came from logic (a problem in group theory). According to the Wikipedia article, the proof by Shelah was 'completely unexpected'. It utilizes the fact that <strong>ZFC+(V=L)</strong> implies every Whitehead group is free, while <strong>ZFC+$\neg$CH+MA</strong> implies there exists a Whitehead group which is not free. Since both of these axiom systems are consistent relative to ZFC, Whitehead's problem is undecidable in ZFC. </p> <p><strong>Edit 3:</strong> A year later, we have some more examples: </p> <p>1) Hrushovski's proof of the Mordell-Lang conjecture for function fields in all characteristics. </p> <p>2) The André-Oort conjecture by Pila and Tsimerman.</p> <p>3) Various results in o-minimality, including work by Wilkie and van den Dries (as well as others).</p> <p>4) Zilber's pseudo-exponential field as work towards Schanuel's conjecture. </p>
apurv
49,824
<p>Not sure if this qualifies, but <a href="http://en.m.wikipedia.org/wiki/Goodstein&#39;s_theorem">Goodstein's theorem</a>, which sounds purely number-theoretic and states that every <a href="http://mathworld.wolfram.com/GoodsteinSequence.html">Goodstein sequence</a> eventually terminates at $0$, was proved using ordinals. </p>
881,013
<p>I am still an undergraduate student, and so perhaps I just haven't seen enough of the mathematical world. </p> <p><strong>Question:</strong> What are some examples of mathematical logic solving open problems outside of mathematical logic?</p> <p>Note that the <a href="//en.wikipedia.org/wiki/Ax%E2%80%93Grothendieck_theorem" rel="nofollow noreferrer">Ax-Grothendieck Theorem</a> <em>would have been</em> a perfect example of this (namely, if $P$ is a polynomial function from $\mathbb{C}^n$ to $\mathbb{C}^n$ and $P$ is injective then $P$ is bijective). However, even though there is a beautiful logical proof of this result, it was first proven not specifically using mathematical logic. I'm curious as to whether there are any results where "the logicians got there first".</p> <p><strong>Edit 1 (Bonus)</strong>: I am quite curious if one can post an example from Reverse Mathematics. </p> <p><strong>Edit 2:</strong> <a href="https://math.stackexchange.com/questions/886848/why-exactly-is-whiteheads-problem-undecidable">This post</a> reminded me that the solution to <a href="http://en.wikipedia.org/wiki/Whitehead_problem" rel="nofollow noreferrer">Whitehead's Problem</a> came from logic (a problem in group theory). According to the Wikipedia article, the proof by Shelah was 'completely unexpected'. It utilizes the fact that <strong>ZFC+(V=L)</strong> implies every Whitehead group is free, while <strong>ZFC+$\neg$CH+MA</strong> implies there exists a Whitehead group which is not free. Since both of these axiom systems are consistent relative to ZFC, Whitehead's problem is undecidable in ZFC. </p> <p><strong>Edit 3:</strong> A year later, we have some more examples: </p> <p>1) Hrushovski's proof of the Mordell-Lang conjecture for function fields in all characteristics. </p> <p>2) The André-Oort conjecture by Pila and Tsimerman.</p> <p>3) Various results in o-minimality, including work by Wilkie and van den Dries (as well as others).</p> <p>4) Zilber's pseudo-exponential field as work towards Schanuel's conjecture. </p>
Mikhail Katz
72,694
<p>Terry Tao has been using Abraham Robinson's framework and, more generally, ultraproducts (often considered to be "logic" as you put it), as in, for example, his treatment of Hilbert's fifth problem in <a href="http://bookstore.ams.org/gsm-153" rel="nofollow">this recent book</a>.</p>
847,316
<p>Why is the solution to linear differential equations with constant coefficients sought in the form of $Ce^{kx}$ ?</p>
Gaussler
129,649
<p>I'm making a long reply to one of joo's comments:</p> <p>Well, a linear differential equation with constant coefficients has the form $y' = Ay$ for some matrix $A$. Now the general solution with the initial value $y(0) = v$ is $y(t) = \exp(tA)v$, where $\exp(X) = \sum_{k=0}^\infty \frac 1{k!} X^k$ for all square matrices $X$ (it can be proved that this series is always convergent, just like the ordinary exponential series). This follows from the formula $$ \tfrac d{dt}\exp(tA) = A\exp(tA), $$ which is not at all trivial, but which is proved in much literature on the subject.</p> <p>In the case where $A$ is diagonalizable, there exists a basis $v_1,\ldots,v_n$ for $\Bbb R^n$ of eigenvectors for $A$. Putting $V = (v_1,\ldots,v_n)$ in column form, $D:= V^{-1} A V = \begin{pmatrix}\lambda_1 &amp; &amp;0\\ &amp; \ddots\\ 0&amp; &amp; \lambda_n\end{pmatrix}$ is a diagonal matrix, where $\lambda_i$ is the eigenvalue corresponding to $v_i$ (Exercise!). Now $$ \exp(D)=\exp(V^{-1} A V) = \sum_{k=0}^\infty \frac 1{k!}(V^{-1} A V)^k = \sum_{k=0}^\infty \frac 1{k!}\underbrace{(V^{-1}AV)(V^{-1}AV)\cdots (V^{-1} A V)}_{\text{$k$ times}} = V^{-1}\big(\sum_{k=0}^\infty \frac 1{k!} A^k\big)V = V^{-1}\exp(A)V. $$ Similarly, $\exp(tD)=\exp(t V^{-1}AV)=V^{-1}\exp(tA)V$ for all $t\in\Bbb R$. Hence if $y$ is the solution from before and we write $c = V^{-1}v$ for the coordinate vector of $v$ in the eigenbasis, we have $$ y = \exp(tA)v=V\exp(tD)V^{-1}v = V\begin{pmatrix}e^{\lambda_1 t} &amp; &amp; 0\\&amp;\ddots&amp;\\0&amp;&amp;e^{\lambda_n t}\end{pmatrix}c = \sum_{i=1}^n c_i e^{\lambda_i t}v_i, $$ where the above expression for $\exp(tD)$ follows from the fact that $D^k = \begin{pmatrix}\lambda_1^k &amp; &amp;0\\ &amp; \ddots\\ 0&amp; &amp; \lambda_n^k\end{pmatrix}$ for all $k\in\Bbb N$; hence $\exp(tD)$ is calculated entry-wise. Now the expression on the right is a linear combination of functions of the form $t\mapsto e^{\lambda_i t} v_i$, which was what you wanted to know.</p>
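As a numerical sanity check of the key identity $\frac{d}{dt}\exp(tA)v=A\exp(tA)v$, here is a sketch using a truncated exponential series and a central difference (the matrix $A$ and all names are illustrative choices of mine):

```python
# A small diagonalizable example (my choice): eigenvalues are -1 and -2.
A = [[0.0, 1.0], [-2.0, -3.0]]

def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

def expm_times(t, v, terms=40):
    """Compute exp(tA) v via the partial sums of sum_k (tA)^k v / k!."""
    result = list(v)
    term = list(v)
    for k in range(1, terms):
        term = [t / k * c for c in mat_vec(A, term)]  # now (tA)^k v / k!
        result = [r + c for r, c in zip(result, term)]
    return result

v = [1.0, 0.0]
t, h = 0.7, 1e-5

y = expm_times(t, v)
# Central difference approximation of y'(t); it should agree with A y(t).
deriv = [(p - m) / (2 * h)
         for p, m in zip(expm_times(t + h, v), expm_times(t - h, v))]
print(deriv)
print(mat_vec(A, y))
```

The two printed vectors agree to many digits, as the identity predicts.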
3,118,282
<p>I'm trying to do the following problem in my book, but I don't understand how the book got their answer.</p> <p>The problem: Determine whether the following relations are equivalence relations:<span class="math-container">$\newcommand{\relR}{\mathrel{R}}$</span></p> <p>The relation <span class="math-container">$\relR$</span> on <span class="math-container">$\mathbb{R}$</span> given by <span class="math-container">$x\relR y$</span> if and only if <span class="math-container">$|x-y|\leq1$</span>. </p> <p>The answer only says it isn't transitive and gives this example: <span class="math-container">$(1\relR2)\wedge(2\relR3)$</span>, but <span class="math-container">$1\not\relR 3$</span>. Where did they get those numbers from?</p> <p>As for the problem being reflexive and symmetric, please correct me if I'm wrong but here is what I assume it to be:</p> <p>Reflexive: For any <span class="math-container">$x$</span> such that <span class="math-container">$x\relR x\Rightarrow x\leq1$</span></p> <p>Symm: For any <span class="math-container">$x$</span>, <span class="math-container">$y$</span> such that <span class="math-container">$x\relR y\Rightarrow|x-y|\leq1\text{ and }1\geq|x-y|$</span></p>
Graham Kemp
135,106
<blockquote> <p>Reflexive: For any x such that xRx ---> x &lt;= 1</p> </blockquote> <p>No, reflexivity requires that <span class="math-container">$\def\R{\operatorname R}\forall x{\in}\Bbb R~(x\R x)$</span>, which is clearly true (given the definitions for absolute value, substraction, and the <span class="math-container">$\leqslant$</span> comparator).<span class="math-container">$$\forall x{\in}\Bbb R~(\lvert x-x\rvert\leqslant 1)$$</span></p> <blockquote> <p>Symm: For any x,y such that xRy ---> |x-y| &lt;= 1 and 1>= |x-y|</p> </blockquote> <p>No, symmetry requires that <span class="math-container">$\forall x{\in}\Bbb R\,\forall y{\in}\Bbb R~(x\R y\to y\R x)$</span>, which is clearly true. <span class="math-container">$$\forall x{\in}\Bbb R\,\forall y{\in}\Bbb R~(\lvert x-y\rvert\leqslant 1\to\lvert y-x\rvert\leqslant 1)$$</span></p> <hr> <p>Transivity requires that <span class="math-container">$\forall x{\in}\Bbb R\,\forall y{\in}\Bbb R\,\forall z{\in}\Bbb R\;((x\R y\land y\R z)\to x\R z)$</span>. &nbsp; The truth value for this universal statement not so obvious, so we shall look into the possibility of counterexamples. &nbsp; We just need to demonstrate one counterexample to <em>disprove</em> a universal quantified statement.</p> <p>Our relation, <span class="math-container">$\R$</span> is <em>not</em> transitive if <span class="math-container">$\exists x{\in}\Bbb R\,\exists y{\in}\Bbb R\,\exists z{\in}\Bbb R\;(x\R y\land y\R z\land x\require{cancel}\cancel{\R}z)$</span> . 
&nbsp; That is to say, should there <strong>exist some <span class="math-container">$x,y,z$</span> where <span class="math-container">$y$</span> is at most a distance of one from each of <span class="math-container">$x$</span> and <span class="math-container">$z$</span>, but <span class="math-container">$x$</span> is more than one from <span class="math-container">$z$</span>.</strong></p> <p><span class="math-container">$$\exists x{\in}\Bbb R\,\exists y{\in}\Bbb R\,\exists z{\in}\Bbb R\;\big(\lvert x-y\rvert \leqslant 1\,\land\, \lvert y-z\rvert\leqslant 1\,\land\,\lvert x-z\rvert \gt 1\big)$$</span></p> <p>So what real numbers could possible make that so? </p> <p>Well, <span class="math-container">$1,2,3$</span> easily fit that bill. <span class="math-container">$$\lvert \mathbf 1-\mathbf 2\rvert \leqslant 1\,\land\, \lvert \mathbf 2-\mathbf 3\rvert\leqslant 1\,\land\,\lvert \mathbf 1-\mathbf 3\rvert \gt 1$$</span></p>
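The failure of transitivity (and the survival of reflexivity and symmetry) can also be confirmed by brute force on a small sample of reals (a sketch; the sample grid and all names are mine):

```python
def R(x, y):
    return abs(x - y) <= 1

pts = [i / 2 for i in range(-6, 7)]  # sample grid: -3, -2.5, ..., 3

# Reflexivity and symmetry hold on the sample:
assert all(R(x, x) for x in pts)
assert all(R(y, x) for x in pts for y in pts if R(x, y))

# Transitivity fails; the book's counterexample 1 R 2, 2 R 3, but not 1 R 3:
assert R(1, 2) and R(2, 3) and not R(1, 3)

# A brute-force search turns up many more such triples:
bad = [(x, y, z) for x in pts for y in pts for z in pts
       if R(x, y) and R(y, z) and not R(x, z)]
print(len(bad), bad[0])
```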
1,571,478
<p>Let $X, Y$ have the joint pdf</p> <p>$$f(x, y) = \begin{cases} 2, &amp; 0 &lt; x &lt; y &lt; 1 \\ 0, &amp; \text{otherwise.} \end{cases}$$</p> <p>Find $P(0 &lt; X &lt; 1/2 \mid Y = 3/4)$.</p> <p>The solutions say</p> <p>$\int_0^{1/2} f_{X|Y}(x \mid y = 3/4)\,dx$</p> <p>I know that $f_{X|Y}(x \mid Y = y) = 2$, but how do I find $f_{X|Y}(x \mid y = 3/4)$?</p>
Graham Kemp
135,106
<p>$f_{X\mid Y}(x\mid y) = \dfrac{f_{X,Y}(x,y)}{\displaystyle \int_0^y f_{X,Y}(x,y)\operatorname d x} = \dfrac{1}{y}\quad\mathbf 1_{0&lt; x&lt; y&lt; 1}$</p> <p>That is $X$ (conditioned on $Y=y$) is uniformly distributed on the interval $(0;y)$</p>
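Concretely, since $X \mid Y = y$ is uniform on $(0;y)$, the requested probability is $(1/2)/(3/4) = 2/3$. Here is a sketch that computes this exactly and cross-checks it by simulation, conditioning on $Y$ falling in a thin band around $3/4$ (all names are mine):

```python
import random
from fractions import Fraction

# Exact answer: X | Y = y is Uniform(0, y), so
# P(0 < X < 1/2 | Y = 3/4) = (1/2) / (3/4).
exact = Fraction(1, 2) / Fraction(3, 4)
print(exact)  # 2/3

# Monte Carlo cross-check: draw X | Y = y uniformly on (0, y) and
# keep only samples where Y lands in a thin band around 3/4.
random.seed(0)
count = hits = 0
for _ in range(200_000):
    y = random.random()
    x = random.uniform(0.0, y)
    if abs(y - 0.75) < 0.01:
        count += 1
        hits += x < 0.5
print(hits / count)  # close to 2/3
```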
633,522
<ol> <li><p>$ \lim_{n\to \infty} \sqrt[n]{3^n+4^n} $. I think the limit is $4$. I did: $ \sqrt[n]{3^n+4^n} = 4 \sqrt[n]{\left(\frac{3}{4}\right)^n+1}$. Am I right?</p></li> <li><p>$ \lim_{n\to \infty} \frac{1}{1\cdot 4 } + \frac{1}{4\cdot 7} +...+\frac{1}{(3n-2)(3n+1)} $. I know that for each $n$, this is a sum of $n$ terms, the smallest of which is $ \frac{1}{(3n-2)(3n+1)} $ and the largest $ \frac{1}{1\cdot 4} $. But when substituting and trying to use the squeeze theorem, I get that the limit should be between $0$ and $\infty$, which gives me nothing.</p></li> </ol> <p>Thanks in advance.</p>
Yiorgos S. Smyrlis
57,021
<p>For every $n$ $$ 4^n&lt;3^n+4^n&lt; 2\cdot 4^n, $$ thus $$ 4&lt;\sqrt[n]{3^n+4^n}&lt; 2^{1/n}\cdot 4. $$</p>
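The squeeze is easy to watch numerically (a small sketch of mine, not part of the answer): since $2^{1/n}\to 1$, the upper bound closes in on $4$.

```python
for n in (1, 5, 10, 50, 100):
    val = (3 ** n + 4 ** n) ** (1 / n)
    upper = 2 ** (1 / n) * 4
    print(n, val, upper)  # val is squeezed between 4 and upper
```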
3,967,862
<p>If <span class="math-container">$A,B$</span> commute and <span class="math-container">$A\leq B$</span>, then <span class="math-container">$e^A\leq e^B$</span> follows from functional calculus.</p> <p>Is this still true when <span class="math-container">$A,B$</span> do not commute?</p> <p>Also, I wonder whether <span class="math-container">$A\leq B$</span> implies <span class="math-container">$A^{2n+1}\leq B^{2n+1}$</span> for every natural number <span class="math-container">$n$</span>.</p>
Community
-1
<p><span class="math-container">$ \dfrac{\cos x + \sin x}{\cos x - \sin x} = \dfrac{1+ \tan x}{1 - \tan x} $</span> (dividing numerator and denominator by <span class="math-container">$\cos x$</span>).</p> <p>Now, <span class="math-container">$\tan (x+y) = \dfrac{\tan x+ \tan y}{1 - \tan x \tan y}$</span>.</p> <p>Therefore, <span class="math-container">$\tan (x+ \frac{\pi}{4}) = \dfrac{\tan x+ 1}{1 - \tan x}$</span>.</p> <p>Therefore, <span class="math-container">$\dfrac{\cos x + \sin x}{\cos x - \sin x} = \tan(x+ \frac{\pi}{4})$</span>.</p> <p>Is this what you meant by one term?</p>
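A quick numerical confirmation of the identity at a few sample points away from the singularity at $x=\pi/4$ (a sketch; the function names are mine):

```python
import math

def lhs(x):
    return (math.cos(x) + math.sin(x)) / (math.cos(x) - math.sin(x))

def rhs(x):
    return math.tan(x + math.pi / 4)

for x in (0.0, 0.3, -1.1, 2.5):
    print(x, lhs(x), rhs(x))  # the last two columns agree
```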
907,055
<p>I've begun a course in "Real Analysis" recently and I have this trivial exercise. Could someone check if my proof is correct?</p> <p>Proposition: There exists an injective function $ f: A \rightarrow B \iff $ there exists a surjective function $ g: B \rightarrow A $.</p> <p>Proof: Firstly, we prove that an injective function $f: A \rightarrow B \Longrightarrow g: B \rightarrow A$ is surjective. Suppose $\exists f: A \rightarrow B $ such that $ f$ is injective, i.e., $ \forall x_{1}, x_{2} \in A, x_{1} \neq x_{2} \rightarrow f(x_{1}) \neq f(x_{2})$. </p> <p>By hypothesis, $\exists g: B \rightarrow A$ such that $g$ is not surjective. Then there is at least one $ x \in A $ such that $ \forall y \in B, g(y) \neq x $. But that is not possible, because if $f$ is injective, then all $x \in A$ correspond to some $y \in B$. Contradiction!</p> <p>Now, we prove that a surjective function $g: B \rightarrow A \Longrightarrow f: A \rightarrow B$ is injective. Suppose $g: B \rightarrow A $ is surjective, i.e., $\forall y \in B, \exists x \in A$ such that $ g(y) = x$. By hypothesis, $\exists f: A \rightarrow B$ such that $f$ is not injective. Then there are $x_{1}, x_{2} \in A$ such that for $x_{1} \neq x_{2}$, we have $f(x_{1}) = f(x_{2})$. By the definition of function, that could only happen if there is $ y \in B $ such that $ y \notin Dom(g) $. Contradiction!</p> <p>So, there exists an injective function $ f: A \rightarrow B \iff $ there exists a surjective function $ g: B \rightarrow A $. Q.E.D.</p>
forallepsilon
170,777
<p>There are several problems with your proof.</p> <p>Firstly, in both directions of your proof, your statements "By hypothesis, ..." seem to assume that <em>all</em> proofs are by contradiction, and that you must refute the negation of the statement. What you are assuming is not a hypothesis.</p> <p>Secondly, your negation of the statement you wish to prove is incorrect. The negation of "there exists a surjective function" is <em>not</em> "there exists a function that is not surjective," but rather: "<em>all</em> functions are not surjective." The same goes for your other direction.</p> <p>You may want to look at the $\underline{inverse}$ of these functions, and in one case, extend its domain. </p>