312,012
<p>What is the exact difference between $\arg\max$ and $\max$ of a function?</p> <p>Is it right to say the following?</p> <blockquote> <p>$\arg\max f(x)$ is nothing but the value of $x$ for which the value of the function is maximal, and $\max f(x)$ is the maximal value of $f(x)$.</p> </blockquote> <p>More precisely,</p> <blockquote> <p>$\arg \max$ returns a value from the domain of the function and $\max$ returns a value from the range of the function?</p> </blockquote>
copper.hat
27,978
<p>$\arg \max_x f(x) = f^{-1} \{ \max_x f(x) \}$.</p> <p>For example, $\max_x \cos x = 1$, $\arg \max_x \cos x = \{ n 2 \pi\}_{n \in \mathbb{Z}}$.</p>
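A quick numerical illustration of the distinction (a Python sketch; the grid and variable names are mine, not the answerer's):

```python
import math

# Grid search over the domain [0, 4*pi]: max picks a value from the
# range of cos, argmax picks a point from its domain.
xs = [4.0 * math.pi * i / 100000 for i in range(100001)]
ys = [math.cos(x) for x in xs]

max_val = max(ys)                 # max_x cos x   (an element of the range)
arg_max = xs[ys.index(max_val)]   # arg max_x cos x   (an element of the domain)
```

On this grid `max_val` is $1$ and `arg_max` is one of the points $2\pi n$, consistent with the formula above.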
2,112,259
<p>I want to prove or disprove $\min{S_1}=\min{S_2}\Longleftrightarrow\min{S_1}\in S_2\land\min{S_2}\in S_1$ but I don't know where to start.</p> <p>Maybe take cases like $\min{S_1}&lt;\min{S_2}$, $\min{S_1}&gt;\min{S_2}$ and $\min{S_1}=\min{S_2}$ and see where it is true?</p>
Mark Fischler
150,362
<p>Your equations are \begin{cases} 12k= 839+ p^2\\ 2k= 135+ q^2 \end{cases} and we observe that $$p^2-q^2 = 10k - 704 = 5(2k-141)+1$$ so $\,p^2 - q^2 \equiv 1 \pmod{5}$, which tells us that either $p^2 \equiv 1$ and $q \equiv 0 \pmod 5$, or $p \equiv 0$ and $q^2 \equiv -1 \pmod 5$. We consider the first case here, and the second later. Since $q$ is prime and divisible by $5$, $q=5$. Then $$ 2k = 135+25 = 160 \implies k=80 $$ and $$ 12k = 960 = 839+p^2\\ p^2 = 121\\ p=11 $$ Finally, $$ 12 n - 21143 = 25p^2= 3025\\ 12n = 24168\\ n=2014 $$ or $$ 2 n - 3403= 25q^2= 625\\ 2n = 4028\\ n=2014 $$</p> <p>So that is the one solution.</p> <p>Now consider the second case: $p \equiv 0 \pmod 5$ and $q^2 \equiv -1 \pmod 5$. Since $p$ is prime and divisible by $5$, $p=5$, so $$12k=839+p^2 = 864 \\ k=72$$ And the other equation becomes $$2\cdot 72 = 135 + q^2\\ q=3$$ So that will give a second solution, with $$ n = 14 + 25k = 14 + 25\cdot 72=1814 $$</p> <p>Kudos to Leox for noting the second case, which I had overlooked in my original answer.</p>
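The arithmetic above can be verified mechanically. A Python sketch (the function `check` is mine; the equations $12n-21143=25p^2$, $2n-3403=25q^2$, $n=14+25k$ are reconstructed from the answer itself):

```python
# Verify both (n, k, p, q) solutions against every equation that
# appears in the answer.
def check(n, k, p, q):
    return (12 * k == 839 + p * p
            and 2 * k == 135 + q * q
            and 12 * n - 21143 == 25 * p * p
            and 2 * n - 3403 == 25 * q * q
            and n == 14 + 25 * k)

solutions = [(2014, 80, 11, 5), (1814, 72, 5, 3)]
```

Both tuples satisfy all five equations, confirming $n=2014$ and $n=1814$.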
4,010,386
<p>Let <span class="math-container">$P_n$</span> be the vector space of all polynomials with real coefficients and of degree <span class="math-container">$n$</span> or less. If <span class="math-container">$a \in \Bbb R$</span> and <span class="math-container">$W=\{p \in P_n \mid p(a)=0\}$</span>, show that <span class="math-container">$W$</span> is a subspace of <span class="math-container">$P_n$</span>.</p> <p><strong>Edit:</strong></p> <p><strong>Attempt:</strong> Let <span class="math-container">$0_p$</span> denote the zero polynomial.</p> <ol> <li>Since <span class="math-container">$0_p(a)=0$</span>, we have <span class="math-container">$0_p \in W$</span> and thus <span class="math-container">$W \ne \emptyset$</span>.</li> <li>Let <span class="math-container">$p,q \in W$</span>. Then, <span class="math-container">$p(a)=0=q(a)$</span> for some <span class="math-container">$a \in \Bbb R$</span>. Now, <span class="math-container">$(p+q)(a) = p(a)+q(a)=0+0=0$</span>. Hence, <span class="math-container">$p+q \in W$</span>.</li> <li>Let <span class="math-container">$p \in W$</span> and let <span class="math-container">$c$</span> be an arbitrary scalar. Then, <span class="math-container">$p(a)=0$</span> for some <span class="math-container">$a \in \Bbb R$</span>. Now, <span class="math-container">$(cp)(a)=cp(a)=c\cdot 0=0$</span>. Hence, <span class="math-container">$cp \in W$</span>.</li> </ol> <p>Thus, <span class="math-container">$W$</span> is a subspace of <span class="math-container">$P_n$</span>.</p> <p>Am I right?</p>
Rhys Hughes
487,658
<p>Spot on.</p> <p>You can also condense the latter two properties into one: <span class="math-container">$$\forall \alpha, \beta \in \Bbb R; p, q\in W \implies\alpha p + \beta q \in W $$</span></p>
4,010,386
<p>Let <span class="math-container">$P_n$</span> be the vector space of all polynomials with real coefficients and of degree <span class="math-container">$n$</span> or less. If <span class="math-container">$a \in \Bbb R$</span> and <span class="math-container">$W=\{p \in P_n \mid p(a)=0\}$</span>, show that <span class="math-container">$W$</span> is a subspace of <span class="math-container">$P_n$</span>.</p> <p><strong>Edit:</strong></p> <p><strong>Attempt:</strong> Let <span class="math-container">$0_p$</span> denote the zero polynomial.</p> <ol> <li>Since <span class="math-container">$0_p(a)=0$</span>, we have <span class="math-container">$0_p \in W$</span> and thus <span class="math-container">$W \ne \emptyset$</span>.</li> <li>Let <span class="math-container">$p,q \in W$</span>. Then, <span class="math-container">$p(a)=0=q(a)$</span> for some <span class="math-container">$a \in \Bbb R$</span>. Now, <span class="math-container">$(p+q)(a) = p(a)+q(a)=0+0=0$</span>. Hence, <span class="math-container">$p+q \in W$</span>.</li> <li>Let <span class="math-container">$p \in W$</span> and let <span class="math-container">$c$</span> be an arbitrary scalar. Then, <span class="math-container">$p(a)=0$</span> for some <span class="math-container">$a \in \Bbb R$</span>. Now, <span class="math-container">$(cp)(a)=cp(a)=c\cdot 0=0$</span>. Hence, <span class="math-container">$cp \in W$</span>.</li> </ol> <p>Thus, <span class="math-container">$W$</span> is a subspace of <span class="math-container">$P_n$</span>.</p> <p>Am I right?</p>
Hagen von Eitzen
39,174
<p>Yes, that's fine.</p> <p>In case you have this tool available, you might also simply observe that <span class="math-container">$W$</span> is defined as the kernel of the linear map <span class="math-container">$P_n\to\Bbb R$</span>, <span class="math-container">$p\mapsto p(a)$</span> and therefore a subspace.</p>
4,010,386
<p>Let <span class="math-container">$P_n$</span> be the vector space of all polynomials with real coefficients and of degree <span class="math-container">$n$</span> or less. If <span class="math-container">$a \in \Bbb R$</span> and <span class="math-container">$W=\{p \in P_n \mid p(a)=0\}$</span>, show that <span class="math-container">$W$</span> is a subspace of <span class="math-container">$P_n$</span>.</p> <p><strong>Edit:</strong></p> <p><strong>Attempt:</strong> Let <span class="math-container">$0_p$</span> denote the zero polynomial.</p> <ol> <li>Since <span class="math-container">$0_p(a)=0$</span>, we have <span class="math-container">$0_p \in W$</span> and thus <span class="math-container">$W \ne \emptyset$</span>.</li> <li>Let <span class="math-container">$p,q \in W$</span>. Then, <span class="math-container">$p(a)=0=q(a)$</span> for some <span class="math-container">$a \in \Bbb R$</span>. Now, <span class="math-container">$(p+q)(a) = p(a)+q(a)=0+0=0$</span>. Hence, <span class="math-container">$p+q \in W$</span>.</li> <li>Let <span class="math-container">$p \in W$</span> and let <span class="math-container">$c$</span> be an arbitrary scalar. Then, <span class="math-container">$p(a)=0$</span> for some <span class="math-container">$a \in \Bbb R$</span>. Now, <span class="math-container">$(cp)(a)=cp(a)=c\cdot 0=0$</span>. Hence, <span class="math-container">$cp \in W$</span>.</li> </ol> <p>Thus, <span class="math-container">$W$</span> is a subspace of <span class="math-container">$P_n$</span>.</p> <p>Am I right?</p>
Ethan Bolker
72,858
<p>This is almost right. The train of thought and the important algebraic steps are right. But throughout your argument you refer to &quot;some <span class="math-container">$a$</span>&quot;. That's not correct. The <span class="math-container">$a$</span> is fixed from the start. The space you are interested in might be called <span class="math-container">$W_a$</span>. It's a different space for each fixed <span class="math-container">$a$</span>.</p>
4,490,117
<blockquote> <p>(a) Let <span class="math-container">$f : [a, b] \to \mathbb{R}$</span> be continuous and suppose that <span class="math-container">$f(x) \gt 0$</span> for all <span class="math-container">$x$</span>. Show that there is some <span class="math-container">$L\gt 0$</span> such that <span class="math-container">$f(x) \ge L$</span> for all <span class="math-container">$ x \in [a, b]$</span>.</p> </blockquote> <blockquote> <p>(b) Give an example of a continuous function <span class="math-container">$f: \mathbb{R} \to \mathbb{R}$</span> satisfying <span class="math-container">$f(x) \gt 0$</span> for all <span class="math-container">$x$</span>, such that no <span class="math-container">$L\gt 0$</span> satisfies <span class="math-container">$f(x) \ge L$</span> for all <span class="math-container">$x$</span>.</p> </blockquote> <p>I was given these two problems together. For the first one I could solve it easily by using the property that <span class="math-container">$f$</span> will attain its bounds in the given closed interval, and hence the minimum value will do the trick.</p> <p>But I can't prove (b) analytically.</p> <p>I thought of <span class="math-container">$f(x) = e^x$</span> and I know it will work, but I can't prove it using any contradiction.</p> <p>Can I get some help please?</p>
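For (b), the failure of every candidate bound can be exhibited concretely: given any $L>0$, the point $x=\ln(L/2)$ satisfies $0<e^x=L/2<L$. A Python sketch (the helper name is mine):

```python
import math

# For any proposed lower bound L > 0, the point x = ln(L/2) gives
# f(x) = exp(x) = L/2, which is positive but strictly below L.
def counterexample_point(L):
    x = math.log(L / 2)
    return x, math.exp(x)
```

So no single $L>0$ can bound $e^x$ from below on all of $\mathbb{R}$, even though $e^x>0$ everywhere.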
Prime Mover
466,895
<p>Two different things going on here.</p> <p>Treatment of the Peano axioms is very much in Fundamentals of Mathematics, which often starts with axiomatic set theory and works towards the establishment of the rules of commutativity and associativity. There are many such books on this, although all take the subject slightly differently from every other such book. Perhaps you would get on with Keith Devlin's &quot;The Joy of Sets&quot;.</p> <p>The &quot;classical algorithms&quot;, on the other hand, are a different thing altogether. By this time, the construction of the standard number systems, along with commutativity and associativity, are already assumed to be proven.</p> <p>What we are now about is the exercise of arithmetic on numbers expressed in basis representation form (i.e. a string of digits whose values depend on the position in the string). This is the stuff which you learn at age 6. Not so many books on this. As suggested in a comment, Knuth covers them in detail in his The Art Of Computer Programming Volume 2: Seminumerical Algorithms, Chapter 4 - Arithmetic: 4.3.1 The Classical Algorithms.</p>
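For a taste of what a classical algorithm in this sense looks like, here is a Python sketch of base-10 addition on digit strings with a carry (my own illustration of the idea, not Knuth's presentation):

```python
# A "classical algorithm": addition of base-10 numerals digit by digit
# with a carry, operating on strings of digits rather than on ints.
def add_decimal(a: str, b: str) -> str:
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)   # pad with leading zeros
    carry, digits = 0, []
    for da, db in zip(reversed(a), reversed(b)):   # rightmost digit first
        carry, d = divmod(int(da) + int(db) + carry, 10)
        digits.append(str(d))
    if carry:
        digits.append(str(carry))
    return "".join(reversed(digits))
```

The point is that correctness of this procedure rests on the already-established laws of arithmetic, exactly as described above.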
305,373
<p>I'm doing some self-studying on discrete mathematics and so far it's going well; however, I came upon a question that, whilst I can make sense of it and I know that set $A$ is a subset of set $B$, I cannot think of how to express it, and the book does not feature an answer for this question.</p> <p>The question is, prove that $A$ is a subset of $B$, where</p> <p>$$A = \{ 2n \mid n\in\mathbb{Z}^+\},\quad B = \{ n \mid n\in\mathbb{Z}^+\}.$$</p> <p>I am somewhat at a loss as to the logic of this: both sets contain positive integers, and $n$ is half of $2n$; but if we were to use numbers, $n = 1$ gives $2n = 2$, etc., so the two are not equal, as they will never contain the same numbers. I believe I am thinking about this wrong but am somewhat at a loss with this simple question! Any help is appreciated, thanks!</p>
Zev Chonoles
264
<p>Recall that a set $S$ is a subset of a set $T$, which we express as $S\subseteq T$, when $$x\in S\implies x\in T.$$ Thus, in your problem, to show that $A\subseteq B$, what you must show is that, for any $n\in\mathbb{Z}^+$, we have that $2n$ is among the elements of $B$. Note that, because $B$ has precisely the same elements as $\mathbb{Z}^+$, we simply have $B=\mathbb{Z}^+$.</p> <p>But it's true that $2n\in\mathbb{Z}^+$; we know that $\mathbb{Z}^+$ is closed under multiplication by $2$! Thus we are done.</p> <p>The user copper.hat explains it quite well in the comment above; in <a href="http://en.wikipedia.org/wiki/Set-builder_notation" rel="nofollow">set-builder notation</a>, the variables are <a href="http://en.wikipedia.org/wiki/Bound_variable" rel="nofollow"><em>dummy variables</em></a>. That is, they do not continue to have a fixed meaning outside of the set-builder expression. Thus, for example, $$\mathbb{Z}^+=\{n\mid n\in\mathbb{Z}^+\}=\{m\mid m\in\mathbb{Z}^+\}=\{\xi\mid \xi\in\mathbb{Z}^+\}=\{\star\mid \star\in\mathbb{Z}^+\}$$ and, when you want to prove that $A$ is a subset of $B$, the correct interpretation of what you must show is that $$\text{for any }x\in A\text{, we have }x=y\text{ for some }y\in B$$ $$\text{for any }n\in\mathbb{Z}^+\text{, we have }2n=y\text{ for some }y\in B$$ $$\text{for any }n\in\mathbb{Z}^+\text{, we have }2n=m\text{ for some }m\in\mathbb{Z}^+$$</p>
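The containment can be illustrated on finite truncations of the two sets (a Python sketch; the cutoff $200$ is arbitrary):

```python
# Finite truncations of A = {2n : n in Z+} and B = {n : n in Z+} = Z+.
A = {2 * n for n in range(1, 101)}   # even positive integers up to 200
B = set(range(1, 201))               # all positive integers up to 200

# Every element of A is an element of B (A is a subset of B),
# but B contains odd numbers that A lacks, so A != B.
```

This mirrors the argument above: $2n\in\mathbb{Z}^+$ for every $n\in\mathbb{Z}^+$, yet the containment is proper.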
1,832,515
<p>I know that you almost always set domains for a summation function $\left( \sum \right)$, but can you also set an interval for that domain? Say the domain was 1 to 10, could I set an increment of 0.5 instead of the standard 1? I know that there are ways around this such as setting another variable for the previous number, subtracting 0.5 from the current loop variable, and so on, but is there a way to simply set an interval so that I don't have to do any of that? </p>
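One standard workaround is exactly the re-indexing the question mentions: sum over an integer index and map it to the half-step domain. A Python sketch (the helper `sum_with_step` is hypothetical, not a standard library function):

```python
# Sum f(x) over x = start, start+step, ..., stop by summing over an
# integer index i, with x = start + step*i. For start=1, stop=10,
# step=0.5 this sums over x = 1, 1.5, 2, ..., 10 (19 terms).
def sum_with_step(f, start, stop, step):
    n_terms = int(round((stop - start) / step)) + 1
    return sum(f(start + step * i) for i in range(n_terms))

total = sum_with_step(lambda x: x, 1.0, 10.0, 0.5)   # arithmetic series
```

For the identity function this gives $19 \cdot \frac{1+10}{2} = 104.5$.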
j4l3kl24jkl2
266,771
<p>Like Will Jagy said, substitute $t=\sqrt{x^2 -2x+3}$. You can then solve: $$\sqrt{t^2+5}=125-t \\t^2 + 5 =15625-250t +t^2$$ $$250t = 15620$$ $$t=62.48$$ Then you can plug your value for $t$ back into $t^2=x^2 -2x+3$ and solve a quadratic equation in $x$.</p>
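The value of $t$ can be checked by substituting back into the unsquared equation (a Python sketch):

```python
import math

# Squaring sqrt(t^2 + 5) = 125 - t gives 250*t = 15620.
t = 15620 / 250            # = 62.48
lhs = math.sqrt(t * t + 5) # left side of the original equation
rhs = 125 - t              # right side
```

Both sides come out to $62.52$, so $t = 62.48$ is a genuine (not extraneous) root of the squared equation.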
501,993
<p>I have never encountered "proof" questions before in my career, but this question in my textbook troubled me and I have totally no clue where to start.</p> <p>Prove that $$\lim_{x\rightarrow0} \frac{1-\sqrt{1-x^2}}{x^2} = \frac{1}{2}$$</p>
peterwhy
89,922
<p>$$\begin{align} \lim_{x\rightarrow0} \frac{1-\sqrt{1-x^2}}{x^2} =&amp; \lim_{x\rightarrow0} \frac{1-\sqrt{1-x^2}}{x^2}\cdot\frac{1+\sqrt{1-x^2}}{1+\sqrt{1-x^2}}\\ =&amp; \lim_{x\rightarrow0} \frac{1^2-\left(1-x^2\right)}{x^2\left(1+\sqrt{1-x^2}\right)}\\ =&amp; \lim_{x\rightarrow0} \frac{x^2}{x^2\left(1+\sqrt{1-x^2}\right)}\\ =&amp; \lim_{x\rightarrow0} \frac{1}{1+\sqrt{1-x^2}}\\ =&amp; \frac{1}{1+\sqrt{1-0^2}}\\ =&amp; \frac{1}{2} \end{align}$$</p>
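The rationalized form $\frac{1}{1+\sqrt{1-x^2}}$ from the middle of the derivation also makes a good numerical check of the limit (a Python sketch; the sample points are mine):

```python
import math

# The rationalized form is numerically stable near x = 0 and agrees
# with the original expression for small nonzero x; both approach 1/2.
def original(x):
    return (1 - math.sqrt(1 - x * x)) / (x * x)

def rationalized(x):
    return 1 / (1 + math.sqrt(1 - x * x))
```

Note the original form suffers catastrophic cancellation for very small $x$, which is one practical reason to prefer the rationalized expression.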
2,803,956
<p>Let $\mathfrak{g}$ be a Lie algebra. Then its automorphism group $Aut(\mathfrak{g})$ is a Lie group, and hence we may take its Lie algebra $Lie(Aut(\mathfrak{g}))$.</p> <p>I'd like to say that this is equal to the Lie algebra of derivations $Der(\mathfrak{g})$. Is this true? Where can I find a reference?</p>
William C. Newman
773,090
<p><span class="math-container">$\DeclareMathOperator{\Aut}{Aut}$</span> It is in fact true that <span class="math-container">$\text{Lie}(\Aut(L))=\text{Der}(L)$</span>, as you say. There are a lot of questions on Stack Exchange asking about this, and how to prove it. But no answers seem to give a proof.</p> <p>[Additionally, the only source I saw out of all of these questions was the alternative answer to this particular question. Unfortunately, the source given only shows it is true for the real numbers assuming it is true for the complex numbers. The source gives its own source for the proof of the complex case, but the limited access I had on Google Books did not let me see the references.]</p> <p>Luckily, I found that this is an exercise in Fulton and Harris's Representation Theory, specifically <span class="math-container">$8.27$</span>, and it is one of the few exercises without a hint in the back, so it cannot be that hard.</p> <p>And it actually is not. Let <span class="math-container">$G=\{g\in \text{GL}(L)\mid g[x,y]=[gx,gy],\ x,y\in L\}$</span>. Then, in order to get the tangent space of <span class="math-container">$G$</span> at <span class="math-container">$1$</span>, i.e. the Lie algebra of <span class="math-container">$G$</span>, we just have to differentiate the equation <span class="math-container">$g[x,y]=[gx,gy]$</span> and plug in <span class="math-container">$g=1$</span>. If we differentiate at <span class="math-container">$g$</span> in the direction of <span class="math-container">$\delta$</span>, we get <span class="math-container">$\delta([x,y])=[gx,\delta(y)]+[\delta(x),gy]$</span>, and so plugging in <span class="math-container">$g=1$</span> gives us <span class="math-container">$\delta([x,y])=[x,\delta(y)]+[\delta(x),y]$</span>. 
Thus, <span class="math-container">$\delta$</span> is in the tangent space of <span class="math-container">$1$</span> if and only if <span class="math-container">$\delta$</span> is a derivation on <span class="math-container">$L$</span>. (And the Lie products are the same, because they are both given by the commutator <span class="math-container">$[\delta_1,\delta_2]=\delta_1\delta_2-\delta_2\delta_1$</span>).</p>
551,873
<p>I have to solve the following problem: using $\exists$ Introduction, prove that PA$\vdash x\leq y \wedge y\leq z \longrightarrow x\leq z$. I used that if $x\leq y$ then $\exists r\; (x+r=y)$, and in the same way $\exists t\; (y+t=z)$, but in logical terms I don't know how to use these equalities and the Peano axioms to get the result.</p>
Cameron Buie
28,900
<p>Well, you know that $x+r=y$ and that $y+t=z,$ so, <em>if we know that addition is associative</em>, then it follows that $x+(r+t)=(x+r)+t=y+t=z.$ Since $r+t\in\Bbb N$ for any $r,t\in\Bbb N,$ then putting $s=r+t,$ we have shown that there exists $s$ such that $x+s=z,$ and so $x\le z$ by definition of the relation "$\le$."</p> <p>All that we need to do, then, is show that addition <em>is,</em> in fact, associative, so that the above reasoning completes the proof of your result.</p> <p><strong>Claim</strong>: $\forall k,m,n\in\Bbb N,$ we have $(k+m)+n=k+(m+n).$</p> <p><strong>Proof Outline</strong>: Take any $k,m\in\Bbb N.$ We proceed by induction on $n,$ with the $n=0$ case being clear by two applications of property (vi) from your comment below. Suppose, then, that $$(k+m)+n=k+(m+n)$$ for some $n\in\Bbb N.$ It follows, then, with two applications of property (vii) from your comment below, that $$(k+m)+S(n)=k+\bigl(m+S(n)\bigr),$$ so by induction, the Claim holds. $\Box$</p>
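The recursion behind properties (vi) and (vii) can be modelled directly. A Python sketch of Peano-style naturals (the nested-tuple encoding is my own choice, for illustration only):

```python
# Peano-style naturals: ZERO is (), S(n) wraps n in a tuple. Addition is
# defined by recursion on the second argument, mirroring (vi) and (vii).
ZERO = ()

def S(n):
    return ("S", n)

def add(m, n):
    if n == ZERO:             # (vi): m + 0 = m
        return m
    return S(add(m, n[1]))    # (vii): m + S(n) = S(m + n)

def from_int(k):
    return ZERO if k == 0 else S(from_int(k - 1))

def to_int(n):
    return 0 if n == ZERO else 1 + to_int(n[1])
```

Checking `add(add(a, b), c) == add(a, add(b, c))` on samples is of course no substitute for the induction above, but it shows the definitions in action.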
2,283,367
<p>I know the product of the $n$th roots of unity is $1$ or $-1$, depending on whether $n$ is odd or even. But in this way I am getting $1$. Where am I wrong?</p> <p>$ \text{Let }\alpha = \cos \frac{2 \pi}{n} + \iota \sin \frac{2 \pi}{n} \text{ be a root of }x^n=1 \\ \text{Then the product of the }n\text{th roots will be } 1\cdot \alpha \cdot \alpha^2 \cdots \alpha^{n-1} = \alpha^{\frac{(n)(n-1)}{2}}\\ =\left( \alpha^n \right)^{\frac{n-1}{2}}\\ =1^{\frac{n-1}{2}} \text{......by the definition of }\alpha\text{ ??}\\ =1 $</p> <p>I can see this doesn't even work for $n=2$.</p>
Empy2
81,790
<p>$(n-1)/2$ is not an integer, so $1^{(n-1)/2}$ involves the square-root, which might be positive or negative. You have to look more closely to decide which.<br> It is a bit like $-1=(-1)^{2/2}=1^{1/2}=1$</p>
2,283,367
<p>I know the product of the $n$th roots of unity is $1$ or $-1$, depending on whether $n$ is odd or even. But in this way I am getting $1$. Where am I wrong?</p> <p>$ \text{Let }\alpha = \cos \frac{2 \pi}{n} + \iota \sin \frac{2 \pi}{n} \text{ be a root of }x^n=1 \\ \text{Then the product of the }n\text{th roots will be } 1\cdot \alpha \cdot \alpha^2 \cdots \alpha^{n-1} = \alpha^{\frac{(n)(n-1)}{2}}\\ =\left( \alpha^n \right)^{\frac{n-1}{2}}\\ =1^{\frac{n-1}{2}} \text{......by the definition of }\alpha\text{ ??}\\ =1 $</p> <p>I can see this doesn't even work for $n=2$.</p>
lab bhattacharjee
33,337
<p>Hint:</p> <p>$$\alpha^{n(n-1)/2}=(\alpha^{n/2})^{n-1}$$</p> <p>Now $\alpha^{n/2}=\cdots=-1$</p>
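A numerical check of the claimed pattern (a Python sketch using `cmath`; the product is $(-1)^{n+1}$, i.e. $1$ for odd $n$ and $-1$ for even $n$):

```python
import cmath

# Multiply all nth roots of unity exp(2*pi*i*k/n), k = 0..n-1.
def roots_product(n):
    prod = 1 + 0j
    for k in range(n):
        prod *= cmath.exp(2j * cmath.pi * k / n)
    return prod
```

This matches Vieta: the roots of $x^n - 1$ have product $(-1)^n \cdot (-1) = (-1)^{n+1}$.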
166,202
<p>If $k(x,t)= \frac {1}{(4t)^{\frac{n}{2}}} \exp\left(\frac{-|x|^2}{4t}\right)$ is the fundamental solution of the heat equation, and we consider $n \ge 3$, I would like to show that $\int_0^\infty k(x,t)\, dt$ is the fundamental solution of the Laplace equation. I would like some hints; I thought of integrating but don't know how to approach it. That is, I need to arrive at a form like $\frac{1}{B} \frac{1}{|x|^{n-2}}$, where $B$ is a constant depending on the measure of the space. Thank you.</p>
timur
2,473
<p>Let $e(x)=\int_0^\infty k(x,t)\,\mathrm{d}t$, and let us compute the action of the distribution $\Delta e$ on the test function $\phi$ (which is a compactly supported smooth function in $\mathbb{R}^n$) as $$ \begin{split} \langle\Delta e,\phi\rangle &amp;= \int_{\mathbb{R}^n} e(x) \Delta\phi(x)\,\mathrm{d}x = \int_{\mathbb{R}^n} \int_0^\infty k(x,t) \Delta\phi(x)\,\mathrm{d}t\,\mathrm{d}x = \int_0^\infty \int_{\mathbb{R}^n} k(x,t) \Delta\phi(x)\,\mathrm{d}x\,\mathrm{d}t \\ &amp;= \int_0^\infty \int_{\mathbb{R}^n} \Delta k(x,t) \phi(x)\,\mathrm{d}x\,\mathrm{d}t = \int_0^\infty \int_{\mathbb{R}^n} \partial_t k(x,t) \phi(x)\,\mathrm{d}x\,\mathrm{d}t \\ &amp;= \lim_{t\to\infty}\int_{\mathbb{R}^n} k(x,t) \phi(x)\,\mathrm{d}x - \lim_{t\to0}\int_{\mathbb{R}^n} k(x,t) \phi(x)\,\mathrm{d}x = - \phi(0), \end{split} $$ where we have used the fact that $\Delta k =\partial_tk$, that $k(x,t)\to0$ as $t\to\infty$, and that $\int_{\mathbb{R}^n} k(x,t) \phi(x)\,\mathrm{d}x\to\phi(0)$ as $t\to0$. Thus we have $\Delta e = - \delta$. Some would say that $e$ is not exactly a fundamental solution, but that $-e$ is one.</p>
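For completeness, the integral defining $e$ can also be evaluated in closed form. A sketch, under the question's normalization of $k$ (which omits the customary $\pi$ in $(4\pi t)^{n/2}$), via the substitution $s = |x|^2/(4t)$:

```latex
\int_0^\infty \frac{1}{(4t)^{n/2}}\, e^{-|x|^2/(4t)}\,\mathrm{d}t
  = \frac{|x|^{2-n}}{4}\int_0^\infty s^{\frac{n}{2}-2}\, e^{-s}\,\mathrm{d}s
  = \frac{\Gamma\!\left(\tfrac{n}{2}-1\right)}{4}\, |x|^{2-n},
  \qquad n \ge 3,
```

so $e(x)$ is a constant multiple of $|x|^{2-n}$, the familiar Newtonian potential (the integral over $s$ converges at $0$ precisely because $n \ge 3$).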
1,951,964
<p>I think the answer is <span class="math-container">$\mathbb{Z}_n$</span> where <span class="math-container">$n$</span> ranges over the divisors of <span class="math-container">$26$</span>. Is this right?</p>
Claude Leibovici
82,404
<p><em>Just another way to do it.</em></p> <p>Consider that $${\frac {x}{1+x^2}}=\frac 12 \frac d {dx} \log(1+x^2)$$ Now, using $$\log(1+t)=\sum_{n=1}^\infty(-1)^{n+1}\frac{t^n}n$$ $$\log(1+x^2)=\sum_{n=1}^\infty(-1)^{n+1}\frac{x^{2n}}n$$ $$\frac d {dx} \log(1+x^2)=\sum_{n=1}^\infty(-1)^{n+1}\frac{2nx^{2n-1}}n=2\sum_{n=1}^\infty(-1)^{n+1}x^{2n-1}=2\sum_{m=0}^\infty(-1)^{m}x^{2m+1}$$ and then the result.</p>
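The partial sums of the resulting series $\sum_{m\ge0}(-1)^m x^{2m+1}$ can be compared with $\frac{x}{1+x^2}$ numerically (a Python sketch, valid for $|x|<1$; the truncation length is arbitrary):

```python
# Truncated series sum_{m=0}^{terms-1} (-1)^m x^(2m+1) vs. the closed
# form x/(1+x^2), which the derivation above recovers.
def series(x, terms=60):
    return sum((-1) ** m * x ** (2 * m + 1) for m in range(terms))

def closed_form(x):
    return x / (1 + x * x)
```

At $x=0.5$, for instance, both give $0.5/1.25 = 0.4$.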
1,637,237
<p>I'm studying differential calculus, but one of the questions involves solving an inequality:</p> <p>$$(x-2)e^x &lt; 0$$</p> <p>I intend to go deeper into solving inequalities later, but I just want to understand how the teacher got the following solution, in order to advance in these lectures: $$x-2 &lt; 0$$ $$x &lt; 2$$</p> <p>Where did the $e^x$ go? Is there some rule for solving these inequalities involving $e$?</p>
adjan
219,722
<p>Divide by $e^x$</p> <p>$$(x-2)e^x &lt; 0$$ $$\iff x - 2 &lt; 0$$</p> <p>This is valid since $e^x &gt; 0$ $\forall x \in \mathbb{R}$</p>
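A quick check that the two solution sets agree (a Python sketch; the sample points are arbitrary):

```python
import math

# Since e^x > 0 everywhere, the sign of (x-2)*e^x is the sign of (x-2):
# the inequality holds exactly for x < 2.
def satisfies(x):
    return (x - 2) * math.exp(x) < 0

samples_below = [-5.0, 0.0, 1.999]   # all should satisfy the inequality
samples_above = [2.0, 2.001, 10.0]   # none should
```
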
4,114,360
<p><strong>Question</strong> :</p> <blockquote> <p>Andy and Beth are playing a game worth $100. They take turns flipping a penny. The first person to get 10 heads will win.</p> <p>But they just realized that they have to be in math class right away and are forced to stop the game.</p> <p>Andy had four heads and Beth had seven heads. How should they divide the pot?</p> </blockquote> <p>This is a question from the book <strong>Probability: With Applications and R by Robert P. Dobrow</strong>. Now the given answer is:</p> <blockquote> <p>P(Andy wins) = 0.1445. Andy gets <span class="math-container">$14.45 \;and\; Beth\; gets\; $</span>85.55</p> </blockquote> <p>But I am not getting this answer. So can someone check my solution and tell me where I am wrong, or else provide a better solution? Thanks in advance.</p> <p><strong>Proposed Solution</strong>: Let Andy be A and Beth B: A needs 6 heads to win, and B needs 3 heads to win.</p> <p>Let <span class="math-container">$X_a$</span> be the number of times A needs to flip the coin to get 6 heads, and <span class="math-container">$X_b$</span> be the number of flips for B to get 3 heads. Both <span class="math-container">$X_a$</span> and <span class="math-container">$X_b$</span> are negative binomial variables, with <span class="math-container">$X_a$</span> having parameters 6 and <span class="math-container">$p=\frac{1}{2}$</span> and <span class="math-container">$X_b$</span> having parameters 3 and <span class="math-container">$p=\frac{1}{2}$</span>.</p> <p>Now suppose A needs <span class="math-container">$k$</span> flips to win. 
Then the required probability is:</p> <p><span class="math-container">$P(A \;wins) = P(X_a = k)*P(X_b &gt; k)$</span> (i.e. A wins in <span class="math-container">$k$</span> moves and B gets 3 heads after <span class="math-container">$k$</span> flips)</p> <p><span class="math-container">$P(X_a = k)=(^{k-1}C_5)*(\frac{1}{2})^6(\frac{1}{2})^{k-6}=(^{k-1}C_5)*(\frac{1}{2})^k$</span> (negative binomial distribution formula)</p> <p><span class="math-container">$P(X_b &gt; k) =$</span> the 3rd head should come after <span class="math-container">$k$</span> trials, therefore in the first <span class="math-container">$k$</span> trials only <span class="math-container">$0\;or\;1\;or\;2$</span> heads can occur.</p> <p>Therefore: <span class="math-container">$P(X_b&gt;k) = \sum_{i=0}^2{^{k}C_i}*(\frac{1}{2})^i(\frac{1}{2})^{k-i}=\sum_{i=0}^2{^{k}C_i}*(\frac{1}{2})^k$</span></p> <p>So <span class="math-container">$P(A\;wins) = (^{k-1}C_5*(\frac{1}{2})^k) * (\sum_{i=0}^2{^{k}C_i}*(\frac{1}{2})^k) $</span>. Now a minimum of 6 flips is required for A to win, so <span class="math-container">$k:6 \to \infty$</span>:</p> <p><span class="math-container">$\begin{equation} P(A\;wins) = \sum_{k=6}^{\infty}\biggl( (^{k-1}C_5*(\frac{1}{2})^k) * (\sum_{i=0}^2{^{k}C_i}*(\frac{1}{2})^k) \biggr) \\ = \sum_{k=6}^{\infty}\biggl((\frac{1}{2})^{2k}*(^{k-1}C_5)*(^kC_0+^kC_1+^kC_2)) \biggr) \\ =\sum_{k=6}^{\infty}\biggl(\frac{1}{2^{2k+1}}*(^{k-1}C_5)*(k^2+k+2)\biggr) \end{equation}$</span></p> <p>Now I calculated the sum on my calculator for <span class="math-container">$k : 6 \to 150$</span> and it is converging to 0.052, which is very far off from the given answer of <span class="math-container">$P(Andy\; wins) = 0.1445$</span>. So is my method wrong? Or is the sum convergence slow?</p> <p>Can someone solve this question, employing an entirely different method if necessary.</p>
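For reference, the book's figure matches the classical Pascal–Fermat "problem of points" computation: the winner is decided within the next $6+3-1=8$ decisive flips, and Andy wins iff at least $6$ of them go his way, giving $37/256\approx0.1445$. This reading treats each flip as scoring for exactly one player, which is one place the attempt above diverges from that model (this interpretation of the book's intent is my own). A Python sketch:

```python
from math import comb

# Problem-of-points split: Andy needs 6 more points, Beth needs 3 more.
# The winner is decided within 6 + 3 - 1 = 8 further fair flips; Andy
# wins iff at least 6 of those 8 go his way.
need_a, need_b = 6, 3
n = need_a + need_b - 1                                   # 8 decisive flips
p_andy = sum(comb(n, j) for j in range(need_a, n + 1)) / 2 ** n
```

Here `p_andy` is $(28+8+1)/256 = 37/256 = 0.14453125$, agreeing with the book to four decimal places.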
angryavian
43,949
<p>A slight modification of your attempt will work. Take <span class="math-container">$r = \min(\lceil x \rceil - x, x - \lfloor x \rfloor, \frac{1}{2}|x - \lfloor x \rfloor - \frac{1}{2 \lfloor x \rfloor}|)$</span>. We will show all elements of <span class="math-container">$E$</span> are outside of the open ball of radius <span class="math-container">$r$</span> centered at <span class="math-container">$x$</span>.</p> <ul> <li>For <span class="math-container">$n \le \lfloor x \rfloor - 1$</span>, we have <span class="math-container">$|x - (n + \frac{1}{2n})| &gt; (x - \lfloor x \rfloor ) + (\lfloor x \rfloor - (n+1)) \ge x - \lfloor x \rfloor \ge r$</span>.</li> <li>For <span class="math-container">$n \ge \lceil x \rceil $</span>, we have <span class="math-container">$|x - (n+\frac{1}{2n})| &gt; (n - \lceil x \rceil) + (\lceil x \rceil - x) \ge \lceil x \rceil - x \ge r$</span>.</li> <li>It remains to consider <span class="math-container">$n=\lfloor x \rfloor$</span>. We have <span class="math-container">$|x - (n + \frac{1}{2n})| &gt; \frac{1}{2} |x - (n + \frac{1}{2n})| \ge r$</span>.</li> </ul>
4,114,360
<p><strong>Question</strong> :</p> <blockquote> <p>Andy and Beth are playing a game worth $100. They take turns flipping a penny. The first person to get 10 heads will win.</p> <p>But they just realized that they have to be in math class right away and are forced to stop the game.</p> <p>Andy had four heads and Beth had seven heads. How should they divide the pot?</p> </blockquote> <p>This is a question from the book <strong>Probability: With Applications and R by Robert P. Dobrow</strong>. Now the given answer is:</p> <blockquote> <p>P(Andy wins) = 0.1445. Andy gets <span class="math-container">$14.45 \;and\; Beth\; gets\; $</span>85.55</p> </blockquote> <p>But I am not getting this answer. So can someone check my solution and tell me where I am wrong, or else provide a better solution? Thanks in advance.</p> <p><strong>Proposed Solution</strong>: Let Andy be A and Beth B: A needs 6 heads to win, and B needs 3 heads to win.</p> <p>Let <span class="math-container">$X_a$</span> be the number of times A needs to flip the coin to get 6 heads, and <span class="math-container">$X_b$</span> be the number of flips for B to get 3 heads. Both <span class="math-container">$X_a$</span> and <span class="math-container">$X_b$</span> are negative binomial variables, with <span class="math-container">$X_a$</span> having parameters 6 and <span class="math-container">$p=\frac{1}{2}$</span> and <span class="math-container">$X_b$</span> having parameters 3 and <span class="math-container">$p=\frac{1}{2}$</span>.</p> <p>Now suppose A needs <span class="math-container">$k$</span> flips to win. 
Then the required probability is:</p> <p><span class="math-container">$P(A \;wins) = P(X_a = k)*P(X_b &gt; k)$</span> (i.e. A wins in <span class="math-container">$k$</span> moves and B gets 3 heads after <span class="math-container">$k$</span> flips)</p> <p><span class="math-container">$P(X_a = k)=(^{k-1}C_5)*(\frac{1}{2})^6(\frac{1}{2})^{k-6}=(^{k-1}C_5)*(\frac{1}{2})^k$</span> (negative binomial distribution formula)</p> <p><span class="math-container">$P(X_b &gt; k) =$</span> the 3rd head should come after <span class="math-container">$k$</span> trials, therefore in the first <span class="math-container">$k$</span> trials only <span class="math-container">$0\;or\;1\;or\;2$</span> heads can occur.</p> <p>Therefore: <span class="math-container">$P(X_b&gt;k) = \sum_{i=0}^2{^{k}C_i}*(\frac{1}{2})^i(\frac{1}{2})^{k-i}=\sum_{i=0}^2{^{k}C_i}*(\frac{1}{2})^k$</span></p> <p>So <span class="math-container">$P(A\;wins) = (^{k-1}C_5*(\frac{1}{2})^k) * (\sum_{i=0}^2{^{k}C_i}*(\frac{1}{2})^k) $</span>. Now a minimum of 6 flips is required for A to win, so <span class="math-container">$k:6 \to \infty$</span>:</p> <p><span class="math-container">$\begin{equation} P(A\;wins) = \sum_{k=6}^{\infty}\biggl( (^{k-1}C_5*(\frac{1}{2})^k) * (\sum_{i=0}^2{^{k}C_i}*(\frac{1}{2})^k) \biggr) \\ = \sum_{k=6}^{\infty}\biggl((\frac{1}{2})^{2k}*(^{k-1}C_5)*(^kC_0+^kC_1+^kC_2)) \biggr) \\ =\sum_{k=6}^{\infty}\biggl(\frac{1}{2^{2k+1}}*(^{k-1}C_5)*(k^2+k+2)\biggr) \end{equation}$</span></p> <p>Now I calculated the sum on my calculator for <span class="math-container">$k : 6 \to 150$</span> and it is converging to 0.052, which is very far off from the given answer of <span class="math-container">$P(Andy\; wins) = 0.1445$</span>. So is my method wrong? Or is the sum convergence slow?</p> <p>Can someone solve this question, employing an entirely different method if necessary.</p>
Rivers McForge
774,222
<p>If you know open intervals are open sets, and arbitrary unions of open sets are open, then <span class="math-container">$$E = \{ 1.5, 2.25, 3.167, 4.125, ... \}$$</span> has a complement which is easily seen to be a union of open intervals: <span class="math-container">$$E^c = (-\infty, 1.5) \cup (1.5, 2.25) \cup (2.25, 3.167) \cup (3.167, 4.125) \cup ....$$</span></p>
4,114,360
<p><strong>Question</strong> :</p> <blockquote> <p>Andy and Beth are playing a game worth $100. They take turns flipping a penny. The first person to get 10 heads will win.</p> <p>But they just realized that they have to be in math class right away and are forced to stop the game.</p> <p>Andy had four heads and Beth had seven heads. How should they divide the pot?</p> </blockquote> <p>This is a question from the book <strong>Probability: With Applications and R by Robert P. Dobrow</strong>. Now the given answer is:</p> <blockquote> <p>P(Andy wins) = 0.1445. Andy gets <span class="math-container">$14.45 \;and\; Beth\; gets\; $</span>85.55</p> </blockquote> <p>But I am not getting this answer. So can someone check my solution and tell me where I am wrong, or else provide a better solution? Thanks in advance.</p> <p><strong>Proposed Solution</strong>: Let Andy be A and Beth B: A needs 6 heads to win, and B needs 3 heads to win.</p> <p>Let <span class="math-container">$X_a$</span> be the number of times A needs to flip the coin to get 6 heads, and <span class="math-container">$X_b$</span> be the number of flips for B to get 3 heads. Both <span class="math-container">$X_a$</span> and <span class="math-container">$X_b$</span> are negative binomial variables, with <span class="math-container">$X_a$</span> having parameters 6 and <span class="math-container">$p=\frac{1}{2}$</span> and <span class="math-container">$X_b$</span> having parameters 3 and <span class="math-container">$p=\frac{1}{2}$</span>.</p> <p>Now suppose A needs <span class="math-container">$k$</span> flips to win. 
Then the required probability is:</p> <p><span class="math-container">$P(A \;wins) = P(X_a = k)*P(X_b &gt; k)$</span> (i.e. A wins in <span class="math-container">$k$</span> flips and B gets 3 heads after <span class="math-container">$k$</span> flips)</p> <p><span class="math-container">$P(X_a = k)=(^{k-1}C_5)*(\frac{1}{2})^6(\frac{1}{2})^{k-6}=(^{k-1}C_5)*(\frac{1}{2})^k$</span> (Negative Binomial distribution formula)</p> <p><span class="math-container">$P(X_b &gt; k) =$</span> the 3rd head should come after <span class="math-container">$k$</span> trials; therefore in the first <span class="math-container">$k$</span> trials only <span class="math-container">$0\;or\;1\;or\;2$</span> heads can occur.</p> <p>Therefore: <span class="math-container">$P(X_b&gt;k) = \sum_{i=0}^2{^{k}C_i}*(\frac{1}{2})^i(\frac{1}{2})^{k-i}=\sum_{i=0}^2{^{k}C_i}*(\frac{1}{2})^k$</span></p> <p>So <span class="math-container">$P(A\;wins) = (^{k-1}C_5*(\frac{1}{2})^k) * (\sum_{i=0}^2{^{k}C_i}*(\frac{1}{2})^k) $</span>. Now a minimum of 6 flips is required for A to win, so <span class="math-container">$k:6 \to \infty$</span></p> <p><span class="math-container">$\begin{equation} P(A\;wins) = \sum_{k=6}^{\infty}\biggl( (^{k-1}C_5*(\frac{1}{2})^k) * (\sum_{i=0}^2{^{k}C_i}*(\frac{1}{2})^k) \biggr) \\ = \sum_{k=6}^{\infty}\biggl((\frac{1}{2})^{2k}*(^{k-1}C_5)*(^kC_0+^kC_1+^kC_2) \biggr) \\ =\sum_{k=6}^{\infty}\biggl(\frac{1}{2^{2k+1}}*(^{k-1}C_5)*(k^2+k+2)\biggr) \end{equation}$</span></p> <p>Now I calculated the sum on my calculator for <span class="math-container">$k : 6 \to 150$</span> and it is converging to 0.052, which is very far off from the given answer of <span class="math-container">$P(Andy\; wins) = 0.1445$</span>. So is my method wrong? Or is the sum converging slowly?</p> <p>Can someone solve this question, employing an entirely different method if necessary.</p>
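Not part of the original post, but a quick computation locating both figures. The book's 0.1445 numerically matches the classical Pascal–Fermat "problem of points" count, in which the game is decided within $6+3-1=8$ decisive rounds and each round goes to one of the two players with probability $\frac{1}{2}$ (this interpretation is my assumption). The sum written above really does converge, to about 0.0526, so the gap between the two answers is a difference of models, not slow convergence.

```python
from math import comb

# 1) Pascal-Fermat "problem of points": Andy wins iff he takes at least
#    6 of the 8 decisive rounds, each won with probability 1/2.
pascal = sum(comb(8, k) for k in range(6, 9)) / 2 ** 8      # 37/256

# 2) The post's independent negative-binomial race:
#    P(X_a = k) = C(k-1,5)/2^k  and  P(X_b > k) = (k^2 + k + 2)/2^(k+1).
race = sum(comb(k - 1, 5) * (k * k + k + 2) / 2 ** (2 * k + 1)
           for k in range(6, 300))

assert abs(pascal - 37 / 256) < 1e-15   # 37/256 = 0.14453... = "0.1445"
```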
Henno Brandsma
4,280
<p>Let <span class="math-container">$a_n = n + \frac{1}{2n}$</span>, so that <span class="math-container">$E=\{a_n\mid n \in \Bbb N\}$</span>.</p> <p>Suppose that there were a sequence <span class="math-container">$(x_n)$</span> from <span class="math-container">$E$</span> that converges to some <span class="math-container">$x \in \Bbb R$</span>. As all convergent sequences are bounded and <span class="math-container">$|a_n| &gt; n$</span> for all <span class="math-container">$n$</span>, only finitely many different <span class="math-container">$a_n$</span> are used in this sequence, so there is one <span class="math-container">$a_m$</span> that occurs infinitely many times. That subsequence of <span class="math-container">$(x_n)$</span> converges to <span class="math-container">$a_m$</span>, and as limits are unique, <span class="math-container">$x= a_m \in E$</span>.</p> <p>So <span class="math-container">$E$</span> is closed, as it’s closed under sequential limits and we’re working in a metric space.</p>
2,309,986
<p>This question is from the text <em>Geometry and Complexity Theory</em>, from J.M. Landsberg. Before starting to talk about the question, I think it is good to show the definition of border rank. Consider a tensor $T \in \mathbb{C}^r \otimes \mathbb{C}^r \otimes \mathbb{C}^r$. Then the <em>border rank</em> of $T$ is the minimum $r$ such that $T$ can be approximated arbitrarily well by tensors of rank $r$. </p> <p>In this discussion we assume $r &gt; 1$ is an integer. In general, the limit of tensors of rank $r$ may not be a tensor of rank $r$. An example is given in the text (I have changed the notation a little):</p> <p>Let $x_i, y_i \in \mathbb{C}^r$ be linearly independent vectors, for $i = 1,2,3$. Then the sequence of rank 2 tensors</p> <p>$$T_n = n\left( x_1 + \frac{1}{n} y_1 \right) \otimes \left( x_2 + \frac{1}{n} y_2 \right) \otimes \left( x_3 + \frac{1}{n} y_3 \right) - nx_1 \otimes x_2 \otimes x_3 $$</p> <p>converges to the rank 3 tensor</p> <p>$$T = x_1 \otimes x_2 \otimes y_3 + x_1 \otimes y_2 \otimes x_3 + y_1 \otimes x_2 \otimes x_3. $$</p> <p>The author says that $T$ is a rank $3$ tensor with border rank $2$. With this example in mind I tried to come up with the following idea: </p> <p>Let $x_i, y_i \in \mathbb{C}^r$ be linearly independent vectors, for $i = 1, \ldots, r$. Consider the sequence</p> <p>$$T_n = n\left( x_1 + \frac{1}{n} y_1 \right) \otimes \left( x_2 + \frac{1}{n} y_2 \right) \otimes \left( x_3 + \frac{1}{n} y_3 \right) - nx_1 \otimes x_2 \otimes x_3 + $$ $$ + n\left( x_2 + \frac{1}{n} y_2 \right) \otimes \left( x_3 + \frac{1}{n} y_3 \right) \otimes \left( x_4 + \frac{1}{n} y_4 \right) - nx_2 \otimes x_3 \otimes x_4 + \ldots$$ $$ \ldots + n\left( x_{\frac{r}{2}} + \frac{1}{n} y_{\frac{r}{2}} \right) \otimes \left( x_{\frac{r}{2}+1} + \frac{1}{n} y_{\frac{r}{2}+1} \right) \otimes \left( x_{\frac{r}{2}+2} + \frac{1}{n} y_{\frac{r}{2}+2} \right) - nx_{\frac{r}{2}} \otimes x_{\frac{r}{2}+1} \otimes x_{\frac{r}{2}+2}$$</p> <p>for $r$ even. 
Each $T_n$ is a sum of $r$ rank 1 tensors, so $T_n$ has rank $r$. Furthermore, we have that $T_n$ converges to </p> <p>$$T = x_1 \otimes x_2 \otimes y_3 + x_1 \otimes y_2 \otimes x_3 + y_1 \otimes x_2 \otimes x_3 + $$ $$ + x_2 \otimes x_3 \otimes y_4 + x_2 \otimes y_3 \otimes x_4 + y_2 \otimes x_3 \otimes x_4 + \ldots $$ $$\ldots + x_{\frac{r}{2}-2} \otimes x_{\frac{r}{2}-1} \otimes y_\frac{r}{2} + x_{\frac{r}{2}-2} \otimes y_{\frac{r}{2}-1} \otimes x_\frac{r}{2} + y_{\frac{r}{2}-2} \otimes x_{\frac{r}{2}-1} \otimes x_\frac{r}{2}$$</p> <p>which is a sum of $3(\frac{r}{2}-2)$ rank 1 tensors, so $T$ has rank $3(\frac{r}{2}-2)$.</p> <p>I have some concerns about this "solution", which I'm going to list here.</p> <p><strong>1)</strong> I'm not sure if the tensors $T_n$ really have rank $r$; maybe there is a way to reduce the number of terms which I'm not aware of. The same goes for $T$. </p> <p><strong>2)</strong> In order to have $3(\frac{r}{2} - 2) &gt; r$ we need $r &gt; 12$. This restriction indicates my solution is wrong in some way. </p> <p><strong>3)</strong> Finally, to work with $r$ odd I need to add one last term in an artificial way. This also makes me think this whole idea is not good.</p> <p>Well, I need some directions here. To be honest I'm new to the study of tensors, so any help with nice explanations is very welcome!</p> <p>Thank you! </p>
Vladimir Lysikov
20,589
<p>One way to prove that the border rank is exactly $r$ and not less is the flattening lower bound. You can see a tensor $T \in A\otimes B\otimes C$ as a linear map $T\colon A^* \to B\otimes C$. If $T$ has tensor rank $\leq r$, then this map also has rank $\leq r$, and we can take limits to talk about border rank. (Landsberg discusses it in the section "Our first lower bound".)</p> <p>To show that the rank is more than $r$, we need to do something slightly more complicated. Again look at $T\colon A^* \to B\otimes C$. If $T$ has rank $r$, then you can say many things about the values $T(\alpha)$ for various $\alpha \in A^*$. Try to construct your tensor in such a way that some of these things cannot hold.</p>
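To make the flattening bound concrete, here is a small check (my addition, pure Python) on the rank-3, border-rank-2 tensor $T = x_1\otimes x_2\otimes y_3 + x_1\otimes y_2\otimes x_3 + y_1\otimes x_2\otimes x_3$ from the question, realized in $\mathbb{C}^2$ with $x=(1,0)$, $y=(0,1)$: its flattening $A^*\to B\otimes C$ has matrix rank 2, so the flattening argument certifies border rank at least 2 but can never push past 2 for this tensor.

```python
# The W-tensor T in (C^2) x (C^2) x (C^2), with x = (1,0) and y = (0,1):
# entries are 1 at positions (0,0,1), (0,1,0), (1,0,0) and 0 elsewhere.
T = [[[0.0, 0.0], [0.0, 0.0]] for _ in range(2)]
for (i, j, k) in [(0, 0, 1), (0, 1, 0), (1, 0, 0)]:
    T[i][j][k] = 1.0

# Flattening A* -> B (x) C: one row per slot of the first factor,
# columns indexed by pairs (j, k) of the last two factors.
M = [[T[i][j][k] for j in range(2) for k in range(2)] for i in range(2)]

def rank(rows, eps=1e-12):
    """Matrix rank by Gauss-Jordan elimination on a copy of the rows."""
    rows = [r[:] for r in rows]
    r = 0
    for c in range(len(rows[0])):
        piv = next((i for i in range(r, len(rows)) if abs(rows[i][c]) > eps), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and abs(rows[i][c]) > eps:
                f = rows[i][c] / rows[r][c]
                rows[i] = [u - f * v for u, v in zip(rows[i], rows[r])]
        r += 1
    return r

assert rank(M) == 2   # flattening rank 2, although the tensor rank is 3
```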
2,025,711
<p>What is the sum of this series: $$\sum_{n\geq 1}(-1)^{n+1}\frac{2n+1}{n(n+1)}$$ I don't know how to solve it, especially with that $(-1)^{n+1}$.</p>
TZakrevskiy
77,314
<p>Let $n$ be odd (so the first of the pair carries the $+$ sign). Write the sum of two consecutive terms of your series: $$\frac{2n+1}{n(n+1)} - \frac{2n+3}{(n+2)(n+1)} = \frac{2}{n(n+2)} = \frac 1n - \frac{1}{n+2}.$$</p> <p>Can you take it from here?</p>
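A quick exact check of where this leads (my addition, not in the original hint; it uses the decomposition $\frac{2n+1}{n(n+1)} = \frac1n + \frac{1}{n+1}$ as a shortcut): the partial sums telescope to $S_N = 1 + \frac{(-1)^{N+1}}{N+1}$, so the series converges to $1$.

```python
from fractions import Fraction

# Partial sums of sum_{n>=1} (-1)^(n+1) (2n+1)/(n(n+1)), computed exactly.
def S(N):
    return sum(Fraction((-1) ** (n + 1) * (2 * n + 1), n * (n + 1))
               for n in range(1, N + 1))

# Telescoping: with a_n = (-1)^(n+1)/n the n-th term is a_n - a_(n+1),
# so S_N = a_1 - a_(N+1) = 1 + (-1)^(N+1)/(N+1)  ->  1.
for N in (1, 2, 10, 99):
    assert S(N) == 1 + Fraction((-1) ** (N + 1), N + 1)
```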
2,699,842
<p>I've been looking at this question for hours now, but I can't grasp it:</p> <p>Let A be a symmetric matrix with minimum eigenvalue $\lambda_{\text{min}}$. Give a bound on the largest element of $A^{−1}$.</p> <p>I was looking at the spectral decomposition $A=VDV^T$ with D a diagonal matrix; however, I don't think this is the right way, since in this case it is not always true that $A^{-1}=A^T$.</p>
angryavian
43,949
<p>The $(i,j)$ entry of a matrix $B$ can be written as $e_i^\top B e_j$ where $e_i$ is the $i$th standard basis vector. By Cauchy-Schwarz, we have $|B_{ij}| = |e_i^\top B e_j| \le \|B e_j\| \le \sigma_{\max}(B)$ where $\sigma_{\max}(B)$ is the largest singular value of $B$. If $B$ is symmetric, this is the largest eigenvalue of $B$ in absolute value.</p> <p>We apply the above with $B=A^{-1}$. If $\lambda_{\min}$ is the smallest eigenvalue of $A$ in absolute value, then the maximum singular value of $A^{-1}$ is $|\lambda_{\min}|^{-1}$. (Note that we must have $|\lambda_{\min}|&gt;0$ in order for $A$ to be invertible.)</p>
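A small numeric illustration of the bound (my addition, not in the original answer): for the symmetric matrix <code>A = [[2, 1], [1, 2]]</code> with eigenvalues $1$ and $3$, every entry of $A^{-1}$ is at most $1/|\lambda_{\min}| = 1$ in absolute value.

```python
# A is symmetric with eigenvalues 1 and 3 (computed by hand).
A = [[2.0, 1.0], [1.0, 2.0]]
lam_min = 1.0                      # smallest eigenvalue, in absolute value

det = A[0][0] * A[1][1] - A[0][1] * A[1][0]          # = 3
Ainv = [[A[1][1] / det, -A[0][1] / det],
        [-A[1][0] / det, A[0][0] / det]]             # (1/3)[[2,-1],[-1,2]]

largest = max(abs(x) for row in Ainv for x in row)   # = 2/3
assert largest <= 1 / lam_min                        # the claimed bound
```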
2,699,842
<p>I've been looking at this question for hours now, but I can't grasp it:</p> <p>Let A be a symmetric matrix with minimum eigenvalue $\lambda_{\text{min}}$. Give a bound on the largest element of $A^{−1}$.</p> <p>I was looking at the spectral decomposition $A=VDV^T$ with D a diagonal matrix; however, I don't think this is the right way, since in this case it is not always true that $A^{-1}=A^T$.</p>
Ross Millikan
1,827
<p>There is no limit. Let $\lambda_{min} \lt 0$ and $A=\begin {pmatrix} \lambda_{min}&amp;0\\0&amp;\epsilon \end {pmatrix}$. We can make $\epsilon$ as small as we want in absolute value and $A^{-1}=\begin {pmatrix} 1/\lambda_{min}&amp;0\\0&amp;1/\epsilon \end {pmatrix}$</p>
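Numerically (my illustration, not part of the answer): with $A$ diagonal the inverse is computed entrywise, and the entry $1/\epsilon$ blows up while $\lambda_{min}=-1$ stays fixed.

```python
# Counterexample made concrete: A = diag(lam_min, eps) with lam_min = -1.
# The smallest (signed) eigenvalue never moves, yet the largest entry of
# A^{-1}, namely 1/eps, is unbounded as eps -> 0.
lam_min = -1.0
for eps in (1e-2, 1e-4, 1e-8):
    Ainv = [[1 / lam_min, 0.0], [0.0, 1 / eps]]   # inverse of a diagonal matrix
    largest = max(abs(x) for row in Ainv for x in row)
    assert largest == 1 / eps                     # grows without bound
```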
125,393
<p>As we know, Linux’s Bash shell is coming to Windows 10. Now I want to know: how can I use the Run[] command (or other commands) to run programs such as gfortran installed in the Linux subsystem of Windows 10? And is it possible to call Mathematica from the Bash shell in Windows 10?</p>
Rolf Mertig
29
<p>Installing the Windows git-bash version from <a href="https://git-scm.com/downloads" rel="nofollow noreferrer">here</a> enables a simple way to run bash under Windows 10 from Mathematica:</p> <pre><code>RunProcess[{"C:\\Program Files\\Git\\bin\\bash.exe", "-c", "bash --version | head -1 | tr -d '\n'"}, "StandardOutput"] </code></pre> <p>results in</p> <pre><code>(* "GNU bash, version 4.3.46(2)-release (x86_64-pc-msys)" *) </code></pre>
12,091
<p>It looks like the traffic is really high today, probably because of the introduction of hats. </p> <p>I was wondering, is there a way to check statistics about the traffic on math.SE, and on the other SE websites?</p>
Alex Becker
8,173
<p>You can take a look at the <a href="http://www.alexa.com/" rel="nofollow">Alexa rankings</a> for any site, but those aren't very good. Stackexchange keeps its own, more detailed data on traffic and usage, but this is not available to regular users.</p>
71,670
<pre><code>TableForm[Table[i/j + 4*Boole[j &gt; i] // N, {i, 3}, {j, 4}], TableHeadings -&gt; {{"Row1", "Row2", "Row3"}, {"Col1", "Col2", "Col3", "Col4"}}] </code></pre> <p>Produces the following table:</p> <p><img src="https://i.stack.imgur.com/Dqnmr.png" alt="enter image description here"></p> <p>How can I select the maximum value in each row and set it in a bold font?</p> <p>So that <code>4.5</code>, <code>4.66667</code> and <code>4.75</code> are bold in the 1st, 2nd and 3rd rows.</p> <p>Thanks</p>
ybeltukov
4,678
<p>Yet another realization of the straightforward idea:</p> <pre><code>t = Table[i/j + 4*Boole[j &gt; i] // N, {i, 3}, {j, 4}]; h = {{"Row1", "Row2", "Row3"}, {"Col1", "Col2", "Col3", "Col4"}}; Style[TableForm[# /. x : Max@# :&gt; Style[x, Bold] &amp; /@ t, TableHeadings -&gt; h], FontFamily -&gt; "Times"] </code></pre> <p><img src="https://i.stack.imgur.com/hRUUL.png" alt="enter image description here"></p> <p>I changed <code>FontFamily</code> for better visibility.</p>
41,940
<p>For example, the square can be described with the equation $|x| + |y| = 1$. So is there a general equation that can describe a regular polygon (in the 2D Cartesian plane?), given the number of sides required?</p> <p>Using the Wolfram Alpha site, this input gave an almost-square: <code>PolarPlot(0.75 + ArcSin(Sin(2x+Pi/2))/(Sin(2x+Pi/2)*(Pi/4))) (x from 0 to 2Pi)</code></p> <p>This input gave an almost-octagon: <code>PolarPlot(0.75 + ArcSin(Sin(4x+Pi/2))/(Sin(4x+Pi/2)*Pi^2)) (x from 0 to 2Pi)</code></p> <p>The idea is that as the number of sides in a regular polygon goes to infinity, the regular polygon approaches a circle. Since a circle can be described by an equation, can a regular polygon be described by one too? For our purposes, this is a regular convex polygon (triangle, square, pentagon, hexagon and so on).</p> <p>It can be assumed that the centre of the regular polygon is at the origin $(0,0)$, and the radius is $1$ unit.</p> <p>If there's no such equation, can the non-existence be proven? If there <em>are</em> equations, but only for certain polygons (for example, only for $n &lt; 7$ or something), can those equations be provided?</p>
Michael Gmirkin
12,138
<p>Reposted from PolymathProgrammer.com, <a href="http://polymathprogrammer.com/2011/05/29/regular-polygons-equation/comment-page-1/#comment-35399" rel="nofollow">my answer to my own initial query</a>. Generated mostly on my own after some initial help from a friend (Jason Schmurr) and my dad (Russell Gmirkin)</p> <hr> <p>I believe I've solved my own inquiry. The following are functions that, when graphed in polar coordinates render lovely polygons.</p> <p>In fact, I’ve got 3 versions (6 if you consider rotation a factor; to either align a vertex or the midpoint of a side with $\theta=0$). One with circumradius = 1 (as vertices $\to \infty$, polygons expand outward toward the circumscribed circle), one with apothem = 1 (as vertices $\to \infty$, polygons collapse inward toward the inscribed circle) and one with the midpoint between circumradius &amp; apothem = 1 (as vertices $\to \infty$, both the maxima and minima, thus the circumscribed and inscribed circles, collapse toward that ‘midpoint radius’).</p> <p>I’d be interested to know whether this approach, describing the radius of a polygon as a periodic function, has any precedent (has anyone else done this, or am I the first)? I’ve been working on this idea for some time (on and off for years), but just recently overcame some stumbling blocks with a little help from a friend and my dad. 
Most of the legwork was my own, though.</p> <p>The relatively final form(s) appear to be (note that the Wolfram Language symbol is <code>Pi</code>, capitalized):</p> <p>(n-gon, circumradius=1, unrotated)<br> <code>1/(((Sec[Pi/v]-1)/(Sec[Pi/4]-1))Abs[Cos[(v*x)/4]]+((Sec[Pi/v]-1)/(Sec[Pi/4]-1))Abs[Cos[((v*x)/4)-(Pi/2)]]-((Sec[Pi/v]-1)/(Sec[Pi/4]-1))+1)</code></p> <p>(n-gon, circumradius=1, rotated $-\pi/4$)<br> <code>1/(((Sec[Pi/v]-1)/(Sec[Pi/4]-1))Abs[Cos[((v*x)/4)-(Pi/4)]]+((Sec[Pi/v]-1)/(Sec[Pi/4]-1))Abs[Cos[((v*x)/4)-(3Pi/4)]]-((Sec[Pi/v]-1)/(Sec[Pi/4]-1))+1)</code></p> <p>(n-gon, function centered around unit circle, unrotated)<br> <code>((Sec[Pi/v]+1)/2)/(((Sec[Pi/v]-1)/(Sec[Pi/4]-1))Abs[Cos[((v*x)/4)]]+((Sec[Pi/v]-1)/(Sec[Pi/4]-1))Abs[Cos[((v*x)/4)-(Pi/2)]]-((Sec[Pi/v]-1)/(Sec[Pi/4]-1))+1)</code></p> <p>(n-gon, function centered around unit circle, rotated $-\pi/4$)<br> <code>((Sec[Pi/v]+1)/2)/(((Sec[Pi/v]-1)/(Sec[Pi/4]-1))Abs[Cos[((v*x)/4)-(Pi/4)]]+((Sec[Pi/v]-1)/(Sec[Pi/4]-1))Abs[Cos[((v*x)/4)-(3Pi/4)]]-((Sec[Pi/v]-1)/(Sec[Pi/4]-1))+1)</code></p> <p>(n-gon, apothem=1, unrotated)<br> <code>Sec[Pi/v]/(((Sec[Pi/v]-1)/(Sec[Pi/4]-1))Abs[Cos[(v*x)/4]]+((Sec[Pi/v]-1)/(Sec[Pi/4]-1))Abs[Cos[((v*x)/4)-(Pi/2)]]-((Sec[Pi/v]-1)/(Sec[Pi/4]-1))+1)</code></p> <p>(n-gon, apothem=1, rotated $-\pi/4$)<br> <code>Sec[Pi/v]/(((Sec[Pi/v]-1)/(Sec[Pi/4]-1))Abs[Cos[((v*x)/4)-(Pi/4)]]+((Sec[Pi/v]-1)/(Sec[Pi/4]-1))Abs[Cos[((v*x)/4)-(3Pi/4)]]-((Sec[Pi/v]-1)/(Sec[Pi/4]-1))+1)</code></p> <p>Don’t know whether they simplify at all to something less complicated… Even if not, they’re beauties!</p> <p>Examples:</p> <p>3-gon: <a href="http://www.wolframalpha.com/input/?i=PolarPlot%5B%28Sec%5BPi/3%5D%29/%28%28%28Sec%5BPi/3%5D-1%29/%28Sec%5BPi/4%5D-1%29%29%2a%28Abs%5BCos%5B%28%283%2ax%29/4%29-%28Pi/4%29%5D%5D%2bAbs%5BCos%5B%28%283%2ax%29/4%29-%283Pi/4%29%5D%5D-1%29%2b1%29%5D%20%28x%20from%20-9%20to%209%29" rel="nofollow">here</a></p> <p>4-gon: <a 
href="http://wolframalpha.com/input/?i=PolarPlot%5B%28Sec%5BPi/4%5D%29/%28%28%28Sec%5BPi/4%5D-1%29/%28Sec%5BPi/4%5D-1%29%29%2a%28Abs%5BCos%5B%28%284%2ax%29/4%29-%28Pi/4%29%5D%5D%2bAbs%5BCos%5B%28%284%2ax%29/4%29-%283Pi/4%29%5D%5D-1%29%2b1%29%5D%20%28x%20from%20-9%20to%209%29" rel="nofollow">here</a></p> <p>5-gon: <a href="http://wolframalpha.com/input/?i=PolarPlot%5B%28Sec%5BPi/5%5D%29/%28%28%28Sec%5BPi/5%5D-1%29/%28Sec%5BPi/4%5D-1%29%29%2a%28Abs%5BCos%5B%28%285%2ax%29/4%29-%28Pi/4%29%5D%5D%2bAbs%5BCos%5B%28%285%2ax%29/4%29-%283Pi/4%29%5D%5D-1%29%2b1%29%5D%20%28x%20from%20-9%20to%209%29" rel="nofollow">here</a></p> <p>If it's a unique solution and I'm first to it, I submit these as the Gmirkin Polygon Radius Function(s) (or some suitably nifty sounding name that’s not too cumbersome). *Smile* Heh.</p> <p>I may write them up formally for publication at some point, once a few previous engagements clear up, assuming they’ve not previously been published or some directly correlated function has already been published elsewhere. (If so, I’d like to know when, where and by whom; for academic curiosity’s sake.)</p> <p>It is my belief that a similar function exists for describing 3D Polyhedrons of some description(s). Though, I have not yet even attempted such a case and will probably stick to 2D cases for now. I can also tell you that if you vary the phase shift of the denominator <code>[Abs[Cos[]]]</code> terms by differing amounts (though not both by some multiple of $\pi/4$, $\pi/2$, etc.), you can also reproduce rectangles, isosceles triangles, etc. In some cases you can also generate diamond shapes by varying some other parameters. It's s surprisingly robust solution, as I'd hoped. Lord knows it's taken me a few years of false starts to get at the correct combination of functions. 
Though, I learned plenty along the way, much of which helped me generalize to all polygons from the square case a friend solved at my behest a week or two ago.</p> <p>Here's hoping this is an interesting, unique new solution that's viable and notable. (One can hope!)</p> <p>Sorry the post is a bit lengthy... ;)</p> <p>Best,<br> ~Michael Gmirkin</p> <hr> <p>Edit:</p> <p>Sorry. Jumped the gun slightly.</p> <p>I retract the above equations. At the behest of someone on another site, I checked in Wolfram Alpha at a few data points. While it appears to work for the Square case (where the coefficients and corrective term basically cancel out), it doesn't work for other cases, but is slightly off. I think I've got the coefficients wrong. Will have to poke around a bit more in the maths to see if it's possible to get a technically correct exact solution.</p> <p>The graphs were so close as to fool me into thinking they were exact for all cases. Will get back to you if/when I get a technically correct solution. 'Til then... I still believe there is a valid function, since the Square case is technically correct @ <code>1/(Abs[Sin[x]]+Abs[Cos[x]])</code> or <code>1/(Abs[Cos[x]]+Abs[Cos[x-(Pi/2)]])</code>. Just need the technically correct coefficient... will work on it as I've got some time. But, for now, the incorrect versions are darned close! ;o) Enough to fool most people (including me, apparently).</p>
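For what it's worth, the square case that the post ends on does check out numerically (my check, not part of the original): in polar form $r(\theta) = 1/(|\sin\theta|+|\cos\theta|)$ traces exactly the diamond $|x|+|y|=1$.

```python
from math import sin, cos, pi

# Sample the curve r(t) = 1/(|sin t| + |cos t|) and check |x| + |y| = 1.
for i in range(1000):
    t = 2 * pi * i / 1000
    r = 1 / (abs(sin(t)) + abs(cos(t)))
    x, y = r * cos(t), r * sin(t)
    assert abs(abs(x) + abs(y) - 1) < 1e-12
```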
41,940
<p>For example, the square can be described with the equation $|x| + |y| = 1$. So is there a general equation that can describe a regular polygon (in the 2D Cartesian plane?), given the number of sides required?</p> <p>Using the Wolfram Alpha site, this input gave an almost-square: <code>PolarPlot(0.75 + ArcSin(Sin(2x+Pi/2))/(Sin(2x+Pi/2)*(Pi/4))) (x from 0 to 2Pi)</code></p> <p>This input gave an almost-octagon: <code>PolarPlot(0.75 + ArcSin(Sin(4x+Pi/2))/(Sin(4x+Pi/2)*Pi^2)) (x from 0 to 2Pi)</code></p> <p>The idea is that as the number of sides in a regular polygon goes to infinity, the regular polygon approaches a circle. Since a circle can be described by an equation, can a regular polygon be described by one too? For our purposes, this is a regular convex polygon (triangle, square, pentagon, hexagon and so on).</p> <p>It can be assumed that the centre of the regular polygon is at the origin $(0,0)$, and the radius is $1$ unit.</p> <p>If there's no such equation, can the non-existence be proven? If there <em>are</em> equations, but only for certain polygons (for example, only for $n &lt; 7$ or something), can those equations be provided?</p>
J. M. ain't a mathematician
498
<p>Here is <em>another</em> parametric equation for a regular $n$-gon with unit radius:</p> <p>$$\begin{align*}x&amp;=\cos\left(\frac{\pi}{n}\right)\cos\left(\frac{\pi}{n}(2\lfloor u\rfloor+1)\right)-(2u-2\lfloor u\rfloor-1)\sin\left(\frac{\pi}{n}\right)\sin\left(\frac{\pi}{n}(2\lfloor u\rfloor+1)\right)\\y&amp;=\cos\left(\frac{\pi}{n}\right)\sin\left(\frac{\pi}{n}(2\lfloor u\rfloor+1)\right)+(2u-2\lfloor u\rfloor-1)\sin\left(\frac{\pi}{n}\right)\cos\left(\frac{\pi}{n}(2\lfloor u\rfloor+1)\right)\end{align*}$$</p> <p>for $0 \leq u \leq n$.</p> <p>The provenance of this set is a bit more transparent if we switch to matrix-vector notation:</p> <p>$$\begin{pmatrix}\cos\left(\frac{\pi}{n}(2\lfloor u\rfloor+1)\right)&amp;-\sin\left(\frac{\pi}{n}(2\lfloor u\rfloor+1)\right)\\\sin\left(\frac{\pi}{n}(2\lfloor u\rfloor+1)\right)&amp;\cos\left(\frac{\pi}{n}(2\lfloor u\rfloor+1)\right)\end{pmatrix}\begin{pmatrix}\cos\left(\frac{\pi}{n}\right)\\(2u-2\lfloor u\rfloor-1)\sin\left(\frac{\pi}{n}\right)\end{pmatrix}$$</p> <p>and we see that the construction involves rotating and copying the line segment joining the points $(\cos\left(\frac{\pi}{n}\right),\pm\sin\left(\frac{\pi}{n}\right))$ $n$ times around a circle.</p> <hr> <p>Here's sundry <em>Mathematica</em> code:</p> <pre><code>GraphicsGrid[Partition[Table[ ParametricPlot[Through[{Re, Im}[ (Cos[Pi/n] + I (2u - 2 Floor[u] - 1)Sin[Pi/n])* Exp[(I Pi/n) (2 Floor[u] + 1)]], {u, 0, n}], {n, 3, 11}], 3]] </code></pre> <p><img src="https://i.stack.imgur.com/349Fy.png" alt="regular polygons, 3-11"></p>
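A quick numeric check of this parametrization (my addition, not from the original answer): at integer $u$ the point is a vertex at distance $1$ from the origin, and at half-integer $u$ it is the midpoint of a side at distance $\cos(\pi/n)$, the apothem.

```python
from math import sin, cos, floor, pi, hypot

def point(n, u):
    # rotation angle for the current side, and the parameter along the side
    ang = pi / n * (2 * floor(u) + 1)
    s = 2 * u - 2 * floor(u) - 1
    x = cos(pi / n) * cos(ang) - s * sin(pi / n) * sin(ang)
    y = cos(pi / n) * sin(ang) + s * sin(pi / n) * cos(ang)
    return x, y

for n in (3, 4, 5, 11):
    for k in range(n):
        assert abs(hypot(*point(n, k)) - 1) < 1e-9                 # vertex on unit circle
        assert abs(hypot(*point(n, k + 0.5)) - cos(pi / n)) < 1e-9  # side midpoint = apothem
```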
2,440,478
<p>So, I understand that for something to be <em>rigorous</em> it must meet strict and occasionally extreme criteria. Within the scope of mathematics, this implies that anything that is truly rigorous may not defy any of the rules of mathematics, but my question is: do the abstract rules align with the geometric rules when it comes to proving the statements in question?</p> <p>For example, I will take the proof that $$\lim_{x\to0}\frac{\sin(x)}{x} = 1$$</p> <p>the most renowned proof of this statement entails the following geometric shape:<img src="https://i.stack.imgur.com/QtBB9.gif" alt="enter image description here"><a href="https://i.stack.imgur.com/QtBB9.gif" rel="nofollow noreferrer">1</a></p> <p>based on this shape, the following inequality of areas can be seen: $$\operatorname{area}(\triangle ACB)\leq \operatorname{area}(\text{sector }ACB) \leq \operatorname{area}(\triangle ADB)$$ which can be rewritten as $$\frac{1}{2}\cdot1\cdot1\cdot\sin(x)\leq\frac{1}{2}1^2x\leq\frac{1}{2}\cdot1\cdot\tan(x)$$ $$\frac{1}{2}\sin(x)\leq\frac{1}{2}x\leq\frac{1}{2}\tan(x)$$ then, dividing through by $\frac{1}{2}$, $$\sin(x)\leq x\leq \tan(x)$$ and then by $\sin(x)$, $$1\leq\frac{x}{\sin(x)}\leq\frac{\tan(x)}{\sin(x)}$$ and since $\tan(x)=\frac{\sin(x)}{\cos(x)}, \frac{\tan(x)}{\sin(x)}=\frac{1}{\cos(x)}$, taking reciprocals (which reverses the inequalities, since every quantity is positive for $0&lt;x&lt;\frac{\pi}{2}$) gives $$1\geq\frac{\sin(x)}{x}\geq\cos(x)$$ from here, the limit can be taken: $$\lim_{x\to0}1\geq\lim_{x\to0}\frac{\sin(x)}{x}\geq\lim_{x\to0}\cos(x)$$ $$1\geq\lim_{x\to0}\frac{\sin(x)}{x}\geq1$$ hence, by the squeeze theorem $$\lim_{x\to0}\frac{\sin(x)}{x}=1$$</p> <p>the reason for my question is this: why, despite the fact that we identify the inequality by means of observation, and that hence it can clearly be observed that the three identified areas are each one bigger than the other, do we still use the $\leq$ symbol as opposed to the $&lt;$ symbol? Isn't it true that these areas can never be <em>exactly</em> equal to each other? 
Hence, does the use of the symbol <em>truly</em> act in accordance with the "extreme conditions" of abstract mathematics? <em>(Note: in this context, I am referring to abstract mathematics as anything that is <strong>not</strong> represented visually)</em></p> <p>I understand that explaining the scenario for one geometric proof does not mean it applies to all geometric proofs, but the question is merely a curious one, and anyone with more experience in mathematics than myself is more than welcome to explain any flaws in my logic. Thank you.</p>
Especially Lime
341,019
<p>It is perfectly valid to go through and replace $\leq$ by $&lt;$ for most of this proof, since for any given $0&lt;x&lt;\pi/2$ the areas are all different. (However, even if we get to $\cos x&lt;\frac{\sin x}x&lt;1$, when we take the limit we have to put $\leq$ back in: $f(x)&lt;g(x)$ only implies $\lim f(x)\leq \lim g(x)$.)</p> <p>But you really don't have to use $&lt;$. Saying $f(x)\leq g(x)$ does not imply that sometimes $f(x)&lt;g(x)$ and sometimes $f(x)=g(x)$; all it means is that for every $x$ one or the other is true, and it is perfectly fine if it is the same one every time.</p>
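For small positive $x$ the strict inequalities do hold, and both bounds approach $1$, which is all the squeeze argument needs; here is a quick numeric confirmation (my addition).

```python
from math import sin, cos

# cos x < sin(x)/x < 1 holds strictly at these sample points in (0, pi/2),
# and both bounds tend to 1 as x -> 0.
for x in (1.0, 0.5, 0.1, 0.01, 1e-4):
    assert cos(x) < sin(x) / x < 1
```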
4,237,056
<p>Can any subgroup of a cyclic group <span class="math-container">$\left&lt;a\right&gt;$</span> be presented as <span class="math-container">$\{x\in \left&lt;a\right&gt;|x^m=e\}$</span>?<br /> Note that m is a factor of n, where n is the order of <span class="math-container">$\left&lt;a\right&gt;$</span>.<br /> I found that we can easily construct a subgroup from <span class="math-container">$\left&lt;a\right&gt;$</span> using a factor m of n. Then <span class="math-container">$C_m≡\{x\in \left&lt;a\right&gt;|x^m=e\}$</span> is a subgroup of <span class="math-container">$\left&lt;a\right&gt;$</span> (the proof is simple and I will not write it here).<br /> For example, let <span class="math-container">$\left&lt;a\right&gt;=\mathbb{Z}_{12}$</span>.<br /> If <span class="math-container">$m=1$</span>, then <span class="math-container">$C_m=\{0\}$</span>.<br /> If <span class="math-container">$m=3$</span>, then <span class="math-container">$C_m=\{0,4,8\}$</span>.<br /> If <span class="math-container">$m=4$</span>, then <span class="math-container">$C_m=\{0,3,6,9\}$</span>.<br /> If <span class="math-container">$m=12$</span>, then <span class="math-container">$C_m=\mathbb{Z}_{12}$</span>.<br /> The question is, is the reverse also true? Can every subgroup of <span class="math-container">$\left&lt;a\right&gt;$</span> be written as <span class="math-container">$C_m$</span>? In other words, can we prove that the number of subgroups of <span class="math-container">$\left&lt;a\right&gt;$</span> is exactly the number of factors of n?</p>
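The $\mathbb{Z}_{12}$ examples in the question are easy to check by machine (my addition; in additive notation $x^m=e$ becomes $mx\equiv 0 \pmod{n}$):

```python
# C_m = {x in Z_n : m*x = 0 (mod n)}, written additively for Z_12.
def C(m, n=12):
    return {x for x in range(n) if (m * x) % n == 0}

assert C(1) == {0}
assert C(3) == {0, 4, 8}
assert C(4) == {0, 3, 6, 9}
assert C(12) == set(range(12))
```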
Sidharth Ghoshal
58,294
<p>So our two equations you found combined imply that</p> <p><span class="math-container">$$ \lambda = \frac{2(x_2 -2)}{5} $$</span></p> <p>[I didn't check this and will just assume you did the algebra correctly].</p> <p>So now what to do with this information? We need to go ahead and grab one of your unused original equations. I will select</p> <p><span class="math-container">$$ \frac{\partial L}{\partial x_1} = 0 \rightarrow (2 x_2 - 2) - 5 \lambda = 0$$</span></p> <p>And we need to recall the third equation you forgot</p> <p><span class="math-container">$$ \frac{\partial L}{\partial \lambda } = 0 $$</span></p> <p>This of course just gives</p> <p><span class="math-container">$$ (x_1 - 1)^2 - 5x_2 = 0 $$</span></p> <p>Which you might recognize as the original constraint. (The derivative w.r.t the lagrange parameter always gives the constraint)</p> <p>So now we have our three equations</p> <p><span class="math-container">$$ \lambda = \frac{2(x_2 - 2)}{5} $$</span> <span class="math-container">$$ 2(x_1 - 1) + 2\lambda (x_1 - 1) = 0 $$</span> <span class="math-container">$$ (x_1 - 1)^2 - 5x_2 = 0 $$</span></p> <p>We eliminate the <span class="math-container">$\lambda$</span> term.</p> <p><span class="math-container">$$ 2(x_1 - 1) - 2(x_1 - 1) \frac{2(x_2 - 2)}{5} = 0$$</span> <span class="math-container">$$ (x_1 -1)^2 - 5x_2 = 0$$</span></p> <p>And now we can solve this system of two equations in two unknowns but observe carefully a little earlier:</p> <p><span class="math-container">$$ 2(x_1 - 1) + 2\lambda (x_1 - 1) = 0 $$</span></p> <p>Implies</p> <p><span class="math-container">$$ 2(x_1 - 1)(1 + \lambda) = 0 $$</span></p> <p>So this simplifies your work considerably see @subrosar's answer for how that works/how to observe that trick even earlier in the problem.</p>
1,188,483
<p>Suppose $|a|&lt;1$. Show that $f(z) = \frac{z-a}{1-\overline{a}z}$ is a Möbius transformation that sends $B(0,1)$ to itself.</p> <p>To make such a Möbius transformation I tried to send 3 points on the boundary to 3 points on the boundary. So plugging $i,1,-1$ into $f(z)$, we should land on the boundary of the unit ball. But I don't seem to know how to calculate these exactly:</p> <p>$$f(1) = \frac{1-a}{1-\overline{a}1}$$</p> <p>$$f(-1) = \frac{-1-a}{1+\overline{a}}$$</p> <p>$$f(i) = \frac{i-a}{1-\overline{a}i}$$</p> <p>I don't see how I could rewrite these formulas in such a way that I can tell they lie on the boundary of the circle.</p> <p>Can anyone help me?</p> <p>Kees</p>
DeepSea
101,504
<p><strong>Hint</strong>: prove that $\left|\dfrac{z-a}{1-\bar{a}z}\right| &lt; 1$ whenever $|z| &lt; 1$, by writing $z = x+iy$, $a = m+in$ and using the well-known Cauchy–Schwarz inequality $(a^2+b^2)(c^2+d^2) \geq (ac+bd)^2$.</p>
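A numeric sanity check of the hint's inequality (my addition, with the arbitrary sample value $a = 0.3+0.4i$ of modulus $1/2$): interior points map strictly inside the disk, and boundary points map to the boundary.

```python
from math import pi
from cmath import exp

a = 0.3 + 0.4j                                   # sample a with |a| = 0.5

for re in (-0.9, -0.5, 0.0, 0.5, 0.9):          # grid of points with |z| < 1
    for im in (-0.4, 0.0, 0.4):
        z = complex(re, im)
        w = (z - a) / (1 - a.conjugate() * z)
        assert abs(w) < 1                        # interior goes to interior

for k in range(8):                               # boundary goes to boundary
    z = exp(2j * pi * k / 8)
    w = (z - a) / (1 - a.conjugate() * z)
    assert abs(abs(w) - 1) < 1e-12
```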
2,480,493
<p>I have a problem with what looks like a very easy equation to solve: $\left|\frac{1 + z}{1- i\overline z}\right| = 1$. ($z$ is a complex number, $\overline z$ is the conjugate of $z$.) I got stuck at the point when, after replacing $\overline z = a-bi $ and $z = a +bi$ and getting rid of the absolute value, I end up with $a^2 +a-b =0$. I have no idea how to follow this up or whether I should take a totally different approach from the beginning. I'd be very grateful if someone could guide me toward the right solution.</p>
nonuser
463,553
<p>Since $|1+z|=|1-i\overline{z}|$ and $|w| = |\overline{w}|$, you can rewrite as follows: $$|z-(-1)|=|1+z|=|1-i\overline{z}| = |\overline{1-i\overline{z}}| =|1+iz| =|i||-i+z| = |z-i| $$</p> <p>So $z$ is at equal distance from $-1$ and $i$, and hence $z$ lies on the perpendicular bisector of the segment between $-1$ and $i$. </p>
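A numeric check of the bisector description (my addition): parametrizing the perpendicular bisector of $-1$ and $i$ as $z(t) = \frac{-1+i}{2} + t(-1+i)$ (the direction $-1+i$ is perpendicular to the segment direction $1+i$), every such $z$ satisfies $|1+z| = |1-i\overline{z}|$.

```python
# Points on the perpendicular bisector of -1 and i, and the original equation.
mid = (-1 + 1j) / 2        # midpoint of the segment from -1 to i
perp = -1 + 1j             # i*(1+i): perpendicular to the segment direction
for t in (-2.0, -0.5, 0.0, 0.7, 3.0):
    z = mid + t * perp
    assert abs(abs(1 + z) - abs(1 - 1j * z.conjugate())) < 1e-12
```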
2,480,493
<p>I have a problem with what looks like a very easy equation to solve: $\left|\frac{1 + z}{1- i\overline z}\right| = 1$. ($z$ is a complex number, $\overline z$ is the conjugate of $z$.) I got stuck at the point when, after replacing $\overline z = a-bi $ and $z = a +bi$ and getting rid of the absolute value, I end up with $a^2 +a-b =0$. I have no idea how to follow this up or whether I should take a totally different approach from the beginning. I'd be very grateful if someone could guide me toward the right solution.</p>
Robert Z
299,698
<p>Something went wrong in your evaluation. After the substitution $z=a+ib$, you should have $$(1+a)^2+b^2=|1+a+ib|^2=|1-b-ia|^2=(1-b)^2+(-a)^2$$ that is, after a few simplifications, $a=-b$.</p>
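Checking the conclusion numerically (my addition): for $z = a - ia$, i.e. $b=-a$, the two moduli agree exactly.

```python
# For z = a - i*a we have 1 + z = (1+a) - i*a and also
# 1 - i*conj(z) = 1 - i(a + i*a) = (1+a) - i*a, so the moduli coincide.
for a in (-3.0, -0.5, 0.0, 1.0, 2.5):
    z = complex(a, -a)
    assert abs(1 + z) == abs(1 - 1j * z.conjugate())
```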
4,435,945
<p>Near the point <span class="math-container">$(0,0)$</span>, find the Taylor formula for <span class="math-container">$f(x,y)=\ln(1+x+y)$</span>.</p> <p><span class="math-container">$f(x)=f(x_0)+\sum_{k=1}^n \frac{1}{k!}d^kf(x_0)+o(|x-x_0|^n)$</span></p> <p><span class="math-container">$$d^kf(0,0)=\sum_{n=0}^k\binom{k}{n}\frac{\partial^kf}{\partial x^n\partial y^{k-n}}(0,0)x^ny^{k-n}$$</span></p> <p><span class="math-container">$\frac{\partial^kf}{\partial x^n \partial y^{k-n}}=\frac{(-1)^{k-1} k!}{(1+x+y)^k}$</span></p> <p><span class="math-container">$d^kf(0,0)=\sum_{n=0}^k \binom{k}{n} (-1)^{k-1}k!x^n y^{k-n} = (-1)^{k-1} k!(x+y)^k$</span></p> <p>Plugging in the formula I don't get the answer. Answer in the book is <span class="math-container">$\sum_{n=1} ^m \frac{(-1)^{n-1}(x+y)^n}{n} + o((x^2+y^2)^\frac{m}{2})$</span></p>
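Not part of the original question: a quick numeric check that the book's expansion is the right target, comparing $\ln(1+x+y)$ with the partial sums $\sum_{n=1}^m (-1)^{n-1}(x+y)^n/n$ at a sample point.

```python
from math import log

# Partial sums of the book's series at the sample point (x, y) = (0.01, 0.02).
def taylor(x, y, m):
    s = x + y
    return sum((-1) ** (n - 1) * s ** n / n for n in range(1, m + 1))

x, y = 0.01, 0.02
for m in (1, 2, 4, 8):
    err = abs(log(1 + x + y) - taylor(x, y, m))
    assert err < abs(x + y) ** (m + 1)   # remainder shrinks like (x+y)^(m+1)
```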
José Carlos Santos
446,262
<p>Your guess is right. Note that<span class="math-container">$$\{e_1,e_2\}=\left\{\begin{bmatrix}\frac2{\sqrt5}\\0\\\frac1{\sqrt5}\end{bmatrix},\begin{bmatrix}-\frac1{\sqrt{30}}\\\frac5{\sqrt{30}}\\\frac2{\sqrt{30}}\end{bmatrix}\right\}$$</span>is an orthonormal basis of <span class="math-container">$P$</span> (which I got by applying Gram–Schmidt to the given basis). Therefore, the point of <span class="math-container">$P$</span> closest to <span class="math-container">$T_s$</span> is<span class="math-container">$$\langle T_s,e_1\rangle e_1+\langle T_s,e_2\rangle e_2=\begin{bmatrix}2\\2s\\s+1\end{bmatrix}.$$</span></p>
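A quick reproduction of this basis (my addition; the original basis of $P$ is not shown in this excerpt, but the projection $(2,2s,s+1) = (2,0,1)+s(0,2,1)$ suggests $P = \operatorname{span}\{(2,0,1),(0,2,1)\}$, which is assumed below): Gram–Schmidt on that pair returns exactly $e_1$ and $e_2$.

```python
from math import sqrt

def dot(u, v): return sum(p * q for p, q in zip(u, v))
def scale(c, u): return [c * p for p in u]
def sub(u, v): return [p - q for p, q in zip(u, v)]
def normalize(u): return scale(1 / sqrt(dot(u, u)), u)

# Gram-Schmidt on the (assumed) basis {(2,0,1), (0,2,1)} of P.
v1, v2 = [2.0, 0.0, 1.0], [0.0, 2.0, 1.0]
e1 = normalize(v1)
e2 = normalize(sub(v2, scale(dot(v2, e1), e1)))

assert all(abs(p - q) < 1e-12 for p, q in zip(e1, scale(1 / sqrt(5), [2, 0, 1])))
assert all(abs(p - q) < 1e-12 for p, q in zip(e2, scale(1 / sqrt(30), [-1, 5, 2])))
```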
750,322
<p>Find a formula for $\sum_{i=1}^{n} \frac{1}{(2i-1)(2i+1)}$ and prove that it holds for all $n \geq 1$. I don't know how to solve this particular problem; can someone help me, please? Thanks. </p>
David H
55,051
<p>Hint: Expand the summand by partial fractions to see that the partial sums telescope: </p> <p>$$\frac{1}{(2k-1)(2k+1)}=\frac{1}{2(2k-1)}-\frac{1}{2(2k+1)}\\ \sum_{k=1}^n\frac{1}{(2k-1)(2k+1)}=\sum_{k=1}^n\left(\frac{1}{2(2k-1)}-\frac{1}{2(2k+1)}\right)\\ =\frac{1}{2(2\cdot 1-1)}-\frac{1}{2(2n+1)}$$</p>
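The telescoped value simplifies further to $\frac{n}{2n+1}$, which can be verified with exact rational arithmetic (a Python sketch I am adding, not part of the original answer):

```python
from fractions import Fraction

def partial_sum(n):
    # direct sum of 1 / ((2k - 1)(2k + 1)) for k = 1..n
    return sum(Fraction(1, (2*k - 1) * (2*k + 1)) for k in range(1, n + 1))

def closed_form(n):
    # telescoped value 1/2 - 1/(2(2n + 1)), i.e. n / (2n + 1)
    return Fraction(1, 2) - Fraction(1, 2 * (2*n + 1))
```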
207,515
<p>Consider the upper half space $\mathbb{R}^n_{+} = \{x = (x_1,..,x_n) \in \mathbb{R}^n : x_n \geq 0\}$. Consider the Laplacian on this space with either the Dirichlet boundary condition or the Neumann boundary condition. My question is: can the Laplacian $\Delta$ under such boundary conditions be treated as a pseudodifferential operator of order 2? I can understand that the usual trick of Fourier inversion does not work, but if we still forcibly write $$-\Delta f(x) = \int p(x, \xi)\hat{f}(\xi)e^{ix.\xi}d\xi$$ and assume that $p(x, \xi)$ belongs to some symbol class $S^m_{\rho, \delta}$, what can go wrong?</p>
Community
-1
<p>What are pseudo-differential operators (pseudors) on a closed half-space $H$ or on a manifold with boundary? It is not adequate to only define them as restrictions from some open neighbourhood of $H$ to $H$. A pseudor $P$ does not map $C_c^\infty(H)$ into $C^\infty(H)$ (smoothness up to the boundary!) unless $P$ satisfies the transmission condition; see Boutet de Monvel's paper in Acta. Math. of 1971 or section 18.2 in Hörmander's volume III. Differential operators and the parametrices of elliptic differential operators satisfy the transmission condition, but a (scalar) square root of the Laplacian does not. (Dirac operators are non-scalar square roots of Laplacians, and they are differential operators.)</p> <p>Much work (by Boutet de Monvel, Melrose, Schulze, and many others) has been done in developing pseudo-differential calculi for manifolds with boundary (possibly with corners). Already finding the right function (distribution) spaces, on which the pseudors act as an algebra of operators, is an important issue. For example, if functions $u$ are extended somehow from $H$ to all space, then one might wish to define $Pu$ in such a way that it does not depend on the particular extension. This is no problem if $P$ is a differential operator because differential operators are local. But for genuine pseudors it is; well, it was before any theory of pseudo-differential boundary calculi existed.</p>
2,658,368
<p>I know that one can use Category Theory to formulate polynomial equations by modeling solutions as limits. For example, the sphere is the equalizer of the functions \begin{equation} s,t:\mathbb{R}^3\rightarrow\mathbb{R},\qquad s(x,y,z):=x^2+y^2+z^2,~t(x,y,z)=1. \label{equalizer} \end{equation} One could then find out more about the solution set by mapping the equalizer diagram into other categories. More generally, solution sets of polynomial equations (and more generally, <a href="https://en.wikipedia.org/wiki/Algebraic_variety" rel="noreferrer">algebraic varieties</a>) are a central study object of algebraic geometry.</p> <p>As differential equations are central to all areas of physics, I assume that there have been made a lot of attempts to generalise these ideas to solution sets of these. However, I do not yet have a lot of knowledge about algebraic geometry, topos theory or synthetic differential geometry. Thus I would be grateful if someone could explain roughly where and how Category Theory is used to study differential equations. </p> <p>Can Category Theory really <strong>help to solve</strong> differential equations (for example by mapping diagrams of equations to other categories, similarly to how problems of topology are often solved by mapping topological spaces to algebraic ones in algebraic topology) or can it "only" provide schemes for generalising differential equations to other spaces/categories?</p> <p>I am particularly interested in names of areas I have to look into if I want to understand this better. Also literature recommendation would be very welcome. <br><br><br> <strong>EDIT</strong>: I found a book by Vinogradov called <a href="http://books.google.dk/books?id=XIve9AEZgZIC" rel="noreferrer">Cohomological Analysis of Partial Differential Equations and Secondary Calculus</a> where "the main result [...] is Secondary Calculus on diffieties". 
</p> <p>However, the material is very deep and thus I am still not completely able to say whether these "new geometrical objects which are analogs of algebraic varieties" can be used to help solving PDEs or if they serve to structure the theory of PDEs or result in other applications I am not aware of. Thus further information would be very appreciated!</p>
Nicolas Hemelsoet
491,630
<p>I just realized that <em>derived categories</em> and techniques from homological algebra can help to solve differential equations. This is a very huge subject called <em>algebraic analysis</em> which uses the tools of $D$-modules and sheaf theory. See this <a href="https://mathoverflow.net/questions/77616/d-modules-and-algebraic-solutions-of-pdes">Mathoverflow question</a>, this <a href="https://mathoverflow.net/questions/261131/applications-of-algebra-to-analysis/261170#261170">other MO question</a> (first answer) and this <a href="http://www.math.columbia.edu/~scautis/dmodules/hottaetal.pdf" rel="noreferrer">book on D-modules</a> which uses heavily the language of categories. Also, Borel and Coutinho each wrote a book; you might be able to find some more information there. </p>
391,572
<p>I never really understood what $e$ means and I'm always terrified when I see it in equations. What is it? Can somebody dumb it down for me? I know it's a constant. Is it as simple as that?</p>
rurouniwallace
35,878
<p>The simplest way to understand it is to consider the following function:</p> <p>$$f(x)=\left(1+\frac{1}{x}\right)^x$$</p> <p>As $x$ gets larger and larger, notice what number it approaches:</p> <p>$$f(1)=2$$ $$f(2)=2.25$$ $$f(3)\approx 2.37073$$ $$...$$ $$f(1000)\approx 2.7169$$ $$...$$</p> <p>It is important in mathematics because it is arguably the single real number most commonly found in nature (from heat flow to battery discharge to population growth/decay models), which is why the logarithm with base $e$ is called the <em>natural</em> logarithm.</p>
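A numerical illustration of this convergence (a Python sketch added here, not part of the original answer):

```python
import math

def f(x):
    # the expression (1 + 1/x)**x, which approaches e as x grows
    return (1 + 1 / x) ** x

values = [f(10 ** k) for k in range(1, 7)]       # x = 10, 100, ..., 10**6
gaps_to_e = [abs(v - math.e) for v in values]     # distance from e shrinks
```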
1,057,172
<p>How many 3-digit numbers can be written with $2,4,4,6,6$? </p> <p>I tried $\frac{5\cdot 4\cdot 3}{2!\cdot 2!} = 15$, but it's wrong. </p> <p>When I solved the question "how many 3-digit numbers can be written with $1,1,2$", the solution $\frac{3\cdot 2\cdot 1}{2!}$ was correct, but why doesn't this way work for the question above?</p>
Jack D'Aurizio
44,121
<p>The probability to hit $1$ after $1$ step is $\frac{1}{2}$, encoded by the string $+$; the probability to hit $1$ after $3$ steps is $\frac{1}{8}$, encoded by the string $-++$; the probability to hit $1$ after $5$ steps is $\frac{1}{16}$, encoded by the strings $-+-++$ and $--+++$. To compute the probability to hit $1$ after $2k+1$ steps, we just have to count how many binary strings of length $2k$, with $k$ zeroes and $k$ ones, have more zeroes than ones in every prefix. This is a well-known problem: the solution is given by the <a href="http://en.wikipedia.org/wiki/Catalan_number" rel="nofollow">Catalan numbers</a>: $$ C_k = \frac{1}{k+1}\binom{2k}{k}. $$ The probability to hit $1$ in the first $2N+1$ steps is so: $$ \sum_{k=0}^{N}\frac{1}{2^{2k+1}(k+1)}\binom{2k}{k}=\sum_{k=0}^{N}\left(\frac{1}{4^k}\binom{2k}{k}-\frac{1}{4^{k+1}}\binom{2k+2}{k+1}\right)\\=1-\frac{1}{4^{N+1}}\binom{2N+2}{N+1}=1-\frac{1}{\sqrt{\pi(N+5/4)}}+O\left(\frac{1}{N^{5/2}}\right).$$</p>
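The identity behind the telescoping step, $\sum_{k=0}^{N}\frac{C_k}{2^{2k+1}} = 1-\frac{1}{4^{N+1}}\binom{2N+2}{N+1}$, can be checked with exact rational arithmetic (a Python sketch I am adding, not part of the original answer):

```python
from fractions import Fraction
from math import comb

def hit_prob_direct(N):
    # sum over k of Catalan(k) / 2**(2k + 1): the chance of first hitting 1
    # at step 2k + 1, accumulated for k = 0..N
    return sum(Fraction(comb(2*k, k), (k + 1) * 2 ** (2*k + 1))
               for k in range(N + 1))

def hit_prob_closed(N):
    # closed form 1 - binom(2N + 2, N + 1) / 4**(N + 1)
    return 1 - Fraction(comb(2*N + 2, N + 1), 4 ** (N + 1))
```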
112,889
<p>I am new to Wolfram Mathematica and I need help making a simple program. Let's suppose we have a list:</p> <pre><code>list = {{1, 11}, {2, 7}, {4, 2}, {7, 9}, {-2, 3}, {-1, 10}}; </code></pre> <p>Now, I need to collect the first elements of the sublists, but not all of them, only those whose second elements are larger than 8. </p> <p>Thanks in advance. </p>
RunnyKine
5,709
<pre><code>list = {{1, 11}, {2, 7}, {4, 2}, {7, 9}, {-2, 3}, {-1, 10}}; Select[list, #[[2]] &gt; 8 &amp;][[All, 1]] </code></pre> <blockquote> <p>{1, 7, -1}</p> </blockquote> <p>OR using <code>Pick</code></p> <pre><code>Pick[list[[All, 1]], UnitStep[list[[All, 2]] - 8], 1] </code></pre> <blockquote> <p>{1, 7, -1}</p> </blockquote>
2,419,785
<blockquote> <p>Prove the statement is true using mathematical induction: $$2n-1 \leq n!$$</p> </blockquote> <p>My attempt: this is true for $n=1$. </p> <p>Suppose it is true for $n$, i.e., $2n-1 \leq n!$</p> <p>Now, $2n-1 \leq n!\implies 2n-1+2 \leq n!+2$ </p> <p>From here, how do I proceed? </p>
user577215664
475,762
<p>When the exponents are at the same level, you only deal with addition and multiplication of the exponents. So $x^{10n} \left( x^5 \left(x^{-5}\right)^n \right) = x^{-10}$. Multiplying $n$ by $-5$:</p> <p>$x^{10n} x^5 x^{-5n} = x^{-10}$</p> <p>Adding $5$ and $-5n$:</p> <p>$x^{10n} x^{5-5n} = x^{-10}$</p> <p>$x^{10n+5-5n} = x^{-10}$</p> <p>$5+5n = -10$</p> <p>$5n = -15$</p> <p>$n = -3$</p>
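Solving $5n=-15$ gives $n=-3$, which can be spot-checked numerically (a Python sketch added here, not part of the original answer):

```python
n = -3                          # the solved exponent
assert 10*n + 5 - 5*n == -10    # the exponent equation balances

x = 1.7                         # any positive x works as a spot check
lhs = x ** (10 * n) * (x ** 5) * (x ** (-5)) ** n
rhs = x ** (-10)
```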
2,546,434
<p>I have trouble calculating this limit algebraically (without L'Hospital's rule):</p> <p>$$\lim_{x \to 1} \frac{x^{\pi} - x^e}{x-1}$$</p> <p>Substituting 1 gives indeterminate form. If this was e.g. $(x^5 - x^2)$ in the numerator, one could easily factor out the $(x-1)$, because $(x^5 - x^2) = x^2 (x - 1) (x^2 + x + 1)$. But I don't know what to do in my problem, because the exponents are not integers.</p> <p>EDIT: The problem is from a calculus textbook and is before derivatives are introduced so I assume it can be found without derivatives.</p>
Rene Schipperus
149,912
<p>OK, so for integer $n$ it's clear that $$\frac{x^n-1}{x-1}\to n.$$ For a fraction $\frac{p}{q}$, let $x=y^q$; then $$\frac{x^{\frac{p}{q}}-1}{x-1}=\frac{y^p-1}{y^q-1}\to \frac{p}{q}.$$<br> Now for any $a$ we have </p> <p>$$\lim\limits_{x\to 1^+}\frac{x^a-1}{x-1}=\lim\limits_{x\to 1^+}\sup\{\frac{x^r-1}{x-1}|r\in \mathbb{Q}, r&lt;a\}$$ $$=\sup\{\lim\limits_{x\to 1^+}\frac{x^r-1}{x-1}|r\in \mathbb{Q}, r&lt;a\}=a$$ thus your limit is $$\pi-e$$</p>
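A numerical check of the resulting limit (a Python sketch I am adding, not part of the original answer):

```python
import math

def difference_quotient(x):
    # (x**pi - x**e) / (x - 1), defined for x != 1
    return (x ** math.pi - x ** math.e) / (x - 1)

limit = math.pi - math.e                                   # expected value, ~0.4233
approx = [difference_quotient(1 + 10.0 ** (-k)) for k in range(2, 8)]
```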
1,644,715
<p>Define a locally lipschitz and nonnegative function $f\colon\mathbb{R}^n\to\mathbb{R}$. Let $M\in\mathbb{R}^{n\times n}$ and $\eta&gt;0\in\mathbb{R}$. Consider the function $h\colon\mathbb{R}^n\to\mathbb{R}^n$ defined as </p> <p>$$h(\mathbf{x})= \begin{cases} \frac{1}{\Vert M\mathbf{x}\Vert}M\mathbf{x}, &amp;\text{if }f(\mathbf{x})\Vert M\mathbf{x}\Vert\ge\eta,\\ \frac{f(\mathbf{x})}{\eta}M\mathbf{x}, &amp;\text{if }f(\mathbf{x})\Vert M\mathbf{x}\Vert&lt;\eta. \end{cases}$$</p> <p>Show $h$ is lipschitz on any compact subset $\mathcal{D}\subseteq\mathbb{R}^n$.</p> <hr> <p>Let $\mathbf{x},\mathbf{y}\in\mathcal{D}$, then $h$ is Lipschitz on $\mathcal{D}\subseteq\mathbb{R}^n$ if $$\Vert h(\mathbf{x})-h(\mathbf{y})\Vert\le L\Vert \mathbf{x}-\mathbf{y}\Vert$$</p> <p>for some Lipschitz constant $L&gt;0\in\mathbb{R}$.</p> <hr> <p>What is the best plan of attack for this? I initially split this into two cases when $f(\mathbf{x})\Vert M\mathbf{x}\Vert\ge\eta$ and when $f(\mathbf{x})\Vert M\mathbf{x}\Vert&lt;\eta$. I then tried to write $\Vert h(\mathbf{x})-h(\mathbf{y})\Vert$ in terms of a const. multiplied by $\Vert\mathbf{x}-\mathbf{y}\Vert$ then we can take $L=$ const. but I come to difficulties since $f$ is only locally lipschitz.</p>
Gregor de Cillia
210,541
<p>Define the two sets </p> <p>$$A:=\{x\in \mathbb{R}^n: f(x)\|Mx\|\geq \eta\}$$ $$B:=\{x\in \mathbb{R}^n: f(x)\|Mx\| &lt; \eta\}$$</p> <p>For simplicity, I assume that the provided matrix norm satisfies</p> <p>$$\|Mx-My\| \leq \|M\|\|x-y\|$$</p> <p>Since we want to show that $h$ is locally Lipschitz, we will only regard a compact subset $\mathcal{D}\subseteq \mathbb{R}^n$ and denote</p> <p>$$\bar{f} := \max_{x\in\mathcal{D}}\|f(x)\|,\ L_f := \max_{x\neq y\in \mathcal{D}} \frac{\|f(y)-f(x)\|}{\|y-x\|},\ \|\mathcal{D}\|:= \max_{y\in \mathcal{D}}\|y\|$$</p> <hr> <h2>Step 1: Show that $h$ is locally Lipschitz on $A$</h2> <p>$$\|h(x)-h(y)\| = \left\|M\left(\frac{x}{\|Mx\|}-\frac{y}{\|My\|}\right)\right\|\leq \|M\| \left\|\frac{x}{\|Mx\|}-\frac{y}{\|My\|}\right\|$$</p> <p>$$\leq \|M\| \frac{\|M\|\|\mathcal{D}\|+\|M\|\|\mathcal{D}\|}{\eta^2 (\bar{f})^{-2}}\|x-y\|$$</p> <hr> <h2>Step 2: Show that $h$ is locally Lipschitz on $B$</h2> <p>$$\|h(x)-h(y)\| \leq \frac{1}{\eta}\left(\|f(x)-f(y)\|\, \|M\|\|x\|+\bar{f}\|M\|\|x-y\|\right)$$</p> <p>$$\leq \frac{1}{\eta}(L_f\|M\|\|\mathcal{D}\|+\bar{f}\|M\|)\|x-y\|$$</p> <hr> <h2>Step 3: Show that $h$ is continuous on $\partial A$</h2> <p>Let $x^*\in\partial A$ be arbitrary. Let $(x_n)_{n\in\mathbb{N}}$ be a convergent sequence in $A$ satisfying $x^*=\lim_{n\to\infty}x_n$; we get</p> <p>$$h_1 := \lim_{n\to\infty} h(x_n) = \frac{1}{\|Mx^*\|}Mx^*$$</p> <p>If the sequence is instead in $B$, i.e. $x_n\in B,\ n\in\mathbb{N}$, we have</p> <p>$$h_2 := \lim_{n\to\infty} h(x_n) = \frac{f(x^*)}{\eta}Mx^*$$</p> <p>Since $x^*\in \partial A$ we have $f(x^*)\|Mx^*\|=\eta$, and therefore $h_1=h_2$.</p>
2,315,236
<blockquote> <p>Let $I := [0, \frac{\pi}{2}]$ and let $f : I \to \Bbb{R}$ be defined by $f(x) = \sup \{x^2, \cos(x)\}$ for $x \in I$. Show there exists an absolute minimum point $x_0 \in I$ for $f$ on $I$. Show that $x_0$ is a solution to the equation $x^2 = \cos(x)$. </p> </blockquote> <p>I am having a little trouble with this problem. From the max/min theorem, we know that such an $x_0$ described above exists; moreover, by the intermediate value theorem it is clear that there exists a $z \in I$ such that $z^2 = \cos(z)$. My strategy is to show that $x_0 = z$. Since $f(x_0)$ is minimum, we know that $\sup \{x_0^2, \cos(x_0) \} \le \max \{z^2, \cos(z)\} = z^2$. Hence $x_0^2 \le z^2$, $\cos(x_0) \le z^2$, and $\cos (x_0) \le \cos(z)$, and therefore $(x_0 - z)(x_0 + z) \le 0$. </p> <p>Now, if $x_0 \neq z$ were true, then $x_0 + z &gt; 0$ and so we would need $x_0 - z \le 0$ or $x_0 \le z$. Since $\cos()$ is decreasing on $I$, $\cos (z) \le \cos (x_0)$, which contradicts the fact that $\cos (x_0) \le \cos (z)$. However, I don't yet have access to the concept of a derivative yet to show that $\cos()$ is decreasing. </p> <p>I am not sure how to finish this problem without the assumption that $\cos()$ is decreasing on $I$. I could use some hints.</p>
Saketh Malyala
250,220
<p>So in a $2$-element set, the supremum is just the maximum.</p> <p>If the two values are equal, the supremum is that common value.</p> <p>So, as you can see, $f(x)=\cos(x)$ on the interval $(0,x_0]$, because there $\cos(x)\geq x^2$.</p> <p>And $f(x)=x^2$ on the interval $[x_0,\frac{π}{2})$, because there $x^2 \geq \cos(x)$.</p> <p>Consider $f(x)$ near $x=x_0$. From the left $f(x)$ is decreasing, and from the right $f(x)$ is increasing.</p> <p>This change in monotonicity signals a relative minimum.</p> <p>And right where $\cos(x)$ and $x^2$ pass each other on the number line, the function switches pieces. This is also where $\cos(x)=x^2$, which happens at $x=x_0$. </p>
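Numerically, the crossing point can be located by bisection (a Python sketch I am adding, not part of the original answer); no derivatives are needed, since $\cos(x)-x^2$ is strictly decreasing on $(0,\pi/2]$.

```python
import math

def f(x):
    # f(x) = sup{x**2, cos(x)}
    return max(x ** 2, math.cos(x))

# bisection for the crossing point of cos(x) and x**2 on [0, pi/2]
lo, hi = 0.0, math.pi / 2
for _ in range(80):
    mid = (lo + hi) / 2
    if math.cos(mid) > mid ** 2:
        lo = mid
    else:
        hi = mid
x0 = (lo + hi) / 2
```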
2,755
<p>As Akhil had great success with his <a href="https://mathoverflow.net/questions/1291/a-learning-roadmap-for-algebraic-geometry">question</a>, I'm going to ask one in a similar vein. So representation theory has kind of an intimidating feel to it for an outsider. Say someone is familiar with algebraic geometry enough to care about things like G-bundles, and wants to talk about vector bundles with structure group G, and so needs to know representation theory, but wants to do it as geometrically as possible.</p> <p>So, in addition to the algebraic geometry, lets assume some familiarity with representations of finite groups (particularly symmetric groups) going forward. What path should be taken to learn some serious representation theory?</p>
Dinakar Muthiah
788
<p>I really like "Lie Groups and Lie Algebras" by Kirillov Jr. It's available <a href="http://www.math.sunysb.edu/~kirillov/liegroups/liegroups.pdf">here</a> for free. It covers a lot of material in a relatively short book, so I recommend it if you're trying to get a good overview of what Lie theory is about. </p> <p>Fulton and Harris is all right, but I found the book to be too drawn out for its own good. They have a lot of worked out examples, which are good to look at after you've learned the material elsewhere, but they are difficult to follow if you've never seen them before. </p> <p>I second Ginzburg and Chriss. It's full of tons of interesting stuff.</p>
2,755
<p>As Akhil had great success with his <a href="https://mathoverflow.net/questions/1291/a-learning-roadmap-for-algebraic-geometry">question</a>, I'm going to ask one in a similar vein. So representation theory has kind of an intimidating feel to it for an outsider. Say someone is familiar with algebraic geometry enough to care about things like G-bundles, and wants to talk about vector bundles with structure group G, and so needs to know representation theory, but wants to do it as geometrically as possible.</p> <p>So, in addition to the algebraic geometry, lets assume some familiarity with representations of finite groups (particularly symmetric groups) going forward. What path should be taken to learn some serious representation theory?</p>
Fran Burstall
1,143
<p>Sounds like Goodman and Wallach's <a href="http://books.google.co.uk/books?id=KBFoV2jpnPUC&amp;lpg=PR13&amp;ots=YyDe_Qb65k&amp;dq=goodman%20wallach%20invariant%20theory&amp;pg=PR4#v=onepage&amp;q=&amp;f=false" rel="noreferrer">Representations and invariants of the classical groups</a> might be worth looking at. It has a rather algebro-geometric perspective, is beautifully written and goes further than many texts: for example, both the analytic and cohomological proofs of the Weyl character formula; branching laws; spinors and more.</p>
2,882,889
<p>Let $(P_n)$ be a sequence such that $P_0 = 1$ and $P_n = (\log{(n+1)})^n$.</p> <p>I'm trying to prove that $P_n ^2 \leq P_{n-1}\,P_{n+1}$ for $n \geq 1$ using an induction argument.</p> <p>For $n=1$ we have that $(\log 2)^2 \approx 0.48$ and $1\cdot (\log 3)^2 \approx 1.206$, hence $P_1 ^{2} \leq P_{0}\,P_2$.</p> <p>Given $n &gt; 1$, suppose that it is true for $n$, i.e $$ (\log (n+1))^{2n} \leq (\log (n))^{n-1}\, (\log (n+2))^{n+1} $$</p> <p>we want to show that</p> <p>$$ (\log (n+2))^{2(n+1)} \leq (\log (n+1))^{n}\, (\log (n+3))^{n+2}.$$</p> <p>However, I've been trying here without any success.</p> <p>Help?</p>
Marco
582,590
<p>Let <span class="math-container">$f(x)=x \log\log(x+1)$</span>. Then <span class="math-container">$$f'(x)=\log \log(x+1)+\dfrac{x}{(x+1)\log(x+1)},$$</span> <span class="math-container">$$f''(x)=\dfrac{1}{(x+1)\log(x+1)}+\dfrac{1}{(x+1)^2\log(x+1)}-\dfrac{x}{(x+1)^2\log^2(x+1)}=\dfrac{1}{(x+1)\log(x+1)}\left( 1+\dfrac{1}{x+1}-\dfrac{1}{(x+1)\log(x+1)}\right )\geq 0 \quad (x\geq 1).$$</span> So <span class="math-container">$y=f(x)$</span> is convex for <span class="math-container">$x\geq 1$</span>. In particular, for <span class="math-container">$n\geq 2$</span>, <span class="math-container">$$f(n) \leq \dfrac{f(n-1)+f(n+1)}{2}$$</span> (the case <span class="math-container">$n=1$</span> was checked directly in the question). So <span class="math-container">$$n\log \log(n+1)\leq \dfrac{(n-1)\log \log n + (n+1)\log \log(n+2)}{2}$$</span> which implies the inequality after exponentiating both sides.</p>
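The log-convexity claim can be sanity-checked numerically (a Python sketch added here, not part of the original answer):

```python
import math

def P(n):
    # P_0 = 1 and P_n = (log(n + 1))**n for n >= 1
    return 1.0 if n == 0 else math.log(n + 1) ** n
```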
1,072,634
<p>I need to solve the differential equation $$y\frac{dy}{dx} = (x+7)(y^2+6)$$</p> <p>I know that the first step is to isolate both term each side and then integrate...</p> <p>But I can't figure out how to isolate term on this one, I would probably be able to solve the rest of the equation, just need to know how to start.</p> <p>Thanks.</p>
bkn
201,879
<p>No (to your title question), a function $f : A \to B$, by definition, gives each $x \in A$ a value $f(x) \in B$.</p> <p>To be precise, you should have said that "for every element in the codomain, there is (at least) one element which is mapped to it by $f$" for your definition of surjectivity.</p>
2,743,604
<p>A busy railway station has two taxi stands at separate exits. At stand A, taxis arrive according to a Poisson process of rate 2 per minute.</p> <p>Following the arrival of a train, a queue of 40 customers suddenly builds up at stand A, waiting for taxis to arrive. Assume that whenever a taxi arrives, it instantly picks up exactly one customer (if there is one), and assume that all customers in the queue patiently wait their turn. Approximately calculate the probability that the 40th customer in the queue has to wait longer than 15 minutes in total before boarding a taxi.</p> <hr> <p>I'm not quite sure which distribution this question is hinting at. My first thought was the normal distribution, but the question doesn't give a standard deviation that I could put into the equation.</p> <p>My second thought was the Poisson distribution, but that's about the probability of the number of times an event happens in a fixed time interval. </p> <p>Thoughts?</p>
Mike Pierce
167,197
<p>Edit: After reading <a href="https://math.stackexchange.com/a/2743793/167197">fleablood's answer</a>, I'm rather convinced that my interpretation of the question is wrong. That first sentence, "<em>A watch is stopped for 15 minutes every hour on the hour,</em>" makes it sound like the watch stops on hours of <em>real time</em>, not hours in terms of the watch's time. I assumed this second interpretation in my answer below. But it seems that the "<em>back of the book</em>" missed this subtlety too? Anyways, future readers should see <a href="https://math.stackexchange.com/a/2743793/167197">fleablood's answer</a> in addition to this one. </p> <hr> <p>As <a href="https://math.stackexchange.com/questions/2743596/elementary-algebra-puzzle#comment5660957_2743596">mentioned in the comments</a>, yeah, the puzzle statement isn't exactly clear. It makes sense to stop counting time the moment the watch strikes midnight, but we don't know whether we should start counting from the first moment the watch reads 12 noon (in which case we count that 15 minute pause), or if we should start counting from the last moment the watch reads 12 noon (once that 15 minute pause is over). So depending on your interpretation, the answer is either </p> <p>$$12\,\mathrm{hrs} + (11)\frac{1}{4}\,\mathrm{hrs} = \left(14+\frac{3}{4}\right) \,\mathrm{hrs} \quad\text{or}\quad 12\,\mathrm{hrs} + (12)\frac{1}{4}\,\mathrm{hrs} = 15\,\mathrm{hrs}\,.$$</p> <p>The first one is what the "<em>back of the book</em>" says, but I personally like the second interpretation better since you get a whole number of hours. Maybe there's a better way to phrase the puzzle to imply this second interpretation. Here's my (wordy) version:</p> <blockquote> <p>You have an old antique stopwatch, and the hands of the stopwatch are both pointing directly to 12. You start the stopwatch, but nothing immediately happens, both hands frozen on 12 (the stopwatch is old, after all). 
After exactly 15 minutes pass though, the stopwatch begins working normally and the hands begin to move (thank goodness it's not completely broken). But one hour later, when the minute hand is pointing to 12 again, the stopwatch freezes for another 15 minutes. You conclude that there is some quirk about the watch that causes it to freeze up every time the minute hands points to 12. Now watching a stopwatch count time isn't exactly exciting, so naturally you fall asleep. You awaken later just in time to see the hour hand and minute hand meet up at 12 again. How long have you been asleep?</p> </blockquote>
2,035,237
<p>Let $f: [0, \infty) \rightarrow \mathbb{R}$ be a continuous function and define $$F(t) = \int_{0}^{\frac{1}{t}} f(tx)\cos(t^2x) dx.$$</p> <p>Show that $\lim_{t \rightarrow \infty} tF(t) = 0.$</p> <p>This is part of a question from an old analysis qualifying exam at my university. I came across it while trying to study for my final exam (I'm an undergrad). I have honestly not seen a problem like this before, so I am not sure how to proceed. Can I please have a hint?</p> <p>I tried integration by parts, but that was a disaster. Another technique I know to make integrals nice is the second mean value theorem for integrals, but that doesn't apply here, since neither $f(tx)$ nor $\cos(t^2x)$ is necessarily non-negative for all $x$. Not sure what to try next, or if this question is even in the scope of what was covered in my class. </p>
yangcs11
394,064
<p>Here is a suggestion that might be useful. Substituting $u=tx$, we can rewrite the limit as follows: \begin{equation} \lim_{t\to \infty}tF(t) = \lim_{t\to \infty}\int_{0}^{1}f(u)\cos(tu)\,du \end{equation} You see, as $t \to \infty$, $\cos(tu)$ oscillates very fast between $-1$ and $+1$. Because $f$ is continuous on $[0,1]$, the integral tends to $0$; this is the Riemann-Lebesgue lemma.</p>
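The decay can be observed numerically with a midpoint Riemann sum (a Python sketch I am adding, not part of the original answer; $f(x)=e^x$ is just a sample continuous function):

```python
import math

def oscillatory_integral(f, t, n=100_000):
    # midpoint Riemann sum of f(x) * cos(t x) over [0, 1]
    h = 1.0 / n
    return h * sum(f((k + 0.5) * h) * math.cos(t * (k + 0.5) * h)
                   for k in range(n))

# the magnitude should shrink as the oscillation frequency t grows
vals = [abs(oscillatory_integral(math.exp, t)) for t in (10.0, 100.0, 1000.0)]
```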
2,412,839
<p>I figured I could use the binomial formula to solve this problem but then remembered it only works when the order does matter (repeated combinations are included). Is there a formula that does not account for the repeated combinations and that could help me solve this problem?</p>
carmichael561
314,708
<p>It's simpler to consider the complementary probability: the probability that no sixes are rolled is $(\frac{5}{6})^3$, so the probability that at least one six is rolled is $1-(\frac{5}{6})^3$.</p> <p>If you instead use the method you suggested in your question, the result will be $$ {3\choose 1}\frac{1}{6}\Big(\frac{5}{6}\Big)^2+{3\choose 2}\Big(\frac{1}{6}\Big)^2\frac{5}{6}+{3\choose 3}\Big(\frac{1}{6}\Big)^3$$</p> <p>It follows from the binomial theorem that these two answers are in fact equal.</p>
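The complementary-probability answer is $\frac{91}{216}$, and the count-of-exact-sixes sum $\sum_{k=1}^{3}\binom{3}{k}\left(\frac16\right)^k\left(\frac56\right)^{3-k}$ gives the same value, as an exact computation confirms (a Python sketch added here, not part of the original answer):

```python
from fractions import Fraction
from math import comb

p = Fraction(1, 6)  # probability of rolling a six on one die

complement = 1 - (1 - p) ** 3  # 1 - (5/6)**3

# sum over exactly k sixes, k = 1, 2, 3
direct = sum(comb(3, k) * p ** k * (1 - p) ** (3 - k) for k in range(1, 4))
```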
1,902,407
<p>Expand the given function $f$ as a Taylor series around $c=3$: $$f(x) = \frac{x-3}{(x-1)^2}+\ln{(2x-4)} $$</p> <p>and find the open interval on which the series converges. What is the radius of convergence?</p> <p>This is what I have so far. We know that $\ln{(1+x)} = \sum_{n=1}^{\infty} \frac{(-1)^{n+1}x^n}{n}$ and $\frac{1}{1-x}=\sum_{n=0}^\infty x^n$, $|x| \lt 1$.</p> <p>We can rewrite $\ln{(2x-4)}$ as $\ln{(-\frac{1}{4})}+\ln{(-x/2+1)}$ and then expand the last expression, but I don't know what to do with $\ln{(-1/4)}$. We can split the given fraction into partial fractions as $\frac{x-3}{(x-1)^2} = \frac{1}{x-1}-\frac{2}{(x-1)^2}$. The first fraction we can expand easily, but I don't know how to expand a fraction with a squared binomial in the denominator.</p>
Bernard
202,857
<p>You have to find prime numbers $p$ such that $p-1$ is a divisor of $100$ – or a divisor $d$ of $100$ such that $d+1$ is prime, and that do not divide $10$. The list of divisors of $100$ is $$\begin{matrix}1&amp;5&amp;25\\2&amp;10&amp;50\\4&amp;20&amp;100\end{matrix}$$ So you obtain $\;\{3,11,101\}$.</p>
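The filtering described above can be checked mechanically (a Python sketch I am adding, not part of the original answer; the helper names are mine):

```python
def is_prime(n):
    # trial division is plenty for numbers this small
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

divisors_of_100 = [d for d in range(1, 101) if 100 % d == 0]
candidates = [d + 1 for d in divisors_of_100]            # d + 1 must be prime
answer = sorted(p for p in candidates if is_prime(p) and 10 % p != 0)
```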
1,397,450
<p>10 objects are randomly distributed among 3 boxes. What is the probability of having 6 objects in one of the boxes, 3 in another one and a single object in the remaining third box? </p> <p>My solution is: </p> <p>For the first box we have $10 \choose 6$; since 6 objects are already placed we are left with 4 remaining, so for the second box we have $4 \choose 3$, and the remaining single object is placed in the last unoccupied box. Therefore, the total number of possibilities for creating the desired distribution of objects among the 3 boxes is: $$ {10 \choose 6} \times {4\choose 3} \times {1 \choose 1} $$</p> <p>To use the probability formula I need the total number of ways to distribute 10 objects among 3 boxes. </p> <p>Well, I cheated and looked in the answers section; the answer is $3^{10}$ and the probability is </p> <p>$$ P=\frac{{10 \choose 6} \times {4\choose 3} \times {1 \choose 1}}{3^{10}} $$ </p> <p>The question is how the $3^{10}$ can be explained. This looks like variation with repetition, but I can't really understand it. </p>
Michael Hardy
11,667
<p>If <span class="math-container">$a,b$</span> are real, then <span class="math-container">$a+ib = r(\cos\theta+i\sin\theta)$</span> where <span class="math-container">$r=\sqrt{a^2+b^2}$</span> and <span class="math-container">$\tan\theta = \dfrac b a$</span>, and <span class="math-container">$$ (a+ib)^N = r^N(\cos(N\theta) + i\sin(N\theta)). $$</span></p>
1,397,450
<p>10 objects are randomly distributed among 3 boxes. What is the probability of having 6 objects in one of the boxes, 3 in another one and a single object in the remaining third box? </p> <p>My solution is: </p> <p>For the first box we have $10 \choose 6$; since 6 objects are already placed we are left with 4 remaining, so for the second box we have $4 \choose 3$, and the remaining single object is placed in the last unoccupied box. Therefore, the total number of possibilities for creating the desired distribution of objects among the 3 boxes is: $$ {10 \choose 6} \times {4\choose 3} \times {1 \choose 1} $$</p> <p>To use the probability formula I need the total number of ways to distribute 10 objects among 3 boxes. </p> <p>Well, I cheated and looked in the answers section; the answer is $3^{10}$ and the probability is </p> <p>$$ P=\frac{{10 \choose 6} \times {4\choose 3} \times {1 \choose 1}}{3^{10}} $$ </p> <p>The question is how the $3^{10}$ can be explained. This looks like variation with repetition, but I can't really understand it. </p>
Jan Eerland
226,665
<p>$$(1+i)^N=$$ $$\left(|1+i|e^{\arg(1+i)i}\right)^N=$$ $$\left(\sqrt{2}e^{\frac{\pi}{4}i}\right)^N=$$ $$\left(\sqrt{2}\right)^N e^{\frac{\pi N}{4}i}=$$ $$2^{\frac{N}{2}} e^{\frac{\pi N}{4}i}$$</p>
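The polar computation can be verified against direct complex exponentiation (a Python sketch added here, not part of the original answer):

```python
import cmath

def polar_power(N):
    # 2**(N/2) * exp(i * pi * N / 4), the closed form derived above
    return 2 ** (N / 2) * cmath.exp(1j * cmath.pi * N / 4)

# compare against (1 + i)**N computed directly
pairs = [((1 + 1j) ** N, polar_power(N)) for N in range(17)]
```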
574,220
<p><strong>Question 1:</strong></p> <p>Let the polynomial <span class="math-container">$f(x)=\displaystyle\sum_{i=0}^{3}a_{i}x^i$</span> have three real roots, where <span class="math-container">$a_{i}&gt;0,i=1,2,3$</span>.</p> <blockquote> <p>Show that <span class="math-container">$$g(x)=\sum_{i=0}^{3}a^m_{i}x^i$$</span> has only real roots, where <span class="math-container">$m\in \mathbb{R},m\ge 1$</span>.</p> </blockquote> <p><strong>My try:</strong></p> <p><strong>Case (1):</strong></p> <p>Suppose that <span class="math-container">$f(x)$</span> has a zero of multiplicity 3; then we may assume</p> <blockquote> <p><span class="math-container">$$f(x)=(x+p)^3=x^3+3x^2p+3xp^2+p^3$$</span> then <span class="math-container">$$g(x)=x^3+(3p)^mx^2+(3p^2)^mx+(p^3)^{m}=(x+p^m)[x^2+(3^mp^m-p^m)x+p^{2m}]$$</span></p> </blockquote> <p>then</p> <blockquote> <p><span class="math-container">$$h(x)=x^2+p^m(3^m-1)x+p^{2m}\Longrightarrow \Delta =(p^m(3^m-1))^2-4p^{2m}=p^{2m}\left((3^m-1)^2-4\right)\ge 0 \text{ for } m\ge 1,$$</span> so in this case <span class="math-container">$g(x)$</span> has only real roots.</p> </blockquote> <p><strong>For case (2):</strong></p> <p>let <span class="math-container">$f(x)=(x+p)^2(x+q)$</span>;</p> <p>I can't prove it.</p> <p><strong>And for case (3):</strong></p> <p><span class="math-container">$$f(x)=(x+p)(x+q)(x+r)$$</span> I can't prove this case either.</p> <p><strong>I hope someone can help solve this nice problem.</strong></p> <p>Thank you very much!</p>
Igor Rivin
109,865
<p>For the first question, this is really a question about the derivative. $f$ has three real zeros if and only if its derivative has two real zeros, and $f$ is positive at the smaller critical point, and negative at the bigger critical point [by the way, I am dividing through by $a_3$ to make the polynomial monic -- this does not change the question]. This can be checked for $g$ by tedious computation, which I leave to you...</p>
4,174,481
<p>I'm going through a proof in which at one point, the authors make the following statement :</p> <blockquote> <p>Let <span class="math-container">$(X,d)$</span> be a complete metric space and <span class="math-container">$f:X\to X$</span> and <span class="math-container">$f_m:X\to X$</span> Lipschitz with <span class="math-container">$lip(f)&lt;1$</span> and <span class="math-container">$lip(f_m)&lt;1\; \forall m\in\mathbb{N}$</span>. Let <span class="math-container">$x^*_m$</span> the fixed point of <span class="math-container">$f_m$</span> and <span class="math-container">$x^*$</span> the fixed point of <span class="math-container">$f$</span>. If for each <span class="math-container">$x\in X$</span> we have that<br /> <span class="math-container">$$\lim_{m\to+\infty} d(f_m(x),f(x)) = 0, $$</span> it follows that <span class="math-container">$$\lim_{m\to+\infty} d(x^*_m,x^*) = 0 .$$</span></p> </blockquote> <p>I tried to figure out how they came up with this by an argument along the lines of <span class="math-container">\begin{align*}d(x^*_m,x^*) &amp;= d(f_m(x^*_m),f(x^*)) \leq d(f_m(x^*_m),f(x_m^*))+ d(f(x_m^*),f(x^*))\\&amp;\leq d(f_m(x^*_m),f(x_m^*))+lip(f)d(x_m^*,x^*). 
\end{align*}</span></p> <p>Then <span class="math-container">$$0\leq (1-lip(f))\,d(x_m^*,x^*)\leq d(f_m(x^*_m),f(x_m^*)).$$</span> But now we can't apply the first limit because <span class="math-container">$x_m^*$</span> depends on <span class="math-container">$m$</span>.</p> <p>Another thing I tried is to use the fact that <span class="math-container">$$\lim_{n \to+\infty}d(f^{\circ n}(x),x^*)=0 \; \forall x\in X$$</span> where <span class="math-container">$f^{\circ n} = \underbrace{f\circ f\circ \dots \circ f}_{\text {n times}}.$</span></p> <p>Then for a fixed <span class="math-container">$x\in X$</span> <span class="math-container">\begin{equation*}d(x^*_m,x^*)\leq d(x^*_m,f^{\circ n}_m(x)) + d(f^{\circ n}_m(x),f^{\circ n}(x))+ d(f^{\circ n}(x),x^*),\; \forall n\in\mathbb{N} \end{equation*}</span></p> <p>Now let <span class="math-container">$\varepsilon &gt; 0$</span> and set <span class="math-container">$N$</span> big enough so that <span class="math-container">$d(f^{\circ N}(x),x^*)&lt;\varepsilon/3$</span> and <span class="math-container">$d(f_m^{\circ N}(x),x_m^*)&lt; \varepsilon/3$</span>.</p> <p>Then <span class="math-container">\begin{align*} d(x^*_m,x^*) &amp;\leq d(x^*_m,f^{\circ N}_m(x)) + d(f^{\circ N}_m(x),f^{\circ N}(x))+ d(f^{\circ N}(x),x^*)\\&amp;&lt;2\varepsilon/3 + d(f^{\circ N}_m(x),f^{\circ N}(x)). \end{align*}</span></p> <p>Now it can be shown that <span class="math-container">$\lim\limits_{m\to+\infty} d(f^{\circ N}_m(x),f^{\circ N}(x)) =0$</span>, so there is some <span class="math-container">$M$</span> such that <span class="math-container">$d(f^{\circ N}_m(x),f^{\circ N}(x)) &lt;\varepsilon/3\; \forall m&gt;M$</span>, from which we get <span class="math-container">$$d(x^*_m,x^*)&lt;\varepsilon.$$</span></p> <p>But I feel like something is not right in my second attempt.</p> <p>Is something missing from the statement?</p>
Paresseux Nguyen
758,600
<p>Your theorem is wrong. Next time, please check your statement carefully, to save the time of other MSE users who want to help you. You know, it is not always easy to tell whether a theorem is true or not.</p> <hr> <p><strong>Counterexample</strong>:<br /> Choose <span class="math-container">$X= l^1(\mathbb{N})$</span>, and for all <span class="math-container">$a \in l^1$</span> and <span class="math-container">$m \in \mathbb{N}_*$</span> set <span class="math-container">$f(a)=0$</span> and <span class="math-container">$$f_m(a)=(0,\dots,\underbrace{\frac{1}{m}+(1-\frac{1}{m})a_m}_{m-\text{th position}}, \dots )$$</span> where <span class="math-container">$a=(a_0,a_1,\dots)$</span>.<br /> Firstly, you can see that <span class="math-container">$X$</span> is complete with its canonical norm.<br /> Secondly, <span class="math-container">$f_m$</span> converges pointwise to <span class="math-container">$f$</span> (note that <span class="math-container">$X$</span> is <span class="math-container">$l^1$</span>). And thirdly, the fixed point of <span class="math-container">$f_m$</span> is <span class="math-container">$e_m$</span>, which does not converge to <span class="math-container">$0$</span>, the only fixed point of <span class="math-container">$f$</span>.</p>
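The counterexample can also be probed numerically on a finite truncation of l^1; the sketch below is my addition (not the answerer's) and checks both the pointwise convergence of f_m to f and that the fixed points e_m stay at l^1-distance 1 from 0.

```python
# Numerical check of the counterexample (finite truncation of l^1).
# f_m fixes only the m-th coordinate: (f_m(a))_m = 1/m + (1 - 1/m) a_m,
# all other coordinates are sent to 0; f sends everything to 0.

def f_m(a, m):
    out = [0.0] * len(a)
    out[m] = 1.0 / m + (1.0 - 1.0 / m) * a[m]
    return out

N = 1000
a = [2.0 ** (-n) for n in range(N)]   # a sample element of l^1

# pointwise convergence: ||f_m(a) - f(a)||_1 = |1/m + (1 - 1/m) a_m| -> 0
dists = [sum(abs(x) for x in f_m(a, m)) for m in (10, 100, 999)]
assert dists[0] > dists[1] > dists[2]

# e_m is the fixed point of f_m, yet ||e_m - 0||_1 = 1 for every m
for m in (10, 100, 999):
    e_m = [0.0] * N
    e_m[m] = 1.0
    assert all(abs(x - y) < 1e-12 for x, y in zip(f_m(e_m, m), e_m))
    assert sum(e_m) == 1.0
```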
3,789,429
<blockquote> <p><strong>Problem</strong>: Can a function <span class="math-container">$f$</span> be constructed where <span class="math-container">$$f\colon\mathbb Q_{+}^{*}\to \mathbb Q_{+}^{*},$$</span> the function being defined on the set of strictly positive rational numbers, such that <span class="math-container">$\forall(x,y)\in \mathbb Q_{+}^{*}\times\mathbb Q_{+}^{*},\ f(xf(y))=\frac{f(x)}{y}$</span>?</p> </blockquote> <p>This question is similar to an Olympiad problem that I was very interested in. I used several ideas to attack it, but did not arrive at a result. One of them uses the fundamental theorem of arithmetic, which gives a bijection between <span class="math-container">$\mathbb Q_{+}^{*}$</span> and <span class="math-container">$\mathbb Z^{\mathbb N}$</span>, where <span class="math-container">$$\left\{\mathbb Z^{\mathbb N} = \text{the set of finitely supported (eventually zero) sequences with values in }\mathbb Z\right\}.$$</span> This map is defined by <span class="math-container">$$\varphi\colon\mathbb Z^{\mathbb N}\to \mathbb Q_{+}^{*} ,\ (\alpha_n)_{n\in\mathbb N}\longmapsto \prod_{n\in\mathbb N} P_n^{\alpha_n},$$</span> where <span class="math-container">$$\mathbb P=\left\{P_k:k\in\mathbb N\right\}\text{ is the set of prime numbers.}$$</span> Now put <span class="math-container">$x=\prod_{n\in\mathbb N}P_n^{\alpha_n},\quad y=\prod_{n\in\mathbb N }P_n^{\beta_n},$</span> and <span class="math-container">$$f(\prod_{n\in\mathbb N}P_n^{\alpha_n})=\left(\prod_{n\in\mathbb N}P_{2n}^{\alpha_{2n+1}}\right)\left(\prod_{n\in\mathbb N}P_{2n+1}^{-\alpha_{2n}}\right).$$</span></p> <p>With this one finds <span class="math-container">$$f(xf(y))=\frac{\left(\prod_{n\in\mathbb N}P_{2n}^{\alpha_{2n+1}}\right)\left(\prod_{n\in\mathbb N}P_{2n+1}^{-\alpha_{2n}}\right)}{\left(\prod_{n\in\mathbb N}P_n^{\beta_{n}}\right)}=\frac{f(x)}{y}.$$</span> However, this did not help me finish the construction.</p> <p>I need an idea or suggestion to solve this problem, if possible; thank you for your help.</p> <blockquote> <p>Note: <span class="math-container">$(\alpha_n)_{n\in\mathbb N}$</span> is finitely supported <span class="math-container">$\leftrightarrow \exists n_0\in\mathbb N\ \forall n\geq n_0 :\ \alpha_{n}=0$</span></p> </blockquote>
Bachamohamed
804,312
<p>We put <span class="math-container">$$x=\prod_{n\in\mathbb N}P_{n}^{\alpha_{n}},\quad y=\prod_{n\in\mathbb N}P_n^{\beta_n}$$</span> where <span class="math-container">$(x,y)\in\mathbb Q_{+}^{*}\times\mathbb Q_{+}^{*}$</span> and <span class="math-container">$$f(\prod_{n\in\mathbb N}P_n^{\alpha_n})=\left(\prod_{n\in\mathbb N}P_{2n}^{\alpha_{2n+1}}\right)\left(\prod_{n\in\mathbb N}P_{2n+1}^{-\alpha_{2n}}\right).$$</span> We find: <span class="math-container">\begin{align*} xf(y)&amp;=\left(\prod_{n\in\mathbb N}P_{2n}^{\alpha_{2n}}\right)\left(\prod_{n\in\mathbb N}P_{2n+1}^{\alpha_{2n+1}}\right)\left(\prod_{n\in\mathbb N}P_{2n}^{\beta_{2n+1}}\right)\left(\prod_{n\in\mathbb N}P_{2n+1}^{-\beta_{2n}}\right)\\ &amp;=\left(\prod_{n\in\mathbb N}P_{2n}^{\alpha_{2n}+\beta_{2n+1}}\right)\left(\prod_{n\in\mathbb N}P_{2n+1}^{\alpha_{2n+1}-\beta_{2n}}\right)\\ \end{align*}</span> <span class="math-container">$\implies$</span> <span class="math-container">\begin{align*} f(xf(y))&amp;=\left(\prod_{n\in\mathbb N}P_{2n}^{\alpha_{2n+1}-\beta_{2n}}\right)\left(\prod_{n\in\mathbb N}P_{2n+1}^{-\alpha_{2n}-\beta_{2n+1}}\right)\\ &amp;=\left(\prod_{n\in\mathbb N}P_{2n}^{\alpha_{2n+1}}\right)\left(\prod_{n\in\mathbb N}P_{2n+1}^{-\alpha_{2n}}\right)\left(\prod_{n\in\mathbb N}P_{2n}^{-\beta_{2n}}\right)\left(\prod_{n\in\mathbb N}P_{2n+1}^{-\beta_{2n+1}}\right)\\ &amp;=\frac{\left(\prod_{n\in\mathbb N}P_{2n}^{\alpha_{2n+1}}\right)\left(\prod_{n\in\mathbb N}P_{2n+1}^{-\alpha_{2n}}\right)}{\left(\prod_{n\in\mathbb N}P_{n}^{\beta_{n}}\right)}\\ &amp;=\frac{f(x)}{y}.\\ \end{align*}</span> So this <span class="math-container">$f$</span> satisfies <span class="math-container">$f(xf(y))=\frac{f(x)}{y}$</span> for every <span class="math-container">$(x,y)\in\mathbb Q_{+}^{*}\times\mathbb Q_{+}^{*}$</span>. (If one instead required <span class="math-container">$f(xf(y))=\frac{f(f(x))}{y}$</span>, then combining <span class="math-container">$f(f(xf(y)))=f(\frac{f(x)}{y})$</span> with the identities above would force <span class="math-container">$f=id_{\mathbb Q_{+}^{*}}$</span>, and the identity map cannot satisfy that equation for all <span class="math-container">$(x,y)\in\mathbb Q_{+}^{*}\times\mathbb Q_{+}^{*}$</span>.)</p>
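The exponent-shuffling construction can also be machine-checked on small rationals. The sketch below is my addition (the helper names are mine): it encodes a positive rational by its prime-exponent sequence over P_0 = 2, P_1 = 3, P_2 = 5, ... and verifies f(xf(y)) = f(x)/y on a grid of samples.

```python
# Verify f(x f(y)) = f(x)/y for the exponent-shuffling map above.
from fractions import Fraction
from itertools import product

PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]

def to_exps(q):
    """Prime-index -> exponent dict for a positive Fraction q."""
    exps = {}
    for num, sign in ((q.numerator, 1), (q.denominator, -1)):
        for i, p in enumerate(PRIMES):
            while num % p == 0:
                num //= p
                exps[i] = exps.get(i, 0) + sign
        assert num == 1, "need more primes"
    return exps

def from_exps(exps):
    q = Fraction(1)
    for i, e in exps.items():
        q *= Fraction(PRIMES[i]) ** e
    return q

def f(q):
    # exponent at index 2n becomes alpha_{2n+1}; at index 2n+1 becomes -alpha_{2n}
    new = {}
    for i, e in to_exps(q).items():
        if i % 2 == 1:                 # i = 2n+1 -> contributes +e at 2n
            new[i - 1] = new.get(i - 1, 0) + e
        else:                          # i = 2n   -> contributes -e at 2n+1
            new[i + 1] = new.get(i + 1, 0) - e
    return from_exps(new)

samples = [Fraction(a, b) for a, b in product([1, 2, 3, 5, 12], repeat=2)]
for x, y in product(samples, repeat=2):
    assert f(x * f(y)) == f(x) / y
```

For example f(2) = 1/3 and f(3) = 2, and indeed f(2 · f(3)) = f(4) = 1/9 = f(2)/3. This only tests sample inputs; the general identity is the computation in the text above.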
274,885
<p>I've been doing some research on ranking algorithms, and I've read the research with interest. The aspects of the Schulze algorithm that appeal to me are that respondents do not have to rank all options, the ranking just has to be ordinal (rather than a strict total order), and ties can be resolved. However, upon implementation, I'm having trouble showing that ties do not occur when using the algorithm.</p> <p>I've put together a real-life example below in which the result looks to be a tie. I can't figure out what I'm doing wrong. Is the solution simply to randomly select a winner when there is a tie? Could someone please have a look and offer any advice?</p> <p>Thanks very much,</p> <p>David</p> <p>Who do you think are the best basketball players?</p> <p>_ Kevin Garnett</p> <p>_ Lebron James</p> <p>_ Josh Smith</p> <p>_ David Lee</p> <p>_ Tyson Chandler</p> <p>5 users (let’s just call them User A, User B, User C, User D and User E) answer the question in the following way:</p> <p>User A: 1. Kevin Garnett, 2. Tyson Chandler, 3. Josh Smith, 4. David Lee, 5. Lebron James</p> <p>User B: 1. Kevin Garnett, 2. David Lee, 3. Lebron James, 4. Tyson Chandler, 5. Josh Smith</p> <p>User C: 1. David Lee, 2. Josh Smith, 3. Kevin Garnett, 4. Lebron James, 5. Tyson Chandler</p> <p>User D: 1. Lebron James, 2. Josh Smith, 3. Tyson Chandler, 4. David Lee, 5. Kevin Garnett</p> <p>User E: 1. Tyson Chandler, 2. David Lee, 3. Kevin Garnett, 4. Lebron James, 5. Josh Smith</p> <p>If you put together the matrix of pairwise preferences (each entry is the number of voters preferring the row candidate over the column candidate), it would look like:</p> <table> <tr><th></th><th>Kevin Garnett</th><th>Tyson Chandler</th><th>David Lee</th><th>Lebron James</th><th>Josh Smith</th></tr> <tr><th>Kevin Garnett</th><td>-</td><td>3</td><td>2</td><td>4</td><td>3</td></tr> <tr><th>Tyson Chandler</th><td>2</td><td>-</td><td>3</td><td>2</td><td>3</td></tr> <tr><th>David Lee</th><td>3</td><td>2</td><td>-</td><td>4</td><td>3</td></tr> <tr><th>Lebron James</th><td>1</td><td>3</td><td>1</td><td>-</td><td>3</td></tr> <tr><th>Josh Smith</th><td>2</td><td>2</td><td>2</td><td>2</td><td>-</td></tr> </table> <p>To find the strongest paths, you can then trace routes through this grid. The strongest paths are (link strengths in parentheses; each path's strength is its weakest link):</p> <table> <tr><th></th><th>…to Kevin Garnett</th><th>…to Tyson Chandler</th><th>…to David Lee</th><th>…to Lebron James</th><th>…to Josh Smith</th></tr> <tr><th>From Kevin Garnett…</th><td>-</td><td>Tyson Chandler (3)</td><td>Tyson Chandler (3) – David Lee (3)</td><td>Lebron James (4)</td><td>Josh Smith (3)</td></tr> <tr><th>From Tyson Chandler…</th><td>David Lee (3) – Kevin Garnett (3)</td><td>-</td><td>David Lee (3)</td><td>David Lee (3) – Lebron James (4)</td><td>Josh Smith (3)</td></tr> <tr><th>From David Lee…</th><td>Kevin Garnett (3)</td><td>Kevin Garnett (3) – Tyson Chandler (3)</td><td>-</td><td>Lebron James (4)</td><td>Josh Smith (3)</td></tr> <tr><th>From Lebron James…</th><td>Tyson Chandler (4) – David Lee (3) – Kevin Garnett (3)</td><td>Tyson Chandler (4)</td><td>Tyson Chandler (4) – David Lee (3)</td><td>-</td><td>Josh Smith (3)</td></tr> <tr><th>From Josh Smith…</th><td>0</td><td>0</td><td>0</td><td>0</td><td>-</td></tr> </table> <p>So the new strongest-paths grid is:</p> <table> <tr><th></th><th>Kevin Garnett</th><th>Tyson Chandler</th><th>David Lee</th><th>Lebron James</th><th>Josh Smith</th></tr> <tr><th>Kevin Garnett</th><td>-</td><td>3</td><td>3</td><td>4</td><td>3</td></tr> <tr><th>Tyson Chandler</th><td>3</td><td>-</td><td>3</td><td>3</td><td>3</td></tr> <tr><th>David Lee</th><td>3</td><td>3</td><td>-</td><td>4</td><td>3</td></tr> <tr><th>Lebron James</th><td>3</td><td>4</td><td>3</td><td>-</td><td>3</td></tr> <tr><th>Josh Smith</th><td>0</td><td>0</td><td>0</td><td>0</td><td>-</td></tr> </table> <p>Kevin Garnett and David Lee would tie in this case.</p>
joriki
6,622
<p>There's no reason to believe that the method can't result in a tie; in fact it's easy to think of much simpler examples where it does, e.g. when two people vote on two items and have opposite preferences. The Wikipedia article also has <a href="http://en.wikipedia.org/wiki/Schulze_method#Ties_and_alternative_implementations" rel="nofollow">a section on ties</a>, which explicitly states that they occur.</p>
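To make the tie concrete, here is a short script of my own (not from the answer) that runs the Schulze method on the five ballots from the question: pairwise tallies, then widest paths over the pairwise defeats via Floyd–Warshall, then the set of potential winners. It confirms that Kevin Garnett and David Lee are tied potential winners (for what it's worth, with the pairwise matrix as tallied here, Tyson Chandler ties with them as well; either way, the answer's point stands that ties genuinely occur).

```python
# Schulze method on the question's ballots: pairwise preferences d[i][j],
# strongest-path strengths p[i][j] (widest-path Floyd-Warshall over defeats),
# and the resulting set of potential winners.
CANDS = ["Kevin Garnett", "Lebron James", "Josh Smith", "David Lee", "Tyson Chandler"]
BALLOTS = [  # each ballot lists candidates from most to least preferred
    ["Kevin Garnett", "Tyson Chandler", "Josh Smith", "David Lee", "Lebron James"],
    ["Kevin Garnett", "David Lee", "Lebron James", "Tyson Chandler", "Josh Smith"],
    ["David Lee", "Josh Smith", "Kevin Garnett", "Lebron James", "Tyson Chandler"],
    ["Lebron James", "Josh Smith", "Tyson Chandler", "David Lee", "Kevin Garnett"],
    ["Tyson Chandler", "David Lee", "Kevin Garnett", "Lebron James", "Josh Smith"],
]

n = len(CANDS)
d = [[0] * n for _ in range(n)]          # d[i][j] = voters preferring i over j
for ballot in BALLOTS:
    rank = {c: r for r, c in enumerate(ballot)}
    for i in range(n):
        for j in range(n):
            if i != j and rank[CANDS[i]] < rank[CANDS[j]]:
                d[i][j] += 1

# only pairwise defeats (d[i][j] > d[j][i]) count as links
p = [[d[i][j] if d[i][j] > d[j][i] else 0 for j in range(n)] for i in range(n)]
for k in range(n):                        # widest-path Floyd-Warshall
    for i in range(n):
        for j in range(n):
            if i != j != k != i:
                p[i][j] = max(p[i][j], min(p[i][k], p[k][j]))

winners = {CANDS[i] for i in range(n)
           if all(p[i][j] >= p[j][i] for j in range(n) if j != i)}
print(sorted(winners))

assert {"Kevin Garnett", "David Lee"} <= winners
assert len(winners) >= 2                 # the Schulze result here is a tie
```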
2,491,560
<p>Given a sequence $(s_n)$ in $\mathbb R$ such that $$\lim \limits_{n \to \infty}( s_{n+1}-s_n)=0,$$ I am asked to prove $(s_n)$ converges. </p> <p>I know all Cauchy sequences converge in $\mathbb R^k$. So I want to prove that $(s_n)$ is Cauchy. </p> <p>I am stuck as to how to show the given sequence is a Cauchy. Thank you.</p>
marty cohen
13,079
<p>For any $0 &lt; c &lt; 1$, $s_n =n^c$ satisfies $s_{n+1}-s_n \to 0$ but, obviously, $s_n \to \infty$.</p> <p>An easy proof is via the mean value theorem. Since $(x^c)' =cx^{c-1} $, then, for some $n \le x \le n+1$, $s_{n+1}-s_n =cx^{c-1} \to 0 $.</p>
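The gap in the claimed theorem can also be seen numerically; a quick sketch of my own with $c = 1/2$, i.e. $s_n = \sqrt n$:

```python
# s_n = sqrt(n): consecutive differences tend to 0, yet s_n -> infinity.
import math

def s(n):
    return math.sqrt(n)

# differences s(n+1) - s(n) = 1/(sqrt(n+1) + sqrt(n)) shrink to 0 ...
assert s(10**6 + 1) - s(10**6) < 1e-3
assert s(10**12 + 1) - s(10**12) < 1e-6

# ... yet s(n) itself exceeds any bound
assert s(10**12) == 10**6
```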
1,061,325
<p>I need to show that for $x&gt;0$, $1+\frac{x}{2} \ge \sqrt{1+x} \ge 1+\frac{x}{2} - \frac{x^2}{8} $.</p> <p>I used the geometric/arithmetic mean inequality to show that $1+\frac{x}{2} \ge \sqrt{1+x}$ is indeed true. My issue lies in the second part of this.</p> <p>In trying to show that $\sqrt{1+x} \ge 1+\frac{x}{2} - \frac{x^2}{8} $, I determined that this is a partial Taylor series expansion and I attempted to play around with both inequalities to show that this is true but to no avail.</p> <p>I would appreciate any help in solving this. Ideally I would like to do this analytically but any solutions using more complicated theorems are welcome.</p>
1k5
163,680
<p>Angle is a property not of one line, but of two intersecting lines. In your question the second line is implicit but different in both cases (going north for navigation, going to the right for trigonometry.) This is by convention and has no real meaning except it makes it easier for you to talk about angles since you need to mention only one line...</p> <p>To make things slightly more complicated, the orientation is also different. Angles in trigonometry are positive in counter-clockwise direction, while in navigation positive angles go from the north in clockwise direction.</p>
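The two conventions are related by a simple conversion; the sketch below is my addition (the formula is the standard one implied by the two conventions described above): a compass bearing b, measured clockwise from north, corresponds to the trigonometric angle (90 − b) mod 360, measured counter-clockwise from east.

```python
# Convert a navigation bearing (degrees clockwise from north) to a
# trigonometric angle (degrees counter-clockwise from the positive x-axis).
def bearing_to_math(bearing_deg):
    return (90.0 - bearing_deg) % 360.0

assert bearing_to_math(0) == 90.0     # north
assert bearing_to_math(90) == 0.0     # east
assert bearing_to_math(180) == 270.0  # south
assert bearing_to_math(270) == 180.0  # west
```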
2,011,378
<p>My girlfriend suggested I use Lagrange's theorem, but that only deals with finite groups.</p> <p>If G were finite, and H has order 3 and K has order 5, then clearly, since the orders of H and K must divide the order of G, there must also be an element of order 15.</p> <p>But what about if G is infinite?</p> <p>Edit: if G is infinite, then elements can't have finite order (except for the identity). Nvm lol</p> <p>Edit 2: I thought about it for a couple more minutes, and if G can only be finite, then G has order at least 3, or 5, or any multiple of 3 and 5... the lcm of 3 and 5 is 15, and so there must be an element of order 15.</p>
D Wiggles
103,836
<p>If $G$ has subgroups $H_1=\langle h_1\rangle$ and $H_2=\langle h_2\rangle$ of orders $3$ and $5$ respectively, what can you say about the group $\langle h_1,h_2\rangle$?</p> <p>Also, note that your edit is not true. There are infinite, finitely generated groups in which every element has finite order. The Grigorchuk group is an example.</p>
416,497
<p>I was recently trying to find a numerical solution to a thermodynamics problem and the expression <span class="math-container">$x\ln x$</span> appeared in one of the computations. I did not have to find its value very near <span class="math-container">$0$</span>, so the computer managed fine, but it got me thinking - can one make a stable numerical algorithm to compute <span class="math-container">$x\ln x$</span> for values near 0?</p> <p>It is easy to prove that <span class="math-container">$\lim\limits_{x\to 0^+} x\ln x=0$</span>. However, simply multiplying <span class="math-container">$x$</span> by <span class="math-container">$\ln x$</span> is not a good solution for small <span class="math-container">$x$</span>. The problem seems to be that we are multiplying a small number (<span class="math-container">$x$</span>) by a large number (<span class="math-container">$\ln x$</span>).</p> <p>My first thought would be to approximate it somehow. But I quickly saw that Taylor series wouldn't work, as the derivative is <span class="math-container">$\ln x + 1$</span>, which blows up (or rather down :-)) to <span class="math-container">$-\infty$</span>. Some kind of iterative method like Newton's method does not seem to be the solution either, because the operations needed seem to be even more messy than what we are trying to compute.</p> <p>So my question is - is there some numerically stable method to compute <span class="math-container">$x\ln x$</span> for small values of <span class="math-container">$x$</span>? And preferably one that is more general, so that it could be used on functions like <span class="math-container">$x^n \ln x$</span> - but these at least have a finite first derivative at <span class="math-container">$0$</span> for <span class="math-container">$n&gt;1$</span>.</p>
Carlo Beenakker
11,260
<p>Since <span class="math-container">$\ln x = - \ln(1/x)$</span>, to evaluate the logarithm near zero is equivalent to evaluating it for large argument. You can then use the result <span class="math-container">$$\ln y=\frac{\pi}{2a}\left(1+{\cal O}(y^{-2})\right),$$</span> with <span class="math-container">$a$</span> the <a href="https://en.wikipedia.org/wiki/Arithmetic%E2%80%93geometric_mean" rel="nofollow noreferrer">arithmetic–geometric mean</a><span class="math-container">$^\ast$</span> of <span class="math-container">$a_0=1$</span> and <span class="math-container">$b_0=4/y$</span>, see page 11 of <A HREF="https://arxiv.org/abs/1004.3412" rel="nofollow noreferrer">Multiple-precision zero-finding methods and the complexity of elementary function evaluation.</A></p> <sub> <span class="math-container">$^\ast$</span> Starting from any two positive numbers <span class="math-container">$a_0$</span> and <span class="math-container">$b_0$</span>, the arithmetic-geometric-mean iterate is <span class="math-container">$a_{i+1} = (a_i + b_i)/2$</span>, <span class="math-container">$b_{i+1} = \sqrt{a_ib_i}$</span>. For <span class="math-container">$a_0\gg b_0$</span> this converges rapidly to <span class="math-container">$a=\lim_{i\rightarrow\infty} a_i$</span>. </sub>
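The recipe is easy to try out; here is a minimal sketch of my own, using the quoted formula with a plain fixed-iteration AGM loop:

```python
# ln(y) ~ pi / (2 * AGM(1, 4/y)) for large y; for x near 0 use ln(x) = -ln(1/x).
import math

def agm(a, b, iterations=60):
    # arithmetic-geometric mean; convergence is eventually quadratic,
    # so a fixed iteration count is plenty at double precision
    for _ in range(iterations):
        a, b = (a + b) / 2.0, math.sqrt(a * b)
    return a

def log_via_agm(y):          # valid for large y; relative error is O(1/y^2)
    return math.pi / (2.0 * agm(1.0, 4.0 / y))

assert abs(log_via_agm(1e6) - math.log(1e6)) < 1e-8

# x * ln(x) for x near 0, computed as -x * ln(1/x)
x = 1e-6
assert abs(-x * log_via_agm(1.0 / x) - x * math.log(x)) < 1e-12
```

Note that multiplying the small factor x back in at the end is harmless here, since x * ln(x) has no cancellation between the two factors.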
41,858
<p>The Halting Problem is (theoretically) decidable for those algorithms whose termination can be proved in first-order logic (FOL), because the provable statements of FOL are recursively enumerable. Is that correct?</p> <p>I'm talking about programs in Turing-complete languages, and I use the "real-worldliness" property not in the sense of finiteness but in the sense of "designed for some real purpose" and "not specially cooked up to require HOL for a termination proof" (which could potentially require unprovable statements for a termination proof).</p> <p>An argument that FOL is adequate for working with ordinary programs is the fact that there are plenty of systems designed for industrial application that prove program properties using only FOL (TLA+, for example).</p> <p>To be precise, let the set of real-world programs be all programs that humans write for their utility, over the entire course of human history to the end of time.</p> <p>Perhaps I should be even more precise: can there exist a property that restricts a Turing-complete language in such a way that termination of any program in a language with that property can be proved in FOL, but no programmer will be disturbed by that limitation?</p> <p>And an interesting satellite question: is an absolutely minimal restriction of Turing-complete languages possible such that the Halting Problem becomes decidable for the restricted languages?</p>
Raphael
3,330
<p>FO logic (to be precise, FO[&lt;]) can express no more than star-free languages (McNaughton, Papert, 1971), a proper subset of regular languages. Therefore, I think any hope that FOL is sufficient to express all desired properties programs might have is misguided.</p> <p>You have to be precise about what the Halting Problem really is and what its undecidability really implies: There can be no algorithm which <em>uniformly</em> decides for <em>any</em> input and <em>any</em> program whether it halts or not.</p> <p>That does not mean that you cannot decide termination for some program, or for (large) sets of programs. In fact, people do that, as exhibited by a <a href="http://m.cacm.acm.org/magazines/2011/5/107680-proving-program-termination/fulltext" rel="nofollow">recent article</a> in <em>Communications of the ACM</em>.</p>
173,360
<p>From my lecture notes: "The notation $\mathbb T$ will be used for the additive circle and $S^1$ for the multiplicative circle."</p> <p>What I understand: As a topological group, $S^1$ has the subspace topology of $\mathbb R^2$ and multiplication is defined as $(e^{ia}, e^{ib}) \mapsto e^{i(a + b)}$. </p> <p>My guess is that $\mathbb T$ as an additive group should then be something like $(a,b) \mapsto (a + b) \mod 1$. The problem with that is that the space would look like $[0,1)$ but that's not compact. </p> <p>But I'm confused: "mod 1" seems to be the same as $\mathbb R / \mathbb Z$ which is $S^1$. But I can't add complex numbers on the unit circle and stay on the unit circle. </p> <p>So: What's $\mathbb T$? How are elements in it added?</p>
Dylan Moreland
3,701
<p>The map $\mathbb R \to S^1$, $t \mapsto e^{2\pi it}$ descends to a map $\mathbb T \to S^1$ which is an algebraic and topological isomorphism. [Remember that $\mathbb T$ is an algebraic and topological quotient. What is the inverse of this map?] So you can indeed pass freely between the two, and in particular $\mathbb T$ is compact. If you want to see this without reference to $S^1$, note that the restriction of the quotient map $\mathbb R \to \mathbb R/\mathbb Z$ to the compact subspace $[0, 1]$ is still surjective.</p>
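The isomorphism is easy to see numerically as well; a small sketch of my own, checking that $t \mapsto e^{2\pi it}$ turns addition mod 1 into multiplication on the unit circle:

```python
# phi(t) = exp(2*pi*i*t) sends addition in T = R/Z to multiplication in S^1.
import cmath
import math

def phi(t):
    return cmath.exp(2j * math.pi * t)

for a, b in [(0.3, 0.9), (0.75, 0.75), (0.5, 0.6)]:
    lhs = phi((a + b) % 1.0)             # add in the additive circle T
    rhs = phi(a) * phi(b)                # multiply in the multiplicative circle S^1
    assert abs(lhs - rhs) < 1e-12
    assert abs(abs(rhs) - 1.0) < 1e-12   # the product stays on the unit circle
```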
1,918,781
<p>Suppose that $\phi:\mathbb R^3\to\mathbb R$ is a harmonic function. I am asked to show that for any sphere centered at the origin, the average value of $\phi$ over the sphere is equal to $\phi(0)$. I am also directed to use Green's second identity: for any smooth functions $f,g:\mathbb R^3\to \mathbb R$, and any sphere $S$ enclosing a volume $V$, $$\int_S (f\nabla g-g\nabla f)\cdot dS=\int_V (f\nabla^2g-g\nabla^2 f)dV.$$</p> <p>Here is what I have tried: let $f=\phi$ and $g(\mathbf r)=|\mathbf r|$ (distance from the origin). Then $\nabla g=\mathbf{\hat r}$, $\nabla^2 g=\frac2r$, and $\nabla^2 f=0$. Note also that $\int_Sg\nabla f\cdot dS=r\int_S\nabla f\cdot dS=0.$ Then $$\frac{1}{4\pi r^2}\int_S f(x)\,dA=\frac{1}{4\pi r^2}\int_S f(x)\nabla g(x)\cdot dS=\frac{1}{4\pi r^2}\int_S (f\nabla g-g\nabla f)\cdot dS.$$ Using Green's identity, this is equal to $$\frac{1}{4\pi r^2}\int_V (f\nabla^2g-g\nabla^2 f)\,dV=\frac{1}{4\pi r^2}\int_V\frac{2f}{r}\,dV.$$ This reminds me of the Cauchy integral formula. Is there indeed some sort of identity that I can use to show that the last integral is equal to $f(0)$? Or is there another way to solve this problem?</p>
velut luna
139,981
<p>There is a simple proof: Because the potential satisfies Laplace's equation in the region, it must be due to charges outside the sphere. Consider a point charge $q$ at $\vec{r}$, where $r&gt;R$. The average potential is $$\langle \phi\rangle=\frac{1}{4\pi R^2}\int\frac{q}{4\pi\epsilon_0}\frac{1}{|\vec{r'}-\vec{r}|}da'$$ Write it as $$\int\frac{q/4\pi R^2}{4\pi\epsilon_0}\frac{1}{|\vec{r}-\vec{r'}|}da'$$ One can readily observe that this is just the potential at $\vec{r}$ due to an amount of charge $q$ uniformly distributed on the spherical surface, which can be obtained easily by Gauss's law and the shell theorem to be $$\frac{1}{4\pi\epsilon_0}\frac{q}{|\vec{r}|}$$ But this is just the potential due to $q$ at $\vec{r}$, evaluated at the center of the sphere. So the property holds for a single charge outside the sphere. By superposition, it holds for an arbitrary charge distribution outside the sphere.</p>
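The key averaging fact (the mean of the point-charge potential over the sphere equals its value at the center) can be spot-checked by Monte Carlo; the sketch below is my addition, with R = 1, the charge at distance 2, and the physical constants dropped.

```python
# Average of 1/|r' - r| over the unit sphere equals 1/|r| when |r| > 1,
# i.e. the sphere average of the point-charge potential is its central value.
import math
import random

random.seed(0)

def random_unit_vector():
    # normalize a Gaussian sample to get a uniform point on the unit sphere
    while True:
        v = [random.gauss(0, 1) for _ in range(3)]
        n = math.sqrt(sum(x * x for x in v))
        if n > 1e-12:
            return [x / n for x in v]

r = (0.0, 0.0, 2.0)                 # charge location, |r| = 2 > R = 1
N = 200_000
total = 0.0
for _ in range(N):
    p = random_unit_vector()
    total += 1.0 / math.dist(p, r)

assert abs(total / N - 1.0 / 2.0) < 5e-3   # should match 1/|r| = 0.5
```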
398,535
<p>I understand that "recursive" sets are those that can be completely decided by an algorithm, while "recursively enumerable" sets can be listed by an algorithm (but not necessarily decided). I am curious why the word "recursive" appears in their name. What does the concept of decidability/recognizability have to do with functions that call themselves?</p>
Thomas Klimpel
12,490
<blockquote> <p>What does the concept of decidability/recognizability have to do with functions that call themselves?</p> </blockquote> <p>The primitive recursive functions are closely related to the functions that call themselves. For the general or $\mu$-recursive functions, an additional minimization operation is allowed. This minimization operation doesn't always define a function, hence it leads to partial functions.</p> <p>However, I have to admit that the Ackermann function is not primitive recursive (it is a $\mu$-recursive total function), yet its definition looks like a function that is calling itself. So the relation between $\mu$-recursive (total) functions and functions that call themselves is deeper than a simple misnomer.</p>
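The Ackermann function's self-calling definition can be written down directly; a small sketch of mine:

```python
# The Ackermann function: defined by a function that calls itself, yet not
# primitive recursive (it eventually out-grows every primitive recursive function).
import sys
from functools import lru_cache

sys.setrecursionlimit(100_000)

@lru_cache(maxsize=None)
def ackermann(m, n):
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

assert ackermann(1, 2) == 4    # A(1, n) = n + 2
assert ackermann(2, 3) == 9    # A(2, n) = 2n + 3
assert ackermann(3, 3) == 61   # A(3, n) = 2^(n+3) - 3
```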
121,453
<p>I wanted to ask a summary of known results and references about the homotopy type of the mapping space $\mathrm{Map}(BG,BK)$ (and specially the connected components) between the classifying spaces when G and K are general topological groups. Thank you. </p>
Andreas Thom
8,176
<p>As Neil Strickland points out, this is too difficult to have a good answer in general. In this post, I am only considering discrete groups. A very special case is the following</p> <blockquote> <blockquote> <p><strong>Definition</strong> Let $f : BG \to BK$ be a continuous map. We say that $f$ is a superposition if for any ${\mathbb Q} K$-module $L$, the induced map on equivariant homology $$H^G_\ast(G,f^\ast(L)) \to H^K_\ast(K,L)$$ is surjective.</p> </blockquote> </blockquote> <p>Examples of superpositions include retractions, maps between oriented manifolds of non-zero degree, and maps whose homotopy fibre is a finite complex with non-zero Euler characteristic. In the application below, one needs a much weaker condition which might be checked by hand for particular maps. The following result can be proved:</p> <blockquote> <blockquote> <p><strong>Theorem</strong> Let $BK$ be a finite complex. Let $f \colon BG \to BK$ be a continuous superposition. If $\chi(BK) \neq 0$, then the mapping space $map(BG,BK,f)$ (that is, the connected component of $f$) is contractible.</p> </blockquote> </blockquote> <p>This is (a special case of) a consequence of results in [Daniel H. Gottlieb. Covering transformations and universal fibrations. Illinois J. Math., 13:432–437, 1969.; Daniel H. Gottlieb. Self coincidence numbers and the fundamental group. J. Fixed Point Theory Appl. 2 (2007), no. 1, 73–83.] and was proved in [Thomas Schick and Andreas Thom. On a conjecture of Gottlieb. Algebr. Geom. Topol. 7 (2007), 779–784.]</p>
18,174
<p>I am an undergraduate secondary math education major. In <span class="math-container">$2$</span> weeks I have to give a <a href="https://www.mathmammoth.com/lessons/number_talks.php" rel="nofollow noreferrer">Number Talk</a> in my math ed class on the problem "<span class="math-container">$3.9$</span> times <span class="math-container">$7.5$</span>". I need to come up with as many different solution methods as possible. </p> <p>Here is what I have come up with so far:</p> <ol> <li><p>The most common way: multiply the two numbers "vertically", ignoring the decimal, to get <span class="math-container">$2925$</span>: <span class="math-container">\begin{array} {}\hfill {}^6{}^439\\ \hfill \times\ 75 \\\hline \hfill {}^1 195 \\ \hfill +\ 273\phantom{0} \\\hline \hfill 2925 \end{array}</span> Since there are two numbers that are to the right of the decimal, place the decimal after the <span class="math-container">$9$</span> to get the answer <span class="math-container">$29.25$</span>.</p></li> <li><p>Write both numbers as improper fractions: <span class="math-container">$$3.9= \dfrac{39}{10}$$</span> and <span class="math-container">$$7.5=\dfrac{75}{10}$$</span>Then multiply <span class="math-container">$$\dfrac{39}{10}\cdot\dfrac{75}{10}$$</span> to get <span class="math-container">$\dfrac{2925}{100}$</span> which simplifies to 29.25.</p></li> <li><p>Use lattice multiplication. 
This is a very uncommon method that I doubt the students will use, and I need to review it myself before I consider it.</p></li> <li><p>Since <span class="math-container">$3.9$</span> is very close to <span class="math-container">$4$</span>, we could instead do <span class="math-container">$4\cdot7.5=30$</span> and then subtract <span class="math-container">$0.1\cdot7.5=0.75$</span> to get <span class="math-container">$30 - 0.75=29.25$</span></p></li> <li><p>Similarly, since <span class="math-container">$7.5$</span> rounds up to <span class="math-container">$8$</span>, we can do <span class="math-container">$3.9\cdot 8=31.2$</span> and then subtract <span class="math-container">$0.5\cdot 3.9=1.95$</span> to get <span class="math-container">$31.2-1.95=29.25$</span> </p></li> </ol> <p>Are there any other possible methods the students might use? (<strong>Note:</strong> they are junior college math ed students.) Thanks!</p>
Mitchell Spector
13,838
<p>There's a fun method which I've seen referred to as Russian peasant multiplication or ancient Egyptian multiplication. (I don't know if these names have a historical basis.)</p> <p>If you think about how the algorithm works behind the scenes, it's essentially multiplying the two numbers in base 2, so it could have more than one pedagogical use.</p> <p>This technique is for integers; you'll still need to handle the placement of the decimal point separately, as usual. (You could actually keep the decimal points in column 2, but you'd still have to adjust the placement of the decimal point in the product, so it's probably easier just to work with integers throughout.)</p> <p>Here's the method:</p> <p>(a) Put each number at the top of a column.</p> <p>(b) In column 1, divide by <span class="math-container">$2$</span> repeatedly, ignoring any remainders, until you reach <span class="math-container">$1.$</span></p> <p>(c) In column 2, multiply by <span class="math-container">$2$</span> repeatedly. Stop when column 2 is the same height as column 1.</p> <p>(d) Cross off every row where the number in column 1 is even.</p> <p>(e) Add the numbers in column 2 that you haven't crossed off. That's the product.</p> <hr> <p>Here's your example of <span class="math-container">$39\times75,$</span> worked out using this method:</p> <pre> 39 75 19 150 9 300 <s>4 600</s> <s>2 1200</s> 1 2400 ---- 2925 </pre>
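In code, the same halving/doubling table looks like this (a sketch of my own, mirroring the worked example):

```python
# Russian peasant multiplication: halve one column, double the other,
# and add the doubled entries that stand next to an odd number.
def peasant_multiply(a, b):
    total = 0
    while a >= 1:
        if a % 2 == 1:        # keep rows where the halving column is odd
            total += b
        a //= 2               # halve, discarding the remainder
        b *= 2                # double
    return total

assert peasant_multiply(39, 75) == 2925   # then place the decimal: 29.25
```

The odd/even test is exactly reading off the bits of the first factor, which is why this amounts to base-2 multiplication.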
1,458,464
<p>Let</p> <ul> <li>$(\Omega,\mathcal A)$ and $(E,\mathcal E)$ be measurable spaces</li> <li>$I\subseteq[0,\infty)$ be at most countable and closed under addition with $0\in I$</li> <li>$X=(X_t)_{t\in I}$ be a stochastic process on $(\Omega,\mathcal A)$ with values in $(E,\mathcal E)$</li> <li>$\mathbb F=(\mathcal F_t)_{t\in I}$ be the filtration generated by $X$</li> <li>$\tau$ be an $\mathbb F$-stopping time</li> <li>$f:E^I\to\mathbb R$ be bounded and $\mathcal E^{\otimes I}$-measurable</li> </ul> <p>Clearly, $$Y_s:=1_{\left\{\tau=s\right\}}\operatorname E\left[f\circ\left(X_{s+t}\right)_{t\in I}\mid\mathcal F_\tau\right]$$ is $\mathcal F_s$-measurable. Thus,</p> <p>\begin{equation} \begin{split} \operatorname E\left[f\circ\left(X_{\tau+t}\right)_{t\in I}\mid\mathcal F_\tau\right]&amp;=\sum_{s\in I}Y_s\\&amp;=\sum_{s\in I}\operatorname E\left[Y_s\mid\mathcal F_s\right]\\&amp;\color{red}{=}\color{red}{\sum_{s\in I}\operatorname E\left[1_{\left\{\tau=s\right\}}\operatorname E\left[f\circ\left(X_{\tau+t}\right)_{t\in I}\mid\mathcal F_s\right]\mid\mathcal F_\tau\right]}\;, \end{split} \end{equation}</p> <p>but I don't understand why the $\color{red}{\text{red}}$ part is true. It looks like the <em>tower property</em>, but we shouldn't be able to use it unless $\mathcal F_\tau\subseteq\mathcal F_s$, which is obviously wrong. So, how do we need to argue?</p>
zhoraster
262,269
<p>I wouldn't regard John Dawkin's answer as a valid argument, but he is right. </p> <p>Denote $\eta = f\circ\left(X_{\tau+t}\right)_{t\in I}$ and take $A\in \mathcal F_\tau$. Then $A\cap\{\tau = s\}\in \mathcal F_s$, so$$E[\mathbf{1}_A \mathbf{1}_{\{\tau=s\}} E[\eta\mid \mathcal F_s]] = E[\mathbf{1}_A\mathbf{1}_{\{\tau=s\}}\eta].$$ It follows that $$ E[ \mathbf{1}_{\{\tau=s\}} E[\eta\mid \mathcal F_s]\mid\mathcal F_\tau] = E[\mathbf{1}_{\{\tau=s\}} \eta\mid \mathcal F_\tau] = Y_s. $$</p>
2,879,915
<p>I am attempting to prove $\lim_{k\to\infty}\dfrac{k^2}{k^2+2k+2}=1$. However, I am getting tripped up on the algebra. I believe I want to show that there exists some $N\in\mathbb{N}$ such that when $k\geq N$, $\left|\dfrac{k^2}{k^2+2k+2}-1\right|&lt;\epsilon$ for arbitrary $\epsilon&gt;0$. I set out to find a choice for $N$ by turning $1$ into $\dfrac{k^2+2k+2}{k^2+2k+2}$, but I am not sure where to go from $\left|\dfrac{-2k-2}{k^2+2k+2}\right|&lt;\epsilon$. How can I rearrange this inequality to be in terms of $k$? Thank you!</p>
Vasili
469,083
<p>Let's start with $\left|\dfrac{-2k-2}{k^2+2k+2}\right|=\left|\dfrac{2(k+1)}{k^2+2k+2}\right|&lt;\left|\dfrac{2(k+1)}{(k+1)^2}\right|=\dfrac{2}{k+1}&lt;\epsilon$. This inequality will hold if $k+1&gt;\dfrac{2}{\epsilon}$ or $k&gt;\dfrac{2-\epsilon}{\epsilon}$</p>
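As a numerical sanity check of this bound (the helper names below are mine, not part of the proof): from $\frac{2}{k+1}<\epsilon$, any integer $N$ with $N>\frac{2-\epsilon}{\epsilon}$ should work.

```python
import math

def N_for(eps):
    # Smallest integer N with N > (2 - eps)/eps, i.e. 2/(N+1) <= eps.
    return math.floor((2 - eps) / eps) + 1

def within_eps(k, eps):
    # |k^2/(k^2+2k+2) - 1| < eps ?
    return abs(k**2 / (k**2 + 2*k + 2) - 1) < eps

eps = 0.001
N = N_for(eps)
# every k >= N lands within eps of the limit 1
assert all(within_eps(k, eps) for k in range(N, N + 2000))
```

The strict inequality in the chain (the denominator $k^2+2k+2$ is strictly larger than $(k+1)^2$) gives a little slack, so this $N$ is comfortably sufficient.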
3,084,998
<p>Define the linear transformation <span class="math-container">$L$</span> as </p> <p><span class="math-container">$L: \mathbb{R}^3 \rightarrow \mathbb{R}^3: x \rightarrow x - \langle x, a \rangle b$</span></p> <p>Let <span class="math-container">$a, b \in \mathbb{R}^3$</span> such that <span class="math-container">$\langle a, b \rangle = 2$</span> and let <span class="math-container">$\langle .,. \rangle $</span> be the standard inner product of <span class="math-container">$\mathbb{R}^3$</span>. </p> <p>How would I find the eigenvalues of the linear transformation without using the matrix of the linear transformation?</p>
Locally unskillful
494,915
<p>It should be <span class="math-container">$s(t) = c_1 e^{(-1+i)t} + c_2 e^{(-1-i)t}$</span>.</p> <p>Plug in <span class="math-container">$t=0$</span> and <span class="math-container">$s=1$</span> to find <span class="math-container">$1=c_1+c_2$</span>.</p> <p>Now differentiate, and plug in <span class="math-container">$t=0$</span>, <span class="math-container">$s'=1$</span>. Then solve simultaneous equations.</p>
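A numerical cross-check of this recipe (assuming, as the answer does, the initial conditions $s(0)=1$ and $s'(0)=1$):

```python
import cmath

# Solve c1 + c2 = 1 and r1*c1 + r2*c2 = 1 with r1 = -1+i, r2 = -1-i.
r1, r2 = -1 + 1j, -1 - 1j
c1 = (1 - r2) / (r1 - r2)   # from eliminating c2 = 1 - c1
c2 = 1 - c1                  # c1 = (1-2i)/2, c2 = (1+2i)/2

def s(t):
    return c1 * cmath.exp(r1 * t) + c2 * cmath.exp(r2 * t)

# s(0) = 1 and s'(0) = r1*c1 + r2*c2 = 1; the imaginary parts cancel,
# leaving the real solution e^{-t} (cos t + 2 sin t).
```

Since the two roots are complex conjugates and the data are real, the constants come out conjugate to each other and $s(t)$ is real for real $t$.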
3,381,543
<p>Let <span class="math-container">$X \sim \text{Poisson}(\lambda)$</span>. If <span class="math-container">$P(X = 1) = 0.1$</span> and <span class="math-container">$P(X = 2) = 0.2$</span>, what is the value of <span class="math-container">$P(X = 3)?$</span></p> <p>I've tried to expand the given equalities, finding <span class="math-container">$\lambda = 4$</span>, which is clearly wrong if you try to evaluate <span class="math-container">$P(X = 1)$</span> or <span class="math-container">$P(X = 2)$</span>. I'm stuck at this point; I don't know how to know the value of <span class="math-container">$\lambda$</span> given that</p> <p><span class="math-container">\begin{cases}e^{-\lambda}\cdot\lambda &amp;= 0.1\\ e^{-\lambda}\cdot\lambda^2 &amp;= 0.4 \end{cases}</span></p>
Ongky Denny Wijaya
556,947
<p>If <span class="math-container">$P(X=1)=0.1$</span>, then <span class="math-container">$\dfrac{e^{-\lambda}\lambda^1}{1}=0.1$</span>. We have the nonlinear equation <span class="math-container">$$\lambda e^{-\lambda}=0.1.$$</span> So we must solve numerically (using a calculator or a numerical root-finding method). We get <span class="math-container">$\lambda=0.1118325592$</span> or <span class="math-container">$\lambda=3.577152064$</span>.</p> <p>If <span class="math-container">$P(X=2)=0.2$</span>, then <span class="math-container">$\dfrac{e^{-\lambda}\lambda^2}{2}=0.2$</span>. We have the nonlinear equation <span class="math-container">$$\lambda^2 e^{-\lambda}=0.4.$$</span> Solving this equation we get <span class="math-container">$\lambda\approx1.0916$</span> or <span class="math-container">$\lambda\approx3.3105$</span>.</p> <p>So we cannot have <span class="math-container">$P(X=1)=0.1$</span> and <span class="math-container">$P(X=2)=0.2$</span> with the same <span class="math-container">$\lambda$</span>.</p> <p>If <span class="math-container">$P(X=1)=0.1$</span>, then we have <span class="math-container">$$P(X=3)=\dfrac{e^{-0.1118325592}\cdot 0.1118325592^3}{6}=0.0002084420217$$</span> or <span class="math-container">$$P(X=3)=\dfrac{e^{-3.577152064}\cdot 3.577152064^3}{6}=0.2132669481.$$</span></p> <p>If <span class="math-container">$P(X=2)=0.2$</span>, then we have <span class="math-container">$$P(X=3)=\dfrac{e^{-1.0916}\cdot 1.0916^3}{6}\approx0.0728$$</span> or <span class="math-container">$$P(X=3)=\dfrac{e^{-3.3105}\cdot 3.3105^3}{6}\approx0.2207.$$</span></p>
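These transcendental equations have no closed-form solution, but a few lines of bisection (my own scaffolding) recover the numbers for the $P(X=1)=0.1$ case; $\lambda e^{-\lambda}$ increases up to $\lambda=1$ and decreases afterwards, so there is one root on each side of the peak.

```python
import math

def bisect(f, lo, hi, tol=1e-12):
    # assumes f(lo) and f(hi) have opposite signs
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if (f(lo) < 0) == (f(mid) < 0):
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

f = lambda lam: lam * math.exp(-lam) - 0.1

small = bisect(f, 1e-9, 1.0)   # root below the peak at lambda = 1
large = bisect(f, 1.0, 10.0)   # root above the peak

def poisson_pmf(lam, k):
    return math.exp(-lam) * lam**k / math.factorial(k)

# P(X = 3) for each candidate lambda
p3_small = poisson_pmf(small, 3)
p3_large = poisson_pmf(large, 3)
```

The same bisection applied to $\lambda^2 e^{-\lambda}-0.4$ handles the $P(X=2)=0.2$ case.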
58,009
<p>Let $f: X \to Y$ be a morphism of varieties such that its fibres are isomorphic to $\mathbb{A}^n$. Since the definition of a vector bundle stipulates that $f$ be locally the projection $U \times \mathbb{A}^n \to U$, it is likely that there exist morphisms that are not locally of that form, but I can't come up with an example.</p> <p>So the question is: what is an example of a morphism with fibres $\mathbb{A}^n$ that is not locally trivial? not locally isotrivial? </p> <p>UPDATE: what if one assumes vector space structure on the fibres?</p>
Steven Landsburg
10,503
<p>Does your definition of varieties allow them to be disconnected? If so, let $Y$ be any variety, $P\in Y$ a closed point, $U$ the complement of $P$, $X_1$ a vector bundle over $U$, $X_2$ a vector bundle over $P$, and $X=X_1\cup X_2$. </p>
3,068,381
<p>I calculated, using Mathematica, that for <span class="math-container">$4\leq k \leq 100$</span>, <span class="math-container">$$ \sum_{j=k}^{2k} \sum_{i=j+1-k}^j (-1)^j 2^{j-i} \binom{2k}{j} S(j,i) s(i,j+1-k) = 0,$$</span> where <span class="math-container">$s(i,j)$</span> and <span class="math-container">$S(i,j)$</span> are Stirling numbers of the first and second kinds, respectively.</p> <p>Here is the code:</p> <pre>
F[k_] := Sum[(-1)^j 2^(j - i) Binomial[2 k, j] StirlingS2[j, i] (StirlingS1[i, j + 1 - k]), {j, k, 2 k}, {i, j - k + 1, j}];

Table[F[k], {k, 4, 100}]
</pre> <p>How do I prove it holds for all <span class="math-container">$k \geq 4$</span>?</p>
G Cab
317,234
<p>Just a first step (too long for a comment).</p> <p>Let me change your symbology and put <span class="math-container">$$ S(n) = \sum\limits_{j = n}^{2n} {\sum\limits_{k = j + 1 - n}^j {\left( { - 1} \right)^{\,j} 2^{\,j - k} \left( \matrix{2n \cr j \cr} \right) \left\{ \matrix{ j \cr k \cr} \right\} \left[ \matrix{ k \cr j + 1 - n \cr} \right]} } $$</span> where the brackets indicate respectively the binomial, Stirling 2nd kind, un-signed Stirling 1st kind.</p> <p>Note that if you extend the lower limit to start from <span class="math-container">$0$</span> then you get a cleaner result:<br> <span class="math-container">$S(n)=0$</span> for any <span class="math-container">$0 \le n$</span>.<br> And since the sum bounds are implicit in the Binomial and Stirling numbers we can plainly omit them, thus simplifying the algebraic operations. <span class="math-container">$$ \eqalign{ &amp; S(n) = \sum\limits_{0\,\, \le \,j\, \le \,2n} {\;\sum\limits_{0\,\, \le \,k\, \le \,j} { \left( { - 1} \right)^{\,j} 2^{\,j - k} \left( \matrix{ 2n \cr j \cr} \right)\left\{ \matrix{ j \cr k \cr} \right\}\left[ \matrix{ k \cr j + 1 - n \cr} \right]} } = \cr &amp; = \sum\limits_{\left( {0\,\, \le } \right)\,j\, \le \,\left( {2n} \right)} {\;\sum\limits_{\left( {0\,\, \le } \right)\,k\,\left( { \le \,j} \right)} { \left( { - 1} \right)^{\,j} 2^{\,j - k} \left( \matrix{ 2n \cr j \cr} \right)\left\{ \matrix{ j \cr k \cr} \right\}\left[ \matrix{ k \cr j + 1 - n \cr} \right]} } = 0\quad \left| {\,0 \le n} \right. \cr } $$</span></p>
2,572,286
<p>I have $mp = 156^{107} \pmod{17},\;$ which simplifies to $3^{11} \pmod{17}.$</p> <p>How did he get the $11$?</p> <p>Same for </p> <p>$mq = 156^{107} \pmod {11},\;$ which simplifies to $2^7 \pmod {11}.$</p> <p>How to get the $7$? I know how to get the $3$ and $2$</p>
Bernard
202,857
<p>I'd add that one deduces at once from <em>Euler's theorem</em> that, if $a$ and $m$ are coprime, $$a^{N}\equiv a^{N\bmod \varphi(m)}\mod m.$$ In particular, if $m$ is prime, $\varphi(m)=m-1$.</p>
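In Python one can confirm the reduction directly; the three-argument form of the built-in `pow` does modular exponentiation efficiently:

```python
# 156 = 3 (mod 17) and phi(17) = 16, so the exponent reduces mod 16:
# 107 = 6*16 + 11, hence 156^107 = 3^11 (mod 17).
assert 156 % 17 == 3 and 107 % 16 == 11
assert pow(156, 107, 17) == pow(3, 11, 17)

# Likewise 156 = 2 (mod 11) and phi(11) = 10, and 107 = 7 (mod 10):
assert 156 % 11 == 2 and 107 % 10 == 7
assert pow(156, 107, 11) == pow(2, 7, 11)

print(pow(156, 107, 17), pow(156, 107, 11))  # prints: 7 7
```

Both residues happen to equal 7 here, which is a coincidence of these particular numbers, not a general phenomenon.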
220,170
<p>My question is rather philosophical: without using advanced tools such as Perelman-Thurston geometrisation, how can we get convinced that the class of closed oriented $3$-manifolds is large, and that simple invariants such as Betti numbers do not come close to classifying them?</p> <p>For example, I would start with:</p> <ol> <li><p>If $S_g$ is the closed oriented surface of genus $g$, the family $S_g \times S^1$ gives an infinite number of pairwise non-homeomorphic $3$-manifolds.</p></li> <li><p>Mapping tori with fiber $S_g$ give as many non-diffeomorphic $3$-manifolds as there are conjugacy classes in the mapping class group of $S_g$, which can be shown to be large using the symplectic representation, for instance.</p></li> </ol> <p>I would also like to say that Heegaard splittings give rise to a lot of essentially different $3$-manifolds, but I don't know any way to make this precise.</p> <p>So if you know a nice construction which would help in understanding the combinatorial complexity of three-manifolds, please share it :)</p>
Simon Rose
1,703
<p>This does depend (as others have said) on what you mean by "advanced tools", but what about looking at Seifert-Fibred manifolds? These are ones that are locally products $S^1 \times D^2 / C_n$ for some cyclic group $C_n$. Alternatively, you can view them as appropriate notions of a bundle over an orbifold surface $S$.</p> <p>It's worth noting that, as I recall, a large variety of three-manifolds can be Seifert-fibred. But even ignoring that, you get the fact that you can construct them easily from an orbifold surface (quite a concrete object) together with some extra discrete data.</p>
220,170
<p>My question is rather philosophical: without using advanced tools such as Perelman-Thurston geometrisation, how can we get convinced that the class of closed oriented $3$-manifolds is large, and that simple invariants such as Betti numbers do not come close to classifying them?</p> <p>For example, I would start with:</p> <ol> <li><p>If $S_g$ is the closed oriented surface of genus $g$, the family $S_g \times S^1$ gives an infinite number of pairwise non-homeomorphic $3$-manifolds.</p></li> <li><p>Mapping tori with fiber $S_g$ give as many non-diffeomorphic $3$-manifolds as there are conjugacy classes in the mapping class group of $S_g$, which can be shown to be large using the symplectic representation, for instance.</p></li> </ol> <p>I would also like to say that Heegaard splittings give rise to a lot of essentially different $3$-manifolds, but I don't know any way to make this precise.</p> <p>So if you know a nice construction which would help in understanding the combinatorial complexity of three-manifolds, please share it :)</p>
Lee Mosher
20,787
<p>Read Chapter 4 of Thurston's notes <a href="http://library.msri.org/books/gt3m/">http://library.msri.org/books/gt3m/</a>. He produces infinitely many closed hyperbolic 3-manifold of different volumes, and hence non-homeomorphic, just by doing Dehn surgery on the figure 8 knot in $S^3$. This does not used advanced geometrisation theorems, just some basic (but clever) hyperbolic geometry, combined with the Mostow rigidity theorem.</p>
1,281,950
<p>Is there a way to express the integral</p> <p>$I(x_{0}, t) = \int_{x_{0}}^{\infty} \frac{1}{x} \, \cos (x t) \, \text{e}^{-x^{2}} \, dx$,</p> <p>where $x_{0} \neq 0$ and $t \ge 0$, in terms of more well-known functions? In particular, I am interested in a form where $t$ appears in the limits and not in the integrand, analogous to how something like the integral of $\sin (x t) \, \text{e}^{-x^{2}}$ may be expressed in terms of error functions. My attempts thus far have only yielded</p> <p>$I(x_{0}, t) = \tfrac{1}{2}\,\text{e}^{-t^{2} / 4} \Big(\int_{x_{0} - i t/2}^{\infty} \frac{\text{e}^{-z^{2}} (z - i t / 2)}{z^{2} + t^{2} / 4} \, dz + \int_{x_{0} + i t/2}^{\infty} \frac{\text{e}^{-z^{2}} (z + i t / 2)}{z^{2} + t^{2} / 4} \, dz \Big)$,</p> <p>from which I cannot find a way to proceed further. My next step would be to split the limits, introducing integrals to and from $0$, although this introduces singularities which naturally cancel but are nonetheless undesirable.</p> <p>Any analyses of the integral are welcome.</p> <p>For completeness (although not relevant to the actual question itself), I have an incrementally increasing variable $t$ with fixed $x_{0}$, for which I intend to compute this integral. For large $t$, it becomes highly oscillatory.</p>
Alex Fok
223,498
<p>Basically it is just a consequence of the fact that the inverse of a bijection is unique. If $AB=I$ then $B$ is injective as a linear transformation on a vector space. Being injective implies that it is surjective as well by dimension theorem. $B$ is thus a bijection and $A$ its unique inverse.</p>
3,464,477
<p>An odd perfect number n is of the form <span class="math-container">$n=q^{k}m^{2}$</span> where <span class="math-container">$q$</span> is prime and both <span class="math-container">$q,k \equiv 1 \mod 4$</span>. Also, n satisfies <span class="math-container">$\sigma (n)=2n$</span> so that <span class="math-container">$\sigma (q^{k}m^{2})=2q^{k}m^{2}$</span>. My questions are about <span class="math-container">$q,k$</span>. </p> <p>1) Is it known whether it is possible that <span class="math-container">$k&gt;q$</span>? And if so,</p> <p>2) Can <span class="math-container">$k=aq$</span>, where <span class="math-container">$a$</span> is a positive integer?</p>
Jose Arnaldo Bebita Dris
28,816
<p>Let <span class="math-container">$n = q^k m^2$</span> be an odd perfect number with special / Euler prime <span class="math-container">$q$</span> satisfying <span class="math-container">$q \equiv k \equiv 1 \pmod 4$</span> and <span class="math-container">$\gcd(q,m)=1$</span>.</p> <p>Descartes, Frenicle, and subsequently Sorli conjectured that <span class="math-container">$k=1$</span> always holds for <em>all</em> odd perfect numbers.</p> <p>If proven true, then the Descartes-Frenicle-Sorli Conjecture (on odd perfect numbers) would imply that:</p> <blockquote> <p>(1) It is not possible to have <span class="math-container">$k &gt; q$</span>, as we would then have the contradiction <span class="math-container">$$1=k&gt;q \geq 5.$$</span></p> <p>(2) It cannot be the case that <span class="math-container">$k = aq$</span> (where <span class="math-container">$q \geq 5$</span> and <span class="math-container">$a$</span> is a positive integer).</p> </blockquote> <p>As commented by <a href="https://math.stackexchange.com/users/11955">Ed Pegg</a>: "This is one of the famous unsolved math questions."</p>
2,712,318
<p>Let $a$ be the real root of the equation $x^3+x+1=0$.</p> <p>Calculate $$\sqrt[\leftroot{-2}\uproot{2}3]{{(3a^{2}-2a+2)(3a^{2}+2a)}}+a^{2}$$</p> <p>The correct answer should be $1$. I've tried to write $a^3$ as $-a-1$, but that didn't help much; I guess there is some trick here.</p>
David Quinn
187,299
<p>The expression under the cube root is $$9a^4+2a^2+4a=9a(-(a+1))+2a^2+4a=-7a^2-5a$$</p> <p>Meanwhile consider $$(1-a^2)^3=1-3a^2+3a^4-a^6=1-3a^2+3a(-a-1)-a^2-2a-1$$ $$=-7a^2-5a$$</p> <p>So the required result follows immediately</p>
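A quick numerical check of this identity (the root-finding below is my own scaffolding, not part of the argument): locate the real root of $x^3+x+1$ by bisection and evaluate the original expression.

```python
# Find the real root a of x^3 + x + 1 = 0 by bisection on [-1, 0],
# where f(-1) = -1 < 0 and f(0) = 1 > 0.
f = lambda x: x**3 + x + 1
lo, hi = -1.0, 0.0
for _ in range(200):
    mid = (lo + hi) / 2
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid
a = (lo + hi) / 2

inner = (3*a**2 - 2*a + 2) * (3*a**2 + 2*a)   # equals -7a^2 - 5a
value = inner ** (1/3) + a**2                  # inner > 0 here, so ** (1/3) is the real cube root
```

For this root ($a \approx -0.6823$) the quantity $-7a^2-5a$ is positive, so the real cube root is the one the problem intends, and `value` comes out as $1$ to machine precision.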
2,712,318
<p>Let $a$ be the real root of the equation $x^3+x+1=0$.</p> <p>Calculate $$\sqrt[\leftroot{-2}\uproot{2}3]{{(3a^{2}-2a+2)(3a^{2}+2a)}}+a^{2}$$</p> <p>The correct answer should be $1$. I've tried to write $a^3$ as $-a-1$, but that didn't help much; I guess there is some trick here.</p>
Jack D'Aurizio
44,121
<p>The real root $a$ of $x^3+x+1$ belongs to the interval $\left(-1,-\frac{1}{2}\right)$ and in order to check that $$ \sqrt[3]{(3a^2-2a+2)(3a^2+2a)}=1-a^2 \tag{1}$$ holds it is enough to check that $$ (3a^2-2a+2)(3a^2+2a)=1-3a^2+3a^4-a^6 \tag{2}$$ holds, or that $$ (1+a+a^3)(-1+5a+a^3) = 0 \tag{3} $$ holds, but that is trivial.</p>
4,432,047
<p><span class="math-container">$3\times 2$</span> is <span class="math-container">$3+3$</span> or <span class="math-container">$2+2+2$</span>. We know both are correct as multiplication is commutative for whole numbers. But which one is <strong>mathematically</strong> accurate?</p>
J.-E. Pin
89,374
<p>In every ring and even in every <a href="https://en.wikipedia.org/wiki/Semiring" rel="nofollow noreferrer">semiring</a>, <span class="math-container">$3 \times 2$</span> is by definition <span class="math-container">$(1+1+1)\times (1+1)$</span> . Since multiplication distributes over addition, you get on the one hand <span class="math-container">$$ (1+1+1) \times (1+1) = (1+1+1) + (1+1+1) = 3 +3 $$</span> and on the other hand <span class="math-container">$$ (1+1+1) \times (1+1) = (1+1) + (1+1) + (1+1) = 2 + 2 + 2 $$</span></p>
4,432,047
<p><span class="math-container">$3\times 2$</span> is <span class="math-container">$3+3$</span> or <span class="math-container">$2+2+2$</span>. We know both are correct as multiplication is commutative for whole numbers. But which one is <strong>mathematically</strong> accurate?</p>
Hans Lundmark
1,242
<p>It might be worth mentioning that in the context of ordinal arithmetic, where the order actually matters, the currently accepted convention is that <span class="math-container">$\alpha \cdot \beta$</span> means “repeat the operation of ‘counting to <span class="math-container">$\alpha$</span>’ <span class="math-container">$\beta$</span> times”, so that <span class="math-container">$$\omega \cdot 2 = \omega + \omega$$</span> (“count <span class="math-container">$1,2,3,\dots$</span>, and then do it again”) and <span class="math-container">$$2 \cdot \omega = 2 + 2 + 2 + \cdots = \omega$$</span> (“count <span class="math-container">$1,2,1,2,1,2,\dots$</span>”).</p> <p>Cantor himself originally used the opposite convention, but later switched (in <em>Beiträge zur Begründung der transfiniten Mengenlehre</em>, where at the end of §17 he writes “Die scheinbare Abweichung der Formeln [...] hängt nur mit der veränderten Schreibweise des Produktes zweier Zahlen zusammen, da wir nun den Multiplikandus links, den Multiplikator rechts setzen, [...]” — roughly: “the apparent discrepancy in the formulas [...] is due only to the changed way of writing the product of two numbers, since we now place the multiplicand on the left and the multiplier on the right, [...]”; see <a href="https://hsm.stackexchange.com/questions/7461/who-decided-on-the-convention-for-ordinal-multiplication">this HSM.SE question</a>).</p>
893,839
<p>I am dealing with dual spaces for the first time.</p> <p>I just wanted to ask: is there any practical application of the dual space, or is it just some random mathematical thing? If there is, please give a few examples.</p>
Mikhail Katz
72,694
<p>Vector fields and differential forms are <em>dual</em> in this sense but behave quite differently. Thus, one can pull back differential forms by smooth maps but there is no analogous operation for vector fields. </p> <p>A finite dimensional vector space is of course isomorphic to its dual, but the isomorphism is not canonical. In other words, there is no natural way of identifying them.</p> <p>To get a feeling for what "natural" means in this context, you have to start thinking about global objects such as vector bundles. For example, the Mobius band can be thought of as a vector bundle over the circle, with fiber a line. If one could construct a natural isomorphism between a fiber and the line $\mathbb R$, this would lead to a trivialisation of the vector bundle and would imply that the Mobius band is actually a cylinder, which it is not.</p>
893,839
<p>I am dealing with dual spaces for the first time.</p> <p>I just wanted to ask: is there any practical application of the dual space, or is it just some random mathematical thing? If there is, please give a few examples.</p>
Urgje
95,681
<p>Dual spaces enter naturally if one considers the weak topology on the original space. Sometimes a differential equation has a weak solution but not a strong one (i.e. in the original=strong, topology). Well known examples of dual spaces are $L^p$ spaces. The dual of ${L^p}(R,dx)$, $1≤p&lt;∞$ is ${L^q}(R,dx)$ where $p^{-1}+q^{-1}=1$. You can find many details in the volumes by Reed and Simon.</p>
4,352,129
<p>I am struggling to compute the power series expansion of <span class="math-container">$$f(z) = \frac{1}{2z+5}$$</span> about <span class="math-container">$z=0$</span>, where <span class="math-container">$f$</span> is a complex function. I tried comparing it to the geometric series as follows,<span class="math-container">$$ f(z) = \frac{1}{2z+5} = \frac{1}{1-\omega} = \sum_{n=0}^\infty\omega^n $$</span> which gives us <span class="math-container">$$ 2z+ 5 = 1 - \omega \implies \omega = -2z - 4 \implies f(z) = \sum_{n=0}^\infty(-2z - 4)^n = \sum_{n=0}^\infty(-2)^n(z+2)^n$$</span> which is clearly not even an expansion about <span class="math-container">$z=0$</span>. This is the wrong answer as well according to my textbook which stated that the answer was <span class="math-container">$$\sum_{n=0}^\infty\frac{2^n}{5^{n+1}}(-1)^nz^n$$</span> could someone please explain where I have gone wrong.</p>
Essaidi
708,306
<p><span class="math-container">$$\begin{array}{lcl} \dfrac{1}{2 z + 5} &amp; = &amp; \dfrac{1}{5} \dfrac{1}{1 + \dfrac{2 z}{5}} \\[3mm] &amp; = &amp; \displaystyle \dfrac{1}{5} \sum_{n = 0}^{+\infty} (-1)^n \left(\dfrac{2 z}{5}\right)^n \\[3mm] &amp; = &amp; \displaystyle \sum_{n = 0}^{+\infty} \dfrac{2^n}{5^{n + 1}} (-1)^n z^n \\[3mm] \end{array}$$</span> with the condition : <span class="math-container">$$\left|\dfrac{2 z}{5}\right| &lt; 1$$</span></p>
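One can check the expansion numerically at a point inside the disc of convergence $|z| < 5/2$ (helper names below are mine):

```python
def f(z):
    return 1 / (2*z + 5)

def series(z, terms=60):
    # partial sum of sum_{n>=0} (-1)^n 2^n / 5^(n+1) * z^n
    return sum((-1)**n * 2**n / 5**(n + 1) * z**n for n in range(terms))

z = 0.75          # |2z/5| = 0.3 < 1, so the series converges quickly
assert abs(f(z) - series(z)) < 1e-12
```

Sixty terms are far more than needed here; the tail is bounded by a geometric series with ratio $|2z/5|$.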
2,433,614
<p>The solution to the initial-value problem $y' = y^{2} + 1$ with $y(0)=1$ is $y = \tan(x + \frac{\pi}{4})$. I would like to show that this is the correct solution in a way that is analogous to my solution to differential equation $y' = y - 12$.</p> <p><strong>Solution to Differential Equation</strong></p> <p>If $y' = y - 12$, \begin{equation*} \frac{y'}{y - 12} = 1 , \end{equation*} \begin{equation*} \left(\ln(y - 12)\right)' = 1 , \end{equation*} \begin{equation*} \ln(y - 12) = t + C \end{equation*} for some constant $C$. If $C' = e^{C}$, \begin{equation*} y = C' e^{t} + 12 . \end{equation*}</p>
Paramanand Singh
72,031
<p>Let's see what is involved in the parametrization of the unit circle via circular functions. First is the obvious relation $$\cos^{2}t+\sin^{2}t=1\tag{1}$$ and further that these functions are continuous/differentiable with values $\cos 0=1,\sin 0=0$. Some idea of $\pi$ is needed so that we can prove these functions are periodic with period $2\pi$.</p> <p>All of this can be easily derived using the Taylor series for these functions. From the definitions $$\cos t=1-\frac{t^{2}}{2!}+\frac{t^{4}}{4!}-\dots \tag{2}$$ and $$\sin t =t-\frac{t^{3}}{3!}+\frac{t^{5}}{5!}-\dots\tag{3}$$ we can easily see the initial values at $t=0$ and their derivatives $$(\sin t)'=\cos t, (\cos t)' =-\sin t\tag{4}$$ and if we have $g(t) =\cos^{2}t+\sin^{2}t$ then we get $g'(t) =0$ so that $g(t) =g(0)=1$ and equation $(1)$ is proved.</p> <p>To introduce $\pi$ we need a bit more work. It can be shown that $\cos t$ changes sign in $[0,2]$ and hence vanishes somewhere in this interval. Moreover the derivative $-\sin t$ maintains constant sign in $(0,2)$, hence it follows that there is a unique number $\xi\in(0,2)$ such that $\cos \xi=0$, and we define $\pi=2\xi$. It is then easily proved that $\sin(\pi/2)=1$.</p> <p>Proving periodicity is a bit tricky and one way to do it is to establish addition formulas. This can be done by noting that both $\cos t, \sin t$ are solutions to $f''(t) +f(t) =0$ and the general solution to this equation is $f(t) = f(0)\cos t+f'(0)\sin t$. To prove the general solution consider the function $$h(t) =f(t) - f(0)\cos t-f'(0)\sin t$$ then $h''(t) +h(t) =0, h(0)=h'(0)=0$. And then we consider $\phi(t) = \{h(t)\}^{2}+\{h'(t)\}^{2}$. Clearly $\phi'(t) =0$ and hence $\phi(t) =\phi(0)=0$ and hence $h(t) =h'(t) =0$. This gives us $f(t) =f(0)\cos t+f'(0)\sin t$. </p> <p>Since $\cos (a+t)$ also satisfies the equation $f''+f=0$, it follows from the above that $\cos (a+t) =\cos a \cos t-\sin a \sin t$, and similarly the addition formula for $\sin t$ can be established.
Using these addition formulas we can show that $\cos(t+2\pi)=\cos t, \sin(t+2\pi)=\sin t$. </p>
3,284,729
<p>I was wondering how to calculate the following: I know the average chance of death per year for a certain age. What is the average chance of death for a 5 year period then?</p> <p>Is it simply additive? The chances are listed here (jaar is the Dutch word for year).</p> <p><a href="https://i.stack.imgur.com/09wt9.png" rel="nofollow noreferrer">Chance Table</a></p>
Empy2
81,790
<p>Your chance of survival multiplies. </p> <p>If your chance of death in three years is one percent, two percent and three percent, then your chance of survival is <span class="math-container">$0.99*0.98*0.97$</span></p>
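In other words, with yearly death probabilities $p_1,\dots,p_n$ the death probability over the whole period is $1-\prod_i(1-p_i)$, not $\sum_i p_i$ (although the sum is a decent approximation when the $p_i$ are small). A sketch using the made-up numbers from the answer:

```python
def death_prob(yearly):
    """Probability of dying within the period, given per-year death chances."""
    survival = 1.0
    for p in yearly:
        survival *= 1 - p     # survive each year in turn
    return 1 - survival

# The three-year example from the answer: 1%, 2%, 3%
p = death_prob([0.01, 0.02, 0.03])
print(round(p, 6))  # 0.058906, slightly below the naive sum 0.06
```

This treats each year's survival as conditional on having survived the previous years, which is exactly what a life table encodes.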
3,827,853
<p>I am trying to compute the singular cohomology of <span class="math-container">$\mathbb{N}\subset\mathbb{R}$</span>. I have that <span class="math-container">$C_0(\mathbb{N})\cong C_1(\mathbb{N})\cong\mathbb{Z}^{\oplus\mathbb{N}}$</span>, where I mean <span class="math-container">$\mathbb{Z}^{\oplus\mathbb{N}}=\{(n_0,n_1,n_2,...)|n_i\in\mathbb{Z},n_i=0\text{ for all but finitely many }i\}$</span>. Moreover, I know that <span class="math-container">$d_1=d_0=0$</span>.</p> <p>I know that Hom<span class="math-container">$(\mathbb{Z}^{\oplus\mathbb{N}},\mathbb{Z})\cong C^0(\mathbb{N})\cong C^1(\mathbb{N})$</span>.</p> <p>I am fairly sure that Hom<span class="math-container">$(\mathbb{Z}^{\oplus\mathbb{N}},\mathbb{Z})\cong \mathbb{Z}^\mathbb{N}$</span>, where <span class="math-container">$\mathbb{Z}^\mathbb{N}=\{(n_0,n_1,n_2,...)|n_i\in\mathbb{Z}\}$</span>. I am, however, unsure how to prove this.</p>
Luke Derry
825,079
<p>Let <span class="math-container">$x\in\mathbb{Z}^\mathbb{N}, x=(x_0,x_1,x_2,...)$</span>. Let <span class="math-container">$f_x:\mathbb{Z}^{\oplus\mathbb{N}}\rightarrow\mathbb{Z};f_x(y)=x_0y_0+x_1y_1+x_2y_2+...$</span></p> <p>Then <span class="math-container">$f_x\in\text{Hom}(\mathbb{Z}^{\oplus\mathbb{N}},\mathbb{Z})$</span>, as only finitely many <span class="math-container">$y_i$</span> are nonzero for any <span class="math-container">$y\in\mathbb{Z}^{\oplus\mathbb{N}}$</span>.</p> <p>Moreover, if <span class="math-container">$f_x=0$</span>, then, letting <span class="math-container">$y^i\in\mathbb{Z}^{\oplus\mathbb{N}}$</span> be the element with <span class="math-container">$y^i_j=0\ \forall j\neq i,y^i_i=1$</span>, we have <span class="math-container">$f_x(y^i)=x_i=0$</span> for every <span class="math-container">$i\in\mathbb{N}$</span>. Thus <span class="math-container">$x=0$</span>. Furthermore, any <span class="math-container">$f\in\text{Hom}(\mathbb{Z}^{\oplus\mathbb{N}},\mathbb{Z})$</span> is totally determined by the values it takes on the <span class="math-container">$y^i$</span>s, so if <span class="math-container">$x=(f(y^0),f(y^1),f(y^2),...)$</span>, then <span class="math-container">$f_x=f$</span>.</p> <p>Thus the homomorphism <span class="math-container">$\phi:\mathbb{Z}^\mathbb{N}\rightarrow\text{Hom}(\mathbb{Z}^{\oplus\mathbb{N}},\mathbb{Z});x\mapsto f_x$</span> is an isomorphism.</p>
265,549
<p>$$ \left\{ x \in\mathbb{R}\; \middle\vert\; \tfrac{x}{|x| + 1} &lt; \tfrac{1}{3} \right\}$$</p> <p>What are the supremum and infimum of this set? I thought the supremum is $\frac{1}{3}$. But if every element of a set satisfies $ x &lt; n$, can we say that $n$ is the supremum of the set? And for the infimum I have no idea at all. Also, let us consider this example:</p> <p>$$ \left\{\tfrac{-1}{n} \;\middle\vert\; n \in \mathbb{N}_0\right\}$$</p> <p>How can I find the infimum and supremum of this set? It confuses me a lot. I know that as $n$ gets bigger $\frac{-1}{n}$ asymptotically approaches $0$, and as $n$ gets smaller $\frac{-1}{n}$ approaches $-\infty$, but that's about it.</p>
Julien
38,053
<p>For $x\geq 0$, the condition $\frac{x}{|x|+1}&lt;1/3$ is equivalent to $x&lt;x/3+1/3$, which in turn is equivalent to $x&lt;1/2$. So the supremum of your set is $1/2$.</p> <p>For $x\leq 0$, we have that $\frac{x}{|x|+1}&lt;1/3$ is equivalent to $x&lt;-x/3+1/3$, and then to $4x&lt;1$, which is true for every nonpositive number. So the infimum of your set is $-\infty$.</p> <p>Another way to put it is to observe that the set under consideration is $]-\infty,1/2[$.</p> <p>For the second set $\{-1/n|n\geq 1\}=\{-1,-1/2,-1/3,\ldots\}$, the infimum is a mimimum and is equal to $-1$, while the supremum is the limit of this increasing sequence, namely $0$.</p>
1,237,464
<p>In a bag of reds and black balls, $30\%$ were red, and $90\%$ of the black balls and $80\%$ of the red balls are marked balls. What percentage of the bag of balls is marked?</p> <p>I thought I would have to use: Let </p> <p>$A = \text{marked red balls},$</p> <p>$B = \text{marked black balls}.$ </p> <p>$P(A \cup B) = P(A) + P(B) - P(A \cap B)$</p> <p>But the answer is simply $0.8\times0.3 + 0.7\times0.9$. How come we don't have to take $P(A \cap B)$ into consideration?</p>
Bib
190,219
<p>Consider the following:</p> <p>$30\%$ of the balls are red and $80\%$ of the red balls are marked, so we have $P(A) = 30\% \times 80\%$.</p> <p>Now $70\%$ of the balls are black and $90\%$ of the black balls are marked, so we have $P(B) = 70\% \times 90\%$.</p> <p>Now we can evaluate the formula $P(A \cup B) = P(A) + P(B) - P(A \cap B) = 30\% \times 80\% + 70\% \times 90\%$ because $P(A \cap B) = 0$. This is the percentage of all the balls that are marked.</p> <p>The reason we don't consider the intersection mentioned in your question is because there is no intersection. Indeed, your reasoning with the equation $P(A \cup B) = P(A) + P(B) - P(A \cap B)$ is correct, but $P(A \cap B) = 0$ because a ball cannot be both black and red at the same time.</p>
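The law-of-total-probability computation, spelled out:

```python
p_red = 0.30
p_black = 1 - p_red           # 0.70
p_marked_given_red = 0.80
p_marked_given_black = 0.90

# P(marked) = P(red) P(marked|red) + P(black) P(marked|black).
# "Marked red" and "marked black" are disjoint events (a ball has
# exactly one colour), so there is no intersection term to subtract.
p_marked = p_red * p_marked_given_red + p_black * p_marked_given_black
print(round(p_marked, 2))  # 0.87
```

So 87% of the balls in the bag are marked.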
1,405,787
<p>We're given the following function :</p> <p>$$f(x,y)=\dfrac{1}{1+x-y}$$</p> <p>Now , how to prove that the given function is differentiable at $(0,0)$ ?</p> <p>I found out the partial derivatives as $f_x(0,0)=(-1)$ and $f_y(0,0)=1$ ,</p> <p>Clearly the partial derivatives are continuous , but that doesn't guarantee differentiability , does it ?</p> <p>Is there any other way to prove the same ?</p>
5xum
112,884
<p>If all partial derivatives of a function (over all of its variables) exist in a neighborhood of a point and are continuous at that point, then the function is differentiable at that point. Here $f_x$ and $f_y$ exist and are continuous near $(0,0)$, so $f$ is differentiable there.</p>
463,539
<p>I've transformed the number 11 to:</p> <p>$11^e = 677.32$</p> <p>Given the exponent and the transformed value, how can I solve for the original number?</p> <p>I know that $x = y^z$ and that $z = \log_y(x)$, but I don't know how to solve for $x$? Can anyone explain how I can use the exponent $e$ and $677.32$ to find the $x$ value of 11?</p>
Rasmus
367
<p>If you want $x$ to satisfy $x^e=b$, then what you are looking for is $\sqrt[e]{b}$ or $b^\frac1e$.</p>
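For instance, with the numbers from the question (the small discrepancy comes from $677.32$ being a rounded value):

```python
import math

y = 11 ** math.e           # about 677.33
x = y ** (1 / math.e)      # invert: (11^e)^(1/e) = 11
print(round(x, 10))        # 11.0

# Recovering 11 from the rounded value given in the question:
approx = 677.32 ** (1 / math.e)   # about 10.99995
```

Raising to the power $1/e$ undoes raising to the power $e$, since $(b^{1/e})^e = b$.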
4,253,761
<p><span class="math-container">$$\begin{array}{ll} \text{extremize} &amp; xy+2yz+3zx\\ \text{subject to} &amp; x^2+y^2+z^2=1\end{array}$$</span></p> <p>How can I find the maximum/minimum using Lagrange multipliers?</p> <p>Context: This is not a homework problem; my friend and I often make up problems to challenge each other. We both love Maths and we are both students.</p> <p>I have improved my answer based on user247327's suggestion, and I have found the maximum value of 2.056545. Thank you for contributing ideas to my questions.</p>
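One standard route (the derivation here is mine, sketched as a hint): write the objective as $\mathbf{x}^\top A\mathbf{x}$ for the symmetric matrix $A$ with off-diagonal entries $A_{12}=1/2$, $A_{13}=3/2$, $A_{23}=1$. The Lagrange condition $\nabla(\mathbf{x}^\top A\mathbf{x})=\lambda\nabla(x^2+y^2+z^2)$ reduces to $A\mathbf{x}=\lambda\mathbf{x}$, and on the unit sphere the objective then equals $\lambda$, so the extrema are eigenvalues of $A$. Expanding $\det(A-\lambda I)$ gives the characteristic equation $\lambda^3-\tfrac{7}{2}\lambda-\tfrac{3}{2}=0$, whose largest root can be found numerically:

```python
# Largest root of p(L) = L^3 - 3.5*L - 1.5, located by bisection;
# p(1) = -4 < 0 and p(3) = 15 > 0, and the other two roots are negative.
p = lambda L: L**3 - 3.5*L - 1.5

lo, hi = 1.0, 3.0
for _ in range(100):
    mid = (lo + hi) / 2
    if p(mid) < 0:
        lo = mid
    else:
        hi = mid
max_value = (lo + hi) / 2   # about 2.0565, matching the value quoted in the question
```

The minimum is likewise the most negative eigenvalue, i.e. the smallest root of the same cubic.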
Ibadul Qadeer
997,150
<p>From an abstract algebra point of view, the best way to understand what trivial is would be to look at situations or examples where it is mostly used and encountered.</p> <p>Take the case of subsets of a set, say <span class="math-container">$A$</span>. Since every set of is a subset of itself, <span class="math-container">$A$</span> is a trivial subset of itself.</p> <p>Another situation would be the case of a subgroup. The subset containing only the identity of a group is a group and it is called trivial.</p> <p>Take a completely different situation. Take the case of a system of linear equations, <span class="math-container">$$a_1x+b_1y=0\\a_3x+b_4y=0\\a_5x+b_6y=0$$</span> It is obvious that <span class="math-container">$x=y=0$</span> is a solution of such a system of equations. This solution would be called trivial.</p> <p>Take matrices, if the square of a matrix, say that of <span class="math-container">$A$</span>, is <span class="math-container">$O$</span>, we have <span class="math-container">$A^2=O$</span>. An obvious (trivial) solution would be <span class="math-container">$A=O$</span>. However, there exist other (non-trivial) solutions to this equation. All non-zero nilpotent matrices would serve as non-trivial solutions of this matrix equation.</p>
82,061
<p>Well, another problem here I don't get.</p> <p>From what I know, $(\sin x)'=\cos x$, right?</p> <p>Well, here is a problem:</p> <p>Find $\frac{d}{dx}(y\cos(\frac{y}{x^4}))$.</p> <p>Let $u=y\cos(\frac{y}{x^4})$ and $s=y$, $v=\cos(\frac{y}{x^4})$.</p> <p>$$u&#39;=v\frac{ds}{dx}+s \frac{dv}{dx}$$</p> <p>$$u&#39;=\cos\frac{y}{x^4} \frac{dy}{dx} + y \frac{d}{dx}(\cos\frac{y}{x^4})$$</p> <p>Now the question: isn't $\frac{d}{dx}(\cos\frac{y}{x^4})=-\sin(\frac{y}{x^4})$, and that's where it all ends? Why does the teacher go on and differentiate what's in the parentheses?</p>
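A quick numeric sketch of why the inner derivative matters (using the simpler inner function $x^3$ in place of $y/x^4$; the helper names are mine): the derivative of $\cos(g(x))$ is $-\sin(g(x))\,g'(x)$, not just $-\sin(g(x))$.

```python
import math

# d/dx cos(g(x)) = -sin(g(x)) * g'(x) by the chain rule.
g = lambda x: x**3
f = lambda x: math.cos(g(x))

x, h = 1.0, 1e-6
numeric = (f(x + h) - f(x - h)) / (2 * h)   # central-difference derivative

without_chain = -math.sin(g(x))             # forgets the inner derivative
with_chain = -math.sin(g(x)) * 3 * x**2     # chain rule applied: g'(x) = 3x^2

print(numeric, without_chain, with_chain)   # numeric agrees with with_chain only
```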
Anurag
17,939
<p>The generating function for this is $B_n(x) = \sum P(n,k)x^k = (1+x)(1+2x)\ldots (1+nx)$. You might be able to manipulate this to get a closed form. For example, $\log(B_n(x)) = \sum(-1)^{k+1}S_kx^k/k$ where $S_k$ is the sum of $k$th powers. Still, no closed formula.</p>
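The coefficients can be read off by expanding the product directly (a small sketch; `poly_mul` is my helper name, and coefficient lists are lowest degree first):

```python
# Expand B_n(x) = (1+x)(1+2x)...(1+nx) as a coefficient list.
def poly_mul(p, q):
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def B(n):
    coeffs = [1]
    for k in range(1, n + 1):
        coeffs = poly_mul(coeffs, [1, k])   # multiply by (1 + k x)
    return coeffs

print(B(3))  # [1, 6, 11, 6]
```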
2,266,342
<p>So on my review for my final exam there is this question:</p> <p>Is there a linear transformation from $P_2$ to $P_2$ with the following properties? In each case, either give an example of such a transformation or prove that no such transformation exists.</p> <blockquote> <p>$T(t^2+t+1) = t^2+t+1, T(t^2+2t+3) = 3t^2+2t+1, T(t^2+2t+2) = 2t^2+2t+1$</p> </blockquote> <p>So we know that the standard basis for $P_2$ is $\{1,t,t^2\}$.</p> <p>My question is: how do you even start? Is there a trick? We have never done this kind of question in our class before.</p>
Doug M
317,162
<p>$T\begin{bmatrix} 1&amp;1&amp;1\\1&amp;2&amp;2\\1&amp;3&amp;2 \end{bmatrix} = \begin{bmatrix} 1&amp;3&amp;2\\1&amp;2&amp;2\\1&amp;1&amp;1 \end{bmatrix}\\ T = \begin{bmatrix} 1&amp;3&amp;2\\1&amp;2&amp;2\\1&amp;1&amp;1 \end{bmatrix}\begin{bmatrix} 1&amp;1&amp;1\\1&amp;2&amp;2\\1&amp;3&amp;2 \end{bmatrix}^{-1}\\ T = \begin{bmatrix} 0&amp;0&amp;1\\0&amp;1&amp;0\\1&amp;0&amp;0 \end{bmatrix}$</p> <p>(Columns are the coordinates of the given polynomials and their images in the basis $\{t^2,t,1\}$; $T$ simply swaps the $t^2$ and constant coefficients, $T(at^2+bt+c)=ct^2+bt+a$.)</p>
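An independent numerical cross-check (a sketch; coordinates are taken in the basis $\{t^2,t,1\}$, the columns of `A` are the three inputs and the columns of `B` their images, and the helper names are my own): solve $TA=B$, i.e. $T=BA^{-1}$.

```python
# Columns of A: t^2+t+1, t^2+2t+3, t^2+2t+2 in coordinates (t^2, t, 1).
A = [[1, 1, 1],
     [1, 2, 2],
     [1, 3, 2]]
# Columns of B: their images t^2+t+1, 3t^2+2t+1, 2t^2+2t+1.
B = [[1, 3, 2],
     [1, 2, 2],
     [1, 1, 1]]

def inverse(M):
    # Gauss-Jordan elimination with partial pivoting on [M | I].
    n = len(M)
    aug = [row[:] + [float(i == j) for j in range(n)] for i, row in enumerate(M)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(aug[r][c]))
        aug[c], aug[piv] = aug[piv], aug[c]
        aug[c] = [x / aug[c][c] for x in aug[c]]
        for r in range(n):
            if r != c:
                f = aug[r][c]
                aug[r] = [x - f * y for x, y in zip(aug[r], aug[c])]
    return [row[n:] for row in aug]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

T = [[round(x) for x in row] for row in matmul(B, inverse(A))]
print(T)  # [[0, 0, 1], [0, 1, 0], [1, 0, 0]]
```

The result swaps the $t^2$ and constant coordinates, which is easy to confirm against the three defining conditions.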
478,516
<p>$\lim_{d \to \infty} (1+\frac{w}{d})^{\frac{d}{w} } = e$. But what if the number of bits used to encode $d$ is polynomial in length? In this model, infinity can't be encoded. However, $d$ is polynomially much larger than $w$. Is there any tight lower bound, i.e. a closed-form function $f(d)$ such that $$ f(d) \le (1+\frac{w}{d})^{\frac{d}{w} }$$</p>
Did
6,179
<p>For every fixed positive $d$, $(1+w/d)^{d/w}\to1$ when $w\to+\infty$ while $(1+w/d)^{d/w}\gt1$ for every positive $w$. Hence the best lower bound of $(1+w/d)^{d/w}$ valid for every positive $w$ (or only for every $w$ large enough) is $f(d)=1$.</p>
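A numeric illustration of both claims (fixed $d$, growing $w$; the sample values are my own choice): the quantity stays above $1$ but approaches it, so no lower bound better than $f(d)=1$ can hold for all $w$.

```python
# For fixed d, (1 + w/d)**(d/w) -> 1 as w -> infinity, while staying > 1.
d = 5.0
vals = [(1 + w / d) ** (d / w) for w in (10.0, 1e3, 1e6, 1e9)]
print(vals)   # decreasing toward 1
```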
925,941
<p>Solve:</p> <p>$xy=-30$<br> $x+y=13$</p> <p>{15, -2} is a particular solution, but how would I know if it is the only solution, or what would be the way to solve this without "guessing"?</p>
André Nicolas
6,312
<p>We have in general $$(x+y)^2-4xy=(x-y)^2.$$ In our case that gives $(x-y)^2=289$, so $x-y=\pm 17$.</p> <p>Now solve the system $x+y=13$, $x-y=17$ and the system $x+y=13$, $x-y=-17$.</p> <p><strong>Remark:</strong> This procedure for finding $x$ and $y$ given their sum and product goes back to Neo-Babylonian times. (Of course, algebraic notation was not used, but an equivalent algorithm was taught.)</p>
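The two steps above carried out numerically (a small sketch; both sign choices for $x-y$ yield the two solution pairs):

```python
import math

# (x + y)^2 - 4xy = (x - y)^2, so x - y = ±sqrt(s^2 - 4p) with s = 13, p = -30.
s, p = 13, -30
diff = math.isqrt(s * s - 4 * p)        # sqrt(289) = 17
solutions = {((s + d) // 2, (s - d) // 2) for d in (diff, -diff)}
print(solutions)                        # the two pairs (15, -2) and (-2, 15)
```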
925,941
<p>Solve:</p> <p>$xy=-30$<br> $x+y=13$</p> <p>{15, -2} is a particular solution, but how would I know if it is the only solution, or what would be the way to solve this without "guessing"?</p>
Adi Dani
12,848
<p>$$xy=-30$$ $$x+y=13$$</p> <p>From the second equation, $y=13-x$; substituting into the first we get $$x(13-x)=-30$$ $$x^2-13x-30=0$$ $$x_{1,2}=\frac{13\pm\sqrt{289}}{2}=\frac{13\pm17}{2}$$ $$x_1=\frac{13+17}{2}=15,x_2=\frac{13-17}{2}=-2$$ $$y_1=13-15=-2,y_2=13-(-2)=15$$ so the solutions are $(15,-2)$ and $(-2,15)$</p>
4,349,487
<h2>Problem:</h2> <p>Let <span class="math-container">$a,x&gt;0$</span> and define:</p> <h2><span class="math-container">$$f\left(x\right)=x^{\frac{x}{x^{2}+1}}$$</span></h2> <p>and:</p> <h2><span class="math-container">$$g\left(x\right)=\sqrt{\frac{x^{a}}{a}}+\sqrt{\frac{a^{x}}{x}}$$</span></h2> <p>Then prove or disprove that:</p> <h2><span class="math-container">$$2^{\frac{1}{2}}\cdot\left(f\left(xa\right)+f\left(\frac{1}{xa}\right)\right)^{\frac{1}{2}}\leq g(x)\tag{E}$$</span></h2> <hr /> <hr /> <h2>My attempt</h2> <p>From here <a href="https://math.stackexchange.com/questions/3253006/new-bound-for-am-gm-of-2-variables">New bound for Am-Gm of 2 variables</a> we have:</p> <h2><span class="math-container">$$h(x)=\left(-\frac{1}{x}+\frac{1}{x^{2}}+1\right)^{-\frac{x}{2}}\geq x^{\frac{x}{x^{2}+1}}$$</span></h2> <p>So we need to show:</p> <p><span class="math-container">$$\sqrt{2}\sqrt{h\left(xa\right)+h\left(\frac{1}{xa}\right)}\leq g(x)$$</span></p> <p>Now using the binomial theorem (second order at <span class="math-container">$x=1$</span>) for <span class="math-container">$1\leq x\leq 2$</span> and <span class="math-container">$0.5\leq a\leq 1$</span>, we need to show:</p> <p><span class="math-container">$$\sqrt{2}\sqrt{r\left(xa\right)+r\left(\frac{1}{xa}\right)}\leq g(x)$$</span></p> <p>Where:</p> <p><span class="math-container">$$r(x)=\left(1+\left(-\frac{1}{x}+\frac{1}{x^{2}}\right)x+\frac{1}{2}\left(-\frac{1}{x}+\frac{1}{x^{2}}\right)^{2}\cdot x\cdot\left(x-1\right)\right)^{-\frac{1}{2}}$$</span></p> <p>I have not tried it, but here <a href="https://math.stackexchange.com/questions/4268913/show-this-inequality-sqrt-fracabb-sqrt-fracbaa-ge-2/4269271#4269271">show this inequality $\sqrt{\frac{a^b}{b}}+\sqrt{\frac{b^a}{a}}\ge 2$</a> user RiverLi provides some lower bounds; again, I haven't checked, but perhaps it works with this inequality. If it doesn't, we need a higher order in the Padé approximation.</p> <p>Edit 06/01/2022:</p> <p>Using the nice solution due to user RiverLi, it seems we have for <span class="math-container">$0.7\leq a \leq 1$</span> and <span class="math-container">$1\leq x\leq 1.5$</span>:</p> <p><span class="math-container">$$\frac{1}{a}\cdot\frac{1+x+(x-1)a^{2}}{1+x-(x-1)a^{2}}+\frac{1}{x}\cdot\frac{1+a+(a-1)x^{2}}{1+a-(a-1)x^{2}}\geq \sqrt{2}\sqrt{r\left(a^{2}x^{2}\right)+r\left(\frac{1}{a^{2}x^{2}}\right)}$$</span></p> <p>If true and proved, it provides a partial solution.</p> <p>Edit 07/01/2022:</p> <p>Define:</p> <p><span class="math-container">$$t\left(x\right)=\left(\ln\left(\left(-\frac{1}{x}+\frac{1}{x^{2}}+1\right)^{\frac{x}{2}}+1\right)\right)^{-1}$$</span></p> <p>As an accurate inequality we have for <span class="math-container">$x\geq 1$</span>:</p> <p><span class="math-container">$$\left(-\frac{1}{x}+\frac{1}{x^{2}}+1\right)^{-\frac{x}{2}}\leq \left(\ln\left(\left(-\frac{1}{x}+\frac{1}{x^{2}}+1\right)^{\frac{x}{2}}+1\right)\right)^{-1}+h(1)-t(1)$$</span></p> <p>Again, it seems we have for <span class="math-container">$0&lt;x\leq 1$</span>:</p> <p><span class="math-container">$$\left(\ln\left(\left(-\frac{1}{x}+\frac{1}{x^{2}}+1\right)^{\frac{x}{2}}+1\right)\right)^{-\frac{96}{100}}+h(1)-(t(1))^{\frac{96}{100}}\geq \left(-\frac{1}{x}+\frac{1}{x^{2}}+1\right)^{-\frac{x}{2}}$$</span></p> <p>If true, we can use the power series of <span class="math-container">$\ln(e^x+1)$</span> around zero and hope.</p> <p>Last edit 08/01/2022:</p> <p>I found a simpler one; it seems we have, firstly, on <span class="math-container">$(0,1]$</span>:</p> <p><span class="math-container">$$\left(1+\frac{1}{x^{2}}-\frac{1}{x}\right)^{-\frac{x}{2}}-\frac{x^{2}}{x^{2}+1}-\left(1-\frac{0.5\cdot1.25x}{x+0.25}\right)\leq 0$$</span></p> <p>And on <span class="math-container">$[1,8]$</span>:</p> <p><span class="math-container">$$\left(1+\frac{1}{x^{2}}-\frac{1}{x}\right)^{-\frac{x}{2}}-\frac{x^{2}}{x^{2}+1}-\frac{0.5\cdot1.25x}{x+0.25}\leq 0$$</span></p> <p>Question:</p> <p>How to show or disprove <span class="math-container">$(E)$</span>?</p> <p>Thanks.</p>
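Not a proof, but a quick numeric spot-check of $(E)$ at a few sample points (the sample points are my own choice; equality holds at $a=x=1$, so a tiny tolerance absorbs floating-point noise):

```python
import math

# f(x) = x**(x/(x^2+1)); g depends on both a and x; lhs is the left side of (E).
def f(x):
    return x ** (x / (x * x + 1))

def g(a, x):
    return math.sqrt(x ** a / a) + math.sqrt(a ** x / x)

def lhs(a, x):
    return math.sqrt(2.0) * math.sqrt(f(x * a) + f(1 / (x * a)))

samples = [(1.0, 1.0), (1.0, 2.0), (0.5, 1.0), (3.0, 0.2)]
for a, x in samples:
    print(a, x, lhs(a, x) <= g(a, x) + 1e-9)   # (E) holds at each sampled point
```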
GEdgar
442
<p>We will use the half-angle formula for tangent: <span class="math-container">$$ \tan\frac{\theta}{2} = \frac{1-\cos(\theta)}{\sin(\theta)} . $$</span> We want to get <span class="math-container">$2+\sqrt{3}$</span>. Remembering the basic values of sine and cosine, I see that <span class="math-container">$$ 2+\sqrt3 = \frac{1+\frac{\sqrt{3}}{2}}{\frac12} = \frac{1-\cos\frac{5\pi}{6}}{\sin\frac{5\pi}{6}} = \tan\frac{5\pi}{12} $$</span> and therefore <span class="math-container">$$\arctan(2+\sqrt3) = \frac{5\pi}{12}$$</span></p>
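A one-line numeric confirmation of the result (a sketch using the standard library):

```python
import math

# arctan(2 + sqrt(3)) should equal 5*pi/12.
val = math.atan(2 + math.sqrt(3))
print(val, 5 * math.pi / 12)   # both ≈ 1.30899693...
```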