Dataset schema:
qid: int64 (1 to 4.65M)
question: large_string (lengths 27 to 36.3k)
author: large_string (lengths 3 to 36)
author_id: int64 (-1 to 1.16M)
answer: large_string (lengths 18 to 63k)
4,019,561
<p>Let <span class="math-container">$M_{n\times n}$</span> be the set of invertible matrices with real entries. Find two matrices <span class="math-container">$A,B\in M_{n \times n}$</span> with the property that there does not exist a continuous function</p> <p><span class="math-container">$$f:[0,1]\to M_{n\times n}, \quad f(0)=A, \ f(1)=B. $$</span></p> <p>The only approach I could think of was via an inverse function with <span class="math-container">$f^{-1}(A)=0, \quad f^{-1}(B)=1,$</span> but this doesn't seem to get me anywhere.</p>
José Carlos Santos
446,262
<p>If <span class="math-container">$\det(A)&gt;0&gt;\det(B)$</span>, then there is no such function, because otherwise the range of the map <span class="math-container">$\det\circ f$</span> would contain <span class="math-container">$\det(A)$</span> and <span class="math-container">$\det(B)$</span>, but not <span class="math-container">$0$</span>.</p>
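A numerical illustration of the obstruction, with assumed matrices $A$ and $B$ (my choices, not from the question): sample $\det$ along the straight-line path and watch it change sign.

```python
import numpy as np

# det(A) > 0 > det(B): any continuous path between them would force
# det to pass through 0 (intermediate value theorem), i.e. leave GL_n.
A = np.eye(3)                    # det = +1
B = np.diag([-1.0, 1.0, 1.0])    # det = -1

t = np.linspace(0.0, 1.0, 101)
dets = np.array([np.linalg.det((1 - s) * A + s * B) for s in t])

assert dets[0] > 0 > dets[-1]                    # sign change along the path
assert np.any(np.isclose(dets, 0.0, atol=1e-9))  # a singular matrix is hit
```

The straight line is just one path, but the intermediate value theorem applies to every continuous path, which is the content of the answer.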
297,907
<p>Consider the ring $\mathbb{Z}_p$ and let $\zeta$ be a nontrivial $p$-th root of unity; in particular $\zeta \not \in \mathbb{Z}_p$. Denote by $\Phi _p(x)$ the $p$-th cyclotomic polynomial. Since $p$ is a prime we know that it has the shape $\Phi _p(x)= 1 + x +x^2 +\dots +x^{p-1}$. This gives rise to the quotient ring</p> <p>$$ \mathbb{Z}_p[X]/\langle\Phi_p(x)\rangle \cong \mathbb{Z}_p[\zeta] = \mathbb{Z}_p \oplus \zeta \mathbb{Z}_p \oplus \dots \oplus \zeta^{p-2} \mathbb{Z}_p $$</p> <p>which is obviously a free $\mathbb{Z}_p$-module of rank $p-1$. Denote $g=\zeta -1$.</p> <p>My question is how to see that $\mathbb{Z}_p[\zeta]$ is a local ring with maximal ideal $g \mathbb{Z}_p[\zeta]$?</p> <p>I tried to argue in the following way: an obvious observation is that $\Phi _p(g +1) =0$, and the formula above gives $$ \Phi_p(x + 1) = p + \binom{p}{2}x + \binom{p}{3}x^2 + \dots + \binom{p}{p - 1} x^{p - 2} + x^{p - 1}. $$</p> <p>In light of this I can conclude the following inclusions:</p> <p>$p \mathbb{Z}_p[\zeta] \subset g \mathbb{Z}_p[\zeta]$ and $g^{p-1} \mathbb{Z}_p[\zeta] \subset p \mathbb{Z}_p[\zeta]$, which imply $(g\mathbb{Z}_p[\zeta]) \cap\mathbb{Z}_p = p\mathbb{Z}_p$.</p> <p>From here I'm stuck.</p>
Neil Strickland
10,366
<p>Put $A=\mathbb{Z}_p[\zeta]$ and $\pi=\zeta-1$. Then $$ A/\pi=\mathbb{Z}_p[x]/(\Phi_p(x),x-1)= (\mathbb{Z}_p[x]/(x-1))/\Phi_p(x) = \mathbb{Z}_p/\Phi_p(1) = \mathbb{Z}/p. $$ This is a field, so $\pi$ generates a maximal ideal. Now suppose that $u$ lies outside this maximal ideal. Let $v$ be a lift in $\mathbb{Z}_p^\times$ of the image of $u$ in $A/\pi=\mathbb{Z}/p$, so $u=v(1-a\pi)$ for some $a\in A$. As $p$-th powers are additive mod $p$, in $A/p$ we have $\pi^p=\zeta^p-1=0$. This means that $\pi^p$ is divisible by $p$, so the series $\sum_i(a\pi)^i$ converges $p$-adically to an inverse for $1-a\pi$, and we deduce that $u$ is also invertible in $A$. As $A\pi$ is an ideal such that $A\setminus A\pi$ consists of units, we see that it is the unique maximal ideal, so $A$ is local.</p>
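Two small facts drive the argument: $\Phi_p(1)=p$ and $p\mid\binom{p}{k}$ for $0<k<p$. A quick sanity check of both (not part of the proof):

```python
from math import comb

for p in (3, 5, 7, 11):
    # Phi_p(1) = 1 + 1 + ... + 1 (p terms) = p
    assert sum(1 ** k for k in range(p)) == p
    # every middle binomial coefficient of a prime is divisible by p,
    # which is why pi^p = zeta^p - 1 = 0 holds in A/p
    assert all(comb(p, k) % p == 0 for k in range(1, p))
```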
142,993
<p>I'm challenging myself to figure out the mathematical expression of the number of possible combinations for certain parameters, and frankly I have no idea how.</p> <p>The rules are these:</p> <p>Take numbers 1...n. Given m places, and with <em>no repeated digits</em>, how many combinations of those numbers can be made?</p> <p>AKA</p> <ul> <li>1 for n=1, m=1 --> 1</li> <li>2 for n=2, m=1 --> 1, 2</li> <li>2 for n=2, m=2 --> 12, 21</li> <li>3 for n=3, m=1 --> 1,2,3</li> <li>6 for n=3, m=2 --> 12,13,21,23,31,32</li> <li>6 for n=3, m=3 --> 123,132,213,231,312,321</li> </ul> <p>I cannot find a way to express the left hand value. Can you guide me in the steps to figuring this out?</p>
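For what it's worth, the left-hand value being asked for is the number of $m$-permutations of $n$ symbols, $P(n,m)=n!/(n-m)!$; a brute-force check against the listed examples (a sketch):

```python
from math import factorial, perm
from itertools import permutations

# ordered selections of m distinct symbols from {1, ..., n}
for n in range(1, 6):
    for m in range(1, n + 1):
        brute = sum(1 for _ in permutations(range(1, n + 1), m))
        assert brute == perm(n, m) == factorial(n) // factorial(n - m)

print(perm(3, 2))  # 6, matching 12,13,21,23,31,32
```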
Douglas S. Stones
139
<p>Here's a computational solution. We can generate all such triples using this code in <a href="http://www.gap-system.org/" rel="nofollow">GAP</a>:</p> <pre><code>S:=Filtered(Tuples([1..8],2),c-&gt;(c[1]+c[2]) mod 2=0); T:=Filtered(Combinations(S,3),i-&gt;i[1][1]&lt;&gt;i[2][1] and i[1][1]&lt;&gt;i[3][1] and i[2][1]&lt;&gt;i[3][1] and i[1][2]&lt;&gt;i[2][2] and i[1][2]&lt;&gt;i[3][2] and i[2][2]&lt;&gt;i[3][2]); </code></pre> <p>This code gives 2496 triples of white squares, as per Wonder and Will Orrick's answers.</p>
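For readers without GAP, the same count can be reproduced with a short Python sketch:

```python
from itertools import combinations, product

# white squares: coordinates in 1..8 with even coordinate sum
white = [(r, c) for r, c in product(range(1, 9), repeat=2) if (r + c) % 2 == 0]

# triples of white squares, no two sharing a row or a column
triples = [t for t in combinations(white, 3)
           if len({r for r, _ in t}) == 3 and len({c for _, c in t}) == 3]
print(len(triples))  # 2496
```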
160,801
<p>Here is a vector </p> <p>$$\begin{pmatrix}i\\7i\\-2\end{pmatrix}$$</p> <p>Here is a matrix</p> <p>$$\begin{pmatrix}2&amp; i&amp;0\\-i&amp;1&amp;1\\0 &amp;1&amp;0\end{pmatrix}$$</p> <p>Is there a simple way to determine whether the vector is an eigenvector of this matrix?</p> <p>Here is some code for your convenience.</p> <pre><code>h = {{2, I, 0 }, {-I, 1, 1}, {0, 1, 0}}; y = {I, 7 I, -2}; </code></pre>
user541686
22,830
<p>Either</p> <pre><code>Reduce[h . y == x * y, x] </code></pre> <p>or</p> <pre><code>Reduce[(h - IdentityMatrix[Length[h]] x) . y == 0, x] </code></pre> <p>depending on whether you would rather type $y$ once or twice.</p>
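For readers outside Mathematica, the same test can be sketched in NumPy: $y$ is an eigenvector of $h$ exactly when $hy$ is a scalar multiple of $y$, i.e. when the matrix with columns $y$ and $hy$ has rank $1$.

```python
import numpy as np

h = np.array([[2, 1j, 0], [-1j, 1, 1], [0, 1, 0]])
y = np.array([1j, 7j, -2])

# y is an eigenvector iff h @ y and y are linearly dependent
is_eigenvector = np.linalg.matrix_rank(np.column_stack([y, h @ y])) == 1
print(is_eigenvector)  # False for this particular h and y
```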
725,602
<p>I am trying to prove the 'second' triangle inequality: $$||x|-|y|| \leq |x-y|$$</p> <p>My attempt:</p> <hr> <p>Proof: $|x-y|^2 = (x-y)^2 = x^2 - 2xy + y^2 \geq |x|^2 - 2|x||y| + |y|^2 = (||x|-|y||)^2$, since $xy \leq |x||y|$.</p> <p>Therefore $|x-y| \geq ||x|-|y||$.</p> <hr> <p>My questions are: Is this an acceptable proof, and are there alternative proofs that are more efficient?</p>
Robert Israel
8,508
<p>If $M = \pmatrix{0 &amp; 1\cr 1&amp; 1\cr}$, then $M^n = \pmatrix{F_{n-1} &amp; F_n\cr F_n &amp; F_{n+1}\cr}$ (easy to prove by induction). The left side of your equation is $\det(M^n) = (\det(M))^n$.</p>
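The resulting identity (presumably the equation in the question) is Cassini's $F_{n-1}F_{n+1}-F_n^2=(-1)^n$, since $\det(M)=-1$; a quick check:

```python
def fib(n):
    # iterative Fibonacci with F_0 = 0, F_1 = 1
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# det(M^n) = F_{n-1} F_{n+1} - F_n^2  and  (det M)^n = (-1)^n
for n in range(1, 15):
    assert fib(n - 1) * fib(n + 1) - fib(n) ** 2 == (-1) ** n
```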
353,480
<p>Is $f(x)=\ln(x)$ uniformly continuous on $(1,+\infty)$? If so, how to show it?</p> <p>I know how to show that it is not uniformly continuous on $(0,1)$, by taking $x=\frac{1}{\exp(n)}$ and $y = \frac{1}{\exp(n+1)}$.</p> <p>Also, on which interval does $\ln(x)$ satisfy the Lipschitz condition?</p>
Bombyx mori
32,240
<p>Hint: Try to show that $x\mapsto x$ is uniformly continuous on $\mathbb{R}$. Then try to establish $|\ln(x)-\ln(y)|&lt;|x-y|$ for $x,y\ge 1$, $x\ne y$. </p>
1,885,751
<p>An urn contains 15 balls (5 white, 10 black). Say we pick them one after the other without returning them. How many white balls are expected to have been drawn after 7 turns?</p> <p>I can calculate it by hand with a tree model, but is there a formula for this?</p>
samerivertwice
334,732
<p>The probability of getting zero 1's is $(\frac{5}{6})^5=\frac{3125}{7776}$</p> <p>There are 5 ways of getting one 1, so the probability is $5\times\frac{1}{6}\times(\frac{5}{6})^4$</p> <p>You want the probability of neither of these events:</p> <p>$$1-(\frac{5}{6})^5-(5\times\frac{1}{6}\times(\frac{5}{6})^4)=\frac{763}{3888}$$</p>
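The urn question above has a short closed form by linearity of expectation: each of the 7 draws is white with probability $5/15$ by symmetry (even without replacement), so $E=7\cdot\frac{5}{15}=\frac{7}{3}$. A sketch with a Monte Carlo check:

```python
import random
from fractions import Fraction

# linearity of expectation: no tree diagram needed
exact = 7 * Fraction(5, 15)   # = 7/3

# Monte Carlo check, drawing 7 balls without replacement
rng = random.Random(0)
urn = [1] * 5 + [0] * 10      # 1 = white, 0 = black
trials = 100_000
mean = sum(sum(rng.sample(urn, 7)) for _ in range(trials)) / trials

assert exact == Fraction(7, 3)
assert abs(mean - 7 / 3) < 0.03
```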
1,556,747
<p>$$\text{a)} \ \ \sum_{k=0}^{\infty} \frac{5^{k+1}+(-3)^k}{7^{k+2}}\qquad\qquad\qquad\text{b)} \ \ \sum_{k=1}^{\infty}\log\bigg(\frac{k(k+2)}{(k+1)^2}\bigg)$$</p> <p>I am trying to determine the values to which these series converge. I tried partial sums and got stuck... so I am thinking of the comparison test... Help</p>
Jan Eerland
226,665
<p>HINT:</p> <p>$$\sum_{k=0}^{\infty}\frac{5^{k+1}+(-3)^k}{7^{k+2}}=\lim_{m\to\infty}\sum_{k=0}^{m}\frac{5^{k+1}+(-3)^k}{7^{k+2}}=$$ $$\lim_{m\to\infty}\frac{182+\left(-\frac{1}{7}\right)^{m}3^{1+m}-5^{3+m}7^{-m}}{490}=$$ $$\frac{1}{490}\lim_{m\to\infty}\left(182+\left(-\frac{1}{7}\right)^{m}3^{1+m}-5^{3+m}7^{-m}\right)=$$ $$\frac{1}{490}\left(182+0-0\right)=\frac{182}{490}=\frac{13}{35}$$</p>
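A quick numerical check of the value $\frac{13}{35}$, plus part b), which telescopes (the $m$-th partial sum is $\log\frac{m+2}{2(m+1)} \to \log\frac12$) — a sketch:

```python
import math

# (a): partial sums approach 13/35
s_a = sum((5 ** (k + 1) + (-3) ** k) / 7 ** (k + 2) for k in range(200))
assert abs(s_a - 13 / 35) < 1e-12

# (b): the sum telescopes; the m-th partial sum is log((m+2)/(2(m+1)))
m = 100_000
s_b = sum(math.log(k * (k + 2) / (k + 1) ** 2) for k in range(1, m + 1))
assert abs(s_b - math.log(1 / 2)) < 1e-4
```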
125,165
<p>Hi friends,</p> <p>I have some questions concerning the critical values of motives, in the sense of Deligne. I will only look at motives of the form $h^i(X)$ where $X$ is a smooth projective algebraic variety over $\mathbb{Q}$. If I understand correctly, the notion of critical value depends only on the Hodge numbers. </p> <p>Introduce the notations:</p> <p>$$ \Gamma_\mathbb{C}(s):=2(2\pi)^{-s}\Gamma(s), \quad \Gamma_\mathbb{R}(s):=\pi^{-s/2}\Gamma(\frac{s}{2}) $$ where $\Gamma$ is the usual Gamma function. Then one defines the $L$-factor at infinity as follows. </p> <p>Let us first consider the case where $i$ is odd. Then:</p> <p>$$ L_\infty(h^i(X), s)=\prod_{p &lt; q} \Gamma_{\mathbb{C}}(s-p)^{h^{p, q}} $$ where $p+q=i$ and $h^{p, q}$ is the corresponding Hodge number. </p> <p><strong>Example</strong>: If $i=3$ and $h^{3, 0}=0$ (for instance $X$ is a hypersurface in $\mathbb{P}^4$), then </p> <p>$L_\infty(h^3(X), s)=\Gamma_{\mathbb{C}}(s-1)^{h^{2, 1}}$</p> <p>When $i$ is even, the definition is more involved, as one has also to consider the action of complex conjugation on $H^{p, p}$, which decomposes this space as $H^{p^+}\oplus H^{p^{-}}$. Let</p> <p>$$ h^{p^{\pm}}:=\dim_{\mathbb{C}} H^{p^{\pm}}. $$</p> <p>Then $$ L_\infty(h^i(X), s)=\prod_{p &lt; q} \Gamma_{\mathbb{C}}(s-p)^{h^{p, q}}\cdot \Gamma_{\mathbb{R}}(s-\frac{i}{2})^{h^{i/2+}} \Gamma_{\mathbb{R}}(s-\frac{i}{2}+1)^{h^{i/2-}} $$</p> <p>Having introduced this: an integer $n$ is said to be critical for $M=h^i(X)$ if $L_\infty(M, s)$ has no pole at $s=n$. </p> <p>In his paper Deligne also asks that $L_\infty(\hat{M}, 1-s)$ has no pole at $n$. Is that necessary, or is it a consequence of the former provided one has a functional equation?</p> <p>Anyway, I would like to know if for my example the critical integers are all integers $n \geq 2$. </p> <p><strong>Question 2</strong> What about a $K3$ surface? Can one determine the critical integers for the transcendental lattice? In that case there is a $\Gamma(s)$ coming from $h^{2,0}=1$. What about the real Gamma factor?</p> <p>Thanks for your help!</p>
François Brunault
6,506
<p>The factor $L_\infty(M,s)$ is holomorphic and non-vanishing for $\operatorname{Re}(s)$ large enough, so it is definitely necessary to also ask that $L_\infty(\hat{M},1-s)$ has no pole at the given integer. As an example, for the Riemann zeta function $\zeta(s)$, only the even integers $n \geq 2$ and the odd integers $n \leq -1$ are critical.</p> <p>I haven't done the computation of critical integers for $L$-functions of $K3$ surfaces, but there should be no difficulty in this computation as the answer depends only on the behaviour of the motive at infinity, which in this case just means (the cohomology of) the complex variety.</p>
3,282,206
<p>I'm familiar with Fermat's Little Theorem and Euler's Totient, but I'm wondering whether the fact that <span class="math-container">$a$</span> and <span class="math-container">$N$</span> share no factor other than <span class="math-container">$1$</span> (i.e. <span class="math-container">$\gcd(a,N)=1$</span>) has something to do with the fact that, given the prior constraints, there exists at least one <span class="math-container">$x$</span> (with <span class="math-container">$x$</span> among the least non-zero positive remainders modulo <span class="math-container">$N$</span>) such that <span class="math-container">$a^x \equiv 1 \pmod{N}$</span>. </p> <p>It doesn't appear to depend on the modulus being prime, since with a modulus of 10 the following is true:</p> <p><span class="math-container">$9^1\equiv9\pmod{10}$</span><br> <span class="math-container">$9^2\equiv1\pmod{10}$</span><br> <span class="math-container">$9^3\equiv9\pmod{10}$</span><br> <span class="math-container">$9^4\equiv1\pmod{10}$</span><br> <span class="math-container">$9^5\equiv9\pmod{10}$</span><br> <span class="math-container">$9^6\equiv1\pmod{10}$</span><br> <span class="math-container">$9^7\equiv9\pmod{10}$</span><br> <span class="math-container">$9^8\equiv1\pmod{10}$</span><br> <span class="math-container">$9^9\equiv9\pmod{10}$</span><br></p> <p>This is also really naive, but another observation: given the previous assumptions, the initial base will always be less than <span class="math-container">$N$</span>, and <span class="math-container">$a^1 \pmod{N}$</span> will always return <span class="math-container">$a$</span>; so given <span class="math-container">$\gcd(a,N)=1$</span>, <span class="math-container">$a$</span> will always produce at least one remainder other than <span class="math-container">$1$</span>.</p> <p>I know there's some intuition that I'm missing?</p>
İbrahim İpek
554,493
<p>It is because for a number to have an inverse (i.e. some element whose product with it leaves a residue of <span class="math-container">$1$</span>), we need the number to be relatively prime to the modulus. </p> <p><span class="math-container">$$a \cdot a^{x -1} \equiv_N 1$$</span></p> <p>We have an inverse of <span class="math-container">$a$</span> in this case. You see now? </p> <p>§ Further Explanation §</p> <p><span class="math-container">$$ax + Ny = 1$$</span></p> <p>So <span class="math-container">$x$</span> modulo <span class="math-container">$N$</span> is the inverse, but by Bézout's identity such <span class="math-container">$x, y$</span> exist only if <span class="math-container">$a$</span> and <span class="math-container">$N$</span> are relatively prime.</p>
3,282,206
<p>I'm familiar with Fermat's Little Theorem and Euler's Totient, but I'm wondering whether the fact that <span class="math-container">$a$</span> and <span class="math-container">$N$</span> share no factor other than <span class="math-container">$1$</span> (i.e. <span class="math-container">$\gcd(a,N)=1$</span>) has something to do with the fact that, given the prior constraints, there exists at least one <span class="math-container">$x$</span> (with <span class="math-container">$x$</span> among the least non-zero positive remainders modulo <span class="math-container">$N$</span>) such that <span class="math-container">$a^x \equiv 1 \pmod{N}$</span>. </p> <p>It doesn't appear to depend on the modulus being prime, since with a modulus of 10 the following is true:</p> <p><span class="math-container">$9^1\equiv9\pmod{10}$</span><br> <span class="math-container">$9^2\equiv1\pmod{10}$</span><br> <span class="math-container">$9^3\equiv9\pmod{10}$</span><br> <span class="math-container">$9^4\equiv1\pmod{10}$</span><br> <span class="math-container">$9^5\equiv9\pmod{10}$</span><br> <span class="math-container">$9^6\equiv1\pmod{10}$</span><br> <span class="math-container">$9^7\equiv9\pmod{10}$</span><br> <span class="math-container">$9^8\equiv1\pmod{10}$</span><br> <span class="math-container">$9^9\equiv9\pmod{10}$</span><br></p> <p>This is also really naive, but another observation: given the previous assumptions, the initial base will always be less than <span class="math-container">$N$</span>, and <span class="math-container">$a^1 \pmod{N}$</span> will always return <span class="math-container">$a$</span>; so given <span class="math-container">$\gcd(a,N)=1$</span>, <span class="math-container">$a$</span> will always produce at least one remainder other than <span class="math-container">$1$</span>.</p> <p>I know there's some intuition that I'm missing?</p>
Bernard
202,857
<p>Not sure this fully answers you question (not very clear to me), but <span class="math-container">$\gcd(a,N)=1$</span> implies <span class="math-container">$a$</span> is a unit modulo <span class="math-container">$N$</span>, so the congruence class of <span class="math-container">$a\bmod N$</span> generates a <em>finite</em> group, and it has a power congruent to <span class="math-container">$1$</span>. One may add that this group is a subgroup of the (multiplicative) group <span class="math-container">$U_N$</span> of units modulo <span class="math-container">$N$</span>, and by <em>Lagrange's theorem</em>, it has order a divisor of <span class="math-container">$$|U_N|=\varphi(N).$$</span> For your example, <span class="math-container">$\;\varphi(10)=\varphi(2)\,\varphi(5)=4$</span>.</p>
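A small sketch computing the multiplicative order and confirming Lagrange's divisibility for the example $a=9$, $N=10$:

```python
from math import gcd

def order_mod(a, n):
    """Multiplicative order of a mod n; requires gcd(a, n) == 1."""
    assert gcd(a, n) == 1
    x, k = a % n, 1
    while x != 1:
        x, k = (x * a) % n, k + 1
    return k

def phi(n):
    # Euler's totient by direct count
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

assert phi(10) == 4
assert order_mod(9, 10) == 2          # 9^2 = 81 = 1 mod 10
assert phi(10) % order_mod(9, 10) == 0  # the order divides phi(N)
```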
2,648,492
<p>I am having trouble with this problem. When they say spot I think they essentially mean the sum, so it's the probability that the sum of the dice is $11$ or less.</p> <p>I understand that there are $6^5$ combinations.</p> <p>I found 6 ways that it can equal $11$: $(2,3,2,2,2),(3,3,1,1,3),(4,4,1,1,1),(5,2,2,1,1),(6,2,1,1,1),(7,1,1,1,1)$, but I know there has to be an easier way than just counting. Is it like $\binom{6}{5}$? Thanks for the help.</p>
BruceET
221,800
<p>My guess is that you have been studying the Central Limit Theorem and that you are to assume that the total $T$ on five dice is approximately normal. Here is an outline of that method.</p> <p>It is not difficult to show that the number $X$ on a single die has $E(X) = 7/2$ and $var(X) = 105/36.$ </p> <p>Thus $T$ has $\mu=E(T) = 17.5,\,\sigma^2=Var(T) = 14.58333,$ and $\sigma=SD(T) = 3.818813.$</p> <p>Then, approximately, $T \sim \mathsf{Norm}(\mu, \sigma),$ and $$P(T \le 11) = P(T \le 11.5) = P\left(\frac{T-\mu}{\sigma} \le \frac{11.5-17.5}{3.819}\right) \approx P(Z \le -1.571) \approx 0.0581,$$ where $Z$ is standard normal and the probability can be evaluated using printed normal tables.</p> <p>As a reality check on the normal approximation, I simulated a million 5-dice experiments. One run of the simulation gave $P(T \le 11) = 0.0589$ (and also results for $E(T)$ and $SD(T)$ that are in good agreement with known values).</p> <pre><code>t = replicate(10^6, sum(sample(1:6, 5, rep=T))) mean(t &lt;= 11); mean(t); sd(t) ## 0.058977 # aprx P(T &lt;= 11) ## 17.50287 # aprx E(T) = 17.5 ## 3.824495 # aprx SD(T) = 3.819 </code></pre> <p>Below is a histogram of the simulated values of $T$ along with the approximating normal density curve. The fit is not perfect, but it is pretty good for values of $T$ below about 12, which is what we need.</p> <p><a href="https://i.stack.imgur.com/YpT54.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YpT54.png" alt="enter image description here"></a></p> <p><em>Notes:</em> Of course, an exact combinatorial solution for $P(T \le 11)$ would be best. You have listed possible ways for dice numbers to add to eleven, but you have not considered the various ways those numbers might be assigned to the individual dice, so $6/6^5$ is not the correct answer for $P(T = 11).$ Maybe one of our experts in combinatorics will find a way to get an exact solution. 
But I have seen similar problems before, where a normal approximation was intended.</p>
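For completeness, the exact probability is easy to get by exhausting the $6^5 = 7776$ equally likely outcomes, and it agrees with the normal approximation above:

```python
from itertools import product
from fractions import Fraction

# brute force over all 6^5 = 7776 outcomes of five fair dice
count = sum(1 for roll in product(range(1, 7), repeat=5) if sum(roll) <= 11)
p = Fraction(count, 6 ** 5)
print(count, float(p))  # 457 outcomes, P(T <= 11) = 457/7776 ≈ 0.0588
```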
1,293,207
<p>A ray of light travels from the point $A$ to the point $B$ across the border between two materials. In the first material the speed is $v_1$ and in the second it is $v_2$. Show that the journey takes the least possible time when Snell's law $$\frac{\sin \theta_1}{\sin \theta_2}=\frac{v_1}{v_2}$$ holds. </p> <p>To show that, do we have to use the method of Lagrange multipliers? </p> <p>But which function do we want to minimize?</p>
Demosthene
163,662
<p><strong>Without Lagrange multipliers</strong></p> <p>You don't need to use Lagrange multipliers. Snell's law can be derived directly from Fermat's principle of least time, which consists in minimizing the time taken by the ray of light to travel from one point to another. Take a look at the diagram below:</p> <p><img src="https://i.stack.imgur.com/e9FzZ.png" alt=""></p> <p>In the first medium, with refractive index $n_1$, light will travel at velocity $v_1=\dfrac{c}{n_1}$. In the second medium, at $v_2=\dfrac{c}{n_2}$. Therefore, the total time from point $P$ to $Q$ is $T(x)=\dfrac{d_1}{v_1}+\dfrac{d_2}{v_2}$.</p> <p>To find $d_1$ and $d_2$, the distances traveled in media $1$ and $2$ respectively, you can simply use the Pythagorean theorem: $$d_1=\sqrt{x^2+a^2},\quad d_2=\sqrt{(l-x)^2+b^2}$$ Thus: $$T(x)=\dfrac{\sqrt{x^2+a^2}}{v_1}+\dfrac{\sqrt{(l-x)^2+b^2}}{v_2}$$ Applying Fermat's principle, you then minimize $T(x)$: $$\dfrac{dT}{dx}=\dfrac{x}{v_1\sqrt{x^2+a^2}}+\dfrac{x-l}{v_2\sqrt{(l-x)^2+b^2}}=0$$ Using trigonometry: $$\dfrac{x}{\sqrt{x^2+a^2}}=\sin\theta_1,\quad \dfrac{l-x}{\sqrt{(l-x)^2+b^2}}=\sin\theta_2$$ Therefore: $$\dfrac{dT}{dx}=0\Longleftrightarrow \dfrac{\sin\theta_1}{v_1}=\dfrac{\sin\theta_2}{v_2}$$ Using the aforementioned relation between the speed of light in a given medium and the corresponding refractive index, we recover Snell's law: $$\dfrac{\sin\theta_1}{\sin\theta_2}=\dfrac{n_2}{n_1}$$</p> <hr> <p><strong>With Lagrange multipliers</strong></p> <p>We try to minimize the time taken by the ray of light to travel from $P$ to $Q$, which we can express as: $$c\cdot T(\theta_1,\theta_2)=\dfrac{an_1}{\cos\theta_1}+\dfrac{bn_2}{\cos\theta_2}$$ (You can check that this is equivalent to $T(x)$ in the previous part). 
We also derive the following constraint, from the fixed horizontal distance between $P$ and $Q$: $$l=x+(l-x)=a\tan\theta_1+b\tan\theta_2$$ We can therefore write down the Lagrangian: $$L(\theta_1,\theta_2,\lambda)=\dfrac{an_1}{\cos\theta_1}+\dfrac{bn_2}{\cos\theta_2}+\lambda(a\tan\theta_1+b\tan\theta_2-l)$$ And we solve the Lagrange equations for $\lambda$: $$\dfrac{\partial L}{\partial\theta_1}=0\Longrightarrow\dfrac{an_1\sin\theta_1+a\lambda}{\cos^2\theta_1}=0\Longrightarrow -\lambda=n_1\sin\theta_1$$ $$\dfrac{\partial L}{\partial\theta_2}=0\Longrightarrow\dfrac{bn_2\sin\theta_2+b\lambda}{\cos^2\theta_2}=0\Longrightarrow -\lambda=n_2\sin\theta_2$$ And indeed, combining the two results above yields Snell's law: $$\dfrac{\sin\theta_1}{\sin\theta_2}=\dfrac{n_2}{n_1}$$</p>
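A numerical sanity check of the first part, with assumed values for $a$, $b$, $l$, $v_1$, $v_2$ (my choices, not from the problem): bisect $T'(x)=0$ and compare the sine ratio with $v_1/v_2$.

```python
import math

a, b, l = 1.0, 2.0, 3.0   # assumed geometry
v1, v2 = 1.0, 0.7         # assumed speeds

def dT(x):
    # derivative of T(x) = sqrt(x^2+a^2)/v1 + sqrt((l-x)^2+b^2)/v2
    return x / (v1 * math.hypot(x, a)) + (x - l) / (v2 * math.hypot(l - x, b))

# T is convex, so dT is increasing: bisection on [0, l] finds the minimum
lo, hi = 0.0, l
for _ in range(100):
    mid = (lo + hi) / 2
    if dT(mid) < 0:
        lo = mid
    else:
        hi = mid
x = (lo + hi) / 2

sin1 = x / math.hypot(x, a)
sin2 = (l - x) / math.hypot(l - x, b)
assert abs(sin1 / sin2 - v1 / v2) < 1e-6   # Snell's law at the minimizer
```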
1,556,645
<p>I am new to the axiom of choice, and currently working my way through some exercises. I am struggling with the following exercise:</p> <p><strong>Exercise -</strong> Prove the Axiom of Choice (every surjective $f: X \to Y$ has a section) in the following two special cases:</p> <ol> <li>Y is finite</li> <li>X is countable</li> </ol> <p>(A section has been previously defined as a function $s: Y \to X$ such that $f(s(y)) = y$ for all $y \in Y$)</p> <p><strong>My confusion -</strong> Now from what I understand, in (1) you use surjectivity of $f$ to pick an $x \in X$ such that $f(x) = y_0$, proving the case $|Y| = 1$, and then use induction to prove it for general $|Y| = N$.</p> <p>I am a bit confused at (2) though. Would it be legal to take the previous argument and take the limit $N \to \infty$? I tried to google it but I got more confused after reading <a href="https://math.stackexchange.com/questions/241995/countable-infinity-and-the-axiom-of-choice">this question</a> and seeing other references to the axiom of countable choice, suggesting that this result cannot be proven.</p> <p>By the way, I am doing 'naive' set theory here; ZF/ZFC axiom systems and those kinds of things have not been discussed in the course.</p>
Asaf Karagila
622
<p>There is a terminological discrepancy here.</p> <p>The Axiom of Countable Choice refers to the statement "If $S$ is a countable family of non-empty sets, then $S$ admits a choice function".</p> <p>What you are trying to prove is that if $f\colon X\to Y$ and $X$ is countable, then $f$ admits a section. </p> <p>This is not the same as choosing from a countable family of non-empty sets. If $\{A_n\mid n\in\Bbb N\}$ is a countable family of non-empty sets then a choice function would be a function from $\Bbb N$ into $\bigcup A_n$, rather than a function from $\Bbb N$ onto the union. The section map, if anything, would map $a\in\bigcup A_n$ to the least $n\in\Bbb N$, such that $a\in A_n$.</p> <p>But an even more general thing is true. If $X$ can be well-ordered (so it can be uncountable as well), then by fixing a well-ordering of $X$ we can choose for each $y\in Y$ the minimal element of $\{x\in X\mid f(x)=y\}$ (which is non-empty by the surjectivity of $f$).</p>
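A finite toy model of the last paragraph's argument, with `range(12)` standing in for a well-ordered $X$: the section just sends each $y$ to the least preimage.

```python
# f: X -> Y surjective; the section sends y to the least x with f(x) = y,
# which exists because X is well-ordered and the fiber is non-empty.
def section(f, X, Y):
    return {y: min(x for x in X if f(x) == y) for y in Y}

f = lambda x: x % 3
s = section(f, range(12), range(3))
assert all(f(s[y]) == y for y in range(3))   # f(s(y)) = y
print(s)
```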
216,031
<p>Using image analysis, I have found the positions of a circular ring and imported them as <code>xx</code> and <code>yy</code> coordinates. I am using <code>ListInterpolation</code> to interpolate the data:</p> <pre><code>xi = ListInterpolation[xx, {0, 1}, InterpolationOrder -&gt; 4, PeriodicInterpolation -&gt; True, Method -&gt; "Spline"]; yi = ListInterpolation[yy, {0, 1}, InterpolationOrder -&gt; 4, PeriodicInterpolation -&gt; True, Method -&gt; "Spline"]; </code></pre> <p>I plot the results as:</p> <pre><code>splinePlot = ParametricPlot[{xi[s], yi[s]} , {s, 0, 1}, PlotStyle -&gt; {Red}] </code></pre> <p>and the result looks like: </p> <p><a href="https://i.stack.imgur.com/c2wRi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/c2wRi.png" alt="interpolation of data"></a></p> <p>I am trying to study this shape as it deforms, and I will need to look at the derivatives of this interpretation (notably, second derivatives). I know that there are physical constraints that will not let the local curvature at any point be <strong>larger</strong> than, for example <code>1/10</code> in the units show (so, a radius of curvature <code>10</code>). <strong>Is there a way that I can constrain the interpolation so that the local curvature never exceeds a given value?</strong></p> <p>Here is the data (Dropbox Link): <a href="https://www.dropbox.com/s/g9vajch0obbcplk/testShape.csv?dl=0" rel="nofollow noreferrer">https://www.dropbox.com/s/g9vajch0obbcplk/testShape.csv?dl=0</a></p>
MikeY
47,314
<p>You can use a Fourier Transform to take it into the frequency space, then truncate coming back with the inverse transform, thus putting a limit on the max frequency component and therefore max curvature.</p> <p>Load in the data</p> <pre><code>data = Import["testShape.csv"]; </code></pre> <p>Center the data about the mean (or median or other measure of centered-ness)</p> <pre><code>cc = Mean[data]; dc = # - cc &amp; /@ data; ListPlot[dc] </code></pre> <p><a href="https://i.stack.imgur.com/zp0SR.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zp0SR.jpg" alt="enter image description here"></a></p> <p>Convert to polar coordinates, making the first column <span class="math-container">$\theta$</span> and second column <span class="math-container">$r$</span>, and removing a repeated point. The <code>Union</code> also sorts it.</p> <pre><code>polar = Union@Transpose@Reverse@Transpose@(ToPolarCoordinates /@ dc); ListPlot@polar </code></pre> <p><a href="https://i.stack.imgur.com/Xg8HA.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Xg8HA.jpg" alt="enter image description here"></a></p> <p>This is a repeating sequence with uneven <span class="math-container">$\theta$</span> values. Do an interpolation of it. Add points before and after so the interpolation covers greater than <span class="math-container">$\{-\pi,\pi\}$</span></p> <pre><code>polpolpol = Join[# - {2 π, 0} &amp; /@ polar, polar, # + {2 π, 0} &amp; /@ polar]; if = Interpolation[polpolpol]; Plot[if[θ], {θ, -π, π}]; </code></pre> <p><a href="https://i.stack.imgur.com/XyByU.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XyByU.jpg" alt="enter image description here"></a></p> <p>Now create an evenly subdivided data set that we can do a Fourier transform on. We want <span class="math-container">$-\pi &lt; \theta \leq \pi$</span>. 
We can vary the number of points, here I chose 4096.</p> <pre><code>angles = Rest@Subdivide[-π, π, 2^12]; reg = if /@ angles; </code></pre> <p>Do a transform to go to the frequency domain space.</p> <pre><code>ft = Fourier[reg]; </code></pre> <p>Come back to the <span class="math-container">$\theta$</span> space, but only carry the first 10 components, dropping the high frequency signal (and therefore high curvature signal). </p> <pre><code>regTrunc = Re@InverseFourier[PadRight[Take[ft, 10], 2^12]]; </code></pre> <p>Compare</p> <pre><code>ListPlot[{reg, regTrunc}] </code></pre> <p><a href="https://i.stack.imgur.com/uTAGv.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uTAGv.gif" alt="enter image description here"></a></p> <p>Going back to cartesian...</p> <pre><code>ListPlot[{FromPolarCoordinates /@ Transpose@{regTrunc, angles}, FromPolarCoordinates /@ Transpose@{reg, angles}}, Joined -&gt; True] </code></pre> <p><a href="https://i.stack.imgur.com/rMImB.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rMImB.gif" alt="enter image description here"></a></p> <p>At this point, you have a closed form equation of <code>regTrunc</code> in terms of the <code>Fourier</code> expansion so you can ultimately work through to the curvature <a href="https://mathworld.wolfram.com/RadiusofCurvature.html" rel="nofollow noreferrer">using these relationships</a> and bound it. Remainder left as an exercise!</p>
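The same low-pass idea in a NumPy sketch (synthetic data in place of the Dropbox file; keeping only low-frequency modes is what bounds the attainable curvature):

```python
import numpy as np

# synthetic closed curve r(theta) with noise, in place of the imported data
n = 4096
theta = np.linspace(-np.pi, np.pi, n, endpoint=False)
rng = np.random.default_rng(0)
r = 10 + np.cos(3 * theta) + 0.2 * rng.normal(size=n)

# transform, zero out everything past the first 10 frequency components,
# and transform back
ft = np.fft.rfft(r)
ft[10:] = 0
r_smooth = np.fft.irfft(ft, n)

# the smooth curve tracks the noise-free signal closely
assert np.max(np.abs(r_smooth - (10 + np.cos(3 * theta)))) < 0.2
```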
2,194,376
<p>I'm having some trouble understanding the answers to the following questions:</p> <p><a href="https://i.stack.imgur.com/nnuu8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nnuu8.png" alt="enter image description here"></a></p> <p>(a)</p> <p>Why would it make sense for Eve to test out $\gcd(77, 35)$?</p> <p>I understand that she has the following mappings</p> <p>$e(x) = x^7 \bmod 35$</p> <p>$e(x) = x^7 \bmod 77$</p> <p>(b)</p> <p>I believe this answer follows from (a)</p>
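A sketch of the point of (a): a common factor of two RSA-style moduli factors both of them at once, which is exactly what Eve learns from the gcd.

```python
from math import gcd

n1, n2 = 35, 77
p = gcd(n1, n2)                    # 7: a shared prime factor
q1, q2 = n1 // p, n2 // p          # 5 and 11: both moduli are now factored
assert p == 7
assert (p * q1, p * q2) == (n1, n2)
```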
Mark Viola
218,419
<p>Let $f_n(x)$ be given by </p> <p>$$f_n(x)=\frac{\sin(x/n)\sin(2nx)}{x^2+4n}$$</p> <p>Using the <a href="https://en.wikipedia.org/wiki/Inequality_of_arithmetic_and_geometric_means" rel="nofollow noreferrer">AM-GM inequality</a>, we assert that</p> <p>$$|f_n(x)|\le \frac{|x|}{n(x^2+4n)}\le \frac{1}{4n^{3/2}}$$</p> <blockquote> <p>Inasmuch as $\displaystyle \sum_{n=1}^{\infty}\frac{1}{4n^{3/2}}&lt;\infty$, the series $\displaystyle \sum_{n=1}^\infty f_n(x)$ uniformly converges for $x\in \mathbb{R}$ and <strong><em>not only on compact subsets thereof</em></strong>.</p> </blockquote>
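A numerical spot-check (not a proof) of the bound $|f_n(x)|\le \frac{1}{4n^{3/2}}$ on a grid:

```python
import math

def f(n, x):
    return math.sin(x / n) * math.sin(2 * n * x) / (x * x + 4 * n)

# |sin(x/n)| <= |x|/n together with AM-GM (x^2 + 4n >= 4|x| sqrt(n))
# gives |f_n(x)| <= 1/(4 n^{3/2})
for n in range(1, 30):
    bound = 1 / (4 * n ** 1.5)
    assert max(abs(f(n, x / 10)) for x in range(-500, 501)) <= bound + 1e-15
```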
1,811,028
<p>I want to know which of the following methods is right for graphing $f(ax-b)$ from $f(x)$ and why:</p> <p>Method 1. First Horizontally Translate it ($f(x)$) by $b$, then Horizontally Stretch/Compress it by $a$.</p> <p>Method 2. First Horizontally Stretch/Compress it by $a$ then Horizontally Translate it by $b/a$.</p> <p>Method 3. First Horizontally Translate it by $b/a$ then Horizontally Stretch/Compress it by $a$.</p>
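One way to settle this numerically, with assumed sample values $f=\sin$, $a=2$, $b=3$ (my choices, not from the question): compose the two steps of each method and compare with $f(ax-b)$.

```python
import numpy as np

f = np.sin
a, b = 2.0, 3.0
x = np.linspace(-5.0, 5.0, 201)
target = f(a * x - b)

g1 = lambda u: f(u - b)        # Method 1: translate by b ...
m1 = g1(a * x)                 # ... then compress by a: f(ax - b)

g2 = lambda u: f(a * u)        # Method 2: compress by a ...
m2 = g2(x - b / a)             # ... then translate by b/a: f(a(x - b/a))

g3 = lambda u: f(u - b / a)    # Method 3: translate by b/a ...
m3 = g3(a * x)                 # ... then compress by a: f(ax - b/a)

print(np.allclose(m1, target), np.allclose(m2, target), np.allclose(m3, target))
# methods 1 and 2 reproduce f(ax - b); method 3 does not
```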
Tsemo Aristide
280,301
<p>Yes: a connected compact Lie group is, up to a finite covering, the product of a semi-simple Lie group and a torus $T^n$; thus its Lie algebra $g$ is the sum of a semi-simple Lie algebra $s$ and a commutative algebra $c$ such that $[s,c]=0$, and thus $[s+c,s+c]=[s,s]=s$.</p> <p><a href="https://en.wikipedia.org/wiki/Compact_Lie_algebra#Definition" rel="nofollow">https://en.wikipedia.org/wiki/Compact_Lie_algebra#Definition</a></p>
4,537,050
<p>Question 2 of Chapter 14 in Spivak's <em>Calculus</em> reads as follows:</p> <blockquote> <p>For each of the following <span class="math-container">$f$</span>, if <span class="math-container">$F(x)=\int_0^xf$</span>, at which points <span class="math-container">$x$</span> is <span class="math-container">$F'(x)=f(x)$</span>?</p> </blockquote> <p>Part (viii) of Question 2 uses the function:</p> <blockquote> <p><span class="math-container">$f(x)=1$</span> if <span class="math-container">$x=\frac{1}{n}$</span> for some <span class="math-container">$n$</span> in <span class="math-container">$\mathbb N$</span>, <span class="math-container">$f(x)=0$</span> otherwise.</p> </blockquote> <p>The solution manual for this problem reads as:</p> <blockquote> <p>All <span class="math-container">$x$</span> not of the form <span class="math-container">$\frac{1}{n}$</span> for some natural number <span class="math-container">$n$</span> [...are points where <span class="math-container">$F'(x)=f(x)$</span>]</p> </blockquote> <p>From this proposed solution, Spivak suggests that <span class="math-container">$F'(0)=f(0)$</span> (more specifically, it should be <span class="math-container">$F'^+(0)=f(0)$</span>...but for his problems, this is usually implicit).</p> <p>However, I believe <span class="math-container">$f$</span> is discontinuous at <span class="math-container">$0$</span> (because there is no right limit) and, moreover, the intermediate value property is not upheld for any <span class="math-container">$\delta_n=\frac{1}{n} \gt 0$</span> on the interval <span class="math-container">$[0,\delta_n]$</span>. Therefore, I do not think that <span class="math-container">$0$</span> should be included in the list of points that exhibit the feature of <span class="math-container">$F'(x)=f(x)$</span>.</p> <p>Is this correct?</p>
S.C.
544,640
<p>Here is another approach using the <strong>Darboux Integral</strong> definition.</p> <hr /> <p>Consider the function that is defined by the formula:</p> <p><span class="math-container">$f(x)= \begin{cases}1\quad &amp;\text {if $x=\frac{1}{n}$ for some $n \in \mathbb N$} \\0 \quad &amp;\text{otherwise} \end{cases}$</span></p> <p>We will show that <span class="math-container">$f$</span> is integrable on <span class="math-container">$[a,b]$</span> for any <span class="math-container">$a \leq b \in \mathbb R$</span>. Further, for any <span class="math-container">$a \leq b$</span>, we have that <span class="math-container">$\displaystyle \int_a^b f=0$</span>. This shows that <span class="math-container">$F$</span> is the constant <span class="math-container">$0$</span> and allows one to conclude that <span class="math-container">$F'(0)=f(0)$</span>.</p> <hr /> <p>Firstly, for any <span class="math-container">$a \leq 0$</span> and any <span class="math-container">$b\geq 1$</span>, we obviously have that <span class="math-container">$\displaystyle \int_a^0f=0$</span> and <span class="math-container">$\displaystyle \int_1^bf=0$</span>, respectively.</p> <p>Consider when <span class="math-container">$a=0$</span> and <span class="math-container">$b=1$</span>. Next, consider an arbitrary <span class="math-container">$\varepsilon \gt 0$</span>. Our objective is to construct a partition <span class="math-container">$P$</span> of <span class="math-container">$[0,1]$</span> such that: <span class="math-container">$U(f,P) - L(f,P) \lt \varepsilon$</span>, which is equivalent to showing that <span class="math-container">$f|_{[0,1]}$</span> is integrable. If we can show that this is true, we next note that the set of irrational numbers is dense in <span class="math-container">$\mathbb R$</span>. 
Making use of the following set <span class="math-container">$S=\left\{s \ |\ \exists n \in \mathbb N: s=\frac{1}{n}\right\}\subset \mathbb Q$</span>, it is clear that <span class="math-container">$S \cap (\mathbb R \setminus \mathbb Q)=\emptyset$</span>, which means that for any <span class="math-container">$[t_{i-1},t_i]$</span>, we know that there is an <span class="math-container">$x \in [t_{i-1},t_i]: f(x)=0$</span>. Therefore, for any <span class="math-container">$[t_{i-1},t_i]$</span> described by any partition <span class="math-container">$P$</span> of <span class="math-container">$[0,1]$</span>, we know that the corresponding subinterval infimum of <span class="math-container">$f$</span>, denoted as <span class="math-container">$m_i$</span>, is equal to <span class="math-container">$0$</span>, which implies that for any partition <span class="math-container">$P$</span> of <span class="math-container">$[0,1]$</span>, we have that <span class="math-container">$L(f,P)=0$</span>. This would then mean that <span class="math-container">$\displaystyle \int_0^1 f=0$</span> because the integrability of <span class="math-container">$f$</span> on <span class="math-container">$[0,1]$</span> implies that <span class="math-container">$\displaystyle \sup \{L(f,P):\ P \text{ is a partition of $[0,1]$}\}=\inf \{U(f,P):\ P \text{ is a partition of $[0,1]$}\}$</span>. Finally, if <span class="math-container">$\displaystyle \int_0^1 f$</span> exists, then <span class="math-container">$f$</span> is integrable on every subinterval, meaning that <span class="math-container">$\displaystyle \int_0^bf$</span> exists for <span class="math-container">$0 \leq b \lt 1$</span>. 
Using the exact logic as before regarding <span class="math-container">$m_i$</span>'s value, we know that for any partition <span class="math-container">$S$</span> of <span class="math-container">$[0,b]$</span>, all subintervals <span class="math-container">$[t_{i-1},t_i]$</span> defined by <span class="math-container">$S$</span> will have <span class="math-container">$f|_{[t_{i-1},t_i]}$</span>'s infimum as <span class="math-container">$0$</span>. This means that <span class="math-container">$\displaystyle \int_0^bf = 0$</span>. Therefore, we know that <span class="math-container">$\displaystyle F(x)= \int_0^x f = 0$</span> for any <span class="math-container">$x \geq 0$</span>. Thus, <span class="math-container">$F$</span> is a constant and its derivative is <span class="math-container">$0$</span> everywhere...note that its full derivative (not just right-derivative) is defined at <span class="math-container">$x=0$</span> because <span class="math-container">$\displaystyle \int_a^0 =0$</span> for any <span class="math-container">$a \lt 0$</span>.</p> <hr /> <p>To construct a partition <span class="math-container">$P$</span> of <span class="math-container">$[0,1]$</span> such that <span class="math-container">$U(f,P)-L(f,P) \lt \varepsilon$</span>, we will consider 2 different 'zones' each with the property that their upper-lower sums of <span class="math-container">$f$</span> (when restricted to the relevant subintervals comprising each zone) are less than <span class="math-container">$\frac{\varepsilon}{2}$</span>, each. 
When all such upper-lower sum differences across the 2 zones are added together (while keeping track of each zone's corresponding subinterval partitioning), we will have the necessary partition to show that <span class="math-container">$U(f,P)-L(f,P) \lt \varepsilon$</span>.</p> <p><em>First Zone</em></p> <p>For an arbitrary <span class="math-container">$\varepsilon \gt 0$</span>, we know that there is an <span class="math-container">$n \in \mathbb N$</span> such that <span class="math-container">$\frac{1}{n}\lt \frac{\varepsilon}{2}$</span>. Consider any arbitrary partition of <span class="math-container">$[0,\frac{1}{n}]$</span>: call it <span class="math-container">$P_1$</span>. On this interval, we know that <span class="math-container">$U(f,P_1) \leq 1\cdot (\frac{1}{n}-0)$</span> because <span class="math-container">$1$</span> is an upper bound to <span class="math-container">$f$</span>. Further, as noted earlier, <span class="math-container">$L(f,P_1)=0$</span>. Therefore <span class="math-container">$U(f,P_1)-L(f,P_1) \leq \frac{1}{n} \lt \frac{\varepsilon}{2}$</span>.</p> <p><em>Second Zone</em></p> <p>For our next zone, we will recall the <span class="math-container">$n$</span> used in the <em>first zone</em>. Because this is a finite number, we know that there is a finite list of different <span class="math-container">$i \in \mathbb N$</span> such that <span class="math-container">$\frac{1}{n} \leq \frac{1}{i} \leq 1$</span>. In particular, <span class="math-container">$i$</span> is any element in the set <span class="math-container">$\{1,2,3,\cdots,n\}$</span>. Note, then, that <span class="math-container">$\frac{1}{1}$</span> and <span class="math-container">$\frac{1}{n}$</span> are in this zone. Consider a closed neighborhood <span class="math-container">$[\frac{1}{i}-\delta,\frac{1}{i}+\delta]$</span> with <span class="math-container">$\delta \gt 0$</span> centered around each <span class="math-container">$\frac{1}{i}$</span>. 
There will be <span class="math-container">$n$</span> such neighborhoods (two of which will end up only contributing half neighborhoods). We know that on each of these neighborhoods, <span class="math-container">$f$</span>'s supremum will achieve the value of <span class="math-container">$M_i=1$</span>. Therefore, the upper-lower sum difference across these closed neighborhoods is no greater than <span class="math-container">$1\cdot n \cdot 2\delta$</span>, and we need this value to be <span class="math-container">$\lt \frac{\varepsilon}{2}$</span>. Therefore, we compute that <span class="math-container">$\displaystyle\delta \lt \frac{\varepsilon}{4n}$</span>. For convenience, we will further ensure that none of these neighborhoods overlap. To do this, we will impose an additional length constraint on <span class="math-container">$\delta$</span> such that <span class="math-container">$\delta$</span> is smaller than the distance between the closest points <span class="math-container">$\frac{1}{i}$</span> and <span class="math-container">$\frac{1}{j}$</span>.</p> <p>Because the derivative of the function <span class="math-container">$g(x)=\frac{1}{x}$</span> is <span class="math-container">$g'(x)=\frac{-1}{x^2}$</span>, an application of the Mean Value Theorem (plus some inequality manipulations) will provide us with the following: for any <span class="math-container">$m \lt n \in \mathbb N$</span>, we have that <span class="math-container">$\frac{1}{n-1}-\frac{1}{n} \lt \frac{1}{m-1}-\frac{1}{m}$</span>. Therefore, let us stipulate that <span class="math-container">$\delta \lt \min\left(\frac{1}{n-1}-\frac{1}{n}, \frac{\varepsilon}{4n}\right)$</span>. 
Now, union together all numbers that comprised the end points <span class="math-container">$\frac{1}{i}-\delta$</span> and <span class="math-container">$\frac{1}{i}+\delta$</span> for each <span class="math-container">$i \in \{1,2,\cdots,n\}$</span>: you can disregard <span class="math-container">$\frac{1}{n}-\delta$</span> and <span class="math-container">$\frac{1}{1}+\delta$</span>. Let this set equal <span class="math-container">$P_2$</span>. Importantly, for the space <em>between</em> the neighborhoods, e.g. <span class="math-container">$[\frac{1}{i}+\delta,\frac{1}{i-1}-\delta]$</span>, it should be obvious that there are no elements of the form <span class="math-container">$\frac{1}{j}$</span> in this interval. Therefore, on these subintervals, <span class="math-container">$f$</span>'s supremum takes on the value <span class="math-container">$0$</span>, so the upper-lower sum difference on these subintervals is always <span class="math-container">$0$</span>. Thus, we must have that <span class="math-container">$U(f,P_2)-L(f,P_2) \lt \frac{\varepsilon}{2}$</span>.</p> <p><em>Conclusion</em></p> <p>Noting that <span class="math-container">$P_1=\{0,\cdots,\frac{1}{n}\}$</span> and <span class="math-container">$P_2=\{\frac{1}{n},\cdots,1\}$</span>, their union <span class="math-container">$P=P_1 \cup P_2$</span> is necessarily a partition of <span class="math-container">$[0,1]$</span>. Therefore, we can conclude that <span class="math-container">$U(f,P)-L(f,P)=\left[U(f,P_1) - L(f,P_1)\right]+\left[U(f,P_2)-L(f,P_2)\right] \lt \frac{\varepsilon}{2}+\frac{\varepsilon}{2} = \varepsilon$</span>, as desired. Our previously established logic then follows.</p>
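As an aside (not part of the proof), the two-zone construction can be checked numerically. The sketch below is my own: the helper names and the particular choices $n = \lceil 2/\varepsilon\rceil + 1$ and $\delta = \frac{1}{2}\min\left(\frac{1}{n-1}-\frac{1}{n},\ \frac{\varepsilon}{4n}\right)$ are illustrative assumptions consistent with the argument above.

```python
import math

def contains_reciprocal(a, b):
    """True if [a, b] contains 1/k for some positive integer k."""
    if b <= 0:
        return False
    if a <= 0:
        return True  # 1/k -> 0, so infinitely many 1/k lie in (0, b]
    return math.ceil(1 / b) <= math.floor(1 / a)

def upper_sum(points):
    """Upper Darboux sum of f over the sorted partition `points`.
    The lower sum is always 0: every subinterval contains irrationals,
    where f vanishes."""
    return sum(points[i + 1] - points[i]
               for i in range(len(points) - 1)
               if contains_reciprocal(points[i], points[i + 1]))

def two_zone_partition(eps):
    """Partition of [0, 1] realizing U(f, P) - L(f, P) < eps."""
    n = math.ceil(2 / eps) + 1            # guarantees 1/n < eps/2
    gap = 1 / (n - 1) - 1 / n             # smallest spacing of the 1/i
    delta = min(gap, eps / (4 * n)) / 2   # strict inequalities: halve
    pts = {0.0, 1.0, 1 / n}
    for i in range(1, n + 1):
        pts.add(min(1.0, 1 / i + delta))
        pts.add(max(1 / n, 1 / i - delta))
    return sorted(pts)

for eps in (0.5, 0.1, 0.01):
    assert upper_sum(two_zone_partition(eps)) < eps
```

The first zone is the single interval $[0, \frac1n]$, contributing at most $\frac1n$; the second zone contributes at most $2n\delta$, and the gaps between neighborhoods contribute nothing, matching the estimates in the proof.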
552,474
<p>If there are, are there rings with unity <strong>(but not division rings)</strong> of this kind? Are there rings without unity of this kind?</p> <p>Sorry, I forgot to write the non-division condition.</p>
Community
-1
<p>Yes, in fact much more can be said: There are rings with $1$ such that every non-zero element has a multiplicative inverse. These rings are called <a href="https://en.wikipedia.org/wiki/Division_ring">division rings</a> or skew-fields.</p> <p>The real quaternions are an example of a non-commutative division ring.</p>
552,474
<p>If there are, are there rings with unity <strong>(but not division rings)</strong> of this kind? Are there rings without unity of this kind?</p> <p>Sorry, I forgot to write the non-division condition.</p>
Matt E
221
<p>Another kind of example: non-commutative deformations of commutative integral domains, e.g. $\mathbb C[x,y]$ with the commutation relation $[x,y] = 1$. </p>
1,868,797
<blockquote> <p><strong>Question:-</strong></p> <p>Three points represented by the complex numbers $a,b$ and $c$ lie on a circle with center $O$ and radius $r$. The tangent at $c$ cuts the chord joining the points $a$ and $b$ at $z$. Show that $$z=\dfrac{a^{-1}+b^{-1}-2c^{-1}}{a^{-1}b^{-1}-c^{-2}}$$</p> </blockquote> <hr> <p><strong>Attempt at a solution:-</strong> To simplify our problem let $O$ be the origin, then the equation of circle becomes $|z|=r$.</p> <p>Now, the equation of chord passing through $a$ and $b$ can be given by the following determinant</p> <p>$$\begin{vmatrix} z &amp; \overline{z} &amp; 1 \\ a &amp; \overline{a} &amp; 1 \\ b &amp; \overline{b} &amp; 1 \\ \end{vmatrix}= 0$$ which simplifies to $$z(\overline{a}-\overline{b})-\overline{z}(a-b)+(\overline{a}b-a\overline{b})=0 \tag{1}$$</p> <p>Now, for the equation of the tangent through $c$, I used the cartesian equation of tangent to a circle $xx_1+yy_1=r^2$ from which I got $$z\overline{c}+\overline{z}c=2r^2\tag{2}$$</p> <p>Now, from equation $(1)$, we get $$\overline{z}=\dfrac{z\left(\overline{a}-\overline{b}\right)+\left(a\overline{b}-\overline{a}b\right)}{(a-b)}$$</p> <p>Putting this in equation $(2)$, we get $$z=\dfrac{2r^2(a-b)+\left(a\overline{b}-\overline{a}b\right)c}{\left(a\overline{c}+\overline{a}c\right)-\left(b\overline{c}+\overline{b}c\right)}$$</p> <blockquote> <p>After this I am not able to get to anything of much value, so your help would be appreciated. And as always, more solutions are welcomed.</p> </blockquote>
Dietrich Burde
83,966
<p>A linear program is degenerate if, in a basic feasible solution, one of the basic variables takes on a zero value. Degeneracy is caused by redundant constraint(s); e.g., see <a href="http://optlab.mcmaster.ca/feng/4O03/LP.Degeneracy.pdf" rel="nofollow">this example</a>.</p>
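A minimal numeric illustration (my own toy example, not taken from the linked notes): at the vertex $(1,1)$ of $\{x\le 1,\ y\le 1,\ x+y\le 2\}$ the third constraint is redundant, so three constraints are tight at a single two-dimensional vertex; any basis describing that vertex must then carry a basic variable equal to zero, which is precisely degeneracy.

```python
# Vertex (1, 1) of {x <= 1, y <= 1, x + y <= 2}: the third constraint
# is redundant there, so three constraints are tight at one 2-D vertex.
# A basic feasible solution picks 2 tight constraints; the slack of the
# third is then a basic variable stuck at value zero (degeneracy).
x, y = 1.0, 1.0
slacks = [1 - x, 1 - y, 2 - x - y]
tight = [abs(s) < 1e-12 for s in slacks]
assert all(tight)      # three tight constraints, only two needed in 2-D
assert sum(tight) > 2  # more tight constraints than dimensions
```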
4,429,162
<p><a href="https://i.stack.imgur.com/9nrUn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9nrUn.png" alt="enter image description here" /></a></p> <p>This is from Rambo's Math subject GRE book.</p> <p>One solution to this problem is to note that the equation of the circle is <span class="math-container">$x^2+(y-k)^2=1$</span>. By taking the derivative of this and solving for <span class="math-container">$y'$</span>, and then setting this equal to the derivative of the parabola, <span class="math-container">$y'=2x$</span>, we can solve for <span class="math-container">$x$</span> and <span class="math-container">$k$</span> and get all the information we need to set up the necessary integral:</p> <p><span class="math-container">$2\int_0^{\frac{\sqrt{3}}{2}}\left(\frac{5}{4}-\sqrt{1-x^2}-x^2\right)dx=\frac{3\sqrt{3}}{4}-\frac{\pi}{3}$</span></p> <p>The expression involving a square root can be evaluated with trig substitution and power reduction identities.</p> <p>The author of the text mentions that this problem can also be solved by using some trigonometry and the fact that:</p> <p><span class="math-container">$2\int_0^{\frac{\sqrt{3}}{2}}x^2dx=\frac{\sqrt{3}}{4}$</span></p> <p>However, I can't figure out what he means by this. Can anyone show me how to solve this problem using less calculus and more trigonometry? Also, if anyone has any other alternative methods, I'd greatly appreciate it. Thanks!</p>
Mark McClure
21,361
<p>The area swept out by a segment of length <span class="math-container">$r$</span> as it rotates about one end point through the angle <span class="math-container">$\theta$</span> is <span class="math-container">$\frac{1}{2}r^2\theta$</span>. Now, since you already know how to find the points of intersection, it's pretty easy to use &quot;a little trigonometry&quot; to show that the marked angle is <span class="math-container">$\pi/3$</span>, so the full shaded sector spans an angle of <span class="math-container">$2\pi/3$</span>. Thus, the area of the shaded sector shown below is <span class="math-container">$\frac{1}{2}\cdot 1^2\cdot\frac{2\pi}{3}=\pi/3$</span>.</p> <p><a href="https://i.stack.imgur.com/1Bs3h.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1Bs3h.png" alt="enter image description here" /></a></p>
4,429,162
<p><a href="https://i.stack.imgur.com/9nrUn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9nrUn.png" alt="enter image description here" /></a></p> <p>This is from Rambo's Math subject GRE book.</p> <p>One solution to this problem is to note that the equation of the circle is <span class="math-container">$x^2+(y-k)^2=1$</span>. By taking the derivative of this and solving for <span class="math-container">$y'$</span>, and then setting this equal to the derivative of the parabola, <span class="math-container">$y'=2x$</span>, we can solve for <span class="math-container">$x$</span> and <span class="math-container">$k$</span> and get all the information we need to set up the necessary integral:</p> <p><span class="math-container">$2\int_0^{\frac{\sqrt{3}}{2}}\left(\frac{5}{4}-\sqrt{1-x^2}-x^2\right)dx=\frac{3\sqrt{3}}{4}-\frac{\pi}{3}$</span></p> <p>The expression involving a square root can be evaluated with trig substitution and power reduction identities.</p> <p>The author of the text mentions that this problem can also be solved by using some trigonometry and the fact that:</p> <p><span class="math-container">$2\int_0^{\frac{\sqrt{3}}{2}}x^2dx=\frac{\sqrt{3}}{4}$</span></p> <p>However, I can't figure out what he means by this. Can anyone show me how to solve this problem using less calculus and more trigonometry? Also, if anyone has any other alternative methods, I'd greatly appreciate it. Thanks!</p>
Math Lover
801,574
<p><a href="https://i.stack.imgur.com/UvIdR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UvIdR.png" alt="enter image description here" /></a></p> <p>Equation of the circle is <span class="math-container">$~x^2 + (y-k)^2 = 1$</span></p> <p>Equation of the parabola is <span class="math-container">$~y = x^2$</span></p> <p>If they are tangent to each other,</p> <p><span class="math-container">$y + (y-k)^2 = 1 \implies y^2 - (2k-1) y + (k^2-1) = 0~$</span> should have a double root.</p> <p>So we must have, <span class="math-container">$(2k-1)^2 - 4 (k^2-1) = 0 \implies k = \dfrac 54$</span></p> <p>At point of tangency, <span class="math-container">$ \displaystyle y = \frac{2k-1}{2} = \frac 34, x = \pm \sqrt y = \pm \frac{\sqrt{3}}{2}$</span></p> <p><span class="math-container">$ \displaystyle \cos \alpha = \frac{\sqrt3}{2} \implies \alpha = \frac {\pi}{3}$</span></p> <p>We can just focus on the desired area to the right of y-axis and then using symmetry, we can multiply the result by <span class="math-container">$2$</span>.</p> <p>Half of the desired area <span class="math-container">$~ = [OABC] - [BCE] - [OAB]$</span> (note we use the curved path <span class="math-container">$BE$</span> and <span class="math-container">$OB$</span>)</p> <p><span class="math-container">$ \displaystyle [OABC] = \frac 12 \left(\frac 54 + \frac 34\right) \cdot \frac {\sqrt3}{2} = \frac{\sqrt3}{2}$</span></p> <p><span class="math-container">$ \displaystyle [BCE] = \frac {\pi}{6}$</span></p> <p><span class="math-container">$ \displaystyle [OAB] = \int_0^{\sqrt3/2} x^2 ~ dx = \frac{\sqrt3}{8} $</span></p> <p>So, the desired area <span class="math-container">$ ~ = \displaystyle 2 \cdot \left(\frac{\sqrt3}{2} - \frac{\sqrt3}{8} - \frac{\pi}{6}\right) = \frac{3 \sqrt3}{4} - \frac{\pi}{3}$</span></p>
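As a numerical cross-check of the final value (a midpoint-rule sketch of my own, not part of the original answer):

```python
import math

def shaded_area(n=200000):
    """Midpoint-rule approximation of
    2 * integral over [0, sqrt(3)/2] of (5/4 - sqrt(1 - x^2) - x^2)."""
    a = math.sqrt(3) / 2
    h = a / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        total += 1.25 - math.sqrt(1 - x * x) - x * x
    return 2 * total * h

closed_form = 3 * math.sqrt(3) / 4 - math.pi / 3  # about 0.25184
assert abs(shaded_area() - closed_form) < 1e-7
```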
76,163
<p>Represent the position of a unit-length, oriented segment $s$ in the plane by the location $a$ of its <em>basepoint</em> and an orientation $\theta$: $s = (a,\theta)$. So $s$ can be viewed as a point in $\mathbb{R^2} \times \mathbb{S^1}$. Now I'll define a metric on this space. Define the distance $d(s_1,s_2)$ between two positions of unit-length segments as the average distance between their corresponding points: <br /> &nbsp; &nbsp; &nbsp; <img src="https://i.stack.imgur.com/4G9o9.jpg" alt="Segments" /> <br /> Above the distances are about 0.31, 0.61, and 0.53, left-to-right.</p> <p>So if the endpoints of $s_i$ are $a_i$ and $b_i$, then $d(s_1,s_2)$ is the average of the Euclidean distances between $(1-t) a_1 + t b_1$ and $(1-t) a_2 + t b_2$ as $t$ varies in $[0,1]$. This is indeed a metric, I believe, because the triangle inequality holds between corresponding points in three positions of the segment. This metric is intended to capture the intuitive notion of how much work is required to move $s_1$ to $s_2$.</p> <p>My question is: What are the geodesics in this space under this metric? Certainly a pure translation of $s$ is a geodesic. It seems that a pure rotation by at most $\pi$ of $s$ about a point $p \in s$ should also be a geodesic, but even this is not so clear to me. Certainly a rotation about a point not on $s$ is (generally) not a geodesic. Of course the main interest would be in geodesics that mix translation and rotation, showing (locally) optimal repositioning paths.</p> <p>I investigated this long ago when working on motion-planning algorithms ("moving a ladder"), but got quite blocked on this natural question. This superficially seems related to the <a href="http://en.wikipedia.org/wiki/Kakeya_set#Kakeya_needle_problem" rel="nofollow noreferrer">Kakeya needle problem</a>, but the metric I propose does not measure swept area. Perhaps it has been studied in some guise previously. If so, a pointer would be appreciated. 
Thanks!</p> <p><b>Addenda</b>. (<em>26Sep11</em>.) I just ran across this book, by V. A. Dubovit͡s︡kiĭ, which seems relevant: <em>The Ulam problem of optimal motion of line segments</em>, Translation Series in Mathematics and Engineering, Optimization Software, 1985. It may take some time for me to locate a copy...</p> <p>(<em>11Nov11</em>). I finally have this book in my hands. The Preface by Hestenes says,</p> <blockquote> <p>Dubovitskij has succeeded in solving in closed form a generalization of a problem of S[.] Ulam..: Among all continuous motions of an oriented line segment $S$ in $\mathbb{E}^n$ from one position to another, which preserves its length [...], find one for which the sum of the lengths of the paths swept by its endpoints is minimal.</p> </blockquote> <p>The concentration here on the motion of the endpoints&mdash;in contrast to the average distance metric I proposed&mdash;seems to render these results as not directly relevant, although nevertheless <em>quite</em> interesting.</p>
j.c.
353
<p>What follows are just some illustrations, not a full answer; please refer to Anton Petrunin's answer for a nice description of the 4-dimensional geometry that the original question is embedded in. </p> <p><a href="https://github.com/jcmathoverflow/jc-MO-nb/blob/master/segmentmetric.nb" rel="nofollow noreferrer">Here's</a> a bit of Mathematica code to generate some crude discrete approximations to geodesics. Given the two endpoint segments $s_0,s_1$, I create $n$ segments on the naïve $s_\alpha$ path I defined in the comments above, normalize their lengths to one, and then vary the positions of the endpoints of these intermediate segments with Mathematica's FindMinimum function to find an approximate geodesic. The code I wrote looks for a local minimum of an objective function with two terms: one is just the sum of the distances between all the intermediate segments and the other is a constraint that forces the distances between each pair of adjacent segments on the path to be equal (otherwise the intermediate segments all flow to the endpoints). The segments are all constrained to have unit length.</p> <p>As Mathematica is not really good at a serious minimization problem, the code runs rather slowly (finding a discrete path with $n=10$ takes about 7 minutes), but perhaps you might still be able to get some more direct intuition for the geodesics by playing around with it in different cases. It's a start, anyways.</p> <p>Below is an image of one example. The endpoints are a segment with endpoints $(0,0)$ to $(1,0)$ (orange) and $(1,0)$ to $(1,1)$ (red), and I approximated a geodesic with a chain of $n=10$ segments. The path begins with the orange segment sliding upwards a tiny bit to "yellow", and then the segments rotates counter clockwise and translate right until they reach red.</p> <p>The segment distance between red and orange is $\frac{1}{8}\left(4+\sqrt{2}\log(3+2\sqrt{2})\right)\approx0.8116$, but the length of the approximate geodesic is $0.865$. 
Each pair of "adjacent" segments in the picture has a segment distance roughly 0.096 between them.</p> <p>With $n=10$, the length has not converged to high accuracy! For $n=7,8,9$ the lengths of my approximations are $0.857,0.884,0.876$, respectively. In any case, it's clear that the length of the true geodesic will be greater than the distance between the endpoints. You might stare at this picture and imagine the true geodesic "hugging" the 3 dimensional unit length segment hypersurface in the 4D space, whereas the distance measures a "chord" through the 4 dimensional space of segments with arbitrary length.</p> <p><a href="https://i.stack.imgur.com/eJs14.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/eJs14.png" alt="10 point approximation to geodesic"></a></p> <p><strong>update</strong></p> <p>As Joseph O'Rourke points out in the comments, the code is not very good with (anti-)parallel configurations. What seems to work is to give either segment a slight perturbation.</p> <p>As an example, here's an approximate geodesic ($n=10$ points) between a segment with endpoints $(1,0)$ to $(2,0)$ (orange) and a segment with endpoints $(-1,0)$ to $(-1+\cos(\pi+0.001),\sin(\pi+0.001))\approx(-2+5\times10^{-7},0.001)$. The distance between the endpoints is $\approx3-2\times10^{-7}$, but the length of the depicted approximate geodesic is 3.30 (with steps of about 0.367).</p> <p>Interestingly, this approximate geodesic seems to break symmetry in two ways. First, the segments rotate clockwise while traveling left. Second, the picture doesn't have left-right symmetry, which means that the first half of the journey is different from the second half (an analogue of this can be seen in the example above too, which doesn't have reflection symmetry across the -45º line). Is the second effect just due to discretization or non-convergence of the minimization? 
I don't know how to show that the true geodesics must be symmetric if there's some symmetry relating the two endpoints.</p> <p><a href="https://i.stack.imgur.com/DaPoD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DaPoD.png" alt="10 point approximation to geodesic between nearly antiparallel segments"></a></p> <p>code snippet for this:</p> <pre><code>a1 = {1, 0}; b1 = {2, 0}; a2 = {-1, 0}; b2 = {-1 + Cos[\[Pi] + .001], Sin[\[Pi] + .001]}; Timing[anti3 = FindChain[{a1, b1}, {a2, b2}, 10]] SegmentDist2[{a1, b1}, {a2, b2}] Table[SegmentDist2[anti3[[i]], anti3[[i + 1]]], {i, 9}] Sum[SegmentDist2[anti3[[i]], anti3[[i + 1]]], {i, 9}] Graphics[Table[{Hue[i/Length[anti3]], Line[anti3[[i]]]}, {i, Length[anti3]}]] </code></pre>
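As an independent check of the closed-form distance quoted above, here is a small re-implementation of the question's average-distance integral in Python (my own sketch; it is not the notebook's <code>SegmentDist2</code>, but it should compute the same quantity):

```python
import math

def segment_dist(a1, b1, a2, b2, n=100000):
    """Average distance between corresponding points of two oriented
    segments: the integral over t in [0, 1] of
    |(1-t)(a1 - a2) + t(b1 - b2)|, approximated by the midpoint rule."""
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        dx = (1 - t) * (a1[0] - a2[0]) + t * (b1[0] - b2[0])
        dy = (1 - t) * (a1[1] - a2[1]) + t * (b1[1] - b2[1])
        total += math.hypot(dx, dy)
    return total * h

# distance between the orange segment (0,0)-(1,0) and the red (1,0)-(1,1)
d = segment_dist((0, 0), (1, 0), (1, 0), (1, 1))
closed_form = (4 + math.sqrt(2) * math.log(3 + 2 * math.sqrt(2))) / 8
assert abs(d - closed_form) < 1e-6  # both are about 0.8116
```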
76,163
<p>Represent the position of a unit-length, oriented segment $s$ in the plane by the location $a$ of its <em>basepoint</em> and an orientation $\theta$: $s = (a,\theta)$. So $s$ can be viewed as a point in $\mathbb{R^2} \times \mathbb{S^1}$. Now I'll define a metric on this space. Define the distance $d(s_1,s_2)$ between two positions of unit-length segments as the average distance between their corresponding points: <br /> &nbsp; &nbsp; &nbsp; <img src="https://i.stack.imgur.com/4G9o9.jpg" alt="Segments" /> <br /> Above the distances are about 0.31, 0.61, and 0.53, left-to-right.</p> <p>So if the endpoints of $s_i$ are $a_i$ and $b_i$, then $d(s_1,s_2)$ is the average of the Euclidean distances between $(1-t) a_1 + t b_1$ and $(1-t) a_2 + t b_2$ as $t$ varies in $[0,1]$. This is indeed a metric, I believe, because the triangle inequality holds between corresponding points in three positions of the segment. This metric is intended to capture the intuitive notion of how much work is required to move $s_1$ to $s_2$.</p> <p>My question is: What are the geodesics in this space under this metric? Certainly a pure translation of $s$ is a geodesic. It seems that a pure rotation by at most $\pi$ of $s$ about a point $p \in s$ should also be a geodesic, but even this is not so clear to me. Certainly a rotation about a point not on $s$ is (generally) not a geodesic. Of course the main interest would be in geodesics that mix translation and rotation, showing (locally) optimal repositioning paths.</p> <p>I investigated this long ago when working on motion-planning algorithms ("moving a ladder"), but got quite blocked on this natural question. This superficially seems related to the <a href="http://en.wikipedia.org/wiki/Kakeya_set#Kakeya_needle_problem" rel="nofollow noreferrer">Kakeya needle problem</a>, but the metric I propose does not measure swept area. Perhaps it has been studied in some guise previously. If so, a pointer would be appreciated. 
Thanks!</p> <p><b>Addenda</b>. (<em>26Sep11</em>.) I just ran across this book, by V. A. Dubovit͡s︡kiĭ, which seems relevant: <em>The Ulam problem of optimal motion of line segments</em>, Translation Series in Mathematics and Engineering, Optimization Software, 1985. It may take some time for me to locate a copy...</p> <p>(<em>11Nov11</em>). I finally have this book in my hands. The Preface by Hestenes says,</p> <blockquote> <p>Dubovitskij has succeeded in solving in closed form a generalization of a problem of S[.] Ulam..: Among all continuous motions of an oriented line segment $S$ in $\mathbb{E}^n$ from one position to another, which preserves its length [...], find one for which the sum of the lengths of the paths swept by its endpoints is minimal.</p> </blockquote> <p>The concentration here on the motion of the endpoints&mdash;in contrast to the average distance metric I proposed&mdash;seems to render these results as not directly relevant, although nevertheless <em>quite</em> interesting.</p>
Jean-Marc Schlenker
9,890
<p>A useful keyword for this problem is the Wasserstein distance, see <a href="http://en.wikipedia.org/wiki/Wasserstein_metric" rel="nofollow">wikipedia</a>. I believe that this Wasserstein distance, for $p=1$, provides a variant of the distance you're considering but for unoriented segments. There is a well-developed theory here, in particular concerning the geodesics and their behavior. Incidentally, things may turn out to be easier if you take the means of the squares of the distances, rather than of the distances.</p>
3,482,376
<blockquote> <p>Suppose <span class="math-container">$f$</span> is differentiable on <span class="math-container">$[0,\infty)$</span> and <span class="math-container">$\displaystyle \lim_{x \to \infty} \frac{f(x)}{x} = 0$</span>. Show that <span class="math-container">$\displaystyle \liminf_{x \to \infty}|f'(x)| = 0$</span>.</p> </blockquote> <p>What I have tried is to apply the mean value theorem to <span class="math-container">$\frac{f(y)}{y} - \frac{f(x)}{x}$</span> with <span class="math-container">$0 &lt; x &lt; y$</span>. There exists <span class="math-container">$c_x \in (x,y)$</span> such that <span class="math-container">$\dfrac{\frac{f(y)}{y} - \frac{f(x)}{x}}{y-x} = \frac{f'(c_x)}{c_x}- \frac{f(c_x)}{c_x^2}$</span>. From here we get <span class="math-container">$$|f'(c_x)| \leq \frac{c_x}{y-x}\left|\frac{f(y)}{y} - \frac{f(x)}{x} \right| + \left|\frac{f(c_x)}{c_x} \right| $$</span></p> <p>Now I want to set <span class="math-container">$y = 2x$</span> and take <span class="math-container">$\liminf$</span> of both sides to get <span class="math-container">$0$</span>, but I am unclear on:</p> <ol> <li><p>If <span class="math-container">$\liminf_{x \to \infty} |f'(c_x)| = 0$</span>, then <span class="math-container">$\liminf_{x \to \infty} |f'(x)| = 0$</span>. I think this is true?</p></li> <li><p>How to handle the factor <span class="math-container">$c_x/(y-x) = c_x/x$</span> when taking <span class="math-container">$\liminf$</span>?</p></li> </ol>
Pythagoras
701,578
<p>What you have is for <span class="math-container">$y=2x$</span> and <span class="math-container">$c_x\in (x,2x)$</span> that <span class="math-container">$$|f'(c_x)|\leq \frac{c_x}x\left|\frac{f(2x)}{2x}-\frac{f(x)}x\right|+\left|\frac{f(c_x)}{c_x}\right|,$$</span> where <span class="math-container">$1&lt;\frac {c_x}x&lt;2.$</span> Since <span class="math-container">$\frac{c_x}x$</span> is bounded and <span class="math-container">$\lim_{x\rightarrow \infty}\frac{f(x)}x=0$</span>, one has <span class="math-container">$$\lim_{x\rightarrow\infty}|f'(c_x)|=0,$$</span> which implies that <span class="math-container">$$\liminf_{x\rightarrow\infty}|f'(x)|=0,$$</span> as required.</p>
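For intuition (my own illustrative example, not part of the proof above): the hypotheses do not force $f'(x)$ to converge. With $f(x)=\sin x$ we have $f(x)/x\to 0$, while $f'(x)=\cos x$ oscillates forever; still $\liminf_{x\to\infty}|f'(x)|=0$, witnessed along $x_k=\frac{\pi}{2}+k\pi$.

```python
import math

# f(x) = sin(x): f(x)/x -> 0, but f'(x) = cos(x) has no limit.
# Along x_k = pi/2 + k*pi the derivative is (numerically) zero,
# so liminf |f'| = 0, exactly as the statement predicts.
xs = [math.pi / 2 + k * math.pi for k in range(10)]
assert all(abs(math.cos(x)) < 1e-9 for x in xs)
assert abs(math.sin(1e6) / 1e6) < 1e-5  # f(x)/x is small for large x
```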
1,000,448
<p>One of the $x$-intercepts of the function $f(x)=ax^2-3x+1$ is at $x=-1$. Determine $a$ and the other $x$-intercept.</p> <p>I happen to know that $a=-4$ and the other $x$-intercept is at $x=\frac{1}{4}$ but I don't know how to get there. I tried substituting $x=-1$ into the quadratic formula.</p> <p>$$ -1=\frac{-(-3) \pm \sqrt{(-3)^2-4a}}{2a} $$</p> <p>Solving for $a$ I managed to come up with $a=-\frac {5}{2}$ by some convoluted process but obviously that doesn't work.</p> <p>How do I properly solve for $a$, taking into account the known root of $x$?</p>
Community
-1
<p>One approach is given in a comment by Winther; here is another.<br> If $ax^2+bx+c=0$ has two roots $r_1$ and $r_2$, then $$r_1+r_2=\frac{-b}{a}$$ and $$r_1r_2=\frac{c}{a}$$</p>
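Applied to the question's polynomial $ax^2-3x+1$ with the known root $r_1=-1$, the two relations give $-1+r_2=\frac{3}{a}$ and $-r_2=\frac{1}{a}$, whence $a=-4$ and $r_2=\frac{1}{4}$ (the values stated in the question). A quick exact-arithmetic check:

```python
from fractions import Fraction

a, r1, r2 = Fraction(-4), Fraction(-1), Fraction(1, 4)
# Vieta for a*x^2 - 3*x + 1: sum of roots = 3/a, product of roots = 1/a
assert r1 + r2 == Fraction(3) / a
assert r1 * r2 == Fraction(1) / a
# both roots really satisfy a*x^2 - 3*x + 1 = 0
for r in (r1, r2):
    assert a * r**2 - 3 * r + 1 == 0
```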
2,668,826
<p>I am stuck on this result, which the professor wrote as "trivial", but I can't find a way out.</p> <p>I have the function </p> <p>$$f_{\alpha}(t) = \frac{1}{2\pi} \sum_{k = 1}^{+\infty} \frac{1}{k}\int_0^{\pi} (\alpha(p))^k \sin^{2k}(\epsilon(p) t)\ dp$$</p> <p>and he told us that for $t\to +\infty$ we have:</p> <p>$$f_{\alpha}(t) = \frac{1}{2\pi} \sum_{k = 1}^{+\infty} \frac{1}{4^k k}\binom{2k}{k}\int_0^{\pi} (\alpha(p))^k\ dp$$</p> <p>Now, it's all about the sine, since it's the only term with a dependence on $t$. Yet I cannot find a way to send</p> <p>$$\sin^{2k}(\epsilon(p) t)$$</p> <p>into</p> <p>$$\frac{1}{4^k}\binom{2k}{k}$$</p> <p>Any help? Thank you so much.</p> <p><strong>More Details</strong></p> <p>$$\epsilon(p)$$</p> <p>is a positive, bounded, and continuous function.</p> <p>The "true" starting point was</p> <p>$$f_{\alpha}(t) = -\frac{1}{2\pi}\int_0^{\pi} \log\left(1 - \alpha(p)\sin^2(\epsilon(p)t)\right)\ dp$$</p> <p>Then I thought I could expand the logarithm in series. Maybe I shouldn't have...</p>
user178256
178,256
<p>Let $$ I=\int_{0}^{1}\ln\left(\frac{1 + x^{11}}{1 + x^{3}}\right) \,{\mathrm{d}x \over \left(1 + x^{2}\right)\ln\left(x\right)}+\int_{1}^{\infty}\ln\left(\frac{1 + x^{11}}{1 + x^{3}}\right) \,{\mathrm{d}x \over \left(1 + x^{2}\right)\ln\left(x\right)}$$ $$\int_{1}^{\infty}\ln\left(\frac{1 + x^{11}}{1 + x^{3}}\right) \,{\mathrm{d}x \over \left(1 + x^{2}\right)\ln\left(x\right)}=-\int_{0}^{1}\ln\left(\frac{1 + y^{11}}{1 + y^{3}}\right) \,{\mathrm{d}y \over \left(1 + y^{2}\right)\ln\left(y\right)}+8\int_{0}^{1}\frac{dy}{1+y^2}$$ Put $$x=\frac{1}{y}$$ $$I=8\frac{\pi}{4}=2{\pi}$$</p>
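A numeric sanity check of the value $2\pi$ (my own midpoint-rule sketch; the tail beyond $X$ is approximated using the integrand's asymptotic form $\frac{8}{1+x^2}$, which follows from $\ln\frac{1+x^{11}}{1+x^3}\sim 8\ln x$):

```python
import math

def integrand(x):
    if abs(x - 1) < 1e-8:
        return 2.0  # removable singularity at x = 1: the limit is 4/2
    return math.log((1 + x**11) / (1 + x**3)) / ((1 + x**2) * math.log(x))

X, n = 50.0, 200000
h = X / n
main = sum(integrand((i + 0.5) * h) for i in range(n)) * h  # midpoint rule
tail = 8 * (math.pi / 2 - math.atan(X))  # integral of 8/(1+x^2) beyond X
assert abs(main + tail - 2 * math.pi) < 1e-3
```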
120,667
<p>Let $V, W$ be two finite-dimensional vector spaces, $f: V\rightarrow W$ a linear map, and $U \subseteq W$ a vector subspace. I'm trying to show that $(f^{-1}(U))^0 = f^*(U^0)$, i.e. that the annihilator of the inverse image of $U$ is the image of the annihilator under the the dual $f^*$ of $f$. $(f^{-1}(U))^0 \supseteq f^*(U^0)$ is easy to prove, but I'm having troubles with the other direction…</p> <p>(the annihilator for $Y \subseteq X$ vector spaces is defined here as $Y^0 := \{x^* \in X^*\ |\ \forall y \in Y: x^*(y) =0 \}$, with $X^*$ the dual space of $X$; for inner product spaces, $X^0 \cong X^\perp$).</p>
joriki
6,622
<p>Given a linear functional $v^*\in(f^{-1}(U))^0$ on $V$ that annihilates all of $f^{-1}(U)$, in particular it annihilates the kernel of $f$ (since $0\in U$ implies $\ker f\subseteq f^{-1}(U)$). Thus we can define a linear functional $w^*$ on the image of $f$ by $w^*(w)=v^*(v)$, using any preimage $v$ of $w$, since $v^*$ has the same value on all preimages of $w$; if $f$ is not surjective, extend $w^*$ to all of $W$ by letting it vanish on $U$ and on a complement of $\operatorname{im}f+U$ (this is consistent: if $f(v)\in U$, then $v\in f^{-1}(U)$, so $v^*(v)=0$). Then $w^*\in U^0$ and $v^*=f^*(w^*)$, thus $v^*\in f^*(U^0)$, and thus, since $v^*$ was arbitrary, $(f^{-1}(U))^0\subseteq f^*(U^0)$.</p>
4,025,279
<p>I have been reading material on uniformly continuous functions, and going through problems where we have to prove that a function is or is not uniformly continuous.</p> <p>A function defined on an interval I is said to be uniformly continuous on I if to each <span class="math-container">$\epsilon$</span> there exists a <span class="math-container">$\delta$</span> such that</p> <p><span class="math-container">$|f(x_1)-f(x_2)| &lt; \epsilon$</span>, for arbitrary points <span class="math-container">$x_1, x_2$</span> of I for which <span class="math-container">$|x_1-x_2|&lt;\delta$</span>.</p> <p>Now I understand the above definition in mathematical terms and am able to apply it to solve problems. But I don't understand why this definition was introduced, or what uniformly continuous functions look like. Given the graph of a function, how can I tell if it is uniformly continuous? When I imagine continuous functions, I have a picture in mind of how they look, but for uniformly continuous functions I can't think of any picture.</p> <p>I hope I have explained my question correctly. Please help me understand.</p>
Angelo
771,461
<p>For uniformly continuous functions, there is for each <span class="math-container">$\varepsilon &gt;0$</span> a <span class="math-container">$\delta &gt;0$</span> such that when we draw a rectangle around each point of the graph with width <span class="math-container">$2\delta$</span> and height <span class="math-container">$2\varepsilon$</span>, the graph lies completely inside the rectangle.</p> <p><a href="https://i.stack.imgur.com/RVoOU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RVoOU.png" alt="enter image description here" /></a></p> <p>For functions that are not uniformly continuous, there is an <span class="math-container">$\varepsilon &gt;0$</span> such that regardless of the <span class="math-container">$\delta &gt;0$</span>, when we draw a <span class="math-container">${\displaystyle 2\varepsilon \times 2\delta } $</span> rectangle around a point of the graph, there are points of the graph directly above or below the rectangle. There might be some center points for which the graph lies completely inside the rectangle, but this is not true for every center point.</p> <p><a href="https://i.stack.imgur.com/PGHyP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PGHyP.png" alt="enter image description here" /></a></p>
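The moving-rectangle picture can be made concrete. For $f(x)=x^2$ (continuous but not uniformly continuous on $\mathbb{R}$), take $\varepsilon=1$: for every $\delta>0$ the points $x_1=1/\delta$ and $x_2=1/\delta+\delta/2$ are closer than $\delta$, yet their images differ by $1+\delta^2/4>1$. A small Python sketch of this witness construction (my own illustration, not from the answer):

```python
def f(x):
    return x * x  # continuous everywhere, but not uniformly continuous on R

def witness(delta):
    # two points closer than delta whose images differ by more than 1
    x1 = 1.0 / delta
    x2 = x1 + delta / 2.0
    return x1, x2
```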
1,624
<p>For example, to change the color of each pixel to the mean color of the three channels, I tried</p> <pre><code>i = ExampleData[{"TestImage", "Lena"}]; Mean[i] </code></pre> <p>but it just remains unevaluated:</p> <p><img src="https://i.stack.imgur.com/K1RRR.png" alt="enter image description here"></p> <p>How can I read the colors of an image into a list or matrix and change the color codes and save it back to an image?</p>
Mr.Wizard
121
<p>Have you looked through the included documentation?</p> <p><a href="http://reference.wolfram.com/mathematica/guide/ImageProcessing.html" rel="nofollow">guide/ImageProcessing</a></p> <p><a href="http://reference.wolfram.com/mathematica/tutorial/ImageProcessing.html" rel="nofollow">tutorial/ImageProcessing</a></p> <p>There are also a number of entries on the Wolfram Blog that relate to image processing:</p> <p><a href="http://blog.wolfram.com/2008/12/01/the-incredible-convenience-of-mathematica-image-processing/" rel="nofollow">the incredible convenience of mathematica image processing</a></p> <p><a href="http://blog.wolfram.com/2010/11/03/amazeing-image-processing-in-mathematica/" rel="nofollow">aMAZEing image processing in mathematica</a></p> <p><a href="http://blog.wolfram.com/2010/12/21/the-battle-of-the-marlborough-maze-at-blenheim-palace-continues/" rel="nofollow">the battle of the marlborough maze at blenheim palace continues</a></p> <p><a href="http://blog.wolfram.com/2008/12/23/fun-with-line-art/" rel="nofollow">fun with line-art</a></p> <p><a href="http://blog.wolfram.com/2010/12/27/fixing-bad-astrophotography-using-mathematica-8-and-advanced-image-deconvolution/" rel="nofollow">fixing bad astrophotography</a></p> <p>To learn more on the subject ask Heike. ;-)</p>
935,331
<p>Previously, to integrate functions like $x(x^2+1)^7$ I used integration by parts. Today we were introduced to a new formula in class: $$\int f'(x)f(x)^n dx = \frac{1}{n+1} {f(x)}^{n+1} +c$$ I was wondering how and why this works. Any help would be appreciated. </p>
Claude Leibovici
82,404
<p><strong>Hint</strong></p> <p>Let $u'=f'(x)$ and $v=f(x)^n$; so $u=f(x)$, $v'=n f(x)^{n-1}f'(x)$. Integration by parts then gives $$I=\int f'(x)f(x)^n\, dx =f(x)^{n+1}-n\int f'(x)f(x)^n\,dx=f(x)^{n+1}-nI.$$</p> <p>Now solve for $I$; I am sure that you can take it from here.</p>
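Solving for $I$ gives $I=\frac{1}{n+1}f(x)^{n+1}+c$, and the formula is easy to test numerically; a Python sketch using the sample $f(x)=x^2+1$, $n=7$ from the question (the helper names are mine):

```python
def midpoint_integral(g, a, b, n=100_000):
    # simple midpoint-rule quadrature
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

f = lambda x: x**2 + 1     # sample f
fp = lambda x: 2 * x       # its derivative
p = 7                      # the exponent n in the formula

lhs = midpoint_integral(lambda x: fp(x) * f(x)**p, 0.0, 1.0)
rhs = (f(1.0)**(p + 1) - f(0.0)**(p + 1)) / (p + 1)   # = (2^8 - 1)/8
```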
230,167
<p>The following 3 matrices are useful when viewing matrices as vectors, known as commutation <span class="math-container">$K_n$</span>, symmetrizer <span class="math-container">$N_n$</span> and duplication <span class="math-container">$G_n$</span>. They are usually defined by their matrix relations below.</p> <p><span class="math-container">$$ \begin{eqnarray} \text{vec}A &amp; = &amp; K_n \text{vec}A' \\ \text{vec}((A+A')/2) &amp; = &amp;N_n \text{vec}A\\ \text{vec}A_s &amp; = &amp; G_n \text{vech}A_s\\ \end{eqnarray} $$</span></p> <p>Here <span class="math-container">$\text{vec}$</span> is a vectorization operator that stacks columns, and <span class="math-container">$\text{vech}$</span> is &quot;lower-half&quot; vectorization, stacking columns of the lower half of the matrix. <span class="math-container">$A$</span> is an arbitrary matrix, and <span class="math-container">$A_s$</span> is symmetric.</p> <p>(A <a href="https://math.stackexchange.com/questions/3830092/matrix-corresponding-to-transformation-ta-otimes-b-b-otimes-a">related matrix</a> commutes the order of the Kronecker product <span class="math-container">$A\otimes B\to B\otimes A$</span>.)</p> <p>I have an ugly-looking implementation of the first two matrices based on some algebra done by Seber, &quot;Handbook of Statistics&quot;, section 11.5. 
Can someone see a good way to implement the third matrix?</p> <p>Also wondering if there's some functionality in Mathematica that would obviate the need to do manual algebra and instead rely on matrix relations above.</p> <pre><code>(* Commutation matrix m,n *) Kmat[m_, n_] := Module[{x, X, before, after, positions, matrix}, X = Array[x, {m, n}]; before = Flatten@vec@X; after = Flatten@vec@Transpose[X]; positions = MapIndexed[{First@#2, First@Flatten@Position[before, #]} &amp;, after]; matrix = SparseArray[# -&gt; 1 &amp; /@ positions] // Normal ]; Nmat[n_] := (Normal@Kmat[n, n] + IdentityMatrix[n^2])/2; Gmat[n_] := Array[1 &amp;, {n, n (n + 1)/2}]; n = 3; Clear[a]; A = Array[a, {3, 3}]; As = Array[a[Min[#1, #2], Max[#1, #2]] &amp;, {n, n}]; vec[W_] := Transpose@{Flatten@Transpose[W]}; vech[W_] := Flatten@Table[Table[W[[i, j]], {i, j, n }], {j, 1, n}]; On[Assert]; Assert[vec[A] == Kmat[n, n].vec[A\[Transpose]]] Assert[vec[(A + Transpose[A])/2] == Nmat[n].vec[A] // Reduce] Assert[vec[As] == Gmat[n].vech[As] // Reduce] </code></pre> <h1>Official description</h1> <p>Here's description from Seber's Handbook of Statistics: (<span class="math-container">$G_3=D_3$</span> is duplication matrix, <span class="math-container">$H_3$</span> is it's inverse -- the elimination matrix, and <span class="math-container">$I_{(3,3)}$</span> is the commutation matrix)</p> <p><a href="https://i.stack.imgur.com/1MrSS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1MrSS.png" alt="enter image description here" /></a></p> <p><a href="https://i.stack.imgur.com/uvsD9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uvsD9.png" alt="enter image description here" /></a></p>
Daniel Huber
46,318
<p>If I understand correctly, then you only need the operator &quot;vec&quot;. This is clear for the first line. The second line applies vec to the symmetrized version of A: (A+Transpose[A])/2. And the third line applies &quot;vec&quot; to a symmetric matrix; the operator is the same, only the operand is different. Therefore in MMA I would code:</p> <pre><code>A = Array[a, {3, 3}]; As = Array[a[Min[#1, #2], Max[#1, #2]] &amp;, {n, n}]; vec[m_]:= List /@ Flatten@Transpose@m; </code></pre> <p>With this, your examples read:</p> <pre><code>vec[A] vec[(A + Transpose[A])/2] vec[As] </code></pre>
230,167
<p>The following 3 matrices are useful when viewing matrices as vectors, known as commutation <span class="math-container">$K_n$</span>, symmetrizer <span class="math-container">$N_n$</span> and duplication <span class="math-container">$G_n$</span>. They are usually defined by their matrix relations below.</p> <p><span class="math-container">$$ \begin{eqnarray} \text{vec}A &amp; = &amp; K_n \text{vec}A' \\ \text{vec}((A+A')/2) &amp; = &amp;N_n \text{vec}A\\ \text{vec}A_s &amp; = &amp; G_n \text{vech}A_s\\ \end{eqnarray} $$</span></p> <p>Here <span class="math-container">$\text{vec}$</span> is a vectorization operator that stacks columns, and <span class="math-container">$\text{vech}$</span> is &quot;lower-half&quot; vectorization, stacking columns of the lower half of the matrix. <span class="math-container">$A$</span> is an arbitrary matrix, and <span class="math-container">$A_s$</span> is symmetric.</p> <p>(A <a href="https://math.stackexchange.com/questions/3830092/matrix-corresponding-to-transformation-ta-otimes-b-b-otimes-a">related matrix</a> commutes the order of the Kronecker product <span class="math-container">$A\otimes B\to B\otimes A$</span>.)</p> <p>I have an ugly-looking implementation of the first two matrices based on some algebra done by Seber, &quot;Handbook of Statistics&quot;, section 11.5. 
Can someone see a good way to implement the third matrix?</p> <p>Also wondering if there's some functionality in Mathematica that would obviate the need to do manual algebra and instead rely on matrix relations above.</p> <pre><code>(* Commutation matrix m,n *) Kmat[m_, n_] := Module[{x, X, before, after, positions, matrix}, X = Array[x, {m, n}]; before = Flatten@vec@X; after = Flatten@vec@Transpose[X]; positions = MapIndexed[{First@#2, First@Flatten@Position[before, #]} &amp;, after]; matrix = SparseArray[# -&gt; 1 &amp; /@ positions] // Normal ]; Nmat[n_] := (Normal@Kmat[n, n] + IdentityMatrix[n^2])/2; Gmat[n_] := Array[1 &amp;, {n, n (n + 1)/2}]; n = 3; Clear[a]; A = Array[a, {3, 3}]; As = Array[a[Min[#1, #2], Max[#1, #2]] &amp;, {n, n}]; vec[W_] := Transpose@{Flatten@Transpose[W]}; vech[W_] := Flatten@Table[Table[W[[i, j]], {i, j, n }], {j, 1, n}]; On[Assert]; Assert[vec[A] == Kmat[n, n].vec[A\[Transpose]]] Assert[vec[(A + Transpose[A])/2] == Nmat[n].vec[A] // Reduce] Assert[vec[As] == Gmat[n].vech[As] // Reduce] </code></pre> <h1>Official description</h1> <p>Here's description from Seber's Handbook of Statistics: (<span class="math-container">$G_3=D_3$</span> is duplication matrix, <span class="math-container">$H_3$</span> is it's inverse -- the elimination matrix, and <span class="math-container">$I_{(3,3)}$</span> is the commutation matrix)</p> <p><a href="https://i.stack.imgur.com/1MrSS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1MrSS.png" alt="enter image description here" /></a></p> <p><a href="https://i.stack.imgur.com/uvsD9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uvsD9.png" alt="enter image description here" /></a></p>
flinty
72,682
<p>I hope this does the trick. It's more code than yours but I've come at it from a slightly different angle - I suppose another implementation can't hurt right? I've used <code>FindPermutation</code> to get <span class="math-container">$K_n$</span> and <code>SolveAlways</code> for non-square <span class="math-container">$G_n$</span>:</p> <pre><code>vec[W_] := Join @@ Transpose[W] vech[W_] := With[{n = Length[W]}, Flatten[MapThread[#1[[-#2 ;;]] &amp;, {Transpose[W], Reverse@Range[n]}]]] getperm[perm_, n_] := Permute[IdentityMatrix[n*n], perm] kcomm[n_] := With[{mtx = ArrayReshape[Range[n*n], {n, n}]}, getperm[FindPermutation[vec[Transpose[mtx]], vec[mtx]], Length[mtx]]] nsymm[n_] := (kcomm[n] + IdentityMatrix[n^2])/2 gdupe[n_] := With[{mtx = Array[a[Min[#1, #2], Max[#1, #2]] &amp;, {n, n}], gmatrix = Array[x, {n*n, n (n + 1)/2}]}, gmatrix /. First[SolveAlways[vec[mtx] == gmatrix.vech[mtx], Variables[mtx]]]] (* tests *) d = 3; m = RandomReal[{-1, 1}, {d, d}]; kcomm[d].vec[Transpose[m]] == vec[m] (* True *) nsymm[d].vec[m] == vec[(m + Transpose[m])/2] (* True *) vec[Normal[Symmetrize[m]]] == gdupe[d].vech[Normal[Symmetrize[m]]] (* True *) </code></pre>
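The duplication-matrix construction is language-agnostic, so it may also help to see it spelled out positionally (a stdlib-only Python sketch; my own illustration, independent of the Mathematica code above): entry $(i,j)$ of a symmetric matrix sits at column-major position $jn+i$ of $\operatorname{vec}$, and at position $jn-j(j-1)/2+(i-j)$ of $\operatorname{vech}$ after folding $(i,j)$ into the lower triangle.

```python
def vec(A):
    # column stacking, matching Flatten@Transpose in the question
    n = len(A)
    return [A[i][j] for j in range(n) for i in range(n)]

def vech(A):
    # stack the lower triangle column by column
    n = len(A)
    return [A[i][j] for j in range(n) for i in range(j, n)]

def duplication_matrix(n):
    # G_n with vec(A_s) == G_n . vech(A_s) for symmetric A_s
    G = [[0] * (n * (n + 1) // 2) for _ in range(n * n)]
    def vech_index(i, j):
        i, j = max(i, j), min(i, j)  # fold into the lower triangle
        return j * n - j * (j - 1) // 2 + (i - j)
    for j in range(n):
        for i in range(n):
            G[j * n + i][vech_index(i, j)] = 1
    return G

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]
```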
129,890
<p>The code</p> <pre><code>x0 = 0.25; T = 20; u1 = -0.03; u2 = 0.07; u3 = -0.04; a = 1/100; t0 = 5; omega = 2; a = 0.01; dis[x_] := a/(Pi (x^2 + a^2)) P[t_] := If[t &lt;= t0, Sin[omega t], 0] u[t_] := u1 HeavisideTheta[t - 0.8] + u2 HeavisideTheta[t - 1.64] + u3 HeavisideTheta[t - 3.33] pde = a D[w[x, t], {x, 4}] + D[w[x, t], {t, 2}] - P[t] dis[x - x0]; sol = NDSolve[{pde == 0, w[0, t] == u[t], w[1, t] == 0, Derivative[2, 0][w][0, t] == 0, Derivative[2, 0][w][1, t] == 0, w[x, 0] == 0, Derivative[0, 1][w][x, 0] == 0}, w[x, t], {x, 0, 1}, {t, 0, 80}, Method -&gt; "StiffnessSwitching"]; </code></pre> <p>gives an error</p> <pre><code>NDSolve::eerr: Warning: scaled local spatial error estimate of 246.5944594961422` at t = 80.` in the direction of independent variable x is much greater than the prescribed error tolerance. Grid spacing with 25 points may be too large to achieve the desired accuracy or precision. A singularity may have formed or a smaller grid spacing can be specified using the MaxStepSize or MinPoints method options. </code></pre> <p>I think it's because of the boundary conditions on derivatives. I have tried Automatic, MethodOfLines, etc.; it does not help. I tried</p> <pre><code>Method -&gt; {"MethodOfLines", "SpatialDiscretization" -&gt; {"TensorProductGrid", "MinPoints" -&gt; 100}} </code></pre> <p>which works fine for a second-order equation subject to boundary conditions containing only a first-order derivative. Any thoughts, hints?</p>
tablecircle
43,920
<p>I think that maybe Mathematica can't solve the boundary condition on the second order derivative problem. (I'm not sure; I'm waiting for other experts' answers!) So, I tried to rewrite your code by defining <code>s[x, t] == D[w[x, t], {x, 2}]</code>.</p> <p>Therefore, your pde and bc become:</p> <pre><code>pde = {a D[s[x, t],{x, 2}]+D[w[x, t],{t, 2}]-P[t] dis[x- x0]==0, s[x, t] == D[w[x, t], {x, 2}]}; bc = {w[0, t] == u[t], w[1, t] == 0, s[0, t] == 0, s[1, t] == 0, w[x, 0] == 0, Derivative[0, 1][w][x, 0] == 0}; sol = NDSolve[{pde, bc}, {s, w}, {x, 0, 1}, {t, 0, 80}]; Plot3D[w[x, t] /. sol, {x, 0, 1}, {t, 0, 80}] </code></pre> <p>with result:</p> <p><a href="https://i.stack.imgur.com/y7zEH.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/y7zEH.jpg" alt="enter image description here"></a></p>
987,620
<p>$P$ and $Q$ are two distinct prime numbers. How can I prove that $\sqrt{PQ}$ is an irrational number?</p>
Ross Millikan
1,827
<p>If you follow through the usual proof that $\sqrt 2$ is irrational, it goes through in this case as well. One such proof (for $\sqrt 3$) is <a href="https://math.stackexchange.com/questions/131391/proving-sqrt-3-is-irrational">here</a>, but a search for irrational+sqrt will find many more.</p>
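For completeness, the adapted argument runs as follows (a sketch, with $p\neq q$ prime):

```latex
\text{Suppose } \sqrt{pq}=\tfrac{a}{b} \text{ with } a,b\in\mathbb{Z}_{>0},\ \gcd(a,b)=1.
\text{Then } pq\,b^{2}=a^{2}, \text{ so } p\mid a^{2}, \text{ hence } p\mid a
\text{ by Euclid's lemma. Writing } a=pa_{1} \text{ gives } q\,b^{2}=p\,a_{1}^{2},
\text{ so } p\mid q\,b^{2}. \text{ But } p\nmid q \text{ (distinct primes) and }
p\nmid b \text{ (since } p\mid a \text{ and } \gcd(a,b)=1\text{), a contradiction.}
```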
36,477
<p>The length, width and height of a rectangular box are measured to be 3cm, 4cm and 5cm respectively, with a maximum error of 0.05cm in each measurement. Use differentials to approximate the maximum error in the calculated volume.</p> <p><br><br> Please help</p>
Ross Millikan
1,827
<p>Hint: express V as a function of L, W, H. Take the partial derivative with respect to each variable to see how V changes with each.</p>
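Carrying the hint through with the given numbers (a sketch; note this does give away the final figure):

```python
# V = L*W*H, so dV = (dV/dL) dL + (dV/dW) dW + (dV/dH) dH
#              = W*H*dL + L*H*dW + L*W*dH
L, W, H = 3.0, 4.0, 5.0
dL = dW = dH = 0.05

dV = W * H * dL + L * H * dW + L * W * dH           # differential estimate
exact = (L + dL) * (W + dW) * (H + dH) - L * W * H  # true worst-case error
```

The estimate is $dV=(20+15+12)\cdot 0.05=2.35\ \text{cm}^3$, close to the exact worst-case error of about $2.38\ \text{cm}^3$.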
129,295
<p>$$\int\sqrt{x^2 - 2x}\,dx$$</p> <p>I think I should be doing trig substitution, but which? I completed the square giving </p> <p>$$\int\sqrt{(x-1)^2 -1}\,dx$$</p> <p>But the closest I found is for</p> <p>$$\frac{1}{\sqrt{a^2 - (x+b)^2}}$$ </p> <p>So I must add a $-$, but how? </p>
David Mitra
18,986
<p>In hopes of taking advantage of the identity $$\tan^2\theta=\sec^2\theta-1,$$ make the substitution: $$x-1=\sec\theta, \quad dx=\sec\theta\,\tan\theta\,d\theta.$$ Then we have $$ \int \sqrt{(x-1)^2-1}\, dx = \int\sqrt{\sec^2\theta -1}\, \sec\theta\tan\theta\,d\theta =\int\tan^2\theta\sec\theta\,d\theta=\int (\sec^3\theta-\sec\theta)\,d\theta. $$</p> <p><br></p> <p>To integrate $\sec\theta$, you can use a trick: $$\tag{1} \int\sec\theta\,d\theta =\int\sec\theta \cdot {\textstyle{\sec\theta+\tan\theta\over\sec\theta+\tan\theta}}\,d\theta =\int{ {\textstyle{\sec^2\theta+\sec\theta\tan\theta\over \sec\theta+\tan\theta}}}d\theta=\ln|\sec\theta+\tan\theta|+C\ \ \ \ \ \ \ $$</p> <p>To integrate $\sec^3\theta$, you can start with integration by parts: $$\eqalign{ \color{maroon}{\int\sec^3\theta\,d\theta}=\int\underbrace{\sec \theta}_u \,\underbrace{\sec^2\theta d\theta}_{dv} &amp;= \underbrace{\sec \theta}_u \,\underbrace{\tan\theta}_{v} - \int\underbrace{\tan \theta}_v\, \underbrace{\sec \theta\tan\theta\, d\theta}_{du}\cr &amp;=\sec\theta\tan\theta -\int( \sec^3\theta-\sec\theta)\,d\theta\cr &amp;=\sec\theta\tan\theta \color{maroon}{-\int \sec^3\theta\,d\theta}+\int \sec\theta\,d\theta. } $$ Then, in the above, solve for $\color{maroon}{\int\sec^3\theta\,d\theta}$: $$\tag{2} \int\sec^3\theta\,d\theta={1\over2}\sec\theta\tan\theta+{1\over2}\int\sec\theta\,d\theta ={1\over2}\sec\theta\tan\theta+{1\over2}\ln|\sec\theta+\tan\theta|+C. $$</p> <p>So, using $(1)$ and $(2)$: $$ \int(\sec^3\theta-\sec\theta)\,d\theta ={1\over2}\sec\theta\tan\theta -{1\over2}\ln|\sec\theta+\tan\theta|+C. $$ Finally put everything back in terms of $x$ using $\theta=\sec^{-1}(x-1)$ (which I'll leave to you).</p>
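Back-substituting $\sec\theta=x-1$, $\tan\theta=\sqrt{x^2-2x}$ gives (constants aside) $F(x)=\tfrac12(x-1)\sqrt{x^2-2x}-\tfrac12\ln\!\left|x-1+\sqrt{x^2-2x}\right|$ for $x\ge 2$. A quick numerical spot check of that antiderivative on $[2,3]$ (a Python sketch; the names are mine):

```python
import math

def F(x):
    # candidate antiderivative of sqrt(x^2 - 2x), valid for x >= 2
    r = math.sqrt(x * x - 2 * x)  # = sqrt((x-1)^2 - 1) = tan(theta)
    return 0.5 * (x - 1) * r - 0.5 * math.log((x - 1) + r)

def midpoint_integral(a, b, n=200_000):
    # midpoint-rule quadrature of sqrt(t^2 - 2t) on [a, b]
    h = (b - a) / n
    return sum(math.sqrt(t * t - 2 * t)
               for t in (a + (i + 0.5) * h for i in range(n))) * h
```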
1,422,990
<p>How to show that $(2^n-1)^{1/n}$ is irrational for all integer $n\ge 2$?</p> <p>If $(2^n-1)^{1/n}=q\in\Bbb Q$ then $q^n=2^n-1$ which doesn't seem right, but I don't get how to prove it.</p>
user236182
236,182
<p>For contradiction, assume $(2^n-1)^{1/n}$ is rational. </p> <p>Then $(2^n-1)^{1/n}=p/q$ for some $p,q\in\Bbb Z_{&gt;0}, \gcd(p,q)=1$.</p> <p>But then $(2^n-1)q^n=p^n\implies q^n\mid p^n\implies q=1$.</p> <p>$2^n-1=p^n$. We can't have $p=1$, since $n\ge 2$. But then $2^n-1&lt;2^n\le p^n$, contradiction.</p>
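The same statement can be probed computationally: $2^n-1$ is never a perfect $n$-th power for $n\ge 2$. A small exact-arithmetic Python check (my own addition, not part of the proof):

```python
def is_perfect_nth_power(m, n):
    # exact integer n-th-root test by binary search; the upper bound
    # works because m < 2^bit_length, so the root is < 2^(bit_length//n + 1)
    lo, hi = 1, 1 << (m.bit_length() // n + 1)
    while lo <= hi:
        mid = (lo + hi) // 2
        p = mid ** n
        if p == m:
            return True
        if p < m:
            lo = mid + 1
        else:
            hi = mid - 1
    return False
```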
110,722
<p>1) Many Mathematics departments ask to send a "list of publications" when applying for research postdoctoral jobs. My question is: how important is it to post my papers on arXiv? I know posting on arXiv is always good, because people might search for the arXiv-ed papers, but how much difference is posting on arXiv going to make? What if I prepare a publication list (in .pdf) of accepted paper(s), submitted paper(s) and paper(s) in preparation and send it to the employers? Will that be sufficient, or as good as putting them on arXiv? (I will also send them a link to my site containing the publication list, including the downloadable links.)</p> <p>The situation is: I can put one of my papers on arXiv, but the others are either in collaboration or deal with problems that stem from a question raised by collaborator(s), and the collaborator(s) didn't agree to post them on arXiv right away. Hence I am asking.</p> <p>2) Also, what exactly should I write in the "list of publications": the name of the paper, author(s) and its status (accepted/submitted/in preparation), and that's all, or should I also briefly describe its content/abstract (to make it more informational)?</p> <p>Thank you.</p>
Delio Mugnolo
26,039
<p>Answer to 2): In my experience, I think an abstract is not appropriate in a list of publications.</p> <p>Answer to 1): A submitted article that nobody can see is basically a non-existing article - the committee members do not even know whether it is a deep 300-page essay or a 2-page note. More generally, I believe that mentioning preprints or submitted articles in an application is only somewhat interesting if the committee members can use these pieces of information to extract a pattern about your research; otherwise it is not really useful, perhaps even dangerous ("this guy has posted his paper on XYZ in the arXiv in 2009 and in 2012 it has not yet been published, uhm").</p> <hr> <p>Let me add that in the last few months the policies of Elsevier and Springer about arXiv have become explicitly supportive - see their home pages. This has been enough for me to convince all my co-authors to post our manuscripts in the arXiv - even those co-authors who would have been skeptical a few years ago. Besides, virtually all publishers allow self-archiving - I would be surprised if your coauthors would object even to your posting a joint paper on your own web page.</p>
110,722
<p>1) Many Mathematics departments ask to send a "list of publications" when applying for research postdoctoral jobs. My question is: how important is it to post my papers on arXiv? I know posting on arXiv is always good, because people might search for the arXiv-ed papers, but how much difference is posting on arXiv going to make? What if I prepare a publication list (in .pdf) of accepted paper(s), submitted paper(s) and paper(s) in preparation and send it to the employers? Will that be sufficient, or as good as putting them on arXiv? (I will also send them a link to my site containing the publication list, including the downloadable links.)</p> <p>The situation is: I can put one of my papers on arXiv, but the others are either in collaboration or deal with problems that stem from a question raised by collaborator(s), and the collaborator(s) didn't agree to post them on arXiv right away. Hence I am asking.</p> <p>2) Also, what exactly should I write in the "list of publications": the name of the paper, author(s) and its status (accepted/submitted/in preparation), and that's all, or should I also briefly describe its content/abstract (to make it more informational)?</p> <p>Thank you.</p>
Ben Webster
66
<p>While usually I don't like answering questions like this on MO, there is actually an important fact here specific to the mathematics community which would probably be missed on academia.stackexchange, or another non-mathematical site. </p> <p>The answer to your 2) is:</p> <blockquote> <p>There is no logic behind asking for separate publication lists for postdoctoral candidates. Job ads only ask for them because it is a default setting on MathJobs, and nobody bothers to uncheck the box. You can just copy whatever list of papers you put in your CV (and leave the list of papers in your CV) or you can add abstracts.</p> </blockquote> <p>The question 1) is a bit more serious, and maybe better for academia.stackexchange. However, I would say that putting a paper on the arXiv is a serious signal that you think it is complete. To me, it means something.</p>
2,334,381
<p>Is the strong topology on a metric space the topology that is induced by the metric? Is an open set in the weak topology also open in the strong topology?</p>
Subhajit Saha
435,027
<p>Consider two topologies on the same set. The first topology is called greater (equivalently: bigger, stronger, or finer) than the second if it contains all the open sets of the second topology. So you are correct: every set open in the weak topology is also open in the strong topology.</p>
173,131
<p>Let's suppose that for the following expression:</p> <p>$\qquad \alpha\,\beta +\alpha+\beta$</p> <p>I know that $\alpha$ and $\beta$ are of small magnitude (e.g., 0 &lt; $\alpha$ &lt; 0.02 and 0 &lt; $\beta$ &lt; 0.02). Therefore, the magnitude of $\alpha\,\beta$ is negligible, i.e., the original expression can be approximated by</p> <p>$\qquad \alpha+\beta$</p> <p>Is there any command in Mathematica to do such an operation?</p> <hr> <p>Reminder from original question: if and $\alpha$ and $\beta$ are of small magnitude, we may approximate the original equation by disregarding nonlinear terms, such as $\alpha^n$, $\beta^n$,n=2,3,..., and $\alpha\beta$</p> <p>My observation is that by applying the good code suggestion of @Henrik Schumacher in a fraction it seems to not generate a proper result to the whole fraction. For instance,if </p> <p>$\text{numerator}=\alpha \beta +\alpha +\beta ^2+\beta -\lambda \epsilon q(t)+\text{$\beta $q}(t)$ </p> <pre><code>numerator = α*β + β^2 + α + β + β q[t] - ϵ*λ*q[t] f = numerator; (f /. {α -&gt; 0, β -&gt; 0}) + (D[f, {{α, β}, 1}] /. {α -&gt; 0, β -&gt; 0}) . {α, β} </code></pre> <p>The code generates the correct elimination on numerator :$\alpha +\beta -\lambda \epsilon q(t)+\text{$\beta $q}(t)$</p> <p>and</p> <p>$\text{denominator}=\alpha +\alpha (-\beta ) \text{LD}(t)+\alpha \beta q(t)+\beta q(t)+\epsilon +1$:</p> <pre><code>denominator= 1 +α +ϵ -α*β*LD[t] +β*q[t] +α*β*q[t] f = denominator; (f /. {α -&gt; 0, β -&gt; 0}) + (D[f, {{α, β}, 1}] /. {α -&gt; 0, β -&gt; 0}) . {α, β} </code></pre> <p>The code generates the correct elimination on denominator: $\alpha +\beta q(t)+\epsilon +1$ </p> <p>Nevertheless, when applying the suggested first order Taylor Series expansion in the fraction as a whole: </p> <pre><code>f = (numerator /denominator); (f /. {α -&gt; 0, β -&gt; 0}) + (D[ f, {{α, β}, 1}] /. {α -&gt; 0, β -&gt; 0}).{α, β} // Simplify </code></pre> <p>The result generated is incorrect. 
$\frac{(\epsilon +1) (\alpha +\beta )-q(t) (\lambda \epsilon (-\alpha +\epsilon +1)+\beta \text{$\beta $q}(t))+\beta \lambda \epsilon q(t)^2+(-\alpha +\epsilon +1) \text{$\beta $q}(t)}{(\epsilon +1)^2}$</p> <p>That can be observed by, for instance, noticing that in the output above (i) the numerator has $\beta^2$, (ii) the code generated $q[t]^2$ that did not exist in the original equation.</p> <p>I hope that this time I could express my concern in a proper format. Thank you all for your support!</p> <hr> <p>Questions related to @Akku14's code suggestion</p> <p><strong>Question 1</strong>: @Akku14, I was trying to use your code suggestion with a slight modification of the original purpose (instead of eliminating α and β, now eliminating α, β and LD[t]), but I had no success. I think the reason is that I could not find a way of writing LD[t] as a parameter of function g: </p> <p>for the following equation: </p> <pre><code>\[CapitalDelta]p[t] = (α*β + β^2 + α + β + β*q[t] - ϵ*λ*q[t])/(1 + α + ϵ -α*LD[t] + β*q[t] + α*β*q[t]) </code></pre> <p>$\text{$\Delta $p}(t)=\frac{\alpha \beta +\alpha +\beta ^2+\beta +\beta q(t)-\lambda \epsilon q(t)}{\alpha +\alpha (-\text{LD}(t))+\alpha \beta q(t)+\beta q(t)+\epsilon +1}$</p> <pre><code>g[α_, β_, LD[t] _] = \[CapitalDelta]p[t] </code></pre> <p>By using @Akku14's suggestion:</p> <pre><code>ser = (Series[g[α eps, β eps, LD[t] eps], {eps, 0, 1}] // Normal) /. 
eps -&gt; 1 // Simplify </code></pre> <p>I get the following output:</p> <p>$\frac{(\epsilon +1) (\alpha +\beta )+q(t) (\lambda \epsilon (\alpha -\epsilon -1)+\beta (\epsilon +1)-\alpha \lambda \epsilon LD[t]+\beta \lambda \epsilon q(t)^2)}{(\epsilon +1)^2}$</p> <p>which is incorrect since $\alpha \lambda \epsilon LD[t]$ is present in the numerator of <code>ser</code>.</p> <p>Again, I think that the problem is that my <code>g</code> is not recognizing LD[t] as a parameter; would any of you know how to approach this issue?</p> <p><strong>Question 2.1</strong>: in case I wanted a second-order Taylor series for <code>g[α, β, LD[t]]</code>, would changing <code>{eps, 0, 1}</code> to <code>{eps, 0, 2}</code> in <code>ser</code> be enough to get the correct result? Like:</p> <pre><code>ser = (Series[g[α eps, β eps, LD[t] eps], {eps, 0, 2}] // Normal) /. eps -&gt; 1 // Simplify </code></pre> <p><strong>Question 2.2</strong>: in case I wanted an order-n Taylor series for <code>g[α, β, LD[t]]</code>, would changing <code>{eps, 0, 1}</code> to <code>{eps, 0, n}</code> in <code>ser</code> be enough to get the correct result?</p>
Carl Woll
45,431
<p>I would follow the approach recommended by @Jens in the linked answer, which is similar to the answer by @Akku <em>( It is different since @Akku takes series of the numerator and denominator separately, and then finds the series of the ratio after normalizing)</em>. Introduce a dummy scaling variable, and then do a series expansion about the scaling variable:</p> <pre><code>r = Normal @ Series[numerator/denominator /. v : α|β -&gt; s v, {s, 0, 1}] /. s-&gt;1 </code></pre> <blockquote> <p>-ϵ λ q[t]/(1 + ϵ) + (α + β + α ϵ + β ϵ + β q[t] + β ϵ q[t] + α ϵ λ q[t] + β ϵ λ q[t]^2)/(1 + ϵ)^2</p> </blockquote> <p>We can use <a href="http://reference.wolfram.com/language/ref/Collect" rel="nofollow noreferrer"><code>Collect</code></a> to get this into basically the same form as @Henrik's answer:</p> <pre><code>Collect[r, {α, β}, Apart] //TeXForm </code></pre> <blockquote> <p>$\alpha \left(\frac{\lambda \epsilon q(t)}{(\epsilon +1)^2}+\frac{1}{\epsilon +1}\right)+\beta \left(\frac{\lambda \epsilon q(t)^2}{(\epsilon +1)^2}+\frac{q(t)}{\epsilon +1}+\frac{1}{\epsilon +1}\right)-\frac{\lambda \epsilon q(t)}{\epsilon +1}$</p> </blockquote> <p>This approach will scale much better for higher order expansions.</p>
2,559,260
<p>There exists a function $f$ such that $\lim_{x \rightarrow \infty} \frac{f(x)}{x^2} = 25$ and $\lim_{x \rightarrow \infty} \frac{f(x)}{x} = 5$.</p> <p>I am confused; I do not know whether it is true or not.</p> <p>I have a counter-example, but I think there might be such a function.</p>
Sri-Amirthan Theivendran
302,692
<p><strong>Hint</strong> Suppose that such a function exists. Note that $$ 25=\lim_{x \to \infty} \frac{f(x)}{x^2} = \lim_{x \to \infty} \frac{f(x)}{x}\times \lim_{x \to \infty} \frac{1}{x}. $$</p>
2,559,260
<p>There exists a function $f$ such that $\lim_{x \rightarrow \infty} \frac{f(x)}{x^2} = 25$ and $\lim_{x \rightarrow \infty} \frac{f(x)}{x} = 5$.</p> <p>I am confused; I do not know whether it is true or not.</p> <p>I have a counter-example, but I think there might be such a function.</p>
oandersonm
511,441
<p>In this case, if both limits existed, then</p> <p>$25=\lim_{x\to\infty}\frac{f(x)}{x^2} = \lim_{x\to\infty}\frac{f(x)}{x}\cdot\lim_{x\to\infty}\frac1x = 5\cdot\lim_{x\to\infty}\frac1x,$</p> <p>so</p> <p>$\lim_{x\to\infty}\frac1x = 5,$</p> <p>which is a contradiction, since $\lim_{x\to\infty}\frac1x = 0$.</p>
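Numerically, the tension is easy to see: any $f$ with $f(x)/x\to 5$ grows roughly linearly, so $f(x)/x^2$ is driven to $0$, not $25$. A toy Python illustration (the sample $f$ is my own choice):

```python
import math

def f(x):
    return 5 * x + math.sqrt(x)  # satisfies f(x)/x -> 5

x = 1e8
ratio_linear = f(x) / x          # close to 5
ratio_quadratic = f(x) / x**2    # close to 0, nowhere near 25
```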
2,595,612
<p>Is it true that every limit can only converge, diverge, or (exclusively) not exist?</p> <p>Can I demonstrate that it doesn't exist after I have proved that it neither converges nor diverges?</p> <p>I've never seen this, but it makes some sort of sense to me. If a real number is neither positive nor negative, it must be zero... but with limits.</p>
Arthur
15,500
<p>Divergence means the limit doesn't exist. "Divergence to $\infty$" is a special case of divergence, and we sometimes say that the limit exists in those cases, but strictly speaking it doesn't (unless we're working in the extended reals, which as far as I can tell is mostly done just to indulge in this specific abuse of terminology (yes, I know there are legitimate reasons to use them, I was only being half-serious)).</p> <p>So yes, a sequence can only converge or diverge, because either there is a limit, or there isn't. If you like the four categories "converge", "diverge to $\infty$", "diverge to $-\infty$" and "diverge, but not to any infinity", then yes, those four categories cover everything, because the last one specifically covers everything not in the first three, by definition.</p>
30,918
<p>Judging by some of the posts on meta<sup>1</sup> and comments posted there it seems that there are users who try to improve the posts by correcting spelling mistakes. Of course, there are other ways to improve the posts via editing, some of them probably more important than grammar and spelling.<sup>2</sup> Still this is certainly an improvement, a post with correct spelling is easier to read and using English correctly also increases the professional appearance of this site. Moreover, it helps with searching. (Posts which contain only the misspelled form of the words are less likely to be found when searching.)</p> <p>Maybe it could be useful to <em>collect some commonly misspelled words</em>. This could make it easier for users, who have some spare time and want to do a few such edits, to find posts which have this problem.</p> <p>For this reason it might be useful to also include a link to a search (or a <a href="https://math.meta.stackexchange.com/tags/data-explorer/info">SEDE query</a>) which returns those posts. Some minor comments on searching:</p> <ul> <li>For some words, the search also returns posts which contain minor variants of the words. Since we are searching for misspellings, in such cases adding quotation marks might help to find the misspelled ones. (For example, searching for <a href="https://math.stackexchange.com/search?q=occurence">occurence</a> returns many results, but we are more interested in the ones which are found when we search for <a href="https://math.stackexchange.com/search?q=&quot;occurence&quot;">"occurence"</a> and <a href="https://math.stackexchange.com/search?q=&quot;occurences&quot;">"occurences"</a>.)</li> <li>A simple SEDE query searching for some string can return words which contain this string as a part of a longer word. So sometimes we might want to look only at results containing exactly the given word. 
(Compare the results <a href="https://data.stackexchange.com/math/query/1157659/posts-containing-a-given-text-case-insensitive?num=5000&amp;text=banch?num=5000&amp;text=teh" rel="nofollow noreferrer">searching for teh</a> with the results where <a href="https://data.stackexchange.com/math/query/1157664/search-for-exact-string-case-insensitive?text=teh" rel="nofollow noreferrer">teh is preceded and followed by a non-letter</a>).</li> <li>It is better to use the case-insensitive version of SEDE queries. (So that we also catch a misspelled word at the beginning of a sentence. This is especially important for names.)</li> <li>With SEDE you can find several words which contain the same substring. For example, words containing "anali" might often be typos (such as analisis, analise, analitic, etc.) and you can search for <a href="https://data.stackexchange.com/math/query/1157659/posts-containing-a-given-text-case-insensitive?num=5000&amp;text=anali" rel="nofollow noreferrer">anali</a>.</li> </ul> <p>A few comments about editing the misspellings. (Some of them apply to editing in general.)</p> <ul> <li>It is better to avoid <a href="https://math.meta.stackexchange.com/questions/5068/how-much-bumping-is-too-much">bumping too many old posts at the same time</a>. (Frontpage is considered a precious commodity.)</li> <li>For this reason it is better to concentrate on questions which are new or have been recently bumped for some other reason.</li> <li>If you edit some post, try to check whether there are also other improvements to do. 
(Since the post was bumped by correcting the misspelled word, it is better to improve also other things which are worth editing than to bump the same post by additional edits later.)</li> <li>Difference between British/American spelling should not be considered a misspelling, as discussed before: <a href="https://math.meta.stackexchange.com/q/30335">Is English (US) vs English (UK) grounds for an edit?</a></li> </ul> <p>Some names of mathematicians which are commonly misspelled can also be found in this answer: <a href="https://math.meta.stackexchange.com/q/8698#8709">Searching for accented characters is too strict</a>. (Although that answer is primarily about alternative spellings.) This might also help when creating list of common misspellings.</p> <p><sup>1</sup>Posts such as: <a href="https://math.meta.stackexchange.com/q/13730">Edit session for wrong spelling of mathematicians and mathematical concepts</a> or <a href="https://math.meta.stackexchange.com/q/30912">Wrong spelling of “occurrence”</a>. Some users have mentioned in chat that they do this kind of edits, for example, <a href="https://chat.stackexchange.com/transcript/message/2443651#2443651">Srivatsan</a>.</p> <p><sup>2</sup>Just to name a few common problems with posts on this site which could be helped by editing: <a href="https://math.meta.stackexchange.com/questions/9959/how-to-ask-a-good-question#10144">non-descriptive titles</a>, incorrectly tagged question, various problems with MathJax in the post or <a href="https://math.meta.stackexchange.com/questions/9687/guidelines-for-good-use-of-rm-latex-in-question-titles">in the title</a>, etc. (Still, I'd guess that posts with misspellings are slightly more likely to have also some other problems - so perhaps checking them from time to time might be useful also for other purposes.)</p> <p><sup>3</sup>These SEDE queries are more for people who are curious (since they do not actually help with editing), but they still might be interesting anyway. 
You can find <a href="https://data.stackexchange.com/math/query/1157683/posts-which-contained-the-text-in-some-revision-and-it-was-later-removed-case-in?word=haussdorff" rel="nofollow noreferrer">posts where a specific word has already been corrected</a>. You can also search in comments <a href="https://data.stackexchange.com/math/query/556789/comments-containing-given-keyword-with-text-and-author?Word=occurence" rel="nofollow noreferrer">for some string</a> or for <a href="https://data.stackexchange.com/math/query/1157639/comments-containing-exact-string-with-text-and-author?text=teh" rel="nofollow noreferrer">an exact word</a>.</p>
GEdgar
442
<p>Some examples of <strong>incorrect possessive:</strong></p> <p>Stoke's theorem (incorrect) should be Stokes' or even Stokes's since the man's name is "Stokes".</p> <p>Similar: Baye's theorem (incorrect), which appeared somewhere in .se recently.</p> <p>Descarte's Rule of Signs</p>
2,912,759
<p>I am trying to use the characteristic function of the uniform distribution defined on (0,1) to compute the mean. I have calculated the characteristic function (correctly) and used Euler's identity to convert it to the following form:</p> <p>$$\phi_Y(t)=\frac{\sin(t)}{t} + i \frac{1-\cos(t)}{t}$$</p> <p>I should be able to compute the mean (which should be 1/2) by taking the first derivative, multiplying by $\frac{1}{i}$, and evaluating at $t=0$. I've computed the first derivative as:</p> <p>$$\frac{\partial}{\partial t}\phi_Y(t)=\frac{t\cos(t)-\sin(t)}{t^2} + i\frac{t \sin(t) + \cos(t) -1}{t^2}$$</p> <p>And dividing by $i$, this expression simplifies to: $$E[X]=\Big(\frac{i\sin(t)-it\cos(t)+t\sin(t)+\cos(t)-1}{t^2}\Big)\bigg\rvert_{t=0}$$</p> <p>This expression is undefined, because of division by 0. Am I missing something here? </p>
Angina Seng
436,618
<p>Use Maclaurin series: $$i\sin t-it\cos t+t\sin t+\cos t-1 =it-it+t^2+\left(1-\frac{t^2}2-1\right)+O(t^3)=\frac{t^2}2+O(t^3)$$ and so $$\lim_{t\to0}\frac{i\sin t-it\cos t+t\sin t+\cos t-1 }{t^2}=\frac12.$$</p>
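<p>As a sanity check, here is a short SymPy sketch (my own illustration, assuming SymPy is available) that evaluates the removable $0/0$ form symbolically; the expression is $\phi'(t)/i$ split into real and imaginary parts:</p>

```python
import sympy as sp

t = sp.symbols('t', positive=True)

# phi'(t)/i = B - iA with A = (t cos t - sin t)/t^2 and
# B = (t sin t + cos t - 1)/t^2, as in the question
real_part = (t * sp.sin(t) + sp.cos(t) - 1) / t**2
imag_part = (sp.sin(t) - t * sp.cos(t)) / t**2

# The singularity at t = 0 is removable; the limit recovers the mean
mean = sp.limit(real_part, t, 0) + sp.I * sp.limit(imag_part, t, 0)
print(mean)  # 1/2
```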
2,816,119
<p>Prove if $G$ is an abelian group, then $q(x) = x^2$ defines a homomorphism from $G$ into $G$. Is $q$ ever an isomorphism?</p> <p>The first proof was no problem, I'm having trouble with the isomorphism piece though. My book has the following explanation that I don't quite understand:</p> <p>"In order for $q$ to be an isomorphism, it must be the case that no element other than the identity is its own inverse. $x \in $Ker$(q) \iff q(x) = e \iff x*x = e \iff x^{-1} = x$"</p> <p>It also has a hint to think about it another way, considering $\Bbb Z_n$ for small values of $n$, which I also can't quite wrap my brain around...</p>
GSofer
509,052
<p>If there was an element other than the identity that is its own inverse, then the homomorphism would send it to the identity (because taking it squared would give you the identity).</p> <p>Since a homomorphism from a group to itself is an isomorphism only if the identity is the only element in the group that's sent to the identity by the homomorphism, you can't have an element that is its own inverse.</p> <p>A good example of a group that satisfies this is $\mathbb Z_3$. Generally, notice that the order of the group can't be even (can you see why?)</p>
2,816,119
<p>Prove if $G$ is an abelian group, then $q(x) = x^2$ defines a homomorphism from $G$ into $G$. Is $q$ ever an isomorphism?</p> <p>The first proof was no problem, I'm having trouble with the isomorphism piece though. My book has the following explanation that I don't quite understand:</p> <p>"In order for $q$ to be an isomorphism, it must be the case that no element other than the identity is its own inverse. $x \in $Ker$(q) \iff q(x) = e \iff x*x = e \iff x^{-1} = x$"</p> <p>It also has a hint to think about it another way, considering $\Bbb Z_n$ for small values of $n$, which I also can't quite wrap my brain around...</p>
lhf
589
<p>$q$ is injective iff $G$ has no element of order $2$.</p> <p>If $G$ is finite, this happens iff the order of $G$ is odd. In this case, $q$ is an isomorphism.</p> <p>If $G$ is infinite, $q$ might not be surjective even if it is injective. An example is $G=\mathbb Z$.</p>
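<p>Following the book's hint about $\Bbb Z_n$, here is a small Python sketch (my own illustration, not from the book). In the additive group $\Bbb Z_n$, "squaring" means $x \mapsto x + x = 2x \bmod n$, and checking small $n$ shows it is a bijection exactly when $n$ is odd:</p>

```python
def squaring_map_image(n):
    # In the additive group Z_n, q(x) = x*x means x + x = 2x (mod n)
    return {(2 * x) % n for x in range(n)}

for n in range(2, 10):
    is_bijection = len(squaring_map_image(n)) == n
    print(n, is_bijection)
# q is an isomorphism on Z_n exactly when n is odd
```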
1,940,446
<p>I'm working through Bona's "A Walk Through Combinatorics" and I came across this problem:</p> <blockquote> <p>A company has $20$ employees, $12$ male and $8$ female. How many ways are there to form a $5$ person committee that contains at least one male and at least one female?</p> </blockquote> <p>I realise that this question has been asked here before multiple times and that the standard way to solve such problems is to enumerate all the possible committees and then subtract the ones that do not satisfy the constraint, i.e., consist entirely of the same sex; thus giving the answer: $20C5-12C5-8C5 = 14656$</p> <p>However, I first tried to solve it like this: A pair of a man and a woman can be first chosen in $12*8$ ways, thus leaving $3$ open slots on the committee which can be filled in $18C3$ ways; thus giving an answer $12*8*18C3 = 78336$</p> <p>What is the error in reasoning here?</p>
Parcly Taxel
357,390
<p>There are $\binom{20}5=15504$ ways to make <em>any</em> committee from the people available. That the incorrect answer is much larger than this upper limit suggests an error of <em>overcounting</em>, which is a common mistake in combinatorial problems.</p> <p>For this problem, suppose the initial male and female chosen are Andy and Bridget respectively. Then we select two more females and one more male &ndash; Cindy, Danielle, Ethan. This counts as one selection.</p> <p>But we could also select Cindy and Ethan first, then Andy, Bridget and Danielle. This should not be counted as a distinct way, but the incorrect method does this. Hence the error of overcounting.</p>
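<p>Both counts are small enough to verify by brute force. A quick Python check (my own sketch) confirms the subtraction answer and reproduces the overcounted figure:</p>

```python
from itertools import combinations
from math import comb

people = [("M", i) for i in range(12)] + [("F", i) for i in range(8)]

# Brute force: 5-person committees containing at least one of each sex
valid = sum(
    1
    for committee in combinations(people, 5)
    if len({sex for sex, _ in committee}) == 2
)
print(valid)                                   # 14656
print(comb(20, 5) - comb(12, 5) - comb(8, 5))  # 14656, the subtraction method
print(12 * 8 * comb(18, 3))                    # 78336, the overcounting method
```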
1,232,420
<p>Consider the ideal $I = (13x+16y, 11x+13y)$ in the ring R = $\mathbb{Z}[x,y].$</p> <p>Prove that $I=(x-2y, 3x+y)$ by using mutual inclusion.</p> <p>I'm confused on how to start...do I begin by multiplying the elements in the ideal by a general element of R?</p>
Rolf Hoyer
228,612
<p>You want to show that both elements of one generating set are $\Bbb Z$-linear combinations of elements from the other generating set. In general, you would use $R$-linear combinations, but due to degree considerations you can use integers. </p> <p>As an example, you want to find relations like $-4(x-2y) + 5(3x+y) = 11x+13y$.</p>
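<p>For this particular pair of generating sets, integer coefficients exist in both directions. A quick SymPy check (my own computation; the coefficients come from solving the small $2\times 2$ linear systems) confirms the mutual inclusion:</p>

```python
import sympy as sp

x, y = sp.symbols('x y')
g1, g2 = x - 2*y, 3*x + y            # generators of (x - 2y, 3x + y)
h1, h2 = 13*x + 16*y, 11*x + 13*y    # generators of I

# I is contained in (g1, g2): each h is a Z-combination of g1, g2
assert sp.expand(-5*g1 + 6*g2 - h1) == 0
assert sp.expand(-4*g1 + 5*g2 - h2) == 0

# (g1, g2) is contained in I: each g is a Z-combination of h1, h2
assert sp.expand(-5*h1 + 6*h2 - g1) == 0
assert sp.expand(-4*h1 + 5*h2 - g2) == 0
print("both inclusions hold")
```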
15,480
<p>Say I have two lists,</p> <pre><code>list1 = {a, b, c} list2 = {x, y, z} </code></pre> <p>and I want to map a function f over them to produce</p> <pre><code>{f[a,x], f[a,y], f[a,z], f[b,x], f[b,y], f[b,z], f[c,x], f[c,y], f[c,z]} </code></pre> <p>I would assume I map the function over the first list to produce a "list of functions", which then runs over the 2nd list, something like:</p> <pre><code>Map[Map[f[#1, #2]&amp;,list1]&amp;, list2] </code></pre> <p>but I can't figure out how to leave #2 "empty" until the 2nd map kicks in. How can I separate them to generate all combinations of arguments?</p>
halirutan
187
<p>What you are trying to achieve here is called <a href="http://en.wikipedia.org/wiki/Currying">Currying</a>, which can be used naturally in other languages like Haskell. In <em>Mathematica</em> this does not work like that.</p> <p>But what about </p> <pre><code>Outer[f, list1, list2] (* {{f[a, x], f[a, y], f[a, z]}, {f[b, x], f[b, y], f[b, z]}, {f[c, x], f[c, y], f[c, z]}} *) </code></pre> <p>or <code>Flatten@Outer[f, list1, list2]</code> if you want a flat list?</p> <p>Of course this did not answer your question. Therefore, the real answer is: you can separate the <code>Slot</code>s by using <code>Function</code> explicitly:</p> <pre><code>Map[Function[p2, Map[Function[p1, f[p1, p2]], list1]], list2] (* {{f[a, x], f[b, x], f[c, x]}, {f[a, y], f[b, y], f[c, y]}, {f[a, z], f[b, z], f[c, z]}} *) </code></pre> <p>Here, it is clear that the <code>p1</code> parameter is for the inner <code>Function</code>, while <code>p2</code> is for the outer one. But note that, to get your ordering, you need to switch the parameters.</p>
3,300,793
<p>Having an immense amount of trouble trying to figure this problem out, and the more I think and ask about it the more confused I seem to get. I think I have finally figured it out so can someone who truly knows the correct answer justify this?</p> <p>Problem:Let <span class="math-container">$A=\{a,b,c\}$</span>,Let <span class="math-container">$\mathcal{P}(A)=\{S:S\subseteq A\}$</span></p> <p>Q1.Is the relation "is a subset of" a relation on <span class="math-container">$A$</span>?</p> <p>Q2.Is the relation "is a subset of" a relation on <span class="math-container">$\mathcal{P}(A)$</span>?</p> <p>Q1.Attempt:</p> <p>I would say the relation "is a subset of" is <strong>NOT</strong> a relation on <span class="math-container">$A$</span></p> <p>Since <span class="math-container">$A \times A=\{(a,a),(a,b),(a,c),(b,a),(b,b),(b,c),(c,a),(c,b)(c,c)\}$</span></p> <p>Knowing <span class="math-container">$R \subseteq A \times A$</span></p> <p>It is clear to me <span class="math-container">$a,b,c$</span> are elements, not sets, so an element in the relation on <span class="math-container">$A$</span> takes the form of <span class="math-container">$(a,a)$</span> for <span class="math-container">$a \in A$</span>.So a "subset relation" cannot be defined on <span class="math-container">$A$</span>.</p> <p>Since <span class="math-container">$a \subseteq a$</span> is false for all <span class="math-container">$a \in A$</span> because <span class="math-container">$a$</span> is an element not a set</p> <p>Q2.However a subset relation <strong>can</strong> be defined on <span class="math-container">$\mathcal{P}(A)$</span></p> <p>Since elements of a relation on <span class="math-container">$\mathcal{P}(A)$</span> are actually ordered pairs of sets, now it <strong>is</strong> possible to define a "subset relation" on <span class="math-container">$\mathcal{P}(A)$</span>.</p> <p>For example <span class="math-container">$(\{a\},\{a,b,c\})\in \mathcal{P}(A) \times \mathcal{P}(A)$</span></p> 
<p>and since for a relation on <span class="math-container">$\mathcal{P}(A)$</span>, <span class="math-container">$R \subseteq \mathcal{P}(A) \times \mathcal{P}(A)$</span> and <span class="math-container">$\{a\} \subseteq \{a,b,c\}$</span></p> <p>Now it is possible to define the relation "is a subset" of on <span class="math-container">$\mathcal{P}(A)$</span></p>
Matthew Leingang
2,785
<p>With respect to JMoravitz's point of view, I think you understand the problem as intended by the instructor/problem author, and should not extend to the case that elements of <span class="math-container">$A$</span> might contain themselves as subsets.</p> <p>I'm inferring the point of the problem is to understand the difference between something being an element of a set versus a subset. The notation <span class="math-container">$A = \{a,b,c\}$</span> implies (not logically, but idiomatically) that <span class="math-container">$A$</span> is a set containing three elements which are not sets. A representative interpretation might be three integers, or three points in the plane.</p> <p>Assuming this, then you can say <span class="math-container">$\{a\} \subset \{a\}$</span> but <span class="math-container">$(\{a\},\{a\}) \notin A \times A$</span> (since <span class="math-container">$\{a\} \notin A$</span>), so is-a-subset-of is not a relation on <span class="math-container">$A$</span>.</p> <p>I'm sure your instructor would be happy to clarify the problem or discuss these counterintuitive edge cases.</p>
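<p>The distinction is easy to see concretely. Here is a small Python sketch (my own illustration) that builds $\mathcal{P}(A)$ and the subset relation as a set of ordered pairs drawn from $\mathcal{P}(A) \times \mathcal{P}(A)$:</p>

```python
from itertools import combinations

A = {"a", "b", "c"}

# Power set of A, as frozensets so they can live inside other sets
P = [frozenset(c) for r in range(len(A) + 1) for c in combinations(sorted(A), r)]

# "is a subset of" as a relation on P(A): a subset of P(A) x P(A)
R = {(S, T) for S in P for T in P if S <= T}

print(len(P), len(R))  # 8 elements of P(A); 27 related pairs
```

<p>(The count $27 = 3^3$ comes from each element of $A$ independently being in neither set, in $T$ only, or in both.)</p>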
2,462,297
<p>Let $(x_1,...,x_n)$ be real numbers and M be an $n \times n$ matrix whose $i$-th column is given by the entries $x_i, x_i^2, x_i^3,..., x_i^n$. Compute the determinant of M.</p> <p>I computed the formula for the determinant of M in terms of $x_1...x_n$ but I wonder if I can find a specific value.</p>
Ali H.
471,756
<p>@Tsemo Aristide is absolutely correct, you can follow that link and also <a href="https://proofwiki.org/wiki/Vandermonde_Determinant" rel="nofollow noreferrer">The Proof for your specific case here</a>. However, this is a different kind of explanation for what you have, which is not a proof but I think it might help you better grasp the concept.</p> <p>At first glance, this looks very similar to the Vandermonde matrix; except we have </p> <p>$$ M = \left[ {\begin{array}{ccccc} x_1 &amp; x_2 &amp; x_3 &amp; \cdots &amp; x_n \\x_1^2 &amp; x_2^2 &amp; x_3^2 &amp;\cdots &amp; x_n^2\\ \vdots &amp; &amp; \ddots &amp; &amp; \vdots \\ x_1 ^n &amp; x_2^n &amp; x_3^n &amp; \cdots &amp;x_n^n \\ \end{array} } \right]$$ Now let us consider the Vandermonde matrix, which is</p> <p>$$V = \left[ {\begin{array}{ccccc} 1 &amp; x_1 &amp; x_1^2&amp; \cdots &amp; x_1^{n-1}\\ 1 &amp; x_2 &amp; x_2^2 &amp;\cdots &amp; x_2^{n-1}\\ \vdots &amp; &amp; \ddots &amp; &amp; \vdots \\ 1 &amp; x_n &amp; x_n^2 &amp; \cdots &amp; x_n^{n-1}\\ \end{array} } \right]$$ As we can see, they are very closely related; so we want to find a diagonal matrix $X$ relating $V$ to $M$, and in that case the determinant of $M$ will be $$\det(M) = \det(V) \cdot \det(X)$$ Now let us look at an example with $2 \times 2$ matrices: $$\left[ {\begin{array}{cc} a &amp; 0\\ 0 &amp; b \\ \end{array} } \right] \times \left[ {\begin{array}{cc} 1 &amp; 1\\ 2&amp; 2 \\ \end{array} } \right] = \left[ {\begin{array}{cc} a &amp; a\\ 2b&amp; 2b \\ \end{array} } \right] $$ So we can see that by multiplying the square matrix on the left by this diagonal matrix, we multiply each component of the first row by $a$ and of the second row by $b$. Now this is exactly what we want in order to go from the Vandermonde matrix to the given matrix; furthermore this is very beneficial since we know the determinant of the diagonal matrix is the product of its diagonal entries, and we know Vandermonde's determinant. 
So to get $M$ we will have </p> <p>$XV = N$ where $$X = \left[ {\begin{array}{ccccc} x_1 &amp; 0 &amp; 0&amp; \cdots &amp; 0\\ 0 &amp; x_2 &amp; 0 &amp;\cdots &amp; 0\\ \vdots &amp; &amp; \ddots &amp; &amp; \vdots \\ 0&amp;0 &amp;0 &amp; \cdots &amp;x_n\\ \end{array} } \right] $$ Now $$XV = \left[ {\begin{array}{ccccc} x_1 &amp; x_1^2 &amp; x_1^3 &amp; \cdots &amp; x_1^n\\ x_2 &amp; x_2^2 &amp; x_2^3 &amp;\cdots &amp; x_2^n\\ \vdots &amp; &amp; \ddots &amp; &amp; \vdots \\ x_n &amp; x_n^2 &amp; x_n^3 &amp; \cdots &amp;x_n^n\\ \end{array} } \right] = N $$ This is almost what we want, except we have the transpose of this matrix, so $(XV)^T = M$. Also, since for a square matrix the determinant of the transpose is the same as the determinant of the matrix, we will have: $$\det(X)\cdot\det(V) = \prod_{1\leq i \leq n}x_i \cdot \prod_{1 \leq i \lt j \leq n} (x_j - x_i) = \det(M^T) = \det(M)$$ Again this is not a proof, but I hope this clarified some things from both of the links.</p>
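<p>A quick numerical sanity check of the resulting formula (my own sketch, using exact arithmetic via SymPy; the sample values are arbitrary distinct integers):</p>

```python
import sympy as sp
from itertools import combinations
from math import prod

xs = [2, 3, 5, 7]  # any distinct values will do
n = len(xs)

# M has j-th column (x_j, x_j^2, ..., x_j^n)
M = sp.Matrix(n, n, lambda i, j: xs[j] ** (i + 1))

# det(M) = (prod of x_i) * (Vandermonde product over i < j)
formula = prod(xs) * prod(xs[j] - xs[i] for i, j in combinations(range(n), 2))
assert M.det() == formula
print(M.det())  # 50400 for these values
```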
320,378
<p>Given a number of normal distributions <span class="math-container">$N(\mu_1, \sigma^2), N(\mu_2, \sigma^2), ..., N(\mu_n, \sigma^2)$</span> with fixed variance <span class="math-container">$\sigma^2$</span>, but not necessary equal means. My question is how to approximate the variance given a number of samples of the normal distributions. Hence given samples <span class="math-container">$$ X^1_1, X^1_2, ..., X^1_{m_1} \sim N(\mu_1, \sigma^2), \\ X^2_1, X^2_2, ..., X^2_{m_2} \sim N(\mu_2, \sigma^2), \\ \vdots \\ X^n_1, X^n_2, ..., X^n_{m_n} \sim N(\mu_n, \sigma^2). $$</span> Where <span class="math-container">$m_1, m_2, ..., m_n &gt; 0$</span>, but again not necessary equal. What is a good way to approximate the variance <span class="math-container">$\sigma^2$</span>?</p> <p>With <em>good way to approximate</em> I mean the following. I could take <span class="math-container">$m_i$</span> such that <span class="math-container">$m_i \geq m_1, m_2, ..., m_n$</span> and approximate <span class="math-container">$\sigma^2$</span> with <span class="math-container">$$ \frac{1}{m_i} \sum_{j=1}^{m_i} (X^i_j - E(X^i))^2 $$</span> where <span class="math-container">$E(X^i) = \frac{1}{m_i} \sum_k X^i_k$</span> is the average over the samples <span class="math-container">$X^i_1, ..., X^i_{m_i}$</span>. Can we do better? A good way to approximate <span class="math-container">$\sigma^2$</span> would be an estimation method that approximate <span class="math-container">$\sigma^2$</span> with better precision (on average) than the method described above. I also want to know how one would prove that one estimation method for the variance is better than another method.</p> <p>For example, my gut feeling is telling me that a weighted average over all variances would approximate better, i.e. <span class="math-container">$$ \frac{1}{m_1 + ... + m_n} \sum_{i=0}^{n} \sum_{j=1}^{m_i} (X^i_j - E(X^i))^2, $$</span> but I don't know how to prove this. 
Also I'm worried that the variance estimate could be skewed if one of the normal distributions has far fewer samples than the others.</p>
Iosif Pinelis
36,721
<p><span class="math-container">$\newcommand{\si}{\sigma}$</span> Let us assume that the <span class="math-container">$n$</span> samples from the respective distributions <span class="math-container">$N(\mu_1, \sigma^2), \dots, N(\mu_n, \sigma^2)$</span> are independent. Let <span class="math-container">$X_{ij}:=X^i_j$</span>. Everywhere here <span class="math-container">$i=1,\dots,n$</span> and <span class="math-container">$j=1,\dots,m_i$</span>. So, all the <span class="math-container">$X_{ij}$</span>'s are independent and <span class="math-container">$X_{ij}\sim N(\mu_i, \si^2)$</span>. So, the joint pdf of <span class="math-container">$X:=(X_{ij})$</span> is given by <span class="math-container">\begin{multline} f(x)=f_{\mu_1,\dots,\mu_n,\si^2}(x) =(2\pi)^{-m/2}\si^{-m}\exp\Big(-\frac1{2\si^2}\,\sum_{i,j}(x_{ij}-\mu_i)^2\Big) \\ =\exp\Big(-\frac1{2\si^2}\,\sum_{i,j}x_{ij}^2+\sum_i\frac{\mu_i}{\si^2}\,\sum_j x_{ij}\Big) \,c(\mu_1,\dots,\mu_n,\si^2), \end{multline}</span> where <span class="math-container">\begin{equation} m:=\sum_i m_i,\quad x:=(x_{ij}), \end{equation}</span> and <span class="math-container">$c(\mu_1,\dots,\mu_n,\si^2)$</span> does not depend on <span class="math-container">$x$</span>. So, the <span class="math-container">$(n+1)$</span>-variate statistic <span class="math-container">\begin{equation} S:=\Big(\sum_{i,j}X_{ij}^2,\sum_j X_{1j},\dots,\sum_j X_{nj}\Big) \end{equation}</span> is complete and sufficient for <span class="math-container">$(\mu_1,\dots,\mu_n,\si^2)$</span>. </p> <p>Let <span class="math-container">$\bar X_{i\cdot}:=\frac1{m_i}\,\sum_j X_{ij}$</span>. Then <span class="math-container">\begin{align} T&amp;:=\sum_i\Big(\sum_j(X_{ij}-\mu_i)^2-m_i(\bar X_{i\cdot}-\mu_i)^2\Big) \\ &amp;=\sum_i\Big(\sum_j X_{ij}^2-m_i\bar X_{i\cdot}^2\Big) \\ &amp;=\sum_{i,j}X_{ij}^2-\sum_i m_i\bar X_{i\cdot}^2 \end{align}</span> is a function of the complete sufficient statistic <span class="math-container">$S$</span>. 
Moreover, <span class="math-container">\begin{align} ET&amp;=\sum_i\Big(\sum_j E(X_{ij}-\mu_i)^2-m_i E(\bar X_{i\cdot}-\mu_i)^2\Big) \\ &amp;=\sum_i(m_i\si^2-m_i \si^2/m_i)=(m-n)\si^2. \end{align}</span> Assume now that <span class="math-container">$m&gt;n$</span>; that is, <span class="math-container">$m_i&gt;1$</span> for at least one <span class="math-container">$i$</span>. Then the statistic <span class="math-container">\begin{align} R:=\frac T{m-n}&amp;=\frac1{m-n}\,\sum_i\Big(\sum_j X_{ij}^2-m_i\bar X_{i\cdot}^2\Big) \\ &amp;=\frac1{m-n}\,\sum_{i,j} (X_{ij}-\bar X_{i\cdot})^2 \end{align}</span> is an unbiased estimator of <span class="math-container">$\si^2$</span>, and <span class="math-container">$R$</span> is also a function of the complete sufficient statistic <span class="math-container">$S$</span>. So, by the <a href="https://en.wikipedia.org/wiki/Lehmann%E2%80%93Scheff%C3%A9_theorem" rel="nofollow noreferrer">Lehmann–Scheffé theorem</a>, <span class="math-container">$R$</span> is the (essentially unique) uniformly minimum-variance unbiased estimator (UMVUE) of <span class="math-container">$\si^2$</span>. </p> <p><em>Notes</em>: (i) I don't think it's a good idea to use the expectation symbol <span class="math-container">$E$</span> as in your notation <span class="math-container">$E(X^i)$</span> to denote the arithmetic mean <span class="math-container">$\bar X_{i\cdot}$</span> of the <span class="math-container">$i$</span>th sample. 
(ii) With this caveat, the last displayed expression in your post (which is actually the maximum likelihood estimator of <span class="math-container">$\si^2$</span> here) comes pretty close to the UMVUE <span class="math-container">$R$</span>, except that the factor <span class="math-container">$\frac1{m_1 + \dots + m_n}=\frac1m$</span> should be replaced by <span class="math-container">$\frac1{m-n}$</span>, to get the unbiasedness; of course, <span class="math-container">$\frac1m\sim\frac1{m-n}$</span> if <span class="math-container">$m$</span> is much greater than <span class="math-container">$n$</span>, which should usually be the case. </p>
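<p>For concreteness, here is a small NumPy sketch (my own illustration, on toy data) computing <span class="math-container">$R$</span> via the two equivalent forms above:</p>

```python
import numpy as np

# Toy data: n = 3 groups with unequal sizes m_i
groups = [
    np.array([1.0, 2.0, 4.0]),         # m_1 = 3
    np.array([10.0, 11.0]),            # m_2 = 2
    np.array([5.0, 5.0, 6.0, 8.0]),    # m_3 = 4
]

n = len(groups)
m = sum(len(g) for g in groups)

# R = (1/(m-n)) * sum_{i,j} (X_ij - Xbar_i)^2
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
R = ss_within / (m - n)

# Equivalent form: (sum_{i,j} X_ij^2 - sum_i m_i * Xbar_i^2) / (m - n)
T = sum((g ** 2).sum() - len(g) * g.mean() ** 2 for g in groups)
assert np.isclose(R, T / (m - n))
print(R)  # 67/36 for this data
```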
2,640,477
<p>According to <a href="https://rads.stackoverflow.com/amzn/click/0073383090" rel="nofollow noreferrer">Rosen</a>, an infinite set A is countable if $|A|= |\mathbb{Z}^+|$ which in turn can be established by finding a bijection from A to $\mathbb{Z}^+$.</p> <p>Also, a sequence is defined as a function from $\mathbb{Z}^+$ (or $\{0\} \cup \mathbb{Z}^+$) to some set.</p> <p>With the above, a sequence is certainly enumerable. However, it need not be a bijection, e.g. Fibonacci(1) = Fibonacci(2) = 1.</p> <p>This implies that not every sequence is countable which seems counterintuitive. Are there any results in this regard? Is there a mistake in the reasoning above?</p>
fleablood
280,126
<p>"However, it need not be a bijection"</p> <p>No, it doesn't.</p> <p>"This implies that not every sequence is countable"</p> <p>Why do you say that? </p> <p>$f: \mathbb Z^+ \to B$. If $f$ is not surjective then there are points of $B$ that are not in the image. Those do not matter. We can restrict ourselves to $f: \mathbb Z^+ \to \operatorname{Im}(f)$.</p> <p>This <em>must</em> be surjective.</p> <p>It doesn't need to be injective, however, as your Fibonacci example shows.</p> <p>But... so what? That means $|\operatorname{Im}(f)| \le |\mathbb Z^+|$.</p> <p>Hence it <em>MUST</em> be countable (or finite).</p> <p>Anyway, as the terms of a sequence, as opposed to a set, need not be distinct, it is possible, indeed common, for a sequence to have a finite number of distinct terms infinitely repeated.</p>
2,316,286
<blockquote> <p><strong>Theorem</strong> <em>(Cauchy-Schwarz Inequality) : If $u$ and $v$ are vectors in an inner product space $V$, then</em> $$\langle u,v\rangle ^2\leqslant \langle u,u\rangle \langle v,v\rangle .$$</p> </blockquote> <p><strong>Proof</strong> : If $u=0$, then $\langle u,v\rangle = \langle u,u\rangle=0$ so that the inequality clearly holds. Assume now that $u\neq 0$. Let $a=\langle u,u\rangle$, $b=2 \langle u,v\rangle$, $c=\langle v,v\rangle$, let $t$ be any real number. By the positivity axiom, the inner product of any vector with itself is always non-negative. Therefore $$0\leqslant\langle (tu+v),(tu+v)\rangle =\langle u,u\rangle t^2+2\langle u,v\rangle t+\langle v,v\rangle =at^2+bt+c.$$</p> <p>This inequality implies that the quadratic polynomial $at^2+bt+c$ has no real roots or a repeated real root. Therefore its discriminant must satisfy $b^2-4ac\leqslant0$. Expressing $a$,$b$ and $c$ in terms of $u$ and $v$ gives $$4\langle u,v\rangle^2-4\langle u,u\rangle \langle v,v\rangle \leqslant 0$$ or equivalently, $$ \langle u,v\rangle^2\leqslant\langle u,u\rangle\langle v,v\rangle.\blacksquare$$</p> <p><strong>Doubt</strong> : How do we know that $at^2+bt+c$ has no real roots or a repeated real root?</p>
alerouxlapierre
438,519
<p>Since you have $at^2+bt+c\geq 0$ for every real $t$, the polynomial has either a double root or no real roots. In fact, if a polynomial of degree 2 has two distinct real roots, then it is positive on some interval and negative on another (you should check this). Thus the polynomial $at^2+bt+c$ cannot have two distinct roots, because it is non-negative everywhere. Also, a polynomial of degree two has no real roots iff its discriminant is negative.</p> <p>You can also check that the given polynomial has a double root if and only if $$\langle u,v\rangle^2=\langle u,u\rangle\langle v,v\rangle.$$</p>
2,316,286
<blockquote> <p><strong>Theorem</strong> <em>(Cauchy-Schwarz Inequality) : If $u$ and $v$ are vectors in an inner product space $V$, then</em> $$\langle u,v\rangle ^2\leqslant \langle u,u\rangle \langle v,v\rangle .$$</p> </blockquote> <p><strong>Proof</strong> : If $u=0$, then $\langle u,v\rangle = \langle u,u\rangle=0$ so that the inequality clearly holds. Assume now that $u\neq 0$. Let $a=\langle u,u\rangle$, $b=2 \langle u,v\rangle$, $c=\langle v,v\rangle$, let $t$ be any real number. By the positivity axiom, the inner product of any vector with itself is always non-negative. Therefore $$0\leqslant\langle (tu+v),(tu+v)\rangle =\langle u,u\rangle t^2+2\langle u,v\rangle t+\langle v,v\rangle =at^2+bt+c.$$</p> <p>This inequality implies that the quadratic polynomial $at^2+bt+c$ has no real roots or a repeated real root. Therefore its discriminant must satisfy $b^2-4ac\leqslant0$. Expressing $a$,$b$ and $c$ in terms of $u$ and $v$ gives $$4\langle u,v\rangle^2-4\langle u,u\rangle \langle v,v\rangle \leqslant 0$$ or equivalently, $$ \langle u,v\rangle^2\leqslant\langle u,u\rangle\langle v,v\rangle.\blacksquare$$</p> <p><strong>Doubt</strong> : How do we know that $at^2+bt+c$ has no real roots or a repeated real root?</p>
egreg
62,967
<p>Consider the parabola $y=ax^2+bx+c$. If the right hand side has distinct real roots, the vertex will have negative $y$-coordinate (as $a=\langle u,u\rangle&gt;0$).</p> <p>So the only case when no point on the parabola has negative $y$-coordinate is when $b^2-4ac\le0$.</p>
170,014
<p>Determine for which $a$ values $f = x^2+ax+2$ can be divided by $g= x-3$ in $\mathbb Z_5$. </p> <p>I don't know if there are more effective (and certainly <strong>right</strong>) ways to solve this problem, I assume there definitely are, but as I am not aware of them, I thought I could proceed like this: I have divided $f$ by $g$, treating $a$ as a constant in $\mathbb Q$; the resulting quotient is $x+(a+3)$, the remainder is $2+3(a+3)$. For the division to be exact we need:</p> <p>$$\begin{aligned} 2+3(a+3) = 0 \end{aligned}$$ $$\begin{aligned} 2+3a + 4 = 0 \end{aligned}$$ $$\begin{aligned} 3a + 1 = 0 \Rightarrow 3a=-1 \Rightarrow 3a = 4 \Rightarrow a= \frac{4}{3}=3 \end{aligned}$$</p> <p>now I would expect $x^2+3x+2 = (x+1)(x-3)$, but it isn't the case because $(x+1)(x-3) = x^2-2x-3$. Is my way to solve this exercise totally wrong and it would be better if I'd set my notebook on fire (and in this case please feel free to jump in), or am I <em>just</em> doing some calculation wrong?</p>
DoubleTrouble
25,409
<p>I would be very surprised (in a good way) if there was a closed form solution of this. Even though it is straight-forward to calculate $$E\left[\int_0^t X(s) ds\right]$$ the distribution of $$\int_0^t X(s) ds$$ is unknown. And without knowing the distribution, I believe it will be difficult to calculate the expectation. The only thing I can figure out is a lower bound $$E\left[e^{- \lambda \int_0^t X(s) ds} \right] \geq e^{-\frac{2\lambda}{\sigma^2}( e^{\frac{\sigma^2 t}{2}}-1)}$$ which is of little use.</p>
678,073
<p>I am working with the multiplicative ring of integers modulo $2^{127}$.</p> <p>Consider the set $E=\{(k,l) \mid 5^k \cdot 3^l \equiv 1\mod 2^{127}, k &gt; 0, l&gt; 0\}$. I wonder if anybody knows or has an idea where to look for a result related to a lower bound for $M=\min\{k+l \mid (k,l)\in E \}$.</p> <p>We have that $0&lt;M\leq \mathrm{ord}_{\mathbb{Z}_{2^{127}}}(5)+\mathrm{ord}_{\mathbb{Z}_{2^{127}}}(3)$ where $\mathrm{ord}_{\mathbb{Z}_{2^{127}}}(5)=2^{125}$ and $\mathrm{ord}_{\mathbb{Z}_{2^{127}}}(3)=2^{125}$ (orders of these primes in the multiplicative ring $\mathbb{Z}_{2^{127}}$).</p> <p>I also would like to generalize the above for primes other than 5 and 3.</p> <p>Is there a result about a tighter lower bound for $M$?</p>
Steven Stadnicki
785
<p>To flesh my 'birthday paradox' heuristic from a comment out into at least a partial answer:</p> <p>The core concept is that among the first $m$ values of $5^k$ and the first $n$ values of $3^{-l}$, we have $mn$ different potential collisions, and if we treat these interactions as independent events then we should expect each of them to yield an actual collision with probability $2^{-127}$; this means that within approximately $2^{127}$ potential collisions we should expect an actual collision. Since $m+n$ is minimized for a given value of $mn$ when $m=n$, then we should expect the minimum value of $m+n$ to occur where $m\approx n$. Plugging this in to $mn\approx 2^{127}$ yields $m\approx n\approx 2^{64}$ and $m+n\approx 2^{65}$ (up to relatively small constant factors).</p> <p>Of course, this <em>is</em> a heuristic argument, not an exact one; but as long as $m\gg \log_5 2^{127}\approx 55$ and $n\gg\log_3 2^{127}\approx 80$, then I would expect the sets $\{5^k\ |\ 0\leq k\lt m\}$ and $\{3^{-l}\ |\ 0\leq l\lt n\}$ to be roughly equidistributed mod $2^{127}$; since the numbers we're talking about are many orders of magnitude larger then this assumption seems reasonable.</p> <p>Note that if the goal were to minimize $|k|+|l|$ then this heuristic argument can be extended to provide an explicit upper bound of $2\lceil\sqrt{2^{127}}\rceil$: among the $2^{127}$ values of $5^k3^l\bmod 2^{127}$ for $1\leq k\leq \lceil\sqrt{2^{127}}\rceil$, $1\leq l\leq \lceil\sqrt{2^{127}}\rceil$ there must (by the pigeonhole principle) be a collision, say $5^{k_0}3^{l_0} \equiv 5^{k_1}3^{l_1}$. Then dividing out, we obtain $5^{k_0-k_1}3^{l_0-l_1}\equiv 1$, where clearly $0\leq |k_0-k_1|\leq\lceil\sqrt{2^{127}}\rceil$, and likewise for the $l$ values. (This argument doesn't work for the actual problem, of course, because there's no guarantee that $k_0-k_1$ and $l_0-l_1$ have the same signs.)</p>
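The meet-in-the-middle flavour of this argument is easy to demonstrate at a small modulus. Here is a toy sketch at $2^{16}$ instead of $2^{127}$; the loop bounds are ad-hoc choices for this modulus:

```python
# Toy collision search at modulus 2^16: find k, l > 0 with
# 5^k * 3^l = 1 (mod 2^16), by storing powers of 5 and scanning
# powers of 3^{-1} for a match.
MOD = 2**16
seen = {}
x = 1
for k in range(1, 2**14 + 1):  # ord(5) = 2^14 here, so all of <5> is covered
    x = x * 5 % MOD
    seen.setdefault(x, k)
inv3 = pow(3, -1, MOD)
y, best = 1, None
for l in range(1, 2**10 + 1):
    y = y * inv3 % MOD  # y = 3^{-l} mod MOD
    if y in seen:
        k = seen[y]
        if best is None or k + l < sum(best):
            best = (k, l)
k, l = best
assert pow(5, k, MOD) * pow(3, l, MOD) % MOD == 1
print(best)
```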
1,480,720
<p>How many times do you have to flip a coin such that the probability of getting $2$ heads in a row is at least $1/2$?</p> <p>I tried using a Negative Binomial: $P(X=2)=$$(n-1)\choose(r-1)$$p^r\times(1-p)^{n-r} \geq 1/2$ where $r = 2$ and $p = 1/4$. However, I don't get a value of $n$ that makes sense.</p> <p>Thank You</p>
Barry Cipra
86,747
<p>If you toss a coin $n$ times, the number of different ways the result can <em>avoid</em> getting two heads in a row is $F_{n+1}$, where $F_n$ is the Fibonacci sequence, with $F_0=F_1=1$ and $F_{n+1}=F_n+F_{n-1}$. So the crossover point is where $F_{n+1}/2^n\le1/2$, or $F_{n+1}\le2^{n-1}$. This occurs at $n=4$, since $F_5=8=2^3$.</p>
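The Fibonacci count and the crossover at $n=4$ can both be verified by brute force over all flip sequences:

```python
from itertools import product

# Count length-n flip sequences that avoid "HH"; the answer says this is
# F_{n+1} with F_0 = F_1 = 1, and the crossover is where the count <= 2^{n-1}.
def no_hh(n):
    return sum(1 for s in product("HT", repeat=n) if "HH" not in "".join(s))

print([no_hh(n) for n in range(1, 8)])  # [2, 3, 5, 8, 13, 21, 34]
print(next(n for n in range(1, 8) if no_hh(n) <= 2 ** (n - 1)))  # 4
```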
2,986,515
<p>Can anyone help me with this problem? </p> <p>Prove that for any real number <span class="math-container">$x &gt; 0$</span> and for any <span class="math-container">$M &gt; 0$</span> there is <span class="math-container">$N ∈ \mathbb N$</span> so that if <span class="math-container">$n &gt; N$</span> then <span class="math-container">$(1 + x)^n &gt; M.$</span></p> <p>A sequence <span class="math-container">${a_n}$</span> diverges to <span class="math-container">$+\infty$</span> if for any <span class="math-container">$M &gt; 0$</span> there is <span class="math-container">$N ∈\mathbb N$</span> so that if <span class="math-container">$n &gt; N$</span> then <span class="math-container">$a_n &gt; M$</span>. Suppose <span class="math-container">${a_n}$</span> diverges to <span class="math-container">$+\infty$</span> and <span class="math-container">$a_n \ne 0$</span> for all <span class="math-container">$n$</span>. Prove that in this case <span class="math-container">$\frac{1}{a_n}$</span> converges to <span class="math-container">$0$</span>.</p> <p>How do I approach these proofs?</p>
marty cohen
13,079
<p><span class="math-container">$\begin{array}\\ a_{n} &amp;= \dfrac{3n^2 - 9n + 6}{n^3 + 5n^2 + 8n + 4}\\ &amp;= 3\dfrac{n^2 - 3n + 2}{n^3 + 5n^2 + 8n + 4}\\ &amp;&lt; 3\dfrac{n^2}{n^3} \qquad\text{for } n \ge 1\\ &amp;=\dfrac{3}{n}\\ &amp;\le 1 \qquad\text{for } n \ge 3\\ \text{and}\\ a(n) &amp;\gt 0 \qquad\text{for } n \ge 3\\ \end{array} $</span></p>
1,449,450
<p>Is there a way to prove which one of these is bigger? $e^{(a+b)}$ or $e^a + e^b$?</p> <p>Thanks</p>
Rajat
177,357
<p>$$\frac{(e^a + e^b)}{2} \geq e^{\frac{(a+b)}{2}}\text{ (Using A.M.-G.M. inequality.)}$$ $$(e^a + e^b) &gt; e^{\frac{(a+b+1)}{2}}\text{ (Using $4&gt;e$).}$$ If, $e^{\frac{(a+b+1)}{2}}\leq 1$, then $$e^{(a+b+1)}\leq e^{\frac{(a+b+1)}{2}} \text{ (Using the fact that, if $x \in [0,1]$, then } x\geq x^2).$$ So, $$(e^a + e^b)&gt; e^{(a+b+1)} \text{ when, }a+b+1 \leq 0.$$ $$\implies (e^a + e^b)&gt; e^{(a+b)} \text{ when, }a+b+1 \leq 0.$$</p>
1,449,450
<p>Is there a way to prove which one of these is bigger? $e^{(a+b)}$ or $e^a + e^b$?</p> <p>Thanks</p>
hmakholm left over Monica
14,366
<p>Dividing through by $e^{a+b}$ gives $$ e^a+e^b \lessgtr e^{a+b} \qquad\text{as}\qquad e^{-a}+e^{-b} \lessgtr 1$$ This is not <em>much</em> nicer, but it is <em>somewhat</em> nicer because there's now only one $a$ and one $b$. So it is now easier to solve for $a$ or $b$: $$ e^a+e^b \lessgtr e^{a+b} \qquad\text{as}\qquad a \gtrless \begin{cases} -\log(1-e^{-b}) &amp;\text{when } b&gt; 0 \\ \infty &amp; \text{when }b\le 0 \end{cases}$$</p>
744,034
<p>How do I show that for all integers $n$, $n^3+(n+1)^3+(n+2)^3$ is a multiple of $9$? Do I use induction for showing this? If not what do I use and how? And is this question asking me to prove it or show it? How do I show it? </p>
Mark Bennet
2,906
<p>Here are some ideas to work on.</p> <p>First, the cubes mod $9$ turn out to be $0^3=0, 1^3=1, 2^3=-1, 3^3=0, 4^3=1, 5^3=-1 \dots$</p> <p>The pattern persists because $(3n\pm1)^3=27n^3\pm 27n^2+9n\pm 1\equiv \pm 1, (3n)^3=27n^3\equiv 0$ - the sum of any consecutive three is equal to $0$ mod $9$.</p> <p>Another way of doing it is to set $n=(m-1)$ and note that (using the binomial expansion) $$(m-1)^3+m^3+(m+1)^3 =3m^3+6m=3m(m^2+2)$$ </p> <p>Modulo $3, m(m^2+2)\equiv m(m^2+3m+2)=m(m+1)(m+2)$ the product of three consecutive numbers, which is therefore divisible by $3$.</p> <p>Or induction will do it because $(n+3)^3-n^3=9n^2+27n+27$ is divisible by $9$.</p>
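All three arguments predict the same thing; a quick brute-force check over a range of integers (including negatives) confirms it:

```python
# Sanity check: n^3 + (n+1)^3 + (n+2)^3 is a multiple of 9 for all tested n.
assert all((n**3 + (n + 1)**3 + (n + 2)**3) % 9 == 0 for n in range(-100, 101))
print("divisible by 9 for every n in [-100, 100]")
```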
2,343,958
<p>I am interested in a mathematical approach to quantum information theory. I have observed that several probabilists have been working in this area. What can be a suitable background and good book for this subject?</p>
Larry B.
364,722
<p>There's an excellent introductory book called "Quantum Computing for Computer Scientists" by Yanofsky and Mannucci. It goes through the theory of quantum computation and information using simple Hermitian matrices and tensor products as the mathematical language.</p>
2,988,089
<p>Let A, B, C, and D be sets. Prove or disprove the following:</p> <pre><code> (A ∩ B) ∪ (C ∩ D)= (A ∩ D) ∪ (C ∩ B) </code></pre> <p>I am just wondering can i simply prove it using a membership table ( seems to easy ) or do i have to use setbuilder notation?</p> <p>Thank you!</p>
Martund
609,343
<p>This is false. Take A = {1,2,3}, B = {3,4,5}, C = {7,8,9}, D = {9,10,11}. Then LHS = {3,9} while RHS = ∅, the empty set. Hence disproved. Hope it helps :)</p>
262,745
<p>I need to find the normal vector of the form Ax+By+C=0 of the plane that includes the point (6.82,1,5.56) and the line (7.82,6.82,6.56) +t(6,12,-6), with A=1.</p> <p>Of course, this is easy to do by hand, using the cross product of two lines and the point. There's supposed to be an automated way of doing it, though, and I can't find it. Any ideas on an efficient way of doing it?</p>
chuy
237
<p>Slightly different:</p> <pre><code>p0 = {6.82, 1, 5.56}; p1 = {7.82, 6.82, 6.56}; p2 = p1 + t {6, 12, -6}; plane = InfinitePlane[{p0, p1, p2}]; SubtractSides[ Reduce[RegionMember[plane, {x, y, z}], Reals]] // TraditionalForm (* x-0.255754 y+0.488491 z-9.28026==0. *) </code></pre>
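The same computation outside Mathematica, sketched with NumPy: the normal is the cross product of (p1 − p0) with the line direction, rescaled so that A = 1.

```python
import numpy as np

# Plane through p0 containing the line p1 + t*d: normal is (p1 - p0) x d,
# rescaled so that the coefficient of x equals 1.
p0 = np.array([6.82, 1.0, 5.56])
p1 = np.array([7.82, 6.82, 6.56])
d = np.array([6.0, 12.0, -6.0])
n = np.cross(p1 - p0, d)
n = n / n[0]       # enforce A = 1
c = -n @ p0        # plane equation: n . (x, y, z) + c = 0
print(n, c)        # ~ [1, -0.255754, 0.488491] and c ~ -9.28026
```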
802,877
<blockquote> <p>Find $\displaystyle\lim_{n\to\infty} n(e^{\frac 1 n}-1)$ </p> </blockquote> <p>This should be solved without LHR. I tried to substitute $n=1/k$ but still get indeterminant form like $\displaystyle\lim_{k\to 0} \frac {e^k-1} k$. Is there a way to solve it without LHR nor Taylor or integrals ?</p> <p>Maybe with the definition of a limit ?</p> <p>EDIT:</p> <p>$f(x)'=\displaystyle\lim_{h\to 0}\frac{f(x+h)-f(x)}{h}=\lim_{h\to 0}\frac{(x+h)(e^{1/x+h}-1)-x(e^{\frac 1 x}-1)}{h}= \lim_{h\to 0}\frac{xe^{1/x+h}+he^{1/x+h}-x-h-xe^{\frac 1 x}+x}{h}= \lim_{h\to 0}\frac{xe^{1/x+h}+he^{1/x+h}-h-xe^{\frac 1 x}}{h}$</p>
Deepak
151,732
<p>Put $x = \frac{1}{n}$</p> <p>The limit becomes $\lim_{x \to 0}\frac{e^x-1}{x} \rightarrow \lim_{x \to 0}\frac{1+x-1}{x} = 1$</p> <p>using the Maclaurin series for $e^x$</p> <p>EDIT: Just noticed you also excluded Taylor series (which would preclude Maclaurin's) in your question. So feel free to ignore my answer.</p>
2,502,161
<p>I'm wondering if it's valid to write the follwing: <span class="math-container">$$\lim_{x \rightarrow \infty}\frac{2}{x^r}=2\lim_{x \rightarrow \infty}\frac{1}{x^r}=2.\frac{1}{\infty}=2.0=0$$</span></p> <p>I know it's valid to say that <span class="math-container">$\frac{1}{\infty}=0$</span> in limits but I'm not suring if it would be valid to say <span class="math-container">$2.\frac{1}{\infty}=2.0=0$</span></p>
Claude Leibovici
82,404
<p>Considering the differential equation <span class="math-container">$$\frac{d^2y}{dx^2}\,\frac{dy}{dx}\,y=1$$</span> let us use <span class="math-container">$$\frac{dy}{dx}=\frac1 {\frac{dx}{dy}}\qquad \text{and}\qquad \frac{d^2y}{dx^2}=-\frac{\frac{d^2x}{dy^2}}{\left(\frac{dx}{dy}\right)^3 }$$</span> (see <a href="https://math.stackexchange.com/questions/566507/prove-that-d2x-dy2-equals-d2y-dx2dy-dx-3">here</a>); we then arrive at <span class="math-container">$$x'' y=-(x')^4$$</span> Now, using <span class="math-container">$u=x'$</span>, this makes <span class="math-container">$$u' y=-u^4\implies \frac{u'}{u ^4}=-\frac 1y\implies \frac{1}{3 u^3}=\log(y)+c_1$$</span> which has three solutions in <span class="math-container">$u$</span>.</p> <p>Considering one of them, <span class="math-container">$$u=\frac{1}{\sqrt[3]{3} \sqrt[3]{\log (y)+c_1}}$$</span> <span class="math-container">$$x=-\frac{e^{-c_1} }{\sqrt[3]{3}}\Gamma \left(\frac{2}{3},-(c_1+\log (y))\right)+c_2$$</span> from which <span class="math-container">$y$</span> cannot be extracted.</p>
3,479,940
<p>Say we're given a set of <span class="math-container">$d$</span> vectors <span class="math-container">$S=\{\mathbf{v}_1,\dots,\mathbf{v}_d\}$</span> in <span class="math-container">$\mathbb{R}^n$</span>, with <span class="math-container">$d\leq n$</span> (obviously). We want to test in an efficient way if S is linearly independent. Now, write the coefficient matrix <span class="math-container">$\mathbf{A}=[\mathbf{v}_1 \dots\mathbf{v}_d]$</span> (the <span class="math-container">$\mathbf{v}_i$</span> are considered to be column vectors). </p> <p>A <em>non-efficient</em> way would be to compute all minors of rank <span class="math-container">$d$</span> of <span class="math-container">$\mathbf{A}$</span>, and check that they are non-zero (up to some tolerance, as always when we do numerical linear algebra). Another way would be using Gram–Schmidt orthogonalization, but I recall that Gram–Schmidt orthogonalization is numerically unstable. Which is the correct alternative? Singular Value Decomposition (if all singular values are strictly positive, then the vectors are independent)? QR factorization? </p>
Alex
48,061
<p>Let <span class="math-container">$A$</span> be not the <span class="math-container">$n\times d$</span>-matrix, but the <span class="math-container">$n\times n$</span>-matrix <span class="math-container">$$A=[v_1 \ldots v_d\ \vec{0}\ldots\ \vec 0]^t.$$</span> Reduce <span class="math-container">$A$</span> to <a href="https://en.wikipedia.org/wiki/Smith_normal_form" rel="nofollow noreferrer">Smith Normal Form</a>. This takes <span class="math-container">$O(n\cdot d)$</span> row operations. The number of non-zero rows of the resulting matrix equals the number of these vectors that were linearly independent.</p>
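For a concrete numerical route, the SVD-based rank test mentioned in the question is a one-liner with NumPy (`numpy.linalg.matrix_rank` is computed from the singular values with a default tolerance); a small sketch:

```python
import numpy as np

# SVD-based independence test: the d columns of A are linearly independent
# iff the numerical rank of A equals d. The matrices below are toy examples.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 3))  # 3 random vectors in R^6: independent
B = np.column_stack([A[:, 0], A[:, 1], A[:, 0] + A[:, 1]])  # dependent set

def independent(M):
    return np.linalg.matrix_rank(M) == M.shape[1]

print(independent(A), independent(B))  # True False
```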
2,782,109
<blockquote> <p>If a positive integer $m$ was increased by $20$%, decreased by $25$%, and then increased by $60$%, the resulting number would be what percent of $m$?</p> </blockquote> <p>A common step-by-step calculation will take time.</p> <p>After $20$% increase, $6m/5$.<br> After $25$% decrease, $9m/10$.<br> After $60$% increase, $144m/100$.<br> Finally, $m \cdot \frac{x}{100} = \frac{144m}{100} = 144$%</p> <p>what is the faster (or, fastest) method to solve this?</p>
Rhys Hughes
487,658
<p>$$\frac 65*\frac 34* \frac85=\frac{144}{100}$$</p>
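The same product in exact rational arithmetic:

```python
from fractions import Fraction

# 20% increase, 25% decrease, 60% increase compose multiplicatively.
result = Fraction(6, 5) * Fraction(3, 4) * Fraction(8, 5)
print(result)  # 36/25, i.e. 144% of m
```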
2,221,033
<p>My question is due to <a href="https://en.wikipedia.org/w/index.php?title=Imaginary_number&amp;diff=prev&amp;oldid=175488747" rel="noreferrer">an edit</a> to the Wikipedia article: <a href="https://en.m.wikipedia.org/wiki/Imaginary_number" rel="noreferrer">Imaginary number</a>.</p> <p>The funny thing is, I couldn't find (in three of my old textbooks) a clear definition of an "imaginary number". (Though they were pretty good at defining "imaginary component", etc.)</p> <p>I understand that the number zero lies on both the real and imaginary axes.<br> <em>But is $\it 0$ both a real number and an imaginary number?</em></p> <p>We know certainly, that there are complex numbers that are neither purely real, nor purely imaginary. But I've always previously considered, that a purely imaginary number had to have a square that is a real and negative number (not just non-positive). </p> <p>Clearly we can (re)define a real number as a complex number with an imaginary component that is zero (meaning that $0$ is a real number), but if one were to define an imaginary number as a complex number with real component zero, then that would also include $0$ among the pure imaginaries. </p> <p><em>What is the complete and formal definition of an "imaginary number" (outside of the Wikipedia reference or anything derived from it)?</em></p>
Misha Lavrov
383,078
<p>The Wikipedia article cites <a href="https://books.google.com/books?id=mqdzqbPYiAUC&amp;pg=SA11-PA2" rel="nofollow noreferrer">a textbook</a> that manages to confuse the issue further:</p> <blockquote> <p><strong>Purely imaginary (complex) number :</strong> A complex number $z = x + iy$ is called a purely imaginary number iff $x=0$ <em>i.e.</em> $R(z) = 0$.</p> <p><strong>Imaginary number :</strong> A complex number $z = x + iy$ is said to be an imaginary number if and only if $y \ne 0$ <em>i.e.</em>, $I(z) \ne 0$.</p> </blockquote> <p>This is a slightly different usage of the word "imaginary", meaning "non-real": among the complex numbers, those that aren't real we call imaginary, and a further subset of those (with real part $0$) are <em>purely</em> imaginary. Except that by this definition, $0$ is clearly purely imaginary but <em>not</em> imaginary!</p> <p>Anyway, anybody can write a textbook, so I think that the real test is this: does $0$ have the properties we want a (purely) imaginary number to have?</p> <p>I can't (and <a href="https://math.stackexchange.com/questions/1242537/unique-properties-of-pure-imaginary-numbers">MSE can't</a>) think of any useful properties of purely imaginary complex numbers $z$ apart from the characterization that $|e^{z}| = 1$. But $0$ clearly has this property, so we should consider it purely imaginary.</p> <p>(On the other hand, $0$ has all of the properties a real number should have, being real; so it makes some amount of sense to also say that it's purely imaginary but not imaginary at the same time.)</p>
4,288,460
<blockquote> <p>Suppose that <span class="math-container">$\{X_t : Ω → S := \mathbb{R}^d, t\in T\}$</span> is a stochastic process with independent increments and let <span class="math-container">$\mathcal{B}_t :=\mathcal{B}_t^X$</span> (natural filtration) for all <span class="math-container">$t\in T$</span>. Show, for all <span class="math-container">$0 ≤ s &lt; t$</span>, that <span class="math-container">$(X_t − X_s )$</span> is independent of <span class="math-container">$\mathcal{B}_s^X$</span> and then use this to show <span class="math-container">$\{X_t\}_{t\in T}$</span> is a Markov process with transition kernels defined by <span class="math-container">$0 ≤ s ≤ t$</span>, <span class="math-container">$$q_{s,t}(x, A) := E [1_A (x + X_t − X_s )]\text{ for all }A\in \mathcal{S}\text{ and }x\in\mathbb{R}^d.$$</span></p> </blockquote> <p>The first part, showing that <span class="math-container">$X_t-X_s$</span> is independent of <span class="math-container">$\mathcal{B}^X_s$</span>, I more or less understand from a monotone class lemma.</p> <p>For the part where I need to compute the transition kernel, I am not sure what I have to show. It seems to me I have to show <span class="math-container">$P(X_t\in A|X_s)=q_{s,t}(X_s,A)$</span>, is that correct? To do this I observe <span class="math-container">\begin{align} P(X_t\in A|X_s)=E(1_{X_t\in A}|X_s)=E(1_A(X_t)|X_s)=E(1_A(X_t-X_s+X_s)|X_s) \end{align}</span> But then I am not sure how to finish. The exercise hints that I should use that <span class="math-container">$X_t-X_s$</span> is independent of <span class="math-container">$\mathcal{B}^X_s$</span>.</p>
Lorenzo Pompili
884,561
<p>The proof is not well written in that way. In particular, you have to choose <span class="math-container">$\delta_0$</span> in a suitable way that depends on <span class="math-container">$\varepsilon$</span>. Let me try to correct it below.</p> <hr /> <p>Case 2 : There exist some x's that satisfy h(x)=0</p> <p>take m as a limit point of K, then by definition, given <span class="math-container">$\delta$</span>&gt;0 we have that <span class="math-container">$V_\delta(m)$</span> intersects K at some other point than m. Fix <span class="math-container">$\varepsilon&gt;0$</span>. Let’s choose <span class="math-container">$\delta_0$</span> such that <span class="math-container">$$ |h(x)-h(m)|&lt;\varepsilon \;\forall x\in (m-\delta_0,m+\delta_0).$$</span> This <span class="math-container">$\delta_0$</span> exists because of the continuity of the function <span class="math-container">$h$</span>. Then, as a consequence of what we wrote, there exists y <span class="math-container">$\in$</span> K such that y <span class="math-container">$\in V_{\delta_{0}}(m)$</span>, i.e., <span class="math-container">$\left|y-m\right|&lt;\delta_0$</span>.</p> <p>Since <span class="math-container">$|m-y|&lt;\delta_0$</span>, we now have <span class="math-container">$$\begin{align}\left|h(m)-h(y)\right|&lt;\varepsilon&amp;\implies\left|h(m)\right|&lt;\varepsilon\end{align}$$</span> Thus, since <span class="math-container">$\varepsilon&gt;0$</span> was chosen arbitrarily, <span class="math-container">$h(m)=0$</span> <span class="math-container">$\implies$</span> m <span class="math-container">$\in$</span> K. Since m was an arbitrary limit point of K, K is closed. <span class="math-container">$\blacksquare$</span></p>
417,896
<p>Given a connected smooth manifold <span class="math-container">$M$</span> of dimension <span class="math-container">$m&gt;1$</span>, points <span class="math-container">$p_1,\dots,p_n\in M$</span> and positive values <span class="math-container">$\{d_{i,j};1\leq i&lt;j\leq n\}$</span> satisfying the strict triangle inequalities <span class="math-container">$d_{i,j}&lt;d_{i,k}+d_{k,j}$</span>,</p> <p>Can we give <span class="math-container">$M$</span> a complete riemannian metric <span class="math-container">$g$</span> so that <span class="math-container">$d_g(p_i,p_j)=d_{i,j}$</span>, where <span class="math-container">$d$</span> is the geodesic distance?</p> <p>This can fail in dimension <span class="math-container">$2$</span>, as shown in the <a href="https://mathoverflow.net/a/417951">answer</a> by André Henriques. I'm pretty sure it has to be true for <span class="math-container">$m\geq3$</span>, but I have not been able to prove it.</p> <p>Some comments:</p> <ul> <li><p>This occurred to me while answering <a href="https://mathoverflow.net/questions/417712/equidistant-points-on-a-compact-riemannian-manifold">Equidistant points on a compact Riemannian manifold</a>, <a href="https://mathoverflow.net/questions/417712/equidistant-points-on-a-compact-riemannian-manifold/417722#417722">my answer</a> to that question contains the ideas I tried for <span class="math-container">$m\geq3$</span>.</p> </li> <li><p>By homogeneity of manifolds you can suppose the points <span class="math-container">$P_1,\dotsc,P_n$</span> are any set of <span class="math-container">$n$</span> points of <span class="math-container">$M$</span>, and using that it is not hard to reduce the problem to the case of <span class="math-container">$M$</span> being diffeomorphic to <span class="math-container">$\mathbb{R}^m$</span>. 
In particular if you prove it for <span class="math-container">$\mathbb{R}^3$</span> you will have proved it for any manifold of dimension <span class="math-container">$\geq 3$</span>.</p> </li> <li><p>One of the first ideas which come to mind is trying to somehow imbed <span class="math-container">$M$</span> in <span class="math-container">$\mathbb{R}^N$</span> for some big <span class="math-container">$N$</span>, but <a href="https://en.wikipedia.org/wiki/Distance_geometry#Characterization_via_Cayley%E2%80%93Menger_determinants" rel="noreferrer">triangle inequalities are not sufficient</a> for a finite set to be isometrically imbedded in some <span class="math-container">$\mathbb{R}^N$</span>.</p> </li> <li><p>What if we change the strict triangle inequalities for the usual ones?</p> </li> </ul>
Anton Petrunin
1,441
<p>Yes if <span class="math-container">$m\ge 2$</span>.</p> <p>Let us construct a metric graph <span class="math-container">$\Gamma$</span> by connecting vertices <span class="math-container">$p_1,\dots,p_n$</span> by edges with lengths <span class="math-container">$m_{ij}$</span>. Take its tiny tubular neighborhood and observe its surface has a nearly isometric copy of <span class="math-container">$\Gamma$</span>; the edges are assumed to be minimizing. We can also assume that embedding is stretching all edges slightly.</p> <p>We may assume that each edge runs in a flat part so by conformal change, we can make <span class="math-container">$\Gamma$</span> isometric.</p>
910,070
<p>I am working on a weighted minimization problem. Without the weights, the error function can be expressed as $e^T e$. With weights, $e$ first needs to be element-wise multiplied by $w$, then the same formula applies: $(w \circ e)^T (w \circ e)$. How do I express it in pure matrix form (without the $\circ$)? The $\circ$ operation is giving me a lot of trouble in trying to derive a derivative of a chained function on a set of parameters. It would be better if it's a matrix whose diagonal is $w_i e_i$, and 0 elsewhere; or a vector of $w_i e_i$. </p> <p>For the weighted minimization problem, I have $$g = e^T e, \; e_i = w_i u_i, \; u = h(X)$$ where $$u, w, e \in \mathbb{R}_{m}, \; X \in \mathbb{R}_{n}, \; g: \mathbb{R}_{m} \rightarrow \mathbb{R}_{1}, \; h: \mathbb{R}_{n} \rightarrow \mathbb{R}_{m} $$ I want to find $\frac{dg(X)}{dX}$. I think this should be in $\mathbb{R}_{n}^T$. Applying the chain rule in the single variable manner, $$ \frac{dg(X)}{dX} = 2 e \frac{de}{du} \frac{du}{dX} $$ $$ \frac{dg(u)}{du} \in \mathbb{R}_{m}^T, \; \frac{du}{dX} \in \mathbb{R}_{mn} $$ The sizes of the matrices don't match because $e \frac{de}{du}$ should be $e \circ \frac{de}{du}$.</p>
Ben Grossmann
81,360
<p>One nice way to make your function into a matrix product is to define the diagonal matrix $$ W = \pmatrix{w_1&amp;&amp;\\&amp;\ddots&amp;\\&amp;&amp;w_n} $$ We then have $$ (w \circ e)^T(w \circ e) = (We)^T (We) = e^T W^TWe =\\ e^T \pmatrix{|w_1|^2&amp;&amp;\\&amp;\ddots&amp;\\&amp;&amp;|w_n|^2} e $$</p>
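A quick NumPy check of this identity (the size and seed are arbitrary):

```python
import numpy as np

# Verify (w o e)^T (w o e) = e^T W^T W e with W = diag(w).
rng = np.random.default_rng(0)
w = rng.standard_normal(6)
e = rng.standard_normal(6)
W = np.diag(w)
lhs = (w * e) @ (w * e)
rhs = e @ W.T @ W @ e
assert np.isclose(lhs, rhs)
print(float(lhs))
```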
2,715,926
<p>I know the definition of the limit and how it applies in this case, but I don't know how to reach the end of the proof. The limit is:</p> <p>$\lim_{n\rightarrow \infty}{\dfrac{2n^2-3n+1}{n^2-n+7}}=2$</p> <p>I got to:</p> <p>$\dfrac{n+13}{n^2-n+7}&lt;\epsilon$</p> <p>Thank you.</p>
Dr. Sonnhard Graubner
175,066
<p>Use that $$\frac{n+13}{n^2-n+7}&lt;\frac{13}{n}$$ since $n&gt;0$ we get by cross multiplication $$n^2+13n&lt;13(n^2-n+7)$$ and this is equivalent to $$0&lt;12n^2-26n+91$$</p>
2,715,926
<p>I know the definition of the limit and how it applies in this case, but I don't know how to reach the end of the proof. The limit is:</p> <p>$\lim_{n\rightarrow \infty}{\dfrac{2n^2-3n+1}{n^2-n+7}}=2$</p> <p>I got to:</p> <p>$\dfrac{n+13}{n^2-n+7}&lt;\epsilon$</p> <p>Thank you.</p>
linksideal
171,582
<p>You could use <a href="https://en.m.wikipedia.org/wiki/L%27H%C3%B4pital%27s_rule" rel="nofollow noreferrer">L'Hôpital's Rule</a>, i.e. differentiate both the numerator and the denominator of your second expression to see that it converges to 0. According to L'Hôpital, the same holds for the original expression!</p>
2,917,439
<p>Suppose we flip two fair coins and roll one fair six-sided die.</p> <p>What is the conditional probability that the number of heads equals the number showing on the die, conditional on knowing that the die showed 1?</p> <p>Let's define the following:</p> <p>$A=\{\text{#H = # on die}\}$</p> <p>$B=\{\text{# on die = 1}\}$</p> <p>We want to find:</p> <p>$P(A|B)=\dfrac{P(A\cap B)}{P(B)}=\dfrac{P(\{\text{#H = # on die}\} \cap \{\text{# on die = 1}\})}{P(\{\text{# on die = 1}\})}$</p> <p>I drew a tree diagram to help me:</p> <p><a href="https://i.stack.imgur.com/ZhP1s.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZhP1s.png" alt="enter image description here"></a></p> <p>Basically we toss two coins, and the final toss can match with either number on the dice.</p> <p>Now $P(B)=1/6=(1/2)(1/2)(1/6)(4)$</p> <p>For the numerator, we want $\{\text{#H = # on die}\} \cap \{\text{# on die = 1}\}$. The more restrictive here is that the die must roll a 1:</p> <p>The orange show the only two paths that are the intersection. This equals $(2)(1/2)(1/2)(1/6)=1/12$</p> <p><a href="https://i.stack.imgur.com/JmS7l.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JmS7l.png" alt="enter image description here"></a> </p> <p>Therefore our $P(A|B)=(1/12)/(1/6)=(1/2)$ as our final answer.</p> <p>Is this correct? I'm wondering if it makes sense because the dice rolled a $1$, and there are only two options on a coin, that thus the 50/50.</p>
Ross Millikan
1,827
<p>You are making it <em>far</em> too complicated. If you are conditioning on the die showing $1$, you are really asking the chance that you get one head out of two flips. You can ignore all the other results from the die. It is not that the coin is $50/50$ because you have two flips. It is that if the probability of the coin showing heads is $p$, the chance of one heads in two flips is $2p(1-p)$, which is $\frac 12$ if the coin is fair and $p=\frac 12$</p>
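The simplification can be confirmed by enumerating the 2·2·6 = 24 equally likely outcomes directly:

```python
from fractions import Fraction
from itertools import product

# P(#heads = die value | die = 1) by direct enumeration of equally
# likely outcomes (coin1, coin2, die).
outcomes = list(product([0, 1], [0, 1], range(1, 7)))
given = [o for o in outcomes if o[2] == 1]
match = [o for o in given if o[0] + o[1] == o[2]]
print(Fraction(len(match), len(given)))  # 1/2
```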
2,820,623
<p>Show that $\log(\det(H_1)) ≤ \log(\det(H_2)) + \operatorname{tr}[H^{-1}_2H_1]−N$ for all positive semidefinite matrices $H_1,H_2 \in C^N$.</p> <p>We know that all positive semidefinite matrices are singular and so the determinant is zero and as such they are not invertible. It is clear from the expression that $\log(\det(H_1)) = \log(\det(H_2)) = \log(0) = -\infty$. </p> <p>Also in the right hand side, inverse of $H_2$ does not exist. </p> <p>I would be grateful if someone can throw some light on how to proceed with this proof. Is there any specific property of positive semidefinite matrix to handle this?</p>
Community
-1
<p>We consider that $H_1,H_2\in S_n^{&gt;0}$ (they are $&gt;0$); the other cases are of no interest.</p> <p>Let $f:Z\in S_n^{&gt;0}\rightarrow tr(H_2^{-1}Z)-\log(\det(Z))+\log(\det(H_2))-n$. We show that the minimum of $f$ is $0$.</p> <p>The derivative is $Df_{Z}:K\in S_n\rightarrow tr(H_2^{-1}K)-tr(KZ^{-1})=tr(K(H_2^{-1}-Z^{-1}))$ (indeed, the tangent space to $S_n^{&gt;0}$ in $Z$ is $S_n$, the space of symmetric matrices). Thus $Df_{Z}=0$ iff for every symmetric $K$, $tr(K(H_2^{-1}-Z^{-1}))=0$.</p> <p>Finally, $H_2^{-1}-Z^{-1}$ is in the orthogonal of $S_n$, that is the space of skew symmetric matrices; being also symmetric, it is zero. That implies $H_2^{-1}=Z^{-1}$ or $Z=H_2$.</p> <p>Then, if $f$ admits a local extremum, it is necessarily $f(H_2)=0$.</p> <p>It suffices to show that $f$ is convex (note that $S_n^{&gt;0}$ is convex).</p> <p>The second derivative is </p> <p>$D^2f_Z(K,L)=tr(KZ^{-1}LZ^{-1})$, where $K,L\in S_n$. </p> <p>Then $D^2f_Z(K,K)=tr((KZ^{-1})^2)$. Since $Z^{-1}&gt;0$ and $K\in S_n$, $KZ^{-1}$ has only real eigenvalues. Consequently $tr((KZ^{-1})^2)\geq 0$, $D^2f_Z(K,K)\geq 0$ and we are done. $\square$</p>
2,820,623
<p>Show that $\log(\det(H_1)) ≤ \log(\det(H_2)) + \operatorname{tr}[H^{-1}_2H_1]−N$ for all positive semidefinite matrices $H_1,H_2 \in C^N$.</p> <p>We know that all positive semidefinite matrices are singular and so the determinant is zero and as such they are not invertible. It is clear from the expression that $\log(\det(H_1)) = \log(\det(H_2)) = \log(0) = -\infty$. </p> <p>Also in the right hand side, inverse of $H_2$ does not exist. </p> <p>I would be grateful if someone can throw some light on how to proceed with this proof. Is there any specific property of positive semidefinite matrix to handle this?</p>
nguyen0610
181,036
<p>If <span class="math-container">$H_1$</span> and <span class="math-container">$H_2$</span> are positive (i.e., <span class="math-container">$&gt;0$</span>), the inequality is equivalent to <span class="math-container">$$\log(\det(H_2^{-1}H_1))\leq \mathrm{Tr}(H_2^{-1}H_1) - N.$$</span> Notice that <span class="math-container">$\det(H_2^{-1}H_1) = \det(H_2^{-\frac12}H_1 H_2^{-\frac12})$</span> and <span class="math-container">$\mathrm{Tr}(H_2^{-1}H_1) = \mathrm{Tr}(H_2^{-\frac12}H_1 H_2^{-\frac12})$</span>. Denote <span class="math-container">$H = H_2^{-\frac12}H_1 H_2^{-\frac12}$</span>, the inequality is equivalent to <span class="math-container">$$\log \det(H) \leq \mathrm{Tr}(H) -N.$$</span> <span class="math-container">$H$</span> is positive, let <span class="math-container">$a_1, \ldots, a_N$</span> denote its eigenvalues. Then <span class="math-container">$\det(H) = a_1 \cdots a_N$</span> and <span class="math-container">$\mathrm{Tr}(H) = a_1+ \cdots + a_N$</span>. The inequality then follows from the elementary inequality <span class="math-container">$\log x \leq x-1$</span> for <span class="math-container">$x &gt;0$</span>.</p>
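Both answers work with strictly positive definite matrices; in that setting the inequality can be stress-tested numerically on random matrices (size and seed are arbitrary choices):

```python
import numpy as np

# Stress-test log det(H1) <= log det(H2) + tr(H2^{-1} H1) - N on random
# positive definite matrices.
rng = np.random.default_rng(1)
N = 4
for _ in range(200):
    A = rng.standard_normal((N, N))
    H1 = A @ A.T + N * np.eye(N)       # positive definite by construction
    B = rng.standard_normal((N, N))
    H2 = B @ B.T + N * np.eye(N)
    lhs = np.linalg.slogdet(H1)[1]
    rhs = np.linalg.slogdet(H2)[1] + np.trace(np.linalg.solve(H2, H1)) - N
    assert lhs <= rhs + 1e-9
print("inequality held on all random trials")
```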
1,006,354
<ul> <li>A multiple choice exam has 175 questions. </li> <li>Each question has 4 possible answers. </li> <li>Only 1 answer out of the 4 possible answers is correct. </li> <li>The pass rate for the exam is 70% (123 questions must be answered correctly). </li> <li>We know for a fact that 100 questions were answered correctly. </li> </ul> <p>Questions: What is the probability of passing the exam, if one were to guess on the remaining 75 questions? That is, pick at random one of the 4 answers for each of the 75 questions. </p>
Peter
82,961
<p>Hint: Use the formula for binomially distributed random variables.</p>
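To make the hint concrete (the computation below is an editorial addition, not part of the original hint): with 100 questions already correct, passing requires at least $123-100=23$ of the remaining 75 guesses to succeed, where each guess succeeds with probability $1/4$. The binomial tail sum in Python:

```python
from math import comb

n, p = 75, 0.25          # 75 guessed questions, 1/4 chance each
need = 123 - 100         # 23 more correct answers required to pass

# P(X >= 23) for X ~ Binomial(75, 0.25)
prob_pass = sum(comb(n, k) * p**k * (1 - p)**(n - k)
                for k in range(need, n + 1))
print(prob_pass)  # roughly 0.16
```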
2,585,265
<p>I understand that $\cos(\theta) = \sin(\pi/2 - \theta)$ holds true. But, </p> <blockquote> <p>Does $\cos(\theta) = \sin(\pi/2 +\theta)$ always hold true?</p> </blockquote> <p>I am asking this question because I encountered the following question in my workbook.</p> <p>If $h(x) = \cos x$, $g(x) = \sin x$, and $h(x) = g(f(x))$, which of the following can be $f(x)$?</p> <p>(a) $-x$</p> <p>(b) $\pi/2 + x$</p> <p>(c) $\pi - x$</p> <p>(d) $3\pi/2 - x$</p> <p>(e) $3\pi/2 + x$</p> <p>My book says the correct answer is (b), and I am a bit baffled by this.</p> <p>I can see that this holds true by plugging in certain values for $x$. But is there a mathematical proof for $\cos(\theta) = \sin(\pi/2 + \theta)$?</p>
idok
514,894
<p>The number in the square root is at most $73$, and a square. So it is one of $1,4,9,16,25,36,49,64$.<br> Also, the number in the square root is one more than a divisor of $72$, so it is one of $4,9,25$. All those values can be achieved.</p>
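This answer's case analysis can be checked mechanically; the following Python snippet (an editorial addition) lists the perfect squares at most $73$ that are one more than a divisor of $72$:

```python
# Divisors of 72, then squares s <= 73 with s - 1 among them.
divisors = {d for d in range(1, 73) if 72 % d == 0}
squares = [s * s for s in range(1, 9) if (s * s - 1) in divisors]
print(squares)  # [4, 9, 25]
```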
855,570
<p>I am having trouble with what seems like it should be a simple problem. I am trying to find intersections of connections between multiple people but I want to include any intersection of connections found between any two of the sets.</p> <p>For example, Let</p> <p>$A$ = {January, February, March, April, May, August}</p> <p>$R$ = {January, February, March, April, September, October, November, December}</p> <p>$Y$ = {January, February, May, July}</p> <p>Now, $A \cap R \cap Y$ = {January, February},</p> <p>but $A \cap R$ = {January, February, March, April}, </p> <p>$A \cap Y$ = {January, February, May}, </p> <p>and $R \cap Y$ = {January, February}</p> <p>So what I really want is {January, February, March, April, May}</p> <p>Now, I may not just have $A, R, $ and $Y$, I may have a lot more to sift through. Is there a more simple principle that I am missing to group all of these intersections together as a subset?</p> <p>One thing I thought of was creating a graph, but I am rusty on my graph algorithms. Any suggestions though would be great.</p> <p>Thanks!</p>
JCV
160,462
<p>The set you are after can be formalised thus: $\bigcup\limits_{\substack{A,B \in I \\ A \ne B}} A\cap B$, where $I$ is your set of subsets (i.e. in the example you gave $I=\{A,R,Y\}$).</p> <p>This will give you the set of elements that are in at least two of your subsets.</p> <p>From a programming standpoint, I think you would have to go through each month in turn and count how many subsets it is in. Once you count it twice you can add it to your result set and move on to the next month.</p>
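Following the counting idea at the end of this answer, a small Python sketch (an editorial addition; the set names match the question):

```python
from collections import Counter

A = {"January", "February", "March", "April", "May", "August"}
R = {"January", "February", "March", "April", "September",
     "October", "November", "December"}
Y = {"January", "February", "May", "July"}

# Count how many of the subsets each element appears in;
# keep those that occur in at least two.
counts = Counter(x for s in (A, R, Y) for x in s)
result = {x for x, n in counts.items() if n >= 2}
print(sorted(result))
```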
402,214
<p>I recently obtained "What is Mathematics?" by Richard Courant and I am having trouble understanding what is happening with the Prime Number Unique Factor Composition Proof (found on Page 23).</p> <p>The first part:</p> <blockquote> <p><img src="https://i.stack.imgur.com/h5rCh.png" alt="enter image description here"></p> </blockquote> <p>I have looked over it many times but I just don't understand what he is doing and why, as an example, if you remove the first factor from either side of the equation you end up with two essentially different compositions that make up a new smaller integer.</p> <p>I'm sure it is a simple error on my behalf but I have been stuck on this for a long time so I would appreciate a walkthrough explained clearly or some pointers in the right direction. Thank you.</p>
aram
77,407
<p>Take a look: this is a proof by <strong>contradiction</strong>. You are supposing that the two factorizations are <strong>different</strong> and deriving an absurdity (in this case, that one prime is divisible by another prime); thus the assumption that the factorizations were different is false. $m$ is taken to be the smallest such number because the proof uses the principle of well-ordering defined on page 18: "Every non-empty set of positive integers has a smallest member" (<a href="https://en.wikipedia.org/wiki/Well-order" rel="nofollow">https://en.wikipedia.org/wiki/Well-order</a>). This smallest member is the smallest number that is supposed to admit two different prime factorizations.</p>
402,214
<p>I recently obtained "What is Mathematics?" by Richard Courant and I am having trouble understanding what is happening with the Prime Number Unique Factor Composition Proof (found on Page 23).</p> <p>The first part:</p> <blockquote> <p><img src="https://i.stack.imgur.com/h5rCh.png" alt="enter image description here"></p> </blockquote> <p>I have looked over it many times but I just don't understand what he is doing and why, as an example, if you remove the first factor from either side of the equation you end up with two essentially different compositions that make up a new smaller integer.</p> <p>I'm sure it is a simple error on my behalf but I have been stuck on this for a long time so I would appreciate a walkthrough explained clearly or some pointers in the right direction. Thank you.</p>
André Nicolas
6,312
<p>We reframe the <em>logic</em> of the argument, using a variant of induction called <em>Fermat's Method of Infinite Descent</em>. </p> <p>Call a positive integer <strong>bad</strong> if it has (apart from order) more than one factorization as a product of prime powers. Note that $1$ is not bad.</p> <p>We want to show that there are <em>no bad positive integers</em>. Suppose to the contrary that there is a bad positive integer $n_1$.</p> <p>By the argument in the book, there is then a bad positive integer $n_2\lt n_1$.</p> <p>By the argument in the book, there is then a bad positive integer $n_3\lt n_2$.</p> <p>And so on. So if there is a bad positive integer, there is an <em>infinite descending sequence</em> $$n_1\gt n_2\gt n_3\gt n_4\gt \cdots$$ of (bad) positive integers.</p> <p>However, there is <strong>no</strong> infinite descending sequence of positive integers!</p> <p>Instead of using the language of descent. the proof quoted above used the fact that if there is a bad positive integer, there is a <em>smallest</em> bad. The two proofs are completely equivalent, they are really the <em>same</em> proof. However, descent feels more concrete to me. </p> <p><strong>Remark:</strong> The proof that is used in <em>What is Mathematics</em> is a clever one, in that it avoids using "Euclid's Lemma." This lemma essentially says that if a prime $p$ divides $ab$, then $p$ divides $a$ or $p$ divides $b$. The standard proof of the Fundamental Theorem uses Euclid's Lemma. Thus in a sense the proof you quoted uses less machinery than the standard proof, it is, sort of, more "elementary." It pays for that by being quite a bit more complicated, and harder to understand. It is the sort of proof that might be appreciated for its cleverness if one is already familiar with the standard proof.</p>
2,323,351
<p>I thought we would take the $4$ vowels and find the number of arrangements, $4!$, and multiply it by the number of arrangements that can be made with the consonants, that is $5!/2!$. However, my approach seems to be wrong. </p>
Especially Lime
341,019
<p>You must have done something wrong, since for $n=7$ your expression is $35-60+20-4=-9.$ </p> <p>To use inclusion-exclusion, your events should be something like "includes $i$ and $i+1$". Now the number of choices which include $i$ and $i+1$ is $\binom{n-2}{2}$, and there are $n-1$ possible $i$, so you should start $$\binom{n}{4}-(n-1)\binom{n-2}{2}...$$ Then how many ways are there to have $i$ and $i+1$ and $j$ and $j+1$, for $i&lt;j$? If $j=i+1$ there are $n-3$ ways (and there are $n-2$ choices for $i$). If not there is $1$ way, and $\binom{n-1}{2}-(n-2)=\binom{n-2}2$ choices for $i,j$ (the number of pairs minus the number where $i,j$ are consecutive). So it should continue $$\binom{n}{4}-(n-1)\binom{n-2}{2}+(n-3)(n-2)+\binom{n-2}2...$$ For the final term, the only way three events can happen is if $i+1=j,j+1=k$, for which there are $n-3$ options. So overall you get $$\binom{n}{4}-(n-1)\binom{n-2}{2}+(n-3)(n-2)+\binom{n-2}2-(n-3).$$</p> <p>[This is very close to what you had, the only difference is the extra term for having two separate consecutive pairs of $+\binom{n-2}2$.]</p> <p>To see that the above is the same as $\binom{n-3}{4}$, note that $(n-1)\binom{n-2}2=3\binom{n-1}3$. Also $(n-3)(n-2)=2\binom{n-2}{2}$, so this can be written as $$\binom{n}4-\binom31\binom{n-1}3+\binom32\binom{n-2}2-\binom33\binom{n-3}1.$$ This is precisely the inclusion-exclusion formula for the number of choices of $4$ numbers from the first $n$ which don't include any of the last $3$, so is $\binom{n-3}4$.</p>
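The closed form $\binom{n-3}{4}$ is easy to confirm by brute force; here is a short Python check (an editorial addition, not part of the original answer):

```python
from itertools import combinations
from math import comb

def no_two_consecutive(n):
    """Count 4-subsets of {1,...,n} with no two consecutive elements."""
    return sum(1 for c in combinations(range(1, n + 1), 4)
               if all(b - a > 1 for a, b in zip(c, c[1:])))

# Agrees with the closed form for every small n.
for n in range(4, 15):
    assert no_two_consecutive(n) == comb(n - 3, 4)
print(no_two_consecutive(10))  # 35, which equals comb(7, 4)
```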
2,323,351
<p>I thought we would take the $4$ vowels and find the number of arrangements, $4!$, and multiply it by the number of arrangements that can be made with the consonants, that is $5!/2!$. However, my approach seems to be wrong. </p>
Evargalo
443,536
<p>A recursive proof:</p> <p>Let $A(k,n)$ be the number of ways to pick $k$ integers from $\{1,\dots,n\}$ such that no two consecutive numbers are picked. If $n&lt;2k-1$, then $A(k,n)=0$, and $A(k,2k-1)=1$.</p> <p>If you pick $k$ integers from $\{1,\dots,n\}$ such that no two consecutive numbers are picked, there are two cases:</p> <ul> <li>Either $n$ is not picked. Then the $k$ integers are picked in $\{1,\dots,n-1\}$. The number of such solutions is $A(k,n-1)$.</li> <li>Or $n$ is picked. Then $n-1$ isn't picked and $k-1$ integers are picked in $\{1,\dots,n-2\}$. The number of such solutions is $A(k-1,n-2)$.</li> </ul> <p>Building from $A(0,n)=1$ and $A(1,n)=n$ with the recursive rule $A(k,n)=A(k,n-1)+A(k-1,n-2)$, you reconstruct Pascal's triangle and conclude that $A(k,n)=\binom{n-k+1}{k}$, which for $k=4$ gives $\binom{n-3}{4}$.</p>
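The recursion can be checked directly in a few lines of Python (an editorial addition); for the case $k=4$ asked about it matches $\binom{n-3}{4}$, and in general $\binom{n-k+1}{k}$:

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def A(k, n):
    # Base cases: one way to pick nothing; no way to pick from an empty range.
    if k == 0:
        return 1
    if n <= 0:
        return 0
    # Either n is not picked, or n is picked (and then n-1 is excluded).
    return A(k, n - 1) + A(k - 1, n - 2)

for n in range(0, 12):
    for k in range(0, 5):
        expected = comb(n - k + 1, k) if n - k + 1 >= 0 else 0
        assert A(k, n) == expected
print(A(4, 10))  # 35
```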
119,456
<p>I generated a 2D random array in the $x$-$y$ plane with</p> <pre><code>L = 10; random = Table[{x, y, RandomReal[{-1, 1}]}, {x, 0, L, L/10}, {y, 0, L, L/10}]; </code></pre> <p>Now I want to save it for later use via</p> <pre><code>iniF = Interpolation[Flatten[random, 1]]; inif[x_, y_] = c + iniF[x, y]; </code></pre> <p>where $c$ is a constant. How do you save the random data in a convenient format? Thank you! </p>
Sumit
8,070
<p>As a general solution, you can always <code>Export</code> your data to a file.</p> <pre><code>L = 10; random = Table[{x, y, RandomReal[{-1, 1}]}, {x, 0, L, L/10}, {y, 0, L, L/10}]; Export[NotebookDirectory[]&lt;&gt;"rand.dat",Flatten[random,1]] </code></pre> <p>Next time you want to use this data:</p> <pre><code>random = Import[NotebookDirectory[]&lt;&gt;"rand.dat"]; </code></pre> <p>One benefit of using a data file is that you can use the data with any other language as well, and also share it with others. If you are working only with Mathematica, using a seed (<code>SeedRandom</code>) as suggested by Wjx would be a better option. </p>
3,245,223
<p>So I am supposed to prove an identity which seems fairly easy, but the negative exponents in <span class="math-container">$$\sum_{k=0}^n \binom nk\ \left(\frac{(-1)^k}{k+1}\right)= \frac{1}{n+1}$$</span> are making this question very difficult for me. I have tried using the binomial theorem on the right side with <span class="math-container">$(n+1)^{-1}$</span>, but I understand that operation does not make sense. I can also tell that the only difference between the two sides is that the left side has an additional <span class="math-container">$(k+1)^{-1}$</span> and the right side has an additional <span class="math-container">$(n+1)^{-1}$</span>, but I am still having difficulty solving this question. Hints appreciated.</p>
Arthur
15,500
<p>Hint: Consider antidifferentiating <span class="math-container">$f(x)=(1-x)^n$</span>.</p>
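The identity in the question can be verified exactly with rational arithmetic before proving it; a short Python check (an editorial addition, not part of the hint):

```python
from fractions import Fraction
from math import comb

# Check sum_{k=0}^{n} C(n,k) * (-1)^k / (k+1) == 1/(n+1) for small n.
for n in range(0, 15):
    total = sum(Fraction((-1) ** k * comb(n, k), k + 1) for k in range(n + 1))
    assert total == Fraction(1, n + 1)
print("identity holds for n = 0..14")
```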
1,575,671
<p>The whole question is that <br> If $f(x) = -2\cos^2x$, then what is $d^6y \over dx^6$ for $x = \pi/4$?</p> <p>The key here is what does $d^6y \over dx^6$ mean?</p> <p>I know that $d^6y \over d^6x$ means the 6th derivative of $y$ with respect to $x$, but I've never seen the $dx^6$ form before.</p>
MPW
113,214
<p>The symbol "$\frac d{dx}$" is used to indicate a single derivative (with respect to $x$).</p> <p>We treat repeated application of this operator symbolically as "powers" of the operator (as if it were ordinary multiplication by an ordinary fraction), writing "$\frac{d^n}{dx^n}$" to indicate $n$ successive applications of "$\frac d{dx}$".</p> <p>The notation is peculiar but wholly accepted as traditional. In particular, one might wonder why "$dx^n$" rather than "$(dx)^n$" in the "denominator"; but evidently the "$d$" isn't regarded as an independent factor, rather "$dx$" is regarded as an atomic term.</p> <p>One eventually accepts it and gets used to the notation.</p>
1,575,671
<p>The whole question is that <br> If $f(x) = -2\cos^2x$, then what is $d^6y \over dx^6$ for $x = \pi/4$?</p> <p>The key here is what does $d^6y \over dx^6$ mean?</p> <p>I know that $d^6y \over d^6x$ means the 6th derivative of $y$ with respect to $x$, but I've never seen the $dx^6$ form before.</p>
Community
-1
<p>The correct notation for the sixth derivative is</p> <p>$$\frac{d^6 y}{dx^6}$$</p> <p>not $\frac{d^6 y}{d^6x}$. This notation is meant to be suggestive of taking the sixth power of the operator $d/dx$; that is,</p> <p>$$\frac{d^6 y}{dx^6} = \underbrace{\frac{d}{dx} \cdots \frac{d}{dx}}_{6} y$$</p> <p>Imagine $dx$ being treated as a single term, of which there are $6$.</p>
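To actually evaluate the derivative in the question (this computation is an editorial addition, not part of either answer): writing $-2\cos^2 x = -1-\cos 2x$, the constant differentiates away, and each differentiation multiplies the cosine's amplitude by its frequency and shifts its phase by $\pi/2$, which a short Python loop can track:

```python
import math

# f(x) = -2*cos(x)**2 = -1 - cos(2x)  (double-angle identity).
# Rule: d/dx [a*cos(w*x + p)] = a*w*cos(w*x + p + pi/2)
amp, freq, phase = -1.0, 2.0, 0.0
for _ in range(6):          # differentiate six times
    amp *= freq
    phase += math.pi / 2

def d6f(x):
    return amp * math.cos(freq * x + phase)

print(d6f(math.pi / 4))  # ~0 (up to floating-point noise): the sixth derivative vanishes at pi/4
```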
1,738,153
<p>I know the definition is given as follows:</p> <p>A map $p: G \rightarrow GL(V)$ such that $p(g_1g_2)=p(g_1)p(g_2)$, but I still do not really understand what this means.</p> <p>Can someone help me gain some intuition for this - perhaps a basic example?</p> <p>Thanks</p>
John Martin
37,200
<p>Building off of my comment, the easiest non-trivial example I can think of is that of a finite cyclic group. Since a cyclic group is generated by a single element, we only need to say where the generator goes. Let $G$ be a cyclic group of order $2$, with generator $x$. Then $x\mapsto\begin{pmatrix}-1&amp;0\\0&amp;-1\end{pmatrix}$ defines a map $p:G\to GL_2$, and thus a 2-dimensional representation. Even more simply, you can take $x\mapsto [-1]$ where here we are viewing $[-1]$ as a $1\times 1$ matrix; thus we have a 1-dimensional representation of $G$ by $GL_1$.</p> <p>Note: The condition you write $p(g_1g_2) = p(g_1)p(g_2)$ is saying that $p$ has to be a group homomorphism.</p>
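The homomorphism condition from this answer can be checked exhaustively for such a tiny group; a Python sketch (an editorial addition) using the $2\times 2$ representation of the order-2 cyclic group:

```python
# Represent the cyclic group of order 2 by exponents {0, 1} of the
# generator x, with group law addition mod 2.
I = [[1, 0], [0, 1]]
negI = [[-1, 0], [0, -1]]
p = {0: I, 1: negI}  # p(x^a)

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Verify p(g1*g2) == p(g1)p(g2) for every pair of group elements.
for a in (0, 1):
    for b in (0, 1):
        assert p[(a + b) % 2] == matmul(p[a], p[b])
print("p is a group homomorphism")
```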
752,517
<p>From Wikipedia:</p> <blockquote> <p>...the free group $F_{S}$ over a given set $S$ consists of all expressions (a.k.a. words, or terms) that can be built from members of $S$, considering two expressions different unless their equality follows from the group axioms (e.g. $st = suu^{-1}t$, but $s \ne t$ for $s,t,u \in S$). The members of $S$ are called generators of $F_{S}$.</p> </blockquote> <p>I don't understand the distinction between "<strong>$a$ and $b$ freely generate a group</strong>", "$a$ and $b$ <strong>generate a free group</strong>", and just checking whether a group is free. For example, I thought that if $a$ and $b$ freely generate a group then that group is free, and vice versa. However, I have seen statements like "$a$ and $b$ freely generate a free subgroup of..." quite a few times, and so there seems to be some distinction between the terms. If so, could you please provide an example where one of the conditions holds but the other doesn't?</p> <p>If, for example, $S=\{a,b\}$, but $a^{3}=1$ and $b^{2}=1$, is it the case that $S$ cannot generate a free group even if any word written using only the letters $a$, $a^2$ and $b$ (excluding powers of these) has a unique representation? If we say that we are considering words in $\{a,b\}$, does that mean that we consider words using $a$, $b$, $a^{-1}$ and $b^{-1}$ as letters, identifying $b^2=1$ and $a^3=1$ in the process of reducing the word, or that we use any integer power of $a$ and $b$ and only reduce identities that hold by the group axioms, independently of the actual group structure we are considering?</p> <p>I apologize if the question is confusing, but I am quite confused myself. If you think there is any way of clarifying the question, please feel free to suggest it.</p> <p>Thank you.</p>
Michael Hardy
11,667
<p>I don't know why I didn't say the following at the outset. I guess I was just going along with your method.</p> <p><b>Rid your proof of trigonometric functions and instead do the following:</b></p> <p>$$ t\mapsto (x,y) = \begin{cases} \phantom{\lim\limits_{t\to\infty}} \left( \dfrac{1-t^2}{1+t^2}, \dfrac{2t}{1+t^2} \right) &amp; \text{if }t\ne\infty, \\[10pt] \lim\limits_{t\to\infty} \left( \dfrac{1-t^2}{1+t^2}, \dfrac{2t}{1+t^2} \right) &amp; \text{if }t=\infty. \end{cases} $$</p> <p>That's a homeomorphism from $\mathbb R\cup\{\infty\}$ to $\{(x,y)\in\mathbb R^2:x^2+y^2=1\}$.</p> <p>To show that it's surjective, show that $t=\dfrac{y}{x+1}$ (and notice that $\dfrac{y}{x+1}\to\infty$ as $(x,y)\to(-1,0)$ along the curve $x^2+y^2=1$).</p>
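The two claims in this answer, that the map lands on the unit circle and that $t = y/(x+1)$ inverts it, are easy to spot-check numerically (an editorial addition):

```python
# Rational parametrization of the unit circle and its inverse.
for t in (0.0, 1.0, -2.0, 3.5, -10.0):
    x = (1 - t * t) / (1 + t * t)
    y = 2 * t / (1 + t * t)
    assert abs(x * x + y * y - 1) < 1e-12     # on the unit circle
    assert abs(y / (x + 1) - t) < 1e-12       # t recovered from (x, y)
print("parametrization and inverse verified")
```

Note that $x+1 = 2/(1+t^2)$ is never zero for real $t$, so the inverse formula is safe away from the point $(-1,0)$.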
412,642
<p>Let $E=\mathcal{C}[0,1]$. How does one prove that if $f_n\rightarrow f$ in the norm $\displaystyle{\|f\|_\infty=\sup_{t\in[0, 1]}|f(t)|}$ then $f_n\rightarrow f$ in the norm $\displaystyle{\|f\|_p=\left(\int_{0}^{1}|f(t)|^p\,dt\right)^{1/p}}$? </p> <p>Give an example showing that the converse is not true.</p>
Ted Shifrin
71,348
<p>Hints: If $\|f-g\|_\infty&lt;\epsilon$, what can you say about $\|f-g\|_p$?</p> <p>Draw the graph of a continuous function $f$ with $\|f\|_1&lt;\epsilon$ and $\|f\|_\infty=1$.</p>
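For the second hint, one concrete family (an editorial addition, not part of the original hint) is the triangular spike $f_n(t)=\max(0,\,1-nt)$, which has $\|f_n\|_\infty = 1$ while $\|f_n\|_1 = 1/(2n)\to 0$. A numerical Python sketch:

```python
# Spike of height 1 supported on [0, 1/n]: sup norm stays 1, L1 norm -> 0.
def spike(n, t):
    return max(0.0, 1.0 - n * t)

n, m = 100, 200_000
grid = [i / m for i in range(m + 1)]
sup_norm = max(spike(n, t) for t in grid)       # attained at t = 0
l1_norm = sum(spike(n, t) for t in grid) / m    # Riemann approximation

print(sup_norm, l1_norm)  # 1.0 and about 1/(2n) = 0.005
```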
3,946,762
<p><a href="https://i.stack.imgur.com/JHS35.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JHS35.png" alt="enter image description here" /></a></p> <p>In <span class="math-container">$\triangle ABC$</span> the cevians <span class="math-container">$AD$</span>, <span class="math-container">$BE$</span> and <span class="math-container">$CF$</span> are concurrent at the point <span class="math-container">$O$</span>. <span class="math-container">$DL\parallel EF$</span> and <span class="math-container">$BE$</span> intersects <span class="math-container">$DL$</span> at <span class="math-container">$K$</span>. How can one prove that <span class="math-container">$DK = DL$</span>?<br /> I applied Menelaus's theorem twice in <span class="math-container">$\triangle KEL$</span> and got the relation <span class="math-container">$\frac{DL^2} {DK^2}=\frac{BE} {BK}\cdot \frac{CL} {CE} \cdot \frac{AL} {AE} \cdot \frac{EO} {KO}$</span>, and also tried to use ratios of similar triangles, but with no result. Any ideas on utilizing ratios of similar triangles?</p>
timon92
210,525
<p>You may want to start like this: Since <span class="math-container">$EF \parallel DL$</span>, we have <span class="math-container">$\dfrac{DL}{JE}=\dfrac{DA}{AJ}$</span> and <span class="math-container">$\dfrac{DK}{JE} = \dfrac{DO}{OJ}$</span>. The problem reduces to showing that <span class="math-container">$\dfrac{DA}{AJ}=\dfrac{DO}{OJ}$</span>.</p>
3,946,762
<p><a href="https://i.stack.imgur.com/JHS35.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JHS35.png" alt="enter image description here" /></a></p> <p>In <span class="math-container">$\triangle ABC$</span> the cevians <span class="math-container">$AD$</span>, <span class="math-container">$BE$</span> and <span class="math-container">$CF$</span> are concurrent at the point <span class="math-container">$O$</span>. <span class="math-container">$DL\parallel EF$</span> and <span class="math-container">$BE$</span> intersects <span class="math-container">$DL$</span> at <span class="math-container">$K$</span>. How can one prove that <span class="math-container">$DK = DL$</span>?<br /> I applied Menelaus's theorem twice in <span class="math-container">$\triangle KEL$</span> and got the relation <span class="math-container">$\frac{DL^2} {DK^2}=\frac{BE} {BK}\cdot \frac{CL} {CE} \cdot \frac{AL} {AE} \cdot \frac{EO} {KO}$</span>, and also tried to use ratios of similar triangles, but with no result. Any ideas on utilizing ratios of similar triangles?</p>
Solumilkyu
297,490
<p>Following timon92, we are going to prove <span class="math-container">$\dfrac{DA}{AJ}=\dfrac{DO}{OJ}$</span>.</p> <p><a href="https://i.stack.imgur.com/1qJ1S.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1qJ1S.png" alt="enter image description here" /></a></p> <p>Consider the figure (only add point <span class="math-container">$M$</span>, which is an intersection of lines <span class="math-container">$EF$</span> and <span class="math-container">$BC$</span>), we focus on <span class="math-container">$\triangle MDJ$</span>, and apply Menelaus's theorem 4 times: <span class="math-container">\begin{align} \color{blue}{\frac{MB}{BD}}\color{red}{\frac{DA}{AJ}}\frac{JF}{FM}&amp;=1;\tag{1}\\ \color{blue}{\frac{MB}{BD}}\color{orange}{\frac{DO}{OJ}}\color{green}{\frac{JE}{EM}}&amp;=1;\tag{2}\\ \color{purple}{\frac{MC}{CD}}\color{orange}{\frac{DO}{OJ}}\frac{JF}{FM}&amp;=1;\tag{3}\\ \color{purple}{\frac{MC}{CD}}\color{red}{\frac{DA}{AJ}}\color{green}{\frac{JE}{EM}}&amp;=1.\tag{4} \end{align}</span> Combining (1) and (4), and combining (2) and (3), respectively, we obtain <span class="math-container">\begin{align} \left(\color{red}{\frac{DA}{AJ}}\right)^2 =\left(\color{blue}{\frac{MB}{BD}}\frac{JF}{FM}\color{purple}{\frac{MC}{CD}}\color{green}{\frac{JE}{EM}}\right)^{-1} =\left(\color{orange}{\frac{DO}{OJ}}\right)^2. \end{align}</span></p>
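Since both answers are synthetic, a quick coordinate check is reassuring; the Python sketch below (an editorial addition) picks an arbitrary triangle and interior point, builds $D,E,F,K,L$ by line intersections, and confirms $DK = DL$:

```python
import math

def inter(p, u, q, v):
    """Intersection of the lines p + t*u and q + s*v in the plane."""
    d = u[0] * v[1] - u[1] * v[0]
    t = ((q[0] - p[0]) * v[1] - (q[1] - p[1]) * v[0]) / d
    return (p[0] + t * u[0], p[1] + t * u[1])

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1])

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

A, B, C = (0.0, 0.0), (4.0, 0.0), (1.0, 3.0)
O = (1.4, 0.6)                           # an arbitrary interior point

D = inter(A, sub(O, A), B, sub(C, B))    # AD meets BC
E = inter(B, sub(O, B), C, sub(A, C))    # BE meets CA
F = inter(C, sub(O, C), A, sub(B, A))    # CF meets AB

K = inter(D, sub(F, E), B, sub(E, B))    # line through D parallel to EF meets BE
L = inter(D, sub(F, E), A, sub(C, A))    # same line meets line CA

print(abs(dist(D, K) - dist(D, L)) < 1e-9)  # True
```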
3,037,296
<p>I'm confused about what <span class="math-container">$\sqrt {3 + 4i}$</span> would be after using the quadratic formula to solve <span class="math-container">$z^2 + iz - (1 + i) = 0$</span>.</p>
TonyK
1,508
<p>Let <span class="math-container">$\sqrt{3+4i}=x+yi$</span>, with <span class="math-container">$x,y\in\Bbb R$</span>. Then <span class="math-container">$3+4i=(x+yi)^2=x^2-y^2+2xyi$</span>.</p> <p>Equating real and imaginary parts gives <span class="math-container">$x^2-y^2=3$</span> and <span class="math-container">$2xy=4$</span>. The second equation gives <span class="math-container">$y=2/x$</span>, and substituting this back into the first equation gives <span class="math-container">$x^2-4/x^2=3$</span>.</p> <p>Multiplying through by <span class="math-container">$x^2$</span> gives a quadratic in <span class="math-container">$x^2$</span>, namely <span class="math-container">$x^4-3x^2-4=0$</span>. This factorises as <span class="math-container">$(x^2-4)(x^2+1)=0$</span>. <span class="math-container">$x$</span> is real, so we can't have <span class="math-container">$x^2+1=0$</span>. Hence <span class="math-container">$x^2-4=0$</span>, i.e. <span class="math-container">$x=\pm 2$</span>.</p> <p>I will let you add the finishing touches.</p>
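A quick Python check of this answer and of the original quadratic (an editorial addition, not part of the answer):

```python
import cmath

# The square root of 3+4i found above.
root = 2 + 1j
assert root * root == 3 + 4j
assert abs(cmath.sqrt(3 + 4j) - root) < 1e-12

# Plug both roots of z^2 + i z - (1+i) = 0 back into the quadratic:
for z in ((-1j + root) / 2, (-1j - root) / 2):
    assert abs(z * z + 1j * z - (1 + 1j)) < 1e-12
print("roots:", (-1j + root) / 2, (-1j - root) / 2)  # 1 and -1-i
```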
1,538,496
<p>I came across this riddle during a job interview and thought it was worth sharing with the community as I thought it was clever:</p> <blockquote> <p>Suppose you are sitting at a perfectly round table with an adversary about to play a game. Next to each of you is an infinitely large bag of pennies. The goal of the game is to be the player who is able to put the last penny on the table. Pennies cannot be moved once placed and cannot be stacked on top of each other; also, players place 1 penny per turn. There is a strategy to win this game every time. Do you move first or second, and what is your strategy?</p> </blockquote> <p>JMoravitz has provided the answer (hidden in spoilers) below in case you are frustrated!</p>
Roger Dahl
167,445
<p>I'm wondering if it would be possible to prove that the following solution is a winning or losing strategy:</p> <blockquote class="spoiler"> <p> Go first and place a penny <em>near</em> the edge of the table in such a way that there isn't room for another penny on the outside of it.</p> </blockquote>