2,048,282
<p>I have been doing derivatives, but I can't wrap my head around this question for whatever reason. Would appreciate anyone's help. $$g(x) = \tan(x)/e^x$$</p>
K Split X
381,431
<p>You use the quotient rule.</p> <p>Let <span class="math-container">$f\left(x\right) = \tan \left(x\right)$</span></p> <p>Let <span class="math-container">$g\left(x\right) = e^x$</span></p> <p>The quotient rule says that:</p> <p><span class="math-container">$$\left(\frac{f\left(x\right)}{g\left(x\right)}\right)' = \left(\frac{f'\left(x\right)g\left(x\right)-g'\left(x\right)f\left(x\right)}{\left[g\left(x\right)\right]^2}\right)$$</span> <span class="math-container">$$ \left(\frac{\sec ^2\left(x\right)e^x - e^x\tan \left(x\right)}{e^{2x}}\right)$$</span></p> <hr> <p>On a side note: <span class="math-container">$$\left(e^x\right)' = e^x$$</span> <span class="math-container">$$\left(\tan \left(x\right)\right)' = \left(\sec ^2\left(x\right)\right)$$</span></p>
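A quick numerical cross-check of the quotient-rule result against a central-difference approximation (a sketch, not part of the original answer):

```python
import math

def g(x):
    # the function from the question: g(x) = tan(x) / e^x
    return math.tan(x) / math.exp(x)

def g_prime(x):
    # quotient-rule result: (sec^2(x) e^x - e^x tan(x)) / e^(2x)
    sec2 = 1.0 / math.cos(x) ** 2
    return (sec2 * math.exp(x) - math.exp(x) * math.tan(x)) / math.exp(2 * x)

def central_diff(f, x, h=1e-6):
    # numerical derivative via central differences
    return (f(x + h) - f(x - h)) / (2 * h)
```

Agreement at a few sample points away from the poles of tan is strong evidence the algebra is right.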
2,048,282
<p>I have been doing derivatives, but I can't wrap my head around this question for whatever reason. Would appreciate anyone's help. $$g(x) = \tan(x)/e^x$$</p>
Doug M
317,162
<p>You are going to have to apply the product rule / quotient rule at least once. A question is whether you want to apply it twice.</p> <p>$y = e^{-x} \tan x\\ \frac{dy}{dx} = (\frac {d}{dx} e^{-x}) \tan x + e^{-x}(\frac {d}{dx} \tan x)$</p> <p>and $\tan x = \frac {\sin x}{\cos x}$</p> <p>However, this is an interesting approach to $\frac {d}{dx} (\tan x)$:</p> <p><a href="https://i.stack.imgur.com/HlwuV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HlwuV.png" alt="enter image description here"></a></p> <p>The light blue area is a sector of a circle of angle $h$ and radius $\sec x$. The area of this sector is $\frac 12 h \sec^2 x$.</p> <p>The light blue area plus the green triangle is a triangle of base $\tan (x+h) - \tan x$ and height $1$. Its area is $\frac 12 (\tan (x+h) - \tan x)$.</p> <p>And the combined shaded region is a sector of a circle of angle $h$ and radius $\sec (x+h)$. The area of this sector is $\frac 12 h \sec^2 (x+h)$.</p> <p>Hence $\sec^2 x\le \frac {\tan(x+h) - \tan x}{h} \le \sec^2 (x+h)$</p> <p>What happens to $\frac {\tan(x+h) - \tan x}{h}$ as $h$ goes to $0$?</p>
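The sector-triangle-sector sandwich can be checked numerically for small $h>0$ with $x$ and $x+h$ in $(0, \pi/2)$; a sketch:

```python
import math

def sec_squared(x):
    # sec^2(x) = 1 / cos^2(x)
    return 1.0 / math.cos(x) ** 2

def difference_quotient(x, h):
    # (tan(x+h) - tan(x)) / h, squeezed between sec^2(x) and sec^2(x+h)
    return (math.tan(x + h) - math.tan(x)) / h
```

As $h \to 0$ both bounds collapse to $\sec^2 x$, which is the point of the geometric argument.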
270,624
<p>For a polynomial $f(x) = \sum_{i=0}^dc_ix^i \in \mathbb Z[x]$ of degree $d$, let</p> <p>$$ H(f):=\max\limits_{i=0,1,\ldots, d}\{|c_i|\} $$</p> <p>denote the naive height. Further, define</p> <p>$$ R(M, r, d) := \#\{f(x) \colon \text{$H(f) \leq M$, $\deg f = d$ and $f(x)$ has exactly $r$ real roots}\}. $$</p> <p>I wonder if anything is known about the quantity $R(M,r,d)$. More precisely, I am interested in how it changes as $r$ and $d$ remain fixed and $M$ varies. Has this question been explored at all? I would be thankful for any references.</p>
Igor Rivin
11,142
<p>Liviu's answer is very informative, though it answers a question orthogonal to that of the OP. I believe (Liviu can correct me if I am wrong) that the results don't actually depend on the coefficients being discrete, and qualitatively the results are not much different when the coefficients are uniform centered real random numbers (experiment certainly bears this out). If so, then the obvious rescaling shows the quantity is essentially independent of $M$ (the error terms [essentially addressing discretization errors] will, of course, depend on $M$, but I am guessing that this is quite hard).</p>
1,301,509
<p>I have the following integral, which should evaluate to $1$, as shown by the sketch, but in my calculation I get the result $0$. What's my mistake?</p> <p>Sorry, the comments are in German, and please note that a German 1 often looks like an English 7. Anything in the picture which looks like a 7 to you is in fact a 1.</p> <p><img src="https://i.stack.imgur.com/zBwwQ.jpg" alt="enter image description here"></p>
Cameron Williams
22,551
<p>The issue is that you didn't change your differential: $dz = -dx$ fixes it. Your function is even, so you could have simply worked with the integral from $0$ to $1$ instead.</p>
3,022,921
<p>If $6$ divides $x$ and $8$ divides $x$, how do you deduce that $24$ divides $x$?</p>
Peter Szilas
408,605
<p>Given:</p> <p>1) <span class="math-container">$x=6m$</span>, and</p> <p>2) <span class="math-container">$x=8n$</span>.</p> <p><span class="math-container">$x=6m=3(2m)$</span>; i.e. <span class="math-container">$3|x.$</span></p> <p>Using 2): <span class="math-container">$3$</span> divides <span class="math-container">$x= 8n.$</span></p> <p>Euclid's lemma:</p> <p>If a prime <span class="math-container">$p$</span> divides <span class="math-container">$ab$</span>, then <span class="math-container">$p$</span> divides <span class="math-container">$a$</span> or <span class="math-container">$p$</span> divides <span class="math-container">$b$</span>.</p> <p><span class="math-container">$3|8n$</span>, so <span class="math-container">$3|n$</span> (why?), i.e. <span class="math-container">$n=3k$</span>.</p> <p>Combining:</p> <p><span class="math-container">$x= 8n = 8(3k)=(24)k$</span>,</p> <p>so <span class="math-container">$24$</span> divides <span class="math-container">$x.$</span></p>
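Since $\operatorname{lcm}(6,8)=24$, the claim is easy to confirm by brute force; a sketch:

```python
import math

def divisible_by_6_and_8(x):
    # x is a common multiple of 6 and 8
    return x % 6 == 0 and x % 8 == 0

# every common multiple of 6 and 8 up to 1000
common = [x for x in range(1, 1001) if divisible_by_6_and_8(x)]
```

The list of common multiples is exactly the list of multiples of 24, as the lemma predicts.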
33,005
<p>Let $n$ and $p$ be any positive integers; make $p$ the subject of the equation $(3n + p)\bmod4 = 0$. How is it done?</p> <p>I've worked out that the only values for $p$ are $0$, $1$, $2$, and $3$.</p> <p>This formula is for calculating the amount of padding required in a bitmap's pixel array:</p> <blockquote> <p>Padding bytes (not necessarily 0) must be appended to the end of the rows in order to bring up the length of the rows to a multiple of four bytes. When the Pixel Array is loaded into memory, each row must begin at a memory address that is a multiple of 4. This address/offset restriction is mandatory only for Pixel Array's loaded in memory. For file storage purposes, only the size of each row must be a multiple of 4 bytes while the file offset can be arbitrary.[1] A 24-bit bitmap with Width=1, would have 3 bytes of data per row (blue, green, red) and 1 byte of padding, while Width=2 would have 2 bytes of padding, Width=3 would have 3 bytes of padding, and Width=4 would not have any padding at all.</p> </blockquote>
picakhu
4,728
<p>It depends on the value of $n$. </p> <p>If $n \equiv 1$ (mod 4), then, you have $3+p\equiv0$ (mod 4) which means that $p\equiv1$ (mod 4). This procedure can be repeated for other values of $n$, namely $n\equiv0,1,2,3$ (mod 4)</p> <p>In addition to this, note that if $p\equiv n$ (mod 4), then you have that $3n+p=4n\equiv0$ (mod 4)</p>
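For the bitmap case in the quote, $p$ can be computed in one step as $p = (-3n) \bmod 4$, which agrees with the answer's observation that $p \equiv n \pmod 4$; a sketch (assuming 3 bytes per pixel, as in the 24-bit example):

```python
def row_padding(width, bytes_per_pixel=3):
    # smallest p in {0, 1, 2, 3} with (bytes_per_pixel * width + p) % 4 == 0
    return (-bytes_per_pixel * width) % 4
```

The quoted examples (widths 1 through 4 giving padding 1, 2, 3, 0) fall out directly.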
1,757,092
<p>I want to find an explicit formula for $\sum_{n=0}^\infty n^3x^n$ for $|x|\le1$. Is the idea to first show that this series converges and then find the number it converges to? I tried to use the ratio test, but it didn't work. Any suggestions? Thanks!</p>
DeepSea
101,504
<p>For $x = \pm 1$, the terms $n^3x^n$ do not tend to $0$, so the series diverges there. For $|x| &lt; 1$, consider $f(x) = \displaystyle \sum_{n=0}^\infty x^n= \dfrac{1}{1-x}$, then find $xf'(x) = \displaystyle \sum_{n=0}^\infty nx^n= x(1-x)^{-2}$, and repeat this until you get to the desired series.</p>
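Carrying out the suggested procedure (apply $x\frac{d}{dx}$ three times to $\frac{1}{1-x}$) yields, after simplification, the closed form $\frac{x(1+4x+x^2)}{(1-x)^4}$ for $|x|<1$; that algebra is mine rather than the answer's, so here is a numeric sketch checking it against partial sums:

```python
def partial_sum(x, terms=2000):
    # partial sum of sum_{n>=0} n^3 x^n
    return sum(n ** 3 * x ** n for n in range(terms))

def closed_form(x):
    # obtained by applying x d/dx three times to 1/(1-x)
    return x * (1 + 4 * x + x * x) / (1 - x) ** 4
```
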
444,448
<p>Let $M$ be a set of prime numbers of $\mathbb{Q}$. The limit $$d(M)= \lim_{s\rightarrow 1^+} \frac{ \sum_{p \in M} p^{-s} }{ - \log(s-1)},$$ where $p$ runs over the primes of $\mathbb{Q}$, is called the <strong>Dirichlet density</strong> of $M$. Also, the <strong>natural density</strong> of $M$ is the limit $$ \delta(M)= \lim_{x\rightarrow \infty} \frac{ \# \{ p \in M : p \leq x\}}{ \# \{ p \in \mathbb{Q} : p \leq x\}} $$</p> <p>Now, let $$M= \bigcup_{k=0}^{\infty}\{ p \mbox{ prime} : 10^k \leq p &lt; 2\cdot10^k \}$$ Show that $\delta(M)$ does not exist and $d(M)=\frac{\log(2)}{\log(10)}$.</p> <p>For $\delta(M)$ I can apply the Prime Number Theorem, but if $$M(x)= \# \{ p \in M : p \leq x \}= \sum_{p \in M, p\leq x}1 = \sum_{0 \leq k \leq \frac{\log(x)}{\log(10)}} \sum_{10^k \leq p &lt; 2\cdot10^k, p \leq x} 1$$</p> <p>I do not know how to proceed...</p>
Daniel Fischer
83,702
<p>Using $M(x)= \# \{ p \in M : p \leq x \}$, we observe that due to the definition of $M$, we have</p> <p>$$M(10^k) \leqslant \pi(2\cdot 10^{k-1}), \text{ and } M(2\cdot 10^k) \geqslant \pi(2\cdot 10^k) - \pi(10^k).$$</p> <p>Thus, supposing $k$ not too small, and using the prime number theorem, we obtain</p> <p>$$\frac{M(10^k)}{\pi(10^k)} \leqslant \frac{\pi(2\cdot 10^{k-1})}{\pi(10^k)} \approx \frac15\left(1 + \frac{\log 5}{\log (2\cdot 10^{k-1})}\right) \approx \frac15$$</p> <p>and</p> <p>$$\frac{M(2\cdot 10^k)}{\pi(2\cdot 10^k)} \geqslant 1 - \frac{\pi(10^k)}{\pi(2\cdot 10^k)} \approx 1 - \frac12\left(1 + \frac{\log 2}{\log (10^k)}\right) \approx \frac12.$$</p> <p>So the value of $\frac{M(x)}{\pi(x)}$ oscillates between $\approx \frac15$ (or smaller) and $\approx \frac12$, hence has no limit.</p> <p>For the Dirichlet density, we use that</p> <p>$$\sum_{p \leqslant x} \frac{1}{p} = \log \log x + M + O\biggl(\frac{1}{(\log x)^2}\biggr)\tag{1}$$</p> <p>where $M$ is the Meissel-Mertens constant. De la Vallée Poussin's error bound in the prime number theorem gives a smaller remainder term, but that would give no advantage in our calculation. For $x &gt; 2$, we obtain</p> <p>$$\sum_{x &lt; p \leqslant 2x} \frac{1}{p} = \log\biggl(1 + \frac{\log 2}{\log x}\biggr) + O\biggl(\frac{1}{(\log x)^2}\biggr) = \frac{\log 2}{\log x} + O\biggl(\frac{1}{(\log x)^2}\biggr)\tag{2}$$</p> <p>from $(1)$, so</p> <p>$$\sum_{10^k &lt; p \leqslant 2\cdot 10^k} \frac{1}{p} = \frac{A}{k} + R(k)\tag{3}$$</p> <p>with $A = \frac{\log 2}{\log 10}$ and $R(k) \in O(k^{-2})$ for $k \geqslant 1$. Define</p> <p>$$S(x) = \sum_{\substack{p\in M \\ p \leqslant x}} \frac{1}{p}.$$</p> <p>For $x &gt; 20$ pick $K$ such that $2\cdot 10^K \leqslant x &lt; 2\cdot 10^{K+1}$. 
Then</p> <p>$$S(x) = \sum_{k = 1}^K \Biggl(\frac{A}{k} + R(k)\Biggr) + O(K^{-1}) = A\log K + (A\gamma + C) + O(K^{-1})$$</p> <p>with $C = \sum_{k = 1}^{\infty} R(k)$, since</p> <p>$$\sum_{10^{K+1} &lt; p \leqslant x} \frac{1}{p} \in O(K^{-1})$$</p> <p>by $(2)$, and $C - \sum_{k = 1}^K R(k) = \sum_{k = K+1}^{\infty} R(k) \in O(K^{-1})$ too. With</p> <p>$$K = \biggl\lfloor \frac{\log \frac{x}{2}}{\log 10}\biggr\rfloor = \frac{\log x - \log 2}{\log 10}\cdot \Bigl( 1 + O\bigl((\log x)^{-1}\bigr)\Bigr)$$</p> <p>we thus obtain</p> <p>$$S(x) = A\log \log x + B + O\bigl((\log x)^{-1}\bigr),$$</p> <p>where $B = A\gamma + C - A\log \log 10$. Since $S(x) = 0$ for $x &lt; 11$, we can thus write</p> <p>$$\sum_{p \in M} \frac{1}{p^{1+\varepsilon}} = \varepsilon \int_e^{\infty} \frac{S(x)}{x^{1+\varepsilon}}\,dx = A\varepsilon \int_e^{\infty} \frac{\log \log x}{x^{1+\varepsilon}}\,dx + B\varepsilon\int_e^{\infty} \frac{dx}{x^{1+\varepsilon}} + O\Biggl(\varepsilon\int_e^{\infty} \frac{dx}{x^{1+\varepsilon}\log x}\Biggr).$$</p> <p>For the first term, we calculate</p> <p>\begin{align} \varepsilon \int_e^{\infty} \frac{\log \log x}{x^{1+\varepsilon}}\,dx &amp;= \varepsilon \int_1^{\infty} e^{-\varepsilon t}\log t\,dt \tag{$x = e^t$}\\ &amp;= \int_{\varepsilon}^{\infty} e^{-u}(\log u - \log \varepsilon)\,du \tag{$u = \varepsilon t$} \\ &amp;= \log \frac{1}{\varepsilon}\cdot \Biggl(1 - \int_0^{\varepsilon} e^{-u}\,du\Biggr) + \Gamma'(1) -\int_0^{\varepsilon} e^{-u}\log u\,du \\ &amp;= \log \frac{1}{\varepsilon} + \Gamma'(1) + O(\varepsilon \lvert\log \varepsilon\rvert). 
\end{align}</p> <p>The second term is $B e^{-\varepsilon}$, and to bound the error term, we note that</p> <p>$$\int_e^{\infty} \frac{dx}{x^{\varepsilon}(x\log x)} = \underbrace{\frac{\log \log x}{x^{\varepsilon}}\biggr\rvert_e^{\infty}}_{{}=0} + \varepsilon\int_e^{\infty} \frac{\log \log x}{x^{1+\varepsilon}}\,dx,$$</p> <p>so reusing the previous result we finally get</p> <p>$$\sum_{p\in M} \frac{1}{p^{1+\varepsilon}} = A\log \frac{1}{\varepsilon} + A\Gamma'(1) + B + O(\varepsilon\lvert\log\varepsilon\rvert),$$</p> <p>which shows</p> <p>$$d(M) = A = \frac{\log 2}{\log 10}.$$</p>
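The oscillation in the first part of this answer is already visible at modest heights; a small pure-Python sieve sketch (the cutoffs $0.2$ and $0.45$ are illustrative choices sitting between the asymptotic values $\frac15$ and $\frac12$):

```python
def primes_up_to(n):
    # simple sieve of Eratosthenes
    is_p = bytearray([1]) * (n + 1)
    is_p[0] = is_p[1] = 0
    for i in range(2, int(n ** 0.5) + 1):
        if is_p[i]:
            is_p[i * i :: i] = bytearray(len(range(i * i, n + 1, i)))
    return [p for p in range(2, n + 1) if is_p[p]]

def in_M(p):
    # p lies in [10^k, 2 * 10^k) for some k >= 0
    k = len(str(p)) - 1
    return 10 ** k <= p < 2 * 10 ** k

def ratio(primes, x):
    # M(x) / pi(x)
    upto = [p for p in primes if p <= x]
    return sum(in_M(p) for p in upto) / len(upto)
```
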
3,265,243
<p>The number of ways of placing <span class="math-container">$n$</span> objects with none in its original position is given by the inclusion-exclusion number <span class="math-container">$D_n$</span>:</p> <p><span class="math-container">$n! \left( 1-\dfrac{1}{1!}+\dfrac{1}{2!}+....+(-1)^n\dfrac{1}{n!} \right)$</span></p> <p>which can also be written as:</p> <p><span class="math-container">$n!\sum_{i=1}^{n-1} (-1)^{i+1}/(i+1)!$</span></p> <p>Using a different approach, I came to the following answer, which has probably already been proven:</p> <p><span class="math-container">$(n-1)!\sum_{i=1}^{n-1} (-1)^{i+1}(n-i)/(i-1)!$</span></p> <p>The two different summations give the same answer (I have tested for n=2,3,4,5,6,7). However, each term for any <span class="math-container">$i$</span> is different in the two summations. How can the equality of the two summations be proved (if it is indeed the case for all <span class="math-container">$n$</span>)?</p>
Martin R
42,969
<p>For <span class="math-container">$n=1$</span> both expressions are zero. For <span class="math-container">$n\ge 2$</span> we start with the second expression, shift the index by one, and increase the upper limit (which adds nothing to the sum): <span class="math-container">$$ (n-1)!\sum_{i=1}^{n-1} (-1)^{i+1}\frac{n-i}{(i-1)!} = (n-1)!\sum_{i=0}^{n-2} (-1)^i\frac{n-i-1}{i!} = (n-1)!\sum_{i=0}^{n-1} (-1)^i\frac{n-i-1}{i!} $$</span> Now split the sum into three parts: <span class="math-container">$$ = (n-1)!\left( n \sum_{i=0}^{n-1} \frac{(-1)^i}{i!} - \sum_{i=0}^{n-1} \frac{(-1)^i i}{i!} - \sum_{i=0}^{n-1} \frac{(-1)^i}{i!} \right) $$</span> In the middle sum the term for <span class="math-container">$i=0$</span> is zero, and in all other terms a factor <span class="math-container">$i$</span> can be canceled: <span class="math-container">$$ = (n-1)!\left( n \sum_{i=0}^{n-1} \frac{(-1)^i}{i!} - \sum_{i=1}^{n-1} \frac{(-1)^i}{(i-1)!} - \sum_{i=0}^{n-1} \frac{(-1)^i}{i!} \right) \\ = n! \sum_{i=0}^{n-1} \frac{(-1)^i}{i!} - (n-1)! \left(\sum_{i=1}^{n-1} \frac{(-1)^i}{(i-1)!} + \sum_{i=0}^{n-1} \frac{(-1)^i}{i!}\right) $$</span> Now shift the indices in the middle sum: <span class="math-container">$$ = n! \sum_{i=0}^{n-1} \frac{(-1)^i}{i!} - (n-1)! \left(\sum_{i=0}^{n-2} \frac{(-1)^{i+1}}{i!} + \sum_{i=0}^{n-1} \frac{(-1)^i}{i!}\right) $$</span> and note that in the second and third sum all terms cancel, with the only exception of the <span class="math-container">$i=n-1$</span> term in the third sum, so that the expression is finally equal to <span class="math-container">$$ \\ = n! \sum_{i=0}^{n-1} \frac{(-1)^i}{i!} + (-1)^n = n! \left( 1-\dfrac{1}{1!}+\dfrac{1}{2!}+....+(-1)^n\dfrac{1}{n!} \right) $$</span> and that is the desired equality.</p>
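The two summations (and the algebraic proof above) can also be compared numerically; a sketch:

```python
import math

def derangements_first(n):
    # n! * sum_{i=0}^{n} (-1)^i / i!
    total = sum((-1) ** i / math.factorial(i) for i in range(n + 1))
    return round(math.factorial(n) * total)

def derangements_second(n):
    # (n-1)! * sum_{i=1}^{n-1} (-1)^(i+1) (n-i) / (i-1)!
    total = sum((-1) ** (i + 1) * (n - i) / math.factorial(i - 1) for i in range(1, n))
    return round(math.factorial(n - 1) * total)
```

The first few values match the known derangement numbers, and the two forms agree term-for-term over a range of $n$.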
4,022,415
<p>My attempt:</p> <p><span class="math-container">$\lbrace6,19,30\rbrace$</span> is sufficient to show that two sets are impossible.</p> <p>Using a computer program with a brute-force method, I found that separating the numbers <span class="math-container">$1$</span> through <span class="math-container">$85$</span> into three sets is possible, as shown below:</p> <p><span class="math-container">$\lbrace1,4,6,9,13,14,17,18,20,26,28,33,34,37,41,42,49,54,56,57,62,69,70,73,76,78,81,85\rbrace$</span><br /> <span class="math-container">$\lbrace2,5,8,10,12,21,22,25,29,30,32,38,40,45,46,48,50,53,58,61,64,65,66,72,74,77,82,84\rbrace$</span><br /> <span class="math-container">$\lbrace3,7,11,15,16,19,23,24,27,31,35,36,39,43,44,47,51,52,55,59,60,63,67,68,71,75,79,80,83\rbrace$</span></p> <p>but <span class="math-container">$1$</span> through <span class="math-container">$86$</span> is impossible.</p> <p><strong>Edit:</strong></p> <p>WhatsUp, in the answer below, provides the set of four numbers <span class="math-container">$\lbrace1058, 6338, 10823, 13826\rbrace$</span> with an explanation of how he got them. This is an alternative, non-brute-force way of showing that separating the natural numbers into three sets is impossible. In a comment on <a href="https://math.stackexchange.com/questions/1576986/pairwise-sums-are-perfect-squares">this</a> question, a set of five numbers <span class="math-container">$\lbrace 7442, 28658,148583,177458,763442\rbrace$</span> is provided by the user Bob Kadylo. This shows that four sets are impossible.</p> <p><strong>Edit 2:</strong></p> <p>In a previous version of my post I gave a proof that the number of sets needed for the natural numbers <span class="math-container">$1$</span> to <span class="math-container">$N$</span>, so that no pair of numbers in the same set sums to a square, is no more than <span class="math-container">$\lfloor\sqrt{2N-1}\rfloor$</span>. I realized that I can do significantly better than this. 
In order to explain the method with the smaller upper bound, I have to transform the problem into graph theory. An equivalent formulation is to have <span class="math-container">$N$</span> vertices labeled from <span class="math-container">$1$</span> to <span class="math-container">$N$</span>. A pair of vertices is connected iff the two labels add up to a square. Then our goal is to color the vertices using the least number of colors so that no two vertices with the same color are connected. The first step in the greedy algorithm for coloring vertices is to make a numbered list of colors (e.g. RED-1, BLUE-2, GREEN-3, YELLOW-4, etc.); if during the process more colors are required than are on the list, add more colors to the list. The next step is to pick an uncolored vertex and use the lowest-numbered color not already used by a vertex connected to the chosen vertex. Repeat the last step until all vertices are colored. The worst-case scenario is to use one more color than the degree of the vertex with the greatest degree (or tied for the greatest degree): if each vertex connected to the greatest-degree vertex has a different color, then the greatest-degree vertex has to differ from all of those. The vertex with the greatest degree (or tied for the greatest) is <span class="math-container">$3$</span>. It has degree <span class="math-container">$\lfloor\sqrt{N+3}\rfloor-1$</span>. Therefore the number of sets (or colors) required is no more than <span class="math-container">$\lfloor\sqrt{N+3}\rfloor$</span>. We can do slightly better by using <a href="https://en.wikipedia.org/wiki/Brooks%27_theorem" rel="nofollow noreferrer">Brooks' theorem</a>, which states that if a graph is simple, connected, not complete, and not an odd cycle, then the number of colors needed is at most <strong>equal</strong> to the greatest vertex degree. 
This means that the new upper bound is <span class="math-container">$\lfloor\sqrt{N+3}\rfloor-1$</span> sets. This is the significant improvement over <span class="math-container">$\lfloor\sqrt{2N-1}\rfloor$</span> I mentioned at the beginning.</p> <p><strong>End edits</strong></p> <p>For each natural number <span class="math-container">$X$</span> there are <span class="math-container">$\lfloor\sqrt{2X-1}\rfloor-\lfloor\sqrt{X}\rfloor$</span> numbers less than <span class="math-container">$X$</span> that, when summed with <span class="math-container">$X$</span>, give a square. The expression <span class="math-container">$\lfloor\sqrt{2X-1}\rfloor-\lfloor\sqrt{X}\rfloor$</span> increases as <span class="math-container">$X$</span> gets larger; because of this, my guess is that separating the natural numbers into a finite number of sets so that no pair of numbers in the same set sums to a square is impossible.</p>
mathworker21
366,088
<p>Not an answer.</p> <p>If you want no solutions to <span class="math-container">$a+b=c^2$</span> with <span class="math-container">$a,b,c$</span> all in the same set, then <span class="math-container">$16$</span> sets suffice.</p> <p>The sets are <span class="math-container">$$A_i := \{n \in \mathbb{N} : n \equiv i \pmod{5}\}$$</span> for <span class="math-container">$i=1,3,4$</span>, <span class="math-container">$$B_i := \{n \in \mathbb{N} : n = m\cdot 5^{2^{2u}(2v+1)} \text{ with } m \equiv i \pmod{5}, u,v \ge 0\}$$</span> for <span class="math-container">$i=1,2,3,4$</span>, <span class="math-container">$$C_i := \{n \in \mathbb{N} : n = m\cdot 5^{2^{2u+1}(2v+1)} \text{ with } m \equiv i \pmod{5}, u,v \ge 0\}$$</span> for <span class="math-container">$i=1,2,3,4$</span>, <span class="math-container">$$D_i := \{n \in \mathbb{N} : n = m\cdot 5^u+2 \text{ with } m \equiv i \pmod{5}, u \ge 1\}$$</span> for <span class="math-container">$i=1,2,3,4$</span>, and <span class="math-container">$$E := \{2\}.$$</span></p> <p>Where on Earth did this come from? Page <span class="math-container">$10$</span> of <a href="https://web.cs.elte.hu/%7Egykati/publ/csiksar.pdf" rel="nofollow noreferrer">this</a>.</p>
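The explicit three-set partition of $1,\dots,85$ in the question, and the obstruction triple $\{6,19,30\}$, can both be verified mechanically; a sketch (the sets are copied from the question):

```python
import math
from itertools import combinations

def is_square(m):
    # perfect-square test via integer square root
    r = math.isqrt(m)
    return r * r == m

S1 = {1,4,6,9,13,14,17,18,20,26,28,33,34,37,41,42,49,54,56,57,62,69,70,73,76,78,81,85}
S2 = {2,5,8,10,12,21,22,25,29,30,32,38,40,45,46,48,50,53,58,61,64,65,66,72,74,77,82,84}
S3 = {3,7,11,15,16,19,23,24,27,31,35,36,39,43,44,47,51,52,55,59,60,63,67,68,71,75,79,80,83}

def square_free_of_pairs(S):
    # no two distinct elements of S sum to a perfect square
    return all(not is_square(a + b) for a, b in combinations(S, 2))
```
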
2,113,596
<p>Questions with likely obvious answers, but I don't have the required intuition to go with the flow.</p> <p>Consider $a+be^x + ce^{-x} = 0$. To solve it for the constants, we can try out different values of $x$ to get a system of $3$ equations and simply use a calculator (the given technique). Why are we allowed to substitute various values of $x$ into the equation? Is it because the equation is valid for all $x$? I mean, the fact that this equation holds for all $x$ allows us to substitute any value of $x$, a byproduct of which is the isolation of the constants (just a bonus). Does that make sense?</p> <p>Consider $k_1(1) + k_2(t^2 - 2t) + 5k_3(t - 1)^2 = (k_2 + 5k_3)t^2 +(-2k_2 - 10k_3)t + (k_1 + 5k_3) = 0.$ To solve the left-hand side for the constants, we can set $(k_2 + 5k_3) = (-2k_2 - 10k_3) = (k_1 + 5k_3) = 0$ and solve for the $k_i.$ Call this maneuver $X.$ Why are we allowed to do $X$? Is it because $X$ is one of the solutions to $(k_2 + 5k_3)t^2 +(-2k_2 - 10k_3)t + (k_1 + 5k_3) = 0$, which allows us to consider $X$? Hopefully this makes sense.</p>
dxiv
291,201
<p>Considering the conversion to <code>C</code> as a function in <code>F</code> $\,\;C(F) = \frac{5}{9}(F-32)\,$, it is easy to see that it's a linear function, and its derivative is a constant $\frac{d\,C}{d\,F} = C' = \frac{5}{9}\,$.</p> <p>Then, as for any linear function, $\Delta C = C' \,\Delta F\,$, so a change $\Delta F = 1$ of one degree <code>F</code> translates into a change of $\Delta C = \frac{5}{9} \cdot 1 = \frac{5}{9}$ degrees <code>C</code>.</p>
2,113,596
<p>Questions with likely obvious answers, but I don't have the required intuition to go with the flow.</p> <p>Consider $a+be^x + ce^{-x} = 0$. To solve it for the constants, we can try out different values of $x$ to get a system of $3$ equations and simply use a calculator (the given technique). Why are we allowed to substitute various values of $x$ into the equation? Is it because the equation is valid for all $x$? I mean, the fact that this equation holds for all $x$ allows us to substitute any value of $x$, a byproduct of which is the isolation of the constants (just a bonus). Does that make sense?</p> <p>Consider $k_1(1) + k_2(t^2 - 2t) + 5k_3(t - 1)^2 = (k_2 + 5k_3)t^2 +(-2k_2 - 10k_3)t + (k_1 + 5k_3) = 0.$ To solve the left-hand side for the constants, we can set $(k_2 + 5k_3) = (-2k_2 - 10k_3) = (k_1 + 5k_3) = 0$ and solve for the $k_i.$ Call this maneuver $X.$ Why are we allowed to do $X$? Is it because $X$ is one of the solutions to $(k_2 + 5k_3)t^2 +(-2k_2 - 10k_3)t + (k_1 + 5k_3) = 0$, which allows us to consider $X$? Hopefully this makes sense.</p>
John Joy
140,156
<p>Hint: Solve for $\Delta C$.</p> <p>$$C = \frac{5}{9}(F-32)$$ $$C + \Delta C = \frac{5}{9}((F+1)-32)$$</p> <p>Can you generalize this for a temperature increase of any magnitude?</p>
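As a numeric companion to the two answers above: each one-degree Fahrenheit step changes the Celsius value by exactly $\frac{5}{9}$, independently of the starting temperature; a sketch:

```python
def f_to_c(f):
    # C = 5/9 (F - 32)
    return 5.0 * (f - 32.0) / 9.0
```
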
2,801,406
<blockquote> <p>Find the coordinates of the points where the line tangent to the curve $$x^2-2xy+2y^2=4$$ is parallel to the $x$-axis, given that $$\frac{dy}{dx}=\frac{y-x}{2y-x}$$</p> </blockquote> <p>By letting $dy/dx = 0$ I get $y=x$ which is no help... what do I do?</p> <p>Thanks</p>
Christopher Marley
510,133
<p>You are definitely on the right track! Yes, you should get $y=x$. Just plug it into the original equation and solve (you now have two conditions the point must satisfy: the curve equation and $dy/dx=0$):</p> <p>$x^2-2x^2+2x^2=4$<br> $x=\pm 2 \implies y=\pm2$</p> <p>Final points: $(-2,-2)$ and $(2,2)$.</p>
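A quick check that the two points satisfy both the curve and the horizontal-tangent condition (a sketch):

```python
def on_curve(x, y):
    # x^2 - 2xy + 2y^2 = 4
    return x ** 2 - 2 * x * y + 2 * y ** 2 == 4

def slope(x, y):
    # given: dy/dx = (y - x) / (2y - x)
    return (y - x) / (2 * y - x)
```
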
1,829,975
<p>Find three different systems of linear equations whose solution is $x_1 = 3, x_2 = 0, x_3 = -1$.</p> <p>I'm confused; how exactly can I do this?</p>
Brian Fitzpatrick
56,960
<p>Note that your system is described by the matrix $$ \left[\begin{array}{rrr|r} 1 &amp; 0 &amp; 0 &amp; 3 \\ 0 &amp; 1 &amp; 0 &amp; 0 \\ 0 &amp; 0 &amp; 1 &amp; -1 \end{array}\right] $$ Performing any row operation on this matrix yields a system with the same solutions. For example, you could add $\DeclareMathOperator{Row}{Row}\Row_1$ to $\Row_2$ to get the system $$ \left[\begin{array}{rrr|r} 1 &amp; 0 &amp; 0 &amp; 3 \\ 1 &amp; 1 &amp; 0 &amp; 3 \\ 0 &amp; 0 &amp; 1 &amp; -1 \end{array}\right] $$ You could then add $\Row_2$ to $\Row_3$ to get $$ \left[\begin{array}{rrr|r} 1 &amp; 0 &amp; 0 &amp; 3 \\ 1 &amp; 1 &amp; 0 &amp; 3 \\ 1 &amp; 1 &amp; 1 &amp; 2 \end{array}\right] $$ To get a third system, you could then add $2\cdot \Row_3$ to $\Row_1$ $$ \left[\begin{array}{rrr|r} 3 &amp; 2 &amp; 2 &amp; 7 \\ 1 &amp; 1 &amp; 0 &amp; 3 \\ 1 &amp; 1 &amp; 1 &amp; 2 \end{array}\right] $$</p>
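Each augmented system above (plus the original diagonal one) can be checked against the solution $(3, 0, -1)$; a sketch:

```python
def satisfies(rows, x):
    # each augmented row [a, b, c, d] must satisfy a*x1 + b*x2 + c*x3 = d
    return all(a * x[0] + b * x[1] + c * x[2] == d for a, b, c, d in rows)

SYSTEMS = [
    [[1, 0, 0, 3], [0, 1, 0, 0], [0, 0, 1, -1]],   # original diagonal system
    [[1, 0, 0, 3], [1, 1, 0, 3], [0, 0, 1, -1]],   # after Row1 + Row2
    [[1, 0, 0, 3], [1, 1, 0, 3], [1, 1, 1, 2]],    # after Row2 + Row3
    [[3, 2, 2, 7], [1, 1, 0, 3], [1, 1, 1, 2]],    # after 2*Row3 + Row1
]
```

Row operations preserve the solution set, which is exactly what the check confirms.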
2,694,525
<p>I came across this exercise:</p> <p>$f(x,y)= \lim_{y\to\infty}{{1-y\sin{\pi x\over y}}\over \arctan x}$</p> <p>The result I get is ${1-\pi x \over \arctan x}$, which depends on the value of $x$.</p> <p>However, my question is this: since $x$ sits inside $\sin()$, which is a bounded function, shouldn't it have no effect on the limit? For example, $\lim_{x\to\infty}{1 \over x}\sin ax = 0$ for every $a\in(-\infty,+\infty)$, and that limit doesn't depend on $a$.</p> <p>Is there an intuitive explanation for this?</p>
Mohammad Riazi-Kermani
514,496
<p>You are finding the limit as $y$ approaches infinity at a fixed point $x$. Thus $$f(x,y)= \lim_{y\to\infty}{{1-y\sin{\pi x\over y}}\over \arctan x }={1-\pi x \over \arctan x}$$ is expected.</p> <p>Of course the limit should depend on $x$ and vary with $x$, but that does not mean that there is something wrong with the limit.</p>
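Numerically, $y\sin\frac{\pi x}{y} \to \pi x$ as $y\to\infty$ (from $\sin u \approx u$ for small $u$), so the limit genuinely varies with $x$; a sketch:

```python
import math

def f_of(x, y):
    # (1 - y sin(pi x / y)) / arctan(x)
    return (1 - y * math.sin(math.pi * x / y)) / math.atan(x)

def limit_value(x):
    # (1 - pi x) / arctan(x)
    return (1 - math.pi * x) / math.atan(x)
```
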
4,030,296
<p>I'm reading Carothers' Real Analysis, and I'm currently looking at <strong>homeomorphisms</strong>. The author says &quot;two intervals that look different, are different&quot; - i.e. they are not homeomorphic. The proof is done for the case <span class="math-container">$(0,1]$</span> and <span class="math-container">$(a,b)$</span>, where the first interval is semi-open so we expect that the two are not homeomorphic.</p> <p>I have some difficulty following the book's argument which I paraphrase here (for convenience) with inline questions.</p> <blockquote> <p>Proof by contradiction. Suppose <span class="math-container">$(0,1]$</span> and <span class="math-container">$(a,b)$</span> are homeomorphic. Then by removing <span class="math-container">$1$</span> from <span class="math-container">$(0,1]$</span> we get <span class="math-container">$(0,1)$</span>, and by removing its image <span class="math-container">$f(1) = c$</span> from <span class="math-container">$(a,b)$</span> we have that <span class="math-container">$(0,1)$</span> is homeomorphic to <span class="math-container">$(a,c)\cup (c,b)$</span>.</p> </blockquote> <p>Wait, why is that? I understand that we are deleting an element each from the domain and codomain, so the bijection remains a bijection. What about the continuity of <span class="math-container">$f$</span> and <span class="math-container">$f^{-1}$</span> though? We literally threw away an element, how do we know that continuity still holds?</p> <blockquote> <p>But <span class="math-container">$(0,1)$</span> is homeomorphic to <span class="math-container">$\Bbb R$</span>, so <span class="math-container">$\Bbb R$</span> would have to be homeomorphic to <span class="math-container">$(a,c)\cup (c,b)$</span> too. 
So <span class="math-container">$\Bbb R$</span> can be written as the disjoint union of two non-trivial open sets, which is impossible.</p> </blockquote> <p>Two questions:</p> <ol> <li>Why is <span class="math-container">$(0,1)$</span> homeomorphic to <span class="math-container">$\Bbb R$</span>? We really need to produce a homeomorphism, that's all I'm asking.</li> <li>How to prove that <span class="math-container">$\Bbb R$</span> cannot be written as the disjoint union of two non-trivial open sets? The idea that comes to my mind is - we already know that any open subset of <span class="math-container">$\Bbb R$</span> can be written as the disjoint union of unique open maximal intervals. Since <span class="math-container">$\Bbb R$</span> is open in <span class="math-container">$\Bbb R$</span>, we can obviously do the same for it as well, and clearly, our choice of intervals would not be maximal in the event that we could write <span class="math-container">$\Bbb R$</span> as the union of two disjoint open intervals. Does this make sense?</li> </ol> <p>Thank you!</p>
TheSilverDoe
594,484
<ol start="0"> <li><p>Continuity still holds when you remove a point, because the restriction of a continuous map is still continuous.</p> </li> <li><p>A homeomorphism between <span class="math-container">$(0,1)$</span> and <span class="math-container">$\mathbb{R}$</span> is given by <span class="math-container">$$t \mapsto \tan \left(-\frac{\pi}{2}+t\pi \right)$$</span></p> </li> <li><p><span class="math-container">$\mathbb{R}$</span> is connected, so cannot be written as the disjoint union of two non-trivial open sets.</p> </li> </ol>
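The tangent map above and its inverse $y \mapsto \frac{1}{\pi}\left(\arctan y + \frac{\pi}{2}\right)$ (the inverse formula is my addition) can be sanity-checked numerically; a sketch:

```python
import math

def h(t):
    # candidate homeomorphism (0, 1) -> R
    return math.tan(-math.pi / 2 + t * math.pi)

def h_inverse(y):
    # inverse map R -> (0, 1)
    return (math.atan(y) + math.pi / 2) / math.pi
```
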
238,659
<p>Let $f : [a, b]\to R$ be a continuous function such that $[a,b] \subset [f(a), f(b)]$. Prove that there exists $x\in [a,b]$ such that $f(x) = x$.</p> <p>My attempt: I said let there be a $\delta &gt; 0 $and defined $c$ and $d$ to be $x + \delta$ and $x-\delta$ respectively. From here since $f$ is continuous $[f(c), f(d)]\subset [f(a), f(b)]$. Then I assumed by definition $[c,d]$ is also a subset of $[f(c), f(d)]$. Then I claimed $\delta$ can be arbitrarily small so that $f(c) = f(d) = f(x)$. </p> <p>Is this correct or is there a better approach?</p>
PtF
49,572
<p>In $\mathbb R^2$, draw the straight line $y=x$ and consider any interval on the positive part of the $x$-axis; you'll get some intuition. Your problem says that under these conditions the graph of your function must intersect the line $y=x$ at some point. Think about the conditions for this to happen.</p>
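The hint can be made concrete: with $g(x)=f(x)-x$, the hypothesis $[a,b]\subseteq[f(a),f(b)]$ forces $f(a)\le a$ and $f(b)\ge b$, so $g(a)\le 0\le g(b)$, and the intermediate value theorem gives a fixed point, which bisection can locate. A sketch with an illustrative $f$ of my own choosing:

```python
def fixed_point(f, a, b, tol=1e-12):
    # bisection on g(x) = f(x) - x; requires f(a) <= a and b <= f(b)
    lo, hi = a, b
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) - mid <= 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```
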
2,549,690
<p>Is a direct sum of cyclic groups cyclic? I know every finite abelian group is a direct sum of cyclic groups of prime power orders, but I can't make use of this.</p>
user284331
284,331
<p>The map $x\rightarrow x/r$ is bicontinuous and bijective.</p>
332,603
<p>I've come across this article: <a href="http://gauravtiwari.org/2011/12/11/claim-for-a-prime-number-formula/" rel="noreferrer">http://gauravtiwari.org/2011/12/11/claim-for-a-prime-number-formula/</a></p> <p>and this paper: <a href="http://www.m-hikari.com/ams/ams-2012/ams-73-76-2012/kaddouraAMS73-76-2012.pdf" rel="noreferrer">http://www.m-hikari.com/ams/ams-2012/ams-73-76-2012/kaddouraAMS73-76-2012.pdf</a></p> <p>They say that there is a formula such that when you give it $n$, it returns the $n$-th prime number, whereas other articles state that no formula discovered so far does such a thing.</p> <p>If the formula indeed existed, then why do they discover a new largest known prime number from time to time? It would be very simple to use the formula to find a larger one.</p> <p>I just want to know whether such a formula exists or not.</p>
Michiel
53,881
<p>As you already mention yourself, it doesn't make sense to keep looking for prime numbers with computer algorithms if there were a prime number formula.</p> <p>Looking at the formulas on the site you provided, it seems to me that they are really just an algorithm which lets you determine whether some number is prime based on the previously found primes. That I could already do in high school by checking whether a number is divisible by any of the previous primes (larger than 1). The algorithm is probably more efficient than what I describe (I cannot judge that quickly), but it is still not a "plug in the numbers and get your answer with a pocket calculator" formula.</p>
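The trial-division procedure the answer alludes to can be sketched directly (slow but correct):

```python
def nth_prime(n):
    # generate primes by trial division against previously found primes,
    # only testing divisors up to sqrt(candidate)
    primes = []
    candidate = 2
    while len(primes) < n:
        if all(candidate % p != 0 for p in primes if p * p <= candidate):
            primes.append(candidate)
        candidate += 1
    return primes[-1]
```

This is a "formula" in the algorithmic sense only; it says nothing about finding record-size primes quickly, which is the asker's point.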
2,877,578
<p>Yesterday, I asked the question: <a href="https://math.stackexchange.com/questions/2876740/prove-that-if-a-b-are-closed-then-exists-u-v-open-sets-such-that-u-cap?noredirect=1#comment5938458_2876740">Prove that if $A,B$ are closed then, $ \exists\;U,V$ open sets such that $U\cap V= \emptyset$</a>. </p> <p>Here is the correct question: prove that if $A,B$ are closed sets in a metric space such that $A\cap B= \emptyset$, there exist open sets $U,V$ such that $A\subset U$, $B\subset V$, and $U\cap V= \emptyset$. </p> <p>I am thinking of going by contradiction, that is: $\forall\; U,V$ open sets such that $A\subset U$, $B\subset V$, we have $U\cap V\neq \emptyset$. </p> <p>Let $U,V$ be open. Then $\exists\;r_1,r_2$ such that $B(x,r_1)\subset U$ and $B(x,r_2)\subset V.$ I got stuck here!</p> <p>I'm thinking of using the properties of a $T_4$-space but I can't work out a proof! Any solution or reference related to metric spaces?</p>
SFeesh
346,530
<p>Since $A$ and $B$ are disjoint, we can place an open ball $\mathcal B_a$ at each $a \in A$ such that $B \cap \overline{\mathcal B_a} = \varnothing$ (the bar denotes the closure). Then $\bigcup \mathcal B_a$ is an open neighbourhood of $A$. Because metric spaces are paracompact, we can choose a locally finite subcover of $A$. So assume the cover $\{\mathcal B_a \}_{a\in A}$ is already locally finite.</p> <p>Let $b \in B$. Then there exists an open neighbourhood $U$ of $b$ such that $U$ only intersects finitely many of the $\mathcal B_a$. Let's suppose it intersects $\mathcal B_a^1, \dots \mathcal B_a^n$. Then $U \setminus (\overline{\mathcal B_a^1} \cup \dots \cup \overline{\mathcal B_a^n})$ is an open set containing $b$. Thus we can choose an open ball $\mathcal B_b$ such that $b \in \mathcal B_b \subseteq U \setminus (\overline{\mathcal B_a^1} \cup \dots \cup \overline{\mathcal B_a^n})$. Note that $\mathcal B_b$ is disjoint from $\bigcup \mathcal B_a$. Choosing $\mathcal B_b$ in this manner for each $b\in B$ yields an open neighbourhood $\bigcup \mathcal B_b$ of $B$ that is disjoint from $\bigcup \mathcal B_a$.</p> <p>Thus, $\bigcup \mathcal B_a$ and $\bigcup \mathcal B_b$ are disjoint open neighbourhoods of $A$ and $B$ respectively.</p>
22,392
<p>Consider the following polynomial series:</p> <p>$S(x) = \sum_{i=1}^{\infty}(-1)^{i+1}x^{i^{2}}$</p> <p>Between 0 and 1, this looks like a well-behaved function - is there any way to write this function in this interval without using a series?</p> <p>Given $0 &lt; S(x) &lt; 1$, I need to solve the equation for $x$ (in the 0 to 1 interval), but an analytic solution would be much nicer than a numerical one...</p>
S. Carnahan
121
<p>I don't know what you mean by "polynomial series" since your function doesn't seem to have much to do with polynomials (perhaps you could elaborate?).</p> <p>$S(x) = -\frac12 \theta(\frac12, \frac{\log x}{\pi i }) - 1$, where $\theta$ is <a href="http://en.wikipedia.org/wiki/Theta_function" rel="nofollow">Jacobi's theta function</a>. I don't know of any nice algebraic methods to take inverses of modular forms (even if you restrict to real $x$, i.e., $\tau$ purely imaginary), so I'm not sure if this is the sort of answer you want.</p>
22,392
<p>Consider the following polynomial series:</p> <p>$S(x) = \sum_{i=1}^{\infty}(-1)^{i+1}x^{i^{2}}$</p> <p>Between 0 and 1, this looks like a well-behaved function - is there any way to write this function in this interval without using a series?</p> <p>Given $0 &lt; S(x) &lt; 1$, I need to solve the equation for $x$ (in the 0 to 1 interval), but an analytic solution would be much nicer than a numerical one...</p>
Robin Chapman
4,213
<p>As other correspondents have pointed out, this is essentially a theta function. You ask if you can write it in any other way. You can replace the infinite series by an infinite product :-) One gets $$1-2S(x)=\prod_{m=1}^\infty(1-x^{2m-1})^2(1-x^{2m}).$$ The equivalence is a special case of the Jacobi triple product: <a href="http://en.wikipedia.org/wiki/Jacobi_triple_product" rel="nofollow">http://en.wikipedia.org/wiki/Jacobi_triple_product</a> . Using this representation makes numerical computation of the function more difficult :-)</p>
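<p>As an added numerical sanity check (not part of the original answer), one can truncate both the series and the product and compare them in Python:</p>

```python
# Added sanity check of the identity
#   1 - 2*S(x) = prod_{m>=1} (1 - x^(2m-1))^2 (1 - x^(2m)),
# truncating both the series and the product.
x = 0.3

# Series side: S(x) = sum_{i>=1} (-1)^(i+1) x^(i^2)
S = sum((-1) ** (i + 1) * x ** (i * i) for i in range(1, 50))
lhs = 1 - 2 * S

# Product side, truncated at m = 50 (later factors differ from 1 negligibly)
rhs = 1.0
for m in range(1, 51):
    rhs *= (1 - x ** (2 * m - 1)) ** 2 * (1 - x ** (2 * m))

print(lhs, rhs)  # both approximately 0.41616
```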
238,052
<p>If I have 4 different types of data such (that I get from an Excel file) as:</p> <pre><code>https://pastebin.com/j3Bgfxqm </code></pre> <p>I am trying to implement a <code>Do</code> loop that extracts the data from the Excel file, superimposes the data in two different regions (as done here: <a href="https://mathematica.stackexchange.com/questions/237742/superimposed-curves-in-two-regions">Superimposed curves in two regions</a>), plots the data individually (from the <code>Do</code> loop) and then plots the data all together and superimposed. Here's my approach to everything except superimposing the curves (that's the part I don't know how to implement in the <code>Do</code> loop):</p> <pre><code>Do[ datTCpc = Extract[Import[&quot;excel file.xlsx&quot;], 1][[3 ;; All, {i, i + 1}]]; store = AppendTo[tts1, datTCpc]; (*Stores the data*) (*Plotting individually in the do loop*) Show[ ListLinePlot[datTCpc, PlotStyle -&gt; {Red}, AxesLabel -&gt; {&quot;T (ºC)&quot;, &quot;Cp(J/gK)&quot;}, LabelStyle -&gt; {Black, Bold, 14}, PlotLabel -&gt; Style[q2 &quot;K/min&quot;, Black, 14]]] // Print, {i, 1, imax2, 2}] (*Plots the store data combined - superimposed*) Show[ListLinePlot[store, PlotStyle -&gt; {Red, Blue, Gray, Black}, LabelStyle -&gt; {Black, Bold, 14}, ImageSize -&gt; Large, Frame -&gt; True, Axes -&gt; False, GridLines -&gt; Automatic, GridLinesStyle -&gt; Lighter[Gray, .8], PlotRange -&gt; {{20, 110}, All}]] </code></pre> <p>How can I superimpose the data using the first region from 29 to 41 (in x-axis data) and the second region from 72 to 85 (in the x-axis data)?</p> <p><strong>Clarification:</strong> By superimposing the curves or the data, I mean simply placing the curves one on top of the other (taking one as the reference and putting the other on top based on two regions).</p> <p><strong>EDIT</strong></p> <p>This is how the data will look like superimposed (done manually in Excel), where the two regions are shown in red circles:</p> <p><a 
href="https://i.stack.imgur.com/iMzbf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iMzbf.png" alt="enter image description here" /></a></p> <p>It was done by translating or rotating the curves until minimizing the differences.</p>
Daniel Huber
46,318
<p>After importing your data, I needed some fixes to convert it to numerical data:</p> <pre><code>dat = Import[&quot;https://pastebin.com/j3Bgfxqm&quot;, &quot;Data&quot;][[1]]; dat = ToExpression[ StringCases[#, &quot;{&quot; ~~ NumberString ~~ &quot;,&quot; ~~ NumberString ~~ &quot;}&quot;]] &amp; /@ dat; </code></pre> <p>Now we have a table of numerical data. To pick out the points with the prescribed x values:</p> <pre><code>dat = Flatten[{Take[#, 29 ;; 41], Take[#, 72 ;; 85]} &amp; /@ dat, 1]; </code></pre> <p>Your plot statement has a superfluous <code>Show</code> and the plot range is wrong. With these fixes:</p> <pre><code>ListLinePlot[dat, PlotStyle -&gt; {Red, Blue, Gray, Black}, LabelStyle -&gt; {Black, Bold, 14}, ImageSize -&gt; Large, Frame -&gt; True, Axes -&gt; False, GridLines -&gt; Automatic, GridLinesStyle -&gt; Lighter[Gray, .8]] </code></pre> <p><a href="https://i.stack.imgur.com/MDnPJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MDnPJ.png" alt="enter image description here" /></a></p>
238,052
<p>If I have 4 different types of data such (that I get from an Excel file) as:</p> <pre><code>https://pastebin.com/j3Bgfxqm </code></pre> <p>I am trying to implement a <code>Do</code> loop that extracts the data from the Excel file, superimposes the data in two different regions (as done here: <a href="https://mathematica.stackexchange.com/questions/237742/superimposed-curves-in-two-regions">Superimposed curves in two regions</a>), plots the data individually (from the <code>Do</code> loop) and then plots the data all together and superimposed. Here's my approach to everything except superimposing the curves (that's the part I don't know how to implement in the <code>Do</code> loop):</p> <pre><code>Do[ datTCpc = Extract[Import[&quot;excel file.xlsx&quot;], 1][[3 ;; All, {i, i + 1}]]; store = AppendTo[tts1, datTCpc]; (*Stores the data*) (*Plotting individually in the do loop*) Show[ ListLinePlot[datTCpc, PlotStyle -&gt; {Red}, AxesLabel -&gt; {&quot;T (ºC)&quot;, &quot;Cp(J/gK)&quot;}, LabelStyle -&gt; {Black, Bold, 14}, PlotLabel -&gt; Style[q2 &quot;K/min&quot;, Black, 14]]] // Print, {i, 1, imax2, 2}] (*Plots the store data combined - superimposed*) Show[ListLinePlot[store, PlotStyle -&gt; {Red, Blue, Gray, Black}, LabelStyle -&gt; {Black, Bold, 14}, ImageSize -&gt; Large, Frame -&gt; True, Axes -&gt; False, GridLines -&gt; Automatic, GridLinesStyle -&gt; Lighter[Gray, .8], PlotRange -&gt; {{20, 110}, All}]] </code></pre> <p>How can I superimpose the data using the first region from 29 to 41 (in x-axis data) and the second region from 72 to 85 (in the x-axis data)?</p> <p><strong>Clarification:</strong> By superimposing the curves or the data, I mean simply placing the curves one on top of the other (taking one as the reference and putting the other on top based on two regions).</p> <p><strong>EDIT</strong></p> <p>This is how the data will look like superimposed (done manually in Excel), where the two regions are shown in red circles:</p> <p><a 
href="https://i.stack.imgur.com/iMzbf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iMzbf.png" alt="enter image description here" /></a></p> <p>It was done by translating or rotating the curves until minimizing the differences.</p>
Hugh
12,558
<p>I am sure that someone more expert in coding can simplify this but what I am going to do is to add the equation of a straight line to each curve and then choose values for the slope and intercept of the straight lines to minimise the differences between the curves. This will translate and rotate them to bring them together. I will select one curve as the reference curve and bring the others onto it.</p> <p>One difficulty is that the x-values are all different for the data. I will therefore interpolate the data and then resample to give them all the same x values. This will enable me to do differences between the curves easily.</p> <p>As Daniel Huber has done I start by getting the data and turning it into numbers.</p> <pre><code> dat = Import[&quot;https://pastebin.com/j3Bgfxqm&quot;, &quot;Data&quot;][[1]]; dat = ToExpression[ StringCases[#, &quot;{&quot; ~~ NumberString ~~ &quot;,&quot; ~~ NumberString ~~ &quot;}&quot;]] &amp; /@ dat; </code></pre> <p>Next I do the following:<br/> select values for the intervals that are to be used for adjusting the data,<br/> calculate the average increment of the data,<br/> let nn equal the number of curves<br/> interpolate each curve<br/> plot<br/></p> <pre><code> {x1, x2, x3, x4} = {20, 50, 73, 98}; dx = Mean[Flatten[Differences[#[[All, 1]]] &amp; /@ dat]]; nn = Length[dat]; ff = Interpolation[#] &amp; /@ dat; Plot[Evaluate[Table[ff[[n]][x], {n, nn}]], {x, x1, x4}] </code></pre> <p><a href="https://i.stack.imgur.com/cHPKk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cHPKk.png" alt="Interpolated data" /></a></p> <p>Next I do the following:<br/> resample the data adding the equation of a straight line to each curve,<br/> set the reference curve values,<br/> form the mean square error with the reference curve,<br/> define the list of unknowns (uk),<br/> take the derivative of the error with respect to the unknowns and set to zero,<br/> solve for the unknowns,<br/> define a new function for each curve 
with the correct translation and rotation,<br/> plot <br/></p> <pre><code> ClearAll[x, f, m , c, g]; Do[f[n, x] = Join[Table[ff[[n]][x] + m[n] x + c[n], {x, x1, x2, dx}], Table[ff[[n]][x] + m[n] x + c[n], {x, x3, x4, dx}]], {n, Length@ff}]; m[1] = 0; c[1] = 0; (* Reference function *) err = Plus @@ Sum[(f[n, x] - f[1, x])^2, {n, 2, Length@ff}]; uk = Flatten[Table[{m[n], c[n]}, {n, 2, Length@ff}]]; eqns = 0 == D[err, #] &amp; /@ uk // Simplify; sol = First@Solve[eqns]; g = Table[ff[[n]][x] + m[n] x + c[n] /. sol, {n, nn}]; Plot[g, {x, x1, x4}] </code></pre> <p><a href="https://i.stack.imgur.com/CBkrT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CBkrT.png" alt="adjusted curves" /></a></p> <p>Does this solve your problem? Happy for someone to tidy up the formulation; I feel sure this can be done.</p> <p><strong>EDIT</strong></p> <p>I was asked how to end up with data that can be in lists rather than as equations. This can be done by resampling the equations. Here I use the starting and ending points x1 and x4, defined previously, and the mean increment of the original data dx.</p> <pre><code> h = Table[{x, #}, {x, x1, x4, dx}] &amp; /@ g; ListPlot[h] </code></pre> <p><a href="https://i.stack.imgur.com/n06UK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/n06UK.png" alt="Resampled data" /></a></p>
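<p>For readers outside <em>Mathematica</em>, here is an added plain-Python sketch (my own, not part of the answer above) of the core step: fitting the slope and intercept of the correcting line by ordinary least squares on the pointwise differences from the reference curve.</p>

```python
# Added sketch (mine, not Hugh's Mathematica code): choose slope m and
# intercept c so that cur + m*x + c best matches ref in the least-squares
# sense.  This is ordinary linear least squares on d_i = ref_i - cur_i.
def align(xs, ref, cur):
    n = len(xs)
    d = [r - c for r, c in zip(ref, cur)]
    sx = sum(xs)
    sxx = sum(x * x for x in xs)
    sd = sum(d)
    sxd = sum(x * di for x, di in zip(xs, d))
    m = (n * sxd - sx * sd) / (n * sxx - sx * sx)
    c = (sd - m * sx) / n
    return m, c  # add m*x + c to cur to superimpose it on ref

# Example: cur differs from ref exactly by the line -0.3*x - 1.2,
# so the fit should recover m = 0.3, c = 1.2.
xs = [i * 0.5 for i in range(20)]
ref = [x ** 2 for x in xs]
cur = [x ** 2 - 0.3 * x - 1.2 for x in xs]
m, c = align(xs, ref, cur)
print(m, c)  # approximately 0.3 and 1.2
```

<p>The <em>Mathematica</em> code above sets up essentially the same normal equations symbolically (<code>D[err, #] == 0</code>) and solves them with <code>Solve</code>, restricted to the two selected x-intervals.</p>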
886,626
<p>I want to solve the following system of congruences:</p> <p>$ x \equiv 1 \mod 2 $</p> <p>$ x \equiv 2 \mod 3 $</p> <p>$ x \equiv 3 \mod 4 $</p> <p>$ x \equiv 4 \mod 5 $</p> <p>$ x \equiv 5 \mod 6 $</p> <p>$ x \equiv 0 \mod 7 $</p> <p>I know, but do not understand why, that the first two congruences are redundant. Why is this the case? I see that the moduli of the congruences are not pairwise relatively prime, but why does this cause a redundancy or contradiction? Further, why is it that in the solution to this system, we discard the first two congruences and not </p> <p>$ x \equiv 3 \mod 4 $</p> <p>$ x \equiv 5 \mod 6 $</p> <p>being that $ gcd(3,6) = 3 $ and $gcd(2,4) = 2$ ?</p> <p>EDIT:</p> <p>How is the modulus of the unique solution affected if I instead consider the system of congruences without the redundancy, i.e. does $M = 4 * 5 * 6 * 7$ or does it remain $M= 2*3*4*5*6*7$?</p>
Bill Dubuque
242
<p><strong>Hint</strong> $\ x\equiv -1\ $ mod $\,2,3,4,5,6\iff x\equiv -1 \pmod m\ $ for $\, m = {\rm lcm}(2,3,4,5,6) = {\rm lcm}(4,5,6)$</p> <p>because $\ 2,3,4,5,6\mid x\!+\!1\iff 4,5,6\mid x\!+\!1,\ $ since $\,4,6\mid x\!+\!1\,\Rightarrow\,2,3\mid x\!+\!1$</p>
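<p>An added brute-force check of this hint in Python (not part of the original answer): together with the question's last congruence $x \equiv 0 \pmod 7$, the solutions are exactly $x \equiv 119 \pmod{420}$.</p>

```python
# Added brute-force check: the first five congruences collapse to
# x = -1 (mod 60), since lcm(2,3,4,5,6) = lcm(4,5,6) = 60, and together
# with x = 0 (mod 7) the solutions are x = 119 (mod 420).
from math import lcm  # Python 3.9+ accepts multiple arguments

assert lcm(2, 3, 4, 5, 6) == lcm(4, 5, 6) == 60

sols = [x for x in range(420)
        if all(x % m == m - 1 for m in (2, 3, 4, 5, 6)) and x % 7 == 0]
print(sols)  # [119]
```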
779,042
<p>Any pointers on how should I start?</p> <p>$$I:=\int_ {0}^{\infty} \frac {\cos(ax)} {(x^2 + b^2)^n} \ \mathrm{d}x$$</p>
Cody
13,295
<p>One may also do this one using residues, though the resulting series can be rather tedious.</p> <p>But, it follows a binomial pattern. </p> <p>I am going to allow the power on the denominator to be $n+1$ instead of $n$ as to find the nth derivative. </p> <p>Consider $$\int_{C}\frac{e^{iaz}}{(z^{2}+b^{2})^{n+1}}$$, where C is the semicircular contour in the UHP with poles at $\pm bi$, which have order $n+1$.</p> <p>The only pole to be concerned with is the one at $bi$.</p> <p>Thus, by taking the $nth$ derivative and defining $I(z)=\frac{e^{iaz}}{(z+bi)^{n+1}}$.</p> <p>$$I^{(n)}(z)=\frac{(ia)^{n}e^{iaz}}{(z+bi)^{n+1}}-\frac{(n+1)(n)(ia)^{n-1}e^{iaz}}{(z+bi)^{n+2}}+\frac{(n)(n-1)(ia)^{n-2}e^{iaz}(n+1)(n+2)}{2!(z+bi)^{n+3}}-\cdot\cdot\cdot +\frac{(-1)^{n}e^{iaz}(n+1)(n+2)\cdot\cdot\cdot (2n)}{(z+bi)^{2n+1}}$$.</p> <p>Since $$\int\frac{e^{iaz}}{(z^{2}+b^{2})^{n+1}}dz=\frac{2\pi i}{n!}I^{(n)}(bi)$$, </p> <p>let $z\to bi$, simplify a little, and note the powers of the $i$ terms eliminate the alternation of the series.</p> <p>Several terms being:</p> <p>$$\frac{2\pi i}{n!}\left[\frac{e^{-ab}}{(2bi)^{n+1}}(ia)^{n}-\frac{ne^{-ab}(n+1)}{(2bi)^{n+2}}(ia)^{n-1}+\cdot\cdot\cdot\right] $$</p> <p>simplify a little:</p> <p>$$\int\frac{e^{iaz}}{(z^{2}+b^{2})^{n+1}}dz=\frac{2\pi e^{-ab}}{n!}\left[\frac{a^{n}}{(2b)^{n+1}}+\frac{n(n+1)}{(2b)^{n+2}}a^{n-1}+\frac{(n-1)n(n+1)(n+2)}{2!(2b)^{n+3}}a^{n-2}+\cdot\cdot\cdot +\frac{(2n)!}{n!(2b)^{2n+1}}\right]$$</p> <p>The portions along the x axis and around the big arc:</p> <p>$$\int_{-\infty}^{0}\frac{e^{iax}}{(b^{2}+x^{2})^{n+1}}dx+\int_{0}^{\infty}\frac{e^{iax}} {(b^{2}+x^{2})^{n+1}}dx+\int_{0}^{\pi}\frac{e^{ia Re^{i\theta}}}{(b^{2}+R^{2}e^{2i\theta})^{n+1}}iRe^{i\theta}d\theta$$.</p> <p>By letting $x\to -x$ in the first one and adding to the second, one obtains:</p> <p>$$2\int_{0}^{\infty}\frac{\cos(ax)}{(b^{2}+x^{2})^{n+1}}dx$$</p> <p>Of course, the arc tends to 0 due to the standard $e^{-aR\sin\theta}\to 0$, as $R\to \infty$ argument. 
Thus, putting it together, the result becomes:</p> <p>$$\int_{0}^{\infty}\frac{\cos(ax)}{(b^{2}+x^{2})^{n+1}}dx=\frac{\pi e^{-ab}}{n!(2b)^{2n+1}}\left[(2ab)^{n}+n(n+1)(2ab)^{n-1}+\frac{(n-1)n(n+1)(n+2)}{2!}(2ab)^{n-2}+\cdot\cdot\cdot +\frac{(2n)!}{n!}\right]$$</p> <p>which can be written in general as:</p> <p>$$\int_{0}^{\infty}\frac{\cos(ax)}{(x^{2}+b^{2})^{n+1}}dx=\frac{\pi e^{-ab}}{n! (2b)^{2n+1}}\sum_{k=0}^{n}\frac{(2ab)^{n-k}(n+k)!}{k!(n-k)!}$$</p>
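<p>As an added spot check (not part of the original derivation), the closed form can be compared against direct numerical quadrature, here for $n=1$, $a=b=1$, using composite Simpson's rule in Python:</p>

```python
# Added numerical spot check of the closed form for n = 1, a = b = 1,
# using composite Simpson's rule on [0, 200]; the integrand decays like
# x^-4, so the truncated tail contributes less than about 1e-7.
from math import cos, exp, pi, factorial

a = b = 1.0
n = 1

def f(x):
    return cos(a * x) / (x * x + b * b) ** (n + 1)

N = 40000                     # number of subintervals (even)
h = 200.0 / N
simpson = f(0.0) + f(200.0)
for k in range(1, N):
    simpson += (4 if k % 2 else 2) * f(k * h)
simpson *= h / 3

closed = (pi * exp(-a * b) / (factorial(n) * (2 * b) ** (2 * n + 1))
          * sum((2 * a * b) ** (n - k) * factorial(n + k)
                / (factorial(k) * factorial(n - k))
                for k in range(n + 1)))

print(simpson, closed)  # both approximately 0.5779
```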
1,781,269
<p>What's the general method to find the slope of a curve at the origin if the derivative at the origin becomes indeterminate? For example:</p> <p>What is the slope of the curve <span class="math-container">$x^3 + y^3= 3axy$</span> at the origin, and how does one find it? After following the process of implicit differentiation and plugging <span class="math-container">$x=0$</span> and <span class="math-container">$y=0$</span> into the derivative, we get <span class="math-container">$0/0$</span>.</p> <p>Actually, I have asked this question before, and a sort of satisfactory answer that I got was </p> <p>" For small <span class="math-container">$x$</span> and <span class="math-container">$y$</span>, the values of <span class="math-container">$x^3$</span> and <span class="math-container">$y^3$</span> will be much smaller than <span class="math-container">$3axy$</span>, so the zeroes of the function will be approximately where the zeroes of <span class="math-container">$0=3axy$</span> are -- that is, near the origin the curve will look like the solutions to that, which is just the two coordinate axes. So the curve will cross itself at the origin, passing through the origin once horizontally and once vertically. (This is also why implicit differentiation can't work at the origin -- the solution set simply doesn't look like a straight line there under any magnification)." </p> <p><strong>Edit</strong></p> <p><em>If I approximate the function by saying that at (0,0) the behavior is dominated by the 3axy term, as x^3 and y^3 are very small, then</em> <strong>3axy=0 and the tangents are x=0 and y=0. Is doing so (saying x=0 and y=0) only a linear approximation?</strong> <strong>Because I am approximating the curve with a straight line at the origin. But a linear approximation is the 1st derivative (the 1st term of the Taylor series)</strong>.
This cannot be right because <strong>a Taylor series can't be formed where the derivative doesn't exist.</strong></p> <p><strong>And if this is right, then the function is approximately given by 3axy=0 at (0,0). But how does this give the tangent at (0,0)? How shall I go about it?</strong></p> <p>Edit:</p> <p>Is the given answer right, because the slope does exist? </p>
Emilio Novati
187,568
<p>The curve of equation $x^3+y^3=3axy$ is a <a href="http://mathworld.wolfram.com/FoliumofDescartes.html" rel="nofollow">folium of Descartes</a>, and has a double point at $(x,y)=(0,0)$. So at this point the curve does not have a well-defined tangent and, as noted in the answer of Hagen, it has no derivative or slope. </p> <p>But you can ask about the slopes of the two tangents at this point, and you can find an answer using the parametric equation of the curve: $$ x=\frac{3at}{1+t^3} \qquad y=\frac{3at^2}{1+t^3} $$ </p> <p>Using this you can find $\dot x=\frac{dx}{dt}$ and $\dot y=\frac{dy}{dt}$ and calculate the slope $$ \frac{dy}{dx}=\frac{\dot y}{\dot x} $$ or $$ \frac{dx}{dy}=\frac{\dot x}{\dot y} $$ for the two values of $t$ that correspond to the two different passages through the origin. </p> <p>Note that one of these values is $t=0$ but the other is $t \to \infty$, so one of the two slopes has to be evaluated as a limit. </p>
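<p>As an added numerical illustration (mine, with $a=1$), one can estimate the two slopes directly from the parametrization by finite differences: the passage at $t\to 0$ is horizontal and the passage at $t\to\infty$ is vertical.</p>

```python
# Added numeric illustration (with a = 1) of the two tangents at the
# origin, using the parametrization from the answer and central
# difference quotients.  The passage at t -> 0 is horizontal
# (dy/dx -> 0); the passage at t -> infinity is vertical (dx/dy -> 0).
a = 1.0

def x(t):
    return 3 * a * t / (1 + t ** 3)

def y(t):
    return 3 * a * t ** 2 / (1 + t ** 3)

def slope(t, h=1e-6):
    # dy/dx at parameter t, approximated by central differences
    return (y(t + h) - y(t - h)) / (x(t + h) - x(t - h))

print(slope(1e-3))      # close to 0: horizontal tangent y = 0
print(1 / slope(1e3))   # close to 0: vertical tangent x = 0
```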
2,543,123
<p>Let $\gcd(a, 11) = 1$. If $3a^7 \equiv 5 \pmod{11}$, show that $a \equiv 3 \pmod{11}$.</p> <p>My first approach was to use Euler's theorem:</p> <p>$a^{10} \equiv 1 \pmod{11}$</p> <p>$3a^7 \equiv 5 \pmod{11}$ implies that $a^{-3} \equiv 9 \pmod{11}$ </p> <p>I feel i'm not on the right track, hints are appreciated.</p>
Michael Hardy
11,667
<p>The possible numbers are $n$ through $6n,$ not $1$ through $6n.$</p> <p>Does "random" mean uniformly distributed? Not in the usage of probabilists, but many people, including many mathematicians, sometimes use the term that way.</p> <p>Notice that if $n=2$ and $X$ is the sum of the two resulting numbers, then $\Pr(X=2) = \Pr(X=12) = 1/36,$ and $\Pr(X=3) = \Pr(X=11) = 2/36,$ and $\Pr(X=4) = \Pr(X=10) = 3/36,$ and so on, so these are not uniformly distributed.</p> <p>For the sum of the numbers shown by $n$ dice, the mean is $\frac 7 2 n$ and the variance is $n\sigma^2,$ where $\sigma^2$ is the variance of the number shown by a single die, and that variance is much smaller than the variance of a uniform distribution on the set $\{n,n+1,n+2,\ldots,6n\}.$ So it's nowhere near uniformly distributed. You would only rarely get a result as small as $n$ or as large as $6n,$ and far more often get a number that's near $\frac 7 2 n.$</p>
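<p>An added exact check in Python (not from the original answer): the distribution of the sum of $n$ dice via convolution, together with its mean $\frac 7 2 n$ and its variance $\frac{35}{12}n$.</p>

```python
# Added exact computation: distribution of the sum of n fair dice,
# by convolving the single-die distribution with itself.
from fractions import Fraction

def dice_sum_dist(n):
    """Exact distribution of the sum of n fair six-sided dice."""
    die = {k: Fraction(1, 6) for k in range(1, 7)}
    dist = {0: Fraction(1)}
    for _ in range(n):
        new = {}
        for s, p in dist.items():
            for k, q in die.items():
                new[s + k] = new.get(s + k, Fraction(0)) + p * q
        dist = new
    return dist

d2 = dice_sum_dist(2)
print(d2[2], d2[3], d2[4])  # 1/36 1/18 1/12

# mean 7n/2 and variance 35n/12, checked here for n = 3
d3 = dice_sum_dist(3)
mean = sum(s * p for s, p in d3.items())
var = sum((s - mean) ** 2 * p for s, p in d3.items())
print(mean, var)  # 21/2 35/4
```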
25,239
<p>I am very new to <em>Mathematica</em> (first time!) and I'm already having troubles with arrays. I basically have a 4D tensor, that is a matrix (call it <code>M</code>) which has for each entry another matrix. I want to select the elements of the nested matrices that are bigger than zero, but <code>Select[M, # &gt; 0 &amp;]</code> doesn't work. I suspect it is because I do not respect the dimensions. I am kind of lost. Thanks in advance</p>
BoLe
6,555
<pre><code>m = RandomInteger[{1, 42}, {3, 3, 3, 3}]; </code></pre> <p><code>Cases</code> will extract every element equal to 42 here. (There won't always be any for the random nature.)</p> <pre><code>Cases[m, i_Integer /; i == 42, Infinity] </code></pre> <p>With <code>Position</code> you can identify where these elements are.</p> <pre><code>Position[m, i_Integer /; i == 42, Infinity] </code></pre> <p><code>Select</code> selects sublists (row of matrices in this case) with some property, whether for example they containt a matrix with <code>42</code>.</p> <pre><code>Select[m, MemberQ[Flatten[#], 42] &amp;] </code></pre>
25,239
<p>I am very new to <em>Mathematica</em> (first time!) and I'm already having troubles with arrays. I basically have a 4D tensor, that is a matrix (call it <code>M</code>) which has for each entry another matrix. I want to select the elements of the nested matrices that are bigger than zero, but <code>Select[M, # &gt; 0 &amp;]</code> doesn't work. I suspect it is because I do not respect the dimensions. I am kind of lost. Thanks in advance</p>
george2079
2,079
<p>If you goal is a matrix of the origial dimensions retaining only the selectred values you could do something like this:</p> <pre><code> Map[ If[# &gt; 0, #, Null] &amp; , m , {-1}] </code></pre>
69,655
<p>I'm facing a strange behavior of <code>HoldForm</code>.</p> <p>I need to display <code>1/2*3/4</code> in LaTeX like this : $$ \frac{1}{2} \times \frac{3}{4} $$</p> <p>So I use Mathematica : <code>1/2* 3/4 // HoldForm // TeXForm</code> BUT I get $$ \frac{3}{2\ 4} $$</p> <p>First, the writing <code>2 space 4</code> is ambiguous, and second, it does not hold form at all :(</p> <p>Can you help me ? Thank you ! (happy Holidays)</p> <p><strong>EDIT : I would need an automatic transformation of any input to correct TeX or an automatic correction of any output to correct TeX.</strong></p>
Fred Simons
20,253
<p>This unexpected behaviour of HoldForm and Hold seems to be due to the function MakeExpression and not a bug in HoldForm or Hold.</p> <p>Having entered</p> <pre><code>Hold[1/2 3/4] </code></pre> <p>the frontend sends the command</p> <pre><code>MakeExpression[BoxData[RowBox[{"Hold","[",RowBox[{RowBox[{"1","/","2"}], RowBox[{"3","/","4"}]}],"]"}]],StandardForm] </code></pre> <p>to the kernel for further evaluation. The essential part is</p> <pre><code>MakeExpression[BoxData[RowBox[{RowBox[{"a","/","b"}], RowBox[{"c","/","d"}]}]],StandardForm] (* HoldComplete[(a c)/(b d)] *) </code></pre> <p>So already in MakeExpression the numerators and denominators are collected into one numerator and denominator, before Hold or HoldForm is used.</p>
69,655
<p>I'm facing a strange behavior of <code>HoldForm</code>.</p> <p>I need to display <code>1/2*3/4</code> in LaTeX like this : $$ \frac{1}{2} \times \frac{3}{4} $$</p> <p>So I use Mathematica : <code>1/2* 3/4 // HoldForm // TeXForm</code> BUT I get $$ \frac{3}{2\ 4} $$</p> <p>First, the writing <code>2 space 4</code> is ambiguous, and second, it does not hold form at all :(</p> <p>Can you help me ? Thank you ! (happy Holidays)</p> <p><strong>EDIT : I would need an automatic transformation of any input to correct TeX or an automatic correction of any output to correct TeX.</strong></p>
Mr.Wizard
121
<p>I am posting a second answer because I am now taking a very different interpretation of your problem. In a comment below my first answer you state:</p> <blockquote> <p>Your function seems to correct one type problem with fraction. But I am more looking for something able to display TeX in the exact form I write them. Probably MM is not the right tool to use. I am disapointed.</p> </blockquote> <p>I assumed that you were looking for TeX conversion of arbitrary expressions generated by (evaluation in) <em>Mathematica</em> but if instead you simply want TeX for expressions in "the exact form I write them" you may be able to use Strings, e.g.:</p> <p><img src="https://i.stack.imgur.com/VwLQn.png" alt="enter image description here"></p> <p>The string was created using <a href="http://reference.wolfram.com/language/tutorial/EnteringTwoDimensionalInput.html" rel="nofollow noreferrer">standard input methods</a>. <code>\[Times]</code> was entered with <kbd>Esc</kbd><code>*</code><kbd>Esc</kbd>. </p> <p>Here is the input in copyable form:</p> <pre><code>"\!\(\*FractionBox[\(1\), \(2\)]\)\[Times]\!\(\*FractionBox[\(3\), \(4\)]\)" // TeXForm </code></pre> <p>And the output formatted by MathJax:</p> <p>$\frac{1}{2}\times \frac{3}{4}$</p> <p>Critically this method avoids <em>interpretation</em> of your raw input into e.g. <code>Times</code> and <code>Power</code>, thereby bypassing those "pretty printing" rules that were changing your expression in an unwanted way.</p>
734,248
<p>Example of two open balls such that the one with the smaller radius contains the one with the larger radius.</p> <p>I cannot find a metric space in which this is true. Looking for hints in the right direction. </p>
George Kapoulas
678,810
<p>Try <span class="math-container">$p$</span>-adic numbers. The range of the metric is the set of powers of <span class="math-container">$p$</span> (together with <span class="math-container">$0$</span>).</p>
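<p>The hint points to $p$-adic numbers; as an added, independent sanity check of the phenomenon itself (my own example, not the answerer's), a four-point subspace of the real line already works and can be verified by brute force:</p>

```python
# Added elementary example (mine, not the p-adic one from the hint):
# in the subspace X = {0, 0.5, 1, 3} of the real line with the usual
# metric, an open ball of radius 2.1 strictly contains an open ball
# of radius 2.5.
X = [0.0, 0.5, 1.0, 3.0]

def ball(center, radius):
    return {x for x in X if abs(x - center) < radius}

big = ball(1.0, 2.1)    # radius 2.1: reaches 3, so it is all of X
small = ball(0.0, 2.5)  # radius 2.5: misses 3

print(sorted(big), sorted(small))
# the ball with the SMALLER radius strictly contains the other one:
print(small < big)  # True
```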
911,333
<p>Show that the series</p> <p>$$\sum_n \tan\left(\frac{1}{n}\right)$$</p> <p>diverges.</p> <p>I don't have an attempt to show, since I am having some trouble with series involving trigonometric functions. I would be glad if I could get a detailed answer, if possible.</p> <p>Thanks!</p>
JimmyK4542
155,509
<p><strong>Hint</strong>: For $x \in (0,\tfrac{\pi}{2})$ we have $\tan x &gt; x$. Thus, $\tan \dfrac{1}{n} &gt; \dfrac{1}{n}$. Now use the comparison test. </p>
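<p>An added numerical sanity check of the hint's inequality in Python:</p>

```python
# Added check: tan(1/n) > 1/n for each n, so the partial sums dominate
# the harmonic numbers (which is what the comparison test uses).
from math import tan

assert all(tan(1.0 / n) > 1.0 / n for n in range(1, 10001))

S = sum(tan(1.0 / n) for n in range(1, 10001))
H = sum(1.0 / n for n in range(1, 10001))
print(S > H)  # True
```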
911,333
<p>Show that the series</p> <p>$$\sum_n \tan\left(\frac{1}{n}\right)$$</p> <p>diverges.</p> <p>I don't have an attempt to show, since I am having some trouble with series involving trigonometric functions. I would be glad if I could get a detailed answer, if possible.</p> <p>Thanks!</p>
davidlowryduda
9,754
<p>As $x \to 0$, we have that $\frac{\tan x}{x} \to 1$. So $\tan \frac{1}{n}$ behaves just like $\frac 1n$ as $n$ gets large.</p>
3,170,871
<p>Could anyone please give me a hint on how to compute the following integral?</p> <p><span class="math-container">$$\int \sqrt{\frac{x-2}{x^7}} \, \mathrm d x$$</span></p> <p>I'm not required to use hyperbolic/ inverse trigonometric functions.</p>
peter
658,932
<p>Write <span class="math-container">$y(x):=\sqrt{\frac {x-2} {x^7}}$</span>. </p> <p>Note that <span class="math-container">$$y'(x)= \frac {7-3x}{x^8} \frac 1 y $$</span></p> <p>Hence <span class="math-container">$$\frac d {dx} x^n y=n x^{n-1} y + x^{n-8} \frac {7-3x} y.$$</span></p> <p>Do the ansatz <span class="math-container">$$F(x)=\sum_{n=0}^k a_nx^ny ~~~~\text{ and } ~~~~F'(x)=y(x). $$</span></p> <p>We get <span class="math-container">$$y\sum_{n=1}^{k}a_n nx^{n-1} +\frac 1 y\sum_{n=0}^{k} a_n x^{n-8} ({7-3x})=y $$</span></p> <p>Thus <span class="math-container">$$\sum_{n=0}^{k} a_n x^{n-8} ({7-3x})=y^2(1-\sum_{n=1}^{k}a_n nx^{n-1}).$$</span></p> <p>Now insert <span class="math-container">$y^2$</span> and get <span class="math-container">$$({7-3x})\sum_{n=0}^{k} a_n x^{n-1} = (x-2 )(1-\sum_{n=1}^{k}a_n nx^{n-1}) $$</span> or equivalently <span class="math-container">$$2-x+({7-3x})\sum_{n=0}^{k} a_n x^{n-1}+ (x-2)\sum_{n=1}^{k}a_n nx^{n-1}=0. $$</span> </p> <hr> <p>We solve this recursively. </p> <p>The lowest order is <span class="math-container">$x^{-1}$</span>. There we have <span class="math-container">$7a_0 x^{-1}=0$</span>, so <span class="math-container">$a_0=0$</span>. </p> <p>In constant order: <span class="math-container">$2+7a_1-3a_0-2a_1=0$</span>, so <span class="math-container">$a_1=-\frac 2 5$</span> </p> <p>In order <span class="math-container">$x$</span>: <span class="math-container">$(-1+7a_2 -3 a_1 +a_1-4 a_2)x=0$</span> , so <span class="math-container">$a_2= \frac 1 3 (1+2 a_1)=\frac 1 {15}$</span> </p> <p>in order <span class="math-container">$x^2$</span>: <span class="math-container">$(7a_3-3a_2+2a_2-6 a_3)x^2=0$</span>, so <span class="math-container">$a_3=a_2=\frac 1 {15}$</span>. 
</p> <p>in order <span class="math-container">$x^3$</span>: <span class="math-container">$(7a_4-3a_3+3a_3 - 8a_4)x^3=0$</span>, so <span class="math-container">$a_4=0$</span> </p> <p>in order <span class="math-container">$x^n$</span> for <span class="math-container">$n&gt;3$</span>: <span class="math-container">$(7a_{n+1} - 3 a_n +n a_n - 2(n+1) a_{n+1})x^n$</span>, so <span class="math-container">$a_{n+1}=\frac {n-3}{2n-5} a_n=0$</span>.</p> <hr> <p>In conclusion it follows that <span class="math-container">$$\int y(x)\,dx= const + F(x)= const+ \frac 1 {15} (-6x+ x^2+x^3) y $$</span></p>
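<p>As an added numerical spot check (not part of the original derivation), the antiderivative can be verified by a central difference quotient at, say, $x=3$:</p>

```python
# Added spot check that F(x) = (1/15)(-6x + x^2 + x^3) y(x), with
# y(x) = sqrt((x - 2)/x^7), is an antiderivative of y: the central
# difference quotient of F at x = 3 should reproduce y(3).
from math import sqrt

def y(x):
    return sqrt((x - 2) / x ** 7)

def F(x):
    return (-6 * x + x ** 2 + x ** 3) * y(x) / 15

h = 1e-5
deriv = (F(3 + h) - F(3 - h)) / (2 * h)
print(deriv, y(3))  # both approximately 0.021383
```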
3,977,081
<p>I’m just a high school student, so I may be somewhat logically flawed in understanding this.</p> <p>According to Wikipedia, the definition of a function requires an input <span class="math-container">$x$</span> with its domain <span class="math-container">$X$</span> and an output <span class="math-container">$y$</span> with its domain <span class="math-container">$Y$</span>, and the function <span class="math-container">$f$</span> maps <span class="math-container">$x$</span> to <span class="math-container">$y$</span>.</p> <p>But how about <span class="math-container">$f(x)$</span>? I often see syntax such as <span class="math-container">$f(1) = 0$</span> in my textbook. Doesn’t that mean <span class="math-container">$f(x)$</span> is first assigned a value, and then the value is transferred into <span class="math-container">$y$</span>? So, there must be two transitions/mappings between the input <span class="math-container">$x$</span> and the output <span class="math-container">$y$</span>, right?</p> <p>My conceptual model of a function is like this: a definition of a function requires an input <span class="math-container">$x$</span> with its domain <span class="math-container">$X$</span>, a forwarder <span class="math-container">$f(x)$</span> with its domain <span class="math-container">$F$</span> and an output <span class="math-container">$y$</span> with its domain <span class="math-container">$Y$</span>. The function <span class="math-container">$f$</span> first maps <span class="math-container">$x$</span> to <span class="math-container">$f(x)$</span>, then maps <span class="math-container">$f(x)$</span> to <span class="math-container">$y$</span>.</p> <p>These two definitions are not quite the same.</p> <hr /> <p>On 2022.6.29: The picture below has resolved my confusion.</p> <p><a href="https://i.stack.imgur.com/0idkt.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0idkt.jpg" alt="enter image description here" /></a></p>
Timothy Chang
871,381
<p>I'm the OP, and I now more or less know what was causing my confusion back then.</p> <p>I was actually thinking of functions in a more computer-science way. When we use the notation <span class="math-container">$y = f(x)$</span> in programming languages, it actually works in the same way as I described in the question above. It first computes <span class="math-container">$f(x)$</span> from the input <span class="math-container">$x$</span> and stores the result in a temporary variable, and it then assigns the value of the temporary variable to the variable <span class="math-container">$y$</span>. There really are two transitions in the process.</p> <p>Yes, of course one can argue that a function in mathematics doesn't need to have the same definition as a function in programming languages. I agree, even though it is really weird to have two different definitions describing the same concept. But it is actually worthwhile to discuss which of them is more appropriate and more intuitive.</p> <p>If we redefine functions in mathematics in a more computer-science way, we can define a function as <span class="math-container">$f: I → O$</span>, where <span class="math-container">$f$</span> maps the input <span class="math-container">$x$</span> in the domain <span class="math-container">$I$</span> to the output <span class="math-container">$f(x)$</span> in the codomain <span class="math-container">$O$</span>. Defining functions this way yields a much more elegant expression, which gets rid of the redundant usage of <span class="math-container">$y$</span>, which means exactly the same as <span class="math-container">$f(x)$</span>. 
Confusion in choosing which notation to use happens all the time, not only between <span class="math-container">$f(x) = ax+b$</span> and <span class="math-container">$y = f(x) = ax+b$</span> but also between <span class="math-container">$dy/dx$</span> and <span class="math-container">$df(x)/dx$</span>.</p> <p>Moreover, if we look back at the definition of functions in programming languages, this kind of redundancy just doesn’t exist at all, and it has never caused any problem. The variable <span class="math-container">$y$</span> is not pre-defined in the function itself, and is actually only needed when we want to “store” the final result, not to “use” the result for further calculation. The “real” output of the function <span class="math-container">$f$</span> is <span class="math-container">$f(x)$</span>, not <span class="math-container">$y$</span>.</p> <p>The variable <span class="math-container">$y$</span> can still be used with functions in mathematics, but now it should no longer be seen as part of the definition of the function. The expression <span class="math-container">$y = f(x) = ax+b$</span> should thus be understood as the simultaneous equations <span class="math-container">$y=f(x)$</span> and <span class="math-container">$f(x) = ax+b$</span>.</p> <p>To conclude, I think the original definition is outdated and flawed compared to the more modern one used in computer science. Since it is such a fundamental and frequently used concept in mathematics, it deserves a refurbished definition, hopefully soon.</p>
4,064,209
<p><a href="https://i.stack.imgur.com/Ux3cH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Ux3cH.png" alt="enter image description here" /></a></p> <p>Above is the exercise. Showing that <span class="math-container">$S$</span> is bounded is straightforward from <span class="math-container">$A$</span> being bounded and the triangle inequality, but I thought that to show <span class="math-container">$S$</span> is closed, I would do the usual thing of assuming <span class="math-container">$w \in \bar S$</span>; then by definition, there is a sequence <span class="math-container">$\{s_n\}$</span> with elements in <span class="math-container">$S$</span> such that <span class="math-container">$s_n \to w$</span> with respect to the usual metric.</p> <p>The problem is now I am stuck and can't seem to use the fact that <span class="math-container">$A$</span> is closed to conclude that <span class="math-container">$w \in S$</span>. Is this approach even possible? And is there a better approach, perhaps showing that <span class="math-container">$\mathbb{R^2} \backslash A$</span> is open is easier? Thanks!</p> <p>Note: Please do not use compactness as I have not covered it yet!</p> <p>(I have an exam soon and so I would like exposure to as many problems as possible. Sorry if I haven't shown too much work before posting.)</p>
Igor Rivin
109,865
<p>Let <span class="math-container">$P(x) = \sum_{i=0}^\infty a_i x^i,$</span> and let <span class="math-container">$Q(x) = \sum_{j=0}^\infty b_j x^j.$</span> Then your limit just says that</p> <p><span class="math-container">$P(1) Q(1) = (PQ)(1).$</span> Notice that for this to make sense, both series have to have radius of convergence at least <span class="math-container">$1$</span> (and, as pointed out in the other answer, one of them has to converge absolutely at <span class="math-container">$1$</span>).</p>
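<p>A quick numerical sanity check (the coefficient choices below are mine, purely for illustration): take <span class="math-container">$a_i=(1/2)^i$</span>, absolutely convergent at <span class="math-container">$1$</span>, and <span class="math-container">$b_j=(-1)^j/(j+1)$</span>, conditionally convergent at <span class="math-container">$1$</span>; Mertens' theorem then gives <span class="math-container">$(PQ)(1)=P(1)Q(1)=2\ln 2$</span>.</p>

```python
# a_i = (1/2)^i converges absolutely at x = 1 and b_j = (-1)^j/(j+1) converges
# conditionally there, so Mertens' theorem applies: the Cauchy product series
# evaluated at 1 equals P(1) * Q(1) = 2 ln 2.
import math

N = 1000
a = [0.5 ** i for i in range(N)]
b = [(-1) ** j / (j + 1) for j in range(N)]
# truncated coefficients of the product series (PQ)(x)
c = [sum(a[i] * b[n - i] for i in range(n + 1)) for n in range(N)]

product_of_sums = sum(a) * sum(b)   # approximates P(1) * Q(1)
sum_of_product = sum(c)             # approximates (PQ)(1)
assert abs(product_of_sums - 2 * math.log(2)) < 1e-2
assert abs(sum_of_product - product_of_sums) < 1e-2
```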
3,898,411
<p>How can I prove that <span class="math-container">$ (a_n) = \frac{n^3 -1}{2n^3-n} $</span> converges?</p> <p>I've calculated the limit and got a result of <span class="math-container">$1/2$</span>.</p> <p>Now I need to prove that this is indeed the limit. So I tried to use the definition and find an <span class="math-container">$M$</span> such that <span class="math-container">$n &gt; M \implies |a_n - L| &lt; \epsilon$</span> for every <span class="math-container">$ \epsilon &gt; 0 $</span>, but I couldn't reach a result.</p> <p>Is there any strategy to prove that this sequence converges?</p>
player3236
435,724
<p>This is a partial solution concerning <span class="math-container">$m\ne 1$</span>.</p> <p>First we consider the cases <span class="math-container">$m\ge 2$</span>.</p> <p>For case 1 (<span class="math-container">$2^m \ge 3^n$</span>), write <span class="math-container">$2^m = 3^n + k^2$</span>.</p> <p>Taking modulo <span class="math-container">$3$</span>, we have <span class="math-container">$(-1)^m \equiv k^2 \pmod 3$</span>. This gives <span class="math-container">$m$</span> is even.</p> <p>Write <span class="math-container">$m = 2s$</span>, then <span class="math-container">$3^n = 2^m-k^2 = (2^s-k)(2^s+k)$</span>, and both <span class="math-container">$2^s \pm k$</span> are powers of <span class="math-container">$3$</span>.</p> <p>Summing them up, however, gives <span class="math-container">$2^{s+1}$</span> which is not a multiple of <span class="math-container">$3$</span>. Hence <span class="math-container">$2^s-k = 1$</span>.</p> <p><span class="math-container">$2^s+k = 2^{s+1}-1$</span> is a power of <span class="math-container">$3$</span>. [This gives <span class="math-container">$s=1, m=2, n=1$</span>.]</p> <hr /> <p>For case 2 (<span class="math-container">$2^m \le 3^n$</span>), write <span class="math-container">$3^n = 2^m + k^2$</span>.</p> <p>Taking modulo <span class="math-container">$4$</span>, we have <span class="math-container">$(-1)^n\equiv k^2 \pmod 4$</span>. This gives <span class="math-container">$n$</span> is even.</p> <p>Write <span class="math-container">$n=2t$</span>, then <span class="math-container">$2^m = 3^n-k^2 = (3^t-k)(3^t+k)$</span>, and both <span class="math-container">$3^t \pm k$</span> are powers of <span class="math-container">$2$</span>.</p> <p>If we write <span class="math-container">$3^t-k = 2^x, 3^t+k =2^y$</span>, we have <span class="math-container">$2^x+2^y = 2^x(1+2^{y-x})$</span>.</p> <p>Summing them up, however, gives <span class="math-container">$2 \times 3^t$</span>. 
Since <span class="math-container">$3^t$</span> is odd, we must have <span class="math-container">$2^x=2$</span> and <span class="math-container">$1+2^{y-1} = 3^t$</span>.</p> <p>[This equation has <span class="math-container">$2$</span> solutions: <span class="math-container">$y=2, t=1$</span> and <span class="math-container">$y=4, t=2$</span>.]</p> <hr /> <p>Both cases above end with solving an equation of the form <span class="math-container">$1+a^b= c^d$</span>, which by Catalan's Conjecture/Mihăilescu's theorem only has the solution <span class="math-container">$a=2,b=3,c=3,d=2$</span>, for integers <span class="math-container">$a,b,c,d &gt;1$</span>. However there are elementary proofs for this special case <span class="math-container">$a,c \in \{2,3\}$</span>, which I choose to omit (unless requested)</p> <hr /> <p>For the case <span class="math-container">$m=0$</span>, write <span class="math-container">$3^n = 1+k^2$</span>. By modulo <span class="math-container">$3$</span>, no solutions exist for <span class="math-container">$n \ge 1$</span>. This forces <span class="math-container">$n=0$</span>.</p> <p>For the case <span class="math-container">$m=1$</span>, write <span class="math-container">$3^n = 2+k^2$</span>. By modulo <span class="math-container">$3$</span> we have <span class="math-container">$k^2 \equiv 1\pmod 3$</span>, and by modulo <span class="math-container">$4$</span> we have <span class="math-container">$(-1)^n \equiv 2+k^2$</span>, forcing <span class="math-container">$n$</span> to be odd and <span class="math-container">$k^2\equiv 1 \pmod 4$</span>. <span class="math-container">$k$</span> must be an odd number not divisible by <span class="math-container">$3$</span>, but as of right now I am not sure how to proceed.</p>
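<p>A brute-force sanity check of the case analysis above, under the reading that the equation in question is <span class="math-container">$|2^m-3^n|=k^2$</span> (the search bound is an arbitrary choice):</p>

```python
# Search small exponents for |2^m - 3^n| being a perfect square.
from math import isqrt

def is_square(v):
    return v >= 0 and isqrt(v) ** 2 == v

solutions = sorted(
    (m, n) for m in range(40) for n in range(40)
    if is_square(abs(2 ** m - 3 ** n))
)
# the solutions produced by the two cases above all appear:
assert {(2, 1), (3, 2), (5, 4)} <= set(solutions)
```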
504,431
<p>I'm the teaching assistant for a first semester calculus course, and the professor has given the students the following problem:</p> <blockquote> <p>Find the points on the curve $xy=\sin(x+y)$ that have a vertical tangent line.</p> </blockquote> <p>Here is a picture of the curve:</p> <p><img src="https://i.stack.imgur.com/Jx6W3.png" alt="enter image description here"></p> <p>My attempt to solve this problem led me to finding points on the given curve which also satisfy $x=\cos(x+y)$, but I can't figure out how to simultaneously solve these equations (without resorting to numerical methods, which the students are not assumed to know). Is there something I'm missing, or has the professor given the students a problem more difficult than he intended?</p> <p>Edit: I'm still looking for a solution not requiring numerical methods, or proof that no such solution exists.</p>
Calvin Lin
54,563
<p>(not a complete solution, but I wanted to add a graph)<br> I suspect that it's a much more difficult question than intended.</p> <hr> <p>By implicit differentiation, $x \frac{dy}{dx} + y = \cos( x+y) ( 1 + \frac{dy}{dx} ) $.<br> This gives $\frac{dy}{dx} = \frac{-y + \cos(x+y) } { x - \cos (x+y) }$.<br> We want a vertical tangent, which means that the denominator is 0, so $x = \cos (x+y)$.<br> Dividing the original equation $xy=\sin(x+y)$ by this, we get $ y = \tan (x+y)$.<br> There are actually infinitely many solutions, (at least) 1 for each $ n\pi &lt; x+y &lt; (n+1) \pi$.</p> <p>The blue line is bounded between $-1 $ and $1$ and bounces back and forth. The red lines are similar to a tilted $\tan \theta$ graph.</p> <p><img src="https://i.stack.imgur.com/EZ1tR.gif" alt="enter image description here"></p>
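<p>A numerical companion to this (not needed for the argument itself): setting $s=x+y$, the two conditions $x=\cos(x+y)$ and $y=\tan(x+y)$ collapse to the single equation $s=\cos s+\tan s$, which bisection solves on any sign-changing bracket avoiding the poles of $\tan$.</p>

```python
# On a vertical-tangent point, x = cos(x+y) and y = tan(x+y), so s = x + y
# satisfies s = cos(s) + tan(s).  Bisect inside (pi, 3*pi/2), where tan is
# continuous; g changes sign on [4.0, 4.6].
import math

def g(s):
    return math.cos(s) + math.tan(s) - s

lo, hi = 4.0, 4.6           # g(lo) < 0 < g(hi)
for _ in range(60):
    mid = (lo + hi) / 2
    if g(mid) < 0:
        lo = mid
    else:
        hi = mid
s = (lo + hi) / 2
x, y = math.cos(s), math.tan(s)
assert abs(x * y - math.sin(x + y)) < 1e-9   # the point lies on the curve
assert abs(x - math.cos(x + y)) < 1e-9       # and the tangent there is vertical
```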
2,894,376
<blockquote> <p>$2$ different History books, $3$ different Geography books and $2$ different Science books are placed on a book shelf. How many different ways can they be arranged? How many ways can they be arranged if books of the same subject must be placed together?</p> </blockquote> <p>For the first part of the question I think the answer is </p> <p>$$(2+3+2)! = 5040 \text{ different ways}$$</p> <p>For the second part of the question I think that I will need to multiply the different factorials of each subject. There are $2!$ arrangements for science, $3!$ for geography and $2!$ for history. Am I correct in saying that the number of different ways to place the books on the shelf together by subject would be $$2! \times 3! \times 2! = 24 \text{ different ways}$$ </p>
David G. Stork
210,401
<p>If the books of the same subject must be placed together, there are in essence three "packs," and these can be ordered in just $3! = 6$ ways, where I assume that the order <em>within</em> a pack is irrelevant. If that order is not irrelevant, you then have $3!=6$ ways to arrange the packs, and then within the respective packs you have $2!=2$, $3!=6$, and $2!=2$ ways to order the books. Thus the total is $3! 2! 3! 2! = 144$ ways.</p> <p>If the seven books are distinct, one can indeed order them in $7!$ ways.</p>
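<p>Both counts can be verified by brute force (the labels below are hypothetical stand-ins for the seven distinct books):</p>

```python
# Label the seven distinct books by subject prefix and count permutations
# directly: 7! arrangements overall, and 3! 2! 3! 2! = 144 with each subject
# forming one contiguous block.
from itertools import groupby, permutations

books = ["H1", "H2", "G1", "G2", "G3", "S1", "S2"]

def subjects_together(arrangement):
    # True when each subject appears as one contiguous run
    runs = [key for key, _ in groupby(arrangement, key=lambda b: b[0])]
    return len(runs) == 3

all_arrangements = list(permutations(books))
assert len(all_arrangements) == 5040                          # 7!
assert sum(map(subjects_together, all_arrangements)) == 144   # 3! 2! 3! 2!
```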
275,820
<p>I have a question concerning the definition of the square root of bounded linear operators. To introduce some notation: tr denotes the trace of linear operators and $\mathcal{L}(H)$ denotes the set of bounded linear operators from $H$ to $H$, where $H$ is a Hilbert space. $L'$ stands for the adjoint operator of $L$. We introduced the space of trace class operators in the following way: $\{L \in \mathcal{L}(H) \ : \ tr((LL')^{\frac{1}{2}}) &lt; \infty \}$. The problem is that I don't know how the square root of the above mentioned operator is defined.</p>
tomasz
30,222
<p>Notice that $LL'$ is positive (semidefinite). The square root is well-defined for bounded positive operators (by functional calculus theorems).</p>
149,161
<p>A finite simplicial set is a simplicial set having only a finite number of non-degenerate simplices. It is not hard to show that every finite simplicial set has only a finite number of simplices in each degree. My question is: does the converse hold? That is, is every simplicial set with a finite number of simplices in each degree necessarily finite?</p>
Benjamin Steinberg
15,934
<p>Let $G$ be a nontrivial finite group viewed as a one-object category. Then the nerve $BG$ is a simplicial set with finitely many simplices in each dimension, but it is not finite.</p>
4,011,581
<p>I know that a function is continuous at a point if the limits from the left and from the right exist and are equal (to the value of the function there), and that for a function to be continuous, it should be continuous at all points. My question is: if I want to check continuity of a function, I cannot practically check continuity at each and every point of <span class="math-container">$\mathbb{R}$</span>. So, how do I do it?</p>
Tito Eliatron
84,972
<p>Well, that is not the rigorous definition of continuity, but it works for most UNDERGRADUATE functions.</p> <p>Some tricks to check continuity:</p> <ul> <li>You may know that elementary functions (<span class="math-container">$\sin$</span>, <span class="math-container">$\cos$</span>, <span class="math-container">$\exp$</span>, polynomials, ...) are continuous everywhere.</li> <li><span class="math-container">$\log(\cdot)$</span> is continuous on its domain (wherever what is inside is strictly positive).</li> <li><span class="math-container">$\sqrt{\cdot}$</span> is continuous on its domain (wherever what is inside is nonnegative).</li> <li>Moreover, any linear combination of continuous functions is also continuous (say, sums and differences, and multiplication by a number). The product of continuous functions is also continuous.</li> <li>For quotients of continuous functions, everything works okay EXCEPT for those points that cancel the denominator.</li> </ul> <p>These are just a few tricks; they won’t prove continuity in every case, but for undergraduate students they may be enough.</p>
4,011,581
<p>I know that a function is continuous at a point if the limits from the left and from the right exist and are equal (to the value of the function there), and that for a function to be continuous, it should be continuous at all points. My question is: if I want to check continuity of a function, I cannot practically check continuity at each and every point of <span class="math-container">$\mathbb{R}$</span>. So, how do I do it?</p>
Jonas Linssen
598,157
<p>There are several "methods" to check continuity of a function <span class="math-container">$f:\Bbb R \longrightarrow \mathbb{R}$</span>:</p> <ul> <li>show that given an arbitrary point <span class="math-container">$x$</span> and any sequence <span class="math-container">$x_n \rightarrow x$</span> converging to <span class="math-container">$x$</span> you have that <span class="math-container">$f(x_n) \rightarrow f(x)$</span>. This is feasible if your function itself is given by a formula closely related to limits, like <span class="math-container">$\exp, \sin, \cos, x\mapsto x^2$</span> etc. Here you may use facts like (under certain hypotheses) limits commute or are compatible with multiplication, addition etc.</li> <li>show that it is given by a composition of continuous functions. For example <span class="math-container">$x \mapsto e^{x^2}$</span> is continuous due to <span class="math-container">$x \mapsto e^x$</span> and <span class="math-container">$x\mapsto x^2$</span> being continuous.</li> <li>show that it is induced by some sort of <em>universal mapping property</em>, by which I mean some sort of general theorem telling you that, for example, having continuous functions <span class="math-container">$f:\Bbb R \rightarrow \Bbb R, g: \Bbb R \rightarrow \Bbb R$</span> induces a unique continuous function <span class="math-container">$(f,g):\Bbb R \rightarrow \Bbb R^2, x \mapsto (f(x),g(x))$</span>.</li> </ul>
198,132
<p>Putting the equation $x^2 - x \sin(x) - \cos (x)$ into Wolfram Alpha, I am surprised that it has a nice <a href="http://www.wolframalpha.com/input/?i=roots%20of%20x%5E2%20-%20x%20sinx%20-%20cos%20x" rel="nofollow">parabolic shape</a>. Also, it has two complex roots.</p> <p><strong>Question</strong></p> <p>Is it possible to tell, in a simple way, that it has no real roots?</p>
N. S.
9,176
<p>$f(0)=-1$ and $f(2)=4-2\sin(2)-\cos(2)\geq 4-2-1=1 \,.$</p> <p>Then, by IVT it has a root between $0$ and $2$. Similarly, it has a second root between $-2$ and $0$. </p>
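<p>To make the IVT argument concrete, bisection on $[0,2]$ locates the root numerically (a sketch, not required for the argument); since $f(-x)=f(x)$, the second root is just the negative of the first.</p>

```python
# Bisection inside [0, 2], where the sign change guarantees a root by the IVT;
# f is even, so -root is the root between -2 and 0.
import math

def f(x):
    return x * x - x * math.sin(x) - math.cos(x)

assert f(0) < 0 < f(2)
lo, hi = 0.0, 2.0
for _ in range(60):
    mid = (lo + hi) / 2
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid
root = (lo + hi) / 2
assert abs(f(root)) < 1e-12
assert abs(f(-root)) < 1e-9    # f(-x) = f(x)
```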
1,276,264
<p>So, I was wondering if it is possible to solve for $n$ in $2^n=8$ (or any other question where $n$ is a power) using $9^{th}$ grade math. Please excuse my naïveté if this is extremely stupid/simple. </p> <p>Thanks so much in advance! –– come to think of it: Is it possible at all?</p>
Mathmo123
154,802
<p>The best way to solve problems like this is using <em>logarithms</em>. If you have an equation $$10 ^x = y$$ where $y$ is any positive number, and you wish to find $x$, then the value of $x$ will be (by definition) $\log y$ (this is what the $\log$ button on your calculator is for). </p> <p>For example, to solve $$10^x = 110$$ we calculate $$x=\log 110= 2.0413...$$ There is a simple way to solve any equation like this. If $a$ and $y$ are positive, then the solution of $$a^x = y$$ is $$x= \frac{\log y}{\log a}$$ So in your example, the answer would be $\frac{\log 8}{\log 2}=3$. </p> <hr> <p>The theory of logarithms isn't too complicated, and logarithms were actually used as a way of multiplying large numbers before the invention of calculators (high school students would be given books of logarithms to use!). <a href="https://www.khanacademy.org/math/algebra2/logarithms-tutorial" rel="nofollow">Here</a> is a good place to start learning about them. </p> <p>However, at your level, the best way will be to use trial and error. For example, to solve $10^x=110$, we know that $10^2=100$ and $10^3=1000$, so $ x$ will be between $2$ and $3$. Similarly, we can show that $2&lt;x&lt;2.1$ and so on.</p>
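<p>These computations can be checked with Python's <code>math</code> module:</p>

```python
# A quick check of the formula x = log(y) / log(a) for solving a**x = y.
import math

assert abs(math.log(8) / math.log(2) - 3) < 1e-12   # solves 2**x = 8
x = math.log(110) / math.log(10)                    # solves 10**x = 110
assert abs(x - 2.0413) < 1e-3
assert 2 < x < 2.1   # agrees with the trial-and-error bracketing
```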
529,053
<p>I have already proved that if $\{X_k\}$ converges to a limit $L$, then any subsequence of it also converges to $L$. And now the question asks me to show that if $\{X_k\}$ has two subsequences which converge to two different limits, then $\{X_k\}$ cannot be convergent.</p>
Hagen von Eitzen
39,174
<p><em>Hint:</em> Assume the sequence converges to some limit $L$. What can then be said about the supposedly different limits of the subsequences?</p>
529,053
<p>I have already proved that if $\{X_k\}$ converges to a limit $L$, then any subsequence of it also converges to $L$. And now the question asks me to show that if $\{X_k\}$ has two subsequences which converge to two different limits, then $\{X_k\}$ cannot be convergent.</p>
tylerc0816
53,243
<p>A sequence $\{ x_n \}$ converges to $L$ if and only if <em>every</em> subsequence of $\{ x_n \}$ converges to $L$. Therefore, if there exists two subsequences $\{ x_{n_k} \}$ and $\{ x_{n_l} \}$ converging to two different limits $L'$ and $L''$, then $\{ x_n \}$ cannot be convergent.</p>
135,012
<p>How can one prove (or disprove) that all the roots of the polynomial of degree $n$ $$\sum_{k=0}^{k=n}(2k+1)x^k$$ belong to the disk $\{z:|z|&lt;1\}?$ Numerical calculations confirm this, but I don't see any approach to a proof of such a simply formulated statement. It would be useful in connection with an irreducibility problem. </p>
Chris Godsil
1,266
<p>The standard approach to this type of question is to use the Schur-Cohn procedure.</p>
83,607
<p>While solving the heat equation in one spatial variable $u_t = u_{xx} $ (x goes from 0 to L) with the initial temperature distribution $T_0 \frac{x(L-x)}{L^2}$, and with Neumann boundary conditions $u_x(0,t) = u_x(L,t) = 0$, I got some really weird behaviour from NDSolve.</p> <p>My code looks like this:</p> <pre><code>h[x_] := x*(30 - x)/900; pde = D[u[t, x], t] == D[u[t, x], x, x] begin = 0; end = 30; bc = {u[0, x] == 100*h[x], (Derivative[0, 1][u])[t, begin] == 0, (Derivative[0, 1][u])[t, end] == 0}; finaltime = 100 s = NDSolve[{pde}~Join~bc, u, {t, 0, finaltime}, {x, begin, end}]; </code></pre> <p>Since heat cannot flow out through the ends, continuing this in time should yield a smoothening until it reaches the average everywhere. Instead, I get a very weird time evolution which when plotted seems to be of the form $u_x(x,t) = u_x(x,0) - kt$. This is particularly infuriating because the problem seems to be intermittent. Taking the square of h does not cause any trouble.</p> <p>The problem seems to magically fix itself if I instead feed in a truncated cosine series into the code:</p> <pre><code>rule = t -&gt; FourierCosSeries[t*(2*Pi - t), t, 35]; f[x_] := (t /. rule) /. (t -&gt; x) g[x_] := f[2*Pi*x/30]/(4*Pi*Pi) </code></pre> <p>and insert g instead of h into the code above. Trying a function interpolation gave only errors.</p> <p>Is there a fix which is more general than this quick hack? </p>
Michael E2
4,999
<p>You can use the finite element method with the method of lines as <a href="https://mathematica.stackexchange.com/users/120/toadatrix">@toadatrix</a> suggested, but for the FEM method to work, you need to do a little more. The Neumann boundary conditions need to be specified using <a href="http://reference.wolfram.com/language/ref/NeumannValue.html" rel="nofollow noreferrer"><code>NeumannValue</code></a>.</p> <pre><code>h[x_] := x*(30 - x)/900; op = D[u[t, x], t] - D[u[t, x], x, x]; begin = 0; end = 30; bc = {u[0, x] == 100*h[x]}; neumann = NeumannValue[0, x == begin] + NeumannValue[0, x == end]; finaltime = 100; s = NDSolve[{op == neumann, bc}, u, {t, 0, finaltime}, {x, begin, end}, Method -&gt; {"PDEDiscretization" -&gt; {"MethodOfLines", {"SpatialDiscretization" -&gt; "FiniteElement"}}} ]; </code></pre> <p>The default boundary condition is <code>NeumannValue[0,...]</code>, so you could just simply use <code>op == 0</code>.</p> <p>Check the result:</p> <pre><code>Show[ Plot3D[u[t, x] /. s // Evaluate, {t, 0, finaltime}, {x, begin, end}, MeshFunctions -&gt; {#2 &amp;}, PlotRange -&gt; All, AxesLabel -&gt; Automatic], Graphics3D[{Red, Opacity[0.5], InfinitePlane[{0, 0, Integrate[100*h[x], {x, 0, 30}]/30}, {{1, 0, 0}, {0, 1, 0}}]}] ] Show[%, ViewPoint -&gt; {Infinity, 0, 0}] </code></pre> <blockquote> <p><img src="https://i.stack.imgur.com/f7vnR.png" alt="Mathematica graphics"></p> <p><img src="https://i.stack.imgur.com/KArOQ.png" alt="Mathematica graphics"><br> <em>The graphs shows that the temperature evens out to the average over time.</em></p> </blockquote>
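<p>The same conclusion can be cross-checked outside Mathematica with a plain explicit finite-difference scheme (a rough sketch with an arbitrary grid, not a substitute for the FEM solution): mirrored end points model the Neumann conditions, the trapezoid-weighted total heat is conserved, and the profile flattens to the average $50/3$ of the initial condition $100\,x(30-x)/900$.</p>

```python
# Explicit finite differences with mirrored ghost points for the Neumann ends.
nx, dx = 61, 0.5                        # grid on [0, 30]
dt = 0.2 * dx * dx                      # stable: dt <= dx^2 / 2
u = [100 * (i * dx) * (30 - i * dx) / 900 for i in range(nx)]

def trap_avg(w):
    # trapezoid-weighted average; this quantity is conserved by the scheme
    return (w[0] / 2 + sum(w[1:-1]) + w[-1] / 2) / (nx - 1)

avg0 = trap_avg(u)
for _ in range(6000):                   # integrate up to t = 300
    v = u[:]
    for i in range(nx):
        left = u[i - 1] if i > 0 else u[1]             # mirror at x = 0
        right = u[i + 1] if i < nx - 1 else u[nx - 2]  # mirror at x = 30
        v[i] = u[i] + dt / dx ** 2 * (left - 2 * u[i] + right)
    u = v

flat_avg = trap_avg(u)
assert max(u) - min(u) < 1e-3           # the profile has evened out...
assert abs(flat_avg - avg0) < 1e-6      # ...while the total heat is conserved
assert abs(flat_avg - 50 / 3) < 0.05    # and matches the continuum average
```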
878,517
<p>Is there any more solutions to this functional equation $f(f(x))=x$?</p> <p>I have found: $f(x)=C-x$ and $f(x)=\frac{C}{x}$.</p>
Micah
30,836
<p>If you don't make any niceness assumptions about $f$, there are lots. Partition $\Bbb{R}$ (or whatever you want $f$'s domain to be) into $1$- and $2$-element subsets, in any way you like. Then define $f(x)=y$, if $\{x, y\}$ is in your partition, or $f(x)=x$, if $\{x\}$ is in your partition.</p> <p>Moreover, any such $f$ yields such a partition, into the sets $\{x, f(x)\}$. So this is a complete description of all solutions.</p> <p>Of course, $f$ will be wildly discontinuous for most choices of partition.</p>
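<p>The construction is easy to make concrete on a finite stand-in for $\Bbb{R}$ (the particular partition below is an arbitrary choice):</p>

```python
# Any partition into 1- and 2-element blocks yields an involution:
# singletons become fixed points, pairs get swapped.
partition = [{1, 4}, {2}, {3, 5}, {6}]

f = {}
for block in partition:
    if len(block) == 1:
        (x,) = block
        f[x] = x              # singleton block: fixed point
    else:
        x, y = block
        f[x], f[y] = y, x     # pair block: swap

assert all(f[f[x]] == x for x in f)      # f(f(x)) = x everywhere
assert sorted(f) == [1, 2, 3, 4, 5, 6]   # f is defined on the whole set
```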
878,517
<p>Is there any more solutions to this functional equation $f(f(x))=x$?</p> <p>I have found: $f(x)=C-x$ and $f(x)=\frac{C}{x}$.</p>
Michael Albanese
39,599
<p>This answer will only deal with continuous maps.</p> <hr> <p>A map $f : X \to X$ such that $f \circ f = \operatorname{id}_X$ is called an <em>involution</em> of $X$. Two involutions of $X$, $f$ and $g$, are said to be <em>equivalent</em> if there is a self-homeomorphism $h$ such that $f\circ h = h\circ g$ (or written differently, $f = h\circ g \circ h^{-1}$). With this terminology at our disposal, we can state the following result.</p> <blockquote> <p>Every non-identity involution of $\mathbb{R}$ is equivalent to the involution $g(x) = -x$.</p> </blockquote> <p>Therefore, $f$ is an involution of $\mathbb{R}$ if and only if $f = h\circ g\circ h^{-1}$ for some homeomorphism $h : \mathbb{R} \to \mathbb{R}$, or $f = \operatorname{id}_X$.</p> <p>For example, if we let $f(x) = C - x$, then $f = h\circ g\circ h^{-1}$ where $h : \mathbb{R} \to \mathbb{R}$ is the homeomorphism given by $h(x) = x + \frac{C}{2}$.</p>
123,202
<blockquote> <p>Let $X,Y$ be vectors in $\mathbb{C}^n$, and assume that $X\ne0$. Prove that there is a symmetric matrix $B$ such that $BX=Y$.</p> </blockquote> <p>This is an exercise from a chapter about bilinear forms. So the intended solution should be somehow related to it.</p> <p>Pre-multiplying both sides by $Y^t$, we get $Y^tBX=Y^tY$. The left hand side is a bilinear form $\langle Y,X\rangle $ with $B$ as the matrix of the form with respect to the standard basis. Am I correct here?</p> <p>If so, then it suffices to find a bilinear form $\langle\cdot,\cdot\rangle\colon\mathbb{C}^n\times\mathbb{C}^n\rightarrow\mathbb{C}$ such that $\langle Y,X\rangle=Y^tY$. If $Y=0$, any bilinear form will do, because $\langle0,X\rangle=0\langle 0,X\rangle =0$ by linearity in the first variable. If $Y\ne0$, it suffices to find a bilinear form such that $\langle Y,X\rangle$ is nonzero, then we can multiply by the appropriate factor. This should be very near to a complete solution, but I can't figure out the rest.</p> <p><strong><em>Edit</em></strong>: Okay, my approach seems to be completely wrong. Using Phira's hint, I think I managed to make a complete proof.</p> <p>Choose an orthonormal basis $(v_1,\ldots,v_n)$ such that $v_1=\frac{X}{\|X\|}$, which can be done by the Gram-Schmidt process. Let $P$ be the $n\times n$ matrix whose $i$-th column is the vector $v_i$. Then $P$ is orthogonal. Let $P^{-1}Y=(a_1,\ldots,a_n)^t$. Choose $M$ such that its first column and first row are the vector $\frac1{\|X\|}(a_1,\ldots,a_n)$, and $0$ everywhere else. Clearly $M$ is symmetric and it's easy to check that $(PMP^{-1})X=Y$. So the desired matrix is $B=PMP^{-1}$, which is symmetric because $P$ is orthogonal. $\Box$</p> <p>However, this solution does not make use of bilinear forms. So there might be a simpler way.</p>
bfhaha
128,942
<p>My solution don't use bilinear form.</p> <p>Suppose that $$X=\vec{A}+\vec{B}i= \left( \begin{array}{c} a_1+b_1 i \\ a_2+b_2 i \\ \vdots \\ a_n+b_n i \\ \end{array} \right), Y=\vec{C}+\vec{D}i= \left( \begin{array}{c} c_1+d_1 i \\ c_2+d_2 i \\ \vdots \\ c_n+d_n i \\ \end{array} \right).$$ We have the following observation. If such matrix $B$ exists, suppose that $$B=S+Ti= \left( \begin{array}{cccc} s_{11}+t_{11}i &amp; s_{12}+t_{12}i &amp; \cdots &amp; s_{1n}+t_{1n}i \\ s_{21}+t_{21}i &amp; s_{22}+t_{22}i &amp; \cdots &amp; s_{2n}+t_{2n}i \\ \vdots &amp; \vdots &amp; \ddots &amp; \vdots \\ s_{n1}+t_{n1}i &amp; s_{n2}+t_{n2}i &amp; \cdots &amp; s_{nn}+t_{nn}i \\ \end{array} \right).$$ Since $BX=Y$, we have $$\sum_{j=1}^{n}(a_j+b_j i)(s_{kj}+t_{kj}i)=(c_k+d_k i).$$ Compare the real and imaginary parts, we get $$\sum_{j=1}^{n}(a_j s_{kj}-b_j t_{kj})=c_k,$$ $$\sum_{j=1}^{n}(b_j s_{kj}+a_j t_{kj})=d_k.$$ Then $$\left( \begin{array}{cccc|cccc} \vec{A}^t &amp; 0 &amp; 0 &amp; 0 &amp; -\vec{B}^t &amp; 0 &amp; 0 &amp; 0 \\ 0 &amp; \vec{A}^t &amp; 0 &amp; 0 &amp; 0 &amp; -\vec{B}^t &amp; 0 &amp; 0 \\ 0 &amp; 0 &amp; \ddots &amp; 0 &amp; 0 &amp; 0 &amp; \ddots &amp; 0 \\ 0 &amp; 0 &amp; 0 &amp; \vec{A}^t &amp; 0 &amp; 0 &amp; 0 &amp; -\vec{B}^t \\ \hline \vec{B}^t &amp; 0 &amp; 0 &amp; 0 &amp; \vec{A}^t &amp; 0 &amp; 0 &amp; 0 \\ 0 &amp; \vec{B}^t &amp; 0 &amp; 0 &amp; 0 &amp; \vec{A}^t &amp; 0 &amp; 0 \\ 0 &amp; 0 &amp; \ddots &amp; 0 &amp; 0 &amp; 0 &amp; \ddots &amp; 0 \\ 0 &amp; 0 &amp; 0 &amp; \vec{B}^t &amp; 0 &amp; 0 &amp; 0 &amp; \vec{A}^t \\ \end{array} \right) \left( \begin{array}{c} S^t e_1 \\ S^t e_2 \\ \vdots \\ S^t e_n \\ T^t e_1 \\ T^t e_2 \\ \vdots \\ T^t e_n \\ \end{array} \right)= \left( \begin{array}{c} \vec{C} \\ \vec{D} \\ \end{array} \right), $$ where $e_i$ is the $n\times 1$ column vector whose $i$ position is 1 and other 0. 
Write this equation by $MU=K$.</p> <p>For example, when $n=2$, $$M=\left( \begin{array}{cccc|cccc} a_1 &amp; a_2 &amp; 0 &amp; 0 &amp; -b_1 &amp; -b_2 &amp; 0 &amp; 0 \\ 0 &amp; 0 &amp; a_1 &amp; a_2 &amp; 0 &amp; 0 &amp; -b_1 &amp; -b_2 \\ \hline b_1 &amp; b_2 &amp; 0 &amp; 0 &amp; a_1 &amp; a_2 &amp; 0 &amp; 0 \\ 0 &amp; 0 &amp; b_1 &amp; b_2 &amp; 0 &amp; 0 &amp; a_1 &amp; a_2 \\ \end{array} \right).$$ When $n=3$, $$M=\left( \begin{array}{ccccccccc|ccccccccc} a_1 &amp; a_2 &amp; a_3 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; -b_1 &amp; -b_2 &amp; -b_3 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 \\ 0 &amp; 0 &amp; 0 &amp; a_1 &amp; a_2 &amp; a_3 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; -b_1 &amp; -b_2 &amp; -b_3 &amp; 0 &amp; 0 &amp; 0 \\ 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; a_1 &amp; a_2 &amp; a_3 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; -b_1 &amp; -b_2 &amp; -b_3 \\ \hline b_1 &amp; b_2 &amp; b_3 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; a_1 &amp; a_2 &amp; a_3&amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 \\ 0 &amp; 0 &amp; 0 &amp; b_1 &amp; b_2 &amp; b_3 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; a_1 &amp; a_2 &amp; a_3 &amp; 0 &amp; 0 &amp; 0 \\ 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; b_1 &amp; b_2 &amp; b_3 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; a_1 &amp; a_2 &amp; a_3 \\ \end{array} \right).$$</p> <p>We claim that the set of row vectors of $M$ are linearly independent. It is sufficient to show that the $l$-th row vector $v_l$ of $M$ and $(n+l)$-th row vector $v_{n+l}$ of $M$ are linearly independent for $l=1,2,...,n$. 
If $\lambda v_l+\gamma v_{n+l}=\vec{0}$, where $\lambda, \gamma\in \Bbb{R}$, then $$ \begin{array}{c} \lambda a_1+\gamma b_1=0 \\ \lambda a_2+\gamma b_2=0 \\ \vdots \\ \lambda a_n+\gamma b_n=0 \\ \end{array} \mbox{ and } \begin{array}{c} -\lambda b_1+\gamma a_1=0 \\ -\lambda b_2+\gamma a_2=0 \\ \vdots \\ -\lambda b_n+\gamma a_n=0 \\ \end{array}.$$ Since $X\neq 0$, there exists $i_0\in \{1,2,...,n\}$ such that $a_{i_0}\neq 0$ or $b_{i_0}\neq 0$. Then $a_{i_0}^2+b_{i_0}^2\neq 0$. Solve the equations $$\lambda a_{i_0}+\gamma b_{i_0}=0,$$ $$-\lambda b_{i_0}+\gamma a_{i_0}=0.$$ We get $\lambda(a_{i_0}^2+b_{i_0}^2)=0$ and $\lambda=0$. Similarly, we have $\gamma=0$. Hence $\lambda=\gamma=0$ and $v_{l}$ and $v_{n+l}$ are linearly independent.</p> <p>Conversely, given $X\neq 0$ and $Y$, if we view the equation $MU=K$ as a linear system with $n^2$ unknowns in $U$, then since $rank(M)=2n=rank(M|K)$, by Theorem (p.174, theorem 3.11, Linear Algebra, 4 edition, Friedberg, Insel, Spence), $U$ has a solution.</p> <p>Suppose that $$U=\left( \begin{array}{c} s_{11} \\ \vdots \\ s_{ij} \\ \vdots \\ s_{nn} \\ t_{11} \\ \vdots \\ t_{ij} \\ \vdots \\ t_{nn} \\ \end{array} \right).$$ For the symmetry of $B$, we need to require $s_{ij}=s_{ji}$ and $t_{ij}=s_{ji}$ in $U$. Hence we modify the vector $U$ into $U'$ such that there are $\frac{n(n+1)}{2}$ unknowns in $U'$. This modification reduces the column of $M$ into $M'$, but does not effect $rank(M)$ and $rank(M|K)$. That is, $rank(M')=rank(M)=rank(M|K)=rank(M'|K)$. Therefore, there is a solution for $U'$. Then we can construct the symmetry matrix $B$ from $U'$.</p>
2,036,943
<p>Is every Riemann integrable function a piecewise smooth function?</p> <p>Is this needed for the fundamental theorem of calculus?</p>
Marko Karbevski
45,470
<p>No, and here are a few examples:</p> <ul> <li><p>The indicator function of the Cantor set</p></li> <li><p>Any nowhere differentiable continuous function. As such functions are bounded and continuous, they are integrable on any compact interval</p></li> <li><p>As @user284331 and @Henry W. pointed out, you can look up Thomae's function (also known as Stars over Babylon)</p></li> </ul>
1,279,564
<p>I try to be rational and keep my questions as impersonal as I can in order to comply with the community guidelines. But this one is making me <strong>mad</strong>. Here it goes. Consider the uniform distribution on $[0, \theta]$. The likelihood function, using a random sample of size $n$, is $\frac{1}{\theta^{n}}$.<br> Now $1/\theta^n$ is decreasing in $\theta$ over the range of positive values. Hence it will be maximized by choosing $\theta$ as small as possible while still satisfying $0 \leq x_i \leq \theta$. The textbook says 'That is, we choose $\theta$ equal to $X_{(n)}$, or $Y_n$, the largest order statistic'. But if we want to <strong>minimize</strong> $\theta$ to maximize the likelihood, why do we choose the biggest $x$? Suppose we had real numbers for $x$, like $X_{1} = 2, X_{2} = 4, X_{3} = 8$. If we choose 8, that yields $\frac{1}{8^{3}}=0.001953125$. If we choose 2, that yields $\frac{1}{2^{3}}=0.125$. So why do we want the maximum in this case, $X_{n}$, and not $X_{1}$, since we've just seen with real numbers that the smaller the $x$ the bigger the likelihood? Thanks!</p>
Seyhmus Güngören
29,940
<p>What you are doing is wrong. You must find the likelihood function. What you found is $1/\theta^n$, but where is it defined? It is true that $X_{(n)}$ is the maximum likelihood estimator, because it maximizes the true likelihood function. How do you find it?</p> <p><strong>Added</strong>: Your answer is actually in the right direction, but as I mentioned it is missing a crucial point which alters everything. The right way of writing down the likelihood function is as follows:</p> <p>\begin{align}L(x_1,...,x_N;\theta)=\prod_{n=1}^N\theta^{-1}\mathbf{1}_{0\leq x_n\leq \theta}(x_n)\\=\theta^{-N}\prod_{n=1}^N\mathbf{1}_{0\leq x_n\leq \theta}(x_n)\end{align}</p> <p>Until now $L$ is written as a function of the sample; now let us read it as a function of $\theta$:</p> <p>\begin{align}L(\theta;x_1,...,x_N)=\theta^{-N}\prod_{n=1}^N\mathbf{1}_{\theta \geq x_n}(x_n)\end{align}</p> <p>Write $x_{(N)}=\max_n x_n$. Observe that $L(\theta;x_1,...,x_N)$ is <strong>zero</strong> if $\theta&lt;x_{(N)}$ (such a $\theta$ is incompatible with the largest observation), and it is a decreasing positive function of $\theta$ if $\theta\geq x_{(N)}$. Hence for any choice $\theta&gt;x_{(N)}$ we have $L(x_{(N)};x_1,...,x_N)&gt;L(\theta;x_1,...,x_N)$, which means the maximum is reached at $\hat\theta=x_{(N)}$.</p>
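<p>A small simulation (my own sketch, with a made-up true parameter) illustrating the shape of this likelihood: it vanishes below the sample maximum and strictly decreases above it, so it peaks exactly at the largest observation.</p>

```python
import random

def likelihood(theta, xs):
    # L(theta; x) = theta^(-N) if theta >= max(xs), else 0, for a Uniform[0, theta] sample
    return theta ** (-len(xs)) if theta >= max(xs) else 0.0

random.seed(0)
xs = [random.uniform(0, 7.0) for _ in range(20)]   # true theta = 7 (chosen for the demo)
m = max(xs)

print(likelihood(0.9 * m, xs))                      # 0.0: theta below the sample max is impossible
print(likelihood(m, xs) > likelihood(m + 0.1, xs))  # True: L decreases for theta > max(xs)
```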
1,085,702
<p>It's said that a computer program &quot;prints&quot; a set <span class="math-container">$A$</span> (<span class="math-container">$A \subseteq \mathbb N$</span>, the positive integers) if it prints every element of <span class="math-container">$A$</span> in ascending order (even if <span class="math-container">$A$</span> is infinite). For example, a program can &quot;print&quot;:</p> <ol> <li>All the prime numbers.</li> <li>All the even numbers from <span class="math-container">$5$</span> to <span class="math-container">$100$</span>.</li> <li>Numbers including &quot;<span class="math-container">$7$</span>&quot; in them.</li> </ol> <p>Prove there is a set that no computer program can print.</p> <p>I guess it has something to do with an algorithm meant to manipulate or confuse the program, or to create a paradox, but I can't find an example to prove this. Any help?</p> <p>Guys, this was given to me by my Set Theory professor, meaning this question does not regard computers but rather an algorithm that cannot exhaust <span class="math-container">$\mathcal{P}(\mathbb{N})$</span>. Everything you say about computers or the number of programs does not really help me with this... The proof has to contain Set Theory claims, and I probably have to find a set with terms that will make it impossible for the program to print. I am not saying your proofs involving computing are not good; on the contrary, I suppose they are wonderful. But I don't really understand them, nor do I need to use them, for it's about sets.</p>
Xoff
36,246
<p>An explicit solution in your system could be:</p> <ol> <li>Consider an enumeration of your programs ($P_0$ is the first program, $P_1$ the second, and so on)</li> <li>Denote by $P_{\! n}(m)$ the m$^\mbox{th}$ number printed by program $P_{\!n}$ (if it exists)</li> <li>Consider the integers $i_n$ defined by $$i_n=n+1+\max_{j,k\le n}\{P_j(k)\}$$ (if the subset is empty, just remember that the max of an empty set is $0$). </li> <li>Hence $i_{n+1}&gt;i_n$</li> <li>The set $\{i_n\}_{n\in\mathbb N}$ cannot be printed as you want, because if it could be printed in ascending order by some program, then there would be an $s$ such that $P_s(n)=i_n$, but then $$P_s(s)=i_s\ge s+1+P_s(s)&gt;P_s(s)$$ This is a contradiction! </li> </ol>
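<p>A toy illustration of the construction (entirely my own sketch: the three "programs" below are made-up stand-ins, since a real enumeration of all programs cannot be carried out inside a program). By design, $i_n$ exceeds everything the listed programs print up to step $n$, and the sequence is strictly increasing.</p>

```python
# Mock enumeration of three "programs": programs[j](k) = k-th number printed (0-indexed).
programs = [
    lambda k: 2 * k,       # evens
    lambda k: k * k,       # squares
    lambda k: 3 * k + 1,   # 1, 4, 7, ...
]

def diag(n):
    # i_n = n + 1 + max over j, k <= n of P_j(k), with j capped at the mock list length
    vals = [programs[j](k) for j in range(min(n + 1, len(programs)))
                           for k in range(n + 1)]
    return n + 1 + max(vals, default=0)

seq = [diag(n) for n in range(10)]
print(seq)   # strictly increasing, and diag(n) exceeds every programs[j](k) with j, k <= n
```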
1,531,646
<p>Find the following limit</p> <p>$$ \lim_{x\to0}\left(\frac{1+x2^x}{1+x3^x}\right)^\frac1{x^2} $$</p> <p>I have used the natural logarithm to get</p> <p>$$ \exp\lim_{x\to0}\frac1{x^2}\ln\left(\frac{1+x2^x}{1+x3^x}\right) $$</p> <p>After this, I have tried l'Hôpital's rule but I was unable to get it to a simplified form.</p> <p>How should I proceed from here? Any help is much appreciated!</p>
Michael Medvinsky
269,041
<p>Note that $\lim\limits_{x\to0} xn^x=0$, so both logarithms tend to $0$. The two quotients $\frac{\ln(1+xn^x)}{x^2}$ diverge separately (their numerators are $x+O(x^2)$), so work with the difference, where the divergent parts cancel: $$\lim_{x\to0}\frac{\ln\left(\frac{1+x2^x}{1+x3^x}\right)}{x^2}= \lim_{x\to0}\frac{\ln\left({1+x2^x}\right)-\ln\left({1+x3^x}\right)}{x^2}$$ This is a $0/0$ form, and it is still $0/0$ after one differentiation (both first derivatives in the note below equal $1$ at $x=0$), so L'Hospital's rule applies twice. The second derivative of $\ln(1+xn^x)$ at $x=0$ equals $2\ln n-1$, hence $$\lim_{x\to0}\frac{\ln\left({1+x2^x}\right)-\ln\left({1+x3^x}\right)}{x^2} =\frac{(2 \ln 2-1)-(2 \ln 3-1)}{2} = \ln \frac{2}{3}$$</p> <p>Finally, take the exponential to get $$\lim_{x\to0}\left(\frac{1+x2^x}{1+x3^x}\right)^\frac1{x^2}=\frac{2}{3}$$</p> <p><strong>Note</strong> $$\frac{d}{dx} n^x = n^x \ln n$$ $$\frac{d}{dx}\ln \left({1+xn^x}\right)=\frac{n^x+x n^x\ln n }{1+xn^x}$$ $$\frac{d^2}{dx^2}\ln \left({1+xn^x}\right)= \frac{x n^x \ln ^2n+2 n^x \ln n}{x n^x+1}-\frac{\left(n^x+x n^x \ln n\right)^2}{\left(x n^x+1\right)^2}$$ At $x=0$ the first derivative equals $1$ and the second equals $2\ln n-1$.</p>
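<p>A quick numerical cross-check of the final value (my own sketch): evaluate the expression at small $x$, computing via logarithms for stability.</p>

```python
from math import exp, log1p

def f(x):
    # ((1 + x*2**x) / (1 + x*3**x)) ** (1 / x**2), computed through logs
    return exp((log1p(x * 2 ** x) - log1p(x * 3 ** x)) / x ** 2)

for x in (1e-1, 1e-2, 1e-3, 1e-4):
    print(x, f(x))   # tends to 2/3 = 0.666...
```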
3,567,662
<p>I have just learned how to convert a plane in R3 from Cartesian to parametric form: set 2 of the variables to 0 and solve for the 3rd in order to obtain 3 points on the plane, then work from there. However, this does not work when the coefficients of 1 or 2 of the variables are 0, as it is not possible to find 3 points on the plane in the same way (for example, in the picture). How can this be done for the particular question shown, and for other cases where some coefficients are 0? <a href="https://i.stack.imgur.com/C7fEp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/C7fEp.png" alt="enter image description here"></a></p>
Oliver Kayende
704,766
<p>Another way without the corollary but intuitive. By the fundamental theorem of arithmetic <span class="math-container">$(x,y)\mapsto 2^{x'}3^{y'}$</span> defines an injection from <span class="math-container">$X\times Y$</span> into <span class="math-container">$\Bbb N$</span> given bijections <span class="math-container">$x\mapsto x',y\mapsto y'$</span> from <span class="math-container">$X,Y$</span> to <span class="math-container">$\Bbb N$</span> respectively.</p>
1,761,857
<p>I need to show that, for $f:X\to \mathbb{R}$ bounded, we have:</p> <p>$$\sup\{|f(x)-f(y)|, x,y\in X\}= \sup f - \inf f$$</p> <p>Well, I know that </p> <p>$$\sup\{|f(x)-f(y)|, x,y\in X\}\ge |f(x)-f(y)|$$ but how does this help? I really have no idea how to prove this one.</p>
triple_sec
87,778
<p>Fix any $x_0\in X$ and $y_0\in X$. Then, one has that $$f(x_0)-f(y_0)\leq|f(x_0)-f(y_0)|\leq\sup_{x,y\in X}\{|f(x)-f(y)|\},$$ since every real number is less than or equal to its absolute value. Rearranging yields: $$f(x_0)\leq\sup_{x,y\in X}\{|f(x)-f(y)|\}+f(y_0).$$ This is true for any $x_0\in X$, so taking supremum on the left-hand side yields $$\sup_{x\in X}f(x)\leq\sup_{x,y\in X}\{|f(x)-f(y)|\}+f(y_0).$$ Another rearrangement yields: $$\sup_{x\in X}f(x)-\sup_{x,y\in X}\{|f(x)-f(y)|\}\leq f(y_0),$$ and taking infimum on the right-hand side results in $$\sup_{x\in X}f(x)-\sup_{x,y\in X}\{|f(x)-f(y)|\}\leq\inf_{x\in X} f(x),$$ or $$\sup_{x\in X}f(x)-\inf_{x\in X} f(x)\leq\sup_{x,y\in X}\{|f(x)-f(y)|\}.$$</p> <hr> <p>As for the other direction, fix again arbitrary $x_0\in X$ and $y_0\in X$. If $f(x_0)\geq f(y_0)$, then $$|f(x_0)-f(y_0)|=f(x_0)-f(y_0)\leq \sup_{x\in X} f(x)-f(y_0)\leq\sup_{x\in X}f(x)-\inf_{x\in X}f(x).$$ If, on the other hand, $f(x_0)&lt;f(y_0)$, then $$|f(x_0)-f(y_0)|=f(y_0)-f(x_0)\leq \sup_{x\in X} f(x)-f(x_0)\leq\sup_{x\in X}f(x)-\inf_{x\in X}f(x).$$ In either case, $$|f(x_0)-f(y_0)|\leq\sup_{x\in X}f(x)-\inf_{x\in X}f(x).$$ Now taking supremum on the left-hand side over $x_0,y_0\in X$, one has that $$\sup_{x,y\in X}\{|f(x)-f(y)|\}\leq\sup_{x\in X}f(x)-\inf_{x\in X}f(x),$$ completing the proof.</p>
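<p>The identity can also be sanity-checked numerically on a finite grid (my own sketch; $f=\sin$ on $[0,7]$ is an arbitrary choice). On a finite set the two sides even agree exactly, since $\max|a-b|$ is attained at the pair (max, min).</p>

```python
from math import sin

xs = [i / 100 for i in range(701)]              # grid on [0, 7]
vals = [sin(x) for x in xs]

lhs = max(abs(a - b) for a in vals for b in vals)   # sup |f(x) - f(y)| over the grid
rhs = max(vals) - min(vals)                         # sup f - inf f over the grid
print(lhs == rhs, rhs)   # True, and rhs is about 2 on this interval
```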
4,438,512
<p>Recently I learned about dividing and forking of formula / partial types:</p> <p>We say that <span class="math-container">$φ(x,b)$</span> divides over <span class="math-container">$C$</span> (where <span class="math-container">$b$</span> in the monster and <span class="math-container">$C$</span> is small) if there exists an indiscernible sequence <span class="math-container">$\{b_i\}_{i\in\omega}$</span> over <span class="math-container">$C$</span> of elements with the type <span class="math-container">$\operatorname{tp}(b/C)$</span> such that <span class="math-container">$\{φ(x,b_i)\}$</span> is inconsistent.</p> <p>We say that <span class="math-container">$φ(x,b)$</span> fork over <span class="math-container">$C$</span> if it implies finite disjunction of dividing formulas over <span class="math-container">$C$</span>.</p> <hr /> <p>I am trying to get a better intuition about those definitions, especially about forking.</p> <p>From my understanding, <span class="math-container">$φ(x,b)$</span> divides over <span class="math-container">$C$</span> if <span class="math-container">$φ(C,b)$</span> (i.e. 
the set <span class="math-container">$φ$</span> defines with <span class="math-container">$b$</span>) is not &quot;big&quot; in the sense that for every <span class="math-container">$A\subseteq ω$</span>, <span class="math-container">$|A|=k$</span> (for some fixed <span class="math-container">$k$</span>) we have <span class="math-container">$\bigcap\limits_{i∈A} φ(C,b_i)=∅$</span></p> <p>This view is reinforced when we see that the forking formulas are basically the ideal generated from the dividing formulas (I mainly worked in set theory with filters, which are &quot;defining big sets&quot;, so my intuition about their dual, the ideals, is &quot;defining the small sets&quot;).</p> <p>But this view doesn't feel that good to me; it is somewhat lacking. Specifically, what is the role of the second parameter (<span class="math-container">$b,b_i$</span>) here, and why do we look at what happens to the defined subset when we change it? And why are we fixing <span class="math-container">$C$</span>? Isn't the first parameter usually the parameter we move around?</p> <p>Furthermore, I feel like this view fails completely for forking: the set of forking formulas is not always a proper ideal, even over the empty set (e.g. in <span class="math-container">$(X,\mathcal P(X),∈)$</span> or a circular order, in which <span class="math-container">$x=x$</span> forks over <span class="math-container">$∅$</span>)</p> <p>My understanding of forking/dividing partial types is basically the same as my understanding of forking.</p> <hr /> <hr /> <p>So I am guessing that my question is: is there a standard way to think about dividing and forking formulas and partial types?</p> <p>Also, I focused a lot on the study of set theory/formal logic, and kind of neglected abstract algebra for some time (which comes back to bite me now...), so if there is a view that appeals to set theorists and relies less on algebraic intuition I would love to hear about it.</p>
Primo Petri
137,248
<p>Non-forking and invariance (a notion easier to grasp) coincide in a large class of theories.</p> <p>Below I elaborate with a model <span class="math-container">$M$</span> instead of a set <span class="math-container">$C$</span> to avoid some technical issues (b.t.w., these technical issues are quite interesting, just not now).</p> <p>Things are clearer when working with global types.</p> <p>We say that <span class="math-container">$\mathscr{D}\subseteq\mathscr{U}$</span> is invariant over <span class="math-container">$M$</span> if <span class="math-container">$\mathscr{D}$</span> is (setwise) fixed under the action of Aut<span class="math-container">$(\mathscr{U}/M)$</span>.</p> <p>To a global type <span class="math-container">$p(x)$</span> and a formula <span class="math-container">$\varphi(x,y)$</span> we associate the set <span class="math-container">$\mathscr{D}=\{a\in\mathscr{U}:\varphi(x,a)\in p\}$</span>.</p> <p>The global type <span class="math-container">$p(x)$</span> is invariant if all sets <span class="math-container">$\mathscr{D}$</span> as above are invariant (as <span class="math-container">$\varphi(x,y)$</span> ranges over all formulas).</p> <blockquote> <p><strong>Theorem</strong> Assume <span class="math-container">$T$</span> is NIP (the definition of NIP is irrelevant - just a large class of theories that include all stable theories) then the following are equivalent for every global type <span class="math-container">$p(x)$</span></p> <ol> <li><p><span class="math-container">$p(x)$</span> does not fork over <span class="math-container">$M$</span>;</p> </li> <li><p><span class="math-container">$p(x)$</span> is invariant over <span class="math-container">$M$</span>.</p> </li> </ol> </blockquote> <p>See Theorem 5.21 in <a href="https://www.normalesup.org/%7Esimon/NIP_guide.pdf" rel="noreferrer">P.Simon's book</a></p> <p>For an easier comparison with non-dividing, note that a simple argument of compactness proves the following</p> <blockquote> 
<p><strong>Fact</strong> For every <span class="math-container">$T$</span> the following are equivalent for every global type <span class="math-container">$p(x)$</span></p> <ol start="3"> <li><p><span class="math-container">$p(x)$</span> is invariant over <span class="math-container">$M$</span>;</p> </li> <li><p>for every <span class="math-container">$\varphi(x,b)\in p$</span> and every <span class="math-container">$b_1\equiv_M\dots\equiv_Mb_n\equiv_Mb$</span> there is a <span class="math-container">$c$</span> such that <span class="math-container">$\varphi(c,b_i)$</span> for every <span class="math-container">$i$</span>.</p> </li> </ol> </blockquote> <p>It is evident that non-dividing is a weak analogue of 4.</p> <p>What about non-NIP theories? What about non-global types? Can we say something meaningful about 2 and/or 4?</p> <p>I wish I could answer YES. But there is a practical problem. The notions in 2 and 4 are semantic in nature. Though we (human beings) can grasp semantics better, we can mainly work with syntactic notions. Dividing and forking offer a syntactic hand-grip.</p> <p>Between a natural notion with few theorems and an unnatural one with many theorems, mathematicians clearly choose the second (who would disagree?).</p>
232,276
<p>I can prove with the triangle inequality that the closed unit ball in $R^n$ is convex, but how can I show that it is strictly convex?</p>
agt
6,752
<p>I will show that if $(E,\langle\cdot,\cdot\rangle)$ is an inner product space, then the normed space $(E,\|\cdot\|)$ is strictly convex, where $\|x\|:=\sqrt{\langle x,x\rangle}.$</p> <p>Take any two points $x,y\in E$ with $\|x\|=\|y\|=1$ and $x\neq y.$ Then for any $0&lt;\alpha&lt;1,$ we have $\|\alpha x+(1-\alpha) y\|&lt;1,$ or equivalently $\|\alpha x+(1-\alpha) y\|^2-1&lt;0.$ In fact:</p> <p>$$\|\alpha x+(1-\alpha) y\|^2-1=\alpha^2+(1-\alpha)^2+2\alpha(1-\alpha)\langle x,y\rangle-1=2(1-\langle x,y\rangle)(\alpha-1)\alpha$$</p> <p>which is negative for $\alpha\in]0,1[$, because the <em>Cauchy–Schwarz inequality</em> and the hypothesis $\|x\|=\|y\|=1,$ $x\neq y,$ imply $-1\leq\langle x,y\rangle&lt;1.$</p>
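<p>A quick numerical illustration (my own sketch, using random unit vectors in $\mathbb R^5$ under the Euclidean inner product): every strict convex combination of two distinct unit vectors stays strictly inside the unit ball.</p>

```python
import math
import random

def norm(v):
    return math.sqrt(sum(c * c for c in v))

def random_unit(n):
    v = [random.gauss(0, 1) for _ in range(n)]
    r = norm(v)
    return [c / r for c in v]

random.seed(1)
worst = 0.0
for _ in range(1000):
    x, y = random_unit(5), random_unit(5)
    a = random.uniform(0.01, 0.99)
    z = [a * xi + (1 - a) * yi for xi, yi in zip(x, y)]
    worst = max(worst, norm(z))
print(worst)   # strictly below 1, as strict convexity predicts
```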
2,135,228
<p>Find</p> <p>(a) $P\{A \cup B\}$</p> <p>(b) $P\{A^c\}$</p> <p>(c) $P\{A^c \cap B\}$</p> <p>This is what I have right now:</p> <p>(a) $P\{A \cup B\}=0.4+0.5=0.90$</p> <p>(b) $P\{A^c\}= 1-0.4=0.60$</p> <p>(c) $P\{A^c \cap B\}= (0.6)\cdot(0.5)=0.30$</p> <p>Am I doing it correctly?</p>
MPW
113,214
<p><strong>Hint:</strong> The first one is correct. Now what happens when you change the association by moving the grouping symbols to the pair in the other $\oplus$? For associativity to hold, the results must be equal for all choices of $u,v,w$. Is that the case?</p>
600,097
<p>I am stuck on the following problem from an exercise in my analysis book: </p> <blockquote> <p>Show that $$\int_0^4 x \mathrm d(x-[x])=-2$$ where $[x]$ is the greatest integer not exceeding $x$. </p> </blockquote> <p>I think I have to partition the interval $[0,4]$ into some suitable subintervals and here I see $x-[x] \ge 0$. </p> <p>But, I am not sure how to tackle it as no similar types of problems has been discussed in the book. Can someone explain? Thanks and regards to all.</p>
GEdgar
442
<p>What happens if you try partitions like $$ 0,1-\delta,1,2-\delta,2,3-\delta,3,4-\delta,4 $$ where $\delta&gt;0$ is very small?</p>
600,097
<p>I am stuck on the following problem from an exercise in my analysis book: </p> <blockquote> <p>Show that $$\int_0^4 x \mathrm d(x-[x])=-2$$ where $[x]$ is the greatest integer not exceeding $x$. </p> </blockquote> <p>I think I have to partition the interval $[0,4]$ into some suitable subintervals and here I see $x-[x] \ge 0$. </p> <p>But, I am not sure how to tackle it as no similar types of problems has been discussed in the book. Can someone explain? Thanks and regards to all.</p>
DonAntonio
31,254
<p>Using <a href="http://en.wikipedia.org/wiki/Riemann%E2%80%93Stieltjes_integral" rel="nofollow">integration by parts for the Riemann-Stieltjes integral</a>, we get:</p> <p>$$\int\limits_0^4xd(x-\lfloor x\rfloor)=\left.x(x-\lfloor x\rfloor)\right|_{x=4}-\left.x(x-\lfloor x\rfloor)\right|_{x=0}-\int\limits_0^4(x-\lfloor x\rfloor)dx=$$</p> <p>$$=4(4-4)-0(0-0)-\int\limits_0^4 x\,dx+\int\limits_0^4\lfloor x\rfloor dx=$$</p> <p>$$=\left.-\frac12x^2\right|_0^4+\int\limits_0^1 0\cdot dx+\int\limits_1^21dx+\int\limits_2^32dx+\int\limits_3^43dx=-8+1+2+3=-2$$</p>
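<p>As a numerical sanity check (my own sketch, not part of the computation above), one can form Riemann–Stieltjes sums directly. Since the integrand $x$ is continuous and the integrator $x-\lfloor x\rfloor$ has bounded variation on $[0,4]$, the sums converge to the integral as the mesh shrinks; a power-of-two mesh is used so that the integer jump points land exactly on the partition.</p>

```python
import math

n = 2 ** 19              # power of two, so every partition point is an exact float
h = 4.0 / n              # and the jump points 1, 2, 3, 4 land exactly on the grid

def g(x):                # the integrator  x - [x]
    return x - math.floor(x)

# Riemann-Stieltjes sum with left-endpoint tags
s = sum((i * h) * (g((i + 1) * h) - g(i * h)) for i in range(n))
print(s)   # close to -2
```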
4,007,987
<p>So define a polynomial <span class="math-container">$P(x) = 4x^3 + 4x - 5 = 0$</span>, whose roots are <span class="math-container">$a, b $</span> and <span class="math-container">$c$</span>. Evaluate the value of <span class="math-container">$(b+c-3a)(a+b-3c)(c+a-3b)$</span></p> <p>Now I tried this in two ways (both failed because they were far too messy):</p> <ol> <li><p>Expand everything out (I knew this was definitely not the intended approach, but I can't think of the quick method myself)</p> </li> <li><p>Using sums and products of roots (which is still quite lengthy, but better)</p> </li> </ol> <p>Of course, the above methods relied on Vieta's sum/product of roots.</p> <p>Does anyone have an amazing concise solution for me? I know you guys are full of tricks, and I enjoy reading them.</p>
Mark Bennet
2,906
<p>Hint: <span class="math-container">$b+c-3a = (a+b+c)-4a = -4a$</span> and everything simplifies easily.</p>
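<p>Completing the hint (my own continuation, not part of the original answer): since the coefficient of <span class="math-container">$x^2$</span> vanishes, <span class="math-container">$a+b+c=0$</span>, so each factor equals <span class="math-container">$-4$</span> times a root and the product is <span class="math-container">$(-4a)(-4b)(-4c)=-64abc=-64\cdot\frac54=-80$</span> by Vieta. A numeric cross-check, finding the real root by bisection and the complex pair from the quadratic factor:</p>

```python
import cmath

# p(x) = 4x^3 + 4x - 5 has one real root r in (0, 1), since p(0) < 0 < p(1).
def p(x):
    return 4 * x ** 3 + 4 * x - 5

lo, hi = 0.0, 1.0
for _ in range(200):                 # bisection to machine precision
    mid = (lo + hi) / 2
    if p(lo) * p(mid) <= 0:
        hi = mid
    else:
        lo = mid
r = (lo + hi) / 2

# Remaining roots solve x^2 + r x + 5/(4r) = 0, because r(r^2 + 1) = 5/4.
disc = cmath.sqrt(r * r - 4 * (5 / (4 * r)))
b = (-r + disc) / 2
c = (-r - disc) / 2

prod = (b + c - 3 * r) * (r + b - 3 * c) * (c + r - 3 * b)
print(prod.real)   # -80
```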
738,455
<p>Sources: <a href="https://rads.stackoverflow.com/amzn/click/0495011665" rel="nofollow noreferrer"><em>Calculus: Early Transcendentals</em> (6 edn 2007)</a>. p. 890, Section 14.3. Exercise 50b, c.. </p> <blockquote> <p><img src="https://i.stack.imgur.com/8ggG2.png" alt="enter image description here"></p> </blockquote> <p>Defining $t = xy$ transforms $f(xy)$ into $f(t)$, but this doesn't change the truth that $f$ depends on 2 independent variables ($x, y$) and so is multivariable. </p> <ol> <li><p>So how is writing $f'$ correct? $f$ isn't single-variable. </p></li> <li><p>Can the following be simplified more? </p></li> </ol> <p>(b) $\partial_y z = x \dfrac{ df(\color{darkorchid}{xy}) }{ d(\color{darkorchid}{xy}) } $ (c) $\partial_x z = \dfrac{ df(\color{green}{x/y}) }{ d(\color{green}{x/y}) } \dfrac{1}{y} $?</p>
Hagen von Eitzen
39,174
<p>The function $f$ depends on only one variable - there's no comma between the parentheses. So for the given function $t\mapsto f(t)$, there is a single derivative $t\mapsto f'(t)=\lim_{h\to0}\frac{f(t+h)-f(t)}{h}$. Then $f'(xy)$ is just obtained by plugging $t=xy$ into $f'(t)$. This should not be confused with the partial derivatives of the function in two variables $(x,y)\mapsto f(xy)$. For example, if $f(t)=t^3$ then $f'(t)=3t^2$, so $\frac{\partial z}{\partial x} =yf'(xy)=3y^3x^2$ etc.</p>
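<p>The worked example can be checked numerically (my own sketch): for $f(t)=t^3$ and $z=f(xy)$, a central difference in $x$ should match $y\,f'(xy)=3x^2y^3$.</p>

```python
# Chain-rule check for f(t) = t^3, z = f(x*y): dz/dx = y * f'(x*y) = 3 x^2 y^3
def z(x, y):
    return (x * y) ** 3

x, y, h = 1.3, 0.7, 1e-6
numeric = (z(x + h, y) - z(x - h, y)) / (2 * h)   # central difference in x
analytic = 3 * x ** 2 * y ** 3
print(numeric, analytic)   # agree closely
```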
2,965,193
<p>Basically the question is asking us to prove that given any integers <span class="math-container">$$x_1,x_2,x_3,x_4,x_5$$</span> Prove that 3 of the integers from the set above, suppose <span class="math-container">$$x_a,x_b,x_c$$</span> satisfy this equation: <span class="math-container">$$x_a^2 + x_b^2 + x_c^2 = 3k$$</span> So I know I am suppose to use the pigeon hole principle to prove this. I know that if I have 5 pigeons and 2 holes then 1 hole will have 3 pigeons. But what I am confused about is how do you define the hole? Do I just say that the container has a property such that if 3 integers are in it then those 3 integers squared sum up to a multiple of 3?</p>
SQB
106,234
<p>Any integer is of one of the following forms:</p> <ul> <li><span class="math-container">$3k + 0$</span> <em>(these are the multiples of 3)</em></li> <li><span class="math-container">$3k + 1$</span></li> <li><span class="math-container">$3k + 2$</span></li> </ul> <p>where <span class="math-container">$k$</span> is an integer.</p> <hr> <p>If we square these, we get </p> <ul> <li><span class="math-container">$9k^2$</span></li> <li><span class="math-container">$9k^2 + 6k + 1$</span></li> <li><span class="math-container">$9k^2 + 12k + 4$</span></li> </ul> <p>If we then look at the remainders of these when divided by <span class="math-container">$3$</span>, because <span class="math-container">$9$</span>, <span class="math-container">$6$</span>, and <span class="math-container">$12$</span> are all divisible by <span class="math-container">$3$</span>, we see that</p> <ul> <li><span class="math-container">$9k^2 \equiv 0 \pmod 3$</span></li> <li><span class="math-container">$9k^2 + 6k + 1 \equiv 1 \pmod 3$</span></li> <li><span class="math-container">$9k^2 + 12k + 4 \equiv 1 \pmod 3$</span></li> </ul> <p>So any squared integer is equivalent to <span class="math-container">$0$</span> or <span class="math-container">$1$</span>, modulo <span class="math-container">$3$</span>.</p> <p>These will be our two pigeon holes.</p> <hr> <p>Applying the pigeonhole principle to the squares of our 5 integers, we see that either at least 3 have remainder <span class="math-container">$0$</span>, or at least 3 have remainder <span class="math-container">$1$</span>. 
Let's call these 3 integers <span class="math-container">$x_1$</span>, <span class="math-container">$x_2$</span>, and <span class="math-container">$x_3$</span>.</p> <p>In the former case, <span class="math-container">$x_i^2 \equiv 0 \pmod 3$</span> and so the sum of these <span class="math-container">$x_1^2 + x_2^2 + x_3^2 \equiv 0 + 0 + 0 \pmod 3 \equiv 0 \pmod 3$</span></p> <p>In the latter case, <span class="math-container">$x_i^2 \equiv 1 \pmod 3$</span> and so the sum of these <span class="math-container">$x_1^2 + x_2^2 + x_3^2 \equiv 1 + 1 + 1 \pmod 3 \equiv 0 \pmod 3$</span></p> <p><strong>QED</strong></p>
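<p>The argument can be brute-force checked on random 5-tuples (my own sketch): some triple of squares always sums to a multiple of 3, exactly as the pigeonhole proof guarantees.</p>

```python
import random
from itertools import combinations

def has_triple(xs):
    # some 3 of the 5 integers have squares summing to a multiple of 3
    return any((a * a + b * b + c * c) % 3 == 0 for a, b, c in combinations(xs, 3))

random.seed(42)
trials = [[random.randint(-1000, 1000) for _ in range(5)] for _ in range(2000)]
print(all(has_triple(t) for t in trials))   # True
```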
3,699,645
<p>In the evaluation of <span class="math-container">$$\sum_{k=1}^\infty \sum_{\ell=1}^{k-1}\sum_{m=1}^{\ell-1}\frac{\delta_{k, 2\ell-2m}}{m\left(\ell-m\right)\left(k-\ell\right)}.$$</span> Here <span class="math-container">$\delta_{k, 2\ell-2m}$</span> denotes the "Kronecker delta" (see <a href="https://en.wikipedia.org/wiki/Kronecker_delta" rel="nofollow noreferrer">here</a>) and is given by <span class="math-container">$$\delta_{k, 2\ell-2m} = \begin{cases} 0 \qquad \mathrm{if} k \neq 2\ell-2m, \\ 1 \qquad \mathrm{if} k =2\ell-2m. \end{cases}$$</span> Thus we have the equality <span class="math-container">$$S = \sum_{\ell=1}^\infty\sum_{m=1}^{\ell-1} \frac{1}{m(\ell-m)(\ell-2m)}.$$</span> I was able to simplify this down to something along the lines of <span class="math-container">$$\sum_{n=1}^\infty \frac{H_{\left(n-3/2\right)}}{n^2}$$</span>which I have no idea how to approach. Sorry for the little information given, I am just looking for an idea of how I can go about this. Thank you!</p>
Varun Vejalla
595,055
<p>You did not reduce the sum properly. For a given even <span class="math-container">$k = 2t$</span>, the pairs <span class="math-container">$(\ell, m)$</span> satisfying <span class="math-container">$k = 2\ell - 2m, 1 \le \ell \le k-1, 1 \le m \le \ell-1$</span> are exactly those with <span class="math-container">$\ell = \frac{k+2m}{2} = t+m$</span> and <span class="math-container">$1 \le m \le t-1$</span>.</p> <p>This means that the sum should be <span class="math-container">$$\sum_{t = 1}^{\infty} \sum_{m=1}^{t-1} \frac{1}{m(\ell-m)(k-\ell)}$$</span> with <span class="math-container">$k = 2t$</span> and <span class="math-container">$\ell = t+m$</span>.</p> <p>Plugging those in and simplifying, I get <span class="math-container">$$\sum_{t = 1}^{\infty} \sum_{m=1}^{t-1} \frac{1}{mt(t-m)}$$</span></p> <p>Changing the order of summation yields <span class="math-container">$$\sum_{m=1}^{\infty} \sum_{t=m+1}^{\infty} \frac{1}{mt(t-m)}$$</span></p> <p>The inner sum telescopes via partial fractions, so the double sum becomes <span class="math-container">$$\sum_{m=1}^{\infty}\frac{H_m}{m^2}$$</span></p> <p>Using <a href="https://math.stackexchange.com/a/605602/595055">this answer</a> to another question, the sum can be simplified to <span class="math-container">$$2\zeta(3) \approx 2.404$$</span></p>
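<p>The closed form can be checked numerically (my own sketch): the partial sums of <span class="math-container">$\sum H_m/m^2$</span> approach <span class="math-container">$2\zeta(3)$</span>, with a tail of order <span class="math-container">$\log(M)/M$</span>.</p>

```python
# Partial sums of sum_{m >= 1} H_m / m^2 versus 2 * zeta(3).
ZETA3 = 1.2020569031595943       # Apery's constant
M = 200_000
H = 0.0
s = 0.0
for m in range(1, M + 1):
    H += 1.0 / m                 # running harmonic number H_m
    s += H / (m * m)
print(s, 2 * ZETA3)              # agree to roughly 1e-4
```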
3,699,645
<p>In the evaluation of <span class="math-container">$$\sum_{k=1}^\infty \sum_{\ell=1}^{k-1}\sum_{m=1}^{\ell-1}\frac{\delta_{k, 2\ell-2m}}{m\left(\ell-m\right)\left(k-\ell\right)}.$$</span> Here <span class="math-container">$\delta_{k, 2\ell-2m}$</span> denotes the "Kronecker delta" (see <a href="https://en.wikipedia.org/wiki/Kronecker_delta" rel="nofollow noreferrer">here</a>) and is given by <span class="math-container">$$\delta_{k, 2\ell-2m} = \begin{cases} 0 \qquad \mathrm{if} k \neq 2\ell-2m, \\ 1 \qquad \mathrm{if} k =2\ell-2m. \end{cases}$$</span> Thus we have the equality <span class="math-container">$$S = \sum_{\ell=1}^\infty\sum_{m=1}^{\ell-1} \frac{1}{m(\ell-m)(\ell-2m)}.$$</span> I was able to simplify this down to something along the lines of <span class="math-container">$$\sum_{n=1}^\infty \frac{H_{\left(n-3/2\right)}}{n^2}$$</span>which I have no idea how to approach. Sorry for the little information given, I am just looking for an idea of how I can go about this. Thank you!</p>
xpaul
66,420
<p>It is not a complete answer. Let <span class="math-container">\begin{eqnarray} f(x)&amp;=&amp;\sum_{m=1}^\infty \frac{H_{m-\frac32}}{m^2}x^m \end{eqnarray}</span> and then <span class="math-container">\begin{eqnarray} f'(x)&amp;=&amp;\sum_{m=1}^\infty \frac{H_{m-\frac32}}{m}x^{m-1}\\ (xf'(x))'&amp;=&amp;\sum_{m=1}^\infty H_{m-\frac32}x^{m-1}=\sum_{m=1}^\infty x^{m-1}\int_0^1\frac{1-t^{m-\frac32}}{1-t}dt\\ &amp;=&amp;\int_0^1\sum_{m=1}^\infty \frac{1-t^{m-\frac32}}{1-t}x^{m-1}dt\\ &amp;=&amp;\int_0^1\bigg(\frac{1}{(1-t) (1-x)}-\frac{1}{(1-t)\sqrt t (1-t x)}\bigg)dt. \end{eqnarray}</span> So <span class="math-container">\begin{eqnarray} xf'(x)&amp;=&amp;\int_0^x\int_0^1\bigg(\frac{1}{(1-t) (1-r)}-\frac{1}{(1-t) \sqrt t(1-t r)}\bigg)dtdr\\ &amp;=&amp;\int_0^1\int_0^x\bigg(\frac{1}{(1-t) (1-r)}-\frac{1}{(1-t)\sqrt t (1-t r)}\bigg)drdt\\ &amp;=&amp;\int_0^1\frac{-\log (1-x)+t^{-3/2} \log (1-t x)}{1-t}dt\\ f(1)&amp;=&amp;\int_0^1\frac1x\int_0^1\frac{-\log (1-x)+t^{-3/2} \log (1-t x)}{1-t}dtdx\\ &amp;=&amp;\int_0^1\int_0^1\frac1x\frac{-\log (1-x)+t^{-3/2} \log (1-t x)}{1-t}dxdt\\ &amp;=&amp;\int_0^1\frac{\pi ^2-6 t^{-3/2} \text{Li}_2(t)}{6-6 t}dt. \end{eqnarray}</span> Now I have problem for the last step. But Wolfram Mathematica gives <span class="math-container">$$ \int_0^1\frac{\pi ^2-6 t^{-3/2} \text{Li}_2(t)}{6-6 t}dt=\frac{7 \zeta (3)}{2}-\frac{1}{3} \pi ^2 (\log (2)-1)-8 \log (2). $$</span></p>
15,033
<p>I have one incident edge and multiple outgoing edges, for which I want to pick the outgoing edge such that the angle between the outgoing edge and the incoming edge is the smallest of all. We know the coordinates of the vertex $V$.</p> <p>The angle must start from the incoming edge ($e_1$) and end at another edge ($e_2$, $e_3$, $e_4$). </p> <p>Also, the angle must be formed in such a way that the face constructed from $e_1$ to $e_2$ is in counter-clockwise direction. In other words, the face that you construct when you link all the vertices $V$ in $e_1$ and $e_i$ together must be a counter-clockwise cycle.</p> <p>So in the case below, edge $e_2$ must be picked because the $\theta$ is the smallest.</p> <p><img src="https://docs.google.com/drawings/pub?id=1ducKqeABywKY4A1WS17exNytbvNnDxSAovK0cZnUs3Y&amp;w=960&amp;h=720"></p> <p>Is there an algorithm that does this elegantly? The few algorithms I have devised are just plain ugly and require a lot of hacks. </p> <p>Update: One method that I thought of ( and <a href="https://math.stackexchange.com/questions/15033/find-the-outgoing-edge-with-the-smallest-angles-given-one-incident-edges-and-mul/15035#15035">proposed below</a>) is to use the cross product of each edge against the incoming edge $e_1$, and take the minimum $\theta_{i}$, i.e.,</p> <p>$$e_{1} \times e_{i} = |e_{1}||e_{i}| \sin\, \theta_{i}$$</p> <p>But the problem is that this doesn't really work. Consider the following test case: $$e_1=(1,0)$$ $$e_2=(-1/\sqrt{2},1/\sqrt{2} )$$ $$e_3=(1/\sqrt{2},-1/\sqrt{2})$$</p> <p>One would find that</p> <p>$$\sin \theta_2 = 1/\sqrt{2}$$ $$\sin \theta_3=-1/\sqrt{2}$$</p> <p>so $\arcsin$ reports $\theta_2 = 45^o$, even though $e_2$ actually lies at $135^o$ (since $\sin 45^o = \sin 135^o$). The cross product alone cannot distinguish angles in different quadrants, so it cannot tell me that $e_2$ should be selected. </p>
Eric Beaudoin
5,292
<p>If you are trying to put this into code you may want to consider the function atan2(y,x), it is available in most languages. Note that it takes the y coordinate as the first parameter. Since atan2 takes two parameter it can figure out in which quadrant your vector lies, and it can output the angle in the range $-\pi$ to $\pi$.</p> <p>Here is a perl snippet to demonstrate how it works for your two angles:</p> <pre><code>#!/usr/bin/perl use Math::Trig; $t_rad = atan2(1/sqrt(2),-1/sqrt(2)); $t_deg = $t_rad * 180 / pi; print "theta_2 = $t_deg\n"; $t_rad = atan2(-1/sqrt(2),1/sqrt(2)); $t_deg = $t_rad * 180 / pi; print "theta_3 = $t_deg\n"; </code></pre> <p>And here is the output when you run it for the two examples above:</p> <pre><code>theta_2 = 135 theta_3 = -45 </code></pre>
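<p>Here is the same idea in Python, applied to the selection problem itself (a sketch; the counter-clockwise convention is my reading of the question): reduce each candidate's angle from the reference edge to <code>[0, 2*pi)</code> using <code>atan2</code> of the cross and dot products, then take the minimum.</p>

```python
import math

def ccw_angle(ref, v):
    # counter-clockwise angle from ref to v, normalized into [0, 2*pi)
    cross = ref[0] * v[1] - ref[1] * v[0]
    dot = ref[0] * v[0] + ref[1] * v[1]
    return math.atan2(cross, dot) % (2 * math.pi)

def pick_edge(incoming, candidates):
    return min(candidates, key=lambda v: ccw_angle(incoming, v))

s = 1 / math.sqrt(2)
e1 = (1.0, 0.0)
e2 = (-s, s)      # 135 degrees CCW from e1
e3 = (s, -s)      # 315 degrees CCW from e1
print(pick_edge(e1, [e2, e3]))   # picks e2, since 135 < 315
```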
4,544,450
<p>I'm trying to verify this logical equivalence:</p> <p>p≡((p∧∼q)→q)→p</p> <p>I know I have to simplify the right-hand side, and I'm pretty certain the final step must be the absorption law (10). But the ∼q is bugging me.</p>
Hongleng Fu
557,275
<p>One key technique here is to use another sentence to rephrase <span class="math-container">$a \to b$</span>.</p> <p>It is known that <span class="math-container">$a \to b$</span> is false only if <span class="math-container">$a$</span> is true and <span class="math-container">$b$</span> is false.</p> <p>Using a truth table or another method should help you find a way to rephrase it.</p> <p>Actually, in solving this problem I have not used the absorption law. What I used is <span class="math-container">$(a \wedge b) \lor (a \wedge \lnot b) \equiv a $</span></p>
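<p>As a quick sanity check (my own sketch, not part of the simplification), the claimed equivalence can be brute-forced over all four rows of the truth table:</p>

```python
from itertools import product

def implies(a, b):
    return (not a) or b

# ((p AND NOT q) -> q) -> p  should be equivalent to  p
equiv = all(implies(implies(p and not q, q), p) == p
            for p, q in product([False, True], repeat=2))
print(equiv)   # True
```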
242,203
<p>What's the derivative of the integral $$\int_1^x\sin(t) dt$$</p> <p>Any ideas? I'm getting a little confused.</p>
Ormi
49,301
<p>$ \frac{d}{dx}\int_1^x\sin(t)dt = \frac{d}{dx} [-\cos t]_1^x = \frac{d}{dx}[-\cos x+\cos(1)] = \sin x $</p>
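<p>As a sanity check (my addition, not part of the answer), one can approximate $F(x)=\int_1^x\sin(t)\,dt$ with a numeric quadrature and compare its difference quotient with $\sin x$:</p>

```python
import math

def F(x, steps=20000):
    # composite trapezoid approximation of the integral of sin(t) from 1 to x
    h = (x - 1.0) / steps
    s = 0.5 * (math.sin(1.0) + math.sin(x))
    for i in range(1, steps):
        s += math.sin(1.0 + i * h)
    return s * h

x = 2.0
eps = 1e-4
derivative = (F(x + eps) - F(x - eps)) / (2 * eps)  # central difference
print(derivative, math.sin(x))  # the two values agree to several decimal places
```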
2,164,994
<p>Is the ratio test for convergence applicable to the below series:</p> <p>$$\sum_{n=1}^\infty \frac{n^3+1}{\sqrt[3]{n^{10} + n}}$$</p> <p>I already know that the series diverge. I want to confirm if the ratio test is applicable or not?</p>
DeepSea
101,504
<p><strong>hint</strong>: Compare your series with $\displaystyle \sum_{n = 1}^{\infty} \dfrac{1}{n^{\frac{1}{3}}}$, which diverges.</p>
2,164,994
<p>Is the ratio test for convergence applicable to the below series:</p> <p>$$\sum_{n=1}^\infty \frac{n^3+1}{\sqrt[3]{n^{10} + n}}$$</p> <p>I already know that the series diverge. I want to confirm if the ratio test is applicable or not?</p>
marwalix
441
<p>Let's compute the ratio</p> <p>$${a_{n+1}\over a_n}={(n+1)^3+1\over n^3+1}\cdot {\sqrt[3]{n^{10}+n}\over \sqrt[3]{(n+1)^{10}+n+1}}\sim{n^{1\over 3}\over(n+1)^{1\over3}}\to 1$$</p> <p>We cannot conclude with the ratio test.</p>
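<p>Numerically (my addition), the ratio indeed approaches $1$, which is why the test is inconclusive here:</p>

```python
def a(n):
    # the terms of the series
    return (n**3 + 1) / (n**10 + n) ** (1 / 3)

for n in [10, 100, 1000, 10000]:
    print(n, a(n + 1) / a(n))  # the ratios creep up toward 1 from below
```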
664,152
<p>I have a homework problem that I'm very stuck on. The problem statement is as follows:</p> <p>"Suppose that $X$ is a metric space, and that for any sets $E,F \subseteq X$, if dist$(E,F) &gt; 0$ then $\mu^*(E \cup F) = \mu^*(E) + \mu^*(F)$. Prove that every open set is a splitting set. (Recall that the distance between subsets $E$ and $F$ of a metric space is defined to be dist$(E,F) = \inf \{ d(x,y) : x \in E, y \in F \}$.)"</p> <p>Our professor defines a "splitting set" as follows: Let $\mu^*$ be an outer measure on a nonempty set $X$. A set $A \subseteq X$ is called a splitting set if, for all $E \subseteq X$, $\mu^*(E) = \mu^*(E \cap A) + \mu^*(E \cap A^c)$, where $A^c = X \backslash A$. (This is what Folland's Real Analysis calls a $\mu^*$-measurable set.)</p> <p>Here are a couple of my (failed) attempts:</p> <p>My first try was to let $U$ be an arbitrary open set in $X$, let $G \subseteq X$ be arbitrary, and define $E = G \cap U$, $F = G \cap U^c$. If I could somehow show that dist$(E,F) &gt; 0$ in this case, then the result would follow, but in general, this is not true (take $\mathbb{R}$ with the standard metric, let $G = [0,1]$, $U = (-1/2,1/2)$).</p> <p>My next attempt was by contradiction: suppose there is an open set $U$ such that $U$ is not a splitting set. Then there is some $E \subseteq X$ such that $\mu^*(E) \neq \mu^*(E \cap U) + \mu^*(E \cap U^c)$... and by monotonicity this means that $\mu^*(E) &lt; \mu^*(E \cap U) + \mu^*(E \cap U^c)$. But then I only have one set to work with, and with the assumptions, I need two sets $E,F$ to work with in order to get anywhere.</p> <p>I also tried exploring what I could do with closed sets, since if dist$(E,F) &gt; 0$, then the closures of $E$ and $F$ respectively are disjoint. But I'm still stuck. Any hints would be appreciated!!! Thanks in advance.</p>
phaiakia
81,511
<p>Felt like I should come back and add the correct answer. Let $U$ be an open set in $X$ and consider $U^c$. For each $n$, let $U_n = \{ x \in U : d(x,y) &gt; 1/n \; \forall y \in U^c \}$. Since $\operatorname{dist}(U_n, U^c) \ge 1/n &gt; 0$, the hypothesis gives $\mu^*(E \cap (U_n \cup U^c)) = \mu^*(E \cap U_n) + \mu^*(E \cap U^c)$ for every $E \subseteq X$. Then taking the limit of both sides, continuity from below of the outer measure gives the result.</p>
788,245
<p>$$\sum_{n=1}^{\infty}\frac{(n+2)!}{(3n-1)}$$ I know this series does not converge. Can someone show me how to prove that? Should i use criteria of Dalamber or any other criteria?</p>
Santosh Linkha
2,199
<p>A piece of the series already diverges: use the integral test to see that $$\sum_{n=1}^{\infty}\frac{1}{(3n-1)}$$ diverges, and then the comparison $$\sum_{n=1}^{\infty}\frac{1}{(3n-1)} \le \sum_{n=1}^{\infty}\frac{(n+2)!}{(3n-1)}$$ settles the whole series. Also note that a first <strong>necessary</strong> condition for convergence of a series is that its terms tend to zero, i.e., $$\lim_{n\to\infty}a_n = 0$$ which clearly fails here.</p>
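<p>A glance at the first few terms (my addition) confirms that the necessary condition fails badly:</p>

```python
from math import factorial

terms = [factorial(n + 2) / (3 * n - 1) for n in range(1, 8)]
print(terms)  # 3.0, 4.8, 15.0, ... -- the terms grow without bound
# Since a_n does not tend to 0, the series diverges.
```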
3,773,933
<p>Question goes like this:</p> <p>In a box containing <span class="math-container">$36$</span> strawberries, <span class="math-container">$2$</span> of them are rotten. Kyle randomly picked <span class="math-container">$5$</span> of these strawberries.<br> a. What is the probability of having at least 1 rotten strawberry among the 5?<br> b. How many strawberries should be picked so that the probability of having exactly <span class="math-container">$2$</span> rotten strawberries among them equals <span class="math-container">$2/35$</span>?</p> <p>My Work: a) <span class="math-container">$C(36,3) = 7140$</span> and <span class="math-container">$C(36,5) = 37692$</span>. <span class="math-container">$7140 / 37692 = .0189$</span>, which is the probability of having at least <span class="math-container">$1$</span> rotten strawberry. Is this correct? If not, what did I do wrong?<br> b) I have no clue where to start, any help would be appreciated.</p> <p>Thank You</p>
N. F. Taussig
173,070
<blockquote> <p>In a box containing <span class="math-container">$36$</span> strawberries, <span class="math-container">$2$</span> of them are rotten. Kyle randomly picked <span class="math-container">$5$</span> of these strawberries. What is the probability of at least one rotten strawberry among the five?</p> </blockquote> <p>Your answer is incorrect. What you have calculated is the probability that both rotten strawberries are among the five. To obtain the correct answer, you need to add to that the probability that exactly one of the five strawberries Kyle selects is rotten.</p> <p>The probability that Kyle selects exactly <span class="math-container">$k$</span> rotten strawberries among the five he randomly picks is <span class="math-container">$$\Pr(\text{exactly $k$ rotten strawberries}) = \frac{\dbinom{2}{k}\dbinom{34}{5 - k}}{\dbinom{36}{5}}$$</span> since if he picks <span class="math-container">$k$</span> of the <span class="math-container">$2$</span> rotten strawberries, Kyle must also select <span class="math-container">$5 - k$</span> of the <span class="math-container">$36 - 2 = 34$</span> good strawberries. Therefore, the probability that Kyle picks at least one rotten strawberry is <span class="math-container">\begin{align*} \Pr(\text{at least one rotten strawberry}) &amp; = \Pr(\text{exactly one rotten strawberry}) + \Pr(\text{exactly two rotten strawberries})\\ &amp; = \frac{\dbinom{2}{1}\dbinom{34}{4}}{\dbinom{36}{5}} + \frac{\dbinom{2}{2}\dbinom{34}{3}}{\dbinom{36}{5}} \end{align*}</span> In any selection, Kyle must pick exactly zero, exactly one, or exactly two rotten strawberries. Therefore, <span class="math-container">$$\frac{\dbinom{2}{0}\dbinom{34}{5}}{\dbinom{36}{5}} + \frac{\dbinom{2}{1}\dbinom{34}{4}}{\dbinom{36}{5}} + \frac{\dbinom{2}{2}\dbinom{34}{3}}{\dbinom{36}{5}} = 1$$</span> as you can verify by direct calculation. 
Hence, the probability that Kyle selects at least one rotten strawberry can be found by subtracting the probability that Kyle selects no rotten strawberries from <span class="math-container">$1$</span>, as Robert Shore did in his answer. <span class="math-container">\begin{align*} \Pr(\text{at least one rotten strawberry}) &amp; = 1 - \Pr(\text{no rotten strawberries})\\ &amp; = 1 - \frac{\dbinom{2}{0}\dbinom{34}{5}}{\dbinom{36}{5}}\\ &amp; = \frac{\dbinom{2}{1}\dbinom{34}{4}}{\dbinom{36}{5}} + \frac{\dbinom{2}{2}\dbinom{34}{3}}{\dbinom{36}{5}} \end{align*}</span> which agrees with the answer we obtained above.</p> <blockquote> <p>In a box containing <span class="math-container">$36$</span> strawberries, <span class="math-container">$2$</span> of them are rotten. How many strawberries should be picked so that the probability of having exactly two rotten strawberries among them is <span class="math-container">$2/35$</span>?</p> </blockquote> <p>The probability of selecting exactly <span class="math-container">$2$</span> rotten strawberries among <span class="math-container">$n$</span> is <span class="math-container">$$\Pr(\text{exactly $2$ rotten strawberries}) = \frac{\dbinom{2}{2}\dbinom{34}{n - 2}}{\dbinom{36}{n}} = \frac{\dbinom{34}{n - 2}}{\dbinom{36}{n}}$$</span> Set this quantity equal to <span class="math-container">$2/35$</span>, simplify, and solve for <span class="math-container">$n$</span>.</p>
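<p>Both parts can be double-checked with exact rational arithmetic (my addition; <code>n</code> below is the number of strawberries picked in part (b)):</p>

```python
from math import comb
from fractions import Fraction

# Part (a): P(at least one rotten among 5) = 1 - C(34,5)/C(36,5)
p_at_least_one = 1 - Fraction(comb(34, 5), comb(36, 5))
print(p_at_least_one)  # 11/42

# Part (b): find n with C(34, n-2) / C(36, n) == 2/35
solutions = [n for n in range(2, 37)
             if Fraction(comb(34, n - 2), comb(36, n)) == Fraction(2, 35)]
print(solutions)  # [9]
```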
880,928
<p>$A,B,C,D,E,F,G$</p> <p>A list consists of all possible three-letter arrangements formed by using the letters above such that the first letter is $D$ and one of the remaining letters is $A$. If no letter is used more than once in an arrangement in the list and one three-letter arrangement is randomly selected from the list, what is the probability that the arrangement selected will be $DCA$? </p> <p>My Attempt: $1/7 \times 1/6 \times 1/5$. </p>
Asimov
137,446
<p>Well, if the first letter is D, we don't have to worry about the first choice.</p> <p>The second letter has 6 possibilities.</p> <p>If it is A, then the last letter has 5 possibilities.</p> <p>If it is not A (and there are 5 letters other than A), then the last letter must be A, so for each of them there is only one possible last letter.</p> <p>There are 10 ways to meet the requirements.</p> <p>The chance that the selected arrangement is DCA is therefore 1/10, or 10%.</p>
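<p>The count can be verified by brute force (my addition):</p>

```python
from itertools import permutations

letters = "ABCDEFG"
arrangements = [p for p in permutations(letters, 3)
                if p[0] == "D" and "A" in p[1:]]
print(len(arrangements))                 # 10 arrangements meet the requirements
print(("D", "C", "A") in arrangements)   # DCA is one of them, so P(DCA) = 1/10
```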
371,999
<p>We are trying to estimate the cardinality $K(n,p)$ of so-called Kuratowski monoid with $p$ positive and $n$ negative linearly ordered idempotent generators. In particular, we are interesting in the case $n=p$.</p> <p>This notion arose in an intersection of general topology and abstract algebra. We calculated the exact value of </p> <p>$$K(n,p)=\sum_{i=0}^n\sum_{j=0}^p \binom {i+j}i\binom{i+j}j.$$</p> <p>We try to simplify this formula, but we stuck. The problem is that many different binomial sums are known, but I cannot remember among them sums of binomial “polynomials” of degree greater than 1, containing the summation index above.</p> <p>If the summation diapason was a triangle $\{i,j\ge 0:i+j\le 2n\}$ instead of a rectangle $[0;n]\times [0;p]$, a sum should be simplified as follows:</p> <p>$$\sum_{i,j=0}^{i+j\le 2n} \binom {i+j}i^2=\sum_{k=0}^{2n}\sum_{i,j=0}^{i+j=k}\binom ki^2=\sum_{k=0}^{2n}\binom {2k}k.$$</p> <p>A rough asymptotic of $K(n,n)$ should be $\log K(n,n)\sim n\log n+O(\log n)$, which follows form the bounds: </p> <p>$$\binom {2n}n^2\le K(n,n)= \sum_{i=0}^n\sum_{j=0}^p \binom {i+j}i\binom{i+j}j\le 2(n+1)^2\binom {2n}n^2.$$</p> <p>Since we are topologists, it is complicated for us to obtain a highly estimated asymptotic, so we decided to ask specialists about that. </p>
Qiaochu Yuan
232
<p>The argument you've written down shows that </p> <p>$$K(n, n) \le \sum_{k=0}^{2n} {2k \choose k}.$$</p> <p>Stirling's formula gives the asymptotic</p> <p>$${2k \choose k} \sim \frac{4^k}{\sqrt{\pi k}}$$</p> <p>so this sum behaves approximately like a geometric series, and we expect that</p> <p>$$\sum_{k=0}^{2n} {2k \choose k} \sim \frac{4}{3} \left( \frac{4^{2n}}{ \sqrt{2 \pi n} } \right).$$</p> <p>We also have the (easily improved) lower bound </p> <p>$${2n \choose n}^2 \le K(n, n)$$</p> <p>and Stirling's formula here gives</p> <p>$${2n \choose n}^2 \sim \frac{4^{2n}}{\pi n}$$</p> <p>so we get upper and lower bounds that are within a multiplicative factor of $O(\sqrt{n})$ of each other this way. You can tighten the lower bound using the <a href="http://terrytao.wordpress.com/2010/01/02/254a-notes-0a-stirlings-formula/">refined entropy formula</a>. </p>
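<p>A numeric check of the estimates above (my addition): already for moderate $n$ the partial sum is close to $\frac43 \cdot 4^{2n}/\sqrt{2\pi n}$.</p>

```python
import math
from math import comb

n = 50
s = sum(comb(2 * k, k) for k in range(2 * n + 1))  # sum_{k=0}^{2n} C(2k, k)
approx = (4 / 3) * 4 ** (2 * n) / math.sqrt(2 * math.pi * n)
print(s / approx)  # close to 1
```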
371,999
<p>We are trying to estimate the cardinality $K(n,p)$ of so-called Kuratowski monoid with $p$ positive and $n$ negative linearly ordered idempotent generators. In particular, we are interesting in the case $n=p$.</p> <p>This notion arose in an intersection of general topology and abstract algebra. We calculated the exact value of </p> <p>$$K(n,p)=\sum_{i=0}^n\sum_{j=0}^p \binom {i+j}i\binom{i+j}j.$$</p> <p>We try to simplify this formula, but we stuck. The problem is that many different binomial sums are known, but I cannot remember among them sums of binomial “polynomials” of degree greater than 1, containing the summation index above.</p> <p>If the summation diapason was a triangle $\{i,j\ge 0:i+j\le 2n\}$ instead of a rectangle $[0;n]\times [0;p]$, a sum should be simplified as follows:</p> <p>$$\sum_{i,j=0}^{i+j\le 2n} \binom {i+j}i^2=\sum_{k=0}^{2n}\sum_{i,j=0}^{i+j=k}\binom ki^2=\sum_{k=0}^{2n}\binom {2k}k.$$</p> <p>A rough asymptotic of $K(n,n)$ should be $\log K(n,n)\sim n\log n+O(\log n)$, which follows form the bounds: </p> <p>$$\binom {2n}n^2\le K(n,n)= \sum_{i=0}^n\sum_{j=0}^p \binom {i+j}i\binom{i+j}j\le 2(n+1)^2\binom {2n}n^2.$$</p> <p>Since we are topologists, it is complicated for us to obtain a highly estimated asymptotic, so we decided to ask specialists about that. </p>
Alex Ravsky
71,850
<p><strong>Theorem</strong>. $\lim_{n\to\infty} K(n,n)/\binom {2n}n^2=16/9$.</p>

<p>This theorem is a corollary of the following results.</p> <hr> <p>For every integers $0\le a,b\le n$ put $c_{a,b}(n)=\binom {2n-a-b}{n-a}/\binom {2n}n$.</p> <p><strong>Lemma 1</strong>. For every integers $a,b\ge 0$ there exists a limit $\lim_{n\to\infty} c_{a,b}(n)=2^{-(a+b)}$.</p> <p><em>Proof</em>. It follows from the equality $c_{a,b}(n)=\frac {n(n-1)\cdots (n-a+1) n(n-1)\cdots (n-b+1)}{2n(2n-1)\cdots (2n-a-b+1)}.\square$</p> <p>For every $n\ge 0$ put $k(n)=K(n,n)/\binom {2n}n^2$. Clearly $k(n)= \sum_{a,b=0}^n c_{a,b}(n)^2$. The computer calculations suggest that the sequence $\{k(n):n\ge 3\}$ is increasing.</p> <p><strong>Proposition 1</strong>. For each $n$ we have $k(n)\le 16/9$.</p> <p><em>Proof</em>. $k(0)=1$, $k(1)=k(2)=7/4&lt;16/9$, $k(3)=697/400&lt;7/4$, and $k(4)=8549/4900&lt;16/9$. Suppose now that $n\ge 5$ and $k(n-1)\le 16/9$.</p> <p>We shall use the following two lemmas.</p> <p><strong>Lemma 2.</strong> For each $a\le n$ we have $c_{a,0}(n)\le 2^{-a}$.</p> <p><em>Proof</em>. It follows from the equality $c_{a,0}(n)=\frac {n(n-1)\cdots (n-a+1)}{2n(2n-1)\cdots (2n-a+1)}$ and the inequality $(2n-l)\ge 2(n-l)$.$\square$</p> <p><strong>Lemma 3.</strong> $(16/9)c_{1,1}^2+2c_{1,0}^2+2c_{2,0}^2+2c_{3,0}^2\le (16/9)\cdot(1/16)+2(1/4)+2(1/16)+2(1/64)$.</p> <p><em>Proof</em>. $c_{1,1}=\frac n{2(2n-1)}$, $c_{1,0}=1/2$, $c_{2,0}=\frac {n-1}{2(2n-1)}$, and $c_{3,0}=\frac {n-2}{4(2n-1)}$. The routine transformations of the inequality yield $211\le 52n$, which holds for $n\ge 5$.$\square$</p> <p>Now, by Lemmas 2 and 3, we have $k(n)\le k(n-1)c_{1,1}^2+1+2\sum_{a=1}^{n} c_{a,0}^2\le (16/9)c_{1,1}^2+1+2(c_{1,0}^2+c_{2,0}^2+c_{3,0}^2)+2\sum_{a=4}^{\infty}2^{-2a}\le (16/9)\cdot(1/16)+1+2(1/4)+2(1/16)+2(1/64)+2\sum_{a=4}^{\infty}2^{-2a}= 16/9$.$\square$</p> <p><strong>Proposition 2</strong> There exists a limit $\lim_{n\to\infty} k(n)=16/9$.</p> <p><em>Proof</em>. Lemma 1 implies that $\underline\lim_{n\to\infty} k(n)\ge \sum_{a,b=0}^\infty 2^{-2(a+b)}=\sum_{i=0}^\infty (i+1)4^{-i}=16/9$. 
By Proposition 1, $\lim_{n\to\infty} k(n)=16/9$.$\square$</p>
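<p>The exact values quoted in the proof, and the monotone approach to $16/9$, can be reproduced with exact arithmetic (my addition):</p>

```python
from fractions import Fraction
from math import comb

def k(n):
    # K(n,n) / C(2n,n)^2 as an exact rational
    K = sum(comb(i + j, i) * comb(i + j, j)
            for i in range(n + 1) for j in range(n + 1))
    return Fraction(K, comb(2 * n, n) ** 2)

assert k(1) == Fraction(7, 4)
assert k(3) == Fraction(697, 400)
assert k(4) == Fraction(8549, 4900)
assert k(4) < k(20) < Fraction(16, 9)
print(float(k(20)))  # already close to 16/9 = 1.777...
```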
2,332,741
<p>I have the following problem:</p> <blockquote> <p>Let $\Omega \subset \mathbb{R}^3$ be an open bounded set with a smooth boundary $\partial \Omega$ and the unit normal $v$. Calculate for the vector field $a(x,y,z)=(0,0,-pz)$ with $p&gt;0$ the value of -$\int_{\partial\Omega}\langle a,v\rangle d\mu_{\partial\Omega}$.</p> </blockquote> <p>I don't really know how to start with this problem. So I used the divergence theorem: -$\int_{\partial\Omega}\langle a,v\rangle d\mu_{\partial\Omega}$=$-\int_\Omega \text{div}(a)d\mu_M$=$\int_\Omega pd\mu_M$ (since $\text{div}(a) = -p$), but I don't know how to proceed from here. Can someone help me please? Thanks in advance.</p>
Gabriel Soranzo
51,400
<p>As the surface <span class="math-container">$X$</span> is supposed to be projective there exist an ample divisor <span class="math-container">$H$</span> on <span class="math-container">$X$</span>. Following the same reasoning as the begining of the proof of V.1.1 the exist an integer <span class="math-container">$n&gt;0$</span> such that <span class="math-container">$nH$</span> and <span class="math-container">$C+nH$</span> are very ample.</p> <p>For all very ample divisor <span class="math-container">$D$</span> and irreducible nonsingular curve <span class="math-container">$C$</span> we can apply lemma V.1.2. to get some <span class="math-container">$D'\in|D|$</span> transversal to <span class="math-container">$C$</span>. Then we can apply V.1.3 and V.1.1 to get that <span class="math-container">$$ C.D=C.D'=\#(C\cap D')=\deg(\mathcal{L}(D')\otimes\mathcal{O}_C)=\deg(\mathcal{L}(D)\otimes\mathcal{O}_C) $$</span></p> <p>As Mohan says above we then have that <span class="math-container">$$ \begin{align} C^2 &amp; = (C+nH).C-nH.C \\ &amp; = \deg(\mathcal{L}(C+nH)\otimes\mathcal{O}_C)-\deg(\mathcal{L}(nH)\otimes\mathcal{O}_C) \\ &amp;= \deg(\mathcal{L}(C+nH)\otimes\mathcal{O}_C\otimes_{\mathcal{O}_C}\mathcal{O}_C\otimes\mathcal{L}(-nH)) \\ &amp;= \deg(\mathcal{L}(C+nH-nH)\otimes\mathcal{O}_C) \\ &amp;= \deg(\mathcal{L}(C)\otimes\mathcal{O}_C) \end{align} $$</span></p>
1,028,695
<p>While reading though some engineering literature, I came across some logic that I found a bit strange. Mathematically, the statement might look something like this: </p> <p>I have a linear operator $A:L^2(\Bbb{R}^3)\rightarrow L^2(\Bbb{R}^4)$, that is a mapping which takes functions of three variables to functions of four variables. Then, "because the range function depends on 4 variables while the domain function depends on only three", there must be <em>redundancy</em> in the operator $A$, that is the range of $A$ is a proper subset of $L^2(\Bbb{R}^4)$, characterized by some <em>range conditions</em>.</p> <p>Is such a statement always true? For the specific example I am reading about (the X-ray transform), it is definitely true - in fact, the range of the operator is characterized by a <a href="http://en.wikipedia.org/wiki/John%27s_equation" rel="nofollow">certain PDE</a> - but I can't image such a thing is true in general. </p> <p>For instance, I can cook up an operator $A:L^2(\Bbb{R}^3)\rightarrow L^2(\Bbb{R}^4)$ such that the range of $A$ is dense in $L^2(\Bbb{R}^4)$: simply choose orthonormal bases $(e_j)$ and $(f_j)$ for both, then map $e_j$ to $f_j$.</p> <p>Any thoughts?</p>
fodon
64,111
<p>I think it always has to be true. It is like mapping points in a flat plane to a 3D space. You get a volume 0 manifold in the 3D space if there is a 1 to 1 mapping.</p>
4,241,919
<p>Let <span class="math-container">$x, y, z$</span> are three <span class="math-container">$n\times 1$</span> vectors. For each vector, every element is between 0 and 1, and the sum of all elements in each vector is 1. Now I am wonder why the following inquality holds:</p> <p><span class="math-container">$x^Ty+y^Tz-x^Tz\le 1$</span></p> <p><strong>My Attempt:</strong> I try to rewrite the LHS of this inequality as</p> <p><span class="math-container">$|x^Ty+(y-x)^Tz|\le |x^T\cdot 1|+\| y-x\|_1\|z\|_{\max} \le 1+\| y-x\|_1 \times 1 $</span></p> <p>However, this bound looks a bit larger than 1. Any suggestions are appreciated.</p>
blundered_bishop
508,406
<p>Note that your conditions imply <span class="math-container">$x \cdot x \le1$</span> for every <span class="math-container">$x$</span>. From <span class="math-container">$(x-y) \cdot (x-y) \ge0$</span> expand to get</p> <p><span class="math-container">$$2x\cdot y \le x\cdot x + y\cdot y\le2 \Rightarrow x\cdot y \le 1$$</span></p> <p>Your inequality follows immediately</p>
4,241,919
<p>Let <span class="math-container">$x, y, z$</span> are three <span class="math-container">$n\times 1$</span> vectors. For each vector, every element is between 0 and 1, and the sum of all elements in each vector is 1. Now I am wonder why the following inquality holds:</p> <p><span class="math-container">$x^Ty+y^Tz-x^Tz\le 1$</span></p> <p><strong>My Attempt:</strong> I try to rewrite the LHS of this inequality as</p> <p><span class="math-container">$|x^Ty+(y-x)^Tz|\le |x^T\cdot 1|+\| y-x\|_1\|z\|_{\max} \le 1+\| y-x\|_1 \times 1 $</span></p> <p>However, this bound looks a bit larger than 1. Any suggestions are appreciated.</p>
Calvin Lin
54,563
<p><strong>Hint:</strong> Show that for real numbers <span class="math-container">$ a, b, c \in [0, 1 ]$</span>, we have</p> <p><span class="math-container">$$ b ( a + c ) - ac \leq b. $$</span></p> <blockquote class="spoiler"> <p> The expression is linear in each variable, so we just have to check the 8 end points.</p> </blockquote> <p>Corollary: <span class="math-container">$ (x^T + z^T) y - x^T z \leq 1^T y = 1$</span>.</p>
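<p>Since the expression is linear in each variable, the corner check suffices; here is a quick verification, plus a random spot-check of the full vector inequality (my addition):</p>

```python
import random
from itertools import product

# Corner check: b*(a + c) - a*c <= b for a, b, c in {0, 1}
assert all(b * (a + c) - a * c <= b for a, b, c in product([0, 1], repeat=3))

# Random spot-check of x.y + y.z - x.z <= 1 for vectors on the simplex
random.seed(0)

def simplex(n):
    w = [random.random() for _ in range(n)]
    s = sum(w)
    return [v / s for v in w]

for _ in range(1000):
    x, y, z = simplex(5), simplex(5), simplex(5)
    val = sum(xi * yi + yi * zi - xi * zi for xi, yi, zi in zip(x, y, z))
    assert val <= 1 + 1e-12

print("all checks passed")
```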
2,572,032
<p>I'm looking for help with <strong>(b)</strong> and <strong>(c)</strong> specifically. I'm posting <strong>(a)</strong> for completeness.</p> <p><strong>(a)</strong> Show convergence for $a_n=\sqrt{n+1}-\sqrt{n}$ towards $0$ and test $\sqrt{n}a_n$ for convergence.</p> <p><strong>(b)</strong> Show $b_n=\sqrt[k]{n+1}-\sqrt[k]{n}$ converges towards $0$ for all $k \geq 2$.</p> <p><strong>(c)</strong> For which $\alpha\in\mathbb{Q}_+$ does $n^\alpha b_n$ converge?</p> <hr> <p>I'm pretty sure I solved <strong>(a)</strong>. I have proven the convergence of $a_n$ by using the fact that $$\sqrt{n}&lt;\sqrt{n+1}\leq\sqrt{n}+\frac{1}{2\sqrt{n}}$$ which holds true since $$(\sqrt{n}+\frac{1}{2\sqrt{n}})^2=n+1+\frac{1}{4n}\geq n+1\,.$$ This gives us $$0&lt;\sqrt{n+1}-\sqrt{n}\leq\frac{1}{2\sqrt{n}}$$ and after applying the squeeze theorem with noting that $\frac{1}{2\sqrt{n}}\longrightarrow0$ we can tell that also $a_n\longrightarrow0$.</p> <p>Now $x_n=\sqrt{n}a_n=\sqrt{n}(\sqrt{n+1}-\sqrt{n})$.</p> <p>We have \begin{align*}\sqrt{n}(\sqrt{n+1}-\sqrt{n})&amp;=\sqrt{n}\sqrt{n+1}-\sqrt{n}\sqrt{n}\\&amp;=\sqrt{n(n+1)}-n\\&amp;=\sqrt{n^2+n}-n\\&amp;=\frac{(\sqrt{n^2+n}-n)(\sqrt{n^2+n}+n)}{\sqrt{n^2+n}+n}\\&amp;=\frac{n^2+n-n^2}{\sqrt{n^2+n}+n}\\&amp;=\frac{n}{\sqrt{n^2+n}+n}\\&amp;=\frac{n}{n\sqrt{1+\frac{1}{n}}+n}\\&amp;=\frac{1}{\sqrt{1+\frac{1}{n}}+1}\end{align*}</p> <p>and hence since the harmonic sequence $\frac{1}{n}$ converges towards 0 we have $$\text{lim}_{n\rightarrow\infty} \frac{1}{\sqrt{1+\frac{1}{n}}+1} = \frac{1}{1+1} = \frac{1}{2}\,._{\,\,\square}$$</p>
José Carlos Santos
446,262
<p>In order to solve (b), let $a=\sqrt[k]{n+1}$ and $b=\sqrt[k]n$. Then\begin{align}\sqrt[k]{n+1}-\sqrt[k]n&amp;=a-b\\&amp;=\frac{(a-b)(a^{k-1}+a^{k-2}b+\cdots+ab^{k-2}+b^{k-1})}{a^{k-1}+a^{k-2}b+\cdots+ab^{k-2}+b^{k-1}}\\&amp;=\frac{a^k-b^k}{a^{k-1}+a^{k-2}b+\cdots+ab^{k-2}+b^{k-1}}\\&amp;=\frac1{\sqrt[k]{n+1}^{k-1}+\sqrt[k]{n+1}^{k-2}\sqrt[k]n+\cdots+\sqrt[k]{n+1}\sqrt[k]n^{k-2}+\sqrt[k]n^{k-1}}\end{align}and therefore\begin{align}\lim_{n\to\infty}\sqrt[k]{n+1}-\sqrt[k]n&amp;=\lim_{n\to\infty}\frac1{\sqrt[k]{n+1}^{k-1}+\sqrt[k]{n+1}^{k-2}\sqrt[k]n+\cdots+\sqrt[k]{n+1}\sqrt[k]n^{k-2}+\sqrt[k]n^{k-1}}\\&amp;=0.\end{align}</p>
2,572,032
<p>I'm looking for help with <strong>(b)</strong> and <strong>(c)</strong> specifically. I'm posting <strong>(a)</strong> for completeness.</p> <p><strong>(a)</strong> Show convergence for $a_n=\sqrt{n+1}-\sqrt{n}$ towards $0$ and test $\sqrt{n}a_n$ for convergence.</p> <p><strong>(b)</strong> Show $b_n=\sqrt[k]{n+1}-\sqrt[k]{n}$ converges towards $0$ for all $k \geq 2$.</p> <p><strong>(c)</strong> For which $\alpha\in\mathbb{Q}_+$ does $n^\alpha b_n$ converge?</p> <hr> <p>I'm pretty sure I solved <strong>(a)</strong>. I have proven the convergence of $a_n$ by using the fact that $$\sqrt{n}&lt;\sqrt{n+1}\leq\sqrt{n}+\frac{1}{2\sqrt{n}}$$ which holds true since $$(\sqrt{n}+\frac{1}{2\sqrt{n}})^2=n+1+\frac{1}{4n}\geq n+1\,.$$ This gives us $$0&lt;\sqrt{n+1}-\sqrt{n}\leq\frac{1}{2\sqrt{n}}$$ and after applying the squeeze theorem with noting that $\frac{1}{2\sqrt{n}}\longrightarrow0$ we can tell that also $a_n\longrightarrow0$.</p> <p>Now $x_n=\sqrt{n}a_n=\sqrt{n}(\sqrt{n+1}-\sqrt{n})$.</p> <p>We have \begin{align*}\sqrt{n}(\sqrt{n+1}-\sqrt{n})&amp;=\sqrt{n}\sqrt{n+1}-\sqrt{n}\sqrt{n}\\&amp;=\sqrt{n(n+1)}-n\\&amp;=\sqrt{n^2+n}-n\\&amp;=\frac{(\sqrt{n^2+n}-n)(\sqrt{n^2+n}+n)}{\sqrt{n^2+n}+n}\\&amp;=\frac{n^2+n-n^2}{\sqrt{n^2+n}+n}\\&amp;=\frac{n}{\sqrt{n^2+n}+n}\\&amp;=\frac{n}{n\sqrt{1+\frac{1}{n}}+n}\\&amp;=\frac{1}{\sqrt{1+\frac{1}{n}}+1}\end{align*}</p> <p>and hence since the harmonic sequence $\frac{1}{n}$ converges towards 0 we have $$\text{lim}_{n\rightarrow\infty} \frac{1}{\sqrt{1+\frac{1}{n}}+1} = \frac{1}{1+1} = \frac{1}{2}\,._{\,\,\square}$$</p>
rtybase
22,583
<p><strong>Hint:</strong> Use the fact that $a^k-b^k=(a-b)(a^{k-1}+a^{k-2}b+...+ab^{k-2}+b^{k-1})$ where $a=\sqrt[k]{n+1}$ and $b=\sqrt[k]{n}$ or $$0&lt;a-b=\frac{a^k-b^k}{(a^{k-1}+a^{k-2}b+...+ab^{k-2}+b^{k-1})}&lt;\frac{1}{kb^{k-1}}=\frac{1}{k\sqrt[k]{n^{k-1}}}$$</p>
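<p>Not part of either answer: numerically, the bound in the hint forces $b_n \to 0$, and it also suggests the answer to part (c), namely that $n^\alpha b_n$ stays bounded exactly when $\alpha \le 1 - 1/k$. For $k = 3$:</p>

```python
k = 3

def b(n):
    return (n + 1) ** (1 / k) - n ** (1 / k)

for n in [10**3, 10**4, 10**5, 10**6]:
    print(n, b(n), n ** (1 - 1 / k) * b(n))
# b(n) shrinks to 0 while n^(2/3) * b(n) levels off near 1/3 = 1/k
```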
2,741,229
<p>I have searched a lot, but I haven't found any proof of that statement. I have checked the proof of</p> <blockquote> <p>If <span class="math-container">$f$</span> is differentiable, then <span class="math-container">$f$</span> is continuous</p> </blockquote> <p>but it's not the same argument, I think. Also, I want to know your opinion about the statement</p> <blockquote> <p>If the derivative of <span class="math-container">$f$</span> is not continuous, then <span class="math-container">$f$</span> is not continuous</p> </blockquote>
Steven Alexis Gregory
75,410
<p>$f'$ need not be continuous.</p> <p>Suppose that $f'(x)$ exists in the interval $(a,b)$. If $\xi \in (a,b)$, then $f'(\xi)$ exists. Hence $f$ is continuous at $\xi$. Since this is true for all $\xi$ in $(a,b)$, then $f$ is continuous on $(a,b)$.</p>
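<p>A standard counterexample (my addition, not part of the original answer) makes the first point concrete. Define</p> <p>$$f(x) = \begin{cases} x^2 \sin(1/x), &amp; x \neq 0 \\ 0, &amp; x = 0. \end{cases}$$</p> <p>Then $f'(0) = \lim_{h \to 0} h\sin(1/h) = 0$ exists, and for $x \neq 0$, $f'(x) = 2x\sin(1/x) - \cos(1/x)$, which has no limit as $x \to 0$. So $f$ is differentiable everywhere (hence continuous everywhere), yet $f'$ is discontinuous at $0$. The same example refutes the second quoted statement: $f'$ is not continuous, but $f$ certainly is.</p>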
129,132
<p>Both the ratio test and the root test define a number (via a limit).</p> <p>If both limits exist (and shows that the series is convergent), what (if any) is the relation between the 2 numbers ? are they equal ? What is the relation (if any) between them and the original series (other than the fact that they say the series is convergent) ?</p>
fny
28,533
<p>Both tests will yield the same property of the series:</p> <ol> <li>if $L&lt;1$ the series is absolutely convergent;</li> <li>if $L&gt;1$ the series is divergent;</li> <li>if $L=1$ the series may be divergent, conditionally convergent, or absolutely convergent.</li> </ol> <p>Edit: Note that if $L=1$ in the ratio test, the root test will also yield $L=1$. <em>The</em> converse <em>is <strong>not</strong> true.</em> (I had mistakenly asserted that it was earlier.)</p> <p>As for a relation... I'm not sure what you mean. The root test can be considered more comprehensive, as it yields an answer whenever the ratio test does, and sometimes even when the ratio test is inconclusive. Applying the ratio test, however, can be simpler in certain cases, or even necessary, as when dealing with factorials.</p>
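<p>A concrete illustration of "the converse is not true" (my addition): for $a_n = 2^{-n+(-1)^n}$ the ratio $a_{n+1}/a_n$ oscillates between $2$ and $1/8$, so the ratio-test limit does not exist, while $\sqrt[n]{a_n} \to 1/2$ and the root test succeeds.</p>

```python
def a(n):
    return 2.0 ** (-n + (-1) ** n)

ratios = [a(n + 1) / a(n) for n in range(1, 9)]
roots = [a(n) ** (1 / n) for n in (10, 100, 1000)]
print(ratios)  # alternates between 2.0 and 0.125
print(roots)   # tends to 0.5
```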
3,293,955
<p>Here is the curve I wish to plot with a function:</p> <p><a href="https://i.stack.imgur.com/4Smib.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4Smib.png" alt="elliptical curve"></a></p> <p>I expect the curve to be 1/4 of an ellipse, but I only have the coordinates to work with (minx,miny and maxx,maxy). I've been using the graphing tool at: <a href="http://itools.subhashbose.com/grapher" rel="nofollow noreferrer">http://itools.subhashbose.com/grapher</a> but I haven't been able to remember back to high school when we worked on functions like these (it's been over 30 years). Any help greatly appreciated.</p> <p>Note: answers came in quickly; I wish I had specified earlier that the equation should ideally have the form <code>f(x) = ?</code></p>
David C. Ullrich
248,223
<p>This immediate from two facts:</p> <p>(i) He might have been more explicit about the definition; in fact if <span class="math-container">$f\in L^1$</span> the maximal function is <span class="math-container">$$Mf(x)=\sup_{r&gt;0}\frac 1{m(B(x,r))}\int_{B(x,r)}|f|.$$</span></p> <p>(ii) If <span class="math-container">$A&gt;0$</span> is fixed and <span class="math-container">$r=|x|+A$</span> then <span class="math-container">$$B(0,A)\subset B(x,r);$$</span>also <span class="math-container">$r\sim |x|$</span> as <span class="math-container">$|x|\to\infty$</span>.</p>
2,794,962
<p>Given: $C$, a closed curve bounding the surface $\Sigma$, and $\vec{v}$, a constant vector.</p> <p>I should prove the following without using Stokes' Theorem: </p> <p>$$\oint_C \vec{v} \cdot d\vec{l} = 0$$</p> <p>How do I go about doing it for an arbitrary closed (even self-overlapping) curve? </p>
Arnaud Mortier
480,423
<p>Call $X$ the number of times your event occurs. Assume first that the number of occurrences is theoretically unlimited. Then $X+1$ follows a <a href="https://en.wikipedia.org/wiki/Geometric_distribution" rel="nofollow noreferrer">geometric distribution</a> of parameter $1-p$: $$\Bbb P(X+1=n)=p^{n-1}(1-p)\qquad \text{for $n\geq 1$}.$$</p> <p>Therefore $$E(X+1)=\frac1p,$$ which implies by linearity $$E(X)=\frac1p -1.$$</p> <p>Now since the number of occurrences is limited to $N$, the expected value is \begin{align*}E(X)&amp;=\left(\sum_{k=1}^{N}(k-1)\cdot\underbrace{p^{k-1}(1-p)}_{\text{probability that $X+1=k$, i.e. $X=k-1$}}\right)+N\underbrace{\left(1-\sum_{k=1}^{N}p^{k-1}(1-p)\right)}_{\text{probability that $X=N$}}\\&amp;=N+(1-p)\sum_{k=1}^{N}(k-N-1)p^{k-1}.\end{align*} Note that there are closed formulas for partial geometric sums $\sum_{k=1}^n x^k$ <strong>and</strong> for the derivative $\sum_{k=1}^n kx^{k-1}$ as well, so you can turn the above into a closed formula if you need to.</p>
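<p>Not part of the original answer: the closed form at the end can be cross-checked against a direct computation of $E(X)$ with exact rational arithmetic (the function names are mine):</p>

```python
from fractions import Fraction

def expected_direct(p, N):
    # P(X = j) = p^j (1 - p) for j < N, and P(X = N) = p^N
    return sum(j * p**j * (1 - p) for j in range(N)) + N * p**N

def expected_formula(p, N):
    # E(X) = N + (1 - p) * sum_{k=1}^{N} (k - N - 1) p^(k-1)
    return N + (1 - p) * sum((k - N - 1) * p ** (k - 1) for k in range(1, N + 1))

p = Fraction(1, 2)
for N in range(1, 8):
    assert expected_direct(p, N) == expected_formula(p, N)
print(expected_direct(p, 3))  # 7/8
```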
1,652,297
<p><strong>Thm</strong> Let $V$ and $W$ be vector spaces and let $T:V \to W$ be linear </p> <p>If $\beta = \{ v_1,\dots ,v_n \}$ is a basis for $V$ then $$ R(T)=\text{span}(T(\beta))=\text{span}(\{ T(v_1),\dots,T(v_n) \} ) $$</p> <hr> <p><strong>Dimension Theorem</strong> </p> <p>Let $V$ and $W$ be vector spaces and let $T:V \to W$ be linear </p> <p>If $V$ is finite dimensional then $\text{Nullity}(T)+\text{Rank}(T)=\text{dim}(V)$</p> <hr> <p>My impression is that $\text{Dim}(V)=\text{Rank}(T)$, since if $\text{dim}(V)=2$ there are $2$ vectors in the basis, and $T$ applied to that basis will give a basis for the image of $T$, so $\text{Rank}(T)=\text{dim}(V)$. </p> <p>I asked the teacher when class was over but was told that was not the case. I did get an answer from the professor, but I do not know whether he did not understand my question or I did not understand his answer.</p>
Alex Wertheim
73,817
<p>Let's look at your specific claim: that if $\dim V$ is $2$, then are $2$ vectors in the basis of $V$ (call them $v_{1}, v_{2}$) and the images $T(v_{1}), T(v_{2})$ form a basis for the image of $T$. This is a perfectly reasonable sounding thing to suggest - unfortunately, it's also totally false! </p> <p>Here's an explicit example for you. Consider the linear transformation $T \colon \mathbb{R}^{2} \to \mathbb{R}^{1}$ given by projection onto the first coordinate, i.e. $T((a, b)) = a$ for all $(a, b) \in \mathbb{R}^{2}$. This is a perfectly good linear transformation, as you can check yourself. The images of the standard basis vectors $(1, 0), (0, 1)$ are given by $T((1, 0)) = 1$ and $T((0, 1)) = 0$. But these are clearly linearly dependent elements of the image of $T$, since any set of vectors that includes the zero vector is linearly dependent. </p> <p>It might seem like the example above was dependent on the basis, but this isn't the case. It's not hard to show that the linear transformation $T$ defined above is surjective, i.e. the image of $T$ is all of $\mathbb{R}^{1}$. The dimension of $\mathbb{R}^{1}$ (as an $\mathbb{R}$ vector space, of course) is $1$, so any two vectors in $\mathbb{R}^{1}$ are linearly dependent. Thus, the example above would've worked for any two basis vectors for $\mathbb{R}^{1}$. The problem is that we can perfectly reasonably have linear transformations from higher dimensional vector spaces onto lower dimensional vector spaces over the same field. If this is the case, then the image of $T$ has dimension bounded by the dimension of the target space, so the rank of $T$ can't be as big as the dimension of $V$. </p> <p>This is concretely realized in the form of so called 'nullity', or having a nontrivial kernel. In the example above, any element of the form $(0, a) \in \mathbb{R}^{2}$ gets sent to $0$ by $T$. 
I encourage you to play around with examples of linear transformations to get a better understanding of why rank-nullity works the way it does, and to see why your specific claim is false. </p> <p>Let me also prove mathers101's claim in the comments. Let $T \colon V \to W$ be a linear transformation between vector spaces with scalar field $F$ (if you have only worked with real or complex vector spaces, just take $F = \mathbb{R}$ or $F = \mathbb{C}$), and suppose for $v_{1}, \ldots, v_{n} \in V$, the images $T(v_{1}), \ldots, T(v_{n})$ are linearly independent in $W$. Then we claim that $v_{1}, \ldots, v_{n}$ are linearly independent elements of $V$. Suppose not, i.e. there exist $a_{1}, \ldots, a_{n} \in F$ not all zero such that $a_{1}v_{1} + \cdots + a_{n}v_{n} = 0$. Then </p> <p>$$0 = T(0) = T(a_{1}v_{1} + \cdots + a_{n}v_{n}) = a_{1}T(v_{1})+\cdots+a_{n}T(v_{n})$$</p> <p>contradicting the linear independence of $T(v_{1}), \ldots, T(v_{n})$. </p>
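<p>As a quick numerical sanity check of the discussion above (my addition, not part of the original answer; it assumes NumPy is available), the projection example can be written as a $1\times 2$ matrix and tested against rank-nullity:</p>

```python
import numpy as np

# Projection T : R^2 -> R^1, T((a, b)) = a, written as a 1x2 matrix
T = np.array([[1.0, 0.0]])

# Images of the standard basis vectors (1, 0) and (0, 1)
img_e1 = T @ np.array([1.0, 0.0])  # [1.0]
img_e2 = T @ np.array([0.0, 1.0])  # [0.0] -- the zero vector, so the
                                   # images are linearly dependent

rank = np.linalg.matrix_rank(T)    # dimension of the image of T
nullity = T.shape[1] - rank        # dimension of the kernel of T
print(rank, nullity)               # 1 1 -- rank is NOT dim(V) = 2
```

<p>Rank-nullity holds ($1+1=2$), but the rank is strictly smaller than $\dim V$, exactly as the answer explains.</p>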
2,829,990
<p>I want to calculate</p> <p><span class="math-container">$$ \lim_{n \to \infty} \int_{(0,1)^n} \frac{n}{x_1 + \cdots + x_n} \, dx_1 \cdots dx_n $$</span></p> <p>I met this while studying the Lebesgue integral, but I don't know how to approach it at all. I would really appreciate it if you could help me!</p> <p>[Add]</p> <p>Thanks to everybody who gave me comments, I can understand the following:</p> <p><span class="math-container">\begin{align*} \lim_{n \to \infty} \int_{(0,1)^n} \frac{n}{x_1 + \cdots + x_n} dx_1 \cdots dx_n &amp;=\lim_{n \to \infty} n\int_0^\infty \bigg(\frac{1-e^{-t}}{t}\bigg)^n\,dt \end{align*}</span></p> <p>and</p> <p><span class="math-container">\begin{align*} \int_0^\infty \bigg(\frac{1-e^{-t}}{t}\bigg)^n\,dt &amp;=\frac{n}{(n-1)!}\sum_{i=0}^{n-1}{ n-1 \choose i} (-1)^{n-1-i} (i+1)^{n-2}\log(i+1) \end{align*}</span></p> <p>and</p> <p><span class="math-container">\begin{align*} \int_0^\infty \bigg(\frac{1-e^{-t}}{t}\bigg)^n\,dt &amp;=\int_0^\infty \frac{z^{n-1}}{(n-1)!}\, \mathrm{Beta}(z,n+1)\,dz\\ &amp;=n\,\int_0^\infty \frac{z^{n-1}}{z(z+1)\cdots(z+n)}\,dz \end{align*}</span></p> <p>But I can't evaluate these integrals or the limit. Please let me know if you find out.</p>
achille hui
59,379
<p>The limit is $2$. </p> <p>This might be overkill, but it allows you to compute the asymptotic expansion instead of just the limit.</p> <hr> <p>Let $d^n x$ be a short hand for $dx_1\cdots dx_n$</p> <p>Notice for $x_1,\ldots,x_n &gt; 0$, we have $\displaystyle\;\frac{1}{\sum_{k=1}^n x_k} = \int_0^\infty e^{-t\sum_{k=1}^n x_k} dt$</p> <p>Using this, we can rewrite the integral at hand as</p> <p>$$\begin{align} \mathcal{I}_n \stackrel{def}{=} \int_{(0,1)^n}\frac{d^n x}{\sum_{k=1}^n x_k} = &amp; \int_{(0,1)^{n}}\int_0^\infty e^{-t\sum_{k=1}^n x_k} dt d^n x = \int_0^\infty \int_{(0,1)^{n}} e^{-t\sum_{k=1}^n x_k}d^n x dt\\ = &amp; \int_0^\infty \left(\int_0^1 e^{-tx}\,dx\right)^n dt = \int_0^\infty \left(\frac{1-e^{-t}}{t}\right)^n dt \end{align} $$ Since all the integrands in the above integrals (in the given grouping) are non-negative, we can use <a href="https://en.wikipedia.org/wiki/Fubini&#39;s_theorem#Tonelli%27s_theorem_for_non-negative_functions" rel="noreferrer">Tonelli's theorem</a> to justify the manipulation above.</p> <p>Consider the function $\frac{1-e^{-t}}{t}$. Over $(0,\infty)$, it is smooth and decreasing from $1$ at $t = 0^{+}$ to $0$ as $t \to \infty$. Over $\mathbb{C}$, it is entire and its zeros closest to the origin are located at $\pm 2\pi i$. </p> <p>Let $s = -\log \frac{1-e^{-t}}{t} \iff \frac{1-e^{-t}}{t} = e^{-s}$. As a function of $t$, $s(t)$ is smooth on $(0,\infty)$, increasing from $0$ at $t = 0^{+}$ to $\infty$ as $t \to \infty$. Its power series expansion at $t = 0$ has radius of convergence $2\pi$: $$s(t) = \frac{t}{2}-\frac{{t}^{2}}{24}+\frac{{t}^{4}}{2880}-\frac{{t}^{6}}{181440}+\frac{{t}^{8}}{9676800} + \cdots$$</p> <p>This implies that, as a function of $s$, $t(s)$ has a power series expansion with finite radius of convergence at $s = 0$. Changing variable to $s$, we obtain $$n\mathcal{I_n} = n\int_0^\infty e^{-ns} \frac{dt}{ds} ds \tag{*1}$$</p> <p>Notice for large $s$, $t \sim e^s \implies \frac{dt}{ds} \sim e^s$. 
Together with the above properties of $s$, we can apply <a href="https://en.wikipedia.org/wiki/Watson&#39;s_lemma" rel="noreferrer">Watson's lemma</a> to extract the asymptotic behavior of $\mathcal{I}_n$ as $n \to \infty$.</p> <p>With the help of a CAS, we can use the <a href="https://en.wikipedia.org/wiki/Lagrange_inversion_theorem" rel="noreferrer">Lagrange inversion theorem</a> to obtain the following expansion of $\frac{dt}{ds}$:</p> <p>$$\frac{dt}{ds} = 2+\frac{2s}{3}+\frac{s^2}{3}+\frac{19s^3}{135}+\frac{17s^4}{324} + \cdots$$ Substituting this back into $(*1)$, we can read off the asymptotic expansion of $n\mathcal{I}_n$ as $$n\mathcal{I}_n \asymp 2+ \frac{2}{3n} + \frac{2}{3n^2} + \frac{38}{45n^3} + \frac{34}{27n^4} + \cdots $$ As a corollary, the desired limit is $\lim_{n\to\infty} n\mathcal{I}_n = 2$.</p>
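<p>As a numerical sanity check (my addition, not part of the original answer; it assumes SciPy is available), one can evaluate $n\mathcal{I}_n = n\int_0^\infty\left(\frac{1-e^{-t}}{t}\right)^n dt$ directly and watch it approach $2$:</p>

```python
import numpy as np
from scipy.integrate import quad

def n_I(n):
    """Compute n * integral_0^inf ((1 - e^{-t}) / t)^n dt numerically."""
    def integrand(t):
        if t == 0.0:
            return 1.0      # removable singularity: (1 - e^{-t})/t -> 1
        return ((1.0 - np.exp(-t)) / t) ** n
    val, _err = quad(integrand, 0.0, np.inf, limit=200)
    return n * val

for n in (10, 50, 200):
    print(n, n_I(n))        # tends to 2, roughly like 2 + 2/(3n)
```

<p>The values track the asymptotic expansion $2 + \frac{2}{3n} + \cdots$ quite closely even for modest $n$.</p>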
172,617
<p>I need to plot two datasets on the same plot. The datasets have the same x-range. However, I want to show only parts of the plot. </p> <p>A minimal example would be</p> <pre><code> h = π/100.; i1 = ListLinePlot[Table[{i*h, Sin[i*h]}, {i, 0, 100}], PlotStyle -&gt; Red]; i2 = ListLinePlot[Table[{i*h, Cos[i*h]}, {i, 0, 100}], PlotStyle -&gt; Blue]; l1 = Graphics[{Black, Dashed, Line[{{π/2, -1}, {π/2, 1}}]}]; Show[{i1, i2, l1}, PlotRange -&gt; All] </code></pre> <p>The output is the following</p> <p><a href="https://i.stack.imgur.com/JsUxu.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/JsUxu.jpg" alt="enter image description here"></a></p> <p>But, the plot I want is </p> <p><a href="https://i.stack.imgur.com/rJ3wG.png" rel="noreferrer"><img src="https://i.stack.imgur.com/rJ3wG.png" alt="enter image description here"></a></p> <p>Can anyone please help? Thanks in advance.</p>
Nasser
70
<p>This may be a hack, as I do not see a direct option to do this at the moment.</p> <p>Combine the 2 list plots into one using <code>Epilog</code>, then use <code>Show</code>:</p> <pre><code>red = Table[{i*h,Sin[i*h]},{i,0,100}]; blue = Table[{i*h,Cos[i*h]},{i,0,100}]; vLine = Graphics[{Black, Dashed, Line[{{Pi/2, -1}, {Pi/2, 1}}]}]; p = ListLinePlot[blue[[1;;51]], PlotStyle-&gt;Blue, PlotRange-&gt;{{0,3},Automatic}, Epilog-&gt;First@ListLinePlot[red[[51;;-1]],PlotStyle-&gt;Red]]; moment=Show[{p,vLine},PlotRange-&gt;{{0,3},{-1,1}}] </code></pre> <p><img src="https://i.stack.imgur.com/cvABb.png" alt="Mathematica graphics"></p>
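<p>For readers outside Mathematica (my addition, not part of the original answer), the same idea — slice each dataset at the switch-over index before plotting — can be sketched with NumPy and matplotlib:</p>

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")            # headless backend, safe for scripts
import matplotlib.pyplot as plt

h = np.pi / 100
t = h * np.arange(101)           # same grid as the Mathematica Table
sin_y, cos_y = np.sin(t), np.cos(t)

cut = 50                         # index where t = pi/2, the switch-over point
fig, ax = plt.subplots()
ax.plot(t[:cut + 1], cos_y[:cut + 1], color="blue")   # cos on [0, pi/2]
ax.plot(t[cut:], sin_y[cut:], color="red")            # sin on [pi/2, pi]
ax.axvline(np.pi / 2, color="black", linestyle="--")  # dashed divider
ax.set_xlim(0, 3)
ax.set_ylim(-1, 1)
fig.savefig("split_plot.png")
```

<p>The slicing plays the role of <code>blue[[1;;51]]</code> and <code>red[[51;;-1]]</code> in the Mathematica code above.</p>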
954,376
<p>Beltrami made (out of thin paper or stiff or starched cloth, which is not mentioned) a model of a surface of constant negative Gauss curvature <span class="math-container">$ K=-1/c^2$</span>. The original might have resembled a <em>large saddle shaped</em> Pringles chip, and frills might have developed by sagging with time, if one were to make a guess.</p> <p>Also it is now easy to guess that the mold he used had dimensions comparable to the length and breadth of the table on which this is/was placed. The narrow neck he obtained is/was by <em>hand rolling into a tight paper scroll</em>, imo.</p> <p>Is there a higher definition photograph with more details available?</p> <p>More importantly, does anyone know its modern-day parametrization?</p> <p>The original photograph is given on page 133 as Fig 56, Roberto Bonola <em>Non-Euclidean Geometry</em>, 1955 Edition, Dover, ISBN 0-486-60027-0. (Thanks in advance to anyone for scanning and posting it here).</p> <p>The following is inserted from Daina Taimina's blog ( <a href="http://hyperbolic-crochet.blogspot.co.uk/2010/07/story-about-origins-of-model-of.html" rel="nofollow noreferrer">http://hyperbolic-crochet.blogspot.co.uk/2010/07/story-about-origins-of-model-of.html</a> ):</p> <p><img src="https://i.stack.imgur.com/sf7Ul.png" alt="Beltrami pseudosphere made by himself" /></p> <p>It may be recalled that all such shapes can be scrolled up tight, somewhat like a table napkin rolled by hand into a smaller ring, maintaining parallelism among previous parallels due to the isometry or constant <span class="math-container">$K$</span> property, into a newer smaller scroll.</p>
Alan
92,834
<p><img src="https://i.stack.imgur.com/KuuGK.jpg" alt="enter image description here" /></p> <p>Here is a parametrization of the Pseudosphere:</p> <p><span class="math-container">$$ x(u,v) = a \, \frac{\cos(v)}{\cosh(u)} $$</span></p> <p><span class="math-container">$$ y(u,v) = a \, \frac{\sin(v)}{\cosh(u)} $$</span></p> <p><span class="math-container">$$ z(u,v) = a\left(u - \tanh(u)\right) $$</span></p> <p>with <span class="math-container">$ v \in [0,2\pi) $</span>, <span class="math-container">$ u \in (-\infty, \infty) $</span></p> <p>And a useful link to the original model of Beltrami: <a href="http://www.mathcurve.com/surfaces/pseudosphere/pseudosphere.shtml" rel="nofollow noreferrer">http://www.mathcurve.com/surfaces/pseudosphere/pseudosphere.shtml</a></p> <p>(bottom of the page, on the right)</p>
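<p>As a check that this really is a surface of constant negative curvature (my addition, not part of the original answer; SymPy assumed available), one can compute the Gauss curvature from the first and second fundamental forms and confirm $K=-1/a^2$:</p>

```python
import sympy as sp

u, v, a = sp.symbols('u v a', positive=True)

# The parametrization given above
r = sp.Matrix([a * sp.cos(v) / sp.cosh(u),
               a * sp.sin(v) / sp.cosh(u),
               a * (u - sp.tanh(u))])

ru, rv = r.diff(u), r.diff(v)
E, F, G = ru.dot(ru), ru.dot(rv), rv.dot(rv)   # first fundamental form

n = ru.cross(rv)
n = n / sp.sqrt(n.dot(n))                      # unit normal
L = ru.diff(u).dot(n)                          # second fundamental form
M = ru.diff(v).dot(n)
N = rv.diff(v).dot(n)

K = sp.simplify((L * N - M**2) / (E * G - F**2))  # Gauss curvature
print(K)  # numerically equal to -1/a**2 wherever the surface is regular
```

<p>Evaluating $K$ at any regular point (e.g. $u=1$, $a=2$) gives $-1/a^2 = -1/4$.</p>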
2,533,834
<p>For a complex number $z$, I came across a statement that $\ln(e^{z})$ is not always equal to $z$. Why is this true?</p> <p>Thanks for the help.</p>
Oscar Lanzi
248,217
<p>Render</p> <p>$\ln(n)=\int_1^n\frac{dx}{x}$</p> <p>Show, by putting in $u=x/n$, that</p> <p>$\ln(2n)-\ln(n)=\int_1^2\frac{du}{u}=\ln(2)$</p> <p>Then for instance</p> <p>$\ln(5)&gt;\ln(4)=\ln(2)+\ln(2)=2\ln(2),$</p> <p>$\ln(12)&gt;\ln(8)=\ln(2)+\ln(4)=3\ln(2),$</p> <p>and as $n$ increases without bound the coefficient of $\ln(2)$ does the same. Compare with proving the divergence of the harmonic series.</p>
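<p>Coming back to the question as stated — why $\ln(e^z)$ can differ from $z$ for complex $z$ — here is a quick numeric illustration (my addition, using Python's <code>cmath</code>, whose <code>log</code> returns the principal branch, with imaginary part in $(-\pi,\pi]$):</p>

```python
import cmath

z = 4j                        # imaginary part 4 lies outside (-pi, pi]
w = cmath.log(cmath.exp(z))   # principal-branch logarithm

print(w.imag)                 # 4 - 2*pi, not 4
print(abs(w - z))             # 2*pi: log(exp(z)) differs from z by 2*pi*i
```

<p>Because $e^z$ is $2\pi i$-periodic, the logarithm can only recover $z$ up to a multiple of $2\pi i$.</p>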
195,790
<blockquote> <p><strong>Possible Duplicate:</strong><br> <a href="https://math.stackexchange.com/questions/19796/name-of-this-identity-int-e-alpha-x-cos-beta-x-space-dx-frace-al">Name of this identity? $\int e^{\alpha x}\cos(\beta x) \space dx = \frac{e^{\alpha x} (\alpha \cos(\beta x)+\beta \sin(\beta x))}{\alpha^2+\beta^2}$</a> </p> </blockquote> <p>I might have missed a technique from Calc 2, but this integral is holding me up. When I checked with WolframAlpha, it used a formula I didn't recognise.</p> <blockquote> <p>How do I solve $\int e^{-t/2}\sin(3t) dt$?</p> </blockquote> <p>The formula WolframAlpha uses is this:</p> <p>$$\int e^{\alpha t}\sin(\beta t)dt=\frac{e^{\alpha t}(-\beta \cos(\beta t)+\alpha \sin(\beta t))}{\alpha ^2+\beta ^2}$$</p> <p>I don't know where this formula comes from.</p>
DonAntonio
31,254
<p>$$\lim_{h\to 0}\frac{f(x+h)-f(x)}{h}=\lim_{h\to 0}\frac{\frac{2}{(x+h)^2}-\frac{2}{x^2}}{h}=\lim_{h\to 0}\frac{-4xh-2h^2}{x^2(x+h)^2h}=$$</p> <p>$$=\lim_{h\to 0}\frac{-4x-2h}{x^2(x+h)^2}=-\frac{4x}{x^4}=-\frac{4}{x^3}$$</p>
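<p>As for the formula quoted in the question (my addition): it comes from integrating by parts twice and solving for the integral. Writing $I=\int e^{\alpha t}\sin(\beta t)\,dt$,</p>

```latex
\begin{aligned}
I &= \int e^{\alpha t}\sin(\beta t)\,dt
   = \frac{e^{\alpha t}\sin(\beta t)}{\alpha}
   - \frac{\beta}{\alpha}\int e^{\alpha t}\cos(\beta t)\,dt \\
  &= \frac{e^{\alpha t}\sin(\beta t)}{\alpha}
   - \frac{\beta}{\alpha}\left(\frac{e^{\alpha t}\cos(\beta t)}{\alpha}
   + \frac{\beta}{\alpha}\,I\right),
\end{aligned}
```

<p>so $\left(1+\frac{\beta^2}{\alpha^2}\right)I = \frac{e^{\alpha t}}{\alpha^2}\bigl(\alpha\sin(\beta t)-\beta\cos(\beta t)\bigr)$, which rearranges to the quoted formula. For the specific integral, set $\alpha=-\tfrac12$ and $\beta=3$.</p>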
2,508,499
<p>How does it hold that $\mathbb R \subseteq B(0,2)$, where $\big\langle\mathbb R,d\big\rangle$ is a metric space and $d$ is the discrete metric?</p> <p>By doing so we would show that $\mathbb R$ is bounded.</p>
Ross Millikan
1,827
<p>Look up the definition of a ball. $B(a,b)$ is all the points within a distance $b$ of $a$ with the given metric, that is $\{c|d(a,c) \lt b\}$. What is the distance between $0$ and $1000$ in the discrete metric? Hint: it is not $1000$.</p>
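<p>To make the hint concrete (my addition, not part of the original answer), the discrete metric and the ball-membership test fit in a few lines of Python:</p>

```python
def d(a, b):
    """Discrete metric: 0 if the points coincide, 1 otherwise."""
    return 0 if a == b else 1

def in_ball(c, center=0, radius=2):
    """Membership in the open ball B(center, radius) = {c : d(center, c) < radius}."""
    return d(center, c) < radius

print(d(0, 1000))     # 1 -- NOT 1000
print(in_ball(1000))  # True: every real number lies in B(0, 2)
```

<p>Since every point is at distance at most $1$ from $0$, the whole space sits inside $B(0,2)$, so $\mathbb R$ is bounded in this metric.</p>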
276,200
<p>Could anyone tell me how to show the following? I have no idea where to start; thank you for your help.</p> <p>$1)$ $C[0,1]$ is not locally compact.</p> <p>$2$) $\mathbb{R}^{\infty}$ is not locally compact, where $\mathbb{R}^{\infty}=\{x=\{x_n\}:\sum_{n=1}^{\infty} |x_n|^2&lt;\infty\}$ and $||x||=(\sum_{n=1}^{\infty} |x_n|^2)^{\frac{1}{2}}$</p>
Marc van Leeuwen
18,880
<p>I suppose you mean the ring of polynomials over power series, $\mathbf C[[x]][y]$. A nontrivial decomposition of those polynomials, made monic in $y$, would require finding in $\mathbf C[[x]]$ a square root of the negated constant term (the coefficient of $y^0$) in both cases. Now $x^3$ cannot have a square root because it has odd valuation. However $x^2-x^3$ has valuation $2$, and a square root would be $x$ times $\sqrt{1-x}$, where the latter square root exists in $\mathbf C[[x]]$ by Newton's binomial formula $$ (1-x)^\frac12 = \sum_{k=0}^\infty\binom{1/2}k(-x)^k. $$</p>
276,200
<p>Could anyone tell me how to show the following? I have no idea where to start; thank you for your help.</p> <p>$1)$ $C[0,1]$ is not locally compact.</p> <p>$2$) $\mathbb{R}^{\infty}$ is not locally compact, where $\mathbb{R}^{\infty}=\{x=\{x_n\}:\sum_{n=1}^{\infty} |x_n|^2&lt;\infty\}$ and $||x||=(\sum_{n=1}^{\infty} |x_n|^2)^{\frac{1}{2}}$</p>
Ted
15,012
<p>Assuming C{x} means the power series ring in $x$ over the complex numbers,</p> <p>$$f(x,y) = y^2 + x^3 - x^2 = y^2 - (x^2 - x^3) = y^2 - x^2(1-x) = (y - x \sqrt{1-x})(y + x \sqrt{1-x})$$ and $\sqrt{1-x}$ can be expanded as a power series in $x$. Therefore $f$ is reducible.</p>
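<p>Both the factorization and the power-series expansion of $\sqrt{1-x}$ can be checked symbolically (my addition, not part of the original answer; SymPy assumed available):</p>

```python
import sympy as sp

x, y = sp.symbols('x y')

# The two factors from the answer above
g = y - x * sp.sqrt(1 - x)
h = y + x * sp.sqrt(1 - x)

# Their product recovers f(x, y) = y^2 + x^3 - x^2
f = sp.expand(g * h)
assert sp.simplify(f - (y**2 + x**3 - x**2)) == 0

# sqrt(1 - x) really is a power series in x (Newton's binomial formula)
expansion = sp.series(sp.sqrt(1 - x), x, 0, 4)
print(expansion)  # 1 - x/2 - x**2/8 - x**3/16 + O(x**4)
```

<p>The leading coefficients $1,\,-\tfrac12,\,-\tfrac18,\,-\tfrac{1}{16}$ match $\binom{1/2}{k}(-1)^k$, as in the other answer.</p>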