1,502,309
<p>The initial notation is:</p> <p>$$\sum_{n=5}^\infty \frac{8}{n^2 -1}$$</p> <p>I get to about here, then I get confused.</p> <p>$$\left(1-\frac{2}{3}\right)+\left(\frac{4}{5}-\frac{4}{7}\right)+...+\left(\frac{4}{n-3}-\frac{4}{n-1}\right)+...$$</p> <p>How do you figure out how to get the $\frac{1}{n-3}-\frac{1}{n-1}$ and so on? Like, where does the $n-3$ come from, or the $n-1$?</p>
Jeevan Devaranjan
220,567
<p>Factor $n^2 - 1$ as a difference of squares. Then you will end up with the fraction $\frac{8}{(n-1)(n+1)}$. You can now use partial fractions to solve the problem.</p>
430,654
<p>Show that this sequence converges and find the limit. $a_1 = 0$, $a_{n+1} = \sqrt{5+2a_{n} }$ </p>
Samrat Mukhopadhyay
83,973
<p>To show that the sequence $\{a_n\}_{n\geq 1}$ converges to some limit $L$, I shall show that the function $f:\mathbb{R^+}\rightarrow \mathbb{R^+}$ given by $f(x)=\sqrt{5+2x}$ is a contraction on $\mathbb{R}^+$, and hence it has a unique fixed point $L&gt;0$. Then by the argument @Brian M. Scott provided, we have $L=1+\sqrt{6}$. </p> <p>To show that $f$ is a contraction on $\mathbb{R}^+$, let, for all $x,y\in \mathbb{R^+},\ $ $d_1(x,y)=|x-y|$ and $d_2(x,y)=|f(x)-f(y)|$. Then $$d_2(x,y)=\frac{2d_1(x,y)}{|\sqrt{5+2x}+\sqrt{5+2y}|}&lt;\frac{1}{\sqrt{5}}d_1(x,y)$$ So, by definition, $f$ is a contraction and hence $L&gt;0$ exists.$\hspace{6cm} \ \Box$</p>
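As a numerical sanity check (a Python sketch, not part of the proof), iterating the recurrence converges quickly to the claimed limit $1+\sqrt 6$, consistent with the contraction factor being below $1/\sqrt 5$:

```python
from math import sqrt

a = 0.0  # a_1 = 0
for _ in range(60):
    a = sqrt(5 + 2 * a)  # a_{n+1} = sqrt(5 + 2 a_n)

limit = 1 + sqrt(6)  # positive root of L^2 - 2L - 5 = 0
print(a, limit)
```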
5,897
<p>The following creates a button to select a notebook to run. When the button is pressed it seems that Mathematica finds the notebook but cannot evaluate it. The following error occurs</p> <blockquote> <p>Could not process unknown packet "1"</p> </blockquote> <pre><code>Button["run file 1", NotebookEvaluate[ "/../file1.nb"]] </code></pre> <p>This occurs under Mathematica 8 on all platforms.</p> <p>Any help greatly appreciated, Christina</p>
Mr.Wizard
121
<p>In the comments celtschk suggested <code>Button[..., Method -&gt; "Queued"]</code> and Christina confirmed it as a solution.</p>
467,301
<p>I'm reading Intro to Topology by Mendelson.</p> <p>The problem statement is in the title.</p> <p>My attempt at the proof is:</p> <p>Since $X$ is a compact metric space, for each $n\in\mathbb{N}$, there exists $\{x_1^n,\dots,x_p^n\}$ such that $X\subset\bigcup\limits_{i=1}^p B(x_i^n;\frac{1}{n})$. Let $K=\frac{2p}{n}$. Then for each $x,y\in X$, $x\in B(x_i^n;\frac{1}{n})$ and $y\in B(x_j^n;\frac{1}{n})$ for some $i,j=1,\dots,p$. Thus, $d(x,y)\leq\frac{2p}{n}$.</p> <p>The approach I was taking is taking $K$ to be the addition of the diameters of each open ball in the covering for $X$, that way, for any two elements in $X$, the distance between them must be less than the overall length of the covering. Did I say this mathematically or are there holes I need to fill in?</p> <p>Thanks for any help or feedback!</p>
Norbert
19,538
<p>Fix $p\in X$, then the function $f:X\to\mathbb{R}_+:x\mapsto d(x,p)$ is a continuous function on a compact space $X$. Hence it is bounded, i.e. there exists $K&gt;0$ such that for all $x\in X$ we have $d(x,p)\leq K/2$. Now take arbitrary $x,y\in X$; then $$ d(x,y)\leq d(x,p)+d(p,y)\leq K $$</p>
130,806
<p><strong>Question:</strong> Let $f$ be a continuous and differentiable function on $[0, \infty[$, with $f(0) = 0$ and such that $f&#39;$ is an increasing function on $[0, \infty[$. Show that the function $g$, defined on $[0, \infty[$ by $$g(x) = \begin{cases} \frac{f(x)}{x}, &amp; x\gt0\\ f&#39;(0), &amp; x=0 \end{cases}$$ is an increasing function.</p> <p>I have tried to solve this problem but I don't know whether I have done it right or not. </p> <p><strong>Solution:</strong> I have applied the mean value theorem on the interval $[0, x]$. Then, $$\frac{f(x)}{x} =f&#39;(c) , \quad 0\lt c \lt x$$ </p> <p>It is given that $f&#39;$ is an increasing function. So I deduce that $\frac{f(x)}{x}$ is also increasing.</p> <p>Further, $$g(x) = f&#39;(c) \text{ such that } 0&lt;c&lt;x$$ Therefore, $$g(0) =f&#39;(c) \text{ such that } 0&lt;c&lt;0$$ So, $c=0$</p> <p>Thus $g(x) = f&#39;(0)$ at $x=0$</p>
chemeng
25,845
<p>Differentiating, we get: $$g'(x)=\frac{f'(x)\cdot x-f(x)}{x^2}$$ We want to show that $g&#39;(x)&gt;0 \ \Leftrightarrow \ f&#39;(x)\cdot x-f(x) &gt; 0 \ (1)\ \forall \ x &gt; 0 $. Rearranging (1), we get: $$f&#39;(x) &gt; \frac{f(x)}{x} \ (2)$$ Applying the mean value theorem on $[0,x]$, we get $f&#39;(c)=\frac{f(x)}{x}$, $ 0&lt;c&lt;x$. So we want to show that $f&#39;(x) &gt; f&#39;(c) \ \forall x &gt; 0$. But since $f'$ is an increasing function, we get: $$ x &gt; c \Leftrightarrow f&#39;(x)&gt;f&#39;(c)=\frac{f(x)}{x}$$ So (2) holds; hence $g&#39;(x) &gt; 0 \ \forall \ x &gt;0 $, and $g(x)$ is an increasing function on $[0,\infty).$</p>
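As an illustrative check (Python sketch; $f(x)=e^x-1$ is one example of my choosing with $f(0)=0$ and $f&#39;$ increasing), sampling $g$ on a grid confirms it is increasing:

```python
from math import exp

def g(x):
    # g(x) = f(x)/x for x > 0, g(0) = f'(0) = 1, with f(x) = e^x - 1
    return (exp(x) - 1) / x if x > 0 else 1.0

xs = [i * 0.01 for i in range(1001)]  # grid on [0, 10]
vals = [g(x) for x in xs]
assert all(u < v for u, v in zip(vals, vals[1:]))
print("g is increasing on the sampled grid")
```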
3,352,691
<blockquote> <p>Determine the greatest possible value of <span class="math-container">$$\sum_{i=1}^{10}{\cos 3x_i}$$</span> for real numbers <span class="math-container">$x_1,x_2,\dots,x_{10}$</span> satisfying <span class="math-container">$$\sum_{i=1}^{10}{\cos x_i}=0$$</span></p> </blockquote> <p><strong>My attempt</strong>:</p> <p><span class="math-container">$$\sum \cos 3x = \sum 4\cos^3x -\sum3\cos x=4\sum \cos^3 x $$</span></p> <p>So now we have to maximize the sum of cubes of ten numbers when their sum is zero and each lies in the interval <span class="math-container">$[-1,1]$</span>. I often use AM-GM inequalities, but here there are 10 numbers and they are not even positive. I need help on how to visualize and approach these kinds of questions.</p>
Allawonder
145,126
<p><em>Hint:</em> Use the fact that <span class="math-container">$$\cos a + \cos b=2\cos\left(\frac12(a+b)\right)\cos\left(\frac12(a-b)\right).$$</span></p> <p>If you pair the summands and apply above transformation, then the sum becomes a product with <span class="math-container">$10$</span> cosine factors, and a scaling factor of <span class="math-container">$2^5.$</span> So now at least we have an estimate of the sum.</p>
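A quick numerical check of the sum-to-product identity used in the hint (an illustrative Python sketch):

```python
from math import cos, isclose
from random import Random

# Verify cos a + cos b = 2 cos((a+b)/2) cos((a-b)/2) on random inputs
rng = Random(0)
for _ in range(1000):
    a, b = rng.uniform(-10, 10), rng.uniform(-10, 10)
    lhs = cos(a) + cos(b)
    rhs = 2 * cos((a + b) / 2) * cos((a - b) / 2)
    assert isclose(lhs, rhs, abs_tol=1e-9)
print("identity verified on 1000 random pairs")
```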
3,352,691
<blockquote> <p>Determine the greatest possible value of <span class="math-container">$$\sum_{i=1}^{10}{\cos 3x_i}$$</span> for real numbers <span class="math-container">$x_1,x_2,\dots,x_{10}$</span> satisfying <span class="math-container">$$\sum_{i=1}^{10}{\cos x_i}=0$$</span></p> </blockquote> <p><strong>My attempt</strong>:</p> <p><span class="math-container">$$\sum \cos 3x = \sum 4\cos^3x -\sum3\cos x=4\sum \cos^3 x $$</span></p> <p>So now we have to maximize the sum of cubes of ten numbers when their sum is zero and each lies in the interval <span class="math-container">$[-1,1]$</span>. I often use AM-GM inequalities, but here there are 10 numbers and they are not even positive. I need help on how to visualize and approach these kinds of questions.</p>
Community
-1
<p><strong>Visualising the solution</strong></p> <p>You have asked for help in visualising the solution. I think you will find it useful to have in mind the picture of <span class="math-container">$y=x^3$</span> for <span class="math-container">$-1\le x\le1$</span>.</p> <p>Now consider the arrangement of the 10 numbers in the maximum position. (We have a continuous function on a compact set and so the maximum is attained.)</p> <p>First suppose that there is a number, <span class="math-container">$s$</span>, smaller in magnitude than the least negative number <span class="math-container">$l$</span>. Increasing <span class="math-container">$l$</span> whilst decreasing <span class="math-container">$s$</span> by the same amount would increase the sum of cubes and therefore cannot occur. </p> <p>So, all the negative numbers are equal, to <span class="math-container">$l$</span> say, and all the positive numbers are greater than <span class="math-container">$|l|$</span>.</p> <p>Now suppose that a positive number was not <span class="math-container">$1$</span>. Then increasing it to <span class="math-container">$1$</span> whilst reducing one of the <span class="math-container">$l$</span>s would increase the sum of cubes and therefore cannot occur.</p> <p>Hence we need only consider the case where we have <span class="math-container">$m$</span> <span class="math-container">$1$</span>s and <span class="math-container">$10-m$</span> numbers equal to <span class="math-container">$-\frac{m}{10-m}$</span>.</p>
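The candidate configurations at the end ($m$ ones and $10-m$ copies of $-\frac{m}{10-m}$) can be compared directly. A small exact-arithmetic sketch in Python (the factor $4$ comes from the OP's reduction $\sum\cos 3x_i = 4\sum\cos^3 x_i$):

```python
from fractions import Fraction

def objective(m):
    # m ones and 10-m copies of -m/(10-m); the constraint (sum = 0) holds.
    neg = -Fraction(m, 10 - m)
    return 4 * (m * 1**3 + (10 - m) * neg**3)

best = max((objective(m), m) for m in range(1, 10))
print(best)  # maximum 480/49, attained at m = 3
```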
4,638,490
<p>Given functions f and g, as above, what exactly does it mean? Does it mean, for example, that g(n) is <em>exactly</em> equal to <span class="math-container">$2^{h(n)}$</span> for some function h contained in <span class="math-container">$O(f(n))$</span> - or does it rather mean that <span class="math-container">$g(n) = O(2^{h(n)})$</span> for some function h contained in <span class="math-container">$O(f(n))$</span>? Any help would be much appreciated!</p>
Zachary
433,146
<p>It means that there exists <span class="math-container">$C&gt;0$</span> such that for all large enough <span class="math-container">$n$</span>, <span class="math-container">$$g(n)\le 2^{Cf(n)}.$$</span> Alternatively, it means that <span class="math-container">$$\log g(n)=O(f(n)).$$</span></p>
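For instance (an example of my choosing), $g(n)=n\cdot 4^n$ is $2^{O(n)}$ with $C=3$, since $\log_2 g(n)=\log_2 n+2n\le 3n$ for $n\ge 1$; a quick sketch:

```python
from math import log2

# g(n) = n * 4**n satisfies g(n) <= 2**(3n) for n >= 1
for n in range(1, 60):
    g = n * 4**n
    assert log2(g) <= 3 * n
print("n * 4**n <= 2**(3n) for 1 <= n < 60")
```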
280,156
<p>I have a code as below:</p> <pre><code>countpar = 10; randomA = RandomReal[{1, 10}, {countpar, countpar}]; randomconst = RandomInteger[{0, 1}, {countpar, 1}]; For[i = 1, i &lt; countpar + 1, i++, If[randomconst[[i, 1]] != 0, randomA[[All, i]] = 0.; randomA[[i, All]] = 0.; randomA[[i, i]] = 1; ]; ]; </code></pre> <p>The problem is that when I change countpar to, say, countpar=1000, the computational time of the For loop increases dramatically. Is there a way to decrease this time, from an expert eye?</p> <p>Best Regards,</p> <p>Ahmet</p>
Nasser
70
<blockquote> <p>no no, I want 3/2 as the answer. I just want it to print the answers 3/2, 7/5, 17/12, 41/29...</p> </blockquote> <p>One of 10 possible ways</p> <pre><code>y = 1; x = 2; n = 2; a := (y^(x - 1) + n)/(y^(x - 1) + y^(x - 2)) Last@Reap@Do[ Sow[a]; y = a , {m, 10} ] </code></pre> <p><img src="https://i.stack.imgur.com/kAXBn.png" alt="Mathematica graphics" /></p> <p>Or if you prefer shorter code</p> <pre><code>Flatten[{a; y = a} &amp; /@ Range[10]] </code></pre> <p><img src="https://i.stack.imgur.com/uoGJJ.png" alt="Mathematica graphics" /></p>
280,156
<p>I have a code as below:</p> <pre><code>countpar = 10; randomA = RandomReal[{1, 10}, {countpar, countpar}]; randomconst = RandomInteger[{0, 1}, {countpar, 1}]; For[i = 1, i &lt; countpar + 1, i++, If[randomconst[[i, 1]] != 0, randomA[[All, i]] = 0.; randomA[[i, All]] = 0.; randomA[[i, i]] = 1; ]; ]; </code></pre> <p>The problem is that when I change countpar to, say, countpar=1000, the computational time of the For loop increases dramatically. Is there a way to decrease this time, from an expert eye?</p> <p>Best Regards,</p> <p>Ahmet</p>
Syed
81,355
<pre><code>Clear[&quot;Global`*&quot;] y = 1; x = 2; n = 2; NestList[(#^(x - 1) + n)/(#^(x - 1) + #^(x - 2)) &amp;, y, 6] </code></pre> <hr /> <p><strong>EDIT</strong></p> <p>If the same definition must be used and kept, then define:</p> <pre><code>Clear[&quot;Global`*&quot;] y = 1; x = 2; n = 2; a[y_] := (y^(x - 1) + n)/(y^(x - 1) + y^(x - 2)) NestList[a[#] &amp;, 1, 6] </code></pre> <p>which is redundant. The more standard idiom would be:</p> <pre><code>NestList[a, 1, 6] </code></pre> <hr /> <p>Result:</p> <blockquote> <p>{1, 3/2, 7/5, 17/12, 41/29, 99/70, 239/169}</p> </blockquote>
483,442
<p>I am trying to learn about velocity vectors but this word problem is confusing me.</p> <p>A boat is going 20 mph northeast. The velocity u of the boat is in the direction of the boat's motion, and its length is 20, the boat's speed. If the positive y axis represents north and x is east, the boat's direction makes an angle of 45 degrees. You can compute the components of u by using trig:</p> <p>$$u_1 = 20 \cos 45$$ $$u_2 = 20 \sin 45$$</p> <p>Why? How did this happen? Why sin and why cos? What does this represent? Why two points? What are these two points? It says that these are $\mathbb{R}^2$, which I am not sure what that means and my book does not explain. I think the R means all real numbers and the squared is referencing 2D, maybe, so x and y, but the book doesn't say so I am not so sure. My book mentions none of these things.</p>
user84413
84,413
<p>Select your first sock. Now you have 15 choices left for your 2nd sock, and 14 of them will allow you to avoid getting a pair. </p> <p>For your 3rd sock, you have a total of 14 choices remaining, and you can choose any sock other than the first two chosen and their mates to avoid a pair, so you have 12 choices to do this.</p> <p>Continuing in this manner, we get $\frac{14}{15}\cdot\frac{12}{14}\cdot\frac{10}{13}\cdot\frac{8}{12}\cdot\frac{6}{11}$ for the probability of not getting a matching pair, so $$1-\frac{14}{15}\cdot\frac{12}{14}\cdot\frac{10}{13}\cdot\frac{8}{12}\cdot\frac{6}{11}$$ gives the probability of getting at least one pair.</p>
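The arithmetic can be checked exactly (a Python sketch; the setting appears to be 8 distinct pairs, i.e. 16 socks, with 6 socks drawn):

```python
from fractions import Fraction
from math import prod

# Probability of avoiding a pair, sock by sock, as in the answer
no_pair = prod(Fraction(num, den)
               for num, den in [(14, 15), (12, 14), (10, 13), (8, 12), (6, 11)])
p_pair = 1 - no_pair
print(p_pair)  # 111/143, about 0.776
```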
182,527
<p>I have the following question:</p> <p>Let $X$: $\mu(X)&lt;\infty$, and let $f \geq 0$ on $X$. Prove that $f$ is Lebesgue integrable on $X$ if and only if $\sum_{n=0}^{\infty}2^n \mu(\lbrace x \in X : f(x) \geq 2^n \rbrace) &lt; \infty $.</p> <p>I have the following ideas, but am a little unsure. For the forward direction:</p> <p>By our hypothesis, we are taking $f$ to be Lebesgue integrable. Assume $\sum_{n=0}^{\infty}2^n \mu(\lbrace x \in X : f(x) \geq 2^n \rbrace) = \infty $. Then for any n, no matter how large, $\mu(\lbrace x \in X : f(x) \geq 2^n \rbrace)$ has positive measure. Otherwise, the sum will terminate for a certain $N$, giving us $\sum_{n=0}^{N}2^n \mu(\lbrace x \in X : f(x) \geq 2^n \rbrace) &lt; \infty $. Thus we have $f$ unbounded on a set of positive measure, which in combination with $f(x) \geq 0$, gives us that $\int_E f(x) d\mu=\infty$. This is a contradiction to $f$ being Lebesgue integrable. So our summation must be finite.</p> <p>For the reverse direction: </p> <p>We have that $\sum_{n=0}^{N}2^n \mu(\lbrace x \in X : f(x) \geq 2^n \rbrace) &lt; \infty \\$. Assume that $f$ is not Lebesgue integrable, then we have $\int_E f(x) d\mu=\infty$. Since we are integrating over a finite set $X$, then this means that $f(x)$ must be unbounded on a set of positive measure, which makes our summation infinite, a contradiction.</p> <p>Any thoughts as to the validity of my proof? I feel as if there is an easier, direct way to do it.</p>
Makoto Kato
28,422
<p>Let $E_n = \lbrace x \in X : 2^n \leq f(x) &lt; 2^{n+1} \rbrace$. Let $F = \lbrace x \in X : f(x) &lt; 1 \rbrace$. Then $X$ is a disjoint union of $F$ and $E_n$, $n = 0,1,\dots$. Hence $\int_X f(x) d\mu = \int_F f(x) d\mu + \sum_{n=0}^{\infty} \int_{E_n} f(x) d\mu$.</p> <p>Let $G_n = \lbrace x \in X : f(x) \geq 2^n \rbrace$. Then $E_n = G_n - G_{n+1}$. Let $m_n = \mu(G_n)$. Since $\mu(E_n) = \mu(G_n - G_{n+1}) = m_n - m_{n+1}$, $2^n(m_n - m_{n+1}) \leq \int_{E_n} f(x) d\mu \leq 2^{n+1}(m_n - m_{n+1})$. Hence</p> <p>$\int_F f(x) d\mu + \sum_{n=0}^{\infty} 2^n(m_n - m_{n+1}) \leq \int_X f(x) d\mu \leq \int_F f(x) d\mu + \sum_{n=0}^{\infty} 2^{n+1}(m_n - m_{n+1})$</p> <p>Let $s = \sum_{n=0}^{\infty} 2^n m_n$. Suppose $f$ is integrable. By the above left inequality, $\sum_{n=0}^{\infty} 2^n(m_n - m_{n+1}) &lt; \infty$. Since $\sum_{n=0}^{\infty} 2^n(m_n - m_{n+1}) = m_0 + \sum_{n=0}^{\infty} 2^n m_{n+1} = m_0 + (\sum_{n=0}^{\infty} 2^{n+1} m_{n+1})/2 &lt; \infty$, $s &lt; \infty$.</p> <p>Conversely suppose $s&lt; \infty$. Note that $\int_F f(x) d\mu &lt; \infty$. By the above right inequality, $\int_X f(x) d\mu \leq \int_F f(x) d\mu + \sum_{n=0}^{\infty} 2^{n+1}(m_n - m_{n+1}) = \int_F f(x) d\mu + 2m_0 + \sum_{n=0}^{\infty} 2^{n+1} m_{n+1} = \int_F f(x) d\mu + m_0 + s &lt; \infty$.</p>
182,527
<p>I have the following question:</p> <p>Let $X$: $\mu(X)&lt;\infty$, and let $f \geq 0$ on $X$. Prove that $f$ is Lebesgue integrable on $X$ if and only if $\sum_{n=0}^{\infty}2^n \mu(\lbrace x \in X : f(x) \geq 2^n \rbrace) &lt; \infty $.</p> <p>I have the following ideas, but am a little unsure. For the forward direction:</p> <p>By our hypothesis, we are taking $f$ to be Lebesgue integrable. Assume $\sum_{n=0}^{\infty}2^n \mu(\lbrace x \in X : f(x) \geq 2^n \rbrace) = \infty $. Then for any n, no matter how large, $\mu(\lbrace x \in X : f(x) \geq 2^n \rbrace)$ has positive measure. Otherwise, the sum will terminate for a certain $N$, giving us $\sum_{n=0}^{N}2^n \mu(\lbrace x \in X : f(x) \geq 2^n \rbrace) &lt; \infty $. Thus we have $f$ unbounded on a set of positive measure, which in combination with $f(x) \geq 0$, gives us that $\int_E f(x) d\mu=\infty$. This is a contradiction to $f$ being Lebesgue integrable. So our summation must be finite.</p> <p>For the reverse direction: </p> <p>We have that $\sum_{n=0}^{N}2^n \mu(\lbrace x \in X : f(x) \geq 2^n \rbrace) &lt; \infty \\$. Assume that $f$ is not Lebesgue integrable, then we have $\int_E f(x) d\mu=\infty$. Since we are integrating over a finite set $X$, then this means that $f(x)$ must be unbounded on a set of positive measure, which makes our summation infinite, a contradiction.</p> <p>Any thoughts as to the validity of my proof? I feel as if there is an easier, direct way to do it.</p>
Community
-1
<p><strong>Hint:</strong> Switching the order of integration gives $$\int_X f(x)\,\mu(dx)=\int_X \int_0^{f(x)}\,dt\,\mu(dx) =\int_0^\infty \mu(x: f(x)&gt;t)\,dt$$ This equation is true whether the two sides are finite or infinite. </p> <p>Since $G(t):= \mu(x: f(x)&gt;t)$ is decreasing, we have $$G(2^{n+1}) \int_{2^n}^{2^{n+1}}\,dt \leq \int_{2^n}^{2^{n+1}} G(t) \,dt \leq G(2^n) \int_{2^n}^{2^{n+1}}\,dt$$ and hence $\int_0^\infty G(t)\,dt&lt;\infty$ if and only if $\sum_{n=0}^\infty 2^n G(2^n)&lt;\infty.$</p> <p>Also, see did's answer <a href="https://math.stackexchange.com/questions/178896/proof-x-ge-0-r0-rightarrow-exr-r-int-0-inftyxr-1pxxdx/178897#178897">here</a>.</p>
182,527
<p>I have the following question:</p> <p>Let $X$: $\mu(X)&lt;\infty$, and let $f \geq 0$ on $X$. Prove that $f$ is Lebesgue integrable on $X$ if and only if $\sum_{n=0}^{\infty}2^n \mu(\lbrace x \in X : f(x) \geq 2^n \rbrace) &lt; \infty $.</p> <p>I have the following ideas, but am a little unsure. For the forward direction:</p> <p>By our hypothesis, we are taking $f$ to be Lebesgue integrable. Assume $\sum_{n=0}^{\infty}2^n \mu(\lbrace x \in X : f(x) \geq 2^n \rbrace) = \infty $. Then for any n, no matter how large, $\mu(\lbrace x \in X : f(x) \geq 2^n \rbrace)$ has positive measure. Otherwise, the sum will terminate for a certain $N$, giving us $\sum_{n=0}^{N}2^n \mu(\lbrace x \in X : f(x) \geq 2^n \rbrace) &lt; \infty $. Thus we have $f$ unbounded on a set of positive measure, which in combination with $f(x) \geq 0$, gives us that $\int_E f(x) d\mu=\infty$. This is a contradiction to $f$ being Lebesgue integrable. So our summation must be finite.</p> <p>For the reverse direction: </p> <p>We have that $\sum_{n=0}^{N}2^n \mu(\lbrace x \in X : f(x) \geq 2^n \rbrace) &lt; \infty \\$. Assume that $f$ is not Lebesgue integrable, then we have $\int_E f(x) d\mu=\infty$. Since we are integrating over a finite set $X$, then this means that $f(x)$ must be unbounded on a set of positive measure, which makes our summation infinite, a contradiction.</p> <p>Any thoughts as to the validity of my proof? I feel as if there is an easier, direct way to do it.</p>
tomasz
30,222
<p>The fact that $f$ is unbounded on a set of positive measure does not mean that it is not integrable: consider $f(x)=\frac{1}{\sqrt x}$ on $[0,1]$. It is integrable, and the summation is clearly finite in this case: it is $\sum_n 2^n\cdot 2^{-2n}=2&lt;\infty$.</p> <p>Put $A_n:=\{ x\vert f(x)\geq 2^n\},B_n:=A_n\setminus A_{n+1},a_n=\mu(A_n)$. Then define $h_1=\sum_n2^{n+1}\chi_{B_n}$. Except for the points where $f&lt;1$ or $f=\infty$ (which are resolved easily), we have $h_1/2\leq f\leq h_1$, so if $f$ is not infinite on a set of positive measure, $f$ is integrable iff $h_1$ is.</p> <p>On the other hand, $$\int f\leq \int h_1=2\sum_{n\geq 0} 2^n(a_n-a_{n+1})\leq 2\sum_n 2^n a_n$$</p> <p>Edit2: Instead, take a look at $h_2:=\sum_n 2^{n-1}\chi_{A_n}$. Then for each $x\in B_n$ we have that $h_2(x)\leq \sum_{j\leq n}2^{j-1}&lt;2^n\leq f(x)$, and $h_2\leq f$ almost everywhere, so if $f$ is integrable, $h_2$ is too. Then: $$\int h_2=\sum_n 2^{n-1}\mu(A_n)=\frac{1}{2}\sum_n 2^n\mu(A_n)$$ so $\sum_n 2^n\mu(A_n)$ converges and we're done.</p>
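To make the counterexample concrete (an illustrative Python computation): for $f(x)=1/\sqrt x$ on $[0,1]$ we have $\mu(A_n)=\mu(\{x\le 2^{-2n}\})=4^{-n}$, so the partial sums of $\sum_n 2^n\mu(A_n)=\sum_n 2^{-n}$ approach $2$:

```python
from fractions import Fraction

# mu(A_n) = 4^{-n} for f(x) = 1/sqrt(x) on [0,1]
partial = sum(Fraction(2)**n * Fraction(1, 4)**n for n in range(60))
print(float(partial))  # approaches 2
```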
1,384,947
<p>I'm trying to figure out when numbers reach "periodicity" given known values. I've included an example below with image:</p> <p>I have known sizes (<em>100, 75, and 50</em>) that I would like to know how many times I would need to repeat each item for all the sizes to line up or be periodic. Does anyone know of a formula for this or how I can go about figuring this out?</p> <pre><code>As you can see to reach periodicity: I need to repeat 100 3 times I need to repeat 75 4 times I need to repeat 50 6 times </code></pre> <p><a href="https://i.stack.imgur.com/1WeEp.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1WeEp.jpg" alt="image"></a></p> <p><strong>PS: This is just a simple example the numbers could be decimals like 1.29. and include several more numbers. I will also be converting the formula to octave which is like matlab.</strong></p>
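What the example describes is a least common multiple: the common period is $\operatorname{lcm}(100,75,50)=300$, and the repeat counts are $300/100=3$, $300/75=4$, $300/50=6$. For decimal sizes, one standard trick is to work with exact fractions, where $\operatorname{lcm}(p_1/q_1,\dots)=\operatorname{lcm}(p_i)/\gcd(q_i)$. A Python sketch (the function name is mine; this should translate to Octave/MATLAB straightforwardly):

```python
from fractions import Fraction
from functools import reduce
from math import gcd, lcm

def repeats_to_align(sizes):
    # Common period = lcm of the sizes as exact fractions:
    # lcm(numerators) / gcd(denominators)
    fracs = [Fraction(str(s)) for s in sizes]
    period = Fraction(reduce(lcm, (f.numerator for f in fracs)),
                      reduce(gcd, (f.denominator for f in fracs)))
    return [period / f for f in fracs]

print(repeats_to_align([100, 75, 50]))   # repeat counts 3, 4, 6
print(repeats_to_align([1.29, 0.5]))     # works for decimals too
```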
Narasimham
95,860
<p>Let's say you posted like this:</p> <p>~~~~~</p> <p>for which </p> <p>$$\cos(\phi)=\frac{a}{\sqrt{a^2+b^2}};$$<br> $$\sin(\phi)=\frac{b}{\sqrt{a^2+b^2}}$$ Using this, let's transform our equation to</p> <p>$$\cos(\phi)\sin(x)+\sin(\phi)\cos(x)=\frac{c}{\sqrt{a^2+b^2}}$$</p> <p>This will bring us to</p> <p>$$ \sin(x+\phi)=\frac{c}{\sqrt{a^2+b^2}}$$ </p> <p>This I understand. But the next step is:</p> <blockquote> <p>It is evident from this that</p> </blockquote> <p>$$\cos(x+\phi)=\pm\frac{\sqrt{a^2+b^2-c^2}}{\sqrt{a^2+b^2}}$$</p> <p>Could you give me a hint regarding this last transformation? </p> <p>~~~~</p> <p>You can see that the last step is just the Pythagorean relationship $\sin^2(x+\phi)+\cos^2(x+\phi)=1$, which was not immediately caught.</p>
2,206,938
<p>Context: <a href="http://www.hairer.org/notes/Regularity.pdf" rel="nofollow noreferrer">http://www.hairer.org/notes/Regularity.pdf</a>, section 4.1 (pages 15-16)</p> <blockquote> <p>Define $$(\Pi_x\Xi^0)(y)=1 \qquad (\Pi_x\Xi)(y)=0 \qquad (\Pi_x\Xi^2)(y)=c$$ and $$(\Pi^{(n)}_x\Xi^0)(y)=1 \qquad (\Pi^{(n)}_x\Xi)(y)=\sqrt{2c}\sin(nx) \qquad (\Pi^{(n)}_x\Xi^2)(y)=2c\sin^2(nx).$$ As a model, $\Pi^{(n)}$ converges to $\Pi$.</p> </blockquote> <p>I don't see how this convergence is supposed to take place. Isn't the limit of $\sin(nx)$, as $n\to \infty$, undefined? </p>
zhw.
228,045
<p>Hint: Try something like</p> <p>$$f(x) = \sum_{n=1}^{\infty}c_n x^{1/n}$$</p> <p>for suitable positive constants $c_n.$</p>
2,555,463
<p>Given a line $l$ and two points $p_1$ and $p_2$, identify the point $v$ which is equidistant from $l$, $p_1$, and $p_2$, assuming it exists.</p> <p>My idea is to: (1) identify the parabolas containing all points equidistant from each point and the line, then (2) intersect these parabolas. As $v$ is equidistant from all three and each parabola contains all points equidistant from $l$ and each point, the intersection of these parabolas must be $v$. However, I have had no luck in finding a way to compute, much less represent, these parabolas.</p>
Community
-1
<p>Without loss of generality, $l$ is the $x$ axis, $p_1$ is at $(-a,y_1)$ and $p_2$ at $(a,y_2)$.</p> <p>One of the parabolas is $$y^2=(x+a)^2+(y-y_1)^2$$ and the other</p> <p>$$y^2=(x-a)^2+(y-y_2)^2.$$</p> <p>After elimination of $y$,</p> <p>$$y_2(x+a)^2-y_1(x-a)^2+y_1y_2(y_1-y_2)=0$$ can give you two solutions.</p>
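The elimination step can be verified symbolically (a SymPy sketch, assuming $y_1,y_2\neq 0$ so that each parabola equation can be solved for $y$):

```python
from sympy import symbols, Eq, solve, expand

x, a, y, y1, y2 = symbols('x a y y1 y2')

# Each equation is linear in y once y^2 cancels
Y1 = solve(Eq(y**2, (x + a)**2 + (y - y1)**2), y)[0]
Y2 = solve(Eq(y**2, (x - a)**2 + (y - y2)**2), y)[0]

claimed = y2*(x + a)**2 - y1*(x - a)**2 + y1*y2*(y1 - y2)
# Setting Y1 = Y2 and clearing denominators recovers the stated relation
assert expand((Y1 - Y2) * 2 * y1 * y2 - claimed) == 0
print("elimination verified")
```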
2,555,463
<p>Given a line $l$ and two points $p_1$ and $p_2$, identify the point $v$ which is equidistant from $l$, $p_1$, and $p_2$, assuming it exists.</p> <p>My idea is to: (1) identify the parabolas containing all points equidistant from each point and the line, then (2) intersect these parabolas. As $v$ is equidistant from all three and each parabola contains all points equidistant from $l$ and each point, the intersection of these parabolas must be $v$. However, I have had no luck in finding a way to compute, much less represent, these parabolas.</p>
Anatoly
90,997
<p>An approach that could be useful not only to provide the solutions, but also to discuss and understand their existence, is as follows. Given two points $P_1$ and $P_2$ and a line, we can set, without loss of generality, an $xy$ plane where the $x$-axis coincides with the line and the non-negative portion of the $y$-axis contains $P_1$. The coordinates of $P_1$ and $P_2$ can then be written as $(0,y_1)$ and $(x_2,y_2)$, respectively (with $y_1 \geq 0\,$). The segment $P_1P_2$ has slope $(y_2-y_1)/x_2$ and its middle point has coordinates $(x_2/2,(y_1+y_2)/2)\,\,\,\,$. So the equation of its geometric axis is</p> <p>$$y=-\frac{x_2}{y_2-y_1} x + \frac{y_1+y_2}{2}+\frac{x_2^2}{2(y_2-y_1)} $$</p> <p>Now we have to identify, on this line, a point whose distance from $P_1$ (or $P_2$) is equal to the distance from the $x$-axis. Therefore, if we call $(X,Y)$ the coordinates of this point, we must have</p> <p>$$X^2+(Y-y_1)^2=Y^2$$</p> <p>Since $(X,Y)$ must also satisfy the equation of the geometric axis, solving the system and simplifying we get that the searched coordinates are</p> <p>$$X=\frac{y_1 x_2}{y_1 - y_2} \pm \sqrt{y_1 y_2\left(1 + \frac{x_2^2}{(y_1 - y_2)^2} \right) }$$ $$ Y= \frac{y_1+y_2}{2} + \frac{x_2 (2X-x_2)}{2(y_1 - y_2)}$$</p> <p>Note that these formulas are valid for $y_1 \neq y_2\,$. If $y_1=y_2\,$ (i.e., if the geometric axis is perpendicular to the $x$-axis), we trivially have </p> <p>$$X=\frac{x_2}{2}$$ $$Y=\frac{y_1}{2}+\frac{X^2}{2y_1}$$</p> <p>Also note that, for the solutions to be real, $y_1y_2$ must be $\geq 0\,\,$. 
Because $y_1$ has been assumed $ \geq 0$ in the initial assumptions ($P_1$ is on the non-negative portion of the $y$-axis) this implies that:</p> <ul> <li><p>in the case $y_1 \neq y_2 \,$, if $y_1$ and $y_2$ are both $&gt;0$ (i.e., $P_1$ is on the positive $y$-axis and $P_2$ is in the first or second quadrant) we have two different real solutions;</p></li> <li><p>in the case $y_1 \neq y_2\,$, if $y_1=0$ or $y_2=0$ (i.e., one among $P_1$ and $P_2$ is on the $x$-axis) we have a single real solution;</p></li> <li><p>in the case $y_1 \neq y_2\,$, if $y_1&gt;0$ and $y_2&lt;0$ (i.e., $P_1$ is on the positive $y$-axis and $P_2$ is in the third or fourth quadrant) we have no real solutions;</p></li> <li><p>lastly, in the case $y_1 = y_2\,$, we have a single real solution. </p></li> </ul> <hr> <p>To provide an example, let us set $P_1=(0,2) \,\,$ and $P_2=(2,4)\,\,$. In this case, we have $y_1=2\,$, $x_2=2\,\,$,$y_2=4\,\,$, and we are in the case $y_1 \neq y_2\,\,\,$. Applying the formulas above, we get </p> <p>$$X=\frac{4}{-2} \pm \sqrt{2 \cdot 4 \left(1 + \frac{2^2}{(-2)^2} \right) }=-2 \pm 4$$</p> <p>Putting these two possible values of $X$ in the equation giving $Y$, for the case $X=2\,\,$ yields</p> <p>$$ Y= \frac{2+4}{2} + \frac{2 (4-2)}{2(-2)}= 3-1=2$$</p> <p>and for the case $X=-6 \,\,$ yields</p> <p>$$ Y= \frac{2+4}{2} + \frac{2 (2 \cdot (-6)-2)}{2(-2)}= 3+7=10$$</p> <p>finally giving the two solutions $(2,2) \,$ and $(-6,10)\,$. Accordingly, the distances of $(2,2) \,$from $P_1$, $P_2$, and the $x$-axis are all equal to $2$, and those of $(-6,10)\,$ are all equal to $10$.</p> <p>In the same manner, if for example we set $P_1=(0,4)\,\,$ and $P_2=(12,4)\,\,$, then $y_1=4\,$, $x_2=12\,\,\,$,$y_2=4\,\,$ and we are in the case $y_1 = y_2\,\,$, where the geometric axis is perpendicular to the $x$-axis. 
So we obtain</p> <p>$$X=\frac{12}{2}=6$$ $$Y=\frac{4}{2}+\frac{6^2}{2 \cdot4}=\frac{13}{2}$$</p> <p>giving the single solution $(6,13/2) \,\,$, whose distances from $P_1$, $P_2$, and the $x$-axis are all equal to $13/2 \,$.</p>
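Both worked examples can be checked numerically (an illustrative Python sketch: the distances from each solution to $P_1$, $P_2$, and the $x$-axis should coincide):

```python
from math import hypot

def dists(P1, P2, V):
    # distances from V to P1, to P2, and to the x-axis
    return (hypot(V[0] - P1[0], V[1] - P1[1]),
            hypot(V[0] - P2[0], V[1] - P2[1]),
            abs(V[1]))

# First example: P1 = (0,2), P2 = (2,4) -> solutions (2,2) and (-6,10)
for V in [(2, 2), (-6, 10)]:
    d1, d2, d3 = dists((0, 2), (2, 4), V)
    assert abs(d1 - d3) < 1e-12 and abs(d2 - d3) < 1e-12

# Second example: P1 = (0,4), P2 = (12,4) -> single solution (6, 13/2)
d1, d2, d3 = dists((0, 4), (12, 4), (6, 6.5))
assert abs(d1 - d3) < 1e-12 and abs(d2 - d3) < 1e-12
print("all distances agree")
```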
2,555,463
<p>Given a line $l$ and two points $p_1$ and $p_2$, identify the point $v$ which is equidistant from $l$, $p_1$, and $p_2$, assuming it exists.</p> <p>My idea is to: (1) identify the parabolas containing all points equidistant from each point and the line, then (2) intersect these parabolas. As $v$ is equidistant from all three and each parabola contains all points equidistant from $l$ and each point, the intersection of these parabolas must be $v$. However, I have had no luck in finding a way to compute, much less represent, these parabolas.</p>
smichr
122,921
<p>@anatoly has a nice answer which identifies conditions when a solution is expected. But there is an additional condition which, if not satisfied, prohibits a solution. @nominal-animal elucidates all conditions. I add an additional answer here only to show that using a different orientation of points and lines gives a compact solution for the point(s) which are equidistant to the line and the two points. It also lends itself well to constructing the solution.</p> <p>I prefer to let the <em>unit measure</em> be the half distance between the two points and then imagine points on the x-axis at (-1,0) and (1,0) and the line having slope <span class="math-container">$m$</span> and y-intercept of <span class="math-container">$b$</span> (hence, x-intercept of <span class="math-container">$b/m$</span>).</p> <p>Using SymPy to work out the location of the point <span class="math-container">$(x,y)$</span> that is equidistant from the line and the points on the x-axis at <span class="math-container">$\pm 1$</span>, we find:</p> <pre><code>&gt;&gt;&gt; from sympy import Line, Point, Eq, Tuple, cse, solve, sqrt &gt;&gt;&gt; from sympy.abc import x, y, m, b, t &gt;&gt;&gt; p1 = Point(-1, 0) &gt;&gt;&gt; p2 = Point(1, 0) &gt;&gt;&gt; l = Line((0, b), slope=m) &gt;&gt;&gt; p3 = l.arbitrary_point(t) &gt;&gt;&gt; yaxis = Line(Eq(x, 0)) &gt;&gt;&gt; pt = l.perpendicular_line(p3).intersection(yaxis)[0] &gt;&gt;&gt; T = Tuple(*solve(p3.distance(pt)-pt.distance(p2), t)) &gt;&gt;&gt; cse(T) ([(x0, m**2 + 1), (x1, b*x0), (x2, sqrt(x0*(b - m)*(b + m))), (x3, 1/(m*x0))], [ x3*(-x1 - x2), x3*(-x1 + x2)]) </code></pre> <p>Some hand simplification allows the points that are equidistant to line and the two points (here, on the x-axis) to be written as</p> <p><span class="math-container">$$ a = m^2 + 1$$</span><span class="math-container">$$ s = \sqrt{a \cdot (b^2 - m^2)}$$</span><span class="math-container">$$ e = (0, b-(a \cdot b \pm s)/m^2) $$</span></p> <p>Since the argument of the square root cannot 
be negative, we must have <span class="math-container">$b^2 &gt;= m^2$</span>. This is equivalent to the condition <span class="math-container">$y_1y_2&gt;=0$</span> given by @anatoly. Since we cannot divide by zero, <span class="math-container">$m \neq 0$</span>. (If the line intersects the y-axis, it cannot be a vertical line in general or collinear with the y-axis in particular). In the case of <span class="math-container">$m=0$</span> we have a line intersecting the y-axis at <span class="math-container">$b$</span> and seek a point on the y-axis <span class="math-container">$(0, y)$</span> that is equidistant to <span class="math-container">$(0, b)$</span> and <span class="math-container">$(1, 0)$</span> or <span class="math-container">$(-1, 0)$</span>.</p> <pre><code>&gt;&gt;&gt; solve(Point(0,y).distance((0,b))-Point(0,y).distance((1,0)), y) [(b**2 - 1)/(2*b)] </code></pre> <p>So when the slope of the line is the same as the slope of the line connecting the two points (a slope of 0 in the orientation being used) then there is a single solution located at <span class="math-container">$$e = (0, (b^2 - 1)/(2b))$$</span> And here we notice a second degenerate case: when the line passes through the two points there is no solution.</p> <p>Demonstration of solution: for <span class="math-container">$m=1$</span> and <span class="math-container">$b=2$</span> we have</p> <pre><code>&gt;&gt;&gt; m = 1 &gt;&gt;&gt; b = 2 &gt;&gt;&gt; a = m**2 + 1 &gt;&gt;&gt; s = sqrt(a*(b**2 - m**2)) &gt;&gt;&gt; e1, e2 = [(0, b - (a*b + sgn*s)/m**2) for sgn in (-1, 1)] &gt;&gt;&gt; e1,e2 ((0, -2 + sqrt(6)), (0, -sqrt(6) - 2)) </code></pre> <p>These points should be equidistant from the line and either of <span class="math-container">$(-1,0)$</span> or <span class="math-container">$(1,0)$</span>:</p> <pre><code>&gt;&gt;&gt; line = Line((0,b), slope=m) &gt;&gt;&gt; line.distance(e1).equals(Point(e1).distance((1,0))) True &gt;&gt;&gt; line.distance(e2).equals(Point(e2).distance((1,0))) True 
</code></pre>
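As an addendum, the same final check can be reproduced with plain floats (no sympy; the helper names are my own), writing the distance to the line <span class="math-container">$y = mx + b$</span> explicitly:

```python
import math

# Recompute e1, e2 for m = 1, b = 2 with plain floats and check that each
# point is equidistant from the line y = m*x + b and from the point (1, 0).
m, b = 1.0, 2.0
a = m ** 2 + 1
s = math.sqrt(a * (b ** 2 - m ** 2))
points = [(0.0, b - (a * b + sgn * s) / m ** 2) for sgn in (-1, 1)]

def dist_to_line(p):
    # distance from p to the line m*x - y + b = 0
    return abs(m * p[0] - p[1] + b) / math.sqrt(m ** 2 + 1)

def dist_to_point(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

for e in points:
    print(abs(dist_to_line(e) - dist_to_point(e, (1.0, 0.0))) < 1e-12)
```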
45,211
<p>Instances of SAT induce a bipartite graph between clause vertices and variable vertices, and for planar 3SAT, the resulting bipartite graph is planar. </p> <p>It would be very convenient if there were a planar layout that had all the variable vertices on one straight line and all the clause vertices on another. This can't be done because such a graph would be outerplanar, and $K_{2,3}$ isn't. </p> <p>But maybe a weaker layout is possible. </p> <blockquote> <p>Is it possible to lay out any planar bipartite graph $G = (A \cup B, E)$ such that</p> <ul> <li>All vertices of $B$ are on a straight line</li> <li>$A$ can be partitioned into $A_1 \cup A_2$ such that all vertices of $A_1$ are on a parallel straight line to the left of $B$, and all vertices of $A_2$ are on a parallel straight line to the right of $B$.</li> </ul> </blockquote> <p>This seems to relate to <a href="http://mashfiquirabbi.110mb.com/files/thesis.pdf">track drawings of planar graphs</a>. </p>
David Eppstein
440
<p>Any planar graph can be drawn with curves for the edges and its vertices in any position in the plane.</p> <p>But with straight line segment edges, it's not always possible, even for graphs in which every vertex in A has degree exactly two, and even if you relax the straight-line requirement for A and only require that the vertices in B be on a straight line. For, these graphs are exactly the graphs formed by subdividing every edge of an arbitrary planar graph G. And a drawing of this type, for a graph formed from G in this way, is exactly a two-page <a href="http://en.wikipedia.org/wiki/Book_embedding" rel="nofollow noreferrer">book embedding</a> of G. But a planar graph G has a two-page book embedding only if edges can be added to it to make it Hamiltonian. So if you start with a graph G that is maximal planar and non-Hamiltonian, such as the <a href="http://en.wikipedia.org/wiki/Goldner%E2%80%93Harary_graph" rel="nofollow noreferrer">Goldner–Harary graph</a>, and subdivide every edge, you will get a planar bipartite graph that cannot be drawn in the way you request.</p> <p>As an aside, relaxing the requirement that A be drawn on two lines parallel to the line through B does allow some additional graphs to be drawn, even though the above argument shows that it doesn't allow them all. For instance, Louigi has shown that the cube has no drawing on three parallel lines, but it does have one where B is on a straight line and A is on two sides of it:</p> <p><img src="https://www.ics.uci.edu/~eppstein/0xDE/cube-linear-bipartition.png" alt="alt text"></p>
1,482,205
<p>Show that $\sigma(AB) \cup \{0\} = \sigma(BA) \cup \{0\}$ in general, and that $\sigma(AB) = \sigma(BA)$ if $A$ is bijective. </p> <p>I studied the associative statement of this somewhere but it did not include the zeroth element. If you assume the bijection, how can you show the first part?</p> <h2>My attempt</h2> <p>Let us show that $A$ commutes with $B$, which is a self-adjoint linear operator such that $AB = BA$. \begin{equation*} AB = A^{\ast} B^{\ast} = (BA)^{\ast} = (ABA)^{\ast} = A^{\ast} B^{\ast} A^{\ast} = ABA = BA. \end{equation*} In general, the zeroth element follows directly in the left-hand side if $A$ is bijective. $\square$</p> <p>Comments</p> <ul> <li>not sure if $\sigma$ should be carried along; I was reading Kreyszig for the application</li> </ul> <hr> <p>How can you show the first part with the zeroth element?</p>
fleablood
280,126
<p>"...that simplified is b/b+a/b. So then I added those two fractions together and got ab/b .."</p> <p>That doesn't make sense, and if I saw your paper I might understand why you did it. But it's simply wrong.</p> <p>$1 + \frac ab$</p> <p>$\frac{1b}{1b} + \frac ab$</p> <p>$\frac{b}{b} + \frac ab$ You are right so far, so let's add these.</p> <p>$\frac{b + a}{b} $ ... and I don't know why you added a + b to get ab (probably a misplaced mark on the scratch paper). The final answer is $\frac{b + a}{b} $</p> <p>===== "its a+b/b which is a+1 when simplified".</p> <p>Oooh, no. <em>Both</em> the a and the b are over the b. You can't cancel out just one. </p> <p>$\frac{a + b}{b} \ne a + \frac{b}{b}$.</p> <p>You <em>can</em> factor the b out of <em>both</em>, but then you get</p> <p>$\frac{a + b}{b} = \frac{a/b + 1}{1} = \frac ab + 1$ but then you have worked yourself back to exactly where you started.</p>
1,107,250
<p><img src="https://i.stack.imgur.com/ILg7L.png" alt="enter image description here"></p> <p>In the above lemma, why does $|a'| \leq 1$ still hold? I don't see how it relates to "an algebraic conjugate of a root of unity is also a root of unity", since $a$ is a sum of roots of unity.</p> <p>(definition of algebraic conjugate: <a href="http://planetmath.org/algebraicconjugates" rel="nofollow noreferrer">http://planetmath.org/algebraicconjugates</a>)</p>
David Zhang
80,762
<p>@Michael has already found a very elegant answer, but I would like to contribute an alternate approach that I stumbled across. It turns out that the polynomials $p_n$ are precisely the degree $2n+1$ Bernstein polynomial approximants of $u(x-1/2)$, where $u$ is the unit step function (and $u(0) = 1/2$). These are well-known to converge to the original function.</p>
3,285,255
<p>Show <span class="math-container">$D = \{f \in C^{2}[0,1]: f(x) &gt; 0, \ \forall x \in [0,1], \|f'\|_{\infty}&lt;1, |f''(0)| &gt; 2\}$</span> is open w.r.t. Sup Norm.</p> <p>Sup Norm = <span class="math-container">$\|f\|_{2,\infty, [0,1]} = \sup_{x \in[0,1]}|f(x)| + \sup_{x\in[0,1]}|f'(x)| + \sup_{x \in [0,1]}|f''(x)|$</span></p> <p>I had originally posted this question trying to seek clarification on a particular part, but as I've gone over that solution it didn't make sense to me, so I've instead just attempted to reproduce a full solution and get feedback on this from start to finish.</p> <p><strong>Attempt:</strong></p> <p>To start I have to examine what is precisely meant by the three conditions given. </p> <p><em>Condition 1:</em> <span class="math-container">$f(x) &gt; 0 \ \forall \ x \in [0,1]$</span>:</p> <p>By the Extreme Value Theorem, there exists a value <span class="math-container">$x_{1} \in [0,1]$</span> s.t. <span class="math-container">$f(x_{1}) = minimum$</span> over <span class="math-container">$[0,1]$</span>. By the condition <span class="math-container">$f(x) &gt; 0$</span>, this means there exists a <span class="math-container">$\delta_{1} &gt; 0 $</span> s.t. <span class="math-container">$f(x) &gt; f(x_{1}) &gt; \delta_{1} &gt; 0$</span>. In particular this means <span class="math-container">$f(x_{1}) - \delta_{1} &gt; 0$</span>.</p> <p><em>Condition 2:</em> <span class="math-container">$\|f'(x)\|_{\infty} &lt; 1$</span>:</p> <p>By the EVT again, there exists an <span class="math-container">$x_{2} \in [0,1]$</span> s.t. <span class="math-container">$f'(x_{2}) = maximum$</span> over <span class="math-container">$[0,1]$</span>. By the condition <span class="math-container">$\|f'(x)\|_{\infty} &lt; 1$</span>, there exists a <span class="math-container">$\delta_{2} &gt;0$</span> s.t. 
<span class="math-container">$\|f'(x)\|_{\infty} = \sup_{x \in [0,1]} |f'(x)| = f'(x_{2}) &lt; 1 - \delta_{2} &lt; 1$</span>.</p> <p><em>Condition 3:</em> <span class="math-container">$|f''(0)| &gt; 2$</span>:</p> <p>This condition means there exists a <span class="math-container">$\delta_{3} &gt; 0 $</span> s.t. <span class="math-container">$f''(0) &gt; \delta_{3} &gt; 2$</span>. Equivalently one can say: <span class="math-container">$f''(0) - \delta_{3} &gt; 2$</span>. Also <span class="math-container">$\sup_{x \in [0,1]}f''(x) - \delta_{3} &gt; f''(0) - \delta_{3} &gt; 2$</span>. (Not sure if this last observations means much.)</p> <p>Therefore in order to show that <span class="math-container">$D$</span> is open (i.e <span class="math-container">$B_{\delta}(f) \subset D$</span>), I have to establish that <span class="math-container">$g(x) \in B_{\delta}(f)$</span>. This means <span class="math-container">$g(x)$</span> has to satisfy the three observations above.</p> <p><strong>Proof:</strong></p> <p>Let <span class="math-container">$y \in [0,1]$</span>, let <span class="math-container">$\frac{\delta_{g}}{3} = \frac{min(\delta_{1}, \delta_{2}, \delta_{3})}{3}$</span></p> <p>1) Consider <span class="math-container">$\|g(y) - f(y)\|_{\infty} &lt; \frac{\delta_{g}}{3}$</span></p> <p><span class="math-container">$\Rightarrow \sup_{y \in [0,1]}|g(y) - f(y)| &lt; \frac{\delta_{g}}{3} \\ \Rightarrow -\frac{\delta_{g}}{3} &lt; g(y) - f(y) &lt; \frac{\delta_{g}}{3} \\ \Rightarrow g(y) \geq f(y) - \frac{\delta_{g}}{3} &gt; f(x_{1}) - \frac{\delta_{g}}{3} \geq f(x_{1}) - \frac{\delta_{1}}{3} &gt; 0 \\ \therefore g(y) &gt; 0$</span></p> <p>2) Consider <span class="math-container">$\|g'(y) - f'(y) \|_{\infty} &lt; \frac{\delta_{g}}{3}$</span></p> <p><span class="math-container">$\Rightarrow \sup_{y \in [0,1]}|g'(y) - f'(y)| &lt; \frac{\delta_{g}}{3} \\ \Rightarrow -\frac{\delta_{g}}{3} &lt; g'(y) - f'(y) &lt; \frac{\delta_{g}}{3} \\ \Rightarrow g'(y) \leq f'(y) + 
\frac{\delta_{g}}{3} &lt; f'(x_{2}) + \frac{\delta_{g}}{3} &lt; 1 - \delta_{2} + \frac{\delta_{g}}{3} \leq 1 - \delta_{2} + \frac{\delta_{2}}{3} &lt; 1 - \frac{\delta_{2}}{3} &lt; 1 \\ \therefore g'(y) &lt; 1$</span> </p> <p>3) Consider <span class="math-container">$\|g''(0) - f''(0)\|_{\infty} &lt; \frac{\delta_{g}}{3} $</span></p> <p><span class="math-container">$\Rightarrow \sup_{y \in [0,1]} |g''(0) - f''(0)\| &lt; \frac{\delta_{g}}{3} \\ \Rightarrow -\frac{\delta_{g}}{3} &lt; g''(0) - f''(0) &lt; \frac{\delta_{g}}{3} \\ \Rightarrow g''(0) &gt; f''(0) - \frac{\delta_{g}}{3}$</span> [<strong>Really stuck here</strong>]</p> <p>Comments:</p> <p>My first concern is whether the first two parts were interpreted in the correct way or am I missing things from my reasoning. As for the third condition, I am utterly loss......Help would definitely be appreciated.</p>
zhw.
228,045
<p>You wrote "This condition means there exists a <span class="math-container">$\delta_{3} &gt; 0 $</span> s.t. <span class="math-container">$f''(0) &gt; \delta_{3} &gt; 2$</span>." You're missing the negative case, e.g., <span class="math-container">$f''(0)=-3$</span></p> <p>I would let <span class="math-container">$\delta_3 = |f''(0)| - 2.$</span> Then <span class="math-container">$|g''(0)-f''(0)|&lt;\delta_3$</span> implies <span class="math-container">$ ||g''(0)|-|f''(0)||&lt;\delta_3.$</span>  Thus <span class="math-container">$|g''(0)| &gt; |f''(0)| - \delta_3 =2.$</span></p>
3,285,255
<p>Show <span class="math-container">$D = \{f \in C^{2}[0,1]: f(x) &gt; 0, \ \forall x \in [0,1], \|f'\|_{\infty}&lt;1, |f''(0)| &gt; 2\}$</span> is open w.r.t. Sup Norm.</p> <p>Sup Norm = <span class="math-container">$\|f\|_{2,\infty, [0,1]} = \sup_{x \in[0,1]}|f(x)| + \sup_{x\in[0,1]}|f'(x)| + \sup_{x \in [0,1]}|f''(x)|$</span></p> <p>I had originally posted this question trying to seek clarification on a particular part, but as I've gone over that solution it didn't make sense to me, so I've instead just attempted to reproduce a full solution and get feedback on this from start to finish.</p> <p><strong>Attempt:</strong></p> <p>To start I have to examine what is precisely meant by the three conditions given. </p> <p><em>Condition 1:</em> <span class="math-container">$f(x) &gt; 0 \ \forall \ x \in [0,1]$</span>:</p> <p>By the Extreme Value Theorem, there exists a value <span class="math-container">$x_{1} \in [0,1]$</span> s.t. <span class="math-container">$f(x_{1}) = minimum$</span> over <span class="math-container">$[0,1]$</span>. By the condition <span class="math-container">$f(x) &gt; 0$</span>, this means there exists a <span class="math-container">$\delta_{1} &gt; 0 $</span> s.t. <span class="math-container">$f(x) &gt; f(x_{1}) &gt; \delta_{1} &gt; 0$</span>. In particular this means <span class="math-container">$f(x_{1}) - \delta_{1} &gt; 0$</span>.</p> <p><em>Condition 2:</em> <span class="math-container">$\|f'(x)\|_{\infty} &lt; 1$</span>:</p> <p>By the EVT again, there exists an <span class="math-container">$x_{2} \in [0,1]$</span> s.t. <span class="math-container">$f'(x_{2}) = maximum$</span> over <span class="math-container">$[0,1]$</span>. By the condition <span class="math-container">$\|f'(x)\|_{\infty} &lt; 1$</span>, there exists a <span class="math-container">$\delta_{2} &gt;0$</span> s.t. 
<span class="math-container">$\|f'(x)\|_{\infty} = \sup_{x \in [0,1]} |f'(x)| = f'(x_{2}) &lt; 1 - \delta_{2} &lt; 1$</span>.</p> <p><em>Condition 3:</em> <span class="math-container">$|f''(0)| &gt; 2$</span>:</p> <p>This condition means there exists a <span class="math-container">$\delta_{3} &gt; 0 $</span> s.t. <span class="math-container">$f''(0) &gt; \delta_{3} &gt; 2$</span>. Equivalently one can say: <span class="math-container">$f''(0) - \delta_{3} &gt; 2$</span>. Also <span class="math-container">$\sup_{x \in [0,1]}f''(x) - \delta_{3} &gt; f''(0) - \delta_{3} &gt; 2$</span>. (Not sure if this last observations means much.)</p> <p>Therefore in order to show that <span class="math-container">$D$</span> is open (i.e <span class="math-container">$B_{\delta}(f) \subset D$</span>), I have to establish that <span class="math-container">$g(x) \in B_{\delta}(f)$</span>. This means <span class="math-container">$g(x)$</span> has to satisfy the three observations above.</p> <p><strong>Proof:</strong></p> <p>Let <span class="math-container">$y \in [0,1]$</span>, let <span class="math-container">$\frac{\delta_{g}}{3} = \frac{min(\delta_{1}, \delta_{2}, \delta_{3})}{3}$</span></p> <p>1) Consider <span class="math-container">$\|g(y) - f(y)\|_{\infty} &lt; \frac{\delta_{g}}{3}$</span></p> <p><span class="math-container">$\Rightarrow \sup_{y \in [0,1]}|g(y) - f(y)| &lt; \frac{\delta_{g}}{3} \\ \Rightarrow -\frac{\delta_{g}}{3} &lt; g(y) - f(y) &lt; \frac{\delta_{g}}{3} \\ \Rightarrow g(y) \geq f(y) - \frac{\delta_{g}}{3} &gt; f(x_{1}) - \frac{\delta_{g}}{3} \geq f(x_{1}) - \frac{\delta_{1}}{3} &gt; 0 \\ \therefore g(y) &gt; 0$</span></p> <p>2) Consider <span class="math-container">$\|g'(y) - f'(y) \|_{\infty} &lt; \frac{\delta_{g}}{3}$</span></p> <p><span class="math-container">$\Rightarrow \sup_{y \in [0,1]}|g'(y) - f'(y)| &lt; \frac{\delta_{g}}{3} \\ \Rightarrow -\frac{\delta_{g}}{3} &lt; g'(y) - f'(y) &lt; \frac{\delta_{g}}{3} \\ \Rightarrow g'(y) \leq f'(y) + 
\frac{\delta_{g}}{3} &lt; f'(x_{2}) + \frac{\delta_{g}}{3} &lt; 1 - \delta_{2} + \frac{\delta_{g}}{3} \leq 1 - \delta_{2} + \frac{\delta_{2}}{3} &lt; 1 - \frac{\delta_{2}}{3} &lt; 1 \\ \therefore g'(y) &lt; 1$</span> </p> <p>3) Consider <span class="math-container">$\|g''(0) - f''(0)\|_{\infty} &lt; \frac{\delta_{g}}{3} $</span></p> <p><span class="math-container">$\Rightarrow \sup_{y \in [0,1]} |g''(0) - f''(0)\| &lt; \frac{\delta_{g}}{3} \\ \Rightarrow -\frac{\delta_{g}}{3} &lt; g''(0) - f''(0) &lt; \frac{\delta_{g}}{3} \\ \Rightarrow g''(0) &gt; f''(0) - \frac{\delta_{g}}{3}$</span> [<strong>Really stuck here</strong>]</p> <p>Comments:</p> <p>My first concern is whether the first two parts were interpreted in the correct way or am I missing things from my reasoning. As for the third condition, I am utterly loss......Help would definitely be appreciated.</p>
mechanodroid
144,766
<p>Alternatively, notice that <span class="math-container">$\phi, \psi : C^2[0,1] \to \mathbb{R}$</span> given by <span class="math-container">$\phi(f) = \|f'\|_\infty$</span> and <span class="math-container">$\psi(f) = |f''(0)|$</span> are continuous functions w.r.t. the given norm on <span class="math-container">$C^2[0,1]$</span>.</p> <p>You already more or less established that <span class="math-container">$\{f \in C^2[0,1] : f(x) &gt; 0, \forall x\in [0,1]\}$</span> is open, so we have <span class="math-container">\begin{align} D &amp;= \{f \in C^2[0,1] : f(x) &gt; 0, \forall x\in [0,1]\} \cap \{f \in C^2[0,1] : \|f'\|_\infty &lt; 1\} \cap \{f \in C^2[0,1] : |f''(0)| &gt; 2\}\\ &amp;= \{f \in C^2[0,1] : f(x) &gt; 0, \forall x\in [0,1]\} \cap \phi^{-1}\big((-\infty, 1)\big) \cap \psi^{-1}\big((2, \infty)\big) \end{align}</span> which is open as an intersection of three open sets.</p>
192,821
<p>I am using <a href="https://reference.wolfram.com/language/ref/TransformedField.html" rel="noreferrer"><code>TransformedField</code></a> to convert a system of ODEs from Cartesian to polar coordinates:</p> <pre><code>TransformedField[ "Cartesian" -&gt; "Polar", {μ x1 - x2 - σ x1 (x1^2 + x2^2), x1 + μ x2 - σ x2 (x1^2 + x2^2)}, {x1, x2} -&gt; {r, θ} ] // Simplify </code></pre> <p>and I get the result</p> <pre><code>{r μ - r^3 σ, r} </code></pre> <p>but I am pretty sure that the right answer should be</p> <pre><code>{r μ - r^3 σ, 1} </code></pre> <p>Where is the error?</p>
Bill Watts
53,121
<p>My slightly different method matches Mathematica.</p> <pre><code>aCartToCyl[{ax_, ay_}] := {ax Cos[ϕ] + ay Sin[ϕ], ay Cos[ϕ] - ax Sin[ϕ]} aCartToCyl[{μ x1 - x2 - σ x1 (x1^2 + x2^2), x1 + μ x2 - σ x2 (x1^2 + x2^2)}] // Simplify; % /. {x1 -&gt; r Cos[ϕ], x2 -&gt; r Sin[ϕ]} // Simplify (*{μ r - r^3 σ, r}*) </code></pre>
3,429,623
<p>Is <span class="math-container">$\emptyset$</span> disjoint from another set, <span class="math-container">$A$</span> say? Even though <span class="math-container">$\emptyset \subseteq A$</span>?</p> <p>I would say, yes - vacuously. But some confirmation would be great.</p>
Levi
513,190
<p>It might depend on what you mean by disjoint. I would say that the following definition is reasonable.</p> <blockquote> <p><strong>Definition.</strong> Sets <span class="math-container">$A$</span> and <span class="math-container">$B$</span> are <em>disjoint</em> if <span class="math-container">$A \cap B = \emptyset$</span>.</p> </blockquote> <p>The set <span class="math-container">$B = \emptyset$</span> satisfies this, so <span class="math-container">$A$</span> and <span class="math-container">$\emptyset$</span> are disjoint. But I would not say that this is true vacuously.</p>
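As a trivial illustration of this definition in code (using Python's built-in sets, just to make the point concrete):

```python
# A and the empty set intersect in the empty set, so they are disjoint,
# even though the empty set is also a subset of A.
A = {1, 2, 3}
empty = set()
print(A & empty == set())  # True: disjoint
print(empty <= A)          # True: subset as well
```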
545,634
<p>Consider a function $f:\Omega\subset\mathbb{R}^n\rightarrow\mathbb{R}^m$, for which the Jacobian matrix </p> <p>$J_f(x_1,...,x_n)= \left( \begin{array}{ccc} \frac{\partial f_1}{\partial x_1} &amp; ... &amp; \frac{\partial f_1}{\partial x_n} \\ \vdots &amp; &amp; \vdots \\ \frac{\partial f_m}{\partial x_1} &amp; ... &amp; \frac{\partial f_m}{\partial x_n} \end{array} \right) $ is given. </p> <p>Also, assume the component functions of $J_f$ are continuously differentiable on $\Omega$, and $\Omega$ is simply connected. If $m=1$ and $n=2$, it is well known that the function $f$ can be recovered from $J_f$ (in this case the gradient) if and only if $\frac{\partial}{\partial x_2}\frac{\partial f_1}{\partial x_1}=\frac{\partial}{\partial x_1}\frac{\partial f_1}{\partial x_2}$.</p> <p>So my question is whether there is a generalization of this result for arbitrary values of $m$ and $n$. I would appreciate any references!</p> <p>Thank you!</p>
leonbloy
312
<p>Actually, the function is one-to-one on the relevant domain. And the image (and hence the range of $Y$) is $(1/4,1] \cup [0,1/4]=[0,1]$</p> <p>Hence, you can simply apply the formula $$f_Y(y)= \frac{f_X(x)}{|g'(x)|}=\frac{1}{2 |x-1|}$$</p> <p>But $|x-1|= \sqrt{y}$</p> <p>Hence $$f_Y(y) = \frac{1}{2 \sqrt{y}} , \hspace{1cm} y\in (0,1]$$</p> <p>Your procedure is right, of course.</p>
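For what it's worth, this density is easy to sanity-check by simulation. The sketch below assumes (from the context of the question, not restated in this answer) that $X$ is uniform on $[0,1]$ and $Y=(X-1)^2$; the CDF implied by $f_Y(y)=\frac{1}{2\sqrt{y}}$ is then $F_Y(y)=\sqrt{y}$:

```python
import math
import random

# Hedged sketch: assuming X ~ Uniform[0,1] and Y = (X - 1)^2, compare the
# empirical CDF of Y with F_Y(y) = sqrt(y), the CDF of f_Y(y) = 1/(2*sqrt(y)).
random.seed(0)
samples = [(random.random() - 1.0) ** 2 for _ in range(100_000)]

def empirical_cdf(y):
    return sum(s <= y for s in samples) / len(samples)

for y in (0.1, 0.25, 0.5, 0.9):
    print(y, round(empirical_cdf(y), 3), round(math.sqrt(y), 3))
```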
1,872,136
<p>Suppose $a$ and $b$ are integers. Is there a closed form for the following sum?</p> <p>$$F(x,a,b)=\sum_{n=0}^{\infty} e^{-x (n+1/2+\sin(\frac{a}{b} \pi (n+1/2)))}$$</p>
Robert Israel
8,508
<p>For each particular $b$, it will have a closed form: since the $\sin$ term depends only on $n \mod (2b)$, this reduces to the sum of $2b$ geometric series. I doubt that there is a closed form expression as a function of $a$ and $b$.</p>
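To illustrate the reduction: since the $\sin$ term depends only on $n \bmod 2b$ (for integer $a$), grouping terms by residue class gives $$F(x,a,b)=\frac{1}{1-e^{-2bx}}\sum_{j=0}^{2b-1}e^{-x\left(j+1/2+\sin\left(\frac{a}{b}\pi(j+1/2)\right)\right)},$$ which the following sketch checks against a brute-force partial sum:

```python
import math

def F_closed(x, a, b):
    # 2b geometric series, one per residue class of n mod 2b,
    # each with common ratio exp(-2*b*x)
    total = 0.0
    for j in range(2 * b):
        sj = math.sin(a / b * math.pi * (j + 0.5))
        total += math.exp(-x * (j + 0.5 + sj))
    return total / (1.0 - math.exp(-2 * b * x))

def F_direct(x, a, b, terms=2000):
    # brute-force partial sum for comparison
    return sum(math.exp(-x * (n + 0.5 + math.sin(a / b * math.pi * (n + 0.5))))
               for n in range(terms))

print(F_closed(1.0, 1, 3), F_direct(1.0, 1, 3))  # should agree closely
```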
1,872,136
<p>Suppose $a$ and $b$ are integers. Is there a closed form for the following sum?</p> <p>$$F(x,a,b)=\sum_{n=0}^{\infty} e^{-x (n+1/2+\sin(\frac{a}{b} \pi (n+1/2)))}$$</p>
Simply Beautiful Art
272,831
<p>We can approximate in the following manner:</p> <p>$$F(x)\approx\sum_{n=0}^\infty e^{-x(n+1/2)}=\frac{e^{-x/2}}{1-e^{-x}}$$</p>
2,131,679
<p>Let $f: \mathbb{R} \to \mathbb{R}$ be continuous and $D \subset \mathbb{R}$ be a dense subset of $\mathbb{R}$. Furthermore, $\forall y_1,y_2 \in D \ f(y_1)=f(y_2)$. Must $f$ be a constant function?</p> <p>My attempt: Since $f$ is continuous $$\forall x_0 \ \forall \varepsilon &gt;0 \ \exists \delta&gt;0 \ \forall x \in \mathbb{R} \ \left(|x-x_0|&lt;\delta \Longrightarrow |f(x)-f(x_0)|&lt;\varepsilon \right)$$ Let $f$ be a non-constant function. Since $D$ is dense, $\exists x_1 \in (x_0-\delta, x_0+\delta) \ : \ x_1 \in D$. Let's take $x_2 \in (x_0-\delta, x_0+\delta)$ such that $f(x_2) \ne f(x_1)$. Let $\varepsilon = \frac{|f(x_2)-f(x_1)|}{2}&gt;0$. Therefore, we have $$|f(x_1)-f(x_0)|&lt;\frac{|f(x_2)-f(x_1)|}{2} \ \ \ |f(x_2)-f(x_0)|&lt;\frac{|f(x_2)-f(x_1)|}{2}$$ Adding the expressions above, we obtain $$|f(x_2)-f(x_1)|\le |f(x_1)-f(x_0)|+|f(x_2)-f(x_0)|&lt;|f(x_2)-f(x_1)|$$ which is a contradiction. Are my musings correct?</p>
quasi
400,434
<p>Let $F$ be the point where the line $AO$ intersects side $BC$. <p> Applying Ceva's Theorem,</p> <p>\begin{align*} &amp;\frac{AE}{EB}\cdot\frac{BF}{FC}\cdot\frac{CD}{DA} = 1\\[6pt] \implies\;&amp;\frac{2}{1}\cdot\frac{BF}{FC}\cdot\frac{2}{1} = 1\\[6pt] \implies\;&amp;\frac{BF}{FC}=\frac{1}{4} \end{align*}</p> <p>Without loss of generality, assume triangle $ABC$ is in $\mathbb{R}^2$, centered at the origin. <p> Let $a,b,c,d,e,f$ denote the vectors from the origin to the points $A,B,C,D,E,F$, respectively. Then</p> <p>$$a \cdot a = b \cdot b = c \cdot c$$</p> <p>$$a \cdot b = b \cdot c = c \cdot a$$</p> <p>and from the known ratios, we get</p> <p>\begin{align*} e &amp;= a + (2/3)(b - a)\\[4pt] &amp;= (1/3)a + (2/3)b\\[12pt] f &amp;= b + (1/5)(c - b)\\[4pt] &amp;= (4/5)b + (1/5)c \end{align*}</p> <p>Then \begin{align*} \overrightarrow{AF}\cdot\overrightarrow{EC} &amp;= (f - a)\cdot(c - e)\\[4pt] &amp;= {\large(}(4/5)b + (1/5)c - a{\large)}\cdot{\large(}c - (1/3)a - (2/3)b{\large)}\\[4pt] &amp;=(1/15)\left({\large(}4b + c - 5a{\large)}\cdot{\large(}3c - a - 2b{\large)}\right)\\[4pt] &amp;=(1/15) \left( {\large(} 8(b\cdot b) - 3(c\cdot c) - 5(a\cdot a) {\large)} + {\large(} 6(a\cdot b) + 10(b\cdot c) - 16(c\cdot a) {\large)} \right)\\[4pt] &amp;=(1/15)(0 + 0)\\[4pt] &amp;=0 \end{align*}</p> <p>Since $\overrightarrow{AF}\cdot\overrightarrow{EC} = 0$, it follows that lines $AF$ and $EC$ are perpendicular, hence line segments $AO$ and $OC$ are also perpendicular. <p> Therefore $\angle AOC = 90^{\circ}$.</p>
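As a quick numerical sanity check (not part of the proof): the identities $a\cdot a=b\cdot b=c\cdot c$ and $a\cdot b=b\cdot c=c\cdot a$ used above are satisfied by an equilateral triangle centered at the origin, so pick one and verify that the dot product vanishes:

```python
import math

# Equilateral triangle centered at the origin: these vertices satisfy
# a.a = b.b = c.c and a.b = b.c = c.a, as used in the computation above.
A, B, C = [(math.cos(math.pi / 2 + k * 2 * math.pi / 3),
            math.sin(math.pi / 2 + k * 2 * math.pi / 3)) for k in range(3)]

def lerp(p, q, t):
    # the point p + t*(q - p)
    return (p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1]))

E = lerp(A, B, 2 / 3)  # AE : EB = 2 : 1
F = lerp(B, C, 1 / 5)  # BF : FC = 1 : 4, from Ceva above
AF = (F[0] - A[0], F[1] - A[1])
EC = (C[0] - E[0], C[1] - E[1])
dot = AF[0] * EC[0] + AF[1] * EC[1]
print(abs(dot) < 1e-12)  # True: AF is perpendicular to EC
```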
865,293
<blockquote> <p>Prove $\ln[\sin(x)] \in L_1 [0,1].$</p> </blockquote> <p>Since the problem does not require actually solving for the value, my strategy is to bound the integral somehow. I thought I was out of this one free since for $\epsilon &gt; 0$ small enough, $$\lim_{\epsilon \to 0}\int_\epsilon^1 e^{\left|\ln(\sin(x))\right|}dx=\cos(\epsilon)-\cos(1) \to 1-\cos(1)&lt;\infty$$</p> <p>and so by Jensen's Inequality, $$e^{\int_0^1 \left| \ln(\sin(x))\right|\,dx}\le \int_0^1e^{\left|\ln(\sin(x))\right|}\,dx\le1-\cos(1)&lt;\infty$$ so that $\int_0^1 \left|\ln(\sin(x))\right|\,dx&lt;\infty$. </p> <p>The problem, of course, is that the argument begs the question, since Jensen's assumes the function in question is integrable to begin with, and that's what I'm trying to show. </p> <p>Any way to save my proof, or do I have to use a different method? I attempted integration by parts to no avail, so I am assuming there is some "trick" calculation I do not know that I should use here. </p>
Community
-1
<p>A simpler approach would be to observe that the function $x^{1/2}\ln \sin x$ is bounded on $(0,1]$, because it has a finite limit as $x\to 0$ -- by L'Hôpital's rule applied to $\dfrac{\ln \sin x}{x^{-1/2}}$. This gives $|\ln \sin x|\le Mx^{-1/2}$.</p> <hr> <p>As Byron Schmuland noted, $e^{|\ln \sin x|} = 1/\sin x$, which is nonintegrable; this is fatal for your approach.</p>
865,293
<blockquote> <p>Prove $\ln[\sin(x)] \in L_1 [0,1].$</p> </blockquote> <p>Since the problem does not require actually solving for the value, my strategy is to bound the integral somehow. I thought I was out of this one free since for $\epsilon &gt; 0$ small enough, $$\lim_{\epsilon \to 0}\int_\epsilon^1 e^{\left|\ln(\sin(x))\right|}dx=\cos(\epsilon)-\cos(1) \to 1-\cos(1)&lt;\infty$$</p> <p>and so by Jensen's Inequality, $$e^{\int_0^1 \left| \ln(\sin(x))\right|\,dx}\le \int_0^1e^{\left|\ln(\sin(x))\right|}\,dx\le1-\cos(1)&lt;\infty$$ so that $\int_0^1 \left|\ln(\sin(x))\right|\,dx&lt;\infty$. </p> <p>The problem, of course, is that the argument begs the question, since Jensen's assumes the function in question is integrable to begin with, and that's what I'm trying to show. </p> <p>Any way to save my proof, or do I have to use a different method? I attempted integration by parts to no avail, so I am assuming there is some "trick" calculation I do not know that I should use here. </p>
copper.hat
27,978
<p>Here is a proof that hides behind a theorem on swapping order of integration:</p> <p>We have $0 \le {x \over 2} \le \sin x$, and $-\log$ is decreasing on $(0,1]$.</p> <p>Then $\int_0^1 | \log(\sin x)| dx = \int_0^1 - \log( \sin x) dx \le \int_0^1 - \log( { x \over 2}) dx = \log 2 + \int_0^1 - \log( { x}) dx$.</p> <p>Tonelli gives $\int_0^1 -\log(x) dx = \int_{x=0}^1 \int_{t=x}^1 {dt \over t} dx = \int_{t=0}^1 \int_{x=0}^t {dx \over t} dt = 1$.</p> <p>Hence the upper bound $\int_0^1 | \log(\sin x)| dx \le 1+ \log 2$.</p>
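A crude numerical check of the final bound, using a midpoint rule (the singularity at $0$ is integrable, so the rule still converges; the step count is an arbitrary choice of mine):

```python
import math

# Midpoint-rule estimate of the integral of |log(sin x)| over (0, 1].
# The integrand blows up at 0 but is integrable, so the sum converges.
n = 200_000
h = 1.0 / n
estimate = sum(abs(math.log(math.sin((k + 0.5) * h))) * h for k in range(n))
print(estimate, "<=", 1 + math.log(2))
```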
2,764,141
<p>This is what I have tried so far: </p> <p>Since $g(z)$ is bounded, then $\lim\limits_{z\rightarrow 0} zg(z)=0$ and hence $z=0$ is a removable singularity of $g(z)$. We can define $g(0) = \lim\limits_{z\rightarrow 0} f(z)f(\frac{1}{z})$ and make $g$ entire.</p> <p>Then $g(z)$ is a bounded entire function and hence $g$ is a constant function. In other words, $f(z)f(\frac{1}{z}) = c$ for some $c\in\mathbb{C}$ and for $z\neq 0$</p> <p>I don't know how to continue from this step. I tried to prove that $f(\frac{1}{z})$ has either a pole or a removable singularity at $z=0$ to show first that $f(z)$ is a polynomial or a constant but I failed.</p>
mol3574710n0fN074710n
55,485
<p>Try $c\cdot f\left(z\right)^{-1} = f\left(z^{-1}\right)$:</p> <p>$f$ is represented by its power series everywhere: $f\left(z\right) = \sum_{n=0}^{\infty}\,c_n\cdot z^n$</p> <p>From the first line we can conclude that only a single exponent occurs in that power series.</p>
3,858,962
<p>Given a rectangle <span class="math-container">$ABCD$</span>, how can one construct a triangle such that <span class="math-container">$\triangle X, \triangle Y$</span> and <span class="math-container">$\triangle Z$</span> have equal areas? I don't know where to start. I tried some algebra with the areas of the triangles and used the Pythagorean theorem to find the sides of the triangle. I just need a hint. Here is the picture:<a href="https://i.stack.imgur.com/B7iSk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/B7iSk.png" alt="enter image description here" /></a></p>
QED
91,884
<p><a href="https://i.stack.imgur.com/s1yGL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/s1yGL.png" alt="enter image description here" /></a></p> <p>Just make <span class="math-container">$$\frac{|DF|}{|FC|}=\frac{2}{\sqrt{5}+1},\ \frac{|AE|}{|EC|}=\frac{|FC|}{|CD|}=\frac{\sqrt{5}+1}{\sqrt{5}+3}$$</span></p>
1,041,212
<p>I want to show that the function $f(X) = -log \ det(X)$ is convex on the space $S$ of positive definite matrices. </p> <p>What I have done:</p> <p>It seems like this problem could be tackled by considering the restriction of $f(X)$ to a line through a given point $X \in S$ so that $g(\alpha) = f(X + \alpha V)$ for some $V \in S$. So now I am trying to show that g is convex.</p>
Community
-1
<p>More generally, let $f(X)=-\log(|\det(X)|)$, let $\Omega$ be a convex subspace of $GL_n(\mathbb{R})$ and, if $X\in\Omega$, let $T_X$ be the tangent space in $X$ to $\Omega$. If $f$ is convex over $\Omega$, then, for every $X\in\Omega,H\in T_X$, $f''_X(H,H)\geq 0$. The converse is true if $f''_X(H,H)&gt;0$ when $H\not=0$.</p> <p>$f'_X(H)=-trace(HX^{-1})$ and $f''_X(H,K)=trace(HX^{-1}KX^{-1})$; finally $f''_X(H,H)=trace((HX^{-1})^2)$.</p> <ol> <li><p>$\Omega$ is the set of symmetric $&gt;0$ matrices. Then $T_X$ is the space of symmetric matrices, $X^{-1}H$ is diagonalizable and $spectrum(X^{-1}H)\subset \mathbb{R}$; we easily conclude.</p></li> <li><p>One has the same result when $\Omega$ is the set of symmetric $&lt;0$ matrices.</p></li> <li><p>Yet, if $X$ is invertible symmetric not $&gt;0$ and not $&lt;0$, then one can show that there is a symmetric matrix $H$ s.t. $trace((HX^{-1})^2)&lt;0$.</p></li> </ol>
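The formula $f''_X(H,H)=trace((HX^{-1})^2)$ can be spot-checked with finite differences; here is a small pure-Python sketch for $2\times 2$ matrices (the particular $X$, $H$, and step size are arbitrary choices of mine):

```python
import math

# Finite-difference check of f''_X(H, H) = trace((H X^{-1})^2) for
# f(X) = -log|det X|, at a symmetric positive definite X along a
# symmetric direction H (2x2 case).
def det(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

def inv(A):
    d = det(A)
    return [[A[1][1] / d, -A[0][1] / d], [-A[1][0] / d, A[0][0] / d]]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def add(A, B, t=1.0):
    return [[A[i][j] + t * B[i][j] for j in range(2)] for i in range(2)]

def f(A):
    return -math.log(abs(det(A)))

X = [[3.0, 1.0], [1.0, 2.0]]    # symmetric positive definite
H = [[0.5, -0.3], [-0.3, 1.1]]  # symmetric direction

t = 1e-3
second_fd = (f(add(X, H, t)) - 2 * f(X) + f(add(X, H, -t))) / t ** 2
K = mul(H, inv(X))
# trace(K^2) for a 2x2 matrix K
second_exact = K[0][0] ** 2 + 2 * K[0][1] * K[1][0] + K[1][1] ** 2
print(second_fd, second_exact)  # nearly equal, and positive here
```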
1,660,794
<p>Suppose $$a'(x)=b(x)$$ and $$b'(x)=a(x)$$</p> <p>What is $$\int x \sin (x) a(x) dx$$</p> <p>Thanks!</p>
lhf
589
<p>You're talking about <a href="https://en.wikipedia.org/wiki/Polygonal_number" rel="nofollow">Polygonal numbers</a>.</p> <p>The $n$-th $s$-gonal number is $$ \frac{n^2(s-2)-n(s-4)}{2} $$</p> <p>If you have a number $N$ and want to see whether it is an $s$-gonal number, then you have to solve a quadratic equation in $n$: $$ \frac{n^2(s-2)-n(s-4)}{2} = N $$ You are only interested in solutions that are natural numbers: $$ n = \frac{\sqrt{8(s-2)N+(s-4)^2}+(s-4)}{2(s-2)}. $$</p> <p>If you don't know $s$, then there is no general solution because some $s$-gonal numbers are also $t$-gonal numbers for $s\ne t$.</p>
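For a fixed $s$, this check is easy to mechanize: test whether the discriminant $8(s-2)N+(s-4)^2$ is a perfect square and whether the resulting $n$ is a positive integer. A small sketch (the function name is my own):

```python
import math

def is_polygonal(N, s):
    """Check whether N is an s-gonal number using the closed form
    n = (sqrt(8(s-2)N + (s-4)^2) + (s-4)) / (2(s-2))."""
    disc = 8 * (s - 2) * N + (s - 4) ** 2
    root = math.isqrt(disc)
    if root * root != disc:          # discriminant must be a perfect square
        return False
    num = root + (s - 4)
    den = 2 * (s - 2)
    return num % den == 0 and num // den >= 1   # n must be a positive integer

# triangular numbers n(n+1)/2 should all pass for s = 3
print([is_polygonal(t, 3) for t in (1, 3, 6, 10, 15)])  # all True
print(is_polygonal(7, 3))  # False
```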
1,994,277
<p>I am studying linear representation theory for finite groups and came across the claim in the title. When $n\geq 5$, $S_n$ does not have an irreducible $2$-dimensional representation. But I am not sure where to begin. </p> <p>Although it seems that this result will follow from <a href="https://math.stackexchange.com/questions/69384/low-dimensional-irreducible-representations-of-s-n">this</a> as a special case, I am interested in a solution that is specific to this problem. </p> <p>The condition $n\geq 5$ seems to suggest that we need to use the fact that $A_n$ is simple for $n\geq 5$. </p> <p>I would appreciate any hint. </p>
Dietrich Burde
83,966
<p>It is also possible to prove the following, using elementary methods of the above <a href="https://math.stackexchange.com/questions/69384/low-dimensional-irreducible-representations-of-s-n">link</a>:</p> <blockquote> <p>For $n \geq 5$, the only representations of $S_n$ of dimension $&lt;n$ are direct sums of <br> (a) the trivial representation <br> (b) the sign representation <br> (c) the $n-1$ dimensional representation on $n$-tuples of numbers summing to $0$ <br> (d) the tensor product of (b) and (c).</p> </blockquote> <p>We see that every $2$-dimensional representation then is reducible. The argument with the eigenvalues for $5$-cycles is already given in the comments by Jyrki Lahtonen at the linked question, and by David Speyer in his answer.</p>
113,446
<p>Suppose a simple equation in Cartesian coordinates: $$ (x^2+ y^2)^{3/2} = x y $$ In polar coordinates the equation becomes $r = \cos(\theta) \sin(\theta)$. When I plot both, the one in polar coordinates has two extra lobes (I plot the polar figure with $\theta \in [0.05 \pi, 1.25 \pi]$ so the "flow" of the curve is clearer).</p> <pre><code>figurePolar = PolarPlot[Sin[θ] Cos[θ], {θ, 0.05 π, 1.25 π}, PlotStyle -&gt; {Blue, Thick}]; figureCartesian = ContourPlot[(Sqrt[x^2 + y^2])^3 == x y, {x, -0.4, 0.4}, {y, -0.4, 0.4}, ContourStyle -&gt; {Green, Dashed}]; GraphicsGrid[{{figurePolar, figureCartesian}}] </code></pre> <p><a href="https://i.stack.imgur.com/ez5CK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ez5CK.png" alt="same function in polar and Cartesian coordinate"></a> The right one is in Cartesian coordinates; it is correct since $x y \geq 0$. The extra lobes in the polar (left) figure seem to be caused by Mathematica's use of negative $r$, which is against the mathematical definition. Any thoughts?</p>
Narasimham
19,067
<p>$$ r = \pm \sqrt{x^2+y^2} =f(\theta)$$ </p> <p>Basically, when you started using polar coordinates you implicitly (perhaps unwittingly) accepted that a radius vector can be either positive or negative.</p> <p>As a consequence of this artefact, the negative sign makes sense for <em>all</em> polar curves in two dimensions. It has nothing to do with any particular polar curve or with Mathematica. </p> <p>That means you also bargained for the curve that is antisymmetric with respect to the origin:</p> <p>$$ (x,y) = \pm r\ (\cos \theta, \sin \theta ).$$</p> <p>In Mathematica there are options for PlotRegion etc., as others mention.</p> <p>EDIT1</p> <p>That any polar curve can always be associated with its (origin-mirrored) counterpart as its dual... is sometimes disconcerting.</p> <p>For example, starting with the Cartesian circle </p> <p>$$ (x - h)^2 + (y -k)^2= R^2 $$</p> <p>if we convert to polar coordinates, accept both signs, and reconvert to Cartesian coordinates, that is tantamount to accepting its dual </p> <p>$$ (x + h)^2 + (y +k)^2= R^2 $$</p> <p>as the polar counterpart. This is obtained by rotating the circle about the origin through $ \pi .$</p> <p>We may generalize:</p> <p><strong>Every polar plot has a conjugate or dual plot $(r \rightarrow -r )$ that is symmetric to it through the origin.</strong></p>
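The $r \to -r$ artefact described above can be checked numerically. A minimal sketch (in Python rather than Mathematica, purely illustrative): pick an angle in the second quadrant, where $r=\cos\theta\sin\theta$ is negative, and verify that the point PolarPlot would draw there violates the Cartesian equation $(x^2+y^2)^{3/2}=xy$, since the left side is positive while $xy<0$.

```python
import math

def polar_point(theta):
    # r = cos(theta) sin(theta); the plotter draws (r cos t, r sin t) even when r < 0
    r = math.cos(theta) * math.sin(theta)
    return r, r * math.cos(theta), r * math.sin(theta)

# theta = 2 rad lies in the second quadrant, giving a negative radius
r, x, y = polar_point(2.0)
assert r < 0

# the plotted point cannot satisfy (x^2 + y^2)^(3/2) = x*y:
# the left side is positive, but x*y < 0 at this point
lhs = (x * x + y * y) ** 1.5
assert lhs > 0 and x * y < 0
```

This is exactly the source of the extra lobes: they are the origin-mirrored dual of the true curve.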
72,201
<p>Two people play a game. They play a series of points, each producing a winner and a loser, until one player has won at least 4 points and has won at least 2 more points than the other. Anne wins each point with probability p. What is the probability that she wins the game on the kth point played for k=4,5,6,...</p> <p>Ok so I started with k=4: P(win on 4th point) can happen 3 ways: 4-0, 4-1, 4-2, so =p^4(1 + (4 choose 1)q +(5 choose 2)q^2), which is a geometric series for q. I'm not sure what to do with the changing binomial coefficients though. Similarly, P(win on 5th point) can happen 4 ways: 5-0, 5-1, 5-2, 5-3, so =p^5(1 + (5 choose 1)q + (6 choose 2) q^2 +(7 choose 3)q^3)</p> <p>So it seems like we will have two geometric series, one of powers of p, each containing one of powers of q. I'm not sure of the exact pattern and how to formalize this. Thank you!</p>
joriki
6,622
<p>I think you've already got the right result for $k=4$. For higher $k$, note that the game has to go through a tie in order to continue. For instance, for someone to win 5:3, the score must have been 3:3 (since it must have been 4:3, and 4:2 would have ended the game). So you can calculate the probability for 3:3, which is $\binom63p^3q^3$, and from then on the probability is $p^2$ for Anne to win and $2pq$ (two orders for one win and one loss) to reach the next tie, so for $k\ge5$,</p> <p>$$p_k=\binom63p^3q^3p^2\left(2pq\right)^{k-5}=\frac5{8q^2}(2pq)^k\;.$$</p>
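The closed-form simplification at the end can be sanity-checked with exact rational arithmetic; a minimal sketch using only the Python standard library:

```python
from fractions import Fraction
from math import comb

def p_k_long(p, k):
    # reach a 3:3 tie, go through (k-5) tie cycles, then win two straight
    q = 1 - p
    return comb(6, 3) * p**3 * q**3 * p**2 * (2 * p * q) ** (k - 5)

def p_k_short(p, k):
    # the simplified closed form  5/(8 q^2) * (2pq)^k
    q = 1 - p
    return Fraction(5, 8) / q**2 * (2 * p * q) ** k

p = Fraction(3, 10)
for k in range(5, 12):
    assert p_k_long(p, k) == p_k_short(p, k)
```

With exact fractions the two expressions agree term for term, confirming the algebraic simplification.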
1,109,443
<p>I'm currently learning for an algebra exam and I have some examples of questions from few last years. And I can't find a solution to this one:</p> <blockquote> <p>Give three examples of complex numbers where z = -z</p> </blockquote> <p>The only complex number I can think of is 0. Because it is a complex number, isn't it? Like 0 + 0i.</p> <p>What two other complex numbers can be given as examples?</p> <p>Edit: Well, I'm pretty sure it's z = -z. I have only this low-resolution picture, but you can see it in the first task: <a href="https://i.imgur.com/2EuugPZ.jpg" rel="nofollow noreferrer">http://i.imgur.com/2EuugPZ.jpg</a>. Yeah, I know it's all in Polish, but you have to believe me it says to find three examples.</p> <p>Edit 2: Okay, now I see that it might actually say $\bar{z} = -z$.</p>
VividD
121,158
<p>No, <strong>nobody can be $1/12th$ Cherokee</strong>.</p> <p>I'll prove a stronger statement:</p> <blockquote> <p><em>For any natural $p$ and $q$, one can't be a $p/q$ Cherokee if $p$ <strong>is not</strong> divisible by $3$, and $q$ <strong>is</strong> divisible by $3$</em>.</p> </blockquote> <p>First, the statement that a person can only be a "<em>rational</em> fraction of a Cherokee" can be easily proven - I won't waste space for the proof here.</p> <p>Now, let's suppose a person was such a $p/q$ Cherokee. Then, supposing that his/her parents were $p_1/q_1$ and $p_2/q_2$ Cherokee (fractions are reduced to their minimal form), the following would be valid</p> <p>$$p/q = (p_1/q_1 + p_2/q_2)/2$$</p> <p>or</p> <p>$$2\times p\times q_1\times q_2=q\times (p_1\times q_2+p_2\times q_1)$$</p> <p>Since $q$ is divisible by $3$, and $p$ is not divisible by $3$, this means that one of $q_1$ and $q_2$ must be divisible by $3$. In other words, one of the parents has the same property as the original person.</p> <p>And then you can go on ad infinitum - but since the oldest ancestor can be either 100% or 0% Cherokee, and that person does not satisfy the property of being "$p/q$ Cherokee with $p$ not divisible by $3$ and $q$ divisible by $3$", this is a <em>contradiction</em>.</p> <p>This means nobody can be $1/12th$ Cherokee.</p>
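The same conclusion can be corroborated by brute force: every ancestry fraction after $n$ generations is a dyadic rational $k/2^n$, and $1/12$ is never of that form because $3 \nmid 2^n$. A small illustrative sketch (the search depth is an arbitrary choice):

```python
from fractions import Fraction

# k / 2**n == 1/12 would force 12*k == 2**n, i.e. 3 | 2**n -- impossible,
# since a power of 2 is never divisible by 3:
assert all(2**n % 3 != 0 for n in range(1, 64))

# direct finite check over small generation counts, for good measure
target = Fraction(1, 12)
for n in range(0, 14):
    for k in range(0, 2**n + 1):
        assert Fraction(k, 2**n) != target
```

The exhaustive loop is redundant given the divisibility argument, but it mirrors the "go ad infinitum" reasoning concretely for the first 13 generations.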
1,109,443
<p>I'm currently learning for an algebra exam and I have some examples of questions from few last years. And I can't find a solution to this one:</p> <blockquote> <p>Give three examples of complex numbers where z = -z</p> </blockquote> <p>The only complex number I can think of is 0. Because it is a complex number, isn't it? Like 0 + 0i.</p> <p>What two other complex numbers can be given as examples?</p> <p>Edit: Well, I'm pretty sure it's z = -z. I have only this low-resolution picture, but you can see it in the first task: <a href="https://i.imgur.com/2EuugPZ.jpg" rel="nofollow noreferrer">http://i.imgur.com/2EuugPZ.jpg</a>. Yeah, I know it's all in Polish, but you have to believe me it says to find three examples.</p> <p>Edit 2: Okay, now I see that it might actually say $\bar{z} = -z$.</p>
gnasher729
137,175
<p>You can get as close as you like to any fraction. For example, you can be between 1 / 12.5 and 1 / 11.5 Cherokee, and then claiming "I'm one 12th Cherokee" in everyday language would be correct. </p> <p>Also, the fractions that we usually use are approximations. If one parent is pure white and one parent is pure black, the child isn't half white and half black. It will be statistically close to 1/2 each, but in reality the genes are not taken exactly half from each parent, so that child will be a bit more in one of the two directions and not exactly in the middle. </p>
1,109,443
<p>I'm currently learning for an algebra exam and I have some examples of questions from few last years. And I can't find a solution to this one:</p> <blockquote> <p>Give three examples of complex numbers where z = -z</p> </blockquote> <p>The only complex number I can think of is 0. Because it is a complex number, isn't it? Like 0 + 0i.</p> <p>What two other complex numbers can be given as examples?</p> <p>Edit: Well, I'm pretty sure it's z = -z. I have only this low-resolution picture, but you can see it in the first task: <a href="https://i.imgur.com/2EuugPZ.jpg" rel="nofollow noreferrer">http://i.imgur.com/2EuugPZ.jpg</a>. Yeah, I know it's all in Polish, but you have to believe me it says to find three examples.</p> <p>Edit 2: Okay, now I see that it might actually say $\bar{z} = -z$.</p>
Jay
144,901
<p>Assuming that we define your heritage as 1/2 (father + mother), then it is not possible to come up with exactly 1/12. Assuming we start with people who are full Cherokee or not Cherokee at all, i.e. 0 or 1, then all their descendants are going to be some integer over a power of 2. A power of 2 can never reduce to 12, as 12 is a multiple of 3 and a power of 2 can never be a multiple of 3.</p> <p>That said, someone could conceivably be a fraction that rounds to something close to 1/12. For example, suppose a Cherokee marries a non-Cherokee. The result is 1/2 Cherokee. This person marries a full Cherokee, result 3/4. Marries non, result 3/8. Marries full, result 11/16. Marries non, result 11/32. Marries non, result 11/64. Marries non, result 11/128. That's pretty close to 1/12.</p> <p>But THAT said, to come up with a number that is even approximately 1/12 you have to have detailed knowledge of your heritage with many inter-marriages going back at least 6 or 7 generations, and few people would have that much detail. So if you were (approximately) 1/12 Cherokee, I'd guess you wouldn't know it.</p> <p>Years ago I helped develop a computer system for an Indian reservation, and they measured "bloodedness" in 8ths, which requires knowing your ancestry for 3 generations. They didn't cut it any finer than that.</p>
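The fraction bookkeeping in this example is easy to reproduce with exact arithmetic; a short Python sketch (each child's fraction is the average of the parents' fractions, and three final non-Cherokee marriages take 11/16 down to 11/128):

```python
from fractions import Fraction

def child(parent, other):
    # a child's fraction is the average of the two parents' fractions
    return (parent + other) / 2

f = Fraction(0)                  # start non-Cherokee
f = child(f, Fraction(1))        # marries full Cherokee -> 1/2
f = child(f, Fraction(1))        # full -> 3/4
f = child(f, Fraction(0))        # non  -> 3/8
f = child(f, Fraction(1))        # full -> 11/16
for _ in range(3):               # three non-Cherokee marriages
    f = child(f, Fraction(0))    # 11/32, 11/64, 11/128

assert f == Fraction(11, 128)
# 11/128 differs from 1/12 by exactly 1/384 -- close, but never equal
assert abs(f - Fraction(1, 12)) == Fraction(1, 384)
```

The exact gap 1/384 makes the point of the answer concrete: the dyadic fractions approach 1/12 but never reach it.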
1,109,443
<p>I'm currently learning for an algebra exam and I have some examples of questions from few last years. And I can't find a solution to this one:</p> <blockquote> <p>Give three examples of complex numbers where z = -z</p> </blockquote> <p>The only complex number I can think of is 0. Because it is a complex number, isn't it? Like 0 + 0i.</p> <p>What two other complex numbers can be given as examples?</p> <p>Edit: Well, I'm pretty sure it's z = -z. I have only this low-resolution picture, but you can see it in the first task: <a href="https://i.imgur.com/2EuugPZ.jpg" rel="nofollow noreferrer">http://i.imgur.com/2EuugPZ.jpg</a>. Yeah, I know it's all in Polish, but you have to believe me it says to find three examples.</p> <p>Edit 2: Okay, now I see that it might actually say $\bar{z} = -z$.</p>
András Hummer
62,202
<p>Mathematically you can't. Let $a$ and $b$ mark the number of generations since the last time each of your parents had exactly one 100% Cherokee ancestor. Let's assume that</p> <p>$\frac{\frac{1}{2^a} + \frac{1}{2^b}}{2} = \frac{1}{12}$</p> <p>$\frac{1}{2^a} + \frac{1}{2^b} = \frac{1}{6}$</p> <p>$\frac{6}{2^a} + \frac{6}{2^b} = 1$</p> <p>From now on, just for the sake of simplicity, let's presume $a &gt; b$ (if $a = b$, the equation would read $2^a = 12$, which is not a power of $2$).</p> <p>$6 + 6*2^{a - b} = 2^a$</p> <p>$3 + 3*2^{a - b} = 2^{a - 1}$</p> <p>$2^{a - 1} - 3*2^{a - b} = 3$</p> <p>For $a \geq 2$ the left side is the difference of two even numbers ($2^{a-1}$, and $3 \cdot 2^{a-b}$ with $a - b \geq 1$), hence even, so it can never equal the odd number $3$; the remaining case $a = 1$ fails directly. We have a contradiction.</p> <p>But as stated, this is a mathematical statement. Genetics works quite differently.</p>
4,469,733
<p>When randomly selecting a kitten for adoption, there is a <span class="math-container">$23 \%$</span> chance of getting a black kitten, a <span class="math-container">$50 \%$</span> chance of getting a tabby kitten, a <span class="math-container">$7 \%$</span> chance of getting a calico kitten, and a <span class="math-container">$20 \%$</span> chance of getting a ginger kitten.</p> <p>Elisa asks the manager to randomly select two kittens. What is the probability that Elisa gets a black kitten or a tabby kitten?</p> <p>My try:</p> <p>The probability that she gets either black or tabby is one minus the probability that she gets calico and ginger kittens, so the required answer is <span class="math-container">$1-0.07 \times 0.2=0.986$</span></p> <p>But the answer is <span class="math-container">$0.73$</span>?</p>
River Li
584,414
<p>We have <span class="math-container">\begin{align*} \int_2^\infty \frac{\ln^2 t}{t(t - 1)}\,\mathrm{d} t &amp; \overset{t = 1/x} = \int_0^{1/2} \frac{\ln^2 x}{1 - x} \,\mathrm{d} x \\ &amp;\le \int_0^{1/2} \frac{\ln^2 x}{1 - 1/2} \,\mathrm{d} x\\ &amp;= 2\int_0^{1/2} \ln^2 x \,\mathrm{d} x\\ &amp;= \ln^2 2 + 2 + 2\ln 2\\ &amp;&lt; 4. \end{align*}</span></p>
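The chain of bounds can be probed numerically. A sketch using the standard antiderivative $\int \ln^2 x\,dx = x(\ln^2 x - 2\ln x + 2)$ (verifiable by differentiation), plus a crude midpoint estimate of the original integrand; the step count is an arbitrary choice:

```python
import math

def F(x):
    # antiderivative of log(x)**2; F(x) -> 0 as x -> 0+
    return x * (math.log(x) ** 2 - 2 * math.log(x) + 2)

# 2 * integral_0^{1/2} log(x)**2 dx should equal ln(2)^2 + 2 + 2 ln(2)
upper = 2 * (F(0.5) - 0.0)
closed = math.log(2) ** 2 + 2 + 2 * math.log(2)
assert abs(upper - closed) < 1e-9
assert closed < 4

# crude midpoint estimate of integral_0^{1/2} log(x)**2 / (1 - x) dx
n = 200_000
h = 0.5 / n
est = sum(math.log((i + 0.5) * h) ** 2 / (1 - (i + 0.5) * h) * h for i in range(n))
assert est < upper < 4
```

Numerically `closed` is about 3.87, comfortably below 4, and the midpoint estimate of the substituted integral sits below it, as the inequality chain predicts.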
97,261
<p>This semester, I will be taking a senior undergrad course in advanced calculus "real analysis of several variables", and we will be covering topics like: </p> <ul> <li>Differentiability</li> <li>Open mapping theorem</li> <li>Implicit function theorem</li> <li>Lagrange multipliers; submanifolds</li> <li>Integrals</li> <li>Integration on surfaces</li> <li>Stokes theorem, Gauss theorem</li> </ul> <p>I need to know if anyone of you guys know good textbooks that contain practice problems with full solutions or hints that can be used to understand the material. Most of the textbooks I found are covering only the material with few examples.</p>
Potato
18,240
<p>I like C.H. Edwards, "Advanced Calculus of Several Variables." It's cheap and contains many exercises and examples.</p>
526,627
<p>What is the answer for factoring:</p> <p>$$10r^2 - 31r + 15$$</p> <p>I have tried to solve it. This was my prior attempt:</p> <p>$$10r^2 - 31r + 15\\ = (10r^2 - 25r) + (-6r + 15)\\ = -5r(-2r+5) -3 (2r-5) $$ </p>
Omar
95,182
<p>$10r^2 -31r + 15$</p> <p>What are the factors of 10? 1, 2, 5, and 10. What are the factors of 15? 1, 3, 5, and 15. From here, there are two ways: factoring by grouping, or the FOIL method.</p> <p>The factoring method:</p> <p>$10r^2 -31r + 15$</p> <p>$(10r^2 -25r) + (-6r + 15)$</p> <p>$5r(2r - 5) - 3(2r - 5)$; since both terms contain the same factor $(2r - 5)$, it counts as one.</p> <p>At the end, just combine:</p> <p>$(5r - 3)(2r - 5) = 10r^2 -31r + 15$ </p> <p>I prefer the FOIL method. Knowing the factors of 15, and since the middle term is $-31r$, both signs must be negative: $(? - 3)(? - 5)$. Then, knowing the factors of 10:</p> <p>$(5r - 3)(2r - 5) = 10r^2 - 31r + 15$</p>
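Both routes arrive at $(5r-3)(2r-5)$; a quick numerical spot-check of that expansion (a degree-2 polynomial identity that holds at more than two points holds everywhere):

```python
# check (5r - 3)(2r - 5) == 10r^2 - 31r + 15 at several integer points
for r in range(-5, 6):
    assert (5 * r - 3) * (2 * r - 5) == 10 * r * r - 31 * r + 15
```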
3,172,693
<p>Can anybody help me with this equation? I can't find a way to factorize it to find a value of <span class="math-container">$d$</span> as a function of <span class="math-container">$a$</span>:</p> <p><span class="math-container">$$d^3 - 2\cdot d^2\cdot a^2 + d\cdot a^4 - a^2 = 0$$</span></p> <p>Another form:</p> <p><span class="math-container">$$d=\frac{a^2}{(a^2-d)^2}$$</span></p> <p>Maybe this equation has no solution. I don't know. That equation came out of some calculus involving the golden ratio.</p> <p>Thanks for your help.</p>
farruhota
425,072
<p>You can do it step-by-step: <span class="math-container">$$\begin{align}\frac{1}{s^2(s+2)}&amp;=\frac1s\cdot \frac{1}{s\cdot (s+2)}=\\ &amp;=\frac1s\cdot \left(\frac As+\frac{B}{s+2}\right)=\\ &amp;=\frac A{s^2}+\frac1s\cdot \frac B{s+2}=\\ &amp;=\frac{A}{s^2}+\frac{C}{s}+\frac{D}{s+2}.\end{align}$$</span></p>
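The answer shows the structure but not the coefficient values. As an illustration (the values $A=1/2$, $C=-1/4$, $D=1/4$ follow from the cover-up method), one can verify the decomposition with exact rational arithmetic:

```python
from fractions import Fraction

# coefficients of 1/(s^2 (s+2)) = A/s^2 + C/s + D/(s+2)
A, C, D = Fraction(1, 2), Fraction(-1, 4), Fraction(1, 4)

def lhs(s):
    return 1 / (s**2 * (s + 2))

def rhs(s):
    return A / s**2 + C / s + D / (s + 2)

# a rational identity that holds at more points than the degree holds identically
for s in [Fraction(1), Fraction(2), Fraction(-1), Fraction(3, 7), Fraction(5)]:
    assert lhs(s) == rhs(s)
```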
2,515,939
<p>So, I just need a hint for proving $$\lim_{n\to \infty} \int_0^1 e^{-nx^2}\, dx = 0$$ </p> <p>I think maybe the easiest way is to pass the limit inside, because $e^{-nx^2}$ is uniformly convergent on $[0,1]$, but I'm new to that theorem, and have very limited experience with uniform convergence. Furthermore, I don't want to integrate the Taylor expansion, because I'm not familiar with that. So, I want to prove it in a way I'm more familiar with, if possible. So far I've tried: </p> <ol> <li><p>Show that $e^{-nx^2}$ is a monotone decreasing sequence with limit $0$. Then use the monotone property of integrals but I think this argument would just end circularly with passing the limit out of the integration operator. </p></li> <li><p>Bound $e^{-nx^2}$ by 0 and some other $f(x)$ like $\cos^n x$ or $(1-\frac{x^2}{2})^n$ and then use the squeeze theorem. But the integrals of those functions seem to be a little bit out of my math range to analyze. </p></li> </ol> <p>But I have a feeling that there's something much simpler here that I'm missing.</p>
Sangchul Lee
9,340
<p>There are many ways to prove the limit, and better inputs would lead to a better quantitative bounds.</p> <ol> <li><p>Let $\epsilon \in (0, 1)$. Since $x \mapsto e^{-nx^2}$ is decreasing on $[0, 1]$, we have</p> <p>$$ 0 \leq \int_{0}^{1}e^{-nx^2} \,dx = \int_{0}^{\epsilon}e^{-nx^2} \,dx + \int_{\epsilon}^{1}e^{-nx^2} \,dx \leq \epsilon + (1-\epsilon)e^{-n\epsilon^2}. $$</p> <p>So taking $n\to\infty$, we obtain</p> <p>$$ 0 \leq \liminf_{n\to\infty} \int_{0}^{1}e^{-nx^2} \,dx \leq \limsup_{n\to\infty} \int_{0}^{1}e^{-nx^2} \,dx \leq \epsilon. $$</p> <p>Since this holds for all $\epsilon &gt; 0$, taking $\epsilon \downarrow 0$ proves the desired limit.</p></li> <li><p>Notice that $I = \int_{0}^{\infty}e^{-x^2} \, dx &lt; \infty$. This can be proved in a various way, but one easy trick is to observe that for $R &gt; 1$ we have the following uniform bound</p> <p>$$ \int_{0}^{R} e^{-x^2} \, dx \leq \int_{0}^{1}e^{-x^2} \, dx + \underbrace{\int_{1}^{R} e^{-x} \, dx}_{=e^{-1} - e^{-R}} \leq \int_{0}^{1} e^{-x^2} \,dx + e^{-1}$$</p> <p>So it follows that</p> <p>$$ \int_{0}^{1} e^{-nx^2} \, dx \stackrel{\sqrt{n}x = y}{=} \frac{1}{\sqrt{n}} \int_{0}^{\sqrt{n}} e^{-y^2} \, dy \leq \frac{I}{\sqrt{n}} \xrightarrow{n\to\infty} 0. $$</p></li> </ol>
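Both estimates are easy to probe numerically; a sketch with a simple midpoint rule (illustrative only — the step count and the small slack for quadrature error are arbitrary choices):

```python
import math

def integral(n, steps=20_000):
    # midpoint rule for integral_0^1 exp(-n x^2) dx
    h = 1.0 / steps
    return sum(math.exp(-n * ((i + 0.5) * h) ** 2) * h for i in range(steps))

I = math.sqrt(math.pi) / 2            # known value of integral_0^inf e^{-x^2} dx
prev = integral(1)
for n in [10, 100, 1000]:
    cur = integral(n)
    assert cur < prev                 # the integrals decrease with n
    assert cur <= I / math.sqrt(n) + 1e-6   # the I/sqrt(n) bound from proof 2
    prev = cur
```

The values shrink like $1/\sqrt{n}$, exactly as the substitution $\sqrt{n}\,x = y$ in the second proof predicts.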
1,595,946
<blockquote> <p>Let $f:(a,b)\to\mathbb{R}$ be a continuous function such that $\lim_\limits{x\to a^+}{f(x)}=\lim_\limits{x\to b^-}{f(x)}=-\infty$. Prove that $f$ has a global maximum.</p> </blockquote> <p>Apparently, this is similar to the EVT and I believe the proof would be similar, but I cannot think anything related...</p>
Ruben
299,730
<p>By definition, $\lim_{x \rightarrow a^+} f(x) = -\infty$ means that $\forall m \in \mathbb{R}, \exists \epsilon_1 &gt; 0 \ \text{s.t.}\ a &lt; x &lt; a + \epsilon_1 \implies f(x) &lt; m$. Likewise, $\lim_{x \rightarrow b^-} f(x) = -\infty$ means that $\forall m \in \mathbb{R}, \exists \epsilon_2 &gt; 0 \ \text{s.t.}\ b - \epsilon_2 &lt; x &lt; b \implies f(x) &lt; m$.</p> <p>So we can only have $f(x) \geq m$ on some compact interval $I := [a + \epsilon_1, b - \epsilon_2]$. This could of course be an empty interval. To make sure that $f$ actually attains a value $\geq m$, choose $m = f(c)$ for some fixed $c \in (a,b)$. This also makes sure that $a + \epsilon_1 \leq b - \epsilon_2$ (since $c$ must lie in $I$), so $I$ is not empty.</p> <p>By the extreme value theorem, $f$ has a maximum $f(x^*)$ on the compact set $I$, and we know $f(x^*) \geq m$. Since we have $f(x) &lt; m$ outside $I$, we know that $f(x^*)$ is a global maximum.</p>
2,329,542
<p>I looked up wikipedia but honestly I could not make much sense of what I will basically study in Abstract Algebra or what it is all about .</p> <p>I also looked up a question here : <a href="https://math.stackexchange.com/questions/855828/what-is-abstract-algebra-essentially">What is Abstract Algebra essentially?</a></p> <p>But there are so many definitions and terms that I always get bogged down by them. </p> <p>It would be helpful to me and maybe others if someone could explain what Abstract Algebra is all about in <em>simple words</em> that one can understand intuitively. </p>
Pawel
355,836
<p>Abstract algebra "generalizes" the notion of numbers.</p> <p>For example, we can add numbers. But we can also add apples. Also, the order of adding numbers does not matter. Similarly with adding apples. We can add zero to any number and nothing changes. We can also add zero apples and nothing changes.</p> <p>In other words, there are other things in the world that satisfy the same rules as numbers. That makes one think that maybe numbers are just a specific case of some bigger "class" that satisfies certain properties (such as, for example, that we can add two elements in the set). That is what abstract algebra is trying to understand; namely, it is trying to find and understand certain characteristic "properties" that somehow distinguish one class of objects from the other.</p>
2,329,542
<p>I looked up wikipedia but honestly I could not make much sense of what I will basically study in Abstract Algebra or what it is all about .</p> <p>I also looked up a question here : <a href="https://math.stackexchange.com/questions/855828/what-is-abstract-algebra-essentially">What is Abstract Algebra essentially?</a></p> <p>But there are so many definitions and terms that I always get bogged down by them. </p> <p>It would be helpful to me and maybe others if someone could explain what Abstract Algebra is all about in <em>simple words</em> that one can understand intuitively. </p>
Jesse Madnick
640
<p>Abstract algebra is the study of <strong>operations</strong>.</p> <p>As humans, we agree that $a + b = b + a$ makes intuitive sense. We take this rule as an axiom, building our human number system with this axiom as part of the foundation. But maybe <strong>aliens from Planet Zog</strong> believe that $a + b$ should not be $b + a$, which leads them to create/discover an entirely different alien number system.</p> <p>Questions arise immediately. Can one really build a meaningful number system if one abandons $a + b = b +a$? What about the other axioms, like $a + (b+c) = (a+b) + c?$ What happens if we get rid of those?</p> <p>On the flip side: Is there anything we can say about <strong>all</strong> number systems where $a + b$ does equal $b+a$? More interesting still: How many axioms do we need to take before we are led, inevitably, to the number systems we know and love (like the integers)? The idea that we can describe the integers with axioms alone is very attractive: it would mean, in a sense, that the integers are really god-given.</p> <p>Abstract algebra addresses questions like these.</p>
3,828,003
<blockquote> <p>Show that <span class="math-container">$G=\{0,1,2,3\}$</span> over addition modulo 4 is isomorphic to <span class="math-container">$H=\{1,2,3,4\}$</span> over multiplication modulo 5</p> </blockquote> <p>My solution was to brute force check validity of <span class="math-container">$f(a+b)=f(a)f(b)$</span> for all <span class="math-container">$a,b\in\{0,1,2,3\}$</span> where I took <span class="math-container">$f(x)$</span> as <span class="math-container">$f(x)=x+1$</span>. I would like to know if there's a more elegant way.</p>
Stephen Goree
763,360
<p>Both are cyclic groups of order 4. This is enough to say they are isomorphic because all cyclic groups of order <span class="math-container">$n\in\mathbb{N}$</span> are isomorphic to <span class="math-container">$\mathbb{Z}_n$</span>, but the general idea is that you're mapping one cyclic generator to another.</p> <p>Notice that <span class="math-container">$1$</span> generates <span class="math-container">$G$</span> and <span class="math-container">$2$</span> generates <span class="math-container">$H$</span>. So now define <span class="math-container">$\phi:G\rightarrow H$</span> as <span class="math-container">$\phi(a)=2^a\pmod5$</span>. Then <span class="math-container">$\phi(a+b)=2^{a+b}\pmod5=2^a2^b\pmod5=\phi(a)\phi(b)$</span>. Thus, <span class="math-container">$\phi$</span> is operation-preserving. I'll leave it up to you to show that <span class="math-container">$\phi$</span> is bijective.</p>
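The remaining bijectivity check, and the homomorphism property, can be verified exhaustively, since both groups have only four elements; a quick Python sketch:

```python
# phi : (Z_4, +) -> ({1,2,3,4}, * mod 5),  phi(a) = 2**a mod 5
phi = {a: pow(2, a, 5) for a in range(4)}

# bijective: the four images are exactly {1, 2, 3, 4}
assert sorted(phi.values()) == [1, 2, 3, 4]

# operation-preserving: phi((a + b) mod 4) == phi(a) * phi(b) mod 5,
# which works because 2**4 is congruent to 1 mod 5
for a in range(4):
    for b in range(4):
        assert phi[(a + b) % 4] == (phi[a] * phi[b]) % 5
```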
3,518,285
<p>I started studying the book of Daniel Huybrechts, Complex Geometry An Introduction. I tried studying <a href="https://mathoverflow.net/questions/13089/why-do-so-many-textbooks-have-so-much-technical-detail-and-so-little-enlightenme">backwards</a> as much as possible, but I have been stuck on the concepts of <a href="https://en.wikipedia.org/wiki/Linear_complex_structure" rel="nofollow noreferrer">almost complex structures</a> and <a href="https://en.wikipedia.org/wiki/Complexification" rel="nofollow noreferrer">complexification</a>. I have studied several books and articles on the matter including ones by <a href="https://kconrad.math.uconn.edu/blurbs/linmultialg/complexification.pdf" rel="nofollow noreferrer">Keith Conrad</a>, <a href="https://individual.utoronto.ca/jordanbell/notes/complexification.pdf" rel="nofollow noreferrer">Jordan Bell</a>, <a href="http://www.physics.rutgers.edu/~gmoore/618Spring2019/GTLect2-LinearAlgebra-2019.pdf" rel="nofollow noreferrer">Gregory W. Moore</a>, <a href="https://www.springer.com/gp/book/9780387728285" rel="nofollow noreferrer">Steven Roman</a>, <a href="https://rads.stackoverflow.com/amzn/click/com/2881246834" rel="nofollow noreferrer" rel="nofollow noreferrer">Suetin, Kostrikin and Mainin</a>, <a href="https://www.springer.com/gp/book/9783319115108" rel="nofollow noreferrer">Gauthier</a></p> <p>I have several questions on the concepts of almost complex structures and complexification. 
Here is one:</p> <p>I understand for a finite dimensional <span class="math-container">$\mathbb R-$</span>vector space <span class="math-container">$V=(V,\text{Add}_V: V^2 \to V,s_V: \mathbb R \times V \to V)$</span>, the following are equivalent</p> <ol> <li><span class="math-container">$\dim V$</span> even</li> <li><span class="math-container">$V$</span> has an almost complex structure <span class="math-container">$J: V \to V$</span></li> <li><span class="math-container">$V$</span> has a complex structure <span class="math-container">$s_V^{\#}: \mathbb C \times V \to V$</span> that agrees with its real structure: <span class="math-container">$s_V^{\#} (r,v)=s_V(r,v)$</span>, for any <span class="math-container">$r \in \mathbb R$</span> and <span class="math-container">$v \in V$</span></li> <li>if and only if <span class="math-container">$V \cong \mathbb R^{2n} \cong (\mathbb R^{n})^2$</span> for some positive integer <span class="math-container">$n$</span> (that turns out to be half of <span class="math-container">$\dim V$</span>) if and only if <span class="math-container">$V \cong$</span> (maybe even <span class="math-container">$=$</span>) <span class="math-container">$W^2=W \bigoplus W$</span> for some <span class="math-container">$\mathbb R-$</span>vector space <span class="math-container">$W$</span>.</li> </ol> <p>The last condition makes me think that the property 'even-dimensional' for finite-dimensional <span class="math-container">$V$</span> is generalised by the property '<span class="math-container">$V \cong W^2$</span> for some <span class="math-container">$\mathbb R-$</span>vector space <span class="math-container">$W$</span>' for finite or infinite dimensional <span class="math-container">$V$</span>.</p> <p>Question: For <span class="math-container">$V$</span> finite or infinite dimensional <span class="math-container">$\mathbb R-$</span>vector space, are the following equivalent?</p> <ol start="5"> <li><p><span class="math-container">$V$</span> 
has an almost complex structure <span class="math-container">$J: V \to V$</span></p></li> <li><p>Externally, <span class="math-container">$V \cong$</span> (maybe even <span class="math-container">$=$</span>) <span class="math-container">$W^2=W \bigoplus W$</span> for some <span class="math-container">$\mathbb R-$</span> vector space <span class="math-container">$W$</span></p></li> <li><p>Internally, <span class="math-container">$V=S \bigoplus U$</span> for some <span class="math-container">$\mathbb R-$</span> vector subspaces <span class="math-container">$S$</span> and <span class="math-container">$U$</span> of <span class="math-container">$V$</span> with <span class="math-container">$S \cong U$</span> (and <span class="math-container">$S \cap U = \{0_V\}$</span>)</p></li> </ol>
GreginGre
447,764
<p>Yes, they are. Note that 6. and 7. are clearly equivalent (if we have 6. take for <span class="math-container">$S$</span> and <span class="math-container">$U$</span> the images of <span class="math-container">$W\times \{0\}$</span> and <span class="math-container">$\{0\}\times W$</span> under an isomorphism <span class="math-container">$W^2\overset{\sim}{\to} V$</span>. If we have 7., then <span class="math-container">$V\simeq S\times U\simeq S\times S$</span>, so take <span class="math-container">$W=S$</span>.)</p> <p>Assume that we have <span class="math-container">$7.$</span> Since <span class="math-container">$S$</span> and <span class="math-container">$U$</span> are isomorphic , their bases have same cardinality (countable or not). Pick <span class="math-container">$(s_i)_{i\in I}$</span> a basis of <span class="math-container">$S$</span>, and <span class="math-container">$(u_i)_{i\in I}$</span> a basis of <span class="math-container">$U$</span> (we can index the two bases by the same set, thanks to the previous remark). 
</p> <p>Setting <span class="math-container">$J(s_i)=u_i$</span> and <span class="math-container">$J(u_i)=-s_i$</span> for all <span class="math-container">$i\in I$</span> yields an endomorphism <span class="math-container">$J$</span> satisfying <span class="math-container">$J^2=-Id_V$</span>.</p> <p>Conversely, assume that we have an endomorphism <span class="math-container">$J$</span> of <span class="math-container">$V$</span> satisfying <span class="math-container">$J^2=-Id_V$</span>.</p> <p>The map <span class="math-container">$\mathbb{C}\times V\to {V}$</span> sending <span class="math-container">$(a+bi,v)$</span> to <span class="math-container">$av+ bJ(v)$</span> endows <span class="math-container">$V$</span> with the structure of a complex vector space which agrees with its real structure on <span class="math-container">$\mathbb{R}\times V$</span>.</p> <p>Now pick a complex basis <span class="math-container">$(s_i)_{i\in I}$</span> of <span class="math-container">$V$</span>, and set <span class="math-container">$u_i=i\cdot s_i=J(s_i)$</span>. Then, gluing <span class="math-container">$(s_i)_{i\in I}$</span> and <span class="math-container">$(u_i)_{i\in I}$</span>, we obtain a real basis of <span class="math-container">$V$</span>. The real subspaces <span class="math-container">$S=Span_\mathbb{R}(s_i)$</span> and <span class="math-container">$U=Span_\mathbb{R}(u_i)$</span> then satisfy the conditions of 7. </p>
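The construction of $J$ from paired basis vectors can be made concrete in coordinates; a hypothetical small example on $\mathbb{R}^4$ (so $n=2$, basis ordered as $s_1, s_2, u_1, u_2$), with plain nested lists to avoid any dependencies:

```python
# J sends s_i -> u_i and u_i -> -s_i, written as a 2n x 2n matrix
n = 2
J = [[0] * (2 * n) for _ in range(2 * n)]
for i in range(n):
    J[n + i][i] = 1      # the image of s_i is u_i
    J[i][n + i] = -1     # the image of u_i is -s_i

def matmul(A, B):
    # naive matrix product, enough for this size
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

minus_identity = [[-1 if i == j else 0 for j in range(2 * n)] for i in range(2 * n)]
assert matmul(J, J) == minus_identity   # J^2 = -Id, an almost complex structure
```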
1,588,665
<p>I have been reading up on finding the eigenvectors and eigenvalues of a symmetric matrix lately and I am totally unsure of <strong>how and why</strong> it works. Given a matrix, I can find its eigenvectors and values like a machine but the problem is, I have no intuition of how it works.</p> <p>1) I understand that $v^tAv$ is the equation of an ellipse in matrix form</p> <p>2) I understand how Lagrangian multipliers work</p> <p>Can someone please show me the proof where finding the eigenvectors of such a matrix gives me the principal components ? </p> <p>I am following this topic. <a href="https://math.stackexchange.com/questions/87199/maximizing-symmetric-matrices-v-s-non-symmetric-matrices">Maximizing symmetric matrices v.s. non-symmetric matrices</a> I know how to find the eigenvalues of the matrix but <strong>I have no idea how it works</strong></p>
Andre
231,643
<p>In the step going from $9x^2=36x$ to $9x=36$, you divided by $x$. As you cannot divide by 0, this step is only valid for $x \ne 0$. So you lost the solution $x=0$ with this transformation.</p>
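The lost root is easy to exhibit; a one-line check over a small integer range (illustrative):

```python
# dividing 9x^2 = 36x by x silently discards x = 0; both roots are:
roots = [x for x in range(-10, 11) if 9 * x * x == 36 * x]
assert roots == [0, 4]
```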
232,672
<p>Yesterday, I posted a question that was received in a different way than I intended it. I would like to ask it again by adding some context. </p> <p>In ZF one can prove $\not\exists x (\forall y (y\in x)).$ This statement can be read in many ways, such as (1) "there is no set of all sets" (2) "the class of all sets is proper (i.e. is not a set)" etc. and I believe that there is a substantial philosophical difference between (1) and (2). The former suggests that the existential quantifier refers to the actual existence of something intended in a platonic way, while the latter interprets $\exists$ as meaning "it is a set". So, in the second case, I would say that the existential quantifier is a way of singling out things that are sets from things that are not sets, rather than a way to claim actual existence of something. </p> <p>I am a set theorist and I always intended the statement above as (2) because I don't think existential quantification in set theory refers to actual existence. I suspect that also Zermelo intended existential quantifications as a way of singling out sets from things that are not sets, because in its original formulation he introduced "urelements" i.e. objects that are not sets but could be elements of a set. But I am interested in what is the most common interpretation among contemporary set theorists and I have the impression that my colleagues in set theory use (1) more often. </p> <p>So my question is: from the point of view of someone who believes that existential quantifiers in set theory refer to actual existence, does the statement above mean "the class of all sets does not exist"? Does this interpretation appear anywhere in the literature? </p> <p>Thank you in advance. </p>
Andreas Blass
6,794
<p>I regard ZF (or better ZFC) as a (partial) description of the behavior of actual sets. The theorem you quoted says, in that context, that there is no set containing everything. In the same context, I might sometimes talk about classes, but I would regard such talk as an abbreviation for statements that are only about sets, as explained, for example, in Jensen's book "Modelle der Mengenlehre." In other words, I don't think of classes as actual entities.</p> <p>Concerning urelements, I would use a slightly modified version of ZF to describe a world of sets and urelements; see for example the theory ZFA in Jech's "Axiom of Choice" book. The theorem you quoted still holds in the presence of urelements, and it still has the interpretation that there is no set containing everything. </p> <p>If some people (not me) wanted to work with sets and proper classes as genuinely existing entities, they would probably use a theory like Morse-Kelley to formalize their ideas. The theorem you quoted is still available; now it says that there is no class containing everything (including all classes). </p> <p>There are, of course, set theories in which the theorem you quoted is not true; Quine's New Foundations and its variants are the most prominent of these. Here there is a set of everything. Unfortunately, I have no idea what sort of entities NF is "intended" to describe; perhaps Holmes's consistency proof will eventually lead me to such an idea.</p>
58,631
<p>I am (partly as an exercise to understand <em>Mathematica</em>) trying to model the response of a damped simple harmonic oscillator to a sinusoidal driving force. I can solve the differential equation with some arbitrarily chosen boundary conditions, and get a nice graph;</p> <pre><code>params = {ν1 -&gt; 1.0, ω1 -&gt; 10.0, F -&gt; 4.0}; system = {D[x1[t], {t, 2}] == -ν1 D[x1[t], t] - ω1^2 x1[t] + F Cos[ω t], x1[0] == 1, x1'[0] == 0}; soln = DSolve[system /. params, x1[t], t][[1]][[1]]; Plot[x1[t] /. soln /. ω -&gt; 8, {t, 1, 20}, Frame -&gt; True, Axes -&gt; False] </code></pre> <p><img src="https://i.stack.imgur.com/tRS4H.png" alt="SHM transients"></p> <p>But I don't care about the transients - I just care about the steady state situation. I tried using Limit to extract this;</p> <pre><code>amp = Table[Max[Limit[x1[t] /. soln, t -&gt; ∞]], {ω, 1, 20, 1}] ListPlot[amp] </code></pre> <p><img src="https://i.stack.imgur.com/df9l9.png" alt="response"></p> <p>Looks a bit peculiar to me. Also, this is incredibly slow, and doesn't work symbolically.</p> <p>I thought I could do something along the lines of forcing it to take as a solution of the DE $a Sin(\omega t + \phi)$;</p> <pre><code>params = {ν1 -&gt; 40.0, ω1 -&gt; 10.0, F -&gt; 10.0}; x1 = a Sin[ω t + ϕ]; system = D[x1, {t, 2}] == -ν1 D[x1, t] - ω1^2 x1 + F Cos[ω t] amp = Solve[system /. params, a] phase = Solve[D[a /. amp, t] == 0, ϕ][[1]][[1]] </code></pre> <p>but this just turns into a mess and doesn't give the right result either.</p> <p>Is there a canonical way to tackle this sort of problem? I don't really know what I'm doing with Mathematica yet, so any explanations would be gratefully received.</p>
Dr. belisarius
193
<pre><code>params = {ν1 -&gt; 1, ω1 -&gt; 10, F -&gt; 4}; system = {D[ x1[t], {t, 2}] == -ν1 D[x1[t], t] - ω1^2 x1[t] + F Cos[ω t], x1[0] == 1, x1'[0] == 0}; soln = DSolve[system /. params, x1[t], t][[1, 1]]; (* and the steady state is*) lim = ((List @@ (Expand@soln[[2]])) /. x_ /; (Limit[x, t -&gt; Infinity] == 0) :&gt; 0) steadyState = Simplify@Together[Plus @@ lim] (* (-4 (-100 + ω^2) Cos[t ω] + 4 ω Sin[t ω])/(10000 - 199 ω^2 + ω^4) *) </code></pre> <p>$\frac{4 \omega \sin (t \omega )-4 \left(\omega ^2-100\right) \cos (t \omega )}{\omega ^4-199 \omega ^2+10000}$</p> <p>And the amplitudes, with a nice resonance peak:</p> <pre><code>Plot[NMaxValue[steadyState, t], {ω, 1, 20}, PlotRange -&gt; All] </code></pre> <p><img src="https://i.stack.imgur.com/3Cj8C.png" alt="Mathematica graphics"></p>
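Not part of the original answer, but a useful cross-check: for the driven damped oscillator $x'' = -\nu_1 x' - \omega_1^2 x + F\cos\omega t$, the steady-state amplitude is the standard result $F/\sqrt{(\omega_1^2-\omega^2)^2+\nu_1^2\omega^2}$, which agrees with the closed form above (the amplitude of the expression shown is $4/\sqrt{(\omega^2-100)^2+\omega^2}$). A short Python sketch with the question's parameters ($\nu_1=1$, $\omega_1=10$, $F=4$; the function name is mine):

```python
import math

def steady_state_amplitude(omega, nu=1.0, omega1=10.0, F=4.0):
    # standard amplitude of the particular solution of
    # x'' + nu*x' + omega1^2*x = F*cos(omega*t)
    return F / math.sqrt((omega1**2 - omega**2)**2 + (nu * omega)**2)

print(steady_state_amplitude(0.0))   # static response F/omega1^2 = 0.04
print(steady_state_amplitude(10.0))  # at omega = omega1: F/(nu*omega1) = 0.4
```

The peak of this curve sits just below $\omega_1$, reproducing the resonance plot above.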
2,895,655
<blockquote> <p>Four coins of different colour are thrown. If three out of these show heads then find the probability that the remaining one shows tails. </p> </blockquote> <p>My approach:</p> <p>$A$: The event in which 3 heads appear in 3 coins out of 4</p> <p>$B$: The event in which the 4th coin shows tails</p> <p>thus we need to find $P(\frac{B}{A})$</p> <p>and we know that $P(\frac{B}{A})= \frac{P(A \cap B)}{P(A)}$</p> <p>The ways in which 3 out of 4 coins can be chosen= $^4C_3$</p> <p>$P(A)= ^4C_3 (\frac{1}{2})^3$</p> <p>and</p> <p>$ P(A \cap B)= ^4C_3 (\frac{1}{2})^4 $</p> <p>so</p> <p>$ P(\frac{B}{A})= \frac{1}{2} $</p> <p>However the answer given is $\frac {4}{5}$. What am I doing wrong?</p>
WaveX
323,744
<p>Your mistake is in the event $A$.</p> <blockquote> <p>The event in which 3 heads appear in 3 coins out of 4</p> </blockquote> <p>What we mean by that is that we have flipped <em>at least</em> $3$ heads. Out of the $16$ ways we can flip four coins, $5$ of them have at least $3$ heads (THHH, HTHH, HHTH, HHHT, HHHH).</p> <p>So the probability of $A$ is $5/16$.</p> <p>Event $B$ is that the last remaining coin is tails. So the probability of $A\cap B$ is $4/16$, since we throw out the case of HHHH.</p> <p>Doing the division for $B$ given $A$ now gives us</p> <p>$$\frac{4/16}{5/16} = \frac45$$</p> <p>This can be seen by noting that, of our five original outcomes, only four had the last remaining coin be tails. Notice that they didn't specify which coin the remaining one needed to be; that is why the answer is $4/5$ rather than $1/2$.</p>
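The conditional probability can also be confirmed by brute-force enumeration of the $2^4$ equally likely outcomes (a quick sketch, not part of the original answer):

```python
from itertools import product

outcomes = list(product("HT", repeat=4))                       # 16 equally likely flips
at_least_3_heads = [o for o in outcomes if o.count("H") >= 3]  # event A
a_and_b = [o for o in at_least_3_heads if "T" in o]            # remaining coin is tails

p = len(a_and_b) / len(at_least_3_heads)
print(len(at_least_3_heads), len(a_and_b), p)                  # 5 4 0.8
```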
279,985
<p>How can I convert a Beta Distribution to a Gamma Distribution? Strictly speaking, I want to transform parameters of a Beta Distribution to parameters of the corresponding Gamma Distribution. I have mean value, alpha and beta parameters of a Beta Distribution and I want to transform them to those of a Gamma Distribution.</p>
rukhsar khan
210,429
<p>If $x$ follows a Beta distribution of the first kind with parameters $\alpha$ and $1$, then $-\log x$ follows an exponential distribution with rate $\alpha$, that is, a Gamma distribution with shape parameter $1$ and rate $\alpha$.</p>
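The underlying fact, stated precisely: if $X\sim\mathrm{Beta}(\alpha,1)$ then $-\log X$ is exponential with rate $\alpha$ (a Gamma distribution with shape $1$ and rate $\alpha$), so its mean is $1/\alpha$. A quick simulation sketch (the variable names and sample size are mine):

```python
import math, random

random.seed(0)
alpha = 2.0
n = 20000
# sample Beta(alpha, 1) and transform; the sample mean should approach 1/alpha
samples = [-math.log(random.betavariate(alpha, 1.0)) for _ in range(n)]
mean = sum(samples) / n
print(mean)  # close to 1/alpha = 0.5
```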
1,652,701
<p>Am I on the right track to solving this?</p> <p>$$e^z=6i$$ Let $w=e^z$</p> <p>Thus,</p> <p>$$w=6i$$ $$e^w=e^{6i}$$ $$e^w=\cos(6)+i\sin(6)$$ $$\ln(e^w)=\ln(\cos(6)+i\sin(6))$$ $$w=\ln(\cos(6)+i\sin(6))$$ $$e^z=\ln(\cos(6)+i\sin(6))$$ $$\ln(e^z)=\ln(\ln(\cos(6)+i\sin(6)))$$ $$z=\ln(\ln(\cos(6)+i\sin(6)))$$</p> <p>I had another method that started by taking the natural log of both sides right away, but that leads to $\arctan(6/0)$, which is undefined...</p>
Eleven-Eleven
61,030
<p>$e^z=6i$.</p> <p>Let $z=x+iy$. Note that $e^z=e^x\cdot e^{iy}$</p> <p>Thus $$e^z=e^x\cdot e^{iy}=6e^{i\left(\frac{\pi}{2}+2k\pi\right)}$$</p> <p>So $e^x=6$ and so $x=\ln{6}$.</p> <p>So $y=\frac{\pi}{2}+2k\pi$</p> <p>Therefore you have as your solutions $z=\ln{6}+i\left(\frac{\pi}{2}+2k\pi\right)$ for integer $k$.</p>
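The family of solutions can be checked numerically with Python's cmath (an illustrative sketch):

```python
import cmath

# z = ln 6 + i(pi/2 + 2k*pi) for integer k; each should exponentiate back to 6i
for k in range(-2, 3):
    z = cmath.log(6) + 1j * (cmath.pi / 2 + 2 * k * cmath.pi)
    w = cmath.exp(z)
    print(w)  # each value is 6i up to rounding error
```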
1,652,701
<p>Am I on the right track to solving this?</p> <p>$$e^z=6i$$ Let $w=e^z$</p> <p>Thus,</p> <p>$$w=6i$$ $$e^w=e^{6i}$$ $$e^w=\cos(6)+i\sin(6)$$ $$\ln(e^w)=\ln(\cos(6)+i\sin(6))$$ $$w=\ln(\cos(6)+i\sin(6))$$ $$e^z=\ln(\cos(6)+i\sin(6))$$ $$\ln(e^z)=\ln(\ln(\cos(6)+i\sin(6)))$$ $$z=\ln(\ln(\cos(6)+i\sin(6)))$$</p> <p>I had another method that started by taking the natural log of both sides right away, but that leads to $\arctan(6/0)$, which is undefined...</p>
Michael Hardy
11,667
<p>Suppose $z=x+iy$ and $x$ and $y$ are real. Then $$ 6i = 6(0 + i) = e^z = e^{x+iy} = e^x e^{iy} = e^x(\cos y + i\sin y). $$ So $e^x = 6$ and $0+1i=\cos y + i\sin y$. Thus $\cos y=0$ and $\sin y=1$. So $y = \pi/2$ or $\pi/2+ 2\pi n$ for some integer $n$.</p>
3,459,106
<p>I have a function <span class="math-container">$$ \frac{\ln x}{x} $$</span> and I wonder, is <span class="math-container">$y=0$</span> an asymptote? I mean, it is kinda strange that the graph at some point passes through that asymptote. I know it meets the criterion for an asymptote, but it's kinda strange, if you understand me. :D</p>
K.Power
306,685
<p>You are correct to have doubts. For a counterexample to the statement, just take each <span class="math-container">$f_n=f$</span>, where <span class="math-container">$f$</span> is any discontinuous function of your choice. Then clearly <span class="math-container">$f_n\to g$</span> uniformly, and <span class="math-container">$(f_n)$</span> is uniformly Cauchy.</p> <p>Note that to talk about Cauchyness of a sequence of functions is generally ambiguous, unless it is known what function space you are working in. You can have a sequence of functions which is pointwise Cauchy but not uniformly Cauchy.</p>
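The last remark can be illustrated concretely: on $[0,1]$ the sequence $f_n(x)=x^n$ (my example, not the answer's) is pointwise Cauchy but not uniformly Cauchy, since $\sup_x |f_n(x)-f_{2n}(x)|$ stays near $1/4$ no matter how large $n$ gets:

```python
# f_n(x) = x**n on [0, 1]: pointwise Cauchy, not uniformly Cauchy
grid = [i / 1000 for i in range(1001)]

def sup_gap(n):
    # approximate sup |f_n - f_{2n}| over a grid; the true sup is 1/4, at x = 2**(-1/n)
    return max(abs(x**n - x**(2 * n)) for x in grid)

print(sup_gap(50), sup_gap(500))   # both stay near 0.25
print(abs(0.5**50 - 0.5**100))     # pointwise, the gap at x = 0.5 is tiny
```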
43,355
<p>I recently came across the following formula, which is apparently known as <em>Laplace's summation formula:</em></p> <p>$$\int_a^b f(x) dx = \sum_{k=a}^{b-1} f(k) + \frac{1}{2} \left(f(b) - f(a)\right) - \frac{1}{12} \left(\Delta f(b) - \Delta f(a)\right) $$ $$+ \frac{1}{24} \left( \Delta^2 f(b) - \Delta^2 f(a) \right) - \frac{19}{720} \left(\Delta^3 f(b) - \Delta^3 f(a) \right) + \cdots$$</p> <p>(Of course, the right-hand side isn't guaranteed to converge.) The coefficient on the term with $\Delta^{k-1}$ is $\frac{c_k}{k!}$, where $c_k$ is apparently called either a <em>Cauchy number of the first kind</em> or a <em>Bernoulli number of the second kind</em>. </p> <p>The formula looks to me like a finite calculus version of the <a href="http://en.wikipedia.org/wiki/Euler-Maclaurin_formula" rel="nofollow">Euler-Maclaurin summation formula</a>.</p> <p>I'm trying to find out more about Laplace's summation formula. However, the usual suspects (the arXiv, Wikipedia, MathWorld, Google) aren't turning up much. There was a little on MathSciNet, the most promising of which was a paper by Merlini, Sprugnoli, and Verri entitled "The Cauchy Numbers" (<em>Discrete Mathematics</em> 306(16): 1906-1920, 2006). The MathSciNet review says, "Application of the Laplace summation formula involving the harmonic numbers [is] also given." I've requested the paper through interlibrary loan, but it has not arrived yet.</p> <p>While I'm interested in the formula in general, I'm particularly interested in these two questions.</p> <ol> <li><p>What applications are there for the Laplace summation formula? (It seems like there ought to be a sufficient number of applications for it to deserve having Laplace's name attached to it. 
I suppose one could use it for asymptotic analysis, but I'm not sure what the advantage would be over Euler-Maclaurin.)</p></li> <li><p>What is the error bound on the formula when it is truncated after $n$ terms?</p></li> </ol> <p>I wasn't sure how to tag this; feel free to retag.</p>
SandeepJ
5,372
<blockquote> <p>However, the usual suspects (the arXiv, Wikipedia, MathWorld, Google) aren't turning up much.</p> </blockquote> <p>You forgot google books!</p> <p>There are references to the Laplace summation formula in two books. </p> <ol> <li><p>Page 248 in <em>The rise and development of the theory of series up to the early 1820s</em> by Giovanni Ferraro <a href="http://books.google.com/books?id=vLBJSmA9zgAC" rel="nofollow">http://books.google.com/books?id=vLBJSmA9zgAC</a></p></li> <li><p>Page 192 in <em>A history of numerical analysis from the 16th through the 19th century</em> by Herman Goldstine <a href="http://books.google.com/books?id=20csAQAAIAAJ" rel="nofollow">http://books.google.com/books?id=20csAQAAIAAJ</a></p></li> </ol>
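To see the formula in action (an illustrative check of my own, not from the references): for $f(x)=x^2$ on $[0,4]$ the series terminates, because $\Delta^2 f$ is constant so all higher difference terms cancel, and the first three terms already reproduce $\int_0^4 x^2\,dx = 64/3$ exactly:

```python
def delta(f):
    # forward difference operator
    return lambda x: f(x + 1) - f(x)

f = lambda x: x * x
a, b = 0, 4

approx = sum(f(k) for k in range(a, b))   # sum_{k=a}^{b-1} f(k) = 14
approx += 0.5 * (f(b) - f(a))             # + (1/2)(f(b) - f(a)) = 8
df = delta(f)
approx -= (df(b) - df(a)) / 12            # - (1/12)(Delta f(b) - Delta f(a))
print(approx, 64 / 3)                     # both are 21.333...
```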
43,355
<p>I recently came across the following formula, which is apparently known as <em>Laplace's summation formula:</em></p> <p>$$\int_a^b f(x) dx = \sum_{k=a}^{b-1} f(k) + \frac{1}{2} \left(f(b) - f(a)\right) - \frac{1}{12} \left(\Delta f(b) - \Delta f(a)\right) $$ $$+ \frac{1}{24} \left( \Delta^2 f(b) - \Delta^2 f(a) \right) - \frac{19}{720} \left(\Delta^3 f(b) - \Delta^3 f(a) \right) + \cdots$$</p> <p>(Of course, the right-hand side isn't guaranteed to converge.) The coefficient on the term with $\Delta^{k-1}$ is $\frac{c_k}{k!}$, where $c_k$ is apparently called either a <em>Cauchy number of the first kind</em> or a <em>Bernoulli number of the second kind</em>. </p> <p>The formula looks to me like a finite calculus version of the <a href="http://en.wikipedia.org/wiki/Euler-Maclaurin_formula" rel="nofollow">Euler-Maclaurin summation formula</a>.</p> <p>I'm trying to find out more about Laplace's summation formula. However, the usual suspects (the arXiv, Wikipedia, MathWorld, Google) aren't turning up much. There was a little on MathSciNet, the most promising of which was a paper by Merlini, Sprugnoli, and Verri entitled "The Cauchy Numbers" (<em>Discrete Mathematics</em> 306(16): 1906-1920, 2006). The MathSciNet review says, "Application of the Laplace summation formula involving the harmonic numbers [is] also given." I've requested the paper through interlibrary loan, but it has not arrived yet.</p> <p>While I'm interested in the formula in general, I'm particularly interested in these two questions.</p> <ol> <li><p>What applications are there for the Laplace summation formula? (It seems like there ought to be a sufficient number of applications for it to deserve having Laplace's name attached to it. 
I suppose one could use it for asymptotic analysis, but I'm not sure what the advantage would be over Euler-Maclaurin.)</p></li> <li><p>What is the error bound on the formula when it is truncated after $n$ terms?</p></li> </ol> <p>I wasn't sure how to tag this; feel free to retag.</p>
Anixx
10,059
<p>You may be also interested in this formula for indefinite sum of $f(x)$:</p> <p>$$\sum_x f(x)=-\sum_{k=1}^{\infty}\frac{\Delta^{k-1}f(x)}{k!}(-x)_k+C$$</p> <p>where $(x)_k=\frac{\Gamma(x+1)}{\Gamma(x-k+1)} $ is a falling factorial.</p>
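This formula is easy to sanity-check numerically (my own sketch, using the convention that the indefinite sum $F$ satisfies $\Delta F(x) = f(x)$): for $f(x)=x^2$ the differences vanish after $\Delta^2 f = 2$, and the formula with $C=0$ returns $\sum_{j=0}^{x-1} j^2$:

```python
from math import factorial

def falling(t, k):
    # falling factorial (t)_k = t*(t-1)*...*(t-k+1)
    out = 1
    for i in range(k):
        out *= t - i
    return out

def indefinite_sum_x2(x):
    # Delta^0 f = x^2, Delta^1 f = 2x+1, Delta^2 f = 2; higher differences vanish
    diffs = [x * x, 2 * x + 1, 2]
    return -sum(d * falling(-x, k + 1) / factorial(k + 1)
                for k, d in enumerate(diffs))

print(indefinite_sum_x2(3))  # 5.0  (= 0^2 + 1^2 + 2^2)
print(indefinite_sum_x2(4))  # 14.0
```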
470,617
<ol> <li><p>Two competitors won $n$ votes each. How many ways are there to count the $2n$ votes, in a way that one competitor is always ahead of the other?</p></li> <li><p>One competitor won $a$ votes, and the other won $b$ votes. $a&gt;b$. How many ways are there to count the votes, in a way that the first competitor is always ahead of the other? (They can have the same amount of votes along the way)</p></li> </ol> <p>I know that the first question is the same as the number of different legal strings built of brackets, which is equal to the Catalan number; $\frac{1}{n+1}{2n\choose n}$, by the proof with the grid.</p> <p>I am unsure about how to go about solving the second problem.</p>
Batominovski
72,152
<p>Consider the polynomial $$P(z):=z^6+z^5+z^4+z^3+z^2+z+1\,.$$ Let $t:=z+\dfrac{1}{z}$. Therefore, $P(z)=z^3\,Q(t)$, where $$Q(t):=t^3+t^2-2t-1\,.$$</p> <p>Let $\omega:=\exp(\text{i}\theta)$, where $\theta:=\dfrac{2\pi}{7}$. Then, $\omega$, $\omega^2$, $\omega^3$, $\omega^4$, $\omega^5$, and $\omega^6$ are all the roots of $P(z)$. That is, $t_1:=2\cos(\theta)$, $t_2:=2\cos(2\theta)$, and $t_3:=2\cos(3\theta)$ are all the roots of $Q(t)$. Note that, for an arbitrary $\phi$, $$\text{csc}^2(\phi)=\frac{1}{1-\cos^2(\phi)}=\frac{4}{4-\big(2\cos(\phi)\big)^2}\,.$$ That is, $$S:=\text{csc}^2(\theta)+\text{csc}^2(2\theta)+\text{csc}^2(4\theta)=\text{csc}^2(\theta)+\text{csc}^2(2\theta)+\text{csc}^2(3\theta)=\sum_{j=1}^3\,\frac{4}{4-t_j^2}\,.$$</p> <p>Since $Q(t_j)=0$ for each $j$, we get that $$t_j^3+t_j^2-2t_j-1=0\text{ or }(t_j^2-4)(t_j+1)+2t_j+3=0\,.$$ That is, $$t_j+1=\frac{2t_j+3}{4-t_j^2}=\frac{2(t_j-2)+7}{4-t_j^2}=\frac{7}{4-t_j^2}-\frac{2}{t_j+2}\,.$$ Furthermore, $$0=t_j^3+t_j^2-2t_j-1=(t_j+2)(t_j^2-t_j)-1\text{ or }\frac{1}{t_j+2}=t_j^2-t_j\,.$$ Hence, $$\frac{4}{4-t_j^2}=\frac{4}{7}\,\left(2(t_j^2-t_j)+t_j+1\right)=\frac{4}{7}\,\left(2t_j^2-t_j+1\right)\,.$$</p> <p>Finally, $$S=\sum_{j=1}^3\,\frac{4}{4-t_j^2}=\frac{4}{7}\,\sum_{j=1}^3\,\left(2t_j^2-t_j+1\right)\,.$$ Since $\sum\limits_{j=1}^3\,t_j=-1$ and $\sum\limits_{j=1}^3\,t_j^2=(-1)^2-2(-2)=5$ by Vieta's Formulas, we end up with $$S=\frac{4}{7}\,\big(2\cdot 5-(-1)+3\big)=8\,.$$</p>
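Both ingredients of this derivation are easy to confirm numerically (a sketch, not part of the proof): the numbers $t_j = 2\cos(2\pi j/7)$ really are roots of $Q(t)=t^3+t^2-2t-1$, and $S=8$:

```python
import math

theta = 2 * math.pi / 7
Q = lambda t: t**3 + t**2 - 2 * t - 1

roots = [2 * math.cos(j * theta) for j in (1, 2, 3)]
print([Q(t) for t in roots])  # all three values vanish, up to rounding

S = sum(1 / math.sin(j * theta) ** 2 for j in (1, 2, 4))
print(S)  # 8, up to rounding
```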
470,617
<ol> <li><p>Two competitors won $n$ votes each. How many ways are there to count the $2n$ votes, in a way that one competitor is always ahead of the other?</p></li> <li><p>One competitor won $a$ votes, and the other won $b$ votes. $a&gt;b$. How many ways are there to count the votes, in a way that the first competitor is always ahead of the other? (They can have the same amount of votes along the way)</p></li> </ol> <p>I know that the first question is the same as the number of different legal strings built of brackets, which is equal to the Catalan number; $\frac{1}{n+1}{2n\choose n}$, by the proof with the grid.</p> <p>I am unsure about how to go about solving the second problem.</p>
Michael Rozenberg
190,319
<p>$$\cos\frac{2\pi}{7}+\cos\frac{4\pi}{7}+\cos\frac{6\pi}{7}=\frac{2\sin\frac{\pi}{7}\cos\frac{2\pi}{7}+2\sin\frac{\pi}{7}\cos\frac{4\pi}{7}+2\sin\frac{\pi}{7}\cos\frac{6\pi}{7}}{2\sin\frac{\pi}{7}}=$$ $$=\frac{\sin\frac{3\pi}{7}-\sin\frac{\pi}{7}+\sin\frac{5\pi}{7}-\sin\frac{3\pi}{7}+\sin\frac{7\pi}{7}-\sin\frac{5\pi}{7}}{2\sin\frac{\pi}{7}}=-\frac{1}{2},$$ $$\cos\frac{2\pi}{7}\cos\frac{4\pi}{7}+\cos\frac{2\pi}{7}\cos\frac{6\pi}{7}+\cos\frac{4\pi}{7}\cos\frac{6\pi}{7}=$$ $$=\frac{1}{2}\left(\cos\frac{6\pi}{7}+\cos\frac{2\pi}{7}+\cos\frac{6\pi}{7}+\cos\frac{4\pi}{7}+\cos\frac{4\pi}{7}+\cos\frac{2\pi}{7}\right)=$$ $$=\cos\frac{2\pi}{7}+\cos\frac{4\pi}{7}+\cos\frac{6\pi}{7}=-\frac{1}{2}$$ and $$\cos\frac{2\pi}{7}\cos\frac{4\pi}{7}\cos\frac{6\pi}{7}=\frac{8\sin\frac{2\pi}{7}\cos\frac{2\pi}{7}\cos\frac{4\pi}{7}\cos\frac{8\pi}{7}}{8\sin\frac{2\pi}{7}}=\frac{\sin\frac{16\pi}{7}}{8\sin\frac{2\pi}{7}}=\frac{1}{8}.$$ Id est, $$\frac{1}{\sin^2\frac{\pi}{7}}+\frac{1}{\sin^2\frac{2\pi}{7}}+\frac{1}{\sin^2\frac{4\pi}{7}}=$$ $$=2\left(\frac{1}{1-\cos\frac{2\pi}{7}}+\frac{1}{1-\cos\frac{4\pi}{7}}+\frac{1}{1-\cos\frac{6\pi}{7}}\right)=$$ $$=\frac{2\left(\left(1-\cos\frac{2\pi}{7}\right)\left(1-\cos\frac{4\pi}{7}\right)+\left(1-\cos\frac{2\pi}{7}\right)\left(1-\cos\frac{6\pi}{7}\right)+\left(1-\cos\frac{4\pi}{7}\right)\left(1-\cos\frac{6\pi}{7}\right)\right)}{\left(1-\cos\frac{2\pi}{7}\right)\left(1-\cos\frac{4\pi}{7}\right)\left(1-\cos\frac{6\pi}{7}\right)}=$$ $$=\frac{2\left(3-2\left(-\frac{1}{2}\right)+\left(-\frac{1}{2}\right)\right)}{1-\left(-\frac{1}{2}\right)+\left(-\frac{1}{2}\right)-\frac{1}{8}}=8.$$</p>
3,450,283
<p>I came across the following statement: </p> <p>Given a ring homomorphism <span class="math-container">$f:A\to B$</span>, where <span class="math-container">$A,B$</span> are commutative rings with identity. If <span class="math-container">$A,B$</span> are both subrings of a bigger commutative ring with identity <span class="math-container">$R$</span>, then <span class="math-container">$A$</span> must be a subring of <span class="math-container">$B$</span>.</p> <p>I have thought about the map <span class="math-container">$B/I\to B$</span>, where <span class="math-container">$I$</span> is an ideal of <span class="math-container">$B$</span>. Then <span class="math-container">$B/I$</span> is not a subring of <span class="math-container">$B$</span>. Can we embed them in a bigger ring? I hope someone can help. Thanks!</p> <hr> <p>Thanks to Gae. S., the conclusion above is not true. Now, what if the completions of <span class="math-container">$A$</span> and <span class="math-container">$B$</span> are equal, i.e. <span class="math-container">$\widehat{A}\cong \widehat{B}$</span>? Does the conclusion hold?</p>
Community
-1
<p><span class="math-container">$A$</span> need not be a subring of <span class="math-container">$B$</span>. Consider <span class="math-container">$R=\Bbb R[x]$</span>, <span class="math-container">$A=\Bbb R[x^2]\subseteq \Bbb R[x]$</span>, <span class="math-container">$B=\Bbb R[x^3]\subseteq \Bbb R[x]$</span> and <span class="math-container">$f(p(x))=p(x^{3/2})$</span> (with the convention <span class="math-container">$(x^2)^{1/2}=x$</span>).</p> <p>Also consider <span class="math-container">$R=\Bbb R[x,y]$</span>, <span class="math-container">$B=\Bbb R[x]$</span>, <span class="math-container">$A=\Bbb R[x^3,y^3]$</span> and <span class="math-container">$f(p(x,y))=p(x^{1/3},0)$</span> for all <span class="math-container">$p\in A$</span> to see that <span class="math-container">$f$</span> need not be injective.</p>
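The first counterexample can be made concrete (a sketch of my own; the dict-based polynomial representation and helper names are mine): a polynomial in $x^2$ is stored by its coefficients, $f$ re-indexes $x^{2k} \mapsto x^{3k}$ (which is what $p(x) \mapsto p(x^{3/2})$ does on $A$), it respects products, yet $x^2 \in A$ does not lie in $B = \mathbb{R}[x^3]$:

```python
def mul(p, q):
    # multiply sparse polynomials stored as {degree: coefficient}
    out = {}
    for d1, c1 in p.items():
        for d2, c2 in q.items():
            out[d1 + d2] = out.get(d1 + d2, 0) + c1 * c2
    return out

def f(p):
    # p has only even degrees (p in R[x^2]); send x^(2k) to x^(3k),
    # which is what p(x) -> p(x^(3/2)) does on A
    return {3 * (d // 2): c for d, c in p.items()}

p = {0: 1, 2: 2}   # 1 + 2x^2     in A = R[x^2]
q = {2: 1, 4: 5}   # x^2 + 5x^4   in A

print(f(mul(p, q)) == mul(f(p), f(q)))  # True: f respects multiplication
print(2 % 3 == 0)                       # False: degree 2 is not a multiple of 3,
                                        # so x^2 is not in B = R[x^3]
```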
4,219,614
<p>In proving a change of basis theorem in linear algebra, our professor drew this diagram and simply stated that because all the outer squares in this diagram commute, the inner square (green) must also commute (I didn't write the exact mappings, because I think this question is more about diagram chasing and they aren't really relevant).</p> <p>I can't, however, figure out why this is true. This class is also my first time encountering commutative diagrams, so please explain it with as many details as possible.</p> <p><a href="https://i.stack.imgur.com/k5jsY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/k5jsY.png" alt="enter image description here" /></a></p> <hr /> <p>Edit: it turned out the question wasn't fine, since "in general there is no implication either way", but "if the diagonal arrows are isomorphisms then the inner square commutes if and only if the (big) outer square commutes".</p> <p>In my case, the diagonal arrows actually are isomorphisms, which is why I am posting the extended diagram.</p> <p><a href="https://i.stack.imgur.com/FNhGI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FNhGI.png" alt="enter image description here" /></a></p> <p>To clarify some notation:</p> <ol> <li><span class="math-container">$A: U \to V$</span> is a linear map between vector spaces <span class="math-container">$U$</span> and <span class="math-container">$V$</span>. First bases for <span class="math-container">$U, V$</span> are <span class="math-container">$B, C$</span>. 
Other possible bases for <span class="math-container">$U, V$</span> are <span class="math-container">$B', C'$</span>.</li> <li><span class="math-container">$\phi$</span>'s are isomorphisms.</li> <li><span class="math-container">$P_{XY}$</span> is a change-of-basis matrix from <span class="math-container">$Y$</span> to <span class="math-container">$X$</span></li> </ol> <p>I would still appreciate it if someone would help me understand why the inner square commutes, because I am lost.</p>
Zhen Lin
5,191
<p>Allow me to change notation. Essentially, you are in this situation: <span class="math-container">\begin{align} f' \circ a &amp; = b \circ f \\ g' \circ b &amp; = d \circ g \\ h' \circ a &amp; = c \circ h \\ k' \circ c &amp; = d \circ k \end{align}</span> You want to know if <span class="math-container">$g \circ f = k \circ h$</span> implies <span class="math-container">$g' \circ f' = k' \circ h'$</span>. In general, there is no implication. Certainly, if <span class="math-container">$g \circ f = k \circ h$</span> then: <span class="math-container">\begin{align} g' \circ f' \circ a &amp; = g' \circ b \circ f \\ &amp; = d \circ g \circ f \\ &amp; = d \circ k \circ h \\ &amp; = k' \circ c \circ h \\ &amp; = k' \circ h' \circ a \end{align}</span> So if we could cancel <span class="math-container">$a$</span> on the right then we would be done. In other words, if you additionally assume that <span class="math-container">$a$</span> is an epimorphism then <span class="math-container">$g \circ f = k \circ h$</span> implies <span class="math-container">$g' \circ f' = k' \circ h'$</span>. By duality, if you assume that <span class="math-container">$d$</span> is a monomorphism then <span class="math-container">$g' \circ f' = k' \circ h'$</span> implies <span class="math-container">$g \circ f = k \circ h$</span>. In particular, if <span class="math-container">$a$</span> and <span class="math-container">$d$</span> are isomorphisms then <span class="math-container">$g \circ f = h \circ k$</span> if and only if <span class="math-container">$g' \circ f' = h' \circ k'$</span>.</p>
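Since in the question the diagonal maps are isomorphisms, the cancellation argument can be watched on concrete matrices (a sketch of my own, in the answer's notation, with arbitrarily chosen invertible $a, b, c, d$ of determinant $1$):

```python
def mat_mul(X, Y):
    # 2x2 matrix product (composition: X after Y)
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(M):
    # inverse of a 2x2 integer matrix with determinant 1
    (p, q), (r, s) = M
    return [[s, -q], [-r, p]]

f = [[1, 2], [0, 1]]; g = [[1, 0], [3, 1]]
h = [[1, 0], [0, 1]]; k = mat_mul(g, f)      # chosen so that g∘f = k∘h

a = [[2, 1], [1, 1]]; b = [[1, 1], [0, 1]]   # invertible diagonal maps
c = [[1, 3], [0, 1]]; d = [[3, 2], [1, 1]]

# define the primed maps so that all four outer squares commute,
# e.g. f' = b∘f∘a⁻¹ satisfies f'∘a = b∘f
fp = mat_mul(b, mat_mul(f, inv2(a)))
gp = mat_mul(d, mat_mul(g, inv2(b)))
hp = mat_mul(c, mat_mul(h, inv2(a)))
kp = mat_mul(d, mat_mul(k, inv2(c)))

print(mat_mul(gp, fp) == mat_mul(kp, hp))    # True: the inner square commutes
```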
130,306
<p>I am trying to make a relatively complex 3D plot in order to show the variation of a curve with a parameter. Here is the code</p> <pre><code>AnsNf[x_, nf_] = (2 \[Pi] x^4)/((11 - (2 (2 + nf))/3) (1 + 1/2 x^6 Log[4. x^2])) + (14.298 (1 + (1.81 - 0.292 nf) x^2 - 2.276 x^2 Log[x^2/(1 + x^2)]))/(1 + (9.926 + 1.795 nf) x^2 + (1.1 - 4.964 nf) x^4 + (22.412 +5.612 nf) x^6); AnsNfnoc[x_, nf_] = (2 \[Pi] x^4)/((11 - (2 (2 + nf))/3) (1 + 1/2 x^6 Log[4. x^2])) + (14.298 (1 + (1.81` - 0.292 nf) x^2 - 0.569 x^2 Log[x^2/(1 + x^2)]))/(1 + (9.926 + 1.795 nf) x^2 + (1.1 - 4.964 nf) x^4 + (22.412 +5.612 nf) x^6); AnsatzINf[t_, i_] := t^2*AnsNf[t, i - 1]; AnsatzINfNoc[t_, i_] := t^2*AnsNfnoc[t, i - 1]; colsandthick = {{RGBColor[175/255, 0, 28/255], Thickness[0.004]}, {RGBColor[14/255, 95/255, 177/255], Thickness[0.004]}, {RGBColor[130/255, 120/255, 106/255], Thickness[0.004]}, {RGBColor[0/255, 102/255, 128/255],Thickness[0.004]}}; colsandthickanddot = {{RGBColor[175/255, 0, 28/255], Thickness[0.004],CapForm["Round"],Dashing[{1*^-10, 0.01}]}, {RGBColor[14/255, 95/255, 177/255],Thickness[0.004], CapForm["Round"],Dashing[{1*^-10, 0.01}]}, {RGBColor[130/255, 120/255, 106/255],Thickness[0.004], CapForm["Round"],Dashing[{1*^-10, 0.01}]}, {RGBColor[0/255, 102/255, 128/255],Thickness[0.004], CapForm["Round"], Dashing[{1*^-10, 0.01}]}}; p1 = Graphics3D[Table[{Plot[{AnsatzINf[t, i], AnsatzINfNoc[t, i]}, {t, 0, 5},PlotStyle -&gt; {colsandthick[[i]], colsandthickanddot[[i]]}][[ 1]]} /. 
{x_?NumericQ, y_?NumericQ} :&gt; {x, i, y}, {i, 1, 4}], Axes -&gt; {True, True, True}, Boxed -&gt; {Left, Bottom, Back},BoxRatios -&gt; {1, 1, 0.5},FaceGrids -&gt; {{0, 0, -1}, {0, 1, 0}, {-1, 0, 0}},AxesStyle -&gt;Directive[FontFamily -&gt; "Helvetica", FontSize -&gt; 16,Thickness[0.003], Black],FaceGridsStyle -&gt;Directive[GrayLevel[0.3, 1], AbsoluteDashing[{1, 2}]], ViewPoint -&gt; {2.477268549689875`, -2.189130098344112`,0.566436179318843`}, ViewVertical -&gt; {0, 0, 1},ImageSize -&gt; Large] </code></pre> <p>This code produces the following plot<a href="https://i.stack.imgur.com/MWq8a.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MWq8a.png" alt="enter image description here"></a> which is already weird because each continuous curve should be accompanied by a lower line (specified by the function "AnsNfnoc") that should be rendered as a dotted line as specified by the "colsandthickanddot" styling option.</p> <p>While I could live with this (but why is this so?), the real problem comes when I export the plot to PDF: as shown below, the dotted curves are now rendered, but the dashing is unevenly spaced. <a href="https://i.stack.imgur.com/GdvmM.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GdvmM.jpg" alt="enter image description here"></a></p> <p>I am under the impression that this is due to some mesh applied when rendering the 3D plot, but do you have any idea how this could be corrected in such a way that the space between the dots is rendered evenly?</p>
Edmund
19,542
<p>Try <a href="http://reference.wolfram.com/language/ref/ParametricPlot3D.html" rel="nofollow noreferrer"><code>ParametricPlot3D</code></a> instead.</p> <pre><code>img = ParametricPlot3D[ Evaluate@ Flatten[Function[{i}, {t, i, #} &amp; /@ {AnsatzINf[t, i], AnsatzINfNoc[t, i]}] /@ Range[4], 1], {t, 0, 5}, PlotStyle -&gt; Riffle[colsandthick, colsandthickanddot], Boxed -&gt; {Left, Bottom, Back}, ImageSize -&gt; Large] </code></pre> <p><img src="https://i.stack.imgur.com/8OYbq.png" alt="Mathematica graphics"></p> <p>Just add the remaining options.</p> <p>If <code>"PDF"</code> still will not cooperate then <a href="http://reference.wolfram.com/language/ref/Rasterize.html" rel="nofollow noreferrer"><code>Rasterize</code></a> before export.</p> <pre><code>Export["test.pdf", Rasterize[img, "Image", ImageResolution -&gt; 300], "PDF"] </code></pre> <p>Hope this helps.</p>
441,888
<p>I should clarify that I'm asking for intuition or informal explanations. I'm starting out in math and have not taken set theory so far, hence I'm not asking about formal set theory or an abstract hard answer. </p> <p>From Gary Chartrand, <em>Mathematical Proofs</em>, page 216 - </p> <p>$\begin{align} \text{ range of } f &amp; = \{f(x) : x \in domf\} = \{b : (a, b) \in f \} \\ &amp; = \{b ∈ B : b \text{ is an image under $f$ of some element of } A\} \end{align}$</p> <p><a href="http://en.wikipedia.org/wiki/Parity_%28mathematics%29" rel="nofollow noreferrer">Wikipedia</a> - $\begin{align}\quad \{\text{odd numbers}\} &amp; = \{n \in \mathbb{N} \; : \; \exists k \in \mathbb{N} \; : \; n = 2k+1 \} \\ &amp; = \{2n + 1 : n \in \mathbb{Z}\} \end{align}$</p> <p>But <a href="https://math.stackexchange.com/questions/266718/quotient-group-g-g-identity/266725#266725">Why $G/G = \{gG : g \in G \} \quad ? \quad$ And not $\{g \in G : gG\} ?$</a></p> <p><strong>EDIT @Hurkyl 10/5.</strong> Lots of detail please.</p> <p>Question 1. Hurkyl wrote $\{\text{odd numbers}\}$ in two ways.<br> But can you always rewrite $\color{green}{\{ \, x \in S: P(x) \,\}}$ with $x \in S$ on the right of the colon? How?<br> $ \{ \, x \in S: P(x) \,\} = \{ \, \color{red}{\text{ What has to go here}} : x \in S \, \} $? Is $ \color{red}{\text{ What has to go here}} $ unique?</p> <p>Question 2. Axiom of replacement --- Why $\{ f(x) \mid x \in S \}$ ? NOT $\color{green}{\{ \; x \in S \mid f(x) \; \}}$ ?</p> <p><strong>@HTFB.</strong> Can you please simplify your answer? I don't know what ZF, extensionality, Fraenkel's, many-one, class functions, Cantor's arithmetic of infinities, and the like are. </p>
Asaf Karagila
622
<p>The first set is the collection of all the $x$ which are both elements in $S$ and satisfy the property $P$.</p> <p>The second set is the collection of the objects "$P(x)$" for all $x\in S$, for example if $P(x)$ is the function $x^2$ and $S=\Bbb N$ then the result is the set of squares.</p>
441,888
<p>I should clarify that I'm asking for intuition or informal explanations. I'm starting out in math and have not taken set theory so far, hence I'm not asking about formal set theory or an abstract hard answer. </p> <p>From Gary Chartrand, <em>Mathematical Proofs</em>, page 216 - </p> <p>$\begin{align} \text{ range of } f &amp; = \{f(x) : x \in domf\} = \{b : (a, b) \in f \} \\ &amp; = \{b ∈ B : b \text{ is an image under $f$ of some element of } A\} \end{align}$</p> <p><a href="http://en.wikipedia.org/wiki/Parity_%28mathematics%29" rel="nofollow noreferrer">Wikipedia</a> - $\begin{align}\quad \{\text{odd numbers}\} &amp; = \{n \in \mathbb{N} \; : \; \exists k \in \mathbb{N} \; : \; n = 2k+1 \} \\ &amp; = \{2n + 1 : n \in \mathbb{Z}\} \end{align}$</p> <p>But <a href="https://math.stackexchange.com/questions/266718/quotient-group-g-g-identity/266725#266725">Why $G/G = \{gG : g \in G \} \quad ? \quad$ And not $\{g \in G : gG\} ?$</a></p> <p><strong>EDIT @Hurkyl 10/5.</strong> Lots of detail please.</p> <p>Question 1. Hurkyl wrote $\{\text{odd numbers}\}$ in two ways.<br> But can you always rewrite $\color{green}{\{ \, x \in S: P(x) \,\}}$ with $x \in S$ on the right of the colon? How?<br> $ \{ \, x \in S: P(x) \,\} = \{ \, \color{red}{\text{ What has to go here}} : x \in S \, \} $? Is $ \color{red}{\text{ What has to go here}} $ unique?</p> <p>Question 2. Axiom of replacement --- Why $\{ f(x) \mid x \in S \}$ ? NOT $\color{green}{\{ \; x \in S \mid f(x) \; \}}$ ?</p> <p><strong>@HTFB.</strong> Can you please simplify your answer? I don't know what ZF, extensionality, Fraenkel's, many-one, class functions, Cantor's arithmetic of infinities, and the like are. </p>
egreg
62,967
<p>I've never seen notation such as $\{n\in\mathbb{N}:2n+1\}$ and the answer you refer to says that $\{g\in G:gG\}$ is incorrect.</p> <p>Well, incorrectness is a relative concept. Before using a notation you should define its meaning; nothing prevents you from assigning a meaning to $\{x\in X: f(x)\}$, but this is usually not done.</p> <p>What's the difference between $\{x\in S:P(x)\}$ and $\{f(x):x\in S\}$?</p> <p>In the first case $P$ is a “predicate”; more technically, a formula with a free variable. The symbol $\{x\in S:P(x)\}$ denotes the subset of elements of $S$ for which the statement $P(x)$ is true. For instance, $\{n\in\mathbb{N}:2\mid n\}$ means the even natural numbers.</p> <p>In the second case, $f$ should be a function $f\colon X\to Y$ such that $S\subseteq X$. The notation $\{f(x):x\in S\}$ is just a shorthand for</p> <p>$$ \{y\in Y: \text{there exists }x\in S\text{ such that }y=f(x)\} $$</p> <p>so it's not really a different concept. For instance $$ \{2n:n\in\mathbb{N}\}= \{n \in \mathbb{N} : \text{there exists } m \in \mathbb{N}\text{ such that } n = 2m \} =\{n\in\mathbb{N}:2\mid n\} $$ where we use the function $f\colon\mathbb{N}\to\mathbb{N}$, $f\colon n\mapsto 2n$, so its image is just the set of even natural numbers.</p>
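Python's set comprehensions mirror the two notations exactly, which may help the intuition (an analogy of my own; the finite range is only a stand-in for $\mathbb{N}$):

```python
N = range(20)  # a finite stand-in for the natural numbers

separation = {n for n in N if n % 2 == 0}  # {x in S : P(x)}: filter by a predicate
replacement = {2 * n for n in range(10)}   # {f(x) : x in S}: image under a function

print(separation == replacement)           # True: both are {0, 2, 4, ..., 18}
```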
441,888
<p>I should clarify that I'm asking for intuition or informal explanations. I'm starting out in math and have not taken set theory so far, hence I'm not asking about formal set theory or an abstract hard answer. </p> <p>From Gary Chartrand, <em>Mathematical Proofs</em>, page 216 - </p> <p>$\begin{align} \text{ range of } f &amp; = \{f(x) : x \in domf\} = \{b : (a, b) \in f \} \\ &amp; = \{b ∈ B : b \text{ is an image under $f$ of some element of } A\} \end{align}$</p> <p><a href="http://en.wikipedia.org/wiki/Parity_%28mathematics%29" rel="nofollow noreferrer">Wikipedia</a> - $\begin{align}\quad \{\text{odd numbers}\} &amp; = \{n \in \mathbb{N} \; : \; \exists k \in \mathbb{N} \; : \; n = 2k+1 \} \\ &amp; = \{2n + 1 : n \in \mathbb{Z}\} \end{align}$</p> <p>But <a href="https://math.stackexchange.com/questions/266718/quotient-group-g-g-identity/266725#266725">Why $G/G = \{gG : g \in G \} \quad ? \quad$ And not $\{g \in G : gG\} ?$</a></p> <p><strong>EDIT @Hurkyl 10/5.</strong> Lots of detail please.</p> <p>Question 1. Hurkyl wrote $\{\text{odd numbers}\}$ in two ways.<br> But can you always rewrite $\color{green}{\{ \, x \in S: P(x) \,\}}$ with $x \in S$ on the right of the colon? How?<br> $ \{ \, x \in S: P(x) \,\} = \{ \, \color{red}{\text{ What has to go here}} : x \in S \, \} $? Is $ \color{red}{\text{ What has to go here}} $ unique?</p> <p>Question 2. Axiom of replacement --- Why $\{ f(x) \mid x \in S \}$ ? NOT $\color{green}{\{ \; x \in S \mid f(x) \; \}}$ ?</p> <p><strong>@HTFB.</strong> Can you please simplify your answer? I don't know what ZF, extensionality, Fraenkel's, many-one, class functions, Cantor's arithmetic of infinities, and the like are. </p>
HTFB
50,125
<p>Mauro makes part of this point, but it's worth stressing: the two different ways of notating a set are used for instances of application of two different axioms (strictly, axiom schemas) in ZF: $$ B = \{x \in A : \phi(x)\} $$ is an application of the subset axiom (a.k.a. separation): $$\forall y_1, \ldots, y_n \forall A\, \exists B\, \forall x (x \in B \leftrightarrow x \in A \wedge \phi(x, y_1, \ldots, y_n, A)), $$ where $\phi(x, y_1, \ldots, y_n, z)$ is a formula in the language of set theory. The symbol within the curly braces is guaranteed by this axiom to refer to an element of the domain of a model of ZF (and to exactly one element by the axiom of extensionality). So thanks to this axiom it is always a legitimate shorthand notation for a set.</p> <p>Conversely, $ B = \{f(x): x \in A\} $ is an application of Fraenkel's axiom (schema) of replacement. The axiom schema is</p> <p>$$\forall y_1, \ldots, y_n\, \forall A\, \Big(\forall x\, \forall z_1\, \forall z_2\, \big((x \in A \wedge \phi(x, y_1, \ldots, y_n, A, z_1) \wedge \phi(x, y_1, \ldots, y_n, A, z_2)) \rightarrow z_1 = z_2\big) \rightarrow \exists B\, \forall z\, \big(z \in B \leftrightarrow \exists x\, (x \in A \wedge \phi(x, y_1, \ldots, y_n, A, z))\big)\Big) $$ What this says is that <em>if</em> $\phi(x,z)$ is many-one when restricted to the domain $A$, so that there is a function (in the most abstract sense, <em>i.e.</em>, a "class function") satisfying $z = f(x) \leftrightarrow \phi(x, z)$, <em>then</em> the collection of all the images of elements of $A$ under $f$ is also a set. </p> <p>Or, in shorthand, there is a set which is the elementwise-image of $A$ under $f$. Again, by extensionality there is exactly one such set.</p> <p>So which of the two notations you use for a particular set would be defined by the <em>last</em> axiom you apply in proving the existence of that set: are you thinking of it as a subset of something, or as the image of something? 
It should be obvious that $\{x \in \mathbb{N} : x\text{ is even}\}$ is the same set as $\{2\cdot x: x \in \mathbb{N}\}$. So very often the notation you choose will be determined by what you are wanting to say about a set that could be constructed either way. And as others have noted you can roll the two notations into one, when a set is constructed by successive applications of both axioms, and save paper.</p> <p>You cannot always eliminate Replacement in favour of Separation: ZF is strictly stronger than Z (the collection of axioms omitting replacement). In general Replacement is needed to conduct Cantor's arithmetic of infinities inside set theory: see the Wikipedia page on replacement. A specific set that cannot be proved to exist without replacement is $\aleph_\omega$. But if you know a set $A$ exists you can trivially write it in either form with a bit of shorthand: $A = \{x \in A : x = x\}$ and also<br> $A = \{\text{Id}(x): x \in A\}$ (where $\text{Id}$ is the identity map). I hope this answers Question 1 of the edit.</p> <p>For Question 2, note that your proposed backwards replacement set notation is indistinguishable from the usual subset notation, unless you are absolutely rigorous in separating your capital $P$ shorthand for a formula from your lower-case $f$ for a function. There would be an ambiguity in the notation when $P(x,y)$ had an extra free variable: is it the formula defining a function so the set is constructed by replacement, or is it being used to define a subset (for any given value of the free variable)? It's a bad idea!</p> <p>More intuitively, you can pronounce $\{P : Q\}$ in English as "the set of elements P such that Q", and if you start reversing some of the notation it becomes much harder to read. 
</p> <hr> <p>It's also worth noting that taking subsets, and taking the image under a function, are entirely intuitive operations that you want to do naively with sets well before axiomatising them: given a classroom of children, the set of girls in the classroom and the set of all fathers of the class are collections that any teacher might need to think about. The notation makes sense without worrying about the axioms.</p>
1,272,124
<p>we know that $1+2+3+4+5+\cdots+n=n(n+1)/2$</p> <p>I spent a lot of time trying to get a formula for this sum but I could not get it:</p> <p>$( 2 + 3 + . . . + 2n)$</p> <p>I tried to write the sum of some few terms. Of course I saw some pattern between the sums, but still the formula I got didn't give a correct sum for other terms.</p> <p>Is there another way of solving this question?</p>
Mark Bennet
2,906
<p>Hint: can you see a factor $2$ somewhere ...</p>
636,391
<p>Evaluate the following indefinite integral.</p> <p>$$\int { \frac { x }{ 4+{ x }^{ 4 } } }\,dx$$</p> <p>In my homework hints, it says let $ u = x^2 $. But still i can't continue.</p>
Wmmoreno
102,299
<p><strong>Solve:</strong> \begin{eqnarray} \int\frac{x}{4+x^{4}}dx&amp;=&amp;\frac{1}{2}\int\frac{du}{4+u^2}; \text{ if $u=x^{2}$, $du=2x\,dx$}\\ &amp;=&amp; \frac{1}{2}\left(\frac{1}{2}\arctan{\frac{u}{2}}\right)\\ &amp;=&amp;\frac{1}{4}\arctan{\frac{x^{2}}{2}}+C \end{eqnarray}</p>
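A numerical cross-check of this result (an editorial addition, not part of the answer): differentiating the proposed antiderivative should recover the integrand.

```python
import math

def F(x):
    # proposed antiderivative: (1/4) * arctan(x^2 / 2)
    return 0.25 * math.atan(x * x / 2)

def f(x):
    # original integrand: x / (4 + x^4)
    return x / (4 + x ** 4)

# central-difference derivative of F should match f at several points
h = 1e-6
for x in [0.3, 1.0, 2.5]:
    central_diff = (F(x + h) - F(x - h)) / (2 * h)
    assert abs(central_diff - f(x)) < 1e-6
```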
2,593,361
<p>I’m trying to solve what I’ll call the p-Laplace Equation which is</p> <p>$$\Delta_p u = 0$$</p> <p>where $\Delta_p u$ is the p-Laplacian. It is defined as </p> <p>$$\Delta_p u = \nabla \cdot (|\nabla|^{p-2} \nabla u).$$</p> <p>Any ideas? I haven’t seen this in a book or anything. I just thought that by analogy, there should be a solution to this equation too. Are there any properties of p-Harmonic functions like there are for Harmonic functions? </p>
dxiv
291,201
<blockquote> <p>Let $x=p+f$ where $p \in \mathbb{Z}$ and $0 \lt f \lt 1$</p> </blockquote> <p>Let that be $0 \color{red}{\le} f \lt 1$ in general. Next, let $p = k \cdot n + r$ with $k, r \in \mathbb{Z}$ and $0 \le r \le n-1\,$ by Euclidean division, so that $\,x=k \cdot n + r + f\,$. Then, using that $0 \le r \le n-1$ and $0 \le r+f \lt n$:</p> <ul> <li><p>$\displaystyle\left\lfloor\frac{\left\lfloor x\right\rfloor}{n}\right\rfloor=\left\lfloor\frac{k \cdot n + r}{n}\right\rfloor=\left\lfloor k+\frac{r}{n}\right\rfloor = k$</p></li> <li><p>$\displaystyle\left\lfloor\frac{x}{n}\right\rfloor=\left\lfloor\frac{k \cdot n + r+f}{n}\right\rfloor=\left\lfloor k+\frac{r+f}{n}\right\rfloor = k$</p></li> </ul>
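The identity behind the two bullet points, $\left\lfloor\frac{\left\lfloor x\right\rfloor}{n}\right\rfloor=\left\lfloor\frac{x}{n}\right\rfloor$ for positive integers $n$, can be spot-checked with exact rational arithmetic (an editorial sketch, not from the answer):

```python
import math
import random
from fractions import Fraction

random.seed(0)
for _ in range(1000):
    # random rational x and positive integer n, compared exactly
    x = Fraction(random.randint(-10000, 10000), random.randint(1, 97))
    n = random.randint(1, 20)
    lhs = math.floor(Fraction(math.floor(x), n))   # floor(floor(x) / n)
    rhs = math.floor(x / n)                        # floor(x / n)
    assert lhs == rhs
```

Fractions avoid the floating-point edge cases that could otherwise make the two sides disagree spuriously near integer boundaries.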
2,445,693
<p>I know that the derivative of $n^x$ is $n^x\times\ln n$ so i tried to show that with the definition of derivative:$$f'\left(x\right)=\dfrac{df}{dx}\left[n^x\right]\text{ for }n\in\mathbb{R}\\{=\lim_{h\rightarrow0}\dfrac{f\left(x+h\right)-f\left(x\right)}{h}}{=\lim_{h\rightarrow0}\frac{n^{x+h}-n^x}{h}}{=\lim_{h\rightarrow0}\frac{n^x\left(n^h-1\right)}{h}}{=n^x\lim_{h\rightarrow0}\frac{n^h-1}{h}}$$ now I can calculate the limit, lets:$$g\left(h\right)=\frac{n^h-1}{h}$$ $$g\left(0\right)=\frac{n^0-1}{0}=\frac{0}{0}$$$$\therefore g(0)=\frac{\dfrac{d}{dh}\left[n^h-1\right]}{\dfrac{d}{dh}\left[h\right]}=\frac{\dfrac{df\left(0\right)}{dh}\left[n^h\right]}{1}=\dfrac{df\left(0\right)}{dh}\left[n^h\right]$$ so in the end i get: $$\dfrac{df}{dx}\left[n^x\right]=n^x\dfrac{df\left(0\right)}{dx}\left[n^x\right]$$ so my question is how can i prove that $$\dfrac{df\left(0\right)}{dx}\left[n^x\right]=\ln n$$</p> <h1>edit:</h1> <p>i got 2 answers that show that using the fact that $\lim_{z \rightarrow 0}\dfrac{e^z-1}{z}=1$, so how can i prove that using the other definitions of e, i know it is definition but how can i show that this e is equal to the e of $\sum_{n=0}^\infty \frac{1}{n!}$?</p>
Peter Szilas
408,605
<p>$n^h = \exp(h \log n)$;</p> <p>$\dfrac{n^h-1}{h} = \dfrac{\exp(h\log n)-1}{h};$</p> <p>$z := h\log n$.</p> <p>Then:</p> <p>$\lim_{h \rightarrow 0}\dfrac{\exp(h\log n)-1}{h} =$</p> <p>$\lim_{z \rightarrow 0} \log n \,\dfrac{\exp(z) -1}{z} =$</p> <p>$\log n \times 1 = \log n$.</p> <p>Used: $\lim_{z \rightarrow 0} \dfrac{\exp(z)-1}{z} =1$.</p>
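A quick numeric illustration (editorial addition) that $\frac{n^h-1}{h}\to\log n$ as $h\to 0$:

```python
import math

for n in [2.0, 5.0, 10.0]:
    for h in [1e-4, 1e-6]:
        quotient = (n ** h - 1) / h
        # the error behaves like h * (log n)^2 / 2, so it shrinks with h
        assert abs(quotient - math.log(n)) < 1e-3
```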
1,581,756
<p>Find the general solution of $$z(px-qy)=y^2-x^2$$ Let $F(x,y,z,p,q)=z(px-qy)+x^2-y^2$. This gives $$F_x=zp+2x$$ $$F_y=-zq-2y$$ $$F_z=px-qy$$ $$F_p=zx$$ $$F_q=-zy$$ By Charpit's method we have $$\frac{dx}{zx}=\frac{dy}{-zy}=\frac{dz}{z(px-qy)}=\frac{dp}{-zp-2x-p^2x+pqy}=\frac{dq}{zq+2y-pxy+q^2y}$$</p> <p>By equating the first two I am getting $xy=k$.</p> <p>But I am not able to solve the last two. </p> <p>Thanks for the help!!</p>
Asinomás
33,907
<p>If a number is a product of distinct primes then it has a power of $2$ number of divisors (it has $2^{\omega(n)}$ divisors, where $\omega(n)$ is the number of prime divisors of $n$).</p> <p>So if your number is a product of distinct primes, all of its divisors have a power of $2$ number of divisors.</p> <p>From here we deduce that if $k$ is not a power of $2$ then there is a number $n$ such that $n$ has more than $k$ divisors and no divisor of $n$ has exactly $k$ divisors. We just take $p_1p_2\dots p_r$, where $p_i$ is the $i$'th prime and $2^r&gt;k$.</p> <p>So now we just have to check that $2$ and $4$ are the only such powers of $2$ that work. To see this, just note that $p_1^2p_2^2\dots p_r^2$ has $3^r$ divisors, and any divisor of this number that has a power of $2$ number of divisors has at most $2^r$ divisors.</p> <p>So now all we must prove is that given $2^n$ with $n&gt;2$ there is an $r&lt;n$ so that $3^r&gt;2^n$; it suffices to show $3^{n-1}&gt;2^{n}\iff \left(\frac{3}{2}\right)^{n-1}&gt;2$, which of course is true when $n&gt;2$, since $\left(\frac{3}{2}\right)^2=\frac{9}{4}&gt;2$.</p> <p>Finally, checking that $4$ works is easy: if the number is of the form $p^k$ and has $4$ or more divisors then $k\geq 3$, and so $p^3$ is the divisor we wanted. Otherwise there are primes $p$ and $q$ dividing $n$ and so $pq$ is the desired divisor.</p> <p>Proving it works for $2$ should be much easier.</p>
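Two of the claims can be machine-checked for small numbers (an editorial addition, with the bound 500 chosen arbitrarily): squarefree numbers have $2^{\omega(n)}$ divisors, a power of two, and every number with at least $4$ divisors has a divisor with exactly $4$ divisors.

```python
def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def is_squarefree(n):
    return all(n % (p * p) != 0 for p in range(2, int(n ** 0.5) + 1))

for n in range(2, 500):
    ds = divisors(n)
    if is_squarefree(n):
        d = len(ds)
        assert d & (d - 1) == 0          # d is a power of two
    if len(ds) >= 4:
        # the k = 4 case: some divisor has exactly 4 divisors
        assert any(len(divisors(m)) == 4 for m in ds)
```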
418,748
<p>I tried to calculate, but couldn't get out of this: $$\lim_{x\to1}\frac{x^2+5}{x^2 (\sqrt{x^2 +3}+2)-\sqrt{x^2 +3}}$$</p> <p>then multiply by the conjugate.</p> <p>$$\lim_{x\to1}\frac{\sqrt{x^2 +3}-2}{x^2 -1}$$ </p> <p>Thanks!</p>
amWhy
9,003
<p>You were right to multiply "top" and "bottom" by the conjugate of the numerator. I suspect you simply made a few algebra mistakes that got you stuck with the limit you first posted:</p> <p>So we start from the beginning:</p> <p>$$\lim_{x\to1}\frac{\sqrt{x^2 +3}-2}{x^2 -1}$$ </p> <p>and multiply top and bottom by the conjugate, $\;\sqrt{x^2 + 3} + 2$:</p> <p>$$\lim_{x \to1} \, \frac{\sqrt{x^2 + 3} -2 }{x^2 - 1} \cdot \frac{\sqrt{x ^ 2 + 3} +2}{\sqrt{x^2+3}+2}$$</p> <p>You were correct to do that. You just <em>miscalculated</em> and didn't actually need to expand the denominator, that's all. </p> <p>In the numerator, we have a difference of <em>squares</em> (which is the reason we multiplied top and bottom by the conjugate), and expanding the factors gives us: $$(\sqrt{x^2+3}-2)(\sqrt{x^2 + 3} + 2) = (\sqrt{x^2+3})^2 - (2)^2 = (x^2 + 3) - 4 = x^2 - 1$$ And now there's no reason to waste time trying to simplify the denominator, since we can now <strong><em>cancel</em></strong> the factor $(x^2 - 1)$ from top and bottom: $$\lim_{x\to1}\frac{\color{blue}{\bf (x^2- 1)}}{\color{blue}{\bf (x^2 - 1)}(\sqrt{x^2 +3}+2)}\; = \;\lim_{x \to 1} \dfrac{1}{\sqrt{x^2 + 3}+2}\; = \;\frac{1}{\sqrt{1+3} + 2} \;=\; \dfrac 14$$</p>
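Numerically, the simplified expression and the original quotient agree near $x=1$ and both approach $\frac14$ (an editorial check, not part of the answer):

```python
import math

def original(x):
    # the quotient before cancellation
    return (math.sqrt(x * x + 3) - 2) / (x * x - 1)

def simplified(x):
    # after cancelling the factor (x^2 - 1)
    return 1 / (math.sqrt(x * x + 3) + 2)

for x in [0.99, 1.001, 1.00001]:
    assert abs(original(x) - simplified(x)) < 1e-7
    assert abs(simplified(x) - 0.25) < 1e-2
```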
24,927
<p>Does this mean that the first homotopy group in some sense contains more information than the higher homotopy groups? Is there another generalization of the fundamental group that can give rise to non-commutative groups in such a way that these groups contain more information than the higher homotopy groups? </p>
Akhil Mathew
536
<p>Let $\mathcal{C}$ be a category with finite limits and a final object. In general, if $Y$ is an object in $\mathcal{C}$ such that $\hom(X, Y)$ is naturally a group for each $X \in \mathcal{C}$, then $Y$ is called a "group object" in $\mathcal{C}$; that is, there is a multiplication map $Y \times Y \to Y$ and an inversion $Y \to Y$ and an identity $\ast \to Y$ (for $\ast$ the final object) that satisfy a categorical version of the usual group axioms (stated arrow-theoretically). In the case of interest here, $\mathcal{C}$ is the homotopy category of pointed topological spaces, and the statement that the homotopy groups are groups is the statement that the spheres $S^n$ are group objects in the opposite category -- in other words, $S^n$ is a so-called "H cogroup." When one writes out the arrows, one ends up with a "comultiplication map" $S^n \to S^n \vee S^n$ and a map $S^n \to \ast$ ($\ast$ the point) that satisfy the dual of the usual group axioms, up to homotopy.</p> <p>The reason that the higher homotopy groups are abelian and $\pi_1$ is not is that $S^n$ is an abelian H cogroup for $n \geq 2$ and not for $n=1$. This is basically a consequence of the Eckmann-Hilton argument (namely, there are two natural and mutually distributive ways of defining the H cogroup structure of $S^n$, depending on which coordinate one chooses; they must be equal and both commutative). </p> <p>Now to your more general question. So one can define covariant functors from the pointed homotopy category to the category of groups: just pick any H cogroup object and consider maps from it into the given space. An easy way of getting these is to take the reduced suspension of any space $X$, $\Sigma X$, and to note that $\Sigma X$ can be made into an H cogroup (in kind of the same way as $S^n$ is---actually, the $S^n$ is a special case of this). 
</p> <p>One may object that considering suspensions is not really anything new, because homotopy classes of $\Sigma X = S^1 \wedge X$ into a space $Y$ is the same as considering homotopy classes of maps $S^1 \to Y^X$ when $X$ is reasonable (say locally compact and Hausdorff), so we really have a variant of the fundamental group.</p> <p>Finally, there is the question of whether <em>all</em> functors from the pointed homotopy category to the category of groups can be expressed in this way, that is, whether it is representable. On the pointed homotopy category of CW complexes, there are fairly <a href="http://en.wikipedia.org/wiki/Brown_representability_theorem" rel="nofollow">weak conditions</a> that will ensure representability.</p>
4,489,675
<p>When saying that in a small time interval <span class="math-container">$dt$</span>, the velocity has changed by <span class="math-container">$d\vec v$</span>, and so the acceleration <span class="math-container">$\vec a$</span> is <span class="math-container">$d\vec v/dt$</span>, are we not assuming that <span class="math-container">$\vec a$</span> is constant in that small interval <span class="math-container">$dt$</span>, otherwise considering a change in acceleration <span class="math-container">$d\vec a$</span>, the expression should have been <span class="math-container">$\vec a = \frac{d\vec v}{dt} - \frac{d\vec a}{2}$</span> (Again assuming rate of change of acceleration is constant). According to that argument, I can say that <span class="math-container">$\vec v$</span> is also constant in that time interval and so <span class="math-container">$\vec a = \vec 0$</span>.</p> <p>Can someone point out where exactly I have gone wrong. Also this was just an example, my question is general.</p>
mmesser314
294,354
<p>You are right. Infinitesimals are an imprecise way of talking about limits. It is a little odd that mathematicians use them, because math is all about rigor.</p> <p>Physicists tend to be looser about tiny mathematical details because they are interested in modeling the behavior of the universe. They can be satisfied with approximations. Sometimes that is the best they can get.</p> <p>But mathematicians are interested in modeling ideas. They build up theorem after theorem to describe complex ideas. One false theorem can be used to prove literally anything and destroy the entire structure of math. It is built into the structure of &quot;if a then b&quot; statements. If a is false, then the statement is true no matter what b says. Mathematicians are nitpicky because they have to be.</p> <p>A derivative is the slope of a function at a point. You get at it by choosing two nearby points and approximating it with <span class="math-container">$a = \Delta v/\Delta t$</span>. As <span class="math-container">$\Delta t$</span> gets small, the approximation gets better. There is a bit of a conceptual difficulty. Given any number that is almost right, you can find an approximation so good, using intervals so small, that you can show the number is wrong. But you never run out of even closer to right values.</p> <p>Taking the limit is the way to rigorously show that all numbers but one can be eliminated as not good enough approximations. The only number that can't be eliminated is defined to be the derivative. This is how mathematicians think.</p> <p>But then they see how easy it is to think about <span class="math-container">$\Delta v/\Delta t$</span>, and they essentially cheat. In part, I think it has to do with traditional notation that goes all the way back to Newton.</p> <p>If you use <span class="math-container">$\Delta v/\Delta t$</span>, there are problems like the ones you raise. 
So they try to sweep the problems under the rug by making <span class="math-container">$\Delta v$</span> and <span class="math-container">$\Delta t$</span> so small that the error is <span class="math-container">$0$</span>.</p> <p>But that means <span class="math-container">$\Delta t$</span> would have to be <span class="math-container">$0$</span>, which won't work. So they make <span class="math-container">$\Delta v$</span> and <span class="math-container">$\Delta t$</span> &quot;infinitely small&quot; and yet not <span class="math-container">$0$</span>. And this works well enough. You have to ignore the fact that you can't say exactly what number actually is that small.</p> <p>The best thing you can do is recognize that infinitesimals are an imprecise mental shortcut. They are a great tool if you ignore the logical nitpicks. There is a rigorous way to get the same answer, but you would have to use the language of limits all the time. It would be cumbersome.</p>
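To make the limit picture concrete (an editorial toy example, not from the answer): for $v(t)=t^2$ the quotient $\Delta v/\Delta t$ at $t=1$ is exactly $2+\Delta t$, so it homes in on the derivative $2$ as $\Delta t$ shrinks, without $\Delta t$ ever being $0$:

```python
def v(t):
    return t * t

t = 1.0
previous_error = float("inf")
for k in range(1, 8):
    dt = 10.0 ** -k                     # small but never zero
    quotient = (v(t + dt) - v(t)) / dt  # here equals 2 + dt exactly
    error = abs(quotient - 2.0)
    assert error < previous_error       # each shrink improves the estimate
    previous_error = error
```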
2,135,717
<p>Let $G$ be an Abelian group of order $mn$ where $\gcd(m,n)=1$. </p> <p>Assume that $G$ contains an element $a$ of order $m$ and an element $b$ of order $n$. </p> <p>Prove $G$ is cyclic with generator $ab$.</p> <hr> <p>The idea is that $(ab)^k$ for $k \in [0, \dots , mn-1]$ will make distinct elements but do not know how to argue it. </p> <p>Could I say something like $\langle a\rangle=A$, $\langle b\rangle=B$, somehow $AB=\{ ab : a \in A , b \in B \}$ and that has order $|A||B|=mn$?</p> <p>Don't know if it's the same exact or similar to <a href="https://math.stackexchange.com/questions/1870217/finite-group-of-order-mn-with-m-n-coprime">Finite group of order $mn$ with $m,n$ coprime</a>.</p>
David Hill
145,687
<p>Hint: (1) Part of the definition of $l=\mathrm{lcm}(m,n)$ is that if $m|k$ and $n|k$, then $l|k$. (2) if $\gcd(m,n)=1$ then $\mathrm{lcm}(m,n)=mn$.</p> <p>So, if $(ab)^k=a^kb^k=1$, then $a^k=1$ and $b^k=1$. What can you conclude about $k$?</p>
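The two facts in the hint can be sanity-checked exhaustively for small moduli (an editorial addition): when $\gcd(m,n)=1$, any common multiple of $m$ and $n$ is a multiple of $mn$ — which is what forces $mn \mid k$ once $a^k=b^k=1$.

```python
from math import gcd

for m in range(1, 25):
    for n in range(1, 25):
        if gcd(m, n) != 1:
            continue
        for k in range(1, 300):
            if k % m == 0 and k % n == 0:
                assert k % (m * n) == 0   # lcm(m, n) = mn when coprime
```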
855,227
<p>I'm trying to understand Taylor's Theorem for functions of $n$ variables, but all this higher dimensionality is causing me trouble. One of my problems is understanding the higher order differentials. For example, if I have a function $f(x, y)$, then it's first differential is: </p> <p>$$df = \frac{\partial f}{\partial x}dx + \frac{\partial f}{\partial y}dy.$$</p> <p>To me this quantity is saying that: </p> <blockquote> <p>A differential change in the value of function $f(x,y)$ is equal to how fast function $f(x,y)$ is changing with respect to $x$ multiplied by a differential change in the $x$-coordinate plus how fast function $f(x,y)$ is changing with respect to $y$ multiplied by a differential change in the $y$-coordinate.</p> </blockquote> <p>This seems intuitive. But when we get into higher order differentials I get confused: </p> <p>$$d^2f= \frac{\partial^2 f}{\partial y ^2}dy^2 + 2\frac{\partial^2 f}{\partial y \partial x}dy\:dx + \frac{\partial^2 f}{\partial x ^2}dx^2$$</p> <p>How would one interpret this quantity? What about even higher order differentials? say $d^3f$ or $d^{1500 }f$ =) </p> <p>Thank you for any help! =) </p>
Chinny84
92,628
<p>Let's denote $$df = \left(dx\frac{\partial}{\partial x} + dy\frac{\partial}{\partial y}\right)f$$ then $$ d^2f = \left(dx\frac{\partial}{\partial x} + dy\frac{\partial}{\partial y}\right)\left(dx\frac{\partial}{\partial x} + dy\frac{\partial}{\partial y}\right)f = \left(dx\frac{\partial}{\partial x} + dy\frac{\partial}{\partial y}\right)^2f $$ This is in the loose sense, but with expressions of this form we can expand like $$ (a+b)^2 = a^2 + ab + ba + b^2 $$ I kept the middle terms order dependent since I am not making the assumption that they commute at this stage, though for functions I have come across (physicist here) they do, but I do not know if this is always true.</p> <p>Anyway, using the expression for $a$ and $b$ I am highlighting that it is simply a binomial expansion of the operators, so for $$ (a+b)^3 = a^3+3a^2b + 3ab^2 + b^3 $$ or equivalently $$ d^3f = \left(dx\frac{\partial}{\partial x} + dy\frac{\partial}{\partial y}\right)^3f,\\ =\left(dx^3\frac{\partial^3}{\partial x^3} +3dx^2dy\frac{\partial^3}{\partial x^2\partial y} + 3dxdy^2\frac{\partial^3}{\partial x\partial y^2} + dy^3\frac{\partial^3}{\partial y^3}\right)f $$ and so on and so on... try generating the summation rule for $d^nf$ ;)</p> <p>This is my humble opinion as always</p>
855,227
<p>I'm trying to understand Taylor's Theorem for functions of $n$ variables, but all this higher dimensionality is causing me trouble. One of my problems is understanding the higher order differentials. For example, if I have a function $f(x, y)$, then it's first differential is: </p> <p>$$df = \frac{\partial f}{\partial x}dx + \frac{\partial f}{\partial y}dy.$$</p> <p>To me this quantity is saying that: </p> <blockquote> <p>A differential change in the value of function $f(x,y)$ is equal to how fast function $f(x,y)$ is changing with respect to $x$ multiplied by a differential change in the $x$-coordinate plus how fast function $f(x,y)$ is changing with respect to $y$ multiplied by a differential change in the $y$-coordinate.</p> </blockquote> <p>This seems intuitive. But when we get into higher order differentials I get confused: </p> <p>$$d^2f= \frac{\partial^2 f}{\partial y ^2}dy^2 + 2\frac{\partial^2 f}{\partial y \partial x}dy\:dx + \frac{\partial^2 f}{\partial x ^2}dx^2$$</p> <p>How would one interpret this quantity? What about even higher order differentials? say $d^3f$ or $d^{1500 }f$ =) </p> <p>Thank you for any help! =) </p>
Claude Leibovici
82,404
<p><strong>Hint</strong></p> <p>Let us consider first the case of a function $f(x)$ and you expand it by a first order Taylor series, you represent locally the function by a straight line. If you go to the second order expansion, you represent locally the function by a parabola.</p> <p>When you have a function $f(x,y)$ and you expand it by a first order Taylor series, you represent locally the function by a plane. If you go to the second order expansion, you represent locally the function by a paraboloid and so on.</p>
4,289,381
<p>I am trying to answer a question about line integrals, I have had a go at it but I am not sure where I am supposed to incorporate the line integral into my solution.</p> <p><span class="math-container">$$ \mathbf{V} = xy\hat{\mathbf{x}} + -xy^2\hat{\mathbf{y}}$$</span> <span class="math-container">$$ \mathrm{d}\mathbf{l} = \hat{\mathbf{x}}\mathrm{d}x + \hat{\mathbf{y}}\mathrm{d}y $$</span> <span class="math-container">$$ \int_C\!\mathbf{V}\cdot\mathrm{d}\mathbf{l} = \int\!xy\,\mathrm{d}x - \int\!xy^2\,\mathrm{d}y = \left[ \frac{x^2y}{2}\right ]_?^? - \left[ \frac{xy^3}{3} \right]_?^? $$</span></p> <p>I have a feeling that the parabola in question must come into play in the limits of the integrals, although I dont know how they are supposed to. The parabola in question is <span class="math-container">$y = \frac{x^2}{3}$</span> and the coordinates at which the line integral is supposed to go over are <span class="math-container">$a=(0,0)$</span> and <span class="math-container">$b=(3,3)$</span>.</p>
Vince Vickler
981,114
<ul> <li>One could also use a parametrization.</li> </ul> <p>Set <span class="math-container">$x(t)=t$</span> and <span class="math-container">$y(t)= t^2/3$</span>.</p> <ul> <li>The parabola corresponds to the endpoint of the position vector:</li> </ul> <p><span class="math-container">$ \vec r (t)= t\vec i + \frac {t^2}{3} \vec j$</span>.</p> <ul> <li>This allows us to calculate the differential <span class="math-container">$d\vec r$</span>:</li> </ul> <p><span class="math-container">$d\vec{\mathbf{r}}(t)=\vec{\mathbf{r}}\,'(t) dt = (1\vec i + (2t/3) \vec j) dt$</span></p> <ul> <li><p>With the parametrization chosen, we get <span class="math-container">$\vec{\mathbf{V}}(t)= (t \cdot \frac{t^2}{3})\vec i - (t \cdot (\frac{t^2}{3})^2)\vec j$</span></p> </li> <li><p>One can then apply the definition: vector line integral = <span class="math-container">$\int_C \vec V(t)\cdot d\vec r(t)= \int_{t_1}^{t_2} (\vec V(t)\cdot\vec r\,'(t))\, dt$</span>,</p> </li> </ul> <p>with, here, <span class="math-container">$t_1 = 0$</span> and <span class="math-container">$t_2 =3$</span>.</p>
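Carrying the computation to a number (an editorial addition; the closed form below is my own evaluation, not from the answer): with this parametrization the integrand is $\vec V(t)\cdot\vec r\,'(t)=\frac{t^3}{3}-\frac{2t^6}{27}$, whose antiderivative $\frac{t^4}{12}-\frac{2t^7}{189}$ gives $-\frac{459}{28}$ on $[0,3]$. A Simpson-rule check agrees:

```python
def integrand(t):
    # V(t) . r'(t) = (t * t^2/3) * 1 + (-t * (t^2/3)^2) * (2t/3)
    return t ** 3 / 3 - 2 * t ** 6 / 27

# composite Simpson's rule on [0, 3]
N = 1000                      # even number of subintervals
h = 3.0 / N
acc = integrand(0.0) + integrand(3.0)
for i in range(1, N):
    acc += (4 if i % 2 else 2) * integrand(i * h)
numeric = acc * h / 3

exact = -459 / 28             # from the antiderivative t^4/12 - 2 t^7/189
assert abs(numeric - exact) < 1e-8
```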
3,479,144
<p>Let <span class="math-container">$(X,M,\mu)$</span> be a measure space and <span class="math-container">$f \in L^{1}(X,\mu)$</span>. Then show that for <span class="math-container">$E \in M$</span>, <span class="math-container">$\lim_{k \rightarrow \infty} \int_{E} |f|^{1/k} = \mu(E)$</span>. I am able to show this in the case that <span class="math-container">$\mu(E) &lt; \infty$</span>, but I do not know how to proceed when <span class="math-container">$\mu(E) = \infty$</span>. For the sets of finite measure, we can use <span class="math-container">$ ||f||_{1} \cdot \chi_{E} \in L^{1}$</span> as a bound a use dominated convergence. </p>
Ross Millikan
1,827
<p>Looking at the <a href="https://en.wikipedia.org/wiki/Square_root#Square_roots_of_positive_integers" rel="nofollow noreferrer">expansions of the small square roots</a>, it appears that if <span class="math-container">$k$</span> divides <span class="math-container">$2a$</span> the expansion of <span class="math-container">$\sqrt{a^2+\frac {2a}k}$</span> is of the form <span class="math-container">$[a,\overline {k,2a}]$</span>. Go through the continued fraction expansion and see if this works.</p> <p>Added: given <span class="math-container">$k$</span> we can find infinitely many examples where the expansion is <span class="math-container">$[a,\overline{k,2a}]$</span>. Choose any <span class="math-container">$a$</span> so that <span class="math-container">$k$</span> divides <span class="math-container">$2a$</span>. Then <span class="math-container">$\sqrt{a^2+\frac {2a}k}=[a,\overline{k,2a}].$</span> </p> <p>The proof is just to compute it. <span class="math-container">$$\sqrt{a^2+\frac {2a}k}=a+\sqrt{a^2+\frac {2a}k}-a\\ =a+\frac 1{\frac 1{\sqrt{a^2+\frac {2a}k}-a}}\\ =a+\frac 1{\frac {\sqrt{a^2+\frac {2a}k}+a}{(a^2+\frac {2a}k)-a^2}}\\ =a+\frac 1{\frac {\sqrt{a^2+\frac {2a}k}+a}{\frac {2a}k}}\\ =a+\frac 1{\frac k{2a}\left(\sqrt{a^2+\frac {2a}k}+a\right)}\\ =a+\frac 1{k+\frac k{2a}\left(\sqrt{a^2+\frac {2a}k}-a\right)}\\ =a+\frac 1{k+\frac 1{\frac {2a}{k\left(\sqrt{a^2+\frac {2a}k}-a\right)}}}\\ =a+\frac 1{k+\frac 1{\frac {2a\left(\sqrt{a^2+\frac {2a}k}+a\right)}{2a}}}\\ =a+\frac 1{k+\frac 1{2a+\left(\sqrt{a^2+\frac {2a}k}-a\right)}}$$</span> The remainder <span class="math-container">$\sqrt{a^2+\frac {2a}k}-a$</span> at the end is the same one we inverted at the start, so the terms <span class="math-container">$k,2a$</span> repeat forever. If you choose <span class="math-container">$a=\frac k2$</span> you will get an expansion <span class="math-container">$[a,\overline{2a,2a}]$</span> which is not the minimal one. If that bothers you, prohibit that case.</p>
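The claimed pattern can be verified mechanically with the standard integer algorithm for the continued fraction of $\sqrt N$ (an editorial addition; the pairs $(a,k)$ below are arbitrary choices with $k \mid 2a$):

```python
import math

def cf_sqrt(N, terms):
    """First `terms` continued-fraction terms of sqrt(N), N not a perfect square."""
    a0 = math.isqrt(N)
    out = [a0]
    m, d, a = 0, 1, a0
    for _ in range(terms - 1):
        m = d * a - m
        d = (N - m * m) // d
        a = (a0 + m) // d
        out.append(a)
    return out

# whenever k divides 2a, sqrt(a^2 + 2a/k) should expand as [a; k, 2a, k, 2a, ...]
for a, k in [(6, 3), (10, 4), (5, 5), (9, 2)]:
    N = a * a + 2 * a // k
    assert cf_sqrt(N, 5) == [a, k, 2 * a, k, 2 * a]
```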
2,987,994
<p>I'm looking for definite integrals that are solvable using the method of differentiation under the integral sign (also called the Feynman Trick) in order to practice using this technique.</p> <p>Does anyone know of any good ones to tackle?</p>
FDP
186,817
<p>Maybe you can look at:</p> <p><a href="https://math.stackexchange.com/a/2989801/186817">https://math.stackexchange.com/a/2989801/186817</a></p> <p>Feynman's trick is used to compute:</p> <p><span class="math-container">\begin{align}\int_0^{\frac{\pi}{12}}\ln(\tan x)\,dx\end{align}</span></p>
2,987,994
<p>I'm looking for definite integrals that are solvable using the method of differentiation under the integral sign (also called the Feynman Trick) in order to practice using this technique.</p> <p>Does anyone know of any good ones to tackle?</p>
FDP
186,817
<p>The integral <span class="math-container">$\displaystyle \int_0^1 \frac{x-1}{\ln x}dx$</span> can be used to introduce Feynman's trick (Leibniz was already using this trick)</p>
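A sketch of the trick on this integral (an editorial addition, using the standard parametrization): set $I(a)=\int_0^1\frac{x^a-1}{\ln x}\,dx$; then $I'(a)=\int_0^1 x^a\,dx=\frac{1}{a+1}$, and since $I(0)=0$, we get $I(a)=\ln(a+1)$, so $I(1)=\ln 2$. A midpoint-rule check for a few values of $a$:

```python
import math

def f(x, a):
    # integrand (x^a - 1) / ln x; extends continuously: -> 0 at 0, -> a at 1
    return (x ** a - 1) / math.log(x)

N = 100000
for a in [1.0, 2.0, 3.0]:
    # midpoint rule avoids both endpoints of (0, 1)
    approx = sum(f((i + 0.5) / N, a) for i in range(N)) / N
    assert abs(approx - math.log(a + 1)) < 1e-4
```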
48,865
<blockquote> <p>Suppose $f:X\to Y$ is a morphism of <em>smooth connected</em> schemes (over some base). Say $Z\subseteq Y$ is a closed subscheme with complement $U$ so that $f$ pulls back (restricts) to isomorphisms on $Z$ and $U$. Does it follow that $f$ is an isomorphism?</p> </blockquote> <p>If we drop the condition that $X$ and $Y$ have to be smooth, the normalization of a cusp is a counterexample. If we remove the hypothesis that $X$ and $Y$ must be connected, taking $X=U\sqcup Z$ gives a counterexample.</p>
Steven Landsburg
10,503
<p>Reducing to the affine case, the question is this:</p> <p>Given a ring homomorphism $R\rightarrow S$, and given an ideal $I\subset R$, suppose that all of the following are isomorphisms: $$R/I\rightarrow S/IS$$ $$R_f\rightarrow S_f\quad \hbox{for any $f\in I$}$$ Can we conclude that $R\rightarrow S$ is an isomorphism?</p> <p>Assuming irreducibility (i.e. assuming $R$ and $S$ are domains) the answer is certainly yes if $I=(f)$ is principal. Then it's easy to see that $R\rightarrow S$ is injective (because the injection $R\rightarrow R_f\approx S_f$ factors through it). For surjectivity, let $s\in S$. Then because $R_f\approx S_f$, we can write $f^ks=r$ for some $r\in R$. Then $r$ maps to zero in $S/fS$, so $r\in fR$, so (because $f$ is not a zero-divisor) $f^{k-1}s\in R$, contradicting the fact that we could have chosen $k$ minimal.</p> <p>More generally, if $I=(f_1,\ldots,f_k)$ is finitely generated, then $f_i^N s\in R$ and therefore $f_i^N s\in I\subset R$ for every $i$, which at the very least forces $s$ to be integral over $R$.</p> <p>I expect you can settle the general case either by generalizing this argument or by constructing $I$ and $S$ in a way that forces it to go wrong.</p>
48,865
<blockquote> <p>Suppose $f:X\to Y$ is a morphism of <em>smooth connected</em> schemes (over some base). Say $Z\subseteq Y$ is a closed subscheme with complement $U$ so that $f$ pulls back (restricts) to isomorphisms on $Z$ and $U$. Does it follow that $f$ is an isomorphism?</p> </blockquote> <p>If we drop the condition that $X$ and $Y$ have to be smooth, the normalization of a cusp is a counterexample. If we remove the hypothesis that $X$ and $Y$ must be connected, taking $X=U\sqcup Z$ gives a counterexample.</p>
Anton Geraschenko
1
<p>This is essentially the argument in <a href="https://mathoverflow.net/users/986/bhargav">Bhargav</a>'s comment. <a href="https://mathoverflow.net/users/28/matt-satriano">Matt Satriano</a> showed me the separatedness argument.</p> <p>We first apply the valuative criterion for separatedness to show that $f:X\to Y$ is separated. In fact, this shows that any morphism which is injective on topological spaces must be separated. Suppose $\Delta$ is the spectrum of a valuation ring with closed point $\ast$. Given any map $\Delta\to Y$, there is a unique set theoretic lift to $X$. Therefore, any failure of the valuative criterion will be witnessed on the induced map from an affine open neighborhood of $\ast$ in $X$ to an affine open neighborhood of $\ast$ in $Y$. Since any morphism of affine schemes is separated, the valuative criterion holds, so $f$ is separated.</p> <p>Since $f$ induces isomorphisms on geometric points, it is quasi-finite. Supposing $Y$ is normal, integral and locally noetherian and $f$ is finite type, Zariski's Main Theorem<sup>&dagger;</sup> tells us that $f$ is an open immersion. Since $f$ is surjective, it is an isomorphism.</p> <p><sup>&dagger;</sup> I'm always surprised at how <em>everything</em> seems to be ZMT, so here's a precise reference: EGA III, Corollary 4.4.9. If $Y$ is normal, integral, and locally noetherian, $f:X\to Y$ is separated, birational, finite type, and quasi-finite, then $f$ is an open immersion.</p>
49,068
<p>Given lists $a$ and $b$, which represent multisets, how can I compute the complement $a\setminus b$?</p> <p>I'd like to construct a function <code>xunion</code> that returns the symmetric difference of multisets. For example, if $a=\{1, 1, 2, 1, 1, 3\}$ and $b=\{1, 5, 5, 1\}$, then their symmetric difference is $\big((a\cup b)\setminus(a\cap b)\big)\setminus(a\cap b)=(a\setminus b)\cup(b\setminus a)=\{1,1,2,3,5,5\}$.</p>
wxffles
427
<p>Here's my version.</p> <pre><code>Clear[multiComplement]; multiComplement[a_, b_] := Join @@ (ConstantArray[First@#, Max[Last@#, 0]] &amp; /@ (Tally[a] /. (Tally[b] /. {e_, c_Integer} :&gt; {e, k_Integer} -&gt; {e, k - c}))); </code></pre> <p>In action:</p> <pre><code>With[{a = {1, 1, 2, 1, 1, 3}, b = {1, 5, 5, 1}}, multiComplement[a, b]~Join~multiComplement[b, a] ] </code></pre> <blockquote> <p><code>{1, 1, 2, 3, 5, 5}</code></p> </blockquote> <p>It appears essentially the same as rasher's (who is indeed 10 minutes rasher). The main points are using <code>Tally</code> to get a count of how often each element appears. Then using rules to pick out the same elements in the tally on the other set. Working out the tally difference, and ignore the negative count <code>Max[0, ...]</code>. Finally the new tally is expanded with <code>ConstantArray</code>.</p>
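<p>The same tally-and-clamp idea carries over directly to other languages. For instance, Python's <code>collections.Counter</code> implements a multiset subtraction that discards non-positive counts, mirroring the <code>Max[Last@#, 0]</code> step above. A hedged cross-language sketch, not part of the original answer:</p>

```python
from collections import Counter

def multi_complement(a, b):
    # Counter subtraction keeps only positive counts, exactly like
    # the Max[Last@#, 0] clamp in the Mathematica version.
    return sorted((Counter(a) - Counter(b)).elements())

def xunion(a, b):
    # Symmetric difference of multisets: (a minus b) joined with (b minus a).
    return sorted(multi_complement(a, b) + multi_complement(b, a))

print(xunion([1, 1, 2, 1, 1, 3], [1, 5, 5, 1]))  # -> [1, 1, 2, 3, 5, 5]
```

<p>The output matches the expected symmetric difference from the question (up to ordering, since the Python sketch sorts its results).</p>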
49,068
<p>Given lists $a$ and $b$, which represent multisets, how can I compute the complement $a\setminus b$?</p> <p>I'd like to construct a function <code>xunion</code> that returns the symmetric difference of multisets. For example, if $a=\{1, 1, 2, 1, 1, 3\}$ and $b=\{1, 5, 5, 1\}$, then their symmetric difference is $\big((a\cup b)\setminus(a\cap b)\big)\setminus(a\cap b)=(a\setminus b)\cup(b\setminus a)=\{1,1,2,3,5,5\}$.</p>
Dr. belisarius
193
<p>Another way (slower than rasher's):</p> <pre><code>Clear[simComplement]; simComplement[a_, b_] := Join @@ (Fold[DeleteCases[#1, #2, {1}, 1] &amp;, #[[1]], Join@#[[2]]] &amp; /@ {{a, b}, {b, a}}) With[{a = {1, 1, 2, 1, 1, 3}, b = {1, 5, 5, 1}}, simComplement[a, b]] (* {2, 1, 1, 3, 5, 5} *) </code></pre>
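<p>The <code>Fold</code>/<code>DeleteCases</code> strategy, which deletes at most one occurrence per matched element and keeps the survivors in their original order, can also be sketched outside Mathematica. A hedged Python illustration, not from the original answer:</p>

```python
def sim_complement(a, b):
    # Remove one occurrence of each element of b from a, in the spirit of
    # Fold[DeleteCases[#1, #2, {1}, 1] &, a, b]; original order is preserved.
    out = list(a)
    for elem in b:
        if elem in out:
            out.remove(elem)  # list.remove deletes only the first match
    return out

left = sim_complement([1, 1, 2, 1, 1, 3], [1, 5, 5, 1])
right = sim_complement([1, 5, 5, 1], [1, 1, 2, 1, 1, 3])
print(left + right)  # -> [2, 1, 1, 3, 5, 5], matching the answer's output
```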
300,753
<p>Revision in response to early comments. Users of set theory need an <em>implementation</em> (in case "model" means something different) of the axioms. I would expect something like this:</p> <blockquote> <p>An <em>implementation</em> consists of a "collection-of-elements" $X$, and a relation (logical pairing) $E:X\times X\to \{0,1\}$. A logical function $h:X\to\{0,1\}$ is a <em>set</em> if it is of the form $x\mapsto E[x,a]$ for some $a\in X$. Sets are required to satisfy the following axioms: ....</p> </blockquote> <p>The background "collection-of-elements" needs some properties to even get started. For instance "of the form $x\mapsto E[x,a]$" needs first-order quantification. Mathematical standards of precision seem to require <em>some</em> discussion, but so far I haven't seen anything like this. The first version of this question got answers like $X$ is "the domain of discourse" (philosophy??), "everything" (naive set theory?) "a set" (circular), and "type theory" (a postponement, not a solution). Is this a missing definition? Taking it seriously seems to give a rather fruitful perspective. </p>
Eric Wofsey
75
<p>This answer doesn't really have any ideas that are not already present in Noah Schweber's answer, but there are some points that I feel should be made more forcefully. In particular, I'd like to focus on a couple statements you've made which I think reflect a fundamental misunderstanding of the purpose of axiomatic set theory.</p> <p>You start your question with the assertion that</p> <blockquote> <p>Users of set theory need an <em>implementation</em> (in case "model" means something different) of the axioms.</p> </blockquote> <p>You also stated in a comment that</p> <blockquote> <p>I'm a working mathematician, so am concerned with usable implementations rather than the metatheory.</p> </blockquote> <p>These statements are incorrect. Using the axioms of set theory (the way a working mathematician would) does <em>not</em> involve any contact whatsoever with models or "implementations" of the axioms. The primary purpose of axiomatic set theory is to provide a precise, formal framework for making statements and proofs in mathematics. In other words, it is "the rules of the game": the statements we are allowed to talk about are those which can be expressed in the first-order language of set-theory, and the statements we are allowed to prove are those which can be deduced using the deduction rules of first-order logic from our axioms of set theory.</p> <p>The value of having such rules is that they eliminate any ambiguity about what is or is not a valid proof. We don't have to rely on any imprecise intuition about what sets are or how they behave; we can reduce all of our reasoning to manipulating finite strings of symbols according to certain formal rules. (This is the purely syntactic formalist approach described in Noah's answer.)</p> <p>What I want to emphasize here is that an ordinary "user" of axiomatic set theory <em>only</em> ever encounters this syntactic approach. 
If you are an ordinary mathematician using set theory as your foundation for mathematics, you are always just using the axioms as your formal foundation. If you do imagine that you are working with some "implementation" of set theory, this is a philosophical (Platonist) statement, not a mathematical one.</p> <p>Now, some mathematicians <em>do</em> also study models of set theory (and such mathematicians are usually called "set theorists"). But this is <em>separate</em> from the use of set theory as a foundation, and so the apparent circularity of using sets to do so is not a problem. We study models of set theory because they are an interesting type of mathematical structure, and also because they provide a means of proving that our formal syntactic approach to set theory <em>cannot</em> prove certain statements (e.g., the continuum hypothesis). But even if no one had ever invented the notion of a model of set theory, we would still be able to use the axioms of set theory as a foundation for mathematics.</p>
78,341
<p>I believe I read somewhere that residually finite-by-$\mathbb{Z}$ groups are residually finite. That is, if $N$ is residually finite with $G/N\cong \mathbb{Z}$ then $G$ is residually finite.</p> <p>However, I cannot remember where I read this, and nor can I find another place which says it. I was therefore wondering if someone could confirm whether this is true or not, and if it is give either a proof or a reference for this result? (If not, a counter-example would not go amiss!)</p> <p>Note that I definitely know it is true if $N$ is f.g. free (this can be found in a paper of G. Baumslag, "Finitely generated cyclic extensions of free groups are residually finite" (Bull. Amer. Math. Soc., <strong>5</strong>, 87-94, 1971)).</p>
Andreas Thom
8,176
<p>This is not true. The most prominent examples of non-residually finite central extensions of residually finite groups (by $\mathbb Z$) are certain lattices in non-linear Lie groups.</p> <p>See for example</p> <p>M. S. Raghunathan. Torsion in cocompact lattices in coverings of Spin(2, n). Math. Annalen 266, 403–419, 1984.</p> <p>or</p> <p>P. Deligne. Extensions centrales non residuellement finies de groupes arithmetiques. CR Acad. Sci. Paris, serie A-B, 287, 203–208, 1978.</p>
78,341
<p>I believe I read somewhere that residually finite-by-$\mathbb{Z}$ groups are residually finite. That is, if $N$ is residually finite with $G/N\cong \mathbb{Z}$ then $G$ is residually finite.</p> <p>However, I cannot remember where I read this, and nor can I find another place which says it. I was therefore wondering if someone could confirm whether this is true or not, and if it is give either a proof or a reference for this result? (If not, a counter-example would not go amiss!)</p> <p>Note that I definitely know it is true if $N$ is f.g. free (this can be found in a paper of G. Baumslag, "Finitely generated cyclic extensions of free groups are residually finite" (Bull. Amer. Math. Soc., <strong>5</strong>, 87-94, 1971)).</p>
Andreas Thom
8,176
<p>The modified question has a positive answer if $N$ is finitely generated.</p> <p>Consider an extension $1 \to N \to G \to \mathbb Z \to 1$ and take a lift $u \in G$ of the generator of $\mathbb Z$. If $N$ is finitely generated and $H' \subset N$ is a subgroup of finite index, then the intersection of all subgroups of index $[N:H']$ (call it $H$) is invariant under conjugation by $u$; it still has finite index in $N$, because a finitely generated group has only finitely many subgroups of any given index. Hence, for all nonzero $m \in \mathbb Z$, the subgroup $Hu^{m \mathbb Z}$ is a finite index normal subgroup of $G$.</p> <p>Therefore, if $N$ is finitely generated and residually finite, then $G$ is residually finite as well.</p>
1,186,270
<p>$k(x) = e^{-\frac{x^2}{2}}$ on $[-1,2]$</p> <p>I think the derivative of that is $ -x e^{-\frac{x^2}{2}}$. I don't know how to find the zeros of that expression.</p>
Tim Raczkowski
192,581
<p>Hint: $e^x\ne 0$ for any real $x$.</p>
1,186,270
<p>$k(x) = e^{-\frac{x^2}{2}}$ on $[-1,2]$</p> <p>I think the derivative of that is $ -x e^{-\frac{x^2}{2}}$. I don't know how to find the zeros of that expression.</p>
Mathemagician1234
7,012
<p>$k(x) = e^{-x^2/2}$ is defined on $[-1,2]\subset \mathbf{R}$, so we solve $k'(x) = -xe^{-x^2/2}=0$. Clearly $k'(x)= 0$ iff $x=0$, since $e^{-x^2/2}\neq 0$ for all $x$ in <strong>R</strong>. So the only interior critical point is at $x=0$; on a closed bounded interval, the endpoints $x=-1$ and $x=2$ must also be checked as candidates for extrema. We now take the second derivative of $k(x)$:</p> <p>$k''(x) = (x^{2}-1)e^{-x^2/2}=\frac{x^{2}-1}{e^{x^2/2}}$</p> <p>At $x=0$ we get $k''(0)=-1&lt;0$, so the second derivative test shows that $x=0$ is a local maximum, with $k(0)=1$.</p> <p>The first derivative test confirms this. Let $x=-\frac{1}{2}$. Then $k'(-\frac{1}{2}) = \frac{1}{2}e^{-1/8} &gt; 0$. Now consider $x=1$. Then $k'(1)= -e^{-1/2} &lt; 0$. So $k$ changes from increasing to decreasing at $x=0$. Therefore, the function is increasing on $[-1,0)$ and decreasing on $[0,2]$, which makes both endpoints local minima on the interval; since $k(2)=e^{-2}&lt;k(-1)=e^{-1/2}$, the absolute minimum is at $x=2$, while $(0,1)$ is the local (and absolute) maximum. The graph also supports this: </p> <p><img src="https://i.stack.imgur.com/Uk8NT.jpg" alt="enter image description here"></p>
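<p>The classification above can be sanity-checked numerically; the following hedged Python sketch (not part of the original answer) samples $k$ and $k'$ on $[-1,2]$:</p>

```python
import math

def k(x):
    return math.exp(-x * x / 2)

def k_prime(x):
    return -x * math.exp(-x * x / 2)

# The only interior critical point is x = 0, and k(0) = 1 is the maximum
# over a fine grid on [-1, 2].
xs = [i / 100 for i in range(-100, 201)]
assert k_prime(0) == 0
assert max(k(x) for x in xs) == k(0) == 1.0

# k' > 0 just left of 0 and k' < 0 just right of 0, so k increases on
# [-1, 0) and decreases on (0, 2]; both endpoints are local minima, and
# the smaller endpoint value is at x = 2.
assert k_prime(-0.5) > 0 and k_prime(1) < 0
assert k(2) < k(-1)
print("all checks passed")
```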
105,723
<p>I am using MMA to do some algebra and I need to do some simplification. For example, I have the following expression $$ x^2 \left( \frac{6 \left(2 c_1 x^6+c_2\right)}{x^4} \right)-x \left( 4 c_1 x^3-\frac{2 c_2}{x^3}\right)-8\left(c_1 x^4+\frac{c_2}{x^2}\right ) $$ Can I get intermediate steps to reach the final answer? For example, I want to factor out $c_1$ and $c_2$ and write the expression as $c_1 (x\dots) + c_2 (x\dots)$ but the terms $x\dots $ need not be evaluated. I can do the calculation by hand as it is simple, but if I could do this in MMA, it would greatly save my time.</p> <p>I always have some terms that could be factored out. It could be constant $c_k$ or polynomial $x^n$ or trig functions and there could be multiple terms but they are always unique and do not conflict with each other.</p>
jjc385
11,035
<p>I generalize Jason B's answer to group by like terms in any number of coefficients, as well as in cases where you have polynomials multiplying one another:</p> <p>It sounds like you mostly want to keep <code>Plus</code> from evaluating. You can do this by replacing <code>Plus</code> with <code>plus</code>, where <code>plus</code> is a Flat function (since we might as well keep associativity of addition)</p> <pre><code>SetAttributes[plus, Flat]; expr = (a + b c) (a - b c); expr /. Plus-&gt;plus </code></pre> <blockquote> <p>plus[a, -b c] plus[a, b c]</p> </blockquote> <p>Now make <code>plus</code> distributive (including over itself, i.e., when it appears in an integer power):</p> <pre><code>sep = % //. { a_ b_plus :&gt; (a*# &amp;) /@ b, Power[a_plus, n_ /; n &gt; 0 &amp;&amp; IntegerQ@n] :&gt; (Power[a, n - 1]*# &amp;) /@ a } </code></pre> <blockquote> <p>plus[a^2, -a b c, a b c, -b^2 c^2]</p> </blockquote> <p>Now say you want to sort the output based on powers of a, b, and c. To this end, I define a <code>getPower</code> function:</p> <pre><code>(* operator form *) getPower[var_][expr_] := getPower[expr, var] (* Thread over a list of variables *) getPower[expr_, varList_List] := getPower[expr, #] &amp; /@ varList getPower[expr_, var_] /; FreeQ[expr, var] := 0 getPower[expr_Times, var_] := Plus @@ getPower[var] /@ List @@ expr getPower[(expr_)^n_, var_] := n getPower[expr, var] getPower[var_, var_] := 1 </code></pre> <p>(Note that <code>getPower</code> requires <code>expr</code> to be a monomial in the supplied variables; otherwise the output will contain <code>getPower</code>. Furthermore, note that <code>getPower[ 1/(a+x), x]</code> returns 0, but <code>getPower[ 1/x, x]</code> returns -1. 
)</p> <p>Then you can use <code>GatherBy</code> to sort the output by coefficient:</p> <pre><code>GatherBy[List @@ sep, getPower@{a, b, c}] </code></pre> <blockquote> <p>{{a^2}, {-a b c, a b c}, {-b^2 c^2}}</p> </blockquote> <p>It might be desirable to convert the sublists back to <code>plus</code> :</p> <pre><code>plus@@@% </code></pre> <blockquote> <p>{plus[a^2], plus[-a b c, a b c], plus[-b^2 c^2]}</p> </blockquote> <p>You can imagine generalizing the above to pull the coefficients out front.</p> <p>Note that it may or may not be useful to have given <code>plus</code> attribute <code>OneIdentity</code> from the start, depending on how you'd like to generalize.</p>
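<p>Since the target form $c_1(x\dots)+c_2(x\dots)$ is linear in $c_1$ and $c_2$, the collected coefficients can also be read off numerically by evaluating at $(c_1,c_2)=(1,0)$ and $(0,1)$. A hedged Python sketch (outside Mathematica, not from either answer); incidentally, for the expression in the question both collected coefficients cancel to zero:</p>

```python
def expr(x, c1, c2):
    # The expression from the question, transcribed term by term.
    return (x**2 * (6 * (2 * c1 * x**6 + c2) / x**4)
            - x * (4 * c1 * x**3 - 2 * c2 / x**3)
            - 8 * (c1 * x**4 + c2 / x**2))

x = 1.7  # any nonzero sample point works, by linearity in c1 and c2
coeff_c1 = expr(x, 1, 0)  # numeric value of the c1 coefficient at this x
coeff_c2 = expr(x, 0, 1)  # numeric value of the c2 coefficient at this x
print(abs(coeff_c1) < 1e-9 and abs(coeff_c2) < 1e-9)  # -> True
```

<p>Symbolically, the $c_1$ terms collect to $(12-4-8)x^4=0$ and the $c_2$ terms to $(6+2-8)x^{-2}=0$, which is why both numeric coefficients vanish up to rounding.</p>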
4,528,059
<p>The graph of <span class="math-container">$y = f(x)$</span> is as follows:</p> <p><a href="https://i.stack.imgur.com/vprAu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vprAu.png" alt="enter image description here" /></a></p> <p>Find <span class="math-container">$$\int_{-1}^{1} f(1-x^2) dx$$</span></p> <p>I tried to solve this through substitution of <span class="math-container">$u = 1-x^2$</span>, found <span class="math-container">$dx = \frac{du}{-2x}$</span> and attempted to adjust the upper and lower bounds of the substitution to attempt to use the graphical area under the function to find my answer. However, when trying to find the upper and lower bounds they both equal <span class="math-container">$0$</span>, so this cannot work.</p>
Moko19
618,171
<p>Take the case of <span class="math-container">$n=2$</span>. The quotes are <span class="math-container">$q_1$</span> and <span class="math-container">$q_2$</span>. Suppose we bet <span class="math-container">$x_1$</span> and <span class="math-container">$x_2$</span>, and let <span class="math-container">$x'_1=\frac{x_1}{x_2}$</span>. Our minimum profit is then <span class="math-container">$\min((q_1-1)x_1-x_2,(q_2-1)x_2-x_1)=x_2\min(q_1x'_1-1-x'_1,q_2-1-x'_1)=x_2(\min(q_1x'_1,q_2)-1-x'_1)$</span> (using that the stake <span class="math-container">$x_2$</span> is positive). We want the minimum profit to be positive, so <span class="math-container">$x_2(\min(q_1x'_1,q_2)-1-x'_1)&gt;0$</span>. Specifically, this means that both <span class="math-container">$x_2(q_1x'_1-1-x'_1)&gt;0$</span> and <span class="math-container">$x_2(q_2-1-x'_1)&gt;0$</span>. Dividing by <span class="math-container">$x_2$</span>, we have that both <span class="math-container">$q_1x'_1-1-x'_1&gt;0$</span> and <span class="math-container">$q_2-1-x'_1&gt;0$</span>. This means that <span class="math-container">$x'_1&gt;\frac1{q_1-1}$</span> and <span class="math-container">$x'_1&lt;q_2-1$</span>. Combining these inequalities and removing the middle part gives that <span class="math-container">$\frac{1}{q_1-1}&lt;q_2-1$</span>, or <span class="math-container">$1&lt;(q_1-1)(q_2-1)=q_1q_2-(q_1+q_2)+1$</span>. As a result, we get the requirement <span class="math-container">$q_1+q_2&lt;q_1q_2$</span>, and then any <span class="math-container">$x'_1$</span> fulfilling <span class="math-container">$\frac{1}{q_1-1}&lt;x'_1&lt;q_2-1$</span> is a guaranteed winning strategy.</p> <p>I'd imagine you could use similar methodology for higher values of <span class="math-container">$n$</span>.</p>
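<p>The two-outcome condition and the stake interval can be checked directly; a hedged Python sketch (the function name and sample quotes are illustrative, not from the answer):</p>

```python
def arbitrage_ratio(q1, q2):
    # A sure profit exists iff q1 + q2 < q1 * q2; in that case any
    # ratio x1/x2 in the open interval (1/(q1-1), q2-1) works.
    if q1 + q2 >= q1 * q2:
        return None  # no guaranteed-profit stake ratio
    lo, hi = 1 / (q1 - 1), q2 - 1
    return (lo + hi) / 2  # take the midpoint of the interval

q1, q2 = 2.5, 2.2  # sample quotes: q1 + q2 = 4.7 < 5.5 = q1 * q2
x1, x2 = arbitrage_ratio(q1, q2), 1.0
profit_if_1 = (q1 - 1) * x1 - x2  # net result when outcome 1 happens
profit_if_2 = (q2 - 1) * x2 - x1  # net result when outcome 2 happens
print(profit_if_1 > 0 and profit_if_2 > 0)  # -> True: profit either way
```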
3,572,967
<p>Can I ask how to solve this type of equation:</p> <blockquote> <p><span class="math-container">$$\log_{yz} \left(\frac{x^2+4}{4\sqrt{yz}}\right)+\log_{zx}\left(\frac{y^2+4}{4\sqrt{zx}}\right)+\log_{xy}\left(\frac{z^2+4}{4\sqrt{xy}}\right)=0$$</span></p> </blockquote> <p>It is given that <span class="math-container">$x,y,z&gt;1$</span>. Which properties of the logarithm have to be used?</p> <p>I know that <span class="math-container">$\log_a b=\log b/\log a\to \log_{yz}((x^2+4)/(4\sqrt{yz}))=\log ((x^2+4)/(4\sqrt{yz}))/\log (yz)$</span></p> <p>and <span class="math-container">$\log_a (b/c)=\log_a b-\log_a c\to \log_{yz}((x^2+4)/(4\sqrt{yz}))=\log_{yz} (x^2+4)-\log_{yz} (4\sqrt{yz})$</span></p> <p>And how to solve this type of equation in general?</p>
LHF
744,207
<p>Notice that we can use the following property <span class="math-container">$\log_a \frac{b}{c}=\log_a b-\log_a c$</span> to get:</p> <p><span class="math-container">$$\log_{yz} \left(\frac{x^2+4}{4\sqrt{yz}}\right)=\log_{yz}\frac{x^2+4}{4}-\log_{yz}\sqrt{yz}=\log_{yz}\frac{x^2+4}{4}-\frac{1}{2}$$</span></p> <p>The equation can, thus, be written as:</p> <p><span class="math-container">$$\log_{yz}\frac{x^2+4}{4}+\log_{zx}\frac{y^2+4}{4}+\log_{xy}\frac{z^2+4}{4}=\frac{3}{2}$$</span></p> <p>Notice that for any real number <span class="math-container">$a$</span>, we have:</p> <p><span class="math-container">$$\frac{a^2+4}{4}\geq a \Leftrightarrow (a-2)^2\geq 0$$</span></p> <p>with equality only if <span class="math-container">$a=2$</span>. Therefore:</p> <p><span class="math-container">$$\log_{yz}\left(\frac{x^2+4}{4}\right)\ge \log_{yz} x=\frac{\ln x}{\ln y+\ln z}$$</span></p> <p>Summing up, we arrive at:</p> <p><span class="math-container">$$\frac{3}{2}\geq \frac{\ln x}{\ln y+\ln z}+\frac{\ln y}{\ln x+\ln z}+\frac{\ln z}{\ln x+\ln y}$$</span></p> <p>If we let <span class="math-container">$a=\ln x, b=\ln y, c=\ln z$</span>, since <span class="math-container">$x,y,z&gt;1$</span>, we have <span class="math-container">$a,b,c&gt;0$</span> and:</p> <p><span class="math-container">$$\frac{a}{b+c}+\frac{b}{c+a}+\frac{c}{a+b}\leq \frac{3}{2}$$</span></p> <p>However, Nesbitt's inequality states that for any positive real numbers <span class="math-container">$a,b,c$</span>, we must have:</p> <p><span class="math-container">$$\frac{a}{b+c}+\frac{b}{c+a}+\frac{c}{a+b}\geq \frac{3}{2}$$</span></p> <p>with equality only if <span class="math-container">$a=b=c$</span>. This implies immediately that the only solution of the equation is <span class="math-container">$x=y=z=2$</span>.</p> <p><em>Note:</em> As far as I know, there is no standard way to solve this sort of multivariable logarithmic equation. 
</p> <p>Most of the time, you have to use inequalities and show that the equation is a particular equality case. In fact, it is given that <span class="math-container">$x,y,z&gt;1$</span> so that the logarithm will be positive (<span class="math-container">$\log_a b$</span> is positive if <span class="math-container">$a,b&gt;1$</span> or <span class="math-container">$a,b&lt;1$</span>).</p>
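<p>The unique solution $x=y=z=2$ obtained from Nesbitt's inequality can be verified numerically; a hedged Python sketch (not part of the original answer):</p>

```python
import math

def lhs(x, y, z):
    # Left-hand side of the original equation; math.log(v, b) is log base b.
    return (math.log((x**2 + 4) / (4 * math.sqrt(y * z)), y * z)
            + math.log((y**2 + 4) / (4 * math.sqrt(z * x)), z * x)
            + math.log((z**2 + 4) / (4 * math.sqrt(x * y)), x * y))

print(abs(lhs(2, 2, 2)) < 1e-12)  # -> True: each term is log_4(1) = 0
print(abs(lhs(3, 2, 2)) > 1e-6)   # -> True: other sample points miss zero
```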
3,572,967
<p>Can I ask how to solve this type of equation:</p> <blockquote> <p><span class="math-container">$$\log_{yz} \left(\frac{x^2+4}{4\sqrt{yz}}\right)+\log_{zx}\left(\frac{y^2+4}{4\sqrt{zx}}\right)+\log_{xy}\left(\frac{z^2+4}{4\sqrt{xy}}\right)=0$$</span></p> </blockquote> <p>It is given that <span class="math-container">$x,y,z&gt;1$</span>. Which properties of the logarithm have to be used?</p> <p>I know that <span class="math-container">$\log_a b=\log b/\log a\to \log_{yz}((x^2+4)/(4\sqrt{yz}))=\log ((x^2+4)/(4\sqrt{yz}))/\log (yz)$</span></p> <p>and <span class="math-container">$\log_a (b/c)=\log_a b-\log_a c\to \log_{yz}((x^2+4)/(4\sqrt{yz}))=\log_{yz} (x^2+4)-\log_{yz} (4\sqrt{yz})$</span></p> <p>And how to solve this type of equation in general?</p>
DeepSea
101,504
<p>Hint: first use <span class="math-container">$a^2 + 4 \ge 4a$</span> for each term on the left. Then split the logarithms and show that the left-hand side is <span class="math-container">$\ge 0$</span>. Equality is at <span class="math-container">$ x = y = z = 2$</span>.</p>
1,814,216
<p>I was trying to show that $\sin(x)$ is non-zero for integers $x$ other than zero, and I thought that this result might emerge as a corollary if I managed to show that the result in question is true. </p> <p>I think it's possible to demonstrate this by looking at the power series expansion of $\sin(x)$ and assuming that we don't know anything about the existence of $\pi$. </p> <p>All of the answers below insist that the proposition '$\exists p,q \in \mathbb{Z}, \sin(p)=\sin(q)$' (where $\sin(x)$ is the power series representation) is undecidable without using the properties of $\pi$. If so, this is a truly wonderful conjecture and I would like to be provided with a proof. Until then, I insist that methods for analyzing infinite series from analysis should suffice to show that the proposition is false. </p> <hr> <p><strong>Note:</strong> In a previous version of this post the question said "bijective" instead of "injective". Some of the answers below have answered the first version of this post.</p>
Karthik Vasu
344,346
<p>$ \mathbb{N} $ is countable and $ \mathbb{R} $ is uncountable, so there can never be a bijection.</p>
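<p>On the injectivity question itself, the claim that $\sin$ restricted to the integers takes pairwise distinct values can at least be spot-checked numerically. Floating-point evidence is of course not a proof, and this hedged Python sketch says nothing about the decidability discussion:</p>

```python
import math

# Sample sin at every integer in [-50, 50] and check for repeats.
values = [math.sin(n) for n in range(-50, 51)]
print(len(set(values)) == len(values))  # -> True: all 101 values differ

# The smallest gap between sampled values stays well above float noise.
gap = min(abs(a - b) for i, a in enumerate(values) for b in values[i + 1:])
print(gap > 1e-6)  # -> True
```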