Dataset columns (with min/max statistics):
qid: int64 (1 to 4.65M)
question: large_string (lengths 27 to 36.3k)
author: large_string (lengths 3 to 36)
author_id: int64 (-1 to 1.16M)
answer: large_string (lengths 18 to 63k)
878,914
<p>I'm having a problem solving this limit.</p> <p>$$\lim_{x \to \pi/4} \frac{\tan x-1}{\sin x-\cos x}$$</p> <p>$\lim_{x \to \pi/4} \frac{\tan x-1}{\sin x-\cos x}$ = $\lim_{x \to \pi/4} \frac{\frac{\sin x}{\cos x}-1}{\sin x-\cos x}$= $\lim_{x \to \pi/4} \frac{\frac{\sin x-\cos x}{\cos x}}{\sin x-\cos x}$= $\lim_{x \to \pi/4} \frac{\frac{\frac{\sin x-\cos x}{\cos x}}{\sin x-\cos x}}{1}$ =</p> <p>numerator is: (upper*lower) = $1\cdot(\sin x-\cos x)$</p> <p>denominator is: (inner-up*inner-low) = $\cos x\cdot(\sin x-\cos x)$. </p> <p>Which is:</p> <p>$$\lim_{x \to \pi/4} \frac{\sin x-\cos x}{(\cos x)(\sin x-\cos x)}$$</p> <p>I don't know what to do next. Any ideas?</p>
Jam
161,490
<p>Rearrange the fraction into something simpler. Express $\tan(x)$ in terms of $\sin(x)$ and $\cos(x)$. $$\frac{\tan(x)-1}{\sin(x)-\cos(x)}=\frac{\frac{\sin(x)}{\cos(x)}-1}{\sin(x)-\cos(x)}$$ You've done this and the next step in your own work. $$=\frac{\frac{\sin(x)}{\cos(x)}-\frac{\cos(x)}{\cos(x)}}{\sin(x)-\cos(x)}$$ $$=\frac{\frac{\sin(x)-\cos(x)}{\cos(x)}}{\sin(x)-\cos(x)}$$ Factor out $(\sin(x)-\cos(x))$ from the numerator and denominator. $$=\frac{\sin(x)-\cos(x)}{\sin(x)-\cos(x)}\cdot\frac{\frac{1}{\cos(x)}}{1}$$ $$=\frac{\frac{1}{\cos(x)}}{1}=\frac{1}{\cos(x)}$$ $$\therefore\:\frac{\tan(x)-1}{\sin(x)-\cos(x)}=\frac{1}{\cos(x)}$$ Apply the limit to both sides. $$\lim_{x \to \pi/4} \frac{\tan x-1}{\sin x-\cos x}=\lim_{x \to \pi/4} \frac{1}{\cos(x)}$$ The limit is then the value of $\frac{1}{\cos(x)}$ at $x=\frac{\pi}{4}$. In this particular case, you can substitute $x=\frac{\pi}{4}$ (but you can't always do this). $$\frac{1}{\cos(\frac{\pi}{4})}=\frac{1}{\left(\frac{1}{\sqrt{2}}\right)}$$ $$\therefore\:\sqrt{2}$$</p>
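As a numerical sanity check of the algebra above (an illustration, not part of the original answer), one can evaluate the expression near $x=\pi/4$ and watch it approach $\sqrt{2}\approx 1.41421$:

```python
import math

def f(x):
    # The original expression: (tan x - 1) / (sin x - cos x)
    return (math.tan(x) - 1) / (math.sin(x) - math.cos(x))

# Approaching pi/4 from both sides, the values settle near sqrt(2)
for h in (1e-2, 1e-4, 1e-6):
    print(f(math.pi/4 + h), f(math.pi/4 - h))
```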
1,259,961
<p>I was looking at $$\int_0^1\left(\frac{1}{\sqrt{s}}\right)^2\ ds.$$</p> <p>So in calculus, I would evaluate $\ln(1) - \ln(0)$ as the answer. What I don't get, and don't remember why, is that the answer is $\infty$.</p> <p>I know $\ln(1)=0$ and $\ln(0)=-\infty$.</p> <p>Wouldn't the answer be $-\infty$? Any explanation would be helpful. I'm forgetting some of my basic calculus.</p>
Jamie Lannister
234,745
<p>$\ln(1) = 0,\ \ln(0) \rightarrow -\infty$</p> <p>$\ln(1) - \ln(0) \rightarrow 0 - (-\infty) = \infty$</p>
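To see the divergence concretely, one can truncate the integral at a small lower limit $\varepsilon$ and watch $\int_\varepsilon^1 \frac{ds}{s} = -\ln\varepsilon$ grow without bound; a short Python illustration:

```python
import math

# int_eps^1 (1/s) ds = ln(1) - ln(eps) = -ln(eps),
# which grows without bound as eps -> 0+, so the improper integral is +infinity.
def truncated_integral(eps):
    return -math.log(eps)

for eps in (1e-2, 1e-4, 1e-8):
    print(eps, truncated_integral(eps))
```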
112,263
<p>When Mathematica outputs nested lists, it just gloms one record right after the next record and it wraps around in the space available, for example:</p> <pre><code>In[10]:= FactorInteger[FromDigits["1000000000000101",Range[2,10]]] Out[10]= {{{13,1},{2521,1}},{{11,1},{251,1},{5197,1}},{{3,2},{229,1},{520981,1}},{{4751,1},{6423401,1}},{{173,1},{281,1},{2677,1},{3613,1}},{{3,1},{61,2},{425294411,1}},{{53,1},{157,1},{1697,1},{2491681,1}},{{109,1},{75403,1},{25050853,1}},{{3,1},{47,1},{157,1},{1021,1},{44244113,1}}} </code></pre> <p>What would be more convenient would be to be able to see each record on its own line. So, for example, in the example above there are 9 records, one for each of the input values 2-10. I can kind of see the separation between them by looking for the double "}}", but it is stressful. Is there an easy way to have a line break after each record, so it is more obvious where one record ends and the next begins?</p>
Tyler Durden
10,067
<p>Based on the comments I developed the following answer. Not completely simple, but not too onerous to do:</p> <pre><code>In[38]:= TableForm[Part[FactorInteger[FromDigits["1000000000000101",Range[2,10]]],All,All,1]] Out[38]//TableForm= 13 2521 11 251 5197 3 229 520981 4751 6423401 173 281 2677 3613 3 61 425294411 53 157 1697 2491681 109 75403 25050853 3 47 157 1021 44244113 </code></pre> <p>So, for each of the 9 cases there is a row, and it is clear which values belong in which row.</p>
991,377
<p>I have to evaluate this integral. $\displaystyle\iint\limits_{D}(2+x^2y^3 - y^2\sin x)\,dA$ $$D=\left \{ (x, y):\left | x \right |+\left | y \right | \leq 1\right \}$$</p> <p>At first, I evaluated simply by putting $-1\leq x\leq 1, -1\leq y\leq 1$, thus making $$ \int_{-1}^{1}\int_{-1}^{1}(2+x^2y^3 - y^2\sin x)\,dx\,dy, $$ but the answer I got was $8$, not the $4$ that my answer sheet requires. </p>
Blue
409
<p>Consider $A$'s relationship with various circles.</p> <p>Since $B^\prime$ and $C^\prime$ are points of tangency of lines through $A$ with the incircle, we have $$|AB^\prime| = |AC^\prime| \qquad (\star)$$ Further, by the <a href="http://en.wikipedia.org/wiki/Power_of_a_point#Theorems" rel="nofollow">"Power of a Point" Theorem</a>, $$|AA_B||AC^\prime| = \operatorname{pow}(A\;,\;\bigcirc GB^\prime C^\prime) = |AA_C||AB^\prime|$$ Also, $$|AB_A||AC^\prime| = \operatorname{pow}(A\;,\;\bigcirc A^\prime G C^\prime) = |AG||AA^\prime| = \operatorname{pow}(A\;,\;\bigcirc A^\prime B^\prime G) = |AC_A||AB^\prime|$$ Together with $(\star)$, these imply $$|AA_B| = |AA_C| \qquad\text{and}\qquad |AB_A| = |AC_A|$$</p> <p>This makes $\square A_B A_C B_A C_A$ an isosceles trapezoid, whose bases, $A_BA_C$ and $B_AC_A$, have the bisector of $\angle A$ as their common perpendicular bisector. The incenter, $I$, of $\triangle ABC$ lies on that bisector, so we can write $$|IA_B| = |IA_C| \qquad\text{and}\qquad |IB_A| = |IC_A|$$ Identical arguments considering points $B$ and $C$ lead to corresponding relations $$|IB_C| = |IB_A| \qquad |IC_B| = |IA_B| \qquad\text{and}\qquad|IC_A|=|IC_B|\qquad|IA_C|=|IB_C|$$ Necessarily, $I$ is equidistant from all six points $A_B$, $A_C$, $B_C$, $B_A$, $C_A$, $C_B$. $\square$</p>
2,526,865
<p>I came across this question while preparing for an interview.</p> <p>You draw cards from a $52$-card deck until you get the first Ace. After each card drawn, you discard three cards from the deck. What's the expected sum of cards until you get the first Ace? </p> <p>Note</p> <ol> <li><p>J, Q, K have point values 11, 12 and 13, and Ace has point value 1</p></li> <li><p>discarded cards don't count towards the sum, and if we don't get an Ace we shuffle the deck and continue</p></li> <li><p>when you shuffle, you shuffle all cards but you keep the sum, and when you draw a new card you add it to that sum (you don't start from zero after each shuffle) </p></li> </ol> <p>My thoughts so far: the expected sum is definitely between $73$ and $91$.</p> <p>$73$ is the expected sum if we don't discard any cards, so the problem simply becomes the expected sum until the first Ace, that is, $(2+\dots+13) \cdot 4 \cdot \frac{1}{5}+1$.</p> <p>$91$ is the expected sum if we discard all $51$ remaining cards (shuffle the deck after each draw). In this case the number of draws needed to see the first Ace follows a Geometric distribution, so the answer is $(\frac{52}{4}-1) \cdot 7.5+1$.</p> <p>Any help is appreciated! </p>
Fimpellizzeri
173,410
<p>At each step, four cards are removed from the deck, so a deck is exhausted in $13$ steps. Let's call that a 'round'.</p> <p>The game you propose can be restated as follows. At each round, we draw $13$ ordered cards from the deck, and add up the values of the cards that came before the first ace in our $13$ drawn cards. If no ace is drawn, we add up the values of all $13$ drawn cards, shuffle them into the remaining $39$ cards (the ones which were not drawn), and repeat the process.</p> <hr> <p>There are $\binom{48}{13}$ sets of $13$ cards which do not contain an Ace, and so there are $\binom{52}{13}-\binom{48}{13}$ sets of $13$ cards which contain at least one Ace. Hence, the probability that the game ends in a given round is</p> <p>$$p=1-\frac{\binom{48}{13}}{\binom{52}{13}}=\frac{14498}{20825}\simeq69.62\%$$</p> <hr> <p>Now, if the game has not ended in a given round, then the expected sum of the cards drawn is $13$ times the expected value of a card drawn (by linearity of the expectation). Since no card drawn was an ace, the expected value of a card drawn is</p> <p>$$\frac{2+3+4+5+6+7+8+9+10+11+12+13}{12}=\frac{15}2,$$</p> <p>and hence the expected sum is $l=13\cdot\frac{15}2=\frac{195}2=97.5$.</p> <hr> <p>Now, we need to calculate the expected sum of the cards drawn before the first Ace in a round where the game ends. Here, we will break the thing into cases.</p> <p>$\qquad$<strong>Number of Aces in cards drawn: $1$</strong></p> <p>There are $\binom{48}{12}\cdot\binom{4}{1}$ sets of $13$ cards which contain exactly one Ace. Therefore, given that the $13$ cards drawn contain at least one ace, the probability that we fall in this case (exactly one Ace drawn) is</p> <p>$$a_1=\frac{\binom{48}{12}\cdot\binom{4}{1}}{\binom{52}{13}-\binom{48}{13}}=\frac{9139}{14498}\simeq 63.04\%$$</p> <p>The expected position of the lone ace is $\frac{1+2+3+4+5+6+7+8+9+10+11+12+13}{13}=7$, so on average $6$ non-Ace cards will be drawn before it. 
The expected sum for this case is hence $s_1=6\cdot\frac{15}2+1=46$.</p> <p>$\qquad$<strong>Number of Aces in cards drawn: $2$</strong></p> <p>There are $\binom{48}{11}\cdot\binom{4}{2}$ sets of $13$ cards which contain exactly two Aces. Like before, the probability that we fall in this case is</p> <p>$$a_2=\frac{\binom{48}{11}\cdot\binom{4}{2}}{\binom{52}{13}-\binom{48}{13}}=\frac{2223}{7249}\simeq 30.67\%$$</p> <p>Now, things get trickier. The positions of the pair of aces form a subset of size $2$ of $S=\{1,2,\dots,13\}$, and we are interested in the minimum of this subset. Let $X$ denote this random variable.</p> <p>There are $\binom{13}2$ $2$-subsets of $S$, and only $12$ of them contain the number $1$, which is a guaranteed minimum. Therefore</p> <p>$$\mathbb{P}(X=1)=\frac{12}{\binom{13}2}$$</p> <p>Similarly, there are $11$ $2$-subsets of $S$ whose minimum is $2$.<br> More generally, for each $k\in\{1,2,\dots,12\}$, $\binom{13-k}1$ of the $2$-subsets of $S$ have minimum $k$, and we find that</p> <p>$$\mathbb{P}(X=k)=\frac{\binom{13-k}1}{\binom{13}2}.$$</p> <p>As a sanity check, notice that they add up to $1$. The expected sum for this case is hence:</p> <p>$$s_2=\sum_{k=1}^{12}\left(\frac{15}2\cdot (k-1)+1\right) \cdot \mathbb{P}(X=k)=\frac{57}2$$</p> <p>$\qquad$<strong>Number of Aces in cards drawn: $3$</strong></p> <p>Now we've got most of the work done. We have that</p> <p>$$a_3=\frac{\binom{48}{10}\cdot\binom{4}{3}}{\binom{52}{13}-\binom{48}{13}}=\frac{39}{659}\simeq 5.92\%$$</p> <p>For this case, let $Y$ be the random variable which denotes the minimum of a uniformly sampled $3$-subset of $S$. Notice that there are $\binom{13}{3}$ $3$-subsets of $S$.</p> <p>We will have that for each $k\in\{1,2,\dots,11\}$, $\binom{13-k}2$ of the $3$-subsets of $S$ have minimum $k$. 
Hence:</p> <p>$$\mathbb{P}(Y=k)=\frac{\binom{13-k}2}{\binom{13}3}$$</p> <p>Finally, the expected sum for this case is</p> <p>$$s_3=\sum_{k=1}^{11}\left(\frac{15}2\cdot (k-1)+1\right) \cdot \mathbb{P}(Y=k)=\frac{79}4$$</p> <p>$\qquad$<strong>Number of Aces in cards drawn: $4$</strong></p> <p>For this final case we have</p> <p>$$a_4=\frac{\binom{48}{9}\cdot\binom{4}{4}}{\binom{52}{13}-\binom{48}{13}}=\frac{5}{1318}\simeq 0.38\%$$</p> <p>and expected sum</p> <p>$$s_4=\sum_{k=1}^{10}\left(\frac{15}2\cdot (k-1)+1\right) \cdot \frac{\binom{13-k}3}{\binom{13}4}=\frac{29}2$$</p> <hr> <p>Let's put it all together. Supposing the game ends on a given round, the expected sum for that round will be $($and you can check that the $a_i$ add up to $1)$</p> <p>$$w=\sum_{i=1}^4a_is_i=\frac{282424}{7249}\simeq38.96$$</p> <p>Finally, the expected sum for the game will be given by</p> <p>$$\sum_{n=1}^\infty\, \underbrace{\mathbb{P}(\text{Game ends on round $n$})}_{(1-p)^{n-1}\cdot p} \cdot \underbrace{\mathbb{E}(\text{Value of sum of cards of a game ending on round $n$})}_{(n-1)\,l+w}\\ =\sum_{n=1}^\infty\,(1-p)^{n-1}\cdot p \cdot \Big((n-1)\,l+w\Big)=(w-l)+\frac{l}p $$</p> <p>This last step is standard manipulation of series and term-by-term differentiation, but I can further explain if it's not clear. Therefore, the final answer is</p> <blockquote> <p>$$\frac{2363461}{28996} \simeq 81.51$$</p> </blockquote>
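The arithmetic in this answer can be checked end to end with exact rational arithmetic; a Python sketch that recomputes $p$, $l$, the $a_i$, the $s_i$, and the final answer:

```python
from fractions import Fraction as F
from math import comb

total = comb(52, 13) - comb(48, 13)   # 13-card hands containing at least one Ace
p = F(total, comb(52, 13))            # probability that a given round ends the game
l = 13 * F(15, 2)                     # expected sum of an ace-free round (13 cards, mean 15/2)

w = F(0)                              # expected sum of the final (ending) round
for aces in range(1, 5):
    # P(exactly `aces` aces | at least one ace)
    a = F(comb(48, 13 - aces) * comb(4, aces), total)
    s = F(0)
    for k in range(1, 13 - aces + 2):                     # first ace at position k
        prob = F(comb(13 - k, aces - 1), comb(13, aces))  # P(min of the ace positions = k)
        s += (F(15, 2) * (k - 1) + 1) * prob
    w += a * s

answer = (w - l) + l / p
print(p, w, answer)
```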
3,848,517
<p>I have a conjecture in my mind regarding Arithmetic Progressions, but I can't seem to prove it. I am quite sure that the conjecture is true though.</p> <p>The conjecture is this: suppose you have an AP (arithmetic progression): <span class="math-container">$$a[n] = a[1] + (n-1)d$$</span> Now, suppose our AP satisfies the property that the sum of the first <span class="math-container">$n$</span> terms of our AP is equal to the sum of the first <span class="math-container">$m$</span> terms: <span class="math-container">$$S[n] = S[m]$$</span> but <span class="math-container">$n \neq m$</span>. I want to prove two theorems:</p> <ul> <li>The underlying AP <span class="math-container">$a[n]$</span> must be <strong>symmetric</strong> with respect to the point at which it becomes zero.</li> <li><span class="math-container">$S[n + m] = 0$</span></li> </ul> <h2>A Numerical Example</h2> <p>Consider the AP: <span class="math-container">$$a[n] = 4 - n = (3, 2, 1, 0, -1, -2, -3)$$</span> This is an AP with common difference <span class="math-container">$d = -1$</span> and first term <span class="math-container">$a[1] = 3$</span>: <a href="https://i.stack.imgur.com/NKmZF.png" rel="nofollow noreferrer">Here is the MATLAB plot of this AP</a>. As you can see in the plot, our AP is <strong>symmetric</strong> with respect to the point <span class="math-container">$n = 4$</span>: <span class="math-container">$$a[4-1] = -a[4+1] = 1$$</span> <span class="math-container">$$a[4-2] = -a[4+2] = 2$$</span> <span class="math-container">$$a[4-3] = -a[4+3] = 3$$</span></p> <p>Now, here is the sum of our AP: <span class="math-container">$$S[n] = (3,5,6,6,5,3,0)$$</span> <a href="https://i.stack.imgur.com/Ep0aB.png" rel="nofollow noreferrer">Here is the MATLAB plot of the summation</a>. 
You can clearly see that: <span class="math-container">$$S[1] = S[6] = 3$$</span> <span class="math-container">$$S[2] = S[5] = 5$$</span> <span class="math-container">$$S[3] = S[4] = 6$$</span></p> <p>and you can also see that: <span class="math-container">$$S[1 + 6] = S[7] = 0$$</span> <span class="math-container">$$S[2 + 5] = S[7] = 0$$</span> <span class="math-container">$$S[3 + 4] = S[7] = 0$$</span></p> <p>Can you please help me out with this problem? Any guidance will be very welcome. I am actually an Engineering student, so my Pure Math skills are not that strong.</p> <p>Thank you!</p>
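The numerical example above can be verified in a few lines of Python, which also suggests the shape of the general proof (pair up the indices with $S[n]=S[m]$ and check $S[n+m]$):

```python
# The example AP a[n] = 4 - n for n = 1..7 and its partial sums S[n].
a = [4 - n for n in range(1, 8)]    # [3, 2, 1, 0, -1, -2, -3]

S, total = [], 0
for term in a:
    total += term
    S.append(total)                 # [3, 5, 6, 6, 5, 3, 0]

# Find all pairs n < m with S[n] = S[m] and check that S[n + m] = 0.
pairs = [(n, m) for n in range(1, 8) for m in range(n + 1, 8) if S[n - 1] == S[m - 1]]
print(pairs, [S[n + m - 1] for n, m in pairs])
```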
Especially Lime
341,019
<p>No, for example consider the functions <span class="math-container">$f_n:\mathbb R\to \mathbb R$</span> where <span class="math-container">$f_n=0$</span> outside the region <span class="math-container">$[\sum_{m&lt;n}a_m,\sum_{m\leq n}a_m]$</span> and on that region rises linearly from <span class="math-container">$0$</span> at each end to <span class="math-container">$a_n$</span> at the midpoint, where <span class="math-container">$a_n$</span> is any sequence of positive reals. These functions tend pointwise to <span class="math-container">$0$</span> and are pretty nice (they are continuous uniformly over the whole sequence) but the sequence of suprema is arbitrary.</p>
437,053
<p>I'm struggling with this nonhomogeneous second order differential equation</p> <p><span class="math-container">$$y'' - 2y = 2\tan^3x$$</span></p> <p>I assumed that the form of the solution would be <span class="math-container">$A\tan^3x$</span> where A was some constant, but this results in a mess when solving. The back of the book reports that the solution is simply <span class="math-container">$y(x) = \tan x$</span>.</p> <p>Can someone explain why they chose the form <span class="math-container">$A\tan x$</span> instead of <span class="math-container">$A\tan^3x$</span>?</p> <p>Thanks in advance.</p>
Pedro
23,350
<p>Have you used that $$|a^3|=\frac{|a|}{\gcd(|a|,3)}= |a|\text{ ? }$$</p> <p>Suppose now that $xa^3=a^3x$. Note that $(a^3)^2=a$. Then $$xa^3a^3=a^3xa^3$$</p> <p>Can you finish?</p> <p><strong>SPOILER</strong> </p> <blockquote class="spoiler"> <p>$$xa=xa^6=xa^3a^3=a^3xa^3=a^3a^3x=a^6x=ax$$</p> </blockquote> <p>As per your curiosity: one can prove that $$|a^k|=\frac{|a|}{(|a|,k)} $$</p> <p>Note we have $|a|=|a^k| \iff \langle a\rangle=\langle a^k\rangle$, and from the above, $\iff (|a|,k)=1$, and as anon succinctly commented: "$x$ commutes with $a$ if and only if $x$ commutes with every power of $a$ if and only if $x$ centralizes $\langle a\rangle$. Hence $C(a)=C(b)$ whenever $a$ and $b$ generate the same cyclic subgroup." </p>
2,409,312
<p>In <a href="https://math.stackexchange.com/a/1999967/272831">this previous answer</a>, MV showed that for $n\in\Bbb N$,</p> <p>$$\int\frac1{1+x^n}~dx=C-\frac1n\sum_{k=1}^n\left(\frac12 x_{kr}\log(x^2-2x_{kr}x+1)-x_{ki}\arctan\left(\frac{x-x_{kr}}{x_{ki}}\right)\right)$$</p> <p>where</p> <p>$$x_{kr}=\cos \left(\frac{(2k-1)\pi}{n}\right)$$</p> <p>$$x_{ki}=\sin \left(\frac{(2k-1)\pi}{n}\right)$$</p> <p>I am now interested in the case of $n=\frac ab\in\Bbb Q^+$. By substituting $x\mapsto x^b$, we get</p> <p>$$\int\frac{bx^{b-1}}{1+x^a}~dx$$</p> <p>Thus, the given integral in question is really</p> <p>$$\int\frac{x^b}{1+x^a}~dx$$</p> <p>By expanding with the geometric series and termwise integration, one can see that</p> <p>$$\int_0^p\frac{x^b}{1+x^a}~dx=\sum_{k=0}^\infty\frac{(-1)^kp^{ak+b+1}}{ak+b+1}=\frac{p^{b+1}}a\Phi\left(-p^a,1,\frac{b+1}a\right)$$</p> <p>where $\Phi$ is the <a href="http://mathworld.wolfram.com/LerchTranscendent.html" rel="noreferrer">Lerch transcendent</a>.</p> <p>A few particular cases that arise may be found:</p> <p>\begin{align}\int\frac1{1+x^{1/n}}~dx&amp;=C+(-1)^{n+1}n\left[\ln(1+x^{1/n})+\sum_{k=1}^{n-1}\frac{(-x^{1/n})^k}k\right],&amp;a=1\\\int\frac1{1+x^{2/n}}~dx&amp;=C+(-1)^nn\left[\arctan(x^{1/n})+\frac1{x^{1/n}}\sum_{k=1}^{(n-1)/2}\frac{(-x^{2/n})^k}{2k-1}\right],&amp;a=2,n\ne2b\end{align}</p> <p>Or, more generally, with $x=t^{an+1}$,</p> <p>$$\int\frac1{1+x^{a/(an+1)}}~dx=(-1)^{n+a}(an+1)\left[\int\frac1{1+t^a}~dt+\frac1{x^{(a-1)/(an+1)}}\sum_{k=1}^{(n-1)/a}\frac{(-x^{a/(an+1)})^k}{a(k-1)+1}\right]$$</p> <p>which reduces down to the previously solved problem.</p> <blockquote> <p>But what of the cases when $n=a/b$ with $(b\bmod a)\ne0,1$?</p> </blockquote> <p>For example,</p> <p>$$\int\frac1{1+x^{3/2}}~dx=C+\frac16\left[\log(1-x^{1/2}+x)-2\log(1+x^{1/2})+2\sqrt3\arctan\left(\frac{2x^{1/2}-1}{\sqrt3}\right)\right]$$</p>
H. H. Rugh
355,946
<p>Let $\gamma=\exp(2\pi i/a)$. The polynomial $Q(x)=x^a+1$ has $a$ simple roots $z_k=\gamma^{k+1/2}$, $0\leq k&lt;a$. Since $z_k^a=-1$, we have $Q'(z_k)=a z_k^{a-1}=-a z_k^{-1}$, so $$ \frac{1}{Q(x)} = \sum_{k=0}^{a-1} \frac{1}{Q'(z_k)} \frac{1}{x-z_k} = -\frac{1}{a} \sum_{k=0}^{a-1} \frac{z_k}{x-z_k} $$</p> <p>For the numerator we first make a reduction in the degree (when $b\geq a$).</p> <p>Let $p= b \mod a \in \{0,1,...,a-1\}$ and $m=(b-p)/a$. Then $$ \frac{x^b - (-1)^m x^p}{x^a + 1} = \sum_{j=1}^m (-1)^{j-1} x^{b-ja} $$ We deduce that $$ \frac{x^b}{x^a+1} - \sum_{j=1}^m (-1)^{j-1} x^{b-ja} = \frac{(-1)^mx^p}{x^a+1} = - \frac{(-1)^m}{a} \sum_{k=0}^{a-1} z_k \frac{x^p}{x-z_k}= - \frac{(-1)^m}{a} \sum_{k=0}^{a-1} \frac{z_k^{p+1}}{x-z_k} $$ The last equality follows from the fact that the difference is a polynomial which must vanish since $p&lt;a$. So apart from the trivial part on the LHS (which I leave aside), the problem is reduced to integrating the RHS. We have</p> <p>$$ - \int \frac{(-1)^m}{a} \sum_{k=0}^{a-1} \frac{z_k^{p+1}}{x-z_k}dx = - \frac{(-1)^m}{a} \sum_{k=0}^{a-1} z_k^{p+1} \ln (x-z_k) $$ To avoid complex log and too long formulas, let us write<br> $$ u_{k,p} = \cos \left( \frac{2\pi (k+1/2)(p+1)}{a}\right) , \; \; v_{k,p} = \sin \left( \frac{2\pi (k+1/2)(p+1)}{a}\right) , \; \; $$ Using $\overline{z_{a-1-k}} = z_{k} = u_{k,0}+i v_{k,0}$ we obtain for $a$ even: $$ - \frac{(-1)^m}{2a} \sum_{k=0}^{\lfloor a/2 \rfloor} \left[ u_{k,p} \ln \left(x^2- 2 u_{k,0} x+1\right) + v_{k,p} \arctan \frac{x-u_{k,0}}{v_{k,0}} \right] $$ For $a$ odd you should add to this expression the "middle term" (which has no arctan part) $$ \frac{(-1)^{m+p}}{a} \ln(x+1) $$ No guarantee for the above being free of errors ...</p>
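The root decomposition in the first display can be spot-checked numerically for a concrete exponent (here $a=5$, chosen arbitrarily):

```python
import cmath

a = 5
# Roots z_k = gamma^(k + 1/2) with gamma = exp(2*pi*i/a), so z_k^a = -1
roots = [cmath.exp(2j * cmath.pi * (k + 0.5) / a) for k in range(a)]

def lhs(x):
    return 1 / (x**a + 1)

def rhs(x):
    # 1/Q(x) = -(1/a) * sum_k z_k / (x - z_k)
    return -sum(z / (x - z) for z in roots) / a

for x in (0.7, 1.3, 2.5):
    print(x, lhs(x), rhs(x))  # the two columns agree up to rounding
```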
3,275,966
<p>In a lottery drawing, six balls with numbers from <span class="math-container">$1$</span> to <span class="math-container">$36$</span> are drawn. A player buys a ticket and writes on it the numbers of six balls which, in his opinion, will be drawn. The player wants to buy several lottery tickets so as to be guaranteed to match at least two numbers on at least one ticket. Is it enough to buy <span class="math-container">$12$</span> lottery tickets?</p> <p><strong>My work</strong>. The maximum number of number pairs covered by <span class="math-container">$12$</span> tickets is <span class="math-container">$12 \binom{6}{2}=12 \cdot 15$</span>. The six drawn numbers contain <span class="math-container">$ \binom{6}{2}=15$</span> number pairs. The total number of number pairs is <span class="math-container">$\binom{36}{2}=18 \cdot 35$</span>. I have no idea how to solve the problem.</p>
Ed Pegg
11,955
<p>Best known is <a href="https://ljcr.dmgordon.org/cover/show_cover.php?v=36&amp;k=6&amp;t=2" rel="nofollow noreferrer">47 tickets</a>.<br> The lower bound would be 42 tickets, but no-one has found a solution with fewer than 47 tickets. </p> <p>A table of similar results is at <a href="https://ljcr.dmgordon.org/cover/table.html" rel="nofollow noreferrer">La Jolla Covering Repository</a>.</p>
591,207
<p>Show a proof that $Z(G),$ the center of $G,$ is a normal subgroup of $G$ and every subgroup of the center is normal. I know that $Z(G)$ is a normal subgroup and it is abelian, but how would I show that every subgroup of $Z(G)$ is also normal?</p>
amWhy
9,003
<p>Recall that the center of a group $Z(G)$ consists of all elements that commute with every element in $G$. That's pretty much all you need, here:</p> <p>If $H&lt;Z(G),$ and $h \in H,$ then $h \in Z(G),$ and so for every $g \in G,\;$ $g^{-1}hg =g^{-1}gh = eh = h$ ($h \in Z(G)$, so $h$ commutes with $g$.)</p> <p>Since this is true of all $h \in H$, we have that $g^{-1}Hg = H$, and hence, $H$ is normal.</p>
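For a concrete illustration (a hypothetical small example, not part of the proof), one can realize the dihedral group $D_4$ as permutations of the square's vertices and verify that conjugation fixes every element of its center:

```python
# D4 as the 8 symmetries of a square acting on vertices {0, 1, 2, 3}.
def compose(p, q):                 # (p o q)(i) = p(q(i)): apply q, then p
    return tuple(p[q[i]] for i in range(4))

e = (0, 1, 2, 3)
r = (1, 2, 3, 0)                   # rotation by 90 degrees
s = (0, 3, 2, 1)                   # reflection through the 0-2 diagonal

# Generate the group by closing {r, s} under composition
G = {e}
while True:
    new = {compose(a, b) for a in G | {r, s} for b in G | {r, s}} | G
    if new == G:
        break
    G = new

def inverse(p):
    inv = [0] * 4
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

center = {z for z in G if all(compose(z, g) == compose(g, z) for g in G)}
# Every element of the center is fixed by conjugation: g^-1 z g = z
ok = all(compose(inverse(g), compose(z, g)) == z for z in center for g in G)
print(len(G), len(center), ok)  # 8 2 True
```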
100,459
<p>Claim: Take any function $f(t) &gt; 0$ for $t &gt; 0$, such that $f(t) \to \infty$ as $t \to \infty$, then for $\sigma &gt; 0$ $$|\zeta(\sigma + it)| = o(f(t))$$</p> <blockquote> <p>Is there any already existing evidence, like papers or proofs or something that can debunk this?</p> </blockquote> <p>As far as I know, under the Lindelof hypothesis $$|\zeta(\frac{1}{2} + it)| = o(t^\epsilon)$$ and Littlewood has already proved that under the Riemann hypothesis $$|\zeta(\frac{1}{2} + it)| = o\left(\exp\left(\frac{10\log t}{\log \log t}\right)\right)$$ both of which agree with the argument.</p> <p>Also I know from this paper at <a href="http://arxiv.org/pdf/math/0612106v2.pdf" rel="nofollow">http://arxiv.org/pdf/math/0612106v2.pdf</a> that $$\int_0^T |\zeta(1/2 + it)|^{2k}dt \gg_k T (\log T)^{k^2}$$ which kind of gives me a hint that there must be an obvious lower bound which can probably show that the condition given above for $\zeta(\sigma + it)$ and $f(t)$ is invalid.</p> <p>Looking for references.</p>
Community
-1
<p>First, your condition seems a bit strange to me, since it seems to me it would imply that the absolute value of $\zeta(1/2 + it)$ is bounded, which would contradict the lower bound on the moments you recall. </p> <p>Yet, second, here is one (unconditional) result on $|\zeta(1/2 + it)|$ that gives some information you seem to seek (Jutila 1983, Bull LMS):</p> <p>There exist positive constants $a,b,c$ such that for each $T\ge 10$ one has</p> <p>$$ \exp( a(\log \log T)^{1/2}) \le |\zeta(1/2 + it)| \le \exp ( b(\log \log T)^{1/2})$$<br> for $t$ in a subset of $[0,T]$ of measure at least $cT$.</p>
100,459
<p>Claim: Take any function $f(t) &gt; 0$ for $t &gt; 0$, such that $f(t) \to \infty$ as $t \to \infty$, then for $\sigma &gt; 0$ $$|\zeta(\sigma + it)| = o(f(t))$$</p> <blockquote> <p>Is there any already existing evidence, like papers or proofs or something that can debunk this?</p> </blockquote> <p>As far as I know, under the Lindelof hypothesis $$|\zeta(\frac{1}{2} + it)| = o(t^\epsilon)$$ and Littlewood has already proved that under the Riemann hypothesis $$|\zeta(\frac{1}{2} + it)| = o\left(\exp\left(\frac{10\log t}{\log \log t}\right)\right)$$ both of which agree with the argument.</p> <p>Also I know from this paper at <a href="http://arxiv.org/pdf/math/0612106v2.pdf" rel="nofollow">http://arxiv.org/pdf/math/0612106v2.pdf</a> that $$\int_0^T |\zeta(1/2 + it)|^{2k}dt \gg_k T (\log T)^{k^2}$$ which kind of gives me a hint that there must be an obvious lower bound which can probably show that the condition given above for $\zeta(\sigma + it)$ and $f(t)$ is invalid.</p> <p>Looking for references.</p>
juan
7,402
<p>Read Theorem 8.12 in Titchmarsh:</p> <p>For $\frac12 \le \sigma &lt;1$ take $0&lt;\alpha &lt;1-\sigma$. Then the inequality $|\zeta(\sigma+it)| &gt; \exp(\log^\alpha t)$ is satisfied for indefinitely large values of $t$.</p> <p>Therefore $f(t)=\exp(\log^\alpha t)$ does not satisfy your assertion for this $\sigma$ .</p>
825,531
<p>How many positive integers $n$ are there, such that both $2n$ and $3n$ are perfect squares? I tried to use modular arithmetic, but I'm stuck.</p>
Karolis Juodelė
30,701
<p>If you factor a perfect square, you'll see that every prime has an even power. If $2n$ has an even power of $2$, then $n$ has an odd power. Now consider the power of $2$ in $3n$. Since $2$ and $3$ are (co)prime, the power of $2$ in $3n$ is the power of $2$ in $n$, thus odd. Therefore, $3n$ can't be a perfect square.</p>
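The argument can be spot-checked by brute force; no $n$ in a large range makes both $2n$ and $3n$ perfect squares, consistent with the parity argument on the exponent of $2$:

```python
import math

def is_square(k):
    r = math.isqrt(k)
    return r * r == k

# Brute-force search over a large range: the hit list stays empty.
hits = [n for n in range(1, 200_000) if is_square(2 * n) and is_square(3 * n)]
print(hits)  # []
```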
2,988,987
<blockquote> <p>For example, we have <span class="math-container">$f(x)=\frac{1}{x^2-1}$</span></p> </blockquote> <p>Would the domain be <span class="math-container">$$\mathcal D(f)=\{x\in\mathbb{R}\mid x\neq(1,-1)\}$$</span> or rather <span class="math-container">$$\mathcal D(f)=\{x\in\mathbb{R}\mid x\neq \{1,-1\}\}$$</span> or <span class="math-container">$$ D(f)=\{x\in\mathbb{R}\mid x \setminus \{1,-1\}\}$$</span> or are all notations correct?</p>
epi163sqrt
132,007
<blockquote> <p><strong>Hint:</strong> The flaws in OP's proposals have already been identified in the comment section. Note, it is often convenient to write</p> <p><span class="math-container">$$\mathcal D(f)=\mathbb{R}\setminus\{-1,1\}$$</span></p> </blockquote>
1,546,599
<p>What method should I use for this limit? $$ \lim_{n\to \infty}{\frac{n^{n-1}}{n!}} $$</p> <p>I tried ratio test but I ended with the ugly answer $$\lim_{n\to \infty}\frac{(n+1)^{n-1}}{n^{n-1}} $$ which would go to 1? Which means we cannot use ratio test. I do not know how else I could find this limit.</p>
sfp
285,220
<p>You could try to use the Stirling formula, which states that for large $n$, $$n! \sim \sqrt{2\pi n} (\frac{n}{e})^n$$</p> <p>So you would get: $$\frac{n^{n-1}}{n!} \sim n^{n-1}(\frac{e}{n})^n (2\pi n)^{-\frac{1}{2}}$$ $$ \sim \frac{1}{\sqrt{2\pi}n^{\frac{3}{2}}} e^n \rightarrow \infty$$ Thus the sequence diverges.</p>
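One can check numerically that the sequence does grow like $e^n/(\sqrt{2\pi}\,n^{3/2})$; a short Python sketch comparing the exact terms with the Stirling-based approximation:

```python
import math

def term(n):
    # Exact value of n^(n-1) / n!  (big-integer arithmetic, then float division)
    return n**(n - 1) / math.factorial(n)

def stirling_approx(n):
    # e^n / (sqrt(2*pi) * n^(3/2)), from the answer's asymptotic
    return math.exp(n) / (math.sqrt(2 * math.pi) * n**1.5)

# The terms explode, and the ratio term/approximation tends to 1.
for n in (10, 50, 100):
    print(n, term(n), term(n) / stirling_approx(n))
```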
2,253,768
<p>I am currently working on a small optimization problem in which I need to find an optimal number of servers for an objective function that incorporates the Erlang Loss formula. To this end, I have been searching for an expression for the first order difference of the Erlang Loss formula with respect to the number of servers, i.e. $B(E,m+1)-B(E,m)$, where <em>m</em> is the number of servers and $B(E,m)$ is given by:</p> <p>$B(E,m)={\dfrac {{\dfrac {E^{m}}{m!}}}{\sum _{{i=0}}^{m}{\dfrac {E^{i}}{i!}}}}$</p> <p>Unfortunately, so far I have been unable to derive or find such an expression (if it exists), and I was wondering whether someone on this forum could help me out.</p> <p>Many thanks in advance! </p>
Jesko Hüttenhain
11,653
<p>Consider the morphism $f:R[x]\to R[x]/(M,x)$ and the inclusion $g:R\to R[x]$. Then, $$ \ker(f\circ g) = g^{-1}(\ker(f))= \ker(f)\cap R=(M,x)\cap R=M $$ and as you explained, this gives you $R/M\cong R[x]/(M,x)$.</p>
608,875
<p>Solve the equation $$\sqrt{\sqrt{3}-\sqrt{\sqrt{3}+x}}=x$$</p> <p>My try: since $$\sqrt{3}-x^2=\sqrt{\sqrt{3}+x}$$ then $$(x^2-\sqrt{3})^2=x+\sqrt{3}$$</p>
DeepSea
101,504
<p>Let $y = \sqrt{\sqrt3 + x}$, then the equation becomes: $\sqrt3 - y = x^2$, and $y^2 = \sqrt3 + x$.<br> So: $$\begin{align*} y^2 - x &amp; = y + x^2 \\ \Rightarrow (y + x)(y - x - 1) &amp; = 0 \end{align*}$$ And we can go from here.</p>
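Following the $y = x + 1$ branch (the factor $y + x$ cannot vanish, since $y > 0$ and $x \ge 0$), $\sqrt3 + x = (x+1)^2$ gives $x^2 + x + 1 - \sqrt3 = 0$; a quick numerical confirmation in Python:

```python
import math

s3 = math.sqrt(3)

# Positive root of x^2 + x + (1 - sqrt(3)) = 0: x = (-1 + sqrt(4*sqrt(3) - 3)) / 2
x = (-1 + math.sqrt(4 * s3 - 3)) / 2

# Plug back into the original equation sqrt(sqrt(3) - sqrt(sqrt(3) + x)) = x
lhs = math.sqrt(s3 - math.sqrt(s3 + x))
print(x, lhs)  # both sides agree
```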
3,689,051
<p>Let <span class="math-container">$(X,d_{X})$</span> be a metric space, and let <span class="math-container">$(Y,d_{Y})$</span> be another metric space. Let <span class="math-container">$f:X\to Y$</span> be a function. Then the following two statements are logically equivalent:</p> <p>(a) <span class="math-container">$f$</span> is continuous.</p> <p>(b) Whenever <span class="math-container">$V$</span> is an open set in <span class="math-container">$Y$</span>, the set <span class="math-container">$f^{-1}(V) = \{x\in X: f(x)\in V\}$</span> is an open set in <span class="math-container">$X$</span>.</p> <p>I know this problem is pretty standard, but I am not able to prove either of the two directions.</p> <p>Since I am studying real analysis at the moment (metric spaces, in fact), could someone provide a proof or at least a hint as to how to prove it? It is not homework. Any comments or contributions are welcome.</p>
marwalix
441
<p>Assume b)</p> <p>Let <span class="math-container">$\epsilon\gt 0$</span> and <span class="math-container">$x\in X$</span>. The ball <span class="math-container">$B(f(x),\epsilon)\subset Y$</span> is an open subset. Its preimage <span class="math-container">$f^{-1}\left(B(f(x),\epsilon)\right)$</span> is open. But <span class="math-container">$x\in f^{-1}\left(B(f(x),\epsilon)\right)$</span>, so there is an open ball centred at <span class="math-container">$x$</span> included in this open subset. This means there is a <span class="math-container">$\delta\gt 0$</span> such that <span class="math-container">$B(x,\delta)\subset f^{-1}\left(B(f(x),\epsilon)\right)$</span>. We have just proved that</p> <p><span class="math-container">$$\forall x\in X\,\forall \epsilon\gt 0\,\exists \delta\gt 0\,\forall y\in X,\,d_X(x,y)\leq \delta\Rightarrow d_Y(f(x),f(y))\leq \epsilon$$</span></p> <p>For the other implication, assume a): <span class="math-container">$f$</span> continuous.</p> <p>Consider <span class="math-container">$V\subset Y$</span> an open subset. Let <span class="math-container">$x\in f^{-1}(V)$</span>; this means <span class="math-container">$f(x)\in V$</span>. Take <span class="math-container">$\epsilon \gt 0$</span> such that <span class="math-container">$B(f(x),\epsilon)\subset V$</span>. Because of the assumption there exists <span class="math-container">$\delta\gt 0$</span> such that </p> <p><span class="math-container">$$y\in B(x,\delta)\Rightarrow f(y)\in B(f(x),\epsilon)$$</span></p> <p>This means</p> <p><span class="math-container">$$B(x,\delta)\subset f^{-1}\left(B(f(x),\epsilon)\right)\subset f^{-1}(V)$$</span></p> <p>And we have just proved that <span class="math-container">$f^{-1}(V)$</span> is open.</p>
2,841,640
<p>What is a vector space? I can see two different formulations, and between them there is one difference: commutativity. </p> <blockquote> <p><strong>DEFINITION 1</strong> (See <a href="https://proofwiki.org/wiki/Definition:Vector_Space" rel="noreferrer">here</a>)</p> <p>Let $(F, +_F, \times_F)$ be a division ring. Let $(\mathcal{V}, +_\mathcal{V})$ be an abelian group. Let $(\mathcal{V}, +_\mathcal{V}, \cdot)_F$ be a unitary module over $F$. Then $(\mathcal{V}, +_\mathcal{V}, \cdot)_F$ is a vector space over $F$. That is, a vector space is a unitary module over a ring, whose ring is a division ring.</p> <p><strong>DEFINITION 2</strong></p> <p>Let $(F, +_F, \times_F)$ be a field. Let $(\mathcal{V}, +_\mathcal{V})$ be an abelian group. Let $\cdot: F\times \mathcal{V} \longrightarrow \mathcal{V}$ be a function. A vector space is $(\mathcal{V}, +_\mathcal{V}, \cdot)_F$ such that $\forall a,b \in F$ and $\forall x,y \in \mathcal{V}$:</p> <ul> <li>$\cdot$ right distributive: $(a +_F b) \cdot x = (a\cdot x) +_\mathcal{V} (b\cdot x)$</li> <li>$\cdot$ left distributive: $\,\,\, a \cdot (x +_\mathcal{V} y) = (a\cdot x) +_\mathcal{V} (a\cdot y)$</li> <li>$\cdot$ compatible with $\times_F$: $(a\times_F b) \cdot x = a \cdot (b\cdot x)$</li> <li>$\times_F$ 's identity is $\cdot$'s identity: $1_F \cdot x = x$</li> </ul> </blockquote> <p>There could also be other definitions, but for now it doesn't matter. What matters is that commutativity is not treated in the same way in both definitions! In the first definition, we have a division ring (not a commutative division ring, i.e. a field!), while in the second we have a field (i.e. a commutative division ring). </p> <hr> <p>Notice that the key difference with which I am struggling is that on one side we have a division ring and on the other side a commutative division ring. The first is an abelian group $(R, +_R)$ under the $+_R$ binary operation, however $(R, \times_R)$ is only a group (i.e. not abelian, i.e. not commutative). 
</p>
Bernard
202,857
<p>In <em>Bourbaki</em>, a field $F$ is <em>not</em> necessarily commutative, and they simply define left (resp. right) $F$-vector spaces as left (resp. right) $F$-modules. </p> <p>Ref. N. Bourbaki, <em>Algebra</em>, ch. I, <em>Algebraic Structures</em>, §9 and ch. II, <em>Linear Algebra</em>, §1, n°1.</p>
2,841,640
<p>What is a vector space? I can see two different formulations, and between them there is one difference: commutativity. </p> <blockquote> <p><strong>DEFINITION 1</strong> (See <a href="https://proofwiki.org/wiki/Definition:Vector_Space" rel="noreferrer">here</a>)</p> <p>Let $(F, +_F, \times_F)$ be a division ring. Let $(\mathcal{V}, +_\mathcal{V})$ be an abelian group. Let $(\mathcal{V}, +_\mathcal{V}, \cdot)_F$ be a unitary module over $F$. Then $(\mathcal{V}, +_\mathcal{V}, \cdot)_F$ is a vector space over $F$. That is, a vector space is a unitary module over a ring, whose ring is a division ring.</p> <p><strong>DEFINITION 2</strong></p> <p>Let $(F, +_F, \times_F)$ be a field. Let $(\mathcal{V}, +_\mathcal{V})$ be an abelian group. Let $\cdot: F\times \mathcal{V} \longrightarrow \mathcal{V}$ be a function. A vector space is $(\mathcal{V}, +_\mathcal{V}, \cdot)_F$ such that $\forall a,b \in F$ and $\forall x,y \in \mathcal{V}$:</p> <ul> <li>$\cdot$ right distributive: $(a +_F b) \cdot x = (a\cdot x) +_\mathcal{V} (b\cdot x)$</li> <li>$\cdot$ left distributive: $\,\,\, a \cdot (x +_\mathcal{V} y) = (a\cdot x) +_\mathcal{V} (a\cdot y)$</li> <li>$\cdot$ compatible with $\times_F$: $(a\times_F b) \cdot x = a \cdot (b\cdot x)$</li> <li>$\times_F$'s identity is $\cdot$'s identity: $1_F \cdot x = x$</li> </ul> </blockquote> <p>There could also be other definitions, but for now it doesn't matter. What matters is that commutativity is not treated in the same way in both definitions! In the first definition, we have a division ring (not a commutative division ring, i.e. a field!), while in the second we have a field (i.e. a commutative division ring). </p> <hr> <p>Notice that the key difference with which I am struggling is that on one side we have a division ring and on the other side a commutative division ring. The first is an abelian group $(R, +_R)$ under the $+_R$ binary operation; however, $(R, \times_R)$ is only a group (i.e. not abelian, i.e. not commutative). 
</p>
Community
-1
<p>Usually, a vector space is an abelian group with a scalar multiplication with elements that come from a field.</p> <p>It is true that most linear algebra keeps holding true if you drop the commutativity of the field (we are left with a division ring then), so that might be why the first definition calls it a vector space. Most mathematicians would call it a module over a division ring though.</p>
2,563,300
<p>If we put the Cartesian coordinates of a point in a 2 dimensional locus' equation then we get zero as the value if the point lies on the locus. On putting in the coordinates of all other points, which do not lie on the locus, we get a numerical value. Does this numerical value convey any information about the position of the point with respect to the locus?</p> <p>Example: Suppose you have the equation of a straight line as 2x+y=0. If you put in the coordinates of the points lying on the line, you get the R.H.S. as 0, i.e., the condition is satisfied. However, if we put in the coordinates of points (x, y) which lie lower than the line in the L.H.S. (2x+y), we get R.H.S.&lt;0, and on putting in the values of points that lie above the line, we get R.H.S.&gt;0. Can we derive any relationship between the position of points and the value of the R.H.S. we get by putting their coordinates in the equation's L.H.S.?</p>
gandalf61
424,513
<p>You have four separate cases:</p> <p>$P(A \land B) = \dfrac{1}{15}$</p> <p>$P(A \land \lnot B) = \dfrac{4}{15}$</p> <p>$P(\lnot A \land B) = \dfrac{2}{15}$</p> <p>$P(\lnot A \land \lnot B) = \dfrac{8}{15}$</p> <p>(this assumes A and B are independent).</p> <p>To find the probability that B succeeds given that at least one of A or B succeeds, first find the probability that at least one succeeds - you have correctly calculated this:</p> <p>$P(A \land B) + P(A \land \lnot B) + P(\lnot A \land B) = \dfrac{7}{15}$</p> <p>Within this, the probability that B succeeds is $\dfrac{1}{5}$ which is $\dfrac{3}{15}$. Then divide one probability by the other:</p> <p>$\dfrac{3}{15} \div \dfrac{7}{15} = \dfrac{3}{7}$</p>
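These numbers are easy to confirm with exact fractions; a quick Python check (assuming, as the four cases imply, that $P(A)=\frac13$ and $P(B)=\frac15$ with independence):

```python
from fractions import Fraction

p_a = Fraction(1, 3)   # P(A succeeds)
p_b = Fraction(1, 5)   # P(B succeeds)

# The four disjoint cases, assuming A and B are independent.
both    = p_a * p_b              # P(A and B)       = 1/15
only_a  = p_a * (1 - p_b)        # P(A and not B)   = 4/15
only_b  = (1 - p_a) * p_b        # P(not A and B)   = 2/15
neither = (1 - p_a) * (1 - p_b)  # P(not A, not B)  = 8/15

at_least_one = both + only_a + only_b   # 7/15
# B succeeding implies "at least one succeeds", so condition directly:
b_given_one = p_b / at_least_one        # 3/7
```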
339,331
<p>This is a question of terminology.</p> <p>Suppose we have a line segment AB in a plane. The line segment forms three "zones" in the plane, where the "middle zone" is comprised of points for which some line perpendicular to AB passes through both that point and AB.</p> <p>Is there a name for this "middle zone"? I want to be able to make a concise statement such as:</p> <p>Point P is _________ to / in the _________ of the line segment AB.</p> <p>The reason for my asking is that I'm writing a software function which tests for this quality (I already know how to do this - that's not the question,) and I need to figure out what I should be calling this function in order for its purpose to be clear to other people.</p> <p>That is, what is the name of the gray zone in this picture: <img src="https://i.stack.imgur.com/ZoFmQ.png" alt="gray zone name"></p> <p>(Normally web searches answer all my math questions, but it's hard to search what to call something!)</p>
wchargin
15,886
<p>If you want a mathematical term, I can't help you.</p> <p>But, as a programmer myself, I understand the need for a memorable, intuitive method name. So, here's the first thing that comes to my mind when I see the pictorial representation: may I suggest the <strong>asteroid belt</strong>?</p> <p><a href="https://i.stack.imgur.com/MUcKu.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MUcKu.jpg" alt="Asteroid belt image"></a><br> <sub>(source: <a href="http://spacejockeyreviews.com/wp-content/uploads/2012/01/Alpha-Omicron-Asteroid-Belt.jpg" rel="nofollow noreferrer">spacejockeyreviews.com</a>)</sub> </p> <p>Example method names would be</p> <pre><code>getAsteroidBeltRadius() isInAsteroidBelt() </code></pre> <p>If that's too informal, I suppose <strong>belt</strong> would suffice. But asteroid belt is much more exciting.</p>
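If a concrete test is wanted behind a name like <code>isInAsteroidBelt()</code>, one standard approach (the question doesn't state the asker's method, so take this as a sketch) is a dot-product projection test: P is in the belt exactly when the projection of P−A onto B−A lands between 0 and |B−A|².

```python
def is_in_asteroid_belt(p, a, b):
    """True if the perpendicular from point p meets the segment ab.

    p, a, b are (x, y) pairs; a != b is assumed.  Endpoint
    perpendiculars count as inside (<= is intentional).
    """
    abx, aby = b[0] - a[0], b[1] - a[1]
    apx, apy = p[0] - a[0], p[1] - a[1]
    t = apx * abx + apy * aby          # (P - A) . (B - A)
    return 0 <= t <= abx * abx + aby * aby
```

For example, with A = (0, 0) and B = (4, 0), the point (2, 3) is in the belt while (5, 1) and (−1, 2) are past the endpoints.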
339,331
<p>This is a question of terminology.</p> <p>Suppose we have a line segment AB in a plane. The line segment forms three "zones" in the plane, where the "middle zone" is comprised of points for which some line perpendicular to AB passes through both that point and AB.</p> <p>Is there a name for this "middle zone"? I want to be able to make a concise statement such as:</p> <p>Point P is _________ to / in the _________ of the line segment AB.</p> <p>The reason for my asking is that I'm writing a software function which tests for this quality (I already know how to do this - that's not the question,) and I need to figure out what I should be calling this function in order for its purpose to be clear to other people.</p> <p>That is, what is the name of the gray zone in this picture: <img src="https://i.stack.imgur.com/ZoFmQ.png" alt="gray zone name"></p> <p>(Normally web searches answer all my math questions, but it's hard to search what to call something!)</p>
Simon
68,248
<p>I'm not sure if there is a term for this, but the area between two parallel lines is usually called a strip. I'd call this the 'perpendicular strip'.</p>
33,311
<p>I'm (very!) new to <em>Mathematica</em>, and trying to use it plot a set in $\mathbb{R}^3$. In particular, I want to plot the set</p> <p>$$ \big\{ (x,y,z) : (x,y,z) = \frac{1}{u^2+v^2+w^2}(vw,uw,uv) \text{ for some } (u,v,w) \in \mathbb{R}^3\setminus\{0\} \big\}. $$ This is just the image of function from $\mathbb{R}^3\setminus\{0\} \to \mathbb{R}^3$. I haven't been able to make any of the plot functions display this. Any advice would be greatly appreciated.</p> <p>If it makes it easier, the set above is also $$ \big\{ (x,y,z) : (x,y,z) = \frac{1}{u^2+v^2+w^2}(vw,uw,uv) \text{ for some } (u,v,w) \in \mathbb{S}^2 \big\}. $$ Alternatively, one could plot the three sets $$ \big\{ (x,y,z) : (x,y,z) = \frac{1}{1+v^2+w^2}(vw,w,v) \text{ for some } (v,w) \in \mathbb{R}^2 \big\}, $$ $$ \big\{ (x,y,z) : (x,y,z) = \frac{1}{1 +u^2+w^2}(w,uw,u) \text{ for some } (u,w) \in \mathbb{R}^2 \big\}, $$ $$ \big\{ (x,y,z) : (x,y,z) = \frac{1}{1+u^2+v^2}(v,u,uv) \text{ for some } (u,v) \in \mathbb{R}^2 \big\}. $$</p>
rm -rf
5
<p>In such cases, you can get better flexibility by switching to a <code>DynamicModule</code> and building up the GUI yourself. Then, you can pull the data generating step out of the plotting dynamic, so that the latter can be manipulated freely without regenerating the data. </p> <pre><code>DynamicModule[{function = Sin, start, stop = 300, x = Range[-10 Pi, 10 Pi, Pi/100]}, Dynamic@With[{data = function[x], s = Spacer@10, f = Print@10}, Panel@Column[{ Row[{"function", s, Control[{function, {Sin, Cos, Tan}}]}], Row[{"start", s, Slider[Dynamic@start, {1, Length@data}]}], Row[{"stop", s, Slider[Dynamic@stop, {1, Length@data}]}], Dynamic@ListPlot[data, PlotRange -&gt; {{start, stop}, Automatic}, ImageSize -&gt; 400, Background -&gt; White] }] ] ] </code></pre> <p><img src="https://i.stack.imgur.com/DuvsX.png" alt=""></p> <p>Note that the <code>f = Print@10</code> is there just to observe evaluation of <code>data</code>. You can check for yourself that nothing is printed when you move the sliders and prints only when the tabs are changed.</p>
313,298
<p>For example, let's say that I have some sequence $$\left\{c_n\right\} = \left\{\frac{n^2 + 10}{2n^2}\right\}$$ How can I prove that $\{c_n\}$ approaches $\frac{1}{2}$ as $n\rightarrow\infty$?</p> <p>I'm using the Buchanan textbook, but I'm not understanding their proofs at all.</p>
copper.hat
27,978
<p>You need to show that for any number $\epsilon&gt;0$ you can find $N$ such that if $n\geq N$, then $|c_n - \frac{1}{2}| &lt; \epsilon$.</p> <p>So, pick an $\epsilon&gt;0$ and start computing: $|c_n - \frac{1}{2}| = | \frac{n^2+10}{2 n^2} - \frac{1}{2}| = |\frac{10}{2 n^2}| = \frac{5}{n^2}$.</p> <p>Now you need to pick $N$ such that if $n \geq N$, then $\frac{5}{n^2} &lt; \epsilon$. If I pick $N \geq \sqrt{\frac{5}{\epsilon}}+1$, then you can see that if $n \geq N$, then $\frac{5}{n^2} &lt; \epsilon$. Since $\epsilon&gt;0$ was arbitrary, you are finished.</p>
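A numerical sanity check of this choice of $N$ (not a substitute for the proof) can be run in Python:

```python
import math

def c(n):
    """The sequence c_n = (n^2 + 10) / (2 n^2)."""
    return (n * n + 10) / (2 * n * n)

def find_N(eps):
    """One valid choice of N with N >= sqrt(5/eps) + 1,
    so that 5/n^2 < eps for every n >= N."""
    return math.isqrt(math.ceil(5 / eps)) + 1
```

For instance, `find_N(0.1)` returns 8, and indeed $|c_n - \tfrac12| = 5/n^2 \le 5/64 < 0.1$ for all $n \ge 8$.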
1,923,298
<p>I am really struggling with this question and it isn't quite making sense. Please help and if you don't mind answering quickly.</p> <p>Reflection across $x = −1$</p> <p>$H(−3, −1), F(2, 1), E(−1, −3)$.</p>
Elle Najt
54,092
<p>I want to suggest proving it in the following way, which I think is less confusing. I'm assuming that $A$ and $B$ are subsets of a single set, in which case $A \cap B$ may not be empty, and the number of elements in $A \cup B$ may not be $m + n$.</p> <ol> <li>$A \cup B = (A \setminus B) \cup (B \setminus A) \cup (A \cap B)$.</li> <li>Subsets of finite sets are finite.</li> <li>Disjoint unions of finite sets are finite.</li> </ol>
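The decomposition in step 1 is easy to try on small concrete sets; a minimal Python check:

```python
def decompose(A, B):
    """Split A ∪ B into the three pairwise disjoint pieces of step 1."""
    return A - B, B - A, A & B

A, B = {1, 2, 3, 4}, {3, 4, 5}
only_a, only_b, both = decompose(A, B)

# The pieces are pairwise disjoint and their union is A ∪ B, so
# |A ∪ B| = |A \ B| + |B \ A| + |A ∩ B| (here 5 = 2 + 1 + 2).
```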
1,923,298
<p>I am really struggling with this question and it isn't quite making sense. Please help and if you don't mind answering quickly.</p> <p>Reflection across $x = −1$</p> <p>$H(−3, −1), F(2, 1), E(−1, −3)$.</p>
fleablood
280,126
<p>You have the right idea but you are confusing whether $x \in A \cup B$ or $x \in \mathbb N$. If $f: A \rightarrow \{1,\dots, n\}$ then you can't say $x \in \{1,\dots, n\}$. And if $g: \{1,\dots, m\}\rightarrow B$ then you can't say $h(x) = m + g(x)$ because $g(x)$ is not a number.</p> <p>Let $f:A \rightarrow \{1,\dots,n\}$ be a bijection from $A$, a set with $n$ elements, and $g:B\rightarrow\{1,\dots,m\}$ be a bijection from $B$, a set with $m$ elements.</p> <p>Let $h:A\cup B\rightarrow \{1,\dots,n+m\}$</p> <p>$h(x) = f(x)$ if $x \in A$.</p> <p>$h(x) = n + g(x)$ if $x \not \in A$.</p> <p>Note: the function will <em>NOT</em> in general be a bijection. It will be injective but it need not be surjective. This is fine, because having an injective function into a finite set means the set is finite.</p> <p>Let $h(x) = h(y)$. If $x \in A$ but $y \not \in A$ (or vice versa) then $h(x) = f(x) \le n$ while $h(y) = n + g(y) &gt; n$. So $h(x) = h(y)$ only if $x$ and $y$ are both in $A$ or both not in $A$.</p> <p>If $x,y \in A$ then $h(x) = f(x)$ and $h(y) = f(y)$. As $f$ is a bijection, $x = y$.</p> <p>If $x,y \not \in A$ then $h(x)=n + g(x)$ and $h(y) = n+g(y)$, so $g(x) = g(y)$ and $x = y$ as $g$ is a bijection.</p> <p>So $h$ is an injective map from $A\cup B$ to the finite set $\{1,\dots, n+m\}$, so $A \cup B$ is finite.</p> <p>postscript: note that if $x \in A \cap B$, then $h(x) = f(x)$, so the value $n + g(x)$ need not be attained by $h$ at all. In that case $h$ is not surjective. </p>
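The construction of $h$ can be tried out on small concrete sets (labels chosen arbitrarily here):

```python
a_list = ["a", "b", "c"]   # A, with n = 3 elements
b_list = ["c", "d"]        # B, with m = 2 elements; note "c" is shared

f = {x: i + 1 for i, x in enumerate(a_list)}   # bijection A -> {1, 2, 3}
g = {x: i + 1 for i, x in enumerate(b_list)}   # bijection B -> {1, 2}
n = len(a_list)

def h(x):
    """h(x) = f(x) on A, and n + g(x) off A."""
    return f[x] if x in f else n + g[x]

# h is injective into {1, ..., n + m} = {1, ..., 5}, but not onto:
# since "c" is in A ∩ B, the value n + g["c"] = 4 is never taken.
values = sorted(h(x) for x in set(a_list) | set(b_list))
```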
1,019,981
<p>I'm a high school student and I'm giving a lecture in my high school math class on ordinal numbers, and I would like to prove that the von Neumann ordinals are well-ordered by set membership.</p> <p>The definition of von Neumann ordinal that I'm using is as follows. An ordinal is a set $A$ such that the elements of $A$ are well-ordered by $\in$ and such that $\forall x (x\in A\implies x\subset A)$.</p> <p>In order to prove that the von Neumann ordinals themselves are well-ordered, I will first show that they are totally ordered. To do this, I first show that the ordering is transitive, i.e. if $A\in B$ and $B\in C$ then $A\in C$. I then want to show that the ordering is trichotomous, i.e. for all $A$ and $B$, exactly one of the following is true, either $A\in B$ or $A=B$ or $B\in A$. I am having trouble showing this last part.</p> <p>In other words, I would like to show, given only the definition of von Neumann ordinal written above, that any pair of ordinals are either equal, or one is a member of the other. Any help would be appreciated.</p>
Stefan Mesken
217,623
<p>Observe that for given ordinals $A$ and $B$ the following claims hold true:</p> <ul> <li>If $A$ is a proper subset of $B$, then $A \in B$.</li> <li>$A \cap B$ is an ordinal.</li> </ul> <p>Now, given ordinals $A \neq B$ consider $A \cap B \subseteq A,B$. If $A \cap B = A$, then $A \subseteq B$, so that either $A = B$ or $A \in B$. Analogously, $A \cap B = B$ implies either $B = A$ or $B \in A$. If $A \cap B$ is a proper subset of both $A$ and $B$, then $A \cap B \in A \cap B$, which contradicts the axiom of regularity.</p>
1,019,981
<p>I'm a high school student and I'm giving a lecture in my high school math class on ordinal numbers, and I would like to prove that the von Neumann ordinals are well-ordered by set membership.</p> <p>The definition of von Neumann ordinal that I'm using is as follows. An ordinal is a set $A$ such that the elements of $A$ are well-ordered by $\in$ and such that $\forall x (x\in A\implies x\subset A)$.</p> <p>In order to prove that the von Neumann ordinals themselves are well-ordered, I will first show that they are totally ordered. To do this, I first show that the ordering is transitive, i.e. if $A\in B$ and $B\in C$ then $A\in C$. I then want to show that the ordering is trichotomous, i.e. for all $A$ and $B$, exactly one of the following is true, either $A\in B$ or $A=B$ or $B\in A$. I am having trouble showing this last part.</p> <p>In other words, I would like to show, given only the definition of von Neumann ordinal written above, that any pair of ordinals are either equal, or one is a member of the other. Any help would be appreciated.</p>
Akira
368,425
<p>For any ordinals $A$ and $B$, the statements below hold true:</p> <ol> <li>$A\subsetneq B\implies A\in B$. (I presented a proof for this at <a href="https://math.stackexchange.com/questions/1983906/if-alpha-ne-beta-are-ordinals-and-alpha-subset-beta-show-alpha-in-beta/2845626#2845626">If $\alpha\ne\beta$ are ordinals and $\alpha\subset\beta$ show $\alpha\in\beta$</a>)</li> <li>$A\cap B$ is an ordinal. (It's quite easy to prove this)</li> </ol> <p>For $A=B$, the theorem is trivially true. For $A\neq B$, let $z=A\cap B$; hence $z$ is an ordinal. We will prove this case by contradiction.</p> <p>Assume $A\not\subsetneq B$ and $B\not\subsetneq A$. Then $z\subsetneq A$ (if $z=A$ we would have $A\subseteq B$ and hence $A\subsetneq B$, contrary to assumption), and since $z$ is an ordinal, $z\in A$ by 1. Similarly, $z\in B$. Thus $z\in A\cap B=z$, which contradicts the fact that $z$ is an ordinal and hence well-ordered under $\in$. As a result, either $A\subsetneq B$ or $B\subsetneq A$, and then by 1, either $A\in B$ or $B\in A$.</p>
2,441,793
<p><strong>Questions.</strong> </p> <p>(0) Is there a usual technical term in ring theory for the following kind of module?</p> <blockquote> <p>$M$ is a free $R$-module over a commutative ring $R$, and for each $R$-basis $B$ of $M$, each $b\in B$, and each <em>unit</em> $r$ of $R$, we have $r\cdot b=b$</p> </blockquote> <p>(1) Is there a classification of such modules, or a non-classifiability result in the literature?</p> <p><strong>Remarks.</strong></p> <p>--This is not a homework problem. I need to know more about this, yet it is not central to what I am doing currently, and I hope this is well-known and documented.</p> <p>--Trivial examples are </p> <ul> <li><p>the $R$-module $\{0\}$ whose only basis is $\{\}$, for which the condition is vacuously true,</p></li> <li><p>$R:=\mathbb{Z}/2\mathbb{Z}$, $\kappa:=$some cardinal, and $M:=\prod_{i\in\kappa} R$, </p></li> <li><p>$R:=\mathbb{Z}/2\mathbb{Z}\oplus\mathbb{Z}/2\mathbb{Z}$, $\kappa:=$some cardinal, and $M:=\prod_{i\in\kappa} R$, </p></li> <li><p>$\kappa_0,\kappa_1$ arbitrary cardinals, $R:=\prod_{i\in\kappa_0} \mathbb{Z}/2\mathbb{Z}$, and $M:=\prod_{i\in\kappa_1} R$. </p></li> </ul> <p>EDIT: beware, the following is evidently <strong>not a free $R$-module</strong>; I keep it here since, with the warning, it seems instructive: $R:=\mathbb{Z}/4\mathbb{Z}$, whose units are $1+(4)_R$ and $3+(4)_R$, and $\kappa:=$some cardinal, and $M:=\prod_{i\in\kappa}\mathbb{Z}/2\mathbb{Z}$.</p> <p>--Since $R$ need not be a principal ideal domain, or even a domain, no classification theorem known to me applies here.</p>
Peter Heinig
457,525
<p>Ad 1. You write</p> <blockquote> <p>But my question is when the use of ≡ or = is mandatory, and when it is permissible. </p> </blockquote> <p>There is no absolute answer to this, yet the use of '=' between terms like the one you give, which can be seen as elements of the 'rational function field' $\mathbb{Q}(x) = \mathrm{Frac}(\mathbb{Z}[x])$, is <strong>much</strong> more usual in modern mathematics. The use of $\equiv$ you have encountered seems to be some sort of 'subcultural' usage confined to some schoolbooks (I guess). </p> <p>To explicitly address the example you give: I can assure you that most mathematicians would consider </p> <blockquote> <p>$$\frac{3x^2+34x+6}{x^2+2x-24} \equiv \frac{3x+7}{x-4} + \frac{9}{x+6}$$ </p> </blockquote> <p>an <em>unusual</em> notation. The most usual notation is to write </p> <blockquote> <p>$$\frac{3x^2+34x+6}{x^2+2x-24} = \frac{3x+7}{x-4} + \frac{9}{x+6}$$ </p> </blockquote> <p>and interpret this as an <strong>equality in the field $\mathbb{Q}(x)$ of rational functions</strong>. Yes, in the background, <em>depending on your choice of set-theoretic foundations</em>, there may be <em>equivalence classes</em> somewhere, yet this is <em>irrelevant</em>, and this is a genuine equality. </p> <p>I would appreciate it if some other people would second this (uncontroversial) opinion and talk the OP out of their fear of using = here. There might be good reasons to use other symbols for this sometimes, but recommending the OP to use whatever they see fit is definitely misleading.</p> <p>In particular, your statement </p> <blockquote> <p>All materials I have come across would use ≡ here.</p> </blockquote> <p>if true, implies that you have been brought up on a very limited set of 'materials'.</p> <p>And in </p> <blockquote> <p>But why is it common to write 3+2=5, when clearly this is not an equation to solve, but an equivalency? 
</p> </blockquote> <p>the statement about the "equivalency" is <strong>wrong</strong>, at least relative to usual contemporary mathematics. This is an <strong>equation between natural numbers</strong>.</p> <p>As a general rule, remember that </p> <ul> <li>when you are working with the elements of an <em>algebraic structure</em> (group, ring, field, ...), then the usual 'relation symbol' is '=' and <strong>not</strong> $\equiv$. </li> </ul> <p>The symbol $\equiv$ has various meanings in various contexts.</p> <p>Ad 2.</p> <p>Re </p> <blockquote> <p>However, if ⟹ contains within it the implicit notion that</p> </blockquote> <p>Please note that <em>the usual convention is that $\Rightarrow$</em> <strong>never</strong> means $\require{cancel} \cancel{\impliedby}$. Never. If it would, it <em>would be impossible to express $\Leftrightarrow$ in terms of $\Rightarrow$ and $\Leftarrow$</em>.</p> <p>Re </p> <blockquote> <p>Which symbol should be used, or are both permissible (and the question down to personal preference)?</p> </blockquote> <p>The latter. Both are permissible. The choice is a choice of emphasis. And of course they do not mean the same, as you know.</p> <p>Does that help?</p>
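For what it's worth, the displayed identity in $\mathbb{Q}(x)$ can be spot-checked with exact rational arithmetic at points away from the poles $x=4$ and $x=-6$ (a sanity check, not a proof):

```python
from fractions import Fraction

def lhs(x):
    """Left-hand side (3x^2 + 34x + 6) / (x^2 + 2x - 24)."""
    return (3 * x * x + 34 * x + 6) / (x * x + 2 * x - 24)

def rhs(x):
    """Right-hand side (3x + 7)/(x - 4) + 9/(x + 6)."""
    return (3 * x + 7) / (x - 4) + Fraction(9) / (x + 6)

# Agreement at several exact rational points away from x = 4 and x = -6.
points = [Fraction(p) for p in (-3, 0, 1, 2, 7, 100)]
assert all(lhs(x) == rhs(x) for x in points)
```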
2,441,793
<p><strong>Questions.</strong> </p> <p>(0) Is there a usual technical term in ring theory for the following kind of module?</p> <blockquote> <p>$M$ is a free $R$-module over a commutative ring $R$, and for each $R$-basis $B$ of $M$, each $b\in B$, and each <em>unit</em> $r$ of $R$, we have $r\cdot b=b$</p> </blockquote> <p>(1) Is there a classification of such modules, or a non-classifiability result in the literature?</p> <p><strong>Remarks.</strong></p> <p>--This is not a homework problem. I need to know more about this, yet it is not central to what I am doing currently, and I hope this is well-known and documented.</p> <p>--Trivial examples are </p> <ul> <li><p>the $R$-module $\{0\}$ whose only basis is $\{\}$, for which the condition is vacuously true,</p></li> <li><p>$R:=\mathbb{Z}/2\mathbb{Z}$, $\kappa:=$some cardinal, and $M:=\prod_{i\in\kappa} R$, </p></li> <li><p>$R:=\mathbb{Z}/2\mathbb{Z}\oplus\mathbb{Z}/2\mathbb{Z}$, $\kappa:=$some cardinal, and $M:=\prod_{i\in\kappa} R$, </p></li> <li><p>$\kappa_0,\kappa_1$ arbitrary cardinals, $R:=\prod_{i\in\kappa_0} \mathbb{Z}/2\mathbb{Z}$, and $M:=\prod_{i\in\kappa_1} R$. </p></li> </ul> <p>EDIT: beware, the following is evidently <strong>not a free $R$-module</strong>; I keep it here since, with the warning, it seems instructive: $R:=\mathbb{Z}/4\mathbb{Z}$, whose units are $1+(4)_R$ and $3+(4)_R$, and $\kappa:=$some cardinal, and $M:=\prod_{i\in\kappa}\mathbb{Z}/2\mathbb{Z}$.</p> <p>--Since $R$ need not be a principal ideal domain, or even a domain, no classification theorem known to me applies here.</p>
Thomas
26,188
<p>It is absolutely fantastic that you are thinking about these issues as you start your undergraduate career. If only more students would appreciate these subtleties in mathematics. One nice thing in mathematics is that everything can be made precise.</p> <p>A couple of remarks though:</p> <ol> <li><p>Often we will define precise use of notation and then immediately violate the convention. Insisting on precise notation everywhere is not helpful because everything becomes too cumbersome. It can become easy to bury what you are trying to communicate in notation. Abuse of notation is very common and accepted. Also, mathematics is about more than a game of notation. While it strictly speaking might be true at the root, mathematics is also about ideas.</p></li> <li><p>Remember the context. Saying that $x^2 + x + 1=0$ might mean an invitation to solve the equation. It might mean that $x$ is an elsewhere defined number and that the number $x^2 + x + 1$ is equal to $0$.</p></li> <li><p>Ask your teacher. The teacher will probably have certain preferences when it comes to notation. Don't then get mad at your teacher because he/she doesn't follow notation that you have used before. Instead, get used to the change of preferences.</p></li> </ol> <p>Now, typically $=$ is used to say that two elements in a set are the same element. Saying "solve $x^2 = 2$" can then mean find all elements $x$ in the set $\mathbb{R}$ whose square is the same as the element $2$ in the set of real numbers. Saying that $2(x-3) = 2x -6$ might mean that the polynomial $2(x-3)$ is the same as the polynomial $2x - 6$. It might mean that the function $f:\mathbb{R} \to \mathbb{R}$ given by $f(x) = 2(x-3)$ is the same (as element in a set of functions) as the function $g:\mathbb{R} \to \mathbb{R}$ given by $g(x) = 2x - 6$.</p> <p>Saying that $\frac{x^2}{x} = x$ might again be about an equality of functions. But what is the domain of these functions? 
Both functions would have the same domain, namely $\mathbb{R}\setminus\{0\}$.</p> <p>You say that "if $\implies$ contains within it the implicit notion that $\require{cancel} \cancel{\impliedby}$ ..." This is simply not true. You can, in fact, take the definition of $A\iff B$ as ($A\implies B$ and $B\implies A$). So indeed both of the following are correct $$ x^2 = 1 \implies x\in \{\pm1\} \\ x^2 = 1 \iff x\in \{\pm1\} $$ Both are therefore permissible.</p> <p>Here is the thing. When you are writing a proof you want to be careful and precise. Getting into the habit of writing $\iff$ everywhere you can will likely lead you to use it wrongly at some point. Being careful is to prove that $A\iff B$ by first showing $A\implies B$ and then $B\implies A$ even if you could do both at the same time.</p> <p>The symbol $\equiv$ is often used in different ways. It will often depend on the definition. I think that most sources would not use $\equiv$ in the place of $=$.</p>
2,795,652
<blockquote> <p>Given a set of decimal digits, and given a set of primes $\mathbb{P}$, find some $p \in \mathbb{P}$ and some $n \in \mathbb{N}$ such that $p^n$ contains all the digits from the given set, in any order.</p> </blockquote>
Barry Cipra
86,747
<p>Remark: this answer pertains to the initial version of the question. The current edit restricts the search to powers within a given set of primes.</p> <p>Another theoretically "cheap" way to look for prime powers that contain a given set of digits is to concatenate the $n$ digits, append a $1$ to get an $(n+1)$-digit number $d$ and then apply Dirichlet's theorem on primes in arithmetic progressions to the progression $10^{n+2}m+d$ for $m=1,2,3,\ldots$. (Appending the $1$ is necessary when, for example, the given digits are all even, in order to get a number $d$ that is relatively prime to $10$.) Dirichlet's theorem guarantees you'll find a prime, which can be checked for successive values of $m$ using, say, the <a href="https://en.wikipedia.org/wiki/AKS_primality_test" rel="nofollow noreferrer">AKS primality test</a> if $n$ is large. If $n$ is small, say $n\approx10$, just about any primality test will do. Furthermore, the Prime Number Theorem (for primes in arithmetic progression) suggests you should find a prime relatively quickly.</p>
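The search loop described above is short to write down; here is a sketch in Python (the helper names are mine, and plain trial-division primality is plenty for small digit sets):

```python
def is_prime(q):
    """Trial division; fine for the small candidates used here."""
    if q < 2:
        return False
    i = 2
    while i * i <= q:
        if q % i == 0:
            return False
        i += 1
    return True

def prime_with_digits(digits):
    """Smallest prime of the form 10**(n+2) * m + d, m = 1, 2, 3, ...,
    where d is the given n digits with a 1 appended, so gcd(d, 10) = 1
    and Dirichlet's theorem guarantees the loop terminates."""
    d = int(digits + "1")
    n = len(digits)
    m = 1
    while not is_prime(10 ** (n + 2) * m + d):
        m += 1
    return 10 ** (n + 2) * m + d
```

For example, `prime_with_digits("24")` returns a prime whose decimal expansion ends in "241" and hence contains the digits 2 and 4.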
85,117
<p>Let $\mathcal{A}$ and $\mathcal{B}$ be two $2$-categories and $F : \mathcal{A} \to \mathcal{B}$ be a lax $2$-functor. Given $1$-cells $(f_{i})_{0 \leq i \leq n}$ of $\mathcal{A}$ such that the composition $f_{n} \circ f_{n-1} \circ \cdots \circ f_{0}$ makes sense, this data together with the structural $2$-cells of $F$ give many paths of $2$-cells going from $F(f_{n}) \circ F(f_{n-1}) \circ \cdots \circ F(f_{0})$ to $F(f_{n} \circ f_{n-1} \circ \cdots \circ f_{0})$, for instance $$ F(f_{n}) \circ F(f_{n-1}) \circ \cdots \circ F(f_{0}) \Rightarrow F(f_{n} \circ f_{n-1}) \circ F(f_{n-2}) \circ \cdots \circ F(f_{0}) \Rightarrow \cdots $$ $$\Rightarrow F(f_{n} \circ f_{n-1} \circ \cdots \circ f_{0}) $$ and $$ F(f_{n}) \circ F(f_{n-1}) \circ \cdots \circ F(f_{0}) \Rightarrow F(f_{n}) \circ F(f_{n-1}) \circ \cdots \circ F(f_{1} \circ f_{0}) \Rightarrow \cdots $$ $$\Rightarrow F(f_{n} \circ f_{n-1} \circ \cdots \circ f_{0}) $$ which correspond to what one gets by "parenthesizing on the left" and "parenthesizing on the right" respectively. It seems obvious that it follows from the definition of lax functor that the $C_{n}$ ways to parenthesize the left hand side all give the same $2$-cell $$ F(f_{n}) \circ F(f_{n-1}) \circ \cdots \circ F(f_{0}) \Rightarrow F(f_{n} \circ f_{n-1} \circ \cdots \circ f_{0}) $$ Since I need this property for a text I am writing, I would like to provide a reference. My question is the following:</p> <blockquote> <p>Where is this result rigorously stated, and where is it rigorously proved? Hopefully, the two references will be the same.</p> </blockquote> <p>Edit: I am aware that this result is "obvious". In addition, it is certainly classical, by which I mean that all the people working with lax functors use it routinely. However, if one wants to state it and prove it, the question arises as to what is the best way to state the result, which I think turns out <em>not</em> to be completely trivial. 
Furthermore, writing a rigorous proof certainly <em>does</em> require some work. I am sure there are some people here who have already used this result. How do they state it? To which reference do they point? Or is the reader assumed to find this fact so obvious that no one ever cares to provide a proof or a reference?</p>
Nacho Lopez
8,482
<p>The closest statement I know is Theorem 1.6 in Gordon-Power-Street "Coherence for tricategories." It actually deals with pseudofunctors instead of lax functors, but I believe it can easily be modified to cover lax functors. The proof is the same proof of Theorem 1.7 in Joyal-Street "Braided tensor categories."</p>
975,210
<p>\begin{align} \left| f(b)-f(a)\right|&amp;=\left| \int_a^b \frac{df}{dx} dx\right|\\ \ \\ &amp;\leq\left| \int_a^b \left|\frac{df}{dx}\right|\ dx\right|. \end{align}</p> <p>I do not understand why the second line is greater or equal than the top equation. Can anyone explain please?</p>
Hagen von Eitzen
39,174
<p>Let $u$ be integrable (in your case it is $\frac{df}{dx}$). Because $u(x)\le |u(x)|$ we have $$\int_a^bu(x)\,\mathrm dx\le \int_a^b|u(x)|\,\mathrm dx $$ for $a\le b$. Because $-u(x)\le |u(x)|$ we have $$-\int_a^bu(x)\,\mathrm dx=\int_a^b(-u(x))\,\mathrm dx\le \int_a^b|u(x)|\,\mathrm dx $$ for $a\le b$. As $|y|=\max\{y,-y\}$, we conclude $$ \left|\int_a^bu(x)\,\mathrm dx\right|\le \int_a^b|u(x)|\,\mathrm dx $$ whenever $a\le b$. If we take absolute values on the right hand side, we also treat the case $b&lt;a$ because $|\int_a^b|=\max\{\int_a^b,-\int_a^b\}=\max\{\int_a^b,\int_b^a\}$, i.e. we have $$ \left|\int_a^bu(x)\,\mathrm dx\right|\le \left|\int_a^b|u(x)|\,\mathrm dx \right|$$ for <em>all</em> numbers $a,b\in\mathbb R$.</p>
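The inequality is easy to watch numerically, e.g. for $u(x)=\sin x$ on $[0,2\pi]$, where cancellation makes the left-hand side nearly zero while $\int|u|=4$ (a crude midpoint-rule illustration):

```python
import math

def riemann(f, a, b, steps=10000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / steps
    return sum(f(a + (i + 0.5) * h) for i in range(steps)) * h

a, b = 0.0, 2 * math.pi
lhs = abs(riemann(math.sin, a, b))                    # |∫ u| ≈ 0
rhs = abs(riemann(lambda x: abs(math.sin(x)), a, b))  # ∫ |u| ≈ 4
```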
2,197,065
<p>Baire category theorem is usually proved in the setting of a complete metric space or a locally compact Hausdorff space.</p> <p>Is there a version of Baire category Theorem for complete topological vector spaces? What other hypotheses might be required?</p>
Akay
359,648
<p>As others have mentioned, there are several valid use cases for notions of average (or mean) distance. A mundane example would be:</p> <blockquote> <p>If you alternately run $3$ and $5$ km every day beginning from Monday through Saturday, what is the average distance you've run in the entire week?</p> </blockquote> <p>Now, if you're worried about taking an average of time because it is absolute... that is, strictly speaking, <a href="https://en.wikipedia.org/wiki/Time_dilation" rel="nofollow noreferrer">not true</a>. But for all everyday intents and purposes we can consider the absoluteness of time to be a given truth, and avoid questions like "What is time, really?".</p> <p>The concept of arithmetic mean (which is what you are referring to as the "average"; the mean is actually one of many average quantities) is quite simple and independent of what you are measuring your quantity (for example, speed) against. To see this more clearly, consider a $100m$ race amongst $10$ participants; you can find the average (mean) speed of the runners, and also the average (mean) time taken to complete the race.</p> <p>In conclusion, in this context, the mean may be applied to any function of a single variable. What you obtain is a number that represents the equal distribution of quantity to all the data points you are working with. In the above example: $\text{(mean time)}\times\text{(number of runners)}=\text{(total time taken by all the runners)}$</p>
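The week-of-runs example works out as follows (trivial, but it exhibits the "equal distribution" property stated at the end):

```python
runs_km = [3, 5, 3, 5, 3, 5]      # Monday through Saturday
total = sum(runs_km)              # 24 km over the week
mean = total / len(runs_km)       # 4 km per run, on average

# "Equal distribution": handing every day the mean recovers the total.
assert mean * len(runs_km) == total
```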
391,020
<p>Does there exist a non-zero homomorphism from $\mathbb{Z}/n\mathbb{Z}$ to $\mathbb{Z}$? If yes, state the mapping. What exactly does this map look like?</p>
Asaf Karagila
622
<p><strong>Hint:</strong> Every element of $\Bbb{Z/nZ}$ has finite order.</p>
106,708
<p><strong>My fragile attempt:</strong> Note that if $1987^k-1$ ends with 1987 zeros, that means $1987^k$ has last digit 1 (and the 1986 digits before it are zeros). For this to be satisfied, $k$ has to be of the form $k=4n$, where $n\in N$. This means our number can be written in the form </p> <p>$$ [(1987^n)^2+1][1987^n+1][1987^n-1]. $$</p> <p>This number has to be divisible by $10^{1987}$ if there is such a number as is asked for in the question.</p> <p>Now, I believe that the fact 1987 is a prime is very important here. There are probably some theorems from number theory about primes and their powers. For example, if $p$ is a prime (distinct from 2 if needed), are there any important things about numbers such as $p^2-1$? </p> <p>If I'm going in the right direction with this, I'd appreciate a hint. Please don't use too advanced techniques if possible. Thanks.</p>
André Nicolas
6,312
<p>The standard approach is to use the fact that if $a$ is divisible neither by $2$ nor by $5$, then $$a^{\varphi(10^n)}\equiv 1\pmod {10^n},$$ where $\varphi$ is the Euler $\varphi$-function.</p> <p>The approach below is much more low-tech! All we need is some comfort with the Binomial Theorem. Suppose that $b_1$ already ends in $1$ (with our number, that means we let $b_1=(1987)^4$).<br> What happens when we take the $10$-th power of $b_1$?</p> <p>Think of it this way. We have $b_1=1+10c_1$ for some integer $c_1$. Take (or imagine taking) the $10$-th power of $1+10c_1$, using the Binomial Theorem.</p> <p>We get $1+(10)(10c_1)$ plus a bunch of terms that are divisible by at least $100$. So the result $b_2$ has shape $1+100c_2$, for some integer $c_2$. In other words, $b_2$ ends in $01$,</p> <p>Now take the $10$-th power of $b_2$. We get, by the Binomial Theorem, $1+(10)(100c_2)$ plus a bunch of terms that are divisible by at least $1000$. Call this result $b_3$. Note that $b_3$ ends in $001$. Continue.</p> <p>To sum up, we start with $(1987)^4$ and raise it to the power $10$ repeatedly. We get the numbers $(1987)^{40}$, $(1987)^{400}$, $(1987)^{4000}$, and so on. They are guaranteed to end in $01$, $001$, $0001$, and so on. </p>
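The lifting argument is easy to check directly: each successive 10th power of $1987^4$ pins down one more trailing digit, i.e. $1987^{4\cdot 10^k} \equiv 1 \pmod{10^{k+1}}$. A quick check in Python (small cases only; Python's three-argument `pow` does fast modular exponentiation):

```python
# 1987^4 ends in 1; each further 10th power fixes one more trailing digit,
# i.e. 1987^(4 * 10^k) ≡ 1 (mod 10^(k+1)).
for k in range(7):
    exponent = 4 * 10**k
    assert pow(1987, exponent, 10**(k + 1)) == 1
print("verified for k = 0..6")
```

In particular, taking $k=1986$ gives an exponent for which $1987^k-1$ ends in 1987 zeros.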
310,669
<p>This is related to a course I'm taking in computer science theory. </p> <p>Let $\Sigma$ be an alphabet. Then the set of all strings over $\Sigma$, denoted $\Sigma^*$, has the operation of concatenation (adjoining two strings end to end). Clearly, concatenation is associative, $\Sigma^*$ is closed under concatenation, and the identity element is the empty string. I'm also taking a course in modern algebra, so I naturally ask: can $\Sigma^*$ be formed into a group? Three of the four group axioms are satisfied.</p>
Jim
56,747
<p>It is not a group, it is a <em>monoid</em>, which is essentially a group without inverses. The natural thing to do if you want a group is to just arbitrarily allow inverses. So you would consider strings whose letters are of the form $a$ or $a^{-1}$ where $a \in \Sigma$. You say that two strings are equivalent if you can get from one to another by adding or deleting pairs of the form $aa^{-1}$ or $a^{-1}a$. Then the set of equivalence classes of strings is a group under concatenation.</p> <p>This construction is called the free group on $\Sigma$.</p>
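A minimal sketch of the reduction step in this construction (the names here are illustrative, not a standard API): represent each letter as a pair `(symbol, exponent)` and cancel adjacent inverse pairs with a stack:

```python
def reduce_word(word):
    """Freely reduce a word over symbols and their inverses.

    A word is a list of (symbol, exp) pairs with exp = +1 or -1,
    e.g. [('a', 1), ('a', -1), ('b', 1)] reduces to [('b', 1)].
    """
    stack = []
    for sym, exp in word:
        if stack and stack[-1] == (sym, -exp):
            stack.pop()          # cancel an adjacent a a^{-1} or a^{-1} a pair
        else:
            stack.append((sym, exp))
    return stack

def concat(w1, w2):
    """The group operation: concatenate, then freely reduce."""
    return reduce_word(w1 + w2)

w = [('a', 1), ('b', 1)]          # the word ab
w_inv = [('b', -1), ('a', -1)]    # its inverse b^{-1} a^{-1}
print(concat(w, w_inv))  # [] -- the identity (empty word)
```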
937,443
<blockquote> <p>Evaluate $\displaystyle \int \tan^2x\sec^2x\,dx$</p> </blockquote> <p>I tried several methods: </p> <ul> <li>First method was I changed $\tan^2x = \sec^2x-1$, and then substitute $\sec x$ to $t$, but it doesn't work.</li> <li>Second method was to use substitute $\tan^2x = v$, $\sec x = u$. And, it does not work as well.</li> </ul> <p>Is there any better way to solve this problem? </p>
Anay Mehrotra
659,080
<p>Substituting <span class="math-container">$x = \tan^{-1}t$</span> and using <span class="math-container">$\sec^2(x)= \tan^2(x)+1$</span> we get <span class="math-container">$$\int (1+t^2)\cdot t^2\cdot \frac{1}{1+t^2}\, dt = \int t^2dt = \frac{t^3}{3}+C.$$</span> Since <span class="math-container">$t=\tan(x)$</span>, we recover <span class="math-container">$$\int \sec^2(x) \cdot \tan^2(x)\, dx = \frac{\tan^3(x)}{3}+C.$$</span></p>
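A quick numerical check that $\frac{\tan^3 x}{3}$ really is an antiderivative, comparing a central-difference derivative against the integrand:

```python
import math

def F(x):                      # candidate antiderivative tan^3(x)/3
    return math.tan(x)**3 / 3

def integrand(x):              # tan^2(x) sec^2(x)
    return math.tan(x)**2 / math.cos(x)**2

h = 1e-6
for x in [0.1, 0.5, 1.0, 1.3]:
    numeric = (F(x + h) - F(x - h)) / (2 * h)   # central difference F'(x)
    assert abs(numeric - integrand(x)) < 1e-4
print("F'(x) matches tan^2(x) sec^2(x) at all sample points")
```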
522,714
<p>$$ dz_t \sim O\left(\sqrt{dt}\,\right) $$</p> <p>$z$ is a Brownian motion random variable, for reference. I just don't understand what the $\sim O$ part means. I've looked up the page for Big O notation on wikipedia because I thought it might be related, but I can't see the link.</p>
rschwieb
29,335
<p>The idea you're having to change it to terms of $x^2$ isn't bad, but it seems a little overfancy. (Maybe I overlooked some economy about it, but I haven't seen the benefit yet.)</p> <p>Why not just calculate it directly? (Hints follow:)</p> <p>$x^2=3+2+2\sqrt{6}=5+2\sqrt{6}$</p> <p>$x^4=(5+2\sqrt{6})^2=25+24+20\sqrt{6}=49+20\sqrt{6}$</p> <p>$\dfrac{1}{x^4}=\dfrac{1}{49+20\sqrt{6}}=\dfrac{49-20\sqrt{6}}{2401-2400}=49-20\sqrt{6}$</p> <p>You can take it from here, I think.</p>
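(The $x$ here is evidently $x=\sqrt3+\sqrt2$.) The three algebraic steps check out numerically:

```python
import math

x = math.sqrt(3) + math.sqrt(2)
s6 = math.sqrt(6)

assert abs(x**2 - (5 + 2 * s6)) < 1e-9
assert abs(x**4 - (49 + 20 * s6)) < 1e-9
assert abs(1 / x**4 - (49 - 20 * s6)) < 1e-9
print("all three identities verified")
```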
1,758,194
<p>Consider the function f: {-1, +1} -> R defined by</p> <p>$f(x)= \arcsin (\frac{1+x^2}{2x})$.</p> <p>Due to the following two inequalities :</p> <p>(i) $1+x^2 \geq 2x$</p> <p>(ii)$1+x^2 \geq -2x$ , </p> <p>the function can only be defined at $x=1$ and $x=-1$. I have learnt that the epsilon delta definition only includes those values of $x$ which are in the domain of $f(x)$. But in this case, the function isn't defined on <em>either</em> side of x=1.</p> <p>So this is my question: Is it correct to say that the limit as $x$ approaches $1$ of $f(x$) is $\frac{\pi}{2}$ ?</p> <p>Can the above question be given a <strong>definitive</strong> "yes" or "no" answer, or must it(unfortunately) vary from person to person? </p> <p>If the latter, is the "precise" definition of a limit not precise enough?</p> <p>How can the answer be proved or disproved using the epsilon delta definition?</p> <p>I have also read that functions are by default continuous at isolated points. Can I conclude from the definition of continuity (the limit equals the value of the function evaluated at the point) that the limit must exist?</p> <p><strong>Note :</strong> Forgive my ignorance but I do not know a thing about topology. I'm looking for a detailed answer but in simple terms, preferably written in the language of calculus.</p> <p>Thanks for the help. </p>
Aloizio Macedo
59,234
<p>The bottom line is: </p> <blockquote> <p>Limits are only defined on limit points. Continuity is only defined on the domain.</p> </blockquote> <p>If we have a point $p$ on the domain such that $p$ is a limit point, then continuity at $p$ is equivalent to the limit being the value. However, if we have a point not on the domain, continuity does not make sense (although we can enlarge the domain and define it at the point in order to force continuity, but I digress), and if we have a point on the domain which is not a limit point, then limit does not make sense.</p> <p>The problem is explictly the following:</p> <p>The definition of limit is: <strong>Let $x_0$ be a limit point of the domain</strong>. Then, we say $f(x) \stackrel{x \to x_0}{\to} L$ if for all $\epsilon&gt;0$, there exists $\delta&gt;0$ such that $0&lt;|x-x_0|&lt; \delta \implies |f(x)-L| &lt; \epsilon$. </p> <p>If $x_0$ is not a limit point, there exists a $\delta$ such that there is no $x$ with $|x-x_0|&lt;\delta$. Therefore, any value of $L$ can be the limit. If we want to be really forceful and withstand the non-uniqueness of limit on metric spaces, then we can indeed let this be the case: every point on the codomain is a limit of $f(x)$ as $x \to x_0$ if $x_0$ is not a limit point. Then, for instance, it would hold that $f$ is continuous at $x_0$ if and only if $f(x) \stackrel{x \to x_0}{\to} f(x_0)$ even if $x_0$ is not a limit point, but that would be a very degenerate case.</p> <p>What we conclude is that the case of $x_0$ not being a limit point results in a nuisance of non-uniqueness. Continuity <strong>is different</strong>:</p> <p>The definition of continuity at $x_0$ is: $f$ is continuous at $x_0$ if for all $\epsilon&gt;0$ there exists $\delta&gt;0$ such that $|x-x_0|&lt;\delta \implies |f(x)-f(x_0)| &lt; \epsilon$.</p> <p>There is no vacuous problem. 
Rather, the definition gives immediately that if $x_0$ is not a limit point, then $f$ is automatically continuous at $x_0$ (go through the definition and verify this). There is a certain degree of triviality, but no vacuity.</p> <p>Summing up, your function $f$ <strong>is continuous</strong>, but limits at the points of the domain are not well-defined (at least, not unique when viewed under the usual definition). </p> <p>PS: I acknowledge that there may be different definitions of continuity. However, the one used here is the special case of the definition used in topology, which is by far the most useful and well-behaved.</p>
484,281
<p>I recently got interested in mathematics after having slacked through it in highschool. Therefore I picked up the book "Algebra" by I.M. Gelfand and A.Shen</p> <p>At problem 113, the reader is asked to factor $a^3-b^3.$</p> <p>The given solution is: $$a^3-b^3 = a^3 - a^2b + a^2b -ab^2 + ab^2 -b^3 = a^2(a-b) + ab(a-b) + b^2(a-b) = (a-b)(a^2+ab+b^2)$$</p> <p>I was wondering how the second equality is derived. From what is it derived, from $a^2-b^2$? I know that the result is the difference of cubes formula, however searching for it on the internet i only get exercises where the formula is already given. Can someone please point me in the right direction?</p>
Barry Cipra
86,747
<p>The second equality is obtained by first grouping terms in the middle expression and then factoring the grouped terms:</p> <p>$$\begin{align} a^3-a^2b+a^2b-ab^2+ab^2-b^3&amp;=(a^3-a^2b)+(a^2b-ab^2)+(ab^2-b^3)\cr &amp;=a^2(a-b)+ab(a-b)+b^2(a-b)\cr \end{align}$$</p> <p>If the OP is wondering where the middle expression came from in the first place, it's kind of a <em>deus ex machina</em>: All you're doing is sticking two $0$'s between $a^3$ and $-b^3$ (the expressions $-a^2b+a^2b$ and $-ab^2+ab^2$ are both obviously equal to $0$), but when you do, lo and behold, the regrouping and factoring work their magic. </p>
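The identity is easy to spot-check over the integers:

```python
# Exact check of a^3 - b^3 = (a - b)(a^2 + a*b + b^2) on a grid of integers
for a in range(-10, 11):
    for b in range(-10, 11):
        assert a**3 - b**3 == (a - b) * (a**2 + a * b + b**2)
print("identity holds for all tested (a, b)")
```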
1,422,859
<p>$$\sqrt{1000}-30.0047 \approx \varphi $$ $$[(\sqrt{1000}-30.0047)^2-(\sqrt{1000}-30.0047)]^{5050.3535}\approx \varphi $$ Simplifying Above expression we get<br> $$1.0000952872327798^{5050.3535}=1.1618033..... $$ Is this really true that $$[\varphi^2-\varphi]^{5050.3535}=\varphi $$</p>
JJacquelin
108,514
<p>Of course, this is an approximation. No need for more proof than those already given by Vincenzo Oliva and Claude Leibovici - kindest regards !</p> <p>I would add that it is easy to find a lot of such approximations.</p> <p>Some amazing ones, to be compared to :</p> <p>$1.6180339887...\simeq\frac{1+\sqrt5}{2}=\varphi$</p> <p>$1.6180339886...\simeq\cos (\sqrt2\: e^{-2})-\cos(\sqrt[3]{2}\:e^\pi )$</p> <p>$1.6180339884...\simeq\sqrt{\cosh(\gamma)+\cos(\gamma)}-\frac{\gamma^3}{\sin(5)}$ where $\gamma$ is the Euler constant.</p> <p>$1.6180339881...\simeq\cosh(G^2\sinh(1))+\cos\left(\frac{\sqrt[3]\pi}{\cos(3)} \right)$ where $G$ is the Catalan constant.</p> <p>They come from : <a href="https://fr.scribd.com/doc/14161596/Mathematiques-experimentales" rel="nofollow">https://fr.scribd.com/doc/14161596/Mathematiques-experimentales</a> (page 4)</p> <p>This paper roughly explains the method to compute a lot of approximations of this kind (In French, presently no translation available).</p>
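These are straightforward to verify numerically; for instance the second one (a sanity check of the quoted digits):

```python
import math

phi = (1 + math.sqrt(5)) / 2
approx = math.cos(math.sqrt(2) * math.exp(-2)) - math.cos(2 ** (1 / 3) * math.exp(math.pi))
print(phi, approx)
assert abs(approx - phi) < 1e-6
```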
2,532,327
<p>Expand the following function in Legendre polynomials on the interval [-1,1] :</p> <p>$$f(x) = |x|$$</p> <p>The Legendre polynomials $p_n (x)$ are defined by the formula :</p> <p>$$p_n (x) = \frac {1}{2^n n!} \frac{d^n}{dx^n}(x^2-1)^n$$</p> <p>for $n=0,1,2,3,...$</p> <p>My attempt :</p> <p>we have using the fact that $|x|$ is an even function. $$a_0 = \frac {2}{\pi}$$ $$ a_n= \frac {2}{\pi} \int_{-1}^{1}x\cos(nx)\,dx$$</p> <p>Then what is the next step ?</p>
Steven Yang
675,191
<p>We know that the Fourier-Legendre series has the form <span class="math-container">$$ f(x)=\sum_{n=0}^\infty C_n P_n(x) $$</span> where <span class="math-container">$$ C_n=\frac{2n+1}{2} \int_{-1}^{1}f(x)P_n(x)\,dx $$</span> So now we are going to calculate the result of <span class="math-container">$$ \frac{2n+1}{2} \int_{-1}^{1}|x|P_n(x)\,dx $$</span> As <span class="math-container">$|x|$</span> is an even function, and the parity of <span class="math-container">$P_n(x)$</span> depends on the parity of <span class="math-container">$n$</span>, the odd terms vanish and we only need to consider<br> <span class="math-container">$$ \int_{-1}^{1}|x|P_{2k}(x)\,dx\ \ k=0,1,2... $$</span> and <span class="math-container">$$ \int_{-1}^{1}|x|P_{2k}(x)\,dx \\ =\int_{-1}^{0}-xP_{2k}(x)\,dx + \int_{0}^{1}xP_{2k}(x)\,dx\\ =2\int_{0}^{1}xP_{2k}(x)\,dx $$</span> As <span class="math-container">$$ (n+1)P_{n+1}(x)-x(2n+1)P_{n}(x)+nP_{n-1}(x)=0 $$</span> we get <span class="math-container">$$ 2\int_{0}^{1}xP_{2k}(x)\,dx\\ =2(\frac{2k+1}{4k+1}\int_{0}^{1}P_{2k+1}(x)\,dx+\frac{2k}{4k+1}\int_{0}^{1}P_{2k-1}(x)\,dx) $$</span> As <span class="math-container">$$ \int_{0}^{1}P_{n}(x)\,dx=\begin{cases} 0&amp; n=2k,\ k\ge 1\\ \frac{(-1)^k (2k-1)!!}{(2k+2)!!}&amp; n=2k+1 \end{cases} $$</span> We get <span class="math-container">$$ C_{2k}=(2k+1)\frac{(-1)^k (2k-1)!!}{(2k+2)!!}+2k\frac{(-1)^{k-1} (2k-3)!!}{(2k)!!}\\ =\begin{cases} \frac{1}{2}&amp; k=0\\ \frac{(-1)^{k+1} (4k+1)}{2^{2k}(k-1)!}\frac{(2k-2)!}{(k+1)!}&amp; k&gt;0 \end{cases} $$</span> So <span class="math-container">$$ |x|=\frac{1}{2}+\sum_{k=1}^\infty \frac{(-1)^{k+1} (4k+1)}{2^{2k}(k-1)!}\frac{(2k-2)!}{(k+1)!} P_{2k}(x) $$</span></p>
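The closed form for $C_{2k}$ can be cross-checked against direct numerical evaluation of $\frac{4k+1}{2}\int_{-1}^1 |x| P_{2k}(x)\,dx$, here using NumPy's Gauss–Legendre quadrature (a sanity check, not part of the derivation):

```python
import math
import numpy as np
from numpy.polynomial import legendre

def c2k_closed_form(k):
    if k == 0:
        return 0.5
    return ((-1) ** (k + 1) * (4 * k + 1) * math.factorial(2 * k - 2)
            / (2 ** (2 * k) * math.factorial(k - 1) * math.factorial(k + 1)))

# |x| P_2k(x) is a polynomial on [0, 1], so integrate there and double.
nodes, weights = legendre.leggauss(200)
x = 0.5 * (nodes + 1)          # map [-1, 1] -> [0, 1]
w = 0.5 * weights

for k in range(6):
    coeffs = np.zeros(2 * k + 1)
    coeffs[2 * k] = 1.0                     # the single polynomial P_{2k}
    integral = 2 * np.sum(w * x * legendre.legval(x, coeffs))  # ∫_{-1}^1 |x| P_2k
    quad = (4 * k + 1) / 2 * integral
    assert abs(quad - c2k_closed_form(k)) < 1e-10
print("closed form matches quadrature for k = 0..5")
```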
1,037,621
<p>There are 20 people at a chess club on a certain day. They each find opponents and start playing. How many possibilities are there for how they are matched up, assuming that in each game it does matter who has the white pieces (and who has the black ones). </p> <p>I thought it might be $$\large2^{\frac{20(20-1)}2}$$ is this correct?</p>
Uddipan Paul
537,089
<p>Here we have 2 things to keep in mind:</p> <ol> <li>The number of ways of matching up</li> <li>The arrangement (who plays white) matters</li> </ol> <p>Firstly, the number of ways of matching up players:</p> <p>1st player: 19 player options; chooses: 1; players left after choosing: 18</p> <p>2nd player: 17 player options; chooses: 1; players left after choosing: 16</p> <p>... continuing this way, we end up with:</p> <p>10th player: 1 player option; chooses: 1; players left after choosing: 0</p> <p>Hence, the number of ways to match up is</p> <p>$$ 19 * 17 * 15 * ... 3*1 = \frac{20!}{2^{10}*10!} $$</p> <p>Now, considering arrangements, for each pair of players, there are 2 possibilities of choosing white/black.</p> <p>So, for 10 pairs of players, the number of arrangements is $$ {2^{10}} $$</p> <p>So, the number of ways to match up players, considering arrangements of black and white sides, is</p> <p>$$ 2^{10}* \frac{20!}{2^{10}*10!} = \frac{20!}{10!} $$ </p>
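A brute-force count for small club sizes agrees with the $\frac{(2m)!}{m!}$ formula (a small sketch, not part of the original answer):

```python
import math
from itertools import permutations

def count_matchings_with_colors(n):
    """Count ways to split n players into ordered (white, black) pairs,
    by brute force over permutations, deduplicating the order of the pairs."""
    seen = set()
    for perm in permutations(range(n)):
        # consecutive players form (white, black) pairs; the order of the
        # pairs themselves does not matter, so store them as a frozenset
        pairs = frozenset((perm[i], perm[i + 1]) for i in range(0, n, 2))
        seen.add(pairs)
    return len(seen)

for n in (2, 4, 6):
    assert count_matchings_with_colors(n) == math.factorial(n) // math.factorial(n // 2)
print("brute force matches n!/(n/2)! for n = 2, 4, 6")
```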
1,037,621
<p>There are 20 people at a chess club on a certain day. They each find opponents and start playing. How many possibilities are there for how they are matched up, assuming that in each game it does matter who has the white pieces (and who has the black ones). </p> <p>I thought it might be $$\large2^{\frac{20(20-1)}2}$$ is this correct?</p>
acala
910,755
<p>First we choose 2 from 20 to form pair 1, then choose 2 from 18 to form pair 2... until last 2 players to form pair 10: there are totally <span class="math-container">$\binom{20}{2}\binom{18}{2}\ldots\binom{2}{2}$</span> possible combinations to form 10 ordered pairs.</p> <p>However, in this problem, order of pairs does not matter. Since there are 10! ways to arrange 10 pairs, we should divide it by 10!.</p> <p>Also, in each pair, either player can play black or white, so we should multiply 2 for each pair, that's <span class="math-container">$2^{10}$</span>.</p> <p>Put them together, we have: <span class="math-container">$$\frac{\binom{20}{2}\binom{18}{2}\ldots\binom{2}{2}}{10!}\times2^{10} = \frac{20!}{10!}$$</span></p>
1,142,530
<p>Find an equation of the plane. The plane that passes through the point (−3, 2, 1) and contains the line of intersection of the planes x + y − z = 4 4x − y + 5z = 2</p> <p>I know the normal to plane 1 is &lt;1,1,-1> and the normal to plane 2 is &lt;4,-1,5>. The cross product of these 2 would give a vector that is in the plane I need to find.</p> <p>P1 x P2 = &lt;4,-9,-5></p> <p>So now I have a point (-3,2,1) and a vector &lt;4,-9,-5> on the plane but I'm not sure what to do next.</p>
PdotWang
212,686
<p>The line passes through $z=0$ at $(1.2, 2.8, 0)$.</p> <p>A vector from $(-3,2,1)$ to this point is $(4.2,0.8,-1)$.</p> <p>This vector crossed with $(4,-9,-5)$ gives $(-13,17,-41)$; its negative, $(13,-17,41)$, is equally a normal of the plane.</p> <p>The result is:</p> <p>$$13(x-(-3))+(-17)(y-2)+(41)(z-1)=0$$</p> <p>The left-hand side is the scalar product of the normal vector with a vector lying in the plane, which must vanish.</p>
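The computation can be verified in code: the resulting plane should contain the given point and two points of the line, and the constructed normal should be orthogonal to the line's direction:

```python
def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

direction = cross((1, 1, -1), (4, -1, 5))        # direction of the line
assert direction == (4, -9, -5)

p_line = (1.2, 2.8, 0)                           # point on the line (z = 0)
p_given = (-3, 2, 1)
in_plane = tuple(a - b for a, b in zip(p_line, p_given))   # (4.2, 0.8, -1)

# normal == (-13, 17, -41) up to rounding; (13, -17, 41) is its negative
normal = cross(in_plane, direction)

def plane(pt):                                   # normal . (pt - p_given)
    return dot(normal, tuple(a - b for a, b in zip(pt, p_given)))

p2 = tuple(a + b for a, b in zip(p_line, direction))   # second point on the line
assert abs(plane(p_given)) < 1e-9
assert abs(plane(p_line)) < 1e-9
assert abs(plane(p2)) < 1e-9
assert abs(dot(normal, direction)) < 1e-9
print("plane contains the point and the line; normal is orthogonal to the line")
```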
2,780,186
<blockquote> <p>SOA Exam C Question #140 <a href="https://i.stack.imgur.com/VqAE6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VqAE6.png" alt="SOA Exam C Question #140"></a></p> <p>Solution <a href="https://i.stack.imgur.com/6YK4b.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6YK4b.png" alt="enter image description here"></a></p> </blockquote> <p>My question is, how did they solve the expected counts 4.8, 3.3 etc? I understand the overall solution but I can't just get how they ended up with that expected count. Thank you.</p>
heropup
118,193
<p>The expected counts in each interval are computed by taking the hypothesized probability of an observation occurring in that interval, and multiplying by the total number of observed events in your sample. So for the interval $(0, 500]$, we have $F(500) = 0.27$, so for $n = 30$, we should see $(0.27)(30) = 8.1$ events.</p> <p>For the next interval, you simply calculate $$\Pr[500 &lt; X \le 2498] = F(2498) - F(500) = 0.55 - 0.27 = 0.28,$$ then multiply by $30$ to get $8.4$. The rest is similar.</p> <p>However, since you don't know in advance which groupings will result in each expected count in each group exceeding $5$, you would calculate the expected count for each percentile in the second table; so this is where the $4.8$, $3.3$, etc. numbers come from: they arise from computing $$30 F(310), \\ 30(F(500) - F(310)), \\ 30(F(2498) - F(500)), \\ \text{etc}.$$ Then you group them so that each expected count is at least $5$, and this results in the table that is provided. </p>
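In code, with the CDF values quoted above (only these breakpoints are visible in this excerpt, and $F(0)=0$ is assumed):

```python
# Hypothesized CDF values quoted in the solution excerpt; F(0) = 0 assumed
F = {0: 0.0, 310: 0.16, 500: 0.27, 2498: 0.55}
n = 30

breakpoints = sorted(F)
for lo, hi in zip(breakpoints, breakpoints[1:]):
    expected = n * (F[hi] - F[lo])
    print(f"({lo}, {hi}]: expected count = {expected:.1f}")
# (0, 310]: expected count = 4.8
# (310, 500]: expected count = 3.3
# (500, 2498]: expected count = 8.4
```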
2,780,186
<blockquote> <p>SOA Exam C Question #140 <a href="https://i.stack.imgur.com/VqAE6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VqAE6.png" alt="SOA Exam C Question #140"></a></p> <p>Solution <a href="https://i.stack.imgur.com/6YK4b.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6YK4b.png" alt="enter image description here"></a></p> </blockquote> <p>My question is, how did they solve the expected counts 4.8, 3.3 etc? I understand the overall solution but I can't just get how they ended up with that expected count. Thank you.</p>
Michael Hardy
11,667
<p>The proportion less than or equal to $310$ according to the null hypothesis is $0.16$.</p> <p>The proportion less than or equal to $500$ but more than $310$ is $0.27-0.16= 0.11.$</p> <p>The proportion less than or equal to $2498$ but greater than $500$ is $0.55-0.27 = 0.28$</p> <p>And so on.</p> <p>Now multiply all of those by $30,$ since that's how many claims you've got.</p> <p>Then apply the rule that a group where the expected number is less than $5$ must be conjoined with an adjacent group.</p>
2,027,556
<p>The definition I have of a tensor product of finite dimensional vector spaces $V,W$ over a field $F$ is as follows: Let $v_1, ..., v_m$ be a basis for $V$ and let $w_1,...,w_n$ be a basis for $W$. We define $V \otimes W$ to be the set of <strong>formal linear combinations</strong> of the $mn$ symbols $v_i \otimes w_j$. That is, a typical element of $V \otimes W$ is $$\sum c_{ij}(v_i \otimes w_j).$$ The space $V \otimes W$ is clearly a finite dimensional vector space of dimension $mn$. We define a bilinear map $$B: V \times W \to V \otimes W$$ by the formula $$B(\sum a_iv_i, \sum b_jw_j) = \sum_{i,j}a_ib_j(v_i \otimes w_j). $$</p> <p>Why does $V \otimes W$ have to be the set of <strong>formal linear combinations</strong> of the symbols $v_{i} \otimes w_j$? What would be wrong in defining $V \otimes W$ simply as the set of <strong>linear combinations</strong> of the symbols $v_i \otimes w_j$?</p> <p>Thanks.</p>
Mattia Ghio
360,782
<p>I propose you this answer: <a href="https://math.stackexchange.com/questions/1029851/understanding-the-meaning-of-formal-linear-combination-and-tensor-product">understanding the meaning of formal linear combination and tensor product</a></p> <p>In particular: "Regarding the tensor product, we want it to be bilinear. In $R\langle V\times W\rangle$, it is never true that $(a+b)\times c$ is equal to $a\times c+b\times c$. In fact, $0\times 0$ is not even the zero vector of $R\langle V\times W\rangle$."</p>
2,294,321
<blockquote> <p>Suppose $F(x)$ is a continuously differentiable (that is, first derivatives exist and are continuous) vector field in $\mathbb{R}^3$ that satisfies the bound $$|F(x)| \leq \frac{1}{1 + |x|^3}$$ Show that $\int\int\int_{\mathbb{R}^3} \text{div} F dx = 0$.</p> </blockquote> <p>Attempted proof - Suppose $F(x)$ is a continuously differentiable vector field in $\mathbb{R}^3$ such that $$|F(x)| \leq \frac{1}{1 + |x|^3}$$ From the <a href="http://nptel.ac.in/courses/122101003/downloads/Lecture-43.pdf" rel="nofollow noreferrer">definition of vector field</a> we have $F:D\subseteq \mathbb{R}^3\to \mathbb{R}^3$ where $D$ is an open subset of $\mathbb{R}^3$. For every $x\in \mathbb{R}^3$, we can write $$F(x) = F_1(x) i + F_2(x) j + F_3(x) k$$ where $ i,j,k$ are the unit vectors. Since $F(x)$ is continuously differentiable, so are the component functions. </p> <p>I am a bit lost on trying to go from here. I know that $$\text{div} F = \nabla \cdot F = \left(\frac{\partial}{\partial x},\frac{\partial}{\partial y},\frac{\partial}{\partial z}\right)\cdot \left(F_1,F_2,F_3\right) = \frac{\partial F_1}{\partial x} + \frac{\partial F_2}{\partial y} + \frac{\partial F_3}{\partial z}$$ I think I need to incorporate the bound on $|F(x)|$ in order to show the desired result. Perhaps this is best done using an epsilon-delta sort of proof but I am not sure. Any suggestions are greatly appreciated.</p> <p>Attempted proof 2 - Let $\{B(0,n)\}_{n=1}^{\infty}$ be a sequence of balls. Since $F(x)$ is a continuously differentiable vector field, its components are also continuously differentiable. We know that the surface area of $\partial B(0,n)$ is $4\pi n^2$ and $$|F|\leq \frac{1}{1 + n^3}$$ Thus from the divergence theorem we have $$\int\int\int_{\mathbb{R}^3}\text{div}F dx = \int\int_{B(0,n)}F \times 0 = 0$$</p> <p>Attempted proof 3 - Let $\{B(0,n)\}_{n=1}^{\infty}$ be a sequence of balls. 
Since $F(x)$ is a continuously differentiable vector field, its components are also continuously differentiable. We know that the surface area of $\partial B(0,n)$ is $4\pi n^2$ and $$|F|\leq \frac{1}{1 + n^3}$$ Let $S$ be the boundary surface of $B(0,n)$ with positive orientation then from the divergence theorem $$\iint_{S}F dS = \iiint_{B(0,n)}F dB(0,n) $$ Thus if we let $n\to \infty$ we see that $$\iiint_{B(0,n)}F dB(0,n) = 0$$</p> <p>Attempted proof 4 - Let $\{B(0,n)\}_{n=1}^{\infty}$ be a sequence of balls. Since $F(x)$ is continuously differentiable vector field, so are its components. Let $S$ be the boundary surface of $B(0,n)$ with positive orientation. We know that the surface area of $\partial B(0,n)$ is $4\pi n^2$ and that $$|F|\leq \frac{1}{1 + n^3}$$ Thus if we apply the divergence theorem on the sequence of balls then we have $$\Big|\int_{\mathbb{R}^3}\text{div}F\,dx\Big| = \lim_n\Big|\int_{B(0,n)}\text{div}F\,dx\Big| = \lim_n\Big|\int_{\partial B(0,n)}F \cdot \nu\,dS\Big| \le \lim_n\frac{4\pi n^2}{1 + n^3} = 0.$$ Thus the result follows.</p>
Giovanni
263,115
<p>The third attempt seems to be on the right track; however, the divergence is missing from the integrands, and it is not entirely clear how you are using the given bound. To expand on Gio67's answer,</p> <p>$$\Big|\int_{\mathbb{R}^3}\text{div}F\,dx\Big| = \lim_n\Big|\int_{B(0,n)}\text{div}F\,dx\Big| = \lim_n\Big|\int_{\partial B(0,n)}F \cdot \nu\,dS\Big| \le \lim_n\frac{4\pi n^2}{1 + n^3} = 0.$$</p>
3,741,859
<p>I am trying to prove that, for all non-negative integers <span class="math-container">$x$</span> and all non-negative real numbers <span class="math-container">$p$</span>, <span class="math-container">$$ \left(p-x\right)\left(x+1\right)^p+x^{p+1}\geq0. $$</span> I've been at this for a while and I'm stuck. I've tried finding positive functions smaller than this to compare it to, but no luck so far. If <span class="math-container">$p$</span> was an integer I might be able to do something with binomial coefficients, but I'm trying to solve for the general case.</p>
Joseph Camacho
731,433
<p>Equivalently, we need to prove<br /> <span class="math-container">\begin{align*} &amp;x \cdot x^p \ge (x - p) \cdot (x + 1)^p\\ &amp;\Longleftrightarrow \left(1 + \frac1x\right)^p \le \frac{x}{x - p}. \end{align*}</span> (When $x \le p$, or $p = 0$, the original inequality is immediate, so assume $x &gt; p &gt; 0$.)</p> <p>But<br /> <span class="math-container">\begin{align*} \left(1 + \frac1x\right)^p &amp;= 1 + \frac1x\binom{p}{1} + \frac1{x^2}\binom{p}{2} + \cdots\\ &amp; \le 1 + \frac{p}{x} + \frac{p^2}{x^2} + \cdots\\ &amp;= \frac{1}{1 - \frac px} = \frac{x}{x - p}. \end{align*}</span></p>
3,741,859
<p>I am trying to prove that, for all non-negative integers <span class="math-container">$x$</span> and all non-negative real numbers <span class="math-container">$p$</span>, <span class="math-container">$$ \left(p-x\right)\left(x+1\right)^p+x^{p+1}\geq0. $$</span> I've been at this for a while and I'm stuck. I've tried finding positive functions smaller than this to compare it to, but no luck so far. If <span class="math-container">$p$</span> was an integer I might be able to do something with binomial coefficients, but I'm trying to solve for the general case.</p>
robjohn
13,854
<p>For <span class="math-container">$x,p\ge0$</span>, <span class="math-container">$$ \begin{align} \left(1-\frac1{x+1}\right)^{p+1}&amp;\ge1-\frac{p+1}{x+1}\tag1\\ \color{#C00}{((x+1)-1)^{p+1}}&amp;\ge\color{#090}{(x+1)^{p+1}-(p+1)(x+1)^p}\tag2\\[6pt] \color{#090}{((p+1)-(x+1))(x+1)^p}+\color{#C00}{x^{p+1}}&amp;\ge0\tag3\\[6pt] (p-x)(x+1)^p+x^{p+1}&amp;\ge0\tag4\\ \end{align} $$</span> Explanation:<br /> <span class="math-container">$(1)$</span>: <a href="https://en.wikipedia.org/wiki/Bernoulli%27s_inequality" rel="nofollow noreferrer">Bernoulli's Inequality</a><br /> <span class="math-container">$(2):$</span> multiply by <span class="math-container">$(x+1)^{p+1}$</span><br /> <span class="math-container">$(3)$</span>: <span class="math-container">$(x+1)-1=x$</span> on the left side<br /> <span class="math-container">$\phantom{\text{(3):}}$</span> move <span class="math-container">$(x+1)^{p+1}-(p+1)(x+1)^p$</span> from the right side<br /> <span class="math-container">$(4)$</span>: <span class="math-container">$(p+1)-(x+1)=p-x$</span></p>
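A grid check of the target inequality over real $x,p\ge 0$ (both arguments above in fact work for real $x$, not just integers):

```python
# Numerical spot check of (p - x)(x + 1)^p + x^(p + 1) >= 0 for x, p >= 0
def expr(x, p):
    return (p - x) * (x + 1) ** p + x ** (p + 1)

values = [i / 4 for i in range(41)]        # 0, 0.25, ..., 10
for x in values:
    for p in values:
        assert expr(x, p) >= -1e-9         # tiny tolerance for float rounding
print("inequality holds on the whole grid")
```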
69,472
<blockquote> <p><strong>Theorem 1</strong><br> If $g \in C[a,b]$ and $g(x) \in [a,b]\ \forall x \in [a,b]$, then $g$ has a fixed point in $[a,b].$<br> If in addition, $g&#39;(x)$ exists on $(a,b)$ and a positive constant $k &lt; 1$ exists with $$|g&#39;(x)| \leq k, \text{ for all } x \in (a, b)$$ then the fixed point in $[a,b]$ is unique. </p> <p><strong>Fixed-point Theorem</strong> Let $g \in C[a,b]$ be such that $g(x) \in [a,b]$, for all $x$ in $[a,b]$. Suppose, in addition, that $g&#39;$ exists on $(a,b)$ and that a constant $0 &lt; k &lt; 1$ exists with $$|g&#39;(x)| \leq k, \text{ for all } x \in (a, b)$$ Then, for any number $p_0$ in $[a,b]$, the sequence defined by $$p_n = g(p_{n-1}), n \geq 1$$ converges to the unique fixed-point in $[a,b]$</p> </blockquote> <p>These are two theorems that I have learned, and I'm having a hard time with this problem:</p> <blockquote> <p>Given a function $f(x)$, how can we find the interval $[a,b]$ on which fixed-point iteration will converge?</p> </blockquote> <p>Besides guess and check, I couldn't find any other way to solve this problem. I tried to link the above theorems, but it involves two variables, so I have a feeling it can't be solved algebraically. I wonder: is there a general way to find the interval of convergence rather than trial and error? Thank you.</p>
Did
6,179
<p>Your suggestion to rely on the iteration of the function $f$ defined by $f(x)=2x-\sqrt3$ to compute $\sqrt3$ is not used in practice for at least two reasons. First it presupposes one is able to compute $\sqrt3$, in which case the approximation procedure is useless. Second, $\sqrt3$ is not an attractive point of $f$. This means that for every $x_0\ne\sqrt3$, the sequence $(x_n)$ defined recursively by $x_{n+1}=f(x_n)$ will not converge to $\sqrt3$, and in fact, in the present case $x_n\to+\infty$ or $x_n\to-\infty$ depending on whether $x_0&gt;\sqrt3$ or $x_0&lt;\sqrt3$.</p> <p>By contrast, using $f(x)=\frac12\left(x+\frac3x\right)$ and starting, for example, from a positive rational $x_0$ produces a sequence $(x_n)$ of rational numbers whose computation does not require to know the value of $\sqrt3$ and which converges to $\sqrt3$. Additionally $(x_n)$ converges to $\sqrt3$ extremely fast since after a while, one step of the algorithm replaces the error by roughly its square, thus the rate of convergence is <a href="http://en.wikipedia.org/wiki/Rate_of_convergence#Basic_definition" rel="noreferrer">quadratic</a>, in the sense that $x_{n+1}-\sqrt3$ is of the order of $(x_n-\sqrt3)^2$. </p> <p>If you try to simulate the 20 first terms for $x_0=1$, for example, you will observe this spectacular speed of convergence which roughly says that each new iteration <strong>doubles up</strong> the number of significant digits of the approximant.</p>
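The contrast is easy to see in code: the iteration $x\mapsto 2x-\sqrt3$ blows up, while $x \mapsto \frac12\left(x+\frac3x\right)$ roughly doubles the number of correct digits per step:

```python
import math

target = math.sqrt(3)

# Repelling iteration x -> 2x - sqrt(3): note it already needs sqrt(3)!
x = 1.0
for _ in range(20):
    x = 2 * x - target
print(abs(x - target))   # huge: the error doubles at each step

# Babylonian / Newton iteration x -> (x + 3/x) / 2: rationals only
x = 1.0
errors = []
for _ in range(7):
    x = (x + 3 / x) / 2
    errors.append(abs(x - target))
print(errors)            # errors shrink roughly quadratically
assert errors[-1] < 1e-12
```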
1,640,373
<p>The definition of a topological space is a set with a collection of subsets (the topology) satisfying various conditions. A metric topology is given as the set of open subsets with respect to the metric. But if I take an arbitrary topology for a metric space, will this set coincide with the metric topology? </p> <p>I'm trying to justify why we call the elements of a topology "open". If my above question is true, then at least in a metric space, the set of open sets is equivalent to the topology of the metric space. So am I right in thinking that when we remove the metric, we are generalising this equivalence by defining the open sets as those that satisfy the conditions of a topology?</p>
fosho
166,258
<p>Yes you are right,$$K = \ker{T} = \{(a,0,b)| a,b\in \mathbb{R}\}$$</p> <p>A basis for $K$ is then</p> <p>$$\{(1,0,0),(0,0,1)\}$$</p> <p>Since given $(a,0,b)$ with $a,b\in\mathbb{R}$ we have $$(a,0,b) = a\cdot(1,0,0)+b\cdot(0,0,1)$$</p>
777,691
<p>I would like to prove if $a \mid n$ and $b \mid n$ then $a \cdot b \mid n$ for $\forall n \ge a \cdot b$ where $a, b, n \in \mathbb{Z}$</p> <p>I'm stuck.<br> $n = a \cdot k_1$<br> $n = b \cdot k_2$<br> $\therefore a \cdot k_1 = b \cdot k_2$</p> <p>EDIT: so for <a href="http://en.wikipedia.org/wiki/Fizz_buzz" rel="nofollow">fizzbuzz</a> it wouldn't make sense to check to see if a number is divisible by 15 to see if it's divisible by both 3 and 5?</p>
Andreas K.
13,272
<p>If <span class="math-container">$a$</span> and <span class="math-container">$b$</span> are relatively prime, then <span class="math-container">$x a + y b = 1$</span> for some integers <span class="math-container">$x, y$</span> (Bezout's identity). Then <span class="math-container">$$ n = k_1 a \overset{\times yb}{\Leftrightarrow} (yb)n = (yk_1) ab\\ n = k_2 b \overset{\times xa}{\Leftrightarrow} (xa)n = (xk_2) ab $$</span> Adding the two equations together we get <span class="math-container">$(xa + yb)n = (yk_1 + x k_2)ab \Leftrightarrow n = k (ab)$</span>, where <span class="math-container">$k = yk_1 + xk_2$</span> is an integer. This proves that <span class="math-container">$ab \mid n$</span>.</p>
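The construction in this argument can be run directly, computing the Bézout coefficients with the extended Euclidean algorithm (a sketch under the coprimality assumption):

```python
def extended_gcd(a, b):
    """Return (g, x, y) with g = gcd(a, b) = x*a + y*b."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

a, b, n = 9, 20, 360          # a | n, b | n, gcd(a, b) = 1
g, x, y = extended_gcd(a, b)
assert g == 1 and x * a + y * b == 1

k1, k2 = n // a, n // b       # n = k1*a = k2*b
k = y * k1 + x * k2           # exactly as in the proof above
assert n == k * (a * b)
print(f"n = {k} * ({a}*{b})")
```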
1,387,454
<p>What is the sum of all <strong>non-real</strong>, <strong>complex roots</strong> of this equation -</p> <p>$$x^5 = 1024$$</p> <p>Also, please provide explanation about how to find sum all of non real, complex roots of any $n$ degree polynomial. Is there any way to determine number of real and non-real roots of an equation?</p> <hr> <p>Please not that I'm a high school freshman (grade 9). So please provide simple explanation. Thanks in advance!</p>
Macavity
58,320
<p><strong>Hint</strong>: By Vieta, sum of all roots is $0$, and the only real root is $4$.</p> <p>P.S. For a simple argument that there is only one real root, apply Descartes' rule of signs on $x^5-1024$.</p>
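To see the hint numerically (a Python sketch I am adding, not from the answer): the five roots of $x^5 = 1024$ are $4e^{2\pi i k/5}$; they sum to $0$ by Vieta since $x^5 - 1024$ has no $x^4$ term, and the four non-real ones therefore sum to $-4$.

```python
import cmath

# the five fifth roots of 1024 = 4^5 are 4*exp(2*pi*i*k/5)
roots = [4 * cmath.exp(2j * cmath.pi * k / 5) for k in range(5)]

total = sum(roots)  # 0 by Vieta (no x^4 term in x^5 - 1024)
real_roots = [r for r in roots if abs(r.imag) < 1e-9]
nonreal_sum = sum(r for r in roots if abs(r.imag) >= 1e-9)
```

There is exactly one real root, $4$, so the non-real roots sum to $0 - 4 = -4$.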
2,294,969
<p>I can't seem to find a path to show that:</p> <p>$$\lim_{(x,y)\to(0,0)} \frac{x^2}{x^2 + y^2 -x}$$</p> <p>does not exist.</p> <p>I've already tried with $\alpha(t) = (t,0)$, $\beta(t) = (0,t)$, $\gamma(t) = (t,mt)$ and with some parabolas... they all led me to the limit being $0$ but this exercise says that there's no limit when approaching $(0,0)$. Hints? Thank you.</p>
Miguel
259,671
<p>We can also try with parabolas with the axes exchanged, e.g. $x=y^2$: $$\lim_{(x,y)\to (0,0),x=y^2}\frac{x^2}{x^2+y^2-x} = \lim_{y\to 0}\frac{y^4}{y^4+y^2-y^2}=1 $$</p>
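A numerical illustration of the two-path argument (my own Python sketch, using plain floating point): along lines through the origin the values tend to $0$, but along $x = y^2$ they are identically $1$.

```python
def f(x, y):
    return x * x / (x * x + y * y - x)

# along the line y = 2x the values tend to 0 ...
line_vals = [f(t, 2 * t) for t in (1e-2, 1e-3, 1e-4)]

# ... but along the parabola x = y^2 they stay at 1
parabola_vals = [f(t * t, t) for t in (1e-2, 1e-3, 1e-4)]
```

Two paths with different limiting values mean the two-variable limit does not exist.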
2,294,969
<p>I can't seem to find a path to show that:</p> <p>$$\lim_{(x,y)\to(0,0)} \frac{x^2}{x^2 + y^2 -x}$$</p> <p>does not exist.</p> <p>I've already tried with $\alpha(t) = (t,0)$, $\beta(t) = (0,t)$, $\gamma(t) = (t,mt)$ and with some parabolas... they all led me to the limit being $0$ but this exercise says that there's no limit when approaching $(0,0)$. Hints? Thank you.</p>
zhw.
228,045
<p>In fact this function, let's call it $f(x,y),$ is so badly behaved near $(0,0)$ that it maps every neighborhood of $(0,0)$ onto $\mathbb R.$</p> <p>More precisely, set $E= \{(x,y): x\in [0,1], y = \sqrt {x-x^2}\}.$ Then the natural domain of $f$ is $U=\mathbb R^2\setminus E.$</p> <p>Claim: For every $r&gt;0,$ $f(U\cap D(0,r)) = \mathbb R.$</p> <p>Here $D(0,r)$ is the open disc of radius $r$ centered at $(0,0).$</p> <p>Proof of claim: This is actually easy. As you approach the point $(r/2, \sqrt {(r/2)-(r/2)^2})$ from below and above you get the limits $-\infty,\infty$ respectively. Since $f$ is continuous in $U\cap D(0,r),$ and $U\cap D(0,r)$ is connected, $f(U\cap D(0,r))$ is a connected subset of $\mathbb R.$ There is no choice other than $f(U\cap D(0,r)) =\mathbb R,$ as claimed.</p>
2,486,095
<p>Given an acute-angled triangle $\Delta ABC$ having it's Orthocentre at $H$ and Circumcentre at $O$. Prove that $\vec{HA} + \vec{HB} + \vec{HC} = 2\vec{HO}$</p> <p>I realise that $\vec{HO} = \vec{BO} + \vec{HB} = \vec{AO} + \vec{HA} =\vec{CO} + \vec{HC}$ which leads to $3\vec{HO} = (\vec{HA} + \vec{HB} + \vec{HC}) + (\vec{AO} + \vec{BO} + \vec{CO})$</p> <p>How can I prove that $(\vec{AO} + \vec{BO} + \vec{CO}) = \vec{HO}$ in order to solve the problem?</p> <p>Thank you for answering!</p>
Community
-1
<p>$$\begin{matrix} &amp; 1.2 ,&amp; 1.42 ,&amp; 2.8275 \\ &amp; 1.1975 ,&amp; 1.98525 ,&amp; 2.89940625 \\ &amp; 0.98578125 ,&amp; 1.978459375 ,&amp; 3.00441679687 \\ &amp; 1.00949960937 ,&amp; 2.00183332031 ,&amp; 2.99547936035 \\ &amp; 0.998362543945 ,&amp; 1.99893212646 ,&amp; 3.00068524384 \\ &amp; 1.00056419818 ,&amp; 2.00019346859 ,&amp; 2.99974208448 \\ &amp; 0.999871029462 ,&amp; 1.99993551984 ,&amp; 3.00005642525 \\ &amp; 1.00003707711 ,&amp; 2.00001499276 ,&amp; 2.99998333554 \\ &amp; 0.999990670003 ,&amp; 1.99999573411 ,&amp; 3.00000413176 \\ &amp; 1.00000253271 ,&amp; 2.00000107962 ,&amp; 2.9999988686 \\ &amp; 0.99999934187 ,&amp; 1.99999970791 ,&amp; 3.00000029255 \\ &amp; 1.00000017535 ,&amp; 2.00000007605 ,&amp; 2.99999992183 \\ &amp; 0.999999953948 ,&amp; 1.99999997976 ,&amp; 3.0000000205 \\ &amp; 1.00000001219 ,&amp; 2.00000000532 ,&amp; 2.99999999457 \\ &amp; 0.999999996786 ,&amp; 1.99999999859 ,&amp; 3.00000000143 \\ &amp; 1.00000000085 ,&amp; 2.00000000037 ,&amp; 2.99999999962 \\ &amp; 0.999999999776 ,&amp; 1.9999999999 ,&amp; 3.0000000001 \\ &amp; 1.00000000006 ,&amp; 2.00000000003 ,&amp; 2.99999999997 \\ &amp; 0.999999999984 ,&amp; 1.99999999999 ,&amp; 3.00000000001 \\ &amp; 1.0 ,&amp; 2.0 ,&amp; 3.0 . \end{matrix}$$</p>
4,221,545
<p>Let <span class="math-container">$f$</span> be a twice-differentiable function on <span class="math-container">$\mathbb{R}$</span> such that <span class="math-container">$f''$</span> is continuous. Prove that <span class="math-container">$f(x) f''(x) &lt; 0$</span> cannot hold for all <span class="math-container">$x.$</span></p> <p>I have been able to think of specific examples of <span class="math-container">$f(x)$</span> in which <span class="math-container">$f(x)f''(x) &lt;0$</span> does not hold, but I have not been able to come up with specific values of <span class="math-container">$x$</span> for which <span class="math-container">$f(x)f''(x)&lt;0$</span> does not hold.</p> <p>Any help is greatly appreciated!</p>
Thomas
89,516
<p>Sketch of a proof. Details remain to be filled. The devil is in the details of course, but the idea should work.</p> <ol> <li><p>Since f is continuous, it must be <span class="math-container">$f&gt;0$</span> or <span class="math-container">$f&lt;0$</span> everywhere (why?). wlog consider <span class="math-container">$f&gt;0$</span>. The hypothesis then implies <span class="math-container">$f''&lt;0$</span> everywhere.</p> </li> <li><p>Now <span class="math-container">$f$</span> cannot be constant, therefore there must be <span class="math-container">$x_0$</span> where <span class="math-container">$f'(x_0) \ne 0$</span></p> </li> <li><p>Consider the case <span class="math-container">$f'(x_0)&lt;0$</span>. Prove that the function must remain below the straight line <span class="math-container">$y=f(x_0)+f'(x_0)(x-x_0)$</span> for <span class="math-container">$x&gt;x_0$</span>. This implies that at a certain point <span class="math-container">$f(x)$</span> must change sign and become negative. Contradiction.</p> </li> <li><p>If <span class="math-container">$f'(x_0)&gt;0$</span> make a similar argument for <span class="math-container">$x&lt;x_0$</span></p> </li> </ol>
832,715
<blockquote> <p>Suppose that the distribution of a random variable $X$ is symmetric with respect to the point $x = 0$. If $\mathbb{E}(X^4)&gt;0$ then $Var(X)$ and $Var(X^2)$ are both positive.</p> </blockquote> <p>How is that true? I am getting $Var(X)=\mathbb{E}(X^2)$ and $Var(X^2)=\mathbb{E}(X^4)-(\mathbb{E}(X^2))^2$, but do not know why $\mathbb{E}(X^2)&gt;0$ &amp; $\mathbb{E}(X^4)&gt;(\mathbb{E}(X^2))^2.$</p>
Henry
6,460
<p>I do not think it is true. </p> <p>For example let $X=k$ and $X=-k$ each have probability $\frac12$ for some $k\gt 0$.</p> <p>Then the distribution is symmetric about $0$, i.e. $P(X \le -x) =P(X \ge x)$ for all $x$. And $E[X]=0$, $E[X^2]=k^2 \gt 0$ and $E[X^4]=k^4 \gt 0$, and $Var(X)=k^2 \gt 0$.</p> <p>But $Var(X^2)=0$, contrary to the statement in the question.</p>
2,581,735
<p>I'm studying Neural Networks for machine learning (by Geoffrey Hinton's course) and I have a question about learning rule for linear neuron (lecture 3).</p> <p>Linear neuron's output is defined as: $y=\sum_{i=0}^n w_ix_i$</p> <p>Where $w_i$ is weight connected to input #i and $x_i$ is input #i.</p> <p>Learning procedure consists of changing weight vector in such way, so the actual output $y$ becomes more and more close to target output $t$. For learning weights we use Delta rule: $$\Delta w_i=-\epsilon\frac{dE}{dw_i} =\epsilon x_i (t-y)$$ Where $\epsilon$ is a learning rate.</p> <p>I don't understand 2 things:</p> <p>1) why do we change weights proportionally to their error derivatives?</p> <p>2) why do we put minus before $\frac{dE}{dw_i}$ (error change)?</p> <p>Logically, if error increased then $\frac{dE}{dw_i}&gt;0$ and, since we want to decrease the error, minus sign makes our $\Delta w_i$ negative and, hence, decreases $w_i$. But if $\frac{dE}{dw_i}&lt;0$ and our error decreases, then we are increasing the weight? Why?</p> <p>Thank you VERY MUCH for your help!</p>
Andreas
317,854
<p>The error in here is the quadratic error:</p> <p>$$ E_k = (t_k - y_k)^2 = (t_k - \sum_i w_ix_i^k)^2 $$ for the training examples indexed with $k$. Actually, you can also sum over $k$, and you can take the gradient instead of a single derivative.</p> <p>Question 1: Changing weights proportionally to the negative derivative always works, with suitably small gain factor $\epsilon$, if the error function is convex in the unknowns $w_i$. This is the case here, as the error is quadratic.</p> <p>Question 2: the negative sign will always work for convex functions. In particular, you can test this with the quadratic function. This is a particular example of so called gradient descent. </p>
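Here is a small runnable sketch of the delta rule on a linear neuron (my own addition; the training data and learning rate are made up for illustration). Each update moves $w_i$ against the gradient of the squared error, so the learned weights approach the weights that generated the targets.

```python
def train_linear_neuron(samples, targets, eps=0.05, epochs=2000):
    # delta rule: w_i <- w_i + eps * x_i * (t - y), i.e. minus eps * dE/dw_i
    # for the squared error E = (t - y)^2 / 2 with y = sum_i w_i * x_i
    n = len(samples[0])
    w = [0.0] * n
    for _ in range(epochs):
        for x, t in zip(samples, targets):
            y = sum(wi * xi for wi, xi in zip(w, x))
            for i in range(n):
                w[i] += eps * x[i] * (t - y)
    return w

# made-up targets generated by the "true" weights (2, -3)
samples = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (2.0, 1.0)]
targets = [2.0, -3.0, -1.0, 1.0]
weights = train_linear_neuron(samples, targets)
```

Because the data are consistent and the error is convex in the weights, the small fixed learning rate drives the weights to the generating pair.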
234,466
<p>This is the second problem in Neukirch's Algebraic Number Theory. I did the proof but it feels a bit too slick and I feel I may be missing some subtlety, can someone check it over real quick?</p> <p>Show that, in the ring $\mathbb{Z}[i]$, the relation $\alpha\beta =\varepsilon\gamma ^n$, for $\alpha,\beta$ relatively prime numbers and $\varepsilon$ a unit, implies $\alpha =\varepsilon '\xi ^n$ and $\beta =\varepsilon ''\eta ^n$, with $\varepsilon '$,$\varepsilon ''$ units.</p> <p>So basically, because the Gaussian integers are a unique factorization domain and alpha and beta are relatively prime, I have the prime decomposition:</p> <p>$\alpha = \varepsilon' p_1^{e_1}...p_r^{e_r}$</p> <p>$\beta = \varepsilon'' p_s^{e_s}...p_y^{e_y}$</p> <p>$\varepsilon\gamma^n = \varepsilon q_1^{nf_1}...q_k^{nf_k}$</p> <p>And so $\alpha\beta = p_1^{e_1}...p_r^{e_r}p_s^{e_s}...p_y^{e_y}$. Where we have a one-to-one correspondence between the $p_i^{e_i}$ and the $q_i^{nf_i}$ and thus setting $p_i^{e_i} = q_i^{f_i}$, in accordance with this correspondence, we obtain our desired xi and eta. </p> <p>Does this make sense? I never used anything specific to the Gaussian integers so if this is right then it holds for all UFDs. Thanks.</p>
sperners lemma
44,154
<p>The trouble with Gaussian integers is that unique factorization is only unique up to units. For example consider $-i \cdot (1+i)(2+i)(1+2i)^4 = -1 \cdot (1+i)(-1+2i)(2-i)^4$: The primes $2+i$ and $-1+2i$ are just associates rather than equal. Prime ideals factor out associates and free us from caring about choosing associates correctly: The ideal $(2+i)$ is equal to $(-1+2i)$ and so the factorization into ideals $(1+i)(2+i)(1+2i)^4$ is unique (even if we have a few different ways to <em>write</em> it).</p> <p>With that in mind take the prime ideal factorization of $\gamma$ so consider $$(\alpha)(\beta) = (\pi_1)^{n r_1} (\pi_2)^{n r_2} \cdots (\pi_k)^{n r_k}$$ so in particular $(\pi_1)|(\alpha)(\beta)$ and by the characterization of a prime ideal this implies that $(\pi_1)|(\alpha)$ or $(\pi_1)|(\beta)$, suppose $(\pi_1)|(\alpha)$ then by coprimality $(\pi_1)\not|(\beta)$ so by unique factorization $(\pi_1)^{n r_1}|(\alpha)$. Just doing the same thing for each prime factor of $(\gamma)$ gives the result that both $(\alpha)$ and $(\beta)$ are products of $n$th powers of prime ideals.</p> <p>Travelling back to the Gaussian integers themselves, this result requires us to bring units back in so we conclude that $\alpha = \varepsilon' \xi^n$, $\beta = \varepsilon'' \eta^n$.</p>
873,803
<p>I'm currently preparing for the USA Mathematical Talent Search competition. I've been brushing up my proof-writing skills for several weeks now, but one area that I have not been formally taught about (or really self-studied) for that matter, is general polynomials beyond quadratics. In particular, I've been having trouble with the following question:</p> <blockquote> <p>Let $r_1, r_2$, and $r_3$ be distinct roots of the monic cubic equation $P(x) :=x^3+bx^2+cx+d=0$. Prove that $r_1r_2 + r_1r_3 + r_2r_3 = c$.</p> </blockquote> <p>I started by attempting to equate the roots $P(r_1) = P(r_2) = P(r_3) = 0$ and simplify, however this seemed like a wrong approach from the start and looked far messier than I'd expect from an introductory question on direct proofs. How would one go about solving this and, in general, problems on identities involving the roots of a polynomial?</p>
MJD
25,554
<p><strong>Hint</strong>: $$x^3 + bx^2 + cx + d = (x-r_1)(x-r_2)(x-r_3)$$</p>
145,046
<p>I'm a first-year graduate student of mathematics and I have an important question. I like studying math, and when I attend a course I try to study in the best way possible, with different textbooks, and moreover I try to understand the concepts rather than worry about the exams. Despite this, months after such an intense study, I inexorably forget most things that I have learned. For example, if I study algebraic geometry, commutative algebra or differential geometry, in the end only the main ideas remain in my mind. Vice versa, when I deal with subjects such as linear algebra, real analysis, abstract algebra or topology (simpler subjects that I studied in the first or second year), I'm comfortable. So my question is: what should remain in the mind of a student after a one-semester course? What is the point of learning and understanding many demonstrations if one then forgets them all?</p> <p>I'm sorry for my poor English.</p>
wendy.krieger
78,024
<p>After many years of semi-non-use, some useful things remain: process, existence, etc. </p> <p>You might know for example, that calculus exists, and there are handsome volumes of pre-made calculus, to modify to the current end.</p> <p>You might know how to read the runes in mathematical papers, and generally follow the arguments, even if you might not do it that way yourself. </p> <p>You might be able to do interesting things with matrices, such as work with oblique vectors, or a generalised product of two vectors. </p> <p>Generalising the specific is also a handy effect. I often calculate specific polynomials with $x=100$ or $x=1000$, to save the drudgery of algebra. You just have to know to watch to see no carries happen.</p> <p>It's something like visiting a town from before. You might not be able to say, but years from now, you can still run the maze. </p>
1,047,489
<p>Let $f(x)$ be a function such that </p> <p>$$f(0) = 0$$</p> <p>$$f(1) = 1$$ $$f(2) = 2$$ $$f(3) = 4$$ $$f'(x) \text{ is differentiable on } \mathbb{R}.$$ Prove that there is a number $x$ in the interval $(0,3)$ such that $0 &lt; f''(x) &lt; 1$.</p> <p>I'm really stuck. Thanks.</p>
Simon S
21,495
<p><em>Hint:</em> Show that at some point $x \in (0,1)$, $f'(x) = 1$. Then for some $y \in (2,3)$, $f'(y) = 2$. Now use that to deduce the result by applying the MVT again.</p> <p><em>Added:</em></p> <p>Having found such $x$ and $y$, there must be a $z \in (x,y)$ such that</p> <p>$$f''(z) = \frac{f'(y) - f'(x)}{y - x}$$</p> <p>Now think about how you can bound that expression above and below.</p>
1,047,489
<p>Let $f(x)$ be a function such that </p> <p>$$f(0) = 0$$</p> <p>$$f(1) = 1$$ $$f(2) = 2$$ $$f(3) = 4$$ $$f'(x) \text{ is differentiable on } \mathbb{R}.$$ Prove that there is a number $x$ in the interval $(0,3)$ such that $0 &lt; f''(x) &lt; 1$.</p> <p>I'm really stuck. Thanks.</p>
Idele
190,802
<p>$1=\frac{f(1)-f(0)}{1-0}=f'(\xi_1),\xi_1\in(0,1)$</p> <p>$2=\frac{f(3)-f(2)}{3-2}=f'(\xi_2),\xi_2\in(2,3)$</p> <p>then $\xi_2-\xi_1\in(1,3)$</p> <p>$f''(\eta)=\frac{f'(\xi_2)-f'(\xi_1)}{\xi_2-\xi_1}=\frac{1}{\xi_2-\xi_1}\in(\frac{1}{3},1),\eta\in(\xi_1,\xi_2)\subset(0,3)$</p>
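The argument can be checked on a concrete example (my own sketch; the cubic below is a hypothetical interpolant of the four data points, not from the answer). $f(x) = x + x(x-1)(x-2)/6$ satisfies $f(0)=0$, $f(1)=1$, $f(2)=2$, $f(3)=4$, and bisection recovers $\xi_1, \xi_2$ and the bound $f''(\eta) = 1/(\xi_2-\xi_1) \in (1/3, 1)$.

```python
def f(x):
    # a hypothetical function meeting the four conditions
    return x + x * (x - 1) * (x - 2) / 6.0

def fprime(x):
    # exact derivative of the sample cubic
    return 1 + (3 * x * x - 6 * x + 2) / 6.0

def bisect(g, lo, hi, tol=1e-12):
    # assumes g(lo) and g(hi) have opposite signs
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# MVT on (0,1) and (2,3): points where f' = 1 and f' = 2
xi1 = bisect(lambda x: fprime(x) - 1, 0.0, 1.0)
xi2 = bisect(lambda x: fprime(x) - 2, 2.0, 3.0)
second_derivative_value = 1 / (xi2 - xi1)  # = f''(eta) for some eta
```

For this cubic the value lands near $0.475$, comfortably inside $(1/3, 1)$.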
243,210
<p>I have difficulty computing the $\rm mod$ for $a ={1,2,3\ldots50}$. Is there a quick way of doing this?</p>
Dennis Gulko
6,948
<p>This seems fine. the only point is $v=\frac1y$, so $y=\frac1v$ and $$u(x)v(x)-1\cdot\frac13=u(x)v(x)-u(0)v(0)=\int_0^x (u(t)v(t))'=\int_0^x u(t)q(t)dt=-\int_0^x te^{-\frac{t^2}{2}}dt$$ So $$v(x)=e^{\frac{x^2}{2}}\left(\frac13-\int_0^x te^{-\frac{t^2}{2}}dt\right),\hspace{10pt} y(x)=\frac{1}{v(x)}$$</p>
25,413
<p>Background - I am tutoring a second year college sophomore for a class titled Single Variable Calculus, and whose curriculum looks to be similar to the AB calculus I tutor in my High School.</p> <p>We are on limits and L’Hôpital’s Rule, and I see this among the questions (note, all the worksheet questions are meant to be solved via L-H rule) - The instruction is</p> <p>&quot;Evaluate the following using L’Hôpital’s Rule&quot;</p> <p><span class="math-container">$$\lim_{x\to 0}\frac{\sin x}x= $$</span></p> <p>I recall, when subbing for a calc teacher, that this is a classic example of the use of the &quot;squeeze theorem&quot; aka &quot;sandwich theorem&quot;. Once it's proven, we'd go on to different arguments of Sine, practice a bit, then move on. It's introduced prior to L-H rule.</p> <p>Given the fast pace of my student self-studying and effort of remote teaching, I'm inclined to ignore this, and move on. My question is whether skipping over Squeeze Theorem is doing her a disservice, and should I (forgive the pun) squeeze it into our next session? At my HS, students have told me it feels like it's introduced, practiced for a few problems, but never seeing again.</p>
guest
20,089
<p>Unless you're training a superstar, and a college sophomore taking calc 1 is a clue here, I would not cover material that the regular teacher has made the choice not to cover. And there are always limits on time and ability, thus choices on what to cover.</p> <p>Also, can the limit be found using L'Hôpital? Which the student has probably had. Take the derivative of top and bottom and you get $\cos x$ over $1$, which at $0$ is $1/1$, or $1$. This lesson was specifically on L'Hôpital. This drill sheet expects practice with a specific method.</p>
2,363,390
<p>This question arises from an unproved assumption made in a proof of L'Hôpital's Rule for Indeterminate Types of $\infty/\infty$ from a Real Analysis textbook I am using. The result is intuitively simple to understand, but I am having trouble formulating a rigorous proof based on limit properties of functions and/or sequences.</p> <p><strong>Statement/Lemma to be Proved</strong>: Let $f$ be a continuous function on interval $(a,b)\!\subset\!\mathbb{R}$. If $\displaystyle{\lim_{x\rightarrow a+}\!f(x)\!=\!\infty}$, then, given any $\alpha\!\in\!\mathbb{R}$, there exists $c\!&gt;\!a$ such that $x\!\in\!A\cap(a,c)$ implies $\alpha\leq f(c)&lt;f(x)$.</p> <p><strong>Relationship with Infinite Limit Definition</strong>: At first glance, this may appear to simply be the definition of right-hand infinite limits:</p> <ul> <li>$\displaystyle{\lim_{x\rightarrow a+}\!f(x)\!=\!\infty}$ is defined to mean: given any $\alpha\!\in\!\mathbb{R}$, there exists $\delta\!&gt;\!0$, such that $x\!\in\!A\cap(a,a+\delta)$ implies $\alpha&lt;f(x)$.</li> </ul> <p>However, the main difference is that the result I am interested in forces an association between the "$\delta$" and the "$\alpha$" (where $\alpha=f(a+\delta)$, i.e., it forces $c\!\equiv\!a+\delta$ to be in the domain of $f$).</p> <p><em>EDIT:</em> The statement to be proved that I originally presented did not require $f$ to be continuous on $A$. However, this was added to address the comment and counterexample below.</p> <p><em>EDIT #2:</em> Again, a helpful user (@DanielFischer) commented that the statement after the first edit needed yet an additional limitation--<em>i.e.</em>, that $A$ must also be an interval--for it to hold.</p>
DanielWainfleet
254,665
<p>Assuming that $a$ is a lower limit point of $A\setminus\{a\},$ the statement $\lim_{x\to a+}f(x)=\infty$ is equivalent to $$\lim_{c\to a+}\inf \{f(x): x\in (a,c)\cap A\}=\infty.$$ And the statement you wish to derive from $\lim_{x\to a+}f(x)=\infty$ is equivalent to $$\lim_{c\to a+}\sup \{f(x): x\in (a,c)\cap A\}=\infty.$$ Which is obvious because $\sup S\geq \inf S$ for any nonempty $S\subset \mathbb R.$ </p>
639,665
<p>How can I calculate the inverse of $M$ such that:</p> <p>$M \in M_{2n}(\mathbb{C})$ and $M = \begin{pmatrix} I_n&amp;iI_n \\iI_n&amp;I_n \end{pmatrix}$, and I find that $\det M = 2^n$. I tried to find the $comM$ and apply $M^{-1} = \frac{1}{2^n} (comM)^T$ but I think it's too complicated.</p>
Robert Lewis
67,071
<p>Yet another way to do it:</p> <p>Observe that</p> <p>$M = I + J, \tag{1}$</p> <p>where</p> <p>$I = \begin{bmatrix} I_n &amp; 0 \\ 0 &amp; I_n \end{bmatrix} \tag{2}$</p> <p>and</p> <p>$J = iP, \tag{3}$</p> <p>with</p> <p>$P = \begin{bmatrix} 0 &amp; I_n \\ I_n &amp; 0 \end{bmatrix}. \tag{4}$</p> <p>Then </p> <p>$P^2 = I, \tag{5}$</p> <p>so that</p> <p>$J^2 = -I, \tag{6}$</p> <p>from which we have</p> <p>$(I + J)(I - J) = I^2 - J^2 = I + I = 2I \tag{7}$</p> <p>and so</p> <p>$M(I - J) / 2 = (I + J)(I - J) / 2 = I, \tag{8}$</p> <p>showing that</p> <p>$M^{-1} = \dfrac{1}{2}(I - J) = \dfrac{1}{2} \begin{bmatrix} I_n &amp; -iI_n \\ -iI_n &amp; I_n \end{bmatrix}. \tag{9}$</p> <p>The above calculation works for the same reason as, is inspired by, and is a generalization of the ordinary formula</p> <p>$z^{-1} = \bar z / \vert z \vert^2, \tag{10}$</p> <p>which holds for any nonzero complex number $z$. Indeed, it is easy to see that</p> <p>$(aI + bJ)^{-1} = (aI - bJ) / (a^2 + b^2) \tag{11}$</p> <p>by the same general method. For the sake of closure and completeness, note that</p> <p>$M^{-1} = \dfrac {1}{2}(I - J) = \dfrac{1}{2} M^\dagger, \tag{12}$</p> <p>where $M^\dagger$ is the conjugate transpose, that is, the Hermitian adjoint, of $M$.</p> <p>Hope this helps. Cheers,</p> <p>and as always,</p> <p><em><strong>Fiat Lux!!!</strong></em></p>
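The claimed inverse is easy to verify numerically; the sketch below is my own addition (not part of the answer) and uses plain lists of Python complex numbers for the case $n = 2$, so nothing beyond the standard library is needed.

```python
def matmul(A, B):
    # naive matrix product of two square matrices given as lists of rows
    m = len(B)
    return [[sum(A[i][k] * B[k][j] for k in range(m))
             for j in range(len(B[0]))] for i in range(len(A))]

def block_M(n):
    # M = [[I, iI], [iI, I]] as a 2n x 2n matrix
    M = [[0.0] * (2 * n) for _ in range(2 * n)]
    for i in range(n):
        M[i][i] = M[n + i][n + i] = 1.0
        M[i][n + i] = M[n + i][i] = 1j
    return M

def block_M_inverse(n):
    # claimed inverse: (1/2) [[I, -iI], [-iI, I]]
    W = [[0.0] * (2 * n) for _ in range(2 * n)]
    for i in range(n):
        W[i][i] = W[n + i][n + i] = 0.5
        W[i][n + i] = W[n + i][i] = -0.5j
    return W

n = 2
product = matmul(block_M(n), block_M_inverse(n))  # should be the identity
```

The product comes out as the $2n \times 2n$ identity, confirming the formula.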
639,665
<p>How can I calculate the inverse of $M$ such that:</p> <p>$M \in M_{2n}(\mathbb{C})$ and $M = \begin{pmatrix} I_n&amp;iI_n \\iI_n&amp;I_n \end{pmatrix}$, and I find that $\det M = 2^n$. I tried to find the $comM$ and apply $M^{-1} = \frac{1}{2^n} (comM)^T$ but I think it's too complicated.</p>
Marc van Leeuwen
18,880
<p>Asking for the inverse of such a block matrix is the same as asking the inverse of a $2\times2$ matrix over the non-commutative ring $M_n(\Bbb C)$ (viewing the individual blocks as elements of that ring). This point of view is not really helpful unless some special circumstance arises; in this case those four "entries" all <em>commute</em> with each other (obviously, since they are all multiples of the identity; however the technique that follows applies whenever they commute, taking $R$ to be the ring generated by the entries).</p> <p>Now a $2\times2$ matrix $(\begin{smallmatrix}A&amp;B\\C&amp;D\end{smallmatrix})$ over any <em>commutative</em> ring$~R$ (and we can see our matrix as such) is invertible if and only if the determinant $\Delta=AD-BC$ is invertible in$~R$, in which case the inverse is given by the usual formula $\Delta^{-1}(\begin{smallmatrix}D&amp;-B\\-C&amp;A\end{smallmatrix})$. In the current case $\Delta=2I_n$ is certainly invertible, so this applies; the computation now is trivial.</p>
201,358
<p>I would like to be able to extract elements from a list where indices are specified by a binary mask.</p> <p>To provide an example, I would like to have a function <code>BinaryIndex</code> doing the following:</p> <pre><code>foo = {a, b, c} mask = {True, False, True} BinaryIndex[mask, foo] (* expected output *) {a, c} </code></pre> <p>Is there such a function built-in? I would be able to come up with some implementation, but I would like to have this performance-optimized. If there is no such built-in function, what would be a good approach to make this fast? </p>
Henrik Schumacher
38,178
<p>The problem with Booleans in <em>Mathematica</em> is that they cannot be stored in packed arrays, reducing the efficiency of their processing. Suppose that <code>b</code> is already a packed vector of integers, e.g., <code>b = Developer`ToPackedArray[Boole[mask]]</code>. Then you should be able to obtain your result faster with</p> <pre><code>result = Pick[foo, b, 1]; </code></pre> <p>You may also try the undocumented function <code>Random`Private`PositionsOf</code> which can be used to determine the actual list <code>idx</code> of <em>indices</em> of <code>1</code>-entries in the array <code>b</code>; this may be useful if you can reuse <code>idx</code> multiple times.</p> <pre><code>idx = Random`Private`PositionsOf[b,1]; result = foo[[idx]]; </code></pre>
3,403,364
<p>Here's a little number puzzle question with strange answer:</p> <blockquote> <p>In an apartment complex, there is an even number of rooms. Half of the rooms have one occupant, and half have two occupants. How many roommates does the average person in the apartment have? </p> </blockquote> <p>My gut instinct was to say <span class="math-container">$\frac{1}{2}$</span>, but apparently that is wrong and the correct answer is <span class="math-container">$\frac{2}{3}$</span>????</p> <p>I saw this problem on Twitter with very little explanation and wasn't able to find it online anywhere. If anyone could shed some light on this for me that would be awesome : ) </p>
Mees de Vries
75,429
<p>This is a well-known paradox known as the <a href="https://en.m.wikipedia.org/wiki/Friendship_paradox" rel="nofollow noreferrer">friendship paradox</a>. In this case, your intuition leads you to believe that living in a one-person household is as common as living in a two-person household, because there are equally many houses of both types. But that doesn't translate to there being equally many people of both types.</p>
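To make the $2/3$ explicit (my own addition, not part of the answer): with $2N$ rooms, $N$ people live alone ($0$ roommates each) and $2N$ people live in pairs ($1$ roommate each), so averaging over *people* gives $2N/3N = 2/3$, while averaging over *rooms* gives the intuitive $1/2$. A tiny sketch:

```python
N = 1000  # N single rooms and N double rooms

# roommate count per person: N singles with 0, 2N double-occupants with 1
roommates = [0] * N + [1] * (2 * N)

average_per_person = sum(roommates) / len(roommates)  # 2N / 3N = 2/3
average_per_room = (N * 0 + N * 1) / (2 * N)          # the intuitive 1/2
```

The discrepancy is exactly the room-average versus person-average distinction.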
342,096
<p>$$\left( \left( 1/2\,{\frac {-\beta+ \sqrt{{\beta}^{2}-4\,\delta\, \alpha}}{\alpha}} \right) ^{i}- \left( -1/2\,{\frac {\beta+ \sqrt{{ \beta}^{2}-4\,\delta\,\alpha}}{\alpha}} \right) ^{i} \right) \left( \sqrt{{\beta}^{2}-4\,\delta\,\alpha} \right) ^{-1}$$</p> <p>What exactly is the connection between the above formula and $$x^2-x-c$$ c being a constant. I know about linear recursion, but I don't understand why the Fibonacci Numbers are associated with $$x^2-x-1$$ I know about $${\frac {\sqrt {5}+1}{{2}}}$$</p> <p>But I don't see the connection. If I were to represent these terms algebraically it seems I would do something like this:</p> <p>$${\alpha}^{-1}$$ $$-{\frac {\beta}{{\alpha}^{2}}}$$ $$-{\frac {-{\beta}^{2}+\delta\,\alpha}{{\alpha}^{3}}}$$ $${\frac {\beta\, \left( -{\beta}^{2}+2\,\delta\,\alpha \right) }{{ \alpha}^{4}}}$$ $${\frac {{\beta}^{4}-3\,{\beta}^{2}\delta\,\alpha+{\delta}^{2}{\alpha}^ {2}}{{\alpha}^{5}}}$$ $$-{\frac {\beta\, \left( {\beta}^{4}-4\,{\beta}^{2}\delta\,\alpha+3\,{ \delta}^{2}{\alpha}^{2} \right) }{{\alpha}^{6}}} $$</p> <p>There also seems to be a relationship with factorials when they are used as arrays in matrices.</p>
Cocopuffs
32,943
<p>If $f(X) = \sum_{k=0}^{\infty} F_k X^k$ as a formal power series, where $F_k$ is the $k$-th Fibonacci number, i.e. $$F_0 = 0, F_1 = 1, F_k = F_{k-1} + F_{k-2} \; (k \ge 2),$$ then $$f(X) = X + \sum_{k=0}^{\infty} F_{k+2} X^{k+2} = X + \sum_{k=0}^{\infty} F_k X^{k+2} + \sum_{k=0}^{\infty} F_k X^{k+1}$$ $$= X + X^2 f(X) + X f(X).$$ Thus $$f(X) = \frac{X}{1 - X - X^2}$$ and this gives a connection. Using partial fractions decomposition and comparing coefficients, you can get an explicit formula as well.</p>
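A quick check of the generating function $f(X) = X/(1 - X - X^2)$ (my own sketch, not part of the answer): expanding the rational function by power-series long division, without assuming the Fibonacci recurrence, reproduces the Fibonacci numbers.

```python
def series_coefficients(num, den, n):
    # first n power-series coefficients of num(X)/den(X), with den[0] != 0,
    # computed by long division: den[0]*c_k = num[k] - sum_{j>=1} den[j]*c_{k-j}
    num = list(num) + [0] * max(0, n - len(num))
    c = []
    for k in range(n):
        s = num[k] - sum(den[j] * c[k - j]
                         for j in range(1, min(k, len(den) - 1) + 1))
        c.append(s / den[0])
    return c

# X / (1 - X - X^2)
coeffs = series_coefficients([0, 1], [1, -1, -1], 10)
```

The coefficients come out as $0, 1, 1, 2, 3, 5, 8, 13, 21, 34$.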
23,312
<p>What is the importance of eigenvalues/eigenvectors? </p>
pratik
184,865
<p><strong>In data analysis</strong>, the eigenvectors of a covariance (or correlation) matrix are usually calculated. </p> <hr> <p>Eigenvectors are the set of basis functions that are the most efficient set to describe data variability. They are also the coordinate system in which the covariance matrix becomes diagonal, allowing the new variables referenced to this coordinate system to be uncorrelated. The eigenvalues are a measure of the data variance explained by each of the new coordinate axes. </p> <hr> <p>They are used to reduce the dimension of large data sets by selecting only a few modes with significant eigenvalues, and to find new variables that are uncorrelated; this is very helpful for least-squares regressions of badly conditioned systems. It should be noted that the link between these statistical modes and the true dynamical modes of a system is not always straightforward because of sampling problems.</p>
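A minimal numeric illustration of this (my own sketch, with made-up data; a real analysis would use a linear-algebra library): for nearly collinear 2-D data, the top eigenvalue of the covariance matrix carries almost all of the variance.

```python
from math import sqrt

# made-up, strongly correlated 2-D data (y roughly 2x)
data = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 7.8), (5, 10.1)]
n = len(data)
mx = sum(x for x, _ in data) / n
my = sum(y for _, y in data) / n

# sample covariance matrix [[vx, cxy], [cxy, vy]]
vx = sum((x - mx) ** 2 for x, _ in data) / (n - 1)
vy = sum((y - my) ** 2 for _, y in data) / (n - 1)
cxy = sum((x - mx) * (y - my) for x, y in data) / (n - 1)

# eigenvalues of a symmetric 2x2 matrix, in closed form
disc = sqrt((vx - vy) ** 2 + 4 * cxy ** 2)
lam1 = (vx + vy + disc) / 2
lam2 = (vx + vy - disc) / 2
explained = lam1 / (lam1 + lam2)  # fraction of variance on the first axis
```

Here `explained` exceeds 99%, which is the sense in which one dominant mode can stand in for the whole data set.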
1,693,630
<p><a href="https://i.stack.imgur.com/TVeGv.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TVeGv.jpg" alt="enter image description here"></a></p> <p>This is my attempt at finding $\frac{d^2y}{dx^2}$. Can some one point out where I'm going wrong here?</p>
Shuri2060
243,059
<p>Following your title which disagrees with the working on your paper...</p> <p>$$x=t^2-12t$$ $$y=t^2-1$$</p> <p>$$\frac{dx}{dt}=2t-12$$ $$\frac{dy}{dt}=2t$$</p> <p>$$\frac{dy}{dx}=\frac{dy}{dt}\times\frac{dt}{dx}=\frac{2t}{2t-12}=\frac{t}{t-6}$$</p> <p>But if the title is a typo then:</p> <p>$$x=t^3-12t$$ $$y=t^2-1$$</p> <p>$$\frac{dx}{dt}=3t^2-12$$ $$\frac{dy}{dt}=2t$$</p> <p>$$\frac{dy}{dx}=\frac{dy}{dt}\times\frac{dt}{dx}=\frac{2t}{3t^2-12}$$</p> <p>The following step is where you go wrong. Here's what you should've done - you need to differentiate with respect to $x$, not $t$:</p> <p>$$\frac{d^2y}{dx^2}=\frac{d\left(\frac{dy}{dx}\right)}{dx}=\frac{d\left(\frac{2t}{3t^2-12}\right)}{dx}$$</p> <p>$$\frac{d^2y}{dx^2}=\frac{d\left(\frac{2t}{3t^2-12}\right)}{dx}\times\frac{dt}{dt}=\frac{d\left(\frac{2t}{3t^2-12}\right)}{dt}\times\frac{dt}{dx}$$</p> <p>As seen in the above line, you're a $\frac{dt}{dx}$ out thanks to that mistake.</p> <p>$$\frac{d^2y}{dx^2}=\frac{2(3t^2-12)-2t(6t)}{(3t^2-12)^2}\times\frac{dt}{dx}$$</p> <p>$$\frac{d^2y}{dx^2}=\frac{(6t^2-24)-12t^2}{(3t^2-12)^3}=\frac{-6t^2-24}{(3t^2-12)^3}$$</p> <p>$$\frac{d^2y}{dx^2}=\frac{-2}{9}\left(\frac{t^2+4}{(t^2-4)^3}\right)$$</p> <p>In addition to the above mistake mentioned, you've randomly missed a squared in the penultimate line in the denominator of the fraction.</p>
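The final expression $-\frac{2}{9}\frac{t^2+4}{(t^2-4)^3}$ can be sanity-checked numerically (a sketch of my own, not part of the answer): differentiate $dy/dx$ with respect to $t$ by a central difference, divide by $dx/dt$, and compare.

```python
def dydx(t):
    # dy/dx = (dy/dt)/(dx/dt) for x = t^3 - 12t, y = t^2 - 1
    return 2 * t / (3 * t * t - 12)

def d2ydx2_formula(t):
    # the closed form derived above
    return -2.0 / 9.0 * (t * t + 4) / (t * t - 4) ** 3

def d2ydx2_numeric(t, h=1e-5):
    # d/dx (dy/dx) = [d/dt (dy/dx)] / (dx/dt), with a central difference in t
    dt_derivative = (dydx(t + h) - dydx(t - h)) / (2 * h)
    dxdt = 3 * t * t - 12
    return dt_derivative / dxdt
```

At any $t$ with $t^2 \ne 4$ the two agree to high precision.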
2,325,421
<blockquote> <p>If $f$ is a linear function such that $f(1, 2) = 0$ and $f(2, 3) = 1$, then what is $f(x, y)$?</p> </blockquote> <p>Any help is well received.</p>
Saketh Malyala
250,220
<p>A linear function is of the form $f(x,y)=ax+by$.</p> <p>We have $f(1,2)=a+2b=0$.</p> <p>We have $f(2,3)=2a+3b=1$.</p> <p>Doubling the first equation gives $2a+4b=0$; subtracting the second equation from it leaves $b=-1$, and then $a=2$.</p> <p>So $\boxed{f(x,y)=2x-y}$.</p> <hr> <p>Update:</p> <p>An affine function is of the form $f(x,y)=ax+by+c$.</p> <p>We have $f(1,2)=a+2b=-c$.</p> <p>We have $f(2,3)=2a+3b=1-c$.</p> <p>Doubling the first equation gives $2a+4b=-2c$; subtracting the second equation from it leaves $b=-2c-(1-c)=-(c+1)$, so $a=-c-2b=c+2$.</p> <p>So $\displaystyle \boxed{f(x,y)=(c+2)x-(c+1)y+c}$ for an arbitrary $c$.</p> <p><strong>UPDATE</strong></p> <p>The second piece is incorrect <em>in this context</em>. As @Clement C. adds, the arbitrary constant is zero, so that we have a linear function rather than a merely affine one; taking $c=0$ recovers $f(x,y)=2x-y$.</p>
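The elimination is simple enough to verify mechanically (my own sketch, not part of the answer): solve the $2\times 2$ system for $a, b$ and check both data points.

```python
# f(x, y) = a*x + b*y with f(1, 2) = 0 and f(2, 3) = 1:
#   a + 2b = 0
#   2a + 3b = 1
# eliminate a: subtract twice the first equation from the second
b = (1 - 2 * 0) / (3 - 2 * 2)  # = 1 / (-1) = -1
a = -2 * b                     # from a + 2b = 0, so a = 2

def f(x, y):
    return a * x + b * y
```

Both given values are reproduced: f(1, 2) is 0 and f(2, 3) is 1.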
2,325,421
<blockquote> <p>If $f$ is a linear function such that $f(1, 2) = 0$ and $f(2, 3) = 1$, then what is $f(x, y)$?</p> </blockquote> <p>Any help is well received.</p>
hamam_Abdallah
369,188
<p>$$f((x,y))=f(x(1,0)+y(0,1))$$ $$=xf((1,0))+yf((0,1))$$</p> <p>so you need to find $f((1,0))$ and $f((0,1))$ using</p> <p>$f((1,2))=0$ and $f((2,3))=1$:</p> <p>$$2f((1,2))-f((2,3))=f((0,1))=-1$$</p> <p>$$f((2,3))-f((1,2))=f((1,0))+f((0,1))=1$$ $$\implies f((1,0))=2$$</p> <p>Finally,</p> <blockquote> <p>$$f((x,y))=2x-y$$</p> </blockquote>
885,129
<p>I want to speed up the convergence of a series involving rational expressions; the series is $$\sum _{x=1}^{\infty }\left( -1\right) ^{x}\dfrac {-x^{2}-2x+1} {x^{4}+2x^{2}+1}$$ If I have not misunderstood anything, the error in the infinite sum is at most the absolute value of the last neglected term. The formula for the $x$th term is $\dfrac {-x^{2}-2x+1} {x^{4}+2x^{2}+1}$ from the definition of the series. To compute the sum I used Maxima, the computer algebra system. I have noticed that to get 13 decimal places of the sum one must wade through $312958$ terms of the series. I had to kill the computer GUI and some other system processes and run Maxima to compute the sum. It took about 5 minutes. The final sum I obtained was $0.3106137076850$. Is there any way to speed up the convergence of the sum? In general is there any way to speed up the convergence of $$\sum _{x=1}^{\infty }\left( -1\right) ^{x}\dfrac {p(x)} {q(x)}$$ </p> <p>where ${p(x)}$ and ${q(x)}$ are polynomials?</p>
Simply Beautiful Art
272,831
<p>This implementation of Euler's acceleration method is apparently known as <a href="https://en.wikipedia.org/wiki/Van_Wijngaarden_transformation" rel="nofollow noreferrer">Van Wijngaarden's transformation</a>.</p> <p>Euler's acceleration method, as mentioned in the comments and in other answers, can speed up the convergence of an alternating series. It is easy to look at it in terms of a more general sequence <span class="math-container">$(a_n)_{n\in\mathbb N}$</span> which oscillates around its limit. If this happens, then the limit may be more accurately estimated by <span class="math-container">$(b_n)_{n\in\mathbb N}$</span> defined by</p> <p><span class="math-container">$$b_n=\frac{a_n+a_{n+1}}2$$</span></p> <p>since the limit should be between consecutive terms. In the case of partial sums of alternating rational functions, this new averaged sequence will also oscillate around its limit, so taking the average <span class="math-container">$(b_n+b_{n+1})/2$</span> will accelerate the convergence again. The limit as the amount of averaging we've done tends to infinity gives us Euler's acceleration method, though it may be more suitable to simply stop after averaging 10 times.</p> <p>As far as coding this, it is very easy. Simply compute the partial sums and then average backwards until satisfied.</p> <p>Let <span class="math-container">$S_n^{(m)}$</span> be the <span class="math-container">$n$</span>th term after averaging <span class="math-container">$m$</span> times. 
Then you can compute it like so:</p> <p><span class="math-container">\begin{array}{c}S_0^{(0)}&amp;&amp;S_1^{(0)}&amp;&amp;S_2^{(0)}&amp;&amp;S_3^{(0)}&amp;&amp;S_4^{(0)}&amp;\cdots\\\downarrow&amp;\swarrow&amp;\downarrow&amp;\swarrow&amp;\downarrow&amp;\swarrow&amp;\downarrow&amp;\swarrow&amp;\cdots\\S_0^{(1)}&amp;&amp;S_1^{(1)}&amp;&amp;S_2^{(1)}&amp;&amp;S_3^{(1)}&amp;\cdots\\\downarrow&amp;\swarrow&amp;\downarrow&amp;\swarrow&amp;\downarrow&amp;\swarrow&amp;\cdots\\S_0^{(2)}&amp;&amp;S_1^{(2)}&amp;&amp;S_2^{(2)}&amp;\cdots\\\downarrow&amp;\swarrow&amp;\downarrow&amp;\swarrow&amp;\cdots\\S_0^{(3)}&amp;&amp;S_1^{(3)}&amp;\cdots\\\downarrow&amp;\swarrow&amp;\cdots\\S_0^{(4)}&amp;\dots\end{array}</span></p>
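<p>A minimal Python sketch of this scheme, applied to the series from the question (the names <code>term</code> and <code>accelerated</code> are mine; each pass simply replaces the list of partial sums by the averages of adjacent entries, i.e. the table above read row by row down to a single value):</p>

```python
def accelerated(term, n_terms=40):
    # partial sums S_1 .. S_n of the series
    S, total = [], 0.0
    for x in range(1, n_terms + 1):
        total += term(x)
        S.append(total)
    # repeatedly replace the list by adjacent averages
    while len(S) > 1:
        S = [(a + b) / 2 for a, b in zip(S, S[1:])]
    return S[0]

# note x^4 + 2x^2 + 1 = (x^2 + 1)^2
term = lambda x: (-1) ** x * (-x * x - 2 * x + 1) / (x * x + 1) ** 2
est = accelerated(term)
print(est)   # compare with 0.3106137076850 from the question
```

<p>With only $40$ terms the fully averaged estimate agrees with the asker's value (obtained from $312958$ raw terms) to better than $10^{-9}$.</p>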
4,079,711
<p>Calculate <span class="math-container">$$\iint_D (x^2+y)\mathrm dx\mathrm dy$$</span> where <span class="math-container">$D = \{(x,y)\mid -2 \le x \le 4,\ 5x-1 \le y \le 5x+3\}$</span> by definition.</p> <hr /> <p>Plotting the set <span class="math-container">$D$</span>, we notice that it is a parallelogram. I tried to divide it into equal parallelograms, i.e. take <span class="math-container">$x_i = -2 + \frac{6i}{n}$</span> and <span class="math-container">$y_j = 5x - 1+ \frac{4j}{m}$</span>. The definition requires calculating the sum <span class="math-container">$$\sigma = \sum_{i=1}^n\sum_{j=1}^m f(\xi_i, \eta_j)\mu(D_{ij})$$</span> I was thinking about choosing <span class="math-container">$\xi_i = x_i$</span> and <span class="math-container">$\eta_j = y_j(x_i)$</span>. However, this seems a bit strange. Also, finding the area of <span class="math-container">$D_{ij}$</span> doesn't seem to be convenient.</p> <p>Any help is appreciated.</p>
Paul Frost
349,785
<p>Leinster says that the product topology is designed so that</p> <blockquote> <p>A function <span class="math-container">$f : A \to X \times Y$</span> is continuous if and only if the two coordinate functions <span class="math-container">$f_1 : A \to X$</span> and <span class="math-container">$f_2 : A \to Y$</span> are continuous.</p> </blockquote> <p>This is not a formal definition, but only a motivational introduction. I do not have access to his book, but I am sure he gives a proper definition later in his text.</p> <p>Anyway, if we take the above assertion as the characteristic property of a topology on <span class="math-container">$X \times Y$</span>, we cannot a priori be sure that such a topology <em>exists</em> and is <em>unique</em>. This requires a proof. How this works is indicated by the statement that the product topology is the smallest topology such that the projections <span class="math-container">$p_1, p_2$</span> are continuous. Call a topology making <span class="math-container">$p_1, p_2$</span> continuous an <em>admissible topology</em> (this is just an ad-hoc notation). Clearly the discrete topology is admissible and the intersection of all admissible topologies is admissible, thus indeed there exists a (unique) smallest admissible topology. It is easy to see that a basis of this topology is given by the products <span class="math-container">$U \times V$</span> of all open <span class="math-container">$U \subset X, V \subset Y$</span>.</p> <p>Hence if <span class="math-container">$f : A \to X \times Y$</span> is continuous, then trivially both <span class="math-container">$f_i = p_i \circ f$</span> are continuous. Conversely, if the <span class="math-container">$f_i$</span> are continuous, we have to prove that <span class="math-container">$f$</span> is continuous. To verify that, it suffices to show that <span class="math-container">$f^{-1}(U \times V)$</span> is open for all basic open <span class="math-container">$U \times V$</span>. 
But <span class="math-container">$U \times V = U \times Y \cap X \times V$</span> and <span class="math-container">$f^{-1}(U \times Y \cap X \times V) = f^{-1}(U \times Y) \cap f^{-1}(X \times V) = f_1^{-1}(U) \cap f_2^{-1}( V) $</span>.</p>
1,472,916
<p>I am doing it in the following way. Is it correct?</p> <p>The set $S = \{1,\pi,\pi^2,\pi^3,...,\pi^n\}$ is LI over $\mathbb Q$:</p> <p>Suppose $a_0\times 1 + a_1\times\pi + ... + a_n\times \pi^n = 0$ where the $a_i$ are not all $0$.</p> <p>Then $\pi$ is a root of $a_0 + a_1x + ... + a_nx^n = 0$, which is impossible since $\pi$ is a transcendental number.</p> <p>Therefore, $S$ is LI. Hence $\mathbb R$ is of infinite dimension over $\mathbb Q$.</p>
P Vanchinathan
28,915
<p>A much simpler argument would be to use the fact that real numbers form an uncountably infinite set, whereas rationals form a countable set. </p> <p>Check that a vector space of countable dimension over the rationals would still be a countable set. Hence the set of real numbers as a vector space over the rationals is of <em>uncountable</em> dimension.</p>
30,305
<p>I want to call <code>Range[]</code> with its arguments depending on a condition. Say we have </p> <pre><code>checklength = {5, 5, 5, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6} </code></pre> <p>I then want to call <code>Range[]</code> 13 times (the length of <code>checklength</code>) and do <code>Range[5]</code> when <code>checklength[[#]] == 5</code> and <code>Range[2, 6]</code> when <code>checklength[[#]] == 6</code>. <code>If[]</code> would seem an appropriate way to do it, </p> <pre><code>Range[If[checklength[[#]] == 5, 5, XXX]]&amp; /@ Range[13] </code></pre> <p>but I don't know what to put for "XXX", since I need "2,6" there without any brackets. I've tried </p> <pre><code>Range[If[checklength[[#]]==5, 5, Flatten[{2,6}]]]&amp; /@ Range[13] </code></pre> <p>but that doesn't help (in fact if you think about it, it shouldn't!). The problem is, I need an unbracketed pair of numbers to be treated as a single argument and I don't know how to do that. I can think of one quite messy solution, </p> <pre><code>Range[If[checklength[[#]] == 5, 1, 2], If[checklength[[#]] == 5, 5, 6]]&amp; /@ Range[13] </code></pre> <p>but I'd be disappointed if there's not a better way to do it. Even though this does the trick, the general question remains of how to treat unbracketed comma-separated numbers as a single item.</p>
bill s
1,783
<p>Another straightforward approach is to just build the data using a <code>Table</code>. No <code>If</code> decisions required.</p> <pre><code>Table[Range[checklength[[i]] - 4, checklength[[i]]], {i, 1, Length[checklength]}] </code></pre> <p>This works because both desired ranges consist of five consecutive integers ending at <code>checklength[[i]]</code>, so a single <code>Range</code> formula covers both cases.</p>
926,069
<p>Say that two $m\times n$ matrices, where $m,n\ge 2$, are <em>related</em> if one can be obtained from the other after a finite number of steps, where at each step we add any real number to all elements of any one row or column. For example, $\left(\begin{array}{cc} 0 &amp; 0\\0 &amp; 0 \end{array}\right)$ and $\left(\begin{array}{cc} 1 &amp; 3\\0 &amp; 2 \end{array}\right)$ are related since the latter can be obtained from the former by adding $1s$ to the first row, and then adding $2s$ to the second column.</p> <p><strong>Question</strong> What matrices are related to the $m\times n$ zero matrix? Also, given two related matrices, how can we determine the minimum number of steps to generate one from the other?</p> <hr> <p>The motivation of this question is the transportation problem: it can be shown that transportation problems with related cost matrices have the same optimum solutions. (A matrix of nonnegative elements $(x_{ij})$ is feasible if $\sum_i x_{ij}=d_j$ and $\sum_j x_{ij}=s_i$ for some constants $d_j$ and $s_i$ with $\sum_j d_j=\sum_i s_i$, and is optimal if it minimises over all feasible matrices the sum $\sum_{i,j} x_{ij}c_{ij}$ for a given cost matrix $(c_{ij})$.) </p>
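<p>The motivating fact is easy to check directly: adding a constant to a row (or column) of the cost matrix changes the objective of <em>every</em> feasible plan by the same amount — the constant times that row's supply — so the optimal solutions are unchanged. A small Python sketch with made-up cost matrices and margins (the numbers here are illustrative, not from the question):</p>

```python
m, n = 3, 4
C = [[3, 1, 4, 1], [5, 9, 2, 6], [5, 3, 5, 8]]   # arbitrary cost matrix

def objective(C, X):
    return sum(C[i][j] * X[i][j] for i in range(m) for j in range(n))

# two feasible plans with identical row sums (supplies) and column sums (demands)
X1 = [[1, 2, 0, 1], [0, 1, 3, 0], [2, 0, 0, 2]]
X2 = [[0, 3, 0, 1], [1, 0, 3, 0], [2, 0, 0, 2]]
assert [sum(r) for r in X1] == [sum(r) for r in X2]
assert [sum(c) for c in zip(*X1)] == [sum(c) for c in zip(*X2)]

C2 = [row[:] for row in C]
C2[1] = [v + 7 for v in C2[1]]            # a "related" cost matrix
d1 = objective(C2, X1) - objective(C, X1)
d2 = objective(C2, X2) - objective(C, X2)
print(d1, d2)   # the same constant shift 7 * s_1 for every feasible plan
```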
Venus
146,687
<p>Note that $$\sum_{k=1}^\infty\frac{\sin kx}{k}=\Im\sum_{k=1}^\infty\frac{e^{ikx}}{k}=-\Im\ln\left(1-e^{ix}\right)$$ where we use Taylor series of $\ln(1-x)$. Now, we use <a href="http://en.wikipedia.org/wiki/Complex_logarithm#Definition_of_principal_value" rel="nofollow">the principal value of complex logarithm</a>. We get $$\ln\left(1-e^{ix}\right)=\ln\left(1-\cos x-i\sin x\right)=\ln\sqrt{(1-\cos x)^2+\sin^2x}-i\arctan\left(\frac{\sin x}{1-\cos x}\right)$$ Hence $$\sum_{k=1}^\infty\frac{\sin kx}{k}=-\Im\ln\left(1-e^{ix}\right)=\arctan\left(\frac{\sin x}{1-\cos x}\right)=\arctan\left(\cot\left(\frac{x}{2}\right)\right)$$ where we use identities $$\sin x=2\sin\left(\frac{x}{2}\right)\cos\left(\frac{x}{2}\right)$$ and $$\cos x=2\cos^2\left(\frac{x}{2}\right)-1$$ Using identity $$\arctan\left(x\right)+\arctan\left(\frac{1}{x}\right)=\frac{\pi}{2}$$ we get $$\sum_{k=1}^\infty\frac{\sin kx}{k}=\frac{\pi}{2}-\arctan\left(\tan\left(\frac{x}{2}\right)\right)=\frac{\pi-x}{2}$$</p>
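<p>A quick numerical check of the closed form (which holds for <span class="math-container">$0 &lt; x &lt; 2\pi$</span>); the truncation point is an arbitrary choice:</p>

```python
import math

def partial(x, N):
    # partial sum of sin(kx)/k up to k = N
    return sum(math.sin(k * x) / k for k in range(1, N + 1))

x = 1.0
target = (math.pi - x) / 2
val = partial(x, 200000)
print(val, target)   # agreement up to a tail of size roughly 1/N
```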
1,842,989
<p>This question might be silly, but while teaching a tutorial as a TA, I suddenly had the need to bring up a tautological statement in first order logic that involved the quantifiers $\forall\exists\forall\exists$ in this order. That is, a tautology of the form $\forall x\exists y\forall z\exists w $ $\phi(x,y,z,w)$ for some first order formula $\phi$. This came up while I was trying to convey to the class the essence of skolemization in this case. However, for some reason, I just couldn't come up quickly with such an example. Any help is appreciated.</p>
hmakholm left over Monica
14,366
<p>There's always $(y=x)\land (w=z)$ of course. But this has the disadvantage that $w$ doesn't need to depend on $x$, though. The best I can think of where this is not the case would be something like</p> <p>$$(y\ne x \lor z=x) \land (z=x \to w=y) \land (z\ne x \to w=x)$$ In a structure with three elements, $w$ will have to depend on both $x$ and $z$.</p>
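<p>Since the sentence uses only equality, a structure is determined by the cardinality of its domain, so its validity on small domains can be machine-checked exhaustively (this only spot-checks finite sizes; the general argument is the case analysis above):</p>

```python
def phi(x, y, z, w):
    return ((y != x or z == x)
            and (z != x or w == y)    # z == x  implies  w == y
            and (z == x or w == x))   # z != x  implies  w == x

for size in range(1, 5):
    D = range(size)
    ok = all(any(all(any(phi(x, y, z, w) for w in D) for z in D) for y in D)
             for x in D)
    assert ok, size
print("forall-exists-forall-exists sentence holds on domains of size 1..4")
```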
128,015
<p>For the function $\frac{1}{x}$ on the real line, one can use a modified principal value integral to consider it as a distribution p.f.$(\frac{1}{x}),$ and one can do a similar construction to make $\frac{1}{x^m}$ into a distribution for $m&gt;1.$ In the complex plane, the function $\frac{1}{z^m}$ is locally integrable for $m=1,$ but for larger $m$ some construction analogous to the one dimensional would have to be done to make it into a distribution. </p> <p>More generally, given a meromorphic function on the plane (or torus), one should be able to consider it as a distribution by integrating against it and subtracting off some delta distributions or derivatives of delta distributions. Is this process explained in detail anywhere? Has anyone computed the Fourier series of such distributions, say for the Weierstrass $\mathfrak{p}$ function on the torus?</p>
jbc
26,013
<p>It is folklore that any meromorphic function on the real line can be regarded as a distribution in a natural way. One defines $x^{-n}$ to be the $n$-th derivative of the locally integrable function $\log |x|$, with appropriate coefficient. In order to treat the general case, one uses the principle of recollement des morceaux (gluing of pieces)---if the real line is covered by a family of open subsets and one has a family of distributions, one on each set, which satisfies the obvious compatibility condition, then one can combine them to a single distribution. This is a theorem in the Schwartz book---the proof uses partitions of unity. I heard this argument in a course given regularly by the Portuguese mathematician J. Sebastião e Silva at the University of Lisbon (I attended it in 1969). The same globalisation argument, combined with the comment of Sönke Hansen above, gives the result for the complex plane. I have no information on the second part of the query.</p>
1,691,981
<p>There is a symbolic notation for the set of all eigenvalues $$\operatorname{spec} \varphi = \lbrace \lambda \in K \mid \lambda \textrm{ is an eigenvalue} \rbrace$$ There is also a notation for the eigenspace $$V_\lambda = \lbrace \alpha \in V \mid \varphi(\alpha) = \lambda \alpha \rbrace$$ Is there any standard notation for the set of all eigenvectors? So that instead of writing </p> <blockquote> <p>Let $v$ be an eigenvector</p> </blockquote> <p>we could write</p> <blockquote> <p>Let $v \in \dots$</p> </blockquote>
Robert Israel
8,508
<p>I would just write </p> <blockquote> <p>Let $v \in V_\lambda \backslash \{0\}$ for some $\lambda \in \text{spec}\; \phi$</p> </blockquote> <p>This is convenient since anything useful you can say about $v$ is likely to involve the eigenvalue $\lambda$. If for some reason you're determined not to identify the particular eigenvalue, you could say</p> <blockquote> <p>Let $v \in \bigcup_{\lambda \in \text{spec}\; \phi} V_\lambda \backslash \{0\}$</p> </blockquote>
394,294
<p>I would like to know the asymptotics of the following sequences of integrals: <span class="math-container">$$ I_n = \int _0 ^{+ \infty} e^{-t} \left ( \dfrac{t}{1 + t} \right )^n \ dt $$</span></p> <p>I have tried using Laplace's method or the saddle-point method, but I have been unable to conclude anything.</p> <p>I have tried to find out the behaviour using a software computation. Here is what I found: <span class="math-container">\begin{equation} \begin{array}{|c|c|c|} \hline \\ n &amp; I_n &amp; - \ln(I_n) \\ \hline 1 = 2^0 &amp; 1.9269472 \cdot 10^{-1} &amp; 1.646648079928304 \\ \hline 2 = 2^1 &amp; 8.7215768 \cdot 10^{-2} &amp; 2.4393701372220464 \\ \hline 4 = 2^2 &amp; 2.6524946 \cdot 10^{-2} &amp; 3.629669596451481 \\ \hline 8 = 2^3 &amp; 4.7442047 \cdot 10^{-3} &amp; 5.35083146190712 \\ \hline 16 = 2^4 &amp; 4.1306898 \cdot 10^{-4} &amp; 7.7918959541372 \\ \hline 32 = 2^5 &amp; 1.3310510 \cdot 10^{-5} &amp; 11.226956600782769 \\ \hline 64 = 2^6 &amp; 1.0697730 \cdot 10^{-7} &amp; 16.05064913913718 \\ \hline 128 = 2^7 &amp; 1.2206772 \cdot 10^{-10} &amp; 22.826445101120278 \\ \hline 256 = 2^8 &amp; 8.8802107 \cdot 10^{-15} &amp; 32.35495110837777 \\ \hline 512 = 2^9 &amp; 1.3241607 \cdot 10^{-20} &amp; 45.77092301635973 \\ \hline 1024 = 2^{10} &amp; 8.1182635 \cdot 10^{-29} &amp; 64.6808514171555 \\ \hline 2048 = 2^{11} &amp; 2.1076879 \cdot 10^{-40} &amp; 91.3578121482943 \\ \hline 4096 = 2^{12} &amp; 9.3011756 \cdot 10^{-57} &amp; 129.01720949348262 \\ \hline 8192 = 2^{13} &amp; 7.3886987 \cdot 10^{-80} &amp; 182.2068558012616 \\ \hline 16384 = 2^{14} &amp; 1.7003331 \cdot 10^{-112} &amp; 257.358706225986 \\ \hline 32768 = 2^{15} &amp; 1.2703122 \cdot 10^{-158} &amp; 363.5691819464147 \\ \hline 65536 = 2^{16} &amp; 7.9749999 \cdot 10^{-224} &amp; 513.7027491850932 \\ \hline 131072 = 2^{17} &amp; 5.2817088 \cdot 10^{-316} &amp; 725.9526396990342 \\ \hline 262144 = 2^{18} &amp; 2.4716651 \cdot 10^{-446} &amp; 1026.0480594180656 \\ \hline 524288 = 2^{19} &amp; 1.2878115 \cdot 10^{-630} &amp; 1450.3756643021711 \\ \hline \end{array} \end{equation}</span> From this table, it seems that <span class="math-container">$\ln I_n \sim - 2 \sqrt{n}$</span>.</p> <p><strong>Is there any method to study such sequences of integrals?</strong></p>
Carlo Beenakker
11,260
<p>The integral <span class="math-container">$$I_n=\int_0^\infty e^{f(n,t)}\,dt,\;\;f(n,t)=n\ln t-n\ln(1+t)-t,$$</span> has a saddle point at <span class="math-container">$t^*$</span> where <span class="math-container">$\partial f(n,t)/\partial t=0$</span>, <span class="math-container">$$t^\ast=-\tfrac{1}{2}+\tfrac{1}{2}\sqrt{1+4n}.$$</span> For <span class="math-container">$n\rightarrow\infty$</span> the integral tends to <span class="math-container">$$I_n\rightarrow e^{f(n,t^\ast)} = e^{-2\sqrt{n}+{\cal O}(1)},$$</span> so <span class="math-container">$\ln I_n\rightarrow -2\sqrt{n}$</span>, as found numerically.</p>
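<p>This is easy to cross-check numerically: shift the exponent by <span class="math-container">$f(n,t^\ast)$</span> so the integrand is of order one at the peak, then integrate with plain Simpson's rule (the integration window and step count below are ad-hoc but generous choices for these <span class="math-container">$n$</span>):</p>

```python
import math

def log_In(n, steps=40000):
    ts = 0.5 * (-1.0 + math.sqrt(1.0 + 4.0 * n))   # saddle point t*
    f = lambda t: n * (math.log(t) - math.log1p(t)) - t
    fs = f(ts)                                     # f(n, t*), the maximum
    g = lambda t: math.exp(f(t) - fs)              # O(1) at the peak
    a, b = 1e-12, ts + 40.0 * math.sqrt(ts)        # generous right cutoff
    h = (b - a) / steps
    total = g(a) + g(b)
    for i in range(1, steps):
        total += (4 if i % 2 else 2) * g(a + i * h)
    return fs + math.log(total * h / 3.0)

results = {n: log_In(n) for n in (100, 1000, 10000)}
for n, v in results.items():
    print(n, round(v, 3), -2.0 * math.sqrt(n))   # the O(1) gap shrinks relatively
```

<p>The relative deviation of <span class="math-container">$\ln I_n$</span> from <span class="math-container">$-2\sqrt n$</span> decreases as <span class="math-container">$n$</span> grows, consistent with the <span class="math-container">${\cal O}(1)$</span> correction.</p>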
4,290,651
<p>I understand the standard proof that there exists no surjection <span class="math-container">$f: X \to \mathcal{P}(X)$</span>, but I'm not able to tell whether it deals with the case that <span class="math-container">$X = \emptyset$</span> or whether I need to rule this out separately.</p> <p>If I want to prove that <span class="math-container">$|X| &lt; |\mathcal{P}(X)|$</span>, I need to find an injection <span class="math-container">$X \hookrightarrow \mathcal{P}(X)$</span>. In this case, I'm almost certain that I need to rule out the empty set case first. If <span class="math-container">$X = \emptyset$</span>, then the only map <span class="math-container">$X \to \mathcal{P}(X)$</span> is the empty function with codomain <span class="math-container">$\{\emptyset\}$</span>, which is vacuously injective. Otherwise, I send <span class="math-container">$x \mapsto \{x\}$</span> for each <span class="math-container">$x \in X$</span>, which is injective.</p> <p>In the proof that no surjection exists, it is tougher for me to rule out the case of the empty set.</p> <blockquote> <p>Suppose <span class="math-container">$f: X \to \mathcal{P}(X)$</span> is a surjection. Define <span class="math-container">$B = \{x \in X \mid x \not \in f(x)\}$</span>. As <span class="math-container">$f$</span> is surjective, <span class="math-container">$f(a) = B$</span> for some <span class="math-container">$a \in X$</span>. But then <span class="math-container">$a \in B \iff a \not \in f(a) \iff a \not \in B$</span>, which is a contradiction.</p> </blockquote> <p>If <span class="math-container">$X$</span> is empty, then <span class="math-container">$B$</span> is empty. I can't find an <span class="math-container">$a \in X$</span>, so that route is ruled out, but this may be a case where the statement is &quot;vacuously&quot; true because the definition of surjectivity starts with &quot;for all.&quot;</p>
Eric Wofsey
86,856
<p>The empty set needs no special treatment in any of these arguments. For any set <span class="math-container">$X$</span>, you can define a function <span class="math-container">$X\to\mathcal{P}(X)$</span> by <span class="math-container">$x\mapsto\{x\}$</span>. If <span class="math-container">$X$</span> is empty, there are no values of <span class="math-container">$x$</span> to which this applies, but that is irrelevant; you still have a perfectly well-defined function (which is equal to the empty function).</p> <p>Similarly, the argument that a surjection cannot exist works perfectly well when <span class="math-container">$X$</span> is empty. No step of the argument assumes <span class="math-container">$X$</span> is nonempty. The set <span class="math-container">$B$</span> can be defined and is a subset of <span class="math-container">$X$</span>, so by definition, surjectivity of <span class="math-container">$f$</span> says there exists some <span class="math-container">$a\in X$</span> such that <span class="math-container">$f(a)=B$</span>. This step is valid even if <span class="math-container">$X$</span> is empty, since you are simply using the assumption that <span class="math-container">$f$</span> was surjective. (If <span class="math-container">$X$</span> is empty you immediately can reach a contradiction since there is no <span class="math-container">$a\in X$</span>, but there's nothing wrong with that.)</p>
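<p>For small finite <span class="math-container">$X$</span> — including <span class="math-container">$X=\emptyset$</span> — one can even confirm exhaustively that the diagonal set <span class="math-container">$B$</span> is missed by <em>every</em> function <span class="math-container">$f\colon X\to\mathcal{P}(X)$</span>:</p>

```python
from itertools import product, combinations

def powerset(xs):
    return [frozenset(c) for r in range(len(xs) + 1)
            for c in combinations(xs, r)]

checked = 0
for X in [[], [0], [0, 1], [0, 1, 2]]:
    P = powerset(X)
    # every function f: X -> P(X), encoded by its tuple of images
    for images in product(P, repeat=len(X)):
        f = dict(zip(X, images))
        B = frozenset(x for x in X if x not in f[x])
        assert B not in f.values()   # the diagonal set is never in the image
        checked += 1
print(checked, "functions checked")   # 1 + 2 + 16 + 512 = 531 of them
```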
3,072,995
<p>The only thing I know about this equation is <span class="math-container">$y=\frac{x^2+1}{x+1}=x+1-\frac{2x}{x+1}$</span>.</p> <p>Maybe it can be solved by using an inequality.</p>
Michael Rozenberg
190,319
<p>A global minimum does not exist: try <span class="math-container">$x\rightarrow-1^-$</span>, where the expression tends to <span class="math-container">$-\infty$</span>.</p> <p>For <span class="math-container">$x&gt;-1$</span> by AM-GM we obtain: <span class="math-container">$$\frac{x^2+1}{x+1}=\frac{x^2-1+2}{x+1}=x-1+\frac{2}{x+1}=$$</span> <span class="math-container">$$=x+1+\frac{2}{x+1}-2\geq2\sqrt{(x+1)\cdot\frac{2}{x+1}}-2=2\sqrt2-2.$$</span> The equality occurs for <span class="math-container">$x+1=\frac{2}{x+1},$</span> i.e. at <span class="math-container">$x=\sqrt2-1,$</span> so the bound is attained and <span class="math-container">$2\sqrt2-2$</span> is the minimal value on this branch.</p> <p>Thus, the range for <span class="math-container">$x&gt;-1$</span> is <span class="math-container">$[2\sqrt2-2,+\infty).$</span></p> <p>For <span class="math-container">$x&lt;-1$</span> we can get the range in a similar way. </p>
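<p>A numerical sanity check of the branch minimum and of the blow-up near <span class="math-container">$-1$</span> (a grid scan, not part of the argument; grid spacing is an arbitrary choice):</p>

```python
import math

f = lambda x: (x * x + 1) / (x + 1)

# scan the branch x > -1 on a fine grid
m = min(f(-1 + 1e-6 + i * 1e-4) for i in range(200000))
print(m, 2 * math.sqrt(2) - 2)   # grid minimum vs. 2*sqrt(2) - 2

print(f(-1 - 1e-9))   # just left of -1: a huge negative number
```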
3,319,122
<p>This is from Tao's Analysis I: </p> <p><a href="https://i.stack.imgur.com/DYQxE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DYQxE.png" alt="enter image description here"></a></p> <p>So far I managed to show (inductively) that these sets do exist for for every <span class="math-container">$\mathit{N}\in\mathbb{N}$</span> but, I'm finding it hard to show they're unique.</p> <p>One of the things I'm (also) trying to proof is that <span class="math-container">$\mathit{N}\in\mathit{A_N}$</span>, but I'm stuck on that one too. </p> <p>I'm not allowed to use the ordering of the natural numbers.</p>
mathsdiscussion.com
694,428
<p><span class="math-container">$$ f(x)=\frac{x}{e^x-1} $$</span> Compute its first derivative: its numerator, <span class="math-container">$(e^x-1)-xe^x$</span>, vanishes only at <span class="math-container">$x=0$</span>, which is not in the domain. The function is not defined at <span class="math-container">$x=0$</span>, but <span class="math-container">$x=0$</span> is not a vertical asymptote, since <span class="math-container">$y\to 1$</span> as <span class="math-container">$x\to 0$</span>. Moreover <span class="math-container">$f'(x)&lt;0$</span> for all <span class="math-container">$x$</span> in its domain, hence the function is decreasing on each branch.</p>
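<p>A numerical spot check that <span class="math-container">$f'&lt;0$</span> on both branches (central differences at a few sample points; the step size is an arbitrary choice):</p>

```python
import math

f = lambda x: x / (math.exp(x) - 1)

pts = [-20, -5, -1, -1e-3, 1e-3, 1, 5, 20]   # points on both branches
h = 1e-6
derivs = [(f(x + h) - f(x - h)) / (2 * h) for x in pts]
print(derivs)   # every entry is negative
```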
3,398,645
<p>I have a doubt about the value of <span class="math-container">$e^{z}$</span> at <span class="math-container">$\infty$</span>: in one of my books they mention that <span class="math-container">$\lim_{z \to \infty} e^z \to \infty $</span>,</p> <p>but in another book they say it doesn't exist. I am confused now.</p> <p>Since <span class="math-container">$e^{z}$</span> is an entire function, it seems that as <span class="math-container">$z \to \infty $</span>, <span class="math-container">$e^{z}$</span> must go to <span class="math-container">$\infty$</span>.</p> <p>Please help.</p>
Nitin Uniyal
246,221
<p>Unless you are working on the extended complex plane, i.e. <span class="math-container">$\mathbb C\cup$</span> {<span class="math-container">$\infty$</span>}, you can use the substitution <span class="math-container">$z=\frac{1}{w}$</span> and investigate the behaviour at <span class="math-container">$w=0$</span>.</p> <p><span class="math-container">$e^{1/w}$</span> has an essential singularity at <span class="math-container">$w=0$</span>, and the <em>Great Picard theorem</em> says that it will take all values on <span class="math-container">$\mathbb C$</span> in every neighborhood of <span class="math-container">$w=0$</span> with at most one exception.</p>
4,362,221
<p>Show that for all <span class="math-container">$z$</span>, <span class="math-container">$\overline{e^z} = e^\bar{z}$</span></p> <p>I'm a little stuck with this one.</p> <p>First I defined the following to help solve it: <span class="math-container">$$ z = a + bi $$</span> then plugging that in to the question gives <span class="math-container">$$ \overline{e^{a + bi}} = e^\overline{a + bi} $$</span></p> <p>which can then be simplified to <span class="math-container">$$ \overline{e^a(cos(b) + i sin(b))} = e^{a - bi} $$</span> and so, <span class="math-container">$$ e^a(cos(b) - isin(b)) = e^a(cos(-b) + isin(-b)) $$</span> therefore <span class="math-container">$$ cos(b) - isin(b) = cos(-b) + isin(-b) $$</span></p> <p>But these are unequal to each other, are they not? I must've made a mistake somewhere, but I can't figure out where.</p>
D_S
28,556
<p>You can also do this using the power series. Let</p> <p><span class="math-container">$$e^z = \lim\limits_{N \to \infty} \sum\limits_{n=0}^N \frac{z^n}{n!}$$</span></p> <p>and let <span class="math-container">$\sigma(z) = \overline{z}$</span> denote complex conjugation. Since complex conjugation is continuous, it commutes with limits of sequences:</p> <p><span class="math-container">$$\sigma(e^z) = \sigma( \lim\limits_{N \to \infty} \sum\limits_{n=0}^N \frac{z^n}{n!}) = \lim\limits_{N \to \infty} \sigma \Big(\sum\limits_{n=0}^N \frac{z^n}{n!} \Big).$$</span></p> <p>Since <span class="math-container">$\sigma$</span> preserves addition and multiplication, and fixes real numbers, we have</p> <p><span class="math-container">$$\sigma \Big(\sum\limits_{n=0}^N \frac{z^n}{n!} \Big) = \sum\limits_{n=0}^N \frac{\sigma(z)^n}{n!} .$$</span></p> <p>Therefore,</p> <p><span class="math-container">$$\overline{e^z} = \sigma(e^z) = \lim\limits_{N \to \infty} \sum\limits_{n=0}^N \frac{\sigma(z)^n}{n!} = e^{\sigma(z)} = e^{\overline{z}}.$$</span></p>
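<p>The identity is also easy to spot-check numerically (no substitute for the proof, of course):</p>

```python
import cmath

for z in [1.3 - 2.7j, -0.5 + 4j, 2 + 0j, 0j]:
    lhs = cmath.exp(z).conjugate()
    rhs = cmath.exp(z.conjugate())
    assert abs(lhs - rhs) < 1e-12 * max(1.0, abs(lhs))
print("conj(exp(z)) == exp(conj(z)) at all sample points")
```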
556,423
<blockquote> <p>Determine using multiplication/division of power series (and not via WolframAlpha!) the first three terms in the Maclaurin series for $y=\sec x$.</p> </blockquote> <p>I tried to do it for $\tan(x)$ but then got kind of stuck. For our homework we have to do it for the $\sec(x)$. It is kind of tricky. Help would be awesome! </p> <p>Thanks!</p> <p>Taylor series for $\tan(x)$: \begin{align*} \tan (x) &amp;=\frac{\sin(x)}{\cos(x)}\\ &amp;=\frac{x-\frac {x^3}6+\frac{x^5}{120}-\cdots}{1-\frac{x^2}2+\frac{x^4}{24}-\cdots}\\ &amp;=x+\frac{x^3}3+\frac{2x^5}{15}+\cdots \end{align*}</p>
Mohamed
33,307
<p>$\sec(x)=\frac{1}{\cos x}$. The first three nonzero terms involve $1$, $x^2$ and $x^4$. Then we write: $$\cos x=1-\frac{x^2}2 + \frac{x^4}{24} + o(x^4)$$ Putting $$u=-\frac{x^2}2 + \frac{x^4}{24}=-\frac{x^2}{2}\left(1-\frac{x^2}{12}\right)$$ we have, writing $\operatorname{Trunc}_4$ for truncation to order $4$: $$\sec(x)=(1+u)^{-1} =\operatorname{Trunc}_4 (1 -u +u^2-u^3+u^4)=\operatorname{Trunc}_4 (1 -u +u^2)$$</p> <p>Since $\operatorname{Trunc}_4(u)=u$ and $\operatorname{Trunc}_4(u^2)=\frac{x^4}{4}$ you can finish:</p> <p>$$\sec x = 1+\frac{x^2}{2}-\frac{x^4}{24} + \frac{x^4}4 + o(x^4)= 1+\frac{x^2}{2} + \frac{5x^4}{24} + o(x^4) $$</p>
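<p>The same division of power series can be carried out mechanically with exact rational coefficients: if $\sec x=\sum s_n x^n$ and $\cos x=\sum c_n x^n$, then $(\sec x)(\cos x)=1$ forces $s_0=1/c_0$ and $s_n=-\frac{1}{c_0}\sum_{k=1}^n c_k s_{n-k}$. A short sketch:</p>

```python
from fractions import Fraction as F

# coefficients of cos x up to x^4: 1, 0, -1/2, 0, 1/24
c = [F(1), F(0), F(-1, 2), F(0), F(1, 24)]

# invert the series term by term via the recurrence above
s = [1 / c[0]]
for n in range(1, 5):
    s.append(-sum(c[k] * s[n - k] for k in range(1, n + 1)) / c[0])

print([str(t) for t in s])   # ['1', '0', '1/2', '0', '5/24']
```

<p>which recovers $\sec x = 1+\frac{x^2}{2}+\frac{5x^4}{24}+o(x^4)$.</p>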
493,600
<p>I am presented with the following task:</p> <p>Can you use the chain rule to find the derivatives of $|x|^4$ and $|x^4|$ at $x = 0$? Do the derivatives exist at $x = 0$? I solved the task in a rather straight-forward way, but I am worried that there's more to the task:</p> <p>First of all, both functions are a variable raised to an even power, so given that $x$ is a real number, we have that $|x^4| = |x|^4$. In order to force practical use of the chain rule, we write $|x|^4 = \sqrt{x^2}^4$. We are using the fact that raising a number to an even power, like taking the absolute value, gives us nonnegative numbers exclusively. If we choose the chain $u = x^2$, thus $g(u) = \sqrt{u}^4$, we have that $u' = 2x$ and $g'(u) = (u^2)' = 2u$. Then we have that the derivative of the function, which I for practical reasons will name $f(x)$, is $f'(x) = 2x^2 * 2x = 4x^3$. We see that the general power rule applies here, seeing as we work with a variable raised to an even power. The derivative at the point $x = 0$ is $4 * 0^3 = \underline{\underline{0}}$. Thus we can conclude that the derivative exists at $x = 0$.</p> <p>Is this fairly logical? I'm having a hard time seeing that there is anything more to this task, but it feels like it went a bit too smoothly.</p>
2'5 9'2
11,123
<p>It's worth noting that $|x^4|$ and $|x|^4$ equal $x^4$, but no matter. I'll assume that any simplifications like that are off-limits throughout.</p> <p>One way to express the derivative of $|x|$ is $\frac{|x|}{x}$. So if we applied the chain rule to $|x^4|$ we have $\frac{|x^4|}{x^4}\cdot4x^3$, which is undefined at $0$. However this expression <em>is</em> defined in a neighborhood of $0$, and its limit exists, because $\frac{|x^4|}{x^4}$ is bounded and $4x^3$ approaches $0$.</p> <p>Something similar could be done with $|x|^4$.</p>
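<p>A quick numerical illustration that the difference quotient of $|x|^4$ at $0$ tends to $0$ from both sides (it shrinks like $h^3$):</p>

```python
f = lambda x: abs(x) ** 4

for h in [1e-1, 1e-2, 1e-3, 1e-4]:
    # right-hand and left-hand difference quotients at 0
    print(h, (f(h) - f(0)) / h, (f(-h) - f(0)) / (-h))
```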
355,438
<p>I want to find the geometric locus of points $M$ such that $|MA|^2 |MB|^2=a^2$, where $|AB|=2a$. Solving the algebraic equation is not hard, but I can't figure out the shape of this curve. Can anybody help?</p>
Zoltán Kovács
427,255
<p>Use GeoGebra's <code>LocusEquation</code> command <a href="https://i.stack.imgur.com/G239M.png" rel="nofollow noreferrer">to create an implicit locus curve</a>. Then you can drag the free points and check how the curve is dynamically changing.</p>
3,836,059
<p>The following question is a last year's Statistics exam question I tried to solve (without any luck). Any help would be appreciated. Thanks in advance.</p> <p>An Atomic Energy Agency is worried that a particular nuclear plant has leaked radio-active material. They do <span class="math-container">$5$</span> independent Geiger counter measurements in the direct neighbourhood of the reactor. They find the following measurements (per unit time):</p> <p>observation i: 1 2 3 4 5</p> <p>count <span class="math-container">$x_i$</span> : 1 2 6 2 7</p> <p>(I did not know how to format this as a table.)</p> <p>The natural background radiation has an average of <span class="math-container">$λ = 2$</span> (per unit time). The agency would only be worried if the radiation rate were of the order of <span class="math-container">$λ = 5$</span>.</p> <p>They therefore decide to test: <span class="math-container">$H_0 : λ ≤ 2$</span> versus <span class="math-container">$H_1 : λ &gt; 2$</span></p> <p>They want to devise the optimal test to see if there is any reason for alarm. 
Assuming that the data are realizations of a sample from a Poisson distribution:</p> <p><span class="math-container">$X_1, ..., X_5 ∼ POI(λ)$</span></p> <p>with density: <span class="math-container">$f(x) = e^{-λ}\frac{λ^{x}}{x!}$</span></p> <p>I have two questions I need some help with:</p> <ol> <li><p>Determine a sufficient statistic for the Poisson sample and show that it has a monotone likelihood ratio.</p> </li> <li><p>Derive the uniformly most powerful test of level <span class="math-container">$α = 0.0487$</span> for the test problem.</p> </li> </ol> <p>Because we have a Poisson distribution, I know that we can use: <span class="math-container">$\sum_{i = 1}^{5}X_i \sim Poi(5λ)$</span></p> <p>For the first question, my attempt:</p> <p><span class="math-container">$p(x_1,...,x_5|λ) = \prod_{i = 1}^{5} e^{-λ}\frac{λ^{x_i}}{x_i!} = e^{-5λ}\frac{λ^{x_1 + x_2 + x_3 + x_4 + x_5}}{x_1!x_2!x_3!x_4!x_5!} = h(x_1 +...+ x_5|λ) * g(x_1,x_2,x_3,x_4,x_5) $</span></p> <p><span class="math-container">$h(x_1 +...+ x_5|λ) = e^{-5λ}λ^{x_1 + x_2 + x_3 + x_4 + x_5} $</span></p> <p><span class="math-container">$g(x_1,x_2,x_3,x_4,x_5) = \frac{1}{x_1!x_2!x_3!x_4!x_5!}$</span></p> <p>It follows by the factorization theorem that <span class="math-container">$T(X_1, X_2, X_3,X_4,X_5) = X_1+X_2+X_3+X_4+X_5$</span> is a sufficient statistic.</p> <p>I am not sure how to construct a proof to show it has a monotone likelihood ratio.</p>
tommik
791,458
<blockquote> <p>Show it has a monotone LR</p> </blockquote> <p>Let's set <span class="math-container">$\theta_1 &lt; \theta_2$</span>.</p> <p>The likelihood ratio (LR) is the following (here <span class="math-container">$n=5$</span>):</p> <p><span class="math-container">$$\frac{L(\theta_1;\mathbf{x})}{L(\theta_2;\mathbf{x})}=\frac{e^{-n\theta_1}\theta_1^{\Sigma x}}{e^{-n\theta_2}\theta_2^{\Sigma x}}=e^{n(\theta_2-\theta_1)}(\frac{\theta_1}{\theta_2})^{\Sigma x}$$</span></p> <p>which is a monotone decreasing function of <span class="math-container">$\Sigma x$</span>, since <span class="math-container">$\theta_1/\theta_2&lt;1$</span>. Equivalently, <span class="math-container">$L(\theta_2;\mathbf{x})/L(\theta_1;\mathbf{x})$</span> is increasing in <span class="math-container">$\Sigma x$</span>, so the family has a monotone likelihood ratio in the sufficient statistic <span class="math-container">$T=\Sigma X_i$</span>.</p>
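<p>For part 2, by the Karlin–Rubin reasoning the UMP test rejects for large values of the sufficient statistic <span class="math-container">$T=\sum X_i\sim \mathrm{Poi}(5\lambda)$</span>; the stated level <span class="math-container">$\alpha=0.0487$</span> is (up to rounding) exactly <span class="math-container">$P(T\ge 16\mid\lambda=2)$</span>. A quick check with the observed data, using only the Poisson pmf:</p>

```python
import math

def poisson_sf(k, mu):
    """P(X > k) for X ~ Poisson(mu), by direct pmf summation."""
    p, cdf = math.exp(-mu), 0.0   # p starts as the pmf at 0
    for j in range(k + 1):
        cdf += p
        p *= mu / (j + 1)
    return 1.0 - cdf

counts = [1, 2, 6, 2, 7]
T = sum(counts)               # sufficient statistic; T ~ Poi(5*lam)
mu0 = 5 * 2                   # boundary of H0: lam = 2
size = poisson_sf(15, mu0)    # attained size of the test {T >= 16}
print(T, round(size, 4))      # 18 0.0487
print("reject H0" if T >= 16 else "retain H0")
```

<p>Since <span class="math-container">$T=18\ge 16$</span>, the observed data fall in the rejection region.</p>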
3,263,795
<p>Let <span class="math-container">$A = (a_{ij})$</span> be an invertible <span class="math-container">$n\times n$</span> matrix. I wonder how to prove that <span class="math-container">$A$</span> is a product of elementary matrices. I suspect that we need to transform it into the identity matrix by using elementary row operations, but how to do it exactly?</p> <p><strong>P.S.</strong> I've checked questions which could be considered similar and neither of them deals with this exact (general) situation. Please don't mark this question as a duplicate unless you find a precise answer.</p>
John Omielan
602,049
<p>You are correct with your idea stated in the comment, i.e., the line bisects the chord of <span class="math-container">$(0,0)$</span> to <span class="math-container">$(a,b)$</span>, except you didn't mention the line is perpendicular to this chord. This is because the equal length (i.e., the radius) lines from the center of the circle to each end point of this chord, plus the chord line itself, form an isosceles triangle. Thus, the perpendicular bisector of the chord goes through the opposite triangle end-point, i.e., the center of the circle.</p>
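<p>The perpendicular-bisector fact is easy to verify numerically: for any chord <span class="math-container">$pq$</span> of a circle with center <span class="math-container">$c$</span> and chord midpoint <span class="math-container">$m$</span>, the dot product <span class="math-container">$(p-q)\cdot(c-m)$</span> vanishes:</p>

```python
import math, random

random.seed(1)
for _ in range(100):
    cx, cy = random.uniform(-5, 5), random.uniform(-5, 5)
    r = random.uniform(0.5, 5)
    t1, t2 = random.uniform(0, 2 * math.pi), random.uniform(0, 2 * math.pi)
    p = (cx + r * math.cos(t1), cy + r * math.sin(t1))
    q = (cx + r * math.cos(t2), cy + r * math.sin(t2))
    m = ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)
    dot = (p[0] - q[0]) * (cx - m[0]) + (p[1] - q[1]) * (cy - m[1])
    assert abs(dot) < 1e-9   # center lies on the perpendicular bisector
print("verified for 100 random chords")
```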
739,301
<p>Show that a Dirac delta measure on a topological space is a Radon measure.</p> <p>Show that the sum of two Radon measures is also a Radon measure. Please help me.</p>
William
68,668
<p>You can also apply the Riesz representation theorem: $\delta(f)=f(x_0)$ is a positive linear functional, so it must be given by a Radon measure. The second problem is obvious.</p>