Columns: qid — int64 (1 to 4.65M); question — string (length 27 to 36.3k); author — string (length 3 to 36); author_id — int64 (−1 to 1.16M); answer — string (length 18 to 63k)
332,927
<p>There are two bowls, one with black olives and one with green. A boy takes 20 green olives and puts them in the black olive bowl, mixes the black olive bowl, then takes 20 olives from it and puts them in the green olive bowl. The question is -</p> <p>Are there more green olives in the black olive bowl, or black olives in the green olive bowl? Answer with reason.</p>
joriki
6,622
<p>There are just as many green olives in the black olive bowl as there are black olives in the green olive bowl. The number of olives in each bowl hasn't changed; hence there has merely been an exchange of some black olives for an equal number of green olives.</p>
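joriki's exchange argument is easy to confirm with a quick simulation (the bowl sizes here are arbitrary assumptions, and the function name is ours):

```python
import random

def olive_exchange(n_black=100, n_green=100, moved=20, seed=0):
    """Return (green olives now in the black bowl, black olives now in the green bowl)."""
    rng = random.Random(seed)
    black_bowl = ["black"] * n_black + ["green"] * moved  # 20 green olives added
    green_bowl = ["green"] * (n_green - moved)
    rng.shuffle(black_bowl)                               # mix the black bowl
    taken = [black_bowl.pop() for _ in range(moved)]      # take 20 olives at random
    green_bowl += taken
    return black_bowl.count("green"), green_bowl.count("black")
```

Whatever the shuffle does, the two counts come out equal, exactly as the answer predicts.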
194
<p>In many parts of the world, the majority of the population is uncomfortable with math. In a few countries this is not the case. We would do well to change our education systems to promote a healthier relationship with math. But in the present situation, how can we help the students who come to our classes, which they are required to take, with fear and loathing?</p> <p><strong>How do we help students overcome their math anxieties?</strong></p>
Sue VanHattum
60
<p>When I posted this question over a year ago, I meant to post my own answer after giving others a bit of time to post. I apparently forgot.</p> <p>My students have had some success in decreasing their anxiety with books like <em><a href="http://www.betterworldbooks.com/mind-over-math-put-yourself-on-the-road-to-success-by-freeing-yourself-from-math-anxiety-id-9780070352810.aspx" rel="noreferrer">Mind Over Math</a></em> (Kogelman, Warren), <em><a href="http://www.betterworldbooks.com/overcoming-math-anxiety-id-9780393313079.aspx" rel="noreferrer">Overcoming Math Anxiety</a></em> (Tobias), and <em><a href="http://www.betterworldbooks.com/managing-the-mean-math-blues-id-9780130431691.aspx" rel="noreferrer">Managing the Mean Math Blues</a></em> (Ooten).</p> <p>I also wanted something that directly addressed their test anxiety in math, and (after much research on math anxiety and the principles of creating guided visualizations) I developed a 14-minute guided visualization, which I titled <a href="http://mathmamawrites.blogspot.com/2009/09/math-relax-guided-visualization-for.html" rel="noreferrer">Math Relax</a>. It is available online for free. Not every student finds it helpful, but some students have felt that it changed their outlook dramatically.</p>
1,723,942
<p>The four color theorem declares that any map in the plane (and, more generally, spheres and so on) can be colored with four colors so that no two adjacent regions have the same colors. </p> <p>However, it's not clear what constitutes a map, or a region in a map. Is this actually a theorem in graph theory, something about every planar graph with some properties admitting a certain coloring? </p> <p>Or does one really prove it using regions in the plane (which I guess we take to have non-fractal boundaries, or something). What is the precise definition? </p>
Plutoro
108,709
<p>Here is a formal statement. Let $G=(V,E)$ be a finite planar graph with vertices $V$ and edges $E$. It is possible to find a collection $P=\{S_1,S_2,S_3,S_4\}$ of disjoint subsets of $V$ such that $V=S_1\cup S_2\cup S_3\cup S_4$ and for every $a,b\in S_i$, there is no edge in $E$ joining $a$ and $b$.</p> <p>The map is the graph, and the regions are the vertices. </p>
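The formal statement above can be checked mechanically for a given partition: no edge may have both endpoints in the same class. A small illustrative sketch (names are ours; $K_4$ is a planar graph that needs all four colours):

```python
# Check whether a partition of the vertices into four classes is a proper
# 4-colouring: the classes must cover V and no edge may lie inside one class.
def is_proper_four_colouring(vertices, edges, classes):
    covered = set().union(*classes)
    return covered == set(vertices) and all(
        not any(a in s and b in s for s in classes) for a, b in edges
    )

V = [0, 1, 2, 3]
E = [(a, b) for a in V for b in V if a < b]   # K4: every pair joined
good = [{0}, {1}, {2}, {3}]
bad = [{0, 1}, {2}, {3}, set()]               # 0 and 1 are adjacent
```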
1,281,967
<p>This is a dumb question I know.</p> <p>If I have matrix equation $Ax = b$ where $A$ is a square matrix and $x,b$ are vectors, and I know $A$ and $b$, I am solving for $x$.</p> <p>But multiplication is not commutative in matrix math. Would it be correct to state that I can solve for $A^{-1}Ax = A^{-1}b \implies x = A^{-1}b$?</p>
OKPALA MMADUABUCHI
98,218
<p>If $A$ is invertible, the answer is yes. Otherwise, it doesn't make sense to write that. Also in that case you may not have any solution. </p>
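When $A$ is invertible, $x=A^{-1}b$ can be computed directly; here is a minimal exact-arithmetic sketch for the $2\times 2$ case (the function name is ours, and in practice one would use a linear-algebra library's solver):

```python
from fractions import Fraction as F

def solve_2x2(A, rhs):
    """x = A^{-1} b for an invertible 2x2 matrix, in exact rational arithmetic."""
    (a, b), (c, d) = A
    det = a * d - b * c
    if det == 0:
        raise ValueError("A is not invertible; x = A^{-1} b makes no sense")
    # Explicit 2x2 inverse applied to rhs.
    return [F(d * rhs[0] - b * rhs[1], det),
            F(-c * rhs[0] + a * rhs[1], det)]
```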
669,582
<p>Let $(X,τ)$ be a topological space. Prove that a subset $A$ of $X$ is dense if and only if every nonempty open subset of $X$ contains some point of $A$.</p> <p>This is what I got.</p> <p>Let $(X,τ)$ be a topological space.</p> <p>Part 1: Assume that a subset $A$ of $X$ is dense; show that every nonempty open subset of $X$ contains some point of $A$. Let $a∈A$; by axiom i) of the closure of a set, $A⊂Cl A$, so $a∈Cl A$. Since $A$ is dense, $Cl A=X$, so $a∈X$. Let $ℵ_x$ be the collection of open neighborhoods in $X$; by axiom ii) of the open neighborhood system, if $a∈X$ then $a∈N$ for each $N∈ℵ_x$. In other words, every nonempty open subset of $X$ contains some point of $A$.</p> <p>Part 2: Assume that every nonempty open subset of $X$ contains some point of $A$; show that the subset $A$ of $X$ is dense.</p> <p>I'm kinda stuck on how to show $Cl A=X$. I know that I need to show $Cl A⊂X$ and the other way around, but I'm not sure how.</p>
Hagen von Eitzen
39,174
<p>The complement of the closed set $\operatorname{Cl}A$ is open and disjoint to $A$. By assumption the only open set disjoint to $A$ is the empty set.</p>
1,447,852
<p>Compute this sum:</p> <p><span class="math-container">$$\sum_{k=0}^{n} k \binom{n}{k}.$$</span></p> <p>I tried but I got stuck.</p>
Calvin Khor
80,734
<p><strong>Hint.</strong> Let $X\sim \text{Bin}(n,p)$. Then it is known that $$\Bbb EX = np$$ But of course, $\Bbb EX = \sum_{k=0}^n k \Bbb P(X=k) = \sum_{k=0}^n k \binom{n}{k}p^k(1-p)^{n-k}$. So</p> <p>$$np = \sum_{k=0}^n k \binom{n}{k}p^k(1-p)^{n-k}$$</p> <p>Try finishing from here.</p>
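Taking $p=\tfrac12$ in the hint yields $\sum_{k=0}^n k\binom{n}{k} = n\,2^{n-1}$; this closed form is implied by, not stated in, the answer, and a quick check confirms it:

```python
from math import comb

def weighted_binomial_sum(n):
    """Direct evaluation of sum_{k=0}^{n} k * C(n, k)."""
    return sum(k * comb(n, k) for k in range(n + 1))
```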
4,036,761
<blockquote> <p>Let G be the additive group of all polynomials in <span class="math-container">$x$</span> with integer coefficients. Show that G is isomorphic to the group <span class="math-container">$\mathbb{Q}$</span>* of all positive rationals (under multiplication).</p> </blockquote> <p>This question is from my abstract algebra assignment and I am unable to prove it. I cannot work out what I should map <span class="math-container">$a_0 +a_1 x + a_2 x^2 +...+ a_n x^n $</span> to, so I am unable to proceed.</p> <p>Can you please give a hint?</p>
lhf
589
<p><em>Hint:</em> The fundamental theorem of arithmetic implies that <span class="math-container">$$ \mathbb{Q}^\times_+ \cong \bigoplus_p \mathbb Z $$</span></p>
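A concrete way to see the isomorphism in the hint: send a positive rational to its vector of prime exponents, which has finite support. A trial-division sketch for small inputs (the function name is ours):

```python
from fractions import Fraction

def exponent_vector(q):
    """Send a positive rational to its (finitely supported) prime-exponent map."""
    q = Fraction(q)
    assert q > 0
    vec = {}
    # Exponents from the numerator count +1, from the denominator -1.
    for n, sign in ((q.numerator, 1), (q.denominator, -1)):
        p = 2
        while n > 1:
            while n % p == 0:
                vec[p] = vec.get(p, 0) + sign
                n //= p
            p += 1
    return {p: e for p, e in vec.items() if e != 0}
```

Multiplication of rationals corresponds to pointwise addition of these vectors, which is exactly the direct-sum statement.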
2,426,897
<p>Let $\mathbb{H}$ be the ring of real quaternions and $Z(\mathbb{H})$ be its center. Of course $Z(\mathbb{H})=\mathbb{R}$. </p> <p>Suppose $a+bi+cj+dk$, $x+yi+zj+wk \in \mathbb{H}$ such that $(a+bi+cj+dk)(x+yi+zj+wk) \in Z(\mathbb{H})$. </p> <p>Does it imply $(x+yi+zj+wk)(a+bi+cj+dk) \in Z(\mathbb{H})$?</p>
Duncan Ramage
405,912
<p>$44^2 = 1936 &lt; 2017 &lt; 2025 = 45^2$.</p> <p>Really, I don't think there's much to this one except for "try squaring small integers until you find the right ones".</p>
2,426,897
<p>Let $\mathbb{H}$ be the ring of real quaternions and $Z(\mathbb{H})$ be its center. Of course $Z(\mathbb{H})=\mathbb{R}$. </p> <p>Suppose $a+bi+cj+dk$, $x+yi+zj+wk \in \mathbb{H}$ such that $(a+bi+cj+dk)(x+yi+zj+wk) \in Z(\mathbb{H})$. </p> <p>Does it imply $(x+yi+zj+wk)(a+bi+cj+dk) \in Z(\mathbb{H})$?</p>
Robert Soupe
149,436
<p>Try graphing $\sqrt x$ for $x \geq 0$. You should see a fairly smooth curve that goes upwards, which means that if $a &lt; b$ and they're both positive, then $\sqrt a &lt; \sqrt b$.</p> <p>From this, it's clear that the integers you want are $\lfloor \sqrt{2017} \rfloor$ and $\lceil \sqrt{2017} \rceil$. A calculator readily tells us that $\sqrt{2017}$ is approximately 44.911, so the answer is 44 and 45.</p> <p>If you really want to do it by prime factorization, look at the divisors of 2016. Notice that $2016 = 42 \times 48$. Then $43 \times 47 = 2021$ and $44 \times 46 = 2024$, which should strongly suggest 45 is the greater of the integers you're looking for.</p>
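The floor and ceiling the answer computes can be checked without floating point at all; `math.isqrt` gives the integer square root directly:

```python
from math import isqrt

n = 2017
lo = isqrt(n)      # floor(sqrt(2017))
hi = lo + 1        # 2017 is not a perfect square, so the ceiling is lo + 1
```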
2,426,897
<p>Let $\mathbb{H}$ be the ring of real quaternions and $Z(\mathbb{H})$ be its center. Of course $Z(\mathbb{H})=\mathbb{R}$. </p> <p>Suppose $a+bi+cj+dk$, $x+yi+zj+wk \in \mathbb{H}$ such that $(a+bi+cj+dk)(x+yi+zj+wk) \in Z(\mathbb{H})$. </p> <p>Does it imply $(x+yi+zj+wk)(a+bi+cj+dk) \in Z(\mathbb{H})$?</p>
Will Jagy
10,400
<p>I can only imagine this was intended to be about $$ (10a + 5)^2 = 100 a (a+1) + 25, $$ $$ 15^2 = 225, $$ $$ 25^2 = 625, $$ $$ 35^2 = 1225, $$ $$ 45^2 = 2025. $$ Then $$ 44^2 = 2025 - 2 \cdot 45 + 1 = 2025 - 90 + 1 &lt; 2017. $$</p> <p>EXAMPLE: factor $10001 = 10^4 + 1$</p> <p>$$ 105^2 = 11025 $$ $$ 105^2 - 10001 = 1024 $$ $$ 1024 = 32^2 $$ $$ 10001 = 105^2 - 32^2$$ $$ 10001 = (105 - 32)(105 + 32) = 73 \cdot 137 $$ </p>
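The $10001 = 105^2 - 32^2$ example is Fermat's difference-of-squares method; a small sketch of the search it performs (the function name is ours, and the loop is only practical when the factors are close together):

```python
from math import isqrt

def fermat_factor(n):
    """Factor odd n by finding a with a^2 - n a perfect square:
    n = a^2 - b^2 = (a - b)(a + b)."""
    a = isqrt(n)
    if a * a < n:
        a += 1                 # smallest a with a^2 >= n
    while True:
        b2 = a * a - n
        b = isqrt(b2)
        if b * b == b2:        # a^2 - n is a perfect square
            return a - b, a + b
        a += 1
```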
1,994,922
<p>Given $B_1, B_2,\ldots$ are independent and bounded variables with $E(B_i) = 0$ for all $i=1,2,\ldots$. Define $S_n = B_1+ B_2+\ldots + B_n$ with variance $s_n^2\rightarrow \infty$. Prove that $\frac{S_n}{s_n}$ has a central limit.</p> <p><strong>My attempt:</strong> Given these conditions, without the i.i.d. property, I try to prove that this sequence satisfies the Lindeberg condition; then, applying the Lindeberg-Feller theorem, we're done. So I need to show that for each positive $\varepsilon$, $$\lim_{n\to +\infty} \frac 1{\sum_{j=1}^n\sigma_j^2 }\sum_{j=1}^n\mathbb E\left[B_j^2\mathbf 1\left\{ \left|B_j\right|^2\gt \varepsilon\sum_{i=1}^n \sigma_i^2\right\}\right]=0 .$$</p> <p>Since $\sigma_i^{2} = E(B_i^2)$ for all $i=1,2,\ldots$ and the $B_i$ are bounded variables, there is a constant $M$ with $|B_j|\leq M$ for all $j$, so $B_j^2\leq M^2$. Thus, $$\lim_{n\to +\infty} \frac 1{\sum_{j=1}^n\sigma_j^2 }\sum_{j=1}^n\mathbb E\left[B_j^2\mathbf 1\left\{ \left|B_j\right|^2\gt \varepsilon\sum_{i=1}^n \sigma_i^2\right\}\right]\leq \lim_{n\to +\infty} \frac 1{\sum_{j=1}^n\sigma_j^2 } M^2\sum_{j=1}^n \mathbb E\left[\mathbf 1\left\{ \left|B_j\right|^2\gt \varepsilon\sum_{i=1}^n \sigma_i^2\right\}\right] .$$ As $n\rightarrow \infty$, $s_n^2\rightarrow \infty$, so each indicator $\mathbf 1\left\{ \left|B_j\right|^2\gt \varepsilon\sum_{i=1}^n \sigma_i^2\right\}\rightarrow 0$. And we would be done if we could show that $\lim_{n\rightarrow \infty} \sum_{j=1}^n \mathbb E\left[\mathbf 1\left\{ \left|B_j\right|^2\gt \varepsilon\sum_{i=1}^n \sigma_i^2\right\}\right]= 0$. But this might not be true (a counterexample being the harmonic series with $p=1$) unless there is something I was missing.</p> <p><strong>My question:</strong> Could someone please help me overcome this last step? In case I was on the wrong track, please let me know as well.</p>
Remy
325,426
<p>Since each <span class="math-container">$X_k$</span> is bounded then for any <span class="math-container">$\epsilon&gt;0$</span> there exists <span class="math-container">$N\in\mathbb N$</span> such that</p> <p><span class="math-container">$$\mathbf{1}\left(|X_k|&gt;\epsilon s_n\right)=0$$</span></p> <p>for all <span class="math-container">$k$</span> and all <span class="math-container">$n\geq N$</span>. Hence</p> <p><span class="math-container">$$\lim_{n\rightarrow\infty}\sum_{k=1}^n\mathbb E\left[\mathbf{1}\left(|X_k|&gt;\epsilon s_n\right)\right]=0$$</span></p> <p>Thus taking <span class="math-container">$M$</span> as the bound for each <span class="math-container">$|X_k|$</span> we have</p> <p><span class="math-container">\begin{align*} \lim_{n\rightarrow\infty}\frac{1}{s_n^2}\sum_{k=1}^n\int_{|X_k|&gt;\epsilon s_n}X_k^2dP &amp;=\lim_{n\rightarrow\infty}\frac{1}{s_n^2}\sum_{k=1}^n\mathbb E\left[X_k^2\mathbf{1}\left(|X_k|&gt;\epsilon s_n\right)\right]\\\\ &amp;\leq\lim_{n\rightarrow\infty}\underbrace{\frac{1}{s_n^2}}_{\rightarrow0}M^2\underbrace{\sum_{k=1}^n\mathbb E\left[\mathbf{1}\left(|X_k|&gt;\epsilon s_n\right)\right]}_{\rightarrow0}\\\\ &amp;\rightarrow 0 \end{align*}</span></p> <p>Therefore the Lindeberg condition is met and we have that <span class="math-container">$\frac{S_n}{s_n}$</span> has a central limit.</p>
1,905,863
<p>I'm on the section of my book about separable equations, and it asks me to solve this:</p> <p>$$\frac{dy}{dx} = \frac{ay+b}{cy+d}$$</p> <p>So I must separate it into something like $f(y)\frac{dy}{dx} + g(x) = \text{constant}$</p> <p>(note that there is no $g(x)$ here)</p> <p>but I don't think it's possible. Is there something I'm missing?</p>
Ian Miller
278,461
<p>You can always rearrange it as follows:</p> <p>$$\frac{dy}{dx} = \frac{ay+b}{cy+d}$$</p> <p>$$\frac{cy+d}{ay+b}\frac{dy}{dx} = 1$$</p> <p>It is clearly separated and you can integrate with respect to $x$ to get:</p> <p>$$\int\frac{cy+d}{ay+b}\frac{dy}{dx}dx = \int dx$$</p> <p>$$\int\frac{cy+d}{ay+b}dy = x$$</p> <p>The left hand side requires partial fractions to complete.</p> <p><strong>Edit</strong> Added due to comment about partial fractions.</p> <p>Firstly carry out the division so the numerator has smaller degree than the denominator.</p> <p>$$\int\frac{cy+d}{ay+b}dy = x$$</p> <p>$$\int\frac{\frac{c}{a}ay+d}{ay+b}dy = x$$</p> <p>$$\int\frac{\frac{c}{a}ay+\frac{c}{a}b+d-\frac{c}{a}b}{ay+b}dy = x$$</p> <p>$$\int\frac{\frac{c}{a}(ay+b)+d-\frac{c}{a}b}{ay+b}dy = x$$</p> <p>$$\int\frac{\frac{c}{a}(ay+b)}{ay+b}+\frac{d-\frac{bc}{a}}{ay+b}dy = x$$</p> <p>$$\int\frac{c}{a}+\frac{d-\frac{bc}{a}}{ay+b}dy = x$$</p> <p>$$\frac{c}{a}y+(d-\frac{bc}{a})\frac{\log(ay+b)}{a}=x+K$$</p> <p>$$\frac{c}{a}y+\frac{(ad-bc)\log(ay+b)}{a^2}=x+K$$</p> <p><strong>Aside</strong>: with separable DEs it is probably better to think about how to rearrange it into the form:</p> <p>$$f(y)\frac{dy}{dx}=g(x)$$</p> <p>This is similar to your form but there is no need to have a distinct constant as it can always be considered part of $g(x)$.</p>
327,750
<p>$$\bigcup_{n=1}^\infty A_n = \bigcup_{n=1}^\infty (A_{1}^c \cap\cdots\cap A_{n-1}^c \cap A_n)$$</p> <p>The result is obvious enough, but how does one prove it?</p>
Brian M. Scott
12,042
<p>For $n\in\Bbb Z^+$ let $B_n=A_n\setminus\bigcup_{k=1}^{n-1}A_k=A_n\cap\bigcap_{k=1}^{n-1}A_k^c$; you want to show that $$\bigcup_{n\ge 1}A_n=\bigcup_{n\ge 1}B_n\;.$$</p> <p>Clearly it suffices to show that </p> <p>$$\bigcup_{k\ge 1}A_k\subseteq\bigcup_{k\ge 1}B_k\;.$$</p> <p>For $x\in\bigcup_{k\ge 1}A_k$ let $n(x)=\min\{k\in\Bbb Z^+:x\in A_k\}$; then $x\in B_{n(x)}\subseteq\bigcup_{k\ge 1}B_k$, and you’re done.</p>
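The sets $B_n$ in the answer can be computed mechanically; a sketch of this "disjointification" for finitely many sets (the function name is ours):

```python
def disjointify(sets):
    """B_n = A_n minus the union of all earlier A_k, as in the answer."""
    seen, out = set(), []
    for a in sets:
        out.append(a - seen)   # keep only the elements not seen in earlier sets
        seen |= a
    return out
```

The output sets are pairwise disjoint and have the same union as the input, which is the point of the identity.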
1,074,534
<p>How can I get started on this proof? I was thinking originally:</p> <p>Let $ n $ be odd (proving by contradiction). Then I don't know how to proceed.</p>
ajotatxe
132,456
<p>If $n$ is odd, let $p$ be its smallest prime divisor, and $p^r$ the greatest power of $p$ that divides $n$. Then, the number $$\frac{2^rn}{p^r}$$ has the same number of divisors, it is smaller than $n$ and it is even.</p>
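The answer's construction can be checked for a concrete odd $n$ (function names are ours; the naive divisor count is only meant for small inputs):

```python
def num_divisors(n):
    """Naive divisor count, fine for small n."""
    return sum(1 for d in range(1, n + 1) if n % d == 0)

def swap_smallest_prime_for_two(n):
    """For odd n > 1: replace the full power p^r of the smallest prime p by 2^r,
    as in the answer. Returns an even number < n with the same divisor count."""
    p = next(d for d in range(3, n + 1) if n % d == 0)  # smallest (prime) divisor
    r = 0
    while n % p == 0:
        n //= p
        r += 1
    return n * 2**r
```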
187,618
<p>I am trying to solve the following problem.</p> <p>The time $T$ required to repair a machine is an exponentially distributed random variable with mean 10 hours.</p> <p>a) What is the probability that a repair takes at least 15 hours given that its duration exceeds 12 hours? b) What is the probability that the combined time to repair two machines is at least 20 hours?</p> <p><strong>Solution Attempt</strong></p> <p>Since the mean is given to be 10 hours, $\lambda = \dfrac {1}{10}$ and the probability distribution of the time is given as $e^{-\lambda t} = e^{-\dfrac {1}{10} t} $ </p> <p>a) $P(T&gt;15 |T&gt;12) = P(0 $ repairs in $ (12, 15]) = e^{-\dfrac {1}{10} 3}$</p> <p>b) Let $T_1$ be the r.v. representing the time to repair the first machine and $T_2$ the r.v. representing the time to repair the second machine. So we seek to evaluate $P(T_1 + T_2 &gt; 20)$. We know both of these times should be independent, as the exponential distribution is memoryless, but I am not sure how to proceed from here. </p> <p>Any help would be much appreciated. </p>
Dilip Sarwate
15,941
<p>While the numerical answer you get for part (a) is correct, I think that your work indicates some misinterpretation of the concepts. $T$ is the time required to <em>complete</em> a repair, and its <em>complementary cumulative distribution function</em> is $\exp(-t/10)$, that is, $$P\{T &gt; t\} = 1 - F_T(t) = e^{-t/10}.$$ The question in part (a) asks for a conditional probability $P\{T &gt; 15 \mid T &gt; 12\}$ which is <em>by definition</em> given by $$P\{T &gt; 15 \mid T &gt; 12\} = \frac{P(\{T &gt; 15 \}\cap\{T &gt; 12\})}{P\{T &gt; 12\}} = \frac{P\{T &gt; 15 \}}{P\{T &gt; 12\}} = \frac{e^{-15/10}}{e^{-12/10}}= e^{-3/10}$$ which is the same answer as you obtained, but it is <em>not</em> the probability of $0$ repairs in $(12,15]$. The <em>repairing</em> began at time $t =0$ and the question asks: if the repair is still <em>ongoing</em> at time $t = 12$, what is the conditional probability that it is <em>still ongoing</em> at $t = 15$, and thus <em>completes</em> at some time $T$ larger than $15$. Your use of the phrase</p> <blockquote> <p>$$0 ~ \text{repairs in} ~(12,15]$$ </p> </blockquote> <p>almost makes it sound like the repairs are a Poisson arrivals process.</p>
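The conditional probability computed in the answer reduces to the memoryless identity $P\{T>15\mid T>12\} = P\{T>3\}$; a two-line numeric check from the survival function:

```python
from math import exp, isclose

def surv(t):
    """Survival function of T ~ Exponential(rate 1/10): P(T > t)."""
    return exp(-t / 10)

cond = surv(15) / surv(12)   # P(T > 15 | T > 12), by the definition in the answer
```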
522,289
<p>It is an exercise from the lecture that I am unable to prove.</p> <p>Given that $\gcd(a,b)=1$, prove that $\gcd(a+b,a^2-ab+b^2)=1$ or $3$. Also, when will it equal $1$?</p>
Zafer Sernikli
98,237
<p>Let $d$ be a <em>prime</em> with $d\mid a+b \quad(1)$ and $d\mid a^2-ab+b^2 \quad(2)$ (it is enough to consider prime common divisors).</p> <p>$d\mid a^2-ab+b^2 \quad (2)\implies d\mid(a+b)^2 - 3ab \qquad \qquad(3)$ </p> <p>And as $d\mid a+b$, $(3) \land (1) \implies d\mid 3ab$; since $d$ is prime, $d\mid 3 \vee d\mid a \vee d\mid b$.</p> <p>On the other hand, as $d\mid a+b$, $d\mid a \implies d\mid b$, and vice versa. Since $\gcd(a,b)=1$, these two cases are impossible.</p> <p>The remaining possibility is $d\mid 3$, i.e. $d=3$. So $3$ is the only possible prime common divisor, and since $9\nmid a^2-ab+b^2$ whenever $3\mid a+b$ (then $3\nmid ab$, so $(a+b)^2-3ab\equiv -3ab\not\equiv 0 \pmod 9$), the gcd is $1$ or $3$.</p>
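A brute-force check of the conclusion over a small search range (the range is an arbitrary assumption), which also suggests the answer to the "when is it $1$" part: the gcd appears to be $3$ exactly when $3\mid a+b$, and $1$ otherwise:

```python
from math import gcd

def g(a, b):
    return gcd(a + b, a * a - a * b + b * b)

coprime = [(a, b) for a in range(1, 60) for b in range(1, 60) if gcd(a, b) == 1]
values = {g(a, b) for a, b in coprime}
```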
201,381
<p>I have basic training in Fourier and harmonic analysis, and I want to enter and work in an area of number theory close to analysis (one that is also of interest to current researchers). </p> <blockquote> <p>Can you suggest some fundamental papers (or books), so that after reading them I will hopefully have something to work on (I mean, a chance of discovering something new)?</p> </blockquote>
Daniel Loughran
5,101
<p>It seems like you want to discover analytic number theory. There is a lot of it. A good comprehensive modern book I would recommend is </p> <p>Iwaniec, Kowalski - Analytic number theory.</p> <p>Example areas with applications of harmonic analysis include the circle method and modular forms.</p>
201,381
<p>I have basic training in Fourier and harmonic analysis, and I want to enter and work in an area of number theory close to analysis (one that is also of interest to current researchers). </p> <blockquote> <p>Can you suggest some fundamental papers (or books), so that after reading them I will hopefully have something to work on (I mean, a chance of discovering something new)?</p> </blockquote>
Brendan Murphy
36,862
<p>The fourth volume of Stein &amp; Shakarchi's series on analysis has a nice account of the application of harmonic analysis to lattice point problems (e.g. Gauss' circle problem and the Dirichlet divisor problem).</p> <p>Martin Huxley's "Area, Lattice Points, and Exponential Sums" is a more thorough (but very readable) account of the business of counting lattice points.</p> <p>My advisor (Alex Iosevich) has done work (with others) in this area, using some more recent techniques in harmonic analysis (or as Alex would say "using nuclear weapons on small animals"). Anyway, it's still an active area, though as suggested above, you might want to work on whatever is popular in your department.</p> <p>Also, Vinogradov's "Elements of Number Theory" has a lot of nice exercises on exponential sums. Recently, techniques from additive combinatorics have led to new results on exponential sums; Bourgain wrote a nice survey called "Sum-product Theorems and Applications".</p>
528,456
<p>I have a question regarding L'hospital's rule. </p> <p>Why can I apply L'hospital's rule to $$\lim_{x\to 0}\frac{\sin 2x}{ x}$$ and not to $$\lim_{x\to 0} \frac{\sin x}{x}~~?$$</p>
hmakholm left over Monica
14,366
<p>You <em>can</em> apply L'Hospital's rule just fine to $\lim_{x\to 0}\frac{\sin x}{x}$. It just doesn't tell you anything you didn't already know, because it concludes that the limit is $$ \frac{\sin'(0)}{1} = \sin'(0) = \lim_{h\to 0}\frac{\sin(0+h)-\sin(0)}{h} = \lim_{h\to 0}\frac{\sin h - 0}{h} = \lim_{h\to 0}\frac{\sin(h)}{h} $$ which is exactly the same as the limit you started out with, except for the name of the variable.</p> <p>This problem shows up whenever the denominator is $x$. However, as a practical matter, applying L'Hospital in this case can still be quicker and easier than <em>recognizing</em> that the limit is itself a differential quotient, so you can use your standard knowledge of derivatives on it.</p>
2,871,729
<p>Given any $\alpha &gt; 0$, I need to show that for $ x \in (0,\infty)$ \begin{equation} \lim_{x\to 0} x^{\alpha}e^{|\log x|^{1/2}}=0 \end{equation}</p> <p>I have tried using L'Hospital's rule, but I am not able to arrive at the answer. </p> <p>Thank you in advance.</p>
Ahmad Bazzi
310,385
<p>Let $$f(x) = x^{\alpha} e^{\sqrt{\vert \log x \vert}}$$ Consider $$\log f(x) = \alpha \log x + \sqrt{\vert \log x \vert} = \log x (\alpha + \frac{ \sqrt{\vert \log x \vert}}{\log x} )=\log x (\alpha - \frac{ \sqrt{\vert \log x \vert}}{\sqrt{\vert \log x \vert}\sqrt{\vert \log x \vert}} )$$ which is $$\log f(x) = \log x (\alpha - \frac{ 1}{\sqrt{\vert \log x \vert}} )$$ As $x$ goes to zero $\sqrt{\vert \log x \vert}$ goes to $+ \infty$ hence $\alpha - \frac{ 1}{\sqrt{\vert \log x \vert}}$ goes to $\alpha$. Hence </p> <p>$$\lim_{x \rightarrow 0}\log f(x) =\lim_{x \rightarrow 0} \alpha \log x = -\infty$$ So $$\lim_{x \rightarrow 0} f(x) = e^{-\infty} = 0$$</p>
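A numeric sanity check of the limit: the factor $x^{\alpha}$ dominates the sub-exponential factor $e^{\sqrt{|\log x|}}$ as $x\to 0^+$ (the sample points below are arbitrary assumptions):

```python
from math import exp, log, sqrt

def f(x, alpha):
    """x^alpha * exp(sqrt(|log x|)) for x in (0, 1)."""
    return x**alpha * exp(sqrt(abs(log(x))))
```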
2,062,398
<p>Could you tell me if my translation to symbolic logic is correct? </p> <p>Thank you so much! Here is the problem:</p> <p>To check that a given integer $n &gt; 1$ is a prime, prove that it is enough to show that $n$ is not divisible by any prime $p$ with $p \le \sqrt{n}$.</p> <p>$$\forall p \in P ~\forall n \in N ~(p \nmid n \land p\le \sqrt{n} \land n&gt;1 \rightarrow n \in P )$$</p>
C. Falcon
285,416
<p>First of all, let us find a nonzero annihilator polynomial of $\sqrt{2}+\sqrt[3]{5}.$ Let $p=x^2-2$ and $q=x^3-5$, $p$ is a nonzero annihilator of $\sqrt{2}$ and $q$ is a nonzero annihilator polynomial of $\sqrt[3]{5}$. Hence, the <a href="https://en.wikipedia.org/wiki/Resultant" rel="nofollow noreferrer">resultant</a> with respect to $t$ of the polynomials $p(t)$ and $q(x-t)$ is a nonzero annihilator polynomial of $\sqrt{2}+\sqrt[3]{5}$. After computation, one has: $$\textrm{res}_t(p(t),q(x-t))=x^6-6x^4-10x^3+12x^2- 60x+17.$$ Using <a href="https://en.wikipedia.org/wiki/Rational_root_theorem" rel="nofollow noreferrer">rational root theorem</a>, this polynomial has no rational root. Whence the result.</p>
2,064,284
<blockquote> <p>Prove that the sequence $\{y_n\}$ where $y_{n+2}=\frac{y_{n+1} +2 y_{n}}{3}$ $n\geq 1$, $0&lt;y_1&lt;y_2$, is convergent by using subsequential criteria, <strong>by showing $\{y_{2n}\}$ and $\{y_{2n-1}\}$ converge to the same limit. Find the limit also</strong>.</p> </blockquote> <p>I can solve it using the Cauchy criterion, as $|y_m-y_n|\leq |y_{n}-y_{n+1}|+|y_{n+1}-y_{n+2}|\cdots +|y_{m-1}-y_m|\cdots$, but here we have to check convergence by using subsequential criteria, by showing $\{y_{2n}\}$ and $\{y_{2n-1}\}$ converge to the same limit. Please help. </p>
Mark
310,244
<p>Yes. By definition, the minimal polynomial $m_A$ divides any other polynomial that annihilates $A$. So, if some polynomial $f$ of equal degree annihilates $A$, then $m_A\mid f$, so $m_A = cf$ where $c$ is some constant.</p>
3,385,420
<p>The question is from the <em>Cambridge Admission Test 1983</em>.</p> <blockquote> <p>A room contains m men and w women. They leave one by one at random until only people of the same sex remain. Show by a carefully explained inductive argument, or otherwise, that the expected number of people remaining is <span class="math-container">$ \frac{\text{m}}{\text{w}+1}+\frac{\text{w}}{\text{m}+1} $</span></p> </blockquote> <p>I cannot think of a way to do what it says, an "inductive argument". Also, I cannot fully justify my own approach. My thought is:</p> <p>Consider the w women as already arranged, then interpolate the m men among them. Each man has probability <span class="math-container">$ \frac{1}{\text{w}+1} $</span> of leaving after all the women (which corresponds to being among those remaining). Then the expected number of men remaining is <span class="math-container">$ \frac{m}{\text{w}+1} $</span>.</p> <p>Similarly, the expected number of women remaining is <span class="math-container">$ \frac{w}{\text{m}+1} $</span>. But why can we directly add them together? My reasoning produces two mutually exclusive expectations, but it seems that we do not know whether the remaining people are men or women.</p>
Floris Claassens
638,208
<p>If <span class="math-container">$m+w=1$</span> we have that <span class="math-container">$\frac{m}{w+1}+\frac{w}{m+1}=1$</span> which is the expected number of people remaining. </p> <p>Now let <span class="math-container">$N$</span> be arbitrarily given and suppose the expected number of people remaining if <span class="math-container">$m+w&lt;N$</span> is equal to <span class="math-container">$\frac{m}{w+1}+\frac{w}{m+1}$</span>.</p> <p>Let <span class="math-container">$m+w=N$</span>. Clearly if <span class="math-container">$m=0$</span> or <span class="math-container">$w=0$</span> then the expected number of people remaining is <span class="math-container">$$N=\frac{m}{w+1}+\frac{w}{m+1}.$$</span> So we may assume that <span class="math-container">$m,w&gt;0$</span>. The chance a man leaves first is <span class="math-container">$\frac{m}{m+w}$</span>; when this occurs, the expected number of people remaining is <span class="math-container">$\frac{w}{m}+\frac{m-1}{w+1}$</span> by the induction hypothesis. In the same way, the chance a woman leaves first is <span class="math-container">$\frac{w}{m+w}$</span>, and when this occurs the expected number of people remaining is <span class="math-container">$\frac{w-1}{m+1}+\frac{m}{w}$</span>. So the expected number of people remaining is <span class="math-container">\begin{align*} &amp;\frac{m}{m+w}\left(\frac{w}{m}+\frac{m-1}{w+1}\right)+\frac{w}{m+w}\left(\frac{w-1}{m+1}+\frac{m}{w}\right)\\ &amp;\quad=\frac{w}{m+w}+\frac{m}{m+w}\cdot\frac{m-1}{w+1}+\frac{m}{m+w}+\frac{w}{m+w}\cdot\frac{w-1}{m+1}\\ &amp;\quad=\frac{w}{m+w}\left(1+\frac{w-1}{m+1}\right)+\frac{m}{m+w}\left(1+\frac{m-1}{w+1}\right)\\ &amp;\quad=\frac{w}{m+w}\cdot\frac{m+w}{m+1}+\frac{m}{m+w}\cdot\frac{m+w}{w+1}\\ &amp;\quad=\frac{w}{m+1}+\frac{m}{w+1}. \end{align*}</span></p>
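The first-step analysis used in the induction can also be run as an exact recursion, which reproduces the closed form (a sketch; the function name is ours):

```python
from fractions import Fraction as F
from functools import lru_cache

@lru_cache(maxsize=None)
def expected_remaining(m, w):
    """Exact expected number left when one sex runs out, by first-step analysis."""
    if m == 0 or w == 0:
        return F(m + w)        # only one sex present: everyone remains
    return (F(m, m + w) * expected_remaining(m - 1, w)
            + F(w, m + w) * expected_remaining(m, w - 1))
```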
2,431,548
<p>Okay, so, my teacher gave us this worksheet of "harder/unusual probability questions", and Q.5 is real tough. I'm studying at GCSE level, so it'd be appreciated if all you stellar mathematicians explained it in a way that a 15 year old would understand. Thanks!</p> <p>So, John has an empty box. He puts some red counters and some blue counters into the box. </p> <p>The ratio of the number of red counters to blue counters is 1:4</p> <p>Linda takes out, at random, 2 counters from the box.</p> <p>The probability that she takes out 2 red counters is 6/155</p> <p>How many red counters did John put into the box?</p>
green frog
351,828
<p>So the ratio of the number of red counters to blue is 1:4. That is, our first pull for a red has a probability of 1/5. </p> <p>Let $R$ be the number of red counters. Then the probability to pull out a red after our first is $(R - 1)/(5R - 1).$ So we have $(1/5)((R - 1)/(5R - 1)) = 6/155.$ Does that make sense? I think you can work out the rest. </p>
2,431,548
<p>Okay, so, my teacher gave us this worksheet of "harder/unusual probability questions", and Q.5 is real tough. I'm studying at GCSE level, so it'd be appreciated if all you stellar mathematicians explained it in a way that a 15 year old would understand. Thanks!</p> <p>So, John has an empty box. He puts some red counters and some blue counters into the box. </p> <p>The ratio of the number of red counters to blue counters is 1:4</p> <p>Linda takes out, at random, 2 counters from the box.</p> <p>The probability that she takes out 2 red counters is 6/155</p> <p>How many red counters did John put into the box?</p>
john doe
476,378
<p>Take number of red counters to be $x$, number of blue counters to be $4x$. Then required probability is $^{x}C_2/^{5x}C_2 = 6/155.$ $\qquad^nC_r = n!/((n-r)!r!)\quad$<br> Solving, we get $x(x-1)/(5x(5x-1))=6/155$, which gives $x=25.$</p>
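Both answers reduce to solving $\frac{x(x-1)}{5x(5x-1)} = \frac{6}{155}$; a brute-force check over a search range (the range is an arbitrary assumption) confirms the unique solution:

```python
from fractions import Fraction as F

def p_two_red(x):
    """x red and 4x blue counters; probability both random draws are red."""
    return F(x, 5 * x) * F(x - 1, 5 * x - 1)

solutions = [x for x in range(2, 200) if p_two_red(x) == F(6, 155)]
```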
3,364,016
<p>Can the following expression be written as the factorial of <span class="math-container">$m$</span>?</p> <p><span class="math-container">$m(m-1)(m-2) \dots (m-(n-1))$</span></p>
Ross Millikan
1,827
<p>No, but you can write <span class="math-container">$$m(m-1)(m-2) \dots {m-(n-1)}=\frac {m!}{(m-n)!}$$</span> Note which factors are missing from <span class="math-container">$m!$</span> in your original expression</p>
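A quick check of the identity $m(m-1)\cdots(m-(n-1)) = \frac{m!}{(m-n)!}$; this quantity is also exactly `math.perm(m, n)` in Python 3.8+:

```python
from math import factorial, perm

def falling_factorial(m, n):
    """m(m-1)...(m-(n-1)): the product of n consecutive terms starting at m."""
    out = 1
    for i in range(n):
        out *= m - i
    return out
```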
853,774
<blockquote> <p>If $(G,*)$ is a group and $(a * b)^2 = a^2 * b^2$ then $(G, *)$ is abelian for all $a,b \in G$.</p> </blockquote> <p>I know that I have to show $G$ is commutative, ie $a * b = b * a$</p> <p>I have done this by first using $a^{-1}$ on the left, then $b^{-1}$ on the right, and I end up with an expression $ab = b * a$. Am I mixing up the multiplication and $*$ somehow?</p> <p>Thanks</p>
Asinomás
33,907
<p>$abab=a^2b^2\implies a^{-1}abab=a^{-1}a^2b^2\implies bab=ab^2\implies bab b^{-1}= ab^2b^{-1}\implies ba=ab$</p>
1,547,122
<p>If $\lim_{x \to x_0} f(x) = L$, then $\lim_{x \to x_0} \lvert f(x)\rvert = \lvert L \rvert$. <br>I know this is true, because $\lvert f(x) \rvert - \lvert L \rvert \le \lvert f(x) - L \rvert &lt; \epsilon$, but why is it bigger than minus epsilon?</p>
Ethan Alwaise
221,420
<p>It follows from the fact that $$\vert \vert a \vert - \vert b \vert \vert \leq \vert a - b \vert$$ for all $a,b \in \mathbb{R}$. So if $\vert x - x_0 \vert &lt; \delta$, then $$\vert \vert f(x) \vert - \vert L \vert \vert \leq \vert f(x) - L \vert &lt; \epsilon,$$ i.e. $\lim_{x \to x_0}\vert f(x) \vert = \vert L \vert$.</p>
1,547,122
<p>If $\lim_{x \to x_0} f(x) = L$, then $\lim_{x \to x_0} \lvert f(x)\rvert = \lvert L \rvert$. <br>I know this is true, because $\lvert f(x) \rvert - \lvert L \rvert \le \lvert f(x) - L \rvert &lt; \epsilon$, but why is it bigger than minus epsilon?</p>
Empiricist
189,188
<p>Just use the other side of the triangle inequality</p> <p>$$\epsilon &gt; \vert f(x) - L \vert = \vert L - f(x) \vert \geq \vert L \vert - \vert f(x) \vert \implies -\epsilon &lt; \vert f(x) \vert - \vert L \vert$$</p>
1,648,587
<blockquote> <p><strong>Problem.</strong> Consider two arcs <span class="math-container">$\alpha$</span> and <span class="math-container">$\beta$</span> embedded in <span class="math-container">$D^2\times I$</span> as shown in the figure. The loop <span class="math-container">$\gamma$</span> is obviously nullhomotopic in <span class="math-container">$D^2\times I$</span>, but show that there is no nullhomotopy of <span class="math-container">$\gamma$</span> in the complement of <span class="math-container">$\alpha\cup \beta$</span>.<br /> <a href="https://i.stack.imgur.com/0GU17.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0GU17.png" alt="enter image description here" /></a></p> </blockquote> <p>I tried to use van Kampen's theorem to find the fundamental group of <span class="math-container">$X=D^2\times I-\alpha\cup \beta$</span>. Let <span class="math-container">$A=D^2\times I-\alpha$</span> and <span class="math-container">$B=D^2\times I-\beta$</span>. To find the fundamental group of <span class="math-container">$A$</span>, we note that after a homeomorphism, <span class="math-container">$A$</span> looks like a cylinder with it's axis removed. The fundamental group of <span class="math-container">$A$</span> is thus <span class="math-container">$\mathbf Z$</span>. Similarly for <span class="math-container">$B$</span>. Now we need to find the normal subgroup in <span class="math-container">$\pi_1(A)\sqcup \pi_1(B)$</span> generated by words of the form <span class="math-container">$i_{AB}(\omega)i_{BA}(\omega)^{-1}$</span> and quotient by it. Here <span class="math-container">$i_{AB}:A\cap B\to A$</span> and <span class="math-container">$i_{BA}:A\cap B\to B$</span> are the inclusion maps and <span class="math-container">$\omega$</span> is a loop in <span class="math-container">$A\cap B$</span>. 
Intuitively, the only loops in <span class="math-container">$A\cap B$</span> whose image is nontrivial in <span class="math-container">$A$</span> are the ones which link with <span class="math-container">$\alpha$</span>. I am not sure how to say this precisely and I will be grateful if someone can help me with this. Similarly for <span class="math-container">$B$</span>. I am not able to make progress from here.</p>
ChesterX
54,151
<p>You can split the space $Y=D^2\times I \setminus \alpha\cup\beta$ in the following way: <a href="https://i.stack.imgur.com/aiirI.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aiirI.jpg" alt="Hatcher 1.2.10"></a></p> <p>Here $X=A\cap B$. Carefully label all the "missing lines" in $A$ and $B$, with orientation. Then try to see the inclusion maps of $X\hookrightarrow A$ and $X\hookrightarrow B$. You'll see that $\gamma$ gives a non-trivial element in $\pi_1(Y)$.</p>
181,110
<p>On a euclidean plane, the shortest distance between any two distinct points is the line segment joining them. How can I see why this is true? </p>
davidlowryduda
9,754
<p>Every now and then it's nice to <a href="https://mathoverflow.net/questions/42512/awfully-sophisticated-proof-for-simple-facts">nuke a mosquito</a>.</p> <p>Let's assume that the path connecting two points $(a,y(a))$ and $(b,y(b))$ can be expressed as a function, and the curve $C(x)$ is given by $C(x) = (x,y(x))$. Then we will proceed using the Calculus of Variations.</p> <p>The derivative of $C$ wrt $x$ is $(1, y')$, and the functional we want to minimize is the length of the curve $L = \int \|C'\|dx = \int_a^b\sqrt{1 + y' ^2} dx$. If we take $f(x,y,y') = \sqrt{1 + y'^2}$, we get that $\frac{df}{dy} = 0, \frac{df}{dy'} = \frac{y'}{\sqrt{1 + y'^2}}$. Then the <a href="http://en.wikipedia.org/wiki/Calculus_of_variations#Euler.E2.80.93Lagrange_equation" rel="noreferrer">Euler-Lagrange equation</a>, sometimes referred to as the fundamental equation of the Calculus of Variations, says exactly that $\dfrac{d}{dx} \left( \frac{y'}{\sqrt{1 + y'^2}}\right) = 0$, which is exactly that $y'$ is a constant. </p> <p>Thus, if the path connecting the two points is expressible as a function, then the shortest such path is given by a straight line. </p> <p><strong>EDIT</strong> <em>I was certain that someone was in the middle of writing an answer when I typed my tongue-in-cheek response (as so often happens), but as I now see that there is more to add, allow me to extend my answer</em></p> <p>The problem here is that we must first define "distance." In the standard Euclidean Plane, the distance between two points is <a href="http://en.wikipedia.org/wiki/Euclidean_distance" rel="noreferrer">defined</a> to be the length of the line segment between them. So we can drop the word 'shortest' and say that "The distance between any two distant points is the length of the line segment joining them."</p> <p>Presumably, you want to know that going along any other path will be at least as long. 
One way of 'seeing this' is that you can approximate any curve with a polygonal path, and these satisfy the <a href="http://en.wikipedia.org/wiki/Triangle_inequality" rel="noreferrer">triangle inequality</a>, which will make the path longer.</p>
181,110
<p>On a euclidean plane, the shortest distance between any two distinct points is the line segment joining them. How can I see why this is true? </p>
user02138
2,720
<p>Let $\gamma(s)$ be a continuous curve in the plane with end-points $\gamma(0) = a$ and $\gamma(1) = b$. Using the <a href="http://en.wikipedia.org/wiki/Euler%E2%80%93Lagrange_equation#Examples" rel="nofollow">Euler-Lagrange</a> equations, the only stationary solution is $\gamma(s) = bs + (1 - s)a$, which is a line connecting the two end-points. See also <a href="http://www.instant-analysis.com/Principles/straightline.htm" rel="nofollow">this</a>.</p>
88,788
<blockquote> <p><strong>Possible Duplicate:</strong><br> <a href="https://math.stackexchange.com/questions/51292/relation-of-this-antisymmetric-matrix-r-beginpmatrix-01-10-endpmatr">Relation of this antisymmetric matrix $r = \begin{pmatrix} 0&amp;1\\ -1&amp;0 \end{pmatrix}$ to $i$</a> </p> </blockquote> <p>Let $H$ be the subset of $M_2(\mathbb R)$ consisting of all matrices of the form $\begin{pmatrix}a &amp; -b \\ b &amp; a\end{pmatrix}$ for $a, b \in \mathbb R$. </p> <ul> <li>Show that $(\mathbb C,+)$ is isomorphic to $(H,+)$.</li> <li>Show that $(\mathbb C, \times)$ is isomorphic to $(H, \times)$.</li> </ul> <p>$H$ is said to be a matrix representation of the complex numbers.</p> <p>I would appreciate some help. I fail even to define a one-to-one function mapping $\mathbb C$ onto $H$. All the best.</p>
Patrick Da Silva
10,704
<p>Here's a hint : If you consider a complex number $z$, it can be written as $a+bi$ with $a,b \in \mathbb R$. (I haven't chosen the letters $a$ and $b$ for no reason.) </p> <p>If you want a proof I don't mind showing. Just ask.</p> <p>Hope that helps,</p>
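<p>As a concrete sanity check of this hint, here is a small Python sketch (my own illustration, not part of the original hint): it sends $a+bi$ to the matrix $\begin{pmatrix}a &amp; -b \\ b &amp; a\end{pmatrix}$ and verifies that the matrix product matches the complex product for a sample pair.</p>

```python
# Map a complex number a+bi to the 2x2 matrix [[a, -b], [b, a]] and
# check that matrix multiplication mirrors complex multiplication.

def to_matrix(z):
    a, b = z.real, z.imag
    return [[a, -b], [b, a]]

def mat_mul(m, n):
    # ordinary 2x2 matrix product
    return [[sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

z, w = 2 + 3j, -1 + 4j
lhs = to_matrix(z * w)                      # matrix of the complex product
rhs = mat_mul(to_matrix(z), to_matrix(w))   # product of the two matrices
assert lhs == rhs
```

<p>The same check with $+$ in place of $\times$ is immediate, which is the content of the first part of the problem.</p>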
4,304
<p>I am trying to understand <a href="http://en.wikipedia.org/wiki/All-pairs_testing" rel="nofollow noreferrer"><strong>pairwise testing</strong></a>.</p> <p>How many combinations of tests would there be, for example, if</p> <blockquote> <p><code>a</code> can take values from 1 to m</p> <p><code>b</code> can take values from 1 to n</p> <p><code>c</code> can take values from 1 to p</p> </blockquote> <p><code>a</code>, <code>b</code> and <code>c</code> can take m, n and p distinct values respectively. <strong>What is the total number of pairwise combinations possible?</strong></p> <hr /> <p>With a pairwise testing tool that I am testing, I am getting 40 results for m = n = p = 6. I am trying to mathematically understand how I get 40 values.</p>
Bill Dubuque
242
<p>If each parameter had $10$ choices you'd be testing $300$ vs $1000$ combinations, namely hold $\rm a$ constant and vary $\rm b,c$ through $10\cdot 10 = 100$ values. Similarly, hold $\rm b$ constant; then $\rm c$. As the number of variables $\rm k$ increases you get better savings, roughly $\rm (k N)^2$ vs. $\rm N^k$, where $\rm N =$ max domain size. For QA purposes usually such rough upper bounds suffice. Do you have an intended application where you need something more precise? If so perhaps you should reveal some further details, e.g. the distribution of the sizes of the domains, etc.</p> <p>EDIT: After reviewing your latest revision, it appears that the following web pages may be of interest: <a href="http://www.developsense.com/pairwiseTesting.html" rel="nofollow">Pairwise Testing</a>, which refers to various <a href="http://en.wikipedia.org/wiki/Taguchi_methods" rel="nofollow">Taguchi methods</a> such as those <a href="http://www.freequality.org/html/tools.html" rel="nofollow">here</a>. See also these links to introductions to <a href="http://www.combinatorialtesting.com/clear-introductions-1" rel="nofollow">combinatorial testing</a>.</p>
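<p>To make the counting concrete, here is a rough Python sketch (my own addition; the naive greedy construction below is <em>not</em> what any particular tool implements, so it need not reproduce the 40 tests from the question). It counts the $mn+np+mp$ value pairs that must be covered and then builds one covering suite greedily.</p>

```python
from itertools import combinations, product

def all_pairs(domains):
    """Every (param_i, value, param_j, value) combination that a
    pairwise test suite must cover at least once."""
    need = set()
    for (i, di), (j, dj) in combinations(list(enumerate(domains)), 2):
        for vi, vj in product(di, dj):
            need.add((i, vi, j, vj))
    return need

def covered_by(test):
    # the pairs hit by one concrete test (one value per parameter)
    return {(i, test[i], j, test[j])
            for i, j in combinations(range(len(test)), 2)}

def greedy_suite(domains):
    """Naive greedy covering-array construction, for illustration only."""
    need = all_pairs(domains)
    candidates = list(product(*domains))
    suite = []
    while need:
        best = max(candidates, key=lambda t: len(covered_by(t) & need))
        suite.append(best)
        need -= covered_by(best)
    return suite

domains = [range(6)] * 3            # m = n = p = 6
print(len(all_pairs(domains)))      # 3 * 36 = 108 pairs to cover
suite = greedy_suite(domains)
print(len(suite))                   # a few dozen tests, vs 6^3 = 216 exhaustive
```

<p>Any two of the three columns must exhibit all $36$ value pairs, so at least $36$ tests are needed; heuristic tools typically land somewhat above that lower bound.</p>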
2,005,798
<p>I have the following equality: $$ \lim_{k \to \infty}\int_{0}^{k} x^{n}\left(1 - {x \over k}\right)^{k}\,\mathrm{d}x = n! $$</p> <p>What I think is that after taking the limit inside the integral (maybe with the help of Fatou's Lemma; I don't know how I should do that yet), I get</p> <p>$$ \int_{0}^{\infty}\lim_{k \to \infty}\left[\,% x^{n}\left(1 - {x \over k}\right)^{k}\,\right]\mathrm{d}x = \int_{0}^{\infty}x^{n}\,\mathrm{e}^{-x}\,\mathrm{d}x = \Gamma\left(n + 1\right) = n! $$</p> <p>How can I give a clear proof?</p>
Théophile
26,091
<p>The first $n-1$ flips in some sense don't matter at all, because the last flip will make the total number of heads even or odd with equal probability. So the probability is $0.5$.</p>
1,686,568
<p>I am learning about tensor products of modules, but there is a question which makes me very confused about it! </p> <p>If $E$ is a right $R$-module and $F$ is a left $R$-module, then suppose we have a balanced map (or bilinear map) $E\times F\to E\otimes F$. If some element $x\otimes y \in E\otimes F$ is $0$, then can we say $x$ or $y$ must be equal to $0$? I know if $x = 0$ or $y = 0$, then $x\otimes y$ is $0$. Are there other cases where $x\otimes y$ is $0$? Can someone give me a specific example? </p> <p>Really thank you!</p>
dfsfljn
320,073
<p>By the universal property of tensor product, an elementary tensor <span class="math-container">$x\otimes y$</span> equals zero if and only if for every <span class="math-container">$R$</span>-bilinear map <span class="math-container">$B:E\times F\to M$</span>, <span class="math-container">$B(x,y)=0$</span>. While this may seem like a difficult thing to check, in practice it is usually not so bad.</p> <p>As an example, we will show that <span class="math-container">$\mathbb{Z}/5\mathbb{Z}\otimes_\mathbb{Z}\mathbb{Q}=0$</span>. Let <span class="math-container">$x\otimes y\in\mathbb{Z}/5\mathbb{Z}\otimes_\mathbb{Z}\mathbb{Q}$</span> be an elementary tensor. Then by the bilinearity of the canonical map <span class="math-container">$\mathbb{Z}/5\mathbb{Z}\times\mathbb{Q}\to\mathbb{Z}/5\mathbb{Z}\otimes_\mathbb{Z}\mathbb{Q}$</span>, we have <span class="math-container">$$ x\otimes y=x\otimes 5y/5=5(x\otimes y/5)=5x\otimes y/5=0\otimes y/5=0. $$</span> This shows that all elementary tensors are zero, and thus since the tensor product is generated by elementary tensors, <span class="math-container">$\mathbb{Z}/5\mathbb{Z}\otimes_\mathbb{Z}\mathbb{Q}=0$</span>.</p> <p>We can show that an elementary tensor <span class="math-container">$x\otimes y$</span> is nonzero by giving an <span class="math-container">$R$</span>-bilinear map <span class="math-container">$B:E\times F\to M$</span> such that <span class="math-container">$B(x,y)\neq 0$</span>. As an example, consider <span class="math-container">$E=F=\mathbb{Z}$</span> as <span class="math-container">$\mathbb{Z}$</span>-modules and the <span class="math-container">$\mathbb{Z}$</span>-bilinear map <span class="math-container">$B:\mathbb{Z}\times\mathbb{Z}\to\mathbb{Z}$</span> given by multiplication: <span class="math-container">$B(x,y)=xy$</span>. 
Then if <span class="math-container">$x,y\neq 0$</span>, <span class="math-container">$B(x,y)\neq 0$</span>, so that <span class="math-container">$x\otimes y\neq 0$</span> in <span class="math-container">$\mathbb{Z}\otimes_\mathbb{Z}\mathbb{Z}$</span> when <span class="math-container">$x,y\neq 0$</span>.</p>
1,686,568
<p>I am learning about tensor products of modules, but there is a question which makes me very confused about it! </p> <p>If $E$ is a right $R$-module and $F$ is a left $R$-module, then suppose we have a balanced map (or bilinear map) $E\times F\to E\otimes F$. If some element $x\otimes y \in E\otimes F$ is $0$, then can we say $x$ or $y$ must be equal to $0$? I know if $x = 0$ or $y = 0$, then $x\otimes y$ is $0$. Are there other cases where $x\otimes y$ is $0$? Can someone give me a specific example? </p> <p>Really thank you!</p>
paul garrett
12,291
<p>To add to other good answers and comments: perhaps one slightly abstracted version of the question can be construed as asking about the <em>exactness</em> of the tensor product functor(s) <span class="math-container">$M\to M\otimes N$</span>. This is not an exact functor (in many interesting categories), and its failure to be exact is measured by &quot;Tor&quot; groups (which are also construable as derived functors, apart from specific explicit constructions in various categories). In particular, at the very least, <span class="math-container">${\mathrm Tor}^i(M,N)$</span> is very often non-zero even for <span class="math-container">$i=1$</span>.</p> <p>In different words: it is a non-trivial problem, with no simple general formulaic solution, to find the collapsing/relations in tensor products. And there are many well-known examples with some at-first-possibly-surprising collapsing, as mentioned already, such as <span class="math-container">$\mathbb Z/m\otimes_{\mathbb Z} \mathbb Z/n\approx \mathbb Z/\gcd(m,n)$</span>.</p>
446,272
<p>Let $$\left(\dfrac{x}{1-x^2}+\dfrac{3x^3}{1-x^6}+\dfrac{5x^5}{1-x^{10}}+\dfrac{7x^7}{1-x^{14}}+\cdots\right)^2=\sum_{i=0}^{\infty}a_{i}x^i.$$</p> <p>How can we find $a_{2^n}$?</p> <p>My idea: let $$\dfrac{nx^n}{1-x^{2n}}=nx^n(1+x^{2n}+x^{4n}+\cdots+x^{2kn}+\cdots)=n\sum_{k=0}^{\infty}x^{(2k+1)n}.$$ Thank you everyone.</p>
Ethan Splaver
50,290
<p>Let $\chi_2(n)$ be the Dirichlet character modulo $2$.</p> <p>Define $\sigma'(n)=\sum_{d\mid n}d\chi_2(d)$. Then</p> <p>$$\sum_{n=1}^\infty\sigma'(n)x^n=\sum_{n=1}^\infty \frac{\chi_2(n)nx^n}{1-x^{2n}}=\sum_{n \text{ odd}}\frac{nx^n}{1-x^{2n}}$$</p> <p>$$\Big(\sum_{n \text{ odd}}\frac{nx^n}{1-x^{2n}}\Big)^2=\sum_{n=2}^\infty\sum_{k=1}^{n-1}\sigma'(k)\sigma'(n-k)x^n=\sum_{n=0}^\infty a_nx^n$$</p> <p>With $a_0=0$, $a_1=0$, and for $n&gt;1$</p> <p>$$a_n=\sum_{k=1}^{n-1}\sigma'(n-k)\sigma'(k)$$</p> <p>If you're looking for an elementary evaluation of this and you're not familiar with the theory of elliptic functions, you could try reading this guy's blog, <a href="https://math.stackexchange.com/users/72031/paramanand-singh">https://math.stackexchange.com/users/72031/paramanand-singh</a>, which uses elementary properties of trigonometric functions and manipulations of several power series to prove convolution identities. If this is too much for you, you could try looking up basic combinatorial identities which can be used to give 'elementary' evaluations of divisor sum convolutions. For example, Skoruppa's combinatorial identity: $$\sum_{\substack{ax+by=n \\ (a,b,x,y)\in \mathbb{N}^{4}}}\left(h(a,b)-h(a,-b)\right)=\sum_{d\mid n} \Big(\frac{n}{d}h(d,0)-\sum_{j=0}^{n-1}h(d,j)\Big)$$ for a function satisfying $h(y,y-x)=h(x,y)$.</p> <p>For example, note that $$(x-y)^2+x^2+y^2$$ satisfies this constraint and upon substitution gives the nice identity $$\sum_{k=1}^{n-1}\sigma(k)\sigma(n-k)=\frac{5\sigma_3(n)}{12}-\frac{(6n-1)\sigma(n)}{12}$$</p>
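<p>The final identity is easy to check by brute force; here is a short Python verification (my own addition, with a deliberately naive divisor sum).</p>

```python
def sigma(n, power=1):
    """Sum of the given powers of the divisors of n (naive)."""
    return sum(d ** power for d in range(1, n + 1) if n % d == 0)

# check: 12 * sum_{k=1}^{n-1} sigma(k) sigma(n-k) = 5 sigma_3(n) - (6n-1) sigma(n)
for n in range(2, 60):
    lhs = sum(sigma(k) * sigma(n - k) for k in range(1, n))
    assert 12 * lhs == 5 * sigma(n, 3) - (6 * n - 1) * sigma(n), n
```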
933,487
<p>How do I find it?</p> <p>I know that $\mathcal{L}(e^t \cos t) =\frac{s-1}{(s-1)^2+1^2}$ but what is it when multiplied by $t$, as written in the title?</p>
Mosk
175,514
<p>$$\text{Another approach: } \mathcal{L}(e^t t\cos t)=F(s-1)$$ $$\mathcal{L}(t\cos t)=-\frac{d}{ds}\left(\frac{s}{s^2+1}\right)=\frac{s^2-1}{(s^2+1)^2}$$ $$\text{so the final answer is:}$$ $$\ F(s-1)=\frac{(s-1)^2-1}{[(s-1)^2+1]^2}$$</p>
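<p>As a quick numerical cross-check of this answer (my own addition), one can approximate the Laplace integral directly and compare it with the closed form at a sample point, say $s=3$ (any $s&gt;1$ gives a convergent integral here).</p>

```python
import math

def laplace(f, s, upper=40.0, steps=100_000):
    """Crude midpoint-rule approximation of the (truncated) Laplace
    integral  int_0^upper e^{-st} f(t) dt."""
    h = upper / steps
    return h * sum(math.exp(-s * (k + 0.5) * h) * f((k + 0.5) * h)
                   for k in range(steps))

s = 3.0
numeric = laplace(lambda t: math.exp(t) * t * math.cos(t), s)
closed = ((s - 1) ** 2 - 1) / (((s - 1) ** 2 + 1) ** 2)   # the formula above
assert abs(numeric - closed) < 1e-5                        # closed = 3/25 here
```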
3,457,277
<p>Why is <span class="math-container">$\pi$</span> a transcendental number if it also satisfies an algebraic equation like the one below, which has a root at <span class="math-container">$x =\pi/3$</span> as <span class="math-container">$n$</span> tends to infinity? <span class="math-container">$$\Biggl(\biggl(\Bigl(\bigl((x^2-2^{(n2^1+1)})^2-2^{(n2^2+1)}\bigr)^2-2^{(n2^3+1)}\Bigr)^2...-2^{(n2^{(n-1)}+1)}\biggr)^2-2^{n2^{n}+1}\Biggr)^2 = 3*2^{n2^{(n+1)}}$$</span> </p> <p>For <span class="math-container">$n = 1$</span> the equation will be <span class="math-container">$$(x^2-2^3)^2 = 3*2^4$$</span> and its root is <span class="math-container">$x_1$</span> = 1.03527618041. </p> <p>For <span class="math-container">$n = 2$</span> the equation will be <span class="math-container">$$((x^2-2^5)^2-2^9)^2 = 3*2^{16}$$</span> and its root is <span class="math-container">$x_2$</span> = 1.04420953776041; as we increase the value of n, the root tends to <span class="math-container">$\pi/3$</span>.</p> <p>For n = 12, <span class="math-container">$3x_{12}$</span> will be 3.14159264, which is somewhat close to <span class="math-container">$\pi$</span>.</p> <p>Another, simplified version of this equation is <span class="math-container">$${\Biggl(\biggl(\Bigl(\bigl(\frac{\pi}{3*2^n}\bigr)^2-2\Bigr)^2-2\biggr)^2...-2\Biggr)^2} = 3$$</span> Here n is the same as the number of single-powered twos in the equation above. </p> <p>The solution of the above equation will be <span class="math-container">$$\pi = 3*\Biggl(\sqrt{2-\sqrt{2+\sqrt{2+\sqrt{2+....\sqrt{2+\sqrt{3}}}}}}\Biggr)*2^n$$</span> with <span class="math-container">$$n =\text{ number of $2$'s inside the largest square root}$$</span> So if <span class="math-container">$\pi$</span> can be represented as a root of such an algebraic equation, why is <span class="math-container">$\pi$</span> transcendental?</p>
Noah Schweber
28,111
<p>All you've shown is that <span class="math-container">$\pi$</span> can be <strong>approximated by</strong> algebraic numbers. But that's not the same as <strong>being</strong> an algebraic number! When you write </p> <blockquote> <p><span class="math-container">$\pi$</span> can also be represented as a root of algebraic equation</p> </blockquote> <p>you're misusing the term "algebraic equation." There is a sense in which <span class="math-container">$\pi$</span> can be thought of as the solution to an "infinite algebraic equation," but that's not the same thing. </p> <p>Indeed, the same reasoning would also imply that <span class="math-container">$\sqrt{2}$</span> is rational since we can get better and better rational approximations to it (this is expanding on Rijul Saini's comment above). Or that the infinite sum <span class="math-container">$\sum_{i=1}^\infty 1=1+1+1+1+ ...$</span> is finite since for each <span class="math-container">$n$</span>, the partial sum <span class="math-container">$\sum_{i=1}^n1$</span> is finite.</p> <p>In general, <strong>you cannot conflate finite objects with their infinite analogues</strong>.</p> <hr> <p>This is not to say that there's nothing interesting here. While every real number is "limit algebraic," things get more interesting when we look at <em>functions</em> - thinking about "infinite-degree polynomials" leads to the concept of <a href="https://en.wikipedia.org/wiki/Analytic_function" rel="nofollow noreferrer">analytic functions</a>, which is absolutely fundamental. But that's a separate issue.</p>
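<p>For what it's worth, the nested-radical recipe from the question is easy to run numerically (Python sketch, my own addition). It only ever produces algebraic <em>approximations</em> to $\pi$ — each value is an algebraic number close to $\pi$, but none of them equals $\pi$, which is exactly the distinction made above.</p>

```python
import math

def pi_approx(n):
    """3 * 2**n * sqrt(2 - sqrt(2 + ... sqrt(2 + sqrt(3)))),
    with n twos in total under the outermost radical."""
    r = math.sqrt(3)
    for _ in range(n - 1):          # the n-1 inner "2 +" layers
        r = math.sqrt(2 + r)
    return 3 * 2 ** n * math.sqrt(2 - r)

for n in (1, 2, 6, 12):
    print(n, pi_approx(n))          # 3.105..., 3.132..., then ever closer to pi
```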
299,471
<p>I know this is just $S^2$. To see it, I use the CW structure of $S^1 \times S^1$, consisting of one 0-cell, two 1-cells and a 2-cell. Then, since the reduced suspension is the Cartesian product with the wedge identified (i.e. the smash product), what remains is the 0-cell and the 2-cell... This is very theoretical and I don't visualize what is going on. I imagine a torus with two circular threads being pulled to a point, and then? If anyone has thought about this and is convinced by some visualization, I'd be thankful for the idea....</p>
Steve Costenoble
58,888
<p>I like to think about it this way: Start with a circle and cross with an interval rather than another circle, to get a cylinder. You're next going to identify the two ends of the cylinder and a path connecting them. If you can't picture that immediately, start by pinching off each end of the cylinder, to get a sphere, with a line connecting the two poles you just created. (This is the unreduced suspension.) Now shrink that line to a single point, which still gives you a sphere.</p>
1,849,797
<p>Complex numbers make it easier to find real solutions of real polynomial equations. Algebraic topology makes it easier to prove theorems of (very) elementary topology (e.g. the invariance of domain theorem).</p> <p>In that sense, what are theorems purely about rational numbers whose proofs are greatly helped by the introduction of real numbers? </p> <p>By "purely" I mean: not about Cauchy sequences, Dedekind cuts, etc. of rational numbers. (This is of course a meta-mathematical statement and therefore imprecise by nature.)</p> <p>"No, there is no such thing, because..." would also be a valuable answer.</p>
Mikhail Katz
72,694
<p>Real numbers together with all the other completions of the rationals known as the $p$-adics are very useful indeed in finding <em>rational</em> solutions of quadratic forms. A general principle known as <a href="https://en.wikipedia.org/wiki/Hasse_principle" rel="nofollow">Hasse principle</a> asserts that a quadratic form in $n$ variables has rational solutions if and only if it has solutions in each completion.</p>
109,754
<p>Please help me get started on this problem:</p> <blockquote> <p>Let <span class="math-container">$V = R^3$</span>, and define <span class="math-container">$f_1, f_2, f_3 ∈ V^*$</span> as follows:<br /> <span class="math-container">$f_1(x,y,z) = x - 2y$</span><br /> <span class="math-container">$f_2(x,y,z) = x + y + z$</span><br /> <span class="math-container">$f_3(x,y,z) = y-3z$</span></p> <p>Prove that <span class="math-container">$\{f_1,f_2,f_3\}$</span> is a basis for <span class="math-container">$V^*$</span>, and then find a basis for <span class="math-container">$V$</span> for which it is the dual basis.</p> </blockquote>
David Mitra
18,986
<p>First, you should verify that the $f_i$ are elements of $V^*$; that is, that they are functions on $\Bbb R^3$ that satisfy:</p> <p>$\ \ \ $1) $f({\bf x}+{\bf y})=f({\bf x})+f({\bf y})$ for all ${\bf x},{\bf y}\in\Bbb R^3$</p> <p>and</p> <p>$\ \ \ $2) $f(c{\bf x})=cf({\bf x})$ for all $c\in\Bbb R$ and all ${\bf x}\in \Bbb R^3$.</p> <p>Of course, if you know this has been covered in your class already, you're probably safe just writing something like "as was demonstrated in lecture blah, these are elements of the dual of $V=\Bbb R^3$."</p> <p>Another fact I assume you have use of is that the dimension of $V^*$ is three (as the dimension of the vector space $\Bbb R^3$ is three).</p> <p>So, with three linear functionals on $\Bbb R^3$, towards showing that they are a basis of $V^*$, it suffices to show that they are independent. There are many ways towards achieving this end. One way in particular is to show that the matrix $A$ formed by taking as its rows the coefficients of the $f_i$, $$ A=\left[\matrix{1&amp;-2&amp;0\cr 1&amp;1&amp;1\cr 0&amp;1&amp;-3}\right], $$ has full rank (that this is so isn't hard to see: $c_1f_1+c_2f_2+c_3f_3={\bf 0}\iff A\bigl[{{\scriptstyle c_1\atop\scriptstyle c_2}\atop c_3}\bigr]=\bf 0$; and the former equation has only the trivial solution if and only if $A$ has full rank).</p> <p>So, let's row reduce $A$: $$ A= \left[\matrix{1&amp;-2&amp;0\cr 1&amp;1&amp;1\cr 0&amp;1&amp;-3}\right] \buildrel{r_2-r_1\rightarrow r_2}\over{\longrightarrow} \left[\matrix{1&amp;-2&amp;0\cr 0&amp; 3&amp; 1\cr 0&amp;1&amp;-3}\right] \buildrel{r_2-3r_3\rightarrow r_3}\over{\longrightarrow} \left[\matrix{1&amp;-2&amp;0\cr 0&amp; 3&amp; 1\cr 0&amp;0&amp;10}\right]. $$</p> <p>At this point we can see that $A$ indeed has full rank.
Thus the set $\{f_1,f_2,f_3\}$ is independent and consequently is a basis of $V^*$.</p> <p><br></p> <p>Towards finding the dual basis let's recall what this is: the dual basis of $\{f_1,f_2,f_3\}$ by definition is the basis $\{ {\bf x}_1,{\bf x}_2,{\bf x}_3\}$ of $V$ for which $$ f_i({\bf x}_j)=\cases{1,&amp; if $i=j$\cr 0, &amp; if $i\ne j$}. $$</p> <p>So, in particular, the first basis element of the dual basis, ${\bf x}_1$, would satisfy $$ f_1({\bf x}_1)=1,\ f_2({\bf x}_1)=0,\ \text{and}\ f_3({\bf x}_1)=0. $$ In other words, we have the system: $$ \eqalign{ x-2y&amp;=1\cr x+y+z&amp;=0\cr y-3z&amp;=0 } $$ whose matrix form is $$ A {\bf x}_1=\left[\matrix{ 1\cr0\cr 0 }\right] $$ and whose, necessarily unique, solution gives the coordinates of ${\bf x}_1$.</p> <p>We would have two similar systems to solve in order to find ${\bf x}_2$ and ${\bf x}_3$.</p> <p>That seems like a lot of work; but wait... suppose we wrote the dual basis as a matrix whose columns were the ${\bf x}_i$. Then we'd have: $$ A[ {\bf x}_1\ {\bf x}_2\ {\bf x}_3] =\left[\matrix{1&amp;-2&amp;0\cr 1&amp;1&amp;1\cr 0&amp;1&amp;-3}\right][ {\bf x}_1\ {\bf x}_2\ {\bf x}_3]=\left[\matrix{1&amp;0&amp;0\cr 0&amp;1&amp;0\cr 0&amp;0&amp;1}\right] $$</p> <p>So $[ {\bf x}_1\ {\bf x}_2\ {\bf x}_3]$ is the inverse of $A$. Rather than solving three systems of equations, we could instead find the inverse of $A$ and then the columns will be our dual basis.</p> <p>This is what we'll do.
Towards that end, we may (and do) adjoin the identity matrix to $A$ and perform a full forward/back row reduction:</p> <p>$$\eqalign{ [A \,|\,I\,]= \left[\matrix{1&amp;-2&amp;0\cr 1&amp;1&amp;1\cr 0&amp;1&amp;-3} \ \ \Biggl|\ \ \matrix{1&amp; 0&amp;0\cr 0&amp;1&amp;0\cr 0&amp;0&amp;1}\right] &amp;\buildrel{r_2-r_1\rightarrow r_2}\over{\longrightarrow} \left[\matrix{1&amp;-2&amp;0\cr 0&amp; 3&amp; 1\cr 0&amp;1&amp;-3} \ \ \Biggl|\ \ \matrix{1&amp; 0&amp;0\cr -1&amp;1&amp;0\cr 0&amp;0&amp;1}\right]\cr &amp;\buildrel{r_2-3r_3\rightarrow r_3}\over{\longrightarrow} \left[\matrix{1&amp;-2&amp;0\cr 0&amp; 3&amp; 1\cr 0&amp;0&amp;10} \ \ \Biggl|\ \ \matrix{1&amp; 0&amp;0\cr -1&amp; 1&amp;0\cr -1&amp;1&amp;-3}\right]\cr &amp;\buildrel{10r_2-r_3\rightarrow r_2}\over{\longrightarrow} \left[\matrix{1&amp;-2&amp;0\cr 0&amp; 30&amp; 0\cr 0&amp;0&amp;10} \ \ \Biggl|\ \ \matrix{1&amp; 0&amp;0\cr -9&amp;9&amp;3\cr -1&amp;1&amp;-3}\right]\cr &amp;\buildrel{15r_1+r_2\rightarrow r_1}\over{\longrightarrow} \left[\matrix{15&amp;0&amp;0\cr 0&amp; 30&amp; 0\cr 0&amp;0&amp;10} \ \ \Biggl|\ \ \matrix{ 6&amp; 9&amp;3\cr -9&amp;9&amp;3\cr -1&amp;1&amp;-3}\right]\cr &amp;\buildrel{ }\over{\longrightarrow} \left[\matrix{1 &amp;0&amp;0\cr 0&amp; 1&amp; 0\cr 0&amp;0&amp;1 } \ \ \Biggl|\ \ \matrix{ 6/15&amp; 9/15&amp;3/15\cr -9/30&amp;9/30&amp;3/30\cr -1/10&amp;1/10&amp;-3/10}\right].\cr } $$</p> <p>So $$ A^{-1}=\left[\matrix{2/5&amp; 3/5&amp;1/5\cr -3/10&amp;3/10&amp;1/10\cr -1/10&amp;1/10&amp;-3/10}\right], $$ and the dual basis has as its elements the columns of $A^{-1}$: $$ {\bf x}_1=\left[\matrix{2/5\cr-3/10\cr-1/10 }\right],\ {\bf x}_2=\left[\matrix{3/5\cr3/10\cr1/10 }\right],\ {\bf x}_3=\left[\matrix{1/5\cr1/10\cr-3/10 }\right]. $$ </p> <p>The basis $\{ {\bf x}_1,{\bf x}_2,{\bf x}_3 \}$ is the dual basis of $\{f_1,f_2,f_3\}$. </p> <p>Note that the relative ordering is important. For instance, we must have $f_3({\bf x}_1)=f_3({\bf x}_2)=0$ and $f_3({\bf x}_3)=1$. 
As a spot check, let's verify this: $$\eqalign{ f_3({\bf x}_1)&amp;= (-3/10)-3(-1/10)=0\cr f_3({\bf x}_2)&amp;= (3/10)-3(1/10)=0\cr f_3({\bf x}_3)&amp;= (1/10)-3(-3/10)=1.\cr } $$</p>
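<p>As an independent spot check of the whole computation (my own addition), here is a short Python verification in exact rational arithmetic confirming $f_i({\bf x}_j)=\delta_{ij}$ for all nine pairs.</p>

```python
from fractions import Fraction as F

# the columns of the inverse matrix computed above
x1 = [F(2, 5), F(-3, 10), F(-1, 10)]
x2 = [F(3, 5), F(3, 10), F(1, 10)]
x3 = [F(1, 5), F(1, 10), F(-3, 10)]

f1 = lambda v: v[0] - 2 * v[1]
f2 = lambda v: v[0] + v[1] + v[2]
f3 = lambda v: v[1] - 3 * v[2]

for i, f in enumerate((f1, f2, f3)):
    for j, x in enumerate((x1, x2, x3)):
        assert f(x) == (1 if i == j else 0)
```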
1,046,687
<p>I have this math problem: "determine whether the series converges absolutely, converges conditionally, or diverges."</p> <p>I can use any method I'd like. This is the series: $$\sum_{n=1}^{\infty}(-1)^n\frac{1}{n\sqrt{n+10}}$$</p> <p>I thought about using a comparison test, but I'm not sure what series I can compare $\sum_{n=1}^{\infty}\frac{1}{n\sqrt{n+10}}$ to.</p>
Learnmore
294,365
<p>It converges absolutely:</p> <p>$|u_n|=\frac{1}{n\sqrt{n+10}}&lt;\frac{1}{n}\cdot\frac{1}{n^{\frac{1}{2}}}=\frac{1}{n^{\frac{3}{2}}}$</p> <p>Since $\sum\frac{1}{n^{3/2}}$ is a convergent $p$-series ($p=\frac{3}{2}&gt;1$), the series converges absolutely by comparison.</p>
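<p>A quick numerical illustration of this comparison (my own addition): the terms really are dominated by $1/n^{3/2}$, and the partial sums of $|u_n|$ stay bounded.</p>

```python
import math

partial = 0.0
for n in range(1, 50_001):
    term = 1 / (n * math.sqrt(n + 10))
    assert term < n ** -1.5     # the comparison used above
    partial += term
# increasing and bounded above by sum 1/n^(3/2) = zeta(3/2) (about 2.612),
# hence the series of absolute values converges
print(partial)
```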
1,046,687
<p>I have this math problem: "determine whether the series converges absolutely, converges conditionally, or diverges."</p> <p>I can use any method I'd like. This is the series: $$\sum_{n=1}^{\infty}(-1)^n\frac{1}{n\sqrt{n+10}}$$</p> <p>I thought about using a comparison test, but I'm not sure what series I can compare $\sum_{n=1}^{\infty}\frac{1}{n\sqrt{n+10}}$ to.</p>
GorTeX
196,508
<p>We can use Leibniz's theorem (the alternating series test): if $(a_n)$ is a monotone decreasing sequence that converges to $0$, then the series $\sum_{n=1}^{\infty}(-1)^n a_n$ converges. Let $a_n=\dfrac{1}{n\sqrt{n+10}}$. So we have to prove that $\lim_{n\to\infty} a_n=0$ and that the sequence is monotone decreasing. If you can show that, you have that the series converges.</p>
916,794
<p>I have to find the value of $m$ such that:</p> <p>$\displaystyle\int_0^m \dfrac{dx}{3x+1}=1.$</p> <p>I'm not sure how to integrate when dx is in the numerator. What do I do?</p> <p>edit: I believe there was a typo in the question. Solved now, thank you!</p>
Jack D'Aurizio
44,121
<p>Since: $$\int_{0}^{m}\frac{dx}{3x+1}=\frac{1}{3}\log(3m+1)$$ we have: $$ m = \frac{e^3-1}{3}.$$</p>
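<p>A quick numerical check of this value of $m$ (my own addition), both via the antiderivative and via a crude Riemann sum of the integrand:</p>

```python
import math

m = (math.e ** 3 - 1) / 3

# antiderivative check: (1/3) log(3m + 1) should equal 1
assert abs(math.log(3 * m + 1) / 3 - 1) < 1e-12

# independent midpoint-rule approximation of the integral itself
steps = 100_000
h = m / steps
riemann = h * sum(1 / (3 * (k + 0.5) * h + 1) for k in range(steps))
assert abs(riemann - 1) < 1e-6
```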
2,337,524
<p>We have $p(x)$ a degree $m$ polynomial and $q(x)$ a degree $k$ polynomial. We also know that $p(x) = q(x)$ has at least $n+1$ solutions. And, $n\geq m\land n\geq k$.</p> <p>Now, I tried graphing a little to see if I could spot a pattern.</p> <p>I tried making $$y= x^{2} $$ and $$y = -x^{2}+5 $$ There were two points of intersection, so $n$ was $1$ in some sense here while the degrees were $2$. </p> <p>Then the only way I thought that the condition can hold for $n+1$ solutions and $n \geq m $ and $ n \geq k$ is if $p(x)$ and $q(x)$ are the same polynomial. </p> <p>That was the answer to this problem, but can someone explain the idea behind it, please?</p>
hamam_Abdallah
369,188
<p><strong>just a hint</strong></p> <p>Consider $R(x)=P(x)-Q(x)$. If $R$ is not the zero polynomial, then having at least $n+1$ roots forces $\deg R\ge n+1$.</p> <p>But $\deg R\le \max(m,k)\le n$, a contradiction. Hence $R$ is the zero polynomial, i.e. $P(x)=Q(x)$.</p>
1,850,418
<p>An argument has two parts, the set of all premises, and the conclusion drawn from said premise. Now since there's only 1 conclusion, it would be weird to choose a name for the 'second' part of the argument. However, what is the first part called? I used to think that this was actually called the premise, however that turns out not to be the case. Anyway, what <em>is</em> it called? Because I feel like it needs a name.</p> <pre><code>Argument Premise 1¯| Premise 2 | &lt;--- What's this called? Premise 3_| Conclusion </code></pre> <p>What is the part containing premise 1, 2, and 3?</p> <p><strong>Edit:</strong> I just realized that actually, perhaps one conclusion doesn't necessarily need to be drawn, perhaps there can be several conclusions, if so, what is the set of all conclusions called?</p>
lemontree
344,246
<p>I'd say the answer is already in the title of your question: You can simply call it the <strong>set of premises</strong>, or the <strong>premise set</strong>.<br> This indicates that it is a set of propositions (possibly several, though in principle it might be empty or a singleton) serving altogether as the premise of the argument, without making any judgement about whether it might be just one single premise or multiple ones to be interpreted as one premise derived by conjunction of several sub-premises.<br> Similarly, if there are several propositions that are conclusions of the argument, you can simply call it the <strong>set of conclusions</strong>, which is a most neutral term in that it doesn't make any assertions about its elements apart from them being conclusions, which is exactly what you want to say.</p>
2,916,306
<p>Let $f(x,y) = x\ln(x) + y\ln(y)$ be defined on space </p> <p>$S = \{(x,y) \in \mathbb{R}^2| x&gt; 0, y &gt; 0, x + y = 1\}$.</p> <p>My question is, how do I take the partial derivative for this function, given that the parameters are coupled through $x+y = 1$.</p> <p>A first idea would be to do it ignoring the coupling constraint. For this, we will get,</p> <p>$\dfrac{\partial f(x,y)}{\partial x} = \dfrac{\partial x\ln(x) + y\ln(y)}{\partial x} = \ln(x) + x/x = \ln(x) + 1$</p> <p>If we do not ignore the coupling constraint, and instead substitute $y = 1-x$, we will get,</p> <p>$\dfrac{\partial f(x,y)}{\partial x} = \dfrac{\partial x\ln(x) + (1-x)\ln(1-x)}{\partial x} = \ln(x) + 1 + \dfrac{1}{1-x} - \ln(1-x) - \dfrac{x}{1-x}$</p> <p>Am I doing this correctly? </p> <p>Why do I get two different expressions of the gradient?</p>
Nosrati
108,128
<p>You can't use $y=1-x$; these variables are independent. Although we choose our points from $$S = \{(x,y) \in \mathbb{R}^2| x&gt; 0, y &gt; 0, x + y = 1\}$$ it doesn't mean these variables are dependent. So the substitution $y = 1-x$ makes no sense for this <strong>two-variable function</strong>.</p>
3,848,179
<blockquote> <p>The velocity <span class="math-container">$v$</span> of a freefalling skydiver is modeled by the differential equation</p> <p><span class="math-container">$$ m\frac{dv}{dt} = mg - kv^2,$$</span></p> <p>where <span class="math-container">$m$</span> is the mass of the skydiver, <span class="math-container">$g$</span> is the gravitational constant, and <span class="math-container">$k$</span> is the drag coefficient determined by the position of the diver during the dive. Find the general solution of the differential equation.</p> </blockquote> <p>So is it my job to solve for velocity here (<span class="math-container">$v$</span>)? or am I missing something?</p>
Lion Heart
809,481
<p>Let <span class="math-container">$x=2\sin t$</span>, i.e. <span class="math-container">$t=\arcsin\frac{x}{2}$</span>, so that <span class="math-container">$dx=2\cos t\,dt$</span>. Then <span class="math-container">$\sqrt{2^2 - x^2}=2\sqrt{1-\left(\frac{x}{2}\right)^2}=2\sqrt{1-\sin^2 t}=2\cos t$</span>.</p>
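<p>A quick numerical sanity check of the substitution (pure Python; the sample angles are arbitrary points of $(0,\pi/2)$, where $\cos t&gt;0$ so the square root sign is unambiguous):</p>

```python
import math

# Check sqrt(4 - x^2) == 2 cos(t) with x = 2 sin(t), at a few sample angles
for t in [0.1, 0.5, 1.0, 1.4]:
    x = 2 * math.sin(t)
    lhs = math.sqrt(4 - x * x)
    rhs = 2 * math.cos(t)
    assert abs(lhs - rhs) < 1e-12
print("substitution identity verified")
```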
462,569
<blockquote> <p>Consider the polynomial ring <span class="math-container">$F\left[x\right]$</span> over a field <span class="math-container">$F$</span>. Let <span class="math-container">$d$</span> and <span class="math-container">$n$</span> be two nonnegative integers.</p> <p>Prove: <span class="math-container">$x^d-1 \mid x^n-1$</span> iff <span class="math-container">$d \mid n$</span>.</p> </blockquote> <p>My tries:</p> <hr /> <p>Necessity: let <span class="math-container">$n=d t+r$</span>, <span class="math-container">$0\le r&lt;d$</span>.</p> <p>Since <span class="math-container">$x^d-1 \mid x^n-1$</span>, we have</p> <p><span class="math-container">$x^n-1=\left(x^d-1\right)\left(x^{dt+r-d}+\dots+1\right)$</span>...</p> <p>So it remains to prove <span class="math-container">$r=0$</span>?</p> <p>I don't know, and I can't go on. How do I do it?</p>
Prism
49,227
<p>Suppose $x^{d}-1\mid x^{n}-1$. By the division algorithm, we can write $n=qd+r$ for some $q,r\in\mathbb{N}_0$ with $0\le r&lt;d$. Now, observe that $$x^{d}-1 \mid (x^{d}-1)(x^{n-d}+x^{n-2d}+\cdots + x^{n-qd}+1)$$ Expanding the above, and cancelling many terms, we get that $$x^{d}-1 \mid x^{n}+x^{d}-x^{n-qd}-1=x^{n}-1+x^{d}-x^{r}$$ Together with $x^{d}-1\mid x^{n}-1$, this implies: $$x^{d}-1\mid x^{d}-x^{r}=(x^{d}-1)-(x^{r}-1)$$ which gives $x^{d}-1\mid x^{r}-1$. Since $0\le r&lt;d$, the polynomial $x^{r}-1$ has degree strictly smaller than that of $x^{d}-1$, so this divisibility forces $x^{r}-1=0$, i.e. $r=0$, and hence $d\mid n$. </p> <p>The converse easily follows from the identity $y^{m}-1=(y-1)(y^{m-1}+y^{m-2}+\cdots + y+1)$, applied with $y=x^{d}$ and $m=n/d$.</p>
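<p>As a numerical shadow of the statement (illustrative only, not a proof): substituting any fixed integer $a\ge 2$ for $x$, the analogous integer statement $a^d-1\mid a^n-1\iff d\mid n$ holds and can be brute-force checked:</p>

```python
a = 10  # any integer base >= 2 works for this sanity check
for d in range(1, 8):
    for n in range(1, 30):
        divides = (a ** n - 1) % (a ** d - 1) == 0
        # a^d - 1 divides a^n - 1 exactly when d divides n
        assert divides == (n % d == 0)
print("integer analogue confirmed for small d, n")
```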
118,070
<p>I have the following code:</p> <pre><code>a = 6.08717*10^6; b = a/3; c = a*1.5; d = a^2; matrix={{a, b, c, d},{b, c, d, a},{c, d, a, b},{d, a, b, c}}; matrix // EngineeringForm </code></pre> <p>Normally, I use this result by copying (<code>Copy As ► MathML</code>) and pasting into Microsoft Word.</p> <p>However, I do not get the desired formatting.</p> <p>The image below shows what the result is presented in Microsoft Word:</p> <p><a href="https://i.stack.imgur.com/ZZRu6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZZRu6.png" alt="enter image description here"></a></p> <p>The image below shows the result that I want:</p> <p><a href="https://i.stack.imgur.com/tLYEZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tLYEZ.png" alt="enter image description here"></a></p> <p>1 - Comma instead of point</p> <p>2 - Without quotation marks</p> <p>3 - Four decimal places</p> <p>4 - Point instead of the Cross</p>
Alexey Popkov
280
<h2>Original answer</h2> <p>The reason for getting extra quotes is that these extra quotes are explicitly present in the box form of the expression generated by such functions as <code>EngineeringForm</code>, <code>NumberForm</code> etc.:</p> <pre><code>ToBoxes@EngineeringForm[6.08717*10^6] </code></pre> <blockquote> <pre><code>TagBox[InterpretationBox[ RowBox[{&quot;\&quot;6.08717\&quot;&quot;, &quot;\[Times]&quot;, SuperscriptBox[&quot;10&quot;, &quot;\&quot;6\&quot;&quot;]}], 6.08717*10^6, AutoDelete -&gt; True], EngineeringForm] </code></pre> </blockquote> <p>We can post-process these raw boxes and remove extra quotes using the <code>removeExtraQuotes</code> function defined as follows (see <a href="https://mathematica.stackexchange.com/a/29319/280">this</a> for description of <code>RuleCondition</code>):</p> <pre><code>removeExtraQuotes = RawBoxes[ToBoxes[#] /. n_String :&gt; RuleCondition[StringTake[n, {2, -2}], StringMatchQ[n, &quot;\&quot;&quot; ~~ __ ~~ &quot;\&quot;&quot;]]] &amp;; </code></pre> <p>Now we can proceed as follows:</p> <pre><code>removeExtraQuotes@ NumberForm[Grid[matrix], {7, 4}, NumberPoint -&gt; &quot;,&quot;, NumberMultiplier -&gt; &quot;\[CenterDot]&quot;, ExponentStep -&gt; 3] </code></pre> <blockquote> <p><a href="https://i.stack.imgur.com/ycYxe.png" rel="noreferrer"><img src="https://i.stack.imgur.com/ycYxe.png" alt="output" /></a></p> </blockquote> <p>After copying the output using <code>Copy As ► MathML</code> we get in the clipboard the MathML expression without extra quotes.</p> <hr /> <h2>Further development</h2> <p>The original answer solves the problem as stated but I was worrying about slightly wrong formatting of the numbers when they are displayed inside of a Notebook: as seen from the above screenshot, the space between the comma <code>,</code> and the fractional part of the number is too large. 
So I decided to try to overcome this problem.</p> <p>Here is an improved version of <code>removeExtraQuotes</code> which generates output correctly displayed inside of a Notebook (the only change is that now I wrap the string by <code>Cell</code> effectively creating an inline plain text <code>Cell</code>):</p> <pre><code>removeExtraQuotes = RawBoxes[ToBoxes[#] /. n_String :&gt; RuleCondition[Cell@StringTake[n, {2, -2}], StringMatchQ[n, &quot;\&quot;&quot; ~~ __ ~~ &quot;\&quot;&quot;]]] &amp;; </code></pre> <p>Testing:</p> <pre><code>removeExtraQuotes@ NumberForm[Grid[matrix], {7, 4}, NumberPoint -&gt; &quot;,&quot;, NumberMultiplier -&gt; &quot;\[CenterDot]&quot;, ExponentStep -&gt; 3] </code></pre> <blockquote> <p><a href="https://i.stack.imgur.com/o007a.png" rel="noreferrer"><img src="https://i.stack.imgur.com/o007a.png" alt="output" /></a></p> </blockquote> <p>As one can see, the extra spacing between the comma &quot;,&quot; and the fractional part disappeared. But the spaces around the multiplication symbol are larger than needed.</p> <p>I have copied the output using <code>Copy As ► MathML</code> and then successfully pasted in MathType 6.9b. Here is what I got:</p> <p><a href="https://i.stack.imgur.com/6HgF1.png" rel="noreferrer"><img src="https://i.stack.imgur.com/6HgF1.png" alt="screenshot" /></a></p> <p>As one can see the result is <em>almost</em> perfect. The only shortcoming is that the multiplication symbol (center dot) is seemingly taken from the Times New Roman font while when creating this symbol using the MathType interface it is taken from the Symbol font and looks slightly different.</p> <h3>UPDATE from 08.11.2018: a perfect solution</h3> <p>The way to remove all the extra spacing is to apply the <code>AutoSpacing -&gt; False</code> option:</p> <pre><code>removeExtraQuotes = RawBoxes[ToBoxes[#] /. 
n_String :&gt; RuleCondition[StringTake[n, {2, -2}], StringMatchQ[n, &quot;\&quot;&quot; ~~ __ ~~ &quot;\&quot;&quot;]]] &amp;; removeExtraQuotes@ NumberForm[Grid[matrix, BaseStyle -&gt; AutoSpacing -&gt; False], {7, 4}, NumberPoint -&gt; &quot;,&quot;, NumberMultiplier -&gt; &quot;\[CenterDot]&quot;, ExponentStep -&gt; 3] </code></pre> <blockquote> <p><a href="https://i.stack.imgur.com/V7FJ2.png" rel="noreferrer"><img src="https://i.stack.imgur.com/V7FJ2.png" alt="screenshot" /></a></p> </blockquote> <hr /> <h2>For those who want to develop their own NumberForm</h2> <p>My experiments (with <em>Mathematica</em> 10.4.1) showed that the most nice-looking and at the same time interoperable method of creating formatted numbers is to make an inline <code>Cell</code> for the whole number with <code>TextData</code> as a container. Inside of <code>TextData</code> boxes can only be present inside of a <code>Cell</code> what complicates things a bit but the resulting expression is rendered much better than the built-in <code>NumberForm</code> output (which uses <code>RowBox</code> instead of <code>TextData</code>):</p> <pre><code>Column[{RawBoxes@ Cell[TextData[{&quot;6&quot;, &quot;,&quot;, &quot;0872&quot;, &quot;\[CenterDot]&quot;, Cell[BoxData[SuperscriptBox[&quot;10&quot;, &quot;6&quot;]]]}]], NumberForm[6.08717*10^6, {7, 4}, NumberPoint -&gt; &quot;,&quot;, NumberMultiplier -&gt; &quot;\[CenterDot]&quot;, ExponentStep -&gt; 3]}] </code></pre> <blockquote> <p><a href="https://i.stack.imgur.com/nS2jJ.png" rel="noreferrer"><img src="https://i.stack.imgur.com/nS2jJ.png" alt="output" /></a></p> </blockquote> <p>Of course one can place the inline <code>Cell</code> inside of <code>InterpretationBox</code> as it is implemented for <code>NumberForm</code> and wrap the whole thing with <code>TagBox</code> in order to achieve predictable formatting:</p> <pre><code>RawBoxes@TagBox[ InterpretationBox[ Cell[TextData[{&quot;6&quot;, &quot;,&quot;, &quot;0872&quot;, 
&quot;\[CenterDot]&quot;, Cell[BoxData[SuperscriptBox[10, 6]]]}]], 6.08717`*^6, AutoDelete -&gt; True], NumberForm] </code></pre> <blockquote> <p><a href="https://i.stack.imgur.com/XrAJo.png" rel="noreferrer"><img src="https://i.stack.imgur.com/XrAJo.png" alt="output" /></a></p> </blockquote> <p>Copying the last output, pasting it into a new input cell and evaluating gives:</p> <p><a href="https://i.stack.imgur.com/VWYDS.png" rel="noreferrer"><img src="https://i.stack.imgur.com/VWYDS.png" alt="screenshot" /></a></p> <p>As one can see inside of input cell it preserves nice formatting. At the same time it can be copied as <code>MathML</code> and then pasted into MathType (or other <code>MathML</code>-compatible application) without extra quotes and retaining superscripts, subscripts etc.</p> <p>P.S.: It disappoints that <code>TextClipboardType -&gt; &quot;MathML&quot;</code> option for <code>Cell</code> isn't supported.</p>
186,726
<p>Just a soft question that has been bugging me for a long time:</p> <p>How does one deal with mental fatigue when studying math?</p> <p>I am interested in Mathematics, but when studying, say, Galois Theory and Analysis intensely, after around one and a half hours my brain starts to get foggy and my mental condition drops to suboptimal levels.</p> <p>I would wish to continue studying, but these circumstances force me to take a break. Is it truly a case of "the spirit is willing but the brain is weak"?</p> <p>How do people maintain concentration over longer periods of time? Is this ability trainable or genetic? (Other than taking drugs like Erdős did.)</p> <p>I know this is a really soft question, but I guess asking mathematicians is the best choice, since the subject of Mathematics requires the most mental concentration compared to other subjects.</p>
Evan Kearney
152,838
<p>I have the blessing of not being quite sure what type of mathematical insight and clarity each day will bring. Some days, despite little sleep, I may be physically tired, but it allows me to stay at the desk scribbling away and I make some progress (even if it is not directly research related - just coming to terms with "simple" concepts even)! Other days, I wake refreshed and emotionally sound, but my mental clarity isn't there. Either way, I understand that is how I, in particular, go about my journey in Mathematics. So I don't get mad if I "desire more clarity", for lack of a decent term.</p> <p>Sleep is important. I am sure we have all had those nights that I call "Feedback Loop Nights", where a Theorem taken for granted in the past suddenly attains radically new salience in a much more abstract domain. Thus, the idea continuously morphs back on its previous iteration of conceptual understanding for quite some time until sleep finally arrives. Those are the most unpleasant mathematical experiences because little progress is made. It is just a compulsion.</p> <p>It is important to be completely absorbed by at least one other, non-analytical hobby. For me, it is music composition (although I often "entertain" analyses of the algebra of harmony and counterpoint). It helps the brain to break out of the compulsion-mode.</p> <p>As for coffee, I like it in moderation. I find that it does little to improve my cognitive abilities, but it makes conversations more engaging, for sure. Alcohol reacts negatively with me (I get a massive hangover from as little as half a beer) so I don't drink, but I am sure that any psychoactive compound in moderation would aid in broadening one's scope of the World, provided you don't have a history of substance dependence. </p>
809,499
<p>Find the number of real solutions of the equation $\sin x+2\sin 2x-\sin 3x = 3,$ where $x\in (0,\pi)$.</p> <p>$\bf{My\; Try::}$ Given $\left(\sin x-\sin 3x\right)+2\sin 2x = 3$</p> <p>$\Rightarrow -2\cos 2x\cdot \sin x+2\sin 2x = 3\Rightarrow -2\cos 2x\cdot \sin x+4\sin x\cdot \cos x = 3$</p> <p>$\Rightarrow 2\sin x\cdot \left(-\cos 2x+2\cos x\right)=3$</p> <p>Now I do not understand how to solve it from here.</p> <p>Help me</p> <p>Thanks</p>
juantheron
14,311
<p><strong>Using an inequality</strong></p> <p>Starting from the $\bf{L.H.S.}$ and using $\left|\sin x\right|\leq 1$,</p> <p>$$\sin x+2\sin 2x-\sin 3x = -2\sin x\cos 2x+2\sin 2x\leq 2\left|\cos 2x\right|+2\left|\sin 2x\right|$$</p> <p>$$\leq 2\sqrt{2}\sqrt{\cos^2 2x+\sin^2 2x} = 2\sqrt{2}&lt;3\;(\bf{R.H.S.})$$</p> <p>where the second line is the Cauchy–Schwarz bound $|a|+|b|\leq \sqrt{2}\sqrt{a^2+b^2}$.</p> <p>So there are no real values of $x\in (0,\pi)$ for which $\sin x+2\sin 2x-\sin 3x =3$.</p>
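<p>A dense numerical sample over $(0,\pi)$ agrees with the bound (an illustrative check only, not part of the proof): the maximum of the left-hand side stays below $2\sqrt2\approx 2.83$, so it never reaches $3$.</p>

```python
import math

# Dense sample of f(x) = sin(x) + 2 sin(2x) - sin(3x) on (0, pi)
best = max(
    math.sin(x) + 2 * math.sin(2 * x) - math.sin(3 * x)
    for x in (k * math.pi / 100_000 for k in range(1, 100_000))
)
print(best)  # stays below 2*sqrt(2) ~ 2.8284, so it never reaches 3
```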
174,149
<p>How many seven-digit even numbers greater than $4,000,000$ can be formed using the digits $0,2,3,3,4,4,5$?</p> <p>I have solved the question using different cases: when $4$ is in the first place and when $5$ is in the first place, then using constraints on the last digit.</p> <p>But is there a smarter way?</p>
Giles Gardam
32,305
<p>I think considering the two different cases you mentioned separately is best. To avoid further case division, I'd proceed like this:</p> <ul> <li><p>Case with leading digit 4: since an even digit has to go in the rightmost position, there are $5 \choose 3$ ways to choose the positions of the 3 odd numbers amongst the other 5 positions, then 3 ways to order them, and $3!$ ways to order the even numbers, for 180 total.</p></li> <li><p>Case with leading digit 5: similarly, there are $5 \choose 2$ ways to choose the positions of the two odd digits (both 3's), only 1 way to order them, then $\frac{4!}{2}$ ways to order the even digits (dividing by 2 since there are 2 indistinguishable 4's), for 120 total.</p></li> </ul> <p>So all up there are 300 such numbers.</p>
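<p>With only $7!=5040$ arrangements, the count is easy to confirm by brute force (the set removes duplicates coming from the repeated $3$s and $4$s; a leading zero just produces a six-digit number, which the size test discards):</p>

```python
from itertools import permutations

digits = (0, 2, 3, 3, 4, 4, 5)
valid = set()
for p in permutations(digits):
    n = int("".join(map(str, p)))
    if n > 4_000_000 and n % 2 == 0:
        valid.add(n)
print(len(valid))  # 300
```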
176,691
<p>Let $A'$ denote the complement of $A$ with respect to $\mathbb{R}$, where $A,B,T$ are subsets of $\mathbb{R}$. I am trying to prove $A' \cap (A' \cup B') \cap T= A' \cap T$, but I ran into some problems along the way.</p> <p>$A' \cap (A' \cup B') \cap T= (A' \cap A') \cup (A' \cap B') \cap T= A' \cup (A \cup B) \cap T =(A' \cup A)\cup B \cap T= \mathbb{R} \cup B \cap T = B \cap T$ Is something wrong?</p>
Dilip Sarwate
15,941
<p>$A' \cap (A' \cup B') \cap T= ((A' \cap A') \cup (A' \cap B')) \cap T= (A' \cup (A' \cap B')) \cap T$. But, $A' \cap B' \subset A'$ and so $A' \cup (A' \cap B') = A'$ giving $(A' \cup (A' \cap B')) \cap T = A' \cap T$ without dragging DeMorgan's Laws into it.</p>
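<p>The identity is really just the absorption law $A'\cap(A'\cup B')=A'$, intersected with $T$. It can be spot-checked on random subsets of a finite universe standing in for $\mathbb{R}$ (the universe size and trial count below are arbitrary):</p>

```python
import random

U = set(range(20))  # finite universe standing in for R
random.seed(0)
for _ in range(200):
    A = {x for x in U if random.random() < 0.5}
    B = {x for x in U if random.random() < 0.5}
    T = {x for x in U if random.random() < 0.5}
    Ac, Bc = U - A, U - B  # complements within U
    assert Ac & (Ac | Bc) & T == Ac & T
print("identity holds on random subsets")
```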
2,673,835
<p>I need to prove that the sequence $$ f_n = \sum_{i=0}^n\prod_{j=0}^i \left(z+j\right)^{-1} = \frac{1}{z}+\frac{1}{z(z+1)}+\cdots + \frac{1}{z(z+1)(z+2)\cdots (z+n)}$$ converges uniformly to a function on every compact subset of $\mathbb{C}\setminus\{0,-1,-2,-3,\dotsc\}$. The problem has other questions, but this is what stops me. Obviously, $f_n$ is holomorphic in that domain. </p> <p>In fact, I don't even know the limit of the sequence or a closed form of $f_n$. Can you give me a hint to continue? Please don't spoil the final solution for me. </p> <p>I think I could take a compact $K$ and consider the series that defines every term of $f_n$ and sum, but I don't get a general formula. </p> <p>Edit: I get</p> <p>$$ f_n(z)= \sum_{i=0}^n \frac{1}{z}\frac{\Gamma(z+1)}{\Gamma(z+1+i)}= \Gamma(z)\sum_{i=0}^n\frac{1}{\Gamma(z+1+i)} $$</p> <p>This smells like the Weierstrass M-test.</p>
G Cab
317,234
<p>For the negative <a href="https://en.wikipedia.org/wiki/Falling_and_rising_factorials" rel="nofollow noreferrer">Falling Factorials</a> we have the partial fraction expansion: $$ \eqalign{ &amp; \left( {x - 1} \right)^{\,\underline {\, - n\,} } = {1 \over {x^{\,\overline {\,n\,} } }} = \cr &amp; = \left[ {n = 0} \right] + \left[ {1 \le n} \right]{1 \over {\left( {n - 1} \right)!}}\sum\limits_{\left( {0\, \le } \right)\,k\,\left( {\, \le \,n - 1} \right)} {\left( { - 1} \right)^{\,k} \left( \matrix{ n - 1 \cr k \cr} \right)} {1 \over {\left( {x + k} \right)}} \cr} $$ where $[P]$ denotes the <a href="https://en.wikipedia.org/wiki/Iverson_bracket" rel="nofollow noreferrer"><em>Iverson bracket</em></a> $$ \left[ P \right] = \left\{ {\begin{array}{*{20}c} 1 &amp; {P = TRUE} \\ 0 &amp; {P = FALSE} \\ \end{array} } \right. $$</p> <p>Therefore your function $f_n(x)$ will be the sum over $n$ of the above, and since you asked not to be provided with the solution ...</p>
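<p>Without spoiling the solution: numerically, the terms $1/(z(z+1)\cdots(z+i))$ decay factorially, so the partial sums stabilize extremely fast away from the poles. A small sketch at an arbitrary sample point $z=2.5$:</p>

```python
def f(z, n):
    # partial sum f_n(z) = sum_{i=0}^{n} 1 / (z (z+1) ... (z+i))
    total, prod = 0.0, 1.0
    for j in range(n + 1):
        prod *= (z + j)
        total += 1.0 / prod
    return total

z = 2.5  # sample point away from the poles 0, -1, -2, ...
# successive partial sums agree to machine precision (factorial decay)
print(abs(f(z, 20) - f(z, 40)))  # essentially 0
```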
3,264,693
<p>For context, I have been relearning a lot of math through the lovely website Brilliant.org. One of their sections covers complex numbers and tries to intuitively introduce Euler's Formula and complex exponentiation by pulling features from polar coordinates, trigonometry, real number exponentiation, and vector space transformations.</p> <p>While I am now decently familiar with how complex exponentiation behaves (i.e. inducing rotation), I am slightly confused by the following. </p> <p><span class="math-container">$ 2^3 z$</span> can be viewed as stretching the complex number <span class="math-container">$z$</span> by <span class="math-container">$2^3$</span>. This could be rewritten as <span class="math-container">$8z$</span>. Therefore, Brilliant.org suggests that exponentiation of real numbers can be thought of as stretching a vector just like real number multiplication would. (<strong>check - understood</strong>)</p> <p>Brilliant.org then demonstrates that multiplying <span class="math-container">$z_1$</span> by another complex number <span class="math-container">$z_2$</span> is equivalent to first stretching <span class="math-container">$z_1$</span> by the magnitude of <span class="math-container">$z_2$</span> and then rotating <span class="math-container">$z_1$</span> by the angle that <span class="math-container">$z_2$</span> creates with the real axis counterclockwise. (<strong>check - understood</strong>)</p> <p>However, this is where I get confused. Why does, for example, <span class="math-container">$2^{2i}* z$</span> cause purely rotation of z but <span class="math-container">$2i*z$</span> does not (i.e. it causes stretching, too, in addition to rotation)?</p> <p>To me, the fact that <span class="math-container">$2^{(2i+3)}$</span> causes both rotation and stretching makes perfect sense because we can rewrite this as <span class="math-container">$(2^3)*(2^{(2i)})$</span>. 
As previously noted by Brilliant.org, exponentiation by real numbers can be thought of as stretching.</p> <p><strong>Here is the crux of my issue:</strong></p> <blockquote> <p>I understand that the magnitude of the imaginary number in the exponent (for example, the <span class="math-container">$'2'$</span> in <span class="math-container">$e^{2i}$</span>) can be thought of as a rate of speed...but why does this interpretation '<strong>drop</strong>' when we are doing something like <span class="math-container">$2i * z$</span>? I.e., <strong>why is the <span class="math-container">$2$</span> in <span class="math-container">$2i*z$</span> not also treated like a rate of rotation, but instead treated like a magnitude of stretching?</strong></p> </blockquote> <p>My math skill is not particularly high level, so if anyone can offer as intuitive an answer as possible, it would be greatly appreciated!</p> <p>Edit 1: I guess another way of expressing this question is as follows: </p> <p>Why does a duality exist between real number exponentiation and real number multiplication, but not between imaginary number exponentiation and imaginary number multiplication (i.e. imaginary number multiplication can cause stretching in addition to rotation)?</p> <p>Edit 2: While I accept that Euler's formula is a way of proving that exponentiation of purely imaginary numbers has a magnitude of 1 and therefore does not invoke stretching, that is not the sort of answer I am looking for. My question is aimed at identifying what was specified in Edit 1. </p> <p>Edit 3: Here is a picture that helps clarify my point of confusion. 
<a href="https://i.stack.imgur.com/9XeY0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9XeY0.png" alt="Lack of Duality Between Exponentiation and Multiplication"></a></p> <p>Edit 4: The question that was asked in this post <a href="https://math.stackexchange.com/questions/1540062/which-general-physical-transformation-to-the-number-space-does-exponentiation-re">Which general physical transformation to the number space does exponentiation represent?</a> is sort of the theme that I am going for. The answer that was given to this post, however, omits a reference to the complex numbers. </p>
xrfxlp
678,937
<p>Let <span class="math-container">$z_o$</span> and <span class="math-container">$z_d$</span> be two complex numbers; let's call the first one the operator and the second one the operand, and let <span class="math-container">$z'_d$</span> be the transformed version (the state after being operated on by <span class="math-container">$z_o$</span>) of <span class="math-container">$z_d$</span>, i.e. <span class="math-container">$$ \begin{align} z_d' &amp;= z_o \times z_d \\ &amp;= |z_o| e^{i \arg(z_o)} \times |z_d| e^{i \arg(z_d)} \\ &amp; = |z_o z_d| e^{i(\arg(z_o) + \arg(z_d))}\end{align}$$</span></p> <p>From here all your results get derived:</p> <ol> <li>If <span class="math-container">$z_o$</span> is purely real, then <span class="math-container">$ \arg(z_o) = 0, \pi$</span> when <span class="math-container">$z_o$</span> is positive and negative respectively. In the case when <span class="math-container">$z_o$</span> is positive, </li> </ol> <p><span class="math-container">$$ z'_d = |z_o z_d| e^{i(0 + \arg(z_d))} = |z_o z_d| e^{i\arg(z_d)} = |z_o| z_d$$</span> so the number <span class="math-container">$z_d$</span> is scaled by a factor of <span class="math-container">$|z_o|$</span>.</p> <p>For the case when <span class="math-container">$z_o$</span> is a negative real number, <span class="math-container">$$ z'_d = |z_o z_d| e^{i(\pi + \arg(z_d))} = -|z_o| z_d $$</span> In this case, the number is <strong>rotated</strong> by <span class="math-container">$\pi$</span> radians; you can also say it is flipped and scaled by a factor of <span class="math-container">$|z_o|$</span>. Neither description is wrong, but only positive real numbers can be written as real exponentials, and in that case what I've said agrees with the website.</p> <ol start="2"> <li>If <span class="math-container">$z_o$</span> is a general complex number, <span class="math-container">$$ z_d' = |z_o z_d| e^{i(\arg(z_o) + \arg(z_d))}$$</span> so the magnitude of <span class="math-container">$z_d$</span> is scaled by <span class="math-container">$|z_o|$</span> and, in addition, its argument is shifted by the argument of the operator; here the number is both rotated and scaled.</li> </ol> <p>Now, coming to your question: for the multiplication of <span class="math-container">$2^{2i}$</span> with <span class="math-container">$z$</span>, if I denote by <span class="math-container">$z'$</span> the transformed version, then <span class="math-container">$$ z' = 2^{2i} z = |2^{2i}| e^{i\arg(2^{2i})} z = (1)\, e^{i\arg(2^{2i})} z $$</span> As you can see <a href="https://www.wolframalpha.com/input/?i=2%5E(2i)" rel="nofollow noreferrer">here</a>, its magnitude is one, while in the case when <span class="math-container">$z_o = 2i$</span>, <span class="math-container">$|2i| = 2, \arg(2i) = \cfrac{\pi}{2} $</span>, so there you get both rotation and scaling.</p> <p>Coming to your crux point: if <span class="math-container">$z_o$</span> is <span class="math-container">$e^{2i}$</span>, then <span class="math-container">$|z_o| = 1$</span>, which implies it can't scale the complex number; it can only rotate <span class="math-container">$z_d$</span>, by <span class="math-container">$2$</span> radians. When <span class="math-container">$z_o = 2i$</span>, <span class="math-container">$|z_o| = 2$</span> and <span class="math-container">$\arg(2i) = \cfrac{\pi}{2}$</span>, so <span class="math-container">$z_d$</span> is scaled by a factor of two and rotated by <span class="math-container">$\cfrac{\pi}{2}$</span> radians. </p>
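<p>These modulus/argument claims can be confirmed directly with complex arithmetic (the sample value $z=3+i$ is arbitrary):</p>

```python
import cmath
import math

z = 3 + 1j

# 2**(2i) has modulus 1: multiplying by it is a pure rotation
w = 2 ** 2j
assert abs(abs(w) - 1) < 1e-12

# 2i has modulus 2 and argument pi/2: rotation by pi/2 *and* stretching by 2
assert abs(abs(2j * z) - 2 * abs(z)) < 1e-12
assert abs(cmath.phase(2j * z) - (cmath.phase(z) + math.pi / 2)) < 1e-12
print("2**(2i) only rotates; 2i rotates and stretches")
```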
3,264,693
<p>For context, I have been relearning a lot of math through the lovely website Brilliant.org. One of their sections covers complex numbers and tries to intuitively introduce Euler's Formula and complex exponentiation by pulling features from polar coordinates, trigonometry, real number exponentiation, and vector space transformations.</p> <p>While I am now decently familiar with how complex exponentiation behaves (i.e. inducing rotation), I am slightly confused by the following. </p> <p><span class="math-container">$ 2^3 z$</span> can be viewed as stretching the complex number <span class="math-container">$z$</span> by <span class="math-container">$2^3$</span>. This could be rewritten as <span class="math-container">$8z$</span>. Therefore, Brilliant.org suggests that exponentiation of real numbers can be thought of as stretching a vector just like real number multiplication would. (<strong>check - understood</strong>)</p> <p>Brilliant.org then demonstrates that multiplying <span class="math-container">$z_1$</span> by another complex number <span class="math-container">$z_2$</span> is equivalent to first stretching <span class="math-container">$z_1$</span> by the magnitude of <span class="math-container">$z_2$</span> and then rotating <span class="math-container">$z_1$</span> by the angle that <span class="math-container">$z_2$</span> creates with the real axis counterclockwise. (<strong>check - understood</strong>)</p> <p>However, this is where I get confused. Why does, for example, <span class="math-container">$2^{2i}* z$</span> cause purely rotation of z but <span class="math-container">$2i*z$</span> does not (i.e. it causes stretching, too, in addition to rotation)?</p> <p>To me, the fact that <span class="math-container">$2^{(2i+3)}$</span> causes both rotation and stretching makes perfect sense because we can rewrite this as <span class="math-container">$(2^3)*(2^{(2i)})$</span>. 
As previously noted by Brilliant.org, exponentiation by real numbers can be thought of as stretching.</p> <p><strong>Here is the crux of my issue:</strong></p> <blockquote> <p>I understand that the magnitude of the imaginary number in the exponent (for example, the <span class="math-container">$'2'$</span> in <span class="math-container">$e^{2i}$</span>) can be thought of as a rate of speed...but why does this interpretation '<strong>drop</strong>' when we are doing something like <span class="math-container">$2i * z$</span>? I.e., <strong>why is the <span class="math-container">$2$</span> in <span class="math-container">$2i*z$</span> not also treated like a rate of rotation, but instead treated like a magnitude of stretching?</strong></p> </blockquote> <p>My math skill is not particularly high level, so if anyone can offer as intuitive an answer as possible, it would be greatly appreciated!</p> <p>Edit 1: I guess another way of expressing this question is as follows: </p> <p>Why does a duality exist between real number exponentiation and real number multiplication, but not between imaginary number exponentiation and imaginary number multiplication (i.e. imaginary number multiplication can cause stretching in addition to rotation)?</p> <p>Edit 2: While I accept that Euler's formula is a way of proving that exponentiation of purely imaginary numbers has a magnitude of 1 and therefore does not invoke stretching, that is not the sort of answer I am looking for. My question is aimed at identifying what was specified in Edit 1. </p> <p>Edit 3: Here is a picture that helps clarify my point of confusion. 
<a href="https://i.stack.imgur.com/9XeY0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9XeY0.png" alt="Lack of Duality Between Exponentiation and Multiplication"></a></p> <p>Edit 4: The question that was asked in this post <a href="https://math.stackexchange.com/questions/1540062/which-general-physical-transformation-to-the-number-space-does-exponentiation-re">Which general physical transformation to the number space does exponentiation represent?</a> is sort of the theme that I am going for. The answer that was given to this post, however, omits a reference to the complex numbers. </p>
fedja
12,992
<p>Think of it this way. <span class="math-container">$2iz=2(iz)$</span>, so you have two steps here: <span class="math-container">$z\mapsto iz$</span>, which is a pure rotation, and <span class="math-container">$(iz)\mapsto 2(iz)$</span>, which is a pure stretching. Of course, you can also write <span class="math-container">$2iz=i(2z)$</span>, in which case you'll do the stretching first, but the result will be the same. </p> <p>Now, when you multiply by <span class="math-container">$2$</span> in the exponent, something different is going on. Instead of <span class="math-container">$xyz=(xy)z$</span>, you have <span class="math-container">$x^{yz}=(x^y)^z$</span>, so <span class="math-container">$e^{2i}z=(e^i)^2z=e^i(e^iz)$</span>, i.e., you apply the rotation given by <span class="math-container">$e^i$</span> twice, not follow or precede it by stretching two times. That's why the rotation remains a rotation but the angle doubles. </p> <p>Of course, you can also write it as <span class="math-container">$(e^2)^iz$</span>, but this way you will have to say that you apply the stretching (<span class="math-container">$z\mapsto e^2z$</span>) an <em>imaginary</em> (<span class="math-container">$i$</span>) number of times. There is no intuitive physical meaning of applying some transformation <span class="math-container">$i$</span> times but, since you want to maintain the usual laws for powers as much as you can, you are forced to say that stretching done <span class="math-container">$i$</span> times becomes a rotation. Strange, but since there is no contradiction with common sense (common sense just says nothing about applying a transformation an imaginary number of times), possible.</p>
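<p>Both decompositions are easy to verify numerically (the sample $z=1+2i$ is arbitrary): multiplying in the exponent composes the rotation with itself, while a real coefficient inserts a stretch before or after the rotation.</p>

```python
import cmath

z = 1 + 2j

# e^{2i} z equals e^i applied twice: a rotation composed with itself
once = cmath.exp(1j) * z
twice = cmath.exp(1j) * once
assert abs(twice - cmath.exp(2j) * z) < 1e-12

# 2i z factors as stretch-then-rotate or rotate-then-stretch
assert abs(2j * z - 2 * (1j * z)) < 1e-12
assert abs(2j * z - 1j * (2 * z)) < 1e-12
print("exponent arithmetic composes rotations; a real factor adds a stretch")
```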
1,177,605
<p>Since the problem uses $y^2=x$, I first assumed that the element must be horizontal (parallel to the $x$-axis). However, the bounded region has all $y$ values greater than $0$, so I could also use a vertical element. This problem has me stumped; I know how to set up the integral but for the shell method I need to find the radius (element to axis of rotation) and the height of the element.</p> <p>What is the best way to approach this problem?</p>
Daniel Valenzuela
156,302
<p>Consider $SO(R^3)$, which acts on $R^3$ by rotations; this action restricts to an action on $S^2$. For every point $x\in S^2$ we have a unique orthogonal plane $V$, hence $SO(V)\subset SO(R^3)$ will fix $x$. It is easy to see that in fact $Stab(x)=SO(V) \cong SO(2)$. Hence we have a fiber bundle $$ SO(3) \to S^2 $$</p> <p>with fiber $SO(2)$. The map is basically just fixing a point in $S^2$, e.g. $(0,0,1)$, and considering its image under the group action. More fancy: a group action is a map $G\times S^2 \to S^2$ which you can restrict to $G\times \{*\}$. Since the total space and the fiber are Lie groups, this gives the fiber bundle $$SO(2) \to SO(3) \to S^2 \cong SO(3)/SO(2) $$</p> <p>Now we compose $SO(3) \to S^2 \to S^2/\mathbb Z_2 = RP^2$. The fiber will be twice as large as before. It is easy, and a nice exercise, to fill in the details and check that the fiber is $O(2)$.</p>
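<p>As a minimal numeric illustration of the stabilizer claim only (not of the bundle structure): rotations about the $z$-axis, a copy of $SO(2)$ inside $SO(3)$, fix the point $(0,0,1)$.</p>

```python
import math

def rot_z(theta):
    # rotation about the z-axis by angle theta, as a 3x3 matrix
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def apply(M, v):
    # matrix-vector product in R^3
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

north = [0.0, 0.0, 1.0]
for theta in [0.3, 1.0, 2.5]:
    image = apply(rot_z(theta), north)
    assert all(abs(image[i] - north[i]) < 1e-12 for i in range(3))
print("rotations about the z-axis stabilize (0,0,1)")
```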
3,786,553
<p>I was trying to follow the logic in a similar question (<a href="https://math.stackexchange.com/questions/643352/probability-number-comes-up-before-another">Probability number comes up before another</a>), but I can't seem to get it to work out.</p> <p>Some craps games have a Repeater bet. You can bet on rolling aces twice before rolling a 7, rolling 3 three times before 7, etc. The patent for this game (<a href="https://patents.google.com/patent/US20140138911" rel="nofollow noreferrer">https://patents.google.com/patent/US20140138911</a>) says the odds for aces twice before 7 are 48:1. The wizard of odds (<a href="https://wizardofodds.com/games/craps/appendix/5/" rel="nofollow noreferrer">https://wizardofodds.com/games/craps/appendix/5/</a>) says the probability is 0.020408 (which is 1/49).</p> <p>I tried calculating this by multiplying the odds of the two events: 1/36 for rolling aces and (1/36)/((1/36)+(1/6)) for rolling aces before 7. I got (1/36)*((1/36)/((1/36)+(1/6))) = 0.003968253968253969, which is about 1/252.</p> <p>I'm obviously missing something, but can't see what.</p> <p>Edit: ...sorry... after typing this up I figured it out. The bet has to be made and then aces has to roll before 7 twice. So if 7 rolls before the first aces the bet loses, so I was wrong to use the 1/36 for the first aces.</p> <p>((1/36)/((1/36)+(1/6)))*((1/36)/((1/36)+(1/6))) = 0.020408163265306128</p> <p>I still don't understand why one says 48:1 when it's 1/49.</p>
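<p>Regarding the last point: quoting "odds of 48:1 (against)" is the conventional way of expressing a probability of $1/(48+1)=1/49$, so the patent and the Wizard of Odds agree. A sketch of the computation in exact arithmetic:</p>

```python
from fractions import Fraction

p_aces = Fraction(1, 36)   # probability of rolling 1-1
p_seven = Fraction(6, 36)  # probability of rolling a 7

# On any roll that decides the race, aces shows first with probability 1/7
p_first = p_aces / (p_aces + p_seven)

# The bet needs aces-before-seven twice in a row
p_win = p_first ** 2

print(p_win)          # 1/49
print(1 / p_win - 1)  # 48, i.e. fair odds of "48 to 1 against"
```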
Botond
281,471
<p>I'll assume that the integral is a Riemann-Stieltjes integral. We can use integration by parts, which states that if either <span class="math-container">$\int_a^b f(x) \mathrm{d}g(x)$</span> or <span class="math-container">$\int_a^b g(x) \mathrm{d}f(x)$</span> exists then the other one exists as well and <span class="math-container">$$\int_a^b f(x) \mathrm{d}g(x)+\int_a^b g(x) \mathrm{d}f(x)=f(b)g(b)-f(a)g(a)$$</span> In your case, the other integral is much easier: <span class="math-container">$$\int_0^2 \alpha(x)\mathrm{d}x=\int_0^1 x\mathrm{d}x+\int_1^2 (x+2)\mathrm{d}x=\frac{1}{2}+\frac{7}{2}=4$$</span> Which means that <span class="math-container">$$\int_0^2 x\mathrm{d}\alpha(x)=4\cdot2-0\cdot0-4=\color{red}{4}$$</span></p>
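A numerical cross-check (my own sketch, not part of the original answer): the jump of $\alpha$ at $x=1$ contributes $1\cdot 2=2$ to the integral and the smooth parts contribute $\int_0^2 x\,\mathrm{d}x=2$, so a Riemann–Stieltjes sum should approach $4$.

```python
def alpha(x):
    # the integrator assumed above: alpha(x) = x on [0,1), x + 2 on [1,2]
    return x if x < 1 else x + 2

def rs_sum(f, g, a, b, n):
    # Riemann-Stieltjes sum  sum f(xi_i) * (g(x_i) - g(x_{i-1}))  with midpoint tags
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        x0, x1 = a + i * h, a + (i + 1) * h
        total += f((x0 + x1) / 2) * (g(x1) - g(x0))
    return total

print(rs_sum(lambda x: x, alpha, 0.0, 2.0, 200_000))  # ~ 4.0
```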
876,896
<p>What are the poles of a polynomial? Are they the same as the roots?</p>
5xum
112,884
<p>This is the shortest answer possible for your question:</p> <p>There are no poles of a polynomial.</p> <p>(A pole is an isolated singularity at which a function blows up; a polynomial is entire, so it has no singularities at all. Poles arise for rational functions, at the zeros of the denominator, and they are in general not the same as the roots.)</p>
388,523
<p>I have this question:</p> <p>Evaluate $\int r . dS$ over the surface of a sphere, radius a, centred at the origin. </p> <p>I'm not really sure what '$r$' is supposed to be? I would guess a position vector? If so, I would have $r . dS$ as $(a\sin\theta \cos\phi, a \sin\theta \sin\phi, a\cos\theta) . (a^2\sin\theta \,d\theta \,d\phi)$ which doesn't seem right. Any pointers appreciated, thanks!</p>
user642796
8,348
<p>The <a href="http://en.wikipedia.org/wiki/Fast-growing_hierarchy" rel="noreferrer">fast-growing hierarchy</a> (or Wainer hierarchy) has the property that a recursive function is $\sf{PA}$-provably total iff it is dominated by $f_\alpha$ for some $\alpha &lt; \varepsilon_0$. (Similarly with the <a href="http://en.wikipedia.org/wiki/Hardy_hierarchy" rel="noreferrer">Hardy hierarchy</a>.)</p> <p>For example, the <a href="http://en.wikipedia.org/wiki/Goodstein%27s_theorem#Sequence_length_as_a_function_of_the_starting_value" rel="noreferrer">Goodstein function</a> grows asymptotically with $f_{\varepsilon_0}$, and so is not $\sf{PA}$-provably total (this is essentially Cichon's and Caicedo's proofs of the Kirby-Paris result that Goodstein's theorem is not $\sf{PA}$-provable).</p>
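To make the Goodstein function concrete, here is a small Python sketch (my own, assuming the standard definition: write $m$ in hereditary base-$b$ notation, replace every $b$ by $b+1$, subtract $1$, repeat with $b+1$). Only tiny seeds terminate in a printable number of steps; already the sequence for $4$ runs for an astronomical number of steps, reflecting the growth rate discussed above.

```python
def bump(n, b):
    # rewrite n in hereditary base-b notation and replace every b by b + 1
    if n == 0:
        return 0
    result, e = 0, 0
    while n:
        n, d = divmod(n, b)
        if d:
            result += d * (b + 1) ** bump(e, b)  # exponents are bumped hereditarily
        e += 1
    return result

def goodstein(m):
    # Goodstein sequence starting at m (terminates by Goodstein's theorem)
    seq, b = [m], 2
    while m:
        m = bump(m, b) - 1
        b += 1
        seq.append(m)
    return seq

print(goodstein(3))  # [3, 3, 3, 2, 1, 0]
```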
4,646,773
<p>I started with an integral <span class="math-container">$ \int_{0}^{2\pi} \sqrt{2[\sin^2(t) + 16\cos^2(t) - 4\sin(t)\cos(t)]} \,dt $</span></p> <p>And I simplified it to <span class="math-container">$ \int_{0}^{2\pi} \sqrt{17 + 15\cos(2t) - 4\sin(2t)} \, dt$</span></p> <p>My question: I know this can be simplified with some sort of substitution that cancels the <span class="math-container">$\sin$</span> and <span class="math-container">$\cos$</span> with a <span class="math-container">$u$</span>-sub, but I do not know how. I saw it online, with no explanation (see the first answer: <a href="https://math.stackexchange.com/questions/200010/find-length-of-curve-of-intersection">find length of curve of intersection</a>).</p> <p>I think this has an exact elementary solution, if you use <span class="math-container">$\tan\left(\frac x2\right)$</span> substitution and possibly Feynman's trick if necessary.</p>
mathlove
78,967
<p>It is given by <span class="math-container">$$a_n=\begin{cases}(n+1)\cdot 2^{n-\left\lceil\sqrt{n-1}\right\rceil^2+\left\lceil\sqrt{n-1}\right\rceil-2}&amp;\text{if $\left\lceil\sqrt{n-1}\right\rceil\leqslant \left\lfloor\dfrac{1+\sqrt{4n-3}}{2}\right\rfloor$} \\\\(n+1)\cdot 2^{\left\lfloor\sqrt{n-1}\right\rfloor^2+\left\lfloor\sqrt{n-1}\right\rfloor-n}&amp;\text{otherwise}\end{cases}$$</span></p> <p><em>Proof</em> :</p> <ul> <li><p>Let us call <span class="math-container">$a_m,a_{m+1},\cdots, a_{m+j}$</span> &quot;increasing part&quot; if, for <span class="math-container">$i=m,m+1,\cdots, m+j-1$</span>, <span class="math-container">$a_{i+1}-a_i\gt 0$</span> hold. For example, the numbers in the &quot;increasing part&quot; of the second line are <span class="math-container">$a_2=3,a_4=5,a_8=9,a_{14}=15,\cdots$</span>. Then, we can say that the numbers in the &quot;increasing part&quot; of the <span class="math-container">$M$</span>-th line are <span class="math-container">$$a_{N^2-N+M}=(N^2-N+M+1)\cdot 2^{M-2}\qquad (N\geqslant M-1)\tag1$$</span> In the following, let us prove <span class="math-container">$(1)$</span> by induction on <span class="math-container">$M$</span>. The numbers in the first line are <span class="math-container">$a_1=1,a_3=2,a_7=4,a_{13}=7,\cdots$</span>. Let <span class="math-container">$b_1=1,b_2=3,b_3=7,b_4=13,\cdots$</span>. Since <span class="math-container">$b_{N+1}-b_N=2N$</span>, we have, for <span class="math-container">$N\geqslant 2$</span>, <span class="math-container">$b_N=1+\displaystyle\sum_{k=1}^{N-1}(2k)=N^2-N+1$</span> which holds for <span class="math-container">$N=1$</span>. Let <span class="math-container">$c_1=1,c_2=2,c_3=4,c_4=7,\cdots$</span>. Since <span class="math-container">$c_{N+1}-c_N=N$</span>, we have, for <span class="math-container">$N\geqslant 2$</span>, <span class="math-container">$c_N=1+\displaystyle\sum_{k=1}^{N-1}k=(N^2-N+2)/2$</span> which holds for <span class="math-container">$N=1$</span>. 
So, the numbers in the first line are <span class="math-container">$a_{N^2-N+1}=(N^2-N+2)/2$</span>, which means that, for <span class="math-container">$M=1$</span>, <span class="math-container">$(1)$</span> holds. Suppose that <span class="math-container">$(1)$</span> holds for <span class="math-container">$M$</span>. Then, since the difference of the numbers of the <span class="math-container">$M$</span>-th line is <span class="math-container">$2^{M-1}$</span>, we have <span class="math-container">$$\begin{align}a_{N^2-N+M+1}&amp;=a_{N^2-N+M}+(a_{N^2-N+M}+2^{M-1}) \\\\&amp;=2\times (N^2-N+M+1)\cdot 2^{M-2} +2^{M-1}\\\\&amp;=(N^2-N+M+2)\cdot 2^{M-1}\end{align}$$</span> as long as <span class="math-container">$N\geqslant M$</span>.</p> </li> <li><p>Let us call <span class="math-container">$a_m,a_{m+1},\cdots, a_{m+j}$</span> &quot;decreasing part&quot; if, for <span class="math-container">$i=m,m+1,\cdots, m+j-1$</span>, <span class="math-container">$a_{i+1}-a_i\lt 0$</span> hold. Also, let us call <span class="math-container">$a_2=3,a_5=12,a_{10}=44,a_{17}=144,\cdots$</span> &quot;top of triangles&quot;. For example, the numbers in the &quot;decreasing part&quot; of the second line (counted from the top of triangles) are <span class="math-container">$a_3=2,a_6=7,a_{11}=24,a_{18}=76,\cdots$</span>. Then, we can say that the numbers in the &quot;decreasing part&quot; of the <span class="math-container">$M$</span>-th line (counted from the top of triangles) are <span class="math-container">$$a_{N^2+M}=(N^2+M+1)\cdot 2^{N-M}\qquad (N\geqslant M-1)\tag2$$</span> In the following, let us prove <span class="math-container">$(2)$</span> by induction on <span class="math-container">$M$</span>. 
Substituting <span class="math-container">$M=N+1$</span> into <span class="math-container">$(1)$</span> gives <span class="math-container">$a_{N^2+1}=(N^2+2)\cdot 2^{N-1}$</span>, which means that, for <span class="math-container">$M=1$</span>, <span class="math-container">$(2)$</span> holds. Suppose that <span class="math-container">$(2)$</span> holds for <span class="math-container">$M$</span>. Then, since the difference of the numbers of the <span class="math-container">$(M+1)$</span>-th line (counted from the top of triangles) is <span class="math-container">$2^{N-M}$</span>, we have <span class="math-container">$$\begin{align}a_{N^2+M+1}&amp;=(a_{N^2+M}+2^{N-M})\div 2\\\\&amp;=\bigg((N^2+M+1)\cdot 2^{N-M}+2^{N-M}\bigg)\div 2 \\\\&amp;=(N^2+M+2)\cdot 2^{N-M-1}\end{align}$$</span> as long as <span class="math-container">$N\geqslant M$</span>.</p> </li> </ul> <p>Finally, let us try to represent <span class="math-container">$a_n$</span> by <span class="math-container">$n$</span> using <span class="math-container">$(1)(2)$</span>.</p> <ul> <li><p>For a given <span class="math-container">$n$</span>, solving <span class="math-container">$n=N^2-N+M$</span>, we get <span class="math-container">$M=n-N^2+N$</span>. Since <span class="math-container">$n-N^2+N-1=M-1\leqslant N$</span>, solving <span class="math-container">$N^2\geqslant n-1$</span> gives <span class="math-container">$N\geqslant\sqrt{n-1}$</span>. Since <span class="math-container">$1\leqslant M=n-N^2+N$</span>, solving <span class="math-container">$N^2-N+1-n\leqslant 0$</span> gives <span class="math-container">$N\leqslant \frac{1+\sqrt{4n-3}}{2}$</span>. 
Therefore, for a given <span class="math-container">$n$</span>, we have <span class="math-container">$M=n-N^2+N$</span> and <span class="math-container">$\sqrt{n-1}\leqslant N\leqslant \frac{1+\sqrt{4n-3}}{2}$</span>, so we can take <span class="math-container">$N=\left\lceil\sqrt{n-1}\right\rceil$</span> if there is such an <span class="math-container">$N$</span>.</p> </li> <li><p>For a given <span class="math-container">$n$</span>, solving <span class="math-container">$n=N^2+M$</span>, we get <span class="math-container">$M=n-N^2$</span>. Since <span class="math-container">$n-N^2-1=M-1\leqslant N$</span>, solving <span class="math-container">$N^2+N-n+1\geqslant 0$</span> gives <span class="math-container">$N\geqslant \frac{-1+\sqrt{4n-3}}{2}$</span>. Since <span class="math-container">$1\leqslant M=n-N^2$</span>, solving <span class="math-container">$N^2\leqslant n-1$</span> gives <span class="math-container">$N\leqslant \sqrt{n-1}$</span>. Therefore, for a given <span class="math-container">$n$</span>, we have <span class="math-container">$M=n-N^2$</span> and <span class="math-container">$\frac{-1+\sqrt{4n-3}}{2}\leqslant N\leqslant\sqrt{n-1}$</span>, so we can take <span class="math-container">$N=\left\lfloor\sqrt{n-1}\right\rfloor$</span> if there is such an <span class="math-container">$N$</span>.</p> </li> </ul> <p>In conclusion, we have <span class="math-container">$$a_n=\begin{cases}(n+1)\cdot 2^{n-\left\lceil\sqrt{n-1}\right\rceil^2+\left\lceil\sqrt{n-1}\right\rceil-2}&amp;\text{if $\left\lceil\sqrt{n-1}\right\rceil\leqslant \left\lfloor\dfrac{1+\sqrt{4n-3}}{2}\right\rfloor$} \\\\(n+1)\cdot 2^{\left\lfloor\sqrt{n-1}\right\rfloor^2+\left\lfloor\sqrt{n-1}\right\rfloor-n}&amp;\text{otherwise}.\quad\blacksquare\end{cases}$$</span></p> <hr /> <p><span class="math-container">$a_n$</span> can be written as <span class="math-container">$$a_n=(n+1)\cdot 
2^{n-\left\lceil\sqrt{n-1}\right\rceil^2+\left\lceil\sqrt{n-1}\right\rceil-2}\small\max\bigg(\min\bigg(\left\lfloor\dfrac{1+\sqrt{4n-3}}{2}\right\rfloor-\left\lceil\sqrt{n-1}\right\rceil+1,1\bigg),0\bigg)$$</span> <span class="math-container">$$+(n+1)\cdot 2^{\left\lfloor\sqrt{n-1}\right\rfloor^2+\left\lfloor\sqrt{n-1}\right\rfloor-n}\small\bigg(1-\max\bigg(\min\bigg(\left\lfloor\dfrac{1+\sqrt{4n-3}}{2}\right\rfloor-\left\lceil\sqrt{n-1}\right\rceil+1,1\bigg),0\bigg)\bigg)$$</span></p>
3,106,696
<p>I am confused about the notation used when writing down the solutions for x and y in quadratic equations. For example, in <span class="math-container">$x^2+2x-15=0$</span>, do I write:</p> <p><span class="math-container">$x=-5$</span> AND <span class="math-container">$x=3$</span></p> <p>or is it</p> <p><span class="math-container">$x=-5$</span> OR <span class="math-container">$x=3$</span></p> <p>Which is it, and why? I thought that because x can only equal one of the values when you substitute it in, it would be OR; however, a quadratic sometimes has 2 roots, so is it more correct to use AND? What about for the value of <span class="math-container">$y$</span>, is it the same?</p> <p>Thanks in advance</p>
fleablood
280,126
<p>Just do it.</p> <p><span class="math-container">$f(n+1) = (n+1)^2 + (n+1) + 1 = n^2 +2n + 1 + n + 1 + 1 = n^2 + n + 1 + 2n + 2 = f(n) + 2n + 2$</span>.</p> <p>So <span class="math-container">$\gcd(f(n+1), f(n)) = \gcd(f(n), f(n) + 2n + 2) = \gcd(f(n), 2n + 2)$</span>.</p> <p><span class="math-container">$f(n) = n^2 + n + 1 = n^2 + \frac {2n + 2} 2$</span>. Hmm.....</p> <p>Okay.... If <span class="math-container">$n$</span> is even <span class="math-container">$f(n) = n^2 + n +1$</span> is odd. If <span class="math-container">$n$</span> is odd, then <span class="math-container">$f(n) = n^2 + n + 1$</span> is odd. So <span class="math-container">$\gcd(f(n),2) =1$</span> so</p> <p><span class="math-container">$\gcd(f(n), 2n + 2) = \gcd(f(n), 2(n+1)) = \gcd(f(n), n+1)$</span>.</p> <p><span class="math-container">$f(n) = n^2 + n+ 1$</span> so </p> <p><span class="math-container">$\gcd(f(n), n+1) = \gcd(n^2 + n + 1, n+1) = \gcd(n^2, n + 1)$</span>.</p> <p>Let <span class="math-container">$p$</span> be a prime number. If <span class="math-container">$p|n^2$</span> then <span class="math-container">$p|n$</span> and <span class="math-container">$p \not \mid n+1$</span>. So <span class="math-container">$n^2, n+1$</span> have no prime factors in common.</p> <p>So <span class="math-container">$\gcd(n^2, n+1)=1$</span>.</p>
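Each reduction step in the chain above (and the final coprimality) can be spot-checked numerically; a throwaway sketch:

```python
from math import gcd

def f(n):
    return n * n + n + 1

for n in range(1, 5000):
    # the chain of gcd reductions from the argument above, ending in 1
    assert (gcd(f(n + 1), f(n))
            == gcd(f(n), 2 * n + 2)
            == gcd(f(n), n + 1)
            == gcd(n * n, n + 1)
            == 1)
print("consecutive values of f are coprime")
```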
2,943,892
<p>I have heard it said that completeness is a not a property of topological spaces, only a property of metric spaces (or topological groups), because Cauchy sequences require a metric to define them, and different metrics yield different sets of Cauchy sequence, even if the metrics induce the same topology. But why wouldn't the following definition of Cauchy sequence work?</p> <p>(*) A sequence <span class="math-container">$(x_n)$</span> in a topological space <span class="math-container">$(X,\tau)$</span> is Cauchy if there exists a nested sequence of open sets <span class="math-container">$(U_n)$</span> where each open set is a proper subset of the one before it and the intersection of all of them contains at most one element, such that for any natural number <span class="math-container">$m$</span>, there exists a natural number <span class="math-container">$N$</span> such that for all <span class="math-container">$n\geq N$</span>, we have <span class="math-container">$x_n\in U_m$</span>.</p> <p>My question is, why isn't this definition equivalent to being Cauchy in all metrics on <span class="math-container">$X$</span> whose topology is <span class="math-container">$\tau$</span>? Are there any metrics where being Cauchy is equivalent to (*)?</p>
Paul Frost
349,785
<p>You are interested in metrizable spaces <span class="math-container">$X$</span> and ask how the set <span class="math-container">$\mathfrak{C}(X)$</span> of sequences <span class="math-container">$(x_n)$</span> in <span class="math-container">$X$</span> which are Cauchy with respect to <strong>all</strong> metrics on <span class="math-container">$X$</span> which induce the given topology <span class="math-container">$\tau$</span> on <span class="math-container">$X$</span> can be characterized. I guess you want to find a simple property expressed directly via <span class="math-container">$\tau$</span>.</p> <p>As you mentioned in <a href="https://math.stackexchange.com/q/2944012">What sequences are Cauchy in all metrics for a given topology?</a> <span class="math-container">$\mathfrak{C}(X)$</span> contains all convergent sequences, and if <span class="math-container">$X$</span> is completely metrizable, then <span class="math-container">$\mathfrak{C}(X)$</span> does not contain any other.</p> <p>As pointed out by the comments and Alex Kruckman's answer to your present question, your definition (*) does not work to characterize <span class="math-container">$\mathfrak{C}(X)$</span>.</p> <p>(*) contains two cases:</p> <p>(a) <span class="math-container">$\bigcap_n U_n = \emptyset$</span>.</p> <p>(b) <span class="math-container">$\bigcap_n U_n = \{ x \}$</span>.</p> <p>Case (a) describes non-convergent sequences, but the known examples show that such sequences may be Cauchy with respect to one metric, and non-Cauchy with respect to another metric.</p> <p>Case (b) covers the sequences which converge to <span class="math-container">$x$</span>, although there may also be non-convergent sequences satisfying (b) (see Alex Kruckman's comment). This is excluded only if the <span class="math-container">$U_n$</span> form a neighborhood base of <span class="math-container">$x$</span>. 
However, if you take any neighborhood base <span class="math-container">$U_n$</span> of <span class="math-container">$x$</span> and any sequence <span class="math-container">$V_n$</span> satisfying (a), then <span class="math-container">$W_n = U_n \cup V_n$</span> again satisfies (b), which shows you the problem.</p>
1,581,456
<p>Given functions $g,h: A \rightarrow B$ and a set C that contains at least two elements, with $f \circ g = f \circ h$ for all $f:B \rightarrow C$. Prove that $g = h$. </p> <p>My logic is to take C = B and h(x) =x for all x in particular and the result follows immediately. But, I don't see the use of the condition on C. Please somebody help.</p>
bilaterus
287,278
<p>First we observe that:</p> <p>$$\sum_{k=1}^{2n-1} {k\over {1-z^k}} - \sum_{k=1}^{n-1} {2k\over {1-z^{2k}}} = \sum_{k=0}^{n-1} {2k+1 \over {1-z^{2k+1}}} $$</p> <p>(subtracting the even indices $k=2,4,\dots,2n-2$ leaves the odd indices $1,3,\dots,2n-1$, i.e. $2k+1$ for $k=0,\dots,n-1$).</p> <p>Now, by partial fractions,</p> <p>$${2k \over {1-z^{2k}}} = {k\over {1-z^k}} + {k\over {1+z^k}} $$ Hence: </p> <p>$$\sum_{k=1}^{2n-1} {k\over {1-z^k}} - \bigg( \sum_{k=1}^{n-1} {k\over {1-z^{k}}} + \sum_{k=1}^{n-1} {k\over {1+z^{k}}}\bigg) = \sum_{k=0}^{n-1} {2k+1 \over {1-z^{2k+1}}} $$ Now, by combining the first two sums:</p> <p>$$\sum_{k=n}^{2n-1} {k\over {1-z^k}} - \sum_{k=1}^{n-1} {k\over {1+z^{k}}} = \sum_{k=0}^{n-1} {2k+1 \over {1-z^{2k+1}}} $$</p> <p>Now let's re-write the first sum:</p> <p>$$\sum_{k=0}^{n-1} {k+n\over {1-z^{k+n}}} - \sum_{k=0}^{n-1} {k\over {1+z^{k}}} = \sum_{k=0}^{n-1} {2k+1 \over {1-z^{2k+1}}} $$</p> <p>Now let's use the fact that $z^n=-1$, and combine the two remaining sums:</p> <p>$$\sum_{k=0}^{n-1} {n\over {1+z^{k}}} = \sum_{k=0}^{n-1} {2k+1 \over {1-z^{2k+1}}} $$</p> <p>As required.</p>
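A quick numerical sanity check of the final identity, with both sums taken over $k=0,\dots,n-1$ (the $k=0$ term on the right accounts for the odd index $1$), at $z=e^{i\pi/n}$, which satisfies $z^n=-1$:

```python
import cmath

for n in (2, 3, 5, 8):
    z = cmath.exp(1j * cmath.pi / n)   # a root of z**n == -1
    lhs = sum(n / (1 + z**k) for k in range(n))
    rhs = sum((2 * k + 1) / (1 - z**(2 * k + 1)) for k in range(n))
    assert abs(lhs - rhs) < 1e-9
print("identity verified for n = 2, 3, 5, 8")
```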
211,175
<p>In Gradshteyn and Ryzhik, (specifically starting with the section 3.13) there are several results involving integrals of polynomials inside square root. These are given in terms of combinations of elliptic integrals. See for instance: <a href="https://i.stack.imgur.com/6Cqyb.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6Cqyb.jpg" alt="Gradshteyn and Ryzhik excerpt"></a></p> <p>where <span class="math-container">$F[\alpha, p]$</span> is the elliptic integral of first kind. I tried to reproduce the first result above in Mathematica (version 12) but failed. I would appreciate if anyone could point out what I am doing wrong. My first attempt is</p> <pre><code>Integrate[1/Sqrt[(a - x) (b - x) (c - x)], {x, -Infinity, u}, Assumptions -&gt; { a &gt; b &gt; c &gt;= u}] </code></pre> <p>which returned no result:</p> <p><a href="https://i.stack.imgur.com/Wdama.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Wdama.jpg" alt="unevaluated"></a></p> <p>Then I tried without integration limits, and take the limits after</p> <pre><code>Integrate[1/Sqrt[(a - x) (b - x) (c - x)], x, Assumptions -&gt; { a &gt; b &gt; c}] </code></pre> <p>giving:</p> <p><a href="https://i.stack.imgur.com/0Ci4P.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0Ci4P.jpg" alt="complicated result"></a></p> <p>taking now the upper limit and simplifying </p> <pre><code>Simplify[Limit[(2 (a - x)^(3/2) Sqrt[(b - x)/(a - x)] Sqrt[(c - x)/(a - x)] EllipticF[ArcSin[Sqrt[a - b]/Sqrt[a - x]], (a - c)/(a - b)])/(Sqrt[a - b] Sqrt[(a - x) (-b + x) (-c + x)]), x -&gt; u], Assumptions -&gt; { a &gt; b &gt; c &gt;= u}] </code></pre> <p>giving:</p> <p><a href="https://i.stack.imgur.com/uqFwh.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uqFwh.jpg" alt="another result"></a></p> <p>whereas the integral vanishes when <span class="math-container">$x\rightarrow -\infty$</span>. 
Clearly, the above result given by Mathematica differs from Gradshteyn and Ryzhik's. The two results match if the substitution <span class="math-container">$b \rightarrow c$</span>, <span class="math-container">$c \rightarrow b$</span> is made, but this would then be at odds with the condition <span class="math-container">$a &gt; b &gt; c$</span>. </p>
TheDoctor
2,263
<p>You may find DLMF <a href="https://dlmf.nist.gov/19" rel="nofollow noreferrer">Chapter 19</a> on Elliptic Integrals, authored by B. C. Carlson interesting and helpful. Carlson introduced <em>symmetric</em> standard integrals that simplify many aspects of theory, applications, and numerical computation of Elliptic Integrals.</p> <p>And there is also the <a href="https://github.com/tpfto/Carlson" rel="nofollow noreferrer">Carlson</a> package implementing the symmetric elliptic integrals.</p>
184,772
<p>Is there any way to write a code that has a <strong>function</strong> include <strong>Block[ ]</strong> and <strong>Do[ ]</strong> loop instead of my code?</p> <p>Here is my code:</p> <pre><code>(* m = Maximum members of "list" *) list = {{12, 9, 10, 5}, {3, 7, 18, 6}, {1, 2, 3, 3}, {4, 5, 6, 2}, {1, 13, 1, 1}}; m = {}; Do[ AppendTo[m, Max[list[[All, i]]]]; , {i, 1, Length[list[[1]]]}]; m (*{12,13,18,6}*) </code></pre>
kglr
125
<pre><code>ClearAll[f] f[l_List] := Block[{m={}, i=1}, Do[AppendTo[m, Max[l[[All, i++]]]], {Length @ l[[1]]}]; m] f[list] </code></pre> <blockquote> <p>{12, 13, 18, 6}</p> </blockquote>
3,079,929
<p>Find all positive triples of positive integers a, b, c so that <span class="math-container">$\frac {a+1}{b}$</span> , <span class="math-container">$\frac {b+1}{c}$</span>, <span class="math-container">$\frac {c+1}{a}$</span> are also integers. </p> <p>WLOG, let a<span class="math-container">$\leqq b\leqq c$</span>, </p>
Ross Millikan
1,827
<p>Hint: given <span class="math-container">$a \le b$</span> and <span class="math-container">$\frac {a+1}b$</span> is an integer, you must have <span class="math-container">$b=a+1$</span> or <span class="math-container">$a=b=1$</span>.</p>
3,079,929
<p>Find all positive triples of positive integers a, b, c so that <span class="math-container">$\frac {a+1}{b}$</span> , <span class="math-container">$\frac {b+1}{c}$</span>, <span class="math-container">$\frac {c+1}{a}$</span> are also integers. </p> <p>WLOG, let a<span class="math-container">$\leqq b\leqq c$</span>, </p>
fleablood
280,126
<p>If <span class="math-container">$a \le b$</span> but <span class="math-container">$b|a+1$</span> then <span class="math-container">$b \le a+1$</span> so either <span class="math-container">$a = b$</span> or <span class="math-container">$b=a+1$</span>. </p> <p>If <span class="math-container">$a = b$</span> then <span class="math-container">$b|b+1$</span> and <span class="math-container">$a=b=1$</span>. The you <span class="math-container">$c|b+1=2$</span> and <span class="math-container">$b|c+1$</span>. So either <span class="math-container">$c=1$</span> or <span class="math-container">$c = 2$</span>.</p> <p>So so far we have <span class="math-container">$(1,1,1)$</span> or <span class="math-container">$(1,1,2)$</span></p> <p>If <span class="math-container">$b = a+1$</span> then <span class="math-container">$c|a+2$</span> so <span class="math-container">$c \le a+2$</span> but <span class="math-container">$a &lt; b=a+1 \le c\le a+2 $</span> so either <span class="math-container">$c = b = a+1$</span> or <span class="math-container">$c= a+2$</span>.</p> <p>If <span class="math-container">$c= b =a+1$</span> then we have <span class="math-container">$b|a+1 = b$</span> and <span class="math-container">$c=b|b+1$</span> and <span class="math-container">$a=b-1|c+1=b+1$</span>. So <span class="math-container">$b=1$</span> but then <span class="math-container">$a=0$</span> and that's a contradiction.</p> <p>If <span class="math-container">$c=a+2$</span> and <span class="math-container">$b= a+1$</span> we have: <span class="math-container">$b|a+1=b$</span>; <span class="math-container">$c|b+1 =c$</span> and <span class="math-container">$a|c+1 = a+3$</span>. 
This means <span class="math-container">$a|3$</span> and <span class="math-container">$a =1$</span> or <span class="math-container">$a =3$</span>.</p> <p>So we have <span class="math-container">$(1,2,3)$</span> or <span class="math-container">$(3,4,5)$</span>.</p> <p>To double check:</p> <p><span class="math-container">$(a,b,c) = (1,1,1)\implies $</span> <span class="math-container">$b|a+1, c|b+1, a|c+1$</span> all translate to <span class="math-container">$1|2$</span>. which is true</p> <p><span class="math-container">$(a,b,c) = (1,1,2) \implies b=1|a+1=2; c=2|b+1=2; a=1|c+1=3$</span>. All true.</p> <p><span class="math-container">$(a,b,c) = (1,2,3)\implies b=2|a+1=2; c=3|b+1=3; a=1|c+1 = 4$</span>. All true.</p> <p><span class="math-container">$(a,b,c) = (3,4,5)\implies b=4|a+1=4; c=5|b+1=5; a=3|c+1=6$</span>. All true. </p>
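The case analysis can be cross-checked by an exhaustive search (a quick sketch; the bound 60 is arbitrary but ample, since the analysis above caps all solutions):

```python
base = {(1, 1, 1), (1, 1, 2), (1, 2, 3), (3, 4, 5)}
# the conditions are cyclically symmetric, so every solution should be
# a cyclic rotation of one of the four triples found above
expected = {t[i:] + t[:i] for t in base for i in range(3)}

found = {
    (a, b, c)
    for a in range(1, 60)
    for b in range(1, 60)
    for c in range(1, 60)
    if (a + 1) % b == 0 and (b + 1) % c == 0 and (c + 1) % a == 0
}
assert found == expected
print(sorted(found))
```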
3,245,428
<p>Is it true that every tame knot has at least an alternating diagram?</p> <p>If yes, is it true that we can always obtain an alternating diagram by a finite number of Reidemeister moves from a diagram of a knot? </p> <p>If yes, how can we do it?</p> <p>I am reading GTM Introduction to Knot Theory and find they sort of assume this, which makes me think it should be evident but I cannot figure out.</p>
auscrypt
675,509
<p>The former is true and is essentially a "checkerboard colouring" argument -- colour all the regions black and white so that no two regions of the same colour share a border ((ab)use the Jordan curve theorem if you want to prove this). Then as you travel towards a crossing, if there's black on your right as you approach, go under; otherwise, go over.</p> <p>The second is false according to <a href="http://mathworld.wolfram.com/AlternatingKnot.html" rel="nofollow noreferrer">MathWorld</a>; this is a proof by counterexample (since that particular knot is non-alternating, no sequence of Reidemeister moves can produce an alternating diagram: Reidemeister moves only yield other diagrams of the same knot, and an alternating diagram would make the knot alternating).</p>
3,166,947
<p>I want to transform the following <span class="math-container">$$\prod_{k=0}^{n} (1+x^{2^{k}})$$</span> to the canonical form <span class="math-container">$\sum_{k=0}^{n} c_{k}x^{k}$</span></p> <p>This is what I got so far <span class="math-container">\begin{align*} \prod_{k=0}^{n} (1+x^{2^{k}})= \dfrac{x^{2^{n}}-1}{x-1} (x^{2^{n}}+1) \\ \end{align*}</span> but I don't know how to continue, can someone help me with this?</p>
Community
-1
<p>Looks good. Continuing,</p> <p><span class="math-container">\begin{align} \prod_{k=0}^{n-1}\left(1+x^{2^{k}}\right)&amp;=\prod_{k=0}^{n-1}\frac{1-x^{2^{k+1}}}{1-x^{2^{k}}}\\ &amp;=\frac{1}{1-x}\prod_{k=0}^{n-1}\left(1-x^{2^{k+1}}\right)\prod_{k=0}^{n-2}\left(1-x^{2^{k+1}}\right)^{-1}\\ &amp;={\frac{1-x^{2^{n}}}{1-x}}. \end{align}</span></p> <p>Replacing <span class="math-container">$n$</span> by <span class="math-container">$n+1$</span> gives the product up to <span class="math-container">$k=n$</span> from the question: <span class="math-container">$\prod_{k=0}^{n}(1+x^{2^{k}})=\frac{1-x^{2^{n+1}}}{1-x}=\sum_{k=0}^{2^{n+1}-1}x^{k}$</span>.</p>
3,166,947
<p>I want to transform the following <span class="math-container">$$\prod_{k=0}^{n} (1+x^{2^{k}})$$</span> to the canonical form <span class="math-container">$\sum_{k=0}^{n} c_{k}x^{k}$</span></p> <p>This is what I got so far <span class="math-container">\begin{align*} \prod_{k=0}^{n} (1+x^{2^{k}})= \dfrac{x^{2^{n}}-1}{x-1} (x^{2^{n}}+1) \\ \end{align*}</span> but I don't know how to continue, can someone help me with this?</p>
trancelocation
467,003
<p>You may also approach this from a combinatorial point of view:</p> <ul> <li>Expanding would give a sum of <span class="math-container">$2^{n+1}$</span> summands.</li> <li>Each summand is a product of powers of <span class="math-container">$x$</span> where the exponent corresponds to a uniquely determined sequence <span class="math-container">$(b_0,b_1,\ldots , b_n)$</span> of binary digits <span class="math-container">$b_i \in \{0,1\}$</span> for <span class="math-container">$i=0,\ldots ,n$</span> where <span class="math-container">$0$</span> means choose <span class="math-container">$1 = x^0$</span> and <span class="math-container">$1$</span> means choose <span class="math-container">$x^{2^i}$</span> from the factor <span class="math-container">$(1+x^{2^i})$</span></li> </ul> <p>Hence,</p> <p><span class="math-container">$$\prod_{k=0}^{n} (1+x^{2^{k}}) = \sum_{(b_0,b_1,\ldots , b_n) \in \{0,1\}^{n+1}}x^{\sum_{i=0}^nb_i\cdot 2^i}= \sum_{k=0}^{2^{n+1}-1}x^k$$</span></p>
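Both derivations predict that every coefficient of the expanded product is $1$; a quick pure-Python coefficient check (my own sketch):

```python
def poly_mul(p, q):
    # multiply two polynomials given as coefficient lists (index = exponent)
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

n = 4
poly = [1]
for k in range(n + 1):
    factor = [0] * (2**k + 1)
    factor[0] = factor[2**k] = 1   # coefficients of 1 + x^(2^k)
    poly = poly_mul(poly, factor)

print(poly == [1] * 2**(n + 1))  # True: all 2^(n+1) coefficients are 1
```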
3,617,600
<p>I am trying to understand the proof of the First and Second Variation of Arclength formulas for Riemannian Manifolds. I want some verification that the following covariant derivatives commute. I find it intuitive but I want to also have a formal proof.</p> <p>Some notation: Let <span class="math-container">$\gamma(t,s):\overbrace{[a,b]}^{t}\times \overbrace{(-\epsilon,\epsilon)}^{s}\rightarrow M$</span> be a variation of <span class="math-container">$\gamma_0(t)$</span> with <span class="math-container">$|\dot{\gamma}_{0}|=\lambda \ \forall t \in [a,b] $</span> and <span class="math-container">$V=\gamma_{*}\left(\frac{\partial}{\partial s}\right)$</span> the variational vector field.</p> <p>Then I would like to prove that <span class="math-container">$\frac{D}{ds}(\dot{\gamma_t})=\frac{D}{dt}V$</span> or in other words that <span class="math-container">$\frac{D}{ds},\frac{D}{dt}$</span> commute. Note that <span class="math-container">$\frac{D}{ds}$</span> is the covariant derivative along the map <span class="math-container">$\gamma$</span> therefore we will use the property for connections along maps that <span class="math-container">$\nabla^{\gamma}_X(Z\circ \gamma(p))=\nabla_{\gamma_*(X)}Z|_p$</span>.</p> <p>Indeed <span class="math-container">$\frac{D}{ds}\left( \dot{\gamma_t}\right)=\frac{D}{ds}[\gamma_*(\frac{d}{dt})\circ(\gamma(s,t))]=D_{\gamma_*\left(\frac{d}{dt}\right)}\gamma_*(\frac{d}{ds})=D_{\gamma_*\left(\frac{d}{ds}\right)}\gamma_*(\frac{d}{dt})$</span>.</p> <p>The last equality follows since <span class="math-container">$D_{\gamma_*\left(\frac{d}{dt}\right)}\gamma_*(\frac{d}{ds})-D_{\gamma_*\left(\frac{d}{ds}\right)}\gamma_*(\frac{d}{dt})=\left[\gamma_*(\frac{d}{dt}),\gamma_*(\frac{d}{ds}) \right]=\gamma_*([\frac{d}{ds},\frac{d}{dt}])=0$</span> since <span class="math-container">$\frac{d}{ds},\frac{d}{dt}$</span> are coordinate vector fields.</p>
HK Lee
37,116
<p>Recall : <span class="math-container">$$(1) \ Xf = \frac{d}{dt}\bigg|_{t=0}\ (f\circ c)(t)$$</span> where <span class="math-container">$c$</span> is a curve with <span class="math-container">$ c'(0)=X$</span>.</p> <p><span class="math-container">$$ (2) \ [X,Y] f = X(Yf)-Y(Xf) $$</span> </p> <p>Hence if <span class="math-container">$T(s,t)$</span> is a variation, then <span class="math-container">$X=T_t,\ Y=T_s$</span> are coordinate fields. Here <span class="math-container">$p=T(s,t)\in M$</span> and <span class="math-container">$ (Xf)(p=T(s,t)) $</span> is a function on <span class="math-container">$M$</span>.</p> <p><span class="math-container">$c(\varepsilon) = T(s+ \varepsilon,t)$</span> is a curve s.t. <span class="math-container">$c'(0)=T_s= Y(p)$</span></p> <p>Hence <span class="math-container">$$ Y(Xf) = \frac{d}{d\varepsilon}\ (Xf)(q=T(s+\varepsilon,t)) $$</span></p> <p><span class="math-container">$d(\epsilon)=T(s+\varepsilon,t+\epsilon )$</span> is a curve with <span class="math-container">$d'(0)=X(q)$</span> so that <span class="math-container">$$ Y(Xf) = \frac{d}{d\varepsilon } \frac{d}{d\epsilon } f(T(s+\varepsilon,t+\epsilon )) $$</span></p> <p>Hence, writing <span class="math-container">$F=f\circ T$</span>, we get <span class="math-container">$X(Yf)=F_{ts}$</span> and <span class="math-container">$Y(Xf)=F_{st}$</span>. Since we already have <span class="math-container">$F_{st}=F_{ts}$</span>, the proof follows.</p>
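The punchline $F_{st}=F_{ts}$ is just the symmetry of second mixed partials (Schwarz's theorem); a quick finite-difference illustration for a sample $F$ (my own sketch, not part of the original answer):

```python
import math

def F(s, t):
    # a sample smooth function of two variables
    return math.sin(s * t) + s**2 * t

# differentiating by hand in the two orders:
#   d/ds (dF/dt) = d/ds (s*cos(s*t) + s**2)  = cos(s*t) - s*t*sin(s*t) + 2*s
#   d/dt (dF/ds) = d/dt (t*cos(s*t) + 2*s*t) = cos(s*t) - s*t*sin(s*t) + 2*s
# -- identical, as Schwarz's theorem promises.

def mixed(F, s, t, h=1e-4):
    # central cross-difference approximation to the mixed partial derivative
    return (F(s + h, t + h) - F(s + h, t - h)
            - F(s - h, t + h) + F(s - h, t - h)) / (4 * h * h)

s0, t0 = 0.7, 1.3
exact = math.cos(s0 * t0) - s0 * t0 * math.sin(s0 * t0) + 2 * s0
assert abs(mixed(F, s0, t0) - exact) < 1e-5
```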
586,112
<p>Consider following statment $\{\{\varnothing\}\} \subset \{\{\varnothing\},\{\varnothing\}\}$</p> <p>I think above statement is false as $\{\{\varnothing\}\}$ is subset of $\{\{\varnothing\},\{\varnothing\}\}$ but to be proper subset there must be some element in $\{\{\varnothing\},\{\varnothing\}\}$ which is not in $\{\{\varnothing\}\}$.As this is not the case here so it is false.</p> <p>Is my explanation and answer right or not?</p>
paul garrett
12,291
<p>"Indefinite integral" and "anti-derivative(s)" are the same thing, and are the same as "primitive(s)".</p> <p>(Integrals with one or more limits "infinity" are "improper".)</p> <p>Added: and, of course, usage varies. That is, it is possible to find examples of incompatible uses. And, quite seriously, $F(b)=\int_a^b f(t)\,dt$ is different from $F(x)=\int_a^x f(t)\,dt$ in what fundamental way? And from $\int_0^x f(t)\,dt$? And from the same expression when $f$ may not be as nice as we'd want?</p> <p>I have no objection if people want to name these things differently, and/or insist that they are somewhat different, but I do not see them as fundamentally different.</p> <p>So, the real point is just to be aware of the usage in whatever source... </p> <p>(No, I'd not like to be in a classroom situation where grades hinged delicately on such supposed distinctions.)</p>
586,112
<p>Consider the following statement: $\{\{\varnothing\}\} \subset \{\{\varnothing\},\{\varnothing\}\}$</p> <p>I think the above statement is false: $\{\{\varnothing\}\}$ is a subset of $\{\{\varnothing\},\{\varnothing\}\}$, but to be a proper subset there must be some element of $\{\{\varnothing\},\{\varnothing\}\}$ which is not in $\{\{\varnothing\}\}$. As this is not the case here, the statement is false.</p> <p>Are my explanation and answer right or not?</p>
Hagen von Eitzen
39,174
<p>An anti-derivative of a function $f$ is a function $F$ such that $F'=f$.</p> <p>The indefinite integral $\int f(x)\,\mathrm dx$ of $f$ (that is, a function $F$ such that $\int_a^bf(x)\,\mathrm dx=F(b)-F(a)$ for all $a&lt;b$) is an antiderivative if $f$ is continuous, but need not be an antiderivative in the general case.</p>
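A small numerical sketch of the difference (the example $f(t)=\operatorname{sign}(t)$ is my own choice, not from the answer): here the indefinite integral is $F(x)=|x|$, which has a corner at $0$ and so is not an antiderivative of $f$ on all of $\mathbb R$.

```python
import numpy as np

# f(t) = sign(t): integrable, but its indefinite integral F(x) = |x|
# is not differentiable at 0, hence not an antiderivative on all of R.
def F(x, steps=2000):
    ts = np.linspace(0.0, x, steps + 1)
    ys = np.sign(ts)
    # trapezoid rule for the integral of f from 0 to x
    return float(np.sum((ys[1:] + ys[:-1]) / 2.0 * np.diff(ts)))

xs = np.linspace(-1.0, 1.0, 201)
print(max(abs(F(x) - abs(x)) for x in xs) < 1e-3)   # True: F agrees with |x|

# The one-sided difference quotients at 0 disagree, so F'(0) does not exist.
print(round((F(1e-4) - F(0.0)) / 1e-4),
      round((F(-1e-4) - F(0.0)) / -1e-4))           # 1 -1
```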
586,112
<p>Consider following statment $\{\{\varnothing\}\} \subset \{\{\varnothing\},\{\varnothing\}\}$</p> <p>I think above statement is false as $\{\{\varnothing\}\}$ is subset of $\{\{\varnothing\},\{\varnothing\}\}$ but to be proper subset there must be some element in $\{\{\varnothing\},\{\varnothing\}\}$ which is not in $\{\{\varnothing\}\}$.As this is not the case here so it is false.</p> <p>Is my explanation and answer right or not?</p>
Community
-1
<p>(<a href="http://math.mit.edu/suppnotes/suppnotes01-01a/01ft.pdf" rel="nofollow noreferrer">http://math.mit.edu/suppnotes/suppnotes01-01a/01ft.pdf</a>) p 1 sur 7 </p> <p>Antiderivative is an indefinite integral. </p> <p><img src="https://i.stack.imgur.com/JlzdN.png" alt="enter image description here"></p>
1,981,948
<p>Is there a relationship between group order and element order? </p> <p>I know that there is a relationship between group order and subgroup order, which is that $[G:H] = \frac{|G|}{|H|}$ where $H$ is the subgroup of $G$ and $[G:H]$ is the index of $H$ in $G$. But is there a relationship between group order and the order of elements in the group?</p> <p>For example, let the group $G$ be of order $7^{3}$. Does $G$ have an element of order $7$?</p>
Stefan4024
67,746
<p>Note that by definition the order of an element is the order of the group generated by it, i.e. we have $|a| = |\langle a \rangle|$. Now obviously $\langle a \rangle \le G$, so we can relate the order of an element to the order of the corresponding subgroup.</p> <p>By Lagrange's Theorem this proves that if $a \in G$, then $|a| \mid |G|$. Conversely, by the Sylow Theorems, if a prime power $p^n$ divides the order of $G$, then there is a subgroup of order $p^n$ in $G$. In particular, a group of order $7^3$ has a subgroup of order $7$, whose non-identity elements have order $7$ (this also follows from Cauchy's Theorem). On the other hand, even if a composite number divides the order of the group $G$, the existence of a subgroup of that order cannot be guaranteed.</p>
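As a concrete check of $|a| \mid |G|$ (my own toy example, taking $G$ to be the cyclic group $\mathbb Z/343\mathbb Z$, one group of order $7^3$):

```python
from math import gcd

n = 7**3  # |G| = 343, e.g. G = Z/343Z under addition
# In Z/nZ, the element k generates a subgroup of order n/gcd(n, k).
orders = sorted({n // gcd(n, k) for k in range(n)})
print(orders)  # [1, 7, 49, 343]: every element order divides |G|
```

In particular an element of order $7$ exists, as Cauchy's theorem predicts for any group whose order is divisible by $7$.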
2,878,412
<p>I've been working on a problem that involves discovering valid methods of expressing natural numbers as Roman Numerals, and I came across a few oddities in the numbering system.</p> <p>For example, the number 5 could be most succinctly expressed as $\texttt{V}$, but as per the rules I've seen online, could also be expressed as $\texttt{IVI}$. </p> <p>Are there any rules that bar the second expression from being valid? Or are the rules for Roman numerals such that multiple valid expressions express the same number? </p> <h1>Edit</h1> <p>A sample set of rules I've seen online:</p> <ol> <li>Only one I, X, and C can be used as the leading numeral in part of a subtractive pair.</li> <li>I can only be placed before V or X in a subtractive pair.</li> <li>X can only be placed before L or C in a subtractive pair.</li> <li>C can only be placed before D or M in a subtractive pair.</li> <li>Other than subtractive pairs, numerals must be in descending order (meaning that if you drop the first term of each subtractive pair, then the numerals will be in descending order).</li> <li>M, C, and X cannot be equalled or exceeded by smaller denominations.</li> <li>D, L, and V can each only appear once.</li> <li>Only M can be repeated 4 or more times.</li> </ol>
mau
89
<p>Please keep in mind that strict rules for Roman numerals were established only in recent times. Besides having different signs for the digits, the subtractive system was not fixed. In an inscription found in Forum Popilii and dated between 172 BC and 158 BC, 84 is written as XXCIIII (quoted from Georges Ifrah, The Universal History of Numbers: I have the Italian edition, so I cannot provide the page number for the English edition).</p> <p>As for your question, the subtractive and additive rules cannot be applied to the same numeral (V, in your case). IIII and XXXX were widely used too: the subtractive rule was not mandatory, but rather used as a shortcut.</p>
1,360,835
<p>Reading "A First Look at Rigorous Probability Theory", and in the definition of outer measure of a set A, we take the infimum over the measure of covering sets for A from the semi-algebra (e.g., intervals in [0,1] ).</p> <p>Is this set over which we are taking the infimum well-defined? For a given real number x, how can I tell if x is in this set? I have to find a class of sets A1, A2, ... from the semi-algebra such that x = P(A1)+P(A2)+... </p> <p>It is not clear how to find at least one such class (and hence determine if x is in this set), or determine that I <em>cannot</em> find such a class and hence x is <em>not</em> in the set. Any clarification is appreciated.</p>
mweiss
124,095
<blockquote> <p>I have a function f(x) and I want to prove that x∗>y where x∗ is the number that satisfies f′(x∗)=0 and y is just an arbitrary constant. So what I did is that I assume x∗>y and show second derivative f"(x∗)|x∗>y&lt;0, then I show that f′(y)>0, representing that x∗>y. Thus the first assumption is true and there proof is correct. Is it correct way to do? </p> </blockquote> <p>Let's take this slowly. Before you try to prove something, you need to know what it is you are trying to prove, and it seems from how you have worded the question that you don't fully understand the goal of the problem.</p> <blockquote> <p>I have a function f(x)</p> </blockquote> <p>Are you thinking about a <em>specific</em> function $f(x)$ -- for example, $f(x)=x^2$ -- or are you trying to prove something that is true about <em>every</em> function $f(x)$ that has certain properties? If you are trying to prove something about a <em>specific</em> function, it may be that the proof will depend on the particulars of what the function is, and without telling us what the function is, we may not be able to help you. On the other hand if you are trying to prove something about <em>any</em> function with certain properties, we should at least know what those properties are.</p> <blockquote> <p>I want to prove that x∗>y where x∗ is the number that satisfies f′(x∗)=0 and y is just an arbitrary constant.</p> </blockquote> <p>What do you mean by "an arbitrary constant"? Taken literally, you seem to be saying that you want to prove that no matter what value of $y$ is chosen, $x^*$ will be greater than $y$.</p> <p>Before trying to prove that, think about what it says. It says that $x^*$ is greater than <em>every real number</em>. That's not possible -- no number is greater than every real number.</p> <p>What I think you mean to be saying is: "I am looking at a problem that has some specific constant in it, and I want to prove that some other value is greater than that constant." 
In order to prove that, we would need to know something about the constant you are interested in.</p> <p>Let's take a specific example: Suppose $f(x)=x^2-6x+9$. Let's choose the "arbitrary constant" to be $y=5$. Now you can verify for yourself that the only value $x^*$ for which $f'(x^*)=0$ is $x^*=3$. Then the claim that $x^*&gt;y$ is evidently false, because $3$ is not greater than $5$.</p> <p>So let's assume that you have a specific function in mind, and a specific constant in mind, and let's suppose that the thing you are trying to prove is actually true, and you are just not telling us the particulars of the problem because you are looking for some kind of general strategies rather than a solution to <em>this</em> problem. Let's take a look at the argument you want to make:</p> <blockquote> <p>I assume x∗>y</p> </blockquote> <p>As others have pointed out, assuming the thing you are trying to prove is a bad idea. Not just because it's "poor form", but because the moment you make the assumption $x^*&gt;y$, anything else that follows has a big "IF" in front of it: The most you will be able to prove is that "<strong>IF</strong> $x^* &gt; y$ then $x^* &gt; y$". That's not a particularly useful or meaningful conclusion -- it's like saying "If I eat grilled cheese, then I eat grilled cheese." It may be true, but it doesn't actually answer the question of what I will have for lunch today.</p> <blockquote> <p>and show second derivative f"(x∗)|x∗>y&lt;0</p> </blockquote> <p>What does the vertical line mean in this context?</p> <p>What does $x^∗&gt;y&lt;0$ mean? Stringing inequalities together is meaningful if they point in the same direction, because $ &gt; $ and $ &lt; $ are transitive: this means that if $a &lt; b$ and $b &lt; c$ then $a &lt; c$ as well, so you can write $ a &lt; b &lt; c$ as a shorthand for "$a &lt; b$ and $b &lt; c$". 
But if the inequalities point in different directions (like $a &lt; b &gt; c$ or $a &gt; b &lt; c$) then it isn't clear what the expression is supposed to mean.</p> <p>Also, why does it matter that $y&lt;0$? Does this have anything to do with the problem?</p> <blockquote> <p>then I show that f′(y)>0, representing that x∗>y</p> </blockquote> <p>Assuming you have correctly shown that $f'(y)&gt;0$, why does that mean that $x^* &gt; y$ must be true as well?</p> <blockquote> <p>Thus the first assumption is true and there proof is correct.</p> </blockquote> <p>As noted above (and this really cannot be emphasized strongly enough) if you make an assumption and then do a lot of work and then end up deriving the same thing you assumed at the top, then all you have done is prove "If the assumption is true, then the assumption is true." That does not resolve the question of whether the assumption <em>is</em> true.</p> <p>To sum up: This line of argument has a lot of problems with it. I don't think we are going to be able to help you without knowing more of the particulars of the problem -- I think the proof will depend entirely on what the function $f$ and the constant $y$ are.</p>
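The counterexample above can be checked mechanically; a quick sketch (using the function $f(x)=x^2-6x+9$ and the constant $y=5$ chosen in the answer):

```python
import sympy as sp

x = sp.symbols('x')
f = x**2 - 6*x + 9
crit = sp.solve(sp.Eq(sp.diff(f, x), 0), x)  # critical points of f
print(crit)  # [3]: the only x* with f'(x*) = 0, and 3 > 5 is false
```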
4,821
<p>A quick bit of motivation: recently a question I answered quite a while ago ( <a href="https://math.stackexchange.com/questions/22437/combining-two-3d-rotations/178957">Combining Two 3D Rotations</a> ) picked up another (IMHO rather poor) answer. While it was downvoted by someone else and I strongly concur with their opinion, I haven't downvoted it myself because I'm leery of any perception of 'competitive' downvoting on questions that I've already answered; in general I tend to be <em>very</em> stingy with downvotes (certainly more than I probably should), but this seems like a particularly thorny case.</p> <p>What I'm wondering is whether this is a reasonable concern (or reasonable approach) on my part; do people concur that this is something to be worried about from an ethical perspective, or should a bad answer be downvoted regardless of whether it might be abstractly 'beneficial' to myself to do so?</p>
BlueTrin
36,894
<p>I think that, as long as you are downvoting independently of your participation, you should be doing it: i.e., would you have downvoted the reply even if you had not posted an answer? As long as your reasons for downvoting have nothing to do with your having posted a reply, I do not see a problem there.</p> <p>Of course this is offset by your fear of being judged by your peers. </p> <p>Incidentally, this is not limited to StackOverflow: in many fields experts tend to agree implicitly for fear of being ostracised. This is suboptimal behaviour for the group, but it is much safer for each individual, so this behaviour can persist. </p>
966,835
<p>I want to find the asymptotic complexity of the function:</p> <p>$$g(n)=n^6-9n^5 \log^2 n-16-5n^3$$</p> <p>This is what I have tried:</p> <p>$$n^6-9n^5 \log^2 n-16-5n^3 \geq n^6-9n^5 \sqrt{n}-16n^5 \sqrt{n}-5 n^5 \sqrt{n}=n^6-30n^5 \sqrt{n}=n^6-30n^{\frac{11}{2}} \geq c_1n^6 \Rightarrow (1-c_1)n^6 \geq 30n^{\frac{11}{2}} $$</p> <p>We pick $c_1=\frac{1}{2}$ and $n_1=3600$.</p> <p>$$n^6-9n^5 \log^2 n-16-5n^3 \leq n^6, \forall n \geq 1$$</p> <p>We pick $c_2=1, n_2=1$.</p> <p>Therefore, for $n_0=\max \{ 3600, 1 \}=3600, c_1=\frac{1}{2}$ and $c_2=1$, we have that:</p> <p>$$g(n)=\Theta(n^6)$$</p> <p>Could you tell me if it is right?</p> <p>Also, can I begin by finding the inequalities, or do I have to say first that we are looking for $c_1, c_2 \in \mathbb{R}^+$ and $n_0 \geq 0$, such that:</p> <p>$$c_1 f(n) \leq g(n) \leq c_2 f(n), \forall n \geq n_0$$</p> <p>and then, after having found $f(n)$, should I say that we are looking for $c_1, c_2 \in \mathbb{R}^+$ and $n_0 \geq 0$, such that:</p> <p>$$c_1 n^6 \leq g(n) \leq c_2 n^6, \forall n \geq n_0$$</p>
Community
-1
<p>For a log-polynomial expression $\sum a_kn^{i_k}\log^{j_k}(n)$, take the highest power of $n$ and in case of equality take the highest power of the $\log$.</p> <p>In your case, the dominant term is $n^6$, and $$\frac{g(n)}{n^6}=1-\frac{9\log^2 n}n-\frac{16}{n^6}-\frac5{n^3}.$$ The limit is clearly $1$.</p> <p>Also note that the second term $\frac{9\log^2 n}{n}$ attains its maximum at $n=e^2$, so the ratio increases for larger $n$.</p> <p>Taking $n\ge1000$, which makes the lower constant positive, $$0.57n^6&lt;g(n)&lt;n^6.$$</p>
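A quick numerical sanity check of these bounds (my own script, using the natural logarithm):

```python
import math

def g(n):
    return n**6 - 9 * n**5 * math.log(n)**2 - 16 - 5 * n**3

# g(n)/n^6 should lie in (0.57, 1) for n >= 1000 and increase toward 1.
ratios = [g(n) / n**6 for n in (1000, 10**4, 10**5, 10**6)]
print([round(r, 4) for r in ratios])
print(all(0.57 < r < 1 for r in ratios) and ratios == sorted(ratios))  # True
```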
141,115
<p>I know only some basics about mathematica. However I need to write down the following sum: </p> <p>$\sum_{\{m_k\}_N}\prod_{k=1}^N\frac{1}{m_k}[T_k(Z(\tau))]^{m_k}$. </p> <p>Where $\{m_k\}_N$ denotes partitions of $N$ i.e. $\sum_{k=1}^Nkm_k=N$. The argument in brackets [..] is the Hecke Operator, for now not that important. My problem is more that I do not know how to write the sum over partitions. The Hecke Operator I would then just insert and I think this would not be the most difficult part. </p> <p>I know that usually I should write some code expressing my trials however I really have no idea how to handle the problem. Could someone please help me. </p>
matheorem
4,742
<p>The number of coordinates in VertexTextureCoordinates should be the same as the number of vertices in the polygon.</p> <p>As an example</p> <pre><code>Graphics[{Texture[pic], EdgeForm[Black], Polygon[{{0, 0}, {1, 0}, {2, 2}, {0.5, 2.5}}, VertexTextureCoordinates -&gt; {{0, 0}, {2, 0}, {2, 2}, {0, 2}}]}] </code></pre> <p>The texturing process effectively treats the pic as a stretchable membrane and proceeds in four steps:</p> <ol> <li>grab the {0,0} point (the lower left point) of the pic and pin it to the polygon's {0,0} vertex</li> <li>grab the {2,0} point of the pic and pin it to the polygon's {1,0} vertex. Since {2,0} is actually outside the pic, we should really imagine an extended periodic array of copies of the pic, with coordinates as follows <a href="https://i.stack.imgur.com/4eWQ6.png" rel="noreferrer"><img src="https://i.stack.imgur.com/4eWQ6.png" alt="enter image description here"></a> But so far we have pinned only two points, which determines only a line.</li> <li>grab the {2,2} point of the pic and pin it to the polygon's {2,2} vertex. Now we have pinned three points; they form a triangle, and we get a triangular area textured with the pic, with one triangle still left.</li> <li>grab the {0,2} point of the pic and pin it to the polygon's {0.5, 2.5} vertex. Now the remaining triangle is textured as well, and we get</li> </ol> <p><a href="https://i.stack.imgur.com/tRPcM.png" rel="noreferrer"><img src="https://i.stack.imgur.com/tRPcM.png" alt="enter image description here"></a></p> <p>Also note that if fewer texture coordinates are given than vertices, the missing ones will be filled with {0,0}. That is </p> <pre><code>VertexTextureCoordinates -&gt; {{0, 0}, {2, 0}, {2, 2}} </code></pre> <p>is equivalent to </p> <pre><code>VertexTextureCoordinates -&gt; {{0, 0}, {2, 0}, {2, 2},{0,0}} </code></pre>
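The "extended periodic array" in step 2 amounts to a wrap-around texture lookup; a minimal sketch (the tiny integer array standing in for the picture is my own):

```python
import numpy as np

tex = np.arange(12).reshape(3, 4)  # a tiny stand-in for the picture

def sample(u, v):
    # Wrap scaled coordinates into [0, 1) before indexing, so values
    # outside the unit square repeat the picture periodically.
    j = int((u % 1.0) * tex.shape[1]) % tex.shape[1]
    i = int((v % 1.0) * tex.shape[0]) % tex.shape[0]
    return tex[i, j]

print(sample(0.3, 0.6) == sample(2.3, -0.4))  # True: same texel after wrapping
```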
141,115
<p>I know only some basics about mathematica. However I need to write down the following sum: </p> <p>$\sum_{\{m_k\}_N}\prod_{k=1}^N\frac{1}{m_k}[T_k(Z(\tau))]^{m_k}$. </p> <p>Where $\{m_k\}_N$ denotes partitions of $N$ i.e. $\sum_{k=1}^Nkm_k=N$. The argument in brackets [..] is the Hecke Operator, for now not that important. My problem is more that I do not know how to write the sum over partitions. The Hecke Operator I would then just insert and I think this would not be the most difficult part. </p> <p>I know that usually I should write some code expressing my trials however I really have no idea how to handle the problem. Could someone please help me. </p>
andre314
5,467
<p><em>Note first: it is recommended to read this answer after the two other ones (from Szabolcs and matheorem).</em></p> <p>Here is a tool intended to explore how <code>VertexTextureCoordinates</code> works.</p> <p>The texture is the image: </p> <p><a href="https://i.stack.imgur.com/6EFBg.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/6EFBg.jpg" alt="enter image description here"></a></p> <p>Evaluating the code at the end of the answer returns the following: </p> <p><a href="https://i.stack.imgur.com/fwEa0.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/fwEa0.jpg" alt="enter image description here"></a> </p> <p>A1,A2,A3 ..., B1,B2,B3 can be independently moved with the mouse (on both sides). </p> <ul> <li><p>the polygon A1,A2,A3... shows what piece of the texture is extracted. </p></li> <li><p>the polygon B1,B2,B3... shows where the piece goes. </p></li> </ul> <p>The relevant corresponding part of the code is: </p> <pre><code>Texture[the image] ... Polygon[{B1,B2,B3...},VertexTextureCoordinates-&gt; {A1,A2,A3...}] </code></pre> <p>Note: </p> <ul> <li><p>If one adds a locator (with the button "add locator") and moves the new locator on the left side, nothing happens on the right side. This suggests that the update of the right side is not real time. This is not true: simply move the new locator on the right side, then the corresponding locator on the left side, and see... </p></li> <li><p>The Button in the middle simply does <code>B1=A1;B2=A2;B3=A3...</code>. See the effect on a <em>distorted</em> image.
</p></li> <li><p>The code shows 4 images simply to illustrate a wrapping effect on the texture when the scaled coordinates are not in the interval [0,1]</p></li> </ul> <p>It's really interesting to watch how the triangulation changes when the locators on the right side are moved, for example by taking the example "big hat" (simply press the button) and moving B2 upwards.</p> <p><strong>Code</strong> </p> <pre><code>myHatching=RegionPlot[Rectangle[{0,0},{512,512}],Mesh-&gt; 30, MeshFunctions-&gt;{#1-#2 &amp;, #1+#2 &amp; },PlotStyle-&gt;Opacity[0],MeshStyle-&gt; {Directive[AbsoluteThickness[2],Black],Directive[AbsoluteThickness[3],Red]}]; img00=Rasterize[Show[Lighter @ ExampleData[{"TestImage", "Lena"}],myHatching,ImageSize-&gt; 512]]; img00X4=ImageAssemble[{{img00,img00},{img00,img00}}]; ptsA= ptsB= {{250,250},{375,150},{500,250},{500,500},{250,500}}; Row[{ LocatorPane[ Dynamic[ptsA], Show[img00X4,Frame-&gt; True,ImageSize-&gt; 500,Epilog-&gt; Dynamic[ { Thick, Line[Append[ptsA,ptsA[[1]]]], MapIndexed[Style[Text["A"&lt;&gt;ToString[#2 //First] ,#+{15,15}],FontSize-&gt;14,Bold,Black]&amp;,ptsA] }], DataRange-&gt; {{0,2},{0,2}}], Appearance-&gt;Graphics[{Black,Disk[]},ImageSize-&gt;7] ], Button[" --&gt;\ndo :\n B1=A1\n B2=A2\n ...\n Bn=An\n --&gt;",ptsB=ptsA], LocatorPane[ Dynamic[ptsB], Dynamic[Graphics[ { Texture[img00], Polygon[ptsB,VertexTextureCoordinates-&gt; (ptsA/512.)], MapIndexed[Style[Text["B"&lt;&gt;ToString[#2 //First],#+{15,15}],FontSize-&gt;14,Bold]&amp;,ptsB] }, PlotRange-&gt;{{0,1024},{0,1024}},Frame-&gt; True,ImageSize-&gt; 500]], Appearance-&gt;Graphics[{Black,Disk[]},ImageSize-&gt;7]] }," "] //Style[#,ImageSizeMultipliers-&gt; {1,1}]&amp; Button[ "add locator", ptsB=Insert[ptsB,(ptsB[[1]]+ptsB[[2]])/2.,2]; ptsA=Insert[ptsA,(ptsA[[1]]+ptsA[[2]])/2.,2]] Button["sample 1 : big hat", ptsB={{87.,249.},{375.,150.},{500.,250.},{432.,457.},{21.,659.}}; ptsA={{87.,249.},{375.,150.},{500.,250.},{432.,457.},{137.,481.}};] </code></pre> <p>(It's quick and 
dirty code, i.e. not a reference for programmers.) </p>
273,127
<p>Let $X$ be the set of all harmonic functions on the exterior of the unit sphere in $\mathbb R^3$ which vanish at infinity, so if $V \in X$, then $\nabla^2 V(\mathbf{r}) = 0$ on $\mathbb R^3 - S(2)$ and $\lim_{r \rightarrow \infty} V(r) = 0$. Now consider the map $f$ taking each $V \in X$ to the function defined by $$ f(V)(\mathbf{r}) = || \nabla V(\mathbf{r}) ||^2 $$ For some given $V \in X$, I am looking for all functions $W \in X$ which satisfy $$ f(V) = f(W) $$ Certainly $W = \pm V$ will satisfy the condition. Can anyone find nontrivial solutions for $W$?</p> <p><strong>My approach so far:</strong></p> <p>The condition on $V$ and $W$ is $$ \nabla V \cdot \nabla V = \nabla W \cdot \nabla W $$ By defining $\phi = V + W$ and $\psi = V - W$, this is equivalent to $$ \nabla \phi \cdot \nabla \psi = 0 $$ I then tried expanding $\nabla \phi$ and $\nabla \psi$ in a basis of vector spherical harmonics and plugging into the above formula. This step makes use of the fact $\nabla^2 \phi = \nabla^2 \psi = 0$ and leads to the following condition on the expansion coefficients: $$ \nabla \phi \cdot \nabla \psi = \sum_{nm,n'm'} \phi_{nm} \psi_{n'm'} \left( \frac{1}{r} \right)^{n+n'+4} \left( (n+1)(n'+1) Y_{nm} Y_{n'm'} + \partial_{\theta} Y_{nm} \partial_{\theta} Y_{n'm'} + \frac{1}{\sin^2{\theta}} \partial_{\phi} Y_{nm} \partial_{\phi} Y_{n'm'} \right) $$ It's not clear to me how to proceed from here, or whether this is even the correct approach to take. I could get rid of the sum over $n',m'$ by integrating both sides over a unit sphere and using the orthogonality relations for the spherical harmonics. Doing this gives: $$ \sum_{nm} (n+1)(2n+1) \phi_{nm} \psi_{nm} = 0 $$ though I'm not sure that yields any additional insight. I would appreciate any ideas.</p>
vibe
111,071
<p>I don't have a solution for the general question, but consider a special case where we pick $\phi$ to be a simple dipole, $$ \phi = {\cos{\theta} \over r^2} = \phi_{10} {Y_{10} \over r^2} $$ where $\phi_{10} = {1 \over 2} \sqrt{{3 \over \pi}}$ is a constant which isn't really important. This choice of $\phi$ is harmonic, and so the summation expression I gave above will reduce to the following: $$ \sum_{nm} \psi_{nm} \left({1 \over r} \right)^{n+5} \left[ 2(n+1) \cos{\theta} Y_{nm} - \sin{\theta} \partial_{\theta} Y_{nm} \right] = 0 $$ Applying the recurrence relation for the spherical harmonics: \begin{align} x Y_{nm} &amp;= \alpha_{nm} Y_{n-1,m} + \alpha_{n+1,m} Y_{n+1,m} \\ (1-x^2)\partial_x Y_{nm} &amp;= (n+1)\alpha_{nm} Y_{n-1,m} - n \alpha_{n+1,m}Y_{n+1,m} \end{align} where $x = \cos{\theta}$ and $$ \alpha_{nm} = \sqrt{ \frac{(n+m)(n-m)}{(2n+1)(2n-1)} } $$ Putting these recurrence relations into the above sum, and then relabeling indices to isolate all the $Y_{nm}$ terms, we get $$ \sum_{nm} Y_{nm} \left( 3(n+2)\alpha_{n+1,m}\psi_{n+1,m} + (n+1)\alpha_{nm} \psi_{n-1,m} \right) = 0 $$ To get this I had to assume $\psi_{|m|-1,m} = 0$. Since the $Y_{nm}$ are complete, this must mean that $$ 3(n+2)\alpha_{n+1,m}\psi_{n+1,m} + (n+1)\alpha_{nm} \psi_{n-1,m} = 0 $$ or, $$ \psi_{n+1,m} = -\frac{(n+1)}{3(n+2)} \frac{\alpha_{nm}}{\alpha_{n+1,m}} \psi_{n-1,m} $$ So this indicates I am free to choose any $\psi_{|m|,m}$ I want, and then the relation will determine $\psi_{|m|+2k,m}$ for all positive integers $k$.</p> <p>This means that there are in fact nontrivial solutions $\psi$ which satisfy $\nabla \phi \cdot \nabla \psi = 0$ for my specific choice of $\phi$ (dipole field). In fact, there are infinitely many choices of $\psi$ which will satisfy that relation. At this point I don't have any intuition as to what these $\psi$ solutions look like geometrically, compared to my choice of $\phi$. 
Maybe it would be worth plotting some surface maps to see if there is some interesting pattern which would motivate a solution to the general problem.</p>
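The recurrence at the end can be iterated numerically; here is a small sketch of mine (it only tabulates $\psi_{n,m}$ from a chosen seed $\psi_{|m|,m}$; it does not verify the orthogonality relation itself):

```python
import math

def alpha(n, m):
    return math.sqrt((n + m) * (n - m) / ((2*n + 1) * (2*n - 1)))

def psi_coefficients(m, seed, kmax):
    """psi_{m+2k, m} for k = 0..kmax, grown from psi_{m, m} = seed via
    psi_{n+1} = -((n+1)/(3(n+2))) * (alpha_{n,m}/alpha_{n+1,m}) * psi_{n-1}."""
    psi = {m: seed}
    n = m + 1
    for _ in range(kmax):
        psi[n + 1] = (-(n + 1) / (3 * (n + 2))
                      * alpha(n, m) / alpha(n + 1, m) * psi[n - 1])
        n += 2
    return psi

coeffs = psi_coefficients(0, 1.0, 4)
print({n: round(c, 4) for n, c in sorted(coeffs.items())})
```

The magnitudes decay quickly with $n$, which at least suggests that truncating the expansion for $\psi$ is reasonable when plotting surface maps.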
717,980
<p>In a certain 2-player game, the winner is determined by rolling a single 6-sided die in turn, until a 6 is shown, at which point the game ends immediately. Now, suppose that k dice are now rolled simultaneously by each player on his turn, and the first player to obtain a total of k (or more) 6’s, accumulated over all his throws, wins the game. (For example, if k = 3, then player 1 will throw 3 dice, and keep track of any 6’s that show up. If player 1 did not get all 6’s then player 2 will do the same. Assuming that player 1 gets another turn, he will again throw 3 dice, and any 6’s that show up will be added to his previous total). Compute the expected number of turns that will be needed to complete the game.</p> <p>I've analysed this problem as follows: The problem can be modeled by a negative binomial distribution with probability $p=\frac{1}{6}$. Now, X is a random variable representing the number of dice being thrown. I need to find the tail probability $Pr[X\geq x]$ (one minus the <a href="http://en.wikipedia.org/wiki/Cumulative_distribution_function">cdf</a>), and then find the expectation as follows: $E[X] = \int_0^\infty Pr[X\geq x]\,dx$. The problem here is that the cdf of a negative binomial distribution is a regularized beta function and this is quite messy to deal with. I'm wondering: is there another way to approach this problem that wouldn't involve that? </p>
Steve Kass
60,500
<p><strong>Note:</strong> The initial answer here was incorrect. Thanks to Markus, who solved the problem in a different way and got different results from my first ones, I found the mistake and rewrote the answer. (If there's any reason for me to repost the original, wrong answer somewhere, let me know.)</p> <hr> <p>You can set up a recurrence to solve if you consider intermediate stages in the game and consider the expected number of moves for the game to end from there.</p> <p>Let $E(a,b,k)$ be the expected number of turns the game takes to finish if player whose turn it is needs $a$ more sixes to win, the other player needs $b$ more, and $k$ dice are rolled each turn.</p> <p>If $a\le0$ or $b\le0$, the game is already over, and $E(a,b,k)=0$.</p> <p>Now consider the possible outcomes of the next <strong><em>two</em></strong> rolls. Let the number of $6$s in those rolls be $i$ and $j$, respectively.</p> <p>The particular combination $(i,j)$ happens with probability ${\tbinom{k}{i}}{\tbinom{k}{j}}\left(\frac{5}{6}\right)^{2k-i-j}\left(\frac{1}{6}\right)^{i+j}$. The expected game length of a given case depends on whether or not $i\ge a$: If $i\ge a$, the game is over after the first of the two turns, and the (expected) game length is $1$. Otherwise, it's $2+E(a-i,b-j,k)$.</p> <p>This leads to the following formula:</p> <p>$$\begin{align} E(a,b,k)&amp;= \sum_{i=0}^{a-1}\sum_{j=0}^k{\tbinom{k}{i}}{\tbinom{k}{j}}\left(\tfrac{5}{6}\right)^{2k-i-j}\left(\tfrac{1}{6}\right)^{i+j}\left(2+E(a-i,b-j,k)\right)\\ &amp;+\sum_{i=a}^{k}{\tbinom{k}{i}}\left(\tfrac{5}{6}\right)^{k-i}\left(\tfrac{1}{6}\right)^{i},\\ \end{align}\\ \mbox{where here and below } E(a,b,k)=0 \mbox{ if $a\le0$ or $b\le0$.} $$</p> <p>To get a useful recursive formula, first pull the $i=0$, $j=0$ term out of the sum. 
</p> <p>$$ \begin{align}E(a,b,k) &amp;= \left(\tfrac{5}{6}\right)^{2k}(2+E(a,b,k))\\ &amp;+ \sum_{i={\color{red}{1}}}^{a-1}\sum_{j={\color{red}{0}}}^{{\color{red}{0}}}{\tbinom{k}{i}}{\tbinom{k}{j}}\left(\tfrac{5}{6}\right)^{2k-i-j}\left(\tfrac{1}{6}\right)^{i+j}\left(2+E(a-i,b-j,k)\right)\\ &amp;+ \sum_{i=0}^{a-1}\sum_{j={\color{red}{1}}}^{k}{\tbinom{k}{i}}{\tbinom{k}{j}}\left(\tfrac{5}{6}\right)^{2k-i-j}\left(\tfrac{1}{6}\right)^{i+j}\left(2+E(a-i,b-j,k)\right)\\ &amp;+\sum_{i=a}^{k}{\tbinom{k}{i}}\left(\tfrac{5}{6}\right)^{k-i}\left(\tfrac{1}{6}\right)^{i}\\ \end{align}$$</p> <p>Then solve for $E(a,b,k)$. $$\begin{align} \left({1 - \left(\tfrac{5}{6}\right)^{2k}}\right)E(a,b,k) &amp;= 2\left(\tfrac{5}{6}\right)^{2k}\\ &amp;+ \sum_{i=1}^{a-1}{\tbinom{k}{i}}\left(\tfrac{5}{6}\right)^{2k-i}\left(\tfrac{1}{6}\right)^{i}\left(2+E(a-i,b,k)\right)\\ &amp;+ \sum_{i=0}^{a-1}\sum_{j=1}^{k}{\tbinom{k}{i}}{\tbinom{k}{j}}\left(\tfrac{5}{6}\right)^{2k-i-j}\left(\tfrac{1}{6}\right)^{i+j}\left(2+E(a-i,b-j,k)\right)\\ &amp;+\sum_{i=a}^{k}{\tbinom{k}{i}}\left(\tfrac{5}{6}\right)^{k-i}\left(\tfrac{1}{6}\right)^{i}\\ \end{align} $$</p> <p>Using this to evaluate $E(k,k,k)$ in Mathematica (assuming no typos in my transcription from what I used) gives the same results as Markus put in his answer. Here's the beginning of the sequence $E(k,k,k)$, starting at $k=1$.</p> <p>\begin{array}{ccc} \it{k} &amp; \it{E(k,k,k)} \\ \hline \\ 1 &amp; 6.00000000000 \\ 2 &amp; 7.84416992031 \\ 3 &amp; 8.69584550140 \\ 4 &amp; 9.20010963516 \\ 5 &amp; 9.54353874272 \\ 6 &amp; 9.79634774936 \\ 7 &amp; 9.99197435248\\ 8 &amp; 10.1488645407\\ 9 &amp; 10.2781431548\\ \cdots\\ 30 &amp; 11.2061391171\\ \cdots\\ \end{array}</p> <p>These values should be correct to all decimal places, since there's no approximation going on other than the final conversion of a fraction to decimal.</p>
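The recurrence is easy to implement exactly with rational arithmetic; a sketch of mine (memoized, with the $(i,j)=(0,0)$ term solved out as in the answer):

```python
from fractions import Fraction
from functools import lru_cache
from math import comb

def expected_turns(k):
    """E(k,k,k): expected total number of turns, from the recurrence above."""
    # P[i] = probability of exactly i sixes in one throw of k dice
    P = [Fraction(comb(k, i)) * Fraction(1, 6)**i * Fraction(5, 6)**(k - i)
         for i in range(k + 1)]

    @lru_cache(maxsize=None)
    def E(a, b):
        if a <= 0 or b <= 0:
            return Fraction(0)
        # win term (length 1) plus the (0,0) constant moved to the right side
        total = 2 * P[0]**2 + sum(P[i] for i in range(a, k + 1))
        for i in range(a):            # current player still short of a sixes
            for j in range(k + 1):
                if i == 0 and j == 0:
                    continue          # this term was solved out
                total += P[i] * P[j] * (2 + E(a - i, b - j))
        return total / (1 - P[0]**2)

    return E(k, k)

print(float(expected_turns(1)))  # 6.0
print(float(expected_turns(2)))  # ≈ 7.8442 (cf. the table)
```

For $k=1$ this returns exactly $6$, and for $k=2$ it reproduces $7.84416992\ldots$ from the table; the `Fraction` arithmetic keeps every intermediate value exact.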
717,980
<p>In a certain 2-player game, the winner is determined by rolling a single 6-sided die in turn, until a 6 is shown, at which point the game ends immediately. Now, suppose that k dice are now rolled simultaneously by each player on his turn, and the first player to obtain a total of k (or more) 6’s, accumulated over all his throws, wins the game. (For example, if k = 3, then player 1 will throw 3 dice, and keep track of any 6’s that show up. If player 1 did not get all 6’s then player 2 will do the same. Assuming that player 1 gets another turn, he will again throw 3 dice, and any 6’s that show up will be added to his previous total). Compute the expected number of turns that will be needed to complete the game.</p> <p>I've analysed this problem as follows: The problem can be modeled by a negative binomial distribution with probability $p=\frac{1}{6}$. Now, X is a random variable representing the number of dice being thrown. I need to find the tail probability $Pr[X\geq x]$ (one minus the <a href="http://en.wikipedia.org/wiki/Cumulative_distribution_function">cdf</a>), and then find the expectation as follows: $E[X] = \int_0^\infty Pr[X\geq x]\,dx$. The problem here is that the cdf of a negative binomial distribution is a regularized beta function and this is quite messy to deal with. I'm wondering: is there another way to approach this problem that wouldn't involve that? </p>
epi163sqrt
132,007
<p><em>Note:</em> This question seemed to be a nice challenge for me. Here I provide an <em>explicit formula</em>, which is admittedly not very attractive, because it is rather complicated. In fact, I really appreciated the elegant approach of <a href="https://math.stackexchange.com/users/60500/steve-kass">Steve</a> and I still do so. <em>But</em>, my calculations using the explicit formula show different values for $k&gt;1$ and the things became interesting for me. I verified with a small program that the stated figures for $E(k,k,k)$ from Steve were consistent with his recursion formula. But nevertheless my figures were different from those. I wrote small programs calculating first and last expressions of my calculations to verify, that the transformations were correct. Then I did an alternative verification, wrote some R code to produce random samples and simulated the rolls with $1$ to $6$ dice. The pleasant news for me was, that the simulation based upon random samples confirmed convincingly the figures calculated with my explicit formula. So, for me <em>one open question is still left</em>, namely what is the reason for the differing results between the explicit formula and the recursion formula?</p> <p>@Steve, I would really appreciate, if you could review both answers and help to clarify it.</p> <blockquote> <p><strong>Added 2014-03-31: Note: Recursion corrected!</strong> In the meanwhile Steve found a clever correction of his answer and now both, recursive and direct solution show consistent results.</p> </blockquote> <p>Now, here's a proof resulting in an explicit formula for the expectation: Let $A^{(k)}_{N,j}$ denote the number of ways player $A$ can get $j(\ge0)$ times a $6$ with $k$ dice in $N$ rolls. Let $A^{(k)}(x,y)=(5x+y)^{Nk}$ be the corresponding generating function, with the exponent of $y$ giving the number of $6$ after $N$ rolls with $k$ dice and the coefficient giving the number of different possibilities of this event. 
The variable $x$ keeps track of the dice not showing a $6$, with the factor $5$ counting their possible pips. Let the coefficient operator $[y^j]$ denote the coefficient of $y^j$ in $A^{(k)}(x,y)$. So,</p> <p>\begin{align} A^{(k)}_{N,j}=[y^j](5x+y)^{Nk}=[y^j]\sum_{l=0}^{Nk}\binom{Nk}{l}y^l(5x)^{Nk-l}=\binom{Nk}{j}5^{Nk-j} \end{align}</p> <p>Similarly, let $A^{(k)}_{N,\leq j}$ denote the number of ways player $A$ can get no more than $j$ $6$'s with $k$ dice in $N$ rolls.</p> <p>$$A^{(k)}_{N,\leq j}=\sum_{l=0}^{j}\binom{Nk}{l}5^{Nk-l}$$ We also have $A^{(k)}_{N,\geq j}=6^{Nk} - A^{(k)}_{N,&lt; j}$, since $6^{Nk}$ is the number of <em>all</em> possible results after $N$ rolls with $k$ dice. For player $B$ we use the corresponding notation $B^{(k)}_{N,j}$, etc.</p> <blockquote> <p>The resulting <em>probabilities</em> for these events are: \begin{align} p(A^{(k)}_{N,j})&amp;=p(B^{(k)}_{N,j})=\left(\frac{5}{6}\right)^{Nk}\binom{Nk}{j}\frac{1}{5^j}\\ p(A^{(k)}_{N,\leq j})&amp;=p(B^{(k)}_{N,\leq j})=\left(\frac{5}{6}\right)^{Nk}\sum_{l=0}^{j}\binom{Nk}{l}\frac{1}{5^l}\\ p(A^{(k)}_{N,\geq j})&amp;=p(B^{(k)}_{N,\geq j})=1-p(A^{(k)}_{N,&lt; j})\\ \end{align}</p> </blockquote> <p>To calculate $E^{(k)}(X)$, the expectation when playing with $k$ dice, let $P(X=N)$ denote the probability that player $A$ <em>or</em> player $B$ wins after a total of $N$ rolls. In order to do so, we consider <em>two scenarios</em>. Either $A$ has rolled $k$ (or more) $6$'s with <em>his</em> $(N+1)$-st roll of $k$ dice, so that $A$ wins after a total of $2N+1$ rolls, or $B$ wins after $A$ has rolled the $k$ dice $N$ times, so that they have reached a total of $2N$ rolls. Symbolically</p> <blockquote> <p>\begin{align} A\ wins:\ \ \ &amp;(A B) (A B)\ldots (A B) (A B)A = (AB)^{N}A&amp;(2N+1)\ rolls\\ B\ wins:\ \ \ &amp;(A B) (A B)\ldots (A B) (A B) = (AB)^{N}&amp;(2N)\ rolls \end{align}</p> </blockquote> <p>So, the expectation $E^{(k)}(X)$ consists of two parts.
The first sum corresponds to the scenario with player $A$ as winner of the game. This implies that for each $N\ge0$ player $B$ has reached fewer than $k$ $6$'s in $N$ rolls, while player $A$ has reached $j&lt;k$ times a $6$ in $N$ rolls and with his $(N+1)$-st roll he has $k$ or more $6$'s. Similarly for the second sum, with $B$ as winner.</p> <blockquote> <p>\begin{align} E^{(k)}(X)&amp;=\sum_{N\ge0}NP(X=N)\\ &amp;=\sum_{N\ge0}(2N+1)P(X=2N+1) + \sum_{N\ge0}(2N)P(X=2N)\\ &amp;=\sum_{N\ge0}(2N+1)p(B_{N,&lt;k})\sum_{j=0}^{k-1}p(A_{N,j})p(A_{1,\ge k-j})\\ &amp;+\sum_{N\ge0}(2N)p(A_{N,&lt;k})\sum_{j=0}^{k-1}p(B_{N-1,j})p(B_{1,\ge k-j}) \end{align}</p> </blockquote> <p>Substituting for the probabilities the formulas stated above gives</p> <p>\begin{align} E^{(k)}(X)&amp;=\sum_{N\ge0}(2N+1) \left(\sum_{i=0}^{k-1}\binom{Nk}{i}\left(\frac{5}{6}\right)^{Nk}\frac{1}{5^i}\right)\\ &amp;\qquad\qquad\cdot\sum_{j=0}^{k-1}\left(\binom{Nk}{j}\left(\frac{5}{6}\right)^{Nk}\frac{1}{5^j} \sum_{l=k-j}^{k}\binom{k}{l}\left(\frac{5}{6}\right)^{k}\frac{1}{5^l}\right)\\ &amp;+\sum_{N\ge0}(2N) \left(\sum_{i=0}^{k-1}\binom{Nk}{i}\left(\frac{5}{6}\right)^{Nk}\frac{1}{5^i}\right)\\ &amp;\qquad\qquad\cdot\sum_{j=0}^{k-1}\left(\binom{(N-1)k}{j}\left(\frac{5}{6}\right)^{(N-1)k}\frac{1}{5^j} \sum_{l=k-j}^{k}\binom{k}{l}\left(\frac{5}{6}\right)^{k}\frac{1}{5^l}\right)\\ &amp;=\sum_{N\ge0}(2N+1) \left(\frac{5}{6}\right)^{(2N+1)k}\sum_{i=0}^{k-1}\binom{Nk}{i}\frac{1}{5^i} \sum_{j=0}^{k-1}\binom{Nk}{j}\frac{1}{5^j} \sum_{l=k-j}^{k}\binom{k}{l}\frac{1}{5^l}\\ &amp;+\sum_{N\ge0}(2N) \left(\frac{5}{6}\right)^{2Nk}\sum_{i=0}^{k-1}\binom{Nk}{i}\frac{1}{5^i} \sum_{j=0}^{k-1}\binom{(N-1)k}{j}\frac{1}{5^j} \sum_{l=k-j}^{k}\binom{k}{l}\frac{1}{5^l}\\ \end{align}</p> <blockquote> <p>After a few simplifications the resulting value for $E^{(k)}(x)$ is \begin{align} E^{(k)}(X)&amp;=\frac{1}{6^k}\sum_{N\ge0}(2N+1) \left(\frac{5}{6}\right)^{2Nk}\sum_{i=0}^{k-1}\binom{Nk}{i}\frac{1}{5^i} \sum_{j=0}^{k-1}\binom{Nk}{j}
\sum_{l=0}^{j}\binom{k}{j-l}\frac{1}{5^l}\\ &amp;+\frac{1}{6^k}\sum_{N\ge0}(2N) \left(\frac{5}{6}\right)^{(2N-1)k}\sum_{i=0}^{k-1}\binom{Nk}{i}\frac{1}{5^i} \sum_{j=0}^{k-1}\binom{(N-1)k}{j} \sum_{l=0}^{j}\binom{k}{j-l}\frac{1}{5^l} \end{align}</p> </blockquote> <p>I've calculated the expectations for $k=1\ldots 6$ with a small program. The computed values approximating $E^{(k)}(x)$ converge quickly with increasing $N$. The table below lists the resulting figures with $7$ significant digits.</p> <blockquote> <p>Expectation $E^{(k)}(X)$: The value in column $N$ indicates the smallest $N$ for which the significant digits no longer change as $N$ increases. $$ \begin{array}{ccc} \it{k} &amp; \it{E^{(k)}(X)} &amp; \it{N}\\ \hline \\ 1 &amp; 6.000 000 &amp; 57 \\ 2 &amp; 7.844 169 &amp; 32 \\ 3 &amp; 8.695 845 &amp; 26 \\ 4 &amp; 9.200 109 &amp; 22 \\ 5 &amp; 9.543 538 &amp; 19 \\ 6 &amp; 9.796 347 &amp; 18 \\ \end{array} $$</p> </blockquote> <p>In the following I give explicit values for $k=1$ and $2$.
With the help of generating functions, and with $D$ as the (formal) differentiation operator, we have for $k=1$:</p> <p>\begin{align} E^{(1)}(x)&amp;=\frac{1}{6}\sum_{N\ge0}(2N+1)\left(\frac{5}{6}\right)^{2N} + \frac{1}{6}\sum_{N\ge0}(2N)\left(\frac{5}{6}\right)^{2N-1}\\ &amp;=\frac{1}{6}\left.\left(D\frac{1}{1-x}\right)\right\rvert_{x=\frac{5}{6}}=\frac{1}{6}\left.\frac{1}{(1-x)^2}\right\rvert_{x=\frac{5}{6}}\\ &amp;=6 \end{align}</p> <p>Calculation for $k=2$ gives:</p> <p>\begin{align} E^{(2)}(x)&amp;=\frac{1}{6^2}\sum_{N\ge0}(2N+1)\left(\frac{5}{6}\right)^{4N} \sum_{i=0}^{1}\binom{2N}{i}\frac{1}{5^i} \sum_{j=0}^{1}\binom{2N}{j}\sum_{l=0}^{j}\binom{2}{j-l}\frac{1}{5^l}\\ &amp;+\frac{1}{6^2}\sum_{N\ge0}(2N)\left(\frac{5}{6}\right)^{4N-2} \sum_{i=0}^{1}\binom{2N}{i}\frac{1}{5^i} \sum_{j=0}^{1}\binom{2N-2}{j}\sum_{l=0}^{j}\binom{2}{j-l}\frac{1}{5^l}\\ &amp;=\frac{1}{5^26^2}\sum_{N\ge0}(2N+1)(2N+5)(22N+5)\left(\frac{5}{6}\right)^{4N}\\ &amp;+\frac{1}{5^26^2}\sum_{N\ge0}(2N)(2N+5)(22N-17)\left(\frac{5}{6}\right)^{4N-2}\\ &amp;=\frac{1}{5^46^2}\sum_{N\ge0}(5368N^3+12572N^2-1870N+625)\left(\frac{5}{6}\right)^{4N}\\ \end{align}</p> <p>We use the following well-known relations:</p> <p>\begin{align} \sum_{N\ge0}Nx^N&amp;=(xD)\frac{1}{1-x}=\frac{x}{(1-x)^2}\\ \sum_{N\ge0}N^2x^N&amp;=(xD)^2\frac{1}{1-x}=\frac{x(x+1)}{(1-x)^3}\\ \sum_{N\ge0}N^3x^N&amp;=(xD)^3\frac{1}{1-x}=\frac{x(x^2+4x+1)}{(1-x)^4}\\ \end{align}</p> <p>to get: \begin{align} E^{(2)}(x)&amp;=\frac{1}{5^46^2}\left.\left(5368\frac{x(x^2+4x+1)}{(1-x)^4} +12572\frac{x(x+1)}{(1-x)^3}\right.\right.\\ &amp;\qquad\qquad\qquad\left.\left.-1870\frac{x}{(1-x)^2} +625\frac{1}{1-x}\right)\right|_{x=\left(\frac{5}{6}\right)^4}\\ &amp;=\frac{1}{5^46^2}\left.\frac{-9699x^3+27087x^2+14195x+625}{(1-x)^4}\right|_{x=\left(\frac{5}{6}\right)^4}\\ &amp;=\frac{636876}{81191}\approx7.844169 \end{align}</p> <blockquote> <p>Summarising, the expectations for $k=1,2$ are: \begin{align} E^{(1)}(x)&amp;=6\\ E^{(2)}(x)&amp;=\frac{636876}{81191}\approx7.844169
\end{align}</p> </blockquote> <p>The formulas for $k&gt;2$ can be calculated in a similar manner, but note that the degree of $N$ increases by $2$ whenever $k$ is incremented by $1$. So, manual calculation could become <em>really</em> cumbersome. :-)</p>
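<p>The truncated sums above are straightforward to evaluate numerically. Here is a minimal Python sketch of the stated formula (my own transcription; the names <code>expectation</code>, <code>p_eq</code>, etc. are chosen for illustration), which reproduces the figures in the table:</p>

```python
from math import comb

def expectation(k, n_max=200):
    """Truncated series for E^(k)(X): expected total number of rolls when
    each player throws k dice per turn and needs k sixes (in total) to win."""
    def p_eq(rolls, j):
        # probability of exactly j sixes in `rolls` throws of k dice
        n = rolls * k
        return comb(n, j) * (1 / 6) ** j * (5 / 6) ** (n - j)

    def p_lt(rolls, j):
        # probability of fewer than j sixes in `rolls` throws
        return sum(p_eq(rolls, i) for i in range(j))

    def p_turn_ge(m):
        # probability that a single throw of k dice shows at least m sixes
        return sum(comb(k, l) * (1 / 6) ** l * (5 / 6) ** (k - l)
                   for l in range(m, k + 1))

    total = 0.0
    for n in range(n_max):
        # A wins with his (n+1)-st throw: 2n+1 rolls in total
        total += (2 * n + 1) * p_lt(n, k) * sum(
            p_eq(n, j) * p_turn_ge(k - j) for j in range(k))
        # B wins with his n-th throw: 2n rolls in total
        if n >= 1:
            total += (2 * n) * p_lt(n, k) * sum(
                p_eq(n - 1, j) * p_turn_ge(k - j) for j in range(k))
    return total
```

<p>With the default truncation, <code>expectation(1)</code> and <code>expectation(2)</code> agree with $6$ and $\frac{636876}{81191}\approx 7.844169$ to the digits shown above.</p>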
2,343,993
<blockquote> <p>Find the limit $$\lim_{n\to\infty}\left(\frac{n}{n+5}\right)^n$$</p> </blockquote> <p>I set it up all the way to $\dfrac{\left(\dfrac{n+5}{n}\right)}{-\dfrac{1}{n^2}}$ but now I am stuck and do not know what to do.</p>
Michael Rozenberg
190,319
<p>Just $$\left(\left(1-\frac{5}{n+5}\right)^{-\frac{n+5}{5}}\right)^{\frac{-5n}{n+5}}\rightarrow e^{-5},$$ since the inner factor tends to $e$ and the exponent $\frac{-5n}{n+5}$ tends to $-5$.</p>
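<p>The limit is easy to sanity-check numerically; for instance, in Python:</p>

```python
import math

n = 10 ** 6
value = (n / (n + 5)) ** n
# for large n the value agrees with e^{-5} up to an O(1/n) correction
assert abs(value - math.exp(-5)) < 1e-6
```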
84,076
<p>I think computation of the Euler characteristic of a real variety is not a problem in theory.</p> <p>There are some nice papers like <em><a href="http://blms.oxfordjournals.org/content/22/6/547.abstract" rel="nofollow">J.W. Bruce, Euler characteristics of real varieties</a></em>.</p> <p>But suppose we have, say, a very specific real nonsingular hypersurface, given by a polynomial, or a nice family of such hypersurfaces. What is the least cumbersome approach to computation of $\chi(V)$? One can surely count the critical points of an appropriate Morse function, but I hope it's not the only possible way.</p> <p>(Since I am talking about dealing with specific examples, here's one: $f (X_1,\ldots,X_n) = X_1^3 - X_1 + \cdots + X_n^3 - X_n = 0$, where $n$ is odd.)</p> <p><strong>Update:</strong> the original motivation is the following: the well-known results by Oleĭnik, Petrovskiĭ, Milnor, and Thom give upper bounds on $\chi (V)$ or $b(V) = \sum_i b_i (V)$ that are exponential in $n$. It is easy to see that this is unavoidable, e.g. $(X_1^2 - X_1)^2 + \cdots + (X_n^2 - X_n)^2 = 0$ is an equation of degree $4$ that defines exactly $2^n$ isolated points in $\mathbb{R}^n$. I was interested in specific families of real algebraic sets with large $\chi (V)$ or $b (V)$ <em>defined by one equation of degree $3$</em>. I couldn't find an appropriate reference with such examples and it seems like a proof for such example would require some computations (unlike the case of degree $4$).</p>
Liviu Nicolaescu
20,302
<p>This is tricky even in the simplest case. Suppose we are given a real polynomial in one real variable. The Euler characteristic of its zero set is equal to the number of real roots (not counted with multiplicity). </p>
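<p>For a concrete one-variable instance: the zero set of $f(x)=x^3-x$ is $\{-1,0,1\}$, so $\chi=3$. A rough numerical sketch (my own; it assumes all real roots are simple and lie in the sampled interval) recovers this by counting sign changes:</p>

```python
def count_simple_real_roots(f, lo=-10.0, hi=10.0, steps=100_000):
    """Count sign changes of f on [lo, hi]; for a polynomial whose real
    roots are all simple and inside the interval, this is the number of
    real roots, i.e. the Euler characteristic of the zero set."""
    xs = [lo + (hi - lo) * i / steps for i in range(steps + 1)]
    signs = [f(x) > 0 for x in xs if f(x) != 0]
    return sum(a != b for a, b in zip(signs, signs[1:]))

assert count_simple_real_roots(lambda x: x**3 - x) == 3
```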
3,283,724
<p>Let A, B ⊆ R, and let f : A → B be a bijective function. Show that if <span class="math-container">$f$</span> is strictly increasing on A, then <span class="math-container">$f^{-1}$</span> is strictly increasing on B.</p> <p>How would I write this proof? I think by contradiction but I don't know where to start.</p>
Kavi Rama Murthy
142,385
<p>Suppose <span class="math-container">$x&lt;y$</span> in <span class="math-container">$B$</span>. We have to show that <span class="math-container">$f^{-1}(x) &lt;f^{-1}(y)$</span>. Let us prove this by contradiction.</p> <p>Suppose <span class="math-container">$f^{-1}(x) \geq f^{-1}(y)$</span>. Since <span class="math-container">$f$</span> is strictly increasing (hence order-preserving), this implies <span class="math-container">$f(f^{-1}(x))\geq f(f^{-1}(y))$</span>, which means <span class="math-container">$x \geq y$</span>, a contradiction. </p>
4,220,518
<p>A point is moving along the curve <span class="math-container">$y = x^2$</span> with unit speed. What is the magnitude of its acceleration at the point <span class="math-container">$(1/2, 1/4)$</span>?</p> <p>My approach : I use the parametric equation <span class="math-container">$s(t) = (ct, c^2t^2)$</span>, then <span class="math-container">$v(t) = s'(t) = (c, 2c^2t)$</span> and <span class="math-container">$a(t) = v'(t) = (0, 2c^2)$</span>. Now the point <span class="math-container">$(1/2, 1/4)$</span> is reached at time <span class="math-container">$t = \frac{1}{2c}$</span>, so <span class="math-container">$v(\frac{1}{2c}) = (c, c)$</span>. Now the unit speed condition gives us <span class="math-container">$\sqrt{c^2 + c^2} = 1 \implies c = \frac{1}{\sqrt{2}}$</span>. So the magnitude of acceleration is <span class="math-container">$2c^2 = 1$</span>.</p> <p>But the answer is <span class="math-container">$\frac{1}{\sqrt{2}}$</span>. Can someone help me what is wrong in my approach.</p>
mathcounterexamples.net
187,663
<p>What is wrong is that using the parametric coordinates</p> <p><span class="math-container">$$s(t)=(ct,c^2t^2)$$</span></p> <p>the point is not moving with unit speed. This is easily seen by computing the norm of the velocity, which is not constant. You need to normalize the speed <strong>for all <span class="math-container">$t$</span></strong>. Then find the time at which the point <span class="math-container">$(1/2,1/4)$</span> is reached. And finally compute the acceleration at this point.</p>
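<p>As a cross-check on the expected answer: for a unit-speed curve the acceleration is purely normal, with magnitude equal to the curvature $\kappa = |y''|/(1+y'^2)^{3/2}$. A tiny sketch:</p>

```python
import math

def parabola_curvature(x):
    # y = x^2  =>  y' = 2x, y'' = 2
    yp, ypp = 2 * x, 2.0
    return abs(ypp) / (1 + yp * yp) ** 1.5

# at (1/2, 1/4): y' = 1, so kappa = 2 / 2^{3/2} = 1/sqrt(2)
assert math.isclose(parabola_curvature(0.5), 1 / math.sqrt(2))
```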
280,500
<p>I would like to pose a question about the range of validity of the binomial series expansion. </p> <p>I know that for rational $n$ that is not a non-negative integer, $$ \left(1+x\right)^{n}=1+nx+\frac{n\left(n-1\right)}{2!}x^{2}+\dots, $$ where the range of validity is $\left|x\right|&lt;1$.</p> <p>My question is: if we tried to expand $\left(1+f(x)\right)^n$, where $f(x)$ is an arbitrary function defined on the reals, does it follow that the range of validity of this expansion is just $\left|f(x)\right|&lt;1$?</p> <p>For example, could I say that the range of validity of the Binomial Theorem expansion of $\left(1+(x+2x^3)\right)^n$ is just the values of $x$ that satisfy $\left|x+2x^3\right|&lt;1$? Or is it not as straightforward as doing such substitution?</p> <p>Thanks in advance for your inputs.</p>
Robert Israel
8,508
<p>Yes, it really is that straightforward.</p>
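<p>The point is that the expansion is simply the binomial series evaluated at $u=f(x)$, so it converges exactly when $|u|&lt;1$. A quick numerical sketch of mine (with an illustrative helper name):</p>

```python
def binomial_series(n, u, terms=200):
    """Partial sum of (1+u)^n = sum_k C(n,k) u^k, valid for |u| < 1."""
    total, coeff = 0.0, 1.0
    for k in range(terms):
        total += coeff * u ** k
        coeff *= (n - k) / (k + 1)   # C(n,k+1) = C(n,k) * (n-k)/(k+1)
    return total

n, x = 0.5, 0.3
u = x + 2 * x ** 3                   # |u| = 0.354 < 1: inside the range
assert abs(binomial_series(n, u) - (1 + u) ** n) < 1e-12
```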
1,823,187
<blockquote> <p>There are $n \gt 0$ different cells and $n+2$ different balls. Each cell cannot be empty. How many ways can we put those balls into those cells?</p> </blockquote> <p>My solution:</p> <p>Let's start by putting one ball into each cell: for the first cell there are $n+2$ options to choose a ball, and so on down to the $n$th cell, which has $3$ options. Total: $\frac{(n+2)!}{2!}$.</p> <p>Now we have $2$ balls left, and <strong>my question is</strong>: can I change the way I choose? Until now I chose a ball for each cell. Can I now choose a cell for each of the remaining balls instead? Is this legal? If the answer is yes, then let's choose a cell for each ball ($n$ options per ball), so we get $n^2$ ways to put the $2$ balls into the $n$ cells.</p> <p>We get: $\frac{(n+2)!}{2!} n^2$.</p> <p>Is this correct?</p>
Christian Blatter
1,303
<p>Consider at $x=k\in{\mathbb N}_{\geq2}$ a trapezoidal spike of height $k$ with base $\left[k-{2\over k^3},\ k+{2\over k^3}\right]$ and top $\left[k-{1\over k^3},\ k+{1\over k^3}\right]$. Let $f: \&gt;[1,\infty)\to{\mathbb R}$ be the function obtained by "concatenation" of these spikes. The area of the spike at $k$ is ${3\over k^2}$, so that $$\int_1^\infty f(x)\&gt;dx=3 \sum_{k=2}^\infty {1\over k^2}={\pi^2\over2}-3&lt;\infty\ .$$ On the other hand each spike of $f^2$ contains a rectangle of width ${2\over k^3}$ and height $k^2$. It follows that $$\int_1^\infty f^2(x)\&gt;dx\geq\sum_{k=2}^\infty{2\over k}=\infty\ .$$ Add $e^{-x}$ to $f(x)$, as in Sangchul Lee's answer, to make $f$ positive everywhere.</p>
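<p>A numerical sketch of mine confirming both series from the spike areas computed above (area $3/k^2$ per spike of $f$, lower bound $2/k$ per spike of $f^2$):</p>

```python
import math

# total area of the spikes of f: 3 * sum 1/k^2 over k >= 2, a finite number
area_f = sum(3 / k ** 2 for k in range(2, 10 ** 6))
assert abs(area_f - (math.pi ** 2 / 2 - 3)) < 1e-4

# lower bound for the integral of f^2: a harmonic tail, growing without bound
lower_bound_f2 = sum(2 / k for k in range(2, 10 ** 6))
assert lower_bound_f2 > 25
```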
1,338,999
<p><img src="https://i.stack.imgur.com/hXfn2.png" alt="sat question"></p> <p>My friend selected option B, I did C. We're confused. Can someone please explain this for my friend?</p>
James Pak
187,056
<p>\begin{align} \left[\frac{(m!)^2}{(m-k)!(m+k)!}\right]^{m} &amp; = \left[\frac{m!}{(m-k)!}\frac{m!}{(m+k)!}\right]^{m} \\&amp; = \left[\frac{m\times(m-1)\times\cdots\times(m-k+1)}{(m+k)\times(m+k-1)\times\cdots\times(m+1)}\right]^{m} \\&amp; = \left[\left(1+\frac{k}{m}\right) \left(1+\frac{k}{m-1}\right)\cdots\left(1+\frac{k}{m-k+1}\right) \right]^{\Large -m} \end{align}</p> <p>Let $0 \leq u \leq k-1$.</p> <p>\begin{align} \left(1+\frac{k}{m-u}\right)^{\Large -m} = \left(\left(1+\frac{k}{m-u}\right)^{\Large \frac{m-u}{k}+\frac{u}{k}}\right)^{\Large -k}, \end{align}</p> <p>where $$\left(1+\frac{k}{m-u}\right)^{\Large \frac{m-u}{k}}\to e \;\text{ as } \; m\to \infty$$ and $$\left(1+\frac{k}{m-u}\right)^{\Large \frac{u}{k}}=\left(1+\frac{\Large \frac{k}{m}}{1- \Large \frac{u}{m}}\right)^{\Large \frac{u}{k}}\to 1 \;\text{ as } \; m\to \infty.$$</p> <p>Could you do the rest?</p>
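<p>The rest: each of the $k$ factors, raised to the power $-m$, tends to $e^{-k}$, so the whole expression tends to $e^{-k^2}$. A numeric confirmation of this limit (my own sketch):</p>

```python
import math

def bracket(m, k):
    # [(m!)^2 / ((m-k)!(m+k)!)]^m via the telescoped product above
    prod = 1.0
    for u in range(k):
        prod *= (m - u) / (m + k - u)
    return prod ** m

for k in (1, 2, 3):
    # converges to e^{-k^2} as m grows
    assert abs(bracket(10 ** 5, k) - math.exp(-k * k)) < 1e-4
```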
537,021
<p>Say I divide a number by 6: will the number modulo 6 always be between 0 and 5? If so, is a number modulo any number $N$ always between $0$ and $N - 1$?</p>
Henry
6,460
<p>It depends whether you regard modulus as a function or as an equivalence relation. </p> <p>It is often useful to identify numbers which are equivalent to "$-1 \mod p$".</p> <p>It also depends on what you mean by "number": what would you say "$11.4 \mod 6$" was?</p> <p>If you say "$61 \equiv 79 \mod 6$" then I would say the answer was no.</p> <p>But if you would say "<code>mod(61,6) = 1 = mod(79,6)</code>" (rather like the Excel function) then I would say the answer was yes. </p>
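<p>Programming languages split on exactly these conventions. Python's <code>%</code>, for instance, always returns a representative in $[0,N)$ for positive $N$, even for negative or fractional inputs (C would give <code>-1</code> for the negative case below):</p>

```python
# 61 and 79 reduce to the same representative mod 6
assert 61 % 6 == 1 and 79 % 6 == 1

# Python extends the convention to floats: one answer to "11.4 mod 6"
assert abs(11.4 % 6 - 5.4) < 1e-9

# negative numbers still land in [0, 6)
assert -7 % 6 == 5
```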
2,183,390
<p>So, I need to solve a hard problem, which reduces to this: </p> <blockquote> <p>Prove that $3^{\frac{1}{3}} \notin \mathbb{Q}[13^{\frac{1}{3}}]$.</p> </blockquote> <p>The only thing that comes into my mind is to suppose the opposite, <em>i.e.</em>, $3^{\frac{1}{3}} \in \mathbb{Q}[13^{\frac{1}{3}}]$, and then to see that $3^{\frac{1}{3}} = a+b\ 13^{\frac{1}{3}} + c\ 13^{\frac{2}{3}}$ leads to some contradiction while trying to manipulate this. But I am not sure if this would work.</p> <blockquote> <p>Is there a smarter solution, or a solution at all?</p> </blockquote> <p>(If someone really can manipulate this, I would like to see it.) </p>
Thomas Andrews
7,933
<p>The vector space approach is to consider $\mathbb Q[\sqrt[3]{13}]$ as a vector space over $\mathbb Q$ with the basis $1,\sqrt[3]{13},\sqrt[3]{13}^2$. Then multiplication by $a+b\sqrt[3]{13}+c\sqrt[3]{13}^2$ can be represented by the matrix:</p> <p>$$\begin{pmatrix}a&amp;13c&amp;13b\\ b&amp;a&amp;13c\\ c&amp;b&amp;a \end{pmatrix}$$</p> <p>For this to represent $\sqrt[3]{3}$, it must have minimal polynomial $x^3-3$. In particular, then, the trace must be zero, so $a=0$. And the determinant must be $3$, but the determinant when $a=0$ is $13(b^3+13c^3)$, which cannot equal $3$ for rational $b,c$: the $13$-adic valuations of $b^3$ and of $13c^3$ are distinct modulo $3$, so the valuation of $13(b^3+13c^3)$ is $\equiv 1$ or $2 \pmod 3$, whereas $3$ has valuation $0$.</p>
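<p>Expanding the determinant of the matrix above gives the norm form $a^3+13b^3+169c^3-39abc$, so with $a=0$ it is $13(b^3+13c^3)$. A small exact-arithmetic check of this identity (my own sketch, sampling random rationals):</p>

```python
from fractions import Fraction
from random import randint

def det3(a, b, c):
    """Determinant of [[a,13c,13b],[b,a,13c],[c,b,a]] by cofactor expansion."""
    return (a * (a * a - 13 * c * b)
            - 13 * c * (b * a - 13 * c * c)
            + 13 * b * (b * b - a * c))

for _ in range(100):
    a, b, c = (Fraction(randint(-9, 9), randint(1, 9)) for _ in range(3))
    assert det3(a, b, c) == a ** 3 + 13 * b ** 3 + 169 * c ** 3 - 39 * a * b * c
    assert det3(0, b, c) == 13 * (b ** 3 + 13 * c ** 3)
```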
1,742,982
<p>I was trying to solve the equation using factorial as shown below but now I'm stuck at this level and need help.</p> <p>$$C(n,3) = 2*C(n,2)$$</p> <p>$$\frac{n!}{3!(n-3)!} = 2\frac{n!}{2!(n-2)!}$$</p> <p>$$3! (n - 3)! = (n - 2)!$$</p>
user5713492
316,404
<p>We definitely don't need calculus or a graphing calculator for this problem. We are trying to find $$-x^4-2x^3+15x^2=y_{max}$$ Rewrite as $$x^4+2x^3-15x^2+y_{max}=0$$ Now if we knew where the maximum occurred, say at $x=a$, then $a$ would be a root of this equation. Also it must be a double root, or else the graph of the function would cross the $x$-axis at this point, so it wouldn't be a maximum. Thus $(x-a)^2=x^2-2ax+a^2$ must be a factor of $x^4+2x^3-15x^2+y_{max}$. Carrying out polynomial long division, we get a quotient of $$x^2+(2a+2)x+(3a^2+4a-15)$$ and a remainder of $$(4a^3+6a^2-30a)x+y_{max}-(3a^4+4a^3-15a^2)$$ The quotient is of little interest to us, but the remainder must be the zero polynomial, so $$4a^3+6a^2-30a=2a(2a^2+3a-15)=0$$ Here we can recognize the derivative, but no calculus was harmed in the production of this post! So $$a\in\left\{0,\frac{-3+\sqrt{129}}4,\frac{-3-\sqrt{129}}4\right\}$$ And since $$y_{max}=3a^4+4a^3-15a^2$$ We can just test the $3$ roots and get values of $0$, $28.182551$, and $119.754949$ respectively. Thus the max occurs at $a=\frac{-3-\sqrt{129}}4\approx-3.598454$ and has the value $y_{max}=3a^4+4a^3-15a^2\approx119.754949$. </p> <p><strong>EDIT</strong>: I thought that it would be tedious to expand that polynomial in $a$ to find the exact value of $y_{max}$, but if we observe that $y_{max}=\frac{u+v\sqrt{129}}{256}\approx119.754949$ and then $\frac{u-v\sqrt{129}}{256}\approx28.182551$, where $u$ and $v$ are integers, we see that $u=128(119.754949+28.182551)=18936=8\cdot2367$ and $v=128(119.754949-28.182551)/\sqrt{129}=1032=8\cdot129$, we only have to be sure of our approximate results for $u$ and $v$ to the nearest integer to obtain the exact result $y_{max}=\frac{2367+129\sqrt{129}}{32}$.</p>
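<p>A quick numeric check of the conclusion (my own sketch): plugging $a=\frac{-3-\sqrt{129}}4$ into the quartic reproduces the claimed exact maximum, and nearby points give smaller values:</p>

```python
import math

f = lambda x: -x ** 4 - 2 * x ** 3 + 15 * x ** 2

a = (-3 - math.sqrt(129)) / 4               # approx -3.598454
y_max = (2367 + 129 * math.sqrt(129)) / 32  # approx 119.754949

assert abs(f(a) - y_max) < 1e-9
# a is indeed a maximum: neighbouring points give smaller values
assert f(a) > f(a - 0.01) and f(a) > f(a + 0.01)
```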
1,041,226
<p>I need to prove the following: ${n\choose m-1}+{n\choose m}={n+1\choose m}$, $1\leq m\leq n$.</p> <p>With the definition: ${n\choose m}= \left\{ \begin{array}{ll} \frac{n!}{m!(n-m)!} &amp; \textrm{for \(m\leq n\)} \\ 0 &amp; \textrm{for \(m&gt;n\)} \end{array} \right.$</p> <p>and $n,m\in\mathbb{N}$.</p> <p>I'm not really used to calculations with factorials and can't make much sense of it...</p>
Artem
29,547
<p>So $$ n+1\choose m $$ counts the number of ways to choose $m$ elements out of $n+1$. Now fix one particular element among the $n+1$. Either this element is among the chosen $m$, in which case we must pick the remaining $m-1$ elements from the other $n$, in ${n\choose m-1}$ ways; or it is not, and we pick all $m$ elements from the other $n$, in ${n\choose m}$ ways. Since these two cases together account for every way to choose $m$ out of $n+1$ elements, the required result follows.</p>
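<p>The identity is also easy to spot-check exhaustively for small $n$ with Python's exact <code>math.comb</code>:</p>

```python
from math import comb

# Pascal's rule: C(n, m-1) + C(n, m) == C(n+1, m) for 1 <= m <= n
for n in range(1, 30):
    for m in range(1, n + 1):
        assert comb(n, m - 1) + comb(n, m) == comb(n + 1, m)
```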
1,041,226
<p>I need to prove the following: ${n\choose m-1}+{n\choose m}={n+1\choose m}$, $1\leq m\leq n$.</p> <p>With the definition: ${n\choose m}= \left\{ \begin{array}{ll} \frac{n!}{m!(n-m)!} &amp; \textrm{for \(m\leq n\)} \\ 0 &amp; \textrm{for \(m&gt;n\)} \end{array} \right.$</p> <p>and $n,m\in\mathbb{N}$.</p> <p>I'm not really used to calculations with factorials and can't make much sense of it...</p>
Zachary Hamm
11,091
<p>I struggled with this problem a bit too, and I feel the other answers don't spell out all of the steps.</p> <p>For my solution there are three key insights: that ${m! = m(m-1)!}$, that ${(m + 1)! = m!(m+1)}$, and that if the proposition is true, then ${m!(n - m + 1)!}$ must work as the common denominator for ${n\choose m}$ and ${n\choose m - 1}$, because we know that ${{n+1\choose m} = \frac{(n + 1)!}{m!(n + 1 - m)!} = \frac{(n + 1)!}{m!(n -m + 1)!}}$.</p> <p>With that in mind, the question is how to transform ${n \choose m}$ and ${n \choose m - 1}$ so that they both share the denominator ${m!(n - m + 1)!}$.</p> <p>Let's start with ${n \choose m - 1}$:</p> <p>$$\begin{align} {n \choose m - 1} &amp;= \frac{n!}{(m-1)!(n - m + 1)!}\\ &amp;= \frac{m!}{m!} \cdot \frac{n!}{(m-1)!(n - m + 1)!}\\ &amp;= \frac{m!n!}{m!(m-1)!(n -m + 1)!}\\ &amp;= \frac{m(m-1)!n!}{m!(m-1)!(n-m+1)!}\\ &amp;= \frac{n!m}{m!(n-m+1)!} \end{align}$$</p> <p>Now for ${n \choose m}$, the key insight is that ${(n-m+1)(n-m)! = (n-m+1)!}$ (let ${k = n-m}$; then we see that ${k!(k+1) = (k+1)! = (n-m+1)!}$):</p> <p>$$\begin{align} {n \choose m} &amp;= \frac{n!}{m!(n - m)!}\\ &amp;= \frac{(n -m + 1)}{(n-m+1)} \cdot \frac{n!}{m!(n - m)!}\\ &amp;= \frac{n!(n-m+1)}{m!(n-m+1)(n-m)!}\\ &amp;= \frac{n!(n-m+1)}{m!(n-m+1)!}\\ \end{align}$$</p> <p>Now, finally, we can do:</p> <p>$$\begin{align} \frac{n!m}{m!(n-m+1)!} + \frac{n!(n-m+1)}{m!(n-m+1)!} &amp;= \frac{n!m + n!(n-m+1)}{m!(n-m+1)!}\\ &amp;= \frac{n!(n + 1)}{m!(n + 1 - m)!}\\ &amp;= \frac{(n + 1)!}{m!(n+1-m)!}\\ &amp;= {n + 1 \choose m} \end{align}$$</p>