4,216,602
<p>In this book - <a href="https://www.oreilly.com/library/view/machine-learning-with/9781491989371/" rel="nofollow noreferrer">https://www.oreilly.com/library/view/machine-learning-with/9781491989371/</a> - I came across the differentiation of these two terms, like this:</p> <p>Train - Applying a learning algorithm to data using numerical approaches like gradient descent. Fit - Applying a learning algorithm to data using analytical approaches.</p> <p>I don't quite understand the difference.</p> <p>Can someone please elaborate and/or provide examples? Thank you.</p>
Somos
438,089
<p>As an educated guess, consider a set <span class="math-container">$\,\{(x_i,y_i)\}\,$</span> of data points and try to find a good linear model <span class="math-container">$\,y=mx+b\,$</span> for the data. The <a href="https://en.wikipedia.org/wiki/Least_squares" rel="nofollow noreferrer">least squares fit</a> approach uses analytical formulas to determine the optimum parameters <span class="math-container">$\,m,b\,$</span> in one step.</p> <p>A general approach is to start with an approximation to the parameters and then use numerical methods such as <a href="https://en.wikipedia.org/wiki/Gradient_descent" rel="nofollow noreferrer">gradient descent</a> to minimize the difference between the model and the data by adjusting the parameters iteratively.</p>
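The distinction can be made concrete with a toy linear model (a sketch added for illustration, not from the original answer; the data and learning rate are made up): the closed-form least-squares formulas produce the optimal $m, b$ in one analytical step, while gradient descent reaches the same values iteratively.

```python
import random

# Toy data from y = 3x + 1 plus a little noise (illustrative values only)
random.seed(0)
xs = [0.0, 0.5, 1.0, 1.5, 2.0]
data = [(x, 3 * x + 1 + random.uniform(-0.1, 0.1)) for x in xs]

# "Fit": closed-form least squares for y = m*x + b (one analytical step)
n = len(data)
sx = sum(x for x, _ in data); sy = sum(y for _, y in data)
sxx = sum(x * x for x, _ in data); sxy = sum(x * y for x, y in data)
m_fit = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b_fit = (sy - m_fit * sx) / n

# "Train": gradient descent on the mean squared error, adjusting m, b iteratively
m, b, lr = 0.0, 0.0, 0.05
for _ in range(5000):
    gm = sum(2 * (m * x + b - y) * x for x, y in data) / n
    gb = sum(2 * (m * x + b - y) for x, y in data) / n
    m, b = m - lr * gm, b - lr * gb

print(m_fit, b_fit)  # one-step analytical solution, close to (3, 1)
print(m, b)          # iterative solution; converges to the same values
```

Both routes minimize the same squared-error objective; they differ only in how the minimizer is reached.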
2,508,508
<p>Let $x_1 \in \mathbb R$ with $x_1&gt;1$, and let $x_{k+1}=2- \frac{1}{x_k}$ for all $k \in \mathbb N$. Show that the sequence $(x_k)_k$ is monotone and bounded and find its limit.</p> <p>I am not sure how to start this problem.</p>
Bumblebee
156,886
<p>$$\dfrac12\left(x_{k+1}+\dfrac1{x_k}\right)=1.$$ This says the average of $x_{2}$ and $\dfrac1{x_1}$ equals one. Now suppose $1\lt x_1.$ By the AM-GM inequality $x_1+\dfrac1{x_1}\gt2$, and this implies $$\text{distance}(x_1,1)\gt\text{distance}\left(1,\dfrac1{x_1}\right)=\text{distance}(x_2,1).$$ Hence $1\lt x_2\lt x_1.$ The same reasoning shows that $(x_k)_{k\in\Bbb{N}}$ is monotone decreasing. </p>
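A quick numerical sketch (added for illustration, not part of the original answer; the starting value $x_1=3$ is arbitrary) confirms the monotone decrease toward the fixed point of $x = 2 - 1/x$, namely $x = 1$:

```python
# Iterate x_{k+1} = 2 - 1/x_k from x_1 = 3; any x_1 > 1 behaves the same way
x = 3.0
seq = [x]
for _ in range(50):
    x = 2 - 1 / x
    seq.append(x)

# The terms strictly decrease while staying above 1, approaching the limit 1
assert all(a > b > 1 for a, b in zip(seq, seq[1:]))
print(seq[:4], seq[-1])
```

Convergence is slow: from $x_1=3$ one gets $x_k=(2k+1)/(2k-1)$ by induction, which approaches $1$ only like $1/k$.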
3,521,224
<p>Let <span class="math-container">$(U_1,U_2,...) , (V_1,V_2,...)$</span> be two independent sequences of i.i.d. Uniform (0, 1) random variables. Define the stopping time <span class="math-container">$N = \min\left(n\geqslant 1\mid U_n \leqslant V^2_n\right)$</span>.</p> <p>Obtain <span class="math-container">$P(N = n)$</span> and <span class="math-container">$P(V_N \leqslant v)$</span> for <span class="math-container">$n = 1,2,\ldots$</span> and <span class="math-container">$0 \leqslant v \leqslant 1$</span>.</p> <p>I know that I should use conditioning in order to get the probability. </p> <p>I also know that I have to check that if <span class="math-container">$U_1 \leqslant V_1^2$</span> then <span class="math-container">$N=1$</span>.</p>
Davide Giraudo
9,849
<p>Here are some hints.</p> <ol> <li>The event <span class="math-container">$\{N=n\}$</span> can be written as <span class="math-container">$A_n\cap\bigcap_{i=1}^{n-1}A_i^c$</span>, where <span class="math-container">$A_i=\{U_i\leqslant V_i^2\}$</span>. </li> <li>Show that the collection of events <span class="math-container">$A_n,A_1^c,\dots,A_{n-1}^c$</span> is independent. </li> <li>Compute the probability of <span class="math-container">$A_i$</span>.</li> </ol> <p>For the second part, start from <span class="math-container">$$ P\left(V_N\leqslant v\right)=\sum_{n\geqslant 1}P\left(V_n\leqslant v,N=n\right). $$</span> Then use the previous decomposition of <span class="math-container">$\{N=n\}$</span> and use independence to get the value of <span class="math-container">$P\left(V_n\leqslant v,N=n\right)$</span> for all <span class="math-container">$n$</span>.</p>
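Carrying out the last hint gives $P(A_i)=P(U_i\leqslant V_i^2)=\int_0^1 v^2\,dv=\frac13$, so $N$ should be geometric: $P(N=n)=\left(\frac23\right)^{n-1}\frac13$, with mean $3$. A Monte Carlo check (a sketch added for illustration, not part of the original hints):

```python
import random

random.seed(1)

# N = first index n with U_n <= V_n^2; P(U <= V^2) = E[V^2] = 1/3,
# so N should be Geometric(1/3) with mean 3
def sample_N():
    n = 1
    while random.random() > random.random() ** 2:  # U > V^2: failure, keep going
        n += 1
    return n

samples = [sample_N() for _ in range(200_000)]
p1 = sum(s == 1 for s in samples) / len(samples)
mean = sum(samples) / len(samples)
print(p1)    # close to 1/3
print(mean)  # close to 3
```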
191,210
<p>Let $R$ be the smallest $\sigma$-algebra containing all compact sets in $\mathbb R^n$. I know that based on definition the minimal $\sigma$-algebra containing the closed (or open) sets is the Borel $\sigma$-algebra. But how can I prove that $R$ is actually the Borel $\sigma$-algebra?</p>
Shankara Pailoor
39,210
<p>Well, the Borel $\sigma$-algebra is the $\sigma$-algebra generated by the open (or closed) sets of $\mathbb{R}^n$. I believe (I don't want to put words in your mouth) you are asking whether the $\sigma$-algebra generated by the compact sets is equal to the $\sigma$-algebra generated by the open sets.</p> <p>Since $R$ is not the most suggestive name for a $\sigma$-algebra over the reals, let us denote the $\sigma$-algebra generated by the compact sets by $\mathfrak{C}$ and the Borel $\sigma$-algebra by $\mathfrak{B}$.</p> <p>Now every compact set is closed, hence Borel, so $\mathfrak{C} \subset \mathfrak{B}$. Conversely, we want to show $\mathfrak{B} \subset \mathfrak{C}$. Let $F \subset \mathbb{R^n}$ be a closed set. Consider $F_n = F \cap \overline{B(0, n)}$, where $B(0,n)$ denotes the open ball of radius $n$ centered at the origin. Then $(F_n)$ is a sequence of compact sets whose union equals $F$, so $F \in \mathfrak{C}$ as a countable union of elements of $\mathfrak{C}$. Hence all closed sets are in $\mathfrak{C}$, and since the closed sets generate $\mathfrak{B}$, it follows that $\mathfrak{B} \subset \mathfrak{C}$. Thus, the two $\sigma$-algebras are equal. </p>
1,633,810
<p>For which $a \in \mathbb{R}$ is the integral $\int_1^\infty x^ae^{-x^3\sin^2x}dx$ finite?</p> <p>I've been struggling with this question. Obviously when $a&lt;-1$ the integral converges, but I have no idea what happens when $a\ge -1 $.</p> <p>Any help would be appreciated</p>
Johannes Hahn
62,443
<p>If $a &lt; 0.5$, the integral converges. This is because we can split the integral over $[1,\infty)$ into an integral over $\bigcup_k (-\epsilon_k+k\pi,+\epsilon_k+k\pi)$ and an integral over all the rest.</p> <p>The integral over $(-\epsilon_k + k\pi, +\epsilon_k+k\pi)$ can be bounded by $(k\pi)^a \cdot 2\epsilon_k$. Therefore: if we choose $\epsilon_k:=k^{-1.5+\delta}$ with $\delta&gt;0$ small enough that $a-1.5+\delta&lt;-1$, then $\epsilon_k \to 0$ fast enough that the sum of the integrals over all the little bits converges.</p> <p>Now for the complement of the little bits. Outside of $(-\epsilon_k,+\epsilon_k)+k\pi$ we have $\sin(x)^2\geq \sin(k^{-1.5+\delta})^2$ and thus $-x^3 \sin(x)^2 \leq -k^{2\delta} \pi^3 (k^{1.5-\delta}\sin(k^{-1.5+\delta}))^2$, where $k^{1.5-\delta}\sin(k^{-1.5+\delta}) \xrightarrow{k\to\infty} 1$ because $\lim_{h\to 0} \frac{\sin(h)}{h}=1$. Therefore the piece of the integral near $k\pi$ lying outside the little bit can be bounded from above by $\pi (k\pi)^a e^{-Ck^{2\delta}}$ for some constant $C&gt;0$, and the sum of these bounds converges for all $\delta&gt;0$.</p> <hr> <p>Conversely: if $a\geq 0.5$, the integral diverges. Again we look at the integrals over the little bits. This time we choose $\epsilon_k := k^{-1.5}$. Then we use $|\sin(x)|\leq |x-k\pi|$ near $x=k\pi$ and get $$\int_{-\epsilon_k+k\pi}^{+\epsilon_k+k\pi} x^a e^{-x^3 \sin(x)^2}\,dx \geq \int_{-\epsilon_k+k\pi}^{+\epsilon_k+k\pi} x^a e^{-x^3 \epsilon_k^2}\,dx \geq 2\epsilon_k \cdot (-\epsilon_k+k\pi)^a e^{-(\epsilon_k+k\pi)^3 k^{-3}}.$$ This behaves asymptotically like a constant times $k^{a-1.5}$, because the exponent stays bounded (it tends to $-\pi^3$). Therefore the sum over all these integrals diverges when $a\geq 0.5$.</p>
4,130,809
<p>I have questions regarding the proof that I made about the following statement: &quot;Let <span class="math-container">$(X,\tau_{X})$</span> be a topological space and <span class="math-container">$\lbrace \infty\rbrace$</span> an object that doesn't belong to X. Define <span class="math-container">$Y=X\cup\lbrace\infty\rbrace$</span> and <span class="math-container">\begin{equation} \tau_{\infty}=\lbrace U\subset Y|U\in \tau_X\text{ or }Y-U\text{ is compact and closed in } X\rbrace \end{equation}</span> which is a topology on <span class="math-container">$Y$</span>. Show that <span class="math-container">$(Y,\tau_{\infty})$</span> is compact.</p> <p>Ok, here is my attempt of the proof: Let <span class="math-container">$C=\lbrace C_\alpha \rbrace_{\alpha \in \Lambda}$</span> be an arbitrary open cover of <span class="math-container">$Y$</span>. Since <span class="math-container">$C_\alpha \in \tau_\infty$</span> for every <span class="math-container">$\alpha$</span>, then <span class="math-container">$Y-C_\alpha$</span> is compact and closed in <span class="math-container">$X$</span> for every <span class="math-container">$\alpha$</span>. Since <span class="math-container">$Y-C_\alpha$</span> is compact, let <span class="math-container">$C_x=\lbrace C_{\alpha_x}\rbrace_{\alpha_x\in \Lambda_x}$</span> be any open cover of <span class="math-container">$Y-C_\alpha$</span>. Then there is a finite subcover <span class="math-container">$C_x^{\prime}=\lbrace C_{\alpha_{x},i}\rbrace_{i=1}^n$</span>, with <span class="math-container">\begin{equation*} Y-C_\alpha \subset \cup_{i=1}^nC_{\alpha_{x},i}. \end{equation*}</span></p> <p>Now, select <span class="math-container">$\alpha_\infty$</span> such that <span class="math-container">$C_{\alpha_\infty}$</span> in <span class="math-container">$C$</span> contains the object <span class="math-container">$\lbrace \infty \rbrace$</span>. 
Then, <span class="math-container">\begin{equation} Y-C_{\alpha_\infty}\subset \cup_{i=1}^nC_{\alpha_{x},i} \end{equation}</span> Then clearly <span class="math-container">$\lbrace C_{\alpha_{x},i}\rbrace_{i=1}^n \cup \lbrace C_{\alpha_\infty}\rbrace$</span> is a finite subcover of <span class="math-container">$Y$</span>. Hence, <span class="math-container">$(Y,\tau_\infty)$</span> is compact.</p> <p>I just want to know if my proof is correct. I would appreciate any corrections or other suggestions in case if it is incorrect. Thank you!</p>
Paul Frost
349,785
<p>Your idea is correct, but you do not properly elaborate it. You have to find a finite subcover of <span class="math-container">$C=\lbrace C_\alpha \rbrace_{\alpha \in \Lambda}$</span>.</p> <p>You consider any open cover <span class="math-container">$C_x=\lbrace C_{\alpha_x}\rbrace_{\alpha_x\in \Lambda_x}$</span> of <span class="math-container">$Y-C_\alpha$</span> - but this has nothing to do with the given cover <span class="math-container">$C$</span>.</p> <p>So proceed as follows:</p> <ol> <li><p>Select <span class="math-container">$\alpha_\infty$</span> such that <span class="math-container">$C_{\alpha_\infty}$</span> in <span class="math-container">$C$</span> contains the object <span class="math-container">$\lbrace \infty \rbrace$</span>.</p> </li> <li><p>The set <span class="math-container">$Y-C_{\alpha_\infty}$</span> is compact and the collection <span class="math-container">$C' = \{ C_{\alpha} \cap (Y-C_{\alpha_\infty}) \}$</span> is an open cover of <span class="math-container">$Y-C_{\alpha_\infty}$</span>. Hence it has a finite subcover <span class="math-container">$C_{\alpha_i} \cap (Y-C_{\alpha_\infty})$</span>, <span class="math-container">$i = 1,\dots, n$</span>. But then <span class="math-container">$\{ C_{\alpha_i} \mid i= 1,\dots,n,\infty\}$</span> is a finite subcover of <span class="math-container">$C$</span>.</p> </li> </ol> <p>The space <span class="math-container">$Y$</span> is known as the <em>Alexandroff compactification of <span class="math-container">$X$</span></em>. See <a href="https://math.stackexchange.com/search?q=Alexandroff+compactification">https://math.stackexchange.com/search?q=Alexandroff+compactification</a> and <a href="https://en.wikipedia.org/wiki/Alexandroff_extension" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Alexandroff_extension</a>.</p>
2,348,131
<p>In our class, we encountered a problem that goes something like this: "A ball is thrown vertically upward with ...". Since the motion of the object is rectilinear free fall, we all agree that the magnitude of the acceleration $a(t)$ is 32 feet per second squared. However, we are confused about the sign of $a(t)$, that is, whether it is positive or negative. </p> <p>Now, various references state that if we let the upward direction be positive then $a$ is negative, and if we let downward be the positive direction, then $a$ is positive. The problem with their claim is that they do not explain how they arrived at that conclusion. </p> <p>My question is: why is the acceleration $a$ negative if we choose the upward direction to be positive? Note: I need a simple but comprehensive answer. Thanks in advance. </p>
tempx
357,017
<p>Acceleration is defined as the derivative of the velocity, i.e. $a(t)=\frac{dv(t)}{dt}$. Take upward as the positive direction. While the ball moves upward its velocity is positive but decreasing, so its derivative, the acceleration, is negative. The same holds on the way down: the velocity keeps decreasing (becoming more negative), so $a$ stays negative throughout the flight. </p>
3,204,082
<p>I have a conjecture, but have no idea how to prove it or where to begin. The conjecture is as follows:</p> <blockquote> <p>A polynomial with all real irrational coefficients and no greatest common factor has no rational zeros.</p> </blockquote> <p>This conjecture excludes the cases where the polynomial does have a greatest common factor despite having an irrational coefficient, such as <span class="math-container">$x^3+\pi x^2=0$</span>, as that has rational zero <span class="math-container">$0$</span>.</p> <p>I know that not all polynomials with rational coefficients have rational zeros, but I am not sure how to begin. How would I go about beginning to prove this? Has it already been proved - or is there a counterexample that I am missing?</p>
Bruno Seefeld
668,896
<p>For the case of just one irrational coefficient <span class="math-container">$a_i$</span> (all other coefficients rational), suppose for contradiction that there is a rational root <span class="math-container">$q\neq 0$</span>. Then from <span class="math-container">$a_0+a_1q+\dots+a_nq^n=0$</span> we can solve <span class="math-container">$$a_i=-\frac{1}{q^i}\sum_{j\neq i} a_j q^j,$$</span> hence <span class="math-container">$a_i$</span> is rational, a contradiction. (The root <span class="math-container">$q=0$</span> is excluded because it occurs only when <span class="math-container">$a_0=0$</span>, i.e. when the polynomial has the common factor <span class="math-container">$x$</span>.) Therefore in this case there are no rational roots. </p>
634,344
<p>I'm trying to work alone through Fulton's introduction to algebraic topology. He asks whether there is a function $g$ on a region such that $dg$ equals the form $$\omega =\dfrac{-y\,dx+x\,dy}{x^2+y^2}$$ in some regions. I know you can do it on the upper half plane by considering $-\arctan(x/y)$. But I'm a bit confused. I know that you can measure the angle, except maybe on a fixed half line, like taking away the negative axis; but on the other hand, $\tan$ is bijective on intervals of length $\pi$. So, can you find such a function on the union of the right half plane and the upper half plane? Thanks </p>
Gil Bor
118,580
<p>Yes you can. You can solve the equation $dg=\omega$ in any open subset $U\subset \mathbb R^2\setminus \{(0,0)\}$ which does not "enclose" the origin $(0,0)\in\mathbb R^2.$ Formally: there exists a continuous $\gamma:[0, \infty)\to \mathbb R^2\setminus U$ such that $\gamma(0)=(0,0)$ and $\|\gamma(t)\|\to\infty$ as $t\to\infty$. (In your example, you can take say $\gamma(t)=(-t,-t)$.) </p> <p>Once $U$ satisfies this condition, you can define $g(x)$ by integrating $\omega$ along any path connecting some fixed point $x_0\in U$ to $x$ and avoiding $\gamma$. This definition is independent of the path, since $\mathbb R^2\setminus\gamma([0,\infty))$ is simply connected: any two such paths can be deformed into each other, leaving the integral defining $g(x)$ unchanged, because $\omega$ is closed. </p>
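The path-independence can be seen numerically (a sketch added for illustration; the two paths and the step count are arbitrary choices): $\omega$ is the angle form $d\theta$, so integrating it along any path in the region from $(1,0)$ to $(0,1)$ that avoids the deleted ray should give $\pi/2$.

```python
import math

# omega = (-y dx + x dy) / (x^2 + y^2); integrate along a parametrized path
def integrate_omega(path, n=20_000):
    total = 0.0
    x0, y0 = path(0.0)
    for i in range(1, n + 1):
        x1, y1 = path(i / n)
        xm, ym = (x0 + x1) / 2, (y0 + y1) / 2  # midpoint rule on each small step
        total += (-ym * (x1 - x0) + xm * (y1 - y0)) / (xm * xm + ym * ym)
        x0, y0 = x1, y1
    return total

# Two different paths from (1,0) to (0,1), both avoiding the ray gamma(t) = (-t,-t)
arc = lambda t: (math.cos(math.pi / 2 * t), math.sin(math.pi / 2 * t))
segment = lambda t: (1 - t, t)

print(integrate_omega(arc))      # both values are approximately pi/2
print(integrate_omega(segment))
```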
2,325,968
<p>I was trying to calculate $e^{i\pi /3}$. So here is what I did: $e^{i\pi /3} = (e^{i\pi})^{1/3} = (-1)^{1/3} = -1$.</p> <p>Yet when I plug $e^{i\pi /3}$ into my calculator it just prints $0.5 + 0.866i$.</p> <p>Where am I wrong? </p>
CY Aries
268,334
<p>You took $(-1)^{\frac{1}{3}}=-1$. But in fact there are three complex numbers whose cube is $-1$: they are $-1$, $e^{\frac{i\pi}{3}}$ and $e^{-\frac{i\pi}{3}}$. The rule $(e^a)^b=e^{ab}$ does not single out one value for complex bases and fractional exponents, and your calculator reports $e^{\frac{i\pi}{3}}=\frac12+\frac{\sqrt3}{2}i\approx 0.5+0.866i$, which is one of those three cube roots.</p>
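A quick check (added for illustration, not part of the original answer) with Python's `cmath` confirms all three cube roots and the calculator's value:

```python
import cmath

# The three complex numbers whose cube is -1
roots = [-1, cmath.exp(1j * cmath.pi / 3), cmath.exp(-1j * cmath.pi / 3)]
for z in roots:
    print(z, z ** 3)  # each cube equals -1 up to rounding

# e^{i*pi/3} itself is 0.5 + 0.866i, matching the calculator
print(cmath.exp(1j * cmath.pi / 3))
```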
3,171,152
<p>Let <span class="math-container">$\gamma_1$</span> be the straight line from <span class="math-container">$-i$</span> to <span class="math-container">$i$</span> and let <span class="math-container">$\gamma_2$</span> be the semicircle of radius 1 in the right half plane from <span class="math-container">$-i$</span> to <span class="math-container">$i$</span>.</p> <p>Evaluate</p> <p><span class="math-container">$$\int_{\gamma_1}f(z)dz$$</span></p> <p>and <span class="math-container">$$\int_{\gamma_2}f(z)dz$$</span></p> <p>where <span class="math-container">$f(z)=\bar z$</span> is the complex conjugate.</p> <p>And give a reason as to why the two answers are different.</p> <hr> <p><strong>My approach:</strong></p> <p>Let <span class="math-container">$$\gamma_1=(1-t)(-i)+it = i2t-i,\quad t\in[0,1]$$</span></p> <p>and </p> <p>Let <span class="math-container">$$\gamma_2=e^{i\theta} = \cos(\theta) + i\sin(\theta),\quad \frac{3\pi}{2}\leq\theta\leq\frac{\pi}{2}$$</span></p> <p>Conjugate of <span class="math-container">$\gamma_1$</span>: <span class="math-container">$$-i2t+i$$</span></p> <p>Conjugate of <span class="math-container">$\gamma_2$</span>: <span class="math-container">$$\cos(\theta) - i\sin(\theta)$$</span></p> <p>Integrals</p> <p><span class="math-container">$$\int_0^1-i2t+i\,dt=0$$</span></p> <p><span class="math-container">$$\int_\frac{3\pi}{2}^{\frac{\pi}{2}}\cos(\theta) - i\sin(\theta)\,d\theta = 2$$</span></p> <p>Is the above work correct?</p> <p>Also, would "the integrals are different because the paths/curves were parameterised differently" be a valid reason??</p>
Leander Tilsted Kristensen
631,468
<p>Your approach is on the right path; however, you forgot to multiply by the derivative of the curve inside the integrals. The general formula for an integral over a parametrized curve is <span class="math-container">$$ \int_\gamma f(z) \: dz = \int_a^b f(\gamma(t)) \, \gamma'(t) \: dt $$</span></p> <p>Also, I would probably choose angles <span class="math-container">$\theta \in [-\pi/2,\pi/2]$</span> for the second curve. </p> <p>Also note that <em>if</em> <span class="math-container">$f$</span> were holomorphic in a simply connected domain containing the two curves, <em>then</em> the integrals would have been the same; hence we can conclude that this is not the case (which can also be verified directly using the Cauchy-Riemann equations).</p>
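With the derivative factor included, the two integrals come out as $0$ and $i\pi$. A numerical sketch (added for illustration; it parametrizes the segment as $\gamma_1(t)=2it-i$, $t\in[0,1]$, and the semicircle as $\gamma_2(t)=e^{it}$, $t\in[-\pi/2,\pi/2]$):

```python
import cmath
import math

# Approximate the contour integral of f along gamma via small chords:
# sum of f(gamma(t_mid)) * (gamma(t_{k+1}) - gamma(t_k))
def contour_integral(f, gamma, a, b, n=20_000):
    total = 0j
    for k in range(n):
        t0 = a + (b - a) * k / n
        t1 = a + (b - a) * (k + 1) / n
        total += f(gamma((t0 + t1) / 2)) * (gamma(t1) - gamma(t0))
    return total

f = lambda z: z.conjugate()
gamma1 = lambda t: 2j * t - 1j          # segment from -i to i
gamma2 = lambda t: cmath.exp(1j * t)    # right half circle from -i to i

print(contour_integral(f, gamma1, 0.0, 1.0))                   # close to 0
print(contour_integral(f, gamma2, -math.pi / 2, math.pi / 2))  # close to i*pi
```

The mismatch of the two values is consistent with $\bar z$ failing to be holomorphic.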
13,109
<p>I posted a question half an hour ago. But I think I have found the answer myself now. I understand that answering your own question is appreciated (instead of deleting it). But I don't know if I should give a hint or a full solution.</p> <p>It feels a little bit strange to give a hint to my <em>own</em> question, I don't know, it is like I'm trying to teach myself :P</p> <p>On the other hand, if someone is ever searching for this question (and it is a question from a popular book, so I think there is a good chance he/she will find this question), then you could argue that it is better to give a hint instead of a full solution. </p>
fgp
42,986
<p>I think in the long run, we <em>do</em> want full answers for all questions.</p> <p>For homework-type questions, IMHO the ideal procedure is</p> <ul> <li>Other users provide hints</li> <li>The OP, once he has managed to solve the problem, posts a full answer</li> </ul> <p>So yes, please post a full answer.</p>
1,114,822
<p>I have to prove that $d$ divides $n$ if and only if $\operatorname{ord}_p(d)\leq \operatorname{ord}_p(n)$ for every prime $p$.</p> <p>I have already proved that $\operatorname{ord}_p(d)\leq \operatorname{ord}_p(n)$ for every prime $p$ if $d$ divides $n$, but I am struggling to prove the converse. Can anyone give any help?</p>
coffeemath
30,316
<p>Let $d=\prod p^{a_p}$ and $n=\prod p^{b_p},$ where in each case the product is over all primes $p$ and the exponents are all $0$ beyond some point (and may be $0$ for lower primes also).</p> <p>Then since $ord_p(d) \le ord_p(n)$ we have $a_p \le b_p$ at each prime $p.$ From this it's easy to see that $d|n$, in fact $n=d\cdot k$ where $k=\prod p^{b_p-a_p}.$</p>
1,114,822
<p>I have to prove that $d$ divides $n$ if and only if $\operatorname{ord}_p(d)\leq \operatorname{ord}_p(n)$ for every prime $p$.</p> <p>I have already proved that $\operatorname{ord}_p(d)\leq \operatorname{ord}_p(n)$ for every prime $p$ if $d$ divides $n$, but I am struggling to prove the converse. Can anyone give any help?</p>
drhab
75,923
<p>Write: $$n=p_{1}^{r_{1}}\cdots p_{k}^{r_{k}}$$ where the $p_{i}$ are distinct primes and the $r_{i}$ are nonnegative integers. </p> <p>Then $\operatorname{ord}_{p}\left(d\right)\leq\operatorname{ord}_{p}\left(n\right)$ for each prime $p$ implies that we can write: $$d=p_{1}^{s_{1}}\cdots p_{k}^{s_{k}}$$ where the $s_i$ are integers that satisfy $0\leq s_{i}\leq r_{i}$ for $i=1,\dots,k$. </p> <p>Then $n=md$ for $m=p_{1}^{r_{1}-s_{1}}\cdots p_{k}^{r_{k}-s_{k}}$.</p>
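The argument rests on comparing prime exponents; a small computational sketch (the helper names and the pair $d=84$, $n=2520$ are illustrative choices, not from the answer):

```python
def ord_p(p, n):
    """Exponent of the prime p in the factorization of n."""
    e = 0
    while n % p == 0:
        n //= p
        e += 1
    return e

def primes_up_to(limit):
    # naive trial division; fine for small illustrative numbers
    return [p for p in range(2, limit + 1) if all(p % q for q in range(2, p))]

d, n = 84, 2520  # 84 = 2^2 * 3 * 7 divides 2520 = 2^3 * 3^2 * 5 * 7
ps = primes_up_to(n)
assert n % d == 0
assert all(ord_p(p, d) <= ord_p(p, n) for p in ps)

# Conversely, the exponent differences build the cofactor m with n = m * d
m = 1
for p in ps:
    m *= p ** (ord_p(p, n) - ord_p(p, d))
print(m, m * d)  # 30 and 2520
```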
1,552
<p>Closely related: what is the smallest known composite which has not been factored? If these numbers cannot be specified, knowing their approximate size would be interesting. E.g. can current methods factor an arbitrary 200 digit number in a few hours (days? months? or what?). Can current methods certify that an arbitrary 1000 digit number is prime, or composite in a few hours (days? months? not at all?).</p> <p>Any broad-brush comments on the current status of primality proving, and how active this field is would be appreciated as well. Same for factoring.</p> <hr> <p>Edit: perhaps my header question was something of a troll. I am not interested in lists. But if anyone could shed light on the answers to the portion of my question starting with "E.g.". it would be appreciated. (I could answer it in 1990, but what is the status today?)</p>
Alon Amit
25
<p>Back in 1990, whatever answer would have been given to the question would likely correspond to the processing power of a single (possibly large) computer. Today, the best algorithms are ones that can be efficiently distributed, leading to successful factorizations performed by networks of computers. </p> <p>With the General Number Field Sieve, one can expect to factor a 200-(decimal)-digit integer in several months of computer time. See <a href="http://en.wikipedia.org/wiki/Integer_factorization_records" rel="nofollow">here</a> for some records. </p> <p>Primality testing - even with certificates - extends to much larger numbers, and I'm not sure what the current state of the art is. Integers of special form (Mersenne primes and similarly constructed numbers, mostly) have been tested at sizes with exponents in the millions, but I'm not sure how high you can push the tests for general integers. </p>
2,051,555
<p>I have the following limit to solve.</p> <p>$$\lim_{x \rightarrow 0}(1-\cos x)^{\tan x}$$</p> <p>I am normally supposed to solve it without using l'Hôpital, but I failed to do so even with l'Hôpital. I don't see how I can solve it without applying l'Hôpital a couple of times, which doesn't seem practical, nor how to solve the question without applying it. Thanks for the help.</p>
HBR
396,575
<p>Try with Taylor series when $x\to0^+$: $$\tan{x}\approx x$$ $$\cos{x}\approx 1-\frac{x^2}{2}$$ so that $$(1-\cos x)^{\tan x}\approx\left(\frac{x^2}{2}\right)^{x}=e^{x\log\frac{x^2}{2}}\longrightarrow e^0=1,$$ since $x\log x\to0$ as $x\to0^+$.</p>
2,051,555
<p>I have the following limit to solve.</p> <p>$$\lim_{x \rightarrow 0}(1-\cos x)^{\tan x}$$</p> <p>I am normally supposed to solve it without using l'Hôpital, but I failed to do so even with l'Hôpital. I don't see how I can solve it without applying l'Hôpital a couple of times, which doesn't seem practical, nor how to solve the question without applying it. Thanks for the help.</p>
Claude Leibovici
82,404
<p>Even if, apparently, Taylor series are not desired, maybe equivalents could be used: $$A=(1-\cos(x))^{\tan(x)}\implies \log(A)=\tan(x) \log(1-\cos(x))$$ Close to $x=0$, $$\cos(x)\sim 1-\frac{x^2} 2$$ $$1-\cos(x)\sim \frac{x^2} 2$$ $$\log(1-\cos(x))\sim 2\log(x)-\log(2)$$ $$\tan(x)\sim x$$ $$\log(A)=\tan(x) \log(1-\cos(x))\sim 2x\log(x)-x\log(2)$$ Now, when $x\to 0^+$, each of the two terms goes to $0$; so $\log(A)\to 0$ and then $A\to 1$.</p>
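A direct numerical check of the limit (added for illustration; the sample points are arbitrary) shows the values creeping up to $1$ as $x\to 0^+$:

```python
import math

# (1 - cos x)^tan(x) for shrinking x > 0 approaches 1
values = [(1 - math.cos(x)) ** math.tan(x) for x in [0.5, 0.1, 0.01, 0.001, 1e-5]]
print(values)  # increasing toward 1
```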
2,604,825
<p>So I have a problem (two problems, actually) that a friend helped me out with. I'm able to work out the components of this problem but get lost when I have to bring it all together... so what I have is this:</p> <p>$f(t)=t^{2}e^{-2t}+e^{-t}\cos(3t)+5$</p> <p>Simple enough. I got:</p> <p>$$\mathcal{L}[t^2]=\frac{2}{s^3}$$</p> <p>$$\mathcal{L}[e^{-2t}]=\frac{1}{s+2}$$</p> <p>$$\mathcal{L}[e^{-t}]=\frac{1}{s+1}$$</p> <p>$$\mathcal{L}[\cos(3t)]=\frac{s}{s^2+9}$$</p> <p>$$\mathcal{L}[5]=\frac{5}{s}$$</p> <p>Now the problem is bringing it all together... Apparently,</p> <p>$$\mathcal{L}[t^2e^{-2t}]=\frac{2}{(s+2)^3}$$</p> <p>And</p> <p>$$\mathcal{L}[e^{-t}\cos(3t)]=\frac{s+1}{(s+1)^2+9}$$</p> <p>and I just don't seem to get why...? How do you bring those guys together like that? What's the reason behind it?</p> <p>The other problem is much simpler and I'm sure I have the correct answer... it is:</p> <p>$$t^3-5\cos(5t)$$</p> <p>I get:</p> <p>$$\mathcal{L}[t^3]=\frac{6}{s^4}$$</p> <p>And</p> <p>$$\mathcal{L}[5\cos(5t)]=\frac{5s}{s^2+25}$$</p> <p>which together is supposedly,</p> <p>$$\mathcal{L}[t^3-5\cos(5t)]=\frac{6}{s^4}-\frac{5s}{s^2+25}$$</p> <p>Is it correct? Any help at all is much appreciated. Thank you.</p>
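The pattern being asked about is the frequency-shift rule $\mathcal{L}[e^{-at}f(t)](s)=F(s+a)$, which gives $\mathcal{L}[t^2e^{-2t}]=\frac{2}{(s+2)^3}$ (since $\mathcal{L}[t^2]=\frac{2}{s^3}$) and $\mathcal{L}[e^{-t}\cos 3t]=\frac{s+1}{(s+1)^2+9}$. A crude numerical check (added for illustration; the sample point $s=1.7$ and the quadrature settings are arbitrary):

```python
import math

def laplace(f, s, T=40.0, n=100_000):
    # midpoint-rule approximation of the integral of f(t) e^{-st} over [0, T];
    # the tail beyond T is negligible for these decaying integrands
    h = T / n
    return h * sum(f((k + 0.5) * h) * math.exp(-s * (k + 0.5) * h) for k in range(n))

s = 1.7  # an arbitrary point in the region of convergence

# shift rule: L[t^2 e^{-2t}](s) = 2 / (s + 2)^3
print(laplace(lambda t: t * t * math.exp(-2 * t), s), 2 / (s + 2) ** 3)

# shift rule: L[e^{-t} cos(3t)](s) = (s + 1) / ((s + 1)^2 + 9)
print(laplace(lambda t: math.exp(-t) * math.cos(3 * t), s),
      (s + 1) / ((s + 1) ** 2 + 9))
```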
Jack D'Aurizio
44,121
<p>For a greater accuracy, $$ \int_{0}^{\pi/2}\sqrt{\sin x}\,dx = \int_{0}^{1}\sqrt{\frac{u}{1-u^2}}\,du\stackrel{\text{Beta}}{=}\frac{\sqrt{2\pi}^3}{\Gamma\left(\frac{1}{4}\right)^2}=\text{AGM}(1,\sqrt{2})$$ (<a href="https://en.wikipedia.org/wiki/Particular_values_of_the_gamma_function" rel="noreferrer">particular values of the $\Gamma$ function</a>) and $$\text{AGM}\left(1,\sqrt{2}\right)=\text{AGM}\left(\frac{1+\sqrt{2}}{2},\sqrt[4]{2}\right)\geq \sqrt[8]{\frac{1+\sqrt{2}}{8}}\geq 1.1981. $$</p>
2,831,130
<p>Cauchy's induction principle states that:</p> <blockquote> <p>The propositions $p(1),...,p(n),...$ are all valid if: </p> <ol> <li>$p(2)$ is true.</li> <li>$p(n)$ implies $p(n-1)$ is true.</li> <li>$p(n)$ implies $p(2n)$ is true.</li> </ol> </blockquote> <p>How does one prove Cauchy's induction principle? Can it be used to prove everything that weak and strong induction can prove?</p> <p>If yes, how would one use Cauchy's induction principle to prove</p> <p>$$ 1+2^1+2^2+...+2^n=2^{n+1}-1 $$</p>
Mark
4,460
<p>$E\left[ \sum_{i=1}^n X_i \right] = \sum_{i=1}^n E[X_i] = \sum_{i=1}^n E[X_1] = n E[X_1] = np$, using linearity of expectation and the fact that the $X_i$ are identically distributed.</p>
4,043,625
<p><span class="math-container">\begin{equation} \left\{\begin{array}{@{}l@{}} 2x\equiv7\mod9 \\ 5x\equiv2\mod6 \end{array}\right.\,. \end{equation}</span> Can this system of congruences be solved? I notice that <span class="math-container">$(9,6) = 3 \ne 1$</span> so I can't apply the Chinese remainder theorem, but this doesn't imply that it can't be solved, so I thought to rewrite the second congruence as two separate ones, like this:</p> <p><span class="math-container">\begin{equation} \left\{\begin{array}{@{}l@{}} 2x\equiv7\mod9 \\ x\equiv0\mod2 \\ 5x\equiv2\mod3 \end{array}\right.\,. \end{equation}</span></p> <p>But the problem is still there: <span class="math-container">$(9,3) = 3 \ne 1$</span>, so can it be solved or is it impossible?</p>
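This particular system is in fact unsolvable, which can be seen directly (a brute-force sketch added for illustration, not part of the original question): any solution would repeat modulo $\operatorname{lcm}(9,6)=18$, and reducing both congruences modulo $\gcd(9,6)=3$ yields the contradiction $x\equiv 2$ versus $x\equiv 1 \pmod 3$.

```python
# A solution, if any, would repeat with period lcm(9, 6) = 18,
# so checking one full period settles the question
solutions = [x for x in range(18) if (2 * x) % 9 == 7 and (5 * x) % 6 == 2]
print(solutions)  # [] : the system has no solution

# The obstruction lives mod gcd(9, 6) = 3:
# 2x = 7 (mod 9) forces 2x = 1 (mod 3), i.e. x = 2 (mod 3),
# 5x = 2 (mod 6) forces 2x = 2 (mod 3), i.e. x = 1 (mod 3)
```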
marty cohen
13,079
<p>Because <span class="math-container">$f(n) \in O(\log_2(n)) \iff f(n) \in O(\log_2(n+1)) $</span>, and it is convenient to use the simpler form.</p>
2,483,794
<p>I'm trying to figure out the equality $$\frac{1}{y(1-y)}=\frac{1}{y-1}-\frac{1}{y}$$</p> <p>I have tried but keep ending up with RHS $\frac{1}{y(y-1)}$.</p> <p>Any help would be appreciated.</p>
DeepSea
101,504
<p><strong>hint</strong>: write the top as $1 = y + (1-y)$ and split the fraction using the formula: $\dfrac{a+b}{c} = \dfrac{a}{c} + \dfrac{b}{c}$ with $a = y, b = 1-y, c = y(1-y)$ , and simplify each fraction. Does this help ?</p>
2,069,392
<p>Given that $x^4+px^3+qx^2+rx+s=0$ has four positive roots.</p> <p>Prove that (1) $pr-16s\ge0$ (2) $q^2-36s\ge 0$</p> <p>with equality in each case if and only if the four roots are equal.</p> <p><strong>My Approach:</strong></p> <blockquote> <p>Let the roots of the equation</p> <p>$x^4+px^3+qx^2+rx+s=0$ be $\alpha,\beta,\eta,\delta$, with</p> <p>$\alpha&gt;0,\beta&gt;0,\eta&gt;0,\delta&gt;0$</p> <p>$\sum\alpha=-p$</p> <p>$\sum\alpha\beta=q$</p> <p>$\sum\alpha\beta\eta=-r$</p> <p>$\alpha\beta\eta\delta=s$</p> <p>I am confused about what the next step is. Please help me. </p> </blockquote>
Jyrki Lahtonen
11,619
<p>AM-GM inequality is the key.</p> <p>The product $pr$ consists of $16$ terms. Four of those terms are equal to $s$. The remaining twelve are the permuted versions of $\alpha^2\beta\eta$. The product of those twelve is equal to $s^{12}$, so by AM-GM their sum is $\ge12s$. </p> <p>In AM-GM we have equality iff all the involved numbers are equal. This translates easily to the requirement that $\alpha=\beta=\eta=\delta$.</p> <hr> <p>Might as well :-)</p> <p>The correct version of the second inequality reads $$q^2\ge 36s.$$ This is proven as follows. The symmetric polynomial $q$ has six terms. Therefore $q^2$ has $36$ terms (some are repeated). The product of those $36$ terms is equal to $\alpha^i\beta^j\eta^k\delta^\ell$. Each of those $36$ terms is of degree four, so their product is of degree $36\cdot4$. By symmetry $i=j=k=\ell$, so we can conclude that $i=j=k=\ell=36$, and the product is thus $s^{36}$. The AM-GM inequality strikes again. Leaving the extra claim to the reader.</p>
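The two inequalities can be spot-checked numerically via the elementary symmetric polynomials (a sketch added for illustration; the helper name and the root choices are arbitrary):

```python
import random
from itertools import combinations

random.seed(2)

def check(roots):
    # elementary symmetric polynomials give p, q, r, s via Vieta's formulas
    a, b, c, d = roots
    e1 = a + b + c + d
    e2 = sum(x * y for x, y in combinations(roots, 2))
    e3 = sum(x * y * z for x, y, z in combinations(roots, 3))
    e4 = a * b * c * d
    p, q, r, s = -e1, e2, -e3, e4
    return p * r - 16 * s, q * q - 36 * s

# Random positive roots: both quantities stay nonnegative
for _ in range(1000):
    t1, t2 = check([random.uniform(0.1, 10) for _ in range(4)])
    assert t1 >= -1e-9 and t2 >= -1e-9

print(check([2, 2, 2, 2]))  # equal roots give equality: (0, 0)
```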
15,162
<p>First off: I barely have any set theoretic knowledge, but I read a bit about cardinal arithmetic today and the following idea came to me, and since I found it kind of funny, I wanted to know a bit more about it.</p> <p>If $A$ is the set of all real positive sequences that either converge to $0$ or diverge to $\infty$, we put an equivalence relation "$\sim$" on $A$ defined as $a \sim b$ iff $\lim \frac a b \in \mathbb R ^+$.</p> <p>If $B$ is the set of all infinite cardinals, can we associate to every equivalence class $[a]$ in $A/\sim$ a cardinal $p([a])$ or to every cardinal $\lambda$ an equivalence class $[q(\lambda)]$ in such a way that the map $p: A/\sim \to B$, or $q: B \to A/\sim$ is a "homomorphism"? That is, so that we have</p> <p>$$ p([a] + [b]) = p([a]) + p([b]) $$ or $$ [q(\lambda + \mu)] = [q(\lambda)] + [q(\mu)]$$</p> <p>If yes, could this map even be surjective, injective or an "isomorphism"? (I don't know how many cardinals there are, of course...)</p> <p>It at least superficially seems to make some sense, since for cardinals $\lambda, \mu$ we have $\lambda + \mu = \max\{\lambda, \mu\}$ and the same is true for the classes of sequences, if we order them by $a &lt; b \Leftrightarrow \lim \frac a b = 0$.</p>
Andrés E. Caicedo
462
<p>Asaf's answer explains that there is no set of all cardinals, and that under the axiom of choice, the cardinals are well-ordered, and so it is impossible to have a homomorphism as you want.</p> <p>What remains is to see whether it is possible, in some models where the axiom of choice fails, to have a homomorphism $p$ as you suggest, with range a collection of sets, no two of which have the same cardinality. </p> <p>This is actually possible. In fact, it holds in models where the <a href="http://en.wikipedia.org/wiki/Axiom_of_determinacy" rel="nofollow">axiom of determinacy</a> is true, but the argument is sophisticated, and follows from recent work that started with results of Alexander Kechris and Scot Adams on Borel equivalence relations. The original reference is "Linear algebraic groups and countable Borel equivalence relations", Journal of the American Mathematical Society, Vol 13 (4), (2000) 909-943. This paper is explicitly about Borel equivalence relations, but their arguments hold in more general contexts, assuming that all sets of reals are "sufficiently nice" in ways that the axiom of determinacy guarantees. </p> <p>Benjamin Miller suggested the following general statement (I believe in this specific form the result is due to him):</p> <blockquote> <p>If $\le$ is an analytic (i.e., $\Sigma^1_1$) partial order on a Hausdorff space $X$, then there is a sequence of countable analytic equivalence relations $(E_x)_{x \in X}$ such that $$ x &lt; y \Longrightarrow | X/E_x | &lt; | X/ E_x \sqcup X/ E_y | = | X/ E_x \times X/E_y | = | X/ E_y | $$ for all $x, y \in X$.</p> </blockquote> <p>Here, <em>analytic</em> means the continuous image of a Borel set. Without choice, an inequality $|A|&lt;|B|$ simply means that there is an injection from $A$ into $B$ but not one from $B$ into $A$. $A\sqcup B$ is the disjoint union of $A$ and $B$; this is the canonical set associated to the cardinal sum $|A|+|B|$. 
</p> <p>The reason why this solves the problem is that the ordering in the space $A/\sim$ is analytic in this sense, and so we can assign to each class $[a]$ the set $p([a])=X/E_{[a]}$ for $(E_{[a]})_{[a]\in A/\sim}$ a sequence of countable equivalence relations as granted by the statement above. </p> <p>What follows is Ben's (very high level) sketch:</p> <p>(1) In all of these results, the key is the use of what we call "rigidity arguments", which provide us with aperiodic countable Borel equivalence relations $F_x$ equipped with ergodic, invariant Borel probability measures $m_x$ with the property that for all distinct $x, y \in X$, the pair $(F_x, m_x)$ is $F_y$-ergodic (i.e., if $C$ is a set of full $m_x$-measure and $f$ is an $m_x$-measurable homomorphism from $F_x\upharpoonright C$ to $F_y$, then there is a subset $B$ of $C$ with $m_x(C \setminus B) = 0$ such that $f[B]$ is contained in a single $F_y$-class).</p> <p>(2) Let $E_x$ denote the disjoint union of the equivalence relations of the form $F_y$, for $y \le x$. </p> <p>(3) We have that $x \le y$ implies that the identity map is a reduction of $E_x$ to $E_y$, i.e., it gives us an injection of $X/E_x$ into $X/E_y$.</p> <p>(4) If $x \le y$ is false, however, then any homomorphism from $E_x$ to $E_y$ would restrict down to a homomorphism from $F_x$ to $E_y$. Ergodicity of $m_x$ would then give a set $C$ of full $m_x$-measure and some $z \le y$ such that the map in question is a homomorphism from $F_x\upharpoonright C$ to $F_z$. As $x\ne z$, the map must therefore concentrate $m_x$-almost everywhere on a single $F_z$-class, and this implies that the original map could not have been a reduction. Thus $X/E_x$ cannot inject into $X/E_y$.</p> <p>(5) From the construction one can directly argue that (for any $x$) $X/E_x \times X/E_x$ is in bijection with $X/E_x$. 
From this, it is straightforward to check (using <a href="http://en.wikipedia.org/wiki/Cantor%E2%80%93Bernstein%E2%80%93Schroeder_theorem" rel="nofollow">Schroeder-Bernstein</a>) that if $x\le y$ then $X/E_x\sqcup X/E_y$, $X/E_x\times X/E_y$, and $X/E_y$ are all in bijection with one another. </p>
342,306
<p>An elementary embedding is an injection $f:M\rightarrow N$ between two models $M,N$ of a theory $T$ such that for any formula $\phi$ of the theory, we have $M\vDash \phi(a) \ \iff N\vDash \phi(f(a))$ where $a$ is a list of elements of $M$.</p> <p>A critical point of such an embedding is the least ordinal $\alpha$ such that $f(\alpha)\neq\alpha$. </p> <p>A large cardinal is a cardinal number that cannot be proven to exist within ZFC. They often appear to be critical points of an elementary embedding of models of ZFC where $M$ is the von Neumann hierarchy, and $N$ is some transitive model. Is this in fact true for all large cardinal axioms?</p>
Andrés E. Caicedo
462
<p>Not exactly. </p> <p>First of all, there are small large cardinals, such as inaccessible or Mahlo cardinals, for which I do not know of any natural formulation in terms of embeddings. </p> <p>Once we reach weakly compact cardinals, we can start expressing traditional large cardinal properties in terms of embeddings, but the models involved only satisfy fragments of $\mathsf{ZFC}$. </p> <p>In the realm of large cardinals past measurability, it is true that the large cardinal template is often used, but you need to be careful. Woodin cardinals, for example, are not even measurable in general (though they are the limit of measurable cardinals). </p> <p>In the choiceless context, one studies for example partition cardinals, and the natural formulation of these axioms is not in terms of embeddings; in fact, even though the partition properties typically imply measurability, I do not know of an embedding formulation that fully captures strong partition cardinals. </p> <p>Finally, there are large cardinal <em>notions</em> that do not come associated with a large cardinal <em>per se</em>, such as the existence of $0^\sharp$, and even though the relevant sets provide us with embeddings, these embeddings are typically only of partial (thin or set sized) models. </p> <p>You may want to visit <a href="http://cantorsattic.info/Cantor%27s_Attic" rel="nofollow noreferrer">Cantor's attic</a> for an overview of the main signposts in the large cardinal hierarchy.</p>
2,017,818
<p>Find three distinct triples (a, b, c) consisting of rational numbers that satisfy $a^2+b^2+c^2 =1$ and $a+b+c= \pm 1$.</p> <p>By distinct it means that $(1, 0, 0)$ is a solution, but $(0, \pm 1, 0)$ counts as the same solution.</p> <p>I can only seem to find two; namely $(1, 0, 0)$ and $( \frac{-1}{3}, \frac{2}{3}, \frac{2}{3})$. Is there a method to finding a third or is it still just trial and error?</p>
marty cohen
13,079
<p>Here's a start that shows that any other solutions would have to have distinct $a, b, $ and $c$.</p> <p>In $a^2+b^2+c^2 =1$ and $a+b+c= \pm 1$, if $a=b$, these become $2a^2+c^2 = 1, 2a+c = \pm 1$.</p> <p>Then $c = -2a\pm 1$, so $1 = 2a^2+(-2a\pm 1)^2 =2a^2+4a^2\pm 4a+1 =6a^2\pm 4a+1 $ so $0 = 6a^2\pm 4a =2a(3a\pm 2) $. Therefore $a=0$ or $a = \pm \frac23$.</p> <p>If $a=b=0$, then $c = \pm 1$; if $a=b=\pm \frac23$, then $c = -2a\pm 1 =\mp \frac43 \pm 1 =\pm \frac13 $ and these are the solutions that you already have.</p> <p>Therefore any other solutions would have to have distinct $a, b, $ and $c$.</p>
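<p>Not part of the original answer, but a quick exact search in Python (denominators up to 7, exact rational arithmetic) confirms the two known triples and turns up a genuinely new one with distinct entries, $(6/7,\,-2/7,\,3/7)$:</p>

```python
from fractions import Fraction
from itertools import product

def rational_triples(max_den=7):
    """Brute-force search (my own sketch, not from the answer above) for
    rational (a, b, c) with a + b + c = 1 and a^2 + b^2 + c^2 = 1,
    scanning numerators over a common denominator q <= max_den.
    Triples differing only in order are identified by sorting."""
    found = set()
    for q in range(1, max_den + 1):
        # a^2 <= 1 and b^2 <= 1 force the numerators into [-q, q]
        for i, j in product(range(-q, q + 1), repeat=2):
            a, b = Fraction(i, q), Fraction(j, q)
            c = 1 - a - b
            if a * a + b * b + c * c == 1:
                found.add(tuple(sorted((a, b, c))))
    return found

sols = rational_triples()
```

<p>Besides $(1, 0, 0)$ and $(-1/3, 2/3, 2/3)$, this finds for example $(6/7, -2/7, 3/7)$: indeed $6/7 - 2/7 + 3/7 = 1$ and $36/49 + 4/49 + 9/49 = 1$.</p>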
3,537,843
<p>Find the values of x such that <span class="math-container">$x^n=n^x$</span>. Here, n <span class="math-container">$\in$</span> I (the integers).</p> <p>One solution is always <strong>x=n</strong>, but I want to find whether any more solutions can exist.</p> <p><span class="math-container">$$x^n=n^x$$</span></p>
Community
-1
<p>I love this question! I first saw it while in sixth form (I'm from London; that means when I was 18).</p> <p>The first thing we can do is to try to get just x on one side and just y on the other side. Here's what we can do using logs: <br> If x<sup>y</sup> = y<sup>x</sup>, then we must also have log(x<sup>y</sup>) = log(y<sup>x</sup>), as the log function is a bijection. <br>I.e.: x<sup>y</sup> = y<sup>x</sup> <span class="math-container">$\iff$</span> log(x<sup>y</sup>) = log(y<sup>x</sup>). <br> <br>(Please note that when I write log I mean the natural log; you may be used to seeing ln instead, but I mean the very same thing.) <br><br> Now we can use the following log rule: log(a<sup>b</sup>) = b*log(a) to get <br>x<sup>y</sup> = y<sup>x</sup> <span class="math-container">$\iff$</span> y*log(x) = x*log(y). <br> Dividing both sides by x*y yields: <br> x<sup>y</sup> = y<sup>x</sup> <span class="math-container">$\iff$</span> <span class="math-container">$\frac{\log(x)}{x} = \frac{\log(y)}{y}$</span> <br> Perfect! We now have just x on one side and just y on the other. Let's call f(t) the function <span class="math-container">$\frac{\log(t)}{t}$</span>, so we know that if any pair x and y solves x<sup>y</sup> = y<sup>x</sup>, then we MUST also have f(x) = f(y), and likewise, if we find any pair x and y such that f(x) = f(y), then we also know that x<sup>y</sup> = y<sup>x</sup>. <br> <br> On to finding solutions of f(x) = f(y). <br> I highly encourage you to graph this function, either by hand or using something like Desmos; from that you'll be able to get nearly all that's left, but here's a short summary of what you'll find: <br> 1) The function tends to negative infinity as t tends to 0. 
<br> 2) The function tends to 0 as t tends to infinity. <br> 3) There is a turning point (global maximum) of this function at t = e (this "little fact" actually gives a neat proof of why e<sup>x</sup> > x<sup>e</sup> for all positive x other than e).<br> 4) The function is strictly increasing up to e and then strictly decreasing after it. <br> 1) and 2) can be seen from a simple look at limits, and 3) and 4) come from a quick examination of its derivative. <br> <br>So if we are after pairs of distinct numbers x and y such that f(x) = f(y), we know that one of them needs to be less than e and one of them greater than e. If it's only integer solutions you are after, great! You only have 2 positive integers to check, as the only ones less than e are 1 and 2. It's quite quick to see that 1 is not going to be helpful at all, as 1<sup>x</sup> = 1 and x<sup>1</sup> = x, so your only solution if you choose x = 1 will be y = 1. Trying x = 2 is more fruitful: a little checking and guessing gives x = 2, y = 4 as a solution, and there is no other y for x = 2, as our f is strictly decreasing beyond e. <br> Thus the only integer solution to x<sup>y</sup> = y<sup>x</sup> with x &lt; y is x = 2, y = 4 (well, you can also take x = 4 and y = 2, but that's the same thing!). <br> If you are after non-integer solutions you have a few options: you can pick any x between 1 and e, as there will always be a y with the same value of f, and then use some method of approximation to find the root of f(t) - f(x). <br> <br> I hope this helped :) <br> Oskar</p>
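<p>To make the last paragraph concrete, here is a small Python sketch (mine, not Oskar's) that finds the companion solution $y$ for a given $x \in (1, e)$ by bisecting $f(t) = \log(t)/t$ on the decreasing branch $(e, \infty)$:</p>

```python
import math

def f(t):
    return math.log(t) / t

def companion(x, hi=1e6, tol=1e-12):
    """For 1 < x < e, find the y > e with log(y)/y = log(x)/x by bisection.
    f is strictly decreasing on (e, inf) and tends to 0, so the root is
    unique and bracketed by (e, hi) for any large enough hi."""
    lo, target = math.e, f(x)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) > target:   # still left of the root on the decreasing branch
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

<p>For $x = 2$ it recovers $y = 4$; for a non-integer value such as $x = 1.5$ it produces the matching irrational partner.</p>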
15,669
<p>Borrowing <code>triangularArrayLayout</code> from <a href="https://mathematica.stackexchange.com/questions/9959/visualize-pascals-triangle-and-other-triangle-shaped-lists">here</a>, I have:</p> <pre><code>triangularArrayLayout[triArray_List, opts___] :=
 Module[{n = Length[triArray]},
  Graphics[
   MapIndexed[
    Text[Style[#1, Large],
      {Sqrt[3] (n - 1 + #2.{-1, 2}), 3 (n - First[#2] + 1)}/2] &amp;,
    triArray, {2}], opts]]

n = 6; s = 500;
coeffs = triangularArrayLayout[
   Table[Row[{"C(", i, ",", j, ")"}], {i, 0, n}, {j, 0, i}],
   ImageSize -&gt; s];
tri = triangularArrayLayout[
   Table[Binomial[i, j], {i, 0, n}, {j, 0, i}], ImageSize -&gt; s];
layers = {Overlay[{coeffs, Show[tri, TextStyle -&gt; GrayLevel[.8]]},
    Alignment -&gt; Top],
   Overlay[{tri, Show[coeffs, TextStyle -&gt; GrayLevel[.8]]},
    Alignment -&gt; Top]};
Manipulate[layers[[u]],
 {{u, 1, " "}, {1 -&gt; "binomial coefficients", 2 -&gt; "Pascal's triangle"}},
 ControlType -&gt; RadioButtonBar]
</code></pre> <p>but the vertical alignment is off:</p> <p><img src="https://i.stack.imgur.com/fZooA.png" alt="Mathematica graphics"> <img src="https://i.stack.imgur.com/L0LM5.png" alt="Mathematica graphics"></p> <p>This is the main issue, but I am also curious how to:</p> <ol> <li>typeset the $C(n,r)$ as <code>TraditionalForm</code> (with the varying $n$ and $r$ values throughout)</li> <li>typeset the $C(n,r)$ as $_{n}C_{r}$ (also with the varying $n$ and $r$ values).</li> </ol>
kglr
125
<ul> <li>Using the same option <code>ImagePadding-&gt;k</code> in both <code>coeffs</code> and <code>tri</code> fixes the vertical alignment problem.</li> <li><code>C</code> is a protected symbol (it is used for representing constants generated in symbolic computations). Instead you can use <code>\[ScriptCapitalC]</code>: </li> </ul> <p>Then </p> <pre><code>TraditionalForm[\[ScriptCapitalC][n,r]]
</code></pre> <p>gives </p> <p><img src="https://i.stack.imgur.com/oi6Nc.png" alt="enter image description here"></p> <p>and </p> <pre><code>TraditionalForm[\[ScriptCapitalC][9,3]]
</code></pre> <p>gives</p> <p><img src="https://i.stack.imgur.com/Kp1WD.png" alt="enter image description here"></p> <ul> <li>For typesetting <code>C(n,r)</code> as $_{n}C_{r}$</li> </ul> <p>you can use</p> <pre><code>nCr /: MakeBoxes[nCr[n_, r_], StandardForm] :=
 RowBox[{SubscriptBox["\[InvisiblePrefixScriptBase]",
    MakeBoxes[n, StandardForm]],
   SubscriptBox["C", MakeBoxes[r, StandardForm]]}]
</code></pre> <p>With </p> <pre><code>nCr[3, 2]
</code></pre> <p>you get</p> <p><img src="https://i.stack.imgur.com/Saxry.png" alt="enter image description here"></p> <p>and</p> <pre><code>TraditionalForm[nCr[3, 2]]
</code></pre> <p>gives</p> <p><img src="https://i.stack.imgur.com/ZviDA.png" alt="enter image description here"></p>
38,252
<p>I have a quadrilateral ABCD. I want to find all the points x inside ABCD such that $$\angle(A,x,B)=\angle(C,x,D)$$</p> <p>Is there a known formula that gives these points?</p> <p><strong>Example:</strong></p> <p>ABCD is a rectangle. Let $x_1=\operatorname{mid}[A,D]$ and $x_2=\operatorname{mid}[B,C]$. The points x are those lying on the line that passes through $x_1$ and $x_2$.</p> <p>But I want a formula for arbitrary quadrilaterals.</p> <p>Thank you.</p>
Community
-1
<p>Call a topological space <em>good</em> if it's homeomorphic to a compact ordinal.</p> <p><strong>Lemma 1.</strong> Every countable compact Hausdorff space is first countable, zero-dimensional, and scattered.</p> <p>Proof. These are well-known facts.</p> <p><strong>Lemma 2.</strong> A closed subspace of a good space is good.</p> <p>Proof. This follows from the fact that a closed subspace of an ordinal is homeomorphic to an ordinal.</p> <p><strong>Lemma 3.</strong> Let <span class="math-container">$S$</span> be a countable compact Hausdorff space. If each point of <span class="math-container">$S$</span> has a good clopen neighborhood, then <span class="math-container">$S$</span> is good.</p> <p>Proof. By compactness, <span class="math-container">$S$</span> is covered by finitely many good clopen sets, which (by Lemma 2) can be made disjoint. Thus <span class="math-container">$S$</span> is the union of finitely many disjoint clopen sets <span class="math-container">$X_1,\dots, X_n$</span>, where each <span class="math-container">$X_i$</span> is homeomorphic to a compact ordinal <span class="math-container">$\alpha_i$</span>. It follows that <span class="math-container">$S$</span> is homeomorphic to the compact ordinal <span class="math-container">$\alpha=\alpha_1+\dots+\alpha_n$</span>. (This is because <span class="math-container">$S$</span> is the topological sum of the <span class="math-container">$X_i$</span>'s, and <span class="math-container">$\alpha$</span> is the topological sum of disjoint clopen subsets <span class="math-container">$A_i$</span>, where <span class="math-container">$A_i$</span> is homeomorphic to the ordinal <span class="math-container">$\alpha_i$</span> and thus to <span class="math-container">$X_i$</span>.)</p> <p><strong>Theorem.</strong> Every countable compact Hausdorff space is homeomorphic to an ordinal.</p> <p>Proof. Let <span class="math-container">$S$</span> be a countable compact Hausdorff space. 
Assume for a contradiction that <span class="math-container">$S$</span> is not good. Let <span class="math-container">$X$</span> be the set of all points of <span class="math-container">$S$</span> which have no good clopen neighborhood. By Lemma 3, <span class="math-container">$X$</span> is nonempty. Since <span class="math-container">$S$</span> is scattered, <span class="math-container">$X$</span> has an isolated point <span class="math-container">$x$</span>. (Of course, <span class="math-container">$x$</span> is not isolated in <span class="math-container">$S$</span>, since <span class="math-container">$\{x\}$</span> would then be a good clopen neighborhood of <span class="math-container">$x$</span>.)</p> <p>Let <span class="math-container">$U_1$</span> be a clopen neighborhood of <span class="math-container">$x$</span> such that <span class="math-container">$U_1\cap X=\{x\}$</span>. Note that, by Lemmas 2 and 3, every clopen subset of <span class="math-container">$U_1$</span> which does not contain <span class="math-container">$x$</span> is good.</p> <p>Let <span class="math-container">$U_1,U_2,\dots,U_n,\dots$</span> be a neighborhood base for <span class="math-container">$x$</span> such that each <span class="math-container">$U_n$</span> is clopen and properly contains <span class="math-container">$U_{n+1}$</span>. Then the (nonempty) set <span class="math-container">$V_n=U_n\setminus U_{n+1}$</span> is a good clopen set, whence it is homeomorphic to a compact ordinal <span class="math-container">$\alpha_n$</span>. It follows that the set <span class="math-container">$U_1\setminus X=V_1\cup V_2\cup\dots\cup V_n\cup\dots$</span>, which is the topological sum of the <span class="math-container">$V_n$</span>'s, is homeomorphic to the limit ordinal <span class="math-container">$\alpha=\alpha_1+\alpha_2+\dots+\alpha_n+\dots$</span>. 
Hence <span class="math-container">$U_1$</span>, the one-point compactification of <span class="math-container">$U_1\setminus\{x\}$</span>, is homeomorphic to the ordinal <span class="math-container">$\alpha+1$</span>, the one-point compactification of <span class="math-container">$\alpha$</span>. Thus <span class="math-container">$U_1$</span> is a good clopen neighborhood of <span class="math-container">$x$</span>, which was assumed to have no good clopen neighborhood. This contradiction proves the theorem.</p>
67,516
<p>The book by Durrett, "Essentials of Stochastic Processes", states on page 55 that:</p> <blockquote> <p>If the state space S is finite then there is at least one stationary distribution.</p> </blockquote> <ol> <li><p>How can I find the stationary distribution, for example, for the square 2x2 matrix $[[a,b],[1-a, 1-b]]$? I have tested this with WA, and chains of this form seem to converge to certain probabilities, as <a href="http://www.wolframalpha.com/input/?i=%7B%7B0.01,%200.09%7D,%7B0.99,%200.91%7D%7D%5E101" rel="nofollow">here</a>. But if you look at the general case, <a href="http://www.wolframalpha.com/input/?i=%7B%7Ba,%20b%7D,%7B1-a,%201-b%7D%7D%5E40" rel="nofollow">here</a>, I feel quite confused trying to find the general formula. I just know that it exists, but I cannot see any general formula as $n \rightarrow \infty$.</p></li> <li><p>But look <a href="http://www.wolframalpha.com/input/?i=%7B%7B0,%201%7D,%7B1,%200%7D%7D%5E38" rel="nofollow">here</a>: $[[0,1],[1,0]]$ does not have a stationary distribution! So am I right to say that this chain is dependent on initial conditions? If $M=[[0,1],[1,0]]$, the chain will not converge to a stationary condition. But how can I find this out from the matrix? (not through a series of observations)</p></li> <li><p>And how can I know when a certain Markov chain is dependent on initial conditions? For example, with the above example, its $\det = a-b$ and its eigenvalues are $\lambda_{1} =1$ and $\lambda_{2} = a-b$.</p></li> <li><p>Now <a href="http://hypertextbook.com/chaos/11.shtml" rel="nofollow">Hypertextbook</a> mentions that <code>"behavior, which exhibits sensitive dependence on initial conditions, is said to be chaotic"</code>; look, we found a case with initial condition sensitivity. Is it chaotic?</p></li> </ol>
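<p>(Not part of the original question.) For point 1, the matrix as written is column-stochastic, so the stationary vector solves $Pv = v$; eliminating one equation gives $v \propto (b,\, 1-a)$. A small pure-Python check against iterating the chain:</p>

```python
def stationary(a, b):
    """Stationary vector of the column-stochastic matrix
    P = [[a, b], [1-a, 1-b]]: solve P v = v, i.e. (a-1) v1 + b v2 = 0,
    so v is proportional to (b, 1-a) (assumes a < 1 or b > 0)."""
    v1, v2 = b, 1.0 - a
    s = v1 + v2
    return (v1 / s, v2 / s)

def step(P, v):
    """One step of the chain: multiply the column-stochastic P by column v."""
    return (P[0][0] * v[0] + P[0][1] * v[1],
            P[1][0] * v[0] + P[1][1] * v[1])

a, b = 0.01, 0.09
P = [[a, b], [1 - a, 1 - b]]
v = (1.0, 0.0)          # start in state 1
for _ in range(200):    # iterate the chain
    v = step(P, v)
```

<p>For $a=0.01$, $b=0.09$ this gives $v = (1/12,\ 11/12) \approx (0.083,\ 0.917)$, matching the WolframAlpha computation linked above. For $M=[[0,1],[1,0]]$ the same formula still yields the fixed vector $(1/2, 1/2)$, but since $\lambda_2 = -1$ the powers of $M$ oscillate and the chain does not converge to it; this is exactly the initial-condition sensitivity asked about in point 2.</p>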
Mark Bennet
2,906
<p>Given a circle, and angles for your triangle of $\alpha, \beta, \gamma$ mark off radii with angles between them at the centre of the circle of $(180-\alpha)^\circ, (180-\beta)^\circ, (180-\gamma)^\circ$ [total $360^\circ$]. The tangents at the points where the radii meet the circle will make a triangle with the desired angles. The radii can be chosen to determine the orientation of the circle. Given a circle, it is therefore possible to draw a triangle which is both similar to, and similarly oriented to, any given triangle, and which has the given circle as its incircle. </p>
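<p>A numerical sanity check of this construction (my own sketch, not part of the answer): put the tangency points on the unit circle separated by central angles $(180-\alpha)^\circ$, $(180-\beta)^\circ$, $(180-\gamma)^\circ$, intersect consecutive tangent lines, and verify that the resulting triangle has angles $\alpha, \beta, \gamma$.</p>

```python
import math

def tangent_triangle(alpha, beta, gamma):
    """Given angles with alpha + beta + gamma = 180 (degrees), place
    tangency points on the unit circle separated by central angles
    180-alpha, 180-beta, 180-gamma, and return the vertices of the
    triangle formed by intersecting consecutive tangent lines."""
    seps = [180 - alpha, 180 - beta, 180 - gamma]
    thetas = [0.0]
    for s in seps[:-1]:
        thetas.append(thetas[-1] + s)
    thetas = [math.radians(t) for t in thetas]

    def tangent_intersection(t1, t2):
        # Tangent at angle t: cos(t)*x + sin(t)*y = 1; solve the 2x2 system.
        det = math.cos(t1) * math.sin(t2) - math.sin(t1) * math.cos(t2)
        x = (math.sin(t2) - math.sin(t1)) / det
        y = (math.cos(t1) - math.cos(t2)) / det
        return (x, y)

    return [tangent_intersection(thetas[i], thetas[(i + 1) % 3])
            for i in range(3)]

def angles_of(tri):
    """Interior angles (degrees, sorted) of a triangle given by vertices."""
    out = []
    for i in range(3):
        a, b, c = tri[i], tri[(i + 1) % 3], tri[(i + 2) % 3]
        u = (b[0] - a[0], b[1] - a[1])
        v = (c[0] - a[0], c[1] - a[1])
        dot = u[0] * v[0] + u[1] * v[1]
        out.append(math.degrees(math.acos(dot / (math.hypot(*u) * math.hypot(*v)))))
    return sorted(out)
```

<p>This works because a vertex formed by tangents at points separated by central angle $\varphi$ has angle $180^\circ - \varphi$, and the three separations sum to $540^\circ - 180^\circ = 360^\circ$.</p>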
348,614
<p>Is the following claim true? Let <span class="math-container">$\zeta(s)$</span> be the Riemann zeta function. I observed that for large <span class="math-container">$n$</span>, as <span class="math-container">$s$</span> increased, </p> <p><span class="math-container">$$ \frac{1}{n}\sum_{k = 1}^n\sum_{i = 1}^{k} \bigg(\frac{\gcd(k,i)}{\text{lcm}(k,i)}\bigg)^s \approx \zeta(s+1) $$</span></p> <p>or equivalently</p> <p><span class="math-container">$$ \frac{1}{n}\sum_{k = 1}^n\sum_{i = 1}^{k} \bigg(\frac{\gcd(k,i)^2}{ki}\bigg)^s \approx \zeta(s+1) $$</span></p> <p>A few values of <span class="math-container">$s$</span>, the LHS, and the RHS are given below:</p> <p><span class="math-container">$$(3,1.221,1.202)$$</span> <span class="math-container">$$(4,1.084,1.0823)$$</span> <span class="math-container">$$(5,1.0372,1.0369)$$</span> <span class="math-container">$$(6,1.01737,1.01734)$$</span> <span class="math-container">$$(7,1.00835,1.00834)$$</span> <span class="math-container">$$(9,1.00494,1.00494)$$</span> <span class="math-container">$$(19,1.0000009539,1.0000009539)$$</span></p> <p><strong>Note</strong>: <a href="https://math.stackexchange.com/questions/3293112/relationship-between-gcd-lcm-and-the-riemann-zeta-function">This question was posted in MSE</a>, but it did not receive the right answer there.</p>
Wojowu
30,186
<p>Let me denote your LHS by <span class="math-container">$f(n,s)$</span>. For fixed even <span class="math-container">$n$</span> I shall show that <span class="math-container">$f(n,s)-1\sim\zeta(s+1)-1$</span> as <span class="math-container">$s\to\infty$</span>, that is, <span class="math-container">$$\lim_{s\to\infty}\frac{f(n,s)-1}{\zeta(s+1)-1}=1.$$</span> This result nicely expresses your numerical observations, which show that the parts after the decimal point seem to be asymptotically the same.</p> <p>On one hand, we have <span class="math-container">$\zeta(s+1)-1=2^{-s-1}+3^{-s-1}+\dots$</span>. The terms after the second can be estimated from above by the integral <span class="math-container">$\int_2^\infty x^{-s-1}dx=\frac{2^{-s}}{s}$</span>, so we see that <span class="math-container">$\zeta(s+1)-1\sim 2^{-s-1}$</span>.</p> <p>On the other hand, among pairs <span class="math-container">$(k,i)$</span> with <span class="math-container">$1\leq k\leq n,1\leq i\leq k$</span>, the expression <span class="math-container">$\frac{\gcd(k,i)}{\operatorname{lcm}(k,i)}$</span> is equal to <span class="math-container">$1$</span> for exactly <span class="math-container">$n$</span> pairs <span class="math-container">$(k,k)$</span>, and is equal to <span class="math-container">$2^{-1}$</span> for exactly <span class="math-container">$n/2$</span> pairs <span class="math-container">$(2k,k)$</span>. All other terms, of which there are certainly fewer than <span class="math-container">$n^2$</span>, are at most <span class="math-container">$3^{-1}$</span>. Therefore we find <span class="math-container">$$f(n,s)=\frac{1}{n}\left(n\cdot 1+\frac{n}{2}\cdot 2^{-s}+O(n^23^{-s})\right)=1+2^{-s-1}+o(2^{-s})$$</span> proving <span class="math-container">$f(n,s)-1\sim 2^{-s-1}$</span>. 
It follows that <span class="math-container">$f(n,s)-1\sim\zeta(s+1)-1$</span>, as we wanted.</p> <p>Let me emphasize that in the above calculation it was crucial that <span class="math-container">$n$</span> was even. If <span class="math-container">$n$</span> is odd, then we instead only get <span class="math-container">$\frac{n-1}{2}$</span> pairs <span class="math-container">$(2k,k)$</span> and the asymptotics get slightly skewed - we then get <span class="math-container">$f(n,s)-1\sim\frac{n-1}{n}(\zeta(s+1)-1)$</span>. For large <span class="math-container">$n$</span> the difference is however, pretty negligible.</p>
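<p>A quick numerical check of this statement (my own sketch, taking $n$ even as the answer requires):</p>

```python
from math import gcd

def f(n, s):
    """(1/n) * sum_{k<=n} sum_{i<=k} (gcd(k,i)/lcm(k,i))^s, using
    gcd(k,i)/lcm(k,i) = gcd(k,i)^2 / (k*i)."""
    total = 0.0
    for k in range(1, n + 1):
        for i in range(1, k + 1):
            g = gcd(k, i)
            total += (g * g / (k * i)) ** s
    return total / n

def zeta(s, terms=10**5):
    """Partial-sum approximation of zeta(s); the tail is negligible
    for the exponents used here."""
    return sum(k ** -float(s) for k in range(1, terms + 1))

n, s = 100, 7
ratio = (f(n, s) - 1) / (zeta(s + 1) - 1)
```

<p>The dominant contribution to $f(n,s)-1$ comes from the pairs $(2k,k)$, giving $2^{-s-1}$, just as in the proof; for $n=100$, $s=7$ the ratio above is already within a fraction of a percent of $1$.</p>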
354,642
<p>Show that each of the following initial-value problems has a unique solution ($0 \le t \le 1$, $y(0) = 1$).</p> <p>$$y' = \exp(t-y)$$</p> <p><strong>Theorem 1</strong>: Suppose that $D=\{(t,y) \mid a\le t\le b,\ -\infty &lt; y &lt; \infty\}$ and that $f(t,y)$ is continuous on $D$. If $f$ satisfies a Lipschitz condition on $D$ in the variable $y$, then the initial-value problem $y'(t)=f(t,y)$, $a\le t\le b$, $y(a)=\alpha$, has a unique solution $y(t)$ for $a \le t \le b$.</p> <p>Can Theorem 1 be applied?</p>
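<p>(Not part of the original exercise.) This particular equation is separable: $e^y\,dy = e^t\,dt$ gives $e^y = e^t + C$, and $y(0)=1$ forces $C = e - 1$, so $y(t) = \log(e^t + e - 1)$. A quick Python check that this closed form satisfies the IVP:</p>

```python
import math

def y(t):
    """Closed form obtained by separating variables in y' = exp(t - y):
    e^y dy = e^t dt, so e^y = e^t + C, and y(0) = 1 forces C = e - 1."""
    return math.log(math.exp(t) + math.e - 1)

def rhs(t, yt):
    return math.exp(t - yt)

# Check the ODE via a central finite difference at several points of [0, 1]
h = 1e-6
checks = [(y(t + h) - y(t - h)) / (2 * h) - rhs(t, y(t))
          for t in [0.1 * k for k in range(1, 10)]]
```

<p>The asserts confirm $y(0)=1$ and that a central difference of $y$ matches $e^{t-y}$ along $[0,1]$.</p>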
Sammy Black
6,509
<p>Consider the sequence $X_n = \frac{1}{\pi n}$.</p>
468,784
<p>Two disjoint sets $A$ and $B$, neither empty, are said to be <strong>mutually separated</strong> if neither contains a boundary point of the other. A set is disconnected if it is the union of separated subsets, and is called <strong>connected</strong> if it is not disconnected.</p> <p>With the above definition of a connected set, how can I prove that the set $A=\{(x, y)\in \mathbb{R}^2:|x|=|y|\}$ is connected?</p> <p>The exercise asks to use the previous definition, and for this I guess I should do the proof by contradiction, but I do not know how to proceed.</p>
Pedro
23,350
<p>You can prove that $\{y=x\}$ and $\{y=-x\}$ are connected. Since they have a point in common, their union is too. Can you try to argue that this is, in essence, the same as showing the real line is connected?</p> <p>First, let's obtain a slightly more useful equivalent to your definition. First, if $A,B$ are open sets and $A\cap B=\varnothing$, then they are separated. Now suppose $A,B$ are separated sets, and $C=A\cup B$. Then $C\setminus \bar A$ and $C\setminus \bar B$ are relatively open in $C$. But $$(A\cup B)\setminus \bar A=B$$ since $B\cap \bar A=\varnothing$. Thus $C=A\cup B$ and $A,B$ are relatively open in $C$. We can then just say that a set $C$ is disconnected if it can be written as the disjoint union of nonempty relatively open subsets of $C$. </p> <p>Now, suppose $f:C\to C'$ is continuous and onto. If $C'$ is disconnected by $A,B$, then $C$ is disconnected by the open sets $f^{-1}(A)$, $f^{-1}(B)$. Thus, by the contrapositive, if $C$ is connected, so must be $C'$. That is, the image of a connected set under a continuous function is connected.</p> <p>Now consider the function $f:\Bbb R\to\Bbb R^2$ given by $f(x)=(x,x)$. Then the line $\{y=x\}$ is $f(\Bbb R)$. I hope you know that $\Bbb R$ is connected. You should see $f$ is continuous: $$\lVert f(x)-f(y)\rVert^2=2(x-y)^2$$</p> <p>so if $x$ and $y$ are close, so are $(x,x)$ and $(y,y)$. Thus, $f(\Bbb R)=\{y=x\}$ is connected, and the same argument with $x\mapsto(x,-x)$ shows that $\{y=-x\}$ is connected.</p> <p>It is also true that the union of connected sets with non-empty intersection is connected. And one can prove it as follows. First, we prove that $C$ is connected if and only if the only continuous functions $f:C\to\{0,1\}$, where $\{0,1\}$ is given the discrete metric (topology), that is, <strong>all</strong> sets are open, are constant.</p> <p>If $C=A\cup B$ with $A,B$ open, disjoint, and nonempty, then we can define $f(a)=1$ when $a\in A$ and $f(b)=0$ when $b\in B$, to get a non-constant continuous function $f:C\to\{0,1\}$. 
Conversely, if we had a non-constant continuous function $f:C\to\{0,1\}$, we could write $C=f^{-1}(\{0\})\cup f^{-1}(\{1\})$, where these two sets are open and disjoint.</p> <p>With this out of the way, suppose that we have a collection $\mathscr C$ of connected sets with a point in common, call it $c$; that is, $c\in\bigcap\mathscr C\neq \varnothing$. Consider a continuous function $f:\bigcup \mathscr C\to \{0,1\}$. We will show that $f(x)=f(c)$ for any $x\in\bigcup\mathscr C$. Indeed, pick $x$ in the union. Then it belongs to one of the sets in the collection, call this set $C_x$. Then $f$ as a function from $C_x$ to $\{0,1\}$ is still continuous, so, since $C_x$ is connected, it must be constant. But $c\in C_x$; so we must have $f(x)=f(c)$. Since this $x$ was arbitrary, the claim follows.</p>
1,235,639
<p>Let $\mathcal{R}$ be the hyperfinite type $II_{1}$ factor and let $\mathcal{U}$ be a free ultrafilter on $\mathbb{N}$.</p> <p>Is it true that $\mathcal{R}^{\mathcal{U}}$ is never hyperfinite? How can I see this?</p> <p>Thanks</p> <p><em>I know that under the Continuum Hypothesis, all the ultrapowers $\mathcal{R}^{\mathcal{U}}$ are isomorphic. Also, every infinite dimensional subfactor of $\mathcal{R}$ is isomorphic to $\mathcal{R}$. Since $\mathbb{F}_{n}$ is sofic, $\mathcal{L}_{\mathbb{F}_{n}}$ can be embedded into a suitable ultrapower $\mathcal{R}^{\mathcal{U}}$, but $\mathcal{L}_{\mathbb{F}_{n}}$ is never hyperfinite</em>.</p> <p>Can I argue, using the above lines, that under the Continuum Hypothesis $\mathcal{R}^{\mathcal{U}}$ is never hyperfinite (if $\mathcal{U}$ is free, of course)?</p> <p>PS: I would like to know why this question was downvoted. Does it contain an obvious or childish mathematical mistake? That is what is worrying me, not the reputation points. Thanks :)</p>
roya
296,316
<p>As we know, the ultrapower of a $\rm II_1$ factor is a $\rm II_1$ factor, and there is only one hyperfinite $\rm II_1$ factor up to isomorphism. So if $R^U$ were hyperfinite, we would have $R\simeq R^U$, but this is impossible: for a free ultrafilter $U$, the ultrapower $R^U$ is non-separable, while $R$ is separable.</p>
73,383
<p>The problem is: $$\displaystyle \lim_{(x,y,z) \rightarrow (0,0,0)} \frac{xy+2yz+3xz}{x^2+4y^2+9z^2}.$$</p> <p>The tutor guessed it didn't exist, and he was correct. However, I'd like to understand why it doesn't exist.</p> <p>I think I have to turn it into spherical coordinates and then see if the end result depends on an angle, like I've done for two variables with polar coordinates. I don't know how though.</p> <p>I know $\rho = \sqrt{x^2+y^2+z^2}$ and $\theta = \arctan \left(\frac{y}{x} \right)$ and $\phi = \arccos \left( \frac{z}{\rho} \right)$, but how on earth do I break this thing up?</p>
hmakholm left over Monica
14,366
<p>If you want to rewrite to spherical coordinates, what you need is expressions for the rectangular coordinates in terms of the spherical ones, such as $x=\rho\cos(\theta)\sin(\phi)$, rather than the other way around. (By the way, the name of the letter $\rho$ is spelled "rho").</p> <p>However, I don't think going to spherical coordinates is the most instructive approach here. It will be a lot of work that just obscures the <em>real</em> trick of this limit. Instead, just fix some nonzero point $P_0=(x_0, y_0, z_0)$ and imagine $(x,y,z)$ going towards $(0,0,0)$ along the ray through $P_0$: $$(x,y,z)=(rx_0, ry_0, rz_0)$$ Insert that in your expression and simplify to see how it varies with $r$ for fixed $P_0$. That should tell you something rather interesting about the limit.</p>
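<p>Henning's ray trick is easy to illustrate numerically (my own sketch): the expression is homogeneous of degree $0$, so its value along a ray is constant but direction-dependent.</p>

```python
def g(x, y, z):
    return (x * y + 2 * y * z + 3 * x * z) / (x ** 2 + 4 * y ** 2 + 9 * z ** 2)

# Along the ray (r*x0, r*y0, r*z0) the value is independent of r,
# but it depends on the direction (x0, y0, z0).
along_axis = [g(r, 0.0, 0.0) for r in (1.0, 0.1, 0.001)]
along_diag = [g(r, r, 0.0) for r in (1.0, 0.1, 0.001)]
```

<p>Along $(r,0,0)$ the value is $0$ for every $r$; along $(r,r,0)$ it is $r^2/(5r^2) = 1/5$ for every $r$. Two different constant values on rays into the origin means no single limit can exist.</p>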
2,813,595
<p>Which of the following can be expressed by exact length but not by exact number?</p> <p>(i) $\sqrt{10}$</p> <p>(ii) $\sqrt{7}$</p> <p>(iii) $\sqrt{13}$</p> <p>(iv) $\sqrt{11}$</p> <p><strong>Answer:</strong></p> <p>I basically could not understand the question.</p> <p>What is meant by expressing by exact length?</p> <p><strong>Do we need to satisfy the Pythagorean law?</strong></p> <p>Please help me with hints.</p>
Ross Millikan
1,827
<p>I think "cannot be expressed by exact number" means they are irrational so the decimal does not terminate, which is true of all of them. </p> <p>I think "can be expressed by exact length" means you can construct it. You are expected to notice that $10=3^2+1^2$ so you can draw a segment of length $1$, a perpendicular segment of length $3$, make the hypotenuse, and that will be a segment of length $\sqrt{10}$. Similarly $13=3^2+2^2$ so it is easy to construct. </p> <p>$\sqrt 7$ is constructible as well, but not in such a simple way. You can construct $\sqrt 5=\sqrt {2^2+1^2}$, then $\sqrt 6=\sqrt{\sqrt{5}^2+1^2}$ and finally $\sqrt 7$. From $\sqrt 7$ (or in other ways) you can construct $\sqrt {11}$.</p>
2,813,595
<p>Which of the following can be expressed by exact length but not by exact number?</p> <p>(i) $\sqrt{10}$</p> <p>(ii) $\sqrt{7}$</p> <p>(iii) $\sqrt{13}$</p> <p>(iv) $\sqrt{11}$</p> <p><strong>Answer:</strong></p> <p>I basically could not understand the question.</p> <p>What is meant by expressing by exact length?</p> <p><strong>Do we need to satisfy the Pythagorean law?</strong></p> <p>Please help me with hints.</p>
Ethan Bolker
72,858
<p>The <a href="https://en.wikipedia.org/wiki/Spiral_of_Theodorus" rel="nofollow noreferrer">spiral of Theodorus</a> constructs the square roots of the positive integers.</p> <p><a href="https://i.stack.imgur.com/Mftkn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Mftkn.png" alt="enter image description here"></a></p>
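<p>In code (a sketch of my own): each step of the spiral attaches a unit leg perpendicular to the previous hypotenuse, so the hypotenuses are exactly $\sqrt{1}, \sqrt{2}, \sqrt{3}, \dots$, covering every value in the question.</p>

```python
import math

def theodorus_lengths(n):
    """Hypotenuse lengths in the spiral of Theodorus: start with a leg of
    length 1 and repeatedly attach a perpendicular leg of length 1, so
    h_1 = 1 and h_{k+1} = sqrt(h_k^2 + 1), i.e. h_k = sqrt(k)."""
    h = 1.0
    out = [h]
    for _ in range(n - 1):
        h = math.sqrt(h * h + 1.0)
        out.append(h)
    return out
```

<p>Thirteen steps already produce constructible segments of lengths $\sqrt{7}$, $\sqrt{10}$, $\sqrt{11}$, and $\sqrt{13}$.</p>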
341,648
<p>I'm trying to understand what a tableaux ring is (it's not clear to me reading Young Tableaux by Fulton).</p> <p>I studied what a monoid ring is on Serge Lang's Algebra, and then I read about modules, modules homomorphism. I'm trying to prove what is stated at page 121 (S. Lang, Algebra) while talking about algebras: "we note that the group ring $A[G]$ (or monoid ring when $G$ is a monoid) is an $A$-algebra, also called group (or monoid) algebra."</p> <p>Correct me if I'm wrong, I believe that i should start proving that the monoid ring $A[G]$ (here $G$ is a monoid) is an $A$-module. Well I can't figure out how it could be! This is what i've tried:</p> <p>notation: $A$ ring, $G$ monoid, $a \in A, x \in G$. $a \cdot x$ is the map $G \rightarrow A$ such that $a \cdot x (x) = a$ and $a \cdot x (y) = 0$ for every $y \neq x$</p> <p>$(a+b) \cdot x = a \cdot x + b\cdot x$ for every $a,b \in A$ and every $x \in G$ follows from definition of $a \cdot x$. I don't know how to show $a \cdot (x+y) = a \cdot x + a \cdot y$, honestly I'm about to think it is not true.</p> <p>Thanks in advance,</p> <p>sciamp</p>
Jack D'Aurizio
44,121
<ol> <li><strong>Riemann sums</strong></li> </ol> <p>It is not a complex analytic technique but I think it is worth mentioning. We can compute the integral by taking Riemann sums and exploiting the identity: $$\prod_{k=1}^{n-1}\sin\frac{\pi k}{n}=\frac{2n}{2^n}\tag{1}$$ from which it follows that: $$\begin{eqnarray*}\int_{0}^{1}\log(\sin(\pi x))\,dx &amp;=&amp; \frac{1}{\pi}\int_{0}^{\pi}\log\sin x\,dx = \frac{1}{\pi}\lim_{n\to +\infty}\frac{\pi}{n}\sum_{k=1}^{n-1}\log\sin\frac{\pi k}{n}\\&amp;=&amp;\lim_{n\to +\infty}\frac{1}{n}\,\log\frac{2n}{2^n}=\color{red}{-\log 2}.\tag{2}\end{eqnarray*}$$</p> <p>Other approaches deserve to be mentioned:</p> <ol start="2"> <li><strong>Symmetry</strong></li> </ol> <p>The function $\sin(\pi x)$ is symmetric with respect to $x=\frac{1}{2}$, hence $$\begin{eqnarray*}I=\int_{0}^{1}\log\sin(\pi x)\,dx&amp;\stackrel{x\to 2z}{=}&amp;2\int_{0}^{1/2}\left[\log(2)+\log\sin(\pi z)+\log\cos(\pi z)\right]\,dz\\&amp;=&amp;\log(2)+2I.\end{eqnarray*}\tag{3}$$</p> <ol start="3"> <li><strong>An obscene overkill</strong></li> </ol> <p>By <a href="https://math.stackexchange.com/questions/784529/integral-int-01-log-left-gamma-leftx-alpha-right-right-rm-dx-frac">Raabe's theorem</a> $\int_{a}^{a+1}\log\Gamma(x)\,dx = \log\sqrt{2\pi}+a\log a-a$ and by the reflection formula for the $\Gamma$ function $\frac{\pi}{\sin(\pi z)}=\Gamma(z)\Gamma(1-z)$, hence the question is trivial by switching to logarithms and integrating over $(0,1)$.</p>
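<p>The Riemann-sum computation in part 1 is easy to verify numerically (my own sketch):</p>

```python
import math

def riemann_log_sin(n):
    """Riemann sum (1/n) * sum_{k=1}^{n-1} log(sin(pi k / n)) for
    the integral of log(sin(pi x)) over (0, 1)."""
    return sum(math.log(math.sin(math.pi * k / n)) for k in range(1, n)) / n

def product_identity(n):
    """Left side of identity (1): prod_{k=1}^{n-1} sin(pi k / n),
    which should equal 2n / 2^n = n / 2^(n-1)."""
    p = 1.0
    for k in range(1, n):
        p *= math.sin(math.pi * k / n)
    return p
```

<p><code>product_identity</code> checks identity $(1)$, and the Riemann sum approaches $-\log 2 \approx -0.6931$ slowly, at rate $(\log n)/n$, exactly as the limit in $(2)$ predicts.</p>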
4,543,350
<p>I am having a hard time figuring out this proof.</p> <p>Let <span class="math-container">$\{x_n\}$</span> be a sequence and <span class="math-container">$x\in\mathbb{R}$</span>. Suppose for every <span class="math-container">$\epsilon&gt;0$</span>, there is an M such that <span class="math-container">$|x_n-x|\leq \epsilon$</span> for all <span class="math-container">$n\geq M$</span>. Show that <span class="math-container">$\lim x_n=x$</span>.</p> <p>My attempt: Let <span class="math-container">$\epsilon&gt;0$</span>. (Need to find M?) Let <span class="math-container">$n\geq M$</span>...<span class="math-container">$|x_n-x|\leq \epsilon$</span></p> <p>My scratchwork:</p> <p><span class="math-container">$\lim x_n=x$</span> <span class="math-container">$\Rightarrow$</span> <span class="math-container">$\lim x_n-x=0$</span> <span class="math-container">$\Rightarrow$</span> <span class="math-container">$\lim (x_n-x)=0$</span> ???</p> <p>It's because it's so close to the definition that I don't know what to do. Any help is greatly appreciated.</p> <p>Thanks!</p>
Chirag Kar
917,916
<p>The definition for convergence that I will be using is as follows: We say that the sequence <span class="math-container">$(x_n)$</span> <em>converges</em> to <span class="math-container">$x\in\mathbb{R}$</span> if, given any <span class="math-container">$\epsilon &gt; 0, $</span> there exists <span class="math-container">$N \in \mathbb{N}$</span> such that <span class="math-container">$|x_n-x|&lt;\epsilon$</span> for all <span class="math-container">$n\geq N.$</span></p> <hr /> <p>Let <span class="math-container">$\epsilon &gt; 0$</span> be given. Then, by the hypothesis on <span class="math-container">$(x_n)$</span>, there exists <span class="math-container">$M$</span> such that <span class="math-container">$$|x_n - x| \leq \frac{\epsilon}{2} \quad\text{for all } n\geq M.$$</span></p> <p>By the Archimedean property of <span class="math-container">$\mathbb{R},$</span> there exists <span class="math-container">$N_0\in\mathbb{N}$</span> such that <span class="math-container">$N_0 &gt; M.$</span> We choose <span class="math-container">$N = N_0.$</span></p> <p>So, for all <span class="math-container">$n\geq N,$</span> we have <span class="math-container">$$|x_n - x| \leq \frac{\epsilon}{2} &lt; \epsilon,$$</span> thus <span class="math-container">$(x_n)$</span> converges to <span class="math-container">$x$</span>.</p> <hr /> <p>Please see if this proof makes sense to you. I think the closeness to the original definition is intentional, and it would be instructive for you to prove the converse to see that these are equivalent definitions.</p>
977,956
<p>Can you help me solve this problem?</p> <blockquote> <p>Simplify: $\sin \dfrac{2\pi}{n} +\sin \dfrac{4\pi}{n} +\ldots +\sin \dfrac{2\pi(n-1)}{n}$.</p> </blockquote>
Rohinb97
80,473
<p>There is a formula for a sum of sines whose angles are in A.P.:</p> <p>$$\sin A +\sin(A+D)+\cdots+\sin\bigl(A+(n-1)D\bigr)=\frac{\sin(nD/2)\,\sin\!\bigl(A+\frac{(n-1)D}{2}\bigr)}{\sin(D/2)}.$$</p> <p>Use it to get $\sin 2\pi$ in the numerator, which gives your answer $0$.</p> <p>Also, a complex approach works as well: the sum of the $n^{th}$ roots of unity is $0$, so its imaginary part is $0$ as well.</p>
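<p>A quick numerical check of the claimed cancellation (an illustration, not part of the original hint):</p>

```python
import math

# sum_{k=1}^{n-1} sin(2*pi*k/n) should vanish for every n >= 2,
# matching the roots-of-unity argument above.
for n in range(2, 50):
    s = sum(math.sin(2 * math.pi * k / n) for k in range(1, n))
    assert abs(s) < 1e-12
```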
115,081
<p>I hope this is the best place to ask this. I will begin my master's degree very soon, and I've already attended the regular undergraduate courses, including Real Analysis, Analysis on Manifolds, Abstract Algebra, Field Theory, point-set topology, Algebraic Topology, etc... I like algebraic topology very much and I find it really beautiful. I would like to know which areas of algebraic topology are the most interesting to begin to work with, and which books I can study with my background in order to get the prerequisites to begin to study this subject. </p> <p>I want, as soon as possible, to have a "taste" of a current research field in algebraic topology, and I know that an algebraic topologist can give me a "shortest way" while I attend the regular courses of my master's degree.</p> <p>Thank you</p>
David C
27,816
<p>If you want to learn about algebraic topology, you can begin with very classical readings. When I was a Ph.D. student, I first read Milnor and Stasheff's book on "Characteristic classes"; here you will learn a lot of differential and algebraic topology. There are so many good books to read: J. F. Adams' "Infinite loop spaces" or his blue book on "stable homotopy and generalised homologies", and J. Milnor on Morse theory. </p> <p>I highly recommend Andrew Ranicki's homepage, where you will find a lot of cool stuff about algebraic and geometric surgery, PL-topology, and exotic spheres. Jacob Lurie also has some very good notes from his courses on his homepage. Dan Freed is giving a course on cobordism and his notes are very nice; you will find plenty of references there. You can also look at H. Miller's notes "Notes on cobordism" and "Vector fields on spheres" (just google them). And J. P. May also has a list of very good books on his homepage.</p> <p>And above all, read classical papers by Adams, Pontryagin, Quillen, Serre, Sullivan, Thom... John Francis has a list of classical papers for the Kan seminar on his homepage.</p> <p>I am sure my list is too long and I have forgotten plenty of good references (homepages, course notes and books).</p>
4,196,185
<p>Let <span class="math-container">$A$</span> be a <span class="math-container">$n\times n$</span> matrix with minimal polynomial <span class="math-container">$m_A(t)=t^n$</span>, i.e. a matrix with <span class="math-container">$0$</span> in the main diagonal and <span class="math-container">$1$</span> in the diagonal above the main diagonal.</p> <p>How can I show that the minimal polynomial of <span class="math-container">$e^A$</span> is <span class="math-container">$m_{e^A}(t)=(t-1)^n$</span>? I have already calculated that <span class="math-container">$e^A$</span> is a matrix with <span class="math-container">$1$</span> in the main diagonal, <span class="math-container">$1$</span> in the diagonal above the main diagonal, <span class="math-container">$\frac{1}{2}$</span> in 2nd diagonal above the main diagonal and so forth until <span class="math-container">$\frac{1}{(n-1)!}$</span> in the upper right entry. But there must be some easier way to justify <span class="math-container">$(t-1)^n$</span> as the minimal polynomial besides brute force calculation?</p>
The_Sympathizer
11,172
<p>Generically, any solutions to</p> <p><span class="math-container">$$f(x + 1) = x f(x)$$</span></p> <p>can be given as <span class="math-container">$f(x) = \Gamma(x)\ \theta(x)$</span> for some 1-cyclic function <span class="math-container">$\theta$</span> such that <span class="math-container">$\theta(0) = 1$</span>. Likewise, a similar <span class="math-container">$c$</span>-cyclic deformation of each other equation is sufficient to generate all solutions once a &quot;natural&quot; representative has been found.</p> <p>The question, then, is what is the most &quot;natural&quot; representative - particularly in the last case, which subsumes the other two. This is actually quite tricky, because while we can construct such a representative for parts of the problem, it seems difficult to extend it to suitably general classes of functions. So this can only be a partial answer at best; I cracked at this for a long time about a decade ago, but hadn't had tremendous success. What follows is a recap of how far I got.</p> <p>The first thing to note in this regard is that simple argument scalings will convert your (4) into</p> <p><span class="math-container">$$f(x + 1) = h(x) f(x)$$</span></p> <p>i.e. we don't need an arbitrary step of <span class="math-container">$c$</span>. Next, observe that this is equivalent to the <em>continuum product</em> problem, i.e. generalizing</p> <p><span class="math-container">$$f(x) = \prod_{n=1}^{x} h(n)$$</span></p> <p>to cases where the term count <span class="math-container">$x$</span> is a continuous real number. 
And then by dropping a logarithm over it, we see that</p> <p><span class="math-container">$$\log f(x) = \sum_{n=1}^{x} \log(h(n))$$</span></p> <p>and thus we relate this problem to the <em>continuum sum</em> - the question of generalizing</p> <p><span class="math-container">$$\sum_{n=1}^{x} a(n)$$</span></p> <p>to cases where <span class="math-container">$x$</span> is a continuous real number.</p> <p>And thus we ultimately need to consider the continuum <em>sum</em> problem - which is the most directly tractable form: the closest there seems to be to a &quot;natural&quot; way to do this is to consider analytic <span class="math-container">$a$</span> (or log-analytic), such that</p> <p><span class="math-container">$$a(x) = \sum_{k=0}^{\infty} b_k x^k$$</span></p> <p>then we can use Faulhaber's formula, which says that</p> <p><span class="math-container">$$\sum_{k=1}^{K} k^p = \frac{B_{p+1}(K+1) - B_{p+1}(1)}{p + 1}$$</span></p> <p>where <span class="math-container">$B_{m}$</span> is the <span class="math-container">$m$</span>-th Bernoulli polynomial, so that we should define</p> <p><span class="math-container">$$\sum_{n=1}^{x} a(n) := \sum_{k=0}^{\infty} b_k \left(\frac{B_{k+1}(x+1) - B_{k+1}(1)}{k + 1}\right)$$</span></p> <p>and this works for a variety of functions, but it also <em>fails</em> for many more. In particular, this sum <em>diverges</em> even for <span class="math-container">$a(x) = \log x$</span>, meaning we cannot even recover the usual Gamma function, and I am not sure if there is any way to &quot;analytically continue&quot; it to arbitrary analytic functions - it quickly runs into being a complicated problem in &quot;analytic continuation of operators&quot; and I've not had luck before in finding an answer.</p>
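<p>As a hedged sanity check of Faulhaber's formula in the form $\sum_{k=1}^{K} k^p = \frac{B_{p+1}(K+1) - B_{p+1}(1)}{p+1}$, here is a small exact-arithmetic verification (a sketch assuming the usual $B_1=-\frac12$ convention for the Bernoulli numbers):</p>

```python
from fractions import Fraction
from math import comb

# Bernoulli numbers via the recurrence sum_{k=0}^{m} C(m+1,k) B_k = 0
def bernoulli_numbers(n):
    B = [Fraction(1)]
    for m in range(1, n + 1):
        s = sum(comb(m + 1, k) * B[k] for k in range(m))
        B.append(-s / (m + 1))
    return B

# Bernoulli polynomial B_p(x) = sum_{k=0}^{p} C(p,k) B_k x^(p-k)
def bernoulli_poly(p, x):
    B = bernoulli_numbers(p)
    return sum(comb(p, k) * B[k] * Fraction(x) ** (p - k) for k in range(p + 1))

# Verify Faulhaber for small exponents and ranges, exactly
for p in range(1, 6):
    for K in range(1, 10):
        lhs = sum(k ** p for k in range(1, K + 1))
        rhs = (bernoulli_poly(p + 1, K + 1) - bernoulli_poly(p + 1, 1)) / (p + 1)
        assert lhs == rhs
```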
307,701
<p>Show that if $G$ is a finite group with identity $e$ and with an even number of elements, then there is an $a \neq e$ in $G$, such that $a \cdot a = e$.</p> <p>I read the solutions here <a href="http://noether.uoregon.edu/~tingey/fall02/444/hw2.pdf" rel="nofollow">http://noether.uoregon.edu/~tingey/fall02/444/hw2.pdf</a></p> <p>Why do they say $D = \{a, a^\prime\}$? Isn't $D$ not a group? There is no identity and if they include the identity they get 3 elements, which means $|D| = 3 = $ odd.</p>
Daniel McLaury
3,296
<p>You're right that $D$ is not a subgroup of $G$, but they don't claim that it is.</p> <p>They're also not saying that $D = \{a, a^{-1}\}$, but rather that the elements of $D$ appear in pairs -- if $a \in D$, then $(a^{-1})^2 = a^{-2} = (a^2)^{-1} \neq e$.</p> <p>Go back through the proof and see if it makes sense now.</p>
3,844,256
<p>How can one prove the following deduction? Assume we know the following result.</p> <p><span class="math-container">$$ \frac{1}{2}\arctan\left( \frac{y}{x+1} \right) + \frac{1}{2}\arctan\left( \frac{y}{x-1} \right) - \arctan\left( \frac{y}{x} \right) = c$$</span></p> <p>Then, it is claimed that this is equivalent to the following. I am not able to figure out why.</p> <p><span class="math-container">$$ \frac{(x^2+y^2)^2+y^2-x^2}{xy} = k$$</span></p> <p>I am aware of the formula for addition <span class="math-container">$\arctan(x) + \arctan(y)$</span> but I am not sure how to deal with prefactors of <span class="math-container">$1/2$</span>.</p>
Kavi Rama Murthy
142,385
<p>Suppose <span class="math-container">$A\cup \{p\}=C\cup D$</span> is a separation of <span class="math-container">$A\cup \{p\}$</span>. Verify that <span class="math-container">$X=(C\cup D) \cup B$</span> and that this gives a separation of <span class="math-container">$X$</span>. [Here <span class="math-container">$C \cup D$</span> and <span class="math-container">$B$</span> are disjoint non-empty open sets in <span class="math-container">$X$</span>. I am assuming that <span class="math-container">$\{p\}$</span> is closed, which is true if <span class="math-container">$X$</span> is Hausdorff.]</p>
2,572,304
<p>Cauchy's Inequality states that, $$ \forall a, b \in R^{n}, |a \cdot b| \leq |a||b| $$. However, the dot product is $$ x \cdot y = x_{1}y_{1}+...+x_{n}y_{n}$$ while the norm of x is $$ |x| = \sqrt[2]{x_{1}^{2} +...+x_{n}^{2}} = \sqrt[2]{x \cdot x}$$. Therefore, $$ |a \cdot b| = \sqrt[2]{(a \cdot b) \cdot (a \cdot b)}$$</p> <p>How does one calculate $$ (a \cdot b) \cdot (a \cdot b)$$? Since $$ (a \cdot b) \in R^{n} $$ when n = 1, is $$ (a \cdot b) \cdot (a \cdot b) $$ just multiplication of real numbers? (For some reason, I always thought that $$ n \geq 2 $$.)</p>
The Phenotype
514,183
<p>With $(a \cdot b) \color{red}{\cdot} (a \cdot b)$ is meant that $\cdot$ is the dot product and that $\color{red}{\cdot}$ is multiplication.</p> <p>It helps to use distinguishable notation, so use for example $\langle a,b\rangle $ for dot products and $a_1\cdot b_1$ for multiplication.</p>
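<p>A tiny numeric illustration of the point (the vectors below are arbitrary hypothetical choices): $a\cdot b$ is a single real number, so $(a\cdot b)\color{red}{\cdot}(a\cdot b)$ is ordinary multiplication, and Cauchy's inequality holds for it.</p>

```python
import math

# a . b is a scalar, so "squaring" it is plain multiplication of reals.
def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

a = [1.0, -2.0, 3.0]
b = [4.0, 0.5, 1.0]
s = dot(a, b)                     # a scalar, not a vector
assert isinstance(s, float)
assert s * s == abs(s) ** 2       # multiplication of real numbers
# Cauchy's inequality |a.b| <= |a| |b|
assert abs(s) <= math.sqrt(dot(a, a)) * math.sqrt(dot(b, b))
```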
345,888
<p>$S$ is a vector subspace of $S$ if $S$ is a vector space; by hypothesis $S$ is a vector space, so $S$ is a vector subspace of $S$.</p> <p>But I can also prove it by contradiction: suppose $S$ is not a vector subspace of $S$. But if $S$ is not a vector subspace of $S$, then $S$ is not a vector space, and I have a contradiction: in fact, by hypothesis $S$ is a vector space, so $S$ is a vector subspace of $S$. Is it correct? Thank you in advance!!</p>
Zarrax
3,035
<p>Hint: In complex form your sum is $c_0 + \sum_{n = 1}^{\infty} c_nz^n + \sum_{n = 1}^{\infty} c_n\bar{z}^n$. Write this as a real part of an analytic function on the unit disc. (By the way, I'm assuming the $c_n$ are real, otherwise it's not necessarily a real-valued function).</p>
3,601,552
<p>A school has <span class="math-container">$500$</span> girls and <span class="math-container">$500$</span> boys. A simple random sample is obtained by selecting names from a box (with replacement) to get a sample of <span class="math-container">$10$</span>. </p> <p>Find the probability of someone being picked more than once.</p> <p>My working is: <span class="math-container">\begin{align} P(\text{being picked more than once}) &amp;= 1 - P(\text{not being picked}) - P(\text{being picked exactly once})\\ &amp;=1 - \left(\frac{999}{1000}\right)^{10} - 10\left(\frac{1}{1000}\right)\left(\frac{999}{1000}\right)^9 = 0.00004476 \end{align}</span> However, this is not correct. Any ideas?</p>
David K
139,123
<p>This is a variation on the Birthday Problem, with names instead of birthdays and drawings from the hat instead of people in a room.</p> <p>The answer is <span class="math-container">$1$</span> minus the probability that all ten names are different, which is the product of the probabilities that the <span class="math-container">$n$</span>th name is different from all previous names, given that all previous names were unique. The probability for the <span class="math-container">$n$</span>th name is <span class="math-container">$\frac{1000 + 1 - n}{1000},$</span> so the answer is</p> <p><span class="math-container">$$ \frac{1000}{1000} \cdot \frac{999}{1000} \cdot \frac{998}{1000} \cdot \frac{997}{1000} \cdot \frac{996}{1000} \cdot \frac{995}{1000} \cdot \frac{994}{1000} \cdot \frac{993}{1000} \cdot \frac{992}{1000} \cdot \frac{991}{1000} = \frac{1000!}{1000^{10}\,990!}.$$</span></p>
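<p>A quick numerical check of the product and the closed form above (an illustration only):</p>

```python
import math

# Probability that all ten drawn names are distinct
# (1000 names, 10 draws with replacement).
p_all_distinct = 1.0
for n in range(1, 11):
    p_all_distinct *= (1000 + 1 - n) / 1000

# Same quantity via the closed form 1000! / (1000^10 * 990!)
closed_form = math.perm(1000, 10) / 1000**10
assert abs(p_all_distinct - closed_form) < 1e-12

p_repeat = 1 - p_all_distinct
assert 0.043 < p_repeat < 0.045   # roughly a 4.4% chance of a repeat
```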
143,274
<p>I am trying to find the derivative of $\sqrt{9-x}$ using the definition of a derivative </p> <p>$$\lim_{h\to 0} \frac {f(a+h)-f(a)}{h} $$</p> <p>$$\lim_{h\to 0} \frac {\sqrt{9-(a+h)}-\sqrt{9-a}}{h} $$</p> <p>So to simplify I multiply by the conjugate</p> <p>$$\lim_{h\to0} \frac {\sqrt{9-(a+h)}-\sqrt{9-a}}{h}\cdot \frac{ \sqrt{9-(a+h)}+ \sqrt{9-a}}{\sqrt{9-(a+h)}+\sqrt{9-a}}$$</p> <p>which gives me </p> <p>$$\frac {-2a-h}{h(\sqrt{9-(a+h)}+\sqrt{9-a})}$$</p> <p>I have no idea what to do from here, obviously I can easily get the derivative using other methods but with this one I have no idea how to proceed.</p>
Garmen1778
26,711
<p>As other answers point out, you made an error while multiplying. You also forgot to keep the limit in your last equation. \begin{align} f'(a)&amp;=\lim\limits_{h\to0}\left(-\frac{h}{h(\sqrt{9-(a+h)}+\sqrt{9-a})}\right)\\ &amp;=\lim\limits_{h\to0}\left(-\frac{1}{\sqrt{9-(a+h)}+\sqrt{9-a}}\right)\\ &amp;=-\frac{1}{\sqrt{9-a}+\sqrt{9-a}}\\ &amp;=-\frac{1}{2\sqrt{9-a}} \end{align} But this can be done much more easily as follows:</p> <p>\begin{align} f(x)&amp;=\sqrt{9-x}\\ &amp;=(9-x)^{1/2}\\ f'(x)&amp;=-\frac{1}{2}(9-x)^{-1/2}\\ &amp;=-\frac{1}{2\sqrt{9-x}} \end{align}</p>
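<p>A numerical spot check of the result (an illustration; the difference quotient should approach the claimed derivative at any $a&lt;9$):</p>

```python
import math

# Compare the difference quotient of f(a) = sqrt(9 - a)
# with the claimed derivative -1 / (2 sqrt(9 - a)).
def f(a):
    return math.sqrt(9 - a)

def derivative(a):
    return -1 / (2 * math.sqrt(9 - a))

h = 1e-6
for a in (0.0, 2.5, 5.0, 8.0):
    quotient = (f(a + h) - f(a)) / h
    assert abs(quotient - derivative(a)) < 1e-4
```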
1,663,838
<p>Show that a positive integer $n \in \mathbb{N}$ is prime if and only if $\gcd(n,m)=1$ for all $0&lt;m&lt;n$.</p> <p>I know that I can write $n=km+r$ for some $k,r \in \mathbb{Z}$ since $n&gt;m$</p> <p>and also that $1=an+bm$. for some $a,b \in \mathbb{Z}$</p> <p>Further, I know that $n&gt;1$ if I'm to show $n$ is prime.</p> <p>I'm not sure how I would go about showing this in both directions though.</p>
Nitrogen
189,200
<p><strong>Hint:</strong> If $d$ divides $n$, then $\gcd(d,n)=d$.</p>
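<p>A brute-force check of the stated equivalence for small $n$ (an illustration, not part of the hint; it starts at $n=2$, since for $n=1$ the condition on $m$ is vacuous while $1$ is not prime):</p>

```python
from math import gcd

# n is prime iff gcd(n, m) = 1 for all 0 < m < n, checked for 2 <= n < 200
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, n))

for n in range(2, 200):
    coprime_to_all_smaller = all(gcd(n, m) == 1 for m in range(1, n))
    assert coprime_to_all_smaller == is_prime(n)
```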
2,798,598
<p>We have the series $\sum\limits_{n=1}^{\infty} \frac{(-1)^n n^3}{(n^2 + 1)^{4/3}}$. I know that it diverges, but I'm having some difficulty showing this. The most intuitive argument is perhaps that the absolute value of the series behaves much like $\frac{n^3}{\left(n^2\right)^{4/3}} = \frac{1}{n^{-1/3}}$, which diverges, though this doesn't seem like it would disprove the fact that we could be dealing with a conditionally-convergent series. The computation of the limit, even of the absolute value of the general term, also seems nearly impossible to do by hand, as successive applications of L'Hospital's Rule seem to produce a result just as disorderly as what I started with. Limit comparison also doesn't quite seem to work, especially with the alternating factor.</p> <p>Thanks in advance for any insights on this. </p>
Jason
432,654
<p>Your reasoning is fairly sound, but you are thinking about it a little too hard. Instead of manipulating this as a series and trying to reach a p-series comparison, apply the test for divergence (or nth-term test, depending on your book) to see whether the general term itself goes to zero with $n$.</p> <p>Edit: Sorry, I missed the last part of your question; I thought you were trying to compute the series by hand.</p> <p>Notice that $a_n \rightarrow 0$ iff $|a_n| \rightarrow 0$ for this sequence. A couple of applications of L'Hospital's rule should get you to something whose convergence/divergence you can determine (in fact, one application followed by simplifying should do it).</p> <p>Alternatively, for those who don't like L'Hospital's rule, you can divide the top and bottom by $x^3$. It helps to write the bottom $x^3$ as $(x^9)^{\frac{1}{3}}$. (Edit: fixed $x^2$ to $x^3$.)</p>
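<p>Numerically, the nth-term test is easy to see here (a quick illustration): $|a_n| = n^3/(n^2+1)^{4/3}$ grows like $n^{1/3}$ rather than tending to $0$.</p>

```python
# |a_n| = n^3 / (n^2 + 1)^(4/3); compare with n^(1/3)
def term(n):
    return n**3 / (n**2 + 1)**(4 / 3)

assert term(10) < term(100) < term(10**4) < term(10**6)
assert term(10**6) > 50                       # terms grow without bound
assert abs(term(10**6) / 10**2 - 1) < 0.01    # ~ n^(1/3) = 100 here
```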
945,651
<p>Use mathematical induction to prove the following statement:</p> <p>For all $b\in\mathbb R$, and for all $n\in\mathbb N$, $$b&gt;-1\implies (1+b)^n \geq 1+nb$$</p> <p>When $n=1$, the inequality still holds: $1+b \geq 1+b$.</p> <p>For $n+1$: $$(1+b)^{n+1} \geq 1+(n+1)b$$ Here I'm not sure of the best way to simplify... $$(1+b)^n(1+b)\geq 1+bn+b$$</p>
beep-boop
127,192
<p>A few points:</p> <ul> <li><p>The base case, in this case, is $n=0,$ so that's what you should verify, not $n=1.$</p></li> <li><p>$$\color{green}{(1+b)^{n+1}} \equiv \underbrace{(1+b)^n(1+b) \geq (1+bn)(1+b)}_{\text{induction hypothesis}} \equiv 1+(n+1)b+\underbrace{b^2n}_{\geq 0} \color{green}{\geq 1+(n+1)b} .$$</p></li> <li><p>Never say something like "let $n=n+1$". It makes no sense, algebraically. You can say that if the result is true for $n$, then it's true for $n+1$, <strong>or</strong> you can say "if the result is true for $n=k$, then it's true for $n=k+1.$"</p></li> </ul>
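<p>A quick floating-point spot check of the inequality for several $b&gt;-1$ (an illustration only, of course not a proof):</p>

```python
# Bernoulli's inequality: (1+b)^n >= 1 + n*b for b > -1, n = 0, 1, 2, ...
# (small tolerance for floating-point rounding)
for b in (-0.9, -0.5, 0.0, 0.3, 2.0):
    for n in range(0, 15):
        assert (1 + b) ** n >= 1 + n * b - 1e-12
```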
905,685
<p>Let the balls be labelled $1,2,3,\ldots,n$ and the boxes be labelled $1,2,3,\ldots,n$. </p> <p>Now I want to find: </p> <ul> <li><p>What is the expected value of the minimum label among the boxes which are non-empty? </p></li> <li><p>What is the expected number of boxes with exactly one ball in them? </p></li> </ul> <hr> <p>Whichever way I think about it, I get complicated summation-form answers and not any exact closed form! </p>
Marko Riedel
44,883
<p>Here is an approach using labelled species and exponential generating functions. <P> For the <b>first</b> problem we have the species $$\sum_{q=1}^n \mathcal{U}^q \times \mathfrak{P}_{\ge 1}(\mathcal{Z}) \times \mathfrak{P}(\mathcal{Z})^{n-q}$$ with $\mathcal{U}$ marking the end of the intial segment of empty bins.</p> <p>This yields the generating function $$\sum_{q=1}^n u^q (\exp(z)-1) \exp(z)^{n-q} = (\exp(z)-1) \exp(z)^n \sum_{q=1}^n u^q \exp(z)^{-q}.$$ Some algebra produces $$(\exp(z)-1) \exp(z)^{n-1} \times u \frac{(u/\exp(z))^n - 1}{u/\exp(z)-1}.$$ which is $$(\exp(z)-1) \frac{u^{n+1} - u \exp(z)^n}{u-\exp(z)}.$$</p> <p>Differentiate and put $u=1$ to obtain the EGF of the count $$\left.(\exp(z)-1) \left(\frac{(n+1)u^n - \exp(z)^n}{u-\exp(z)} - \frac{u^{n+1} - u \exp(z)^n}{(u-\exp(z))^2} \right)\right|_{u=1} \\= (\exp(z)-1) \left(\frac{(n+1) - \exp(z)^n}{1-\exp(z)} - \frac{1 - \exp(z)^n}{(1-\exp(z))^2} \right) \\ = \frac{1 - \exp(z)^n}{1-\exp(z)} + \exp(z)^n - (n+1) = -(n+1) + \frac{1 - \exp(z)^{n+1}}{1-\exp(z)}.$$</p> <p>Performing coefficient extraction we obtain for $n\ge 1$ the answer $$\frac{1}{n^n} n! [z^n] \left(-(n+1) + \frac{1 - \exp(z)^{n+1}}{1-\exp(z)}\right) = \frac{1}{n^n} n! [z^n] \left(-(n+1) + \sum_{q=0}^n \exp(z)^q\right) \\ = \frac{1}{n^n} n! \sum_{q=1}^n \frac{q^n}{n!} = \frac{1}{n^n} \sum_{q=1}^n q^n = 1 + \frac{1}{n^n} \sum_{q=1}^{n-1} q^n.$$</p> <p>For the <b>second</b> problem the species is $$\mathfrak{S}_{=n} \left(\mathcal{U}\mathcal{Z}+\mathfrak{P}_{\ne 1}(\mathcal{Z})\right).$$ with $\mathcal{U}$ marking singletons. <P> This gives the generating function $$\left(uz + \exp(z) - z\right)^n.$$</p> <p>For the expected number of singletons differentiate with respect to $u$ and set $u=1$ to obtain </p> <p>$$\left. n \left(uz + \exp(z) - z\right)^{n-1} \times z\right|_{u=1} = n z \exp(z)^{n-1}.$$</p> <p>Finally extract coefficients for the expectation which is $$\frac{1}{n^n} n! [z^n] n z \exp(z)^{n-1} = \frac{1}{n^{n-1}} n! 
[z^{n-1}] \exp(z)^{n-1} = \frac{1}{n^{n-1}} n! \frac{(n-1)^{n-1}}{(n-1)!} \\= \frac{1}{n^{n-1}} n (n-1)^{n-1} = \frac{(n-1)^{n-1}}{n^{n-2}}.$$</p> <p>These results match the first answer.</p> <p><strong>Remark.</strong> Since $$\frac{(n-1)^{n-1}}{n^{n-2}} = (n-1)\left(1-\frac{1}{n}\right)^{n-2} = (n-1)\left(1-\frac{1}{n}\right)^n \left(1-\frac{1}{n}\right)^{-2} \\ = \frac{n^2}{n-1} \left(1-\frac{1}{n}\right)^n$$ the second expectation is approximately $$\frac{1}{e} \frac{n^2}{n-1}$$ for large $n$, because $\left(1-\frac{1}{n}\right)^n\to e^{-1}$.</p>
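<p>Both closed forms are easy to corroborate by exhaustive enumeration for small $n$ (a hedged illustration in exact arithmetic; $n\le 5$ keeps the $n^n$ loop small):</p>

```python
from itertools import product
from fractions import Fraction

# Enumerate all n^n assignments of n labelled balls to n labelled boxes
# and average (a) the minimum label of a non-empty box and
# (b) the number of boxes holding exactly one ball.
def expectations(n):
    boxes = range(1, n + 1)
    total_min = Fraction(0)
    total_singletons = Fraction(0)
    for assignment in product(boxes, repeat=n):
        total_min += min(assignment)   # smallest label among occupied boxes
        total_singletons += sum(1 for b in boxes if assignment.count(b) == 1)
    return total_min / n**n, total_singletons / n**n

for n in (2, 3, 4, 5):
    e_min, e_single = expectations(n)
    assert e_min == 1 + Fraction(sum(q**n for q in range(1, n)), n**n)
    assert e_single == Fraction((n - 1)**(n - 1), n**(n - 2))
```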
256,612
<p>I've found assertions that recognising the unknot is in NP (but not explicitly NP-hard or NP-complete). I've found hints that people are looking for untangling algorithms that run in polynomial time (which implies they may exist). I've found suggestions that recognition and untangling require exponential time. (Untangling is a form of recognising.) </p> <p>I suppose I'm asking whether there exists (1) a "diagram" of a knot, (2) a "cost" measure of the diagram, (3) a "move" which can be applied to the diagram, (4) the "move" always reduces the "cost", (5) the "move" can be selected and applied in polynomial time, (6) the "cost" can be calculated in polynomial time.</p> <p>For instance, Reidemeister moves fail on number (4) if the "cost" is the number of crossings.</p> <p>So what is the current status of the problem?</p> <p>Thanks</p> <p>Peter</p>
Peter Balch
102,151
<p>I seem to have a polynomial-time algorithm that untangles the unknot. But I suspect that hubris lurks around every corner in this game.</p> <p>I'm pretty sure I can show that the algorithm runs in polynomial time. But I now realise that I don't know if it is always able to simplify every tangle.</p> <p>As always, the trick is to find a representation that makes it obvious what moves are legal and that has a measure of "cost" that decreases after every "move".</p> <p>My attempt treats the knot as a printed circuit board. The measure of cost is the number of vias. Using vias to measure cost rather than "crossings" turns out to be advantageous. A legal move reroutes a track segment. Rerouting a single track segment is always legal. If the number of vias reaches zero, the tangle was a simple loop. </p> <p>I've written a Windows program that runs the algorithm and tried it with all the knots I can find on the web. It untangles all the knots that I know to be simple loops and doesn't untangle those I know to be truly knotted. Unfortunately, there are some published knots for which it's unclear whether they are knotted or not.</p> <p>I'll try to upload a PDF and an EXE here if this forum allows it; otherwise I'll make a web page.</p>
881,141
<p>Let $A$ and $B$ be two covariance matrices such that $AB=BA$. Is $AB$ a covariance matrix?</p> <p>A covariance matrix must be symmetric and positive semi definite. The symmetry of $AB$ can be proved as follows: $$(AB)^T = B^TA^T = BA = AB$$</p> <p>The question is, how to prove or disprove the positive semi definitive character of $AB$?</p>
Horst Grünbusch
88,601
<p>Expanding the word "immediately" of Quang Hoang's answer:</p> <p>$$AB = QD_AQ'QD_BQ' = QD_A D_B Q', $$ </p> <p>where $D_A$ and $D_B$ are the respective diagonal matrices of $A$ and $B$. The diagonal entries of these matrices are nonnegative, and so are the diagonal entries of the product $D_A D_B$. </p>
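<p>A small $2\times 2$ illustration of exactly this factorization (the numbers are hypothetical): with a common orthogonal $Q$, $AB=QD_AD_BQ'$ comes out symmetric with nonnegative eigenvalues.</p>

```python
import math

# Commuting covariance matrices built from a shared orthogonal eigenbasis Q.
t = 0.7
Q = [[math.cos(t), -math.sin(t)], [math.sin(t), math.cos(t)]]

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(X):
    return [[X[j][i] for j in range(2)] for i in range(2)]

DA = [[2.0, 0.0], [0.0, 0.5]]     # nonnegative eigenvalues of A
DB = [[3.0, 0.0], [0.0, 1.0]]     # nonnegative eigenvalues of B
A = mat_mul(mat_mul(Q, DA), transpose(Q))
B = mat_mul(mat_mul(Q, DB), transpose(Q))
AB = mat_mul(A, B)

# AB is symmetric ...
assert abs(AB[0][1] - AB[1][0]) < 1e-12
# ... and its eigenvalues are the products 2*3 and 0.5*1, both >= 0:
tr = AB[0][0] + AB[1][1]
det = AB[0][0] * AB[1][1] - AB[0][1] * AB[1][0]
assert abs(tr - 6.5) < 1e-9 and abs(det - 3.0) < 1e-9
```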
222,093
<p>For what value of m does equation <span class="math-container">$y^2 = x^3 + m$</span> has no integral solutions?</p>
dinoboy
43,912
<p>None of the solutions posted look right (I don't think this problem admits a solution by just looking modulo some integer, but possibly I'm wrong). Here is a proof for $m = 7$, i.e. for the equation $y^2 = x^3 + 7$.</p> <p>First, by looking modulo $8$ one deduces we need $x$ to be odd.</p> <p>Note that $y^2 + 1^2 = x^3 + 8 = (x+2)(x^2 - 2x + 4)$. As the LHS is a sum of two coprime squares, no prime $p \equiv 3 \pmod{4}$ divides it. This forces $x \equiv 3 \pmod{4}$, as if $x \equiv 1 \pmod{4}$ then $x+2$ obviously has a prime factor $3 \pmod{4}$. But then $x^2 - 2x + 4 \equiv 3 \pmod{4}$, implying $x^2 - 2x + 4$ has a prime factor $3 \pmod{4}$. But this is a contradiction; thus no $x,y$ can exist to satisfy this equation.</p>
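<p>The conclusion is easy to corroborate by brute force for the case $m=7$ implicit in the factorization above (an illustrative finite search only, of course not a proof):</p>

```python
import math

# Does y^2 = x^3 + 7 have an integer solution at this x?
# (x >= -1 is forced, since x^3 + 7 must be a nonnegative square.)
def has_solution(x):
    t = x**3 + 7
    if t < 0:
        return False
    r = math.isqrt(t)
    return r * r == t

assert not any(has_solution(x) for x in range(-1, 100001))
```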
2,441,630
<p>The operator given is the right-shift operator $T$ on $l^2$. We show that $\lambda=1$ is in the residual spectrum. Therefore we show that $(I-T)$ is injective but fails to have a dense range. While injectivity is clear, I fail to understand why the following shows that the range is not dense:</p> <p>Let $y=(I-T)x$. Then $y(k)+ \cdots + y(1) = x(k)$ where $x(k)$ is the $k$-th entry of the sequence. Now for every $x\in l^2$ we have $\lim_{k\rightarrow \infty} x(k) = 0$ which then forces $|y(k) + \cdots y(1)| \rightarrow 0$ as well. Now this means that $(\alpha,0,0,...)$ cannot be in the range of $(I-T)$ which implies that the range of $(I-T)$ lies in the orthogonal complement of $(1,0,0,...)$.</p> <p>Now first of all: <strong>why does it imply that it lies in the complement?</strong> and second of all <strong>why does that imply that the range isn't dense?</strong></p>
Nate Eldredge
822
<p>It's not correct. In fact the range of $I-T$ is dense and $\lambda = 1$ is not in the residual spectrum, but rather in the continuous spectrum.</p> <p>Indeed, suppose $y$ is in the orthogonal complement of the range of $I-T$, so that $((I-T)x, y) = 0$ for all $x$. This implies $0 = (x, (I-T^*)y)$ so $T^* y = y$. Now $T^*$ is the left shift operator so this can only happen if $y(k) = y(k+1)$ for all $k$, which only happens for $y \in l^2$ if $y=0$. So $I-T$ does have dense range. </p> <p>The argument you gave does correctly show that $(1,0,0,\dots)$ is not in the range of $I-T$ and thus $\lambda = 1$ is in the spectrum of $I-T$. It is clearly not an eigenvalue (if $Tx=x$ then $x(1) = 0$ and $x(k+1) = x(k)$ for all $k$) so it must be continuous spectrum.</p>
69,590
<p>Consider the following code.</p> <pre><code>f[a_,b_]:=x x=a+b; f[1,2] (* a + b *) </code></pre> <p>From a certain viewpoint, one might expect it to return <code>3</code> instead of <code>a + b</code>: the symbols <code>a</code> and <code>b</code> are defined during the evaluation of <code>f</code> and <code>a+b</code> should evaluate to their sum.</p> <p>Why is this viewpoint wrong? What's the right way to make it behave the way I want it to? (Something more clever than <code>f[p_,q_]:=x/.{a-&gt;p,b-&gt;q};</code>.) </p>
bill s
1,783
<p>One somewhat organized way to get what you want is to be very explicit about which expressions are functions and which are values. For example, your x is really a function of a and b, but you are writing x=a+b. If instead, you make the functional relationships explicit, then there is less chance of confusion. In the simplest case:</p> <pre><code>x[a_, b_] := a + b; f[a_, b_] := x[a, b]; f[1, 2] </code></pre> <p>Now it is clear why f[1,2] returns 3.</p>
1,810,729
<blockquote> <p>Let $G$ be a group generated by $x,y$ with the relations $x^3=y^2=(xy)^2=1$. Then show that the order of $G$ is 6.</p> </blockquote> <p><strong>My attempt:</strong> So writing down the elements of $G$ we have $\{1,x,x^2,y\}$. Other elements include $\{xy, xy^2, x^2y\}$; it seems I am counting more than $6$. Are some of these equal? How do I prove that?</p>
Dietrich Burde
83,966
<p>One group presentation for the dihedral group $D_n$ is $\langle x,y|x^2=1,y^n=1,(xy)^2=1\rangle $. Hence the group is indeed isomorphic to $D_3$. Here $x$ with $x^2=1$ corresponds to a reflection, and $y$ with $y^3=1$ to a rotation by $120$ degrees. Finally we have $xyx^{-1}=y^{-1}=y^2$, which is how rotation and reflection interact. So all elements are given by $\{1,y,y^2,x,xy,xy^2\}$.</p> <p>Sorry, I have interchanged $x$ and $y$ relative to the question here.</p>
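<p>One can corroborate the order concretely (a sketch, using the standard fact that $D_3\cong S_3$): the permutations $x=(1\,2\,3)$ and $y=(1\,2)$ satisfy the question's relations $x^3=y^2=(xy)^2=1$, so the presented group has a quotient of order $6$ and hence order at least $6$; the listing of six elements shows it is at most $6$.</p>

```python
# Permutations of {0,1,2} as tuples; (p ∘ q)(i) = p[q[i]]
def compose(p, q):
    return tuple(p[i] for i in q)

e = (0, 1, 2)
x = (1, 2, 0)               # the 3-cycle (1 2 3)
y = (1, 0, 2)               # the transposition (1 2)

assert compose(compose(x, x), x) == e      # x^3 = 1
assert compose(y, y) == e                  # y^2 = 1
xy = compose(x, y)
assert compose(xy, xy) == e                # (xy)^2 = 1

# Close {x, y} under composition to get the generated subgroup
G = {e}
frontier = [e]
while frontier:
    g = frontier.pop()
    for s in (x, y):
        h = compose(g, s)
        if h not in G:
            G.add(h)
            frontier.append(h)
assert len(G) == 6
```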
4,212,181
<p>A uniform cable that is 2 pounds per feet and is 100 feet long hangs vertically from a pulley system at the top of a building (and the building is also 100 feet tall).</p> <p>How much work is required to lift the cable until the bottom end of the cable is 20 feet below the top of the building?</p> <p><span class="math-container">$W=FD$</span></p> <p><span class="math-container">$F=2\Delta y$</span></p> <p><span class="math-container">$D=80-y$</span> ??</p> <p><span class="math-container">$\int_{0}^{100}2(80-y)dy$</span></p> <p>Or am I mixing up the limits of integration and the distance? Or am I completely wrong? Thanks for the help. I appreciate it.</p>
John Douma
69,810
<p>Your integral is wrong. As you know, the total work done is given by the total force exerted over a distance. In this case the force on the cable is variable. We know that the starting weight is <span class="math-container">$200$</span> pounds and decreases by <span class="math-container">$2$</span> pounds for every foot that the cable is raised. Therefore, the force is given by <span class="math-container">$$F(x)=200-2x$$</span> where <span class="math-container">$x$</span> is the distance above the ground of the bottom of the cable. From this we get that the total work is</p> <p><span class="math-container">$$\int_{0}^{80}(200-2x) dx=9600\text{ ft lb}$$</span></p>
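<p>A quick midpoint-rule check of the integral (an illustration; $x$ is how far the bottom end has been raised, from $0$ to $80$ ft):</p>

```python
# Midpoint Riemann sum for W = int_0^80 (200 - 2x) dx
N = 100000
dx = 80 / N
work = sum((200 - 2 * (i + 0.5) * dx) * dx for i in range(N))
assert abs(work - 9600) < 1e-6   # 9600 ft-lb, matching the exact value
```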
3,014,438
<p>Find Number of Non negative integer solutions of <span class="math-container">$x+2y+5z=100$</span></p> <p>My attempt: </p> <p>we have <span class="math-container">$x+2y=100-5z$</span> </p> <p>Considering the polynomial <span class="math-container">$$f(u)=(1-u)^{-1}\times (1-u^2)^{-1}$$</span></p> <p><span class="math-container">$\implies$</span></p> <p><span class="math-container">$$f(u)=\frac{1}{(1-u)(1+u)}\times \frac{1}{1-u}=\frac{1}{2} \left(\frac{1}{1-u}+\frac{1}{1+u}\right)\frac{1}{1-u}=\frac{1}{2}\left((1-u)^{-2}+(1-u^2)^{-1}\right)$$</span> </p> <p>we need to collect coefficient of <span class="math-container">$100-5z$</span> in the above given by</p> <p><span class="math-container">$$C(z)=\frac{1}{2} \left((101-5z)+odd(z)\right)$$</span></p> <p>Total number of solutions is</p> <p><span class="math-container">$$S(z)=\frac{1}{2} \sum_{z=0}^{20} 101-5z+\frac{1}{2} \sum_{z \in odd}1$$</span></p> <p><span class="math-container">$$S(z)=540.5$$</span></p> <p>what went wrong in my analysis?</p>
sirous
346,566
<p>I will find the number of solutions of the equation <span class="math-container">$5x+2y+z=10 n$</span> in general:</p> <p>Clearly the non-negative solutions <span class="math-container">$x_0, y_0, z_0$</span> of this equation correspond to the solution <span class="math-container">$x_0+2,y_0, z_0$</span> of the equation <span class="math-container">$5x+2y+z=10(n+1)$</span>. So for <span class="math-container">$x\ge 2$</span>, finding the solutions of <span class="math-container">$5x+2y+z=10(n+1)$</span> leads to the solutions of the first equation, provided we consider <span class="math-container">$x-2$</span> in the first equation.<br> If the number of solutions of the equation <span class="math-container">$5x+2y+z=10(n+1)$</span> is <span class="math-container">$\phi(n+1)$</span> and that of the equation <span class="math-container">$5x+2y+z=10n$</span> is <span class="math-container">$\phi(n)$</span>, then the difference between <span class="math-container">$\phi(n+1)$</span> and <span class="math-container">$\phi(n)$</span> equals the number of solutions of the equation <span class="math-container">$5x+2y+z=10(n+1)$</span> with <span class="math-container">$x=0$</span> or <span class="math-container">$x=1$</span>. But this equation has <span class="math-container">$5n+6$</span> solutions for <span class="math-container">$x=0$</span> (i.e. <span class="math-container">$0\le y\le 5n+5$</span>) and <span class="math-container">$5n+3$</span> solutions for <span class="math-container">$x=1$</span> (i.e. <span class="math-container">$0\le y\le 5n+2$</span>). 
Therefore we have:</p> <p><span class="math-container">$\phi(n+1)-\phi(n)=10n+9$</span></p> <p>We can also check directly that <span class="math-container">$\phi(1)=10$</span>, so we can write:</p> <p><span class="math-container">$\phi(1)=10$</span></p> <p><span class="math-container">$\phi(2)-\phi(1)=10\times 1+9$</span></p> <p><span class="math-container">$\phi(3)-\phi(2)=10\times 2+9$</span></p> <p><span class="math-container">$\vdots$</span></p> <p><span class="math-container">$\phi(n)-\phi(n-1)=10(n-1)+9$</span></p> <p>Summing these relations gives:</p> <p><span class="math-container">$\phi(n)=5n^2 +4n +1$</span></p> <p>In your question <span class="math-container">$n=10$</span>, therefore the number of solutions is <span class="math-container">$\phi(10)=5\cdot 10^2+4\cdot 10+1=541$</span>.</p>
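A quick brute-force cross-check in Python (not part of the argument above; it just verifies both the closed form <span class="math-container">$\phi(n)=5n^2+4n+1$</span> and the final count):

```python
def count_solutions(total):
    # non-negative integer solutions of x + 2y + 5z = total:
    # choose z and y freely within range; x is then determined
    return sum(1
               for z in range(total // 5 + 1)
               for y in range((total - 5 * z) // 2 + 1))

checks = {n: count_solutions(10 * n) for n in (1, 2, 3, 10)}
formula = {n: 5 * n * n + 4 * n + 1 for n in (1, 2, 3, 10)}
```

Both dictionaries agree, and `checks[10]` reproduces the 541 of the answer.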
118,232
<p>For example, I have</p> <pre><code>square = Graphics[Polygon[{{0, 0}, {0, 1}, {1, 1}, {1, 0}}]] </code></pre> <p>What functions can I apply to <code>square</code> to extract the coordinates of the polygon? It is necessary to do this kind of extraction when I have a graphics object as an argument of a function, and want to use coordinates taken from primitives within it to do some calculation. </p>
E. Chan-López
53,427
<p><strong>Hopf bifurcation analysis</strong></p> <p>The differential system:</p> <pre><code>f1[x_,y_]:=a x (1 - x/k) - b x y; f2[x_,y_]:=-c y + d x y; F[{x_,y_},{a_,b_,c_,d_,k_}]:=Evaluate@{f1[x,y],f2[x,y]}; X={x, y}; μ={a,b,c,d,k}; </code></pre> <p><span class="math-container">$$ \begin{align} &amp;\dot{x}=a x\left(1-\frac{x}{k} \right)- bxy\\ &amp;\dot{y}= dxy - cy \end{align} $$</span></p> <p>The Jacobian matrix:</p> <pre><code>J[{x_,y_},{a_,b_,c_,d_,k_}]:=Evaluate@D[F[X,μ],{X}] </code></pre> <p>The non-trivial equilibrium point:</p> <pre><code>X0=Normal[Simplify[SolveValues[F[X,μ]==0&amp;&amp;Variables[F[X,μ]]&gt;0,X]]][[1]]; MatrixForm@X0 </code></pre> <p><span class="math-container">$$ \begin{align} P_{0}(x,y)=\left(\frac{c}{d} ,\frac{a}{b}\left(1-\frac{c}{dk} \right)\right) \end{align} $$</span> The linear approximation at <span class="math-container">$P_{0}$</span> (coexistence equilibrium point):</p> <pre><code>J0=Simplify@J[X0,μ]; MatrixForm@J0 </code></pre> <p><span class="math-container">$$ \begin{align} J(P_{0})=\left( \begin{array}{cc} \hspace{-0.25cm}-\displaystyle\frac{a c}{d k} &amp; -\displaystyle\frac{b c}{d}\hspace{0.3cm} \\\\ \hspace{0.2cm}\displaystyle\frac{a (d k-c)}{b k} &amp;\hspace{0.2cm} 0 \\ \end{array} \right) \end{align} $$</span> Under the Hopf bifurcation conditions, <span class="math-container">$\text{tr}(J(P_{0},\mu_{0}))=0$</span> and <span class="math-container">$\text{det}(J(P_{0},\mu_{0}))&gt;0$</span>, where <span class="math-container">$\mu_{0}$</span> is the critical bifurcation value for some parameter of our system. In our case, the parameters are strictly positive and <span class="math-container">$\text{tr}(J(P_{0}))$</span> cannot be zero. Therefore, a Hopf bifurcation does not take place at <span class="math-container">$P_{0}$</span>. 
The non-trivial equilibrium <span class="math-container">$P_{0}$</span> is always locally stable and the only condition that must be fulfilled is given by the following inequality: <span class="math-container">$$ \frac{c}{d k}&lt;1 $$</span> Code for time series and phase portrait:</p> <p>Time series:</p> <pre><code>s = ParametricNDSolve[{x'[t] == a x[t] (1 - x[t]/k) - b x[t]*y[t], y'[t] == -c y[t] + d x[t]*y[t], x[0] == 11/5, y[0] == 4/5}, {x, y}, {t, 0, 1000}, {a, b, c, d, k}]; Plot[Evaluate[x[1/4, 1/3, 1/2, 1/4, 10][t] /. s], {t, 0, 300}, PlotRange -&gt; All, PlotPoints -&gt; 500, PlotStyle -&gt; {Blue, Thickness[0.003]},AxesStyle -&gt; Directive[Black, Small], Background -&gt; Lighter[Gray, 0.95]] </code></pre> <p><a href="https://i.stack.imgur.com/pRAqT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pRAqT.png" alt="Time series" /></a></p> <p>Phase portrait:</p> <pre><code>ParametricPlot[Evaluate[{x[1/4, 1/3, 1/2, 1/4, 10][t], y[1/4, 1/3, 1/2, 1/4, 10][t]} /. s], {t, 0, 300}, PlotRange -&gt; All, PlotPoints -&gt; 500, PlotStyle -&gt; {Blue, Thickness[0.003]}, AxesStyle -&gt; Directive[Black, Small],Background -&gt; Lighter[Gray, 0.95]] </code></pre> <p><a href="https://i.stack.imgur.com/rQ6Nk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rQ6Nk.png" alt="Phase portrait" /></a></p> <p><strong>Example: Hopf bifurcation in the Brusselator system</strong></p> <p><code>Calculation of the first Lyapunov coefficient</code></p> <p>The Brusselator system is given by: <span class="math-container">$$ \begin{align} &amp;\dot{x}=\alpha-(\beta+1)x + x^2 y\\ &amp;\dot{y}= \beta x - x^2 y \end{align} $$</span></p> <p>Assuming <span class="math-container">$\alpha&gt; 0$</span> fixed and taking <span class="math-container">$\beta$</span> as a bifurcation parameter, we show that at <span class="math-container">$\beta = 1 + \alpha^2$</span> the system exhibits a supercritical Hopf bifurcation.</p> <p>The Brusselator system code:</p> 
<pre><code>f1[x_, y_] := α - (β + 1) x + x^2 y; f2[x_, y_] := β x - x^2 y; F[{x_, y_}, {α_, β_}] := Evaluate@{f1[x, y], f2[x, y]}; X = {x, y}; μ = {α, β}; U = {u, v}; R = {r, s}; </code></pre> <p>The Jacobian matrix and its transpose:</p> <pre><code>J[{x_, y_}, {α_, β_}] = D[F[X, μ], {X}]; Jt[{x_, y_}, {α_, β_}] = Transpose[J[X, μ]]; MatrixForm[J[X, μ]] MatrixForm[Jt[X, μ]] </code></pre> <p>Stability analysis (Routh-Hurwitz criterion):</p> <pre><code>X0[{α_, β_}] = SolveValues[F[X, μ] == 0, X][[1]] polJX0 = Collect[CharacteristicPolynomial[J[X0[μ], μ], λ], λ,Simplify]; a0 = CoefficientList[polJX0, λ][[3]]; a1 = CoefficientList[polJX0, λ][[2]]; a2 = CoefficientList[polJX0, λ][[1]]; Reduce[a1 &gt; 0 &amp;&amp; a2 &gt; 0 &amp;&amp; α &gt; 0 &amp;&amp; β &gt; 0, β] (*α &gt; 0 &amp;&amp; 0 &lt; β &lt; 1 + α^2*) </code></pre> <p>Note that <span class="math-container">$a_{1}=0$</span> if and only if <span class="math-container">$β=β_{0}$</span>, where <span class="math-container">$β_{0}=1+α^2$</span>. Then, the Brusselator is locally asymptotically stable at <span class="math-container">$X_{0}(\mu)$</span> for <span class="math-container">$β&lt;β_{0}$</span> and locally asymptotically unstable for <span class="math-container">$β&gt;β_{0}$</span> (appears a stable limit cycle surrounded the unstable equilibrium point). We verify the previous conclusion (transversality condition) with the sign of the following derivative:</p> <pre><code>D[-a1, β] (*1*) </code></pre> <p>The analysis at the critical bifurcation value <span class="math-container">$β_{0}$</span>:</p> <p>Solve the following system of equations: <span class="math-container">$$ \left\{\begin{align} F\left((x,y),(\alpha,\beta)\right) &amp;=0, \\ \operatorname{tr}(J((x,y),(\alpha,\beta))) &amp;=0, \end{align}\right. 
$$</span> for <span class="math-container">$(x,y,\beta)$</span>, and we must check that <span class="math-container">$\operatorname{det}J((x,y),(\alpha,\beta))&gt;0$</span> when <span class="math-container">$\beta = \beta_{0}$</span> for the solution found, where <span class="math-container">$\beta_{0}$</span> is the Hopf critical bifurcation value.</p> <p>The code for the above system of equations:</p> <pre><code>X0μ0 = Delete[Part[SolveValues[F[X, μ] == 0 &amp;&amp; Tr[J[X, μ]] == 0, {x, y, β}], 1], {3}] μ0 = Prepend[Delete[Part[SolveValues[F[X, μ] == 0 &amp;&amp; Tr[J[X, μ]] == 0, {x, y, β}], 1], {{1}, {2}}], α] Det[J[X0μ0, μ0]] </code></pre> <p>Here, the Hopf critical bifurcation value is <span class="math-container">$\beta_{0}=1+\alpha^2$</span> and <span class="math-container">$\operatorname{det}J((x,y),(\alpha,\beta_{0}))=\alpha^2&gt;0$</span>. Thus, the Brusselator at <span class="math-container">$\beta_{0}=1+\alpha^2$</span> has the equilibrium <span class="math-container">$$ \begin{align} X_{0}(\mu_{0})=\left(\alpha, \displaystyle\frac{1+\alpha^2}{\alpha} \right) \end{align} $$</span> and the linear approximation at <span class="math-container">$X_{0}(\mu_{0})$</span> has purely imaginary eigenvalues <span class="math-container">$\lambda_{1,2}=\pm \omega i$</span>, <span class="math-container">$\omega=\alpha$</span>.</p> <p>The code for the linear approximation and its transpose at <span class="math-container">$X_{0}(\mu_{0})$</span>:</p> <pre><code>α= ω; JX0μ0 = Simplify@J[X0μ0, μ0]; JtX0μ0 = Simplify@Transpose@JX0μ0; MatrixForm@JX0μ0 MatrixForm@JtX0μ0 Eigenvalues[JX0μ0] </code></pre> <p>The next step is to translate the equilibrium <span class="math-container">$X_{0}(\mu_{0})$</span> to the origin of coordinates:</p> <pre><code>bb = {0, 0}; F0[{x_, y_}, {α_, β_}] = Collect[Expand@F[X + X0μ0, μ0], {x, x^2, y, y^2, x y, x^2 y},Factor] MatrixForm@F0[bb,μ] </code></pre> <p>Now, to obtain the normal form of the Hopf bifurcation, we need the Taylor
expansion of the third order for <span class="math-container">$F_{0}((x,y),(\alpha,\beta))$</span>:</p> <pre><code>(*Rank 3 tensor*) D2[{x_, y_}, {α_, β_}] = Simplify@D[F0[X, μ], {X, 2}] D2X0μ0 = Simplify@D2[bb, μ0] (*Rank 4 tensor*) D3[{x_, y_}, {α_, β_}] = Simplify@D[F0[X, μ], {X, 3}] D3X0μ0 = Simplify@D3[bb, μ0] </code></pre> <p>Multilinear forms:</p> <pre><code>(*Bilinear form*) BB[{x_, y_}, {u_, v_}] = Collect[Expand[D2X0μ0.X.U], {u x, v y, v x, v y}, FullSimplify]; MatrixForm[BB[{x, y}, {u, v}]] (*Trilinear form*) CC[{x_, y_}, {u_, v_}, {r_, s_}] = Collect[Expand[D3X0μ0.X.U.R], {r u x, r u y, r v x, r v y, s u x, s v x, s u y, s v y}, FullSimplify] MatrixForm[CC[{x, y}, {u, v}, {r, s}]] </code></pre> <p>We verify that the first three terms of the Taylor series expansion of <span class="math-container">$F_{0}((x,y),(\alpha,\beta))$</span> are correct:</p> <pre><code>A=JX0μ0; (*linear approximation*) MatrixForm@FullSimplify[F0[X, μ] - (A.X + 1/2! BB[X, X] + 1/3! CC[X, X, X]) /. {x -&gt; t x, y -&gt; t y}] (*{0,0}*) </code></pre> <p>Now, we compute the critical eigenvectors of <span class="math-container">$J((0,0),\mu_{0})$</span> and its transpose:</p> <pre><code>(*Eigenvectors of A=J[X0,μ0]*) vp = ComplexExpand@Eigenvectors[A] q = vp[[2]]; qc = vp[[1]]; MatrixForm@q MatrixForm@qc MatrixForm@Simplify[A.q - I ω q] (*Eigenvectors of Transpose[A]*) At=JtX0μ0; vpt = ComplexExpand[Eigenvectors[At]] (*Normalization constant*) cn = ComplexExpand[Conjugate[vp[[1]] . 
vpt[[1]]]] p = Expand@Simplify[vpt[[2]]/cn]; pc = ComplexExpand[Conjugate[p]]; MatrixForm@p MatrixForm@pc Simplify[At.p-I ω p] </code></pre> <p>We verify the normalization condition <span class="math-container">$\langle p,q\rangle=1$</span></p> <pre><code>Simplify@(p.q) (*1*) </code></pre> <p>Finally, we compute the first Lyapunov coefficient: <span class="math-container">$$ \begin{align} l_1(0,\mu_{0})= &amp;\frac{1}{2\omega_0} {\rm Re}\left[\langle p,C(q,q,\bar{q}) \rangle - 2 \langle p, B(q,A_0^{-1}B(q,\bar{q}))\rangle +\\\hspace{0.5cm} \langle p, B(\bar{q},(2i\omega_0 I_n-A_0)^{-1}B(q,q))\rangle \right] \end{align} $$</span></p> <p>Before calculating <span class="math-container">$l_1(0,\mu_{0})$</span>, we clear <span class="math-container">$\alpha$</span>:</p> <pre><code>Clear[α] ω0=α; </code></pre> <p>The code for the first Lyapunov coefficient:</p> <pre><code>Factor@ComplexExpand[Re[1/(2 ω) (p.CC[q, q, qc] - 2 (p.BB[q, Inverse[A].BB[q, qc]]) + p.BB[qc, Inverse[2 I ω*IdentityMatrix[2] - A].BB[q, q]])] /. ω -&gt; ω0] (*-((2 + α^2)/(2 α (1 + α^2)))*) </code></pre> <p><span class="math-container">$$ \begin{align} l_1(0,\mu_{0})=-\frac{\alpha^2+2}{2 \alpha \left(\alpha^2+1\right)} \end{align} $$</span> The first Lyapunov coefficient is clearly negative for all positive <span class="math-container">$\alpha$</span>. Thus, the Hopf bifurcation is nondegenerate and always supercritical.</p> <p>The above expression is the result that Kuznetsov arrives at on page 105 of his book (see <a href="https://doi.org/10.1007/978-1-4757-3978-7" rel="nofollow noreferrer">Elements of Applied Bifurcation Theory</a>).</p> <p>Limit cycle:</p> <p><a href="https://i.stack.imgur.com/SCEwC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SCEwC.png" alt="Limit cycle" /></a></p> <p>For more details, see: <a href="http://www.scholarpedia.org/article/Andronov-Hopf_bifurcation" rel="nofollow noreferrer">Andronov-Hopf bifurcation</a>.</p>
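Returning to the predator–prey system at the top of this answer, the stability conclusion can be cross-checked outside Mathematica. The sketch below re-integrates the same ODE with a minimal hand-rolled RK4 in Python, using the same parameter values and initial condition as the plots above (step size and horizon are arbitrary choices); the trajectory should settle onto the coexistence equilibrium <span class="math-container">$P_0=(c/d,\;\tfrac{a}{b}(1-\tfrac{c}{dk}))=(2, 0.6)$</span>:

```python
# x' = a x (1 - x/k) - b x y,  y' = -c y + d x y
a, b, c, d, k = 0.25, 1/3, 0.5, 0.25, 10.0

def rhs(x, y):
    return (a * x * (1 - x / k) - b * x * y, -c * y + d * x * y)

def rk4_step(x, y, h):
    # one classical fourth-order Runge-Kutta step
    k1x, k1y = rhs(x, y)
    k2x, k2y = rhs(x + h/2 * k1x, y + h/2 * k1y)
    k3x, k3y = rhs(x + h/2 * k2x, y + h/2 * k2y)
    k4x, k4y = rhs(x + h * k3x, y + h * k3y)
    return (x + h/6 * (k1x + 2*k2x + 2*k3x + k4x),
            y + h/6 * (k1y + 2*k2y + 2*k3y + k4y))

x, y = 2.2, 0.8          # x[0] == 11/5, y[0] == 4/5, as in ParametricNDSolve
h = 0.05
for _ in range(12000):   # integrate to t = 600
    x, y = rk4_step(x, y, h)

x_eq, y_eq = c / d, (a / b) * (1 - c / (d * k))   # equilibrium (2, 0.6)
```

The damped spiral visible in the phase portrait above is exactly what this reproduces: the state ends up within a small neighbourhood of <span class="math-container">$(2, 0.6)$</span>.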
663,435
<p>Bob has an account with £1000 that pays 3.5% interest that is fixed for 5 years, and he cannot withdraw that money over the 5 years.</p> <p>Sue has an account with £1000 that pays 2.25% for one year, and is also inaccessible for one year.</p> <p>Sue wants to take advantage of better rates and so moves accounts each year to get the better rates.</p> <p>How much does the interest rate need to increase per year (on average) for Sue to beat Bob's 5-year account?</p> <p>Compound interest formula: $A = P(1 + Q)^T$</p> <p>Where:</p> <ul> <li>$A$ = Amount Earned</li> <li>$P$ = Amount Deposited</li> <li>$R$ = Sue's Interest Rate</li> <li>$T$ = Term of Account</li> <li>$Q$ = Bob's Interest Rate</li> <li>$I$ = Interest Increase Per Period</li> </ul> <p>My method of working thus far:</p> <p>\begin{align} \text{First I calculate Bob's money at 5 years}\\ P(1 + Q)^T &amp;= A \\ 1000(1 + 0.035)^5 &amp;= A \\ 1187.686 &amp;= A\\ 1187.68 &amp;= A (2DP)\\ \text{Now work out Sue's first year's interest}\\ 1000(1 + 0.0225) ^ 1 &amp;= A \\ 1022.5 &amp;= A\\ \text{Then I work out the next 4 years compound interest}\\ ((1187.686/1022.5) ^ {1/4}) - 1 &amp;= R \\ -0.7096122249388753 &amp;= R\\ -0.71 &amp;= R (2DP)\\ \text{Then I use the rearranged formula from Ross Millikan}\\ 4/{10}R - 9/{10} &amp;= I\\ 4/{10}*-0.71 - 9/{10} &amp;= I\\ 0.0 &amp;= I\\ \end{align}</p>
Warren Hill
86,986
<p>A bit of trial and error is needed here, as I can't see a closed-form solution.</p> <p>For Bob: he ends up with $1000\cdot(1+0.035)^5 \approx 1187.686$.</p> <p>For Sue it's</p> <p>$1000 \cdot (1+0.0225)\cdot(1+0.0225+I)\cdot(1+0.0225+2I)\cdot(1+0.0225+3I)\cdot(1+0.0225+4I)$</p> <p>There are various ways you can solve this. However, I just put it into a spreadsheet and played with the values. It's more than 0.6268% and less than 0.6269%.</p> <hr> <p>Note: it's not an average increase, since if the whole increase were to come in year 2, a smaller increase would be required.</p> <p>After the first year Sue has $1000\cdot (1+0.0225) = 1022.50$.</p> <p>Now with 4 years' compound interest and only one rate:</p> <p>$1187.686 = 1022.50 \cdot (1+0.0225+I)^4 \Rightarrow (1+0.0225+I)^4 = \frac{1187.686}{1022.50}$</p> <p>$ \Rightarrow 1.0225 + I = \sqrt[4]{\frac{1187.686}{1022.50}}$ </p> <p>So $1.0225 + I = 1.03799 \Rightarrow I = 0.01549 = 1.548\%$</p> <p>Which averaged over 4 years is 0.3875%</p>
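Sue's final balance is monotone increasing in $I$, so instead of a spreadsheet one can bisect for the threshold (a quick numerical sketch; the bracket $[0, 0.02]$ is an arbitrary safe choice):

```python
def sue_final(inc, start=1000.0, base=0.0225, years=5):
    # Sue's balance if her rate starts at `base` and rises by `inc` each year
    amount = start
    for yr in range(years):
        amount *= 1 + base + yr * inc
    return amount

target = 1000.0 * 1.035 ** 5      # Bob's five-year balance, ~1187.686

lo, hi = 0.0, 0.02                # sue_final is increasing in inc, so bisect
for _ in range(60):
    mid = (lo + hi) / 2
    if sue_final(mid) < target:
        lo = mid
    else:
        hi = mid
inc = (lo + hi) / 2
```

The root lands in the neighbourhood of 0.627% per year, matching the spreadsheet estimate quoted in the answer.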
2,021,354
<p>I have a vector <strong>x</strong> and a function that sums the elements of <strong>x</strong> like so:</p> <p>$$f(1) = x_1$$ $$f(2) = x_1 + \sum_{i=1}^2 x_i$$ $$f(3) = x_1 + \sum_{i=1}^2 x_i + \sum_{j=1}^3 x_j$$ $$f(4) = x_1 + \sum_{i=1}^2 x_i + \sum_{j=1}^3 x_j + \sum_{k=1}^4 x_k$$</p> <p>...and so on. How might I represent this function more compactly, please?</p>
D.F.F
77,924
<p>Since there was the "recursion" tag in your question, here is a recursive solution:</p> <p>$ f (0) = 0$</p> <p>$f (n+1) = f (n) + \sum_{i=1}^{n+1} x_i$</p>
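In code, the recursion reads off directly, and one can also collapse the double sum into the closed form $f(n)=\sum_{i=1}^{n}(n-i+1)\,x_i$ (the closed form is an extra observation, not part of the answer above):

```python
def f_recursive(xs, n):
    # f(0) = 0,  f(n+1) = f(n) + (x_1 + ... + x_{n+1})
    if n == 0:
        return 0
    return f_recursive(xs, n - 1) + sum(xs[:n])

def f_closed(xs, n):
    # f(n) = sum_{i=1}^{n} (n - i + 1) * x_i  (x_1 is weighted most heavily)
    return sum((n - j) * x for j, x in enumerate(xs[:n]))
```

For example, with `xs = [1, 2, 3, 4, 5]`, both give `f(3) = 1 + (1+2) + (1+2+3) = 10`.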
2,529,682
<p>Right now I'm stuck on the following problem, since I feel like I should be using total probability, but I dont know what numbers to use as what.</p> <p>Let's say there's a population of students. In this population:</p> <p>30% have a bike</p> <p>10% have a motorcycle</p> <p>12% have a car.</p> <p>8% have a bike AND a motorcycle</p> <p>7% car and a bike</p> <p>4% have a motorcycle and a car</p> <p>2% have a bike, a car and a motorcycle</p> <p>What percentage of students owns no vehicles?</p> <p>I draw myself a Venn diagram, but I can't get my mind around the problem. My thinking right now is just substracting each percentage off 100%, but that just feels wrong.</p> <p>Using total probability feels wrong aswell, since I have no idea what to calculate. I want to calculate P(A) = people that own a vehicle, but then P(A|Hi) doesn't really have a value.</p> <p>Anyone has any ideas?</p>
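For reference, the tool this problem is after is inclusion–exclusion rather than total probability: the fraction owning at least one vehicle is $|B\cup M\cup C|$, and everything else owns none. A one-line check (sketched in Python):

```python
# |B ∪ M ∪ C| = |B| + |M| + |C| - |B∩M| - |B∩C| - |M∩C| + |B∩M∩C|
owners = 30 + 10 + 12 - 8 - 7 - 4 + 2   # percentage owning at least one vehicle
no_vehicle = 100 - owners               # percentage owning none
```

This is exactly the computation a correctly labelled Venn diagram encodes.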
Michael Rozenberg
190,319
<p>We have $$(\ln|x|)'=\frac{1}{x}.$$ Because if $x&gt;0$ we obtain: $$(\ln|x|)'=(\ln{x})'=\frac{1}{x}$$ and for $x&lt;0$ we obtain: $$(\ln|x|)'=(\ln(-x))'=-\frac{1}{-x}=\frac{1}{x}.$$</p>
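A quick finite-difference sanity check of the negative branch (the sample point $x_0=-2$ and step $h$ are arbitrary choices):

```python
import math

def ln_abs(x):
    return math.log(abs(x))

x0, h = -2.0, 1e-6
# central difference approximates the derivative; should be close to 1/x0 = -0.5
central_diff = (ln_abs(x0 + h) - ln_abs(x0 - h)) / (2 * h)
```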
369,723
<p>I'm reading Milne's <em>Elliptic Curves</em> and came across this statement: If a nonsingular projective curve has a group structure defined by polynomial maps, then it has genus 1. In <a href="https://math.stackexchange.com/questions/226127/for-what-algebraic-curves-do-rational-points-form-a-group">this question</a> a similar question was asked, and an answer given there (the one that was not accepted) someone backs this up using machinery/notation that I do not understand. Can someone give or direct me to a proof of this statement? It doesn't have to be very formal; this isn't homework. Also, in the affine case this statement clearly breaks down - the affine line has the group structure of addition, for instance. Is there some sort of correction or does the group structure bear no relation to the genus in this case?</p>
xyzzyz
23,439
<p>For complex algebraic curves, this is actually very easy. Suppose we have an algebraic group law on a complex algebraic curve. Then it is necessarily continuous in the classical topology, so we get a topological group. Now, the fundamental group of a topological group has to be abelian (this is an interesting and not that hard exercise). But curves of genus > 1 don't have abelian fundamental groups!</p>
369,723
<p>I'm reading Milne's <em>Elliptic Curves</em> and came across this statement: If a nonsingular projective curve has a group structure defined by polynomial maps, then it has genus 1. In <a href="https://math.stackexchange.com/questions/226127/for-what-algebraic-curves-do-rational-points-form-a-group">this question</a> a similar question was asked, and an answer given there (the one that was not accepted) someone backs this up using machinery/notation that I do not understand. Can someone give or direct me to a proof of this statement? It doesn't have to be very formal; this isn't homework. Also, in the affine case this statement clearly breaks down - the affine line has the group structure of addition, for instance. Is there some sort of correction or does the group structure bear no relation to the genus in this case?</p>
Piotr Pstrągowski
16,673
<p>Just to throw another way to formalize this fact, if $X$ is a variety that is a group, then the canonical sheaf $\omega_{X}$ must be trivial. The intuitive reason is that there is a canonical way to identify the tangent space $T_{x}$ at any point $x \in X$ with the tangent space $T_{e}$ at identity. (Namely, the map $y \leadsto y * x^{-1}$). </p> <p>If $X$ is a curve, then Riemann-Roch implies that $deg(K) = 2g-2$, where $K$ is the divisor corresponding to $\omega_{X}$. Since $\omega_{X}$ is trivial, $deg(K) = 0$. This implies that $g = 1$. </p>
369,723
<p>I'm reading Milne's <em>Elliptic Curves</em> and came across this statement: If a nonsingular projective curve has a group structure defined by polynomial maps, then it has genus 1. In <a href="https://math.stackexchange.com/questions/226127/for-what-algebraic-curves-do-rational-points-form-a-group">this question</a> a similar question was asked, and an answer given there (the one that was not accepted) someone backs this up using machinery/notation that I do not understand. Can someone give or direct me to a proof of this statement? It doesn't have to be very formal; this isn't homework. Also, in the affine case this statement clearly breaks down - the affine line has the group structure of addition, for instance. Is there some sort of correction or does the group structure bear no relation to the genus in this case?</p>
Julien Clancy
28,711
<p>I accepted another answer but I'm going to post a more elementary (from my perspective) way to prove this. The Riemann-Hurwitz Formula gives the nice bound $|\text{Aut}(\mathcal{C})| &lt; \infty$ for $g(\mathcal{C}) &gt; 1$ (we'll assume characteristic zero so all maps are tamely ramified). But each point gives the translation automorphism, and these are all clearly distinct, whence we obtain a contradiction for $g(\mathcal{C}) &gt; 1$. So the question becomes: does the projective line have a group law? Of course not, for then a hyperelliptic curve $E$ ($g(E) &gt; 1$) would have one too. The gist is that if we select a degree-2 map $f \colon E \to \mathbb{P}^1$, totally ramified, then it must be injective, so we can pull back the group law.</p>
389,888
<p>Let $z$ be a complex number. Let $$f(z)=\dfrac{1}{\frac{1}{z}+\ln(\frac{1}{z})}.$$ How to formally show that $f(z)$ is analytic at $z=0$? I know that for small $z$ we have $$\left|\tfrac{1}{z}\right|&gt;\left|\ln(\tfrac{1}{z})\right|$$ and that implies $|f(0)|=0.$ Are there multiple ways to handle this ?</p>
75064
75,064
<p>To me, "analytic" means "locally represented by its Taylor series". With this interpretation $f$ is not analytic. Indeed, suppose $f(z)=z^r\sum_{n=0}^\infty c_n z^n $ in a neighborhood of $0$, where $c_0\ne 0$. Then $$\frac{1}{z}+\ln \frac{1}{z} = z^{-r} \sum_{n=0}^\infty b_n z^n $$ in some (possibly smaller) neighborhood of $0$. It follows that $\ln \frac{1}{z}$ has a pole or a removable singularity at $0$. If this is not evidently absurd already, apply the same to $\ln z=-\ln \frac{1}{z}$ and conclude that the logarithm is a rational function. </p>
389,888
<p>Let $z$ be a complex number. Let $$f(z)=\dfrac{1}{\frac{1}{z}+\ln(\frac{1}{z})}.$$ How to formally show that $f(z)$ is analytic at $z=0$? I know that for small $z$ we have $$\left|\tfrac{1}{z}\right|&gt;\left|\ln(\tfrac{1}{z})\right|$$ and that implies $|f(0)|=0.$ Are there multiple ways to handle this ?</p>
DonAntonio
31,254
<p>$$f(z)=\frac1{\frac1z+\text{Log}\frac1z}=\frac z{1+z\,\text{Log}\frac1z}$$</p> <p>Now, </p> <p>$$\text{Log}\frac1z:=\log\frac1{|z|}+i\arg\frac1z\implies z\,\text{Log}\frac1z=z\log\frac1{|z|}+iz\arg\frac1z$$</p> <p>If you now choose a branch cut for the complex logarithm (and you better do if you have any hope to make any sense in this particular stuff), say the usual half negative real axis <em>and</em> zero ($\;\Bbb R_-\;$), you get a nice analytic function in the rest (and we don't care what happens with $\,z\to\infty\,$ since we're far away from there) , so</p> <p>$$z\,\text{Log}\frac1z\xrightarrow[z\to 0\,,\,z\notin\Bbb R_-]{}0$$ and you can talk of a removable singularity of your function at $\,z=0\,$...although this would be a little absurd since you <em>already removed</em> this point by choosing the above branch cut for the logarithm, and this will be the situation with any branch cut you choose as <em>any</em> of them must contain the zero point...</p> <p>In short, definitely not analytic there, though...</p>
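The key limit $z\,\text{Log}\frac1z\to 0$ is easy to check numerically away from the branch cut (a small sketch; the sample points are arbitrary):

```python
import cmath

def z_log_inv(z):
    # z * Log(1/z) with the principal branch (cut along the negative real axis)
    return z * cmath.log(1 / z)

small = abs(z_log_inv(1e-8 * cmath.exp(2j)))    # |z| = 1e-8
larger = abs(z_log_inv(1e-4 * cmath.exp(2j)))   # |z| = 1e-4
```

The magnitude behaves like $|z|\,|\log z|$, which shrinks as $z\to 0$.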
2,733,728
<p>How can one find a general form for $\int_0^1 \frac {\log(x)}{(1-x)} dx=-\zeta(2) \,?$ Namely $\int_0^1 \frac {\log^n(x)}{(1-x)^m} dx\,$ where $n,m\ge1$. Similarly to the original integral, I let $1-x=u\,$, which gives $$\int_{-1}^0 \frac {\log^n(1+x)}{x^m} dx$$ and expanding into series we have: $\int_{-1}^0x^{-m}(\sum_{k=1}^{\infty}\frac{(-1)^{k+1}x^k}{k})^n\,dx$. Now this might be doable with a computer using Cauchy products, but otherwise it's madness.</p> <p>Another try is to let $I(k)=\int_0^1 \frac {x^k}{(1-x)^m}\,dx$ and differentiate $n$ times with respect to $k$, assuming $k\ge n$, so: $$\frac{d^n}{dk^n}I(k)=\int_0^1\frac{x^k\log^n(x)}{(1-x)^m}dx$$ Plugging $(1-x)^{-m}=\sum_{j=0}^{\infty} \binom{-m}{j}(-1)^jx^j $ into the integral and making use of Tonelli's theorem, we get: $$\frac{d^n}{dk^n}I(k)=\sum_{j=0}^{\infty} \binom{-m}{j}(-1)^{(n+j)} n! (k+j+1)^{-(n+1)}$$ But I don't know how to evaluate the latter series.</p>
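For the base case, expanding $\frac{1}{1-x}=\sum_{n\ge0}x^n$ and using $\int_0^1 x^n\log x\,dx=-\frac{1}{(n+1)^2}$ gives $-\sum_{n\ge1}\frac1{n^2}=-\zeta(2)$; a quick numerical check of the partial sums (truncation point arbitrary):

```python
import math

# term-by-term integration: ∫_0^1 x^n log x dx = -1/(n+1)^2,
# so ∫_0^1 log(x)/(1-x) dx = -sum_{n>=1} 1/n^2 = -ζ(2)
N = 10 ** 6
partial = -sum(1.0 / n ** 2 for n in range(1, N + 1))
zeta2 = math.pi ** 2 / 6   # ζ(2)
```

The tail of the series after $N$ terms is about $1/N$, so the agreement is to roughly six digits here.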
user
505,767
<p>The result is indeed consistent:</p> <ul> <li><p>when $h=0 \implies v=0$</p></li> <li><p>when $h&gt;0 \implies v&lt;0$, since the ball is falling down</p></li> </ul>
2,417,197
<p>When working through Graham's number, I got stuck at</p> <p>$$3↑↑↑3$$</p> <p>Working it through, we have</p> <p>$$3↑3=3^3$$ $$3↑↑3=3^{3^3}=3↑(3↑3)$$</p> <p>As such, it would appear to me that</p> <p>$$3↑↑↑3=3^{3^{3^3}}=3↑(3↑(3↑3))=3↑(3↑↑3)$$</p> <p>Which is incorrect; the correct answer being</p> <p>$$3↑↑↑3=3↑↑(3↑↑3)$$</p> <p>What I want to know is where the error is in the way I've worked it through, and how working $3↑↑(3↑↑3)$ back to $3↑↑↑3$ would look.</p>
Sheldon L
43,626
<p>$$ 3 \uparrow \uparrow n = 3 \uparrow (3 \uparrow \uparrow (n-1))$$ $$ 3 \uparrow \uparrow \uparrow n = 3 \uparrow \uparrow (3 \uparrow \uparrow \uparrow (n-1))$$ $$ 3 \uparrow \uparrow \uparrow \uparrow n = 3 \uparrow \uparrow \uparrow (3 \uparrow \uparrow \uparrow \uparrow (n-1)) ....$$</p> <p>By definition $$ 3 \uparrow^{n} 0 =1\;\;\;\; 3 \uparrow 1 = 3 $$ One can prove by induction: $$ 3 \uparrow^{n} 1 =3$$ $$ 3 \uparrow^{n} 2 = 3 \uparrow^{(n-1)} 3$$ $$ 3 \uparrow \uparrow 3 = 3 \uparrow {3^3} = 3^{3^3} = 3^{27} = 7625597484987$$ Therefore $$ 3 \uparrow \uparrow \uparrow 3 = 3 \uparrow \uparrow (3 \uparrow \uparrow \uparrow 2)= 3 \uparrow \uparrow (3 \uparrow \uparrow 3) = 3 \uparrow \uparrow 7625597484987 $$ $$ 3 \uparrow \uparrow \uparrow 3 = 3^{3^{3^{3...}}}\;\;\; \text{tower height} = 3 \uparrow \uparrow 3 = 7625597484987$$ $$ 3 \uparrow \uparrow \uparrow \uparrow 3 = 3 \uparrow \uparrow \uparrow (3 \uparrow \uparrow \uparrow 3) $$ which is no longer a single power tower; even the far smaller $$ 3 \uparrow \uparrow \uparrow 4 = 3 \uparrow \uparrow (3 \uparrow \uparrow \uparrow 3) = {}^{^{^{...3}3}3}3\;\;\; \text{tower height} = 3 \uparrow \uparrow 7625597484987 $$</p>
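The recursions above transcribe directly into code; with tiny arguments the values stay computable (the base-case convention $a\uparrow^{n}0=1$ is the one used above):

```python
def arrow(a, n, b):
    # Knuth's up-arrow: a ↑^n b, with a ↑ b = a**b and a ↑^n 0 = 1
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return arrow(a, n - 1, arrow(a, n, b - 1))
```

For instance `arrow(3, 2, 3)` returns `7625597484987`, and `arrow(3, 3, 2) == arrow(3, 2, 3)` illustrates the identity $3 \uparrow^{n} 2 = 3 \uparrow^{(n-1)} 3$.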
4,549,898
<p>I need some help with solving the following problem: Let <span class="math-container">$Q(n)$</span> be the number of partitions of <span class="math-container">$n$</span> into distinct parts. Show that <span class="math-container">$$\sum_{n=1}^\infty\frac{Q(n)}{2^n}$$</span> is convergent by estimating <span class="math-container">$Q(n)$</span> in some way. I have been trying to do this for a few days now, but without succeeding. Notice that if we only want to show that the series converges and don't care about which method we use, we can do this by considering the infinite product <span class="math-container">$\prod_{n=1}^\infty\left(1+\frac{1}{2^n}\right)$</span>.</p>
Damian Pavlyshyn
154,826
<p>If <span class="math-container">$a_1 &lt; \dotsb &lt; a_k$</span> is a distinct partition of <span class="math-container">$n$</span> of size <span class="math-container">$k$</span>, we must have that <span class="math-container">$a_j \geq j$</span>, and so <span class="math-container">$$ n = \sum_{j=1}^k a_j \geq \sum_{j=1}^k j \geq \frac{k^2}{2}. $$</span></p> <p>Thus, any ordered partition of <span class="math-container">$n$</span> must have <span class="math-container">$k \leq \sqrt{2n}$</span>, and so we have that <span class="math-container">\begin{align*} Q(n) &amp;= \#\{\text{distinct partitions of }n\} \\ &amp;\leq \#\{\text{partitions of $n$ of size at most $\sqrt{2n}$}\} \\ &amp;\leq \#\{\text{ordered partitions of $n$ of size at most $\sqrt{2n}$}\}. \end{align*}</span></p> <p>But the number of ordered partitions of <span class="math-container">$n$</span> of size at most <span class="math-container">$k$</span> is <span class="math-container">$\binom{n+k-1}{k-1}$</span> by the standard stars-and-bars argument, so that we have <span class="math-container">\begin{align*} Q(n) &amp;\leq \binom{n + \lfloor\sqrt{2n}\rfloor - 1}{\lfloor\sqrt{2n}\rfloor - 1} \\ &amp;\leq (n + \sqrt{2n}- 1)^{\sqrt{2n}- 1} \\ &amp;\leq \exp\{\sqrt{2n}\log(n + \sqrt{2n})\} \\ &amp;\lesssim \exp\Bigl\{n \log\frac{3}{2}\Bigr\}. \end{align*}</span></p> <p>Therefore, for large enough <span class="math-container">$n$</span>, we have that <span class="math-container">$Q(n)/2^n \leq (3/4)^n$</span>, which is summable.</p>
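The conclusion can also be sanity-checked numerically: the DP below builds <span class="math-container">$Q(n)$</span> from the generating function <span class="math-container">$\prod_{k\ge1}(1+x^k)$</span> and sums the series (the truncation at <span class="math-container">$n=60$</span> is arbitrary; the tail there is astronomically small):

```python
def distinct_partition_counts(N):
    # Q[n] = number of partitions of n into distinct parts, via prod_k (1 + x^k);
    # iterating n downward uses each part at most once (0/1-knapsack style)
    Q = [1] + [0] * N
    for part in range(1, N + 1):
        for n in range(N, part - 1, -1):
            Q[n] += Q[n - part]
    return Q

Q = distinct_partition_counts(60)
partial_sum = sum(Q[n] / 2 ** n for n in range(1, 61))
```

The partial sums settle near <span class="math-container">$\prod_{n\ge1}(1+2^{-n})-1\approx1.384$</span>, the value of the infinite product mentioned in the question.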
2,542,184
<p>Are $A = \begin{bmatrix} 1&amp;1&amp;0\\ 0&amp;1&amp;0\\ 0&amp;0&amp;1\\ \end{bmatrix}$ and $B = \begin{bmatrix} 1&amp;1&amp;0\\ 0&amp;1&amp;1\\ 0&amp;0&amp;1\\ \end{bmatrix}$ similar? Please justify your answer.</p> <p>So far I have checked the rank, determinant, trace, and characteristic polynomial, hoping to disprove similarity, but they all agree, so I'm stuck.</p>
Rene Schipperus
149,912
<p>These matrices are in Jordan normal form, and the Jordan form is a canonical representative of a similarity class, so two distinct Jordan forms are never similar. For example, $A$ has two independent eigenvectors for the eigenvalue $1$, whereas $B$ has only one.</p>
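Mechanically: similarity preserves $\operatorname{rank}(M-\lambda I)$, and for $\lambda=1$ the two matrices already differ there. A small exact-arithmetic check (the row-reduction helper is just for illustration):

```python
from fractions import Fraction

def rank(M):
    # rank via row reduction over the rationals
    M = [[Fraction(v) for v in row] for row in M]
    r = 0
    for col in range(len(M[0])):
        pivot = next((i for i in range(r, len(M)) if M[i][col] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(len(M)):
            if i != r and M[i][col] != 0:
                f = M[i][col] / M[r][col]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

A_minus_I = [[0, 1, 0], [0, 0, 0], [0, 0, 0]]
B_minus_I = [[0, 1, 0], [0, 0, 1], [0, 0, 0]]
geo_mult_A = 3 - rank(A_minus_I)   # independent eigenvectors of A for eigenvalue 1
geo_mult_B = 3 - rank(B_minus_I)   # independent eigenvectors of B for eigenvalue 1
```

The geometric multiplicities come out as $2$ and $1$, so $A$ and $B$ cannot be similar.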
1,921,114
<p><img src="https://i.stack.imgur.com/D8IcM.jpg" alt="enter image description here"></p> <p>So I solved this system without using matrices, just by (sort of) reverting to high school math instincts. $$w+x=-5$$ $$x+y=4$$ $$y+z=1$$ $$w+z=8$$ From this, I got $w=-9,x=4,y=0$ and $z=1$. How would I convert this back to what appears to be a matrix form?</p>
Win Vineeth
311,216
<p>From the format, you get that $x_i=a_i+s\cdot b_i$.</p> <p>$x_1+x_2=-5 \implies a_1+a_2+s(b_1+b_2) = -5$ for all $s$. Thus, $b_1=-b_2$.</p> <p>Similarly, you get $b_1=-b_2=b_3=-b_4$.</p> <p>The solution you got is for $a_1, a_2, a_3, a_4$.</p> <p>Thus, $x_1 = -9+s\cdot 1$ (taking $b$ into $s$),</p> <p>$x_2=4+s\cdot(-1)$,</p> <p>$x_3=0+s\cdot(1)$,</p> <p>$x_4=1+s\cdot(-1)$.</p>
1,921,114
<p><img src="https://i.stack.imgur.com/D8IcM.jpg" alt="enter image description here"></p> <p>So I solved this system without using matrices, just by (sort of) reverting to high school math instincts. $$w+x=-5$$ $$x+y=4$$ $$y+z=1$$ $$w+z=8$$ From this, I got $w=-9,x=4,y=0$ and $z=1$. How would I convert this back to what appears to be a matrix form?</p>
tantheta
165,463
<p><a href="https://i.stack.imgur.com/PjwwB.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PjwwB.jpg" alt="enter image description here"></a></p> <p>I guess that answers your question.</p>
2,489,988
<p>A sequence of numbers is formed from the numbers $1, 2, 3, 4, 5, 6, 7$ where all $7!$ permutations are equally likely. What is the probability that anywhere in the sequence there will be, at least, five consecutive positions in which the numbers are in increasing order?</p> <p>I approached this problem in the following way, but I am wondering if there is a better way, since my approach is quite complicated.</p> <p><strong>My Approach:</strong> There are three possibilities: a sequence have $7$ consecutive positions in which numbers increase, have $6$ consecutive positions in which numbers increase, and $5$ consecutive positions in which numbers increase.</p> <p>There is only $1$ sequence that have $7$ consecutive positions. Namely, the sequence $(1,2,3,4,5,6,7)$.</p> <p>There are $12$ sequences that have $6$ consecutive positions. Namely, we choose $1$ number from $(1,2,3,4,5,6,7)$, and move it to either sides. As an illustration, if we choose $3$, then we can get $(3,1,2,4,5,6,7)$ or $(1,2,4,5,6,7,3)$.</p> <p>Now consider when there are $5$ consecutive positions in which numbers increase. We choose $2$ numbers that are <em>not</em> in the increasing subsequence. </p> <p>If $1$ and $7$ are not chosen, we can place them in front of the subsequence, of after. For example, if we choose $(2,5)$, then we will have $(2,5,1,3,4,6,7)$,$(5,2,1,3,4,6,7)$, $(1,3,4,6,7,2,5)$ and $(1,3,4,6,7,5,2)$. This is $\binom{5}{2}\times4$.</p> <p>Then I'm not sure how to proceed when we choose $1$ and/or $7$?</p>
N. Shales
259,568
<p>Both myself and @N.F.Taussig used the following approach, although I'd like to see if it could be generalised to increasing runs of arbitrary length.</p> <p>Define set $S_{i,j}$ as the set of permutations of $[7]$ with an increasing run between position $i$ and $j$ inclusive. Then by <a href="https://en.m.wikipedia.org/wiki/Inclusion%E2%80%93exclusion_principle" rel="nofollow noreferrer">inclusion-exclusion</a> the desired success count is</p> <p>$$\begin{align}&amp;(|S_{1,5}|+|S_{2,6}|+|S_{3,7}|) -\\ (|S_{1,5}\cap S_{2,6}| + |S_{1,5}&amp;\cap S_{3,7}|+|S_{2,6}\cap S_{3,7}|)+ |S_{1,5}\cap S_{2,6}\cap S_{3,7}|\tag{1}\end{align}$$</p> <p>Clearly</p> <p>$$|S_{1,5}|=|S_{2,6}|=|S_{3,7}|=\binom{7}{5}2!\tag{2}$$</p> <p>since we choose $5$ of the $7$ numbers to go in increasing order in positions $1$ to $5$, $2$ to $6$ or $3$ to $7$ and the remaining $2$ numbers can go in the remaining $2$ spots in $2!$ ways.</p> <p>Also</p> <p>$$S_{1,5}\cap S_{2,6}=S_{1,6}$$ $$\implies |S_{1,5}\cap S_{2,6}|=|S_{1,6}|=\binom{7}{6}1!\tag{3}$$</p> <p>and</p> <p>$$S_{1,5}\cap S_{3,7}=S_{1,7}$$ $$\implies |S_{1,5}\cap S_{3,7}|=|S_{1,7}|=\binom{7}{7}0!\tag{4}$$</p> <p>and</p> <p>$$S_{2,6}\cap S_{3,7}=S_{2,7}$$ $$\implies |S_{2,6}\cap S_{3,7}|=|S_{2,7}|=\binom{7}{6}1!\tag{5}$$</p> <p>and</p> <p>$$S_{1,5}\cap S_{2,6}\cap S_{3,7}=S_{1,7}$$ $$\implies |S_{1,5}\cap S_{2,6}\cap S_{3,7}|=|S_{1,7}|=\binom{7}{7}0!\tag{6}$$</p> <p>using similar reasoning to $(2)$ in each case.</p> <p>Putting the results of $(2)$, $(3)$, $(4)$, $(5)$ and $(6)$ into $(1)$ gives:</p> <p>$$\text{success count}=3\binom{7}{5}2!-\left(2\binom{7}{6}1!+\binom{7}{7}0!\right)+\binom{7}{7}0!=112$$</p> <p>Then since there are $7!$ permutations the desired probability is:</p> <blockquote> <p>$$\text{probability of an increasing run of length $\ge 5$}=\frac{112}{7!}=\frac{1}{45}\tag{Answer}$$</p> </blockquote>
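With only <span class="math-container">$7!=5040$</span> permutations, the count is cheap to confirm by brute force (a verification, not part of the derivation):

```python
from fractions import Fraction
from itertools import permutations

def has_increasing_run(p, length=5):
    # True if p contains `length` consecutive positions in increasing order
    return any(all(p[i + j] < p[i + j + 1] for j in range(length - 1))
               for i in range(len(p) - length + 1))

count = sum(1 for p in permutations(range(1, 8)) if has_increasing_run(p))
prob = Fraction(count, 5040)
```

The enumeration returns the same 112 successes, hence the probability $\frac{112}{5040}=\frac{1}{45}$.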
237,142
<p>I am having a problem with the final question of this exercise.</p> <p>Show that $e$ is irrational (I did that). Then find the first $5$ digits in a decimal expansion of $e$ ($2.71828$).</p> <p>Can you approximate $e$ by a rational number with error $&lt; 10^{-1000}$ ? </p> <p>Thank you in advance</p>
sperners lemma
44,154
<p>If <code>2.71828</code> are the first few digits of $e$ then we have $$2.71828 &lt; e &lt; 2.71829$$ Putting $q = 271828/100000$, we deduce $$0 &lt; e-q &lt; 0.00001 = 10^{-5}.$$</p>
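<p>The same idea extends to an error below $10^{-1000}$ by truncating the series $e=\sum_{k\ge 0}1/k!$: the tail after $n$ terms is smaller than $2/(n+1)!$, so $n=500$ already suffices. A quick Python sketch using exact rational arithmetic:</p>

```python
from fractions import Fraction
from math import factorial

n = 500
q = sum(Fraction(1, factorial(k)) for k in range(n + 1))  # rational approximation of e

# Tail bound: e - q = sum over k > n of 1/k! <= 2/(n+1)!, and 501! is far
# larger than 2 * 10^1000, so the error is below 10^(-1000).
assert 2 * 10**1000 < factorial(n + 1)
print(float(q))  # 2.718281828...
```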
1,597,891
<p>Let $G$ be an abelian group of order $75=3\cdot 5^{2}$. Let $Aut(G)$ denote its group of automorphisms. Find all possible order of $Aut(G)$.</p> <p>My approach is to first study its Sylow 5-subgroup. Since $n_{5}|3$ and $n_{5}\equiv 1\pmod{5}$, $n_{5}=1$. So $G$ has a unique Sylow 5-subgroup, denote $F$. By Sylow's Theorem, $F$ is characteristic. Then $\forall \sigma\in Aut{G}$, $\sigma(F)=F$. Define the canonical homomorphism $Aut(G)\rightarrow Aut(H)$. So $|Aut(G)|=\# (\text{Aut(G) that fixes H pointwise})\times (\text{image of homomorphism})$. </p> <p>Since the image of the defined homomorphism is a subgroup of $Aut(F)$, then its order is a factor of $Aut(F)=20$. I'm not sure how to compute the number of $Aut(G)$ that fixes $H$. My understanding is this: Since we leave $H$ fixed, all that left to be permuted are the 25 Sylow 3-subgroup. Since each Sylow 3-subgroup is cyclic, it has 2 automorphisms. So altogether $|\text{Aut(G) that fixes H pointwise}|=25\times 3=75$. And all possible order of $Aut(G)$ is $75\cdot x$ where $x$ divides 20. This seems incorrect. Could someone please help me?</p>
Eric Thoma
35,667
<p>Sylow theory is generally not useful for Abelian groups since we already know so much about their structure, and since every Abelian group has a unique $p$-Sylow subgroup when it exists.</p> <p>Using the classification theorem, we have that $|{G}| = 75$ implies $G \cong \mathbb{Z}/3\mathbb{Z} \times (\mathbb{Z}/5\mathbb{Z})^2$ or $G \cong \mathbb{Z}/3\mathbb{Z} \times \mathbb{Z}/25\mathbb{Z} \cong \mathbb{Z}/75 \mathbb{Z}$.</p> <p>Now we use standard combinatorial methods to count the automorphisms in each case.</p> <p>In the first case we must send $(1,0,0)$ to an element of order $3$, giving $2$ choices. We must send $(0,1,0)$ to an element of order $5$, giving $24$ choices. We must send $(0,0,1)$ to an element of order $5$ so that our function is surjective, giving $24 - 4 = 20$ choices. That is, we are subtracting off the elements in the subgroup generated by the image of $(0,1,0)$. After defining our function on a basis, we can extend uniquely to a homomorphism on the group. This gives $20 \cdot 24 \cdot 2 = 960$ automorphisms. This work is made easier by noting $\text{aut}(H) \times \text{aut}(K) \cong \text{aut}(H \times K)$ when $H$ and $K$ are abelian groups of relatively prime orders. </p> <p>In the second case we must simply pick a generator, giving $\phi(75)$ automorphisms, where $\phi$ is the Euler totient function.</p>
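<p>These counts are easy to double-check by computer: in the first case $\text{aut}((\mathbb{Z}/5\mathbb{Z})^2)\cong GL(2,5)$ has $(25-1)(25-5)=480$ elements, and together with the $2$ automorphisms of $\mathbb{Z}/3\mathbb{Z}$ this gives $960$. A quick Python sketch:</p>

```python
from itertools import product
from math import gcd

# |GL(2,5)|: 2x2 matrices over Z/5 with nonzero determinant mod 5
gl25 = sum(1 for a, b, c, d in product(range(5), repeat=4)
           if (a * d - b * c) % 5 != 0)

# phi(75) counts generators of Z/75, i.e. automorphisms in the cyclic case
phi75 = sum(1 for k in range(1, 76) if gcd(k, 75) == 1)

print(gl25, 2 * gl25, phi75)  # 480, 960, 40
```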
1,091,087
<p>I was confronted with the following problem on my mid-term paper and I've got no idea how to solve this, I tried using the eigenvalues method, but it ultimately failed . Can anyone, please, give a complete solution to this ? I really want to se a proper solution so I can understand better the reasoning at hand..</p> <p>$y' = 5y +4z $</p> <p>$z'=-4y -3z$</p>
Alex Silva
172,564
<p><strong>Hint</strong>:</p> <p>Solve the first equation for $z$ and substitute into the second:</p> <p>$$ z' = -4y -\frac{3}{4}(y'-5y)$$</p> <p>Also, differentiating the first equation gives</p> <p>$$z' = \frac{1}{4}(y'' - 5y').$$</p>
1,091,087
<p>I was confronted with the following problem on my mid-term paper and I've got no idea how to solve this, I tried using the eigenvalues method, but it ultimately failed . Can anyone, please, give a complete solution to this ? I really want to se a proper solution so I can understand better the reasoning at hand..</p> <p>$y' = 5y +4z $</p> <p>$z'=-4y -3z$</p>
Matt L.
70,664
<p><strong>Hint:</strong></p> <p>Add both equations:</p> <p>$$y'+z'=y+z\quad\Longrightarrow\quad y+z=ce^t$$</p> <p>Now substitute.</p>
1,091,087
<p>I was confronted with the following problem on my mid-term paper and I've got no idea how to solve this, I tried using the eigenvalues method, but it ultimately failed . Can anyone, please, give a complete solution to this ? I really want to se a proper solution so I can understand better the reasoning at hand..</p> <p>$y' = 5y +4z $</p> <p>$z'=-4y -3z$</p>
abel
9,252
<p>This problem is more difficult than I thought because the coefficient matrix has a repeated eigenvalue. I need a little setup first. Let $x$ be a two-dimensional column vector and $A$ a $2 \times 2$ real matrix. Consider the linear differential equation $$\frac{dx}{dt} = Ax$$</p> <p>First we establish that:</p> <p>(a) if $\lambda$ is an eigenvalue and $x_0 \neq 0$ is a corresponding eigenvector of $A,$ then $x = x_0e^{\lambda t}$ is a solution.</p> <p>(b) if $x_1 \neq 0$ is such that $(A - \lambda I)^2x_1 = 0, (A-\lambda I)x_1 = x_0 \neq 0,$ then $x = x_0 te^{\lambda t} + x_1 e^{\lambda t}$ is a solution. (A way to see this is to assume that $A$ has eigenvalues $\lambda \pm \epsilon;$ by (a), $x_0(e^{\lambda t + \epsilon t} - e^{\lambda t - \epsilon t}) /{ 2 \epsilon} \to x_0te^{\lambda t}$ as $\epsilon \to 0.$)</p> <p>In your case $$ A= \pmatrix{5 &amp; 4\\-4 &amp; -3}$$ the eigenvalues of $A$ are given by $$\det(A- \lambda I) = \det \pmatrix{5 - \lambda &amp; 4\\-4 &amp; -3 - \lambda} = (\lambda - 1)^2$$ and we can pick $$x_1 = \pmatrix{1 \\ 0}, \qquad x_0 = (A - I)x_1 = \pmatrix{4 \\ -4}$$</p> <p>The two linearly independent solutions of $\frac{dx}{dt} = Ax$ are $$\left\{ \pmatrix{1 \\-1}e^t, \pmatrix{4t+1\\-4t}e^t \right\} $$</p>
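<p>The second solution can be verified numerically: a central difference of $x(t)=\pmatrix{4t+1\\-4t}e^t$ should match $Ax(t)$ at any sample point. A small Python sketch:</p>

```python
from math import exp

def x(t):
    # the generalized-eigenvector solution above
    return ((4 * t + 1) * exp(t), -4 * t * exp(t))

def A_times(v):
    y, z = v
    return (5 * y + 4 * z, -4 * y - 3 * z)

h = 1e-6
for t in (0.0, 0.5, 1.0):
    numeric = tuple((a - b) / (2 * h) for a, b in zip(x(t + h), x(t - h)))
    exact = A_times(x(t))
    assert all(abs(u - v) < 1e-4 for u, v in zip(numeric, exact))
print("x' = Ax holds at the sample points")
```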
464,404
<p>Let $R$ be a commutative ring with identity. Let $A$ and $B$ are $R$-modules, and further suppose that $A$ is free with finite rank. Is it true that </p> <p>$$ \operatorname{Hom} (A \otimes_R B , R) \cong \operatorname{Hom}(A,R)\otimes_R B$$</p> <p>where the homomorphisms are those of $R$-modules? In other words, do the operations of dualizing $\operatorname{Hom}( - , R)$ and tensoring $ - \otimes_R B$ commute when applied to $A?$</p> <p>If not, what about in the special case $R = \mathbb{Z}$ (so that $A\cong \mathbb{Z}^d$) and $B=\mathbb{R}?$</p>
Rasmus
367
<p><strong>In general no</strong>, look at $R=\mathbb Z$, $A=\mathbb Z$, $B=\mathbb Z/2$.</p> <p><s> If $B=\mathbb R$, there <strong>happens to be</strong> an isomorphism as you would like, but it is not natural. </s></p> <p>Even if $R=\mathbb Z$, $A=\mathbb Z$, $B=\mathbb R$, the statement is false, see Mariano's comment below. To see that $\mathrm{Hom}_\mathbb Z(\mathbb R,\mathbb Z)$ cannot be isomorphic to $\mathbb R$, <s>notice, for instance, that the first group is not divisible</s> note simply that it vanishes.</p> <p>There <strong>is</strong> of course an isomorphism if $B=R$.</p>
464,404
<p>Let $R$ be a commutative ring with identity. Let $A$ and $B$ are $R$-modules, and further suppose that $A$ is free with finite rank. Is it true that </p> <p>$$ \operatorname{Hom} (A \otimes_R B , R) \cong \operatorname{Hom}(A,R)\otimes_R B$$</p> <p>where the homomorphisms are those of $R$-modules? In other words, do the operations of dualizing $\operatorname{Hom}( - , R)$ and tensoring $ - \otimes_R B$ commute when applied to $A?$</p> <p>If not, what about in the special case $R = \mathbb{Z}$ (so that $A\cong \mathbb{Z}^d$) and $B=\mathbb{R}?$</p>
Mariano Suárez-Álvarez
274
<p>To add a positive spin: there is an isomorphism $\hom_R(A\otimes B,R)\cong\hom_R(A,R)\otimes_RB$ whenever $B$ is projective and finitely generated. </p>
1,088,338
<p>There are at least a few things a person can do to contribute to the mathematics community without necessarily obtaining novel results, for example:</p> <ul> <li>Organizing known results into a coherent narrative in the form of lecture notes or a textbook</li> <li>Contributing code to open-source mathematical software</li> </ul> <p>What are some other ways to make auxiliary contributions to mathematics?</p>
Qiaochu Yuan
232
<p>You can create new jobs for mathematicians, e.g. by funding institutes like <a href="http://en.wikipedia.org/wiki/James_Harris_Simons">Jim Simons</a>. Arguably this does much more for mathematics than actually doing mathematics due to replaceability: the marginal effect of becoming a mathematician is that you do marginally better mathematics than the next best candidate for your job, which is a much smaller effect than creating a new mathematician job where there wasn't one before.</p> <p>You can also work on tools for mathematicians to use like <a href="http://arxiv.org/">arXiv</a> (or MathOverflow!). Arguably this also does much more for mathematics than actually doing mathematics. Incidentally, arXiv was developed by a physicist, <a href="http://en.wikipedia.org/wiki/Paul_Ginsparg">Paul Ginsparg</a>, and almost none of the mathematics graduate students I've talked to about this know his name. </p>
1,088,338
<p>There are at least a few things a person can do to contribute to the mathematics community without necessarily obtaining novel results, for example:</p> <ul> <li>Organizing known results into a coherent narrative in the form of lecture notes or a textbook</li> <li>Contributing code to open-source mathematical software</li> </ul> <p>What are some other ways to make auxiliary contributions to mathematics?</p>
Yes
155,328
<p>I would say that though obtaining results is crucial, introducing new concepts, new connections, or even new perspectives on classical mathematics is sometimes more important.</p> <p>Gauss, for example, introduced the concept of congruence in number theory, and arguably number theory has since been developed systematically. The introduction of irrational numbers also settled old problems such as squaring the circle.</p> <p>Einstein, for instance, found the connection between gravitation and curvature, so that differential geometry became involved with physics. Kolmogorov, for another example, noted the connection between probability and measure theory, giving us the modern theory of probability.</p> <p>Klein, for example, proposed viewing geometries from the group-theoretic point of view and contributed a lot to modern geometry. Hilbert, for another instance, examined the nature of mathematics and treated it rigorously and axiomatically.</p> <p>Personally, I believe that these are contributions of a higher form.</p>
2,965,082
<blockquote> <p>Suppose that <span class="math-container">$(X,\ d)$</span> and <span class="math-container">$(Y,\ \rho)$</span> are metric spaces, that <span class="math-container">$f_n:X\to Y$</span> is continuous for each <span class="math-container">$n$</span>, and that <span class="math-container">$(f_n)$</span> convergence pointwise to <span class="math-container">$f$</span> on <span class="math-container">$X$</span>. If there exists a sequence <span class="math-container">$(x_n)$</span> in <span class="math-container">$X$</span> such that <span class="math-container">$x_n\to x$</span> in <span class="math-container">$X$</span> but <span class="math-container">$f_n(x_n)\not\to f(x)$</span>, show that <span class="math-container">$(f_n)$</span> does not converge uniformly to <span class="math-container">$f$</span> on <span class="math-container">$X$</span>.</p> </blockquote> <p>I've managed to "prove" that the question is self-contradictory, so please find my error.</p> <p><span class="math-container">$\forall n\in\mathbb{N}$</span>, <span class="math-container">$f_n$</span> is continuous at <span class="math-container">$x$</span>. Let <span class="math-container">$\epsilon&gt;0$</span>. <span class="math-container">$\forall n\in\mathbb{N}$</span> <span class="math-container">$\exists\delta&gt;0$</span> s.t. <span class="math-container">$\rho(f_n(y),\ f_n(x))&lt;\epsilon/2$</span> for all <span class="math-container">$y\in X$</span> s.t. <span class="math-container">$d(y,\ x)&lt;\delta$</span>. ------------ (1)</p> <p>Since <span class="math-container">$x_m\to x$</span>, <span class="math-container">$\exists N_1\in\mathbb{N}$</span> s.t. <span class="math-container">$d(x_m,\ x)&lt;\delta$</span> <span class="math-container">$\forall m\ge N_1$</span>. 
------------- (2)</p> <p>From (1) and (2),</p> <p><span class="math-container">$\forall n$</span>, <span class="math-container">$\forall m\ge N_1\implies d(x_m,\ x)&lt;\delta\implies \rho(f_n(x_m),\ f_n(x))&lt;\epsilon/2$</span> ------------- (3)</p> <p>Also, <span class="math-container">$f_n(y)\to f(y)$</span> for all <span class="math-container">$y\in X$</span> due to pointwise convergence. So, <span class="math-container">$\exists N_2\in\mathbb{N}$</span> s.t. <span class="math-container">$\rho(f_n(y),\ f(y))&lt;\epsilon/2$</span> <span class="math-container">$\forall n\ge N_2$</span> and <span class="math-container">$\forall y\in X$</span>. ----------- (4)</p> <p>Let <span class="math-container">$N_3=\max(N_1,\ N_2)$</span>. Suppose <span class="math-container">$n\ge N_3$</span>. Then <span class="math-container">$n\ge N_1$</span> and <span class="math-container">$n\ge N_2$</span>.</p> <p><span class="math-container">$\begin{aligned} \implies\rho(f_n(x_n), f(x))&amp;\le\rho(f_n(x_n),\ f_n(x))+\rho(f_n(x),\ f(x)) \\ &amp;&lt;\epsilon/2+\epsilon/2\text{ [Using (3) and (4)]} \\ &amp;=\epsilon \end{aligned}$</span></p> <p>I have thus "proved" that <span class="math-container">$f_n(x_n)\to f(x)$</span>, contradicting the question. Where have I gone wrong?</p>
gogurt
29,568
<p>This is an excellent example of how imprecise language/notation can lead to confusion. </p> <p>In your first line you write "for all <span class="math-container">$n$</span>, there exists <span class="math-container">$\delta$</span> such that..." This sort of suggests that the same <span class="math-container">$\delta$</span> can be used for all <span class="math-container">$n$</span>, which is not correct and is what leads to your conclusion.</p> <p>In the future it might be more useful to think of this as "for <strong>each</strong> <span class="math-container">$n$</span>, there exists <span class="math-container">$\delta$</span>...". At least to me, this suggests more strongly that the <span class="math-container">$\delta$</span> may be different for each <span class="math-container">$n$</span>.</p>
1,093,396
<p>I've been working on a problem from a foundation exam which seems totally straightforward but for some reason I've become stuck:</p> <p>Let $f: \mathbb{ R } \rightarrow \mathbb{ R } ^n$ be a differentiable mapping with $f^\prime (t) \ne 0$ for all $t \in \mathbb{ R } $, and let $p \in \mathbb{ R } ^n$ be a point NOT in $f(\mathbb{ R }) $. </p> <ol> <li>Show that there is a point $q = f(t)$ on the curve $f(\mathbb{ R }) $ which is closest to $p$. </li> <li>Show that the vector $r := (p - q)$ is orthogonal to the curve at $q$. </li> </ol> <p>Hint: Consider the function $t \mapsto |p - f(t)|$ and its derivative. </p> <p>I can solve the second part of the problem assuming that I've found the point $q$ requested in the first part: consider the square difference function $\varphi(t) = |p - f(t)|^2 = (p - f(t)) \cdot (p - f(t))$, derive and at $q = f(t_0)$ we will have $\varphi ^\prime(t_0) = 0$. How do we prove the existence of the point $q$? The square distance function $\varphi(t)$ is a function from $\mathbb{ R } \rightarrow \mathbb{ R } $, so maybe something like Rolle's Theorem? Not too sure about this. Any help will be greatly appreciated! </p>
user134824
134,824
<p>Exactly. The function $\varphi: t\mapsto |p-f(t)|^2$ is continuous and nonnegative, so it is bounded below. To get an actual minimizer, fix any $t_0$ and restrict to the sublevel set $\{t : \varphi(t)\le \varphi(t_0)\}$, which is closed by continuity; when this set is also bounded (for instance whenever $|f(t)|\to\infty$ as $|t|\to\infty$), the extreme value theorem yields a minimizer $t^*$, and $q=f(t^*)$ is the point on the curve closest to $p$.</p>
892,742
<p>Let $G$ be a finite group. How can we show that $|G/G^{'}|\leq |C_G(x)|$ for all elements $x\in G$?</p>
Geoff Robinson
13,147
<p>This is a consequence of the fact that $G$ has $[G:G^{\prime}]$ linear characters (which are certainly irreducible); I refer to complex characters here, and if $\lambda$ is a linear character of $G,$ we have $|\lambda(g)|^{2} = 1$ for all $g \in G.$ On the other hand, the orthogonality relations show that $\sum_{\chi}|\chi(x)|^{2} = |C_{G}(x)|$ for each $x \in G,$ where $\chi$ runs over all complex irreducible characters of $G,$ so certainly forcing $|C_{G}(x)| \geq [G:G^{\prime}]$ for all such $x.$</p> <p>If you prefer a direct group-theoretic proof, note that for each $x \in G,$ there are $[G:C_{G}(x)]$ different possibilities for $x^{-1}g^{-1}xg$ as $g$ runs through $G,$ and these elements all lie in $G^{\prime},$ so $|G^{\prime}| \geq [G:C_{G}(x)].$ </p>
1,722,287
<p>So far I know that when matrices A and B are multiplied, with B on the right, the result, AB, is a linear combination of the columns of A, but I'm not sure what to do with this. </p>
Siong Thye Goh
306,553
<p>$$\operatorname{rank}(AB)=\operatorname{rank}((AB)^T)=\operatorname{rank}(B^TA^T)\leq \operatorname{rank}(B^T)=\operatorname{rank}(B)$$ where the inequality holds because the columns of $B^TA^T$ are linear combinations of the columns of $B^T$.</p>
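<p>A concrete sanity check of $\operatorname{rank}(AB)\le\min(\operatorname{rank}A,\operatorname{rank}B)$, sketched in plain Python with exact rational row reduction:</p>

```python
from fractions import Fraction

def rank(M):
    # Gaussian elimination over the rationals
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[1, 2, 3], [2, 4, 6], [0, 1, 1]]   # rank 2
B = [[1, 0, 1], [0, 1, 1], [1, 1, 2]]   # rank 2
assert rank(matmul(A, B)) <= min(rank(A), rank(B))
print(rank(A), rank(B), rank(matmul(A, B)))  # 2 2 2
```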
637,819
<p>$$x\in(\cap F)\cap(\cap G)=[\forall A\in F(x\in A)]\land[\forall A\in G(x\in A)]$$</p> <p>Since the variable $A$ is bounded by universal quantifier, it is regarded as bounded variable, according to the rules, the variable is free to change to other letters while the meaning statement remains unchanged. But,the above statements mention two different families of sets, $F$ and $G$, why is it correct to state the sets of $F$ and $G$ by using the same letter $A$, for the first $A$ in the first part of the conjunction stands for sets in $F$ while the latter stands for sets in $G$? Isn't different letters should be used to refer those sets($A$ for $F$ while $B$ for $G$)? I am extremely confused with the usage of bound variables. Please explain, thanks!</p>
SixWingedSeraph
318
<p>The answer to your <em>specific</em> question is that $A$ occurs as a bound variable in two different expressions. When a variable is bound, it has no meaning outside the expression it is in, so you can use it again as a variable in another expression, and that is what is done here. (But if I were writing a book or paper about logic aimed at beginners, I would use a different letter for the two bound variable anyway.)</p>
3,459,532
<p>I have a pretty straightforward linear programming problem here:</p> <p><span class="math-container">$$ maximize \hskip 5mm -x_1 + 2x_2 -3x_3 $$</span></p> <p>subject to</p> <p><span class="math-container">$$ 5x_1 - 6x_2 - 2x_3 \leq 2 $$</span> <span class="math-container">$$ 5x_1 - 2x_3 = 6 $$</span> <span class="math-container">$$ x_1 - 3x_2 + 5x_3 \geq -3 $$</span> <span class="math-container">$$ 1 \leq x_1 \leq 4 $$</span> <span class="math-container">$$ x_3 \leq 3 $$</span></p> <p>Convert to standard form.</p> <p>what boggles me is how to substitute <span class="math-container">$x_1$</span> since it’s restricted from both sides and I can’t move forward in the problem until I figure it out...</p> <p>I’m not asking for the whole standard form, just how to approach this one variable. :)</p>
Community
-1
<p>We can also do the job as follows. </p> <p>Note that <span class="math-container">$U_1=a_1a_1^T,U_2=a_2a_2^T$</span> are real symmetric, then are orthogonally diagonalizable.</p> <p><span class="math-container">$tr(U_1)=tr(U_2)=a_1^Ta_1=a_2^Ta_2=1$</span> and <span class="math-container">$rank(U_1)=rank(U_2)=1$</span> imply that <span class="math-container">$spectrum(U_1)=spectrum(U_2)=\{1,0,\cdots,0\}$</span>.</p> <p><span class="math-container">$U_1U_2=U_2U_1=0$</span> imply that <span class="math-container">$U_1,U_2$</span> are simultaneously orthogonally similar to <span class="math-container">$diag((\lambda_i)_i),diag((\mu_i)_i)$</span> where <span class="math-container">$\lambda_i\mu_i=0$</span> and <span class="math-container">$\lambda_i,\mu_i\in \{0,1\}$</span>.</p> <p>Finally <span class="math-container">$A=I-U_1-U_2$</span> is orthog. similar to <span class="math-container">$diag(I_{n-2},0_2)$</span> with <span class="math-container">$\ker(A)=span(a_1,a_2)$</span> and <span class="math-container">$\ker(A-I_n)=(span(a_1,a_2))^{\perp}$</span>.</p> <p>EDIT. More generally, if <span class="math-container">$\{a_1,\cdots,a_k\}$</span> is an orthonormal system, then <span class="math-container">$I-\sum_{i=1}^ka_ia_i^T$</span> is the orthogonal projection on <span class="math-container">$(span(a_1,\cdots,a_k))^{\perp}$</span>.</p>
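<p>The projection claim is easy to test numerically with an explicit orthonormal pair, e.g. $a_1=(1,0,0)^T$ and $a_2=(0,\tfrac35,\tfrac45)^T$ in $\mathbb{R}^3$: the matrix $A=I-a_1a_1^T-a_2a_2^T$ should be idempotent, vanish on $a_1,a_2$, and have trace $n-2$. A small Python sketch:</p>

```python
n = 3
a1 = [1.0, 0.0, 0.0]
a2 = [0.0, 0.6, 0.8]   # orthonormal with a1: 0.36 + 0.64 = 1

I = [[float(i == j) for j in range(n)] for i in range(n)]
A = [[I[i][j] - a1[i] * a1[j] - a2[i] * a2[j] for j in range(n)]
     for i in range(n)]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]

def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A2 = matmul(A, A)
assert all(abs(A2[i][j] - A[i][j]) < 1e-12 for i in range(n) for j in range(n))
assert all(abs(v) < 1e-12 for v in matvec(A, a1) + matvec(A, a2))
assert abs(sum(A[i][i] for i in range(n)) - (n - 2)) < 1e-12
print("A is idempotent, vanishes on span(a1, a2), and has trace n - 2")
```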
3,534,254
<blockquote> <p><span class="math-container">$r\gt0$</span>, Compute <span class="math-container">$$\int_0^{2\pi}\frac{\cos^2\theta }{ |re^{i\theta} -z|^2}d\theta$$</span> when <span class="math-container">$|z|\ne r$</span></p> </blockquote> <p>The problem is related to Poisson kernel and harmonic function, but I don't know how to start, <span class="math-container">$\cos^2\theta =1/2 (1+\cos 2\theta )$</span>.</p>
Conrad
298,272
<p>Note that <span class="math-container">$\Re{\frac{r+ze^{-i\theta}}{r-ze^{-i\theta}}}=\frac{r^2-|z|^2}{|re^{i\theta} -z|^2}$</span>, so the integral is <span class="math-container">$\frac{1}{r^2-|z|^2}\Re{\int_0^{2\pi}\frac{(r+ze^{-i\theta})\cos^2\theta }{ r-ze^{-i\theta}}}d\theta$</span></p> <p>while if <span class="math-container">$r&gt;|z|$</span> we have <span class="math-container">$\frac{r+ze^{-i\theta}}{r-ze^{-i\theta}}=1+2\sum_{k \ge 1}{\frac{z^k}{r^k}}e^{-ik\theta}$</span> and <span class="math-container">$4\cos^2\theta=2+e^{2i\theta}+e^{-2i\theta}$</span> so </p> <p><span class="math-container">${\int_0^{2\pi}\frac{(r+ze^{-i\theta})\cos^2\theta }{ r-ze^{-i\theta}}}d\theta=\pi+\frac{\pi z^2}{r^2}$</span> since we can integrate term by term by absolute convergence in <span class="math-container">$\theta$</span> and only two terms are nonzero (the first and the one for <span class="math-container">$k=2$</span>)</p> <p>Hence the original integral is <span class="math-container">$\frac{\pi}{r^2-|z|^2}+\frac{\pi \Re z^2}{r^2(r^2-|z|^2)}$</span></p> <p>If now <span class="math-container">$r&lt;|z|$</span> (so <span class="math-container">$z \ne 0$</span>) we have <span class="math-container">$\frac{r+ze^{-i\theta}}{r-ze^{-i\theta}}=-1-2\sum_{k \ge 1}{\frac{r^k}{z^k}}e^{ik\theta}$</span> and again integrating term by term we also have two terms, so the original integral is <span class="math-container">$\frac{\pi}{|z|^2-r^2}+\Re\frac{\pi r^2}{z^2(|z|^2-r^2)}$</span></p> <p>Note that with <span class="math-container">$z=se^{i\alpha}$</span> we get precisely <span class="math-container">$\frac{\pi}{r^2-s^2}( 1+\frac{s^2\cos 2\alpha}{r^2})$</span> for <span class="math-container">$r&gt;s$</span> and the same expression with <span class="math-container">$r,s$</span> switched for <span class="math-container">$s&gt;r$</span></p>
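<p>The closed form can be sanity-checked numerically; the trapezoidal rule converges very quickly for smooth periodic integrands. A short Python sketch for the case $r&gt;|z|$:</p>

```python
import cmath
import math

def integral(r, z, N=4096):
    # trapezoidal rule for the periodic integrand cos^2(t) / |r e^{it} - z|^2
    total = sum(math.cos(2 * math.pi * k / N) ** 2
                / abs(r * cmath.exp(2j * math.pi * k / N) - z) ** 2
                for k in range(N))
    return total * 2 * math.pi / N

def closed_form(r, z):
    # formula for r > |z|: pi/(r^2 - |z|^2) * (1 + Re(z^2)/r^2)
    return math.pi / (r**2 - abs(z)**2) * (1 + (z * z).real / r**2)

r, z = 2.0, 0.5 + 0.3j
assert abs(integral(r, z) - closed_form(r, z)) < 1e-10
print(closed_form(r, z))
```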
1,276,957
<p>These are the provided notes:</p> <blockquote> <p><img src="https://i.stack.imgur.com/NesWm.png" alt="Blockquote"></p> </blockquote> <p>These are the provided questions:</p> <blockquote> <p><img src="https://i.stack.imgur.com/t0Ta7.png" alt="Blockquote"></p> </blockquote> <p>I do not understand when I should subtract, add or leave the answer as is (The notes do not make sense to me very much). I do not, clearly, have an intuitive understanding of this. I hope someone can please please please show me thank you :)</p>
N. F. Taussig
173,070
<p>Consider the following diagram:</p> <p><img src="https://i.stack.imgur.com/62MrS.jpg" alt="second-quadrant_angle_with_reference_angle"></p> <p>The range of the arccosine function is $[0, \pi]$. Since you are working in degrees, this corresponds to $[0^\circ, 180^\circ]$. Thus, you can calculate the measure of the obtuse angle $\theta$ whose cosine is $-3/5$ directly since $$\cos\theta = -\frac{3}{5} \Longrightarrow \theta = \arccos\left(-\frac{3}{5}\right)$$ However, this will not work for sine or tangent since the range of the arcsine function is $\left[-\frac{\pi}{2}, \frac{\pi}{2}\right]$ (or, in degrees, $[-90^\circ, 90^\circ]$) and the range of the arctangent function is $\left(-\frac{\pi}{2}, \frac{\pi}{2}\right)$ (in degrees, $(-90^\circ, 90^\circ))$.</p> <p>For an obtuse angle $\theta$ such that $\sin\theta = \frac{1}{4}$, we draw a right triangle in the second quadrant with opposite side of length $|y| = |1| = 1$, hypotenuse of length $r = 4$, and adjacent side of length $|x| = |-\sqrt{15}| = \sqrt{15}$. The reference triangle we draw in the first-quadrant, has adjacent side of length $\sqrt{15}$, opposite side of length $1$, and hypotenuse of length $4$. The hypotenuse of the reference triangle forms an angle of $180^\circ - \theta$ with the positive $x$-axis, as shown in the diagram. Since the sine function satisfies the property $$\sin(180^\circ - \theta) = \sin\theta$$ we obtain $$\sin(180^\circ - \theta) = \frac{1}{4}$$ Moreover, since $\theta$ is an obtuse angle, $180^\circ -\theta$ is an acute angle, so it falls within the range of the arcsine function. Thus,<br> $$180^\circ - \theta = \arcsin\left(\frac{1}{4}\right)$$ Solving for $\theta$ yields $$\theta = 180^\circ - \arcsin\left(\frac{1}{4}\right)$$</p> <p>It would be inconvenient to draw a right triangle for an obtuse angle $\theta$ such that $\sin\theta = 0.7890$. 
However, we can still use the property that $\sin(180^\circ - \theta) = \sin\theta$ to conclude that $$\sin(180^\circ - \theta) = 0.7890$$ Since $180^\circ - \theta$ is an acute angle, we obtain $$180^\circ - \theta = \arcsin(0.7890)$$ Now, solve for $\theta$.</p> <p>For an obtuse angle $\theta$ such that $\tan\theta = -\frac{6}{11}$, we draw a second-quadrant angle with adjacent side of length $|x| = |-6| = 6$ and opposite side of length $|y| = |11| = 11$. The hypotenuse has length $\sqrt{157}$. The reference triangle in the first-quadrant has adjacent side of length $6$, opposite side of length $11$, and hypotenuse of length $\sqrt{157}$. The angle the hypotenuse of the reference triangle forms with the positive $x$-axis is $180^\circ - \theta$. Since $\theta$ is obtuse, $180^\circ - \theta$ is acute. Since the tangent function has the property that $$\tan(180^\circ - \theta) = -\tan\theta$$ we obtain $$\tan(180^\circ - \theta) = -\left(-\frac{6}{11}\right) = \frac{6}{11}$$ Since $180^\circ - \theta$ is an acute angle, it falls within the range of the arctangent function. Thus, $$180^\circ - \theta = \arctan\left(\frac{6}{11}\right)$$ Solving for $\theta$ yields $$\theta = 180^\circ - \arctan\left(\frac{6}{11}\right)$$<br> For the obtuse angle $\theta$ such that $\tan\theta = -3.8522$, we use the property that $\tan(180^\circ - \theta) = -\tan\theta$ to conclude that $$\tan(180^\circ - \theta) = -(-3.8522) = 3.8522$$ Since $180^\circ - \theta$ is an acute angle, $$180^\circ - \theta = \arctan(3.8522)$$ Now, solve for $\theta$.</p>
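<p>The two numerical cases then reduce to one line each on a calculator or in code; a quick Python check (angles in degrees):</p>

```python
from math import asin, atan, degrees, radians, sin, tan

theta_sin = 180 - degrees(asin(0.7890))   # obtuse angle with sin(theta) = 0.7890
theta_tan = 180 - degrees(atan(3.8522))   # obtuse angle with tan(theta) = -3.8522

assert 90 < theta_sin < 180 and abs(sin(radians(theta_sin)) - 0.7890) < 1e-9
assert 90 < theta_tan < 180 and abs(tan(radians(theta_tan)) + 3.8522) < 1e-9
print(theta_sin, theta_tan)
```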
754,888
<p>The letters that can be used are A, I, L, S, T. </p> <p>The word must start and end with a consonant. Exactly two vowels must be used. The vowels can't be adjacent.</p>
hmakholm left over Monica
14,366
<p>If you can find the prime factorization of the number, take the greatest common divisor of all the exponents in it.</p> <p>Unfortunately factoring large numbers is not quick, so simply checking all possible degrees up to $\log_2$ of the number might well be faster asymptotically.</p> <p>For most inputs, a combination might be the best strategy -- look for small prime factors, and take the gcd of their exponents. Then you only need to check degrees that are factors of that gcd.</p>
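<p>A minimal sketch of the factor-and-gcd approach in Python (trial division only, so suitable for small inputs): the largest $k$ such that $n$ is a perfect $k$-th power is the gcd of the exponents in the prime factorization.</p>

```python
from math import gcd

def max_power(n):
    """Largest k such that n is a perfect k-th power (n >= 2)."""
    exponents = []
    d, m = 2, n
    while d * d <= m:
        if m % d == 0:
            e = 0
            while m % d == 0:
                m //= d
                e += 1
            exponents.append(e)
        d += 1
    if m > 1:
        exponents.append(1)   # leftover prime factor with exponent 1
    k = 0
    for e in exponents:
        k = gcd(k, e)
    return k

assert max_power(216) == 3   # 216 = 2^3 * 3^3 = 6^3
assert max_power(64) == 6    # 64 = 2^6
assert max_power(12) == 1    # not a perfect power
print("ok")
```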
165,069
<p>I have a list of the following kind:</p> <pre><code>{{1,0.5},{2,0.6},{3,0.8},{-4,0.9},{-3,0.95}} </code></pre> <p>The important property is, that somewhere in the list, the first element of the sublists changes sign (above is from + to -, but could be from - to +). How can I most efficiently split this into two lists:</p> <pre><code>{{1,0.5},{2,0.6},{3,0.8}} </code></pre> <p>and</p> <pre><code>{{-4,0.9},{-3,0.95}} </code></pre> <p>?</p>
kglr
125
<pre><code>list = {{1, 0.5}, {2, 0.6}, {3, 0.8}, {-4, 0.9}, {-3, 0.95}}; SplitBy[#, Sign[First @ #] &amp;] &amp; @ list </code></pre> <blockquote> <p>{{{1, 0.5}, {2, 0.6}, {3, 0.8}}, {{-4, 0.9}, {-3, 0.95}}}</p> </blockquote> <p>Or</p> <pre><code>Split[#, SameQ @@ Sign [First /@ {##}] &amp;] &amp; @ list </code></pre> <blockquote> <p>{{{1, 0.5}, {2, 0.6}, {3, 0.8}}, {{-4, 0.9}, {-3, 0.95}}}</p> </blockquote>
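<p>For readers outside Mathematica, the same split by sign can be sketched in Python with <code>itertools.groupby</code> (assuming no first element is zero):</p>

```python
from itertools import groupby

lst = [(1, 0.5), (2, 0.6), (3, 0.8), (-4, 0.9), (-3, 0.95)]
parts = [list(g) for _, g in groupby(lst, key=lambda p: p[0] > 0)]
print(parts)
# [[(1, 0.5), (2, 0.6), (3, 0.8)], [(-4, 0.9), (-3, 0.95)]]
```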
2,449,581
<p>There is a brick wall that forms a rough triangle shape and at each level, the amount of bricks used is two bricks less than the previous layer. Is there a formula we can use to calculate the amount of bricks used in the wall, given the amount of bricks at the bottom and top levels?</p>
Arthur
15,500
<p>Hint: Make one identical wall right next to it, but upside-down. How many bricks are in each row of the two walls combined? How many bricks have you then used for those two walls?</p>
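<p>Following the hint: pairing the wall with an upside-down copy puts $t+b$ bricks in every combined row, where $t$ and $b$ are the top and bottom counts, and there are $(b-t)/2+1$ rows, so the total is $\tfrac{t+b}{2}\left(\tfrac{b-t}{2}+1\right)$. A quick Python check of the formula against direct summation:</p>

```python
def bricks(bottom, top):
    # rows shrink by 2 bricks each, so there are (bottom - top) / 2 + 1 rows;
    # pairing with an upside-down copy gives top + bottom bricks per row
    rows = (bottom - top) // 2 + 1
    return (top + bottom) * rows // 2

# compare against summing the rows directly
for top in range(1, 10):
    for bottom in range(top, 30, 2):
        assert bricks(bottom, top) == sum(range(top, bottom + 1, 2))
print(bricks(21, 1))  # rows 21, 19, ..., 1 give 121 bricks
```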
324,119
<p>I've been reading about the Artin Spin operation. It's defined as taking the classical <span class="math-container">$n$</span>-knot (<span class="math-container">$S^n\hookrightarrow S^{n+2}$</span>) to an <span class="math-container">$(n+1)$</span>-knot. For the <span class="math-container">$1$</span>-knot case (in <span class="math-container">$\mathbb{R}^3$</span>), I reproduce the procedure in <a href="https://arxiv.org/pdf/math/0410606.pdf" rel="nofollow noreferrer">knot spinning</a>, p. 8, </p> <ol> <li><p>We manipulate a knot <span class="math-container">$K$</span> so that all but a trivial arc <span class="math-container">$a$</span> lie in the upper half space <span class="math-container">$H^3=\{(x,y,z)\&gt;|\&gt;z\geq 0\}$</span>. We remove the interior of <span class="math-container">$a$</span>.</p></li> <li><p>We rotate <span class="math-container">$H^3$</span> around <span class="math-container">$\mathbb{R}^2$</span> in <span class="math-container">$\mathbb{R}^4$</span>, inducing a parameterization <span class="math-container">$(x,y,z)\mapsto (x, y, z\cos\theta, z\sin\theta)$</span></p></li> </ol> <p><strong>-- Question --</strong> How is this operation similar to the suspension functor (on a topological space <span class="math-container">$X$</span>) <span class="math-container">$\Sigma X\equiv X\wedge S=\frac{X\times S}{X\vee S}$</span>?</p>
spin
38,068
<p>This does not really involve any category theory, but perhaps it is useful to note the following general setting for the Jordan-Hölder theorem.</p> <p>For <span class="math-container">$G$</span> a group and <span class="math-container">$\Omega$</span> a set, a <em>group with operators</em> is <span class="math-container">$(G, \Omega)$</span> equipped with an action <span class="math-container">$\Omega \times G \rightarrow G$</span>: <span class="math-container">$(\omega,g) \mapsto g^\omega$</span> such that <span class="math-container">$(gh)^{\omega} = g^{\omega} h^{\omega}$</span> for all <span class="math-container">$\omega \in \Omega$</span> and <span class="math-container">$g, h \in G$</span>. </p> <p>See the Wikipedia article for more information about groups with operators: <a href="https://en.wikipedia.org/wiki/Group_with_operators" rel="noreferrer">link</a>.</p> <p>The point is that the Jordan-Hölder theorem holds for a group with operators, and it seems to have most Jordan-Hölder type theorems as a special case. These include for example the Jordan-Hölder theorems for groups and modules over a ring. Also, by taking <span class="math-container">$\Omega = G$</span> with conjugation action, you get results about chief series and chief factors of <span class="math-container">$G$</span>.</p>
2,180,700
<p>A and B toss a fair coin 10 times. In each toss, if its a head A's score gets incremented by 1, if its a tail B's score gets incremented by 1.</p> <p>After 10 tosses, the person with the greatest score wins the game.</p> <p>What is the probability that A wins?</p> <p>And if B alone gets an extra toss. What is the probability that A wins?</p> <p>According to me, The cases where A can win are </p> <p>(Score of A, Score of B)</p> <p>(6,4)</p> <p>(7,3)</p> <p>(8,2)</p> <p>(9,1)</p> <p>(10,0)</p> <p>These are A's winning cases. Now I am confused on how to proceed.</p> <p>One method I can think of is that in each of these 5 cases the probability them happening is (1/2)^10. So the probability of A winning is 5*(1/2)^10</p> <p>But I think I am not taking into consideration the various occurrences of the winning tosses from the total tosses.</p> <p>So should the probability of A winning be like </p> <p>(10C6 + 10C7 + 10C8 + 10C9 + 10C10 ) / 2^10</p> <p>Which is the number of possible outcomes for A divided by the total number of outcomes. Where 10C6 is the number of ways of selecting 6 from 10 items</p>
Bram28
256,001
<p>It should indeed be (10C6 + 10C7 + 10C8 + 10C9 + 10C10 ) / 2^10</p> <p>Here is why:</p> <p>There is only one way for (10,0) to be the outcome:</p> <p>HHHHHHHHHH ... which happens with a probability of $(\frac{1}{2})^{10}$</p> <p>But there are 10 ways for (9,1) to be the outcome:</p> <p>HHHHHHHHHT</p> <p>HHHHHHHHTH</p> <p>...</p> <p>THHHHHHHHH</p> <p>Each of these happens with a probability of $(\frac{1}{2})^{10}$, and you have 10 of them, since the single T can happen in any one of 10 places. So the probability of getting 9 H and 1 T is $10\cdot(\frac{1}{2})^{10}$.</p> <p>For (8,2) to be the outcome, you need 2 T's among 8 H's, which can be done in ${10}\choose{2}$ ways ... and thus indeed you get the general formula that you indicated at the end.</p>
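<p>For a quick sanity check of this count (my addition, not part of the original answer), here is a short Python sketch comparing the binomial-tail formula with a brute-force enumeration of all 2^10 equally likely toss sequences:</p>

```python
from itertools import product
from math import comb

# A wins with 6 or more heads out of 10 fair tosses.
p_formula = sum(comb(10, h) for h in range(6, 11)) / 2**10

# Brute force: enumerate every toss sequence (1 = head for A).
wins = sum(1 for seq in product((0, 1), repeat=10) if sum(seq) >= 6)
p_brute = wins / 2**10

assert p_formula == p_brute
print(p_formula)  # 0.376953125, i.e. 386/1024
```

<p>Both approaches give 386/1024, confirming the combinatorial count over the naive 5*(1/2)^10.</p>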
3,074,035
<p>I am trying to find a simplified form for this summation:</p> <p><span class="math-container">$$B(k,j) \equiv \sum_{i=1}^k (-1)^{k+i} {i \choose j} (k-1)_{i-1} \quad \quad \quad \text{for } 1 \leqslant j \leqslant k,$$</span></p> <p>where the terms <span class="math-container">$(k-1)_{i-1} = (k-1) \cdots (k-i+1)$</span> are <a href="https://en.wikipedia.org/wiki/Falling_and_rising_factorials" rel="nofollow noreferrer">falling factorials</a> (and <span class="math-container">$(k-1)_0 = 1$</span> by convention). I have done some work on this by generating a matrix of values of this quantity (see below) and I can recognise simplified forms for some of the parts of the matrix, but I have been unable to see a simplified form for the whole thing.</p> <hr> <p><strong>Computing the terms and looking for a pattern:</strong> In case it helps to try to recognise a pattern, I have computed a matrix of terms using the <code>R</code> code below. From the matrix of values it is clear that <span class="math-container">$B(k,k) = (k-1)!$</span> and <span class="math-container">$B(k,k-1) = (k-1) (k-1)!$</span>. I also recognise that <span class="math-container">$B(k,1)$</span> is <a href="https://oeis.org/A000255" rel="nofollow noreferrer">A000255</a>. 
I do not recognise the overall matrix of numbers as any simple form.</p> <pre><code>#Generate function
B &lt;- function(k,j) {
  T1 &lt;- (-1)^(k+1:k);
  T2 &lt;- choose(1:k, j);
  TT &lt;- c(1,(k-1):1);
  T3 &lt;- cumprod(TT);
  sum(T1*T2*T3); }

#Create matrix of terms
M &lt;- 10;
BBB &lt;- matrix(0, nrow = M, ncol = M);
for (k in 1:M) {
for (j in 1:k)  {
  BBB[k,j] &lt;- B(k,j); }}

BBB;    #k is the row, j is the column

         [,1]    [,2]     [,3]     [,4]     [,5]     [,6]     [,7]     [,8]    [,9]  [,10]
 [1,]       1       0        0        0        0        0        0        0       0      0
 [2,]       1       1        0        0        0        0        0        0       0      0
 [3,]       3       4        2        0        0        0        0        0       0      0
 [4,]      11      21       18        6        0        0        0        0       0      0
 [5,]      53     128      156       96       24        0        0        0       0      0
 [6,]     309     905     1420     1260      600      120        0        0       0      0
 [7,]    2119    7284    13950    16080    11160     4320      720        0       0      0
 [8,]   16687   65821   148638   210210   190680   108360    35280     5040      0      0
 [9,]  148329  660064  1715672  2870784  3207120  2392320  1149120   322560  40320      0
[10,] 1468457 7275537 21381624 41278104 54701136 50394960 31872960 13245120 3265920 362880
</code></pre>
R. J. Mathar
805,678
<p><span class="math-container">\begin{equation} \sum_{i=1}^k (-1)^{k+i}\binom{i}{j}(k-1)_{i-1} = \frac{\Gamma(k)}{\Gamma(1+j)}(-1)^k \sum_{i=1}^k (-1)^{i}\frac{\Gamma(i+1)}{\Gamma(i-j+1)\Gamma(k-i+1)} \end{equation}</span> substitute <span class="math-container">$i'=k-i$</span> and use <span class="math-container">$(.)_n$</span> for the rising factorial (Pochhammer's symbol) <span class="math-container">\begin{equation} = \frac{\Gamma(k)}{\Gamma(1+j)}(-1)^k \sum_{i'=k-1}^0 (-1)^{k-i'}\frac{\Gamma(k-i'+1)}{\Gamma(k-i'-j+1)\Gamma(i'+1)} \end{equation}</span> <span class="math-container">\begin{equation} = \frac{\Gamma(k)}{\Gamma(1+j)} \sum_{i=0}^{k-1} \frac{\Gamma(k+1)(j-k)_i}{(-k)_i\Gamma(k-j+1)\Gamma(i+1)}(-1)^i \end{equation}</span> <span class="math-container">\begin{equation} = \frac{\Gamma(k)\Gamma(k+1)}{\Gamma(1+j)\Gamma(k-j+1)} {}_1F_1(j-k;-k;-1) \end{equation}</span> <span class="math-container">\begin{equation} = \Gamma(k) \binom{k}{j} {}_1F_1(j-k;-k;-1) \end{equation}</span> <span class="math-container">\begin{equation} = \Gamma(k) \binom{k}{j} \sum_{i=0}^{k-j} \frac{(j-k)_i}{(-k)_i}\frac{(-1)^i}{i!} \end{equation}</span> <span class="math-container">\begin{equation} = \Gamma(k) \binom{k}{j} \sum_{i=0}^{k-j} \frac{(j-k)(j-k+1)\cdots (j-k+i-1)}{(-k)(-k+1)\cdots (-k+i-1)}\frac{(-1)^i}{i!} \end{equation}</span> <span class="math-container">\begin{equation} = \Gamma(k) \binom{k}{j} \sum_{i=0}^{k-j} \frac{(k-j)(k-j-1)\cdots (k-j-i+1)}{(k)(k-1)\cdots (k-i+1)}\frac{(-1)^i}{i!} \end{equation}</span> <span class="math-container">\begin{equation} = \Gamma(k) \frac{k!}{(k-j)!j!} \sum_{i=0}^{k-j} \frac{(k-j)!(k-i)!}{(k-j-i)!k!}\frac{(-1)^i}{i!} \end{equation}</span> <span class="math-container">\begin{equation} = \Gamma(k) \frac{1}{j!} \sum_{i=0}^{k-j} \frac{(k-i)!}{(k-j-i)!}\frac{(-1)^i}{i!} \end{equation}</span> <span class="math-container">\begin{equation} =(k-1)!
\sum_{i=0}^{k-j} \binom{k-i}{j}\frac{(-1)^i}{i!} \end{equation}</span></p>
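<p>Not in the original answer, but the final closed form can be checked against the question's matrix of values; here is a hedged Python sketch (the function names <code>B_direct</code> and <code>B_closed</code> are mine):</p>

```python
from math import comb, factorial
from fractions import Fraction

def B_direct(k, j):
    """The original sum: sum_{i=1}^k (-1)^(k+i) C(i,j) (k-1)_{i-1}."""
    total = 0
    for i in range(1, k + 1):
        falling = 1
        for m in range(k - 1, k - i, -1):  # (k-1)(k-2)...(k-i+1); empty product for i=1
            falling *= m
        total += (-1) ** (k + i) * comb(i, j) * falling
    return total

def B_closed(k, j):
    """The derived form: (k-1)! * sum_{i=0}^{k-j} C(k-i, j) (-1)^i / i!."""
    s = sum(Fraction((-1) ** i * comb(k - i, j), factorial(i))
            for i in range(k - j + 1))
    return factorial(k - 1) * s

# Agrees with the direct sum for the whole 10x10 range of the question.
assert all(B_direct(k, j) == B_closed(k, j)
           for k in range(1, 11) for j in range(1, k + 1))
print(B_direct(10, 1))  # 1468457, matching the question's matrix and A000255
```

<p>Using <code>Fraction</code> keeps the alternating sum exact, so the comparison with the integer direct sum is not affected by rounding.</p>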
2,942,263
<p>I am curious whether there is an algebraic verification for <span class="math-container">$y = x + 2\sqrt{x^2 - \sqrt{2}x + 1}$</span> having its minimum value of <span class="math-container">$\sqrt{2 + \sqrt{3}}$</span> at <span class="math-container">$\frac{1}{\sqrt{2}} - \frac{1}{\sqrt{6}}$</span>. I have been told the graph of it is that of a hyperbola.</p>
K B Dave
534,616
<p>Scale and shift <span class="math-container">$$\begin{align}u&amp;=\sqrt{2}x-1&amp;v&amp;=\tfrac{1}{\sqrt{2}}y-\tfrac{1}{2}\text{.} \end{align}$$</span> Then it is just as well to minimize <span class="math-container">$$v=\tfrac{u}{2}+\sqrt{u^2+1}\text{.}$$</span> with respect to <span class="math-container">$u$</span>.</p> <p>Use the stereographic projection <span class="math-container">$$u=\frac{2t}{t^2-1}\text{.}$$</span> Then it is just as well to minimize <span class="math-container">$$v=1+\frac{t+2}{t^2-1}=1+\frac{3}{2(t-1)}-\frac{1}{2(t+1)}$$</span> over <span class="math-container">$t^2&gt;1\text{.}$</span> Use the Cayley transform <span class="math-container">$$s=\frac{t-1}{t+1}\text{.}$$</span> Then it is just as well to minimize <span class="math-container">$$v=\frac{3}{4s}+\frac{s}{4}$$</span> over <span class="math-container">$s&gt;0$</span>. Rescale <span class="math-container">$$\begin{align}w&amp;=2\frac{v}{\sqrt{3}}&amp;r&amp;=\frac{s}{\sqrt{3}}\text{.}\end{align}$$</span></p> <p>Then it is just as well to minimize <span class="math-container">$$w=\frac{r+\tfrac{1}{r}}{2}$$</span> over <span class="math-container">$r&gt;0$</span>. But by the arithmetic-geometric mean inequality, <span class="math-container">$$w\geq 1$$</span> with equality iff <span class="math-container">$r=1$</span>. Retracing the steps gives <span class="math-container">$$\begin{align}x&amp;=\frac{1}{\sqrt{2}}-\frac{1}{\sqrt{6}}&amp;y&amp;=\frac{\sqrt{6}}{2}+\frac{\sqrt{2}}{2}\text{.} \end{align}$$</span></p>
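<p>The endpoint of this chain of substitutions is easy to sanity-check numerically (my addition, not part of the derivation):</p>

```python
from math import sqrt

def y(x):
    return x + 2 * sqrt(x * x - sqrt(2) * x + 1)

x_min = 1 / sqrt(2) - 1 / sqrt(6)
y_min = sqrt(6) / 2 + sqrt(2) / 2

# The claimed minimiser and minimum value agree with y, and with sqrt(2+sqrt(3)).
assert abs(y(x_min) - y_min) < 1e-12
assert abs(y_min - sqrt(2 + sqrt(3))) < 1e-12

# Nearby points give strictly larger values, as a local check of minimality.
assert all(y(x_min + d) > y_min for d in (-0.1, -0.01, 0.01, 0.1))
print(round(y_min, 6))  # 1.931852
```
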
2,942,263
<p>I am curious whether there is an algebraic verification for <span class="math-container">$y = x + 2\sqrt{x^2 - \sqrt{2}x + 1}$</span> having its minimum value of <span class="math-container">$\sqrt{2 + \sqrt{3}}$</span> at <span class="math-container">$\frac{1}{\sqrt{2}} - \frac{1}{\sqrt{6}}$</span>. I have been told the graph of it is that of a hyperbola.</p>
trancelocation
467,003
<p>Here is a "non-calculus" way that reduces everything back to AMGM. It is a bit cumbersome but works.</p> <p>I prefer giving all stepwise substitutions to show how to bring the whole expression back to hyperbolic functions, where AMGM does all the work. The basic idea behind it is that <span class="math-container">$\cosh t = \sqrt{\sinh^2 t + 1}$</span>:</p> <p><span class="math-container">$$\color{blue}{y=} x + 2\sqrt{x^2 - \sqrt{2}x + 1}$$</span> <span class="math-container">$$x^2 - \sqrt{2}x + 1 = (x - \frac{\sqrt{2}}{2})^2 + 1 - \frac{1}{2} = (x - \frac{\sqrt{2}}{2})^2 + \frac{1}{2}$$</span> <span class="math-container">$$\color{green}{u =x - \frac{\sqrt{2}}{2}} \Rightarrow \color{blue}{y=} u + \frac{\sqrt{2}}{2} +2\sqrt{u^2+\frac{1}{2}} = \color{blue}{\frac{\sqrt{2}}{2} + u + \sqrt{2}\sqrt{\left(\sqrt{2} u\right)^2 + 1}}$$</span> <span class="math-container">$$\color{green}{v = \sqrt{2} u} \Rightarrow \color{blue}{y =} \frac{\sqrt{2}}{2} + \frac{\sqrt{2}}{2} v + \sqrt{2}\sqrt{v^2 + 1} = \color{blue}{\frac{\sqrt{2}}{2} + \sqrt{2} \left(\boxed{ \frac{v}{2} + \sqrt{v^2 + 1}} \right)}$$</span> <span class="math-container">$$\color{green}{v = \sinh t} \Rightarrow \boxed{ \frac{\sinh t}{2} + \cosh t} = \frac{e^t - e^{-t}}{4} + \frac{e^t + e^{-t}}{2} = \frac{3}{4}e^t + \frac{1}{4} e^{-t} \boxed{\stackrel{AMGM}{\geq}} \sqrt{\frac{3}{4}} = \boxed{\frac{\sqrt{3}}{2}}$$</span> Setting <span class="math-container">$\color{green}{w= e^t}$</span>, equality holds for <span class="math-container">$$3w = \frac{1}{w} \stackrel{w&gt;0}{\Leftrightarrow} w =\frac{1}{\sqrt{3}} \Rightarrow \color{blue}{y \geq} \frac{\sqrt{2}}{2} + \sqrt{2}\frac{\sqrt{3}}{2} = \color{blue}{\frac{\sqrt{2}}{2}\left( 1 + \sqrt{3}\right)}$$</span></p> <p>Note that <span class="math-container">$$\left( \frac{\sqrt{2}}{2}\left( 1 + \sqrt{3}\right) \right)^2 =\frac{1}{2}\left( 1 + 2\sqrt{3} + 3\right) = 2 + \sqrt{3} $$</span></p> <p>Backwards substitution yields <span class="math-container">$x$</span>:
<span class="math-container">$$\color{green}{t =} \ln \frac{1}{\sqrt{3}} = \color{green}{-\ln \sqrt{3}} \Rightarrow \color{green}{v =} \sinh t = \frac{\frac{1}{\sqrt{3}} - \sqrt{3}}{2} = \color{green}{-\frac{\sqrt{3}}{3}}$$</span> <span class="math-container">$$ \Rightarrow \color{green}{x =} \frac{\sqrt{2}}{2}v+\frac{\sqrt{2}}{2} = \frac{1}{\sqrt{2}} \cdot \left(- \frac{1}{\sqrt{3}} \right) + \frac{1}{\sqrt{2}} = \color{green}{\frac{1}{\sqrt{2}} - \frac{1}{\sqrt{6}}}$$</span></p>
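<p>The chain of substitutions above can be spot-checked numerically; here is a small Python sketch I added (not part of the original answer):</p>

```python
from math import sqrt, sinh, log

def y(x):
    return x + 2 * sqrt(x * x - sqrt(2) * x + 1)

# Step check: with v = sqrt(2)*u = sqrt(2)*x - 1, the rewritten form
# sqrt(2)/2 + sqrt(2)*(v/2 + sqrt(v^2+1)) should agree with y(x).
for x in (-1.3, 0.0, 0.7, 2.5):
    v = sqrt(2) * x - 1
    rhs = sqrt(2) / 2 + sqrt(2) * (v / 2 + sqrt(v * v + 1))
    assert abs(y(x) - rhs) < 1e-9

# AMGM equality case: w = e^t = 1/sqrt(3), i.e. t = -ln(sqrt(3)).
t = -log(sqrt(3))
v = sinh(t)
assert abs(v + sqrt(3) / 3) < 1e-12            # v = -sqrt(3)/3
x_star = v / sqrt(2) + sqrt(2) / 2             # undo v = sqrt(2)(x - sqrt(2)/2)
assert abs(x_star - (1 / sqrt(2) - 1 / sqrt(6))) < 1e-12
assert abs(y(x_star) - sqrt(2) / 2 * (1 + sqrt(3))) < 1e-12
print("substitution chain verified")
```
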