| qid | question | author | author_id | answer |
|---|---|---|---|---|
4,615,947 | <p>Let <span class="math-container">$a,b\in\Bbb{N}^*$</span> such that <span class="math-container">$\gcd(a,b)=1$</span>. How to show that <span class="math-container">$\gcd(ab,a^2+b^2)=1$</span>?</p>
| Claude Leibovici | 82,404 | <p><em>Using algebra</em></p>
<p>Consider the more general case of
<span class="math-container">$$I=\int_{-\infty}^{+\infty} \frac{\sin(t)}{(t-a)(t-b)} \, dt$$</span> where <span class="math-container">$(a,b)$</span> are complex numbers.</p>
<p>Using partial fraction decomposition
<span class="math-container">$$I=\frac 1{a-b}\left(\int_{-\infty}^{+\infty}\frac{\sin (t)}{t-a}\,dt-\int_{-\infty}^{+\infty}\frac{\sin (t)}{t-b}\,dt\right)$$</span> So, we have two integrals of the form
<span class="math-container">$$J_c=\int_{-\infty}^{+\infty}\frac{\sin (t)}{t-c}\,dt$$</span> A natural change of variable <span class="math-container">$(t=x+c)$</span> and the expansion of the sine gives</p>
<p><span class="math-container">$$K_c=\int\frac{\sin (t)}{t-c}\,dt=\cos(c)\int \frac{\sin (x)}{x}\,dx+\sin(c)\int \frac{\cos (x)}{x}\,dx$$</span></p>
<p><span class="math-container">$$K_c=\cos (c)\, \text{Si}(x)+\sin (c)\, \text{Ci}(x)$$</span> Back to <span class="math-container">$t$</span> and <span class="math-container">$I$</span> and using the bounds, then <span class="math-container">$\cdots\cdots\cdots$</span></p>
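<p>A quick numerical sanity check of the antiderivative (a sketch added here, not part of the original answer; it assumes Python with mpmath):</p>

```python
# Check numerically that K_c(x) = cos(c)*Si(x) + sin(c)*Ci(x) differentiates
# back to sin(x + c)/x, the integrand after the shift t = x + c.
import mpmath as mp

mp.mp.dps = 30  # work at 30 significant digits

def K(x, c):
    return mp.cos(c) * mp.si(x) + mp.sin(c) * mp.ci(x)

c = mp.mpf("0.7")
for x in (mp.mpf("0.5"), mp.mpf("2.3"), mp.mpf("10")):
    lhs = mp.diff(lambda t: K(t, c), x)  # numerical derivative of K_c
    rhs = mp.sin(x + c) / x
    assert abs(lhs - rhs) < mp.mpf("1e-12")
```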
|
1,038,713 | <p>Suppose I am given a circle $C$ in $\Bbb C^*$ and two points $w_1,w_2$. Given another circle $C'$ and points $z_1,z_2$, what is the procedure to find a Möbius transformation that sends $C\to C'$, $w_i\to z_i,i=1,2$? Here $z_1\in C\not\ni z_2$; $w_1\in C'\not\ni w_2$. For example, take $|z|=2$, $w_1=-2,w_2=0$. Then, the transformation $T(z)=-\dfrac{z+2}{2}$ sends $|z|=2$ to $|z+1|=1$, $-2$ to $0$, and $0$ to $-1$. Hence, I need to find a transformation that fixes $|z+1|=1$ and $0$, and sends $-1$ to $i$. I know that if $|\alpha|\neq 1$, the transformation $$T(z)=\frac{z-\alpha}{1-z\bar \alpha}$$ fixes $|z|=1$, sends $\alpha$ to $0$ and has fixed points $\sqrt{\dfrac{\alpha}{\bar\alpha}}$. I obtained $T$ using $4$ successive transformations $T_1=-(z+2)$, $T_2=\dfrac{z}{z+2}$, $T_3=-z$ and $T_4=-\dfrac{z}{z+1}$, which seems a bit ineffective. How can I generally find $T$ given $(C,C',(z_1,z_2),(w_1,w_2))$?</p>
| Christian Blatter | 1,303 | <p>Each instance of your problem involves a lot of data, and you haven't even indicated in which format the two circles are given. It follows that there is no simple "one formula suits it all" solution. Given an instance of the problem one can reduce it to the special case $z_1=w_1=0$, $z_2=w_2=\infty$. The Moebius transform you are asking for is then a euclidean similarity $$z\mapsto w=a z,\qquad a\in{\mathbb C}^*\ .$$ It exists only if the two circles are in "similar positions" with respect to the origin.</p>
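<p>For the concrete example in the question (composing four simple maps), the bookkeeping is easier with the standard matrix representation of Möbius transformations; the following sketch is my own illustration, not part of this answer:</p>

```python
# A Möbius transform (a*z + b)/(c*z + d) corresponds to the matrix [[a, b], [c, d]],
# and composition of transforms is matrix multiplication.
import numpy as np

def apply(M, z):
    a, b, c, d = M.ravel()
    return (a * z + b) / (c * z + d)

T1 = np.array([[-1, -2], [0, 1]])  # z -> -(z+2)
T2 = np.array([[1, 0], [1, 2]])    # z -> z/(z+2)
T3 = np.array([[-1, 0], [0, 1]])   # z -> -z
T4 = np.array([[-1, 0], [1, 1]])   # z -> -z/(z+1)

T = T4 @ T3 @ T2 @ T1              # composite of the four successive maps

# The composite agrees with T(z) = -(z+2)/2 from the question:
for z in (2 + 0j, 1j, 3.0 - 1j):
    assert abs(apply(T, z) - (-(z + 2) / 2)) < 1e-12
```

<p>One matrix product replaces the four-step chase through substitutions; the same bookkeeping works for any $(C,C',(z_1,z_2),(w_1,w_2))$ once the individual maps are written down.</p>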
|
1,382,374 | <p>This is an exercise from Rudin's <em>Principles of Mathematical Analysis</em>, Chapter $6$.</p>
<blockquote>
<p>Suppose $f$ is a real, continuously differentiable function on $[a, b]$, $f(a) = f(b) = 0$, and
$$\int_a^b f^2(x)\, dx = 1.$$
Prove that
$$\int_a^b xf(x)f'(x)\,dx = -\frac{1}{2}$$
and that
$$\int_a^b [f'(x)]^2\,dx \cdot \int_a^b x^2f^2(x)\,dx {\color{red}>} \frac{1}{4}.$$</p>
</blockquote>
<p>The first equality can be easily proved by integrating by parts. Also, I can show that
$$\int_a^b [f'(x)]^2\,dx \cdot \int_a^b x^2f^2(x)\,dx {\color{red}\geq} \frac{1}{4}$$
by the Cauchy-Schwarz inequality. However, why must the inequality always be strict? I tried discussing $f'(x) = \lambda xf(x)$ for some constant $\lambda$, which is when equality holds, and deriving a contradiction. But it seems hopeless. Thanks in advance for any suggestions!</p>
| karmalu | 258,239 | <p>The differential equality gives $f(x)=e^{\lambda \frac{x^2}{2}+c}$, which is never equal to $0$ (contradicting $f(a)=f(b)=0$); the degenerate case is $f$ constant, which cannot satisfy your requirements either, since $f(a)=0$ would then force $f\equiv 0$, contradicting $\int_a^b f^2\,dx=1$.</p>
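<p>The equality case can also be checked symbolically (a sketch assuming Python with sympy; not part of the original answer):</p>

```python
# Solve the Cauchy-Schwarz equality case f'(x) = lambda*x*f(x) symbolically.
import sympy as sp

x = sp.Symbol('x')
lam = sp.Symbol('lambda')
f = sp.Function('f')

sol = sp.dsolve(sp.Eq(f(x).diff(x), lam * x * f(x)), f(x))
# The general solution is C1*exp(lambda*x**2/2): it vanishes nowhere unless
# C1 = 0, and C1 = 0 contradicts the hypothesis that the integral of f^2 is 1.
assert sp.simplify(sol.rhs.diff(x) - lam * x * sol.rhs) == 0  # solves the ODE
```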
|
3,736,580 | <p>Show that for <span class="math-container">$n>3$</span>, there is always a <span class="math-container">$2$</span>-regular graph on <span class="math-container">$n$</span> vertices. For what values of <span class="math-container">$n>4$</span> will there be a 3-regular graph on n vertices?</p>
<p>I think this question is slightly beyond me. Can you please help me out with this question...</p>
<p>For part two, what I think is: by the handshaking lemma I can exclude every odd number of vertices, as <span class="math-container">$3(2n+1)$</span> is not an even number. So what should the answer be? All even numbers of vertices? Does that make sense?
And for part 1 it is obviously true, but how can I proceed to the answer?
Thanks.</p>
| Rhys Hughes | 487,658 | <p>Such a sequence clearly exists; for example, I could say:</p>
<p><span class="math-container">$$\frac{1}{7}(4+3+3+3+3+3+3)=\frac{22}{7}\approx\pi$$</span></p>
<p>and, continuing such a process towards an infinite number of terms, there exists a configuration that can get as close to <span class="math-container">$\pi$</span> (or any other number) as we like.</p>
<p>However, in terms of finding that configuration, I doubt there's much better than DanielV's answer.</p>
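<p>The averaging process can be made concrete with a greedy rule (my own illustration of the idea, not from the answer): always append 3 or 4, whichever keeps the running mean closer to <span class="math-container">$\pi$</span>.</p>

```python
# Greedily append 3 or 4 so the running mean approaches pi; the answer's
# 22/7 example (one 4 and six 3s) is the n = 7 stage of such a construction.
import math

terms, total = [], 0
for n in range(1, 10001):
    pick = min((3, 4), key=lambda t: abs((total + t) / n - math.pi))
    terms.append(pick)
    total += pick

assert abs(22 / 7 - math.pi) < 2e-3             # the first approximation
assert abs(total / len(terms) - math.pi) < 1e-3  # mean error shrinks like 1/n
```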
|
2,057,813 | <p>How can I prove that for $s \in \mathbb{C}$, with real part of $s$ being equal to 1,
\begin{equation}
\sum_{n=1}^{\infty}\frac{1}{n^{s}}
\end{equation}
diverges?</p>
<p>Thanks a lot!</p>
| vidyarthi | 349,094 | <p>Note that the Riemann Zeta function diverges only for <span class="math-container">$s=1$</span>. For <span class="math-container">$s\neq 1$</span>, however, the function is convergent. As for divergence at <span class="math-container">$s=1$</span>, there are many proofs extant, one of which is the <a href="https://en.wikipedia.org/wiki/Cauchy_condensation_test" rel="nofollow noreferrer">Cauchy Condensation Test</a>. The function's definition in the form given by you is valid only for <span class="math-container">$Re(s)>1$</span>. For other regions, we need to use other representations; one of these, as pointed out by DonAntonio in the comments, in series form and valid for <span class="math-container">$Re(s)>0$</span>, uses the <a href="http://mathworld.wolfram.com/DirichletEtaFunction.html" rel="nofollow noreferrer">Dirichlet Eta Function</a>. The convergence at <span class="math-container">$Re(s)=1,s\neq1$</span> can be proved by observing that <span class="math-container">$\zeta(s)=\frac{1}{1-2^{1-s}}\sum\limits_{n=1}^{\infty}\frac{(-1)^{n-1}}{n^s}$</span> converges for <span class="math-container">$Re(s)>0,s\neq1$</span>. The convergence of <span class="math-container">$\sum_n\frac{(-1)^{n-1}}{n^s}$</span> at <span class="math-container">$Re(s)=1$</span> is proved using the theory of convergence of Dirichlet series to the right of the abscissa of convergence, which, in this case, is <span class="math-container">$Re(s)>0$</span>. It may also be seen by expanding <span class="math-container">$\zeta(s)$</span> around <span class="math-container">$s=1$</span> as a <a href="https://en.wikipedia.org/wiki/Laurent_series" rel="nofollow noreferrer">Laurent series</a>, giving <span class="math-container">$\zeta(s)=\frac{1}{s-1}+\sum\limits_{n=0}^{\infty}\frac{(-1)^n}{n!}\gamma_n(s-1)^n$</span>, where <span class="math-container">$\gamma_n$</span> is the <a href="https://en.wikipedia.org/wiki/Stieltjes_constants" rel="nofollow noreferrer">Stieltjes constant</a>.
The Laurent series can be derived by using the integral representation of the Riemann Zeta function, and several methods can be found <a href="https://math.stackexchange.com/questions/123531/how-to-show-that-the-laurent-series-of-the-riemann-zeta-function-has-gamma-as">here</a>.</p>
<hr />
<p>The theorem on convergence of the Dirichlet series <span class="math-container">$\sum_n\frac{(-1)^{n-1}}{n^s}$</span> is proved using the boundedness of the partial sums of <span class="math-container">$(-1)^{n-1}$</span> (they are at most <span class="math-container">$1$</span> in absolute value) together with <a href="https://en.wikipedia.org/wiki/Abel%27s_summation_formula" rel="nofollow noreferrer">Abel's Summation Formula</a> or the <a href="https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula" rel="nofollow noreferrer">Euler–Maclaurin Formula</a> applied to <span class="math-container">$\frac{(-1)^{n-1}}{n^s}$</span>. Alternatively, a proof of the same is given in Robjon's answer <a href="https://math.stackexchange.com/questions/123531/how-to-show-that-the-laurent-series-of-the-riemann-zeta-function-has-gamma-as">here</a> using the generalized Dirichlet test for the sum of the product of two sequences.</p>
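<p>A quick numerical check of the eta-based formula on the line <span class="math-container">$Re(s)=1$</span> (a sketch assuming Python with mpmath, whose <code>altzeta</code> is the Dirichlet Eta function):</p>

```python
# Verify zeta(s) = altzeta(s) / (1 - 2**(1-s)) numerically for Re(s) = 1, s != 1.
import mpmath as mp

mp.mp.dps = 30
for t in (1, 2, 10):
    s = mp.mpc(1, t)                               # on the critical line Re(s) = 1
    via_eta = mp.altzeta(s) / (1 - mp.power(2, 1 - s))
    assert abs(via_eta - mp.zeta(s)) < mp.mpf("1e-25")
```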
|
927,188 | <p>This question has been on my mind for a very long time, and I thought I'd finally ask it here. </p>
<p>When I was 6, my dad pulled me out of school. The classes were too easy; the professors, too dull. My father had been a man of philosophy his entire life (almost got a PhD in it) and regretted not having a more quantitative background. He wanted me to have a different life and taught me math accordingly. When I was 11, I taught myself trig. When I was 12, I started taking calculus at my local university. I continued on this track, and finally got to real analysis and abstract algebra at 15. I loved every math course I ever took and found myself breezing through all that was presented to me (the university was not Princeton after all). However, around this time, I came to the conclusion that math was not for me. I decided to try a different path.</p>
<p>Why, you might ask, did I do this? The answer was simple: I didn't believe I could be a great mathematician. While I thrived taking the courses, I never turned math into a lifestyle. I didn't come home and do complex questions on a white board. I didn't read about Euler in my spare time. I also never felt I had a great intuition into problems. Once you showed me how to solve a problem, I was golden. But start from scratch on my own? It seemed like a different story entirely. To make things worse, my sister, who was at Caltech at the time, would call home with stories of all these incredible undergrads who solved the mathematical mysteries of the universe as a hobby. Whenever I mentioned math as a career, she would always issue a strong warning: you're not like these kids who spend all their time doing math. Think about doing something else. </p>
<p>Over time, I came to agree with this statement. Coincidentally, I got rejected by MIT and Princeton to continue my undergraduate studies there. This crushed me at the time; my dream of studying math at one of the great institutions had ended. Instead, I ended up at Georgia Tech (not terrible by any means, just not what I had envisioned). Being at an engineering school, I thought I'd give aerospace a shot. It had lots of math, right? Not really, or at least not enough for my taste. I went into CS. This was much better, but still didn't feel quite right. At last, as a sophomore, I felt it was time to get back on track: I'm now double majoring in applied math and CS.</p>
<p>My question is, how do I know I'm not making a mistake? There seems to be so many people doing math competitions, research, independent studies, etc, while I just started to take some math courses again. What should I do to test myself and see if I can really make math a career? I apologize for the long and possibly quite subjective post. I'd just really like to hear from math people who know their stuff. Thanks a bunch in advance. </p>
| user4951 | 22,644 | <p>This would be my personal answer. I have medals from the International Physics Olympiad. What do I do now?</p>
<p>I am a businessman.</p>
<p>Yes. I love Math. I love money and women more.</p>
<p>No I will not be a great mathematician or physicist. Why would I?</p>
<p>I still love Math. I read calculus, accounting. But then again, besides programming skills, most truly advanced Math is useless in real life. Eventually it's money that turns me on.</p>
<p>People say do what you love rather than make money. That's true. But keep an eye on money.</p>
<p>As of now, I really don't feel like learning more Math. I want to learn more about biz, having the right mindset, law of attraction, evolutionary psychology, and so many things that are actually useful. Yea, I can derive most college-level and high-school-level formulas. Why invent? Just learning makes me happy. Why discover new things for others?</p>
<p>I am still able to derive all Math formulas, all the way to college level I think, off the top of my head. I taught Math when I was poor and I loved it. But then, I have no time for it. In a sense I still love Math. Being a businessman means I no longer have time to learn it. I wish I could pass on my skills in learning to others but alas, I'll die with them too, I guess.</p>
<p>I once used my math skills to teach high school children Math. They were impressed that I could derive all those formulas. But that's it. A $100 a month job. Needless to say I made far more money in biz. I honestly do not think people like me are meant to get a job anyway. So yea, that's how useful Math is. Interpersonal skills? Applied Math? Programming? Internet marketing? Now that gets me very far.</p>
<p>If anyone wants to verify my IPhO claim privately and knows how to do so, let me know. Really. It's my life skills, learned through experience, rather than Math, that got me where I am. It's what gets me cash. It's what gets me laid. It's what got me a wife (whom I am now divorcing because my math shows that marriage is inefficient).</p>
|
927,188 | <p>This question has been on my mind for a very long time, and I thought I'd finally ask it here. </p>
<p>When I was 6, my dad pulled me out of school. The classes were too easy; the professors, too dull. My father had been a man of philosophy his entire life (almost got a PhD in it) and regretted not having a more quantitative background. He wanted me to have a different life and taught me math accordingly. When I was 11, I taught myself trig. When I was 12, I started taking calculus at my local university. I continued on this track, and finally got to real analysis and abstract algebra at 15. I loved every math course I ever took and found myself breezing through all that was presented to me (the university was not Princeton after all). However, around this time, I came to the conclusion that math was not for me. I decided to try a different path.</p>
<p>Why, you might ask, did I do this? The answer was simple: I didn't believe I could be a great mathematician. While I thrived taking the courses, I never turned math into a lifestyle. I didn't come home and do complex questions on a white board. I didn't read about Euler in my spare time. I also never felt I had a great intuition into problems. Once you showed me how to solve a problem, I was golden. But start from scratch on my own? It seemed like a different story entirely. To make things worse, my sister, who was at Caltech at the time, would call home with stories of all these incredible undergrads who solved the mathematical mysteries of the universe as a hobby. Whenever I mentioned math as a career, she would always issue a strong warning: you're not like these kids who spend all their time doing math. Think about doing something else. </p>
<p>Over time, I came to agree with this statement. Coincidentally, I got rejected by MIT and Princeton to continue my undergraduate studies there. This crushed me at the time; my dream of studying math at one of the great institutions had ended. Instead, I ended up at Georgia Tech (not terrible by any means, just not what I had envisioned). Being at an engineering school, I thought I'd give aerospace a shot. It had lots of math, right? Not really, or at least not enough for my taste. I went into CS. This was much better, but still didn't feel quite right. At last, as a sophomore, I felt it was time to get back on track: I'm now double majoring in applied math and CS.</p>
<p>My question is, how do I know I'm not making a mistake? There seems to be so many people doing math competitions, research, independent studies, etc, while I just started to take some math courses again. What should I do to test myself and see if I can really make math a career? I apologize for the long and possibly quite subjective post. I'd just really like to hear from math people who know their stuff. Thanks a bunch in advance. </p>
| Masacroso | 173,262 | <p>Do whatever you want to do... you don't need to be "great" or compete with anyone... LOL, it is obvious you are from the USA. Just be as happy as you can, doing what you want to do.</p>
<p>Unfortunately nobody can help you very much with this kind of personal question. You can't know whether your decisions will turn out fine or not; sorry, that's life.</p>
|
1,216,392 | <blockquote>
<p>$P,Q$ are polynomials with real coefficients and for every real $x$ satisfy $P(P(P(x)))=Q(Q(Q(x)))$. Prove that $P=Q$.</p>
</blockquote>
<p>I see only that these polynomials have the same degree.</p>
| Hagen von Eitzen | 39,174 | <p>Since one easily messes things up when starting from highest coefficients downward (as I did in my first revision of this answer), we shall consider the function $f(z)=\frac1{P(z^{-1})}$, which allows us nicer access to the high coefficients of $P$.
As a matter of notation, write $$f^{\circ k}:=\underbrace{f\circ\ldots\circ f}_{k}$$
for function iteration.</p>
<p><strong>Lemma.</strong> Let $n,m,k\in\mathbb N$, $m> k\ge 2$. Let $f$ be holomorphic around $0$ with $f(0)\ne0$. Then there exist $M$ with $k^n<M<k^n(m-k+1)$ and $\alpha\in\mathbb C$ with $\alpha\ne 0$ such that for any $h$ holomorphic around $0$, we have $$(z^kf+z^mh)^{\circ n}=(z^kf)^{\circ n}+\alpha h(0)z^M+O(z^{M+1}).$$
<em>Proof.</em>
First observe (e.g., by induction on $n$) that $(z^kf)^{\circ n}$ has a root of order exactly $k^n$ at $z=0$, and $((z^kf)^{\circ n})^r$ has a root of order exactly $k^nr$ at $z=0$, say $$((z^kf)^{\circ n})^r=c_{n,r}z^{k^nr}+O(z^{k^nr+1})$$ with $c_{n,r}\ne0$ depending only on $f,k,n,r$ (but we consider $f,k$ fixed).</p>
<p>Now we show the claim of the lemma by induction on $n$. The claim is clear for $n=1$ with $\alpha=1$ and $M=m$ (which is $<k^1(m-k+1)$!).
Assume that the claim holds for $n$.
For any $r\ge 1$, we have
$$\tag1\begin{align}\left( (z^kf+z^mh)^{\circ n}\right)^r&=\left((z^kf)^{\circ n}+\alpha h(0)z^M+O(z^{M+1})\right)^r\\
&=((z^kf)^{\circ n})^r+r\cdot ((z^kf)^{\circ n})^{r-1}\alpha h(0)z^M+O(z^{k^n(r-1)+M+1}).\end{align}$$
Let $f(z)=a_0+a_1z+a_2z^2+\ldots$.
Then summing $(1)$ using these coefficients we get
$$\begin{align}(z^kf+z^mh)^{\circ (n+1)} &=(z^kf+z^mh)\circ ((z^kf+z^mh)^{\circ n}) \\&=\sum_r a_r((z^kf)^{\circ n})^{r+k}+\sum_r a_r(r+k)((z^kf)^{\circ n})^{r+k-1}\alpha h(0)z^M\\
&\quad+h(0)((z^kf)^{\circ n})^m+h(0)m((z^kf)^{\circ n})^{m-1}\alpha h(0)z^M\\&\quad+O(z^{k^n(\min\{m,k\}-1)+M+1})\end{align}$$
Now the first sum is just $(z^kf)^{\circ(n+1)}$.
The second sum is $$a_0kc_{n,k-1}\alpha h(0)z^{k^n(k-1)+M}+O(z^{k^n(k-1)+M+1}).$$
The third summand is $$h(0)z^{k^nm}+O(z^{k^nm+1})=O(z^{k^n(k-1)+M+1}) $$
because $M<k^n(m-k+1) $.
The fourth summand is
$$h(0)^2m\alpha z^{k^n(m-1)+M}+O(z^{k^n(m-1)+M+1}) =O(z^{k^n(k-1)+M+1})$$ because $m>k$.
Thus with $M':=k^n(k-1)+M$ (which is $>k^n(k-1)+k^n=k^{n+1}$ and $<k^n(k-1)+k^n(m-k+1)=k^nm\le k^{n+1}(m-k+1)$) we have
$$(z^kf+z^mh)^{\circ (n+1)}=(z^kf)^{\circ(n+1)}+\alpha'h(0)z^{M'}+O(z^{M'+1}) $$
where
$\alpha'=a_0kc_{n,k-1}\alpha\ne 0$.
This shows that the claim also holds for $n+1$. $_\square$</p>
<p><strong>Corollary.</strong> Let $n,k\in\mathbb N$, $k\ge2$. Let $f,g$ be holomorphic around $0$, each with a root of order exactly $k$ at $z=0$ and with the same coefficient of $z^k$. Then $f^{\circ n}=g^{\circ n}$ implies $f=g$.</p>
<p><em>Proof.</em> If $f\ne g$, we can write $f(z)=z^kf_0(z)$ and $g(z)=z^kf_0(z)+z^mh(z)$ with $h(0)\ne0$ and $m>k\ge 2$ (here $m>k$ because the coefficients of $z^k$ agree), contradicting the lemma. $_\square$</p>
<p><strong>Proposition.</strong> Let $P,Q$ be polynomials such that $P(P(P(x)))=Q(Q(Q(x)))$ for all $x\in\mathbb R$. Then $P=Q$.</p>
<p><em>Proof.</em>
By the condition, the polynomials $P^{\circ 3}$ and $Q^{\circ 3}$ are the same. If $P(x)=a_nx^n+\ldots$, then $P^{\circ 3}(x) = a_n^{n^2+n+1}x^{n^3}+\ldots$.
We conclude that we can read $\deg P$ from $P^{\circ 3}$, namely $\deg P=\sqrt[3]{\deg P^{\circ 3}}$, and we also obtain $a_n$ because $n^2+n+1$ is odd and odd powers are bijective maps $\mathbb R\to\mathbb R$.
Hence $P,Q$ have the same degree and the same leading coefficient.</p>
<p>In the case $n=0$, we are already done.</p>
<p>In the case $n=1$, say $P(x)=ax+b$, $Q(x)=ax+c$, we have $P(P(P(x)))=a^3x+(a^2+a+1)b$ and $Q(Q(Q(x)))=a^3x+(a^2+a+1)c$. Since for $a\in\mathbb R$ we have $a^2+a+1=(a+\frac12)^2+\frac34>0$, we conclude $b=c$, i.e., $P=Q$ also in this case.</p>
<p>In the case $n\ge 2$, we can consider $f(z)=\frac1{P(z^{-1})}$ and $g(z)=\frac1{Q(z^{-1})}$, which match the conditions of the corollary as both are $\frac1{a_n}z^n+O(z^{n+1})$. $_\square$ </p>
|
2,227,280 | <p>For every positive number there exists a corresponding negative number. Would that imply that the number of positive numbers is "equal" to the number of negative numbers? (Are they incomparable because they both approach infinity?)</p>
| P Vanchinathan | 28,915 | <p>If there is SOME function that gives a bijection between two sets, then these two sets are considered equally big (even if there is some other function between those two sets that is not onto or not 1-1). For example, the set of multiples of 10 among the positive integers, being a proper subset of all positive integers, seems to be "smaller". But we have the function $f(x)=10x$ providing a bijection, and so they are considered to be of the same size (cardinality).
In your case $g(x)=-x$ provides the bijection.</p>
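<p>The two bijections mentioned here can be spot-checked on a finite sample (a sketch):</p>

```python
# f pairs each positive integer with a multiple of 10; g pairs positives with negatives.
f = lambda n: 10 * n        # positive integers -> positive multiples of 10
f_inv = lambda m: m // 10   # its inverse on multiples of 10
g = lambda x: -x            # self-inverse pairing of positives and negatives

for n in range(1, 1001):
    assert f(n) % 10 == 0 and f_inv(f(n)) == n
    assert g(n) < 0 and g(g(n)) == n
```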
|
69,378 | <p>Updated Question: How to show that in TH we never reach a state where there are no paths to the solution? (Without reversing moves; if reversing is allowed, this becomes trivial.)</p>
<p>Edit : Thanks to <strong>Stéphane Gimenez</strong> for pointing out the distinction between “A deadlock would never occur” and “The problem always has a solution”, made it possible for me to state the question in a form that was the original intention.</p>
<p>Stéphane Gimenez:</p>
<blockquote>
<p>Defining deadlock as a reachable position where no more moves are
available (or alternatively as a position from which the goal cannot
be reached anymore), it's obvious that deadlocks cannot occur in the
TH game: every step along the reverse path (of a path containing valid
moves) is a valid move.</p>
</blockquote>
<p>Original Question :</p>
<blockquote>
<p>In the Towers of Hanoi problem there is an implicit assumption that one
can keep moving disks. This is trivially true for 1 or 2 disks, but, as
obvious as it looks, can one keep going with arbitrarily many disks? In other
words, does TH with 3 sticks and n disks always have a solution?</p>
<p>The N queens problem is easily shown to have no solution for n>m,
where m is the size of the board (using Pigeonhole), but it also does not
have a solution for n=m=2. But how does one show that if for some k,
n=m=k has a solution, then there is also a solution for k+1?</p>
</blockquote>
| Henry | 6,460 | <p>There cannot be deadlock in the Towers of Hanoi, as you almost always have three moves: you can move the smallest disk to one of the other two pegs, and unless all the disks are on the same peg you can always move another disk.</p>
<p>There are many ways of proving that any Towers of Hanoi position is solvable. One I like is to show the correspondence between the positions and the points of a Sierpinski triangle such as <a href="http://www.cut-the-knot.org/triangle/Hanoi.shtml" rel="nofollow">1</a>, <a href="http://www.math.ubc.ca/~cass/courses/m308-02b/projects/touhey/index.html" rel="nofollow">2</a> or <a href="http://doctroid.wordpress.com/pages/recreational-mathematics/game-graphs-for-towers-of-hanoi-on-3-or-more-pegs/" rel="nofollow">3</a>. Since a Sierpinski triangle is connected, it is possible to move from any given legal position to any other, and so any Towers of Hanoi position is solvable. </p>
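<p>The connectedness claim can be checked exhaustively for small <span class="math-container">$n$</span> (a sketch, my own illustration): model a position as the peg of each disk and BFS the move graph; every one of the <span class="math-container">$3^n$</span> legal positions is reachable from every other, so any position is solvable.</p>

```python
# state[d] = peg of disk d (0 = smallest disk); every assignment is a legal
# position, since disks on a peg necessarily stack by size. BFS the move graph
# and check that all 3^n positions are mutually reachable.
from collections import deque

def neighbors(state):
    out = []
    for d in range(len(state)):
        if any(state[e] == state[d] for e in range(d)):
            continue  # a smaller disk sits on top of disk d
        for p in range(3):
            if p != state[d] and not any(state[e] == p for e in range(d)):
                out.append(state[:d] + (p,) + state[d + 1:])
    return out

def is_connected(n):
    start = (0,) * n
    seen, queue = {start}, deque([start])
    while queue:
        for nxt in neighbors(queue.popleft()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return len(seen) == 3 ** n  # reached every legal position

assert all(is_connected(n) for n in range(1, 6))
```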
|
2,409,268 | <p><strong>Confirm that the identity $1+z+...+z^n=(1-z^{n+1})/(1-z)$ holds for every non-negative integer $n$ and every complex number $z$, save for $z=1$</strong></p>
<p>I have tried to prove this by induction, but I am not sure that I am doing things right. For $n=0$ both sides equal $1$, and for $n = 1$ we have $(1-z^2)/(1-z) = (1-z)(1+z)/(1-z) = 1 + z$, so the identity holds for $n = 1$. Suppose now that it holds for $n$; let us see that it holds for $n + 1$: $1+z+...+z^n+z^{n+1}=(1-z^{n+1})/(1-z)+z^{n+1}=[(1-z^{n+1})+(1-z)(z^{n+1})]/(1-z)=(1-z^{n+2})/(1-z)$, so the identity is true for every non-negative integer $n$. Is this OK?</p>
| hamam_Abdallah | 369,188 | <p>By induction,</p>
<p>Assume that
$$1+z+z^2+...+z^n=\frac {1-z^{n+1}}{1-z} $$</p>
<p>then
$$1+z+...+z^n+z^{n+1}=$$
$$\frac {1-z^{n+1}}{1-z}+z^{n+1}= $$
$$\frac {1-z^{n+1}+z^{n+1}-z^{n+2}}{1-z}=\frac {1-z^{n+2}}{1-z} $$</p>
<p>Done.</p>
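<p>A quick numerical spot-check of the identity for a few complex <span class="math-container">$z\neq 1$</span> (a sketch):</p>

```python
# Compare the partial geometric sum with the closed form for complex z != 1.
def lhs(z, n):
    return sum(z**k for k in range(n + 1))

def rhs(z, n):
    return (1 - z**(n + 1)) / (1 - z)

for z in (2 + 3j, -0.5 + 0.25j, 1j):
    for n in range(9):
        assert abs(lhs(z, n) - rhs(z, n)) < 1e-8
```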
|
2,621 | <p>Let $A$ be a commutative Banach algebra with unit.
It is well known that if the Gelfand transform $\hat{x}$ of $x\in A$ is non-zero, then $x$ is invertible in $A$ (the so called Wiener Lemma in the case when $A$ is the Banach algebra of absolutely convergent Fourier series).</p>
<p>As a converse of the above, let $B$ be a Banach space contained in $A$ and suppose $B$ is closed under inversion - i.e.: If $x\in B$ and $x^{-1}\in A$ then $x^{-1}\in B$.</p>
<p>(1) Prove that $B$ is a Banach algebra.</p>
<p>(2) Must $A$ and $B$ have the same norm? If not are the norms similar?</p>
<p>(3) Do $A$ and $B$ have the same maximal ideal space?</p>
| John Channing | 1,032 | <p>Monte Carlo methods are used extensively in <a href="http://en.wikipedia.org/wiki/Monte_Carlo_methods_in_finance" rel="nofollow">financial mathematics</a> for the pricing of complex or "exotic" financial derivatives. With equity options, for example, the value of the stocks in the option contract is simulated using a stochastic process and parameters that can be observed or derived from other financial instruments. This process is computationally very intensive resulting in many investment banks having server farms of tens of thousands of machines dedicated to the pricing and risk management of derivatives.</p>
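<p>As an illustration of the method described (a sketch with illustrative parameters, not tied to any particular implementation): price a European call by simulating geometric Brownian motion and compare with the Black-Scholes closed form.</p>

```python
# Monte Carlo pricing of a European call under risk-neutral GBM dynamics,
# checked against the Black-Scholes formula. Parameters are illustrative.
import math
import random

def bs_call(S0, K, r, sigma, T):
    d1 = (math.log(S0 / K) + (r + sigma**2 / 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: (1 + math.erf(x / math.sqrt(2))) / 2  # standard normal CDF
    return S0 * N(d1) - K * math.exp(-r * T) * N(d2)

def mc_call(S0, K, r, sigma, T, paths=100_000, seed=1):
    rng = random.Random(seed)
    payoff = 0.0
    for _ in range(paths):
        # terminal stock price under geometric Brownian motion
        ST = S0 * math.exp((r - sigma**2 / 2) * T + sigma * math.sqrt(T) * rng.gauss(0, 1))
        payoff += max(ST - K, 0.0)
    return math.exp(-r * T) * payoff / paths

mc, exact = mc_call(100, 100, 0.05, 0.2, 1), bs_call(100, 100, 0.05, 0.2, 1)
assert abs(mc - exact) < 0.5  # MC error shrinks like 1/sqrt(paths)
```

<p>Exotic derivatives replace the simple terminal payoff here with path-dependent ones, which is why the computation becomes so intensive in practice.</p>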
|
1,344,284 | <p>If $Z=x+iy$ then determine the locus of the equation $\left | 2Z-1 \right | = \left | Z-2 \right |$. I can tell that it is a circle equation, namely $x^2 + y^2 = 1$. There are a lot of equations in my book, such as $\left | Z-8 \right | +\left | Z+8 \right |=20$, $\left | Z-2 \right | = \left | Z-3i \right |$, $\left | 2Z+3 \right |= \left | Z+6 \right |$. Every time I have to do a long calculation.<br>
Is there any short way to find out whether the given equation is a circle, ellipse, parabola, hyperbola or straight line? This is needed for my MCQ exam.</p>
| Doppelschwert | 251,434 | <p>Part 1 is covered by
<a href="https://math.stackexchange.com/questions/1344573/a-theorem-of-symmetric-positive-definite-matrix">A theorem of symmetric positive definite matrix.</a></p>
<p>Part 2 should follow if you consider $A+\epsilon I$ and let $\epsilon$ go toward 0.</p>
|
776,627 | <p>How many integer factors of 0 are there, and what are they?</p>
<p>I'm just curious, but what counts as a factor of 0? My guess is that there are an infinite number of factors of 0, but is there a proof?</p>
| user138710 | 138,710 | <p>Take any non-zero $k$; then $0$ can be written as $0=k\cdot 0$, so $k$ is a factor of $0$.</p>
|
776,627 | <p>How many integer factors of 0 are there, and what are they?</p>
<p>I'm just curious, but what counts as a factor of 0? My guess is that there are an infinite number of factors of 0, but is there a proof?</p>
| David | 119,775 | <p>The statement</p>
<blockquote>
<p>$a$ is a factor of $b$</p>
</blockquote>
<p>means</p>
<blockquote>
<p>$b=ka$ for some integer $k$.</p>
</blockquote>
<p>Take $b=0$: then no matter what $a$ is, the equation $0=ka$ is always true for some value of $k$, namely, $k=0$. So every integer is a factor of $0$.
<hr>
This is an excellent example of the importance of using definitions precisely. Some people think that a factor of $b$ means</p>
<blockquote>
<p>$b$ divided by any integer "which goes exactly",</p>
</blockquote>
<p>for example, the factors of $10$ are $10/1$, $10/(-2)$, $10/5$ etc. This would give the factors of $0$ as being $0/1$, $0/2$ etc, in other words, $0$ only, which is not correct.</p>
|
776,627 | <p>How many integer factors of 0 are there, and what are they?</p>
<p>I'm just curious, but what counts as a factor of 0? My guess is that there are an infinite number of factors of 0, but is there a proof?</p>
| Marc van Leeuwen | 18,880 | <p>This question is about terminology, and one can only give an answer if you provide the precise definition of "factor" you are using. Curiously the term does not appear to often get a formal definition, contrary to "divisor" which the term more conventionally associated to the divisibility relation. One may then for convenience consider those two terms to be synonyms. However, it should be noted that most of the time "factor" is used in connection with <a href="https://en.wikipedia.org/wiki/Integer_factorization" rel="nofollow noreferrer">factorisation</a>. For instance it is more common to speak about prime factors than about prime divisors; the two basically mean the same thing, but a given prime factor might be considered to occur multiple times for a same number, while I don't think one would say that for a prime divisor (which is just a divisor of the number that happens to be prime). In this context it is relevant to note that <em>the number $0$ does not have a prime factorisation</em> and is explicitly excluded from the <a href="https://en.wikipedia.org/wiki/Fundamental_theorem_of_arithmetic" rel="nofollow noreferrer">prime factorisation theorem</a> (usually in one sweep with excluding negative numbers and sometimes also the number$~1$; indeed those numbers pose some difficulties too, but these are less fundamental than the problem with$~0$). Therefore talking about factors of $0$ may be considered not very appropriate. Certainly asking how many factors$~17$ the number$~0$ has does not make any sense.</p>
<p>Apart from that, the following remarks can be made</p>
<ul>
<li><p>The phrases "$a$ divides $b$" (written $a\mid b$), "$a$ is a divisor of $b$", and "$b$ is a multiple of $a$" express <em>grosso modo</em> the same relation, but that does not mean they can always be used interchangeably. Notably divisors are often implicitly assumed to be positive (for instance when talking about the <a href="https://oeis.org/A000005" rel="nofollow noreferrer">number of divisors</a> or the <a href="https://oeis.org/A000203" rel="nofollow noreferrer">sum of the divisors</a> of a (positive) number, though this is not required for $a$ in $a\mid b$. Also many authors will not say that $0$ divides $0$, presumably because one cannot divide $0$ by $0$ (I think most of them prefer to leave the divisibility of $0$ by $0$ undefined rather than false). I've cited <a href="https://math.stackexchange.com/a/667109/18880">in this answer</a> an example of authors that make their position in this respect very clear.</p></li>
<li><p>I think most people would agree at least that all nonzero numbers divide$~0$, which provides an answer of sorts to your question.</p></li>
<li><p>However in ring theory the term "<a href="https://en.wikipedia.org/wiki/Zero_divisor" rel="nofollow noreferrer">zero divisor</a>" means something else, and it does not apply to any integer.</p></li>
</ul>
|
27,455 | <p>Let $(f_n)_{n \geq 1}$ be a disjointly supported sequence of functions in $L^\infty(0,1)$. Is the space $\overline{\mathrm{span}(f_n)}$ (the closure of the linear span) complemented in $L^\infty(0,1)$? By complemented we mean that $L^\infty(0,1) = \overline{\mathrm{span}(f_n)} \oplus X$, where $X$ is a closed subspace of $L^\infty$ and $\oplus$ is the direct sum. </p>
<p>Equivalently, we can ask if there exists a bounded projection $P\colon L^\infty(0,1) \to \overline{\mathrm{span}(f_n)}$?</p>
<p>It is quite easy to prove this in $C[0,1]$. Indeed, let $(f_n)$ be disjointly supported sequence in $C[0,1]$ and fix $x_n \in \mathrm{supp}(f_n)$, $n \in \mathbb{N}$. Then the space $C[0,1]$ can be written as
$$
C[0,1] = \overline{\mathrm{span}(f_n)} \oplus \{f \in C[0,1]\colon f(x_n) = 0, n = 1,2,\dots \}.
$$</p>
| Plop | 2,660 | <p>$X$ can be taken to be $\left\{ f \in L^{\infty} | \forall n,\ \int_{[0,1]} f f_n =0 \right\}$</p>
|
2,894,126 | <blockquote>
<p>$$\int \sin^{-1}\sqrt{ \frac{x}{a+x}} dx$$</p>
</blockquote>
<p>We can substitute it as $x=a\tan^2 (\theta)$ . Then:</p>
<p>$$2a\int \theta \tan (\theta)\sec^2 (\theta) d\theta$$</p>
<p>Using integration by parts will be enough here. But I wanted to know if this particular problem can be solved by any other method. Because the above method is quite lengthy. </p>
| Dr. Sonnhard Graubner | 175,066 | <p>With integration by parts we get</p>
<p>$$x\arcsin\left(\sqrt{\frac{x}{a+x}}\right)-\int\frac{1}{2}\sqrt{\frac{ax}{(a+x)^2}}\,dx$$
Now, substituting</p>
<p>$$u=\sqrt{x},\qquad du=\frac{1}{2\sqrt{x}}\,dx,$$ we get</p>
<p>$$x\arcsin\left(\sqrt{\frac{x}{a+x}}\right)-\int\frac{\sqrt{a}\,u^2}{a+u^2}\,du$$
Write the last integrand in the form</p>
<p>$$\sqrt{a}\int \left(1-\frac{a}{u^2+a}\right)du$$
Can you proceed?</p>
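<p>To see that the integration-by-parts step above is consistent, here is a quick numerical check (an illustrative sketch only; the sample values $a=2$, $x=3$ are arbitrary) comparing a finite-difference derivative of $x\arcsin\sqrt{x/(a+x)}$ against the two terms it should produce:</p>

```python
import math

def lhs_derivative(x, a, h=1e-6):
    # Numerical derivative of x*arcsin(sqrt(x/(a+x))) via central differences
    f = lambda t: t * math.asin(math.sqrt(t / (a + t)))
    return (f(x + h) - f(x - h)) / (2 * h)

def rhs(x, a):
    # arcsin(sqrt(x/(a+x))) plus the term (1/2)*sqrt(ax/(a+x)^2)
    # that integration by parts moves under the integral sign
    return math.asin(math.sqrt(x / (a + x))) + 0.5 * math.sqrt(a * x) / (a + x)

a, x = 2.0, 3.0
print(abs(lhs_derivative(x, a) - rhs(x, a)))  # close to 0, confirming the identity
```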
|
764,632 | <p>The question is this :</p>
<p>$$\lim_{x\to-\infty} {\sqrt{x^2+x}+\cos x\over x+\sin x}$$</p>
<p>The solution is $-1$ and this seems to be only obtained from the change variable strategy, such as $t=-x$.</p>
<p>However, I have no idea why this isn't just solved by simply eliminating $x$ in numerator and denominator, which generates the value $1$.</p>
<p>It seems that this is related with $x\to-\infty$, but I have no specific idea.</p>
<p>Can anyone help me? </p>
| evil999man | 102,285 | <p>Note that the trigonometric terms are negligible as $x \to-\infty $. Hence,</p>
<p>$$\lim_{x\to-\infty}\frac{\sqrt{x^2+x}}{x}$$</p>
<p>You cannot simply pull the $x$ into the square root here, because $x$ is negative. So you must write $x=-(-x)$:</p>
<p>$$\lim_{x\to-\infty}\frac{\sqrt{x^2+x}}{-(-x)}$$</p>
<p>Now, $-x$ is positive and you can take it inside the root where it will become squared. </p>
<p>$$\lim_{x\to-\infty}\frac{\sqrt{1+1/x}}{-1}$$</p>
<p>Clearly, the answer is $-1$.</p>
<p>Substituting $t=-x$ just makes it <strong>look</strong> nicer: you can write $-x$ as $t$ everywhere in the substituted answer.</p>
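<p>As a quick numerical sanity check (an illustrative sketch, not part of the original argument), evaluating the expression at large negative $x$ shows it approaching $-1$, not $+1$:</p>

```python
import math

def f(x):
    # The original expression (sqrt(x^2 + x) + cos x) / (x + sin x)
    return (math.sqrt(x * x + x) + math.cos(x)) / (x + math.sin(x))

# As x -> -infinity the values approach -1
for x in (-1e3, -1e6, -1e9):
    print(x, f(x))
```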
|
1,873,596 | <p>Near the end of <a href="http://www.maa.org/sites/default/files/pdf/upload_library/2/Rice-2013.pdf" rel="nofollow noreferrer">this MAA piece about elliptic curves</a>, the author explains why the complex domain of the cosine function is a sphere: since it's periodic, its domain can be taken as a cylinder, wrapping up the real axis. And because cosine of $\theta\pm i\infty$ is $\infty$, the two ends of the cylinder can be identified with a single point $\infty$. Ok, great, but this sounds to me like a pinched torus. Can I have a clearer explanation why this is a sphere?</p>
<blockquote>
<p><a href="https://i.stack.imgur.com/LwmN0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LwmN0.png" alt="enter image description here"></a></p>
</blockquote>
| Richard Anthony Baum | 356,531 | <p>Trigonometric functions are circular functions. The complex cosine is
$$\cos z = \frac{e^{iz} + e^{-iz}}{2},$$
where $z = x + iy$, with $x$ and $y$ real numbers and $z$ a complex number.</p>
<p>Suppose we restrict $z$ so that $|z| = |x + iy| = 1$. Hence, the image of $z$ is the unit circle in the complex plane. What if we were to look at the map $X = (x, y)$? Let $X = (0, 0)$ correspond to an origin. Then, apparently, the image is $\cos(z) = \cos(0) = 1$. The unit sphere has radius $1$. Consider any rotation of the plane we choose to call the complex plane, passing through any point we have chosen to call the origin, namely $X = (0, 0)$, with the property $|z| = 1$. Again, this is the unit circle on that particular complex plane. When $|z| = r$ for any non-negative $r$, we have an entire complex plane which may be mapped onto the unit sphere. So much for the prelims.</p>
<p>Perhaps making more sense, write $z = e^x e^{iy}$, which has modulus $r = e^x$, and note that $\cos(z)$ is entire as $(x,y)$ range over all reals. Recalling that the pre-image of $\cos(z)$ was $z = x + iy$, the domain of $\cos(z)$ may be mapped onto the unit sphere as follows: starting at $z$, draw a secant to the North Pole of the unit sphere centered at $(0, 0, 1)$, call $P$ the point where the secant first pierces the unit sphere starting from $z$ in the complex plane, and repeat this for every $z$ in the complex plane. The result is that the points $P$ trace out the unit sphere, so that by reverse stereographic projection we have the unit sphere of all such points $P$ as our domain for $\cos(z)$.</p>
<p>For finite $r = e^x$, the North Pole gets omitted from this mapping. Now let $x$ approach infinity, so $r = e^x$ approaches infinity. Identify the North Pole of the unit sphere with the point at infinity, and I believe the stereographic mapping for $z = e^x e^{iy}$ becomes onto for all real $x$ and $y$. I hope this helps.</p>
|
652,025 | <p>Assume $A$, $B$, and $C$ are three independent predicates. Maybe $A$ stands for "my age is 20," and $B$ "stands for tomorrow is a good day."</p>
<p>So is it true that $(A \lor B) \land C \iff (A \lor C) \land (B \lor C)$?</p>
| hmmmm | 18,301 | <p>$ \begin{array}{|c|c|c|c|} A & B & C & (A \bigvee B) \bigwedge C \\ T & T & T & T\\
T & T & F & F \\
T & F & T &T \\ T & F & F & F \\ F & T & T & T \\ F & T & F & F \\ F & F & T & F \\ F & F & F & F \end{array}$</p>
<p>$ \begin{array}{|c|c|c|c|} A & B & C & (A \bigvee C) \bigwedge (B\bigvee C) \\ T & T & T & T\\
T & T & F & T \\
T & F & T &T \\ T & F & F & F \\ F & T & T & T \\ F & T & F & F \\ F & F & T & T \\ F & F & F & F \end{array}$</p>
<p>Since the truth tables differ (for example in the row $A=T$, $B=T$, $C=F$), the two formulas are not equivalent. It would have been easier just to try a few values until one didn't match, but oh well!</p>
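<p>The comparison can also be checked mechanically. Here is a short Python brute force (an illustrative sketch) over all eight assignments; it also confirms the distributive law that <em>does</em> hold, $(A\land B)\lor C \iff (A\lor C)\land(B\lor C)$:</p>

```python
from itertools import product

lhs = lambda a, b, c: (a or b) and c
rhs = lambda a, b, c: (a or c) and (b or c)

# Find all assignments where the two formulas disagree
diffs = [(a, b, c) for a, b, c in product([True, False], repeat=3)
         if lhs(a, b, c) != rhs(a, b, c)]
print(diffs)

# The distributive law that DOES hold: (A and B) or C == (A or C) and (B or C)
assert all(((a and b) or c) == ((a or c) and (b or c))
           for a, b, c in product([True, False], repeat=3))
```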
|
1,516,925 | <p>Let $x,y,z$ be 3 non-zero integers defined as followed: </p>
<p>$$(x+y)(x^2-xy+y^2)=z^3$$</p>
<p>Let assume that $(x+y)$ and $(x^2-xy+y^2)$ are coprime
and set $x+y=r^3$ and $x^2-xy+y^2=s^3$</p>
<p>Can one write that $z=rs$ where $r,s$ are 2 integers?
I am not seeing why not but I want to be sure.</p>
| PM 2Ring | 207,316 | <p>Yes.</p>
<p>If $z^3 = r^3s^3$ we can take cube roots of both sides to get $z = rs$. It's valid to do this because $n$ is the only real cube root of $n^3$; the cube map is a bijection on the reals. If we go to complex numbers we have to be more careful, because every nonzero number has three cube roots.</p>
|
4,066,942 | <p>This is a problem from Kenneth A Ross 2nd Edition Elementary Analysis:</p>
<p>Show that the infinite series,<span class="math-container">$$\sum_{n=1}^{\infty} \frac{(-1)^n}{n+x^2}$$</span> converges uniformly for all <span class="math-container">$x$</span>, and by termwise differentiation, compute <span class="math-container">$f '(x)$</span>.</p>
<p>My work so far involves the Weierstrass M Test - essentially <span class="math-container">$\lvert\frac{(-1)^n}{n+x^2}\rvert \leq \lvert\frac{1}{k^2}\rvert \forall x \in \mathbb{R}$</span> and since <span class="math-container">$\sum_{n=1}^{\infty} \frac{1}{k^2} \lt \infty$</span>, the original series must be uniformly convergent for all x.</p>
<p>I'm not exactly sure if I did that right, or if the alternating negative one changes the requirements for the M-test.</p>
<p>Edit - based on comments, the alternating series test seems useful here. Because of the <span class="math-container">$(-1)^n$</span> component, and for each term <span class="math-container">$a_n$</span> in the series <span class="math-container">$a_{n+1} \leq a_n$</span>, that means the series converges by the alternating series test. But does that show absolute convergence, and is that enough, or does it need more?</p>
| Thomas | 89,516 | <p>The same solution, probably written differently and in a more intricate way, but it came to me like this before reading MartinR's solution:</p>
<p>Suppose wlog <span class="math-container">$x<y$</span> and per absurdum that <span class="math-container">$|f(y)-f(x)|>(b-a)/2$</span>. In such a case we would have:</p>
<p><span class="math-container">$|f(y)-f(x)|>(b-a)/2 \rightarrow y-x>(b-a)/2 \rightarrow (x-a)+(b-y)<(b-a)/2$</span> [1].</p>
<p>Now this is an identity:</p>
<p><span class="math-container">$f(y)-f(x)=-[(f(x)-f(a))+(f(b)-f(y))]$</span> [2]</p>
<p>Looking at the right member of [2] and applying MVT + [1]:</p>
<p><span class="math-container">$|[(f(x)-f(a))+(f(b)-f(y))]|<|(f(x)-f(a))|+|(f(b)-f(y))|<(x-a)+(b-y)<(b-a)/2$</span></p>
<p>But instead from the absurd hypothesis the left member of [2]:</p>
<p><span class="math-container">$|f(y)-f(x)|>(b-a)/2$</span></p>
<p>Which is than a contradiction.</p>
<p>It is the same solution as MartinR's, only with the argument presented in a slightly different way.</p>
|
4,050,831 | <p>Suppose 40% of all seniors have a computer at home and a sample of 64 is taken. What is the probability that more than 30 of those in the sample have a computer at home?"</p>
<p>My attempt:</p>
<p>n=64</p>
<p>0.4x64=25.6</p>
<p>p=?</p>
<p>x=??</p>
<p>A>30=??</p>
<p>Don't have an idea of what equation would be appropriate to determine proportion so as to use s=root p(1−p)/n</p>
| user2661923 | 464,411 | <p>Given an event <span class="math-container">$E$</span> with probability <span class="math-container">$p$</span> of occurring, where <span class="math-container">$0 < p < 1$</span> <br>
and <span class="math-container">$q = (1-p)$</span>, and given <span class="math-container">$n$</span> independent trials of the event, <br>
the probability
that the event <span class="math-container">$E$</span> occurs exactly <span class="math-container">$k$</span> times out of <span class="math-container">$n$</span> is <br>
<span class="math-container">$\binom{n}{k}p^k q^{(n-k)}.$</span></p>
<p>This means that the probability of <strong>at least</strong> <span class="math-container">$r$</span> successes in <span class="math-container">$n$</span> trials is</p>
<p><span class="math-container">$$\sum_{k=r}^n \binom{n}{k}p^k q^{(n-k)}.\tag1$$</span></p>
<p>All that you have to do is plug in the appropriate values for the variables in
equation (1) above:</p>
<p><span class="math-container">$$r = 31, n = 64, p = 0.4.$$</span></p>
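<p>If it helps, here is a minimal Python sketch of plugging those values into equation (1); the name <code>tail_prob</code> is just illustrative:</p>

```python
from math import comb

def tail_prob(n, p, r):
    # P(X >= r) for X ~ Binomial(n, p), summing the exact pmf
    q = 1 - p
    return sum(comb(n, k) * p**k * q**(n - k) for k in range(r, n + 1))

# "more than 30" successes means at least r = 31 of n = 64
prob = tail_prob(64, 0.4, 31)
print(prob)  # roughly 0.1
```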
|
101,191 | <p>A few years ago I <a href="http://math.sfsu.edu/federico/Articles/arrangem.pdf">computed</a> the Tutte polynomials of the matroids given by the classical Coxeter groups, and found that their generating functions are all simple variations of the series $\sum_n \frac{x^n y^{n^2}}{n!}$.
I've wondered if there is a more geometric/algebraic explanation of this. Is this series known? Are there other natural occurrences of it that might be relevant? </p>
| Gjergji Zaimi | 2,384 | <p>Here is an attempt at a "soft" answer, inspired by a paper that appeared after this question was asked here: <a href="http://arxiv.org/abs/1305.6621" rel="nofollow">The arithmetic Tutte polynomials of the classical root systems</a> by Ardila, Castillo, and Henley. In particular this will not contain anything the OP doesn't already know :-)</p>
<p>There are two methods used for the computation of these (classic or arithmetic) Tutte polynomial generating functions. The finite field method by Ardila mentioned in the question has wide applicability, but in the paper linked above there is a different way to compute this generating function, which I believe give some insight on the presence of $F(x,y)=\sum_{n\geq 0} x^ny^{\binom{n}{2}}/n!$.</p>
<p>Using this generating function we can count one of the most fundamental combinatorial objects: graphs according to the number of connected components, number of edges, and number of vertices (the particular function is $F(x,1+y)^z$). On the other hand, through a series of operations in the sense of <a href="http://en.wikipedia.org/wiki/Combinatorial_species" rel="nofollow">combinatorial species</a>, one can obtain the generating function for some combinatorial objects called <em>signed graphs</em>, introduced by Zaslavsky, which serve as a combinatorial model encoding the relevant statistics of the classical root systems. Therefore one should expect the generating functions in question, to be computable from combinations of $F$ together with operations like multiplication, exponentiation, and composition.</p>
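<p>As a small sanity check of the first claim, here is a hedged Python sketch (mine, not from the paper): extracting connected-graph counts from $F(x,1)=\sum_n 2^{\binom n2}x^n/n!$ via the standard exponential-formula recurrence, one instance of the species operations mentioned above.</p>

```python
from math import comb

# a[n] = number of labeled graphs on n vertices = 2^C(n,2),
# i.e. n! times the coefficient of x^n in F(x, 1+y) at y = 1.
N = 6
a = [2 ** comb(n, 2) for n in range(N)]

# Exponential formula as a recurrence: a[n] = sum_k C(n-1, k-1) * c[k] * a[n-k],
# where c[k] counts connected labeled graphs (the component of vertex 1 has size k).
c = [0] * N
for n in range(1, N):
    c[n] = a[n] - sum(comb(n - 1, k - 1) * c[k] * a[n - k] for k in range(1, n))

print(c[1:])  # counts of connected labeled graphs on 1..5 vertices
```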
|
1,204,566 | <p>I tried asking this on StackOverflow and it was quickly closed for being too broad, so I come here to get the mathematical part nailed down, and then I can do the rest with no help, most likely.</p>
<p>From <a href="http://www.afjarvis.staff.shef.ac.uk/sudoku/sudgroup.html" rel="nofollow">this web page</a>, I learned that there are 5,472,730,538 essentially different solved sudoku grids.</p>
<p>Please, I beg you, do not respond unless you have read the entire webpage and have a decent understanding of it. You can think of a 9x9 sudoku as being represented by a string of 81 numbers, separated by commas.</p>
<p>I want to find a way to generate one sudoku board from each of the 5,472,730,538 groups, do some quick analysis on it, and move on to the next board, continuing until all 5.4 billion are analyzed. In this way, I will have analyzed all possible essentially different Sudoku boards. I am not familiar with <a href="http://www.gap-system.org/" rel="nofollow">GAP</a>.</p>
<p>So, I need someone, if they are willing, to help me bridge the knowledge gap I have, which is this: How do I go from <a href="http://www.afjarvis.staff.shef.ac.uk/sudoku/sudgroup.html" rel="nofollow">this web page</a> to actually finding (iterating through) each solution?</p>
<p>I do not expect a full detailed thorough answer on this. Instead, any post that I think is helpful to me in reaching my goal will be upvoted, no matter how short or how detailed.</p>
<hr>
<h2>EDIT 3-25-2015</h2>
<p>Thanks to Nick Gill's suggestion, I contacted Frazer Jarvis himself and here is his very helpful reply:</p>
<blockquote>
<p>Dear Jeremy,</p>
<p>The method we used simply evaluated the number of puzzles there had to
be, and didn't count them by listing them all, so it wasn't a
constructive proof in that sense. Nor can the proof be made to give a
list, as far as I can see. As you are probably aware, this work of
mine dates to 2005, and my active interest in Sudoku and related
mathematical problems probably ended soon after, although I followed
forums for a little while (my wife still likes doing them, but it has
been a long time since I did one!) - but I did read towards the end of
that time that someone had made a file of all of these. I don't
remember who it was, however. You may be able to find the forum
postings if you look around the internet. I'll have a quick look
around too.</p>
<p>OK - the forum that contained the discussions no longer exists, but
has been retitled, and is now at
<a href="http://forum.enjoysudoku.com/sudoku-the-puzzle-f5.html" rel="nofollow">http://forum.enjoysudoku.com/sudoku-the-puzzle-f5.html</a></p>
<p>Ah - here's the thread:
<a href="http://forum.enjoysudoku.com/catalog-of-all-essentially-different-grids-t6679.html" rel="nofollow">http://forum.enjoysudoku.com/catalog-of-all-essentially-different-grids-t6679.html</a>
- perhaps you could contact someone there?</p>
<p>Best wishes, Frazer</p>
</blockquote>
<p>He also added later:</p>
<blockquote>
<p>For what it's worth, the discussions about the number of different
grids is in the thread
<a href="http://forum.enjoysudoku.com/su-doku-s-maths-t44.html" rel="nofollow">http://forum.enjoysudoku.com/su-doku-s-maths-t44.html</a></p>
</blockquote>
| Community | -1 | <p>What about brute forcing?</p>
<p>You can explore the tree of all possible grids of 81 digits, enforcing the placement constraints. Without constraints, there are 9^81 grids, and generating them all and checking the constraints is out of the question. But checking the constraints as you fill the grid might not be so unrealistic.</p>
<p>Represent the grid as a list of 81 mask of 9 bits; the bits tell you if the corresponding digit is still allowed at the given location, taking into account the digits already placed.</p>
<p>So start traversing the tree from the top left location and try all 9 possible digits in turn; for every try, only consider the digits that are allowed by the mask, and perform a recursive call to the next location in the grid; the full recursion depth will be 81.</p>
<p>When you try a digit, you will clear the relevant allowance bit in the whole row, column and square. Actually, you need to keep a copy of the masks in the corresponding locations, so that you can restore them after the trial.</p>
<p>If, when trying a new location, it turns out that the mask is empty, you have reached a dead-end and you will implicitly backtrack. This is where there is some hope of achieving a reasonable running time: if many attempts fail early enough, there will be severe pruning of the tree.</p>
<p>The required stack space will remain reasonable, as the maximum stack depth is 81, and on every level you will need to save 27 masks (9/row, 9/column, 9/Square), i.e. 54 bytes or so.</p>
<pre><code>Try(Location):
for Digit= 1..9:
if Mask[Location][Digit]:
Save & clear masks [Digit] for all locations on the same row
Save & clear masks [Digit] for all locations on the same column
Save & clear masks [Digit] for all locations on the same square
if Location == Last one:
Success, evaluate the grid
else:
Try(Next(Location))
Restore masks on the square
Restore masks on the column
Restore masks on the row
</code></pre>
<p>As the Save & clear operations are very intensively used, any optimization trick is welcome there, the first being usage of a compiled language.</p>
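<p>The pseudocode above can be exercised end-to-end on a scaled-down board. Below is a minimal Python sketch (my own, not from the discussion) of the same backtracking idea for the hypothetical 4&times;4 case with 2&times;2 boxes, where the full traversal is fast; it recovers the well-known count of 288 completed 4&times;4 grids.</p>

```python
def count_grids(n=4, box=2):
    # Backtracking count of all completed n x n Sudoku grids (box x box blocks),
    # following the Try() pseudocode: place digits cell by cell, prune on conflict.
    grid = [[0] * n for _ in range(n)]

    def allowed(r, c, v):
        if any(grid[r][i] == v or grid[i][c] == v for i in range(n)):
            return False
        br, bc = box * (r // box), box * (c // box)
        return all(grid[i][j] != v
                   for i in range(br, br + box) for j in range(bc, bc + box))

    def solve(pos):
        if pos == n * n:
            return 1  # past the last cell: one complete grid
        r, c = divmod(pos, n)
        total = 0
        for v in range(1, n + 1):
            if allowed(r, c, v):
                grid[r][c] = v
                total += solve(pos + 1)
                grid[r][c] = 0  # restore, mirroring the mask-restore step
        return total

    return solve(0)

print(count_grids())  # 288 completed 4x4 grids
```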
|
69,137 | <p>Is there any reference for gluing in the context of Morse homology on Hilbert manifolds?</p>
<p>Gluing is pretty standard in Morse homology for finite-dimensional manifolds. Unfortunately, in the infinite-dimensional case the sources I know avoid gluing. For proving that the Morse boundary operator squares to zero Abbondandolo-Majer use either an argument involving cellular filtrations or the graph transform method.</p>
<p>Unfortunately, in the context I want to consider, namely proving that some Morse-like theory is isomorphic to singular homology, both of these approaches do not work. (The latter approach works for showing that my Morse-like theory has a boundary operator that squares to zero, though.)</p>
<p>Reading through the finite-dimensional case ("Morse homology" of Schwarz) I realized that there are a lot of arguments which make it difficult to generalize the gluing procedure to cases where the target is not locally compact. For instance, in a lot of indirect arguments, the Rellich compact embedding theorem is used, which fails if the target is infinite-dimensional.</p>
<p>So, is there any instance where gluing in an infinite-dimensional context is proved? Are there certain obstacles not yet overcome?</p>
| Daniel Moskovich | 2,051 | <p>There's quite a bit of literature on gluing theory for instantons and monopoles, which take the place of flow lines of Morse functions in instanton Floer homology and in Seiberg-Witten Floer homology respectively. It is indeed quite a bit harder than the finite-dimensional case, and yes, it's an active research topic. I'm no expert and I can't explain the key ideas in a nutshell, but here are some references to get you started.<br><br>
I think the originator of the theory was Cliff Taubes:</p>
<ul><li><i>Self-dual Yang-Mills connections on non-self-dual 4-manifolds.</i>
J. Differ. Geom. 17, 139-170 (1982; Zbl 0484.53026).</li>
<li><i>Self-dual connections on 4-manifolds with indefinite intersection matrix.</i>
J. Differ. Geom. 19, 517-560 (1984; Zbl 0552.53011).</li> </ul>
<p>There's a nice exposition of gluing theory for instantons in:</p>
<ul><li> S.K. Donaldson, <i>Floer homology groups in Yang-Mills theory.</i> Cambridge Tracts in Mathematics. 147. Cambridge: Cambridge University Press. (2002; Zbl 0998.53057).</li></ul>
<p>Nice expositions for gluing theory for monopoles are to be found in:</p>
<ul><li> K.A. Frøyshov <i>Compactness and gluing theory for monopoles.</i>
Geometry and Topology Monographs 15. Coventry: Geometry & Topology Publications. (2008; Zbl 1207.57044).</li>
<li> P. Kronheimer, Peter and T. Mrowka <i>Monopoles and three-manifolds.</i>
New Mathematical Monographs 10. Cambridge: Cambridge University Press. (2007; Zbl 1158.57002).</li></ul>
|
1,561,563 | <p>Two circles $\Gamma_1,\Gamma_2$ have centers $O_1,O_2$. Let $\Gamma_1\cap\Gamma_2=A,B$, with $A\neq B$. An arbitrary line through $B$ intersects $\Gamma_1$ at $C$ and $\Gamma_2$ at $D$. The tangents to $\Gamma_1$ at $C$ and to $\Gamma_2$ at $D$ intersect at $M$. Let $N=AM\cap CD$. Let $l$ be a line through $N$ parallel to $CM$, and let $l\cap AC=K$. Prove that $BK$ is tangent to $\Gamma_2$.</p>
<hr>
<p>$\qquad\quad$ <img src="https://i.stack.imgur.com/mTLgz.png" alt=""></p>
<hr>
<p>Here is some progress I have:</p>
<p>We are looking to prove $\angle O_2BP=90^{\circ}$, and since $\angle O_2DP=90^{\circ}$, if we could prove $BP=PD$, we would be done by congruent triangles. So we are looking to prove $\dfrac{\sin \angle BDP}{\sin \angle DBP}=1$. Let $AM\cap BK=l$. We have $\angle BDP=\angle QBN$. By the law of sines on $\triangle DNM,\triangle BNQ$, we have $\sin \angle BDP=\sin \angle NDM=\dfrac{NM}{DM}\sin \angle DNM$ and $\sin\angle QBN=\dfrac {NQ}{BQ}\sin\angle BNQ$. Dividing, the sines cancel (since they are supplementary), and we are left with $\dfrac{NM\times BQ}{DM\times NQ}$, so it remains to prove $\dfrac{NM}{DM}=\dfrac{NQ}{BQ}$.</p>
<p>I'm not sure what to do from here. We would be done if we could prove $\triangle NBQ\sim\triangle NDM$, but this would imply $\angle QNB=\angle MND=90^{\circ}$, but from drawing multiple diagrams, it looks like this is not always the case. </p>
<p>As always, any ideas are appreciated!</p>
| Jan Eerland | 226,665 | <p>BIG HINT:</p>
<p>$$\int\frac{1}{x+\sqrt{1-x^2}}\space\text{d}x=$$</p>
<hr>
<p>Substitute $x=\sin(u)$ and $\text{d}x=\cos(u)\space\text{d}u$.</p>
<p>Then $\sqrt{1-x^2}=\sqrt{1-\sin^2(u)}=\cos(u)$ and $u=\arcsin(x)$:</p>
<hr>
<p>$$\int\frac{\cos(u)}{\sin(u)+\cos(u)}\space\text{d}u=$$
$$\int\frac{\sec^3(u)}{\sec^3(u)}\cdot\frac{\cos(u)}{\sin(u)+\cos(u)}\space\text{d}u=$$
$$\int\frac{\sec^2(u)}{\sec^2(u)+\sec^2(u)\tan(u)}\space\text{d}u=$$</p>
<hr>
<p>Prepare to substitute $s=\tan(u)$. Rewrite $\frac{\sec^2(u)}{\sec^2(u)+\sec^2(u)\tan(u)}$ using $\sec^2(u)=1+\tan^2(u)$:</p>
<hr>
<p>$$\int\frac{\sec^2(u)}{1+\tan(u)+\tan^2(u)+\tan^3(u)}\space\text{d}u=$$</p>
<hr>
<p>Substitute $s=\tan(u)$ and $\text{d}s=\sec^2(u)\space\text{d}u$:</p>
<hr>
<p>$$\int\frac{1}{s^3+s^2+s+1}\space\text{d}s=$$
$$\int\left(\frac{1-s}{2(s^2+1)}+\frac{1}{2(s+1)}\right)\space\text{d}s=$$
$$\int\frac{1-s}{2(s^2+1)}\space\text{d}s+\int\frac{1}{2(s+1)}\space\text{d}s=$$
$$\frac{1}{2}\int\frac{1-s}{s^2+1}\space\text{d}s+\frac{1}{2}\int\frac{1}{s+1}\space\text{d}s=$$
$$\frac{1}{2}\int\left(\frac{1}{s^2+1}-\frac{s}{s^2+1}\right)\space\text{d}s+\frac{1}{2}\int\frac{1}{s+1}\space\text{d}s=$$
$$-\frac{1}{2}\int\frac{s}{s^2+1}\space\text{d}s+\frac{1}{2}\int\frac{1}{s^2+1}\space\text{d}s+\frac{1}{2}\int\frac{1}{s+1}\space\text{d}s=$$</p>
<hr>
<p>Substitute $p=s^2+1$ and $\text{d}p=2s\space\text{d}s$:</p>
<hr>
<p>$$-\frac{1}{4}\int\frac{1}{p}\space\text{d}p+\frac{1}{2}\int\frac{1}{s^2+1}\space\text{d}s+\frac{1}{2}\int\frac{1}{s+1}\space\text{d}s=$$
$$-\frac{\ln\left|p\right|}{4}+\frac{1}{2}\int\frac{1}{s^2+1}\space\text{d}s+\frac{1}{2}\int\frac{1}{s+1}\space\text{d}s=$$
$$-\frac{\ln\left|p\right|}{4}+\frac{\arctan\left(s\right)}{2}+\frac{1}{2}\int\frac{1}{s+1}\space\text{d}s=$$</p>
<hr>
<p>Substitute $w=s+1$ and $\text{d}w=\space\text{d}s$:</p>
<hr>
<p>$$-\frac{\ln\left|p\right|}{4}+\frac{\arctan\left(s\right)}{2}+\frac{1}{2}\int\frac{1}{w}\space\text{d}w=$$
$$-\frac{\ln\left|p\right|}{4}+\frac{\arctan\left(s\right)}{2}+\frac{\ln\left|w\right|}{2}+\text{C}$$</p>
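<p>As a sanity check that the chain of substitutions is consistent, here is a quick numerical sketch (illustrative only): back-substituting $w=s+1$, $p=s^2+1$, $s=\tan(u)$, $u=\arcsin(x)$ into the final expression and differentiating numerically recovers the original integrand.</p>

```python
import math

def F(x):
    # Back-substitute: u = arcsin(x), s = tan(u) = x/sqrt(1-x^2),
    # p = s^2 + 1, w = s + 1 in -ln|p|/4 + arctan(s)/2 + ln|w|/2
    s = x / math.sqrt(1 - x * x)
    return (-math.log(s * s + 1) / 4
            + math.atan(s) / 2
            + math.log(abs(s + 1)) / 2)

def integrand(x):
    return 1 / (x + math.sqrt(1 - x * x))

# Central-difference derivative of F at a sample point inside (-1, 1)
x, h = 0.3, 1e-6
deriv = (F(x + h) - F(x - h)) / (2 * h)
print(deriv, integrand(x))  # the two values agree
```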
|
1,499,949 | <p>Prove that for all event $A,B$</p>
<p>$P(A\cap B)+P(A\cap \bar B)=P(A)$</p>
<p><strong>My attempt:</strong></p>
<p>Formula: $\color{blue}{P(A\cap B)=P(A)+P(B)-P(A\cup B)}$</p>
<p>$=\overbrace {P(A)+P(B)-P(A\cup B)}^{=P(A\cap B)}+\overbrace {P(A)+P(\bar B)-P(A\cup \bar B}^{=P(A\cap \bar B)})$</p>
<p>$=2P(A)+\underbrace{P(B)+P(\bar B)}_{=1}-P(A\cup B)-P(A\cup \bar B)$</p>
<p>$=1+2P(A)-P(A\cup B)-P(A\cup \bar B)$</p>
| GEdgar | 442 | <p>There is no difference. In the same way as I write $e^x$ when $x$ is simple enough, and $\exp(x)$ otherwise, I also write $\sqrt{x}$ when $x$ is simple enough, and $(x)^{1/2}$ otherwise.</p>
<p>For example, I write
$$
\left(\frac{1+\frac{9}{x^2}}{\sin \frac{\pi}{9}+2}\right)^{1/2}
\qquad\text{and not the ugly}\qquad
\sqrt{\frac{1+\frac{9}{x^2}}{\sin \frac{\pi}{9}+2}}
$$
Just as I write
$$
\exp\left(\frac{1+\frac{9}{x^2}}{\sin \frac{\pi}{9}+2}\right)
\qquad\text{and not the illegible}\qquad
e^{\frac{1+\frac{9}{x^2}}{\sin \frac{\pi}{9}+2}}
$$</p>
|
380,452 | <p>A relation R is defined on ordered pairs of integers as follows :</p>
<p>$(x,y) R(u,v)$ if $x<u$ and $y>v.$ </p>
<p>Then R is </p>
<ol>
<li><p>Neither a Partial Order nor an Equivalence relation</p></li>
<li><p>A Partial Order but not a Total Order</p></li>
<li><p>A Total Order </p></li>
<li><p>An Equivalence relation</p></li>
</ol>
| Martin Brandenburg | 1,650 | <p>Let $M$ be an $R$-module, $f \in R$ and let $N$ be the colimit of $M \xrightarrow{f} M \xrightarrow{f} \dotsc$. Directed colimits are easy to construct: Elements come from elements of the individual modules, and are identified if they get sent to the same element by some transition map. So in our case, if $i_n : M \to N$ denotes the $n$th colimit inclusion, every element of $N$ has the form $i_n(m)$ for some $m \in M$, and $i_n(m)=i_{n'}(m')$ iff $f^{p-n} m=f^{p-n'} m'$ for some $p \geq n,n'$. It follows that $ i_0(m) = f^n \cdot i_n(m)$ and that $f$ acts as an isomorphism on $N$. Hence, every element of $N$ has the form $i_0(m)/f^n$ for some $m \in M$ and $n \geq 0$, and we have $i_0(m)/f^n=i_0(m')/f^{n'}$ iff $f^p f^{n'} i_0(m')=f^p f^{n} i_0(m)$ for some $p \geq 0$. But this is the usual construction of the localization $M_f$, so that $N = M_f$.</p>
<p>Here is a more elegant and abstract explanation: By definition of a colimit, a homomorphism $\alpha : N \to T$ corresponds to a family of homomorphisms $\alpha_n : M \to T$ with $\alpha_n = \alpha_{n+1} f$ for all $n \geq 0$. Taking $\alpha_n=i_{n+1}$, we can construct a homomorphism $N \to N$ which is easily seen to be inverse to $f$. If $T$ is a module on which $f$ is invertible, then the above shows that $\alpha$ is completely determined by $\alpha_0$. Hence, $N$ is the universal module over $M$ on which $f$ becomes invertible. That is, $N=M_f$. Actually the same localization works for an arbitrary endomorphism of an object in any category with directed colimits. For example, localizing the set $\mathbb{N}$ with respect to the successor function gives $\mathbb{Z}$.</p>
<p>Here is some intuition for this: Start with $M$; we want to make $f$ invertible on $M$. So for $m \in M$ we want to find some (unique) $m/f$ (in a module extending $M$) with $f \cdot m/f = m$. For this, just adjoin another copy of $M$, but whose elements should behave like $m/f$. Roughly we just add some extra space in order to insert the inverses. So we have two copies $i_0(M)$ and $i_1(M)$ of $M$, but want to ensure that $f \cdot i_1(m) = i_0(m)$. So just take the corresponding quotient of $i_0(M) \oplus i_1(M)$. But with $i_1(M)$ we have to continue this way. In the end, we take the quotient of $i_0(M) \oplus i_1(M) \oplus i_2(M) \oplus \dotsc$ by $f \, i_{n+1} = i_n$, i.e. the directed colimit of $M \xrightarrow{f} M \xrightarrow{f} M \xrightarrow{f} \dotsc$.</p>
<p>Actually this is a very useful description of localizations. See Eisenbud's book on commutative algebra for some applications. For example, one immediately gets that $R_f$ is flat over $R$, since directed colimits of flat modules are flat. By another colimit argument, we get that $R_S$ is flat over $R$ for every multiplicative subset $S$.</p>
<p>Now for your second example: Let $M$ be an $R$-module and $f \in R$ be such that $f : M \to M$ is injective. This implies that $M \to M_f$ is injective (which can be seen, for example, from the colimit description), and therefore $M_f / M$ makes sense. Let $C$ be the colimit of $M/f^0 M \xrightarrow{f} M/f^1 M \xrightarrow{f} \dotsc$. This sequences admits an obvious epimorphism from $M \xrightarrow{f} M \xrightarrow{f} \dotsc$. This induces an epimorphism $M_f \to C$. What is the kernel? If $m/f^n$ lies in the kernel, this means that $m \bmod f^n M$ vanishes in $C$. Since all the transition maps $f : M/f^p M \to M/f^{p+1} M$ are injective, this means that $m \bmod f^n M$ vanishes in $M/f^n M$, i.e. $m \in f^n M$. Hence, $m/f^n \in M$. So the kernel is just $M$, and we see $M_f/M \cong C$.</p>
|
3,715,824 | <p>I proved that <span class="math-container">$$\lim_{n\to\infty}\left(1+\frac{x^2}{n^2}\right)^{\frac{n}{2}}=1$$</span>
using L'Hospital's rule. But is there a way to prove it without L'Hospital's rule? I tried splitting it as
<span class="math-container">$$\lim_{n\to\infty}n^{-n}(n^2+x^2)^{\frac{n}{2}},$$</span>
but that didn't work because <span class="math-container">$\lim_{n\to\infty}(n^2+x^2)^{\frac{n}{2}}$</span> diverges.</p>
| Mark Viola | 218,419 | <p><strong>METHODOLOGY <span class="math-container">$1$</span>: Direct Application of Bernoulli's Inequality</strong></p>
<p>Note that for <span class="math-container">$n>|x|$</span></p>
<p><span class="math-container">$$1\le \left(1+\frac{x^2}{n^2}\right)^{n/2}\le \frac1{\left(1-\frac{x^2}{n^2}\right)^{n/2}}\le \frac1{1-\frac{x^2}{2n}}$$</span></p>
<p>where we used Bernoulli's inequality to arrive at the last inequality.</p>
<p>Now apply the squeeze theorem to find </p>
<p><span class="math-container">$$\lim_{n\to \infty}\left(1+\frac{x^2}{n^2}\right)^{n/2}=1$$</span></p>
<hr>
<hr>
<p><strong>METHODOLOGY <span class="math-container">$2$</span>: Using Estimates of the Logarithm Function</strong></p>
<p>Note that we may write</p>
<p><span class="math-container">$$\left(1+\frac{x^2}{n^2}\right)^{n/2}=e^{(n/2)\log\left(1+\frac{x^2}{n^2}\right)}\tag 1$$</span></p>
<p>In <a href="http://math.stackexchange.com/questions/1589429/how-to-prove-that-logxx-when-x1/1590263#1590263">This Answer</a>, I used elementary, pre-calculus tools to obtain the inequalities </p>
<p><span class="math-container">$$\frac{x}{1+x}\le \log(1+x)\le x \tag2$$</span></p>
<p>Using <span class="math-container">$(2)$</span> in <span class="math-container">$(1)$</span> reveals</p>
<p><span class="math-container">$$e^{nx^2/(2n^2+2x^2)}\le e^{(n/2)\log\left(1+\frac{x^2}{n^2}\right)}\le e^{x^2/2n}$$</span> </p>
<p>whence application of the squeeze theorem yields the coveted result</p>
<p><span class="math-container">$$\lim_{n\to \infty}\left(1+\frac{x^2}{n^2}\right)^{n/2}=1$$</span></p>
<p>as expected!</p>
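<p>A quick numerical illustration of the squeeze in the first method (an illustrative sketch; the sample value $x=3$ is arbitrary, and the Bernoulli bound requires $n > x^2/2$):</p>

```python
x = 3.0
middle = lambda n: (1 + x * x / (n * n)) ** (n / 2)
upper = lambda n: 1 / (1 - x * x / (2 * n))  # Bernoulli-based bound, needs n > x^2/2

# The middle expression stays pinned between 1 and the upper bound...
for n in (10, 100, 10000):
    assert 1 <= middle(n) <= upper(n)

# ...and both bounds close in on 1 as n grows
print(middle(10**6))
```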
|
3,715,824 | <p>I proved that <span class="math-container">$$\lim_{n\to\infty}\left(1+\frac{x^2}{n^2}\right)^{\frac{n}{2}}=1$$</span>
using L'Hospital's rule. But is there a way to prove it without L'Hospital's rule? I tried splitting it as
<span class="math-container">$$\lim_{n\to\infty}n^{-n}(n^2+x^2)^{\frac{n}{2}},$$</span>
but that didn't work because <span class="math-container">$\lim_{n\to\infty}(n^2+x^2)^{\frac{n}{2}}$</span> diverges.</p>
| Alex | 38,873 | <p>For the upper bound using Bernoulli inequality note that it applies for exponents <span class="math-container">$t: t \leq 0 \cup t \geq 1$</span>, so for <span class="math-container">$\frac{n}{2} < 0$</span>:
<span class="math-container">$$
\bigg(1+\frac{x^2}{n^2} \bigg)^\frac{n}{2}= \frac{1}{\bigg(1+\frac{x^2}{n^2} \bigg)^{-\frac{n}{2}}} \leq \frac{1}{1- \frac{x^2}{2n}} \to 1
$$</span>
And the limit follows to squeeze lemma</p>
|
3,715,824 | <p>I proved that <span class="math-container">$$\lim_{n\to\infty}\left(1+\frac{x^2}{n^2}\right)^{\frac{n}{2}}=1$$</span>
using L'Hospital's rule. But is there a way to prove it without L'Hospital's rule? I tried splitting it as
<span class="math-container">$$\lim_{n\to\infty}n^{-n}(n^2+x^2)^{\frac{n}{2}},$$</span>
but that didn't work because <span class="math-container">$\lim_{n\to\infty}(n^2+x^2)^{\frac{n}{2}}$</span> diverges.</p>
| Paramanand Singh | 72,031 | <p>The <a href="https://math.stackexchange.com/a/1451245/72031">lemma of Thomas Andrews</a> can be used here:</p>
<blockquote>
<p><strong>Lemma</strong>: If <span class="math-container">$n(a_n-1)\to 0$</span> then <span class="math-container">$a_n^n\to 1$</span>.</p>
</blockquote>
<p>Now use this with <span class="math-container">$$a_n=\sqrt{1+\frac{x^2}{n^2}}$$</span></p>
<hr>
<p>Perhaps you are trying to deal with the limit of <span class="math-container">$(1+ix/n)^n$</span> and show that it equals <span class="math-container">$\cos x+i\sin x$</span>. That can also be easily handled by the lemma in question without first dealing with <span class="math-container">$|(1+ix/n)^n|$</span>. Just apply the lemma to <span class="math-container">$$a_n=\dfrac{1 +\dfrac{ix} {n}} {\cos\dfrac{x} {n} +i\sin\dfrac{x} {n}} $$</span></p>
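<p>A numerical illustration of the lemma with this <span class="math-container">$a_n$</span> (the value of <span class="math-container">$x$</span> is an arbitrary choice; <code>log1p</code>/<code>expm1</code> are used to avoid losing precision near <span class="math-container">$1$</span>):</p>

```python
import math

x = 2.5  # arbitrary test value
for n in (10, 1_000, 100_000):
    t = x*x / (n*n)
    a_minus_1 = math.expm1(0.5 * math.log1p(t))  # a_n - 1 = sqrt(1+t) - 1
    a_pow_n = math.exp(0.5 * n * math.log1p(t))  # a_n ** n
    print(n, n * a_minus_1, a_pow_n)

# n*(a_n - 1) ~ x^2/(2n) -> 0, and correspondingly a_n**n -> 1
assert abs(100_000 * a_minus_1) < 1e-4
assert abs(a_pow_n - 1) < 1e-3
```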
|
2,655,018 | <p>I have a quick question regarding a little issue.</p>
<p>So I'm given a problem that says "$\tan \left(\frac{9\pi}{8}\right)$" and I'm supposed to find the exact value using half angle identities. I know what these identities are $\sin, \cos, \tan$. So, I use the tangent half-angle identity and plug-in $\theta = \frac{9\pi}{8}$ into $\frac{\theta}{2}$. I got $\frac{9\pi}{4}$ and plugged in values into the formula based on this answer. However, I checked my work with slader.com and it said I was wrong. It said I should take the value I found, $\frac{9\pi}{4}$, and plug it back into $\frac{\theta}{2}$. Wouldn't that be re-plugging in the value for no reason? Very confused.</p>
| Steven Alexis Gregory | 75,410 | <p>Because the period of tangent is $\pi$,
$\tan \dfrac{9 \pi}{8} = \tan \dfrac{\pi}{8}$</p>
<p>You could just look this up, but it's pretty easy to derive.</p>
<p>$$ \tan \frac x2
= \frac{\sin \frac x2}{\cos \frac x2}
= \frac{2 \sin \frac x2 \ \cos \frac x2}{1 + 2\cos^2 \frac x2 - 1}
= \frac{\sin x}{1 + \cos x}
= \frac{1 - \cos x}{\sin x}$$</p>
<p>Since $\sin \frac{\pi}{4} = \cos \frac{\pi}{4} = \frac{1}{\sqrt 2}$</p>
<p>$$ \tan \dfrac{9 \pi}{8}
= \tan \frac{\pi}{8}
= \tan \left( \frac 12 \cdot \frac{\pi}{4} \right)
= \frac{1 - \cos \frac{\pi}{4}}{\sin \frac{\pi}{4}}
= \frac{1-\frac{1}{\sqrt 2}}{\frac{1}{\sqrt 2}}
= \sqrt 2 - 1$$</p>
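<p>A one-line floating-point check of the derived value:</p>

```python
import math

lhs = math.tan(9 * math.pi / 8)  # period pi reduces this to tan(pi/8)
rhs = math.sqrt(2) - 1
print(lhs, rhs)
assert math.isclose(lhs, rhs, rel_tol=1e-9)
assert math.isclose(math.tan(math.pi / 8), rhs, rel_tol=1e-12)
```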
|
3,371,638 | <p>Measure space <span class="math-container">$(X, \mathcal{A}, ν)$</span> has <span class="math-container">$ν(X) = 1$</span>. Let <span class="math-container">$A_n \in \mathcal{A} $</span> and denote </p>
<p><span class="math-container">$B := \{x : x \in A_n$</span> for infinitely many n }.</p>
<p>I want to prove that if <span class="math-container">$ν(A_n) \geq \epsilon > 0$</span> for all n, then <span class="math-container">$ν(B) ≥ \epsilon$</span>.</p>
<p><span class="math-container">$\textbf{My attempt}:$</span></p>
<p><span class="math-container">$$B = \text{limsup} A_n = \bigcap_{n=1}^{\infty}\bigcup_{k=n}^{\infty}A_k$$</span>
taking complement and taking measure from both sides:
<span class="math-container">$$\nu(X)-\nu(B) = \nu\bigg[\bigcup_{n=1}^{\infty} \bigg(\bigcup_{k=n}^{\infty}A_k\bigg)^c\bigg] = \bigcup_{n=1}^{\infty} \nu(B_n)\leq \sum_{n=1}^\infty\nu(B_n)$$</span>
<span class="math-container">$$\nu(B) \geq 1 - \sum_{n=1}^\infty\nu(B_n) $$</span> </p>
<p><span class="math-container">$B_n$</span> is an increasing sequence (i.e.<span class="math-container">$B_n \subset B_{n+1}$</span>) ,right? </p>
<p><span class="math-container">$\sum_{n=1}^\infty\nu(B_n) = \lim_{n \to \infty}\nu(B_n)$</span> so there exist N such that <span class="math-container">$\nu(B_n) \leq \frac{1-\epsilon}{2^n}$</span> thus;</p>
<p><span class="math-container">$$\nu(B) \geq 1 - \sum_{n=1}^\infty\nu(B_n) \leq 1-(1-\epsilon)=\epsilon $$</span> </p>
| Paul Frost | 349,785 | <p>We shall a give a proof which is (hopefully) intuitive but can be made precise if desired.</p>
<p>The "unit circle with its interior" is the set <span class="math-container">$D = \{(x,y) \in \mathbb{R}^2 ; x^2+y^2 \leq 1 \}$</span>. Its boundary is the unit circle <span class="math-container">$C = \{(x,y) \in \mathbb{R}^2 ; x^2+y^2 = 1 \}$</span>.</p>
<p>For each <span class="math-container">$c \in \mathbb R^2 \setminus \{ 0 \}$</span> let <span class="math-container">$L'(c)$</span> be the line through <span class="math-container">$0$</span> and <span class="math-container">$c$</span>, and <span class="math-container">$L(c)$</span> denote the line through <span class="math-container">$c$</span> which is perpendicular to <span class="math-container">$L'(c)$</span>. By <span class="math-container">$H(c) $</span> we denote the half-plane which has <span class="math-container">$L(c)$</span> as its boundary and contains <span class="math-container">$0$</span>. Clearly each half-plane <span class="math-container">$H$</span> with <span class="math-container">$0 \in H$</span> whose boundary line <span class="math-container">$L$</span> does not contain <span class="math-container">$0$</span> has the form <span class="math-container">$H = H(c)$</span> for a unique <span class="math-container">$c$</span>. In fact, let <span class="math-container">$L'$</span> be the line through <span class="math-container">$0$</span> which is perpendicular to <span class="math-container">$L$</span> and let <span class="math-container">$c$</span> be the intersection point of <span class="math-container">$L$</span> and <span class="math-container">$L'$</span>. Then <span class="math-container">$H = H(c)$</span>.</p>
<p>Assume that <span class="math-container">$D$</span> is the intersection <span class="math-container">$I(c_1,\ldots,c_n)$</span> of finitely many half-planes <span class="math-container">$H(c_1),\ldots, H(c_n)$</span>.</p>
<p>W.l.o.g. we may assume that all <span class="math-container">$c_i \in C$</span>. In fact, the line <span class="math-container">$L'(c_i)$</span> intersects <span class="math-container">$C$</span> in a point <span class="math-container">$c^*_i$</span> between <span class="math-container">$0$</span> and <span class="math-container">$c_i$</span> (<span class="math-container">$c^*_i = c_i$</span> is possible). Then we have <span class="math-container">$D \subset H(c^*_i) \subset H(c_i)$</span> so that <span class="math-container">$I(c_1,\ldots,c_n) = I(c^*_1,\ldots,c^*_n)$</span>. Note that the boundary line <span class="math-container">$L(c^*_i)$</span> of <span class="math-container">$H(c^*_i)$</span> is the tangent to <span class="math-container">$C$</span> at <span class="math-container">$c^*_i$</span>.</p>
<p>If we add finitely points <span class="math-container">$c_i \in C$</span>, <span class="math-container">$i= n+1,\ldots,m$</span>, then clearly <span class="math-container">$I(c_1,\ldots,c_n) = I(c_1,\ldots,c_m)$</span>. We may therefore w.l.o.g. assume that the points <span class="math-container">$(0,1)$</span> and <span class="math-container">$(1,0)$</span> belong to the <span class="math-container">$c_i$</span> and that the <span class="math-container">$c_i$</span> are pairwise distinct.</p>
<p>W.l.o.g. we may assume that <span class="math-container">$c_1 = (0,1)$</span> and <span class="math-container">$c_2$</span> is the first point reached if we travel counterclockwise along <span class="math-container">$C$</span> starting at <span class="math-container">$c_1$</span>. The point <span class="math-container">$c_2$</span> lies on the quarter circle between <span class="math-container">$(1,0)$</span> and <span class="math-container">$(0,1)$</span>. The lines <span class="math-container">$L_1 = L(c_1)$</span> and <span class="math-container">$L(c_2)$</span> intersect in a point <span class="math-container">$p$</span> lying on the line segment between <span class="math-container">$(1,0)$</span> and <span class="math-container">$(1,1)$</span>. Clearly, <span class="math-container">$p \in H(c_1) \cap H(c_2)$</span>, but <span class="math-container">$p \notin D$</span>. </p>
<p>Any half-plane <span class="math-container">$H(c)$</span> where <span class="math-container">$c \in C$</span> does not belong to the arc between <span class="math-container">$c_1$</span> and <span class="math-container">$c_2$</span> contains <span class="math-container">$p$</span>. This is intuitively clear: When we move counterclockwise along <span class="math-container">$C$</span> from <span class="math-container">$c_2$</span> to <span class="math-container">$(-1,0)$</span>, then the intersection point <span class="math-container">$p_c$</span> of <span class="math-container">$L_1$</span> and <span class="math-container">$L(c)$</span> moves on <span class="math-container">$L_1$</span> from <span class="math-container">$p$</span> to <span class="math-container">$\infty$</span> and the half-line <span class="math-container">$L_1^-(p_c) \subset L_1$</span> below <span class="math-container">$p_c$</span> (which has the property <span class="math-container">$p_c \in L_1^-(p_c)$</span>) is contained in <span class="math-container">$H(c)$</span>. When we reach <span class="math-container">$(-1,0)$</span>, then <span class="math-container">$L_1$</span> and <span class="math-container">$L(-1,0)$</span> are parallel (i.e. do not intersect) and trivially <span class="math-container">$p \in H(-1,0)$</span>. When we move further from <span class="math-container">$(-1,0)$</span> to <span class="math-container">$(1,0)$</span>, the intersection point <span class="math-container">$p_c$</span> moves on <span class="math-container">$L_1$</span> from <span class="math-container">$-\infty$</span> to <span class="math-container">$(1,0)$</span> and the half-line <span class="math-container">$L_1^+(p_c) \subset L_1$</span> above <span class="math-container">$p_c$</span> (which has the property <span class="math-container">$p \in L_1^+(p_c)$</span>) is contained in <span class="math-container">$H(c)$</span>.</p>
<p>You should draw a picture - everything can be made absolutely precise if desired.</p>
<p>This shows that <span class="math-container">$p \in \bigcap_{i=1}^n H(c_i)$</span> which is a contradiction.</p>
|
1,085,491 | <p>Prove that the following number is an integer:
$$\left( \dfrac{76}{\dfrac{1}{\sqrt[\large{3}]{77}-\sqrt[\large{3}]{75}}-\sqrt[\large{3}]{5775}}+\dfrac{1}{\dfrac{76}{\sqrt[\large{3}]{77}+\sqrt[\large{3}]{75}}+\sqrt[\large{3}]{5775}}\right)^{\large{3}}$$</p>
<p>How can I prove it?</p>
| Dr. Sonnhard Graubner | 175,066 | <p>we have $$ \left( 76\, \left( \left( \sqrt [3]{77}-\sqrt [3]{75} \right) ^{-1}-
\sqrt [3]{5775} \right) ^{-1}+ \left( 76\, \left( \sqrt [3]{77}+\sqrt
[3]{75} \right) ^{-1}+\sqrt [3]{5775} \right) ^{-1} \right) ^{3}
$$
after expanding we obtain
$$438976\, \left( \left( \sqrt [3]{77}-\sqrt [3]{75} \right) ^{-1}-
\sqrt [3]{5775} \right) ^{-3}+17328\,{\frac {1}{ \left( \left( \sqrt
[3]{77}-\sqrt [3]{75} \right) ^{-1}-\sqrt [3]{5775} \right) ^{2}
\left( 76\, \left( \sqrt [3]{77}+\sqrt [3]{75} \right) ^{-1}+\sqrt [3
]{5775} \right) }}+228\,{\frac {1}{ \left( \left( \sqrt [3]{77}-
\sqrt [3]{75} \right) ^{-1}-\sqrt [3]{5775} \right) \left( 76\,
\left( \sqrt [3]{77}+\sqrt [3]{75} \right) ^{-1}+\sqrt [3]{5775}
\right) ^{2}}}+ \left( 76\, \left( \sqrt [3]{77}+\sqrt [3]{75}
\right) ^{-1}+\sqrt [3]{5775} \right) ^{-3}
$$
computing this we obtain
$$616$$
wow</p>
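<p>The value $616$ can be corroborated numerically; with $a=\sqrt[3]{77}$ and $b=\sqrt[3]{75}$, note that $\sqrt[3]{5775}=ab$ since $5775=77\cdot 75$, and the bracket simplifies to $2\sqrt[3]{77}$:</p>

```python
a = 77 ** (1/3)
b = 75 ** (1/3)
c = 5775 ** (1/3)  # equals a*b, since 5775 = 77*75

inner = 76 / (1/(a - b) - c) + 1 / (76/(a + b) + c)
print(inner ** 3)                 # ≈ 616
assert abs(inner - 2 * a) < 1e-9  # the bracket simplifies to 2*77**(1/3)
assert abs(inner ** 3 - 616) < 1e-6
```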
|
1,085,491 | <p>Prove that the following number is an integer:
$$\left( \dfrac{76}{\dfrac{1}{\sqrt[\large{3}]{77}-\sqrt[\large{3}]{75}}-\sqrt[\large{3}]{5775}}+\dfrac{1}{\dfrac{76}{\sqrt[\large{3}]{77}+\sqrt[\large{3}]{75}}+\sqrt[\large{3}]{5775}}\right)^{\large{3}}$$</p>
<p>How can I prove it?</p>
| Bernard | 202,857 | <p>The conjugate expression of $\sqrt[3]{a} \pm\sqrt[3]{b}$ is $\sqrt[3]{a^2} \mp\sqrt[3]{ab}+\sqrt[3]{b^2} $. You can use that to rationalise the denominators. The first fraction simplifies to $\sqrt[3]{77}+\sqrt[3]{75}$ and the second to $\sqrt[3]{77}-\sqrt[3]{75}$, so the expression inside the parentheses is $2\sqrt[3]{77}$ and finally you get $2^3\cdot 77=616$.</p>
|
110,373 | <p>Are there classes of infinite groups that admit Sylow subgroups and where the Sylow theorems are valid?</p>
<p>More precisely, I'm looking for classes of groups <span class="math-container">$\mathcal{C}$</span> with the following properties:</p>
<ul>
<li><span class="math-container">$\mathcal{C}$</span> includes the finite groups</li>
<li>in <span class="math-container">$\mathcal{C}$</span> there is a notion of Sylow subgroups that coincides with the usual one when restricted to finite groups</li>
<li>Sylow's theorems (or part of them) are valid in <span class="math-container">$\mathcal{C}$</span></li>
</ul>
<p>An example of such a class <span class="math-container">$\mathcal{C}$</span> is given by the class of profinite groups.</p>
| Anton Klyachko | 24,165 | <p>You may also read Chapter 13 of Kurosh's book <a href="https://books.google.ru/books/about/Theory_of_Groups.html?id=B3lHYIuqPuQC&redir_esc=y" rel="nofollow noreferrer">Theory of groups, volume 2</a>.
For instance, it contains a proof of Baer's theorem (<a href="https://mathoverflow.net/a/110375">cited</a> by @Igor) which says that</p>
<p><strong>all p-Sylow subgroups of a locally normal group are isomorphic.</strong></p>
<p><em>Locally normal</em> means periodic with finite conjugacy classes.</p>
|
656,185 | <blockquote>
<p>Let the sequence $\{G_{n}\}$ be such that
$$G_{1}=1,G_{3}=3,G_{2n}=G_{n}$$
$$G_{4n+1}=2G_{2n+1}-G_{n},G_{4n+3}=3G_{2n+1}-2G_{n}$$</p>
</blockquote>
<p>If $G_{n}=n$, then we say $n$ is 'good'.
How many 'good' numbers $n$ are there with $n<2^{100}$?</p>
<p><strong>My try:</strong></p>
<p>since
$$\begin{eqnarray}G_{1}&=&1,\\
G_{2}&=&1,\\
G_{3}&=&3,\\
G_{4}&=&G_{2}=1\\
G_{5}&=&2G_{3}-G_{1}=5,\\
G_{6}&=&G_{3}=3,\\
G_{7}&=&3G_{3}-2G_{1}=9-2=7\\
G_{8}&=&G_{4}=1,\\
G_{9}&=&2G_{5}-G_{2}=10-1=9,\\
G_{10}&=&G_{5}=5,\\
G_{11}&=&3G_{5}-2G_{2}=15-2=13,\\
G_{12}&=&G_{6}=3,\\
G_{13}&=&2G_{7}-G_{3}=14-3=11,\\
G_{14}&=&G_{7}=7,\\
G_{15}&=&3G_{7}-2G_{3}=21-6=15,\end{eqnarray}$$</p>
<p>so $n=1,3,5,7,9,15,\cdots$ are 'good'</p>
<p>But How find numbers? when $n<2^{100}?$</p>
<p>Thank you</p>
| Barry Cipra | 86,747 | <p>This is sequence <a href="http://oeis.org/A030101" rel="nofollow">A030101</a> in the OEIS. That is, $G(n)$ is the number obtained by reversing the digits of $n$ when written base $2$, e.g. $G(25)=G(11001_2)=10011_2=19$.</p>
<p>This is easy to check: If $n=d_0+2d_1+\cdots+2^rd_r$, then</p>
<p>$$
\begin{align}
G(n)&=d_r+\cdots+2^rd_0\\
G(2n)&=d_r+\cdots+2^rd_0+2^{r+1}\cdot0\\
G(2n+1)&=d_r+\cdots+2^rd_0+2^{r+1}\cdot1\\
G(4n+1)&=d_r+\cdots+2^rd_0+2^{r+1}\cdot0+2^{r+2}\cdot1\\
G(4n+3)&=d_r+\cdots+2^rd_0+2^{r+1}\cdot1+2^{r+2}\cdot1
\end{align}
$$</p>
<p>from which the identities $G(2n)=G(n)$, $G(4n+1)=2G(2n+1)-G(n)$, and $G(4n+3)=3G(2n+1)-2G(n)$ are easily verified.</p>
<p>The upshot is that the OP's "good" numbers are those that are palindromes when written in binary.</p>
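<p>A short script (a sketch, not a proof) checks that the recurrence agrees with binary reversal and counts the binary palindromes; below $2^{100}$ the count comes out to $2\left(2^{50}-1\right)$:</p>

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def G(n):
    # the recurrence from the question
    if n == 1: return 1
    if n == 3: return 3
    if n % 2 == 0: return G(n // 2)
    k = n // 4                              # n = 4k+1 or n = 4k+3
    if n % 4 == 1: return 2*G(2*k + 1) - G(k)
    return 3*G(2*k + 1) - 2*G(k)

def rev(n):
    return int(bin(n)[2:][::-1], 2)         # reverse the binary digits

assert all(G(n) == rev(n) for n in range(1, 2048))

# 'good' n are the binary palindromes; brute-force count below 2^10:
assert sum(1 for n in range(1, 2**10) if G(n) == n) == 62 == 2 * (2**5 - 1)
print(2 * (2**50 - 1))  # count below 2^100 by the same pattern
```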
|
1,575,397 | <p>I need help calculating
$$\lim_{n\to\infty}\left(\frac{1}{n^{2}}+\frac{2}{n^{2}}+...+\frac{n}{n^{2}}\right) = ?$$</p>
| Jan Eerland | 226,665 | <p>HINT:</p>
<p>$$\lim_{n\to\infty}\left(\frac{1}{n^{2}}+\frac{2}{n^{2}}+...+\frac{n}{n^{2}}\right)=\lim_{n\to\infty}\sum_{m=1}^{n}\frac{m}{n^2}=\lim_{n\to\infty}\frac{n+1}{2n}=\lim_{n\to\infty}\frac{1+\frac{1}{n}}{2}=\frac{1+0}{2}=\frac{1}{2}$$</p>
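<p>A direct numerical confirmation of the closed form $\sum_{m=1}^{n} \frac{m}{n^2}=\frac{n+1}{2n}$:</p>

```python
for n in (10, 100, 10_000):
    s = sum(m / n**2 for m in range(1, n + 1))
    assert abs(s - (n + 1) / (2 * n)) < 1e-9
print("partial sums match (n+1)/(2n), which tends to 1/2")
```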
|
1,526,474 | <p>Find the natural number $k <117$ such that $2^{117}\equiv k \pmod {117}$.</p>
<p>I know $117$ is the product of $3$ and $37$.</p>
<p>$2^{117}\equiv 2 \pmod 3$
$2^{117}\equiv 31 \pmod {37}$.
But $2^{117}\equiv 44 \pmod {117}$.</p>
<p>I can't seem to understand how to get $44$. Can anyone help me understand?</p>
| Peter | 82,961 | <p>Note that $117 = 3^2\cdot 13 = 9\cdot 13$ (it is not $3\cdot 37$, which equals $111$), so you can use the Chinese remainder theorem with the moduli $9$ and $13$:</p>
<p>$$2^{117}\equiv \ 2^3 = 8 \ (\ mod\ 9)$$</p>
<p>$$2^{117}\equiv \ 2^9 \equiv 5\ (\ mod\ 13\ )$$</p>
<p>Take it from here.</p>
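<p>Each step is easy to verify mechanically (including that $117=9\cdot 13$ while $3\cdot 37=111$):</p>

```python
assert 117 == 9 * 13 and 3 * 37 == 111  # 117 is NOT 3*37

assert pow(2, 117, 9) == 8
assert pow(2, 117, 13) == 5

# CRT: the unique residue x in [0, 117) with x ≡ 8 (mod 9) and x ≡ 5 (mod 13)
x = next(t for t in range(117) if t % 9 == 8 and t % 13 == 5)
print(x)  # 44
assert x == 44 == pow(2, 117, 117)
```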
|
1,991,238 | <p>How can I integrate this? $\int_{0}^{1}\frac{\ln(x)}{x+1} dx $</p>
<p>I've seen <a href="https://math.stackexchange.com/questions/108248/prove-int-01-frac-ln-x-x-1-d-x-sum-1-infty-frac1n2">this</a> but I failed to apply it on my problem.</p>
<p>Could you give some hint?</p>
<p>EDIT: From the hint of @H.H.Rugh, I've got $\sum_{n=1}^{\infty} \frac{(-1)^{n}}{n^2}$, since $\int_{0}^{1}x^{n}\ln(x)dx = -\frac{1}{(n+1)^2}$. How can I proceed with this calculation from here?</p>
| user361424 | 361,424 | <p>The Fibonacci numbers increase as $\phi^n$ (where $\phi$ is the golden mean $\frac{1+\sqrt{5}}{2}$), and harmonic numbers increase as $\log n$ (i.e., the natural log). Therefore, the difference between the harmonic numbers for successive Fibonacci numbers will approach $\log\phi \approx 0.481211825...$</p>
<p>To expand a bit, the Fibonacci numbers can be expressed as $\frac{\phi^n - (1-\phi)^n}{\sqrt{5}}$. (Try it! The fact that the equation $f(x+2) - f(x+1) - f(x) = 0$ requires a sum of powers of $\phi$ and $1-\phi$ follows from the fact that these are the solutions to the equation $x^2 - x - 1 = 0$, and the coefficients come from f(1) = f(2) = 1.) The second term vanishes, so large Fibonacci numbers can be approximated quite well as $\frac{\phi^n}{\sqrt{5}}$.</p>
<p>Since one definition of the natural logarithm is the integral from 1 to the parameter of the function $t^{-1}$, the harmonic numbers can be approximated as the natural logarithm, and in fact the difference approaches a constant (called $\gamma$, about 0.577). If you're not familiar with integrals, the fact that the harmonic numbers increase as a logarithm is suggested by Oresme's proof that the harmonic series diverges...</p>
<p>$$1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \frac{1}{5} + \frac{1}{6} + \frac{1}{7} + \frac{1}{8} + \frac{1}{9} + \cdots > 1 + \frac{1}{2} + \frac{1}{4} + \frac{1}{4} + \frac{1}{8} + \frac{1}{8} + \frac{1}{8} + \frac{1}{8} + \frac{1}{16} + \cdots$$</p>
<p>...and it just so happens that that logarithm is the natural logarithm.</p>
<p>So if you accept that for very large n, the harmonic numbers approach $\log n$, and that the Fibonacci numbers approach $\frac{\phi^n}{\sqrt{5}}$, then you get for two successive...</p>
<p>$$\log\left(\frac{\phi^{n+1}}{\sqrt{5}}\right) - \log\left(\frac{\phi^n}{\sqrt{5}}\right) = \log\left(\frac{\phi^{n+1}}{\phi^n}\right) = \log\phi$$</p>
<p>($\log x - \log y = \log \frac{x}{y}$ is a natural inverse of $\frac{e^x}{e^y} = e^{x-y}$.)</p>
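<p>The claimed limit is easy to probe numerically: for two moderately large successive Fibonacci numbers, the difference of harmonic numbers is already very close to $\log\phi\approx 0.4812$ (this is a brute-force check, not part of the argument):</p>

```python
import math

fibs = [1, 1]
while fibs[-1] < 500_000:
    fibs.append(fibs[-1] + fibs[-2])
a, b = fibs[-2], fibs[-1]  # successive Fibonacci numbers

diff = sum(1.0 / k for k in range(a + 1, b + 1))  # H(b) - H(a)
phi = (1 + math.sqrt(5)) / 2
print(diff, math.log(phi))
assert abs(diff - math.log(phi)) < 1e-4
```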
|
4,292,618 | <p>I have the following function <span class="math-container">$$\frac{1}{1+2x}-\frac{1-x}{1+x} $$</span>
How to find equivalent way to compute it but when <span class="math-container">$x$</span> is much smaller than 1? I assume the problem here is with <span class="math-container">$1+x$</span> since it probably would be equal to 1. I don't know if multiplying by <span class="math-container">$(1-x)$</span> would be helpful as it would be
<span class="math-container">$$ \frac{1-x}{1+x-2x^2}-\frac{(1-x)^2}{1-x^2} $$</span> so there's still term <span class="math-container">$1+x$</span>.</p>
| PierreCarre | 639,238 | <p>I think the purpose of the exercise is simply to provide an alternative, but equivalent, expression for <span class="math-container">$f$</span>. For this reason, I would rule out power series approximations (they do the job quite well but strictly speaking, they are not equivalent to the original expression). If you simply reduce to the same denominator, you no longer have a subtractive cancellation ruining the relative error and an acceptable (although not optimal) possibility would simply be
<span class="math-container">$$
f(x)=\dfrac{2x^2}{(1+2x)(1+x)}.
$$</span></p>
|
1,071,040 | <p>I found <a href="https://math.stackexchange.com/questions/549065/how-exactly-do-you-measure-circumference-or-diameter">How exactly do you measure circumference or diameter?</a> but it was more related to how people measured circumference and diameter in old days.</p>
<p><strong>BUT</strong> now we have a formula; since the value of $\pi$ cannot be determined exactly, how can I accurately calculate the circumference of a circle?</p>
<p>Is there any other, perhaps physical, means by which I can calculate the correct circumference?</p>
<p>thank you</p>
| slinshady | 194,678 | <p>The correct answer to the question of what the circumference of a circle with diameter $d$ is would be $\pi \cdot d$. Of course this is not a satisfying answer. But since this ridiculous number $\pi$ cannot even be described as the root of a polynomial with coefficients in $\mathbb Q$, we can only approximate $\pi$. This is not a bad thing though. For most, if not all, applications we can approximate $\pi$ well enough that we don't realize it is an approximation. For pure mathematics, we can just use the symbol ''$\pi$''</p>
<p>I hope this helps you a bit</p>
|
3,091,353 | <p>There are 2 definitions of <strong><em>Connected Space</em></strong> in my lecture notes, I understand the first one but not the second. The first one is:</p>
<blockquote>
<p>A topological space <span class="math-container">$(X,\mathcal{T})$</span> is connected if there does not exist
<span class="math-container">$U,V\in\mathcal{T}$</span> such that <span class="math-container">$U\neq\emptyset$</span>, <span class="math-container">$V\neq\emptyset$</span>, <span class="math-container">$U\cap V=\emptyset$</span> and <span class="math-container">$X=U\cup V$</span></p>
</blockquote>
<p>which makes sense. It is saying that connected spaces can't be cut up into parts that have nothing to do with each other.</p>
<p>The second definition is: </p>
<blockquote>
<p>A topological space <span class="math-container">$(X,\mathcal{T})$</span> is connected if <span class="math-container">$\emptyset$</span> and <span class="math-container">$X$</span> are the only subsets of <span class="math-container">$X$</span> which are closed and open</p>
</blockquote>
<p>which makes no intuitive sense to me, especially as a definition of connectedness. </p>
<p>Any intuitive explanation behind this second definition?</p>
| postmortes | 65,078 | <p>The second definition is equivalent to the first: suppose there is a set <span class="math-container">$U$</span> which is neither <span class="math-container">$X$</span> nor <span class="math-container">$\emptyset$</span> and is both open and closed. Then <span class="math-container">$U^c$</span>, the complement of <span class="math-container">$U$</span> in <span class="math-container">$X$</span> is both open and closed as well (since <span class="math-container">$U$</span> is open implies <span class="math-container">$U^c$</span> is closed and <span class="math-container">$U$</span> is closed implies <span class="math-container">$U^c$</span> is open). Thus <span class="math-container">$U$</span> and <span class="math-container">$U^c$</span> are two non-empty open sets, neither of which is the whole space, with <span class="math-container">$\emptyset$</span> intersection and whose union <em>is</em> the whole space.</p>
<p>I'm not sure there's a particular intuition here: rather this is a technical reformulation of the first definition that can be useful as a way of determining when a space is connected or not.</p>
|
3,091,353 | <p>There are 2 definitions of <strong><em>Connected Space</em></strong> in my lecture notes, I understand the first one but not the second. The first one is:</p>
<blockquote>
<p>A topological space <span class="math-container">$(X,\mathcal{T})$</span> is connected if there does not exist
<span class="math-container">$U,V\in\mathcal{T}$</span> such that <span class="math-container">$U\neq\emptyset$</span>, <span class="math-container">$V\neq\emptyset$</span>, <span class="math-container">$U\cap V=\emptyset$</span> and <span class="math-container">$X=U\cup V$</span></p>
</blockquote>
<p>which makes sense. It is saying that connected spaces can't be cut up into parts that have nothing to do with each other.</p>
<p>The second definition is: </p>
<blockquote>
<p>A topological space <span class="math-container">$(X,\mathcal{T})$</span> is connected if <span class="math-container">$\emptyset$</span> and <span class="math-container">$X$</span> are the only subsets of <span class="math-container">$X$</span> which are closed and open</p>
</blockquote>
<p>which makes no intuitive sense to me, especially as a definition of connectedness. </p>
<p>Any intuitive explanation behind this second definition?</p>
| Mark | 470,733 | <p>First of all the definitions are equivalent, you already got a few answers about that. I'll try to add some intuition. If <span class="math-container">$X$</span> is a topological space and <span class="math-container">$A\subseteq X$</span> then you can split the space into three parts: the interior of <span class="math-container">$A$</span>, the boundary of <span class="math-container">$A$</span> and the exterior of <span class="math-container">$A$</span>. The set <span class="math-container">$A$</span> is open if its boundary is contained in <span class="math-container">$X\setminus A$</span>, and it is closed if its boundary is contained in <span class="math-container">$A$</span>. So to say that <span class="math-container">$A$</span> is both open and closed is the same thing as to say that the boundary of <span class="math-container">$A$</span> is empty. Now imagine that the interior and the exterior of such a set <span class="math-container">$A$</span> are both not empty. The set has no boundary which can be crossed so intuitively to get from a point in the interior to a point in the exterior we have to "jump". This exactly means that we split the space into two parts which have nothing to do with each other, so such a space <span class="math-container">$X$</span> is not connected.</p>
<p>On the other hand if every set <span class="math-container">$A\subseteq X$</span> such that both <span class="math-container">$A$</span> and <span class="math-container">$X\setminus A$</span> are not empty has a boundary then intuitively it means that we can enter any set and get out of any set in a straight way by crossing its boundary. So it makes sense to say such a space <span class="math-container">$X$</span> is connected. </p>
|
1,482,104 | <p>Let $X$ have a uniform distribution with p.d.f. $f(x) = 1$, $x$ is in $(0, 1)$, zero elsewhere.
Find the p.d.f. of $Y = -2 \ln X$.</p>
<p>I don't think this is a very difficult question, I just don't really understand what it is asking or where to start. Any help would be very much appreciated. Thank you! </p>
<p>Update: I did
$$F(y)= P(Y \le y) = P(-2\ln x \le y) = P(\ln x \ge -y/2) = P(x \ge e^{-y/2}).$$
Then $x=e^{-y/2}$ and $dx/dy = -\frac{1}{2}e^{-y/2}$.
Is this all I need to do? Also, I'm not 100% sure why I am using inequalities here- can someone give me a quick explanation? </p>
| alecsphys | 164,662 | <p>You are asked to find the probability distribution of the random variable $Y$ that is related to the random variable $X$ by the relation $Y=-2\ln X$, with $X$ uniform in the interval $[0,1]$. You can solve it by considering a change of variable applied to the cumulative function $F(x)$:
$$F(x)=\int_0^x f(x)dx = \int_\infty^y f(y)\Big|\frac{dx}{dy}\Big|^{-1}dy$$
so that
$$f(y) = f(x)\Big|\frac{dx}{dy}\Big|_y$$
and in your situation $x=\exp(-y/2)$, so $\Big|\frac{dx}{dy}\Big|_y = \frac{1}{2}\exp(-y/2)$, so $$f(y)= \frac{1}{2}\exp(-y/2)$$</p>
<p>If you try to integrate $f(y)$ between $0$ and $\infty$ you can verify that it gives you $1$.</p>
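<p>A Monte Carlo check of the result (the seed and sample size below are arbitrary choices): drawing $X$ uniform on $(0,1]$ and setting $Y=-2\ln X$, the empirical tail should match the survival function $e^{-y/2}$ of the density $\frac{1}{2}e^{-y/2}$:</p>

```python
import math
import random

random.seed(0)
n = 200_000
ys = [-2 * math.log(1.0 - random.random()) for _ in range(n)]  # X uniform on (0, 1]

for y in (0.5, 1.0, 2.0, 4.0):
    emp = sum(1 for t in ys if t > y) / n      # empirical P(Y > y)
    assert abs(emp - math.exp(-y / 2)) < 0.01  # matches exp(-y/2)
print("empirical tail matches the exponential density with rate 1/2")
```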
|
749,714 | <p>Does anyone know how to show this, preferably <strong>without</strong> using modular arithmetic?</p>
<p>For any prime $p>3$ show that 3 divides $2p^2+1$ </p>
| mathse | 136,490 | <p>Since $p>3$ it holds that $p\equiv 1,2\pmod{3}$. Then $p^2\equiv 1\pmod{3}$ and then $2p^2\equiv 2\pmod{3}$. Adding $1$ yields the result.</p>
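<p>A brute-force check of the claim over a range of primes:</p>

```python
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

for p in range(5, 1000):
    if is_prime(p):
        assert (2 * p * p + 1) % 3 == 0
print("3 divides 2p^2+1 for every prime 5 <= p < 1000")
```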
|
4,941 | <p>I was reviewing my class notes and found the following:</p>
<p>"The name 'torsion' comes from topology and refers to spaces that are twisted, ex. Möbius band"</p>
<p>In our notes we used the following definition for torsion element and torsion module:
An element m of an R-module M is called a torsion element if $rm=0$ for some $r\in R$.
A torsion module is a module which consists solely of torsion elements</p>
<p>What is the relationship between torsion modules and twisted spaces? Was the definition of torsion module somehow motivated from topological considerations of twisted spaces?</p>
<p>I don't really see any obvious connection. I'm taking my first topology class this semester, so I apologize if this is something you learn about later in courses like algebraic topology, but I haven't been able to find any explanation of this.</p>
| Matt E | 221 | <p>When you compute the homology groups of "twisted" spaces (which are abelian groups), you (sometimes) find that they contain non-zero torsion elements; furthermore, the presence of these particular elements in the homology is due to the twisting (in that, when you compute the homology groups, you see that is the twisting in the space that causes the calculation to give rise to torsion elements). </p>
<p>Since you haven't studied algebraic topology yet, I won't say more here. Hopefully, while necessarily vague, the above description gives you some feeling for the meaning of the remark in your notes.</p>
|
2,672,908 | <p>Hey I was given this question in my discrete math class, and I'm unsure of what I should do!</p>
<blockquote>
<p>Prove that if $x$ is coprime with $6$ and $x$ is coprime with $8$, then $x$ is coprime with 24.</p>
</blockquote>
<p>I think I have to use the GCD theorem or co-primality theorem but I don't think what I'm doing is correct but this is what I have so far
$$
1 = ax + by\\
1 \times 1 = (ax + cy) (bz + cw)\\
\gcd(a, c) = 1\\
\gcd(b, c) = 1\\
\gcd((ab)/2, c) = 1
$$
Thanks in advance!</p>
| Donald Splutterwit | 404,247 | <p>By Bezout we have
\begin{eqnarray*}
Ax+6B=1 \\
Cx+8D=1.
\end{eqnarray*}
Multiply these equations
\begin{eqnarray*}
x(ACx+6BC+8AD)+24\times 2BD =1.
\end{eqnarray*}</p>
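<p>The resulting Bezout certificate for $\gcd(x,24)=1$ can be verified mechanically; the helper <code>ext_gcd</code> below is a standard extended Euclidean algorithm, introduced here for illustration:</p>

```python
def ext_gcd(a, b):
    # returns (g, u, v) with u*a + v*b == g == gcd(a, b)
    if b == 0:
        return a, 1, 0
    g, u, v = ext_gcd(b, a % b)
    return g, v, u - (a // b) * v

for x in (5, 7, 11, 25, 49, 121):
    gA, A, B = ext_gcd(x, 6)   # A*x + 6*B == 1
    gC, C, D = ext_gcd(x, 8)   # C*x + 8*D == 1
    assert gA == gC == 1
    # product of the two identities, grouped exactly as above:
    assert x * (A*C*x + 6*B*C + 8*A*D) + 24 * (2*B*D) == 1
print("Bezout certificate verified: gcd(x, 24) = 1")
```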
|
4,043,787 | <p>I have <span class="math-container">$$f_n(x)=\begin{cases}
\frac{1}{n} & |x|\leq n, \\
0 & |x|>n .
\end{cases}$$</span></p>
<p>Why cannot be dominated by an integrable function <span class="math-container">$g$</span> by the Dominated Convergence Theorem? I am also wondering what exactly it means for a function <span class="math-container">$g$</span> to dominate a sequence of functions, since I believe my definition of this is what I am understanding incorrectly.</p>
| saulspatz | 235,128 | <p><span class="math-container">$f_n\to 0$</span> on <span class="math-container">$\mathbb{R}$</span>, so if the <span class="math-container">$f_n$</span> were dominated by an integrable function, DCT would give <span class="math-container">$$\lim_{n\to\infty}\int_{\mathbb{R}}f_n(x)\,\mathrm{d}x=0$$</span> whereas the limit is obviously <span class="math-container">$2$</span>, since <span class="math-container">$\int_{\mathbb{R}}f_n(x)\,\mathrm{d}x=\frac{1}{n}\cdot 2n=2$</span> for every <span class="math-container">$n$</span>.</p>
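<p>To make the failure concrete (a numerical sketch, not part of the argument): each <span class="math-container">$f_n$</span> integrates to <span class="math-container">$\frac{1}{n}\cdot 2n=2$</span>, while the smallest candidate dominating function, the pointwise envelope <span class="math-container">$\sup_n f_n(x)$</span>, already has a divergent (harmonic) integral:</p>

```python
import math

def f(n, x):
    return 1.0 / n if abs(x) <= n else 0.0

for n in (1, 10, 1000):
    assert (1.0 / n) * (2 * n) == 2.0  # exact integral of f_n over [-n, n]
    assert f(n, 0.0) == 1.0 / n        # sup |f_n| = 1/n -> 0 pointwise

# envelope: sup_n f(n, x) = 1/max(1, ceil(|x|)); its integral over [0, N]
# dominates the harmonic series, so no integrable g can dominate all f_n
N = 100_000
envelope_integral = sum(1.0 / k for k in range(1, N + 1))
assert envelope_integral > math.log(N)
print(envelope_integral)
```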
|
2,763,735 | <p>Is it true that $$\mathbb{Z/4Z\subseteq Z/2Z}$$
Why precisely? Or does the reverse $$\mathbb{Z/2Z \subseteq Z/4Z}$$ hold? I'm a beginner. How do I justify the true inclusion?
How do I visualize $$\mathbb{Z/2Z \subseteq Z/4Z}$$
Thank you very much.</p>
| lhf | 589 | <p>$\mathbb{Z}/4\mathbb{Z}\subseteq \mathbb{Z}/2\mathbb{Z}$ cannot be true because $\mathbb{Z}/4\mathbb{Z}$ has $4$ elements but $\mathbb{Z}/2\mathbb{Z}$ has only $2$ elements.</p>
<p>$\mathbb{Z}/2\mathbb{Z}\subseteq \mathbb{Z}/4\mathbb{Z}$ is not true because a class mod $2$ is not a class mod $4$.</p>
<p>Nevertheless, the classes $0 + 4\mathbb{Z}$ and $2 + 4\mathbb{Z}$ behave additively like the classes $0 + 2\mathbb{Z}$ and $1 + 2\mathbb{Z}$. In that sense, we may say that $\mathbb{Z}/2\mathbb{Z}\subseteq \mathbb{Z}/4\mathbb{Z}$, but it is a bit of a stretch.</p>
|
1,262,036 | <p>In complex analysis, this seems to be a really helpful way to avoid having to expand out Laurent series. I am unclear, however, when it is appropriate to use this property.</p>
<p>In specific, I'm worried I CAN'T use this method on the following:</p>
<p>$$\frac{e^z}{z^3 \sin(z)}$$ at the origin. This looks really messy, because using Laurent series, I'll have to divide series. Can I use the property stated above? If not, is there a more efficient way I can approach this problem?</p>
| Demosthene | 163,662 | <p>Using the <a href="http://en.wikipedia.org/wiki/Residue_%28complex_analysis%29#Limit_formula_for_higher_order_poles" rel="nofollow">limit formula for higher order poles</a>, and the fact that $f(z)=\dfrac{e^z}{z^3\sin z}$ admits an order $4$ pole at $z_0=0$, we get:
$$\mathrm{Res}(f,0)=\dfrac{1}{3!}\lim_{z\to 0}\dfrac{d^3}{dz^3}\left[z^4\dfrac{e^z}{z^3\sin z}\right]=\dfrac{1}{6}\cdot 2=\dfrac{1}{3}$$</p>
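As a sanity check (my own addition, not part of the answer): since <span class="math-container">$f(z)=z^{-4}\,e^z\,\frac{z}{\sin z}$</span> near the origin, the residue is the coefficient of <span class="math-container">$z^3$</span> in <span class="math-container">$e^z\cdot\frac{z}{\sin z}$</span>, which can be computed with exact rational series arithmetic:

```python
from fractions import Fraction
from math import factorial

# Truncated power series (coefficients of z^0 .. z^5) with exact rationals.
N = 6
exp_series = [Fraction(1, factorial(k)) for k in range(N)]   # e^z
sinc = [Fraction(0)] * N                                     # sin(z)/z
sinc[0], sinc[2], sinc[4] = Fraction(1), Fraction(-1, 6), Fraction(1, 120)

# Invert sin(z)/z term by term to get the series of z/sin(z).
inv = [Fraction(0)] * N
inv[0] = Fraction(1)
for k in range(1, N):
    inv[k] = -sum(sinc[j] * inv[k - j] for j in range(1, k + 1))

# Residue = coefficient of z^3 in e^z * (z / sin z).
res = sum(exp_series[j] * inv[3 - j] for j in range(4))
assert res == Fraction(1, 3)   # agrees with the limit computation above
```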
|
2,934,906 | <p>This is a shot in the dark and a pretty tall order, but I am wondering if anybody could give a good explanation of the spectral theorem for the reals to high schoolers who have only seen computational calculus of one variable and have <strong>not</strong> taken a linear algebra course before? An overview of the proof, why does one care about the result, what is the result, how to visualize it, etc.</p>
<p>I've tried myself to explain to high school students this result but failed, so maybe there is someone out there who is a better teacher who can do this.</p>
| Rushabh Mehta | 537,349 | <p>I don't really know why you'd want to show this theorem to your students, but find below a decent motivation of why the theorem can be helpful in a simple context, as well as an example of how it's used. I've also linked to a paper that presents a rather simple and elegant proof of the theorem as well, if you would like to include that.</p>
<hr>
<p>Suppose we had a quadratic equation of <span class="math-container">$n$</span> variables <span class="math-container">$x_1,...,x_n$</span>. This equation can be expressed as something like <span class="math-container">$$A\cdot x_1^2 +B\cdot x_1\cdot x_2 + C\cdot x_1\cdot x_3...$$</span>where <span class="math-container">$A,B,C...$</span> are coefficients. This can be quite a mess. It's pretty easy to see that with <span class="math-container">$n$</span> variables, we could have up to <span class="math-container">$n^2$</span> terms. Sure, we could pair these terms up, but even so, dealing with this large of an equation is pretty terrible, so let's try to write it in summation notation to see if we can condense the equation: <span class="math-container">$$\sum\limits_{i=1}^n\sum\limits_{j=1}^n H_{ij}\cdot x_i\cdot x_j$$</span>Where each <span class="math-container">$H_{ij}$</span> is a coefficient. This still isn't very pretty. But what if we could simply write the quadratic as the following: <span class="math-container">$$\sum\limits_{i=1}^n H_{ii}\cdot x_i^2$$</span>That would be fantastic! Such a quadratic is easy to understand: In each coordinate direction <span class="math-container">$x_i$</span>, the graph is a parabola, opening upward if <span class="math-container">$H_{ii}>0$</span> and opening downward if <span class="math-container">$H_{ii}<0$</span>. There is also the degenerate case <span class="math-container">$H_{ii}=0$</span>, in which case the quadratic is constant with respect to <span class="math-container">$x_i$</span> and the graph in that direction is a horizontal line.</p>
<p>Unfortunately, we have no way to write our general quadratic as such, nor any reason to believe it's possible. So what do we do? We rewrite this problem with matrices, and see what Linear Algebra can do for us.</p>
<p>First, we can create a matrix <span class="math-container">$H$</span> of all of our <span class="math-container">$H_{ij}$</span> coefficients from the first summation to represent our general quadratic. This matrix would have dimension <span class="math-container">$(n\times n)$</span>. For example, if we had the quadratic of <span class="math-container">$3$</span> variables <span class="math-container">$x,y,z$</span> which was the following <span class="math-container">$$(x+y+2z)^2$$</span>The corresponding symmetric <span class="math-container">$H$</span> (splitting each cross term evenly, so that <span class="math-container">$H_{ij}=H_{ji}$</span>) would be <span class="math-container">$$H=\begin{pmatrix}1&1&2\\1&1&2\\2&2&4\end{pmatrix}$$</span></p>
<p>What is nice about the <span class="math-container">$H$</span> matrix is that is we define <span class="math-container">$x$</span> to be a row vector of <span class="math-container">$x_1,...x_n$</span>, then we can rewrite our quadratic as (Note: <span class="math-container">$^T$</span> designates the transpose) <span class="math-container">$$x\cdot H\cdot x^T$$</span>Much cleaner! Now what can we do with this? We want to ensure that the only non-zero <span class="math-container">$H$</span> values are those of the form <span class="math-container">$H_{ii}$</span>, i.e., the matrix <span class="math-container">$H$</span> only has non-zero values on the diagonals. This sort of matrix is called a <strong>diagonal</strong> matrix, and the process of making a matrix diagonal is called <strong>diagonalizing</strong> the matrix.</p>
<p>Now, what qualities must <span class="math-container">$H$</span> have in order to be diagonalizable? The <strong>Spectral Theorem</strong> tells us that if <span class="math-container">$H_{ij}=H_{ji}$</span> for all <span class="math-container">$i,j\leq n$</span>, our matrix <span class="math-container">$H$</span> is diagonalizable. That's fantastic, since our matrix, by definition, satisfies that! In particular, the Spectral Theorem says the following about a symmetric matrix <span class="math-container">$H$</span>:</p>
<p><span class="math-container">$$\exists\textrm{ matrices }U,D\textrm{ such that }$$</span><span class="math-container">$$H=U\cdot D\cdot U^T$$</span><span class="math-container">$$U\cdot U^T=U^T\cdot U=I$$</span><span class="math-container">$$i\neq j\to D_{ij}=0$$</span></p>
<p><strong>Note</strong>: The <a href="http://www.math.lsa.umich.edu/~speyer/417/SpectralTheorem.pdf" rel="nofollow noreferrer">following</a> document provides an excellent and simple proof of the spectral theorem. This should be presented around here.</p>
<p>Why is this fantastic? Well, if we define <span class="math-container">$\alpha=x\cdot U$</span>, then <span class="math-container">$\alpha^T = U^T\cdot x^T$</span> and <span class="math-container">$$x\cdot H\cdot x^T = \alpha\cdot D\cdot \alpha^T$$</span>which is exactly the form we desire. Moreover, since <span class="math-container">$U$</span> and <span class="math-container">$U^T$</span> are invertible, the mapping from <span class="math-container">$x\to\alpha$</span> is bijective, so it's simply a change of coordinates. In essence, this theorem gives us the tools to, via a simple change of coordinates, convert the symmetric matrix <span class="math-container">$H$</span> to a diagonal matrix <span class="math-container">$D$</span>, which, as we discussed above, makes life a lot easier.</p>
<p>Let's run through an example. Let <span class="math-container">$q$</span> be the quadratic <span class="math-container">$$q(x)=x_1^2 + 6x_1x_2+x_2^2$$</span>So, <span class="math-container">$$H=\begin{pmatrix}1&3\\3&1\end{pmatrix}$$</span>By the Spectral Theorem, we find <span class="math-container">$$U=\begin{pmatrix}\frac1{\sqrt2}&\frac1{\sqrt2}\\\frac1{\sqrt2}&\frac{-1}{\sqrt2}\end{pmatrix}\quad D=\begin{pmatrix}4&0\\0&-2\end{pmatrix}$$</span></p>
<p>So, <span class="math-container">$$x\cdot U = \alpha \to \begin{pmatrix}x_1\\x_2\end{pmatrix}\cdot\begin{pmatrix}\frac1{\sqrt2}&\frac1{\sqrt2}\\\frac1{\sqrt2}&\frac{-1}{\sqrt2}\end{pmatrix}=\begin{pmatrix}\alpha_1\\\alpha_2\end{pmatrix}$$</span><span class="math-container">$$\alpha_1=\frac1{\sqrt2}\cdot(x_1+x_2)\quad \alpha_2=\frac1{\sqrt2}\cdot(x_1-x_2)$$</span></p>
<p>These two vectors serve as almost a new coordinate system for our quadratic as below<a href="https://i.stack.imgur.com/5nmau.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5nmau.png" alt="enter image description here"></a> </p>
<p>With our new coordinates, our quadratic becomes <span class="math-container">$$q(x') = 4\alpha_1^2-2\alpha_2^2$$</span>This tells us that in the direction of coordinate <span class="math-container">$\alpha_1$</span>, the function is a upwards facing parabola, vs in the direction of <span class="math-container">$\alpha_2$</span>, it's a downward facing one. You can see this more clearly on the following mathematica plot.</p>
<p><a href="https://i.stack.imgur.com/maiGr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/maiGr.png" alt="enter image description here"></a></p>
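As a quick numerical sanity check of this worked example (my own addition, not part of the original answer), one can verify that the change of coordinates really diagonalizes the quadratic:

```python
import math

# From the example above: H = [[1, 3], [3, 1]] represents
# q(x1, x2) = x1^2 + 6 x1 x2 + x2^2, with eigenvalues 4 and -2 and
# eigenvectors (1, 1)/sqrt(2) and (1, -1)/sqrt(2).
s = 1 / math.sqrt(2)

def q(x1, x2):
    return x1**2 + 6 * x1 * x2 + x2**2

for x1, x2 in [(0.7, -1.3), (2.0, 0.5), (-1.0, 3.0)]:
    a1 = s * (x1 + x2)   # alpha_1, coordinate along the first eigenvector
    a2 = s * (x1 - x2)   # alpha_2, coordinate along the second eigenvector
    assert math.isclose(q(x1, x2), 4 * a1**2 - 2 * a2**2)
```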
|
2,934,906 | <p>This is a shot in the dark and a pretty tall order, but I am wondering if anybody could give a good explanation of the spectral theorem for the reals to high schoolers who have only seen computational calculus of one variable and have <strong>not</strong> taken a linear algebra course before? An overview of the proof, why does one care about the result, what is the result, how to visualize it, etc.</p>
<p>I've tried myself to explain to high school students this result but failed, so maybe there is someone out there who is a better teacher who can do this.</p>
| sasquires | 99,630 | <p>Note that before students can understand the spectral theorem at all, they have to have a solid understanding of what eigenvalues and eigenvectors even are, so I'll start there. </p>
<p>A colleague (who is a smart guy but never took a linear algebra course) recently asked me to explain eigendecomposition to him. I started with the following example, which I think would work great for high schoolers.</p>
<p>Imagine that you have an operator that takes a circle centered at the origin and turns it into an ellipse by "stretching" (scaling) it. (This operator is actually just a general linear transformation on the plane, and so it can actually be applied to any set of points in the plane, but start with circles and ellipses.) You can draw some examples. There is the identity operator, which leaves the circle alone. The ellipse can be oriented along the <span class="math-container">$x$</span>- and <span class="math-container">$y$</span>-axes, or it can be rotated with respect to them. The lengths of the semimajor and semiminor axes can be large or small.</p>
<p>The next point to make is that such an operator can be represented by a matrix acting on the points. With high school students, I would avoid writing an equation for the whole curve, but you can say, I have an operator</p>
<p><span class="math-container">$$ \begin{pmatrix} a & b \\ c & d \end{pmatrix} $$</span></p>
<p>and I can kind of guess what the ellipse will look like by seeing how it acts on some set of points, such as</p>
<p><span class="math-container">$$ \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \begin{pmatrix} -1 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 1 \end{pmatrix}, \begin{pmatrix} \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} \end{pmatrix}, \textrm{etc.} $$</span></p>
<p>In the case where the ellipse ends up rotated with respect to the given coordinate axes, then you can say, it would be much simpler if we rotated our coordinates, since in that case, you are just stretching along the new axes (where we can call the new coordinates <span class="math-container">$\alpha$</span> and <span class="math-container">$\beta$</span>). In fact, you can basically do away with the whole matrix in that case, because the map becomes</p>
<p><span class="math-container">$$ \begin{pmatrix} \alpha \\ \beta \end{pmatrix} \to \begin{pmatrix} \lambda \alpha \\ \mu \beta \end{pmatrix} $$</span></p>
<p>where <span class="math-container">$\lambda$</span> and <span class="math-container">$\mu$</span> are just two numbers that are specific to the given operator and that we call the eigenvalues. This can be motivated by looking at the results of the graphs, and it is very intuitive that you can just "stretch" a circle to get an ellipse. The coordinates that we use to do this (the <span class="math-container">$\alpha$</span>- and <span class="math-container">$\beta$</span>-axes, which correspond to the semimajor/semiminor axes of the ellipse) are the eigenvectors.</p>
<p>Then you can discuss how this simplifies the whole problem of figuring out what the matrix is doing, because it just boils down to information that can be digested intuitively, whereas it is hard to look at given values of <span class="math-container">$a$</span>, <span class="math-container">$b$</span>, <span class="math-container">$c$</span>, and <span class="math-container">$d$</span> and figure out what's going on.</p>
<p>Then you can mention that this behavior generalizes nicely to larger dimensions and that there are zillions of applications of this that they may see in future math classes. (In particular, as a physicist, then both classical and quantum mechanics rely heavily on this. But it is everywhere else too. Linear-algebra-based analysis of "Big Data" may be a draw for some students.)</p>
<p>Now that they know what eigenvalues and eigenvectors are, then how does the spectral theorem come up? Well, consider some non-diagonalizable matrices, such as</p>
<p><span class="math-container">$$ \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} $$</span></p>
<p>What does this do to the circle? It flattens it into a line. Different points on the circle map to the same point. This has an important consequence. There is no way to tell what the important axes were, because there are many ways that you could have rotated the circle, flattened it (along some other axis), and rotated it back and gotten the same result. In particular, for example, the matrix</p>
<p><span class="math-container">$$ \begin{pmatrix} 0 & -1 \\ 0 & 0 \end{pmatrix} $$</span></p>
<p>would have given you the same set of points. Since we can't identify this nicely as a rotation and stretching, then the concept of eigenvalues and eigenvectors does not make sense for this matrix.</p>
<p>(Before anyone points this out, I realize that there are multiple holes in this argument. For one thing, since this whole argument is based on the action on a set of points, not a single point, then many different operators can give you the same result. Any rotation matrix will take the circle to itself. Even if we consider the action on a single point, my description above conflates non-invertibility with non-diagonalizability, which are clearly distinct concepts. However, they are related, and there is only so far you can go with circles and ellipses. If you spend some more time thinking about this, you can probably get rid of a few of these holes, especially if you are willing to spend a long time on this with the students.)</p>
<p>Finally, the spectral theorem basically guarantees that (for real matrices) if we have <span class="math-container">$b=c$</span>, the matrix will always be diagonalizable. That is, we can always consider it to be a combination of a rotation and a scaling operation.</p>
<p>One bump that you could run into is that some simple matrices along these lines have complex numbers as eigenvalues and eigenvectors. Whether you want to go into this really depends on how good your students are and how much time you have.</p>
<p>This approach is similar to @RushabhMehta's, but I think it is considerably simpler. (I used to be a high school math teacher, so I think that it's important to have an introductory example that you can use as a basis for understanding before you start building theory on it.)</p>
<p>Note that the students' calculus knowledge is irrelevant here, but I am assuming that they know a little bit about analytic geometry. Your students have some computational experience, so they could actually automate the process and see how (1) the graphs and (2) the eigenvalues and eigenvectors depend on the values of <span class="math-container">$a$</span>, <span class="math-container">$b$</span>, <span class="math-container">$c$</span>, and <span class="math-container">$d$</span>.</p>
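If the students can run code, the stretching picture itself can be automated. Below is a small sketch of my own (the matrix [[2, 1], [1, 2]], with eigenvalues 3 and 1 along the directions (1, 1) and (1, -1), is just an illustrative choice): it checks that every point of the unit circle is mapped onto the ellipse with semi-axes 3 and 1.

```python
import math

# Image of the unit circle under the symmetric matrix M = [[2, 1], [1, 2]]:
# eigenvalues 3 and 1, eigenvectors (1, 1)/sqrt(2) and (1, -1)/sqrt(2).
M = [[2.0, 1.0], [1.0, 2.0]]
s = 1 / math.sqrt(2)

for k in range(16):
    t = 2 * math.pi * k / 16
    x, y = math.cos(t), math.sin(t)          # a point on the unit circle
    u = M[0][0] * x + M[0][1] * y            # apply the matrix
    v = M[1][0] * x + M[1][1] * y
    a = s * (u + v)                          # coordinate along eigenvector (1, 1)
    b = s * (u - v)                          # coordinate along eigenvector (1, -1)
    # the image point lies on the ellipse with semi-axes 3 and 1
    assert abs((a / 3) ** 2 + b ** 2 - 1) < 1e-12
```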
|
1,891,831 | <p>In linear algebra we have vectors:$$
\mathbf{A}=(x,y,z)=x\mathbf{\hat e}_x+y\mathbf{\hat e}_y+z\mathbf{\hat e}_z$$
We have vector algebra, i.e. <a href="https://en.wikipedia.org/wiki/Euclidean_vector#Basic_properties" rel="nofollow">vector addition, dot product, lines, planes, etc</a>. A vector have a magnitude and a direction.</p>
<p>However, in multivariable calculus we also have vectors:$$
\mathbf{A}(t)=(x(t),y(t),z(t))=x(t)\mathbf{\hat e}_x+y(t)\mathbf{\hat e}_y+z(t)\mathbf{\hat e}_z
$$
Here we do derivatives and integrals.</p>
<p>What is the difference? Are there different types of vectors? </p>
<p>I have always thought of vectors as the representation in the above link.</p>
| Emilio Novati | 187,568 | <p>Multivariable calculus is essentially the study of functions between vector spaces. A function $f: \mathbb{R}^m \to \mathbb{R}^n$ is a function of $m$ variables that represents a field of $n-$dimensional vectors.</p>
|
2,668,447 | <p>Let <span class="math-container">$F$</span> be a subfield of a field <span class="math-container">$K$</span> and let <span class="math-container">$n$</span> be a positive integer. Show that a nonempty linearly-independent subset <span class="math-container">$D$</span> of <span class="math-container">$F^n$</span> remains linearly independent when considered as a subset of <span class="math-container">$K^n$</span>.</p>
<p>I'm not sure how to proceed, I tried to assume that <span class="math-container">$D$</span> is dependent in <span class="math-container">$F^n$</span> and then conclude that is also dependent in <span class="math-container">$K^n$</span>. </p>
<p>It is Exercise 178 of Jonathan Golan, <em>The Linear Algebra a Beginning Graduate Student Ought to Know</em>.</p>
| Mariano Suárez-Álvarez | 274 | <p>Hint: express the linear independence as the non-vanishing of some determinant.</p>
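To see the hint in action in a toy case (my own illustration, with F the rationals and K the reals, not part of the hint): the determinant certifying independence is the same arithmetic expression in either field, so a nonzero value over F remains nonzero over K.

```python
from fractions import Fraction

# Two vectors in F^2 with F = Q (the rationals)
v1 = [Fraction(1), Fraction(2)]
v2 = [Fraction(3), Fraction(5)]

# Determinant over F: nonzero, so v1, v2 are independent over Q
det_F = v1[0] * v2[1] - v1[1] * v2[0]
assert det_F != 0

# The same determinant, evaluated in the larger field K = R
det_K = float(v1[0]) * float(v2[1]) - float(v1[1]) * float(v2[0])
assert det_K != 0   # still nonzero: independence survives the field extension
```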
|
3,050,497 | <p>The operator is given by
<span class="math-container">$$A=\begin{pmatrix}
1 & 0 & 0\\
1 & 1 & 0\\
0 & 0 & 4
\end{pmatrix}$$</span>
I have to write down the operator <span class="math-container">$$B=\tan(\frac{\pi} {4}A)$$</span>
I calculate <span class="math-container">$$\mathcal{R} (z) =\frac{1}{z\mathbb{1}-A}=\begin{pmatrix}
\frac{1}{z-1} & 0 & 0\\
\frac{1}{(z-1)^2} & \frac{1}{z-1} & 0\\
0 & 0 & \frac{1}{z-4}\end{pmatrix} $$</span></p>
<p>Now the B operator is given by:
<span class="math-container">$$B=\begin{pmatrix}
Res_{z=1}\frac{\tan(\frac{\pi}{4}z)}{z-1} & 0 & 0\\
Res_{z=1}\frac{\tan(\frac{\pi}{4}z)}{(z-1)^2} & Res_{z=1}\frac{\tan(\frac{\pi}{4}z)}{z-1} & 0\\
0 & 0 & Res_{z=4}\frac{\tan(\frac{\pi}{4}z)}{z-4}
\end{pmatrix} $$</span></p>
<p>For me the result should be
<span class="math-container">$$ B=\begin{pmatrix}
1 & 0 & 0\\
\frac{\pi}{2} & 1 & 0\\
0 & 0 & 0\end{pmatrix}$$</span></p>
<p>But the exercise gives as solution:
<span class="math-container">$$ B=\begin{pmatrix}
1 & 0 & 0\\
\frac{\pi}{4} & 1 & 0\\
0 & 0 & 1\end{pmatrix}$$</span></p>
<p>Where is the error?
Thank you and sorry for bad English </p>
| TonyK | 1,508 | <p>Hint:</p>
<p><span class="math-container">$$a+b+c+ab+ac+bc+abc=(1+a)(1+b)(1+c)-1$$</span></p>
<p>Therefore <span class="math-container">$(1+a)(1+b)(1+c)=?$</span></p>
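A quick numerical spot-check of the hint's identity (my own addition, not part of the answer):

```python
import random

random.seed(1)  # reproducible spot-check
for _ in range(5):
    a, b, c = (random.uniform(-2, 2) for _ in range(3))
    lhs = a + b + c + a*b + a*c + b*c + a*b*c
    rhs = (1 + a) * (1 + b) * (1 + c) - 1
    assert abs(lhs - rhs) < 1e-12
```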
|
785,188 | <p>I found a very simple algorithm that draws values from a Poisson distribution from <a href="http://www.akira.ruc.dk/~keld/research/javasimulation/javasimulation-1.1/docs/report.pdf" rel="nofollow">this project.</a></p>
<p>The algorithm's code in Java is:</p>
<pre><code>public final int poisson(double a) {
double limit = Math.exp(-a), prod = nextDouble();
int n;
for (n = 0; prod >= limit; n++)
prod *= nextDouble();
return n;
}
</code></pre>
<p><a href="http://docs.oracle.com/javase/7/docs/api/java/util/Random.html#nextDouble%28%29" rel="nofollow"><code>nextDouble()</code></a> is a function from the <code>Random</code> package in Java that returns a uniformly distributed random <code>double</code>, for example <code>0.885598042879084</code>.</p>
<p>I can't understand how this creates a Poisson distribution. </p>
<p>Can someone explain?</p>
| Lightspark | 239,618 | <p>Suppose that we have $k$ independent and identically distributed exponential random variables $X_1, \dotsc, X_k$ with parameter $\lambda$. If we define a counting process $\{N(t)\}_{t \geq 0}$ such that $S_k := X_1 + \dotsb + X_k$ is the occurrence time of the $k$-th event, then this is a Poisson process with rate $\lambda$. This means that $\mathbb{P}\{N(t) \geq k\} = \mathbb{P}\{S_k \leq t\}$ (at least $k$ events are counted by time $t$ exactly when the $k$-th event has occurred by then), or more informally that you can describe the distribution of $N(t)$ through the distribution of $S_k$.</p>
<p>Now consider that if $X$ is an exponential random variable with parameter $\lambda$, then $Y := F_X(X) = 1 - e^{-\lambda X}$ is a standard uniform random variable (here $F_X$ is the cumulative distribution function of $X$). Note also that
$$
F_X^{-1}(Y) = -\frac{1}{\lambda}\log(1 - Y)
$$
is an exponential random variable with parameter $\lambda$. Since also $U := 1 - Y$ is a standard uniform random variable, the random variable
$$
X' = -\frac{1}{\lambda}\log U
$$
is again an exponential random variable with parameter $\lambda$.</p>
<p>We want to count fewer than $k$ events in a given time interval $[0, t]$ (that we can suppose without loss of generality to be $[0, 1]$, i.e. we arbitrarily fix $t = 1$), that is we want $N(1) < k$. By what we observed at the beginning, this corresponds to the condition $S_k > 1$, which, in its turn, corresponds to the following:
$$
\sum_{i = 1}^k X_i > 1 \iff \sum_{i = 1}^k -\frac{1}{\lambda}\log U_i > 1 \iff -\frac{1}{\lambda}\log\bigg( \prod_{i = 1}^k U_i \bigg) > 1 \iff
\prod_{i = 1}^k U_i < e^{-\lambda},
$$
which is precisely the stopping condition in Knuth's algorithm: the returned value $n$ is the largest index with $\prod_{i = 1}^n U_i \geq e^{-\lambda}$, i.e. $n = N(1)$.</p>
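As an empirical check (a sketch I am adding, not part of the answer), the algorithm can be transcribed into Python and its sample mean and variance compared with the Poisson values, both of which should be close to the parameter:

```python
import math
import random

def poisson_variate(lam, rng):
    # Count how many uniform factors keep the running product at or above
    # e^{-lam}; by the derivation above this count is Poisson(lam) distributed.
    limit = math.exp(-lam)
    prod = rng.random()
    n = 0
    while prod >= limit:
        n += 1
        prod *= rng.random()
    return n

rng = random.Random(0)
lam = 3.0
samples = [poisson_variate(lam, rng) for _ in range(50_000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
assert abs(mean - lam) < 0.1 and abs(var - lam) < 0.2   # both are about lam
```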
|
628,682 | <p>As both a programmer and a math student, I am trying to come up with a fool-proof way to handle errors from subtractive cancellation caused by trying to evaluate $x-y$, where x,y are extended (long double) precision floating-point numbers. (Obviously, if x is very close to y, this causes problems.) I found two equivalent forms, ${x^2-y^2\over x+y}$ and $(\sqrt{x}-\sqrt{y})(\sqrt{x}+\sqrt{y})$. I was trying to evaluate regions where either form would work better, as well as the regions where either form might produce the same result, and/or a worse result in comparison to a straight subtraction.</p>
<hr>
<p>I have tried to do the error-analysis as such:</p>
<p>Let the long double c be defined such that $x-y=c$.Then, ${x^2-y^2\over x+y}=x-y=c$, which rewrites as ${x^2-y^2\over x-y}={x^2-y^2\over c}=x+y$, which means that, if ${x^2-y^2\over x+y}$ does better than x-y, then $|x+y|>1$.
Now, we take a look at $(\sqrt{x}-\sqrt{y})(\sqrt{x}+\sqrt{y})=x-y=c$. A rewriting of this yields $\sqrt{x}-\sqrt{y}={c\over \sqrt{x}+\sqrt{y}}$, which means that $\sqrt{x}+\sqrt{y}<1$ for this decomposition to work. (This also poses the additional condition that x,y>0.) It would then follow that $x+y=x-y+2y>1$, or, in other terms, $x-y>1-2y$. Also, $\sqrt{x}+\sqrt{y}<1$ can be rewritten as $\sqrt{x}<1-\sqrt{y}$, or after squaring both sides, $x-y<1-2\sqrt{y}$. This means that $1-2y<1-2\sqrt{y}$, which simplifies to $y>\sqrt{y}$ which returns $y>1$ (this is the only place where it is true....)</p>
<p>So, if y > 1, then the decomposition works better, right? Well, not according to my program, which can be found here: <a href="http://ideone.com/amwv9H" rel="nofollow">http://ideone.com/amwv9H</a> ; // Disclaimer: It is written in C++</p>
| Ross Millikan | 1,827 | <p>Once you have $x$ and $y$ and want to subtract them, you won't do better. What is useful is (if $x \approx y$, which is where the problem is) to find some number $a$ that is close to $x,y$ and you can subtract from them analytically. Now you are calculating $(x-a)-(y-a)$. For example, suppose $x=1+10^{-4}$ and $y=1+10^{-8}$. If you just do the subtraction $x-y$, you lose four decimal digits of precision because of the cancellation. If you can do $(x-1)-(y-1)=10^{-4}-10^{-8}$ you don't lose any.</p>
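The same idea (subtract the analytically known part instead of performing the subtraction in floating point) is what library functions such as `expm1` implement. A small sketch of my own, not from the answer:

```python
import math

x = 1e-12
naive = math.exp(x) - 1.0      # exp(x) is about 1 + 1e-12; subtracting 1
                               # cancels roughly 12 of the 16 significant digits
better = math.expm1(x)         # computes exp(x) - 1 without ever forming exp(x)

true_value = x + x * x / 2     # leading terms of the Taylor series
assert abs(better - true_value) / true_value < 1e-12
assert abs(naive - true_value) / true_value > 1e-6   # naive lost accuracy
```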
|
3,392,749 | <p>I'm trying to draw a dfa for this description</p>
<p>The set of strings over {a, b, c} that do not contain the substring aa,</p>
<p>The current issue I'm facing is how many states to start with. Any help on how to approach this problem?</p>
| Andreas Blass | 48,510 | <p>It seems to me that you can do this with three states, whose "meanings" are:</p>
<p>(1) I haven't seen two consecutive <span class="math-container">$a$</span>'s and either I'm just starting (haven't seen anything yet) or the last symbol I saw was not <span class="math-container">$a$</span>.</p>
<p>(2) I haven't seen two consecutive <span class="math-container">$a$</span>'s but the last symbol I've seen was <span class="math-container">$a$</span>.</p>
<p>(3) I've seen two consecutive <span class="math-container">$a$</span>'s.</p>
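Those three states translate directly into code. A minimal sketch (the state numbering and the function name are mine):

```python
def accepts(s):
    # State 1: haven't seen "aa"; last symbol was not 'a' (or at the start).
    # State 2: haven't seen "aa"; last symbol was 'a'.
    # State 3: have seen "aa" (dead state).  Accepting states: 1 and 2.
    state = 1
    for ch in s:
        if state == 3:
            break                      # once dead, always dead
        if ch == 'a':
            state = 3 if state == 2 else 2
        else:                          # 'b' or 'c'
            state = 1
    return state != 3

assert accepts("") and accepts("ababcab") and accepts("abab")
assert not accepts("aa") and not accepts("bcaab")
```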
|
977,446 | <p>Prove that $A\cap B = \emptyset$ iff $A\subset B^C$. I figured I could start by letting $x$ be an element of the universe and that $x$ is an element of $A$ and not an element of $B$. </p>
| Community | -1 | <p>$\textbf{Claim: } A\cap B=\emptyset$ iff $A\subset B^c$.</p>
<p>$Proof:\; A\cap B=\emptyset \Leftrightarrow (a\in A\Rightarrow a\notin B)\Leftrightarrow (a\in A\Rightarrow a\in B^c)\Leftrightarrow A\subset B^c$</p>
|
3,489,347 | <p><strong>Is there a simple way to characterize the functions in <span class="math-container">$C^\infty((0,1])\cap L^2((0,1])$</span>?</strong></p>
<p>That is, given a function <span class="math-container">$f(t)\in C^\infty((0,1])$</span>, is there a necessary/sufficient condition I can check to see if it's square integrable? An example of such a function is <span class="math-container">$f(t)=t^{-1/3}$</span>, which diverges as <span class="math-container">$t\to0$</span> but satisfies <span class="math-container">$\int_0^1 f(t)^2\,dt=3<\infty.$</span></p>
<hr>
<p><strong>Notes:</strong> I was hoping to prove something to the effect that <span class="math-container">$f(t)$</span> is square integrable if and only if <span class="math-container">$$\lim_{t\to0} \frac{f(t)^2}{t^p}=L < \infty$$</span> for some <span class="math-container">$p>-1$</span>. This is certainily a sufficient condition by the "<a href="https://services.math.duke.edu/~cbray/Stanford/2003-2004/Math%2042/limitcomp.pdf" rel="nofollow noreferrer">limit comparison test</a>" for improper integrals, but I'm not sure if it's necessary. (But, I also couldn't find a simple counterexample!)</p>
| Community | -1 | <p>First divide by <span class="math-container">$2N$</span> on both sides, </p>
<p><span class="math-container">$5\log N> N$</span> (since <span class="math-container">$N>0$</span> then the inequality stays the same)</p>
<p>Then by raising to the <span class="math-container">$e$</span> power on both sides (the exponential is an increasing function) you'll get</p>
<p><span class="math-container">$e^{5\log N}>e^{N}\implies e^{\log N^5}>e^N \implies N^5>e^{N}$</span></p>
<p>Can you end it from here?</p>
|
253,152 | <p>So I was given $f(x)$ continuous and positive on $[0,\infty)$, and need to show that $g(x)$ increasing on $(0,\infty)$</p>
<p>And $g(x)={\int_0^xtf(t)dt\over \int_0^xf(t)dt} $</p>
<p>So my approach is: I want to show that $g'(x)>0$, so I used the FTC and the quotient rule to compute $g'(x)$, but then I got stuck midway because I cannot simplify it.</p>
| martini | 15,379 | <p>We have
\begin{align*}
g'(x) &= \frac{xf(x)\cdot \int_0^x f(t)\,dt - \int_0^x tf(t)\,dt \cdot f(x)}{(\int_0^x f(t)\, dt)^2}
\end{align*}
Now the denominator is positive, we look at the numerator
\begin{align*}
xf(x)\int_0^x f(t)\,dt - \int_0^x tf(t)\, dt \cdot f(x)
&= \int_0^x xf(x)f(t)\, dt - \int_0^x tf(x)f(t)\, dt\\
&= \int_0^x (x-t)f(x)f(t)\, dt
\end{align*}
Now $f(t) > 0$ for $t > 0$, $f(x) > 0$ and $(x-t)> 0$ for $t > 0$. So the numerator is positive for $x > 0$, therefore $g' > 0$ on $(0,\infty)$ as wished.</p>
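One can also confirm the conclusion numerically for a particular continuous positive f (my own sketch; the midpoint-rule quadrature is only an approximation, and the test function is an arbitrary choice):

```python
import math

def g(x, f, n=2000):
    # Midpoint-rule approximations of int_0^x t f(t) dt and int_0^x f(t) dt
    h = x / n
    num = sum(((i + 0.5) * h) * f((i + 0.5) * h) for i in range(n)) * h
    den = sum(f((i + 0.5) * h) for i in range(n)) * h
    return num / den

f = lambda t: math.exp(-t) + 0.5 * math.cos(t) ** 2   # continuous, positive
xs = [0.5, 1.0, 2.0, 4.0, 8.0]
vals = [g(x, f) for x in xs]
assert all(a < b for a, b in zip(vals, vals[1:]))     # g is increasing
```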
|
265,537 | <p>I have a set of inequalities</p>
<pre><code>Cos[a]Cos[b]>=Cos[t-a]Cos[b]&&Cos[a]Cos[b]>=Cos[t/2]&&Cos[a]Cos[b]>=Sin[t/2]&&a<=t<=Pi
</code></pre>
<p>How to solve this to get a range of values for <code>a,b,t</code>?</p>
| Ulrich Neumann | 53,677 | <p><code>RegionPlot3D</code>shows the region of possible solutions</p>
<pre><code>RegionPlot3D[Cos[a] Cos[b] >= Cos[t - a] Cos[b] && Cos[a] Cos[b] >= Cos[t/2] &&Cos[a] Cos[b] >= Sin[t/2] && a <= t <= Pi, {a, -Pi/2, Pi}, {t, 0, Pi}, {b, -1, 1}, AxesLabel -> {a, t, b}]
</code></pre>
<p><a href="https://i.stack.imgur.com/98Vee.png" rel="noreferrer"><img src="https://i.stack.imgur.com/98Vee.png" alt="enter image description here" /></a></p>
|
3,409,598 | <p>Given three equation</p>
<p><span class="math-container">$$\log{(2xy)} = (\log{(x)})(\log{(y)})$$</span>
<span class="math-container">$$\log{(yz)} = (\log{(y)})(\log{(z)})$$</span>
<span class="math-container">$$\log{(2zx)} = (\log{(z)})(\log{(x)})$$</span></p>
<p>Find the real solution of (x, y, z)</p>
<p>What should I do to get the answer? I think it's not possible that x = y = z gives a solution, and I have no idea what method to use. Please show me a hint.</p>
| azif00 | 680,927 | <p>Once you have convinced yourself that
<span class="math-container">$$\frac{\partial f(x)}{\partial x_i}=\frac{\partial \|x\|}{\partial x_i}=\frac{x_i}{\|x\|}$$</span>
Then, recall that, the <strong>Hessian</strong> matrix of <span class="math-container">$f$</span> is the <span class="math-container">$n\times n$</span> matrix <span class="math-container">$\textbf H$</span> with the <span class="math-container">$(i,j)$</span>-entry given by
<span class="math-container">$$\textbf{H}_{ij}=\frac{\partial^2 f(x)}{\partial x_j\partial x_i}=\frac{\partial}{\partial x_j}\bigg(\frac{\partial f(x)}{\partial x_i}\bigg)$$</span>
thus, we will have to calculate the latter in order to give the general input of the matrix. </p>
<p>Using the quotient rule, we see that
<span class="math-container">$$\begin{align}
\frac{\partial}{\partial x_j}\bigg(\frac{\partial f(x)}{\partial x_i}\bigg) &=
\frac{\partial}{\partial x_j}\bigg(\frac{x_i}{\|x\|}\bigg)=\cfrac{\cfrac{\partial x_i}{\partial x_j}\cdot\|x\|-x_i\cdot\cfrac{\partial \|x\|}{\partial x_j}}{\|x\|^2} \\
&= \frac{1}{\|x\|}\frac{\partial x_i}{\partial x_j}-\frac{x_ix_j}{\|x\|^3}
\end{align}$$</span>
Note that,
<span class="math-container">$$\frac{\partial x_i}{\partial x_j} = \delta_{ij} =\begin{cases}
1 & \textrm{if } i=j \\ 0 & \textrm{if } i\neq j
\end{cases}$$</span>
Hence, we have
<span class="math-container">$$\mathbf{H}_{ij}=\frac{1}{\|x\|}\delta_{ij}-\frac{x_ix_j}{\|x\|^3} = \frac{\delta_{ij}\|x\|^2-x_ix_j}{\|x\|^3}$$</span>
as, for example, if <span class="math-container">$n=2$</span>, then <span class="math-container">$f$</span> is given by <span class="math-container">$f(x,y)=\|(x,y)\|=\sqrt{x^2+y^2}$</span> and then
<span class="math-container">$$\mathbf{H}=\begin{pmatrix} \cfrac{\delta_{11}\|(x,y)\|^2-x^2}{\|(x,y)\|^3} &
\cfrac{\delta_{12}\|(x,y)\|^2-xy}{\|(x,y)\|^3} \\ \cfrac{\delta_{21}\|(x,y)\|^2-yx}{\|(x,y)\|^3} & \cfrac{\delta_{22}\|(x,y)\|^2-y^2}{\|(x,y)\|^3}
\end{pmatrix}=\begin{pmatrix}
\cfrac{y^2}{(x^2+y^2)^{3/2}} & -\cfrac{xy}{(x^2+y^2)^{3/2}} \\
-\cfrac{xy}{(x^2+y^2)^{3/2}} & \cfrac{x^2}{(x^2+y^2)^{3/2}}
\end{pmatrix} = \frac{1}{(x^2+y^2)^{3/2}}\begin{pmatrix} y^2&-xy \\ -xy&x^2
\end{pmatrix}$$</span></p>
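A finite-difference check of the closed form in the case n = 2 (my own verification sketch; the test point and step size are arbitrary):

```python
import math

def f(x, y):
    return math.sqrt(x * x + y * y)

def hessian_formula(x, y):
    n3 = (x * x + y * y) ** 1.5
    return [[y * y / n3, -x * y / n3],
            [-x * y / n3, x * x / n3]]

def hessian_numeric(x, y, h=1e-5):
    # central second differences
    fxx = (f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / h ** 2
    fyy = (f(x, y + h) - 2 * f(x, y) + f(x, y - h)) / h ** 2
    fxy = (f(x + h, y + h) - f(x + h, y - h)
           - f(x - h, y + h) + f(x - h, y - h)) / (4 * h * h)
    return [[fxx, fxy], [fxy, fyy]]

A, B = hessian_formula(1.2, -0.7), hessian_numeric(1.2, -0.7)
assert all(abs(A[i][j] - B[i][j]) < 1e-4 for i in range(2) for j in range(2))
```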
|
1,365,489 | <p>What is the value of the following expression?</p>
<p>$$\sqrt[3]{\ 17\sqrt{5}+38} - \sqrt[3]{17\sqrt{5}-38}$$</p>
| Leucippus | 148,155 | <p>Start by noticing that $(2 + \sqrt{5})^{3} = 38 + 17 \sqrt{5}$ and $(2 - \sqrt{5})^{3} = 38 - 17 \sqrt{5}$. Now,
\begin{align}
\sqrt[3]{\ 17\sqrt{5}+38} - \sqrt[3]{17\sqrt{5}-38} &= \sqrt[3]{(2 + \sqrt{5})^{3}} - \sqrt[3]{(\sqrt{5} - 2)^{3}} \\
&= (2 + \sqrt{5}) - (-2 + \sqrt{5}) \\
&= 4.
\end{align}</p>
<hr>
<p>What seems to have happened is that most answers presented here follow the pattern:
\begin{align}
\sqrt[3]{\ 17\sqrt{5}+38} + \sqrt[3]{17\sqrt{5}-38} &= \sqrt[3]{(2 + \sqrt{5})^{3}} + \sqrt[3]{(\sqrt{5} - 2)^{3}} \\
&= (2 + \sqrt{5}) + (-2 + \sqrt{5}) \\
&= 2 \sqrt{5}.
\end{align}</p>
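<p>Both observations are easy to sanity-check numerically; this is just a floating-point sketch, so agreement is only up to rounding:</p>

```python
import math

sqrt5 = math.sqrt(5)

# Both radicands are perfect cubes: (2 + sqrt5)^3 and (sqrt5 - 2)^3.
assert math.isclose((2 + sqrt5) ** 3, 38 + 17 * sqrt5)
assert math.isclose((sqrt5 - 2) ** 3, 17 * sqrt5 - 38)

# Note 17*sqrt5 > 38, so both radicands are positive and ** (1/3) is safe.
difference = (17 * sqrt5 + 38) ** (1 / 3) - (17 * sqrt5 - 38) ** (1 / 3)
total = (17 * sqrt5 + 38) ** (1 / 3) + (17 * sqrt5 - 38) ** (1 / 3)
```

<p>The difference comes out as $4$ and the sum as $2\sqrt{5}$, matching the two computations above.</p>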
|
3,511,445 | <p>Well, it is a problem from JMO Odisha, worth 5 marks.
I tried a lot of ways but I can't get the answer. Only elementary mathematics is allowed.</p>
| Robert Israel | 8,508 | <p>An obvious solution is <span class="math-container">$x=y=z=0$</span>, but it's not the only one: you could have
<span class="math-container">$x=48/5$</span>, <span class="math-container">$y = 48/7$</span>, <span class="math-container">$z=48$</span>. </p>
<p>Take <span class="math-container">$(z-6)$</span> times equation 1 minus <span class="math-container">$(x-4)$</span> times equation 2 and simplify to eliminate <span class="math-container">$y$</span>: you should get
<span class="math-container">$$ (2 z+24) x - 24 z = 0 $$</span></p>
<p>Take <span class="math-container">$(2z+24)$</span> times equation 3 minus <span class="math-container">$(z-8)$</span> times this equation and simplify to eliminate <span class="math-container">$x$</span>: you should get
<span class="math-container">$$ 8 z^2 - 384 z = 0$$</span>
which factors as <span class="math-container">$8 z (z - 48) = 0$</span>. So <span class="math-container">$z = 0$</span> or <span class="math-container">$z = 48$</span>. Substitute into other equations to find <span class="math-container">$x$</span> and <span class="math-container">$y$</span>.</p>
|
1,288,584 | <p>If <span class="math-container">$f(x)=f(x_0)+f'(x_0)(x-x_0)+\ldots+\frac{f^{(n-1)}(x_0)}{(n-1)!}(x-x_0)^{n-1}+\frac{f^{(n)}(\xi(x))}{n!}(x-x_0)^n,$</span> prove that <span class="math-container">$x \rightarrow f^{(n)}(\xi(x)) $</span> is continuous on <span class="math-container">$[x_0-\beta, x_0+\beta]$</span>, if <span class="math-container">$f\in C^n[x_0-\beta,x_0+\beta]$</span>.</p>
<p>How should I approach this problem?</p>
| Aleksandar | 240,930 | <p>Take the function $f(x)$: the radius of convergence is the distance between the number you are expanding at, in this case $x_0$, and the nearest singularity. If the function has no singularities, the disc of convergence is infinite. </p>
<p>Let $\alpha$ be the nearest singularity. The radius of the disc of convergence is the distance between $x_0$ and $\alpha$, so in order for a number $x$ to lie in the disc we need $| x- x_{0} | < | x_{0} - \alpha |$. </p>
<p>In the disc of convergence the function $f(x)$ is clearly analytic, therefore differentiable, therefore continuous. So if $[ x_{0}- \beta , x_{0}+ \beta ]$ is inside the disc, then $f(x)$ is continuous there. </p>
<p>If it is unclear why analyticity implies differentiability (in fact infinite differentiability), which implies continuity: a complex function $f(z)$ that can be written as a Taylor series is analytic in its disc of convergence, meaning that $f(z)$ is differentiable at every point of the disc. This extends to the real numbers.</p>
<p>Are there any misunderstandings? </p>
|
446,456 | <p>Educators and Professors: when you teach first year calculus students that infinity isn't a number, how would you logically present to them $-\infty < x < +\infty$, where $x$ is a real number?</p>
| Alexey | 48,558 | <p>To make sense out of $-\infty < 1000 < \infty$, for example, you do not need $\infty$ and $-\infty$ to be "numbers"; it is enough to extend the definition of "$<$" so that the truth value of $x < y$ is defined for all $x,y\in\mathbb R\sqcup\{\pm\infty\}$. This is easy to do, in an obvious way.</p>
|
446,456 | <p>Educators and Professors: when you teach first year calculus students that infinity isn't a number, how would you logically present to them $-\infty < x < +\infty$, where $x$ is a real number?</p>
| Daniel McLaury | 3,296 | <p>I'd make a couple points:</p>
<p>The word "number," on its own, doesn't really mean anything, aside from "something that somehow resembles something else we're already calling 'numbers.'" Lots of people have called lots of different things "numbers." As a few examples, the modern notion of an ideal in a ring gets its name from what Kummer called "ideal numbers;" we call the other two two-dimensional real algebras the "dual numbers" and the "hyperbolic numbers," by analogy with the "complex numbers;" and of course there are things like the extended real numbers or the surreal numbers which are things we call numbers which do include infinite elements.</p>
<p>Of course I probably wouldn't say it that way to students, but the point is that "infinity isn't a number" is either a false or meaningless statement. The points you want to get across are this:</p>
<ul>
<li>The word "number," on its own, is just sort of a vague idea, not one specific concept.</li>
<li>The phrase "real number," on the other hand, indicates a very specific thing.</li>
<li>In calculus, we're working over the real numbers.</li>
<li>There is no real number which is infinity.</li>
<li>Sometimes, however, we use it as a notational convention.</li>
</ul>
|
1,866,931 | <p>I would like to see a proof of this fact.</p>
<blockquote>
<p>If $A$ is an invertible matrix and $B \in \mathcal{L}(\mathbb{R}^n,\mathbb{R}^n)$, that is, a bounded linear operator on $\mathbb{R}^n$, then, if there holds
$$
\|B-A\| \|A^{-1}\| <1,
$$
we have that $B$ is invertible.</p>
</blockquote>
<p>Moreover, if possible, how can one use it to prove that the map $A \rightarrow A^{-1}$ is continuous?</p>
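<p>The hypothesis can be illustrated numerically. Here is a minimal $2\times2$ sketch using the Frobenius norm as a stand-in for the operator norm (the Frobenius norm is submultiplicative and dominates the operator norm, so the condition with Frobenius norms is sufficient); the particular matrices are arbitrary choices:</p>

```python
import math

def frob(M):
    """Frobenius norm; submultiplicative and an upper bound for the operator norm."""
    return math.sqrt(sum(x * x for row in M for x in row))

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

A = [[2.0, 0.0], [0.0, 2.0]]
A_inv = [[0.5, 0.0], [0.0, 0.5]]
B = [[2.1, 0.2], [-0.1, 1.9]]  # an arbitrary small perturbation of A

diff = [[B[i][j] - A[i][j] for j in range(2)] for i in range(2)]
condition = frob(diff) * frob(A_inv) < 1  # the hypothesis of the theorem
invertible = abs(det2(B)) > 1e-12         # 2x2 invertibility check
```

<p>For this perturbation the hypothesis holds and $B$ is indeed invertible, as the theorem predicts.</p>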
| egreg | 62,967 | <p>The relation is transitive when</p>
<blockquote>
<p>for all $x,y,z\in A$, if $x\mathrel{R}y$ and $y\mathrel{R}z$, then $x\mathrel{R}z$.</p>
</blockquote>
<p>Thus you have to start with $x,y,z\in A$, such that $x\mathrel{R}y$ and $y\mathrel{R}z$, and prove that $x\mathrel{R}z$.</p>
<p>Now, $x\mathrel{R}y$ and $y\mathrel{R}z$ implies $(x,z)\in R\circ R$; since $R\circ R\subseteq R$, we can deduce that $(x,z)\in R$, that is,
$$
x\mathrel{R}z
$$
as required.</p>
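<p>For finite relations the same argument can be mirrored with Python sets of pairs; the sample relations below are arbitrary choices:</p>

```python
def compose(R, S):
    """Composition: (x, z) belongs iff some y has (x, y) in R and (y, z) in S."""
    return {(x, z) for (x, y1) in R for (y2, z) in S if y1 == y2}

def is_transitive(R):
    # Exactly the criterion above: R∘R ⊆ R.
    return compose(R, R) <= R

# Sample: the "less than" relation on {0, 1, 2, 3} is transitive...
lt = {(a, b) for a in range(4) for b in range(4) if a < b}
# ...while {(0,1), (1,2)} is not, since (0,2) is missing.
not_trans = {(0, 1), (1, 2)}
```
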
|
889,155 | <blockquote>
<p>There are $2n-1$ slots/boxes in all and two objects say A and B; total number of A's are $n$ and total number of B's are $n-1$. (All A's are identical and all B's are identical.) In how many ways can we arrange A's and B's in $2n-1$ slots.</p>
</blockquote>
<p>My approach: there are $2n-1$ boxes in total and for A, $n$ have to be selected, so the number of ways to select $n$ A's is $C(2n-1,n)$, and they can be permuted in $n!/n!$ ways, i.e., $1$. Similarly for B, $C(n-1,n-1)$ selections and $(n-1)!/(n-1)!$ permutations in total.</p>
<p>So total $$C(2n-1,n) \times 1 \times C(n-1,n-1) \times 1=C(2n-1,n).$$</p>
<p>Please help i am stuck.</p>
| Rebecca J. Stones | 91,818 | <p>Your answer appears correct to me (assuming all $2n-1$ slots need to be filled), but could be simplified:</p>
<p>We choose the positions of the A's, which can be done in $\binom{2n-1}{n}$ ways. The remaining positions all contain B's.</p>
<p>(While it's technically correct, it seems pointless to account for the $n!/n!$ and $(n-1)!/(n-1)!$ ways of permuting these objects given their respective slots.)</p>
|
889,155 | <blockquote>
<p>There are $2n-1$ slots/boxes in all and two objects say A and B; total number of A's are $n$ and total number of B's are $n-1$. (All A's are identical and all B's are identical.) In how many ways can we arrange A's and B's in $2n-1$ slots.</p>
</blockquote>
<p>My approach: there are $2n-1$ boxes in total and for A, $n$ have to be selected, so the number of ways to select $n$ A's is $C(2n-1,n)$, and they can be permuted in $n!/n!$ ways, i.e., $1$. Similarly for B, $C(n-1,n-1)$ selections and $(n-1)!/(n-1)!$ permutations in total.</p>
<p>So total $$C(2n-1,n) \times 1 \times C(n-1,n-1) \times 1=C(2n-1,n).$$</p>
<p>Please help i am stuck.</p>
| AnonymousMaths | 167,278 | <p>Since A's are identical, and B's are identical, it follows that you do not need to permute the A's among themselves, and B's among themselves, and just need to determine the places in which $n$ A's and hence, $n-1$ B's can be placed.
First, select the $n$ places from $2n-1$ places for the A's in $C(2n-1,n)$ ways. Automatically you have chosen the $n-1$ places for the B's.
So I would say the answer is simply $C(2n-1,n)$.</p>
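<p>A brute-force cross-check of the count for small $n$ (a sketch that enumerates every way to fill the $2n-1$ slots):</p>

```python
from itertools import product
from math import comb

def arrangements(n):
    """Count fillings of 2n-1 slots with exactly n A's (the rest are B's)."""
    return sum(1 for s in product("AB", repeat=2 * n - 1) if s.count("A") == n)

checks = [(arrangements(n), comb(2 * n - 1, n)) for n in range(1, 7)]
```

<p>For every tested $n$ the enumeration agrees with $C(2n-1,n)$.</p>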
|
990,512 | <p>Suppose that the probability of $x=0$ is $p$, and the probability of $x=1$ is $1-p=q$. Consider the random sequence $X=\{X_i\}_{i=1}^{\infty}$. We map this sequence by $C$ to a point in the interval $[0,1]$ as below:</p>
<p>$1)$ we look at the first random variable. If it is $0$, then we update the interval to $I_1=[0,p)$, else update it to $I_1=[p,1)$.</p>
<p>$2)$ Let $I_k=[a,b)$. Look at the $(k+1)^{th}$ random variable. if it is $0$, then we update the interval to $I_{k+1}=[a,a+p(b-a))$, else we update it to $I_{k+1}=[b-q(b-a),b)$.</p>
<p>and we continue this process until we converge to a point as the length of the random process goes to infinity. As an example, if the first $2$ random variables are $01$, then we have:</p>
<p>$I_1=[0,p)$</p>
<p>$I_2=[p-qp,p)$.</p>
<p>Find the pdf of $C(X)$. </p>
| Indrajit | 147,241 | <p>Intuitively, we observe that probability of $C(X)$ being inside an interval (for example $[p,1)$) is proportional to the length of the interval.</p>
<p>Notice that $C(X)$ is not well defined for $p=0,1$. So let us assume $0<p<1$. Define new random variables $Y_{i}=p(1-X_{i})+qX_{i}$. Then $Y_{i}=p$ with probability $p$, and $Y_{i}=q$ with probability $q$. </p>
<p>Observe that $C(X)$ can also be written as
\begin{eqnarray}
C(X)=pX_{1}+p\sum_{k=2}^{\infty}X_{k}\prod_{i=1}^{k-1}Y_{i}.
\end{eqnarray}</p>
<p>Basically $\prod_{i=1}^{k-1}Y_{i}$ measures the length of the intervals at $(k-1)th$ step. If $p=\frac{1}{2}$, then $\prod_{i=1}^{k-1}Y_{i}=\frac{1}{2^{k-1}}$, otherwise it depends on the position of the intervals and previous $X_{i}$s.</p>
<p>Now let us pick a $x\in[0,1]$. There is a unique sequence of numbers $\{x_{i}\}$ such that
\begin{eqnarray}
x=px_{1}+p\sum_{k=2}^{\infty}x_{k}\prod_{i=1}^{k-1}y_{i},
\end{eqnarray}
where $x_{i}\in\{0,1\}$ and $y_{i}=p(1-x_{i})+qx_{i}$ for all $i$. We want to find $\mathbb{P}(C(X)\leq x)$. </p>
<p>Let $s=\{s_{i}\}$ be a sequence of numbers from $\{0,1\}$, and $k$ be the first position where $s$ differs from $x$. In other words, $s_{i}=x_{i}$ for all $1\leq i< k$ and $s_{k}\neq x_{k}$. Notice that we have $C(s)\leq x$ only if $s_{k}\leq x_{k}$. But if $x_{k}=0$, then $s_{k}\leq x_{k}$ implies that $s_{k}=0$ i.e., $s_{k}=x_{k}$. In other words, $s_{k}$ can not differ from $x_{k}$ if $x_{k}=0$. So the only possibility is $x_{k}=1$ and $s_{k}=0$. Therefore
\begin{eqnarray}
\mathbb{P}(C(X)\leq x)&=&\sum_{k: x_{k}=1}\mathbb{P}(X_{i}=x_{i}\;\forall\;1\leq i\leq k-1, X_{k}\neq x_{k})\\
&=&px_{1}+p\sum_{k=2}^{\infty}x_{k}\prod_{i=1}^{k-1}y_{i}\\
&=&x.
\end{eqnarray}
Consequently $C(X)\sim U[0,1]$.</p>
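<p>A Monte Carlo sketch supporting the conclusion: truncate the construction at a finite depth and compare the empirical CDF with the uniform one (the depth, sample size, seed, and tolerance are arbitrary choices):</p>

```python
import random

def C_approx(p, depth, rng):
    """Run the interval construction to a finite depth; return the interval midpoint."""
    a, b = 0.0, 1.0
    q = 1.0 - p
    for _ in range(depth):
        if rng.random() < p:      # X_i = 0: keep the left piece of length p*(b-a)
            b = a + p * (b - a)
        else:                     # X_i = 1: keep the right piece of length q*(b-a)
            a = b - q * (b - a)
    return (a + b) / 2

rng = random.Random(0)
samples = [C_approx(0.3, 60, rng) for _ in range(20000)]
empirical = {t: sum(s <= t for s in samples) / len(samples) for t in (0.25, 0.5, 0.75)}
```

<p>The empirical CDF stays close to the identity, consistent with $C(X)\sim U[0,1]$ even though $p\neq\frac12$.</p>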
|
11,609 | <p>What's the code for multiple alignment in MathJAX? An analogous question for Latex is at <a href="https://tex.stackexchange.com/questions/43464/multiple-alignment-in-equations">https://tex.stackexchange.com/questions/43464/multiple-alignment-in-equations</a>, but it doesn't appear to function here. Thank you.</p>
<blockquote>
<p>Objective:<br>
$A = B$<br>
$\qquad B = C$<br>
$\qquad \qquad C = D$<br>
$\qquad \qquad \qquad E = F$</p>
</blockquote>
<p>Attempt:
$$\begin{align*} A = B\\\
&B = C\\\
&&C =D\\\
&&&E= F
\end{align*}$$</p>
<p>Code for Attempt:
<code>$$\begin{align*} A = B\\\<br>
&B = C\\\<br>
&&C =D\\\<br>
&&&E= F<br>
\end{align*}$$</code></p>
| Willie Wong | 1,543 | <p>Another option is to <em>abuse</em> the <code>alignat</code> environment. </p>
<pre><code>$$\newcommand{\myeqa}{=}
\begin{alignat}{5}
A & \myeqa & B \\
& & B & \myeqa & C \\
& & & & C & \myeqa & D \\
& & & & & & D & \myeqa & E
\end{alignat}$$
</code></pre>
<p>To adjust spacing you can put spacing commands on either side of the equals sign, such as <code>\newcommand{\myeqa}{~=~}</code> instead to get some more spacing. </p>
<p>$$\newcommand{\myeqa}{=}
\begin{alignat}{5}
A & \myeqa & B \\
& & B & \myeqa & C \\
& & & & C & \myeqa & D \\
& & & & & & D & \myeqa & E
\end{alignat}$$</p>
|
373,578 | <p><a href="http://en.wikipedia.org/wiki/Quiver_%28mathematics%29" rel="noreferrer">Quivers</a> are directed graphs where loops and multi-arrows are allowed. And we can talk about representations of quivers by assigning each vertex a vector space and each arrow a homomorphism. Moreover, Gabriel gives <a href="http://en.wikipedia.org/wiki/Gabriel%27s_theorem" rel="noreferrer">a complete classification</a> of quivers of finite type using just five Dynkin diagrams.</p>
<p>These are both deep and surprising, but I am not sure why quivers deserve so much attention. The only potential application I can think of (although highly unlikely to be true) is that they might be useful for answering certain questions in category theory, since the notion of a quiver is similar to that of a category, and a representation is very much like a functor from a quiver to some $k$-$\operatorname{Vect}$. </p>
<p>So I wonder whether someone can give a hint why quivers deserve so much attention? Do they naturally show up in problems? And do representations of quivers really help to solve these problems?</p>
| Alistair Savage | 74,366 | <p>I would say that one of the reasons (and probably the main motivation for the work of Gabriel that you mention) is that the representation theory of quivers is intimately related to the representation theory of finite-dimensional associative algebras. If $A$ is a finite-dimensional algebra over some field $k$, then the category of representations of $A$ is equivalent to the category of representations of the algebra $kQ/I$ for some quiver $Q$ and some two-sided ideal $I$ of $kQ$. Here $kQ$ is the path algebra of the quiver. Representations of the path algebra are equivalent to representations of the quiver. Thus, representations of $kQ/I$ are equivalent to representations of the quiver that are killed by certain paths (i.e. the composition of maps along certain paths is zero).</p>
|
885,627 | <p>John invites 12 people to a dinner party, half of which are men. Exactly one man and one woman are bringing desserts. If one person from this group is selected at random,what is the probability that it is a woman, or a man who is not bringing a dessert?</p>
| Roy Sheehan | 79,095 | <p>I could be wrong, but I would have said that 10 people didn't bring desserts, so we are looking at a 1/10 probability? </p>
|
885,627 | <p>John invites 12 people to a dinner party, half of which are men. Exactly one man and one woman are bringing desserts. If one person from this group is selected at random,what is the probability that it is a woman, or a man who is not bringing a dessert?</p>
| drhab | 75,923 | <p>There are $6+5=11$ persons of the $12$ that fall in one of the classes: 1) women 2) men that do not bring a dessert. So the probability is $\frac{11}{12}$.</p>
<p>(If the first class is meant to be: women that do not bring a dessert, then there are $5+5=10$ persons and the probability is $\frac{10}{12}$)</p>
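<p>A tiny enumeration check of the $\frac{11}{12}$ count (assuming, as above, $6$ men and $6$ women with one dessert-bringer of each sex):</p>

```python
from fractions import Fraction

# Guests as (sex, brings_dessert); exactly one woman and one man bring dessert.
guests = [("W", i == 0) for i in range(6)] + [("M", i == 0) for i in range(6)]

# Favourable: any woman, or a man who does not bring dessert.
favourable = sum(1 for sex, dessert in guests if sex == "W" or not dessert)
prob = Fraction(favourable, len(guests))
```
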
|
4,350,699 | <blockquote>
<blockquote>
<p><span class="math-container">$r:$</span> All prime numbers are either even or odd. Is it a true statement?</p>
</blockquote>
</blockquote>
<p>I was studying mathematical logic when I came across the above question.
Since the connecting word here is "OR",
if I separate it into two statements they become</p>
<p><span class="math-container">$p:$</span> All the prime numbers are even</p>
<p><span class="math-container">$q:$</span> All the prime numbers are odd</p>
<p>Because both statements <span class="math-container">$p$</span> and <span class="math-container">$q$</span> are false, the final statement <span class="math-container">$r$</span> must be false by the truth table for the "OR" connective.</p>
<p>But my intuition says <span class="math-container">$r$</span> is true.</p>
<p>Am I thinking correctly?</p>
<p>Please help me with this.</p>
| User5678 | 632,875 | <p>You erroneously distributed the “all”. The correct interpretation is:</p>
<p>For every prime <span class="math-container">$p$</span>: <span class="math-container">$p$</span> is even or <span class="math-container">$p$</span> is odd</p>
<p>Since every integer is either even or odd, we have:</p>
<p>(<span class="math-container">$p$</span> is even or <span class="math-container">$p$</span> is odd) is true for all primes <span class="math-container">$p$</span></p>
<p>Therefore it is true — as your intuition suggested</p>
|
3,835,514 | <p>For some c > <span class="math-container">$0$</span>, the cumulative distribution function of a continuous random variable X is given by:</p>
<p><span class="math-container">$$
F_X(x) = \begin{cases} 0 & \text{if } x \le0 \\ cx(x+1) & \text{if } 0 \lt x <1 \\ 1 & \text{if } x \ge 1\end{cases}
$$</span></p>
<p>Show that <span class="math-container">$c = 1/2 $</span></p>
<p>I know that by differentiating <span class="math-container">$cx(x+1)$</span> I obtain the probability density function, whose integral must equal 1, but I don't know how to eliminate <span class="math-container">$x$</span> after differentiating.</p>
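<p>The constant can be pinned down mechanically; a sketch with exact rationals, using either continuity of $F_X$ at $x=1$ or the requirement that the density integrate to $1$:</p>

```python
from fractions import Fraction

c = Fraction(1, 2)

# Continuity of the CDF at x = 1: c*1*(1+1) must equal 1.
continuity = c * 1 * (1 + 1)

# Equivalently, the density f(x) = c*(2x + 1) integrates over (0, 1) to
# c*(x^2 + x) evaluated at 1, i.e. 2c.
integral = c * (1 ** 2 + 1)
```
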
| Siong Thye Goh | 306,553 | <p>You have chosen an example that is not easy to factorize.</p>
<p>By using the quadratic formula which states that the roots of <span class="math-container">$ax^2+bx+c$</span> is <span class="math-container">$\frac{-b \pm \sqrt{b^2-4ac}}{2a}$</span></p>
<p>The roots are</p>
<p><span class="math-container">$$x_1 = \frac{-(a+b) - \sqrt{(a+b)^2+8ab}}{2}, x_2 = \frac{-(a+b) + \sqrt{(a+b)^2+8ab}}{2}$$</span></p>
<p>Since the leading coefficient is <span class="math-container">$1$</span>, the factorization is</p>
<p><span class="math-container">$$(x - x_1) (x-x_2)$$</span></p>
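<p>The quadratic being factorized is not shown in this excerpt; from the displayed roots it appears (by Vieta's formulas) to be $x^2+(a+b)x-2ab$. A numeric spot check under that assumption, with arbitrary sample values of $a$ and $b$:</p>

```python
import math

def roots(a, b):
    """The quadratic-formula roots shown above, for x^2 + (a+b)x - 2ab (assumed)."""
    disc = math.sqrt((a + b) ** 2 + 8 * a * b)
    return (-(a + b) - disc) / 2, (-(a + b) + disc) / 2

a, b = 3.0, 5.0
x1, x2 = roots(a, b)
residuals = [x * x + (a + b) * x - 2 * a * b for x in (x1, x2)]
```
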
|
2,087,084 | <p>Fix an arbitrary abelian category $\mathscr{A}$, and let $$0\to A\xrightarrow{f}B\xrightarrow{g}C\to 0$$ be a short exact sequence in the category of chains $\mathscr{A}_\bullet$, where $A$, $B$, and $C$ have chain maps $\varphi^A_n:A_n\to A_{n-1}$, $\varphi^B_n:B_n\to B_{n-1}$, $\varphi^C_n:C_n\to C_{n-1}$ respectively, and let $A$ and $B$ be exact. I claim that $C$ is exact.</p>
<p>My current approach is to try to show that $$\ker\varphi^C_{n-1} = \mathrm{coker\,}(\ker\varphi^C_n\hookrightarrow C_n) = \mathrm{coim\,}\varphi_n^C = \mathrm{im\,}\varphi_n^C.$$</p>
<p>So, for an arbitrary object $M\in\mathscr{A}$ and morphism $\psi:C_n\to M$ such that $$\left(\ker\varphi^C_n\hookrightarrow C_n\xrightarrow{\psi}M\right) = 0$$ I wish to show that there exists a unique $\ker\varphi^C_{n-1}\to M$ such that $$\left(C_n\twoheadrightarrow\mathrm{im\,}\varphi^C_n\hookrightarrow\ker\varphi^C_{n-1}\to M\right) = \left(C_n\xrightarrow{\psi}M\right).$$ However, despite playing around a lot with commutative diagrams, kernels, and cokernels, I haven't found a good way of doing this. What have I missed?</p>
| Alex Mathers | 227,652 | <p>Are you able to use tools such as the long exact homology sequence, or the snake lemma, as Pedro has suggested? If not, this can be done directly. Write out what this short exact of sequences actually looks like:</p>
<p>$$
\newcommand{\ra}[1]{\!\!\!\!\!\!\!\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!\!\!\!}
\newcommand{\da}[1]{\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}}
%
\begin{array}{llllllllllll}
& & 0 & & 0 & & 0 & & \\
&& \da{} & & \da{} & & \da{} & & \\
\cdots & \ra{} & A_{n+1} & \ra{} & A_n & \ra{} & A_{n-1} & \ra{} & \cdots & \\
&& \da{} & & \da{} & & \da{} & & \\
\cdots & \ra{} & B_{n+1} & \ra{} & B_n & \ra{} & B_{n-1} & \ra{} & \cdots \\
&& \da{} & & \da{} & & \da{} & & \\
\cdots & \ra{} & C_{n+1} & \ra{} & C_n & \ra{} & C_{n-1} & \ra{} & \cdots \\
&& \da{} & & \da{} & & \da{} & & \\
& & 0 & & 0 & & 0 & & \\
\end{array}
$$</p>
<p>Then choose an element $c_n\in C_n$ which is in the kernel, and do some diagram chasing to show there's a $c_{n+1}$ such that $\varphi_n^C(c_{n+1})= c_n$.</p>
<p>For instance, to start off, $c_n$ lifts to some $b_n\in B_n$; then by commutativity, $g\varphi_n^B(b_n)=\varphi_n^C g(b_n)=0$, so by exactness we can write $\varphi_n^B(b_n)=f(a_{n-1})$ for some $a_{n-1}\in A_{n-1}$. Do something similar a few more times, using the exactness of $A$ and $B$, and you'll find your $c_{n+1}$.</p>
|
224,968 | <p>Does the above Diophantine equation have other integer solutions besides $(x,y)=(1,2)$ and $(x, y) = (0, -1)$?</p>
| Igor Rivin | 11,142 | <p>Yes. Here is another one: $x=0, y=-1.$</p>
|
224,968 | <p>Does the above Diophantine equation have other integer solutions besides $(x,y)=(1,2)$ and $(x, y) = (0, -1)$?</p>
| so-called friend Don | 16,510 | <p>Theorem 6.4.30 in Cohen's <em>Number Theory: Volume I</em> asserts: For each nonzero integer $d$, there is at most one pair of integers $(X,Y)$ with $Y\ne 0$ and $X^3+dY^3=1$. Apply this with $X=-y$, $Y=x$, and $d=9$, to see that that there are no more solutions. Theorem 6.4.30 is attributed to Skolem.</p>
|
2,069,573 | <p>I'm trying to solve the following inequality $\dfrac{(\log_2 (8x) \times \log_{x/8} 2)}{\log_{x/2} 16} \leq 0.25$ </p>
<p>Wolfram alpha gives the answer $(0, 0.5], [1,8)$ but surely $x \not= 2$ since log to base $1$ is undefined. But is the fact that it basically shrinks the fraction down to $0$ sufficient to satisfy this inequality? Could someone clear this up for me?</p>
<p><a href="http://m.wolframalpha.com/input/?i=%28log_2+%288x%29*+log_%28x%2F8%29+2%29%2Flog_%28x%2F2%29+16+%5Cleq+0.25&x=0&y=0" rel="nofollow noreferrer">Wolfram link</a> is here.</p>
| Community | -1 | <p>You can otherwise write the inequality as: $$\frac{\log_2 8x \times \log_{\frac{x}{8}}2 \times \log \frac{x}{2}}{\log 16} \leq 0.25$$ $$\Rightarrow \frac{\log \frac{x}{2} \times \log 8x}{\log 16 \times \log \frac{x}{8}} \leq 0.25$$ What we just did is to use the logarithmic identity $\log_{a}b = \frac{\log b}{\log a}$. Now we can see that $x$ can take the value of $2$: with this identity applied, the numerator is $0$, so the whole expression equals $0$, which is less than $0.25$. </p>
<p>Hope it is much clearer to you now. </p>
|
2,069,573 | <p>I'm trying to solve the following inequality $\dfrac{(\log_2 (8x) \times \log_{x/8} 2)}{\log_{x/2} 16} \leq 0.25$ </p>
<p>Wolfram alpha gives the answer $(0, 0.5], [1,8)$ but surely $x \not= 2$ since log to base $1$ is undefined. But is the fact that it basically shrinks the fraction down to $0$ sufficient to satisfy this inequality? Could someone clear this up for me?</p>
<p><a href="http://m.wolframalpha.com/input/?i=%28log_2+%288x%29*+log_%28x%2F8%29+2%29%2Flog_%28x%2F2%29+16+%5Cleq+0.25&x=0&y=0" rel="nofollow noreferrer">Wolfram link</a> is here.</p>
| Dr. Sonnhard Graubner | 175,066 | <p>we write your inequality in the form
$$\frac{\frac{\ln(8x)}{\ln(x/8)}}{\frac{\ln(16)}{\ln(x/2)}}\le \frac{1}{4}$$
simplifying this we get
$$\frac{(3\ln(2)+\ln(x))(\ln(x)-\ln(2))}{4\ln(2)(\ln(x)-3\ln(2))}\le \frac{1}{4}$$
simplifying again we get $$\frac{\ln(x)^2+2\ln(2)\ln(x)-3\ln(2)^2}{(\ln(x)-3\ln(2))\ln(2)}\le 1$$
can you proceed?
simplifying again we get $$\frac{\ln(x)(\ln(x)+\ln(2))}{\ln(x)-3\ln(2)}\le 0$$
doing case work we obtain
$$0<x\le \frac{1}{2}$$ or $$1\le x< 8$$ and $$x\ne 2$$</p>
|
1,448,363 | <p>I have gotten to the next stage, where you write it as $\frac{1}{\left(\frac 34\right)}$ to the power of $3$; now I am stuck.</p>
<p>I've got it now, thanks everyone.</p>
| Dr. Sonnhard Graubner | 175,066 | <p>It is $(\frac{4}{3})^3=\frac{4^3}{3^3}=\frac{64}{27}$.</p>
|
1,242,760 | <p>Now proving by induction is fairly simple. However, this is a multiple choice problem whose answers don't make any sense to me. The actual problem goes as follows:</p>
<p><em>To prove by induction that $n^2 - 7n - 2$ is divisible by $2$ is true for all positive integers $n$, we assume $k^2 - 7k - 2$ is divisible by $2$ is true for some positive integers $k$ and we show that $k^2 - 7k - 2 + A$ is divisible by $2$ where $A$ is:</em></p>
<p>Now I would've assumed that $A$ would be $(k+1)^2 - 7(k + 1) - 2$, but I checked the answer and it is actually $2(k-3)$ which makes no sense to me. I tried to factor, reduce, and anything I could think of but I can't get $(k+1)^2 - 7(k + 1) - 2$. Does anyone know where I am going wrong?</p>
| Adhvaitha | 228,265 | <p><strong>HINT</strong>: For the inductive step, we assume that $2$ divides $n^2-7n-2$, i.e., $n^2-7n-2 = 2M$. We now have
\begin{align}
(n+1)^2 - 7(n+1) - 2 & = n^2 + 2n + 1 - 7n - 7 - 2 = n^2 - 7n -2 + 2(n-3)\\
& = 2(M+n-3)
\end{align}</p>
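<p>A quick empirical check of both the evenness claim and the increment $2(n-3)$ (algebraically, $(n+1)^2-7(n+1)-2$ minus $n^2-7n-2$ equals $2n-6$):</p>

```python
def expr(n):
    return n * n - 7 * n - 2

# The quantity is even for every tested n, and the inductive step adds exactly 2(n-3).
evens = all(expr(n) % 2 == 0 for n in range(1, 500))
steps = all(expr(n + 1) - expr(n) == 2 * (n - 3) for n in range(1, 500))
```
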
|
1,003,379 | <p>I've been working problems all day, so maybe I'm just confusing myself, but in order to do this I have to take the integral along each contour $C_1-C_4$. My issue is how to convert to parametric functions so that I can integrate.</p>
<p><img src="https://i.stack.imgur.com/HWRoM.jpg" alt="enter image description here"></p>
| user2345215 | 131,872 | <p>No because $\dfrac1z$ is not defined when $z=0$. You need a holomorphic function on the whole square for this to hold. This integral should be the same as for the circle, namely $2\pi i$.</p>
|
2,553,175 | <p>How can I verify that
$$1-2\sin^2x=2\cos^2x-1$$
is true for all $x$?</p>
<p>It can be proved through a couple of messy steps using the fact that $\sin^2x+\cos^2x=1$, solving for one of the trigonometric functions and then substituting, but the way I did it gets very messy very quickly and you end up with a bunch of factoring, etc.</p>
<p>What's the simplest way to solve this?</p>
| Parcly Taxel | 357,390 | <p>$$1=\cos^2x+\sin^2x$$
$$2=2\cos^2x+2\sin^2x$$
$$2-2\sin^2x=2\cos^2x$$
$$1-2\sin^2x=2\cos^2x-1$$</p>
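<p>A numeric spot check of the identity at a grid of points (both sides equal $\cos 2x$, so agreement is up to floating-point rounding):</p>

```python
import math

xs = [k * 0.1 for k in range(-50, 51)]
identity_holds = all(
    math.isclose(1 - 2 * math.sin(x) ** 2, 2 * math.cos(x) ** 2 - 1, abs_tol=1e-12)
    for x in xs
)
```
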
|
2,553,175 | <p>How can I verify that
$$1-2\sin^2x=2\cos^2x-1$$
is true for all $x$?</p>
<p>It can be proved through a couple of messy steps using the fact that $\sin^2x+\cos^2x=1$, solving for one of the trigonometric functions and then substituting, but the way I did it gets very messy very quickly and you end up with a bunch of factoring, etc.</p>
<p>What's the simplest way to solve this?</p>
| Bernard | 202,857 | <p>Both are equal to $\cos 2x=\cos^2x-\sin^2x$ by the duplication formula!</p>
<p>If you don't want to use this formula, just rewrite the equality as
$$1+1=2\sin^2x+2\cos^2x $$
and it boils down to <em>Pythagoras' identity</em>.</p>
|
1,350,085 | <p>Consider $f(t)$, continuous on $[0,1]$, and $\alpha > 1$, and:</p>
<p>$$\int_0^1 \frac{f(t)}{t^{\alpha + 1}} \ dt$$</p>
<p>How can we tell this integral diverges? Basically, since $f$ is continuous it attains its minimum on $[0,1]$, so we could make a comparison with $\int_0^1 \frac{f(x_{min})}{t^{\alpha+1}}\ dt$, but $f$ isn't necessarily non-negative.</p>
<p>Questions:
1. Could we look at $g(x) = f(x) + f(x_{min}) \ge 0$?<br>
2. Is there a more convenient way of showing it diverges? </p>
<p>Thanks.</p>
| Rory Daulton | 161,807 | <p>If $A$ has $18$ elements, and $B\subseteq A$ has fewer than $9$ elements, then $B^c\subseteq A$ has more than nine elements. Also, if $B$ has more than $9$ elements then $B^c$ has fewer than $9$ elements. That means the number of subsets of $A$ with fewer than $9$ elements equals the number of subsets of $A$ with more than $9$ elements.</p>
<p>Of course, the complement of a set with $9$ elements is another set with $9$ elements. The number you want, the number of subsets of $A$ with at least $9$ elements, is <em>almost</em> half the number of all subsets of $A$. We can figure out just how close that <em>almost</em> is.</p>
<p>Let $m$ be the number of subsets of $A$ with more than $9$ elements, so $m$ is also the number of subsets of $A$ with fewer than $9$ elements. Let $n$ be the number of subsets of $A$ with exactly $9$ elements. Then you want to find $m+n$, and we can find it from the total number of subsets of $A$, which we'll call $t$.</p>
<p>$$\text{(fewer than $9$)+(exactly $9$)+(more than $9$)=(all)}$$
$$m+n+m=t$$
$$\begin{align}
m+n&=\frac 12n+\frac 12t \\
&=\frac 12{18 \choose 9}+\frac 12\cdot 2^{18} \\
&=\frac 12 \cdot \frac{18!}{9! \cdot 9!} + 2^{17}
\end{align}$$</p>
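<p>The closed form can be checked directly against a term-by-term sum (a sketch using exact integer arithmetic):</p>

```python
from math import comb

# Number of subsets of an 18-element set with at least 9 elements, two ways.
direct = sum(comb(18, k) for k in range(9, 19))  # sum over sizes 9..18
formula = comb(18, 9) // 2 + 2 ** 17             # the closed form above
```
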
|
35,987 | <p><em>cross post in <a href="https://stackoverflow.com/questions/3513660/multivariate-bisection-method">StackOverflow</a></em></p>
<p>I need an algorithm to perform a 2D bisection method for solving a $2$x$2$ non-linear problem. Example: two equations $f(x,y)=0$ and $g(x,y)=0$ which I want to solve simultaneously. I am very familiar with 1D bisection (as well as other numerical methods). Assume I already know the solution lies between the bounds $x_1 < x < x_2$ and $y_1 < y < y_2$.</p>
<p>In a grid the starting bounds are:</p>
<pre><code> ^
| C D
y2 -+ o-------o
| | |
| | |
| | |
y1 -+ o-------o
| A B
o--+------+---->
x1 x2
</code></pre>
<p>and I know the values at $f(A)$, $f(B)$, $f(C)$ and $f(D)$ as well as $g(A)$, $g(B)$, $g(C)$ and $g(D)$. I might even know for which edges $f=0$ and for which $g=0$.</p>
<p>To start the bisection I guess we need to divide the points out along the edges as well as the middle.</p>
<pre><code> ^
| C F D
y2 -+ o---o---o
| | |
|G o o M o H
| | |
y1 -+ o---o---o
| A E B
o--+------+---->
x1 x2
</code></pre>
<p>Now considering the possibilities of combinations such as checking if $f(G)*f(M)<0$ <code>AND</code> $g(G)*g(M)<0$ seems overwhelming. Maybe I am making this a little too complicated, but I think there should be a multidimensional version of bisection, just as Newton-Raphson can easily be extended to multiple dimensions using gradient operators.</p>
<p>Any clues, comments, or links are welcomed.</p>
| Per Vognsen | 2,036 | <p>A remarkable generalization of bisection to multiple dimensions is the subgradient method from convex optimization theory. If $f$ and $g$ are convex then $h = f^2 + g^2$ is also convex, and a simultaneous zero of $f$ and $g$ is a minimum of $h$.</p>
<p>Unfortunately, the subgradient method has more theoretical than practical value. But in a two-dimensional problem, it might do okay.</p>
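<p>A minimal sketch of the idea on a toy system; since the toy $h=f^2+g^2$ is smooth, plain gradient descent stands in for the subgradient method, and the system, starting point, and step size are all arbitrary choices:</p>

```python
# Toy system: f(x, y) = x + y - 3 and g(x, y) = x - y - 1, with unique root (2, 1).
def f(x, y): return x + y - 3
def g(x, y): return x - y - 1

def grad_h(x, y):
    # h = f^2 + g^2, so grad h = 2*f*grad f + 2*g*grad g,
    # with grad f = (1, 1) and grad g = (1, -1).
    return 2 * f(x, y) + 2 * g(x, y), 2 * f(x, y) - 2 * g(x, y)

x, y = 10.0, -7.0
step = 0.1
for _ in range(500):
    gx, gy = grad_h(x, y)
    x, y = x - step * gx, y - step * gy
```

<p>The iterate converges to the simultaneous zero $(2,1)$ of $f$ and $g$, i.e. the minimum of $h$.</p>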
|
851,712 | <p>Build quadratic extension of field that contains $5$ elements. And solve $x^2+x+2=0$ in this field.</p>
<p>As I understand we need to build $\mathbb{F}_{5^{2}}$.</p>
<p>Field $\mathbb{F}_5$ contains $\{0,1,2,3,4\}.$</p>
<p>And as I understand it, the field $\mathbb{F}_{5^{2}}$ contains $\{0,1,2,3,4\}$ together with the polynomials $\{x+1,x+2,x+3,x+4,2x+1, \cdots, 4x+4\}$: $20$ polynomials $ax+b$ with $a \neq 0$, where $a,b \in \{0,1,2,3,4\}$. In total we have $25$ objects in this field.</p>
<p>And I have no idea how to solve the equation. Maybe we can try every element, but this is tedious and there must be a simpler way.
Thanks for the help.</p>
| André Nicolas | 6,312 | <p>The easiest description of the quadratic extension is that it has the same $25$ elements as yours, with addition defined in the obvious way, and $x^2$ being defined as $-x-2$, that is, $4x+3$. Then $x$ is a solution of your quadratic equation.</p>
<p>Once we have defined $x^2$, the product $(ax+b)(cx+d)$ can be defined in the "natural" way: multiply as usual, and replace $x^2$ by $4x+3$.</p>
|
851,712 | <p>Build quadratic extension of field that contains $5$ elements. And solve $x^2+x+2=0$ in this field.</p>
<p>As I understand we need to build $\mathbb{F}_{5^{2}}$.</p>
<p>Field $\mathbb{F}_5$ contains $\{0,1,2,3,4\}.$</p>
<p>And as I understand it, the field $\mathbb{F}_{5^{2}}$ contains $\{0,1,2,3,4\}$ together with the polynomials $\{x+1,x+2,x+3,x+4,2x+1, \cdots, 4x+4\}$: $20$ polynomials $ax+b$ with $a \neq 0$, where $a,b \in \{0,1,2,3,4\}$. In total we have $25$ objects in this field.</p>
<p>And I have no idea how to solve the equation. Maybe we can try every element, but this is tedious and there must be a simpler way.
Thanks for the help.</p>
| Jyrki Lahtonen | 11,619 | <p>An alternative to the trick in André's answer (+1) would be to use the usual quadratic formula. That works over any field where you can <em>complete the square</em>, which is the case whenever you can divide by two. This in turn is always possible, when $2\neq0$, i.e. when the characteristic is not equal to two.</p>
<p>Here we get that $x^2+x+2=0$, iff
$$
x=\frac{-1\pm\sqrt{1^2-4\cdot2}}2=\frac{-1\pm\sqrt{-7}}2.
$$
In order to use this we need to make sense of that $\sqrt{-7}$. In $\Bbb{F}_5$ we have $-7=3$ as $-7\equiv3\pmod5$. Thus we need to locate something that we can call $\sqrt3$. Quick testing reveals that no element $a\in\Bbb{F}_5$ satisfies the equation $a^2=3$, so we need to extend the field. </p>
<p>The way to introduce $\sqrt3$ is the usual. For the given reason $x^2-3$ is irreducible, and thus $\Bbb{F}_5[x]/\langle x^2-3\rangle$ is a field with 25 elements. It has the element $\alpha=x+\langle x^2-3\rangle$ that satisfies $\alpha^2=3$. So in the field $\Bbb{F}_{25}=\Bbb{F}_5[\alpha]$ the original equation has the solutions $(-1\pm\alpha)/2=2\pm3\alpha$. The last step follows from the fact that $2\cdot3=6=1\in\Bbb{F}_5$, so $3=\frac12$ here.</p>
<p>The field $\Bbb{F}_5[\alpha]$ is, of course, isomorphic to the field André used. After all, there is only one field of 25 elements up to isomorphism. An explicit isomorphism can be given by mapping his (coset of) $x$ to what I called $2\pm3\alpha$. Either sign works.</p>
<p>Note that by creating $\sqrt3$ we also created $\sqrt2$. This is because in all extension fields of $\Bbb{F}_5$ containing a $\sqrt3$
$$
2=12=4\cdot3=2^2\cdot3,
$$
so
$$
\sqrt2=\sqrt{2^2\cdot3}=\pm2\sqrt3.
$$
Thus $\pm2\alpha$ serve in the role of square roots of two in the field $\Bbb{F}_5[\alpha]$. This is yet another manifestation of the fact that there is (up to isomorphism) only one field of 25 elements.</p>
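<p>The claims above ($3$ has no square root in $\Bbb{F}_5$, $2\pm3\alpha$ solve the equation, and $\pm2\alpha$ are square roots of $2$) are easy to verify numerically. In the sketch below (mine, not from the answer), elements of $\Bbb{F}_5[\alpha]$ are pairs $(a,b)$ meaning $a+b\alpha$ with $\alpha^2=3$.</p>

```python
# F_5[alpha] as pairs (a, b) meaning a + b*alpha, with alpha^2 = 3.

def mul(p, q):
    a, b = p
    c, d = q
    # (a + b alpha)(c + d alpha) = ac + 3bd + (ad + bc) alpha
    return ((a * c + 3 * b * d) % 5, (a * d + b * c) % 5)

# no square root of 3 inside F_5 itself:
print([a for a in range(5) if a * a % 5 == 3])          # []

# the roots 2 + 3*alpha and 2 - 3*alpha = 2 + 2*alpha:
for r in [(2, 3), (2, 2)]:
    sq = mul(r, r)
    val = ((sq[0] + r[0] + 2) % 5, (sq[1] + r[1]) % 5)  # r^2 + r + 2
    print(val)                                          # (0, 0) both times

# (2*alpha)^2 = 4*3 = 12 = 2 in F_5:
print(mul((0, 2), (0, 2)))                              # (2, 0)
```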
|
1,074,177 | <p>Suppose a problem
$$\min_{x \in \mathbb{R}^{n}} f(x)$$</p>
<p>subject to $x \in \Omega$ which is a closed and convex set. If $\nabla f(x)$ is Lipschitz continuous in $\Omega$, then prove that</p>
<p>$$e(x) = x - P_{\Omega}(x- \nabla f(x))$$</p>
<p>is also Lipschitz continuous in $\Omega$.</p>
<p>Thanks in advance.</p>
| copper.hat | 27,978 | <p>You need to show that $P_\Omega$ is Lipschitz.</p>
<p>In general, $\hat{x}$ minimises a convex function $g$ over a convex set $C$
<strong>iff</strong> $\langle \nabla g(\hat{x}), c-\hat{x} \rangle \ge 0$ for all $c \in C$.</p>
<p>Taking $g(c) = {1 \over 2} \|c -x \|^2$, we see that $\hat{c}$ minimises $g$ over $\Omega$ <strong>iff</strong> $\langle \hat{c}-x, c-\hat{c} \rangle \ge 0$ for all $c \in \Omega$.</p>
<p>Hence we have $\langle p_\Omega(x)-x, c-p_\Omega(x) \rangle \ge 0$ for all $c \in \Omega$. In particular, we have $\langle p_\Omega(x)-x, p_\Omega(y)-p_\Omega(x) \rangle \ge 0$ or $\langle p_\Omega(x), p_\Omega(y)-p_\Omega(x) \rangle \ge \langle x, p_\Omega(y)-p_\Omega(x) \rangle$ for any $x,y$.</p>
<p>Then
\begin{eqnarray}
\|p_\Omega(x)-p_\Omega(y)\|^2 &=& \langle p_\Omega(x)-p_\Omega(y), p_\Omega(x)-p_\Omega(y)\rangle \\
&=& \langle p_\Omega(x), p_\Omega(x)-p_\Omega(y)\rangle - \langle p_\Omega(y), p_\Omega(x)-p_\Omega(y)\rangle \\
&\le& \langle x, p_\Omega(x)-p_\Omega(y) \rangle - \langle y, p_\Omega(x)-p_\Omega(y) \rangle \\
&=& \langle x-y, p_\Omega(x)-p_\Omega(y) \rangle \\
&\le & \| x- y\| \| p_\Omega(x)-p_\Omega(y)\|
\end{eqnarray}
From which we get $\| p_\Omega(x)-p_\Omega(y)\| \le \|x-y\|$.</p>
<p>Then you have
\begin{eqnarray}
\|e(x)-e(y)\| &\le& \|x-y\| + \|p_\Omega(x-\nabla f(x))-p_\Omega(y-\nabla f(y))\| \\
&\le& \|x-y\| + \|x-\nabla f(x)-(y-\nabla f(y))\| \\
&\le& 2 \|x-y\| + L \|x-y\|
\end{eqnarray}</p>
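<p>The key inequality, $\|p_\Omega(x)-p_\Omega(y)\| \le \|x-y\|$, is easy to probe numerically. A minimal sketch (mine, using the box $[0,1]^n$ as the closed convex set, so the projection is coordinatewise clipping):</p>

```python
# Non-expansiveness of the projection onto the box [0,1]^n, checked on random pairs.
import math
import random

def proj(v):
    return [min(1.0, max(0.0, t)) for t in v]

def norm(v):
    return math.sqrt(sum(t * t for t in v))

random.seed(0)
for _ in range(1000):
    x = [random.uniform(-3, 3) for _ in range(4)]
    y = [random.uniform(-3, 3) for _ in range(4)]
    dp = norm([a - b for a, b in zip(proj(x), proj(y))])
    d = norm([a - b for a, b in zip(x, y)])
    assert dp <= d + 1e-12
print("non-expansive on all samples")
```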
|
1,497 | <p>If (C,tensor,1) is a symmetric monoidal category and f:A-->B is a morphism of PROPs (or monoidal cats = colored PROPs), one gets a forgetful functor f^*:B-Alg(C)-->A-Alg(C) (where B-Alg(C)=tensor-preserving functors from B to C) defined by precomposing with f.</p>
<p>Does anyone know of conditions on A,B,C under which this functor has a left or a right adjoint?
(e.g. if C has the monoidal structure coming from products, it has a left adjoint, is there more to say?)</p>
| Aleks Kissinger | 800 | <p>Paul-André Melliès has quite an interesting paper on this topic:</p>
<p><a href="http://hal.archives-ouvertes.fr/docs/00/33/93/31/PDF/free-models.pdf" rel="nofollow">http://hal.archives-ouvertes.fr/docs/00/33/93/31/PDF/free-models.pdf</a></p>
<p>...but phrased in the more general terms of T-algebras of a pseudomonad. The idea is that a pseudomonad on a 2-category (especially Cat), let you put algebraic structures on categories the same way monads let you put them on objects of a category, like sets. This is motivated by the need to put PROPs, PROBs, PROs, Lawvere theories, etc. all under one roof.</p>
<p>He begins by talking about how a T-algebra homomorphism (a monoidal functor in the case where the T-algebras are monoidal categories) j : A -> B induces a forgetful functor U_j from Models(B,C) to Models(A,C) in the way you mentioned. Looking for left adjoint to U_j amounts to looking for a way to push some functor backwards along j in a suitably natural way. As Tom already mentioned, this is the left Kan extension. This process is functorial, and usually written Lan_j : [A,C] -> [B,C]. Furthermore, Lan_j -| U_j.</p>
<p>But if we were done there, all PROPs would have free algebras, which we know is not true in general (cf. bialgebras). The hard part is proving the Lan_j is a <em>T-algebraic</em> left Kan-extension. In the case of Lawvere theories, this is easy, because the product structure guarantees all natural transformations of cartesian functors are cartesian, but in the monoidal case, this stuff all needs to be checked.</p>
<p>This is where the story starts to get more complicated. It seems quite tricky to come up with suitably weak conditions under which Lan_j is T-algebraic. Mellies phrases these in terms of distributers (aka profunctors, modules, depending on who you ask and what country you are in :-P). If functors are like functions, this are a bit like relations. The nice thing about them is they always come in adjoint pairs f_* and f^* for any functor f.</p>
<p>So, thm 1 in the paper is (roughly) this. If j and j^* are T-algebraic in the suitable 2-categories, C is (T-algebraically) complete and co-complete, and for any model f : A -> C, f_* o j^* factors through the up-star of the Yoneda embedding y : C -> Psh(C), then U_j has a left adjoint computed as Lan_j that is indeed the free functor.</p>
<p>This is quite heavy-duty (pro-arrow equipment, ends, etc.), but it seems to get the job done. It would be nice to see more concrete/specific examples of this.</p>
|
1,241,864 | <p>I would like to know if there is formula to calculate sum of series of square roots $\sqrt{1} + \sqrt{2}+\dotsb+ \sqrt{n}$ like the one for the series $1 + 2 +\ldots+ n = \frac{n(n+1)}{2}$.</p>
<p>Thanks in advance.</p>
| Alex | 38,873 | <p>For an easier solution notice that $f(x) = \sqrt{x}$ is a monotone increasing function, hence for every $[k ,k+1]$
$$
\int_{k-1}^{k} \sqrt{x} dx<\sqrt{k}<\int_{k}^{k+1} \sqrt{x} dx
$$
Now sum over $k$; you'll get a sharp approximation.</p>
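<p>Summing the left inequality over $k=1,\dots,n$ gives $\frac23 n^{3/2} < \sum_{k=1}^n \sqrt{k}$, and summing the right one gives the upper bound $\frac23\left((n+1)^{3/2}-1\right)$. A quick numerical check (my addition):</p>

```python
# Integral bounds for the sum of square roots:
#   (2/3) n^(3/2)  <  sum_{k=1}^n sqrt(k)  <  (2/3) ((n+1)^(3/2) - 1)
import math

for n in [10, 100, 1000]:
    s = sum(math.sqrt(k) for k in range(1, n + 1))
    lo = (2 / 3) * n ** 1.5
    hi = (2 / 3) * ((n + 1) ** 1.5 - 1)
    assert lo < s < hi
    print(n, round(lo, 2), round(s, 2), round(hi, 2))
```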
|
2,055,803 | <p>Simplify $\left(\dfrac{4}{5} - \dfrac{3}{5}i\right)^{\!75}$</p>
<p>I've searched around on the internet and haven't found a very straightforward answer for this particular problem. I believe this problem has something to do with Euler's Formula, but I'm not sure how to use it in this case.</p>
<p>EDIT: We are not allowed to use calculators for this problem.</p>
| Vidyanshu Mishra | 363,566 | <p>As well mentioned by @dxiv in the comment, you can easily see that $x$ cannot be zero (otherwise in the expression on the left we will do something which is not permitted to do so [which I shall let you find]). So, you can multiply the equation by $x$.</p>
<p>On multiplying the whole equation by $x$, you get $1+2x^{2}=3x$ $\implies 2x^2-3x+1=0$. On factorising, it becomes $(2x-1)(x-1)=0$. So, $x=1/2$ or $x=1$, as required.</p>
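<p>A one-line numerical check of the two roots (my addition, not part of the answer):</p>

```python
# Both candidate roots satisfy 2x^2 - 3x + 1 = 0 (and hence 1/x + 2x = 3).
for x in (0.5, 1.0):
    print(x, 2 * x * x - 3 * x + 1)   # 0.0 for both
```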
|
2,529,387 | <p><a href="https://i.stack.imgur.com/3M7iS.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3M7iS.jpg" alt="enter image description here"></a></p>
<p>How can I do the fourth question?</p>
<blockquote>
<p>Q: Given that $a, b \in \mathbb{Z}$ and $a$ is even, show that if $a^2b$ is not divisible by $8$, then $b$ is odd.</p>
</blockquote>
| Paolo Intuito | 395,372 | <p>If $a$ is even, then there exists $n$ such that $a=2n$. Hence, $a^2b= 4n^2b$.</p>
<hr>
<p>Can you take it from here, knowing that $8=2^3$?</p>
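<p>For intuition, the claim can also be checked exhaustively over a small range; a throwaway sketch of mine:</p>

```python
# If a is even and 8 does not divide a^2 * b, then b is odd; equivalently,
# even a and even b force 8 | a^2 * b, since a = 2n and b = 2m give a^2 b = 8 n^2 m.
for a in range(-20, 21, 2):          # even a only
    for b in range(-20, 21):
        if (a * a * b) % 8 != 0:
            assert b % 2 == 1        # b must be odd
print("claim verified on the sample range")
```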
|