Dataset schema (column: type, min – max of values/lengths):

- qid: int64, 1 – 4.65M
- question: large_string, length 27 – 36.3k
- author: large_string, length 3 – 36
- author_id: int64, -1 – 1.16M
- answer: large_string, length 18 – 63k
2,623,521
<p>Prove that: $\tan (2\tan^{-1} (x))=2\tan (\tan^{-1} (x)+\tan^{-1} (x^3))$</p> <p>My Attempt: Let $\tan^{-1} (x)=A$ $$x=\tan (A)$$ Now, L.H.S.$=\tan (2\tan^{-1} (x))$ $$=\tan (2A)$$ $$=\dfrac {2\tan (A)}{1-\tan^2 (A)}$$ $$=\dfrac {2x}{1-x^2}$$</p>
Intelligenti pauca
255,730
<p>Tension $\vec T_1$ is the easy one, because we have a static balance of forces: $$ \vec T+\vec T_1+m\vec g=0. $$ From the vertical components one then gets (I'm using the same letters without arrows to denote the magnitudes of vectors, as is customary in kinematics): $$ T_1\cos30°=mg, \quad\hbox{that is:}\quad T_1={2mg\over\sqrt3}. $$ Tension $\vec T_2$, instead, takes place at a time when the mass is moving: even if its velocity vanishes at $B$, its acceleration $\vec a$ is not vanishing and is constrained by kinematics to be perpendicular to $\vec T_2$. We have of course $$\vec T_2+m\vec g=m\vec a,$$ and separating the components of this equation we get: $$ T_2\sin30°=ma\cos30° \quad\hbox{and}\quad mg-T_2\cos30°=ma\sin30°. $$ From these equations you can solve for $a$ and $T_2$.</p> <p><a href="https://i.stack.imgur.com/1m8uA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1m8uA.png" alt="enter image description here"></a></p>
2,122,955
<p><em>My question is from T. Y. Lam's A First Course in Noncommutative Rings.</em></p> <blockquote> <p>For a ring $R$, if all left $R$-modules are semisimple, then all short exact sequences of left $R$-modules split.</p> </blockquote> <p>A module $M$ is a semisimple module if every submodule $N$ of $M$ is a direct summand, that is, there exists a submodule $N'$ such that $M=N \oplus N'$.</p> <p>In order to prove the proposition above, let us consider a short exact sequence $0 \rightarrow A \hookrightarrow B \rightarrow C \rightarrow 0$, where $A,B$ and $C$ are semisimple left $R$-modules. What should my strategy be in order to show that $B\cong A \oplus C$?</p>
rschwieb
29,335
<p>By the first isomorphism theorem, you always have that $B/A\cong C$.</p> <p>Since $B$ is semisimple, there exists some submodule $C'$ such that $A\oplus C'=B$. By the second isomorphism theorem $$\frac{B}{A}=\frac{A\oplus C'}{A}\cong\frac{C'}{A\cap C'} =\frac{C'}{\{0\}}\cong C'$$.</p> <p>So $C\cong C'$, and $A\oplus C\cong B$.</p> <p>This is an ad-hoc elementary explanation, but ultimately it is going to be important for you to absorb the definition that Mustafa has given where you learn the equivalence of the statements "$0\to A\to B\to C\to 0$ splits" and "$A\oplus C\cong B$".</p>
818,850
<p>I want to generate numbers $1$ to $10$ with uniform probability distribution. So I write the numbers $1$ to $10$ in the natural order. I keep writing the next block of $10$ numbers by permuting the first $10$ numbers. Can this long string be considered random with uniform distribution? Is there a problem?</p>
heropup
118,193
<p>I would start off with a much simpler example. Suppose we wanted to locate $1/3$ on a number line, but the number line is such that every interval is divided into tenths, not thirds (e.g., like a metric ruler). How would you locate such a number? Well, you would start off by moving $3/10$ units from $0$, so now you're at $0.3$. Then between $3/10$ and $4/10$, the ruler is again divided into ten equal sub-intervals, namely $0.31, 0.32, 0.33, \ldots, 0.39, 0.40$. We again choose the third tick mark, so now we're at $0.33$. By now we should see what to do next: between $0.33$ and $0.34$, there are ten more intervals, and we choose the third tick mark, which is $0.333$. This process never ends. We have gone from $0.3$ to $0.33$ to $0.333$ to $0.3333$, and so on. The position we are at after an infinite number of steps is exactly $1/3$.</p> <p>Now you might say, "why does the ruler have to be divided into tenths? If the ruler had simply been divided into exact thirds, we would not need such an elaborate, infinite process." And that's correct. But the point is that the division of the ruler is analogous to the decimal representation of a number, where each successive digit can only be obtained by moving from one tick mark to the next. Otherwise, if we wanted to "measure" a number like $\sqrt{2}$ on a number line, we could just get ourselves a "special ruler" where there are just two tick marks: one at $0$ and the other one at exactly $\sqrt{2}$. But that doesn't illustrate that the decimal expansion of $\sqrt{2}$ is $1.41421356237309504880168872421\ldots.$ Each real number is identified with a unique point on the number line. But if we want to regard such numbers as being some kind of decimal expansion, then we must resort to using the ruler divided into successive tenths.</p>
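heropup's successive-tenths procedure can be written out as a digit-extraction loop. The sketch below is my own illustration, not part of the answer; it uses exact arithmetic via `fractions` to avoid rounding:

```python
from fractions import Fraction

def decimal_digits(x, n):
    """First n decimal digits of x (with 0 <= x < 1), found by repeatedly
    zooming in on the interval of tenths containing x, as on the ruler."""
    digits = []
    for _ in range(n):
        x *= 10            # subdivide the current interval into tenths
        d = int(x)         # the tick mark we stop at
        digits.append(d)
        x -= d             # position inside the chosen sub-interval
    return digits

print(decimal_digits(Fraction(1, 3), 6))  # -> [3, 3, 3, 3, 3, 3]
```

Each pass through the loop is one "choose a tick mark and subdivide" step of the ruler description; for $1/3$ the chosen tick is $3$ at every scale.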
174,676
<p>When I am faced with a simple linear congruence such as $$9x \equiv 7 \pmod{13}$$ and I am working without any calculating aid handy, I tend to do something like the following:</p> <p>"Notice" that adding $13$ on the right and subtracting $13x$ on the left gives: $$-4x \equiv 20 \pmod{13}$$</p> <p>so that $$x \equiv -5 \equiv 8 \pmod{13}.$$</p> <p>Clearly this process works and is easy to justify (apart from not having an algorithm for "noticing"), but my question is this: I have a vague recollection of reading somewhere this sort of process was the preferred method of C. F. Gauss, but I cannot find any evidence for this now, so does anyone know anything about this, or could provide a reference? (Or have I just imagined it all?)</p> <p>I would also be interested to hear if anyone else does anything similar.</p>
Bill Dubuque
242
<p><span class="math-container">$bx\equiv a\pmod{\!m}$</span> has a <em>unique</em> solution <a href="https://math.stackexchange.com/a/2383809/242"><span class="math-container">$\!\iff\!b\,$</span> is coprime to the modulus <span class="math-container">$m$</span></a>. If so, <a href="https://math.stackexchange.com/a/3290965/242">by Bezout</a> <span class="math-container">$\,b\,$</span> is invertible <span class="math-container">$\!\bmod m,\,$</span> so scaling <span class="math-container">$\,bx\equiv a\,$</span> by <span class="math-container">$\,b^{-1}\,$</span> we obtain the unique solution <span class="math-container">$\,x\equiv b^{-1}a =: a/b.\,$</span> We can quickly compute <span class="math-container">$\,b^{-1}\pmod{\!m}\,$</span> by the <a href="https://math.stackexchange.com/a/616893/242">extended Euclidean algorithm</a>, but there are often more convenient ways for smaller numbers (e.g. <a href="https://math.stackexchange.com/a/3434593/242">here</a> and <a href="https://math.stackexchange.com/questions/2367841/what-is-the-best-way-to-solve-modular-arithmetic-equations-such-as-9x-equiv-33/2368266#2368266">here</a> are a handful of methods applied). We describe a few such methods below, viewing <span class="math-container">$\, x\equiv b^{-1}a \equiv a/b\,$</span> as a <a href="https://math.stackexchange.com/a/3001866/242">modular fraction.</a> [See <a href="https://math.stackexchange.com/a/2053174/242">here</a> for the general method when the <strong>solution is not unique</strong>, i.e. 
when <span class="math-container">$\gcd(b,m)&gt;1$</span>].</p> <hr /> <p>The first, <a href="https://math.stackexchange.com/a/3230/242">Gauss's algorithm</a>, is based on Gauss's proof of Euclid's lemma <a href="https://math.stackexchange.com/a/2973557/242">via the descent</a> <span class="math-container">$\,p\mid ab\,\Rightarrow\, p\mid a(p\bmod b).\,$</span> Generally it only works for prime moduli, but we can also execute the general extended Euclidean algorithm <a href="https://math.stackexchange.com/a/2053174/242">in fraction form too</a> (using <em>multi-valued</em> &quot;fractions&quot;).</p> <p>It works by repeatedly scaling <span class="math-container">$\rm\:\color{#C00}{\frac{A}B}\overset{\times\ N} \to \frac{AN}{BN}\: $</span> by the least <span class="math-container">$\rm\,N\,$</span> with <span class="math-container">$\rm\, BN \ge 13,\, $</span> then reducing mod <span class="math-container">$13$</span></p> <p><span class="math-container">$$\rm\displaystyle \ mod\ 13\!:\,\ \color{#C00}{\frac{7}9} \,\overset{\times\ 2}\equiv\, \frac{14}{18}\, \equiv\, \color{#C00}{\frac{1}5}\,\overset{\times \ 3}\equiv\, \frac{3}{15}\,\equiv\, \color{#C00}{\frac{3}2} \,\overset{\times\ 7}\equiv\, \frac{21}{14} \,\equiv\, \color{#C00}{\frac{8}1}\qquad\!\! 
$$</span></p> <p>Denominators of the <span class="math-container">$\color{#c00}{\rm reduced}$</span> fractions decrease <span class="math-container">$\,\color{#C00}{ 9 &gt; 5 &gt; 2&gt; \ldots}\,$</span> so reach <span class="math-container">$\color{#C00}{1}\,$</span> (not <span class="math-container">$\,0\,$</span> else the denominator would be a <em>proper</em> factor of the <em>prime</em> modulus; it may fail for <em>composite</em> modulus)</p> <p>Simpler: <span class="math-container">$ $</span> using <span class="math-container">$\rm\color{#0a0}{least}$</span> residues: <span class="math-container">$\displaystyle\ \ \frac{7}9\,\equiv\, \frac{7}{\!\color{#0a0}{-4}\!\ \,}\,\overset{\times\ 3}\equiv\,\frac{21}{\!\!-12\ \ \ \!\!}\,\equiv\, \color{#c00}{\frac{8}1}$</span></p> <p>This optimization using <span class="math-container">$\rm\color{#0a0}{least\ magnitude}$</span> residues <span class="math-container">$\,0,\pm 1, \pm 2.\ldots$</span> often simplifies mod arithmetic. Here we can also optimize by (sometimes) cancelling obvious common factors, or by pulling out obvious factors of denominators, etc. 
For example</p> <p><span class="math-container">$$\frac{7}9\,\equiv\, \frac{\!-6\,}{\!-4\,}\,\equiv\frac{\!-3\,}{\!-2\,}\,\equiv\frac{10}{\!-2\,}\,\equiv\,-5$$</span></p> <p><span class="math-container">$$\frac{7}9\,\equiv\,\frac{\!-1\cdot 6}{\ \ 3\cdot 3}\,\equiv\,\frac{\!\,12\cdot 6\!}{\ \ \,3\cdot 3}\,\equiv\, 4\cdot 2$$</span></p> <hr /> <p><strong>Or</strong> <a href="https://math.stackexchange.com/a/3434593/242">twiddle it</a> as you did: <span class="math-container">$ $</span> check if quotient <span class="math-container">$\rm a/b\equiv (a\pm\!13\,i)/(b\pm\!13\,j)\,$</span> is <em>exact</em> for small <span class="math-container">$\rm\,i,j,\,$</span> e.g.</p> <p><span class="math-container">$$ \frac{1}7\,\equiv \frac{\!-12}{-6}\,\equiv\, 2;\ \ \ \frac{5}7\,\equiv\,\frac{18}{\!-6\!\,}\,\equiv -3$$</span></p> <p>When working with smaller numbers there is a higher probability of such optimizations being applicable (the law of small numbers), so it's well-worth looking for such in manual calculations.</p> <p>Generally we can choose a congruent numerator giving an <em>exact quotient</em> by <a href="https://math.stackexchange.com/a/326608/242">Inverse Reciprocity</a>.</p> <p><span class="math-container">$\bmod 13\!:\ \dfrac{a}{b}\equiv \dfrac{a-13\left[\color{#0a0}{\dfrac{a}{13}}\bmod b\right]}b\,\ $</span> e.g. 
<span class="math-container">$\,\ \dfrac{8}9\equiv \dfrac{8-13\overbrace{\left[\dfrac{8}{\color{#c00}{13}}\bmod 9\right]}^{\large\color{#c00}{ 13\ \,\equiv\,\ 4\ }}}9\equiv\dfrac{8-13[2]}9\equiv-2$</span></p> <p>Note that the value <span class="math-container">$\,\color{#0a0}{x\equiv a/13}\,$</span> is exactly what we need to make the numerator divisible by <span class="math-container">$b,\,$</span> i.e.</p> <p><span class="math-container">$\qquad\quad\bmod b\!:\,\ a-13\,[\color{#0a0}x]\equiv 0\iff 13x\equiv a\iff \color{#0a0}{x\equiv a/13}$</span></p> <p>This is essentially an optimization of the Extended Euclidean Algorithm (when it takes two steps).</p> <p><strong>Note</strong> <span class="math-container">$ $</span> <em>Gauss' algorithm</em> is <em>my</em> name for a special case of the Euclidean algorithm that's <em>implicit</em> in Gauss' <a href="https://books.google.com/books?id=DyFLDwAAQBAJ&amp;lpg=PR1&amp;dq=gauss%20disquisitiones&amp;pg=PA5#v=onepage&amp;q=gauss%20disquisitiones&amp;f=false" rel="nofollow noreferrer">Disquisitiones Arithmeticae, Art. 13, 1801</a>. I don't know if Gauss <em>explicitly</em> used this algorithm elsewhere (apparently he chose to <a href="https://groups.google.com/forum/#!original/sci.math/iBOFzEqlxu0/RjGZBRQOtiAJ" rel="nofollow noreferrer">avoid use or mention of the Euclidean algorithm</a> in <em>Disq. Arith.</em>). Gauss does briefly mention modular fractions in Art. 31 in <em>Disq. Arith</em>.</p> <p>The reformulation above in terms of fractions does not occur in Gauss' work as far as I know. I devised it in my youth before I had perused <em>Disq. Arith.</em> It is likely very old but I don't recall seeing it in any literature. 
I'd be very grateful for any historical references.</p> <p>See <a href="https://math.stackexchange.com/a/3130465/242">here</a> for further discussion, including a detailed comparison with the descent employed by Gauss, and a formal proof of correctness of the algorithm.</p> <p><strong>Beware</strong> <span class="math-container">$ $</span> Modular fraction arithmetic is valid only for fractions with denominator <em>coprime</em> to the modulus. <a href="https://math.stackexchange.com/a/921093/242">See here</a> for further discussion.</p>
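The fraction descent in the worked example can be mechanized. The following Python sketch is mine (the function name is my own, not from the answer or from Gauss), and, as noted above, it is only guaranteed to terminate for prime moduli:

```python
def gauss_descent(a, b, p):
    """Solve b*x = a (mod p), p prime, by the fraction descent above:
    scale a/b by the least N with b*N >= p, then reduce both mod p;
    the denominators strictly decrease until they reach 1."""
    a %= p
    b %= p
    while b != 1:
        n = -(-p // b)        # least N such that b*N >= p (ceiling division)
        a = a * n % p
        b = b * n % p
    return a

print(gauss_descent(7, 9, 13))  # -> 8, matching 7/9 -> 1/5 -> 3/2 -> 8/1
```

For composite moduli the denominator can hit a proper factor, which is exactly the failure mode described above; in modern Python the inverse is also available directly as `pow(b, -1, p)`.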
174,676
<p>When I am faced with a simple linear congruence such as $$9x \equiv 7 \pmod{13}$$ and I am working without any calculating aid handy, I tend to do something like the following:</p> <p>"Notice" that adding $13$ on the right and subtracting $13x$ on the left gives: $$-4x \equiv 20 \pmod{13}$$</p> <p>so that $$x \equiv -5 \equiv 8 \pmod{13}.$$</p> <p>Clearly this process works and is easy to justify (apart from not having an algorithm for "noticing"), but my question is this: I have a vague recollection of reading somewhere this sort of process was the preferred method of C. F. Gauss, but I cannot find any evidence for this now, so does anyone know anything about this, or could provide a reference? (Or have I just imagined it all?)</p> <p>I would also be interested to hear if anyone else does anything similar.</p>
CopyPasteIt
432,081
<p>Another offbeat process but with algorithmic potential.</p> <p>Solve <span class="math-container">$9x \equiv 7 \pmod{13}$</span>.</p> <p><span class="math-container">$\quad 9x = 7 + 13y \implies 0 \equiv 1 + y \pmod{3} \implies y \equiv 2 \pmod{3}$</span></p> <p>and</p> <p><span class="math-container">$\quad y : 2 \; \mid \; 7 + 13y = 33 \quad \quad \text{NO GOOD!}$</span><br> <span class="math-container">$\quad y : 5 \; \mid \; 7 + 13y = 72 \quad \quad \text{AND is divisible by } 9$</span></p> <p>So,</p> <p><span class="math-container">$\tag{ANS} x \equiv 8 \pmod{13}$</span></p>
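The trial process above has the algorithmic potential the answer mentions; here is an illustrative Python sketch (function name mine), searching for a $y$ that makes $b \mid a + my$, which exists among $y = 0,\dots,b-1$ when $\gcd(b,m)=1$:

```python
def solve_by_search(a, b, m):
    """Solve b*x = a (mod m) by searching for y with b | (a + m*y),
    mirroring the trial-and-error above; assumes gcd(b, m) = 1."""
    for y in range(b):
        if (a + m * y) % b == 0:
            return (a + m * y) // b % m
    return None

print(solve_by_search(7, 9, 13))  # -> 8, via y = 5 and (7 + 13*5)/9 = 8
```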
1,056,397
<p>I saw the following problem: $$\lim_{x\to \infty} \sqrt{9x^2+x}-3x$$ My first thought was to say that the $x$ term is overpowered when $x$ becomes large enough, so the square root becomes just $\sqrt{9x^2} = 3x$, and the value of the limit is zero.</p> <p>However, the solution given is $1\over6$, and starts with: $$\lim_{x\to \infty} \sqrt{9x^2+x}-3x=\lim_{x\to \infty} {(\sqrt{9x^2+x}-3x)(\sqrt{9x^2+x}+3x)\over \sqrt{9x^2+x}+3x}$$</p> <p>I'm assuming the continuation is:</p> <p>$$=\lim_{x\to \infty} {x\over \sqrt{9x^2+x}+3x} = \lim_{x\to \infty} {x\over \sqrt{9x^2}+3x} = \lim_{x\to \infty} {x \over 6x} = {1 \over 6}$$</p> <p>The question is, why is it okay to ignore the $x$ term in $\sqrt{9x^2+x}+3x$ but not in $\sqrt{9x^2+x}-3x$?</p>
Arthur
99,272
<p>$$\frac{x}{\sqrt{9x^2 + x} + 3x} = \frac{1}{\sqrt{9+\frac{1}{x}}+3}$$</p>
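The identity (dividing numerator and denominator by $x$) can be checked numerically; this is an illustrative sketch, not part of the answer:

```python
import math

def original(x):
    # x / (sqrt(9x^2 + x) + 3x), the rationalized form of the limit
    return x / (math.sqrt(9 * x**2 + x) + 3 * x)

def simplified(x):
    # after dividing numerator and denominator by x
    return 1 / (math.sqrt(9 + 1 / x) + 3)

for x in (1e2, 1e4, 1e6):
    print(x, original(x), simplified(x))   # both approach 1/6 = 0.1666...
```

In the simplified form the troublesome $x$ term survives only as $1/x$, which visibly tends to $0$, explaining why it may be dropped here but not before rationalizing.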
3,366,981
<p>I have been looking for a concrete answer on this matter since leaving secondary school and thought now to ask online since YouTube and Wikipedia seem too convoluted. </p> <p>I remember watching <a href="https://www.youtube.com/watch?v=Rzac520CESc" rel="nofollow noreferrer">this</a> video on the proof behind the nCr factorials and I understood the logical proof clearly. </p> <p>However, I remember when asked</p> <blockquote> <p>Expand <span class="math-container">$(x+y)^3$</span></p> </blockquote> <p>I would, automatically, draw out the following line:</p> <blockquote> <p><span class="math-container">${3 \choose 0}x^0y^3 + {3 \choose 1}x^1y^2 + {3 \choose 2}x^2y^1 + {3 \choose 3}x^3y^0$</span></p> </blockquote> <p>and then use my calculator to get</p> <blockquote> <p><span class="math-container">$y^3 + 3x^1y^2 + 3x^2y^1 + x^3$</span></p> </blockquote> <p>and further to my disappointment, my teacher would speak about Pascal's triangle and say that the coefficients (the numbers in front of my <span class="math-container">$x$</span> and <span class="math-container">$y$</span> terms) came from this so-called triangle. There was no explanation given for the purpose of factorials or where this Pascal's triangle came from. </p> <p>After looking at the video above and looking at the Pascal's triangle, I'm just confused as to why I would need <span class="math-container">${n \choose r}$</span> for expanding the equation above. 
</p> <p><em>What is the link between using this factorials formula:</em></p> <p><span class="math-container">$$\frac{n!}{r!(n-r)!}$$</span></p> <p><em>and applying it to expanding algebraic expressions as above?</em></p> <p>I understand if it seems trivial to most people here, but after my first year as a chemistry student, I'm still interested in knowing the link between the two!</p> <p><em>EDIT - I should add I understand the video where he describes how many words can be made but I can't apply this understanding for an algebraic perspective</em></p>
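One way to see the link being asked about is to count directly: in $(x+y)^3 = (x+y)(x+y)(x+y)$, the coefficient of $x^r y^{3-r}$ is the number of ways to pick $x$ from exactly $r$ of the three factors. An illustrative Python sketch (mine, not from the post) comparing that count with the factorial formula:

```python
from itertools import product
from math import factorial

def choose(n, r):
    # the factorial formula n! / (r! (n-r)!)
    return factorial(n) // (factorial(r) * factorial(n - r))

# Expand (x+y)^3 by brute force: from each of the three factors pick
# either x or y; tally how many of the 2^3 products use x exactly r times.
n = 3
counts = [0] * (n + 1)
for picks in product("xy", repeat=n):
    counts[picks.count("x")] += 1

print(counts)                                 # -> [1, 3, 3, 1]
print([choose(n, r) for r in range(n + 1)])   # -> [1, 3, 3, 1]
```

The two lists agree: the "number of words" counting from the video and the factorial formula are tallying the same choices, and those tallies are the rows of Pascal's triangle.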
Adrian Keister
30,813
<p>So everything looks good so far: <span class="math-container">\begin{align*} I_x&amp;=\lambda\int_{0}^{2\pi}y^2r\, d\theta\\ &amp;=\lambda\int_{0}^{2\pi}r^2\sin^2(\theta)\,r\, d\theta\\ &amp;=\lambda r^3\int_{0}^{2\pi}\sin^2(\theta)\, d\theta\\ &amp;=\frac{m}{2\pi r}\,r^3\,\pi\\ &amp;=\frac{mr^2}{2}, \end{align*}</span> as required. You can substitute in similarly for the <span class="math-container">$I_z$</span> integral and you should get the right result.</p>
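The integral above is easy to check numerically. This is an illustrative sketch only; the values $m = 1$, $r = 2$ are arbitrary choices of mine, not from the answer:

```python
import math

m, r = 1.0, 2.0                      # sample values, chosen arbitrarily
lam = m / (2 * math.pi * r)          # linear density lambda of a uniform ring

# Riemann sum for I_x = lam * ∫_0^{2π} (r sin θ)^2 · r dθ
N = 100_000
dt = 2 * math.pi / N
I_x = sum(lam * r**3 * math.sin(i * dt) ** 2 * dt for i in range(N))

print(I_x, m * r**2 / 2)   # both 2.0 (up to rounding)
```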
2,251,112
<p>A small college has 1095 students. What is the approximate probability that more than five students were born on Christmas day? Assume that the birthrates are constant throughout the year and that each year has 365 days.</p> <p>I tried doing<br> $X \sim Pn(3) $ and calculating $P(X\gt5)$. My calculation turned out to be 0.22....., which was wrong. (What was wrong with my approximation?). The solution given used a Normal approximation to get an answer of 0.0735. When I tried using a Normal approximation, I was still unable to get that answer. Here is how I attempted it. </p> <p>$ N\sim(3,1092/365)$.<br> $ P(X\gt5)=P(N\gt4.5) $ #continuity correction<br> $= P\left(Z\gt\frac{4.5-3}{\sqrt{\frac{1092}{365}}}\right)$<br> =$ P(Z&gt;0.86721)$.<br> =$0.193$. </p> <p>Any help is much appreciated.</p>
robjohn
13,854
<p>The probability of having exactly $k$ people with birthdays on Christmas would be $$ \binom{1095}{k}\frac1{365^k}\left(\frac{364}{365}\right)^{1095-k} $$ Adding these probabilities for at most $5$ people, we get $$ \sum_{k=0}^5\binom{1095}{k}\frac1{365^k}\left(\frac{364}{365}\right)^{1095-k}=0.916358737 $$ Thus, the probability that more than $5$ people have their birthday on Christmas, is $0.083641263$.</p>
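The sum above can be evaluated directly with the standard library; an illustrative Python sketch:

```python
from math import comb

n, p = 1095, 1 / 365

# P(at most 5 students born on Christmas), summed term by term
p_at_most_5 = sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(6))
p_more_than_5 = 1 - p_at_most_5

print(p_at_most_5)    # -> 0.916358737...
print(p_more_than_5)  # -> 0.083641263...
```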
3,911,489
<p>How to prove <span class="math-container">$$I=\int_{0}^{\infty} \frac{\sin x}{x(x^2+1)}dx=\frac{\pi}{2}(1-\frac{1}{e})\, ?$$</span></p> <p>Let <span class="math-container">$$f(z)=\frac{\sin z}{z(z^2+1)}=\frac{e^{iz}-e^{-iz}}{2i\,z(z^2+1)}.$$</span></p> <p>Then I have no idea how to deal with <span class="math-container">$[0,\infty)$</span>.</p> <p>Can anyone please give me a hint? Thanks in advance.</p>
Mark Viola
218,419
<p>First note that the integrand is an even function so that</p> <p><span class="math-container">$$\int_0^\infty \frac{\sin(x)}{x(x^2+1)}\,dx=\frac12 \int_{-\infty }^\infty \frac{\sin(x)}{x(x^2+1)}\,dx\tag1$$</span></p> <p>Next, we can use Euler's Formula <span class="math-container">$e^{ix}=\cos(x)+i\sin(x)$</span> in <span class="math-container">$(1)$</span> to write</p> <p><span class="math-container">$$\frac12 \int_{-\infty }^\infty \frac{\sin(x)}{x(x^2+1)}\,dx=\frac12 \text{Im}\left(\int_{-\infty }^\infty \frac{e^{ix}}{x(x^2+1)}\,dx\right)\tag2$$</span></p> <p>where the integral on the right-hand side of <span class="math-container">$(2)$</span> is interpreted as a Cauchy Principal Value.</p> <p>Then, we analyze the contour integral <span class="math-container">$I$</span> where</p> <p><span class="math-container">$$I=\oint_C \frac12 \frac{e^{iz}}{z(z^2+1)}\,dz$$</span></p> <p>where the closed contour <span class="math-container">$C$</span> is comprised of <span class="math-container">$(i)$</span> the straight line segments from <span class="math-container">$-R$</span> to <span class="math-container">$-\varepsilon$</span> and from <span class="math-container">$\varepsilon$</span> to <span class="math-container">$R$</span> (where <span class="math-container">$R&gt;1$</span> and <span class="math-container">$R&gt;\varepsilon&gt;0$</span>), <span class="math-container">$(ii)$</span> the circular arc in the upper-half plane, centered at the origin, with radius <span class="math-container">$\varepsilon$</span> from <span class="math-container">$-\varepsilon$</span> to <span class="math-container">$\varepsilon$</span>, and <span class="math-container">$(iii)$</span> the circular arc in the upper-half plane, centered at the origin, with radius <span class="math-container">$R$</span> from <span class="math-container">$R$</span> to <span class="math-container">$-R$</span>.</p> <p>The value of this integral is equal to <span class="math-container">$2\pi i $</span> times 
the residue of the integrand at <span class="math-container">$z=i$</span>.</p> <p>As <span class="math-container">$R\to \infty $</span> show that the contribution to the value of <span class="math-container">$I$</span> from integration over the semicircular arc of radius <span class="math-container">$R$</span> approaches <span class="math-container">$0$</span>.</p> <p>As <span class="math-container">$\varepsilon\to 0^+$</span>, show that contribution to the value of <span class="math-container">$I$</span> from integration over the semicircular arc of radius <span class="math-container">$\varepsilon$</span> approaches <span class="math-container">$i\pi$</span> times the residue of the integrand at <span class="math-container">$z=0$</span>.</p> <p>Finally, put everything together to arrive at the coveted result.</p> <p>Can you proceed now?</p>
3,911,489
<p>How to prove <span class="math-container">$$I=\int_{0}^{\infty} \frac{\sin x}{x(x^2+1)}dx=\frac{\pi}{2}(1-\frac{1}{e})\, ?$$</span></p> <p>Let <span class="math-container">$$f(z)=\frac{\sin z}{z(z^2+1)}=\frac{e^{iz}-e^{-iz}}{2i\,z(z^2+1)}.$$</span></p> <p>Then I have no idea how to deal with <span class="math-container">$[0,\infty)$</span>.</p> <p>Can anyone please give me a hint? Thanks in advance.</p>
J.G.
56,861
<p>It helps to use <span class="math-container">$\tfrac{1}{x(x^2+1)}=\tfrac1x-\tfrac{x}{x^2+1}$</span> to reduce the problem to <span class="math-container">$I=\tfrac12\pi-J$</span> with<span class="math-container">$$J:=\int_0^\infty\tfrac{x\sin xdx}{x^2+1}=\tfrac12\Im\int_{\Bbb R}\tfrac{x\exp ixdx}{x^2+1}=\tfrac12\Im[2\pi i\lim_{x\to i}\tfrac{x\exp ix}{x+i}]=\tfrac{\pi}{2e}.$$</span></p>
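As a numerical sanity check of the closed form (an illustrative sketch of mine, not from the answer): composite Simpson's rule on a truncated interval reproduces $\frac{\pi}{2}\left(1-\frac1e\right)\approx 0.9929$. The cutoff $200$ is an arbitrary choice; the tail is bounded by $\int_{200}^{\infty} x^{-3}\,dx \approx 1.3\times10^{-5}$.

```python
import math

def f(x):
    # sin(x) / (x (x^2 + 1)), with the removable singularity at 0 filled in
    return 1.0 if x == 0.0 else math.sin(x) / (x * (x * x + 1))

def simpson(g, a, b, n):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = g(a) + g(b)
    s += 4 * sum(g(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(g(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

approx = simpson(f, 0.0, 200.0, 100_000)   # tail beyond 200 is below ~1.3e-5
exact = math.pi / 2 * (1 - 1 / math.e)
print(approx, exact)   # agree to about 4 decimal places
```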
413,982
<p>How can I divide this using long division?</p> <p>$$\frac{ax^3-a^2x^2-bx^2+b^2}{ax-b}$$</p> <p><strong>Edit</strong></p> <p>Sorry guys I wrote it wrong... Fixed it now.</p>
user49685
49,685
<p>Here's one way to approach it. You can try <strong>grouping</strong> the elements in the numerator. Note that, you should try grouping in the sense that $ax - b$ must appear. So that they can calcel themselves out. So here it goes:</p> <p>$$\dfrac{\color{red}{ax^3} \color{blue}{- a^2 x^2} \color{red}{- bx^2} \color{blue}{+ b^2}}{ax - b} = \dfrac{(ax^3 -bx^2) + (b^2 - a^2 x^2)}{ax - b}$$</p> <p>I think you should be able to go from here. If not, I'll give you a hint:</p> <ul> <li>Try factoring.</li> <li>Try to apply the difference of squares fomula.</li> </ul>
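The suggested grouping can be sanity-checked numerically without spoiling the remaining steps. An illustrative Python sketch (the sample triples are arbitrary choices of mine) verifying that each group carries the common factor $ax-b$:

```python
# Check the grouping at a few arbitrary sample triples (a, b, x):
# ax^3 - bx^2 = x^2 (ax - b)  and  b^2 - a^2 x^2 = -(ax - b)(ax + b),
# the latter by the difference of squares, so both groups share (ax - b).
samples = [(2, 3, 5), (1, 4, -2), (7, 2, 3)]
for a, b, x in samples:
    numerator = a * x**3 - a**2 * x**2 - b * x**2 + b**2
    group1 = x**2 * (a * x - b)
    group2 = -(a * x - b) * (a * x + b)
    assert numerator == group1 + group2
print("grouping verified at all samples")
```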
848,631
<p>The function</p> <p>$$f(y) = \displaystyle \frac{\sin(Ny)}{\sin y}$$</p> <p>is periodic with period $2 \pi$ in general. But tracing the graph of that function for $N$ odd, it seems that its behaviour for $0 \leq y &lt; \pi$ is the same as for $\pi \leq y &lt; 2 \pi$.</p> <p>So can we state that the period of $f(y)$ with $N$ odd is just $\pi$? </p> <p>I found only that if $N = 2k + 1$, $k \in \mathbb{Z}$, and $y' = y + \pi$, then</p> <p>$$\sin(N y') = \sin[2k(y + \pi) + y + \pi] = \sin [(2k + 1) \pi + (2k + 1)y]$$</p> <p>Is there an analytical method to prove that periodicity?</p>
Joshua Mundinger
106,317
<p>Writing sine in exponential form we have $$f(x+\pi) = \frac{e^{iN(x+\pi)}-e^{-iN(x+\pi)}}{e^{i(x+\pi)}-e^{-i(x+\pi)}}$$ Now for $N$ odd, we have $e^{-iN\pi} = e^{iN\pi} = e^{i\pi}=e^{-i\pi}=-1$, so $$f(x+\pi) = \frac{-e^{iNx}+e^{-iNx}}{-e^{ix}+e^{-ix}} = f(x)$$</p>
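The sign cancellation in this proof is easy to test numerically; an illustrative Python sketch (my own; $N = 7$ is an arbitrary odd choice):

```python
import math

def f(y, N):
    """sin(N y) / sin(y), defined wherever sin(y) != 0."""
    return math.sin(N * y) / math.sin(y)

# For odd N, sin(N(y + pi)) = -sin(N y) and sin(y + pi) = -sin(y),
# so the two sign changes cancel and f(y + pi) = f(y).
ys = [0.3, 0.7, 1.1, 1.9, 2.6]
print(all(abs(f(y + math.pi, 7) - f(y, 7)) < 1e-9 for y in ys))  # True

# For even N only the denominator flips sign, so f(y + pi) = -f(y).
print(all(abs(f(y + math.pi, 6) + f(y, 6)) < 1e-9 for y in ys))  # True
```

The second check shows why the restriction to odd $N$ matters: for even $N$ the function is anti-periodic with period $\pi$, not periodic.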
1,789,414
<blockquote> <p><strong>Question :</strong> Prove that the equation $x + \cos(x) + e^{x} = 0$ has <em>exactly</em> one root</p> </blockquote> <hr> <p><em>This is what I thought of doing:</em></p> <p>$$\text{Let} \ \ \ f(x) = x + \cos(x) + e^{x}$$ By using the Intermediate Value Theorem on the open interval $(-\infty, \infty)$, and then by showing that </p> <p>$$\left(\lim_{x \to -\infty}f(x) &lt; 0 &lt; \lim_{x \to +\infty}f(x)\right) \lor \left(\lim_{x \to +\infty}f(x) &lt; 0 &lt; \lim_{x \to -\infty}f(x)\right)$$</p> <p>I could show that $\exists\ x \in \mathbb{R} \ s.t.\ f(x) = 0$.</p> <p>However, although this method does show the existence of an $x$ such that $f(x)=0$, it doesn't show that there is <b><em>only</em> one $x$</b> that satisfies the statement $f(x)=0$.</p> <hr> <p>The original question suggests using either <em>Rolle's Theorem</em> or the <em>Mean Value Theorem</em>; however, we face the same problem with both theorems, as both prove the existence of <b><em>at least</em> a single $x$</b> (or any arbitrary number) satisfying their given statements; they don't prove the existence of <b><em>only one $x$</em></b> satisfying their statements.</p> <p>All three theorems I've mentioned here:</p> <ol> <li><em>Intermediate Value Theorem</em></li> <li><em>Rolle's Theorem</em></li> <li><em>Mean Value Theorem</em></li> </ol> <p>can all be used to show that the equation $x + \cos(x) + e^{x} = 0$ has <b><em>at least</em></b> one root, but they can't be used to show $x + \cos(x) + e^{x} = 0$ has <b><em>only</em></b> one root. (Or can they?)</p> <p>How can this problem be solved then, using either Rolle's Theorem or the Mean Value Theorem (or even the Intermediate Value Theorem)?</p>
aarbee
87,430
<p>The given equation can be written as <span class="math-container">$$x+\cos x=-e^x$$</span></p> <p><span class="math-container">$x+\cos x$</span> is a familiar increasing <a href="https://www.desmos.com/calculator" rel="nofollow noreferrer">graph</a>.</p> <p><span class="math-container">$-e^x$</span> is also easy to draw.</p> <p>They intersect just once.</p>
1,789,414
<blockquote> <p><strong>Question :</strong> Prove that the equation $x + \cos(x) + e^{x} = 0$ has <em>exactly</em> one root</p> </blockquote> <hr> <p><em>This is what I thought of doing:</em></p> <p>$$\text{Let} \ \ \ f(x) = x + \cos(x) + e^{x}$$ By using the Intermediate Value Theorem on the open interval $(-\infty, \infty)$, and then by showing that </p> <p>$$\left(\lim_{x \to -\infty}f(x) &lt; 0 &lt; \lim_{x \to +\infty}f(x)\right) \lor \left(\lim_{x \to +\infty}f(x) &lt; 0 &lt; \lim_{x \to -\infty}f(x)\right)$$</p> <p>I could show that $\exists\ x \in \mathbb{R} \ s.t.\ f(x) = 0$.</p> <p>However, although this method does show the existence of an $x$ such that $f(x)=0$, it doesn't show that there is <b><em>only</em> one $x$</b> that satisfies the statement $f(x)=0$.</p> <hr> <p>The original question suggests using either <em>Rolle's Theorem</em> or the <em>Mean Value Theorem</em>; however, we face the same problem with both theorems, as both prove the existence of <b><em>at least</em> a single $x$</b> (or any arbitrary number) satisfying their given statements; they don't prove the existence of <b><em>only one $x$</em></b> satisfying their statements.</p> <p>All three theorems I've mentioned here:</p> <ol> <li><em>Intermediate Value Theorem</em></li> <li><em>Rolle's Theorem</em></li> <li><em>Mean Value Theorem</em></li> </ol> <p>can all be used to show that the equation $x + \cos(x) + e^{x} = 0$ has <b><em>at least</em></b> one root, but they can't be used to show $x + \cos(x) + e^{x} = 0$ has <b><em>only</em></b> one root. (Or can they?)</p> <p>How can this problem be solved then, using either Rolle's Theorem or the Mean Value Theorem (or even the Intermediate Value Theorem)?</p>
MosikasikaC19
1,008,452
<p>The Intermediate Value Theorem gives you the existence of at least one <span class="math-container">$x$</span> satisfying <span class="math-container">$f(x)=0$</span>; for uniqueness you have to show that <span class="math-container">$f(x)=x+\cos x+ e^x$</span> is injective.<br /> Consider the function <span class="math-container">$$f(x)=x+\cos x+e^x$$</span> For all <span class="math-container">$ x\in \mathbb{R}, f'(x)=1-\sin(x) + e^{x}$</span>, and since <span class="math-container">$|\sin x|\le 1$</span> we have <span class="math-container">$1-\sin(x)\ge0$</span>, while <span class="math-container">$e^x&gt;0$</span>, so <span class="math-container">$f'(x)&gt;0$</span> and <span class="math-container">$f$</span> is strictly increasing.<br /> Moreover <span class="math-container">$\displaystyle{\lim _{x\to +\infty} f(x)=+\infty}$</span>, since <span class="math-container">$\cos x$</span> is bounded and <span class="math-container">$e^x\to+\infty$</span>, and <span class="math-container">$\displaystyle{\lim _{x\to -\infty} f(x)=-\infty}$</span>, since <span class="math-container">$\cos x$</span> is bounded and <span class="math-container">$e^x\to 0$</span>.<br /> Finally, <span class="math-container">$f$</span> is continuous and strictly increasing with <span class="math-container">$f(\mathbb{R})=\mathbb{R}$</span>, hence there is exactly one root of <span class="math-container">$f(x)=0$</span>, i.e. of <span class="math-container">$x+\cos x +e^x=0.$</span></p>
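Since $f$ is increasing, bisection locates the single root. The following Python sketch is illustrative only (the bracket $[-10, 10]$ is an arbitrary choice of mine):

```python
import math

def f(x):
    return x + math.cos(x) + math.exp(x)

# f is increasing (f'(x) = 1 - sin x + e^x > 0), so one sign change
# brackets the unique root; bisect it down to machine precision.
lo, hi = -10.0, 10.0
assert f(lo) < 0 < f(hi)
for _ in range(100):
    mid = (lo + hi) / 2
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid
root = (lo + hi) / 2
print(root)   # root ≈ -0.96
```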
23,818
<p>I'm trying to define some notation so that Mathematica code would be more functional, similar to Haskell (just for fun): currying, lambdas, infix operator to function conversion, etc.. And I have some questions about it:</p> <ul> <li>Is it possible to make all Mathematica <code>h_[x1_,x2_,...]</code> functions to work as <code>h[x1][x2][..]</code>?</li> <li>Can I distinguish inside <code>Notation</code> box between <code>&lt;+1&gt;</code> and <code>&lt;1+&gt;</code>, how do I check for + there?</li> <li>How to define right-associate apply operator with highest precedence (<code>$</code>)?</li> </ul> <p>This is what I have so far:</p> <pre><code>&lt;&lt; Notation`; lapply[x_, y_] := x[y] rapply[x_, y_] := x[y] InfixNotation[ParsedBoxWrapper["\\"], lapply] InfixNotation[ParsedBoxWrapper["$"], rapply] x_ \ y_ \ z_ := x[y][z] x_ $ y_ $ z_ := x[y[z]] Notation[ParsedBoxWrapper[ RowBox[{ RowBox[{"λ", " ", "x__"}], "-&gt;", "y_"}]] ⟺ ParsedBoxWrapper[ RowBox[{"Function", "[", RowBox[{ RowBox[{"{", "x__", "}"}], ",", "y_"}], "]"}]]] Notation[ParsedBoxWrapper[ RowBox[{"〈", RowBox[{"op_", " ", "x_"}], "〉"}]] ⟺ ParsedBoxWrapper[ RowBox[{ RowBox[{"#", "op_", " ", "x_"}], "&amp;"}]]] AddInputAlias["f" -&gt; ParsedBoxWrapper[ RowBox[{"〈", RowBox[{"\[Placeholder]", "\[Placeholder]"}], "〉"}]]] Notation[ParsedBoxWrapper[ RowBox[{"{", RowBox[{ RowBox[{"x_", " ", ".."}], " ", "y_"}], "}"}]] ⟺ ParsedBoxWrapper[ RowBox[{"Range", "[", RowBox[{"x_", ",", "y_"}], "]"}]]] filter[f_][x_List] := Select[x, f] map[f_][x__List] := Map[f, x] filter\PrimeQ $ map\〈-1〉 $ map\ \ (λ x -&gt; 2^x)\ {1 .. 
100} </code></pre> <p><strong>EDIT</strong>: Also did some kinda lazy lists, soon it will be haskell inside Mathematica :)</p> <pre><code>SetAttributes[list, HoldAll] list[h_, l_][x_] := list[h, l[x]] list[x_] := list[x, list] map[f_][list] := list map[f_][list[x_, xs_]] := list[f[x], map[f][xs]] take[0][_] := list take[_][list] := list take[n_Integer][list[x_, xs_]] := list[x, take[n - 1][xs]] range[n_Integer] := range[1, n] range[m_, n_] := list[m, range[m + 1, n]] range[n_, n_] := list[n, list] show[list] := "[]" show[list[x_, l_]] := ToString[x] &lt;&gt; "," &lt;&gt; show[l] show $ (take[10] $ map\ (λ x -&gt; x^2) $ range[10000]) </code></pre>
Mr.Wizard
121
<h2>Currying</h2> <p>I don't know if it is possible to make <em>all</em> functions work in the Currying form (<code>h[x1][x2][..]</code>) but it is at least possible to extend Hold behavior to all arguments which natively that pattern will not have. I will copy my favorite method which I learned from <a href="https://stackoverflow.com/a/11561797/618728">this post</a> by Grisha Kirilin:</p> <pre><code>SetAttributes[f, HoldAllComplete] f[a_, b_, c_] := Hold[a, b, c] f[a__] := Function[x, f[a, x], HoldAll] </code></pre> <p>Now:</p> <pre><code>f[2 + 2][8/4][7^0] </code></pre> <blockquote> <pre><code>Hold[2 + 2, 8/4, 7^0] </code></pre> </blockquote> <p>I think conceivably most other Attribute behaviors could be implemented, but the pattern-matching changes of e.g. <code>Orderless</code> might be prohibitively difficult.</p> <h3>Formatting</h3> <p>Following jVincent's lead, if we would like to format the intermediate expressions cleanly we could use:</p> <pre><code>MakeBoxes[Function[x$, h_[a__, x$], HoldAll], _] := ToBoxes @ HoldForm[h[a]] </code></pre> <p>Now:</p> <pre><code>f[2 + 2][8/4] </code></pre> <blockquote> <pre><code>f[2 + 2, 8/4] </code></pre> </blockquote> <p>This seems appropriate as <code>f</code> is already defined such that this form can be given as input. However, if one prefers the visual form <code>f[2 + 2][8/4]</code> then we can use:</p> <pre><code>MakeBoxes[Function[x$, h_[a__, x$], HoldAll], _] := ToBoxes[HeadCompose @@ HoldForm /@ Unevaluated[{h, a}]] f[2 + 2][8/4] </code></pre> <blockquote> <pre><code>f[2 + 2][8/4] </code></pre> </blockquote> <p>(<code>HeadCompose</code> is a deprecated function but still quite useful.)<br> Note that in these rules I used <code>x$</code> which is the <a href="https://mathematica.stackexchange.com/q/20766/121">automatic renaming</a> of <code>x</code> that occurs within the <code>Function</code>. 
This code could be made more robust by using a more unique symbol name.</p> <h2>Low level Box forms</h2> <p>I cannot recall the limits of the Notation package in this regard, but you can determine how <em>Mathematica</em> parses an expression, and therefore what is accessible with <code>$PreRead</code> or <code>CellEvaluationFunction</code>, using a method from John Fultz (learned <a href="https://mathematica.stackexchange.com/a/13371/121">here</a>):</p> <pre><code>parseString[s_String, prep : (True | False) : True] := FrontEndExecute[UndocumentedTestFEParserPacket[s, prep]] </code></pre> <p>The default <code>True</code> should be used in our application as this is the form that will be seen by <code>$PreRead</code>, etc. Testing your example expressions:</p> <pre><code>parseString @ "&lt;+1&gt;" parseString @ "&lt;1+&gt;" </code></pre> <blockquote> <pre><code>{BoxData[RowBox[{"&lt;", RowBox[{"+", "1"}], "&gt;"}]], StandardForm} {BoxData[RowBox[{"&lt;", RowBox[{"1", "+"}], "&gt;"}]], StandardForm} </code></pre> </blockquote> <p>One can see that in isolation these parse to distinct forms.</p> <h2>New operators</h2> <p>I described how to create a new operator in: <a href="https://mathematica.stackexchange.com/questions/6355/how-can-one-define-an-infix-operator-with-an-arbitrary-unicode-character/6363#6363">How can one define an infix operator with an arbitrary unicode character?</a></p> <p>And a more mild example in: <a href="https://mathematica.stackexchange.com/questions/6338/prefix-operator-with-low-precedence/6341#6341">Prefix operator with low precedence</a> </p> <p>I recommend that you do not attempt to redefine the <code>$</code> symbol itself as your operator as this is extensively used internally in temporary symbol names (<code>Module</code>, <code>Unique</code>, etc.).</p>
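As an aside (not part of the original answer), the same Currying pattern can be sketched outside <em>Mathematica</em> — for instance in Python, where a closure collects arguments one at a time. The names `curry` and `h` here are purely illustrative:

```python
def curry(f, n):
    """Collect n arguments one at a time, then apply f.
    A rough analogue of turning h[x1, x2, ..., xn] into h[x1][x2]...[xn]."""
    def collect(args):
        if len(args) == n:
            return f(*args)
        return lambda x: collect(args + [x])
    return collect([])

# h takes three arguments at once; curried_h takes them one by one
h = lambda a, b, c: (a, b, c)
curried_h = curry(h, 3)
```

This is only a sketch of the idea; it does not reproduce the Hold behavior or the formatting rules discussed above.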
618,763
<p>Let $\mathbb{F}$ be a field and $R = \mathbb{F}[X,X^{-1}]$ the ring of Laurent polynomials over $\mathbb{F}$.</p> <blockquote> <blockquote> <p>a) Find the group of units in $R$.</p> <p>b) Find a Euclidean norm on $R$.</p> </blockquote> </blockquote> <p>So for a), I understand the units are $uX^{\pm i}$ where $u \in \mathbb{F}$, $u\neq 0$.</p> <p>Can you give me a hint about b)?</p>
Pete L. Clark
299
<p>This may well be overkill, but it is a result of Motzkin and Samuel that any localization of a Euclidean domain is Euclidean. A proof is given in Theorem 2.33 of <a href="http://alpha.math.uga.edu/%7Epete/OrdinalInvariant.pdf" rel="nofollow noreferrer">this paper of mine</a>.</p> <p>Since you say you want a <em>hint</em>: take the Euclidean function <span class="math-container">$\operatorname{deg}$</span> you know on <span class="math-container">$F[x]$</span>, write an element <span class="math-container">$f$</span> of <span class="math-container">$F[X,X^{-1}]$</span> as a unit times an element <span class="math-container">$g$</span> which is not divisible by <span class="math-container">$x$</span>, and try to show that <span class="math-container">$f \mapsto \operatorname{deg}(g)$</span> is a Euclidean function. (In other words, what 8k14 said!)</p>
3,570,943
<p>From the comments I got, my question amounts to: can the "inverse of an inverse" law be derived from the definition of division?</p> <hr> <p>Division is defined as: <span class="math-container">$\dfrac AB = A.\dfrac1B$</span>, that is,</p> <p>"dividing A by B is, by definition, multiplying A by the inverse of B".</p> <p>My question is:</p> <p>How do I derive from this definition the equality</p> <p><span class="math-container">$\frac{a}{b/c}$</span> = <span class="math-container">$\frac{ac}{b}$</span>?</p> <p>I tried this:</p> <p><span class="math-container">$\frac{a}{b/c}$</span></p> <p>= <span class="math-container">$\frac{a}{b\times1/c}$</span> (applying the definition of division to the denominator)</p> <p>= <span class="math-container">$\frac{a\times1}{b\times1/c}$</span> (using "1 is the identity for multiplication")</p> <p>= <span class="math-container">$\frac ab$$\times$$\frac{1}{1/c}$</span> (using <span class="math-container">$\frac ac \times \frac bd$</span> = <span class="math-container">$\frac{a\times b}{c\times d}$</span> in the reverse sense)</p> <p>But I could not go further.</p> <p>How do I recover <span class="math-container">$\frac {c}{1}$</span> from <span class="math-container">$\frac{1}{1/c}$</span> using exclusively the definition of division <span class="math-container">$\frac AB$</span> = A.<span class="math-container">$\frac1B$</span>?</p> <p>It seems to me I am moving in a circle, since apparently I would need the formula I want to prove in order to obtain the last equality I want.</p>
Steven Alexis Gregory
75,410
<p><span class="math-container">\begin{align} \dfrac{1}{\left\{\dfrac CD \right\}} &amp;= \dfrac{1}{\left\{\dfrac CD \right\}} \cdot 1 \\ &amp;= \dfrac{1}{\left\{\dfrac CD \right\}} \cdot \dfrac DD \\ &amp;= \dfrac{1 \cdot D}{\dfrac CD\cdot D } \\ &amp;= \dfrac DC \end{align}</span></p> <p>Therefore</p> <p><span class="math-container">$$\dfrac AB \div \dfrac CD = \dfrac AB \cdot \dfrac{1}{\left\{\dfrac CD \right\}} = \dfrac AB \cdot \dfrac DC = \dfrac{AD}{BC}$$</span></p>
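As a sanity check (not part of the derivation), both identities can be verified with exact rational arithmetic, e.g. using Python's `fractions` module:

```python
from fractions import Fraction

def reciprocal_of_quotient(c, d):
    """1 / (C/D), computed exactly; should equal D/C."""
    return 1 / Fraction(c, d)

def quotient_of_quotients(a, b, c, d):
    """(A/B) / (C/D), computed exactly; should equal AD/(BC)."""
    return Fraction(a, b) / Fraction(c, d)
```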
2,022,529
<p>$$\Large \lim_{x\to0^+}\frac{1}{\sin^2x}\int_{\frac{x}{2}}^x\sin^{-1}t\,dt $$</p> <p>I am trying to calculate this limit. Using L’Hôpital’s rule, I am getting it as $1/4$, but the book says it's $3/8$. I don't know where I am doing the mistake.</p>
Dhanvi Sreenivasan
332,720
<p>We divide the 7 people into 2 groups: the 5 people with no problem, and the 2 people who don't want to work with each other.</p> <p>To make a committee that will fail, we need to take both of the people from the 2-group, i.e., $2 \choose 2$ ways.</p> <p>Now there is only one space left in the committee, and we have 5 possible options, hence $5 \choose 1$.</p> <p>Therefore, the total number of impossible committees is ${5\choose 1} {2 \choose 2}$.</p>
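The counting above is easy to check by brute force (assuming, as in the answer, committees of size 3 drawn from the 7 people):

```python
from math import comb
from itertools import combinations

people = range(7)
feuding = {5, 6}  # the two who refuse to work together

# direct count: committees of size 3 containing both feuding members
bad = sum(1 for c in combinations(people, 3) if feuding <= set(c))

# formula from the answer: C(2,2) * C(5,1)
bad_formula = comb(2, 2) * comb(5, 1)
```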
1,039,326
<p>What is the fastest way to solve for $z^3 = -2 (1+i \sqrt 3) \bar z$?</p> <p>I know how to do this using complex algebra, but that takes a long time. Can someone show me a faster way?</p>
Community
-1
<p>Taking the modulus, $|z|^3=2\cdot2\cdot|z|$, and $|z|=0$ or $|z|=2$ ($|z|=-2$ cannot hold).</p> <p>Taking the argument, $3\arg z=\pi+\pi/3-\arg z+2k\pi$, and $\arg z=\pi/3+k\pi/2$.</p> <p>$$z=0,1+i\sqrt3,-\sqrt3+i,-1-i\sqrt3,\sqrt3-i.$$</p>
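The five solutions can also be checked numerically (a quick sketch, not part of the answer):

```python
w = -2 * (1 + 1j * 3 ** 0.5)
roots = [0,
         1 + 1j * 3 ** 0.5,
         -3 ** 0.5 + 1j,
         -1 - 1j * 3 ** 0.5,
         3 ** 0.5 - 1j]

def residual(z):
    """|z^3 - w * conj(z)|: zero exactly when z solves the equation."""
    z = complex(z)
    return abs(z ** 3 - w * z.conjugate())
```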
1,460,704
<p>Intuitively we know that $n^2$ grows faster than $n$, so the difference tends to negative infinity. But I have trouble proving it symbolically because of the indeterminate form $\infty - \infty$. Is there any way to do this without resorting to the epsilon-delta definition?</p>
A.Γ.
253,273
<p>Another way: complete the square $n-n^2=\frac14-\bigl(n-\frac12\bigr)^2\to -\infty$.</p>
391,569
<p>I've started the chapter in my book where we begin to integrate trig functions, so bear in mind I've only got started and that I do not have a handle on more advanced techniques.</p> <p>$\eqalign{ &amp; \int {{{\sin }^3}x} dx \cr &amp; = \int {\sin x({{\sin }^2}x} )dx \cr &amp; = \int {\sin x({1 \over 2}} - {1 \over 2}\cos 2x)dx \cr &amp; = \int {{1 \over 2}\sin x(1 - \cos 2x)dx} \cr &amp; = \int {{1 \over 2}\sin x(1 - (2{{\cos }^2}x - 1))dx} \cr &amp; = \int {{1 \over 2}\sin x(2 - 2{{\cos }^2}x)dx} \cr &amp; = \int {\sin x - {{\cos }^2}} x\sin xdx \cr &amp; y = {1 \over 3}{\cos ^3}x - \cos x + C \cr} $</p> <hr> <p>I got the right answer but it seems like an awfully long winded way of doing things, have I made things harder than they should be with this method?</p>
Caran-d'Ache
66,418
<p>Isn't it faster to use a substitution like $\cos(x)=t$? $$\eqalign{ &amp; \int {{{\sin }^3}x} dx \cr &amp; = \int {\sin x({{\sin }^2}x} )dx \cr &amp; =-\int \sin^2(x)d\cos(x) \cr &amp; = -\int (1-\cos^2(x)) d\cos(x) \cr &amp; = -\int (1-t^2) dt \cr } $$</p>
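One can confirm the substitution numerically: the antiderivative it produces, $\frac13\cos^3 x-\cos x$, should differentiate back to $\sin^3 x$. A quick Python check (illustrative only):

```python
import math

def F(x):
    """Antiderivative from the t = cos(x) substitution:
    -int (1 - t^2) dt = t^3/3 - t, evaluated at t = cos x."""
    t = math.cos(x)
    return t ** 3 / 3 - t

def dF(x, h=1e-6):
    """Central-difference approximation to F'(x)."""
    return (F(x + h) - F(x - h)) / (2 * h)
```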
391,569
<p>I've started the chapter in my book where we begin to integrate trig functions, so bear in mind I've only got started and that I do not have a handle on more advanced techniques.</p> <p>$\eqalign{ &amp; \int {{{\sin }^3}x} dx \cr &amp; = \int {\sin x({{\sin }^2}x} )dx \cr &amp; = \int {\sin x({1 \over 2}} - {1 \over 2}\cos 2x)dx \cr &amp; = \int {{1 \over 2}\sin x(1 - \cos 2x)dx} \cr &amp; = \int {{1 \over 2}\sin x(1 - (2{{\cos }^2}x - 1))dx} \cr &amp; = \int {{1 \over 2}\sin x(2 - 2{{\cos }^2}x)dx} \cr &amp; = \int {\sin x - {{\cos }^2}} x\sin xdx \cr &amp; y = {1 \over 3}{\cos ^3}x - \cos x + C \cr} $</p> <hr> <p>I got the right answer but it seems like an awfully long winded way of doing things, have I made things harder than they should be with this method?</p>
pritam
33,736
<p>Using the formula $$\sin (3x)=3\sin x-4\sin^3 x$$ you can calculate the integral in just one step.</p>
382,295
<p>Please help. I haven't found any text on how to prove by induction this sort of problem: </p> <p>$$ \lim_{n\to +\infty}1 + \frac{1}{4} + \frac{1}{4^2} + \cdots+ \frac{1}{4^n} = \frac{4}{3} $$</p> <p>I can't quite get how one can prove such. I can prove basic divisible "inductions" but not this. Thanks.</p>
Quinn Culver
11,030
<p>After subtracting $1$ this is equivalent to $\frac{1}{4} + \frac{1}{16} + \frac{1}{64} + \cdots = \frac{1}{3}$. Behold:</p> <p><img src="https://i.stack.imgur.com/qkD4g.jpg" alt="enter image description here"></p> <p>Image taken from <a href="http://mrhonner.files.wordpress.com/2010/10/infinite-series-triangle.jpg" rel="noreferrer">here</a>.</p>
382,295
<p>Please help. I haven't found any text on how to prove by induction this sort of problem: </p> <p>$$ \lim_{n\to +\infty}1 + \frac{1}{4} + \frac{1}{4^2} + \cdots+ \frac{1}{4^n} = \frac{4}{3} $$</p> <p>I can't quite get how one can prove such. I can prove basic divisible "inductions" but not this. Thanks.</p>
user47693
47,693
<p>I'm not really sure if it is possible to prove this using induction but we can do something else.</p> <p>The sum $\displaystyle 1 + \frac{1}{4} + \frac{1}{4^2} + \cdots \frac{1}{4^k}$ can be written as $\displaystyle\sum_{n=0}^k \frac{1}{4^n}$. Now $\displaystyle\sum_{n=0}^k \displaystyle\frac{1}{4^n} = \frac{1-\frac{1}{4^{k+1}}}{1-\frac{1}{4}}$. Now letting $k \rightarrow \infty$ allows us to conclude that $\displaystyle 1 + \frac{1}{4} + \frac{1}{4^2} + \cdots \frac{1}{4^k} \rightarrow \displaystyle\frac{4}{3}$</p>
382,295
<p>Please help. I haven't found any text on how to prove by induction this sort of problem: </p> <p>$$ \lim_{n\to +\infty}1 + \frac{1}{4} + \frac{1}{4^2} + \cdots+ \frac{1}{4^n} = \frac{4}{3} $$</p> <p>I can't quite get how one can prove such. I can prove basic divisible "inductions" but not this. Thanks.</p>
Pete L. Clark
299
<p>The statement of your question is quite confused. Clearly $1 + \frac{1}{4} + \ldots + \frac{1}{4^n}$ is a quantity that gets larger with $n$ -- you're adding on more positive numbers -- so it cannot be equal to any fixed number.</p> <p>I think what you want to show by induction is:</p> <blockquote> <p>For every real number $a \neq 1$ and every positive integer $n$, $1 + a + \ldots + a^n = \frac{1-a^{n+1}}{1-a}$.</p> </blockquote> <p>The setup here is the usual one for the easiest inductions: you check the base case ($n=1$), then for an arbitrary positive integer $n$ you assume that $1 + a + \ldots + a^n = \frac{1-a^{n+1}}{1-a}$. Then you add the next term -- here $a^{n+1}$ -- to both sides and do at little algebra to show that you get what you're supposed to: here, $\frac{1-a^{n+2}}{1-a}$. If you can do any induction proofs you can probably do this without much trouble; please try it.</p> <p>Now:</p> <p>1) If $|a| &lt; 1$, then $\lim_{n \rightarrow \infty} a^{n+1} = 0$, so </p> <p>$\lim_{n \rightarrow \infty} (1+a + \ldots + a^n) = \sum_{n=0}^{\infty} a^n = \frac{1}{1-a}$. </p> <p>If you plug in $a = \frac{1}{4}$ you'll get the single identity that I think you're asking for. This is one of the very first and most important instances of finding the sum of an infinite series. One often spends much of an entire course studying such things, e.g. towards the end of second semester calculus.</p> <p>2) Most people would prefer <em>not</em> to establish the boxed identity by induction. This is because of the "kryptonite" inherent in the otherwise superstrong method of proof by induction: you must know in advance what you are trying to prove. Another standard method will allow you to find the answer without knowing it in advance:</p> <p>Put $S = 1 + a + \ldots + a^n$. </p> <p>Multiply by $a$: </p> <p>$aS = a + \ldots + a^n + a^{n+1}$. 
</p> <p>Now subtract the second equation from the first and note that most of the terms on the right hand side cancel:</p> <p>$S(1-a) = S-aS = 1 - a^{n+1}$.</p> <p>Since $a \neq 1$, $1-a \neq 0$ so we can divide both sides by it. Behold, we have gotten the answer</p> <p>$S = \frac{1-a^{n+1}}{1-a}$</p> <p>that we previously had to know in order to do a proof by induction.</p>
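The algebra above is easy to spot-check with exact arithmetic — e.g., for $a=\frac14$ the partial sums match $\frac{1-a^{n+1}}{1-a}$ and approach $\frac43$ (an illustrative sketch only):

```python
from fractions import Fraction

def partial_sum(a, n):
    """1 + a + ... + a^n, summed term by term."""
    return sum(Fraction(a) ** k for k in range(n + 1))

def closed_form(a, n):
    """(1 - a^(n+1)) / (1 - a), valid for a != 1."""
    a = Fraction(a)
    return (1 - a ** (n + 1)) / (1 - a)
```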
382,295
<p>Please help. I haven't found any text on how to prove by induction this sort of problem: </p> <p>$$ \lim_{n\to +\infty}1 + \frac{1}{4} + \frac{1}{4^2} + \cdots+ \frac{1}{4^n} = \frac{4}{3} $$</p> <p>I can't quite get how one can prove such. I can prove basic divisible "inductions" but not this. Thanks.</p>
Hagen von Eitzen
39,174
<p>If you multiply $s_n:=1+q+q^2+\ldots + q^n$ with $q$ you get $q+q^2+\ldots +q^n+q^{n+1}$, which differs from $s_n$ only by dropping the first term and adding a new last term. More precisely, we find $qs_n=s_n-1+q^{n+1}$, hence (if $q\ne 1$) $$s_n=\frac{q^{n+1}-1}{q-1}=\frac1{1-q}+\frac1{q-1}\cdot q^{n+1}.$$ In your case, you have $q=\frac14$, so the first summand is $\frac43$ and the second tends to $0$ as $n\to\infty$.</p>
382,295
<p>Please help. I haven't found any text on how to prove by induction this sort of problem: </p> <p>$$ \lim_{n\to +\infty}1 + \frac{1}{4} + \frac{1}{4^2} + \cdots+ \frac{1}{4^n} = \frac{4}{3} $$</p> <p>I can't quite get how one can prove such. I can prove basic divisible "inductions" but not this. Thanks.</p>
JTP - Apologise to Monica
60,518
<p>If you represent 4/3 using binary notation, you get:</p> <p>1.0101.... (I don't know how to do a 'bar' to show repeating.)</p> <p>The first 1 after the decimal is read as 1/4 the zero is in the 1/8 place, the next 1 is 1/16, etc. This reads as matching your infinite series. </p>
2,325,650
<p>This question is based on the fourth question in the 2003 edition of the <a href="https://www.vwo.be/vwo/files/finale03.pdf" rel="nofollow noreferrer">Flemish Mathematics Olympiad</a>.</p> <blockquote> <p>Consider a grid of points with integer coordinates. If one chooses the number $R$ appropriately, the circle with center $(0, 0)$ and radius $R$ crosses a number of grid points. A circle with radius $1$ crosses 4 grid points, a circle with radius $2\sqrt{2}$ crosses 4 grid points and a circle with radius 5 crosses 12 grid points. Prove that for any $n \in \mathbb{N}$, a number $R$ exists for which the circle with center $(0, 0)$ and radius $R$ crosses at least $n$ grid points.</p> </blockquote> <p>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<a href="https://i.stack.imgur.com/8ZIqS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8ZIqS.png" alt="enter image description here"></a></p> <p>I have tried to solve this question by induction, considering a given point $(i, j)$, $i \gt j$, on the circle with radius $R$ and attempting to extract multiple points from this on a larger circle. In this case, the coordinates $(i+j,i-j)$ and $(i-j, i+j)$ are both on a circle with radius $\sqrt{2}R$. However, since $(j, i)$ is also a point on the circle, the number of crossed grid points remains the same. What is a correct way to prove the above statement?</p>
Daniel Schepler
337,888
<p>Let $a_1 + b_1 i, \ldots, a_n + b_n i$ be distinct irreducibles of the Gaussian integers $\mathbb{Z}[i]$, with $a_k &gt; b_k &gt; 0$ for each $k$. Then $a_1 + b_1 i, a_1 - b_1 i, \ldots, a_n + b_n i, a_n - b_n i$ are also distinct irreducibles such that no pair has ratio equal to a unit of $\mathbb{Z}[i]$. Therefore, since $\mathbb{Z}[i]$ is a UFD, the $4 \cdot 2^n$ numbers $(\pm 1, \pm i) (a_1 \pm b_1 i) \cdots (a_n \pm b_n i)$ are distinct. Each of them has $|z|^2 = (a_1^2 + b_1^2) \cdots (a_n^2 + b_n^2)$; therefore, the circle with radius $R = \sqrt{(a_1^2 + b_1^2) \cdots (a_n^2 + b_n^2)}$ has at least $4 \cdot 2^n$ integer points.</p> <p>(This could easily be modified; for example, by taking $(\pm 1, \pm i) \cdot (2+i)^k (2-i)^{n-k}$, you get that the circle with radius $R = 5^{n/2}$ has at least $4(n+1)$ integer points.)</p>
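The second construction is easy to verify by brute force — count the integer points on the circle $x^2+y^2=5^n$ (a small numeric sketch, assuming nothing beyond the answer):

```python
def lattice_points(r_squared):
    """Count integer points (x, y) with x^2 + y^2 == r_squared."""
    count = 0
    bound = int(r_squared ** 0.5) + 1
    for x in range(-bound, bound + 1):
        y2 = r_squared - x * x
        if y2 < 0:
            continue
        y = int(round(y2 ** 0.5))
        if y * y == y2:
            count += 1 if y == 0 else 2  # (x, y) and (x, -y)
    return count
```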
390,757
<p>Suppose <strong>we don't know that the halting problem is not recursive</strong>.</p> <p>I want to prove that the <strong>complement of the halting set is not r.e.</strong>; then we can conclude that the <strong>halting problem is not recursive</strong>.</p> <p><strong>Can you directly prove that the complement of the halting set is not r.e.?</strong></p>
Brian M. Scott
12,042
<p>HINT: If a set and its complement are both recursively enumerable, the set is ... ?</p> <p><strong>Added:</strong> This hint was for the original version of the question, which assumed that the halting problem was undecidable and did not ask for a direct proof.</p>
390,757
<p>Suppose <strong>we don't know that the halting problem is not recursive</strong>.</p> <p>I want to prove that the <strong>complement of the halting set is not r.e.</strong>; then we can conclude that the <strong>halting problem is not recursive</strong>.</p> <p><strong>Can you directly prove that the complement of the halting set is not r.e.?</strong></p>
Christopher Rose
296,508
<p>For notational convention we will call the halting set $K$ and the complement of the halting set $\overline{K}$. </p> <p>Definition 1: A set $S$ is recursively enumerable if and only if it is the domain of some computable partial function $f$.</p> <p>Theorem: $\overline{K}$ is not the domain of some computable partial function and is, by Definition 1, therefore not recursively enumerable.</p> <p>Proof: We'll try to assume as little as we can without making the proof too long. Firstly, computable partial functions are functions made up from our basic 'computational' notions, such as the notions associated with primitive recursion (identity, successor, composition, constants), $\mu $-recursion (search), etc. So each computable partial function is a finite string of our finitely many basic computational notions and therefore we can list all computable partial functions as follows:</p> <p>$f_1, f_2, f_3,... f_n... $</p> <p>We will now add two more definitions to clarify the ideas $K$ and $\overline{K}$.</p> <p>Definition 2: The halting set is defined as $K= \{\forall x| f_x(x) \downarrow \}$, or the set of all indexes of computable partial functions which halt when given their own index as an input.</p> <p>Definition 3: The complement of the halting set is defined as $\overline{K} = \{ \forall x | f_x(x) \uparrow \}$, or the set of all indexes of computable partial functions which do not halt when given their own index as an input.</p> <p>Looking back at our original Definition 1, the domains of the computable partial functions constitute all recursively enumerable sets. 
So we will denote $W_{f_n}=domain({f_n}$) and thusly we can list off all recursively enumerable sets as follows:</p> <p>$W_{f_1},W_{f_2},W_{f_3},..., W_{f_n},...$</p> <p>Where $f_n(x) \downarrow$ means the computable partial function $f_n$ halts when given input $x$, one important observation regarding the above list of all recursively enumerable sets is as follows:</p> <p>$ x \in W_{f_n} \Leftrightarrow f_n(x) \downarrow $</p> <p>In otherwords, $x$ is in $W_{f_n} $ if and only if $x$ is in the domain of $f_n$ (and therefore $f_n(x)$ halts).</p> <p>We will now notice that for any recursively enumerable set $W_{f_n} \subseteq \overline{K} $ we can find an $x$ in $\overline{K}$ but not in $W_{f_n} $. In otherwords $W_{f_n} \neq \overline{K} $. This would imply that $\overline{K}$ is not recursively enumerable.</p> <p>Consider $W_{f_n} \subseteq \overline{K} $, the number $n$ will be the desired number in $\overline{K}$ but not in $W_{f_n}$. By assumption $W_{f_n} \subseteq \overline{K} $ and because $K$ is defined as $K= \{\forall x| f_x(x) \downarrow \}$ if we had $n \in K$ then we'd have $n \in W_{f_n} \subseteq \overline{K}$ or $n \in \overline{K}$ which is a contradiction. So $n \in \overline{K}$ must hold. $\overline{K}$ is defined as $\overline{K} = \{ \forall x | f_x(x) \uparrow \}$ and so if $n \in \overline{K}$ then this means $f_n$ doesn't halt when given input $n$, or in other words, $n \notin W_{f_n} $. Hence $W_{f_n} \neq \overline{K} $. Because $W_{f_n}$ was chosen arbitrarily, we can conclude that $\overline{K} $ is not equivalent with any recursively enumerable set.</p> <p>$\Box$</p>
1,856,071
<p>If I extended two lines $l_1$ and $l_2$, they would intersect at an angle of 90 degrees. How should I write in mathematical terms that there would be a 90 degree angle? I assume $l_1 \perp l_2$ is wrong if they do not intersect (when not extended).</p> <p>Is there a way to express a 90 degree angle between an extension of $l_1$ and $l_2$? How do I express an extension of $l_1$?</p>
Narasimham
95,860
<p>We can recognize by what changes occur in the equation of the line.The product of slopes is -1. </p> <p>If the lines are in slope intercept form</p> <p>$$ y= m x + a \,\rightarrow y= -x/m + const $$</p> <p>If in full intercept form </p> <p>$$ \frac{x}{a}+ \frac{y}{b} =1 \rightarrow \frac{x}{b} - \frac{y}{a} = const $$</p> <p>If in polar form</p> <p>$$ x \cos \alpha + y \sin \alpha = p \, \rightarrow -x \sin \alpha + y \cos \alpha = q $$</p> <p>etc.</p>
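Each pair of forms can be checked by confirming that the slope product is $-1$ (or, in the polar case, that the two normal vectors are orthogonal) — a quick numeric sketch:

```python
import math

def slope_product_intercept_form(a, b):
    """x/a + y/b = 1 has slope -b/a; x/b - y/a = const has slope a/b."""
    return (-b / a) * (a / b)

def normals_dot_polar(alpha):
    """Dot product of the normals of x cos(a) + y sin(a) = p
    and -x sin(a) + y cos(a) = q; zero means perpendicular lines."""
    n1 = (math.cos(alpha), math.sin(alpha))
    n2 = (-math.sin(alpha), math.cos(alpha))
    return n1[0] * n2[0] + n1[1] * n2[1]
```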
7,725
<p>Basically the textbooks in my country are awful, so I searched on the web for a precalculus book and found this one: <a href="http://www.stitz-zeager.com/szprecalculus07042013.pdf">http://www.stitz-zeager.com/szprecalculus07042013.pdf</a></p> <p>However, it does not cover convergence, limits, etc., and those topics were only briefly mentioned in my old textbooks. So what I am asking is: are these topics a prerequisite for calculus, or are they a part of the subject?</p>
schremmer
4,815
<p>OK, here is a directly relevant answer: <strong>asymptotic expansions</strong>, which are <em>finite</em> expansions by definition, are used to approximate $f(x_{0}+h)$ and can be used with divergent series (That is what Poincaré invented them for) as well as convergent series. </p> <p>The second part of my answer is that, at least in calculus, but in fact in precalculus, polynomial expansions (certainly finite and in fact, rather short) should be used to describe the qualitative behavior near $x_{0}$ and $\infty$ of the usual functions. </p> <p>For a simple example, consider the function $f(x)=\frac{x^{3}-8}{x^{2}-5x+6}$.</p> <ol> <li>Near $\infty$, we get by long division in <em>descending</em> powers that $f(x) = x +5 + 19x^{-1} + o[x^{-1}]$ i.e. an oblique asymptote.</li> <li>Near $+2$, one of two possible poles, we get by long division in <em>ascending</em> powers that $f(+2+h) = \frac{+12h +6h^{2} +h^{3}}{-h+h^{2}} = -12 -18h -19h^{2} + o[h^{2}]$ i.e. that $+2$ is regular.</li> <li>Near $+3$, the other possible pole, we get by short division that $f(+3+h) = \frac{+19 +27h +9h^{2} +h^{3}}{h+h^{2}} = \frac{+19+ o[1]}{h+ o[h]}=+19h^{-1} + o[h^{-1}]$ i.e. that $+3$ is indeed a pole.</li> <li>A smooth interpolation of the local graphs near $\infty$ and $+3$---the local graph near $+2$ provides only confirmation---gives the essential global graph which in turn shows that $f$ has at least one maximum and one minimum.</li> </ol> <p><img src="https://i.stack.imgur.com/fk95z.jpg" alt="Essential graph"></p> <p>In precalculus, rather than little ohs, I just use [...] to stand for "<em>something too small to matter here.</em>"</p> <p>See, for instance, my implementation of the above: <a href="http://freemathtexts.org/Standalones/RAF/Contents.php" rel="nofollow noreferrer">Reasonable Algebraic Functions</a></p>
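The three expansions are easy to sanity-check numerically for this particular $f$ (an illustrative sketch, not part of the text):

```python
def f(x):
    return (x ** 3 - 8) / (x ** 2 - 5 * x + 6)

def asymptote_gap(x):
    """Near infinity, f(x) - (x + 5 + 19/x) should vanish."""
    return f(x) - (x + 5 + 19.0 / x)

def pole_strength(h):
    """Near x = 3, h * f(3 + h) should approach 19."""
    return h * f(3 + h)

def regular_value(h):
    """Near x = 2, f(2 + h) should approach -12."""
    return f(2 + h)
```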
2,314,369
<p>My objective is to represent a unit square using a single matrix.<br>The four corners of the unit square are represented by column vectors as follows,</p> <p>$\left[\begin{matrix} 0 \\ 0 \end{matrix}\right],\left[\begin{matrix} 1 \\ 0 \end{matrix}\right],\left[\begin{matrix} 1 \\ 1 \end{matrix}\right],\left[\begin{matrix} 0 \\ 1 \end{matrix}\right]$</p> <p>When we write these points in a single matrix, it becomes a $2 \times 4$ matrix, and we cannot multiply it with the standard matrix, which is $2 \times 2$; but when written as $\left[\begin{matrix} 0&amp;0 \\ 1&amp;0\\ 1&amp;1\\0&amp;1 \end{matrix}\right] $, it becomes a $4 \times 2$ matrix which we can multiply with the standard $2 \times 2$ matrix, producing the desired result. </p> <p>I find it counter-intuitive, as points are represented by column vectors.</p> <p><strong><em>Question:</em></strong> Is it valid to say that by writing a $4 \times 2$ matrix I am representing a square by listing its four corners?</p>
Peter Melech
264,821
<p>$\begin{pmatrix}1&amp;0\\0&amp;1\end{pmatrix}$ might be understood to represent</p> <p>the set $\{x\in\mathbb{R}^2:x=t\begin{pmatrix}1\\0\end{pmatrix}+s\begin{pmatrix}0\\1\end{pmatrix},0\leq t,s\leq 1\}$, the square, and more generally</p> <p>$\begin{pmatrix}a_{11} &amp; a_{12}\\a_{21}&amp;a_{22}\end{pmatrix}$ might be understood to represent the parallelogram</p> <p>$\{x\in\mathbb{R}^2:x=t\begin{pmatrix}a_{11}\\a_{21}\end{pmatrix}+s\begin{pmatrix}a_{12}\\a_{22}\end{pmatrix},\ 0\leq t,s\leq 1\}$.</p>
2,888,464
<p>Let $S$ be the set of polynomials $f(x)$ with integer coefficients satisfying </p> <p>$f(x) \equiv 1$ mod $(x-1)$</p> <p>$f(x) \equiv 0$ mod $(x-3)$</p> <p>Which of the following statements are true?</p> <p>a) $S$ is empty .</p> <p>b) $S$ is a singleton.</p> <p>c)$S$ is a finite non-empty set.</p> <p>d) $S$ is countably infinite.</p> <p>My Try: I took $x =5$ then $f(5) \equiv 1$ mod $4$ and $f(5) \equiv 0$ mod $2$ . Which is impossible so $S$ is empty.</p> <p>Am I correct? Is there any formal way to solve this?</p>
Mostafa Ayaz
518,023
<p>Suppose we have a general polynomial of the form$$f(x)=a_0+a_1x+\cdots +a_dx^d.$$We know that for some polynomials $a(x)$ and $b(x)$ $$f(x)=1+(x-1)a(x)=b(x)(x-3),$$therefore $$f(1)=1\\f(3)=0,$$ which leads to $$(1)\quad a_0+a_1+\cdots +a_d=1\\(2)\quad a_0+3a_1+\cdots +3^da_d=0.$$Subtracting $(1)$ from $(2)$ we obtain $$2a_1+8a_2+\cdots +(3^d-1)a_d=-1,$$which is impossible, since the LHS is even and the RHS is odd, so <strong><em>such a polynomial doesn't exist</em></strong>.</p>
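The parity obstruction translates into a quick computational check: for any polynomial with integer coefficients, $f(3)-f(1)$ is even, so $f(1)=1$ and $f(3)=0$ can never hold simultaneously. A sketch in Python:

```python
import random

def poly_eval(coeffs, x):
    """Evaluate a_0 + a_1 x + ... + a_d x^d."""
    return sum(a * x ** k for k, a in enumerate(coeffs))

def in_S(coeffs):
    """True iff f(1) = 1 and f(3) = 0 (the conditions defining S)."""
    return poly_eval(coeffs, 1) == 1 and poly_eval(coeffs, 3) == 0

random.seed(0)
samples = [[random.randint(-9, 9) for _ in range(6)] for _ in range(500)]
```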
2,888,464
<p>Let $S$ be the set of polynomials $f(x)$ with integer coefficients satisfying </p> <p>$f(x) \equiv 1$ mod $(x-1)$</p> <p>$f(x) \equiv 0$ mod $(x-3)$</p> <p>Which of the following statements are true?</p> <p>a) $S$ is empty .</p> <p>b) $S$ is a singleton.</p> <p>c)$S$ is a finite non-empty set.</p> <p>d) $S$ is countably infinite.</p> <p>My Try: I took $x =5$ then $f(5) \equiv 1$ mod $4$ and $f(5) \equiv 0$ mod $2$ . Which is impossible so $S$ is empty.</p> <p>Am I correct? Is there any formal way to solve this?</p>
farruhota
425,072
<p>Your solution is short and correct. Indeed: $$\begin{align}\begin{cases}f(x)\equiv 1\bmod{(x-1)}\\ f(x)\equiv 0\bmod{(x-3)}\end{cases} &amp;\iff \begin{cases}f(x)=(x-1)a(x)+1\\ f(x)=(x-3)b(x) \end{cases} \Rightarrow \\ \begin{cases}f(5)\equiv 1\bmod{4}\\f(5)\equiv 0\bmod{2}\end{cases} &amp;\iff\begin{cases} f(5)=4a(5)+1\\ f(5)=2b(5) \end{cases}\\ 0\equiv 1\bmod 2&amp;\iff 0=2(2a(5)-b(5))+1 \Rightarrow \\ \emptyset &amp;\iff \emptyset \end{align}$$</p>
2,572,564
<p>So I have a straight line, in the classic <span class="math-container">$y = mx + b$</span>, and I'm just trying to translate the formula for the line a certain distance along its normal.</p> <p>For example, with <a href="https://i.stack.imgur.com/r8vct.jpg" rel="nofollow noreferrer">this graph</a>, how would I translate the red line to the blue one if (for example) <span class="math-container">$x$</span> was <span class="math-container">$4$</span>?</p>
lab bhattacharjee
33,337
<p>Hint:</p> <p>Use the basic formulae: $$\lim_{h\to0}\dfrac{\ln(1+h)}h=1$$ $$\lim_{u\to0}\dfrac{\sin u}u=1$$</p>
1,368,447
<p><img src="https://i.stack.imgur.com/TL80w.jpg" alt="enter image description here"></p> <p>So I tried the good old Calculus 1 approach and turned this into an optimization problem. The equations got REALLY hairy, but it was okay since this was the graphing calculator section of the exam. I called the longer part of the horizontal diagonal $x_2$, the shorter part of the horizontal diagonal $x_1$, and half of the vertical diagonal $v$. </p> <p>After getting $x_1$ in terms of $x_2$ and some tedious algebra, I got that $$2y=\sqrt{4 - \sqrt{x_2^2 - 21}} + \sqrt{25 - x_2^2}.$$ I then multiplied both sides of the equation by the total length of the horizontal diagonal, and graphed the right-hand side of the equation on my calculator. The logic was to find out where $(x_1 + x_2)2y$ would reach a maximum point, since the product of the diagonals of a kite equal twice the area of the kite. </p> <p>After graphing that disaster I got a somewhat reasonable answer. I got the horizontal diagonal to be equal to roughly $4.5829$, and the vertical diagonal to be equal to $4$. However, I don't have an answer key, so I don't know if I am correct. Any feedback would be appreciated!</p>
DeepSea
101,504
<p>Let $a, b$ be the lengths of the two parts of the longer diagonal, and $c$ half the shorter diagonal, so that $a^2+c^2=2^2$ and $b^2+c^2=5^2$ by the Pythagorean theorem applied to the sides of lengths $2$ and $5$. Then: $S = ac + bc \leq \dfrac{a^2+c^2}{2}+\dfrac{b^2+c^2}{2}=\dfrac{2^2}{2}+\dfrac{5^2}{2}=\dfrac{29}{2}$</p>
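A brute-force scan over the constraints $a^2+c^2=4$, $b^2+c^2=25$ (a sketch assuming sides $2$ and $5$, as in the figure) confirms that every admissible kite satisfies the bound $\frac{29}{2}$; note the bound itself is not attained, since equality in AM-GM would require $a=b=c$:

```python
import math

def kite_area(c):
    """S = (a + b) c with a = sqrt(4 - c^2), b = sqrt(25 - c^2)."""
    a = math.sqrt(max(4.0 - c * c, 0.0))
    b = math.sqrt(25.0 - c * c)
    return (a + b) * c

# scan c over (0, 2] on a fine grid
best = max(kite_area(k / 100000.0) for k in range(1, 200001))
```

The scan suggests the true maximum area is $10$, attained when $c^2 = ab$ (the right kite).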
2,331,255
<p>Let $K$ be a set of all Killing vector fields on $\mathbf R^n$ (with the Euclidean metric $\bar g$) which vanish at the origin.</p> <p>(A vector field $V$ on a Riemannian manifold $(M, g)$ is said to be a <strong><em>Killing vector field</em></strong> if the flow of $V$ acts by isometries of $M$. This is equivalent to saying that $\mathcal L_Vg=0$).</p> <p>If $V\in K$, then by using $\mathcal L_V\bar g=0$, we get that the matrix $[\partial V^i/\partial x^j]$ is anti-symmetric, where $V^i$ are the components of $V$ in the standard coordinates.</p> <p>Define a map $T:K\to \mathfrak o(n)$ as</p> <p>$$T(V)= \left[\frac{\partial V^i}{\partial x^j}(0)\right]$$ where $\mathfrak o(n)$ is the Lie algebra of $O(n)$, which is "same" as the space of $n\times n$ real anti-symmetric matrices.</p> <blockquote> <p><strong>Problem.</strong> To show that $T$ is injective.</p> </blockquote> <p>I am quite lost here.</p>
Willie Wong
1,543
<p>Here is a slightly simpler, non-geometric, proof:</p> <p>Consider the coordinate derivatives $$ a_{ijk} = \frac{\partial}{\partial x^i} \frac{\partial}{\partial x^j} V^k $$ from calculus we know that this expression is symmetric in $i$ and $j$. Killing's equation implies that this expression is antisymmetric in $j$ and $k$. </p> <p>So</p> <p>$$ -a_{ikj} = a_{ijk} = a_{jik} = -a_{jki} = -a_{kji} = a_{kij} = a_{ikj} $$</p> <p>where each of the equals signs comes from swapping the first two indices (which is symmetric) or the last two indices (which is antisymmetric). Note that this says $-a_{ikj} = a_{ikj} \implies a_{ikj} = 0$. </p> <p>This shows that $V$ is necessarily linear in $\vec{x}$. Since $V(0) = 0$ it is uniquely determined by its derivative $\partial_j V^k$ at one point. </p>
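<p>As a numerical illustration of the index-shuffling argument (my own addendum, not part of the original answer): the chain of equalities says that an array symmetric in its first two indices and antisymmetric in its last two must vanish, i.e. the two symmetry subspaces intersect only in <span class="math-container">$0$</span>. Von Neumann's alternating projections onto the two subspaces therefore drive any starting array to zero:</p>

```python
import random

n = 3
# a random 3-index array a[i][j][k]
a = [[[random.uniform(-1, 1) for _ in range(n)]
      for _ in range(n)] for _ in range(n)]

def sym12(t):
    # orthogonal projection onto arrays symmetric in the first two indices
    return [[[(t[i][j][k] + t[j][i][k]) / 2 for k in range(n)]
             for j in range(n)] for i in range(n)]

def antisym23(t):
    # orthogonal projection onto arrays antisymmetric in the last two indices
    return [[[(t[i][j][k] - t[i][k][j]) / 2 for k in range(n)]
             for j in range(n)] for i in range(n)]

for _ in range(1000):
    a = antisym23(sym12(a))

residual = max(abs(a[i][j][k])
               for i in range(n) for j in range(n) for k in range(n))
assert residual < 1e-8  # the only array with both symmetries is zero
```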
3,340,723
<p>How can we find the total number of ways in which we can divide <span class="math-container">$n$</span> elements into two subsets such that none of them are empty and the union of both sets should be equal to the whole set?</p> <p>Eg. If <span class="math-container">$S=\{1,2,3\}$</span>, the answer can be <span class="math-container">$A=\{1,2\}$</span>, <span class="math-container">$B=\{3\}$</span> or <span class="math-container">$A = \{1,3\}$</span> and <span class="math-container">$B=\{2\}$</span> or <span class="math-container">$A=\{2,3\}$</span> and <span class="math-container">$B=\{1\}$</span>.</p>
Wuestenfux
417,848
<p>Well, the number you are looking for is the Stirling number of the second kind.</p> <p>Let <span class="math-container">$n,k\geq 1$</span>. Define <span class="math-container">$S(n,k)$</span> as the number of partitions of an <span class="math-container">$n$</span>-element set into <span class="math-container">$k$</span> nonempty blocks.</p> <p>These numbers can be computed recursively.</p> <p><span class="math-container">$S(n,k)=0$</span> if <span class="math-container">$k&gt;n$</span>.</p> <p><span class="math-container">$S(n,1) = 1$</span> and <span class="math-container">$S(n,n)=1$</span>.</p> <p><span class="math-container">$S(n,k) = S(n − 1, k − 1) + k · S(n − 1, k)$</span> if <span class="math-container">$1&lt;k&lt;n$</span>.</p> <p>Your question asks for <span class="math-container">$S(n,2)$</span>, which satisfies <span class="math-container">$S(n,2) = S(n-1,1) + 2\cdot S(n-1,2)$</span>. By induction, <span class="math-container">$S(n,2)=2^{n-1}-1$</span>.</p>
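<p>A quick sketch in Python (names are mine) of the recursion, checking the example with <span class="math-container">$S=\{1,2,3\}$</span> and the closed form <span class="math-container">$S(n,2)=2^{n-1}-1$</span>:</p>

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(n, k):
    # number of partitions of an n-element set into k nonempty blocks
    if k > n:
        return 0
    if k == n or k == 1:
        return 1
    return stirling2(n - 1, k - 1) + k * stirling2(n - 1, k)

# the three partitions of {1,2,3} listed in the question
assert stirling2(3, 2) == 3
# closed form for k = 2
assert all(stirling2(n, 2) == 2**(n - 1) - 1 for n in range(2, 25))
```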
2,450,838
<p>So I am hoping to show that $f(x,y)=\dfrac{x}{y}$ is continuous when $x&gt;0$ and $y&gt;0$.</p> <p>I am not sure how to approach this problem. My idea was that taking $$\dfrac{\partial f}{\partial x}=\dfrac{1}{y}$$ and $$\dfrac{\partial f}{\partial y}=\dfrac{-x}{y^2}$$</p> <p>My logic was that since both partials exists and are defined if $x&gt;0$ and $y&gt;0$, but I then discovered that existence of partial derivatives does <strong>not</strong> imply continuity. I am curious if there are any clever ways to show continuity of this function? Note I am not concerned with the case that $x=0$ or $y=0$</p>
Community
-1
<p>First show by <em>definition</em> that $\pi_1(x,y)=x$ and $\pi_2(x,y)=y$ are both continuous on ${\bf R}^2$ and in particular on $S:=\{(x,y)\in{\bf R}^2\mid x&gt;0,y&gt;0\}$. </p> <p>Now note that $f$ is the product of two continuous functions: $$ f(x,y)=\pi_1(x,y)\cdot g(\pi_2(x,y)) $$ where $g(z):=\frac{1}{z}$ is a function of a single variable. </p> <hr> <p>[Added:] One needs to show that the composition of two continuous functions is continuous, which in particular gives the continuity of the function $g\circ\pi_2$. Also, one should be able to show that the product of two continuous functions is continuous. </p>
2,918,453
<p>After reading <a href="https://math.stackexchange.com/questions/208317/show-sum-n-0-infty-frac1a2n2-frac1a-pi-coth-a-pi2a2/208407#208407">Show $\sum_{n=0}^\infty\frac{1}{a^2+n^2}=\frac{1+a\pi\coth a\pi}{2a^2}$</a> and noodling around on Wolfram Alpha, I discovered </p> <p>$$ \begin{align} &amp;\coth(x \pi)=\frac{x}{\pi}\sum_{n=-\infty}^\infty\frac{1}{x^2+n^2} &amp; \cot(x \pi)= \frac{x}{\pi}\sum_{n=-\infty}^\infty\frac{1}{x^2-n^2} \\ &amp; \text{csch}(x \pi )=\frac{x}{\pi} \sum_{n=-\infty}^\infty{\frac{(-1)^n }{x^2+n^2}} &amp;\csc(x \pi)= \frac{x}{\pi} \sum_{n=-\infty}^\infty{\frac{(-1)^n }{x^2-n^2}} \\ &amp; \tanh(x \pi)=\frac{4x}{\pi}\sum_{n=-\infty}^\infty{\frac{1}{(2n+1)^2+4x^2}} &amp;\tan(x \pi) = \frac{4x}{\pi}\sum_{n=-\infty}^\infty{\frac{1}{(2n+1)^2-4x^2}} \end{align} $$</p> <p>I suspect (but don't know for sure) that these can all be justified by milking the techniques from the link above. </p> <p><strong>Go Go Gadget Calculus</strong></p> <p>We should be able to derive a few more identities. For example, </p> <p>$$ \Big(\cot(x \pi) \Big)'= \Big(\frac{x}{\pi} \sum_{n=-\infty}^\infty{\frac{1}{x^2-n^2}} \Big )'$$</p> <p>$$ \Big(\pi \csc(\pi x)\Big)^2= \sum_{n=-\infty}^\infty\frac{x^2+n^2}{(x^2-n^2)^2}$$</p> <p>And after some manipulations we can find </p> <p>$$ \frac{\pi^2\csc^2(\pi/x)}{x^2}= \sum_{n=-\infty}^\infty\frac{1+(xn)^2}{(1-(xn)^2)^2} $$ Lovely. We can then write $$\frac{\pi^2}{9}=\sum_{n=-\infty}^\infty {\frac{1+(6n)^2}{(1-(6n)^2)^2}}$$ I have a feeling at this point that this must be a well-studied subject and I wonder where I can find some more identities of this class. Does anyone have a link/resource where I can read more on these? I don't really need their derivations if they are just the techniques of the link above + elementary calculus techniques. I am just looking for a well-organized list that I can refer to.</p>
Mason
552,184
<p>There are some nice things to see here: </p> <p>$$\tanh(x \pi)=\frac{4x}{\pi}\sum_{n=-\infty}^\infty{\frac{1}{(2n+1)^2+4x^2}}$$</p> <p>$$\Big(\tanh(x \pi) \Big)'= \Big( \frac{1}{\pi}\sum_{n=-\infty}^\infty{\frac{4x}{(2n+1)^2+4x^2}} \Big)'$$</p> <p>$$ \pi\operatorname{sech}^2(\pi x)=\frac{1}{\pi}\sum_{n=-\infty}^{\infty}\frac{4\left(4n^2+4n-4x^2+1\right)}{\left(4n^2+4n+4x^2+1\right)^2}$$</p> <p>Which is nice: taking $x=0$ we have </p> <p>$$\pi^2=\sum_{n=-\infty}^{\infty}\frac{4}{4n^2+4n+1}=8\Big(1+\sum_{n=1}^\infty{\frac{1}{4n^2+4n+1}}\Big)$$</p> <p>Interesting. I wonder if we can use this to demonstrate Takebe Kenko's somewhat similar-looking identity (this can be found, with a tiny typo in the denominator, on the last page of <a href="https://www.jstor.org/stable/2690896?origin=crossref&amp;seq=16#metadata_info_tab_contents" rel="nofollow noreferrer">this</a>):</p> <p>$$\pi^2=4\Big(1+\sum_{n=1}^{\infty} \frac{2^{2n+1}(n!)^2}{(2n+2) !} \Big) $$</p>
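<p>A numerical sanity check of the identity <span class="math-container">$\pi^2=8\big(1+\sum_{n\ge1}\frac{1}{4n^2+4n+1}\big)$</span> (my own sketch; the tail of the truncated series is <span class="math-container">$O(1/N)$</span>, so only modest accuracy is expected):</p>

```python
import math

N = 100_000
s = 8 * (1 + sum(1 / (4*n*n + 4*n + 1) for n in range(1, N + 1)))
# the tail of the series is about 2/N, so expect roughly 1e-5 accuracy here
assert abs(s - math.pi**2) < 1e-4
```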
4,386,333
<p>This is Exercise 1.6(iii) from Intro. to K-Theory for <span class="math-container">$C^*$</span>-Algebras by Rørdam et al.</p> <p>If <span class="math-container">$A$</span> is a unital <span class="math-container">$C^*$</span>-algebra, and <span class="math-container">$a \in A$</span> is invertible, then I want to show that <span class="math-container">$a^{-1} \in C^*(a)$</span>. What I know is:</p> <ul> <li><span class="math-container">$a$</span> is invertible if and only if <span class="math-container">$aa^*$</span> and <span class="math-container">$a^*a$</span> are invertible, with <span class="math-container">$a^{-1} = a^*(aa^*)^{-1} = (a^*a)^{-1}a^*$</span></li> <li>If <span class="math-container">$b \in A$</span> is invertible and normal, then there is some <span class="math-container">$f \in C(\sigma(b))$</span> such that <span class="math-container">$f(b) = b^{-1}$</span> (proven using the continuous functional calculus). Another question <a href="https://math.stackexchange.com/questions/1636173/c-algebra-generated-by-a">here</a> shows that <span class="math-container">$b^{-1}, 1 \in C^*(b) = C^*(b, 1)$</span>.</li> <li><span class="math-container">$C^*(a)$</span> is the closed linear span of <span class="math-container">$a^ma^{*n}$</span> and <span class="math-container">$a^{*s}a^t$</span>, where <span class="math-container">$m,n,s,t \in \mathbb{N}$</span>.</li> </ul> <p>My issue here is that since <span class="math-container">$a$</span> is not a normal element, then I don't know that <span class="math-container">$C^*(a, 1) = C^*(a)$</span> (which I don't think is even true), nor that there is a *-isomorphism between <span class="math-container">$C(\sigma(a))$</span> and <span class="math-container">$C^*(a)$</span>. I suspect that there is something simple that I am not seeing, but I am not sure what it is.</p>
Martin Argerami
22,857
<p>You can use the Polar Decomposition. In principle you have to do this in a von Neumann algebra, but you always have this if you represent <span class="math-container">$A\subset B(H)$</span>.</p> <p>You have <span class="math-container">$a=v|a|$</span>, with <span class="math-container">$|a|=(a^*a)^{1/2}\in C^*(a)$</span>. You also have that <span class="math-container">$|a|$</span> is invertible and normal, so <span class="math-container">$|a|^{-1}\in C^*(a)$</span>. Then <span class="math-container">$v=a|a|^{-1}\in C^*(a)$</span>. This also shows that <span class="math-container">$v$</span> is invertible. Now, since <span class="math-container">$v$</span> is a partial isometry, being invertible, it is a unitary (since <span class="math-container">$v^*v$</span> and <span class="math-container">$vv^*$</span> are invertible projections). Then <span class="math-container">$v^{-1}=v^*\in C^*(a)$</span>. And this gives <span class="math-container">$$ a^{-1}=|a|^{-1}v^*\in C^*(a) . $$</span></p>
2,675,758
<p>For example:</p> <p>There are two green marbles and two red marbles in a bag. Two marbles are chosen at random, what is the probability that the two marbles which have been chosen are of the same colour?</p> <p>Ordered + distinct: <br> Marbles = $G1, G2, R1, R2$ <br> Event space = $\{|G1~G2|, |G2~G1|, |R1~R2|, |R2~R1|\} = 4$ ways <br> Sample space = $^4P_2 = 12$ ways <br> Probability = $\dfrac{4}{12} = \dfrac{1}{3}$ <br></p> <p>Unordered + distinct: <br> Marbles = $G1, G2, R1, R2$ <br> Event space = $\{|G1~G2|, |R1~R2|\} = 2$ ways <br> Sample space = $^4C_2 = 6$ ways <br> Probability = $\dfrac{2}{6} = \dfrac{1}{3}$ <br></p> <p>Ordered + identical: <br> Marbles = $G, G, R, R$ <br> Event space = $\{|G~G|, |R~R|\} = 2$ ways <br> Sample space = $\{|G~G|, |G~R|, |R~G|, |R~R|\} = 4$ ways <br> Probability = $\dfrac{2}{4} = \dfrac{1}{2}$ <br></p> <p>Unordered + identical: <br> Marbles = $G, G, R, R$ <br> Event space = $\{|G~G|, |R~R|\} = 2$ ways <br> Sample space = $\{||G~G|, |G~R|, |R~R||\} = 3$ ways <br> Probability = $\dfrac{2}{3}$</p> <p>Which approach is the correct one? And more importantly, <strong>why</strong> are the other approaches wrong?</p>
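<p>An exhaustive check in Python (a sketch of mine; the marble labels are arbitrary) confirms <span class="math-container">$\frac{1}{3}$</span> and shows why the "identical" sample spaces fail: their outcomes are not equally likely.</p>

```python
from fractions import Fraction
from itertools import combinations, permutations
from collections import Counter

marbles = ["G1", "G2", "R1", "R2"]

# unordered draws of two distinguishable marbles: 6 equally likely pairs
pairs = list(combinations(marbles, 2))
same = [p for p in pairs if p[0][0] == p[1][0]]
assert Fraction(len(same), len(pairs)) == Fraction(1, 3)

# why "ordered + identical" fails: among the 12 equally likely ordered
# draws, the colour patterns GG, GR, RG, RR are not equally likely
ordered = Counter(a[0] + b[0] for a, b in permutations(marbles, 2))
assert ordered == Counter({"GR": 4, "RG": 4, "GG": 2, "RR": 2})
```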
Mathew Mahindaratne
525,941
<p>I assume it is the base-$10$ logarithm we are talking about here. If so, $\log_{10}(x+2)^2 = 2$ means $(x+2)^2 = 10^2$. Similarly, $\ln(x+2)^2=\log_{e}(x+2)^2 = 2$ means $(x+2)^2 = e^2$. The rest is just algebra.</p>
115,630
<p>Let $V$ and $W$ be two algebraic structures, $v\in V$, $w\in W$ be two arbitrary elements.</p> <p>Then, what is the geometric intuition of $v\otimes w$, and more complex $V\otimes W$ ? Please explain for me in the most concrete way (for example, $v, w$ are two vectors in 2 dimensional vector spaces $V, W$)</p> <p>Thanks</p>
Tryst with Freedom
688,539
<p>For a totally different angle on the subject, we can get a taste of what the tensor product means by considering Dehn's invariant. I'll try to explain it briefly: if we have a polyhedron with edge lengths <span class="math-container">$l_i$</span> and <a href="https://en.wikipedia.org/wiki/Dihedral_angle" rel="nofollow noreferrer">dihedral angles</a> <span class="math-container">$\theta_i$</span>, we can define the Dehn invariant as the element</p> <p><span class="math-container">$$ \sum_i l_i \otimes \theta_i \;\in\; R \otimes_{\mathbb{Z} } ( R / 2 \pi \mathbb{Z})$$</span></p> <p>Now, the point is: if we have a polyhedron, then upon cutting it into pieces and reassembling them, the Dehn invariant is unchanged!</p> <p>For a beautiful exposition of this topic, see this <a href="https://youtu.be/eYfpSAxGakI" rel="nofollow noreferrer">Numberphile video</a>.</p>
2,600,062
<p>I'm stuck at doing this problem. It's actually a problem related to circuit analysis. I'm given this equation: $$\frac{(R_3+\frac{1}{\omega C}i)*(\omega Li)}{(R_3+\frac{1}{\omega C}i)+(\omega Li)}$$ Now to rationalize it I will multiply by $$\frac{R_3+\frac{1}{\omega C}i-\omega Li}{R_3+\frac{1}{\omega C}i-\omega Li}$$ $$\frac{(R_3+\frac{1}{\omega C}i)*(\omega Li)}{R_3+\frac{1}{\omega C}i+\omega Li}*\frac{R_3+\frac{1}{\omega C}i-\omega Li}{R_3+\frac{1}{\omega C}i-\omega Li}$$ After I do the actions I get: $$\frac{(R_3+\frac{1}{\omega C}i)*(\omega Li)*(R_3+\frac{1}{\omega C}i-\omega Li)}{(R_3+\frac{1}{\omega C}i)^2-(\omega Li)^2}$$ The problem is that I have to get rid of the imaginary part from the denominator. Any ideas how I can accomplish this?</p>
Jan Eerland
226,665
<p>Well, as an electrical engineer you'll write (where $\text{j}^2=-1$):</p> <p>$$\underline{\text{Z}}_{\space\text{in}}=\frac{\left(\underline{\text{Z}}_{\space\text{R}}+\underline{\text{Z}}_{\space\text{C}}\right)\cdot\underline{\text{Z}}_{\space\text{L}}}{\underline{\text{Z}}_{\space\text{R}}+\underline{\text{Z}}_{\space\text{C}}+\underline{\text{Z}}_{\space\text{L}}}=\frac{\left(\text{R}+\frac{1}{\text{j}\omega\text{C}}\right)\cdot\text{j}\omega\text{L}}{\text{R}+\frac{1}{\text{j}\omega\text{C}}+\text{j}\omega\text{L}}=\frac{\left(\text{R}-\frac{\text{j}}{\omega\text{C}}\right)\cdot\text{j}\omega\text{L}}{\text{R}-\frac{\text{j}}{\omega\text{C}}+\text{j}\omega\text{L}}=$$ $$\frac{\text{R}\cdot\text{j}\omega\text{L}-\frac{\text{j}}{\omega\text{C}}\cdot\text{j}\omega\text{L}}{\text{R}-\frac{\text{j}}{\omega\text{C}}+\text{j}\omega\text{L}}=\frac{\frac{\omega\text{L}}{\omega\text{C}}+\text{R}\omega\text{L}\cdot\text{j}}{\text{R}+\left(\omega\text{L}-\frac{1}{\omega\text{C}}\right)\cdot\text{j}}=\frac{\frac{\text{L}}{\text{C}}+\text{R}\omega\text{L}\cdot\text{j}}{\text{R}+\left(\omega\text{L}-\frac{1}{\omega\text{C}}\right)\cdot\text{j}}=$$ $$\frac{\left(\frac{\text{L}}{\text{C}}+\text{R}\omega\text{L}\cdot\text{j}\right)\cdot\overline{\text{R}+\left(\omega\text{L}-\frac{1}{\omega\text{C}}\right)\cdot\text{j}}}{\left(\text{R}+\left(\omega\text{L}-\frac{1}{\omega\text{C}}\right)\cdot\text{j}\right)\cdot\overline{\text{R}+\left(\omega\text{L}-\frac{1}{\omega\text{C}}\right)\cdot\text{j}}}=$$ $$\frac{\left(\frac{\text{L}}{\text{C}}+\text{R}\omega\text{L}\cdot\text{j}\right)\cdot\left(\text{R}-\left(\omega\text{L}-\frac{1}{\omega\text{C}}\right)\cdot\text{j}\right)}{\text{R}^2+\left(\omega\text{L}-\frac{1}{\omega\text{C}}\right)^2}=$$ 
$$\frac{\frac{\text{L}}{\text{C}}\cdot\text{R}-\frac{\text{L}}{\text{C}}\cdot\left(\omega\text{L}-\frac{1}{\omega\text{C}}\right)\cdot\text{j}+\text{R}\omega\text{L}\cdot\text{j}\cdot\text{R}-\text{R}\omega\text{L}\cdot\text{j}\cdot\left(\omega\text{L}-\frac{1}{\omega\text{C}}\right)\cdot\text{j}}{\text{R}^2+\left(\omega\text{L}-\frac{1}{\omega\text{C}}\right)^2}=$$ $$\frac{\frac{\text{L}}{\text{C}}\cdot\text{R}+\text{R}\omega\text{L}\cdot\left(\omega\text{L}-\frac{1}{\omega\text{C}}\right)+\left(\text{R}^2\omega\text{L}-\frac{\text{L}}{\text{C}}\cdot\left(\omega\text{L}-\frac{1}{\omega\text{C}}\right)\right)\cdot\text{j}}{\text{R}^2+\left(\omega\text{L}-\frac{1}{\omega\text{C}}\right)^2}\tag1$$</p> <p>Because you're analysing a parallel circuit of a series resistor and capacitor and parallel to the series resistor and capacitor a coil.</p>
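<p>A numerical cross-check of the final expression <span class="math-container">$(1)$</span> against direct complex arithmetic (my own sketch, writing <span class="math-container">$R$</span> for <span class="math-container">$R_3$</span>; the component values are arbitrary):</p>

```python
import math

# arbitrary sample component values for the check
R, L, C = 50.0, 1e-3, 1e-6
w = 2 * math.pi * 1e3

# direct complex-arithmetic evaluation of the parallel impedance
Z = (R + 1/(1j*w*C)) * (1j*w*L) / (R + 1/(1j*w*C) + 1j*w*L)

# real and imaginary parts read off from expression (1)
X = w*L - 1/(w*C)
D = R**2 + X**2
re_part = (L/C * R + R*w*L*X) / D
im_part = (R**2 * w*L - L/C * X) / D

assert abs(Z - complex(re_part, im_part)) < 1e-9 * abs(Z)
```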
3,081,918
<p>It was claimed <a href="https://math.stackexchange.com/a/2387782/446262">here</a> that the convergence of the series<span class="math-container">$$\sum_{n=2}^\infty \frac{\Lambda(n)-1}{n^{1/2}\log^3 n}\tag1$$</span>(where <span class="math-container">$\Lambda$</span> is the <a href="https://en.wikipedia.org/wiki/Von_Mangoldt_function" rel="noreferrer">Von Mangoldt function</a>) is equivalent to the Riemann hypothesis. Is this true? That post provided a link to the Wikipedia article about the Von Mangoldt function, which does <em>not</em> mention this. Also, <a href="https://aimath.org/WWN/rh/articles/html/95a/" rel="noreferrer">this page</a> about the Von Mangoldt function in the context of the Riemann hypothesis makes no mention to that.</p> <p>If it is true that the convergence of the series <span class="math-container">$(1)$</span> is equivalent to the Riemann hypothesis, then I would like to have a reference for that.</p>
reuns
276,986
<p>Claymath official <a href="http://www.claymath.org/sites/default/files/official_problem_description.pdf" rel="nofollow noreferrer">description</a> of the Riemann hypothesis claims the RH is true iff <span class="math-container">$\pi(x) = Li(x)+O(x^{1/2}\log x)$</span> so <span class="math-container">$\psi(x) = x+O(x^{1/2}\log^2 x)$</span> and I have been quite sloppy in that it implies the RH is true iff <span class="math-container">$$\sum_{n=2}^\infty \frac{\Lambda(n)-1}{n^{1/2}\log^{\color{red}{3+\epsilon}} n} &lt; \infty\tag{1}$$</span></p> <p>as with partial summation <span class="math-container">$$\sum_{n \le x} (\Lambda(n)-1) \frac{1}{n^{1/2}\log^{a} n}=\frac{\psi(x)-x}{x^{1/2}\log^a x}+\sum_{n \le x-1} (\psi(n)-n)O(\frac{1}{n^{3/2}\log^a n})\\ = O(\log^{2-a}(x))+\sum_{n \le x} O(\frac{1}{n\log^{a-2} n})$$</span></p> <hr> <p>The point is to show an effective <a href="https://faculty.math.illinois.edu/~hildebr/ant/main5.pdf" rel="nofollow noreferrer">explicit formula</a> (p.28) <span class="math-container">$$\psi(x) =\sum_{n \le x} \Lambda(n)= x - \sum_{|\Im(\rho)| \le T} \frac{x^{\rho}}{\rho}+O(\frac{x\log^2 x}{T})=x - \sum_{k\le K} 2\Re(\frac{x^{\rho_k}}{\rho_k})+O(\frac{x\log^2 x}{K/\log K})$$</span> where <span class="math-container">$K = N(T)$</span> and the density of zeros gives <span class="math-container">$K \sim C T \log T,T \sim c K/\log K$</span>,<span class="math-container">$\Im(\rho_k) \sim c k/\log k$</span>. </p> <p>In this form, under the RH, with <span class="math-container">$K = x^{1/2}$</span> it yields <span class="math-container">$$\psi(x) =x +O(x^{1/2}\log^{2+\delta})$$</span></p> <p>Plotting those things indicates the series may converge very slowly with <span class="math-container">$\epsilon = 0$</span> and it is quite certain (under the RH) it converges with <span class="math-container">$\epsilon = 2$</span>.</p>
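<p>The terms of the series are easy to compute, although any convergence is far too slow to observe numerically. A sketch of mine using trial division (fine for small <span class="math-container">$N$</span>):</p>

```python
from math import log, sqrt

def mangoldt(n):
    # Lambda(n) = log p if n is a power of a prime p, else 0 (trial division)
    if n < 2:
        return 0.0
    p = next((d for d in range(2, int(sqrt(n)) + 1) if n % d == 0), n)
    m = n
    while m % p == 0:
        m //= p
    return log(p) if m == 1 else 0.0

# psi(x) = sum of Lambda(n) for n <= x should track x
psi = sum(mangoldt(n) for n in range(2, 101))
assert abs(psi - 100) < 10

# a partial sum of the series in question; convergence, if true,
# is much too slow to see directly
s = sum((mangoldt(n) - 1) / (sqrt(n) * log(n)**3) for n in range(2, 10_001))
assert abs(s) < 10
```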
3,682,514
<blockquote> <p>The hyperbola is given with the following equation: <span class="math-container">$$3x^2+2xy-y^2+8x+10y+14=0$$</span> Find the asymptotes of this hyperbola. (<span class="math-container">$\textit{Answer: }$</span> <span class="math-container">$6x-2y+5=0$</span> and <span class="math-container">$2x+2y-1=0$</span>)</p> </blockquote> <p>In my book, it is said that if the hyperbola is given with the equation: <span class="math-container">$$Ax^2+2Bxy+Cy^2+2Dx+2Ey+F=0$$</span> then the direction vector <span class="math-container">$\{l,m\}$</span> of the asymptotes is found from the following equation: <span class="math-container">$$Al^2+2Blm+Cm^2=0$$</span> (Actually, I don't know the proof) Then to solve this, we let <span class="math-container">$k=\frac{l}{m}, \ \ (m \not =0)$</span> and solve the quadratic equation for <span class="math-container">$k$</span>: <span class="math-container">$$Ak^2+2Bk+C=0, \text{ in our case } 3k^2+2k-1 = 0$$</span> From here, I got <span class="math-container">$k=-1 \text{ or } k=\frac{1}{3}$</span> (which give us slopes of the two asymptotes). </p> <p>Hence we search for the asymptotes of the form <span class="math-container">$y=kx+b$</span> and restrict <span class="math-container">$b$</span> in such a way that the line does not intersect the hyperbola. Plugging this <span class="math-container">$y$</span> and <span class="math-container">$k=-1$</span> into the equation of the hyperbola I got: <span class="math-container">$$(4b-2)x-b^2+10b+14=0$$</span> so <span class="math-container">$b=\frac{1}{2}$</span> (as the equation should not have a solution). Then, <span class="math-container">$y=-x+\frac{1}{2}$</span> or <span class="math-container">$2x+2y-1=0$</span> as in the answer!</p> <p>However, I could not find the second one in this way...</p> <p>Then, I got stuck... 
</p> <p>It would be greatly appreciated if you could either help me understand why the asymptotes are found this way and complete this solution, or suggest another solution.</p>
Saket Gurjar
769,080
<p>Here is a view I have on hyperbola's asymptotes :</p> <p>The asymptotes and the hyperbola behave the same way as we go to a point <span class="math-container">$(x,y)$</span> at an infinite distance from the centre of hyperbola, along the asymptotes.</p> <p>So for a hyperbola: <span class="math-container">$H:ax^2+by^2+2hxy+2gx+2fy+c=0$</span>, the terms containing <span class="math-container">$x$</span> and <span class="math-container">$y$</span> will affect the behaviour at such a point. Since the two(equation of hyperbola and the pair of asymptotes) have same behaviour (and hence same value of power of point), the only thing that differs in the two equations is the constant term.</p> <p>So the equation of pair of asymptotes will be of the form: <span class="math-container">$A=ax^2+by^2+2hxy+2gx+2fy+d=0$</span></p> <p>Now you can find the equation of asymptotes by using the fact that the centre of hyperbola passes lies on the asymptotes.</p> <p>The centre can easily be found by solving: <span class="math-container">$\frac{\delta H}{\delta x}=0$</span> and <span class="math-container">$\frac{\delta H}{\delta y}=0$</span> .</p> <p>So this point (Say <span class="math-container">$(x_o,y_o)$</span>) will satisfy <span class="math-container">$A=0$</span></p> <p>Find <span class="math-container">$d$</span> by putting the point in the equation.</p> <p>For eg. 
in the hyperbola in your question:</p> <p>The centre is <span class="math-container">$(-9/4,11/4)$</span>.</p> <p>So if the equation of the asymptotes is: <span class="math-container">$3x^2+2xy-y^2+8x+10y+\lambda=0$</span></p> <p>To find <span class="math-container">$\lambda$</span> put the centre into the equation</p> <p><span class="math-container">$$\frac{243}{16}-\frac{99}{8} -\frac{121}{16}-18+\frac{55}{2}+\lambda=0$$</span></p> <p><span class="math-container">$$\lambda=-\frac{19}{4}$$</span></p> <p>So factorise the equation for the asymptotes (after multiplying through by <span class="math-container">$4$</span>) to get the asymptotes:</p> <p><span class="math-container">$$(6x-2y+19)(2x+2y-1)=0$$</span> </p>
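<p>An exact-arithmetic check of the final factorisation (my own sketch): the product <span class="math-container">$(6x-2y+19)(2x+2y-1)$</span> differs from <span class="math-container">$4\left(3x^2+2xy-y^2+8x+10y\right)$</span> only by a constant, so it has the same quadratic part as the hyperbola, and both factors vanish at the centre:</p>

```python
from fractions import Fraction as F

def hyper(x, y):
    # the hyperbola's terms without the constant 14
    return 3*x*x + 2*x*y - y*y + 8*x + 10*y

def pair(x, y):
    # the factored asymptote pair
    return (6*x - 2*y + 19) * (2*x + 2*y - 1)

# the product differs from 4*hyper by a constant only, so the pair has the
# same quadratic part (hence the same asymptotic directions)
consts = {pair(F(x), F(y)) - 4*hyper(F(x), F(y))
          for x in range(-3, 4) for y in range(-3, 4)}
assert consts == {-19}

# both factors vanish at the centre (-9/4, 11/4)
cx, cy = F(-9, 4), F(11, 4)
assert 6*cx - 2*cy + 19 == 0
assert 2*cx + 2*cy - 1 == 0
```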
3,888,805
<p>&quot;Six men and three women are standing in a supermarket queue. Three of the people in the queue are chosen to take part in a customer survey. How many different choices are possible if at least one woman must be included?&quot;</p> <p>I went about solving this question by considering 3 cases, first with a single woman and then with 2 and finally with 3.</p> <p><span class="math-container">$$ {3 \choose 1} \ \times \ {6 \choose 2} \ + \ {3 \choose 2} \ \times \ {6 \choose 1} \ + \ {3 \choose 3} $$</span></p> <p>Which simplifies to <span class="math-container">$ \ 45 \ + \ 18 \ + \ 1$</span> <strong>leading to <span class="math-container">$64$</span> different choices</strong>.</p> <p>This approach is in fact correct. However, another approach came to my mind as well.</p> <p>If at least 1 woman must be included in the group then we can simply choose 1 from the 3 women, and fill the remaining 2 slots from the remnant 8 'people'.</p> <p><span class="math-container">$$ {3 \choose 1} \ \times \ {8 \choose 2} $$</span></p> <p><strong>This, however, gives the answer <span class="math-container">$84$</span>.</strong> This answer is most certainly wrong but I am unable to explain why the method is incorrect. If someone could explain why this leads to the wrong answer that would be very nice.</p>
N. F. Taussig
173,070
<p>You are counting each selection with two women twice and each selection with three women three times, once for each way you could have designated one of the women who is selected as the woman you have selected and the remaining woman or women as additional people.</p> <p>Suppose you select two women, Abigail and Beatrice, and one man, Charles. You count this selection twice:</p> <p><span class="math-container">\begin{array}{l l} \text{woman} &amp; \text{additional people}\\ \hline \text{Abigail} &amp; \text{Beatrice, Charles}\\ \text{Beatrice} &amp; \text{Abigail, Charles} \end{array}</span></p> <p>Suppose you select three women, Abigail, Beatrice, and Charlotte. You count this selection three times:</p> <p><span class="math-container">\begin{array}{l l} \text{woman} &amp; \text{additional people}\\ \hline \text{Abigail} &amp; \text{Beatrice, Charlotte}\\ \text{Beatrice} &amp; \text{Abigail, Charlotte}\\ \text{Charlotte} &amp; \text{Abigail, Beatrice} \end{array}</span></p> <p>Notice that <span class="math-container">$$\binom{3}{1}\binom{6}{2} + \color{red}{\binom{2}{1}}\binom{3}{2}\binom{6}{1} + \color{red}{\binom{3}{1}}\binom{3}{3} = 84$$</span> where the factors in red are the number of ways you could have designated one of the women you have selected as the woman you have selected.</p>
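<p>A brute-force enumeration (my own sketch; the person labels are arbitrary) reproduces both the correct count of <span class="math-container">$64$</span> and the overcounted <span class="math-container">$84$</span>:</p>

```python
from itertools import combinations

men = [f"M{i}" for i in range(1, 7)]
women = ["W1", "W2", "W3"]
people = men + women

# direct count: all 3-person selections containing at least one woman
valid = [c for c in combinations(people, 3) if any(p in women for p in c)]
assert len(valid) == 64  # = C(9,3) - C(6,3) = 84 - 20

# the flawed count: designate one woman, then pick any 2 of the other 8;
# it lists every selection once per woman it contains
flawed = [frozenset((w,) + rest) for w in women
          for rest in combinations([p for p in people if p != w], 2)]
assert len(flawed) == 84       # with multiplicity
assert len(set(flawed)) == 64  # distinct selections
```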
3,556,490
<p>Can someone explain to me how to solve the equation <span class="math-container">$\frac{x-n}{x+n} = e^{-x}$</span>, where <span class="math-container">$n$</span> is a non-zero natural number? Unfortunately, I do not even have an idea how to start. Any hint is much appreciated. </p> <p>Many thanks in advance. </p>
Piquito
219,998
<p>HINT: there is no way to obtain a closed form by elementary means; you have to resort to numerical methods. For a possible approximation you have in particular <span class="math-container">$$1+x+\frac{x^2}{2!}+\cdots+\frac{x^k}{k!}+\cdots =e^x=\frac{x+n}{x-n}$$</span> <span class="math-container">$$\frac{1}{x-n}=-\left(\frac 1n+\frac{x}{n^2}+\frac{x^2}{n^3}+\frac{x^3}{n^4}+\frac{x^4}{n^5}+\frac{x^5}{n^6}+O(x^6)\right), \qquad |x|&lt;n$$</span></p>
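<p>In line with the hint, a root can be found numerically. A bisection sketch of mine: for <span class="math-container">$x\ge n$</span> the function <span class="math-container">$f(x)=\frac{x-n}{x+n}-e^{-x}$</span> is negative at <span class="math-container">$x=n$</span> and tends to <span class="math-container">$1$</span> as <span class="math-container">$x\to\infty$</span>, so it has a root to the right of <span class="math-container">$n$</span>:</p>

```python
import math

def solve_eq(n, hi=60.0, iters=200):
    # bisection for (x - n)/(x + n) = exp(-x) on (n, hi):
    # f(n) = -exp(-n) < 0 while f(x) -> 1 as x -> infinity
    f = lambda x: (x - n) / (x + n) - math.exp(-x)
    lo = float(n)
    assert f(lo) < 0 < f(hi)
    for _ in range(iters):
        mid = (lo + hi) / 2
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

root = solve_eq(1)
assert 1.5 < root < 1.6
assert abs((root - 1) / (root + 1) - math.exp(-root)) < 1e-9
```

<p>For <span class="math-container">$n=1$</span> the root comes out at about <span class="math-container">$1.54$</span>.</p>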
1,457,797
<p>I'm told that a circle intersects the y-axis at $(0,0)$ and $(0, -4)$.</p> <p>I have a tangent that starts at $(0, -6)$. I want to find the point of intersection.</p> <p>I am taking the midpoint of the circle to be $(0, -2)$ and the radius to be $2$ from the points above.</p> <p>The equation of the circle then is:</p> <p>${(x - 0)^2 + (y + 2)^2 = 4}$</p> <p>$\implies$ ${x^2 + y^2 + 4y = 0}$</p> <p>I am taking the equation of the line to be ${y = -6}$.</p> <p>If I substitute ${y = -6}$ into ${x^2 + y^2 + 4y = 0}$ I get</p> <p>${x^2 + 36 - 24 = 0}$</p> <p>${x^2 = -12}$</p> <p>I think I have gone wrong somewhere.</p>
R.N
253,742
<p>See this; it shows that your answer is not correct: <a href="https://i.stack.imgur.com/4XB35.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4XB35.png" alt="enter image description here"></a></p>
45,800
<p>Let's say I want to calculate the following scalar-by-matrix derivative</p> <p>$$\frac{\partial}{\partial A} \text{tr} \left[(\vec X^T A)^T (\vec X^T A)\right],$$</p> <p>with $\vec X$ and $A$ being an $n \times 1$ and an $n \times m$ matrix, respectively. Is there a way in Mathematica to get the result</p> <p>$$2 \vec X (\vec X^T A)$$</p> <p>without explicitly defining (for instance)</p> <pre><code>n=3 m=2 A=Array[a,{n,m}] X=Array[x,{n,1}] </code></pre> <p>and calculating</p> <pre><code>D[Tr[Transpose[Transpose[X].A].Transpose[X].A],{A}] </code></pre> <p>? The problem with this approach is that the Mathematica result cannot be easily cast back into a human-readable form like</p> <p>$$2 \vec X (\vec X^T A).$$</p>
userrandrand
86,543
<h2>Outline</h2> <ul> <li><strong>Discussion</strong></li> <li><strong>Using Differentials</strong></li> <li><strong>Using the NonCommutativeMultiply package NCAlgebra</strong></li> <li><strong>Using MatrixD</strong></li> <li><strong>Using xAct</strong></li> </ul> <h3>Discussion (code sections below)</h3> <p>There is an ambiguity that has not been mentioned. The shape of the matrix derivative depends on the choice of convention as explained in this <a href="https://en.wikipedia.org/wiki/Matrix_calculus" rel="nofollow noreferrer">wikipedia page</a>. The ambiguity does not matter once the convention is understood or one considers differentials (considered in the first code example below) rather than derivatives.</p> <p>Moreover, the formulas can depend on whether the matrix has symmetry. Indeed, a matrix with symmetry has less components and thus less variables to include among the derivatives. For example, for a generic matrix, the partial derivative with respect to <span class="math-container">$ M_{\text{kl}}$</span> keeping all other matrix components fixed has the following fundamental formula:</p> <p><span class="math-container">$$\frac{\partial M_{\text{ij}}}{\partial M_{\text{kl}}} \bigg\rvert_{M_{\text{rh}},\; \text{r}\neq\text{k},\text{h}\neq\text{l}}=\mathbb{J}_{\text{ijkl}}$$</span></p> <p>where <span class="math-container">$\mathbb{J}_{\text{ijkl}}\equiv \delta _{\text{ik}} \delta _{\text{jl}}$</span>.</p> <p>But if <span class="math-container">$M$</span> is symmetric, then one can not take <span class="math-container">$\frac{\partial M_{\text{ij}}}{\partial M_{\text{kl}}} \bigg\rvert_{M_{\text{rh}},\; \text{r}\neq\text{k},\text{h}\neq\text{l}}$</span> as in the set {<span class="math-container">$\text{r}\neq\text{k},\text{h}\neq\text{l}$</span>} there is <span class="math-container">$M_{\text{lk}}$</span> which can not be held fixed while <span class="math-container">$M_{\text{kl}}$</span> varies as they are the same variable.</p> <p>This issue with 
symmetric matrices caused confusion even as late as 2021 in this <a href="https://math.stackexchange.com/q/4105048/1049002">Math Stack Exchange question</a> and was also the reason for this 2019 <a href="https://arxiv.org/abs/1911.06491" rel="nofollow noreferrer">paper</a> that was updated in 2020. My summary of the many discussions I found was that although the partial derivatives of <span class="math-container">$M_{\text{lk}}$</span> and <span class="math-container">$M_{\text{kl}}$</span> cannot be taken independently, a trick that works is to pretend they can be taken independently and take an average, such that <span class="math-container">$\mathbb{J}_{\text{ijkl}}\equiv \frac{1}{2}\;( \delta _{\text{ik}} \delta _{\text{jl}}+\delta _{\text{il}} \delta _{\text{jk}} )$</span>.</p> <p>That said, depending on the final result, it might not matter which <span class="math-container">$\mathbb{J}_{\text{ijkl}}$</span> we choose.</p> <p>In the following we will not consider matrices with symmetry.</p> <h3>Using differentials</h3> <p><em>The code below shows explicitly how to obtain matrix derivatives in the restricted class of scalar functions defined by a Trace, as in the example of the original question. However, it is restricted to this class, although including Det might be possible after a few modifications using <span class="math-container">$\text{det}(A)=\exp(\text{tr}(\log(A)))$</span>. 
For more general purpose functions see the following code sections that offer packages.</em></p> <p>We recall the definition of the differential of a scalar function <span class="math-container">$f$</span> evaluated on a matrix <span class="math-container">$A$</span>.</p> <p><span class="math-container">$$Df_A[H]=\langle H, \frac{\partial f}{\partial A} \rangle \underset{H\rightarrow 0}{=} f(A+H)-f(A) +O(H^2)$$</span></p> <p>For a scalar function:</p> <p><span class="math-container">$$\langle H,\frac{\partial f}{\partial A} \rangle=\text{Tr}\left[H^{T}.\frac{\partial f}{\partial A}\right]$$</span></p> <hr /> <p><strong>An aside</strong></p> <p>Notice however that one could have equally defined <span class="math-container">$\frac{\partial f}{\partial A}$</span> such that (<strong>alternative definition</strong>)</p> <p><span class="math-container">$$\langle H,\frac{\partial f}{\partial A} \rangle=\text{Tr}\left[ \frac{\partial f}{\partial A}.H\right]$$</span></p> <p>In that last case <span class="math-container">$\frac{\partial f}{\partial A}$</span> is the transpose of the first definition. That definition has the advantage of removing the Transpose from the definition and is formally a better generalization of the case where A is a <span class="math-container">$n\times1$</span> column vector. Indeed, in absence of a scalar product, the natural quantity to consider is the <a href="https://en.wikipedia.org/wiki/Gradient#Relationship_with_derivative" rel="nofollow noreferrer">total derivative</a> of the scalar function. If a scalar product is present then one can consider instead the gradient which is dual to total derivative. In euclidean geometry, the two are related by a transpose see <a href="https://en.wikipedia.org/wiki/Gradient#Relationship_with_derivative" rel="nofollow noreferrer">the same link</a> mentioned before. However the difference between the two is more important in non euclidean geometry. 
As an example as to how correctly identifying the two can even change an algorithm see <a href="https://www.robots.ox.ac.uk/%7Elsgs/posts/2019-09-27-info-geom.html" rel="nofollow noreferrer">this link</a> on the natural gradient as the mathematically &quot;right&quot; way to use gradient descent in neural networks.</p> <p>For our purpose, we will consider the first definition as the second definition leads to a transpose of <span class="math-container">$A$</span> in the final result.</p> <hr /> <p>The first step of the method consists of replacing <span class="math-container">$A$</span> with <span class="math-container">$A+t H$</span> and Taylor expanding about <span class="math-container">$t=0$</span> in order to obtain the differential. This is implemented in the code below:</p> <pre><code>$Assumptions = {t ∈ Reals} DfA = Tr[Transpose[Transpose@X . A] . Transpose@X . A] /. A -&gt; A + t*H // TensorExpand // ExpandAll // ReplaceAll[Tr[Plus[a__]] :&gt; Tr /@ a] // ReplaceAll[Tr[t^p_.*a_] :&gt; t^p*Tr[a]] // D[#, t] &amp; // ReplaceAll[t -&gt; 0] </code></pre> <p>out: <code>(* Tr[Transpose[Transpose[X, {2, 1}] . A, {2, 1}].Transpose[X, {2, 1}] . H] + Tr[Transpose[Transpose[X, {2, 1}] . H, {2, 1}].Transpose[X, {2, 1}] . A] *)</code></p> <p>To simplify the above expression and identify the derivative, we replace Transpose and dot with a custom transpose and dot. These custom functions have properties that will allow the expression to be simplified. Moreover, these functions will place the expression in a form that allows us to obtain the derivative. We also define an identity matrix that acts as an identity element for dot. 
The definitions and properties of these functions are given in the code below (notice that H is hard coded into some of the definitions below):</p> <pre><code>totranspose = Transpose[a_, {2, 1}] :&gt; transpose[a]; todot = Dot -&gt; dot; transpose[dot[a_ , b_]] := dot[transpose[b], transpose[a]] transpose[transpose[a_]] := a transpose[id] ^:= id id /: dot[s___, id, g___] := dot[s, g] dot /: Tr[dot[a___, H, b___]] := Tr[dot[transpose@H, If[{a} === {}, id, transpose@dot@a], If[{b} === {}, id, transpose@dot@b]]] dot /: Tr[dot[a__, transpose@H, b___]] := Tr[dot[transpose[H], dot@b, If[{a} === {}, id, dot@a]]] SetAttributes[dot, {Flat, OneIdentity}] </code></pre> <p>Then changing Dot to dot and Transpose to transpose we obtain:</p> <pre><code>DfA //. {todot, totranspose} </code></pre> <p>out: <code>2 Tr[dot[transpose[H], X, transpose[X], A]]</code></p> <p>Finally we define rules to replace transpose with Transpose and dot with Dot and to extract the derivative:</p> <pre><code>toDot = dot -&gt; Dot; toTranspose = transpose -&gt; Transpose; obtainDerivative = a_. Tr[Transpose@H . b_ ] :&gt; a*b; </code></pre> <p>This leads to the final result:</p> <pre><code>DfA //. {todot, totranspose} //. {toDot,toTranspose} /. obtainDerivative // TeXForm </code></pre> <p><span class="math-container">$$2 X.X^T.A $$</span></p> <h3>Using the NonCommutativeMultiply package NCAlgebra</h3> <p>See @dtn's answer and also this <a href="https://mathematica.stackexchange.com/a/191397/86543">answer</a>. Notice that @dtn's answer using the NCAlgebra package gives the transpose of the result above.</p> <h3>Using MatrixD</h3> <p>The answer <a href="https://answer" rel="nofollow noreferrer">on this page</a> provides a function for matrix derivatives. 
After installing the package one can use:</p> <pre><code>MatrixD[Tr[Transpose[Transpose@X.A].Transpose@X.A], A] // TeXForm </code></pre> <p><span class="math-container">$$2 X.X^T.A$$</span></p> <p>If I understood the author's answer in the link mentioned, the package uses the component-wise formula for derivatives that was mentioned in the discussion section above <span class="math-container">$\frac{\partial M_{\text{ij}}}{\partial M_{\text{kl}}} \bigg\rvert_{M_{\text{rh}},\; \text{r}\neq\text{k},\text{h}\neq\text{l}}=\mathbb{J}_{\text{ijkl}} = \delta _{\text{ik}} \delta _{\text{jl}}$</span></p> <h3>Using xAct</h3> <p><em>The author of this <a href="https://mathematica.stackexchange.com/a/212192/86543">answer</a> writes that xAct can only perform matrix derivatives for scalar functions (and in their opinion that is the only way that it makes sense). They explain that if the function is not scalar then one can contract with arbitrary tensors to obtain a scalar function and then identify the derivative.</em></p> <p>One of the most popular tensor packages is xAct. In this package, calculations are done using Einstein notation, which for our purposes just means sums written without the sum symbol.</p> <p>The initial expression in terms of Transpose and Tr needs to be translated into a sum. I did this by hand but perhaps there is an easy way to do this automatically.</p> <p><strong>In the usual sum notation</strong></p> <p><span class="math-container">$$ \text{tr} \left[(\vec X^T A)^T (\vec X^T A)\right]= \sum_{i,j,k}{x_i x_j A_{ik} A_{jk}}$$</span></p> <p><strong>In Einstein notation:</strong></p> <p><span class="math-container">$$ \text{tr} \left[(\vec X^T A)^T (\vec X^T A)\right]=x_i x_j A_k^{i} A^{jk}$$</span></p> <p><strong>In xAct notation:</strong></p> <p><span class="math-container">$ \text{tr} \left[(\vec X^T A)^T (\vec X^T A)\right]=$</span> <code>x[-i] x[-j] A[i,k] A[j,-k]</code></p> <p>Tensors and the manifold they &quot;live&quot; on need to be defined in xAct.
As the <span class="math-container">$A$</span> matrix is not square, I defined the manifold for <span class="math-container">$A$</span> as a product manifold between <span class="math-container">$\mathbb{R}^m$</span> and <span class="math-container">$\mathbb{R}^n$</span>. After installing xAct from the xAct website one can evaluate the following definitions:</p> <pre><code>&lt;&lt; xAct`xTensor` DefConstantSymbol[n]; DefConstantSymbol[m]; DefManifold[Mn, n, {an, bn, cn, en, fn}]; DefManifold[Mm, m, {am, bm, cm, em, fm}] DefManifold[M, {Mn, Mm}, {a, b, c, e, f}]; DefMetric[1, metric[-a, -b], cov]; DefTensor[A[a, b], M]; DefTensor[X[an], Mn]; $PrePrint = ScreenDollarIndices; </code></pre> <p>The matrix derivative can be found using <code>VarD</code> from the xAct package (the output in Mathematica is formatted to show up and down indices like in Einstein notation):</p> <pre><code>VarD[A[e, f]][X[-a]*X[-b]*A[a, c]*A[b, -c]] // ContractMetric </code></pre> <p>out: <code>(* 2 A[a, -f] X[-a] X[-e] *)</code></p>
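<p>All of the approaches above land on the same answer, <span class="math-container">$2 X.X^T.A$</span>. As an independent sanity check outside Mathematica, the sketch below (plain Python, no packages; the shapes and random test data are my own choices) compares that closed form against a brute-force finite-difference gradient of <span class="math-container">$f(A)=\text{tr}\left[(\vec X^T A)^T (\vec X^T A)\right]=\sum_{i,j,k}x_i x_j A_{ik}A_{jk}$</span>:</p>

```python
import random

random.seed(0)
m, n = 4, 3
X = [random.gauss(0, 1) for _ in range(m)]                      # column vector x
A = [[random.gauss(0, 1) for _ in range(n)] for _ in range(m)]  # m x n matrix

def f(A):
    # f(A) = tr[(X^T A)^T (X^T A)] = sum_k (sum_i x_i A_ik)^2
    total = 0.0
    for k in range(n):
        s = sum(X[i] * A[i][k] for i in range(m))
        total += s * s
    return total

# analytic gradient claimed above: (2 X X^T A)_{ef} = 2 x_e sum_j x_j A_{jf}
grad_analytic = [[2 * X[e] * sum(X[j] * A[j][fcol] for j in range(m))
                  for fcol in range(n)] for e in range(m)]

# central finite differences, entry by entry
eps = 1e-6
max_err = 0.0
for i in range(m):
    for j in range(n):
        A[i][j] += eps
        fp = f(A)
        A[i][j] -= 2 * eps
        fm = f(A)
        A[i][j] += eps  # restore
        fd = (fp - fm) / (2 * eps)
        max_err = max(max_err, abs(fd - grad_analytic[i][j]))
```

<p>Since <span class="math-container">$f$</span> is quadratic in <span class="math-container">$A$</span>, the central difference is exact up to rounding, so the agreement is essentially to machine precision.</p>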
3,879,201
<p>Let <span class="math-container">$E$</span> be a normed <span class="math-container">$\mathbb R$</span>-vector space, <span class="math-container">$\mu$</span> be a finite signed measure on <span class="math-container">$(E,\mathcal B(E))$</span> and <span class="math-container">$$\hat\mu:E'\to\mathbb C\;,\;\;\;\varphi\mapsto\int\mu({\rm d}x)e^{{\rm i}\varphi}$$</span> denote the characteristic function of <span class="math-container">$\mu$</span>.</p> <p>Replying to a previous formulation of this question, <a href="https://math.stackexchange.com/users/142385/kavi-rama-murthy">Kavi Rama Murthy </a> <a href="https://math.stackexchange.com/a/3879306/47771">has shown</a> that if <span class="math-container">$E$</span> is complete and separable and <span class="math-container">$\mu$</span> is nonnegative, then <span class="math-container">$\hat\mu$</span> is uniformly continuous.</p> <p>It is easy to see that his proof still works in the general case as long as we are assuming that <span class="math-container">$\mu$</span> is tight<span class="math-container">$^1$</span>, i.e. <span class="math-container">$$\forall\varepsilon&gt;0:\exists K\subseteq E\text{ compact}:|\mu|(K^c)&lt;\varepsilon\tag1.$$</span></p> <blockquote> <p>Taking a closer look at the proof, I've observed the following: Let <span class="math-container">$\langle\;\cdot\;,\;\cdot\;\rangle$</span> denote the duality pairing between <span class="math-container">$E$</span> and <span class="math-container">$E'$</span> and <span class="math-container">$$p_x(\varphi):=|\langle x,\varphi\rangle|\;\;\;\text{for }\varphi\in E'$$</span> for <span class="math-container">$x\in E$</span>. 
By definition, the weak* topology <span class="math-container">$\sigma(E',E)$</span> on <span class="math-container">$E'$</span> is the topology generated by the seminorm family <span class="math-container">$(p_x)_{x\in E}$</span>.</p> <p>Now, if <span class="math-container">$K\subseteq E$</span> is compact, <span class="math-container">$$p_K(\varphi):=\sup_{x\in K}p_x(\varphi)\;\;\;\text{for }\varphi\in E'$$</span> should be a seminorm on <span class="math-container">$E'$</span> as well. And if I'm not missing something, the topology generated by <span class="math-container">$(p_K:K\subseteq E\text{ is compact})$</span> is precisely the topology <span class="math-container">$\sigma_c(E',E)$</span> of compact convergence on <span class="math-container">$E'$</span>.</p> <p>What <a href="https://math.stackexchange.com/users/142385/kavi-rama-murthy">Kavi Rama Murthy </a> has shown is that, since <span class="math-container">$\mu$</span> is tight, for all <span class="math-container">$\varepsilon&gt;0$</span>, there is a compact <span class="math-container">$K\subseteq E$</span> and a <span class="math-container">$\delta&gt;0$</span> with <span class="math-container">$$|\hat\mu(\varphi_1)-\hat\mu(\varphi_2)|&lt;\varepsilon\;\;\;\text{for all }\varphi_1,\varphi_2\in E'\text{ with }p_K(\varphi_1-\varphi_2)&lt;\delta\tag2.$$</span></p> <p><strong>Question</strong>: Are we able to conclude that <span class="math-container">$\hat\mu$</span> is <span class="math-container">$\sigma_c(E',E)$</span>-continuous?</p> </blockquote> <h2><strong>EDIT</strong>:</h2> <p>In order to conclude that <span class="math-container">$\hat\mu$</span> is (uniformly) <span class="math-container">$\sigma_c(E',E)$</span>-continuous, we need to show that <span class="math-container">$(2)$</span> holds for <span class="math-container">$K$</span> replaced by an arbitrary compact <span class="math-container">$\tilde K\subseteq E$</span>.
Given <span class="math-container">$\varepsilon&gt;0$</span>, we can show <span class="math-container">$(2)$</span> by choosing the compact subset <span class="math-container">$K\subseteq E$</span> such that <span class="math-container">$$|\mu|(K^c)&lt;\varepsilon\tag3.$$</span></p> <p>We may then write <span class="math-container">\begin{equation}\begin{split}\left|\hat\mu(\varphi_1)-\hat\mu(\varphi_2)\right|&amp;\le\underbrace{\int_{K\cap\tilde K}\left|e^{{\rm i}\varphi_1}-e^{{\rm i}\varphi_2}\right|{\rm d}\left|\mu\right|}_{&lt;\:\varepsilon}\\&amp;\;\;\;\;\;\;\;\;\;\;\;\;+\int_{K\cap\tilde K^c}\left|e^{{\rm i}\varphi_1}-e^{{\rm i}\varphi_2}\right|{\rm d}\left|\mu\right|\\&amp;\;\;\;\;\;\;\;\;\;\;\;\;+\underbrace{\int_{K^c}\left|e^{{\rm i}\varphi_1}-e^{{\rm i}\varphi_2}\right|{\rm d}\left|\mu\right|}_{&lt;\:2\varepsilon}\end{split}\tag4\end{equation}</span> for all <span class="math-container">$\varphi_1,\varphi_2\in E'$</span> with <span class="math-container">$p_{\tilde K}(\varphi_1-\varphi_2)&lt;\delta$</span>, where <span class="math-container">$$\delta:=\frac\varepsilon{\left\|\mu\right\|},$$</span> but I have no idea how we can control the second integral.</p> <h2><strong>EDIT 2</strong></h2> <p>A &quot;proof&quot; of this claim can be found in Linde's <em>Probability in Banach Spaces</em>, but I have no idea why this proof is correct, since he is concluding the continuity immediately from <span class="math-container">$(2)$</span> (for a single <span class="math-container">$K$</span>):</p> <p><a href="https://i.stack.imgur.com/vuAOk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vuAOk.png" alt="enter image description here" /></a> <a href="https://i.stack.imgur.com/9q1By.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9q1By.png" alt="enter image description here" /></a></p> <p>Maybe we need to assume that <span class="math-container">$\mu$</span> is even Radon, i.e.
that for all <span class="math-container">$B\in\mathcal B(E)$</span> and all <span class="math-container">$\varepsilon&gt;0$</span>, there is a compact <span class="math-container">$C\subseteq E$</span> with <span class="math-container">$C\subseteq B$</span> and <span class="math-container">$|\mu|(B\setminus C)&lt;\varepsilon$</span>. The author is actually imposing this assumption, but he obviously doesn't make use of it in his proof (he would need to consider an arbitrary compact <span class="math-container">$\tilde K\subseteq E$</span>, as I did above).</p> <hr /> <p><span class="math-container">$^1$</span> On a complete separable metric space, every finite signed measure is tight.</p>
Michael Rozenberg
190,319
<p>Your answer is right.</p> <p>I like the following way: <span class="math-container">$$x^5+2x^4-x^3-2x^2+x=x(x^4+2x^3-x^2-2x+1)=$$</span> <span class="math-container">$$=x(x^2+x-1)^2.$$</span></p>
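<p>For readers who want to machine-check the square in the last step, a few lines of Python (polynomials as coefficient lists, constant term first) confirm the expansion of <span class="math-container">$(x^2+x-1)^2$</span>:</p>

```python
def poly_mul(p, q):
    # multiply polynomials given as coefficient lists, constant term first
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

# x^2 + x - 1  ->  [-1, 1, 1]
sq = poly_mul([-1, 1, 1], [-1, 1, 1])
# expected: x^4 + 2x^3 - x^2 - 2x + 1  ->  [1, -2, -1, 2, 1]
```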
4,135,473
<p>Can you explain how the lower integral is the supremum of a sum? Isn't <span class="math-container">$L(f,P)$</span> just a sum and therefore a single number? Definition 5.13 says from <span class="math-container">$a$</span> to <span class="math-container">$b$</span>, not from each <span class="math-container">$x_j$</span> to <span class="math-container">$x_{j-1}$</span>. And it says for a given partition, not finer and finer partitions.</p> <p>I'm just very confused on these definitions and would love if someone could rephrase it in a way that matches this definition. (I know there's another way to define it in terms of limits but I need to understand it the way this textbook defines it.)</p> <p><a href="https://i.stack.imgur.com/itDAx.png" rel="nofollow noreferrer">Textbook definition of Riemann sums</a></p> <p><a href="https://i.stack.imgur.com/brlL9.png" rel="nofollow noreferrer">Textbook definition of upper and lower integrals</a></p>
N. S.
9,176
<p>My favourite, and in my opinion the cleanest, way is the following classical solution:</p> <p>Let <span class="math-container">$f(x)= \lfloor x\rfloor+\lfloor x+\frac{1}{2}\rfloor-\lfloor 2x \rfloor$</span>.</p> <p>Then, it is trivial to see that</p> <ul> <li><span class="math-container">$f(x+\frac{1}{2})=f(x)$</span>.</li> <li><span class="math-container">$f(x)=0 \forall x \in [0, \frac{1}{2})$</span>.</li> </ul> <p>From here it follows immediately that <span class="math-container">$f \equiv 0$</span>.</p> <p><strong>P.S.</strong> By using <span class="math-container">$f(x)= \lfloor x\rfloor+\lfloor x+\frac{1}{n}\rfloor+ \ldots +\lfloor x+\frac{n-1}{n}\rfloor-\lfloor nx \rfloor$</span> you can deduce the same way that <span class="math-container">$$\lfloor x\rfloor+\lfloor x+\frac{1}{n}\rfloor+ \ldots +\lfloor x+\frac{n-1}{n}\rfloor=\lfloor nx \rfloor$$</span></p>
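<p>The general identity in the P.S. (Hermite's identity) is easy to spot-check exactly, using rationals to avoid floating-point floor surprises:</p>

```python
import math
from fractions import Fraction

# check floor(x) + floor(x + 1/n) + ... + floor(x + (n-1)/n) == floor(n x)
# over a grid of exact rational x, for several n
ok = True
for n in range(1, 8):
    for num in range(-50, 51):
        x = Fraction(num, 7)   # exact rationals, including negatives
        lhs = sum(math.floor(x + Fraction(k, n)) for k in range(n))
        rhs = math.floor(n * x)
        ok = ok and (lhs == rhs)
```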
864,816
<p>I was looking for examples of real valued functions $f$ such that $f(x)+f(-x)=f(x)f(-x)$. Preferably, I'd like them to be continuous, differentiable, etc.</p> <p>Of course, there are the constant functions $f(x)=0$ and $f(x)=2$. I also showed that $1+b^x$, where $b&gt;0$, is another solution. Are there any other nice ones?</p>
Community
-1
<p>From your solution, I thought to consider the change of variable $f(x) = 1 + g(x)$: your functional equation then becomes</p> <p>$$ g(x) g(-x) = 1 $$</p> <p>and now the entire solution space becomes obvious: you can pick $g(x)$ to be <em>any</em> function on the positive reals that is nowhere zero, and then the values at the negative reals are determined by the functional equation. And at zero, you can pick either $g(0) = 1$ or $g(0) = -1$.</p>
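<p>To make the parametrization concrete, here is a small Python check; the particular choice <span class="math-container">$g(x)=1+x^2$</span> on the positive reals is arbitrary, just some nowhere-zero function, extended by <span class="math-container">$g(-x)=1/g(x)$</span> and <span class="math-container">$g(0)=1$</span>:</p>

```python
def g(x):
    # any nowhere-zero choice on the positive reals, extended by g(-x) = 1/g(x)
    if x > 0:
        return 1.0 + x * x            # arbitrary nowhere-zero choice
    if x < 0:
        return 1.0 / (1.0 + x * x)    # forced by the functional equation
    return 1.0                        # g(0) = 1 (could also pick -1)

def f(x):
    return 1.0 + g(x)

# f(x) + f(-x) - f(x) f(-x) = 1 - g(x) g(-x) = 0
max_defect = max(abs(f(x) + f(-x) - f(x) * f(-x))
                 for x in [t / 10 for t in range(-40, 41)])
```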
3,219,260
<p>Trying to find the easiest and most general method in finding last <span class="math-container">$n$</span> digits of a number. I know the trick lies in finding the remainder when the number is divided by <span class="math-container">$10^n$</span> but still not able to perform the rest of the steps to reach the answer</p>
Bill Dubuque
242
<p>Below I show that the method in JWT's answer is also essentially Binomial Theorem based.</p> <p><span class="math-container">$\begin{align} 7^{\large 2}\ &amp;= -1 + 50 \\[.3em] \Rightarrow\ \ \ \ \ \ \ 7^{\large 4}\ &amp;=\ \ \ 1 + 24\cdot100\ \, (=\, 1 -2(50)+ 2500\ \ \rm by\ squaring\ above)\\[.3em] \Rightarrow\ (7^{\large 4})^{\large 32} &amp;\equiv \ \ \ 1 + \color{#0a0}{32\cdot 24}\cdot 100\!\!\pmod{\!100^{\large 2}}\ \ \rm by\ Binomial\ Theorem\\[.3em] &amp;\equiv\ \ \ 1\ \, +\, \ \color{#c00}{68}\,\ \cdot\,\ 100\ \ \ {\rm by}\ \ \color{#0a0}4\underbrace{(\color{#0a0}{8\cdot 24}\bmod 25)}_{\large 8\,(-1)\ \,\equiv\,\ \color{#c00}{17}\ \ \ \ \ \ \ } = \color{#c00}{4(17)} \end{align}$</span> </p>
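<p>Both the direct power and the binomial shortcut above can be confirmed in a line of Python each:</p>

```python
# the last four digits of 7^128, computed directly by modular exponentiation
last4 = pow(7, 128, 10_000)

# the binomial-theorem route above: 7^4 = 1 + 24*100, so
# (7^4)^32 = 1 + 32*24*100 (mod 100^2), and 32*24 = 768 = 68 (mod 100)
binomial_route = (1 + 32 * 24 * 100) % 10_000
```

<p>Both give <span class="math-container">$6801$</span>, matching <span class="math-container">$1+\color{#c00}{68}\cdot 100$</span>.</p>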
2,596,347
<p>Suppose $X$ is compact and Hausdorff and that $f:X \to Y$ is continuous, closed, and surjective. How can I show that $Y$ is Hausdorff?</p>
Not Mike
480,631
<p>Since $X$ is compact Hausdorff, $X$ it is normal. Next, since $f$ is closed, the single-element subsets of $Y$ are closed (as $\{y\} = f''\{x\}$ for some $x \in X$.) It follows that for each pair of distinct $y_0, y_1 \in Y$, the sets $X_0 = f^{-1}(y_0)$ and $X_1 = f^{-1}(y_1)$ are disjoint closed subsets of $X$. Noting that $X$ is normal, you can separate $X_0$ and $X_1$ using open-sets with disjoint closures. The remaining bit is to turn the open-sets separating $X_0$ and $X_1$ into open subsets of $Y$ separating $y_0$ and $y_1$.</p>
4,502,175
<p>I want to create a simple demo of moving an eye (black circle) inside a bigger circle with a black stroke when moving a mouse. I have cursor positions mouseX and mouseY on a canvas and I need to map the value of mouse position into a circle so the eye is moving inside the circle.</p> <p>This should be trivial but I have no idea how to solve this problem.</p> <p>This is a coding problem but I think that I will get the best results from this Q&amp;A. If not I will ask on stack overflow.</p> <p>This is the code that shows the problem.</p> <p><a href="https://editor.p5js.org/jcubic/sketches/E2hVGceN9" rel="nofollow noreferrer">https://editor.p5js.org/jcubic/sketches/E2hVGceN9</a></p> <p>If you use map function in P5JS library (that is linear map from one range to a different range) I get the black circle to move in a square with a side equal to the diameter of the circle. So the black circle is outside.</p> <p>I'm not sure what should I use to calculate the position of the black circle so it's always inside the bigger circle.</p> <p><a href="https://i.stack.imgur.com/u8V9U.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/u8V9U.png" alt="enter image description here" /></a></p>
user2661923
464,411
<p>I may not be understanding the problem correctly. If not, please advise:</p> <p>It seems as if you are facing two mathematical challenges:</p> <ul> <li><p>You are given a circle with center at the Cartesian coordinate <span class="math-container">$(0,0)$</span>, of radius <span class="math-container">$(r)$</span>. You are also given a square, whose diagonal is exactly of length <span class="math-container">$(2r)$</span>. You want to know how to mathematically correspond the points inside the circle with the points inside the square.</p> </li> <li><p>Same as previous bullet point, except that the length of the square's diagonal may be less than or greater than <span class="math-container">$(2r)$</span>.</p> </li> </ul> <p>The remainder of this posting is based on the above assumptions. If I am interpreting your question incorrectly, then leave a comment after this <em>answer</em>, and I will edit the answer accordingly.</p> <hr /> <p>I will start with the first bullet point and then conclude with the second bullet point.</p> <p>The easiest approach is to <strong>require</strong> that the Cartesian coordinates of the exact center of the square be <span class="math-container">$(0,0)$</span>. Then, (for example), the <span class="math-container">$(x,y)$</span> coordinates of the upper right and lower left corners of the square are <br> <span class="math-container">$\displaystyle \left( ~\frac{r}{\sqrt{2}}, ~\frac{r}{\sqrt{2}} ~\right)~$</span> and <span class="math-container">$\displaystyle \left( ~-\frac{r}{\sqrt{2}}, ~-\frac{r}{\sqrt{2}} ~\right),~$</span> respectively.</p> <p>As indicated at the start of my answer, it is also assumed that the exact center of the circle also has (in effect) Cartesian coordinates <span class="math-container">$(0,0)$</span>.
So, the <span class="math-container">$(x,y)$</span> coordinate of <span class="math-container">$(0,0)$</span> corresponds to the exact center of the circle.</p> <p>Now, you need a way of associating each point in the square, <strong>other than</strong> <span class="math-container">$(0,0)$</span>, with some point in the circle. Any such point <span class="math-container">$(x,y) \neq (0,0),$</span> that is inside the square will automatically translate into the following point inside the circle: <br> <span class="math-container">$(s,\theta),~$</span> where <span class="math-container">$~(s)~$</span> is a positive number, <br> and <span class="math-container">$(\theta)$</span> is some angle such that <span class="math-container">$0^\circ \leq \theta &lt; 360^\circ.$</span></p> <p>The computation is:</p> <ul> <li><p><span class="math-container">$\displaystyle (s) = \sqrt{x^2 + y^2}.$</span></p> </li> <li><p><span class="math-container">$(\theta)$</span> is the unique angle in the half-open interval <span class="math-container">$[0^\circ, 360^\circ)$</span> such that <br> <span class="math-container">$\displaystyle \cos(\theta) = \frac{x}{s}, ~\sin(\theta) = \frac{y}{s}.$</span></p> </li> </ul> <p>Once <span class="math-container">$(x,y)$</span> has been translated into <span class="math-container">$(s,\theta)$</span>, you then map the <span class="math-container">$(s,\theta)$</span> coordinates into a point on or inside the circle, as follows:</p> <ul> <li><p>You want the point inside the circle to be such that it is exactly <span class="math-container">$(s)$</span> units away from the exact center of the circle.</p> </li> <li><p>Further, you want the point inside the circle to be such that if you draw the line segment from this point to the exact center of the circle, and also draw the horizontal line segment from the center of the circle to the right hand side boundary of the circle, then the angle formed will be equal to <span 
class="math-container">$(\theta)$</span>.</p> </li> </ul> <p>To conclude the first bullet point, you also need an algorithm that performs the <em>inverse translation</em>. That is, if you are given a point <span class="math-container">$(s,\theta)$</span>, where <span class="math-container">$~0 &lt; s \leq r,~$</span> and <span class="math-container">$~0^\circ \leq \theta &lt; 360^\circ,~$</span> you need to be able to compute the corresponding point in the square.</p> <p>This is done as follows:</p> <ul> <li><p>The <span class="math-container">$(x)$</span> coordinate is <span class="math-container">$~s \times \cos(\theta).$</span></p> </li> <li><p>The <span class="math-container">$(y)$</span> coordinate is <span class="math-container">$~s \times \sin(\theta).$</span></p> </li> </ul> <p>One final point, for the first bullet point: <br> the above approach is based on the idea that the square has been <strong>inscribed</strong> inside the circle, so that the <span class="math-container">$(4)$</span> corners of the square are each touching a point <strong>on</strong> the circle's boundary.</p> <p>This implies that there will be points inside the circle that fall outside the boundary of the square. This implies that if you were to take any random point <span class="math-container">$(s,\theta)$</span> inside the circle, and apply the inverse algorithm described above, the corresponding <span class="math-container">$(x,y)$</span> coordinate could fall outside the square.</p> <p>For simplicity, I will refer to points on or inside the circle, that are outside the corresponding square as <strong>invalid</strong> points.</p> <p>While it is important to consider the issue of invalid points, the 2nd bullet point (discussed below) permits the square to be of any size. 
So, if you wish to translate a point inside the circle that is <strong>invalid</strong> with respect to a specific square, then you will need to first enlarge the corresponding square so that it will include the corresponding invalid point.</p> <hr /> <p>The second bullet point, at the start of my answer, discussed the situation where (in effect), the diagonal of the square was <span class="math-container">$(2R)$</span>, where <span class="math-container">$R \neq r$</span>, with <span class="math-container">$(r)$</span> equaling the radius of the circle.</p> <p>All of the analysis of the previous section applies, with only minor modifications:</p> <p>The point <span class="math-container">$(0,0)$</span> inside the square continues to be associated with the point <span class="math-container">$(0,0)$</span> [i.e. the exact center] of the circle.</p> <p>For a point inside the square of coordinates <span class="math-container">$(x,y) \neq (0,0)$</span>, when you compute <span class="math-container">$~\displaystyle S = \sqrt{x^2 + y^2},$</span> and then compute <span class="math-container">$\theta$</span> in accordance with the previous analysis, you then need to compute</p> <p><span class="math-container">$$s = S \times \frac{r}{R}. \tag1 $$</span></p> <p>That is, you are simply applying a scaling factor to the distance of the point <span class="math-container">$(x,y)$</span> to the center of the square. Then, having converted <span class="math-container">$(S,\theta)$</span> to <span class="math-container">$(s,\theta)$</span>, you proceed as per the analysis in the previous section.</p> <p>For the <strong>inverse algorithm</strong>, you simply apply the scaling factor, in reverse.</p> <p>That is, having identified the specific <span class="math-container">$(s,\theta)$</span> coordinate inside the circle, you then compute</p> <p><span class="math-container">$$S = s \times \frac{R}{r}. 
\tag2 $$</span></p> <p>Then, after <span class="math-container">$(s,\theta)$</span> has been converted into <span class="math-container">$(S,\theta)$</span> you follow the approach documented in the previous section to convert this into the coordinates <span class="math-container">$(x,y)$</span>.</p>
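<p>The two maps are short to code up. Below is a hedged Python sketch of the scaled polar translation described above (the function names and round-trip test are mine; in the p5.js sketch, <code>x</code> and <code>y</code> would be the mouse position measured from the eye's center, with <span class="math-container">$R$</span> the range you allow the mouse and <span class="math-container">$r$</span> the room the pupil has inside the eye):</p>

```python
import math

def square_to_circle(x, y, R, r):
    # (x, y) measured from the square's center; returns (s, theta)
    S = math.hypot(x, y)
    theta = math.atan2(y, x) % (2 * math.pi)   # angle in [0, 2*pi)
    s = S * r / R                              # scale the distance, keep the angle
    return s, theta

def circle_to_square(s, theta, R, r):
    # inverse map: back to (x, y) coordinates in the square
    S = s * R / r
    return S * math.cos(theta), S * math.sin(theta)

# round trip: map a point of the square into the circle and back
x0, y0 = 0.3, -0.4
s, th = square_to_circle(x0, y0, R=2.0, r=1.0)
x1, y1 = circle_to_square(s, th, R=2.0, r=1.0)
err = math.hypot(x1 - x0, y1 - y0)
```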
4,651,215
<p>Three Cowboys have a shoot out. A shoots with 1/3 accuracy, B 1/2, C one (never misses). A goes first. The cowboys attempt to shoot the cowboy with the best shooting ability. What’s the probability that A is still alive?</p> <p>Edit: We assume that after A, B goes first and then C</p> <p>My approach:</p> <p>Since it is given that they attempt to shoot the cowboy with the best shooting ability, both A &amp; B would go for C.</p> <p>1st case: C is knocked out and A &amp; B remain in a duel</p> <p>P(A win vs B) = 0.3 + (0.7 * 0.5 * 0.3) + (0.7 * 0.5 * 0.7 * 0.5 * 0.3) + .... infinite geometric progression</p> <p>Thus P(A win vs B) = 0.3/(1 - 0.35) = 0.46</p> <p>2nd case: C shoots B (after both A &amp; B miss against C) and A &amp; C remain in a duel</p> <p>P(A win vs C) = 0.3</p> <p>Now 1st case happens with Probability 2/3 (1/3 chance both A &amp; B miss C)</p> <p>2nd case happens with Probability 1/3</p> <p>Thus P(A wins) = (2/3 * 0.46) + (1/3 * 1/3) = 0.42</p> <p>Does the logic sound right or I have calculated it incorrectly?</p>
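<p>One way to sanity-check the arithmetic is a quick Monte Carlo run. The sketch below hard-codes the assumptions stated above (fixed cyclic order A, B, C among the survivors; everyone aims at the most accurate surviving opponent):</p>

```python
import random

def trial(rng):
    # accuracies; shooting order is A, then B, then C, repeating
    acc = {"A": 1/3, "B": 1/2, "C": 1.0}
    alive = ["A", "B", "C"]
    while len(alive) > 1:
        for shooter in list(alive):
            if shooter not in alive or len(alive) == 1:
                continue
            # aim at the most accurate surviving opponent
            target = max((c for c in alive if c != shooter), key=acc.get)
            if rng.random() < acc[shooter]:
                alive.remove(target)
    return alive[0]

rng = random.Random(12345)
n = 200_000
p_a = sum(trial(rng) == "A" for _ in range(n)) / n
# exact case analysis under these same assumptions gives 13/36 = 0.361...
```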
P. Lawrence
545,558
<p>There are exactly 4 possibilities, for any <span class="math-container">$a$</span>:</p> <ul> <li>If <span class="math-container">$a\equiv 0 \pmod 4,$</span> then <span class="math-container">$a^2 \equiv 0 \pmod 4$</span></li> <li>If <span class="math-container">$a\equiv 1 \pmod 4,$</span> then <span class="math-container">$a^2 \equiv 1 \pmod 4$</span></li> <li>If <span class="math-container">$a\equiv 2 \pmod 4,$</span> then <span class="math-container">$a^2 \equiv 0 \pmod 4$</span></li> <li>If <span class="math-container">$a\equiv 3 \pmod 4,$</span> then <span class="math-container">$a^2 \equiv 1 \pmod 4$</span></li> </ul>
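<p>The four cases can be confirmed exhaustively in a couple of lines:</p>

```python
# every square is 0 or 1 mod 4, depending only on the parity of a
squares_mod4 = {(a * a) % 4 for a in range(1000)}
case_ok = all((a * a) % 4 == (0 if a % 2 == 0 else 1) for a in range(1000))
```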
1,241,586
<p>I want to prove that for all positive integers $n$, $2^1+2^2+2^3+...+2^n=2^{n+1}-2$. By mathematical induction:</p> <p>1) it holds for $n=1$, since $2^1=2^2-2=4-2=2$</p> <p>2) if $2+2^2+2^3+...+2^n=2^{n+1}-2$, then prove that $2+2^2+2^3+...+2^n+2^{n+1}=2^{n+2}-2$ holds.</p> <p>I have no idea how to proceed with step 2), could you give me a hint?</p>
Mark Joshi
106,024
<p>Start with zero; the question is then the same as $$2^0+2^1+2^2+...+2^n=2^{n+1}-1$$ The binary for the LHS is $$ 11111\dots 1 $$ with $n+1$ ones. Add $1$ to get $$ 10000\dots 0, $$ with $n+1$ zeros in binary. This is the binary for $2^{n+1}$ and we are done.</p>
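<p>The binary picture translates directly into a short Python check:</p>

```python
# sum of 2^0 .. 2^n is the binary number with n+1 ones, i.e. 2^(n+1) - 1
ok = all(sum(2**k for k in range(n + 1)) == 2**(n + 1) - 1 for n in range(60))
# the original statement 2^1 + ... + 2^n = 2^(n+1) - 2 follows by dropping 2^0
ok2 = all(sum(2**k for k in range(1, n + 1)) == 2**(n + 1) - 2
          for n in range(1, 60))
ones = bin(2**6 - 1)   # n = 5: six ones
```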
2,437,466
<p>A point, $p$, is defined as a cluster point of a set $S$ if $\forall \epsilon &gt; 0,$ there exists an open ball of radius $\epsilon$ centered at $p$ that contains infinitely many points in $S$.</p> <p>We want to prove that a subset $S$ of a metric space $(E,d)$ is closed iff $S$ contains all of its cluster points.</p> <p>I believe I have the forward direction, which seems to be pretty straightforward, but I am having difficulty with the $(\Leftarrow)$ direction. My idea was to somehow use the fact that $S$ contains all of its cluster points to show that every infinite subset of $S$ contains a cluster point. Then we can say $S$ is sequentially compact, therefore it is compact, and hence closed.</p> <p>I'm just mostly unsure of how to show the first part. Thanks for any comments.</p>
Community
-1
<p>For $k+1$ it is equivalent to</p> <p>$$ \frac{4^k}{k+1}&lt;\frac{(2k+1)(k+2)(2k)!}{2(k+1)^2(k!)^2}$$</p> <p>It is enough to prove that</p> <p>$$\frac{(2k+1)(k+2)}{2(k+1)^2}&gt;1$$</p> <p>Notice that the above fraction is monotonically decreasing and its limit at $k\to\infty$ is $1$, therefore it is always greater than $1$.</p> <p>And also, you have a mistake, as Harry noticed, $(2k)!\neq2k!$.</p> <h1>Edit</h1> <p>I have no idea what Jaroslaw is talking about. Here is a step-by-step solution. Suppose it is true for $k$. Then it is true for $k+1$ if and only if</p> <p>$$\frac{4^{k+1}}{k+2}&lt;\frac{(2k+2)!}{\left((k+1)!\right)^2}\\\frac{4^k}{k+1}\cdot\frac{4(k+1)}{k+2}&lt;\frac{(2k+2)(2k+1)(2k)!}{(k+1)^2(k!)^2}\\\frac{4^k}{k+1}\cdot\frac{4(k+1)}{k+2}&lt;\frac{(2k)!}{(k!)^2}\cdot\frac{(2k+2)(2k+1)}{(k+1)^2}$$</p> <p>It is enough to prove that:</p> <p>$$\frac{4(k+1)}{k+2}&lt;\frac{(2k+2)(2k+1)}{(k+1)^2}$$</p> <p>It is equivalent to</p> <p>$$\frac{(2k+2)(2k+1)}{(k+1)^2}&gt;\frac{4(k+1)}{k+2}\\\frac{2(k+1)(2k+1)(k+2)}{4(k+1)(k+1)^2}&gt;1\\\frac{(2k+1)(k+2)}{2(k+1)^2}&gt;1$$</p>
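<p>Both the target inequality and the key fraction estimate are cheap to verify numerically (the strict inequality starts at $n=2$; at $n=1$ the two sides are equal):</p>

```python
from math import comb
from fractions import Fraction

# the target 4^n/(n+1) < C(2n, n), cleared of denominators, checked directly
ineq_ok = all(4**n < (n + 1) * comb(2 * n, n) for n in range(2, 200))

# the key fraction (2k+1)(k+2) / (2(k+1)^2): always > 1 and decreasing
frac = [Fraction((2 * k + 1) * (k + 2), 2 * (k + 1)**2) for k in range(1, 100)]
frac_ok = (all(q > 1 for q in frac)
           and all(a >= b for a, b in zip(frac, frac[1:])))
```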
429,178
<p>Consider the following function $$ \Pi(t) = \begin{cases} 1 \ \ \text{if} \ |t| &lt; 1/2 \\ 0 \ \ \text{if} \ |t| \geq 1/2 \end{cases} $$</p> <p>This is not a periodic function. Why can we define the fourier coefficients of $\Pi(t)$ as $$c_n = \frac{1}{T} \int_{-T/2}^{T/2} e^{-2 \pi int/T} \Pi(t) \ dt $$</p> <p>$$ = \frac{1}{T} \int_{-1/2}^{1/2} e^{-2 \pi int/T} \Pi(t) \ dt $$</p> <p>if the function is not periodic? I thought fourier coefficients were only defined for periodic functions?</p>
John Gowers
26,267
<p>You've used the variable $T$ in both expressions without telling us what it represents. If we interpret $T$ as the period in the Fourier expansion, and compute</p> <p>$$ \sum_{n=-\infty}^\infty c_ne^{2\pi int/T} $$</p> <p>with $c_n$ defined as in your question, we get the periodic function which agrees with your function on the interval $[-T/2,T/2]$ and is periodic with period $T$. So the sum would look like some sort of square wave (unless $T\le1$ in which case it would be the constant function $1$). </p> <p>In principle, there is nothing that says you can't define the Fourier coefficients for arbitrary (not necessarily periodic) functions, but the Fourier coefficients are only meaningful if you specify the period you want to sum over. </p> <p>In this case, you've only integrated from $-T/2$ to $T/2$, so the only information you've used about the function $\Pi$ is the values it takes on the interval $[-T/2,T/2]$. So the Fourier coefficients you get do not give you the function $\Pi$ when you sum over them, but give you the unique function that: </p> <ul> <li>agrees with $\Pi$ on the interval $[-T/2,T/2]$</li> <li>is periodic with period $T$ (I'm assuming that's what $T$ is).</li> </ul> <p>You could change the behaviour of $\Pi$ outside $[-T/2,T/2]$ without changing either of the integrals in your question, so, as far as the Fourier series is concerned, you might as well have been integrating over the periodic function rather than the non-periodic $\Pi$. </p> <p><strike>Incidentally, your second integral in the question is wrong. It should read: </p> <p>$$ c_n=\int_{-1/2}^{1/2}e^{-2\pi int}\Pi(Tt)dt $$</p> <p>I'd edit, but it's more useful for you to see where the error is in your question. I'm also not convinced this integral is actually easier to solve, either. </strike></p>
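<p>The effect is easy to see numerically. With $T=2$, building partial sums from these coefficients reproduces the $T$-periodic square wave that agrees with $\Pi$ on $[-T/2,T/2]$, not $\Pi$ itself (a Python sketch; the closed form for $c_n$ follows by direct integration, and $N$ and the sample points are arbitrary choices):</p>

```python
import math

T = 2.0

def c(n):
    # c_n = (1/T) * integral_{-1/2}^{1/2} e^{-2 pi i n t / T} dt, in closed form
    if n == 0:
        return 1.0 / T
    return math.sin(math.pi * n / T) / (math.pi * n)

def partial_sum(t, N=1000):
    # real form of sum_{n=-N}^{N} c_n e^{2 pi i n t / T} (c_n is real and even)
    s = c(0)
    for n in range(1, N + 1):
        s += 2 * c(n) * math.cos(2 * math.pi * n * t / T)
    return s

inside  = partial_sum(0.0)   # Pi(0) = 1, and the sum agrees here
outside = partial_sum(0.8)   # Pi(0.8) = 0, and the sum agrees here too
shifted = partial_sum(2.0)   # but Pi(2) = 0, while the T-periodic sum is ~1
```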
1,902,939
<p>I was going through the problem attached herewith. The fact is I could not understand two things. </p> <p>Firstly, how did the author come up with the idea of choosing the event sets the way he has described? How is this intuitive?</p> <p>Secondly, I tried to find the $P(A_1)$ directly, and I came up with the following way.</p> <p>$P(A_1)=(C(4,2)\cdot 2!\cdot(14!/(4!^2\cdot3!^2)))/(16!/(4!^4))=12/15$.</p> <p>I cannot intuitively follow the procedure given here. </p> <p>Any help?</p> <p><a href="https://i.stack.imgur.com/3Rj7a.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3Rj7a.png" alt="enter image description here"></a> </p>
Brian M. Scott
12,042
<p>An alternative approach is to observe that since $ab\in(aH)(bH)$, we must have $(aH)(bH)=abH$. We can now define an operation $*$ on the set $\mathscr{L}$ of left cosets of $H$ by $aH*bH=abH$. It's straightforward to check that this operation is a well-defined group operation on $\mathscr{L}$ with identity $H=1_GH$ and inverses given by $(aH)^{-1}=a^{-1}H$. The map $h:G\to\mathscr{L}:a\mapsto aH$ is therefore a homomorphism, and since $\ker h=H$, $H$ must be normal in $G$.</p>
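The identity $(aH)(bH)=abH$ is easy to test by brute force on a small example (a plain-Python experiment with $S_3$, whose alternating subgroup is normal but whose two-element subgroups are not; the setup is illustrative, not part of the argument above):

```python
from itertools import permutations

def compose(p, q):
    # composition of permutations given as tuples: (p after q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(p)))

def sign(p):
    # parity via inversion count
    inv = sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p)))
    return 1 if inv % 2 == 0 else -1

G = list(permutations(range(3)))        # S_3
A3 = [p for p in G if sign(p) == 1]     # the alternating subgroup (normal)
H2 = [(0, 1, 2), (1, 0, 2)]             # {e, (0 1)}: not normal

def product_is_coset(G, H):
    # is (aH)(bH) = abH as sets, for every a, b in G?
    for a in G:
        for b in G:
            aH = [compose(a, h) for h in H]
            bH = [compose(b, h) for h in H]
            prod = {compose(u, v) for u in aH for v in bH}
            if prod != {compose(compose(a, b), h) for h in H}:
                return False
    return True

print(product_is_coset(G, A3))  # True: for a normal subgroup the product of cosets is a coset
print(product_is_coset(G, H2))  # False: fails for a non-normal subgroup
```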
1,504,888
<p>Let $R$ be an integral domain. The following is a well known fact:</p> <blockquote> <p>Let $a,b \in R$. Then $(a)=(b)$ if and only if there exists a unit $u \in R$ such that $a=ub$.</p> </blockquote> <p>I would like to generalize this result for ideals which are generated by two elements. So the question is:</p> <blockquote> <p>Let $a,a',b,b' \in R$. Does there exist some property of these four elements which is equivalent to having $(a,b)=(a',b')$?</p> </blockquote> <p>I know that maybe this question is a bit vague; I hope that somebody could give me a reference or hint.</p> <p>EDIT: As Mathmo123's comment suggests, maybe the condition could be</p> <blockquote> <p>There exists $U \in GL_2(R)$ such that $$\left[ \begin{matrix} a \\ b\end{matrix} \right] = U \left[ \begin{matrix} a' \\ b'\end{matrix} \right]$$</p> </blockquote> <p>Clearly, this is a sufficient condition. But is it necessary as well?</p>
Remy
284,272
<p>We consider the following condition.</p> <blockquote> <p><strong>Condition.</strong> There exists $A \in \operatorname{GL}_2(R)$ such that $$\begin{pmatrix} a \\ b\end{pmatrix} = A \begin{pmatrix} a' \\ b'\end{pmatrix}.$$</p> </blockquote> <p>The sufficiency of the condition has already been noted, so I will only consider its necessity.</p> <blockquote> <p><strong>Lemma.</strong> If $R$ is a Dedekind domain, then the condition is necessary.</p> </blockquote> <p><em>Proof.</em> Let $I = (a,b) = (a',b')$. If $I = 0$, we are trivially done. If $I \neq 0$, then $I$ is locally free of rank $1$: every localisation is a DVR, so $I_\mathfrak p \subseteq R_\mathfrak p$ is principal, hence free of rank 1. Thus, $I$ is projective, and the sequence $$0 \to K \to R^2 \stackrel{(a,b)}\longrightarrow I \to 0$$ splits. Hence, $K$ is projective as well, and taking determinants (top wedges) we get $K \otimes I = R$. We get the same for $K'$ (replacing $(a,b)$ by $(a',b')$), so we deduce that $K \cong K'$ (abstractly, not necessarily as submodules of $R^2$), since $I$ is invertible. We get a commutative diagram $$\begin{array}{ccccccccc} 0 &amp; \xrightarrow{} &amp; K &amp; \xrightarrow{} &amp; R^2 &amp; \xrightarrow{} &amp; I &amp; \xrightarrow{} &amp; 0\\ &amp; &amp; \downarrow &amp; &amp; &amp; &amp; \downarrow &amp; &amp; \\ 0 &amp; \xrightarrow{} &amp; K' &amp; \xrightarrow{} &amp; R^2 &amp; \xrightarrow{} &amp; I &amp; \xrightarrow{} &amp; 0 \end{array}$$ where the two vertical arrows are isomorphisms. Choosing splittings of the top and bottom rows, we can construct an isomorphism $R^2 \to R^2$ making the diagram commutative.</p> <p><strong>Remark.</strong> Even for $R = k[x,y]$, the condition is not necessary! Indeed, consider the ideal $I = (x^2,y) = (x^2, (x+1)y)$. 
Then a choice of transition matrices (for the two directions) is given by \begin{align*} \begin{pmatrix}1 &amp; 0 \\ 0 &amp; x+1 \end{pmatrix}, &amp; &amp; \begin{pmatrix}1 &amp; 0 \\ y &amp; -x+1 \end{pmatrix}. \end{align*} We will show that there is no invertible matrix $A$ such that $A\ (x^2,y)^\top = (x^2, xy + y)^\top$. Indeed, any such matrix differs from the [first] matrix above by $$\begin{pmatrix}-yP &amp; x^2P \\ -yQ &amp; x^2Q \end{pmatrix}$$ for some $P, Q \in k[x,y]$, because $k[x,y]$ is a UFD. We have to show that no such matrix can be invertible. But if we reduce mod $y$, we get the matrix $$\begin{pmatrix}1 &amp; x^2P \\ 0 &amp; 1+x+x^2Q \end{pmatrix}.$$ The determinant is $1+x+x^2Q$, which can never be a unit in $k[x]$.</p> <p><strong>Remark.</strong> So we see that our condition is the correct one for Dedekind domains, and is too strong even for a UFD of dimension $2$. There are a few directions we could take from here:</p> <ul> <li>Try to find a better criterion. I'm not sure if we will be able to come up with one, though.</li> <li>Analyse the problem for more than $2$ generators. It seems that we have fundamentally used the fact that $K$ is locally free of rank $1$ to conclude that $K \cong K'$; thus one would imagine that there might exist counterexamples for Dedekind domains (maybe even for a PID) for more than $2$ generators.</li> </ul> <p><strong>Edit:</strong> Maybe I should comment a bit on where this example comes from. The ideals $(a,b), (a,c)$ coincide if and only if $\bar b, \bar c \in R/(a)$ differ by a unit. Thus, a lift of the unit and its inverse give lower triangular matrices taking one vector to the other. If $R$ is a UFD, then any two matrices taking $(a,b)$ to $(a,c)$ differ by a matrix of the form $$\begin{pmatrix}-bP &amp; aP \\ -bQ &amp; aQ \end{pmatrix}.$$ Reducing mod $b$, the same method as above applies whenever the unit that $\bar b$ and $\bar c$ differ by does not lift to an element whose reduction mod $b$ is a unit. 
In the example above, the unit is $x+1 \in k[x,y]/(x^2)$; none of its lifts become constant mod $y$.</p> <p>In summary, the obstruction is somehow related to liftability of units.</p> <p>If you want a more arithmetic example, you can take $(5,X) = (5, 2X) \subseteq \mathbb Z[X]$. The unit $2 \in \mathbb F_5[X]$ does not lift to any polynomial whose constant term is a unit in $\mathbb Z$.</p>
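The ideal equalities used in the remarks, such as $(x^2,y)=(x^2,(x+1)y)$, can be double-checked with a computer algebra system (a sketch using SymPy's Gröbner bases; reducing a polynomial to remainder $0$ against a Gröbner basis certifies ideal membership, and mutual inclusion gives equality):

```python
from sympy import symbols, groebner

x, y = symbols('x y')

I1 = [x**2, y]
I2 = [x**2, (x + 1)*y]

def contains_all(gens, polys):
    # p lies in the ideal generated by gens iff its remainder on
    # division by a Groebner basis of that ideal is 0
    G = groebner(gens, x, y, order='lex')
    return all(G.reduce(p)[1] == 0 for p in polys)

# mutual inclusion => the two ideals are equal
print(contains_all(I1, I2) and contains_all(I2, I1))  # True
```

(Indeed $y = y\cdot x^2 + (1-x)\cdot(x+1)y$, which is what the reduction finds.)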
1,851
<p>Is it possible to modify closing reasons menu? If so I would love to see a small addition, maybe others have other wishes too.</p> <p>So, the last <a href="https://mathematica.meta.stackexchange.com/q/1010/5478">Granular off-topic close reasons</a> lead to creating</p> <blockquote> <p>This question arises due to a simple mistake such as a trivial syntax error, incorrect capitalization, spelling mistake, or other typographical error and is unlikely to help any future visitors, or else it is easily found in the documentation.</p> <p>This question cannot be answered without additional information. Questions on problems in code must describe the specific problem and include valid code to reproduce it. Any data used for programming examples should be embedded in the question or code to generate the (fake) data must be included. </p> <p>The question is out of scope for this site. The answer to this question requires either advice from Wolfram support or the services of a professional consultant.</p> </blockquote> <p>What I'm missing is what is often done with "Off-topic / Other" and a comment:</p> <blockquote> <p>I'm voting to close this question as off-topic because in its form it is too localized and is unlikely to help future visitors. </p> </blockquote> <p>It is about questions that OP didn't even bothered to drop specific strings/labels etc. just dumping their whole daily code to the question.</p> <p>I'm using it quite often, and I've seen couple of users too. Can we include this in menu after reviewing the text?</p>
Mr.Wizard
121
<p>We (community moderators) can change the custom close reasons but I believe we are limited by the software to a total of three reasons, therefore to enact what you propose would require eliminating one of the existing and commonly used reasons. Any proposed change would need to clearly specify exactly how the other close reasons would be adapted to make room for this one before any meaningful community voting can take place.</p> <hr> <p><em>Edit:</em> It seems that there is precedent for the SE developers to extend the number of close reasons if there is a legitimate need; see:</p> <ul> <li><a href="https://meta.askubuntu.com/questions/6994/can-we-have-more-than-3-custom-close-reasons-pretty-please">Can we have more than 3 custom close reasons, pretty please?</a></li> </ul>
1,426,891
<blockquote> <p>Why does $$\left(\int_{-\infty}^{\infty}e^{-t^2}dt\right)^2 = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}e^{-(x^2 + y^2)}dx\,dy ?$$</p> </blockquote> <p>This came up while studying Fourier analysis. What's the underlying theorem?</p>
idm
167,226
<p>$$\left(\int_a^b f(x)\mathrm dx\right)^2=\left(\int_a^b f(x)\mathrm dx\right)\left(\int_a^b f(x)\mathrm dx\right)\underset{x\text{ is a dummy variable}}{=}\left(\int_a^b f(x)\mathrm dx\right)\left(\int_a^b f(y)\mathrm dy\right)$$$$\underset{\text{Fubini}}{=}\int_a^b\int_a^bf(x)f(y)\mathrm dx\mathrm dy$$</p>
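For the Gaussian case in the question, both sides evaluate to $\pi$, which is easy to confirm numerically (a SciPy sketch; the finite limits $\pm10$ stand in for $\pm\infty$, since the integrand is negligible beyond them):

```python
import numpy as np
from scipy.integrate import quad, dblquad

# left-hand side: (integral of e^{-t^2} dt)^2
one_d, _ = quad(lambda t: np.exp(-t**2), -10, 10)
lhs = one_d**2

# right-hand side: double integral of e^{-(x^2 + y^2)};
# dblquad's integrand takes (y, x)
rhs, _ = dblquad(lambda y, x: np.exp(-(x**2 + y**2)),
                 -10, 10, lambda x: -10, lambda x: 10)

print(lhs, rhs, np.pi)  # all three agree
```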
2,985,962
<p>I was asking my friends a riddle about identifying hats. Each person has to correctly identify the colour of their own hat that was put on their head randomly. There is no defined number of either colour. So they could be all white or all black or any combination in between.</p> <p>They gave an answer that gets 50% right but I fired back that getting 50% right is what you would expect, on average, for straight guesses. They claimed that that would depend on the colours of the hats that are on the people's heads. In other words, if everyone was wearing black then the 50% rule would not apply. This just doesn't "feel" right to me.</p> <p>Who is correct?</p> <p>Edit:</p> <p>This is the puzzle I asked. You have 100 people standing one behind the other such that the last person can see all the people in front of him/her and so on. So the last one sees 99 and the next sees 98 etc. They each have a hat put on their head which is black or white. They have no idea how many of each exist.</p> <p>Assuming they plan on a strategy in advance, how many can get their hat right? They said that the best way is for the back person to say the colour of the hat on person 99. Person 99 can say his colour. Then 98 will say the colour of the one in front etc. This way I am guaranteed at least 50 right and maybe more if two consecutive people have the same colour. My claim was that 50% guaranteed is the same as random (ignoring the extra lucky one if there are consecutive hats). Their counter-claim was that the 50% random guess would only be right if there were exactly 50 of each colour.</p>
T_M
562,248
<p>You are correct, 50% is what you would expect, on average, for straight guesses.</p> <p>Your friend is missing the point of saying &quot;expect, on average&quot; if they claim that what happens &quot;on average&quot; won't happen some of the time.</p> <p>The 50% rule does apply even if everyone is wearing black. It's the people that are guessing, not whoever put the hats on their heads. Each person is still guessing their own hat with a 50/50 chance of being correct, no matter what colour their hat is.</p>
2,495,789
<p>$$\begin{pmatrix}4&amp;-6&amp;2&amp;-6\\ -2&amp;3&amp;-1&amp;3\end{pmatrix}$$</p> <p>Am I supposed to move the $0$ to the other side of the equation and find the inverse of the matrix above, or is there another way?</p>
matteociccozzi
496,782
<p>Simply row reduce the coefficient matrix and let your free variables (columns without leading ones) be your parameters. Being a homogeneous system with 2 equations and 4 variables, you will have infinitely many solutions (since every homogeneous system has at least the trivial solution).</p>
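For the matrix in the question this works out as follows (a SymPy sketch): the second row is $-\tfrac12$ times the first, so there is a single leading one and three free variables, giving a three-parameter family of solutions.

```python
from sympy import Matrix, Rational

A = Matrix([[4, -6, 2, -6],
            [-2, 3, -1, 3]])

R, pivots = A.rref()
# one pivot in column 0; reduced row is [1, -3/2, 1/2, -3/2]
assert pivots == (0,)
assert R.row(0) == Matrix([[1, Rational(-3, 2), Rational(1, 2), Rational(-3, 2)]])

# three free variables -> three basis vectors for the solution space of A x = 0
basis = A.nullspace()
assert len(basis) == 3
for v in basis:
    assert A * v == Matrix([0, 0])

print(R)
```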
2,536,877
<p>I have to evaluate this integral: $$\int \frac{\tan^5 x}{\cos^3{x}} dx.$$</p> <p>I guess I should use substitution, but I don't know what to substitute =(</p>
José Carlos Santos
446,262
<p>Observe that$$\frac{\tan^5 x}{\cos^3x}=\frac{\sin^5x}{\cos^8x}=\frac{\sin x(1-\cos^2x)^2}{\cos^8x}.$$So, do the substitution $\cos x=t$ and $\sin x\,\mathrm dx=-\mathrm dt$.</p>
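Carrying the substitution through gives $-\int\frac{(1-t^2)^2}{t^8}\,\mathrm dt=\frac{1}{7t^7}-\frac{2}{5t^5}+\frac{1}{3t^3}+C$ with $t=\cos x$, which can be confirmed by differentiation (a SymPy sketch):

```python
from sympy import symbols, cos, tan, diff, simplify

x = symbols('x')

integrand = tan(x)**5 / cos(x)**3
# antiderivative from t = cos x, dt = -sin x dx:
#   -int (1 - t^2)^2 / t^8 dt = 1/(7 t^7) - 2/(5 t^5) + 1/(3 t^3)
F = 1/(7*cos(x)**7) - 2/(5*cos(x)**5) + 1/(3*cos(x)**3)

residual = diff(F, x) - integrand
print(simplify(residual))       # expect 0
print(residual.subs(x, 0.7))    # numerically ~ 0 as well
```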
125,645
<p>I stumbled upon the following integral:</p> <p>$$\int e^{-\left(\frac{\gamma}{2} + ip \right)^2} H_n \left( q-\frac{\gamma}{2} \right) H_n \left( q+\frac{\gamma}{2} \right) \, d\gamma \, ,$$</p> <p>and I really need a closed form of it. I'm pretty sure it exists, and probably in terms of $\Gamma$ functions, since</p> <p>$$ \int e^\frac{-\gamma^2}{2} H_n(\gamma) d\gamma = 4^n \sqrt{2} \,\Gamma \left( n + \frac{1}{2} \right) \, .$$</p> <p>The problem is that <em>Mathematica</em> can't compute this in closed form, although it can do it term by term and hint that a closed expression probably exists:</p> <pre><code>Table[Integrate[ HermiteH[i, q + 1/2 \[Gamma]] HermiteH[i, q - 1/2 \[Gamma]] Exp[-(\[Gamma]/2 + I p)^2], {\[Gamma], -Infinity, Infinity}],{i,0,2}] </code></pre> <p>gives</p> <p>$$\left\{2 \sqrt{\pi },4 \sqrt{\pi } \left(2 p^2+2 q^2-1\right),16 \sqrt{\pi } \left(2 \left(p^4+2 p^2 \left(q^2-1\right)+q^4\right)-4 q^2+1\right)\right\} \, .$$</p> <p>Can someone give me a tip?</p>
Jens
245
<p>This is an integral I happen to have worked with before. The answer is a Laguerre polynomial, as can be verified like this for the first three indices as in the question:</p> <pre><code>ll = 2 Table[LaguerreL[i, 2 (p^2 + q^2)] (-2)^i Sqrt[Pi] i!, {i, 0, 2}]; l1 = Table[ Integrate[ HermiteH[i, q + 1/2 γ] HermiteH[i, q - 1/2 γ] Exp[-(γ/2 + I p)^2], {γ, -Infinity, Infinity}], {i, 0, 2}]; Simplify[ll/l1] (* ==&gt; {1, 1, 1} *) </code></pre> <p>Here, <code>ll</code> is my closed form, and <code>l1</code> is the list copied from the question. I verify that they give the same answers by forming the ratios.</p> <p>The formula originally comes from Gradshtein and Ryzhik, equation 7.377, and I used it in my very first paper, <a href="http://pages.uoregon.edu/noeckel/papers/" rel="nofollow">Conductance quantization and backscattering</a> below equation (13) to expand a shifted harmonic oscillator function in terms of the unshifted one. That's why I remembered where to look this up...</p>
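The closed form can also be cross-checked outside Mathematica (a SciPy sketch; the function names are mine). Note that $\operatorname{Re}\,e^{-(\gamma/2+ip)^2}=e^{p^2-\gamma^2/4}\cos(\gamma p)$, while the imaginary part is odd in $\gamma$ and integrates to zero:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_hermite, eval_laguerre, factorial

def lhs(n, p, q):
    # real part of the original integrand; imaginary part is odd in g
    f = lambda g: (np.exp(p**2 - g**2 / 4) * np.cos(g * p)
                   * eval_hermite(n, q - g / 2) * eval_hermite(n, q + g / 2))
    val, _ = quad(f, -40, 40, limit=200)
    return val

def rhs(n, p, q):
    # the closed form: 2 (-2)^n sqrt(pi) n! L_n(2 (p^2 + q^2))
    return 2 * (-2)**n * np.sqrt(np.pi) * factorial(n) * eval_laguerre(n, 2 * (p**2 + q**2))

for n in range(4):
    print(n, lhs(n, 0.3, 0.7), rhs(n, 0.3, 0.7))
```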
3,247,923
<p>I’ve tried splitting this into the sum of two Chinese Dumbass triangles:</p> <pre><code>      0
   -5    0
 0    15    -5
0   -5    0    0
</code></pre> <p>and</p> <pre><code>      1
    2    2
  2   -15    2
1    2    2    1
</code></pre> <p>And that fails. Any hint or help would be appreciated!</p>
Michael Rozenberg
190,319
<p>Let <span class="math-container">$a=\min\{a,b,c\}$</span>, <span class="math-container">$b=a+u$</span> and <span class="math-container">$c=a+v$</span>.</p> <p>Thus, by AM-GM we obtain: <span class="math-container">$$\sum_{cyc}a(a-b)(a-2b)=2a(u^2-uv+v^2)+u^3-3u^2v+2uv^2+v^3\geq$$</span> <span class="math-container">$$\geq4\left(\frac{u^3}{4}\right)+uv^2+uv^2+v^3-3u^2v\geq7\sqrt[7]{\left(\frac{u^3}{4}\right)^4\left(uv^2\right)^2v^3}-3u^2v=\left(\frac{7}{\sqrt[7]{256}}-3\right)u^2v\geq0.$$</span></p>
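A quick numerical sanity check of the inequality $\sum_{cyc}a(a-b)(a-2b)\ge0$ for nonnegative reals (plain Python; random sampling of course proves nothing, but it guards against sign slips in the algebra above — note the proof uses $a\ge0$ when discarding the term $2a(u^2-uv+v^2)$):

```python
import random

def cyclic_sum(a, b, c):
    # sum over cyclic shifts of a(a - b)(a - 2b)
    return (a*(a - b)*(a - 2*b)
            + b*(b - c)*(b - 2*c)
            + c*(c - a)*(c - 2*a))

rng = random.Random(0)
worst = min(cyclic_sum(rng.uniform(0, 10), rng.uniform(0, 10), rng.uniform(0, 10))
            for _ in range(100000))
print(worst)  # stays nonnegative, up to floating-point rounding
```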
764,973
<p>Here is the question: </p> <p>$$ {\rm y}''\left(t\right) + 2\,{\rm y}'\left(t\right) + 5\,{\rm y}\left(t\right) = 0; \qquad\qquad {\rm y}\left(0\right) = 2\,,\quad {\rm y}'\left(0\right) = -1. $$</p> <ul> <li>$\mathcal{L} (y''(t)) = s^2Y(s) -s y(0) -y'(0)$</li> <li>$\mathcal{L} (+2y'(t)) = 2(sY(s) -y(0))$</li> <li>$\mathcal{L} (5y(t)) = 5Y(s)$</li> </ul> <p>I find that </p> <p>$Y(s)=\dfrac{2s+3}{s^{2}+2s+5}$</p> <p>The denominator is irreducible, so I write the transform as a function of $\varepsilon = s + 1$.</p> <p>$Y=\dfrac{2\varepsilon+1}{\varepsilon^2+4}$</p> <p>I split the fraction into two terms, then use the Laplace transform table and find the result:</p> <p>$y(t)=e^{-t} (2\cos{2t}+\sin{2t})$, but the given answer has $\frac{1}{2}$ before $\sin{2t}$. What am I missing here? Where does the $\frac{1}{2}$ come from?</p>
Amzoti
38,839
<p>If the DE is written correctly, we should have arrived at:</p> <p>$$Y(s)=\dfrac{2s+3}{s^{2}+2s+5}$$</p> <p>This yields:</p> <p>$$y(t) = \dfrac{1}{2} e^{-t} (\sin(2 t)+4 \cos(2 t))$$</p> <p>As for the missing $\frac{1}{2}$: the table entry is $\mathcal{L}\{\sin(2t)\}=\dfrac{2}{s^2+4}$, so $\dfrac{1}{\varepsilon^2+4}=\dfrac{1}{2}\cdot\dfrac{2}{\varepsilon^2+4}$ inverts to $\dfrac{1}{2}\sin(2t)$ (times $e^{-t}$ from the shift $\varepsilon=s+1$), not to $\sin(2t)$.</p>
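This is quick to confirm symbolically (a SymPy sketch): the candidate solution satisfies the ODE and both initial conditions.

```python
from sympy import symbols, exp, sin, cos, Rational, diff, simplify

t = symbols('t')
# same as (1/2) e^{-t} (sin 2t + 4 cos 2t)
y = exp(-t) * (2*cos(2*t) + Rational(1, 2)*sin(2*t))

assert simplify(diff(y, t, 2) + 2*diff(y, t) + 5*y) == 0  # satisfies y'' + 2y' + 5y = 0
assert y.subs(t, 0) == 2                                  # y(0) = 2
assert diff(y, t).subs(t, 0) == -1                        # y'(0) = -1
print("solution checks out")
```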
3,997,110
<p>I am not very experienced with math and need help with a proof. I need to show that the pair of functions <span class="math-container">$t$</span> and <span class="math-container">$1/t$</span> defined for <span class="math-container">$t&gt;0$</span> are linearly independent. Is the following a sufficient proof? If so, is there a simpler way?</p> <p>Assume <span class="math-container">$at+b\frac{1}{t}=0$</span> is correct for any <span class="math-container">$t&gt;0$</span>. Substituting <span class="math-container">$t=1$</span> yields <span class="math-container">$a+b=0$</span> (Eq. 1). Substituting <span class="math-container">$t=2$</span> yields <span class="math-container">$2a+\frac{b}{2}=0$</span> (Eq. 2). Subtracting Eq. 1 from Eq. 2 yields <span class="math-container">$a-\frac{b}{2}=0$</span>, so <span class="math-container">$a=\frac{b}{2}$</span>. Substituting <span class="math-container">$a=\frac{b}{2}$</span> into Eq. 1 yields <span class="math-container">$\frac{3}{2}b=0$</span>, therefore <span class="math-container">$b=0$</span>, which implies <span class="math-container">$a=0$</span>.</p> <p>Since the only scalars <span class="math-container">$a, b$</span> such that <span class="math-container">$at+b\frac{1}{t}=0$</span> are <span class="math-container">$a=b=0$</span> then <span class="math-container">$t$</span> and <span class="math-container">$1/t$</span> are linearly independent.</p>
Nikola Ubavić
588,411
<p>Your proof is fine and straightforward.</p> <p>Maybe a little shorter proof would be to see that <span class="math-container">$\lim_{t\rightarrow\infty} at+b\frac{1}t$</span> can be zero if and only if <span class="math-container">$a=0$</span>. It follows immediately that <span class="math-container">$b=0$</span> too.</p>
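The two-point system from the question's proof can also be handed to a CAS (a SymPy sketch): the only solution of the evaluated equations is $a=b=0$.

```python
from sympy import symbols, solve, Rational

a, b = symbols('a b')

# a*t + b/t evaluated at t = 1 and t = 2, as in the question
eqs = [a + b, 2*a + Rational(1, 2)*b]
print(solve(eqs, [a, b]))  # {a: 0, b: 0}
```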
3,997,110
<p>I am not very experienced with math and need help with a proof. I need to show that the pair of functions <span class="math-container">$t$</span> and <span class="math-container">$1/t$</span> defined for <span class="math-container">$t&gt;0$</span> are linearly independent. Is the following a sufficient proof? If so, is there a simpler way?</p> <p>Assume <span class="math-container">$at+b\frac{1}{t}=0$</span> is correct for any <span class="math-container">$t&gt;0$</span>. Substituting <span class="math-container">$t=1$</span> yields <span class="math-container">$a+b=0$</span> (Eq. 1). Substituting <span class="math-container">$t=2$</span> yields <span class="math-container">$2a+\frac{b}{2}=0$</span> (Eq. 2). Subtracting Eq. 1 from Eq. 2 yields <span class="math-container">$a-\frac{b}{2}=0$</span>, so <span class="math-container">$a=\frac{b}{2}$</span>. Substituting <span class="math-container">$a=\frac{b}{2}$</span> into Eq. 1 yields <span class="math-container">$\frac{3}{2}b=0$</span>, therefore <span class="math-container">$b=0$</span>, which implies <span class="math-container">$a=0$</span>.</p> <p>Since the only scalars <span class="math-container">$a, b$</span> such that <span class="math-container">$at+b\frac{1}{t}=0$</span> are <span class="math-container">$a=b=0$</span> then <span class="math-container">$t$</span> and <span class="math-container">$1/t$</span> are linearly independent.</p>
Maths
740,715
<p>Well, they are actually not linearly independent. Take <span class="math-container">$t=1$</span> and <span class="math-container">$a=1;b=-1$</span></p> <p>and you have <span class="math-container">$1+(-1\cdot 1/1)$</span>.</p> <p>@billiam Thanks for your answer; if it is only positive, then it is quite trivial. Here is my previous answer. Well, I don't know whether it may help, but since <span class="math-container">$t&gt;0$</span> and <span class="math-container">$1/t&gt;0$</span>, a combination of $t$ and $1/t$ with nonnegative coefficients can only be zero if the coefficients are zero.</p> <p>But I could prove that <span class="math-container">$1/t$</span> is at least positive. First let's show that the square is always positive: if <span class="math-container">$0&lt;x$</span>, multiplying by <span class="math-container">$x$</span> gives <span class="math-container">$0&lt;x^2$</span>. Now, multiplying <span class="math-container">$0&lt;t$</span> by <span class="math-container">$(1/t)^2$</span> gives <span class="math-container">$0\cdot (1/t)^2&lt;t\cdot(1/t)^2$</span>, i.e. <span class="math-container">$0&lt;1/t$</span>. Hope that helps.</p> <p>For any positive scalars the function, as it takes only positive values, is positive again.</p>
1,001,913
<p>If we want to graph a horizontal line, we will do the following:</p> <pre><code>y = 0x + 3 </code></pre> <p>No matter the domain for x, the range for y will always be 3. Therefore, we have a horizontal line.</p> <pre><code>y = 0(0) + 3 = (0,3) y = 0(1) + 3 = (1,3) y = 0(2) + 3 = (2,3) </code></pre> <p>Now the formula to graph a vertical line looks like this:</p> <pre><code>x = 3 </code></pre> <p>Well, wait a second. Where is the y? I would like to see the y in the equation. But it is missing. How can I write the equation for a vertical line that includes the y variable? This is all I can think of:</p> <pre><code>x = 0y + 3 </code></pre> <p>And with the following domain:</p> <pre><code>x = 0(0) + 3 x = 0(1) + 3 x = 0(2) + 3 </code></pre> <p>Is this correct? Is it ok to reverse the x and y, as I just did above? Or does this not make it a slope-intercept equation anymore? It should still be a linear equation, since the variables are raised to the first power, in my opinion. But the slope-intercept form looks like this: y = mx + b. So I am not sure if this is still a slope-intercept equation. </p>
Marko Riedel
44,883
<p>By way of enrichment here is another algebraic proof using basic complex variables.</p> <p>We seek to compute $$\sum_{t=0}^{n-m} {t+k\choose t} {n-k-t\choose n-m-t}.$$</p> <p>Introduce the integral representation $${n-k-t\choose n-m-t} = \frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{(1+z)^{n-k-t}}{z^{n-m-t+1}} \; dz.$$</p> <p>We use this to obtain an integral for the sum. Note that when $t&gt;n-m$ we have $1&gt;n-m-t+1$ which means that the integral is zero (entire function). Therefore we may extend the sum to infinity, getting $$\frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{(1+z)^{n-k}}{z^{n-m+1}} \sum_{t\ge 0} {t+k\choose t} \frac{z^t}{(1+z)^t}\; dz.$$</p> <p>This simplifies to $$\frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{(1+z)^{n-k}}{z^{n-m+1}} \frac{1}{(1-z/(1+z))^{k+1}} \; dz \\ = \frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{(1+z)^{n-k}}{z^{n-m+1}} \frac{(1+z)^{k+1}}{(1+z-z)^{k+1}} \; dz \\ = \frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{(1+z)^{n+1}}{z^{n-m+1}} \; dz.$$</p> <p>This integral evaluates by inspection to $${n+1\choose n-m} = {n+1 \choose m+1}.$$</p> <p>Apparently this method is due to Egorychev.</p>
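The identity evaluated by the residue computation can be spot-checked directly (plain Python, using exact integer binomials; restricting to $0\le k\le m\le n$ keeps every binomial's arguments nonnegative):

```python
from math import comb

def lhs(n, m, k):
    # sum_{t=0}^{n-m} C(t+k, t) C(n-k-t, n-m-t)
    return sum(comb(t + k, t) * comb(n - k - t, n - m - t)
               for t in range(n - m + 1))

# compare with C(n+1, m+1) on a grid of small parameters
ok = all(lhs(n, m, k) == comb(n + 1, m + 1)
         for n in range(12) for m in range(n + 1) for k in range(m + 1))
print(ok)  # True
```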
1,816,783
<p>When can one conclude that a character table has non-real entries?</p> <p>In particular, by constructing the character table for $\mathbb{Z}/3\mathbb{Z}$ or $A_4$ how does one determine that some of the entries will be nonreal? Is the reason that the same complex values in the table for $\mathbb{Z}/3\mathbb{Z}$ also appear in the table for $A_4$ because $A_4 / \{1,(12)(34),(13)(24),(14)(23)\}\cong \mathbb{Z}/3\mathbb{Z}$?</p> <p>Here's what I have for $\mathbb{Z}/3\mathbb{Z}$.</p> <p><a href="https://i.stack.imgur.com/z8Oln.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/z8Oln.jpg" alt="enter image description here"></a> </p> <p>Using short orthogonality relations I conclude that $$1+a^2+b^2=1+c^2+d^2=1+a^2+c^2=1+b^2+d^2=3,$$ and $$1+a+b=1+c+d=1+ac+bd=0.$$ I don't see how to conclude from this that $a=d=\omega$ and $b=c=\bar{\omega}=\omega^2$, where $\omega=e^{2\pi i/3}.$</p>
P Vanchinathan
28,915
<p>If an element of the group and its inverse are in the same conjugacy class of a group, then the value of any character at that element will be a real number.</p> <p>In the symmetric groups $S_n$ any element and its inverse have the same cycle decomposition structure and hence they are conjugates. </p>
234,435
<p>This is probably easy, but I can't think of an answer. Assume $X$ is a Banach space and $A$ is a (not assumed closed) subspace of $X$. Let $T:X \to X$ be a bounded linear operator, which is also injective. If $T(A)$ is dense in $X$, does it follow that $A$ is dense in $X$? </p>
Bill Johnson
2,554
<p>No, not even if $X=\ell_2$. Let $A$ be the closed span of $(e_n)_{n=2}^\infty$ and map $A$ to a proper dense subspace and $e_1$ to a vector not in that subspace.</p>
234,435
<p>This is probably easy, but I can't think of an answer. Assume $X$ is a Banach space and $A$ is a (not assumed closed) subspace of $X$. Let $T:X \to X$ be a bounded linear operator, which is also injective. If $T(A)$ is dense in $X$, does it follow that $A$ is dense in $X$? </p>
Nate Eldredge
4,832
<p>Here's an explicit example for what Bill's answer proposes.</p> <p>Let $T_1$ be the operator which maps $e_n$ to $4^{-n} e_{n-1}$ for $n \ge 2$ and maps $e_1$ to $0$. That is, $T_1 \left(\sum_{n \ge 1} a_n e_n\right) = \sum_{n \ge 1} a_{n+1} 4^{-(n+1)} e_n$. Note $T_1$ is injective on $A$, and $T_1 A$ is dense (it contains $c_{00}$).</p> <p>Let $y = \sum_{n} 2^{-n} e_n$. Note $y \notin T_1 A$. Define $T$ by $T e_n = 4^{-n} e_{n-1}$ for $n \ge 2$ and $T e_1 = y$. Now we still have that $TA = T_1 A$ is dense. To see $T$ is injective, suppose $Tx = 0$. Then we can write $x$ uniquely as $x = a_1 e_1 + u$ where $u \in A$. Thus $T x = a_1 y + T_1 u$. But $T_1 A$ is a vector space not containing $y$, so if $Tx=0$ we conclude that $a_1 = 0$ and $T_1 u = 0$. But $T_1$ was injective on $A$ so $u = 0$. Thus $x=0$.</p>
1,136,347
<p>I'm trying to compute</p> <p>$$ \omega^{3n/2 + 1} + \omega $$</p> <p>where $\omega$ is one of the $n^{th}$ roots of unity and $n$ is a multiple of $4$. Could anyone demonstrate how to do this?</p>
Sai
212,299
<p>Since $n=4k$, the exponent is $\frac{3n}{2}+1=6k+1$, and $\omega^{4k}=\omega^{n}=1$, so we can write the expression as $\omega^{6k+1}+\omega=\omega^{2k+1}+\omega=\omega(1+\omega^{2k})$. But now $\omega^{2k}$ is a square root of $1$, and (assuming $\omega$ is a primitive $n^{th}$ root of unity) it is not equal to $1$. So it is definitely $-1$. Hence the expression is $0$.</p>
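A quick numeric check with primitive roots (Python's `cmath`; the last line shows that for a non-primitive root like $\omega=1$ the expression need not vanish):

```python
import cmath

def expr(w, n):
    # w^(3n/2 + 1) + w, for n a multiple of 4
    return w**(3*n//2 + 1) + w

for n in (4, 8, 12, 16):
    w = cmath.exp(2j * cmath.pi / n)   # a primitive n-th root of unity
    print(n, abs(expr(w, n)))          # ~ 0 in every case

print(expr(1, 4))  # omega = 1 is a (non-primitive) 4th root, and the sum is 2
```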
2,354,094
<p>I've been stumped by this problem:</p> <blockquote> <p>Find three non-constant, pairwise unequal functions $f,g,h:\mathbb R\to \mathbb R$ such that $$f\circ g=h$$ $$g\circ h=f$$ $$h\circ f=g$$ or prove that no three such functions exist.</p> </blockquote> <p>I highly suspect, by now, that no non-trivial triplet of functions satisfying the stated property exists... but I don't know how to prove it.</p> <p>How do I prove this, or how do I find these functions if they do exist?</p> <p>All help is appreciated!</p> <p>The functions should also be continuous.</p>
Ross Millikan
1,827
<p>A simple answer that almost works is $f(x)=g(x)=-x,h(x)=x$ with the only problem being $f=g$ but we can patch that up by making each one the identity on different parts of the real line. Split the reals into $|x| \gt 2,1 \le |x| \le 2, |x| \lt 1$. Make each the identity on one part and $-x$ on the other two. </p> <p>Explicitly $$f(x)=\begin {cases} x&amp; x \lt -2\\-x &amp; -2 \le x \le 2 \\x &amp; x \gt 2 \end {cases}\\g(x)=\begin {cases} -x&amp; x \lt -2\\x &amp; -2 \le x \le -1 \\-x &amp; -1 \lt x \lt 1\\x&amp;1 \le x \le 2\\-x&amp;2 \lt x \end {cases}\\h(x)=\begin {cases} -x&amp; x \le -1\\x &amp; -1 \lt x \lt 1 \\-x &amp; x \ge 1 \end {cases}$$</p>
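These piecewise definitions can be checked mechanically (a plain-Python transcription, taking $h(x)=-x$ for $x\ge1$, which is the value forced at the boundary by $f(g(1))=-1$; the sample points include the breakpoints $\pm1,\pm2$):

```python
def f(x):
    return x if abs(x) > 2 else -x        # identity on |x| > 2

def g(x):
    return x if 1 <= abs(x) <= 2 else -x  # identity on 1 <= |x| <= 2

def h(x):
    return x if abs(x) < 1 else -x        # identity on |x| < 1

pts = [-3.5, -2, -1.5, -1, -0.5, 0, 0.5, 1, 1.5, 2, 3.5]
assert all(f(g(x)) == h(x) for x in pts)
assert all(g(h(x)) == f(x) for x in pts)
assert all(h(f(x)) == g(x) for x in pts)
print("all three composition identities hold")
```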
2,829,121
<p>I get how to derive the ellipse equation, but I'm struggling to understand what it means intuitively. </p> <p>You see, a circle equation can be understood very intuitively. The circle equation models how the radius of the circle can be represented using the Pythagorean theorem. But I don't understand what the ellipse equation means at such a level. Does it model how an ellipse can be drawn out using a stretched rope? What exactly does it model? Can someone please explain? </p> <p>Can you please explain it as simply as possible, as I'm still a beginner? </p>
orion2112
518,984
<p>(This really should be a comment but I needed more space, apologies).</p> <p>You seem to think (reading the previous comments) that the ellipse's equation should "instruct" us, in a step-by-step manner, how to draw the ellipse. That doesn't have to be the case.</p> <p>In fact, let us invent a new relation between $x$ and $y$:</p> <p>$e^x-y=\sin\left(x\cdot y\right)$</p> <p>There are points $(x,y)$ in the plane that satisfy the above equation and they align on a curve we might as well call a <em>zwiggle</em>. See the <a href="http://www.wolframalpha.com/input/?i=e%5Ex%E2%88%92y%3Dsin(x%E2%8B%85y)" rel="nofollow noreferrer">WolframAlpha</a> graph or try Desmos or something similar if you are curious what it looks like.</p> <p>Is it obvious what shape zwiggles look like? No. Does it have to be? No. So... what <em>is</em> a zwiggle? It's just the set of points that satisfy $e^x-y=\sin(x\cdot y)$.</p> <p>Now, is it obvious what curve we get with:</p> <p>$\dfrac{(x-2)^2}{81}+\dfrac{(y+1)^2}{25}=1$</p> <p>Well, to a trained eye, it might be obvious that it is an ellipse centered at $(2,-1)$ with horizontal axis of length $18$ and vertical axis of length $10$ but otherwise, if you don't recognize the equation of an ellipse, you can just tell yourself that the relation <em>encodes</em> a bunch of points in the plane. That "bunch of points" is in fact called the <em>locus</em> of the equation. And in this case, the locus is so popular we have a name for it (ellipse). It just so happens that the ellipse's equation in relation to its shape is something less intuitive than the circle, but more intuitive than the zwiggle.</p> <p>I know this doesn't quite answer your question (and as I said, this should be a comment), but others have already posted plenty of useful information about ellipses that should give you better intuition. 
I just hope this helps you see that sometimes, mathematical relations don't really translate to something "geometrically obvious" and you just need to think of the curve as something more abstract.</p>
2,829,121
<p>I get how to derive the ellipse equation, but I'm struggling to understand what it means intuitively. </p> <p>You see, a circle equation can be understood very intuitively. The circle equation models how the radius of the circle can be represented using the Pythagorean theorem. But I don't understand what the ellipse equation means at such a level. Does it model how an ellipse can be drawn out using a stretched rope? What exactly does it model? Can someone please explain? </p> <p>Can you please explain it as simply as possible, as I'm still a beginner? </p>
Blue
409
<p>There is no single equation for an ellipse, just as there is no single equation for a line. We choose a form to highlight information of interest in the current context.</p> <p>Consider this sampling of ways to write the equation of a line:</p> <blockquote> <p>$$\begin{array}{rcccl} \text{slope-intercept} &amp;\qquad&amp; y = m x + b &amp;\qquad&amp; \begin{array}{rl} m:&amp;\text{slope} \\ b:&amp;y\text{-intercept} \end{array} \\[8pt] \text{intercept-intercept} &amp;&amp; \dfrac{x}{a} + \dfrac{y}{b} = 1 &amp;&amp; \begin{array}{rl} a:&amp; x\text{-intercept} \\ b:&amp; y\text{-intercept} \end{array} \\[8pt] \text{normal} &amp;&amp; x \cos\theta + y\sin\theta = d &amp;&amp; \begin{array}{rl} \theta:&amp; \text{direction of normal} \\ d :&amp;\text{distance from origin} \end{array}\\[8pt] \text{point-slope} &amp;&amp; y-y_1= m (x-x_1) &amp;&amp; \begin{array}{rl} (x_1,y_1):&amp;\text{point on line} \\ m:&amp;\text{slope} \end{array} \\[8pt] \text{two-point} &amp;&amp; \dfrac{y-y_1}{x-x_1}=\dfrac{y_1-y_2}{x_1-x_2} &amp;&amp; \begin{array}{rl} (x_i,y_i):&amp;\text{points on line} \end{array}\\[8pt] \text{standard/general} &amp;&amp; A x + B y + C = 0 &amp;&amp; \end{array}$$</p> </blockquote> <p>Each form tells us something about the line's geometry. (The "general" form tells us that the line's geometry is unimportant.) Algebra lets us move from one form to another if and when our priorities change.</p> <p>Note that, since all the forms represent the same line, they must encode the same geometric information <em>somehow</em>. The encodings aren't always neat and tidy, though. For instance, we can manipulate the general form into slope-intercept ... $$A x + B y + C = 0 \qquad\to\qquad y = - \frac{A}{B} x - \frac{C}{B}$$ ... to see that the line's slope is $-A/B$, and its $y$-intercept is $-C/B$. Converting to intercept-intercept form tells us that the $x$-intercept is $-C/A$. 
Moreover, we can determine slope from the intercept-intercept form, or normal direction from the two-point form, ... <em>whatever</em>. Having the various forms available gives us flexibility in how we present that information. But I digress ...</p> <p>Likewise, we have a sampling of equational forms for an ellipse.</p> <blockquote> <p>$$\begin{array}{rcl} \text{foci and string} &amp; \begin{align} \sqrt{(x-x_1)^2+(y-y_1)^2} \qquad&amp;\\ + \sqrt{(x-x_2)^2+(y-y_2)^2} &amp;= 2 a \end{align} &amp; \begin{array}{rl} (x_i,y_i):&amp;\text{foci} \\ 2a:&amp;\text{string length} \end{array} \\[10pt] \text{standard} &amp; \dfrac{(x-x_0)^2}{a^2} + \dfrac{(y-y_0)^2}{b^2} = 1 &amp; \begin{array}{rl} (x_0,y_0):&amp;\text{center} \\ a:&amp;\text{horizontal radius} \\ b:&amp;\text{vertical radius} \end{array}\\[10pt] \text{focus-directrix} &amp; \begin{array}{c} \sqrt{(x-x_0)^2+(y-y_0)^2} \\ \qquad\qquad\qquad = e\;\dfrac{| a x + b y + c |}{\sqrt{a^2+b^2}} \end{array} &amp; \begin{array}{rl} (x_0,y_0):&amp;\text{focus} \\ ax+by+c=0:&amp;\text{directrix} \\ e:&amp;\text{eccentricity} \end{array}\\[10pt] \text{general} &amp; A x^2 + B xy + C y^2 + D x + E y + F = 0 &amp; \end{array}$$</p> </blockquote> <p>The "foci and string" form is the direct (dare I say, "intuitive"?) translation of the foci-and-string definition of the ellipse: <em>the sum of the distances from two points is a constant</em>. We tend not to see that form except as the point of departure on an algebraic journey to the "standard" form. That's because (1) the giant radical expressions are bulky, and (2) the standard form offers much more glance-able information about the geometry of the ellipse, and it has an all-around nicer algebraic nature. </p> <p>The upshot is that we have an equation to fit every way of looking at an ellipse, so that everyone's intuition is satisfied. 
And, again, having multiple forms available gives us flexibility in how we want to encode or present the geometric information we find most important to the task at hand.</p> <hr> <p>As an aside, I'll note that the lesser-used focus-directrix form of the equation is more versatile than the standard form, since it works for <em>every</em> conic section (except the circle). In particular, it can be convenient to remember that a parabola (which has eccentricity $1$) has this equation:</p> <p>$$(x-x_0)^2+(y-y_0)^2 = ( x\cos\theta + y\sin\theta -d )^2$$ where we've leveraged the normal form of the directrix equation to make things tidier.</p>
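A quick numeric cross-check (an added illustration, not part of the original answer): points sampled from the standard form $x^2/a^2+y^2/b^2=1$ with $a>b$ should have a constant sum of distances $2a$ to the foci $(\pm c,0)$, where $c=\sqrt{a^2-b^2}$.

```python
import math

def string_sum(x, y, a, b):
    """Sum of distances from (x, y) to the foci of x^2/a^2 + y^2/b^2 = 1, a > b."""
    c = math.sqrt(a * a - b * b)          # distance from the center to each focus
    return math.hypot(x - c, y) + math.hypot(x + c, y)

a, b = 5.0, 3.0
# Sample points on the ellipse via the parametrization (a cos t, b sin t).
sums = [string_sum(a * math.cos(t), b * math.sin(t), a, b)
        for t in (0.0, 0.7, 1.3, 2.9, 4.2)]
print(all(abs(s - 2 * a) < 1e-9 for s in sums))  # True: the "string length" is 2a
```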
2,829,121
<p>I get how to derive the ellipse equation, but I'm struggling to understand what it means intuitively. </p> <p>You see, a circle equation can be understood very intuitively. The circle equation models how the radius of the circle can be represented using the Pythagorean theorem. But I don't understand what the ellipse equation means at such a level. Does it model how an ellipse can be drawn out using a stretched rope? What exactly does it model? Can someone please explain? </p> <p>Can you please explain it as simply as possible, as I'm still a beginner? </p>
user5402
81,595
<p>Draw two concentric circles of centre $O$ and radii $a$ and $b$ ($a&lt;b$) in an orthonormal system of axes $(O,\textbf{i},\textbf{j})$. Every half-line from $O$ cuts the circles at points $M_a$ and $M_b$. Let $N_a$ and $N_b$ be the orthogonal projections of $M_a$ and $M_b$ onto the $x$ axis and the $y$ axis respectively. Denote by $\alpha$ and $\beta$ the measures of the angles $\widehat{N_aOM_a}$ and $\widehat{N_bOM_b}$.</p> <p>Since $\alpha+\beta=\dfrac{\pi}{2}$, we have $$\cos^2\alpha+\cos^2\beta=\dfrac{x^2}{a^2}+\dfrac{y^2}{b^2}=1$$</p> <p>The ellipse is the set of points $M$ obtained as the intersection of the lines $(M_aN_a)$ and $(M_bN_b)$.</p>
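The two-circle construction is easy to check numerically; a small Python sketch (an added illustration), taking $x=a\cos\theta$ from the projection of $M_a$ and $y=b\sin\theta$ from the projection of $M_b$:

```python
import math

def construct(theta, a, b):
    """Point built from the two concentric circles: the x-coordinate comes from
    the radius-a circle, the y-coordinate from the radius-b circle."""
    return a * math.cos(theta), b * math.sin(theta)

a, b = 2.0, 5.0    # a < b as in the answer
pts = [construct(t, a, b) for t in (0.2, 1.1, 2.0, 3.7, 5.5)]
print(all(abs((x / a) ** 2 + (y / b) ** 2 - 1.0) < 1e-12 for x, y in pts))  # True
```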
66,634
<p>Here is a quote from <em>Lectures on Ergodic Theory</em> by Halmos:</p> <blockquote> <p>I cannot resist the temptation of concluding these comments with an alternative "proof" of the ergodic theorem. If $f$ is a complex valued function on the nonnegative integers, write $\int f(n)dn=\lim &gt; \frac{1}{n}\sum_{j=1}^nf(n)$ and whenever the limit exists call such functions integrable. If $T$ is a measure preserving transformation on a space $X$, then $$ &gt; \int\int|f(T^nx)|dndx=\int\int|f(T^nx)|dxdn=\int\int|f(x)|dxdn=\int|f(x)|dx&lt;\infty. &gt; $$ Hence, by "Fubini's theorem" (!), $f(T^nx)$ is an integrable function of its two arguments, and therefore, for almost every fixed $x$ it is an integrable function of $n$. Can any of this nonsense be made meaningful?</p> </blockquote> <p>Can any of this nonsense be made meaningful using nonstandard analysis? I know that Kamae gave a short proof of the ergodic theorem using nonstandard analysis in <em>A simple proof of the ergodic theorem using nonstandard analysis</em>, Israel Journal of Mathematics, Vol. 42, No. 4, 1982. However, I have to say that I am not satisfied with his proof. It is tricky and not very illuminating, at least for me. Besides, it does not look anything like the so called proof proposed by Halmos. Actually, Kamae's idea can be made standard in a very straightforward manner. See for instance <em>A simple proof of some ergodic theorems</em> by Katznelson and Weiss in the same issue of the Israel Journal of Mathematics. By the way, Kamae's paper is 7 pages and Katznelson-Weiss paper is 6 pages.</p> <p>To summarize, is there a not necessarily short but conceptually clear and illuminating proof of the ergodic theorem using nonstandard analysis, possibly based on the intuition of Halmos?</p>
Terry Tao
766
<p>I feel the answer is "no", at least while staying true to the spirit of Halmos's text. Halmos's "proof", if valid, would imply something far stronger (and false), namely that $\lim_{N \to \infty} \frac{1}{N} \sum_{n=1}^N f(T_n x)$ converged for almost every x, where $T_1, T_2, \ldots$ are an arbitrary sequence of measure-preserving transformations. This is not true even in the case when X is a two-element set. So any proof of the ergodic theorem must somehow take advantage of the group law $T^n T^m = T^{n+m}$ in some non-trivial way.</p> <p>That said, though, nonstandard analysis does certainly generate a Banach limit functional $\lambda: \ell^\infty({\bf N}) \to {\bf C}$ from which one does induce something resembling an integral, namely the Cesaro-Banach functional</p> <p>$f \mapsto \lambda( (\frac{1}{N} \sum_{n=1}^N f(n) )_{N \in {\bf N}} ).$</p> <p>This does somewhat resemble an integration functional on the natural numbers, in that it is finitely additive and translation invariant. But it is not countably additive, and tools such as Fubini's theorem do not directly apply to it; also, this functional makes sense even when the averages don't converge, so it doesn't seem like an obvious tool in order to demonstrate convergence.</p>
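The two-element counterexample in the first paragraph is easy to simulate. In the sketch below (an added illustration, not a proof), $X=\{0,1\}$, $f$ is the indicator of $0$, and the measure-preserving maps $T_n$ are chosen to be the identity or the swap on dyadic blocks of indices, so the averages $\frac1N\sum_{n=1}^N f(T_n x)$ oscillate instead of converging:

```python
def value(n):
    """f(T_n x) for x = 0: T_n is the identity on index blocks [2^k, 2^{k+1})
    with k even and the swap 0 <-> 1 on blocks with k odd; both maps preserve
    the uniform measure on {0, 1}."""
    k = n.bit_length() - 1        # n lies in the block [2^k, 2^{k+1})
    return 1 if k % 2 == 0 else 0

def cesaro(N):
    return sum(value(n) for n in range(1, N + 1)) / N

# Sampled at block ends, the averages keep swinging between about 1/3 and 2/3.
avgs = [cesaro(2 ** k - 1) for k in range(8, 16)]
print(max(avgs) - min(avgs) > 0.2)  # True: no Cesàro limit
```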
1,027,249
<p>Prove that $2$ is the only even prime:</p> <p>I have tried; this is what I have done. Is this sufficient? </p> <p>Proof: To prove that $2$ is the only even prime number we need to prove the following: (1) $2$ is an even prime number; (2) if there existed another even prime number, say $m$, then $m=2$.</p> <p>Proof of 1) is obvious.</p> <p>Proof of 2): Let us assume the contrary: there exists another even prime number, say $m ≠ 2$. Since $m$ is even we have that $m = 2k$ for some positive integer $k$. Thus, we have three distinct divisors of $m$, namely $1$, $2$ and $m$. This contradicts the definition of prime unless $m = 2$. </p> <p>Therefore, $2$ is the only even prime number.</p>
Henry
6,460
<p>Saying (1) is obvious may not be enough to count as a proof: an appeal to obviousness is not a proof, and by that standard (2) is just as obvious.</p> <p>I think it might be clearer if, for (2), you tried to prove that any even positive integer other than $2$ is not prime. In any case, for (2) you should state that $k\gt 1$, since otherwise $2$ and $m$ need not be distinct divisors. </p>
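For what it's worth, claim (2), that every even integer greater than $2$ is composite, is easy to sanity-check by machine over a finite range (a check, of course, not a proof):

```python
def is_prime(m):
    return m > 1 and all(m % d for d in range(2, int(m ** 0.5) + 1))

# Every even m > 2 has the three distinct divisors 1, 2 and m, so only 2 survives.
even_primes = [m for m in range(2, 10_000, 2) if is_prime(m)]
print(even_primes)  # [2]
```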
438,744
<p>Let $A$ and $B$ be real square matrices of the same size. Is it true that $$\det(A^2+B^2)\geq0\,?$$</p> <p>If $AB=BA$ then the answer is positive: $$\det(A^2+B^2)=\det(A+iB)\det(A-iB)=\det(A+iB)\overline{\det(A+iB)}\geq0.$$</p>
user1551
1,551
<p>The ideas in <strong>Seirios</strong>' answer and <strong>Jyrki Lahtonen</strong>'s (now deleted) answer can be generalised to the following result:</p> <blockquote> <p>Let $n\ge2$ and $A\in M_n(\mathbb{C})$. Then $\det(A^2+B^2)\ge0$ for all $B\in M_n(\mathbb{R})$ if and only if $A^2$ is a nonnegative scalar multiple of $I_n$.</p> </blockquote> <p><em>Proof.</em></p> <p>("$\Leftarrow$") Suppose $A^2=pI$ for some $p\ge0$. Since nonreal eigenvalues of $B$ occur in conjugate pairs and the squares of the real eigenvalues of $B$ are nonnegative, it follows that $\det(A^2+B^2)\ge0$.</p> <p>("$\Rightarrow$") We first show that $A^2$ is real. Let $C=A^2=(c_{ij})$ and $B^2=\operatorname{diag}(0,\lambda,\lambda,\ldots,\lambda)$, where $\lambda&gt;0$. By the given condition, $\det(A^2+B^2)=c_{11}\lambda^{n-1}+o(\lambda^{n-1})\ge0$ for all sufficiently large $\lambda$. Hence $c_{11}$ must be real. Similarly, other entries of $C$ are real too, i.e. $A^2$ has to be real.</p> <p>It remains to show that:</p> <blockquote> <p>If $n\ge2$ and $A^2\in M_n(\mathbb{R})$ is not a nonnegative multiple of $I_n$, then there exists $B\in M_n(\mathbb{R})$ such that $\det(A^2+B^2)&lt;0$.</p> </blockquote> <p>Consider the case $n=2$ first. Let $R(\theta)=\pmatrix{\cos\theta&amp;\sin\theta\\ -\sin\theta&amp;\cos\theta}$. We may assume that $A^2$ is in its <em>real</em> Jordan form. There are four possibilities:</p> <ol> <li>$A^2=\pmatrix{\lambda&amp;1\\ 0&amp;\lambda}$ for some $\lambda\in\mathbb{R}$. Then $\det(A^2+B^2)&lt;0$ when $B=B_0:=\pmatrix{1&amp;0\\ b&amp;1}$ for sufficiently large $b$.</li> <li>$A^2=\pmatrix{\lambda_1&amp;0\\ 0&amp;\lambda_2}$ for distinct $\lambda_1,\lambda_2\in\mathbb{R}$. Let $B=bR(\frac\pi2)$, so that $B^2=-b^2I$. Then $\det(A^2+B^2)=(\lambda_1-b^2)(\lambda_2-b^2)&lt;0$ when $b^2$ is strictly between $\lambda_1$ and $\lambda_2$; if no such real $b$ exists, i.e. both $\lambda_1,\lambda_2\le0$ (say $\lambda_1&lt;\lambda_2$), take instead $B=\operatorname{diag}(0,b)$ with $b^2=1-\lambda_2$, giving $\det(A^2+B^2)=\lambda_1&lt;0$.</li> <li>$A^2=a^2R(\theta)$ with $a\neq0$ and $\sin\theta\neq0$. If $\sin\theta&gt;0$, let $B=aB_0$. 
Then $\det(A^2+B^2)=2a^4(1+\cos\theta) - 2a^4b\sin\theta&lt;0$ when $b$ is large enough. If $\sin\theta&lt;0$, choose $B=aB_0^T$ instead.</li> <li>$A^2=-a^2I_2$ with $a\neq0$. Let $B=\pmatrix{\lambda_1&amp;0\\ 0&amp;\lambda_2}$. Then $\det(A^2+B^2)=(\lambda_1^2-a^2)(\lambda_2^2-a^2)&lt;0$ when $\lambda_1^2,\lambda_2^2$ bracket $a^2$.</li> </ol> <p>Now, if $n&gt;2$, we may assume that some diagonal block $\widetilde{A}$ in the real Jordan form of $A$ has one of the above forms. Therefore, if $B$ is a block-diagonal matrix whose corresponding diagonal block to $\widetilde{A}$ is chosen as in the above and each of the other diagonal blocks is $1\times1$ and is equal to $\rho(A)+1$, then $\det(A^2+B^2)&lt;0$. (The addition of $1$ to $\rho(A)$ is needed to guarantee that the determinant is nonzero and the sign of the determinant is solely modified by the diagonal block corresponding to $\widetilde{A}$.)</p>
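A concrete numeric instance of the strategy in case 2 (an added illustration with assumed values): take $A=\operatorname{diag}(1,2)$, so $A^2=\operatorname{diag}(1,4)$ is not a scalar matrix, and $B=bR(\frac\pi2)$ with $b^2=2$ strictly between the diagonal entries, so $B^2=-2I$:

```python
import math

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

A = [[1.0, 0.0], [0.0, 2.0]]          # A^2 = diag(1, 4), not a multiple of I
b = math.sqrt(2.0)                    # b^2 = 2 lies strictly between 1 and 4
B = [[0.0, b], [-b, 0.0]]             # b * R(pi/2), so B^2 = -2 I
M = [[x + y for x, y in zip(r1, r2)]  # M = A^2 + B^2
     for r1, r2 in zip(matmul(A, A), matmul(B, B))]
print(det2(M))  # approximately -2: det(A^2 + B^2) < 0
```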
76,493
<p>I'm interested in geometric interpretations of many linear algebra notions (check also the related <a href="https://mathoverflow.net/questions/75241/geometric-interpretation-of-matrix-minors">geometric interpretation of matrix minors</a>). It came to me recently that a geometric description of the adjugate matrix (for example in the case of a 3×3 matrix) might be quite hard—feel welcome to fill the gap!—but what caught my attention is the functoriality of the adjugate matrix ($\scriptstyle \mathbf I^\mathrm D = \mathbf I$ and $\scriptstyle (\mathbf{AB})^\mathrm D = \mathbf B^\mathrm D \mathbf A^\mathrm D$); my question is:</p> <blockquote> <p>What kind of functor is adjugation (for linear endomorphisms)?</p> </blockquote> <p>It seems to have a strong relationship with the (Hermitian) adjoint but slightly different properties (it commutes with the transpose). Thanks in advance!</p>
Igor Rivin
11,142
<p>Well, if your matrix happens to be invertible, the adjugate is the inverse scaled by the determinant: $\mathbf A^\mathrm D = \det(\mathbf A)\,\mathbf A^{-1}$. Otherwise, the adjugate gives the action on the $(n-1)$st exterior power (you can use the $*$ operator to map it back to the space itself). @Qiaochu has alluded to this in his answer to the question you cite. You can also check out Section 9 of my paper <a href="http://arxiv.org/pdf/math/0403375v1" rel="noreferrer">http://arxiv.org/pdf/math/0403375v1</a> to see some other geometric results on the subject (the published version has fewer typos, so you might want to look at that too, if you have access).</p>
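The contravariance $(\mathbf{AB})^\mathrm D = \mathbf B^\mathrm D \mathbf A^\mathrm D$ and the defining identity $A\,\operatorname{adj}(A)=\det(A)\,I$ are easy to spot-check for $2\times2$ integer matrices; a small pure-Python sketch (an added illustration):

```python
def adj2(M):
    """Adjugate of a 2x2 matrix."""
    (a, b), (c, d) = M
    return [[d, -b], [-c, a]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2, 1], [5, 3]]   # det(A) = 1, so here the adjugate coincides with the inverse
B = [[1, 4], [0, 2]]

print(adj2(matmul(A, B)) == matmul(adj2(B), adj2(A)))  # True: contravariant
print(matmul(A, adj2(A)))  # [[1, 0], [0, 1]], i.e. det(A) * I
```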
190,604
<p><strong>Background</strong></p> <p>I am currently trying to solve exercise 1.1.18 in Hatcher's Algebraic Topology. The part of the exercise I am interested in is the following:</p> <blockquote> <p>Using the technique in the proof of Proposition 1.14, show that if a space $X$ is obtained from a path-connected subspace $A$ by attaching an n-cell $e^n$ with $n ≥ 2$, then the inclusion $A \hookrightarrow X$ induces a surjection on $\pi_1$ .</p> </blockquote> <p>I know that $i:A \hookrightarrow X$ induces a homomorphism $i_*:\pi_1(A)\rightarrow \pi_1(X)$, so I only need to show this is a surjection. I think I understand the idea of the proof, which is to show that every loop $f\in \pi_1(X)$ is homotopic to a loop which is contained entirely in $A$. Hatcher's suggestion is to follow the proof $\pi_1(S^n)=0$ for $n\geq 2$, meaning that we should be able to push the sections of $f$ which are in the attached $n$-cell $e^n$ out. This is causing me a bit of trouble.</p> <p><strong>Attempt</strong></p> <p>Since $X$ is defined to be the result of attaching an $n$-cell to $A$ via some attaching map $\varphi:\partial D^n\rightarrow X$, it has the form $X=A \amalg e^n/\sim$, where $x\sim \varphi(x)$ for all $x \in \partial D^n$. Note first that since $A$ and $e^n$ are path connected, the adjunction space $X=A\cup_\varphi e^n$ is path connected. As such, our choice of base point does not affect the structure of $\pi_1(X)$, so let $x_0 \in A$ be the base point of $\pi_1(X)$ we are working over. Let $f \in \pi_1(X,x_0)$. Let $E=\text{Int}(e^n)$ and consider $f^{-1}(E)$. This is an open subset of $(0,1)$, so it is the union of a possibly infinite collection of subsets of $(0,1)$ of the form $(a_i,b_i)$. Let $f_i$ denote the restriction of $f$ to $(a_i,b_i)$. Note that $f_i$ lies in $e^n$ and, in particular, $f(a_i)$ and $f(b_i)$ lie on the boundary of $e^n$, so they are elements of $A$. 
For $n\geq 2$ we can homotope $f_i$ to a path $g_i$ from $f(a_i)$ to $f(b_i)$ that goes along the boundary of $e^n$, which is homeomorphic to $S^{n-1}$ and hence path connected for $n\geq 2$. Since $e^n$ is homeomorphic to $D^n$, where $n\geq 2$, it is simply connected, so $f_i$ and $g_i$ are homotopic. Repeating this process for all $f_i$, we obtain a loop $g$ homotopic to $f$ such that $g(I)\subseteq A$. </p> <p>What really bothers me about this is how I could form a homotopy from $f$ to $g$ consisting of possibly infinitely many individual homotopies from $f_i$ to $g_i$. I believe I need there to be only finitely many $f_i$'s, but I don't see how to show it. </p> <p><strong>Note:</strong> This is not homework.</p>
Math
225,252
<p>Since $\pi_1(S^n) = 0$ for $n&gt;1$, every element of $\pi_1(X)$ can be realized as an element of $\pi_1(A)$.</p>
5,848
<p>This is the question which I am attempting to solve, and it seems to be difficult to get rid of the exponents.</p> <blockquote> <p>Show that the two cubic curves $Y^3 = X^2 + X^3$ and $X^3 = Y^2 + Y^3$ intersect each other at nine points.</p> </blockquote> <p>Any help would be appreciated. By the way, I would also like to know whether there is any general method for solving such problems. </p>
Bill Dubuque
242
<p>Like your earlier question about how to prove the associativity of the group law on elliptic curves, this too is a consequence of general results on intersection theory, esp. (variants of) <a href="http://en.wikipedia.org/wiki/Bezouts_theorem" rel="nofollow">Bezout's theorem</a>. For computational purposes you can employ various constructive elimination techniques, e.g. Gröbner bases, various triangularization methods, etc., which are available in most computer algebra systems.</p>
2,661,221
<p><strong>$\frac{1}{n}=O(\frac{1}{\ln n})$ or $\frac{1}{n}=o(\frac{1}{\ln n})$?</strong></p> <p>I know that $n\geq\ln n, (n&gt; 0)$, with which $\frac{1}{n}\leq \frac{1}{\ln n}$ and so $\frac{1}{n}=O(\frac{1}{\ln n})$.</p> <p>To know if $\frac{1}{n}=o(\frac{1}{\ln n})$, I need to know if $\lim_{n\to \infty}\frac{\ln n}{n}=0$, but how can I prove that this limit does not exist or does exist? Thank you very much.</p>
mucciolo
222,084
<p>$\DeclareMathOperator{\spn}{span}\DeclareMathOperator{\img}{Im}$There is really some linear algebra hidden there. Note that $\sum_{i=1}^{n}w_{i}a_{i}$ is a linear functional on $\mathbb{R}^n$ with respect to $w$. That is, $\alpha: \mathbb{R}^n \to \mathbb{R} : (w_1, \cdots, w_n) \mapsto \sum_{i=1}^{n}w_{i}a_{i}$ is linear. Likewise, let $\beta: \mathbb{R}^n \to \mathbb{R} : (w_1, \cdots, w_n) \mapsto \sum_{i=1}^{n}w_{i}b_{i}$. Also let $w = (w_1, \cdots, w_n)$ and $f : D \to \mathbb{R} : w \mapsto \frac{\alpha(w)}{\beta(w)}$. Thus your question in this linear algebra framework is</p> <blockquote> <p>Let $\alpha, \beta \in \mathcal{L}(\mathbb{R}^n, \mathbb{R})$ such that $\alpha \neq 0$ and $\beta(\vec{1}) \neq 0 $. Does there exist $w \in \mathbb{R}^n$ such that $$f(w) &gt; f(\vec{1})? \tag{1}$$</p> </blockquote> <p>The domain of $f$ (undetermined so far) is simply all $w \in \mathbb{R}^n$ such that $\beta(w) \neq 0$, that is, $D = \mathbb{R}^n \setminus \ker(\beta)$. By hypothesis we have $\beta(\vec{1}) \neq 0$, therefore $D = \mathbb{R}^n \setminus \spn(\vec{1})^{\perp}$.</p> <p>Furthermore, an additional condition is required. It is necessary that $\alpha$ and $\beta$ are linearly independent, meaning that there must <em>not</em> exist a $\lambda \in \mathbb{R}$ such that $\alpha = \lambda \beta$, because otherwise $f$ would be constant and $(1)$ would never be satisfied. This is equivalent to requiring $\ker(\alpha) \neq \ker(\beta) = \spn(\vec{1})^{\perp}$.</p> <p>Now, since $\img(\alpha) = \img(\beta) = \mathbb{R}$ and for every sequence $(x_n)_{n \in \mathbb{N}}$ in $D$ converging to a non-null vector of $\ker(\beta)$ we have that $\lim\limits_{n \to \infty} |f(x_n)| = \infty$, we can construct $w$ that satisfies $(1)$ as follows.</p> <p>If $f(1) = 0$ just take any $w \in f^{-1}(\mathbb{R}_{&gt;0})$. Otherwise if $f(1) &gt; 0$ consider $A = f^{-1}(\mathbb{R}_{&gt;0})$ or $A = f^{-1}(\mathbb{R}_{&lt;0})$ if $f(1) &lt; 0$. 
Take any non-null element of $\ker(\beta) \cap \partial A$, say $\ell$, and a sequence $(x_n)_{n \in \mathbb{N}}$ in $A$ converging to $\ell$. Because $\lim\limits_{n \to \infty} |f(x_n)| = \infty$, there exists $N \in \mathbb{N}$ such that for every $n &gt; N$ it follows that $f(x_n) &gt; f(1)$. As we wanted to show.</p> <p>(In fact, since without loss of generality we can substitute that sequence for a curve, there exists an uncountable number of such $w$'s.)</p> <p>For example, take $\alpha(x,y) = 4x-2y$ and $\beta(x,y) = x+y$. So $f(1) = 1$. Thus $$A = f^{-1}(\mathbb{R}_{&gt;0}) = \{ (x+y, x-y) : x,y \in \mathbb{R}_{&gt;0} \}$$ $$\partial A = \{ (x, x) : x \in \mathbb{R}_{\ge 0} \} \cup \{ (x, -x) : x \in \mathbb{R}_{\ge 0} \}$$ and $$\ker(\beta) = \{ (x, -x) : x \in \mathbb{R} \}$$</p> <p>Then $$ \ker(\beta) \cap \partial A = \{ (x, -x) : x \in \mathbb{R}_{\ge 0} \} $$</p> <p>Now take $\ell = (1, -1) \in \ker(\beta) \cap \partial A$, $x_0 = (2,0)$ and $x_n = \frac{1}{n} x_0 + (1-\frac{1}{n}) \ell = (\frac{1}{n} + 1, \frac{1}{n} - 1)$. So for all $n \in \mathbb{N}$ $$f(x_n) = \frac{\frac{4}{n} + 4 - \frac{2}{n} + 2}{\frac{2}{n}} = \frac{\frac{2}{n} + 6}{\frac{2}{n}} = \frac{\frac{1}{n} + 3}{\frac{1}{n}} = 1 + 3n &gt; 1 = f(\vec{1})$$</p>
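The closing computation can be double-checked with exact rational arithmetic; a short Python sketch (an added illustration) of the claim $f(x_n)=1+3n$ for $\alpha(x,y)=4x-2y$ and $\beta(x,y)=x+y$:

```python
from fractions import Fraction

alpha = lambda x, y: 4 * x - 2 * y
beta = lambda x, y: x + y
f = lambda x, y: alpha(x, y) / beta(x, y)

# x_n = (1/n + 1, 1/n - 1) should give f(x_n) = 1 + 3n, exactly.
for n in range(1, 50):
    x, y = Fraction(1, n) + 1, Fraction(1, n) - 1
    assert f(x, y) == 1 + 3 * n

print(f(Fraction(1), Fraction(1)))  # 1, the baseline value f(vec 1) being exceeded
```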
2,041,849
<p>I'm stuck on the following question: "In the alternating group $A_4$, let $H$ be the cyclic subgroup generated by $(123)$. Find all the right cosets of $H$ and all the left cosets of $H$. Is every right coset also a left coset?"</p> <p>The answer is as follows: "$H = [idx_4 , (123), (132)]$ so each coset will have order 3, and since $|A_4| = 12$ there will be 4 distinct cosets. We find the following right cosets:</p> <p>$H = [idx_4 , (123), (132)]$</p> <p>$H(234) =[ (234), (12)(34), (134)]$</p> <p>$H(124) = [(124), (13)(24), (243)]$</p> <p>$H(143) = [(143), (14)(23), (142)]$</p> <p>and the following left cosets:</p> <p>$H = [idx_4 , (123), (132)]$</p> <p>$(234)H = [(234), (13)(24), (142)]$</p> <p>$(124)H = [(124), (14)(23), (134)]$</p> <p>$(143)H = [(143), (12)(34), (243)]$</p> <p>and we see that $H$ is the only coset that is both a left and a right coset."</p> <p>Now I understand how you find the left and right cosets; the thing I'm stuck on is why they chose $(234), (124), (143)$ out of the possible options from the alternating group $A_4$ as the permutations to form the cosets of $H$ with?</p>
Matt Samuel
187,867
<p>The choice was to an extent arbitrary. The fact is that any single element from each coset can be chosen to represent it. </p> <p>If your question is as to how to find these elements, you can work recursively. The easiest coset to find is $H$ itself. To find the next coset, pick any element $a_1$ not in $H$. This will give you a new coset. To find all of the elements in this coset, multiply $a_1$ on the left or right (depending on whether you're doing right or left cosets) by every element of $H$.</p> <p>Suppose you've found $n+1$ cosets $Ha_0,\ldots,Ha_n$. Following this procedure, you'll also know all of the elements in the union of these cosets. If you pick one element $a_{n+1}$ not in the union, this will give you a new coset. If the union is the entire group, then you've found all of the cosets.</p>
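The procedure described here is short to implement; a Python illustration (an addition, with permutations written 0-based as tuples) that enumerates the right cosets of $H=\langle(123)\rangle$ in $A_4$:

```python
from itertools import permutations

def compose(p, q):
    """(p o q)(i) = p(q(i)); permutations of {0,1,2,3} stored as tuples."""
    return tuple(p[i] for i in q)

def is_even(p):
    inversions = sum(p[i] > p[j] for i in range(4) for j in range(i + 1, 4))
    return inversions % 2 == 0

A4 = [p for p in permutations(range(4)) if is_even(p)]
r = (1, 2, 0, 3)                           # the 3-cycle (123), written 0-based
H = [(0, 1, 2, 3), r, compose(r, r)]       # the cyclic subgroup <(123)>

# Pick any element not yet covered, form its right coset Ha, and repeat.
cosets, covered = [], set()
for a in A4:
    if a not in covered:
        coset = {compose(h, a) for h in H}
        cosets.append(coset)
        covered |= coset
print(len(cosets), [len(c) for c in cosets])  # 4 cosets of size 3 each
```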
2,041,849
<p>I'm stuck on the following question: "In the alternating group $A_4$, let $H$ be the cyclic subgroup generated by $(123)$. Find all the right cosets of $H$ and all the left cosets of $H$. Is every right coset also a left coset?"</p> <p>The answer is as follows: "$H = [idx_4 , (123), (132)]$ so each coset will have order 3, and since $|A_4| = 12$ there will be 4 distinct cosets. We find the following right cosets:</p> <p>$H = [idx_4 , (123), (132)]$</p> <p>$H(234) =[ (234), (12)(34), (134)]$</p> <p>$H(124) = [(124), (13)(24), (243)]$</p> <p>$H(143) = [(143), (14)(23), (142)]$</p> <p>and the following left cosets:</p> <p>$H = [idx_4 , (123), (132)]$</p> <p>$(234)H = [(234), (13)(24), (142)]$</p> <p>$(124)H = [(124), (14)(23), (134)]$</p> <p>$(143)H = [(143), (12)(34), (243)]$</p> <p>and we see that $H$ is the only coset that is both a left and a right coset."</p> <p>Now I understand how you find the left and right cosets; the thing I'm stuck on is why they chose $(234), (124), (143)$ out of the possible options from the alternating group $A_4$ as the permutations to form the cosets of $H$ with?</p>
Andreas Caranti
58,401
<p>I do not know how much group theory you have covered so far, but if you consider the subgroup of $A_{4}$ $$ V = \{ 1, (12)(34), (13)(24), (14)(23) \}, $$ you will have $$\{ v h : v \in V, h \in H\} = V H = A_{4} = H V = \{ h v : v \in V, h \in H\},$$ with $V \cap H = \{ 1 \}$.</p> <p>So, although as noted in other answers there are many choices for sets of representatives of the left and right cosets (actually $3^{4}$ choices for each set!), the set $V$ is a somewhat natural choice.</p>
337,806
<p>I've been stuck on this problem for a few days and can't find the solution. I hope someone here can help me solve it; I'm grateful for any help:</p> <blockquote> <blockquote> <p>Let $V$ be a finite-dimensional vector space and $f$ a non-degenerate symmetric bilinear form on $V$. For every basis $\beta = \{\alpha_{1}, ..., \alpha_{n}\}$ of $V$ there exists a unique basis $\beta' = \{\alpha_{1}', ..., \alpha_{n}'\}$ of $V$ such that $f(\alpha_{i}, \alpha_{j}') = \delta_{ij}$. Prove that $f_{\beta'} = (f_{\beta})^{-1}$. </p> </blockquote> </blockquote>
le duc quang
38,024
<p>Actually, I found another solution, using two different ways to compute $\alpha_{i}'$.</p> <p>Denote $f_{\beta} = A$. </p> <p>For every $\alpha$ in $V$, we can express: $\alpha = \sum_{i=1}^{n}b_{i}\alpha_{i}$</p> <p>Then we have $f(\alpha, \alpha_{j}') = f(\sum_{i=1}^{n}b_{i}\alpha_{i}, \alpha_{j}') = b_{j}$. So we have the formula: $$\alpha = \sum_{i} f(\alpha, \alpha_{i}')\alpha_{i}$$</p> <p>In particular, $\alpha_{i}' = \sum_{j} f(\alpha_{i}', \alpha_{j}')\alpha_{j}$</p> <p>Denote $\beta_{i} = \sum_{j} (A^{-1})_{ij}\alpha_{j}$. Then we can easily prove that $f(\alpha_{i}, \beta_{j}) = \delta_{ij}$; but $\alpha_{j}'$ is unique, so we have $\beta_{j} = \alpha_{j}'$. Comparing the two formulas, we get $f_{\beta'} = (f_{\beta})^{-1}$.</p> <p>I know this solution is neither natural nor simple, but I still want to post it here for everybody as a reference. Thanks :-)</p>
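The identity $f_{\beta'} = (f_{\beta})^{-1}$ can also be spot-checked numerically. A Python sketch with assumed concrete data (an added illustration): $V=\mathbb{R}^2$, $\beta$ the standard basis, and $f(u,v)=u^{T}Av$ for a symmetric invertible $A$, so that $\alpha_j'$ is the $j$-th column of $A^{-1}$:

```python
from fractions import Fraction

A = [[Fraction(2), Fraction(1)], [Fraction(1), Fraction(3)]]  # Gram matrix of f

def f(u, v):
    return sum(u[i] * A[i][j] * v[j] for i in range(2) for j in range(2))

d = A[0][0] * A[1][1] - A[0][1] * A[1][0]
Ainv = [[A[1][1] / d, -A[0][1] / d],
        [-A[1][0] / d, A[0][0] / d]]

# alpha'_j = j-th column of A^{-1}: it satisfies f(alpha_i, alpha'_j) = delta_ij.
dual = [[Ainv[0][j], Ainv[1][j]] for j in range(2)]
e = [[1, 0], [0, 1]]
assert all(f(e[i], dual[j]) == (1 if i == j else 0)
           for i in range(2) for j in range(2))

# And the Gram matrix of beta' is exactly A^{-1}: f_{beta'} = (f_beta)^{-1}.
G = [[f(dual[i], dual[j]) for j in range(2)] for i in range(2)]
print(G == Ainv)  # True
```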
158,630
<p>Below I've quoted Wikipedia's entry that relates the Z-Transform to the Laplace Transform. The part I don't understand is $z \ \stackrel{\mathrm{def}}{=}\ e^{s T}$; I thought $z$ was actually an element of $\mathbb{C}$ and thus would be $z \ \stackrel{\mathrm{def}}{=}\ Ae^{s T}$ (but then it would be different to the Laplace Transform...). I don't understand why the Z-Transform is not defined as: $$ X(z) = \mathcal{Z}\{x[n]\} = \sum_{n=-\infty}^{\infty} x[n] e^{-\omega n} $$ or something like that.</p> <hr> <p>Z-transform</p> <p>The unilateral or one-sided Z-transform is simply the Laplace transform of an ideally sampled signal with the substitution of $$ z \ \stackrel{\mathrm{def}}{=}\ e^{s T} $$ where $T = 1/f_s$ is the sampling period (in units of time, e.g., seconds) and $f_s$ is the sampling rate (in samples per second or hertz).</p> <p>Let $$ \Delta_T(t) \ \stackrel{\mathrm{def}}{=}\ \sum_{n=0}^{\infty} \delta(t - n T) $$ be a sampling impulse train (also called a Dirac comb) and $$ \begin{align} x_q(t) &amp; \stackrel{\mathrm{def}}{=}\ x(t) \Delta_T(t) = x(t) \sum_{n=0}^{\infty} \delta(t - n T) \\ &amp; = \sum_{n=0}^{\infty} x(n T) \delta(t - n T) = \sum_{n=0}^{\infty} x[n] \delta(t - n T) \end{align} $$ be the continuous-time representation of the sampled $x(t)$, where $$ x[n] \ \stackrel{\mathrm{def}}{=}\ x(nT) $$ are the discrete samples of $x(t)$. The Laplace transform of the sampled signal $x_q(t)$ is $$ \begin{align} X_q(s) &amp; = \int_{0^-}^\infty x_q(t) e^{-s t} \,dt \\ &amp; = \int_{0^-}^\infty \sum_{n=0}^\infty x[n] \delta(t - n T) e^{-s t} \, dt \\ &amp; = \sum_{n=0}^\infty x[n] \int_{0^-}^\infty \delta(t - n T) e^{-s t} \, dt \\ &amp; = \sum_{n=0}^\infty x[n] e^{-n s T}. \end{align} $$ This is precisely the definition of the unilateral Z-transform of the discrete function $x[n]$. 
$$ X(z) = \sum_{n=0}^{\infty} x[n] z^{-n} $$ with the substitution of $z \leftarrow e^{s T} \ $.</p> <p>Comparing the last two equations, we find the relationship between the unilateral Z-transform and the Laplace transform of the sampled signal: $$ X_q(s) = X(z) \Big|_{z=e^{sT}} $$ The similarity between the Z and Laplace transforms is expanded upon in the theory of time scale calculus.</p> <hr> <p>(Source: <a href="http://en.wikipedia.org/wiki/Laplace_transform#Laplace.E2.80.93Stieltjes_transform">http://en.wikipedia.org/wiki/Laplace_transform#Laplace.E2.80.93Stieltjes_transform</a>)</p> <p>Here: <a href="http://en.wikipedia.org/wiki/Z-transform">http://en.wikipedia.org/wiki/Z-transform</a> it says that $z \in \mathbb{C}$.</p>
Noir
324,626
<p>There is a good reason to use <span class="math-container">$z$</span> instead of <span class="math-container">$e^{sT}$</span>. Before starting with any analysis, let me remind you that in the analysis of signals and systems we are interested in the frequency spectrum of the signal, i.e. the Laplace transform on the imaginary line <span class="math-container">$s = jw$</span>. And since your signal <span class="math-container">$x[n]$</span> is discrete, its frequency spectrum is periodic, so it is more general to define <span class="math-container">$s=jwT$</span>.</p> <p>Now, let <span class="math-container">$X(z) = \mathcal{Z}\{x[n]\}$</span> be the Z-transform of a <a href="https://en.wikipedia.org/wiki/Causal_system" rel="nofollow noreferrer">causal or non-causal</a> discrete signal <span class="math-container">$x[n]$</span>, i.e. </p> <p><span class="math-container">$$ X(z) = \sum_{n=-\infty}^{\infty} x[n] z^{-n}. $$</span></p> <p>Since <span class="math-container">$z\in\mathbb{C}$</span> we have <span class="math-container">$z = |z| e^{j\arg z}$</span>. Without loss of generality we write <span class="math-container">$|z| = r$</span> and <span class="math-container">$\arg z = wT$</span>, i.e. <span class="math-container">$z=r e^{jwT}$</span> (note that not necessarily <span class="math-container">$r=1$</span>). Then</p> <p><span class="math-container">$$ \begin{aligned} X(z) &amp;= \sum_{n=-\infty}^{\infty} x[n] z^{-n}\\ &amp;= \sum_{n=-\infty}^{\infty} x[n] (r e^{jwT})^{-n}\\ % &amp;= \sum_{n=-\infty}^{\infty} (x[n] r^{-n}) e^{-njwT}\\ &amp;= \sum_{n=-\infty}^{\infty} (x[n] r^{-n}) (e^{jwT})^{-n}, \end{aligned} $$</span></p> <p>which implies <span class="math-container">$X(z) = \left. \mathcal{L}\{x[n]r^{-n}\} \right\rvert_{s=jwT} = \mathcal{F}\{x[n]r^{-n}\}$</span>. 
As a consequence, <span class="math-container">$X(z)$</span> is a more general Fourier transform than the Fourier transform <span class="math-container">$X(e^{jwT}) = \mathcal{F}\{x[n]\}$</span> of our signal of interest.</p> <p>So, if the region of convergence of <span class="math-container">$X(z)$</span> does not include the unit circle, then <span class="math-container">$X(e^{jwT})$</span> does not exist, and therefore the Fourier transform of the signal does not either. This is a problem because many signals have this convergence issue, e.g. non-causal signals such as digital image filters. Therefore, it is convenient (and for non-causal signals even necessary) to use the Z-transform.</p> <h2><strong>Or informally, use <span class="math-container">$z$</span> instead of <span class="math-container">$\left. e^{sT} \right\rvert_{s=jwT} = e^{jwT}$</span> whenever you can.</strong></h2> <p><a href="https://en.wikipedia.org/wiki/Z-transform#Region_of_convergence" rel="nofollow noreferrer">We also recommend this link about the region of convergence.</a></p> <hr> <p>At this point, it is clear that the Z-transform has the same objective as the Laplace transform: to ensure the convergence of the transform in some region of <span class="math-container">$\mathbb{C}$</span>, where the Z-transform does this for discrete signals and the Laplace transform for continuous signals.</p>
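The region-of-convergence point can be made concrete with an assumed example signal $x[n]=a^n$ for $n\ge0$, whose one-sided Z-transform is the geometric series $z/(z-a)$, convergent exactly for $|z|>|a|$; a short Python sketch (an added illustration):

```python
import cmath

a = 0.8                           # x[n] = a^n, n >= 0

def X_trunc(z, N=2000):
    """Truncated one-sided Z-transform sum."""
    return sum(a ** n * z ** (-n) for n in range(N))

def X_closed(z):
    """Closed form z / (z - a), valid for |z| > |a|."""
    return z / (z - a)

z = 1.1 * cmath.exp(1j * 0.3)     # |z| = 1.1 > |a|: inside the region of convergence
print(abs(X_trunc(z) - X_closed(z)) < 1e-9)   # True: the series converges here

# For |z| <= |a| (e.g. |z| = 0.5) the terms grow, so there is no convergence:
terms = [abs(a ** n * (0.5 + 0j) ** (-n)) for n in (10, 20, 40)]
print(terms[0] < terms[1] < terms[2])          # True
```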
194,985
<p>How does one show that $$\lim_{n \rightarrow \infty}\int_{0}^{1}\frac{x^{n}}{1 + x^{n}}\, dx = 0?$$ My idea is to evaluate the integral inside the limit explicitly, but I can't seem to be able to do that.</p>
André Nicolas
6,312
<p>If you really want to evaluate an integral, note that $$\frac{x^n}{1+x^n}\le \frac{x^{n-1}}{1+x^n}$$ on $[0,1]$ and let $u=1+x^n$. </p>
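Following the hint: $x^n \le x^{n-1}$ on $[0,1]$, and with $u=1+x^n$ one gets $\int_0^1 \frac{x^{n-1}}{1+x^n}\,dx=\frac{\ln 2}{n}\to0$, which squeezes the original integral to $0$. A numeric Python sketch (an added illustration):

```python
import math

def integral(n, steps=100_000):
    """Midpoint-rule estimate of the integral of x^n / (1 + x^n) over [0, 1]."""
    h = 1.0 / steps
    return h * sum(((k + 0.5) * h) ** n / (1 + ((k + 0.5) * h) ** n)
                   for k in range(steps))

# Each estimate sits below the exact bound ln(2)/n from the substitution.
for n in (5, 20, 80):
    assert integral(n) <= math.log(2) / n + 1e-6
print(integral(80) < integral(20) < integral(5) < 0.2)  # True
```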
211,052
<p>I want to modify a list of table rows by adding an extra column. For that I <a href="https://reference.wolfram.com/language/ref/Map.html" rel="nofollow noreferrer"><code>Map</code></a> the data with a pure function that evaluates the new column value from the existing ones and reconstructs a new list from that and the initial <a href="https://reference.wolfram.com/language/ref/Part.html" rel="nofollow noreferrer"><code>Part</code>s</a> of the list:</p> <pre><code>b = 5000 data = {{1, 45000., 27500., "Inverted"}, {2, 22500., 18333.3, ""}, {3, 15000., 13750., "Inverted"}, {4, 11250., 11000., ""}, {5, 9000., 9166.67, "Inverted"}, {6, 7500., 7857.14, ""}, {7, 6428.57, 6875., "Inverted"}} {#[[1;;3]], #[[1]]&gt; 2b, #[[4]]} &amp; /@ data </code></pre> <blockquote> <pre><code>{{{1,45000.,27500.},False,Inverted}, {{2,22500.,18333.3},False,}, {{3,15000.,13750.},False,Inverted}, {{4,11250.,11000.},False,}, {{5,9000.,9166.67},False,Inverted}, {{6,7500.,7857.14},False,}, {{7,6428.57,6875.},False,Inverted}} </code></pre> </blockquote> <p>The problem is <code>#[[1;;3]]</code> returns a list, so I end up with nested lists instead of flat records.</p> <p>As a workaround, I <a href="https://reference.wolfram.com/language/ref/Flatten.html" rel="nofollow noreferrer"><code>Flatten</code></a> each record:</p> <pre><code> Flatten[{#[[1;;3]], #[[1]]&gt; 2b, #[[4]]}] &amp; /@ data </code></pre> <blockquote> <pre><code>{{1,45000.,27500.,False,Inverted}, {2,22500.,18333.3,False,}, {3,15000.,13750.,False,Inverted}, {4,11250.,11000.,False,}, {5,9000.,9166.67,False,Inverted}, {6,7500.,7857.14,False,}, {7,6428.57,6875.,False,Inverted}} </code></pre> </blockquote> <p>It works <em>in that particular</em> case. 
But it is not entirely satisfactory since, if the initial record would already contain list items, they would have been flattened too.</p> <p>If there a more generic solution to build a list from items and list spans?</p>
Sylvain Leroux
68,791
<p>A variation on the <a href="https://reference.wolfram.com/language/ref/Sequence.html" rel="nofollow noreferrer"><code>Sequence</code></a> symbol that was suggested by @kglr in <a href="https://mathematica.stackexchange.com/a/211055/68791">another answer</a>:</p> <pre><code>{#[[1;;3]] /. List -&gt; Sequence, #[[1]]&gt; 2b, #[[4]]} &amp; /@ data </code></pre>
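For readers coming from other languages, the same splicing idea has a direct analogue in Python's iterable unpacking (an editorial aside, not part of the original answer):

```python
b = 5000
data = [
    [1, 45000.0, 27500.0, "Inverted"],
    [2, 22500.0, 18333.3, ""],
    [3, 15000.0, 13750.0, "Inverted"],
]

# *row[:3] splices the first three fields flat into the new record,
# much like Sequence splices elements into the enclosing Mathematica list
result = [[*row[:3], row[0] > 2 * b, row[3]] for row in data]
print(result[0])  # [1, 45000.0, 27500.0, False, 'Inverted']
```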
1,960,793
<p>Considering the (not most common) definition:</p> <blockquote> <p>A set is <em>infinite</em> if it is equipotent to a proper subset of itself. A set is <em>finite</em> if it is not infinite.</p> </blockquote> <p>How can I prove that a set $A$ is finite iif it is equipotent to $J_n=\{1,…,n\}$ for some $n{\in}\mathbb{N}$ (assuming that I already proved that $J_n$ is a finite set for every $n{\in}\mathbb{N}$)?</p>
Bargabbiati
352,078
<p>Suppose that $A$ is not equipotent to $J_n$ for any $n$. Then you can pick distinct elements $a_1,a_2,a_3,\dots$ of $A$ one by one; the process never stops, since if $A$ were exhausted after $n$ steps it would be equipotent to $J_n$. This gives an injection $\mathbb N\to A$, so $A$ has at least the cardinality of $\mathbb N$. Since $\mathbb N$ is equipotent to the even numbers, a proper subset of itself, it follows that $A$ is equipotent to a proper subset of itself, so $A$ is infinite.</p>
245,853
<p>The graphs of my provided code look approximately like the one in the picture below. How is it possible to plot such graphs? Please help how to break the axis scale as shown. Any suggestion appreciated. Thanks a lot. <a href="https://i.stack.imgur.com/8hNeO.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8hNeO.jpg" alt="enter image description here" /></a></p> <pre><code>Subscript[C, i]=2.5*10^6 Subscript[k, e]=315 σ=1*10^-9 Subscript[S, e]=1.58*10^-5 g=2.3*10^16 Subscript[C, e]=2.1*10^4 τ=1*10^-15 a=1/τ Subscript[w, 1]=1 Subscript[s, 1]=y/(Subscript[w, 1]*σ) Subscript[b, 1]=g/Subscript[C, e]*(1+(Subscript[k, e]*Subscript[s, 1]^2)/g) Subscript[Δ, 1]=Sqrt[Subscript[b, 1]^2-4*Subscript[k, e]*Subscript[s, 1]^2*g/(Subscript[C, i]*Subscript[C, e])] Subscript[p, 11]=(-Subscript[b, 1]+Subscript[Δ, 1])/2 Subscript[p, 12]=(-Subscript[b, 1]-Subscript[Δ, 1])/2 Subscript[T, i]=(Subscript[S, e]*g)/(2*π*τ*Subscript[C, i]*Subscript[C, e])*NIntegrate[BesselJ[0,y]*Exp[-((σ^2*Subscript[s, 1]^2)/4)]*(Exp[-a*t]/((a+Subscript[p, 11])*(a+Subscript[p, 12]))+1/(Subscript[p, 11]-Subscript[p, 12])*(Exp[Subscript[p, 11]*t]/(Subscript[p, 11]+a)-Exp[Subscript[p, 12]*t]/(Subscript[p, 12]+a)))*y/(σ*Subscript[w, 1])^2,{y,0,100}] Subscript[T, e]=Subscript[T, i]+Subscript[S, e]/(2*π*τ*Subscript[C, e])*NIntegrate[BesselJ[0,y]*Exp[-((σ^2*Subscript[s, 1]^2)/4)]*(-((a*Exp[-a*t])/((a+Subscript[p, 11])*(a+Subscript[p, 12])))+1/(Subscript[p, 11]-Subscript[p, 12])*((Subscript[p, 11]*Exp[Subscript[p, 11]*t])/(Subscript[p, 11]+a)-(Subscript[p, 12]*Exp[Subscript[p, 12]*t])/(Subscript[p, 12]+a)))*y/(σ*Subscript[w, 1])^2,{y,0,100}] Plot[Subscript[T, e],{t,0,1*10^-14}] Plot[Subscript[T, i],{t,0,1*10^-14}] </code></pre>
Bob Hanlon
9,362
<p>First, I recommend that you avoid using subscripts except for display.</p> <p>Since the range of your plots is only <code>{t, 0, 10^-14}</code> there are no regions where the functions are essentially constant and where a gap in the axis would be appropriate.</p> <pre><code>Clear[&quot;Global`*&quot;] Format[Ti] = Subscript[T, i]; Format[Te] = Subscript[T, e]; Ci = 25*10^5; ke = 315; σ = 1*10^-9; Se = 158*10^-7; g = 23*10^15; Ce = 21*10^3; τ = 1*10^-15; a = 1/τ; s1[w_] = y/(w*σ); b1[w_] = g/Ce*(1 + (ke*s1[w]^2)/g); Δ1[w_] = Sqrt[b1[w]^2 - 4*ke*s1[w]^2*g/(Ci*Ce)]; p11[w_] = (-b1[w] + Δ1[w])/2; p12[w_] = (-b1[w] - Δ1[w])/2; wValues = {1, 3, 5}; Ti[t_?NumericQ, w_?NumericQ] := (Se*g)/(2*π*τ*Ci*Ce)* NIntegrate[BesselJ[0, y]*Exp[-((σ^2*s1[w]^2)/4)]* (Exp[-a*t]/((a + p11[w])*(a + p12[w])) + 1/(p11[w] - p12[w])* (Exp[p11[w]*t]/(p11[w] + a) - Exp[p12[w]*t]/(p12[w] + a)))*y/(σ*w)^2, {y, 0, 100}, WorkingPrecision -&gt; 15] Te[t_?NumericQ, w_?NumericQ] := Ti[t, w] + Se/(2*π*τ*Ce)* NIntegrate[BesselJ[0, y]*Exp[-((σ^2*s1[w]^2)/4)]* (-((a*Exp[-a*t])/((a + p11[w])*(a + p12[w]))) + 1/(p11[w] - p12[w])* ((p11[w]*Exp[p11[w]*t])/(p11[w] + a) - (p12[w]*Exp[p12[w]*t])/(p12[w] + a)))* y/(σ*w)^2, {y, 0, 100}, WorkingPrecision -&gt; 15] Legended[ Column[ Plot[ Evaluate@Table[#[t, w], {w, wValues}], {t, 0, 10^-14}, PlotRange -&gt; All, Frame -&gt; True, FrameLabel -&gt; {None, Style[StringForm[&quot;``[t]&quot;, #], 12, Bold]}, WorkingPrecision -&gt; 15, PlotStyle -&gt; {{Red, Dashed}, {Blue, Dotted}, Black}, ImageSize -&gt; Medium, ImagePadding -&gt; {{60, 15}, {20, 8}}, AspectRatio -&gt; If[# === Te, 1/GoldenRatio, 1/3]] &amp; /@ {Te, Ti}], Placed[ LineLegend[ {{Red, Dashed}, {Blue, Dotted}, Black}, StringForm[&quot;w = ``&quot;, #] &amp; /@ wValues], {.6, .9}]] // Quiet </code></pre> <p><a href="https://i.stack.imgur.com/W0k9e.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/W0k9e.png" alt="enter image description here" /></a></p>
2,471,217
<p>Contradiction or contrapositive? Or is direct easier?</p> <p>$\forall n\in\mathbb{N},\ (n>3\land n\text{ is prime})\to\exists q\in\mathbb{N},\ (n=6q+1\lor n=6q+5)$</p>
Tom Miller
487,910
<p>Well, every integer has one of the forms $6k$, $6k+1$, $6k+2$, $6k+3$, $6k+4$, $6k+5$. Since primes greater than $3$ are odd, the forms $6k$, $6k+2$ and $6k+4$ are ruled out. The only remaining form besides $6k+1$ and $6k+5$ is $6k+3$, but that is divisible by $3$, so it is not prime.</p>
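The case analysis can be brute-checked; a small Python sketch (the trial-division primality test is my own, only for illustration):

```python
def is_prime(n):
    # naive trial division, adequate for small n
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

primes = [n for n in range(4, 10000) if is_prime(n)]
# every prime greater than 3 leaves remainder 1 or 5 on division by 6
assert all(p % 6 in (1, 5) for p in primes)
print("checked", len(primes), "primes")
```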
3,378,655
<p>I have an assignment about epsilon-delta proofs and I'm having trouble with this one. I have worked it through using some of the methods I've picked up for similar proofs but it's just something about this particular expression that doesn't sit right with me. Any feedback would be very helpful. This is how far I've come:</p> <p>Let <span class="math-container">$\varepsilon &gt; 0$</span>. We want to find a <span class="math-container">$\delta$</span> so that <span class="math-container">$\left|\frac{1}{x - 1} - 1\right| &lt; \varepsilon$</span> when <span class="math-container">$0 &lt; |x - 2| &lt; \delta$</span>. We expand the expression: <span class="math-container">\begin{align*} \left|\frac{1}{x - 1} - 1\right| &amp;&lt; \varepsilon \\ \left|\frac{1}{x - 1} - \frac{x - 1}{x - 1}\right| &amp;&lt; \varepsilon \\ \left|\frac{2 - x}{x - 1}\right| &amp;&lt; \varepsilon \\ |{x - 1}| &amp;&lt; \frac{|x - 2|}{\varepsilon} \\ \end{align*}</span></p> <p>We could let <span class="math-container">$\delta = \dfrac{|x - 2|}{\varepsilon}$</span> but <span class="math-container">$|x - 2|$</span> contains an unwanted variable. Since the limit is only relevant when <span class="math-container">$x$</span> is close to <span class="math-container">$a$</span> we'll restrict <span class="math-container">$x$</span> so that it's at most <span class="math-container">$1$</span> from <span class="math-container">$a$</span> or in other words, in our case, that <span class="math-container">$|x - 1| &lt; 1$</span>. This means <span class="math-container">$0 &lt; x &lt; 2$</span> and that <span class="math-container">$-2 &lt; x - 2 &lt; 0$</span>. 
Looking at our previous inequality</p> <p><span class="math-container">\begin{align*} |{x - 1}| &amp;&lt; \frac{|x - 2|}{\varepsilon} \end{align*}</span></p> <p>we see that the right-hand side is the smallest when <span class="math-container">$|x - 2|$</span> is the smallest which by the range above is when <span class="math-container">$x - 2 = -2$</span> and then we have that</p> <p><span class="math-container">\begin{align*} |{x - 1}| &amp;&lt; \frac{|x - 2|}{\varepsilon} &lt; \frac{2}{\varepsilon} \end{align*}</span></p> <p>We now have the two inequalities <span class="math-container">$|x - 1| &lt; 1$</span> and <span class="math-container">$|x - 1| &lt; \frac{2}{\varepsilon}$</span>. Let <span class="math-container">$\delta = \textrm{min}(1, \frac{2}{\varepsilon})$</span> and by definition we have that for every <span class="math-container">$\varepsilon &gt; 0$</span> there is a <span class="math-container">$\delta$</span> so that <span class="math-container">$|f(x) - A| &lt; \varepsilon$</span> for every <span class="math-container">$x$</span> in the domain that satisfies <span class="math-container">$0 &lt; |x - a| &lt; \delta$</span>. <span class="math-container">$\blacksquare$</span></p>
Bernard
202,857
<p><strong>Hint</strong>: You may suppose, w.l.o.g. that <span class="math-container">$|x-2|&lt;\frac12$</span>, in which case <span class="math-container">$|x-1|&gt;\frac12$</span>, so that <span class="math-container">$$\left|\frac{x-2}{x - 1}\right| &lt;2|x-2|.$$</span></p>
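The hint is easy to verify numerically; a quick Python sketch (an editorial check, not part of the proof):

```python
import random

def bound_holds(x, tol=1e-12):
    # |1/(x-1) - 1| = |x-2|/|x-1| <= 2|x-2| whenever |x-1| >= 1/2
    return abs(1 / (x - 1) - 1) <= 2 * abs(x - 2) + tol

random.seed(0)
samples = [2 + random.uniform(-0.5, 0.5) for _ in range(10000)]
samples = [x for x in samples if x != 2]
assert all(abs(x - 1) >= 0.5 for x in samples)  # |x-2| <= 1/2 forces |x-1| >= 1/2
assert all(bound_holds(x) for x in samples)
print("bound holds on", len(samples), "samples")
```

In particular, choosing $\delta=\min\left(\frac12,\frac{\varepsilon}{2}\right)$ then gives $\left|\frac{1}{x-1}-1\right|<2|x-2|<\varepsilon$.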
3,187,628
<p>On a keyboard with the digits 1-9 and the letters A-D, how many possible ways are there to form a 5-digit code with 3 different numbers and two identical letters (order isn't a problem)?</p> <p>My try: 9C3 x 4C1 = 84 x 4 = 336</p>
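The count in the attempt can be confirmed by brute force; a short Python sketch (my own enumeration, treating a code as a multiset since order doesn't matter):

```python
from itertools import combinations_with_replacement

digits = list("123456789")
letters = list("ABCD")

count = 0
# order doesn't matter, so enumerate size-5 multisets over the 13 keys
for code in combinations_with_replacement(digits + letters, 5):
    ds = [s for s in code if s in digits]
    ls = [s for s in code if s in letters]
    # exactly 3 distinct digits (each once) and one letter appearing twice
    if len(ds) == 3 == len(set(ds)) and len(ls) == 2 and len(set(ls)) == 1:
        count += 1
print(count)  # 84 * 4 = 336
```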
José Carlos Santos
446,262
<p><strong>Hint:</strong> What is the Taylor series of the sine function centered at <span class="math-container">$0$</span>?</p>