| qid | question | author | author_id | answer |
|---|---|---|---|---|
4,242,133 | <p>I was having trouble with the question:</p>
<blockquote>
<p>Prove that
<span class="math-container">$$I:=\int_0^{\infty}\frac{\ln(1+x^2)}{x^2(1+x^2)}dx=\pi\ln\big(\frac e 2\big)$$</span></p>
</blockquote>
<p><strong>My Attempt</strong></p>
<p>Perform partial fractions
<span class="math-container">$$I=\int_0^{\infty}\frac{\ln(1+x^2)}{x^2(1+x^2)}dx=\int_0^{\infty}\frac{\ln(1+x^2)}{x^2}dx-\int_0^{\infty}\frac{\ln(1+x^2)}{1+x^2}dx$$</span>
First integral
<span class="math-container">$$\int_0^{\infty}\frac{\ln(1+x^2)}{x^2}dx=-\Bigg[\frac{\ln(x^2+1)}{x}\Bigg]_0^{\infty}+\int_0^{\infty}\frac{2}{x^2+1}\,dx=\pi$$</span>
Second integral
<span class="math-container">$$\int_0^{\infty}\frac{\ln(1+x^2)}{1+x^2}dx=$$</span>
How do you solve this integral?
Thank you for your time</p>
| Ninad Munshi | 698,724 | <p>Define</p>
<p><span class="math-container">$$J[a] = \int_0^\infty \frac{\ln(a^2+x^2)}{1+x^2}\:dx \implies J'[a] = \frac{2a}{a^2-1}\int_0^\infty \left(\frac{1}{x^2+1}-\frac{1}{x^2+a^2}\right)dx$$</span></p>
<p><span class="math-container">$$J'[a] = \frac{\pi}{1+a} \implies J[a] = \pi\ln(1+a) + C$$</span></p>
<p>We can see that</p>
<p><span class="math-container">$$J[0] = \int_0^\infty\frac{2\ln x}{1+x^2}\:dx = 0 = C$$</span></p>
<p>thus</p>
<p><span class="math-container">$$J[1] = \int_0^\infty \frac{\ln(1+x^2)}{1+x^2}dx = \pi\ln 2$$</span></p>
<p>and we have</p>
<p><span class="math-container">$$\pi - \pi \ln 2 = \pi \ln\left(\frac{e}{2}\right)$$</span></p>
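A quick numerical sanity check of the final value (a standard-library Python sketch of my own, not part of the proof; the substitution $x=\tan t$ is my device to map the half-line onto a finite interval):

```python
import math

# Check: I = ∫₀^∞ ln(1+x²)/(x²(1+x²)) dx = π·ln(e/2) = π(1 - ln 2).
# With x = tan(t) the integrand becomes -2·ln(cos t)/tan²(t) on (0, π/2),
# which is bounded there, so a plain midpoint rule converges well.
def integrand(t: float) -> float:
    return -2.0 * math.log(math.cos(t)) / math.tan(t) ** 2

def midpoint(f, a, b, n=100_000):
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

I = midpoint(integrand, 0.0, math.pi / 2)
expected = math.pi * (1.0 - math.log(2.0))
print(I, expected)  # both ≈ 0.9641
```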
|
221,667 | <p>I'm taking a second course in linear algebra. Duality was discussed in the early part of the course. But I don't see any significance of it. It seems to be an isolated topic, and it hasn't been mentioned anymore. So what's exactly the point of duality?</p>
| Hew Wolff | 34,355 | <p>Duality is a simple way to make new vector spaces. These dual spaces are useful in functional analysis, for example when you want to define the integral of a function, or you want to analyze a probability distribution. In this case there's a vector space of functions and a linear way to map those functions to numbers, which is natural to describe as an element of the dual space.</p>
<p>For finite-dimensional vector spaces, the dual is not so interesting because it looks like the original vector space. So it may not be very exciting in a standard undergraduate course. But for infinite dimensions, things are more interesting.</p>
<p>I have just been refreshing my memory from the Wikipedia article on Dual Space, which is a good summary.</p>
|
221,667 | <p>I'm taking a second course in linear algebra. Duality was discussed in the early part of the course. But I don't see any significance of it. It seems to be an isolated topic, and it hasn't been mentioned anymore. So what's exactly the point of duality?</p>
| hmakholm left over Monica | 14,366 | <p>For what it's worth, you're not the only one having trouble seeing the immediate relevance of dual spaces. In the preface to Michael Artin's algebra textbook, he says:</p>
<blockquote>
<p>(2) The book is not intended for a "service course," so technical points should be presented only if they are needed in the book.</p>
<p>(3) All topics discussed should be important for the average mathematician.</p>
<p>[...] Sometimes the exercise of deferring material showed that it could be deferred forever -- that it was not essential. This happened with dual spaces and multilinear algebra, for example, which wound up on the floor as a consequence of the second principle.</p>
</blockquote>
<p>When I read that as an undergraduate I thought "yeah, whatever" -- since what else could I do, not knowing what it was I was missing.</p>
<p>However, later when I came to differential geometry and tensor calculus (which I needed for general relativity) it turned out that duality is absolutely essential there. Then it wasn't very satisfying to lack the general algebraic grounding to fully appreciate what was happening. The books I was using did provide the bare essentials I needed to follow along, but it was also clear that there was a nice algebraic system hiding underneath all that, which I didn't get to see in full. And it would certainly have been helpful to know that general theory before embarking on differential geometry.</p>
|
1,515,667 | <blockquote>
<p>Show that the limit $\lim_{(x,y)\to (0,0)}\frac{2e^x y^2}{x^2+y^2}$
does not exist</p>
</blockquote>
<p>$$\lim_{(x,y)\to (0,0)}\frac{2e^x y^2}{x^2+y^2}$$</p>
<p>Divide by $y^2$:</p>
<p>$$\lim_{(x,y)\to (0,0)}\frac{2e^x}{\frac{x^2}{y^2}+1}$$</p>
<p>$$=\frac{2(1)}{\frac{0}{0}+1}$$</p>
<p>Since $\frac{0}{0}$ is undefined, this limit does not exist.</p>
<p><strong>I am not satisfied with my proof.</strong> It makes me think that what I just did was too simple a step, and not acceptable university-level mathematics. Any comments on my proof?</p>
| Ángel Mario Gallegos | 67,622 | <p>If we let $(x,y)\to(0,0)$ on the $x$ axis we have
$$f(x,0)=\frac{2e^x(0)^2}{x^2+(0)^2}=0\quad\implies \quad f(x,y)\to0\quad\text{as } (x,y)\to(0,0)\text{ on the }x\text{ axis}$$
On the $y$ axis we have
$$f(0,y)=\frac{2e^0y^2}{0^2+y^2}=2\quad\implies \quad f(x,y)\to 2\quad\text{as } (x,y)\to(0,0)\text{ on the }y\text{ axis}$$</p>
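The two paths can also be checked numerically with a few lines of Python (my own illustration, standard library only):

```python
import math

# f(x, y) = 2·eˣ·y²/(x² + y²): sample it along each axis as t → 0.
def f(x: float, y: float) -> float:
    return 2.0 * math.exp(x) * y * y / (x * x + y * y)

ts = [10.0 ** -k for k in range(1, 8)]
along_x = [f(t, 0.0) for t in ts]  # identically 0 on the x-axis
along_y = [f(0.0, t) for t in ts]  # identically 2 on the y-axis
print(along_x[-1], along_y[-1])  # 0.0 2.0
```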
|
<p>I would like to prove that if $a \mid n$ and $b \mid n$ then $a \cdot b \mid n$ for all $n \ge a \cdot b$, where $a, b, n \in \mathbb{Z}$.</p>
<p>I'm stuck.<br>
$n = a \cdot k_1$<br>
$n = b \cdot k_2$<br>
$\therefore a \cdot k_1 = b \cdot k_2$</p>
<p>EDIT: so for <a href="http://en.wikipedia.org/wiki/Fizz_buzz" rel="nofollow">fizzbuzz</a> it wouldn't make sense to check to see if a number is divisible by 15 to see if it's divisible by both 3 and 5?</p>
| David | 119,775 | <p>You are possibly thinking of the following: if $a\mid n$ and $b\mid n$ and $a,b$ are relatively prime (have no common factor except 1), then $ab\mid n$.</p>
<p><strong>Proof</strong>. We have $n=ak$ and $n=bl$ for some integers $k,l$. Therefore $b\mid ak$; since $a,b$ are relatively prime this implies $b\mid k$, so $k=bm$, so $n=abm$; therefore $ab\mid n$.</p>
<p>Re: edit, yes this would make sense because $3$ and $5$ are relatively prime.</p>
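A brute-force check of both claims (a hypothetical Python snippet of mine, not from the answer):

```python
from math import gcd

# If a | n, b | n and gcd(a, b) = 1, then ab | n; coprimality matters.
def claim_holds(a: int, b: int, limit: int = 1000) -> bool:
    return all(n % (a * b) == 0
               for n in range(1, limit) if n % a == 0 and n % b == 0)

coprime_ok = all(claim_holds(a, b)
                 for a in range(1, 20) for b in range(1, 20) if gcd(a, b) == 1)
# Counterexample without coprimality: 4 | 12 and 6 | 12, but 24 ∤ 12.
counterexample = 12 % 4 == 0 and 12 % 6 == 0 and 12 % 24 != 0
# The FizzBuzz case from the edit: 3 and 5 are coprime, so 15 | n ⟺ 3 | n and 5 | n.
fizzbuzz_ok = all((n % 15 == 0) == (n % 3 == 0 and n % 5 == 0) for n in range(1, 1000))
print(coprime_ok, counterexample, fizzbuzz_ok)  # True True True
```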
|
1,067,762 | <p>Is what I am doing below correct, please assist. </p>
<p>$$\sum_{k=-\infty}^{-1}\frac{e^{kt}}{1-kt} = \sum_{k=1}^{\infty}\frac{e^{-{kt}}}{1-kt}$$ </p>
<p>Is this the rule on how to "invert" the limits, and does it matter if there are imaginary numbers in the sum; or not or is it all the same with both pure real and pure imaginary summations?</p>
| André Nicolas | 6,312 | <p>Recall that
$$\tan\left(\frac{\pi}{2}-x\right)=\cot x =\frac{1}{\tan x}.$$</p>
<p>Now
$$\frac{1}{\sqrt{2}+1}=\frac{1}{\sqrt{2}+1}\cdot\frac{\sqrt{2}-1}{\sqrt{2}-1}=\sqrt{2}-1.$$</p>
|
1,067,762 | <p>Is what I am doing below correct, please assist. </p>
<p>$$\sum_{k=-\infty}^{-1}\frac{e^{kt}}{1-kt} = \sum_{k=1}^{\infty}\frac{e^{-{kt}}}{1-kt}$$ </p>
<p>Is this the rule on how to "invert" the limits, and does it matter if there are imaginary numbers in the sum; or not or is it all the same with both pure real and pure imaginary summations?</p>
| Dylan | 135,643 | <p>Here's an easy way:</p>
<p>$$\tan B \tan C = \big(\sqrt{2} - 1 \big) \big(\sqrt{2} + 1 \big) = 1$$</p>
<p>And we know that:</p>
<p>$$ \tan B \cot B = 1$$</p>
<p>So it follows that</p>
<p>$$\cot B = \tan C$$</p>
<p>Or</p>
<p>$$ \tan \left(\frac{\pi}{2} - B \right) = \tan C$$</p>
|
1,387,454 | <p>What is the sum of all <strong>non-real</strong>, <strong>complex roots</strong> of this equation -</p>
<p>$$x^5 = 1024$$</p>
<p>Also, please explain how to find the sum of all non-real, complex roots of any degree-$n$ polynomial. Is there any way to determine the number of real and non-real roots of an equation?</p>
<hr>
<p>Please note that I'm a high school freshman (grade 9), so please provide a simple explanation. Thanks in advance!</p>
| user217285 | 217,285 | <p>By the fundamental theorem of algebra, the polynomial $a_0 + a_1x + \cdots + a_nx^n$ has $n$ (not necessarily distinct) roots, say $\alpha_1,\ldots,\alpha_n$. The polynomial $\frac{a_0}{a_n} + \frac{a_1}{a_n}x + \cdots + x^n$ also has roots $\alpha_1,\ldots,\alpha_n$. Factoring the latter polynomial and expanding,
\begin{align*}
\frac{a_0}{a_n} + \frac{a_1}{a_n}x + \cdots + \frac{a_{n-1}}{a_n} x^{n-1} + x^n &= (x - \alpha_1)\cdots(x - \alpha_n) \\
&= x^n - (\alpha_1 + \cdots+ \alpha_n)x^{n-1} + \cdots.
\end{align*}
Comparing coefficients, we have $\alpha_1 + \cdots + \alpha_n = -\frac{a_{n-1}}{a_n}$, giving us an explicit formula for the sum of the roots.</p>
<p>We are looking for the sum of the non-real roots of $x^5 - 1024$. Clearly $x = 4$ is a root, and since $5$ is odd, there are no other real roots (in the complex plane, the roots lie on a circle with radius 4 centered at the origin in the shape of a pentagon with a vertex at $(4,0)$). The sum of all the roots is 0, so the sum of the non-real roots is $-4$.</p>
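The conclusion can be verified numerically (a small Python check of my own, using the explicit polar form of the roots):

```python
import cmath, math

# The five roots of x⁵ = 1024 are 4·e^{2πik/5}, k = 0, …, 4; they sum to 0,
# the only real one is 4, so the non-real roots sum to -4.
roots = [4 * cmath.exp(2j * math.pi * k / 5) for k in range(5)]
assert all(abs(r ** 5 - 1024) < 1e-8 for r in roots)

non_real_sum = sum(r for r in roots if abs(r.imag) > 1e-9)
print(non_real_sum)  # ≈ -4 + 0j
```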
|
2,425,916 | <p>What is this set describing?</p>
<p>{$n∈\mathbb{N}|n\ne 1$ and for all $a∈\mathbb{N}$ and $b∈\mathbb{N},ab=n$ implies $a= 1$ or $b= 1$}</p>
<p>Is it describing a subset of natural numbers, excluding 1, that is the product of two other natural numbers, of which one must be 1? Isn't that just every natural number except 1?</p>
| Siong Thye Goh | 306,553 | <p>Prime numbers only have two positive divisors, of which one of them is $1$. Hence the set is describing the set of prime numbers.</p>
<p>It doesn't include composite numbers such as $6$. If $6 = ab$, we can't conclude that $a=1$ or $b=1$, since it is possible that $a=2$ and $b=3$.</p>
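This reading can be confirmed by brute force (an illustrative Python snippet of mine):

```python
# Membership test for {n ∈ ℕ : n ≠ 1 and ∀a,b (ab = n ⟹ a = 1 or b = 1)}.
def in_set(n: int) -> bool:
    return n != 1 and all(a == 1 or b == 1
                          for a in range(1, n + 1)
                          for b in range(1, n + 1) if a * b == n)

def is_prime(n: int) -> bool:
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

members = [n for n in range(1, 30) if in_set(n)]
print(members)  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```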
|
339,090 | <p>I would like to pose a question on a variation on the classical <a href="http://en.wikipedia.org/wiki/Coupon_collector%27s_problem#Extensions_and_generalizations" rel="nofollow">coupon collector's problem</a>: coupon type $i$ is to be collected $k_i$ times. What is the expected stopping time or the expected number of trials needed to have collected all the sought after $(k_1,k_2,...,k_n)$ coupons?</p>
<p>We can compute this with recursion. But is there a better, more direct approach? What I am particularly interested to see is a martingale method.</p>
| Maya | 68,377 | <p>The method in <a href="http://www.jstor.org/stable/2308930" rel="nofollow">http://www.jstor.org/stable/2308930</a> looks like it can be adapted to this setting. It is not based on martingale arguments, but does give exact expressions. </p>
|
4,295,459 | <blockquote>
<p>Find the Taylor series of <span class="math-container">$$\frac{1}{(i+z)^2}$$</span> centered at <span class="math-container">$z_0 = i$</span>.</p>
</blockquote>
<p>Im thinking if I could find the Taylor series for <span class="math-container">$$\frac{1}{i+z}$$</span> I could use that <span class="math-container">$$\frac{d}{dz} \big(-\frac{1}{i+z} \big) = \frac{1}{(i+z)^2}$$</span> However Im struggling with finding the series of 1/(i+z) (I know I should use the geometric series), and also not sure how to make sure the series is centered at <span class="math-container">$z_0 = i$</span>. Any help is appreciated.</p>
| Kavi Rama Murthy | 142,385 | <p><span class="math-container">$\frac 1 {(i+z)^{2}}=-\frac 1 4 (1+(z-i)/2i)^{-2}$</span>. Now use the expansion <span class="math-container">$(1+w)^{-2}=1+\frac {(-2)} {(1)} w+\frac {(-2)(-3)} {(1)(2)}w^{2}+\frac {(-2)(-3)(-4)} {(1)(2)(3)}w^{3}+\cdots$</span> for <span class="math-container">$|w| <1$</span>.</p>
|
2,294,969 | <p>I can't seem to find a path to show that:</p>
<p>$$\lim_{(x,y)\to(0,0)} \frac{x^2}{x^2 + y^2 -x}$$</p>
<p>does not exist.</p>
<p>I've already tried with $\alpha(t) = (t,0)$, $\beta(t) = (0,t)$, $\gamma(t) = (t,mt)$ and with some parabolas... they all led me to the limit being $0$ but this exercise says that there's no limit when approaching $(0,0)$. Hints? Thank you.</p>
| StackTD | 159,845 | <p>So you want to find a path that gives a non-zero limit. Take a path where:
$$\lim_{(x,y)\to(0,0)} \frac{x^2}{x^2 + \color{blue}{y^2 -x}}$$ the blue part is $0$, since then the limit clearly simplifies to $1$; so take $x=y^2$.
<p><em>Edit</em>: this comes down to s.harp's suggestion in the comment as well.</p>
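Numerically, the two paths give different values (my own quick Python illustration):

```python
# g(x, y) = x²/(x² + y² - x): along x = y² the extra part vanishes and g ≡ 1;
# along the x-axis, g(x, 0) = x/(x - 1) → 0.
def g(x: float, y: float) -> float:
    return x * x / (x * x + y * y - x)

ts = [10.0 ** -k for k in range(1, 6)]
along_parabola = [g(t * t, t) for t in ts]  # x = y² with y = t
along_x_axis = [g(t, 0.0) for t in ts]
print(along_parabola[-1], along_x_axis[-1])
```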
|
3,077,808 | <p>I'm trying to find for which values <span class="math-container">$r$</span> the following improper integral converges.
<span class="math-container">$$\int_0^\infty x^re^{-x}\, dx$$</span>
What I have so far is that <span class="math-container">$x^r < e^{\frac{1}{2}x}$</span> for <span class="math-container">$x \geq a$</span>, which splits the integral into
<span class="math-container">$$\int_0^a x^re^{-x} \,dx + \int_a^\infty e^{-\frac{1}{2}x} \, dx$$</span>
We know the latter integral converges, but I don't know what to do with the first one. (For reference, graphing the functions reveals the answer to be <span class="math-container">$r > -1$</span>.)</p>
<p>Edit: I would like a proof without the gamma function. Preferably one that uses the comparison test to compare limits.</p>
| RRL | 148,510 | <p>Note that for all <span class="math-container">$r \in \mathbb{R}$</span>,</p>
<p><span class="math-container">$$\lim_{x \to \infty} \frac{x^r e^{-x}}{x^{-2}} = \lim_{x \to \infty} \frac{x^{r+2}}{e^x} = 0 $$</span></p>
<p>since the exponential function tends to infinity faster than any polynomial. Hence, by the limit comparison test the integral over <span class="math-container">$[1, \infty)$</span> converges since <span class="math-container">$\int_1^\infty x^{-2}\, dx = 1$</span>.</p>
<p>See if you can finish by finding the condition on <span class="math-container">$r$</span> such that the integral over <span class="math-container">$[0,1]$</span> converges.</p>
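To see the remaining condition numerically, one can watch the truncated integrals over $[\varepsilon, 1]$ as $\varepsilon$ shrinks (a rough standard-library Python experiment of mine; $r=-1/2$ converges, $r=-3/2$ does not):

```python
import math

# Midpoint-rule approximation of ∫_ε¹ xʳ e^{-x} dx.
def truncated(r: float, eps: float, n: int = 50_000) -> float:
    h = (1.0 - eps) / n
    total = 0.0
    for k in range(n):
        x = eps + (k + 0.5) * h
        total += x ** r * math.exp(-x)
    return h * total

conv = [truncated(-0.5, 10.0 ** -k) for k in (2, 3, 4)]  # settles down (r > -1)
div = [truncated(-1.5, 10.0 ** -k) for k in (2, 3, 4)]   # grows roughly like 2/√ε
print(conv)
print(div)
```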
|
1,969,934 | <p>Taken from <a href="https://en.wikipedia.org/wiki/E_(mathematical_constant)" rel="nofollow noreferrer">Wikipedia</a>:</p>
<blockquote>
<p>The number $e$ is the limit $$e = \lim_{n \to \infty} \left (1 + \frac{1}{n} \right)^n$$</p>
</blockquote>
<p><strong>Graph of $f(x) = \left (1 + \dfrac{1}{x} \right)^x$</strong> <a href="http://www.meta-calculator.com/online/paw750g2nx4z" rel="nofollow noreferrer">taken from here. </a></p>
<p><a href="https://i.stack.imgur.com/4eHLu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4eHLu.png" alt="Graph"></a></p>
<p>Its evident from the graph that the limit actually approaches $e$ as $x$ approaches $\infty$. So I tried approaching the value algebraically. My attempt:</p>
<p>$$\lim_{n \to \infty} \left (1 + \frac{1}{n} \right)^n$$
$$= \lim_{n \to \infty} \left(\frac{n + 1}{n}\right)^n$$
$$= \left(\lim_{n \to \infty} \left(\frac{n + 1}{n}\right) \right)^n$$
$$= 1^\infty$$</p>
<p>which is an indeterminate form. I cannot think of any other algebraic manipulation. My question is that <strong>how can I solve this limit algebraically?</strong></p>
| Dietrich Burde | 83,966 | <p>An algebraic way is Binomial Expansion, which is given by
$$\begin{eqnarray*}
\left(1+\frac{1}{n}\right)^{\!n} &=& 1+n\left(\frac{1}{n}\right)+\frac{n(n-1)}{2!}\left(\frac{1}{n}\right)^{\!2}+\frac{n(n-1)(n-2)}{3!}\left(\frac{1}{n}\right)^{\!3}+\cdots \\ \\
&=& 1+1+\frac{n(n-1)}{n^2}\cdot\frac{1}{2!}+\frac{n(n-1)(n-2)}{n^3}\cdot \frac{1}{3!}+\cdots \\ \\
\end{eqnarray*}$$
Now each of the factors $\frac{n(n-1)\cdots(n-k+1)}{n^k}$ in front tends to $1$ as $n\to \infty$, so that the limit gives
$$
\lim_{n \to \infty}\left(1+\frac{1}{n}\right)^{\!n} = 1 + 1 + \frac{1}{2!} + \frac{1}{3!} + \cdots = \mathrm{e}.
$$</p>
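Both the compound-interest sequence and the factorial series can be checked numerically (a trivial Python sketch, my addition):

```python
import math

# (1 + 1/n)ⁿ increases toward e, and the series ∑ 1/k! reaches e much faster.
compound = [(1 + 1 / n) ** n for n in (10, 1_000, 100_000)]
series = sum(1 / math.factorial(k) for k in range(20))
print(compound, series, math.e)
```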
|
<p>There is a deck made of $81$ different cards. On each card there are $4$ seeds, and each seed can have $3$ different colors, hence generating the $ 3\cdot3\cdot3\cdot3 = 81 $ cards in the deck.
A tern (a triple of cards) is a winning one if, for every seed, the corresponding colors on the three cards are either all the same or all different.</p>
<p>-1 How many winning terns are there in the deck?</p>
<p>-2 Show that $4$ cards can contain at most one winning tern.</p>
<p>-3 Draw $4$ cards from the deck. What is the probability that there is a winning tern?</p>
<p>-4 We want to delete $4$ cards from the deck to get rid of as many terns as possible. How do we choose the $4$ cards?</p>
<p>I have no official solution for this problem and I don't know where to start.
Rather than a complete solution, I would appreciate some hints to give me the input to start thinking about a solution.</p>
| Empy2 | 81,790 | <p>For question one, ask a simpler question: Look at just the first seeds, how many ways can you color them?<br>
Two possibilities are: Card 1: Red, Card 2: Red, Card 3: Red<br>
and Card1:Red, Card2: Blue, Card3: Yellow.<br>
Now for four seeds, you make the choice of colors four times.<br>
You should check that you haven't chosen the same card three times - how many ways could you have done that?</p>
|
1,521,739 | <p>The following is the Meyers-Serrin theorem and its proof in Evans's <em>Partial Differential Equations</em>:</p>
<blockquote>
<p><a href="https://i.stack.imgur.com/XnzXY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XnzXY.png" alt="enter image description here"></a>
<a href="https://i.stack.imgur.com/7woqQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7woqQ.png" alt="enter image description here"></a>
<a href="https://i.stack.imgur.com/pBIqX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pBIqX.png" alt="enter image description here"></a></p>
</blockquote>
<p>Could anyone explain where (for which $x\in U$) is the convolution in step 2 defined and how to get (3) from Theorem 1? </p>
<blockquote>
<p><a href="https://i.stack.imgur.com/NdZWL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NdZWL.png" alt="enter image description here"></a></p>
</blockquote>
| Lubin | 17,760 | <p>I’ll do the same thing as @heropup, but without the notation. To save myself typing, I’ll set $c=1/\sqrt2=\cos 45^\circ=\sin45^\circ$. Then you want a rotation of $45^\circ$, and to do this I’ll set
\begin{align}
x&=cX-cY\\y&=cX+cY\,.
\end{align}
Make these substitutions, and if I’m not mistaken, you get a nice equation of form $Y=\alpha X^2+\beta$, but I’ll leave it to you to find these numbers. You get points that are the $(X,Y)$-coordinates of your vertex and focus (you know how to do that), and plug these into the displayed formulas to get the $(x,y)$-coordinates. (PS: make sure you express $2\sqrt2$ correctly as $4c$. You’ll see that the letter $c$ drops out completely.)</p>
|
873,803 | <p>I'm currently preparing for the USA Mathematical Talent Search competition. I've been brushing up my proof-writing skills for several weeks now, but one area that I have not been formally taught about (or really self-studied) for that matter, is general polynomials beyond quadratics. In particular, I've been having trouble with the following question:</p>
<blockquote>
<p>Let $r_1, r_2$, and $r_3$ be distinct roots of the monic cubic equation $P(x) :=x^3+bx^2+cx+d=0$. Prove that $r_1r_2 + r_1r_3 + r_2r_3 = c$.</p>
</blockquote>
<p>I started by attempting to equate the roots $P(r_1) = P(r_2) = P(r_3) = 0$ and simplify, however this seemed like a wrong approach from the start and looked far messier than I'd expect from an introductory question on direct proofs. How would one go about solving this and, in general, problems on identities involving the roots of a polynomial?</p>
| Anurag A | 68,092 | <p>Just use $P(x)=(x-r_1)(x-r_2)(x-r_3)$. Then compare the coefficients on each side.</p>
|
880,124 | <p>I'm trying to solve the question 1.36 from Fulton's algebraic curves book:</p>
<blockquote>
<p>Let $I=(Y^2-X^2,Y^2+X^2)\subset\mathbb C[X,Y]$. Find $V(I)$ and
$\dim_{\mathbb C}\mathbb C[X,Y]/I$.</p>
</blockquote>
<p>Obviously $V(I)=\{(0,0)\}$ and by a corollary in the same section we know that $\dim_{\mathbb C}(\mathbb C[X,Y]/I)\lt \infty$, but I don't know how to calculate $\dim_{\mathbb C}(\mathbb C[X,Y]/I)$.</p>
<p>I need help in this part.</p>
<p>Thanks in advance</p>
| Matemáticos Chibchas | 52,816 | <p><strong>Hint:</strong> Note that $Y^2=\bigl((Y^2-X^2)+(Y^2+X^2)\bigr)/2$ belongs to $I$, which in turn implies that $X^2=(Y^2+X^2)-Y^2\in I$. In other words $(X^2,Y^2)\subseteq I$, and a similar reasoning shows the reversed inclusion. Having this, what can you say about the higher order terms of an arbitrary polynomial modulo $I$?</p>
|
880,124 | <p>I'm trying to solve the question 1.36 from Fulton's algebraic curves book:</p>
<blockquote>
<p>Let $I=(Y^2-X^2,Y^2+X^2)\subset\mathbb C[X,Y]$. Find $V(I)$ and
$\dim_{\mathbb C}\mathbb C[X,Y]/I$.</p>
</blockquote>
<p>Obviously $V(I)=\{(0,0)\}$ and by a corollary in the same section we know that $\dim_{\mathbb C}(\mathbb C[X,Y]/I)\lt \infty$, but I don't know how to calculate $\dim_{\mathbb C}(\mathbb C[X,Y]/I)$.</p>
<p>I need help in this part.</p>
<p>Thanks in advance</p>
| Aaron | 9,863 | <p>Given a ring $R$, every element of $R[x]$ is a sum of elements $a_0+a_1 x + a_2 x^2 + \ldots +a_n x^n$ where $a_i \in R$. Using this twice, we have that a basis for $\mathbb C[x,y]$ is the monomials $x^iy^j$ where $i,j\in \mathbb N$ Since $I=(x^2+y^2,x^2-y^2)=(x^2,y^2)$, we have that every monomial where either $i$ or $j$ is at least $2$ will be in $I$, and so every element of $\mathbb C[x,y]/I$ is of the form $a + bx + cy + dxy +I$. In fact, the elements of $I$ are spanned by the monomials where either $i$ or $j$ are at least $2$, and so $\{1,x,y,xy\}$ is a basis for $\mathbb C[x,y]/I$, and so the dimension is $4$.</p>
|
<p>I'm a first-year graduate student of mathematics and I have an important question.
I like studying math, and when I attend a course I try to study in the best way possible, with different textbooks; moreover, I try to understand the concepts rather than worry about the exams. Despite this, months after such intense study, I inexorably forget most of the things I have learned. For example, if I study algebraic geometry, commutative algebra or differential geometry, only the main ideas remain in my mind at the end. Conversely, when I deal with subjects such as linear algebra, real analysis, abstract algebra or topology (simpler subjects that I studied in the first or second year), I'm comfortable.
So my question is: what should remain in the mind of a student after a one-semester course? What is the point of learning and understanding many proofs if one then forgets them all?</p>
<p>I'm sorry for my poor English.</p>
| Gil | 75,252 | <p>As a student who is suffering from the very same problem, I want to share my less professional solution. With this method, I feel like my studying became much more efficient.</p>
<p>When I read, I tend to be generous. I used to pick out every single detail, and I criticized an author or a lecturer for not writing "for all" or for writing $x$ instead of $\bar{x}$ for an element of $\mathbb{R}[x]/(x^{2} + 1)$. However, I realized that dropping this habit of being extremely nit-picky is necessary in order to understand more difficult ideas. I usually nod my head if I read what I expect or arguments similar to my intuitive ideas. For example, I saw an author saying that $R/(p)$ is a field when $p \in R$ is a prime element and $R$ is a PID. Of course, it is technically wrong if you take $R = \mathbb{Z}$ and $p = 0$, but in the proof that contained the "mistake," it was more intuitive to think that $R/(p)$ was a field, and later I found that the case $p = 0$ gives a trivial proof of the theorem where $R/(p)$ was used. It is still very difficult for me to proceed through readings without going through details like this, but I try my best to trust the author or lecturer and underline the details that I do not understand at the moment so that I can review them later (and most of the time, I do understand them when I calm myself down and look at them again).</p>
<p>On the other hand, I try to be as rigorous as possible when I write my own proof. I had bad experiences from letting my intuition conquer all of my mathematical activities. The problem of being too intuitive is that everything seems "obvious," which is not always the best scenario when it comes to writing mathematics.</p>
<p>Long story short, I read more loosely and write more rigorously. I found that this method only works under certain circumstance. It only works when I have great determination and patience that makes me be willing to endure slow-pace studying or getting stuck. It is also important to balance yourself to find some interesting (but not-too-difficult) problems related to the reading/lecture that give you more motivations for learning.</p>
|
774,434 | <p><a href="https://www.wolframalpha.com/input/?i=Sum%5BBinomial%5B3n,n%5Dx%5En,%20%7Bn,%200,%20Infinity%7D%5D" rel="noreferrer">Wolfram alpha tells me</a> the ordinary generating function of the sequence $\{\binom{3n}{n}\}$ is given by
$$\sum_{n} \binom{3n}{n} x^n = \frac{2\cos[\frac{1}{3}\sin^{-1}(\frac{3\sqrt{3}\sqrt{x}}{2})]}{\sqrt{4-27x}}$$
How do I prove this?</p>
| Lucian | 93,448 | <p><strong>Too long for a comment:</strong> By investigating series of the form $S_a(x)=\displaystyle\sum_{n=0}^\infty{an\choose n}~x^n$, we notice that — in general — we get a $($ <a href="https://en.wikipedia.org/wiki/Generalized_hypergeometric_function" rel="nofollow">generalized</a> $)$ <a href="https://en.wikipedia.org/wiki/Hypergeometric_function" rel="nofollow">hypergeometric function</a> of argument $\dfrac{a^a}{(a-1)^{a-1}}~x$, which then naturally leads us to suspect that we should rather be inspecting the values of the same expression for $x(t)=\dfrac{(a-1)^{a-1}}{a^a}~t$. Taking into consideration the fact that $\displaystyle\lim_{u\to0}u^u=1$, we have </p>
<blockquote>
<p>$$\begin{align}
S_{-1}\Big(x(t)\Big)~&=~\frac1{\sqrt{1-t}}
\\\\
S_{-1/2}\Big(x(t)\Big)~&=~\frac{\cos\bigg(\dfrac{\arcsin t}3\bigg)+\dfrac1{\sqrt3}~\sin\bigg(\dfrac{\arcsin t}3\bigg)}{\sqrt{1-t^2}}
\\\\
S_0\Big(x(t)\Big)~&=~0
\\\\
S_{1/2}\Big(x(t)\Big)~&=~1-i~\frac{t}{\sqrt{1-t^2}}
\\\\
S_1\Big(x(t)\Big)~&=~\frac1{1-t}
\\\\
S_{3/2}\Big(x(t)\Big)~&=~\frac{\cos\bigg(\dfrac{\arcsin t}3\bigg)+\sqrt3~\sin\bigg(\dfrac{\arcsin t}3\bigg)}{\sqrt{1-t^2}}
\\\\
S_2\Big(x(t)\Big)~&=~\frac1{\sqrt{1-t}}
\\\\
S_3\Big(x(t)\Big)~&=~\frac{\cos\bigg(\dfrac{\arcsin\sqrt t}3\bigg)}{\sqrt{1-t}}
\end{align}$$</p>
</blockquote>
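Whatever route one takes to prove it, the claimed closed form is easy to test numerically inside the radius of convergence $4/27$ (a standard-library Python check I added; the formula is taken verbatim from the question):

```python
import math

x = 0.1  # inside the radius of convergence 4/27 ≈ 0.148
partial = sum(math.comb(3 * n, n) * x ** n for n in range(200))
closed = (2 * math.cos(math.asin(3 * math.sqrt(3) * math.sqrt(x) / 2) / 3)
          / math.sqrt(4 - 27 * x))
print(partial, closed)  # both ≈ 1.664
```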
|
243,210 | <p>I have difficulty computing the $\rm mod$ for $a ={1,2,3\ldots50}$. Is there a quick way of doing this?</p>
| Felix Marin | 85,343 | <p>$$
\mbox{Set}\ {\rm y}\left(x\right) \equiv {\rm f}\left(x^{2}\right)\equiv{\rm F}\left(x\right)
$$</p>
<blockquote>
<p>$$
\mbox{You get}\quad {\rm F}'\left(x\right)
={{\rm F}^{2}\left(x\right) - {\rm F}\left(x\right)\over 2}
$$</p>
</blockquote>
|
4,294,714 | <p>The problem is as follows:</p>
<p>You have an unlimited number of marbles and each marble is one of 16 different colours. You have to choose 6 marbles and order is irrelevant. How many different combinations of 6 marbles are there?</p>
| user2661923 | 464,411 | <p>Ignore passenger D, as a non-factor.</p>
<p>Computation will be <span class="math-container">$$\frac{N\text{(umerator)}}{D\text{(enominator)}},$$</span></p>
<p>where <span class="math-container">$D = 7^3$</span>, which represents <span class="math-container">$7$</span> choices for each of <span class="math-container">$A,B,C$</span>.</p>
<p>Superficially, you would expect <span class="math-container">$N = (7 \times 6 \times 5)$</span>. However, such a computation permits <span class="math-container">$(3!)$</span> ways of ordering the departures of <span class="math-container">$A,B,C$</span>. By the constraints of the problem, you must therefore apply a factor of <span class="math-container">$\displaystyle \frac{1}{3!}$</span> when computing <span class="math-container">$N$</span>.</p>
<p>Therefore, <span class="math-container">$\displaystyle N = \frac{7!}{(7-3)!} \times \frac{1}{3!} = \binom{7}{3}.$</span></p>
<p>Final answer:</p>
<p><span class="math-container">$$\frac{\binom{7}{3}}{7^3}.$$</span></p>
<hr />
<p><strong>Addendum</strong><br>
Why passenger D can be ignored.</p>
<p>Since there is no constraint on which exit passenger D takes, his choice of exit will have no effect on the possible ways that A,B,C can exit, either in the computation of the denominator (<span class="math-container">$D = 7^3$</span>) or the numerator <span class="math-container">$[N = \binom{7}{3}]$</span>.</p>
<p>If you do include passenger D in the computations, you would simply be multiplying both <span class="math-container">$N$</span> and <span class="math-container">$D$</span> by a factor of <span class="math-container">$(7)$</span>, which would have no effect on the value of the fraction, <span class="math-container">$\displaystyle \frac{N}{D}$</span>.</p>
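The count can be confirmed by exhaustive enumeration (a short Python check of my own; I take the prescribed order to mean the exits of A, B, C come out strictly increasing, which is one of the $3!$ orderings):

```python
from itertools import product
from math import comb

# All 7³ exit choices for A, B, C are equally likely; count the favourable
# ones where the three exits are distinct and in the one prescribed order.
favourable = sum(1 for a, b, c in product(range(7), repeat=3) if a < b < c)
total = 7 ** 3
print(favourable, total)  # 35 343, i.e. probability C(7,3)/7³
```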
|
2,363,390 | <p>This question arises from an unproved assumption made in a proof of L'Hôpital's Rule for Indeterminate Types of $\infty/\infty$ from a Real Analysis textbook I am using. The result is intuitively simple to understand, but I am having trouble formulating a rigorous proof based on limit properties of functions and/or sequences.</p>
<p><strong>Statement/Lemma to be Proved</strong>:
Let $f$ be a continuous function on interval $(a,b)\!\subset\!\mathbb{R}$. If $\displaystyle{\lim_{x\rightarrow a+}\!f(x)\!=\!\infty}$, then, given any $\alpha\!\in\!\mathbb{R}$, there exists $c\!>\!a$ such that $x\!\in\!A\cap(a,c)$ implies $\alpha\leq f(c)<f(x)$.</p>
<p><strong>Relationship with Infinite Limit Definition</strong>:
At first glance, this may appear to simply be the definition of right-hand infinite limits:</p>
<ul>
<li>$\displaystyle{\lim_{x\rightarrow a+}\!f(x)\!=\!\infty}$ is defined to mean: given any $\alpha\!\in\!\mathbb{R}$, there exists $\delta\!>\!0$, such that $x\!\in\!A\cap(a,a+\delta)$ implies $\alpha<f(x)$.</li>
</ul>
<p>However, the main difference is that the result I am interested in forces an association between the "$\delta$" and the "$\alpha$" (where $\alpha=f(a+\delta)$, i.e., it forces $c\!\equiv\!a+\delta$ to be in the domain of $f$).</p>
<p><em>EDIT:</em> The statement to be proved that I originally presented did not require $f$ to be continuous on $A$. However, this was added to address the comment and counterexample below.</p>
<p><em>EDIT #2:</em> Again, a helpful user (@DanielFischer) commented that the statement after the first edit needed yet an additional limitation--<em>i.e.</em>, that $A$ must also be an interval--for it to hold.</p>
| Stephen K. | 307,717 | <p>I believe I have a proof for the special case when the domain is an interval. It is interesting to me that the proof of this simple assertion is not so trivial (although if there is a simplier route to getting this result, I would be interested in finding out).</p>
<p><strong>Theorem</strong> Let $f$ be a continuous function on interval $(a,b)\!\subset\!\mathbb{R}$. If $\displaystyle{\lim_{x\to a+}f(x)\!=\!\infty}$, then, given any $\alpha\!\in\!\mathbb{R}$, there exists $c\!\in\!(a,b)$ such that: (1) $\alpha\!\leq\!f(c)$, and (2) $x\!\in\!(a,c)$ implies $f(c)\!<\!f(x)$.</p>
<p><strong>Proof</strong> From the definition of the infinite right-handed limit, given any $\alpha\!\in\!\mathbb{R}$, there exists $\delta\!>\!0$ (where we can assume that $a+\delta\!<\!b$) such that:
$$\qquad\qquad\ 0\!<\!x\!-\!a\!<\!\delta\ \ \mbox{implies}\ \ \alpha\!<\!f(x).\qquad\qquad(*)$$
Since $a\!+\!\delta\!\in\!I$, then either: <em>(1)</em> $f(a\!+\!\delta)\!=\!\alpha$, <em>(2)</em> $f(a\!+\!\delta)\!<\!\alpha$, or <em>(3)</em> $f(a\!+\!\delta)\!>\!\alpha$.</p>
<ul>
<li><em>Case (1)</em>, where $f(a\!+\!\delta)\!=\!\alpha$: Let $c\!\equiv\!a+\delta$ and the assertion is proved.</li>
<li><em>Case (2)</em>, where $f(a\!+\!\delta)\!<\!\alpha$: In addition to this case's assumption, one also has $f(a\!+\!\delta/2)\!>\!\alpha$ by the implication $(*)$ above. Therefore, since $f(a\!+\!\delta)<\alpha<f(a\!+\!\delta/2)$, by <a href="https://en.wikipedia.org/wiki/Intermediate_value_theorem" rel="nofollow noreferrer">Bolzano's Intermediate Value Theorem</a>, there exists a number $k\!\in\!(a\!+\!\frac{\delta}{2},a\!+\!\delta)$ such that $f(k)\!=\!\alpha$. A second application of Bolzano's theorem results in a number $l\!\in\!(k,a\!+\!\delta)$ such that $f(a\!+\!\delta)\!<\!f(l)\!<\!\alpha$. However, this contradicts implication $(*)$ above so that $f(a\!+\!\delta)\!\not<\!\alpha$ and this case is not possible.</li>
<li><em>Case (3)</em>, where $f(a\!+\!\delta)\!>\!\alpha$: Define $\alpha'\!\equiv\!f(a\!+\!\delta)$. Again, the infinite limit definition states that there exists $\delta'\!>\!0$ such that:
$$\qquad\qquad\ 0\!<\!x\!-\!a\!<\!\delta'\ \ \mbox{implies}\ \ \alpha'\!<\!f(x).\qquad\qquad(**)$$
It must be that $\delta'\!\leq\!\delta$ (otherwise a contradiction would result for the point $a\!+\!\delta$ in (**) above). If $\delta'\!=\!\delta$, then let $c\!\equiv\!a+\delta'$ and the assertion is proved. If $\delta'\!<\!\delta$, then from the <a href="https://en.wikipedia.org/wiki/Extreme_value_theorem" rel="nofollow noreferrer">Extreme Value Theorem</a>, $f$ has an absolute minimum $f_{min}$ on $[a\!+\!\delta',a\!+\!\delta]$. Define the following set:
$$X\equiv\{\ x\ :\ x\!\in\![a\!+\!\delta',a\!+\!\delta]\ \mbox{and}\ f(x)\!=\!f_{min}\},$$
which includes all points in that subdomain that have $f_{min}$ as their image. The question next becomes whether the infimum of this set is a member of the set (<em>i.e.</em>, whether the set has a minimum). The answer is that it must, because if it doesn't, then $f$ would be discontinuous at the infimum contradicting the assertion's hypothesis that $f$ be continuous. Let $c\!\equiv\!\inf X$ and the assertion is proved.</li>
</ul>
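<p>As a quick numerical illustration of the theorem (not a substitute for the proof), take the sample function $f(x)=1/x$ on $(0,1)$, which blows up as $x\to 0^+$; since $f$ is strictly decreasing, $c=1/\alpha$ witnesses both conclusions:</p>

```python
# Illustration for f(x) = 1/x on (0, 1): f(x) -> infinity as x -> 0+.
# Since f is strictly decreasing, c = 1/alpha satisfies both conclusions.
f = lambda x: 1.0 / x
alpha = 4.0
c = 1.0 / alpha                                  # c = 0.25

assert f(c) >= alpha                             # conclusion (1)
xs = [c * k / 1000.0 for k in range(1, 1000)]    # sample points in (0, c)
assert all(f(x) > f(c) for x in xs)              # conclusion (2)
```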
|
1,834,756 | <p>The Taylor expansion of the function $f(x,y)$ is:</p>
<p>\begin{equation}
f(x+u,y+v) \approx f(x,y) + u \frac{\partial f (x,y)}{\partial x}+v \frac{\partial f (x,y)}{\partial y} + uv \frac{\partial^2 f (x,y)}{\partial x \partial y}
\end{equation}</p>
<p>When $f=f(x,y,z)$, is the following true?</p>
<p>$$\begin{align}
f(x+u,y+v,z+w) \approx f(x,y,z) &+ u \frac{\partial f (x,y,z)}{\partial x}+v \frac{\partial f (x,y,z)}{\partial y} + w \frac{\partial f (x,y,z)}{\partial z}
\\
&+uv \frac{\partial^2 f (x,y,z)}{\partial x \partial y} + vw \frac{\partial^2 f (x,y,z)}{\partial y \partial z}+ uw \frac{\partial^2 f (x,y,z)}{\partial x \partial z} \\
&+ uvw \frac{\partial^3 f (x,y,z)}{\partial x \partial y \partial z}
\end{align}$$</p>
| Kamal Kant Misra | 672,693 | <p>It is not correct! We should have the expansion as
<span class="math-container">$f(x+u,y+v,z+w)\approx f(x,y,z) + u \frac{\partial f(x,y,z)}{\partial x} + v\frac{\partial f(x,y,z)}{\partial y} + w\frac{\partial f(x,y,z)}{\partial z} + \frac{1}{2!} \left[u^2 \frac{\partial^2 f(x,y,z)}{\partial x^2} + v^2 \frac{\partial^2 f(x,y,z)}{\partial y^2} + w^2 \frac{\partial^2 f(x,y,z)}{\partial z^2} + 2 uv \frac{\partial^2 f(x,y,z)}{\partial x \partial y} + 2 vw\frac{\partial^2 f(x,y,z)}{\partial y \partial z} + 2 uw\frac{\partial^2 f(x,y,z)}{\partial x \partial z}\right] + \cdots$</span></p>
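<p>A numerical sanity check of the second-order expansion above: picking an arbitrary smooth test function (here $f=e^{x+2y+3z}$, for which the quadratic form collapses to $f\,(u+2v+3w)^2$) and moving along a fixed direction, the remainder should shrink like the cube of the step size:</p>

```python
import math

def f(x, y, z):
    return math.exp(x + 2*y + 3*z)

def taylor2(x, y, z, u, v, w, h):
    # second-order Taylor approximation of f at (x, y, z) in direction (u, v, w)
    F = f(x, y, z)
    s = u + 2*v + 3*w          # for this f, every partial is a multiple of f
    return F + h * F * s + 0.5 * h * h * F * s * s

p, d = (0.1, 0.2, 0.3), (1.0, -2.0, 0.5)

def err(h):
    return abs(f(p[0] + h*d[0], p[1] + h*d[1], p[2] + h*d[2]) - taylor2(*p, *d, h))

# cubic remainder: halving the step divides the error by about 2^3 = 8
assert 6.0 < err(1e-2) / err(5e-3) < 10.0
```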
|
1,594,622 | <p>As an example, calculate the number of $5$ card hands possible from a standard $52$ card deck. </p>
<p>Using the combinations formula, </p>
<p>$$= \frac{n!}{r!(n-r)!}$$</p>
<p>$$= \frac{52!}{5!(52-5)!}$$</p>
<p>$$= \frac{52!}{5!47!}$$</p>
<p>$$= 2,598,960\text{ combinations}$$</p>
<p>I was wondering what the logic is behind combinations? Is it because there are 52 cards to choose from, except we're only selecting $5$ of them, to which the person holding them can rearrange however they please, hence we divide by $5!$ to account for the permutations of those 5 cards? Then do we divide by $47!$ because the remaining cards are irrelevant?</p>
| kaiten | 301,758 | <p>Consider drawing $1$ card at a time.
The first card can be any of the $52$ cards.
The second can be any of the remaining $51$.
The third can be any of $50$... etc</p>
<p>So you have $52\times 51\times 50\times 49\times 48$ possibilities for $5$ cards. This is more conveniently written $\dfrac{52!}{47!}$.</p>
<p>But now you're counting the same hand $5!$ times because there are $5!$ ways of arranging $5$ cards. Dividing by $5!$ gives $\dfrac{52!}{5!47!}$</p>
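<p>The counting argument can be checked directly with Python's standard library:</p>

```python
import math

ordered = 52 * 51 * 50 * 49 * 48          # ordered draws of 5 distinct cards
hands = ordered // math.factorial(5)      # each hand was counted 5! times
assert hands == math.comb(52, 5) == 2598960
```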
|
309,234 | <p>I am looking for good, detailed references for "mod $p$ lower central series".</p>
<p>So far I only find papers such as (<a href="https://core.ac.uk/download/pdf/81193793.pdf" rel="nofollow noreferrer">https://core.ac.uk/download/pdf/81193793.pdf</a>, <a href="https://www.sciencedirect.com/science/article/pii/0040938366900243" rel="nofollow noreferrer">https://www.sciencedirect.com/science/article/pii/0040938366900243</a>), which briefly mention it in the context of topology.</p>
<p>Are there any good books that discuss this in detail (not necessarily related to topology)?</p>
<p>Also, just to confirm, are these terminologies the same thing:</p>
<ol>
<li>mod $p$ lower central series</li>
<li>lower $p$-central series</li>
<li>lower exponent-p central series</li>
</ol>
<p>I am confused by the different terminologies.</p>
<p>Thanks a lot.</p>
Community | -1 | <p>For the free group, it is called the Zassenhaus filtration. Golod-Shafarevich groups are defined in terms of it. <a href="https://arxiv.org/abs/1206.0490" rel="nofollow noreferrer">Ershov's survey</a> on Golod-Shafarevich groups is excellent. Highly recommended. It is published as Ershov, Mikhail, "Golod-Shafarevich groups: a survey", Internat. J. Algebra Comput. 22 (2012), no. 5, 1230001, 68 pp.</p>
|
711,168 | <p>Let $V$ be an $n$-dimensional real inner product space and let $a=\lbrace v_1,v_2,\dots v_n \rbrace$ be an orthonormal basis for $V$. Let $W$ be a subspace of $V$ with orthonormal basis $B = \lbrace w_1, w_2,\dots w_k\rbrace$. Let $A$ be the matrix whose columns are $[w_1]_a, [w_2]_a,\dots [w_k]_a$ and let $P_w$ be the orthogonal projection onto $W$.
Show $[P_w]_{aa} = AA^t$.</p>
<ol>
<li>What I have is $P_w(x) = w$ </li>
<li>$[P_w(x)]_a = [P_w]_{aa}[x]_a$</li>
<li>And since $A$ has orthonormal columns, $A^tA=I$.
I'm stuck on this step.
Please help me out. Thanks</li>
</ol>
| dani_s | 119,524 | <p>For a counterexample consider $G = \{0, 1\}$ and define $*$ by $$0 * 0 = 0 \\ 0 * 1 = 1 \\ 1 * 0 = 0 \\ 1 * 1 = 1$$</p>
<p>Show that your axioms hold; on the other hand $G$ is not a group because for example there is no identity.</p>
|
1,790,311 | <p>Show the following equalities $5 \mathbb{Z} +8= 5\mathbb{Z} +3= 5\mathbb{Z} +(-2)$.</p>
<p>$5 \mathbb{Z} +8=\{5z_{1}+8: z_{1} \in \mathbb{Z}\}$,</p>
<p>$5 \mathbb{Z} +3=\{5z_{2}+3: z_{2} \in \mathbb{Z}\}$,</p>
<p>$5 \mathbb{Z} +(-2)=\{5z_{3}+(-2): z_{3} \in \mathbb{Z}\}$.</p>
<p>So, how can I prove this using these definitions?</p>
| MPW | 113,214 | <p><strong>Hint:</strong> You need to show that $8$, $3$, and $-2$ are all in the same coset of $5\mathbb Z$. Said differently, you need to show that
$8-3$, $8-(-2)$, and $3-(-2)$ are all in $5\mathbb Z$.</p>
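<p>A quick Python check of the hint, together with a brute-force comparison of the three cosets on a window of integers:</p>

```python
# 8, 3 and -2 all leave the same remainder mod 5, so their pairwise
# differences lie in 5Z and the three cosets coincide.
assert 8 % 5 == 3 % 5 == (-2) % 5 == 3
assert all(d % 5 == 0 for d in (8 - 3, 8 - (-2), 3 - (-2)))

window = set(range(-100, 100))
A = {5 * z + 8 for z in range(-50, 50)} & window
B = {5 * z + 3 for z in range(-50, 50)} & window
C = {5 * z + (-2) for z in range(-50, 50)} & window
assert A == B == C
```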
|
23,312 | <p>What is the importance of eigenvalues/eigenvectors? </p>
| DVD | 77,260 | <p>An eigenvector $v$ of a matrix $A$ is a direction left unchanged by the linear transformation: $Av=\lambda v$.
An eigenvalue of a matrix is unchanged by a change of coordinates: if $Av=\lambda v$ and $v=Bu$, then $(B^{-1}AB)u = \lambda u$. These are important invariants of linear transformations.</p>
|
23,312 | <p>What is the importance of eigenvalues/eigenvectors? </p>
| Ciro Santilli OurBigBook.com | 53,203 | <p><strong>Eigenvalues and eigenvectors are central to the definition of measurement in quantum mechanics</strong></p>
<p>Measurements are what you do during experiments, so this is obviously of central importance to a Physics subject.</p>
<p>The state of a system is a vector in <a href="https://en.wikipedia.org/wiki/Hilbert_space" rel="nofollow noreferrer">Hilbert space</a>, an infinite dimensional space square integrable functions.</p>
<p>Then, the definition of "doing a measurement" is to apply a <a href="https://en.wikipedia.org/wiki/Self-adjoint_operator" rel="nofollow noreferrer">self-adjoint operator</a> to the state, and after a measurement is done:</p>
<ul>
<li>the state collapses to an eigenvalue of the self adjoint operator (this is the formal description of the <a href="https://en.wikipedia.org/wiki/Observer_effect_(physics)" rel="nofollow noreferrer">observer effect</a>)</li>
</ul>
<ul>
<li>the result of the measurement is the eigenvalue of the self adjoint operator</li>
</ul>
<p>Self adjoint operators have the following two key properties that allows them to make sense as measurements as a consequence of infinite dimensional generalizations of the <a href="https://en.wikipedia.org/wiki/Spectral_theorem" rel="nofollow noreferrer">spectral theorem</a>:</p>
<ul>
<li>their eigenvectors form an orthonormal basis of the Hilbert space, therefore if there is any component in one direction, the state has a probability of collapsing to any of those directions</li>
<li>the eigenvalues are real: our instruments tend to give real numbers are results :-)</li>
</ul>
<p>As a more concrete and super important example, we can take the <a href="https://en.wikipedia.org/wiki/Hydrogen_atom#Schr%C3%B6dinger_equation" rel="nofollow noreferrer">explicit solution of the Schrodinger equation for the hydrogen atom</a>. In that case, the angular parts of the eigenfunctions of the energy operator are proportional to <a href="https://en.wikipedia.org/wiki/Spherical_harmonics" rel="nofollow noreferrer">spherical harmonics</a>:</p>
<p><a href="https://i.stack.imgur.com/udrwZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/udrwZ.png" alt="enter image description here" /></a></p>
<p>Therefore, if we were to measure the energy of the electron, we are certain that:</p>
<ul>
<li><p>the measurement would return one of the energy eigenvalues</p>
<p>The energy difference between two energy levels matches experimental observations of the <a href="https://en.wikipedia.org/wiki/Hydrogen_spectral_series" rel="nofollow noreferrer">hydrogen spectral series</a> and is one of the great triumphs of the Schrodinger equation</p>
</li>
<li><p>the wave function would collapse, after the measurement, to one of those functions, i.e. one of the eigenfunctions of the energy operator</p>
</li>
</ul>
<p>Bibliography: <a href="https://en.wikipedia.org/wiki/Measurement_in_quantum_mechanics" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Measurement_in_quantum_mechanics</a></p>
<p><strong>The time-independent Schrodinger equation is an eigenvalue equation</strong></p>
<p>The general <a href="https://en.wikipedia.org/wiki/Schr%C3%B6dinger_equation" rel="nofollow noreferrer">Schrodinger equation</a> can be simplified by <a href="https://en.wikipedia.org/wiki/Separation_of_variables" rel="nofollow noreferrer">separation of variables</a> to the <a href="https://en.wikipedia.org/wiki/Schr%C3%B6dinger_equation#Time-independent_equation" rel="nofollow noreferrer">time independent Schrodinger equation</a>, without any loss of generality:</p>
<p><span class="math-container">$$
\left[ \frac{-\hbar^2}{2m}\nabla^2 + V(\mathbf{r}) \right] \Psi(\mathbf{r}) = E \Psi(\mathbf{r})
$$</span></p>
<p>The left side of that equation is a linear operator (infinite dimensional matrix acting on vectors of a Hilbert space) acting on the vector <span class="math-container">$\Psi$</span> (a function, i.e. a vector of a Hilbert space). And since <code>E</code> is a constant (the energy), this is just an eigenvalue equation.</p>
<p>Have a look at: <a href="https://math.stackexchange.com/questions/579453/real-world-application-of-fourier-series/3729366#3729366">Real world application of Fourier series</a> to get a feeling for separation of variables works for a simpler equation like the heat equation.</p>
<p><strong>Heuristic argument of why Google PageRank comes down to a diagonalization problem</strong></p>
<p><a href="https://en.wikipedia.org/wiki/PageRank" rel="nofollow noreferrer">PageRank</a> was mentioned at <a href="https://math.stackexchange.com/a/263154/53203">https://math.stackexchange.com/a/263154/53203</a>, but I wanted to add one cute hand-wavy intuition.</p>
<p>PageRank is designed to have the following properties:</p>
<ul>
<li>the more links a page has incoming, the greater its score</li>
<li>the greater its score, the more the page boosts the rank of other pages</li>
</ul>
<p>The difficulty then becomes that pages can affect each other circularly, for example suppose:</p>
<ul>
<li>A links to B</li>
<li>B links to C</li>
<li>C links to A</li>
</ul>
<p>Therefore, in such a case</p>
<ul>
<li>the score of B depends on the score of A</li>
<li>which in turn depends on the score of C</li>
<li>which in turn depends on the score of B</li>
<li>so the score of B depends on itself!</li>
</ul>
<p>Therefore, one can feel that theoretically, an "iterative approach" cannot work: we need to somehow solve the entire system in one go.</p>
<p>And one may hope, that once we assign the correct importance to all nodes, and if the transition probabilities are linear, an equilibrium may be reached:</p>
<pre><code> Transition matrix * Importance vector = 1 * Importance vector
</code></pre>
<p>which is an eigenvalue equation with eigenvalue 1.</p>
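<p>For the three-page cycle above, the uniform score vector is exactly such an eigenvector; a minimal Python check (omitting the damping factor that real PageRank adds):</p>

```python
# Column-stochastic transition matrix for the cycle A -> B -> C -> A:
# column j says where page j's score flows (A's score flows to B, etc.).
M = [[0.0, 0.0, 1.0],
     [1.0, 0.0, 0.0],
     [0.0, 1.0, 0.0]]

def transition(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

u = [1/3, 1/3, 1/3]              # equal importance for every page
assert transition(M, u) == u     # M u = 1 * u: eigenvector with eigenvalue 1
```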
<p><strong>Markov chain convergence</strong></p>
<p><a href="https://en.wikipedia.org/wiki/Markov_chain" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Markov_chain</a></p>
<p>This is closely related to the above Google PageRank use-case.</p>
<p>The equilibrium also happens on the vector with eigenvalue 1, and convergence speed is dominated by the ratio of the two largest eigenvalues.</p>
<p>See also: <a href="https://www.stat.auckland.ac.nz/%7Efewster/325/notes/ch9.pdf" rel="nofollow noreferrer">https://www.stat.auckland.ac.nz/~fewster/325/notes/ch9.pdf</a></p>
|
507,827 | <p>Let $a_n$ be a positive sequence. Prove that
$$\limsup_{n\to \infty} \left(\frac{a_1+a_{n+1}}{a_n}\right)^n\geqslant e.$$</p>
| Siméon | 51,594 | <p>Without loss of generality we can assume $a_1=1$.</p>
<p>Taking logarithms and <em>seeking a contradiction</em>, suppose that there exists $0 < \alpha < 1$ such that for all $n$ large enough,
$$
\ln \left(\dfrac{1+a_{n+1}}{a_n}\right) = \ln a_{n+1} - \ln a_n + \ln\left(1+\frac{1}{a_{n+1}}\right)\leq \frac{\alpha}{n}
$$</p>
<p>From this inequality we deduce that $\ln a_{n+1} - \ln a_n \leq \alpha/n$, since $\ln\left(1+\frac{1}{a_{n+1}}\right) \geq 0$.
Summing up, this yields $\ln a_n \leq \alpha\ln n + O(1)$ so $\fbox{$a_n \leq C\,n^\alpha$}$ for some $C > 0$.</p>
<p>Assuming (see below) that we can prove $\lim a_n = +\infty$, we find
$$
\ln a_{n+1} + S_n - T_n \leq \ln a_{n+1} + \sum_{k=1}^{n+1}\ln\left(1+\frac{1}{a_k}\right) \leq \alpha \ln n + O(1)
$$
with
$$
S_n = \sum_{k=1}^n \frac{1}{a_k},\qquad T_n = \sum_{k=1}^n \frac{1}{a_k^2}.
$$
This is absurd because $T_n$ is negligible with respect to $S_n$ and
$$
S_n \geq \frac{1}{C} \sum_{k=1}^n\frac{1}{k^\alpha} \sim \frac{1}{C(1-\alpha)}n^{1-\alpha}.$$</p>
<hr>
<p>In order to prove that $\lim a_n =+\infty$, start from
$$
\ln a_n \geq -\frac{\alpha}{n} + \ln(1+a_{n+1}) \geq -\frac{\alpha}{n}
$$
This gives $\liminf a_n \geq \lim e^{-\alpha/n} = 1$.</p>
<p>Then write $\liminf a_n \geq \liminf e^{-\alpha/n}(1+a_{n+1}) \geq 2$, and so on... you can show that $\liminf a_n \geq K$ for every integer $K \geq 1$.</p>
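<p>As a numerical illustration (not part of the proof): for the sample sequence $a_n = n$ the quantity $\left(\frac{a_1+a_{n+1}}{a_n}\right)^n = \left(\frac{n+2}{n}\right)^n$ tends to $e^2 \geq e$:</p>

```python
import math

def q(n):                  # ((a_1 + a_{n+1}) / a_n)^n for the choice a_n = n
    return ((1 + (n + 1)) / n) ** n

vals = [q(n) for n in (10, 100, 1000, 10000)]
assert all(v >= math.e for v in vals)
assert abs(vals[-1] - math.e ** 2) < 0.01   # this sample sequence attains e^2
```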
|
142,677 | <p>Consider the following list of equations:</p>
<p>$$\begin{align*}
x \bmod 2 &= 1\\
x \bmod 3 &= 1\\
x \bmod 5 &= 3
\end{align*}$$</p>
<p>How many equations like this do you need to write in order to uniquely determine $x$?</p>
<p>Once you have the necessary number of equations, how would you actually determine $x$?</p>
<hr/>
<p><strong>Update:</strong></p>
<p>The "usual" way to describe a number $x$ is by writing</p>
<p>$$x = \sum_n 10^n \cdot a_n$$</p>
<p>and listing the $a_n$ values that aren't zero. (You can also extend this to some radix other than 10.)</p>
<p>What I'm interested in is whether you could instead express a number by listing all its residues against a suitable set of modulii. (And I'm <em>guessing</em> that the prime numbers would constitute such a "suitable set".)</p>
<p>If you were to do this, how many terms would you need to quote before a third party would be able to tell which number you're trying to describe?</p>
<p>That was my question. However, since it appears that the Chinese remainder theorem is extremely hard, I guess this is a bad way to denote numbers...</p>
<p>(It also appears that $x$ will never be uniquely determined without an upper bound.)</p>
| Bill Dubuque | 242 | <p><strong>Hint</strong> $ $ It <em>can</em> be done simply without CRT. $\rm\:x\equiv -2\:\ (mod\ \rm3,5)\iff x\equiv -2\equiv 13\pmod{ 15}\:$ Now since $13\equiv 1\pmod 2\:$ we conclude $\rm\:x\equiv 13\:\ (mod\ 2,15)\iff x\equiv 13\pmod{30}\:$ </p>
<p>Hence your hunch was correct: it <em>is easy</em> (these are often warm-up exercises to CRT). </p>
<p>Thus this <em>constant case</em> of CRT is solved simply by taking least common multiple of moduli:</p>
<p>$$\rm x\equiv a\ (mod\ m,n)\!\iff\! m,n\:|\:x\!-\!a\!\iff\! lcm(m,n)\:|\:x\!-\!a\!\iff\! x\equiv a\ (mod\: lcm(m,n)) $$</p>
<p>This simple constant-case optimization of CRT arises quite frequently in practice, esp. for small moduli (by the law of small numbers), so it is well worth checking for. For further examples, see <a href="https://math.stackexchange.com/a/20259/242">here</a> where it simplified a few page calculation to a few lines, and <a href="https://math.stackexchange.com/a/64228/242">here</a> and <a href="https://math.stackexchange.com/a/73541/242">here.</a></p>
<p>Note that I chose to eliminate the largest moduli first, i.e. $\rm\:x\equiv -2\ mod\ 3,5\:$ vs. $\rm\:x\equiv 1\ mod\ 2,3\:$ since that leaves the remaining modulus minimal ($= 2 $ vs. $5$ above), which generally simplifies matters if we need to apply the full CRT algorithm in the final step (luckily we did not above).</p>
<p><strong>Update</strong> $ $ Regarding your update: knowing the residues of $\rm\:n\:$ modulo a finite set $\rm\:S\:$ of moduli only determines $\rm\:n\:$ modulo $\rm\:lcm\:S.\:$ However, if $\rm\:S\:$ is infinite (e.g. all primes), then the residues do determine $\rm\:n\:$ uniquely from the residue of any modulus $\rm > n$. </p>
<p>In cases where one is working with bounded size integers such modular representations can prove effective for computational purposes, esp. if the moduli are chosen related to machine word size, so to simplify arithmetic. See any good textbook on computer algebra, which will discuss not only this but many other instances of modular reduction - a ubiquitous technique in algebraic computation.</p>
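<p>The worked example can be confirmed by brute force in Python:</p>

```python
# Solve x ≡ 1 (mod 2), x ≡ 1 (mod 3), x ≡ 3 (mod 5) by direct search.
sols = [x for x in range(60) if x % 2 == 1 and x % 3 == 1 and x % 5 == 3]
assert sols == [13, 43]                   # i.e. x ≡ 13 (mod 30)

# the constant-case shortcut: x ≡ -2 (mod 3) and (mod 5) gives x ≡ 13 (mod 15)
assert all(x % 15 == 13 for x in sols)
```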
|
400,749 | <p>The Extreme Value Theorem says that if $f(x)$ is continuous on the interval $[a,b]$ then there are two numbers, $a≤c$ and $d≤b$, so that $f(c)$ is an absolute maximum for the function and $f(d)$ is an absolute minimum for the function. </p>
<p>So, if we have a continuous function on $[a,b]$ we're guaranteed to have both absolute maximum and absolute minimum, but functions that aren't continuous can still have either an absolute min or max? </p>
<p>For example, the function $f(x)=\frac{1}{x^2}$ on $[-1,1]$ isn't continuous at $x=0$ since the function is approaching infinity, so this function doesn't have an absolute maximum.
Another example: suppose a graph is on a closed interval and there is a jump discontinuity at a point $x=c$, and this point is the absolute minimum. </p>
<p>The extreme value theorem requires continuity in order for absolute extrema to exist, so why can there be extrema where the function isn't continuous? </p>
| William Stagner | 49,220 | <p>This is a difference between <strong>necessary</strong> and <strong>sufficient</strong> conditions, see <a href="http://en.wikipedia.org/wiki/Necessary_and_sufficient" rel="nofollow">here</a>.</p>
<p>The extreme value theorem states that continuity on a closed interval is <strong>sufficient</strong> to ensure that the function attains a maximum and minimum. However, this condition is not <strong>necessary</strong>. Consider
$$
f(x) = \begin{cases}
1, \text{ for $x=0$} \\
0, \text{ elsewhere}.
\end{cases}
$$
Clearly, $\max(f)=1$, $\min(f)=0$, but $f$ is discontinuous at $x=0$.</p>
|
400,749 | <p>The Extreme Value Theorem says that if $f(x)$ is continuous on the interval $[a,b]$ then there are two numbers, $a≤c$ and $d≤b$, so that $f(c)$ is an absolute maximum for the function and $f(d)$ is an absolute minimum for the function. </p>
<p>So, if we have a continuous function on $[a,b]$ we're guaranteed to have both absolute maximum and absolute minimum, but functions that aren't continuous can still have either an absolute min or max? </p>
<p>For example, the function $f(x)=\frac{1}{x^2}$ on $[-1,1]$ isn't continuous at $x=0$ since the function is approaching infinity, so this function doesn't have an absolute maximum.
Another example: suppose a graph is on a closed interval and there is a jump discontinuity at a point $x=c$, and this point is the absolute minimum. </p>
<p>The extreme value theorem requires continuity in order for absolute extrema to exist, so why can there be extrema where the function isn't continuous? </p>
| user64494 | 64,494 | <p><a href="http://en.wikipedia.org/wiki/Semi-continuity" rel="nofollow">Semi-continuous</a> functions have this property.</p>
|
2,325,421 | <blockquote>
<p>If $f$ is a linear function such that $f(1, 2) = 0$ and $f(2, 3) = 1$, then what is $f(x, y)$?</p>
</blockquote>
<p>Any help is well received.</p>
| Sahiba Arora | 266,110 | <p><strong>Hint:</strong> $\{(1,2),(2,3)\}$ forms a basis of $\mathbb{R}^2$.</p>
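<p>Completing the hint (assuming, as usual, that "linear" means $f(x,y)=ax+by$): solving the resulting $2\times 2$ system gives $f(x,y)=2x-y$. A quick check:</p>

```python
# a*1 + b*2 = 0 and a*2 + b*3 = 1; subtracting twice the first equation
# from the second gives -b = 1, so b = -1 and then a = 2.
a, b = 2, -1
f = lambda x, y: a * x + b * y     # f(x, y) = 2x - y

assert f(1, 2) == 0
assert f(2, 3) == 1
```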
|
507,454 | <p>I had a geometry class which was taught using the Moore method, where the questions were given but not the answers, and the students were the source of all answers in the class. One of the early questions which we never solved is listed in the title.</p>
<p>In this case, use any reasonable definition of "between-ness". I believe the definition we used was "$B$ is between $A$ and $C$ if and only if $|AB|+|BC|=|AC|$". A collineation is a mapping where every line is mapped to a line. A mapping is a function that operates over the set of points within the given space and returns points in the given space.</p>
<p>When we were studying this question, we managed to get to the point that a between-ness mapping must map lines to line segments. In particular, for every $A-B-C$ (read: "$B$ is between $A$ and $C$") in the image of a between-ness preserving mapping $m$, we could guarantee that the existence of pre-images $P_A, P_C$ with $m(P_A)=A,m(P_C)=C$ implies the existence of a pre-image $P_B$ such that $P_A-P_B-P_C$ and $m(P_B)=B$. I have never seen any full proof of the title statement.</p>
<p>I would like to read any hints that are known for solving this question. Feel free to completely solve it, but please hide the full solution in such a way that I can start with your hint(s) and have an opportunity to finish the solution for myself.</p>
| lab bhattacharjee | 33,337 | <p>Like Stefan,</p>
<p>$$(6a\pm1)^2=36a^2\pm12a+1=24a^2+24\frac{a(a\pm1)}2+1\equiv1\pmod{24}$$ as the product of two consecutive integers is always even</p>
<p>Observe that $6a\pm1$ is not necessarily prime, but $(6a\pm1,6)=1$ </p>
<p>So, any number $p$ relatively prime to $6$ will satisfy this</p>
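<p>The final claim is easy to test exhaustively for small numbers:</p>

```python
# every n coprime to 6 has the form 6a ± 1, and then n^2 ≡ 1 (mod 24)
for n in range(1, 1000):
    if n % 2 != 0 and n % 3 != 0:      # gcd(n, 6) == 1
        assert n * n % 24 == 1
```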
|
2,592,282 | <p>If $f$ and $g$ are one-to-one, then ${f×g}$ and ${f+g}$ are one-to-one. </p>
<p>Is this true? If not, I need some clarification to understand </p>
<p>Thanks</p>
| operatorerror | 210,391 | <p>No. For $f+g$, take
$$
f(x)=x,\;g(x)=-x
$$
making $f+g\equiv 0$, which is not one-to-one. </p>
<p>For $f\times g$, take $f(x)=g(x)=x$, so that $(f\times g)(x)=x^2$.</p>
|
2,592,282 | <p>If $f$ and $g$ are one-to-one, then ${f×g}$ and ${f+g}$ are one-to-one. </p>
<p>Is this true? If not, I need some clarification to understand </p>
<p>Thanks</p>
| Soumajit Das | 431,228 | <p>Well, consider $f(x) = x$ and $g(x) = x^3$.
Both functions are one-to-one, but $f \times g$ (which equals $x^4$) is not one-to-one, while $f+g$ is one-to-one. </p>
|
2,929,887 | <p>Show that if a square matrix <span class="math-container">$A$</span> satisfies the equation <span class="math-container">$p(A)=0$</span>, where <span class="math-container">$p(x) = 2+a_1x+a_2x^2+...+a_kx^k$</span> where <span class="math-container">$a_1,a_2,...,a_k$</span> are constant scalars, then <span class="math-container">$A$</span> must be invertible. Find <span class="math-container">$A^{-1}$</span>.</p>
<p>So I am aware of the fact that <span class="math-container">$A(-A-2I)=I$</span> but I don't see a way with the polynomial to get that form to then show that <span class="math-container">$A$</span> is invertible. The best I was able to do was that <span class="math-container">$p(A) = 2I+a_1A+a_2A^2+...+a_kA^k$</span> and <span class="math-container">$0=2I+a_1A+a_2A^2+...+a_kA^k$</span> since <span class="math-container">$p(A)=0$</span> but even from here I am not sure how to really approach the problem.</p>
| Kavi Rama Murthy | 142,385 | <p>Let <span class="math-container">$Ax=0$</span>. Then <span class="math-container">$0=p(A)x=2x+0+\cdots+0$</span> so <span class="math-container">$x=0$</span>. So <span class="math-container">$A$</span> is injective, hence invertible. </p>
|
3,117,459 | <p>I am interested in approximating the natural logarithm for implementation in an embedded system. I am aware of the Maclaurin series, but it has the issue of only covering numbers in the range (0; 2).</p>
<p>For my application, however, I need to be able to calculate relatively precise results for numbers in the range (0; 100]. Is there a more efficient way of doing so than decomposing each number greater than 2 into a product of factors in the (0; 2) range and summing up the results of the Maclaurin series for each factor?</p>
| Peter Foreman | 631,494 | <p>For the natural logarithm I recently wrote an algorithm for this myself. In my implementation, I found that the fastest method is to use the following iterative method. First take <span class="math-container">$x_0$</span> as an initial approximation to the logarithm, then use
<span class="math-container">$$x_n=x_{n-1}-\frac{2(e^{x_{n-1}}-k)}{e^{x_{n-1}}+k}$$</span>
Where we are attempting to find <span class="math-container">$\ln{(k)}$</span>. For example, to find <span class="math-container">$\ln{(345.67)}$</span>:
<span class="math-container">$$x_0=6$$</span>
<span class="math-container">$$x_1=6-\frac{2(e^6-345.67)}{e^6+345.67}=5.845791252...$$</span>
<span class="math-container">$$x_2=5.845484563...$$</span>
<span class="math-container">$$\ln{(345.67)}=5.845484563...$$</span></p>
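<p>A direct Python implementation of this iteration reproduces the worked example; the update equals $2\tanh\big(\frac{x-\ln k}{2}\big)$, so convergence is cubic once the iterate is close:</p>

```python
import math

def ln_iter(k, x0, steps=6):
    """Approximate ln(k) via x_n = x_{n-1} - 2(e^x - k)/(e^x + k)."""
    x = x0
    for _ in range(steps):
        e = math.exp(x)
        x -= 2.0 * (e - k) / (e + k)
    return x

x = ln_iter(345.67, 6.0)
assert abs(x - math.log(345.67)) < 1e-12
assert abs(x - 5.845484563) < 1e-8     # matches the digits worked out above
```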
|
3,857,071 | <p>I'm doing work with wind direction data, and will be coding a function that checks whether the a given wind direction is bewteen lower and upper limit or bound</p>
<p>e.g.:</p>
<ol>
<li>Is 5 degrees is between 315 degrees and 45 degrees? True</li>
<li>Is 310 degrees between 315 degrees and 45 degrees? False</li>
<li>Is 180 degrees between 45 degrees and 315 degrees? True</li>
</ol>
<p><a href="https://math.stackexchange.com/questions/281172/geometry-question-is-x-degrees-within-plus-or-minus-y-degrees-of-z">This answer</a> to another question is close, in that it has a great way to deal with the wraparound, but I can't quite see how to adapt it for my situation.</p>
| Math Lover | 801,574 | <p>The way I understand, your bounds are <span class="math-container">$\alpha$</span> and <span class="math-container">$\beta$</span> where either of them can be bigger like in your first case <span class="math-container">$\alpha = 315^0, \beta = 45^0$</span> whereas in the third case, it is <span class="math-container">$\alpha = 45^0, \beta = 315^0$</span>.</p>
<p>Say, you need to check for an angle <span class="math-container">$\gamma$</span>.</p>
<p>i) If <span class="math-container">$\alpha \lt \beta$</span>, then if <span class="math-container">$\alpha \le \gamma \le \beta$</span>, then it is indeed between them (TRUE). Otherwise false.</p>
<p>ii) If <span class="math-container">$\alpha \gt \beta$</span>, then if <span class="math-container">$\beta \lt \gamma \lt \alpha$</span>, then consider it is NOT between them (FALSE). Otherwise True.</p>
<p>Take your third case and apply in (i) as <span class="math-container">$\alpha (45^0) \lt \beta (315^0)$</span>. As <span class="math-container">$\gamma = 180^0$</span>, it will return TRUE.</p>
<p>Take your second case and apply in (ii) as <span class="math-container">$\alpha (315^0) \gt \beta (45^0)$</span>. As <span class="math-container">$\gamma = 310^0$</span>, gamma is more than <span class="math-container">$\beta$</span> and less than <span class="math-container">$\alpha$</span>, it will return FALSE.</p>
<p>Similarly your first case will return true as the value of <span class="math-container">$\gamma (5^0)$</span> is outside <span class="math-container">$(\beta, \alpha)$</span>.</p>
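<p>The two cases translate directly into code; a hypothetical helper implementing them passes all three examples from the question:</p>

```python
def between(alpha, beta, gamma):
    """Case (i): alpha < beta -> True iff alpha <= gamma <= beta.
    Case (ii): alpha > beta -> False iff beta < gamma < alpha."""
    if alpha < beta:
        return alpha <= gamma <= beta
    return not (beta < gamma < alpha)

assert between(315, 45, 5)          # example 1: True
assert not between(315, 45, 310)    # example 2: False
assert between(45, 315, 180)        # example 3: True
```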
|
216,082 | <ol>
<li><p>How to prove that $\mathbb{R}^k$ is connected?</p></li>
<li><p>Let $C$ be an infinite connected set in $\mathbb{R}^k$. How can I show that $C\bigcap \mathbb{Q}^k$ is nonempty?</p></li>
</ol>
| DonAntonio | 31,254 | <p>1) For any $\,a,b\in\Bbb R^k\,$, the straight-line segment $\,\{(1-t)a+tb\;:\;t\in[0,1]\}\,$ is completely contained in $\,\Bbb R^k\,$, so $\Bbb R^k$ is path connected and thus connected.</p>
|
1,722,692 | <p>I am asked to find</p>
<p>$$\lim_{x \to 0} \frac{\sqrt{1+x \sin(5x)}-\cos(x)}{\sin^2(x)}$$</p>
<p>and I tried not to use L'Hôpital but it didn't seem to work. After using it, same thing: the fractions just gets bigger and bigger.</p>
<p>Am I missing something here?</p>
<p>The answer is $3$</p>
| Henricus V. | 239,207 | <p>Use the equivalent infinitesimal
$$ \lim_{x \to 0} \frac{(\sin x)^2}{x^2} = 1
$$
to change the denominator. Now l'Hôpital only needs to be applied twice.</p>
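<p>A quick numerical check that the limit is indeed $3$:</p>

```python
import math

def g(x):
    return (math.sqrt(1 + x * math.sin(5 * x)) - math.cos(x)) / math.sin(x) ** 2

# evaluate near 0 (but not so near that floating-point cancellation dominates)
assert abs(g(1e-3) - 3.0) < 1e-2
assert abs(g(1e-4) - 3.0) < 1e-3
```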
|
2,922,077 | <p>An urn contains $29$ red, $18$ green, and $12$ yellow balls. Draw two balls without replacement What is the probability that the number of red balls in the sample is exactly $1$ or the number of yellow balls in the sample is exactly $1$ (or both)? What about with replacement? </p>
<p>I can't seem to figure this out. My closest attempt </p>
<p>${\frac{29 \choose 1}{59 \choose 2}} + {\frac{12 \choose 1}{59 \choose 2}} + {\frac{29 \choose 1 }{59 \choose 2}} * {\frac{29 \choose 1}{59 \choose 2}}$</p>
| Mike Earnest | 177,399 | <p>Using the fact that the function $f(t)=|t|^p$ is convex, we have
\begin{align}
|b+a|^p+|b-a|^p
&=2\big(\tfrac12f(b+a)+\tfrac12f(b-a)\big)\\
&\ge 2f\big(\tfrac12(b+a)+\tfrac12(b-a)\big)\\
&=2f(b) = 2|b|^p.
\end{align}
To see $f$ is convex, note $f''(t)=p(p-1)|t|^{p-2}\ge 0$. This suffices for all $p\ge 2$. For $p=1$, you can prove $f$ is convex using the triangle inequality, or just prove the inequality directly.</p>
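<p>A randomized sanity check of the inequality $|b+a|^p + |b-a|^p \geq 2|b|^p$ over the range $1 \leq p \leq 6$:</p>

```python
import random

random.seed(0)
for _ in range(10000):
    a = random.uniform(-10, 10)
    b = random.uniform(-10, 10)
    p = random.uniform(1, 6)
    lhs = abs(b + a) ** p + abs(b - a) ** p
    rhs = 2 * abs(b) ** p
    assert lhs >= rhs * (1 - 1e-12)    # tiny slack for floating point
```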
|
1,500,156 | <p>Call a function $f: \mathbb{R}^n \rightarrow \mathbb{R}$ nondecreasing if $x,y \in \mathbb{R}^n$ with $x \geq y$ implies $f(x) \geq f(y)$. Suppose $f$ is a nondecreasing, but not necessarily continuous, function on $\mathbb{R}^n$, and $D \subset \mathbb{R}^n$ is compact. Show that if $n = 1$, $f$ always has a maximum on $D$. Show also that if $n > 1$, this need no longer be the case.</p>
<p>I'm stuck on the second statement. How can I find a nondecreasing function ($f: \mathbb{R}^n \rightarrow \mathbb{R}$) on a compact domain that has no maximum?</p>
<p>Any thoughts would be appreciated.
Thanks</p>
| Ben Grossmann | 81,360 | <p><strong>Hint:</strong> Consider the set
$$
A = \{(t,-t) \in \Bbb R^2: t \in [0,1]\}
$$
note that any function on $A$ is, by definition, non-decreasing.</p>
|
733,101 | <p>I've been stuck for a while on this question and haven't found applicable resources.</p>
<p>I have 10 choices and can select 3 at a time. I am allowed to repeat choices (combination), but the challenge is that ABA and AAB are not unique.</p>
<p>10 choose 3 is the question.</p>
<p>I have been working on a smaller set to find a formula. 3 choose 3.</p>
<p>I came up with 27 results (if order matters) and 10 results if order doesn't matter: AAA, AAB, AAC, ABB, ABC, ACC, BBB, BBC, BCC, CCC</p>
<p>How do I go about solving these problems.</p>
<p>My closest hypothesis is choices^slots / slots! == 3^3/3!</p>
| Calvin Lin | 54,563 | <p>This approach works because 3 is small.</p>
<p>Case 1: 3 distinct objects - ABC<br>
There are ${ 10 \choose 3}$ ways here.</p>
<p>Case 2: 2 distinct objects, 1 repeated twice - AAB<br>
There are $ 10 \times 9$ ways here.</p>
<p>Case 3: 1 distinct object, 1 repeated thrice - AAA<br>
There are $10$ ways here.</p>
<hr>
<p>This process becomes tedious as 3 gets large. You should look up Burnside's lemma for the general case.</p>
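The three cases can be cross-checked by brute force; here is a short sketch in Python (the labels `range(10)` are arbitrary stand-ins for the 10 choices):

```python
from itertools import combinations_with_replacement
from math import comb

# Multisets of size 3 drawn from 10 labelled choices: order ignored,
# repetition allowed.
choices = range(10)
total = sum(1 for _ in combinations_with_replacement(choices, 3))

# The three cases from the answer: all distinct, one pair, one triple.
by_cases = comb(10, 3) + 10 * 9 + 10

print(total, by_cases)  # 220 220
```

Both counts agree with the multiset coefficient $\binom{12}{3}=220$.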
|
2,336,744 | <p>For any two coprime numbers $a,b \in \mathbb{N}$ with $a>b+2$, we have that $a^2+b^2$ is not divisible by $a-b$.
But how can one prove this?</p>
| Paolo Leonetti | 45,736 | <p>If $a-b \mid a^2+b^2$ then $\gcd(a^2+b^2,a-b)=a-b \ge 3$. Now
$$
\gcd(a^2+b^2,a-b)=\gcd(a^2+b^2-(a-b)^2,a-b)=\gcd(2ab,a-b).
$$
Since $\gcd(a,b)=1$, no prime dividing $a$ or $b$ can divide $a-b$ (it would then divide both $a$ and $b$); and if $a-b$ is even, then $a$ and $b$ are both odd, so $2ab$ carries only a single factor of $2$. Hence
$$
\gcd(a^2+b^2,a-b) \in \{1,2\}.
$$
In particular, your conjecture is true.</p>
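A brute-force sanity check of the conjecture over a small range (my own sketch; the range bounds are arbitrary):

```python
from math import gcd

# Search for counterexamples: coprime a, b with a > b + 2 such that
# a - b divides a^2 + b^2.  By the argument above there should be none.
counterexamples = [(a, b)
                   for b in range(1, 200)
                   for a in range(b + 3, 201)
                   if gcd(a, b) == 1 and (a * a + b * b) % (a - b) == 0]
print(counterexamples)  # []
```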
|
1,726,416 | <p>$\displaystyle \sum_{n=1}^\infty (-1)^n\dfrac{1}{n}\cdot\dfrac{1}{2^n}$</p>
<p>Knowing that</p>
<ol>
<li>An alternating harmonic series is always convergent</li>
<li>Riemann series are always convergent when $p>1$</li>
</ol>
<p>Is it safe to say that the product of these two is convergent (as described above)?</p>
| DeepSea | 101,504 | <p><strong>hint</strong>: Your series is absolutely convergent because $|a_n| \leq \left(\dfrac{1}{2}\right)^n$. And if you have $2$ converging series, say $\displaystyle \sum_{k=1}^\infty a_k, \displaystyle \sum_{k=1}^\infty b_k$, then their product as you define it yourself $\displaystyle \sum_{k=1}^\infty a_kb_k$ may or may not converge depending on the general terms $a_k,b_k$. You can take for example $a_k = b_k = (-1)^k\dfrac{1}{\sqrt{k}}$, and you see each is conditionally convergent hence each is convergent, but their product by your way is a diverging harmonic series.</p>
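As a numerical sanity check (my addition): the series in question is the $x=\tfrac12$ case of $\ln(1+x)=\sum_{n\ge1}(-1)^{n+1}x^n/n$, so its sum should be $-\ln\tfrac32$:

```python
from math import log

# Partial sums of sum_{n>=1} (-1)^n / (n 2^n); the expansion of
# ln(1+x) at x = 1/2 predicts the value -ln(3/2).
partial = sum((-1) ** n / (n * 2 ** n) for n in range(1, 60))
print(partial, -log(1.5))
```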
|
210,401 | <p>In other words, given a sequence $(s_n)$, how can we tell if there exist irrationals $u>1$ and $v>1$ such that </p>
<p>$$s_n = \lfloor un\rfloor + \lfloor vn\rfloor$$</p>
<p>for every positive integer $n$?</p>
<p>A few thoughts: Graham and Lin (<em>Math. Mag.</em> 51 (1978) 174-176) give a test for $(s_n)$ to be a single Beatty sequence $(\lfloor un\rfloor)$ (which they call the spectrum of $u$). Perhaps someone knows a reference for a test for sums of two or more Beatty sequences? A special case would be a test for a given sequence $(s_n)$ to be the sum of two <em>complementary</em> Beatty sequences (i.e., $1/u + 1/v = 1$).</p>
<p>In response to comments, the part of the question that says "for every positive integer n" indicates that the intended sequence is infinite. It seems to me that the question, as stated above, is okay. If it's undecidable - well, that's of interest. </p>
<p>In any case, while it may be difficult to give a test that actually finds $u$ and $v$ when such numbers exist, there are some simple tests for deciding that $(s_n)$ is <i>not</i> a sum of two Beatty sequences:</p>
<p>(1) $\lim_{n\to\infty}s_n/n$ must exist;</p>
<p>(2) if $u$ and $v$ exist, then $u+v=\lim_{n\to\infty}s_n/n$; </p>
<p>(3) $\lfloor(u+v)n\rfloor \in \{\lfloor un\rfloor+\lfloor vn\rfloor,\,\lfloor un\rfloor+\lfloor vn\rfloor+1\}$ for every $n$;</p>
<p>(4) if $(s_n)$ is a sum of two Beatty sequences, then the difference sequence of $(s_n)$ consists of at most three terms; and if there are three, then they are consecutive integers.</p>
<p>It's easy to see how each of those generalizes to give a "negative test"; that is, a way to see that a given $(s_n)$ is not a sum of any prescribed number of Beatty sequences. I hope that someone can find more "negative tests", or even better, a "positive test", perhaps similar to Graham and Lin's result. </p>
| Will Sawin | 18,060 | <p>Let's use the notation $\{ x\}$ for the fractional part of a number $x$.</p>
<p>Assume $u, v$, and $u/v$ are all irrational.</p>
<p>Then, $\{un\}$ and $\{vn\}$ behave as independent uniform random variables. (This is proved by Fourier analysis, vindicating James Cranch's suggestion.) The difference $s_{n+1}-s_n$ depends on $\{un\}$ and $\{vn\}$:</p>
<p>It is $\lfloor u \rfloor + \lfloor v \rfloor$ if $\{un\} < 1- \{u\}$ and $\{vn\} < 1 - \{v\}$</p>
<p>$\lceil u \rceil + \lceil v \rceil$ if $\{un\} \geq 1- \{u\}$ and $\{vn \} \geq 1- \{ v\}$</p>
<p>and the integer intermediate between those two otherwise.</p>
<p>So by measuring the frequencies with which $s_{n+1}-s_n$ takes its maximal value, its minimal value, or the intermediate one, we can determine $\{u\}\{v\}$ and $(1-\{u\})(1-\{v\})$, and hence, by solving the resulting equations, $\{u\}$ and $\{v\}$ themselves. Then clearly how we distribute the integer part between $u$ and $v$ does not affect $\lfloor un \rfloor + \lfloor vn \rfloor$, so we can choose any values of $u$ and $v$ with the correct fractional parts and sum and plug them in to see if they fit our sequence.</p>
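Here is a numerical sketch of that recovery procedure (my own illustration; $u=\sqrt2$ and $v=\sqrt3$ are just sample values with $u/v$ irrational):

```python
from math import floor, sqrt

# Sample values with u/v irrational.
u, v = sqrt(2), sqrt(3)
N = 200_000
s = [floor(u * n) + floor(v * n) for n in range(1, N + 2)]
d = [s[i + 1] - s[i] for i in range(N)]

p = d.count(max(d)) / N  # frequency of the maximal increment ~ {u}{v}
q = d.count(min(d)) / N  # frequency of the minimal increment ~ (1-{u})(1-{v})

# Solve alpha*beta = p and (1-alpha)(1-beta) = q for the fractional parts.
total = 1 + p - q                  # alpha + beta
disc = total * total - 4 * p
alpha = (total - sqrt(disc)) / 2
beta = (total + sqrt(disc)) / 2
print(alpha, beta)  # close to sqrt(2)-1 and sqrt(3)-1
```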
<p>In fact the same thing works if just $u/v$ is irrational, because they are independent non-uniform random variables, and the probability that $\{un\}$ and $\{vn\}$ is in the range to increase by a larger amount is still $\{u\}$ or $\{v\}$ just because the average increase of $\lfloor un\rfloor$ or $\lfloor vn \rfloor$ must be $u$ and $v$ respectively.</p>
<p>Does this still work even if the ratio $u/v$ is rational?</p>
|
1,583,887 | <p>This problem is from an Introduction to Abstract Algebra by Derek John that I am solving.</p>
<p>I am trying to prove that no group of order 1960 is simple, so I am arguing by contradiction, but I got stuck in the middle.</p>
<p>Suppose that $|G| = 1960 = 2^3 \cdot 5 \cdot 7^2$. By Sylow theory, $G$ has Sylow $2$-, $5$-, and $7$-subgroups. After some computations I got the least possible values (greater than $1$) for the $n_p$: $n_2 = 5$, $n_7 = 8$, and $n_5 = 56$, but I don't know how to proceed further. </p>
| 2'5 9'2 | 11,123 | <p>If the group is simple, then there actually <em>must</em> be $8$ Sylow-$7$ subgroups: once $n_7=1$ (a normal Sylow $7$-subgroup) is ruled out, it's the only option, given the Sylow theorems.</p>
<p>So $G$ permutes these $8$ subgroups through conjugation. That means there is a map from $G$ to $S_8$. The kernel of this map is a normal subgroup, so the kernel is either all of $G$ or is trivial. The map itself is nontrivial since Sylow-7 subgroups can be permuted to one another. This means the kernel cannot be all of $G$. On the other hand, $7^2$ does not divide $8!$, so the kernel cannot be trivial. So there actually cannot be $8$ Sylow-$7$ subgroups. There must only be $1$, precluding $G$ from being simple.</p>
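The two arithmetic facts used here are easy to machine-check (a small sketch):

```python
from math import factorial

# n_7 divides 1960 / 49 = 40 and n_7 = 1 (mod 7):
n7_options = [d for d in range(1, 41) if 40 % d == 0 and d % 7 == 1]
print(n7_options)  # [1, 8]

# 7^2 = 49 does not divide 8!, so a group of order 1960 cannot embed in S_8:
print(factorial(8) % 49)  # 42, in particular nonzero
```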
|
2,701,582 | <p>I thought I was doing this right until I checked my answer online and got a different one. I worked through the problem again and got my original answer a second time so this one is bothering me since the other similar ones I have done checked out okay. Please let me know if I'm doing something wrong, thanks!</p>
<blockquote>
<p>Find $x, y \in \mathbb{Z}$ such that $475x+2018y=1$, then find $475^{-1} \pmod{2018}$.</p>
</blockquote>
<p>Since it is in the form of $ax+by=1$ I know that the $\gcd(a,b)=1$. I still did the division algorithm since that helps me with the back substitution. Here is what I got.</p>
<p>Division Algm:</p>
<ul>
<li><p>$2018=(4\times 475)+118$</p></li>
<li><p>$475=(4\times 118)+3$</p></li>
<li><p>$118=(39\times 3)+1$</p></li>
</ul>
<p>Back Sub:</p>
<ul>
<li><p>$1=118-(39\times 3)$</p></li>
<li><p>$1=118-39(475-(4\times 118))$</p></li>
<li><p>$1=(157\times 118)-(39\times 475)$</p></li>
<li><p>$1=157\times (2018-(4\times 475))-(39\times 475)$</p></li>
<li><p>$1=(157\times 2018)-(667\times 475)$</p></li>
</ul>
<p>So $x=667$ and $y=157$</p>
<p>The second question I answered from the first part: since $475x \equiv 1 \pmod{2018}$, that would just be $667$ from the first part. Any help is appreciated, thanks! </p>
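One way to double-check the back substitution is to run the extended Euclidean algorithm (a sketch; note the sign of the coefficient of $475$ it returns):

```python
# Extended Euclidean algorithm: returns (g, x, y) with a*x + b*y = g = gcd(a, b).
def ext_gcd(a, b):
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

g, x, y = ext_gcd(475, 2018)
print(g, x, y)  # 1 -667 157, i.e. 475*(-667) + 2018*157 = 1
```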
| José Carlos Santos | 446,262 | <p>No. Let $A$ be the set of all transpositions. Then $\langle A\rangle=S_n$, but no subset of $A$ generates $A_n$, since no transposition belongs to $A_n$.</p>
|
2,701,582 | <p>I thought I was doing this right until I checked my answer online and got a different one. I worked through the problem again and got my original answer a second time so this one is bothering me since the other similar ones I have done checked out okay. Please let me know if I'm doing something wrong, thanks!</p>
<blockquote>
<p>Find $x, y \in \mathbb{Z}$ such that $475x+2018y=1$, then find $475^{-1} \pmod{2018}$.</p>
</blockquote>
<p>Since it is in the form of $ax+by=1$ I know that the $\gcd(a,b)=1$. I still did the division algorithm since that helps me with the back substitution. Here is what I got.</p>
<p>Division Algm:</p>
<ul>
<li><p>$2018=(4\times 475)+118$</p></li>
<li><p>$475=(4\times 118)+3$</p></li>
<li><p>$118=(39\times 3)+1$</p></li>
</ul>
<p>Back Sub:</p>
<ul>
<li><p>$1=118-(39\times 3)$</p></li>
<li><p>$1=118-39(475-(4\times 118))$</p></li>
<li><p>$1=(157\times 118)-(39\times 475)$</p></li>
<li><p>$1=157\times (2018-(4\times 475))-(39\times 475)$</p></li>
<li><p>$1=(157\times 2018)-(667\times 475)$</p></li>
</ul>
<p>So $x=667$ and $y=157$</p>
<p>The second question I answered from the first part: since $475x \equiv 1 \pmod{2018}$, that would just be $667$ from the first part. Any help is appreciated, thanks! </p>
| Andrea Mori | 688 | <p>Certainly not for <em>any</em> set of generators.</p>
<p>For instance, one knows that $S_n$ is generated by (some) cycles of length 2, and not all subgroups have the same property.</p>
|
68,817 | <p>I have two questions after reading the Hahn-Banach theorem from Conway's book (I have googled for the answer but have not found any result yet. Also I am not sure whether my questions have been asked somewhere on this forum - so please feel free to delete them if they are not appropriate.)</p>
<p>Here are my questions: </p>
<ol>
<li><p>We know that if $M$ is a linear subspace of $X$ and $f :M\to\mathbb{F}$ is linear and bounded by a seminorm $p$, then $f$ can be extended to $X$ by some functional $F$. Can $F$ be unique? Under what condition will $F$ be the unique extension? It would be appreciated if you could give one example in which $F$ is not unique.</p></li>
<li><p>If the above $\mathbb{F}$ is replaced a Banach space $Y$, i.e, let $M$ be a closed subspace of a Banach space $X$, and $f :M\to Y$ be a bounded linear operator, can we extend $f$ by a bounded operator $F :X\to Y$ ? if not, what condition should be put on $Y$ to have a such extension?</p></li>
</ol>
<p>thanks so much </p>
| Gerald Edgar | 454 | <p>$Y$ is called an <em>injective Banach space</em> if the extension exists for all $X$, $M$, and $f$. An example is $Y = l^\infty$. (Should be in Banach space text books. Here's a paper: <a href="http://www.jstor.org/pss/1998210" rel="nofollow">http://www.jstor.org/pss/1998210</a> )</p>
|
68,817 | <p>I have two questions after reading the Hahn-Banach theorem from Conway's book (I have googled for the answer but have not found any result yet. Also I am not sure whether my questions have been asked somewhere on this forum - so please feel free to delete them if they are not appropriate.)</p>
<p>Here are my questions: </p>
<ol>
<li><p>We know that if $M$ is a linear subspace of $X$ and $f :M\to\mathbb{F}$ is linear and bounded by a seminorm $p$, then $f$ can be extended to $X$ by some functional $F$. Can $F$ be unique? Under what condition will $F$ be the unique extension? It would be appreciated if you could give one example in which $F$ is not unique.</p></li>
<li><p>If the above $\mathbb{F}$ is replaced a Banach space $Y$, i.e, let $M$ be a closed subspace of a Banach space $X$, and $f :M\to Y$ be a bounded linear operator, can we extend $f$ by a bounded operator $F :X\to Y$ ? if not, what condition should be put on $Y$ to have a such extension?</p></li>
</ol>
<p>thanks so much </p>
| godelian | 12,976 | <p>Not quite what you asked, but related:</p>
<p>Continuous extensions of (continuous) functionals from $M$ are unique if and only if $M$ is a dense subspace of $X$. Otherwise its closure is a proper closed subspace and therefore there exists a nonzero bounded functional $\phi$ vanishing on the closure, which implies that $F+\phi$ is bounded and also extends $f$.</p>
<p>For the second question, it is easier to put conditions on $M$ so that for every $Y$, every map from $M$ to $Y$ can be extended. As mentioned in the comments, a necessary and sufficient condition is that $M$ is a complemented subspace of $X$.</p>
|
623,796 | <p>What's the domain of the function $f(x) = \sqrt{x^2 - 4x - 5}$ ?</p>
<p>Thanks in advance.</p>
| lsp | 64,509 | <p>$f(x) = \sqrt{x^2 - 4x - 5}$</p>
<p>Since the square root of a negative number is not real, the condition is that:
$$x^2 - 4x - 5 \geq 0$$
$$ (x-2)^2 - 9 \geq 0$$
$$ |x-2| \geq 3$$
$$ x-2 \geq 3 \ \text{ or } \ x-2 \leq -3$$
Therefore the domain of the function $f(x)$ would be:
$$(-\infty, -1] \cup [5, +\infty)$$</p>
<p>Hope the answer is clear! And I wish you a happy New Year!</p>
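A quick numerical check of the endpoints (a small sketch; the sample points are arbitrary):

```python
# x^2 - 4x - 5 = (x - 5)(x + 1), so the radicand is >= 0
# exactly when x <= -1 or x >= 5.
def in_domain(x):
    return x * x - 4 * x - 5 >= 0

print([in_domain(x) for x in (-2, -1, 0, 2, 5, 6)])
# [True, True, False, False, True, True]
```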
|
623,796 | <p>What's the domain of the function $f(x) = \sqrt{x^2 - 4x - 5}$ ?</p>
<p>Thanks in advance.</p>
| WLOG | 21,024 | <p>The domain is $D =\{ x \in \mathbb{R} | x^{2}-4x-5 \geq 0 \}$.</p>
<p>The roots of $x^{2}-4x-5$ are $-1$ and $5$ so $D =(-\infty, -1] \cup [5, +\infty)$.</p>
|
182,091 | <p>3D graphics can be easily rotated interactively by clicking and dragging with the mouse.</p>
<p>Is there a simple way to achieve the same for animated 3D graphics? I would like to rotate them interactively (in real time) <em>while</em> the animation is running.</p>
<hr>
<p>Here's an example animation, mostly taken from the documentation.</p>
<pre><code>L = 4;
sol = NDSolveValue[{D[u[t, x, y], t, t] ==
D[u[t, x, y], x, x] + D[u[t, x, y], y, y] + Sin[u[t, x, y]],
u[t, -L, y] == u[t, L, y], u[t, x, -L] == u[t, x, L],
u[0, x, y] == Exp[-(x^2 + y^2)],
Derivative[1, 0, 0][u][0, x, y] == 0},
u, {t, 0, L/2}, {x, -L, L}, {y, -L, L}];
Animate[
Plot3D[sol[t, x, y], {x, -L, L}, {y, -L, L}, PlotRange -> {0, 1},
PlotPoints -> 20, MaxRecursion -> 0],
{t, 0, L/2}
]
</code></pre>
<p>When the animation is stopped, I can rotate the graphics. Then if the animation is started again, the rotation is kept.</p>
<p>However, I cannot rotate <em>while</em> the animation is running. Is there a relatively easy way to enable this?</p>
<p><em>Note:</em> My actual application has an animated plot on the surface of a sphere. The ability to rotate would be very useful.</p>
| Szabolcs | 12 | <p>One possible solution is to make a separate trackball control.</p>
<pre><code>{vp, vv} = {ViewPoint, ViewVertical} /. Options[Graphics3D];
Graphics3D[{Cuboid[]}, Boxed -> False, SphericalRegion -> True,
RotationAction -> "Clip",
Prolog -> {GrayLevel[.8], Disk[Scaled[{1/2, 1/2}], Scaled[1/2]]},
AspectRatio -> 1, ImageSize -> Small, PlotLabel -> "Trackball",
ViewPoint -> Dynamic[vp], ViewVertical -> Dynamic[vv]
]
</code></pre>
<p><img src="https://i.stack.imgur.com/oKHIk.png" alt="Mathematica graphics"></p>
<p>Then add </p>
<pre><code>ViewPoint -> Dynamic[vp], ViewVertical -> Dynamic[vv]
</code></pre>
<p>to the animated graphics that we want to control.</p>
<p>Rotate the trackball and the other <code>Graphics3D</code> will rotate with it.</p>
<p>This is not nearly as good as direct rotation, and does not easily generalize to multiple rotatable graphics.</p>
|
128,122 | <p>Original Question: Suppose that $X$ and $Y$ are metric spaces and that $f:X \rightarrow Y$. If $X$ is compact and connected, and if to every $x\in X$ there corresponds an open ball $B_{x}$ such that $x\in B_{x}$ and $f(y)=f(x)$ for all $y\in B_{x}$, prove that f is constant on $X$. </p>
<p>Here's my attempt:
Cover $X$ by $\bigcup _{x \in X}B_x$.
Since $X$ is compact there is a finite sub-covering $\bigcup _{i=1}^{N}B_{x_i}$ of $X$.
Given $x\in X$ there is an $i$ between $1$ and $N$ such that $x\in B_{x_{i}}$.
By assumption $f(x)=f(x_{i})$. Since there are only finitely many balls covering $X$, $f(X)$ is finite, say $f(X)=\{a_{1},\dots,a_{m}\}$.</p>
<p>Where do I go from here? I want to show that $f(X)$ is a singleton. Is $X$ a singleton too?</p>
| Asaf Karagila | 622 | <p>Since $f(X)=\{a_1,a_2,\ldots,a_n\}$ we can take $A_k=\bigcup\{B_x\mid f(x)=a_k\}$ to be an open set in $X$ for every $k\leq n$. </p>
<p>These are pairwise disjoint open sets and their union is $X$. Use the fact that the space is connected to deduce $n=1$. </p>
|
2,798,026 | <blockquote>
<p>There are three vectors $a$, $b$, $c$ in three-dimensional real vector space, and the inner products between them satisfy $a\cdot a=b\cdot b = a\cdot c= 1$, $a\cdot b= 0$, $c\cdot c= 4$. Setting $x = b\cdot c$, answer the following question: when $a, b, c$ are linearly dependent, find all possible values of $x$.
(dot here means dot product)</p>
</blockquote>
<p>For the linear dependence condition </p>
<p>$$\begin{align}
(a×b)·c &= 0\\
a·(b×c) &= 0\\
a(bc \sin θ)&=0
\end{align}$$</p>
<p>So $\theta = 0, \pi$.</p>
<p>Then
$$\begin{align}
x&=|b||c| \cos \theta \\
x&=2 \cos \theta \\
\implies x &= 2 \cos 0, x = 2 \cos \pi \\
x &= \mp 2
\end{align}$$</p>
<p>Am I right in using the scalar triple product? Or should I find $\theta$?
<a href="https://i.stack.imgur.com/I8unM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/I8unM.png" alt="enter image description here"></a></p>
| Stefan | 453,800 | <p>For the first identity: $a \times b$ is orthogonal to both $a$ and $b$, and $(a\times b)\cdot c = 0$ means $c$ is orthogonal to $a\times b$, hence $c$ must lie in the plane spanned by the orthonormal $a,b$.</p>
<p>So we can write $c$ as
$$
c = \alpha a + \beta b
$$</p>
<p>plugging this in we see that
$$
1 = a \cdot c = \alpha a \cdot a = \alpha
$$</p>
<p>and
$$
x = b \cdot c = \beta b \cdot b = \beta
$$</p>
<p>hence the norm squared of $c$ is
$$
c\cdot c = \alpha^2 + \beta^2 = 1 + x^2 = 4
$$</p>
<p>this gives
$$
x = \pm \sqrt{3}
$$</p>
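A concrete model makes this easy to verify (my own sketch: take $a=e_1$, $b=e_2$, $c=a+\sqrt3\,b$):

```python
from math import sqrt, isclose

a = (1.0, 0.0, 0.0)
b = (0.0, 1.0, 0.0)
c = (1.0, sqrt(3), 0.0)  # c = 1*a + sqrt(3)*b, so {a, b, c} is dependent

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

print(dot(a, a), dot(b, b), dot(a, b), dot(a, c), dot(c, c), dot(b, c))
```

All the given inner products hold, and $x = b\cdot c = \sqrt3$ is realized (replacing $\sqrt3$ by $-\sqrt3$ gives the other value).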
|
2,328,567 | <p>I'm trying to search for any kind of development in mathematics (science, astronomy, even astrology, or other kinds of early studies that involve any kind of math), especially in early England and in the Carolingian Empire. </p>
<p>The problem I have is that it seems that math died out there: every work seems to be related to Arabic, Chinese, or Indian work in the area. In science I'm looking and nothing seems to appear. So I have the main question:</p>
<blockquote>
<p><em>Is there any reference, article, name of a scientist or mathematician, book, page, etc. that can bring contributions to math history in England or in the Carolingian Empire between 550 and 1050 A.D.? Even the e-mail of someone who is a specialist in the area could work as a fruitful answer.</em></p>
</blockquote>
<p>Thanks for future answers or critiques. </p>
| Riju | 420,099 | <p>Homeomorphism means a continuous bijection whose inverse is continuous too. Now use the fact that $f$ is continuous iff for every open set $U$ of $Y$, $f^{-1}(U)$ is open in $X$. The bijection is needed for the other direction, when you have to prove $f$ is a homeomorphism. $f^{-1}$ exists since $f$ is a bijection, and it is continuous since $f$ is an open map.</p>
|
287,043 | <p>Consider the problem of finding the limit of the following diagram:</p>
<p>$$ \require{AMScd} \begin{CD}
& & & & E
\\ & & & & @VVV
\\ && C @>>> D
\\ & & @VVV
\\A @>>> B
\end{CD} $$</p>
<p>The abstract definition of the limit involves an adjunction related to collapsing the entire index category to a point. However, one could break this operation into two stages: first collapsing the upper three objects to a point reduces it to</p>
<p>$$ \require{AMScd} \begin{CD}
& & C \times_D E
\\ & & @VVV
\\A @>>> B
\end{CD} $$</p>
<p>and then we finish computing the limit as $A \times_B (C \times_D E)$.</p>
<p>This is a particularly convenient thing, since it implies a way to work locally with more complicated diagrams where you ultimately want a limit — i.e. take limits or perform other modifications to smaller pieces of the diagram while leaving the rest unchanged.</p>
<hr>
<p>However, not every variation works out so nicely. If we try the same thing but instead collapse the middle three objects to a point, the intermediate diagram becomes</p>
<p>$$ \require{AMScd} \begin{CD}
& & C \times_D E
\\ & & @VVV
\\A \times_B C@>>> C
\end{CD} $$</p>
<p>So, trying to perform <em>this</em> operation isn't local at all; it modifies the value of the diagram at the other two vertices.</p>
<p>To clarify what I mean, this diagram together with the appropriate "cone" is (I believe) universal among all diagrams with "cones" of the form</p>
<p>$$ \require{AMScd} \begin{CD}
& & \bullet &\to& E
\\ & & @VVV @VVV
\\\bullet @>>> \bullet & \to & D
\\ \downarrow & & \downarrow & \searrow @AAA
\\ A @>>> B @<<< C
\end{CD} $$</p>
<hr>
<p>It seems clear what the the <em>abstract</em> theory behind this sort of calculation should be; just factor the usual adjunction into a sequence of adjunctions.</p>
<p>But my interest in such things is very much not in the <em>abstract</em> — these are the sorts of operations one would like to have as a <em>practical</em> calculus of diagrams.</p>
<p>So my question is if such a calculus is known? Is there worked out how to predict and recognize which sorts of operations really should be local? Or for those operations that are not local, to easily work out how the rest of the diagram gets modified?</p>
<p>(and the bonus question: how much of this carries over to <em>homotopy</em> limits?)</p>
| Dylan Wilson | 6,936 | <p>Let me try to turn my comments into an answer (I think it's also essentially what Vladimir was saying). Suppose you have some diagram $F: K \to \mathcal{C}$. To compute the limit of $F$ is the same as computing the right Kan extension $\epsilon_*F$ along the map $\epsilon: K \to \bullet$. The process you're describing is to compute this Kan extension by factoring the map $\epsilon$ into a bunch of maps $K=K_0 \to K_1 \to K_2 \to \cdots \to \bullet$ and Kan extending one at a time.</p>
<p>That's perfectly allowed. You'd like to keep track of the values of your diagram as you Kan extend. In general, for a functor $p: K \to L$, the value of the right Kan extension $p_*F$ at a point $x \in L$ is given by the limit over the category $K_{x/}$, i.e. the category whose objects are pairs $(k, x \to p(k))$ and whose morphisms are morphisms in $K$ making the appropriate diagram commute. </p>
<p>In general one might ask: when is a limit of a functor $G$ over some diagram $D$ given by just one of the values $G(d)$? A sufficient condition for this to occur is that $d \in D$ be initial.</p>
<p>Putting these two facts together we learn that:</p>
<ul>
<li>$p_*F(x)$ is given by a known value of $F$ if $K_{x/}$ has an initial object.</li>
</ul>
<p>Now you'd like some explicit procedure, I guess, for computing the limit over $K$ in this iterative way where you don't have to change too much at once. Here is one way that works. For ease let's take a skeletal, finite $K$.</p>
<ol>
<li>First consider the diagram $K_1$ obtained from $K$ by asking that every endomorphism be the identity. The right Kan extension to this step computes the 'fixed points' of each object under the action of the endomorphisms. If you want, you could do it one object at a time.</li>
<li>Now begin collapsing parallel arrows one step at a time. This will always be a collapse of the desired type. Indeed, if $a\rightrightarrows b$ is a pair of arrows in $K_i$ and we collapse them to build $K_{i+1}$, then the overcategories $(K_{i+1})_{x/}$ will evidently have initial object $x$ if $x \ne a$. If $x = a$ then we'll be computing an equalizer.</li>
<li>Now you've got a category $K_N$ where every object has only the identity endomorphism, and there is at most one arrow between any two objects. So you've got a poset! Now you can begin pruning exactly as you did in your example. Arrange the poset by height. Say it has height $n$. If $n$ is zero, then you've got a discrete poset- take the product. Otherwise, if you see two objects of height $(n-1)$ which hit an object of height $n$, form $K_{N+1}$ by collapsing those three objects to a point. This procedure replaces the three objects by the pullback and changes nothing else. Keep doing that until there are no more such objects of height $n$. Either you've decreased the height to $(n-1)$, or there are objects of height $n$ with only one thing of height $(n-1)$ hitting it. Taking each in turn, collapse those pairs to a point. This won't do anything except delete those height $n$ values from your diagram. Now you have something of height $(n-1)$. Repeat until you get down to height 0, then take the product of everything you see.</li>
</ol>
<p>There are many variations on this theme... you could do things in a different order, or you could embed into a larger diagram which is often convenient, etc. etc. </p>
<p>Of course, there's also the standard formula for a limit as an equalizer of two products, but I imagine you knew that already.</p>
<p>Also- everything I said works for homotopy limits over ordinary categories. If you want to do homotopy limits over an $\infty$-category, you can still do something like this. In fact, said in the language of $\infty$-categories or quasicategories, this whole procedure is maybe more evident: take your quasicategory, which is a simplicial set $K$, and write it as an iterated pushout along cell inclusions (maybe transfinitely many). Then a homotopy colimit over $K$ can be computed by iteratively by using pushouts, or if you come across an empty diagram you'll need an initial object, and then taking filtered colimits along the way. One can take care of limits by working in the opposite category.</p>
|
1,469,859 | <p>If I have a positive <code>x</code>, are there more integers below <code>x</code> or above <code>x</code>?</p>
<p>I was discussing this with some friends and we came up with two opposing ideas:</p>
<ol>
<li>No, since you can always count one more in either direction.</li>
<li>Yes, since the infinite amount of numbers below <code>x</code> is greater than the infinite amount of numbers above <code>x</code>.</li>
</ol>
| vadim123 | 73,324 | <p>$\mathbb{Z}$ is countable. Hence all subsets of $\mathbb{Z}$ are countably infinite, or finite. There aren't different sizes of infinity in the subsets of $\mathbb{Z}$ -- the only way you can get two subsets of different sizes is if at least one of them is finite.</p>
|
275,308 | <p>Problems with calculating </p>
<p>$$\lim_{x\rightarrow0}\frac{\ln(\cos(2x))}{x\sin x}$$</p>
<p>$$\lim_{x\rightarrow0}\frac{\ln(\cos(2x))}{x\sin x}=\lim_{x\rightarrow0}\frac{\ln(2\cos^{2}(x)-1)}{(2\cos^{2}(x)-1)}\cdot \left(\frac{\sin x}{x}\right)^{-1}\cdot\frac{(2\cos^{2}(x)-1)}{x^{2}}=0$$</p>
<p>The correct answer is $-2$. Please show me where my error is this time. Thanks in advance!</p>
| Siméon | 51,594 | <p>As $x$ tends to $0$, $\cos(2x)$ tends to $1$. Hence, using $\ln(1+u) \sim u$, $\sin u \sim u$ and $1 - \cos u\sim \frac{u^2}{2}$,
$$
\frac{\ln(\cos(2x))}{x\sin x} \sim \frac{\cos(2x)-1}{x\times x} \sim \frac{-\frac{(2x)^2}{2}}{x^2} \sim - 2
$$</p>
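A numerical check of the limit (my addition):

```python
from math import log, cos, sin

# ln(cos 2x) / (x sin x) should approach -2 as x -> 0.
vals = [log(cos(2 * x)) / (x * sin(x)) for x in (0.1, 0.01, 0.001)]
print(vals)
```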
|
66,801 | <p>In short, I am interested to know of the various approaches one could take to learn modern harmonic analysis in depth. However, the question deserves additional details. Currently, I am reading Loukas Grafakos' "Classical Fourier Analysis" (I have progressed to chapter 3). My intention is to read this book and then proceed to the second volume (by the same author) "Modern Fourier Analysis". I have also studied general analysis at the level of Walter Rudin's "Real and Complex Analysis" (first 15 chapters). In particular, if additional prerequisites are required for recommended references, it would be helpful if you could state them.</p>
<p>My request is to know how one should proceed after reading these two volumes and whether there are additional sources that one could use that are helpful to get a deeper understanding of the subject. Also, it would be nice to hear suggestions of some important topics in the subject of harmonic analysis that are current interests of research and references one could use to better understand these topics.</p>
<p>However, I understand that as one gets deeper into a subject such as harmonic analysis, one would need to understand several related areas in greater depth such as functional analysis, PDE's and several complex variables. Therefore, suggestions of how one can incorporate these subjects into one's learning of harmonic analysis are welcome. (Of course, since this is mainly a request for a roadmap in harmonic analysis, it might be better to keep any recommendations of references in these subjects at least a little related to harmonic analysis.)</p>
<p>In particular, I am interested in various connections between PDE's and harmonic analysis and functional analysis and harmonic analysis. It would be nice to know about references that discuss these connections. </p>
<p>Thank you very much!</p>
<p><strong>Additional Details</strong>: Thank you for suggesting Stein's books on harmonic analysis! However, I am not sure how one should read these books. For example, there seems to be overlap between Grafakos and Stein's books but Stein's "Harmonic Analysis" seems very much like a research monograph and although it is, needless to say, an excellent book, I am not very sure what prerequisites one must have to tackle it. In contrast, the other two books by Stein are more elementary but it would be nice to know of the sort of material that can be found in these two books but that cannot be found in Grafakos. </p>
| Peter Humphries | 3,803 | <p>It depends very much on what areas of harmonic analysis you're interested in, of course. Grafakos' books are excellent and really quite advanced, and if you wish to continue in that style of harmonic analysis, then there's not much else you can do other than start reading many of the articles that he cites. On the other hand, there are interesting areas in harmonic analysis not covered by Grafakos. I'd recommend a couple of textbooks by Stein: <em>Singular Integrals and Differentiability Properties of Functions</em> and <em>Harmonic Analysis: Real-Variable Methods, Orthogonality, and Oscillatory Integrals</em>. There are probably some other interesting textbooks on singular integral operators that might be useful (though I can't think of any off the top of my head). One other interesting (and very modern) area is wavelets: Meyer's book <em>Wavelets and Operators</em> is probably the place to start there.
Other useful resources are lecture notes or survey articles about harmonic analysis available online. For example, Pascal Auscher taught a course at ANU on harmonic analysis using real-variable methods last year, and one of the students in the class typed up notes, which are available <a href="http://maths.anu.edu.au/~bandara/documents/harm/harm.pdf">here</a>. Similarly, Terry Tao taught a course a few years ago, and he has lecture notes <a href="http://www.math.ucla.edu/~tao/247a.1.06f/">here</a> and <a href="http://www.math.ucla.edu/~tao/247b.1.07w/">here</a>. Finally, if you want to learn about harmonic analysis with an operator-theoretic bent, there are useful lecture notes <a href="http://maths.anu.edu.au/~alan/lectures/optheory.pdf">here</a> and <a href="http://maths.anu.edu.au/~alan/lectures/operharm.pdf">here</a>.</p>
|
1,025,117 | <p>Let $V$ be a finite-dimensional $K$-vector space and $f:V\to V$ a linear map. If, with respect to <em>every</em> basis of $V$, the matrix of $f$ is a diagonal matrix, then I need to show that $f=\lambda\,\mathrm{Id}$ for some $\lambda\in K$. </p>
<p>I am trying a simple approach: to show that $(f-\lambda \mathrm{Id})(e_i)=0$ where $(e_1,\dots,e_n)$ is a basis of $V$. Let the diagonal matrix be given by $\mathrm{diag}(\lambda_1,\dots,\lambda_n).$ Then $$(f-\lambda \mathrm{Id})(e_i)=\mathrm{diag}(\lambda_1,\dots,\lambda_n) (0,0,\dots,0,1,0,\dots,0)^T - (0,0,\dots,0,\lambda ,0,\dots,0)$$ $$=(0,0,\dots,0,\lambda_i - \lambda,0,\dots,0)$$ where $1,\lambda,\lambda_i-\lambda$ are in the $i^{th}$ position. I don't see how to conclude $\lambda_i=\lambda$. </p>
| UserX | 148,432 | <p>Hint: Let $t=\frac1x$.</p>
|
3,261,334 | <blockquote>
<p>Prove isomorphism of groups <span class="math-container">$\langle G, + , {}^{-1}\rangle$</span> and <span class="math-container">$\langle G, *,{}^{-1}\rangle$</span>, where <span class="math-container">$a*b=b+a$</span><br>
<span class="math-container">$\forall a,b \in G$</span></p>
</blockquote>
<p>I'm barely starting to study abstract algebra.</p>
<p>So how do I show isomorphism? I think that I should show a homomorphism somehow, but I don't know how. </p>
<p>Any thoughts/ideas would be really appreciated!</p>
| José Carlos Santos | 446,262 | <p><strong>Hint:</strong> Define <span class="math-container">$f(a)=a^{-1}$</span> and prove that it is an isomorphism.</p>
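<p>As a concrete sanity check of the hint, here is a short sketch (the choice of $S_3$, encoded as permutation tuples, and the helper names are just for illustration) verifying that $f(a)=a^{-1}$ is a bijective homomorphism from $\langle G,+\rangle$ to $\langle G,*\rangle$ on a non-abelian example:</p>

```python
from itertools import permutations

# S3 as permutation tuples: p sends i to p[i]; "+" is composition.
G = list(permutations(range(3)))

def comp(p, q):
    """(p + q)(i) = p(q(i))"""
    return tuple(p[q[i]] for i in range(3))

def inv(p):
    """Inverse permutation."""
    r = [0, 0, 0]
    for i, pi in enumerate(p):
        r[pi] = i
    return tuple(r)

def star(a, b):
    """The opposite operation: a * b := b + a."""
    return comp(b, a)

f = inv  # the hinted candidate isomorphism f(a) = a^{-1}

assert sorted(f(a) for a in G) == sorted(G)                          # bijective
assert all(f(comp(a, b)) == star(f(a), f(b)) for a in G for b in G)  # homomorphism
```

<p>The same check passes for any finite group encoded this way, since $(ab)^{-1}=b^{-1}a^{-1}$.</p>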
|
2,505,171 | <p>Suppose we form numbers using each of the ten digits $0$ through $9$ exactly once. How many numbers are there if you do not allow leading $0$'s? </p>
<p>In how many of the numbers in each case is no digit $j$ in the $j$th place?</p>
<p>If leading $0$'s are allowed? </p>
<p>If they are not allowed? </p>
<p>I know how to answer this if the numbers $0$ through $9$ can be repeated, but I am getting hung up on the "exactly one" part.</p>
| JMoravitz | 179,297 | <p>Approach via the <a href="https://en.wikipedia.org/wiki/Rule_of_product" rel="nofollow noreferrer">rule of product</a> (<em>also called the multiplication principle</em>) which can be paraphrased as the following:</p>
<blockquote>
<p>If you wish to count how many outcomes there are to a particular scenario and you can describe the outcomes via a sequence of steps such that</p>
<ul>
<li><strong>every</strong> outcome is counted <strong>exactly once</strong></li>
<li>Each step has a particular number of options available which do <em>not</em> depend on previously made choices in earlier steps</li>
</ul>
<p>then the total number of outcomes is the product of the number of options at each step.</p>
</blockquote>
<p><em>Note:</em> The number of options at each step cannot change based on earlier choices, however <em>the choices themselves</em> can change.</p>
<hr />
<p>For your problem:</p>
<ul>
<li><p>Pick the first digit (<em>It can be any of 0,1,2,3,...,9 in the first problem or it can be any of 1,2,3,...,9 in the second problem</em>) How many options is that available for this step?</p>
</li>
<li><p>Pick the second digit (<em>It can be any of 0,1,2,3,...,9</em> <strong>except</strong> <em>what you picked in the first step. We wanted each to occur exactly once, not more than once, so whatever you picked is no longer available to pick.</em>) How many options is that available for this step?</p>
</li>
<li><p>Pick the third digit (<em>It can be any of 0,1,2,3,...,9</em> <strong>except</strong> <em>whatever you picked in either of the first two steps</em>)</p>
</li>
<li><p><span class="math-container">$\vdots$</span></p>
</li>
<li><p>Pick the last digit</p>
</li>
</ul>
<p>Multiply the number of options available for each step together to get the total number of arrangements.</p>
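<p>For what it's worth, the two counts the rule of product gives here — <span class="math-container">$10!$</span> with leading zeros allowed and <span class="math-container">$9\cdot 9!$</span> without — can be cross-checked by brute force; the snippet below is only an illustration, using a smaller digit set for the exhaustive check:</p>

```python
from itertools import permutations
from math import factorial

def count_no_leading_zero(digits):
    """Exhaustively count arrangements of `digits` (each used exactly once)
    that do not start with 0."""
    return sum(1 for p in permutations(digits) if p[0] != 0)

# Multiplication principle: 10 choices, then 9, then 8, ... gives 10!.
# Without a leading zero the first step has only 9 options, giving 9 * 9!.
assert factorial(10) == 3628800
assert 9 * factorial(9) == 3265920

# Cross-check the pattern exhaustively on the small digit set {0,1,2,3}:
assert count_no_leading_zero(range(4)) == 3 * factorial(3)   # 18 of the 24
```
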
|
4,496,913 | <blockquote>
<p>If <span class="math-container">$x=\frac12(\sqrt[3]{2009}-\frac{1}{\sqrt[3]{2009}})$</span>, what is the value of <span class="math-container">$(x+\sqrt{1+x^2})^3$</span>?</p>
</blockquote>
<p>I solved this problem as follows.</p>
<p>Assuming <span class="math-container">$\sqrt[3]{2009}=\alpha$</span>, we have <span class="math-container">$x=\frac12(\alpha-\frac1{\alpha})$</span> and <span class="math-container">$$(x+\sqrt{1+x^2})^3=\left[\frac12\left(\alpha-\frac1{\alpha}\right) +\sqrt{1+\frac14\left(\alpha^2+\frac1{\alpha^2}-2\right)}\right]^3=\left[\left(\frac{\alpha}2-\frac1{2\alpha}\right) +\sqrt{\frac{\alpha^2}4+\frac1{4\alpha^2}+\frac12}\right]^3=\left(\frac{\alpha}2-\frac1{2\alpha}+\left|\frac{\alpha}2+\frac1{2\alpha}\right|\right)^3=\alpha^3=2009$$</span>
I'm wondering, is it possible to solve this problem with other approaches?</p>
| dxiv | 291,201 | <blockquote>
<p>Assuming <span class="math-container">$\sqrt[3]{2009}=\alpha$</span> , we have <span class="math-container">$x=\dfrac12\left(\alpha-\dfrac1{\alpha}\right)$</span></p>
</blockquote>
<p>We also have:</p>
<p><span class="math-container">$$
\begin{align}
x = \frac{1}{2}\left(x+\sqrt{1+x^2} \;+\; x-\sqrt{1+x^2}\right) = \frac{1}{2}\left(x+\sqrt{1+x^2} \;-\; \frac{1}{x+\sqrt{1+x^2}}\right)
\end{align}
$$</span></p>
<p>Comparing with <span class="math-container">$\,x = \dfrac12\left(\alpha-\dfrac1{\alpha}\right)\,$</span> it follows that <span class="math-container">$\,x+\sqrt{1+x^2} = \alpha\,$</span>, because the function <span class="math-container">$\,f(t) = \dfrac{1}{2}\left(t - \dfrac{1}{t}\right)\,$</span> is injective on <span class="math-container">$\,\mathbb R^+\,$</span>.</p>
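<p>A quick numeric check of this conclusion (floating point only, so the tolerances below are loose):</p>

```python
from math import sqrt

alpha = 2009 ** (1 / 3)
x = (alpha - 1 / alpha) / 2

# By the injectivity argument, x + sqrt(1 + x^2) must equal alpha,
# so its cube is 2009.
t = x + sqrt(1 + x * x)
assert abs(t - alpha) < 1e-9
assert abs(t ** 3 - 2009) < 1e-6
```
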
|
3,420,459 | <p>I have this <a href="https://i.stack.imgur.com/ew8Id.png" rel="nofollow noreferrer">question:</a></p>
<blockquote>
<p>If <span class="math-container">$a\otimes b=a^b-b^a$</span>, what is <span class="math-container">$(3\otimes 2)\otimes (4\otimes 1)$</span>?</p>
</blockquote>
<p>The answer in the solution set I was given is <span class="math-container">$-2,$</span> but I'm not sure how to get there, and I can't find any sort of similar problems online. Can anyone explain the process of solving this and how to find more resources for solving these problems?</p>
| J. W. Tanner | 615,567 | <p>Just plug it in:</p>
<p><span class="math-container">$(3\otimes 2)\otimes (4\otimes 1)=(3^2-2^3)\otimes(4^1-1^4)=1\otimes3=1^3-3^1=\;?$</span></p>
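<p>The same plug-in evaluation in a few lines (a sketch only; the function name <code>otimes</code> is just a label for the made-up operation):</p>

```python
def otimes(a, b):
    """The custom operation a (x) b = a^b - b^a."""
    return a ** b - b ** a

assert otimes(3, 2) == 1      # 3^2 - 2^3
assert otimes(4, 1) == 3      # 4^1 - 1^4
result = otimes(otimes(3, 2), otimes(4, 1))
assert result == -2           # 1^3 - 3^1
```
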
|
445,127 | <p>I need to prove this limit:</p>
<blockquote>
<p>Given $f:(-1,1) \to \mathbb{R}\,$ and $\,f(x)>0,\,$ if $\,\lim_{x\to 0} \left(f(x) + \dfrac{1}{f(x)}\right) = 2,\,$ then $\,\lim_{x\to 0} f(x) = 1$.</p>
</blockquote>
| GEdgar | 442 | <p>Write <span class="math-container">$c(x) = f(x) + \frac{1}{f(x)}$</span>. Solve a quadratic equation to see that
<span class="math-container">$f(x)$</span> is either <span class="math-container">$(c(x)+\sqrt{c(x)^2-4}\;)/2$</span> or <span class="math-container">$(c(x)-\sqrt{c(x)^2-4}\;)/2$</span> . So, for all <span class="math-container">$x$</span>, we have
<span class="math-container">$$
\frac{c(x)-\sqrt{c(x)^2-4}}{2} \le f(x) \le \frac{c(x)+\sqrt{c(x)^2-4}}{2}
$$</span>
But we are told that <span class="math-container">$\lim c(x) = 2$</span>, so that
<span class="math-container">$$
\lim \frac{c(x)-\sqrt{c(x)^2-4}}{2} = 1\quad\text{and}\quad \lim\frac{c(x)+\sqrt{c(x)^2-4}}{2} = 1.
$$</span>
Our function <span class="math-container">$f(x)$</span> is between these, so <span class="math-container">$\lim f(x) = 1$</span>.</p>
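<p>A numeric illustration of this squeeze (the sample function <span class="math-container">$f(x)=1+x$</span> is an arbitrary positive choice satisfying the hypotheses):</p>

```python
from math import sqrt

def bracket(f_val):
    """Roots of t + 1/t = c with c = f + 1/f; they are exactly f and 1/f,
    and they squeeze f between them."""
    c = f_val + 1 / f_val
    d = sqrt(max(c * c - 4, 0.0))   # clamp tiny negative rounding error
    return (c - d) / 2, (c + d) / 2

# Sample f(x) = 1 + x near x = 0: both roots approach 1, trapping f.
for x in (0.1, 0.01, 0.001):
    lo, hi = bracket(1 + x)
    assert lo <= 1 + x <= hi + 1e-12
    assert abs(lo - 1) < 2 * x and abs(hi - 1) < 2 * x
```
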
|
445,127 | <p>I need to prove this limit:</p>
<blockquote>
<p>Given $f:(-1,1) \to \mathbb{R}\,$ and $\,f(x)>0,\,$ if $\,\lim_{x\to 0} \left(f(x) + \dfrac{1}{f(x)}\right) = 2,\,$ then $\,\lim_{x\to 0} f(x) = 1$.</p>
</blockquote>
| Fabio Lucchini | 54,738 | <p>Let
<span class="math-container">\begin{align}
&a=\liminf_{x\to 0}f(x)&&b=\limsup_{x\to 0}f(x)
\end{align}</span>
<a href="https://math.stackexchange.com/q/205346">Since</a>
<span class="math-container">$$\liminf_{x\to 0}\left(f(x)+\frac 1{f(x)}\right)\leq\liminf_{x\to 0}f(x)+\limsup_{x\to 0}\frac 1{f(x)}\leq\limsup_{x\to 0}\left(f(x)+\frac 1{f(x)}\right)$$</span>
we get
<span class="math-container">$$2\leq a+\frac 1a\leq 2$$</span>
from which <span class="math-container">$a=1$</span>.
Similarly, <span class="math-container">$b=1$</span>, hence <span class="math-container">$\lim_{x\to 0}f(x)=1$</span>.</p>
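<p>The step "<span class="math-container">$2\leq a+\frac 1a\leq 2$</span> forces <span class="math-container">$a=1$</span>" rests on the identity <span class="math-container">$a+\frac 1a-2=\left(\sqrt a-\frac1{\sqrt a}\right)^2\geq 0$</span>, with equality only at <span class="math-container">$a=1$</span>. A quick numeric confirmation of that identity (illustration only):</p>

```python
from math import sqrt

# a + 1/a - 2 = (sqrt(a) - 1/sqrt(a))^2 >= 0, with equality only at a = 1.
for a in (0.25, 0.5, 0.9, 1.0, 1.1, 2.0, 4.0):
    lhs = a + 1 / a - 2
    rhs = (sqrt(a) - 1 / sqrt(a)) ** 2
    assert abs(lhs - rhs) < 1e-12
    assert lhs >= 0
    assert (lhs < 1e-12) == (a == 1.0)
```
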
|
2,056,499 | <p>I am trying to prove that if
$$
\lim_{x \to c} (f(x)) = L_1
\\ \lim_{x \to c} (g(x)) = L_2
\\ L_1, L_2 \geq 0
$$
Then
$$
\lim_{x \to c} f(x)^{g(x)} = (L_1)^{L_2}
$$</p>
<p>I am doing this for fun, and my prof said that it shouldn't be too hard, but all I got so far is
$$
\forall \epsilon >0 \ \exists \delta > 0 : \text{if}\ \ 0<|x-c|<\delta\ \ \text{then}\ \ |f(x)^{g(x)} - (L_1)^{L_2}| < \epsilon
$$
I have no idea how to proceed. Can someone help me out? I started by defining h(x) as $$(f(x))^{(g(x))}$$ but I couldn't go anywhere with that without basically defining the limit of h(x) as x approaches c to be L1^L2</p>
| DanielWainfleet | 254,665 | <p>It is often convenient to write $0^0=1,$ for example, in "Let $p(x)=\sum_{j=0}^n a_jx^j$ " it is assumed that $a_0x^0=a_0$ when $x=0.$</p>
<p>But if $L_1=L_2=0$ then $f(x)^{g(x)}$ can converge to any non-negative value, or fail to converge. Examples: Let $c=0:$ </p>
<p>(1). Let $f_1(x)=1/e^{1/|x|}$ for $x\ne 0$ and $g_1(x)=|x|.$ Then $f_1(x)^{g_1(x)}=e^{-1}$ for all $x\ne 0.$</p>
<p>(2). Let $f_2(x)=g_2(x)=|x|$ for $x\ne 0.$ Put $|x|=1/y.$ Then $y\to \infty$ as $x\to 0,$ and $f_2(x)^{g_2(x)}=1/(y^{1/y})=\exp (-(\log y)/y).$ Now $(\log y)/y \to 0$ as $y\to \infty$ so $f_2(x)^{g_2(x)}\to 1$ as $x\to 0.$ </p>
<p>(3). From examples (1) and (2), let $f_3(x)=f_1(x)$ when $1/x\in \mathbb Z$ and $f_3(x)=f_2(x)$ when $1/x \not \in \mathbb Z.$ Let $g_3(x)=g_1(x)=g_2(x).$ Then $f_3(x)^{g_3(x)}$ does not converge as $x\to 0$.</p>
<p>The main result is valid for $L_1>0.$ Because $\log f(x)^{g(x)}=g(x)\log f(x)$ whenever $|x-c|$ is small enough, and $\log f(x)$ will converge to $\log L_1.$ So $\log f(x)^{g(x)}$ will converge to $L_2\log L_1.$ </p>
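<p>Examples (1) and (2) are easy to confirm numerically (a sketch; the sample points are chosen to avoid floating-point underflow of $e^{-1/|x|}$):</p>

```python
from math import exp

# Example (1): f(x) = e^{-1/|x|}, g(x) = |x|  =>  f^g = e^{-1} identically.
for x in (0.1, 0.01, 0.002):
    assert abs(exp(-1 / x) ** x - exp(-1)) < 1e-9

# Example (2): f(x) = g(x) = |x|  =>  f^g = x^x -> 1 as x -> 0+.
assert abs(0.001 ** 0.001 - 1) < 0.01
```
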
|
35,964 | <p>This is kind of an odd question, but can somebody please tell me whether I am crazy? I did the math, and what I am told to prove seems simply wrong:</p>
<p>Question:
Show that a ball dropped from a height of <em>h</em> feet, which bounces in such a way that each bounce reaches $\frac34$ of the height of the previous bounce, travels a total distance of 7<em>h</em> feet.</p>
<p>My Work:
$$\sum_{n=0}^{\infty} h \left(\frac34\right)^n = 4h$$</p>
<p>Obviously 4 <em>h</em> does not equal 7 <em>h</em> . What does the community get?</p>
<p>I know that my calculations are correct, see Wolfram Alpha and it confirms my calculations, that only leaves my formula, or the teacher being incorrect...</p>
<p>Edit:
Thanks everyone for pointing out my flaw, it should be something like:
$$-h + \sum_{n=0}^{\infty} 2h \left(\frac34\right)^n = 7h$$</p>
<p>Thanks in advance for any help!</p>
| André Nicolas | 6,312 | <p>Before jumping to a formula, let us calculate a little. The distance travelled until the first contact with the ground is $h$. </p>
<p>The distance travelled between the first contact and the second is $(h)(2)(3/4)$
(up and then down). The distance travelled from second contact to third is $(h)(2)(3/4)^2$, and so on.</p>
<p>So the total distance travelled is
$$h+(h)(2)\sum_{n=1}^{\infty}\left(\frac{3}{4}\right)^n$$</p>
<p>Finally, sum the infinite series. That sum is $3$, giving a total of $7h$.</p>
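<p>The partial sums of this series can also be checked numerically (taking $h=1$ for the illustration):</p>

```python
# total distance = h + 2h * sum_{n>=1} (3/4)^n, which should approach 7h.
h = 1.0
total = h                   # the initial drop
bounce = h * (3 / 4)        # height of the first rebound
for _ in range(200):        # 200 terms of the geometric series
    total += 2 * bounce     # up and then down
    bounce *= 3 / 4
assert abs(total - 7 * h) < 1e-9
```
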
|
1,893,609 | <p>I am trying to show that $A=\{(x,y) \in \Bbb{R}^2 \mid -1 < x < 1, -1< y < 1 \}$ is an open set algebraically. </p>
<p>Let $a_0 = (x_o,y_o) \in A$. Suppose that $r = \min\{1-|x_o|, 1-|y_o|\}$ then choose $a = (x,y) \in D_r(a_0)$. Then</p>
<p>Edit: I am looking for the proof of the algebraic implication that $\|a-a_0\| = \sqrt {(x-x_o)^2+(y-y_o)^2} < r \Rightarrow|x| < 1 , |y| < 1 $</p>
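<p>For reference, a sketch of the requested implication (it is just the componentwise bound plus the triangle inequality, with $x_0,y_0$ written for $x_o,y_o$):</p>

```latex
\|a-a_0\|<r \;\Longrightarrow\; |x-x_0|\le\sqrt{(x-x_0)^2+(y-y_0)^2}<r\le 1-|x_0|
\;\Longrightarrow\; |x|\le|x_0|+|x-x_0|<1,
\qquad\text{and symmetrically } |y|<1.
```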
| Andres Mejia | 297,998 | <p>Fun fact: The type of quadrilateral (for ease of argument, draw the convex one defined by the four points in question) is called <a href="https://en.wikipedia.org/wiki/Cyclic_quadrilateral" rel="nofollow">cyclic</a>. In Euclid's Elements, Book 3, Proposition 22, it is proven that a quadrilateral is cyclic if and only if its opposite angles are supplementary.</p>
<p>Either way, a convex quadrilateral is cyclic iff its perpendicular bisectors are concurrent, in which case the intersection is the circumcenter (the center of the circle.) </p>
<p>The four points given on the circumference define a convex cyclic quadrilateral uniquely, so just take the perpendicular bisectors and look at the intersection.</p>
|
4,133,760 | <p>Dr Strang in his book linear algebra and it's applications, pg 108 says ,when talking about the left inverse of a matrix( <span class="math-container">$m$</span> by <span class="math-container">$n$</span>)</p>
<blockquote>
<p><strong>UNIQUENESS:</strong> For full column rank <span class="math-container">$r=n$</span>: <span class="math-container">$A x=b$</span> has at most one solution <span class="math-container">$x$</span> for every <span class="math-container">$b$</span> if and only if the columns are linearly independent. Then <span class="math-container">$A$</span> has an <span class="math-container">$n$</span> by <span class="math-container">$m$</span> left-inverse <span class="math-container">$B$</span> such that <span class="math-container">$B A=I_{n}$</span>. This is possible only if <span class="math-container">$m \geq n$</span>.</p>
</blockquote>
<p>I understand why there can be at most one solution for a full column rank but how does that lead to <span class="math-container">$A$</span> having a left inverse?</p>
<p>I'd be grateful if someone could help or hint at the answer.</p>
| Community | -1 | <p>If <span class="math-container">$A$</span> is <span class="math-container">$m \times n$</span>, then the following are equivalent:</p>
<ol>
<li><span class="math-container">$A$</span> has full column rank <span class="math-container">$n$</span></li>
<li>The columns of <span class="math-container">$A$</span> are linearly independent</li>
<li>The null space of <span class="math-container">$A$</span> is trivial</li>
<li>The map induced by <span class="math-container">$A$</span> is injective</li>
<li><span class="math-container">$A$</span> has a left inverse</li>
</ol>
<p><strong>Proof that 1 <span class="math-container">$\iff$</span> 2:</strong></p>
<p>Immediate from the definition of column rank.</p>
<p><strong>Proof that 2 <span class="math-container">$\iff$</span> 3:</strong></p>
<p>Observe that the vector <span class="math-container">$Ax$</span> is equal to the linear combination <span class="math-container">$\sum_{i=1}^{n}a_i x_i$</span>, where <span class="math-container">$a_i$</span> is the <span class="math-container">$i$</span>'th column of <span class="math-container">$A$</span>, and <span class="math-container">$x_i$</span> is the <span class="math-container">$i$</span>'th component of <span class="math-container">$x$</span>.</p>
<p>In particular, <span class="math-container">$Ax = 0$</span> if and only if <span class="math-container">$\sum_{i=1}^{n}a_i x_i = 0$</span>.</p>
<p>The null space of <span class="math-container">$A$</span> is trivial if and only if <span class="math-container">$x=0$</span> is the only solution to <span class="math-container">$Ax = 0$</span> which, by what we said above, is true if and only if <span class="math-container">$\sum_{i=1}^{n}a_i x_i = 0$</span> implies <span class="math-container">$x_i = 0$</span> for all <span class="math-container">$i$</span>, which is true if and only if <span class="math-container">$a_1, a_2, \ldots, a_n$</span> are linearly independent.</p>
<p><strong>Proof that 3 <span class="math-container">$\iff$</span> 4:</strong></p>
<p>Suppose that <span class="math-container">$Ax = Ay$</span>. Since <span class="math-container">$A$</span> is linear, this is equivalent to <span class="math-container">$Ax - Ay = A(x-y) = 0$</span>. Therefore <span class="math-container">$x-y$</span> is in the null space of <span class="math-container">$A$</span>. But the null space of <span class="math-container">$A$</span> is trivial, hence <span class="math-container">$x-y = 0$</span>, so <span class="math-container">$x=y$</span>. This shows that (the map induced by) <span class="math-container">$A$</span> is injective (one-to-one).</p>
<p>Conversely, suppose that <span class="math-container">$A$</span> is injective. Then <span class="math-container">$x=0$</span> is the unique vector such that <span class="math-container">$Ax = 0$</span>. Therefore the null space of <span class="math-container">$A$</span> is trivial.</p>
<p><strong>Proof that 2 (and equivalently 4) <span class="math-container">$\implies$</span> 5:</strong></p>
<p>Let <span class="math-container">$e_1, e_2, \ldots, e_n$</span> be the canonical basis for <span class="math-container">$\mathbb R^n$</span>, meaning that <span class="math-container">$e_i$</span> has a <span class="math-container">$1$</span> in the <span class="math-container">$i$</span>'th component, and zeros everywhere else. Note that for each <span class="math-container">$i$</span> we have <span class="math-container">$a_i = Ae_i$</span>, where again <span class="math-container">$a_i$</span> is the <span class="math-container">$i$</span>th column of <span class="math-container">$A$</span>. Moreover, since <span class="math-container">$A$</span> is injective, <span class="math-container">$e_i$</span> is the <em>unique</em> vector that is mapped by <span class="math-container">$A$</span> to <span class="math-container">$a_i$</span>.</p>
<p>Now, since <span class="math-container">$a_1, a_2, \ldots, a_n$</span> are linearly independent, they are a basis for the column space of <span class="math-container">$A$</span>, which can be extended to a basis <span class="math-container">$a_1,a_2,\ldots, a_n, b_1,b_2,\ldots,b_{m-n}$</span> for <span class="math-container">$\mathbb R^m$</span>. Hence an arbitrary <span class="math-container">$y \in \mathbb R^m$</span> has a unique representation of the form <span class="math-container">$y = \sum_{i=1}^{n} c_i a_i + \sum_{j=1}^{m-n} d_j b_j$</span> where <span class="math-container">$c_i$</span> and <span class="math-container">$d_j$</span> are scalars.</p>
<p>Therefore we can define a linear map <span class="math-container">$g : \mathbb R^m \to \mathbb R^n$</span> by first setting <span class="math-container">$g(a_i) = e_i$</span> for each <span class="math-container">$i=1,2,\ldots,n$</span> and <span class="math-container">$g(b_j) = 0$</span> for each <span class="math-container">$j=1,2,\ldots,m-n$</span>, and then extending <span class="math-container">$g$</span> linearly to all of <span class="math-container">$\mathbb R^m$</span>:</p>
<p><span class="math-container">$$g(y) = g\left(\sum_{i=1}^{n} c_i a_i + \sum_{j=1}^{m-n} d_j b_j \right) = \sum_{i=1}^{n} c_i g(a_i) + \sum_{j=1}^{m-n} d_j g(b_j) = \sum_{i=1}^{n} c_i g(a_i) = \sum_{i=1}^{n} c_i e_i$$</span></p>
<p>Then <span class="math-container">$g$</span> is a left inverse of <span class="math-container">$A$</span>:</p>
<p><span class="math-container">$$g(Ax) = g\left(\sum_{i=1}^{n}a_i x_i\right) = \sum_{i=1}^{n} x_i g(a_i) = \sum_{i=1}^{n} x_i e_i = x$$</span></p>
<p><strong>Proof that 5 <span class="math-container">$\implies$</span> 3:</strong></p>
<p>Suppose that <span class="math-container">$Ax = 0$</span>. Let <span class="math-container">$g$</span> be a left inverse of <span class="math-container">$A$</span>. Then <span class="math-container">$x = g(Ax) = 0$</span>. This shows that the null space of <span class="math-container">$A$</span> is trivial.</p>
<hr>
<p>As a side note, it turns out that 4 and 5 are equivalent for general functions, not just linear maps. If <span class="math-container">$f$</span> is any injective function, then it has a left inverse, and conversely if <span class="math-container">$f$</span> is any function that has a left inverse, then it is injective. There is a proof <a href="https://math.stackexchange.com/questions/1075924/finishing-a-proof-f-is-injective-if-and-only-if-it-has-a-left-inverse">here</a>, for example. Since you indicated in the comments that this is an unfamiliar fact, I did not use it in the proof above but instead constructed a left inverse explicitly.</p>
<hr>
<p>Note that my proof shows why a left inverse of <span class="math-container">$A$</span> must exist if <span class="math-container">$A$</span> has full column rank, but it doesn't explicitly show how to compute the left inverse.</p>
<p>As Strang notes, one formula for a left inverse is <span class="math-container">$B = (A^T A)^{-1} A^T$</span>. That this is a left inverse is clear by computing:</p>
<p><span class="math-container">$$BA = ((A^T A)^{-1} A^T) A = (A^T A)^{-1} (A^T A) = I_n$$</span></p>
<p>But as you will have noted, Strang punts to a later chapter the proof that <span class="math-container">$A^T A$</span> is invertible when <span class="math-container">$A$</span> has full column rank. So that's not very satisfactory!</p>
<p>Also, computing Strang's left inverse is very inefficient because it involves inverting <span class="math-container">$A^T A$</span>. This requires a lot of calculation, proportional to <span class="math-container">$n^3$</span> operations for an <span class="math-container">$n \times n$</span> matrix.</p>
<p>In practice, probably the best way to compute a left inverse is to perform row reduction on <span class="math-container">$A$</span> to bring it to the form</p>
<p><span class="math-container">$$\begin{bmatrix} I_n \\ 0_{m-n \times n} \end{bmatrix}$$</span></p>
<p>where <span class="math-container">$I_n$</span> is the <span class="math-container">$n \times n$</span> identity matrix, and <span class="math-container">$0_{m-n \times n}$</span> is the <span class="math-container">$m - n \times n$</span> matrix consisting of all zeros. Row reduction to this form is possible if and only if the columns of <span class="math-container">$A$</span> are linearly independent.</p>
<p>Assuming you're familiar with row reduction, you probably know that each row operation can be expressed as an <span class="math-container">$m \times m$</span> elementary matrix of one of three forms, corresponding to the three row reduction operations (multiplying a row by a scalar, interchanging two rows, and adding a scalar multiple of one row to another). The row reduction procedure can then be expressed by left-multiplying <span class="math-container">$A$</span> by the corresponding elementary matrices. Assuming there are <span class="math-container">$k$</span> of these, we have:</p>
<p><span class="math-container">$$E_k E_{k-1} \cdots E_2 E_1 A = \begin{bmatrix} I_n \\ 0_{m-n \times n} \end{bmatrix}$$</span></p>
<p>The product <span class="math-container">$E_k E_{k-1} \cdots E_2 E_1$</span> is easy to understand conceptually: it corresponds to the <span class="math-container">$k$</span> row operations used to bring <span class="math-container">$A$</span> into the reduced form. Fortunately, it's not necessary to <em>compute</em> <span class="math-container">$E_k E_{k-1} \cdots E_2 E_1$</span> as a product of <span class="math-container">$k$</span> matrices! Instead you compute it by starting with <span class="math-container">$I_{m}$</span> and performing the same row operations on it as you perform on <span class="math-container">$A$</span>.</p>
<p>In any case, denoting <span class="math-container">$E_k E_{k-1} \cdots E_2 E_1$</span> by <span class="math-container">$B$</span>, the above becomes</p>
<p><span class="math-container">$$BA = \begin{bmatrix} I_n \\ 0_{m-n \times n} \end{bmatrix}$$</span></p>
<p>Note that <span class="math-container">$B$</span> is an <span class="math-container">$m \times m$</span> matrix. It is <em>almost</em> the left inverse we seek, except we want just <span class="math-container">$I_n$</span> on the right hand side and a left inverse should be <span class="math-container">$n \times m$</span>, not <span class="math-container">$m \times m$</span>. If <span class="math-container">$m > n$</span> then the right hand side has <span class="math-container">$m-n$</span> spare rows of zeros at the bottom. To get rid of these, we can simply remove the bottom <span class="math-container">$m-n$</span> rows of <span class="math-container">$B$</span> to get a <span class="math-container">$n \times m$</span> matrix <span class="math-container">$B'$</span> which satisfies <span class="math-container">$B'A = I_n$</span> and is therefore a left inverse of <span class="math-container">$A$</span>, as desired!</p>
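<p>Here is Strang's formula <span class="math-container">$B=(A^TA)^{-1}A^T$</span> checked on one small full-column-rank example in exact arithmetic — a sketch only: the <span class="math-container">$2\times2$</span> inverse is hand-rolled and the matrix <span class="math-container">$A$</span> is an arbitrary choice:</p>

```python
from fractions import Fraction as F

# A is 3x2 with independent columns, so A^T A is 2x2 and invertible.
A = [[F(1), F(0)],
     [F(0), F(1)],
     [F(1), F(1)]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(row) for row in zip(*X)]

At = transpose(A)
AtA = matmul(At, A)                      # [[2, 1], [1, 2]]
det = AtA[0][0] * AtA[1][1] - AtA[0][1] * AtA[1][0]
AtA_inv = [[ AtA[1][1] / det, -AtA[0][1] / det],
           [-AtA[1][0] / det,  AtA[0][0] / det]]
B = matmul(AtA_inv, At)                  # the 2x3 left inverse

assert matmul(B, A) == [[F(1), F(0)], [F(0), F(1)]]   # BA = I_2 exactly
```
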
|
1,221,586 | <p>How would one compute?
$$ \oint_{|z|=1} \frac{dz}{\sin \frac{1}{z}} $$</p>
<p>Can one "generalize" the contour theorem and take the infinite series of the residues at each singularity? </p>
| Karanko | 227,830 | <p>If you want to generalize the residue theorem for infinitely many singularities, understand how the residue theorem is proven first. One replaces the loop by a circuit that encloses the singularities plus some arcs that are integrated in both directions. </p>
<p>In the case of infinitely many singularities you cannot do that for all singularities at once, but you can enclose more and more of them. </p>
<p>The extra point you need is to show that the integral over the loop enclosing the non-isolated singularity tends to zero as you make the loop smaller. Or, more generally, show that the limit exists. Then the integral is equal to the series of the residues at the isolated singularities, plus that limit at the non-isolated one.</p>
<hr>
<p>Now, for that specific integral one doesn't have to go that way. </p>
<p>$$\oint_{|z|=1}\frac{dz}{\sin\left(\frac{1}{z}\right)}=\oint_{|z|=1}\frac{dz}{\sin{\overline{z}}}$$.</p>
<p>You can expand in series and compute $\oint_{|z|=1}\overline{z}^kdz$.</p>
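<p>Carrying that suggestion out: on $|z|=1$ the substitution gives $1/\sin(1/z)=z+\frac{1}{6z}+\frac{7}{360z^3}+\cdots$ (valid for $|z|>1/\pi$), so only the $1/z$ term survives and the integral should be $2\pi i\cdot\frac16=\pi i/3$. A numeric check of that value (my sketch, not part of the argument above):</p>

```python
import cmath
from math import pi

# Rectangle rule on the unit circle; the integrand is smooth there since
# sin(1/z) has no zeros on |z| = 1 (the zeros of sin sit at n*pi and at z = 0).
N = 20000
total = 0j
for k in range(N):
    z = cmath.exp(2j * pi * k / N)
    total += (1 / cmath.sin(1 / z)) * (1j * z) * (2 * pi / N)

assert abs(total - pi * 1j / 3) < 1e-8
```
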
|
1,179,981 | <p>As the title suggests. Let $G$ be a group, and suppose the function $\phi: G \to G$ with $\phi(g)=g^3$ for $g \in G$ is a homomorphism. Show that if $3 \nmid |G|$, $G$ must be abelian.</p>
<p>By considering $\ker(\phi)$ and Lagrange's Theorem, we have $\phi$ must be an isomorphism (right?), but I'm not really sure where to go after that.</p>
<p>This is a problem from Alperin and Bell, and it is not for homework.</p>
| Ben West | 37,097 | <p>Note that $(gh)^3=\varphi(gh)=\varphi(g)\varphi(h)=g^3h^3$. This implies $ghghgh=ggghhh$, and hence after cancelling, $hghg=gghh$, or $(hg)^2=g^2h^2$. </p>
<p>I claim that every element commutes with every square in $G$. Let $x\in G$ be arbitrary, and let $a^2\in G$ be an arbitrary square. Since $\varphi$ is an automorphism (here is where we used the fact that $3\nmid |G|$), $x=\varphi(y)=y^3$ for some $y$. Then
$$
ay^3a^{-1}=(aya^{-1})^3=\varphi(aya^{-1})=a^3y^3a^{-3}
$$
which implies $y^3=a^2y^3a^{-2}$, or $y^3a^2=a^2y^3$. That is, $xa^2=a^2x$, and completes the claim. </p>
<p>So in particular, $g^2h^2=h^2g^2$. So we get $hghg=(hg)^2=g^2h^2=h^2g^2=hhgg$. Cancelling yields $gh=hg$, so $G$ is abelian. </p>
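<p>The contrapositive can be checked by machine on a concrete group (an illustration only): the dihedral group $D_4$ is non-abelian with $|D_4|=8$, so $3\nmid|G|$, and hence by this result $x\mapsto x^3$ must fail to be a homomorphism on it.</p>

```python
from itertools import product

def comp(p, q):
    """Compose permutations of {0,1,2,3}: (p q)(i) = p(q(i))."""
    return tuple(p[q[i]] for i in range(4))

r = (1, 2, 3, 0)   # rotation of the square
s = (0, 3, 2, 1)   # a reflection

# Generate the closure of {id, r, s} under composition: this is D4.
G = {(0, 1, 2, 3), r, s}
changed = True
while changed:
    changed = False
    for a, b in list(product(G, repeat=2)):
        c = comp(a, b)
        if c not in G:
            G.add(c)
            changed = True

def cube(g):
    return comp(g, comp(g, g))

assert len(G) == 8
assert any(comp(a, b) != comp(b, a) for a in G for b in G)      # non-abelian
assert any(cube(comp(a, b)) != comp(cube(a), cube(b))           # cube not a hom
           for a in G for b in G)
```
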
|
195,006 | <p>I am not very familiar with mathematical proofs, or the notation involved, so if it is possible to explain in 8th grade English (or thereabouts), I would really appreciate it.</p>
<p>Since I may even be using incorrect terminology, I'll try to explain what the terms I'm using mean in my mind. Please correct my terminology if it is incorrect so that I can speak coherently about this answer, if you would.</p>
<p>Sequential infinite set: A group of ordered items that flow in a straight line, of which there are infinitely many. So, all integers from least to greatest would be an example, because they are ordered from least to greatest in a sequential line, but an infinite set of bananas would not, since they are not linearly, sequentially ordered. An infinite set of bananas that were to be eaten one-by-one would be, though, because they are iterated through (eaten) one-by-one (in linear sequence).</p>
<p>Sequential infinite subsets: Multiple sets within a sequential infinite set that naturally fall into the same order as the items of the sequential infinite set of which they are subsets. So, for example, the infinite set of all integers from least to greatest can be said to have the following two sequential infinite subsets within it: all negative integers; and all positive integers. They are sequential because the negative set comes before the positive set when ordered as stated. They are infinite because they both contain an infinite qty of items, and they are subsets because they are within the greater infinite set of all integers.</p>
<p>So I'm wondering if every (not some, but every) sequential infinite set contains within it sequential infinite subsets. The subsets (not the items within them) being sequentially ordered is extremely important. Clearly, a person could take any infinite set, remove one item, and have an infinite subset. Put the item back, remove a different item, and you have multiple infinite subsets. But I need them to be not only non-overlapping, but also sequential in order.</p>
<p>Please let me know if this does not make sense, and thank you for dumbing the answer down for me.</p>
| hmakholm left over Monica | 14,366 | <p>Given your objections to the other answers, here is how I understand your concept of "sequentially ordered":</p>
<ul>
<li>The set is totally ordered (<em>i.e.,</em> for any $a$, $b$ we must have $a\le b$ or $b\le a$).</li>
<li>Any element that has a successor has a <em>first</em> successor.</li>
<li>Any element that has a predecessor has a <em>last</em> predecessor.</li>
</ul>
<p>And you want to divide such a set into two subsets $A$ and $B$ such that there is no element of $A$ that comes between two elements of $B$, and vice versa.</p>
<p>Under this interpretation of the question, one example would be the set
$$ \{0,1\}\times \mathbb Z $$
-- that is, the set of ordered pairs such that the first element is either 0 or 1 and the second element is a (possibly negative) integer -- with the lexicographical ordering such that $(0,a)<(1,b)$ for all $a$ and $b$, and pairs with the same first element are compared according to their second element.</p>
<p>Then the sets of pairs with 0 as the first element and pairs with 1 as the first element should satisfy (my interpretation of) your conditions.</p>
<p>On the other hand, it is clearly not the case that <em>every</em> infinite "sequentially ordered" set can be split into infinite subsets that match this condition -- $\mathbb N$ itself with the usual ordering would be a counterexample, because the one of $A$ and $B$ that contains the "smaller" elements would necessarily be finite.</p>
|
230,126 | <p>A family of functions <span class="math-container">$\left(\varphi_{0}, \varphi_{1}, \cdots, \varphi_{n}\right)$</span> is given.</p>
<p>I'd like to know how to express the matrix of their pairwise inner products conveniently, as follows:</p>
<p><span class="math-container">$$\left(\begin{array}{cccc}
\left(\varphi_{0}, \varphi_{0}\right) & \left(\varphi_{0}, \varphi_{1}\right) & \cdots & \left(\varphi_{0}, \varphi_{n}\right) \\
\left(\varphi_{1}, \varphi_{0}\right) & \left(\varphi_{1}, \varphi_{1}\right) & \cdots & \left(\varphi_{1}, \varphi_{n}\right) \\
\vdots & \vdots & & \vdots \\
\left(\varphi_{n}, \varphi_{0}\right) & \left(\varphi_{n}, \varphi_{1}\right) & \cdots & \left(\varphi_{n}, \varphi_{n}\right)
\end{array}\right)$$</span></p>
<p>Where <span class="math-container">$(f(x), g(x))$</span> is the inner product:
<span class="math-container">$(f(x), g(x))=\int_{a}^{b} f(x) g(x) \mathrm{d} x$</span></p>
<p>We can take <span class="math-container">$\{1,x,x^2,x^3,x^4\}$</span> and <span class="math-container">$\{a=-1,b=1\}$</span> as an example to realize the above requirements.</p>
<pre><code>Outer[Integrate[#1*#2, {x, -1, 1}] &, {1, x, x^2, x^3, x^4}, {1, x, x^2, x^3, x^4}]
</code></pre>
<p>I wonder if there are any other ways to achieve this?</p>
| NonDairyNeutrino | 46,490 | <p>Since you're dealing with inner products, you can take advantage of the symmetry that <span class="math-container">$\left<f(x), g(x)\right> = \left<g(x), f(x)\right>$</span> to only compute the <span class="math-container">$n(n+1)/2$</span> upper triangular elements instead of the total <span class="math-container">$n^2$</span>.</p>
<p>So we can use <a href="https://reference.wolfram.com/language/ref/SymmetrizedArray.html" rel="noreferrer"><code>SymmetrizedArray</code></a> as</p>
<pre><code>ipmatrix[ϕ_, n_, {a_, b_} /; a <= b] := SymmetrizedArray[
{j_, k_} :> Integrate[ϕ[x, j]*ϕ[x, k], {x, a, b}],
{n, n},
Symmetric
]
</code></pre>
<p>To yield your example of</p>
<pre><code>Normal@ipmatrix[#1^(#2 - 1) &, 5, {-1, 1}]
</code></pre>
<blockquote>
<p>{{2, 0, 2/3, 0, 2/5}, {0, 2/3, 0, 2/5, 0}, {2/3, 0, 2/5, 0, 2/7}, {0,
2/5, 0, 2/7, 0}, {2/5, 0, 2/7, 0, 2/9}}</p>
</blockquote>
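<p>Outside Mathematica, the same symmetric Gram matrix can be built in exact arithmetic. Here is a sketch for the monomial basis on <span class="math-container">$[-1,1]$</span> that uses the closed form <span class="math-container">$\int_{-1}^{1}x^m\,dx=\frac{2}{m+1}$</span> for even <span class="math-container">$m$</span> (and <span class="math-container">$0$</span> for odd <span class="math-container">$m$</span>) instead of symbolic integration:</p>

```python
from fractions import Fraction

def gram_matrix(n):
    """Exact n x n Gram matrix of 1, x, ..., x^(n-1) on [-1, 1]."""
    def ip(j, k):                      # inner product of x^j and x^k
        m = j + k
        return Fraction(2, m + 1) if m % 2 == 0 else Fraction(0)
    return [[ip(j, k) for k in range(n)] for j in range(n)]

M = gram_matrix(5)
# Matches the SymmetrizedArray output quoted above:
assert M[0] == [2, 0, Fraction(2, 3), 0, Fraction(2, 5)]
assert M[4] == [Fraction(2, 5), 0, Fraction(2, 7), 0, Fraction(2, 9)]
assert all(M[j][k] == M[k][j] for j in range(5) for k in range(5))
```
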
|
1,147,373 | <p>I need to prove that
<span class="math-container">$$I = \int^{\infty}_{-\infty}u(x,y) \,dy$$</span>
is independent of <span class="math-container">$x$</span> and find its value, where
<span class="math-container">$$u(x,y) = \frac{1}{2\pi}\exp\left(+x^2/2-y^2/2\right)K_0\left(\sqrt{(x-y)^2+(-x^2/2+y^2/2)^2}\right)$$</span></p>
<p>and <span class="math-container">$K_0$</span> is the modified Bessel function of the second kind with order zero.
Evaluating the integral numerically with Mathematica for different values of <span class="math-container">$x$</span> gives the result of <span class="math-container">$2.38$</span>, but I want to know if it is possible to show analytically.</p>
<p>Increasing <span class="math-container">$x$</span> results in an increase of the exponential term on the left, but it also then strongly increases the argument of modified Bessel function, thus reducing its value.</p>
<p>To show that the integral is independent of <span class="math-container">$x$</span>, it is sufficient to show that <span class="math-container">$\int^{\infty}_{-\infty}\frac{\partial}{\partial x}u(x,y) \, dy = 0$</span>, but any differentiation looks more and more ugly.</p>
<p><strong>EDIT</strong>
Mathematica test:</p>
<pre><code> x = 100
NIntegrate[
(1/(2 Pi))* Exp[x*x/2 - y*y/2] BesselK[0,
Sqrt[(x - y)*(x - y) + (x*x/2 - y*y/2)*(x*x/2 -
y*y/2 )]], {y, -Infinity, x, Infinity}, MaxRecursion -> 22]
</code></pre>
<p>This gives an answer of <span class="math-container">$0.378936$</span> independent of the choice of <span class="math-container">$x$</span>. In the earlier calculation I missed the factor <span class="math-container">$\frac{1}{2\pi}$</span>.</p>
| KStarGamer | 885,224 | <p>Although I'm about 7 years late, here is an answer anyway for anyone interested:</p>
<blockquote>
<p><strong>Claim</strong>
<span class="math-container">$$I = \frac{e}{2} \sqrt{\pi} \, \text{erfc} (1)$$</span>
and is thus independent of <span class="math-container">$x$</span>.</p>
</blockquote>
<p><em>Proof.</em></p>
<p>By <a href="https://dlmf.nist.gov/10.32#E10" rel="noreferrer">https://dlmf.nist.gov/10.32#E10</a></p>
<p><span class="math-container">$$K_0 (z) = \frac{1}{2} \int_{0}^{\infty} \exp \left(-t-\frac{z^2}{4t}\right) \, \frac{dt}{t}$$</span></p>
<p>This allows us to write
<span class="math-container">$$\begin{align}I &= \int_{0}^{\infty} u(x,y) \, dy + \int_{-\infty}^{0} u(x,y) \, dy \\&=\frac{1}{2\pi}\int_0^\infty e^{-v^2/2}\int_0^\infty\frac{e^{xv}e^{-t-v^2[1+(x-v/2)^2]/(4t)}+e^{-xv}e^{-t-v^2[1+(x+v/2)^2]/(4t)}}t\,dt\,dv\\ &=\frac{1}{2\pi}\int_0^\infty\int_0^\infty \frac{\cosh\left(xv\left(\frac{v^2}{4t}+1\right)\right)}{t}\exp\left(-t-\frac{1+2t+x^2}{4t}v^2-\frac{v^4}{16t}\right)\,dv\,dt\end{align}$$</span></p>
<p>We now enforce the substitution <span class="math-container">$t = s v^2 \implies dt = v^2 \,ds$</span>:
<span class="math-container">$$\begin{align}I&=\frac{1}{2\pi} \int_{0}^{\infty} \int_{0}^{\infty}\frac{\cosh\left(x v \left(\frac{1}{4s}+1\right)\right)}{s} \exp\left(-sv^2-\frac{1+2sv^2+x^2}{4s}-\frac{v^2}{16s}\right) \, dv \, ds\\&=\frac{1}{2\pi}\int_{0}^{\infty}\int_{0}^{\infty}\frac{\cosh\left(xv\left(\frac{1}{4s}+1\right)\right)}{s}\exp\left(-\frac{1+x^2}{4s}\right)\exp\left(-v^2 \frac{(1+4s)^2}{16s}\right)\,dv\,ds\end{align}$$</span></p>
<p>Since
<span class="math-container">$$\int_{0}^{\infty}\cosh(a v)\exp\left(-v^2 b^2\right) \, dv=\frac{\sqrt{\pi}}{2b}\exp\left(\frac{a^2}{4b^2}\right)$$</span></p>
<p><span class="math-container">$$\implies I = \frac{1}{2\pi} \int_{0}^{\infty} \frac{\exp \left(-\frac{1+x^2}{4s}\right)}{s} \cdot \frac{\sqrt{\pi}}{2\left(\frac{1+4s}{4\sqrt{s}}\right)}\exp \left(\frac{4s x^2 \left(\frac{1}{4s}+1\right)^2}{(1+4s)^2}\right)\, ds$$</span>
This results in the integral being independent as we wanted since the <span class="math-container">$x^2$</span> terms cancel.</p>
<p><span class="math-container">$$\implies I = \frac{1}{\sqrt{\pi}} \int_{0}^{\infty} \frac{\exp\left(-\frac{1}{4s}\right)}{\sqrt{s} (1+4s)} \, ds\stackrel{s\,\mapsto\frac{1}{s}}{=}\frac{1}{\sqrt{\pi}} \int_{0}^{\infty} \frac{\exp\left(-\frac{s}{4}\right)}{\sqrt{s} (4+s)} \, ds=\frac{e}{2} \sqrt{\pi} \, \text{erfc} (1)=0.378\cdots$$</span></p>
<p><span class="math-container">$\square$</span></p>
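<p>The closed form can also be checked numerically without Mathematica. The substitution <span class="math-container">$s=4t^2$</span> below is my own (used only to remove the integrable singularity at <span class="math-container">$s=0$</span>); it turns the final integral into <span class="math-container">$\frac{1}{\sqrt{\pi}}\int_0^\infty \frac{e^{-t^2}}{1+t^2}\,dt$</span>. A stdlib-Python sketch:</p>

```python
import math

# Closed form claimed above: I = (e/2) * sqrt(pi) * erfc(1)
closed = 0.5 * math.e * math.sqrt(math.pi) * math.erfc(1.0)

# Numerical check of the last integral after substituting s = 4t^2:
#   I = (1/sqrt(pi)) * integral of exp(-t^2)/(1+t^2) over [0, inf)
def f(t):
    return math.exp(-t * t) / (1.0 + t * t)

n, hi = 100_000, 20.0          # the tail beyond t = 20 is negligible
h = hi / n
trapezoid = h * (0.5 * (f(0.0) + f(hi)) + sum(f(i * h) for i in range(1, n)))
numeric = trapezoid / math.sqrt(math.pi)

print(closed, numeric)         # both ~0.378936, matching the question's NIntegrate value
```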
|
3,201,996 | <p>I have an 8-digit number and you have an 8-digit number - I want to see if our numbers are the same without either of us passing the other our actual number. Hashing the numbers is the obvious solution. However, if you send me your hashed number and I do not have it - it is very easy to hash all 10^8 possible 8-digit values and recover your number.</p>
<p>I am looking for a way to increase the complexity of the 8-digit number while maintaining uniqueness and a universal process (i.e. we need to be able to apply the same process on both ends). Squaring the number or something like that will not work because there are only as many unique squares of 8-digit numbers as there are unique 8-digit values. Salting will not work for the same reason.</p>
<p>Is there anything I can do to the number to make brute-forcing all possible values not viable?</p>
| user1952500 | 64,332 | <p>You will likely need to use a scheme with a trusted third party. Let the two parties be A and B, and the trusted third party be X. </p>
<ol>
<li>A sends X its key K1 and gets back some sort of token T1. This token needs to have some small lifetime guarantees etc. </li>
<li>B sends X its key K2 and gets back another token T2.</li>
<li>A and B exchange tokens. Each sends {T1, T2} to X and X responds if they are identical. </li>
</ol>
<p>Now the token provider X should provide tokens in such a way that the same key K will generate a different token whenever the request is made etc. </p>
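<p>The protocol might be sketched as follows; this is a toy illustration only (the class and method names are invented here, tokens are freshly random on every request so they cannot be precomputed, and a real deployment would add the lifetime guarantees, authentication, and rate limiting mentioned above):</p>

```python
import secrets

class TrustedParty:
    """X: issues unlinkable random tokens and compares the keys behind them."""
    def __init__(self):
        self._keys = {}                 # token -> key, known only to X

    def issue_token(self, key: str) -> str:
        token = secrets.token_hex(16)   # fresh randomness every call, so the
        self._keys[token] = key         # same key yields a different token each time
        return token

    def same_key(self, t1: str, t2: str) -> bool:
        return self._keys[t1] == self._keys[t2]

x = TrustedParty()
t1 = x.issue_token("12345678")          # A's 8-digit number
t2 = x.issue_token("12345678")          # B's 8-digit number (equal to A's)
t3 = x.issue_token("87654321")
print(t1 != t2, x.same_key(t1, t2), x.same_key(t1, t3))  # True True False
```

<p>Because the tokens carry no deterministic fingerprint of the key, exchanging them leaks nothing; only X can answer the equality query.</p>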
|
1,225,122 | <p>So I am currently studying a course in commutative algebra, and the main objects we are looking at are ideals generated by polynomials in n variables. But the one thing I don't understand when working with these ideals is how we reduce the generating set to something much simpler. For example:</p>
<p>Consider the ideal $I$ = $<x^2-4x + 3, x^2 +x -2>$. Since $x -1$ is a common factor of both polynomials in the generating set, we deduce that $I$ is in fact $<x-1>$. So my question is: what criteria apply when we are reducing the generating set to something much simpler?</p>
<p>Based on what I understand, I am guessing in the above example that since every polynomial is divisible by $x-1$ we can say the ideal is generated by $x-1$ (wouldn't this result in the loss of some elements?). But I am not entirely convinced by my reasoning and would prefer to hear it from someone who understands this stuff better. </p>
<p>Also using the same reasoning as above can we then say that the ideal $I$ = $<x^3 - x^2 + x>$ = $<x>$ ?</p>
| Thomas Poguntke | 222,154 | <p>The fact that $x-1$ divides both of the generators of $I$ only means that $I \subseteq (x-1)$. For the other inclusion, try subtracting one of the generators from the other.</p>
<p>Subsequently, it is also true that $(x^3-x^2+x) \subseteq (x)$, but the other inclusion is false (the LHS does not contain polynomials of degree $1$, say).</p>
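<p>Since $k[x]$ is a principal ideal domain, the ideal generated by the two polynomials is generated by their gcd, which the Euclidean algorithm computes; a stdlib-Python sketch (coefficient lists are lowest-degree-first, a convention chosen here for convenience):</p>

```python
from fractions import Fraction

def trim(p):
    # Drop trailing zero coefficients (zero leading terms of the polynomial).
    while p and p[-1] == 0:
        p.pop()
    return p

def polymod(a, b):
    # Remainder of a modulo b; coefficient lists, lowest degree first.
    a = [Fraction(c) for c in a]
    b = [Fraction(c) for c in b]
    while len(a) >= len(b):
        f = a[-1] / b[-1]
        shift = len(a) - len(b)
        for i, c in enumerate(b):
            a[shift + i] -= f * c
        trim(a)                 # the leading term cancels exactly each pass
    return a

def polygcd(a, b):
    a = trim([Fraction(c) for c in a])
    b = trim([Fraction(c) for c in b])
    while b:
        a, b = b, polymod(a, b)
    return [c / a[-1] for c in a]   # normalize to a monic polynomial

# x^2 - 4x + 3 and x^2 + x - 2 (coefficients lowest degree first):
g = polygcd([3, -4, 1], [-2, 1, 1])
print(g)   # [Fraction(-1, 1), Fraction(1, 1)], i.e. x - 1
```

<p>The suggested subtraction of generators is exactly the first step of this algorithm: it produces $-5x+5$, a scalar multiple of $x-1$.</p>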
|
2,284,178 | <p>Let the roots of the equation:
$2x^3-5x^2+4x+6=0$ be $\alpha,\beta,\gamma$</p>
<ol>
<li>State the values of $\alpha+\beta+\gamma,\alpha\gamma+\alpha\beta+\beta\gamma,\alpha\beta\gamma$</li>
<li>Hence, or otherwise, determine an equation with integer coefficients which has roots $\frac{1}{\alpha^2},\frac{1}{\beta^2},\frac{1}{\gamma^2}$</li>
</ol>
<p>For Question 1 I let the roots equal:
$(x-\alpha)(x-\beta)(x-\gamma)$
which equals: $x^3-x^2(\alpha+\beta+\gamma)+x(\alpha\gamma+\alpha\beta+\beta\gamma)-\alpha\beta\gamma$</p>
<p>I then equated it as:</p>
<p>$\alpha+\beta+\gamma =\frac{5}{2}$</p>
<p>$\alpha\gamma+\alpha\beta+\beta\gamma=2$</p>
<p>$\alpha\beta\gamma=-\frac{6}{2}=-3$</p>
<p>Answering question 2 I went and did:</p>
<p>$\frac{1}{\alpha^2}+\frac{1}{\beta^2}+\frac{1}{\gamma^2}$ which equals $\frac{\alpha^2\beta^2+\alpha^2\gamma^2+\beta^2\gamma^2}{\alpha^2\beta^2\gamma^2}=\frac{(\alpha\beta)^2+(\alpha\gamma)^2+(\beta\gamma)^2}{(\alpha\beta\gamma)^2}$</p>
<p>that would give me the sum of the roots and </p>
<p>$\frac{1}{(\alpha\beta\gamma)^2}$
would give me the product of the roots but kinda confused as to how to finish this question.</p>
| John Lou | 404,782 | <p>I think you're asking how to find the numerator of the fraction giving the other values. </p>
<p>Notice that:</p>
<p>$$(\alpha \beta + \alpha \gamma + \beta \gamma)^2 = (\alpha \beta)^2 + (\alpha \gamma)^2+(\beta \gamma)^2 + (\alpha + \beta + \gamma)(2)(\alpha \beta \gamma)$$
which you can find by simply expanding and factoring</p>
<p>Therefore, </p>
<p>$$(\alpha \beta)^2 + (\alpha \gamma)^2+(\beta \gamma)^2 = (\alpha \beta + \alpha \gamma + \beta \gamma)^2 - (\alpha + \beta + \gamma)(2)(\alpha \beta \gamma)$$</p>
<p>Where you know all the values on the RHS, so you can solve for the LHS.</p>
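<p>The expansion can be sanity-checked with exact rational arithmetic, and then applied to the cubic in the question (note that Vieta gives <span class="math-container">$\alpha\beta\gamma=-6/2=-3$</span> for <span class="math-container">$2x^3-5x^2+4x+6$</span>); a stdlib-Python sketch with arbitrary sample triples:</p>

```python
from fractions import Fraction as F

def identity_holds(a, b, c):
    # (ab + ac + bc)^2 == (ab)^2 + (ac)^2 + (bc)^2 + 2abc(a + b + c)
    lhs = (a*b + a*c + b*c) ** 2
    rhs = (a*b)**2 + (a*c)**2 + (b*c)**2 + (a + b + c) * 2 * (a*b*c)
    return lhs == rhs

samples = [(F(1), F(2), F(3)), (F(-5, 2), F(7), F(1, 3)), (F(0), F(4), F(-4))]
print(all(identity_holds(*t) for t in samples))   # True

# Plugging in Vieta's values s, q, p for 2x^3 - 5x^2 + 4x + 6:
s, q, p = F(5, 2), F(2), F(-3)
S = (q*q - 2*s*p) / (p*p)
print(S)   # 19/9, the sum of the reciprocal squared roots
```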
|
2,284,178 | <p>Let the roots of the equation:
$2x^3-5x^2+4x+6=0$ be $\alpha,\beta,\gamma$</p>
<ol>
<li>State the values of $\alpha+\beta+\gamma,\alpha\gamma+\alpha\beta+\beta\gamma,\alpha\beta\gamma$</li>
<li>Hence, or otherwise, determine an equation with integer coefficients which has roots $\frac{1}{\alpha^2},\frac{1}{\beta^2},\frac{1}{\gamma^2}$</li>
</ol>
<p>For Question 1 I let the roots equal:
$(x-\alpha)(x-\beta)(x-\gamma)$
which equals: $x^3-x^2(\alpha+\beta+\gamma)+x(\alpha\gamma+\alpha\beta+\beta\gamma)-\alpha\beta\gamma$</p>
<p>I then equated it as:</p>
<p>$\alpha+\beta+\gamma =\frac{5}{2}$</p>
<p>$\alpha\gamma+\alpha\beta+\beta\gamma=2$</p>
<p>$\alpha\beta\gamma=-\frac{6}{2}=-3$</p>
<p>Answering question 2 I went and did:</p>
<p>$\frac{1}{\alpha^2}+\frac{1}{\beta^2}+\frac{1}{\gamma^2}$ which equals $\frac{\alpha^2\beta^2+\alpha^2\gamma^2+\beta^2\gamma^2}{\alpha^2\beta^2\gamma^2}=\frac{(\alpha\beta)^2+(\alpha\gamma)^2+(\beta\gamma)^2}{(\alpha\beta\gamma)^2}$</p>
<p>that would give me the sum of the roots and </p>
<p>$\frac{1}{(\alpha\beta\gamma)^2}$
would give me the product of the roots but kinda confused as to how to finish this question.</p>
| Bernard | 202,857 | <p>All this is about <em>Vieta's relations</em> between the elementary symmetric functions of the roots of a polynomial and its coefficients. Let's denote
$$s=\alpha+\beta+\gamma,\quad q=\alpha\beta+\beta\gamma+\gamma\alpha, \quad p=\alpha\beta\gamma.$$
Thus you have to find the values of</p>
<ul>
<li>$S=\dfrac1{\alpha^2}+\dfrac1{\beta^2}+\dfrac1{\gamma^2}=\dfrac{\alpha^2\beta^2+\beta^2\gamma^2+\gamma^2\alpha^2}{(\alpha\beta\gamma)^2}=\dfrac{(\alpha\beta+\beta\gamma+\gamma\alpha)^2-2\alpha\beta\gamma(\alpha+\beta+\gamma)}{(\alpha\beta\gamma)^2}=\dfrac{q^2-2sp}{p^2}$.</li>
<li>$Q=\dfrac1{\alpha^2\beta^2}+\dfrac1{\beta^2\gamma^2}+\dfrac1{\gamma^2\alpha^2}=\dfrac{\alpha^2+\beta^2+\gamma^2}{(\alpha\beta\gamma)^2}=\dfrac{s^2-2q}{p^2}$.</li>
<li>$P=\dfrac1{\alpha^2}\dfrac1{\beta^2}\dfrac1{\gamma^2}=\dfrac1{p^2}$.</li>
</ul>
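<p>Plugging the Vieta values for the cubic in the question into these formulas (with $\alpha\beta\gamma=-6/2=-3$) and clearing denominators yields an explicit integer-coefficient equation; a stdlib-Python sketch:</p>

```python
import math
from fractions import Fraction as F

s, q, p = F(5, 2), F(2), F(-3)        # Vieta values for 2x^3 - 5x^2 + 4x + 6
S = (q*q - 2*s*p) / (p*p)             # sum of the 1/alpha^2
Q = (s*s - 2*q) / (p*p)               # sum of their pairwise products
P = 1 / (p*p)                         # their product

coeffs = [F(1), -S, Q, -P]            # y^3 - S y^2 + Q y - P
m = math.lcm(*(c.denominator for c in coeffs))
print([int(c * m) for c in coeffs])   # [36, -76, 9, -4]
```

<p>which suggests $36y^3-76y^2+9y-4=0$ as one valid answer to part 2.</p>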
|
1,977,577 | <p>If a function is Lebesgue integrable, does that imply that it is measurable?
(without any other assumption)</p>
<p>The reason why I ask this is that Royden, in his book, seems to implicitly assume measurability whenever he assumes a function to be Lebesgue integrable.</p>
| Anindya Biswas | 357,212 | <p>In order to define integrability, measurability is a necessary condition. For example, let $A$ be a Vitali set in $[0,1]$ and $B=[0,1]-A$. Define $f=\chi_A -\chi_B$. Notice that $|f|=\chi_{[0,1]}$ and hence is integrable. Now the range of $f$ is $\{-1,1,0\}$, with $A=f^{-1}\{1\}$, $B=f^{-1}\{-1\}$ and $[0,1]^c=f^{-1}\{0\}$. If we assume $f$ to be integrable with respect to the Lebesgue measure $\lambda$, then we should be able to write $$\int f \, d\lambda=\int_{f^{-1}\{1\}} f \, d\lambda+\int_{f^{-1}\{-1\}} f \, d\lambda$$ and hence $$\int f \, d\lambda=\lambda(A) -\lambda(B)\ .$$ But the RHS is not defined, since both $A$ and $B$ are nonmeasurable with respect to $\lambda$. So without the measurability condition, the integral is not defined.</p>
|
258,215 | <p>How can we show that if $f:V\to V$
then for each $m\in \mathbb{N}$ we have $$\operatorname{im}(f^{m+1})\subset \operatorname{im}(f^m).$$
Please help, I am stuck on this.</p>
| Harald Hanche-Olsen | 23,290 | <p>This is an answer to the bonus (?) question in the comments. If $f$ is not injective, say $f(x)=f(y)$ for some $x$, $y$ with $x\ne y$, then for any set $B\subset Y$, either $x$ and $y$ are both in $f^{-1}(B)$, or none of them are. So any set $A$ that contains one but not the other is not an inverse image.</p>
<p>Conversely, if $f$ is injective it is clear that $A=f^{-1}(f(A))$.</p>
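<p>A small finite illustration of the first paragraph (the map $n\mapsto n\bmod 3$ is an arbitrary non-injective choice made here):</p>

```python
from itertools import chain, combinations

f = lambda n: n % 3                   # not injective: f(0) == f(3)
domain = range(9)

def preimage(B):
    return {n for n in domain if f(n) in B}

# Enumerate the preimage of every subset B of the codomain {0, 1, 2}:
codomain = [0, 1, 2]
subsets = chain.from_iterable(combinations(codomain, r) for r in range(4))
all_preimages = [preimage(set(B)) for B in subsets]

A = {0, 1, 2}                         # contains 0 but not 3, although f(0) == f(3)
print(A in all_preimages)             # False: A is not an inverse image of any B
```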
|
757,917 | <p>According to <a href="http://www.wolframalpha.com/input/?i=sqrt%285%2bsqrt%2824%29%29-sqrt%282%29%20=%20sqrt%283%29" rel="nofollow">wolfram alpha</a> this is true: $\sqrt{5+\sqrt{24}} = \sqrt{3}+\sqrt{2}$</p>
<p>But how do you show this? I know of no rules that work with addition inside square roots.</p>
<p>I noticed I could do this:</p>
<p>$\sqrt{24} = 2\sqrt{3}\sqrt{2}$</p>
<p>But I still don't see how I should show this since $\sqrt{5+2\sqrt{3}\sqrt{2}} = \sqrt{3}+\sqrt{2}$ still contains that addition</p>
| Jika | 143,836 | <p><strong>HINT______________$1$:</strong> $$\sqrt{24}=2\sqrt{6}.$$</p>
<p><strong>HINT______________$2$:</strong> $$a^2=b^2\Leftrightarrow a=b\,\vee a=-b.$$</p>
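<p>A quick numeric sanity check of the identity and of the squaring step suggested by the hints (an illustration, not a proof):</p>

```python
import math

lhs = math.sqrt(5 + math.sqrt(24))
rhs = math.sqrt(3) + math.sqrt(2)
print(lhs, rhs)                            # both ~3.14626...

# Hint 2 in action: squaring the right-hand side recovers 5 + sqrt(24),
# and since both sides are positive, the square roots must agree.
print(abs(rhs**2 - (5 + math.sqrt(24))))   # ~0 up to floating-point error
```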
|
1,547,972 | <p>Why is $\sin(x^2)$ similar to $x \sin(x)$? </p>
<p>I graphed it using desmos and when I look at it, the behavior as x approaches zero seems to be to oscillate less. </p>
<p>Yet as x approaches infinity and negative infinity, $\sin(x^2)$ oscillates between y=1 and y=-1 while $x\sin(x)$ oscillates between y=x and y=-x.</p>
<p>I was wondering why these functions are so similar yet so different. I'm in 10th grade and I'm currently learning precalculus, so if answers could be targeted to a precalculus level that would be great.</p>
| Akiva Weinberger | 166,353 | <p>In calculus, you learn that many functions can be written as "infinite polynomials." For example, there's this:
$$\sin x=x-\frac{x^3}{3!}+\frac{x^5}{5!}-\frac{x^7}{7!}+\dotsb$$
Yes, those are factorials. A consequence of this is that, since $\sin\pi=0$, we have:
$$0=\pi-\frac{\pi^3}{3!}+\frac{\pi^5}{5!}-\frac{\pi^7}{7!}+\dotsb$$
This is not obvious.</p>
<p>Note that it begins with $x$, rather than $2x$ or $\frac x2$ or whatever. One consequence of this is that, when $x\approx0$, we have $\sin x\approx x$ (since all of the other terms are much smaller than $x$ when $x$ is small). This is actually much easier to prove than the rest of the series; the usual proof involves some geometry and the unit circle. However, for what follows, the exact coefficient of $x$ doesn't really matter.</p>
<p>In any case, multiplying by $x$, we get:
$$x\sin x=x^2-\frac{x^4}{3!}+\frac{x^6}{5!}-\frac{x^8}{7!}+\dotsb$$
Or, replacing the $x$s with $x^2$s:
$$\sin x^2=x^2-\frac{x^6}{3!}+\frac{x^{10}}{5!}-\frac{x^{14}}{7!}+\dotsb$$
We can see that these both begin with $x^2$. As $x$ gets smaller, these other terms are much less significant than the $x^2$ term. This means that, when $x\approx0$, we have $x^2\approx x\sin x\approx\sin x^2$.</p>
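<p>The claim is easy to see numerically: subtracting the two series above, their difference is $-x^4/6+O(x^6)$, which shrinks much faster than the shared leading term $x^2$. A stdlib-Python check:</p>

```python
import math

for x in (0.5, 0.1, 0.01):
    diff = x * math.sin(x) - math.sin(x * x)
    print(x, diff, -x**4 / 6)   # diff tracks -x^4/6 ever more closely as x shrinks
```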
|
1,547,972 | <p>Why is $\sin(x^2)$ similar to $x \sin(x)$? </p>
<p>I graphed it using desmos and when I look at it, the behavior as x approaches zero seems to be to oscillate less. </p>
<p>Yet as x approaches infinity and negative infinity, $\sin(x^2)$ oscillates between y=1 and y=-1 while $x\sin(x)$ oscillates between y=x and y=-x.</p>
<p>I was wondering why these functions are so similar yet so different. I'm in 10th grade and I'm currently learning precalculus, so if answers could be targeted to a precalculus level that would be great.</p>
| Claude Leibovici | 82,404 | <p><em>This is not an answer but it is too long for a comment.</em></p>
<p>You received the explanation of what you observed.</p>
<p>I give you another one which is also amazing: the function $y=\sin^x(x)$ on the range $(2\pi,3\pi)$ almost looks like a Gaussian. Doing something similar to the answers above, consider the Taylor expansion around $x=\frac{5\pi}{2}$: $$x\log(\sin(x))=-\frac{5\pi}{4} \left(x-\frac{5 \pi }{2}\right)^2+O\left(\left(x-\frac{5 \pi }{2}\right)^3\right)$$ $$\sin^x(x)\simeq e^{-\frac{5\pi}{4} \left(x-\frac{5 \pi }{2}\right)^2}$$ Put the two functions on the same plot and enjoy.</p>
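<p>The quality of this Gaussian approximation near $x=\frac{5\pi}{2}$ can be eyeballed numerically as well (a rough illustration; the sample offsets are arbitrary):</p>

```python
import math

x0 = 5 * math.pi / 2
for dx in (0.05, 0.1, 0.2):
    x = x0 + dx
    # sin(x)^x versus the Gaussian exp(-5*pi/4 * (x - x0)^2)
    print(dx, math.sin(x) ** x, math.exp(-5 * math.pi / 4 * dx * dx))
```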
|
3,684,331 | <p>I am working on a problem from my Qual</p>
<p>"Let <span class="math-container">$T:V\to V$</span> be a bounded linear map where <span class="math-container">$V$</span> is a Banach space. Assume for each <span class="math-container">$v\in V$</span>, there exists <span class="math-container">$n$</span> s.t. <span class="math-container">$T^n(v)=0$</span>. Prove that <span class="math-container">$T^n=0$</span> for some <span class="math-container">$n$</span>."</p>
<p>My impression is that it looks like an algebraic problem. But I know nothing about <span class="math-container">$V$</span> (it is not finitely generated or anything like that). So that approach is off.</p>
<p>The hypothesis seems to be set up for the Uniform Boundedness Principle. Indeed, for each <span class="math-container">$v$</span>, <span class="math-container">$\sup_n \{||T^n(v)|| \}$</span> is bounded, so by the principle, <span class="math-container">$\sup_n ||T^n||$</span> is bounded. </p>
<p>I'm stuck here. I don't think there is a relation between <span class="math-container">$||T^n||$</span> and <span class="math-container">$||T||$</span>, even if there is, I cannot shake the feeling that the sequence <span class="math-container">$||T^n||$</span> may be eventually constant. I am hoping <span class="math-container">$||T^n||=0$</span> eventually but failed to prove it.</p>
| Integrand | 207,050 | <p>Put <span class="math-container">$x=z-\pi$</span>; it suffices to show the inequality for <span class="math-container">$z\geq \pi$</span>.
<span class="math-container">$$
z-\pi -\sin(z-\pi)\geq \frac{(z-\pi)^3}{z^2}
$$</span>
<span class="math-container">$$
z-\pi +\sin(z)\geq z -3\pi+\frac{3\pi^2}{z} -\frac{\pi^3}{z^2}
$$</span>
<span class="math-container">$$
2\pi +\sin(z)\geq \frac{3\pi^2}{z} -\frac{\pi^3}{z^2}
$$</span>Note that <span class="math-container">$\sin(z)\geq -1$</span>. If we set
<span class="math-container">$$
2\pi- 1=\frac{3\pi^2}{z} -\frac{\pi^3}{z^2},
$$</span>the last solution is <span class="math-container">$z=\frac{3 π^2 + π^{3/2} \sqrt{4 + π}}{4 π - 2}(:=z_0)\approx 4.21$</span>. But on <span class="math-container">$(\pi,z_0)$</span>, we can use the approximation <span class="math-container">$\sin(z)> (\pi-z)-\frac{(\pi-z)^3}{7}$</span>, whence
<span class="math-container">$$
2\pi +\sin(z) - \frac{3\pi^2}{z} +\frac{\pi^3}{z^2}
$$</span>
<span class="math-container">$$
> 2\pi + (\pi-z)-\frac{(\pi-z)^3}{7}-\frac{3\pi^2}{z} +\frac{\pi^3}{z^2}
$$</span>
<span class="math-container">$$
=\frac{(z-\pi )^3 \left(z^2-7\right)}{7 z^2}>0
$$</span>
So there are no solutions on <span class="math-container">$(\pi,z_0)$</span>, we don't actually have equality at <span class="math-container">$z_0$</span>, and we know there are no solutions for <span class="math-container">$z>z_0$</span>.</p>
|
44,306 | <p>This morning I realized I have never understood a technical issue about Cauchy's theorem (homotopy form) of complex analysis. To illustrate, let me first give a definition.</p>
<p>(In what follows $\Omega$ will always denote an open subset of the complex plane.)</p>
<p><em>Definition</em> Let $\gamma, \eta\colon [0, 1]_t \to \Omega$ be two piecewise smooth closed curves (or <em>circuits</em>). We say that $\gamma, \eta$ are $\Omega$-<em>homotopic</em> if there exists a continuous mapping $H \colon [0, 1]_\lambda \times [0, 1]_t \to \Omega$ s.t. </p>
<ol>
<li>$H(0, t)=\gamma(t)$ and $H(1, t)=\eta(t), \quad \forall t \in [0, 1]$;</li>
<li>$H(\lambda, 0)=H(\lambda, 1), \quad \forall \lambda \in [0, 1]$.</li>
</ol>
<p><strong>Theorem</strong> (Cauchy) Let $f\colon \Omega \to \mathbb{C}$ be holomorphic. If $\gamma, \eta$ are $\Omega$-homotopic circuits, then </p>
<p>$$\int_{\gamma} f(z)\, dz= \int_{\eta}f(z)\, dz.$$</p>
<blockquote>
<p><strong>Problem</strong> The function $H$ above is only continuous and need not be smooth. So for $0< \lambda < 1$, the closed curve $H(\lambda, \cdot)$ may be pretty much anything (a Peano curve, for example). Does this void the validity of the theorem as it is stated above? How can integration be defined over such a pathological object? </p>
</blockquote>
<p>The proof of Cauchy's theorem that I have in mind goes as follows. To begin, one observes that for a sufficiently small value of $\lambda_1$, the circuits $\gamma=H(0, \cdot)$ and $H(\lambda_1, \cdot)$ are <em>close together</em>; that is, they can be covered by a finite sequence of disks not leaving $\Omega$ like in the following figure:</p>
<p><img src="https://i.stack.imgur.com/E4e7r.png" alt="Proof of Cauchy's theorem"></p>
<p>Since $f$ is locally exact, its integrals over every single disk depend only on the local primitive. Playing a bit with this, one arrives at </p>
<p>$$\int_\gamma f(z)\, dz= \int_{H(\lambda_1, \cdot)} f(z)\, dz.$$</p>
<p>Then one repeats this process, yielding a $\lambda_2$ greater than $\lambda_1$ and such that</p>
<p>$$\int_{H(\lambda_1,\cdot)} f(z)\, dz= \int_{H(\lambda_2, \cdot)} f(z)\, dz.$$</p>
<p>And so on. A compactness argument finally shows that this algorithm ends in a finite number of steps. </p>
<p><em>Problem is</em>: this proof assumes implicitly that $H(\lambda_1, \cdot), H(\lambda_2, \cdot) \ldots$ are piecewise smooth, to make sense of integrals $$\int_{H(\lambda_j, \cdot)}f(z)\, dz.$$
This, however, does <em>not</em> follow from the definition if $H$ is only assumed to be continuous. Therefore this proof works only for smooth $H$. </p>
<p><strong>Is this regularity condition necessary?</strong></p>
| Malik Younsi | 1,197 | <p>If I understand your question correctly, the problem that $H$ may be non-smooth can be solved by approximating with polygonal smooth paths, see for example Rudin's real and complex analysis (3rd edition), thm 10.40 and the remark after it. </p>
<p>As an interesting note, Rudin adds that another way to circumvent this difficulty is to extend the definition of index to closed curves, which is sketched in one of the exercises of the book.</p>
|
452,292 | <p>Let $E$ be a Lebesgue measurable set in $\mathbb{R}$. Prove that
$$\lim_{x\rightarrow 0} m(E\cap (E+x))=m(E).$$ </p>
| Etienne | 80,469 | <p>First assume that $m(E)<\infty$. In this case you can write
$$m(E\cap (E+x))=\int_{\mathbb R} \mathbf 1_{E}(u)\mathbf 1_E(u-x)\, du=\mathbf 1_{E}*\mathbf 1_{-E} (x)\, .$$
Since $f=\mathbf 1_E\in L^1$ (because $m(E)<\infty$) and $g=\mathbf 1_{-E}\in L^\infty$, the convolution $f*g$ is an everywhere defined $continuous$ function. (This is a nontrivial but quite well known fact). It follows that $\lim_{x\to 0} m(E\cap (E+x))=\mathbf 1_{E}*\mathbf 1_{-E}(0)=\int_{\mathbb R} \mathbf 1_E(u)\mathbf 1_E(u)\, du=m(E)$.</p>
<p>If $m(E)=\infty$, we have to show that $m(E\cap (E+x))\to \infty$ as $x\to 0$. Write $E$ as $E=\bigcup_n E_n$, where the sequence $(E_n)$ is increasing and $m(E_n)<\infty$ for all $n$. Then we have $\liminf_{x\to 0} m(E\cap (E+x))\geq \liminf_{x\to 0} m(E_n\cap (E_n+x))=m(E_n)$ for all $n$, so $\liminf_{x\to 0} m(E\cap (E+x))\geq \sup_n m(E_n)=\infty$.</p>
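<p>A concrete illustration of the statement with $E=[0,1]$ (the example set is my choice), where $E\cap(E+x)$ is an interval of length $1-|x|$, which indeed tends to $m(E)=1$:</p>

```python
def overlap(x):
    # m([0,1] ∩ ([0,1]+x)) = length of [max(0,x), min(1,1+x)]
    lo, hi = max(0.0, x), min(1.0, 1.0 + x)
    return max(0.0, hi - lo)

for x in (0.5, 0.1, 0.01, -0.01):
    print(x, overlap(x))   # tends to 1 = m(E) as x -> 0
```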
|
3,775,554 | <p><strong>Identity for a set X:</strong></p>
<p><em>The set X has an identity under the operation if there is
an element <strong>j</strong> in set X such that <strong>j * a = a * j = a</strong> for all elements <strong>a</strong> in set X.</em></p>
<p>According to my college book the counting numbers don't have an identity.</p>
<p>But for the operation * there is the number 1 (= j) such that a * 1 = 1 * a = a for all elements a in set X.</p>
<p><strong>Why do natural numbers have no identity?</strong></p>
| Barry Cipra | 86,747 | <p>In comments the OP clarifies the question:</p>
<blockquote>
<p>"Which properties are true for the counting numbers, whole numbers,
integers, rational numbers, irrational numbers, and real numbers under
the operation of addition?" The properties are: closed, identity and
inverse.</p>
</blockquote>
<p>The verbatim quote, "Which properties ... addition?", can be tracked (via Google) to a Schaum's Outline, <em>College Algebra</em>, which defines the <em>counting</em> numbers as the set <span class="math-container">$\{1,2,3,\ldots\}$</span> and the <em>whole</em> numbers as <span class="math-container">$\{0,1,2,3,\ldots\}$</span>. The counting numbers thus, by definition, lack the additive identity <span class="math-container">$0$</span>.</p>
|
4,170,150 | <p>I have the following two definitions:</p>
<p>Let <span class="math-container">$S$</span> be a subset of a Banach space <span class="math-container">$X$</span>.</p>
<ol>
<li><p>We say that <span class="math-container">$S$</span> is <em>weakly bounded</em> if for every <span class="math-container">$l\in X^*$</span>, the dual space of <span class="math-container">$X$</span>, we have <span class="math-container">$\sup\{|l(s)|:s\in S\}<\infty$</span>.</p>
</li>
<li><p>We say that <span class="math-container">$S$</span> is <em>strongly bounded</em> if <span class="math-container">$\sup\{||s||:s\in S\}<\infty$</span>.</p>
</li>
</ol>
<p>I need to prove those definitions are equivalent. I have already proved that 2 implies 1, but I'm stuck with the other direction. I suspect that I have to use the Hahn–Banach theorem, or maybe one of its corollaries, but I don't know how to proceed.</p>
<p>Thanks to all of you.</p>
| Speripro | 885,263 | <p>The fact that X is Banach is not necessary;
suppose only that X is a normed space.
Recall that the map</p>
<p><span class="math-container">$j: X\to X^{**}$</span>, <span class="math-container">$s\mapsto \hat{s}$</span></p>
<p>is an <strong>isometry</strong>.</p>
<hr />
<p>Per your definition of weak boundedness, the set of operators <span class="math-container">$\{ \hat{s}\}_{s\in S}\subseteq X^{**}$</span> is <strong>pointwise bounded</strong>.<br />
<strong>Apply</strong> now the <em>Uniform Boundedness Principle</em> and get that <span class="math-container">$\Vert \hat{s}\Vert\leq K$</span> for all <span class="math-container">$s\in S$</span>.<br />
<strong>Apply</strong> now the fact that <span class="math-container">$j$</span> is an isometry and conclude that <span class="math-container">$S$</span> is bounded in the norm topology of <span class="math-container">$X$</span>.<br />
Hope it helps.</p>
|
536,068 | <p>If X and Y are iid random variables with distribution $F(x)=e^{-e^{-x}}$ and we let $Z=X-Y$ find the distribution function of $F_Z(x)$. I get $F_Z(x)=\frac{e^x}{1+e^x}$ but that doesn't match the answer that the professor gave us. Is what I have correct? I used the standard approach where I integrate over a region the joint density which is just the product of the two densities. Nothing too fancy I did, its just I think my professor got it wrong. </p>
| mathemagician | 49,176 | <p>Here's what I did </p>
<p>$F_Z(z)=\mathbb{P}\{Z\leq z\}=\mathbb{P}\{X-Y\leq z\}=\mathbb{P}\{X\leq Y+z\}$. Now if $A\subseteq \mathbb{R}^2$ is defined as $A=\{(x,y)\in\mathbb{R}^2\;|\;x\leq y+z\}$ we get that the above equals $\int_A f_{X,Y}(x,y)\, dx\, dy$. Since $X,Y$ are independent we have $f_{X,Y}(x,y)=f_X(x)\cdot f_Y(y)$. And $f_X(x)=f_Y(x)=e^{-e^{-x}}\cdot e^{-x}$. Then $F_Z(z)=\int_{-\infty}^\infty\int_{-\infty}^{y+z}e^{-e^{-x}}\cdot e^{-x}e^{-e^{-y}}\cdot e^{-y}\, dx\, dy$. I then take the $y$ factors out of the inner integral and integrate with respect to $x$ to get a function of $y$, then integrate with respect to $y$ and get my answer above. </p>
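<p>The resulting logistic distribution function $F_Z(z)=\frac{e^z}{1+e^z}$ can also be corroborated by simulation (the Gumbel sampler below uses inverse-transform sampling from $F(x)=e^{-e^{-x}}$; the sample size and seed are arbitrary):</p>

```python
import math, random

random.seed(42)

def gumbel():
    # Inverse transform: F(x) = exp(-exp(-x))  =>  x = -log(-log(u))
    return -math.log(-math.log(random.random()))

n, z0 = 100_000, 1.0
empirical = sum(gumbel() - gumbel() <= z0 for _ in range(n)) / n
logistic = math.exp(z0) / (1 + math.exp(z0))
print(empirical, logistic)   # both ~0.73
```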
|
2,459,493 | <p>Here's the <span class="math-container">$C^1$</span> norm : </p>
<p><span class="math-container">$|| f || = \sup | f | + \sup | f '|$</span></p>
<p>where the supremum is taken on <span class="math-container">$[a, b]$</span>. </p>
<p>Please, justify your answer (proofs or counterexamples are needed). </p>
| Wildcard | 276,406 | <p>Ignoring the restriction, there are five choices for each of three positions.</p>
<p>The restriction excludes ordered sequences drawn from the space wherein there are only three choices for each of three positions.</p>
<p>Final answer is:</p>
<p>$$5^3-3^3=98$$</p>
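<p>The original restriction is not quoted in this excerpt, so, as a purely hypothetical reconstruction consistent with the count: three positions, five symbols each, and sequences built entirely from the three "unrestricted" symbols are excluded. Brute-force enumeration then reproduces $5^3-3^3$:</p>

```python
from itertools import product

symbols = range(1, 6)    # five choices per position (assumed)
special = {4, 5}         # symbols that must appear at least once (assumed)
count = sum(1 for seq in product(symbols, repeat=3) if special & set(seq))
print(count)             # 125 - 27 = 98
```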
|
1,302,891 | <p>I would like to ask you a question about the following problem.</p>
<p>Let $f:(a,b)\rightarrow \mathbb{R}$ be non-decreasing, i.e. $f(x_1)\leq f(x_2)$ whenever $x_1 \leq x_2$, and let $c \in (a,b)$. Show that $\lim_{x \ \rightarrow \ c-}{f(x)}$ and $\lim_{x \ \rightarrow \ c+}{f(x)}$ both exist.</p>
<p>If $f$ is continuous at $c$ then obviously both limits exist and: $$\lim_{x \ \rightarrow \ c-}{f(x)}=\lim_{x \ \rightarrow \ c+}{f(x)}=f(c)$$ But how would we approach this if $f$ is not continuous at $c$? Thank you guys!</p>
| Disintegrating By Parts | 112,478 | <p>Note that $f(x) \le f(c)$ for $a < x < c$. Therefore the following exists
$$
\sup_{x < c} f(x)
$$
and you can show that the following limit exists, too:
$$
\lim_{x\uparrow c}f(x) = \sup_{x < c} f(x).
$$</p>
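<p>A discrete illustration of this sup characterization, using the floor function (an arbitrary non-decreasing example with a jump) at $c=1$:</p>

```python
import math

f, c = math.floor, 1.0          # floor is non-decreasing, discontinuous at c = 1
left  = max(f(c - 10.0**-k) for k in range(1, 12))   # approaches sup over x < c
right = min(f(c + 10.0**-k) for k in range(1, 12))
print(left, right, f(c))        # 0 1 1: both one-sided limits exist but differ
```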
|
4,132,439 | <p>I know that <span class="math-container">$\int_1^\infty \frac1xdx$</span> diverges. I can prove that <span class="math-container">$\int_1^\infty \frac1{\ln(x)}dx$</span> diverges, as <span class="math-container">$\forall x>1:\frac1x<\frac1{\ln(x)}$</span></p>
<p>I also know that <span class="math-container">$\int_1^\infty \frac1{x^2}dx$</span> converges <span class="math-container">$ \therefore \int_1^\infty \frac1{x^7}dx$</span> also converges.</p>
<p>I know that <span class="math-container">$\forall x>1:x>\ln(x) \rightarrow x^7>\ln^7(x)\rightarrow \frac1{x^7}<\frac1{\ln^7(x)}$</span> but this doesn't help me much.</p>
| RRL | 148,510 | <p>Since <span class="math-container">$\ln x = 7 \ln x^{1/7}$</span> we have <span class="math-container">$\ln ^7 x = (7 \ln x^{1/7})^7 < 7^7 x$</span> and</p>
<p><span class="math-container">$$\int _2^\infty \frac{dx}{\ln^7 x} > \int _2^\infty \frac{dx}{7^7x}= +\infty$$</span></p>
<p>The integral over <span class="math-container">$[1,2]$</span> also diverges, which can be shown by the limit comparison test with <span class="math-container">$(x-1)^{-7}$</span>.</p>
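<p>The inequality in the first line follows from $\ln t < t$ applied at $t=x^{1/7}$; a quick numeric spot check:</p>

```python
import math

# ln x = 7 ln(x^(1/7)) and ln t < t for t > 0, hence ln^7 x < 7^7 x for x > 1.
for x in (2.0, 10.0, 1e6, 1e300):
    assert math.log(x) ** 7 < 7 ** 7 * x
print("inequality holds at all sampled points")
```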
|
4,132,439 | <p>I know that <span class="math-container">$\int_1^\infty \frac1xdx$</span> diverges. I can prove that <span class="math-container">$\int_1^\infty \frac1{\ln(x)}dx$</span> diverges, as <span class="math-container">$\forall x>1:\frac1x<\frac1{\ln(x)}$</span></p>
<p>I also know that <span class="math-container">$\int_1^\infty \frac1{x^2}dx$</span> converges <span class="math-container">$ \therefore \int_1^\infty \frac1{x^7}dx$</span> also converges.</p>
<p>I know that <span class="math-container">$\forall x>1:x>\ln(x) \rightarrow x^7>\ln^7(x)\rightarrow \frac1{x^7}<\frac1{\ln^7(x)}$</span> but this doesn't help me much.</p>
| hamam_Abdallah | 369,188 | <p><strong>hint</strong>
Near infinity
<span class="math-container">$$\lim_{x\to+\infty}x\cdot\frac{1}{(\ln(x))^7}=+\infty\implies$$</span>
for <span class="math-container">$ x $</span> large enough
<p><span class="math-container">$$x\cdot\frac{1}{(\ln(x))^7}>1\implies$$</span></p>
<p><span class="math-container">$$\frac{1}{(\ln(x))^7}>\frac 1x\implies
$$</span>
<span class="math-container">$$\int_1^{+\infty}\frac{dx}{(\ln(x))^7}\text{ diverges}$$</span></p>
<p>OR</p>
<p>Near <span class="math-container">$ 1^+$</span></p>
<p><span class="math-container">$$\ln(x)\sim (x-1)\implies $$</span>
the two integrals <span class="math-container">$\int_1^2\frac{dx}{(\ln(x))^7}$</span> and <span class="math-container">$ \int_1^2\frac{dx}{(x-1)^7}$</span> are of the same nature: both divergent.</p>
|