| qid | question | author | author_id | answer |
|---|---|---|---|---|
8,658 | <p>$f(x) = \frac{1}{\cos x}$</p>
<p>$f'(x) = \frac{\sin(x)}{\cos^2(x)}$</p>
<p>$f''(x) = \frac{2\sin^2(x)+\cos^2(x)}{\cos^3(x)}$</p>
<p>$f^{(3)}(x) = \frac{6\sin^3(x)+5\cos^2(x)\sin(x)}{\cos^4(x)}$</p>
<p>$\vdots$</p>
<p>$f^{(n)}(x) = \frac{?}{\cos^{n+1}(x)}$</p>
<p>Some of these are easy: <a href="http://darkwing.uoregon.edu/~jcomes/251exn.pdf" rel="nofollow">http://darkwing.uoregon.edu/~jcomes/251exn.pdf</a>
Others are not. Why?</p>
| J. M. ain't a mathematician | 498 | <p>For completeness: the Wolfram Functions site gives <a href="http://functions.wolfram.com/ElementaryFunctions/Sec/20/02/0001/" rel="nofollow">a series representation</a> for the $n$-th derivative of the secant:</p>
<p>$$\frac{\mathrm d^n}{\mathrm dx^n}\sec\,x=\sum_{j=0}^\infty \frac{(-1)^j}{(2j-n)!} E_{2j} x^{2j-n}$$</p>
<p>where the $E_{2j}$ are the Euler numbers mentioned in Robin's answer.</p>
<p>There is also a <a href="http://functions.wolfram.com/ElementaryFunctions/Sec/20/02/0003/" rel="nofollow">finite double series representation</a>:</p>
<p>$$\frac{\mathrm d^n}{\mathrm dx^n}\sec\,x=(n+1)!\sec\,x \sum _{k=0}^n \sum _{j=0}^{\left\lfloor\frac{k-1}{2}\right\rfloor} \frac{\left((-1)^k 2^{1-k} (k-2 j)^n \sec ^k x\right) \cos\left(\frac{n\pi}{2}+(k-2j)x\right)}{(k+1) j! (k-j)! (n-k)!}$$</p>
<p>As Qiaochu says, there's no good reason to expect that computing higher derivatives of some function is an easy task...</p>
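For readers who want to sanity-check the low-order closed forms quoted in the question, finite differences give a quick numeric comparison (an illustrative sketch, not part of the original answer):

```python
import math

def f(x):
    """f(x) = sec(x) = 1/cos(x)."""
    return 1 / math.cos(x)

x, h = 0.4, 1e-5
# Central difference vs. the closed form f'(x) = sin(x)/cos^2(x).
d1_num = (f(x + h) - f(x - h)) / (2 * h)
d1_formula = math.sin(x) / math.cos(x)**2
# Symmetric second difference vs. f''(x) = (2 sin^2 x + cos^2 x)/cos^3 x.
d2_num = (f(x + h) - 2 * f(x) + f(x - h)) / h**2
d2_formula = (2 * math.sin(x)**2 + math.cos(x)**2) / math.cos(x)**3
print(d1_num, d1_formula, d2_num, d2_formula)
```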
|
168,020 | <p>Let $R$ be a local Artinian ring, with maximal ideal $\mathfrak{m}$.</p>
<p>Let $e$ be the smallest positive integer for which $\mathfrak{m}^e=(0)$.</p>
<p>Let $t$ be the smallest positive integer for which $x^t=0$ for all $x \in \mathfrak{m}$.</p>
<p>We know $t \leq e$, with equality holding whenever $\mathfrak{m}$ is a principal ideal (i.e., $R$ is a principal ideal ring). Moreover, equality holds whenever $e \leq 2$.</p>
<p>What (else) is known about the relationship between these two integers?</p>
<p>What about the case when $R$ is the Artinian ring associated to a point of an algebraic curve that is contained in two distinct irreducible components?</p>
| Mohan | 9,502 | <p>If $R$ contains a field of characteristic zero, then $e=t$. This follows from the fact that if $V$ is a finite-dimensional vector space over a field of characteristic zero, the image of the map $V\to S^dV$, $v\mapsto v^d$, generates $S^dV$ as a vector space for any $d$. In your case, it suffices to prove that $\mathfrak{m}^t=0$. If not, consider the map $V\to S^tV$ with $V=\mathfrak{m}/\mathfrak{m}^2$ and $d=t$, composed with the surjective map $S^tV\to\mathfrak{m}^t/\mathfrak{m}^{t+1}$, to get the desired contradiction.</p>
|
889,111 | <p>This is a problem from Apostol's Real Analysis book.
$$\text{Find if }\sum_{n=1}^{\infty}\dfrac{1}{n^{1+\frac{1}{n}}}\text{ converges or diverges. }$$
I tried to compare with $\displaystyle \sum_{n=1}^{\infty}\dfrac{1}{n^p}$ for suitable $p$, but $p>1$ always fails. I also tried to show that $\displaystyle \sum_{k=1}^{\infty}2^ka_{2^k}$ converges, where $\displaystyle a_n=n^{-\left( 1+\frac{1}{n}\right)}$, but again this got too complicated. Can someone give me a proof? Thanks. </p>
<p>Edit : Sorry, I was carried away, because I was thinking it would converge, but the book asked to check for convergence only. I edited it. </p>
| Ted Shifrin | 71,348 | <p>An alternative (and, conceptually, a powerful) way to think about such problems is to use the limit comparison test. Note that $n^{1+1/n} = n\cdot n^{1/n}$. What is $\lim\limits_{n\to\infty}n^{1/n}$?</p>
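To illustrate the hint numerically (a small sketch, not from the original answer): $n^{1/n}$ tends to $1$, so the terms $1/(n\cdot n^{1/n})$ behave like $1/n$, and the series diverges by limit comparison with the harmonic series.

```python
# n^(1+1/n) = n * n^(1/n), and n^(1/n) -> 1, so the terms behave like 1/n.
for n in [10, 100, 10_000, 1_000_000]:
    print(n, n ** (1 / n))
```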
|
1,250,258 | <p>It's been 10 years since my last math class so I'm very rusty. How would I go about proving
$$3^n < n!$$
where $n \geq 7$?</p>
<p>I understand that factorials grow faster than set values with a variable exponent. Just not sure how to start proving it mathematically.</p>
| Daniel W. Farlow | 191,378 | <p>We can prove it by induction. For $n\geq 7$, let $S(n)$ denote the statement
$$
S(n) : 3^n < n!.
$$
<strong>Base case ($n=7$):</strong> $S(7)$ says that $3^7 = 2187<5040=7!$, and this is true. </p>
<p><strong>Induction step:</strong> Fix some $k\geq 7$, and assume that $S(k)$ is true where
$$
S(k) : 3^k < k!
$$
To be shown is that $S(k+1)$ is true where
$$
S(k+1) : 3^{k+1} < (k+1)!
$$
Beginning with the left-hand side of $S(k+1)$,
\begin{align}
3^{k+1} &= 3^k\cdot3\tag{by definition}\\[0.5em]
&< k!\cdot 3\tag{by $S(k)$, the ind. hyp.}\\[0.5em]
&< k!\cdot (k+1)\tag{since $k\geq 7$}\\[0.5em]
&= (k+1)!,
\end{align}
we end up at the right-hand side of $S(k+1)$, completing the inductive step.</p>
<p>By mathematical induction, the statement $S(n)$ is true for all $n\geq 7$. $\blacksquare$</p>
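The statement is also easy to confirm by brute force for small $n$ (a quick check, not part of the proof itself):

```python
import math

# Verify the base case and the statement S(n): 3^n < n! for 7 <= n < 30.
assert 3**7 == 2187 and math.factorial(7) == 5040
for n in range(7, 30):
    assert 3**n < math.factorial(n)
print("S(n) holds for 7 <= n < 30")
```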
|
1,238,210 | <p>How can we evaluate $\lim _{x\rightarrow \infty }\int _0^x\:e^{t^2}\,dt$?</p>
<p>P.S.: This was my method: $\int _0^x\:e^{t^2}\,dt>\int _1^x\:e^t\,dt=e^x-e$ for $x>1$, which diverges. Your answers helped me think about it differently; maybe my method is useful elsewhere :D</p>
| robjohn | 13,854 | <p>$$
\begin{align}
\int_0^xe^{t^2}\,\mathrm{d}t
&\ge\int_0^x\frac tx\,e^{t^2}\,\mathrm{d}t\\
&=\frac1{2x}\left(e^{x^2}-1\right)
\end{align}
$$
As $x\to\infty$, the function on the right goes to $\infty$ extremely fast.</p>
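The lower bound can be checked numerically; the sketch below (not part of the original answer) compares a midpoint-rule approximation of the integral with the bound at $x=3$:

```python
import math

def integral_exp_t2(x, steps=200_000):
    """Midpoint-rule approximation of the integral of e^(t^2) over [0, x]."""
    h = x / steps
    return sum(math.exp(((i + 0.5) * h) ** 2) for i in range(steps)) * h

x = 3.0
bound = (math.exp(x**2) - 1) / (2 * x)  # the lower bound from the answer
approx = integral_exp_t2(x)
print(approx, bound)
```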
|
695,648 | <p>How can I count the numbers of $5$ digits such that at least one of the digits appears more than one time? </p>
<p>My thoughts are:<br>
I count all the possible numbers of $5$ digits: $10^5 = 100000$. Then, I subtract the numbers that don't have repeated digits, which I calculate this way: $10*9*8*7*6$ $= 30240 $. Thus, I have $100000 - 30240 = 69760 $ numbers that have at least one digit repeated more than one time. </p>
<p>Is this correct?</p>
| Priyanshu Kumar | 406,522 | <p>You're doing it wrong.</p>
<p>The total number of $5$-digit numbers possible is $9⋅10⋅10⋅10⋅10 = 90000$, because $0$ can't be placed first, else it would end up being a $4$-digit number.</p>
<p>Similarly, when no digit is repeated, it would be $9⋅9⋅8⋅7⋅6 = 27216$, as there is one option fewer for the first digit here too.</p>
<p>Hence, the answer is $90000-27216 = 62784$</p>
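A brute-force enumeration (an illustrative sketch, not from the original answer) confirms the count:

```python
# Count 5-digit numbers (10000..99999) with at least one repeated digit.
count = sum(1 for n in range(10_000, 100_000) if len(set(str(n))) < 5)
print(count)  # matches 90000 - 27216 = 62784
```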
|
208,615 | <p>Maybe I'm not using the best programming practices in Mathematica, but my notebooks usually contain a mixture of "definitions" and "computations" with these definitions. Say,</p>
<pre><code>f[x_] := x^2 (* Definition *)
f[10] (* Computation *)
g[x_] := f[x] - 1/f[x] (* Definition *)
g[100] (* Computation *)
</code></pre>
<p>Then each time I open the notebook anew, to ensure that all definitions are in place the easiest way is to evaluate everything. But some computations can be costly, and I do not need their results. </p>
<p>So, is there a way to only evaluate "definitions" or assignments in the notebook, and omit the "computations"? In the code sample above only 1st and 3rd lines must be evaluated, while <code>f[10]</code> and <code>g[100]</code> must be ignored.</p>
<p>Or, if the question does not make much sense, what would be good solution to the described problem? Keeping assignments and computations separately does not make much sense to me, commenting out all the computations make notebook look weird and if I do need them some time later, I'll have to do a lot of typographic work removing them.</p>
| Alexey Popkov | 280 | <blockquote>
<p>Then each time I open the notebook anew, to ensure that all definitions are in place the easiest way is to evaluate everything. But some computations can be costly, and I do not need their results.</p>
</blockquote>
<p>One way is to wrap every evaluation by <code>TimeConstrained</code> using <code>$Pre</code>:</p>
<pre><code>$Pre = Function[input, TimeConstrained[input, 0.001, Null], HoldAllComplete];
</code></pre>
<p>Now any time-consuming evaluations will be automatically aborted without generating any output, but simple and fast definitions will be evaluated as usual.</p>
<p>For clearing up the value of <code>$Pre</code> evaluate the following:</p>
<pre><code>$Pre =.
</code></pre>
<hr>
<blockquote>
<p>most of my code is pretty simple and I would be ok with evaluating all cells containing <code>=</code> or <code>:=</code></p>
</blockquote>
<p>You can test whether the input expression is an assignment(s), and ignore inputs containing anything else:</p>
<pre><code>$Pre = Function[input,
If[MatchQ[Unevaluated[input],
HoldPattern[(Set | SetDelayed)[_, _] |
CompoundExpression[(Set | SetDelayed)[_, _] .., Null ...]]], input],
HoldAllComplete];
</code></pre>
<p>For clearing up <code>$Pre</code> in this case one should evaluate the following <em>two-lines</em> input:</p>
<pre><code>$Pre = Identity;
Clear[$Pre]
</code></pre>
<hr>
<p>Of course there are other possible solutions, for example writing a function that will evaluate only the <code>Set</code> and <code>SetDelayed</code> statements using low-level Notebook programming. It even can be put as a <code>Button</code> in the Notebook or a Palette, or as a menu item with assigned keyboard shortkey. </p>
|
308,329 | <p>I need help with writing $\sin^4 \theta$ in terms of $\cos \theta, \cos 2\theta,\cos3\theta, \cos4\theta$.</p>
<p>My attempts so far has been unsuccessful and I constantly get developments that are way to cumbersome and not elegant at all. What is the best way to approach this problem?</p>
<p>I know that the answer should be:</p>
<p>$\sin^4 \theta =\frac{3}{8}-\frac{1}{2}\cos2\theta+\frac{1}{8}\cos4\theta$</p>
<p>Please explain how to do this.</p>
<p>Thank you!</p>
| Community | -1 | <p>By repeatedly applying the formulas
\begin{eqnarray*}
\sin(x)\sin(y)&=&{1\over2}\left[\cos(x-y)-\cos(x+y)\right]\\[5pt]
\sin(x)\cos(y)&=&{1\over2}\left[\sin(x-y)+\sin(x+y)\right]
\end{eqnarray*}
you will see how to write odd powers of sine as a linear combination of sines,
and even powers of sine as a linear combination of cosines.</p>
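The target identity $\sin^4\theta = \frac38 - \frac12\cos 2\theta + \frac18\cos 4\theta$ obtained this way can be spot-checked numerically (a small sketch, not part of the original answer):

```python
import math

# Verify the identity at a few sample angles.
for theta in [0.3, 1.1, 2.5]:
    lhs = math.sin(theta)**4
    rhs = 3/8 - math.cos(2*theta)/2 + math.cos(4*theta)/8
    assert abs(lhs - rhs) < 1e-12
print("identity verified at sample points")
```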
|
966,278 | <p>Given a recursive relation
$$a_n = \begin{cases}
(1 - 2b_n)a_{n-1} + b_n, & n > 1 \\
\frac{1}{2}, & n =1
\end{cases}
$$
how can I express $a_n$ in terms of $b_i$, $i \in \{1, 2, \dots, n\}$?</p>
| Claude Leibovici | 82,404 | <p>Consider the case of $n=2$ with $a_1=\frac{1}{2}$. $$a_2=(1 - 2b_2)a_1 + b_2=(1 - 2b_2)\frac{1}{2}+b_2=\frac{1}{2}$$ You can repeat that for ever and, whatever $b_n$ could be, all $a_n=\frac{1}{2}$.</p>
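The fixed-point observation is easy to confirm numerically with arbitrary $b_n$ (an illustrative sketch, not from the original answer):

```python
import random

# With a_1 = 1/2, the recursion a_n = (1 - 2*b_n)*a_{n-1} + b_n stays at 1/2
# for any b_n, since (1 - 2b)*(1/2) + b = 1/2 - b + b = 1/2.
random.seed(42)
a = 0.5
for _ in range(1000):
    b = random.random()  # arbitrary b_n
    a = (1 - 2*b) * a + b
print(a)
```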
|
2,464,890 | <p>Here is link to some limit questions:</p>
<p><a href="https://i.stack.imgur.com/2rM9f.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2rM9f.png" alt="Example" /></a>
Can anyone explain how these answers were derived? In (a), how can we cancel out <span class="math-container">$(x-2)$</span>? And how can the answer be 0? When <span class="math-container">$x\to 2$</span>, <span class="math-container">$x-2\to 0$</span>, so the answer should be infinity. Similarly in (b) the answer should be infinity. Can anyone explain?</p>
| Fimpellizzeri | 173,410 | <p>There's a factor of $(x-2)$ in the denominator, and two factors of $(x-2)$ in the numerator. The cancelling removes one factor of each. What remains is</p>
<p>$$\frac{x-2}{x+2}$$</p>
<p>As $x$ approaches $2$, the numerator approaches $0$ and the denominator approaches $4$. This is a well defined quotient: $0/4=0$, which gives you the answer.</p>
<hr>
<p>The key thing to note here is that limits consider what happens <strong>near</strong> $x=2$, but <strong>not</strong> <em>at</em> $x=2$. Near $x=2$, $(x-2)$ is not $0$ and hence the cancellation is valid.</p>
<p>This is also a good example that limits involving division by zero need not diverge to infinity. In fact, they can diverge to $-\infty$ or $+\infty$, converge to anything in between, or even have some other, more wild behavior.</p>
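A small numeric illustration (not part of the original answer) of the quotient near $x=2$:

```python
# (x-2)/(x+2) near x = 2: the numerator -> 0, the denominator -> 4,
# so the quotient -> 0/4 = 0.
for x in [2.1, 2.01, 2.001, 2.0001]:
    print(x, (x - 2) / (x + 2))
```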
|
819,054 | <p>Having the following inequality, for a real-valued function $f$ which is twice differentiable:</p>
<p>$f(a+h)-f(a)\geq f(a)-f(a-h)$ for any $a \in\mathbf{R}$, $h > 0$.</p>
<p>and assuming that $f$ is bounded, I see on a graph that $f$ is constant but I can't prove it properly.
I tried to prove that $f'(x)=0$ for all $x$ in the domain of $f$. I also tried to assume it wasn't true and use the Mean Value Theorem but it's not convincing, I see the inequality above looks like first derivatives but I can't use it...</p>
<p>Can you hint me please?</p>
<p>Thank you!</p>
| Siminore | 29,672 | <p>$$
\frac{f(a+h)-2f(a)+f(a-h)}{h^2} \xrightarrow{h \to 0}{\, f''(a)} \quad \text{(by L’Hôpital’s rule)}.
$$
Actually, your condition looks like a convexity condition at any point, and convex functions are either unbounded or constant on the whole $\mathbb{R}$.</p>
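The symmetric second-difference quotient in the display above can be illustrated numerically; the sketch below (not from the original answer) uses $f=\exp$, for which $f''(1)=e$:

```python
import math

# The symmetric second difference approximates f''(a); try f = exp at a = 1.
f = math.exp
a, h = 1.0, 1e-4
second_diff = (f(a + h) - 2 * f(a) + f(a - h)) / h**2
print(second_diff, math.e)
```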
|
819,054 | <p>Having the following inequality, for a real-valued function $f$ which is twice differentiable:</p>
<p>$f(a+h)-f(a)\geq f(a)-f(a-h)$ for any $a \in\mathbf{R}$, $h > 0$.</p>
<p>and assuming that $f$ is bounded, I see on a graph that $f$ is constant but I can't prove it properly.
I tried to prove that $f'(x)=0$ for all $x$ in the domain of $f$. I also tried to assume it wasn't true and use the Mean Value Theorem but it's not convincing, I see the inequality above looks like first derivatives but I can't use it...</p>
<p>Can you hint me please?</p>
<p>Thank you!</p>
| Community | -1 | <p>Show the contrapositive, that if $f$ is not constant but satisfies the given condition, then $f$ is not bounded.</p>
<p>If $f$ is not constant then there exist $p,q$ with, say, $p<q$ and $f(p)\ne f(q)$. I'll consider only the case $f(p)<f(q)$, leaving the case $f(p)>f(q)$ to you.</p>
<p>Set $x_k=p+k(q-p)$. Taking $a=x_k$ and $h=q-p$ in the hypothesis gives
$$ f(x_{k+1})-f(x_k) \ge f(x_k) - f(x_{k-1}) $$
By induction, if $k\ge 0$ then
$$ f(x_{k+1})-f(x_k) \ge f(x_1) - f(x_0) = f(q)-f(p) $$
Summing over $k$ from $0$ to $n-1$ gives
$$ f(x_n) \ge f(p) + n(f(q)-f(p)) $$
Since $f(q)-f(p)>0$ in the case under consideration, the RHS grows arbitrarily large as $n$ increases, showing that $f$ is not bounded.</p>
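The telescoping argument can be illustrated concretely with $f(x)=x^2$, which is nonconstant and satisfies the hypothesis (a sketch, not part of the proof):

```python
# f(x) = x^2 satisfies f(a+h) - f(a) >= f(a) - f(a-h), and is unbounded.
def f(x):
    return x * x

p, q = 0.0, 1.0
xs = [p + k * (q - p) for k in range(11)]            # x_k = p + k(q - p)
incs = [f(xs[k + 1]) - f(xs[k]) for k in range(10)]  # successive increments

# Increments are nondecreasing, as the induction step claims.
assert all(incs[k] <= incs[k + 1] for k in range(9))
# f(x_n) >= f(p) + n*(f(q) - f(p)), as obtained by summing over k.
n = 10
assert f(xs[n]) >= f(p) + n * (f(q) - f(p))
print(f(xs[n]), f(p) + n * (f(q) - f(p)))
```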
|
1,807,456 | <p>While reading a physics paper, I came across the integral $$\int_{0}^{\infty}\frac{du}{\sqrt{4\pi u^{3}}}\left(1-\frac{e^{-\Omega u}}{\sqrt{\frac{1-\exp\left(-2u\right)}{2u}}}\right),$$
whose asymptotic behaviour I do not know. The paper states that the integral behaves like $\log\left(\pi\Omega B\right)/\sqrt{2\pi}$ as $\Omega\to 0$. But most references on asymptotic behaviour discuss Laplace's method, which does not match this integral. How can I obtain the asymptotic behaviour of this integral?</p>
<p>Thank you for reading.</p>
| Jack D'Aurizio | 44,121 | <p>We may replace $\sqrt{\frac{2u}{1-e^{-2u}}}$ with $\sqrt{2u+1}$ and compute the resulting integral in terms of the <a href="https://en.wikipedia.org/wiki/Confluent_hypergeometric_function" rel="nofollow">Tricomi $U$ function</a>
$$ \sqrt{2}\cdot U\left(-\frac{1}{2},1,\frac{\Omega}{2}\right)\approx \frac{1}{\sqrt{2\pi}}\,\log\left(\Omega\cdot\frac{1}{2}e^{2\gamma+\psi\left(-\frac{1}{2}\right)}\right)$$
whose (logarithmic) asymptotic behavior as $\Omega\to 0^+$ is known by Kummer's differential equation.</p>
|
3,425,369 | <p>I'm taking a linear algebra course and I'm having trouble proving linear (in)dependence of functions. I understand that I have to prove that the <span class="math-container">$a_1f(x) + a_2g(x) = 0$</span> but I don't know how to actually do that. For example given a pair of functions 1 and t, how do you prove linear independence?</p>
| Brian Moehring | 694,754 | <p>Note that we don't <em>prove</em> <span class="math-container">$a_1f(x) + a_2g(x) = 0.$</span> We <em>assume</em> that and then prove that <span class="math-container">$a_1=a_2=0.$</span></p>
<p>Since functions are defined by their values, one way you can do this is to choose certain <span class="math-container">$x$</span>-values at which we evaluate the functions. Each <span class="math-container">$x$</span>-value gives another equation with the two unknowns <span class="math-container">$a_1, a_2$</span> so it should suffice to choose two values of <span class="math-container">$x$</span></p>
<p>In your particular example choosing <span class="math-container">$t=0$</span> and <span class="math-container">$t=1$</span> gives <span class="math-container">$$a_1=0 \\ a_1 + a_2 = 0$$</span> which we solve to get <span class="math-container">$a_1=a_2=0$</span> which finishes the proof. </p>
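The evaluation trick amounts to checking that the resulting $2\times 2$ system is invertible (an illustrative sketch, not from the original answer):

```python
# Evaluating a1*f(t) + a2*g(t) = 0 with f(t) = 1, g(t) = t at t = 0 and t = 1
# gives the system a1 = 0 and a1 + a2 = 0; its coefficient matrix is invertible.
rows = [[1, 0],   # [f(0), g(0)]
        [1, 1]]   # [f(1), g(1)]
det = rows[0][0] * rows[1][1] - rows[0][1] * rows[1][0]
print(det)  # nonzero, so a1 = a2 = 0 is the only solution
```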
|
3,839,129 | <p>I think I was able to find the answer from some guesswork, but I'm unsure of how to prove the result.</p>
<p>If <span class="math-container">$n=5$</span>, then there are <span class="math-container">$2^5=32$</span> total subsets by the power set rule.</p>
<p>The subsets that contain <span class="math-container">$1$</span> and <span class="math-container">$2$</span> are <span class="math-container">$\{1,2\}$</span>, <span class="math-container">$\{1,2,3\}$</span>, <span class="math-container">$\{1,2,3,4\}$</span>, <span class="math-container">$\{1,2,3,4,5\}$</span>, <span class="math-container">$\{1,2,4\}$</span>, <span class="math-container">$\{1,2,4,5\}$</span>, <span class="math-container">$\{1,2,5\}$</span>, <span class="math-container">$\{1,2,3,5\}$</span>.</p>
<p>There are <span class="math-container">$8$</span> subsets listed here, and I know that <span class="math-container">$8=2^3$</span>. I also noticed that <span class="math-container">$2^3=2^{5-2}$</span>.</p>
<p>Therefore, for any <span class="math-container">$n \in \mathbb{N}$</span>, there would be <span class="math-container">$2^{n-2}$</span> subsets that contain <span class="math-container">$1$</span> and <span class="math-container">$2$</span>.</p>
<p>For a set with <span class="math-container">$n$</span> elements, there would be <span class="math-container">$2^n$</span> subsets. To prove my formula rigorously, I know that I cannot just list out the subsets, so now I'm stuck.</p>
<p>Note: my question might be similar to <a href="https://math.stackexchange.com/questions/904805/how-many-subsets-of-1-2-n-contain-1-and-how-many-dont">this one</a>.</p>
| Brian M. Scott | 12,042 | <p>There is a bijection <span class="math-container">$\varphi$</span> between subsets of <span class="math-container">$A$</span> containing <span class="math-container">$1$</span> and <span class="math-container">$2$</span> and subsets of <span class="math-container">$B=\{3,4,\ldots,n\}$</span>: if <span class="math-container">$1,2\in X\subseteq A$</span>, <span class="math-container">$\varphi(X)=X\setminus\{1,2\}$</span>. <span class="math-container">$B$</span> has <span class="math-container">$n-2$</span> elements, so it has <span class="math-container">$2^{n-2}$</span> subsets, and <span class="math-container">$A$</span> therefore has <span class="math-container">$2^{n-2}$</span> subsets that contain <span class="math-container">$1$</span> and <span class="math-container">$2$</span>.</p>
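The bijection can be confirmed by enumeration for $n=5$ (a quick check, not part of the original answer):

```python
from itertools import combinations

n = 5
elements = range(1, n + 1)
subsets = [set(c) for r in range(n + 1) for c in combinations(elements, r)]
with_1_and_2 = [s for s in subsets if {1, 2} <= s]
print(len(subsets), len(with_1_and_2))  # 2^n and 2^(n-2)
```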
|
3,839,129 | <p>I think I was able to find the answer from some guesswork, but I'm unsure of how to prove the result.</p>
<p>If <span class="math-container">$n=5$</span>, then there are <span class="math-container">$2^5=32$</span> total subsets by the power set rule.</p>
<p>The subsets that contain <span class="math-container">$1$</span> and <span class="math-container">$2$</span> are <span class="math-container">$\{1,2\}$</span>, <span class="math-container">$\{1,2,3\}$</span>, <span class="math-container">$\{1,2,3,4\}$</span>, <span class="math-container">$\{1,2,3,4,5\}$</span>, <span class="math-container">$\{1,2,4\}$</span>, <span class="math-container">$\{1,2,4,5\}$</span>, <span class="math-container">$\{1,2,5\}$</span>, <span class="math-container">$\{1,2,3,5\}$</span>.</p>
<p>There are <span class="math-container">$8$</span> subsets listed here, and I know that <span class="math-container">$8=2^3$</span>. I also noticed that <span class="math-container">$2^3=2^{5-2}$</span>.</p>
<p>Therefore, for any <span class="math-container">$n \in \mathbb{N}$</span>, there would be <span class="math-container">$2^{n-2}$</span> subsets that contain <span class="math-container">$1$</span> and <span class="math-container">$2$</span>.</p>
<p>For a set with <span class="math-container">$n$</span> elements, there would be <span class="math-container">$2^n$</span> subsets. To prove my formula rigorously, I know that I cannot just list out the subsets, so now I'm stuck.</p>
<p>Note: my question might be similar to <a href="https://math.stackexchange.com/questions/904805/how-many-subsets-of-1-2-n-contain-1-and-how-many-dont">this one</a>.</p>
| nonuser | 463,553 | <p>You just have to ''add'' some subset of <span class="math-container">$\{3,4,5...,n\}$</span> to the set <span class="math-container">$\{1,2\}$</span>. But the number of such subsets is exactly <span class="math-container">$2^{n-2}$</span> and you are done.</p>
|
2,342,124 | <p><a href="https://i.stack.imgur.com/QdbFG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QdbFG.png" alt="enter image description here"></a></p>
<p>Well this seems like <span class="math-container">$1-|t|$</span> for <span class="math-container">$|t|<1$</span> and <span class="math-container">$0$</span> for <span class="math-container">$|t|>1$</span> . Taking the Fourier transform <span class="math-container">$$X(ω) = \int_{-\infty}^\infty(1-|t|)e^{-jωt}dt\\=\int_{-\infty}^\infty e^{-jωt}dt -\int_{-\infty}^\infty|t|e^{-jωt}dt\\=2\piδ(ω)-\int_{-1}^1|t|e^{-jωt}dt\\=2πδ(ω)-\int_{-1}^0-te^{-jωt}dt-\int_{0}^1te^{-jωt}dt$$</span></p>
<p>The integral over all of $\mathbb R$ reduces to one from $-1$ to $1$ since the function is zero elsewhere. Now I'm having trouble continuing with this. It doesn't seem to give me the result in the problem's solution, which is $\operatorname{sinc}^2(\frac{\omega}{2\pi})$, the sampling function.</p>
| Robert Z | 299,698 | <p>Yes, you are correct. Since the integrand depends only on $r$, it is constant on each spherical layer and
$$\iiint_C \sqrt{x^2+y^2+z^2}\, dx \, dy \, dz=\int_{r=0}^1 r\cdot (\mbox{surface area of the sphere of radius $r$})\, dr\\=
\int_{r=0}^1 r\cdot (4\pi r^2)\, dr=4\pi\cdot\left[\frac{r^4}{4}\right]_0^1=\pi.$$</p>
|
1,156,738 | <p>Is it fine to say "Groups $A$ and $B$ are isomorphic." or should one say "Groups $A$ and $B$ are isomorphic to each other."?</p>
| kolonel | 104,564 | <p>It is the same thing. An isomorphism between $A$ and $B$ implies that your found a bijection between $A$ and $B$ that is also a homomorphism, which remains clear in both statements.</p>
|
3,186,627 | <p>Proposition: Let A be a subset of R which is bounded below. Let B be a subset of R which is bounded above. If <span class="math-container">$\inf(A) < \sup (B) $</span> then there is some <span class="math-container">$a \in A$</span> and <span class="math-container">$b \in B$</span> such that <span class="math-container">$a<b$</span>.</p>
<p>Proof: </p>
<p>Let <span class="math-container">$\inf(A) = I$</span> and <span class="math-container">$\sup(B) = S$</span> </p>
<p>So <span class="math-container">$I<S$</span></p>
<p>By definition, <span class="math-container">$\forall a \in A, I\leq a$</span> and <span class="math-container">$\forall b \in B, b\leq S$</span> </p>
<p>Case 1: If I <span class="math-container">$\in A$</span> and S <span class="math-container">$\in B$</span>, we are done.</p>
<p>Case 2: If I <span class="math-container">$\notin A$</span> and S <span class="math-container">$\notin B$</span>, then assume BWOC <span class="math-container">$\nexists a \in A $</span> and <span class="math-container">$b\in B$</span> such that <span class="math-container">$a<b$</span></p>
<p>So <span class="math-container">$\forall a \in A$</span> and <span class="math-container">$\forall b \in B, b \leq a$</span></p>
<p>So <span class="math-container">$\forall a \in A$</span>, <span class="math-container">$a$</span> is an upper bound for B.</p>
<p>But since S is the least upper bound for B, <span class="math-container">$b\leq S<a$</span></p>
<p><span class="math-container">$S<a$</span></p>
<p>So <span class="math-container">$S$</span> is also a lower bound for <span class="math-container">$A.$</span></p>
<p>But since <span class="math-container">$I$</span> is the greatest lower bound for <span class="math-container">$A$</span>, <span class="math-container">$S<I$</span>, which is a contradiction.</p>
<p>Is my proof valid? If not, what am I missing? Would I have to show that this holds for other cases as well? </p>
| Marco Vergamini | 661,708 | <p>Your proof works, but there is a little flaw: you're not considering the case <span class="math-container">$I \in A, S \not\in B$</span> or vice versa. In fact, dividing into cases is unnecessary, because what you do in the last case can be adapted to work in every case: simply change <span class="math-container">$S<a, S<I$</span> into <span class="math-container">$S \le a, S \le I$</span> and you'll obtain a contradiction anyway.</p>
|
4,041,641 | <p>This is the answer given:</p>
<p><a href="https://i.stack.imgur.com/6dyU4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6dyU4.png" alt="enter image description here" /></a></p>
<p>I think this answer is not correct because the line given by y=-2x/3 makes an angle that is below the x axis, so the order is incorrect right? We need to start with a counterclockwise rotation first and end with the clockwise rotation, but here, we started with the clockwise rotation first.</p>
| Sandipan Dey | 370,330 | <p>Here the slope of the line = <span class="math-container">$m = tan \gamma = -\frac{2}{3}$</span>.</p>
<p><span class="math-container">$\therefore sin \gamma = \frac{m}{\sqrt{1+m^2}},\; cos \gamma = \frac{1}{\sqrt{1+m^2}}$</span>.</p>
<p>The transformation matrix for reflection around the line is given by the product of 3 matrices (rotate by angle -<span class="math-container">$\gamma$</span> to make it horizontal, reflect w.r.t the horizontal line and rotate back by angle <span class="math-container">$\gamma$</span>)</p>
<p><span class="math-container">\begin{align}T&=T_{\gamma}.R_0.T_{-\gamma}\\
&=\begin{pmatrix}cos \gamma & -sin\gamma\\ sin\gamma & cos \gamma\end{pmatrix}
\begin{pmatrix}1&0\\ 0 & -1\end{pmatrix}
\begin{pmatrix}cos \gamma & sin\gamma\\ -sin\gamma & cos \gamma\end{pmatrix} \\
&=\begin{pmatrix}\frac{1}{\sqrt{1+m^2}} & -\frac{m}{\sqrt{1+m^2}}\\ \frac{m}{\sqrt{1+m^2}} & \frac{1}{\sqrt{1+m^2}}\end{pmatrix}
\begin{pmatrix}1&0\\ 0 & -1\end{pmatrix}
\begin{pmatrix}\frac{1}{\sqrt{1+m^2}} & \frac{m}{\sqrt{1+m^2}}\\ -\frac{m}{\sqrt{1+m^2}} & \frac{1}{\sqrt{1+m^2}}\end{pmatrix} \\
&=\begin{pmatrix}\frac{1}{\sqrt{1+m^2}} & -\frac{m}{\sqrt{1+m^2}}\\ \frac{m}{\sqrt{1+m^2}} & \frac{1}{\sqrt{1+m^2}}\end{pmatrix}
\begin{pmatrix}\frac{1}{\sqrt{1+m^2}} & \frac{m}{\sqrt{1+m^2}}\\ \frac{m}{\sqrt{1+m^2}} & -\frac{1}{\sqrt{1+m^2}}\end{pmatrix} \\
&=\frac{1}{1 + m^2}\begin{pmatrix}1-m^2&2m\\2m&m^2-1\end{pmatrix} \\
&=\begin{pmatrix}\frac{5}{13}&-\frac{12}{13}\\ -\frac{12}{13}&-\frac{5}{13}\end{pmatrix}
\end{align}</span></p>
<p>The following example in R shows how the black point (1,2) is reflected w.r.t. the line to its new location (red point)</p>
<pre><code>x <- seq(-10,10,0.01)
y <- -2*x/3
m <- -2/3
T <- matrix(c((1-m^2)/(1+m^2),2*m/(1+m^2),2*m/(1+m^2),(m^2-1)/(1+m^2)), nrow=2, byrow=T)
p <- c(1,2)
p1 <- T %*% p
plot(x,y, type='l')
points(p[1],p[2],pch=19)
points(p1[1],p1[2],pch=19, col='red')
</code></pre>
<p><a href="https://i.stack.imgur.com/J8mWF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/J8mWF.png" alt="enter image description here" /></a></p>
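The same verification can be done without R; the Python sketch below (not part of the original answer) builds $T$ from the formula derived above and checks that it is an involution and fixes points on the line:

```python
m = -2 / 3  # slope of y = -2x/3
d = 1 + m * m
T = [[(1 - m * m) / d, 2 * m / d],
     [2 * m / d, (m * m - 1) / d]]

def apply(M, v):
    """Multiply a 2x2 matrix by a 2-vector."""
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

# A reflection is its own inverse: T(T(v)) = v.
v = [1.0, 2.0]
w = apply(T, apply(T, v))
# Points on the line are fixed: (3, -2) lies on y = -2x/3.
p = apply(T, [3.0, -2.0])
print(T[0][0], w, p)
```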
|
285,034 | <p>I currently don't see how to solve the following integral:</p>
<p>$$\int_{-1/2}^{1/2} \cos(x)\ln\left(\frac{1+x}{1-x}\right) \,dx$$</p>
<p>I tried to solve it with integration by parts and with a Taylor series, but nothing did help me so far.</p>
| nbubis | 28,743 | <p>You can't express the indefinite integral using elementary functions. It can, however, be evaluated by parts if you use $\mathrm{Ci}(x)$ - the <a href="http://mathworld.wolfram.com/CosineIntegral.html" rel="nofollow">Cosine Integral</a>. The actual value, as @Psx mentioned, is of course zero, since the integrand is odd.</p>
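The oddness claim can be checked numerically; the midpoint-rule sketch below (not part of the original answer) uses nodes symmetric about $0$, whose contributions cancel pairwise:

```python
import math

# Midpoint-rule approximation over [-1/2, 1/2]; cos(x)*log((1+x)/(1-x)) is odd,
# so symmetric nodes cancel and the total is ~0 up to rounding.
N = 100_000
h = 1.0 / N
total = 0.0
for i in range(N):
    x = -0.5 + (i + 0.5) * h
    total += math.cos(x) * math.log((1 + x) / (1 - x)) * h
print(total)
```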
|
658,078 | <p>I'm embarrassed to ask this question, but my child has the following homework question:</p>
<p>"Use absolute value to describe the relationship between a negative credit card balance and the amount owed."</p>
<p>I'm not sure what it is they're looking for. Clearly a <code>-$25</code> balance means you have <code>$25</code> credit. However, the absolute value of <code>-$25</code> is <code>$25</code>, and positive balances are money you owe.</p>
<p>Is there a simple formula describing this relationship? </p>
| bsdshell | 54,105 | <p>$$
\text{Given } e = -5\text{; we only show the right inverse.}\\
\text{Let } b \text{ be the right inverse of } a.\\
\Rightarrow a*b=e\\
\Rightarrow a + b + 5 = e = -5\\
\Rightarrow a + b = -5 - 5\\
\Rightarrow a + b + (-a) = -5-5+(-a)\quad(\because \text{we can always add an integer to both sides})\\
\Rightarrow b = -a -10\quad(\because \text{addition is commutative and } a+(-a)=0)\\
\Rightarrow b = -a - 10 \text{ is the right inverse of } a
$$</p>
|
658,078 | <p>I'm embarrassed to ask this question, but my child has the following homework question:</p>
<p>"Use absolute value to describe the relationship between a negative credit card balance and the amount owed."</p>
<p>I'm not sure what it is they're looking for. Clearly a <code>-$25</code> balance means you have <code>$25</code> credit. However, the absolute value of <code>-$25</code> is <code>$25</code>, and positive balances are money you owe.</p>
<p>Is there a simple formula describing this relationship? </p>
| Kevin Arlin | 31,228 | <p>Here's a restatement of what's going on here: we have a function $\varphi(n)=n+5$ from $\mathbb{Z}$ to itself and want to check that $n*m=\varphi(n+m)$ is also a group structure. So, it may leave the waters less muddy if we state the problem more generally: given a set $S$ with a group structure $(*,e)$ and a function $\varphi:S\to S,$ when is the function $*':(s,t)\mapsto \varphi(s*t)$ also a group operation on $S$?</p>
<p>Well, a group needs an identity. An identity for our possible operation $*'$ would be $e'\in S$ such that $s*e'=e'*s=\varphi^{-1}(s)$ for every $s\in S$. So $\varphi$ had better be a bijection. Considering $s=e$ shows $e'$ could only be $\varphi^{-1}(e)$. We've also shown that $e'$ has to be in the <em>center</em> of $S$, that is, commute with all $s\in S$. To narrow down what $\varphi$ looks like even further, compute $\varphi(a)=\varphi(ae'^{-1}e')=\varphi(\varphi^{-1}(ae'^{-1}))=ae'^{-1}$. So $\varphi$ is just right multiplication by $e'^{-1}$ (equivalently, left multiplication, since $e'$ is in the center of $S$).</p>
<p>So we've seen it's necessary for $*'$ to be a group operation that $\varphi$ be multiplication by an element of the center of $S$. That's actually all we need: let's check. We already required $e'$ was an identity for $*'$ when we introduced it. How about inverses? Given $s$ we want $t$ so that $s*'t=s*t*e'^{-1}=e'=t*s*e'^{-1}=t*'s$. This yields $t=s^{-1}*e'*e'=e'*s^{-1}*e'$. These two expressions for $t$ are equal since $e'$ is in the center of $S$ and determine $t$ uniquely as the inverse of $s$ for $*'$. Finally for associativity we must have
$$(r*'s)*'t=(r*s*e'^{-1})*'t=r*s*e'^{-1}*t*e'^{-1}=r*s*t*e'^{-1}*e'^{-1}=r*'(s*t*e'^{-1})=r*'(s*'t)$$</p>
<p>And we're saved yet again by the condition that $e'$ be central, which makes the third equality hold.</p>
<p>So we've seen that the functions $\varphi:S\to S$ such that $\varphi(s*t)$ is a group multiplication are exactly those given by multiplication by elements of the center of $S$. </p>
|
2,130,141 | <p>I am having trouble with the following exercise:</p>
<p>Prove or disprove: $P(A\times B) = Q$, where $Q = \lbrace V\times W \ \vert \ V\in P(A), W\in P(B)\rbrace$.</p>
<p>I know that $P(A\times B) \neq Q$; specifically, $P(A\times B) \not\subset Q$ while $Q \subset P(A\times B)$. In addition:</p>
<p>$\supseteq \rbrack \ X\in Q \rightarrow X = V\times W$ for some $V\in P(A)$, $W\in P(B)$; but $V\subset A$ and $W\subset B$, so $X\subset A\times B \rightarrow X\in P(A\times B)$.</p>
<p>But I am not able to disprove $\subseteq \rbrack$. I know they have different sizes, but I want to give a formal disproof.</p>
<p>I am sorry for grammar mistakes, but English is not my native language. </p>
<p>Kind regards, </p>
<p>Phi.</p>
| Martin Argerami | 22,857 | <p>Any topology arising from a metric is Hausdorff. But with $\{\emptyset,M\} $ you cannot separate $a $ and $b $.</p>
|
3,878,723 | <blockquote>
<p>Find the value of <span class="math-container">$k$</span> if the curve <span class="math-container">$y = x^2 - 2x$</span> is tangent to the line <span class="math-container">$y = 4x + k$</span></p>
</blockquote>
<p>I have looked at the solution to this question and the first step is the "equate the two functions":<br />
<span class="math-container">$ x^2 - 2x = 4x + k$</span></p>
<p>Why? How does that help solve the equation? And how can I use what I get from equating the two functions to find the solution?</p>
| JC12 | 736,604 | <p>If you want the line to be tangent to the curve, you only want them to touch at one point, meaning there is only one solution to:</p>
<p><span class="math-container">$$x^2-2x=4x+k$$</span></p>
<p>Which simplifies to:</p>
<p><span class="math-container">$$x^2-6x-k=0$$</span></p>
<p>Note that the above only has one solution if the discriminant is equal to <span class="math-container">$0$</span>. Can you then solve for <span class="math-container">$k$</span>?</p>
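As a sanity check of the discriminant route (a sketch, not part of the original answer): the discriminant of $x^2-6x-k$ is $36+4k$, which vanishes at $k=-9$, and then the line indeed meets the parabola only at the double root $x=3$:

```python
# Discriminant of x^2 - 6x - k is 36 + 4k; tangency forces it to vanish.
k = -9
disc = 36 + 4 * k
assert disc == 0

x = 3                                 # the double root of x^2 - 6x + 9
assert x ** 2 - 2 * x == 4 * x + k    # the curve and the line meet here
```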
|
317,547 | <p>Let $X$ and $Y$ have joint mass function</p>
<p>$f(j,k)=\frac {c(j+k)a^{j+k}}{j!k!}$, $j,k\geq 0$</p>
<p>where $a$ is a constant. Find $c$</p>
<p>This sum seems hard to do. How can I complete this sum?</p>
| Coiacy | 62,460 | <p>This is nothing but some calculation. Fix $k$ first:</p>
<p>When $k=0,\ \sum_{j=0}^{\infty} \frac{c(j+k)\cdot a^{j+k}}{j!\cdot k!}=\sum_{j=0}^{\infty} \frac{c\cdot j\cdot a^{j}}{j!}=ca\cdot e^a$</p>
<p>When $k\ge 1$,
$$ \sum_{j=0}^{\infty} \frac{c(j+k)\cdot a^{j+k}}{j!\cdot k!}=\sum_{j=0}^{\infty} \frac{c\cdot j\cdot a^{j+k}}{j!\cdot k!}+\sum_{j=0}^{\infty} \frac{c\cdot k\cdot a^{j+k}}{j!\cdot k!}=\sum_{j=1}^{\infty} \frac{a^{j-1}}{(j-1)!}\cdot \frac{c\cdot a^{k+1}}{k!}+\sum_{j=0}^{\infty} \frac{a^{j}}{j!}\cdot \frac{c\cdot a^k}{(k-1)!}=ac\cdot e^a\cdot \frac{a^k}{k!}+ac\cdot e^a\cdot \frac{a^{k-1}}{(k-1)!}
$$
$$
\sum_{k=1}^{\infty}(ac\cdot e^a\cdot \frac{a^k}{k!}+ac\cdot e^a\cdot \frac{a^{k-1}}{(k-1)!})=ac\cdot e^a \sum_{k=1}^{\infty} \big( \frac{a^k}{k!}+\frac{a^{k-1}}{(k-1)!}\big)=ac\cdot e^a\big((e^a-1)+e^a\big)=ac\cdot e^a(2e^a-1)
$$
Hence, $$\sum_{k=0}^{\infty}\sum_{j=0}^{\infty}\frac{c(j+k)\cdot a^{j+k}}{j!\cdot k!}=c\cdot2ae^{2a}=1\quad \Rightarrow\quad c=\big(2a\cdot e^{2a}\big)^{-1}$$
I hope the computation above is correct.</p>
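It is. A numerical sanity check of the final value (Python, truncating the double series; the choice $a=0.7$ is arbitrary):

```python
import math

# With c = 1/(2a e^{2a}), the masses f(j,k) = c (j+k) a^{j+k} / (j! k!)
# should sum to 1; truncating at j, k < 60 leaves a negligible tail.
a = 0.7
c = 1 / (2 * a * math.exp(2 * a))
total = sum(c * (j + k) * a ** (j + k) / (math.factorial(j) * math.factorial(k))
            for j in range(60) for k in range(60))
assert abs(total - 1) < 1e-9
```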
|
<p>I've recently obtained my University entrance papers from 1967 (yes, 52 years ago!) and I found the question below difficult. I presume the answer is a symmetric expression in the differences between alpha, beta and gamma. Am I missing some obvious trick? Any help would be appreciated.</p>
<p>Simplify and evaluate the determinant
<a href="https://i.stack.imgur.com/Dfft4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Dfft4.png" alt="enter image description here"></a></p>
<p>and show that its value is independent of theta.</p>
| Root | 466,361 | <p>Your proof of a) is not right. Note that an interval's size tending to zero is different from its being zero.
For the empty set, a) is vacuously true; to put it differently, there is no $x$, so the 'for all $x$' statement always holds. For $\mathbb{R}$ it is clear, since for any real number an interval of size 1 around it is contained in $\mathbb{R}$. </p>
|
145,785 | <p>Let $V$ be a $\mathbb{C}$-vector space of finite dimension. Denote its $d$-th symmetric power by $V^{\odot d}$. I am looking for a proof that $V^{\odot d}$ is generated by the elements $v^{\odot d}$ for $v\in V$. </p>
<p>A different way to look at it is the following: Consider the polynomial ring $R=\mathbb{C}[x_1,\ldots,x_n]$ and $f$ a homogeneous polynomial of degree $d$: Then, I want to show that there are linear polynomials $h_1,\ldots,h_k$ such that $f$ is a linear combination of the $d$-th powers $h_i^d$.</p>
<p>In the case $d=2$, this follows from $2xy = (x+y)^2 - x^2 - y^2$. For higher $d$, I recall seeing a proof involving <a href="http://en.wikipedia.org/wiki/Multinomial_coefficient">multinomial coefficients</a> once, but I do not remember the details. I have tried to work it out again, but it seems a bit cumbersome, so I am asking whether you know any textbook where this result is proved. If you know an easy proof, I'd be very happy if you could outline it, though.</p>
| user8268 | 8,268 | <p>Let $W$ be the (finite-dimensional) vector space generated by $d$-th powers of linear functions in $x_1,\dots,x_n$. Let $h_1,\dots,h_d$ be such linear functions. Consider the polynomial map $f:\mathbb{C}^d\to W$ given by $f(t_1,\dots,t_d)=(t_1h_1+\dots+t_d h_d)^d$. As
$$\frac{\partial}{\partial t_1}\cdots\frac{\partial}{\partial t_d}f=d!\;h_1\dots h_d,$$
we have $h_1\dots h_d\in W$. This shows that any degree-$d$ monomial is in $W$, and therefore also any homogeneous degree-$d$ polynomial.</p>
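To make the polarization identity behind this argument concrete, here is a sketch for $d=3$: the monomial $6xyz$ is an alternating (inclusion-exclusion) sum of cubes of linear forms. Checking it exactly on a $7\times7\times7$ integer grid is enough to certify a degree-3 polynomial identity:

```python
import itertools

# Polarization for d = 3: 6xyz as a signed sum of cubes of linear forms.
def poly_id(x, y, z):
    lhs = 6 * x * y * z
    rhs = ((x + y + z) ** 3 - (x + y) ** 3 - (x + z) ** 3 - (y + z) ** 3
           + x ** 3 + y ** 3 + z ** 3)
    return lhs == rhs

assert all(poly_id(*p) for p in itertools.product(range(-3, 4), repeat=3))
```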
|
1,772,562 | <p>Let $f:\mathbb{C}\rightarrow\mathbb{C}$ be holomorphic. If we have $|f(z)|\leq|z|^n$ for some $n\in\mathbb{N}$ and all $z\in\mathbb{C}$, then $f$ is a polynomial.</p>
<p>I tried to apply Liouville's theorem but it does not help.</p>
<p>Thanks for your help.</p>
| lhf | 589 | <p>$|f(z)|\leq|z|^n$ implies $f(0)=0$.</p>
<p>Writing $f(z)=z^m g(z)$ with $g(0)\ne0$ implies $m \ge n$ and so $|z^{m-n}g(z)| \le 1$.</p>
<p>Now apply Liouville's theorem.</p>
|
3,075,869 | <p>The following is quoted from <a href="https://en.wikipedia.org/wiki/Quotient_space_(topology)" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Quotient_space_(topology)</a></p>
<p>Quotient maps q : X → Y are characterized among surjective maps by the following property: if Z is any topological space and f : Y → Z is any function, then f is continuous if and only if f ∘ q is continuous.</p>
<p>Assuming that <span class="math-container">$q$</span> is a quotient map, I can prove the characterizing property using the definition of a quotient map, namely, <span class="math-container">$q^{-1}U$</span> is open if and only if <span class="math-container">$U$</span> is open. However, I could not see how to prove the converse: how does the property imply that <span class="math-container">$q$</span> is quotient?</p>
| drhab | 75,923 | <p>Let <span class="math-container">$\tau $</span> denote the quotient topology on <span class="math-container">$Y $</span> induced by <span class="math-container">$q $</span>.</p>
<p>Let <span class="math-container">$\rho $</span> denote a topology on <span class="math-container">$Y $</span> such that the characteristic property is satisfied. </p>
<p>Then we have actually two identity functions <span class="math-container">$Y\to Y $</span>: </p>
<p><span class="math-container">$\mathsf{id}:(Y,\tau)\to(Y,\rho)$</span> and <span class="math-container">$\mathsf{id}:(Y,\rho)\to(Y,\tau)$</span></p>
<p>Both can be shown to be continuous.</p>
<p>This implies that <span class="math-container">$\tau=\rho $</span>.</p>
<hr>
<p><strong>edit</strong> (for further explaining)</p>
<p>In your question you mentioned that it was clear to you that the quotient topology induced by <span class="math-container">$q$</span> satisfies the characteristic property. </p>
<p>Then in order to show that this property is indeed "characteristic" it is enough to prove that the quotient topology is unique in this.</p>
<p>To do so I put forward a topology <span class="math-container">$\rho$</span> on <span class="math-container">$Y$</span> such that the property holds. </p>
<p>How it is defined does not really matter, and it is enough now to prove that this topology <span class="math-container">$\rho$</span> <strong>must be the same</strong> as quotient topology <span class="math-container">$\tau$</span>.</p>
<p>The proof written out goes like this:</p>
<p>Function <span class="math-container">$q:X\to Y$</span> is continuous if <span class="math-container">$Y$</span> is equipped with the quotient
topology <span class="math-container">$\tau$</span>. For clarity let us denote here <span class="math-container">$q_{1}:\left(X,\tau_{X}\right)\to\left(Y,\tau\right)$</span>
where <span class="math-container">$\tau_{X}$</span> denotes the topology on <span class="math-container">$X$</span>.</p>
<p>Function <span class="math-container">$q:X\to Y$</span> is continuous if <span class="math-container">$Y$</span> is equipped with the quotient
topology <span class="math-container">$\rho$</span>. For clarity let us denote here <span class="math-container">$q_{2}:\left(X,\tau_{X}\right)\to\left(Y,\rho\right)$</span>
where <span class="math-container">$\tau_{X}$</span> denotes the topology on <span class="math-container">$X$</span>.</p>
<p>Be aware that <span class="math-container">$q_{1}$</span> and <span class="math-container">$q_{2}$</span> are both the function <span class="math-container">$q$</span>. The
thing that is different is the topology on its codomain <span class="math-container">$Y$</span>.</p>
<p>Both setups carry the characteristic property.</p>
<p>Likewise we have functions <span class="math-container">$\mathsf{id}_{1}:\left(Y,\tau\right)\to\left(Y,\rho\right)$</span>
and <span class="math-container">$\mathsf{id}_{2}:\left(Y,\rho\right)\to\left(Y,\tau\right)$</span> both
prescribed by <span class="math-container">$y\mapsto y$</span> on set <span class="math-container">$Y$</span>.</p>
<p>Then <span class="math-container">$\mathsf{id}_{1}\circ q_{1}=q_{2}$</span> so <span class="math-container">$\mathsf{id}_{1}\circ q_{1}$</span>
is continuous. </p>
<p>From this we conclude that <span class="math-container">$\mathsf{id}_{1}$</span> is continuous
on the basis of the characteristic property.</p>
<p>Also <span class="math-container">$\mathsf{id}_{2}\circ q_{2}=q_{1}$</span> so <span class="math-container">$\mathsf{id}_{2}\circ q_{2}$</span>
is continuous. </p>
<p>From this we conclude that <span class="math-container">$\mathsf{id}_{2}$</span> is continuous
on the basis of the characteristic property.</p>
<p>This implies that <span class="math-container">$\tau=\rho$</span>.</p>
|
<p>I got stuck on this problem and can't find any hint to solve it. I hope someone can help me; I'd really appreciate it.</p>
<blockquote>
<blockquote>
<p>Give an example of a collection of sets $A$ that is not locally finite, such that the collection $B = \{\bar{X} | X \in A\}$ is locally finite.</p>
</blockquote>
</blockquote>
<p>Note: Every element in $B$ must be unique, so maybe there exist 2 distinct sets $A_{1}, A_{2} \in A$ that have $\bar{A_{1}} = \bar{A_{2}}$. That is why the statement can hold even though every $X \in A$ satisfies $X \subset \bar{X}$.</p>
<p>Thanks everybody.</p>
| user87690 | 87,690 | <p>Just take some non-empty open set $U$ such that $\overline{U} \setminus U$ is infinite, and let $A$ consist of all sets $X$ with $U \subseteq X \subseteq \overline{U}$: every such $X$ has $\overline{X} = \overline{U}$, so $B = \{\overline{U}\}$ is locally finite, while $A$ is infinite and fails to be locally finite at every point of $U$. Actually $U$ does not have to be open, just non-empty and such that $\overline{U} \setminus U$ is infinite. This includes the correct example of a dense subset of $\mathbb{R}$ other people mentioned.</p>
|
2,360,819 | <p>Probability question that I don't understand? It is on our assignment for probability and I can't seem to figure out how it has to do with probability or how to solve it.</p>
| Cameron Buie | 28,900 | <p>Well, a diagonal connects two non-adjacent vertices, yes? Well, how many segments can be drawn which connect two <em>distinct</em> vertices of an octagon? How many segments can be drawn which connect two <em>adjacent</em> vertices of an octagon? Now what?</p>
|
545,003 | <p>I have a proof that I am trying to prove and I am getting stuck at the inductive hypothesis. This is my theorem:</p>
<blockquote>
<p>For all natural numbers $n>3$, the following is true: $n + 3 < n!$.</p>
</blockquote>
<p>I have proven true for $n = 4$, and will assume true for some arbitrary value $k$, i.e.,</p>
<p>$$k + 3 < k!,$$</p>
<p>and I want to prove for $k+1$, i.e.,</p>
<p>$$(k+1) + 3 < (k+1)!.$$</p>
<p>Consider the $k+1$ term:</p>
<p>$$(k+1)+3 = ?$$</p>
<p>I am confused on how to approach the next step.</p>
<p>Ok here is how I am proceeding. It seems really long so if anyone has a better way let me know:
$$(k+1)+3=(k+3)+1<k!+1<k!+k!=2\,k!<(k+1)\,k!=(k+1)!$$
Therefore $(k+1)+3<(k+1)!$, which completes the induction.</p>
| Patrick | 50,809 | <p>Induction is overkill here.</p>
<p>For $n \gt 3$ we have that $n+3 \lt n + n = 2\cdot n \lt 1\cdot 2 \cdot3\cdots (n-1) \cdot n =n!$</p>
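A quick empirical check of the inequality for small integers (a sanity test, not a proof):

```python
from math import factorial

# n + 3 < n! for integers n > 3, verified on a finite range.
ok = all(n + 3 < factorial(n) for n in range(4, 60))
assert ok
```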
|
1,213,344 | <p>A solid half-ball $H$ of radius $a$ with density given by $k(2a-\rho)$, where $k$ is a constant. Find its mass.</p>
<p>You of course use spherical coordinates, so $dV=\rho ^2 \sin\phi \,d\rho \,d\phi \,d\theta$. It is clear that the limits are $\rho \in [0,a]$ and $\theta \in [0,2\pi]$. The limits for $\phi$ are normally $[0,\pi]$, but here they are apparently $[0,\pi/2]$, and I don't see why...</p>
<p>Is it because it is a half sphere? If so, then why can't you halve the limits for $\theta$ instead, or why not halve them for both of the angle variables?</p>
| Timbuc | 118,527 | <p>$$I_n:=\int_0^1\frac{x^n}{x^n+1}dx=\int_0^1\left(1-\frac1{x^n+1}\right)dx=1-\int_0^1\frac{dx}{x^n+1}$$</p>
<p>$$I_{n+1}=\int_0^1\frac{x^{n+1}}{x^{n+1}+1}dx=\int_0^1\left(1-\frac1{x^{n+1}+1}\right)dx=1-\int_0^1\frac{dx}{x^{n+1}+1}$$</p>
<p>Now, it is trivial that for $\;x\in[0,1]\;$ we have</p>
<p>$$\frac1{x^{n+1}+1}\ge\frac1{x^n+1}\ge0$$</p>
<p>and both functions are non-negative in the unit interval, and from here that $\;I_n\ge I_{n+1}\;$ .</p>
<p>As for (2): observe that</p>
<p>$$I_n=\int_n^{n+1}\left(2-\frac1x\right)dx\le\int_n^{n+1}2\,dx=2$$</p>
<p>By the way, this is <em>not</em> an improper integral.</p>
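For what it's worth, the monotonicity claim $I_n\ge I_{n+1}$ for $I_n=\int_0^1 \frac{x^n}{x^n+1}\,dx$ is easy to observe numerically with a crude midpoint rule (a check, not part of the proof):

```python
import math

# Midpoint-rule estimate of I_n = integral of x^n/(x^n+1) over [0, 1].
def I(n, steps=20000):
    h = 1.0 / steps
    return h * sum(((k + 0.5) * h) ** n / (((k + 0.5) * h) ** n + 1)
                   for k in range(steps))

vals = [I(n) for n in range(1, 8)]
assert all(a > b for a, b in zip(vals, vals[1:]))      # strictly decreasing
assert abs(vals[0] - (1 - math.log(2))) < 1e-6         # I_1 = 1 - ln 2 exactly
```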
|
533,534 | <p>Let X = {x(i)} be a group of n data with mean = μ(x) and variance $= σ(x)^2$.</p>
<p>We use the symbol S(x(i)) to represent the sum of all the x's.</p>
<p>Similar notations will be used for the group Y.</p>
<p>Supposed that Y is formed by adding an extra element x(n+1) to X and the value of that element is greater than μ(x).</p>
<p>That is, [x(n+1) = μ(x) + d; where d > 0].</p>
<p>Then we have μ(y) > μ(x) because</p>
<p>$μ(y) = [S(x(i)) + (μ(x) + d)] / (n + 1)$</p>
<p>$= [(S(x(i)) + μ(x)) + d] / (n + 1)$</p>
<p>$= [(nμ(x) + μ(x) + d] / (n + 1)$</p>
<p>$= [(n + 1) μ(x) + d] / (n + 1)$</p>
<p>$= μ(x) + [d / (n + 1)]$</p>
<p>$> μ(x)$</p>
<p>From which, we can say that:-
“if an item greater than the mean is added, the new mean is greater than the original.”
The actual relation between μx and μy is μ(y) = μ(x) + δ; where δ = d / (n + 1).</p>
<p>I am trying to get a simple and nice relation between $σ(x)^2$ and $σ(y)^2$ too. Using the similar derivation method as above, the closest I can get is $σ(y)^2 > n σ(x)^2 / (n + 1)$.</p>
<p>Is the above conclusion correct? Or is it possible to go further and get a much simpler relation like $σ(y)^2 > σ(x)^2$?</p>
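As a quick check of the derivation above on random data (a sketch; population variance, one arbitrary sample): the mean shifts by exactly $d/(n+1)$, and the bound $σ(y)^2 > nσ(x)^2/(n+1)$ holds.

```python
import random
from statistics import mean, pvariance

# Appending x_{n+1} = mu(x) + d to a sample of size n shifts the mean by
# exactly d/(n+1); the population variance satisfies the stated bound.
random.seed(0)
xs = [random.uniform(0, 10) for _ in range(7)]
n, d = len(xs), 2.5
ys = xs + [mean(xs) + d]
assert abs(mean(ys) - (mean(xs) + d / (n + 1))) < 1e-9
assert pvariance(ys) > n * pvariance(xs) / (n + 1)
```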
| Juan Sebastian Lozano | 87,284 | <p>I think it is useful to look at this graphically first. </p>
<p><img src="https://i.stack.imgur.com/1nT7q.jpg" alt="graph"></p>
<p>Now that you can see this graphic, let's consider the geometric interpretation of the limit. If I have the limit $\lim\limits_{x \to 0} f(x)$, then I am essentially asking: what is the value of $f(x)$ as $x$ gets closer and closer to zero? </p>
<p>If we look at the above graph, that is pretty straightforward: $f(x)$ gets closer and closer to negative one.</p>
<p>In math, however, we like to be more rigorous, so we must now define the limit algebraically. To take a limit we choose an $x$ value arbitrarily close to the desired $x$ value (i.e. the number you are limiting to). If we approach the same value from the positive and the negative sides, then no matter where you approach the desired $x$ from, you will be approaching the same value, so you have no need for explicitly saying "from the positive side" or "from the negative side." In notation we say
$$\left(\left( \lim\limits_{x \to 0^+} f(x) = y\right) \wedge \left( \lim\limits_{x \to 0^-} f(x) = y\right) \right)\Rightarrow \lim\limits_{x \to 0} f(x) = y $$
or "If the limits from both the positive side and the negative side equal y, then the limit equals y"</p>
<p>For piecewise functions it is slightly more complicated: we must first see which function applies on the negative side of the limit, and which one applies on the positive side, then take the limit for both and see if we arrive at the same value; otherwise the limit does not exist.</p>
<p>So here we take the limit from the negative side: </p>
<p>The equation of the function for the negative side is $f(x) = x- 1$, which is defined for zero, so we can just plug in zero (if it were not defined we would plug in $-.5$, then $-.25$, et cetera). $$\lim\limits_{x \to 0^-} f(x) = (0)-1 = -1$$ </p>
<p>Now the positive side:</p>
<p>The equation of the function for the positive side is $f(x) = 2x- 1$, which is defined for zero, so we can just plug in zero. $$\lim\limits_{x \to 0^+} f(x) = 2(0)-1 = -1$$ </p>
<p>and so we find that:
$$ \lim\limits_{x \to 0^+} f(x) = -1 = \lim\limits_{x \to 0^-} f(x)$$
and so, by our definition before, we see that:
$$ \lim\limits_{x \to 0} f(x) = -1$$</p>
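A tiny numerical version of the same check, assuming (as the computations above do) that $f(x)=x-1$ for $x\le 0$ and $f(x)=2x-1$ for $x>0$:

```python
# Sample f on both sides of 0; the values approach -1 from either side.
def f(x):
    return x - 1 if x <= 0 else 2 * x - 1

left = [f(-10.0 ** -k) for k in range(3, 9)]
right = [f(10.0 ** -k) for k in range(3, 9)]
assert all(abs(v - (-1)) < 1e-2 for v in left + right)
```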
|
2,848,891 | <p>Find the number of solutions of $$\left\{x\right\}+\left\{\frac{1}{x}\right\}=1,$$ where $\left\{\cdot\right\}$ denotes Fractional part of real number $x$.</p>
<h2>My try:</h2>
<p>When $x \gt 1$ we get</p>
<p>$$\left\{x\right\}+\frac{1}{x}=1$$ $\implies$</p>
<p>$$\left\{x\right\}=1-\frac{1}{x}.$$</p>
<p>Letting $x=n+f$, where $n \in \mathbb{Z^+}$ and $ 0 \lt f \lt 1$, we get</p>
<p>$$f=1-\frac{1}{n+f}.$$</p>
<p>By Hint given by $J.G$, i am continuing the solution:</p>
<p>we have</p>
<p>$$f^2+(n-1)f+1-n=0$$ solving we get</p>
<p>$$f=\frac{-(n-1)+\sqrt{(n+3)(n-1)}}{2}$$ $\implies$</p>
<p>$$f=\frac{\left(\sqrt{n+3}-\sqrt{n-1}\right)\sqrt{n-1}}{2}$$</p>
<p>Now obviously $n \ne 1$, for then we would get $f=0$.</p>
<p>So $n=2,3,4,5,\ldots$ gives values of $f$ as</p>
<p>$\frac{\sqrt{5}-1}{2}$, $\sqrt{3}-1$, and so on, which gives infinitely many solutions.</p>
| J.G. | 56,861 | <p>Now multiply by $n+f$; solve a quadratic to express $f$ in terms of $n$. Don't forget to check negative solutions too.</p>
|
2,848,891 | <p>Find the number of solutions of $$\left\{x\right\}+\left\{\frac{1}{x}\right\}=1,$$ where $\left\{\cdot\right\}$ denotes Fractional part of real number $x$.</p>
<h2>My try:</h2>
<p>When $x \gt 1$ we get</p>
<p>$$\left\{x\right\}+\frac{1}{x}=1$$ $\implies$</p>
<p>$$\left\{x\right\}=1-\frac{1}{x}.$$</p>
<p>Letting $x=n+f$, where $n \in \mathbb{Z^+}$ and $ 0 \lt f \lt 1$, we get</p>
<p>$$f=1-\frac{1}{n+f}.$$</p>
<p>By Hint given by $J.G$, i am continuing the solution:</p>
<p>we have</p>
<p>$$f^2+(n-1)f+1-n=0$$ solving we get</p>
<p>$$f=\frac{-(n-1)+\sqrt{(n+3)(n-1)}}{2}$$ $\implies$</p>
<p>$$f=\frac{\left(\sqrt{n+3}-\sqrt{n-1}\right)\sqrt{n-1}}{2}$$</p>
<p>Now obviously $n \ne 1$, for then we would get $f=0$.</p>
<p>So $n=2,3,4,5,\ldots$ gives values of $f$ as</p>
<p>$\frac{\sqrt{5}-1}{2}$, $\sqrt{3}-1$, and so on, which gives infinitely many solutions.</p>
| robjohn | 13,854 | <p><strong>Observations</strong></p>
<p>I am assuming that $\{x\}=x-\lfloor x\rfloor$, which is in $[0,1)$.</p>
<p>There are no integer solutions; if $x\in\mathbb{Z}$, then $\{x\}+\left\{\frac1x\right\}=0+\left\{\frac1x\right\}\lt1$.</p>
<p>Since $\{x\}+\left\{\frac1x\right\}=1$ is stable under $x\leftrightarrow\frac1x$, we can get all positive solutions by looking at $x\gt1$.</p>
<p>Since $\{-x\}=1-\{x\}$ for all $x\not\in\mathbb{Z}$, $\{x\}+\left\{\frac1x\right\}=1$ is stable under $x\leftrightarrow-x$. Thus, we can get all solutions looking at $x\gt0$.</p>
<hr>
<p><strong>Assuming that $\boldsymbol{x\gt1}$</strong></p>
<p>Let $t=\{x\}$ and $n=\lfloor x\rfloor$. This is equivalent to solving
$$
t+\frac1{n+t}=1\tag1
$$
for $t\in[0,1)$. Equation $(1)$ gives the quadratic equation
$$
t^2+(n-1)t-(n-1)=0\tag2
$$
which has the solution
$$
t=\frac{-n+1+\sqrt{(n+1)^2-4}}2\tag3
$$
Since $x=n+t$, we get
$$
x=\frac{n+1+\sqrt{(n+1)^2-4}}2\tag4
$$</p>
<hr>
<p><strong>All Solutions</strong></p>
<p>As mentioned above, all solutions can be gotten by taking the reciprocal and negating $(4)$. That is, we can get all solutions, from
$$
x=\frac{\pm n\pm\sqrt{n^2-4}}2\tag5
$$
where $n\in\mathbb{Z}$ and $n\ge3$.</p>
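A floating-point check that the family $(4)$ really satisfies the equation (a verification sketch, not part of the argument):

```python
import math

# x from (4), with n = floor(x) >= 2, should satisfy {x} + {1/x} = 1.
def frac(t):
    return t - math.floor(t)

for n in range(2, 50):
    x = (n + 1 + math.sqrt((n + 1) ** 2 - 4)) / 2
    assert math.floor(x) == n
    assert abs(frac(x) + frac(1 / x) - 1) < 1e-8
```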
|
23,566 | <p>I love math, and I used to be very good at it. The correct answers came fast and intuitively. I never studied, and redid the demonstration live for the tests (sometimes inventing new ones). I was the one who answered the tricky questions in class (8 hours of math/week in high school)... You get the idea.</p>
<p>As such I used to have a lot of confidence in my math abilities, and didn't think twice about saying the first idea that came to mind when answering a question.</p>
<p>That was more than 10 years ago, and I (almost) haven't done any math since then. I've graduated in a scientific field that requires little of it (I prefer not to give details) and worked for some time.</p>
<p>Now I'm back at school (master of statistics) and I need to do math, once again. I make mistakes upon blunders with the same confidence I used to have when I was good, which is extremely embarrassing when it happens in class. </p>
<p>I feel like a tone deaf musician and an ataxic painter at the same time.</p>
<p>One factor that probably plays a role is that I've learnt math in my mother tongue, and I'm now using it English, but I wouldn't expect it to make such a difference. </p>
<p>I know that it will require practice and hard work, but I need direction.</p>
<p>Any help is welcome.</p>
<p>Kind regards,</p>
<p>-- Mathemastov</p>
| Phil | 1,088 | <p>I highly recommend this book <a href="http://rads.stackoverflow.com/amzn/click/0201558025" rel="nofollow">Concrete Mathematics: A Foundation for Computer Science</a>. While the entire book is not that relevant, I think a few chapters (on Binomial Coefficients and Discrete Probability) can be helpful.</p>
<p>The book focuses on formula manipulation. It will sharpen your practical skills, like coming up with proofs, simplifying expressions, etc. This book is suitable for independent study because there are exercises at different difficulty levels with explained answers. Give it a try!</p>
|
23,566 | <p>I love math, and I used to be very good at it. The correct answers came fast and intuitively. I never studied, and redid the demonstration live for the tests (sometimes inventing new ones). I was the one who answered the tricky questions in class (8 hours of math/week in high school)... You get the idea.</p>
<p>As such I used to have a lot of confidence in my math abilities, and didn't think twice about saying the first idea that came to mind when answering a question.</p>
<p>That was more than 10 years ago, and I (almost) haven't done any math since then. I've graduated in a scientific field that requires little of it (I prefer not to give details) and worked for some time.</p>
<p>Now I'm back at school (master of statistics) and I need to do math, once again. I make mistakes upon blunders with the same confidence I used to have when I was good, which is extremely embarrassing when it happens in class. </p>
<p>I feel like a tone deaf musician and an ataxic painter at the same time.</p>
<p>One factor that probably plays a role is that I've learnt math in my mother tongue, and I'm now using it English, but I wouldn't expect it to make such a difference. </p>
<p>I know that it will require practice and hard work, but I need direction.</p>
<p>Any help is welcome.</p>
<p>Kind regards,</p>
<p>-- Mathemastov</p>
| Alex Greene | 9,303 | <p>Interesting question. Not asking about mathematics as such, but about building up one's skills in a global environment suffering from a dearth of post-educational mathematical tuition for adults.</p>
<p>Mathematics taught in educational establishments tends, at least at the early stages of childhood, to focus on established formulae and already-solved problems which form the foundation of more complicated, less well-known mathematical works still being solved to this day. Other, higher levels of mathematics still have little to no common ground with what the establishment already taught you. They don't teach much about the Riemann Hypothesis and the significance of prime numbers in cryptography and national security in schools, for instance - and yet you see prime numbers around you every day.</p>
<p>Re-awakening your mathematical skills might take some time, because you may first have to define the /extent/ of your mathematical skills. If you have access to your old mathematics textbooks, dig them up. Look through them for the parts where you found a weakness in your methodology. Ask yourself why, for example, you may have found prime numbers, or factorials, or matrices, or calculus tiresome. Then go and delve into that tiresome topic.</p>
<p>Like exercising an old muscle you thought had gone to seed, you can exercise mental and cognitive faculties long since thought to have gone dormant and even atrophied through neglect. And you can only do it one way, just like working out: by going through that awful machine in the corner of the gym that makes your back hurt, rather than all the easy ones.</p>
<p>It will hurt, and it will hurt your brain to the point where you may virtually buy stock in paracetamol, but if you persist in your efforts, in the end you'll have an active mathematician's brain running in your head again.</p>
<p>And yes, to hear an indolent old wretch like me talk about exercise comes off as a supreme irony. However, only through the painful and rigorous exercise of your mathematical faculties can they spring back to life with renewed vigor.</p>
|
1,988,191 | <p>Today I coded the multiplication of quaternions and vectors in Java. This is less of a coding question and more of a math question though:</p>
<pre><code>Quaternion a = Quaternion.create(0, 1, 0, Spatium.radians(90));
Vector p = Vector.fromXYZ(1, 0, 0);
System.out.println(a + " * " + p + " = " + Quaternion.product(a, p));
System.out.println(a + " * " + p + " = " + Quaternion.product2(a, p));
</code></pre>
<p>What I am trying to do is rotate a point $\mathbf{p}$ using the quaternion $\mathbf{q}$. The functions <code>product()</code> and <code>product2()</code> calculate the product in two different ways, so I am quite certain that the output is correct:</p>
<pre><code>(1.5707964 + 0.0i + 1.0j + 0.0k) * (1.0, 0.0, 0.0) = (-1.0, 0.0, -3.1415927)
(1.5707964 + 0.0i + 1.0j + 0.0k) * (1.0, 0.0, 0.0) = (-1.0, 0.0, -3.1415927)
</code></pre>
<p>However, I can't wrap my head around why the result is the way it is. I expected to rotate $\mathbf{p}$ 90 degrees around the y-axis, which should have resulted in <code>(0.0, 0.0, -1.0)</code>.</p>
<p>Wolfram Alpha's visualization also suggests the same:
<a href="https://www.wolframalpha.com/input/?i=(1.5707964+%2B+0.0i+%2B+1.0j+%2B+0.0k)" rel="nofollow">https://www.wolframalpha.com/input/?i=(1.5707964+%2B+0.0i+%2B+1.0j+%2B+0.0k)</a></p>
<p>So what am I doing wrong here? Are really both the functions giving invalid results or am I not understanding something about quaternions?</p>
| amd | 265,466 | <p>The short answer is that you have to conjugate by a half-angle quaternion instead of simply multiplying to effect a rotation. See <a href="https://math.stackexchange.com/q/40164/265466">this question</a> or <a href="https://en.m.wikipedia.org/wiki/Quaternions_and_spatial_rotation" rel="nofollow noreferrer">this Wikipedia article</a> for details.</p>
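To illustrate, here is a minimal Python sketch (not the asker's Java code) of that recipe: pack the half angle into the quaternion, then conjugate, $p\mapsto qpq^{*}$. Rotating $(1,0,0)$ by $90^\circ$ about the $y$-axis then gives the expected $(0,0,-1)$:

```python
import math

# Hamilton product of quaternions (w, x, y, z).
def qmul(a, b):
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
            w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
            w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
            w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2)

t = math.radians(90)
q = (math.cos(t / 2), 0.0, math.sin(t / 2), 0.0)   # HALF angle, axis (0,1,0)
qc = (q[0], -q[1], -q[2], -q[3])                   # conjugate q*
p = (0.0, 1.0, 0.0, 0.0)                           # the point (1,0,0) as a pure quaternion
w, x, y, z = qmul(qmul(q, p), qc)
assert max(abs(w), abs(x), abs(y), abs(z + 1)) < 1e-12   # rotated to (0,0,-1)
```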
|
184,682 | <p>I have difficulties with a rather trivial topological question: </p>
<p>A is a discrete subset of $\mathbb{C}$ (complex numbers) and B a compact subset of $\mathbb{C}$. Why is $A \cap B$ finite? I can see that it's true if $A \cap B$ is compact, i.e. closed and bounded, but is it obvious that $A \cap B$ is closed?</p>
| user642796 | 8,348 | <p>If $A \cap B$ were infinite then by the compactness of $B$ there would be a limit point $z$. Next note that every neighbourhood of $z$ would have infinite intersection with $A \cap B$ and hence with $A$, contradicting that $A$ was discrete.</p>
<p><strong>Edit:</strong> I quickly deleted this after I started to think I was using the wrong notion of discreteness (<em>closed</em> discreteness, as mentioned in Stefan's answer below, which is stronger than discrete as a subspace). In topology there is a notion of <em>discrete family of sets</em> for which the usual definition is the following: A family $\mathcal{A} = \{ A_i : i \in I \}$ of subsets of a topological space $X$ is called <em>discrete</em> if each point $x \in X$ has a neighbourhood which meets exactly one member of $\mathcal{A}$. Seeing what I was supposed to show, instead of taking <em>discrete</em> to mean <em>discrete</em> as a subspace, I essentially took <em>discrete</em> to mean that the family of singletons from $A$ is a discrete family of sets. It's now been undeleted to make Stefan's answer more intelligible, so be kind with your voting.</p>
|
184,682 | <p>I have difficulties with a rather trivial topological question: </p>
<p>A is a discrete subset of $\mathbb{C}$ (complex numbers) and B a compact subset of $\mathbb{C}$. Why is $A \cap B$ finite? I can see that it's true if $A \cap B$ is compact, i.e. closed and bounded, but is it obvious that $A \cap B$ is closed?</p>
| Stefan Geschke | 16,330 | <p>There seems to be a contradiction between the answers of Andre Nicolas and of Arthur Fischer, yet both are correct. This depends on your definition of discrete. Andre's notion of discrete is this: </p>
<p>A set $S$ in a topological space $X$ is discrete if it is discrete with respect to the subspace topology. This is the same as saying that every point in $S$ has a neighborhood (in $X$) that contains no other point from $S$.</p>
<p>Arthur's definition is this:
$S\subseteq X$ is discrete if every point in $X$ has a neighborhood that contains at most one element of $S$ (I assume all spaces to be Hausdorff).</p>
<p>A discrete set according to Arthur's definition is automatically closed (as a point in the closure but not in the set would violate discreteness). So most likely, if you are supposed to show that the intersection of a discrete set with a compact set is finite, you are supposed to use Arthur's definition of discreteness. By the way, note that Arthur's argument does not say anything about $\mathbb C$. This works in every Hausdorff space and mainly uses the fact that compact discrete sets are finite. (And that subsets of discrete sets are again discrete and that discrete sets are closed.)</p>
<hr>
<p>Edit: Unfortunately Arthur Fischer's answer was deleted while I was typing mine.
But it seems that my answer is still understandable. </p>
|
<p>A continuous-time process is null for t < 0. Under which conditions is it stationary (WSS)?</p>
<p>I know that E[x(t)] must be a constant and the autocorrelation function must depend only on the time difference t2-t1. Are there any other conditions?</p>
| N. S. | 9,176 | <p>The sequence $\left(1+\frac{1}{n}\right)^n$ is strictly increasing and converging to $e$. This implies the first inequality.</p>
<p>$\left(1+\frac{1}{n}\right)^{n+1}$ is strictly decreasing and converging to $e$. This implies the second inequality.</p>
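Numerically, the two monotonicity claims (and the bracketing of $e$) are easy to observe:

```python
import math

# (1 + 1/n)^n increases to e; (1 + 1/n)^(n+1) decreases to e.
vals_up = [(1 + 1 / n) ** n for n in range(1, 60)]
vals_dn = [(1 + 1 / n) ** (n + 1) for n in range(1, 60)]
assert all(a < b for a, b in zip(vals_up, vals_up[1:]))
assert all(a > b for a, b in zip(vals_dn, vals_dn[1:]))
assert vals_up[-1] < math.e < vals_dn[-1]
```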
|
<p>I am having trouble with a proof for linear algebra. Could somebody explain to me how to prove that if $A$ and $B$ are both $n\times n$ nonsingular matrices, then their product $AB$ is also nonsingular. </p>
<p>A place to start would be helpful. Thank you for your time. </p>
| Community | -1 | <p>There are different ways to prove this result, for example:</p>
<ul>
<li>Using the determinant:
$$\det(AB)=\det A\det B$$
and the fact that $C$ is singular iff $\det C=0$.</li>
<li>Using the fact that $A$ and $B$ are bijective, hence their composition $AB$ is bijective, together with the fact that in a finite dimensional space: $C$ is injective iff $C$ is surjective iff $C$ is bijective.</li>
</ul>
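<p>For intuition (not a proof), the determinant route is easy to check numerically; a sketch, assuming NumPy is available:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))     # random matrices are (almost surely)
B = rng.standard_normal((4, 4))     # nonsingular

# det is multiplicative, so det(AB) = det(A) det(B) != 0.
assert np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B))

# Equivalently, AB has the explicit inverse B^{-1} A^{-1}.
assert np.allclose(np.linalg.inv(A @ B), np.linalg.inv(B) @ np.linalg.inv(A))
```

<p>The second assertion is the cleanest way to see it: exhibiting $B^{-1}A^{-1}$ as an inverse of $AB$ proves nonsingularity directly, without determinants.</p>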
|
1,715,358 | <p>Being fascinated by the approximation $$\sin(x) \simeq \frac{16 (\pi -x) x}{5 \pi ^2-4 (\pi -x) x}\qquad (0\leq x\leq\pi)$$ proposed, more than 1400 years ago by Mahabhaskariya of Bhaskara I (a seventh-century Indian mathematician) (see <a href="https://math.stackexchange.com/questions/976462/a-1-400-years-old-approximation-to-the-sine-function-by-mahabhaskariya-of-bhaska">here</a>), I considered the function $$\sin \left(\frac{1}{2} \left(\pi -\sqrt{\pi ^2-4 y}\right)\right)$$ which I expanded as a Taylor series around $y=0$. This gives $$\sin \left(\frac{1}{2} \left(\pi -\sqrt{\pi ^2-4 y}\right)\right)=\frac{y}{\pi }+\frac{y^2}{\pi ^3}+\left(\frac{2}{\pi ^5}-\frac{1}{6 \pi ^3}\right)
y^3+O\left(y^4\right)$$ Now, I made (and maybe this is not allowed) $y=(\pi-x)x$. Replacing, I obtain
$$\sin(x)=\frac{(\pi -x) x}{\pi }+\frac{(\pi -x)^2 x^2}{\pi ^3}+\left(\frac{2}{\pi ^5}-\frac{1}{6 \pi ^3}\right) (\pi -x)^3 x^3+\cdots$$ I did not add the $O\left(.\right)$ on purpose since I am not feeling very comfortable with it.</p>
<p>What is really beautiful is that the last expansion matches almost exactly the function $\sin(x)$ for the considered range $(0\leq x\leq\pi)$ and it can be very useful for easy and simple approximate evaluations of definite integrals such as$$I_a(x)=\int_0^x \frac{\sin(t)}{t^a}\,dt$$ under the conditions $(0\leq x\leq \pi)$ and $a<2$.</p>
<p>I could do the same with the simplest Padé approximant and obtain $$\sin(x)\approx \frac{(\pi -x) x}{\pi \left(1-\frac{(\pi -x) x}{\pi ^2}\right)}=\frac{5\pi(\pi -x) x}{5\pi ^2-5(\pi -x) x}$$ which, for sure, is far from being as good as the magnificent approximation given at the beginning of the post, but which is not too bad (except around $x=\frac \pi 2$).</p>
<p>The problem is that I am not sure that I am allowed to do things like that.</p>
<p><strong>I would greatly appreciate if you could tell me what I am doing wrong and/or illegal using such an approach.</strong></p>
<p><strong>Edit</strong></p>
<p>After robjohn's answer and recommendations, I improved the approximation by writing it as $$f_n(x)=\sum_{i=1}^n a_i \big((\pi-x)x\big)^i$$ and minimizing $$S_n=\int_0^\pi\big(\sin(x)-f_n(x)\big)^2\,dx$$ with respect to the $a_i$'s.</p>
<p>What is obtained is $$a_1=\frac{60480 \left(4290-484 \pi ^2+5 \pi ^4\right)}{\pi ^9} \approx 0.31838690$$ $$a_2=-\frac{166320 \left(18720-2104 \pi ^2+21 \pi ^4\right)}{\pi ^{11}}\approx 0.03208100$$ $$a_3=\frac{720720 \left(11880-1332 \pi ^2+13 \pi ^4\right)}{\pi ^{13}}\approx 0.00127113$$ These values are not very far from those given by Taylor ($\approx 0.31830989$), ($\approx 0.03225153$), ($\approx 0.00116027$) but, as shown below, they change very drastically the results.</p>
<p>The errors oscillate above and below the zero line and, for the considered range, are all smaller than $10^{-5}$.</p>
<p>After minimization, $S_3\approx 8.67\times 10^{-11}$ while, for the above Taylor series, it was $\approx 6.36\times 10^{-7}$.</p>
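<p>The minimization above can be reproduced numerically; the sketch below (assuming NumPy, and using a discretized least-squares fit on a fine grid as a stand-in for the continuous minimization) recovers the quoted coefficients:</p>

```python
import numpy as np

x = np.linspace(0.0, np.pi, 4001)
y = (np.pi - x) * x                         # the substitution y = (pi - x) x
V = np.stack([y, y**2, y**3], axis=1)       # basis y, y^2, y^3

# Discretized least-squares fit of sin(x) by a1*y + a2*y^2 + a3*y^3,
# approximating the minimizer of S_3 = int_0^pi (sin x - f_3(x))^2 dx.
a, *_ = np.linalg.lstsq(V, np.sin(x), rcond=None)
print(a)   # roughly [0.31839, 0.03208, 0.00127], matching the values above

# Taylor coefficients of the series around y = 0, for comparison.
t = np.array([1/np.pi, 1/np.pi**3, 2/np.pi**5 - 1/(6*np.pi**3)])

err_fit = np.max(np.abs(V @ a - np.sin(x)))
err_taylor = np.max(np.abs(V @ t - np.sin(x)))
assert err_fit < err_taylor        # fitted coefficients beat truncated Taylor
assert err_fit < 1e-4              # and the fitted error stays tiny on [0, pi]
```
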
| Clerk | 550,918 | <p>@Claude Leibovici: use the following two-point Taylor series at $x=-\pi,\pi$
$$\frac{z (z-\pi )^3 (z+\pi )^3}{48 \pi ^4}-\frac{5 z (z-\pi )^3 (z+\pi )^3}{16 \pi ^6}+\frac{3 z (z-\pi )^2 (z+\pi )^2}{8 \pi ^4}-\frac{z (z-\pi ) (z+\pi )}{2 \pi ^2}$$ its quadratic error beats that of any formula above of the same degree. </p>
|
1,037,898 | <p>Does given integral</p>
<blockquote>
<p>$$\int_1^\infty \frac{\log(x-1)}{x(x-1)}\,dx$$</p>
</blockquote>
<p>converge? If it is convergent, can we evaluate its value?</p>
| Paul | 17,980 | <p>Hints:</p>
<p>Let $t=\log(x-1)$. Then $x=1+e^t$</p>
|
1,037,898 | <p>Does given integral</p>
<blockquote>
<p>$$\int_1^\infty \frac{\log(x-1)}{x(x-1)}\,dx$$</p>
</blockquote>
<p>converge? If it is convergent, can we evaluate its value?</p>
| Siminore | 29,672 | <p>Consider
$$
\int_1^2 \frac{\log (x-1)}{x(x-1)}dx = \int_0^1 \frac{\log u}{(1+u)u}du \sim \int_0^1 \frac{\log u}{u}du,
$$
which is divergent, since $$\frac{d}{du} \frac12 \log^2 u = \frac{\log u}{u}.$$ Hence the whole integral is divergent. By the way, the integral "at infinity" converges.</p>
|
1,037,898 | <p>Does given integral</p>
<blockquote>
<p>$$\int_1^\infty \frac{\log(x-1)}{x(x-1)}\,dx$$</p>
</blockquote>
<p>converge? If it is convergent, can we evaluate its value?</p>
| Zaid Alyafeai | 87,813 | <p>Substituting $u=\frac{x-1}{x}$, $$I=\int_0^1\frac{\log(u)-\log(1-u)}{u}\,du = \int^1_0\frac{\log(u)}{u}\,du+\zeta(2)$$</p>
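<p>The constant $\zeta(2)=\pi^2/6$ split off here comes from the series $-\log(1-u)/u=\sum_{n\ge1}u^{n-1}/n$ integrated term by term; a quick numerical sketch checking both the series and the integral:</p>

```python
import math

# -log(1-u)/u = sum_{n>=1} u^(n-1)/n on [0,1), so integrating over [0,1]
# term by term gives sum_{n>=1} 1/n^2 = zeta(2) = pi^2/6.
zeta2 = sum(1 / n**2 for n in range(1, 200_001))
assert abs(zeta2 - math.pi**2 / 6) < 1e-4

# Midpoint-rule check of the integral int_0^1 -log(1-u)/u du itself.
N = 100_000
I = sum(-math.log(1 - (k + 0.5) / N) / ((k + 0.5) / N) for k in range(N)) / N
assert abs(I - math.pi**2 / 6) < 1e-3
```
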
|
99,572 | <p>One of the most useful tools in the study of convex polytopes is to move from polytopes (through their fans) to toric varieties and see how properties of the associated toric variety reflects back on the combinatorics of the polytopes. This construction requires that the polytope is rational which is a real restriction when the polytope is general (neither simple nor simplicial). Often we would like to consider general polytopes and even polyhedral spheres (and more general objects) where the toric variety construction does not work.</p>
<p>I am aware of very general constructions by M. Davis and T. Januszkiewicz
(one relevant paper might be <a href="http://www.math.osu.edu/%7Edavis.12/old_papers/DJ_toric.dmj.pdf" rel="nofollow noreferrer">Convex polytopes, Coxeter orbifolds and torus actions, Duke Math. J. 62 (1991)</a> and several subsequent papers). Perhaps these constructions allow you to start with arbitrary polyhedral spheres and perhaps even in greater generality.</p>
<p>I ask about an explanation of the scope of these constructions, and, in simple terms as possible, how does the construction go?</p>
| Community | -1 | <p>The construction of Davis and Januszkiewicz can be realized as an equivariant colimit. </p>
<p>Let $P$ be a simple polytope of dimension $n$ and let $G$ be either the mod 2 torus ${\mathbb{Z}}_2^n$ or the usual torus ${\mathbb{T}}^n$. A characteristic function on $P$ corresponds to an order-reversing map $\chi:{\mathrm{Face}} \, P\to {\mathrm{Sub}}_{\mathbf{Grp}} G$ from the face poset of $P$ to the poset of subgroups of $G$ such that</p>
<ol>
<li>The image of $\chi$ lands in the unimodular subgroups of G.</li>
<li>$\chi$ is graded, in the sense that ${\mathrm{codim}} \,F = {\mathrm{rank}} \, \chi F$. </li>
</ol>
<p>There is a functor $-\times G/-:{\mathrm{Face}} P\times {\mathrm{Sub}}_{\mathbf{Grp}} G\to {\mathbf{Top}}G$ from the poset product to the category of $G$-spaces that carries $(F,H)$ to the $G$-space $F\times G/H$. Here $G$ acts on the second factor of the product naturally:
$$(x,Hg)g':=(x,Hgg')$$ </p>
<p>Pick out a certain subposet $Q$ of ${\mathrm{Face}} P\times {\mathrm{Sub}}_{\mathbf{Grp}} G$ by requiring that $(F,G/H)$ is in $Q$ if and only if $H$ is a unimodular subgroup of $\chi F$. The real or complex quasitoric manifold $M(\chi)$ over $P$ with characteristic function $\chi$ is the colimit of the composite $$Q\hookrightarrow {\mathrm{Face}} P\times {\mathrm{Sub}}_{\mathbf{Grp}} G\xrightarrow{-\times G/-} {\mathbf{Top}}_G$$<br>
Without reference to morphisms and writing $H\prec \chi F$ to mean that $H$ is a unimodular subgroup of $\chi F$, we can write $$M(\chi)={\mathrm{colim}}_{H \prec \chi F} F\times G/H$$</p>
<p>Replacing the face poset by a general poset and taking a general topological group $G$ (or Lie group), one can construct $G$-spaces via such an equivariant decomposition. The defect of such a generalization is that it is unclear whether the resulting $G$-space has a manifold, variety or Kahler structure. </p>
<p>Gil, does a polyhedral sphere have a naturally associated poset over which we could construct $G$-spaces with interesting combinatorial invariants?</p>
|
99,572 | <p>One of the most useful tools in the study of convex polytopes is to move from polytopes (through their fans) to toric varieties and see how properties of the associated toric variety reflects back on the combinatorics of the polytopes. This construction requires that the polytope is rational which is a real restriction when the polytope is general (neither simple nor simplicial). Often we would like to consider general polytopes and even polyhedral spheres (and more general objects) where the toric variety construction does not work.</p>
<p>I am aware of very general constructions by M. Davis and T. Januszkiewicz
(one relevant paper might be <a href="http://www.math.osu.edu/%7Edavis.12/old_papers/DJ_toric.dmj.pdf" rel="nofollow noreferrer">Convex polytopes, Coxeter orbifolds and torus actions, Duke Math. J. 62 (1991)</a> and several subsequent papers). Perhaps these constructions allow you to start with arbitrary polyhedral spheres and perhaps even in greater generality.</p>
<p>I ask about an explanation of the scope of these constructions, and, in simple terms as possible, how does the construction go?</p>
| Li Yu | 471,224 | <p>In the definition of moment-angle manifold in Davis-Januszkiewicz's 1991 paper, the orbit space is a simple convex polytope <span class="math-container">$P$</span> which is contractible. We can replace <span class="math-container">$P$</span> by an arbitrary smooth nice manifold with corners <span class="math-container">$Q$</span> and define the generalized moment-angle manifold <span class="math-container">$\mathcal{Z}_Q$</span> in the similar way as usual moment-angle manifold (see <a href="https://arxiv.org/abs/2011.10366" rel="nofollow noreferrer">https://arxiv.org/abs/2011.10366</a>). Let <span class="math-container">$F_1,\cdots,F_m$</span> be all the facets of <span class="math-container">$Q$</span> and <span class="math-container">$\lambda: \{ F_1,\cdots,F_m \} \rightarrow
\mathbb{Z}^m$</span> be a map such that <span class="math-container">$\{\lambda(F_1),\cdots,\lambda(F_m)\}$</span> is a unimodular basis of <span class="math-container">$\mathbb{Z}^m\subset \mathbb{R}^m=T_e(S^1)^m$</span>.
<span class="math-container">\begin{equation}
\mathcal{Z}_{Q} = Q\times (S^1)^m / \sim
\end{equation}</span>
where <span class="math-container">$(x,g) \sim (x',g')$</span> if and only if <span class="math-container">$x=x'$</span> and <span class="math-container">$g^{-1}g' \in \mathbb{T}^{\lambda}_x$</span>
where <span class="math-container">$\mathbb{T}^{\lambda}_x$</span> is the subtorus of <span class="math-container">$(S^1)^m$</span> determined by
the linear subspace of <span class="math-container">$\mathbb{R}^m$</span> spanned by the set <span class="math-container">$\{ \lambda(F_j) \, |\, x\in F_j \}$</span>.</p>
<p>The free quotient of <span class="math-container">$\mathcal{Z}_Q$</span> under the action of some subtorus in <span class="math-container">$(S^1)^m$</span> of rank <span class="math-container">$m-n$</span> gives analogues of toric manifolds in this setting (where <span class="math-container">$n$</span> is the dimension of <span class="math-container">$Q$</span>). Such a space can also be defined from a (non-degenerate) characteristic function on the facets of <span class="math-container">$Q$</span>. The equivariant cohomology ring of <span class="math-container">$\mathcal{Z}_Q$</span> with <span class="math-container">$\mathbf{k}$</span>-coefficients is isomorphic to a ring <span class="math-container">$\mathbf{k}\langle Q\rangle$</span> (called the topology face ring of <span class="math-container">$Q$</span>) that is determined not only by the face poset of <span class="math-container">$Q$</span> but also by the <span class="math-container">$\mathbf{k}$</span>-cohomology rings of all the faces of <span class="math-container">$Q$</span>. The definition of topology face ring is a direct generalization of the face ring of a simple convex polytope.</p>
<p>Furthermore, we can replace <span class="math-container">$Q$</span> by an arbitrary finite CW-complex <span class="math-container">$X$</span> with a panel structure <span class="math-container">$\mathcal{P}$</span> and define moment-angle complex <span class="math-container">$(D^2,S^1)^{(X,\mathcal{P})}$</span> and do the similar calculations for its cohomology and equivariant cohomology (see <a href="https://arxiv.org/abs/2103.04281" rel="nofollow noreferrer">https://arxiv.org/abs/2103.04281</a>). The definition of panel structure
is due to M. Davis's 1983 paper "Groups generated by reflections and aspherical manifolds not covered by
Euclidean space" in Ann. of Math.
Roughly speaking, a panel structure on <span class="math-container">$X$</span> defines "abstract faces" on <span class="math-container">$X$</span> which allows us to do the similar construction as <span class="math-container">$\mathcal{Z}_Q$</span>. But it is more convenient to think of
<span class="math-container">$(D^2,S^1)^{(X,\mathcal{P})}$</span> as the colimit of a diagram of CW-complexes of the form
<span class="math-container">$ f\times \underset{j\in I_f}{\prod} D^2_{(j)} \times
\underset{j\in [m]\backslash I_f}{\prod} S^1_{(j)}$</span> where <span class="math-container">$f$</span> ranges over all
the "abstract faces" of <span class="math-container">$(X,\mathcal{P})$</span> and <span class="math-container">$I_f$</span> denotes all indices of the panels that contain <span class="math-container">$f$</span>. Note that "panels" plays the role of facets here.</p>
<p>In general, <span class="math-container">$(D^2,S^1)^{(X,\mathcal{P})}$</span> may not be a manifold. The free quotient of <span class="math-container">$(D^2,S^1)^{(X,\mathcal{P})}$</span> under some torus action can also be considered as a far-reaching generalization of manifolds with locally standard torus actions. In addition, the topology face ring of the panel structure <span class="math-container">$(X,\mathcal{P})$</span> also makes perfect sense, which is isomorphic to the equivariant cohomology ring of <span class="math-container">$(D^2,S^1)^{(X,\mathcal{P})}$</span>.
In particular, when <span class="math-container">$X$</span> is the cone of the barycentric subdivision of a simplicial complex <span class="math-container">$K$</span> (with a canonical panel structure <span class="math-container">$\mathcal{P}_K$</span>), the topological face ring of <span class="math-container">$(X,\mathcal{P}_K)$</span> is nothing but the face ring (Stanley-Reisner ring) of <span class="math-container">$K$</span> (see Section 5 of that paper).</p>
|
2,601,380 | <p>I'm trying to figure out the impedance of a capacitor. My textbook tells me the answer is $\frac{-i}{\omega C}$ and plugging that into the equation does work but I wanted to come up with that answer myself. So I wrote out the equation with what I know:</p>
<p>$$-V_0\omega C\sin\omega t = Re\left( \frac{V_0(\cos\omega t + i\sin\omega t)}{x} \right)$$</p>
<p>This is where I get stuck. I don't know how to isolate $x$ given that it is inside the $Re()$ function. Trying to get somewhere, I tried this:</p>
<p>$$x = \frac{V_0(\cos\omega t + i\sin\omega t)}{-V_0\omega C\sin\omega t} = \frac{\cos\omega t}{-\omega C\sin\omega t} - \frac{i}{\omega C}$$</p>
<p>Seeing $-\frac{i}{\omega C}$ makes me feel like I'm on the right track. Now I just need to figure out how to get rid of the first part of that answer. And I'm guessing that if I knew how to isolate $x$ from the first equation, that would do the trick. So how can I isolate $x$ when it is included in the $Re()$ function?</p>
| Acccumulation | 476,070 | <p>Re() is a projection; Re(a+bi) = a. Thus Re(z) = z-iIm(z). So given RHS = Re(z), we have that RHS+bi = z for some real b. Note that Re(z) = a does not yield a single value of z as a solution, but instead gives a vertical line in the complex plane. Each point on that line will give a different value for x.</p>
<p>$$-V_0\omega C\sin\omega t + bi = \frac{V_0(\cos\omega t + i\sin\omega t)}{x} $$</p>
<p>In terms of b, x will be:</p>
<p>$$\frac{V_0(\cos\omega t + i\sin\omega t)}{-V_0\omega C\sin\omega t + bi} $$</p>
<p>Assuming that $\omega$, $V_0$, and C are real numbers, they can be "absorbed" into b; b is an arbitrary real number, so dividing by a real number just gives another arbitrary real number. So the above can be rewritten as</p>
<p>$$\frac{(\cos\omega t + i\sin\omega t)}{-\omega C(\sin\omega t + bi)} $$</p>
<p>Factoring an i out of the numerator, we get</p>
<p>$$\frac{i(\sin\omega t-i\cos\omega t )}{-\omega C(\sin\omega t + ib)} $$</p>
<p>Again, this describes a solution set, not a particular x. But if you take $b = -\cos\omega t$, then you recover the given expression. Any motivation for that choice will have to come from further facts about the capacitance rather than mathematical properties.</p>
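<p>As a sanity check of the phasor algebra (with made-up illustrative values for $V_0$, $\omega$, $C$), Python's complex arithmetic confirms that $Z=\frac{1}{i\omega C}=\frac{-i}{\omega C}$ reproduces the capacitor current $i(t)=C\,\frac{dv}{dt}$ for $v(t)=V_0\cos\omega t$:</p>

```python
import cmath
import math

# V0, w, C are arbitrary illustrative values, not from the question.
V0, w, C = 2.0, 5.0, 3e-4
Z = 1 / (1j * w * C)                      # phasor impedance of a capacitor
assert cmath.isclose(Z, -1j / (w * C))    # 1/(iwC) and -i/(wC) agree

for t in (0.0, 0.1, 0.37, 1.0):
    i_true = -V0 * w * C * math.sin(w * t)            # C d/dt (V0 cos wt)
    i_phasor = (V0 * cmath.exp(1j * w * t) / Z).real  # Re(V e^{iwt} / Z)
    assert math.isclose(i_true, i_phasor, rel_tol=1e-9, abs_tol=1e-12)
```
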
|
3,362,654 | <p>Let's say <span class="math-container">$C$</span> is a category, and <span class="math-container">$\mathscr{C}$</span> is a collection of morphisms in <span class="math-container">$C$</span>. I have come across the following sentence </p>
<p>"<span class="math-container">$C$</span> admits pullbacks along morphisms from <span class="math-container">$\mathscr{C}$</span>, and <span class="math-container">$\mathscr{C}$</span> is pullback stable" </p>
<p>I know what a pullback is, but have no idea what this sentence means. Here are my attempts.</p>
<ol>
<li><span class="math-container">$C$</span> admits pullbacks along morphisms from <span class="math-container">$\mathscr{C}$</span>: Suppose <span class="math-container">$(P,p,q)$</span> is a pullback of <span class="math-container">$f$</span> and <span class="math-container">$g$</span>, then <span class="math-container">$f$</span> and <span class="math-container">$g$</span> are in <span class="math-container">$\mathscr{C}$</span>.</li>
<li><span class="math-container">$\mathscr{C}$</span> is pullback stable: Suppose <span class="math-container">$(P,p,q)$</span> is a pullback of <span class="math-container">$f$</span> and <span class="math-container">$g$</span>, then if <span class="math-container">$g$</span> has a certain property so does <span class="math-container">$p$</span>. But which property are we talking about here? Is it talking about belonging to <span class="math-container">$\mathscr{C}$</span>?</li>
</ol>
<p>Any correction or help will be greatly appreciated.</p>
| Brian Moehring | 694,754 | <p>Let <span class="math-container">$N$</span> denote the winning position. Then for <span class="math-container">$n=0,1,\ldots, 12,$</span> <span class="math-container">$$\begin{align*}P(N > n) &= \text{probability that none of the first } n+1 \text{ birthday months match} \\ &= \frac{12}{12}\cdot\frac{11}{12} \cdots \frac{12-n}{12} \\ &= \frac{12!(12-n)}{12^{n+1}(12-n)!} \left(=\frac{12!}{12^{n+1}(11-n)!} \text{ if } n\neq 12\right)\end{align*}$$</span> </p>
<p>Then for <span class="math-container">$n=1,2,\ldots,12,$</span> <span class="math-container">$$\begin{align*}P(N=n) &= P(N>n-1) - P(N>n) \\ &= \frac{12!}{12^n(12-n)!} - \frac{12!(12-n)}{12^{n+1}(12-n)!} \\ &= \frac{12!\cdot n}{12^{n+1}(12-n)!} \end{align*}$$</span> </p>
<p>[From here on, I'll write <span class="math-container">$P(n)$</span> as a shorthand for <span class="math-container">$P(N=n)$</span>]</p>
<p>Therefore, for <span class="math-container">$n=1,2,\ldots,11,$</span> <span class="math-container">$$P(n+1) = \frac{(n+1)(12-n)}{12n}\cdot P(n)$$</span></p>
<p>Therefore, for <span class="math-container">$1\leq n \leq 11,$</span> <span class="math-container">$$\begin{align*}P(n+1) \left\{\begin{array}{c}< \\ = \\ >\end{array}\right\} P(n) &\iff \frac{(n+1)(12-n)}{12n} \left\{\begin{array}{c}< \\ = \\ >\end{array}\right\} 1 \\ &\iff (4+n)(3-n) \left\{\begin{array}{c}< \\ = \\ >\end{array}\right\} 0 \end{align*}$$</span></p>
<p>This allows us to see that
<span class="math-container">$$P(1) < P(2) < P(3) = P(4) > P(5) > P(6) > \ldots > P(12)$$</span>
so that positions <span class="math-container">$3$</span> and <span class="math-container">$4$</span> are equally likely to be the winning positions, and they are the best two positions.</p>
|
2,310,109 | <p>I'm an undergraduate with little to no background in functional analysis and topology. The whole concept of function spaces is quite fuzzy to me, and I'm having a difficult time conceptualizing it. (Things like there being different notions of compactness in general topological spaces is one of many things confusing me because of what I've learned so far. I have learned many things which turn out to be true only in metric spaces, or in $\mathbb{R}^n$ specifically.)</p>
<p>Consider the following situation:</p>
<p>Let $A \subset \mathbb{R}^n$ and $[a, b] \subset \mathbb{R}$. Let $F$ denote the set of all functions from $A$ to $[a, b]$, and $G \subset F$ denote those functions which possess a particular attribute that we are interested in.</p>
<p>In order to finish a larger proof, I'd like to show that $G$ is closed; later on, I want to work with pointwise convergence of functions in $G$. Since we are dealing with a function space, I'm a bit unsure about how to do this, because I'm uncertain about what constitutes a limit point in this space.</p>
<p>I have shown that, for any sequence $(f_n)_1^\infty$ of functions in $G$ converging pointwise to some $f \in F$, $f$ must be in $G$. I believe that this shows that $G$ is closed, and I believe that it has to do with the connection between the product topology and pointwise convergence of functions, but I would really appreciate feedback on this; have I misunderstood what "closed set" means in this context?</p>
<p>Thanks!</p>
<p>EDIT: In my case, $G \subset F$ is the set of all functions $f:A \rightarrow [a, b]$ such that $f(u) + f(v) + f(w) = c(f)$ for any three pairwise orthogonal unit vectors $u, v, w$, where $c(f)$ is a constant depending only on $f$. </p>
<p>Second attempt: Take any function $f \in F \setminus G$. There exist two sets of orthogonal unit vectors $(v_1, v_2, v_3)$ and $(v_4, v_5, v_6)$ such that
\begin{equation*}
\Delta = \left| \sum_{i=1}^3 \big( f(v_i) - f(v_{i + 3}) \big) \right| > 0.
\end{equation*}
The set
\begin{equation*}
B_{\Delta/6} = \{ g \in F : \max_{1 \le i \le 6} |f(v_i) - g(v_i)| < \Delta / 6 \}.
\end{equation*}
is an open neighborhood of $f$. Take any $g \in B_{\Delta/6}$, and let $\delta_i = g(v_i) - f(v_i)$, for $i = 1, 2, \dots 6$, so that $|\delta_i| < \Delta / 6$. We get
\begin{equation*}
\begin{split}
\left| \sum_{i=1}^3 (g(v_i) - g(v_{i+3})) \right| = \left| \sum_{i=1}^3 (f(v_i) + \delta_i - f(v_{i+3}) - \delta_{i+3}) \right| \geq\\
\left| \sum_{i=1}^3 (f(v_i) - f(v_{i+3})) \right| - \left|\sum_{i=1}^3 (\delta_{i+3} - \delta_i) \right|
= \Delta - \left|\sum_{i=1}^3 (\delta_{i+3} - \delta_i) \right| > \Delta - \Delta = 0,
\end{split}
\end{equation*}
using the reverse triangle inequality. Thus $g \in F \setminus G$, so that $B_{ \Delta / 6} \subset F \setminus G$, meaning $F \setminus G$ is open. We conclude that $(F \setminus G)^C = G$ is closed.</p>
<p>Any thoughts?</p>
| Henno Brandsma | 4,280 | <p>If you are considering functions from $A \subseteq \mathbb{R}^n$ to $[a,b]$ in the pointwise topology, then for a condition like the one you describe in the comments, $f(v_1) +f(v_2) + f(v_3) \in C$ where $C$ is closed in $[a,b]$ (like a singleton or a finite set), the set of functions that satisfy it is indeed closed. To show this it does <em>not</em> suffice to consider sequences from this set, but nets will do, if you know about them. </p>
<p>Otherwise let $F = \{f :A \to [a,b]\}$ in the pointwise (= product) topology and set $D = \{f: A \to [a,b]: f(v_1) + f(v_2) + f(v_3) \in C\} = (s \circ p_v)^{-1}[C]$, where $p_v : F \to \mathbb{R}^3, p_v(f) = (f(v_1), f(v_2), f(v_3))$ is continuous as a projection, $s: \mathbb{R}^3 \to \mathbb{R}: s(x,y,z) = x+ y +z$ is continuous, and the inverse image of a closed set under a continuous map is closed. </p>
|
322,302 | <p>Conjectures play an important role in the development of mathematics.
Mathoverflow gives an interaction platform for mathematicians from various fields, while in general it is not always easy to get in touch with what happens in the other fields.</p>
<p><strong>Question</strong> What are the conjectures in your field proved or disproved (counterexample found) in recent years, which are noteworthy, but not so famous outside your field?</p>
<p>Answering the question you are welcome to give some comment for outsiders of your field which would help to appreciate the result.</p>
<p>Asking the question I keep in mind by "recent years" something like a dozen years before now, by a "conjecture" something which was known as an open problem for something like at least dozen years before it was proved and I would say the result for which the Fields medal was awarded like a proof of <a href="https://en.wikipedia.org/wiki/Fundamental_lemma_(Langlands_program)" rel="noreferrer">fundamental lemma</a> would not fit "not so famous", but on the other hand these might not be considered as strict criteria, and let us "assume a good will" of the answerer.</p>
| Sam Hopkins | 25,028 | <p>Karim Adiprasito proved the g-conjecture for spheres in a preprint that was posted in December of last year: <a href="https://arxiv.org/abs/1812.10454" rel="noreferrer">https://arxiv.org/abs/1812.10454</a>.</p>
<p>This was probably considered the biggest open problem in the combinatorics of simplicial complexes. See Gil Kalai's blog post: <a href="https://gilkalai.wordpress.com/2018/12/25/amazing-karim-adiprasito-proved-the-g-conjecture-for-spheres/" rel="noreferrer">https://gilkalai.wordpress.com/2018/12/25/amazing-karim-adiprasito-proved-the-g-conjecture-for-spheres/</a>.</p>
|
322,302 | <p>Conjectures play an important role in the development of mathematics.
Mathoverflow gives an interaction platform for mathematicians from various fields, while in general it is not always easy to get in touch with what happens in the other fields.</p>
<p><strong>Question</strong> What are the conjectures in your field proved or disproved (counterexample found) in recent years, which are noteworthy, but not so famous outside your field?</p>
<p>Answering the question you are welcome to give some comment for outsiders of your field which would help to appreciate the result.</p>
<p>Asking the question I keep in mind by "recent years" something like a dozen years before now, by a "conjecture" something which was known as an open problem for something like at least dozen years before it was proved and I would say the result for which the Fields medal was awarded like a proof of <a href="https://en.wikipedia.org/wiki/Fundamental_lemma_(Langlands_program)" rel="noreferrer">fundamental lemma</a> would not fit "not so famous", but on the other hand these might not be considered as strict criteria, and let us "assume a good will" of the answerer.</p>
| T_M | 122,587 | <p>S. T. Yau conjectured in the 80's that every compact Riemannian 3-manifold should contain infinitely many different minimal surfaces (smooth, closed). This was <a href="https://arxiv.org/abs/1806.08816" rel="noreferrer">proved last year</a> by <a href="https://web.math.princeton.edu/~aysong/" rel="noreferrer">Antoine Song</a>. </p>
<p>Song built on a long line of breakthroughs in the area by Fernando Marques and Andre Neves using min-max theory. Another earlier landmark was the solution of the Willmore conjecture about immersed tori: the Willmore energy <span class="math-container">$\int_{\Sigma}H^2$</span> of any smoothly immersed torus in <span class="math-container">$\mathbf{R}^3$</span> is at least <span class="math-container">$2\pi^2$</span>.</p>
|
322,302 | <p>Conjectures play an important role in the development of mathematics.
Mathoverflow gives an interaction platform for mathematicians from various fields, while in general it is not always easy to get in touch with what happens in the other fields.</p>
<p><strong>Question</strong> What are the conjectures in your field proved or disproved (counterexample found) in recent years, which are noteworthy, but not so famous outside your field?</p>
<p>Answering the question you are welcome to give some comment for outsiders of your field which would help to appreciate the result.</p>
<p>Asking the question I keep in mind by "recent years" something like a dozen years before now, by a "conjecture" something which was known as an open problem for something like at least dozen years before it was proved and I would say the result for which the Fields medal was awarded like a proof of <a href="https://en.wikipedia.org/wiki/Fundamental_lemma_(Langlands_program)" rel="noreferrer">fundamental lemma</a> would not fit "not so famous", but on the other hand these might not be considered as strict criteria, and let us "assume a good will" of the answerer.</p>
| Denis Nardin | 43,054 | <p>The <a href="https://en.wikipedia.org/wiki/Weibel%27s_conjecture" rel="noreferrer">Weibel conjecture</a> about negative K-groups was proven in 2018 by Moritz Kerz, Florian Strunk, Georg Tamme.</p>
<p>The conjecture states that if <span class="math-container">$X$</span> is a Noetherian scheme of Krull dimension <span class="math-container">$d$</span>, the negative K-groups <span class="math-container">$K_i(X)$</span> vanish when <span class="math-container">$i<-d$</span>. Moreover <span class="math-container">$\mathbb{A}^1$</span>-invariance also holds in that range, that is
<span class="math-container">$$K_i(X)\to K_i(X\times\mathbb{A}^r)$$</span>
is an isomorphism for <span class="math-container">$i\le -d$</span>.</p>
<p>The <a href="https://arxiv.org/abs/1611.08466" rel="noreferrer">paper</a> where they solve the conjecture is particularly remarkable because they use methods from derived algebraic geometry to solve a problem with apparently no relation to it.</p>
|
322,302 | <p>Conjectures play an important role in the development of mathematics.
Mathoverflow gives an interaction platform for mathematicians from various fields, while in general it is not always easy to get in touch with what happens in the other fields.</p>
<p><strong>Question</strong> What are the conjectures in your field proved or disproved (counterexample found) in recent years, which are noteworthy, but not so famous outside your field?</p>
<p>Answering the question you are welcome to give some comment for outsiders of your field which would help to appreciate the result.</p>
<p>Asking the question I keep in mind by "recent years" something like a dozen years before now, by a "conjecture" something which was known as an open problem for something like at least dozen years before it was proved and I would say the result for which the Fields medal was awarded like a proof of <a href="https://en.wikipedia.org/wiki/Fundamental_lemma_(Langlands_program)" rel="noreferrer">fundamental lemma</a> would not fit "not so famous", but on the other hand these might not be considered as strict criteria, and let us "assume a good will" of the answerer.</p>
| KConrad | 3,272 | <p>In number theory, the Sato-Tate conjecture about elliptic curves over <span class="math-container">$\mathbf Q$</span> was a problem from the 1960s and Serre's conjecture on modularity of odd 2-dimensional Galois representations was a conjecture from the 1970s-1980s. Both were settled around 2008. (For the Sato-Tate conjecture, the initial proof needed an additional technical hypothesis -- not part of the original conjecture -- of a non-integral <span class="math-container">$j$</span>-invariant, which was later removed in 2011.) For those not familiar with these problems, their solutions build on ideas coming from the proof of Fermat's Last Theorem. </p>
|
322,302 | <p>Conjectures play an important role in the development of mathematics.
Mathoverflow gives an interaction platform for mathematicians from various fields, while in general it is not always easy to get in touch with what happens in the other fields.</p>
<p><strong>Question</strong> What are the conjectures in your field proved or disproved (counterexample found) in recent years, which are noteworthy, but not so famous outside your field?</p>
<p>Answering the question you are welcome to give some comment for outsiders of your field which would help to appreciate the result.</p>
<p>Asking the question I keep in mind by "recent years" something like a dozen years before now, by a "conjecture" something which was known as an open problem for something like at least dozen years before it was proved and I would say the result for which the Fields medal was awarded like a proof of <a href="https://en.wikipedia.org/wiki/Fundamental_lemma_(Langlands_program)" rel="noreferrer">fundamental lemma</a> would not fit "not so famous", but on the other hand these might not be considered as strict criteria, and let us "assume a good will" of the answerer.</p>
| David White | 11,540 | <p><a href="https://en.wikipedia.org/wiki/Vop%C4%9Bnka%27s_principle" rel="nofollow noreferrer">Vopenka's Principle</a> is a large cardinal axiom that has several equivalent formulations. Arguably the simplest is the statement</p>
<p><em>For every proper class of graphs there exists a non-identity homomorphism between two graphs in that class.</em></p>
<p>Papers on Vopenka's Principle (VP) go back to 1965. In 1988, Adamek, Rosicky, and Trnkova introduced the Weak Vopenka Principle (WVP), proved that VP implies WVP, and asked if WVP implied VP. This was finally <a href="https://arxiv.org/abs/1909.09333v1" rel="nofollow noreferrer">answered in 2019 by Trevor Wilson</a> (published in Advances). From the abstract:</p>
<blockquote>
<p>Vopenka’s Principle says that the category of graphs has no large discrete full
subcategory, or equivalently that the category of ordinals cannot be fully embedded into it.
Weak Vopenka's Principle is the dual statement, which says that the opposite category of
ordinals cannot be fully embedded into the category of graphs. It was introduced in 1988 by
Adamek, Rosicky, and Trnkova, who showed that it follows from Vopenka’s Principle and
asked whether the two statements are equivalent. We show that they are not.</p>
</blockquote>
|
3,882,566 | <p>We have <span class="math-container">$0<b≤ a$</span>, and:</p>
<p><span class="math-container">$$\underbrace{\dfrac{1+⋯+a^7+a^8}{1+⋯+a^8+a^9}}_{A} \quad \text{and} \quad \underbrace{\dfrac{1+⋯+b^7+b^8}{1+⋯+b^8+b^9}}_{B}$$</span></p>
<p>Source: Lumbreras Editors</p>
<hr />
<p>Here was my strategy:</p>
<p><span class="math-container">$1 ≤ \dfrac{1+⋯+a^8}{1+⋯+b^8}$</span> since <span class="math-container">$a^p ≥ b^p$</span>, <span class="math-container">$∀\, p ≥ 0 $</span> (monotone function).</p>
<p>We also have that <span class="math-container">$ \dfrac{a}{b} ≤ 1 $</span></p>
| xpaul | 66,420 | <p>Let
<span class="math-container">$$ f(x)=\dfrac{1+⋯+x^7+x^8}{1+⋯+x^8+x^9}. $$</span>
Then
<span class="math-container">$$ f'(x)=-\frac{x^8(x^8+2x^7+3x^6+4x^5+5x^4+6x^3+7x^2+8x+9)}{(1+⋯+x^8+x^9)^2}\le0 $$</span>and hence <span class="math-container">$f(x)$</span> is decreasing. So if <span class="math-container">$a\ge b$</span>, then <span class="math-container">$f(a)\le f(b)$</span>.</p>
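The quotient-rule computation above is mechanical and easy to double-check with exact arithmetic. Here is a quick sketch (the helper functions are mine, written for illustration):

```python
from fractions import Fraction

def poly_mul(p, q):
    # multiply polynomials given as coefficient lists, lowest degree first
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def poly_diff(p):
    # formal derivative of a coefficient list
    return [k * c for k, c in enumerate(p)][1:]

g = [1] * 9    # 1 + x + ... + x^8
h = [1] * 10   # 1 + x + ... + x^9

# numerator of f' = (g'h - g h') / h^2
gh = poly_mul(poly_diff(g), h)
hg = poly_mul(g, poly_diff(h))
numerator = [a - b for a, b in zip(gh, hg)]

# claimed numerator: -x^8 (x^8 + 2x^7 + ... + 8x + 9), i.e. -x^8 * sum (9-k) x^k
claimed = poly_mul([0] * 8 + [-1], [9 - k for k in range(9)])
assert numerator == claimed

# and f is indeed decreasing, e.g. f(2) < f(1)
f = lambda x: sum(x**k for k in range(9)) / sum(x**k for k in range(10))
assert f(Fraction(2)) < f(Fraction(1))
```

The final check illustrates the monotonicity consequence: since $f$ is decreasing, $a\ge b$ gives $f(a)\le f(b)$.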
|
25,100 | <p>Suppose one has a set $S$ of positive real numbers, such that the usual numerical ordering on $S$ is a well-ordering. Is it possible for $S$ to have any countable ordinal as its order type, or are the order types that can be formed in this way more restricted than that?</p>
| Robin Chapman | 4,213 | <p>Yes, one can have any countable ordering. Indeed any countable totally
ordered set can be embedded in $\mathbb{Q}$. Write your ordered set as
$ \lbrace a_1,a_2,\ldots \rbrace $
and define the embedding recursively: once you have placed $a_1,\ldots,a_{n-1}$
there will always be an interval to slot $a_n$ into.</p>
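The recursive construction can be made completely explicit. Below is a small sketch (the function name and the example enumeration are my own choices) that slots each new element into a gap between the rationals already used:

```python
from fractions import Fraction

def embed_in_Q(elements, less):
    # place a_1, a_2, ... one at a time, slotting each into a gap
    placed = {}
    for a in elements:
        below = [q for b, q in placed.items() if less(b, a)]
        above = [q for b, q in placed.items() if less(a, b)]
        if not below and not above:
            q = Fraction(0)
        elif not below:
            q = min(above) - 1
        elif not above:
            q = max(below) + 1
        else:
            q = (max(below) + min(above)) / 2   # a rational always exists here
        placed[a] = q
    return placed

# example: order type omega + omega, enumerated by interleaving the two copies
pts = [(n % 2, n // 2) for n in range(20)]       # lexicographic order
emb = embed_in_Q(pts, lambda u, v: u < v)
assert all((u < v) == (emb[u] < emb[v]) for u in pts for v in pts)
```

Because any two distinct rationals have a rational strictly between them, the slotting step never fails; this is exactly the interval argument above.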
|
2,747,896 | <p>I am a bit confused about cardinality at the moment. I know that the cardinality of $\mathbb{R}$ is equal to $\lvert(0,1)\rvert$, but does that mean they are equal to infinity? If not, what are they equal to?</p>
| Siong Thye Goh | 306,553 | <p>The cardinaility of $\mathbb{R}$ is infinite, but note that there are various forms of infinity. </p>
<p>We use the symbol $\aleph_0$ to denote $|\mathbb{N}|$, the cardinality of the set of natural numbers.</p>
<p>The symbol $2^{\aleph_0}$ denotes $|\mathbb{R}|$. </p>
<p>It is a famous theorem of Cantor that $2^{\aleph_0}> \aleph_0$.</p>
<p>(The smallest infinite larger than $\aleph_0$ is denoted $\aleph_1$. The <em>continuum hypothesis</em> is the claim that $2^{\aleph_0}=\aleph_1$. Neither it nor its negation can be proved using the usual axioms of set theory.)</p>
|
1,321,233 | <p>German Wikipedia states that Ramsey's theorem is a generalization of the pigeonhole principle <a href="http://de.wikipedia.org/wiki/Satz_von_Ramsey" rel="noreferrer" title="source">source</a></p>
<p>But it does not say why this is true. I am doing a presentation about Ramsey theory and also want to explain why this is true, but since I am not really a mathematician myself I can't figure it out on my own.</p>
<p>So my question is: </p>
<p><em>What is an easy explanation for the fact that Ramsey's theorem is a generalization of the pigeonhole principle?</em></p>
<p>Thanks for any help in advance</p>
| MJD | 25,554 | <p>Ramsey's theorem, in general, says that for given $n, t, $ and $k$, there is a number $R$ (depending on $(n,t,k)$) such that for every $R$-element set $S$ and every $k$-coloring of the $t$-element subsets of $S$ $$f : S^{\{1,2,\ldots, t\}} \to \{1,2,\ldots, k\}$$ there is an $n$-element subset $S'\subset S$ such that $f$ is constant when restricted to the $t$-element subsets of $S'$.</p>
<p>The Ramsey theorem for graphs takes $S$ to be the set of vertices of an $R$-clique, and $t=2$; by coloring 2-sets of vertices we are coloring edges of the clique. $S'$ is then the $n$-vertex subgraph of $S$ whose edges are all the same color.</p>
<p>The pigeonhole principle takes $S$ to be some unstructured set and $t=1$. Then $f$ is a $k$-coloring of the $1$-sets of $S$ (that is, the elements) and the pigeonhole principle tells us that for any given $n$ and $k$, there is an $R(n, 1, k)$, namely $R=kn-k+1$, so that whenever $S$ has $R$ elements, there must be an $n$-element subset $S'$ on which $f$ is constant.</p>
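In the $t=1$ case the claimed value $R(n,1,k)=kn-k+1$ can even be verified exhaustively for small parameters. A quick sketch (the helper name is mine):

```python
from itertools import product
from collections import Counter

def R(n, k):
    return k * n - k + 1   # the claimed pigeonhole Ramsey number for t = 1

for n in range(1, 4):
    for k in range(1, 4):
        # with R(n, k) elements, every k-coloring has some color used >= n times
        assert all(max(Counter(c).values()) >= n
                   for c in product(range(k), repeat=R(n, k)))
        # and R(n, k) is sharp: with one element fewer, some coloring avoids it
        if R(n, k) > 1:
            assert any(max(Counter(c).values()) < n
                       for c in product(range(k), repeat=R(n, k) - 1))
```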
|
1,802,515 | <blockquote>
<p>Say you have a bank account in which your invested money yields 3% every year, continuously compounded. Also, you have estimated that you spend $1000 every month to pay your bills, that are withdrawn from this account.</p>
<p>Create a differential model for that, find its equilibriums and determine its stability.</p>
</blockquote>
<p>My problem here is that the \$1000 withdrawal is not continuous in time, it's discrete. The best I could achieve is, if <span class="math-container">$S(t)$</span> is the current balance: <span class="math-container">$\dot S (t) = 0.0025S(t) - 1000$</span>. I'm using <span class="math-container">$0.0025$</span> as the interest rate because it yields 3% every year, so it should yield 0.25% every month. But I'm pretty confident that it's wrong. Any help would be highly appreciated! Thanks!</p>
| Rodrigo de Azevedo | 339,790 | <p>Let $x (t)$ be the amount of money in the account at time $t$ (years). Hence, if no money is spent,</p>
<p>$$\dot x = r x$$</p>
<p>where $r = \ln (1.03)$. If $\$1000$ is spent <strong>continuously</strong> every month, then we have the ODE</p>
<p>$$\dot x = r x - 12000$$</p>
<p>We have an equilibrium point when we have</p>
<p>$$\bar{x} := \frac{12000}{r} \approx \$406,000$$</p>
<p>in the account, as the interest earned per year then equals the amount of money expended per year. If we have more than $\bar{x}$ in the bank, then our wealth is growing. If we have less than $\bar{x}$ in the bank, then our wealth is decaying. Let us verify. Integrating the non-homogeneous ODE above, we obtain</p>
<p>$$x (t) = \bar{x} + (x_0 - \bar{x}) \, \mathrm{e}^{r t}$$</p>
<p>If $x_0 > \bar{x}$, our wealth is growing. If $x_0 < \bar{x}$, our wealth is decaying. If $x_0 = \bar{x}$, our wealth is stationary. Note that $\bar{x}$ is an <strong>unstable</strong> equilibrium point.</p>
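The phase-line picture can be cross-checked numerically. A minimal sketch (the step size, horizon, and tolerances are my own arbitrary choices):

```python
import math

r = math.log(1.03)        # continuous rate equivalent to 3% effective per year
xbar = 12000 / r          # equilibrium balance, about $406,000

def balance(x0, t):
    # closed-form solution of x' = r x - 12000
    return xbar + (x0 - xbar) * math.exp(r * t)

assert abs(balance(xbar, 50) - xbar) < 1e-6        # equilibrium is stationary
assert balance(xbar + 1000, 50) > xbar + 1000      # above it, wealth grows
assert balance(xbar - 1000, 50) < xbar - 1000      # below it, wealth decays

# a crude explicit Euler integration agrees with the closed form
x, dt = 100000.0, 1e-4
for _ in range(100_000):                           # 10 years in steps of dt
    x += (r * x - 12000) * dt
assert abs(x - balance(100000, 10)) < 50
```

The growth of a small perturbation away from $\bar{x}$ is exactly what makes the equilibrium unstable.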
|
354,250 | <p><strong>Remark:</strong> All the answers so far have been very insightful and on point but after receiving public and private feedback from other mathematicians on the MathOverflow I decided to clarify a few notions and add contextual information. 08/03/2020.</p>
<h2>Motivation:</h2>
<p>I recently had an interesting exchange with several computational neuroscientists on whether organisms with spatiotemporal sensory input can simulate physics without computing partial derivatives. As far as I know, partial derivatives offer the most quantitatively precise description of spatiotemporal variations. Regarding feasibility, it is worth noting that a number of computational neuroscientists are seriously considering the question that human brains might do reverse-mode automatic differentiation, or what some call backpropagation [7].</p>
<p>Having said this, a large number of computational neuroscientists (even those that have math PhDs) believe that complex systems such as brains may simulate classical mechanical phenomena without computing approximations to partial derivatives. Hence my decision to share this question.</p>
<h2>Problem definition:</h2>
<p>Might there be an alternative formulation for mathematical physics which doesn't employ the use of partial derivatives? I think that this may be a problem in reverse mathematics [6]. But, in order to define equivalence a couple definitions are required:</p>
<p><strong>Partial Derivative as a linear map:</strong></p>
<p>If the derivative of a differentiable function <span class="math-container">$f: \mathbb{R}^n \rightarrow \mathbb{R}^m$</span> at <span class="math-container">$x_o \in \mathbb{R}^n$</span> is given by the Jacobian <span class="math-container">$\frac{\partial f}{\partial x} \Bigr\rvert_{x=x_o} \in \mathbb{R}^{m \times n}$</span>, the partial derivative with respect to <span class="math-container">$i \in [n]$</span> is the <span class="math-container">$i$</span>th column of <span class="math-container">$\frac{\partial f}{\partial x} \Bigr\rvert_{x=x_o}$</span> and may be computed using the <span class="math-container">$i$</span>th standard basis vector <span class="math-container">$e_i$</span>:</p>
<p><span class="math-container">\begin{equation}
\frac{\partial{f}}{\partial{x_i}} \Bigr\rvert_{x=x_o} = \lim_{n \to \infty} n \cdot \big(f(x+\frac{1}{n}\cdot e_i)-f(x)\big) \Bigr\rvert_{x=x_o}. \tag{1}
\end{equation}</span></p>
<p>This is the general setting of numerical differentiation [3].</p>
<p><strong>Partial Derivative as an operator:</strong></p>
<p>Within the setting of automatic differentiation [4], computer scientists construct algorithms <span class="math-container">$\nabla$</span> for computing the dual program <span class="math-container">$\nabla f: \mathbb{R}^n \rightarrow \mathbb{R}^m$</span> which corresponds to an operator definition for the partial derivative with respect to the <span class="math-container">$i$</span>th coordinate:</p>
<p><span class="math-container">\begin{equation}
\nabla_i = e_i \frac{\partial}{\partial x_i} \tag{2}
\end{equation}</span></p>
<p><span class="math-container">\begin{equation}
\nabla = \sum_{i=1}^n \nabla_i = \sum_{i=1}^n e_i \frac{\partial}{\partial x_i}. \tag{3}
\end{equation}</span></p>
<p>Given these definitions, a constructive test would involve creating an open-source library for simulating classical and quantum systems that doesn’t contain a method for numerical or automatic differentiation.</p>
<h2>The special case of classical mechanics:</h2>
<p>For concreteness, we may consider classical mechanics as this is the general setting of animal locomotion, and the vector, Hamiltonian, and Lagrangian formulations of classical mechanics have concise descriptions. In all of these formulations the partial derivative plays a central role. But, at the present moment I don't have a proof that rules out alternative formulations. Has this particular question already been addressed by a mathematical physicist?</p>
<p>Perhaps a reasonable option might be to use a probabilistic framework such as Gaussian Processes that are provably universal function approximators [5]?</p>
<h2>Koopman Von Neumann Classical Mechanics as a candidate solution:</h2>
<p>After reflecting upon the answers of Ben Crowell and <a href="https://mathoverflow.net/a/354289">gmvh</a>, it appears that we require a formulation of classical mechanics where:</p>
<ol>
<li>Everything is formulated in terms of linear operators.</li>
<li>All problems can then be recast in an algebraic language.</li>
</ol>
<p>After doing a literature search it appears that Koopman Von Neumann Classical Mechanics might be a suitable candidate as we have an operator theory in Hilbert space similar to Quantum Mechanics [8,9,10]. That said, I just recently came across this formulation so there may be important subtleties I ignore.</p>
<h2>Related problems:</h2>
<p>Furthermore, I think it may be worth considering the following related questions:</p>
<ol>
<li>What would be left of mathematical physics if we could not compute partial derivatives?</li>
<li>Is it possible to accurately simulate any non-trivial physics without computing partial derivatives?</li>
<li>Are the operations of multivariable calculus necessary and sufficient for modelling classical mechanical phenomena?</li>
</ol>
<h2>A historical note:</h2>
<p>It is worth noting that more than 1000 years ago as a result of his profound studies on optics the mathematician and physicist Ibn al-Haytham(aka Alhazen) reached the following insight:</p>
<blockquote>
<p>Nothing of what is visible, apart from light and color, can be
perceived by pure sensation, but only by discernment, inference, and
recognition, in addition to sensation.-Alhazen</p>
</blockquote>
<p>Today it is known that even color is a construction of the mind as photons are the only physical objects that reach the retina. However, broadly speaking neuroscience is just beginning to catch up with Alhazen’s understanding that the physics of everyday experience is simulated by our minds. In particular, most motor-control scientists agree that to a first-order approximation the key purpose of animal brains is to generate movements and consider their implications. This implicitly specifies a large class of continuous control problems which includes animal locomotion.</p>
<p>Evidence accumulated from several decades of neuroimaging studies implicates the role of the cerebellum in such internal modelling. This isolates a rather uniform brain region whose processes at the circuit-level may be identified with efficient and reliable methods for simulating classical mechanical phenomena [11, 12].</p>
<p>As for the question of whether the mind/brain may actually be modelled by Turing machines, I believe this was precisely Alan Turing’s motivation in conceiving the Turing machine [13]. For a concrete example of neural computation, it may be worth looking at recent research that a single dendritic compartment may compute the xor function: [14], <a href="https://www.reddit.com/r/MachineLearning/comments/ejbwvb/r_single_biological_neuron_can_compute_xor/" rel="nofollow noreferrer">Reddit discussion</a>.</p>
<h2>References:</h2>
<ol>
<li>William W. Symes. Partial Differential Equations of Mathematical Physics. 2012.</li>
<li>L.D. Landau & E.M. Lifshitz. Mechanics (Volume 1 of A Course of Theoretical Physics). Pergamon Press 1969.</li>
<li>Lyness, J. N.; Moler, C. B. (1967). "Numerical differentiation of analytic functions". SIAM J. Numer. Anal. 4: 202–210. <a href="https://doi.org/10.1137/0704019" rel="nofollow noreferrer">doi:10.1137/0704019</a>.</li>
<li>Naumann, Uwe (2012). The Art of Differentiating Computer Programs. Software-Environments-tools. SIAM. ISBN 978-1-611972-06-1.</li>
<li>Michael Osborne. Gaussian Processes for Prediction. Robotics Research Group
Department of Engineering Science
University of Oxford. 2007.</li>
<li>Connie Fan. REVERSE MATHEMATICS. University of Chicago. 2010.</li>
<li>Richards, B.A., Lillicrap, T.P., Beaudoin, P. et al. A deep learning framework for neuroscience. Nat Neurosci 22, 1761–1770 (2019). <a href="https://doi.org/10.1038/s41593-019-0520-2" rel="nofollow noreferrer">doi:10.1038/s41593-019-0520-2</a>.</li>
<li>Wikipedia contributors. "Koopman–von Neumann classical mechanics." Wikipedia, The Free Encyclopedia. Wikipedia, The Free Encyclopedia, 19 Feb. 2020. Web. 7 Mar. 2020.</li>
<li>Koopman, B. O. (1931). "Hamiltonian Systems and Transformations in Hilbert Space". Proceedings of the National Academy of Sciences. 17 (5): 315–318. Bibcode:1931PNAS...17..315K. <a href="https://doi.org/10.1073/pnas.17.5.315" rel="nofollow noreferrer">doi:10.1073/pnas.17.5.315</a>. PMC 1076052. PMID 16577368.</li>
<li>Frank Wilczek. Notes on Koopman von Neumann Mechanics, and a
Step Beyond. 2015.</li>
<li>Daniel McNamee and Daniel M. Wolpert. Internal Models in Biological Control. Annual Review of Control, Robotics, and Autonomous Systems. 2019.</li>
<li>Jörn Diedrichsen, Maedbh King, Carlos Hernandez-Castillo, Marty Sereno, and Richard B. Ivry. Universal Transform or Multiple Functionality? Understanding the Contribution of the Human Cerebellum across Task Domains. Neuron review. 2019.</li>
<li>Turing, A.M. (1936). "On Computable Numbers, with an Application to the Entscheidungsproblem". Proceedings of the London Mathematical Society. 2 (published 1937). 42: 230–265. <a href="https://doi.org/10.1112/plms/s2-42.1.230" rel="nofollow noreferrer">doi:10.1112/plms/s2-42.1.230</a>. (and Turing, A.M. (1938). "On Computable Numbers, with an Application to the Entscheidungsproblem: A correction". Proceedings of the London Mathematical Society.</li>
<li>Albert Gidon, Timothy Adam Zolnik, Pawel Fidzinski, Felix Bolduan, Athanasia Papoutsi, Panayiota Poirazi, Martin Holtkamp, Imre Vida, Matthew Evan Larkum. <a href="https://doi.org/10.1126/science.aax6239" rel="nofollow noreferrer">Dendritic action potentials and computation in human layer 2/3 cortical neurons</a>. Science. 2020.</li>
</ol>
| Piyush Grover | 30,684 | <p>One way for reformulating all of (classical) mechanics is Peridynamics, which does away with derivatives. It is essentially a non-local reformulation.</p>
<p>Javili, Ali, et al. "Peridynamics review." Mathematics and Mechanics of Solids 24.11 (2019): 3714-3739.</p>
|
229,161 | <p>A sequence of positive integers is defined as follows</p>
<blockquote>
<ul>
<li>The first term is $1$.</li>
<li>The next two terms are the next two even numbers $2$, $4$.</li>
<li>The next three terms are the next three odd numbers $5$, $7$, $9$.</li>
<li>The next $n$ terms are the next $n$ even numbers if $n$ is even or the next $n$ odd numbers if $n$ is odd.</li>
</ul>
</blockquote>
<p>What is the general term $a_n?$</p>
<p><strong>Please, proofs of all these formulas would be nice</strong></p>
| pedrosorio | 47,869 | <p>Your sequence is the following one:</p>
<p><a href="http://oeis.org/A001614" rel="nofollow">http://oeis.org/A001614</a></p>
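For what it's worth, that entry is the Connell sequence, and its page lists a closed form which I quote from memory (so treat it as an assumption to verify): $a(n) = 2n - \left\lfloor \frac{1+\sqrt{8n-7}}{2} \right\rfloor$. A quick sketch generating the sequence block by block and comparing:

```python
import math

def connell(count):
    # 1 odd number, then the next 2 even numbers, then the next 3 odd numbers, ...
    terms, last, block = [], 0, 0
    while len(terms) < count:
        block += 1
        # parity alternates from block to block, so last + 1 already
        # has the right parity for the new block
        value = last + 1
        for _ in range(block):
            terms.append(value)
            last = value
            value += 2
    return terms[:count]

print(connell(10))  # [1, 2, 4, 5, 7, 9, 10, 12, 14, 16]

# compare with the closed form a(n) = 2n - floor((1 + sqrt(8n - 7)) / 2)
closed = [2 * n - (1 + math.isqrt(8 * n - 7)) // 2 for n in range(1, 201)]
assert connell(200) == closed
```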
|
2,618,856 | <p>I am trying to solve the problem:</p>
<blockquote>
<p><em>Question:</em> In a ring $R$ with identity, if every idempotent is central, then prove that for $a, b \in R$, $$ab =1 \implies ba=1$$</p>
</blockquote>
<p>I have done in the following manner:
$$ab=1\\
\implies b(ab)=b\\
\implies (ba)b-b=0\\
\implies (ba-1)b=0$$</p>
<p>Case 1: If $R$ contains no divisor of Zero then as $b \ne 0$ ,we get $$ba-1=0 \implies ba=1$$</p>
<p>Case 2: If $R$ contains divisor of Zeros then we may have $$ba-1\ne0\\
\implies ba\ne1\\
\implies (ab)a\ne a$$
But $ab=1$ hence $a\ne a$, an absurd condition. So $ba-1=0$ or $ba=1$. </p>
<p>I think I have solved the problem but I haven't used the given conditions "Every idempotent is central". So I think there is something wrong which I have done but can't find out! Please rectify my mistake if I am wrong and provide any hint to solve it.</p>
| Alex Zorn | 62,875 | <p>Here's kind of a slick solution (don't read further if you want to figure it out for yourself!)</p>
<p>$$ab = 1$$
$$\Longrightarrow (ba)^{2} = b(ab)a = ba$$</p>
<p>So $ba$ is an idempotent and therefore central. So we have:</p>
<p>$$b(ba) = (ba)b$$
$$\Rightarrow ab(ba) = a(ba)b$$
$$\Rightarrow (ab)(ba) = (ab)(ab)$$
$$\Rightarrow ba = 1$$</p>
|
2,618,856 | <p>I am trying to solve the problem:</p>
<blockquote>
<p><em>Question:</em> In a ring $R$ with identity, if every idempotent is central, then prove that for $a, b \in R$, $$ab =1 \implies ba=1$$</p>
</blockquote>
<p>I have done in the following manner:
$$ab=1\\
\implies b(ab)=b\\
\implies (ba)b-b=0\\
\implies (ba-1)b=0$$</p>
<p>Case 1: If $R$ contains no divisor of Zero then as $b \ne 0$ ,we get $$ba-1=0 \implies ba=1$$</p>
<p>Case 2: If $R$ contains divisor of Zeros then we may have $$ba-1\ne0\\
\implies ba\ne1\\
\implies (ab)a\ne a$$
But $ab=1$ hence $a\ne a$, an absurd condition. So $ba-1=0$ or $ba=1$. </p>
<p>I think I have solved the problem but I haven't used the given conditions "Every idempotent is central". So I think there is something wrong which I have done but can't find out! Please rectify my mistake if I am wrong and provide any hint to solve it.</p>
| rschwieb | 29,335 | <p>As mentioned in the comments, $ba\neq 1$ does not imply $aba\neq a$. In rings for which there exists $ab=1$ and $ba\neq 1$ (<a href="https://math.stackexchange.com/q/1917154/29335">see here for examples</a>) you have obviously that $aba=(ab)a=a$.</p>
<p>But clearly, if $ab=1$, $ba$ is at least idempotent, hence central by your hypotheses.</p>
<p>Then</p>
<p>$ba=(ba)ab =a(ba)b=abab =1$.</p>
|
207,865 | <p>It is known that all $B$, $C$ and $D$ are $3 \times 3$ matrices. And the eigenvalues of $B$ are $1, 2, 3$; $C$ are $4, 5, 6$; and $D$ are $7, 8, 9$. What are the eigenvalues of the $6 \times 6$ matrix
$$\begin{pmatrix}
B & C\\0 & D
\end{pmatrix}$$
where $0$ is the $3 \times 3$ matrix whose entries are all $0$.
From my intuition, I think the eigenvalues of the new $6 \times 6$ matrix are the eigenvalues of $B$ and $D$. But how can I show that? </p>
| Robert Israel | 8,508 | <p>Hint:
$$ \pmatrix{B & C\cr 0 & D\cr} \pmatrix{B^{-1} & E\cr 0 & D^{-1}\cr} = \pmatrix{I & BE + CD^{-1}\cr 0 & I\cr} $$
What $E$ will make $BE + CD^{-1} = 0$?</p>
|
207,865 | <p>It is known that all $B$, $C$ and $D$ are $3 \times 3$ matrices. And the eigenvalues of $B$ are $1, 2, 3$; $C$ are $4, 5, 6$; and $D$ are $7, 8, 9$. What are the eigenvalues of the $6 \times 6$ matrix
$$\begin{pmatrix}
B & C\\0 & D
\end{pmatrix}$$
where $0$ is the $3 \times 3$ matrix whose entries are all $0$.
From my intuition, I think the eigenvalues of the new $6 \times 6$ matrix are the eigenvalues of $B$ and $D$. But how can I show that? </p>
| PAD | 27,304 | <p>If $\lambda$ is an eigenvalue of $B$ with eigenvector $(x_1, x_2, x_3)^t$ then $(x_1,x_2,x_3,0,0,0)^t$ is an eigenvector of the block matrix. Similarly, if $\mu$ is an eigenvalue of $D$ with <em>left</em> (row) eigenvector $(y_1,y_2,y_3)$, then $(0,0,0,y_1,y_2,y_3)$ is a left eigenvector of the block matrix, so the eigenvalues of $D$ occur as well. (Note that $(0,0,0,y_1,y_2,y_3)^t$ need not be an ordinary eigenvector, because the block $C$ gets in the way.) </p>
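Your intuition can also be sanity-checked with a small exact computation. In the sketch below (entirely my own construction) $B$ and $D$ are companion matrices with the prescribed eigenvalues and $C$ is arbitrary:

```python
from itertools import permutations

def det(M):
    # Leibniz permutation formula; fine for a 6x6 integer matrix
    n, total = len(M), 0
    for p in permutations(range(n)):
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
        prod = 1
        for i, pi in enumerate(p):
            prod *= M[i][pi]
        total += (-1) ** inv * prod
    return total

# companion matrices of (x-1)(x-2)(x-3) and (x-7)(x-8)(x-9)
B = [[0, 0, 6], [1, 0, -11], [0, 1, 6]]
D = [[0, 0, 504], [1, 0, -191], [0, 1, 24]]
C = [[4, 5, 6], [6, 4, 5], [5, 6, 4]]   # arbitrary: C plays no role

M = [B[i] + C[i] for i in range(3)] + [[0, 0, 0] + D[i] for i in range(3)]

def char_poly(lam):
    # det(M - lam * I), evaluated exactly
    return det([[M[i][j] - (lam if i == j else 0) for j in range(6)]
                for i in range(6)])

assert all(char_poly(lam) == 0 for lam in (1, 2, 3, 7, 8, 9))
assert char_poly(0) == 6 * 504   # det M = det B * det D
assert char_poly(4) != 0
```

The characteristic polynomial vanishes exactly at $1,2,3,7,8,9$, and its value at $0$ is $\det B \cdot \det D$.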
|
1,497,898 | <p>Consider the polynomial $$f(x)=x^4-x^3+14x^2+5x+16$$ and $\mathbb{F}_p$ be the field with $p$ elements, where $p$ is prime. Then</p>
<ol>
<li>Considering $f$ as a polynomial with coefficients in $\mathbb{F_3}$, it has no roots in $\mathbb{F_3}$.</li>
<li><p>Considering $f$ as a polynomial with coefficients in $\mathbb{F_3}$, it is a product of two irreducible factors of degree $2$ over $\mathbb{F_3}$.</p></li>
<li><p>Considering $f$ as a polynomial with coefficients in $\mathbb{F_7}$, it has an irreducible factor of degree $3$.</p></li>
<li>$f$ is a product of two polynomials of degree two over $\mathbb{Z}$.</li>
</ol>
<p><strong>My work:</strong>
$$f(x)=x^4+2x^3+2x^2+2x+1$$ in $\mathbb{F_3}$
which can be written as
$$f(x)=(x^2+1)(x+1)^2$$ hence $1$ and $2$ are wrong. </p>
<p>In $\mathbb{F_7}$, we can write</p>
<p>$$f(x)=x^4+6x^3+5x+2,$$ which is reducible, but I am unable to conclude $3$ precisely, and I am also having problems with $4$. </p>
<p>Am I right with my conclusions? Help me out. </p>
| IrbidMath | 255,977 | <p>You can write $f(x) = x^4 - x^3 -2x + 2 ~(\mod 7)$ and you can check that $f(1) = 0$. Hence $(x-1)$ is a factor of $f(x) ~(\mod 7)$.</p>
<p>After factoring we get $f(x) = (x-1)(x^3-2) ~(\mod 7)$. So if we prove that $x^3 - 2$ is irreducible $\mod 7$, that means point $3$ is true and point $4$ is false: if $f(x)$ could be written as a product of two polynomials of degree two over $\mathbb{Z}$, then the same would hold for $f(x) ~(\mod 7)$, contradicting the fact that $f(x)$ has an irreducible factor of degree $3$. </p>
<p>To prove that $x^3 - 2$ is irreducible we can check $1^3, 2^3, \cdots , 6^3 ~(\mod 7)$; if none equals $2$ then we are done, since a cubic over a field is irreducible iff it has no root. This can also be seen from the theorem $a^{p-1} \equiv 1 ~(\mod p)$ for $(a,p)=1$: it gives $a^{\frac{p-1}{2}} \equiv \pm 1~ (\mod p)$, and here $\frac{p-1}{2} = 3$, so every nonzero cube is $\pm 1 \pmod 7$ and in particular never $2$.</p>
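The root-and-cube bookkeeping is quick to confirm by machine; a small sketch using nothing beyond modular evaluation (the helper name is mine):

```python
def eval_mod(coeffs, x, p):
    # evaluate a polynomial, coefficients listed from the constant term up
    return sum(c * pow(x, k, p) for k, c in enumerate(coeffs)) % p

f = [16, 5, 14, -1, 1]   # 16 + 5x + 14x^2 - x^3 + x^4

# over F_3, f has the root x = 2 (so statement 1 is false)
assert [x for x in range(3) if eval_mod(f, x, 3) == 0] == [2]

# over F_7, x = 1 is a root of f ...
assert eval_mod(f, 1, 7) == 0
# ... and x^3 - 2 has no root: every nonzero cube mod 7 is +-1
assert sorted({pow(x, 3, 7) for x in range(1, 7)}) == [1, 6]
assert all(pow(x, 3, 7) != 2 for x in range(7))
```

Since a cubic over a field is irreducible exactly when it has no root, the last two checks confirm that $x^3-2$ is irreducible over $\mathbb{F}_7$.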
|
3,325,114 | <blockquote>
<p>You are on an island inhabited only by knights, who always tell the truth, and knaves, who always lie. You meet two women who live there and ask the older one,</p>
<blockquote>
<p>"Is at least one of you a knave?"</p>
</blockquote>
<p>She responds yes or no, but! you do not yet have enough information to determine what they were. So you then ask the younger woman,</p>
<blockquote>
<p>"Are you two of the same type?"</p>
</blockquote>
<p>She answers yes or no and after that you know which type each is. What type is each?</p>
<ul>
<li>both knight</li>
<li>both knave</li>
<li>older knight, younger knave</li>
<li>older knave, younger knight</li>
<li>not enough information</li>
</ul>
</blockquote>
<p>I thought it should be "not enough information" but that seems wrong</p>
| hunter | 108,129 | <p>They are both knights. Here's one approach.</p>
<p>The answer to question one can't be "yes." If it's "yes," the only logical possibility is that the older one is a knight and the younger is a knave. But then you already have enough information to know which is which, and the riddle says you don't. So the answer to question one is "no."</p>
<p>Now the answer to question two can't be "no." If it's "no," then it's possible that the older is a knave and the younger is a knight, or that both are knaves. But then you don't have enough information to know which is which, and the riddle says you do. So the answer to question two is "yes."</p>
<p>Now that we know the answer to both questions, it then follows that they are both knights.</p>
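The same elimination can be replayed mechanically by enumerating all four type assignments. A sketch (the encoding is mine):

```python
from itertools import product

def says(is_knight, truth):
    # a knight reports the truth, a knave reports its negation
    return truth if is_knight else not truth

worlds = list(product([True, False], repeat=2))   # (older, younger), True = knight

# answer to "is at least one of you a knave?", given by the older woman
a1 = {w: says(w[0], not (w[0] and w[1])) for w in worlds}

# the riddle says answer 1 leaves things undetermined: > 1 consistent world
viable_a1 = [ans for ans in (True, False)
             if len([w for w in worlds if a1[w] == ans]) > 1]

results = []
for ans1 in viable_a1:
    survivors = [w for w in worlds if a1[w] == ans1]
    for ans2 in (True, False):
        # answer to "are you two of the same type?", by the younger woman;
        # the riddle says it pins the types down: exactly 1 consistent world
        final = [w for w in survivors if says(w[1], w[0] == w[1]) == ans2]
        if len(final) == 1:
            results.extend(final)

assert results == [(True, True)]   # both are knights
```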
|
1,557,165 | <p>Prove that
$$\int_1^\infty\frac{e^x}{x (e^x+1)}dx$$
does not converge.</p>
<p>How can I do that? I thought about turning it into the form of $\int_b^\infty\frac{dx}{x^a}$, but I find no easy way to get rid of the $e^x$.</p>
| mrf | 19,440 | <p>Note that the integrand is positive. Since
$$
\lim_{x\to\infty} \frac{e^x}{e^x+1} = 1,
$$
it follows that there is an $X$ such that $\dfrac{e^x}{e^x+1} \ge \dfrac12$ for $x \ge X$. Hence
\begin{align*}
\int_1^\infty \frac{e^x}{x(e^x+1)}\,dx &= \int_1^X \frac{e^x}{x(e^x+1)}\,dx + \int_X^\infty \frac{e^x}{x(e^x+1)}\,dx \\
&\ge \int_1^X \frac{e^x}{x(e^x+1)}\,dx + \frac12 \int_X^\infty \frac{1}{x}\,dx.
\end{align*}
The second of these integrals diverges (why?), so the original integral is also divergent.</p>
|
898,151 | <p>I have encountered a statement several times while proving the determinant of a block matrix. </p>
<blockquote>
<p>$$\det\pmatrix{A&0\\0&D}\; = \det(A)\det(D)$$</p>
</blockquote>
<p>where $A$ is $k\times k$ and $D$ is $n\times n$ matrix. How to prove this?</p>
<p>Thanks in advance.</p>
| Patrick Da Silva | 10,704 | <p>If your matrices have coefficients in an integral domain, you can pass to the field of fractions and take an algebraic closure to use the Jordan canonical form, in which case this equation becomes trivial since both sides of the equation are the product of the products of the eigenvalues of $A$ and $D$. </p>
<p>Otherwise you can use the Leibniz (permutation-sum) formula for the determinant and notice that the sum goes over all the permutations of $S_{k+n}$, in which case the only terms which "survive" are the permutations which map $\{ 1,\cdots, k\}$ to itself and $\{k+1,\cdots,k+n\}$ to itself. Then you can split the sum in two parts and obtain $\det(A) \det(D)$. This proof works over any (commutative) ring.</p>
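The permutation argument is concrete enough to check directly on an example. In the sketch below (my own, for illustration) every permutation that mixes the two blocks hits a zero entry, so only the block-preserving ones contribute:

```python
from itertools import permutations

def det(M):
    # determinant via the sum over all permutations
    n, total = len(M), 0
    for p in permutations(range(n)):
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
        prod = 1
        for i, pi in enumerate(p):
            prod *= M[i][pi]
        total += (-1) ** inv * prod
    return total

A = [[2, 3], [5, 7]]                      # det A = -1
D = [[1, 4, 0], [2, 1, 3], [0, 5, 1]]     # det D = -22
block = [row + [0, 0, 0] for row in A] + [[0, 0] + row for row in D]
assert det(block) == det(A) * det(D) == 22

# permutations mixing the two blocks always pick up a zero factor
for p in permutations(range(5)):
    if any(i < 2 <= pi or pi < 2 <= i for i, pi in enumerate(p)):
        assert any(block[i][pi] == 0 for i, pi in enumerate(p))
```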
<p>Hope that helps,</p>
|
1,029,485 | <p>I wish to show the following statement:</p>
<p>$
\forall x,y \in \mathbb{R}
$</p>
<p>$$
(x+y)^4 \leq 8(x^4 + y^4)
$$</p>
<p>What is the scope for generalisation?</p>
<p><strong>Edit:</strong></p>
<p>Apparently the above inequality can be shown using the Cauchy-Schwarz inequality. Could someone please elaborate, stating the vectors you are using in the Cauchy-Schwarz inequality: </p>
<p>$\ \ \forall \ \ v,w \in V, $ an inner product space,</p>
<p>$$|\langle v,w\rangle|^2 \leq \langle v,v \rangle \cdot \langle w,w \rangle$$</p>
<p>where $\langle v,w\rangle$ is an inner product.</p>
| GEO | 75,928 | <p>Regarding your edit and the question in the comment under OC-Sansoo's answer: (If I understand your issue right, you want reasoning for the choice of vectors?)</p>
<p>Start with the RHS of the inequality we want to show. </p>
<p>$$ 8\left(x^4+y^4\right) = \left(x^4+y^4\right)\left(2^2+2^2\right)$$
On the RHS we now have the vectors $\vec{v}=(x^2,y^2)$ and $\vec{w}=(2,2)$.</p>
<p>Now we apply the CS inequality the first time:
$$ \left(2x^2+2y^2\right)^2 \leq \left(x^4+y^4\right)\left(2^2+2^2\right)$$
We do the same procedure again with the LHS term in the bracket (the inner product of the vectors $\vec{v}$ and $\vec{w}$):
$$ 2x^2+2y^2= (1+1)(x^2+y^2)$$
Here we have the vectors $\vec{v}=(1,1)$ and $\vec{w}=(x,y)$.</p>
<p>Applying CS again:
$$ \left(x+y\right)^2 \leq (1+1)(x^2+y^2)$$</p>
<p>Now we are done, since $(x+y)^4\leq\left(2x^2+2y^2\right)^2$.</p>
<p>On a side note: In your edit the CS inequality should be:
$$|\langle v,w\rangle|^2 \leq \langle v,v \rangle \cdot \langle w,w \rangle$$</p>
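Since every quantity in the chain is a polynomial in $x$ and $y$, the three inequalities can be spot-checked with exact integer arithmetic; a quick sketch:

```python
import random

random.seed(0)
for _ in range(5000):
    x = random.randint(-100, 100)
    y = random.randint(-100, 100)
    # first Cauchy-Schwarz step: (2x^2 + 2y^2)^2 <= 8(x^4 + y^4)
    assert (2*x*x + 2*y*y)**2 <= 8*(x**4 + y**4)
    # second step: (x + y)^2 <= 2(x^2 + y^2)
    assert (x + y)**2 <= 2*(x*x + y*y)
    # chaining them gives the target inequality
    assert (x + y)**4 <= 8*(x**4 + y**4)

# equality holds exactly when x = y, where both steps are tight
assert (3 + 3)**4 == 8*(3**4 + 3**4)
```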
|
2,941,456 | <blockquote>
<p>Given <span class="math-container">$K$</span> elements between <span class="math-container">$1$</span> and <span class="math-container">$7$</span> (inclusive), how many ways can you arrange the elements s.t. their sum adds to <span class="math-container">$N$</span>? </p>
</blockquote>
<p>I can brute-force my way to counting the number of ways for small <span class="math-container">$K$</span> and <span class="math-container">$N$</span>, but is there a general formula that addresses this problem? I feel like this is a commonly discussed problem, but I just don't know what the solution is. This came out of something I'm working on (programming stuff). Thanks.</p>
| G Cab | 317,234 | <p>You are looking for
<span class="math-container">$$
\eqalign{
& N_{\,b} (s,r,m) = \cr
& = {\rm No}\,{\rm of}\,{\rm integer}\;{\rm solutions}\,{\rm to}\left\{ \matrix{
1 \le x_{\,j} \le r + 1 = 7 \hfill \cr
x_{\,1} + x_{\,2} + \cdots + x_{\,m = K} = s + K = N \hfill \cr} \right.\quad = \cr
& = {\rm No}\,{\rm of}\,{\rm integer}\;{\rm solutions}\,{\rm to}\left\{ \matrix{
0 \le y_{\,j} \le r = 6 \hfill \cr
y_{\,1} + y_{\,2} + \cdots + y_{\,m = K} = s = N - K \hfill \cr} \right.\quad \cr}
$$</span>
which is the <a href="https://en.wikipedia.org/wiki/Composition_%28combinatorics%29" rel="nofollow noreferrer">number of (weak) compositions</a> of <span class="math-container">$s$</span>, into exactly <span class="math-container">$m$</span> parts, restricted
to <span class="math-container">$\{0,1,\dots\,r\}$</span>.</p>
<p>The general formula is given by
<span class="math-container">$$
N_b (s,r,m)\quad \left| {\;0 \leqslant \text{integers }s,m,r} \right.\quad =
\sum\limits_{\left( {0\, \leqslant } \right)\,\,k\,\,\left( { \leqslant \,\frac{s}{r+1}\, \leqslant \,m} \right)}
{\left( { - 1} \right)^k \binom{m}{k}
\binom
{ s + m - 1 - k\left( {r + 1} \right) }
{ s - k\left( {r + 1} \right)}\ }
$$</span>
as explained in detail <a href="http://math.stackexchange.com/questions/992125/rolling-dice-problem/1680420#1680420">in this related post</a> and in <a href="http://www.mathpages.com/home/kmath337/kmath337.htm" rel="nofollow noreferrer">this article</a>.</p>
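The inclusion-exclusion formula is easy to test against brute-force enumeration for dice-sized parameters. In the sketch below (function names are mine) $r=6$, so the parts lie in $\{0,\dots,6\}$ and shifting each part by $1$ recovers values in $\{1,\dots,7\}$:

```python
from math import comb
from itertools import product

def n_compositions(s, r, m):
    # number of solutions of y_1 + ... + y_m = s with 0 <= y_j <= r,
    # via the inclusion-exclusion formula above
    total = 0
    for k in range(min(m, s // (r + 1)) + 1):
        total += (-1)**k * comb(m, k) * comb(s + m - 1 - k*(r + 1), m - 1)
    return total

def brute_force(s, r, m):
    return sum(1 for ys in product(range(r + 1), repeat=m) if sum(ys) == s)

# K values in {1,...,7} summing to N  <=>  s = N - K, r = 6, m = K
assert all(n_compositions(s, 6, m) == brute_force(s, 6, m)
           for m in range(1, 4) for s in range(6*m + 1))
assert n_compositions(7, 6, 2) == 6   # e.g. N = 9 with K = 2 values
```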
|
232,540 | <p>I'm trying to prove this conclusion but have some problems with one of the steps.</p>
<p>Assume $X_1,\ldots,X_n,\ldots$ is a sequence of Gaussian random variables, converging almost surely to $X$, prove that $X$ is Gaussian.</p>
<p>We use characteristic functions here. Since $|\phi_{X_n}(t)|\leq 1$, by the dominated convergence theorem, we have for any $t$</p>
<p>$$
\lim_{n\rightarrow\infty}e^{it\mu_n-t^2\sigma_n^2/2}=\lim_{n\rightarrow \infty}\phi_{X_n}(t) = \lim_{n\rightarrow \infty}\mathbb{E}\left[e^{itX_n}\right] = \mathbb{E}\left[e^{itX}\right] = \phi_X(t)
$$</p>
<p><strong>This is the step that I cannot figure out</strong>: $e^{it\mu_n-t^2\sigma_n^2/2}$ converges for any $t$ if and only if $\mu_n$ and $\sigma_n$ converge. </p>
<p>Let $\mu=\lim_n \mu_n$, and $\sigma=\lim_n\sigma_n$, then $\phi_X(t)=e^{it\mu-t^2\sigma^2/2}$, which proves that $X$ is a Gaussian random variable.</p>
<p>Why can we get that $\mu_n$ and $\sigma_n$ converge? This looks intuitive to me, but I cannot give a rigorous proof. </p>
| Davide Giraudo | 9,849 | <ul>
<li>First, we note that the sequences $\{\sigma_n\}$ and $\{\mu_n\}$ have to be bounded. It's a consequence of what was done in <a href="https://math.stackexchange.com/questions/116613/tightness-condition-in-the-case-of-normally-distributed-random-variables/116651#116651">this thread</a>, as we have in particular convergence in law. What we use is the following:</li>
</ul>
<blockquote>
<p>If $(X_n)_n$ is a sequence of random variables converging in distribution to $X$, then for each $\varepsilon$, there is $R$ such that for each $n$, $\mathbb P(|X_n|\geqslant R)\lt \varepsilon$ (tightness). </p>
</blockquote>
<p>To see that, we assume that $X_n$ and $X$ are non-negative (considering their absolute values). Let $F_n$, $F$ the cumulative distribution function of $X_n$, $X$. Take $t$ such that $F(t)\gt 1-\varepsilon$ and $t$ is a continuity point of $F$. Then $F_n(t)\gt 1-\varepsilon$ for $n\geqslant N$ for some $N$. And a finite collection of random variables is tight.</p>
<ul>
<li>Now, fix an arbitrary strictly increasing sequence $\{n_k\}$. We extract further sub-sequences of $\{\sigma_{n_k}\}$ and $\{\mu_{n_k}\}$, which converge respectively to $\sigma$ and $\mu$. Taking the modulus, we can see that $e^{-\sigma^2/2}=|\varphi_X(1)|$, so $\sigma$ is uniquely determined. </li>
<li>We have $e^{it\mu}=\varphi_X(t)e^{t\sigma^2/2}$ for all $t\in\Bbb R$, so $\mu$ is also completely determined. </li>
</ul>
|
340,886 | <p>Suppose $x=(x_1,x_2),y = (y_1,y_2) \in \mathbb{R}^2$. I noticed that
\begin{align*}
\|x\|^2 \|y\|^2 - \langle x,y \rangle^2 &=
x_1^2y_1^2 + x_1^2 y_2^2 + x_2^2 y_1^2 + x_2^2 y_2^2 - (x_1^2 y_1^2 + 2 x_1 y_1 x_2 y_2 + x_2^2 y_2^2) \\
&=(x_1 y_2)^2 - 2x_1 y_2 x_2 y_1 + (x_2 y_1)^2 \\
&=(x_1 y_2 - x_2 y_1)^2
\end{align*}
which proves the CSB inequality in dimension two. This raises the question:</p>
<blockquote>
<p>If $x = (x_1,\ldots,x_n),y=(y_1,\ldots,y_n) \in \mathbb{R}^n$, is there a polynomial $p \in \mathbb{R}[x_1,\ldots,x_n;y_1,\ldots,y_n]$ such that $ \|x\|^2 \|y\|^2 - \langle x,y \rangle^2 = p^2$?</p>
</blockquote>
| ShreevatsaR | 205 | <p>No (it is not a square of a polynomial for $n \ge 3$), but the right generalization, proving that it is nonnegative, is that it is a <em>sum of squares</em>.</p>
<p>For instance, for $n = 3$,
$$
\begin{align*}
\|x\|^2 \|y\|^2 - \langle x,y \rangle^2 &= (x_1y_2 - x_2y_1)^2 + (x_2y_3-x_3y_2)^2 + (x_3y_1 - x_1y_3)^2,
\end{align*}
$$</p>
<p>and in general
$$
\begin{align*}
\|x\|^2 \|y\|^2 - \langle x,y \rangle^2 &= \sum_{i < j}(x_iy_{j} - x_{j}y_{i})^2
\end{align*}
$$</p>
<hr>
<p>This is easy to prove algebraically: the left hand side is</p>
<p>$$
\begin{align*}
\|x\|^2 \|y\|^2 - \langle x,y \rangle^2
&= (\sum{x_i^2}\sum{y_j^2}) - (\sum{x_iy_i})^2 \\
&= \sum_{i=j}{x_i^2 y_j^2} + \sum_{i\neq j}{x_i^2y_j^2} - \sum_{i=j}{x_iy_ix_iy_i} - \sum_{i\neq j}{x_iy_ix_jy_j} \\
&= \sum_{i<j}{(x_i^2 y_j^2 + x_j^2 y_i^2)} - \sum_{i<j}{2x_iy_ix_jy_j} \\
&= \sum_{i<j}{(x_i^2 y_j^2 - 2x_iy_jx_jy_i + x_j^2y_i^2)} \\
&= \sum_{i < j}(x_iy_{j} - x_{j}y_{i})^2
\end{align*}
$$</p>
<p>This identity is known as <a href="http://en.wikipedia.org/wiki/Lagrange%27s_identity" rel="nofollow">Lagrange's identity</a>.</p>
<hr>
<p>This also shows that the left hand size is zero when for all pairs $(i,j)$, we have $x_iy_j - x_jy_i = 0$, i.e., $$\frac{y_i}{x_i} = \frac{y_j}{x_j}$$ (let's assume the $x_j$s are nonzero, for now), which is another way of saying that one vector is a multiple of the other, i.e., equality holds in the inequality when the two vectors are parallel.</p>
<hr>
<p>For the former (showing that it is not the square of a polynomial), consider for instance $n=3$. If $\|x\|^2 \|y\|^2 - \langle x,y \rangle^2$ is the square of a polynomial $p(x_1, x_2, x_3, y_1, y_2, y_3)$, then we can write the polynomial as $qx_1 + r$, where $q = q(x_2, x_3, y_1, y_2, y_3)$ and $r = r(x_2, x_3, y_1, y_2, y_3)$ are polynomials that don't depend on $x_1$. As $(qx_1+r)^2 = q^2x_1^2 + 2qrx_1 + r^2$, the coefficient of $x_1^2$ should be a square, but the coefficient is $y_1^2 + y_2^2 + y_3^2 - y_1^2 = y_2^2 + y_3^2$ (or in general, $\sum_{i=2}^{n}y_i^2$), which is not the square of a polynomial. (Proved similarly: if it is the square of $qy_2 + r$, then comparing coefficients of $y_2^2$ gives $q \equiv 1$, and comparing coefficents of $y_2$ gives $r \equiv 0$, which is not consistent with the rest.)</p>
<hr>
<p>As Erick Wong points out in the comments, this is related to (the solution of) <a href="http://en.wikipedia.org/wiki/Hilbert%27s_seventeenth_problem" rel="nofollow">Hilbert's seventeenth problem</a>, which says that any polynomial that takes only nonnegative values can be written as a sum of squares of rational functions. If we only care about representations as <a href="http://en.wikipedia.org/wiki/Polynomial_SOS" rel="nofollow">sum of squares of <em>polynomials</em></a>, any nonnegative polynomial can be approximated as closely as desired with a sum of squares of polynomials. See e.g. the book <a href="http://www.ams.org/publications/authors/books/postpub/surv-146" rel="nofollow"><em>Positive Polynomials and Sums of Squares</em></a> (<a href="http://www.ams.org/bookstore/pspdf/surv-146-prev.pdf" rel="nofollow">preview</a>).</p>
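<p>Since every quantity involved is an integer polynomial on integer vectors, Lagrange's identity can be verified exactly on random integer inputs (a sketch; names are mine):</p>

```python
from itertools import combinations
from random import randint, seed

def lagrange_gap(x, y):
    """Left-hand side: ||x||^2 ||y||^2 - <x, y>^2, computed directly."""
    return sum(a * a for a in x) * sum(b * b for b in y) - sum(a * b for a, b in zip(x, y)) ** 2

def sum_of_squares(x, y):
    """Right-hand side of Lagrange's identity: sum over i < j of (x_i y_j - x_j y_i)^2."""
    return sum((x[i] * y[j] - x[j] * y[i]) ** 2 for i, j in combinations(range(len(x)), 2))

seed(0)
for n in range(2, 7):
    x = [randint(-9, 9) for _ in range(n)]
    y = [randint(-9, 9) for _ in range(n)]
    assert lagrange_gap(x, y) == sum_of_squares(x, y)  # exact integer equality
```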
|
1,894,199 | <p>Evaluate definite integral: $$\int_{-\pi/2}^{\pi/2} \cos \left[\frac{\pi n}{2} +\left(a \sin t+b \cos t \right) \right] dt$$</p>
<p>$n$ is an integer. $a,b$ real numbers.</p>
<p>The purpose of the integral - computing matrix elements of an electron Hamiltonian in an elliptic ring in the quantum box basis.</p>
<p>Before you ask me what I've done already, I've got this integral from the original, much more complicated one.</p>
<p>Originally I just gave up and computed it numerically.</p>
<blockquote>
<p>But I wonder - is it possible to express the integral in closed form with Bessel functions? </p>
</blockquote>
<p>Or maybe some series, still better than numerical integration.</p>
<p>I'm not asking for a full solution, some hint would be fine. Or even just a reassurance that a closed form exists.</p>
| Robert Israel | 8,508 | <p>Write $a \sin(t) + b \cos(t) = c \cos(t-\delta)$ where $a = c \sin(\delta)$ and $b = c \cos(\delta)$. So now (depending on $n$ mod $4$) we want to look at
$\pm \int_{-\pi/2}^{\pi/2} \cos(c \cos(t-\delta))\; dt$ or $\pm \int_{-\pi/2}^{\pi/2} \sin(c \cos(t-\delta))\; dt$. </p>
<p>The integral with $\cos$ turns out nicely, because (due to symmetry) we can take the integral from $-\pi$ to $\pi$ and divide by $2$:</p>
<p>$$ \int_{-\pi/2}^{\pi/2} \cos(c \cos(t-\delta))\; dt = \pi J_0(c)$$</p>
<p>The integral with $\sin$ is not as nice. If we call it $F(c)$, then we have the differential equation</p>
<p>$$ c F''(c) + F'(c) + c F(c) = 2 \cos(\delta) \cos(c \sin(\delta))$$
with initial conditions
$$ F(0) = 0,\ F'(0) = 2 \cos(\delta)$$</p>
<p>whose solution, according to Maple, is</p>
<p>$$ F(c) = \pi \cos(\delta) Y_0(c) \int_0^c J_0(z) \cos(z \sin(\delta))\; dz
- J_0(c) \int_0^c Y_0(z) \cos(z \sin(\delta))\; dz $$</p>
<p>and I don't think those integrals over $z$ can be done in closed form.</p>
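<p>The $\cos$ identity can be checked numerically in pure Python (an illustrative sketch, not from the original answer; $J_0$ is summed from its power series rather than taken from a library):</p>

```python
from math import cos, pi, factorial

def j0(c, terms=40):
    """Bessel J0 from its power series: sum of (-1)^k (c/2)^(2k) / (k!)^2."""
    return sum((-1) ** k * (c / 2) ** (2 * k) / factorial(k) ** 2 for k in range(terms))

def lhs(c, delta, n=20000):
    """Midpoint rule for the integral of cos(c*cos(t - delta)) over [-pi/2, pi/2]."""
    h = pi / n
    return h * sum(cos(c * cos(-pi / 2 + (k + 0.5) * h - delta)) for k in range(n))

# The integrand has period pi in t, so the value is independent of delta.
for c, delta in [(0.5, 0.0), (2.0, 0.7), (3.0, -1.1)]:
    assert abs(lhs(c, delta) - pi * j0(c)) < 1e-6
```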
|
1,894,199 | <p>Evaluate definite integral: $$\int_{-\pi/2}^{\pi/2} \cos \left[\frac{\pi n}{2} +\left(a \sin t+b \cos t \right) \right] dt$$</p>
<p>$n$ is an integer. $a,b$ real numbers.</p>
<p>The purpose of the integral - computing matrix elements of an electron Hamiltonian in an elliptic ring in the quantum box basis.</p>
<p>Before you ask me what I've done already, I've got this integral from the original, much more complicated one.</p>
<p>Originally I just gave up and computed it numerically.</p>
<blockquote>
<p>But I wonder - is it possible to express the integral in closed form with Bessel functions? </p>
</blockquote>
<p>Or maybe some series, still better than numerical integration.</p>
<p>I'm not asking for a full solution, some hint would be fine. Or even just a reassurance that a closed form exists.</p>
| Kostiantyn Lapchevskyi | 361,449 | <p>Little addition on $\sin$ case:</p>
<p>$$
I=\int^{\pi/2}_{-\pi/2}\sin(c\cos(t-\delta))dt=\int^{\pi}_{0}\sin(c\sin(t-\delta))dt
$$</p>
<p>Make substitution: $g=\sin(t-\delta)\quad dt=\frac{dg}{\pm \sqrt{1-g^2}}$</p>
<p>The sign in the last expression depends on the range. We know that $-\frac{\pi}{2}<\delta<\frac{\pi}{2}$. Examining the sign of $\cos$, we get a plus sign on $[-\delta,\frac{\pi}{2}]$ and a minus sign on $[\frac{\pi}{2},\pi-\delta]$. It leads to a partition of the integral:</p>
<p>$$
\left[\int^{1}_{-\sin(\delta)}\frac{\sin(cg)}{\sqrt{1-g^2}}dg \right]-\left[\int^{\sin(\delta)}_{1}\frac{\sin(cg)}{\sqrt{1-g^2}}dg\right]
$$ </p>
<p>Reverse the boundaries of the second integral and use $\int^{\sin(\delta)}_{-\sin(\delta)}=\int^{-\sin(\delta)}_{\sin(\delta)}=0$ (since the integrand is an odd function).
$$
2\int^{1}_{\max(-\sin(\delta),\sin(\delta))}\to 2\int^{1}_{\sin|\delta|}
$$
Then,
$$
I=2\int^{1}_{\sin|\delta|}\frac{\sin(cg)}{\sqrt{1-g^2}}dg
$$
where both $\sin$ and the square root can be expanded in Taylor series; then apply the formula $\sum^{\infty}_{n=0}a_n\sum^{\infty}_{m=0}b_m=\sum^{\infty}_{n=0}\sum^{n}_{k=0}a_{n-k}b_k$, change the order of summation and integration, and the only thing left is to numerically compute the resulting series.</p>
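<p>A quick numerical check of the final formula (my sketch, not part of the original answer; inside the check the substitution $g=\sin u$ is undone, turning $2\int_{\sin|\delta|}^{1}\frac{\sin(cg)}{\sqrt{1-g^2}}dg$ into $2\int_{|\delta|}^{\pi/2}\sin(c\sin u)\,du$ and removing the endpoint singularity at $g=1$):</p>

```python
from math import sin, pi

def midpoint(f, a, b, n=20000):
    """Simple midpoint-rule quadrature."""
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

c, delta = 2.0, 0.7
original = midpoint(lambda t: sin(c * sin(t - delta)), 0.0, pi)
# 2 * integral of sin(c g)/sqrt(1 - g^2) over [sin|delta|, 1], after g = sin u:
derived = 2 * midpoint(lambda u: sin(c * sin(u)), abs(delta), pi / 2)
assert abs(original - derived) < 1e-6
```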
|
160,542 | <p>I suspect the following integration to be wrong. My answer is coming out to be $3/5$, but the solution says $1$.</p>
<p>$$\int_0^1\frac{2(x+2)}{5}\,dx=\left.\frac{(x+2)^2}{5}\;\right|_0^1=1.$$</p>
<p>Please help out. Thanks.</p>
| Ayman Hourieh | 4,583 | <p>$$
\left.\frac{(x+2)^2}{5}\right|_0^1 = \frac{(1+2)^2}{5} - \frac{(0+2)^2}{5} = \frac{9}{5} - \frac{4}{5} = \frac{5}{5} = 1
$$</p>
|
160,542 | <p>I suspect the following integration to be wrong. My answer is coming out to be $3/5$, but the solution says $1$.</p>
<p>$$\int_0^1\frac{2(x+2)}{5}\,dx=\left.\frac{(x+2)^2}{5}\;\right|_0^1=1.$$</p>
<p>Please help out. Thanks.</p>
| Community | -1 | <p>$$\left. \dfrac{(x+2)^2}5 \right \vert_0^1 = \dfrac{(1+2)^2}5 - \dfrac{(0+2)^2}5 = \dfrac{3^2}5 - \dfrac{2^2}5 = \dfrac{9}5 - \dfrac45 = \dfrac{9-4}5 = \dfrac55 = 1$$</p>
<p>Note that we can integrate $\displaystyle \int_{x_1}^{x_2} (x+a) dx$ in seemingly two different ways.</p>
<p>The first method is to treat $x+a$ together as one object i.e. $$\displaystyle \int (x+a) dx = \dfrac{(x+a)^2}2 + c_1$$</p>
<p>The second method is to treat $x+a$ as two separate objects i.e. $$\displaystyle \int (x+a) dx = \int x dx + \int a dx = \dfrac{x^2}2 + ax + c_2$$</p>
<p>It might seem that both are different. However, note that the first answer can be re-written as $$\dfrac{(x+a)^2}2 + c_1 = \dfrac{x^2}2 + ax + \dfrac{a^2}2 + c_1.$$ Now this looks more closely like the second. The only difference in fact is that the constants are different. They are in fact related as $c_2 = c_1 + \dfrac{a^2}2$. While performing a definite integral, the constants cancel off and hence both ways should give us the same answer.</p>
<p>As an exercise, we will integrate what you have by treating $x$ and $2$ separately.</p>
<p>\begin{align}
\int_0^1 \dfrac{2(x+2)}5 dx & = \dfrac25 \int_0^1 (x+2)dx = \dfrac25 \int_0^1 xdx + \dfrac25 \int_0^1 2dx = \dfrac25 \cdot \left. \dfrac{x^2}2 \right \vert_{0}^1 + \dfrac25 \cdot 2 \cdot \left(1 - 0 \right)\\
& = \dfrac25 \cdot \left(\dfrac{1^2}2 - \dfrac{0^2}2 \right) + \dfrac45 = \dfrac25 \cdot \dfrac12 + \dfrac45 = \dfrac15 + \dfrac45 = 1
\end{align}</p>
|
2,952,014 | <p>So I was doing some self study and came across a proposition in one of my chemical engineering course's prescribed textbooks. I can't quite get the proof out. It's to do with a particle moving through a medium such that when it makes contact with to either of two plates <span class="math-container">$L$</span> units apart (i.e. one at <span class="math-container">$0$</span> and one at <span class="math-container">$L$</span>), it remains there. </p>
<blockquote>
<p>Consider that the movement of a single particle follows a random walk which can be described by a Markov chain with states <span class="math-container">$[0, L]$</span> where <span class="math-container">$P(X_n = -1) = p_{-1}$</span>,<span class="math-container">$P(X_n = 0) = p_{0}$</span> and <span class="math-container">$P(X_n = 1) = p_{1}$</span> with <span class="math-container">$p_{-1} + p_{0} + p_{1} = 1$</span>. Show that if states <span class="math-container">$0$</span> and <span class="math-container">$L$</span> are completely absorbing, then there does not exist a stationary distribution. <strong>Hint:</strong> Start by considering <span class="math-container">${\pi} = \pi P$</span> </p>
</blockquote>
<p>This makes sense intuitively since we have two recurrent classes <span class="math-container">$\{0\}$</span> and <span class="math-container">$\{L\}$</span> and one transient class <span class="math-container">$\{1, 2, ..., L - 2, L - 1\}$</span>. However, once I try and expand <span class="math-container">${\pi} = \pi P$</span>, I don't know how to proceed next. Ideally I'd like a few more hints rather than an answer. </p>
| Kavi Rama Murthy | 142,385 | <p><span class="math-container">$(1,0,0,...,0)$</span> and <span class="math-container">$(0,0,...0,1)$</span> are two invariant distributions so uniqueness fails.</p>
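<p>The point masses at the absorbing states can be checked directly for a small instance (a sketch with assumed values $L=5$ and step probabilities $0.3, 0.4, 0.3$; any choice summing to $1$ works):</p>

```python
L = 5                              # plates at 0 and L
p_m1, p_0, p_1 = 0.3, 0.4, 0.3     # P(step = -1), P(step = 0), P(step = +1)

# Transition matrix on states 0..L with absorbing endpoints.
P = [[0.0] * (L + 1) for _ in range(L + 1)]
P[0][0] = P[L][L] = 1.0
for i in range(1, L):
    P[i][i - 1], P[i][i], P[i][i + 1] = p_m1, p_0, p_1

def is_stationary(pi):
    """Check pi = pi P componentwise."""
    return all(abs(sum(pi[i] * P[i][j] for i in range(L + 1)) - pi[j]) < 1e-12
               for j in range(L + 1))

delta0 = [1.0] + [0.0] * L
deltaL = [0.0] * L + [1.0]
assert is_stationary(delta0) and is_stationary(deltaL)
# Every convex combination is stationary too, so uniqueness fails badly.
assert is_stationary([0.5 * a + 0.5 * b for a, b in zip(delta0, deltaL)])
```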
|
327,990 | <p>So I was working on this:</p>
<p>$$
\lim\limits_{x\to1} \frac{x + \sqrt{x} - 2}{x - 1}
$$</p>
<p>and I thought to simplify my top by multiplying by a conjugate, taking everything other than the $x$ to be the $b$ from $a+b$, so that my conjugate looked like $x - \sqrt{x} + 2$.</p>
<p>The multiplication, if correct, led me to $x^2 - x + 4\sqrt{x} - 4$, which I was happy to note had $1$ as a root (which corresponded with my denominator and would allow me to cancel off the $(x-1)$'s). </p>
<p>However, I'm having trouble finding the chunk that multiplies $(x-1)$ to give the $x^2 - x + 4\sqrt{x} - 4$, and this chunk's limit will define the numerator's limit, which, divided by 2 (from the conjugate left alone in the denominator after I cancel), should be my overall limit, I think = P</p>
<p>I'm pretty new to calculus so I may have made some mistakes, but if I am, I'd still like to know how I would have gotten that other root, since Ruffini's isn't working for me.</p>
<p>Thanks in advance = D</p>
| DonAntonio | 31,254 | <p>$$\frac{x+\sqrt x-2}{x-1}=\frac{(\sqrt x-1)(\sqrt x+2)}{(\sqrt x-1)(\sqrt x+1)}=\frac{\sqrt x+2}{\sqrt x+1}\xrightarrow[x\to 1]{} \frac{3}{2}$$</p>
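<p>A numerical sanity check of the cancellation (illustrative only, not part of the original answer):</p>

```python
from math import sqrt

def f(x):
    """The original quotient, defined for x >= 0, x != 1."""
    return (x + sqrt(x) - 2) / (x - 1)

# The factored form agrees with f away from x = 1 and extends it
# continuously there, giving the limit 3/2.
for x in [0.9, 0.99, 1.01, 1.1]:
    assert abs(f(x) - (sqrt(x) + 2) / (sqrt(x) + 1)) < 1e-12
assert abs(f(1 + 1e-8) - 1.5) < 1e-6
```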
|
1,358,270 | <p>If we have a function $f=f(r, \theta, \phi)$, where $(r, \theta, \phi)$ are spherical coordinates on $\mathbb{R}^3$, how do we compute the gradient $\nabla f$ by using the formula
$$\nabla f \cdot d\vec{r} = df ?$$
Here $\vec{r}$ is the position vector and $df=\frac{\partial f}{\partial r}dr +\frac{\partial f}{\partial \theta}d\theta+\frac{\partial f}{\partial \phi}d\phi$. </p>
| Christian Blatter | 1,303 | <p>In order to make $f$ well defined we have to assume $a_0>0$ and $B_2<\sqrt{a_0}$. Denote the domain ${\rm Re}(z)>a_0$ by $\Omega$. Let $z=a+it\in\Omega$. Then by a standard formula for square roots of complex numbers one has
$${\rm Re}\bigl(\sqrt{z}\bigr)=\sqrt{{1\over2}\bigl(\sqrt{a^2+t^2}+a\bigr)}\geq \sqrt{a_0}\ .\tag{1}$$
It follows that
$$|B_2-\sqrt{z}|\geq{\rm Re}\bigl(\sqrt{z}-B_2\bigr)\geq \sqrt{a_0}-B_2$$
and similarly
$$|B_3+\sqrt{z}|\geq{\rm Re}\bigl(\sqrt{z}+B_3\bigr)\geq \sqrt{a_0}+B_3$$
for all $z\in\Omega$. These estimates prove that the expression
$$g(z):={B_1-\sqrt{z}\over(B_2-\sqrt{z})(B_3+\sqrt{z})}={p\over B_2-\sqrt{z}}+{q\over B_3+\sqrt{z}}\ ,$$
where $p$ and $q$ are constants depending on the data $a_0$, $B_1$, $B_2$, $B_3$, is bounded on $\Omega$.
On the other hand from $(1)$ we can also conclude that
$${\rm Re}\bigl(\sqrt{a+it}\bigr)\geq\sqrt{{|t|\over2}}\ .$$
It follows that
$$\left|A^{-\sqrt{z}}\right|=A^{-{\rm Re}\bigl(\sqrt{z}\bigr)}\leq A^{-\sqrt{t/2}}\ .$$
In all, we now have
$$\bigl|f(a+it)\bigr|\leq C A^{-\sqrt{t/2}}\qquad(a\geq a_0, \ t\in{\mathbb R})\ ,$$
and it is easy to verify that
$$\int_{-\infty}^\infty \bigl|f(a+it)\bigr|^2\>dt\leq M$$
for all $a\geq a_0$ and a suitable $M$.</p>
|
27,904 | <blockquote>
<p>If $f(z) = (g(z),h(z))$ is continuous then $g$ and $h$ are as well.</p>
</blockquote>
<p>The converse is easy for me to prove, but I'm not seeing how to prove it using the terminology of open sets and not metric spaces.</p>
| lhf | 589 | <p>Compose $f$ with a projection. (First prove that a projection is continuous.)</p>
|
469,947 | <blockquote>
<p>Show that the presentation $G=\langle a,b,c\mid a^2 = b^2 = c^3 = 1, ab = ba, cac^{-1} = b, cbc^{-1} =ab\rangle$ defines a group of order $12$.</p>
</blockquote>
<p>I tried to let $d=ab\Rightarrow G=\langle d,c\mid d^2 =c^3 = 1, c^2d=dcdc\rangle$. But I don't know how to find the order from the new presentation. I mean, I am not sure what the elements of the new $G$ look like. (Certainly not all of the form $c^id^j$ or $d^kc^l$, otherwise $|G|\leq 5$.)</p>
<p>Is it a good step to reduce the number of generators, or is it unnecessary?</p>
| Mark Bennet | 2,906 | <p>If you note in the original presentation that the exponents of $a,b,c$ are $2,2,3$ whose product is $12$ a natural strategy would be to try to put a general element in the form $a^pb^qc^r$ and then show that these elements are distinct.</p>
|
469,947 | <blockquote>
<p>Show that the presentation $G=\langle a,b,c\mid a^2 = b^2 = c^3 = 1, ab = ba, cac^{-1} = b, cbc^{-1} =ab\rangle$ defines a group of order $12$.</p>
</blockquote>
<p>I tried to let $d=ab\Rightarrow G=\langle d,c\mid d^2 =c^3 = 1, c^2d=dcdc\rangle$. But I don't know how to find the order of the new presentation. I mean I am not sure how the elements of new $G$ look like. (For sure not on the form $c^id^j$ and $d^kc^l$ otherwise $|G|\leq 5$).</p>
<p>Is it good step to reduce the number of generators or not necessary?</p>
| i. m. soloveichik | 32,940 | <p>The presentation can be rewritten as $\langle d,c\mid d^2 =c^3 = (cd)^3=1 \rangle$.
This is the standard presentation for the symmetry group (rotations) of the regular tetrahedron where $d$ represents a rotation about an edge, $c$ represents a rotation about a face and $cd$ represents a rotation about a vertex.</p>
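<p>A concrete model can make the count explicit (my own sketch, not part of either answer): the relations force $\langle a,b\rangle$ to be a Klein four-group on which $c$ acts as the order-$3$ automorphism $a\mapsto b$, $b\mapsto ab$, so the group is a semidirect product $V\rtimes C_3\cong A_4$. Encoding elements as pairs $((p,q),r)$ with $p,q\in\mathbb{Z}_2$, $r\in\mathbb{Z}_3$, one checks that all defining relations hold in this 12-element group (so $|G|\ge 12$; the normal-form bound gives $|G|\le 12$):</p>

```python
from itertools import product

def phi(v, r):
    """Apply the automorphism a -> b, b -> ab (matrix [[0,1],[1,1]] over F_2) r times."""
    p, q = v
    for _ in range(r % 3):
        p, q = q, (p + q) % 2
    return p, q

def mul(g, h):
    """Multiplication in the semidirect product V x| C_3."""
    (v, r), (w, s) = g, h
    wv = phi(w, r)
    return ((v[0] + wv[0]) % 2, (v[1] + wv[1]) % 2), (r + s) % 3

E = ((0, 0), 0)
a, b, c, inv_c = ((1, 0), 0), ((0, 1), 0), ((0, 0), 1), ((0, 0), 2)

elements = {(v, r) for v in product(range(2), repeat=2) for r in range(3)}
assert len(elements) == 12
# All defining relations of the presentation hold in this model:
assert mul(a, a) == E and mul(b, b) == E and mul(c, mul(c, c)) == E
assert mul(a, b) == mul(b, a)
assert mul(mul(c, a), inv_c) == b
assert mul(mul(c, b), inv_c) == mul(a, b)
```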
|
288,974 | <p>Alright, this may be really funny, but I want to know why it is wrong. We often come across identities which we prove by multiplying both sides of the identity by a certain entity, but why don't we multiply by $0$? That way every identity would be proved in one single line. That is so stupid. I mean, by that reasoning we could also say that $1=2=3$. I know it is wrong. But why? I mean, if we can multiply both sides by $2$, then why not by $0$? For example, consider the following trigonometric identity:</p>
<p>Prove the identity : $\sin^2 \theta = \tan^2 \theta \cos ^2 \theta$</p>
<h2>Usual way</h2>
<p>To Prove : $\sin^2 \theta = \tan^2 \theta \cos ^2 \theta $</p>
<p>$\displaystyle \implies {\sin^2 \theta \over \cos^2 \theta } = {\tan^2 \theta \cos ^2 \theta \over \cos^2 \theta}$ (multiplying both the sides by $\displaystyle 1 \over \cos^2 \theta$)</p>
<p>$\implies \tan ^2 \theta = \tan^2\theta$</p>
<p>$\implies LHS=RHS$</p>
<p>$\therefore proved$</p>
<h2>Funny way</h2>
<p>To Prove : $\sin^2 \theta = \tan^2 \theta \cos ^2 \theta $</p>
<p>$\displaystyle \implies {\sin^2 \theta \times 0} = {\tan^2 \theta \cos ^2 \theta \times 0}$ (multiplying both sides by $0$)</p>
<p>$\implies 0 = 0$</p>
<p>$\therefore proved$</p>
<p>Please explain why is this wrong.</p>
| Tunococ | 12,594 | <p>$a = b$ implies $ac = bc$, but $ac = bc$ doesn't imply $a = b$. (Not immediately. Read below.)</p>
<p>The way you usually get $a = b$ from $ac=bc$ is by multiplying both sides with $1/c$, which is only available when $c \ne 0$.</p>
|
3,231,869 | <p>I am a little confused what this actually means: </p>
<p><span class="math-container">$e^{x+e^x}$</span></p>
<p>It is obviously not the same if I, for example, set
<span class="math-container">$$e^{x}:= \lambda \\
e^{x+e^x} \neq \lambda^\lambda
$$</span></p>
| Vizag | 566,333 | <p>You're going wrong slightly in the last step: </p>
<p><span class="math-container">$$e^x = \lambda$$</span></p>
<p><span class="math-container">$$\implies e^{x+e^x} = e^x e^{e^x} = \lambda \times e^{\lambda}$$</span></p>
<p>Your expression
<span class="math-container">$$\lambda^{\lambda} = (e^{x})^{e^x}$$</span></p>
<p>See the difference?</p>
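<p>A quick numerical check of the two expressions (illustrative sketch, not part of the original answer):</p>

```python
from math import exp

for x in [-1.0, 0.3, 1.7]:
    lam = exp(x)
    # e^{x + e^x} factors as e^x * e^{e^x} = lam * e^lam ...
    assert abs(exp(x + exp(x)) - lam * exp(lam)) < 1e-9 * lam * exp(lam)
    # ... which differs from lam**lam = e^{x * e^x} at these sample points.
    assert abs(exp(x + exp(x)) - lam ** lam) > 1e-6
```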
|
1,507,181 | <p><a href="https://i.stack.imgur.com/nuhUB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nuhUB.png" alt="enter image description here"></a></p>
<p><em>We know $G(0) = 0$</em></p>
<p>Okay, so I have the above graph but I'm having a difficult time translating it into the graph of $G(x)$.</p>
<p>What I know so far is that the slope changes abruptly from 0 to 2 at $x=0$. I also know that the slope gets extremely close to -1 but is never -1 itself, and it gets really close to 0 but isn't 0 itself. Finally, I know that $G(x)$ has a negative slope for $x<0$ and a positive slope for $x>0$.</p>
<p>What I don't understand is how we can show that the slope is getting really close to -1 or 0 on $G(x)$? </p>
| tomi | 215,986 | <p>A common application of this kind of question is when you have the speed $v=\frac {ds}{dt}$ which corresponds to $G'(x)$ and you want to find the displacement $s $ which corresponds to $G(x)$. In that situation it helps to remember that the area under the velocity-time curve is equal to the displacement. For you that means that the area under the $G'(x)$ curve is equal to the change in $G(x)$.</p>
<p>If you like, $G(1)=G(0) + \int_0^1 G'(x) dx$</p>
<p>You can estimate the areas from your graph.</p>
<p>$G(1)\approx G(0)+1.5$</p>
<p>$G(2)\approx G(1)+0.75$</p>
<p>$G(3)\approx G(2)+0.3$</p>
<p>$G(4)\approx G(3)+0.1$</p>
<p>$G(5)\approx G(4)+0$</p>
<p>These give you a set of points that you can try to join up in a "dot-to-dot"
fashion. You also know that for small positive $x$ the gradient is about 2.</p>
<p>You can also work backwards, so that ...</p>
<p>$G(0)\approx G(-1) -0.25 \Rightarrow G(-1) \approx G(0)+0.25$</p>
|
1,675,329 | <p>What's the value of $\sum_{i=1}^\infty \frac{1}{i^2 i!}(= S)$?</p>
<p>I tried to calculate the value as follows.</p>
<p>$$\frac{e^x - 1}{x} = \sum_{i=1}^\infty \frac{x^{i-1}}{i!}.$$
Taking the integral gives
$$ \int_{0}^x \frac{e^t-1}{t}dt = \sum_{i=1}^\infty \frac{x^{i}}{i i!}. $$</p>
<p>In the same way, we get the following equation</p>
<p>$$ \int_{s=0}^x \frac{1}{s} \int_{t=0}^s \frac{e^t-1}{t}dt ds= \sum_{i=1}^\infty \frac{x^{i}}{i^2 i!}. $$</p>
<p>So the following holds:</p>
<p>$$S = \int_{s=0}^1 \frac{1}{s} \int_{t=0}^s \frac{e^t-1}{t}dt ds.$$</p>
<p>Does this last integral have an elementary closed form or other expression?</p>
| Marco Cantarini | 171,547 | <p>Maybe it's interesting to see how to get the “closed form” in terms of hypergeometric function. Recalling the definition of generalized hypergeometric function $$_{q}F_{p}\left(a_{1},\dots,a_{q};b_{1},\dots,b_{p};z\right)=\sum_{k\geq0}\frac{\left(a_{1}\right)_{k}\cdots\left(a_{q}\right)_{k}}{\left(b_{1}\right)_{k}\cdots\left(b_{p}\right)_{k}}\frac{z^{k}}{k!}
$$ where $\left(a_{i}\right)_{k}
$ is the <a href="http://mathworld.wolfram.com/PochhammerSymbol.html" rel="nofollow">Pochhammer symbol</a>, we note that $\left(2\right)_{k}=\left(k+1\right)!
$ and $\left(1\right)_{k}=k!$. Hence $$_{3}F_{3}\left(1,1,1;2,2,2;1\right)=\sum_{k\geq0}\frac{\left(k!\right)^{3}}{\left(\left(k+1\right)!\right)^{3}}\frac{1}{k!}=\sum_{k\geq0}\frac{1}{\left(k+1\right)^{3}}\frac{1}{k!}=\sum_{k\geq1}\frac{1}{k^{2}k!}.$$ </p>
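<p>As a numerical cross-check (mine, not part of the original answer), the series $\sum_{k\ge1}\frac{1}{k^2k!}\approx1.1465$ can be compared against the double-integral representation derived in the question:</p>

```python
from math import exp, factorial

S_series = sum(1.0 / (k * k * factorial(k)) for k in range(1, 30))

def inner(s, n=2000):
    """Midpoint rule for the integral of (e^t - 1)/t over [0, s]; the integrand -> 1 as t -> 0."""
    h = s / n
    return h * sum((exp((k + 0.5) * h) - 1.0) / ((k + 0.5) * h) for k in range(n))

def double_integral(n=1500):
    """Midpoint rule for the outer integral of inner(s)/s over [0, 1]."""
    h = 1.0 / n
    return h * sum(inner((k + 0.5) * h, 200) / ((k + 0.5) * h) for k in range(n))

assert abs(S_series - double_integral()) < 1e-4
```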
|
934,660 | <p>Prove that for $ n \geq 2$, n has at least one prime factor.</p>
<p>I'm trying to use induction. For n = 2, 2 = 1 x 2. For n > 2, n = n x 1, where 1 is a prime factor. Is this sufficient to prove the result? I feel like I may be mistaken here.</p>
| Sylvain Biehler | 132,773 | <p>You can use a proof by contradiction. If <span class="math-container">$n>1$</span> has no prime divisor, you can build an infinite decreasing sequence of divisors greater than <span class="math-container">$1$</span>, which is impossible.</p>
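<p>The well-ordering idea behind this hint can be sketched in code: the least divisor of $n$ greater than $1$ must itself be prime, since any proper factor of it would be a smaller divisor of $n$ (names are mine):</p>

```python
def least_factor(n):
    """Smallest divisor of n that is greater than 1; by minimality it is prime."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # no divisor up to sqrt(n), so n itself is prime

def is_prime(n):
    return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))

# Every n >= 2 has a prime factor, namely its least divisor > 1.
assert all(is_prime(least_factor(n)) and n % least_factor(n) == 0
           for n in range(2, 2000))
```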
|
1,319,288 | <p>There is a <a href="https://math.stackexchange.com/questions/1103723">similar question</a> in this site but I am not satisfied with the answer, which is basically the same as the proof in the mentioned textbook.</p>
<p>The book(Karel Hrbacek&Thomas Jech, <em>Introduction to Set Theory 3e</em>, p165) states a lemma: For every $\alpha$, $\text{cf}(2^{\aleph_\alpha})>\aleph_\alpha$. Then it asserts that $2^{\aleph_0}$ cannot be $\aleph_\omega$, since $\text{cf}(2^{\aleph_\omega})=\aleph_0$. But I can't see the connection. According to the lemma, $\text{cf}(2^{\aleph_\omega})$ should be larger than $\aleph_\omega>\aleph_0$, how can it equal $\aleph_0$?</p>
<p>On the other hand, I can't see why $\text{cf}(2^{\aleph_\omega})=\aleph_0$ is false either. Since $2^{\aleph_\omega}=\lim\limits_{n\rightarrow\omega}2^{\aleph_n}$, it is the limit of an increasing sequence of ordinals of length $\omega$, so its cofinality should not be greater than $\aleph_0$. Is there something wrong within this reasoning?</p>
| bof | 111,012 | <p>All you can say about $\operatorname{cf}(2^{\aleph_\omega})$ is that it's some regular cardinal $\kappa$ such that
$$\aleph_{\omega+1}\le\kappa\le2^{\aleph_\omega}.$$
I think you need the axiom of choice to say even that much.</p>
<p>What's wrong with your reasoning is the unwarranted assumption that
$$2^{\aleph_\omega}=\lim_{n\to\omega}2^{\aleph_n}.$$
Ordinal exponentiation is continuous, but cardinal exponentiation is not; e.g.,
$$2^{\aleph_0}\ne\lim_{n\to\omega}2^n.$$</p>
<p>In fact, there <strong>are</strong> models of set theory (with choice) in which the equality
$$2^{\aleph_\omega}=\lim_{n\to\omega}2^{\aleph_n}$$
holds, but in that case we have
$$2^{\aleph_k}=2^{\aleph_{k+1}}=2^{\aleph_{k+2}}=\cdots=2^{\aleph_\omega}$$
for some $k\lt\omega,$ i.e., the sequence $\{2^{\aleph_n}\}_{n\lt\omega}$ is <strong>not</strong> a strictly increasing sequence, but instead is eventually constant.</p>
|
173,387 | <p>How can I properly indent long code in <em>Mathematica</em>?
Are there some best practices?</p>
| kglr | 125 | <h3>GeneralUtilities`HoldPrettyForm</h3>
<pre><code>Needs["GeneralUtilities`"]
HoldPrettyForm[Row[Table[Table[Plot[Sin[i x] Cos[j x], {x, 0, Pi}],
{i, 1, 5}], {j, 1, 3}]]]
</code></pre>
<p><a href="https://i.stack.imgur.com/rUdOh.png" rel="noreferrer"><img src="https://i.stack.imgur.com/rUdOh.png" alt="enter image description here" /></a></p>
|
661,182 | <p>I'm taking a discrete structures class and I would appreciate some help with a homework problem. The problem is</p>
<blockquote>
<p>Attempt to find a closed form for the sum $\displaystyle \sum_{k=1}^n k^3$
by perturbation, only to find a closed form for the following sum $\displaystyle \sum_{k=1}^n k^2$.</p>
</blockquote>
<p>I got as far as </p>
<blockquote>
<p>$\displaystyle S_n + (n+1)^3 = a_0 + \sum_{k=1}^n (k+1)^3$</p>
</blockquote>
<p>but now I am stuck. I don't understand how to finish the problem, and the teacher did not explain perturbation with sums very well. If somebody could explain it to me in greater detail and/or show me how to finish the problem, I would really appreciate that.</p>
<p>Thanks</p>
| tabstop | 117,788 | <p>You've got it started out:
$$S_n+(n+1)^3 = S_0 + \sum_{k=0}^n (k+1)^3.$$
We know that $S_0=0$, and we can expand $(k+1)^3$ by binomial coefficients (or by polynomial multiplication if you're bored), so we have that the right hand side is
$$\sum_{k=0}^n (k^3+3k^2+3k+1) = S_n + 3\sum_{k=0}^n k^2 + 3\sum_{k=0}^n k + \sum_{k=0}^n 1.$$</p>
<p>The $S_n$ terms will cancel, and we assume you know the sum of $k$ itself to get
$(n+1)^3 = 3\sum_{k=0}^n k^2 + 3n(n+1)/2 + n+1$, and from there you're doing some algebra.</p>
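<p>Solving $(n+1)^3 = 3\sum_{k=0}^n k^2 + 3n(n+1)/2 + (n+1)$ for the sum gives a closed form that can be checked directly (a sketch; the function name is mine):</p>

```python
def sum_squares_closed(n):
    """Closed form from the perturbation: ((n+1)^3 - 3n(n+1)/2 - (n+1)) / 3."""
    return ((n + 1) ** 3 - 3 * n * (n + 1) // 2 - (n + 1)) // 3

for n in range(200):
    assert sum(k * k for k in range(n + 1)) == sum_squares_closed(n)
    assert sum_squares_closed(n) == n * (n + 1) * (2 * n + 1) // 6  # the familiar form
```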
|
4,280,328 | <p>I think the substitution <span class="math-container">$x=\xi+\eta,$</span> <span class="math-container">$y=\xi-\eta$</span> can be done. Then the equation takes the form <span class="math-container">$$ \begin{gathered} 38(\xi^{2}+\eta^{2})=221+33(\xi^{2}-\eta^{2}) \\ 5 \xi^{2}+71 \eta^{2}=221 \end{gathered} $$</span></p>
<p>whence <span class="math-container">$\xi^{2}=30-71 n$</span>, <span class="math-container">$\eta^{2}=1+5 n$</span>. For <span class="math-container">$n=0$</span> we obtain noninteger solutions and for the rest one of the equalities has a negative right-hand side. Am I wrong?</p>
| Michael Rozenberg | 190,319 | <p>We have <span class="math-container">$$19\left(x-\frac{33y}{38}\right)^2+\left(19-19\cdot\left(\frac{33}{38}\right)^2\right)y^2=221,$$</span> which gives
<span class="math-container">$$\left(19-19\cdot\left(\frac{33}{38}\right)^2\right)y^2\leq221$$</span> or
<span class="math-container">$$1\leq y\leq6.$$</span>
We can do the same for <span class="math-container">$x$</span>. Now, check it.</p>
<p>Can you end it now?</p>
<p>I got <span class="math-container">$\{(2,5),(5,2)\}$</span>.</p>
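<p>A brute-force check over the bounded range (my sketch; the original equation is not quoted in full above, but the completed square corresponds to $19x^2-33xy+19y^2=221$, which is assumed here):</p>

```python
def q(x, y):
    """Positive-definite form reconstructed from the completed square (an assumption)."""
    return 19 * x * x - 33 * x * y + 19 * y * y

bound = 7  # the bounding argument gives |x|, |y| <= 6
sols = {(x, y) for x in range(-bound, bound + 1)
               for y in range(-bound, bound + 1) if q(x, y) == 221}
assert sols == {(2, 5), (5, 2), (-2, -5), (-5, -2)}
# Restricting to positive integers leaves exactly the pair in the answer:
assert {s for s in sols if s[0] > 0 and s[1] > 0} == {(2, 5), (5, 2)}
```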
|
549,347 | <p>How would I solve the following question and determine whether each statement is true or false?</p>
<p>1.$\forall x \in R , \exists y\in R, x^2+y^2=-1$</p>
<p>2: $\exists x\in R,\forall y \in R, x^2+y^2=-1$</p>
<p>For the first one I think I can justify it is false.</p>
<p>For any arbitrary $x$, $y$ would have to be</p>
<p>$y=\sqrt{-x^2-1}$, which would not be a real number.</p>
<p>For the second one, I can say that the sum of two squared numbers cannot be negative. So it would be false?</p>
| amWhy | 9,003 | <p>Yes, both statements are false because the sum of two squared real numbers, whatever those numbers are, <strong><em>will never be negative</em></strong>.</p>
|
444,486 | <p>I am teaching myself real analysis, and in this particular set of lecture notes, the <a href="http://www.math.louisville.edu/~lee/RealAnalysis/IntroRealAnal-ch01.pdf" rel="nofollow">introductory chapter on set theory</a> when explaining that not all sets are countable, states as follows:</p>
<blockquote>
<p>If $S$ is a set, $\operatorname{card}(S) < \operatorname{card}(\mathcal{P}(S))$.</p>
</blockquote>
<p>Can anyone tell me what this means? It is theorem 1.5.2 found on page 13 of the article.</p>
| Y.H. Chan | 71,563 | <p>There are plenty of differences between the $\mathbb{R}^2$ plane and the $\mathbb{C}$ plane. Here I give you two interesting ones.</p>
<p>First, about branch points and branch lines. Suppose that we are given the function $w=z^{1/2}$. Suppose further that we allow $z$ to make a complete circuit around the origin, counterclockwise, starting from a point $A$ different from the origin. If $z=re^{i\theta}$, then $w=\sqrt re^{i\theta/2}$.</p>
<p>At point $A$,
$\theta =\theta_1$, so $w=\sqrt re^{i\theta_1/2}$. </p>
<p>While after completing the circuit, back to point $A$,<br>
$\theta =\theta_1+2\pi$, so $w=\sqrt re^{i(\theta_1+2\pi)/2}=-\sqrt re^{i\theta_1/2}$.</p>
<p>The problem is, if we consider $w$ as a function, we cannot get the same value at the same point.
To improve this, we introduce Riemann surfaces. Imagine the whole $\mathbb{C}$ plane as two sheets superimposed on each other. On the sheets there is a line which indicates the real axis. Cut the two sheets simultaneously along the POSITIVE real axis. Imagine the lower edge of the bottom sheet is joined to the upper edge of the top sheet.</p>
<p>In this case we call the origin a branch point and the positive real axis the branch line.</p>
<p>Now that the surface is complete: travelling the circuit, you start on the top sheet, and after one complete circuit you pass to the bottom sheet. Travelling it again, you go back to the top sheet. In this way $\theta_1$ and $\theta_1+2\pi$ become two different points (on the top and the bottom sheet respectively) and give two different values.</p>
<p>Another thing is, in the $\mathbb{R}^2$ case, the existence of $f'(x)$ does not imply that $f''(x)$ exists. Try thinking about $f(x)=x^2$ if $x\ge0$ and $f(x)=-x^2$ when $x<0$. But in the $\mathbb{C}$ plane, if $f'(z)$ exists (we say $f$ is analytic), this guarantees that $f''(z)$ and thus every $f^{(n)}(z)$ exist. It comes from Cauchy's integral formula.</p>
<p>I am not going to give you the proof, but if you are interested, you should first know the Cauchy–Riemann equations: $w=f(z)=f(x+yi)=u(x,y)+iv(x,y)$ is analytic iff it satisfies both $\frac{\partial u}{\partial x}=\frac{\partial v}{\partial y}$ and $\frac{\partial u}{\partial y}=-\frac{\partial v}{\partial x}$. The proof follows directly from the definition of the derivative. Thus, once you have $u(x,y)$, you can recover $v(x,y)$ from the equations above, making $f(z)$ analytic.</p>
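<p>As a quick numerical check of the Cauchy–Riemann equations (my own illustration, not from the original answer): for the analytic function $f(z)=z^2$ we have $u=x^2-y^2$ and $v=2xy$, so $u_x=v_y$ and $u_y=-v_x$ should hold everywhere. The sample point below is arbitrary.</p>

```python
# Verify the Cauchy-Riemann equations for f(z) = z**2 at a sample point,
# using central finite differences for the partial derivatives.
h = 1e-6

def u(x, y):
    return x**2 - y**2

def v(x, y):
    return 2 * x * y

def partial(f, x, y, wrt):
    """Central finite-difference approximation of a partial derivative."""
    if wrt == 'x':
        return (f(x + h, y) - f(x - h, y)) / (2 * h)
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

x0, y0 = 1.3, -0.7  # arbitrary sample point
ux, uy = partial(u, x0, y0, 'x'), partial(u, x0, y0, 'y')
vx, vy = partial(v, x0, y0, 'x'), partial(v, x0, y0, 'y')

print(abs(ux - vy), abs(uy + vx))  # both ~0
```

<p>Here $u_x=2x$, $v_y=2x$, $u_y=-2y$ and $v_x=2y$, so both residuals vanish, as the equations require.</p>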
|
12,359 | <p>There are, IMO, quite a lot of badly tagged questions and... not very good tags. Some of them were discussed on meta recently; some of these discussions show, IMO, that the users who created these tags don't always understand the tagging system of Math.SE well enough.</p>
<p>On Meta.SO one needs 500 rep to create a new tag, on SO — 1500 (!) rep. On Math.SE — just 300 rep.</p>
<p>Since its creation, Math.SE has grown a lot. Maybe now it's time to (ask to) raise the reputation requirement for creating a new tag on Math.SE? (Say, to 1000 rep?)</p>
<p>(Idea suggested by <a href="http://meta.math.stackexchange.com/users/4781/tito-piezas-iii">Tito Piezas III</a> in <a href="http://meta.math.stackexchange.com/questions/12314/do-we-really-need-a-ramanujan-radicals-tag#comment48374_12314">this comment</a>.)</p>
| Post No Bulls | 111,742 | <p>I agree; there is much less need for new tags on an established math Q&A site than on a technology-oriented Q&A site. On a tech Q&A, users are likely to bring up questions about a new gadget or a new version of some software that was just released. Here we don't normally get questions about mathematical areas that were just recently invented. Unless we count <a href="http://meta.math.stackexchange.com/q/1411/">kalle numbers</a>. Or <a href="https://math.stackexchange.com/questions/tagged/caluclus" class="post-tag" title="show questions tagged 'caluclus'" rel="tag">caluclus</a>, which was invented just an hour ago. </p>
<p>I suggest setting the limit at 1500 as on SO, simply because it seems SE prefers to <a href="http://meta.math.stackexchange.com/a/4770/">avoid too many special cases</a> among the sites. But either 1000 or 1500 would be an improvement. </p>
|
106,219 | <blockquote>
<p>Define a sequence of functions $f_n: (0,1)\rightarrow\mathbb{R}$ by
$$
f_n(x) =
\begin{cases}
1/q^n & \text{if } x = p/q \ (\text{nonzero})\\
0 & \text{otherwise}
\end{cases}
$$
Find the pointwise limit $f$ of $\{f_n\}$ and show $\{f_n\}$ converges uniformly. </p>
</blockquote>
<p>$f$ looks like a modification of Thomae's function to me, but I can't see how a function that converges uniformly can also have a pointwise limit -- I thought uniform convergence was a stronger type of convergence?</p>
| azarel | 20,998 | <p>Uniform convergence means that $\forall \epsilon>0$ there is an $N$ such that $|f_n(x)-f(x)|<\epsilon$ for all $n\geq N$ and for all $x$(the point is that $N$ does not depend on $x$). So uniform convergence imply point wise limit convergence. </p>
|