| qid (int64) | question (string) | author (string) | author_id (int64) | answer (string) |
|---|---|---|---|---|
2,909,022 | <p>I don't understand how to get from the first to the second step here and get $1/3$ in front.</p>
<p>In the second step, $g(x)$ stands for $x^3 + 1$.</p>
<p>\begin{align*}
\int_0^2 \frac{x^2}{x^3 + 1} \,\mathrm{d}x
&= \frac{1}{3} \int_{0}^{2} \frac{1}{g(x)} g'(x) \,\mathrm{d}x
= \frac{1}{3} \int_{1}^{9} \frac{1}{u} \,\mathrm{d}u \\
&= \left. \frac{1}{3} \ln(u) \,\right|_{1}^{9}
= \frac{1}{3} ( \ln 9 - \ln 1 )
= \frac{\ln 9}{3}
\end{align*}</p>
| Keen-ameteur | 421,273 | <p>The derivative of $g(x)$ is $g'(x)=3x^2$. The constant $3$ is not present in the original integrand, so we insert it and compensate with the factor $\frac{1}{3}$:</p>
<p>$\frac{1}{3}g'(x)=\frac{1}{3}\cdot 3x^2=x^2$</p>
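<p>As a sanity check (a small Python sketch, not part of the original argument), Simpson's rule reproduces $\frac{\ln 9}{3}$ numerically:</p>

```python
import math

def f(x):
    return x**2 / (x**3 + 1)

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

approx = simpson(f, 0, 2)
exact = math.log(9) / 3
print(approx, exact)  # both approximately 0.73241
```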
|
440,082 | <blockquote>
<p><span class="math-container">$$ \int_{1}^{\infty} \frac{\sin^2 (\mu \sqrt{x^2 -1})}{(x+1)^{\frac{9}{2}} (x-1)^{\frac{3}{2}}} \,dx $$</span>
Note: <span class="math-container">$\mu$</span> here is an extremely small constant.</p>
</blockquote>
<p>I have tried:</p>
<ol>
<li>Estimating the integral by a Taylor expansion of <span class="math-container">$\sin^2(\mu \sqrt{x^2 - 1})$</span>, but the resulting series diverges after a few terms.</li>
<li>I have also tried to split the integral into two regions, solving the two integrals differently and then adding them, in the hope that the arbitrary parameter <span class="math-container">$\alpha$</span> will vanish (asymptotic matching/splitting). I have been able to solve the two integrals, but <span class="math-container">$\alpha$</span> does not vanish: <span class="math-container">$$\int_{1}^{\alpha} f(x)\,dx + \int_{\alpha}^{\infty} f(x)\,dx$$</span>
Approximations for the two regions:<br />
<span class="math-container">$1$</span> to <span class="math-container">$\alpha$</span> region: <span class="math-container">$x \approx 1$</span> and hence <span class="math-container">$(x+1) \approx 2$</span> <br />
<span class="math-container">$\alpha$</span> to <span class="math-container">$\infty$</span> region: <span class="math-container">$(x^2 -1) \approx x^2$</span></li>
</ol>
| Carlo Beenakker | 11,260 | <p>Here is a log-log plot of
<span class="math-container">$$\delta I=I_{\text{appr}}-\int_{1}^{\infty} \frac{\sin^2 (\mu \sqrt{x^2 -1})}{(x+1)^{\frac{9}{2}} (x-1)^{\frac{3}{2}}} \,dx,$$</span>
as a function of <span class="math-container">$\mu$</span>, with
<span class="math-container">$$I_{\text{appr}}=\frac{2 \mu^2}{15}-\frac{\mu^4}{9}.$$</span>
You write "<span class="math-container">$\mu$</span> is extremely small". Is this error acceptable?</p>
<img src="https://i.stack.imgur.com/0Rfc0.png" width="400"/>
<p>This is a plot of the absolute error; the relative error is <span class="math-container">$\lesssim 10^{-6}$</span> for <span class="math-container">$\mu\lesssim 10^{-2}$</span>.</p>
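<p>To reproduce such a check numerically (a hedged Python sketch; the substitution $x = 1+t^2$ is my own device to tame the $(x-1)^{-3/2}$ endpoint singularity before quadrature):</p>

```python
import math

mu = 0.01

# Substituting x = 1 + t**2 turns the integral into
#   int_0^inf 2*sin(mu*t*sqrt(2 + t**2))**2 / (t**2 * (2 + t**2)**4.5) dt,
# finite at t = 0 (limit mu**2 / 2**2.5) and decaying like t**-11.
def g(t):
    if t == 0.0:
        return mu**2 / 2**2.5
    return 2 * math.sin(mu * t * math.sqrt(2 + t**2))**2 / (t**2 * (2 + t**2)**4.5)

def simpson(f, a, b, n):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

numeric = simpson(g, 0.0, 40.0, 80000)  # tail beyond t = 40 is ~1e-17
approx = 2 * mu**2 / 15 - mu**4 / 9
print(numeric, approx)  # agree to about six significant figures
```

For $\mu = 10^{-2}$ the relative deviation is of order $10^{-6}$, consistent with the plot.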
|
440,082 | <blockquote>
<p><span class="math-container">$$ \int_{1}^{\infty} \frac{\sin^2 (\mu \sqrt{x^2 -1})}{(x+1)^{\frac{9}{2}} (x-1)^{\frac{3}{2}}} \,dx $$</span>
Note: <span class="math-container">$\mu$</span> here is an extremely small constant.</p>
</blockquote>
<p>I have tried:</p>
<ol>
<li>Estimating the integral by a Taylor expansion of <span class="math-container">$\sin^2(\mu \sqrt{x^2 - 1})$</span>, but the resulting series diverges after a few terms.</li>
<li>I have also tried to split the integral into two regions, solving the two integrals differently and then adding them, in the hope that the arbitrary parameter <span class="math-container">$\alpha$</span> will vanish (asymptotic matching/splitting). I have been able to solve the two integrals, but <span class="math-container">$\alpha$</span> does not vanish: <span class="math-container">$$\int_{1}^{\alpha} f(x)\,dx + \int_{\alpha}^{\infty} f(x)\,dx$$</span>
Approximations for the two regions:<br />
<span class="math-container">$1$</span> to <span class="math-container">$\alpha$</span> region: <span class="math-container">$x \approx 1$</span> and hence <span class="math-container">$(x+1) \approx 2$</span> <br />
<span class="math-container">$\alpha$</span> to <span class="math-container">$\infty$</span> region: <span class="math-container">$(x^2 -1) \approx x^2$</span></li>
</ol>
| Fred Hucht | 90,413 | <p>I really like the double-series approach by @AccidentalFourierTransform; however, I got remaining <span class="math-container">$\Lambda$</span>s in the terms of order <span class="math-container">$O(\mu^8)$</span> onwards. Thinking about this approach, I located the problem in the neglect of higher-order terms of <span class="math-container">$\sin^2(\mu x)$</span> in the series expansion in <span class="math-container">$x$</span> around infinity (line 4 in the MMA code, thanks for providing!).
The problem can be eliminated by a change of variables in the original integral from <span class="math-container">$x \mapsto y \equiv \sqrt{x^2-1}$</span>, such that
<span class="math-container">$$
I = \int_0^\infty \mathrm dy \frac{y \sin^2(\mu y)}
{x\,(x-1)^{3/2}\,(x+1)^{9/2}}, \quad \text{with} \quad x \equiv \sqrt{y^2+1}.\tag{1}
$$</span>
Now, the method can be applied: we split the integral at <span class="math-container">$\Lambda>0$</span>, expand the first integrand in <span class="math-container">$\mu$</span> around 0 and the second integrand in <span class="math-container">$x$</span> around <span class="math-container">$\infty$</span>. The sum of the integrated series expansions is again expanded around <span class="math-container">$\mu=0$</span> to get
<span class="math-container">\begin{align}
I&=\frac{2\mu^2}{15} - \frac{\mu^4}{9} + \frac{\pi\mu^5}{15}
- \frac{[67 - 60(\gamma+\log\mu)]\,\mu^6}{450}
- \frac{8\pi\mu^7}{315}\\
&\quad + \frac{[169 - 56 (\gamma+\log\mu)]\,\mu^8}{7056}
+ \frac{[913 - 360 (\gamma+\log\mu)]\,\mu^{10}}{2916000} \\
&\quad - \frac{[23797 - 9240 (\gamma+\log\mu)]\,\mu^{12}}{3841992000}
+ O\left(\mu ^{14}\right).\tag{2}
\end{align}</span>
Note that now <span class="math-container">$\Lambda$</span> cancels in all terms (as suggested by the OP in 2.) and that the highest odd power seems to be <span class="math-container">$\mu^7$</span>.</p>
|
937,064 | <p>The title pretty much says it all:</p>
<p>If supposing that a statement is false gives rise to a paradox, does this prove that the statement is true?</p>
<p><em>Edit:</em> Let me attempt to be a little more precise:</p>
<p>Suppose you have a proposition. Furthermore, suppose that assuming the proposition is false leads to a paradox. Does this imply the proposition is true?
In other words, can I replace the "contradiction" in "proof by contradiction" with "paradox"?</p>
<p>This question might still be somewhat ambiguous; I'm reluctant to attempt to precisely define "paradox" here.
As a (somewhat loose) example however, consider some proposition whose negation leads to, for example, Russell's paradox. Would this prove that the proposition is true? </p>
| user21820 | 21,820 | <p>The whole question boils down to which sentences have a truth value. See <a href="https://math.stackexchange.com/a/1888389/21820">this answer</a> for a detailed analysis of this notion. All modern mathematics is done in a manner that can in fact be formalized in a formal system that does not allow self-reference or meta-reference. So one <strong>cannot</strong> even assign to a propositional variable $P$ the liar sentence or the Quine sentence. Therefore one cannot suppose that such a paradoxical sentence is true or false to derive a contradiction or anything whatsoever. We also have the law of excluded middle in classical logic, so there you have the valid deduction method of <a href="https://math.stackexchange.com/a/1668149/21820">proof by contradiction</a>, which works regardless of what form the contradiction takes. The term "paradox" is quite vague, but if by it you mean something that you can prove cannot possibly be true, then deriving any paradox is equivalent to deriving a contradiction, and implies that at least one of the currently active assumptions is false.</p>
|
2,348,131 | <p>In our class, we encountered a problem that is something like this: "A ball is thrown vertically upward with ...". Since the motion of the object is rectilinear and is free fall, we all agree that the magnitude of the acceleration $a(t)$ is $32$ feet per second squared. However, we are confused about whether the sign of $a(t)$ is positive or negative. </p>
<p>Now, various references state that if we let the upward direction be positive then $a$ is negative, and if we let downward be the positive direction, then $a$ is positive. The problem with their claim is that they did not explain well how they arrived at that conclusion. </p>
<p>My question is: why is the acceleration $a$ negative if we choose the upward direction to be positive? Note: I need a simple but comprehensive answer. Thanks in advance. </p>
| Community | -1 | <p>The gravity force is downward and so is the acceleration (by $F=ma$).</p>
<p>So if you choose a downward axis, the acceleration is positive.</p>
<p>And if you choose an upward axis, the acceleration is negative.</p>
|
221,053 | <p>Given two uncorrelated random variables $X,Y$ with the same variance $\sigma^2$, I need to compute the correlation $\rho= \frac{COV(U,V)}{\sigma(U)\sigma(V)}$ between $U = X+Y$ and $V = 2X+2Y$. I know it should be a number between $-1$ and $1$, and I don't understand how I get $4$. </p>
<p>Here's what I did:</p>
<p>$COV(X+Y,2X+2Y)=COV(X+Y,2X)+COV(X+Y,2Y)=COV(2X,X)+COV(2X,Y)+COV(2Y,Y)+COV(2Y,X)=2COV(X,X)+2COV(Y,Y)+4COV(X,Y)=2\sigma^2+2\sigma^2=4\sigma^2$ so final result is $\rho=4$ since $\sigma(X)=\sqrt{Var(x)}$.</p>
<p>What's wrong with what I did?</p>
| Did | 6,179 | <p>Show that, for <strong>every</strong> nondegenerate random variable $Z$ and nonzero real number $a$, $\mathrm{var}(aZ)=a^2\cdot\mathrm{var}(Z)$ and $\mathrm{cov}(Z,aZ)=a\cdot\mathrm{var}(Z)$. Deduce that $\varrho(Z,aZ)=\mathrm{sgn}(a)$.</p>
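<p>Did's hint can be seen concretely in a quick simulation (a hypothetical Python sketch; $X, Y$ taken standard normal, hence uncorrelated with equal variance):</p>

```python
import math
import random

random.seed(0)
n = 10000
xs = [random.gauss(0, 1) for _ in range(n)]
ys = [random.gauss(0, 1) for _ in range(n)]

z = [x + y for x, y in zip(xs, ys)]  # Z = X + Y
w = [2 * v for v in z]               # 2X + 2Y = 2Z exactly

def corr(a, b):
    """Sample correlation coefficient."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    va = sum((u - ma) ** 2 for u in a)
    vb = sum((v - mb) ** 2 for v in b)
    return cov / math.sqrt(va * vb)

print(corr(z, w))  # 1.0 up to floating-point error, not 4
```

The asker's numerator $4\sigma^2$ is correct; the missing piece is the denominator $\sigma(X+Y)\,\sigma(2X+2Y)=\sqrt{2\sigma^2}\cdot\sqrt{8\sigma^2}=4\sigma^2$, which gives $\rho=1$.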
|
3,204,082 | <p>I have a conjecture, but have no idea how to prove it or where to begin. The conjecture is as follows:</p>
<blockquote>
<p>A polynomial with all real irrational coefficients and no greatest common factor has no rational zeros.</p>
</blockquote>
<p>This conjecture excludes the cases where the polynomial does have a greatest common factor despite having an irrational coefficient, such as <span class="math-container">$x^3+\pi x^2=0$</span>, as that has rational zero <span class="math-container">$0$</span>.</p>
<p>I know that not all polynomials with rational coefficients have rational zeros, but I am not sure how to begin. How would I go about beginning to prove this? Has it already been proved - or is there a counterexample that I am missing?</p>
| Alexander Gruber | 12,952 | <p>If one were able to prove that, that would imply that <span class="math-container">$\pi x-e=0$</span> has no rational roots. However, it is not known whether <span class="math-container">$e/\pi$</span> is rational. So, I think your conjecture as stated would be difficult to prove.</p>
|
1,076,292 | <p>I wish to use two points, say $(x_1, y_1)$ and $(x_2, y_2)$, and obtain the coefficients of the line in the following form: $$ Ax + By + C = 0$$</p>
<p>Is there any direct formula to compute them?</p>
| Ahaan S. Rungta | 85,039 | <p>This is called solving simultaneous equations and is a matter of algebra. There are two ways to go about this but they are almost the same and have the same spirit. </p>
<p>The first way to do it is to solve directly for $A,B,C$ by substituting the points into $Ax+By+C=0$ to get $$ \begin {eqnarray*} Ax_1 + By_1 &=& -C, \\ Ax_2 + By_2 &=& -C. \end {eqnarray*} $$Do you know how to <a href="http://www.webmath.com/solver2.html" rel="nofollow">solve such a system</a>? </p>
<p>The second method involves finding the slope of the line, $ m = \frac {y_2-y_1}{x_2-x_1} $, and then substituting one of the points into the <a href="http://www.purplemath.com/modules/strtlneq.htm" rel="nofollow">slope-intercept form</a> of the line. For example, $$ y_2 = mx_2 + b, $$ and solve for $b$: $$ b = y_2 - mx_2 = y_1 - mx_1. $$Then, you have the equation in the form $ y = mx + b $. Now, you can rearrange terms to get it in the form $ Ax + By + C = 0 $. </p>
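<p>Both methods reduce to the direct formula $A = y_2 - y_1$, $B = x_1 - x_2$, $C = x_2 y_1 - x_1 y_2$, obtained by expanding $(y_2-y_1)(x-x_1) - (x_2-x_1)(y-y_1) = 0$. A small Python sketch with hypothetical helper names:</p>

```python
def line_through(p1, p2):
    """Coefficients (A, B, C) with A*x + B*y + C = 0 through p1 and p2."""
    (x1, y1), (x2, y2) = p1, p2
    return y2 - y1, x1 - x2, x2 * y1 - x1 * y2

A, B, C = line_through((1, 2), (3, 4))
print(A, B, C)  # 2 -2 2, i.e. 2x - 2y + 2 = 0, equivalently x - y + 1 = 0
print(A * 1 + B * 2 + C, A * 3 + B * 4 + C)  # 0 0: both points satisfy it
```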
|
3,738,508 | <p>If <span class="math-container">$G$</span> is order <span class="math-container">$p^2q$</span>, where <span class="math-container">$p$</span>, <span class="math-container">$q$</span> are primes, prove that either a Sylow <span class="math-container">$p$</span>-subgroup or a Sylow <span class="math-container">$q$</span>-subgroup must be normal in <span class="math-container">$G$</span>.</p>
| quasi | 400,434 | <p>Here's another proof . . .</p>
<p>
Assume <span class="math-container">$x^4+2$</span> factors in <span class="math-container">$\mathbb{Z}_5[x]$</span> as
<span class="math-container">$$x^4+2=(x^2+ax+b)(x^2+cx+d)$$</span>
Let <span class="math-container">$K$</span> be an algebraic closure of <span class="math-container">$\mathbb{Z}_5$</span>, and let <span class="math-container">$r$</span> be a root in <span class="math-container">$K$</span> of the polynomial <span class="math-container">$x^4+2$</span>.
<p>
Then identically we have
<span class="math-container">$$r^4=(-r)^4=(2r)^4=(-2r)^4$$</span>
hence <span class="math-container">$\pm r,\pm 2r$</span> are roots in <span class="math-container">$K$</span> of <span class="math-container">$x^4+2$</span>, and are distinct since <span class="math-container">$r\ne 0$</span>.
<p>
Since <span class="math-container">$b$</span> is the product of the roots in <span class="math-container">$K$</span> of <span class="math-container">$x^2+ax+b$</span>, it follows that <span class="math-container">$b$</span> is the product of two distinct elements of the set <span class="math-container">$\{\pm r,\pm 2r\}$</span>, hence <span class="math-container">$b\in\{-r^2,\pm 2r^2,-4r^2\}$</span>.
<p>
In any case, since <span class="math-container">$b\in\mathbb{Z}_5$</span>, it follows that <span class="math-container">$r^2\in\mathbb{Z}_5$</span>.
<p>
But <span class="math-container">$r^2\in\mathbb{Z}_5$</span> implies <span class="math-container">$r^4$</span> is a square in <span class="math-container">$\mathbb{Z}_5$</span>, contradiction, since <span class="math-container">$r^4=-2$</span> which is not a square in <span class="math-container">$\mathbb{Z}_5$</span>.
|
2,325,968 | <p>I was trying to calculate $e^{i\pi /3}$.
So here is what I did: $e^{i\pi /3} = (e^{i\pi})^{1/3} = (-1)^{1/3} = -1$</p>
<p>Yet when I plug $e^{i\pi /3}$ into my calculator it just prints $0.5 + 0.866i$.</p>
<p>Where am I wrong? </p>
| José Carlos Santos | 446,262 | <p>That's because your calculator <em>knows</em> that $e^{ix}=\cos(x)+i\sin(x)$, and therefore $e^{i\pi/3}=\cos\left(\frac\pi3\right)+i\sin\left(\frac\pi3\right)=\frac12+\frac{\sqrt3}2i$.</p>
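<p>The calculator's value is easy to reproduce (a quick Python sketch; note that even Python's principal fractional power of $-1$ gives the same root, not $-1$):</p>

```python
import cmath
import math

z = cmath.exp(1j * math.pi / 3)
print(z)  # (0.5000...+0.8660...j)

# cos(pi/3) + i sin(pi/3) = 1/2 + i sqrt(3)/2
expected = complex(0.5, math.sqrt(3) / 2)
print(abs(z - expected))  # ~0, floating-point error only

# The flawed step (-1)^{1/3}: the principal complex cube root of -1 is
# exp(i*pi/3), not -1; Python's power operator agrees.
print((-1) ** (1 / 3))  # (0.5000...+0.8660...j), not -1.0
```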
|
5,263 | <p>I recently tried to edit an old question <a href="https://mathoverflow.net/questions/39428/x-th-moment-method">x-th moment method</a> that had got bumped to the front page for other reasons. The post had an equation that was meant to be, and maybe at one point was, struck through, but it no longer is. That is, the source says <code><strike></strike></code>, but the rendered code has no strike-through. I tried <code>~~~~</code> and <code><s></s></code>, without luck, and eventually (since it seemed worse to have a known-wrong equation in the text without explicit indication) deleted it; but clearly this is not the best solution. How does one strike through in MO's flavour of Markdown?</p>
<p>EDIT: On experimentation, it seems to be about MathJax, not MarkDown: <strike>1 + 1 = 2</strike> versus <strike><span class="math-container">$1 + 1 = 2$</span></strike> (both surrounded by <code><strike></strike></code>). Is there a way to strike through an equation?</p>
| LSpice | 2,383 | <p>This is not really an answer, just a test in response to @CalvinKhor's <a href="https://meta.mathoverflow.net/questions/5263/mathjax-equivalent-of-strike-strike#comment27024_5264">comment</a>.</p>
<p><code>\require</code> in one answer (or comment?) seems to affect other answers. This may depend on which answer comes first, though.</p>
<p><span class="math-container">$\cancel{1 + 1 = 2}$</span>, <span class="math-container">$\enclose{horizontalstrike}{1 + 1 = 2}$</span>.</p>
|
1,301,476 | <p>(Cross-posted in <a href="https://matheducators.stackexchange.com/q/8173/"><strong>MESE 8173</strong></a>.) </p>
<p>I want to start doing mathematical Olympiad type questions but have absolutely no knowledge of how to solve these apart from my school curriculum. I'm $16$ but know maths up to the $18$ year old level. I think I will start by learning the theory of the topics (Elementary Number Theory, Combinatorics, Euclidean Plane Geometry), then go on to trying the questions, but I need help in knowing what books to use to learn the theory. I have seen several lists (such as <a href="http://www.artofproblemsolving.com/wiki/index.php/Olympiad_books" rel="nofollow noreferrer">http://www.artofproblemsolving.com/wiki/index.php/Olympiad_books</a>) but does anyone know which ones are best for my level of knowledge?</p>
<p>P.S. I live in UK if that matters.</p>
| Community | -1 | <p>My personal favourite for the mathematical olympiad is "the Mathematical Olympiad Handbook; An Introduction to Problem Solving" by A. Gardiner</p>
<p><a href="http://www.amazon.co.uk/The-Mathematical-Olympiad-Handbook-Introduction/dp/0198501056" rel="nofollow">http://www.amazon.co.uk/The-Mathematical-Olympiad-Handbook-Introduction/dp/0198501056</a></p>
<p>It covers the topics you mentioned as well as algebra and trigonometric formulae. It contains a brief introduction to the topics, as well as past papers with hints and partial solutions from 1967 to 1996. I'd recommend it. </p>
|
351,226 | <p>I am trying to brush up on my regular grammar knowledge to prepare for an interview, and I just am not able to solve this problem at all. This is NOT for homework, it is merely me trying to solve this.</p>
<p>I want to give a regular grammar for the language of the finite automaton whose screenshot is below. Please help me; if you can, a step-by-step answer would be of great assistance. Thank you!</p>
<p><img src="https://i.stack.imgur.com/U02gk.jpg" alt="enter image description here"></p>
| Tara B | 26,052 | <p>If you have a finite automaton for a regular language $L$, you can construct a right regular grammar for $L$ directly from the automaton, by using the states as the variables and the alphabet as the terminals and essentially just writing down the transition function in the form of productions.</p>
<p>So for your automaton, you would start with the transitions $A\to 1A$ and $A\rightarrow 0B$. (Except you would usually rename $A$ to $S$, because it's traditional to use $S$ for the start symbol.) You'll learn more by finishing the example for yourself, so I'll leave it at that for now, but feel free to ask for more help if you need it.</p>
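<p>The construction is mechanical enough to code (a hypothetical Python sketch; the transition table below is only the fragment of the automaton mentioned above, since the full automaton is in the image):</p>

```python
# Read a right regular grammar off a DFA: one variable per state,
# a production Q -> a R for each transition delta(Q, a) = R,
# and Q -> epsilon for each accepting state Q.
def dfa_to_grammar(delta, accepting):
    productions = []
    for (state, symbol), target in sorted(delta.items()):
        productions.append((state, symbol + target))
    for state in sorted(accepting):
        productions.append((state, ""))  # empty right-hand side = epsilon
    return productions

# Hypothetical fragment: the two transitions named in this answer,
# with B assumed accepting for illustration.
delta = {("A", "1"): "A", ("A", "0"): "B"}
prods = dfa_to_grammar(delta, accepting={"B"})
for left, right in prods:
    print(left, "->", right or "eps")
# A -> 0B
# A -> 1A
# B -> eps
```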
|
2,433,174 | <p>I'm struggling with the following (<em>is it true?</em>):</p>
<blockquote>
<p>Let <span class="math-container">$X$</span> be a set and denote <span class="math-container">$\aleph(X)$</span> the <em><strong>cardinality</strong></em> of <span class="math-container">$X$</span>. Suppose that <span class="math-container">$\aleph(X)\geq \aleph_0$</span>, the cardinality <span class="math-container">$\aleph_0:=\aleph(\Bbb N)$</span>, with <span class="math-container">$\Bbb N$</span> the set of natural numbers.</p>
<p>Consider two subsets <span class="math-container">$A_1,A_2\subset X$</span> with <span class="math-container">$\aleph(A_i)<\aleph(X)$</span>, <span class="math-container">$i=1,2$</span>. Show that <span class="math-container">$\aleph(\bigcup_1^2 A_i)<\aleph(X)$</span>.</p>
</blockquote>
<p>From this, it would follow by induction that <span class="math-container">$\aleph(\bigcup_1^n A_i)<\aleph(X)$</span>, for any sets <span class="math-container">$A_1,\dots, A_n\subset X$</span> with <span class="math-container">$\aleph(A_i)<\aleph(X)$</span>, <span class="math-container">$i=1,\dots,n$</span>.</p>
<p>With this result, I would be able to solve <em>Dugundji's Exercise 1-b (Chapter III - Topological Spaces)</em>, which asserts:</p>
<blockquote>
<p>1.b) If <span class="math-container">$\aleph(X)\geq \aleph_0$</span>, then <span class="math-container">$\scr{A}_1$$=\{\emptyset\}\cup \{A\,|\, \aleph(X-A)<\aleph(X)\}$</span> is a topology on <span class="math-container">$X$</span>.</p>
</blockquote>
<p>The fact that <span class="math-container">$\emptyset$</span> and <span class="math-container">$X$</span> are in <span class="math-container">$\scr A_1$</span> is easy. An arbitrary union of elements of <span class="math-container">$\scr A_1$</span> is in <span class="math-container">$\scr A_1$</span>, since the complement in <span class="math-container">$X$</span> of such a union is an <em><strong>intersection</strong></em> of complements, each of which has cardinality strictly smaller than <span class="math-container">$\aleph(X)$</span> (intersections decrease cardinality). But to show that finite intersections of elements of <span class="math-container">$\scr A_1$</span> are in <span class="math-container">$\scr A_1$</span>, I need the fact above...</p>
<p>It is easy to see that it is true if <span class="math-container">$\aleph(X)=\aleph_0$</span>, since in this case <span class="math-container">$A_1$</span> and <span class="math-container">$A_2$</span> must be finite. The problem arises if <span class="math-container">$\aleph_0\leq \aleph(A_i)<\aleph(X)$</span>...</p>
| Parcly Taxel | 357,390 | <p>Think of $T$ here as sending all points $(x,y)$ satisfying $x+2y=6$ to $(x',y')=(4x-y,3x-2y)$. Since there is an equation between $x$ and $y$, there must also be one between $x'$ and $y'$, and this latter equation will be the image line.
$$x'=4x-y$$
$$y'=3x-2y$$
$$-2x'=2y-8x$$
$$y'-2x'=-5x\qquad x=(2x'-y')/5\tag1$$
$$y'=\tfrac{3}{5}(2x'-y')-2y$$
$$2y=\tfrac{3}{5}(2x'-y')-y'=\tfrac{6}{5}x'-\tfrac{8}{5}y'$$
$$y=(3x'-4y')/5\tag2$$
Then substituting $(1)$ and $(2)$ into $x+2y=6$:
$$(2x'-y')/5+2(3x'-4y')/5=6$$
$$8x'-9y'=30$$
$$y'=(8x'-30)/9$$
This is the image line, and its slope is $\frac89$.</p>
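<p>A quick numerical check of the result (hypothetical Python sketch): take several points on $x+2y=6$, apply $T$, and confirm the images satisfy $8x'-9y'=30$:</p>

```python
def T(x, y):
    return 4 * x - y, 3 * x - 2 * y

# Points on x + 2y = 6: pick x freely, then y = (6 - x) / 2.
for x in (-2, 0, 6, 10):
    y = (6 - x) / 2
    xp, yp = T(x, y)
    print(8 * xp - 9 * yp)  # 30 every time
```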
|
1,603,272 | <p>I'm trying to figure out whether the sequence $e^{(-n)^n}$, where $n$ is a natural number, has a convergent subsequence. It's from a past exam paper. I know that I can't apply the Bolzano–Weierstrass theorem because the sequence is not bounded, but I'm not sure how to test for a convergent subsequence when the original sequence is not convergent. Thanks in advance</p>
| ncmathsadist | 4,154 | <p>What happens if you look at odd values of $n$?</p>
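<p>The hint, made concrete (a small Python sketch; for odd $n$ we have $(-n)^n = -n^n$, so that subsequence tends to $0$, while even $n$ gives $e^{n^n}\to\infty$):</p>

```python
import math

# Odd n: (-n)**n = -n**n, so e^{(-n)^n} = e^{-n^n} -> 0 very fast.
terms = [math.exp((-n) ** n) for n in (1, 3, 5, 7)]
print(terms)  # e^{-1}, e^{-27}, then underflow to 0.0 for n = 5 and n = 7
```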
|
1,591,863 | <p>I'm studying for the sat, and one question was presented as follows:</p>
<p>If $n$ is a positive integer such that the units (ones) digit of $n^2+4n$ is $7$ and the units digit of n is not $7$, what is the units digit of $n+3$?</p>
<p>So I'm trying to find $n$ such that:</p>
<p>$$(n^2+4n) \mod10=7$$</p>
<p>I know the answers are $n=9 \mod10$ and $n=7 \mod10$ from guessing and checking. However, time is limited to about $1$ min a question.</p>
<p>So I'm looking for someone to teach me how these algebra problems can be systematically solved. I would prefer an answer that does not say "use _ theorem".</p>
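<p>(For reference, here is my guess-and-check automated as a small Python sketch; only the residue $d = n \bmod 10$ matters:)</p>

```python
# The units digit of n^2 + 4n depends only on d = n mod 10.
solutions = [d for d in range(10) if (d * d + 4 * d) % 10 == 7]
print(solutions)  # [7, 9]

# The units digit of n is not 7, so n ends in 9, and n + 3 ends in 2.
print((9 + 3) % 10)  # 2
```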
| James | 81,163 | <p>Firstly, as a point of correct use of MSE, your question should be self-contained. In particular, we shouldn't have to read the title to find the hypotheses for the question. Anyway, on to the mathematics.</p>
<p>Note that a lattice can't have two minimal elements, so your tagging is incorrect.</p>
<p>So we have the following facts:</p>
<ol>
<li>$a$ and $b$ are $\leq$-minimal.</li>
<li>$c$ is $\leq$-maximum.</li>
<li>$P(a)$ is true and $P(b)$ is false.</li>
<li>If $x \leq y$, then $P(x) \Rightarrow P(y)$.</li>
</ol>
<p>From 2, we can conclude that $a \leq c$ and $b\leq c$. Also, $a\neq b$ (because $P(a)$ is true and $P(b)$ is false, so they must be different!), and hence, by 1, $a < c$ and $b < c$.</p>
<p>Therefore, $c$ is an element satisfying $a \leq c$ and $b\leq c$.</p>
<p>By 4 and 3, we know that $P(c)$ is true. Therefore (D) cannot be true, as $c$ is something greater than both $a$ and $b$, and $P(c)$ is true.</p>
<p>If you take the partial order on $\{a,b,c\}$ given by $a \leq c$ and $b \leq c$, $P(a),P(c)$ are true and $P(b)$ is false, we have something satisfying all the hypotheses, and, also (A), (B) and (C), showing that these can be true.</p>
|
253,584 | <p>Let $h:\mathbb{R}^n\to\mathbb{R}^m, n>1$ be a twice continuously differentiable function and $J_h:\mathbb{R}^n\to\mathbb{R}^{m\times n}$ be its jacobian matrix. Let us consider the functions $A(x):=J_h^\mathtt{T}(x)J_h(x)\in\mathbb{R}^{n\times n}$ and $B(x):=J_h(x)J_h(x)^\mathtt{T}\in\mathbb{R}^{m\times m}$.</p>
<p>I'm interested in sufficient conditions ensuring differentiability of the functions $U(x)$, $\Sigma(x)$ and $V(x)$ in a singular value decomposition of $J_h(x)=U(x)\Sigma(x)V(x)^\mathtt{T}$ when there is at least one repeating zero singular value (rank deficient case).</p>
<p>The question can be equivalently stated in terms of eigenvalues/eigenvectors of the symmetric matrices $A$ and $B$. Are there sufficient conditions to ensure differentiability of an eigenpair with a non-simple eigenvalue?</p>
<p>Appreciate any help.</p>
| Bazin | 21,907 | <p>Let me point out a more specific result for hyperbolic polynomials, known as Bronshtein's theorem (see e.g. the preprint <a href="https://arxiv.org/abs/1309.2150" rel="nofollow noreferrer">https://arxiv.org/abs/1309.2150</a> by A. Parusinski & A. Rainer). Let $p(X,y)$ be a polynomial with degree $m$ in the $X$ variable depending smoothly on $y\in \mathbb R^n$ and assume that the roots $\{\lambda_j(y)\}_{1\le j\le m}$ are real-valued (this is the hyperbolicity condition). Then a Lipschitz-continuous choice of the $\lambda_j$ is possible.</p>
<p>The improvement due to hyperbolicity is striking since without that assumption, Hölderian regularity with index $1/m$ is the best we can hope. The preprint quoted above is nicely written and is shedding new light on a classical result whose original proof was not so easily available.</p>
|
253,584 | <p>Let $h:\mathbb{R}^n\to\mathbb{R}^m, n>1$ be a twice continuously differentiable function and $J_h:\mathbb{R}^n\to\mathbb{R}^{m\times n}$ be its jacobian matrix. Let us consider the functions $A(x):=J_h^\mathtt{T}(x)J_h(x)\in\mathbb{R}^{n\times n}$ and $B(x):=J_h(x)J_h(x)^\mathtt{T}\in\mathbb{R}^{m\times m}$.</p>
<p>I'm interested in sufficient conditions ensuring differentiability of the functions $U(x)$, $\Sigma(x)$ and $V(x)$ in a singular value decomposition of $J_h(x)=U(x)\Sigma(x)V(x)^\mathtt{T}$ when there is at least one repeating zero singular value (rank deficient case).</p>
<p>The question can be equivalently stated in terms of eigenvalues/eigenvectors of the symmetric matrices $A$ and $B$. Are there sufficient conditions to ensure differentiability of an eigenpair with a non-simple eigenvalue?</p>
<p>Appreciate any help.</p>
| Benoît Kloeckner | 4,961 | <p>For symmetric matrices, you are good (in fact even in infinite dimension). Let me quote the MathReview of the following reference (itself quoting the article):</p>
<p>Kriegl, Andreas(A-WIEN); Michor, Peter W.(A-ERS)
Differentiable perturbation of unbounded operators. (English summary)
Math. Ann. 327 (2003), no. 1, 191–201. </p>
<p>Theorem. Let $t\mapsto A(t)$ for $t\in\Bbb R$ be a
curve of unbounded self-adjoint operators in a Hilbert space with
common domain of definition and with compact resolvent.
If $A(t)$ is real analytic in $t\in\Bbb R$, then the eigenvalues and the eigenvectors of $A(t)$ may be parameterized real analytically in $t$.</p>
<p>I guess several parameters do not hurt, and that it should be simpler for matrices. I also guess that Peter Michor will be able to say more; he is quite active on MO.</p>
<p>Let me mention that of course, when there is multiplicity and without self-adjointness one is in bad shape: you could find matrices with characteristic polynomials having factors like $x^2-t$ whose eigenvalue would not depend smoothly on $t$.</p>
|
1,552 | <p>Closely related: what is the smallest known composite which has not been factored? If these numbers cannot be specified, knowing their approximate size would be interesting. E.g. can current methods factor an arbitrary 200 digit number in a few hours (days? months? or what?).
Can current methods certify that an arbitrary 1000 digit number is prime or composite in a few hours (days? months? not at all?)?</p>
<p>Any broad-brush comments on the current status of primality proving, and how active this field is would be appreciated as well. Same for factoring.</p>
<hr>
<p>Edit: perhaps my header question was something of a troll. I am not interested in lists. But if anyone could shed light on the answers to the portion of my question starting with "E.g.". it would be appreciated. (I could answer it in 1990, but what is the status today?)</p>
| Aaron Meyerowitz | 8,008 | <p>Yes, factoring is an active area. I think there are ranges (maybe $80$ digits?) where factoring is reasonably tractable but many integers have never been examined.</p>
<p>Heuristic tests tell us with extremely high confidence whether much larger integers are prime. An actual proof certificate (e.g. by ECPP) is never a problem in practice.</p>
<p>Some people care about numbers like $n!+1$, skipping many lower numbers. A large integer needs something interesting about it to make it desirable to crack.</p>
<p>So I'll take your question as </p>
<blockquote>
<p>What size are the smallest composite integers that people are trying to factor?</p>
</blockquote>
<p>I'd guess around $200$ digits. I base this on the venerable and ongoing <a href="https://en.wikipedia.org/wiki/Cunningham_project" rel="nofollow noreferrer">Cunningham project</a>, which started before 1925 and is still going strong. It is concerned with <em>Factorizations of $b^n \pm 1$, $b = 2, 3, 5, 6, 7, 10, 11, 12$, up to High Powers</em>. <a href="https://homes.cerias.purdue.edu/~ssw/cun/want133" rel="nofollow noreferrer">Here are</a> the $10$ <strong>most</strong> wanted composites. Also on that page are the slightly <em>less</em> desirable $24$ <strong>more</strong> wanted composites.</p>
<p>Most seem to be in $200$ to $300$ digit range.</p>
<p>The number one most wanted is $2,1207$- $C337$ This means that </p>
<p>$$2^{1207}-1=(2^{17}-1)(2^{71}-1)C337$$</p>
<p>The first two factors (of which the first is prime and the second a product of three primes) are explained by algebra and the fact that $1207=17\cdot71.$ The final factor has 337 digits and is known to be composite, but that is all that is known (other than that it probably has no easy factors).</p>
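<p>The algebraic part of this factorization is cheap to verify with exact integer arithmetic (a quick Python sketch):</p>

```python
n = 2**1207 - 1

# 1207 = 17 * 71, and 2^a - 1 divides 2^(a*b) - 1, so both algebraic
# factors below divide n; they are coprime, so their product does too.
assert 1207 == 17 * 71
assert n % (2**17 - 1) == 0
assert n % (2**71 - 1) == 0

c = n // ((2**17 - 1) * (2**71 - 1))
print(len(str(c)))  # 337: the remaining cofactor is the C337
```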
<p>That most wanted list is from $2016.$ Among the more wanted is a $278$ digit composite factor of $7^{359}+1.$ On the <a href="https://homes.cerias.purdue.edu/~ssw/cun/page134" rel="nofollow noreferrer">New factors received from July 3, 2017 to January 30, 2018</a> page one finds that a $70$ digit prime was extracted, leaving a $208$ digit composite. </p>
<p>7, 359+ c278 2411303103482828219339285233829803927599026677829066502451625793020443. c208 NFS@Home snfs</p>
<p>This was a result from the distributed NFS@Home project using the special number field sieve.</p>
<p>Finally, on the <a href="https://homes.cerias.purdue.edu/~ssw/cun/page135" rel="nofollow noreferrer">new factorizations since January 30 2018 page</a> one finds that this composite is the product of two primes, one listed and the other deducible from the given information.</p>
<p>7, 359+ c208 19792929490897848580163051528461396921535266011583367339063480695499122468294419091343065854809576593. p108 NFS@Home snfs</p>
<p>Browse those pages to see who decides what is wanted and why and much more.</p>
|
2,051,555 | <p>I have the following limit to solve.</p>
<p>$$\lim_{x \rightarrow 0}(1-\cos x)^{\tan x}$$</p>
<p>I am normally supposed to solve it without using l'Hôpital, but I failed to do so even with l'Hôpital. I don't see how I can solve it without applying l'Hôpital a couple of times, which doesn't seem practical, nor how to solve the question without applying it. Thanks for the help.</p>
| Mark Viola | 218,419 | <p>Note that </p>
<p>$$\begin{align}
\left(1-\cos(x)\right)^{\tan(x)}&=\left(2\sin^2(x/2)\right)^{\tan(x)}\\\\
&=2^{\tan(x)}\,\left(\sin(x/2)\right)^{2\tan(x)}\\\\
&=2^{\tan(x)}\,\left(\left(\sin(x/2)\right)^{\sin(x/2)}\right)^{2\tan(x)/\sin(x/2)}\\\\
&=2^{\tan(x)}\,\left(\left(\sin(x/2)\right)^{\sin(x/2)}\right)^{4\cos(x/2)/\cos(x)}\\\\
\end{align}$$</p>
<p>Now, since $\lim_{x\to 0}2^{\tan(x)}=1$, $\lim_{x\to 0}4\cos(x/2)/\cos(x)=4$, and $\lim_{x\to 0}x^x=1$ (so that $\lim_{x\to 0}(\sin(x/2))^{\sin(x/2)}=1$), the limit of interest is $1$.</p>
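<p>A quick numerical sanity check of this limit (a Python sketch, not part of the argument): the convergence is slow, since the exponent $\tan x\to 0$ beats the vanishing base only logarithmically.</p>

```python
import math

def f(x):
    # (1 - cos x)^(tan x), well-defined for small x > 0
    return (1 - math.cos(x)) ** math.tan(x)

for x in (0.1, 0.01, 0.001, 0.0001):
    print(x, f(x))   # the values creep up toward 1
```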
|
2,051,555 | <p>I have the following limit to solve.</p>
<p>$$\lim_{x \rightarrow 0}(1-\cos x)^{\tan x}$$</p>
<p>I am normally supposed to solve it without using l'Hôpital, but I failed to do so even with l'Hôpital. I don't see how I can solve it without applying l'Hôpital a couple of times, which doesn't seem practical, nor how to solve the question without applying it. Thanks for the help.</p>
| K Split X | 381,431 | <p>Doing this without L'hopital is tricky, but you have to make estimates.</p>
<p>If we know the graphs of $cos(x)$ and $tan(x)$: $cos(0) = 1$ and $tan(0) = 0$.</p>
<p>So you have:</p>
<p>$$(1- cos(x))^{tan(x)}$$
$$(0)^{0}$$</p>
<p>But if we estimate (right hand side limit), we have something like:</p>
<p>$$(1- 0.99)^{0.01}$$
$$(0.01)^{0.01}$$
which is approximately 1.</p>
|
92,296 | <p>I am trying to review for calculus and I can't figure out how to do $\sqrt{200} - \sqrt{32}$.</p>
| Bill Dubuque | 242 | <p>When simplifying radicals the first step is to expose multiplicative dependencies by normalizing the radicands to be <em>squarefree,</em> i.e. pull out square factors. In your example we have $\rm 200 = 2\cdot 10^2\ $ and $\ 32 = 2\cdot 4^2\ $ so we obtain $\rm \sqrt{200}-\sqrt{32}\ = \sqrt{2\cdot 10^2}-\sqrt{2\cdot 4^2}\ =\ 10\ \sqrt{2} - 4\ \sqrt{2}\ =\ 6\ \sqrt{2}\:.$</p>
<p>When you go on to study the Galois theory of radical extensions (Kummer theory) you will learn general results saying roughly that these are the only type of algebraic dependencies that can occur, so this simple-minded approach will work generally. For some general algorithms see my <a href="https://math.stackexchange.com/a/4697/242">post here</a> and see Bill Gosper's reply there for some striking radical identities (if anyone deserves to be called a modern equivalent of Ramanujan then Gosper is surely a strong candidate).</p>
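<p>The squarefree-normalization step is mechanical enough to code; here is a small sketch (helper name mine) that pulls out the largest square factor and confirms the arithmetic above.</p>

```python
import math

def squarefree_decompose(n):
    # write n = k^2 * m with m squarefree; return (k, m)
    k = 1
    d = 2
    while d * d <= n:
        while n % (d * d) == 0:
            n //= d * d
            k *= d
        d += 1
    return k, n

assert squarefree_decompose(200) == (10, 2)   # 200 = 10^2 * 2
assert squarefree_decompose(32) == (4, 2)     # 32  = 4^2  * 2
# so sqrt(200) - sqrt(32) = 10*sqrt(2) - 4*sqrt(2) = 6*sqrt(2)
assert abs((math.sqrt(200) - math.sqrt(32)) - 6 * math.sqrt(2)) < 1e-12
```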
|
3,588,053 | <p>Is this a valid proof that the harmonic series diverges?</p>
<ol>
<li>Assume the series converges to a value, S:</li>
</ol>
<p><span class="math-container">$$S=1+\frac{1}{2}+\frac{1}{3}+\frac{1}{4}+\frac{1}{5}+...$$</span></p>
<ol start="2">
<li>Split the series into two, with alternating even and odd denominators. Since the original series converges, the component series will converge.</li>
</ol>
<p><span class="math-container">$$S_{EVEN}=\frac{1}{2}+\frac{1}{4}+\frac{1}{6}+\frac{1}{8}+...$$</span>
<span class="math-container">$$S_{ODD}=1+\frac{1}{3}+\frac{1}{5}+\frac{1}{7}+...$$</span>
<span class="math-container">$$S=S_{EVEN}+S_{ODD}$$</span></p>
<ol start="3">
<li>Show that <span class="math-container">$S_{EVEN}=\frac{1}{2}S$</span></li>
</ol>
<p><span class="math-container">$$\frac{1}{2}S=\frac{1}{2}(1+\frac{1}{2}+\frac{1}{3}+\frac{1}{4}+\frac{1}{5}+...)=\frac{1}{2}+\frac{1}{4}+\frac{1}{6}+\frac{1}{8}+...=S_{EVEN}$$</span></p>
<ol start="4">
<li><p>Show <span class="math-container">$S_{ODD}>S_{EVEN}$</span> because each odd term is greater than its corresponding even term:
<span class="math-container">$$1>\frac{1}{2}\qquad \frac{1}{3}>\frac{1}{4}\qquad \frac{1}{5}>\frac{1}{6}\qquad ...$$</span></p></li>
<li><p>Show <span class="math-container">$S_{ODD}=S_{EVEN}$</span>
<span class="math-container">$$S_{ODD}=S-S_{EVEN}=S-\frac{1}{2}S=\frac{1}{2}S=S_{EVEN}$$</span></p></li>
<li><p>The contradiction implies that the original assumption of convergence is false:</p></li>
</ol>
<p><span class="math-container">$$S_{ODD}>S_{EVEN}$$</span>
<span class="math-container">$$S_{ODD}=S_{EVEN}$$</span>
<span class="math-container">$$\therefore S\ne 1+\frac{1}{2}+\frac{1}{3}+\frac{1}{4}+\frac{1}{5}+...$$</span></p>
| Simply Beautiful Art | 272,831 | <p>This is almost valid. We need to justify the second step, as mentioned by <a href="https://math.stackexchange.com/questions/3588053/is-this-a-valid-proof-that-the-harmonic-series-diverges#comment7376889_3588053">Ross Millikan</a>, as it is not always valid to split a series into their even and odd terms.</p>
<p>Take, as a simple example, the alternating harmonic series, where you would get it equalling <span class="math-container">$\infty-\infty$</span> which is indeterminate, but it does not make sense for the convergence of a series to be indeterminate.</p>
<p>This can be justified by seeing your series is absolutely convergent, assuming it converges.</p>
<p>If one must be pedantic, the same issue occurs showing <span class="math-container">$S_\mathrm{ODD}>S_\mathrm{EVEN}$</span>, but this can be more easily justified by the fact we are comparing terms in the order that they are summed. If they were not compared in this order and their respective series converged conditionally, this may not be true.</p>
<p>Aside from all that it looks good. If I may provide an alternative proof of similar approach, it would've sufficed to have shown that</p>
<p><span class="math-container">$$S=1+\frac12+\frac13+\frac14+\dots>\frac12+\frac12+\frac14+\frac14+\dots=S$$</span></p>
|
534,500 | <p>Is it true that if a sequence of random matrices $\{X_n\}$ converge in probability to a random matrix $X_n\overset{P}{\to}X$ as $n\to\infty$ that the elements $X_n^{(i,j)}\overset{P}{\to} X^{(i,j)}$ $\forall i,j$ also, or are there additional conditions required?</p>
<p>I think I have proved this using the norm $\|A\|=\max_j \sum_i |A^{(i,j)}|$ and the equivalence of norms, however I have only found it stated elsewhere with the caveat that $\{X_n\}$ are symmetric or symmetric and nonnegative-definite.</p>
| Did | 6,179 | <p>Yes: if $Y=\max\limits_{1\leqslant k\leqslant N}|Y_k|$ converges to $0$ in probability, then each $Y_k$ converges to $0$ in probability. </p>
<p><em>Proof:</em> For every $\varepsilon\gt0$, $[Y_k\geqslant\varepsilon]\subseteq[Y\geqslant\varepsilon]$. QED</p>
|
1,470,760 | <p>Okay, so $P(A)=0.2$, $P(B)=0.5$, and the probability that both $A$ and $B$ occur is equal to $0.12$.</p>
<p>What is $P((A \cap B) \cup A^c)$?</p>
<p>What I basically did was $0.12 \times 0.5+0.5+0.2-0.12 = 1.2$. </p>
<p>Am I doing it right?</p>
| Simon S | 21,495 | <p><a href="https://i.stack.imgur.com/lfH6q.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lfH6q.png" alt="enter image description here" /></a></p>
<p>The set of all events here is <span class="math-container">$X$</span> and hence <span class="math-container">$P(X) = 1$</span>. We are given that <span class="math-container">$P(A) = 0.2$</span> and <span class="math-container">$P(B) = 0.5$</span>, as well as <span class="math-container">$P(A\cap B) = 0.12$</span>.</p>
<p>Also, <span class="math-container">$P(A) + P(A^c) = 1$</span> and <span class="math-container">$P(B) + P(B^c) = 1$</span>, as in fact for any set of events <span class="math-container">$Y \subset X$</span>, we have that the total probability <span class="math-container">$$P(Y) + P(Y^c) = P(Y \cup Y^c) = P(X) = 1.$$</span></p>
<p>Now, what is <span class="math-container">$P((A\cap B)\cup A^c)$</span> in terms of <span class="math-container">$P(A), P(B), P(A \cap B)$</span> or <span class="math-container">$P(A^c)$</span>?</p>
|
58,947 | <p>Let $X$ be a non-compact holomorphic manifold of dimension $1$. Is there a compact Riemann surface $\bar{X}$ such that $X$ is biholomorphic to an open subset of $\bar{X}$ ?</p>
<p><strong>Edit:</strong> To rule out the case where $X$ has infinite genus, perhaps one could add the hypothesis that the topological space $X^{\mathrm{end}}$ (is it a topological surface?), obtained by adding the <em>ends</em> of $X$, has finitely generated $\pi_1$ (or $H_1$ ). Would the new question make sense and/or be of any interest?</p>
<p><strong>Edit2:</strong> What happens if we require that $X$ has finite genus? (the <em>genus</em> of a non-compact surface, as suggested in a comment below, can be defined as the maximal $g$ for which a compact Riemann surface $\Sigma_g$ minus one point embeds into $X$)</p>
| André Henriques | 5,690 | <p>No. Take a surface of infinite genus.</p>
|
3,439,223 | <blockquote>
<p>Given the metric space <span class="math-container">$(X,d)$</span>:</p>
<p>If <span class="math-container">$M\subset N \subset X$</span>, <span class="math-container">$M\neq 0$</span>, we have <span class="math-container">$\text{diam}(M)\leq
\text{diam}(N)$</span></p>
</blockquote>
<p>Negating the statement, we assume that - given the assumptions - we deduce <span class="math-container">$\text{diam}(M):=\sup\limits_{x,y\in M}d(x,y)>\sup\limits_{u,v\in N}d(u,v)=:\text{diam}(N)$</span>. This can't be true, since <span class="math-container">$x,y\in M \implies x,y\in N$</span> but <span class="math-container">$u,v\in N$</span> does not necessarily imply that <span class="math-container">$u,v\in M$</span>.</p>
<p>I don't really know how to put it formally though.</p>
| Clement Yung | 620,517 | <p>For any <span class="math-container">$A,B \subseteq \mathbb{R}$</span>, if <span class="math-container">$\emptyset \neq A \subseteq B$</span>, then <span class="math-container">$\sup{A} \leq \sup{B}$</span> (try to prove this yourself if you're not convinced).</p>
<p>If you agree with this, then since:
<span class="math-container">$$
\emptyset \neq M \subseteq N \Rightarrow \emptyset \neq \{d(x,y) \mid x,y \in M\} \subseteq \{d(x,y) \mid x,y \in N\}
$$</span>
The result follows. </p>
|
923,235 | <p>Let $A=\begin{pmatrix}a&b\\c&d\end{pmatrix}$
be a matrix of complex numbers. Find the characteristic polynomial $\chi_A(t)$ of $A$ and compute $\chi_A(A)$.</p>
<p>I just wanted to confirm that I did this correctly.</p>
<p>The answer I have is:
$$\chi_A(t)= \det\begin{pmatrix}a-t&b\\c&d-t\end{pmatrix}
=(a-t)(d-t)-bc
=ad-bc-at-dt+t^2.
$$
Thus
$$
\chi_A(A)=
\begin{pmatrix}a-(ad-bc-at-dt+t^2)&b\\c&d-(ad-bc-at-dt+t^2)\end{pmatrix}
$$</p>
<p>Is this the right thinking?</p>
| Bman72 | 119,527 | <p>In general for a matrix
$$A=\begin{pmatrix}a&b\\c&d\end{pmatrix}\in M(2 \times 2; \Bbb{C})$$
we have that the characteristic polynomial $\chi_A(t)$ is
\begin{align*}
X_A(t):&=\text{det}(A-t\Bbb{1})\\
&=\det\begin{pmatrix}a-t&b\\c&d-t\end{pmatrix}\\
&=(a-t)(d-t)-bc\\
&=t^2-(a+d)t+ad-bc\\
&=t^2-\text{tr}(A)t+\text{det}(A)
\end{align*}
By the theorem of <a href="http://en.wikipedia.org/wiki/Cayley%E2%80%93Hamilton_theorem" rel="nofollow">Cayley-Hamilton</a>, we have that </p>
<p>$$\chi_A(A) = \begin{pmatrix}a&b\\c&d\end{pmatrix}^2-(a+d)\begin{pmatrix}a&b\\c&d\end{pmatrix}+(ad-bc)\begin{pmatrix}1&0\\0&1\end{pmatrix}=\begin{pmatrix}0&0\\0&0\end{pmatrix}$$</p>
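<p>For a concrete matrix this is easy to check numerically; below is a small Python sketch (example matrix chosen arbitrarily) verifying the Cayley-Hamilton identity $A^2-\operatorname{tr}(A)\,A+\det(A)\,I=0$ for the $2\times 2$ case.</p>

```python
# numeric check of Cayley-Hamilton for a 2x2 matrix: A^2 - tr(A) A + det(A) I = 0
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2], [3, 4]]                           # an arbitrary example matrix
tr = A[0][0] + A[1][1]                         # trace = 5
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]    # determinant = -2

A2 = matmul(A, A)
Z = [[A2[i][j] - tr * A[i][j] + det * (1 if i == j else 0) for j in range(2)]
     for i in range(2)]
assert Z == [[0, 0], [0, 0]]                   # the zero matrix, as claimed
```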
|
1,095,870 | <p>How can I solve the following inequality?</p>
<blockquote>
<p>$$\frac{\cos x -\tan^2(x/2)}{e^{1/(1+\cos x)}}>0$$</p>
</blockquote>
| orangeskid | 168,051 | <p>Note that
$$\frac{\cos x -\tan^2(x/2)}{e^{1/(1+\cos x)}} =\frac{ (-1 + 2 \cos x+ \cos^2 x)}{(1 + \cos x)} \cdot e^{-1/(1+\cos x)}$$</p>
<p>$\frac{e^{-1/(1+\cos x)}}{(1 + \cos x)} $ should be taken as a whole, because it is defined everywhere. It is $>0$ everywhere except at $(2k+1)\pi$ where it's zero. Therefore, we can focus on the numerator
$-1 + 2 \cos x+ \cos^2 x$, which will be $>0$ exactly on the countable union of intervals
$$( - \arccos(\sqrt{2}-1) + 2 k \pi, \arccos(\sqrt{2}-1) + 2 k \pi)$$
and this gives the solution for $x$.</p>
<p>It correlates with the answer of @Gonate: since $\tan ( \frac{1}{2}\cdot \arccos(\sqrt{2}-1)) = \sqrt{ \sqrt{2}-1}$.</p>
<p>$\arccos(\sqrt{2}-1)= 1.1437.. $ radians or approximately $65.53^{\circ}$ degrees. </p>
<p><img src="https://i.stack.imgur.com/1ndCO.png" alt="Graph of the function "></p>
<p>The factor $e^{-1/(1+\cos x)}$ flattens away the singularities arising from the denominator around $(2k+1)\pi$. </p>
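<p>A quick numerical check of the endpoints and signs (a Python sketch; function names mine):</p>

```python
import math

# the numerator cos^2 x + 2 cos x - 1 vanishes where cos x = sqrt(2) - 1
x0 = math.acos(math.sqrt(2) - 1)
print(x0, math.degrees(x0))   # ~1.1437 rad, ~65.53 degrees

# agreement with the tan(x/2) form: tan(x0/2) = sqrt(sqrt(2) - 1)
assert abs(math.tan(x0 / 2) - math.sqrt(math.sqrt(2) - 1)) < 1e-12

def g(x):
    # the original expression
    return (math.cos(x) - math.tan(x / 2) ** 2) / math.exp(1 / (1 + math.cos(x)))

# positive just inside the interval, negative just outside
assert g(x0 - 0.1) > 0 and g(x0 + 0.1) < 0
```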
|
536,128 | <p>I was trying to make sense of a problem when I stumbled upon this on yahoo answers. I was just wondering if it was correct. If it is, can you please maybe explain why?</p>
<p>${\bf r}'(t) = \langle -5 \cos t, -5 \cos t, -4 \sin t \rangle$</p>
<p>${\bf r}''(t) = \langle 5 \sin t, 5 \sin t, -4 \cos t \rangle$. </p>
<p>${\bf r}'(t) \times {\bf r}''(t) = \langle 20, -20, 0 \rangle$. </p>
| Bill Cook | 16,423 | <p>Yes. This is "legal" for continuous curves. </p>
<p>The heart of the issue is whether your approximation is actually converging to the object in question or not.</p>
<p>For area under a curve, one can show that the Riemann sums (sums of areas of rectangles) better and better approximate the area under the curve as the number of rectangles increases. Thus -- Voila! -- the limit gives us the area under the curve (in fact we may <em>define</em> area in this way). </p>
<p>On the other hand, the $\pi=4$ false proof tricks you into thinking the approximation is approaching the circle when it really isn't -- or at least its arc length isn't. </p>
<p>The curve you are using to approximate the circle is getting more and more jagged - not smoother and smoother. This kind of indicates that it's a poor approximation of a circle.</p>
<p>Or from a another viewpoint, the square containing the circle gives an overestimate for the circumference. Similarly one could inscribe a square (or diamond) and get an underestimate of $2\sqrt{2}$. As you continue to "flip corners over" the estimates remain constant revealing that your "approximations" aren't going anywhere. </p>
<p>You must be careful anytime you send things off to infinity. Finitely many steps don't necessarily predict the ultimate outcome. </p>
<p>Edit: More on why Riemann sums converge to the area...</p>
<p>This can be better seen using <a href="http://en.wikipedia.org/wiki/Darboux_integral" rel="nofollow">Darboux integrals</a>. Darboux uses a set of rectangles that over-estimate the area and then a second set of rectangles that under-estimate the area. As we use more and more rectangles, the upper bound keeps on creeping down and the lower bound keeps on creeping up. For nice enough functions (such as continuous functions) these two numbers approach each other. So these approximations are getting better and better as the number of rectangles increases.</p>
<p>In addition one can prove that any choice of rectangles (as long as their widths are heading to zero) will tend to this same number (mostly because it's squeezed between these bounds).</p>
<p>For the squaring the circle $\pi=4$ false proof the estimate never gets better.</p>
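<p>The constancy of the staircase estimate can be made concrete (a Python sketch illustrating the same phenomenon, not the exact construction of the false proof): an axis-aligned staircase along a quarter of the unit circle always has total length $2$, no matter how fine, while the true arc length is $\pi/2$.</p>

```python
import math

def staircase_length(n):
    # sum of |dx| + |dy| along an n-step axis-aligned staircase
    # following the quarter circle y = sqrt(1 - x^2), 0 <= x <= 1
    xs = [i / n for i in range(n + 1)]
    ys = [math.sqrt(1 - x * x) for x in xs]
    return sum(abs(xs[i + 1] - xs[i]) + abs(ys[i + 1] - ys[i])
               for i in range(n))

for n in (4, 64, 4096):
    print(n, staircase_length(n))   # always 2.0 -- refining never helps
print(math.pi / 2)                  # 1.5707... -- the actual arc length
```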
|
4,043,625 | <p><span class="math-container">\begin{equation}
\left\{\begin{array}{@{}l@{}}
2x\equiv7\mod9 \\
5x\equiv2\mod6
\end{array}\right.\,.
\end{equation}</span>
Can this system of congruences be solved?
I notice that <span class="math-container">$(9,6) = 3 \ne 1$</span> so I can't apply the Chinese Remainder Theorem directly, but this doesn't imply that it can't be solved, so I thought to rewrite the second congruence as two separate congruences, like this:</p>
<p><span class="math-container">\begin{equation}
\left\{\begin{array}{@{}l@{}}
2x\equiv7\mod9 \\
x\equiv0\mod2 \\
5x\equiv2\mod3
\end{array}\right.\,.
\end{equation}</span></p>
<p>But the problem is still here: <span class="math-container">$(9,3) = 3 \ne 1$</span>, so can it be solved, or is it impossible?</p>
| Arthur | 15,500 | <p>Just because they say it's <span class="math-container">$O(\log_2(n))$</span>, that doesn't mean it can't also be <span class="math-container">$O(\log_2(n+1))$</span>. In fact, we have
<span class="math-container">$$
O(\log_2(n)) = O(\log_2(n+1))
$$</span>
and the two complexity categories are entirely equal. We see this because on one hand we have
<span class="math-container">$$
f(n)\in O(\log_2(n))\implies f(n)\leq c\cdot \log_2(n)\leq c\cdot\log_2(n+1)\\
\implies f(n)\in O(\log_2(n+1))
$$</span>
(where <span class="math-container">$c$</span> is some constant), while on the other hand we have
<span class="math-container">$$
f(n)\in O(\log_2(n+1))\implies f(n)\leq d\cdot\log_2(n+1)\leq d\cdot 2\log_2(n)\\
\implies f(n)\in O(\log_2(n))
$$</span>
(where <span class="math-container">$d$</span> is some constant). Since the two are equal, the simpler alternative is often preferred over the more transparent option.</p>
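<p>Numerically the two bounds are indistinguishable in the limit; a short Python sketch of both inequalities used above:</p>

```python
import math

# the ratio log2(n+1) / log2(n) tends to 1, so each function bounds the
# other up to a constant factor -- the defining property of equal O-classes
for n in (2, 10, 10**3, 10**6):
    print(n, math.log2(n + 1) / math.log2(n))

assert math.log2(10**6 + 1) / math.log2(10**6) < 1 + 1e-6
# the constant-factor bound used above: log2(n+1) <= 2*log2(n) for n >= 2
assert all(math.log2(n + 1) <= 2 * math.log2(n) for n in range(2, 1000))
```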
|
2,118,761 | <p>How can I show that there are an infinite number of primes by using the Fundamental Theorem of Arithmetic?</p>
| Harnoor Lal | 396,710 | <p>You start by assuming the opposite. Let's say there are finitely many prime numbers; in fact, let's write them in a list. <br/></p>
<p>$P_1$, $P_2$, $P_3$, ... $P_n$ <br/></p>
<p>Note, this is a <strong>complete</strong> list. <br/></p>
<p>Now let's form a new number $a$, by multiplying all of our prime numbers and adding $1$. According to the Fundamental Theorem of Arithmetic, every integer greater than $1$ is either prime or a unique factorization of primes. So let's try both of these possibilities. <br/></p>
<p><strong>Possibility 1:</strong> $a$ is prime. However, we previously wrote all the primes on our complete list, so this is a <strong>contradiction</strong>. <br/></p>
<p><strong>Possibility 2:</strong> $a$ is composite. However, if it is, it needs to be a unique factorization of the primes on our list. But <strong>none</strong> of $P_1$, $P_2$, $P_3$, ..., or $P_n$ divides $a$ exactly: dividing $a$ by any of them leaves remainder $1$. So it is a violation of the Fundamental Theorem of Arithmetic, and therefore a <strong>contradiction</strong>.</p>
<p>Because we get a contradiction when we assume there are finitely many primes, it must be the opposite, or there must be infinitely many primes. <br/></p>
<p>Q.E.D</p>
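<p>The construction can be tried out concretely (a Python sketch; the six-prime "complete list" is of course just for illustration):</p>

```python
# Euclid's construction made concrete: multiply the "complete list" of primes
# and add 1; the result is divisible by none of them.
primes = [2, 3, 5, 7, 11, 13]          # pretend this list were complete
a = 1
for p in primes:
    a *= p
a += 1                                  # a = 30031

assert all(a % p == 1 for p in primes)  # every listed prime leaves remainder 1
# a itself need not be prime -- its prime factors are simply new ones:
assert a == 30031 == 59 * 509           # 59 and 509 are primes not on the list
```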
|
2,069,392 | <p>Given that $x^4+px^3+qx^2+rx+s=0$ has four positive roots.</p>
<p>Prove that (1) $pr-16s\ge0$ (2) $q^2-36s\ge 0$</p>
<p>with equality in each case holds if and only if four roots are equal.</p>
<p><strong>My Approach:</strong></p>
<blockquote>
<p>Let roots of the equation</p>
<p>$x^4+px^3+qx^2+rx+s=0$ be $\alpha,\beta,\eta,\delta$</p>
<p>$\alpha>0,\beta>0,\eta>0,\delta>0$</p>
<p>$\sum\alpha=-p$</p>
<p>$\sum\alpha\beta=q$</p>
<p>$\sum\alpha\beta\eta=-r$</p>
<p>$\alpha\beta\eta\delta=s$</p>
<p>I am confused , what is next step? please help me </p>
</blockquote>
| ajotatxe | 132,456 | <p>We'll need the following:</p>
<blockquote>
<p>If $a_1,\ldots,a_n>0$ then $$\left(\sum_{k=1}^na_k\right)\left(\sum_{k=1}^n\frac1{a_k}\right)\ge
n^2$$
<em>Proof</em>: By the AM-GM inequality, $\sum a_k\ge n\sqrt[n]{\prod a_k}$ and $\sum 1/a_k\ge n\sqrt[n]{\prod 1/a_k}$; the two geometric means are reciprocals, so multiplying the bounds gives $n^2$.</p>
</blockquote>
<p>To ease writing, I'll use latin letters for the roots: $t,u,v,w$.
$$\begin{align}pr&=(t+u+v+w)(tuv+tuw+tvw+uvw)\\
&=tuvw(t+u+v+w)\left(\frac1t+\frac1u+\frac1v+\frac1w\right)\\
&\ge 16s
\end{align}$$</p>
<p>The second inequality can be proven similarly:</p>
<p>$$\begin{align}q^2&=(tu+tv+tw+uv+uw+vw)^2\\
&=tuvw(tu+tv+tw+uv+uw+vw)\left(\frac1{tu}+\frac1{tv}+\frac1{tw}+\frac1{uv}+\frac1{uw}+\frac1{vw}\right)\\
&\ge 36s
\end{align}$$</p>
<p>Generalizing, if the polynomial
$$\sum_{k=0}^na_kx^k$$
has $n$ positive real roots and $a_n=1$, then
$$|a_ka_{n-k}|\ge\binom nk^2|a_0|$$
for $k\in\{1,\ldots,n-1\}$.</p>
|
3,821,049 | <blockquote>
<p>Find all complex solutions of <span class="math-container">$$e^{-iz}=\frac{-i+\sqrt 2+1}{-i-\sqrt 2-1}$$</span>
If a solution is <span class="math-container">$z=x+iy$</span> we set <span class="math-container">$\mathfrak{Re}(z)=x$</span> and <span class="math-container">$\mathfrak{Im}(z)=y$</span>.</p>
</blockquote>
<p>How do you solve this problem. I multiplied numerator and denominator by the complex conjugate of the denominator <span class="math-container">$(i-\sqrt 2-1)$</span>. After a lot of simplifications, I get <span class="math-container">$-0.5\sqrt 2 +0.5\sqrt 2 i$</span>.</p>
<p><span class="math-container">$e^{-zi}$</span> is then equal to <span class="math-container">$-0.5\sqrt 2+0.5\sqrt 2\, i$</span>. Am I right? Now, how to find <span class="math-container">$z$</span>?</p>
| user | 505,767 | <p>We have that</p>
<p><span class="math-container">$$\frac{-i+\sqrt 2+1}{-i-\sqrt 2-1}\frac{i-\sqrt 2-1}{i-\sqrt 2-1}=\frac{-2-2\sqrt 2+i(2+2\sqrt 2)}{4+2\sqrt2}=-\frac{\sqrt 2}2+i\frac{\sqrt 2}2$$</span></p>
<p>then use Euler's formula</p>
<p><span class="math-container">$$e^{-iz}=e^{y-ix}=e^y\left(\cos x-i\sin x\right)$$</span></p>
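<p>A numerical check (Python sketch): the final matching step below, $y=0$ and $x=-\frac{3\pi}{4}+2k\pi$, is my own completion read off from comparing moduli and arguments, and is not spelled out above.</p>

```python
import cmath, math

rhs = (-1j + math.sqrt(2) + 1) / (-1j - math.sqrt(2) - 1)
# the simplified value: -sqrt(2)/2 + i*sqrt(2)/2
assert abs(rhs - (-math.sqrt(2) / 2 + 1j * math.sqrt(2) / 2)) < 1e-12

# matching e^y (cos x - i sin x) against this value forces e^y = 1 (so y = 0)
# and x = -3*pi/4 + 2*k*pi; check one such z
z = -3 * math.pi / 4
assert abs(cmath.exp(-1j * z) - rhs) < 1e-12
```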
|
126,901 | <p>How to evaluate this determinant $$\det\begin{bmatrix}
a& b&b &\cdots&b\\ c &d &0&\cdots&0\\c&0&d&\ddots&\vdots\\\vdots &\vdots&\ddots&\ddots& 0\\c&0&\cdots&0&d
\end{bmatrix}?$$</p>
<p>I am looking for different approaches.</p>
| J. M. ain't a mathematician | 498 | <p>Your <em>(upper) arrowhead matrix</em> can be decomposed as follows:</p>
<p>$$\begin{pmatrix}a&b&b&\cdots&b\\c&d&0&\cdots&0\\c&0&d&\ddots&\vdots\\\vdots &\vdots&\ddots&\ddots&0\\c&0&\cdots&0&d\end{pmatrix}=\color{red}{\begin{pmatrix}a-b-c&&&&\\&d&&&\\&&d&&\\&&&\ddots&\\&&&&d\end{pmatrix}}+\color{blue}{\begin{pmatrix}1&c\\&c\\&c\\&\vdots\\&c\end{pmatrix}}\cdot\color{magenta}{\begin{pmatrix}b&b&b&\cdots&b\\1&&&&\end{pmatrix}}$$</p>
<p>Now, one can then use the <a href="https://en.wikipedia.org/wiki/Matrix_determinant_lemma" rel="noreferrer">Sherman-Morrison-Woodbury formula for determinants</a>:</p>
<p>$$\det(\color{red}{\mathbf A}+\mathbf{\color{blue}{U}\color{magenta}{V^\top}}) = \det(\mathbf I + \color{magenta}{\mathbf V^\top}\color{red}{\mathbf A}^{-1}\color{blue}{\mathbf U})\det(\color{red}{\mathbf A})$$</p>
<p>to yield</p>
<p>$$\begin{align*}
&\begin{vmatrix}a&b&b&\cdots&b\\c&d&0&\cdots&0\\c&0&d&\ddots&\vdots\\\vdots &\vdots&\ddots&\ddots&0\\c&0&\cdots&0&d\end{vmatrix}\\
&=\det\left(\mathbf I+\color{magenta}{\begin{pmatrix}b&b&\cdots&b\\1&&&\end{pmatrix}}\color{red}{\begin{pmatrix}\frac1{a-b-c}&&&\\&\frac1d&&\\&&\ddots&\\&&&\frac1d\end{pmatrix}}\color{blue}{\begin{pmatrix}1&c\\&c\\&\vdots\\&c\end{pmatrix}}\right)\color{red}{(a-b-c)d^{n-1}}\\
&=\det\left(\mathbf I+\color{magenta}{\begin{pmatrix}b&b&\cdots&b\\1&&&\end{pmatrix}}\color{orange}{\begin{pmatrix}\frac1{a-b-c}&\frac{c}{a-b-c}\\&\frac{c}{d}\\&\vdots\\&\frac{c}{d}\end{pmatrix}}\right)\color{red}{(a-b-c)d^{n-1}}\\
&=\det\left(\mathbf I+\color{green}{\begin{pmatrix}\frac{b}{a-b-c}&bc\left(\frac1{a-b-c}+\frac{n-1}{d}\right)\\\frac1{a-b-c}&\frac{c}{a-b-c}\end{pmatrix}}\right)\color{red}{(a-b-c)d^{n-1}}\\
&=\frac{bc(1-n)+ad}{d(a-b-c)}(a-b-c)d^{n-1}\\
&=(bc(1-n)+ad)d^{n-2}
\end{align*}$$</p>
|
2,517,469 | <p>Let $P$ be a projective module and $P=P_1+N$, where $P_1$ is a direct summand of $P$ and $N$ is a submodule. Show that there is $P_2\subseteq N$ such that $P=P_1\oplus P_2$. </p>
<p>I know that there is a submodule $P'$ of $P$ such that $P=P_1\oplus P'$. I wanted to consider the projection from this to $P_1$ and use the definition of being projective. But I would also need a map from $P=P_1+N$ to $P_1$ and I don't know how to get a well defined map there because it is not a direct sum. </p>
| Tsemo Aristide | 280,301 | <p>Since $P_1$ is a direct summand, there exists $P_2$ such that $P=P_1\oplus P_2$.
Consider $f:P\rightarrow P_2$ such that for every $x\in P$, write $x=x_1+x_2, x_1\in P_1,x_2\in P_2$, $f(x)=x_2$. Let $x\in P_2$, we can write $x=x_1'+n, x_1'\in P_1, n\in N$, we have $x=f(x)=f(x_1'+n)=f(n)$. This implies that the restriction $g$ of $f$ to $N$ is surjective, there exists $h:P\rightarrow N$ such that $f=g\circ h$; $P=P_1\oplus h(P_2)$.</p>
<p>Let $x\in P$, write $x=x_1+x_2, x_1\in P_1, x_2\in P_2$, $f(x-h(x_2))=f(x)-f(h(x_2))=x_2-x_2=0$ since $x_2=f(x_2)=f(h(x_2))$. This implies that $x-h(x_2)\in P_1$, we can write $x=x-h(x_2)+h(x_2)$ and we deduce that $P=P_1+h(P_2)$.</p>
<p>Let $x\in P_1\cap h(P_2)$, we can write $x=h(x_2), x_2\in P_2$, $f(x)=f(h(x_2))=f(x_2)=x_2=0$ because $x\in P_1$. This implies that $x_2=x=0$.</p>
|
2,284,451 | <blockquote>
<p><span class="math-container">$A$</span> and <span class="math-container">$B$</span> alternately throw a pair of coins. The player who first throws heads two times will win.</p>
<p>A has the first throw. Find the chance that <span class="math-container">$A$</span> wins.</p>
</blockquote>
<p>Attempt: Let <span class="math-container">$\displaystyle P(A) = \frac{1}{2}$</span> (the probability of a head occurring when <span class="math-container">$A$</span> throws the coin) and</p>
<p><span class="math-container">$\displaystyle P(B) = \frac{1}{2}$</span> (the probability of a head occurring when <span class="math-container">$B$</span> throws the coin).</p>
<p>So the chance that <span class="math-container">$A$</span> wins is <span class="math-container">$$\displaystyle P(A)+P(\bar{A})P(\bar{B})P(A)+P(\bar{A})P(\bar{B})P(\bar{A})P(\bar{B})P(A)+\cdots \cdots$$</span></p>
<p>Could someone help me with how to proceed from here? Thanks.</p>
| Asinomás | 33,907 | <p>Change the game a bit to make it clearer, consider a reasonable probability space on the set of pairs of infinite sequences of $0$ and $1$ $(a_n)$ and $(b_n)$.</p>
<p>Given such a sequence $(a_n)$ we can define the winning time $w(a_n)$ as the first $i$ such that $a_i=a_{i-1}=1$.</p>
<p>We want to find the probability $P(w(a_n)\leq w(b_n))$.</p>
<p>We can say it is $\frac{1}{2}+\frac{1}{2}P(w(a_n)=w(b_n))$: by symmetry $P(w(a_n) < w(b_n))=P(w(b_n) < w(a_n))$, and $A$, throwing first in each round, wins the ties.</p>
<p>This is equal to $\frac{1}{2}+\frac{1}{2}\sum\limits_{k=2}^\infty P(w(a_n)=w(b_n)=k)$.</p>
<p>By independence of the two sequences, this is equal to $\frac{1}{2}+\frac{1}{2}\sum\limits_{k=2}^\infty P(w(a_n)=k)^2$.</p>
<p>For $k\geq 2$ we have that $P(w(a_n)=k)$ is $\frac{F_{k-2}}{2^k}$, because the number of sequences of length $k$ of $1$ and $0$ that end in $11$ and don't have any other $11$ is $F_{k-2}$,</p>
<p>where $F_n$ is the Fibonacci sequence with the convention $F_0=1,F_1=1,F_2=2$ etc. (that is, $F_n$ here is the standard $F_{n+1}$).</p>
<p>Now just plug in Binet's formula, which in this convention reads $F_n=\frac{(\frac{1+\sqrt 5}{2})^{n+1}-(\frac{1-\sqrt 5}{2})^{n+1}}{\sqrt 5}$, to get:</p>
<p>$$\sum\limits_{k=2}^\infty \bigg(\frac{\frac{(\frac{1+\sqrt 5}{2})^{k-1}-(\frac{1-\sqrt 5}{2})^{k-1}}{\sqrt 5}}{2^k}\bigg)^2=\sum\limits_{k=2}^\infty \frac{((\frac{1+\sqrt 5}{2})^{k-1}-(\frac{1-\sqrt 5}{2})^{k-1})^2}{5\cdot2^{2k}}$$</p>
<p>I think from here it is clear that this splits into three geometric series; summing them gives $P(w(a_n)=w(b_n))=\frac{3}{25}$, so the answer is $\frac{1}{2}+\frac{1}{2}\cdot\frac{3}{25}=\frac{14}{25}$.</p>
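<p>The Fibonacci count and the resulting value can be verified numerically; the indexing is delicate, so the sketch below (Python, names mine) gets the counts by brute force for small $k$, extends them with the Fibonacci recurrence, and sums the squares. The tie probability comes out to $0.12$, giving $\frac12+\frac12\cdot 0.12 = 0.56$ for $A$, who wins ties by throwing first.</p>

```python
from itertools import product

def first_win_counts(kmax):
    # count binary strings of length k whose first "11" ends exactly at
    # position k, by brute force
    counts = {}
    for k in range(2, kmax + 1):
        c = 0
        for bits in product("01", repeat=k):
            s = "".join(bits)
            if s.endswith("11") and s.find("11") == k - 2:
                c += 1
        counts[k] = c
    return counts

counts = first_win_counts(16)
# the counts 1, 1, 2, 3, 5, 8, 13, ... are Fibonacci numbers
assert [counts[k] for k in range(2, 9)] == [1, 1, 2, 3, 5, 8, 13]

# extend by the Fibonacci recurrence and sum P(w = k)^2 for the tie probability
c = dict(counts)
for k in range(17, 400):
    c[k] = c[k - 1] + c[k - 2]
tie = sum((c[k] / 2 ** k) ** 2 for k in range(2, 400))
p_A = 0.5 + 0.5 * tie                   # A wins ties, throwing first
print(tie, p_A)
```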
|
2,284,451 | <blockquote>
<p><span class="math-container">$A$</span> and <span class="math-container">$B$</span> alternately throw a pair of coins. The player who first throws heads two times will win.</p>
<p>A has the first throw. Find the chance that <span class="math-container">$A$</span> wins.</p>
</blockquote>
<p>Attempt: Let <span class="math-container">$\displaystyle P(A) = \frac{1}{2}$</span> (the probability of a head occurring when <span class="math-container">$A$</span> throws the coin) and</p>
<p><span class="math-container">$\displaystyle P(B) = \frac{1}{2}$</span> (the probability of a head occurring when <span class="math-container">$B$</span> throws the coin).</p>
<p>So the chance that <span class="math-container">$A$</span> wins is <span class="math-container">$$\displaystyle P(A)+P(\bar{A})P(\bar{B})P(A)+P(\bar{A})P(\bar{B})P(\bar{A})P(\bar{B})P(A)+\cdots \cdots$$</span></p>
<p>Could someone help me with how to proceed from here? Thanks.</p>
| Em. | 290,196 | <p>Yes. Your pattern is correct. Notice that the problem says that $A$ and $B$ toss <em>two</em> coins. I assume the game ends when one player flips the two coins and lands two heads. Also, I assume the coins are fair and that throws are independent of each other. Let $A$ and $B$ be the events that the respective player ends the game.</p>
<p>Notice that $P(\bar B) = P(\bar A) = 1- P(A) = 1-\frac{1}{2}\cdot\frac{1}{2} = \frac{3}{4}.$</p>
<p>Generalizing the pattern that you found, we have
$$\sum_{k = 0}^\infty \left(\frac{3}{4}\right)^{k}\left(\frac{3}{4}\right)^{k} \frac{1}{2}\cdot \frac{1}{2}.$$</p>
<p>Alternatively, let $p_A$ be the probability that $A$ wins the game. Notice that $A$ can win on the first try with probability $1/4$. If $A$ doesn't win, then $B$ takes a turn. If $B$ fails, he fails with probability $3/4$. Notice that after $B$ fails this first time, it's like the game "starts over". Then we have
$$p_A = \frac{1}{4}+\frac{3}{4}\cdot\frac{3}{4}p_A.$$
Solving for $p_A$ gives $\frac{4}{7}$. </p>
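<p>Both routes can be checked numerically (a Python sketch):</p>

```python
# the geometric series and the recursion give the same value, 4/7
def p_term(k):
    # both players fail k rounds (prob (3/4)^k each), then A lands two heads
    return (9 / 16) ** k * (1 / 4)

series = sum(p_term(k) for k in range(200))   # tail beyond 200 is negligible

# recursion: p = 1/4 + (3/4)*(3/4)*p  =>  p = (1/4) / (1 - 9/16) = 4/7
p_rec = (1 / 4) / (1 - 9 / 16)

assert abs(series - 4 / 7) < 1e-12
assert abs(p_rec - 4 / 7) < 1e-12
```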
|
1,579,349 | <p>Need help setting this up. I don't really get how to find the derivative: is it $0$ if you just plug everything in, since there will then be no variable?</p>
| d.v | 296,478 | <p>$dy/dx=dy/du×du/dx$ </p>
<p>$dy/dx=3 u^2×2 x=3 (x^2-1)^2×2 x=6 x(x^2-1)^2$</p>
<p>$Then x=2$</p>
<p>$dy/dx_(x=2)=108$</p>
|
1,685,967 | <p>Let $\Omega\subset\mathbb{R}^n $ be bounded smooth domain.
Given a sequence $u_m$ in Sobolev space $H=\left \{v\in H^2(\Omega ):\frac{\partial v}{\partial n}=0 \text{ on } \partial \Omega \right \}$ such that $u_m$ is uniformly bounded i.e. $\|u_m\|_{H^2}\leq M$ and given the function $f(u)=u^3-u$.</p>
<p>If I know that $u_m\rightharpoonup u(u\in H)$ in $L^2$ sense i.e. $\int_{\Omega}u_m v\to \int_{\Omega}u v$ for every $v\in H$. Is it true that $f(u_m)\rightharpoonup f(u)$ (in $L^2$ sense)?</p>
| Kore-N | 59,827 | <p>I post this as answer,but I am studying myself Sobolev spaces right now, so I am not sure whether it is correct (in fact it may well be completely wrong). The argument will use embedding theorems for Sobolev spaces.</p>
<ol>
<li><p>First we have a sequence that converges weakly in $L^2,$ but is bounded in $H^2 = W^{2,2}.$ Since $W^{2,2}$ is reflexive, the unit ball is weakly compact. So you can extract converging subsequences. But every converging subsequence converges to $u,$ since we know that $v_m\rightharpoonup v$ in $W^{k,p}$ implies $v_m\rightharpoonup v$ in $L^p.$ A sequence in a compact (if you want also metric) space in which every converging subsequence converges to the same limit point, is itself convergent.</p></li>
<li><p>Now we know that $W^{2,2}$ embeds compactly in $L^{q},$ for $q < \frac{2n}{n - 4}$ when $n > 4$ (and for every finite $q$ when $n = 4$). For smaller $n$ we have compact embedding in $C^{k, \gamma},$ for $\gamma \in (0,1)$ and $k< 2 - n/2 - \gamma.$ Compact embeddings tell us that a weakly converging sequence in the departure space is strongly converging in the arrival space. So all in all, for $n <6,$ we know that the sequence is actually strongly converging in $L^6$.</p></li>
<li><p>Part 2 thus tells us that $u_n^3$ is in $L^2$ and this answers some of the comments. Furthermore it is strongly convergent in $L^2$. So all in all, for $n < 6$, we have that actually $f(u_n) \rightarrow f(u)$ strongly in $L^2$.</p></li>
<li><p>Last but not least we consider the border case $n = 6.$ Here we have just a continuous embedding of $W^{2,2}$ in $L^6$. So we know $u_n^3$ is in $L^2$, and we know that it has some weakly converging subsequence, since its norm is bounded. But is $u_n^3\rightharpoonup u^3$? I suppose it is, but I am actually having some trouble proving it.</p></li>
</ol>
|
1,685,967 | <p>Let $\Omega\subset\mathbb{R}^n $ be bounded smooth domain.
Given a sequence $u_m$ in Sobolev space $H=\left \{v\in H^2(\Omega ):\frac{\partial v}{\partial n}=0 \text{ on } \partial \Omega \right \}$ such that $u_m$ is uniformly bounded i.e. $\|u_m\|_{H^2}\leq M$ and given the function $f(u)=u^3-u$.</p>
<p>If I know that $u_m\rightharpoonup u(u\in H)$ in $L^2$ sense i.e. $\int_{\Omega}u_m v\to \int_{\Omega}u v$ for every $v\in H$. Is it true that $f(u_m)\rightharpoonup f(u)$ (in $L^2$ sense)?</p>
| Svetoslav | 254,733 | <p>For simplicity, I consider only the case <span class="math-container">$n=3$</span>.</p>
<p>You have <span class="math-container">$u_m\rightharpoonup u$</span> in <span class="math-container">$L^2$</span>, so it is enough to show that <span class="math-container">$u_m^3\rightharpoonup u^3$</span> in <span class="math-container">$L^2$</span>. To show this we will use Sobolev compact embedding theorems, the <a href="http://www.jstor.org/stable/2319009?seq=2#page_scan_tab_contents" rel="nofollow noreferrer">Bounded Convergence Theorem</a>, and the fact that if each subsequence of <span class="math-container">$y_m$</span> contains a strongly/weakly convergent subsequence to some <span class="math-container">$y$</span> then the whole sequence converges to <span class="math-container">$y$</span>.</p>
<p>So, you have that <span class="math-container">$u_m\rightharpoonup u$</span> in <span class="math-container">$L^2$</span> and <span class="math-container">$\|u_m\|_{H^2}\leq M$</span>. Take an arbitrary subsequence <span class="math-container">$u_{m_k}$</span>. Because <span class="math-container">$H^2\hookrightarrow L^\infty\Rightarrow \|u_{m_k}\|_{L^\infty}\leq cM$</span>, where <span class="math-container">$c>0$</span> is the embedding constant. Further, because <span class="math-container">$u_{m_k}$</span> is also bounded in <span class="math-container">$H^1$</span>, which is reflexive, it follows that there exists a further weakly convergent subsequence <span class="math-container">$u_{m_{k_l}}\rightharpoonup g\in H^1$</span> in <span class="math-container">$H^1\hookrightarrow L^2$</span> (compactly). Because the embedding is compact <span class="math-container">$u_{m_{k_l}}\to g$</span> in <span class="math-container">$L^2$</span>. Now, note that <span class="math-container">$g\equiv u$</span>, because weak limits are unique, and therefore you conclude from here that actually <span class="math-container">$u\in L^6$</span> since <span class="math-container">$H^1\hookrightarrow L^6$</span> (continuously). From the <span class="math-container">$L^2$</span> convergence you find a further pointwise a.e convergent subsequence, this time denote it <span class="math-container">$u_s(x)\to u(x)$</span> for which you still have the <span class="math-container">$L^\infty$</span> bound <span class="math-container">$\|u_s\|\leq cM$</span>. Finally, you note that <span class="math-container">$u_s^3(x)\to u^3(x)$</span> a.e and that <span class="math-container">$\|u_s^3\|_{L^\infty}\leq c^3M^3<\infty$</span>, and apply the Bounded Convergence Theorem to <span class="math-container">$\{u_s^3\}$</span> to conclude that <span class="math-container">$u_s^3\to u^3$</span> in <span class="math-container">$L^p,\,\forall 1\leq p<\infty$</span>. 
Obviously <span class="math-container">$u_s^3\rightharpoonup u^3$</span> in <span class="math-container">$L^2$</span> also.</p>
<p>As a conclusion, from the very strong assumption, that <span class="math-container">$\|u_m\|_{H^2}\leq M$</span> and that <span class="math-container">$u_m\rightharpoonup u$</span> in <span class="math-container">$L^2$</span> it follows that <span class="math-container">$u_m\to u$</span> in <span class="math-container">$L^p,\,\forall 1\leq p<\infty$</span> and also <span class="math-container">$u_m^3\to u^3$</span> in <span class="math-container">$L^p,\,\forall 1\leq p<\infty$</span>.</p>
<blockquote>
<p><strong>Bounded Convergence Theorem:</strong> Suppose <span class="math-container">$\Omega\subset \mathbb R^n$</span> is bounded, <span class="math-container">$f,f_1,f_2,...,f_n$</span> is a sequence of measurable functions over <span class="math-container">$\Omega\subset \mathbb R^n$</span> and <span class="math-container">$M>0$</span> is such that
<span class="math-container">$$\|f_n\|_{L^\infty(\Omega)}\leq M,\,\forall n\in\mathbb N$$</span>
and <span class="math-container">$\lim\limits_{n\to\infty}{f_n(x)}=f(x)$</span> a.e <span class="math-container">$x\in\Omega$</span>. Then
<span class="math-container">$$\lim\limits_{n\to\infty}{\|f_n-f\|_{L^p(\Omega)}}=0,\,\forall 1\leq p<\infty$$</span></p>
</blockquote>
|
15,669 | <p>Borrowing <code>triangularArrayLayout</code> from <a href="https://mathematica.stackexchange.com/questions/9959/visualize-pascals-triangle-and-other-triangle-shaped-lists">here</a>, I have:</p>
<pre><code>triangularArrayLayout[triArray_List, opts___] :=
Module[{n = Length[triArray]},
Graphics[MapIndexed[
Text[Style[#1,
Large], {Sqrt[3] (n - 1 + #2.{-1, 2}), 3 (n - First[#2] + 1)}/
2] &, triArray, {2}], opts]]
n = 6;
s = 500;
coeffs = triangularArrayLayout[Table[Row[{"C(", i, ",", j, ")"}], {i, 0, n}, {j, 0, i}],
ImageSize -> s];
tri = triangularArrayLayout[Table[Binomial[i, j], {i, 0, n}, {j, 0, i}],
ImageSize -> s];
layers = {Overlay[{coeffs, Show[tri, TextStyle -> GrayLevel[.8]]}, Alignment -> Top],
Overlay[{tri, Show[coeffs, TextStyle -> GrayLevel[.8]]}, Alignment -> Top]};
Manipulate[layers[[u]], {{u, 1, " "}, {1 -> "binomial coefficients",
2 -> "Pascal's triangle"}}, ControlType -> RadioButtonBar]
</code></pre>
<p>but the vertical alignment is off:</p>
<p><img src="https://i.stack.imgur.com/fZooA.png" alt="Mathematica graphics">
<img src="https://i.stack.imgur.com/L0LM5.png" alt="Mathematica graphics"></p>
<p>This is the main issue, but I am also curious how to:</p>
<ol>
<li>typeset the $C(n,r)$ as <code>TraditionalForm</code> (with the varying $n$ and $r$ values throughout)</li>
<li>typeset the $C(n,r)$ as $_{n}C_{r}$ (also with the varying $n$ and $r$ values).</li>
</ol>
| Mr.Wizard | 121 | <p>Version 7 does not have Overlay, but one can produce a similar effect from within <code>Graphics</code>. Using your code with this substitution:</p>
<pre><code>layers = {Graphics[{coeffs[[1]], Opacity[0.3], tri[[1]]}],
Graphics[{tri[[1]], Opacity[0.3], coeffs[[1]]}]};
</code></pre>
<p>yields:</p>
<p><img src="https://i.stack.imgur.com/cq5SD.png" alt="Mathematica graphics"></p>
|
1,462,908 | <p>Is it possible to have a set of infinite cardinality as a subset of a set of finite cardinality? It sounds counter-intuitive, but there are things in math that just are so. Can one definitively prove this using only basic axioms? <br />
The main reason I asked this question is that the book <em>Inverted World</em> says there are infinitely many planetary bodies in a finite universe, and I wondered if this could be done with sets.</p>
| NJastro | 273,594 | <p>The proof is very intuitive (as you probably are feeling). But it can be written elaborately as follows, if you wish.</p>
<p>Your claim: For any finite set F, there exists an infinite subset I.</p>
<p>Try to prove:
Let $F$ be a finite set defined as $F = \{f_1, f_2, \ldots , f_n\}$ for some fixed $n \in \{1, 2, \ldots\}$.</p>
<p>Let $I$ be an infinite set defined as $I = \{i_1, i_2, \ldots, i_n, \ldots\}$, with one element $i_n$ for every $n = 1, 2, \ldots$</p>
<p>If I is a subset of F, then every element of I is also an element of F. But F contains only finitely many elements, so only finitely many elements of I could belong to F.</p>
<p>However, I is infinite by definition, so not all of its elements can be contained in F.</p>
<p>Therefore, I is not a subset of F. This implies the claim is false. </p>
<p>Hence, for any finite set F, there does not exist an infinite subset I.</p>
<p>There is actually a proof you can probably find which does the same thing, just it takes a different angle: Prove that every subset of a finite set is finite. You can probably look this up somewhere!</p>
<p>I don't believe there are infinitely many planets in the universe. There is a large number, but it is not infinite. I don't believe anything in the universe is infinite, so there shouldn't be anything to reconcile here. <em>Inverted World</em> is sci-fi, so it's not even a theory; just a nice tale!</p>
|
348,614 | <p>Is the following claim true? Let <span class="math-container">$\zeta(s)$</span> be the Riemann zeta function. I observed that, for large <span class="math-container">$n$</span> and as <span class="math-container">$s$</span> increases, </p>
<p><span class="math-container">$$
\frac{1}{n}\sum_{k = 1}^n\sum_{i = 1}^{k} \bigg(\frac{\gcd(k,i)}{\text{lcm}(k,i)}\bigg)^s \approx \zeta(s+1)
$$</span></p>
<p>or equivalently</p>
<p><span class="math-container">$$
\frac{1}{n}\sum_{k = 1}^n\sum_{i = 1}^{k} \bigg(\frac{\gcd(k,i)^2}{ki}\bigg)^s \approx \zeta(s+1)
$$</span></p>
<p>A few values of <span class="math-container">$s$</span>, LHS and the RHS are given below</p>
<p><span class="math-container">$$(3,1.221,1.202)$$</span>
<span class="math-container">$$(4,1.084,1.0823)$$</span>
<span class="math-container">$$(5,1.0372,1.0369)$$</span>
<span class="math-container">$$(6,1.01737,1.01734)$$</span>
<span class="math-container">$$(7,1.00835,1.00834)$$</span>
<span class="math-container">$$(9,1.00494,1.00494)$$</span>
<span class="math-container">$$(19,1.0000009539,1.0000009539)$$</span></p>
<p><strong>Note</strong>: <a href="https://math.stackexchange.com/questions/3293112/relationship-between-gcd-lcm-and-the-riemann-zeta-function">This question was posted on MSE</a>, but it did not receive the right answer there.</p>
| Carlo Beenakker | 11,260 | <p>A variety of formulas of this type (in the sense of a relation between <span class="math-container">$\zeta(s)$</span> and a sum over gcd or lcm) has been derived by Titus Hilberdink and László Tóth in <A HREF="https://arxiv.org/abs/1604.04508" rel="noreferrer">On the average value of the least common multiple of k positive integers</A> (2016), see also <A HREF="https://projecteuclid.org/download/pdf_1/euclid.lnms/1196285379" rel="noreferrer">On the distribution of the greatest common divisor</A> by Diaconis and Erdȍs. I quote</p>
<p><span class="math-container">$$\sum_{i,k=1}^n \big(\text{lcm}(k,i)\big)^s=\frac{\zeta(s+2)}{\zeta(2)}\frac{n^{2s+2}}{(s+1)^2}+{\cal O}(n^{2s+1}\log n),$$</span>
<span class="math-container">$$\sum_{i,k=1}^n \big(\gcd(k,i)\big)^s=\left(\frac{2\zeta(s)}{\zeta(s+1)}-1\right)\frac{n^{s+1}}{s+1}+{\cal O}(n^{s}\log n),$$</span>
<span class="math-container">$$\sum_{i_1,i_2,\ldots i_s=1}^n \gcd(i_1,i_2,\ldots i_s)=\frac{\zeta(s-1)}{\zeta(s)}n^s+{\cal O}(n^{s-1}),\;\;s\geq 4.$$</span></p>
<p>The earliest reference for such series is Ernest Cesàro, <A HREF="https://link.springer.com/content/pdf/10.1007/BF02420800.pdf" rel="noreferrer">
Étude moyenne du plus grand commun diviseur de deux nombres</A> (1885).</p>
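<p>A quick numerical check of the relation claimed in the question (a sketch; the helper names are mine). The computed values land on <span class="math-container">$\zeta(s+1)$</span>, which supports the claim; the table in the question appears to list the same values against <span class="math-container">$s$</span> shifted by one.</p>

```python
from math import gcd

# (1/n) Σ_{k≤n} Σ_{i≤k} (gcd(k,i)²/(k·i))^s, compared with ζ(s+1).
def lhs(n, s):
    total = 0.0
    for k in range(1, n + 1):
        for i in range(1, k + 1):
            g = gcd(k, i)
            total += (g * g / (k * i)) ** s
    return total / n

def zeta(s, terms=100_000):
    # crude partial sum of ζ(s); plenty accurate for s >= 2
    return sum(1.0 / m ** s for m in range(1, terms + 1))

print(lhs(1000, 6), zeta(7))   # both ≈ 1.00835
```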
|
844,420 | <p>Given a list of N numbers, minimize the average of the numbers that remain after you take out one string of consecutive numbers from the list.
Here N <= 100000.</p>
<p>Ex. {5, 1, 7, 8, 2}</p>
<p>You can take out {1,7}, etc. but the way to minimize in this case is just to take out {7,8} which will give a minimum average of (5+2+1)/3=2.667.</p>
<p>NOTE:-You can't use the first or last one, so you can't take out {5} or {2}.
I want to know the general procedure to minimize this. I am looking for a linear solution.
thanks</p>
| emcor | 154,094 | <p>The average is minimized by taking only the smallest value in the set, i.e. take out the largest elements.</p>
<p>By the restriction on the first/last elements and the consecutive-string requirement, you would take out the inner consecutive string with the biggest weighted average (stringlength/n * stringaverage), provided that is bigger than the weighted average of the first and last elements (2/n * firstlastaverage).</p>
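<p>As a sanity check of this strategy, here is a brute-force search (an O(n²) sketch, not the linear-time algorithm the question asks for; the function name is mine) that tries every removable inner block:</p>

```python
def min_average_after_removal(a):
    # Try every inner consecutive block a[i..j] (the first and last
    # elements must stay); also allow removing nothing.
    n = len(a)
    total = sum(a)
    best = total / n
    for i in range(1, n - 1):
        removed = 0
        for j in range(i, n - 1):
            removed += a[j]
            best = min(best, (total - removed) / (n - (j - i + 1)))
    return best

print(min_average_after_removal([5, 1, 7, 8, 2]))  # 2.666...  (remove {7, 8})
```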
|
2,176,081 | <p>I am trying to compute </p>
<blockquote>
<p>$$ \int_0^\infty \frac{\ln x}{x^2 +4}\,dx,$$</p>
</blockquote>
<p>which I find <a href="https://math.stackexchange.com/questions/2173289/integrating-int-0-infty-frac-ln-xx24-dx-with-residue-theorem/2173342">here</a>, without complex analysis. I am consistently getting the wrong answer and am hoping someone can spot the error. </p>
<p>First, denote the integral by $I$, and take $x = \frac{1}{t}$. Hence, the integral becomes </p>
<p>$$I = \int_\infty^0 \frac{\ln(1/t)}{1/t^2 + 4} \left(-\frac{1}{t^2} dt \right) = \int_0^\infty \frac{\ln(1)}{1+4t^2} dt - \int_0^\infty \frac{\ln(t)}{1+4t^2} dt$$</p>
<p>Note that the leftmost integral on the right-hand side is zero. Now, letting $u = 2t$, we get </p>
<p>$$I = -\frac{1}{2} \int_0^\infty \frac{\ln(2u)}{1+u^2}\,du = - \frac{1}{2} \int_0^\infty \frac{\ln(2)}{1+u^2}\,du - \frac{1}{2} \int_0^\infty \frac{\ln(u)}{1+u^2}\,du = - \frac{1}{2} \int_0^\infty \frac{\ln(2)}{1+u^2} - \frac{I}{2}$$</p>
<p>and therefore $I = - \frac{\ln(2)}{3} \int_0^\infty \frac{1}{1+u^2}du = - \frac{\pi \ln(2)}{6}$. This, however, is not the right answer. So, where did I go wrong?</p>
| PM. | 416,252 | <p>One problem is, when you substitute $u=2t$ then $\log t = \log(u/2)$, not $\log(2u)$ as you have stated in your equation $$I = -\frac{1}{2} \int_0^\infty \frac{\ln(2u)}{1+u^2}\,du$$</p>
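<p>For reference, carrying the substitution through correctly gives $I=\frac{\pi\ln 2}{4}$. A quick numerical check (a sketch; the substitution $x=2\tan\theta$ maps the integral to $\frac12\int_0^{\pi/2}\ln(2\tan\theta)\,d\theta$, taming the infinite range):</p>

```python
import math

# With x = 2·tanθ:  ∫₀^∞ ln(x)/(x²+4) dx  =  (1/2)·∫₀^{π/2} ln(2·tanθ) dθ;
# the midpoint rule sidesteps the (integrable) endpoint singularities.
def integral_numeric(n=50_000):
    h = (math.pi / 2) / n
    return math.fsum(0.5 * math.log(2.0 * math.tan((k + 0.5) * h))
                     for k in range(n)) * h

print(integral_numeric())            # ≈ 0.544397
print(math.pi * math.log(2) / 4)     # ≈ 0.544397
```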
|
3,785,967 | <p>Let <span class="math-container">$E(R)_X$</span> denote the expected return of asset <span class="math-container">$X$</span>.</p>
<p>Given a market with only 3 assets; <span class="math-container">$A$</span>, <span class="math-container">$B$</span> and <span class="math-container">$C$</span>, the following three things can happen at the next timepoint:</p>
<p>With probability <span class="math-container">$ p = 0.5$</span>: <span class="math-container">$$E(R)_A = 0.4, E(R)_B = 0.02, E(R)_C = 0.3$$</span></p>
<p>With probability <span class="math-container">$p = 0.4$</span>: <span class="math-container">$$E(R)_A = 0.2, E(R)_B = 0.02, E(R)_C = 0.25$$</span></p>
<p>With probability <span class="math-container">$p = 0.1$</span>: <span class="math-container">$$E(R)_A = 0.1, E(R)_B = 0.02, E(R)_C = 0.15$$</span></p>
<p>Now, I'm trying to find the standard deviation for each asset.</p>
<p>Hence, I've found the expected return for each asset, via the following (for example, asset <span class="math-container">$A$</span>): <span class="math-container">$$E(R)_A = 0.5(0.4) + 0.4(0.2) + 0.1(0.1) = 0.29$$</span></p>
<p>Now, how do I find the standard deviation?</p>
<p>Do I simply do: <span class="math-container">$$ \sigma = \sqrt{\frac{(0.4-0.29)^2 + (0.2-0.29)^2 + (0.1-0.29)^2}{3}}$$</span></p>
<p>or do I need to somehow incorporate the probabilities into this formula?</p>
| Henry | 6,460 | <p>You need to incorporate the probabilities so</p>
<p><span class="math-container">$$\sigma = \sqrt{0.5(0.4-0.29)^2 + 0.4(0.2-0.29)^2 + 0.1(0.1-0.29)^2}$$</span></p>
<p>This is also</p>
<p><span class="math-container">$$\sigma = \sqrt{0.5(0.4)^2 + 0.4(0.2)^2 + 0.1(0.1)^2 -0.29^2}$$</span></p>
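<p>A small numeric check of asset $A$'s numbers, including the equivalent second form (a sketch; variable names are mine):</p>

```python
probs = [0.5, 0.4, 0.1]
returns_a = [0.4, 0.2, 0.1]   # asset A's possible returns

mean = sum(p * r for p, r in zip(probs, returns_a))                 # 0.29
var = sum(p * (r - mean) ** 2 for p, r in zip(probs, returns_a))    # 0.0129
alt = sum(p * r * r for p, r in zip(probs, returns_a)) - mean ** 2  # same value
sigma = var ** 0.5

print(mean, sigma)   # 0.29  0.1135...
```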
|
73,383 | <p>The problem is:
$$\displaystyle \lim_{(x,y,z) \rightarrow (0,0,0)} \frac{xy+2yz+3xz}{x^2+4y^2+9z^2}.$$</p>
<p>The tutor guessed it didn't exist, and he was correct. However, I'd like to understand why it doesn't exist.</p>
<p>I think I have to turn it into spherical coordinates and then see if the end result depends on an angle, like I've done for two variables with polar coordinates. I don't know how though.</p>
<p>I know $\rho = \sqrt{x^2+y^2+z^2}$ and $\theta = \arctan \left(\frac{y}{x} \right)$ and $\phi = \arccos \left( \frac{z}{\rho} \right)$, but how on earth do I break this thing up?</p>
| Did | 6,179 | <p>Assume $x=y=z=t$ and $t\to0$, then the ratio converges to $\frac37$.
Assume $x=y=t$, $z=-t$ and $t\to0$, then the ratio converges to $-\frac27$. In both cases, $(x,y,z)\to(0,0,0)$ when $t\to0$. The two limits are not equal hence the limit when $(x,y,z)\to(0,0,0)$ does not exist.</p>
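<p>The two paths are easy to check numerically (a quick sketch):</p>

```python
# The ratio along two different lines through the origin:
def f(x, y, z):
    return (x*y + 2*y*z + 3*x*z) / (x**2 + 4*y**2 + 9*z**2)

for t in (1e-3, 1e-6, 1e-9):
    print(f(t, t, t), f(t, t, -t))   # always 3/7 ≈ 0.4286 and -2/7 ≈ -0.2857
```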
|
2,314,327 | <p>I have a quick question here.</p>
<p>For an exercise, I was asked to factor:</p>
<p>$$11x^2 + 14x - 2685 = 0$$</p>
<p>How do I figure this out quickly without staring at it forever? Is there a quicker mathematical way than guessing number combinations, or do I have to guess until I find the right combination of numbers?</p>
<p>The answer is:</p>
<p>$$(11x + 179)(x - 15) = 0 $$</p>
| CopyPasteIt | 432,081 | <p>As mentioned in a comment, you can take all the fun out of this by just using the quadratic equation. But, OK, I'll bite!!!</p>
<p>So we assume that we don't know that formula and that the polynomial factors over the rational numbers (or alternatively, we can't miss a chance of employing twisted logic and factoring integers).</p>
<p>Recall the <a href="https://en.wikipedia.org/wiki/Rational_root_theorem" rel="nofollow noreferrer">rational root theorem</a>. Before tackling this also review the <a href="http://Divisibility%20rules" rel="nofollow noreferrer">divisibility rules</a> mentioned by Ross Millikan, and for good measure open up a list of <a href="https://en.wikipedia.org/wiki/List_of_prime_numbers#The_first_1000_prime_number" rel="nofollow noreferrer">the first 1000 prime numbers</a>.</p>
<p>When we look at <span class="math-container">$11x^2+14x−2685=0$</span> the fact that <span class="math-container">$2,685$</span> has at least three factors all distinct from <span class="math-container">$11$</span> puts us in a foul mood (it would be nice if <span class="math-container">$11$</span> was factor, since the rational root theorem would be easier to apply). So we come up with a 'lazy idea'.</p>
<p>We can substitute <span class="math-container">$x = z + 1$</span> into <span class="math-container">$11x^2+14x−2685=0$</span> and note after regrouping back into polynomial form, the constant term will decrease and the leading coefficient will remain the same. We can keep doing this and see if life gets any easier.</p>
<p>In general, substituting <span class="math-container">$x = z + 1$</span> into <span class="math-container">$ax^2+bx+c=0$</span> gives <span class="math-container">$az^2 +(2a+b)z + (a+b+c)$</span>.</p>
<p>For convenience (and a logical abuse), we will keep using the same variable <span class="math-container">$z$</span> as we substitute (it won't matter).</p>
<p>Substitution 1: The equation <span class="math-container">$11x^2+14x−2685=0$</span> becomes</p>
<p><span class="math-container">$\tag 1 11z^2 + 36z -2660$</span></p>
<p>We see <span class="math-container">$2$</span>, <span class="math-container">$5$</span>, and <span class="math-container">$7$</span> as factors - reject.</p>
<p>Substitution 2: The equation <span class="math-container">$11z^2 + 36z -2660$</span> becomes</p>
<p><span class="math-container">$\tag 2 11z^2 + 58z -2613$</span></p>
<p>We see <span class="math-container">$3$</span> as a divisor, and when we divide it out we get <span class="math-container">$871$</span>. The 'easy pickings' divisibility rules are no help, so we check the prime number listing. We see that <span class="math-container">$871$</span> is a composite that doesn't include <span class="math-container">$11$</span> as a factor - reject.</p>
<p>Substitution 3: The equation <span class="math-container">$11z^2 + 58z -2613$</span> becomes</p>
<p><span class="math-container">$\tag 3 11z^2 + 80z -2544$</span></p>
<p>Just too many factors - reject.</p>
<p>Substitution 4: The equation <span class="math-container">$11z^2 + 80z -2544$</span> becomes</p>
<p><span class="math-container">$\tag 4 11z^2 + 102z -2453$</span></p>
<p>The composite number <span class="math-container">$2,453$</span> (see the prime list) is not divisible by <span class="math-container">$2$</span>, <span class="math-container">$5$</span> or <span class="math-container">$3$</span>. With a little work you find that <span class="math-container">$2,453 = 11 \times 223$</span>. THIS IS IT!</p>
<p>Setting up for the rational roots, we are looking at</p>
<p><span class="math-container">$\quad \pm \frac {1,11,223,2453}{1,11}$</span></p>
<p>The number <span class="math-container">$1$</span> doesn't work, so we check the next easiest candidate <span class="math-container">$\pm 11$</span> and find that <span class="math-container">$11$</span> is a root of equation <span class="math-container">$\text{(4)}$</span>.</p>
<p>Now since each substitution was simply <span class="math-container">$x = z + 1$</span>, we can go back by adding <span class="math-container">$1$</span> four times: <span class="math-container">$11 + 4 = 15$</span>, so that <span class="math-container">$15$</span> is a root of the original equation. In a number of ways you can now get the final answer,</p>
<p><span class="math-container">$\quad (11x + 179)(x - 15) = 0$</span></p>
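<p>The whole hunt can also be automated. A brute-force rational-root check on the original equation (a sketch; the helper names are mine):</p>

```python
from fractions import Fraction

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def rational_roots(a, b, c):
    # Rational root theorem: candidates are ±p/q with p | |c| and q | |a|.
    roots = set()
    for p in divisors(abs(c)):
        for q in divisors(abs(a)):
            for cand in (Fraction(p, q), Fraction(-p, q)):
                if a * cand ** 2 + b * cand + c == 0:
                    roots.add(cand)
    return roots

print(rational_roots(11, 14, -2685))   # {Fraction(15, 1), Fraction(-179, 11)}
```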
|
1,305,257 | <p>I do not understand how to use the following information: If $f$ is entire, then </p>
<p>$$\lim _{|z| \rightarrow \infty} \frac{f(z)}{z^2}=2i.$$</p>
<p>So if $f$ is entire, it has a power series around $z_0=0$, so $f(z)=\Sigma_{n=0}^\infty a_nz^n$, and then we get </p>
<p>$$\lim _{|z| \rightarrow \infty} \frac{\Sigma_{n=0}^\infty a_nz^n}{z^2}=2i.$$ </p>
<p>How do I continue from here? </p>
<p>It is a part of a question. I just want to know how can I use this info. I don't know how I can manipulate summations, and since it's $|z| \rightarrow \infty$ and not $z \rightarrow \infty$ (which is meaningless), I don't really know what I can do here. </p>
<p>Maybe </p>
<p>$$\lim _{|z| \rightarrow \infty} \Sigma_{n=0}^\infty a_nz^{n-2}=2i,$$ but then what?</p>
<p>Thanks in advance for your assistance! </p>
| HallsofIvy | 244,198 | <p>One could also note that the derivative of an even degree polynomial is an odd degree polynomial. And an odd degree polynomial always has at least one zero.</p>
|
4,216,105 | <p>In the <a href="https://www.feynmanlectures.caltech.edu/I_22.html#Ch22-S5" rel="nofollow noreferrer">Algebra chapter</a> of the Feynman Lectures on Physics, Feynman introduces complex powers:</p>
<blockquote>
<p>Thus <span class="math-container">$$10^{(r+is)}=10^r10^{is}\tag{22.5}$$</span>
But <span class="math-container">$10^r$</span> we already know how to compute, and we can always multiply anything by anything else; therefore the problem is to compute only <span class="math-container">$10^{is}$</span>. Let us call it some complex number, <span class="math-container">$x+iy$</span>. Problem: given <span class="math-container">$s$</span>, find <span class="math-container">$x$</span>, find <span class="math-container">$y$</span>. Now if
<span class="math-container">$$10^{is}=x+iy$$</span> then the complex conjugate of this equation must also be true, so that
<span class="math-container">$$10^{−is}=x−iy$$</span></p>
</blockquote>
<p>I don't think it's all that easy to guess/infer intuitively, this fact (about complex conjugates). Especially when you are a beginner (that's who the author addresses this chapter to, building steadily from arithmetic through algebra, logarithms, etc... guided by the intellectual beacons of Abstraction and Generalisation), it isn't at all convincing why if <span class="math-container">$10^{is}$</span> equals some <span class="math-container">$x + iy$</span>, then <span class="math-container">$10^{-is}$</span> must be <span class="math-container">$x-iy$</span>.</p>
<p>The only definition of <span class="math-container">$i$</span> is that its square is <span class="math-container">$-1$</span> (Feynman's reason for this to be true). What am I missing here?</p>
| J.G. | 56,861 | <p>As you say, the <strong>only</strong> defining property of <span class="math-container">$i$</span> is <span class="math-container">$i^2=-1$</span>. If you replace <span class="math-container">$i$</span> with <span class="math-container">$-i$</span>, everything still works. Therefore, if <span class="math-container">$10^{is}=x+iy$</span> with <span class="math-container">$s,\,x,\,y\in\Bbb R$</span>, <span class="math-container">$10^{-is}$</span> <em>has</em> to be <span class="math-container">$x-iy$</span>, otherwise you could "tell apart" <span class="math-container">$i,\,-i$</span>.</p>
<p>One can construct functions <span class="math-container">$f$</span> that don't satisfy <span class="math-container">$f(z^\ast)=f(z)^\ast$</span> but - without going into analytic functions, Cauchy-Riemann equations etc. - their definition <em>must</em> contain <span class="math-container">$i$</span>, in order to break the symmetry. This is why, for example, that equation can be violated by <span class="math-container">$e^{iz}$</span> but not <span class="math-container">$e^z$</span>.</p>
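<p>This symmetry is easy to observe numerically with Python's built-in complex arithmetic (a check, not a proof):</p>

```python
# If 10^{is} = x + iy, then 10^{-is} should be x - iy.  Spot-check a few s.
for s in (0.3, 1.0, 2.5):
    z = 10 ** (1j * s)
    w = 10 ** (-1j * s)
    print(s, abs(w - z.conjugate()))   # last column ≈ 0 every time
```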
|
4,196,185 | <p>Let <span class="math-container">$A$</span> be a <span class="math-container">$n\times n$</span> matrix with minimal polynomial <span class="math-container">$m_A(t)=t^n$</span>, i.e. a matrix with <span class="math-container">$0$</span> in the main diagonal and <span class="math-container">$1$</span> in the diagonal above the main diagonal.</p>
<p>How can I show that the minimal polynomial of <span class="math-container">$e^A$</span> is <span class="math-container">$m_{e^A}(t)=(t-1)^n$</span>? I have already calculated that <span class="math-container">$e^A$</span> is a matrix with <span class="math-container">$1$</span> in the main diagonal, <span class="math-container">$1$</span> in the diagonal above the main diagonal, <span class="math-container">$\frac{1}{2}$</span> in 2nd diagonal above the main diagonal and so forth until <span class="math-container">$\frac{1}{(n-1)!}$</span> in the upper right entry. But there must be some easier way to justify <span class="math-container">$(t-1)^n$</span> as the minimal polynomial besides brute force calculation?</p>
| jjagmath | 571,433 | <p>For (4), assuming <span class="math-container">$c>0$</span>, we have the solution <span class="math-container">$f(x) = (c^{x/c}\,\Gamma(x/c))^d$</span>. Of course, (3) is the particular case of <span class="math-container">$d=1$</span>.</p>
|
3,287,424 | <p>I have a function <span class="math-container">$$f(z)=\begin{cases}
e^{-z^{-4}} & z\neq0 \\
0 & z=0
\end{cases}$$</span></p>
<p>I have to show that the Cauchy-Riemann equations are satisfied everywhere. I have shown that it isn't differentiable at <span class="math-container">$z=0$</span>. </p>
<p>Usually I would have to convert it into the form <span class="math-container">$$f(z)=u+iv,$$</span> which seems very tedious. Is there some way to do this while keeping it in <span class="math-container">$f(z)$</span> form?</p>
| Kavi Rama Murthy | 142,385 | <p>This is quite easy. For example, <span class="math-container">$\frac {f(h+i0)-f(0)} h=\frac {e^{-h^{-4}}} h$</span> and the limit as <span class="math-container">$h \to 0$</span> through real values is <span class="math-container">$0$</span>. [ <span class="math-container">$e^{x^{4}} \to \infty$</span> faster than any power of <span class="math-container">$x$</span> as <span class="math-container">$x \to \infty$</span>. Put <span class="math-container">$x=\frac 1 h$</span>]. For partial derivatives w.r.t. <span class="math-container">$y$</span> we get the same limit since <span class="math-container">$i^{4}=1$</span>. </p>
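<p>A few real values of <span class="math-container">$h$</span> illustrate how fast the difference quotient dies off (a sketch; the helper name is mine):</p>

```python
import math

# (f(h) - f(0)) / h along the real axis: e^{-1/h^4} crushes the 1/h factor.
def quotient(h):
    return math.exp(-h ** -4) / h

for h in (0.5, 0.3, 0.15):
    print(h, quotient(h))   # ≈ 2.3e-07, then ~1e-53, then 0.0 (underflow)
```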
|
345,888 | <p>$S$ is vector subspace of $S$ if $S$ is vector space, by hypothesis $S$ is vector space then $S$ is vector subspace of $S$.</p>
<p>But I prove it by contradiction, then $S$ is not vector subspace of $S$, but if $S$ is not vector subspace of $S$ then $S$ is not vector space but I have contraddiction, in fact by hypothesis $S$ is vector space, so $S$ is vector subspace of $S$.
Is it correct?
Thank you in advance!!</p>
| DonAntonio | 31,254 | <p>$$f'_r=\sum_{-\infty}^\infty |n|c_nr^{|n|-1}e^{in\theta}\;,\;\;f''_{rr}=\sum_{-\infty}^\infty |n|\,(|n|-1)c_n\,r^{|n|-2}e^{in\theta}$$</p>
<p>$$f'_\theta=i\sum_{-\infty}^\infty n\,c_nr^{|n|}e^{in\theta}\;,\;\;\;f''_{\theta\theta}=-\sum_{-\infty}^\infty n^2\,c_nr^{|n|}e^{in\theta}$$</p>
<p>So Laplace's equation in polar coordinates:</p>
<p>$$r^2f''_{rr}+rf'_r+f''_{\theta\theta}=\ldots$$</p>
<p>And now you have to prove the above is zero...</p>
<p><strong>Spoiler!:</strong></p>
<blockquote class="spoiler">
<p>$$\sum_{-\infty}^\infty \;c_n\left(|n|\,(|n|-1)+|n|-n^2\right)r^{|n|}e^{in\theta}=0$$</p>
</blockquote>
|
3,317,728 | <p>Suppose that the moment generating function <span class="math-container">$M_X(t)$</span> of a random variable <span class="math-container">$X$</span> is given by </p>
<p><span class="math-container">$$ M_X(t)=\frac{e^t+e^{-t}}{6} + \frac 23 $$</span></p>
<p>I need to find the distribution function <span class="math-container">$F_X(x)$</span>.</p>
<p>Until now, I have been given (in my lecture notes) that I can express <span class="math-container">$E(X) = M_X^{(1)}(0)$</span>. But I can't use this here for finding the distribution function <span class="math-container">$F_X(x)$</span> (or at least I have no idea how to do it). Could you please tell me how to proceed?</p>
| Honza | 678,826 | <p>A moment generating function that, like this one, is a finite sum of terms <span class="math-container">$c_k e^{kt}$</span> can easily be converted into the probability generating function of the underlying integer-valued random variable, just by replacing <span class="math-container">$e^t$</span> by <span class="math-container">$z$</span>. This yields <span class="math-container">$P(z)=\frac{z}6+\frac{z^{-1}}6+\frac{2}3z^0$</span>, from which one can just read off the probabilities of <span class="math-container">$X=1$</span>, <span class="math-container">$X=-1$</span> and <span class="math-container">$X=0$</span>; the coefficient of <span class="math-container">$z^k$</span> is the probability of <span class="math-container">$X=k$</span>. This of course agrees with the previous answer.
In general, when <span class="math-container">$X$</span> is a continuous-type random variable, recovering its distribution from <span class="math-container">$M(t)$</span> requires some knowledge of complex calculus and the Fourier transform. </p>
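<p>Concretely, reading the point masses off <span class="math-container">$M_X(t)$</span> (the coefficient of <span class="math-container">$e^{kt}$</span> is <span class="math-container">$P(X=k)$</span>) gives a step-function CDF; a tiny sketch (names are mine):</p>

```python
# P(X = -1) = 1/6, P(X = 0) = 2/3, P(X = 1) = 1/6
pmf = {-1: 1/6, 0: 2/3, 1: 1/6}

def F(x):
    # CDF: F_X(x) = P(X <= x)
    return sum(p for k, p in pmf.items() if k <= x)

for x in (-2, -1, -0.5, 0, 0.5, 1, 2):
    print(x, F(x))   # 0, 1/6, 1/6, 5/6, 5/6, 1, 1
```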
|
3,354,990 | <p>I have the points and limits of a function, and even the shape of the function, and I'm looking for the function itself. Something very interesting to me: how could I control the curve of the function?</p>
<p>(1) <span class="math-container">$\lim\limits_{x \to \infty} f(x) = 1 $</span></p>
<p>(2) <span class="math-container">$f(\frac{1}{c}) = 1 $</span></p>
<p>(3) <span class="math-container">$0\lt x$</span></p>
<p>(4) <span class="math-container">$0\lt c \leq 1$</span></p>
<p><a href="https://i.stack.imgur.com/DqYRT.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DqYRT.jpg" alt="Shape of function"></a></p>
| Fareed Abi Farraj | 584,389 | <p>This function satisfies all the conditions except one of them, but maybe it'll help you. Try <span class="math-container">$f(x)= -\frac{1}{x+1} +1$</span>.</p>
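<p>A quick numerical look at the suggested function (a sketch; the failed condition appears to be (2), since <span class="math-container">$f(1/c)=\frac{1}{1+c}\neq 1$</span> for <span class="math-container">$c>0$</span>):</p>

```python
def f(x):
    return -1 / (x + 1) + 1

print(f(1e9))        # ≈ 1, so the limit condition (1) holds
print(f(1 / 0.5))    # c = 0.5: f(2) = 2/3, not 1, so condition (2) fails
```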
|
2,930,413 | <p>The problem is as shown. I tried using gradient and Hessian but can not make any conclusions from them. Any ideas?</p>
<p><span class="math-container">$$\max x_1^{a_1}x_2^{a_2}\cdots x_n^{a_n}$$</span></p>
<p>subject to</p>
<p><span class="math-container">$$\sum_{i=1}^nx_i=1,\quad x_i\geq 0,\quad i=1,2,\ldots,n,$$</span></p>
<p>where <span class="math-container">$a_i$</span> are given positive scalars. Find a global maximum and show that it is unique.</p>
| Doug M | 317,162 | <p>you could say</p>
<p><span class="math-container">$x'' = kx\\
x = C_1 e^{(\sqrt k)t} + C_2 e^{-(\sqrt k)t}$</span></p>
<p><span class="math-container">$k = -1\\
\sqrt k = i\\
x = C_1 e^{it} + C_2 e^{-it}\\
e^{it} = \cos t+ i\sin t\\
x = A\cos t + B\sin t$</span></p>
<p>Or you could say </p>
<p>let <span class="math-container">$v = x', v' =x'' = -x$</span></p>
<p>Giving us the system</p>
<p><span class="math-container">$v' = -x\\
x' = v$</span></p>
<p>or</p>
<p><span class="math-container">$\begin{bmatrix} x\\v \end{bmatrix}' = \begin{bmatrix} 0&1\\-1&0\end{bmatrix}\begin {bmatrix} x\\v \end{bmatrix}$</span></p>
<p><span class="math-container">$\mathbf x' = A\mathbf x\\
\mathbf x = e^{At}\mathbf x_0$</span></p>
<p>and <span class="math-container">$e^{At} = \sum_{n=0}^{\infty} \frac {(At)^n}{n!}$</span></p>
<p><span class="math-container">$A^2 = -I\\
A^3 = -A\\
A^4 = I$</span></p>
<p><span class="math-container">$e^{At}=\sum_{n=0}^{\infty} \frac {(-1)^nt^{2n}}{(2n)!}I + \sum_{n=0}^{\infty} \frac {(-1)^nt^{2n+1}}{(2n+1)!}A = \begin{bmatrix} \cos t&\sin t\\-\sin t&\cos t\end{bmatrix}$</span></p>
<p><span class="math-container">$\begin{bmatrix} x\\v \end{bmatrix} = \begin{bmatrix} x_0\cos t + v_0\sin t\\-x_0 \sin t + v_0\cos t \end{bmatrix}$</span></p>
<p>finally</p>
<p><span class="math-container">$v = \frac{dx}{dt}\\
x'' = \frac {dv}{dt}$</span></p>
<p>By the chain rule:</p>
<p><span class="math-container">$\frac{dv}{dt} = \frac{dv}{dx}\frac{dx}{dt} = \frac{dv}{dx} v$</span></p>
<p><span class="math-container">$\frac{dv}{dx} v = -x$</span></p>
<p>This is a seprerable diff eq</p>
<p><span class="math-container">$\frac 12 v^2 = -\frac12 x^2 + C\\
v = \sqrt {C-x^2}\\
x' =\sqrt {C-x^2}$</span></p>
<p>Which is also a separable diff eq</p>
<p><span class="math-container">$\int \frac {1}{\sqrt{C-x^2}} \ dx = \int dt$</span></p>
<p><span class="math-container">$\arcsin{\frac{x}{\sqrt C}} = t + \phi\\
x = \sqrt C \sin (t+\phi)$</span> </p>
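<p>The closed form from the matrix-exponential route is easy to spot-check numerically (a sketch; the helper name is mine):</p>

```python
import math

def propagate(x0, v0, t):
    # e^{At} (x0, v0) with A = [[0, 1], [-1, 0]]
    c, s = math.cos(t), math.sin(t)
    return (x0 * c + v0 * s, -x0 * s + v0 * c)

x, v = propagate(1.0, 0.0, 0.7)
print(x, v)                        # cos(0.7), -sin(0.7)

# The pair also satisfies the first integral ½v² + ½x² = const:
print(0.5 * v * v + 0.5 * x * x)   # 0.5
```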
|
743,473 | <p>A long Weierstrass equation is an equation of the form
$$y^2+a_1xy+a_3y=x^3+a_2x^2+a_4x+a_6$$
Why are the coefficients named $a_1, a_2, a_3, a_4$ and $a_6$ in this manner, corresponding to $xy, x^2, y, x$ and $1$ respectively? Why is $a_5$ absent?</p>
| Henry | 6,460 | <p>If you find the formulae too mind-bending, suppose you have $1000$ people with these probabilities representing proportions.</p>
<p>Then you would have $100$ left-handed people, of which $45$ would be left-handed males and $55$ left-handed females.</p>
<p>You would also have $900$ right-handed people, of which $459$ would be right-handed males and $441$ right-handed females.</p>
<p>So there are $45+459=504$ males, representing a proportion $\frac{504}{1000}=0.504$ of the total population.</p>
<p>There are $55+441=496$ females, so the proportion of females who are left-handed is $\frac{55}{496}\approx 0.1109$. </p>
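<p>The same head-count can be scripted directly; a minimal sketch (Python assumed), with the proportions hard-coded from the numbers above:</p>

```python
# proportions scaled to a population of 1000, as in the text
total = 1000
lh_male, lh_female = 45, 55        # the 100 left-handed people
rh_male, rh_female = 459, 441      # the 900 right-handed people

males = lh_male + rh_male          # 504
females = lh_female + rh_female    # 496
p_male = males / total
p_left_given_female = lh_female / females

assert males == 504 and females == 496
print(p_male)                          # 0.504
print(round(p_left_given_female, 4))   # 0.1109
```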
|
288,051 | <p>In enumerative combinatorics, a <i>bijective proof</i> that $|A_n| = |B_n|$ (where $A_n$ and $B_n$ are finite sets of combinatorial objects of size $n$) is a proof that constructs an explicit bijection between $A_n$ and $B_n$.
Bijective proofs are often prized because of their beauty and because of the insight that they often provide. Even if a combinatorial identity has already been proved (e.g., using generating functions), there is often interest in finding a bijective proof.</p>
<p>In spite of the importance of bijective proofs, the process of discovering or constructing a bijective proof seems to be an area that has been relatively untouched by computers. Of course, computers are often enlisted to generate all small examples of $A_n$ and $B_n$, but then the process of searching for a bijection between $A_n$ and $B_n$ is usually done the "old-fashioned" way, by playing around with pencil and paper and using human insight.</p>
<p>It seems to me that the time may be ripe for computers to search directly for bijections. To clarify, I do not (yet) envisage computers autonomously producing full-fledged bijective proofs. What I want computers to do is to search empirically for a <i>combinatorial rule</i>—that says something like, take an element of $A_n$ and do $X$, $Y$, and $Z$ to produce an element of $B_n$—that appears to yield a bijection for small values of $n$.</p>
<p>One reason that such a project has not already been carried out may be that the sheer diversity of combinatorial objects and combinatorial rules may seem daunting. How do we even describe the search space to the computer?</p>
<p>It occurs to me that, now that proof assistants have "come of age," people may have already had to face, and solve (at least partially), the problem of systematically encoding combinatorial objects and rules. This brings me to my question:</p>
<blockquote>
<p>Does there exist a robust framework for encoding combinatorial objects and combinatorial rules in a way that would allow a computer to empirically search for bijections? If not, is there something at least close, that could be adapted to this end with a modest amount of effort?</p>
</blockquote>
<p>In my opinion, Catalan numbers furnish a good test case. There are many different types of combinatorial objects that are enumerated by the Catalan numbers. As a first "challenge problem," a computer program should be able to discover bijections between different kinds of "Catalan objects" on its own. If this can be done, then there is no shortage of more difficult problems to sink one's teeth into.</p>
| FindStat | 113,201 | <p>As mentioned in the comments, the <a href="http://findstat.org" rel="noreferrer">FindStat</a> project is aiming at what you want. Concerning the size: it currently contains about 1000 'combinatorial statistics', that is, maps $s:\mathcal C_n\to \mathbb Z$ on some (graded) set of 'combinatorial' objects $\mathcal C_n$, and about 150 'combinatorial maps' between collections. What makes FindStat powerful is the (trivial) ability to compose maps. For example, for Catalan objects we obtain about 1,500,000 a priori different statistics.</p>
<p>Let me point out some possible ways of using it, in the spirit of the question.</p>
<ol>
<li><p>'automatically producing a bijection mapping one statistic to another' is demonstrated in <a href="https://mathoverflow.net/questions/255252/two-statistics-on-the-permutation-group/255260#255260">Two statistics on the permutation group</a> and <a href="https://mathoverflow.net/questions/278284/combinatorics-problem-related-to-motzkin-numbers-with-prize-money-i/278376#278376">Combinatorics problem related to Motzkin numbers with prize money I</a>.</p></li>
<li><p>'automatically producing conjectures' is achieved by clicking on 'search for distribution' on any of the statistics in the statistics database. The result is a list of statistics that are conjecturally equidistributed with the given statistic, but where a map transforming the first into the second might not be known. A classic is <a href="http://findstat.org/St000012" rel="noreferrer">http://findstat.org/St000012</a>.</p></li>
<li><p>it is easy to write a script that iterates 2. to find a 'partner' for a given pair of equidistributed statistics. An example is given in the comments to <a href="https://math.stackexchange.com/questions/2511943/leaf-labelled-unordered-rooted-binary-trees-and-perfect-matchings">https://math.stackexchange.com/questions/2511943/leaf-labelled-unordered-rooted-binary-trees-and-perfect-matchings</a>. Note that this meanwhile has a proof, also (essentially) discovered by FindStat.</p></li>
<li><p>a different kind of conjectures is provided by the list of 'experimental identities' found when selecting any of the maps at <a href="http://findstat.org/MapsDatabase" rel="noreferrer">http://findstat.org/MapsDatabase</a>. I am guessing that not all identities at <a href="http://findstat.org/Mp00101" rel="noreferrer">http://findstat.org/Mp00101</a> are immediately obvious.</p></li>
<li><p>I am also working on a new package that checks whether a statistic satisfying given constraints can possibly exist. But that's for later...</p></li>
</ol>
|
1,658,284 | <p>So a friend shows me this:</p>
<p>$x^4= x^2+x^2+ \cdots +x^2 $ (i.e. $x^2$ added $x^2$ times)</p>
<p>Now take the derivative of both sides:</p>
<p>$4x^3 = 2x + 2x + \cdots + 2x $;</p>
<p>So $4x^3 = 2x^3 \cdots $(1)</p>
<p>And so dividing by $x^3$ gives $2=1 \cdots $(2).</p>
<p>I know we can't divide by $0$, so that makes (2) false, but how do I show that (1) is false too?</p>
| Gregory Grant | 217,398 | <p>You can only do the first step if $x^2$ is a positive integer. So since it doesn't hold for all $x$, you can't take the derivative of both sides like that. It doesn't even make sense to talk about "$x^2$ times" if $x^2$ is not a natural number. To take the derivative of both sides you'd need equality on an entire interval.</p>
|
1,658,284 | <p>So a friend shows me this:</p>
<p>$x^4= x^2+x^2+ \cdots +x^2 $ (i.e. $x^2$ added $x^2$ times)</p>
<p>Now take the derivative of both sides:</p>
<p>$4x^3 = 2x + 2x + \cdots + 2x $;</p>
<p>So $4x^3 = 2x^3 \cdots $(1)</p>
<p>And so dividing by $x^3$ gives $2=1 \cdots $(2).</p>
<p>I know we can't divide by $0$, so that makes (2) false, but how do I show that (1) is false too?</p>
| SchrodingersCat | 278,967 | <ol>
<li>First of all, the statement </li>
</ol>
<blockquote>
<p>$x^2$ added $x^2$ times</p>
</blockquote>
<p>makes sense only if $x^2$ is a positive integer. Else if $x^2$ is not a positive integer, then the statement is meaningless.</p>
<ol start="2">
<li>Moreover, from $(1)$, we have that $4x^3=2x^3 \Rightarrow 4x^3-2x^3=0 \Rightarrow 2x^3=0 \Rightarrow x^3=0$</li>
</ol>
<p>And hence division by $x^3$ in the next step is meaningless.</p>
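<p>One can also see the breakdown numerically; a minimal sketch (Python assumed): at integer points the sum identity does hold, but term-by-term differentiation ignores that the <em>number</em> of terms also depends on $x$, which is why $2x^3$ disagrees with the true derivative $4x^3$.</p>

```python
def S(x):
    # "x^2 added x^2 times" -- only meaningful when x^2 is a natural number
    return sum(x * x for _ in range(x * x))

x = 3
assert S(x) == x**4          # the identity does hold at integer points

term_by_term = 2 * x**3      # what the fallacious argument produces
h = 1e-6                     # central difference of x^4 gives the true derivative
numeric = ((x + h)**4 - (x - h)**4) / (2 * h)
assert abs(numeric - 4 * x**3) < 1e-3
assert term_by_term != 4 * x**3   # the fallacy is off by a factor of 2
```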
|
256,612 | <p>I've found assertions that recognising the unknot is in NP (but not explicitly NP-hard or NP-complete). I've found hints that people are looking for untangling algorithms that run in polynomial time (which implies they may exist). I've found suggestions that recognition and untangling require exponential time. (Untangling is a form of recognising.) </p>
<p>I suppose I'm asking whether there exist:</p>
<ol>
<li>a "diagram" of a knot,</li>
<li>a "cost" measure of the diagram,</li>
<li>a "move" which can be applied to the diagram,</li>
<li>the "move" always reduces the "cost",</li>
<li>the "move" can be selected and applied in polynomial time,</li>
<li>the "cost" can be calculated in polynomial time.</li>
</ol>
<p>For instance, Reidemeister moves fail on number (4) if the "cost" is the number of crossings.</p>
<p>So what is the current status of the problem?</p>
<p>Thanks</p>
<p>Peter</p>
| Carlo Beenakker | 11,260 | <p>This <a href="https://arxiv.org/abs/1211.1079" rel="nofollow noreferrer">2012 report</a> of <em>"A fast branching algorithm for unknot recognition with experimental polynomial time behaviour"</em> by B. Burton and M. Ozlen may well represent the current status of the problem:</p>
<blockquote>
<p>It is a major unsolved problem as to whether unknot recognition - that is, testing whether a given closed loop in $R^3$ can be untangled to form a plain circle - has a polynomial time algorithm.
Here we present the first algorithm for unknot recognition that
guarantees a conclusive result and, though still worst-case
exponential in theory, behaves in practice like a polynomial-time
algorithm under systematic, exhaustive experimentation.</p>
</blockquote>
<p>(see also the discussion in this <a href="https://mathoverflow.net/questions/144158/what-is-the-state-of-the-art-for-algorithmic-knot-simplification">MO posting</a> from 2013)</p>
|
95,314 | <p>How can I evaluate limits of the following type, given that $f$ is differentiable and $f(x_0)>0$?</p>
<p>$$\lim_{x\to x_0} \biggl(\frac{f(x)}{f(x_0)}\biggr)^{\frac{1}{\ln x -\ln x_0 }},\quad\quad x_0>0,$$</p>
<p>$$\lim_{x\to x_0} \frac{x_0^n f(x)-x^n f(x_0)}{x-x_0},\quad\quad n\in\mathbb{N}.$$</p>
| David Mitra | 18,986 | <p>For the first problem, one may be tempted to use L'Hopital's Rule:</p>
<p>Noting that $f(x)>0$ for $x$ sufficiently close to $x_0$:</p>
<p>$$\ln \biggl[\,\Bigl ({f(x)\over f(x_0)}\Bigr)^{1\over \ln x-\ln x_0}\,\biggr] = {\ln f(x)-\ln f(x_0)\over \ln x-\ln x_0}.$$</p>
<p>We have:
$$\tag{1}\lim_{x\rightarrow x_0} {\ln f(x)-\ln f(x_0)\over \ln x-\ln x_0}
=
\lim_{x\rightarrow x_0} {{f'(x)/ f(x)} \over 1/x}
=\lim_{x\rightarrow x_0} { {xf'(x)\over f(x)} }={ {x_0f'(x_0)\over f(x_0)} }.
$$</p>
<p>So $$\lim_{x\rightarrow x_0}\Bigl ({f(x)\over f(x_0)}\Bigr)^{1\over \ln x-\ln x_0}=
\exp\Bigl({ {x_0 f'(x_0)\over f(x_0)} }\Bigr). $$</p>
<p>But, this line of reasoning is incorrect. We do not know that the last limit appearing in (1) exists.</p>
<p>See the other answers for correct arguments.</p>
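<p>Even though the L'Hopital step is not justified in general, the value $\exp(x_0 f'(x_0)/f(x_0))$ can be spot-checked numerically for a concrete $f$; a minimal sketch (Python assumed; $f(x)=x^2+1$ and $x_0=1$ are arbitrary choices of mine, not from the question), where the predicted limit is $e$:</p>

```python
import math

f = lambda u: u * u + 1          # a sample differentiable f with f(x0) > 0
x0 = 1.0
predicted = math.exp(x0 * (2 * x0) / f(x0))   # exp(x0 f'(x0)/f(x0)) = e here

u = x0 + 1e-6                    # approach x0 from the right
value = (f(u) / f(x0)) ** (1.0 / (math.log(u) - math.log(x0)))
assert abs(value - predicted) < 1e-4
print(value)   # close to e
```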
|
4,236,148 | <p><span class="math-container">$R=\{(x,y):x^2=y^2\}$</span> and I have to determine whether it's an equivalence relation.</p>
<p>I found that it's reflexive, but for the symmetry part I got confused, as <span class="math-container">$x=y$</span> is sometimes said to be symmetric and sometimes not, so I don't know what to make of it.</p>
| spinosarus123 | 958,184 | <p>Certainly if <span class="math-container">$x^2=y^2$</span> then <span class="math-container">$y^2=x^2$</span>, so it is symmetric. It is also transitive because if <span class="math-container">$x^2=y^2$</span> and <span class="math-container">$y^2=z^2$</span>, then <span class="math-container">$x^2=z^2$</span>.</p>
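<p>The three properties can also be checked exhaustively over a small range; a minimal sketch (Python assumed):</p>

```python
# R = {(x, y) : x^2 = y^2}, checked exhaustively on a small integer range
vals = range(-5, 6)
rel = lambda x, y: x * x == y * y

assert all(rel(x, x) for x in vals)                                  # reflexive
assert all(rel(y, x) for x in vals for y in vals if rel(x, y))       # symmetric
assert all(rel(x, z) for x in vals for y in vals for z in vals
           if rel(x, y) and rel(y, z))                               # transitive
print("R is an equivalence relation on", list(vals))
```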
|
688,742 | <p>Given $P\colon\mathbb{R} \to \mathbb{R}$, where $P$ is an injective (one-to-one) polynomial function, I need to formally prove that $P$ is onto $\mathbb{R}$.</p>
<p>My strategy so far:
a polynomial function is continuous, and since it is one-to-one it must be strictly monotonic, but now I have no idea what to do.</p>
<p>There is a theorem saying all continuous, strictly monotonic functions have an inverse function, and another theorem saying a function has an inverse if and only if it is one-to-one and onto.</p>
<p>But for a formal proof of the onto property, I think I need to show that every element of the codomain/target set has a "source" in the domain.</p>
<p>I don't think it's useful for this proof, but only polynomials of odd degree can have the one-to-one property.</p>
| Clive Newstead | 19,542 | <p>The limit of a constant is just the value of the constant, and when $\lim f(x)$ and $\lim g(x)$ both exist they satisfy
$$\lim(f(x)+g(x)) = \lim f(x) + \lim g(x)$$
$$\lim(f(x)g(x)) = \lim f(x) \cdot \lim g(x)$$
In other words, here, you have
$$\lim \frac{11-e^{-x}}{7} = \lim \frac{1}{7} \cdot \lim(11-e^{-x}) = \frac{1}{7}(\lim 11 - \lim e^{-x})$$
Since $\frac{1}{7}$ and $11$ are constant, $\lim e^{-x} = 0$, you get
$$\lim \frac{11-e^{-x}}{7} = \frac{1}{7} \cdot (11 - 0) = \frac{11}{7}$$</p>
<p>(I've left the subscript $x \to \infty$ off the limit signs.)</p>
|
688,742 | <p>Given $P\colon\mathbb{R} \to \mathbb{R}$, where $P$ is an injective (one-to-one) polynomial function, I need to formally prove that $P$ is onto $\mathbb{R}$.</p>
<p>My strategy so far:
a polynomial function is continuous, and since it is one-to-one it must be strictly monotonic, but now I have no idea what to do.</p>
<p>There is a theorem saying all continuous, strictly monotonic functions have an inverse function, and another theorem saying a function has an inverse if and only if it is one-to-one and onto.</p>
<p>But for a formal proof of the onto property, I think I need to show that every element of the codomain/target set has a "source" in the domain.</p>
<p>I don't think it's useful for this proof, but only polynomials of odd degree can have the one-to-one property.</p>
| k170 | 161,538 | <p>First note that limits distribute over sums and constant factors (when each limit exists), and
$$ \lim\limits_{x\to\infty} e^{-x}= \lim\limits_{x\to\infty} \frac{1}{e^{x}} =0$$
Therefore
$$\lim\limits_{x\to\infty} \frac{11 - e^{-x}}{7} = \frac{11 - \lim\limits_{x\to\infty} e^{-x}}{7} =\frac{11}{7}$$</p>
|
69,590 | <p>Consider the following code.</p>
<pre><code>f[a_,b_]:=x
x=a+b;
f[1,2]
(* a + b *)
</code></pre>
<p>From a certain viewpoint, one might expect it to return <code>3</code> instead of <code>a + b</code>: the symbols <code>a</code> and <code>b</code> are defined during the evaluation of <code>f</code> and <code>a+b</code> should evaluate to their sum.</p>
<p>Why is this viewpoint wrong? What's the right way to make it behave the way I want it to? (Something more clever than <code>f[p_,q_]:=x/.{a->p,b->q};</code>.) </p>
| Basheer Algohi | 13,548 | <p>Check:</p>
<pre><code>f[1,2]//Trace
</code></pre>
<p>You will see that <code>1</code> and <code>2</code> are substituted before <code>x</code> is replaced with its value.</p>
<p>If you want to get your result, then use <code>Set</code>, not <code>SetDelayed</code>:</p>
<pre><code>Clear[a,b];
x=a+b;
f[a_,b_]=x;
f[1,2]
(*3*)
</code></pre>
|
265,047 | <p>Let $X$ be a Banach space and let $T:X \rightarrow X$ be a bounded linear map. Show that if $T$ is surjective then its transpose $T':X' \rightarrow X'$ is bounded below.</p>
<p>My try: We know that $R_T^\perp = N_{T'}$, and since $T$ is surjective,
$R_T = X$, hence $R_T^\perp = N_{T'} = 0$, so $T'$ is invertible and bounded below.
Am I missing some details? Is "invertible and bounded below" correct here?</p>
| André Nicolas | 6,312 | <p>There has been a clarification, which changes the answer. We are looking at
$$\sum_{n=1}^\infty\frac{\ln n+(-1)^nn^{\frac{1}{2}}}{n\cdot n^{\frac{1}{2}}}.$$
This can be split as
$$\sum_{n=1}^\infty\frac{\ln n}{n^{3/2}} +\sum_{n=1}^\infty \frac{(-1)^n}{n}$$</p>
<p>The second is an alternating series, which converges. </p>
<p>For the first, note that if $n$ is large enough, then $\log n \lt n^{1/4}$. It follows that if $n$ is large enough,
$$0\lt \frac{\ln n}{n^{3/2}}\lt \frac{n^{1/4}}{n^{3/2}}=\frac{1}{n^{5/4}}.$$
So the sum converges, by comparison with the "$p$-series" $\sum_{1}^\infty \frac{1}{n^{5/4}}$. </p>
<p>Thus our original series converges. </p>
|
265,047 | <p>Let $X$ be a Banach space and let $T:X \rightarrow X$ be a bounded linear map. Show that if $T$ is surjective then its transpose $T':X' \rightarrow X'$ is bounded below.</p>
<p>My try: We know that $R_T^\perp = N_{T'}$, and since $T$ is surjective,
$R_T = X$, hence $R_T^\perp = N_{T'} = 0$, so $T'$ is invertible and bounded below.
Am I missing some details? Is "invertible and bounded below" correct here?</p>
| Mhenni Benghorbal | 35,472 | <p>Note that
$$\sum_{n=1}^\infty\frac{\ln n+(-1)^nn^{\frac{1}{2}}}{n\cdot n^{\frac{1}{2}}}= \sum_{n=1}^\infty\frac{\ln n }{ n^{\frac{3}{2}} } + \sum_{n=1}^{\infty}\frac{ (-1)^n }{n}\,. $$</p>
<p>Now, the first series on the RHS converges by the <a href="http://en.wikipedia.org/wiki/Integral_test_for_convergence" rel="nofollow">integral test</a> and the second series converges by the <a href="http://en.wikipedia.org/wiki/Alternating_series_test" rel="nofollow">alternating series test</a>. So the whole series converges. </p>
|
3,186,239 | <p>Two Independent variables have Bernoulli distribution:
<span class="math-container">$X_1$</span> with <span class="math-container">$b(n,p)$</span> and <span class="math-container">$X_2$</span> with <span class="math-container">$b(m,p)$</span>.
How can I find conditional distribution <span class="math-container">$\mathbb P(X_1|X_1+X_2=t)$</span>?</p>
| callculus42 | 144,421 | <p><strong>Hint:</strong> I assume that <span class="math-container">$X_1$</span> and <span class="math-container">$X_2$</span> are binomially distributed. We have <span class="math-container">$X_2=t-X_1$</span>. Now you can apply Bayes' theorem. By the independence of <span class="math-container">$X_1$</span> and <span class="math-container">$X_2$</span> we get</p>
<p><span class="math-container">$$P(X_1=x_1|X_1+X_2=t)=\frac{P(X_1=x_1)\cdot P(X_2=t-x_1)}{P(X_1+X_2=t)}$$</span></p>
<p>where <span class="math-container">$X_1+X_2\sim Bin(n+m,p)$</span></p>
|
3,186,239 | <p>Two Independent variables have Bernoulli distribution:
<span class="math-container">$X_1$</span> with <span class="math-container">$b(n,p)$</span> and <span class="math-container">$X_2$</span> with <span class="math-container">$b(m,p)$</span>.
How can I find conditional distribution <span class="math-container">$\mathbb P(X_1|X_1+X_2=t)$</span>?</p>
| P. Quinton | 586,757 | <p>Assuming you meant binomial random variables and that they are independent, you can always write <span class="math-container">$X_1=\sum_{i=1}^n Y_i$</span> and <span class="math-container">$X_2=\sum_{i=n+1}^{n+m} Y_i$</span>, where the <span class="math-container">$Y_i$</span> are independent Bernoulli random variables with parameter <span class="math-container">$p$</span>. Once you have that, <span class="math-container">$X_1+X_2=\sum_{i=1}^{n+m} Y_i$</span>. The event <span class="math-container">$X_1+X_2=t$</span> just says that <span class="math-container">$t$</span> of the <span class="math-container">$Y_i$</span> are equal to <span class="math-container">$1$</span>; conditioned on that, the event <span class="math-container">$X_1=s$</span> says that <span class="math-container">$s$</span> of the first <span class="math-container">$n$</span> of the <span class="math-container">$Y_i$</span> are one. Observe that this implies that <span class="math-container">$t-s$</span> of the last <span class="math-container">$m$</span> of the <span class="math-container">$Y_i$</span> are one, hence the probability is just the number of ways of choosing <span class="math-container">$s$</span> Bernoullis out of the first <span class="math-container">$n$</span> and <span class="math-container">$t-s$</span> out of the last <span class="math-container">$m$</span>, divided by the number of ways of having <span class="math-container">$t$</span> of the Bernoullis equal to <span class="math-container">$1$</span>, hence
<span class="math-container">\begin{align*}
\mathbb P(X_1=s|X_1+X_2=t) = \frac{{n\choose s}{m\choose t-s}}{{m+n\choose t}}
\end{align*}</span></p>
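<p>This hypergeometric formula can be cross-checked against the direct Bayes computation with the two binomial pmfs; a minimal sketch (Python assumed; the values of $n$, $m$, $p$, $t$ are arbitrary). Note that $p$ cancels, as the formula predicts.</p>

```python
from math import comb

n, m, p, t = 6, 4, 0.3, 5   # arbitrary test values

def binom_pmf(k, N, q):
    # binomial pmf P(X = k) for X ~ Bin(N, q)
    return comb(N, k) * q**k * (1 - q) ** (N - k)

for s in range(max(0, t - m), min(n, t) + 1):
    bayes = (binom_pmf(s, n, p) * binom_pmf(t - s, m, p)
             / binom_pmf(t, n + m, p))
    hyper = comb(n, s) * comb(m, t - s) / comb(n + m, t)
    assert abs(bayes - hyper) < 1e-12   # p has cancelled
print("conditional distribution is hypergeometric")
```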
|
3,014,438 | <p>Find Number of Non negative integer solutions of <span class="math-container">$x+2y+5z=100$</span></p>
<p>My attempt: </p>
<p>we have <span class="math-container">$x+2y=100-5z$</span> </p>
<p>Considering the polynomial <span class="math-container">$$f(u)=(1-u)^{-1}\times (1-u^2)^{-1}$$</span></p>
<p><span class="math-container">$\implies$</span></p>
<p><span class="math-container">$$f(u)=\frac{1}{(1-u)(1+u)}\times \frac{1}{1-u}=\frac{1}{2} \left(\frac{1}{1-u}+\frac{1}{1+u}\right)\frac{1}{1-u}=\frac{1}{2}\left((1-u)^{-2}+(1-u^2)^{-1}\right)$$</span> </p>
<p>we need to collect coefficient of <span class="math-container">$100-5z$</span> in the above given by</p>
<p><span class="math-container">$$C(z)=\frac{1}{2} \left((101-5z)+odd(z)\right)$$</span></p>
<p>Total number of solutions is</p>
<p><span class="math-container">$$S(z)=\frac{1}{2} \sum_{z=0}^{20} 101-5z+\frac{1}{2} \sum_{z \in odd}1$$</span></p>
<p><span class="math-container">$$S(z)=540.5$$</span></p>
<p>what went wrong in my analysis?</p>
| xpaul | 66,420 | <p>Note that the number of non-negative integer solutions of the following equation
<span class="math-container">$$ x+y=n $$</span>
is <span class="math-container">$n+1$</span>. Here <span class="math-container">$n$</span> is a non-negative integer. Clearly <span class="math-container">$5|(x+2y)$</span>. Let
<span class="math-container">$$x+2y=5k\tag{1}$$</span>
where <span class="math-container">$0\le k\le 20$</span>. For (1), if <span class="math-container">$k$</span> is odd, then so is <span class="math-container">$x$</span>, and if <span class="math-container">$k$</span> is even, then so is <span class="math-container">$x$</span>.</p>
<p>Case 1: <span class="math-container">$k$</span> is odd. Let <span class="math-container">$k=2n-1$</span> and <span class="math-container">$x=2m-1$</span> (so <span class="math-container">$m\ge 1$</span>). Then <span class="math-container">$1\le n\le 10$</span> and (1) becomes
<span class="math-container">$$ m+y=5n-2 $$</span>
whose number of integer solutions with <span class="math-container">$m\ge 1$</span>, <span class="math-container">$y\ge 0$</span> is <span class="math-container">$5n-2$</span>.</p>
<p>Case 2: <span class="math-container">$k$</span> is even. Let <span class="math-container">$k=2n$</span> and <span class="math-container">$x=2m$</span>. Then <span class="math-container">$0\le n\le 10$</span> and (1) becomes
<span class="math-container">$$ m+y=5n $$</span>
whose number of non-negative integer solutions is <span class="math-container">$5n+1$</span>.</p>
<p>Thus the number of non-negative integer solutions is
<span class="math-container">$$ \sum_{n=1}^{10}(5n-2)+\sum_{n=0}^{10}(5n+1)=255+286=541 $$</span></p>
|
3,014,438 | <p>Find Number of Non negative integer solutions of <span class="math-container">$x+2y+5z=100$</span></p>
<p>My attempt: </p>
<p>we have <span class="math-container">$x+2y=100-5z$</span> </p>
<p>Considering the polynomial <span class="math-container">$$f(u)=(1-u)^{-1}\times (1-u^2)^{-1}$$</span></p>
<p><span class="math-container">$\implies$</span></p>
<p><span class="math-container">$$f(u)=\frac{1}{(1-u)(1+u)}\times \frac{1}{1-u}=\frac{1}{2} \left(\frac{1}{1-u}+\frac{1}{1+u}\right)\frac{1}{1-u}=\frac{1}{2}\left((1-u)^{-2}+(1-u^2)^{-1}\right)$$</span> </p>
<p>we need to collect coefficient of <span class="math-container">$100-5z$</span> in the above given by</p>
<p><span class="math-container">$$C(z)=\frac{1}{2} \left((101-5z)+odd(z)\right)$$</span></p>
<p>Total number of solutions is</p>
<p><span class="math-container">$$S(z)=\frac{1}{2} \sum_{z=0}^{20} 101-5z+\frac{1}{2} \sum_{z \in odd}1$$</span></p>
<p><span class="math-container">$$S(z)=540.5$$</span></p>
<p>what went wrong in my analysis?</p>
| farruhota | 425,072 | <p>Given: <span class="math-container">$x+2y=100-5z$</span>, tabulate:
<span class="math-container">$$\begin{array}{c|c|c}
z&x&\text{count}\\
\hline
0&100,98,\cdots, 0&\color{red}{51}\\
1&\ \ 95,93,\cdots, 1&\color{blue}{48}\\
2&\ \ 90,88,\cdots, 0&\color{red}{46}\\
3&\ \ 85,83,\cdots, 1&\color{blue}{43}\\
4&\ \ 80,78,\cdots, 0&\color{red}{41}\\
\vdots&\vdots&\vdots\\
17&15,13,\cdots,1&\color{blue}{8}\\
18&10,8,\cdots,0&\color{red}{6}\\
19&5,3,1&\color{blue}{3}\\
20&0&\color{red}{1}\\
\hline
&&\color{red}{286}+\color{blue}{255}=541
\end{array}$$</span></p>
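<p>The total is easy to confirm by brute force; a minimal sketch (Python assumed):</p>

```python
# count non-negative integer solutions of x + 2y + 5z = 100:
# for each z, y runs over 0..(100 - 5z)//2 and x = 100 - 5z - 2y is determined
count = sum((100 - 5 * z) // 2 + 1 for z in range(21))
assert count == 541
print(count)   # 541
```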
|
1,512,528 | <p>As the title says, I'm looking to find all solutions to $$x^2 \equiv 4 \pmod{91}$$ and I am not exactly sure how to proceed.</p>
<p>The hint was that since 91 is not prime, the Chinese Remainder Theorem might be useful.</p>
<p>So I've started by separating into two separate congruences:
$$x^2 \equiv 4 \pmod{7}$$ $$x^2 \equiv 4 \pmod{13}$$</p>
<p>but now I'm confused about how to apply the CRT so I'm a bit stuck, and I'd appreciate any help or hints!</p>
| Clément Guérin | 224,918 | <p>By the Chinese remainder theorem you get an isomorphism between these two rings:</p>
<p>$$\psi:\frac{\mathbb{Z}}{91\mathbb{Z}}\rightarrow \frac{\mathbb{Z}}{7\mathbb{Z}}\times \frac{\mathbb{Z}}{13\mathbb{Z}} $$</p>
<p>$$x\mapsto (x\text{ mod } 7,x\text{ mod } 13) $$</p>
<p>It means that to solve $4=s^2$ mod $91$ you only need to solve $\psi(4)=\psi(s)^2$, which in turn gives $4=s_1^2$ mod $7$ and $4=s_2^2$ mod $13$. Now we are working mod prime numbers, so since we have two obvious solutions, those are the only ones. In other words $\psi(s)=(\epsilon_1 2,\epsilon_2 2)$ where $\epsilon_i\in\{\pm 1\}$. This gives four solutions. Now you only need to find the inverse function of $\psi$ (it is a classical computation) to explicitly obtain the four solutions to the equation $4=s^2$ mod $91$. </p>
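<p>The four solutions can be listed explicitly; a minimal sketch (Python assumed) that brute-forces the congruence and checks each root against its residues mod $7$ and mod $13$:</p>

```python
# all solutions of x^2 ≡ 4 (mod 91), by brute force
roots = [r for r in range(91) if (r * r) % 91 == 4]
assert roots == [2, 37, 54, 89]

# each root reduces to ±2 mod 7 and ±2 mod 13, matching the CRT picture
for r in roots:
    assert r % 7 in (2, 5) and r % 13 in (2, 11)
print(roots)   # [2, 37, 54, 89]
```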
|
1,862,232 | <p>I'm studying basic Ring Theory. And in my textbook, the author states the definition of Euclidean domain:<br>
An integral domain $R$ is called a <em>Euclidean domain</em> precisely when there is a function $f: R\setminus\{0\}\rightarrow\Bbb N_0$, called a degree function of $R$, such that:<br>
(i) If $a,b \in R\setminus\{0\}$ and there exists $c\in R$ such that $ac=b$ then $f(a)\le f(b)$.<br>
(ii) $a,b\in R$ with $b\neq 0$, then there exist $q,r\in R$ such that
$a=bq + r$ with $r=0$ or $r\neq 0$ and $f(r)\lt f(b)$. </p>
<p>I know the fact that all units in $R$ have the smallest degree, and this question popped into my head: </p>
<blockquote>
<p>I want to prove that all units have degree $0$. </p>
</blockquote>
<p>Unfortunately I have no idea for it. Can anyone has an answer for my question or give me a readable explanation about it? I really appreciate ! </p>
| Sol He | 998,296 | <p>Proving that all units have degree $0$ would mean they have degree $0$ for <em>any</em> degree function $f$. But I can always define a new degree function by adding $1$ to an existing one; under it, no element has degree $0$. So the claim cannot hold for all degree functions: units have the <em>smallest</em> degree, but that smallest value need not be $0$.</p>
|
3,349,630 | <p>If $a,b,c$ are positive real numbers, prove that
<span class="math-container">$$ \frac{a}{b+2c} + \frac{b}{c+2a} + \frac{c}{a+2b} \ge 1 $$</span>
I tried solving it but have no idea how to proceed; I mechanically simplified it and it looks promising, but I'm still stuck. This is from an exercise on the Cauchy–Schwarz inequality.</p>
| IamKnull | 610,697 | <p>As we have, <span class="math-container">$2+a^3=a^3+1+1\geqslant 3a$</span>, <span class="math-container">$b^2+1\geqslant 2b$</span>, thus<span class="math-container">$$\dfrac{a}{a^3+b^2+c}=\frac{a}{3+a^3+b^2-a-b}\leqslant\frac{a}{3a+2b-a-b}=\frac{a}{2a+b}.$$</span>Similarly, we can get <span class="math-container">$$\dfrac{b}{b^3+c^2+a}\leqslant\frac{b}{2b+c},\,\,\,\,\,\dfrac{c}{c^3+a^2+b}\leqslant\frac{c}{2c+a}.$$</span>It suffices to show<span class="math-container">$$\frac{a}{2a+b}+\frac{b}{2b+c}+\frac{c}{2c+a}\leqslant 1\iff \frac{b}{2a+b}+\frac{c}{2b+c}+\frac{a}{2c+a}\geqslant 1$$</span>
By Cauchy's inequality, we get<span class="math-container">$$(b(2a+b)+c(2b+c)+a(2c+a))\left(\frac{b}{2a+b}+\frac{c}{2b+c}+\frac{a}{2c+a}\right)\geqslant (a+b+c)^2.$$</span> As <span class="math-container">$b(2a+b)+c(2b+c)+a(2c+a)=(a+b+c)^2$</span>, </p>
<p>So, </p>
<p><span class="math-container">$$\frac{b}{2a+b}+\frac{c}{2b+c}+\frac{a}{2c+a}\geqslant 1$$</span></p>
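<p>A random numeric spot-check of the original inequality (Python assumed; this is of course not a proof), with equality at $a=b=c$:</p>

```python
import random

def lhs(a, b, c):
    # left-hand side of a/(b+2c) + b/(c+2a) + c/(a+2b) >= 1
    return a / (b + 2 * c) + b / (c + 2 * a) + c / (a + 2 * b)

random.seed(0)
for _ in range(10_000):
    a, b, c = (random.uniform(0.01, 10) for _ in range(3))
    assert lhs(a, b, c) >= 1 - 1e-12

assert abs(lhs(1.0, 1.0, 1.0) - 1.0) < 1e-12   # equality at a = b = c
print("inequality holds on 10,000 random samples")
```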
|
8,023 | <p>I'm looking for an easily-checked, local condition on an $n$-dimensional Riemannian manifold to determine whether small neighborhoods are isometric to neighborhoods in $\mathbb R^n$. For example, for $n=1$, all Riemannian manifolds are modeled on $\mathbb R$. When $n=2$, I believe that it suffices for the scalar curvature to vanish everywhere (this is certainly necessary). But my intuition is poor for higher-dimensional structures.</p>
<p>Put another way: given a Riemannian structure $g$ on a smooth manifold, when can I find coordinates $x^1,\dots,x^n$ so that $g_{ij}(x) = \delta_{ij}$?</p>
| Danny Calegari | 1,672 | <p>Greg's comment on Deane's answer is sort of correct (given suitable hypotheses), but maybe a bit misleading in the context of this discussion. Since the character count doesn't allow it, I'm adding this comment as an "answer" (though it is not an answer to the original question).</p>
<p>There are non-isometric 2-spheres $S_1,S_2$ for which there is a diffeomorphism $f$ from $S_1$ to $S_2$ so that the curvature at each point $p \in S_1$ is equal to the curvature of $f(p)$ in $S_2$. For example, let $O$ be a curve in the plane with dihedral $D_2$ symmetry whose curvature has 4 critical points (is this called an "oval"? I forget). If $S$ is a surface of revolution of $O$, then $S$ is foliated (in the complement of two "poles") by "latitude" circles of constant curvature, and the value of the curvature moves monotonically between two extreme values as one moves from the "poles" to the "equator". One can easily produce nonisometric surfaces with "the same" curvature function. The length of the circle with a given curvature value is an invariant of the isometry type which is not captured by the curvature itself (thought of as smooth function on $S$). </p>
<p>I think this example (and more discussion) is in Berger's book "A panoramic view of Riemannian geometry".</p>
|
4,383,800 | <p>I can already see that the <span class="math-container">$\lim_\limits{n\to\infty}\frac{n^{n-1}}{n!e^n}$</span> converges by graphing it on Desmos, but I have no idea how to algebraically prove that with L’Hopital’s rule or induction. Where could I even start with something like this?</p>
<p>Edit: For context, I came across this limit while studying the series expansion for the Lambert W function, <span class="math-container">$W(x)= \sum_{n=1}^{\infty}\frac{(-n)^{n-1}x^n}{n!}$</span>. By the ratio test, it is clear that we need <span class="math-container">$|x|<\frac1e$</span> for convergence, but I needed to use the Alternating Series Test to see whether the series converges at <span class="math-container">$x= \pm\frac1e$</span>. Finding <span class="math-container">$\lim_\limits{n\to\infty}|a_n|$</span> is the first step of the test.</p>
| Salcio | 821,280 | <p>First off, note that from the AM-GM inequality one gets <span class="math-container">$(1+1/(n+1))^{n+1} > (1+1/n)^n$</span>.
In words, the sequence <span class="math-container">$a_n = (1+1/n)^n$</span> is increasing. One can see this by substituting <span class="math-container">$b_1=1$</span> and <span class="math-container">$b_k = 1+ 1/n$</span>, <span class="math-container">$k = 2,3,\dots,n+1$</span>, into the AM-GM inequality.
Of course <span class="math-container">$a_n$</span> tends to <span class="math-container">$e$</span> so each term is less than <span class="math-container">$e$</span>.
Now, let <span class="math-container">$c_n = \frac{n^{n-1}}{n!e^n}$</span>.
The ratio <span class="math-container">$c_{n+1}/c_n$</span> is equal to <span class="math-container">$(1+1/n)^{n-1}*\frac{1}{e} = (1+1/n)^n*\frac{1}{1+1/n}*\frac{1}{e}$</span>.
But <span class="math-container">$(1+1/n)^n < e$</span> so <span class="math-container">$c_{n+1}/c_n < \frac{1}{1 + 1/n} = \frac{n}{n+1}$</span>.
If you multiply the inequalities <span class="math-container">$c_{k+1}/c_k < \frac{k}{k+1}$</span> side by side for <span class="math-container">$k=n$</span> down to <span class="math-container">$1$</span>, the product telescopes to
<span class="math-container">$c_{n+1} < \frac{c_1}{n+1} = \frac{1}{e(n+1)} < \frac{1}{n+1}$</span>, so the sequence tends to <span class="math-container">$0$</span>.</p>
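<p>The decay of $c_n$ is easy to observe numerically; a minimal sketch (Python assumed), computing $c_n$ in log space via <code>math.lgamma</code> to avoid overflow:</p>

```python
import math

def c(n):
    # c_n = n^(n-1) / (n! e^n), computed in log space to avoid overflow
    return math.exp((n - 1) * math.log(n) - math.lgamma(n + 1) - n)

values = [c(n) for n in (1, 10, 100, 1000)]
assert all(a > b for a, b in zip(values, values[1:]))   # strictly decreasing
assert values[-1] < 1e-4                                # heading to 0
print(values)
```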
|
225,866 | <p>If I define, for example,</p>
<pre><code>f[OptionsPattern[{}]] := OptionValue[a]
</code></pre>
<p>Then the output for <code>f[a -> 1]</code> is 1.</p>
<p>However, in my code, I have a function that must be called using the syntax <code>f[some parameters][some other parameters]</code>, and I want to add options to the <strong>second</strong> set of square brackets. So I tried:</p>
<pre><code>g[][OptionsPattern[{}]] := OptionValue[a]
</code></pre>
<p>But then, the output for <code>g[][a -> 1]</code> is <code>OptionValue[a]</code> instead of 1. I'm not sure why this is not working. Shouldn't <code>OptionsPattern[{}]</code> match <strong>any</strong> set of options, no matter where they are located?</p>
<p>How can I add options that can be provided in the second set of square brackets instead of the first?</p>
| flinty | 72,682 | <p>I can see at least three ways:</p>
<pre><code>Through[{x, y, z}[#]] & /@ vl
</code></pre>
<p>Or alternatively:</p>
<pre><code>Transpose[# /@ vl & /@ {x, y, z}]
</code></pre>
<p>Or alternatively:</p>
<pre><code>Outer[#2[#1]&, vl, {x, y, z}]
</code></pre>
|
2,733,728 | <p>How can one find a general form for $\int_0^1 \frac {\log(x)}{(1-x)} dx=-\zeta(2)
\,?$ Namely $\int_0^1 \frac {\log^n(x)}{(1-x)^m} dx\,$ where $n,m\ge1$. Similar to the original integral, I let $1-x=u\,$, which gives $$\int_{-1}^0 \frac {\log^n(1+x)}{x^m} dx$$ and expanding into series we have: $\int_{-1}^0x^{-m}(\sum_{k=1}^{\infty}\frac{(-1)^{k+1}x^k}{k})^n\,dx$. Now this might be doable with a computer using Cauchy products, but otherwise it's madness.</p>
<p>Another try is to let $I(k)=\int_0^1 \frac {x^k}{(1-x)^m}\,dx$ and take the derivative $n$ times with respect to $k$, while assuming $k\ge n$, so: $$\frac{d^n}{dk^n}I(k)=\int_0^1\frac{x^k\log^n(x)}{(1-x)^m}dx$$ Plugging $(1-x)^{-m}=\sum_{j=0}^{\infty} \binom{-m}{j}(-1)^jx^j $ into the integral and making use of Tonelli's theorem, we get: $$\frac{d^n}{dk^n}I(k)=\sum_{j=0}^{\infty} \binom{-m}{j}(-1)^j\int_0^1 x^{(k+j)}\log^n(x)dx=\sum_{j=0}^{\infty} \binom{-m}{j}(-1)^{(n+j)} n! (k+j+1)^{-(n+1)}$$ But I don't know how to evaluate the latter series.</p>
| gandalf61 | 424,513 | <blockquote>
<p>"... if an action takes up $0$ time, then surely this action never happened ..."</p>
</blockquote>
<p>This is where your error lies. In classical physics we assume that space and time are continuous and can be divided into parts that are as small as we like. Under these assumptions it is possible for an event to have zero duration i.e. to occur at a single instant in time. It is also possible for an event to have zero spatial extent i.e. to occur at a single point in space.</p>
<p>If you have philosophical doubts about these assumptions then you may be reassured by the fact that we know that classical physics is only an approximation to reality. In quantum physics, Heisenberg's uncertainty principle prevents an event having zero duration or zero extension - there is always some unavoidable uncertainty over the time or location at which an event occurs.</p>
<p>But at the scale of throwing a ball, quantum physics has a negligible effect and so we use the methods of classical physics.</p>
|
441,404 | <p>If H is a Hilbert space, is B(H) under the operator norm a Hilbert space?
If not, does there exist any norm on B(H) that makes it a Hilbert space?</p>
| bradhd | 5,116 | <p>To call something a Hilbert space means that it is equipped with a complete <em>inner product</em>, not just a norm. A complete normed vector space is called a <em>Banach space</em>, and indeed $B(H)$ with the operator norm is a Banach space. In fact, it has some additional structure (multiplication given by composition, and an involution given by taking adjoints) which makes it a <a href="https://en.wikipedia.org/wiki/C*-algebra" rel="nofollow noreferrer">C*-algebra</a>.</p>
<p>Of course, you can still ask whether there exists an inner product on $B(H)$ which makes it a Hilbert space, and this depends on $H$:</p>
<p>If $H$ is finite-dimensional, then choosing a basis identifies it with $\mathbb{C}^n$ for $n=\dim H$, and $B(H)$ is identified with the space of $n\times n$ matrices, which is a Hilbert space.</p>
<p>If $H$ is infinite-dimensional, then $B(H)$ is not a Hilbert space: see <a href="https://math.stackexchange.com/questions/112844/c-algebra-which-is-also-a-hilbert-space">this other question</a> for some reasons why.</p>
|
441,404 | <p>If H is a Hilbert space, is B(H) under the operator norm a Hilbert space?
If not, does there exist any norm on B(H) that makes it a Hilbert space?</p>
| Josse van Dobben de Bruyn | 246,783 | <p>For your first question: we have that $B(H)$ is a Hilbert space if and only if $\dim(H) \leq 1$ holds.</p>
<ul>
<li>If $\dim(H) \leq 1$ holds, then it is clear that $B(H)$ is a Hilbert space.</li>
<li>If $\dim(H) > 1$ holds, then we may choose $x,y\in H$ with $\lVert x \rVert = \lVert y \rVert = 1$ and $\langle x, y\rangle = 0$. Let $P : z \mapsto \langle z,x\rangle \cdot x$ and $Q : z \mapsto \langle z,y\rangle\cdot y$ denote the orthogonal projections onto $\text{span}(x)$ and $\text{span}(y)$, respectively, then we have
$$ \lVert P + Q \rVert^2 + \lVert P - Q \rVert^2 \: = \: 1 + 1 \: = \: 2, $$
but also
$$ 2\Big(\lVert P \rVert^2 + \lVert Q\rVert^2\Big) \: = \: 2\cdot (1 + 1) \: = \: 4. $$
We see that $B(H)$ does not satisfy the <a href="https://en.wikipedia.org/wiki/Parallelogram_law" rel="nofollow noreferrer">parallelogram rule</a>, so it cannot be a Hilbert space.</li>
</ul>
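<p>(For concreteness — not part of the argument — the second bullet can be replayed for $H=\mathbb{C}^2$ with $x=e_1$, $y=e_2$: there $P$, $Q$ and $P\pm Q$ are all diagonal, and the operator norm of a diagonal matrix is its largest absolute entry:)</p>

```python
# P, Q: orthogonal projections onto span(e1), span(e2) in C^2,
# represented by their diagonals; the operator norm of a diagonal
# matrix is the maximum absolute value among its entries.
op_norm = lambda diag: max(abs(v) for v in diag)

P, Q = (1, 0), (0, 1)
P_plus_Q = tuple(a + b for a, b in zip(P, Q))   # = I_2, norm 1
P_minus_Q = tuple(a - b for a, b in zip(P, Q))  # diag(1, -1), norm 1

lhs = op_norm(P_plus_Q) ** 2 + op_norm(P_minus_Q) ** 2  # 1 + 1 = 2
rhs = 2 * (op_norm(P) ** 2 + op_norm(Q) ** 2)           # 2 * (1 + 1) = 4
assert lhs != rhs  # the parallelogram rule fails for the operator norm
```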
<p>As for your second question, Brad already pointed out a different norm on $B(H)$ that turns it into a Hilbert space in case $H$ is finite-dimensional. This norm can be generalised to infinite dimensions, where it is known as the <a href="https://en.wikipedia.org/wiki/Hilbert%E2%80%93Schmidt_operator" rel="nofollow noreferrer">Hilbert–Schmidt norm</a>, and one can prove that this does indeed satisfy the parallelogram rule. However, unfortunately not all elements of $B(H)$ have finite Hilbert–Schmidt norm, unless $H$ is finite-dimensional. Thus, you get a proper subspace $\mathcal{HS}(H) \subsetneq B(H)$ consisting of <em>Hilbert–Schmidt operators</em> (that is, operators with finite Hilbert–Schmidt norm). The inner product can be given by $\langle A, B\rangle = \text{tr}(B^*A)$, where you have to prove first that the product of two Hilbert–Schmidt operators is a trace class operator.</p>
<p>Edit: even in the infinite-dimensional case there exists a different norm $\lVert\:\cdot\:\rVert_2$ that turns $B(H)$ into a Hilbert space, as by <a href="https://math.stackexchange.com/a/1599903/246783">this answer</a>. However, this does not provide us with an explicit representation of $\lVert\:\cdot\:\rVert_2$, and there is no guarantee that it should be a Banach algebra norm. <a href="https://math.stackexchange.com/q/1705879/246783">I posted a follow-up question about this</a>.</p>
<p>For more on Hilbert–Schmidt operators I know the following references (though there may be others):</p>
<ul>
<li>John B. Conway, <em>A Course in Functional Analysis</em>, exercises IX.2.19 and IX.2.20.</li>
<li>Gerard J. Murphy, <em>$C^*$-algebras and operator theory</em>, section 2.4. (Excellent book!)</li>
</ul>
|
428,530 | <p>Let $\Omega := [0, \pi] \times [0,1]$. We are searching for a function $u$ on $\Omega$ s.t.
$$
\Delta u =0
$$
$$
u(x,0) = f_0(x), \quad u(x,1) = f_1(x), \quad u(0,y) = u(\pi,y) = 0
$$ with
$$
f_0(x) = \sum_{k=1}^\infty A_k \sin kx \quad, f_1(x) = \sum_{k=1}^\infty B_k \sin kx
$$
If I use separation of variables, say $u(x,y) = f(x)g(y)$, I get
$$
f''(x)+\lambda f(x) = 0 , \quad g''(y)-\lambda g(y) = 0
$$ with $f(0) = f(\pi) = 0$ where I use that $f,g \neq 0$. $\lambda$ is some constant. How can I proceed ?</p>
<p>Thanks in advance.</p>
| Avitus | 80,800 | <p>Hint: which is the most general solution of </p>
<p>$$f''(x)=-\lambda f(x)$$</p>
<p>and</p>
<p>$$g''(y)=\lambda g(y)?$$</p>
<p>You need to consider linear combinations of exponentials. Such exponentials have real or complex exponents depending on the sign of $\lambda$, i.e. $\lambda >0$, $\lambda<0$ (not necessarily in this order!). Try quickly to see what happens if $\lambda=0$, instead.</p>
<p>To determine which choice of sign for $\lambda$ is the correct one for your problem, you need to apply the boundary conditions you wrote for $f$ and $g$ at $0$ and $\pi$. Once you are there apply superposition and the boundary conditions with the Fourier series. You are done.</p>
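<p>Spelled out, for readers who want to see where the hint lands (a sketch, using the boundary conditions $f(0)=f(\pi)=0$ stated in the question):</p>

```latex
% f(0) = f(pi) = 0 forces lambda = k^2 with k = 1, 2, ...:
f_k(x) = \sin(kx), \qquad g_k(y) = a_k \cosh(ky) + b_k \sinh(ky)

% superposition, then match the Fourier data at y = 0 and y = 1:
u(x,y) = \sum_{k=1}^{\infty} \sin(kx)\bigl(a_k \cosh(ky) + b_k \sinh(ky)\bigr),
\qquad a_k = A_k, \quad a_k \cosh k + b_k \sinh k = B_k.
```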
|
76,600 | <p>The group of three dimensional rotations $SO(3)$ is a subgroup of the Special Euclidean Group $SE(3) = \mathbb{R}^3 \rtimes SO(3)$. The manifold of $SO(3)$ is the three dimensional real projective space $RP^3$. Does $RP^3$ cause a separation of space in the manifold of $SE(3)$? </p>
<p>(edit) Sorry about lack of clarity. My question should be worded as 'does $SO(3)$ partition any four dimensional subspace of $SE(3)$ into exactly two disjoint pieces?'</p>
<p>I am basically interested in understanding whether a generalization of the Jordan curve separation theorem works in such non Euclidean spaces. In particular, I want to know if (non) orientability of $SO(3)$ affects the generalization, especially since it is used to construct $SE(3)$ as a product space with $\mathbb{R}^3$.</p>
| Sai | 14,667 | <p>I cannot post comments yet, but I am interested in the answer to these questions. It appears $R^2 \times SO(3)$ will not partition $SE(3)$ into disconnected pieces because $R^2 \times SO(3)$ is not compact. What about the set $M \times RP^3$ where $M$ is the Mobius strip? That is a five dimensional surface. Does it partition $SE(3)$? Also the original question is unanswered, does $SO(3)$ partition $R \times SO(3)$ into disconnected pieces? Curious to know. </p>
|
2,542,184 | <p>Is $A = \begin{bmatrix}
1&1&0\\
0&1&0\\
0&0&1\\
\end{bmatrix}$
and $B = \begin{bmatrix}
1&1&0\\
0&1&1\\
0&0&1\\
\end{bmatrix}$ similar? Please justify your answer.</p>
<p>So far what I've done is to check rank, det, trace, and characteristic polynomial to maybe disprove it but all of them are the same so I'm kinda stuck.</p>
| Community | -1 | <p>If $A$ and $B$ are similar, $A-I$ and $B-I$ must be similar too, which means that they must have the same rank. This is not the case here, thus $A$ and $B$ are not similar.</p>
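<p>The rank comparison is easy to verify with exact arithmetic (a small illustrative check; the rank function below is plain Gaussian elimination over the rationals):</p>

```python
from fractions import Fraction

def rank(M):
    # Gaussian elimination over the rationals (exact, no rounding)
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

A_minus_I = [[0, 1, 0], [0, 0, 0], [0, 0, 0]]
B_minus_I = [[0, 1, 0], [0, 0, 1], [0, 0, 0]]
assert rank(A_minus_I) == 1 and rank(B_minus_I) == 2  # so A, B are not similar
```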
|
2,130,658 | <p>How would I go about proving this mathematically? Having looked at a proof for a similar question I think it requires proof by induction. </p>
<p>It seems obvious that the number of horizontal dominoes would be even from thinking about the first few cases. For $n=0$ there are no horizontal dominoes, which is even; for $n=1$ there can only be one vertical domino, so there are $0$ horizontal dominoes, again even. For $n=2$ you can have either two horizontal or two vertical dominoes, which again gives $0$ or $2$ horizontal dominoes, an even number. And so on for $n$ greater than $2$.</p>
<p>I would like to prove that the number of way of dividing a 2-high-by-n-wide rectangle into dominoes so that $2j$ dominoes are horizontal is ${n-j\choose j}$ and deduce that $U_n$ (where $U_n$ is the number of ways to divide a 2-high-by-n-wide rectangle into 2-wide-by-1-high dominoes) = $$\sum_j {n-j\choose j}$$ where this sum is over all the integers $j$ with $0\le j\le \frac{n}{2}$.</p>
<p>I understand that trivially for a 2-high-by-n-wide rectangle you can divide it by exactly $2j=n$ horizontal dominoes or by $n$ vertical dominoes or some combination of vertical and horizontal dominoes, but how can I use this knowledge to construct the proof?</p>
| Chirantan Chowdhury | 337,567 | <p>Mathematically you can define $t_n$ to be the number of horizontal dominoes and then solve the recursion $ t_n = 2 + t_{n-2}$ with initial values $ t_1 = 0 , t_2 = 2$.</p>
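<p>The binomial count $\binom{n-j}{j}$ claimed in the question can also be checked by brute force (an illustrative sketch; $h(n,j)$ counts tilings of a $2\times n$ strip with exactly $2j$ horizontal dominoes, classifying by what covers the leftmost column):</p>

```python
from functools import lru_cache
from math import comb

@lru_cache(None)
def h(n, j):
    # tilings of a 2-by-n strip using exactly 2j horizontal dominoes:
    # the leftmost column is either one vertical domino (n-1 columns left,
    # j unchanged) or the start of two stacked horizontals (n-2 left, j-1)
    if n < 0 or j < 0:
        return 0
    if n == 0:
        return 1 if j == 0 else 0
    return h(n - 1, j) + h(n - 2, j - 1)

for n in range(13):
    for j in range(n // 2 + 1):
        assert h(n, j) == comb(n - j, j)  # the claimed binomial count
```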
|
1,088,338 | <p>There are at least a few things a person can do to contribute to the mathematics community without necessarily obtaining novel results, for example:</p>
<ul>
<li>Organizing known results into a coherent narrative in the form of lecture notes or a textbook</li>
<li>Contributing code to open-source mathematical software</li>
</ul>
<p>What are some other ways to make auxiliary contributions to mathematics?</p>
| Steven Gubkin | 34,287 | <p>As @Hennobrandsma said, you can teach mathematics well.</p>
<p>In addition to developing tools which aid mathematics research (as you mention), you can also work to develop tools which aid teaching mathematics. </p>
<p>Without coming up with "new mathematics", you can apply mathematics in novel ways to other fields. I think this is especially true if you can find natural and really useful applications for some piece of mathematics which has not been applied yet. This is the best kind of "advertisement" mathematics can get.</p>
|
1,088,338 | <p>There are at least a few things a person can do to contribute to the mathematics community without necessarily obtaining novel results, for example:</p>
<ul>
<li>Organizing known results into a coherent narrative in the form of lecture notes or a textbook</li>
<li>Contributing code to open-source mathematical software</li>
</ul>
<p>What are some other ways to make auxiliary contributions to mathematics?</p>
| Federico Poloni | 65,548 | <p>Another way to contribute that hasn't been mentioned yet is <strong>advocacy</strong>. It is important to raise awareness on the importance of mathematics and scientific literacy in general. For instance, here are a few concepts that it would be useful to disseminate to the general public:</p>
<ul>
<li>that some knowledge of mathematics, statistics and science is important for the average person, too.</li>
<li>that there is still active research in mathematics; theorems weren't all discovered 300 years ago.</li>
<li>that mathematics (and STEM in general) is a viable career path, and people interested in it shouldn't be laughed at.</li>
<li>that research, even basic research, is an important endeavour and needs funding.</li>
</ul>
|
3,541,947 | <p>How do you pronounce <span class="math-container">$\mathbb{F}_2, \mathbb{F}_2^n, \mathbb{N}^k, [n] = \{1,\ldots,n\},$</span> and <span class="math-container">$S \subseteq [n]$</span> when you're reading a text?</p>
<p>I've just started reading more advanced math textbooks and these are appearing all the time. </p>
| PrincessEev | 597,568 | <p>It's ultimately a matter of preference and taste. At some point in learning mathematics, it helps to "break away" the notation from spoken language to avoid distracting questions such as this. For instance, you know <span class="math-container">$\Bbb N$</span> refers to the natural numbers (and presumably know what those consist of): whether you call it "n", "the set of naturals", "the set of positive/nonnegative integers" (depending on convention), etc., you know what is being referred to, and that is what's important.</p>
<p>I've often heard both - an "abbreviated" name and the proper full name - interchangeably throughout my education. Personally, I think that the longer name makes the meaning behind the notation clearer if you're a novice, but this is purely a subjective thing. At the end of the day, as you get used to the notation, you'll immediately understand what a given symbol means rather than having to decipher it and all. How it's pronounced or said won't really affect that in the long run - what's important is what the notation conveys, rather than how it's said, at least where learning is concerned.</p>
<p>So as some examples, the shorter and longer names I've often heard for each notation you've brought up. But there's no "wrong" way to pronounce them inherently, so long as it's clear what is intended!</p>
<ul>
<li><p><span class="math-container">$\Bbb F_2$</span>:</p>
<ul>
<li>Short: "eff two"</li>
<li>Long: "the (finite) field of two elements"</li>
</ul></li>
<li><p><span class="math-container">$\Bbb F_2^n$</span>: (similar for <span class="math-container">$\Bbb N^k$</span>)</p>
<ul>
<li>Short: "eff two to the n"</li>
<li>Long: "the n-ary product of eff two with itself"</li>
</ul></li>
<li><p><span class="math-container">$\Bbb N$</span>:</p>
<ul>
<li>Short: "n"</li>
<li>Long: "the naturals", "the natural numbers", "the nonnegative/positive integers" (depends on your convention whether <span class="math-container">$0 \in \Bbb N$</span> but that's another question altogether)</li>
</ul></li>
<li><p><span class="math-container">$[n]$</span>:</p>
<ul>
<li>Short: "bracketed n" <em>(maybe? I don't think I've ever heard this said aloud)</em></li>
<li>Long: "the first <span class="math-container">$n$</span> integers" (you may specify "positive" as well, but I feel most people understand what this means)</li>
</ul></li>
<li><p><span class="math-container">$S \subseteq [n]$</span>:</p>
<ul>
<li>Short: "a subset of bracket n" <em>(I guess? Not much shorter but whatever...)</em></li>
<li>Long: "S is a subset of the first n integers"</li>
</ul></li>
</ul>
|
4,231,509 | <p>I'm trying to prove that the group <span class="math-container">$(\mathbb{R}^*, \cdot)$</span> is not cyclic (similar to [1]). My efforts until now have culminated in the following sentence:</p>
<blockquote>
<p>If <span class="math-container">$(\mathbb{R}^*,\cdot)$</span> is cyclic, then <span class="math-container">$\exists x \in \mathbb{R}^*$</span> such that <span class="math-container">$x \cdot x \neq x \in \mathbb{R}^*$</span>.</p>
</blockquote>
<p>The assumption written above is only not true for the neutral element on <span class="math-container">$\mathbb{R}^*$</span>. Are there any follow-ups that I should do to improve that sentence?</p>
<p>[1] <a href="https://math.stackexchange.com/questions/1491510/show-that-q-and-r-are-not-cyclic-groups">Show that (Q, +) and (R, +) are not cyclic groups.</a></p>
| Sahan Manodya | 937,663 | <p>Suppose <span class="math-container">$(\mathbb{R}^*,\cdot)$</span> is cyclic. Now <span class="math-container">$\langle2,3\rangle$</span> is a subgroup of <span class="math-container">$(\mathbb{R}^*,\cdot)$</span>, and it is cyclic since it is a subgroup of a cyclic group; therefore <span class="math-container">$\langle2,3\rangle=\langle a\rangle$</span> for some <span class="math-container">$a\in\mathbb{R}^*$</span>. Now <span class="math-container">$a^n=2$</span> and <span class="math-container">$a^m=3$</span> for some <span class="math-container">$n,m\in\mathbb{Z}-\{0\}$</span>, which implies
<span class="math-container">$$a^{nm}=2^m=3^n$$</span>
But <span class="math-container">$2$</span> and <span class="math-container">$3$</span> are distinct primes, so this is a contradiction.</p>
|
3,842,739 | <p>Let <span class="math-container">$H$</span> be a group and <span class="math-container">$H^m=\{ h^m \mid h\in H\}$</span>.</p>
<p>I know that this is a subgroup of <span class="math-container">$H$</span> when <span class="math-container">$H$</span> is abelian.
But I want to know what happens if <span class="math-container">$H$</span> is not abelian.
For which <span class="math-container">$n$</span> is <span class="math-container">$H^n$</span> a subgroup of <span class="math-container">$H$</span>, and for which <span class="math-container">$n$</span> is it not?</p>
<p>I tried for the nonabelian group <span class="math-container">$S_3$</span> and found that <span class="math-container">$S_3^{2k}$</span> is a subgroup of <span class="math-container">$S_3$</span> for all <span class="math-container">$k\in \mathbb{Z}$</span> and <span class="math-container">$S_3^{2k+1}$</span> is not a subgroup of <span class="math-container">$S_3$</span> for all <span class="math-container">$k\in \mathbb{Z}$</span>.</p>
<p>But I don't know whether it is true for arbitrary groups or not and I want to prove it if it is correct.</p>
<p>Any help would be appreciated.</p>
| lulu | 252,071 | <p>For any finite simple group of even order, the squares are not a subgroup.</p>
<p>To see this, note the following:</p>
<p>Lemma: If the squares of a group <span class="math-container">$H$</span> are contained in a subgroup <span class="math-container">$G$</span>, then <span class="math-container">$G$</span> is normal.</p>
<p>Pf: Let <span class="math-container">$h,g$</span> be arbitrary elements of <span class="math-container">$H,G$</span> respectively. We want to show that <span class="math-container">$hgh^{-1}\in G$</span>. But <span class="math-container">$$hgh^{-1}=h^2h^{-1}gh^{-1}gg^{-1}=h^2\left(h^{-1}g\right)^2g^{-1}\in G$$</span> and we are done.</p>
<p>Thus, <span class="math-container">$A_5$</span> in particular is a counterexample</p>
<p>Remark: For completeness we should note that the squares can't be the entire group. As the group has even order (by assumption) there is an element of order <span class="math-container">$2$</span>, hence the squaring map is not injective. As we are speaking of finite groups here, it follows that the squaring map is not surjective, and we are done.</p>
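<p>(Illustration, not part of the proof: the failure for $A_5$ can be confirmed by brute force — the squares in $A_5$ miss the double transpositions and are not closed under composition:)</p>

```python
from itertools import permutations

def compose(p, q):
    # (p ∘ q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(p)))

def sign(p):
    # sign of a permutation via inversion count
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

A5 = [p for p in permutations(range(5)) if sign(p) == 1]
squares = {compose(p, p) for p in A5}

# the squares miss the double transpositions (order-2 elements) ...
assert len(squares) < len(A5)
# ... and indeed fail to be closed under composition
assert any(compose(a, b) not in squares
           for a in squares for b in squares)
```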
|
2,965,459 | <p>Some curves defined by polynomial equations are disconnected over reals but not over complexes, e.g., <span class="math-container">$x y - 1 = 0$</span>. How can we convince someone with background only on equations over reals that the curve drawn by above equation is connected over complexes? Is a plot or something possible, for example? It will be a 4d plot if x and y are expanded to real and imaginary parts.
Any other plot, or algebraic way to show connectedness?</p>
| Ricardo Buring | 23,180 | <p>A solution over <span class="math-container">$\mathbb{C}$</span> is a pair <span class="math-container">$(z, 1/z)$</span> with <span class="math-container">$z \neq 0$</span>. Consider another solution <span class="math-container">$(w, 1/w)$</span>. There is a path from <span class="math-container">$z$</span> to <span class="math-container">$w$</span> in <span class="math-container">$\mathbb{C}$</span> which does not cross <span class="math-container">$0$</span> (this is the difference with <span class="math-container">$\mathbb{R}$</span>), and this yields a path from <span class="math-container">$1/z$</span> to <span class="math-container">$1/w$</span> by inverting every point on the path. From this we get a path from <span class="math-container">$(z, 1/z)$</span> to <span class="math-container">$(w, 1/w)$</span> which lives inside the solution set.</p>
<p>You can draw the corresponding plots explicitly. It is especially easy on the unit circle, where inversion is just complex conjugation. For example, to go from <span class="math-container">$-1$</span> to <span class="math-container">$1$</span> in <span class="math-container">$\mathbb{C}$</span> you can take the upper semicircle of the unit circle. The "inverted" path (inverting every complex number on the path) is exactly the lower semicircle. This shows how to go from <span class="math-container">$(-1,-1)$</span> to <span class="math-container">$(1,1)$</span> (which wasn't possible over the reals) via the complex domain.</p>
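<p>(A numeric illustration of exactly this path for the endpoints $(-1,-1)$ and $(1,1)$; the parametrization $z(t)=e^{i\pi(1-t)}$ of the upper semicircle is my own choice:)</p>

```python
import cmath

def path(t):
    z = cmath.exp(1j * cmath.pi * (1 - t))  # upper semicircle from -1 to 1
    return z, 1 / z                          # the pair stays in the solution set

for k in range(101):
    x, y = path(k / 100)
    assert abs(x * y - 1) < 1e-12           # xy = 1 all along the path

# endpoints: (-1, -1) at t = 0 and (1, 1) at t = 1
assert abs(path(0)[0] + 1) < 1e-12 and abs(path(1)[0] - 1) < 1e-12
```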
|
1,262,322 | <p>Suppose that virus transmissions in 500 acts of intercourse are mutually independent events and that the probability of transmission in any one act is $\frac{1}{500}$. What is the probability of infection?</p>
<p>So I do know that one way to solve this is to find the probability of complement of the event we are trying to solve. Letting $C_1,C_2,C_3...C_{500}$ denote the events that a virus does not occur during encounters 1,2,....500. The probability of no infection is:</p>
<p>$$P(C_1\cap C_2 \cap....\cap C_{500}) = (1 - (\frac{1}{500}))^{500} = 0.37$$</p>
<p>then to find the probability of infection I would just do : $1 - 0.37 = 0.63$</p>
<p>but my question is how would I find the probability not using the complement? I would have thought since the events are independent and each with probability of $\frac{1}{500}$ that if I multiplied each independet event I could obtain the value, but that is not the case. What am I forgetting to consider if I wanted to calculate this way? I'm asking more so to have a fuller understanding of both sides of the coin.</p>
<p>Edit: I think I may have figured out what I'm missing in my thinking. In the case of trying to figure out the probability of infection I have to take into account that infection could occur on the first transmission, or the second, or the third,...etc. Also transmission could occur on every interaction or on a few interactions but not all. So in each of these scenarios I would encounter some sort of combination of probabilities like $(\frac{499}{500})(\frac{499}{500})(\frac{1}{500})(\frac{499}{500})......(\frac{1}{500})$ as an example of one possible combination.</p>
| zoli | 203,663 | <p>The general answer is independent of viruses and intercourses. </p>
<p>Let $C_1, C_2$ be two events of the same probability $p$ and the question is the probability that at least one of them occurs.</p>
<p>One can say that</p>
<ol>
<li>$$P(C_1 \cup C_2)=P(C_1)+P(C_2)-P(C_1 \cap C_2).$$
or that</li>
<li>$$P(C_1\cup C_2)=P\left(\overline{\overline{C_1\cup
C_2}}\right)=P\left(\overline{ \overline {C_1} \cap \overline{C_2}
}\right)=1-P\left(\overline {C_1} \cap \overline{C_2}\right).$$</li>
</ol>
<p>If $C_1$ and $C_2$ are independent then</p>
<ol>
<li><p>$$P(C_1 \cup C_2)=P(C_1)+P(C_2)-P(C_1)P(C_2)=2p-p^2.$$
or</p></li>
<li><p>$$P(C_1\cup C_2)=1-P\left(\overline
{C_1}\right)P\left(\overline{C_2}\right)=1-(1-p)^2.$$</p></li>
</ol>
<p>Both approaches can be generalized for any number of independent events of the same probability:</p>
<ol>
<li>$$P(C_1\cup C_2 \cup\cdots \cup C_N)=\sum_{k=1}^N(-1)^{k-1}{N \choose k}p^k$$
or</li>
<li>$$P(C_1\cup C_2 \cup\cdots \cup C_N)=1-(1-p)^N.$$</li>
</ol>
<p>The second version is way simpler if the events are independent and of equal probabilities. The first one, however, works in general (not in this special form!).</p>
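<p>(A quick check that the two versions agree — the equality is just the binomial theorem; exact rational arithmetic keeps it honest:)</p>

```python
from fractions import Fraction
from math import comb

def inclusion_exclusion(N, p):
    # sum_{k=1}^N (-1)^(k-1) C(N, k) p^k
    return sum((-1) ** (k - 1) * comb(N, k) * p ** k for k in range(1, N + 1))

def complement(N, p):
    # 1 - (1 - p)^N
    return 1 - (1 - p) ** N

for N in (1, 2, 5, 20):
    p = Fraction(1, N + 1)
    assert inclusion_exclusion(N, p) == complement(N, p)  # exact equality

# the virus example: N = 500, p = 1/500 gives roughly 0.63
print(float(complement(500, Fraction(1, 500))))
```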
|
696,848 | <p>$\DeclareMathOperator{\rank}{rank}$
First off, I'm sorry I'm still not able to make use of the built-in formula expressions; I don't have time to learn it now, but I'll do it before my next question.</p>
<p>I have a couple of questions regarding eigenvectors and generalized eigenvectors. To some of these questions I know the answer partially or there are some uncertainties so I will just ask in the most general form, but I can really appreciate precise answers.</p>
<p>How do I know how many eigenvectors to expect for each eigenvalue?</p>
<p>How do I know how many generalized eigenvectors to expect for each of those eigenvectors?
Consider a matrix $A$ whose eigenvalues and vectors I'd like to compute. Do basic row and column operations on either $A$ or $(A - \lambda I)$ (lambda be an eigenvalue) change any of the eigenvalues, -vectors or determinants of the two corresponding matrices?</p>
<p>Any of the following statements may be wrong and I'd appreciate it if you could point out where the errors are.</p>
<p>Consider this special case for the <a href="http://www.wolframalpha.com/input/?i=%7B%7B1,1,0,1%7D;%7B0,2,0,0%7D;%7B-1,1,2,1%7D;%7B-1,1,0,3%7D%7D" rel="nofollow">matrix A:</a></p>
<p>Its rank is $4$. The characteristic polynomial tells me there is an eigenvalue lambda with algebraic multiplicity $4$. In order to determine the geometric multiplicities to the corresponding eigenvalues (which there is just one of) I can determine the rank of $(A - \lambda I) = (A - 2I)$. Said matrix looks like <a href="http://www.wolframalpha.com/input/?i=%7B%7B-1,1,0,1%7D;%7B0,0,0,0%7D;%7B-1,1,0,1%7D;%7B-1,1,0,1%7D%7D" rel="nofollow">this</a></p>
<p>Operating with basic row and column operations on this matrix $(A - 2I)$ I can reduce down to a matrix with just one $1$ and all the other elements will be zero. Thus the rank of this matrix is $1$. In order to get the geometric multiplicity corresponding to this eigenvalue I compute
$$
\rank(A) - \rank(A-2I) = 4 - 1 = 3
$$
So the geometric multiplicity of this eigenvalue is $3$, which means I can expect $3$ eigenvectors.</p>
<p>If so far no errors have been made and no corrections have been given, consider the following:
Are the eigenvectors to this specific problem unique? Clearly I can reduce the matrix $(A - 2I)$ down to a matrix with one 1 at any element I like.</p>
<p>Let's say we picked the 1 as the first element of said matrix; are my eigenvectors just $(0,1,0,0)^T;(0,0,1,0)^T;(0,0,0,1)^T$? (the T stands for transposed)</p>
<p>How do I compute the generalized eigenvectors, which eigenvector do I pick, how do I determine which one to choose and what is it?</p>
<p>Thanks for your time!</p>
| Barry Cipra | 86,747 | <p>Finding an $x$ that violates the given inequality is only one way to disprove the Riemann Hypothesis. Another way would simply be to find a (nontrivial) zero of the zeta function with real part not equal to $1/2$. So far we've "only" computed about ten trillion zeros. The first counterexample could be the very next one.</p>
|
85,470 | <p>We decided to do secret Santa in our office. And this brought up a whole heap of problems that nobody could think of solutions for - bear with me here.. this is an important problem.</p>
<p>We have 4 people in our office - each with a partner that will be at our Christmas meal.</p>
<p>Steve,
Christine,
Mark,
Mary,
Ken,
Ann,
Paul(me),
Vicki</p>
<p>Desired outcome</p>
<blockquote>
<p>Nobody can know who is buying a present for anybody else. But we each
want to know who we are buying our present for before going to the
Christmas party. And we don't want to be buying presents for our partners.
Partners are not in the office.</p>
</blockquote>
<p>Obvious solution is to put all the names in the hat - go around the office and draw two cards.</p>
<p>And yes - sure enough I drew myself and Mark drew his partner. (we swapped)</p>
<p>With that information I could work out that Steve had a 1/3 chance of having Vicki (he didn't have himself or Christine - nor the two cards I had acquired, Ann or Mary) and I knew that Mark was buying my present. Unacceptable result.</p>
<p>Ken asked the question: "What are the chances that we will pick ourselves or our partner?"</p>
<p>So I had a stab at working that out.</p>
<p>First card drawn -> 2/8
Second card drawn -> 12/56</p>
<p>Adding them together makes 26/56, i.e. just under 1/2.</p>
<p>i.e. This method won't ever work... half chances of drawing somebody you know means we'll be drawing all year before we get a solution that works.</p>
<p>My first thought was that we attach two cards to our backs... put on blindfolds and stumble around in the dark grabbing the first cards we came across... However, this is a little impractical and I'm pretty certain we'd end up knowing who grabbed what anyway.</p>
<p>Does anybody have a solution for distributing cards that results in our desired outcome?</p>
<hr>
<p><em><strong>I'd prefer a solution without a third party...</strong></em></p>
| Martin Eden | 182,077 | <p>See this algorithm here: <a href="http://weaving-stories.blogspot.co.uk/2013/08/how-to-do-secret-santa-so-that-no-one.html" rel="nofollow">http://weaving-stories.blogspot.co.uk/2013/08/how-to-do-secret-santa-so-that-no-one.html</a>. It's a little too long to include in a Stack Exchange answer.</p>
<p>Essentially, we fix the topology to be a simple cycle, and then once we have a random order of participants we can also determine who to get a gift for.</p>
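<p>The linked post isn't reproduced here, but the ring idea can be sketched as follows (a hypothetical minimal version of my own, ignoring the secrecy machinery the post describes): shuffle everyone into a cycle, each person gives to the next, and re-draw whenever anyone would be giving to their own partner. The names and partner table are taken from the question.</p>

```python
import random

# partners from the question; nobody should draw themselves or their partner
PARTNERS = {
    "Steve": "Christine", "Christine": "Steve",
    "Mark": "Mary", "Mary": "Mark",
    "Ken": "Ann", "Ann": "Ken",
    "Paul": "Vicki", "Vicki": "Paul",
}

def secret_santa(partners, rng=random):
    people = list(partners)
    while True:
        order = people[:]
        rng.shuffle(order)
        # everyone gives to the next person around the ring,
        # so nobody can draw themselves (ring length > 1)
        pairs = {g: order[(i + 1) % len(order)] for i, g in enumerate(order)}
        if all(r != partners[g] for g, r in pairs.items()):
            return pairs

pairs = secret_santa(PARTNERS)
assert all(g != r and r != PARTNERS[g] for g, r in pairs.items())
```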
|
126,052 | <p>I have no doubt that the following observation is quite well known. Let $\varphi:[0,1]\to [0,1]$ be a continuous map. Assume that the iterates $\varphi^n$ converge pointwise to some continuous map $\varphi_\infty$. Then the convergence is in fact uniform. However, I was unable to locate a reference. Does anybody know where I can find it written? Thanks in advance.</p>
| gerw | 32,507 | <p>This result seems to be a consequence of Dini's theorem (as noted by nonlinearism):</p>
<p>If $\varphi^n$ converges to a continuous function $\varphi_\infty$, we have $\varphi_\infty = 0$. Hence, $\varphi < 1$ on $[0,1]$. By compactness, we have $\varphi < p$ for some $p < 1$. Therefore, the convergence is monotonic and we can apply Dini's theorem.</p>
<p>This should work for all $\varphi : K \to [-1,1]$, where $K$ is a compact metric space (for $\varphi < 0$, one has to adapt the proof in an obvious manner).</p>
|
3,459,532 | <p>I have a pretty straightforward linear programming problem here:</p>
<p><span class="math-container">$$ maximize \hskip 5mm -x_1 + 2x_2 -3x_3 $$</span></p>
<p>subject to</p>
<p><span class="math-container">$$ 5x_1 - 6x_2 - 2x_3 \leq 2 $$</span>
<span class="math-container">$$ 5x_1 - 2x_3 = 6 $$</span>
<span class="math-container">$$ x_1 - 3x_2 + 5x_3 \geq -3 $$</span>
<span class="math-container">$$ 1 \leq x_1 \leq 4 $$</span>
<span class="math-container">$$ x_3 \leq 3 $$</span></p>
<p>Convert to standard form.</p>
<p>What boggles me is how to substitute <span class="math-container">$x_1$</span> since it’s restricted from both sides, and I can’t move forward in the problem until I figure it out...</p>
<p>I’m not asking for the whole standard form, just how to approach this one variable. :)</p>
| Ben Grossmann | 81,360 | <p>Another approach, to add to the existing list. Let's suppose that you insist on calculating eigenvalues by finding <span class="math-container">$|A - \lambda I_d|$</span>. We can do so using the <a href="https://en.wikipedia.org/wiki/Weinstein%E2%80%93Aronszajn_identity" rel="nofollow noreferrer">Weinstein-Aronszajn identity</a> (sometimes called Sylvester's determinant identity). In particular, note that <span class="math-container">$A = I_d - a_1a_1^T - a_2a_2^T = I_d - MM^T$</span>, where
<span class="math-container">$$
M = \pmatrix{a_1 & a_2}.
$$</span>
It follows that for <span class="math-container">$\lambda \neq 1$</span>,
<span class="math-container">$$
|A - \lambda I_d| =
\left|(1 - \lambda)I_d - MM^T
\right|
\\ =
(1 - \lambda)^d \left|
I_d - (1-\lambda)^{-1}MM^T
\right|\\
= (1 - \lambda)^d \left|
I_2 - (1-\lambda)^{-1}M^TM
\right|\\
= (1 - \lambda)^d \left|
I_2 - (1-\lambda)^{-1}I_2
\right|\\
= (1 - \lambda)^{d-2} \left|
(1-\lambda)I_2 - I_2
\right|\\
= (1 - \lambda)^{d-2} \left|
-\lambda I_2
\right| \\
= \lambda^2 (1 - \lambda)^{d-2}.
$$</span>
Because <span class="math-container">$|A - \lambda I_d|$</span> is a polynomial on <span class="math-container">$\lambda$</span>, the same must also hold for <span class="math-container">$\lambda = 1$</span>.</p>
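<p>As a quick numerical sanity check (my own sketch, assuming <span class="math-container">$a_1, a_2$</span> orthonormal, which is what the step <span class="math-container">$M^TM = I_2$</span> uses): <span class="math-container">$A = I_d - a_1a_1^T - a_2a_2^T$</span> should annihilate <span class="math-container">$a_1$</span> and <span class="math-container">$a_2$</span> and fix every vector orthogonal to both, matching the spectrum <span class="math-container">$0, 0, 1, \dots, 1$</span> read off from <span class="math-container">$\lambda^2(1-\lambda)^{d-2}$</span>.</p>

```python
# Check in R^4: A = I - a1 a1^T - a2 a2^T for an orthonormal pair a1, a2
# has eigenvalues 0, 0, 1, 1, as det(A - t*I) = t^2 (1 - t)^(d-2) predicts.
a1 = [1.0, 0.0, 0.0, 0.0]
a2 = [0.0, 0.6, 0.8, 0.0]   # unit vector, orthogonal to a1
v = [0.0, 0.8, -0.6, 0.0]   # orthogonal to both a1 and a2
w = [0.0, 0.0, 0.0, 1.0]    # orthogonal to both a1 and a2

def apply_A(x):
    """Return A x without forming A explicitly."""
    d1 = sum(a * t for a, t in zip(a1, x))
    d2 = sum(a * t for a, t in zip(a2, x))
    return [t - d1 * a - d2 * b for t, a, b in zip(x, a1, a2)]

kernel_images = [apply_A(a1), apply_A(a2)]   # expect the zero vector twice
fixed_images = [apply_A(v), apply_A(w)]      # expect v and w back unchanged
```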
|
592,560 | <p>Let G be an abelian group. Show that, if G is not cyclic, then for all $x\in G$, there is a divisor $d$ of $n = |G|$ which is strictly smaller than n satisfying $x^d=1$. </p>
<p>I'm guessing that this is a consequence of Lagrange's Theorem. We can have that G is a disjoint union of left cosets that all have the same cardinality. So $|H| < |G|$ since G is composed of more than just one left coset. By Lagrange's Theorem, we have that $|H|=d$ and then $d$ divides $n$. However, the "if G is not cyclic" part is bothering me. Does the fact that G is not cyclic put a restriction?</p>
| Rodney Coleman | 73,128 | <p>If $G$ is a finite group and $x$ is an element of $G$, then $o(x)$ divides $|G|$, so there exists $d$ dividing $|G|$ such that $x^d=1$. Since $G$ is not cyclic, no element generates $G$, so $d = o(x)$ is strictly less than $|G|$. (Remark: "abelian" is superfluous in the statement of your problem.)</p>
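<p>A tiny concrete illustration (my own example, not part of the original answer): in the Klein four-group $\mathbb{Z}/2 \times \mathbb{Z}/2$, no element has order $4$, so the group is not cyclic, and every element satisfies $x^2 = 1$ with $d = 2$ a proper divisor of $n = 4$.</p>

```python
from itertools import product

# The Klein four-group: pairs mod 2 under componentwise addition.
G = list(product(range(2), repeat=2))
identity = (0, 0)

def op(x, y):
    return ((x[0] + y[0]) % 2, (x[1] + y[1]) % 2)

def order(x):
    """Smallest k >= 1 with x composed with itself k times = identity."""
    acc, k = x, 1
    while acc != identity:
        acc, k = op(acc, x), k + 1
    return k

orders = [order(x) for x in G]
# Orders are 1, 2, 2, 2: no element of order 4, hence not cyclic,
# and x^2 = 1 for every x, with 2 a proper divisor of |G| = 4.
```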
|
2,837,281 | <p>I know the splitting field is generated by $2^{1/4}$ and $i$, I could show $\mathbb{Q} [ 2^{1/4}, i] = \mathbb{Q}[i+2^{1/4}]$ using some algebra. </p>
<p>For the non trivial direction $\mathbb{Q} [ 2^{1/4}, i] \subset \mathbb{Q}[i+2^{1/4}]$. Let us call $\alpha = i+2^{1/4}$, then we know
$$(\alpha-i)^4 - 2 = 0$$
expand the 4th power
$$\alpha^4 - 4\alpha^3 i - 6 \alpha^2 + 4\alpha i + 1 - 2 = 0$$
then we can solve for $i$ in terms of $\alpha$, so $i\in \mathbb{Q}[\alpha]=:\mathbb{Q}[i+2^{1/4}]$. Then $2^{1/4} \in \mathbb{Q}[i+2^{1/4}]$ as well. </p>
<p><strong>However</strong> the hint is to show the orbit of $i+2^{1/4}$ has more than 5 elements under the action of the Galois group $Gal(\mathbb Q[2^{1/4}, i]/\mathbb{Q})$. I calculated the Galois group, which is $D_8$, and the orbit has more than 5 elements, but how would I conclude from this?</p>
| Eric Wofsey | 86,856 | <p>Let $K=\mathbb{Q}[2^{1/4},i]$, $L=\mathbb{Q}[i+2^{1/4}]$, and $G=Gal(\mathbb{Q}[2^{1/4},i]/\mathbb{Q})\cong D_8$. By Galois theory, in order to show that $L=K$, it suffices to show that the subgroup $H\subseteq G$ of automorphisms that fix $L$ is trivial. Since $i+2^{1/4}$ generates $L$, we can also describe $H$ as the subgroup of automorphisms that fix $i+2^{1/4}$. So the orbit of $i+2^{1/4}$ is in bijection with the coset space $G/H$. </p>
<p>Now if $H$ is nontrivial, it has at least $2$ elements, so $G/H$ has at most $8/2=4$ elements. So since the orbit of $i+2^{1/4}$ has at least $5$ different elements, $H$ must be trivial, and so $L=K$.</p>
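<p>For what it's worth, the orbit can also be enumerated numerically (an illustrative sketch: the eight automorphisms act by $2^{1/4}\mapsto i^k 2^{1/4}$ and $i\mapsto\pm i$; in fact all eight images of $i+2^{1/4}$ turn out to be distinct):</p>

```python
# Enumerate the D_8-orbit of i + 2^(1/4): automorphisms send
# 2^(1/4) to i^k * 2^(1/4) (k = 0..3) and i to +/- i.
r = 2 ** 0.25

orbit = set()
for k in range(4):
    for sign in (1, -1):
        image = sign * 1j + (1j ** k) * r
        # Round to kill floating-point noise before deduplicating.
        orbit.add((round(image.real, 9), round(image.imag, 9)))

orbit_size = len(orbit)   # 8 > 4, so the stabilizer H is trivial
```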
|
3,756,436 | <p>Recently I was doing a physics problem and I ended up with this quadratic in the middle of the steps:</p>
<p><span class="math-container">$$ 0= X \tan \theta - \frac{g}{2} \frac{ X^2 \sec^2 \theta }{ (110)^2 } - 105$$</span></p>
<p>I want to find the <span class="math-container">$0 < \theta < \frac{\pi}2$</span> that yields the largest <span class="math-container">$X$</span> solving this equation, i.e. optimize over the implicit curve to maximize <span class="math-container">$X$</span>.</p>
<p>I tried solving this by implicit differentiation (assuming <span class="math-container">$X$</span> can be written as a function of <span class="math-container">$\theta$</span>) with respect to <span class="math-container">$\theta$</span> and then by setting <span class="math-container">$\frac{dX}{d\theta} = 0$</span>:</p>
<p><span class="math-container">\begin{align}
0 &= X \sec^2 \theta + \frac{ d X}{ d \theta} \tan \theta - \frac{g}{2} \frac{ 2 \left( X \sec \theta \right) \left[ \frac{dX}{d \theta} \sec \theta + X \sec \theta \tan \theta \right]}{ (110)^2 } - 105 \\
0 &= X \sec^2 \theta - \frac{g}{2} \frac{ 2( X \sec \theta) \left[ X \sec \theta \tan \theta\right] }{ (110)^2 } \\
0 &= 1 - \frac{ Xg \tan \theta}{(110)^2} \\
\frac{ (110)^2}{ g \tan \theta} &= X
\end{align}</span></p>
<p>This is still not an easy equation to solve. However, one of my friends told me we could just take the discriminant of the quadratic in terms of <span class="math-container">$X$</span>, and solve for <span class="math-container">$\theta$</span> such that <span class="math-container">$D=0$</span>.</p>
<p>Taking the discriminant and equating it to 0, I get</p>
<p><span class="math-container">$$ \sin \theta = \frac{ \sqrt{2 \cdot 10 \cdot 105} }{110}$$</span></p>
<p>and the corresponding angle is <span class="math-container">$\theta \approx 24.45^\circ$</span>.</p>
<p>I tried the discriminant method, but it gave me a different answer from the implicit differentiation method. I ended up with two solutions with the same maximum value of <span class="math-container">$X$</span> but different angles: <span class="math-container">$\theta = 24.45^\circ$</span> and <span class="math-container">$X=1123.54$</span> (from the discriminant method), and
<span class="math-container">$\theta = 47^\circ$</span> and <span class="math-container">$X=1123.54$</span> (from implicit differentiation).</p>
<p>I later realized the original quadratic can only have solutions if <span class="math-container">$D(\theta) \geq 0$</span>, where <span class="math-container">$D$</span> is the discriminant. Using the discriminant, I can find a lower bound on the angle. Once I have the lower bound, if I can prove that <span class="math-container">$X$</span> decreases monotonically as a function of <span class="math-container">$\theta$</span>, then I can use the lower bound for further calculations of <span class="math-container">$\theta$</span>.</p>
<p>So then I used the implicit function theorem and got</p>
<p><span class="math-container">$$ \frac{dX}{ d \theta} =- \frac{X \sec^2 \theta -\frac{g X^2}{2 (110)^2} 2 \sec \theta ( \sec \theta \tan \theta) } {\tan \theta - \frac{g \sec^2 \theta}{2 (110)^2} 2X }$$</span></p>
<p>Now the problem here is that I can't prove this function is monotonic in terms of <span class="math-container">$\theta$</span>, as the implicit derivative is a function of both <span class="math-container">$\theta$</span> and <span class="math-container">$X$</span>.</p>
| G. Smith | 573,507 | <p>You were on the right track; you just didn't go far enough. Your first equation relating <span class="math-container">$X$</span> and <span class="math-container">$\theta$</span> is</p>
<p><span class="math-container">$$0=X\tan\theta-\frac{g}{2v^2}X^2\sec^2\theta-h\tag1$$</span></p>
<p>where I've written <span class="math-container">$h$</span> for the height of the cliff and <span class="math-container">$v$</span> for the initial velocity of the projectile in your <a href="https://physics.stackexchange.com/questions/565783/finding-max-horizontal-distance-to-be-able-to-strike-target-in-projectile-motion">original question</a> posted to Physics SE.</p>
<p>(Why did you put in numbers for <span class="math-container">$h$</span> and <span class="math-container">$v$</span> but not for <span class="math-container">$g$</span>? Why put in numbers at all, when you can get a nice general formula for any values of these parameters? Why were your values inconsistent between your physics post and your post here?)</p>
<p>You wanted the point on the <span class="math-container">$X(\theta)$</span> curve that maximizes <span class="math-container">$X$</span>. As you realized, this is the point where <span class="math-container">$dX/d\theta=0$</span>.</p>
<p>You <em>could</em> solve (1), which is quadratic in <span class="math-container">$X$</span>, for <span class="math-container">$X$</span> in terms of <span class="math-container">$\theta$</span> and then differentiate, set the derivative to zero, solve for <span class="math-container">$\theta$</span>, and put that back in to find the maximum <span class="math-container">$X$</span>. This works but involves more algebra than the better approach that you took involving implicit differentiation. You differentiated (1) and set <span class="math-container">$dX/d\theta$</span> to <span class="math-container">$0$</span>, finding</p>
<p><span class="math-container">$$\frac{v^2}{g\tan\theta}=X\tag2.$$</span></p>
<p>The maximum point on the <span class="math-container">$X(\theta)$</span> curve satisfies <em>both</em> (1) and (2): (1) because it is a point on the curve, and (2) because it is the <em>maximum</em> point. So you need to solve these two <em>simultaneous</em> equations.</p>
<p>This is straightforward: First use (2) to eliminate <span class="math-container">$X$</span> from (1), giving an equation involving only <span class="math-container">$\theta$</span>:</p>
<p><span class="math-container">$$0=\left(\frac{v^2}{g\tan\theta}\right)\tan\theta-\frac{g}{2v^2}\left(\frac{v^2}{g\tan\theta}\right)^2\sec^2\theta-h$$</span></p>
<p>which simplifies to</p>
<p><span class="math-container">$$0=\frac{v^2}{g}-\frac{v^2}{2g}\frac{1}{\sin^2\theta}-h.$$</span></p>
<p>It's easy to solve this for the value of <span class="math-container">$\theta$</span> at the maximum point:</p>
<p><span class="math-container">$$\sin\theta=\frac{1}{\sqrt{2-2q}}$$</span></p>
<p>where <span class="math-container">$q\equiv gh/v^2$</span>.</p>
<p>From this one finds that</p>
<p><span class="math-container">$$\cos\theta=\sqrt{1-\sin^2\theta}=\sqrt{\frac{1-2q}{2-2q}}$$</span></p>
<p>and</p>
<p><span class="math-container">$$\tan\theta=\frac{\sin\theta}{\cos\theta}=\frac{1}{\sqrt{1-2q}}.$$</span></p>
<p>Substituting this into (2) gives the value of <span class="math-container">$X$</span> at the maximum point,</p>
<p><span class="math-container">$$X=\frac{v^2}{g}\sqrt{1-\frac{2gh}{v^2}}.$$</span></p>
<p>Putting in the values <span class="math-container">$h=105\text{ m}$</span>, <span class="math-container">$v=110\text{ m/s}$</span>, and <span class="math-container">$g=9.81\text{ m/s}^2$</span> gives <span class="math-container">$X=1123.54$</span> m.</p>
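<p>The closed form can be cross-checked numerically (a quick sketch of my own) by maximizing the larger root of the quadratic (1) over a grid of angles:</p>

```python
import math

g, v, h = 9.81, 110.0, 105.0

# Closed form for the maximum horizontal distance.
X_closed = (v * v / g) * math.sqrt(1 - 2 * g * h / (v * v))

# Brute force: for each theta, (1) is the quadratic
#   -(g / (2 v^2 cos^2 theta)) X^2 + tan(theta) X - h = 0;
# take its larger root where real and maximize over theta.
best = 0.0
for k in range(1, 9000):
    th = k * (math.pi / 2) / 9000
    a = -g / (2 * v * v * math.cos(th) ** 2)
    b = math.tan(th)
    disc = b * b - 4 * a * (-h)
    if disc >= 0:
        best = max(best, (-b - math.sqrt(disc)) / (2 * a))  # larger root (a < 0)
```

<p>Both approaches agree to within the grid resolution, giving <span class="math-container">$X \approx 1123.54$</span> m.</p>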
<p>Note that simply completing your solution in this straightforward way does not require the clever substitution that Anonymous used.</p>
<p>Setting the discriminant of (1) to zero does <em>not</em> give the position of the maximum point on the <span class="math-container">$X(\theta)$</span> curve. Instead, as Anatoly's graphs show, it gives the point where the two solutions for <span class="math-container">$X$</span> coincide. It is also clear from those graphs that the upper solution with the maximum is <em>not</em> monotonic.</p>
|