qid | question | author | author_id | answer |
|---|---|---|---|---|
1,577,090 | <p>How can I determine if a function is one-to-one? I know that any odd function is 1-1 and any even function is NOT 1-1, but what about functions that are neither of those, like $x^3+5$ or $x^3+x^2+3$? How can I determine whether such a function is one-to-one? </p>
| SchrodingersCat | 278,967 | <p>There are $3$ distinct ways that I can remember as of now:</p>
<ol>
<li>If the function is $y=f(x)$, then check the sign of $f'(x)$. If $f'(x)\ge 0$ for all $x \in \mathbb{R}$, or $f'(x) \le 0$ for all $x \in \mathbb{R}$, with equality only at isolated points, then $f(x)$ is one-one.</li>
<li>You can draw the graph of the function and perform the <a href="https://en.wikipedia.org/wiki/Horizontal_line_test" rel="nofollow">horizontal line test</a>.</li>
<li>Assume $f(x_1)=f(x_2)$ for $x_1,x_2 \in $ domain and show that this forces $x_1=x_2$.</li>
</ol>
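<p>To illustrate method 1 on the asker's example $x^3+5$: here $f'(x)=3x^2\ge 0$, vanishing only at the isolated point $x=0$, so $f$ is strictly increasing and hence one-one. Below is a quick numeric sanity check in Python (evidence on a sample grid, not a proof):</p>

```python
# f(x) = x^3 + 5 sampled on a grid; strictly increasing values
# mean no horizontal line can meet the graph twice on this grid
f = lambda x: x**3 + 5
xs = [i / 100 for i in range(-500, 501)]
ys = [f(x) for x in xs]
assert all(y1 < y2 for y1, y2 in zip(ys, ys[1:]))  # strictly increasing
```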
|
2,649,907 | <p>As the title states, I have to determine whether or not the polynomials of the form $a_{0}+a_{1}x$ form a subspace of $P_{3}$; here $a_{0}$ and $a_{1}$ are real numbers.</p>
<p>So since the polynomial is of the third degree, the entire polynomial would look like:</p>
<p>$a_{0}+a_{1}x+a_{2}x^2+a_{3}x^3$ </p>
<p>and the crux (I assume) is that I can pick any numbers for $a_{0}$ and $a_{1}$ to prove that this is indeed a subspace of $P_{3}$.</p>
<p>Since subspaces of polynomial spaces are described somewhat ambiguously in my book, I would like to know the following about testing the axiom "closure under addition" to verify that this set of polynomials is a subspace of $P_{3}$.</p>
<p>adding two polynomials: </p>
<p>$(a_{0}+a_{1}x+a_{2}x^2+a_{3}x^3) + (b_{0}+b_{1}x+b_{2}x^2+b_{3}x^3$) </p>
<p>I could substitute:</p>
<p>$a_{0} =1 $ $b_{0} =1$ and $a_{1} = 2$ $ b_{1} = -2$</p>
<p>Meaning that I would get:</p>
<p>\begin{align}(1+2x+a_{2}x^2+a_{3}x^3) + (1-2x+b_{2}x^2+b_{3}x^3)
=(1+1) +(a_{2}+b_{2})x^2+(a_{3}+b_{3})x^3 \end{align}</p>
<p>Would this still be considered to be in the subspace of $P_{3}$ even though the polynomial hasn't retained its full length? It has, however, kept the same degree.</p>
<p>Help would be greatly appreciated!</p>
| Matt Samuel | 187,867 | <p>To test closure under addition, you want to already start with polynomials in the alleged subspace. For example,
$$(a_0+a_1x)+(b_0+b_1x)=(a_0+b_0)+(a_1+b_1)x$$
This is still in the same form (meaning it has no higher degree terms), so the subset is indeed closed under addition. </p>
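<p>Since the coefficients here are symbolic, the closure computation above can be replayed verbatim in sympy (a small sketch; the symbol names are arbitrary):</p>

```python
import sympy as sym

x, a0, a1, b0, b1 = sym.symbols('x a0 a1 b0 b1')
s = sym.expand((a0 + a1*x) + (b0 + b1*x))
# the sum collects to (a0+b0) + (a1+b1)x, so its degree in x is still 1
assert sym.Poly(s, x).degree() <= 1
assert sym.expand(s - (a0 + b0 + (a1 + b1)*x)) == 0
```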
|
1,310,460 | <p>Given a finite connected graph $G$, I can make a finite number of cuts on the edges to obtain a tree. What is the most efficient algorithm to perform this procedure? </p>
<p>Thanks,
Vladimir</p>
| TravisJ | 212,738 | <p>Hint: </p>
<p>2) is called sequential compactness (and there is a well-known theorem relating sequential compactness to compactness).</p>
<p>3) $\frac{1}{2}\sin(x)+\frac{1}{2}$ maps $\mathbb{R}$ to $[0, 1]$ and is continuous.</p>
<p><strong>Edit</strong></p>
<p>This is a hint assuming the question said: there is a continuous 1-1 map from $A$ onto $(0,1)$...</p>
<p>4) $\arctan(x)$ is a continuous function that maps $\mathbb{R}$ to $(-\frac{\pi}{2}, \frac{\pi}{2})$.</p>
<p><strong>Better Hint</strong></p>
<p>This hint was provided by @Struggler (see comments). If $A=\mathbb{N}$ then $A$ is certainly not compact and there is no continuous 1-1 function from $A$ onto $(0, 1)$. (Note: if we remove the onto condition then $f(x)=\frac{1}{x+1}$ is a 1-1 map from $\mathbb{N}$ into $(0, 1)$.)</p>
|
2,019,049 | <p><a href="https://i.stack.imgur.com/SULGd.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SULGd.png" alt="enter image description here"></a></p>
<blockquote>
<p>Let $\theta \in \mathbb{R}$. Prove: if $\prod\limits_{k=1}^{20} (\cos(k\theta) + i \sin(k\theta)) = i$, then there exists an integer $k$ such that $\theta = \left(\frac{\pi+2\pi k}{420}\right)$. You may use the fact $\sum\limits_{i=1}^n i = \frac{n(n+1)}{2}$ for all $n \in \mathbb{N}$.</p>
</blockquote>
<p>I'm stuck on how to deal with $k=20$ and the number $420$.</p>
<p>Please show me the complete answer!</p>
| Chinny84 | 92,628 | <p>$$
\cos kx + i \sin k x = \mathrm{e}^{ikx}
$$
so we have
$$
\prod_{k=1}^{20}\mathrm{e}^{ikx} = \mathrm{e}^{i\sum_{n=1}^{20}nx} = i
$$
so we have
$$
\cos (x\sum_{n=1}^{20}n) + i\sin(x\sum_{n=1}^{20}n) = i
$$
or we have
$$\sin(x\sum_{n=1}^{20}n) = 1 \\ \cos (x\sum_{n=1}^{20}n) = 0$$
solve for $x$ using $\sum_{i=1}^n i = \frac{n(n+1)}{2}$</p>
<p>To solve we have
$$
\cos\left(\frac{(2k+1)}{2}\pi\right) = 0\;\;\forall \;k \geq 0
$$comparing arguments
$$
\frac{(2k+1)}{2}\pi = x \sum_{n=1}^{20}n = x\frac{20\cdot 21}{2}
$$
so we have
$$
(2k+1)\pi = 20\cdot 21 x = 420 x
$$
re-arranging
$$
x = \frac{\pi + 2\pi k}{420}\;\;\forall \; k \geq 0
$$</p>
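<p>As a numeric sanity check (a quick sketch; by De Moivre each factor $\cos k\theta + i\sin k\theta$ is $e^{ik\theta}$), the $k=0$ value $\theta=\pi/420$ really does make the product equal $i$:</p>

```python
import cmath
import math

theta = math.pi / 420                  # the k = 0 solution
prod = 1
for k in range(1, 21):
    prod *= cmath.exp(1j * k * theta)  # cos(k*theta) + i sin(k*theta)
# the exponents add up to 210*theta = pi/2, so the product is e^{i pi/2} = i
assert abs(prod - 1j) < 1e-12
```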
|
2,518,523 | <p>So I've stumbled upon this problem:</p>
<blockquote>
<p>Compare the numbers:</p>
<p><span class="math-container">$$9^{8^{8^9}} \text{ and }\,8^{9^{9^8}}$$</span></p>
</blockquote>
<p>I got this in a test and I had no idea what the answer to the problem is... Can someone give me the answer and how you can find it? It's obviously not possible to calculate such huge numbers, so there's probably another way to find it which I haven't figured out.</p>
| prog_SAHIL | 307,383 | <p>My conclusion is that $$ 9^{8^{8^9}} > 8^{9^{9^8}} $$</p>
<p>To explain this,</p>
<p>We know that $$ 9\ln(8) > 8\ln(9) $$</p>
<p>therefore, $$ 8^9 > 9^8 $$
$$ 8^9 \approx 3(9^8) $$</p>
<p>now $\ln(9)=2.19$ </p>
<p>$\ln(8)=2.07$ </p>
<p>i.e. $ \frac{\ln(9)}{\ln(8)}=1.05 $,</p>
<p>therefore $$ 8^9\ln(8) > 9^8\ln(9)$$</p>
<p>$$ 8^{8^9} > 9^{9^8} $$</p>
<p>now, $$ 8^{8^9}\ln(9) > 9^{9^8}\ln(8) $$ </p>
<p>concluding, </p>
<p>$$ 9^{8^{8^9}} > 8^{9^{9^8}} $$</p>
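<p>Since the towers themselves are far too large to compute, one can instead compare iterated logarithms, which is a faithful comparison because $\log\log$ is increasing (a quick numeric sketch):</p>

```python
import math

# log(9^(8^(8^9))) = 8^(8^9) * ln 9, so taking log once more:
# loglog of 9^(8^(8^9)) = 8^9 * ln 8 + ln(ln 9)
ll_left  = 8**9 * math.log(8) + math.log(math.log(9))
ll_right = 9**8 * math.log(9) + math.log(math.log(8))
assert ll_left > ll_right   # hence 9^(8^(8^9)) > 8^(9^(9^8))
```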
|
4,346,765 | <p>While I was solving an integral using Feynman Integration, I came across the following differential equation:</p>
<p><span class="math-container">$$y’’’’-y’’+y=0$$</span></p>
<p>I tried substituting <span class="math-container">$y$</span> with an exponential function which failed. Can someone else show me how to solve it?</p>
| Jyrki Lahtonen | 11,619 | <p>Supplementing reuns's excellent answer with the following.</p>
<p><strong>Result:</strong> If <span class="math-container">$n\equiv2\pmod4$</span> and <span class="math-container">$-1$</span> is not a square in the field <span class="math-container">$K=\Bbb{F}_q$</span>, then the space <span class="math-container">$K^n$</span>, equipped with the standard bilinear form, does not have an <span class="math-container">$n/2$</span>-dimensional totally isotropic subspace.</p>
<p><strong>Proof.</strong> Write <span class="math-container">$k=n/2$</span>, so <span class="math-container">$k$</span> is odd. Assume contrariwise that <span class="math-container">$V$</span> is a totally isotropic subspace. Let <span class="math-container">$g_1,g_2,\ldots,g_k$</span> be a basis of <span class="math-container">$V$</span>. Let <span class="math-container">$A$</span> be the <span class="math-container">$k\times 2k$</span>
matrix with rows <span class="math-container">$g_1,\ldots,g_k$</span>. Without loss of generality we can assume that <span class="math-container">$A$</span>
is in a reduced row echelon form. By shuffling the columns, if necessary, we can further assume that <span class="math-container">$A$</span> has the block form
<span class="math-container">$$
A=\left(\,I_k\,\vert\, B\,\right)
$$</span>
where <span class="math-container">$B$</span> is some <span class="math-container">$k\times k$</span> matrix.</p>
<p>The assumption of total isotropy is equivalent to the requirement <span class="math-container">$AA^T=0_{k\times k}$</span>. By expanding the matrix product in block form we see that this is equivalent to the requirement
<span class="math-container">$$
BB^T=-I_k. \qquad(*)
$$</span>
Let's look at the determinants. On the left hand side of <span class="math-container">$(*)$</span> we have <span class="math-container">$(\det B)^2$</span>. On the right hand side we have <span class="math-container">$(-1)^k$</span>. As <span class="math-container">$-1$</span> was not assumed to be a square, and <span class="math-container">$k$</span> is odd, this is a contradiction.</p>
|
1,923,761 | <p>I have a problem in which I need to discover $f$ knowing that $$\left\{\begin{matrix}f(1,y)-f(0,y)=y\\f(x,1)-f(x,0)=x\end{matrix}\right.$$ Any hints to solve it?</p>
| Sarvesh Ravichandran Iyer | 316,409 | <p>We can't determine $f$ uniquely from the given information, but here is one solution:</p>
<p>If we let $f(x,y) = xy$, then certainly $f(x,1)-f(x,0) = x-0=x$ and $f(1,y)-f(0,y) = y - 0 = y$, so this is one solution. </p>
|
1,442,147 | <p>Could I quickly spot the inverse of a permutation from its 2-cycle composition?</p>
<p>For example, given that $\rho=(1 \ 9)(1 \ 4)(1 \ 5)(1 \ 8)(2 \ 10)(2 \ 3)(2 \ 6)(2 \ 7)$, how to find its inverse from this 2-cycle decomposition?</p>
| quid | 85,306 | <p>Yes, it is easy to write down the inverse of a permutation in this form.
It is also not much harder to do this for any permutation given in cycle form. </p>
<p>There are two main points to note: </p>
<ol>
<li><p>If you have $c_1 \dots c_n $ then the inverse is given by $c_n^{-1} \dots c_1^{-1}$. Where $c_i^{-1}$ is the inverse of $c_i$. This is true in any group, not only for permutations. </p></li>
<li><p>A $2$-cycle is its own inverse. So if each $c_i$ above is a two cycle then $c_n^{-1} \dots c_1^{-1}= c_n \dots c_1$. </p></li>
</ol>
<p>If you want something similar without the condition that all cycles are $2$-cycles, it suffices to note that the inverse of a cycle can be obtained by reversing the order of the elements in the cycle: if you have a cycle $c = (x_1 \ x_2 \dots x_k)$ then its inverse is $(x_k \dots x_2 \ x_1)$.</p>
<p>Note this is not a contradiction to the assertion that a $2$-cycle is its own inverse as $(x \ y)$ and $(y \ x)$ are the same permutation. </p>
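<p>Here is a small computational sketch (the helper names are my own) that builds $\rho$ from the given transpositions and confirms that multiplying them in reverse order yields the inverse:</p>

```python
from functools import reduce

N = 10  # rho permutes {1, ..., 10}

def transposition(a, b):
    """The 2-cycle (a b) as a dict on {1..N}."""
    p = {i: i for i in range(1, N + 1)}
    p[a], p[b] = b, a
    return p

def compose(p, q):
    """(p o q)(x) = p(q(x))."""
    return {x: p[q[x]] for x in q}

cycles = [(1, 9), (1, 4), (1, 5), (1, 8), (2, 10), (2, 3), (2, 6), (2, 7)]
rho = reduce(compose, [transposition(a, b) for a, b in cycles])
# each 2-cycle is its own inverse, so reversing the factors inverts rho
rho_inv = reduce(compose, [transposition(a, b) for a, b in reversed(cycles)])

identity = {i: i for i in range(1, N + 1)}
assert compose(rho, rho_inv) == identity
assert compose(rho_inv, rho) == identity
```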
|
938,429 | <p>The question is to evaluate $\displaystyle7\int\frac{dx}{x^2+x\sqrt{x}}$. My solution is attached.</p>
<p><img src="https://i.stack.imgur.com/79dEd.png" alt="This is the question."></p>
<p><img src="https://i.stack.imgur.com/GuZ7z.jpg" alt="This is the solution that I tried."></p>
<p>The problem with my solution is that if I use partial fractions, I go around in a loop, which reduces the whole equation to $0 = 0$. I want to know how to approach this question. </p>
| Mosk | 175,514 | <p>Try the substitution: $\sqrt{x}=t$</p>
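<p>To spell out why the substitution works: with $x=t^2$ (take $t>0$) we get $dx=2t\,dt$, and the integrand turns into the rational function $\frac{14}{t^2(t+1)}$, which yields to partial fractions without any looping. A small sympy check of that algebra:</p>

```python
import sympy as sym

x = sym.symbols('x', positive=True)
t = sym.symbols('t', positive=True)

integrand = 7 / (x**2 + x*sym.sqrt(x))
# substitute x = t^2 and include the Jacobian dx = 2t dt
transformed = integrand.subs(x, t**2) * 2*t
assert sym.cancel(transformed - 14/(t**2*(t + 1))) == 0
```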
|
43,650 | <p>Consider the following problem:</p>
<blockquote>
<p>Let $f:{\mathbb R}^3 \to{\mathbb R}$ be $$f(x,y,z)=x+4z$$ where $$x^2+y^2+z^2\leq 2.$$ Find the minimum of $f$. </p>
</blockquote>
<p>This is similar to the question <a href="https://math.stackexchange.com/q/41385/9464">here</a>. However, since this is not an analytic function with complex variable, one may not be able to use the "Maximum modulus principle". </p>
<p>What I think is that one may rewrite the inequality constraint $x^2+y^2+z^2\leq 2$ as
$$x^2+y^2+z^2=2-\delta\qquad \delta\in[0,2]$$
then one can use the "Lagrange Multiplier Method" with the parameter $\delta$. Or one can do it on the xOz plane with the geometric meaning of $C$ in $C=x+4z$.</p>
<p>Is there any other way better than the idea above to solve this problem?</p>
| Jack Schmidt | 583 | <p>This answer is just meant to compare the solutions obtained by Lagrange multipliers versus elimination. Notice in the first that <em>x</em> and <em>z</em> are treated equally until we needed to solve { <em>u</em> = 2, <em>x</em> = 4<i>z</i> }, but in the second <em>x</em> was heavily favored the entire time. In the first, we only dealt with polynomials, but in the second we divided by square roots of 2−<em>xx</em>, so had to deal with much weirder formulas. Generally, the elimination method will rely on simpler rules (one-variable minimization is "easy"), while the Lagrange method will rely on simpler formulas, but require slightly more complicated ideas (normals to hyper-surfaces and such). If you can get your head around why Lagrange multipliers work, then they should be quite a bit simpler to actually compute.</p>
<h2>Lagrange multiplier</h2>
<p>Minimize <em>C</em> = <em>x</em> + 4<i>z</i> subject to <em>u</em> = <em>xx</em> + <em>yy</em> + <em>zz</em> ≤ 2. If the minimum occurs when <em>u</em> < 2, then the constraint had no effect and can be ignored. So we minimize <em>C</em> over all <em>x</em>, <em>y</em>, <em>z</em> by finding critical points. ∇<em>C</em> = (1,0,4) is never equal to 0, is always defined and continuous, so <em>C</em> has no critical points within the feasible region (or outside it!). Hence the minimum must occur on the boundary, when <em>u</em> = 2. This is then directly a Lagrange multiplier problem: We want ∇<em>C</em> to be a multiple of ∇<em>u</em> = (2<i>x</i>, 2<i>y</i>, 2<i>z</i>). So we set (1,0,4) = λ(2<i>x</i>, 2<i>y</i>, 2<i>z</i>), and read off that <em>y</em> = 0, and <em>z</em> = 4<i>x</i>. One must still have <em>u</em> = 2, so 17<i>xx</i> = 2, and the solution is (<em>x</em>, <em>y</em>, <em>z</em>) = ±(√(2/17), 0, 4√(2/17)) ≈ (0.34, 0, 1.37) with minimum <em>C</em> = −17√(2/17) ≈ −5.83 and maximum <em>C</em> = +17√(2/17) ≈ +5.83.</p>
<h2>Elimination of variables</h2>
<p>Minimize <em>C</em> = <em>x</em> + 4<i>z</i> subject to <em>u</em> = <em>xx</em> + <em>yy</em> + <em>zz</em> ≤ 2. Again the interior is not relevant, and neither is <em>y</em>, so we can work the smaller problem: Minimize <em>C</em> = <em>x</em> + 4<i>z</i> subject to <em>xx</em> + <em>zz</em> = 2, that is, minimize <em>C</em>(<i>x</i>) = <em>x</em> ± 4√(2−<em>xx</em>). Take derivatives to get <em>C</em>′(<i>x</i>) = 1 ± 4<i>x</i>/√(2−<em>xx</em>), which is 0, precisely when 4<i>x</i> = ±√(2−<em>xx</em>), that is, when 16<i>xx</i> = 2 − <em>xx</em>, that is, when <em>x</em> = ±√(2/17) ≈ 0.34 with minimum <em>C</em> = −17√(2/17) ≈ −5.83 and maximum C = +17√(2/17) ≈ +5.83.</p>
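<p>A quick numeric sanity check of the value found above (a sketch: note $17\sqrt{2/17}=\sqrt{34}\approx 5.83$; random feasible points should never beat the claimed minimum):</p>

```python
import numpy as np

rng = np.random.default_rng(0)

xstar = -np.sqrt(2 / 17) * np.array([1.0, 0.0, 4.0])   # claimed minimizer
assert abs(xstar @ xstar - 2) < 1e-12                  # lies on the sphere u = 2
cmin = xstar[0] + 4 * xstar[2]
assert abs(cmin + np.sqrt(34)) < 1e-12                 # C = -17*sqrt(2/17) = -sqrt(34)

# sample random points in the ball x^2 + y^2 + z^2 <= 2
p = rng.normal(size=(100000, 3))
radii = np.sqrt(2) * rng.random((100000, 1)) ** (1 / 3)
p = p / np.linalg.norm(p, axis=1, keepdims=True) * radii
assert (p[:, 0] + 4 * p[:, 2]).min() >= cmin - 1e-9
```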
|
762,483 | <p>I understand that exponents don't distribute over addition and have seen plenty of examples i.e. $$ (x + y)^2\neq x^2 + y^2 $$ but I'm wondering why that is. Multiplication distributes over addition e.g. $3(2+3) = 3(2) + 3(3)$ so if an exponent is just repeated multiplication why shouldn't the same be true for exponents?</p>
<p>i.e. why does $ (x + y)^2\neq x^2 + y^2 $</p>
| ajotatxe | 132,456 | <p>Perhaps this image answers your question:</p>
<p><img src="https://i.stack.imgur.com/4C132.gif" alt="enter image description here"></p>
|
1,040,846 | <p>I want to prove, that $a_n$ is a null sequence if $$\lim_{n \to \infty}|\frac{a_{n+1}}{a_n}|= c < 1$$</p>
<p>That means that $\forall \epsilon > 0\ \exists N \in \mathbb{N}\ \forall n \ge N: \left|\frac{a_{n+1}}{a_n} - c\right| < \epsilon$</p>
<p>How can I get rid of the $a_{n+1}$ and the $c$, to show $\forall \epsilon > 0\ \exists N \in \mathbb{N}\ \forall n \ge N: |a_n| < \epsilon$</p>
<p>?</p>
| Felice Iandoli | 87,201 | <p>Perhaps you mean that $a_n$ vanishes at $\infty$ (i.e. is a null sequence). In any case, if $\lim\left|\frac{a_{n+1}}{a_n}\right|=c<1$, then by the ratio test $\sum_{n=1}^{\infty}a_n$ converges (absolutely). Since the terms of a convergent series must tend to $0$, the sequence $a_n$ is necessarily a null sequence.</p>
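<p>A concrete instance may help (a quick numeric sketch, with $a_n=n/2^n$, whose ratio tends to $c=1/2<1$):</p>

```python
a = [n / 2**n for n in range(1, 60)]
ratios = [a[i + 1] / a[i] for i in range(len(a) - 1)]

assert abs(ratios[-1] - 0.5) < 0.01  # ratio test limit c = 1/2 < 1
assert a[-1] < 1e-12                 # and indeed a_n -> 0
```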
|
1,907,758 | <p>In <a href="https://arxiv.org/pdf/1411.5057.pdf" rel="nofollow">this paper</a> (section 2, page 2), the $\ell_1$ norm is replaced with a reweighted $\ell_2$ norm in an optimization problem. I don't understand how $\lVert x\rVert_1$ is replaced with $x^TWx$ and how the solution has changed to its weighted version. </p>
<p>$$\min_x \lVert x\rVert_1 \quad \text{subject to} \quad Ax=b $$
$$\Downarrow$$
$$\min_x x^TWx \quad \text{subject to} \quad Ax=b$$
$$\Downarrow$$
$$x^{k+1}=(W^k)^{-1}A^T(A(W^k)^{-1}A^T)^{-1}b$$</p>
| Robert Israel | 8,508 | <p>It appears that they are taking $W$ to be a diagonal matrix with diagonal entries $W_{ii} = |x_i|^{-1}$ (more precisely, they're taking a sequential approach where the matrix $W$ in the $k$'th iteration has diagonal entries
$|x^{k-1}_i|^{-1}$, where $x^{k-1}$ is the result of the $k-1$'th iteration).
Assuming everything converges nicely to a solution with all nonzero entries, this should work nicely. I'm not sure what would happen if the optimal solution has some entries of $0$.</p>
<p>EDIT: Going from step 2 to step 3 is just the standard "normal equations" formula for solving a weighted linear least-squares problem in the case where $A$ has full row rank. Let's try a toy example: $$A = \pmatrix{1 & 8 & 12},\ b = 4 $$
for which it's not hard to see the optimal solution of the $\ell_1$ problem is
$$ x = \pmatrix{0\cr 0\cr 1/3\cr}$$
Start with $W = I$. The optimal solution to the least-squares problem</p>
<p>minimize $x^T W x$ subject to $Ax = b$ </p>
<p>is </p>
<p>$$x = W^{-1} A^T (AW^{-1} A^T)^{-1} b = \pmatrix{4/209\cr 32/209\cr 48/209}
\approx \pmatrix{0.01913875598\cr 0.1531100478\cr 0.2296650718}$$</p>
<p>Now we take $W$ to be the diagonal matrix with diagonal entries
$209/4, 209/32, 209/48$, and get</p>
<p>$$ x = W^{-1} A^T (AW^{-1} A^T)^{-1} b =\pmatrix{4/2241 \cr 256/2241 \cr 64/249}\approx \pmatrix{0.001784917448 \cr 0.1142347166\cr 0.2570281124\cr}$$</p>
<p>The next few iterations are approximately</p>
<p>$$ \pmatrix{0.0001610759876\cr 0.08247090565\cr 0.2783393066},\
\pmatrix{0.00001420449501\cr 0.05818161157\cr 0.2945444086},\
\pmatrix{0.000001231478183\cr 0.04035307711\cr 0.3064311793}$$</p>
<p>and it does look plausible that this is converging to the optimal solution of the $\ell_1$ problem.</p>
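<p>For completeness, the toy iteration above is straightforward to reproduce (a numpy sketch; the $10^{-12}$ floor on $|x_i|$ is my own regularization to keep the weights finite and is not taken from the paper):</p>

```python
import numpy as np

A = np.array([[1.0, 8.0, 12.0]])
b = np.array([4.0])

w = np.ones(3)  # diagonal of W, starting from W = I
for _ in range(200):
    Winv = np.diag(1.0 / w)
    x = Winv @ A.T @ np.linalg.solve(A @ Winv @ A.T, b)
    w = 1.0 / np.maximum(np.abs(x), 1e-12)  # next W_ii = 1/|x_i|

# the iterates approach the l1-optimal solution (0, 0, 1/3)
assert np.allclose(x, [0.0, 0.0, 1.0 / 3.0], atol=1e-6)
assert np.allclose(A @ x, b)
```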
|
1,947,630 | <p>Describe the set of relations on $\mathbb{Z}$ which are both symmetric and anti-symmetric. Hint: this set is infinite and contains one relation with which you are already familiar.</p>
<p>I know that this is clearly talking about the equality relation, but what I am confused about is what does a set of relations mean if there is only one relation? Wouldn't the relation just be $S=\{(x,y)\in\mathbb{Z}^2:x=y\}$? Or does the set of relations mean the powerset of $S$? </p>
| Nick R | 320,894 | <p>You are correct that $S$ is the relation denoting equality. However note that any subset of $S$ is also a relation that is symmetric and antisymmetric. The idea here is that we do not need to define all the integers to be equal to each other.</p>
<p>Thus the answer you are looking for is the set of all subsets of $S$ (which is infinite).</p>
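<p>Since the property is defined pointwise, it can be checked exhaustively on a small set standing in for $\mathbb{Z}$ (a brute-force sketch over a 3-element set: the relations that are both symmetric and antisymmetric are exactly the $2^3$ subsets of the diagonal):</p>

```python
from itertools import combinations

S = [0, 1, 2]
pairs = [(x, y) for x in S for y in S]

def symmetric(R):
    return all((y, x) in R for (x, y) in R)

def antisymmetric(R):
    return all(x == y for (x, y) in R if (y, x) in R)

both = []
for r in range(len(pairs) + 1):
    for R in combinations(pairs, r):
        Rs = set(R)
        if symmetric(Rs) and antisymmetric(Rs):
            both.append(Rs)

diagonal = {(x, x) for x in S}
assert all(R <= diagonal for R in both)   # each one is a subset of equality
assert len(both) == 2 ** len(S)           # and every subset qualifies: 8 of them
```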
|
1,947,630 | <p>Describe the set of relations on $\mathbb{Z}$ which are both symmetric and anti-symmetric. Hint: this set is infinite and contains one relation with which you are already familiar.</p>
<p>I know that this is clearly talking about the equality relation, but what I am confused about is what does a set of relations mean if there is only one relation? Wouldn't the relation just be $S=\{(x,y)\in\mathbb{Z}^2:x=y\}$? Or does the set of relations mean the powerset of $S$? </p>
| Graham Kemp | 135,106 | <p>The identity/equality relation <em>is</em> both anti-symmetric and symmetric. However it is not the only such. The empty relation <em>is also</em> both anti-symmetric and symmetric. There are many others.</p>
<p>You wish to describe <em>all</em> relations (on the integers) which have the properties of both symmetry and anti-symmetry. What are these properties and identify <em>how</em> may a relation possess both?</p>
<p>A relation $R$ on set $\Bbb Z$ is symmetric when .... </p>
<p>A relation $R$ on set $\Bbb Z$ is anti-symmetric when .... </p>
<p>Therefore a relation $R$ on set $\Bbb Z$ is both anti-symmetric and symmetric when .... </p>
|
291,676 | <p>I have a question about a problem I encountered:</p>
<p>$\exists\, a,b \in \mathbb{R}^{+}$ such that $\sqrt{a+b}=\sqrt{a}+\sqrt{b}$</p>
<p>Any tips for going about solving this?</p>
<p>I tried:</p>
<p>$\sqrt{a+b}=\sqrt{a}+\sqrt{b}$</p>
<p>$a+b=a+b$</p>
<p>I have a feeling this isn't a legal operation...</p>
| A Ricko Maulidar | 60,472 | <p>I think you just need to find values of $a$ and $b$ for which the condition is met.
But squaring both sides shows the condition holds only when $a=0$ or $b=0$.
Is $0\in\mathbb{R^+}$? If yes, then the statement is true.</p>
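<p>To make the squaring step explicit: $\sqrt{a+b}=\sqrt a+\sqrt b$ forces $a+b=a+2\sqrt{ab}+b$, i.e. $\sqrt{ab}=0$, which is why only $a=0$ or $b=0$ works. A small sympy check of that algebra:</p>

```python
import sympy as sym

a, b = sym.symbols('a b', positive=True)
diff = sym.expand((sym.sqrt(a) + sym.sqrt(b))**2) - (a + b)
# the squared right-hand side exceeds a + b by exactly 2*sqrt(ab)
assert diff.equals(2 * sym.sqrt(a * b))
```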
|
142,879 | <p>I'm working through Dr. Pete Clark's convergence notes here: <a href="http://alpha.math.uga.edu/%7Epete/convergence.pdf" rel="nofollow noreferrer">http://alpha.math.uga.edu/~pete/convergence.pdf</a></p>
<p>and I've been thinking about Exercise 3.2.2 (b),</p>
<p>The question states that for the set <span class="math-container">$S = \mathbb{Z}^{+}$</span>, the series <span class="math-container">$\sum_{i\in S}x_{i}$</span> converges unconditionally if and only if it converges absolutely, i.e, <span class="math-container">$\sum_{i\in S}|x_{i}| < \infty$</span>.</p>
<p>This seems to contradict the well-known fact:</p>
<p><span class="math-container">$\sum_{n=1}^{\infty}\frac{1}{n} = \infty$</span> and <span class="math-container">$\sum_{n=1}^{\infty}\frac{(-1)^{n}}{n} < \infty$</span>.</p>
<p>If I take <span class="math-container">$x_{n} = \frac{(-1)^{n}}{n}$</span>, then <span class="math-container">$|x_{n}| = \frac{1}{n}$</span>, and it seems to violate Exercise 3.2.2(b) on page 11.</p>
<p>Am I missing the point behind unconditional convergence? I made the jump that unconditional convergence (as defined in the link on the top of page 11) reduces to the ordinary convergence of series when the underlying set <span class="math-container">$S$</span> takes the place of the positive integers. Is this incorrect?</p>
| Arturo Magidin | 742 | <p>You are misinterpreting 3.2.2(b). </p>
<p>In fact, $\sum\limits_{n=1}^{\infty}\frac{(-1)^n}{n}$ does <em>not</em> converge absolutely, as you note; hence this series <em>also</em> does not converge unconditionally.</p>
<p>Indeed, for the series to converge unconditionally to $a$, it would have to be the case that for every $\epsilon\gt 0$ there is a finite set $J(\epsilon)$ such that for all finite subsets $J$ that contain $J(\epsilon)$ we have that $|a-\sum_{j\in J} a_j|\lt\epsilon$. But in this series, the positive terms diverge (as do the negative terms). So for every $\epsilon\gt 0$ there is a finite set of even indices for which the sum is as large as we want, and so we can always find finite subsets that are arbitrarily far from any particular real number $a$. </p>
<p>You are also misunderstanding unconditional convergence. It does <em>not</em> reduce to ordinary convergence even when the index set is the positive integers. What makes you think so? Unconditional convergence in the case of sequences essentially tells you that given a "degree of tolerance" ($\epsilon$), there is a finite set of terms that account for "most" of the summation, in the sense that the sum of those finitely many terms is within the tolerance of the limit, and no finite number of the remaining terms will get you out of that "tolerance zone." This is different from regular convergence, in which you can only omit a "tail" of the sequence. The example of the alternating harmonic series highlights why this is very different: for any $N\gt 0$, there is always a way to <em>pick</em> finitely many terms "beyond the $N$th term" that add up to a <em>lot</em>, even though regular convergence makes the <em>total</em> contribution of the entire "tail" negligible. That is, the <em>entire tail</em> cannot contribute much, but in unconditional convergence no finite <em>subset</em> of the tail is allowed to contribute much. That's a much stronger condition than regular convergence (which makes sense given that it is equivalent to absolute convergence, which is likewise a much stronger condition than regular convergence).</p>
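<p>The even-index claim is easy to see numerically (a quick sketch: finitely many of the positive terms $1/n$, $n$ even, already sum past any bound, while the full alternating series converges to $-\log 2\approx-0.693$):</p>

```python
# a finite set of even indices whose terms sum well past the series' limit
subset_sum = sum(1 / n for n in range(2, 1002, 2))
assert subset_sum > 3   # ~3.40, and it grows like (1/2) log N as N increases
```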
|
1,665,931 | <p>Let $a,c \in \mathbb{R}$ and $b \in \mathbb{C}$. We consider the equation $$a \bar{z}z+ b\bar{z} + \bar{b}z+c=0.$$ What curve it represents in the complex plane?</p>
<p>I think it a circle, but I am not able to conclude. Any help?</p>
| Whiz_Geek | 230,983 | <p>Writing $z=x+iy$, the equation becomes $a(x^2+y^2)+2\operatorname{Re}(\bar{b}z)+c=0$, a second-degree curve with equal $x^2$ and $y^2$ coefficients and no $xy$ term. If $a\neq 0$, completing the square gives $\left|z+\frac{b}{a}\right|^2=\frac{|b|^2-ac}{a^2}$, so it is a circle (possibly degenerating to a point or the empty set). If $a=0$ and $b\neq 0$, it is a straight line.</p>
|
176,055 | <p>I heard teachers say [cosh x] instead of saying "hyperbolic cosine of x".</p>
<p>I also heard [sinch x] for "hyperbolic sine of x". Is this correct?</p>
<p>How would you pronounce tanh x? Instead of saying "hyperbolic tangent of x"?</p>
<p>Thank you very much in advance.</p>
| marty cohen | 13,079 | <p>I usually say "sine-h", "cos-h", and "tan-h" with the "h" pronounced "aich" like the letter.</p>
<p>Sometimes I pronounce "cosh" as a word with a long "o".</p>
<p>I guess this qualifies as an answer, instead of just a comment.</p>
|
176,055 | <p>I heard teachers say [cosh x] instead of saying "hyperbolic cosine of x".</p>
<p>I also heard [sinch x] for "hyperboic sine of x". Is this correct?</p>
<p>How would you pronounce tanh x? Instead of saying "hyperbolic tangent of x"?</p>
<p>Thank you very much in advance.</p>
| Community | -1 | <p>I believe that in UK and the Commonwealth countries, the accepted pronunciations are /ʃaɪn/ (like "shine"), /kɒʃ/ ("cosh") and /θæn/ (like "thank" without the k). American usage may differ.</p>
|
2,906,282 | <p>I want to calculate the reflection of a ray <strong>R</strong> off the line segment <strong>BC</strong>: </p>
<p><a href="https://i.stack.imgur.com/bOINy.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bOINy.png" alt="enter image description here"></a></p>
<p>The triangle with the position vectors <strong>A,B,C</strong> are given. As well the position vector <strong>P</strong> and the direction vector <strong>R</strong>.</p>
<p><em>what I can calculate:</em></p>
<p>Now I can calculate the intersection point <strong>D(x,y)</strong> and the reflection vector (as a direction vector <strong>R'</strong>), and I know the <strong>x</strong> coordinate of the point <strong>P'(x,?)</strong> because the x values of the points P and P' are equal. </p>
<p><em>what I cannot calculate:</em></p>
<p>I need the <strong>y</strong> coordinate of the point <strong>P'</strong>, because I need position vectors for drawing the reflected line with python matplotlib.</p>
<p>How can I get the y coordinate of the point <strong>P'</strong> from the given parameters?</p>
<p>The reflected vector will go from <strong>D(x,y)</strong> to <strong>P'(x,y)</strong></p>
<p><a href="https://i.stack.imgur.com/3Bt1B.png" rel="nofollow noreferrer">matplotlib reflection</a></p>
<p>Thank you!</p>
| Jose Brox | 146,587 | <p>That $A$ is invertible means precisely that there is another matrix $A^{-1}$ such that
$$AA^{-1}=I=A^{-1}A.$$</p>
<p>It is easy to show that the inverse, if it exists, must be unique.</p>
<p>Now suppose $A,B$ to be invertible, and denote $C:=AB$. $C$ will be invertible if we can find $C^{-1}$ such that $CC^{-1}=I=C^{-1}C$. Observe that $$C(B^{-1}A^{-1})=(AB)(B^{-1}A^{-1})=A(BB^{-1})A^{-1}=AIA^{-1}=AA^{-1}=I.$$</p>
<p>Similarly we get $(B^{-1}A^{-1})C=I$. Therefore, since inverses are unique, by definition we can conclude that $C^{-1}=B^{-1}A^{-1}$.</p>
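<p>A quick numeric illustration of $(AB)^{-1}=B^{-1}A^{-1}$ (a sketch with random matrices, which are invertible with probability $1$):</p>

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4))
B = rng.normal(size=(4, 4))

lhs = np.linalg.inv(A @ B)
rhs = np.linalg.inv(B) @ np.linalg.inv(A)   # note the reversed order
assert np.allclose(lhs, rhs)
```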
|
3,224,765 | <p>The following question was asked on a high school test, where the students were given a few minutes per question, at most:</p>
<blockquote>
<p>Given that,
<span class="math-container">$$P(x)=x^{104}+x^{93}+x^{82}+x^{71}+1$$</span>
and,
<span class="math-container">$$Q(x)=x^4+x^3+x^2+x+1$$</span>
what is the remainder of <span class="math-container">$P(x)$</span> divided by <span class="math-container">$Q(x)$</span>?</p>
</blockquote>
<hr>
<p>The given answer was:</p>
<blockquote>
<p>Let <span class="math-container">$Q(x)=0$</span>. Multiplying both sides by <span class="math-container">$x-1$</span>:
<span class="math-container">$$(x-1)(x^4+x^3+x^2+x+1)=0 \implies x^5 - 1=0 \implies x^5 = 1$$</span>
Substituting <span class="math-container">$x^5=1$</span> in <span class="math-container">$P(x)$</span> (for instance <span class="math-container">$x^{104}=(x^5)^{20}x^4=x^4$</span>) gives <span class="math-container">$x^4+x^3+x^2+x+1$</span>, which is <span class="math-container">$0$</span> by assumption. Thus,
<span class="math-container">$$P(x)\equiv\mathbf0\pmod{Q(x)}$$</span></p>
</blockquote>
<hr>
<p>Obviously, a student is required to come up with a “trick” rather than doing brute force polynomial division. How is the student supposed to think of the suggested method? Is it obvious? How else could one approach the problem?</p>
| DanLewis3264 | 480,329 | <p>While it may be a standard technique, as Bill's response details, I wouldn't say it's at all obvious at High School level. As a pre-Olympiad challenge problem, however, it's a good one. </p>
<p>My intuition is via cyclotomic polynomials -- <span class="math-container">$Q(x) = \Phi_5(x)$</span>, giving the idea to multiply through by <span class="math-container">$x-1$</span> -- but I doubt I would have recognised them before university: <a href="https://en.wikipedia.org/wiki/Cyclotomic_polynomial" rel="noreferrer">https://en.wikipedia.org/wiki/Cyclotomic_polynomial</a></p>
|
2,847,301 | <p>Find elements $a,b,$ and $c$ in the ring $\mathbb{Z}×\mathbb{Z}×\mathbb{Z}$ such that $ab, ac,$ and $bc$ are zero divisors but $abc$ is not a zero divisor.</p>
<p>Work:</p>
<ul>
<li><p>$a=(1,1,0)$</p></li>
<li><p>$b=(1,0,1)$</p></li>
<li><p>$c=(0,1,1)$</p></li>
</ul>
<p>Why this works: because $ab=(1,0,0)\neq(0,0,0)$.</p>
<blockquote>
<p><strong>Definition of zero divisor</strong>. A zero divisor is a non-zero element $a$ of a commutative ring $R$ such that there is a non-zero $b \in R$ with $ab=0$.</p>
</blockquote>
<p>Any hint or suggestion will be appreciated.</p>
| Benjamin Dickman | 37,122 | <p>The $a, b,$ and $c$ that you suggest seem to work, although your reasoning is somewhat missing. Depending on where you are in your studies, it appears that you want to prove your choice of these three elements in $R = \mathbb{Z}\times\mathbb{Z}\times\mathbb{Z}$ yield zero divisors $ab$, $ac$, and $bc$, but for which $abc$ is <em>not</em> a zero divisor. To show all of this in accordance with the definition that you have provided, you need to know that $R$ is a commutative ring, and that its zero element is $(0,0,0)$. Whether that observation requires proof is a function of your course; let us assume it is so for the present purpose.</p>
<p>As to your suggested elements: You have correctly computed $ab$. What about $ac$? What about $bc$? Finally, what is it about $abc$ that makes it <em>not</em> a zero divisor?</p>
<p>For completeness, it may help to prove that the specified elements really are zero divisors when relevant. For example, you calculated that $ab = (1, 0, 0) \neq (0, 0, 0)$; but, this only shows that $ab$ is nonzero. To prove that it is a zero divisor, you will still need to prove that there is a nonzero element of $R$ that, multiplied by $ab$, yields the zero element $(0,0,0)$. In this particular case, you can conveniently find such an example with your nonzero element $c=(0,1,1)$.</p>
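<p>A tiny script makes this bookkeeping explicit (a sketch; it relies on the fact, worth proving separately, that in $\mathbb{Z}\times\mathbb{Z}\times\mathbb{Z}$ the zero divisors are exactly the nonzero triples with at least one zero coordinate):</p>

```python
def mult(u, v):
    """Componentwise product in Z x Z x Z."""
    return tuple(a * b for a, b in zip(u, v))

ZERO = (0, 0, 0)

def is_zero_divisor(u):
    # nonzero, but killed by some nonzero element (here: a coordinate is 0)
    return u != ZERO and any(c == 0 for c in u)

a, b, c = (1, 1, 0), (1, 0, 1), (0, 1, 1)
ab, ac, bc = mult(a, b), mult(a, c), mult(b, c)
assert all(is_zero_divisor(p) for p in (ab, ac, bc))

abc = mult(ab, c)
assert abc == ZERO                 # abc is the zero element,
assert not is_zero_divisor(abc)    # hence by definition not a zero divisor
```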
|
785,441 | <p>Prepping for a comprehensive test in August and I am working on a problem from Royden 4th ed. (Chapter 2, #15): Show that if $E$ has finite measure and $\varepsilon>0$, then $E$ is the disjoint union of a finite number of measurable sets, each of which has measure at most $\varepsilon$.</p>
<p>Here is what I have so far:</p>
<p>Let $\varepsilon>0$ and let the Lebesgue outer measure of a set $E$, $m^*(E)$, be positive and finite. Let $\{I_k\}_{k=1}^\infty$ be a countable collection of bounded open intervals that cover $E$ such that</p>
<p>$$
\sum_{k=1}^\infty\ell(I_k)\le m^*(E)+\varepsilon.
$$</p>
<p>Since our sum is convergent, there exists $N\in\mathbb{N}$ such that </p>
<p>$$
\sum_{k=N+1}^\infty\ell(I_k)\le\varepsilon.
$$</p>
<p>Then define </p>
<p>$$
E_0=E\cap\bigcup_{k=N+1}^\infty I_k.
$$</p>
<p>Since $E_0\subseteq\bigcup_{k=N+1}^\infty I_k$, we have</p>
<p>$$
m^*(E_0)=m^*\left(E\cap\bigcup_{k=N+1}^\infty I_k\right)\le m^*\left(\bigcup_{k=N+1}^\infty I_k\right)\le\sum_{k=N+1}^\infty\ell(I_k)<\varepsilon.
$$</p>
<p>Here is where I am stuck:</p>
<p>It seems intuitive that if $E_0$ is covered by $\bigcup_{k=N+1}^\infty I_k$, then $E\setminus E_0$ is covered by $\bigcup_{k=1}^N I_k$. But I cannot figure out the proof.</p>
<p>If this is true, then I can finish the proof: $E\setminus E_0$ is covered by a finite number of bounded intervals, so there exists some interval $[a,b)$ such that $\bigcup_{k=1}^N I_k\subseteq [a,b)$. Then all I do is choose $M$ large enough so that $M\varepsilon>b-a$ and then subdivide $[a,b)$ into intervals of width $(b-a)/M<\varepsilon$. Then I intersect $E\setminus E_0$ with each of these intervals and union them all together (along with $E_0$) to get a finite union of disjoint intervals with width less than $\varepsilon$ that is equal to $E$.</p>
| Laars Helenius | 112,790 | <p>I figured it out:</p>
<p>Since $E\subseteq\bigcup_{k=1}^\infty I_k$, then
$$
\begin{align*}
E&=E\cap\bigcup_{k=1}^\infty I_k\\
&=E\cap\left(\bigcup_{k=1}^N I_k\cup\bigcup_{k=N+1}^\infty I_k\right)\\
&=\left(E\cap\bigcup_{k=1}^N I_k\right)\cup \left(E\cap\bigcup_{k=N+1}^\infty I_k\right)\\
&=\left(E\cap\bigcup_{k=1}^N I_k\right)\cup E_0.
\end{align*}
$$
Thus
$$
\begin{align*}
E\setminus E_0&=\left[\left(E\cap\bigcup_{k=1}^N I_k\right)\cup E_0\right]\cap E_0^C\\
&=\left(E\cap\left(\bigcup_{k=1}^N I_k\right)\cap E_0^C\right)\cup \left(E_0\cap E_0^C\right)\\
&=\left(E\cap\left(\bigcup_{k=1}^N I_k\right)\cap E_0^C\right)\cup\emptyset\\
&=\left(E\cap\left(\bigcup_{k=1}^N I_k\right)\cap E_0^C\right)\\
&\subseteq\bigcup_{k=1}^N I_k.
\end{align*}
$$</p>
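<p>The set-algebra chain above can be sanity-checked on random finite sets (a small Python sketch, with $A$ playing the role of $\bigcup_{k\le N} I_k$ and $B$ the role of $\bigcup_{k>N} I_k$):</p>

```python
import random

rng = random.Random(0)
U = range(30)

def rand_set():
    return {x for x in U if rng.random() < 0.5}

for _ in range(1000):
    A, B = rand_set(), rand_set()
    # E is any subset of the cover A ∪ B, and E0 = E ∩ B
    E = {x for x in A | B if rng.random() < 0.7}
    E0 = E & B
    # the identity chain above concludes E \ E0 ⊆ A
    assert E - E0 <= A
```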
|
<p>What is the maximum value of $\sin A+\sin B+\sin C$ in a triangle $ABC$? My book says it's $3\sqrt3/2$ but I have no idea how to prove it.</p>
<p>I can see that if $A=B=C=\frac\pi3$ then I get $\sin A+\sin B+\sin C=\frac{3\sqrt3}2$. And also <a href="http://www.wolframalpha.com/input/?i=max+sin(a)%2Bsin(b)%2Bsin(c)+for+a%2Bb%2Bc%3Dpi" rel="nofollow noreferrer">according to WolframAlpha</a> maximum is attained for $a=b=c$. But this does not give me any idea for the proof.</p>
<p>Can anyone help? </p>
| Kim Jong Un | 136,641 | <p>For $x\in[0,\pi]$, the function $f(x)=\sin(x)$ is concave, so by Jensen's inequality, we have
$$
\frac{1}{3}f(A)+\frac{1}{3}f(B)+\frac{1}{3}f(C)\leq f\left[\frac{1}{3}(A+B+C)\right]=\sin(\pi/3)=\frac{\sqrt{3}}{2}.
$$
Equality is achieved when $A=B=C=\pi/3$.</p>
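<p>A quick numerical check of the bound (a Python sketch sampling random triangles; not part of the original argument):</p>

```python
import math
import random

rng = random.Random(1)
bound = 3 * math.sqrt(3) / 2

best = 0.0
for _ in range(100_000):
    # a random triangle: A + B + C = pi with all angles positive
    A = rng.uniform(0.0, math.pi)
    B = rng.uniform(0.0, math.pi - A)
    C = math.pi - A - B
    best = max(best, math.sin(A) + math.sin(B) + math.sin(C))

# never exceeds the Jensen bound, and the random search comes close to it
assert best <= bound + 1e-9
assert best > bound - 0.05
```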
|
<p>What is the maximum value of $\sin A+\sin B+\sin C$ in a triangle $ABC$? My book says it's $3\sqrt3/2$ but I have no idea how to prove it.</p>
<p>I can see that if $A=B=C=\frac\pi3$ then I get $\sin A+\sin B+\sin C=\frac{3\sqrt3}2$. And also <a href="http://www.wolframalpha.com/input/?i=max+sin(a)%2Bsin(b)%2Bsin(c)+for+a%2Bb%2Bc%3Dpi" rel="nofollow noreferrer">according to WolframAlpha</a> maximum is attained for $a=b=c$. But this does not give me any idea for the proof.</p>
<p>Can anyone help? </p>
| Community | -1 | <p>$$f(x,y,z)=\sin(x)+\sin(y)+\sin(z)$$
$$g(x,y,z)=x+y+z-\pi=0$$</p>
<p>$$\large\frac{\frac{\partial f}{\partial x}}{\frac{\partial g}{\partial x}}=
\frac{\frac{\partial f}{\partial y}}{\frac{\partial g}{\partial y}}=
\frac{\frac{\partial f}{\partial z}}{\frac{\partial g}{\partial z}}=k$$</p>
<p>$$\cos(x)=\cos(y)=\cos(z)$$</p>
<p>hence</p>
<p>$$f_{max}=\sin\left(\frac{\pi}{3}\right)+\sin\left(\frac{\pi}{3}\right)+\sin\left(\frac{\pi}{3}\right)=\frac{3\sqrt3}{2}$$</p>
<p>$$\sin(x)+\sin(y)+\sin(z)\le\frac{3\sqrt3}{2}$$</p>
|
10,674 | <p>Let $p$ be a rational prime and $K$ a number field.
Dedekind's discriminant theorem tells us that
$p$ ramifies in $K$ $\iff$ $p$ divides the discriminant of $K$.
Hence if $p$ does not divide the discriminant of $K$,
$(p)$ either splits, i.e., </p>
<p>(i) $(p)=P_1 \cdots P_g$ for $P_i \neq P_j$ and $g \geq 2$
or </p>
<p>(ii) $(p)$ remains prime. </p>
<p>Now, my question is: what are some criteria which can tell if $p$ will split or remain prime?</p>
| David E Speyer | 297 | <p>Pete's answer is very good. Some self-promotion:</p>
<p>An <a href="http://sbseminar.wordpress.com/2007/08/24/factoring-polynomials-and-the-frobenius/" rel="nofollow noreferrer">expository post</a> on the relationship between factoring polynomials and factoring primes.</p>
<p>The <a href="https://mathoverflow.net/questions/10201/a-problem-of-shimura-and-its-relation-to-class-field-theory/10225#10225">previous answer</a> regarding the polynomial $t^3+t^2−2t−1$</p>
|
2,434,812 | <p>I am only concerned with the integral of a measurable function $f : M \to \mathbb{R}^{\ast}_{+}$, where $\mathbb{R}^{\ast}_{+}$ contains all non-negative real numbers and positive infinity. As I understand it, this is defined as
$$\sup \left\{ \int s : \text{$s$ is simple and $s\le f$}\right\}.$$</p>
<p>My question is whether the integral is also equal to
$$ \inf \left\{ \int s : \text{$s$ is simple and $s\ge f$}\right\}.$$</p>
| barmanthewise | 453,518 | <p>If said function is measurable then yes, it is. Some textbooks say that a function $f$ is Lebesgue-integrable if and only if $\sup\{s \text{ simple} : s \leq f\} = \inf\{s \text{ simple} : s \geq f\}$.</p>
<p>It all comes down to showing that these definitions of measurable function $f:X \rightarrow \mathbb{R}$ are all equivalent:</p>
<ol>
<li><p>$\forall a \in \mathbb{R}$ the set $\{x \in X : f(x) < a\}$ is measurable.</p></li>
<li><p>$\forall a \in \mathbb{R}$ the set $\{x \in X : f(x) \leq a\}$ is measurable.</p></li>
<li><p>$\forall a \in \mathbb{R}$ the set $\{x \in X : f(x) > a\}$ is measurable.</p></li>
<li><p>$\forall a \in \mathbb{R}$ the set $\{x \in X : f(x) \geq a\}$ is measurable.</p></li>
</ol>
<p>which is a piece of cake to prove using the property of closure under complement of a sigma algebra.</p>
|
29,530 | <p>On the questions tab, can we have a filter feature (I mean like the frequent, votes, etc. tabs) for watched tags? I am aware that the tags I have watched are marked in a different colour, but I would like to discuss the pros and cons of getting a dedicated watched-tags filter, because I feel most people are well versed in certain specific areas of mathematics and would like to answer questions in those domains. This may not be so useful for asking questions, as exploring newer areas of mathematics is generally encouraged. On the other hand, it's good for everybody when people answer questions in domains they have command over. Any suggestions in this regard are welcome.</p>
| Martin Sleziak | 8,297 | <p>The search query <a href="https://math.stackexchange.com/search?q=intags:mine">intags:mine</a> returns only posts in your favorite tags. If you <a href="https://math.stackexchange.com/search?tab=active&q=intags:mine">sort by activity</a> and perhaps add <a href="https://math.stackexchange.com/search?tab=active&q=intags:mine+is:q">is:q</a> it should look similar to the list of questions in various tabs on the main site.</p>
<p>You can also use <a href="https://stackexchange.com/filters/357965/bounty">filters</a>, IIRC a filter with your favorite tags is created automatically. From here, you can also get an RSS feed with questions in those tags, see also: <a href="https://meta.stackexchange.com/q/3403">RSS feed of your favorite tags</a>. (But it seems that you did not mean the word <em>feed</em> in the question in this sense.)</p>
<p>EDIT: Now the <a href="https://meta.stackexchange.com/tags/custom-filters/info">custom filters feature</a> - described in <a href="https://math.meta.stackexchange.com/questions/29530/can-we-have-a-feed-of-watched-tags/29531#29531">Andrew T.'s answer</a> - is available also on this site. (See also: <a href="https://meta.stackexchange.com/q/330326">Custom Filters release announcement</a>.) So you have another alternative at your disposal - creating a custom filter for your watched tags. (This is probably the simplest one.)</p>
|
3,082,337 | <p>Forgive my ignorance.<br>
Is the condition <span class="math-container">$x\in\mathbb{R}$</span> necessary to the set statement <span class="math-container">$\{x \in\mathbb{R} \vert x> 0\}$</span>?<br>
In other words, if <span class="math-container">$x$</span> is greater than zero, then is it not, by definition, a real number?<br>
Thank you very much!</p>
| Hagen von Eitzen | 39,174 | <p>One might say: For me, <span class="math-container">$\aleph_0>0$</span> and <span class="math-container">$\aleph_0\notin \Bbb R$</span>, so the restriction <span class="math-container">$x\in \Bbb R$</span> is necessary</p>
<hr>
<p>But we may also view this from a different perspective, namely that the <em>form</em> of the expression tells us directly that we are dealing with a set (and not a proper class). We use the notation (also known as class-builder notation)
<span class="math-container">$$\tag1\{\,x\mid \Phi(x)\,\} $$</span>
to denote the <em>class</em> of all objects <span class="math-container">$x$</span> that make the predicate <span class="math-container">$\Phi$</span> true. In general, such a class need not be a set.</p>
<p>In the wide-spread Zermelo-Frenkel set theory, there are two important axioms (or rather axiom schemas) that postulate that certain sets exist in terms of (other sets and) predicates: The <strong>Axiom Schema of Replacement</strong> and the <strong>Axiom Schema of Separation</strong> (or <strong>Comprehension</strong>). The first says that given what is called a class function <span class="math-container">$F$</span> and a set <span class="math-container">$A$</span>, there exists also a set that has as elements precisely all things that equal <span class="math-container">$F(x)$</span> for some <span class="math-container">$x\in A$</span>. We commonly use the notation
<span class="math-container">$$\tag2\{\,F(x)\mid x\in A\,\} $$</span>
(or perhaps <span class="math-container">$F[A]$</span>) for this. Likewise, the Axiom Schema of Separation tells us that for every set <span class="math-container">$A$</span> and predicate <span class="math-container">$\Phi$</span>, there exists a set that has as elements precisely those elements of <span class="math-container">$A$</span> that fulfill predicate <span class="math-container">$\Phi$</span>. We commonly use the notation
<span class="math-container">$$\tag3 \{\,x\in A\mid \Phi(x)\,\}$$</span>
for this set. </p>
<p>So whenever we encounter something that looks like <span class="math-container">$(2)$</span> or <span class="math-container">$(3)$</span>, we can immediately be confident that it is a set. With <span class="math-container">$(1)$</span>, we'd have to stop and check - and why would you as an author want your readers to do that unnecessarily?</p>
|
4,090,609 | <p>Not every even-rank real vector bundle over a smooth manifold has a complex structure, because there are even-dimensional manifolds that have no almost complex structures. I am curious about the following special case: Suppose <span class="math-container">$M$</span> is a real compact, connected smooth manifold and <span class="math-container">$E\to M$</span> is an orientable real <span class="math-container">$2$</span>-plane bundle. Also suppose, for some connected open subset <span class="math-container">$U$</span>, <span class="math-container">$E|_U$</span> has a complex structure (so that it can be seen as a complex line bundle). In this case can we extend the complex structure on <span class="math-container">$E|_U$</span> to <span class="math-container">$E$</span>?</p>
| Moishe Kohan | 84,907 | <p>There are two obstructions, one of which is topological and second is geometric.</p>
<ol>
<li>The topological obstruction is that every complex vector bundle comes with a canonical orientation. Hence, you have to assume that your vector bundle <span class="math-container">$E\to M$</span> is orientable and its orientation extends the one coming from the complex structure on the bundle <span class="math-container">$E|_U\to U$</span>. If <span class="math-container">$M$</span> is connected and <span class="math-container">$U$</span> is nonempty, then this extension of an orientation is unique.</li>
</ol>
<p>In particular, if <span class="math-container">$U$</span> is connected (as in the edit to your question) and <span class="math-container">$E\to M$</span> is orientable then, after possibly replacing this orientation with the opposite one, we can assume that the orientation on <span class="math-container">$E\to M$</span> agrees with that of <span class="math-container">$E|_U$</span>.</p>
<p>Thus, from now on, I will assume that <span class="math-container">$E\to M$</span> is oriented as above.</p>
<ol start="2">
<li>To understand the second obstruction, note that a complex structure on a real vector bundle is a smooth family <span class="math-container">$J$</span> of automorphisms of the fibers <span class="math-container">$J_x: E_x\to E_x$</span> such that <span class="math-container">$J_x^2=-I$</span> for every <span class="math-container">$x\in M$</span>. Since <span class="math-container">$E$</span> has rank 2 and is oriented, prescribing such <span class="math-container">$J$</span> is equivalent to prescribing a metric tensor <span class="math-container">$h_x$</span> on the fibers <span class="math-container">$E_x$</span> of <span class="math-container">$E$</span> (of course, depending smoothly on <span class="math-container">$x\in M$</span>). Once such <span class="math-container">$h_x$</span> is given, then <span class="math-container">$J_x$</span> is the rotation by <span class="math-container">$\pi/2$</span> in the positive direction. Conversely, once <span class="math-container">$J_x$</span> is given, the metric <span class="math-container">$h_x$</span> (Hermitian with respect to <span class="math-container">$J_x$</span>) is canonically defined up to scaling.</li>
</ol>
<p>Here is an example of a non-extendible complex structure in the case when <span class="math-container">$M={\mathbb R}$</span> (I will leave it to you to extend it to any dimension).</p>
<p>The rank <span class="math-container">$2$</span> vector bundle <span class="math-container">$E \to {\mathbb R}$</span> has to be trivial, <span class="math-container">$E={\mathbb R}\times {\mathbb R}^2$</span>, with the standard fiberwise orientation. Now, take <span class="math-container">$U=(0,\infty)$</span> and a Hermitian metric <span class="math-container">$h$</span> on <span class="math-container">$E|_U$</span> given by the following family of Gram matrices.
<span class="math-container">$$
h_x= \left[\begin{array}{cc}
1&0\\
0& x^2\end{array}\right].
$$</span>
The corresponding almost complex structure will not extend to the point <span class="math-container">$x=0\in M$</span>.</p>
<p>One way to resolve the geometric issue is:</p>
<p><strong>Proposition.</strong> Suppose that <span class="math-container">$E\to M$</span> is an oriented rank 2 vector bundle, <span class="math-container">$U\subset M$</span> is an open subset (connectedness of <span class="math-container">$U$</span> is irrelevant here) such that the restriction bundle <span class="math-container">$E|_U$</span> is equipped with a complex structure whose orientation is consistent with that of <span class="math-container">$E$</span>. Then for every compact subset <span class="math-container">$K\subset U$</span>, the complex structure on <span class="math-container">$E|_K$</span> extends to a complex structure on <span class="math-container">$E$</span>, again, consistent with the orientation.</p>
<p>Proof. Let <span class="math-container">$h|_U$</span> be a Hermitian metric for the complex vector bundle <span class="math-container">$E|_U$</span>. Let <span class="math-container">$V\subset U$</span> be a relatively compact open subset containing <span class="math-container">$K$</span>. Take a locally finite open cover <span class="math-container">${\mathcal W}$</span> of <span class="math-container">$M$</span> such that <span class="math-container">$V=W_0\in {\mathcal W}$</span>, the rest of the elements <span class="math-container">$W_k\in {\mathcal W}, k\ge 1$</span>, are disjoint from <span class="math-container">$K$</span> and <span class="math-container">$E$</span> is trivial over each <span class="math-container">$W_k, k\ge 1$</span>. Let <span class="math-container">$h_k$</span> be a Hermitian metric on <span class="math-container">$E|_{W_k}, k\ge 1$</span>;
set <span class="math-container">$h_0:=h|_V$</span>. Now, take a partition of unity <span class="math-container">$\rho_k, k\ge 0$</span>, for the cover <span class="math-container">${\mathcal W}$</span> and use it to extend the Hermitian metric <span class="math-container">$h$</span> from <span class="math-container">$K$</span> to the rest of <span class="math-container">$M$</span>:
<span class="math-container">$$
h= \sum_{k\ge 0} \rho_k h_k.
$$</span>
where, by default, <span class="math-container">$\rho_k h_k$</span> is extended by zero outside of <span class="math-container">$W_k$</span>. <strong>qed</strong></p>
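<p>The blow-up in the example above can be made concrete. For <span class="math-container">$h_x=\operatorname{diag}(1,x^2)$</span>, the compatible rotation by <span class="math-container">$\pi/2$</span> works out to <span class="math-container">$J_x=\begin{pmatrix}0&-x\\1/x&0\end{pmatrix}$</span> (a formula derived here for illustration, not stated in the answer). A Python sketch:</p>

```python
# For the metric h_x = diag(1, x^2) on an oriented plane, the compatible
# complex structure (rotation by +90 degrees in the h_x-metric) is
#     J_x = [[0, -x], [1/x, 0]]
# -- derived here for illustration, not taken from the answer.

def J(x):
    return [[0.0, -x], [1.0 / x, 0.0]]

def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

for x in (1.0, 0.5, 0.01):
    # J_x really is a complex structure: J_x^2 = -I
    J2 = matmul(J(x), J(x))
    assert abs(J2[0][0] + 1) < 1e-12 and abs(J2[1][1] + 1) < 1e-12
    assert abs(J2[0][1]) < 1e-12 and abs(J2[1][0]) < 1e-12

# ... but its entries blow up as x -> 0, so it cannot extend continuously:
assert abs(J(1e-9)[1][0]) > 1e8
```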
|
2,685,822 | <p>How can we prove that $L = \lim_{n \to \infty}\frac{\log\left(\frac{n^n}{n!}\right)}{n} = 1$</p>
<p>This is part of a much bigger question however I have reduced my answer to this, I have to determine the limit of $\log(n^{n}/n!)/n$ when $n$ goes to infinity.</p>
<p>Apparently the answer is 1 according to WolframAlpha, but I have no clue how to get it. Any idea how I could proceed (without Stirling's approximation as well)?</p>
| Bernard | 202,857 | <p>$$n!\sim_\infty\sqrt{2\pi n\vphantom{h}}\,\Bigl(\frac ne\Bigr)^{\!n},\enspace\text{hence }\quad \frac{n^n}{n!} \sim_\infty\frac{\mathrm e^n}{\sqrt{2\pi n\vphantom{h}}} $$
so
$$\frac1n\,\log\biggl(\frac{n^n}{n!}\biggr)\sim_\infty\frac1n\Bigl(\frac12\log(2\pi n)+n\Bigr)=\underbrace{\frac{\log(2\pi n)}{2n}}_{\substack{\downarrow\\0}} +1.$$</p>
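<p>The asymptotics can be checked numerically (a Python sketch using <code>math.lgamma</code> for $\log n!$; not part of the original argument):</p>

```python
import math

def val(n):
    # log(n^n / n!) / n, using lgamma(n + 1) = log(n!) to avoid overflow
    return (n * math.log(n) - math.lgamma(n + 1)) / n

# Stirling predicts val(n) ~ 1 - log(2*pi*n) / (2n)
for n in (10, 1_000, 1_000_000):
    approx = 1.0 - math.log(2 * math.pi * n) / (2 * n)
    assert abs(val(n) - approx) < 1e-2

assert abs(val(1_000_000) - 1.0) < 1e-4
```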
|
470,506 | <p>When is this true?
$$\lim_{r\to 0}\int_{-K}^K f(rx)dx=\int_{-K}^K \lim_{r\to 0} f(rx)dx$$
Is it true without the hypothesis of continuity of
$f$? </p>
<p>Thank you.</p>
| Gina | 102,040 | <p>Your method is doomed to fail. Here is why:</p>
<ol>
<li><p>Totally bounded is equivalent to the condition that the space have finite cover each with radius less than $\epsilon$ for any $\epsilon>0$.</p></li>
<li><p>Metric subspace of a totally bounded metric space is also totally bounded. Trivially prove from above.</p></li>
<li><p>By Heine-Borel theorem: every closed and bounded subset in $\mathbb{R}^{n}$ is compact.</p></li>
<li><p>Every compact metric space is totally bounded.</p></li>
<li><p>The closure of any bounded subset is bounded.</p></li>
<li><p>Combining all of the above: every bounded subset in $\mathbb{R}^{n}$ is a subset of its closure. The closure is both closed and bounded, hence compact, and hence totally bounded. Thus the subset is also totally bounded.</p></li>
</ol>
<p>Therefore, any example constructed from a subset of $\mathbb{R}^{n}$ is doomed to fail.</p>
<p>That's probably why all examples you found come from infinite dimensional space. You don't need anything fancy really. Even the sequence space $l^{2}(\mathbb{R})$ is good enough.</p>
<p>EDIT: to prove 1:</p>
<p>Left to right implication: for any $\epsilon$ cover the space with a finite bunch of ball with radius $\frac{\epsilon}{3}$.</p>
<p>Right to left implication: for any $\epsilon$ cover the space with a finite cover each with radius at most $\frac{\epsilon}{2}$. Then pick any point for each set in the cover and produce a ball of radius $\epsilon$, which will be enough to cover the set.</p>
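<p>The $l^2(\mathbb{R})$ example is easy to see concretely: the standard basis vectors all lie in the closed unit ball, yet any two are $\sqrt2$ apart, so no finite $\varepsilon$-net with $\varepsilon<\sqrt2/2$ can cover them. A Python sketch on a finite truncation (the truncation is just for illustration):</p>

```python
import math

dim = 10  # a finite truncation of l^2(R), enough to see the point

def e(k):
    # k-th standard basis vector: all mass on coordinate k
    return [1.0 if i == k else 0.0 for i in range(dim)]

def dist(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

# every e(k) lies in the closed unit ball, yet any two are sqrt(2) apart,
# so finitely many balls of radius < sqrt(2)/2 can never cover them all
for j in range(dim):
    for k in range(j + 1, dim):
        assert abs(dist(e(j), e(k)) - math.sqrt(2)) < 1e-12
```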
|
2,363,211 | <p>Can <span class="math-container">$406014677132263504491682$</span> be the sum of two fourth powers? It may be so. Can anyone use Wolfram Mathematica, SAGE, or some computer program to check whether this number is the sum of two fourth powers?</p>
<p>The complete factorization of this number is given by: <span class="math-container">$406014677132263504491682 = 2 × 1459 × 6883 × 21529 × 938976705857$</span></p>
| Oscar Lanzi | 248,217 | <p>Hmmmm. $1459^1$. Any whole number in whose factorization a $4n-1$ prime appears with an odd exponent can't be rendered as a sum of two squares, let alone a sum of two fourth powers.</p>
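<p>Taking the stated factorization at face value, the criterion is easy to check by machine (a Python sketch: it verifies that the factorization multiplies out to $n$ and that the prime $1459\equiv 3\pmod 4$ occurs to an odd power):</p>

```python
n = 406014677132263504491682
factors = [2, 1459, 6883, 21529, 938976705857]

prod = 1
for p in factors:
    prod *= p
assert prod == n  # the stated factorization multiplies out to n

assert 1459 % 4 == 3  # 1459 is of the form 4n - 1

# ... and 1459 divides n to an odd power (exponent 1):
e, m = 0, n
while m % 1459 == 0:
    m //= 1459
    e += 1
assert e == 1
```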
|
2,887,858 | <p>I'm learning how to take surface integrals on the surface of spheres in $\mathbb{R}^n$. This question is related to <a href="https://math.stackexchange.com/questions/2887371/calculating-the-surface-integral-int-s-10y-j-d-sigmay">Calculating the surface integral $\int_{S_1(0)}y_j \ d\sigma(y)$</a> where I try to compute a similar integral but I don't know if I did everything ok. (read that answer to have a better understanding of how to integrate over a surface if you need)</p>
<p><strong>I'm asked to compute</strong></p>
<p>$$\int_{S_1(0)}y_jy_k \ d\sigma(y)$$ </p>
<p>Let's imagine the north hemisphere first, through the parametrization $\Sigma(y) = \left(y,\sqrt{1-|y|^2}\right)$:</p>
<p>$$\int_{S_1(0)^+}y_jy_k \ d\sigma(y) = \int_{C_1(0)+}y_jy_k\det [ye_1\ \cdots \ ye_{n-1} \ n(y)]\ dy$$ </p>
<p>where $C_1(0)$ is just the 'circumference' on which the 'sphere' is parametrized. We can think of it as the region $|y|<1$ for $y\in\mathbb{R}^{n-1}$.</p>
<p>Remember that the unit normal $n(y)$ in the unit sphere is just $y$. So the determinant above would give us $y_1\cdots y_{n-1}\sqrt{1-(y_1^2+\cdots +y_{n-1}^2)}$ because these are the diagonal terms multiplied and the rest of the elements are $0$ (except for the normal column but it gets multiplied by the other $0$s)</p>
<p>So we end up with</p>
<p>$$\int_{S_1(0)^+}y_jy_k \ d\sigma(y) = \int_{C_1(0)+} y_1\cdots y_j^2 \cdots y_k^2 \cdots \ y_{n-1}\sqrt{1-(y_1^2+\cdots + y_{n-1}^2)}\ dy$$</p>
<p>Now, breaking these onto the region $C_1(0)$ we get:</p>
<p>$$\int_{S_1(0)^+}y_jy_k \ d\sigma(y) = \\ \int_{-1}^1\cdots\int_{-1}^1 y_1\cdots y_j^2\cdots y_k^2 \cdots \ y_{n-1}\sqrt{1-(y_1^2+\cdots +y_{n-1}^2)} \ dy_1\cdots dy_j \cdots dy_{n-1}$$</p>
<p>And for the south hemisphere:</p>
<p>$$\int_{S_1(0)^-}y_jy_k \ d\sigma(y) =\\ \int_{-1}^1\cdots\int_{-1}^1 y_1\cdots y_j^2 \cdots y_k^2 \cdots \ y_{n-1}\sqrt{-1+(y_1^2+\cdots + y_{n-1}^2)} \ dy_1\cdots dy_j \cdots dy_{n-1}$$</p>
<p>I think it helps if I do the two cases of integration for the north hemisphere first. If $k\neq j$ then I have to integrate $y_i\sqrt(\cdots)$ or $y_i^2\sqrt(\cdots)$. If $k=j$ then it's a matter of integrating $y_i\sqrt(\cdots)$ and $y_i^3\sqrt(\cdots)$. </p>
<p>So <strong>for $k\neq j$</strong>, if we integrate $y_i\sqrt(\cdots)$:</p>
<p>$$\frac{1}{2}\int_{-1}^12y_i\sqrt{1-(y_1^2 + \cdots + y_i^2 + \cdots + y_{n-1}^2)}\ dy_i = \\\frac{1}{2}\int_{1}^{1}\sqrt{1-(y_1^2 + \cdots + u + \cdots + y_{n-1}^2)}\ du$$</p>
<p>I'm integrating with the substitution $u = y_i^2$ from $u(-1) = 1$ to $u(1) = 1$ so it should be $0$. This gets multiplied with every other integral so the entire integral is $0$ which is ok according to my book.</p>
<p>Now <strong>for $k=j$</strong>, if we integrate $y_i^4\sqrt(\cdots)$ we would get something, but it would also be multiplied by the integral of $y_i\sqrt(\cdots)$, so it should be $0$ too. But my book says it is $n^{-1}\int{S_1(0)}1$.</p>
<p><strong><em>Batominovski's Comment:</em></strong> <em>In the paragraph above, the claim that "it would also be multiplied by the integral of $y_i\sqrt(\cdots)$" is wrong. We do not have $\int\,(fg) =\left(\int\,f\right)\,\left(\int\,g\right)$. So, $\int\,g=0$ does not imply $\int\,(fg)=0$.</em></p>
<p>What is wrong?</p>
| Batominovski | 72,152 | <p><strong>Solution Employing Symmetry</strong></p>
<p>Let $S^{n-1}$ denote the unit hypersphere in $\mathbb{R}^n$ (with coordinate vector $\mathbf{y}=(y_1,y_2,\ldots,y_n)$) centered at the origin $\boldsymbol{0}_n$. Write $\sigma_{n-1}$ for the hypersurface area measure of $S^{n-1}$. Declare the <em>northern</em> hemisphere $S^{n-1}_+$ to be the one with $y_j>0$, and the <em>southern</em> hemisphere $S^{n-1}_-$ to be the one with $y_j<0$. If $j\neq k$, then by symmetry, we have
$$\int_{S^{n-1}_-}\,y_jy_k\,\text{d}\sigma_{n-1}(\mathbf{y}) = -\int_{S^{n-1}_+}\,y_jy_k\,\text{d}\sigma_{n-1}(\mathbf{y})\,.$$
This shows that $$\int_{S^{n-1}}\,y_jy_k\,\text{d}\sigma_{n-1}(\mathbf{y})=\int_{S^{n-1}_+}\,y_jy_k\,\text{d}\sigma_{n-1}(\mathbf{y})+\int_{S^{n-1}_-}\,y_jy_k\,\text{d}\sigma_{n-1}(\mathbf{y})=0\,.$$</p>
<p>If $j=k$, then we note from symmetry that
$$\int_{S^{n-1}}\,y_1^2\,\text{d}\sigma_{n-1}(\mathbf{y})=\int_{S^{n-1}}\,y_2^2\,\text{d}\sigma_{n-1}(\mathbf{y})=\ldots=\int_{S^{n-1}}\,y_n^2\,\text{d}\sigma_{n-1}(\mathbf{y})\,.$$
Since $\sum\limits_{i=1}^n\,y_i^2=1$ on $S^{n-1}$,
$$\int_{S^{n-1}}\,y_j^2\,\text{d}\sigma_{n-1}(\mathbf{y})=\frac{1}{n}\,\int_{S^{n-1}}\,\sum_{i=1}^n\,y_i^2\,\text{d}\sigma_{n-1}(\mathbf{y})=\frac{1}{n}\,\int_{S^{n-1}}\,\text{d}\sigma_{n-1}(\mathbf{y})=\frac{1}{n}\,\Sigma_{n-1}\,,$$
where $\Sigma_{n-1}:=\displaystyle \int_{S^{n-1}}\,\text{d}\sigma_{n-1}(\mathbf{y})=\dfrac{2\pi^{\frac{n}{2}}}{\Gamma\left(\frac{n}{2}\right)}$ is the hypersurface area of $S^{n-1}$. Here, $\Gamma$ is the usual gamma function. That is,
$$\frac{1}{n}\,\Sigma_{n-1}=\frac{\pi^{\frac{n}{2}}}{\Gamma\left(\frac{n}{2}+1\right)}\,,$$
which equals the volume of a unit $n$-dimensional hypersphere.</p>
<hr>
<p><strong>Solution Implementing the Divergence Theorem</strong></p>
<p>Let $\lambda_n$ be the Lebesgue measure on $\mathbb{R}^n$ and $\mathbf{e}_1,\mathbf{e}_2,\ldots,\mathbf{e}_n$ the standard basis vectors of $\mathbb{R}^n$. The normal vector to the hypersurface $S^{n-1}$ is given by $$\mathbf{n}=y_1\mathbf{e}_1+y_2\mathbf{e}_2+\ldots+y_n\mathbf{e}_n\,.$$ Write $B^n_r(\mathbf{x})$ for the open ball of radius $r>0$ centered at $\mathbf{x}\in\mathbb{R}^n$. Note from the Divergence Theorem that $$\int_{\partial B^n_1(\boldsymbol{0}_n)}\,y_j\,y_k\,\text{d}\sigma_{n-1}(\mathbf{y})=\int_{\partial B^n_1(\boldsymbol{0}_n)}\,y_k\mathbf{e}_j\cdot\mathbf{n}\,\text{d}\sigma_{n-1}(\mathbf{y})=\int_{B^n_1(\boldsymbol{0}_n)}\,\big(\boldsymbol{\nabla}\cdot(y_k\mathbf{e}_j)\big)\,\text{d}\lambda_n(\mathbf{y})\,.$$ Obviously, $\boldsymbol{\nabla}\cdot(y_k\mathbf{e}_j)=0$ for $j\neq k$ and $\boldsymbol{\nabla}\cdot(y_j\mathbf{e}_j)=1$.</p>
<p>Consequently, the integral $\displaystyle\int_{S^{n-1}}\,y_jy_k\,\text{d}\sigma_{n-1}(\mathbf{y})$ equals $\lambda_n\big(B_1^n(\textbf{0}_n)\big)\,\delta_{j,k}$. Here, $\delta$ is the Kronecker delta. Since $\lambda_n\big(B^n_1(\textbf{0}_n)\big)=\dfrac{1}{n}\,\Sigma_{n-1}$, we get the same result as the first solution.</p>
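<p>Both computations can be spot-checked by Monte Carlo (a Python sketch sampling the uniform measure on $S^{n-1}$ by normalizing Gaussian vectors; in terms of averages, $\frac{1}{\Sigma_{n-1}}\int y_jy_k\,\text{d}\sigma_{n-1}=\delta_{j,k}/n$):</p>

```python
import math
import random

def sample_sphere(n, rng):
    # normalizing a standard Gaussian vector gives a uniform point on S^{n-1}
    v = [rng.gauss(0.0, 1.0) for _ in range(n)]
    r = math.sqrt(sum(x * x for x in v))
    return [x / r for x in v]

rng = random.Random(0)
n, N = 4, 200_000
off_diag = diag = 0.0
for _ in range(N):
    y = sample_sphere(n, rng)
    off_diag += y[0] * y[1]
    diag += y[0] * y[0]

# E[y_j y_k] = 0 for j != k, while E[y_j^2] = 1/n
assert abs(off_diag / N) < 0.01
assert abs(diag / N - 1.0 / n) < 0.01
```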
|
1,046,961 | <p>Find all continuous functions $f:\mathbb{R} \to \mathbb{R}$ such that for all $x \in \mathbb{R}$, $f(x) + f(2x) = 0$ <br/>
I'm thinking; <br/>
Let $f(x)=-f(2x)$ <br/>
Use a substitution $x=y/2$ for $y \in \mathbb{R}$. <br/>
That way $f(y)=-f(y/2)=-f(y/4)=-f(y/8)=....$ <br/>
I'm just not sure if this is a good approach. Opinions please.</p>
| JHalliday | 124,007 | <p>$f(x)=0$ is the only solution.</p>
<p>$$f(0)=-f(2\cdot 0) \iff f(0)=0$$</p>
<p>Suppose that for some $x$ we have $f(x) = \epsilon \neq 0$. But this means that for any $\delta$ and for large enough $n$ we have:</p>
<p>$$\frac{x}{2^n} < \delta$$</p>
<p>$$\large f\bigg(\frac{x}{2^n}\bigg)= (-1)^n\cdot\epsilon$$</p>
<p>But supposing $f$ was continuous, we could choose a $\delta$ small enough that $|f(x)|<|\epsilon|$ for $|x|< \delta$, contradicting $\left|f\left(\frac{x}{2^n}\right)\right| = |\epsilon|$. Thus if this function is continuous, it can have no values other than $0$.</p>
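<p>Continuity is really needed here: the deliberately <em>discontinuous</em> function $f(x)=\cos(\pi\log_2|x|)$ for $x\neq 0$, $f(0)=0$ (my own construction, not from the answer) satisfies $f(x)+f(2x)=0$ everywhere yet oscillates between $\pm1$ on every neighborhood of $0$. A Python sketch:</p>

```python
import math
import random

def f(x):
    # discontinuous at 0: oscillates between +1 and -1 on every scale
    if x == 0:
        return 0.0
    return math.cos(math.pi * math.log2(abs(x)))

rng = random.Random(0)
for _ in range(1000):
    x = rng.uniform(-10.0, 10.0)
    # the functional equation f(x) + f(2x) = 0 holds everywhere ...
    assert abs(f(x) + f(2 * x)) < 1e-9

# ... but f has no limit at 0, so it is not continuous there
assert abs(f(1.0) - 1.0) < 1e-12
assert abs(f(0.5) + 1.0) < 1e-12
assert abs(f(2.0 ** -20) - 1.0) < 1e-9
```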
|
2,411,568 | <p>I would like to find all $(a,b,c)\in\mathbb{N}^3$ such that $a\times 1111+b\times 111+c\times 11=9002$.</p>
<p>It is obvious that $a+b+c \equiv 2 \pmod{10}$. I made a little computer program that basically looped over all possibilities and I found no answer, so I just have to prove it...<br>
I am thinking of an inductive solution, something like:<br>
If no solution is found for $a=a_{1}$, then there is no solution for $a=a_{1}+1$</p>
| N. S. | 9,176 | <p>$$a×1111+b×111+c×11=9002 \\
11 ( 101a +10b+c)= 9002-b$$</p>
<p>It follows that $b=11k+4$ for some $k \in \mathbb N$. The equation then becomes
$$101 a+10b +c=818-k \\
101 a+111k +c=778\\
$$</p>
<p>The problem boils down to solving
$$101a+111k \leq 778$$
as you can set $c=778-101a-111k$. </p>
<p>I claim that
$$101a+111k \leq 778 \Leftrightarrow a+k \leq 7$$</p>
<p><strong>$\Rightarrow$</strong>
$$101a+111k \leq 778 \Rightarrow 101a+101k \leq 778 < 808 \Rightarrow a+k <8$$</p>
<p><strong>$\Leftarrow$</strong></p>
<p>$$101a+111k \leq 111(a+k) \leq 777 \leq 778$$</p>
<p>So you need to count how many solutions the equation $a+k \leq 7$ has, which is very easy to count.</p>
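<p>A brute-force check agrees with this count (a Python sketch, assuming $0\in\mathbb N$; with $a,b,c\ge 1$ solutions still exist, e.g. $(1,4,677)$, so a program finding none likely had a bug):</p>

```python
# assuming N includes 0; shift the ranges if your N starts at 1
solutions = []
for a in range(9):          # 1111 * 9 > 9002
    for b in range(82):     # 111 * 82 > 9002
        r = 9002 - 1111 * a - 111 * b
        if r >= 0 and r % 11 == 0:
            solutions.append((a, b, r // 11))

# every solution has b = 11k + 4, and (a, k) ranges exactly over a + k <= 7
assert all(b % 11 == 4 for _, b, _ in solutions)
assert len(solutions) == sum(1 for a in range(8) for k in range(8 - a))  # 36
assert (1, 4, 677) in solutions  # 1111 + 444 + 7447 = 9002
```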
|
248,658 | <p>Based on the next relation:</p>
<p>$$\det\begin{bmatrix}A & B \\ C & D\end{bmatrix} = \det(A)\det(D - CA^{-1}B),$$</p>
<p>I have that for computing the eigenvalues of the block matrix:</p>
<p>$$\det\begin{bmatrix}A-\lambda I & B \\ C & D-\lambda I\end{bmatrix} = \det(A-\lambda I)\det((D-\lambda I) - C(A-\lambda I)^{-1}B) = 0$$</p>
<p>So $\det(A - \lambda I) = 0$ says that the eigenvalues of $A$ are eigenvalues of the block matrix? But from some numerical simulations I have found that this is not true; what am I missing here? Maybe it is because the first relation requires $A$ nonsingular and $A-\lambda I$ is not? </p>
<p>Then, this leads me to another question, why this expression holds
$$\det\begin{bmatrix}A-\lambda I & 0 \\ C & D-\lambda I\end{bmatrix} = \det(A-\lambda I)\det((D-\lambda I) = 0$$
for stating that the eigenvalues of the block matrix are the eigenvalues of $A$ and $D$ if $A-\lambda I$ is singular?</p>
<p>Many thanks in advance.</p>
| Roberto Belotti | 215,178 | <p>I was about to ask the very same question when I found this one... I know that I'm two years late, but this issue does intrigue me, so I want to say that the formula in the question, although not valid in general, can still be useful in some situations to compute the eigenvalues of a block matrix. In fact if $\lambda$ is an eigenvalue of $\begin{pmatrix}
A & B\\
C & D\\
\end{pmatrix}$ then at least one of the following must be true:</p>
<ol>
<li>$\lambda$ is an eigenvalue of $A$;</li>
<li>$\lambda$ is a solution of $\det(D - \lambda I - C(A-\lambda I)^{-1}B)=0$.</li>
</ol>
<p>The proof of this fact is just the observation made in the question. Note that condition 2 is NOT a polynomial in $\lambda$; I think it's a rational function instead. Thus, to compute the eigenvalues of the block matrix, one could compute the roots of $\det(A - \lambda I)$, then the roots of the numerator of $\det(D - \lambda I - C(A-\lambda I)^{-1}B)$, and then manually check which of those are actual eigenvalues of the big matrix. Ok, I admit that it's not very efficient but perhaps in some special cases it could be useful.</p>
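<p>The Schur-complement identity underlying all of this is easy to confirm numerically on a concrete example (a Python sketch with fixed $2\times2$ blocks and a hand-rolled determinant; helper names are my own):</p>

```python
def det(M):
    # determinant by Gaussian elimination with partial pivoting
    M = [row[:] for row in M]
    n, d = len(M), 1.0
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        if abs(M[p][i]) < 1e-12:
            return 0.0
        if p != i:
            M[i], M[p] = M[p], M[i]
            d = -d
        d *= M[i][i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for j in range(i, n):
                M[r][j] -= f * M[i][j]
    return d

def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(len(Q)))
             for j in range(len(Q[0]))] for i in range(len(P))]

A = [[2.0, 1.0], [0.0, 1.0]]
B = [[1.0, 0.0], [2.0, 1.0]]
C = [[0.0, 1.0], [1.0, 0.0]]
D = [[3.0, 1.0], [1.0, 2.0]]

detA = det(A)
Ainv = [[A[1][1] / detA, -A[0][1] / detA],
        [-A[1][0] / detA, A[0][0] / detA]]
CAB = matmul(matmul(C, Ainv), B)
schur = [[D[i][j] - CAB[i][j] for j in range(2)] for i in range(2)]

big = [A[0] + B[0], A[1] + B[1], C[0] + D[0], C[1] + D[1]]
# det [[A, B], [C, D]] = det(A) * det(D - C A^{-1} B)
assert abs(det(big) - detA * det(schur)) < 1e-9
```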
|
1,409,932 | <p>I am trying to make a random number generator that is sorta special. Basically it generates a number between 5 and 17. The twist that I need math help on is that I want to have a variable "P" that works like the following.</p>
<p>The higher p is, the more likely numbers closer to 17 are to appear; the lower it is, the more likely numbers closer to 5 are to appear.</p>
<p>I have found the perfect function to model this,
1 /( 1 + abs((x - c) / a)^2B)
you can play around with it <a href="https://www.desmos.com/calculator/h7aboouu2x" rel="nofollow">here</a></p>
<p>Anyway when I see that function I am considering the X axis as each of the numbers (5 - 17) and the y axis as the probability that it will be randomly picked. Notice how if you change the value of "C" you change where the hump is. This is what will be the "P" variable. As you can see the higher C is the higher the numbers the hump is over are.</p>
<p>How exactly would I do this mathematically? I have previously taken 1d data, like the results of a die, and turned it into a 2d bar graph, but what I need to do this time is the opposite way around.</p>
<p>Edit: The Gaussian function might work well too
<a href="https://www.desmos.com/calculator/i4aydcx5ts" rel="nofollow">https://www.desmos.com/calculator/i4aydcx5ts</a></p>
| David K | 139,123 | <p>In principle, you can turn a uniform random number generator into a
generator of any probability distribution you want, using the following procedure. (I'll also suggest some simplifications and shortcuts later in the answer.)</p>
<p>Assuming the desired probability distribution has been specified by a
density function $f(x)$, which gives the relative likelihood of observing
a value near $x$, compute the definite integral of that function in order
to produce the cumulative distribution function, $F(x)$.</p>
<p>Of course this means you must first have a valid density function $f(x)$.
This has two important properties: it is never negative
(another way to write this requirement is $f \geq 0$), and
$$\int_{-\infty}^{\infty} f(x) \;dx = 1. $$
The function
$$g(x) = \dfrac{1}{1 + \left|\frac{x - c}{a}\right|^{2B}}$$
that you proposed has the property $g \geq 0$ but it does not generally
satisfy the second condition.
It also has non-zero values below $5$ and above $17$, which
is undesirable to you because that says the random number is not always
between $5$ and $17$.
To fix these problems, we can do a little surgery like this:
\begin{align}
G & = \int_5^{17} g(x) \, dx, \\
\\
f(x) & = \begin{cases}
\frac{g(x)}{G} & \text{if $5 \leq x \leq 17$}, \\
0 & \text{if $x < 5$ or $x > 17$}.
\end{cases}
\end{align}</p>
<p>The cumulative distribution function of $f$ is
$$ F(x) = \int_5^x f(t) \;dt, $$
which is $0$ if $x < 5$ and $1$ if $x > 17$.
In principle, the lower bound of the integral should be $-\infty$,
but since $f(x) = 0$ for all $x < 5$, either lower bound will come out
to the same result.</p>
<p>We then <em>invert</em> the function $F$ on the part of its domain where this is
possible, that is, we define a new function $F^{-1}$
such that $F^{-1}(F(x)) = x$ whenever $F(x)$ is a unique output of $F$.
Since $F$ was a cumulative distribution function, $F^{-1}(t)$ will be
defined only when $0 \leq t \leq 1$.
If we started with a reasonably "nice" distribution in the first place,
we will have only a few "holes" in the inverse function between
$t = 0$ and $t = 1$, and you can assign these to whatever value you
like as long as $F^{-1}$ is non-decreasing (that is, $F^{-1}(t)$
never decreases when you increase $t$).
In your example, you would probably want to set
$F^{-1}(0) = 5$ and $F^{-1}(1) = 17$.</p>
<p>Now generate a random real-valued (not necessarily integer) number $U$
from a uniform distribution over the interval $[0,1]$,
and return $F^{-1}(U)$ as the output of your random number generator
that generates a random number with density $f(x)$.</p>
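<p>As a concrete illustration of the recipe above (my own sketch, not part of the original answer; the parameter values and grid resolution are arbitrary choices of mine), one can tabulate $F$ on a grid and invert it by bisection:</p>

```python
import bisect
import random

def make_sampler(c, a=2.0, B=2.0, lo=5.0, hi=17.0, grid=10_000):
    """Sampler for g(x) = 1/(1 + |(x-c)/a|**(2B)) truncated to [lo, hi]:
    tabulate the CDF F on a grid, then invert it by binary search."""
    xs = [lo + (hi - lo) * i / (grid - 1) for i in range(grid)]
    g = [1.0 / (1.0 + abs((x - c) / a) ** (2 * B)) for x in xs]
    F, total = [], 0.0
    for gi in g:                      # running sum = unnormalized CDF
        total += gi
        F.append(total)
    F = [f / total for f in F]        # normalize so F ends at exactly 1

    def sample(n, rng=None):
        rng = rng or random.Random(0)
        # F^{-1}(U): first grid point whose CDF value reaches U
        return [xs[bisect.bisect_left(F, rng.random())] for _ in range(n)]
    return sample

draws = make_sampler(c=15.0)(5000)    # hump near 15 -> mostly large values
```

<p>Shifting $c$ shifts the hump, exactly as in the Desmos plot: resampling with $c=7$ concentrates the output near the bottom of the range instead.</p>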
<hr>
<p>That's the hard way to do it. The easy way is to just create an
appropriate function to use instead of $F^{-1}$.
This means you don't actually get to use any of the density functions
you graphed, but it saves you the trouble of integrating the density and inverting the resulting cumulative distribution; the Gaussian is notoriously
hard to integrate anyway.</p>
<p>Assuming that you want to use a uniform random number generator whose
minimum value is $U_{\text{min}}$ and whose maximum value is $U_{\text{max}}$,
and that you want the result to be between $5$ and $17$ (inclusive),
you want a non-decreasing function $G$ such that $G(U)$ is defined
for every possible number $U$ that your uniform RNG might produce,
$G(U_{\text{min}}) \geq 5$, and $G(U_{\text{max}}) \leq 17$.
Within those constraints, you can make $G$ be anything you want.</p>
<p>In order to get distributions bunched around $5$ when $P$ is small and
bunched around $17$ when $P$ is large, you make a family of functions
with parameter $P$, much like the way you used the value $C$ as
a parameter to change the functions you were graphing.
For small $P$ you could make $G$ be something like the function graphed in
the left-hand graph below; this will produce mostly values nearer to $5$.
For large $P$ you could make $G$ something like the function graphed in
the right-hand graph below, which will produce mostly values nearer to $17$.
And for middling values of $P$ you can make $G$ like the function
in the middle graph, which tends to concentrate its output around a
particular number between $5$ and $17$.</p>
<p><a href="https://i.stack.imgur.com/S6RhU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/S6RhU.png" alt="three graphs"></a></p>
<p>It may be useful for you to know that if you want the distribution
to be "uniform" over part of the interval between $5$ and $17$
(like the "flat topped" graphs that your first formula sometimes produces),
you can make part of the graph of $G$ be a line segment with constant
positive slope.
For example, if the line segment goes from $G(U_1) = 10$ to $G(U_2) = 15$,
with $U_{\text{min}} \leq U_1 < U_2 \leq U_{\text{max}}$,
all numbers in the range $10$ to $15$ will be (at least approximately)
equally likely.</p>
<p>On the other hand, if you want to completely eliminate some output values
for a certain value of $P$, then the function $G$ for that value of $P$
should "jump over" those values. For example, if you want to prevent any
output less than $10$ when $P$ is large, set $G(U_{\text{min}}) = 10$
in that case.</p>
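<p>A minimal sketch of this "easy way" (my addition; the particular map from $P$ to an exponent $k$ below is an arbitrary choice of mine, just to reproduce the three behaviours pictured above):</p>

```python
import random

def sample_g(P, rng=random):
    """Map a uniform draw u through G(u) = 5 + 12*u**k.
    Small P -> large exponent k -> output bunched near 5.
    Large P -> small exponent k -> output bunched near 17."""
    k = 4.0 ** (1.0 - 2.0 * P)   # P=0 -> k=4, P=0.5 -> k=1, P=1 -> k=1/4
    u = rng.random()             # uniform in [0, 1)
    return 5.0 + 12.0 * u ** k
```

<p>Since $G$ is non-decreasing and maps $[0,1]$ into $[5,17]$, every output lands in range; $P=0.5$ reduces to the plain uniform distribution.</p>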
|
353,630 | <p>Consider a uniform random tournament with <span class="math-container">$n$</span> vertices. (Between any two vertices <span class="math-container">$x,y$</span>, with probability <span class="math-container">$0.5$</span> draw an edge from <span class="math-container">$x$</span> to <span class="math-container">$y$</span>; otherwise draw an edge from <span class="math-container">$y$</span> to <span class="math-container">$x$</span>.) Let <span class="math-container">$S$</span> be the set of all out-degrees. Let <span class="math-container">$s_1$</span> be the largest element of <span class="math-container">$S$</span>, and <span class="math-container">$s_2$</span> the next largest. (If <span class="math-container">$S$</span> is a singleton, let <span class="math-container">$s_2=s_1$</span>.) </p>
<p>Let <span class="math-container">$c\in (0,1)$</span> be a constant. What is <span class="math-container">$\lim_{n\rightarrow\infty}\text{Pr}[s_1-s_2<cn]$</span>?</p>
<p>My guess is that the limit should go to <span class="math-container">$1$</span>, that is, the two largest out-degrees are close to each other compared to the size of the tournament.</p>
| Iosif Pinelis | 36,721 | <p>Your guess is correct, assuming that by <span class="math-container">$x_1$</span> and <span class="math-container">$x_2$</span> you meant <span class="math-container">$s_1$</span> and <span class="math-container">$s_2$</span>. </p>
<p>Indeed, the probability in question is <span class="math-container">$1-p_n$</span>, where
<span class="math-container">\begin{equation}
p_n:=P(\exists i\in[n]\ D_i-\max_{j\in[n]\setminus\{i\}}D_j\ge cn),
\end{equation}</span>
where <span class="math-container">$[n]:=\{1,\dots,n\}$</span> and <span class="math-container">$D_i$</span> is the out-degree of the <span class="math-container">$i$</span>th vertex. We can write
<span class="math-container">\begin{equation}
D_i=\sum_{j\in[n]}D_{ij},
\end{equation}</span>
where
<span class="math-container">\begin{equation}
D_{ij}:=I\{X_{ij}=1\},
\end{equation}</span>
<span class="math-container">$I\{\cdot\}$</span> is the indicator function, and <span class="math-container">$(X_{ij})$</span> is a random <span class="math-container">$n\times n$</span> skew-symmetric matrix whose above-diagonal entries are independent Rademacher random variables (r.v.'s), with <span class="math-container">$P(X_{ij}=\pm1)=1/2$</span> if <span class="math-container">$1\le i<j\le n$</span>. </p>
<p>We need to show that <span class="math-container">$p_n\to0$</span> (as <span class="math-container">$n\to\infty$</span>), for each <span class="math-container">$c\in(0,1)$</span>. In fact,
<span class="math-container">\begin{equation}
p_n\le nP(D_1-\max_{j\in[n]\setminus\{1\}}D_j\ge cn)
\le nP(D_1-D_2\ge cn).
\end{equation}</span>
Next,
<span class="math-container">\begin{equation}
D_1-D_2=D_{12}-D_{21}+\sum_{j=3}^n Y_j,
\end{equation}</span>
where <span class="math-container">$Y_j:=D_{1j}-D_{2j}$</span>, so that the <span class="math-container">$Y_j$</span>'s are iid zero-mean r.v.'s, with <span class="math-container">$|Y_j|\le1$</span>. Also, <span class="math-container">$D_{12}-D_{21}\le1$</span>. So, by (say) <a href="https://en.wikipedia.org/wiki/Hoeffding%27s_inequality#General_case_of_bounded_random_variables" rel="nofollow noreferrer">Hoeffding's inequality</a>, for all large enough <span class="math-container">$n$</span>,
<span class="math-container">\begin{equation}
p_n\le nP(D_1-D_2\ge cn)\le nP\Big(\sum_{j=3}^n Y_j\ge cn-1\Big)\le ne^{-(cn-1)^2/(2(n-2))}\to0,
\end{equation}</span>
as desired. </p>
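<p>One can also check the claim empirically (my addition, not part of the proof): in simulated uniform random tournaments the gap between the two largest distinct out-degrees is typically a small constant, far below <span class="math-container">$cn$</span>.</p>

```python
import random

def top_degree_gap(n, rng):
    """Simulate one uniform random tournament on n vertices and return
    s1 - s2, the gap between the two largest distinct out-degrees."""
    deg = [0] * n
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < 0.5:
                deg[i] += 1          # edge i -> j
            else:
                deg[j] += 1          # edge j -> i
    top = sorted(set(deg), reverse=True)
    return top[0] - top[1] if len(top) > 1 else 0

rng = random.Random(0)
gaps = [top_degree_gap(100, rng) for _ in range(10)]
```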
|
65,220 | <p>Si $f$ es una función continua y si $f(m/2^n)=0$ para todo entero $m$ y todo natural $n$, ¿cómo demuestro que $f(x)=0$ para todo numero real $x$?</p>
<hr/>
<p><em>[translation by mixedmath]</em></p>
<p>If $f$ is a continuous function and if $f\left(\dfrac{m}{2^n}\right) = 0$ for all integers $m$ and all natural $n$, how do I show that $f(x) = 0$ for all real $x$?</p>
| Arturo Magidin | 742 | <p>Let $x\in\mathbb{R}$, and let $\epsilon\gt 0$. Then there exists $\delta\gt 0$ such that for all $z$, if $|x-z|\lt\delta$ then $|f(x)-f(z)|\lt\epsilon$. Let $n$ be a positive integer such that $\frac{1}{2^{n}}\lt \delta$. Then there is an integer multiple of $\frac{1}{2^{n+1}}$ that lies in $(x-\frac{1}{2^n},x+\frac{1}{2^n})$, since the interval has length $\frac{1}{2^{n-1}}$. Let $z=m/2^{n+1}$ be that integer multiple. Then
$\epsilon\gt |f(x)-f(z)| = |f(x)-0| = |f(x)|$.</p>
<p>Thus, $|f(x)|\lt \epsilon$ for all $\epsilon\gt 0$, hence $f(x)=0$. </p>
<hr/>
<p>Sea $x\in\mathbb{R}$, y sea $\epsilon\gt 0$. Existe $\delta\gt 0$ tal que para todo $z$, si $|x-z|\lt\delta$ entonces $|f(x)-f(z)|\lt\epsilon$. Sea $n$ un entero positivo tal que $\frac{1}{2^n}\lt\delta$. Entonces hay un múltiplo entero de $\frac{1}{2^{n+1}}$ en el intervalo $(x-\frac{1}{2^n},x+\frac{1}{2^n})$, pues el intervalo tiene longitud $\frac{1}{2^{n-1}}$. Sea $z=m/2^{n+1}$ ese múltiplo entero. Entonces $\epsilon\gt |f(x)-f(z)| = |f(x)-0| = |f(x)|$.</p>
<p>Por lo tanto, $|f(x)|\lt\epsilon$ para toda $\epsilon\gt 0$, de manera que $f(x)=0$.</p>
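<p>The key step (that any interval of length $\frac{1}{2^{n-1}}$ contains a dyadic rational) can be made concrete. Here is a small sketch of mine, not part of the proof, that finds a dyadic rational within $\delta$ of any real $x$ by rounding $x\cdot 2^n$:</p>

```python
def dyadic_near(x, delta):
    """Return (m, n) with |x - m / 2**n| < delta: a dyadic rational
    m / 2**n inside the interval (x - delta, x + delta)."""
    n = 0
    while 2.0 ** -n >= delta:    # make the grid spacing finer than delta
        n += 1
    m = round(x * 2 ** n)        # |x*2**n - m| <= 1/2, so error <= 2**-(n+1)
    return m, n
```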
|
89,987 | <p>In the sequence: </p>
<p>$$\lim _{n\rightarrow \infty }{\frac {n+1}{2\,n+3}}\neq 1$$</p>
<p>I know how to prove that the limit is actually $1/2$, but is there another way to prove that 1 is NOT the limit? </p>
<p>I tried a proof by contradiction, assuming that 1 is the limit and showing that no $N$ works for every epsilon, but got a bit lost. I know this is trivial but would appreciate the help.
Thanks</p>
| David Mitra | 18,986 | <p>Let $\epsilon=1/4$. Let $N$ be given. Then
$${n+1\over 2n+3}\le {n \over 2n}\le{1\over2}\quad\Rightarrow|{n+1\over 2n+3} -1|\ge 1/2.$$
This is true for all $n\ge N$. </p>
<p>So, there is no $N$ such that $|{n+1\over 2n+3} - 1|<\epsilon$
for all $n\ge N$. This shows the sequence does not converge to 1.</p>
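<p>A quick numerical check of mine (not part of the answer) confirms the picture: every term stays at distance at least $\frac12$ from $1$, while the terms do approach $\frac12$.</p>

```python
# terms of the sequence (n+1)/(2n+3) for n = 1, ..., 10000
terms = [(n + 1) / (2 * n + 3) for n in range(1, 10_001)]
# distance from 1 of the term closest to 1
worst = min(abs(t - 1) for t in terms)
```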
|
4,491,385 | <p><a href="https://i.stack.imgur.com/ILqh9.png" rel="nofollow noreferrer">This is the graph plotted by Desmos for the inequality <span class="math-container">$~\log_{0.2}\left(x^{2}-x-2\right)>\log_{0.2}\left(-x^{2}+2x+3\right)$</span></a></p>
<p>Here, you can see that the plotted interval is <span class="math-container">$~x \in [2, 2.5)~$</span> but the log is not defined at $x=2$ (it becomes <span class="math-container">$~\log_{0.2}{0}$</span>, which is undefined).</p>
<p>I believe the answer should have been <span class="math-container">$x \in (2, 2.5)$</span></p>
| bobeyt6 | 1,017,316 | <p>The answer is <span class="math-container">$(2, 2.5)$</span>. There aren't dash marks on <span class="math-container">$x=2$</span> because the function is undefined there and Desmos doesn't know what to do with it. If it were really <span class="math-container">$[2, 2.5)$</span>, you would see a bolded line on <span class="math-container">$x=2$</span> as shown below. <a href="https://i.stack.imgur.com/IiA7J.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IiA7J.png" alt="enter image description here" /></a></p>
|
1,092,957 | <p>Given sets of natural numbers $S_1$ and $S$, and a number $N$.</p>
<p>The specifications of the sets are as follows.</p>
<p>$$S = \{1,\dots N\}$$
$$S_1\subset S \;and \;S_1=\{b_1,b_2,\dots,b_m\}$$</p>
<p>And $$S' = S\setminus S_1$$</p>
<p>Find the smallest natural $n$ that cannot be written as a sum of elements of $S'$.</p>
<p>Note:
A number can't be used more than once.</p>
<p>e.g. if $S=\{1,2,3\}$ and $S_1=\{3\}$, then $S' = \{1,2\}$; we can't form $4$ using $S'$, so the answer is $4$.</p>
<p>Can we design an algorithm to solve this problem using the knowledge of the set $S_1$ and $N$?</p>
| Tom-Tom | 116,182 | <p>A possible rather stupid way to do it is to compute the sum $M$ of all elements of $S'$. Form an array $a$ (filled with zeroes) indexed by the numbers from $1$ to $M$. Then compute all the possible sums of elements of $S'$ (there are $2^N-1$ such sums, where $N=\text{Card}\, S'$ is the number of elements of $S'$) and for each resulting sum $i$, set the value $a_i$ to $1$. Once you've done it, the smallest value $i$ such that $a_i=0$ is the searched result; if every entry is $1$, the answer is $M+1$.</p>
<p>There are many ways to improve this algorithm, but that's a start.</p>
<p>If you want to improve, use polynomials and compute the product
$$ P(X)=\prod_{p\in S'}(1+X^p)$$
The coefficient of $X^k$ in $P(X)$ is actually the number of ways to form $k$ as a sum of elements of $S'$. So you just have to look for the smallest $k$ such that the coefficient of $X^k$ is zero. This technique does not improve efficiency.</p>
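<p>The polynomial product can be implemented compactly with a Python integer used as a bitmask, where bit $k$ of the mask records whether the coefficient of $X^k$ is nonzero, i.e. whether $k$ is achievable. (A sketch of mine; it tracks only achievability, not the number of representations.)</p>

```python
def smallest_unreachable(values):
    """Smallest positive integer that is not a sum of a subset of
    `values`, each element used at most once."""
    reachable = 1                    # bit k set <=> sum k is achievable
    for v in values:
        reachable |= reachable << v  # multiply by (1 + X**v), coefficients 0/1
    k = 1
    while reachable >> k & 1:
        k += 1
    return k
```

<p>For the question's example ($S=\{1,2,3\}$, $S_1=\{3\}$, so $S'=\{1,2\}$) this returns $4$.</p>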
|
1,092,957 | <p>Given sets of natural numbers $S_1$ and $S$, and a number $N$.</p>
<p>The specifications of the sets are as follows.</p>
<p>$$S = \{1,\dots N\}$$
$$S_1\subset S \;and \;S_1=\{b_1,b_2,\dots,b_m\}$$</p>
<p>And $$S' = S\setminus S_1$$</p>
<p>Find the smallest natural $n$ that cannot be written as a sum of elements of $S'$.</p>
<p>Note:
A number can't be used more than once.</p>
<p>e.g. if $S=\{1,2,3\}$ and $S_1=\{3\}$, then $S' = \{1,2\}$; we can't form $4$ using $S'$, so the answer is $4$.</p>
<p>Can we design an algorithm to solve this problem using the knowledge of the set $S_1$ and $N$?</p>
| Waffle | 55,527 | <p>EDIT: I just realized how simple this problem is. Here's the solution in Lua:</p>
<p><code>S={1,1,1,1,5,7,10,25,50,75,100,250,500,1000}
table.sort(S)

x=1
for i=1,#S do
    if S[i]<=x then
        x=x+S[i]
    end
end

print(x)</code></p>
<p>Starting at $x=1$, go through the sorted values in ascending order, adding each value to $x$ whenever it is less than or equal to $x$; stop when a value exceeds $x$ (or when all values have been used). The final $x$ is the smallest sum that cannot be made.</p>
<p>ex.<code>given the set {1,1,1,1,5,9,12,31,50,100,300}
1 = 1
2 = 1+1
3 = 1+1+1
4 = 1+1+1+1
5 = 5
...
8 = 5+1+1+1
9 = 9
...
12 = 9+1+1
...
30 = 12+9+5+1+1+1+1
31 = 31
...
49 = 31+12+5+1
50 = 50
...
100 = 50+31+12+5+1+1
...
211 = 100+50+31+12+9+5+1+1+1+1
212 = X (smallest value which cannot be made)</code></p>
|
4,610,313 | <p>I am working on AoPS Vol. 2 exercises in Chapter 1 and attempting to solve the below problem:</p>
<blockquote>
<p>Given that <span class="math-container">$\log_{4n} 40\sqrt{3} = \log_{3n} 45$</span>, find <span class="math-container">$n^3$</span> (MA<span class="math-container">$\Theta$</span> 1991).</p>
</blockquote>
<p>My approach is to isolate <span class="math-container">$n$</span> and then cube it. Observe:
<span class="math-container">\begin{align*}
\frac{\log 40\sqrt{3}}{\log 4n} = \frac{\log 45}{\log 3n} \\
\log 40\sqrt{3}\log 3n = \log 45\log 4n\\
\log 40\sqrt{3} \cdot (\log 3 + \log n) = \log 45 \cdot (\log 4 + \log n)\\
\log n \cdot (\log 40\sqrt{3} - \log 45) = \log 45\log 4 - \log 40\sqrt{3}\log 3
\end{align*}</span></p>
<p>Dividing through and putting the coefficients as powers, we have:
<span class="math-container">\begin{align*}
\log n &= \frac{\log 45^{\log 4} - \log \left[(40\sqrt{3})^{\log 3}\right]}{\log\left(\frac{40\sqrt{3}}{45}\right)}
=\frac{\log \left(\frac{45^{\log 4}}{(40\sqrt{3})^{\log 3}}\right) }{\log\left(\frac{40\sqrt{3}}{45}\right)} \\
&=\log \left(\frac{45^{\log 4}}{(40\sqrt{3})^{\log 3}}\right)^{\log\left(\frac{40\sqrt{3}}{45}\right)^{-1}}
\end{align*}</span></p>
<p>which shows that</p>
<p><span class="math-container">\begin{align*}
n^3 = \left(\frac{45^{\log 4}}{(40\sqrt{3})^{\log 3}}\right)^{3\cdot\log\left(\frac{40\sqrt{3}}{45}\right)^{-1}}
\end{align*}</span></p>
<p>Somehow it feels like this answer may be simplified further. Are the steps shown so far correct and can the answer be expressed in a better way?</p>
| boojum | 882,145 | <p>We could also take a "factorization" approach. Since
<span class="math-container">$$ \log_{4n} 40\sqrt{3} \ \ = \ \ \log_{3n} 45 \ \ = \ \ \alpha \ \ , $$</span></p>
<p>we can write
<span class="math-container">$$ (4n)^\alpha \ \ = \ \ 2^3·3^{1/2}·5 \ \ \ , \ \ \ (3n)^\alpha \ \ = \ \ 3^2·5 $$</span> <span class="math-container">$$ \Rightarrow \ \ n^\alpha \ \ = \ \ 2^{3 \ - \ 2·\alpha}·3^{1/2}·5 \ \ = \ \ (2^0)·3^{2 \ - \ \alpha}·5 \ \ . $$</span>
The "primes-raised-to-powers" in each of these factorizations need to "match", so we have
<span class="math-container">$$ 3 \ - \ 2·\alpha \ \ = \ \ 0 \ \ \ , \ \ \ \frac12 \ \ = \ \ 2 \ - \ \alpha \ \ , $$</span>
both of which tell us that <span class="math-container">$ \ \alpha \ = \ \frac32 \ \ . $</span></p>
<p>Thus, we have <span class="math-container">$$ n^3 \ \ = \ \ (n^\alpha)^2 \ \ = \ \ (2^{3 \ - \ 2·\alpha}·3^{1/2}·5)^2 $$</span> <span class="math-container">$$ = \ \ 2^{6 \ - \ 4·\alpha}·3·25 \ \ = \ \ 2^{6 \ - \ 4·[3/2]}·3·25 \ \ = \ \ 2^0·75 $$</span>
or
<span class="math-container">$$ n^3 \ \ = \ \ 3^{4 \ - \ 2·\alpha}·5^2 \ \ = \ \ 3^{4 \ - \ 2·[3/2]}·25 \ \ = \ \ 3·25 \ \ . $$</span></p>
<p>[Since this problem has so few "moving parts", the posted answers end up being variations on the same concept.]</p>
|
268,308 | <p>In composing a proof that is reliant on proven theorems, does one simply assume the reader's familiarity with said theorems, or does one explicitly include their logic in the new logic? </p>
| amWhy | 9,003 | <p>It's a judgment call, <em>depending, in part, on the audience you are addressing</em>, and <em>your purpose</em> for writing a proof, or a paper with proofs, etc. And so, as Asaf answers, the answer to your question: "depends on the context."</p>
<p><em>If you are writing proofs for classes</em> (assigned or recommended), for example, it is best to err on the side of caution and be more explicit rather than less, and include more rather than less. One rarely loses marks for including more information than needed, whereas it is common to lose marks for not including <em>enough</em>. So when justifying a statement in such a proof: you can refer to proofs/theorems in your class text <em>provided they have been covered in class</em>. You can often do so by referencing the <em>number/letter</em> used in the text or in class, or by referring to it by using a commonly-used name of a particular theorem. E.g., "By the Fundamental Theorem of Arithmetic, it follows that...".</p>
<p>It is often helpful, when writing proofs for classes, to <em>also justify</em> an assertion by referring to definition(s) that your assertion relies upon: e.g. "by the definition of congruence modulo n, we know that...", without needing to restate the entire definition, unless you are <em>introducing an unfamiliar term</em> that you plan to use in your proof. </p>
<p>These suggestions are just as much for your (present and future) benefit as they are for your audience, and/or for demonstrating mastery of the material relevant to the statement you are asked to prove. Of course, for proofs required in coursework or in an exam, I'd highly recommend that you <em>consult your instructor on this matter</em>. If preparing for preliminary examinations, you can access and work out some problems from past prelims, and discuss with your advisor whether your proofs/solutions are adequate: (What should I have included? What could I have excluded? etc). </p>
<p><strong>Attachment I</strong>: </p>
<p>You might find the following exposition written by Dr. John M. Lee helpful: </p>
<ul>
<li><a href="http://www.math.washington.edu/~lee/Writing/writing-proofs.pdf" rel="nofollow noreferrer"><strong><em>Some remarks on writing mathematical proofs</em></strong> (pdf)</a>, also available for downloading from <a href="http://www.math.washington.edu/~lee/index.html" rel="nofollow noreferrer"><em>John Lee's website</em></a>. It provides very sound advice on writing math proofs, discussing, among other things, considerations with respect to the intended audience, etc.</li>
</ul>
<p><strong>Internet Bibliography</strong>:<br></p>
<p>If interested, you might want to explore the links available at </p>
<ul>
<li><a href="http://learntofish.wordpress.com/2009/10/06/how-to-write-math-proofs/" rel="nofollow noreferrer"><strong><em>How to Write Math Proofs</em></strong></a>.</li>
</ul>
<p>See also the previous post: <a href="https://math.stackexchange.com/questions/62298/what-can-the-writer-assume-in-a-proof?rq=1"><strong><em>What can a writer assume in a proof?</em></strong></a>.</p>
<hr>
|
278,971 | <p>How would proving or disproving the Twin Prime Conjecture affect proving or disproving the Riemann Hypothesis? What are the connections between both conjectures if any?</p>
| Benjamin Dickman | 37,122 | <p><strong>Edit 2:</strong> For a more tenuous connection between these two theorems, one person whose name might crop up someday for both (please note I am just speculating here!) is that of Fields Medalist <a href="http://en.wikipedia.org/wiki/Atle_Selberg" rel="nofollow noreferrer">Atle Selberg</a>. His <a href="http://en.wikipedia.org/wiki/Selberg_sieve" rel="nofollow noreferrer">Selberg sieve</a> underlies some of Zhang and his contemporaries' work on the weakened version of the Twin Prime Conjecture (see Edit 1 below); meanwhile, the <a href="http://en.wikipedia.org/wiki/Selberg_trace_formula" rel="nofollow noreferrer">Selberg trace formula</a> may (this is the speculation part) someday be used in a proof of the Riemann Hypothesis. For these latter connections, you could either look up `The Selberg trace formula and the Riemann zeta function' in <a href="http://scholar.google.com/scholar?hl=en&q=The%20Selberg%20trace%20formula%20and%20the%20Riemann%20zeta%20function&btnG=&as_sdt=1,33&as_sdtp=" rel="nofollow noreferrer">google scholar</a> or note that Paul Cohen strongly believed this might be a way to get at RH. Cohen's beliefs are alluded to in the <a href="http://www.ams.org/notices/201007/rtx100700824p.pdf" rel="nofollow noreferrer">AMS piece</a> on his passing, e.g., by Peter Sarnak, though one must note that Cohen spent much of his post-CH life stagnating on this problem. Anyhow, as for connections between these two works of Selberg: None is apparent to me other than their originator; but perhaps this is fodder for another MSE question.</p>
<hr>
<p><strong>Edit 1:</strong> Note that the most recent progress on a weakened version of the Twin Prime Conjecture made use of the Riemann Hypothesis <em>for varieties over finite fields</em> (see Yitang Zhang's pre-print, available <a href="http://annals.math.princeton.edu/wp-content/uploads/YitangZhang.pdf" rel="nofollow noreferrer">here</a>, p. 6). That is, it made use of the Weil Conjectures; in particular, using methods from Deligne's proof rather than Dwork's (low-level: see <a href="https://math.stackexchange.com/questions/241863/learning-path-to-the-proof-of-the-weil-conjectures-and-etale-topology/">here</a>; high-level: see <a href="https://mathoverflow.net/questions/18964/dworks-use-of-p-adic-analysis-in-algebraic-geometry">here</a>). For more on YTZ's recent work, see the evolving question/answers on MO <a href="https://mathoverflow.net/questions/131185/philosophy-behind-yitang-zhangs-work-on-the-twin-primes-conjecture">here</a>.</p>
<hr>
<p>A couple quotations from <a href="http://en.wikipedia.org/wiki/Daniel_Goldston" rel="nofollow noreferrer">Dan Goldston</a> in <a href="http://www.math.sjsu.edu/~goldston/twinprimes.pdf" rel="nofollow noreferrer">this paper</a>:</p>
<p>"While the Riemann Hypothesis is decisive in determining the distribution of primes, it seems to be of little help with regard to twin primes."</p>
<p>"The conjecture that the distribution of twin primes satisfies a Riemann Hypothesis type error term is well supported empirically, but I think this might be a problem that survives the current millennium."</p>
<p>However, the first part of your question is more general, in that you ask about how <em>proving</em> one of these conjectures/hypotheses would affect the other. That is a more nebulous question, because it's hard to predict what sort of machinery will ultimately be developed to resolve these questions.</p>
<p>If this answer sounds like a bit of a downer to you, perhaps there is a silver lining: given two large conjectures that don't really have much bearing on one another, it would/will be interesting to see how the resolution of both could be applied elsewhere. If they were intimately connected, solving one might knock the other one off; with their relation tenuous at best, it ought to take some wonderful mathematics to dispose of both. It remains to be seen what can be done with the machinery resulting from each - I just hope it is seen in our lifetimes!</p>
|
737,054 | <p>Express the vector $\vec{u}$ below as a sum of two vectors $\vec{u}_1$ and $\vec{u}_2$, where $\vec{u}_1$ is parallel to the vector $\vec{v}$ given below, and $\vec{u}_2$ is perpendicular to $\vec{v}$. Make sure that the first vector in your sum is $\vec{u}_1$ and the second is $\vec{u}_2$.</p>
<p>$\vec{v} = [-3,-1,1]$</p>
<p>$\vec{u} = [-2,-10,6] = [-,-,-]+[-,-,-] = \vec{u}_1 + \vec{u}_2$</p>
<p>NOTE: Everything is column vectors but I don't know how to write them like that. I need to fill in the two vectors where the dashes are. Can someone show me how to do this? I don't know how to solve these questions. </p>
| user139388 | 139,388 | <p>Write
$$
u
= proj_v u + (u-proj_v u)
:= u_1 + u_2.
$$
So in your case you would get
$$
u_1
= \left(\frac{u\cdot v}{v \cdot v}\right)v
= \frac{22}{11}(-3,-1,1)
= (-6,-2,2)
$$
and
$$
u_2
= (-2,-10,6) - (-6,-2,2)
= (4,-8,4).
$$</p>
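<p>The same computation in code (a small sketch of mine, not part of the original answer):</p>

```python
def decompose(u, v):
    """Split u into u1 = proj_v(u), parallel to v, and u2 = u - u1,
    perpendicular to v."""
    uv = sum(a * b for a, b in zip(u, v))   # dot product u . v
    vv = sum(b * b for b in v)              # dot product v . v
    u1 = [uv / vv * b for b in v]
    u2 = [a - w for a, w in zip(u, u1)]
    return u1, u2

u1, u2 = decompose([-2, -10, 6], [-3, -1, 1])
```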
|
2,798,338 | <p>Let $G$ be a group of order 20, Prove that $G$ has a normal subgroup of order 5.</p>
<p>Obviously by Sylow's theorem there is a subgroup of order 5, and since all Sylow $p$-subgroups are conjugate, the only problem is to show that there is only one subgroup of order 5.</p>
<p>any help would be appreciated</p>
| N8tron | 32,820 | <p>As you said, Sylow's theorem gives that there is a Sylow $5$-subgroup.</p>
<p>The number of Sylow $5$-subgroups, $n_5$, divides $4$, and $n_5$ is congruent to $1$ modulo $5$.</p>
<p>So $n_5=1+5k$ for some integer $k \ge 0$, and $n_5=1+5k$ divides $4$. If $k>0$ then $n_5 \ge 6 > 4$, contradicting that $n_5$ divides $4$. Thus $n_5=1$, so the Sylow $5$-subgroup is unique; since conjugation permutes the Sylow $5$-subgroups, the unique one is normal.</p>
|
406,966 | <p>Define a graph with vertex set <span class="math-container">$\mathbb{R}^2$</span> and connect two vertices if they are unit distance apart. The famous <a href="http://en.wikipedia.org/wiki/Hadwiger%E2%80%93Nelson_problem" rel="nofollow noreferrer">Hadwiger-Nelson problem</a> is to determine the <a href="http://en.wikipedia.org/wiki/Graph_coloring#Vertex_coloring" rel="nofollow noreferrer">chromatic number</a> <span class="math-container">$\chi$</span> of this graph. For the problem as stated (there are many variations), the best known bounds are <span class="math-container">$4 \leq \chi \leq 7$</span>. The lower bound comes from the existence of a clever subgraph on just seven vertices that is readily seen to require four colors. The upper bound comes from a straightforward tiling of the plane by monochromatic hexagons that admits a proper <span class="math-container">$7$</span>-coloring.</p>
<p>I like to show this problem to young students because they are always fascinated by the fact that we can cover "everything" that is known about it in just a few minutes. What are some other problems of this type?</p>
<blockquote>
<p>What are some famous problems for which the best known results are fairly obvious or elementary?</p>
</blockquote>
<p>Update: Aubrey de Grey has recently improved the <a href="https://arxiv.org/abs/1804.02385" rel="nofollow noreferrer">lower bound</a> to 5 using an elaborate subgraph, so we now know slightly more than the elementary.</p>
| MJD | 25,554 | <p>The first four perfect numbers have been known for at least 2300 years. Here are their prime factorizations:</p>
<p>$$\begin{align}
6 & = 2^{\hphantom{2}}\cdot 3 \\
28 & = 2^2\cdot 7 \\
496 & = 2^4\cdot 31 \\
8128 & = 2^6\cdot 127
\end{align}
$$</p>
<p>Any knucklehead, looking at these, or perhaps even just looking at the first two, could make the obvious conjecture that they are all of the form $2^{p-1}(2^p-1)$ for prime $2^p-1$. </p>
<p>It's easy to show that all numbers of that type are perfect. (And indeed, Fibonacci's <em>Liber Abaci</em>, published in 1202, makes this observation.)
All known perfect numbers are of that type. Euler proved that all <em>even</em> perfect numbers are of that type. But… </p>
|
63,763 | <p>For elliptic curve $y^{2}=x(x+a^{2})(x+(1-a)^{2})$,($a$ is a rational number and does not equal 0,1,1/2),is its rank always 0?</p>
| James Weigandt | 4,872 | <p>Your family of elliptic curves are exactly those elliptic curves with torsion subgroup containing $\Bbb Z / 2 \Bbb Z \times \Bbb Z / 4 \Bbb Z$. To see this make the linear change of variables $t = a/(1-a)$ and look at the parameterizations on page 101 of Husemöller's book "Elliptic Curves". </p>
<p>So in particular there are values of $a$ for which your elliptic curve has Mordell-Weil rank 8, the first of which was discovered by Elkies in 2005. See <a href="http://web.math.hr/~duje/tors/tors.html">Dujella's website</a>. Moreover, Eroshkin found that there are infinitely many such curves with rank at least 5.</p>
<p>These happen to be my favorite elliptic curves. I'd be very interested to know why you were interested in them.</p>
|
599,232 | <p>Q) An object moving along a curve in the xy-plane has position $(x(t), y(t))$ at time $t$, with $dx/dt = \cos(\frac{3}{2} t^2)$ and $dy/dt = 3\sin(t^2)$ for $0 \le t \le 3$. At $t=2$, the object is at $(4,5)$. Write the equation for the tangent line at this point.</p>
<p>Comments:
First and foremost, what should I understand from $(x(t), y(t))$? I don't recall ever seeing a question set up this way.</p>
<p>I know very basic rules when it comes to finding a tangent line to a curve aka y=mx+b, take the derivative, plug in x and y values, etc. Upon reading this question I am completely baffled. I wish I had more to go on but my research online has been little to no help and I cannot find similar examples in my text book. Please help!</p>
| user99680 | 99,680 | <p>The distance from $(2,0)$ to $(20,0)$ + the distance from $(2,0)$ to $(0,0)$ equals the radius.</p>
|
599,232 | <p>Q) An object moving along a curve in the xy-plane has position $(x(t), y(t))$ at time $t$, with $dx/dt = \cos(\frac{3}{2} t^2)$ and $dy/dt = 3\sin(t^2)$ for $0 \le t \le 3$. At $t=2$, the object is at $(4,5)$. Write the equation for the tangent line at this point.</p>
<p>Comments:
First and foremost, what should I understand from $(x(t), y(t))$? I don't recall ever seeing a question set up this way.</p>
<p>I know very basic rules when it comes to finding a tangent line to a curve aka y=mx+b, take the derivative, plug in x and y values, etc. Upon reading this question I am completely baffled. I wish I had more to go on but my research online has been little to no help and I cannot find similar examples in my text book. Please help!</p>
| André Nicolas | 6,312 | <p>Let the centre of the circle be $O$, and let the point $(2,0)$ be $P$. Draw a line $PQ$ to the periphery of the circle, making an angle $\theta$ with the positive $x$-axis. We want to find the length of $PQ$.</p>
<p>Consider the triangle $OPQ$. We have $\angle OPQ=180^\circ-\theta$. By the Cosine Law, with $x=PQ$, we have
$$100=x^2+4-(2x)(2\cos(180^\circ-\theta))=x^2+4+4x\cos\theta.$$
This is a quadratic equation in $x$: Solve.</p>
|
599,232 | <p>Q)An object moving along a curve in the xy-plane has position ( x(t), y(t) ) at time t, with dx/dt = cos (3/2 t^2) and dy/dt = 3sin(t^2) FOR 0 <= t <=3 at t=2, the object is at (4,5). write the equation for the tangent line at this point.</p>
<p>Comments:
First and foremost, what should I understand from (x(t), y(t))? I don't recall ever seeing a question set up this way.</p>
<p>I know very basic rules when it comes to finding a tangent line to a curve, i.e. y = mx + b: take the derivative, plug in the x and y values, etc. Upon reading this question I am completely baffled. I wish I had more to go on, but my research online has been little to no help and I cannot find similar examples in my textbook. Please help!</p>
| Remco | 463,495 | <p>A different solution, without having to solve an equation, is to rotate the axes back and forth (more suitable for mathematical programs).</p>
<p><strong><em>r</em></strong> is the radius of the circle.</p>
<p><strong><em>O</em></strong> is the origin at [0, 0].</p>
<p><strong><em>P</em></strong> is any point within the circle [Px, Py].</p>
<p><strong><em>Q</em></strong> is point at perimeter of the circle</p>
<p><strong><em>θ</em></strong> is angle from point <strong><em>P</em></strong> to <strong><em>Q</em></strong> positive with x-axis</p>
<p><strong><em>R</em></strong> is the rotation matrix with R = [cosθ -sinθ; sinθ cosθ]</p>
<p><strong><em>R'</em></strong> is the inverse rotation matrix</p>
<p>Now rotate the axes so that the x-axis is parallel to PQ.
Describe points P and Q as P' and Q' in the new axis orientation.</p>
<p><strong><em>P'</em></strong> = R'P</p>
<p>Because of this parallel alignment, the following formulas are true:</p>
<p><strong><em>Q'</em></strong> = [Q'x, Q'y] = [r * cosφ, r * sinφ],
with φ is angle from <strong><em>O</em></strong> to <strong><em>Q</em></strong> (positive with rotated x-axis)</p>
<p><strong><em>Q'y</em></strong> = P'y</p>
<p>Solving for φ from <strong><em>Q'y</em></strong> = <strong><em>P'y</em></strong> and substituting gives:</p>
<p><strong><em>Q'x</em></strong> = r * sin(arccos(P'y/r))</p>
<p>Now all that is left to do is rotate the axes back to how they were.</p>
<p><strong>Q</strong> = RQ'</p>
<p>You can now use Pythagoras to get the length <strong><em>PQ</em></strong>.</p>
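<p>As a sanity check, here is a minimal Python sketch of this back-and-forth rotation (assuming, as in the other answer, a circle of radius 10 centred at the origin and the interior point P = (2, 0); the function name is mine):</p>

```python
import math

def chord_length(r, P, theta):
    """Length PQ from an interior point P to the perimeter, in direction theta."""
    px, py = P
    c, s = math.cos(theta), math.sin(theta)
    # P' = R'P: rotate the axes so the new x-axis is parallel to PQ
    ppx, ppy = c * px + s * py, -s * px + c * py
    # In the rotated frame Q'_y = P'_y, and Q' lies on the circle of radius r
    qpx = r * math.sin(math.acos(ppy / r))  # = sqrt(r^2 - P'_y^2)
    # Q = R Q': rotate back (rotation preserves distances either way)
    qx, qy = c * qpx - s * ppy, s * qpx + c * ppy
    return math.hypot(qx - px, qy - py)
```

<p>For θ = 0 this returns 8, agreeing with the cosine-law equation $100=x^2+4+4x\cos\theta$.</p>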
|
325,890 | <p>To evaluate this integral I have got to use Parseval's Theorem and the Fourier transform of</p>
<p>$$s(x)=\begin{cases}
0 & x\leq -a \\
1 & -a<x<a \\
0 & x \geq a.\end{cases}$$</p>
<p>This works out to be</p>
<p>$$\tilde{s}(k)=\frac{e^{ika}}{ik}-\frac{e^{-ika}}{ik}$$</p>
<p>I have then changed this into sin and cos and got to the stage of</p>
<p>$$|\tilde{s}(k)|^{2}=\left|\frac{-4\sin^{2}(ka)}{-k^{2}}\right|$$ </p>
<p>but this is as far as I have got and I can't figure out what to do from here.</p>
| Ron Gordon | 53,268 | <p>The FT of $f(x) = \sin{x}/x$ is</p>
<p>$$\hat{f}(k) = \begin{cases} \pi & |k| < 1 \\ 0 & |k| > 1 \end{cases}$$</p>
<p>Parseval's theorem then implies that</p>
<p>$$\int_{-\infty}^{+\infty} dx \frac{\sin^2{x}}{x^2} = \frac{1}{2 \pi} \int_{-1}^1 dk \: \pi^2 = \pi $$</p>
|
983,923 | <p>Given that $A$ and $B$ are subsets of $C$ and $f:C\to D$,
$$f(A\cap B)\subseteq f(A) \cap f(B)$$ and the equality holds if the function is injective.</p>
<p>But why, for the preimage, with $E$ and $F$ subsets of $D$, does
$$f^{-1}(E \cap F) = f^{-1}(E) \cap f^{-1}(F)$$
hold without requiring that the function is injective? So if
$$x\in f^{-1}(E) \cap f^{-1}(F)$$
$$x\in f^{-1}(E) \text{ and } x\in f^{-1}(F)$$
This means that there exist elements $y_1 \in E$ and $y_2 \in F$ with $f(x)=y_1$ and $f(x)=y_2$. So how do we know that these two elements are equal?</p>
<p>I am an independent learner, so I hope I can get an explanation in more detail.</p>
| Liealgebrabach | 185,117 | <p>By $f^{-1}$ they don't mean the inverse function but the preimage under $f$. Maybe once you have studied that definition, it will become more evident!</p>
|
2,900,112 | <p>Let $f:\mathbb{R}^n \to \mathbb{R}$. Let $x_0 \in \mathbb{R}^n$. Assume $n-1$ partials exist in some open ball containing $x_0$ and are continuous at $x_0$, and the remaining $1$ partial is assumed only to exist at $x_0$. A well known result states that this implies $f$ is differentiable at $x_0$. </p>
<p>My question is whether or not this can be strengthened. Can we replace "$n-1$" in the above theorem with some function $g(n)$ "smaller" than $n-1$, and replace "remaining $1$ partial" with "remaining $n-g(n)$ partials"? </p>
<p>Feel free to play with assumptions slightly. For instance, you can replace "continuous at $x_0$" with "continuous at $x_0$ and in some open ball containing $x_0$". </p>
| Paul Frost | 349,785 | <p>For $n = 2$ it is not sufficient to assume that the 2 partial derivatives merely exist.</p>
<p>This generalizes to abitrary $n$. Assume we could take $g(n) = n-2$. Consider any function $f : \mathbb{R}^2 \to \mathbb{R}$ and define $F : \mathbb{R}^n \to \mathbb{R}, F(x_1,\ldots,x_n) = f(x_1,x_2)$. We have $\frac{\partial F}{\partial x_i}(\xi^0) = 0$ for $i= 3,\ldots,n$ and could therefore conclude that $F$ is differentiable at $\xi^0$ if the first two partials exist at $\xi^0$. But this would imply that $f$ is differentiable at $(\xi_1^0,\xi_2^0)$ which is not true in general. </p>
|
2,759,112 | <p>Are there cases where two events have high probability (i.e. each of them has high probability) but, at the same time, the probability that they both obtain is low? If yes, could you please provide a simple, daily-life example?</p>
| manofbear | 230,268 | <p>Coin flip: probability of heads is $1/2$, probability of tails is $1/2$, probability of both is $0$.</p>
|
4,447,329 | <p>There is an interesting inequality I've stumbled upon on the Internet. It has logarithms and trigonometry, but in contrast to something like <a href="https://math.stackexchange.com/questions/1029060/inequality-with-a-mixture-of-logs-and-trig">this</a> that uses trigonometry and logarithms separately, this one takes such kind of inequality to a whole new level!</p>
<p>See for yourself. The task is to solve the following inequality over reals:</p>
<p><span class="math-container">$$\cos(\log_{4x}(x+1))-\cos(\log_{4x}(4-x))\lt\log_{4x}(4-x)-\log_{4x}(x+1)$$</span></p>
<p>I've found the domain of the inequality by applying the restrictions on real-valued logarithms: the base must be positive and different from one, and the argument must be positive.</p>
<p><span class="math-container">$\begin{cases}4x\gt0,\\4x\neq1,\\4-x\gt0,\\x+1\gt0.\end{cases}\Leftrightarrow\begin{cases}x\gt0,\\x\neq0.25,\\x\lt4,\\x\gt-1.\end{cases}\Leftrightarrow x\in(0;0.25)\cup(0.25;4)$</span></p>
<p>I tried to use the fact that the difference of cosines has range <span class="math-container">$[-2;2]$</span> no matter the arguments of the cosines. My assumption comes from the following facts:</p>
<ul>
<li>The range of <span class="math-container">$\cos(x)$</span> is <span class="math-container">$[-1;1]$</span>.</li>
<li>The smaller the subtrahend, the bigger the difference.</li>
<li>The bigger the subtrahend, the smaller the difference.</li>
</ul>
<p>Thus, the range of <span class="math-container">$\cos(a)-\cos(b)$</span> is <span class="math-container">$[\min(\cos(a))-\max(\cos(b));\max(\cos(a))-\min(\cos(b))]=[-1-1;1-(-1)]=[-2;2]$</span>. QED.</p>
<p>This did not work, however, because the right hand side sadly isn't always greater than two.</p>
<p>From a mere observation, if we let <span class="math-container">$u=\log_{4x}(x+1)$</span>, <span class="math-container">$v=\log_{4x}(4-x)$</span>, and <span class="math-container">$f(x)=\cos(x)$</span>, then the inequality becomes <span class="math-container">$f(u)-f(v)\lt v-u$</span>, leading to the inequality <span class="math-container">$f(u)+u\lt f(v)+v$</span>. This may be useful in some way, I guess.</p>
<p>I've also tried to step away from arbitrary logs and only use natural ones:
<span class="math-container">$$\cos\left(\dfrac{\log (x+1)}{\log (4x)}\right)-\cos\left(\dfrac{\log (4-x)}{\log (4x)}\right)\lt\dfrac{\log (4-x)-\log(x+1)}{\log (4x)}$$</span></p>
<p>...and I'm stuck here. No way to manipulate those cosines of which I'm aware.</p>
<p>So, how to approach this kind of inequalities properly?</p>
| richrow | 633,714 | <p>Let <span class="math-container">$O$</span> be an arbitrary point of the plane. Consider vectors <span class="math-container">$v_n:=\overrightarrow{OP_n}$</span>. Then, for all <span class="math-container">$n\ge 3$</span> we have
<span class="math-container">$$
v_{n}=\frac{1}{2}(v_{n-2}+v_{n-3})
$$</span>
or
<span class="math-container">$$
v_{n}-v_{n-1}=-(v_{n-1}-v_{n-2})-\frac{1}{2}(v_{n-2}-v_{n-3}).
$$</span>
The area of the triangle <span class="math-container">$P_{n-2}P_{n-1}P_{n}$</span> equals half the length of the cross product of <span class="math-container">$v_{n}-v_{n-1}$</span> and <span class="math-container">$v_{n-1}-v_{n-2}$</span>. Put <span class="math-container">$u_n=v_{n}-v_{n-1}$</span>. Then, for all <span class="math-container">$n\ge 3$</span> we have
<span class="math-container">$$
u_{n}=-u_{n-1}-\frac{1}{2}u_{n-2}.
$$</span>
Now it suffices to prove that <span class="math-container">$[u_{n-1},u_{n}]\to 0$</span> when <span class="math-container">$n\to\infty$</span>.</p>
<p>Can you continue now?</p>
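<p>A short numerical sketch of this recurrence (with three arbitrary starting points of my own choosing) shows the triangle areas indeed tending to <span class="math-container">$0$</span>:</p>

```python
# Start from three hypothetical points and iterate the midpoint rule
# P[i] = (P[i-3] + P[i-2]) / 2 in 0-based indexing
P = [(0.0, 0.0), (4.0, 0.0), (1.0, 3.0)]
for n in range(3, 40):
    a, b = P[n - 3], P[n - 2]  # the two points being averaged
    P.append(((a[0] + b[0]) / 2, (a[1] + b[1]) / 2))

def area(p, q, r):
    """Half the absolute cross product of (q - p) and (r - p): the triangle area."""
    return abs((q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])) / 2

areas = [area(P[i - 2], P[i - 1], P[i]) for i in range(2, len(P))]
```

<p>The characteristic roots of <span class="math-container">$x^2+x+\frac12=0$</span> have modulus <span class="math-container">$1/\sqrt2<1$</span>, so the <span class="math-container">$u_n$</span> shrink geometrically and the areas collapse.</p>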
|
4,447,329 | <p>There is an interesting inequality I've stumbled upon on the Internet. It has logarithms and trigonometry, but in contrast to something like <a href="https://math.stackexchange.com/questions/1029060/inequality-with-a-mixture-of-logs-and-trig">this</a> that uses trigonometry and logarithms separately, this one takes such kind of inequality to a whole new level!</p>
<p>See for yourself. The task is to solve the following inequality over reals:</p>
<p><span class="math-container">$$\cos(\log_{4x}(x+1))-\cos(\log_{4x}(4-x))\lt\log_{4x}(4-x)-\log_{4x}(x+1)$$</span></p>
<p>I've found the domain of the inequality by applying the restrictions on real-valued logarithms: the base must be positive and different from one, and the argument must be positive.</p>
<p><span class="math-container">$\begin{cases}4x\gt0,\\4x\neq1,\\4-x\gt0,\\x+1\gt0.\end{cases}\Leftrightarrow\begin{cases}x\gt0,\\x\neq0.25,\\x\lt4,\\x\gt-1.\end{cases}\Leftrightarrow x\in(0;0.25)\cup(0.25;4)$</span></p>
<p>I tried to use the fact that the difference of cosines has range <span class="math-container">$[-2;2]$</span> no matter the arguments of the cosines. My assumption comes from the following facts:</p>
<ul>
<li>The range of <span class="math-container">$\cos(x)$</span> is <span class="math-container">$[-1;1]$</span>.</li>
<li>The smaller the subtrahend, the bigger the difference.</li>
<li>The bigger the subtrahend, the smaller the difference.</li>
</ul>
<p>Thus, the range of <span class="math-container">$\cos(a)-\cos(b)$</span> is <span class="math-container">$[\min(\cos(a))-\max(\cos(b));\max(\cos(a))-\min(\cos(b))]=[-1-1;1-(-1)]=[-2;2]$</span>. QED.</p>
<p>This did not work, however, because the right hand side sadly isn't always greater than two.</p>
<p>From a mere observation, if we let <span class="math-container">$u=\log_{4x}(x+1)$</span>, <span class="math-container">$v=\log_{4x}(4-x)$</span>, and <span class="math-container">$f(x)=\cos(x)$</span>, then the inequality becomes <span class="math-container">$f(u)-f(v)\lt v-u$</span>, leading to the inequality <span class="math-container">$f(u)+u\lt f(v)+v$</span>. This may be useful in some way, I guess.</p>
<p>I've also tried to step away from arbitrary logs and only use natural ones:
<span class="math-container">$$\cos\left(\dfrac{\log (x+1)}{\log (4x)}\right)-\cos\left(\dfrac{\log (4-x)}{\log (4x)}\right)\lt\dfrac{\log (4-x)-\log(x+1)}{\log (4x)}$$</span></p>
<p>...and I'm stuck here. No way to manipulate those cosines of which I'm aware.</p>
<p>So, how to approach this kind of inequalities properly?</p>
| Saaqib Mahmood | 59,734 | <p>For <span class="math-container">$n = 4, 5, 6, \ldots$</span>, since <span class="math-container">$P_n$</span> is the midpoint of the segment joining the points <span class="math-container">$P_{n-3}$</span> and <span class="math-container">$P_{n-2}$</span>, we can conclude that
<span class="math-container">$$
\vec{OP_n} = \frac{1}{2} \left( \vec{OP_{n-3}} + \vec{OP_{n-2}} \right). \tag{1}
$$</span></p>
<p>So
<span class="math-container">\begin{align}
& \ \ \ \mbox{area } \triangle P_{n-2} P_{n-1} P_n \\
&= \frac{1}{2} \left\lvert \vec{ P_{n-2} P_n } \times \vec{ P_{n-2} P_{n-1} } \right\rvert \\
&= \frac{1}{2} \left\lvert \left( \vec{OP_n} - \vec{OP_{n-2} } \right) \times \left( \vec{OP_{n-1}} - \vec{OP_{n-2} } \right) \right\rvert \\
&= \frac{1}{2} \left\lvert \left[ \frac{1}{2} \left( \vec{OP_{n-3}} + \vec{OP_{n-2}} \right) - \vec{OP_{n-2} } \right] \times \left( \vec{OP_{n-1}} - \vec{OP_{n-2} } \right) \right\rvert \\
&= \frac{1}{2} \left\lvert \frac{1}{2} \left( \vec{OP_{n-3}} - \vec{OP_{n-2}} \right) \times \left( \vec{OP_{n-1}} - \vec{OP_{n-2}} \right) \right\rvert \\
&= \frac{1}{4} \left\lvert \vec{ OP_{n-1} } \times \vec{OP_{n-2}} + \vec{ OP_{n-2} } \times \vec{OP_{n-3}} + \vec{ OP_{n-3} } \times \vec{OP_{n-1}} \right\rvert \\
& = \ldots \ldots
\end{align}</span></p>
<p>To be honest, I'm not sure how to proceed from here. The above calculation might show you some way.</p>
|
2,293,838 | <p>I have question(1) in the following picture:</p>
<p><a href="https://i.stack.imgur.com/ZwSLl.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZwSLl.png" alt="enter image description here"></a></p>
<p>and the solution to it is given in the following picture:</p>
<p><a href="https://i.stack.imgur.com/q4rHR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/q4rHR.png" alt="enter image description here"></a></p>
<p>but I do not understand why $||f'||_{2}^{2} \geq \sum |(e^{inx}/\sqrt{2\pi},f')|^{2}$</p>
<p>Could anyone explain this for me?</p>
| Alex R. | 22,064 | <p>To show that $a_n=3a_{n-2}$, note that if you have a palindrome of size $n$ and snip off the first and last letters, you are left with a palindrome of size $n-2$. This gives you a map from $n$ to $n-2$. To go from $n-2$ to $n$, convince yourself that it suffices to choose one of the 3 letters to then append to the beginning and end of the length-$(n-2)$ palindrome. Show then that all length-$n$ palindromes can be formed by this procedure when you start with all length-$(n-2)$ palindromes, i.e. each palindrome of length $n$ uniquely corresponds to a palindrome of length $n-2$.</p>
<p>Part $b$ is then trivial induction.</p>
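<p>If you want to convince yourself numerically first, a brute-force count over a $3$-letter alphabet (a throwaway Python sketch) confirms the recurrence $a_n=3a_{n-2}$:</p>

```python
from itertools import product

def count_palindromes(n, alphabet="abc"):
    """Count length-n words over the alphabet that read the same backwards."""
    return sum(1 for w in product(alphabet, repeat=n) if w == w[::-1])

# Consistent with the snip-and-append bijection: a_n = 3 * a_{n-2}
counts = [count_palindromes(n) for n in range(1, 8)]  # [3, 3, 9, 9, 27, 27, 81]
```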
|
654,089 | <p>The book "A First Course in Algebra" says</p>
<blockquote>
<p>In a finite dimensional vector space, every <strong>finite</strong> set of vectors spanning the space contains a subset that is a basis.</p>
</blockquote>
<p>All that is fine. But what about a spanning set having an infinite number of vectors? Surely that too must contain a basis! An example is $\{\overline{i},\overline{j},\overline{k}+r\overline{i}\},\forall r\in\Bbb{R}$. Can we select a maximal linearly independent subset of this infinite spanning set, and then prove it is a basis? Is this a sound mathematical technique? The proof given for finite spanning sets does not seem to suggest this.</p>
<p>Thanks in advance! </p>
| Christian Blatter | 1,303 | <p>Given that $V$ is finite-dimensional one can argue as follows: Take a basis $(e_i)_{1\leq i\leq n}$ of $V$. Each of the $e_i$ is a finite linear combination of vectors from the spanning set $S$. It follows that there is a finite subset $S_0\subset S$ that spans $V$. From your "First Course in Algebra" you can then conclude that $S_0$ contains a basis, and so does $S\supset S_0$.</p>
|
2,021,734 | <p>Let $p$ be a prime number and $\mu_{p^{\infty}}$ denote $$\{ z\in \mathbb{C} : \exists k \ge 1 : z^{p^k}=1 \}$$ find all of its subgroups.</p>
<p>I was able to prove that its finite subgroups are of the form $\mu_{p^n}$, where $n$ is a positive integer (using Lagrange's Theorem and the fact that, in $\mu_{\infty}=\{ z\in \mathbb{C} : \exists k \ge 1 : z^k=1 \}$, a subgroup of order $n$ is $\mu_n=\{ z\in \mathbb{C} : z^n=1\}$). However I am struggling to find its infinite subgroups, if there are any. Any hints on that?</p>
| Andreas Caranti | 58,401 | <p>This group, the <a href="https://en.wikipedia.org/wiki/Pr%C3%BCfer_group" rel="nofollow noreferrer">Prüfer group</a>, has no proper infinite subgroup.</p>
<p>Suppose $H$ is a subgroup. Consider the set
$$
E_{H} = \{ k \in \mathbb{N} : \text{in $H$ there is an element of order $p^{k}$} \}.
$$
If $E_{H}$ is infinite, then $H$ is the whole group. If $E_{H}$ is finite, and $K$ is its maximum, then $H$ is a cyclic group of order $p^{K}$.</p>
|
1,645,865 | <p>I am ultimately trying to prove, for an Exercise in Burton's Elementary Number Theory, that $x^4 - y^4 = 2z^2$ has no solution in the positive integers.</p>
<p>I can establish that if there is a solution, the solution with the smallest value of x has gcd(x,y)=1 </p>
<p>I see that $x^4 - y^4 = (x^2 + y^2)(x^2 - y^2) = (x^2 + y^2)(x + y)(x - y)$</p>
<p>I also have the fact that $uv = w^2$ and $gcd(u,v)=1$ implies $u$ and $v$ are each squares.</p>
<p>If I can establish (along the lines of the hint from the textbook) that
$(x^2 + y^2) = 2a^2$ , $(x + y) = 2b^2$ , and $(x - y)=2c^2$ for some integers a,b,c then I can derive a contradiction via a previous theorem.</p>
<p>However, I am stuck on how to establish the above equalities. Can anyone offer me some direction?</p>
| πr8 | 302,863 | <p><strong>Hint:</strong> Try to sketch a graph of $y=x^2 e^x$, paying particular attention to any minima and maxima it may have. Once you've done this, superimpose $y=1$ onto this graph to see what any solutions might look like.</p>
<p>1) <strong>Existence</strong></p>
<p>If $f(x)=x^2e^x$, then</p>
<ul>
<li>$f$ is continuous</li>
<li>$f(0)=0<1<e=f(1)$</li>
</ul>
<p>By the intermediate value theorem, there exists a $c\in(0,1)$ such that $f(c)=1$, i.e. $c^2e^c=1$</p>
<p>2) <strong>Uniqueness</strong></p>
<p>Using the well-known inequality $e^y\ge1+y$ with $y=-1-\frac{x}{2}$, we see:</p>
<p>$$e^{-1-\frac{x}{2}}\ge -\frac{x}{2}\implies\frac{2}{e}\ge -xe^{x/2}\implies x^2e^x\le\frac{4}{e^2}<1\text{ for }x<0$$</p>
<p>and this tells us that $x^2e^x=1$ has no solutions for negative $x$.</p>
<p>For positive $x$, $x^2e^x$ is monotonically increasing ($x^2,e^x$ are both positive and increasing, thus their product also is), so the positive solution $c$ we found earlier must be the only solution, and we thus have uniqueness.</p>
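<p>The intermediate value argument is constructive enough to run: a bisection on $[0,1]$ (a throwaway sketch) locates the unique root numerically:</p>

```python
import math

def f(x):
    return x * x * math.exp(x) - 1.0

lo, hi = 0.0, 1.0   # f(0) = -1 < 0 and f(1) = e - 1 > 0
for _ in range(60): # each step halves the bracketing interval
    mid = (lo + hi) / 2
    if f(mid) <= 0:
        lo = mid
    else:
        hi = mid
c = (lo + hi) / 2   # the unique real solution of x^2 e^x = 1
```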
|
1,119,421 | <p>We know that p → q is not equivalent to q → p. But suppose we make a proof system that has all the rules of logical identities plus the rule (“commutativity of implies”) p → q ≃ q → p. (We are using the new symbol ≃ because these are not really equivalent.) </p>
<p>This new proof system is not sound. The point of this question is to show that you can prove any contradiction in this system.</p>
<p>Prove true ≃ false in this proof system. </p>
<p>Hint: You might want to start your chain of ≃ in the middle with false → true ≃ true → false. Note that the logical identities do not include a rule ¬true ≡ false, so if you want to use this, you should derive it from the other logical identities.</p>
<hr>
<p>How do I go about starting this proof? Can someone explain the hint to me? </p>
<p>Thanks!</p>
<p>EDIT: Here's a list of identities we're allowed to use.</p>
<p><strong>Commutativity</strong></p>
<p>p∧q ≡ q∧p</p>
<p>p∨q ≡ q∨p </p>
<p>p↔q ≡ q↔p</p>
<p><strong>Associativity</strong> </p>
<p>p∧(q∧r) ≡ (p∧q)∧r </p>
<p>p∨(q∨r) ≡ (p∨q)∨r</p>
<p><strong>Distributivity</strong> </p>
<p>p∨(q∧r) ≡ (p∨q)∧(p∨r)</p>
<p>p∧(q∨r) ≡ (p∧q)∨(p∧r)</p>
<p><strong>De Morgan</strong> </p>
<p>¬(p∧q) ≡ ¬p∨¬q </p>
<p>¬(p∨q) ≡ ¬p∧¬q</p>
<p><strong>Negation</strong></p>
<p>¬(¬p) ≡ p</p>
<p><strong>Excluded Middle</strong></p>
<p>p∨¬p ≡ true</p>
<p><strong>Contradiction</strong></p>
<p>p∧¬p ≡ false</p>
<p><strong>Implication</strong></p>
<p>p→q ≡ ¬p∨q </p>
<p><strong>Contrapositive</strong></p>
<p>p→q ≡ ¬q→¬p </p>
<p><strong>Equivalence</strong></p>
<p>p↔q ≡ (p→q)∧(q→p)</p>
<p><strong>Idempotence</strong></p>
<p>p∨p ≡ p </p>
<p>p∧p ≡p</p>
<p><strong>Simplification I</strong></p>
<p>p ∧ true ≡ p</p>
<p>p ∨ true ≡ true</p>
<p>p ∧ false ≡ false</p>
<p>p ∨ false ≡ p</p>
<p><strong>Simplification II</strong> </p>
<p>p∨(p∧q) ≡ p</p>
<p>p∧(p∨q) ≡ p</p>
| Aaron Maroja | 143,413 | <p>Hint: We have that $\mathbb{Z}_6 = \langle \overline{1}\rangle$ and $\mathbb{Z}_{18} = \{\overline{1}, \ldots, \overline{17}\}$. Then there exists a homomorphism $f: \langle \overline{1}\rangle \to \mathbb{Z}_{18}$ such that $f(a) = b$, if and only if $\mathcal{O}(b)$ divides $\mathcal{O}(a)$. </p>
<p>Note: $\mathcal{O}$ stand for order of an element. </p>
|
1,370 | <p>This is in reference to the recent <a href="https://mathematica.stackexchange.com/q/56829/3066">PlotLegends -> “Expressions”</a> question.</p>
<p>I filed a report on this behavior with WRI tech support. A little later rcollyer posted an answer explaining it was a new (undocumented) feature. This morning I received an answer from WRI tech support classifying it as a documentation issue.</p>
<p>I happen to think "documentation issues" are just as much bugs as code bugs, especially when they are as egregious as this one. But it might be confusing to tag such questions with our current <a href="https://mathematica.stackexchange.com/questions/tagged/bugs" class="post-tag" title="show questions tagged 'bugs'" rel="tag">bugs</a> tag. What does the community think? Do we need a new tag such as <a href="https://mathematica.stackexchange.com/questions/tagged/documentation-bugs" class="post-tag" title="show questions tagged 'documentation-bugs'" rel="tag">documentation-bugs</a>?</p>
| Mr.Wizard | 121 | <p>I think it is appropriate to follow the logic that rm -rf explained here:</p>
<ul>
<li><a href="https://mathematica.meta.stackexchange.com/questions/1085/do-we-need-a-list-filtering-tag/1087#1087">Do we need a "list-filtering" tag?</a></li>
</ul>
<p>Simply adding <a href="https://mathematica.stackexchange.com/questions/tagged/documentation" class="post-tag" title="show questions tagged 'documentation'" rel="tag">documentation</a> along with <a href="https://mathematica.stackexchange.com/questions/tagged/bugs" class="post-tag" title="show questions tagged 'bugs'" rel="tag">bugs</a> should be quite sufficient with regard to clarity, and assuming we adopt the policy of <a href="https://mathematica.meta.stackexchange.com/a/1364/121">no longer version tagging bugs</a> we will have plenty of tag room.</p>
|
2,556,255 | <p>For work I have been doing a lot of calculations which look sort of like summing terms similar to $\frac{A}{1+x}$ and $\frac{B}{1+y}$ for some $A, B$ and small values of $0 \leq x, y \leq 0.1$. In my experience I have found that this is approximately equal to $\frac{A + B}{1 + \frac{A}{A+B}x + \frac{B}{A+B}y }$. The best thing about this approximate equality is that the numerator is simply a linear combination of the earlier two numerators. Of course, it also holds exactly true if A or B is zero!</p>
<p>It would help me a great deal if I can somehow show that my hypothesis is true. Of course I know equality does not hold, but if I can convince my colleagues that at least this is approximately true, we can simplify our calculations and speed them up dramatically!</p>
<p>Does anyone have any idea if this indeed can be shown to be true?</p>
| dxiv | 291,201 | <p>Hint: look at $\;\displaystyle \frac{a}{u} +\frac{b}{v} - \frac{(a+b)^2}{a u + b v } = \frac{ab(u-v)^2}{uv(a u + b v)}\,$ for $u=x+1, v=y+1$. The RHS gives the error of the proposed approximation.</p>
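<p>A brute-force check of what the identity implies (a throwaway Python sketch; the sample values $A=1$, $B=3$ are my own) shows the relative error staying tiny on the stated range $0\le x,y\le 0.1$:</p>

```python
def exact(A, B, x, y):
    return A / (1 + x) + B / (1 + y)

def approx(A, B, x, y):
    # (A+B)^2 / (A(1+x) + B(1+y)), algebraically equal to the proposed form
    return (A + B) ** 2 / (A * (1 + x) + B * (1 + y))

# Scan a grid over [0, 0.1]^2 and record the worst relative error
worst = 0.0
steps = 50
for i in range(steps + 1):
    for j in range(steps + 1):
        x, y = 0.1 * i / steps, 0.1 * j / steps
        e = exact(1.0, 3.0, x, y)
        worst = max(worst, abs(e - approx(1.0, 3.0, x, y)) / e)
```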
|
1,285,870 | <p>Is the collection of all cardinalities a set or a proper class?
Does anybody ever think about the problem?</p>
| Wojowu | 127,263 | <p>The collection of all cardinalities is indeed a proper class. To see this, note that there are at least as many cardinal numbers as ordinal numbers, because the map $\alpha\mapsto\aleph_\alpha$ is an injection.</p>
|
1,285,870 | <p>Is the collection of all cardinalities a set or a proper class?
Does anybody ever think about the problem?</p>
| Noah Schweber | 28,111 | <p>It is a proper class. There are several ways to see this. One is: suppose $X$ were the set of all cardinals (=initial ordinals). The ordinals - in particular, the cardinals - are well-ordered, so we may add them together (indexed by this well-order) to form a single "super-cardinal;" but it's easy to check that this super-cardinal is larger than any element of $X$.</p>
<p>This is essentially the same reasoning as the Burali-Forti paradox <a href="http://en.wikipedia.org/wiki/Burali-Forti_paradox" rel="nofollow">http://en.wikipedia.org/wiki/Burali-Forti_paradox</a>, which shows that the class of <em>ordinals</em> is a proper class.</p>
|
243,849 | <p>I was working on an examples from my textbook concerning transforming formulae into disjunctive-normal form (DNF) until I found an expression that I cannot solve. I hope somebody can help me transform the following statement into DNF:</p>
<p>$$ (\lnot q \lor r) \land ( q \lor \lnot r)$$</p>
| A.Schulz | 35,875 | <p>Make a table</p>
<pre><code> q r result
0 0 1
0 1 0
1 0 0
1 1 1
</code></pre>
<p>Now you can get the DNF formula by "or-ing" all the rows with result=1:
$$ (\lnot q \land \lnot r ) \lor ( q \land r )$$</p>
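<p>The same row-by-row construction can be done mechanically, as a small Python sketch (the variable and function names are mine):</p>

```python
from itertools import product

def formula(q, r):
    return ((not q) or r) and (q or (not r))

# OR together one conjunction per satisfying row of the truth table
terms = [
    f"({'q' if q else '¬q'} ∧ {'r' if r else '¬r'})"
    for q, r in product([False, True], repeat=2)
    if formula(q, r)
]
dnf = " ∨ ".join(terms)  # "(¬q ∧ ¬r) ∨ (q ∧ r)"
```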
|
430,142 | <p>How do I determine if $w$ is in the range of the linear operator $T$?</p>
<p>$$T:\Bbb R^3 \to \Bbb R^3, T(x,y,z)=(x-y,x+y+z,x+2z)\ ;\quad w=(1,2,-1)$$
I would appreciate the help.</p>
<p>Thanks</p>
| Mikasa | 8,581 | <p>Assuming you already showed that $T$ is a linear transformation, try to find $(x,y,z)\in\mathbb R^3$ such that $$\left\{
\begin{array}{ll}
x-y=1\\
x+y+z=2\\
x+2z=-1
\end{array}
\right.$$</p>
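<p>One way to carry out the check by machine is exact Gauss–Jordan elimination over the rationals (a throwaway Python sketch of the system above):</p>

```python
from fractions import Fraction as F

# Augmented matrix of:  x - y = 1,  x + y + z = 2,  x + 2z = -1
A = [[F(1), F(-1), F(0), F(1)],
     [F(1), F(1), F(1), F(2)],
     [F(1), F(0), F(2), F(-1)]]

for i in range(3):  # Gauss-Jordan; the pivots are nonzero for this system
    piv = A[i][i]
    A[i] = [v / piv for v in A[i]]
    for j in range(3):
        if j != i:
            fac = A[j][i]
            A[j] = [v - fac * w for v, w in zip(A[j], A[i])]

x, y, z = A[0][3], A[1][3], A[2][3]  # a solution exists, so w is in the range
```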
|
1,065,988 | <p>In <a href="https://www.youtube.com/watch?v=6Lm9EHhbJAY">this Numberphile video</a> it is stated that trisecting an angle is not possible with only a compass and a straight edge. Here's a way I came up with:</p>
<blockquote>
<p>Let the top line be A and bottom line be B, and the point intersecting P.<br>
1. Use the compass and pick any length. Draw a point M n units from P on A. Use the compass again to draw a point N n units from P on B.<br>
2. Connect M and N.<br>
3. Since we know how to trisect a line segment, we can trisect MN and get 3 equal-length segments with 2 points in between.<br>
4. Connect the 2 points to the point P.<br>
5. Done.</p>
</blockquote>
<p>This seems to work for all angles that are not 0 or 180 degrees.
Given that it is proven that it's not possible to trisect an angle with only a compass and an unmarked straight edge, something must be wrong in my steps, but I can't see it. Where is my mistake?</p>
| Oleg567 | 47,993 | <p>If $\angle APB= \alpha$, then middle angle (middle part) will have measure
$$
\beta = 2\arctan \left(\frac{\tan(\alpha/2)}{3}\right) {\Large\color{red}{\ne}} \dfrac{\alpha}{3}.
$$</p>
<p>A few examples:
\begin{array}{|c|c|c|}
\hline
\alpha & \beta & error\\
\hline
3^\circ & 1.000203^\circ & 0.0203\%\\
6^\circ & 2.001626^\circ & 0.0813\%\\
9^\circ & 3.005494^\circ &0.183\%\\
\hline
30^\circ & 10.207818^\circ & 2.078\%\\
60^\circ & 21.786789^\circ & 8.933\%\\
90^\circ & 36.869897^\circ & 22.899\%\\
\hline
\end{array}</p>
<p>When angles are small, then this method gives not bad <strong>approximation</strong>, but it <strong>isn't</strong> the <strong>exact</strong> way of trisection.</p>
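<p>The formula is easy to check numerically (a throwaway Python sketch reproducing the table above; the function name is mine):</p>

```python
import math

def middle_angle(alpha_deg):
    """Angle (degrees) subtended at P by the middle third of the chord MN."""
    half = math.radians(alpha_deg) / 2
    return math.degrees(2 * math.atan(math.tan(half) / 3))
```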
|
461,268 | <p>A square hole of depth $h$ whose base is of length $a$ is given.
A dog is tied to the center of the square at the bottom of the hole by a rope of length
$L>\sqrt{2a^2+h^2}$, and walks on the ground around the hole.</p>
<p>The edges of the hole are smooth, so that the rope can freely slide along it. Find the shape and area of the territory accessible to the dog (whose size is neglected).</p>
| WishingFish | 83,036 | <p>Consider the rope touching the top edge of the hole at a point $p$ at distance $x$ from the corner. Then the distance from $p$ down to its projection $p^\prime$ on the bottom of the hole is $h$, the depth.</p>
<p>The distance from $p^\prime$ to the midpoint of the bottom edge it lands on is $\left|\frac{a}{2} - x\right|$, and that midpoint is at distance $\frac{a}{2}$ from the center $o$. Hence the distance from $o$ to $p^\prime$ is
$$\sqrt{\left(\frac{a}{2}\right)^2 + \left(\frac{a}{2} - x\right)^2}.$$</p>
<p>Therefore the distance from $o$ to $p$ (the length of rope used inside the hole) is
$$\sqrt{\left(\frac{a}{2}\right)^2 + \left(\frac{a}{2} - x\right)^2 + h^2}.$$</p>
<p>Hence, on the ground, the dog can move around the half disk centered at the point $p$ with radius $$L - \sqrt{\left(\frac{a}{2}\right)^2 + \left(\frac{a}{2} - x\right)^2 + h^2}.$$</p>
<p>Here is the construction, but the solution is the union of the half disks, plus the bottom.</p>
<p>A way to think of the union is to consider how the radius changes smoothly <strong>along each edge.</strong></p>
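<p>The reach along the ground as a function of $x$ is easy to tabulate (a throwaway sketch with arbitrary sample values $a=2$, $h=1$, $L=5$, which satisfy $L>\sqrt{2a^2+h^2}=3$; the depth $h$ enters the slant length inside the hole):</p>

```python
import math

a, h, L = 2.0, 1.0, 5.0

def reach(x):
    """Radius of the half disk available at the edge point at distance x from a corner."""
    used_inside = math.sqrt((a / 2) ** 2 + (a / 2 - x) ** 2 + h ** 2)
    return L - used_inside
```

<p>The radius is largest at the midpoint of each edge and smallest at the corners, and it varies symmetrically about the midpoint.</p>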
|
432,262 | <p>Let <span class="math-container">$x_0=1$</span> and
<span class="math-container">$$x_{k+1} = (1-a_k)\left(\frac{3}{2}-\frac{1}{2}\frac{1}{x_k}\right)$$</span>
where <span class="math-container">$a_k$</span> is a known sequence satisfying <span class="math-container">$a_k\in(0,1)$</span> for all <span class="math-container">$k$</span> and <span class="math-container">$a_k\to 0$</span> as <span class="math-container">$k\to\infty$</span>. How to prove that <span class="math-container">$x_k\to 1$</span> as <span class="math-container">$k\to\infty$</span>?</p>
<hr />
<p>The difficulty here is that</p>
<ol>
<li>It is not known how fast <span class="math-container">$a_k$</span> converges to zero, and I don't know how it affect the convergence of <span class="math-container">$x_k$</span>;</li>
<li><span class="math-container">$x_k$</span> may change sign and is not monotone, so I don't know how to prove <span class="math-container">$x_k$</span> even converges;</li>
<li>Furthermore, if we assume <span class="math-container">$x_k$</span> do converge to some limit <span class="math-container">$x^*$</span>, then by taking the limit,
<span class="math-container">$$x^*=(1-0)\left(\frac{3}{2}-\frac{1}{2}\frac{1}{x^*}\right)$$</span>
I find there are two possible solution <span class="math-container">$x^*=1/2$</span> or <span class="math-container">$x^*=1$</span>. How to exclude the case that <span class="math-container">$x^*=1/2$</span>?</li>
</ol>
| Saúl RM | 172,802 | <p>If, as you say, <span class="math-container">$a_k<0.1$</span> for all <span class="math-container">$k$</span>, then we can prove by induction that <span class="math-container">$x_k>\frac{3}{4}$</span> for all <span class="math-container">$k$</span>, with induction step <span class="math-container">$x_{k+1}>0.9\left(\frac{3}{2}-\frac{1}{2\cdot\frac{3}{4}}\right)=\frac{3}{4}$</span>. By a similar induction we get <span class="math-container">$x_k\in\big[\frac{3}{4},1\big]\;\forall k$</span>.</p>
<p>So <span class="math-container">$1-x_k\in\big[0,\frac{1}{4}\big]$</span> for all <span class="math-container">$k$</span>. Now note that
<span class="math-container">$$
\begin{split}
1-x_{k+1} &= 1-(1-a_k)\left(\frac{3}{2}-\frac{1}{2x_k}\right)\\
&=-\frac{1}{2}+\frac{1}{2x_k}+a_k\left(\frac{3}{2}-\frac{1}{2x_k}\right)\\
&\leq\frac{1-x_k}{2x_k}+3a_k\leq\frac{2}{3}(1-x_k)+3a_k.
\end{split}
$$</span></p>
<p>So as <span class="math-container">$a_k\to0$</span>, we also have <span class="math-container">$1-x_k\to 0$</span>.</p>
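<p>Simulating the recursion confirms the bound (a throwaway sketch with the arbitrary choice <span class="math-container">$a_k = 0.09\cdot 0.99^k$</span>, which satisfies <span class="math-container">$a_k\in(0,0.1)$</span> and <span class="math-container">$a_k\to 0$</span>):</p>

```python
x = 1.0
for k in range(2000):
    a_k = 0.09 * 0.99 ** k  # a_k in (0, 0.1) and a_k -> 0
    x = (1 - a_k) * (1.5 - 0.5 / x)
# x stays in [3/4, 1] throughout and tends to 1, never to 1/2
```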
|
718,664 | <p>I'm trying to find a small primitive root modulo $p^k$, where $p$ is prime. My strategy is to test small numbers $g=2,3,\ldots$ until I find a primitive root modulo $p$. That is, until $ord_pg=\phi(p)=p-1$. There are results that suggest such a search won't take long.</p>
<p>To go from modulo $p$ to modulo $p^k$, there is the following <a href="http://en.wikipedia.org/wiki/Primitive_root_modulo_n#Finding_primitive_roots" rel="noreferrer">well-known</a> theorem:</p>
<blockquote>
<p>Thm: If $g$ is a primitive root modulo $p$, then $$\begin{cases} g+p & g^{p-1}\equiv 1\pmod{p^2}\\g & g^{p-1}\not\equiv 1\pmod{p^2}\end{cases}$$
is a primitive root modulo $p^k$, for all $k\in\mathbb{N}.$</p>
</blockquote>
<p>However, I was unable to find an example that falls into the first case (except $p=2$, trivially). Hence, I have two related questions:</p>
<ol>
<li>Are there any odd primes $p$, and primitive roots $g$, such that $g^{p-1}\equiv 1\pmod{p^2}$?</li>
<li>Are there any odd primes $p$, and minimal primitive roots $g$, such that $g^{p-1}\equiv 1\pmod{p^2}$?</li>
</ol>
| lhf | 589 | <p>Surprisingly, only two odd primes are known such that the smallest primitive root mod $p$ is not a primitive root mod $p^2$: $40487$ and $6692367337$. See <a href="https://oeis.org/A055578" rel="nofollow">OEIS:A055578</a>.</p>
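<p>A small search sketch (my own code; the function names are my own) confirming that for every small odd prime the common second case of the theorem applies, consistent with how rare the exceptions are:</p>

```python
# Find the least primitive root g mod an odd prime p, then test whether g is
# already a primitive root mod p^2, i.e. whether g^(p-1) != 1 (mod p^2).
def prime_factors(n):
    fs, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            fs.add(d)
            n //= d
        d += 1
    if n > 1:
        fs.add(n)
    return fs

def least_primitive_root(p):
    # g has order p-1 mod p iff g^((p-1)/q) != 1 for every prime q | p-1
    qs = prime_factors(p - 1)
    for g in range(2, p):
        if all(pow(g, (p - 1) // q, p) != 1 for q in qs):
            return g
    raise ValueError("no primitive root found")

def first_case_needed(p):
    """True when the theorem's first case (replace g by g + p) applies."""
    g = least_primitive_root(p)
    return pow(g, p - 1, p * p) == 1

# For these small odd primes the second case always applies; the known
# exceptions 40487 and 6692367337 are far out of this range.
small_odd_primes = [3, 5, 7, 11, 13, 17, 19, 23, 29, 31]
exceptional = [p for p in small_odd_primes if first_case_needed(p)]
```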
|
4,589,828 | <p>I'm stuck in proving the function <span class="math-container">$$f(x) = \vert x + e^x\vert $$</span> has a minimum.</p>
<p>This is what I did:</p>
<p><span class="math-container">$$f'(x) = (1+e^x)\text{sgn}(x+e^x)$$</span></p>
<p>But this function is never zero, because the exponential is always positive. So when I study the sign of the derivative, it is always increasing.</p>
<p>Yet the plot of the function shows a "sort of" minimum.</p>
<p><strong>Adds</strong></p>
<p>Since there is an absolute value I calculated the difference quotient for <span class="math-container">$f(x) > 0$</span> and <span class="math-container">$f(x) < 0$</span> obtaining the function is continuous when <span class="math-container">$x+e^x > 0$</span> and when <span class="math-container">$x+e^x < 0$</span></p>
<p>Yet I also understood @Martin R. comment but how to find that point where <span class="math-container">$f(x)$</span> is discontinuous?</p>
| Anastasiya-Romanova 秀 | 133,248 | <p>You made a mistake in the second integral. It should be
<span class="math-container">$$
I_2 = \frac{\theta}{2} + C_2 = \frac{\arcsin(x)}{2} + C_2
$$</span></p>
|
2,105,238 | <p>I need to prove that the function $f: \mathbb R_{>0}\times\mathbb R \to \mathbb R^2$ , $f(x,y)=(xy, x^2-y^2)$ is injective. I know I have to show that $f(x,y)=f(a,b)$ implies $x=a$ and $y=b$ but I have no idea how to prove it. Could you give me a hint?</p>
| C. Zhihao | 405,549 | <p>if $f(x,y) = f(a,b)$ you have two equations</p>
<p>$(1)\;\; xy = ab$</p>
<p>$(2)\;\; x^2 - y^2 = a^2 - b^2$.</p>
<p>Since $x > 0$, from (1) you get $y = ab/x$. Substitute this $y$ into equation
(2) and simplify to get</p>
<p>$$x^4 - x^2 (a^2 - b^2) - (ab)^2 = 0$$</p>
<p>Use the quadratic formula and the fact that $x,a > 0$ to conclude that $x = a$.</p>
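<p>Behind this step is the factorization $x^4 - x^2(a^2-b^2) - (ab)^2 = (x^2-a^2)(x^2+b^2)$, whose only root with $x>0$ is $x=a$ (since $x^2+b^2>0$); an exact-arithmetic check of that identity (my own addition):</p>

```python
# Verify x^4 - x^2 (a^2 - b^2) - (ab)^2 == (x^2 - a^2)(x^2 + b^2)
# over many exact rational samples.
from fractions import Fraction

def quartic(x, a, b):
    return x**4 - x**2 * (a**2 - b**2) - (a * b)**2

def factored(x, a, b):
    return (x**2 - a**2) * (x**2 + b**2)

samples = [(Fraction(x), Fraction(a), Fraction(b))
           for x in range(-3, 4) for a in range(1, 4) for b in range(-2, 3)]
identity_holds = all(quartic(x, a, b) == factored(x, a, b)
                     for x, a, b in samples)
```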
|
2,785,464 | <p>It's clear that every orthonormal basis is a frame, but is <strong>every</strong> basis a frame? In particular, is a non-orthogonal basis always a frame? If yes, why?</p>
<p>I'm asking about general Hilbert spaces, in particular infinite dimensional spaces.</p>
| user | 505,767 | <p>For finite-dimensional vector spaces every basis is a frame, in the sense that every vector of the space can be described in a unique manner by any basis.</p>
<p>For a non-orthogonal basis we lose the property that</p>
<p>$$ \langle\vec x,\vec v_i \rangle =\vec x\cdot \vec v_i=x_i$$</p>
<p>which holds for orthogonal basis, that is for all $\vec x$ the dot product with the $i^{th}$ basis vector is equal to the component of that vector along the $i^{th}$ coordinate, such that</p>
<p>$$\vec x = \sum_i \langle\vec x,\vec v_i \rangle v_i$$</p>
<p>For infinite dimensional vector spaces refer to the answer by <a href="https://math.stackexchange.com/a/2785484/505767">P.Pet</a>.</p>
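<p>A concrete numerical illustration of the lost property (my own example):</p>

```python
# Non-orthogonal basis v1 = (1, 0), v2 = (1, 1) of R^2; for x = 2*v1 + 3*v2
# the dot products with the basis vectors are (5, 8), not the coordinates (2, 3).
v1, v2 = (1, 0), (1, 1)
x = (2 * v1[0] + 3 * v2[0], 2 * v1[1] + 3 * v2[1])  # = (5, 3)

def dot(u, w):
    return u[0] * w[0] + u[1] * w[1]

projections = (dot(x, v1), dot(x, v2))
coordinates = (2, 3)
```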
|
2,687,375 | <p>Let <span class="math-container">$x_1, x_2, \ldots, x_n$</span> be a random sample from the Bernoulli (<span class="math-container">$\theta$</span>).</p>
<p>The question is to find the UMVUE of <span class="math-container">$\theta^k$</span>.</p>
<p>I know the <span class="math-container">$\sum_1^nx_i$</span> is the complete sufficient statistics for <span class="math-container">$\theta$</span>.</p>
<p>Is <span class="math-container">$\left(\frac{\sum_1^nx_i}{n}\right)^k$</span> the estimator, or is there another possible estimator?</p>
<p>Could someone just help me?</p>
| Matteo Casarosa | 539,948 | <p>$S$ is compact by the Heine-Borel theorem, as you mentioned, and it is also connected. You can prove this in several ways. One way is to consider that $S$ is the topological product of two connected spaces; another is to show that $S$ is path-connected (hint: it's also convex).</p>
<p>$T$ is not compact, because the Heine-Borel theorem states that a subspace of $\mathbb{R}^n$ is compact <em>if and only if</em> it is closed and bounded. Now, $(0,0)$ is adherent to $T$ but does not belong to $T$, so $T$ is not closed, and therefore $T$ is not compact.</p>
<p>However, $T$ is connected, and even path-connected, but it's not convex, so we have to adapt our previous proof.
Consider $A,B\in T$. If the straight line joining $A$ and $B$ doesn't meet $(0,0)$, there is no problem. If it does, you can change the path a little bit by having it go along a small semicircle centered at $(0,0)$.
In general, removing one point does not preserve connectedness. </p>
|
1,116,215 | <p>In C. Adam's Topology, it is written that $f(A) - f(B) \subset f(A - B)$ for any function, but $f(A) - f(B) = f(A - B)$ iff $f$ is bijective. I can get halfway, i.e., $f(A) - f(B) \subset f(A - B)$; but for $f(A - B) \subset f(A) - f(B)$, I don't know how to prove it is correct when $f$ is bijective, nor how to show it is false for a non-one-to-one function.</p>
<p>Suppose $y\in f(A) - f(B)$; then $y$ belongs to $f(A)$ but not to $f(B)$. Thus there exist one or more $x_i$ such that $y=f(x_i)$, and none of the $x_i$ can be in $B$, since that would give $f(x_i)=y\in f(B)$, a contradiction. So for arbitrary $y$, $y\in f(A) - f(B) \implies y\in f(A - B)$, thus $f(A) - f(B) \subset f(A - B)$.</p>
<p>I would appreciate help proving (for one-to-one $f$) or disproving (for non-one-to-one $f$) the inclusion $f(A - B) \subset f(A) - f(B)$. </p>
| Community | -1 | <p>For a counterexample when $f$ is not bijective : Let $f:\mathbb R \to \mathbb R : x \mapsto 1$. Take $A = \mathbb R$ and $B = \{0\}$. Then $f(A \setminus B) = \{1\}$ but $f(A) \setminus f(B) = \{1\} \setminus \{1\} = \emptyset$.</p>
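<p>The same counterexample can be checked mechanically with finite sets standing in for $\mathbb R$ (my own addition):</p>

```python
# f is the constant map x -> 1; A stands in for the domain and B = {0}.
A = {0, 1, 2}
B = {0}

def f_image(S):
    return {1} if S else set()

lhs = f_image(A - B)           # f(A \ B) = {1}
rhs = f_image(A) - f_image(B)  # f(A) \ f(B) = empty set
```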
|
1,131,158 | <p>I'm having an exam in linear algebra in a few days and I'm trying to understand some definitions from my workbook. Here's a definition I can't wrap my head around:</p>
<p>Let's assume we have two bases in vector space $X$: <strong>a</strong>$=(a_1,...,a_n)$ and <strong>b</strong>$=(b_1,...,b_n)$. Let $f$ be an endomorphism of vector space $X$ (this is defined earlier as linear transformation from $X$ to $X$) and let $M_{f,a}\in\Bbb{R}^{n,n}$ be a matrix of endomorphism $f$ in basis <strong>a</strong>. That means that for:</p>
<p>$x=\alpha_1 a_1+...+\alpha_n a_n$; $\alpha=[\alpha_1,...,\alpha_n]^T$</p>
<p>we have:</p>
<p>$f(x)=\theta_1 x_1+...+\theta_n x_n$, for $[\theta_1,..\theta_n]^T=\theta=M_{f,a}\alpha$</p>
<p>So, what could the author of these words have in mind? I cannot find "matrix of endomorphism" on Google, and I don't understand what $x_1,...,x_n$ are: in the definition we suddenly have this basis(?), undefined earlier, and $M_{f,a}$ is suddenly the matrix of change of basis from <strong>a</strong> to that "wild" basis. So, is there any easier and more straightforward way to understand this $M_{f,a}$?</p>
| Muphrid | 45,296 | <p>$M_{f,a}$ is the matrix that <em>corresponds</em> to (or <em>represents</em>) the endomorphism $f$ for the choice of basis $a$.</p>
<p>$x$ is any vector. $\alpha$ corresponds to $x$, when you choose the basis $a$.</p>
<p>All this is doing is taking the abstract idea of an endomorphism--which is just a linear function from vectors to vectors--and writing it in a basis, so that you can <em>compute</em> this function's action on any input vector simply through matrix multiplication.</p>
|
812,379 | <p>Is the above true? (I think it is!) if so, please can somebody explain why? I don't see it!</p>
| Nicky Hekster | 9,605 | <p>Perhaps you want to know the following fact (the ring $R$ is assumed to be commutative with identity). Let $$N= \{a \in R\mid a^n=0 \text{ for some integer } n\gt 0\}.$$ This set equals all the so-called <em>nilpotent elements</em>. One can show that actually $$N=\bigcap\{P \mid P \text{ is a prime ideal of }R\}.$$ To prove this: one inclusion ($\subseteq$) is done by the answers above. The reverse inclusion is less trivial, and usually uses the following fact: every ideal <em>maximal</em> with respect to being disjoint from a multiplicative subset of $R$ is a prime ideal of $R$. With this, one can show that for each non-nilpotent element, there is a prime ideal disjoint from the powers of that element, and hence the prime does not contain the element. This shows how non-nilpotents are excluded from the intersection, so that everything there is nilpotent.</p>
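<p>A finite illustration of this fact (my own example, in the ring $\mathbb Z/12\mathbb Z$, whose prime ideals are $(2)$ and $(3)$):</p>

```python
# Nilpotents of Z/12Z versus the intersection of its prime ideals (2) and (3).
n = 12

def is_nilpotent(a):
    return any(pow(a, k, n) == 0 for k in range(1, n + 1))

nilpotents = {a for a in range(n) if is_nilpotent(a)}
prime_ideals = [{a for a in range(n) if a % p == 0} for p in (2, 3)]
intersection = set.intersection(*prime_ideals)
```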
|
3,502,551 | <blockquote>
<p>Does L'Hospital's rule pay off at all in calculating:
<span class="math-container">$$\displaystyle\lim_{x\to 0}\frac{\sqrt{\cos2x}\cdot
e^{2x^2}-1}{\ln{(1+2x)\cdot\ln{(1+2\arcsin{x})}}}$$</span></p>
</blockquote>
<p>I posted <a href="https://qr.ae/TJIZ8z" rel="nofollow noreferrer">this question on Quora</a> not expecting answers that find the limit itself, but to eliminate all the bad options. Since we haven't formally gone through <em>L'Hospital's rule</em>, I tried avoiding it because it seemed like a cliche.
I also used the fact that limits commute with continuous functions.
Since <span class="math-container">$x\to 0$</span>, I thought I could maybe substitute <span class="math-container">$x$</span> by <span class="math-container">$\frac{1}{y}$</span> when <span class="math-container">$y\to\infty$</span>. However, that wasn't helpful either.
I wasn't sure which terms I can replace by another function as they aren't the same. For example <span class="math-container">$\sqrt{\cos{2x}}\;\&\;e^{2x^2}$</span>.</p>
<p>I examined an answer by Paramanand Singh (I prefer his approach):</p>
<p><a href="https://math.stackexchange.com/questions/3457730/solve-without-lhopitals-rule-lim-x-to0-frac-sqrt-cosh3x2-cdot-e">Solve without L'Hopital's rule: $\lim_{x\to0}\frac{\sqrt{\cosh{(3x^2)}}\cdot e^{4x^3}-1}{x^2\tan(2x)}$</a></p>
<p>At some point, I realized my attempts of manipulating are all inadequate.</p>
<p>My question: how do I choose the function substitutes, and is there any other approach involving pure algebraic manipulation of fractions? And any other suggestions/opinions on whether L'Hospital pays off?</p>
<p>We haven't gone through that much, which is worrying, and now we have to work all on our own (I'm not complaining; that's great and motivating in most cases, but it is sometimes rather difficult without proper literature).
<strong>Steppan Konoplev's</strong> solution:</p>
<blockquote>
<p>Let <span class="math-container">$f(x),g(x)$</span> be the numerator, denominator respectively. Since
<span class="math-container">$$\cos x \approx 1 - \frac{x^2}{2}, \sqrt{1+x} \approx 1+ \frac{x}{2},
e^x \approx 1+x,$$</span> we have:<span class="math-container">$$f(x) \approx \sqrt{1-2x^2}(1+2x^2) - 1
\approx (1-x^2)(1+2x^2)-1 = x^2-2x^4\;\text{around}\;x=0$$</span></p>
</blockquote>
<p>My note: <span class="math-container">$e^{2x^2}\approx1+2x^2\;$</span>?</p>
<blockquote>
<p>On the other hand: <span class="math-container">$\arcsin x \approx x, \ln(1+x) \approx x\;$</span>so we
have:<span class="math-container">$$g(x) \approx \ln(1+2x)^2 \approx 4x^2\;\text{around}\;x=0$$</span>
Thus: <span class="math-container">$$\frac{f(x)}{g(x)} =
\frac{x^2+O(x^4)}{4x^2+O(x^4)}\to\frac{1}{4}\;\text{as}\;x \to 0.$$</span></p>
</blockquote>
<p>I also read there:</p>
<blockquote>
<p>The simplified form of the numerator is easy, but the denominator is
messy if you write out all the terms in detail. It does seem that it
would be easier to use the first few terms of power series expansions.</p>
</blockquote>
| Paramanand Singh | 72,031 | <p>My preferred approach is to use a set of well known limits combined with algebraic manipulation. This is a simple approach which requires familiarity with laws of limits. Techniques like L'Hospital's Rule and Taylor series are powerful but require some care (especially in case of L'Hospital's Rule one must first ensure that prerequisites for the rule are met).</p>
<p>On the other hand the usage of well known limits requires some deeper understanding. Thus consider for example the limit <span class="math-container">$$\lim_{x\to 0}\dfrac{\log(1+x)}{x}=1$$</span> One should notice that the argument of log function ie <span class="math-container">$(1+x)$</span> tends to <span class="math-container">$1$</span> and the denominator tends to <span class="math-container">$0$</span>. Thus whenever you see an expression like <span class="math-container">$$\log\text{(something)} $$</span> where "something" tends to <span class="math-container">$1$</span> you have to express it as <span class="math-container">$$\frac{\log(\text{something})} {\text{something} - 1}\cdot(\text{something} - 1)$$</span> and see if this helps you or not.</p>
<p>This kind of understanding is needed for all the well known limits. For example can you figure out what to do with <span class="math-container">$\sqrt{\cos 2x}$</span> in the numerator (see current question) given the well known limit <span class="math-container">$$\lim_{x\to 0}\frac{1-\cos x} {x^2}=\frac{1}{2}?$$</span> Ask yourself the same question about <span class="math-container">$e^{2x^2}$</span> in the numerator given the well known limit <span class="math-container">$$\lim_{x\to 0}\frac{e^x-1}{x}=1$$</span> Another rule of thumb is that <em>use L'Hospital's Rule only when required differentiation is damn easy (no calculations, just memory) and resulting expression is simpler</em>. For example you can apply it on <span class="math-container">$$\lim_{x\to 0}\frac{x-\sin x} {x^3}$$</span> to get <span class="math-container">$$\lim_{x\to 0}\frac{1-\cos x} {3x^2}$$</span> If you need to use product rule of differentiation for applying L'Hospital's Rule then you are better off not applying it. </p>
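<p>As a sanity check on the current question (my own addition), the ratio can be evaluated numerically near $0$; it should be close to the value $\frac14$ obtained in the quoted solution:</p>

```python
# Evaluate f/g for the question's limit at a small x; it should be near 1/4.
import math

def ratio(x):
    f = math.sqrt(math.cos(2 * x)) * math.exp(2 * x * x) - 1
    g = math.log(1 + 2 * x) * math.log(1 + 2 * math.asin(x))
    return f / g

approx = ratio(1e-4)
```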
|
3,794,418 | <p><strong>Background:</strong> I'm a math rookie, yet to enrol in university. I randomly started reading Mendelson's <em>Introduction to Mathematical Logic</em>, when I stumbled upon this paradox in the introductory section:</p>
<blockquote>
<p><strong>Grelling's Paradox:</strong> An adjective is called <em>autological</em> if the property denoted by the adjective holds for the adjective itself. An adjective is called <em>heterological</em> if the property denoted by the adjective does not apply to the adjective itself. For example, 'polysyllabic' and 'English' are autological, whereas 'monosyllabic' and 'French' are heterological. Consider the adjective 'heterological'. If 'heterological' is heterological, then it is not heterological. If 'heterological' is not heterological, then it is heterological. In either case, heterological is both heterological and not heterological.</p>
</blockquote>
<p>I'd like to understand the following:</p>
<ol>
<li><strong>What is the source of logical fallacy in this paradox?</strong> If I formulate a set <span class="math-container">$A$</span> of all adjectives and subsets <span class="math-container">$A_a$</span> and <span class="math-container">$A_h$</span> corresponding to autological and heterological adjectives, respectively, then it could be the case that <span class="math-container">$\text{(heterological)}\in A-(A_a\cup A_h)$</span>, i.e., it belongs to neither of the two sets(unless <span class="math-container">$A_a\cap A_h=\emptyset$</span> and <span class="math-container">$A_a\cup A_h=A$</span>).</li>
<li>On a lighter note, I'd like to know about the mathematical significance of this paradox, and how it's dealt with in modern set theories.</li>
</ol>
<p>Although I understand the answer(s) could be very abstract, please add a simpler analogy along with a necessary technical explanation, if possible.</p>
| Noah Schweber | 28,111 | <p>If <span class="math-container">$A, A_a,$</span> and <span class="math-container">$A_h$</span> actually "make sense" - more on this below - then we clearly have that <span class="math-container">$A_a$</span> and <span class="math-container">$A_h$</span> partition <span class="math-container">$A$</span>: <span class="math-container">$A_h$</span> is defined to be <span class="math-container">$A\setminus A_a$</span>. So your proposal doesn't work.</p>
<p>The fix is that <span class="math-container">$A_a$</span> and <span class="math-container">$A_h$</span> are in fact more complicated than they appear. We only have a paradox if the adjective "heterological" is in <span class="math-container">$A$</span>. But it turns out that this doesn't happen: basically, in order to define heterologicity we need to use a <em>truth predicate for <span class="math-container">$A$</span></em> and <a href="https://en.wikipedia.org/wiki/Tarski%27s_undefinability_theorem" rel="nofollow noreferrer">we don't have one of those in <span class="math-container">$A$</span> itself</a>.</p>
<hr />
<p>Here's one way to see the paradox in action.</p>
<p>Let <span class="math-container">$\ulcorner\cdot\urcorner$</span> be your favorite Godel numbering function and let <span class="math-container">$Form$</span> be the set of all first-order formulas in the language of arithmetic. For simplicity, let's write "<span class="math-container">$\mathbb{N}$</span>" for the structure <span class="math-container">$(\mathbb{N};+,\times,0,1,<)$</span>. Then the set <span class="math-container">$$X=\{\ulcorner\varphi\urcorner: \mathbb{N}\models\neg\varphi(\underline{\ulcorner\varphi\urcorner})\},$$</span> the version of <span class="math-container">$A_h$</span> for first-order formulas of arithmetic, cannot itself be definable by a first-order formula of arithmetic: if <span class="math-container">$X$</span> were defined by some formula <span class="math-container">$\theta$</span> of first-order arithmetic, that is if we had <span class="math-container">$$X=\{n: \mathbb{N}\models\theta(\underline{n})\}$$</span> for some formula <span class="math-container">$\theta$</span> of first-order arithmetic, we would get a contradiction by considering whether <span class="math-container">$\mathbb{N}\models\theta(\ulcorner\theta\urcorner)$</span>.</p>
<p>More generally, we can generalize the particular setting above to any setting where we have some logic <span class="math-container">$\mathcal{L}$</span>, some structure <span class="math-container">$\mathfrak{A}$</span>, and some appropriate "coding" mechanism of <span class="math-container">$\mathcal{L}$</span>-formulas into <span class="math-container">$\mathfrak{A}$</span>. Getting the details right takes some thought, but the point is that Grelling's paradox illustrates a fundamental "stepping-up" phenomenon that we can't avoid: the Grelling set for a particular logic/structure/coding system is not definable in that structure by a formula of that logic.</p>
<p>(Note that <span class="math-container">$X$</span> can indeed be defined <em>in broader contexts</em>: for example, it's definable in <span class="math-container">$\mathbb{N}$</span> by a formula of second-order logic, and it's definable by a first-order formula in the universe of <em>sets</em>, of which <span class="math-container">$\mathbb{N}$</span> forms a very small piece.)</p>
|
1,045,444 | <p>I am having trouble understanding the role primary decomposition plays in "interpreting" the geometric picture of a scheme. Here are the examples I am struggling with from Eisenbud's <em>Commutative Algebra: with a View toward Algebraic Geometry</em>. Assume $k$ is algebraically closed.</p>
<ol>
<li><p>Apparently $I = (x^2,y) \subset k[x,y]$ defines the geometric object of the origin in $k^2$ along with a (or lots?) of tangent vectors sticking out horizontally. Why?</p></li>
<li><p>Similarly $I = (x,y)^2 = (x^2,xy,y^2)$ defines the origin along with the ``first order infinitesimal neighbourhood around the origin." Again, why?</p></li>
</ol>
<p>I don't understand his reasoning in either case, though apparently he looks at the primary decomposition of the ideals.</p>
| user84413 | 84,413 | <p>Hint: By the AM-GM inequality, we have that $\frac{\sqrt{a_n}}{n}\le\frac{1}{2}(a_n+\frac{1}{n^2})$</p>
|
1,045,444 | <p>I am having trouble understanding the role primary decomposition plays in "interpreting" the geometric picture of a scheme. Here are the examples I am struggling with from Eisenbud's <em>Commutative Algebra: with a View toward Algebraic Geometry</em>. Assume $k$ is algebraically closed.</p>
<ol>
<li><p>Apparently $I = (x^2,y) \subset k[x,y]$ defines the geometric object of the origin in $k^2$ along with a (or lots?) of tangent vectors sticking out horizontally. Why?</p></li>
<li><p>Similarly $I = (x,y)^2 = (x^2,xy,y^2)$ defines the origin along with the ``first order infinitesimal neighbourhood around the origin." Again, why?</p></li>
</ol>
<p>I don't understand his reasoning in either case, though apparently he looks at the primary decomposition of the ideals.</p>
| Jack D'Aurizio | 44,121 | <p>By the Cauchy-Schwarz inequality,
$$\sum \frac{\sqrt{a_n}}{n} \leq \sqrt{\zeta(2)\sum a_n}.$$</p>
|
669,696 | <p>How do I show by the Root Test that $$\sum\limits_{n=1}^\infty (2n^{1/n}+1)^n$$ converges or diverges? This is what I have done so far. Since we take $\sum\limits_{n=1}^\infty \sqrt[n]{|a_n|}$, we let $a_n = (2n^{1/n}+1)^n.$ This yields $\sum\limits_{n=1}^\infty\sqrt[n]{(2n^{1/n}+1)^n}$, which simplifies to $\sum\limits_{n=1}^\infty 2n^{1/n}+1.$ I know that $n^{1/n}$ is an indeterminate form of the type ${\infty}^0$, which I can solve accordingly. However, what do I do with the $1$? Can I disregard it since $n \rightarrow \infty$ and the $1$ becomes insignificant? That's where I'm stuck.</p>
| user127.0.0.1 | 50,800 | <p>By the <em><a href="http://en.wikipedia.org/wiki/Root_test" rel="nofollow">root test</a></em> you get</p>
<p>$$\limsup\sqrt[n]{\left|2n^{1/n}+1\right|^n} = \limsup 2n^{1/n}+1 = 3 \gt 1$$</p>
<p>Thus your series diverges.</p>
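<p>A numerical check of this computation (my own addition): the $n$-th root of the terms indeed approaches $3>1$:</p>

```python
# The n-th root of a_n = (2 n^(1/n) + 1)^n is 2 n^(1/n) + 1, tending to 3.
def nth_root_of_term(n):
    return 2 * n ** (1 / n) + 1

values = [nth_root_of_term(n) for n in (10, 10**3, 10**6)]
```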
|
<p>Could you help me explain how to find the solution of this equation
$$y'(t)=-y(t)-\frac1{2}(1+e^{-2t})+1$$
given $y(0)=0$?
Thanks to all.
This is my attempt:
$$y'(t)=-y(t)-\frac1{2}e^{-2t}+\frac1{2}$$
$$e^{2t}y'(t)=e^{2t}\left(-y(t)-\frac1{2}e^{-2t}+\frac1{2}\right)$$
where
$$(e^{2t}y(t))'=e^{2t}y'(t)+2e^{2t}y(t)=e^{2t}\left(y(t)-\frac1{2}e^{-2t}+\frac1{2}\right)$$</p>
| UserX | 148,432 | <p>Hints: solve $$y'+y=\frac12 \left (1-e^{-2t} \right ) \stackrel{\cdot e^{\int 1 \mathrm{d}t}}{\iff} e^t y'+e^t y= e^t \frac12 \left (1-e^{-2t} \right )$$</p>
<p>Do you recognise the LHS as something familiar?</p>
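<p>Carrying the hint through gives, as far as I can tell (my own derivation, worth double-checking), $y(t)=\frac12+\frac12 e^{-2t}-e^{-t}$; a numeric verification of the ODE and the initial condition:</p>

```python
# Candidate solution y(t) = 1/2 + (1/2) e^(-2t) - e^(-t);
# check y' + y = (1 - e^(-2t))/2 and y(0) = 0 numerically.
import math

def y(t):
    return 0.5 + 0.5 * math.exp(-2 * t) - math.exp(-t)

def y_prime(t):
    return -math.exp(-2 * t) + math.exp(-t)

residuals = [abs(y_prime(t) + y(t) - 0.5 * (1 - math.exp(-2 * t)))
             for t in (0.0, 0.5, 1.0, 3.0)]
```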
|
3,232,319 | <p>I need to show that <span class="math-container">$e^{-it}=\frac{1}{e^{it}}$</span>.
but I don't understand what needs to be proven; it seems trivial to me. I'd appreciate any help.
Is the claim true even if $t$ is not real?
Thank you</p>
| D.B. | 530,972 | <p>From Euler's formula,
<span class="math-container">$$e^{-it} = \cos(t)+i\sin(-t) = \cos(t)-i\sin(t).$$</span>
Also,
<span class="math-container">$$e^{it} = \cos(t)+i\sin(t).$$</span>
Now, compute the product
<span class="math-container">$$e^{it} \cdot e^{-it}.$$</span>
You find that the answer is <span class="math-container">$1$</span>. This shows that <span class="math-container">$e^{it}$</span> is the reciprocal of <span class="math-container">$e^{-it}$</span>.</p>
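<p>A numerical confirmation (my own addition), including non-real $t$ to address the follow-up question:</p>

```python
# exp(i t) * exp(-i t) = 1, checked for several t including non-real ones.
import cmath

def product(t):
    return cmath.exp(1j * t) * cmath.exp(-1j * t)

results = [product(t) for t in (0.0, 2.5, -7.1, 1 + 2j, -3 - 0.5j)]
```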
|
3,232,319 | <p>I need to show that <span class="math-container">$e^{-it}=\frac{1}{e^{it}}$</span>.
but I don't understand what needs to be proven; it seems trivial to me. I'd appreciate any help.
Is the claim true even if $t$ is not real?
Thank you</p>
| Elias Costa | 19,266 | <p>The inverse of the complex number <span class="math-container">$e^{t\cdot i}$</span> is, by definition, another complex number <span class="math-container">$e^{u+v\cdot i}$</span> such that
<span class="math-container">$$
e^{t\cdot i}\cdot e^{u+v\cdot i}=1
$$</span>
On the other hand we know that
<span class="math-container">$$
e^{t\cdot i}\cdot e^{u+v\cdot i}=e^{u+(t+v)\cdot i} \quad \mbox{ and } \quad 1=e^{0+0\cdot i}
$$</span>
Equating the right sides of these two equations we have
<span class="math-container">$$
e^{u+(t+v)\cdot i}=e^{0+0\cdot i}\implies u=0 \quad \mbox{ and } \quad t+v=0
$$</span>
Consequently we have <span class="math-container">$ v = -t $</span> and therefore
<span class="math-container">$$
\frac{1}{e^{t\cdot i}}=e^{-t\cdot i}
$$</span></p>
|
2,096,827 | <p><strong>Question</strong>: A sample of size 10 is taken with replacement from an urn that contains 100 balls, which are numbered 1, 2, ..., 100. (At each draw, each ball has the same probability of being selected).</p>
<p>There are 3 parts to the question, and I've included my work below. However, I'm not sure if independence applies in part ii and iii.</p>
<p><strong>i)</strong> P(ball 1 is in the sample) = 1 - P(ball 1 is not in the sample) = $$1 - (\frac{99}{100})^{10}$$</p>
<p><strong>ii)</strong> P(neither ball 1 nor ball 2 is in the sample) = $P((1 \cup 2)^c) = 1 - P(1 \cup 2) = 1 - [P(1) + P(2) - P(1 \cap 2)]$ </p>
<p>I think that P(1) = P(2), but I'm not sure if I can apply independence here and assume that $P(1 \cap 2) = P(1)P(2)$.</p>
<p>Since we are sampling with replacement, does this mean that we can assume the event: ball 1 is in the sample, and event: ball 2 is in the sample are independent?</p>
<p><strong>iii)</strong> Explain how you could calculate (with formulas) P(ball 1 is in the sample | ball 2 is in the sample).</p>
<p>If the two events are independent, then the probability would equal P(ball 1 is in the sample), but I'm confused as to whether I can assume independence.</p>
<p>Any help would be appreciated!</p>
<p>Thanks!</p>
| RGS | 329,832 | <p><strong>HINT:</strong></p>
<p>What happens if you sum those two results together? Or what if you subtract them?</p>
|
4,010,408 | <p>good day</p>
<p>Can anybody help me? thanks</p>
<p>Let <span class="math-container">$X_i$</span> be independent and identically distributed normal random variables with mean <span class="math-container">$\mu$</span> and standard deviation 2. What is the minimum value of n such that <span class="math-container">$P(|\dfrac{S_n}{n}-\mu|<0.01) \geq 0.99$</span>?</p>
<p>Thanks</p>
| herb steinberg | 501,262 | <p><span class="math-container">$\frac{S_n}{n}$</span> is normal with mean <span class="math-container">$\mu$</span> and standard deviation <span class="math-container">$\frac{2}{\sqrt{n}}$</span>. To get your answer, look up a table of the normal distribution.</p>
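<p>Following this through numerically (my own addition): we need $2\Phi(0.01\sqrt n/2)-1\ge 0.99$, i.e. $0.005\sqrt n \ge z$ where $\Phi(z)=0.995$, so $z\approx 2.5758$:</p>

```python
# Smallest n with P(|S_n/n - mu| < 0.01) >= 0.99 when sd(S_n/n) = 2/sqrt(n).
import math
from statistics import NormalDist

z = NormalDist().inv_cdf(0.995)          # ~2.5758
n_min = math.ceil((2 * z / 0.01) ** 2)   # smallest n with 0.005*sqrt(n) >= z

def coverage(n):
    return 2 * NormalDist().cdf(0.01 * math.sqrt(n) / 2) - 1
```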
|
4,528,027 | <p>I'm trying to prove that the function <span class="math-container">$$\begin{cases}f(x,y)=\dfrac{(2x^2y^4-3xy^5+x^6)}{(x^2+y^2)^2}, & (x,y)≠0\\ 0, & (x,y)=0\end{cases}$$</span> is continuous at the point (0,0) using the rigorous definition of a limit.</p>
<p>Attempting to find the upper limit of the function:
<span class="math-container">$$|f(x)-f(x_0)|= \left|\frac{(2x^2y^4-3xy^5+x^6)}{(x^2+y^2)^2}-0\right|$$</span>
I see the denominator is always positive so this is equal to
<span class="math-container">$\dfrac{|2x^2y^4-3xy^5+x^6|}{(x^2+y^2)^2}$</span>.
Using the triangle inequality I know that this is less than or equal to
<span class="math-container">$\dfrac{|(2x^2y^4)-(3xy^5)|+|x^6|}{(x^2+y^2)^2}$</span>.
From here I would like to continue finding expressions which are greater than or equal to this, which allow me to cancel some terms against <span class="math-container">$(x^2+y^2)^2$</span>.
I'm thinking I can write
<span class="math-container">$$x^6 = (x^2)^3 ≤ (x^2+y^2)^3 $$</span>
for instance, but I am unsure of how to "handle" <span class="math-container">$|(2x^2y^4)-(3xy^5)|$</span>.
Could someone give me any pointers?</p>
| Theo Bendit | 248,286 | <p>Use the fact that <span class="math-container">$|xy| \le \frac{x^2 + y^2}{2}$</span>. You can then get,
<span class="math-container">$$3|xy^5| = 3|xy|y^4 \le \frac{3}{2}(x^2 + y^2)^3$$</span>
and
<span class="math-container">$$2x^2y^4 = 2|xy|^2y^2 \le \frac{1}{2}(x^2 + y^2)^3.$$</span>
You can then proceed as you were.</p>
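<p>From $|xy|\le\frac{x^2+y^2}{2}$ one gets, term by term, $|2x^2y^4-3xy^5+x^6|\le 3(x^2+y^2)^3$ and hence $|f(x,y)|\le 3(x^2+y^2)\to 0$; a randomized numerical check of that bound (my own addition):</p>

```python
# f(x, y) as in the question; check |f(x, y)| <= 3 (x^2 + y^2) at random points.
import random

random.seed(0)

def f(x, y):
    return (2 * x**2 * y**4 - 3 * x * y**5 + x**6) / (x**2 + y**2) ** 2

points = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(1000)]
bound_ok = all(abs(f(x, y)) <= 3 * (x**2 + y**2) + 1e-12
               for x, y in points if (x, y) != (0.0, 0.0))
```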
|
516,280 | <blockquote>
<p>Given the equation <span class="math-container">$\displaystyle{\int_{-x}^x\exp({-t^2})dt}=-\ln(x)$</span>:</p>
<p>a. Simplify the integral using Gauss method with 3 points.</p>
<p>b. Solve given equation by Newton Raphson iterative method</p>
</blockquote>
<p>I succeeded in simplifying the integral and got <span class="math-container">$\int\exp(-t^2)=\frac 8 9 +\frac {10} 9 e^{-0.36x^2}$</span>, but the NR method requires a function (since the iteration has the form <span class="math-container">$x_{k+1}=x_k-\frac{f(x_k)}{f'(x_k)}$</span>), which I haven't found yet in <span class="math-container">$-\ln(x)=\frac 8 9 +\frac {10} 9 e^{-0.36x^2}$</span>. Which elements should I take to be my <span class="math-container">$f(x)$</span>?</p>
| Amzoti | 38,839 | <p>I did not check the first part, but I think there is an issue, as we should be able to validate the LHS and RHS when we find our $x$, and that is not checking out. Note, your formula did not specify which Gauss method to use, and that normally means Gauss-Legendre, which uses Legendre polynomials on $\displaystyle \int_{-1}^1 f(x)~dx$ and then a suitable transformation to $[a, b]$.</p>
<p>However, here is the process for part b that you are asking about.</p>
<p>$$\displaystyle f(x) = \ln(x)+\frac 8 9 +\frac {10} 9 e^{-0.36x^2}$$</p>
<p>Now, set-up your iteration formula for NR, choose a starting point and find $x$.</p>
<p>A plot shows and approximate location:</p>
<p><img src="https://i.stack.imgur.com/IcUSE.png" alt="enter image description here"></p>
<p>The iteration is given by:</p>
<p>$$x_{n+1} = x_n - \dfrac{f(x)}{f'(x)}$$</p>
<p>I chose $x_0 = 0.4$ and got $x = 0.13634195512062696$.</p>
<p>You can verify that at that value of $x$:</p>
<p>$$-\ln(x)=\frac 8 9 +\frac {10} 9 e^{-0.36x^2}$$</p>
<p>However, using a numerical integrator on the LHS with that value of $x$, does not match the RHS, $-\ln x$, so the Gaussian Quadrature result likely has an issue.</p>
<p>Using Cipra's update, we have:</p>
<p>$$\displaystyle f(x) = \ln(x)+ \frac 8 9 x +\frac {10} 9 e^{-3/5 x^2}x$$</p>
<p>A plot shows:</p>
<p><img src="https://i.stack.imgur.com/b0tE4.png" alt="enter image description here"></p>
<p>Using NR, with $x_0 = 0.4$ results in $x = 0.4386233081179400$, which checks out for both sides of:</p>
<p>$$\displaystyle{\int_{-x}^x\exp({-t^2})dt}=-\ln(x)$$</p>
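<p>Reproducing the Newton iteration on the updated $f$ (my own code), and cross-checking the computed root against the exact equation via $\int_{-x}^x e^{-t^2}\,dt=\sqrt{\pi}\,\operatorname{erf}(x)$:</p>

```python
# Newton-Raphson on f(x) = ln(x) + (8/9)x + (10/9) x e^(-3x^2/5), then a
# cross-check of the root against the original un-approximated equation.
import math

def f(x):
    return math.log(x) + (8 / 9) * x + (10 / 9) * x * math.exp(-0.6 * x * x)

def f_prime(x):
    return 1 / x + 8 / 9 + (10 / 9) * math.exp(-0.6 * x * x) * (1 - 1.2 * x * x)

x = 0.4                      # starting point used above
for _ in range(50):
    x -= f(x) / f_prime(x)

# residual of the exact equation at the computed root; it is small but
# nonzero, reflecting only the 3-point quadrature error
exact_residual = math.sqrt(math.pi) * math.erf(x) + math.log(x)
```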
|
406,514 | <blockquote>
<p>Find the Galois group $\operatorname{Gal}(f/\mathbb{Q})$ of the polynomial $f(x)=(x^2+3)(x^2-2)$.</p>
</blockquote>
<p>Any explanations during the demonstration will be appreciated. Thanks!</p>
| DonAntonio | 31,254 | <p>Hints for you to understand and/or prove:</p>
<p>(1) If $\,\sigma\in\,\text{Gal}\,(f/\Bbb Q)\,$ , it <strong>must</strong> be that</p>
<p>$$\sigma(\sqrt 2)=\pm\sqrt 2\;,\;\;\sigma(i\sqrt 3)=\pm i\sqrt 3$$</p>
<p>(2) We have that $\,x^2+3\in\Bbb Q(\sqrt 2)[x]\,$ is irreducible (<strong>there</strong> , not only in $\,\Bbb Q[x]\,$ !) </p>
<p>(3) We have that $\;\{1\,,\,\sqrt 2\,,\,i\sqrt 3\,,\,i\sqrt 6\}\;$ is a basis for $\;\Bbb Q(\sqrt 2,i\sqrt 3)/\Bbb Q\;$ </p>
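<p>A numerical companion to the hints (my own addition): $\alpha=\sqrt 2+i\sqrt 3$ satisfies $x^4+2x^2+25=0$, and all four sign choices from hint (1) are roots of the same quartic, consistent with $[\Bbb Q(\sqrt 2,i\sqrt 3):\Bbb Q]=4$ and a Galois group isomorphic to $\Bbb Z/2\times\Bbb Z/2$:</p>

```python
# All four numbers s*sqrt(2) + t*i*sqrt(3) (s, t = +-1) are roots of
# x^4 + 2x^2 + 25, the degree-4 minimal polynomial of sqrt(2) + i*sqrt(3).
import itertools

sqrt2, isqrt3 = 2 ** 0.5, 1j * 3 ** 0.5

def quartic(x):
    return x**4 + 2 * x**2 + 25

roots = [s * sqrt2 + t * isqrt3 for s, t in itertools.product((1, -1), repeat=2)]
max_residual = max(abs(quartic(r)) for r in roots)
```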
|
2,775,443 | <p>The statement "$\displaystyle \sum_{n=1}^{\infty}a_n$ converges $\implies$ $\displaystyle \sum_{n=1}^{\infty}\cfrac{1}{a_n}$ diverges" looks natural, but do we have this implication? I am checking alternating series as a counter-example but could not find one yet. What can we say about the implication?</p>
| farruhota | 425,072 | <p>If $\sum a_n$ and $\sum b_n$ are both positive and convergent, then $\sum (a_n+b_n)$ must be convergent. But taking $b_n=\frac{1}{a_n}$ gives
$$a_n+b_n=a_n+\frac{1}{a_n}\ge 2,$$ so the terms do not tend to $0$ and $\sum\frac{1}{a_n}$ cannot converge.</p>
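<p>A concrete instance (my own addition): $a_n=1/n^2$ gives a convergent series, while the reciprocals $1/a_n=n^2$ do not even tend to $0$:</p>

```python
# Partial sums of sum 1/n^2 approach pi^2/6 ~ 1.6449, while the terms of the
# reciprocal series are unbounded, so that series diverges.
a = [1 / n**2 for n in range(1, 10001)]
partial_sum = sum(a)
reciprocal_terms = [1 / x for x in a]
```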
|
<p>Let A = {1,2,3,...,8,9}, B = {2,4,6,8}, C = {1,3,5,7,9}, D = {3,4,5}, E = {3,5}.
Which of these sets can equal a set X under each of the following conditions?</p>
<p>a. X and B are disjoint</p>
<p>b. X is a subset of D but X is not a subset of B</p>
<p>c. X is a subset of A but X is not a subset of C</p>
<p>d. X is a subset of C but X is not a subset of A</p>
<p>Any help will be appreciated!</p>
| almagest | 172,006 | <p>You have found one solution to the equation. Now you want to find other solutions. Suppose you add $Dx$ to your solution. What happens. Well, if you differentiate $Dx$ (where $D$ is just a constant) twice wrt $x$ or twice wrt $y$ you get nil. So your solution $+Dx$ is also a solution. Similarly you can add $Ey$ and still have a solution.</p>
<p>The usual way of approaching this to say that if $u=f(x,y)$ is another solution, then we can put $f(x,y)=\frac{1}{6}Ax^3+\frac{1}{6}By^3+\frac{1}{2}Cy^2+g(x,y)$. Then $g$ must satisfy the equation $g_{xx}+g_{yy}=0$, so we just have to solve that equation.</p>
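<p>The claim that $u=\frac{1}{6}Ax^3+\frac{1}{6}By^3+\frac{1}{2}Cy^2$ satisfies $u_{xx}+u_{yy}=Ax+By+C$ can be spot-checked numerically. A minimal finite-difference sketch (the sample coefficients and points are my own):</p>

```python
# Finite-difference check that u = A*x^3/6 + B*y^3/6 + C*y^2/2 satisfies
# u_xx + u_yy = A*x + B*y + C.  Central second differences are exact for
# cubics (the fourth derivative vanishes), so only rounding error remains.
A, B, C = 2.0, -3.0, 5.0
u = lambda x, y: A * x**3 / 6 + B * y**3 / 6 + C * y**2 / 2
h = 1e-3

def laplacian(x, y):
    uxx = (u(x + h, y) - 2 * u(x, y) + u(x - h, y)) / h**2
    uyy = (u(x, y + h) - 2 * u(x, y) + u(x, y - h)) / h**2
    return uxx + uyy

for (x, y) in [(0.3, -1.2), (1.0, 2.0), (-0.7, 0.5)]:
    assert abs(laplacian(x, y) - (A * x + B * y + C)) < 1e-6
print("u_xx + u_yy matches A*x + B*y + C at the sample points")
```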
|
945,920 | <p>Let $X$ be a nonempty set and $d: X\times X\to \mathbb{R}$ be a function such that, for all $x,y\in X$ and all distinct $u, v\in X$ each different from $x$ and $y$:</p>
<p>(1) $ d(x,y)\geq 0$ ;</p>
<p>(2) $d(x,y)=0$ if and only if $x=y$;</p>
<p>(3) $d(x,y)=d(y,x)$;</p>
<p>(4) $d(x,y)\leq d(x,u)+d(u,v)+d(v,y)$.</p>
<p>Then $d$ is generalized metric on $X$ and $(X,d)$ is called generalized metric space. </p>
<p>Question: Please describe the topology of Generalized metric space and show that it is different of topology induced by a classic metric $d$.</p>
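<p>On a finite set, axioms (1)–(4) can be verified by brute force. A small checker of my own (the sample function, the usual absolute-value metric on $\{0,1,2,3\}$, is just an illustration; any ordinary metric satisfies the quadrilateral inequality (4) via two applications of the triangle inequality):</p>

```python
from itertools import permutations

# Brute-force check of axioms (1)-(4) on a finite set, using the
# absolute-value metric on {0, 1, 2, 3} as a sample d.
X = [0, 1, 2, 3]
d = lambda x, y: abs(x - y)

# Axioms (1)-(3): nonnegativity, identity of indiscernibles, symmetry.
ok = all(d(x, y) >= 0 and (d(x, y) == 0) == (x == y) and d(x, y) == d(y, x)
         for x in X for y in X)
# Axiom (4), quadrilateral inequality, over all distinct quadruples.
for x, y, u, v in permutations(X, 4):
    ok = ok and d(x, y) <= d(x, u) + d(u, v) + d(v, y)
print("all axioms hold:", ok)
```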
| Lucas Mann | 171,950 | <p>Your idea is correct. There is just one little mistake: You noticed $3 \cdot 3 \equiv 1 \bmod 4$. This does <em>not</em> imply $3^{2n+1} \equiv 1^{2n+1} \bmod 4$, but it tells you that
$$3^{2n+1} = 3 \cdot (3^2)^n \equiv 3 \cdot 1 = 3 \mod{4}$$
which is exactly what you want.</p>
<p>Now for $8$, try to do a similar thing. Hint: $3^2 \equiv 5^2 \equiv 1 \bmod 8$.</p>
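<p>Both congruences are easy to sanity-check with Python's three-argument <code>pow</code>; a quick sketch of my own:</p>

```python
# Verify 3^(2n+1) mod 4 and mod 8 for a range of n.
for n in range(50):
    assert pow(3, 2 * n + 1, 4) == 3   # since 3^2 ≡ 1 (mod 4)
    assert pow(3, 2 * n + 1, 8) == 3   # since 3^2 ≡ 1 (mod 8)
print("3^(2n+1) ≡ 3 (mod 4) and (mod 8) for n = 0..49")
```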
|
1,231,082 | <p>Let $\text{tr}A$ be the trace of the matrix $A \in M_n(\mathbb{R})$.</p>
<ul>
<li>I realize that $\text{tr}: M_n(\mathbb{R}) \to \mathbb{R}$ is obviously linear (but how can I write down a <em>formal</em> proof?). However, I am confused about how I should calculate $\text{dim}(\text{Im(tr)})$ and $\text{dim}(\text{Ker(tr)})$, and find a basis for each of these subspaces according to the value of $n$. </li>
<li>Also, I don’t know how to prove that $\text{tr}(AB)= \text{tr}(BA)$, and I was wondering if it is true that $\text{tr}(AB)= \text{tr}(A)\text{tr}(B)$. </li>
<li>Finally, I wish to prove that $g(A,B)=\text{tr}(AB)$ is a positive definite scalar product if $A,B$ are <em>symmetric</em>, and that $g(A,B)=-\text{tr}(AB)$ is a scalar product if $A,B$ are <em>antisymmetric</em>. Can you show me how one can proceed to do this? </li>
</ul>
<p>I would really appreciate some guidance and help in clarifying the doubts and questions above. Thank you.</p>
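<p>On the second bullet, a small numeric experiment settles the "wondering" quickly: $\text{tr}(AB)=\text{tr}(BA)$ always holds, while $\text{tr}(AB)=\text{tr}(A)\text{tr}(B)$ fails already for $2\times 2$ matrices. A pure-Python sketch of my own (the sample matrices are arbitrary):</p>

```python
# tr(AB) = tr(BA) holds for any square A, B, but tr(AB) = tr(A)tr(B) fails.
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def tr(A):
    return sum(A[i][i] for i in range(len(A)))

A = [[1, 2], [3, 4]]
B = [[0, 1], [5, -2]]

assert tr(matmul(A, B)) == tr(matmul(B, A))   # always true
assert tr(matmul(A, B)) != tr(A) * tr(B)      # fails for this pair
print("tr(AB) =", tr(matmul(A, B)), "; tr(A)tr(B) =", tr(A) * tr(B))
```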
| Christian Blatter | 1,303 | <p>From $2^3=1\ (7)$ we get $2^{105}=1\ (7)$, and from $3^3=-1\ (7)$ we get $3^{105}=-1\ (7)$. It follows that $2^{105}+3^{105}=0\ (7)$.</p>
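<p>The argument is easy to confirm with modular exponentiation (a check of my own, not part of the answer): $105 = 3\cdot 35$, so the two base cases carry through directly.</p>

```python
# Direct check: 2^3 ≡ 1 and 3^3 ≡ -1 (mod 7), and 105 = 3 * 35.
assert pow(2, 3, 7) == 1
assert pow(3, 3, 7) == 6          # i.e. -1 mod 7
assert (pow(2, 105, 7) + pow(3, 105, 7)) % 7 == 0
print("2^105 + 3^105 ≡ 0 (mod 7)")
```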
|