| qid | question | author | author_id | answer |
|---|---|---|---|---|
2,589,155 | <blockquote>
<p>Let $d$ be the standard metric on $\mathbb{R}$. i.e $d(x,y) = |x-y|$. Is there a distance $d'$ with $d'(x,y) \geq d(x,y)$ so that $(\mathbb{R},d')$ is compact?</p>
</blockquote>
<p>Is my proof correct?
Let $B_{\epsilon}^{d'}(x)$ denote a ball of radius $\epsilon$ centered on $x$ with respect to $d'$.</p>
<p><strong>My attempt:</strong></p>
<p>Assume such a $d'$ exists. Since $(\mathbb{R},d')$ is compact it is totally bounded. Let $\epsilon > 0$ be given. Then there are an $n \in \mathbb{N}$ and points $x_{1},\dots,x_{n} \in \mathbb{R}$ such that $\cup^{n}_{i=1}B_{\epsilon}^{d'}(x_{i}) = \mathbb{R}$. Let $x \in \mathbb{R}$; then $d'(x,x_{i}) < \epsilon$ for some $i = 1,\dots,n$. But
$$d(x,x_{i}) \leq d'(x,x_{i}) < \epsilon.$$ So $x \in B_{\epsilon}^{d}(x_{i})$ for this $i$ (indeed $B_{\epsilon}^{d'}(x_{i}) \subset B_{\epsilon}^{d}(x_{i})$, since $d \leq d'$). So $x\in \cup_{i=1}^{n}B_{\epsilon}^{d}(x_{i})$; that is, $\mathbb{R} \subset \cup_{i=1}^{n}B_{\epsilon}^{d}(x_{i})$. Since also $\cup_{i=1}^{n}B_{\epsilon}^{d}(x_{i}) \subset \mathbb{R}$, we have $\cup_{i=1}^{n}B_{\epsilon}^{d}(x_{i}) = \mathbb{R}$. So $(\mathbb{R},d)$ is totally bounded, which is a contradiction. </p>
| Guy Fsone | 385,707 | <p>Well, your proof is correct, but it could be shortened as follows:</p>
<blockquote>
<p>Indeed, if such a metric exists, then the identity map $Id:(\Bbb R, d')\to(\Bbb R, d)$ is continuous, since we assume $$d(x,y)\le d'(x,y).$$
Then $(\Bbb R, d)= Id((\Bbb R, d'))$ is also compact, as the continuous image of a compact set. <strong>However, it is well known that $(\Bbb R, d)$ is not compact.</strong></p>
</blockquote>
|
2,840,091 | <blockquote>
<p>Consider the linear map $T:\mathbb{R}^3 \to \mathbb{R}$ with
$$T\left(\begin{bmatrix} x \\ y \\ z \end{bmatrix}\right)=x-2y-3z$$
Find the basis of its kernel.</p>
</blockquote>
<p><strong>My try</strong></p>
<p>The plane is the nullspace of the matrix
$$A=\begin{bmatrix} 1 & -2 & -3 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}$$</p>
<p>But I am stuck here. Can anyone explain this further?</p>
| user247327 | 247,327 | <p>The "kernel" of a linear transformation $T$ is defined as the subspace $\{v : Tv= 0\}$ of its domain. Here that is $T(x, y, z)= x- 2y- 3z= 0$. We can solve for $x$: for any $y, z$, $x= 2y+ 3z$. A vector in the kernel is of the form $(x, y, z)=(2y+ 3z, y, z)= (2y, y, 0)+ (3z, 0, z)= y(2, 1, 0)+ z(3, 0, 1)$. A basis for the kernel is $\{(2, 1, 0), (3, 0, 1)\}$.</p>
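<p>As a quick numerical cross-check against the map as stated in the question, $T(x,y,z)=x-2y-3z$ (a small script added for verification):</p>

```python
def T(x, y, z):
    # The linear map from the question: T(x, y, z) = x - 2y - 3z
    return x - 2*y - 3*z

# Candidate basis vectors for the kernel, obtained by solving x = 2y + 3z
v1 = (2, 1, 0)
v2 = (3, 0, 1)

print(T(*v1), T(*v2))  # both are 0, so v1 and v2 lie in the kernel
# Linear independence is clear: the second and third coordinates of
# (v1, v2) are (1, 0) and (0, 1), so neither is a multiple of the other.
```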
|
90,006 | <p>I have two problem collections I am currently working through, the "Berkeley Problems in Mathematics" book, and the first of the three volumes of Putnam problems compiled by the MAA. These both contain many problems on basic differential equations.</p>
<p>Unfortunately, I never had a course in differential equations. Otherwise, my background is reasonably good, and I have knowledge of real analysis (at the level of baby Rudin), basic abstract algebra, topology, and complex analysis. I feel I could handle a more concise and mathematically mature approach to differential equations than the "cookbook" style that is normally given to first- and second-year students. I was wondering if someone with access to the above books that I am working through could suggest a concise reference that would cover what I need to know to solve the problems in them. In particular, it seems I need to know basic solution methods and the basic existence and uniqueness theorems. On the other hand, I have no desire to specialize in differential equations, so a reference work like V. I. Arnold's book on ordinary differential equations would not suit my needs, and I certainly don't have any need for knowledge of, say, the Laplace transform or PDEs. </p>
<p>To reiterate, I just need a concise, high level overview of the basic (by hand) solution techniques for differential equations, along with some discussion of the basic uniqueness and existence theorems. I realize this is rather vague, but looking through the two problem books I listed above should give a more precise idea of what I mean. Worked examples would be a plus. I am very unfamiliar with the subject matter, so thanks in advance for putting up with my very nebulous request.</p>
<p>EDIT: I found Coddington's "Introduction to Ordinary Differential Equations" to be what I needed. Thanks guys.</p>
| Will Jagy | 10,400 | <p>Yes, the value of the fraction is always below 1/2. So it stays far away from 3, indeed never gets within 5/2 of it. Note that it is not necessary to know that there is any limit to answer your question. Anyway, take $\varepsilon = 2$ and no $n$ ever gets you within $\varepsilon$ of 3.</p>
|
2,497,799 | <h2>Question</h2>
<p>While solving a recursive equation, I am stuck at this summation and unable to move forward. The summation is</p>
<blockquote>
<p>$$\sum_{j=0}^{n-2}2^j (n-j)$$</p>
</blockquote>
<h2>My Approach</h2>
<blockquote>
<p>$$\sum_{j=0}^{n-2}2^j (n-j) = \sum_{j=0}^{n-2}2^j \times n-\sum_{j=0}^{n-2}
2^{j} \times j$$</p>
</blockquote>
<p>$$=n \times (2^{n-1}-1)-\sum_{j=0}^{n-2}
2^{j} \times j$$</p>
<p>I am unable to move forward; please help me out!</p>
| Prajwal Kansakar | 49,781 | <p>$$\sum_{j=1}^{n-2}j\;2^j=2+2\times 2^2+3\times 2^3+\cdots+(n-2)2^{n-2}$$$$=[2+2^2+2^3+\cdots+2^{n-2}]+[2^2+2^3+\cdots+2^{n-2}]+[2^3+\cdots+2^{n-2}]+\cdots+2^{n-2}$$$$=2(2^{n-2}-1)+2^2(2^{n-3}-1)+2^3(2^{n-4}-1)+\cdots+2^{n-2}(2^{n-(n-1)}-1)$$ $$=(2^{n-1}-2)+(2^{n-1}-2^2)+(2^{n-1}-2^3)+\cdots+(2^{n-1}-2^{n-2})$$ $$=(n-2)2^{n-1}-(2+2^2+\cdots+2^{n-2})=(n-2)2^{n-1}-2(2^{n-2}-1)$$ $$=(n-2)2^{n-1}-2^{n-1}+2=(n-3)2^{n-1}+2$$</p>
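<p>A quick numerical check of the closed form derived above (a small script added for verification):</p>

```python
def partial_sum(n):
    # Left-hand side: sum_{j=1}^{n-2} j * 2^j
    return sum(j * 2**j for j in range(1, n - 1))

def closed_form(n):
    # Right-hand side derived above: (n - 3) * 2^(n-1) + 2
    return (n - 3) * 2**(n - 1) + 2

for n in range(3, 20):
    assert partial_sum(n) == closed_form(n)
print("closed form verified for n = 3..19")
```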
|
2,831,469 | <p>How can we solve $$y'''(t)+a(y''(t))^2+b(y'(t))^3=0$$</p>
<p>Could one make some kind of least common denominator argument to decide possible substitutions? Since the chain rule will come into play, I suppose a substitution both for the variable $t$ and the function $y$ could be useful. Possibly some powers of them?</p>
| Alex Jones | 350,433 | <p>As a more general solution, if you have an equation of the form
$$ x''(t) + a(x(t))x'(t)^2+b(x(t)) = 0 $$
then you can make the substitution $f(x) = x'(t)^2$ to arrive at the equation
$$ \frac{1}{2}f'(x) + a(x)f(x)+b(x)= 0 $$
Letting $\mu(x) = \exp\left[2\int a(x)\, dx\right]$ (the factor $2$ matches the $\frac{1}{2}f'$ term), we can solve for $f(x)$:
$$ f(x) = \mu(x)^{-1}\left(C_1-2\int\mu(x)b(x)dx\right) $$
which can be substituted back for $x(t)$:
$$ x' = \mu(x)^{-1/2}\left(C_1-2\int\mu(x)b(x)dx\right)^{1/2} $$
and solved implicitly:
$$ C_2 + t - \int \left[ \mu(x)\left(C_1-2\int\mu(x)b(x)dx\right)^{-1} \right]^{1/2} dx = 0 $$</p>
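<p>A numerical sanity check of the reduced equation $\frac{1}{2}f'(x)+a(x)f(x)+b(x)=0$ (a sketch added here; the choices $a(x)=1$, $b(x)=x$ and the constant $C_1$ are illustrative assumptions, not part of the original problem, and the integrating factor is taken as $\mu(x)=\exp\left[2\int a(x)\,dx\right]$):</p>

```python
import math

# Illustrative assumption: a(x) = 1, b(x) = x. With mu(x) = exp(2x),
# the integral of mu*b is exp(2x)*(x/2 - 1/4), so
#   f(x) = exp(-2x) * (C1 - 2*exp(2x)*(x/2 - 1/4)) = C1*exp(-2x) - x + 1/2.
C1 = 3.0

def f(x):
    return C1 * math.exp(-2 * x) - x + 0.5

def residual(x, h=1e-6):
    # Central-difference approximation of f'(x), then the left-hand side
    # of the reduced equation (1/2) f' + a f + b with a = 1, b = x.
    fprime = (f(x + h) - f(x - h)) / (2 * h)
    return 0.5 * fprime + f(x) + x

worst = max(abs(residual(i / 10)) for i in range(-20, 21))
print(worst)  # tiny: only finite-difference noise remains
```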
|
<p>Suppose you have the set of all possible $n \times n$ square adjacency matrices, where $n \in \{1,2,3,4,\dots\}$. For each matrix, compute the logarithm of the largest eigenvalue. Is it true that the set of logarithms you obtain is dense in $\mathbb{R}$? How do you begin to prove/disprove this?</p>
| Nikita Sidorov | 8,131 | <p>In addition to Doug's nice answer above: it is probably even easier to show that the set of simple Parry numbers is dense in $(1,\infty)$. More precisely, let $\beta>1$ and let $(d_n)_{n=1}^\infty$ be the <em>greedy $\beta$-expansion</em> of 1, i.e.,
$$
1=\sum_{n=1}^\infty d_n\beta^{-n},
$$
where $d_1=\lfloor \beta\rfloor, d_2=\lfloor\beta\ \text{frac}(\beta) \rfloor, d_3=\lfloor \beta\ \text{frac}(\beta\ \text{frac}(\beta))\rfloor $, etc. (Here $\lfloor\cdot\rfloor$ stands for the integer part and frac$(\cdot)$ for the fractional part.)</p>
<p>A number $\beta$ is called a <em>simple Parry number</em> (also known as a simple $\beta$-number) if $(d_n(\beta))_1^\infty$ has only a finite number of nonzero terms (i.e., ends with $0^\infty$). It is known that any Parry number is a Perron number; also, it is obvious that the Parry numbers are dense, since for any $\beta$ with an infinite $(d_n(\beta))_1^\infty$ we can truncate this sequence at any term and get a $d_n(\beta')$ for some simple Parry number $\beta'$. Since $(d_n(\beta))_1^\infty$ and $d_n(\beta')_1^\infty$ are close (in the topology of coordinate-wise convergence), so are $\beta$ and $\beta'$.</p>
<p>For more details and some references you may read the first couple of pages of <a href="http://www-fourier.ujf-grenoble.fr/PUBLIS/publications/REF_709.pdf" rel="nofollow">this paper</a>, for instance. </p>
|
<p>I recently stumbled upon Edwin Moise's proof of the area of a square, but there is something that is bothering me in his proof - he claims that since the inequalities <span class="math-container">$(1)$</span> and <span class="math-container">$(5)$</span> are true, then <span class="math-container">$a =\sqrt{S_a}$</span>. But why? Why do <span class="math-container">$(1)$</span> and <span class="math-container">$(5)$</span> imply that there is an equality between <span class="math-container">$a$</span> and <span class="math-container">$\sqrt{S_a}$</span>?</p>
<p>I would be grateful if you could explain his proof, especially the latter part.</p>
<p>Thank you!</p>
<p><a href="https://i.stack.imgur.com/C0yV3.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/C0yV3.jpg" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/jAZjh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jAZjh.png" alt="enter image description here" /></a></p>
| Sayan Dutta | 943,723 | <p>Here <span class="math-container">$\alpha S$</span> represents the area of <span class="math-container">$S$</span>.</p>
<p>Now, your question is why do <span class="math-container">$(1)$</span> and <span class="math-container">$(5)$</span> imply <span class="math-container">$a=\sqrt{\alpha S_a}$</span>. This is a very standard philosophy in mathematics. Think about it in this way-</p>
<p>You have shown that <span class="math-container">$\frac p q<a \iff \frac p q<\sqrt{\alpha S_a}$</span>. But the choice of <span class="math-container">$\frac p q$</span> was arbitrary, so this equivalence holds <strong>for all</strong> rationals <span class="math-container">$\frac p q$</span>. Ponder this for a moment and you'll realise that it can only be true if <span class="math-container">$a=\sqrt{\alpha S_a}$</span>.</p>
<hr />
<p>Once you are done pondering, here's a mathematical proof-</p>
<p>If possible let <span class="math-container">$a\neq\sqrt{\alpha S_a}$</span>.</p>
<p>Without loss of generality, assume <span class="math-container">$a>\sqrt{\alpha S_a}$</span>. Now, because of the way the rationals are distributed, given any <span class="math-container">$A$</span> and <span class="math-container">$B$</span>, there is a rational number between <span class="math-container">$A$</span> and <span class="math-container">$B$</span>. So, let <span class="math-container">$\frac x y$</span> be a rational number between <span class="math-container">$a$</span> and <span class="math-container">$\sqrt{\alpha S_a}$</span>. Then <span class="math-container">$\frac x y>\sqrt{\alpha S_a}$</span> but <span class="math-container">$\frac x y<a$</span>, which contradicts the fact that <span class="math-container">$(1)$</span> and <span class="math-container">$(5)$</span> are equivalent.</p>
|
<p>Calculating the moment of inertia is fairly simple; however, how would I proceed to calculate it about an arbitrary axis?
The question asks for the moment of inertia of $C=\{(x,y,z)|0\leq x,0\leq y, 0 \leq z,x+y+z\leq 1\}$, so, if I'm not wrong about the boundaries, the moment of inertia about the "usual" $z$ axis would be:
$$I_z=\int_{0}^{1}\int_{0}^{1-x}\int_{0}^{1-x-y}(x^2+y^2)\,dz\,dy\,dx$$</p>
<p>But, what about an arbitrary axis? The question actually asks the moment about the axis $\{(t,t,t)|t\in \mathbb R\}$, but, this is more about the general concept than about the question itself.
Any directions would be very welcome.</p>
| Ted Shifrin | 71,348 | <p>Just integrate the function $f(x,y,z)$ that equals the square of the distance from $\vec x=(x,y,z)$ to your axis. If $\vec a$ is a unit vector in the direction of the axis, then $$f(x,y,z) = \|\vec x - (\vec x\cdot\vec a)\vec a\|^2 = \|\vec x\|^2 - (\vec x\cdot\vec a)^2.$$</p>
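<p>The identity can be spot-checked numerically; the sketch below uses the axis $\{(t,t,t)\}$ from the question (unit vector $(1,1,1)/\sqrt3$) and compares $\|\vec x\|^2-(\vec x\cdot\vec a)^2$ with the squared distance from $\vec x$ to its projection onto the axis:</p>

```python
import random

def sq_dist_formula(p, a):
    # f(x) = ||x||^2 - (x . a)^2, with a a unit vector along the axis
    dot = sum(pi * ai for pi, ai in zip(p, a))
    return sum(pi * pi for pi in p) - dot * dot

def sq_dist_direct(p, a):
    # Squared distance from p to its orthogonal projection (p . a) a
    dot = sum(pi * ai for pi, ai in zip(p, a))
    return sum((pi - dot * ai) ** 2 for pi, ai in zip(p, a))

s = 3 ** -0.5
axis = (s, s, s)  # unit vector along {(t, t, t)}, the axis from the question
random.seed(0)
for _ in range(100):
    p = tuple(random.uniform(-1, 1) for _ in range(3))
    assert abs(sq_dist_formula(p, axis) - sq_dist_direct(p, axis)) < 1e-12
print("formula matches the direct squared distance at 100 random points")
```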
|
1,607,108 | <p>Using the ratio test:
$$\frac{1}{3}\lim\limits_{n\to\infty}\left|\frac{(n+1)^3(\sqrt{2}+(-1)^{n+1})^{n+1}}{n^3(\sqrt{2}+(-1)^n)^n}\right|$$</p>
<p>Without evaluating the limit, the numerator is greater than the denominator and the series is divergent. Is there an easier method for checking the convergence of this series?</p>
| Clive Newstead | 19,542 | <p>If you were to apply the result you quote directly, what you'd actually obtain is that if $2 \mid x^2-1$ then $8 \mid (x^2-1)^2$. Specifically, you're taking $a=2$, $c=4$ and $b=d=x^2-1$ in the result you mention.</p>
<p>However, it <em>is</em> true that if $2 \mid x^2-1$ then $8 \mid x^2-1$; it's just true for different reasons. Specifically, if $2 \mid x^2-1$ then $x-1$ and $x+1$ are both even; and since they're two consecutive even numbers, one of them must be divisible by $4$. Hence either $2 \mid x-1$ and $4 \mid x+1$, or $4 \mid x-1$ and $2 \mid x+1$. You can now apply your result to complete the proof.</p>
<p>This proof doesn't use the fact that if $2 \mid x^2-1$ then $4 \mid x^2-1$; in fact, this is a <em>consequence</em>!</p>
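<p>A small brute-force check of the divisibility claim (added for verification):</p>

```python
# Brute-force check: whenever 2 divides x^2 - 1 (i.e. x is odd),
# 8 divides x^2 - 1 as well -- and in particular so does 4.
for x in range(-1000, 1001):
    if (x * x - 1) % 2 == 0:
        assert (x * x - 1) % 8 == 0
print("8 | x^2 - 1 whenever 2 | x^2 - 1, checked for |x| <= 1000")
```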
|
<p>I'm solving a list of exercises on double integrals, and they normally give a range for $x$ and $y$, but in this case it says that $y = x^2$ and $y = 4$, so I thought that $x$ would be $\sqrt{y}$, but my answer was wrong. The answer should be $25.60$.</p>

<p>A thin metal plate occupies the shaded region in the figure below.</p>
<p><a href="https://i.stack.imgur.com/VQv2u.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VQv2u.png" alt="graph"></a></p>
<p>The region is limited by the graphs of $y = x^2$ and $y = 4$ where x and y are measured in centimeters. If the superficial density of the plate, in $g/cm^2$, is $p(x,y) = y$, its mass, in grams, will be:</p>
| A.Γ. | 253,273 | <p>Step 1: You need to translate the information that the region <strong>is limited</strong> by the two curves into inequalities for $x$ and $y$ that describe the shaded region $D$.</p>
<blockquote class="spoiler">
<p> For example, $D=\{(x,y)\colon -2\le x\le 2,\,x^2\le y\le 4\}$.</p>
</blockquote>
<p>Step 2: After that the problem is to calculate
$$
M=\iint_D\rho(x,y)\,dxdy.
$$</p>
<blockquote class="spoiler">
<p> Iterated integration $$M=\int_{-2}^2\int_{x^2}^4y\,dy\,dx.$$</p>
</blockquote>
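<p>A quick numerical check of the iterated integral (a sketch added here) reproduces the expected mass of $128/5 = 25.6$ grams:</p>

```python
# Direct numerical check of M = integral over D of y. For fixed x, the inner
# integral of y dy from x^2 to 4 equals (16 - x^4) / 2 in closed form, so
# only the outer integral needs the trapezoidal rule.
N = 20000
a, b = -2.0, 2.0
h = (b - a) / N
total = 0.0
for i in range(N + 1):
    x = a + i * h
    inner = (16 - x**4) / 2
    w = 0.5 if i in (0, N) else 1.0  # trapezoidal endpoint weights
    total += w * inner * h
print(total)  # ≈ 25.6 grams, matching the expected answer 25.60
```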
|
<p>Let $F(x)= 2x^2 -3x$. </p>

<p>Find the ranges of $x$ on which the function is strictly increasing and strictly decreasing.</p>
| Community | -1 | <p>Apply a small positive increment $h$ to $x$ and compare</p>
<p>$$2(x+h)^2-3(x+h)><2x^2-3x.$$</p>
<p>This can be rewritten</p>
<p>$$4xh+2h^2-3h><0,$$ or</p>
<p>$$4x-3+2h><0.$$</p>
<p>Ignoring $h$, which can be made smaller and smaller, the LHS is positive when </p>
<p>$$x>\frac34$$ and conversely.</p>
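<p>A small numerical sketch of this conclusion, sampling points on either side of $x=3/4$:</p>

```python
def F(x):
    return 2 * x * x - 3 * x

h = 1e-3
# F should be increasing where 4x - 3 > 0, i.e. x > 3/4, and decreasing before.
for i in range(1, 200):
    x = 0.75 + i * 0.05
    assert F(x + h) > F(x)   # strictly increasing to the right of 3/4
    x = 0.75 - i * 0.05
    assert F(x + h) < F(x)   # strictly decreasing to the left of 3/4
print("F increasing on (3/4, inf) and decreasing on (-inf, 3/4) at sampled points")
```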
|
588,228 | <p>A,B,C are distinct digits of a three digit number such that </p>
<pre><code> A B C
B A C
+ C A B
____________
A B B C
</code></pre>
<p>Find the value of A+B+C.</p>
<p>a) 16 b) 17 c) 18 d) 19</p>
<p>I tried using the options 16, 17, 18, and 19 by breaking each into three digits, but there are so many ways of breaking them up that this approach did not help.</p>
| M.R. Yegan | 134,463 | <p>$ABC+BAC+CAB=ABBC \\
C+10B+100A+C+10A+100B+B+10A+100C=C+110B+1000A$</p>
<p>and this yields $B+C=880A-100C$; since $880A$ and $100C$ are both divisible by $20$, it follows that $20$ divides $B+C$. But $B$ and $C$ are distinct digits, so $1\leq{B+C}\leq17$, and no such number is divisible by $20$. Clearly a contradiction.</p>
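<p>A brute-force enumeration (added here as a cross-check, not part of the original answer) confirms that no distinct digits satisfy the displayed addition:</p>

```python
# Exhaustive check that no distinct digits A, B, C satisfy ABC + BAC + CAB = ABBC.
# All three digits appear as leading digits, so all are taken nonzero.
solutions = []
for A in range(1, 10):
    for B in range(1, 10):
        for C in range(1, 10):
            if len({A, B, C}) < 3:
                continue
            lhs = (100*A + 10*B + C) + (100*B + 10*A + C) + (100*C + 10*A + B)
            rhs = 1000*A + 100*B + 10*B + C
            if lhs == rhs:
                solutions.append((A, B, C))
print(solutions)  # [] -- the puzzle as written has no solution
```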
|
1,600,266 | <p>I'm trying to work out sum of this series $$1 + \frac{2}{2} + \frac{3}{2^2} + \frac{4}{2^3} + \ldots$$</p>
<p>I know one method is to do substitutions and getting the series into a form of a known series. So far I've converted the series into $$ 1 + \frac{2}{x} + \frac{3}{x^2} + \frac{4}{x^3} + \ldots $$ where $x=2$ and I'm trying to get it into the form of the $\ln(1+x)$ series somehow. I have tried differentiating, integrating and nothing is working out. The closest I got is by inverting which gave me $ 1 + \frac{x}{2} + \frac{x^2}{3} + \frac{x^3}{4} \ldots $ </p>
<p>Now I'm just lost and have no idea what to do.</p>
<p>The other idea I had was converting it into $$ \Large{\sum_{n=1}^\infty{\frac{n}{2^{n-1}}}}, $$ but I have no idea how to do anything further to it.
How would you do this? Thanks.</p>
| CommonerG | 74,357 | <p>I think it's likely a mistake, although I don't see any mention in the <a href="http://infolab.stanford.edu/~ullman/ialc/errata.html" rel="nofollow">errata</a>.</p>
<p>Perhaps he means every prime except 2 when expressed as a binary digit ends with a 1? </p>
|
835,454 | <p>I was struggling with the following problem (from linear algebra):</p>
<blockquote>
<p>Let $V$ be the vector space of the $2 \times 2$ matrices with real coefficients. Consider the action of the group $SL_2(\mathbb{R})$ on $V$, namely for any matrix $T \in SL_2(\mathbb{R})$ and for every matrx $A \in V$ we define $\Phi_T(A)=T\cdot A\cdot T^{-1}$. Prove that for every $T\in SL_2(\mathbb{R})$, $\Phi_T$ is an endomorphism of $V$. Then starting from a basis of $V$, write the corresponding homomorphism $SL_2(\mathbb{R}) \to GL_4(\mathbb{R})$, then show that the image lies inside $SL_4(\mathbb{R})$</p>
</blockquote>
<p>First of all, does anybody know the origin of this problem? Is there a book with standard solutions to this kind of problem? And is my solution any good? Apparently it is generalizable to many more dimensions, giving a nontrivial result on the determinant of such matrices.
And what happens when the trace is $0$? Can we show a homomorphism $SL_2(\mathbb{R}) \to SL_3(\mathbb{R})$ in the same way?</p>
| user1551 | 1,551 | <p>The question setter probably meant this. Consider the following ordered basis $\mathcal B$ of $V=M_2(\mathbb R)$.
$$
\mathcal B=\left\{
B_1=\pmatrix{1&0\\ 0&0},
\ B_2=\pmatrix{0&0\\ 1&0},
\ B_3=\pmatrix{0&1\\ 0&0},
\ B_4=\pmatrix{0&0\\ 0&1}
\right\}.
$$
We can identify every $B=\pmatrix{a&c\\ b&d}\in V$ with the vector $\renewcommand{\vec}{\operatorname{vec}}\vec(B)=(a,b,c,d)^\top\in\mathbb R^4$ (this is known as the <a href="http://en.wikipedia.org/wiki/Vectorization_%28mathematics%29" rel="nofollow">vectorisation</a> of a matrix). Now, for any $T\in SL_2(\mathbb R)$, the group action $B\mapsto TBT^{-1}$ is actually an automorphism. It follows that the image of $\mathcal B$ under the group action is still a basis of $V$. Consequently, the matrix
$$
f(T) = \left[\vec(TB_1T^{-1}),\vec(TB_2T^{-1}),\vec(TB_3T^{-1}),\vec(TB_4T^{-1})\right]
$$
lies inside $GL_4(\mathbb R)$. If one knows what a <a href="http://en.wikipedia.org/wiki/Kronecker_product" rel="nofollow">Kronecker product</a> is, one immediately sees that
$$
f(T)=(T^{-\top}\otimes T)
\underbrace{\left[\vec(B_1),\vec(B_2),\vec(B_3),\vec(B_4)\right]}_{=I_4}
=T^{-\top}\otimes T.
$$
By the properties of the Kronecker product (see the above-linked Wikipedia entry), it is also obvious that $f$ is a group homomorphism from $SL_2(\mathbb R)$ to $GL_4(\mathbb R)$ and $\det f(T)=1$, i.e. $\operatorname{Im}(f)\subset SL_4(\mathbb R)$. Also, $f$ is injective. Therefore $\operatorname{Im}(f)$ is isomorphic to $SL_2(\mathbb R)$. This is probably why the last sentence in the question says that <em>"the image lies inside $SL_2(\mathbb R)$"</em>, if the $SL_2(\mathbb R)$ here is not a typo for $SL_4(\mathbb R)$.</p>
<p>The 4-by-4 matrix that you have shown is $T\otimes T^{-\top}$. So, it also works.</p>
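<p>A numerical sketch of these claims using NumPy (the shear-based sampling of $SL_2(\mathbb R)$ is an illustrative assumption):</p>

```python
import numpy as np

rng = np.random.default_rng(0)

def f(T):
    # f(T) = T^{-T} (Kronecker product) T: the matrix of B -> T B T^{-1}
    # with respect to the column-major vectorisation of B.
    return np.kron(np.linalg.inv(T).T, T)

def random_sl2(rng):
    # Product of elementary shear matrices; each factor, hence the product,
    # has determinant exactly 1.
    u, v, w = rng.uniform(-1, 1, size=3)
    return (np.array([[1.0, u], [0.0, 1.0]])
            @ np.array([[1.0, 0.0], [v, 1.0]])
            @ np.array([[1.0, w], [0.0, 1.0]]))

for _ in range(20):
    T1, T2 = random_sl2(rng), random_sl2(rng)
    assert np.isclose(np.linalg.det(f(T1)), 1.0)       # image lies in SL_4
    assert np.allclose(f(T1 @ T2), f(T1) @ f(T2))      # group homomorphism
    B = rng.normal(size=(2, 2))
    # f(T) vec(B) agrees with vec(T B T^{-1}) (column-major vec)
    assert np.allclose(f(T1) @ B.flatten(order="F"),
                       (T1 @ B @ np.linalg.inv(T1)).flatten(order="F"))
print("det f(T) = 1, f(T1 T2) = f(T1) f(T2), and f(T) vec(B) = vec(T B T^{-1})")
```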
|
<p>Could anyone give me a hint on how to show that the zero section of any smooth vector bundle is smooth?</p>
<p>Zero section is a map $\xi:M\rightarrow E$ defined by $$\xi(p)=0\qquad\forall p\in M.$$</p>
| Community | -1 | <p>Say $E\to M$ is a smooth vector bundle, and let $p\in M$. Then there is a neighborhood $U\subseteq M$ of $p$ and a smooth local trivialization $\Phi:\pi^{-1}(U)\to U\times\mathbb R^k$ of $E$ over $U$. What can you say about $(\Phi\circ\xi)(q)$ where $q\in U$? Can you conclude that $\Phi\circ\xi$ is smooth? Now remember that $\Phi$ is a diffeomorphism.</p>
|
2,598,848 | <p>I have a project I'm doing in desmos and want to know if this is possible.</p>
<p>I want to be able to set some parameter, a, equal to the solution of f(x) = 0. I plugged my specific equation into Maple and it could not arrive at an explicit solution for x.</p>
<p>However, when I type in the equation f(x) = 0 into desmos, it does a fine job of plotting the x values for which this equation is true.</p>
<p>My question is this: Is it possible to take this x value(s) and set my parameter (AKA constant or 'slider') equal to it? Does desmos provide this functionality?</p>
<p>Thanks!</p>
| Emilio Novati | 187,568 | <p>I suppose $a<b$. The intersection points of the two ellipses are (see the figure where $a=2$ and $b=4$) $A,B,C,D =(\pm k,\pm k)$ with
$$k=\frac{ab}{\sqrt{a^2+b^2}}$$ </p>
<p><a href="https://i.stack.imgur.com/ZpHMj.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZpHMj.jpg" alt="enter image description here"></a></p>
<p>and, by symmetry, the searched area is
$$
4\left(k^2+\frac{2b}{a}\int_k^a \sqrt{a^2-x^2}dx\right)
$$</p>
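<p>A numerical sketch (with the figure's values $a=2$, $b=4$, an assumption taken from the plot) comparing the formula with a direct integration of the intersection region:</p>

```python
import math

a, b = 2.0, 4.0
k = a * b / math.sqrt(a * a + b * b)

def trapz(g, lo, hi, n=100000):
    # Simple trapezoidal rule; adequate for these integrands.
    h = (hi - lo) / n
    s = 0.5 * (g(lo) + g(hi))
    for i in range(1, n):
        s += g(lo + i * h)
    return s * h

def top(x):
    # Upper boundary of the intersection in the first quadrant:
    # the lower of the two ellipse arcs (the region is empty for x > a).
    y1 = (b / a) * math.sqrt(max(a * a - x * x, 0.0))  # x^2/a^2 + y^2/b^2 = 1
    y2 = (a / b) * math.sqrt(max(b * b - x * x, 0.0))  # x^2/b^2 + y^2/a^2 = 1
    return min(y1, y2)

direct = 4 * trapz(top, 0.0, a)
formula = 4 * (k * k + (2 * b / a) * trapz(
    lambda x: math.sqrt(max(a * a - x * x, 0.0)), k, a))
print(direct, formula)  # the two values should agree closely
```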
|
<p>Is it possible to find an example of a one-sided inverse of a function, other than a matrix?</p>
<p>I am trying to find such an example but having no luck. Anybody got an idea about it?</p>
| boxotimbits | 299,738 | <p>Think of the function $f:\{0,1\}\to \{1\}$ that is constant. It has a right inverse $g:\{1\}\to\{0,1\}$ which maps $1\mapsto 1$. This is because $f\circ g=Id_{\{1\}}$, the identity function on $\{1\}$. But it doesn't have a left inverse since it is not injective (1 to 1). </p>
<p>Also you should notice that the right inverse $g$ isn't even unique.</p>
|
<p>Is it possible to find an example of a one-sided inverse of a function, other than a matrix?</p>
<p>I am trying to find such an example but having no luck. Anybody got an idea about it?</p>
| Federico Poloni | 65,548 | <p>Another classical example is the shift maps. Let $S$ be the space of real sequences $S = \{(a_0,a_1,a_2,\dots) : a_i\in\mathbb{R} \, \text{for all $i$}\}$ (which, essentially, is the space of functions $\mathbb{N}\to\mathbb{R}$).</p>
<p>Define the "left shift map" $f:S\to S$ as
$$
f((a_0,a_1,a_2,a_3,\dots)) = (a_1,a_2,a_3,\dots)
$$
(i.e., remove the first element from the sequence), and the "right shift map" $g:S\to S$ as
$$
g((a_0,a_1,a_2,\dots)) = (0,a_0,a_1,a_2,\dots)
$$
(i.e., add a zero at the beginning of the sequence).</p>
<p>Then, $f\circ g$ is the identity, but $g\circ f$ is not.</p>
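<p>The phenomenon is easy to reproduce on finite prefixes of sequences; a small sketch (modelling a sequence by a tuple of its first terms):</p>

```python
# Model a sequence by a tuple of its first few terms; the true maps act on
# infinite sequences, but a prefix is enough to see the asymmetry.
def left_shift(seq):
    # f: remove the first element
    return seq[1:]

def right_shift(seq):
    # g: prepend a zero
    return (0,) + seq

a = (5, 7, 11, 13)
assert left_shift(right_shift(a)) == a   # f o g is the identity
assert right_shift(left_shift(a)) != a   # g o f is not
print(right_shift(left_shift(a)))  # (0, 7, 11, 13)
```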
|
<p>Why does <code>HoldForm</code> do this?</p>
<p><a href="https://i.stack.imgur.com/DDFDT.png" rel="noreferrer"><img src="https://i.stack.imgur.com/DDFDT.png" alt="enter image description here" /></a></p>
<pre><code>HoldForm[a/b \[Integral]f[x] \[DifferentialD]x]
</code></pre>
<p>The output is this ugly-looking formula:
<span class="math-container">$$\frac{a \int f(x) \, dx}{b}$$</span></p>
<p>I want the output to look exactly same as I typed it:
<span class="math-container">$$\frac{a}{b} \int f(x) \, dx$$</span></p>
<p>What is the purpose of <code>HoldForm</code> when it does not hold the form of expression as it has been typed?</p>
<p>It always rearranges fractions from <span class="math-container">$\frac{a}{b} c$</span> to <span class="math-container">$\frac{a c}{b}$</span>.</p>
| Alexey Popkov | 280 | <p><sup>(The basic idea is from <a href="https://mathematica.stackexchange.com/a/186026/280">this</a> answer by <a href="https://mathematica.stackexchange.com/users/1871/xzczd">xzczd</a>.)</sup></p>
<p>Try this:</p>
<pre><code>MakeBoxes[Times[a_, b_], StandardForm] := RowBox[{MakeBoxes@a, MakeBoxes@b}]
MakeBoxes[Times[a_, Power[b_, -1]], StandardForm] := MakeBoxes[Divide[a, b]]
</code></pre>
<p>Now</p>
<pre><code>Hold[a/b \[Integral]f[x] \[DifferentialD]x]
</code></pre>
<p><a href="https://i.stack.imgur.com/NOWTD.png" rel="noreferrer"><img src="https://i.stack.imgur.com/NOWTD.png" alt="screenshot" /></a></p>
|
<p>I am trying to plot contour plots, and some of them show a white patch because of a change in sign of the plotted values. How can I make the contour plot show this change in sign without having to take the absolute value of the data?</p>
<p>The data is the following:</p>
<pre><code>axialP1plot= {{30, 0, 0.509185}, {60, 0, 0.474159}, {90, 0, 0.452413}, {120, 0,
0.450468}, {0, 0.6, 0.422016}, {30, 0.6, 0.365962}, {60, 0.6,
0.263496}, {90, 0.6, 0.200892}, {120, 0.6, 0.188312}, {0, 1.2,
0.316512}, {30, 1.2, 0.140834}, {60, 1.2, -0.129596}, {90,
1.2, -0.248149}, {120, 1.2, -0.246671}, {0, 1.8, 0.211008}, {30,
1.8, -0.454357}, {60, 1.8, -1.08961}, {90, 1.8, -1.13311}, {120,
1.8, -0.979351}, {0, 2.4, 0.105504}, {30, 2.4, -3.50926}, {60,
2.4, -3.56817}, {90, 2.4, -2.62564}, {120, 2.4, -1.96434}, {0, 2.9,
0.017584}, {30, 2.9, -13.3785}, {60, 2.9, -6.04938}, {90,
2.9, -3.59638}, {120, 2.9, -2.49518}, {0, 0, 0.52752}}
</code></pre>
<p>And the contour plot:</p>
<pre><code>ListContourPlot[axialP1plot,
ColorFunction -> "Rainbow",
Contours -> 10,
PlotLegends -> Automatic,
PlotLabel -> Style["Axial Force on Plate 1", FontSize -> 14]]
</code></pre>
<p>Which yields:</p>
<p><a href="https://i.stack.imgur.com/PxgKQ.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PxgKQ.jpg" alt="enter image description here"></a></p>
| dantopa | 29,452 | <p>Use <code>Contours -> {0}</code> to force the contour line at zero value.<a href="https://i.stack.imgur.com/x7MgK.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/x7MgK.jpg" alt="enter image description here"></a></p>
<pre><code>ListContourPlot[axialP1plot, ColorFunction -> "Rainbow",
Contours -> {0}, ContourStyle -> {Black, Thick},
PlotLegends -> Automatic,
PlotLabel -> Style["Axial Force on Plate 1", FontSize -> 14]]
</code></pre>
|
<p>Consider the angle $\theta$ with $$0<\theta<\frac{\pi}{2}.$$ Suppose that $$\cos\theta=\frac{l}{n} \quad \text{and} \quad \sin\theta=\frac{m}{n}$$
are rational numbers, with positive integers $l,m$ and $n$. Show that $l$ and $m$ cannot both be odd integers.<br>
MY ATTEMPT: Assume that $l$ and $m$ are both odd integers. Then $\cos^2\theta=\frac{l^2}{n^2}$ and $\sin^2\theta=\frac{m^2}{n^2}$; adding these equations and using the Pythagorean identity, we obtain $$l^2+m^2=n^2.$$
Notice we can conclude $n$ is even, and we must find a contradiction. I am unsure where to go from here; any help is appreciated!</p>
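<p>The claim can be cross-checked by brute force (a small script added for verification): no integer solution of $l^2+m^2=n^2$ has both $l$ and $m$ odd, since then $l^2+m^2\equiv 2 \pmod 4$ while every square is $\equiv 0$ or $1 \pmod 4$.</p>

```python
# Exhaustive check: among integer solutions of l^2 + m^2 = n^2,
# the two legs l and m are never both odd.
both_odd = [
    (l, m, n)
    for n in range(1, 150)
    for l in range(1, n)
    for m in range(1, n)
    if l * l + m * m == n * n and l % 2 == 1 and m % 2 == 1
]
print(both_odd)  # [] -- no triple with both legs odd
```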
| RRL | 148,510 | <p>You got the right answer for the wrong reason, e.g., $(2^n)^{1/n} \to 2 \neq 1$.</p>
<p>Cauchy's second limit theorem states</p>
<p>$$\lim_{n \to \infty} (a_n)^{1/n} = \lim_{n \to \infty} \frac{a_{n+1}}{a_n},$$</p>
<p>if the limit on the RHS exists.</p>
<p>This case reduces to </p>
<p>$$\lim_{n \to \infty} \frac{1}{\cos(1/(n+1))} =1.$$</p>
|
602,353 | <p>Let $f: \mathbb{R} \to \mathbb{R}$ be given by $$f(x)= \begin{cases}
-x-1, &\text{if }x≥0\\
1, &\text{if }x<0.
\end{cases}$$</p>
<p>Denote by</p>
<ul>
<li>$U$ the <em>usual</em> topology on $\mathbb{R}$.</li>
<li>$H$ the <em>half-open interval</em> topology on $\mathbb{R}$.</li>
<li>$C$ is the <em>open half-line topology</em> on $\mathbb{R}$. </li>
</ul>
<p>Is $f$: </p>
<ol>
<li>$U$-$U$ continuous?</li>
<li>$U$-$H$ continuous?</li>
<li>$U$-$C$ continuous?</li>
<li>$H$-$U$ continuous?</li>
<li>$C$-$C$ continuous?</li>
<li>$C$-$U$ continuous?</li>
<li>$C$-$H$ continuous?</li>
<li>$H$-$H$ continuous?</li>
</ol>
<p>So, I kinda understand how to find things that are $U$-$U$ or whatever continuous, but the function is kinda throwing me off. So maybe some tips or suggestions as to where to start. </p>
| Randomstop | 113,784 | <p>Here is a breakdown of my attempt for each one of my questions:</p>
<ol>
<li><p>U-U Continuous: $(-2,1)$ is open in the U topology. $f^{-1}((-2,1))=[0,1)$. $[0,1)$ is not open in the U topology and therefore f is not U-U continuous.</p></li>
<li><p>U-H Continuous: $(-2,1)$ is also H-open, which implies that f is not U-H continuous. </p></li>
<li><p>U-C Continuous: Yes, it is U-C continuous, though I am not sure why.</p></li>
<li><p>C-C Continuous: For example, $f^{-1}((0,+\infty))= (-\infty,0)$, which is not C-open, and therefore $f$ is not C-C continuous.</p></li>
<li><p>H-U Continuous: $[-3,1)$ is H-open but not U-open, so computing $f^{-1}([-3,1))=[0,2]$ (which is not H-open) says nothing about H-U continuity; for that, one must test preimages of U-open sets. In fact, $f$ is right-continuous at every point, which makes it H-U continuous.</p></li>
<li><p>C-U Continuous: No; as in #1, $f^{-1}((-2,1))=[0,1)$, which is not C-open, and therefore $f$ is not C-U continuous.</p></li>

<li><p>C-H Continuous: No, see #4.</p></li>
<li><p>H-H Continuous: No, similar to #5.</p></li>
</ol>
<p>I am really not sure if this is correct and would really appreciate corrections or feedback!</p>
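<p>The preimage computations above can be spot-checked numerically (a quick sampling sketch, added for verification):</p>

```python
def f(x):
    # The function from the question
    return -x - 1 if x >= 0 else 1.0

# Compare membership in f^{-1}((-2, 1)) with [0, 1), and membership in
# f^{-1}([-3, 1)) with [0, 2], on a grid of sample points.
for i in range(-4000, 4001):
    x = i / 1000.0
    assert (-2 < f(x) < 1) == (0 <= x < 1)
    assert (-3 <= f(x) < 1) == (0 <= x <= 2)
print("f^{-1}((-2,1)) = [0,1) and f^{-1}([-3,1)) = [0,2] at sampled points")
```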
|
<p>Let <span class="math-container">$a,~b\in\Bbb Q$</span> and suppose <span class="math-container">$\sqrt{a},~\sqrt{b}$</span> are irrational and <span class="math-container">$\sqrt{a}-\sqrt{b}\in\Bbb Q$</span>. I want to prove that <span class="math-container">$\sqrt{a}-\sqrt{b}=0$</span>; that is, <span class="math-container">$\sqrt{a}=\sqrt{b}$</span>. It seems straightforward by observing some practical numbers. However, I found it hard to write a formal proof.</p>
<p>I have tried squaring them, but gained nothing. And I have also searched on this site but nothing was found.</p>
| Alessandro Cigna | 641,678 | <p>Observe that <span class="math-container">$(\sqrt a-\sqrt b)(\sqrt a +\sqrt b)=a-b\in \Bbb Q$</span>. If <span class="math-container">$\sqrt a\ne\sqrt b$</span>, then <span class="math-container">$\sqrt a+\sqrt b=\frac {a-b}{\sqrt a-\sqrt b}\in\Bbb Q$</span>. But then <span class="math-container">$\sqrt a=\frac 12(\sqrt a-\sqrt b+ \sqrt a+\sqrt b)\in\Bbb Q$</span>, a contradiction.</p>
<p>We conclude <span class="math-container">$\sqrt a=\sqrt b$</span></p>
|
<p>Let <span class="math-container">$a,~b\in\Bbb Q$</span> and suppose <span class="math-container">$\sqrt{a},~\sqrt{b}$</span> are irrational and <span class="math-container">$\sqrt{a}-\sqrt{b}\in\Bbb Q$</span>. I want to prove that <span class="math-container">$\sqrt{a}-\sqrt{b}=0$</span>; that is, <span class="math-container">$\sqrt{a}=\sqrt{b}$</span>. It seems straightforward by observing some practical numbers. However, I found it hard to write a formal proof.</p>
<p>I have tried squaring them, but gained nothing. And I have also searched on this site but nothing was found.</p>
| Andrea Mori | 688 | <p>This is a <strong>hint</strong></p>
<p>Suppose <span class="math-container">$q=\sqrt a-\sqrt b\in\mathbb{Q}$</span>. Then <span class="math-container">$q^2=(\sqrt a-\sqrt b)^2=a+b-2\sqrt{ab}\in\mathbb{Q}$</span> and so
<span class="math-container">$$
\sqrt{ab}=\frac{a+b-q^2}2\in\mathbb{Q}
$$</span>
with <span class="math-container">$ab\in\mathbb{Q}$</span>.</p>
<p>What are the <span class="math-container">$q\in\mathbb{Q}$</span> such that <span class="math-container">$\sqrt q\in\mathbb{Q}$</span>, say in terms of their prime factorization?</p>
|
3,827,302 | <p>In part of Kenneth Falconer's proof of the countable stability of Hausdorff dimension on p. 49, sect 3.2 of <a href="https://www.wiley.com/en-us/Fractal+Geometry%3A+Mathematical+Foundations+and+Applications%2C+3rd+Edition-p-9781119942399" rel="nofollow noreferrer">Fractal Geometry</a>, I understand him to say that
<span class="math-container">$$\dim_H \bigcup_{i=1}^{\infty}F_i\leq \sup_{1\leq i\leq\infty}\{\dim_H F_i\}\;,$$</span>
because when <span class="math-container">$s>\dim_H F_i$</span> for all <span class="math-container">$i$</span>, <span class="math-container">${\cal H}^s(F_i)=0$</span>, and thus
<span class="math-container">$${\cal H}^s(\bigcup_{i=1}^{\infty} F_i) \leq \sum_{i=1}^{\infty}{\cal H}^s(F_i) = 0\;.$$</span>
Here <span class="math-container">$\dim_H$</span> is the Hausdorff dimension, and <span class="math-container">${\cal H}^s$</span> is the <span class="math-container">$s$</span>-dimensional Hausdorff measure.</p>
<p>I understand everything after <span class="math-container">$s>\dim_H F_i$</span> above, but I'm confused about why <span class="math-container">${\cal H}^s(\bigcup_{i=1}^{\infty} F_i) \leq \sum_{i=1}^{\infty}{\cal H}^s(F_i) = 0$</span> implies <span class="math-container">$\dim_H \bigcup_{i=1}^{\infty}F_i\leq \sup_{1\leq i\leq\infty}\{\dim_H F_i\}$</span>.</p>
<p>I understand that Hausdorff dimension of a set <span class="math-container">$G$</span> is the Hausdorff measure for which <span class="math-container">${\cal H}^s(G)$</span> is finite, such that for <span class="math-container">$s>\dim_H G$</span>, the Hausdorff measure is <span class="math-container">$0$</span>, and for <span class="math-container">$s<\dim_H G$</span>, the Hausdorff measure is infinite. I don't see why the fact that <span class="math-container">${\cal H}^s(\bigcup_{i=1}^{\infty} F_i) \leq 0$</span> for an <span class="math-container">$s$</span> that is larger than the dimension implies the first inequality above, which concerns a <span class="math-container">$\sup$</span> for Hausdorff dimensions that are possibly greater than <span class="math-container">$0$</span>. I'm sure there must be something obvious that I'm not seeing. I've been thinking about it for a week and I'm still confused.</p>
<p><a href="https://math.stackexchange.com/a/772605/73467">This answer</a> gives a detailed proof of the part I already understand, but leaves my question unanswered.</p>
| usr0192 | 275,654 | <p>Based on Kevin Arlin's answer here is my revised understanding (decided to post as an answer because not enough space for a comment)</p>
<p>For composition fill, in the <span class="math-container">$\Lambda^3_1$</span>-horn
<a href="https://i.stack.imgur.com/pBIXQ.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pBIXQ.jpg" alt="enter image description here" /></a></p>
<p>and for inverse here is an argument using just filling inner horns:
<a href="https://i.stack.imgur.com/IIjDi.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IIjDi.jpg" alt="enter image description here" /></a></p>
|
2,111,579 | <p>A question:</p>
<p>If $\displaystyle{m}^{2}={1}-{m},{\quad\text{and}\quad}{n}^{2}={1}-{n},{\quad\text{and}\quad}{n}\ne{m};$</p>
<p>Proof that $\displaystyle{m}^{7}+{n}^{7}+{30}={1}$</p>
<p>Without finding the roots of equation $\displaystyle{x}^{2}+{x}-{1}={0}$.</p>
<p>Is there such a shortcut solution?</p>
| Community | -1 | <p>Yes, your inverse is correct. Let me show you an alternative approach to finding this inverse without the use of the adjugate matrix.
<hr>
Denote $A_n$ as the $n \times n$ matrix that has elements $a_{ij} = 1$ if $i \geq j$, and $0$ otherwise. Then, $A_n$ can be constructed by applying the elementary row operations $$R_2 + R_1, R_3 + R_2, \ldots, R_n + R_{n-1}$$ to the identity matrix $I_n$. So, $$A_n = \prod_{i=1}^{n-1} E_i,$$ where $E_i$ is the elementary matrix representing the row operation $R_{i+1} + R_i$. Since $\det(A_n)$ is nonzero, it is invertible, hence $$B_n = \left( A_n \right)^{-1} = \prod_{i = n-1}^{1} {E_i}^{-1},$$ where ${E_i}^{-1}$ is the row operation $R_{i+1} - R_i$. That is, $B_n$ can be constructed by applying the row operations $$R_n - R_{n-1}, R_{n-1} - R_{n-2},\ldots,R_2 - R_1$$ to the identity matrix $I_n$. The inverse of $A_n$ is therefore given by the matrix<br>
$$B_n = \begin{pmatrix}
\phantom{-}1 & \phantom{-}0 &\phantom{-}0 &\ldots &0\\
-1 & \phantom{-}1 & \phantom{-}0 &\ldots &0\\
\phantom{-}0 & -1 & \phantom{-}1 &\ldots &0\\
\phantom{-}\vdots &\phantom{-}\vdots & \phantom{-}\vdots &\ddots & \vdots\\
\phantom{-}0 & \phantom{-}0 & \phantom{-}0 & \cdots &1 \end{pmatrix}.$$
In order to check that this is the correct inverse, you can use the identity $$A_nB_n = I_n.$$</p>
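A quick way to sanity-check this is to multiply the two matrices directly. The sketch below (plain Python; the function names are my own, not from the answer) builds $A_n$ and the claimed inverse $B_n$ and verifies $A_nB_n = I_n$ for small $n$:

```python
def A(n):
    # lower-triangular ones matrix: a_ij = 1 if i >= j, else 0
    return [[1 if i >= j else 0 for j in range(n)] for i in range(n)]

def B(n):
    # claimed inverse: 1 on the diagonal, -1 on the first subdiagonal
    return [[1 if i == j else (-1 if i == j + 1 else 0) for j in range(n)]
            for i in range(n)]

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def identity(n):
    return [[int(i == j) for j in range(n)] for i in range(n)]

for n in range(1, 8):
    assert matmul(A(n), B(n)) == identity(n)
```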
|
3,190,781 | <blockquote>
<p>Suppose the polynomial <span class="math-container">$P(x)$</span> with integer coefficients satisfies the following conditions:<br>
(A) If <span class="math-container">$P(x)$</span> is divided by <span class="math-container">$x^2 − 4x + 3$</span>, the remainder is <span class="math-container">$65x − 68$</span>.<br>
(B) If <span class="math-container">$P(x)$</span> is divided by <span class="math-container">$x^2 + 6x − 7$</span>, the remainder is <span class="math-container">$−5x + a$</span>.<br>
Then we know that <span class="math-container">$a =$</span>?</p>
</blockquote>
<p>I am struggling with this first question from the <a href="https://www.maa.org/sites/default/files/pdf/programs/JUEEDocument.pdf" rel="noreferrer">1990 Japanese University Entrance Examination</a>. Comments from the linked paper mention that this is a basic application of the "remainder theorem". I'm only familiar with the <a href="https://en.wikipedia.org/wiki/Polynomial_remainder_theorem" rel="noreferrer">polynomial remainder theorem</a> but I don't think that applies here since the remainders are polynomials. Do they mean the Chinese remainder theorem, applied to polynomials?</p>
<p>So for some <span class="math-container">$g(x)$</span> and <span class="math-container">$h(x)$</span> we have:
<span class="math-container">$$P(x) = g(x)(x^2-4x+3) + (65x-68),\\
P(x) = h(x)(x^2+6x-7) + (-5x+a),$$</span>
which looks to have more unknowns than equations. How should I proceed from here?</p>
| know dont | 663,454 | <p>We notice that: <span class="math-container">$x^2-4x+3=(x-1)(x-3)$</span> and <span class="math-container">$x^2+6x-7=(x-1)(x+7)$</span>.</p>
<blockquote>
<p>So, we have: <span class="math-container">$P(1)=65*1-68=-5*(1)+a$</span>.
<span class="math-container">$\implies -3=-5+a\iff a=2$</span>.</p>
</blockquote>
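To see this concretely, here is a hypothetical minimal example (not part of the original problem): the roots $1, 3, -7$ force $P(1)=-3$, $P(3)=65\cdot 3-68=127$, $P(-7)=-5\cdot(-7)+2=37$, and the unique quadratic through these points is $P(x)=7x^2+37x-47$. A short Python check of both remainder conditions (exact arithmetic; the helper name is my own):

```python
from fractions import Fraction as F

def poly_rem(num, den):
    # remainder of polynomial division; coefficients listed highest degree first
    num = [F(c) for c in num]
    while len(num) >= len(den):
        q = num[0] / den[0]
        for i in range(len(den)):
            num[i] -= q * den[i]
        num = num[1:]  # leading coefficient is now zero
    return num

P = [7, 37, -47]                               # 7x^2 + 37x - 47
assert poly_rem(P, [1, -4, 3]) == [65, -68]    # remainder 65x - 68
assert poly_rem(P, [1, 6, -7]) == [-5, 2]      # remainder -5x + a with a = 2
```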
|
2,014,806 | <p>I was hoping to get some help on a review problem from my text book. The problem reads: </p>
<p>Let $A$ be $m\times{n}$ matrix. Prove that</p>
<p>a) $A^*A$ is self-adjoint.</p>
<p>b) All eigenvalues of $A^*A$ are non-negative.</p>
<p>c) $A^*A + I$ is invertible.</p>
<p>For part a), it seems to follow very clearly that $(A^* A)^* =(A)^* (A^*)^* =A^*A$. So this first part is clear.</p>
<p>With parts b) and c) I am unsure how to begin. For b), would one want to take the inner product of $(A^*{A}x,x)$ and then say some $Ax=qx$ where $q$ is an eigenvalue and then get $(Ax,Ax)=(qx,qx)= |q|^2\|x\|^2$? So the eigenvalue is positive? That is what I am thinking but I am unsure if it shows what I am wanting to show/ does not have a ton of holes. With part c) I am just lost so would appreciate any advice possible on that one. Thank you!</p>
<p>Upon more thought, I have a new idea for b). So we have some $A^*Ax = qx$. So $(Ax,Ax)=(A^*Ax,x)=q(x,x)$. So $q=(Ax,Ax)/(x,x)=\|Ax\|^2/\|x\|^2$, which is greater than or equal to zero, so non-negative. Would that work?</p>
| Gyu Eun Lee | 52,450 | <p>For part (b), you are on the right track but you want to adopt a different approach. The question is about the eigenvalues of $A^*A$, not those of $A$. So suppose $x$ is an eigenvector of $A^*A$, i.e. $A^*Ax = qx$ for some scalar $q$. Now you can use your approach (use the inner product) to deduce things about $q$.</p>
<p>For part (c), there are a few things you could do. The easiest thing to do would be to show that the equation $(I+A^*A)x = 0$ has only the trivial solution. Use linearity to expand things out, and rearrange. How are $x$ and $A^*A$ related? Can you use the previous parts to say anything?</p>
<p>(There is also a high-powered proof of (c) using the spectral theorem - you can show that $I+A^*A$ has strictly positive eigenvalues, and therefore positive determinant. But that's akin to killing a mosquito with a sledgehammer.)</p>
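For intuition, here is a tiny concrete check (a $2\times 2$ complex matrix chosen arbitrarily; plain Python): $A^*A$ comes out self-adjoint, its trace and determinant (the sum and product of its eigenvalues in the $2\times 2$ case) are nonnegative, and $\det(I+A^*A)\neq 0$:

```python
A = [[1 + 2j, 3], [0, -1j]]  # arbitrary 2x2 complex matrix

def conj_transpose(M):
    return [[M[j][i].conjugate() for j in range(len(M))] for i in range(len(M[0]))]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

AsA = matmul(conj_transpose(A), A)

# (a) A*A is self-adjoint
assert conj_transpose(AsA) == AsA

# (b) for a 2x2 matrix: trace = sum of eigenvalues, det = product; both nonnegative
trace = AsA[0][0] + AsA[1][1]
det = AsA[0][0] * AsA[1][1] - AsA[0][1] * AsA[1][0]
assert trace.real >= 0 and det.real >= 0

# (c) det(I + A*A) != 0, so I + A*A is invertible
M = [[AsA[i][j] + (1 if i == j else 0) for j in range(2)] for i in range(2)]
assert abs(M[0][0] * M[1][1] - M[0][1] * M[1][0]) > 0
```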
|
2,876,040 | <p>So i have the following problem: I need to cover the whole interval from $1$ to $10^{12}$ on a computer program. But i can´t do it on a for loop because it would be too slow. So can i iterate over a smaller amount ($\sqrt{n}$ for instance), and still cover the whole interval? </p>
<p>What i've done so far:</p>
<p>Iterate from $1$ to $\sqrt{n}$, then cover $(n-i)$, $\frac{n}{i}$,$\frac{n}{n-i}$.</p>
<p>But i'm still missing some numbers.</p>
<p><em>Thanks in advance.</em> </p>
| Deepak gupta | 450,988 | <p>The optimization depends on what you want to do with the numbers in the interval. For example:</p>
<ol>
<li>If you want to write them to a file, then there is no optimization you can do.</li>
<li>If you want to find the multiples of 1000, then it is possible to optimize the loop down to $10^9$ iterations.</li>
</ol>
<p>The approach you tried will only change the order in which numbers are processed but you will still need to process all numbers. </p>
|
2,588,225 | <p>To start off, I'm very new to the complex plane and complex numbers.</p>
<p>I'm trying to determine if the series $\sum_{n=1}^\infty\frac{1}{n^2 + i}$ is convergent or divergent.</p>
<p>However, I'm not exactly sure how to express the value of the function. Could I just treat the denominator of the expression as the complex number's modulus and ignore the rotation in the plane? If I do that, I could determine that the series is convergent via a direct comparison test.</p>
<p>Geometrically speaking, it doesn't seem like a rotation would affect the convergence of a series. However, I don't know if I can consider the modulus of the complex number to be equivalent to its "value", and this is the cause of my confusion.</p>
| max_zorn | 506,961 | <p>$$\left|\frac{1}{n^2+i}\right| = \frac{1}{\sqrt{n^4+1}} < \frac{1}{\sqrt{n^4}} = \frac{1}{n^2};$$
thus, by the comparison test and $p$-test (with $p=2$) the series converges absolutely. </p>
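Numerically the bound is easy to see as well: the partial sums of the absolute values stay below $\sum 1/n^2 = \pi^2/6$. A quick Python check (the cutoff is arbitrary):

```python
import math

# partial sum of |1/(n^2 + i)| = 1/sqrt(n^4 + 1)
partial = sum(1 / math.sqrt(n**4 + 1) for n in range(1, 100001))
assert partial < math.pi**2 / 6   # comparison with sum 1/n^2
```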
|
3,772 | <p>In <a href="https://math.stackexchange.com/questions/102764/non-polynomial-functions-on-fields-of-finite-characteristic">this</a> question, there are two answers, both of which I up-voted, but I didn't "accept" either since there didn't appear to be a reason why one stands out as clearly preferable to the other. Apparently, my lack of "acceptance" offended someone who's taking me to task for it in comments on one of my questions. Some people (or at least one person) seem to get rather passionate about acceptances. Has anyone codified some criteria that say when one should "accept" an answer and what that is supposed to mean?</p>
| hmakholm left over Monica | 14,366 | <p>If your only problem is that there isn't a clear reason to prefer one answer to the other, but you still feel that <em>your question has been answered satisfactorily</em>, then I'd say to go ahead, flip a coin and accept one of them. Or choose the one whose author needs the rep the most, or whatever.</p>
<p>Getting the question marked as "no longer open" is more important than making sure nobody is treated more fairly than someone else, I think.</p>
<hr>
<p>That being said, I think there is too much mindless pressure for users to get their "accept rate" up, just for the sake of having it up. The person who took you to task complained about a 42% accept rate, which is slightly ridiculous in my eyes. I wish we could reserve that kind of remark for users who seem to be avoiding accepting anything on general principle (or out of ignorance, but then the remarks really should be phrased more kindly than they usually are).</p>
<p>One reason for this is that the warm fuzzy feeling that comes from having an answer accepted becomes considerably cheapened for me by the possibility that the answer may not have been very helpful for the asker at all; perhaps he's just being bullied into "working on his accept rate" for its own sake.</p>
|
1,591,990 | <p>A while ago I saw this question <a href="https://math.stackexchange.com/questions/1561877/quartic-diophantine-equation-16r4112r3200r2-112r16-s2">Quartic diophantine equation: $16r^4+112r^3+200r^2-112r+16=s^2$</a> which was very relevant to a undergraduate research paper I am currently working on. The answer given for this problem describes a method for determining solutions to diophantine equations by finding a "birational equivalence" to the solution set of an elliptic curve. </p>
<p>The result I am seeking depends on the solution set to a quartic diophantine equation of two variables and I believe that there is only one solution and no more (which was indicated by a program which checked possible values up to a high number).</p>
<p>So my question is, is there a general method for determining such a birational equivalence between solutions sets of diophantine equations and elliptic curves? And where is a good source for an undergraduate to learn to use these tools?</p>
| Allan MacLeod | 110,165 | <p>The following shows how to transform the quartic
\begin{equation}
y^2=ax^4+bx^3+cx^2+dx+e ,
\end{equation}
with $a,b,c,d,e \in \mathbb{Q}$, into an equivalent elliptic curve.</p>
<blockquote>
<p><strong>Case 1:</strong></p>
</blockquote>
<p>We first consider the case when $a=1$, which is very common. If $a=\alpha^2$ for some rational $\alpha$, we
substitute $y=Y/\alpha$ and $x=X/\alpha$, giving
\begin{equation*}
Y^2=X^4+\frac{b}{\alpha}X^3+cX^2+\alpha d X+\alpha^2 e
\end{equation*} Thus, suppose
\begin{equation}
y^2=x^4+bx^3+cx^2+dx+e
\end{equation}</p>
<p>We describe the method given by Mordell on page $77$ of his book Diophantine Equations, with some minor modifications.</p>
<p>We first get rid of the cubic term by making the standard substitution $x=z-b/4$ giving
\begin{equation*}
y^2=z^4+fz^2+gz+h
\end{equation*}
where
\begin{equation*}
f=\frac{8c-3b^2}{8} \hspace{1cm} g=\frac{b^3-4bc+8d}{8} \hspace{1cm}
h=\frac{-(3b^4-16b^2c+64bd-256e)}{256}
\end{equation*}</p>
<p>We now get rid of the quartic term by defining $y=z^2+u+k$, where $u$ is a new variable and $k$ is a constant to be
determined. This gives the quadratic in $z$
\begin{equation*}
(f-2(k+u))z^2+gz+h-k^2-u(2k+u)=0
\end{equation*}</p>
<p>For $x$ to be rational, then $z$ should be rational, so the discriminant of this quadratic would have to be a rational square.
The discriminant is a cubic in $u$. We do not get a term in $u^2$ if we make $k=f/6$, giving
\begin{equation*}
D^2=-8u^3+2\frac{f^2+12h}{3}u+\frac{2f^3-72fh+27g^2}{27}
\end{equation*}
and, if we substitute the formulae for $f,g,h$, and clear denominators we have
\begin{equation}
G^2=H^3+27(3bd-c^2-12e)H+27(27b^2e-9bcd+2c^3-72ce+27d^2)
\end{equation}
with
\begin{equation}
x=\frac{2G-3bH+9(bc-6d)}{12H-9(3b^2-8c)}
\end{equation}
and
\begin{equation}
y= \pm \frac{18x^2+9bx+3c-H}{18}
\end{equation}</p>
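As a sanity check on the formulas for $f,g,h$ above, one can substitute $x=z-b/4$ into a quartic with concrete rational coefficients and compare. A short Python sketch (exact arithmetic via `fractions`; the coefficient values and sample points are arbitrary choices of mine):

```python
from fractions import Fraction as F

def quartic(x, b, c, d, e):
    return x**4 + b*x**3 + c*x**2 + d*x + e

def depressed_coeffs(b, c, d, e):
    # f, g, h produced by the shift x = z - b/4, as in the formulas above
    f = F(8*c - 3*b**2, 8)
    g = F(b**3 - 4*b*c + 8*d, 8)
    h = -F(3*b**4 - 16*b**2*c + 64*b*d - 256*e, 256)
    return f, g, h

b, c, d, e = 3, -5, 7, 2
f, g, h = depressed_coeffs(b, c, d, e)
for z in [F(0), F(1), F(-2), F(5, 3)]:
    assert quartic(z - F(b, 4), b, c, d, e) == z**4 + f*z**2 + g*z + h
```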
<blockquote>
<p><strong>Case 2:</strong></p>
</blockquote>
<p>If $a \ne \alpha^2$, we need a rational point $(p,q)$ lying on the quartic. </p>
<p>Let $z=1/(x-p)$, so that $x=p+1/z$ giving
\begin{equation*}
y^2z^4=(ap^4+bp^3+cp^2+dp+e)z^4+(4ap^3+3bp^2+2cp+d)z^3+
\end{equation*}
\begin{equation*}
(6ap^2+3bp+c)z^2+(4ap+b)z+a
\end{equation*}
where the coefficient of $z^4$ is, in fact, $q^2$. Define, $y=w/z^2$, and then
$z=u/q^2, \, w=v/q^3$
giving
\begin{equation}
v^2=u^4+(4ap^3+3bp^2+2cp+d)u^3+q^2(6ap^2+3bp+c)u^2+
\end{equation}
\begin{equation*}
q^4(4ap+b)u+aq^6 \equiv u^4+fu^3+gu^2+hu+k
\end{equation*}</p>
<p>We now, essentially, complete the square. We can write
\begin{equation*}
y^2=u^4+fu^3+gu^2+hu+k=(u^2+mu+n)^2+(su+t)
\end{equation*}
if we set
\begin{equation*}
m=\frac{f}{2} \hspace{1cm} n=\frac{4g-f^2}{8} \hspace{1cm} s=\frac{f^3-4fg+8h}{8}
\end{equation*}
and
\begin{equation*}
t=\frac{-(f^4-8f^2g+16(g^2-4k))}{64}
\end{equation*}</p>
<p>This gives
\begin{equation*}
(y+u^2+mu+n)(y-u^2-mu-n)=su+t
\end{equation*}
and if we define $y+u^2+mu+n=Z$ we have
\begin{equation}
2(u^2+mu+n)=Z-\frac{su+t}{Z}
\end{equation}</p>
<p>Multiply both sides by $Z^2$, giving
\begin{equation*}
2u^2Z^2+2muZ^2+suZ=Z^3-2nZ^2-tZ
\end{equation*}
which, on defining $W=uZ$, gives
\begin{equation*}
2W^2+2mWZ+sW=Z^3-2nZ^2-tZ
\end{equation*}</p>
<p>Define, $Z=X/2$ and $W=Y/4$ giving
\begin{equation}
Y^2+2mXY+2sY=X^3-4nX^2-4tX
\end{equation}
which is an elliptic curve. If we define $Y=G-s-mX$, we transform to the form
\begin{equation}
G^2=X^3+(m^2-4n)X^2+(2ms-4t)X+s^2
\end{equation}</p>
<p>All of the above are easy to program in any symbolic package. Hopefully, I have all the formulae correct.</p>
|
330,677 | <p>I was thinking in the last few days about the following problem: is the sum of two Darboux functions a Darboux function?</p>
<p>Do you know a proof or a counterexample?</p>
| MathematicsStudent1122 | 238,417 | <p>There's a simpler example. Let $g,h: \mathbb{R} \to \mathbb{R}$ be given by</p>
<p>$$g(x)=\begin{cases}
{\sin x^{-1}} & x \neq 0 \\
1 & x=0\\
\end{cases}$$</p>
<p>$$h(x)=\begin{cases}
{-\sin x^{-1}} & x \neq 0 \\
0 & x=0\\
\end{cases}$$</p>
<p>These are Darboux. However, </p>
<p>$$g(x) + h(x) =\begin{cases}
0 & x \neq 0 \\
1 & x=0\\
\end{cases}$$</p>
<p>is not. </p>
|
1,016,585 | <p>I want to solve $$\int \frac{1}{\sqrt{x^2 - c}} dx\quad\quad\text{c is a constant}$$</p>
<p>How do I do this?</p>
<p>It looks like it is close to being an $\operatorname{arcsin}$?</p>
<hr>
<p>I would have thought I could just do:
$$\int \left(\sqrt{x^2 - c}\right)^{-\frac12}\, dx=\frac{2\sqrt{c+x^2}}{2x}\text{????}$$</p>
<p>But apparently not. </p>
| Community | -1 | <p>The trick is to use trig substitution:</p>
<p>HINT:</p>
<p>Let $x = \sqrt{c} \sec u \implies x^2 = c \sec^2u \implies x^2 - c = c ( \sec^2 u - 1 ) = c \tan^2 u$</p>
<p>and</p>
<p>$$ dx = \sqrt{c} \sec u \tan u \, du $$</p>
<p>Hence, your integral looks like</p>
<p>$$ \int \frac{\sqrt{c} \sec u \tan u}{\sqrt{c} \tan u }\, du = \int \sec u \, du = \ldots $$</p>
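Continuing the hint (for $c>0$ and $x>\sqrt c$), the substitution reduces the integral to $\int\sec u\,du=\ln|\sec u+\tan u|+C$, which back-substitutes to $\ln\left|x+\sqrt{x^2-c}\right|+C$ up to an additive constant. A quick numerical derivative check in Python (the value of $c$ and the test points are arbitrary):

```python
import math

c = 3.0  # arbitrary positive constant

def antideriv(x):
    # claimed antiderivative of 1/sqrt(x^2 - c), up to a constant
    return math.log(x + math.sqrt(x**2 - c))

def integrand(x):
    return 1.0 / math.sqrt(x**2 - c)

h = 1e-6
for x in [2.0, 3.0, 5.0, 10.0]:
    central_diff = (antideriv(x + h) - antideriv(x - h)) / (2 * h)
    assert abs(central_diff - integrand(x)) < 1e-6
```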
|
30,970 | <p>I try to filter sublists of a list which match a pattern. </p>
<pre><code>test = {{"String1", "a"}, {"String2", "b"}, {"String3",
"a"}, {"String4", "a"}};
</code></pre>
<p>The result should be:</p>
<pre><code>result = {{"String1", "a"}, {"String3", "a"}, {"String4", "a"}}
</code></pre>
<p>That means the first entry should be any String and the second should be "a".</p>
<p>I tried:</p>
<pre><code>Select[test, (# == {_, "a"}) &]
</code></pre>
<p>Which evaluates to {}. </p>
| user1066 | 106 | <p>Another possibility:</p>
<pre><code>Pick[#, (Thread@#)[[2]], "a"] &@test
</code></pre>
<blockquote>
<pre><code>{{"String1", "a"}, {"String3", "a"}, {"String4", "a"}}
</code></pre>
</blockquote>
|
1,531,525 | <p>I would like to compute the following sum:</p>
<p>$$\sum_{n=0}^{\infty} \frac{\cos n\theta}{2^n}$$</p>
<p>I know that it involves using complex numbers, although I'm not sure how exactly I'm supposed to do so. I tried using the fact that $\cos \theta = {e^{i\theta} + e^{-i\theta}\over 2}$. I'm not sure how to proceed from there though. A hint would be appreciated. </p>
| Anurag A | 68,092 | <p>Consider the series
$$S=\sum_{n=0}^{\infty}\left(\frac{e^{i\theta}}{2}\right)^n.$$
This is a geometric series whose sum is
$$S=\frac{2}{2-e^{i\theta}}.$$
Now the real part of $S$ is the sum you are looking for.</p>
|
1,531,525 | <p>I would like to compute the following sum:</p>
<p>$$\sum_{n=0}^{\infty} \frac{\cos n\theta}{2^n}$$</p>
<p>I know that it involves using complex numbers, although I'm not sure how exactly I'm supposed to do so. I tried using the fact that $\cos \theta = {e^{i\theta} + e^{-i\theta}\over 2}$. I'm not sure how to proceed from there though. A hint would be appreciated. </p>
| Caleb Stanford | 68,107 | <p>\begin{align*}
\sum_{n=0}^{\infty} \frac{\cos n\theta}{2^n}
&= \sum_{n=0}^\infty \frac{e^{i n \theta} + e^{- i n \theta}}{2^{n+1}} \\
&= \sum_{n=0}^\infty \frac{(e^{i \theta})^n + (e^{- i \theta})^n}{2^{n+1}} \\
&= \sum_{n=0}^\infty \frac{(e^{i \theta})^n}{2^{n+1}} + \sum_{n=0}^\infty \frac{(e^{- i \theta})^n}{2^{n+1}} \\
&= \sum_{n=0}^\infty \frac12 \left(\frac{e^{i \theta}}{2}\right)^n
+ \sum_{n=0}^\infty \frac12 \left(\frac{e^{-i \theta}}{2}\right)^n .
\end{align*}</p>
<p>These last two sums are geometric series. Can you finish it?</p>
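For a numerical double-check of the closed form: the real part of $2/(2-e^{i\theta})$ equals $\frac{4-2\cos\theta}{5-4\cos\theta}$, and the partial sums converge to it quickly. A Python sketch (function names are my own):

```python
import cmath
import math

def partial_sum(theta, n_terms):
    return sum(math.cos(n * theta) / 2**n for n in range(n_terms))

def closed_form(theta):
    # real part of 2 / (2 - e^{i theta}) = (4 - 2 cos t) / (5 - 4 cos t)
    return (2 / (2 - cmath.exp(1j * theta))).real

for theta in [0.0, 0.5, 1.3, math.pi / 2, 3.0]:
    assert abs(partial_sum(theta, 60) - closed_form(theta)) < 1e-12
```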
|
209,259 | <blockquote>
<p><strong>Possible Duplicate:</strong><br>
<a href="https://math.stackexchange.com/questions/16374/universal-chord-theorem">Universal Chord Theorem</a> </p>
</blockquote>
<p>I am having a problem with this exercise. Could someone help?</p>
<p>Suppose $a \in (0,1)$ is a real number which is not of the form $\frac{1}{n}$ for any natural number $n$. Find a function $f$ which is continuous on $[0, 1]$ and such that $f (0) = f (1)$ but which does not satisfy $f (x) = f (x + a)$ for any $x$ with $x$, $x + a \in [0, 1]$.</p>
<p>I noticed that this condition is satisfied if and only if $f(x) \geq f(0)$</p>
<p>Thank you in advance</p>
| Hagen von Eitzen | 39,174 | <p><strong>Hint:</strong>
Since $\frac1a\notin\mathbb N$, there is some $n\in\mathbb N_0$ such that $n<\frac1a<n+1$. Let $r= 1-na$. Then $0<r<a$.
Let us try to find a function such that $f(x+a)-f(x)=1$ for all $x$ with $0\le x \le 1-a$.
Thus we have by induction $f(x)=k+f(t)$ if $x=ka+t$ with $k\in\mathbb N_0$ and $0\le t<a$ .
Then for $f(1)=f(0)$ we need $f(0)=f(1)=f(na+r)=f(r)+n$, i.e. $f(r)=-n$. See if you can build a complete $f$ from this.</p>
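For a concrete alternative to this construction, the standard counterexample $f(x)=\sin^2(\pi x/a)-x\sin^2(\pi/a)$ also works: $f(0)=f(1)=0$, while $f(x+a)-f(x)=-a\sin^2(\pi/a)\neq 0$ whenever $1/a\notin\mathbb N$. A quick Python verification (the value $a=0.4$ is an arbitrary choice):

```python
import math

def f(x, a):
    return math.sin(math.pi * x / a)**2 - x * math.sin(math.pi / a)**2

a = 0.4  # 1/a = 2.5 is not a natural number
assert abs(f(0, a) - f(1, a)) < 1e-12            # f(0) = f(1)

gap = -a * math.sin(math.pi / a)**2              # the constant nonzero gap
assert abs(gap) > 0.1
for x in [0.0, 0.1, 0.25, 0.5, 0.6]:
    assert abs(f(x + a, a) - f(x, a) - gap) < 1e-12
```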
|
224,474 | <p>I want to generate the following matrix for any n:</p>
<pre><code> Table[A[i, j], {i, 0, n}, {j, 0, n}];
</code></pre>
<p>Where,</p>
<pre><code> A ={{0,1,0},{0,0,2},{0,0,0}} when n=2;
A={{0,1,0,0},{0,0,2,0},{0,0,0,3},{0,0,0,0}} when n=3;
A={{0,1,0,0,0},{0,0,2,0,0},{0,0,0,3,0},{0,0,0,0,4},{0,0,0,0,0}} when n=4, and so on for any n.
</code></pre>
<p>Thanks</p>
| Bob Hanlon | 9,362 | <pre><code>Clear["Global`*"]
A[n_Integer?Positive] := DiagonalMatrix[Range[n], 1, n + 1]
A /@ Range[2, 4]//Column
</code></pre>
<p><a href="https://i.stack.imgur.com/M6CK8.png" rel="noreferrer"><img src="https://i.stack.imgur.com/M6CK8.png" alt="enter image description here" /></a></p>
|
199,946 | <p>I came across this step in an inductive proof, but my algebra skills seem a bit rusty..</p>
<p>$7(7^{k+2}) + 64(8^{2k+1}) = 7(7^{k+2}+8^{2k+1})+57(8^{2k+1})$</p>
<p>How did they do this?</p>
<p>Note: The point was to show that the expressions are divisible by 57.</p>
| Chris Eagle | 5,203 | <p>$64=7+57$, so $64\cdot 8^{2k+1}=(7+57)\cdot 8^{2k+1}=7\cdot 8^{2k+1}+57 \cdot 8^{2k+1}$.</p>
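The rewriting $7\cdot 7^{k+2}+64\cdot 8^{2k+1}=7(7^{k+2}+8^{2k+1})+57\cdot 8^{2k+1}$ is exactly the inductive step; a quick brute-force Python check of both the identity and the divisibility claim:

```python
for k in range(50):
    assert (7**(k + 2) + 8**(2*k + 1)) % 57 == 0
    # the rewritten form from the proof is the same number
    lhs = 7 * 7**(k + 2) + 64 * 8**(2*k + 1)
    rhs = 7 * (7**(k + 2) + 8**(2*k + 1)) + 57 * 8**(2*k + 1)
    assert lhs == rhs
```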
|
1,256,688 | <p>Given a sequence of real numbers, a move consists of choosing two terms and replacing each with their arithmetic mean. Show that there exists a sequence of 2015 distinct real numbers such that after one initial move is applied to the sequence — no matter what move — there is always a way to continue with a finite sequence of moves so as to obtain in the end a constant sequence.</p>
| mweiss | 124,095 | <ol>
<li>If you assume $A,B,C$ and are able to use them to not only deduce $Z$ but also a contradiction to $Z$, then the assumptions you began with are <em>inconsistent</em>, and then you <em>really</em> have a problem, because literally anything can be deduced from a set of inconsistent hypotheses. So what you seem to be asking is: How do you know if a set of assumptions is consistent? In general, this can be quite difficult; any proof of consistency must necessarily rest on some other set of assumptions, and you then would have to worry about whether <em>those</em> assumptions are consistent. At the most fundamental level, the problem is fundamentally unsolvable; you might want to read up on Gödel's Incompleteness Theorems (the <a href="http://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_theorems" rel="nofollow">Wikipedia article</a> is a good place to start).</li>
<li>This question is in a sense the complement of the previous one. If a set of assumptions does not lead to any contradictions, then the set is <em>consistent</em>. That does not necessarily mean that it is <em>true</em> -- to answer that question you would have to decide what "true" means. A good example of how this has historically worked is the development of non-Euclidean geometry. (Again, <a href="http://en.wikipedia.org/wiki/Non-Euclidean_geometry" rel="nofollow">the Wikipedia article</a> is a good place to begin.)</li>
</ol>
|
1,826,094 | <p>$i\cdot\bar{z} = 2+2i$</p>
<p>I know that $\bar{z} = a-bi$, so then I get $i(a-bi)=2+2i$.
Then $ai+b=2+2i$ (because $i^2=-1$).</p>
<p>When two complex numbers are equal, you can usually equate their parts.
Ex: $2+2i=a+bi$ so $a=2$ and $b=2$.</p>
<p>But in this case I don't have $a+bi$; I have $ai+b$.</p>
<p>The answer given is $z=2-2i$, but I don't understand how to get there.</p>
<p>Thanks</p>
| levap | 32,262 | <p>It might be useful also to think of the problem geometrically. The operation $z \mapsto i\overline{z}$ takes the vector $z$, reflects it across the $x$-axis (this is $z \mapsto \overline{z}$) and then rotates it by $90$ degrees counterclockwise (this is $w \mapsto iw$). Solving $i\overline{z} = 2 + 2i$ amounts to finding a complex number $z$ such that if $z$ is reflected and then rotated, we get $2 + 2i$. Since $2 + 2i$ lies in the first quadrant and makes an angle of $45$ degrees with the $x$-axis, after we reflect it and rotate it, we return to the same complex number and so $z = 2 + 2i$.</p>
|
1,826,094 | <p>$i\cdot\bar{z} = 2+2i$</p>
<p>I know that $\bar{z} = a-bi$, so then I get $i(a-bi)=2+2i$.
Then $ai+b=2+2i$ (because $i^2=-1$).</p>
<p>When two complex numbers are equal, you can usually equate their parts.
Ex: $2+2i=a+bi$ so $a=2$ and $b=2$.</p>
<p>But in this case I don't have $a+bi$; I have $ai+b$.</p>
<p>The answer given is $z=2-2i$, but I don't understand how to get there.</p>
<p>Thanks</p>
| Tazwar Sikder | 347,302 | <p>We have: $i\cdot\bar{z}=2+2i$</p>
<p>Let $z=a+bi \Rightarrow \bar{z}=a-bi$:</p>
<p>$\Rightarrow i\cdot(a-bi)=2+2i$</p>
<p>$\Rightarrow ai-bi^{2}=2+2i$</p>
<p>$\Rightarrow ai+b=2+2i$</p>
<p>$\Rightarrow b+ai=2+2i$</p>
<p>Comparing real and imaginary parts:</p>
<p>$\therefore\hspace{5 mm}b=2$ and $a=2$</p>
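Both derivations give $z=2+2i$; note that then $\bar z = 2-2i$, which may be where the quoted book answer comes from. Python's built-in complex type confirms the computation in a couple of lines (a trivial check):

```python
z = 2 + 2j
assert 1j * z.conjugate() == 2 + 2j   # i * conj(z) recovers the right-hand side
assert z.conjugate() == 2 - 2j
```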
|
1,381,017 | <p>Could you help me find the derivative of $$f(t)=\frac {1}{\rho}\log (1+\rho t)$$ with respect to $t$?</p>
<p>And is it
$$\lim_{t\to \infty} \frac {f'(t)}{t}=1?$$
<em>Update:</em>
Based on the hint of Surb:</p>
<p>$$f'(t)=\frac {1}{1+\rho t}$$</p>
<p>Then
$$\lim_{t\to \infty} \frac {f'(t)}{t}=0$$</p>
<p>Is it correct?</p>
| Lucian | 93,448 | <blockquote>
<p><em>I have been working for days to find the integral of the following question</em></p>
</blockquote>
<p>Even if you were to have kept on working on it for <em>years</em>, you still would not have been able to reach any conclusion, since it cannot be expressed in terms of elementary functions. See <a href="https://en.wikipedia.org/wiki/Liouville's_theorem_(differential_algebra)" rel="nofollow">Liouville's theorem</a> and the <a href="https://en.wikipedia.org/wiki/Risch_algorithm" rel="nofollow">Risch algorithm</a> for more information. However, we have $$\int_0^\tfrac\pi4\sqrt{\sin x\cos x}~dx~=~\int_\tfrac\pi4^\tfrac\pi2\sqrt{\sin x\cos x}~dx~=~\frac12\int_0^\tfrac\pi2\sqrt{\sin x\cos x}~dx~=~\dfrac{\pi\sqrt\pi}{\Gamma^2\bigg(\dfrac14\bigg)}$$ See <a href="http://en.wikipedia.org/wiki/Wallis'_integrals" rel="nofollow">Wallis' integrals</a>, and their relation to the <a href="https://en.wikipedia.org/wiki/Beta_function" rel="nofollow">beta</a> and <a href="https://en.wikipedia.org/wiki/Gamma_function" rel="nofollow">$\Gamma$ function</a> for more details.</p>
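The stated value can at least be verified numerically: a midpoint-rule estimate of the integral matches $\pi\sqrt\pi/\Gamma^2(1/4)\approx 0.4236$, computed with `math.gamma`. A Python sketch (grid size is an arbitrary choice):

```python
import math

closed = math.pi * math.sqrt(math.pi) / math.gamma(0.25)**2

# midpoint-rule estimate of the integral over [0, pi/4]
N = 200_000
h = (math.pi / 4) / N
approx = sum(math.sqrt(math.sin((i + 0.5) * h) * math.cos((i + 0.5) * h)) * h
             for i in range(N))
assert abs(approx - closed) < 1e-6
```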
|
957,604 | <p>Let <strong>X</strong> and <strong>Y</strong> have the joint probability density function given by:</p>
<p>$f(x,y)=\frac{1}{4}\exp\left(\frac{-(x+y)}{2}\right)$, for $x>0$ and $y>0$</p>
<p>(a) Find Pr(x<1,y>1)</p>
<p>(b) Find Pr(y<$x^2$)</p>
<p>This is how I tackled (a):</p>
<p>$\Pr(x<1,y>1)=\int\int \frac{1}{4}\exp\left(\frac{-(x+y)}{2}\right)\, dy\, dx$ where $0<x<1$ and $1<y<\infty$. My problem is with the interdependency of the two variables. Are they dependent or independent? </p>
<p>For (b), I had no clue at all. Someone should please help me out.</p>
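On the dependence question: the density factors as $\frac14 e^{-(x+y)/2} = \left(\frac12 e^{-x/2}\right)\left(\frac12 e^{-y/2}\right)$, so $X$ and $Y$ are independent exponentials with rate $1/2$, and for (a) $\Pr(X<1,Y>1)=(1-e^{-1/2})e^{-1/2}$. A rough numerical double integral agrees; a Python sketch (grid parameters are arbitrary, and the $y$-range is truncated at 40 where the tail is negligible):

```python
import math

p_product = (1 - math.exp(-0.5)) * math.exp(-0.5)   # independence-based answer

def f(x, y):
    return 0.25 * math.exp(-(x + y) / 2)

h = 0.01
grid = sum(f((i + 0.5) * h, (j + 0.5) * h) * h * h
           for i in range(int(1 / h))                 # 0 < x < 1
           for j in range(int(1 / h), int(40 / h)))   # 1 < y < 40
assert abs(grid - p_product) < 1e-3
```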
| Platehead | 29,459 | <p>Hint for (a): use the law of total probability, conditioning over which two chips are transferred.</p>
<p>Then for (b), Bayes' theorem gives
\begin{equation}
P(\text{2 transferred} \mid \text{3 selected}) = \frac{P(\text{3 selected} \mid \text{2 transferred})P(\text{2 transferred})}{P(\text{3 selected})}.
\end{equation}
The two terms in the numerator should be easy; the denominator is your answer from part (a).</p>
|
3,912,076 | <p>A particle of mass m slides down on the curve <span class="math-container">$z=1+\frac{x^2}{2}$</span> in the <span class="math-container">$xz$</span>-plane without friction under the action of constant gravity.Suppose <span class="math-container">$z$</span>-axis points vertically upward.Find the Lagrangian of given system.</p>
<p>The position vector of the particle is given by <span class="math-container">$\overline r=(x,0,1+\frac{x^2}{2})$</span>.
Then the velocity vector is given by
<span class="math-container">$ \dot {\overline r}=(\dot x,0,x\dot x)$</span>.
So the kinetic energy is given by
<span class="math-container">$T=\frac {m}{2}((\dot x)^2+x^2(\dot x)^2)$</span>. Now I am confused about the potential energy. The potential energy is given by <span class="math-container">$V=mgz$</span>, but what should its sign be? What does it mean that the <span class="math-container">$z$</span>-axis points vertically upward?</p>
| Cesareo | 397,348 | <p>The movement is restricted to the manifold <span class="math-container">$f(x,z) = 1+\frac 12 x^2-z=0$</span>. The potential energy is measured relative to a reference plane, for instance the plane <span class="math-container">$XY$</span>, so the potential energy is given by <span class="math-container">$m g z$</span>. The Lagrangian is given by</p>
<p><span class="math-container">$$
L = T - V +\lambda f = \frac 12 m(\dot x^2+\dot z^2) - m g z + \lambda\left(1+\frac 12x^2-z\right)
$$</span></p>
<p>giving the movement equations</p>
<p><span class="math-container">$$
\cases{
\ddot x = -\frac{x(g+\dot x^2)}{1+x^2}\\
\ddot z = \frac{\dot x^2-g x^2}{1+x^2}\\
\lambda = -\frac{m(g+\dot x^2)}{1+x^2}
}\ \ \ \ \ \ \ \ (*)
$$</span></p>
<p>Here <span class="math-container">$\lambda$</span> represents the reaction force along <span class="math-container">$f(x,z)=0$</span></p>
<p>NOTE</p>
<p>According to Euler-Lagrange formalism, the movement equations are</p>
<p><span class="math-container">$$
\cases{
m\ddot x -\lambda x=0\\
m\ddot z+\lambda+mg=0\\
1+\frac 12 x^2=z}
$$</span></p>
<p>Now, differentiating the constraint <span class="math-container">$f(x,z)=0$</span> twice, we obtain a substitute for the third equation:</p>
<p><span class="math-container">$$
\ddot z= x \ddot x +\dot x^2
$$</span></p>
<p>now solving</p>
<p><span class="math-container">$$
\cases{
m\ddot x -\lambda x=0\\
m\ddot z+\lambda+mg=0\\
\ddot z= x \ddot x +\dot x^2}
$$</span></p>
<p>for <span class="math-container">$\ddot x,\ddot z,\lambda$</span> we obtain <span class="math-container">$(*)$</span></p>
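As a consistency check, the constrained equation <span class="math-container">$\ddot x=-x(g+\dot x^2)/(1+x^2)$</span> should conserve <span class="math-container">$E=T+V=\frac m2(\dot x^2+x^2\dot x^2)+mg\left(1+\frac{x^2}2\right)$</span>. A short RK4 integration in Python (with <span class="math-container">$m=1$</span>; step size and initial data are arbitrary choices of mine):

```python
import math

g = 9.81

def acc(x, v):
    # x'' from the constrained equation of motion (*)
    return -x * (g + v**2) / (1 + x**2)

def energy(x, v):
    # T + V with z = 1 + x^2/2 (so z-dot = x*v) and m = 1
    return 0.5 * (v**2 + (x * v)**2) + g * (1 + x**2 / 2)

x, v, dt = 1.0, 0.0, 1e-4
E0 = energy(x, v)
for _ in range(20000):  # integrate for 2 seconds with classical RK4
    k1x, k1v = v, acc(x, v)
    k2x, k2v = v + dt*k1v/2, acc(x + dt*k1x/2, v + dt*k1v/2)
    k3x, k3v = v + dt*k2v/2, acc(x + dt*k2x/2, v + dt*k2v/2)
    k4x, k4v = v + dt*k3v, acc(x + dt*k3x, v + dt*k3v)
    x += dt * (k1x + 2*k2x + 2*k3x + k4x) / 6
    v += dt * (k1v + 2*k2v + 2*k3v + k4v) / 6

assert abs(energy(x, v) - E0) < 1e-6   # energy is conserved
```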
|
978,202 | <p>When I use $\sin x \sim x$ , the answer is $1$ , is the answer correct?</p>
| Shine Mic | 182,335 | <p>We can use $\sin x \sim x$ only when $x \to 0$; when $x \to \infty$, we cannot use this substitution.</p>
<p>We may solve this problem in this way:
$$0 \leftarrow - \mathop {\lim }\limits_{x \to \infty } {1 \over {{x^2} + x}} \le \mathop {\lim }\limits_{x \to \infty } {{\sin {x^2}} \over {{x^2} + x}} \le \mathop {\lim }\limits_{x \to \infty } {1 \over {{x^2} + x}} \to 0$$</p>
<p>So $$\mathop {\lim }\limits_{x \to \infty } {{\sin {x^2}} \over {{x^2} + x}} = 0$$</p>
<p>If you have any questions, please let me know.</p>
|
680,501 | <blockquote>
<p>Let $M$ be an $R$-module and let $F$ be a free $R$-module of finite rank. Let $\phi : M \to F$ be an epimorphism. Then show that $M$ has a submodule $F' \cong F $ such that $M=F' \oplus \ker\phi$.</p>
</blockquote>
<p>I am new to module theory. If I apply the fundamental theorem of homomorphisms, then $M/\ker\phi \cong F$; what should I do next?</p>
| Did | 6,179 | <p>Hint: Each $\mathbf 1_{A_n}$ is measurable with respect to $X=\sum\limits_k\frac1{3^k}\mathbf 1_{A_k}$.</p>
|
3,312,959 | <p>If the distance from the point <span class="math-container">$(K, 4)$</span> to <span class="math-container">$(2, 4)$</span> and y axis are same find k. </p>
<p>I can't understand what to calculate. According to my calculation the answer is 1, but the given answer is 0 and 4.</p>
| Michael Rozenberg | 190,319 | <p>I think we need to change the word "to" on "and". It's just a typo. </p>
<p>If so we have:
<span class="math-container">$$|K-2|=2$$</span>
Can you end it now?</p>
|
217,711 | <p>Set
$$
g(x)=\sum_{k=0}^{\infty}\frac{1}{x^{2k+1}+1} \quad \text{for} \quad x>1.
$$</p>
<p>Is it true that
$$
\frac{x^{2}+1}{x(x^{2}-1)}+\frac{g'(x)}{g(x)}>0 \quad \text{for}\quad x>1?
$$
The answer seems to be positive. I spent several hours in proving this statement but I did not come up with anything reasonable. Maybe somebody else has (or will have) any bright idea? </p>
<p><strong>Motivation?</strong> Nothing important, I was just playing around this question:
<a href="https://mathoverflow.net/questions/217530/a-problem-of-potential-theory-arising-in-biology">A problem of potential theory arising in biology</a></p>
| Max Alekseyev | 7,076 | <p>The inequality is equivalent to
$$S := (x^2+1)g(x) + x(x^2-1)g'(x) > 0.$$
The left hand side here can be expanded to
$$S = \sum_{k\geq 0} \frac{(x^2+1)(x^{2k+1}+1) - (2k+1)x^{2k+1}(x^2-1)}{(x^{2k+1}+1)^2} $$
$$= \sum_{k\geq 0} \frac{(x^2+1) - (2k+1)(x^2-1)}{x^{2k+1}+1} + \sum_{k\geq 0}\frac{(2k+1)(x^2-1)}{(x^{2k+1}+1)^2}.$$</p>
<p>Now, the first sum here simplifies to
$$\sum_{k\geq 0} \frac{(x^2+1) - (2k+1)(x^2-1)}{x^{2k+1}+1} = \sum_{k\geq 0} \frac{(2k+2)-2k x^2}{x^{2k+1}+1}$$
$$=\sum_{k\geq 1} \left( \frac{2k}{x^{2k-1}+1} - \frac{2k x^2}{x^{2k+1}+1}\right)=(1-x^2)\sum_{k\geq 1} \frac{2k}{(x^{2k-1}+1)(x^{2k+1}+1)}.$$
Hence
$$\frac{S}{x^2-1} = \sum_{k\geq 0}\frac{2k+1}{(x^{2k+1}+1)^2} - \sum_{k\geq 1} \frac{2k}{(x^{2k-1}+1)(x^{2k+1}+1)}$$
$$\geq \sum_{k\geq 0}\frac{2k+1}{(x^{2k+1}+1)^2} - \sum_{k\geq 1} \frac{2k}{(x^{2k}+1)^2} = \sum_{k\geq 1} \frac{(-1)^{k-1}k}{(x^{k}+1)^2}.$$
Here we used AM-GM inequality $x^{2k-1}+x^{2k+1}\geq 2x^{2k}$ and thus
$$(x^{2k-1}+1)(x^{2k+1}+1)=x^{4k}+x^{2k+1}+x^{2k-1}+1\geq (x^{2k}+1)^2.$$
So it remains to prove that for $x>1$,
$$\sum_{k\geq 1} \frac{(-1)^{k-1}k}{(x^{k}+1)^2} > 0.\qquad(\star)$$</p>
<p><strong>UPDATE #1.</strong> Substituting $x=e^{2t}$, we have
$$\sum_{k\geq 1} \frac{(-1)^{k-1}k}{(x^{k}+1)^2} = \sum_{k\geq 1} \frac{(-1)^{k-1}ke^{-2tk}}{4 \cosh(tk)^2} = \frac{1}{4}\sum_{k\geq 1} (-1)^{k-1}ke^{-2tk}(1-\tanh(tk)^2)$$
$$ = \frac{e^{2t}}{4(e^{2t}+1)^2} - \frac{1}{4}\sum_{k\geq 1} (-1)^{k-1}ke^{-2tk} \tanh(tk)^2.$$</p>
<p><strong>UPDATE #2.</strong> The proof of $(\star)$ is given by Iosif Pinelis.</p>
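As a numerical sanity check of $(\star)$ (a sketch only; the truncation tolerance and sample points are arbitrary choices):

```python
def star_sum(x, tol=1e-15):
    # Partial sum of S(x) = sum_{k>=1} (-1)^(k-1) k / (x^k + 1)^2 for x > 1.
    # Stop once a term drops below tol: in the eventually-decreasing
    # alternating tail, the truncation error is then below tol.
    s, k, p = 0.0, 1, x            # p tracks x^k
    while True:
        term = k / (p + 1) ** 2
        s += term if k % 2 else -term
        if term < tol:
            return s
        k += 1
        p *= x

for x in (1.1, 1.5, 2.0, 10.0):
    assert star_sum(x) > 0         # consistent with the inequality (star)
```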
|
3,005,591 | <p>I wish to find the integers of <span class="math-container">$a,b,c$</span> and <span class="math-container">$d$</span> such that:
<span class="math-container">$$225a + 360b +432c +480d = 3$$</span>
which is equal to:
<span class="math-container">$$75a + 120b +144c+ 160d =1$$</span></p>
<p>I know I have to use the Euclidean algorithm. And I managed to do it for two integers <span class="math-container">$x$</span> and <span class="math-container">$y$</span>. But can't figure out, how to do it with <span class="math-container">$4$</span> integers.</p>
| Will Jagy | 10,400 | <p><span class="math-container">$$ 144 \cdot 12 - 75 \cdot 23 = 3 $$</span>
<span class="math-container">$$ 160 \cdot 1 - 3 \cdot 53 = 1 $$</span>
<span class="math-container">$$ 160 \cdot 1 - 53 ( 144 \cdot 12 - 75 \cdot 23 ) =1 $$</span>
<span class="math-container">$$ 160 \cdot 1 - 636 \cdot 144 + 1219 \cdot 75 = 1 $$</span></p>
<p>The shortest vector solution is
<span class="math-container">$$ a= 3, b= -2, c= -1, d= 1 $$</span>
with
<span class="math-container">$$ 3 \cdot 75 - 2 \cdot 120 - 144 + 160 = 225 -240 -144 + 160 = 1 $$</span></p>
<p>The more interesting question is finding a basis for the lattice of integer vectors orthogonal to your vector. A basis is given by these three rows:</p>
<p><span class="math-container">$$
\left(
\begin{array}{rrrr}
-175536 & 0 & 91585 & -144 \\
-146280 & 1 & 76320 & -120 \\
-91424 & 0 & 47700 & -75 \\
\end{array}
\right)
$$</span>
This three dimensional lattice has Gram determinant <span class="math-container">$66361$</span>
Next is a reduced basis by the LLL algorithm.</p>
<p><span class="math-container">$$
\left(
\begin{array}{rrrr}
0 & 4 & 0 & -3 \\
0 & 2 & -5 & 3 \\
8 & -1 & 0 & -3 \\
\end{array}
\right)
$$</span></p>
<p>The Gram matrix for the reduced basis, still with determinant 66361, is</p>
<p><span class="math-container">$$
\left(
\begin{array}{rrr}
25 & -1 & 5 \\
-1 & 38 & -11 \\
5 & -11 & 74 \\
\end{array}
\right)
$$</span></p>
<p>There is a theorem involved here,
<span class="math-container">$$ 75^2 + 120^2 + 144^2 + 160^2 = 66361 $$</span></p>
<p>All solutions <span class="math-container">$(a,b,c,d)$</span> are given, with integer <span class="math-container">$s,t,u,$</span> by
<span class="math-container">$$ (3+8u,-2+4s+2t-u,-1-5t,1-3s+3t-3u) $$</span></p>
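The pairwise folding of the extended Euclidean algorithm extends the two-integer computation to any number of integers. Here is a sketch in Python (the function names are mine); it also spot-checks the shortest-vector solution <span class="math-container">$(a,b,c,d)=(3,-2,-1,1)$</span> and the general solution family quoted above.

```python
from itertools import product

def ext_gcd(a, b):
    # returns (g, s, t) with s*a + t*b == g == gcd(a, b)
    if b == 0:
        return a, 1, 0
    g, s, t = ext_gcd(b, a % b)
    return g, t, s - (a // b) * t

def bezout(nums):
    # fold pairwise: one Bezout coefficient per input number
    g, coeffs = nums[0], [1]
    for n in nums[1:]:
        g, s, t = ext_gcd(g, n)
        coeffs = [s * c for c in coeffs] + [t]
    return g, coeffs

nums = [75, 120, 144, 160]
g, c = bezout(nums)
assert g == 1
assert sum(ci * ni for ci, ni in zip(c, nums)) == 1

# the shortest-vector solution quoted above
assert 3 * 75 - 2 * 120 - 144 + 160 == 1

# the general solution family (a, b, c, d) over a small window of s, t, u
for s, t, u in product(range(-2, 3), repeat=3):
    a, b, cc, d = 3 + 8 * u, -2 + 4 * s + 2 * t - u, -1 - 5 * t, 1 - 3 * s + 3 * t - 3 * u
    assert 75 * a + 120 * b + 144 * cc + 160 * d == 1
```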
|
3,005,591 | <p>I wish to find the integers of <span class="math-container">$a,b,c$</span> and <span class="math-container">$d$</span> such that:
<span class="math-container">$$225a + 360b +432c +480d = 3$$</span>
which is equal to:
<span class="math-container">$$75a + 120b +144c+ 160d =1$$</span></p>
<p>I know I have to use the Euclidean algorithm. And I managed to do it for two integers <span class="math-container">$x$</span> and <span class="math-container">$y$</span>. But can't figure out, how to do it with <span class="math-container">$4$</span> integers.</p>
| Bill Dubuque | 242 | <p><span class="math-container">$\color{#c00}6 = 2(75)\!-\!144,\, $</span> <span class="math-container">$\color{#0a0}{80} = 2(120)\!-\!160\ $</span> so <span class="math-container">$\,\bbox[5px,border:1px solid red]{1 = 75\!+\!\color{#c00}6\!-\!\color{#0a0}{80} = 3(75)-2(120) -144 + 160}$</span></p>
<p><strong>Remark</strong> <span class="math-container">$ $</span> Found by perusing coef remainders: <span class="math-container">$\bmod 75:\ 144\equiv -\color{#c00}6,\,$</span> <span class="math-container">$\,\bmod 120\!:\ 160\equiv -\color{#0a0}{80}$</span> </p>
<p>i.e. we applied a few (judicious) steps of the <a href="https://math.stackexchange.com/a/2338268/242">extended Euclidean algorithm.</a> </p>
<p>Alternatively, applying the algorithm mechanically, reducing each argument by the least argument, we get a longer computation like that in user21820's answer, viz.</p>
<p><span class="math-container">$$\begin{align}
&\gcd(\color{#88f}{75},120,144,160)\ \ {\rm so\ reducing}\ \bmod \color{#8af}{75}\\
=\ &\gcd (75,\ 45,\,-\color{#0a0}6,\ \ 10)\ \ \ {\rm so\ reducing}\ \bmod \color{#0a0}{6} \\
=\ &\gcd(\ 3,\ \ \ \ 0,\ {-}\color{#0a0}6,\ {-}\color{#f4f}2)\ \ \ {\rm so\ reducing}\ \bmod \color{#F4f}{2}\\
=\ &\gcd(\ \color{#d00}{\bf 1},\ \ \ \ 0,\ \ \ \ 0,\ {-}2)\\
\end{align}\qquad\qquad $$</span></p>
<p>yielding a Bezout identity for <span class="math-container">$\color{#d00}{\bf 1}$</span> from the augmented matrix in the extended algorithm (follow the above link for a complete presentation with the augmented matrix displayed).</p>
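The mechanical "reduce each argument by the least argument" procedure can be written out generically. This is my own sketch using balanced remainders, so the intermediate tuples need not match the table above exactly, but the resulting gcd agrees:

```python
def gcd_by_least(nums):
    # Repeatedly reduce all entries modulo one copy of the (absolutely)
    # least nonzero entry, using balanced remainders, until one survives.
    nums = [n for n in nums if n]
    while len(nums) > 1:
        m = min(nums, key=abs)
        rest = list(nums)
        rest.remove(m)               # keep one copy of the pivot
        reduced = []
        for n in rest:
            r = n % abs(m)
            if r > abs(m) / 2:
                r -= abs(m)          # balanced remainder in (-|m|/2, |m|/2]
            if r:
                reduced.append(r)
        nums = [m] + reduced
    return abs(nums[0])

assert gcd_by_least([75, 120, 144, 160]) == 1
assert gcd_by_least([12, 18, 30]) == 6
```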
|
8,741 | <p>Here is a topic in the vein of <a href="https://mathoverflow.net/questions/1890/describe-a-topic-in-one-sentence" title="Describe a topic in one sentence"> Describe a topic in one sentence</a> and <a href="https://mathoverflow.net/questions/4994/fundamental-examples" > Fundamental examples </a> : imagine that you are trying to explain and justify a mathematical theory T to a skeptical mathematician who thinks T is just some sort of abstract nonsense for its own sake. The ideal solution consists
of a problem P which can be stated and understood without knowing anything about T, but which is difficult (or impossible, even better) to solve without T, and easier (or almost-trivial, even better) to solve with the help of T. What should be avoided is an example where T is "superimposed", e.g. when T is a model for some physical phenomenon, because there is always something arbitrary about the choice of a specific model. </p>
<p>A classical example is Galois theory for solving polynomial equations. </p>
<p>Any examples for homological algebra ? For Fourier analysis ? For category theory ?</p>
| Jonas Meyer | 1,119 | <p>The Jones polynomial: a knot invariant that originally came from subfactor theory.</p>
|
8,741 | <p>Here is a topic in the vein of <a href="https://mathoverflow.net/questions/1890/describe-a-topic-in-one-sentence" title="Describe a topic in one sentence"> Describe a topic in one sentence</a> and <a href="https://mathoverflow.net/questions/4994/fundamental-examples" > Fundamental examples </a> : imagine that you are trying to explain and justify a mathematical theory T to a skeptical mathematician who thinks T is just some sort of abstract nonsense for its own sake. The ideal solution consists
of a problem P which can be stated and understood without knowing anything about T, but which is difficult (or impossible, even better) to solve without T, and easier (or almost-trivial, even better) to solve with the help of T. What should be avoided is an example where T is "superimposed", e.g. when T is a model for some physical phenomenon, because there is always something arbitrary about the choice of a specific model. </p>
<p>A classical example is Galois theory for solving polynomial equations. </p>
<p>Any examples for homological algebra ? For Fourier analysis ? For category theory ?</p>
| Jose Brox | 1,234 | <p>Root systems and Dynkin diagrams for classification matters (see both <a href="http://en.wikipedia.org/wiki/Root_system" rel="nofollow">Root system</a> and <a href="http://en.wikipedia.org/wiki/ADE_classification" rel="nofollow">ADE classification</a> at Wikipedia).</p>
|
8,741 | <p>Here is a topic in the vein of <a href="https://mathoverflow.net/questions/1890/describe-a-topic-in-one-sentence" title="Describe a topic in one sentence"> Describe a topic in one sentence</a> and <a href="https://mathoverflow.net/questions/4994/fundamental-examples" > Fundamental examples </a> : imagine that you are trying to explain and justify a mathematical theory T to a skeptical mathematician who thinks T is just some sort of abstract nonsense for its own sake. The ideal solution consists
of a problem P which can be stated and understood without knowing anything about T, but which is difficult (or impossible, even better) to solve without T, and easier (or almost-trivial, even better) to solve with the help of T. What should be avoided is an example where T is "superimposed", e.g. when T is a model for some physical phenomenon, because there is always something arbitrary about the choice of a specific model. </p>
<p>A classical example is Galois theory for solving polynomial equations. </p>
<p>Any examples for homological algebra ? For Fourier analysis ? For category theory ?</p>
| Qiaochu Yuan | 290 | <p>Suppose you are interested in random walks on an extremely structured graph such as a hypercube graph or a cycle graph. If your graph happens to be the Cayley graph of an abelian group $G$, as in both of the above examples, then it is easy to describe the behavior of random walks on it because the eigenvectors of the adjacency matrix are precisely the characters of $G$ and the eigenvalues depend in a simple way on the characters; in other words, you should learn about the discrete Fourier transform. </p>
<p><strong>Edit:</strong> Some elaboration. Let $G$ be a finite abelian group with $|G| = n$. A character of $G$ is a homomorphism $G \to \mathbb{C}$, and it is a basic fact of character theory that the characters form a basis of the space of functions $G \to \mathbb{C}$; this is the discrete Fourier transform. Now let $\mathbf{A}(G)$ be the adjacency matrix of a Cayley graph of $G$ using generators $\{ s_1, ... s_k \}$. The group $G$ acts on the space of functions $G \to \mathbb{C}$ by sending a function $f : G \to \mathbb{C}$ to $f(gx)$. Call this representation $\rho$; then (and this is the important connnecting observation) one may regard $\mathbf{A}(G)$ as the linear operator $\displaystyle \sum_{i=1}^{k} \rho(s_i)$.</p>
<p><strong>Proposition:</strong> Let $\chi_j : G \to \mathbb{C}$ be a character of $G$. Then $\chi_j$ is an eigenvector of $\mathbf{A}(G)$ with eigenvalue $\displaystyle \sum_{i=1}^{k} \chi_j(s_i)$, and these are all the eigenvectors.</p>
<p><em>Proof.</em> Just observe that $\rho(s_i) \chi_j(g) = \chi_j(s_i g) = \chi_j(s_i) \chi_j(g)$. The fact that these exhaust the set of eigenvectors follows from the basic fact cited above.</p>
<p>For example, the cycle graph $C_n$ is the Cayley graph of the cyclic group $\mathbb{Z}/n\mathbb{Z}$ with generators $\{ 1, -1 \}$, so its eigenvectors are just the rows of the discrete Fourier transform matrix on $\mathbb{Z}/n\mathbb{Z}$ and its eigenvalues are $e^{ \frac{2\pi i k}{n} } + e^{- \frac{2\pi ik}{n} } = 2 \cos \frac{2\pi k}{n}$. (Note that I have implicitly identified the space of functions $G \to \mathbb{C}$ with the free vector space on the elements of $G$ in the usual way.)</p>
|
8,741 | <p>Here is a topic in the vein of <a href="https://mathoverflow.net/questions/1890/describe-a-topic-in-one-sentence" title="Describe a topic in one sentence"> Describe a topic in one sentence</a> and <a href="https://mathoverflow.net/questions/4994/fundamental-examples" > Fundamental examples </a> : imagine that you are trying to explain and justify a mathematical theory T to a skeptical mathematician who thinks T is just some sort of abstract nonsense for its own sake. The ideal solution consists
of a problem P which can be stated and understood without knowing anything about T, but which is difficult (or impossible, even better) to solve without T, and easier (or almost-trivial, even better) to solve with the help of T. What should be avoided is an example where T is "superimposed", e.g. when T is a model for some physical phenomenon, because there is always something arbitrary about the choice of a specific model. </p>
<p>A classical example is Galois theory for solving polynomial equations. </p>
<p>Any examples for homological algebra ? For Fourier analysis ? For category theory ?</p>
| GMRA | 135 | <p>Well for a long time there was no proof of the Burnside theorem avoiding Representation theory. Now there are methods to proof it without Representation theory, but they are still a lot harder then the original representation theoretic one.</p>
<p><a href="http://en.wikipedia.org/wiki/Burnside_theorem" rel="nofollow">http://en.wikipedia.org/wiki/Burnside_theorem</a></p>
|
8,741 | <p>Here is a topic in the vein of <a href="https://mathoverflow.net/questions/1890/describe-a-topic-in-one-sentence" title="Describe a topic in one sentence"> Describe a topic in one sentence</a> and <a href="https://mathoverflow.net/questions/4994/fundamental-examples" > Fundamental examples </a> : imagine that you are trying to explain and justify a mathematical theory T to a skeptical mathematician who thinks T is just some sort of abstract nonsense for its own sake. The ideal solution consists
of a problem P which can be stated and understood without knowing anything about T, but which is difficult (or impossible, even better) to solve without T, and easier (or almost-trivial, even better) to solve with the help of T. What should be avoided is an example where T is "superimposed", e.g. when T is a model for some physical phenomenon, because there is always something arbitrary about the choice of a specific model. </p>
<p>A classical example is Galois theory for solving polynomial equations. </p>
<p>Any examples for homological algebra ? For Fourier analysis ? For category theory ?</p>
| Martin Brandenburg | 2,841 | <p>Sometimes you want to understand a group $G$, but the only thing you know is that there is an extension $1 \to A \to G \to E \to 1$. If everything is abelian, $G$ corresponds to an element in $Ext^1(E,A)$. If at least $A$ is abelian, then $E$ acts on $A$ by conjugation and $G$ corresponds to an element in $H^2(E,A)$. Thus the classification of groups naturally leads to <b>cohomology groups</b>, which have a rich theory.</p>
<p>There is also a topological motivation: which finite groups act freely on spheres?</p>
|
1,072,427 | <p>Prove using contradiction that any prime number greater than $3$ is of the form $6n \pm 1$.</p>
<p>Thanks for any help</p>
| barak manos | 131,263 | <p>Assume that $N$ is a prime larger than $3$ and <strong>not</strong> of the form $6n\pm1$:</p>
<ul>
<li>$N\equiv0\pmod6\implies N$ is divisible by $6\implies N$ is not a prime</li>
<li>$N\equiv2\pmod6\implies N$ is divisible by $2\implies N$ is not a prime (or $N=2$)</li>
<li>$N\equiv3\pmod6\implies N$ is divisible by $3\implies N$ is not a prime (or $N=3$)</li>
<li>$N\equiv4\pmod6\implies N$ is divisible by $2$ and $N\geq4\implies N$ is not a prime</li>
</ul>
<p>Therefore, $N$ is a prime larger than $3\implies N\equiv1,5\pmod6\implies N\equiv\pm1\pmod6$</p>
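A brute-force check of the statement for small primes (a sketch; the bound $10\,000$ is arbitrary):

```python
def is_prime(n):
    # trial division, fine for small n
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# every prime N > 3 satisfies N ≡ ±1 (mod 6), i.e. N mod 6 ∈ {1, 5}
for n in range(4, 10_000):
    if is_prime(n):
        assert n % 6 in (1, 5)
```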
|
3,069,369 | <p>I know how to perform polynomial regression. But is there any method to use for estimating the degree of the polynomial that is best suited? Some kind of meta-regression.</p>
<p>With best suited I mean the degree that has the highest probability of being the true degree of the source of the data.</p>
<p>For example, if we look at this picture we can easily "see" that a polynomial of degree 4 would fit nicely:</p>
<p><a href="https://i.stack.imgur.com/ZOIbY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZOIbY.png" alt="enter image description here"></a></p>
<p>A more generalized question is if there is any method to determine if the source is polynomial at all or if it is exponential or something else.</p>
| Gnumbertester | 628,028 | <p>In this case, we can start by applying the rational root theorem to determine the possible rational roots; there are 4 roots in total.</p>
<p>You can study the rational root theorem here <a href="https://en.wikipedia.org/wiki/Rational_root_theorem" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Rational_root_theorem</a>.</p>
<p>The possible rational roots in this case are <span class="math-container">$-1,1,-\frac{1}{2},\frac{1}{2},-5,5,-\frac{5}{2}$</span>, and <span class="math-container">$\frac{5}{2}$</span>. </p>
<p>A look at this function's graph reveals that <span class="math-container">$x=-1$</span> is a root with a multiplicity of 1. The graph also reveals that the function's 4 roots are real. This can also be confirmed analytically by Descartes' Rule of Signs. </p>
<p>Since we know that <span class="math-container">$x=-1$</span> is a root, we can divide the function synthetically by <span class="math-container">$-1$</span> to get a cubic function. From there, you can either factor the cubic or perform synthetic division on the cubic with another factor to reduce it to a quadratic. </p>
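The synthetic-division steps can be sketched in a few lines of Python (the function name is mine):

```python
def synthetic_division(coeffs, r):
    # Divide a polynomial (coefficients, highest degree first) by (x - r).
    # Returns (quotient coefficients, remainder).
    out = [coeffs[0]]
    for c in coeffs[1:]:
        out.append(c + r * out[-1])
    return out[:-1], out[-1]

# 2x^4 + x^3 - 7x^2 - x + 5 divided by (x + 1), i.e. root r = -1
q, rem = synthetic_division([2, 1, -7, -1, 5], -1)
assert rem == 0                    # confirms x = -1 is a root
assert q == [2, -1, -6, 5]         # quotient 2x^3 - x^2 - 6x + 5

# dividing the cubic by the root r = 1 reduces it to a quadratic
q2, rem2 = synthetic_division(q, 1)
assert rem2 == 0 and q2 == [2, 1, -5]   # 2x^2 + x - 5
```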
|
3,069,369 | <p>I know how to perform polynomial regression. But is there any method to use for estimating the degree of the polynomial that is best suited? Some kind of meta-regression.</p>
<p>With best suited I mean the degree that has the highest probability of being the true degree of the source of the data.</p>
<p>For example, if we look at this picture we can easily "see" that a polynomial of degree 4 would fit nicely:</p>
<p><a href="https://i.stack.imgur.com/ZOIbY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZOIbY.png" alt="enter image description here"></a></p>
<p>A more generalized question is if there is any method to determine if the source is polynomial at all or if it is exponential or something else.</p>
| Bernard | 202,857 | <p><strong>Hint</strong>:</p>
<p>This polynomial is divisible by <span class="math-container">$x^2-1$</span>:
<span class="math-container">\begin{align}
2x^4+x^3-7x^2-x+5&=(2x^4-2x^2)+x^3-5x^2-x+5\\
&=2x^2(x^2-1)+ x^2(x-5)-x+5 \\
&= (x^2-1)(2x^2+x-5).
\end{align}</span></p>
|
1,562,907 | <p>I want to know whether $$\mathbb{Q}[x]/(x^3-1)$$ is a field or not. Is it as simple as determining if $x^3-1$ is irreducible in $\Bbb Q$?</p>
<p>But since it has roots , $x=1$ wouldn't this imply it is reducible and hence not a field? Or is there much more to it I am not considering?</p>
| fleablood | 280,126 | <p>The thing is $g(g^{-1}(x)) = x$. So we solve for $g^{-1}(x)$ in terms of $x$. It is easier and less confusing if we use the notation $y = g^{-1}(x)$, write $x = g(g^{-1}(x)) = g(y)$, and solve for $y$ (which is actually $g^{-1}(x)$).</p>
<p>So $x = g(y)$; what is $y$?</p>
<p>$x = g(y) = f(y) + 1$. (we must "unwrap" the $y$, first by getting rid of the "+1")</p>
<p>$x = f(y) + 1$</p>
<p>$x - 1 = f(y)$. (Now we must "unwrap" the f(y); well the "un-f" is $f^{-1}$)</p>
<p>$x - 1 = f(y)$</p>
<p>$f^{-1}(x - 1) = f^{-1}(f(y)) = y$. (And we're done!)</p>
<p>$g^{-1}(x) = y = f^{-1}(x - 1)$. That's it!</p>
<p>Let's verify this really worked:</p>
<p>$g^{-1}(g(x)) = f^{-1}(g(x) -1) = f^{-1}(f(x) + 1 - 1) = f^{-1}(f(x)) = x$. (Works in one direction.)</p>
<p>$g(g^{-1}(x)) = g(f^{-1}(x -1)) = f(f^{-1}(x -1)) + 1 = (x -1) +1 = x$ (Works both directions. We're good.)</p>
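A concrete sketch of the derived formula $g^{-1}(x) = f^{-1}(x-1)$, taking $f(x) = x^3$ (my own choice of an invertible $f$), so that $g(x) = x^3 + 1$:

```python
import math

def f(x):
    return x ** 3

def f_inv(x):
    return math.copysign(abs(x) ** (1 / 3), x)   # real cube root

def g(x):
    return f(x) + 1

def g_inv(x):
    return f_inv(x - 1)                          # the formula derived above

# check both directions of the inverse relation on a few sample points
for x in (-2.0, -0.5, 0.0, 1.0, 3.0):
    assert abs(g_inv(g(x)) - x) < 1e-9
    assert abs(g(g_inv(x)) - x) < 1e-9
```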
|
4,581,481 | <p>How would one go about ALGEBRAICALLY finding the range of a semicircle?</p>
<p>eg. <span class="math-container">$y = \sqrt{4-x^2}$</span></p>
<p>Since there is no plus-minus sign, we know that <span class="math-container">$y$</span> must at least have a range of <span class="math-container">$[0, \infty)$</span>. Yet this must not be the complete case because we know a semicircle has restrictions on how small AND how big the <span class="math-container">$y$</span> value can be.</p>
| Ogglie Ostrich | 1,119,801 | <p>Assuming we are concerned with real <span class="math-container">$y$</span> we have:
<span class="math-container">$$y=\sqrt{4-x^2}\in\mathbb{R} \implies 4-x^2\geq0\implies x^2\leq4$$</span>
also<span class="math-container">$$ x\in\mathbb{R} \implies x^2\geq0 \implies 0\leq x^2\leq 4$$</span>
Hence:
<span class="math-container">$$0\leq4-x^2\leq4$$</span>
So, then the maximum value of <span class="math-container">$y$</span> is:
<span class="math-container">$$ \sqrt{4} = 2$$</span>
This <em>is</em> an algebraic approach but just to be sure you are satisfied I'll also use differentiation to find the maximum:
<span class="math-container">$$\frac{dy}{dx} = \frac{-2x}{2\sqrt{4-x^2}} = \frac{-x}{\sqrt{4-x^2}}$$</span>
Then:
<span class="math-container">$$\frac{dy}{dx} = 0 \implies x=0$$</span>
Substituting <span class="math-container">$x=0$</span> into <span class="math-container">$y$</span>:
<span class="math-container">$$y\rvert_{x=0}=\sqrt{4}=2$$</span>
We can see that this is the maximum easily if we consider the circle geometrically but if you wanted to use <em>purely algebra</em> you could find <span class="math-container">$\frac{d^2y}{dx^2}$</span> and evaluate it at <span class="math-container">$x=0$</span> to get <span class="math-container">$\frac{d^2y}{dx^2}\rvert_{x=0}<0 $</span> which implies this stationary point is the maximum of <span class="math-container">$y$</span>. Together with <span class="math-container">$y\geq0$</span>, attained at <span class="math-container">$x=\pm2$</span>, this gives the range <span class="math-container">$[0,2]$</span>.</p>
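A numeric spot check (a sketch; the sampling grid is arbitrary), consistent with the range <span class="math-container">$[0,2]$</span>:

```python
import math

def y(x):
    return math.sqrt(4 - x * x)

# sample the domain [-2, 2]: every value lies in [0, 2], and both ends are attained
vals = [y(-2 + 4 * i / 10_000) for i in range(10_001)]
assert all(0 <= v <= 2 for v in vals)
assert min(vals) == 0.0             # attained at x = ±2
assert abs(max(vals) - 2) < 1e-12   # attained at x = 0
```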
|
1,117,924 | <p>How can I show with linear approximation that $y \approx x$ for small x? I know the rule $$f(x) \approx f(a) + f^{\prime}(a) (x-a),$$ but I don't know how to put it to use in this case.</p>
| randomgirl | 209,647 | <p>So we know the tangent line will look like $y-y_1=m(x-x_1)$
and we want $y-0=1\cdot(x-0)$.<br>
Since we want our line to go through $(0,0)$, I say find the tangent line at
$x=0$.</p>
|
1,580,796 | <p>In a game players take turns saying up to 3 numbers (starting at 1 and working their way up) and who every says 21 is eliminated. So we may have a situation like the following for 3 players:</p>
<blockquote>
<p>Player 1: 1,2</p>
<p>Player 2: 3,4,5</p>
<p>Player 3: 6</p>
<p>Player 1: 7,8,9</p>
<p>Player 2: 10,11,12</p>
<p>Player 3:13,14,15</p>
<p>Player 1: 16,17,18</p>
<p>Player 2: 19,20</p>
</blockquote>
<p>In which case Player <span class="math-container">$3$</span> would have to say <span class="math-container">$21$</span> and thus would be eliminated from the game. Player <span class="math-container">$1$</span> and <span class="math-container">$2$</span> would then face each other.</p>
<p>In the case of a two player game, the person who goes second can always win by ensuring the last number they say is a multiple of <span class="math-container">$4$</span>.</p>
<p>Let us say we are in an <span class="math-container">$N$</span> player game, and that I am the <span class="math-container">$i^{\text{th}}$</span> player to take my turn. Is there any strategy that I can take to make sure I will stay in the game? For example in the simple case of a <span class="math-container">$2$</span> player game the strategy would be 'try to end on a multiple of <span class="math-container">$4$</span> and then stay on multiples of <span class="math-container">$4$</span>'.</p>
| joriki | 6,622 | <p>Community wiki answer so the question can be marked as answered:</p>
<p>As Abstraction pointed out in a comment, there is no such strategy (except in extreme cases) because the other players can cooperate against you; they have a wider range of options, collectively, than you do, so you can't use the sort of strategy that you can use in the two-player version.</p>
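The two-player claim in the question (the second player wins by always finishing on a multiple of 4) can be confirmed by a short memoized game search. A sketch, where `wins(n)` is my notation for whether the player about to speak, with $n$ the next number to be said, can force a win:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def wins(n):
    # Two players; on a turn you say 1-3 consecutive numbers starting at n,
    # never volunteering 21; whoever is forced to say 21 is eliminated.
    if n == 21:
        return False                  # must say 21: you lose
    return any(not wins(n + k) for k in (1, 2, 3) if n + k - 1 <= 20)

# The player to move loses exactly when n ≡ 1 (mod 4); in particular the
# first player (facing n = 1) loses, so the second player can always win
# by finishing each of their turns on a multiple of 4.
assert [n for n in range(1, 22) if not wins(n)] == [1, 5, 9, 13, 17, 21]
```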
|
1,923,226 | <p>I have been trying to prove this with some continuity theorems but haven't put together a good proof yet. </p>
| operatorerror | 210,391 | <p>Fix any $\epsilon$ you wish. Setting $\delta=1/2$ ensures that $|x-y|<\delta\Rightarrow |f(x)-f(y)|<\epsilon$: if $x$ and $y$ are lattice points with $|x-y|<1/2$, then $x=y$, since there is only one lattice point within a radius of $1/2$ of any point of the integer lattice, and hence $f(x)=f(y)$.</p>
<p>Might also be worth looking up the open preimage definition of continuity and trying to understand that. </p>
|
4,113,744 | <p>The number of real values <span class="math-container">$(x,y)$</span> for which <span class="math-container">$$2^{x+1}+3^y=3^{ y+2}-2^x$$</span> is ?
I went like, <span class="math-container">$$2^{x+1}+2^x=3^{y+2}-3^y$$</span>
after solving which I got; <span class="math-container">$(x,y)=(3,1)$</span>
Is there any other process/ solution/ way to solve?</p>
| Mostafa Ayaz | 518,023 | <p>Note that
<span class="math-container">$$2^{x}+2^{x+1}=3^{y+2}-3^y\iff 3\times 2^x=8\times3^y\iff 2^{x-3}=3^{y-1}.$$</span>
Now, an investigation on tuples <span class="math-container">$(x,y)$</span> (whether real or integer) should be easy.</p>
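If $x$ and $y$ are restricted to integers, the investigation is immediate: a power of $2$ equals a power of $3$ only when both are $1$, forcing $x=3$, $y=1$. A brute-force sketch over a small window (the window size is arbitrary):

```python
# 2^(x-3) = 3^(y-1) over integers forces both sides to equal 1
sols = [(x, y) for x in range(-20, 21) for y in range(-20, 21)
        if 2 ** (x - 3) == 3 ** (y - 1)]
assert sols == [(3, 1)]

# check against the original equation: 2^(x+1) + 3^y = 3^(y+2) - 2^x
x, y = sols[0]
assert 2 ** (x + 1) + 3 ** y == 3 ** (y + 2) - 2 ** x   # 16 + 3 == 27 - 8
```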
|
364,059 | <p>While reading up on "Glivenko Cantelli Theorem" from Probability Models by K.B Athreya, the author used 2 lemmae to prove it. One was called Scheffe's lemma, the other Polya's theorem.</p>
<p>Scheffe's Lemma is stated as follows:</p>
<p>Let $f_n, f$ be non negative $\mu$ integrable functions. If $f_n \to f$ a.e and $\int f_n d\mu \to \int fd\mu$, then
$$\int |f_n - f|d\mu \to 0$$</p>
<p>My proof is:</p>
<p>Let $g_n = |f_n -f|$. Now we have $g_n \to 0$ a.e. Now
$$0 \leq g_n = |f_n -f| \leq f + f_n$$
$$\Rightarrow \int g_n d\mu \leq \int fd\mu + \int f_nd\mu < \infty $$</p>
<p>Thus by Dominated convergence theorem,
$$\int g_nd\mu \to 0$$ QED.</p>
<p><strong>The Question</strong>:
My doubt is that in this proof, I have not used that $\int f_n d\mu \to \int fd\mu$, at least not explicitly. So is this condition superfluous?</p>
<p><strong>What I searched</strong>:
I searched for Scheffe on MSE, but got 3 results (none useful) and when I typed Scheffe's instead, I got a <a href="https://math.stackexchange.com/q/363271/35983">result</a> not belonging to these 3 which was actually on Scheffe's lemma (Though not helpful). It's strange (not the result but the search).</p>
<p>I'd appreciate any help/hints on this. Kindly ask me for clarifications if required.</p>
| Sangchul Lee | 9,340 | <p>Let</p>
<p>$$u_n = \max\{ f, f_n\} \quad \text{and} \quad l_n = \min \{ f, f_n \} $$</p>
<p>so that both $(u_n)$ and $(l_n)$ converge pointwise to $f$, $l_n \leq f \leq u_n$ and $|f - f_n| = u_n - l_n$. By DCT, it is clear that $\int l_n \, d\mu \to \int f \, d\mu$. thus from</p>
<p>$$ \int u_n \, d\mu = \int (f + f_n - l_n) \, d\mu ,$$</p>
<p>taking $n\to\infty$, we have $\int u_n \, d\mu \to \int f \, d\mu$. (Here the assumption that you are concerning is used.) Therefore we have</p>
<p>$$ \int |f - f_n| \, d\mu = \int u_n \, d\mu - \int l_n \, d\mu \to 0. $$</p>
|
364,059 | <p>While reading up on "Glivenko Cantelli Theorem" from Probability Models by K.B Athreya, the author used 2 lemmae to prove it. One was called Scheffe's lemma, the other Polya's theorem.</p>
<p>Scheffe's Lemma is stated as follows:</p>
<p>Let $f_n, f$ be non negative $\mu$ integrable functions. If $f_n \to f$ a.e and $\int f_n d\mu \to \int fd\mu$, then
$$\int |f_n - f|d\mu \to 0$$</p>
<p>My proof is:</p>
<p>Let $g_n = |f_n -f|$. Now we have $g_n \to 0$ a.e. Now
$$0 \leq g_n = |f_n -f| \leq f + f_n$$
$$\Rightarrow \int g_n d\mu \leq \int fd\mu + \int f_nd\mu < \infty $$</p>
<p>Thus by Dominated convergence theorem,
$$\int g_nd\mu \to 0$$ QED.</p>
<p><strong>The Question</strong>:
My doubt is that in this proof, I have not used that $\int f_n d\mu \to \int fd\mu$, at least not explicitly. So is this condition superfluous?</p>
<p><strong>What I searched</strong>:
I searched for Scheffe on MSE, but got 3 results (none useful) and when I typed Scheffe's instead, I got a <a href="https://math.stackexchange.com/q/363271/35983">result</a> not belonging to these 3 which was actually on Scheffe's lemma (Though not helpful). It's strange (not the result but the search).</p>
<p>I'd appreciate any help/hints on this. Kindly ask me for clarifications if required.</p>
| Xiao | 131,137 | <p>Observe $|f| + |f_n| - |f-f_n|\geq 0$, with Fatou's lemma
$$\liminf \int |f| + |f_n| - |f-f_n| \geq 2 \int |f| $$
by the assumption $\lim \int |f_n| = \int |f|$, the left-hand-side of above inequality also equals
$$\liminf \int |f| + |f_n| - |f-f_n| = 2\int |f| -\limsup \int |f-f_n|,$$
combine them we get
$$2\int |f| -\limsup \int |f-f_n|\geq 2 \int |f| $$
$$\limsup \int |f-f_n| \leq 0 .$$</p>
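Regarding the question's doubt: the hypothesis $\int f_n\,d\mu \to \int f\,d\mu$ is not superfluous. The DCT step in the question tacitly uses $f+f_n$ as a dominating function, but DCT requires a single integrable dominating function independent of $n$. The classic example $f_n = n\,\mathbf 1_{(0,1/n]}$ on $[0,1]$ with $f=0$ shows what goes wrong: $f_n \to 0$ pointwise, yet $\int f_n = 1$ for every $n$, so $\int|f_n-f| = 1 \not\to 0$. A small numerical sketch:

```python
def f_n(n, x):
    # f_n = n on (0, 1/n], and 0 elsewhere
    return n if 0 < x <= 1 / n else 0

# pointwise convergence to f = 0: for fixed x > 0, f_n(x) = 0 once n > 1/x
x0 = 0.3
assert [f_n(n, x0) for n in (1, 2, 3, 4, 10)] == [1, 2, 3, 0, 0]

def integral_f_n(n, steps=100_000):
    # midpoint Riemann sum of f_n over [0, 1]
    h = 1.0 / steps
    return sum(f_n(n, (i + 0.5) * h) * h for i in range(steps))

# ∫|f_n - f| = ∫ f_n stays equal to 1, so there is no L1 convergence
for n in (2, 10, 100):
    assert abs(integral_f_n(n) - 1.0) < 0.01
```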
|
100,801 | <p>When doing calculations in a notebook, I modify the code a lot, and there are several versions of similar code in the same notebook. But I easily forget which version is the newest, and I have to check which cell is the newest.</p>
<p>So a menu option like "cell latest modification time" would be useful. But for now, I can't find such an option. </p>
<p>However, Mathematica provides a menu option, "Cell -> Notebook History", that can do this job.</p>
<p>The problem is that on my computer, with Mathematica version 10.3, if the notebook has a long and rich history, then the Notebook History makes Mathematica freeze again and again, repeatedly popping up the "Formatting Notebook Contents" dialog, as follows</p>
<p><a href="https://i.stack.imgur.com/ufdhX.png" rel="noreferrer"><img src="https://i.stack.imgur.com/ufdhX.png" alt="enter image description here"></a></p>
<p><strong>How can I show the notebook history for, say, only the most recent 3 days, or more generally between specific dates such as 10 Nov to 13 Nov? And also, how can I open the notebook history with <code>selected cell</code> as the default?</strong></p>
| Albert Retey | 169 | <p>From the OP's comments I deduce that the problem is only the dialog that opens via the menu entry "Cell" -> "Notebook History", which shows a relatively involved visualization of the history and defaults to analyzing the whole notebook. Obviously that is too slow for really large notebooks. But if one is only interested in the last change of a single cell, one can use code like the following to get just that information, and I'd guess that this should work even for huge notebooks as long as these still work in general.</p>
<p>Here is the code which generates a palette with one button which shows the last change time for the selected cell(s):</p>
<pre><code>CreatePalette@Button["Show Last Cell-Change", MessageDialog[
SelectionMove[InputNotebook[], All, Cell];
Grid[
MapIndexed[Flatten[{#2, #1}] &, Replace[
Cases[{NotebookRead[InputNotebook[]]}, Cell[___], Infinity], {
Cell[___, CellChangeTimes -> t_] :> DateString[Max[t]+3600*$TimeZone],
Cell[___] :> None
}, {1}]
]
]
], Method -> "Queued"]
</code></pre>
<p>For the interested, here is what it does: the notebook history feature can be switched on either globally with Edit -> Preferences -> Advanced -> "Enable notebook history tracking" or just for the current notebook with e.g. <code>SetOptions[EvaluationNotebook[],TrackCellChangeTimes -> True]</code>. Once that setting has been made, the Mathematica FrontEnd adds timestamps to cells when these are edited. These timestamps are aggregated within the cell option <code>CellChangeTimes</code>. What my code does is just extract that information, which is in the form of timestamps as given by <code>AbsoluteTime[]</code>, but for UTC, i.e. <code>TimeZone->0</code>. The <code>Max</code> of all those values is the last time the cell has changed, and that is shown in human-readable form with <code>DateString</code>. I have adjusted for the time zone in a somewhat ad hoc way which I have not tested for time zones other than my own but think should work. I could imagine that other details would also need improvement, but it should basically work, shows the idea, and I would expect it not to create problems when a notebook gets large or the history is long.</p>
<p><strong>EDIT</strong></p>
<p>As the OP has mentioned in comments, he is still searching for a way to change the default setting for the selection toggle. While a programmatic change of that might be possible, it would certainly be somewhat involved and "dirty". On the other hand, it seems easy enough to just edit the system dialog definition.</p>
<p>What you could do is replace the default palette notebook, which you can find here: </p>
<pre><code>FileNameJoin[{$InstallationDirectory, "SystemFiles", "FrontEnd",
"SystemResources", "HistoryOverview.nb"}]
</code></pre>
<p>with one where the corresponding default setting is changed. To do that, I would suggest to:</p>
<ol>
<li>make a copy of that notebook</li>
<li>open the copy in a text editor</li>
<li>replace <code>$CellContext`selectionOnly$$ = False</code> with <code>$CellContext`selectionOnly$$ = True</code></li>
<li>save the file </li>
<li><p>open the edited file in Mathematica and save it (to avoid messages about it being edited outside of Mathematica). As the notebook has settings specific to palettes, it can only be saved programmatically. To do that, you have to get a handle to the corresponding notebook object and then call <code>NotebookSave</code> on it. The following code will do that:</p>
<pre><code>NotebookSave[
First[Select[
Notebooks[],
Not[FreeQ[CurrentValue[#, WindowTitle], "HistoryOverviewDialog"]] &]
]
]
</code></pre></li>
<li><p>replace the original notebook with the edited one, probably making a backup copy of the original before that.</p></li>
</ol>
<p>the last step (replacing that file in <code>$InstallationDirectory</code>) will of course only be possible with corresponding file permissions.</p>
<p>If you restart Mathematica, the history overview dialog should now always open with the "selection only" setting...</p>
|
200,414 | <p>Some people may carelessly say that you need calculus to find such a thing as a local maximum of $f(x) = x^3 - 20x^2 + 96x$. Certainly calculus is <em>sufficient</em>, but whether it's <em>necessary</em> is another question.</p>
<p>There's a global maximum if you restrict the domain to $[0,8]$, and $f$ is $0$ at the endpoints and positive between them. Say the maximum is at $x_0$. One would have
$$
\frac{f(x)-f(x_0)}{x-x_0}\begin{cases} >0 & \text{if }x<x_0, \\ <0 & \text{if }x>x_0. \end{cases}
$$
This difference quotient is undefined when $x=x_0$, but mere algebra tells us that the numerator factors and we get
$$
\frac{(x-x_0)g(x)}{x-x_0} = g(x)
$$
where $g(x)$ is a polynomial whose coefficients depend on $x_0$. Then of course one seeks its zeros since it should change signs at $x_0$.</p>
<p>Have we tacitly used the intermediate value theorem, or the extreme value theorem? To what extent can those be avoided? Must one say that <em>if</em> there is a maximum point, <em>then</em> it is at a zero of $g(x)$? And can we say that without the intermediate value theorem? (At least in the case of this function, I think we stop short of needing the so-called fundamental theorem of algebra to tell us some zeros of $g$ exist!)</p>
| André Nicolas | 6,312 | <p>We do an analysis using the Arithmetic Mean Geometric Mean Inequality.</p>
<p>Make the usual change of variable <span class="math-container">$x=t+\frac{20}{3}$</span> to eliminate the term in the
square of the variable. We get the cubic polynomial
<span class="math-container">$$t^3-\frac{112}{3}t+k,$$</span>
where I didn't bother to calculate <span class="math-container">$k$</span>. This is just the original cubic shifted horizontally.</p>
<p>So we want to study <span class="math-container">$f(t+\frac{20}{3})=t^3-\frac{112}{3}t$</span> (we drop the constant <span class="math-container">$k$</span>, since it only shifts the values of the function and does not change where the extrema occur), or more generally <span class="math-container">$f(t+\frac{20}{3})=t^3-at$</span>, where <span class="math-container">$a\gt 0$</span>. Our function is an odd function, so from now on assume that <span class="math-container">$t\gt 0$</span>, and let symmetry across the origin take care of the rest.</p>
<p>Let us <strong>maximize</strong> <span class="math-container">$2(f(t+\frac{20}{3}))^2$</span>.
We have
<span class="math-container">$$2(f(t+\frac{20}{3}))^2=(2t^2)(t^2-a)(t^2-a)\\
\Rightarrow 2(f(t+\frac{20}{3}))^2=(2t^2)(a-t^2)(a-t^2).$$</span>
We have a product of <span class="math-container">$3$</span> positive terms, whose sum has the <strong>constant</strong> value <span class="math-container">$2a$</span>. It follows by the case <span class="math-container">$n=3$</span> of AM-GM that
<span class="math-container">$$2(f(t+\frac{20}{3}))^2 \le \left(\frac{2a}{3}\right)^3,$$</span>
with equality precisely if all the terms of the product are equal. This happens when <span class="math-container">$2t^2=a-t^2$</span>, that is when
<span class="math-container">$$t=\pm\sqrt{\frac{a}{3}}.$$</span>
One <span class="math-container">$t$</span> will give you maximum and the other minimum. Don't forget to add <span class="math-container">$\frac{20}{3}$</span> to <span class="math-container">$t$</span> to get <span class="math-container">$x$</span>.</p>
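<p>As a sanity check of the algebra above, the claimed extremum can be compared against a brute-force scan (a sketch in Python; the grid over <span class="math-container">$[0,8]$</span> is an arbitrary choice that contains the local maximum):</p>

```python
import math

# Check the AM-GM result for f(x) = x^3 - 20x^2 + 96x: after the shift
# x = t + 20/3 the cubic is t^3 - (112/3)t + k, so a = 112/3 and the
# extrema should occur at t = ±sqrt(a/3), i.e. x = 20/3 ± 4*sqrt(7)/3.
def f(x):
    return x**3 - 20 * x**2 + 96 * x

a = 112 / 3
t = math.sqrt(a / 3)                     # = 4*sqrt(7)/3
x_max = 20 / 3 - t                       # local maximum (the smaller root)

# Brute-force scan of f over [0, 8].
grid = [i / 10000 for i in range(80001)]
best = max(grid, key=f)

print(x_max, best)                       # both ≈ 3.139
```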
|
17,398 | <p>I am teaching my daughter, who is currently about <span class="math-container">$46$</span> months old, addition. She is very curious and asks a lot of <em>good</em> questions. For example, when I told her that <span class="math-container">$2+6=8$</span> and <span class="math-container">$4+4=8$</span>, she asked me the following question:</p>
<blockquote>
<p>Why are they the same?</p>
</blockquote>
<p>Surely I know the logical answer to this question using the <a href="https://en.wikipedia.org/wiki/Peano_axioms" rel="noreferrer">Peano Axioms</a> and the definition of natural numbers. But she has not learnt the Peano Axioms at this age.</p>
<p>So how should I answer my three-year-old daughter's question: </p>
<blockquote>
<p>Why <span class="math-container">$2+6$</span> is the same as <span class="math-container">$4+4$</span>?</p>
</blockquote>
| Selene Auckland | 12,899 | <p>None of the answers so far seem to give a name for the kind of concept in the inquiry. What you're asking is "how to make 8" or more generally "<strong>making numbers</strong>". See <a href="https://www.amazon.co.uk/Making-Numbers-Using-manipulatives-arithmetic/dp/0198375611" rel="nofollow noreferrer">here</a> and <a href="https://hal.archives-ouvertes.fr/hal-01938938/document" rel="nofollow noreferrer">here</a>.</p>
<p>"<strong>Making numbers</strong>" is usually taught in late kindergarten or early grade 1. In this case, you might ask the school how they teach it, and then teach it to your child the same way it will soon (but maybe not very soon, depending on the curriculum) be taught in school. I think this is usually done using marbles or blocks to show that 6 marbles + 2 marbles = 4 marbles + 4 marbles. Thus, the kindergarten student (or early grade 1 student) learns to understand the <strong>uniqueness</strong> of the number 8 (the <strong>existence</strong> of the number 8 is of course known to the student): namely that the "8" as the combination of 6 and 2 is the same as the "8" as the combination of 4 and 4.</p>
<p>And of course this understanding is visual via the marbles and not of course an understanding that a grade 2 or 3 student would have already acquired.</p>
<p>(And of course you don't necessarily use the word "unique". Let me know if ever you think of a way to introduce the word "unique" in this context.)</p>
|
3,131,791 | <p>I read this:</p>
<blockquote>
<p><strong>Definition 1.1</strong>. <em>The</em> complex projective line <span class="math-container">$\mathbb{CP}^1$</span> <em>(or just</em> <span class="math-container">$\mathbb{P}^1$</span><em>) is the set of all ordered pairs of complex numbers</em> <span class="math-container">$\{(x, y)\in\mathbb{C}^2\mid(x, y)\neq(0, 0)\}$</span> <em>where we identify pairs</em> <span class="math-container">$(x, y)$</span> <em>and</em> <span class="math-container">$(x', y')$</span> <em>if one is a scalar multiple of the other</em>: <span class="math-container">$(x, y)=\lambda(x', y')$</span> <em>for some</em> <span class="math-container">$\lambda\in\mathbb{C}^*$</span><em>, where</em> <span class="math-container">$\mathbb{C}^*$</span> <em>is the set of nonzero complex numbers</em>.</p>
<p><span class="math-container">$\quad$</span> So for example <span class="math-container">$(1, 2)$</span>, <span class="math-container">$(3, 6)$</span>, and <span class="math-container">$(2+3i, 4+6i)$</span> all represent the same point of <span class="math-container">$\mathbb{P}^1$</span>.</p>
<p><span class="math-container">$\quad$</span> This construction is an example of the quotient of a set by an equivalence relation. See Exercise 2.</p>
<p><span class="math-container">$\quad$</span> The idea is that <span class="math-container">$\mathbb{P}^1$</span> can be thought of as the union of the set of complex numbers <span class="math-container">$\mathbb{C}$</span> and a single point "at infinity". To see this, consider the following subset <span class="math-container">$U_0 \subset \mathbb{P}^1$</span>:</p>
<p><span class="math-container">$$ U_0 = \{(x_0, x_1) \in\mathbb{P}^1\mid x_0\neq 0 \}.$$</span></p>
<p>Then <span class="math-container">$U_0$</span> is in one-to-one correspondence with <span class="math-container">$\mathbb{C}$</span> via the map</p>
<p><span class="math-container">$$\tag{1} \phi_0 : U_0 \to \mathbb{C} : \qquad (x_0, x_1) \mapsto \frac{x_1}{x_0}. $$</span></p>
<p>Note that <span class="math-container">$\phi_0$</span> is well defined on <span class="math-container">$U_0$</span>. First of all, <span class="math-container">$x_0$</span> is not <span class="math-container">$0$</span> so the division makes sense. Secondly, if <span class="math-container">$(x_0, x_1)$</span> represent the same point of <span class="math-container">$\mathbb{P}^1$</span> as <span class="math-container">$(x_0', x_1')$</span> then <span class="math-container">$x_0 = \lambda x_0'$</span> and <span class="math-container">$x_1 = \lambda x_1'$</span> for some nonzero <span class="math-container">$\lambda \in \mathbb{C}$</span>. Thus, <span class="math-container">$\phi_0((x_0, x_1)) = x_1 / x_0 = (\lambda x_1')/(\lambda x_0')$</span> <span class="math-container">$= x_1' / x_0' = \phi_0((x_0', x_1')) $</span> and <span class="math-container">$\phi_0$</span> is well defined as claimed. The inverse map is given by</p>
<p><span class="math-container">$$ \psi_0 : \mathbb{C}\to U_0 : \qquad z \mapsto (1, z). $$</span></p>
<p><span class="math-container">$\quad$</span> The complement of <span class="math-container">$U_0$</span> is the set of all points of <span class="math-container">$\mathbb{P}^1$</span> of the form <span class="math-container">$(0, x_1)$</span>. But since <span class="math-container">$(0, x_1) = x_1(0, 1)$</span>, all of these points coincide with <span class="math-container">$(0,1)$</span> as a point of <span class="math-container">$\mathbb{P}^1$</span>. So <span class="math-container">$\mathbb{P}^1$</span> is obtained from a copy of <span class="math-container">$\mathbb{C}$</span> by adding a single point.</p>
<p><span class="math-container">$\quad$</span> This point can be thought of as the point of infinity. To see this, consider a complex number <span class="math-container">$t$</span>, and identify it with a point of <span class="math-container">$U_0$</span> using <span class="math-container">$\psi_0$</span>; i.e., we identify it with <span class="math-container">$\psi_0(t) = (1, t)$</span>. Now let <span class="math-container">$t \to \infty$</span>. The beautiful feature is that the limit now exists in <span class="math-container">$\mathbb{P}^1$</span>! To see this, rewrite <span class="math-container">$(1, t)$</span> as <span class="math-container">$(1/t, 1)$</span> using scalar multiplication by <span class="math-container">$1/t$</span>. This clearly approaches <span class="math-container">$(0,1)$</span> as <span class="math-container">$t \to \infty$</span>, so <span class="math-container">$(0, 1)$</span> really should be thought of as the point at infinity!</p>
<p><span class="math-container">$\quad$</span> We have been deliberately vague about the precise meaning of limits in <span class="math-container">$\mathbb{P}^1$</span>. This is a notion from topology, which we will deal with later in Chapter 4. The property that limits exist in a topological space is a consequence of the <em>compactness</em> of the space, and the process of enlarging <span class="math-container">$\mathbb{C}$</span> to the compact space <span class="math-container">$\mathbb{P}^1$</span> is our first example of the important process of <em>compactification</em>. This makes the solutions to enumerative problems well-defined, by preventing solutions from going off to infinity. A precise definition of compactness will be given in Chapter 4. </p>
<p><span class="math-container">$\quad$</span> We now have to modify our description of complex polynomials by associating to them polynomials <span class="math-container">$F(x_0, x_1)$</span> on <span class="math-container">$\mathbb{P}^1$</span>. Before turning to their definition, note that the equation <span class="math-container">$F(x_0, x_1) = 0$</span> need not make sense as a well-defined equation of <span class="math-container">$\mathbb{P}^1$</span>, since it is conceivable that a point could have different representations <span class="math-container">$(x_0, x_1)$</span> and <span class="math-container">$(x_0', x_1')$</span> such that <span class="math-container">$F(x_0, x_1) = 0$</span> while <span class="math-container">$F(x_0', x_1') \neq 0$</span>. We avoid this problem by requiring that <span class="math-container">$F(x_0, x_1)$</span> be a <em>homogenous polynomial</em>; i.e., all terms in <span class="math-container">$F$</span> have the same total degree, which is called the degree of <span class="math-container">$F$</span>. So</p>
<p><span class="math-container">$$\tag{2} F(x_0, x_1) = \sum_{i=0}^d a_i x_0^i x_1^{d-i} $$</span></p>
<hr>
</blockquote>
<p>In the last paragraph, I tried to take one non-homogeneous polynomial and checked that indeed, that problem happens but when I took an homogeneous polynomial, the problem vanishes! I am baffled: it looks like sorcery! I tried to explain myself why that happens but couldn't so, why does that happen?</p>
| Servaes | 30,382 | <p>The coordinates of different representatives of the same point differ by a scalar multiple; if <span class="math-container">$(x_0,x_1)$</span> and <span class="math-container">$(x_0',x_1')$</span> represent the same point then there exists some nonzero scalar <span class="math-container">$\lambda$</span> such that <span class="math-container">$x_0'=\lambda x_0$</span> and <span class="math-container">$x_1'=\lambda x_1$</span>. Then for a homogeneous polynomial <span class="math-container">$F$</span> of degree <span class="math-container">$d$</span> you have
<span class="math-container">$$F(x_0',x_1')=F(\lambda x_0,\lambda x_1)=\lambda^dF(x_0,x_1),\tag{1}$$</span>
which means that <span class="math-container">$F(x_0',x_1')=0$</span> if and only if <span class="math-container">$F(x_0,x_1)=0$</span>. Of course <span class="math-container">$(1)$</span> fails for polynomials in general, so we only consider homogeneous polynomials as functions.</p>
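<p>To make the well-definedness concrete, here is a small numeric illustration (a sketch; the particular polynomials are just examples I chose):</p>

```python
# A homogeneous F satisfies F(λx0, λx1) = λ^d F(x0, x1), so its zero set is
# well defined on P^1; a non-homogeneous polynomial mixes powers of λ and is not.

def F_hom(x0, x1):       # homogeneous of degree 2
    return x0**2 - 2 * x0 * x1

def F_bad(x0, x1):       # not homogeneous: total degrees 2 and 1 mixed
    return x0**2 - 2 * x1

lam = 3

# (2, 1) and λ·(2, 1) represent the same point of P^1:
print(F_hom(2, 1), F_hom(lam * 2, lam * 1))   # 0 0 — zero set well defined

# (2, 2) and λ·(2, 2) also represent one point, yet F_bad disagrees on them:
print(F_bad(2, 2), F_bad(lam * 2, lam * 2))   # 0 24 — ill defined on P^1

# The scaling identity (1) with d = 2:
print(F_hom(lam * 5, lam * 7) == lam**2 * F_hom(5, 7))   # True
```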
|
3,423,138 | <p>A) Always positive </p>
<p>B) Always negative </p>
<p>C)Always non-negative </p>
<p>D) Always zero </p>
<p>Obviously, this isn’t a solving question, so I can’t really show any working here (I would have otherwise). I didn’t really understand the question, so help would be greatly appreciated </p>
<p>Thanks!</p>
| b00n heT | 119,285 | <p>Hint: Show that</p>
<p><span class="math-container">$$\frac{d}{dx}\text{Li}_n(x)=\frac{1}{x}\text{Li}_{n-1}(x)$$</span></p>
<p>then by induction if <span class="math-container">$\text{Li}_n(x)$</span> is rational, so is its derivative and hence <span class="math-container">$\text{Li}_{n-1}(x)$</span>.</p>
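<p>The identity is also easy to check numerically from the defining series <span class="math-container">$\text{Li}_n(x)=\sum_{k\ge 1}x^k/k^n$</span> (a sketch; <code>li</code> is my own helper truncating the series, valid for <span class="math-container">$|x|<1$</span>):</p>

```python
# Check d/dx Li_n(x) = Li_{n-1}(x) / x via the series Li_n(x) = Σ_{k≥1} x^k / k^n.
def li(n, x, terms=200):
    return sum(x**k / k**n for k in range(1, terms + 1))

n, x, h = 3, 0.5, 1e-6
numeric = (li(n, x + h) - li(n, x - h)) / (2 * h)   # central difference
claimed = li(n - 1, x) / x

print(numeric, claimed)   # agree to about 6 significant digits
```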
|
235,425 | <p>Ok, here is what I have for the proof of this conjecture. Let me know if I'm on the right path? All input appreciated.</p>
<p>There exist integers $j$ and $k$ such that
$b = aj$ and $c = bk.$ Then $c = ajk$ (substituting $aj$ for $b$).
Let $m = jk$; then $c = ma \implies a \mid c.$ </p>
| Mr Q | 221,240 | <p>Since you are given a/b, this means b=ka for some integer k. Also b/c implies that c=lb for some integer l. Now, substituting b=ka into c=lb gives</p>
<pre><code> c = lb = l(ka) = (lk)a >>>> we conclude a can divide c
</code></pre>
<p>Hence a/c.
N.B. The / is not a mathematical symbol for "divides"; one should rather use |... it's just that I like it... lol</p>
|
591,014 | <p>I need help with this proof:</p>
<p>$f: X\rightarrow Y$</p>
<p>$C,D\subseteq Y$</p>
<p>$f^{-1}(C \cap D) = f^{-1}(C) \cap f^{-1}(D)$</p>
<p>Thanks.</p>
| Riccardo | 74,013 | <p>Just pick an element of $f^{-1}(C \cap D)$ and verify the first inclusion; do the same for the second.</p>
<p>Maybe this definition will help:
$ f^{-1}(C) := \{ x \in X \text{ such that } f(x) \in C \} $</p>
<p>Just apply it.</p>
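<p>For intuition, the identity can be illustrated with finite sets (a sketch; the particular map is an arbitrary choice of mine):</p>

```python
# f^{-1}(C ∩ D) = f^{-1}(C) ∩ f^{-1}(D), checked on a small example.
X = range(10)

def f(x):            # some map f : X -> Y
    return x % 4

def preimage(S):     # the definition above: {x in X : f(x) in S}
    return {x for x in X if f(x) in S}

C, D = {0, 1, 2}, {1, 2, 3}
lhs = preimage(C & D)
rhs = preimage(C) & preimage(D)
print(lhs == rhs)    # True
```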
|
2,327,181 | <p>I'm a bit confused about the definition of finite sets/intervals. I know that a set S is called finite when it has a finite number of elements, or formally, when there exists a bijection $f:S\to\{1,...,n\}$ for some natural number n. </p>
<p>However, the interval $(1,2)$ is called finite. I don't understand why; $(1,2)$ is not even countable, and there definitely does not exist a bijection $f:(1,2)\to\{1,...,n\}$. </p>
<p>Why do we call $(a,b)$ with $a,b\in\mathbb{R}$, finite? Did we just agree to do so, or is it incorrect to use the interval $(a,b)$ as a set like I did in the definition of finiteness above?</p>
<p>Thanks!</p>
| Andre | 303,581 | <p>Yes, we agreed to do so. A "finite interval" is an interval of finite length, i.e, the number $b-a$ is finite. A finite set, on the other hand, is a set of finite cardinality, so consisting of only finitely many elements. Maybe the terminology is a bit unfortunate, but since EVERY nonempty interval possesses uncountably many elements, there is not much chance of confusion once you are aware of these facts.</p>
|
118,985 | <p>Here's a question I came across:</p>
<blockquote>
<p>count the number of permutations of 10 men, 10 women and one child,
with the limitation that a man will not sit next to another man, and a
woman will not sit next to another woman</p>
</blockquote>
<p>What I tried was using the inclusion-exclusion principle, but it got way too complicated. I don't have the right answer, but I know it should be simple (it was a 5-point question in an exam I was going through).</p>
<p>Any ideas? Thanks!</p>
| André Nicolas | 6,312 | <p>Look at the power series expansion of $e^x$. The $x^4$ term is $\dfrac{x^4}{4!}$, so for positive $x$, we have $e^x>\frac{x^4}{4!}$ and therefore
$$\frac{x^2}{e^x} <\frac{4!}{x^2}.$$
We know that
$$\int_2^\infty \frac{dx}{x^2}$$
converges, and the $4!$ on top makes no difference. Note that the same idea can be used mechanically to show, for example, that $\displaystyle\int_2^\infty \frac{x^{2012}}{e^x}dx$ converges, since for positive $x$, $e^x>\frac{x^{2014}}{2014!}$.</p>
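<p>As a numerical illustration of the bound (a sketch; the Simpson's rule implementation and the cutoff at 60 are my own ad hoc choices), the example integral $\int_2^\infty x^2 e^{-x}\,dx$ has the closed form $10e^{-2}$, since an antiderivative of $x^2e^{-x}$ is $-(x^2+2x+2)e^{-x}$:</p>

```python
import math

# Composite Simpson's rule; the tail of x^2 e^{-x} beyond 60 is negligible.
def simpson(g, a, b, n=2000):          # n must be even
    h = (b - a) / n
    s = g(a) + g(b)
    s += 4 * sum(g(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(g(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

approx = simpson(lambda x: x**2 * math.exp(-x), 2, 60)
exact = 10 * math.exp(-2)              # from the antiderivative -(x^2+2x+2)e^{-x}
print(approx, exact)                   # both ≈ 1.3534
```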
|
2,797,039 | <p><strong>1)</strong> Let $\{f_n\}$ be a sequence of nonnegative measurable functions of $\mathbb R$ that converges pointwise on $\mathbb R$ to $f$ integrable. Show that</p>
<p>$$\int_{\mathbb R} f = \lim_{n\to \infty}\int_{\mathbb R}f_n \Rightarrow \int_{E} f = \lim_{n\to \infty}\int_{E}f_n $$</p>
<p>for any measurable set $E$</p>
<p>I know that $\int_{\mathbb R} f = \int_{\mathbb R \setminus E} f + \int_{E} f$ and $\int_{\mathbb R \setminus E} f \le \liminf_{n\to \infty}\biggl(\int_{\mathbb R \setminus E} f_n \biggr)$ from Fatou's Lemma.</p>
<p>I couldn't obtain $\int_{E} f = \liminf_{n\to \infty}\int_{E}f_n = \limsup_{n\to \infty}\int_{E}f_n$, and I have seen the inequality below used to obtain it, but I couldn't understand it. Could someone explain it to me, please?</p>
<blockquote>
<p>$$\liminf_{n\to \infty}\int_{\mathbb R \setminus E}f_n = \int_{\mathbb R}f-\limsup_{n\to \infty}\int_{E}f_n$$</p>
</blockquote>
<p><strong>2)</strong> It has been written "since $\int_Ef_n \le \int_Ef$ (this inequality from monotonicity I have understood) thus </p>
<blockquote>
<p>$$\limsup\int_Ef_n \le \int_Ef$$</p>
</blockquote>
<p>in proof of The Monotone Convergence Theorem in Royden's Real Analysis. I couldn't see why that inequality obtains.</p>
<p>Thanks for any help</p>
<p>Regards</p>
| Atif Farooq | 451,530 | <p><em>Solution</em>. Let $\mathcal{J}$ denote the event that exactly two jacks are selected; in addition, let $D_i$ denote the event that the die roll yielded $i\in\{1,2,3,4,5,6\}$. We may compute $\mathbf{P}(\mathcal{J})$ by conditioning on the die roll (the law of total probability), observing that
$$\mathbf{P}(\mathcal{J}) = \sum_{k=2}^{6}\mathbf{P}(\mathcal{J}|D_k)\mathbf{P}(D_k) = \frac{\binom{4}{2}\binom{48}{0}}{\binom{52}{2}}\cdot\frac{1}{6}+\frac{\binom{4}{2}\binom{48}{1}}{\binom{52}{3}}\cdot\frac{1}{6}+\dots+\frac{\binom{4}{2}\binom{48}{4}}{\binom{52}{6}}\cdot\frac{1}{6}$$</p>
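<p>The sum evaluates numerically as follows (a sketch; following the formula above, the die roll is taken to fix the number $k$ of cards drawn, and for $k<2$ the conditional probability is $0$):</p>

```python
from math import comb

# P(exactly two jacks) = Σ_{k=2}^{6} [C(4,2)·C(48,k-2)/C(52,k)] · (1/6)
p = sum(comb(4, 2) * comb(48, k - 2) / comb(52, k) * (1 / 6)
        for k in range(2, 7))
print(p)   # ≈ 0.0233
```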
|
4,083,588 | <p>Can somebody help me how to solve</p>
<p><span class="math-container">$x* 23 \equiv_{60} 1 $</span> with <span class="math-container">$ x \in \mathbb{N} $</span> and <span class="math-container">$x > 100$</span></p>
<p>What would be a good approach?</p>
<p>I know that x = 107 would be a solution. However, how can I find solutions in general, and how can I prove that there are no solutions if there aren't any?</p>
| Roddy MacPhee | 903,195 | <p><span class="math-container">$x\cdot a\equiv_{b} 1\implies x=\frac{1-(b^{-1}\bmod a)\cdot b}{a}\bmod b$</span>, where <span class="math-container">$b^{-1}$</span> denotes the inverse of <span class="math-container">$b$</span> modulo <span class="math-container">$a$</span> (so the numerator is divisible by <span class="math-container">$a$</span>). For example, with <span class="math-container">$a=23$</span>, <span class="math-container">$b=60$</span>: <span class="math-container">$60^{-1}\equiv 5 \pmod{23}$</span>, so <span class="math-container">$x=\frac{1-5\cdot 60}{23}=-13\equiv 47 \pmod{60}$</span>.</p>
|
4,083,588 | <p>Can somebody help me how to solve</p>
<p><span class="math-container">$x* 23 \equiv_{60} 1 $</span> with <span class="math-container">$ x \in \mathbb{N} $</span> and <span class="math-container">$x > 100$</span></p>
<p>What would be a good approach?</p>
<p>I know that x = 107 would be a solution. However, how can I find solutions in general, and how can I prove that there are no solutions if there aren't any?</p>
| Joffan | 206,402 | <p><a href="https://en.wikipedia.org/wiki/Euler%27s_theorem" rel="nofollow noreferrer">Euler's theorem</a> tells you that for <span class="math-container">$n$</span> and <span class="math-container">$a$</span> coprime, you have <span class="math-container">$$a^{\phi(n)} \equiv 1 \bmod n$$</span> where <span class="math-container">$\phi(n)$</span> is <a href="https://en.wikipedia.org/wiki/Euler%27s_totient_function" rel="nofollow noreferrer">Euler's totient</a> which counts the numbers less than <span class="math-container">$n$</span> and coprime to it.</p>
<p>So in this case we would know that <span class="math-container">$$23^{\phi(60)} = 23^{16} \equiv 1 \bmod 60$$</span> so we can say that <span class="math-container">$$x\equiv 23^{15} \bmod 60 $$</span> gives us solutions.</p>
<p>There is also <a href="https://en.wikipedia.org/wiki/Carmichael_function" rel="nofollow noreferrer">Carmichael's reduced totient function <span class="math-container">$\lambda$</span></a> which often gives a smaller exponent for the same purpose. In this case <span class="math-container">$\lambda(60) = 4$</span> and so we can use <span class="math-container">$x\equiv 23^{\lambda(60)-1}\equiv 23^3 \equiv 47 \bmod 60$</span>.</p>
<p>Now this gives your value, since <span class="math-container">$107\equiv 47\bmod 60$</span>, and is one of the possible choice of the integers <span class="math-container">$47+60k$</span> which obey the equivalence.</p>
<hr />
<p>For a rapid method not using exponents, the extended Euclidean algorithm will both check that <span class="math-container">$a$</span> and <span class="math-container">$n$</span> are coprime and deliver the modular inverse of <span class="math-container">$a\bmod m$</span> (which is what your equation here is asking for, <span class="math-container">$ax\equiv 1 \bmod n$</span>). Note that solving this also helps to solve other <span class="math-container">$\bmod n$</span> equations involving <span class="math-container">$a$</span>.</p>
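<p>The extended Euclidean route can be sketched in Python (my own helper names; this is one standard implementation, not the only one):</p>

```python
# Solve 23x ≡ 1 (mod 60) via the extended Euclidean algorithm, then shift
# the solution into the requested range x > 100.
def ext_gcd(a, b):
    """Return (g, s, t) with g = gcd(a, b) and s*a + t*b = g."""
    if b == 0:
        return a, 1, 0
    g, s, t = ext_gcd(b, a % b)
    return g, t, s - (a // b) * t

a, n = 23, 60
g, s, _ = ext_gcd(a, n)
assert g == 1                # coprime, so an inverse exists mod 60
x = s % n
print(x)                     # 47, since 23·47 = 1081 = 18·60 + 1

# Smallest solution above 100: add a multiple of 60.
x_big = x + ((100 - x) // n + 1) * n
print(x_big)                 # 107
```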
|
2,866,399 | <p>I was studying rings and modules, and now I am studying group theory. I feel very confused regarding generating sets.
In ring theory, if we have an ideal $I = \langle x,y \rangle$,
then x is not contained in that ideal; however, in group theory, if we have $G = \langle x, y \rangle$, then x is in the group.
Can anyone please show how we can write x as a product of the generators?</p>
| egreg | 62,967 | <p>The confusion stems from the definition of “ideal generated by $x$” (let's keep it simple, for the moment, with $R$ a commutative ring).</p>
<p>There are two possible definitions:</p>
<ol>
<li>the ideal generated by $x$ is $Rx$</li>
<li>the ideal generated by $x$ is the intersection of all the ideals containing $x$</li>
</ol>
<p>These two objects can differ if the ring doesn't have an identity. For instance, if $R=2\mathbb{Z}$ and $x=4$, then
$$
Rx=\{0,8,-8,16,-16,\dotsc\}
$$
indeed doesn't contain $4$; on the other hand, the intersection of all ideals containing $4$ is $\{0,4,-4,8,-8,12,-12,16,-16,\dotsc\}$ because this is clearly an ideal and every ideal containing $4$ must contain $4n$, for every integer $n$.</p>
<p>Without a clear statement of the definition of the ideal $\langle x,y\rangle$, not much more can be said. If $\langle x,y\rangle$ is assumed to be $Rx+Ry$, then this does not necessarily contain $x$ or $y$. If it is the smallest ideal containing $x$ and $y$, then it obviously contains both $x$ and $y$.</p>
<p>As far as I know, the second definition (smallest ideal containing the given elements) is the one commonly used.</p>
|
2,897,374 | <p>So today in my Algebraic Topology class we were trying to construct a homotopy and at one point we needed to basically construct a (continuous) bijection between some intervals $[a, b]$ and $[c, d]$ sending $a \mapsto c$ and $b \mapsto d$. </p>
<p>My professor said that a bijection $\varphi : [a, b] \to [c, d]$ would be a linear function of the form $\varphi(x) = \alpha x + \beta$ and adding the contraints that $\varphi(a) = c$ and $\varphi(b) = d$ we get a linear system of equations $$\alpha \cdot a + \beta = c$$ $$\alpha \cdot b + \beta =d.$$</p>
<p>Solving this system we get $$\alpha = \frac{c-d}{a-b}$$ and $$\beta=\frac{ad-bc}{a-b}$$</p>
<p>so that $\varphi$ is given by $$\varphi(x) = \left(\frac{c-d}{a-b}\right)x + \frac{ad-bc}{a-b}$$</p>
<hr>
<p>Now I was not aware that we could construct a (continuous!) bijection between connected closed intervals of $\mathbb{R}$ so easily. My question is how exactly did my professor know that such a (continuous) bijection would be a linear function of the above form? I've never seen any theorem of the sort or something similar written in any textbook.</p>
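<p>For what it's worth, a quick numerical check (my own sketch, not from the lecture) confirms the derived map behaves as claimed, and that using it with the roles of the intervals swapped inverts it:</p>

```python
# φ(x) = ((c-d)/(a-b)) x + (ad-bc)/(a-b) maps [a,b] to [c,d] with
# φ(a) = c and φ(b) = d.
def phi(x, a, b, c, d):
    return (c - d) / (a - b) * x + (a * d - b * c) / (a - b)

a, b, c, d = 2.0, 5.0, -1.0, 7.0
print(phi(a, a, b, c, d), phi(b, a, b, c, d))   # -1.0 and 7.0

# Swapping the intervals gives the inverse map [c,d] -> [a,b]:
x = 3.7
y = phi(x, a, b, c, d)
print(phi(y, c, d, a, b))                       # ≈ 3.7
```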
| amd | 265,466 | <p>Since $P^2=P$, you have a perfectly good projection matrix. However, the definition of “projective matrix” that you have also requires that $P^T=P$, which makes it specifically an <em>orthogonal</em> projection. For that, the column space of the matrix must be the orthogonal complement of the null space. </p>
<p>There are several ways that you might construct such a matrix. A somewhat brute-force method is to find a basis $\{v_1,\dots,v_{n-1}\}$ for the orthogonal complement of $W=\operatorname{span}\{(1,1,\dots,1)^T\}$. Assemble these vectors and $w=(1,1,\dots,1)^T$ itself into a matrix $B$. The desired projective matrix is then $$B\begin{bmatrix}I_{n-1}&0\\0&0\end{bmatrix}B^{-1}.$$ (Do you see why?) Note that the non-zero rows of the matrix that you’ve constructed are a basis for $W^\perp$. </p>
<p>A simpler way is to observe that orthogonal projections onto $W$ and $W^\perp$ are complementary in the sense that if $P$ is the orthogonal projection onto $W$, then $I-P$ is the orthogonal projection onto $W^\perp$. Since $W$ is one-dimensional, the orthogonal projection matrix onto this subspace is quite easy to construct: the projection of a vector $v$ onto $w$ is ${w^Tv \over w^Tw}w = {ww^T\over w^Tw}v$, so the projection matrix onto $W^\perp$ is $I-{ww^T\over w^Tw}$. This is clearly symmetric, as required. For the vector in your problem, this will be $I_n-\frac1n\mathbb 1_n$, where $\mathbb 1_n$ is the $n\times n$ matrix of all $1$’s, i.e., $P$ has $(n-1)/n$ down its main diagonal and $-1/n$ elsewhere.</p>
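<p>The properties claimed above can be checked exactly with rational arithmetic (a sketch for $n=4$; <code>Fraction</code> avoids floating-point noise):</p>

```python
from fractions import Fraction

# P = I - (1/n)·J (J the all-ones matrix) should satisfy P^2 = P, P^T = P,
# and P·(1,...,1)^T = 0.
n = 4
P = [[Fraction(n - 1, n) if i == j else Fraction(-1, n) for j in range(n)]
     for i in range(n)]

P2 = [[sum(P[i][k] * P[k][j] for k in range(n)) for j in range(n)]
      for i in range(n)]

print(P2 == P)                                                      # True: idempotent
print(all(P[i][j] == P[j][i] for i in range(n) for j in range(n)))  # True: symmetric
print(all(sum(row) == 0 for row in P))                              # True: P kills w
```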
|
267,274 | <p>Suppose <span class="math-container">$\left(A,\leq_{A}\right)$</span>
and <span class="math-container">$\left(B,\leq_{B}\right)$</span>
are <a href="https://en.wikipedia.org/wiki/Partially_ordered_set" rel="nofollow noreferrer">posets</a> and <span class="math-container">$f:A\to B$</span>
is an order-preserving bijection. I'm trying to show that <span class="math-container">$f^{-1}:B\to A$</span>
is also order preserving. That is, I want to show that, given <span class="math-container">$b_{1},b_{2}\in B$</span>,
<span class="math-container">$$b_{1}\leq_{B}b_{2}\Longrightarrow f^{-1}\left(b_{1}\right)\leq_{A}f^{-1}\left(b_{2}\right)$$</span>
Since <span class="math-container">$f$</span>
is bijective, there are uniquely defined <span class="math-container">$a_{1},a_{2}\in A$</span>
such that <span class="math-container">$$b_{1}=f\left(a_{1}\right)\Longrightarrow a_{1}=f^{-1}\left(b_{1}\right)$$</span>
<span class="math-container">$$b_{2}=f\left(a_{2}\right)\Longrightarrow a_{2}=f^{-1}\left(b_{2}\right)$$</span>
If <span class="math-container">$a_{1},a_{2}$</span>
are comparable and I assume <span class="math-container">$a_{2}\leq_{A}a_{1}$</span>,
I get a contradiction, since <span class="math-container">$f$</span>
is order preserving: <span class="math-container">$$b_{2}=f\left(a_{2}\right)\leq_{B}\, f\left(a_{1}\right)=b_{1}$$</span>
So I deduce that if <span class="math-container">$a_{1},a_{2}$</span>
are comparable, then necessarily the required inequality holds. My question is whether it's possible that <span class="math-container">$a_{1},a_{2}$</span>
are not comparable and the claim is actually false?</p>
| Brian M. Scott | 12,042 | <p>Let $A=\{0,1\}\times\Bbb N$, and define $\langle i,m\rangle\le_A\langle j,n\rangle$ iff $i=j$ and $m\le n$. Let $B=\Bbb N$ with the usual order. Finally, let</p>
<p>$$f:A\to B:\langle i,n\rangle\mapsto 2n+i\;.$$</p>
<p>Then $f$ is an order-preserving bijection, but $f^{-1}$ is not order-preserving for exactly the reason that you suggest: for any $n\in\Bbb N$, $2n<2n+1$ in $B$, but $f^{-1}(2n)=\langle 0,n\rangle$ and $f^{-1}(2n+1)=\langle 1,n\rangle$ are not comparable in $A$.</p>
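<p>The counterexample is small enough to check by machine on a finite slice (a sketch; I truncate $\Bbb N$ to $\{0,\dots,4\}$ so that $B=\{0,\dots,9\}$):</p>

```python
from itertools import product

# A = {0,1} × {0,...,4} with (i,m) ≤_A (j,n) iff i == j and m <= n;
# B = {0,...,9} with the usual order; f(i,n) = 2n + i.
A = list(product([0, 1], range(5)))

def leq_A(p, q):
    return p[0] == q[0] and p[1] <= q[1]

def f(p):
    return 2 * p[1] + p[0]

# f is an order-preserving bijection onto {0,...,9}:
assert all(f(p) <= f(q) for p in A for q in A if leq_A(p, q))
assert sorted(map(f, A)) == list(range(10))

# but f^{-1} is not order-preserving: 0 ≤ 1 in B, while
# f^{-1}(0) = (0,0) and f^{-1}(1) = (1,0) are incomparable in A.
inv = {f(p): p for p in A}
print(leq_A(inv[0], inv[1]) or leq_A(inv[1], inv[0]))   # False
```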
|
267,274 | <p>Suppose <span class="math-container">$\left(A,\leq_{A}\right)$</span>
and <span class="math-container">$\left(B,\leq_{B}\right)$</span>
are <a href="https://en.wikipedia.org/wiki/Partially_ordered_set" rel="nofollow noreferrer">posets</a> and <span class="math-container">$f:A\to B$</span>
is an order-preserving bijection. I'm trying to show that <span class="math-container">$f^{-1}:B\to A$</span>
is also order preserving. That is, I want to show that, given <span class="math-container">$b_{1},b_{2}\in B$</span>,
<span class="math-container">$$b_{1}\leq_{B}b_{2}\Longrightarrow f^{-1}\left(b_{1}\right)\leq_{A}f^{-1}\left(b_{2}\right)$$</span>
Since <span class="math-container">$f$</span>
is bijective, there are uniquely defined <span class="math-container">$a_{1},a_{2}\in A$</span>
such that <span class="math-container">$$b_{1}=f\left(a_{1}\right)\Longrightarrow a_{1}=f^{-1}\left(b_{1}\right)$$</span>
<span class="math-container">$$b_{2}=f\left(a_{2}\right)\Longrightarrow a_{2}=f^{-1}\left(b_{2}\right)$$</span>
If <span class="math-container">$a_{1},a_{2}$</span>
are comparable and I assume <span class="math-container">$a_{2}\leq_{A}a_{1}$</span>,
I get a contradiction, since <span class="math-container">$f$</span>
is order preserving: <span class="math-container">$$b_{2}=f\left(a_{2}\right)\leq_{B}\, f\left(a_{1}\right)=b_{1}$$</span>
So I deduce that if <span class="math-container">$a_{1},a_{2}$</span>
are comparable, then necessarily the required inequality holds. My question is whether it's possible that <span class="math-container">$a_{1},a_{2}$</span>
are not comparable and the claim is actually false?</p>
| Asaf Karagila | 622 | <p>To drive Brian's point further, let's assume for simplicity that $f$ is the identity. This simply means that $\leq_A\subseteq\leq_B$ as sets of ordered pairs.</p>
<p>For example, if $(A,R)$ is any partially ordered set, and $(A,\leq_R)$ is a linearlization of $R$ then the identity is an order-preserving bijection, but its inverse need not be order-preserving.</p>
<p>Even further, if $A$ contains more than one point, then $(A,\mathrm{Id}_A)$ is a partial order (the discrete order) and the identity is an order-preserving map between $(A,R)$ for any partial order $R$. Its inverse, also the identity, is order-preserving if and only if $R=\mathrm{Id}_A$ as well.</p>
|
1,825,392 | <p>Let $f(z)$ be an entire function so that,</p>
<p>$$ \int \frac{|f(z)|}{1 + |z|^3} dA(z) < \infty$$</p>
<p>where the integral is taken over the entire complex plane. Show that $f$ is a constant.</p>
<p>I believe that the idea is to use the mean value property; that is:</p>
<p>$$f(z) = \frac{1}{\pi\delta^2}\int_{D(z, \delta)}f(w)dA(w)$$</p>
<p>and then do some manipulation to relate the two integrals. But I'm not sure otherwise how to proceed. Can anyone help? </p>
| zhw. | 228,045 | <p>Suppose $f$ is nonconstant. Then there is a constant $c>0$ such that</p>
<p>$$\tag 1 \frac{1}{2\pi}\int_0^{2\pi} |f(re^{it})|\, dt > cr$$</p>
<p>for large $r.$</p>
<p>Proof: We can write $f(z) = f(0) + z^ng(z)$ for some $n\in \mathbb N$ and some entire $g$ such that $g(0) \ne 0.$ We then have the left-hand side of $(1)$ bounded below by </p>
<p>$$ \frac{1}{2\pi}\int_0^{2\pi} r^n|g(re^{it})|\, dt - |f(0)|.$$</p>
<p>This is at least $r^n|g(0)| - |f(0)|$ as a consequence of the mean value property for circles applied to $g.$
This gives the result.</p>
<p>So if $f$ is nonconstant, then</p>
<p>$$\int_{\mathbb C} \frac{|f(z)|}{1 +|z|^3}\, dA(z) = \int_0^\infty \frac{r}{1+r^3}\int_0^{2\pi} |f(re^{it})|\,dt\,dr.$$</p>
<p>By $(1),$ the last integral is $\infty.$ Since in our problem we are given an $f$ for which this integral is finite, such an $f$ must be constant.</p>
|
2,904,432 | <blockquote>
<p>Let the subspace <span class="math-container">$S=\{(x, y, z) \in R^3; x = -2z \}$</span></p>
<p>a) Determine its dimension and a basis B of S;</p>
<p>b) Complete B such that it becomes a basis M of <span class="math-container">$R^3$</span>.</p>
</blockquote>
<p><span class="math-container">$\operatorname{dim}(S) = 2$</span> because there are two free variables, but how do I determine a basis <span class="math-container">$B$</span> for <span class="math-container">$S$</span>? Do the basis vectors have to belong to <span class="math-container">$S$</span>? I think so. What about <span class="math-container">$M$</span>? Does the third vector have to belong to <span class="math-container">$S$</span>? I don't think so, otherwise <span class="math-container">$M$</span> would not generate <span class="math-container">$R^3$</span>.</p>
| Dan Sp. | 539,506 | <p>Yes, $dim(S)=2$. Here is a basis $B$ of $S$:</p>
<p>$\begin{bmatrix}0\\1\\0\end{bmatrix}$, $\begin{bmatrix}2\\0\\-1\end{bmatrix}$</p>
<p>The basis does not span $R^3$ but it is a basis for $S$. To complete $B$ for $R^3$ just find one more vector that is orthogonal to the above two.</p>
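A small sanity check (an editorial addition, using the vectors from the answer) confirms that the two vectors lie in $S$ and that their cross product supplies a third, independent vector outside $S$:

```python
# Verify the proposed basis of S = {(x,y,z) : x = -2z} and complete it to R^3.
b1 = (0, 1, 0)
b2 = (2, 0, -1)

def in_S(v):
    x, y, z = v
    return x == -2 * z

assert in_S(b1) and in_S(b2)

# One way to complete the basis, as the answer suggests: take a vector
# orthogonal to both, i.e. the cross product.
def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

b3 = cross(b1, b2)   # not in S

# Independence: the determinant of the 3x3 matrix with rows b1, b2, b3.
def det3(a, b, c):
    return (a[0]*(b[1]*c[2] - b[2]*c[1])
          - a[1]*(b[0]*c[2] - b[2]*c[0])
          + a[2]*(b[0]*c[1] - b[1]*c[0]))

assert not in_S(b3)
assert det3(b1, b2, b3) != 0
```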
|
3,520,893 | <p>As in group theory, there is a concept of isomorphism between metric spaces called isometry.
Two metric spaces <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> are isometric if there is a bijection that preserves the distance between any two elements. That function is called an isometry.</p>
<p>The thing is that properties of metric spaces (completeness, compactness, connectedness, etc.) are preserved under isometry. </p>
<p>So, thinking about the classification of finite simple groups, I was wondering if there is any classification of Metric Spaces up to isometry, or at least a specific category of Metric Spaces (like finite simple groups in Group Theory). Also, I am interested if there is a more general topological classification of Metric Spaces up to homeomorphism (isomorphism in topology).</p>
| Community | -1 | <p>I don't think so, but two metrics are equivalent iff there are <span class="math-container">$m, M$</span> such that <span class="math-container">$\frac{1}{m}\,d'(x,y)\le d(x,y)\le M\,d'(x,y)$</span> for all <span class="math-container">$x,y\in X$</span>.</p>
|
2,975,647 | <p>Is this following statement valid (where both <span class="math-container">$x$</span> and <span class="math-container">$y$</span> are positive)?</p>
<blockquote>
<p>If <span class="math-container">$x>y$</span>, then <span class="math-container">$\dfrac{1}{x} < \dfrac{1}{y}$</span>. </p>
</blockquote>
| Robert Israel | 8,508 | <p>No. Note that if <span class="math-container">$a$</span> or <span class="math-container">$b$</span> is a negative integer (and <span class="math-container">$c$</span> is not) the series has only finitely many nonzero terms.</p>
<p>EDIT: If <span class="math-container">$a,b,c$</span> are positive integers,
<span class="math-container">$ \dfrac{(a)_n (b)_n}{(c)_n n!}$</span> is a rational function of <span class="math-container">$n$</span>
with degree of denominator - degree of numerator <span class="math-container">$c+1-a-b$</span>. Thus the series converges at <span class="math-container">$z=1$</span> iff <span class="math-container">$c+1-a-b > 1$</span>, i.e. <span class="math-container">$c > a+b$</span>. </p>
|
1,360,910 | <p>My book (Introduction to Ring Theory, Paul Cohn) states this as a theorem and gives a proof. The book usually skips over trivial/easy proofs, so I don't really understand why this is in here.</p>
<p>Isn't the statement absolutely obvious for sets $A,B,C$ with $A\subseteq C$? You just need to draw a diagram and it becomes immediately obvious! <img src="https://i.stack.imgur.com/7YFcC.jpg" alt="enter image description here"></p>
<p>Is there a reason why it would not be obvious in modules?</p>
| Eoin | 163,691 | <p>I'm not sure if the statement is obvious to you but in general $+$ and $\cap$ don't distribute over each other. I wouldn't expect them to either since $\cap$ is a set theoretic operation and $+$ is only defined for modules/ideals/abelian groups. However, when $A\subset C$ they do distribute over one another.</p>
<p>But the proof <em>shouldn't</em> go something like this:</p>
<p>$$A+(B\cap C)=(A+B)\cap (A+C)=(A+B)\cap C $$ where the last equality comes from $A\subset C$ <em>because</em> here we assumed they distribute in the first place which is false in general. So any proof of this fact can't use the above method (or the other direction).</p>
<hr>
<p>An explicit example that these operations don't distribute is by taking the submodules of the $\mathbb{Z}[X]$-module $\mathbb{Z}[X]$ where we can now just look at ideals</p>
<p>$A=(2)$, $B=(x-1)$, and $C=(x+1)$
then $$A+(B\cap C)=(2)+(x^2-1)=(2,x^2-1)$$
and $$(A+B) \cap (A+C)=(2,x-1)\cap(2,x+1)=(2,x+1)$$</p>
<p>since $(2,x-1)=(2,x+1)$.</p>
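As an editorial aside, the modular law itself can be sanity-checked over a small finite model. The sketch below uses subspaces of the GF(2)-vector space $(\Bbb Z_2)^3$ as a toy stand-in for modules (an assumption made only for the check): the modular law holds whenever $A\subseteq C$, while unrestricted distributivity can fail.

```python
from itertools import product

dim = 3
vectors = list(product((0, 1), repeat=dim))

def add(u, v):
    # componentwise addition mod 2
    return tuple((a + b) % 2 for a, b in zip(u, v))

def span(gens):
    # closure of the generators under addition (always contains 0)
    S = {(0,) * dim}
    changed = True
    while changed:
        changed = False
        for u in list(S):
            for g in gens:
                w = add(u, g)
                if w not in S:
                    S.add(w)
                    changed = True
    return frozenset(S)

def plus(A, B):
    # the submodule A + B = {a + b : a in A, b in B}
    return frozenset(add(a, b) for a in A for b in B)

# Every subspace of (Z_2)^3 is the span of at most three vectors.
subspaces = {span(g) for g in product(vectors, repeat=3)}

# Modular law: A <= C implies A + (B meet C) = (A + B) meet C.
for A in subspaces:
    for B in subspaces:
        for C in subspaces:
            if A <= C:
                assert plus(A, B & C) == plus(A, B) & C

# Full distributivity fails without A <= C:
A = span([(1, 0, 0)]); B = span([(0, 1, 0)]); C = span([(1, 1, 0)])
assert plus(A, B & C) != plus(A, B) & plus(A, C)
```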
|
3,467,203 | <p><span class="math-container">$$\frac{(-1)^{k+1}k}{3^k}$$</span></p>
<p>Correct me if I am wrong, but the factor <span class="math-container">$(-1)^{k+1}$</span> makes this an alternating series, so I think I need to show that <span class="math-container">$\frac{k}{3^k}$</span> is decreasing and non-negative. But the problem with that is that there is no restriction on the bounds. So maybe Dirichlet's test would be more appropriate: let <span class="math-container">$a_k=(-1)^{k+1}$</span>, whose partial sums are bounded, and let <span class="math-container">$b_k = \frac{k}{3^k}$</span>, then show that it approaches 0 by taking the limit as <span class="math-container">$k \to \infty$</span>.</p>
| user284331 | 284,331 | <p>It is clear that <span class="math-container">$k/3^{k}\rightarrow 0$</span> as <span class="math-container">$k\rightarrow\infty$</span>. Now by elementary calculus applied to the derivative of <span class="math-container">$x/3^{x}$</span> for <span class="math-container">$x>0$</span>, we conclude that <span class="math-container">$k/3^{k}$</span> is decreasing for <span class="math-container">$k$</span> large, then Alternating Series Test does the job completely.</p>
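As an editorial aside, a numeric check confirms that the terms decrease to $0$, and that the series in fact sums to $3/16$; the limit value (not needed for the convergence question) follows from the standard identity $\sum_{k\ge1}kx^k = x/(1-x)^2$ evaluated at $x=-1/3$.

```python
# The terms k/3^k are positive and strictly decreasing from k = 1 on,
# since the ratio (k+1)/(3k) is at most 2/3 < 1.
terms = [k / 3**k for k in range(1, 60)]
assert all(a > b > 0 for a, b in zip(terms, terms[1:]))

# Partial sums of the alternating series approach 3/16.
partial = sum((-1)**(k + 1) * k / 3**k for k in range(1, 60))
assert abs(partial - 3 / 16) < 1e-12
```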
|
3,592,368 | <p>I managed to solve this question using trigonometry. But I wondered if there'd be any way of doing it using only synthetic geometry. Here it is.</p>
<blockquote>
<p>Let <span class="math-container">$ABC$</span> be a right isosceles triangle of hypotenuse <span class="math-container">$AB$</span>. Let also <span class="math-container">$\Gamma$</span> be the semicircle whose diameter is the line segment <span class="math-container">$AC$</span> such that <span class="math-container">$\Gamma\cap\overline{AB} = \{A\}$</span>. Consider <span class="math-container">$P\in\Gamma$</span> with <span class="math-container">$PC = k$</span>, with <span class="math-container">$k \leq AC$</span>. Find the area of triangle <span class="math-container">$PBC$</span>.</p>
</blockquote>
<p>Here is my interpretation of the picture:
<a href="https://i.stack.imgur.com/Vz0ck.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Vz0ck.png" alt="enter image description here" /></a></p>
<p>I managed to get the solution via trigonometry as below.</p>
<p><a href="https://i.stack.imgur.com/dQ8hU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dQ8hU.png" alt="enter image description here" /></a></p>
<p>Then, the area <span class="math-container">$S$</span> requested is:</p>
<p><span class="math-container">$$\begin{align} S &= \displaystyle\frac{PC\cdot BC\cdot \sin(90^\circ + \beta)}{2}\\
&= \displaystyle\frac{k\cdot d\cdot \cos\beta}{2}\\
&= \displaystyle\frac{k\cdot d\cdot \frac{k}{d}}{2}\\
&= \displaystyle\frac{k^2}{2}.\\
\end{align}$$</span></p>
| ole | 725,731 | <p>Let us find the altitude <span class="math-container">$h$</span> of the point <span class="math-container">$P$</span>, where <span class="math-container">$a$</span> is a leg of the triangle. It is given by the intersection of the circles <span class="math-container">$x^2+h^2=k^2,\ x^2+(h-\frac{a}{2})^2=(\frac{a}{2})^2$</span>:</p>
<p><span class="math-container">$h=\frac{k^2}{a}$</span></p>
<p><span class="math-container">$\operatorname{Area}(PBC)=\frac{1}{2}\,ah=\frac{k^2}{2}$</span></p>
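A numeric sanity check of this computation (an editorial sketch; the sample values of $a$ and $k$ are arbitrary, subject only to $k\le a$):

```python
# Check that h = k^2/a satisfies both circle equations and gives area k^2/2.
a, k = 2.0, 1.3
h = k**2 / a                 # claimed altitude of P
x2 = k**2 - h**2             # x^2 from the first circle  x^2 + h^2 = k^2
assert x2 >= 0
# P also lies on the semicircle of diameter a:  x^2 + (h - a/2)^2 = (a/2)^2
assert abs(x2 + (h - a / 2)**2 - (a / 2)**2) < 1e-12
# Area of PBC = (1/2) * base * height = (1/2) * a * h = k^2 / 2
assert abs(0.5 * a * h - k**2 / 2) < 1e-12
```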
|
2,073,794 | <p>Let's consider the following variant of the Collatz ($3n+1$) map:</p>
<p>if $n$ is odd then $n \to 3n+1$</p>
<p>if $n$ is even then you can choose: $n \to n/2$ or $n \to 3n+1$</p>
<p>With this definition, is it possible to construct a cycle other than the trivial one, i.e., $1\to 4 \to 2 \to 1$?</p>
<p>Best regards</p>
| Sjoerd | 6,688 | <p>$$8 \rightarrow 25 \rightarrow 76 \rightarrow 38 \rightarrow 19 \rightarrow 58 \rightarrow 29 \rightarrow 88 \rightarrow 44 \rightarrow 22 \rightarrow 11 \rightarrow 34 \rightarrow 17 \rightarrow 52 \rightarrow 26 \rightarrow 13 \rightarrow 40 \rightarrow 20 \rightarrow 10 \rightarrow 5 \rightarrow 16 \rightarrow 8$$</p>
<p>And another one:</p>
<p>$$16 \rightarrow 49 \rightarrow 148 \rightarrow 74 \rightarrow 37 \rightarrow 112 \rightarrow 56 \rightarrow 28 \rightarrow 14 \rightarrow 7 \rightarrow 22 \rightarrow 11 \rightarrow 34 \rightarrow 17 \rightarrow 52 \rightarrow 26 \rightarrow 13 \rightarrow 40 \rightarrow 20 \rightarrow 10 \rightarrow 5 \rightarrow 16$$</p>
<p>Both avoid the 4, 2, 1 cycle.</p>
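Both cycles can be verified mechanically (an editorial addition):

```python
# A move n -> m is legal if m = 3n+1 (always allowed), or m = n/2 when n is even.
def legal_step(n, m):
    return m == 3 * n + 1 or (n % 2 == 0 and m == n // 2)

cycle1 = [8, 25, 76, 38, 19, 58, 29, 88, 44, 22, 11, 34, 17, 52, 26, 13,
          40, 20, 10, 5, 16, 8]
cycle2 = [16, 49, 148, 74, 37, 112, 56, 28, 14, 7, 22, 11, 34, 17, 52, 26,
          13, 40, 20, 10, 5, 16]

for cyc in (cycle1, cycle2):
    assert cyc[0] == cyc[-1]                          # the walk closes up
    assert all(legal_step(a, b) for a, b in zip(cyc, cyc[1:]))
    assert not {1, 2, 4} & set(cyc)                   # avoids the trivial cycle
```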
|
1,218,912 | <p>My problem is the following:
$$\binom{n}{r} + \binom{n+1}{r+1} + \binom{n+2}{r+2} + \dots + \binom{n+M}{r+M}$$</p>
<p>How can we reduce it to a shorter closed form?</p>
<p>Here $\dbinom{n}{r} = \dfrac{n!}{r! (n-r)!}$, the usual binomial coefficient. Please help me simplify the above expression.</p>
| Elaqqad | 204,937 | <p><strong>Hint</strong>: what happens if you add $\dbinom{n}{r-1}$ to your sum? Can you prove that the sum then telescopes by applying $$\dbinom{n}{k}+\dbinom{n}{k+1}=\dbinom{n+1}{k+1}$$ repeatedly?</p>
<p><strong>Answer</strong> $\dbinom{n+M+1}{r+M}-\dbinom{n}{r-1}$</p>
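The closed form can be checked by brute force over small parameters (an editorial sketch; the convention $\binom{n}{-1}=0$ is handled explicitly, since `math.comb` rejects negative arguments):

```python
from math import comb

def C(n, r):
    # binomial coefficient with C(n, r) = 0 outside 0 <= r <= n
    return comb(n, r) if 0 <= r <= n else 0

# Verify  sum_{k=0}^{M} C(n+k, r+k) = C(n+M+1, r+M) - C(n, r-1).
for n in range(1, 10):
    for r in range(0, n + 1):
        for M in range(0, 8):
            s = sum(C(n + k, r + k) for k in range(M + 1))
            assert s == C(n + M + 1, r + M) - C(n, r - 1)
```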
|
4,474,803 | <p>Let <span class="math-container">$f$</span> be a differentiable function and consider the two operators
<span class="math-container">$$
A(f(x))=\int_0^1 \frac{ d}{dx}f(x t^\mu) dt,\\
B(f(x))=\mu xf(x)+\int_0^x f(t) dt,
$$</span>
where <span class="math-container">$\mu $</span> is a parameter.</p>
<p>I need to prove that</p>
<p><span class="math-container">$$
A \circ B =B \circ A = I
$$</span></p>
<p>where <span class="math-container">$I$</span> is the identity operator <span class="math-container">$I(f(x))=f(x).$</span></p>
<p>It is easy to prove it for powers <span class="math-container">$x^n$</span> and then for power series but it would be interesting to perform direct calculations and prove it for any function <span class="math-container">$f.$</span></p>
<p>But I was confused by the calculation of <span class="math-container">$A(B(f(x)))$</span>. Any help?</p>
| Dark Rebellion | 858,891 | <p>In this very particular case, NO, <span class="math-container">$\phi_1$</span> is NOT equivalent to <span class="math-container">$\phi_2$</span>, but the overall "intuition" is correct, except for one edge case that was overlooked.</p>
<p>Let me elaborate. It is indeed the case (assuming no name collisions, obviously) that:</p>
<p><span class="math-container">$\forall x Px \land \forall y Qy \iff \forall x \forall y(Px \land Qy) $</span></p>
<p><span class="math-container">$\forall x Px \land \exists y Qy \iff \forall x \exists y(Px \land Qy) $</span></p>
<p><span class="math-container">$\exists x Px \land \forall y Qy \iff \exists x \forall y(Px \land Qy) $</span></p>
<p><span class="math-container">$\exists x Px \land \exists y Qy \iff \exists x \exists y(Px \land Qy) $</span></p>
<p><span class="math-container">$\forall x Px \lor \forall y Qy \iff \forall x \forall y(Px \lor Qy) $</span></p>
<p><span class="math-container">$\forall x Px \lor \exists y Qy \iff \forall x \exists y(Px \lor Qy) $</span></p>
<p><span class="math-container">$\exists x Px \lor \forall y Qy \iff \exists x \forall y(Px \lor Qy) $</span></p>
<p><span class="math-container">$\exists x Px \lor \exists y Qy \iff \exists x \exists y(Px \lor Qy) $</span></p>
<p>This rule does NOT hold true when we are talking about the first predicate of an implication (<span class="math-container">$a\implies b$</span>), rather than disjunction (<span class="math-container">$a \lor b$</span>) or conjunction (<span class="math-container">$a \land b$</span>). E.g. in your case, <span class="math-container">$\forall x Px \implies \exists y Qy$</span> is NOT equivalent to <span class="math-container">$\forall x \exists y (Px \implies Qy)$</span>. Rather, by rewriting the implication as a disjunction, we can show that:</p>
<p><span class="math-container">$\forall x Px \implies \exists y Qy$</span></p>
<p><span class="math-container">$\iff \neg \forall x Px \lor \exists y Qy$</span></p>
<p><span class="math-container">$\iff \exists x \neg Px \lor \exists y Qy$</span></p>
<p><span class="math-container">$\iff \exists x \exists y (\neg Px \lor Qy)$</span></p>
<p><span class="math-container">$\iff \exists x \exists y (Px \implies Qy)$</span></p>
<p>Ultimately, we were able to show that:</p>
<p><span class="math-container">$\forall x Px \implies \exists y Qy \iff \exists x \exists y (Px \implies Qy)$</span></p>
<p>In a similiar fashion, we can show all the other cases as well:</p>
<p><span class="math-container">$\exists x Px \implies \exists y Qy \iff \forall x \exists y (Px \implies Qy)$</span></p>
<p><span class="math-container">$\forall x Px \implies \forall y Qy \iff \exists x \forall y (Px \implies Qy)$</span></p>
<p><span class="math-container">$\exists x Px \implies \forall y Qy \iff \forall x \forall y (Px \implies Qy)$</span></p>
<p>So yes, your initial intuition does hold true, as long as there are no implications involved. If there is an implication involved, then you will have to "switch" the quantifier of the first predicate when moving the quantifiers around, while the quantifier of the second predicate remains the same, just like with disjunction and conjunction.</p>
<p>After reading your question, I am not too sure if you were only interested in knowing the rules for "moving" quantifiers, or if you wanted a (semi)formal proof for it as well. If you are also interested in having a proof, let me know in the comments and I will edit my answer later. In either case, I hope I was able to help!</p>
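As an editorial addition, these four implication rules can also be checked exhaustively over a small finite model; note the rules implicitly assume a nonempty domain.

```python
from itertools import product

# Model-check the four implication rules over every pair of predicates
# P, Q on the three-element domain D.
D = [0, 1, 2]

def forall(f): return all(f(x) for x in D)
def exists(f): return any(f(x) for x in D)
def imp(p, q): return (not p) or q

for P_bits, Q_bits in product(product((False, True), repeat=len(D)), repeat=2):
    P = dict(zip(D, P_bits))
    Q = dict(zip(D, Q_bits))
    assert imp(forall(lambda x: P[x]), exists(lambda y: Q[y])) == \
           exists(lambda x: exists(lambda y: imp(P[x], Q[y])))
    assert imp(exists(lambda x: P[x]), exists(lambda y: Q[y])) == \
           forall(lambda x: exists(lambda y: imp(P[x], Q[y])))
    assert imp(forall(lambda x: P[x]), forall(lambda y: Q[y])) == \
           exists(lambda x: forall(lambda y: imp(P[x], Q[y])))
    assert imp(exists(lambda x: P[x]), forall(lambda y: Q[y])) == \
           forall(lambda x: forall(lambda y: imp(P[x], Q[y])))
```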
|
1,671,933 | <p>In the question <a href="https://math.stackexchange.com/questions/1502484/the-application-of-nimbers-to-nim-strategy">on nimbers</a>, the original poster asks for the meaning of Nimber <a href="https://en.wikipedia.org/wiki/Nimber#Multiplication" rel="nofollow noreferrer">multiplication</a> in the context of impartial games.</p>
<hr>
<p><strong>Edit: As noted by Mark Fischler in the comments below, the following is wrong</strong></p>
<p>My gut instinct is $*a \times *b$ means that if $*a$ is a game equivalent to $a$ stones, and $*b$ is a game equivalent to $b$ stones, then if you replace every stone in the $*a$ game with a copy of the $*b$ game, you get a game with the Nimber $*a \times *b$, but I haven't been able prove it.</p>
| Oscar Lanzi | 248,217 | <p>The "game" in <a href="https://www.youtube.com/watch?v=9JN5f7_3YmQ" rel="nofollow noreferrer">this video</a> may be analyzed in terms of a limited form of nimber multiplication, specifically involving the nimbers <span class="math-container">$1,2,3$</span>.</p>
<p>The rules of the game are as follows:</p>
<ul>
<li><p>Start with disks (or hexagons in the video) of three different colors. Fill a base row of a triangular array with some permutation of <span class="math-container">$n$</span> disks.</p>
</li>
<li><p>This generates <span class="math-container">$n-1$</span> adjacent pairs. On top of each adjacent pair add a disk of the same color if the two disks below are identical, add a disk of the third color otherwise. Repeat this process for the next layer up until you reach one final disk at the top.</p>
</li>
</ul>
<p>For certain values of the layer count <span class="math-container">$n$</span> the color of the top disk will depend only on the two other corners of the triangle; all the other disks in the base layer and all the intermediate layers cancel themselves out. The special layer counts are</p>
<p><span class="math-container">$n=2,4,10,...; n\in\{3^k+1,k\in\mathbb{N}\}.$</span></p>
<p>We may analyze this in terms of nimber multiplication by rendering the colors as the nimbers <span class="math-container">$1,2,3$</span>. With this rendering the placement rule is simply that the nimber product of any two horizontally adjacent disks with the one above is <span class="math-container">$1$</span>, to wit</p>
<p><span class="math-container">$1\otimes1\otimes1=2\otimes2\otimes2=3\otimes3\otimes3=1\otimes2\otimes3=1.$</span></p>
<p>Thus disks with colors <span class="math-container">$a$</span> and <span class="math-container">$b$</span> will be topped by a disk whose color corresponds to <span class="math-container">$a^{-1}b^{-1}$</span>, where the product is understood to be the nimber product. For an <span class="math-container">$n$</span>-layer array, the top disk will have a combined product of all the base disks raised to some powers; for instance with six layers the top disk carries the product</p>
<p><span class="math-container">$a^{-1}b^{-5}c^{-10}d^{-10}e^{-5}f^{-1}$</span></p>
<p>where the disks are in order from <span class="math-container">$a$</span> to <span class="math-container">$f$</span> across the base row. The exponent will in general be <span class="math-container">$(-1)^{n-1}$</span> times the entry in Pascal's Triangle corresponding to the disk's position. The exponents always sum to <span class="math-container">$(-2)^{n-1}\equiv1\bmod 3$</span>.</p>
<p>Since each nimber cubed is <span class="math-container">$1$</span>, the product may be simplified by reducing each exponent to a residue <span class="math-container">$\in\{-1,0,1\}\bmod 3$</span>. For the six-layer case above this gives</p>
<p><span class="math-container">$a^{-1}bc^{-1}d^{-1}ef^{-1}$</span></p>
<p>whereas a product depending only on the corners would have been <span class="math-container">$a^{-1}f^{-1}$</span>. Thus six layers is not a "special number". But for four layers we do properly get <span class="math-container">$a^{-1}d^{-1}$</span> as the intervening elements in Pascal's Triangle are multiples of <span class="math-container">$3$</span> (<span class="math-container">$1~\color{blue}{3~3}~1$</span>). We see, given well-known properties of Pascal's Triangle, the "special numbers" correspond to <span class="math-container">$n-1$</span> being a power of <span class="math-container">$3$</span> as indicated in the video.</p>
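The color rule and the claim about special layer counts can be checked computationally (an editorial sketch; the encoding of the colors as $1,2,3$ follows the answer):

```python
from itertools import product

# Placement rule: two equal disks are topped by the same color, two
# different disks by the third color (the three colors sum to 6).
def top_of(a, b):
    return a if a == b else 6 - a - b

def apex(row):
    # collapse the base row layer by layer up to the single top disk
    row = list(row)
    while len(row) > 1:
        row = [top_of(a, b) for a, b in zip(row, row[1:])]
    return row[0]

# For the "special" layer count n = 4 (= 3^1 + 1), the apex depends only
# on the two base corners ...
for base in product((1, 2, 3), repeat=4):
    assert apex(base) == top_of(base[0], base[3])

# ... while for n = 6 it does not.
assert any(apex(b) != top_of(b[0], b[5]) for b in product((1, 2, 3), repeat=6))
```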
|
1,085,620 | <p>Which number in the interval from 7902 to 7918 can be divided without remainder only by itself and by the number 1?</p>
| Batman | 127,428 | <p>You want to find a prime between 7902 and 7918. One way to do this is to use a <a href="http://en.wikipedia.org/wiki/Sieve_of_Eratosthenes" rel="nofollow">number sieve</a>, or since there are only a few small numbers, brute force by testing divisors from 2 up to 88 ($\lfloor \sqrt{7918}\rfloor$). From this we see 7907 is prime. </p>
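A sketch of the brute-force approach (an editorial addition):

```python
# Trial division over the numbers strictly between 7902 and 7918; testing
# divisors up to floor(sqrt(n)) <= 88 suffices.
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

primes = [n for n in range(7903, 7918) if is_prime(n)]
assert primes == [7907]
```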
|
2,107,192 | <p>I am trying to find the total number of signals that can be created from $3$ pink, $3$ white and $2$ black flags when arranged in a straight line. But, only $5$ flags are allowed in a signal.</p>
<p>I know how to find permutations in this situation when all are taken at a time i.e. a signal has $8$ flags, which would be $\frac{8!}{3! \cdot 3! \cdot 2!}$, but cannot see how to start solving a scenario where all are not taken at the same time.</p>
<p><strong>Formula when all are taken at a time with some like objects</strong></p>
<p>$$\frac{n!}{a! \cdot b! \cdot c! \cdot d!...}$$</p>
<p><strong>Question</strong></p>
<p>Is there a modified formula that will give permutations of $n$ objects taken $r$ at a time when some of these objects are alike and $r < n$?</p>
| Sunil | 249,557 | <p>I was able to solve this problem but not sure if there is a better and shorter method. I used a $case$ approach to solving this by listing all possible groups of 5 flags using the original collection of 8 flags and then within each group of 5 flags we apply the formula for permutations of 5 objects at a time.</p>
<p><strong>Is there a better and shorter approach to this problem?</strong></p>
<p>Since there are 3 W(hite), 3 P(ink) and 2 B(lack) balls, so the cases are as below. We start with cases having the maximum of each type of flag and then decrease the maximum of each flag type by 1 till we reach 1 count for each flag type. In each case we get a group of 5 flags which we simply arrange 5 at a time using standard formula.</p>
<p>The sum of all ways i.e. arrangements of 5 flags is the sum of all cases, which is 170. This is also the answer given at the back of the book.</p>
<p><em>Case 1</em>: 3W</p>
<p>3W+2P : 5!/(3!.2!) = 10 ways</p>
<p>3W+2B : 5!/(3!.2!) = 10 ways</p>
<p>3W+1P+1B : 5!/(3!.1!.1!) = 20 ways</p>
<p><em>Case 2</em>: 3P</p>
<p>3P+2W : 5!/(3!.2!) = 10 ways</p>
<p>3P+2B : 5!/(3!.2!) = 10 ways</p>
<p>3P+1B+1W: 5!/(3!.1!.1!) = 20 ways</p>
<p><em>Case 3</em>: 2B</p>
<p><strike>2B+3W</strike></p>
<p><strike>2B+3P</strike></p>
<p>2B+1P+2W : 5!/(2!.2!.1!) = 30 ways</p>
<p>2B+2P+1W : 5!/(2!.2!.1!) = 30 ways</p>
<p><em>Case 4</em>: 1W</p>
<p><strike>1W+3P+1B</strike></p>
<p><strike>1W+2P+2B</strike></p>
<p><em>Case 5</em>: 2W</p>
<p>2W+2P+1B: 5!/(2!.2!.1!) = 30 ways</p>
<p><strike>2W+1P+2B</strike></p>
<p><em>Case 6</em>: 1W</p>
<p><strike>1W+3P+1B</strike></p>
<p><strike>1W+2P+2B</strike></p>
<p><em>Case 7</em>: 2P</p>
<p><strike>2P+1W+2B</strike></p>
<p><strike>2P+2W+1B</strike></p>
<p><em>Case 8</em>: 1P</p>
<p><strike>1P+2W+2B</strike></p>
<p><strike>1P+3W+1B</strike></p>
<p><em>Case 9</em>: 1B</p>
<p><strike>1B+2W+2P</strike></p>
<p><strike>1B+3W+1B</strike></p>
<p><strike>1B+1W+3B</strike></p>
|
2,107,192 | <p>I am trying to find the total number of signals that can be created from $3$ pink, $3$ white and $2$ black flags when arranged in a straight line. But, only $5$ flags are allowed in a signal.</p>
<p>I know how to find permutations in this situation when all are taken at a time i.e. a signal has $8$ flags, which would be $\frac{8!}{3! \cdot 3! \cdot 2!}$, but cannot see how to start solving a scenario where all are not taken at the same time.</p>
<p><strong>Formula when all are taken at a time with some like objects</strong></p>
<p>$$\frac{n!}{a! \cdot b! \cdot c! \cdot d!...}$$</p>
<p><strong>Question</strong></p>
<p>Is there a modified formula that will give permutations of $n$ objects taken $r$ at a time when some of these objects are alike and $r < n$?</p>
| awkward | 76,172 | <p>Let's generalize the problem slightly and find the number of signals with $r$ flags, say $a_r$. Let $f(x)$ be the exponential generating function for $a_r$, i.e.
$$f(x) = \sum_{r=0}^{\infty} \frac{1}{r!} a_r x^r$$
Since there are 3 pink, 3 white, and 2 black flags,
$$f(x) = \left( 1 + x + \frac{1}{2!} x^2 + \frac{1}{3!} x^3 \right)^2 \left( 1 + x + \frac{1}{2!} x^2 \right)$$
Expanding this polynomial (I cheated and used a computer algebra system), we find</p>
<p>$$f(x) = 1+3 x+\frac{9 x^2}{2}+\frac{13 x^3}{3}+\frac{35 x^4}{12}+\frac{17 x^5}{12}+\frac{35 x^6}{72}+\frac{x^7}{9}+\frac{x^8}{72}$$
so the number of signals with 5 flags is
$$5! \; \frac{17}{12} = 170$$</p>
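The total of 170 can also be confirmed by brute force (an editorial addition, independent of the generating-function computation):

```python
from itertools import permutations

# Count distinct 5-flag sequences drawn from the multiset {3 P, 3 W, 2 B}.
# permutations() treats equal flags at different positions as distinct,
# so collecting the tuples into a set removes the duplicates.
flags = "PPPWWWBB"
signals = set(permutations(flags, 5))
assert len(signals) == 170
```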
|
1,309,572 | <p>The main idea of Russel's paradox is that, in Naive Set Theory, if we define $R = \{x\ |\ x \not\in x \}$, then $R \in R \Leftrightarrow R \not \in R$.</p>
<p>ZFC deals with this by making unrestricted set comprehension illegal, while type theory creates a hierarchy of sets such that there is no place in it for $R$. Both of these approaches stop one from defining $R$ in these versions of set theory.</p>
<p>However, what if we were to allow one to define such a set, but restrict the statements that one could make about it? In other words, is there a version of set theory which allows one to define $R$ as shown above, but in which the statement $R \in R$ is not a paradox, but rather simply doesn't make sense?</p>
<p>I'm not sure how one would define such a version of set theory, but it seems to me like something like this could be essentially equivalent to Naive Set Theory for "normal" sets while disallowing certain kinds of statements about "pathological" sets.</p>
| Asaf Karagila | 622 | <p>If $R$ is a set such that $x\in R\leftrightarrow x\notin x$, and the law of excluded middle holds, then either $R\in R$ or $R\notin R$, but not both since it is impossible that both a statement and its negation are true.</p>
<p>But now if $R\in R$ then $R\notin R$; and if $R\notin R$ then $R\in R$.</p>
<p>So it's a no-go. Regardless to allowing or disallowing comprehension. Or any properties of $\in$, really. Just that it's a binary relation.</p>
|
1,309,572 | <p>The main idea of Russel's paradox is that, in Naive Set Theory, if we define $R = \{x\ |\ x \not\in x \}$, then $R \in R \Leftrightarrow R \not \in R$.</p>
<p>ZFC deals with this by making unrestricted set comprehension illegal, while type theory creates a hierarchy of sets such that there is no place in it for $R$. Both of these approaches stop one from defining $R$ in these versions of set theory.</p>
<p>However, what if we were to allow one to define such a set, but restrict the statements that one could make about it? In other words, is there a version of set theory which allows one to define $R$ as shown above, but in which the statement $R \in R$ is not a paradox, but rather simply doesn't make sense?</p>
<p>I'm not sure how one would define such a version of set theory, but it seems to me like something like this could be essentially equivalent to Naive Set Theory for "normal" sets while disallowing certain kinds of statements about "pathological" sets.</p>
| hmakholm left over Monica | 14,366 | <p>This wouldn't be possible in ordinary first-order logic, because no matter what the axioms say <em>about</em> $R\in R$, it is necessarily a <em>well-formed</em> formula.</p>
<p>But there's a more conceptual problem here: If you're defining $R$ as "the set that consists of all sets that don't contain themselves", you're already <em>implicitly</em> assuming that it makes sense to ask of <em>any</em> set whether or not it contains itself, and get a definite answer back -- because that question is what you want to use as the <em>defining property</em> of $R$. You can't then suddenly turn around and say you can ask any set whether it is a member of itself, but you can't ask it of $R$.</p>
<p>If you need to change something, you have to change the underlying logic such that it's not two-valued anymore. Even so, whenever the logic allows you to assert (under a quantifier) that two (generalized) truth values are equal, <em>and</em> you can somehow construct a truth function that has no fixed points, Russell's paradox will be recreatable in that logic.</p>
<hr>
<p><strong>Edit for an alternative answer:</strong></p>
<blockquote>
<p>I'm not sure how one would define such a version of set theory, but it seems to me like something like this could be essentially equivalent to Naive Set Theory for "normal" sets while disallowing certain kinds of statements about "pathological" sets.</p>
</blockquote>
<p>To some extent, this is done in <em>Morse-Kelley</em> set theory. There, your "pathological sets" are called <em>proper classes</em>, and have the restriction that you're not allowed to put have a proper class to the left of $\in$. (Or if you do, the result is always considered to be "false").</p>
<p>You get "unrestricted" comprehension in that you're allowed to make a collection of all the <em>normal</em> sets that satisfy any condition you can write down, but sometimes the result turns out to be "pathological".</p>
<p>Thus, you can define $R$ as the collection of all <em>normal</em> sets that don't contain themselves. You don't get $\forall x(x\in R\leftrightarrow x\notin x)$, but you do get
$$ \forall x\in \mathit{NormalSets}\,(x\in R\leftrightarrow x\notin x) $$</p>
<p>However, it is debatable whether this really implements your program, because in Morse-Kelley, the conditions for a collection to be a "normal set" are more or less the same as the conditions for a set to <em>exist at all</em> in ZFC. So there's not much more new you can <em>really</em> do with the flexibility.</p>
|