qid | question | author | author_id | answer |
|---|---|---|---|---|
4,044,708 | <p>[3,4] is closed in R <-- R-[3,4] is open</p>
<p>[5,6] is closed in R <-- R-[5,6] is open</p>
<p>Show that [3,4] x [5,6] is closed in R x R by writing it as the complement of the union of two open sets in R x R.</p>
<p>(R - [3,4]) x (R - [5,6]) not equal R x R - [3,4] x [5,6]</p>
| Martin R | 42,969 | <p>Using that <span class="math-container">$u$</span> is midpoint-convex works in higher dimensions as well.</p>
<p><span class="math-container">$y \mapsto x - (y-x) = 2x-y$</span> maps the sphere <span class="math-container">$\partial B_r(x)$</span> bijectively onto itself (each point is mapped to the “opposite” point on the sphere). It follows that
<span class="math-container">$$
\int_{\partial B_r(x)} u(y) \, dy = \int_{\partial B_r(x)} u(2x-y) \, dy
$$</span>
and therefore
<span class="math-container">$$
\int_{\partial B_r(x)} u(y) \, dy = \int_{\partial B_r(x)} \frac 12\bigl(u(y) + u(2x-y)\bigr) \, dy \\
\ge \int_{\partial B_r(x)} u\left(\frac{y + (2x-y)}{2}\right) \, dy = \int_{\partial B_r(x)} u(x) \, dy = |\partial
B_r(x)| \cdot u(x) \, .
$$</span></p>
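As a quick numeric illustration of the displayed inequality (a Python sketch; the convex sample function $u(y)=e^{y_1}$, center $x=0$, and radius are chosen purely for the check, not taken from the answer): the discretized average of $u$ over the sphere dominates $u$ at the center.

```python
import math

# Sample midpoint-convex (indeed convex) function u(y) = exp(y_1), center x = 0,
# radius r: the discretized average of u over the circle must dominate u(x) = 1.
r, n = 0.8, 1000
avg = sum(math.exp(r * math.cos(2 * math.pi * k / n)) for k in range(n)) / n
```

Since the sample points come in "opposite" pairs and $\tfrac12(u(y)+u(2x-y)) \ge u(x)$ pointwise, the average cannot fall below $u(x)$.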
|
865,293 | <blockquote>
<p>Prove $\ln[\sin(x)] \in L_1 [0,1].$</p>
</blockquote>
<p>Since the problem does not require actually solving for the value, my strategy is to bound the integral somehow. I thought I was getting out of this one for free, since for $\epsilon > 0$ small enough, $$\lim_{\epsilon \to 0}\int_\epsilon^1 e^{\left|\ln(\sin(x))\right|}dx=\cos(\epsilon)-\cos(1) \to 1-\cos(1)<\infty$$</p>
<p>and so by Jensen's Inequality, $$e^{\int_0^1 \left| \ln(\sin(x))\right|\,dx}\le \int_0^1e^{\left|\ln(\sin(x))\right|}\,dx\le1-\cos(1)<\infty$$ so that $\int_0^1 \left|\ln(\sin(x))\right|\,dx<\infty$. </p>
<p>The problem, of course, is that the argument begs the question, since Jensen's assumes the function in question is integrable to begin with, and that's what I'm trying to show. </p>
<p>Any way to save my proof, or do I have to use a different method? I attempted integration by parts to no avail, so I am assuming there is some "trick" calculation I do not know that I should use here. </p>
| Community | -1 | <p>We can show this using the fact that $\sin x \sim x$ for small values of $x$; precisely, we have the inequality</p>
<p>$$\frac 1 2 x \le \sin x$$</p>
<p>for all $x \in [0,1]$; this leads to</p>
<p>$$\ln\left(\frac{x}{2}\right) \le \ln \sin x$$</p>
<p>almost everywhere on $[0,1]$. We'll actually use that</p>
<p>$$-\ln \left(\frac x 2\right) \ge - \ln \sin x.$$ Noting that $\ln(x/2) = \ln x - \ln 2$, and that our measure space is finite, it is sufficient to show that $\ln x \in L^1[0,1]$. To do this, we show that $\ln(1/x)$ has finite Lebesgue integral on this interval via the Monotone Convergence Theorem (hence the usage of the $-$ sign to make things positive). Since $\ln x$ is continuous and bounded on every interval $[\epsilon, 1]$, the Lebesgue integral coincides with the Riemann integral, and applying the MCT to the functions $-\chi_{[1/n,1]} \ln x$ gives</p>
<p>\begin{align*}
\int_0^1 - \ln x dx &= \lim_{n \to \infty} \int_{1/n}^1 - \ln x dx \\
&= - \lim_{n \to \infty} x (\ln x - 1) \Big|_{1/n}^1 \\
&= - \lim_{n \to \infty} \Big[\Big(1 (\ln 1 - 1)\Big) - \Big(\frac 1 n \left(\ln \frac 1 n - 1\right)\Big)\Big] \\
&= 1 - \lim_{n \to \infty} \left(\frac 1 n + \frac{\ln n}{n}\right) \\
&= 1
\end{align*}</p>
<p>Now by comparison, the original function is integrable.</p>
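The two facts used above can be sanity-checked numerically (a Python sketch with a crude composite midpoint rule; the step count is arbitrary): $\int_0^1 -\ln x\,dx$ comes out as $1$, and $\int_0^1 |\ln\sin x|\,dx$ is finite and only slightly larger.

```python
import math

def midpoint_integral(f, a, b, steps=200_000):
    # Composite midpoint rule; crude but adequate for these integrable singularities,
    # since the midpoint samples never touch the endpoint x = 0.
    h = (b - a) / steps
    return sum(f(a + (k + 0.5) * h) for k in range(steps)) * h

i_log = midpoint_integral(lambda x: -math.log(x), 0.0, 1.0)
i_logsin = midpoint_integral(lambda x: abs(math.log(math.sin(x))), 0.0, 1.0)
```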
|
865,293 | <blockquote>
<p>Prove $\ln[\sin(x)] \in L_1 [0,1].$</p>
</blockquote>
<p>Since the problem does not require actually solving for the value, my strategy is to bound the integral somehow. I thought I was getting out of this one for free, since for $\epsilon > 0$ small enough, $$\lim_{\epsilon \to 0}\int_\epsilon^1 e^{\left|\ln(\sin(x))\right|}dx=\cos(\epsilon)-\cos(1) \to 1-\cos(1)<\infty$$</p>
<p>and so by Jensen's Inequality, $$e^{\int_0^1 \left| \ln(\sin(x))\right|\,dx}\le \int_0^1e^{\left|\ln(\sin(x))\right|}\,dx\le1-\cos(1)<\infty$$ so that $\int_0^1 \left|\ln(\sin(x))\right|\,dx<\infty$. </p>
<p>The problem, of course, is that the argument begs the question, since Jensen's assumes the function in question is integrable to begin with, and that's what I'm trying to show. </p>
<p>Any way to save my proof, or do I have to use a different method? I attempted integration by parts to no avail, so I am assuming there is some "trick" calculation I do not know that I should use here. </p>
| Zarrax | 3,035 | <p>First observe that $\ln \sin x = \ln {\sin x \over x} + \ln x$. </p>
<p>The function ${\sin x \over x}$ is continuous and nonzero on $[0,1]$ (if you extend it to equal $1$ at $x = 0$), so the same is true for $\ln {\sin x \over x}$. Thus $\ln {\sin x \over x}$ is in $L^1[0,1]$. </p>
<p>The function $\ln x$ is also integrable on $[0,1]$, as its antiderivative is $x \ln x - x$, which converges to zero as $x \to 0$ (and since $\ln x$ has constant sign near $0$, the improper Riemann integral and the Lebesgue integral agree). </p>
<p>So their sum $\ln \sin x$ is in $L^1[0,1]$ too.</p>
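The split behaves numerically as described (a small Python check; the sample points are arbitrary): $\ln\frac{\sin x}{x}$ stays bounded near $0$, while $\ln x$ carries the singularity.

```python
import math

# Near 0, sin(x)/x -> 1, so ln(sin x / x) stays bounded, while ln x -> -infinity.
xs = [1e-2, 1e-4, 1e-8]
bounded_part = [math.log(math.sin(x) / x) for x in xs]
singular_part = [math.log(x) for x in xs]
```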
|
4,241 | <p>I was preparing for an area exam in analysis and came across a problem in the book <em>Real Analysis</em> by Haaser & Sullivan. From p.34 Q 2.4.3, If the field <em>F</em> is isomorphic to the subset <em>S'</em> of <em>F'</em>, show that <em>S'</em> is a subfield of <em>F'</em>. I would appreciate any hints on how to solve this problem as I'm stuck, but that's not my actual question.</p>
<p>I understand that for finite fields this implies that two sets of the same cardinality must have the same field structure, if any exists. The classification of finite fields answers the above question in a constructive manner.</p>
<p>What got me curious is the infinite case. Even in the finite case it's surprising to me that the field axioms are so "restrictive", in a sense, that alternate field structures are simply not possible on sets of equal cardinality. I then started looking for examples of fields with characteristic zero while thinking about this problem. I didn't find many. So far, I listed the rationals, algebraic numbers, real numbers, complex numbers and the p-adic fields. What are other examples? Is there an analogous classification for fields of characteristic zero?</p>
| Martin Brandenburg | 1,650 | <blockquote>
<p>Is there an analogous classification for fields of characteristic zero?</p>
</blockquote>
<p>Yes, but it is somewhat useless and nobody would call it a classification.</p>
<p>Every field of characteristic zero has the form $Quot(\mathbb{Q}[X]/S)$, where $X$ is a set of variables and $S$ is a set of polynomials in $\mathbb{Q}[X]$ (which you may replace by the ideal generated by $S$, which must be prime). This may be improved by the existence of transcendence bases: Every field of characteristic zero has the form $Quot(\mathbb{Q}[X])[T]/S$, where $X$ and $T$ are sets of variables and $S$ consists of polynomials, which have each only one variable of $T$.</p>
|
1,660,794 | <p>Suppose $$a'(x)=b(x)$$ and $$b'(x)=a(x)$$</p>
<p>What is $$\int x \sin (x) a(x) dx$$</p>
<p>Thanks!</p>
| Ian | 83,396 | <p>For constants $a,b$ you have</p>
<p>$$\sum_{i=1}^n (ai+b) = a \sum_{i=1}^n i + b \sum_{i=1}^n 1 = \frac{a n(n+1)}{2} + bn.$$</p>
<p>You can set this equal to your given number and solve for $n$; if you get an integer then your given number was whatever-gonal.</p>
<p>I'm not sure if this <em>fully</em> answers your question, since I'm not that closely familiar with the -gonal numbers.</p>
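The suggested check can be sketched in code (hypothetical helper name; `a`, `b` are the constants above and `m` the number being tested — for triangular numbers take $a=1$, $b=0$):

```python
import math

def is_gonal(a, b, m):
    """Check whether m = a*n*(n+1)/2 + b*n for some positive integer n
    (hypothetical helper illustrating the suggestion above)."""
    # Multiply by 2: a*n^2 + (a + 2b)*n - 2m = 0, so by the quadratic formula
    # n = (-(a + 2b) + sqrt((a + 2b)^2 + 8*a*m)) / (2*a).
    disc = (a + 2 * b) ** 2 + 8 * a * m
    if disc < 0:
        return False
    r = math.isqrt(disc)
    if r * r != disc:            # n is an integer only if disc is a perfect square
        return False
    num = -(a + 2 * b) + r
    return num > 0 and num % (2 * a) == 0
```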
|
25,337 | <p>If you want to compute crystalline cohomology of a smooth proper variety $X$ over a perfect field $k$ of characteristic $p$, the first thing you might want to try is to lift $X$ to the Witt ring $W_k$ of $k$. If that succeeds, compute de Rham cohomology of the lift over $W_k$ instead, which in general will be much easier to do. Neglecting torsion, this de Rham cohomology is the same as the crystalline cohomology of $X$.</p>
<p>I would like to have an example at hand where this approach fails: Can you give an example for</p>
<blockquote>
<p>A smooth proper variety $X$ over the finite field with $p$ elements, such that there is no smooth proper scheme of finite type over $\mathbb Z_p$ whose special fibre is $X$.</p>
</blockquote>
<p>The reason why such examples <em>have</em> to exist is metamathematical: if there weren't any, the pain one undergoes constructing crystalline cohomology would be unnecessary.</p>
| BlaCa | 14,514 | <p>In general, the obstruction to lifting a scheme $X$ to characteristic zero lies in $H^2(X,T_X)$. For examples of $3$-folds in positive characteristic that cannot be lifted to characteristic zero, you may look at Theorem 22.4 in Hartshorne's "Deformation Theory".</p>
|
113,446 | <p>Suppose a simple equation in Cartesian coordinate:
$$
(x^2+ y^2)^{3/2} = x y
$$
In polar coordinate the equation becomes $r = \cos(\theta) \sin(\theta)$. When I plot both, the one in polar coordinate has two extra lobes (I plot the polar figure with $\theta \in [0.05 \pi, 1.25 \pi]$ so the "flow" of the curve is clearer).</p>
<pre><code>figurePolar = PolarPlot[Sin[θ] Cos[θ], {θ, 0.05 π, 1.25 π},
PlotStyle -> {Blue, Thick}];
figureCartesian = ContourPlot[(Sqrt[x^2 + y^2])^3 == x y, {x, -0.4, 0.4}, {y, -0.4, 0.4}, ContourStyle -> {Green, Dashed}];
GraphicsGrid[{{figurePolar, figureCartesian}}]
</code></pre>
<p><a href="https://i.stack.imgur.com/ez5CK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ez5CK.png" alt="same function in polar and Cartesian coordinate"></a>
The right one is in Cartesian coordinates; it is correct since $x y \geq 0$. The extra lobes in the polar (left) figure seem to be caused by Mathematica's use of negative $r$, which goes against the usual mathematical definition. Any thoughts?</p>
| Rashid | 39,584 | <p><code>PolarPlot</code> purposely accepts negative radii values as well as angles beyond the range 0 to 2$\pi$. See, for example, the <code>PolarPlot</code> <a href="https://reference.wolfram.com/language/ref/PolarPlot.html" rel="nofollow noreferrer">documentation here</a> showing <code>PolarPlot[Sin[3 t], {t, 0, Pi}]</code>, which returns this three lobe structure including values below the x-axis (even though the plot angles are only in the range 0 to Pi). </p>
<p><a href="https://i.stack.imgur.com/sQ9eP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sQ9eP.png" alt="enter image description here"></a></p>
<p>Since any real value of $(r,\theta)$ still maps to a unique plot point in $(x,y)$, there is no inherent problem with uniqueness in terms of plotting. </p>
<p>Not restricting the range of $(r,\theta)$ also helps in creating more intricate plots, including spirals (as J.M. commented above) and even flowers (as shown in the documentation examples).</p>
<pre><code>PolarPlot[{Sin[6 θ], Cos[6 θ]}, {θ, 0, 2 π}, Axes -> False, PlotStyle -> {Red, Blue}]
</code></pre>
<p><a href="https://i.stack.imgur.com/DhhHZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DhhHZ.png" alt="enter image description here"></a></p>
<p>However, as discussed here on <a href="http://mathworld.wolfram.com/PolarCoordinates.html" rel="nofollow noreferrer">MathWorld</a>, this means that "polar coordinates aren't inherently unique", because the inverse transform $(x,y)\to(r,\theta)$ is not uniquely defined (unless the range of $r,\theta$ is restricted).</p>
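The identity behind those extra lobes can be checked in coordinates (a small numeric sketch; Python is used only because the identity is language-independent): a point with negative radius $r$ at angle $t$ lands at the same $(x,y)$ as the point with positive radius $-r$ at angle $t+\pi$.

```python
import math

# A negative radius r at angle t plots at the same (x, y) as the positive
# radius -r at angle t + pi; this is exactly what produces the extra lobes.
t = 2.0                               # an angle where r = sin t * cos t < 0
r = math.sin(t) * math.cos(t)
p_negative = (r * math.cos(t), r * math.sin(t))
p_flipped = (-r * math.cos(t + math.pi), -r * math.sin(t + math.pi))
```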
|
1,109,443 | <p>I'm currently studying for an algebra exam and I have some example questions from the last few years. And I can't find a solution to this one:</p>
<blockquote>
<p>Give three examples of complex numbers where z = -z</p>
</blockquote>
<p>The only complex number I can think of is 0. Because it is a complex number, isn't it? Like 0 + 0i.</p>
<p>What two other complex numbers can be given as examples?</p>
<p>Edit: Well, I'm pretty sure it's z = -z. I have only this low-resolution picture, but you can see it in the first task: <a href="https://i.imgur.com/2EuugPZ.jpg" rel="nofollow noreferrer">http://i.imgur.com/2EuugPZ.jpg</a>. Yeah, I know it's all in Polish, but you have to believe me it says to find three examples.</p>
<p>Edit 2: Okay, now I see that it might actually say $\bar{z} = -z$.</p>
| Alec Teal | 66,223 | <p><strong>Answer:</strong></p>
<p>Base 2 numbers! If someone is $a$ of race A ($a\in[0,1]$) and someone else is $b\in[0,1]$ race A, their offspring is $\frac{1}{2}(a+b)=\frac{1}{2}a+\frac{1}{2}b$.</p>
<p>This immediately screams "base 2" because you'll get this recursive half pattern.</p>
<p>So let's write something in base 2. Take <code>101</code>: this is 1/2 + 1/8 (there's a 0 in the 1/4 column), which is 5/8 in binary. We halve it, which in base 2 is just shifting everything right once, to get <code>0101</code>, and then add it with the other parent (after shifting theirs).</p>
<p>For example: <code>1</code>x<code>0</code> (where <code>x</code> means "have baby with") is <code>01+00=01</code>=0.5.</p>
<p>The operation of <code>x</code> is closed with finite binary strings - which I will call "race strings".</p>
<p>Adding two terminating numbers is a terminating number. </p>
<p>That means this set is closed under <code>x</code>. It's like the integers inside the real numbers: using addition you cannot escape the integers from inside them. The same sort of closure.</p>
<p>The fraction 1/3 is not a finite race-string, so it cannot have come from two finite race strings. QED.</p>
<hr>
<p><strong>Here's how I got to the answer</strong></p>
<p>It could be a fraction so close to 1/12 it's easier to say 1/12 than say "21/256ths"</p>
<p>Let's take "race A": if someone is $x/y$ race A and someone else is $a/b$ race A, their offspring is $0.5\frac{x}{y} + 0.5\frac{a}{b} = \frac{1}{2}(\frac{x}{y}+\frac{a}{b})$</p>
<p>But $\frac{x}{y}$ and $\frac{a}{b}$ must also come from this relation. </p>
<p>Now 1/3 (and 1/12 = 1/4 * 1/3) is a recurring number when expressed in base 2: in base 10 the place values go 0.1, 0.01, 0.001 and so forth, while in base 2 they go 1/2, 1/4, 1/8, ..., and 1/3 = 0.010101... never terminates.</p>
<p>So say we wanted someone who was 1/2 + 1/8 of race A: that'd be "101" in binary in this form, and it terminates. The 3/4 used in the comments, that's "110", which also terminates.</p>
<p>Remember the relation above, if someone is "0110" (3/8) and someone else is "1100" (3/4) say, we get the result by shifting one right and adding, in this case </p>
<pre><code> "00110"
+"01100" which is "10010" or 9/16,
</code></pre>
<p>So to be 1/12th would mean someone who was a quarter, and someone who is a third, but as you can see no one can be a third (in finite steps) starting from 1 or 0 of race A</p>
<p>To sum up! Being 1/3rd of something means an infinite string of 0s and 1s, so it can't have come from the "product" of two people who have finite strings representing their race. We've seen that finite strings are closed (two people with finite race-strings produce someone with a finite race-string), so someone who is 1/3rd can't have been produced by two people with finite strings.</p>
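The closure argument can be verified mechanically with exact rational arithmetic (a Python sketch; the sample parents 5/8 and 3/4 are illustrative choices):

```python
from fractions import Fraction

def mix(a, b):
    """The 'x' (have-baby) operation: the average of the two parents' fractions."""
    return (a + b) / 2

def is_dyadic(q):
    """A fraction has a finite base-2 expansion iff its reduced denominator
    is a power of 2 (checked here with a bit trick)."""
    d = q.denominator
    return d & (d - 1) == 0

child = mix(Fraction(5, 8), Fraction(3, 4))
```

Mixing two dyadic rationals always yields a dyadic rational, and 1/12 (denominator not a power of 2) is not dyadic, so it can never appear.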
|
1,109,443 | <p>I'm currently studying for an algebra exam and I have some example questions from the last few years. And I can't find a solution to this one:</p>
<blockquote>
<p>Give three examples of complex numbers where z = -z</p>
</blockquote>
<p>The only complex number I can think of is 0. Because it is a complex number, isn't it? Like 0 + 0i.</p>
<p>What two other complex numbers can be given as examples?</p>
<p>Edit: Well, I'm pretty sure it's z = -z. I have only this low-resolution picture, but you can see it in the first task: <a href="https://i.imgur.com/2EuugPZ.jpg" rel="nofollow noreferrer">http://i.imgur.com/2EuugPZ.jpg</a>. Yeah, I know it's all in Polish, but you have to believe me it says to find three examples.</p>
<p>Edit 2: Okay, now I see that it might actually say $\bar{z} = -z$.</p>
| Loren Pechtel | 23,794 | <p>It depends on how you define being Cherokee.</p>
<p>As others have shown, no possible <strong>normal</strong> breeding sequence can produce someone who is 1/12th.</p>
<p>However, we now have three-parent children (23 chromosomes from a male, 23 chromosomes from a female, and an ovum with only the mitochondrial DNA from another female.) If one of those three parents is 1/4th Cherokee you could call the child 1/12th Cherokee.</p>
|
1,109,443 | <p>I'm currently studying for an algebra exam and I have some example questions from the last few years. And I can't find a solution to this one:</p>
<blockquote>
<p>Give three examples of complex numbers where z = -z</p>
</blockquote>
<p>The only complex number I can think of is 0. Because it is a complex number, isn't it? Like 0 + 0i.</p>
<p>What two other complex numbers can be given as examples?</p>
<p>Edit: Well, I'm pretty sure it's z = -z. I have only this low-resolution picture, but you can see it in the first task: <a href="https://i.imgur.com/2EuugPZ.jpg" rel="nofollow noreferrer">http://i.imgur.com/2EuugPZ.jpg</a>. Yeah, I know it's all in Polish, but you have to believe me it says to find three examples.</p>
<p>Edit 2: Okay, now I see that it might actually say $\bar{z} = -z$.</p>
| fastmultiplication | 209,236 | <p>In a "continuous inheritance" model, no, because of the answers above.</p>
<p>In a naive chromosomal model, no, because 46 is not divisible by 12.</p>
<p>In a model which includes recombination, yes. Recombination (aka crossing over) between the parent's chromosomes results in a new chromosome type which has parts from one grandparent and parts from the other (usually one section from each). It is frequent and random enough that people by heritage 1/8th Cherokee would actually have a pretty big variation in their amount of genetic relatedness to their one 100% Cherokee g-grandparent. The diagrams here illustrate actual inheritance of one grandparent's genes: <a href="http://www.dnainheritance.kahikatea.net/autosomal.html" rel="nofollow">http://www.dnainheritance.kahikatea.net/autosomal.html</a> - which shows that the variation in relatedness to a particular grandparent is huge.</p>
<p>The way real recombination works is actually pretty complex, though, and not purely random. Females recombine more than males (it's not just 50% of the time - females are ~45% something percent, males ~35%), and there are recombination hotspots which are locations which "prefer" to be the site of a cross somehow. Plus, there is lots of other selection going on even after fertilization. But these don't make it impossible to be 1/12, since they probably aren't monitoring the whole genotype.</p>
<p>On the actual "active gene" level, though, while you can do it, it may not mean much (since naively, in terms of genes only, we are already extremely closely related to all other humans). It would make more sense to look for people who are 1/12 of the way genetically between two groups, i.e. take a Cherokee variant for 1/12 of all the genes which are not fixed in either Cherokee or the target group (being careful to spread them out). If you did the selection in a way which locally (within one chromosome) obeyed a plausible recombination history, this would be indistinguishable from someone who just got lucky to be exactly 1/12 Cherokee.</p>
|
1,350,837 | <p>Find all integer numbers $n$, such that, $$\sqrt{\frac{11n-5}{n+4}}\in \mathbb{N}$$</p>
<p>I really tried, guys, but I couldn't solve it. Help please.</p>
| André Nicolas | 6,312 | <p>If our square root is to be an integer, we need to have $\frac{11n-5}{n+4}$ a non-negative integer. Note that
$$\frac{11n-5}{n+4}=11-\frac{49}{n+4}.$$
So $n+4$ must divide $49$. But $49$ has very few divisors, so there are very few possible integer values of $\frac{49}{n+4}$. Try them all, including the negative ones. For each candidate, check whether the number $11-\frac{49}{n+4}$ is a perfect square. </p>
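The hint can be carried out mechanically (a brute-force Python sketch over the six integer divisors of $49$):

```python
# n + 4 must divide 49, so n + 4 is one of ±1, ±7, ±49.
solutions = []
for d in [1, 7, 49, -1, -7, -49]:
    n = d - 4
    value = 11 - 49 // d          # exact, since d divides 49
    if value >= 0 and round(value ** 0.5) ** 2 == value:
        solutions.append(n)
```

Only $n=3$ survives, where $(11\cdot 3-5)/(3+4)=4$ and the square root is $2$.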
|
1,140,212 | <blockquote>
<p>Prove that for every $n\times n$ matrices $A,B$: $$Tr((AB^2)A)=Tr(A^2B^2)$$</p>
</blockquote>
<p>I need a solution that doesn't use expansion. One more question comes to my mind: given square matrices $A,B$, under which conditions on $A,B$ can we conclude that $Tr(AB)=Tr(BA)$?
Thanks in advance.</p>
| Brian Fitzpatrick | 56,960 | <p>Note that for $A,B\in M_{n\times n}$ we have
\begin{align*}
\DeclareMathOperator{trace}{trace}\trace(AB)
&= \sum_{k=1}^n [AB]_{kk} \\
&= \sum_{k=1}^n \sum_{j=1}^n[A]_{kj}[B]_{jk} \\
&= \sum_{j=1}^n \sum_{k=1}^n [B]_{jk}[A]_{kj} \\
&= \sum_{j=1}^n [BA]_{jj} \\
&= \trace(BA)
\end{align*}</p>
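The identity (and the one from the original question) is easy to spot-check numerically (a pure-Python sketch with arbitrary $2\times 2$ matrices):

```python
def matmul(A, B):
    # Naive square-matrix product.
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

A = [[1, 2], [3, 4]]
B = [[0, 5], [6, 7]]
B2 = matmul(B, B)
```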
|
1,140,212 | <blockquote>
<p>Prove that for every $n\times n$ matrices $A,B$: $$Tr((AB^2)A)=Tr(A^2B^2)$$</p>
</blockquote>
<p>I need a solution that doesn't use expansion. One more question comes to my mind: given square matrices $A,B$, under which conditions on $A,B$ can we conclude that $Tr(AB)=Tr(BA)$?
Thanks in advance.</p>
| Surender Sharma | 450,906 | <p>$${\rm Tr}(AB^2A) = {\rm Tr} \big((AB^2) A \big) = {\rm Tr} \big(A (AB^2) \big) = {\rm Tr}(A^2B^2)$$</p>
<p>using the property ${\rm Tr}(XY) = {\rm Tr}(YX)$.</p>
|
2,618,273 | <p>In integer-base positional numeral systems, the notation of a number in base $n$ uses $n$ numerals. Base 2 uses the symbols 0 and 1, base 10 uses 0123456789, base 16 uses base 10 + ABCDEF. Although the choice of symbols for the numerals is arbitrary, the number of numerals (unique glyphs) is identical to the base. That still holds when the base is negative or even complex. However, <a href="https://en.wikipedia.org/wiki/Non-integer_representation" rel="nofollow noreferrer">non-integer bases</a> exist. I understand how the definition of base $n$ can extend to any real base $b$ (I suppose we need $|b|>1$ for the positional property to hold?). But what determines the number of numerals used in a number system with a non-integer base? Is it $\lfloor b \rfloor$, or can we choose any number of numerals?</p>
<p>For example, imagine I decide I want to use base τ. How many numerals do I use?</p>
| John Bentin | 875 | <p>For base $\tau$, just $\lceil\tau\rceil$ numerals suffice. For example, for $\tau=\phi,\mathrm e,\pi$, correspondingly two, three, and four numerals are required.</p>
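One way to see that $\lceil\tau\rceil$ numerals suffice is the greedy ("beta") expansion. A hedged Python sketch (the helper name and the sample value $x=0.7$ are illustrative, not from the answer) expands a number in base $\pi$ with the four digits $0,\dots,3$ and reconstructs it:

```python
import math

def beta_digits(x, b, k):
    """Greedy expansion of x in [0, 1) in non-integer base b, first k digits."""
    digits = []
    for _ in range(k):
        x *= b
        d = int(x)          # greedy digit; stays in 0 .. ceil(b)-1 while 0 <= x < 1
        digits.append(d)
        x -= d
    return digits

b, x0 = math.pi, 0.7
ds = beta_digits(x0, b, 30)
recon = sum(d / b ** (i + 1) for i, d in enumerate(ds))
```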
|
3,172,693 | <p>Can anybody help me with this equation? I can't find a way to factorize for finding a value of <span class="math-container">$d$</span> as a function of <span class="math-container">$a$</span>:</p>
<p><span class="math-container">$$d^3 - 2\cdot d^2\cdot a^2 + d\cdot a^4 - a^2 = 0$$</span></p>
<p>Another form:</p>
<p><span class="math-container">$$d=\frac{a^2}{(a^2-d)^2}$$</span></p>
<p>Maybe this equation has no solution. I don't know. That equation came out of some calculus involving the golden ratio.</p>
<p>Thanks for your help.</p>
| Count Iblis | 155,436 | <p>One can immediately see why in this case the partial fraction expansion will lead to a nonzero coefficient for the <span class="math-container">$1/s$</span> term. The asymptotic behavior of the fraction for large <span class="math-container">$s$</span> is <span class="math-container">$\sim 1/s^3$</span>. The singularity at <span class="math-container">$s = -2$</span> contributes a term proportional to <span class="math-container">$1/(s+2)$</span> to the partial fraction expansion, which for large <span class="math-container">$s$</span> behaves like <span class="math-container">$\sim 1/s$</span>. This <span class="math-container">$\sim 1/s$</span> must be canceled out by the partial fraction expansion terms coming from the singularity at <span class="math-container">$s = 0$</span>, this requires the presence of a contribution proportional to <span class="math-container">$1/s$</span>. </p>
<p>By making this reasoning more precise we can get to the complete partial fraction expansion using only the contribution from the singularity at <span class="math-container">$s = -2$</span>. The amplitude of the <span class="math-container">$1/(s+2)$</span> term in the partial fraction expansion is given by the factor that multiplies it in the fraction evaluated at <span class="math-container">$s = -2$</span>, this is therefore equal to <span class="math-container">$1/4$</span>. So the contribution to the partial fraction expansion coming from the singularity at <span class="math-container">$s = -2$</span> is:</p>
<p><span class="math-container">$$\frac{1}{4(s+2)}$$</span></p>
<p>For large <span class="math-container">$s$</span> we can expand this in powers of <span class="math-container">$1/s$</span>:</p>
<p><span class="math-container">$$\frac{1}{4(s+2)} = \frac{1}{4 s}\frac{1}{1+\frac{2}{s}} = \frac{1}{4s} - \frac{1}{2 s^2} + \mathcal{O}\left(\frac{1}{s^3}\right)$$</span></p>
<p>The singularity at <span class="math-container">$s = 0$</span> will contribute terms to the partial fraction expansion whose large <span class="math-container">$s$</span> behavior will have to cancel out these first two terms, this means that this contribution to the partial fraction expansion is:</p>
<p><span class="math-container">$$\frac{1}{2 s^2}-\frac{1}{4s} $$</span></p>
<p>The complete partial fraction expansion is thus given by:</p>
<p><span class="math-container">$$\frac{1}{2 s^2}-\frac{1}{4s} + \frac{1}{4(s+2)} $$</span></p>
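Assuming the fraction under discussion is $1/(s^2(s+2))$ (an inference from the stated poles at $s=0$ and $s=-2$, the residue $1/4$, and the $\sim 1/s^3$ decay; the original fraction is not quoted in this excerpt), the expansion can be spot-checked numerically:

```python
# Spot-check at sample points away from the poles s = 0 and s = -2.
def original(s):
    return 1 / (s ** 2 * (s + 2))

def expansion(s):
    return 1 / (2 * s ** 2) - 1 / (4 * s) + 1 / (4 * (s + 2))

samples = [1.0, 3.5, -0.7, 10.0, -5.0]
```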
|
2,529,533 | <p>Let $f:\mathbf{R}^n \to \mathbf{R}$ be differentiable, $\sum_{i=1}^n y_i \frac{\partial f}{\partial x_i}(y)\geq 0$ for all $y=(y_1,...,y_n)\in \mathbf{R}^n$. How do I show that $f$ is bounded from below by $f(0)$?</p>
| Zach Boyd | 60,023 | <p>Hint:</p>
<p>The expression from your question is $r$ times the radial derivative of $f$: writing $y = r\omega$ with $|\omega|=1$, we have $\sum_{i=1}^n y_i \frac{\partial f}{\partial x_i}(y) = r\,\frac{d}{dr}f(r\omega)$. So if that is nonnegative, $f$ is nondecreasing along every ray from the origin, then...</p>
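For a concrete instance of the hint (an illustrative example, not the general proof), take $f(y)=|y|^2$: the radial expression equals $2|y|^2\ge 0$, and indeed $f \ge f(0)$ everywhere.

```python
# f(y) = |y|^2 in R^2; df/dy_i = 2*y_i, so sum_i y_i * df/dy_i = 2*|y|^2 >= 0.
def f(y):
    return sum(t * t for t in y)

def radial(y):
    return sum(t * (2 * t) for t in y)

points = [(0.3, -1.2), (2.0, 5.0), (-0.1, 0.0)]
```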
|
2,515,939 | <p>So, I just need a hint for proving
$$\lim_{n\to \infty} \int_0^1 e^{-nx^2}\, dx = 0$$ </p>
<p>I think maybe the easiest way is to pass the limit inside, because $e^{-nx^2}$ is uniformly convergent on $[0,1]$, but I'm new to that theorem, and have very limited experience with uniform convergence. Furthermore, I don't want to integrate the Taylor expansion, because I'm not familiar with that. So, I want to prove it in a way I'm more familiar with, if possible. So far I've tried: </p>
<ol>
<li><p>Show that $e^{-nx^2}$ is a monotone decreasing sequence with limit $0$. Then use the monotone property of integrals, but I think this argument would just end circularly with passing the limit out of the integration operator. </p></li>
<li><p>Bound $e^{-nx^2}$ by 0 and some other $f(x)$ like $\cos^n x$ or $(1-\frac{x^2}{2})^n$ and then use the squeeze theorem. But the integrals of those functions seem to be a little bit out of my math range to analyze. </p></li>
</ol>
<p>But I have a feeling that there's something much simpler here that I'm missing.</p>
| Dylan | 135,643 | <p>$$ \int_0^1 e^{-nx^2} dx < \int_0^{\infty} e^{-nx^2} dx = \sqrt{\frac{\pi}{4n}} \to 0 $$</p>
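The bound is easy to verify numerically for a few values of $n$ (a Python sketch with a crude midpoint rule; the small slack absorbs quadrature error, since for large $n$ the two sides are nearly equal):

```python
import math

def midpoint(f, a, b, steps=20_000):
    # Composite midpoint rule.
    h = (b - a) / steps
    return sum(f(a + (k + 0.5) * h) for k in range(steps)) * h

def lhs(n):
    return midpoint(lambda x: math.exp(-n * x * x), 0.0, 1.0)

def bound(n):
    return math.sqrt(math.pi / (4 * n))   # the full Gaussian integral
```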
|
1,595,946 | <blockquote>
<p>Let $f:(a,b)\to\mathbb{R}$ be a continuous function such that
$\lim\limits_{x\to a^+}{f(x)}=\lim\limits_{x\to b^-}{f(x)}=-\infty$.
Prove that $f$ has a global maximum.</p>
</blockquote>
<p>Apparently, this is similar to the EVT and I believe the proof would be similar, but I cannot think anything related...</p>
| Wojowu | 127,263 | <p>Let $x_0\in(a,b)$ be arbitrary. From our assumptions, there exist points $a_1,b_1$ satisfying $a<a_1<x_0<b_1<b$ such that $f(x)<f(x_0)$ for $x\in(a,a_1)$ or $x\in(b_1,b)$. Also, by EVT, $f(x)$ has a maximum on interval $[a_1,b_1]$, say that $f(M)$ is this maximum. I claim this is the global maximum of the function.</p>
<p>Suppose otherwise, and say that $f(c)>f(M)$. Clearly $c\not\in[a_1,b_1]$, because $f(M)$ is the maximum on the latter interval. So $c\in(a,a_1)\cup(b_1,b)$. But by the choice of $a_1,b_1$ this means that $f(c)<f(x_0)\leq f(M)$ (the latter inequality holds because $x_0\in[a_1,b_1]$). This is a contradiction, so $f(M)$ is really a global maximum.</p>
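A concrete instance of the statement (an illustrative sketch, not part of the proof): $f(x)=\ln x+\ln(1-x)$ on $(0,1)$ tends to $-\infty$ at both endpoints, and a grid search finds the global maximum at $x=1/2$ with value $-2\ln 2$.

```python
import math

# f -> -infinity at both endpoints of (0, 1); its global max is at x = 1/2.
def f(x):
    return math.log(x) + math.log(1 - x)

xs = [k / 10000 for k in range(1, 10000)]   # grid strictly inside (0, 1)
best = max(xs, key=f)
```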
|
2,329,542 | <p>I looked up Wikipedia but honestly I could not make much sense of what I will basically study in Abstract Algebra or what it is all about.</p>
<p>I also looked up a question here:
<a href="https://math.stackexchange.com/questions/855828/what-is-abstract-algebra-essentially">What is Abstract Algebra essentially?</a></p>
<p>But there are so many definitions and terms that I always get bogged down by them. </p>
<p>It would be helpful to me and maybe others if someone could explain what Abstract Algebra is all about in <em>simple words</em> that one can understand intuitively. </p>
| Evargalo | 443,536 | <p>If I had to explain it to my 6yo daughter, I would tell her that "Abstract Algebra" is about how we invented numbers, and why we invented them that way.</p>
<p>She probably wouldn't understand and tell me to let her play.</p>
|
2,329,542 | <p>I looked up Wikipedia but honestly I could not make much sense of what I will basically study in Abstract Algebra or what it is all about.</p>
<p>I also looked up a question here:
<a href="https://math.stackexchange.com/questions/855828/what-is-abstract-algebra-essentially">What is Abstract Algebra essentially?</a></p>
<p>But there are so many definitions and terms that I always get bogged down by them. </p>
<p>It would be helpful to me and maybe others if someone could explain what Abstract Algebra is all about in <em>simple words</em> that one can understand intuitively. </p>
| user8181651 | 456,229 | <p>You take some kind of set and define some operations on it so that they have nice properties you find interesting. Then you try to learn what happens. The operations, rules and sets are often quite general, so you can apply what you've found to things that are not numbers, like sets of permutations or rotations. It's a pretty cool field, but I think it's impossible to understand what it is about until you study it, and I'm pretty terrible at explaining it. This video is funny and I think it can give some ideas about group theory, which is a part of abstract algebra: <a href="https://youtu.be/FW2Hvs5WaRY" rel="nofollow noreferrer">https://youtu.be/FW2Hvs5WaRY</a></p>
|
3,828,003 | <blockquote>
<p>Show that <span class="math-container">$G=\{0,1,2,3\}$</span> over addition modulo 4 is isomorphic to <span class="math-container">$H=\{1,2,3,4\}$</span> over multiplication modulo 5</p>
</blockquote>
<p>My solution was to brute-force check the validity of <span class="math-container">$f(a+b)=f(a)f(b)$</span> for all <span class="math-container">$a,b\in\{0,1,2,3\}$</span>, where I took <span class="math-container">$f(x)=2^x \bmod 5$</span> (note that the naive guess <span class="math-container">$f(x)=x+1$</span> is not a homomorphism: <span class="math-container">$f(1+1)=3$</span> but <span class="math-container">$f(1)f(1)=4$</span>).
I would like to know if there's a more elegant way?</p>
| Community | -1 | <p>In general, you need to prove the existence of a bijective homomorphism between the two groups.</p>
<p>Up to isomorphism, there is only one cyclic group of each order, <span class="math-container">$\Bbb Z_n$</span>. Here we can use that fact to establish the result.</p>
<p>To wit, <span class="math-container">$\Bbb Z_p^\times$</span> is known to be cyclic of order <span class="math-container">$p-1$</span>.</p>
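Concretely (a sketch; this generator choice is one of several valid ones), $f(x)=2^x \bmod 5$ sends the additive generator $1$ of $\Bbb Z_4$ to the multiplicative generator $2$ of $\Bbb Z_5^\times$, and the homomorphism property can be brute-force checked:

```python
# f(x) = 2^x mod 5: an explicit isomorphism (Z_4, +) -> ({1,2,3,4}, * mod 5).
def f(x):
    return pow(2, x, 5)

homomorphic = all(f((a + b) % 4) == (f(a) * f(b)) % 5
                  for a in range(4) for b in range(4))
bijective = sorted(f(x) for x in range(4)) == [1, 2, 3, 4]
```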
|
43,355 | <p>I recently came across the following formula, which is apparently known as <em>Laplace's summation formula:</em></p>
<p>$$\int_a^b f(x) dx = \sum_{k=a}^{b-1} f(k) + \frac{1}{2} \left(f(b) - f(a)\right) - \frac{1}{12} \left(\Delta f(b) - \Delta f(a)\right) $$
$$+ \frac{1}{24} \left( \Delta^2 f(b) - \Delta^2 f(a) \right) - \frac{19}{720} \left(\Delta^3 f(b) - \Delta^3 f(a) \right) + \cdots$$</p>
<p>(Of course, the right-hand side isn't guaranteed to converge.) The coefficient on the term with $\Delta^{k-1}$ is $\frac{c_k}{k!}$, where $c_k$ is apparently called either a <em>Cauchy number of the first kind</em> or a <em>Bernoulli number of the second kind</em>. </p>
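The formula can be sanity-checked on a polynomial, where the forward differences eventually vanish and the series terminates (a Python sketch with $f(x)=x^2$, $a=0$, $b=N$; the integral should come out as $N^3/3$):

```python
# Forward difference Df(x) = f(x+1) - f(x); for f(x) = x^2 the third and higher
# differences vanish, so the truncated series reproduces the integral exactly.
def f(x):
    return x * x

def delta(g):
    return lambda x: g(x + 1) - g(x)

N = 7
d1, d2 = delta(f), delta(delta(f))
rhs = (sum(f(k) for k in range(N))
       + 0.5 * (f(N) - f(0))
       - (d1(N) - d1(0)) / 12
       + (d2(N) - d2(0)) / 24)
```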
<p>The formula looks to me like a finite calculus version of the <a href="http://en.wikipedia.org/wiki/Euler-Maclaurin_formula" rel="nofollow">Euler-Maclaurin summation formula</a>.</p>
<p>I'm trying to find out more about Laplace's summation formula. However, the usual suspects (the arXiv, Wikipedia, MathWorld, Google) aren't turning up much. There was a little on MathSciNet, the most promising of which was a paper by Merlini, Sprugnoli, and Verri entitled "The Cauchy Numbers" (<em>Discrete Mathematics</em> 306(16): 1906-1920, 2006). The MathSciNet review says, "Application of the Laplace summation formula involving the harmonic numbers [is] also given." I've requested the paper through interlibrary loan, but it has not arrived yet.</p>
<p>While I'm interested in the formula in general, I'm particularly interested in these two questions.</p>
<ol>
<li><p>What applications are there for the Laplace summation formula? (It seems like there ought to be a sufficient number of applications for it to deserve having Laplace's name attached to it. I suppose one could use it for asymptotic analysis, but I'm not sure what the advantage would be over Euler-Maclaurin.)</p></li>
<li><p>What is the error bound on the formula when it is truncated after $n$ terms?</p></li>
</ol>
<p>I wasn't sure how to tag this; feel free to retag.</p>
| Gerry Myerson | 3,684 | <p>Did you try the Online Encyclopedia of Integer Sequences? </p>
<p><a href="http://oeis.org/A006232" rel="nofollow">http://oeis.org/A006232</a> </p>
<p>Perhaps some of the references there will get you where you want to go. </p>
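<p>For what it's worth, the coefficients quoted in the question ($\frac12$, $-\frac1{12}$, $\frac1{24}$, $-\frac{19}{720}$) can be reproduced as $c_k/k!$ from the integral definition $c_k=\int_0^1 x(x-1)\cdots(x-k+1)\,dx$ (our reading of the Cauchy numbers of the first kind, not taken from a reference). A stdlib Python check with exact rational arithmetic:</p>

```python
# Check that c_k / k! gives 1/2, -1/12, 1/24, -19/720, where
# c_k = integral_0^1 x(x-1)...(x-k+1) dx (Cauchy numbers of the first kind,
# under the definition assumed above).
from fractions import Fraction
from math import factorial

def cauchy_number(k):
    # coefficients of the falling factorial x(x-1)...(x-k+1);
    # poly[i] is the coefficient of x^i
    poly = [Fraction(1)]
    for i in range(k):
        shifted = [Fraction(0)] + poly                   # poly * x
        scaled = [-i * c for c in poly] + [Fraction(0)]  # poly * (-i)
        poly = [a + b for a, b in zip(shifted, scaled)]
    # integrate term by term over [0, 1]
    return sum(c / (i + 1) for i, c in enumerate(poly))

coeffs = [cauchy_number(k) / factorial(k) for k in range(1, 5)]
print(coeffs)  # [Fraction(1, 2), Fraction(-1, 12), Fraction(1, 24), Fraction(-19, 720)]
```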
|
43,355 | <p>I recently came across the following formula, which is apparently known as <em>Laplace's summation formula:</em></p>
<p>$$\int_a^b f(x) dx = \sum_{k=a}^{b-1} f(k) + \frac{1}{2} \left(f(b) - f(a)\right) - \frac{1}{12} \left(\Delta f(b) - \Delta f(a)\right) $$
$$+ \frac{1}{24} \left( \Delta^2 f(b) - \Delta^2 f(a) \right) - \frac{19}{720} \left(\Delta^3 f(b) - \Delta^3 f(a) \right) + \cdots$$</p>
<p>(Of course, the right-hand side isn't guaranteed to converge.) The coefficient on the term with $\Delta^{k-1}$ is $\frac{c_k}{k!}$, where $c_k$ is apparently called either a <em>Cauchy number of the first kind</em> or a <em>Bernoulli number of the second kind</em>. </p>
<p>The formula looks to me like a finite calculus version of the <a href="http://en.wikipedia.org/wiki/Euler-Maclaurin_formula" rel="nofollow">Euler-Maclaurin summation formula</a>.</p>
<p>I'm trying to find out more about Laplace's summation formula. However, the usual suspects (the arXiv, Wikipedia, MathWorld, Google) aren't turning up much. There was a little on MathSciNet, the most promising of which was a paper by Merlini, Sprugnoli, and Verri entitled "The Cauchy Numbers" (<em>Discrete Mathematics</em> 306(16): 1906-1920, 2006). The MathSciNet review says, "Application of the Laplace summation formula involving the harmonic numbers [is] also given." I've requested the paper through interlibrary loan, but it has not arrived yet.</p>
<p>While I'm interested in the formula in general, I'm particularly interested in these two questions.</p>
<ol>
<li><p>What applications are there for the Laplace summation formula? (It seems like there ought to be a sufficient number of applications for it to deserve having Laplace's name attached to it. I suppose one could use it for asymptotic analysis, but I'm not sure what the advantage would be over Euler-Maclaurin.)</p></li>
<li><p>What is the error bound on the formula when it is truncated after $n$ terms?</p></li>
</ol>
<p>I wasn't sure how to tag this; feel free to retag.</p>
| Tyler Clark | 10,918 | <p>This is a bit late - I could be completely wrong, but I think the issue here is the domain being used.</p>
<p>Laplace's summation formula should be used on the set of integers and will be used for calculations in discrete calculus. I believe that the Euler-Maclaurin summation formula is typically used on the reals though.</p>
<p>I hope this helps.</p>
|
3,450,283 | <p>I was confronted with the following statement: </p>
<p>Given a ring homomorphism <span class="math-container">$f:A\to B$</span>, with commutative rings with identity <span class="math-container">$A,B$</span>. If <span class="math-container">$A,B$</span> are both subrings of a bigger commutative ring with identity <span class="math-container">$R$</span>, then <span class="math-container">$A$</span> must be a subring of <span class="math-container">$B$</span>.</p>
<p>I have thought about quotient ring <span class="math-container">$B/I\to B$</span>, where <span class="math-container">$I$</span> is an ideal of <span class="math-container">$B$</span>. Then <span class="math-container">$B/I$</span> is not a subring of <span class="math-container">$B$</span>. Can we embed them in a bigger ring? Hope someone could help. Thanks!</p>
<hr>
<p>Thanks to Gae. S., the conclusion above is not true. Now, what if the completion of <span class="math-container">$A$</span> and <span class="math-container">$B$</span> are equal, i.e. <span class="math-container">$\widehat{A}\cong \widehat{B}$</span>. Does the conclusion holds?</p>
| Yuyi Zhang | 602,131 | <p>Generally, a ring homomorphism <span class="math-container">$f:A\to B$</span> within a bigger ring does not let us deduce that <span class="math-container">$A$</span> is a subring of <span class="math-container">$B$</span>. But under a noetherian hypothesis, if <span class="math-container">$A$</span> and <span class="math-container">$B$</span> have the same completion, i.e. <span class="math-container">$\hat{A}=\hat{B}$</span>, we can conclude that <span class="math-container">$A$</span> is a subring of <span class="math-container">$B$</span>.</p>
<p>Let's write down the exact sequence <span class="math-container">$0\to \ker(f)\to A \xrightarrow{f} B\to 0$</span>. Since completion preserves exactness, we have <span class="math-container">$0\to \widehat{\ker(f)}\to \hat{A}\to \hat{B}\to 0$</span>. But <span class="math-container">$\hat{A}=\hat{B}$</span>, hence <span class="math-container">$\widehat{\ker(f)}=0$</span>, i.e. <span class="math-container">$\ker(f)=0$</span>. Thus, <span class="math-container">$f$</span> is injective. So <span class="math-container">$A$</span> can be seen as a subring of <span class="math-container">$B$</span>.</p>
|
441,888 | <p>I should clarify that I'm asking for intuition or informal explanations. I'm starting math and have never taken set theory so far, hence I'm not asking about formal set theory or a hard, abstract answer. </p>
<p>From Gary Chartrand page 216 Mathematical Proofs - </p>
<p>$\begin{align} \text{ range of } f & = \{f(x) : x \in domf\} = \{b : (a, b) \in f \} \\
& =
\{b ∈ B : b \text{ is an image under $f$ of some element of } A\} \end{align}$</p>
<p><a href="http://en.wikipedia.org/wiki/Parity_%28mathematics%29" rel="nofollow noreferrer">Wikipedia</a> - $\begin{align}\quad \{\text{odd numbers}\} & = \{n \in \mathbb{N} \; : \; \exists k \in \mathbb{N} \; : \; n = 2k+1 \} \\
& = \{2n + 1 :n \in \mathbb{Z}\} \end{align}$</p>
<p>But <a href="https://math.stackexchange.com/questions/266718/quotient-group-g-g-identity/266725#266725">Why $G/G = \{gG : g \in G \} \quad ? \quad$ And not $\{g \in G : gG\} ?$</a></p>
<p><strong>EDIT @Hurkyl 10/5.</strong> Lots of detail please.</p>
<p>Question 1. Hurkyl wrote $\{\text{odd numbers}\}$ in two ways.<br>
But can you always rewrite $\color{green}{\{ \, x \in S: P(x) \,\}}$ with $x \in S$ on the right of the colon? How?<br>
$ \{ \, x \in S: P(x) \,\} = \{ \, \color{red}{\text{ What has to go here}} : x \in S \, \} $? Is $ \color{red}{\text{ What has to go here}} $ unique?</p>
<p>Question 2. Axiom of replacement --- Why $\{ f(x) \mid x \in S \}$ ? NOT $\color{green}{\{ \; x \in S \mid f(x) \;
\}}$ ?</p>
<p><strong>@HTFB.</strong> Can you please simplify your answer? I don't know what are ZF, extensionality, Fraenkel's, many-one, class function, Cantor's arithmetic of infinities, and the like. </p>
| Mauro ALLEGRANZA | 108,274 | <p>About <em>Question 1)</em>, basically, the formula :</p>
<blockquote>
<p>$ \{$ odd numbers $\} = \{ n \in \mathbb{N} : \exists \, k \in \mathbb{N} \; (n = 2k+1 ) \; \}$</p>
</blockquote>
<p>is a shorthand for the formal :</p>
<p>$\{ n : n \in \mathbb{N} \quad \land \quad \exists \, k \; ( k \in \mathbb{N} \land n = 2k+1 ) \;\}$.</p>
<p>The syntax of first-order set-theory, requires $\{ x : \phi(x) \}$ </p>
<p>where $\phi(x)$ is a well-formed formula with one free variable (a well-formed formula is an expression built up according to "language specifications").</p>
<p>Due to the fact that $\in$ is part of set-theoretic language, you can use it in $\phi$, so that you can have :</p>
<blockquote>
<p>$\{ x : x \in S \land P(x) \}$.</p>
</blockquote>
<p>Accordingly, I'll rewrite : $\{ f(x) : x \in S \}$ as :</p>
<blockquote>
<p>$\{ \; y : \exists \, x \, \exists \, z \quad (Funct(z) \quad \land \quad x \in S \quad \land \quad <x,y> \in z) \; \}$;</p>
</blockquote>
<p>where $Funct(z)$ is a "complex" expression saying that the set $z$ is a <em>function</em>; again, the formula at the right of the colon has form : $\phi(y)$, with $y$ free.</p>
<p>Of course, in addition to the syntactical aspects, that dictates how to build well-formed formulas, we have the aspects related to "existence" of sets, that depends on the <em>axioms</em>.</p>
<p>In <em>Axiomatic set theory</em> (the form due to Zermelo and Fraenkel, called $\mathsf {ZF}$) you must use <em>Axiom Schema of Separation</em> in order to prove that the previous set exists.</p>
<p>This answer also the question about $\{ x \in S: P(x) \} = \{ ? : x \in S \}$. It must be :</p>
<blockquote>
<p>$\{ x \in S: P(x) \} = \{ x : x \in S \land P(x) \}$.</p>
</blockquote>
<p>Left of the colon we must have a set <em>variable</em>; right of the colon, a "condition" specifying that the resulting set will be the <em>subset</em> of $S$ made by the elements $x$ of $S$ such that $P(x)$ holds.</p>
<p>The formula may also contain a <em>constant</em> symbol (a "name", like : $\emptyset$): for example, we may have :</p>
<blockquote>
<p>$\{ x : x \in S \land x \in \emptyset \}$.</p>
</blockquote>
<p>In this case we "choose" the $x \in S$ that in addition belongs to $\emptyset$; but there are none, so the result will be simply the <em>empty set</em>.</p>
<hr>
<p><em>NOTE</em> about language. In order to understand the formulas above, we need some preliminary notions about first-order language.</p>
<p>We start with <em>symbols</em> : <em>variables</em>: $x$, $y$, ..., <em>predicates</em> : $P$, $Q$, ..., <em>connectives</em> : $\lnot$, $\land$, $\lor$, $\rightarrow$ and the <em>quantifiers</em> $\forall$ and $\exists$. We may add also <em>constants</em>, like $0$ and $1$ in arithmetic and $\emptyset$ in set theory.</p>
<p>A "special" (binary) predicate is $=$ (both in arithmetic and set theory), while the (binary) predicate $\in$ is used in set theory. </p>
<p>They are written usually - mainly due to tradition - in the <em>infix</em> form, i.e. $x \in y$ and $x = y$, instead of the "official" <em>prefix</em> form, i.e. $\in (x,y)$ and $=(x,y)$.<br>
Infix form is more readible for humans; computers "prefer" the prefix one.</p>
<p>Variables and constants are <em>terms</em> : they behave like "nouns".</p>
<p>With predicates and connectives and quantifiers you can build formulas, like $x \in \emptyset$, $0 = 1$.</p>
<p>Roughly speaking, terms have <em>denotation</em> and formula have <em>meaning</em>: but, in order to achieve "meaningfulness", we must follow the rules of formation (the syntax of the language, like the formal specifications for a programming language).</p>
<p>The matter is like in natural language : the phrases "the flower is red" and "the man runs slowly" are "well formed", while the phrase "the man runs redly" is meaningless.</p>
<p>In set theory we have that formulas like : $x \in A$ and $A \subseteq B \cap \emptyset$ are well-formed, i.e. they have meaning.</p>
<p>The expression $x \in \lor A$ is <em>ill-formed</em>, i.e. meaningless.</p>
<p>With quantifiers and connectives you can "build up" complex formulas (starting from "atomic" or elementary ones).</p>
<hr>
<p><em>Examples</em> from set theory. </p>
<p>Set theory add to the "basic" symbols (variables, connectives, quantifiers and equality ($=$)) only one <em>predicate</em> (bynary : $\in$) as primitive : all other symbols "specific" of set theory will be defined. </p>
<p>Please note: also the "name" for the <em>empty set</em> ($\emptyset$) is defined; it is introduced <em>after</em> we have proved that, according to the axioms of our theory, there exists a set that has no members, and that this set is unique.</p>
<p>Atomic formulas : $x \in y$, $x = y$, etc.</p>
<p>From this "austere" ground floor we can build all we need, i.e. complex formulas like : $x \in y \land x = y$, $\lnot x \in y$ (abbreviated as : $x \notin y$), ...</p>
<p>When we write a formula like $\phi(x)$, we usually want to refer to an (atomic or) complex formula with one free variable, like $x \in \emptyset$. But we can also write : $x \in \emptyset \land x \notin x$. This last formula has the "form" $x \in S \land P(x)$ (where "incidentally" $P(x)$ is the predicate of the Russell's paradox).</p>
<p>Our first examples are of this "form": the set of even numbers is the set of all $x$ such that $x \in \mathbb{N} \land \exists y (x=2 \times y)$; here $\exists y (x=2 \times y)$ is a formula with the free variable $x$, like $P(x)$.</p>
<p><em>Note</em> we have implicitly "added" to set language also symbols for arithmetic, like :$\mathbb{N}$, $+$, $0$, $\times$. Please, assume for the sake of discussion that it is admissible.</p>
<hr>
<p><em>NOTE</em> on functions in set theory.</p>
<p><em>Functions</em> in set theory are a particular type of sets (in set theory - "obviously" - all is a set).</p>
<p>We need the concept of <em>ordered couple</em> $(a,b)$, which is different from $\{ a, b \}$ (because $\{ a, b \} = \{ b, a \}$, i.e. the order is immaterial, whereas the <em>ordered couple</em> is ... ordered); $a$ is the first element of the couple and $b$ is the second.</p>
<blockquote>
<p>A <em>function</em> in set theory is a set of ordered couples, </p>
</blockquote>
<p>provided that it satisfies the "basic rule" for functions, i.e. that if $f(x)=y_1$ and $f(x) = y_2$, then $y_1=y_2$. </p>
<p>This rule will be rewritten in set language as : a set $f$ of ordered couples is a <em>function</em> when :</p>
<blockquote>
<p>$\forall x \forall y_1 \forall y_2 ( \quad <x,y_1> \in f \quad \land <x,y_2> \in f \quad \rightarrow y_1 = y_2 \quad )$. </p>
</blockquote>
<p>We will write the function $f : \mathbb{N} \rightarrow \mathbb{N}$ defined as $f(x) = 2 \times x$ as :</p>
<blockquote>
<p>$\{ \quad <x,y> \quad : \quad x,y \in \mathbb{N} \land y = 2 \times x \quad \}$.</p>
</blockquote>
<p>This "object" in set theory (it is a set !) behaves like "usual" mathematical functions.</p>
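<p>As a finite illustration (ordinary Python sets, not formal set theory), the two set-builder shapes discussed above correspond to the two kinds of set comprehension:</p>

```python
# A finite illustration with Python sets:
S = set(range(20))

# { x in S : P(x) }  -- Separation: filter S by a predicate
odds_by_separation = {n for n in S if n % 2 == 1}

# { f(x) : x in S' }  -- Replacement: the image of a set under a function
odds_by_replacement = {2 * k + 1 for k in range(10)}

assert odds_by_separation == odds_by_replacement
print(sorted(odds_by_separation))  # [1, 3, 5, 7, 9, 11, 13, 15, 17, 19]
```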
|
112,096 | <p>Does the inequality $2 \langle x , y \rangle \leqslant \langle x , x \rangle + \langle y , y \rangle $, where $$ \langle \cdot, \cdot \rangle $$ denotes scalar product, have a name? </p>
<p>I've tried looking at several inequalities on wikipedia but I didn't find this one. And of course googling doesn't work for this purpose.</p>
| Harald Hanche-Olsen | 23,290 | <p>It is essentially <a href="http://en.wikipedia.org/wiki/Young%27s_inequality" rel="nofollow">Young's inequality</a>.</p>
|
636,391 | <p>Evaluate the following indefinite integral.</p>
<p>$$\int { \frac { x }{ 4+{ x }^{ 4 } } }\,dx$$</p>
<p>In my homework hints, it says let $ u = x^2 $. But still i can't continue.</p>
| Berci | 41,488 | <p><strong>Hint:</strong> If $u=x^2$ then $x=\sqrt u$ and $du=2x\,dx$.</p>
|
636,391 | <p>Evaluate the following indefinite integral.</p>
<p>$$\int { \frac { x }{ 4+{ x }^{ 4 } } }\,dx$$</p>
<p>In my homework hints, it says let $ u = x^2 $. But still i can't continue.</p>
| Dan | 79,007 | <p>Hint: You've substituted $u = x^2$ and found that your original integral becomes</p>
<p>$$
\int\frac{\sqrt u}{4+u^2} \frac{du}{2x},
$$</p>
<p>but you haven't completed the substitution; there's still an $x$ in your integrand. How can you rewrite the $2x$ below the $du$ as a function of $u$? Once you rewrite $2x$ in terms of $u$, you should be able to algebraically simplify further.</p>
<p>Hint 2: You now have it in terms of $u$. Good! Do you see any way to simplify the integral? It may help to rewrite it as</p>
<p>$$
\int\frac{\sqrt u}{2\sqrt{u}(4+u^2)}du.
$$</p>
|
636,391 | <p>Evaluate the following indefinite integral.</p>
<p>$$\int { \frac { x }{ 4+{ x }^{ 4 } } }\,dx$$</p>
<p>In my homework hints, it says let $ u = x^2 $. But still i can't continue.</p>
| Felix Marin | 85,343 | <p>$\newcommand{\+}{^{\dagger}}%
\newcommand{\angles}[1]{\left\langle #1 \right\rangle}%
\newcommand{\braces}[1]{\left\lbrace #1 \right\rbrace}%
\newcommand{\bracks}[1]{\left\lbrack #1 \right\rbrack}%
\newcommand{\ceil}[1]{\,\left\lceil #1 \right\rceil\,}%
\newcommand{\dd}{{\rm d}}%
\newcommand{\down}{\downarrow}%
\newcommand{\ds}[1]{\displaystyle{#1}}%
\newcommand{\equalby}[1]{{#1 \atop {= \atop \vphantom{\huge A}}}}%
\newcommand{\expo}[1]{\,{\rm e}^{#1}\,}%
\newcommand{\fermi}{\,{\rm f}}%
\newcommand{\floor}[1]{\,\left\lfloor #1 \right\rfloor\,}%
\newcommand{\half}{{1 \over 2}}%
\newcommand{\ic}{{\rm i}}%
\newcommand{\iff}{\Longleftrightarrow}
\newcommand{\imp}{\Longrightarrow}%
\newcommand{\isdiv}{\,\left.\right\vert\,}%
\newcommand{\ket}[1]{\left\vert #1\right\rangle}%
\newcommand{\ol}[1]{\overline{#1}}%
\newcommand{\pars}[1]{\left( #1 \right)}%
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\pp}{{\cal P}}%
\newcommand{\root}[2][]{\,\sqrt[#1]{\,#2\,}\,}%
\newcommand{\sech}{\,{\rm sech}}%
\newcommand{\sgn}{\,{\rm sgn}}%
\newcommand{\totald}[3][]{\frac{{\rm d}^{#1} #2}{{\rm d} #3^{#1}}}
\newcommand{\ul}[1]{\underline{#1}}%
\newcommand{\verts}[1]{\left\vert\, #1 \,\right\vert}$
\begin{align}
\color{#0000ff}{\large\int{x\,\dd x \over 4 + x^{4}}}&=
\int x\pars{{1 \over x^{2} - 2\ic} - {1 \over x^{2} + 2\ic}}\,{1 \over 4\ic}\,\dd x
=
\half\,\Im\int{x\,\dd x \over x^{2} - 2\ic}
\\[3mm]&={1 \over 4}\,\Im\ln\pars{x^{2} - 2\ic}=
{1 \over 4}\,\arctan\pars{-2 \over \phantom{-}x^{2}}= \color{#0000ff}{\large -\,{1 \over 4}\,\arctan\pars{2 \over x^{2}}}
+ \mbox{"a constant"}
\\[3mm]&= \color{#0000ff}{\large {1 \over 4}\,\arctan\pars{x^{2} \over 2}}
+ \mbox{"some constant"}
\end{align}</p>
<blockquote>
Let's check it:
\begin{align}
\totald{}{x}\bracks{-\,{1 \over 4}\,\arctan\pars{2 \over x^{2}}}
&=
-\,{1 \over 4}\,
{1 \over \pars{2/x^{2}}^{2} + 1}\,\bracks{2\,\pars{-\,{2 \over x^{3}}}}
=
-\,{1 \over 4}\,
{x^{4} \over 4 + x^{4}}\,\pars{-4 \over \phantom{-}x^{3}}
\\[3mm]&={x \over 4 + x^{4}}
\end{align}
</blockquote>
|
685,567 | <p>For the function, $f(x,y,z)=\sqrt{x^2+y^2+z^2}$, do directional derivatives exist at the origin? If I use the definition $$lim_{h\to 0}\frac{f(x+hv)-f(x)}{h},$$ then I get $$\frac{|h|}{h}$$ which is without limit. But in some places, I keep reading that the directional derivative is 1. </p>
<p>Also, if I were to write the function in spherical coordinates, it would be simply $f(\rho,\theta,\varphi)=\rho$, which is differentiable in $\rho$, irrespective of the angles. </p>
<p>What is the source of this ambiguity and is there a convention for this? </p>
| Live Free or π Hard | 126,067 | <p>You are not correct. The sum of <em>n</em> Poisson random variables, with parameter <em>k</em>, say, is also a Poisson random variable with parameter <em>nk</em>, <strong>if the random variables are independent</strong>. That is, if X_<em>i</em> are independent Poisson random variables with parameter k_i then their sum is also a Poisson random variable with its parameter being the sum of all the k_i's. </p>
|
664,349 | <blockquote>
<p>If $G$ is a finite group where every non-identity element is generator of $G$, what is the order of $G$?</p>
</blockquote>
<p>I know that the order of $G$ must be prime, but I'm not sure how to go about proving this from the problem statement. </p>
<p>Any hints on where to start?</p>
| Nicky Hekster | 9,605 | <p>Using Cauchy's Theorem: let $p$ be a prime dividing $|G|$. Then $G$ has an element $g$ of order $p$. Apparently this element $g$ generates $G$. Hence $|G|=p$.</p>
|
697,336 | <p>Let $ABCD$ be a trapezoid, such that $AB$ is parallel to $CD$. Through $O$, the intersection point of the diagonals $AC$ and $BD$ consider a parallel line to the bases. This line meets $AD$ at $M$ and $BC$ at $N$. </p>
<p>Prove that $OM=ON$ and: $$\frac{2}{MN}=\frac1{AB}+\frac1{CD}$$</p>
| André Nicolas | 6,312 | <p><strong>First problem:</strong> We use an area argument. Draw the trapezoid, with $A,B,C,D$ going counterclockwise, and $AB$ a horizontal line at the "bottom." (We are doing this so we will both be looking at the same picture.)</p>
<p>Note that $\triangle ABC$ and $\triangle ABD$ have the same area. (Same base $AB$, same height, the height $h$ of the trapezoid.)</p>
<p>These two triangles have $\triangle ABO$ in common. It follows that $\triangle OBC$ and $\triangle OAD$ have the same area.</p>
<p>Let $h_1$ be the perpendicular distance from $AB$ to $MN$, and $h_2$ the perpendicular distance from $MN$ to $DC$. </p>
<p>The area of $\triangle OBC$ is $\frac{1}{2}(ON)(h_1+h_2)$. This is because it can be decomposed into $\triangle OBN$ plus $\triangle ONC$. These have bases $ON$, and heights $h_1$ and $h_2$.</p>
<p>Similarly, $\triangle OAD$ has area $\frac{1}{2}(OM)(h_1+h_2)$.</p>
<p>By cancellation, $ON=OM$. </p>
<p><strong>Second problem:</strong> Here we will use similar triangles. Let $h$, $h_1$, and $h_2$ be as in the first problem. By using the first problem, and the fact that triangles $ACD$ and $AOM$ are similar, we get
$$\frac{CD}{MN/2}=\frac{h}{h_1}.$$
A similar argument shows that
$$\frac{AB}{MN/2}=\frac{h}{h_2}.$$
Invert. We get
$$\frac{MN/2}{CD}=\frac{h_1}{h}\quad\text{and}\quad \frac{MN/2}{AB}=\frac{h_2}{h}.$$
Add, and use the fact that $h_1+h_2=h$. We get
$$\frac{MN/2}{CD}+\frac{MN/2}{AB}=1.$$
This yields
$$\frac{MN}{2}=\frac{(AB)(CD)}{AB+CD}.$$
Invert both sides. We get the desired result. </p>
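<p>Both identities can be sanity-checked on a concrete trapezoid with exact rational coordinates (Python; the vertices below are an arbitrary choice with $AB=4$ parallel to $CD=2$):</p>

```python
# Coordinate check of OM = ON and 2/MN = 1/AB + 1/CD on one trapezoid.
from fractions import Fraction as Fr

A, B, C, D = (Fr(0), Fr(0)), (Fr(4), Fr(0)), (Fr(3), Fr(2)), (Fr(1), Fr(2))

def intersect(p, q, r, s):
    # intersection of lines p->q and r->s (assumes they are not parallel)
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p, q, r, s
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / denom
    return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

O = intersect(A, C, B, D)
# the horizontal line through O meets AD at M and BC at N
y = O[1]
M = intersect(A, D, (Fr(0), y), (Fr(1), y))
N = intersect(B, C, (Fr(0), y), (Fr(1), y))

OM, ON, MN = O[0] - M[0], N[0] - O[0], N[0] - M[0]
AB, CD = B[0] - A[0], C[0] - D[0]
assert OM == ON
assert Fr(2) / MN == Fr(1) / AB + Fr(1) / CD
print("OM = ON =", OM, " and 2/MN =", Fr(2) / MN)
```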
|
418,748 | <p>I tried to calculate, but couldn't get out of this:
$$\lim_{x\to1}\frac{x^2+5}{x^2 (\sqrt{x^2 +3}+2)-\sqrt{x^2 +3}}$$</p>
<p>then multiply by the conjugate.</p>
<p>$$\lim_{x\to1}\frac{\sqrt{x^2 +3}-2}{x^2 -1}$$ </p>
<p>Thanks!</p>
| Sujaan Kunalan | 77,862 | <p>Use L'Hospital's Rule. Since plugging in $x=1$, gives you indeterminate form, take the derivative of the numerator and the derivative of the denominator, and try the limit again.</p>
<p>$\lim_{x\to 1}\frac{(x^2+3)^{\frac{1}{2}}-2}{x^2-1}\implies$ (Via L
Hospital's Rule...) $\lim_{x\to 1}\frac{\frac{1}{2}(x^2+3)^{-\frac{1}{2}}(2x)}{2x}=\frac{\frac{1}{2}(1^2+3)^{-\frac{1}{2}}(2(1))}{2(1)}=\frac{\frac{1}{2}(4)^{-\frac{1}{2}}(2)}{2}=\frac{1}{2}(4)^{-\frac{1}{2}}=(\frac{1}{2})(\frac{1}{4})^{-\frac{1}{2}}=(\frac{1}{2})(\frac{1}{2})=\frac{1}{4}$</p>
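<p>A quick numeric check (Python) that values near $x=1$ approach $\frac14$:</p>

```python
# Evaluate the expression near x = 1; the values should approach 1/4.
from math import sqrt

def g(x):
    return (sqrt(x * x + 3) - 2) / (x * x - 1)

for x in [1.1, 1.01, 1.001, 1.0001]:
    print(x, g(x))

assert abs(g(1.0001) - 0.25) < 1e-3
```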
|
1,390,676 | <p>A quasigroup is a pair $(Q,/)$, where $/$ is a binary operation on $Q$, such that (1) for each $a,b\in Q$ there exist unique solutions to the equations
$a/x=b$ and $y/a=b$.</p>
<p>Now I want to extract a class of quasigroups that captures characteristics from $(Q_+,/)$, where $Q_+$ is the set of positive rational numbers and $/$ is division. So far I have chosen the three properties below and want to know if they are independent or if any of them can be derived from the other two plus (1):</p>
<ol>
<li>$a/(b/c)=c/(b/a)$, for all $a,b,c\in Q$</li>
<li>$(a/b)/c=(a/c)/b$, for all $a,b,c\in Q$</li>
<li>$(a/b)/(c/d)=(d/c)/(b/a)$, for all $a,b,c,d\in Q$</li>
</ol>
| Michael Kinyon | 444,012 | <p>Quasigroups constructed from the right division operation in a group (<span class="math-container">$x/y = xy^{-1}$</span>) are called <em>Ward quasigroups</em>. They are characterized by the identity <span class="math-container">$(x/y)/(z/y)=x/z$</span>, that is, if <span class="math-container">$(Q,/)$</span> is a quasigroup satisfying this identity, then there exists a group structure <span class="math-container">$(Q,\cdot)$</span> such that its right division operation is <span class="math-container">$/$</span>.</p>
<p>The idea has been rediscovered several times. A good reference, including citations to older work, is</p>
<p>K.W. Johnson and P. Vojtěchovský, Right division in groups, Dedekind-Frobenius group matrices, and Ward quasigroups, <em>Abh. Math. Semin. Univ. Hambg.</em> <strong>75</strong> (2005), 121-136. <a href="https://doi.org/10.1007/BF02942039" rel="nofollow noreferrer">https://doi.org/10.1007/BF02942039</a></p>
|
1,689,923 | <p>I have a sequence $a_{n} = \binom{2n}{n}$ and I need to check whether this sequence converges to a limit without finding the limit itself. Now I tried to calculate $a_{n+1}$ but it doesn't get me anywhere. I think I can show somehow that $a_{n}$ is always increasing and that it has no upper bound, but I'm not sure if that's the right way</p>
| S.C.B. | 310,930 | <p>Note that if $n \ge 1$ then $$\frac{a_{n+1}}{a_{n}}=\frac{(2n+1)(2n+2)}{(n+1)(n+1)}=2\frac{(2n+1)}{n+1} >2$$ </p>
<p>Hence $a_{n+1} > 2a_n$, so $a_n \ge 2^n$ for $n \ge 1$: the sequence is unbounded and therefore diverges. </p>
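<p>A short numeric confirmation of the ratio computed above (Python):</p>

```python
# Check a_{n+1}/a_n = 2(2n+1)/(n+1) > 2 for the central binomial coefficients.
from math import comb

for n in range(1, 20):
    ratio = comb(2 * (n + 1), n + 1) / comb(2 * n, n)
    assert abs(ratio - 2 * (2 * n + 1) / (n + 1)) < 1e-9
    assert ratio > 2
print("ratio a_{n+1}/a_n exceeds 2 for n = 1..19")
```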
|
678,768 | <p>"Let $A$, $B$ be two infinite sets. Suppose that $f: A \to B$ is injective. Show that there exists a surjective map $g: B \to A$"</p>
<p>I am not sure how to go about this proof; I am trying to gather information to help me, and deduce as much as I can:</p>
<ul>
<li>Since $f$ is injective we know that $|A| \leq |B|$.</li>
<li>If $g$ is surjective then $|A| \geq |B|$.</li>
<li>Thus $|A| = |B|$ (for this to hold).</li>
<li>This suggests that we cannot have one countable and one uncountable set.</li>
</ul>
<p>I would start by assuming by contradiction that no such surjective map exists. I am thinking perhaps by removing the elements $b$ from $B$ that are not mapped to from $A$ we would have an injective map $g: A \to B\setminus\{b\}$. However this is as far as I have got; can someone tell me if I am on the right track and if so where to go from here, or if I have got the completely wrong train of thought? Thanks</p>
| Asaf Karagila | 622 | <p>You're going at it <strong>all</strong> wrong.</p>
<p>Consider the case $f\colon\Bbb N\to\Bbb R$ defined by $f(n)=n$. That is an injective function, but certainly not surjective. Can you think about a surjection from $\Bbb R$ onto $\Bbb N$? For example, $g(x)=\begin{cases} x&x\in\Bbb N\\ 0&x\notin\Bbb N\end{cases}$ is a surjection.</p>
<p>More generally, note that if $f\colon A\to B$ is injective, then $\{(f(a),a)\mid a\in A\}$ is a function from <strong>a subset of $B$</strong> into $A$. What are the properties of this function, and how do you complete it to a function whose domain is $B$ itself -- that I am leaving to you to figure out.</p>
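<p>For a finite toy example (the infinite case in the question works the same way, modulo the fixed "default" element), the construction hinted at above looks like this in Python:</p>

```python
# Given an injective f : A -> B, invert it on the image of f and send
# everything outside the image to one fixed element of A.
A = {0, 1, 2}
B = {'a', 'b', 'c', 'd', 'e'}
f = {0: 'a', 1: 'c', 2: 'e'}             # an injective map A -> B
assert len(set(f.values())) == len(A)    # injectivity

inverse_on_image = {b: a for a, b in f.items()}
default = 0                              # any fixed element of A
g = {b: inverse_on_image.get(b, default) for b in B}

assert set(g.values()) == A              # g : B -> A is surjective
assert all(g[f[a]] == a for a in A)      # g is a left inverse of f
print(g)
```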
|
4,489,675 | <p>When saying that in a small time interval <span class="math-container">$dt$</span>, the velocity has changed by <span class="math-container">$d\vec v$</span>, and so the acceleration <span class="math-container">$\vec a$</span> is <span class="math-container">$d\vec v/dt$</span>, are we not assuming that <span class="math-container">$\vec a$</span> is constant in that small interval <span class="math-container">$dt$</span>, otherwise considering a change in acceleration <span class="math-container">$d\vec a$</span>, the expression should have been <span class="math-container">$\vec a = \frac{d\vec v}{dt} - \frac{d\vec a}{2}$</span> (Again assuming rate of change of acceleration is constant). According to that argument, I can say that <span class="math-container">$\vec v$</span> is also constant in that time interval and so <span class="math-container">$\vec a = \vec 0$</span>.</p>
<p>Can someone point out where exactly I have gone wrong. Also this was just an example, my question is general.</p>
| WillO | 29,145 | <p>You wrote that you haven't studied calculus.</p>
<p>Okay, then. Do not think of <span class="math-container">$dt$</span> and <span class="math-container">$dv$</span> as numbers. Instead, think of the whole expression <span class="math-container">$dv/dt=a$</span> as an abbreviation for "if <span class="math-container">$\Delta t$</span> is a very short time interval during which velocity changes by <span class="math-container">$\Delta v$</span>, then at any time in that interval, <span class="math-container">$\Delta v/\Delta t$</span> is very close to <span class="math-container">$a$</span>." Now you might worry about making precise sense out of things like "very small" and "very close". This is exactly what a good calculus course will teach you.</p>
|
119,589 | <p>I have been given an $m\times n$ matrix $M$. I have to show that the matrix $M^TM$ is symmetric positive definite if and only if the columns of the matrix $M$ are linearly independent.</p>
<p>My thoughts are as below: $M^TM$ looks like a Cholesky-type property, and we know Cholesky is applicable when the matrix is SPD; still, I am not able to connect how and which specific property to use to show the "if and only if the columns of $M$ are linearly independent" part... </p>
<p>Any hint as to how I should approach this...</p>
| shuhalo | 3,557 | <p>It is obvious that $M^T M$ is symmetric.</p>
<p>Suppose the matrix $M^T M$ is SPD but $M$ does not have linearly independent columns. Then $M$ is not injective: there is some $x \neq 0$ with $Mx = 0$, hence $x^T M^T M x = 0$, contradicting positive definiteness.</p>
<p>Suppose $M$ has linearly independent columns; then $M^T$ has linearly independent rows. For any $x$ let $y = Mx$; then $x^T M^T M x = y^T y$. This product is zero if and only if $y$ is zero, and since the columns of $M$ are independent, $y = Mx = 0$ forces $x = 0$. Whence $x^T M^T M x = 0$ if and only if $x$ is zero, and the matrix is symmetric positive definite.</p>
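<p>A small numeric illustration of both directions (plain Python, $3\times2$ matrices; positive definiteness of the $2\times2$ Gram matrix is tested with Sylvester's criterion):</p>

```python
# For a 3x2 matrix M, form G = M^T M and test positive definiteness of the
# 2x2 result with Sylvester's criterion (G[0][0] > 0 and det G > 0).
def gram(M):
    cols = list(zip(*M))
    return [[sum(a * b for a, b in zip(u, v)) for v in cols] for u in cols]

def is_spd_2x2(G):
    det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
    return G[0][1] == G[1][0] and G[0][0] > 0 and det > 0

M_indep = [[1, 0], [0, 1], [1, 2]]   # independent columns
M_dep = [[1, 2], [2, 4], [3, 6]]     # second column = 2 * first column

assert is_spd_2x2(gram(M_indep))
assert not is_spd_2x2(gram(M_dep))   # the Gram determinant is 0 here
print("independent columns -> SPD; dependent columns -> not SPD")
```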
|
119,589 | <p>I have been given an $m\times n$ matrix $M$. I have to show that the matrix $M^TM$ is symmetric positive definite if and only if the columns of the matrix $M$ are linearly independent.</p>
<p>My thoughts are as below: $M^TM$ looks like a Cholesky-type property, and we know Cholesky is applicable when the matrix is SPD; still, I am not able to connect how and which specific property to use to show the "if and only if the columns of $M$ are linearly independent" part... </p>
<p>Any hint as to how I should approach this...</p>
| Inquest | 35,001 | <p><em>If and only if</em> proofs go two ways.
Since this is homework, I'll throw in some hints:</p>
<p>Given that columns are independent, prove $M^TM$ is symmetric positive definite.</p>
<p>For any matrix (Singular or not), $M^TM$ is always symmetric. (Try transposing it to see what you get).</p>
<p>Now, if the columns are independent, what can be said about the eigenvalues of $M$? </p>
<p>So, $Mx=\lambda x$; premultiply both sides with $M^T$,</p>
<p>$M^TMx=M^T\lambda x = \lambda (M^Tx) = ?? $.</p>
<hr>
<p>Given that $M^TM$ is SPD, prove columns are independent. </p>
<p>If $M^TM$ is SPD, then again the symmetric part is fairly useless; the useful part is the PD. Let $x$ be an eigenvector of $M$.</p>
<p>$M^TMx = M^T\lambda x = \lambda M^Tx = \lambda^2x$</p>
<p>So now, if $M$ were singular, there would be a $\lambda=0$ but $x \neq 0$ such that $M^TMx=0$.</p>
<p>How does this violate the our assumption of SPD ?</p>
|
2,135,717 | <p>Let $G$ be an Abelian group of order $mn$ where $\gcd(m,n)=1$. </p>
<p>Assume that $G$ contains an element of $a$ of order $m$ and an element $b$ of order $n$. </p>
<p>Prove $G$ is cyclic with generator $ab$.</p>
<hr>
<p>The idea is that $(ab)^k$ for $k \in [0, \dots , mn-1]$ will make distinct elements but do not know how to argue it. </p>
<p>Could I say something like $\langle a\rangle=A$, $\langle b\rangle=B$, somehow $AB=\{ ab : a \in A , b \in B \}$ and that has order $|A||B|=mn$?</p>
<p>Don't know if it's the same exact or similar to <a href="https://math.stackexchange.com/questions/1870217/finite-group-of-order-mn-with-m-n-coprime">Finite group of order $mn$ with $m,n$ coprime</a>.</p>
| Maxime Ramzi | 408,637 | <p>Let $k\in \mathbb{N}$ and assume $(ab)^k = e$. Then, since $G$ is abelian, $a^k = b^{-k}$. Raising this to the power $n$, we get $a^{nk} = e$, so $m| nk$. But $m,n$ are coprime, so by Gauss's theorem, $m|k$. With a similar argument, $n|k$, and since again $m,n$ are coprime, we get $nm |k$. Conversely, we obviously have $(ab)^{nm}= e$ (again, it's necessary to assume $G$ to be abelian here). So $ab$ is of order $nm$, which gives $G= \langle ab \rangle$.</p>
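<p>A concrete check in the abelian group $(\mathbb Z_6,+)$, where $a=3$ has order $m=2$, $b=2$ has order $n=3$, and $\gcd(2,3)=1$ (Python; "ab" is $a+b$ in additive notation):</p>

```python
# In (Z_6, +): a = 3 has order 2, b = 2 has order 3, and a + b = 5
# generates the whole group, matching the statement of the problem.
def order(g, n):
    # order of g in the additive group Z_n
    k, x = 1, g % n
    while x != 0:
        x = (x + g) % n
        k += 1
    return k

n = 6
a, b = 3, 2
assert order(a, n) == 2 and order(b, n) == 3
assert order((a + b) % n, n) == n   # "ab" has order mn = 6
assert {(k * ((a + b) % n)) % n for k in range(n)} == set(range(n))
print("a + b = 5 generates Z_6")
```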
|
1,199,912 | <p>What is the optimal (i.e., smallest) constant $\alpha$ such that, given 19 points on a solid, regular hexagon with side 1, there will always be 2 points with distance at most $\alpha$?</p>
<p>This is a reformulation of an <a href="https://math.stackexchange.com/questions/1196787/pigeonhole-problem-about-distance-between-distinct-points-on-a-hexagon#comment2436030_1196787">interesting question</a> that was mercilessly downvoted.</p>
<p>I can show the bounds $1/2 \leq \alpha\leq 1/\sqrt{3}$. </p>
<p>To show the upper bound, divide the hexagon into 6 regular triangles with sides 1 and note that one of them must contain 4 points.</p>
<p>To show the lower bound, divide the hexagon into 24 regular triangles with sides 1/2 and draw a point at each of the 19 corners. </p>
<p><strong>Addition.</strong> Here's an <em>idea:</em> Let $\Omega$ be a subset of the plane obtained by gluing together regular, side-1 triangles side-to-side. Let $n$ be the total number of corners. Then the <em>only</em> way of placing $n$ or $n-1$ points on $\Omega$ such that no 2 points are closer than 1 is by placing the points at the corners. Proof: Induction on the number of triangles.</p>
| Jack D'Aurizio | 44,121 | <p>$\alpha=\frac{1}{2}$ is the optimal bound. We just need to prove that for every $\varepsilon>0$ we can take $19$ points in the hexagon such that the distance between any two of them is $\geq\frac{1}{2}-\varepsilon$. Easily done: we place $19$ points in a regular hexagon with side length $1-\delta$ according to the given construction, then apply a homothety bringing the $(1-\delta)$-hexagon into a unit hexagon. Any two "scaled" points inside the unit hexagon will be separated by a distance $\geq \frac{1}{2(1-\delta)}$, so $\alpha\geq\frac{1}{2}$ follows from taking: $$\delta = \frac{2\varepsilon}{1+2\varepsilon}.$$
To prove $\alpha\leq\frac{1}{2}$, we split the original unit hexagon in $18$ congruent cyclic quadrilaterals (boundaries in red):
<img src="https://i.stack.imgur.com/FL7RS.png" alt="enter image description here">
Now, if eight or more points fall in the innermost hexagon, we are ok since we can cover the innermost red hexagon with seven polygons having diameter $\frac{1}{2}$. This gives that we can assume that there are at most seven points in the innermost hexagon, then at least twelve points in the outermost annulus. Since the perimeter of the external boundary of the annulus is six, even in this case there are $2$ points at most $\frac{1}{2}$-apart.</p>
<hr>
<p>A simple measure-theoretic argument provides a little weaker bound. Assuming that we can place $21$ points inside a unit hexagon in such a way that the distance between any two of them is $\geq\frac{1}{2}$, then we can fit $21$ circles with radius $\frac{1}{4}$, without any overlapping, inside a regular hexagon with side length $1+\frac{1}{4}$. However, that gives a contradiction since:
$$ 21\cdot\frac{\pi}{16}>\frac{3\sqrt{3}}{2}\left(1+\frac{1}{4}\right)^2, $$
so in the unit hexagon we can place at most $20$ points in such a way that the distance between any two of them is $\geq\frac{1}{2}$. I hope someone is able to improve this argument to show that to place $20$ circles is also impossible.</p>
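<p>(The numeric inequality used above is easy to confirm: $21\cdot\frac{\pi}{16}\approx 4.12$ while $\frac{3\sqrt3}{2}\left(\frac54\right)^2\approx 4.06$. A one-line sketch:)</p>

```python
import math

# Total area of 21 radius-1/4 circles vs. area of a regular hexagon of
# side 5/4: the circles' total area is larger, so 21 non-overlapping
# circles cannot fit inside the hexagon.
circles = 21 * math.pi / 16
hexagon = (3 * math.sqrt(3) / 2) * (1 + 1 / 4) ** 2
print(circles > hexagon)  # True
```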
|
3,717,144 | <p>Suppose M is a finitely generated non-zero R-module, where R is a commutative unital ring. Show that the tensor product of M with itself is non-zero.</p>
<p>I know one way to show this is to find an R-bilinear map which is nonzero, but am not sure how to find it.</p>
| GreginGre | 447,764 | <p>This is false. Let <span class="math-container">$R=\mathbb{C}[X]/(X^2)$</span>. Let <span class="math-container">$x=\bar{X}$</span>, so <span class="math-container">$x^2=0$</span>, and let <span class="math-container">$M= R x$</span>. Then <span class="math-container">$M$</span> is a finitely generated nonzero <span class="math-container">$R$</span>-module, but <span class="math-container">$M\otimes M$</span> is generated by <span class="math-container">$x\otimes x=x^2\otimes 1=0\otimes 1=0$</span>.</p>
<p>More generally, if <span class="math-container">$R$</span> is a commutative ring having a nonzero ideal <span class="math-container">$I$</span> satisfying <span class="math-container">$I^2=0$</span>, then <span class="math-container">$I\otimes_R I=0$</span> and you get a counterexample.</p>
|
9,513 | <p>I'm a very novice user of <em>Mathematica</em> - is there possibility of exporting Mathematica code directly into $\LaTeX$? I'm interested only in exporting mathematical formulas. Also, from which version of program is it possible?</p>
| cormullion | 61 | <p>The Mathematica help system has useful information, and links to videos etc.</p>
<p><img src="https://i.stack.imgur.com/JuOib.png" alt="help screen"></p>
|
3,962,514 | <p>This is just curiosity / a personal exercise.</p>
<p><a href="https://what3words.com/" rel="nofollow noreferrer">What3Words</a> allocates every 3m x 3m square on the Earth a unique set of 3 words. I tried to work out how many words are required, but got a bit stuck.</p>
<p><span class="math-container">$$
\text{Area}
= 510 \times 10^6 \,\mathrm{km}^2
= 5.1 \times 10^{14} \,\mathrm{m}^2
\approx 5.4 \times 10^{14} \,\mathrm{m}^2
$$</span></p>
<p>(rounding up to make the next step easier!)</p>
<p>And so there are ~ <span class="math-container">$6\times10^{13}$</span> 3m x 3m squares.</p>
<p>I <em>assumed</em> I could use the equation to calculate number of combinations to find the number of words needed:</p>
<p><span class="math-container">$$
_nC_r = \frac{n!}{r! (n - r)!}
$$</span></p>
<p>where <span class="math-container">$r$</span> is 3, and total number of combinations is the number of squares: <span class="math-container">$6\times10^{13}$</span></p>
<p><span class="math-container">$$
6\times10^{13} = \frac{n!}{3! (n - 3)!}
$$</span>
<span class="math-container">$$
6\times10^{13} = \frac{(n)(n-1)(n-2)(n-3)!}{3! (n - 3)!}
$$</span>
<span class="math-container">$$
n^3 - 3n^2 + 2n - (36\times10^{13}) = 0
$$</span></p>
<p>... and then, I can't work out the first factor to use to solve the cubic equation; I'm not sure I've ever had to solve a cubic equation with a non-integer factor, and none of the tutorials I've found have helped.</p>
<p>(And, my stats is also not good enough for me to be convinced this is the correct equation to use anyway!)</p>
<p>Any hints as to the next step would be appreciated.</p>
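<p>(Not a symbolic factorisation, but taking the $_nC_3$ model above at face value, the smallest integer $n$ with $n(n-1)(n-2)\ge 3.6\times10^{14}$ can be found by starting from the cube root and adjusting by at most a step or two; a sketch:)</p>

```python
# Smallest n with n(n-1)(n-2) >= 3.6e14, i.e. C(n,3) >= 6e13.
target = 6 * 10**13 * 6           # 6e13 squares times 3! = 3.6e14
n = round(target ** (1 / 3))      # initial guess near the cube root
while n * (n - 1) * (n - 2) < target:
    n += 1
while (n - 1) * (n - 2) * (n - 3) >= target:
    n -= 1
print(n)  # roughly 7.1e4 words would suffice under this model
```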
| Will Jagy | 10,400 | <p>I like the backwards method: choose the generalized eigenvector(s) with integer elements and see what is forced. Since <span class="math-container">$A-I$</span> gives two genuine eigenvectors, we hold off on that... The minimal polynomial gives the size of the largest Jordan block (always!). That is, <span class="math-container">$(A- I)^2 = 0, $</span> so we look for any nice looking vector for which <span class="math-container">$(A-I)^2 w = 0$</span> but <span class="math-container">$(A-I) w \neq 0.$</span> I like <span class="math-container">$w=(0,0,1)^T$</span>
Next we are forced to use <span class="math-container">$v= (A-I)w = (1,-2,-2)^T.$</span> A genuine eigenvector that is independent of <span class="math-container">$v$</span> could be <span class="math-container">$u = (0,1,0)^T$</span></p>
<p>The resulting matrix, your <span class="math-container">$T,$</span> is those three as columns in order <span class="math-container">$u,v,w$</span></p>
<p>This method allows us to force <span class="math-container">$T$</span> to have all integers, with the likelihood of some rational entries in <span class="math-container">$T^{-1}$</span> because <span class="math-container">$\det T$</span> is most likely not <span class="math-container">$\pm 1$</span></p>
<p>Alright, they set this one up with determinant <span class="math-container">$-1.$</span></p>
<p><span class="math-container">$$
T=
\left(
\begin{array}{rrr}
0&1&0 \\
1&-2&0 \\
0&-2&1 \\
\end{array}
\right)
$$</span></p>
<p><span class="math-container">$$
T^{-1}=
\left(
\begin{array}{rrr}
2&1&0 \\
1&0&0 \\
2&0&1 \\
\end{array}
\right)
$$</span></p>
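<p>(A mechanical check of the claims above, assuming the Jordan form $J$ consists of a $1\times1$ block and a $2\times2$ block for the eigenvalue $1$, as the answer's description suggests: then $TT^{-1}=I$, and $A=TJT^{-1}$ satisfies $(A-I)^2=0$ while $A\neq I$.)</p>

```python
# Verify T * Tinv = I and that A = T J T^{-1} (J = Jordan form with one
# 2x2 block for eigenvalue 1) satisfies (A - I)^2 = 0, matching the
# minimal polynomial (X - 1)^2 discussed above.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

T    = [[0, 1, 0], [1, -2, 0], [0, -2, 1]]
Tinv = [[2, 1, 0], [1, 0, 0], [2, 0, 1]]
I3   = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
J    = [[1, 0, 0], [0, 1, 1], [0, 0, 1]]

assert matmul(T, Tinv) == I3
A = matmul(matmul(T, J), Tinv)
N = [[A[i][j] - I3[i][j] for j in range(3)] for i in range(3)]
assert matmul(N, N) == [[0, 0, 0], [0, 0, 0], [0, 0, 0]] and A != I3
print("checks pass")
```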
|
777,863 | <p>Does there exists an $f:\mathbb{R}\rightarrow \mathbb{R}$ differentiable everywhere with $f'$ discontinuous at some point?</p>
| Ross Millikan | 1,827 | <p>Yes. The classic example is $f(x) = x^2 \sin(1/x)$ for $x \neq 0$, with $f(0) = 0$. It is differentiable everywhere: at the origin, $|f(h)/h| \leq |h| \to 0$ gives $f'(0) = 0$. However, $f'(x) = 2x\sin(1/x) - \cos(1/x)$ has no limit as $x \to 0$, so $f'$ is discontinuous there. (Note that integrating a continuous function cannot produce such an example, since an antiderivative of a continuous function is automatically $C^1$.)</p>
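<p>(Sampling the derivative of the classic example $f(x)=x^2\sin(1/x)$, $f(0)=0$: here $f'(0)=0$ by the squeeze theorem, while at the points $x_k=1/(k\pi)$ the derivative $f'(x)=2x\sin(1/x)-\cos(1/x)$ equals $-(-1)^k$ up to a vanishing term, so $f'$ keeps oscillating near $\pm1$ and cannot be continuous at $0$. A quick sketch:)</p>

```python
import math

# f'(x) = 2x sin(1/x) - cos(1/x) for the example f(x) = x^2 sin(1/x).
# At x_k = 1/(k*pi) the sine term vanishes and f'(x_k) ~ -(-1)^k, so f'
# oscillates between values near +1 and -1 as x -> 0, while f'(0) = 0.
def fprime(x):
    return 2 * x * math.sin(1 / x) - math.cos(1 / x)

samples = [fprime(1 / (k * math.pi)) for k in range(1000, 1010)]
print(round(min(samples), 3), round(max(samples), 3))  # about -1.0 and 1.0
```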
|
184,699 | <p>First, we make the following observation: let $X: M \rightarrow TM $ be a vector
field on a smooth manifold. Taking the contraction with respect to $X$ twice gives zero, i.e.
$$ i_X \circ i_{X} =0.$$
Is there any "name" for the corresponding "homology" group that one can define
(Kernel mod image)? Has this "homology" group been studied by others (there are plenty of questions that one can ask........is it isomorphic to anything more familiar etc etc). </p>
<p>Similarly, a dual observation is as follows: Let $\alpha$ be a one form; taking
the wedge product with $\alpha$ twice gives us zero. One can again define kernel
mod image. Does that give anything "interesting"? </p>
<p>If people have investigated these questions, I would like to know a few references. </p>
<p>My purpose for asking the "name" of the (co)homology group is so that I can make a google search using the name. I was unable to do that, since I do not know of any key words under this topic (or if at all it is a topic).</p>
| Joonas Ilmavirta | 55,893 | <p>If the vector field $X$ never vanishes, the homology corresponding to $i_X$ is trivial.
Suppose $\alpha\in\Omega^p(M)$ is in the kernel of $i_X$.
If we take $\beta=X^\flat\wedge\alpha$, we have $i_X\beta=i_X(X^\flat)\wedge\alpha=|X|^2\alpha$.
Thus if $X$ does not vanish, we have $\alpha=i_X(|X|^{-2}X^\flat\wedge\alpha)$ so $\alpha$ is in the image of $i_X$.</p>
<p>Consider then the homology corresponding to a one-form $\alpha$.
If $\alpha$ never vanishes, the corresponding cohomology is trivial.
Let $w_\alpha(\beta)=\alpha\wedge\beta$ for any differential form $\beta$.
Suppose then that $\beta\in\Omega^p(M)$ is in the kernel of $w_\alpha$.
If $\gamma=i_{\alpha^\sharp}\beta$, then $w_\alpha\gamma=i_{\alpha^\sharp}(\alpha)\wedge\beta=|\alpha|^2\beta$.
Thus $\beta=w_\alpha(|\alpha|^{-2}i_{\alpha^\sharp}\beta)$.</p>
<p>The results above were heavily based on the fact that the vector field and the one-form do not vanish.
Things get more interesting if they have zeroes.
In the following example the first homology group is nontrivial but it is infinite dimensional which makes the theory less nice than classical homology theory.
I don't know if the homology groups corresponding to vector fields can be finite dimensional but nontrivial.
Perhaps this is possible if the zeroes are isolated.</p>
<p>Consider the case when $X$ is a vector field in the plane and let $\beta$ be a one-form.
Now $i_X\beta=0$ means that $X$ and $\beta^\sharp$ are orthogonal.
Identifying two-forms with scalars, $\iota_Xf=fX^\perp$ for any scalar $f$, where $X^\perp$ is $X$ rotated by 90 degrees.
If the first homology group is trivial, then every vector field $Y$ ($=\beta^\sharp$) orthogonal to $X$ is of the form $fX^\perp$ for some scalar $f$.
In particular $Y$ must vanish where $X$ vanishes, which is clearly false.
Consider for example the vector fields $X(x,y)=(0,x)$ and $Y(x,y)=(1,0)$.
Note that for this $X$ the corresponding first homology group is infinite dimensional.</p>
|
1,728,097 | <p>So I have this integral : $$ \int_0^\infty e^{-xy} dy = -\frac{1}{x} \Big[ e^{-xy} \Big]_0^\infty$$
The integration part is fine, but I'm not sure what I get with the limits; can someone explain this?
<p>Thanks </p>
| Doug M | 317,162 | <p>\begin{align}
\int_0^\infty e^{-xy} dy &= \lim_\limits{n\to \infty} \int_0^n e^{-xy} dy
\\ &= \dfrac{e^0}{x} - \lim_\limits{n\to \infty} \dfrac{e^{-xn}}{x}
\end{align}</p>
<p>And that limit goes to $0$ (for $x>0$), so the integral equals $\dfrac{1}{x}$.</p>
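<p>(A numeric sanity check, for a hypothetical value $x=2$: the truncated integral $\int_0^N e^{-xy}\,dy=(1-e^{-xN})/x$ settles at $1/x$ very quickly.)</p>

```python
import math

# int_0^N exp(-x*y) dy = (1 - exp(-x*N)) / x, which tends to 1/x as
# N -> infinity because exp(-x*N) -> 0 for x > 0.
x = 2.0
for N in (1, 5, 50):
    print(N, (1 - math.exp(-x * N)) / x)  # approaches 1/x = 0.5
```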
|
341,202 | <p>The divisibility rule for $3$ is well-known: if you add up the digits of $n$ and the sum is divisible by $3$, then $n$ is divisible by three. This is quite helpful for determining if really large numbers are multiples of three, because we can recursively apply this rule:</p>
<p>$$1212582439 \rightarrow 37 \rightarrow 10\rightarrow 1 \implies 3\not\mid 1212582439$$
$$124524 \rightarrow 18 \rightarrow 9 \implies 3\mid 124524$$</p>
<p>This works for as many numbers as I've tried. However, I'm not sure how this may be <em>proven</em>. Thus, my question is:</p>
<blockquote>
<p>Given a positive integer $n$ and that $3\mid\text{(the sum of the digits of $n$)}$, how may we prove that $3\mid n$?</p>
</blockquote>
| Michael Hardy | 11,667 | <p>More generally, a number and the sum of its digits both leave the same remainder on division by $3$. For example: $245$ $\mapsto 2+4+5=11$ $\mapsto1+1=2$, so the remainder when $245$ is divided by $3$ is $2$.</p>
<p>If you know modular arithmetic, this is straightforward:
\begin{align}
245 & = 2\cdot10^2 +4\cdot10+5 \\[8pt]
& \equiv 2\cdot1^2 + 4\cdot1 + 5 \pmod 3 \\[8pt]
& = 2+4+5 \\[8pt]
& = \text{sum of digits.}
\end{align}</p>
<p>The point is that $10$ is congruent to $1$ when the modulus is $3$, since the remainder when dividing $10$ by $3$ is $1$, and so powers of $10$ are congruent to powers of $1$, and powers of $1$ are just $1$.</p>
|
341,202 | <p>The divisibility rule for $3$ is well-known: if you add up the digits of $n$ and the sum is divisible by $3$, then $n$ is divisible by three. This is quite helpful for determining if really large numbers are multiples of three, because we can recursively apply this rule:</p>
<p>$$1212582439 \rightarrow 37 \rightarrow 10\rightarrow 1 \implies 3\not\mid 1212582439$$
$$124524 \rightarrow 18 \rightarrow 9 \implies 3\mid 124524$$</p>
<p>This works for as many numbers as I've tried. However, I'm not sure how this may be <em>proven</em>. Thus, my question is:</p>
<blockquote>
<p>Given a positive integer $n$ and that $3\mid\text{(the sum of the digits of $n$)}$, how may we prove that $3\mid n$?</p>
</blockquote>
| rnjai | 42,043 | <p><strong>Let $\overline{abc}$ be a 3-digit number.</strong><br>
Working modulo $3$, where $100 \equiv 1$ and $10 \equiv 1 \pmod 3$:
$$100a+10b+c \equiv a+b+c \pmod 3.$$
Hence
$$3 \mid (100a+10b+c) \iff 3 \mid (a+b+c).$$</p>
|
341,202 | <p>The divisibility rule for $3$ is well-known: if you add up the digits of $n$ and the sum is divisible by $3$, then $n$ is divisible by three. This is quite helpful for determining if really large numbers are multiples of three, because we can recursively apply this rule:</p>
<p>$$1212582439 \rightarrow 37 \rightarrow 10\rightarrow 1 \implies 3\not\mid 1212582439$$
$$124524 \rightarrow 18 \rightarrow 9 \implies 3\mid 124524$$</p>
<p>This works for as many numbers as I've tried. However, I'm not sure how this may be <em>proven</em>. Thus, my question is:</p>
<blockquote>
<p>Given a positive integer $n$ and that $3\mid\text{(the sum of the digits of $n$)}$, how may we prove that $3\mid n$?</p>
</blockquote>
| Zubin Mukerjee | 111,946 | <p>For $n \in \mathbb{N}$, let $m = \overline{a_0a_1a_2\cdots a_{n-1}}$ be an $n$-digit natural number, where the $a_i$ are digits (natural numbers between $0$ and $9$, inclusive).</p>
<p>Then</p>
<p>$$ m = \displaystyle\sum\limits_{k=0}^{n-1}10^k \cdot a_k = a_0 + 10a_1 + 100a_2 + \cdots + 10^{n-1}a_{n-1}$$</p>
<p>Now consider the equation modulo $3$:</p>
<p>$$ m \equiv \displaystyle\sum\limits_{k=0}^{n-1} 10^k \cdot a_k \pmod{3}$$</p>
<p>$$ m \equiv \displaystyle\sum\limits_{k=0}^{n-1} (9+1)^k \cdot a_k \pmod{3}$$</p>
<p>Since $9 \equiv 0 \pmod{3}$, $9+1 \equiv 1 \pmod{3}$, and since $1^k \equiv 1 \pmod{3}$, we have</p>
<p>$$ m \equiv \displaystyle\sum\limits_{k=0}^{n-1} a_k \pmod{3}$$</p>
<p>This says that the remainder when $m$ is divided by $3$ is the same as the remainder of the sum of the digits of $m$ when <em>that</em> is divided by $3$. </p>
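<p>(The congruence above is easy to confirm by brute force, together with the recursive reduction from the question; a small sketch:)</p>

```python
def digit_sum(n):
    return sum(int(d) for d in str(n))

# A number and its digit sum leave the same remainder on division by 3.
for n in range(1, 10000):
    assert n % 3 == digit_sum(n) % 3

# Hence the rule can be applied recursively, as in the question:
n = 1212582439
while n >= 10:            # 1212582439 -> 37 -> 10 -> 1
    n = digit_sum(n)
print(n, 1212582439 % 3)  # 1 1  (so 3 does not divide 1212582439)
```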
|
341,202 | <p>The divisibility rule for $3$ is well-known: if you add up the digits of $n$ and the sum is divisible by $3$, then $n$ is divisible by three. This is quite helpful for determining if really large numbers are multiples of three, because we can recursively apply this rule:</p>
<p>$$1212582439 \rightarrow 37 \rightarrow 10\rightarrow 1 \implies 3\not\mid 1212582439$$
$$124524 \rightarrow 18 \rightarrow 9 \implies 3\mid 124524$$</p>
<p>This works for as many numbers as I've tried. However, I'm not sure how this may be <em>proven</em>. Thus, my question is:</p>
<blockquote>
<p>Given a positive integer $n$ and that $3\mid\text{(the sum of the digits of $n$)}$, how may we prove that $3\mid n$?</p>
</blockquote>
| voldemort | 118,052 | <p>Hint: Represent your number as $10^na_n+\cdots 10a_1+a_0$ where the $a_i's$ are the digit of your number. Now, note that the remained we get when we divide $10^k$ by $3$ is $1$. </p>
|
563,431 | <p>Find the absolute maximum and minimum of $f(x,y)= y^2-2xy+x^3-x$ on the region bounded by the curve $y=x^2$ and the line $y=4$. You must use Lagrange Multipliers to study the function on the curve $y=x^2$.</p>
<p>I'm unsure how to approach this because $y=4$ is given. Is this a trick question?</p>
| user113578 | 113,578 | <p>Forget Sylow theorem. Note $52=2.26$ and consider $$\mathbb Z_2\oplus\mathbb Z_{26}$$</p>
<p>Of course this one is abelian (the components being so). Is it possible to find $(a,b)\in\mathbb Z_2\oplus\mathbb Z_{26}$ such that $l.c.m.\{|a|,|b|\}=52?$</p>
|
2,987,994 | <p>I'm looking for definite integrals that are solvable using the method of differentiation under the integral sign (also called the Feynman Trick) in order to practice using this technique.</p>
<p>Does anyone know of any good ones to tackle?</p>
| Diger | 427,553 | <p><span class="math-container">$I_9$</span> in Zacky's answer is a special case of
<span class="math-container">$$I(s)=\int_0^\infty \frac{x^s}{x^2+1} \, {\rm d}x \tag{1}$$</span> with <span class="math-container">$-1<\Re(s)<1$</span>, i.e.
<span class="math-container">$$I_9 = \int_{2/3}^{4/5} I(s) \, {\rm d}s \, . $$</span>
(1) can be solved by contour integration
<span class="math-container">$$2\pi i\, {\rm Res} \left( \frac{z^s}{z^2+1} \right)\Bigg|_{z=i}=\pi e^{i\pi s/2} = \oint_{-\infty}^\infty\frac{z^s}{z^2+1} \, {\rm d}z \\
= \int_{-\infty}^{-\epsilon}\frac{z^s}{z^2+1} \, {\rm d}z + \int_{|z|=\epsilon}\frac{z^s}{z^2+1} \, {\rm d}z + \int_\epsilon^\infty \frac{z^s}{z^2+1} \, {\rm d}z + \int_{|z|=\infty} \frac{z^s}{z^2+1} \, {\rm d}z \\
= I_- + I_\epsilon + I_+ + I_\infty \, ,$$</span>
where the contour is closed and avoids the cut in the upper half plane. The second and fourth integral can be estimated and <span class="math-container">$I_-$</span> can be related to <span class="math-container">$\lim_{\epsilon \rightarrow 0} I_+ = I(s)$</span>
<span class="math-container">$$|I_\epsilon| \leq \frac{\pi \epsilon^{s+1}}{1-\epsilon^2} \rightarrow 0 \quad \text{for} \quad \epsilon\rightarrow 0 \\
|I_\infty| \leq \frac{\pi R^{s+1}}{R^2-1} \rightarrow 0 \quad \text{for} \quad R\rightarrow \infty \\
\lim_{\epsilon \rightarrow 0} I_- \stackrel{z=-x}{=} \int_{0-i0}^{\infty-i0} \frac{(-x)^s}{x^2+1} \, {\rm d}x= e^{i\pi s} \int_0^\infty \frac{x^s}{x^2+1} \, {\rm d}x = e^{i\pi s} I(s) \, .$$</span>
Hence <span class="math-container">$$\pi e^{i\pi s/2} = \lim_{\epsilon \rightarrow 0} \left(I_+ + I_-\right) = (1+e^{i\pi s})I(s) \\
\Rightarrow \quad I(s)=\frac{\pi/2}{\cos(\pi s/2)} \, .$$</span>
Finally <span class="math-container">$$\int I(s) \, {\rm d}s = \log\left( \tan(\pi s/2) + \sec(\pi s/2) \right) + C \\
= \log\left( \tan\left(\frac{\pi}{4}(s+1)\right)\right) + C\, .$$</span></p>
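<p>(A rough numeric check of $I(s)=\frac{\pi/2}{\cos(\pi s/2)}$, not from the answer above: substituting $x=\tan\theta$ turns $(1)$ into $\int_0^{\pi/2}\tan^s\theta\,d\theta$, which a plain midpoint rule handles despite the integrable endpoint singularity. Sketch for $s=1/2$:)</p>

```python
import math

# Check I(s) = int_0^inf x^s/(x^2+1) dx = (pi/2)/cos(pi*s/2) at s = 1/2.
# With x = tan(theta), dx = sec^2(theta) dtheta and 1 + x^2 = sec^2(theta),
# so the integral becomes int_0^{pi/2} tan(theta)^s dtheta.
s = 0.5
N = 200_000
h = (math.pi / 2) / N
approx = h * sum(math.tan((k + 0.5) * h) ** s for k in range(N))
exact = (math.pi / 2) / math.cos(math.pi * s / 2)
print(approx, exact)  # both near 2.2214, agreeing to within ~2e-3
```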
|
1,131,970 | <p>Let $I$ be a proper ideal of a polynomial ring $A$ and $x \in A$ an irreducible element.</p>
<p>In a theorem of commutative algebra I will use the fact that, in this hypothesis, holds the following equality: $$\sqrt{(I,x^k)}=\sqrt{(\sqrt{I},x)}$$</p>
<p>The assert seems to be true, anyone has any counterexample/proof? </p>
<p>Thank you.</p>
| Lubin | 17,760 | <p>The problem is that if $a=0\in\Bbb Z/p^k\Bbb Z$, then <em>of course</em> $X^2-a$ has a solution in $\Bbb Z/p^k\Bbb Z$. Just avoid this situation, and everything goes smoothly.</p>
<p>So the strongest result is this: If $a$ is prime to $p$ and $X^2-a$ has a solution in $\Bbb Z/p\Bbb Z$, i.e. if $a$ is a nonzero square modulo $p$, then $a$ is a square in the $p$-adic integers $\Bbb Z_p$ (and now you see why I couldn’t use your notation for $\Bbb Z/p^k\Bbb Z$). A $p$-adic integer stands for a consistent sequence $(a_k)_k$ of elements of $\Bbb Z/p^k\Bbb Z$ such that for each $k\ge1$, $a_{k+1}\equiv a_k\pmod{p^k}$. You get this from Hensel’s Lemma, whether in the strong form that I like, or in the weak form that talks about roots and derivatives. The form I like says that if $f\in\Bbb Z_p[X]$ reduces to $\tilde f=\gamma\eta$ in $(\Bbb Z/p\Bbb Z)[X]$, where the factors $\gamma(X)$ and $\eta(X)$ are relatively prime, then you can lift them, preserving the degree of one of them, to $g(X)$ and $h(X)$, in such a way that $\tilde g=\gamma$ and $\tilde h=\eta$ and $gh=f$.</p>
<p>In particular, if $a\not\equiv0\pmod p$ and you have an integer $b$ with $b^2\equiv a\pmod p$, then the polynomial $X^2-\tilde a$ is equal to $(X-\tilde b)(X+\tilde b)$ as polynomials with coefficients in $\Bbb Z/p\Bbb Z$, and since $\tilde b\ne-\tilde b$ (because of your assumption that $p\ne2$), then Hensel applies, and you can lift all the way up to a factorization $X^2-a=(X-B)(X+B)$, where $B$ is a $p$-adic integer; in particular you have good congruences modulo each power of $p$.</p>
<p>Now for the bad cases: if $a\in p\Bbb Z$, you must ask how divisible $a$ is by $p$. That is, write $a=p^ma_0$ where $a_0$ is prime to $p$. In case $m$ is odd, say $m=2n+1$, you’re out of luck, you can not find a root except when we’re in the problem case I mentioned at the top. For, you can solve $X^2-a$ modulo $p^m$, but not modulo $p^{m+1}$. I’ll let you calculate why. On the other hand, if $m$ is even, you look at $a_0$ and ask whether $X^2-a_0$ has a solution modulo $p$. If so, you’re good, and if not, you’re again out of luck. Again, I’ll let you fill in the details.</p>
<p>The story for $p=2$ is somewhat more complicated. If you need to know about this case, e-mail me.</p>
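<p>(The lifting in Hensel's Lemma is constructive for square roots. A minimal sketch for the good case, i.e. $p$ odd and $a$ prime to $p$ and a square mod $p$, using the Newton step $b\mapsto b-(b^2-a)(2b)^{-1}$, which raises the precision one power of $p$ at a time:)</p>

```python
# Lift a square root of a mod p to a square root mod p^k (p odd, a a
# nonzero square mod p): if b^2 = a (mod p^e), the Newton/Hensel step
# b -> b - (b^2 - a) * (2b)^{-1} gives b^2 = a (mod p^{e+1}).
def sqrt_mod_pk(a, p, k):
    b = next(x for x in range(1, p) if (x * x - a) % p == 0)  # root mod p
    pe = p
    for _ in range(k - 1):
        pe *= p
        b = (b - (b * b - a) * pow(2 * b, -1, pe)) % pe  # needs Python 3.8+
    return b

b = sqrt_mod_pk(2, 7, 5)      # 3^2 = 9 = 2 (mod 7), lifted to mod 7^5
print(b, (b * b - 2) % 7**5)  # the second value is 0
```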
|
2,386,182 | <p>Let $f:[a,b]\to \mathbb{R}$ is an increasing function and for any $y\in[f(a),f(b)],$ there exists a $\xi\in[a,b]$ such that $f(\xi)=y.$ Show that $f(.)$ is continuous on $[a,b]. $</p>
<p>Here is my argument, but I got stuck on the last part.</p>
<p>My goal is to show $$\lim_{n\to \infty}f(x_0+\xi_n)=\lim_{n\to\infty}f(x_0-\xi_n)=f(x_0),$$
where $\xi_n\downarrow0.$</p>
<p>As the sequence $f(x_0+\xi_n)$ is decreasing and bounded by $f(x_0),$ it converges. Similarly, the sequence $f(x_0-\xi_n)$ converges. I want to show both of them converge to $f(x_0)$. </p>
<p>Suppose not, that is $U:=\lim_{n\to\infty}f(x_0+\xi_n)>f(x_0)$.</p>
<p>By our assumption, there must exist some $y>x_0$, such that $f(x_0)\leq f(y)<U$. Fixed $\delta>0. $ As $y>x_0$, there exists some $k\in\mathbb{N}$ such that $|y-(x_0+\xi_n)|<\delta$ for all $n\geq k.$ Hence $f(y)\geq f(x_0+\xi_n)$ for all $n\geq k$. Which implies that $f(y)>U$. I want to make sure if my argument is correct and if there is any better solution for this question. Thank you.</p>
| Alex Ravsky | 71,850 | <p>Being officially a guy which your are trying hard to become, :-) answering your question I feel obliged to tell you the truth: :-) experience is the best of the teachers. :-) Sometimes you may find a problem solver handbook, helpful for a specific topic, but it won’t be solving problems for you. :-) </p>
<p>Concerning your explicit questions. </p>
<p>1 In order to get better in problem solving, what matters is not the success rate, but the level, hardness, and depth of the problems. Solving thousands of quadratic equations won’t make anybody a good problem solver. :-) On the other hand, I’m solving open problems, so any success rate greater than zero is good for me (then I may publish my solution and show my boss what I’m paid for :-)). </p>
<p>Of course, IMO level problems are pretty hard. Nevertheless, they should be solved in hours, whereas really hard problems may be solved during decades. My MSE examples: <a href="https://math.stackexchange.com/questions/449755/how-to-prove-this-innocent-inequality">of school level</a> and <a href="https://math.stackexchange.com/questions/582162/is-every-regular-paratopological-group-completely-regular">of professional level</a>). Also parts of <a href="https://math.stackexchange.com/questions/1996363/a-problem-related-to-projective-plane/2009512#2009512">this</a> answer may be relevant.</p>
<p>2 Unfortunately, my problem solver handbook knowledge is rather narrow, because I have to solve problems, but not to read how to do this. :-) Also a lot of my problem solving experience is related with old Russian school sources. </p>
<p>As a general problem solver handbooks I may highly recommend George Polya books, especially “Mathematical discovery: on understanding, learning and teaching”. For specific subjects may be useful collections of olympic problems with solutions, grouped into topics. Also you can use tag search at MSE.</p>
|
4,382,786 | <p>I would like to know what is the following process on the real line called.</p>
<p>Let us fix some <span class="math-container">$X_0$</span> and let <span class="math-container">$X_{i+1} = (1-\gamma)X_i + Y_i$</span> where <span class="math-container">$\gamma$</span> is a fixed real number and <span class="math-container">$Y_i$</span>'s are i.i.d. random variables.</p>
<p>I found a reference to this in the book "Stochastic Population Dynamics in Ecology and
Conservation". Specifically, it is supposed to model the fact that the population of a species is limited by the amount of resources in the environment. I would like to know if someone has studied this mathematically.</p>
| user2316602 | 187,745 | <p>It is called an <em>autoregressive process of order 1</em>. For more information, you can see <a href="https://en.wikipedia.org/wiki/Autoregressive_model" rel="nofollow noreferrer">this Wikipedia page</a> or the textbook "Time Series Analysis: Forecasting and Control".</p>
|
188,150 | <p>I'm reading the paper <em>Loop groups and twisted K-theory I</em> by Freed, Hopkins, and Teleman. They give some examples of computing (twisted) K groups using the Mayer-Vietoris sequence. </p>
<p>I'm a bit confused with some of their computations, for instance $S^3$ (their example 1.4 in the first section). They take subsets $U_+ = S^3 \backslash(0,0,0,-1)$ and $U_- = S^3 \backslash(0,0,0,1)$ and then they say that $K^0(U_\pm) \simeq \mathbb Z$. I don't understand where this comes from since $U_\pm$ are non-compact so I believe $K^0(U_\pm)$ should be the reduced $K^0$ of a 1-point compactification. The compactifications of these spaces are $S^3$ so shouldn't $K^0(U_\pm) = \tilde {K^0}(S^3) = 0$? However, it seems like if you replace $U_\pm$ by shrinking it a bit to make it closed, these computations work out. </p>
<p>So I'm wondering what the exact statement of Mayer-Vietoris is for $K$-theory (specifically, what type of covers you can take) or if Freed, Hopkins, and Teleman are using a different definition of $K^0$ for which $K^0(U_\pm)$ is indeed $\mathbb Z$. Any references would also be appreciated since I couldn't find much in the literature about a Mayer-Vietoris sequence for $K$-theory.</p>
| André Henriques | 5,690 | <p>"since $U_\pm$ are non-compact so I believe $K_0(U_\pm)$ should be the reduced $K_0$ of a 1-point compactification"</p>
<p>This is a convention often used in $K$-theory of $C^*$-algebras: by default, people take "$K$-theory" to mean compactly supported $K$-theory.</p>
<p>But that is <i>not</i> the convention used for that particular Mayer-Vietoris computation in FHT.
There, the version of $K$-theory that is being used is homotopy invariant (so that the $K$-theory of $\mathbb R^n$ is the same as the $K$-theory of a point).</p>
|
474,568 | <p>In some books I've seen this symbol $\dagger$, next to some theorem's name, and I don't know what it means. I've googled it with no results which makes me suspect it's not standard.</p>
<p>Does anybody know what it means? One example I'm looking at right now is in a probability book, next to a section about Stirling's approximation to factorials:</p>
<blockquote>
<p><strong>Stirling's formula ($\dagger$)</strong></p>
</blockquote>
<p>FOUND IT: The preamble says they're historic notes; it actually gives a historical introduction in the section about Stirling's formula, in case anyone's wondering.</p>
| Cameron Buie | 28,900 | <p>It is often simply used as an alternative to an asterisk, or a footnote notation.</p>
|
597,899 | <p>Let $\epsilon > 0$ be given. Suppose we have that $$a - \epsilon < F(x) < a + \epsilon$$</p>
<p>Does it follow that $a - \epsilon < F(x) \leq a $ ??</p>
| David Holden | 79,543 | <p>strip away the irrelevant context, and the question becomes: if $c \gt 0$ and $a \lt b+c$ does it follow that $a \le b$? why on earth should it???</p>
<p>on the other hand if OP's real question is this:</p>
<p>if $\forall c \gt 0$ we have $a \lt b+c$ does this imply $a \le b$ then the answer is: yes.</p>
|
49,068 | <p>Given lists $a$ and $b$, which represent multisets, how can I compute the complement $a\setminus b$?</p>
<p>I'd like to construct a function <code>xunion</code> that returns the symmetric difference of multisets.
For example, if $a=\{1, 1, 2, 1, 1, 3\}$ and $b=\{1, 5, 5, 1\}$, then their symmetric difference is $\big((a\cup b)\setminus(a\cap b)\big)\setminus(a\cap b)=(a\setminus b)\cup(b\setminus a)=\{1,1,2,3,5,5\}$.</p>
| Mr.Wizard | 121 | <p>I believe this question is nearly a duplicate of <a href="https://mathematica.stackexchange.com/q/18100/121">Removing elements from a list which appear in another list</a> but since this one allows other, potentially better, solutions it should not be closed.<br>
To illustrate, using Leonid's <code>unsortedComplement</code> or my <code>removeFrom2</code>:</p>
<pre><code>a = {1, 1, 2, 1, 1, 3};
b = {1, 5, 5, 1};
unsortedComplement[a, b] ~Join~ unsortedComplement[b, a] // Sort
removeFrom2[a, b] ~Join~ removeFrom2[b, a] // Sort
</code></pre>
<blockquote>
<pre><code>{1, 1, 2, 3, 5, 5}
{1, 1, 2, 3, 5, 5}
</code></pre>
</blockquote>
<p>Unfortunately rasher's solution from that question doesn't appear to be directly applicable here.</p>
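<p>(Outside Mathematica, the same multiset arithmetic can be sketched with Python's <code>collections.Counter</code>, whose saturating subtraction is exactly the multiset complement; shown here only for comparison:)</p>

```python
from collections import Counter

# Counter subtraction drops counts that would go negative, so a - b is the
# multiset complement; (a - b) + (b - a) is the symmetric difference.
a = Counter([1, 1, 2, 1, 1, 3])
b = Counter([1, 5, 5, 1])
sym_diff = (a - b) + (b - a)
print(sorted(sym_diff.elements()))  # [1, 1, 2, 3, 5, 5]
```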
|
3,527,785 | <p>I'm reading James Anderson's <em>Automata Theory with Modern Applications</em>. Here:</p>
<blockquote>
<p><a href="https://i.stack.imgur.com/sFWNh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sFWNh.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/k9zne.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/k9zne.png" alt="enter image description here" /></a></p>
</blockquote>
<p>And I tried to prove the following theorem (for prefix codes).</p>
<blockquote>
<p><a href="https://i.stack.imgur.com/S5BgC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/S5BgC.png" alt="enter image description here" /></a></p>
</blockquote>
<p>I tried in the following way: Suppose <span class="math-container">$C$</span> is a prefix code which is not uniquely decipherable, that is, there is a string <span class="math-container">$u \in C$</span> with two different expressions <span class="math-container">$u=ab=cd$</span>. But <span class="math-container">$u=vw$</span> and hence <span class="math-container">$vw=ab=cd$</span> where <span class="math-container">$w= \lambda$</span> and <span class="math-container">$\lambda$</span> is the empty word, therefore <span class="math-container">$v=a=c$</span> and <span class="math-container">$\lambda=b=d$</span>, which contradicts the hypothesis that <span class="math-container">$u$</span> is not uniquely decipherable.</p>
<p>Is this correct? I am confused because I paired <span class="math-container">$v=a=c$</span> and <span class="math-container">$\lambda=b=d$</span> and I'm not sure if that is valid.</p>
| MJD | 25,554 | <p>If <span class="math-container">$ab=cd$</span>, and <span class="math-container">$a≠c$</span>, then one of <span class="math-container">$a$</span> and <span class="math-container">$c$</span> is shorter and one is longer. The shorter is a prefix of the longer, therefore the code is not a prefix code.</p>
|
300,753 | <p>Revision in response to early comments. Users of set theory need an <em>implementation</em> (in case "model" means something different) of the axioms. I would expect something like this:</p>
<blockquote>
<p>An <em>implementation</em> consists of a "collection-of-elements" $X$, and a relation
(logical pairing) $E:X\times X\to \{0,1\}$. A logical function $h:X\to\{0,1\}$
is a <em>set</em> if it is of the form $x\mapsto E[x,a]$ for some $a\in X$. Sets are
required to satisfy the following axioms: ....</p>
</blockquote>
<p>The background "collection-of-elements" needs some properties to even get started. For instance "of the form $x\mapsto E[x,a]$" needs first-order quantification. Mathematical standards of precision seem to require <em>some</em> discussion, but so far I haven't seen anything like this. The first version of this question got answers like $X$ is "the domain of discourse" (philosophy??), "everything" (naive set theory?) "a set" (circular), and "type theory" (a postponement, not a solution). Is this a missing definition? Taking it seriously seems to give a rather fruitful perspective. </p>
| Zuhair Al-Johar | 95,347 | <p>I think what you mean when you say that $x \in A$ must be a logical <em>"function"</em> is that it is an assignment that sends a pair of sets to a truth value; each object in each pair is a set that can substitute for the symbol $x$ or the symbol $A$. In this sense $x \in A$ is called a "propositional function"; you can refer to Russell on this in his <em>Introduction to Mathematical Philosophy</em>. Your question is legitimate, since to fully characterize a function its domain must be specified; the range is already known, and in binary logic it is $\{T,F\}$.</p>
<p>So the domain can be seen as a set of all $sets$ that the axioms are speaking about. Notice that the circularity is only apparent: if you think that the domain of discourse must include ALL sets as elements of it, then clearly the domain of discourse cannot be a set, and you will be led into searching for the "weaker" notion that you mentioned. But that is not how things are understood. The understanding is that the elements of the domain of discourse are the sets that our axioms speak about, and this does not include the domain itself. If you want, you can add a primitive constant symbol $V$ and relativize all axioms to this constant [i.e. all quantifiers are written bounded in $V$]. So the theory is not aiming to speak about all sets; it only aims to speak about sets within $V$, and more specifically only about sets that have the characteristics specified by the axioms, not about every possible set. Given this partial, sectoral understanding, the apparent circularity vanishes. Of course, I am speaking in relation to $\text{ZF}$ and related extensions.</p>
<p>On the other hand, there are indeed theories that include the universe of all sets they speak about among their objects; $\text{NFU}$ would be such an example, but there the circularity is obvious and actually admitted. In the context of $\text{ZF}$ set theories, however, nothing of that is attempted, so you can keep forming stronger and stronger extensions, with each extension defining the universe of discourse of the lower theory, and you can go on like that indefinitely, again without being involved in any circular issue.</p>
<p>If you are not content with this and want some kind of 'collection' other than sets and classes, then you can go to Mereological totalities; perhaps those would prove to be weaker than sets in your sense. For that, refer to work on "Mereology", which is about the part/whole relation. A less radical shift is to take the universe of discourse to be a set/class of a <em>higher</em> sort than its elements. This simply breaks the acyclicity: the variables in the theory are substituted by "elements" of the domain of discourse, but the domain of discourse itself, being of a higher sort, does not substitute any of those variables, and we can liberally define sets of higher sorts as collections of the lower-sort objects. So refer to type theory and "Predicativity" issues to break the circularity that you think exists between sets at the theoretic/metatheoretic levels.</p>
<p>Another main concern is that the question itself is a little unclear. At times it appears as if the OP is asking for a specific domain of discourse, and he states that this is a mathematical concern. But did any mathematician state 'beforehand' the domain of discourse for the 'addition' operator, for example? We can also incorporate it into logic, and then the formula $x + y = z$ would indeed qualify as a "logical function" in the sense written here, since it is a 'propositional function', a ternary one really, sending triplets to truth values. Did any mathematician care to find an a priori way to 'specify' "all possible numbers" before we define numbers inside an arithmetical system? This can be done in set theory, yes, but I don't think it was done in mainstream mathematics; we can indeed have many domains that fulfill the same rules about the addition operator, taking it to be $Z$ or $Q$ or $R$, etc. All a logical theory needs is a clear set of syntactical rules; semantics can be attached to it to explain it, and it need not be fixed to one kind of explanation. Perhaps the OP was objecting to the "nature" of possible domain(s) of discourse, seeing circularity between saying that the domain is a 'set' and having the theory speak internally about 'sets'. This can be resolved in type theory, in predicative definitions, or even more radically in Mereological totalities, etc. I don't see a deep issue here that philosophical accounts have been unhelpful about; it is a simple distinction, and simple speciation would resolve it. I don't see a deep argument raised here.</p>
|
3,382 | <p>A couple of years ago I found the following continued fraction for $\frac1{e-2}$:</p>
<p>$$\frac{1}{e-2} = 1+\cfrac1{2 + \cfrac2{3 + \cfrac3{4 + \cfrac4{5 + \cfrac5{6 + \cfrac6{7 + \cfrac7{\cdots}}}}}}}$$</p>
<p>from fooling around with the well-known continued fraction for $\phi$. Can anyone here help me figure out why this equality holds? </p>
| Américo Tavares | 752 | <p>Euler proved in "<em>De Transformatione Serium in Fractiones Continuas</em>" <strong>Reference: The Euler Archive, Index number E593</strong> (On the Transformation of Infinite Series to Continued Fractions) [Theorem VI, §40 to §42] that</p>
<p><span class="math-container">$$s=\cfrac{1}{1+\cfrac{2}{2+\cfrac{3}{3+\cdots }}}=\dfrac{1}{e-1}.$$</span></p>
<p>Here is an explanation of how he proceeded.</p>
<p><strong>New Edit</strong> in response to @A-Level Student's comment. I transcribed the following assertion from the available translation to English of Euler's article. Now I checked the original paper and corrected equation (1b).</p>
<p>He stated that "we are able to demonstrate without much difficulty, that if</p>
<p><span class="math-container">$$\cfrac{a}{a+\cfrac{b}{b+\cfrac{c}{c+\cdots }}}=s,\tag{1a}$$</span></p>
<p>then</p>
<p><span class="math-container">$$a+\cfrac{a}{b+\cfrac{b}{c+\cfrac{c}{d+\cdots }}}=\dfrac{s}{1-s}.\text{"}\tag{1b}$$</span></p>
<p>Since, in this case, we have <span class="math-container">$s=1/(e-1)$</span>, <span class="math-container">$a=1,b=2,c=3,\ldots $</span> it follows</p>
<p><span class="math-container">$$1+\cfrac{1}{2+\cfrac{2}{3+\cfrac{3}{4+\cdots }}}=\dfrac{1}{e-2}.$$</span></p>
<p>Edit: Euler proves first how to form a continued fraction from an alternating series of a particular type [Theorem VI, §40] and then uses the expansion</p>
<p><span class="math-container">$$e^{-1}=1-\dfrac{1}{1}+\dfrac{1}{1\cdot 2}-\dfrac{1}{1\cdot 2\cdot 3}+\ldots
.$$</span></p>
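<p>As a numerical sanity check (my addition, not part of Euler's argument), the continued fraction can be evaluated from the innermost level outward; the truncation depth 60 is an arbitrary cutoff:</p>

```python
import math

def euler_cf(depth):
    """Evaluate 1/(1 + 2/(2 + 3/(3 + ... + depth/depth)))
    from the innermost level outward."""
    t = 0.0
    for k in range(depth, 0, -1):
        t = k / (k + t)
    return t

print(euler_cf(60), 1 / (math.e - 1))   # both ≈ 0.58198
```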
<hr />
<p>REFERENCES</p>
<p><strong>The Euler Archive, Index number E593</strong>, <a href="http://www.math.dartmouth.edu/%7Eeuler/" rel="nofollow noreferrer">http://www.math.dartmouth.edu/~euler/</a></p>
<p><strong>Translation of Leonhard Euler's paper by Daniel W. File</strong>, The Ohio State University.</p>
|
3,382 | <p>A couple of years ago I found the following continued fraction for $\frac1{e-2}$:</p>
<p>$$\frac{1}{e-2} = 1+\cfrac1{2 + \cfrac2{3 + \cfrac3{4 + \cfrac4{5 + \cfrac5{6 + \cfrac6{7 + \cfrac7{\cdots}}}}}}}$$</p>
<p>from fooling around with the well-known continued fraction for $\phi$. Can anyone here help me figure out why this equality holds? </p>
| J. M. ain't a mathematician | 498 | <p>Another possibility: remember that the numerators and denominators of successive convergents of a continued fraction can be computed using a three term recurrence.</p>
<p>For a continued fraction</p>
<p>$$b_0+\cfrac{a_1}{b_1+\cfrac{a_2}{b_2+\dots}}$$</p>
<p>with nth convergent $\frac{C_n}{D_n}$, the recurrence</p>
<p>$$\begin{bmatrix}C_n\\\\D_n\end{bmatrix}=b_n\begin{bmatrix}C_{n-1}\\\\D_{n-1}\end{bmatrix}+a_n\begin{bmatrix}C_{n-2}\\\\D_{n-2}\end{bmatrix}$$</p>
<p>with starting values</p>
<p>$\begin{bmatrix}C_{-1}\\\\D_{-1}\end{bmatrix}=\begin{bmatrix}1\\\\0\end{bmatrix}$, $\begin{bmatrix}C_{0}\\\\D_{0}\end{bmatrix}=\begin{bmatrix}b_0\\\\1\end{bmatrix}$</p>
<p>holds.</p>
<p>With $b_j=j+1$ and $a_j=j$, you now try to find a solution for those two difference equations.</p>
<p>Skipping details, one can check that</p>
<p>$$C_n=\frac{(n+3)!}{n+2}\sum_{j=0}^{n+3}\frac{(-1)^j}{j!}$$</p>
<p>and</p>
<p>$$D_n=\frac{(n+3)!}{n+2}\left(1-2\sum_{j=0}^{n+3}\frac{(-1)^j}{j!}\right)$$</p>
<p>are solutions to the two difference equations.</p>
<p>Divide $C_n$ by $D_n$ and take the limit as $n\to\infty$; you should get the expected result.</p>
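<p>Here is a short script (my addition, not part of the answer) that runs the recurrence and checks both the limit and the closed forms, the latter exactly with rational arithmetic:</p>

```python
import math
from fractions import Fraction

def convergent(n):
    """nth convergent of 1 + 1/(2 + 2/(3 + 3/(4 + ...)))
    via the three-term recurrence, with b_0 = 1, b_j = j + 1, a_j = j."""
    C_prev, D_prev = 1, 0          # C_{-1}, D_{-1}
    C, D = 1, 1                    # C_0 = b_0, D_0
    for j in range(1, n + 1):
        b, a = j + 1, j
        C, C_prev = b * C + a * C_prev, C
        D, D_prev = b * D + a * D_prev, D
    return C, D

def closed_forms(n):
    """The closed-form expressions for C_n and D_n, as exact rationals."""
    s = sum(Fraction((-1) ** j, math.factorial(j)) for j in range(n + 4))
    pre = Fraction(math.factorial(n + 3), n + 2)
    return pre * s, pre * (1 - 2 * s)

C, D = convergent(30)
assert (Fraction(C), Fraction(D)) == closed_forms(30)
print(C / D, 1 / (math.e - 2))   # the two numbers agree to machine precision
```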
|
78,341 | <p>I believe I read somewhere that residually finite-by-$\mathbb{Z}$ groups are residually finite. That is, if $N$ is residually finite with $G/N\cong \mathbb{Z}$ then $G$ is residually finite.</p>
<p>However, I cannot remember where I read this, and nor can I find another place which says it. I was therefore wondering if someone could confirm whether this is true or not, and if it is give either a proof or a reference for this result? (If not, a counter-example would not go amiss!)</p>
<p>Note that I definitely know it is true if $N$ is f.g. free (this can be found in a paper of G. Baumslag, "Finitely generated cyclic extensions of free groups are residually finite" (Bull. Amer. Math. Soc., <strong>5</strong>, 87-94, 1971)).</p>
| YCor | 14,094 | <p>This is not true if $N$ is not assumed f.g. E.g. the wreath product of a nonabelian finite group $H$ by the integers is not residually finite (Gruenberg 1957, can be checked as an exercise). Here $N$ is an infinite direct sum of copies of $H$ (shifted by the action of the integers) and is residually finite.</p>
|
3,482,476 | <p>For an arbitrary <span class="math-container">$0\leqslant x \leq\frac{\pi^2}6$</span>, can we write <span class="math-container">$x$</span> in the form
<span class="math-container">$$
x = x_0+\sum_{j\in S\subset\mathbb N\setminus\{0\}} \frac1{j^2}, \tag 1
$$</span></p>
<p>where <span class="math-container">$x_0\in\{0,1\}$</span>.</p>
<p>My motivation is this: Let <span class="math-container">$X_n\stackrel{\mathrm{i.i.d.}}\sim\mathrm{Ber}(p)$</span>, <span class="math-container">$Y_n = \frac{X_n}{n^2}$</span>, and <span class="math-container">$S_n = \sum_{k=1}^n Y_k$</span>. Then <span class="math-container">$S_n$</span> converges weakly to some random variable <span class="math-container">$S$</span>. I would like to know whether <span class="math-container">$S$</span> is continuous, i.e. takes all values over <span class="math-container">$\left[0,\frac{\pi^2}6\right)$</span>, or if <span class="math-container">$S$</span> is discrete (takes values in some countable subset of <span class="math-container">$\left[0,\frac{\pi^2}6\right)$</span>). The former is true if the representation of elements of <span class="math-container">$\mathbb R$</span> described in (1) is correct, and the latter is true if not. Note that <span class="math-container">$\mathbb P(S\leqslant \frac{\pi^2}6)=1$</span> because <span class="math-container">$S\leqslant\sum_{j=1}^\infty \frac1{j^2}=\frac{\pi^2}6$</span> a.s.</p>
<p>My gut feeling is that <span class="math-container">$S$</span> is discrete, as there will be values <span class="math-container">$\frac jk$</span> which cannot be obtained by finite sums of elements of <span class="math-container">$\{\frac1{m^2}:m=1,2,\ldots n\}$</span> no matter how large <span class="math-container">$n$</span> is. But I do not know how to show this rigorously. Advice on how to show this, and hints what the distribution of <span class="math-container">$S$</span> looks like would be appreciated.</p>
| Milo Brandt | 174,927 | <p>It's neither discrete nor do its possible values cover the entire interval. We can essentially solve this question via the following lemma:</p>
<blockquote>
<p><strong>Lemma:</strong> Let <span class="math-container">$s_n$</span> be any sequence of non-negative real numbers with the property that <span class="math-container">$s_n \leq \sum_{i=n+1}^{\infty}s_i$</span> and <span class="math-container">$\lim_{n\rightarrow\infty}s_n = 0$</span>. Then, for any <span class="math-container">$0\leq x \leq \sum_{i=1}^{\infty}s_i$</span>, there is some subset <span class="math-container">$S\subseteq \mathbb N$</span> such that <span class="math-container">$x=\sum_{i\in S}s_i$</span>.</p>
</blockquote>
<p>The proof proceeds by greedily constructing <span class="math-container">$S$</span>: we recursively define <span class="math-container">$S$</span> by the rule that <span class="math-container">$n\in S$</span> if and only if <span class="math-container">$s_n + \sum_{i\in S\cap [1,n)}s_i \leq x$</span>. Otherwise said, we construct <span class="math-container">$S$</span> by enumerating the natural numbers and adding a number to <span class="math-container">$S$</span> if adding that element does not make the partial sum exceed the target <span class="math-container">$x$</span>.</p>
<p>Clearly, <span class="math-container">$\sum_{i\in S}s_i\leq x$</span> since this is true, by definition, for the sum of the elements <span class="math-container">$s_i$</span> where <span class="math-container">$i\in S\cap [1,n]$</span>. We can then prove that <span class="math-container">$\sum_{i\in S}s_i \geq x$</span> and therefore that <span class="math-container">$\sum_{i\in S}s_i = x$</span> by splitting into cases:</p>
<p><strong>Case 1:</strong> <span class="math-container">$\mathbb N\setminus S$</span> is unbounded.</p>
<p>In this case, we can notice that if <span class="math-container">$n\not\in S$</span> we must have that
<span class="math-container">$$\sum_{i\in S}s_i \geq \sum_{i\in S\cap [1,n)}s_i > x - s_n.$$</span>
However, since <span class="math-container">$\lim_{n\rightarrow\infty}s_n=0$</span>, we can conclude that <span class="math-container">$\inf \{s_n : n\not\in S\} = 0$</span> and then, taking suprema of both sides of the above inequality over all <span class="math-container">$n\not\in S$</span> gives
<span class="math-container">$$\sum_{i\in S}s_i \geq x$$</span>
as desired.</p>
<p><strong>Case 2:</strong> <span class="math-container">$S = \mathbb N$</span></p>
<p>In this case, we note that we have <span class="math-container">$\sum_{i=1}^{\infty}s_i \geq x$</span> by hypothesis, but since <span class="math-container">$\sum_{i=1}^{\infty} s_i = \sum_{i\in S}s_i$</span>, we automatically have the equality we want.</p>
<p><strong>Case 3:</strong> <span class="math-container">$\mathbb N\setminus S$</span> is bounded and non-empty.</p>
<p>We will show that this case can never occur by deriving a contradiction. Let <span class="math-container">$n\in\mathbb N$</span> be the largest element not in <span class="math-container">$S$</span>. We have already seen that
<span class="math-container">$$\sum_{i\in S}s_i=\sum_{i\in S\cap [1,n)}s_i+\sum_{i=n+1}^{\infty}s_i \leq x.$$</span>
However, note that we know <span class="math-container">$s_n \leq \sum_{i=n+1}^{\infty}s_i$</span> by hypothesis, therefore
<span class="math-container">$$\sum_{i\in S\cap [1,n)}s_i + s_n \leq x$$</span>
by substituting in this inequality to the earlier one. This, however, would imply that <span class="math-container">$s_n \in S$</span> by definition of <span class="math-container">$S$</span>, which contradicts that we chose <span class="math-container">$n$</span> to be in the complement of <span class="math-container">$S$</span>. Thus, this case may never occur.</p>
<p>Having handled every case, we have established the lemma. </p>
<p>Then, to finish, we observe that, while the sequence <span class="math-container">$s_n=\frac{1}{n^2}$</span> fails to satisfy this property, it <em>does</em> satisfy this property if we truncate off the first term - since <span class="math-container">$\frac{1}4 \leq \frac{1}{9} + \frac{1}{16} + \frac{1}{25} + \frac{1}{36} + \frac{1}{49}$</span> and since, at every point further into the sequence, the ratio of consecutive terms is less than <span class="math-container">$2$</span>. Thus, we can conclude that the set of points that can be written as <span class="math-container">$\sum_{i\in S}\frac{1}{i^2}$</span> is exactly <span class="math-container">$[0,\frac{\pi^2}6-1] \cup [1,\frac{\pi^2}6]$</span> using that <span class="math-container">$\sum_{i=1}^{\infty}\frac{1}{i^2}=\frac{\pi^2}6$</span>.</p>
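<p>The greedy construction in the proof is easy to run numerically. The sketch below (my addition; the target $x=0.3$ and cutoff $N=10^6$ are arbitrary choices) uses the truncated sequence $s_n = 1/n^2$, $n\geq 2$:</p>

```python
def greedy_subset_sum(x, N=10**6):
    """Greedy construction from the lemma's proof, for s_n = 1/n^2 with
    n >= 2: include n whenever doing so keeps the partial sum <= x."""
    total = 0.0
    chosen = []
    for n in range(2, N + 1):
        s = 1.0 / (n * n)
        if total + s <= x:
            total += s
            chosen.append(n)
    return total, chosen

x = 0.3                       # any target in [0, pi^2/6 - 1] works
total, chosen = greedy_subset_sum(x)
print(chosen[:5], x - total)  # the error is on the order of 1/N
```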
<hr>
<p>As a side-note, it is worth noting that this doesn't imply that the distribution is <em>continuous</em> - a process such as "randomly choose the binary digits of a real number in <span class="math-container">$[0,1)$</span> by independent Bernoulli trials with parameter <span class="math-container">$p$</span>" yields a distribution which has fractal-like properties and which is <em>not</em> continuous (i.e. has no probability density function) unless <span class="math-container">$p=1/2$</span> - it is an example of a singular distribution (with respect to the uniform distribution), which needn't be discrete. According to <a href="https://www.jstor.org/stable/2240118" rel="nofollow noreferrer">Random Variables with Independent Binary Digits by Marsaglia</a>:</p>
<blockquote>
<p>"Let <span class="math-container">$X = . b_1b_2b_3\dots$</span> be a random variable with independent binary digits <span class="math-container">$b_n$</span> taking values <span class="math-container">$0$</span> or <span class="math-container">$1$</span> with probability <span class="math-container">$p_n$</span> and <span class="math-container">$q_n = 1 -p_n$</span>. When does <span class="math-container">$X$</span> have a density? A continuous density? A singular distribution? This
note gives necessary and sufficient conditions for the distribution of <span class="math-container">$X$</span> to be: discrete: <span class="math-container">$\sum\min (p_n, q_n) < \infty$</span>; singular: <span class="math-container">$\sum_m^\infty[\log (p_n/q_n)]^2=\infty$</span> for every <span class="math-container">$m$</span>; absolutely continuous: <span class="math-container">$\sum_m^\infty[\log (p_n/q_n)]^2 < \infty$</span> for some <span class="math-container">$m$</span>".</p>
</blockquote>
<p>The example in the question truly is continuous (i.e. is defined by a probability density function) for all <span class="math-container">$p\in (0,1)$</span>, primarily because the sequence <span class="math-container">$\frac{1}{n^2}$</span> does not decrease very quickly. One can prove this most easily via characteristic functions, although I would bet it's possible to prove from more elementary results such as the Berry-Esseen theorem.</p>
|
1,142,624 | <p>Find a generating function for $\{a_n\}$ where $a_0=1$ and $a_n=a_{n-1} + n$</p>
| Samrat Mukhopadhyay | 83,973 | <p><strong>Hint</strong> $a_{n}-a_0=\sum_{i=1}^{n}(a_i-a_{i-1})$</p>
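<p>Following the hint through (my addition; the closed form and the generating function below are what the telescoping sum leads to, and are not stated in the hint itself): summing $a_i-a_{i-1}=i$ gives $a_n = 1+\frac{n(n+1)}{2}$, whose generating function is $\frac{1}{1-x}+\frac{x}{(1-x)^3}$. A quick coefficient check:</p>

```python
from math import comb

a = [1]                       # a_0 = 1, a_n = a_{n-1} + n
for n in range(1, 20):
    a.append(a[-1] + n)

# Telescoping the hint: a_n = a_0 + sum_{i=1}^n i = 1 + n(n+1)/2
assert all(a[n] == 1 + n * (n + 1) // 2 for n in range(20))

# Coefficient of x^n in 1/(1-x) + x/(1-x)^3 is 1 + C(n+1, 2)
assert all(a[n] == 1 + comb(n + 1, 2) for n in range(20))
```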
|
1,814,216 | <p>I was trying to show that $\sin(x)$ is non-zero for integers $x$ other than zero and I thought that this result might emerge as a corollary if I managed to show that the result in question is true. </p>
<p>I think it's possible to demonstrate this by looking at the power series expansion of $\sin(x)$ and assuming that we don't know anything about the existence of $\pi$. </p>
<p>All of the answers below insist that the proposition '$\exists p,q \in \mathbb{Z}, \sin(p)=\sin(q)$' -where $\sin(x)$ is the power series representation-is undecidable without using the properties of $\pi$. If so this is a truly wonderful conjecture and I would like to be provided with a proof. Until then, I insist that methods for analyzing infinite series from analysis should suffice to show that the proposition is false. </p>
<hr>
<p><strong>Note:</strong> In a previous version of this post the question said "bijective" instead of "injective". Some of the answers below have answered the first version of this post.</p>
| 5xum | 112,884 | <p>It is not bijective because it is not surjective. There is no $x\in\mathbb N$ such that $\sin(x)=2$.</p>
<p>However, it is true that $\sin(x)=0$ for only one value of $x\in\mathbb N$. This is because </p>
<p>$$\forall x\in \mathbb R:\sin x = 0\iff x=k\pi$$</p>
<p>for some $k\in\mathbb N$, and $k\pi\in\mathbb N\iff k=0$. Indeed, you can see this because if there exists some $\mathbb N\ni k\neq0$ such that $k\pi\in\mathbb N$, then $k\pi=m$ for some $m\in\mathbb N$, and this means that $$\pi=\frac mk$$</p>
<p>for two integers $m,k$.</p>
<p>In other words, this means that $\pi$ is rational, something we know is not true.</p>
|
1,814,216 | <p>I was trying to show that $\sin(x)$ is non-zero for integers $x$ other than zero and I thought that this result might emerge as a corollary if I managed to show that the result in question is true. </p>
<p>I think it's possible to demonstrate this by looking at the power series expansion of $\sin(x)$ and assuming that we don't know anything about the existence of $\pi$. </p>
<p>All of the answers below insist that the proposition '$\exists p,q \in \mathbb{Z}, \sin(p)=\sin(q)$' -where $\sin(x)$ is the power series representation-is undecidable without using the properties of $\pi$. If so this is a truly wonderful conjecture and I would like to be provided with a proof. Until then, I insist that methods for analyzing infinite series from analysis should suffice to show that the proposition is false. </p>
<hr>
<p><strong>Note:</strong> In a previous version of this post the question said "bijective" instead of "injective". Some of the answers below have answered the first version of this post.</p>
| Community | -1 | <p>Bijective implies surjective and injective. $\sin: \Bbb{N} \to \Bbb{R}$ is not surjective since $\sin(x) = 1 \implies x = \pi/2 + 2\pi n \notin \Bbb{N}$.</p>
|
1,814,216 | <p>I was trying to show that $\sin(x)$ is non-zero for integers $x$ other than zero and I thought that this result might emerge as a corollary if I managed to show that the result in question is true. </p>
<p>I think it's possible to demonstrate this by looking at the power series expansion of $\sin(x)$ and assuming that we don't know anything about the existence of $\pi$. </p>
<p>All of the answers below insist that the proposition '$\exists p,q \in \mathbb{Z}, \sin(p)=\sin(q)$' -where $\sin(x)$ is the power series representation-is undecidable without using the properties of $\pi$. If so this is a truly wonderful conjecture and I would like to be provided with a proof. Until then, I insist that methods for analyzing infinite series from analysis should suffice to show that the proposition is false. </p>
<hr>
<p><strong>Note:</strong> In a previous version of this post the question said "bijective" instead of "injective". Some of the answers below have answered the first version of this post.</p>
| Mariano Suárez-Álvarez | 274 | <p>Suppose that we define the function $\def\RR{\mathbb R}f:\mathbb R\to\mathbb R$ putting $$f(x)=\sum_{n=0}^{\infty} \frac{(-1)^nx^{2n+1}}{(2n+1)!}$$ for each $x\in\mathbb R$, after checking that the power series converges for all real $x$. It is easy to compute derivatives of functions defined by power series, and therefore it is trivial to show that $$f''(x)=-f(x) \tag{1}$$ for all $x\in\RR$. We have $$\frac{d}{dx}(f(x)^2+f'(x)^2) = 2f'(x)(f''(x)+f(x))=0$$ for all $x\in\RR$, so $f(x)^2+f'(x)^2$ is a <em>constant</em> function. Evaluating it at $0$, we see at once that $$f(x)^2+f'(x)^2=1 \tag{2}$$ for all $x\in\RR$.</p>
<p>Equation (1) tells us that $f$ and $f'$ are solutions to the linear differential equation $$u''+u=0.$$ Their Wronskian $$\begin{vmatrix}f&f'\\f'&f''\end{vmatrix}=-(f^2+f'^2)$$ is nowhere zero, according to (2), so $f$ and $f'$ are linearly independent. Since the differential equation is of order $2$, we see that $\{f,f'\}$ is a basis of its solutions.</p>
<p>Now fix $y\in\RR$ and consider the functions $g(x)=f(x+y)$ and $h(x)=f'(x)f(y)+f(x)f'(y)$. Using (1) we find at once that they are both solutions to our differential equation, and clearly $g(0)=h(0)$ and $g'(0)=h'(0)$, so the uniqueness theorem for solutions to initial value problems tells us that $g$ and $h$ are the same function, that is, that $$f(x+y)=f'(x)f(y)+f(x)f'(y) \tag{3}$$ for all $x$ and, since $y$ was arbitrary, for all $y$. Computing the derivative with respect to $y$ in this equation and using (1), we find that also $$f'(x+y)=f'(x)f'(y)-f(x)f(y) \tag{4}$$ for all $x$ and all $y$.</p>
<p>I claim that</p>
<blockquote>
<p>there exists a $p>0$ such that $f'(p)=0$.</p>
</blockquote>
<p>Once we have checked this, we may compute, using (3) and (4), that
\begin{align}
f(x+2p) &= f'(x+p)f(p)+f(x+p)\underbrace{f'(p)}_{=0}\\
&=(f'(x)\underbrace{f'(p)}_{=0}-f(x)f(p))f(p) \\
&= -f(x)f(p)^2.
\end{align}
In view of (2) and our choice of $p$, we have $f(p)\in\{\pm1\}$, and therefore $f(p)^2=1$ and we have proved that $$f(x+2p)=-f(x)$$ for all $x\in\RR$. Applying this twice gives $f(x+4p)=f(x)$, that is, $f$ is a periodic function and $4p$ is one of its periods.</p>
<p>Let us prove our claim. To do so, suppose it is false, so that $f'(x)\neq0$ for all $x\in(0,+\infty)$. Since $f'(0)=1$ and $f'$ is continuous on $[0,+\infty)$, we see that in fact $f'(x)>0$ for all $x\in[0,+\infty)$ and, therefore, that $f$ is strictly increasing on $[0,+\infty)$. In view of (2), we have $f(x)\leq1$ for all $x\in[0,+\infty)$, so we know that the limit $\alpha=\lim_{x\to\infty}f(x)$ exists and is an element of $[0,1]$. As $f'(x)^2=1-f(x)^2$, $f'(x)^2$ converges to $1-\alpha^2$ as $x\to\infty$ and, since $f'(x)$ is positive on $[0,+\infty)$, we see that $\lim_{x\to\infty}f'(x)$ is the positive square root $\beta=\sqrt{1-\alpha^2}$.</p>
<p>Pick any $y\in\RR$. For all $x\in\RR$ we have that
$$f(x+y)=f'(x)f(y)+f(x)f'(y).$$ Taking limits on both sides of this equality as $x\to\infty$ we find that $$\alpha=\beta f(y)+\alpha f'(y).$$ This holds for all $y$, so we may differentiate it with respect to $y$, and we see that $$0=\beta f'(y)+\alpha f''(y)=\beta f'(y)-\alpha f(y).$$ This is absurd, as $f$ and $f'$ are linearly independent functions.</p>
<p>Let now $P$ be the set of positive real numbers $p$ such that $2p$ is a period of $f$. We have shown that $P$ is not empty: if $p_0$ is a first positive zero of $f'$, then $4p_0$ is a period, so $2p_0\in P$. Let $\pi=\inf P$. If $\pi=0$, there exists a sequence $(p_i)_{i\geq1}$ of elements of $P$ converging to $0$, and then $$1=f'(0)=\lim_{i\to\infty}\frac{f(2p_i)-f(0)}{2p_i}=0.$$ Therefore $\pi$ is a positive number. Now we may use, for example, the <a href="https://en.wikipedia.org/wiki/Proof_that_%CF%80_is_irrational#Bourbaki.27s_proof" rel="nofollow noreferrer">argument of Bourbaki</a> to show that $\pi$ is irrational. That works because our arguments above easily show that $f(x)>0$ if $x\in(0,\pi)$, which is one of the things needed, and we can do the required integration by parts using (1).</p>
<hr>
<p>All this shows in fact that </p>
<blockquote>
<p>if $f:\RR\to\RR$ is a twice differentiable function such that $f''=-f$ on $\RR$, $f(0)=0$ and $f'(0)=1$, then $f$ is periodic and its minimal period is irrational.</p>
</blockquote>
<p>What you want follows from this.</p>
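<p>As a numerical sanity check of this construction (my addition, not part of the proof; the series is truncated at 40 terms, an arbitrary cutoff): defining $f$ by the partial sums and bisecting for the first positive zero $p$ of $f'$ recovers $2p=\pi$, and one sees directly that $f(x+2p)=-f(x)$, so that $4p$ is a period:</p>

```python
import math

def f(x, terms=40):
    """Partial sum of sum_{n>=0} (-1)^n x^(2n+1) / (2n+1)!."""
    return sum((-1) ** n * x ** (2 * n + 1) / math.factorial(2 * n + 1)
               for n in range(terms))

def fprime(x, terms=40):
    return sum((-1) ** n * x ** (2 * n) / math.factorial(2 * n)
               for n in range(terms))

# Bisect for the first positive zero p of f'; it lies in [1, 2],
# where f' changes sign and is monotone.
lo, hi = 1.0, 2.0
for _ in range(80):
    mid = (lo + hi) / 2
    if fprime(mid) > 0:
        lo = mid
    else:
        hi = mid
p = (lo + hi) / 2

print(2 * p)                       # the number called pi in the argument
print(f(1.0 + 2 * p) + f(1.0))     # f(x + 2p) = -f(x), so this is ~0
```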
|
3,527,004 | <p>As stated in the title, I want <span class="math-container">$f(x)=\frac{1}{x^2}$</span> to be expanded as a series with powers of <span class="math-container">$(x+2)$</span>. </p>
<p>Let <span class="math-container">$u=x+2$</span>. Then <span class="math-container">$f(x)=\frac{1}{x^2}=\frac{1}{(u-2)^2}$</span></p>
<p>Note that <span class="math-container">$$\int \frac{1}{(u-2)^2}du=\int (u-2)^{-2}du=-\frac{1}{u-2} + C$$</span></p>
<p>Therefore, <span class="math-container">$\frac{d}{du} (-\frac{1}{u-2})= \frac{1}{x^2}$</span> and</p>
<p><span class="math-container">$$\frac{d}{du} (-\frac{1}{u-2})= \frac{d}{du} (-\frac{1}{-2(1-\frac{u}{2})})=\frac{d}{du}(\frac{1}{2} \frac{1}{1-\frac{u}{2}})=\frac{d}{du} \Bigg( \frac{1}{2} \sum_{n=0}^\infty \bigg(\frac{u}{2}\bigg)^n\Bigg)$$</span></p>
<p><span class="math-container">$$= \frac{d}{du} \Bigg(\sum_{n=0}^\infty \frac{u^n}{2^{n+1}}\Bigg)= \frac{d}{dx} \Bigg(\sum_{n=0}^\infty \frac{(x+2)^n}{2^{n+1}}\Bigg)= \sum_{n=0}^\infty \frac{d}{dx} \bigg(\frac{(x+2)^n}{2^{n+1}}\bigg)=$$</span></p>
<p><span class="math-container">$$\sum_{n=0}^\infty \frac{n}{2^{n+1}} (x+2)^{n-1}$$</span></p>
<p>From this we can conclude that </p>
<p><span class="math-container">$$f(x)=\frac{1}{x^2}=\sum_{n=0}^\infty \frac{n}{2^{n+1}} (x+2)^{n-1}$$</span></p>
<p>Is this solution correct?</p>
| Axion004 | 258,202 | <p>Your answer is extremely close to the correct derivation. The error occurs when you write</p>
<blockquote>
<p><span class="math-container">$$\frac{d}{dx} \Bigg(\sum_{n=0}^\infty \frac{(x+2)^n}{2^{n+1}}\Bigg)= \sum_{n=0}^\infty \frac{d}{dx} \bigg(\frac{(x+2)^n}{2^{n+1}}\bigg)=\sum_{\color{red}{n=0}}^\infty \frac{n}{2^{n+1}} (x+2)^{n-1}$$</span></p>
</blockquote>
<p>where the last equality should have an index starting from <span class="math-container">$n=1$</span>. The correct derivation is</p>
<p><span class="math-container">\begin{align}\frac{d}{dx} \Bigg(\sum_{n=0}^\infty \frac{(x+2)^n}{2^{n+1}}\Bigg)&=\sum_{\color{blue}{n=1}}^\infty \frac{n}{2^{n+1}} (x+2)^{n-1}\\&=\sum_{\color{blue}{n=0}}^\infty \frac{\color{blue}{(n+1)}}{2^{\color{blue}{(n+1)+1}}} (x+2)^{\color{blue}{(n+1)-1}}\\&=\sum_{n=0}^\infty \frac{(n+1)}{2^{n+2}} (x+2)^{n}\end{align}</span>
You could also find the power series through giobrach's answer, which might be the most straightforward technique.</p>
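<p>A quick numerical check of the corrected expansion (my addition; the cutoff $N=200$ is arbitrary, and the series only converges for $|x+2|<2$):</p>

```python
def series(x, N=200):
    """Partial sum of sum_{n=0}^{N} (n+1)/2^(n+2) * (x+2)^n."""
    u = x + 2
    return sum((n + 1) / 2 ** (n + 2) * u ** n for n in range(N + 1))

for x in (-2.9, -2.0, -1.5):
    print(x, series(x), 1 / x ** 2)   # the two columns agree
```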
|
2,702,726 | <p>Find the absolute minimum and maximum values of,</p>
<p>$$f(x) = 2 \sin(x) + \cos^2 (x) \text{ on } [0, 2\pi]$$</p>
<p>What I did so far is</p>
<p>$$f'(x) = 2\cos(x) -2 \cos(x) \sin(x)$$</p>
<p>Could someone please help me get started?</p>
| farruhota | 425,072 | <p>Without differentiation:
$$f(x)=-(1-\sin x)^2+2; \\
f\left(\frac{3\pi}{2}\right)=-2 (\text{min}); \\
f\left(\frac{\pi}{2}\right)=2 (\text{max}).$$</p>
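<p>A brute-force confirmation of both the rewriting and the extremes (my addition; the grid size is arbitrary):</p>

```python
import math

f = lambda x: 2 * math.sin(x) + math.cos(x) ** 2
g = lambda x: -(1 - math.sin(x)) ** 2 + 2     # the rewritten form

xs = [2 * math.pi * k / 100000 for k in range(100001)]
assert all(abs(f(x) - g(x)) < 1e-12 for x in xs)   # same function
vals = [f(x) for x in xs]
print(min(vals), max(vals))   # ≈ -2 and 2
```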
|
850,852 | <p>This one comes from Gilbert Strang's Linear Algebra. Pick any numbers $x+y+z = 0$. Find an angle between $\mathbf v=(x,y,z)$ and $\mathbf w=(z,x,y)$. </p>
<p>Explain why $$\dfrac{\bf v\cdot w}{\bf \Vert v\Vert \cdot\Vert w\Vert}$$ is always $-0.5$. </p>
| JimmyK4542 | 155,509 | <p>If $x+y+z = 0$, then $0 = (x+y+z)^2 = x^2+y^2+z^2+2(xy+yz+zx) = \|v\| \cdot \|w\| + 2 v \cdot w$. </p>
<p>Rearranging gives the result $\dfrac{v \cdot w}{\|v\| \cdot \|w\|} = -\dfrac{1}{2}$.</p>
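<p>A numerical spot check of this identity (my addition; the sample count and seed are arbitrary):</p>

```python
import random

random.seed(0)
for _ in range(1000):
    x, y = random.uniform(-10, 10), random.uniform(-10, 10)
    z = -x - y                                   # enforce x + y + z = 0
    v, w = (x, y, z), (z, x, y)
    dot = sum(a * b for a, b in zip(v, w))
    norm_sq = sum(a * a for a in v)              # note ||v|| = ||w||
    assert abs(dot / norm_sq + 0.5) < 1e-12
print("cos(angle) = -1/2, i.e. the angle is always 120 degrees")
```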
|
373,510 | <p>I was wondering if there is some generalization of the concept metric to take positive and negative and zero values, such that it can induce an order on the metric space? If there already exists such a concept, what is its name?</p>
<p>For example on $\forall x,y \in \mathbb R$, we can use difference $x-y$ as such generalization of metric.</p>
<p>Thanks and regards!</p>
| Ittay Weiss | 30,953 | <p>You will have to lose some of the other axioms of a metric space as well since the requirement that $d(x,y)\ge 0$ in a metric space is actually a consequence of the other axioms: $0=d(x,x)\le d(x,y)+d(y,x)=2\cdot d(x,y)$, thus $d(x,y)\ge 0$. This proof uses the requirements that $d(x,x)=0$, the triangle inequality, and symmetry. </p>
<p>There are notions of generalizations of metric spaces that weaken these axioms. I think the one closest to what you might be thinking about is partial metric spaces (where $d(x,x)=0$ is dropped). </p>
|
373,510 | <p>I was wondering if there is some generalization of the concept metric to take positive and negative and zero values, such that it can induce an order on the metric space? If there already exists such a concept, what is its name?</p>
<p>For example, for all $x,y \in \mathbb R$ we can use the difference $x-y$ as such a generalization of a metric.</p>
<p>Thanks and regards!</p>
| Eddy | 136,222 | <p>How about a distance defined as relative to an origin? This origin being a point on 1d spaces.</p>
<p>For higher dimensional spaces I'm not sure what would be more interesting, either a finite set of n points or a n-1 dimensional subspace given an n dimensional space (as in 2 points or a line, for 2d planes, a plane for 3d spaces and so on.)</p>
<p>I'm actually interested in such a metric on a finite space: I'm trying to implement Kademlia which defines distance as a XOR on the set of 160 bit long symbols and would like to use ordered sublists to manage the buckets ;)</p>
|
1,400,399 | <p>Here is an indefinite integral that is similar to an integral I wanna propose for a contest. Apart from
using CAS, do you see any very easy way of calculating it?</p>
<p>$$\int \frac{1+2x +3 x^2}{\left(2+x+x^2+x^3\right) \sqrt{1+\sqrt{2+x+x^2+x^3}}} \, dx$$</p>
<p><strong>EDIT:</strong> It's a part from the generalization </p>
<p>$$\int \frac{1+2x +3 x^2+\cdots n x^{n-1}}{\left(2+x+x^2+\cdots+ x^n\right) \sqrt{1\pm\sqrt{2+x+x^2+\cdots +x^n}}} \, dx$$</p>
<p><strong>Supplementary question:</strong> How would you calculate the following integral using the generalization above? Would you prefer another way?</p>
<p>$$\int_0^{1/2} \frac{1}{\left(x^2-3 x+2\right)\sqrt{\sqrt{\frac{x-2}{x-1}}+1} } \, dx$$</p>
<p>As a note, the generalization like the one you see above and slightly modified versions can be wisely used for calculating very hard integrals.</p>
| Harish Chandra Rajpoot | 210,295 | <p>Let $$2+x+x^2+x^3=t^2\implies (1+2x+3x^2)dx=2tdt$$ $$\int \frac{2tdt}{t^2\sqrt{1+t}}$$ $$=2\int \frac{dt}{t\sqrt{1+t}}$$
Let $1+t=u^2\implies dt=2u\,du$ (a fresh variable $u$, to avoid clashing with the original $x$)
$$=2\int \frac{2u\,du}{(u^2-1)u}$$
$$=4\int \frac{du}{u^2-1}$$
$$=4\int \frac{du}{(u-1)(u+1)}$$
$$=2\int \left(\frac{1}{u-1}-\frac{1}{u+1}\right)du$$
$$=2\left(\ln|u-1|-\ln|u+1|\right)+c$$</p>
<p>$$=2\ln\left|\frac{u-1}{u+1}\right|+c$$
$$=2\ln\left|\frac{\sqrt{1+t}-1}{\sqrt{1+t}+1}\right|+c$$
$$=2\ln\left|\frac{\sqrt{1+\sqrt{2+x+x^2+x^3}}-1}{\sqrt{1+\sqrt{2+x+x^2+x^3}}+1}\right|+c$$</p>
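<p>As a sanity check I added (not part of the answer): numerically differentiating the antiderivative $F(x)=2\ln\left|\frac{\sqrt{1+t}-1}{\sqrt{1+t}+1}\right|$, with $t=\sqrt{2+x+x^2+x^3}$, recovers the integrand:</p>

```python
import math

g = lambda x: 2 + x + x**2 + x**3
integrand = lambda x: (1 + 2*x + 3*x**2) / (g(x) * math.sqrt(1 + math.sqrt(g(x))))

def F(x):
    u = math.sqrt(1 + math.sqrt(g(x)))  # u = sqrt(1 + t)
    return 2 * math.log(abs((u - 1) / (u + 1)))

x, h = 1.0, 1e-6
numeric = (F(x + h) - F(x - h)) / (2 * h)  # central difference
print(abs(numeric - integrand(x)))  # ~0, up to finite-difference error
```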
|
360,293 | <p>Calculate $\sum_{n=2}^\infty ({n^4+2n^3-3n^2-8n-3\over(n+2)!})$</p>
<p>I thought about maybe breaking the polynomial in two different fractions in order to make the sum more manageable and reduce it to something similar to $\lim_{n\to\infty}(1+{1\over1!}+{1\over2!}+...+{1\over n!})$, but didn't manage</p>
| Mhenni Benghorbal | 35,472 | <p>First step, we find the Taylor series of $x^4+2x^3-3x^2-8x-3$ at the point $x=-2$ and then use it to write</p>
<p>$$ n^4+2n^3-3n^2-8n-3 = 1-4\, \left( n+2 \right) +9\, \left( n+2 \right)^{2}-6\, \left( n+2
\right) ^{3}+ \left( n+2 \right) ^{4}.$$</p>
<p>Using the above expansion and shifting the index of summation ($n \longleftrightarrow n-2$ ), we have
$$ \sum_{n=2}^\infty {n^4+2n^3-3n^2-8n-3\over(n+2)!}= \sum_{n=2}^\infty {1-4\, \left( n+2 \right) +9\, \left( n+2 \right)^{2}-6\, \left( n+2\right) ^{3}+ \left( n+2 \right)^{4}\over(n+2)!} $$</p>
<p>$$ = \sum_{n=4}^\infty {1-4\, n +9\, n^{2}-6\, n^{3}+ n^{4} \over n! }+\sum_{n=0}^3 {1-4\, n +9\, n^{2}-6\, n^{3}+ n^{4} \over n! }$$</p>
<p>$$ -\sum_{n=0}^3 {1-4\, n +9\, n^{2}-6\, n^{3}+ n^{4} \over n! }$$ </p>
<p>$$= c+ \sum_{n=0}^\infty {1-4\, n +9\, n^{2}-6\, n^{3}+ n^{4} \over n! } $$</p>
<p>$$ = c+e(1-4B_1 + 9 B_2 -6B_3 +B_4), $$</p>
<p>where $B_n$ are the <a href="http://en.wikipedia.org/wiki/Bell_number" rel="nofollow">Bell numbers</a></p>
<p>$$ B_n = \frac{1}{e}\sum_{k=0}^{\infty} \frac{k^n}{k!}, $$</p>
<p>and $c$ is given by</p>
<p>$$ c=-\sum_{n=0}^3 {1-4\, n +9\, n^{2}-6\, n^{3}+ n^{4} \over n! }. $$</p>
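<p>A numeric check I added: with $B_1=1$, $B_2=2$, $B_3=5$, $B_4=15$, the bracket $1-4B_1+9B_2-6B_3+B_4$ vanishes, so the sum reduces to $c$, which evaluates to $4/3$, in agreement with direct partial sums:</p>

```python
from math import factorial

term = lambda n: (n**4 + 2*n**3 - 3*n**2 - 8*n - 3) / factorial(n + 2)
series = sum(term(n) for n in range(2, 60))          # direct partial sum

bracket = 1 - 4*1 + 9*2 - 6*5 + 1*15                 # 1 - 4B1 + 9B2 - 6B3 + B4
c = -sum((1 - 4*n + 9*n**2 - 6*n**3 + n**4) / factorial(n) for n in range(4))
print(bracket, c, series)  # 0, then both values are 1.3333... = 4/3
```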
|
74,108 | <p>Background: I was trying to convert a MATLAB code (fluid simulation, SPH method) into a <em>Mathematica</em> one, but the speed difference is huge.</p>
<p>MATLAB code:</p>
<pre class="lang-matlab prettyprint-override"><code>function s = initializeDensity2(s)
nTotal = s.params.nTotal; %# particles
h = s.params.h;
h2Sq = (2*h)^2;
for ind1 = 1:nTotal %loop over all receiving particles; one at a time
%particle i is the receiving particle; the host particle
%particle j is the sending particle
xi = s.particles.pos(ind1,1);
yi = s.particles.pos(ind1,2);
xj = s.particles.pos(:,1); %all others
yj = s.particles.pos(:,2); %all others
mj = s.particles.mass; %all others
rSq = (xi-xj).^2+(yi-yj).^2;
%Boolean mask returns values where r^2 < (2h)^2
mask1 = rSq<h2Sq;
rSq = rSq(mask1);
mTemp = mj(mask1);
densityTemp = mTemp.*liuQuartic(sqrt(rSq),h);
s.particles.density(ind1) = sum(densityTemp);
end
</code></pre>
<p>And the corresponding <em>Mathematica</em> code:</p>
<pre><code>Needs["HierarchicalClustering`"]
computeDistance[pos_] :=
DistanceMatrix[pos, DistanceFunction -> EuclideanDistance];
initializeDensity[distance_] :=
uniMass*Total/@(liuQuartic[#,h]&/@Pick[distance,Boole[Map[#<2h&,distance,{2}]],1])
initializeDensity[computeDistance[totalPos]]
</code></pre>
<p>The data are coordinates of 1119 points, in the form of <code>{{x1,y1},{x2,y2}...}</code>, stored in <code>s.particles.pos</code> and <code>totalPos</code> respectively. And <code>liuQuartic</code> is just a polynomial function. The complete MATLAB code is way more than this, but it can run about 160 complete time steps in 60 seconds, whereas the <em>Mathematica</em> code listed above alone takes about 3 seconds to run. I don't know why there is such huge speed difference. Any thoughts is appreciated. Thanks.</p>
<p>Edit:</p>
<p>The <code>liuQuartic</code> is defined as</p>
<pre><code>liuQuartic[r_,h_]:=15/(7Pi*h^2) (2/3-(9r^2)/(8h^2)+(19r^3)/(24h^3)-(5r^4)/(32h^4))
</code></pre>
<p>and example data can be obtained by</p>
<pre><code>h=2*10^-3;conWidth=0.4;conHeight=0.16;totalStep=6000;uniDensity=1000;uniMass=1000*Pi*h^2;refDensity=1400;gamma=7;vf=0.07;eta=0.01;cs=vf/eta;B=refDensity*cs^2/gamma;gravity=-9.8;mu=0.02;beta=0.15;dt=0.00005;epsilon=0.5;
iniFreePts=Block[{},Table[{-conWidth/3+i,1.95h+j},{i,10h,conWidth/3-2h,1.5h},{j,0,0.05,1.5h}]//Flatten[#,1]&];
leftWallIniPts=Block[{x,y},y=Table[i,{i,conHeight/2-0.5h,0.2h,-0.5h}];x=ConstantArray[-conWidth/3,Length[y]];Thread[List[x,y]]];
botWallIniPts=Block[{x,y},x=Table[i,{i,-conWidth/3,-0.4h,h}];y=ConstantArray[0,Length[x]];Thread[List[x,y]]];
incWallIniPts=Block[{x,y},Table[{i,0.2125i},{i,0,(2conWidth)/3,h}]];
rightWallIniPts=Block[{x,y},y=Table[i,{i,Last[incWallIniPts][[2]]+h,conHeight/2,h}];x=ConstantArray[Last[incWallIniPts][[1]],Length[y]];Thread[List[x,y]]];
topWallIniPts=Block[{x,y},x=Table[i,{i,-conWidth/3+0.7h,(2conWidth)/3-0.7h,h}];y=ConstantArray[conHeight/2,Length[x]];Thread[List[x,y]]];
freePos = iniFreePts;
wallPos = leftWallIniPts~Join~botWallIniPts~Join~incWallIniPts~Join~rightWallIniPts~Join~topWallIniPts;
totalPos = freePos~Join~wallPos;
</code></pre>
<p>where <code>conWidth=0.4</code>, <code>conHeight=0.16</code> and <code>h=0.002</code></p>
| xzczd | 1,871 | <p>Modify the calculation order a little to avoid ragged array and then make use of <code>Listable</code> and <code>Compile</code>:</p>
<pre><code>computeDistance[pos_] := DistanceMatrix[pos, DistanceFunction -> EuclideanDistance]
liuQuartic = {r, h} \[Function]
15/(7 Pi*h^2) (2/3 - (9 r^2)/(8 h^2) + (19 r^3)/(24 h^3) - (5 r^4)/(32 h^4));
initializeDensity =
With[{l = liuQuartic, m = uniMass},
Compile[{{d, _Real, 2}, {h, _Real}}, m Total@Transpose[l[d, h] UnitStep[2 h - d]]]];
new = initializeDensity[computeDistance[N@totalPos], h]; // AbsoluteTiming
</code></pre>
<p>Tested with your newly added sample data, my code ran for <em>0.390000 s</em>, while the original code ran for <em>4.851600 s</em> and <a href="https://mathematica.stackexchange.com/users/4678/ybeltukov">ybeltukov</a>'s code ran for <em>0.813200 s</em> on my machine.</p>
<p>If you have a C compiler installed, the following code</p>
<pre><code>computeDistance[pos_] := DistanceMatrix[pos, DistanceFunction -> EuclideanDistance]
liuQuartic = {r, h} \[Function]
15/(7 Pi*h^2) (2/3 - (9 r^2)/(8 h^2) + (19 r^3)/(24 h^3) - (5 r^4)/(32 h^4));
initializeDensity =
With[{l = liuQuartic, m = uniMass, g = Compile`GetElement},
Compile[{{d, _Real, 2}, {h, _Real}},
Module[{b1, b2}, {b1, b2} = Dimensions@d;
m Table[Sum[If[2 h > g[d, i, j], l[g[d, i, j], h], 0.], {j, b2}], {i, b1}]],
CompilationTarget -> "C", RuntimeOptions -> "Speed"]];
</code></pre>
<p>will give you a <strong>2X</strong> speedup once again. Notice the C compiler is necessary, see <a href="https://mathematica.stackexchange.com/q/61509/1871">this post</a> for some more details.</p>
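<p>For readers without either system, here is a NumPy sketch (my translation, not code from the question or answer) of the same mask-and-sum density computation; <code>liu_quartic</code> mirrors the kernel defined in the question:</p>

```python
import numpy as np

def liu_quartic(r, h):
    # kernel polynomial from the question
    return 15 / (7 * np.pi * h**2) * (
        2/3 - 9*r**2 / (8*h**2) + 19*r**3 / (24*h**3) - 5*r**4 / (32*h**4))

def initialize_density(pos, h, mass):
    # pos: (n, 2) array of particle positions
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)  # (n, n)
    w = np.where(d < 2 * h, liu_quartic(d, h), 0.0)                 # mask r < 2h
    return mass * w.sum(axis=1)
```

<p>As in the MATLAB loop, each particle sums kernel contributions from all neighbours within $2h$, itself included; the full pairwise distance matrix is the dominant cost in all of these versions.</p>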
|
770,430 | <p>How to find the value of $X$?</p>
<p>If $X= \frac{1}{1001}+\frac{1}{1002}+\frac{1}{1003}+\cdots+\frac{1}{3001}$</p>
| Hakim | 85,969 | <p>The exact answer is $$\begin{align}
H_{3001}-H_{1000}&=(\gamma+\psi_0(3002))-(\gamma+\psi_0(1001))\\ \,\\&=\dfrac{\Gamma'(3002)\Gamma(1001)-\Gamma'(1001)\Gamma(3002)}{\Gamma(1001)\Gamma(3002)}\\\,\,\\
&\approx 1.09861225...
\end{align}$$ where $H_n$ is the $n$-th Harmonic number defined as: $$H_n:=\sum_{k=1}^n \dfrac1k$$ which can be expressed analytically by the formula: $H_n=\gamma+\psi_0(n+1)$ where $\gamma$ is the Euler-Mascheroni constant and $\psi_0$ is the Digamma function defined as: $$\psi_0(n):=\dfrac{\mathrm d}{\mathrm dn}\ln \big(\Gamma (n)\big)=\dfrac{\Gamma'(n)}{\Gamma(n)}$$ where $\Gamma(n)$ is the gamma function which is equal to $\Gamma(n)=(n-1)!$.</p>
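<p>Numerically (a check I added), the sum is extremely close to $\ln 3$, as one would expect from $H_{3001}-H_{1000}\approx\ln(3001/1000)$:</p>

```python
import math

X = sum(1/k for k in range(1001, 3002))  # 1/1001 + ... + 1/3001
print(X, math.log(3))  # both ≈ 1.0986123
```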
|
572,125 | <p>How to show this function's discontinuity?<br></p>
<p>$ f(x,y) = \left\{
\begin{array}{l l}
\frac{xy}{x^2+y^2} & \quad , \quad(x,y)\neq(0,0)\\
0 & \quad , \quad(x,y)=(0,0)
\end{array} \right.$</p>
| user642796 | 8,348 | <p>Yes. Recall that given a cardinal $\kappa$ the <em>cofinality</em> of $\kappa$, $\mathrm{cf} ( \kappa )$, is the least cardinal $\mu$ for which there is an unbounded (cofinal) function $\mu \to \kappa$. Regularity means that $\mathrm{cf} ( \kappa ) = \kappa$, and all successor cardinals are regular.</p>
<hr>
<p><strong><em>Added to indoctrinate others into Asaf's anti-Choice programme</em></strong></p>
<p>Note, also, that there is a fair bit of Choice involved in the above. Without the Axiom of Choice it is possible that that $\aleph_2$ is <em>singular</em>; in particular it could have cofinality $\omega$, meaning that <em>there is</em> an unbounded function $\aleph_0 \to \aleph_2$. The answers to <a href="https://math.stackexchange.com/q/54761/8348">this old math.SE question</a> and well as <a href="https://mathoverflow.net/q/147090/13653">this slightly more recent MO question</a> contain a wealth of information on the broader topic.</p>
|
185,478 | <blockquote>
<p>How do I solve for $x$ in $$x\left(x^3+\sin(x)\cos(x)\right)-\big(\sin(x)\big)^2=0$$</p>
</blockquote>
<p>I hate when I find something that looks simple, that I should know how to do, but it holds me up. </p>
<p>I could come up with an approximate answer using Taylor's, but how do I solve this? </p>
<p>(btw, WolframAlpha tells me the answer, but I want to know how it's solved.)</p>
| user 1591719 | 32,016 | <p>Let's do it in a simple way. As N. S. noticed, the function is even. Then it's enough to analyze things on the positive real axis:</p>
<p>$$x\left(x^3+\sin(x)\cos(x)\right)-\big(\sin(x)\big)^2\geq x\left(x^3+x-x^3\right)-\big(\sin(x)\big)^2 = x^2-\big(\sin(x)\big)^2\geq 0,$$
since $x^2 \geq \big(\sin(x)\big)^2$ (as $|\sin(x)|\leq |x|$).</p>
<p>Above I used the fact that $\sin(x)\cos(x)\geq x-x^3$ for $x \ge 0$ for that we may use the following proof (or enter <a href="https://math.stackexchange.com/questions/185773/prove-that-sinx-cosx-geq-x-x3">here</a> for more nice proofs):</p>
<p>Let's consider </p>
<p>$$f(x) = \sin(x) \cos(x)-x+x^3$$
then
$$f'(x) = 3 x^2-2\sin^2(x)\tag1$$
$$x\ge \sin(x)\tag2$$
From $(1)$ and $(2)$ we immediately notice that $f'(x)\ge0$ and taking into account that $f(0)=0$ we may conclude that the inequality holds.
The equality is obviously reached only when $x=0$.<br>
Hence the only solution is obtained for $x=0$.</p>
<p>Q.E.D.</p>
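<p>A grid check I added confirms nonnegativity, with equality only at $x=0$ (near the origin the expression behaves like $\tfrac{2}{3}x^4$, by expanding $\sin$ and $\cos$):</p>

```python
import math

g = lambda x: x * (x**3 + math.sin(x) * math.cos(x)) - math.sin(x)**2
vals = [g(-5 + 10 * k / 200000) for k in range(200001)]
print(min(vals), g(0.0))  # minimum is 0.0, attained at x = 0
```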
|
49,544 | <p>In reading section 2.2, page 14 of <a href="http://www.gaussianprocess.org/gpml/chapters/" rel="nofollow noreferrer">this book</a>, I came across the term "singular distribution".</p>
<p>Apparently, a multivariate Gaussian distribution is singular if and only if its covariance matrix is singular. One way (the only way?) the covariance matrix can be singular is if one of the diagonal entries is zero. A Gaussian distribution approaches a Dirac delta as the variance goes to zero. Is the Dirac distribution an example of a singular distribution? I don't think so, since <a href="http://en.wikipedia.org/wiki/Singular_distribution" rel="nofollow noreferrer">Wikipedia</a> says the Lebesgue integral of the density function of a singular distribution is zero.</p>
| Tim van Beek | 7,556 | <p>I find only the expression "this Gaussian is singular" on page 14 of your reference, but not the definition of "singular distribution". </p>
<p>But to answer your question:</p>
<p>The delta distribution is not a singular distribution; it is a discrete probability distribution. It does not have a Radon-Nikodym density with respect to the Lebesgue measure, because the Lebesgue measure of a single point is zero, and the delta distribution is concentrated on a single point.</p>
<p>Don't get confused if people write stuff like
$$
\int_{\mathbb{R}} \delta_0(x) d x = 1
$$
This is not correct in the strict sense. Instead, the "density function" of the delta distribution concentrated on zero - which is not a density in the sense of Radon-Nikodym - would be
$$
f(x) = 0 \; \text{for} \; x \neq 0
$$
and
$$
f(0) = \infty
$$
and therefore we would have
$$
\int_{\mathbb{R}} f(x) d x = 0
$$</p>
<p>But: For a discrete probability distribution, it is possible to name an at most countable set of points such that each point can be assigned a finite probability, such that the probability of any set is equal to the sum of the probabilities of the points it does contain.</p>
<p>This is not possible for a singular probability distribution like the Cantor distribution. The Cantor distribution is not concentrated on a countable set of points. Therefore the terms "singular distribution" and "discrete probability distribution" are different, and the delta distribution is a discrete one, not a singular one.</p>
|
396,088 | <p>Let <span class="math-container">$K$</span> be a field and let <span class="math-container">$\Lambda_{1}$</span> and <span class="math-container">$\Lambda_{2}$</span> be two finite-dimensional <span class="math-container">$K$</span>-algebras with Jacobson radicals <span class="math-container">$J_{1}$</span> and <span class="math-container">$J_{2}$</span> respectively. How to show or where can I find the proof of the following statement?</p>
<blockquote>
<p><span class="math-container">$\Lambda_{1} / J_{1} \otimes_{K} \Lambda_{2} / J_{2}$</span> is always semisimple if <span class="math-container">$K$</span> is perfect or if <span class="math-container">$\Lambda_{1}$</span> and <span class="math-container">$\Lambda_{2}$</span> are path algebras of quivers factored by admissible ideals.</p>
</blockquote>
<p>Thank you.</p>
| Mare | 61,949 | <p>We have <span class="math-container">$gldim A \otimes_K B= gldim A + gldim B$</span> if A and B are seperable algebras over the field <span class="math-container">$K$</span>, see <a href="https://www.cambridge.org/core/journals/nagoya-mathematical-journal/article/on-the-dimension-of-modules-and-algebras-viii-dimension-of-tensor-products/58116B52E52F0F6165E84AE11284CCF6" rel="noreferrer">https://www.cambridge.org/core/journals/nagoya-mathematical-journal/article/on-the-dimension-of-modules-and-algebras-viii-dimension-of-tensor-products/58116B52E52F0F6165E84AE11284CCF6</a> corollary 18.</p>
<p>Now being semisimple for finite dimensional algebras is equivalent to global dimension zero.</p>
|
3,264,333 | <p>I am working on my scholarship exam practice and not sure how to begin. Please assume math knowledge at high school or pre-university level.</p>
<blockquote>
<p>Let <span class="math-container">$a$</span> be a real constant. If the constant term of <span class="math-container">$(x^3 + \frac{a}{x^2})^5$</span> is equal to <span class="math-container">$-270$</span>, then <span class="math-container">$a=$</span>......</p>
</blockquote>
<p>Could you please give a hint for this question? The answer provided is <span class="math-container">$-3$</span>.</p>
| Dr. Sonnhard Graubner | 175,066 | <p>Hint: <span class="math-container">$$(A+B)^5={A}^{5}+5\,{A}^{4}B+10\,{A}^{3}{B}^{2}+10\,{A}^{2}{B}^{3}+5\,A{B}^{4}+
{B}^{5}
$$</span></p>
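<p>Completing the hint (my addition): with <span class="math-container">$A=x^3$</span> and <span class="math-container">$B=a/x^2$</span>, term <span class="math-container">$k$</span> of the expansion carries <span class="math-container">$x^{15-5k}$</span>, so only <span class="math-container">$k=3$</span> gives a constant, namely <span class="math-container">$\binom{5}{3}a^3$</span>:</p>

```python
from math import comb

# Term k of (x^3 + a/x^2)^5 has x-exponent 3*(5-k) - 2*k = 15 - 5k,
# which vanishes at k = 3; the constant term is C(5,3) * a^3 = 10 a^3.
a = -3
constant_term = comb(5, 3) * a**3
print(constant_term)  # -270, matching the given value, so a = -3
```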
|
19,148 | <p>I always find the strong law of large numbers hard to motivate to students, especially non-mathematicians. The weak law (giving convergence in probability) is so much easier to prove; why is it worth so much trouble to upgrade the conclusion to almost sure convergence?</p>
<p>I think it comes down to not having a good sense of why, practically speaking, a.s. convergence is better than convergence i.p. Sure, I can prove that one implies the other and not conversely, but the counterexamples feel contrived. I understand the advantages of a.s. convergence on a technical level, but not on the level of everyday life.</p>
<p>So my question: how would you explain to, say, an engineer, the significance of having a.s. convergence as opposed to i.p.? Is there a "real-life" example of bad behavior that we're ruling out?</p>
| Gjergji Zaimi | 2,384 | <p><a href="http://terrytao.wordpress.com/2008/06/18/the-strong-law-of-large-numbers/">Here</a> is a nice post of T. Tao on SLLN. In the comments section he is asked a very similar question to which he answers the following: (I hope it's ok to reproduce it here, since it is buried down in the comments)</p>
<blockquote>
<p>Returning specifically to the question of finitary interpretations of the SLLN, these basically have to do with the situation in which one is simultaneously considering multiple averages $\overline{X}_n$ of a single series of empirical samples, as opposed to considering just a single such average (which is basically the situation covered by the WLLN). For instance, if one had some random intensity field of grayscale pixels, and wanted to compare the average intensities at 10 x 10 blocks, 100 x 100 blocks, and 1000 x 1000 blocks, then the SLLN suggests that these intensities would be likely to be simultaneously close to the average intensity. (The WLLN only suggests that each of these spatial averages are individually likely to be close to the average intensity, but does not preclude the possibility that when one considers multiple such spatial averages at once, that a few outlying spatial averages will deviate from the average intensity. In my example with only three different averages, there isn’t much difference here, as the union bound only loses a factor of three at most for the failure probability, but the SLLN begins to show its strength over the WLLN when one is considering a very large number of averages at once.)</p>
</blockquote>
|
19,148 | <p>I always find the strong law of large numbers hard to motivate to students, especially non-mathematicians. The weak law (giving convergence in probability) is so much easier to prove; why is it worth so much trouble to upgrade the conclusion to almost sure convergence?</p>
<p>I think it comes down to not having a good sense of why, practically speaking, a.s. convergence is better than convergence i.p. Sure, I can prove that one implies the other and not conversely, but the counterexamples feel contrived. I understand the advantages of a.s. convergence on a technical level, but not on the level of everyday life.</p>
<p>So my question: how would you explain to, say, an engineer, the significance of having a.s. convergence as opposed to i.p.? Is there a "real-life" example of bad behavior that we're ruling out?</p>
| Erik Davis | 1,228 | <p>I think it is worth noting that even if real world systems are fundamentally finite (in which case the distinction between WLLN and SLLN gets a bit philosophical), history has shown that it is extremely useful to approximate the discrete with the continuous. Thus we consider limit theorems to approximate statistics of large samples, we consider continuous distributions to approximate complicated finite distributions, and we consider continuous stochastic processes in order to approximate finite ones (e.g. Donsker's invariance principle).</p>
<p>The examples of sequences that converge in probability but not a.s. might seem a bit contrived, but then again most engineers seem to allow such philosophical absurdities as "let $X_n$ be an infinite sequence of coin tosses". In this regard, maybe it is best to phrase the distinction between convergence a.s. and convergence in probability in terms that seem more qualitative and less analytic. For example, imagine you were presented a sequence of gambles, and you must take either all of them or none of them. There is a very significant distinction between knowing that your wealth converges a.s. to some deterministic value vs knowing that it converges in probability (to that same value). In the former case, you expect in almost all states of the world that if you play the game that your wealth eventually stabilizes. However, in the case of convergence in probability you could go bankrupt infinitely often. Yikes!</p>
|
2,962,193 | <p><strong>Q</strong>:If <span class="math-container">$2\cos p=x+\frac{1}{x}$</span> and <span class="math-container">$2\cos q=y+\frac{1}{y}$</span> then show that <span class="math-container">$2\cos(mp-nq)$</span> is one of the values of <span class="math-container">$\left( \frac{x^m}{y^n}+\frac{y^n}{x^m} \right)$</span><br><strong>My Approach</strong>:<span class="math-container">$2\cos p=x+\frac{1}{x}\Rightarrow x^2-2\cos px+1=0$</span> solving this equation i get <span class="math-container">$$x=\cos p\pm i\sin p$$</span> and <strong>similarly</strong>,<span class="math-container">$$y=\cos q\pm i\sin q$$</span>Because somehow i guess <span class="math-container">$$x^m=\cos mp\pm i\sin mp,y^n=\cos nq\pm i\sin nq$$</span> maybe needed.But now i get stuck. Any hints or solution will be appreciated.<br>Thanks in advance.</p>
| B. Goddard | 362,009 | <p>Let <span class="math-container">$p,p+2$</span> and <span class="math-container">$q,q+2$</span> be pairs of twin primes.</p>
<p><span class="math-container">$$f(x,y) = \left( \frac{p}{p+2} \right)^x \left(\frac{q}{q+2}\right)^y$$</span> </p>
<p>is between <span class="math-container">$0$</span> and <span class="math-container">$1$</span>. To recover <span class="math-container">$x$</span> and <span class="math-container">$y$</span>, you just have to factor the numerator of the rational number <span class="math-container">$f(x,y)$</span> which will be easy, because you already know all its prime factors.</p>
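<p>A small demonstration I added, using the sample twin-prime pairs <span class="math-container">$(3,5)$</span> and <span class="math-container">$(11,13)$</span> (chosen so all four primes are distinct); the exponents are read back off the factorization of the numerator:</p>

```python
from fractions import Fraction

def encode(x, y, p=3, q=11):
    # f(x, y) = (p/(p+2))^x * (q/(q+2))^y, a rational in (0, 1] held exactly
    return Fraction(p, p + 2)**x * Fraction(q, q + 2)**y

def decode(f, p=3, q=11):
    # recover (x, y) by stripping factors of p and q from the numerator
    n, x, y = f.numerator, 0, 0
    while n % p == 0:
        n //= p
        x += 1
    while n % q == 0:
        n //= q
        y += 1
    return x, y

f = encode(4, 7)
print(0 < f < 1, decode(f))  # True (4, 7)
```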
|
2,609,283 | <p>$u_1 = (2, -1, 3)$ and $u_2 = (0, 0, 0)$</p>
<p>I tried using the cross product of the two but that just gave me the zero vector. I don't know any other methods to get a vector that is orthogonal to two vectors. </p>
<p>The answer is $v = s(1, 2, 0) + t(0, 3, 1)$ , where $s$ and $t$ are scalar values. </p>
| Rodrigo de Azevedo | 339,790 | <p>Using the <a href="https://en.wikipedia.org/wiki/Weinstein%E2%80%93Aronszajn_identity" rel="nofollow noreferrer">Weinstein-Aronszajn determinant identity</a>,</p>
<p><span class="math-container">$$\begin{array}{rl} \det (s \mathrm I_n - 1_n 1_n^\top) &= \det \left( s \cdot \left( \mathrm I_n - s^{-1}1_n 1_n^\top \right) \right)\\ &= s^n \cdot \det \left( \mathrm I_n - s^{-1}1_n 1_n^\top \right)\\ &= s^n \cdot \left( 1 - n \, s^{-1} \right)\\ &= s^{n-1} \left( s-n \right)\end{array}$$</span></p>
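<p>A numeric spot-check I added for the resulting characteristic-polynomial identity:</p>

```python
import numpy as np

# det(s*I_n - 1 1^T) should equal s^(n-1) * (s - n)
n, s = 5, 2.5
M = s * np.eye(n) - np.ones((n, n))
print(np.linalg.det(M), s**(n - 1) * (s - n))  # both ≈ -97.65625
```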
|
4,149,355 | <p>Are there any stochastic processes <span class="math-container">$(X_t)_{t \in \mathbb{R}^d}$</span> such that</p>
<ol>
<li>almost surely paths are continuous but nowhere differentiable and</li>
<li>sampling of <span class="math-container">$n$</span> points <span class="math-container">$X_{t_n}$</span> on a path can be done in <span class="math-container">$O(n)$</span> time?</li>
</ol>
<p>Most sampling techniques I have in mind require at least <span class="math-container">$O(n \log n)$</span> time.</p>
<p>On the the other hand, all linear-time samplers I know, create processes such that 1) fails; like Perlin-noise, White noise, Pink noise, etc.</p>
<p>Is it even theoretically possible? So this might be a very fundamental question.</p>
| boaz | 83,796 | <p>Note that if <span class="math-container">$x^2+y^2=1$</span>, then by the Arithmetic mean-Geometric mean inequality
<span class="math-container">$$
|xy|=\sqrt{x^2y^2}\leqslant\frac{x^2+y^2}{2}=\frac{1}{2}
$$</span>
Now, if <span class="math-container">$x^2+y^2=1$</span>, then <span class="math-container">$(x+y)^2=1+2xy\leqslant 1+2|xy|$</span>, so
<span class="math-container">$$
|x+y|\leqslant\sqrt{1+2|xy|}\leqslant \sqrt{1+2\cdot\frac{1}{2}}=\sqrt{2}
$$</span>
as required QED.</p>
|
4,149,355 | <p>Are there any stochastic processes <span class="math-container">$(X_t)_{t \in \mathbb{R}^d}$</span> such that</p>
<ol>
<li>almost surely paths are continuous but nowhere differentiable and</li>
<li>sampling of <span class="math-container">$n$</span> points <span class="math-container">$X_{t_n}$</span> on a path can be done in <span class="math-container">$O(n)$</span> time?</li>
</ol>
<p>Most sampling techniques I have in mind require at least <span class="math-container">$O(n \log n)$</span> time.</p>
<p>On the the other hand, all linear-time samplers I know, create processes such that 1) fails; like Perlin-noise, White noise, Pink noise, etc.</p>
<p>Is it even theoretically possible? So this might be a very fundamental question.</p>
| JMP | 210,189 | <p>Assume <span class="math-container">$x+y>\sqrt2$</span>, so that <span class="math-container">$y>\sqrt2-x$</span>.</p>
<p>Then, since <span class="math-container">$x\leqslant 1<\sqrt2$</span> forces <span class="math-container">$\sqrt2-x>0$</span> and hence <span class="math-container">$y>\sqrt2-x>0$</span>, squaring gives <span class="math-container">$x^2+y^2>x^2+2-2\sqrt2x+x^2=2x(x-\sqrt2)+2$</span>.</p>
<p>The RHS is minimal when <span class="math-container">$x=\frac1{\sqrt2}$</span>, where it equals <span class="math-container">$1$</span>. Hence <span class="math-container">$x^2+y^2>1$</span>, contradicting <span class="math-container">$x^2+y^2=1$</span>, so <span class="math-container">$x+y\leqslant\sqrt2$</span>.</p>
|
285,548 | <p>I asked the following question on math.SE (<a href="https://math.stackexchange.com/questions/2420298/bvps-for-elliptic-pdos-when-do-green-functions-l2-inverses-define-pseudo-d">https://math.stackexchange.com/questions/2420298/bvps-for-elliptic-pdos-when-do-green-functions-l2-inverses-define-pseudo-d</a>) just over two months ago, and it only received one, rather unsatisfactory to me, answer there. I'm wondering if people here can have a look. Some related questions have been posed <a href="https://mathoverflow.net/questions/188815/greens-operator-of-elliptic-differential-operator">here</a> and <a href="https://mathoverflow.net/questions/69521/estimates-on-the-green-function-of-an-elliptic-second-order-differential-operato">here</a> (and I noted in particular the literature link provided in the answer to the latter question). However, neither of those discussions cover the possibility of boundary conditions being present, or possible lack of compactness.</p>
<h1>Repost of the original question</h1>
<p>Let me illustrate my question by starting with the simplest possible example: Let us consider $P := - \mathrm{d}^2/\mathrm{d}x^2$, an elliptic partial differential operator on $\mathbb{R}$; let us also consider the following boundary-value problem on the interval $\overline{\Omega} = [0,1]$:
\begin{equation}
P u = f, \qquad u(0)=u(1)=0.
\end{equation}
As is (I think) well-known, when seen as an operator $L^2(\Omega) \to L^2(\Omega)$, $P$ is unbounded. However, it is closed on the dense domain $D(P) := H^2(\Omega) \cap H_0^1(\Omega)$ where $H_0^1(\Omega)$ is the closure of $C_{\mathrm{c}}^\infty(\Omega)$ in the $H^1$ norm (so that any element of this space has vanishing trace on $\partial \Omega = \{0,1\}$, i.e. it satisfies the Dirichlet boundary condition above in a weak sense). Furthermore, $0$ is in the resolvent of $(P,D(P))$, i.e. there exists a bounded inverse $P^{-1} : L^2(\Omega) \to L^2(\Omega)$. In fact, in this example the inverse is easily computed: it is the integral operator defined by the (continuous, as it happens) kernel
\begin{equation}
G(x,y) = \begin{cases} x(1-y) & x \leq y \\ y(1-x) &x > y \end{cases}, \quad (x,y) \in \Omega \times \Omega.
\end{equation}
Of course, when viewed as a distribution in $\mathscr{D}'(\Omega \times \Omega)$, $G$ is the Schwartz kernel of $P^{-1}$ which we know on abstract grounds must exist since $P^{-1} : C_{\mathrm{c}}^{\infty}(\Omega) \to \mathscr{D}'(\Omega)$ is continuous.</p>
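<p>To illustrate with a hand-picked test function (my addition): applying the integral operator with kernel $G$ to $f(y)=\sin(\pi y)$ should return $u(x)=\sin(\pi x)/\pi^2$, the solution of $-u''=f$, $u(0)=u(1)=0$:</p>

```python
import math

def G(x, y):
    # Green's function of -d^2/dx^2 on [0, 1] with Dirichlet conditions
    return x * (1 - y) if x <= y else y * (1 - x)

def apply_G(f, x, n=4000):
    # trapezoidal quadrature of the integral of G(x, y) f(y) over [0, 1]
    h = 1.0 / n
    s = 0.5 * (G(x, 0.0) * f(0.0) + G(x, 1.0) * f(1.0))
    s += sum(G(x, k * h) * f(k * h) for k in range(1, n))
    return s * h

f = lambda y: math.sin(math.pi * y)
x = 0.3
err = abs(apply_G(f, x) - math.sin(math.pi * x) / math.pi**2)
print(err)  # tiny: quadrature error only
```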
<p>My question is the following: in this example and in more general examples where $P$ is a second-order elliptic differential operator on, say, an open (and not necessarily compact) region $\Omega$ with smooth boundary in $\mathbb{R}^n$, and assuming that we can find a suitable dense domain $D(P)$ for $P$ as above so that $(P,D(P))$ has a bounded inverse $P^{-1} : L^2(\Omega) \to L^2(\Omega)$, <strong>does the Schwartz kernel $G$ of $P^{-1}$ always define a pseudodifferential operator on $\Omega$?</strong></p>
<h1>Addendum for MO</h1>
<p>User mcd on math.SE points out that the Boutet de Monvel calculus ought to be relevant here. Aside from wishing to see exactly how this is, I wonder whether the possible lack of compactness (of $\Omega$) might cause problems in such an approach.</p>
<h1>UPDATE</h1>
<p>I have reduced my question to the following subproblem: <a href="https://mathoverflow.net/questions/287167/composition-of-a-smoothing-operator-with-an-l2-bounded-operator-non-compact">Composition of a smoothing operator with an $L^2$-bounded operator, non-compact Riemannian manifold</a>, as explained in a comment there.</p>
| JahvedM | 80,370 | <p>By complementing <a href="https://mathoverflow.net/users/613/deane-yang">Deane Yang</a>'s strategy with <a href="https://mathoverflow.net/users/21051/jochen-wengenroth">Jochen Wengenroth</a>'s observations in the related question <a href="https://mathoverflow.net/questions/287167/composition-of-a-smoothing-operator-with-an-l2-bounded-operator-non-compact">Composition of a smoothing operator with an $L^2$-bounded operator, non-compact Riemannian manifold</a>, I can now write a fully detailed answer to my question.</p>
<p><strong>Theorem.</strong> Let $M$ be a smooth manifold equipped with a smooth measure $\mathrm{d}\mu$ (arising from example from a metric on $M$). Let also $P$ be an elliptic partial differential operator on $M$, and suppose there exists a domain $D(P)$ such that $C_\mathrm{c}^\infty(M) \subseteq D(P) \subseteq L^2(M, \mathrm{d} \mu)$ and $P|_{D(P)} : D(P) \to L^2(M, \mathrm{d} \mu)$ has a bounded left inverse $P^{-1}$. Then the restriction of $P^{-1}$ to $C_\mathrm{c}^\infty(M)$ defines a pseudo-differential operator.</p>
<p><em>Proof</em>. In two steps:</p>
<ol>
<li><p>$P^{-1}$ maps $C_\mathrm{c}^\infty(M)$ to $C^\infty(M)$ by elliptic regularity. Now we use a Lemma, whose proof is given below:</p>
<p><strong>Lemma.</strong> Let $T : C_\mathrm{c}^\infty(M) \to \mathscr{D}'(M)$ be linear, continuous relative to the standard inductive limit topology of $C_\mathrm{c}^\infty(M)$ and the weak topology of $\mathscr{D}'(M)$, and with image contained in $C^\infty(M)$. Then $T : C_\mathrm{c}^\infty(M) \to C^\infty(M)$ is continuous relative to the standard Fréchet space topology of the codomain.</p>
<p>Now, the composition $C_\mathrm{c}^\infty(M) \hookrightarrow L^2(M, \mathrm{d} \mu) \xrightarrow{P^{-1}} L^2(M, \mathrm{d} \mu) \hookrightarrow \mathscr{D}'(M)$ is continuous and has image contained in $C^\infty(M)$; hence, by the Lemma, $P^{-1}$ maps $C_\mathrm{c}^\infty(M)$ continuously to the Fréchet space $C^\infty(M)$. </p></li>
<li><p>Also since $P$ is elliptic, there is a pair $(Q,R)$ of properly supported pseudo-differential operators such that $R$ is smoothing and $PQ = I + R$. This equality is valid when both sides are applied to $\mathscr{D}'(M)$, and when we restrict it to $C_\mathrm{c}^\infty(M)$ we can compose both sides with $P^{-1}$ to obtain
$$(*) \qquad P^{-1} = Q - P^{-1}R \quad \text{on } C_\mathrm{c}^\infty(M),$$
and in fact the equality is clearly also valid on the larger set $Q^{-1}[D(P)]$. Now, since $R$ is properly supported and smoothing, it is continuous from $\mathscr{E}'(M)$ to $C_\mathrm{c}^\infty(M)$, and hence $\mathscr{E}'(M) \xrightarrow{R} C_\mathrm{c}^\infty(M) \xrightarrow{P^{-1}} C^\infty(M)$ is continuous by part 1 of the proof. It follows from known results enunciated in the <a href="https://mathoverflow.net/questions/287167/composition-of-a-smoothing-operator-with-an-l2-bounded-operator-non-compact">related MO question</a> that the Schwartz kernel of $P^{-1}R$ is smooth, i.e. that $P^{-1}R$ is a smoothing integral operator. Hence, the right-hand side of Equation (*) is a pseudo-differential operator extending $P^{-1}$. This completes the proof.</p></li>
</ol>
<p><em>Proof of the Lemma.</em> $T$ is continuous as a map $C_\mathrm{c}^\infty(M) \to C^\infty(M)$ when the codomain is equipped with the subspace topology from $\mathscr{D}'(M)$. Hence its graph is closed in the appropriate product topology, and it is also closed in the finer product topology obtained by equipping $C^\infty(M)$ with its Fréchet space topology. The result then follows from a version of the closed graph theorem applicable to linear maps from LF-spaces to Fréchet spaces.</p>
|
3,581,707 | <p>I'm struggling with the following proof and hope some of you can help me:</p>
<p>By <span class="math-container">$H$</span> we denote a real Hilbert space, and let <span class="math-container">$T: H \rightarrow H$</span> be a compact, self-adjoint, linear operator. </p>
<p>(i) Show that <span class="math-container">$R(x):= \frac{(Tx,x)_H}{(x,x)_H}$</span> attains its maximum, i.e. there exists <span class="math-container">$x^* \in H$</span> s.t. <span class="math-container">$R(x^*)$</span> is maximal and <span class="math-container">$||x^*||_H=1$</span>.</p>
<p>(ii) Furthermore, I am looking for a counter-example, that this doesn't work for non-compact operators.</p>
<p>So we are asked to show that <span class="math-container">$\exists x^* \in H~ \textit{s.t.}~ \forall x \in H: R(x) \leq R(x^*)$</span>.</p>
<p>I've had several approaches, but all of them ended in a dead-end. The main problem was, that I couldn't draw the conclusion by using the compactness of <span class="math-container">$T$</span>, which should be crucial for that problem (at least in my opinion). But I couldn't find a counter example, with a non-compact operator that doesn't satisfy the statement (that's why I ask question (ii) - maybe this helps me to get a better "feeling" for the problem). Therefore, the possibly most promising try was to define a maximising sequence <span class="math-container">$(x_n)_{n\in\mathbb{N}} \subseteq H$</span>, <span class="math-container">$x_n \rightarrow x^*$</span>. My idea was to somehow use that for every bounded sequence <span class="math-container">$(x_n)_{n\in\mathbb{N}}$</span> there exists a subsequence <span class="math-container">$(x_{n_k})_{k\in\mathbb{N}}$</span> such that <span class="math-container">$Tx_{n_k}$</span> converges. Furthermore, there is a statement that if <span class="math-container">$u_n \rightarrow_w u$</span> then for a compact operator <span class="math-container">$T$</span> holds: <span class="math-container">$Tu_n \rightarrow Tu$</span>, which might be helpful.</p>
<p>Furthermore, since <span class="math-container">$||x^*||_H = 1$</span>, I can assume that <span class="math-container">$\forall n \in \mathbb{N}: ||x_n||_H = 1$</span>, which would give me the required boundedness. </p>
<p>Unfortunately, I wasn't able to prove the statement by using these ideas. Is my approach promising, or did I miss something crucial? How can I prove the statement?
I would be grateful for any help!</p>
| Kavi Rama Murthy | 142,385 | <p>I will assume that <span class="math-container">$H$</span> is separable so that the closed unit ball is a compact metric space in the weak topology. [Please see the remark at the end of the answer]. </p>
<p>Let <span class="math-container">$A$</span> be the supremum of the real numbers <span class="math-container">$\frac {\langle Tx , x \rangle} {\|x\|^{2}}$</span>. Then there exists a sequence <span class="math-container">$x_n$</span> such that <span class="math-container">$\frac {\langle Tx_n , x_n \rangle} {\|x_n\|^{2}} \to A$</span>. Let <span class="math-container">$y_n=\frac {x_n} {\|x_n\|}$</span>. Then <span class="math-container">$\langle Ty_n , y_n \rangle \to A$</span>. Since <span class="math-container">$(y_n)$</span> is bounded and <span class="math-container">$T$</span> is compact, there is a subsequence <span class="math-container">$y_{n_k}$</span> converging weakly such that <span class="math-container">$T(y_{n_k})$</span> converges to some point <span class="math-container">$y$</span> in the norm. Now <span class="math-container">$|\langle Ty_{n_k} , y_{n_k} \rangle -\langle Ty , y \rangle| \le |\langle Ty_{n_k} , y_{n_k} \rangle -\langle Ty , y_{n_k} \rangle |+
|\langle Ty , y_{n_k} \rangle -\langle Ty , y \rangle|$</span>. Can you check that both terms here tend to <span class="math-container">$0$</span>? It follows now that <span class="math-container">$\langle Ty , y \rangle=A$</span>. </p>
<p>An example where the value <span class="math-container">$A$</span> is not attained is the shift operator on <span class="math-container">$\ell^{2}$</span> defined by <span class="math-container">$T(a_n)=(0,a_1,a_2,...)$</span>. Here <span class="math-container">$A=1$</span> by Cauchy -Schwarz inequality and using the condition for equality in Cauchy -Schwarz inequality we can see that the value <span class="math-container">$A=1$</span> is never attained.</p>
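<p>A small numeric sketch of this example (my addition, not part of the original answer): truncating the shift operator to <span class="math-container">$\mathbb{R}^N$</span>, the vector <span class="math-container">$(1,\dots,1)$</span> has Rayleigh quotient <span class="math-container">$(N-1)/N$</span>, which approaches the supremum <span class="math-container">$A=1$</span> but never attains it.</p>

```python
def rayleigh(x):
    # Rayleigh quotient <Tx, x> / <x, x> for the (truncated) right shift
    # T(a1, a2, ...) = (0, a1, a2, ...).
    Tx = [0.0] + x[:-1]
    return sum(t * v for t, v in zip(Tx, x)) / sum(v * v for v in x)

quotients = [rayleigh([1.0] * N) for N in (10, 100, 1000)]
print(quotients)  # [0.9, 0.99, 0.999] -- tends to 1, always strictly below it
```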
<p>PS The separability assumption can easily be dispensed with by restricting our attention to the closed subspace spanned by <span class="math-container">$(x_n)$</span>.</p>
|
2,072,473 | <p>I managed to prove the statement:</p>
<blockquote>
<p>If $f: A\to B$ and $g: B\to C$ are surjective, then $g\circ f$ is surjective.</p>
</blockquote>
<p>But now I require a counterexample to the converse of this statement. I am not sure how to formulate the counterexample. Similarly I need a counterexample of the statement being "injective" instead of "surjective".</p>
| Nicolas FRANCOIS | 288,125 | <p>Converse : "If $g\circ f$ is surjective, then $f$ and $g$ are surjective".</p>
<p>Saying this statement is false means $g\circ f$ is surjective, but either $g$ or $f$ is not. But if $g$ is not surjective, $g\circ f$ can't be either (check it). So your counterexample has to be composed of $f:A\to B$ non surjective, and $g:B\to C$ surjective such that $g\circ f$ is surjective.</p>
<p>For example : $A=\left\{0,1\right\}$, $B=\left\{a,b,c\right\}$, $C=\left\{\alpha,\beta\right\}$. $f(0)=a$, $f(1)=b$, $g(a)=\alpha$, $g(b)=\beta=g(c)$.</p>
<p>You can check $g\circ f$ is surjective, but $f$ is not.</p>
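<p>A quick sanity check of this counterexample (my addition, hypothetical code, not from the original answer), encoding the maps as dictionaries:</p>

```python
# Encode the counterexample: f misses "c" in B, yet g o f still covers all of C.
A = {0, 1}
B = {"a", "b", "c"}
C = {"alpha", "beta"}
f = {0: "a", 1: "b"}                          # f : A -> B, not surjective
g = {"a": "alpha", "b": "beta", "c": "beta"}  # g : B -> C, surjective
gof = {x: g[f[x]] for x in A}                 # the composition g o f

print(set(f.values()) == B)    # False
print(set(gof.values()) == C)  # True
```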
<p>For "injective" instead of "surjective", do the same analysis, and consider the fact that if $f$ is not injective, $g\circ f$ can't be either.</p>
|
2,119,761 | <p>Assume that $f:\mathbb{R}\rightarrow \mathbb{R}$ is a continuous function and $g:\mathbb{R}\rightarrow \mathbb{R}$ is a uniformly continuous function, with $g$ bounded.</p>
<p>I have to prove that $f\circ g$ is a uniformly continuous function.
I tried the following:
$f$ is continuous, so $\forall \epsilon>0 ~\exists~ \delta_1>0$ such that $|x-x_0|<\delta_1$ implies $|f(x)-f(x_0)|<\epsilon$.
From the definition of uniform continuity of $g$ I can find a $\delta>0$ such that $|x-y|<\delta$ implies $|g(x)-g(y)|<\delta_1$,
which only shows that $f\circ g$ is continuous, not that it is uniformly continuous.
I don't know how to use that $g$ is bounded.
Thank you very much.</p>
| parsiad | 64,601 | <p><strong>Hint</strong>: Did you know that if $k$ is a continuous mapping between metric spaces $X$ and $Y$ and $X$ is compact, $k$ is uniformly continuous?</p>
<p>Since $g$ is bounded, its image is contained in a compact set $K$ (i.e., $g(\mathbb{R}) \subset K$). Letting $k = f|_K$, it is trivially the case that $f\circ g=k\circ g$. What does this imply?</p>
|
3,432,911 | <p>My argument is as follows:</p>
<p>Let <span class="math-container">$R$</span> be a commutative ring with unity, <span class="math-container">$I$</span> an ideal of <span class="math-container">$R$</span>.
If <span class="math-container">$(R/I)^n\cong (R/I)^m$</span> as <span class="math-container">$R$</span>-modules, then it follows that they are isomorphic as <span class="math-container">$R/I$</span>-modules because the isomorphism factors through the quotient. We observe that these are both free <span class="math-container">$R/I$</span>-modules with bases <span class="math-container">$${\mathfrak{B}_1=\{\delta_{1j}+I, \delta_{2j}+I, \dots, \delta_{nj}+I\} \;\;\text{ and }\;\; \mathfrak{B}_2=\{\delta_{1j}+I, \delta_{2j}+I, \dots, \delta_{mj}+I\}}$$</span> respectively.
Then the isomorphism between <span class="math-container">$(R/I)^n \cong (R/I)^m$</span> as <span class="math-container">$R$</span>-modules induces an isomorphism of these free modules, meaning there is a bijection between the elements of <span class="math-container">$\mathfrak{B}_1$</span> and <span class="math-container">$\mathfrak{B}_2$</span>. It follows then that <span class="math-container">$n=m$</span>.</p>
| lab bhattacharjee | 33,337 | <p>WLOG <span class="math-container">$z-1=\cos2t+i\sin2t$</span></p>
<p><span class="math-container">$$\dfrac1z=\dfrac1{2\cos t(\cos t+i\sin t)}=\dfrac{\cos t-i\sin t}{2\cos t}$$</span></p>
|
2,425,337 | <p>What would be an example of a real valued sequence $\{a_{n}\}_{n=1}^{\infty}$ such that $$\frac{a_{n}}{a_{n+1}} = 1 + \frac{1}{n} + \frac{p}{n \ln n} + O\left(\frac{1}{n \ln^{2}n}\right)\ ?$$</p>
| Wouter | 89,671 | <p>A simple way to construct the desired curve is to start with a <a href="https://en.wikipedia.org/wiki/Sigmoid_function" rel="nofollow noreferrer">sigmoid</a>
$$f(x)=\frac{1}{1+\exp(A(x-1/4))}$$
where $A$ is a positive parameter that influences the steepness.</p>
<p>Add a line through $(1/4,0)$
$$g(x)=\alpha f(x)+\beta (x-1/4)$$
with $\beta$ to be determined such that the curve passes through $(0,\alpha)$ and $(1/2,0)$:
$$g(0)=\alpha \frac{1}{1+\exp(-A/4)}-\frac{\beta}{4}=\alpha$$
$$\implies 4 \alpha \left( \frac{1}{1+\exp(-A/4)}-1\right)=\beta$$</p>
<p>So
$$\frac{\alpha}{1+\exp(A(x-1/4))}+4 \alpha \left( \frac{1}{1+\exp(-A/4)}-1\right)(x-1/4)$$
behaves as desired. (Note: at $A=0$ this function is exactly linear, in the $A\rightarrow \infty $ limit it becomes a step function)</p>
<p><a href="https://i.stack.imgur.com/DNcNU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DNcNU.png" alt="enter image description here"></a></p>
<p>It works just fine in Mathematica:</p>
<pre><code>\[Alpha]/(1 + Exp[A*(x - 1/4)]) + 4*\[Alpha]*(1/(1 + Exp[-A/4]) - 1)*(x - 1/4)
{% /. {\[Alpha] -> 1, A -> 0}, % /. {\[Alpha] -> 1, A -> 20}, % /. {\[Alpha] -> 1, A -> 100}}
Plot[%, {x, 0, 1/2}]
</code></pre>
<p><a href="https://i.stack.imgur.com/842vT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/842vT.png" alt="enter image description here"></a></p>
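<p>As a cross-check in plain Python (my addition; the original uses Mathematica), one can verify that with $\beta$ chosen as above the curve does pass through $(0,\alpha)$ and $(1/2,0)$:</p>

```python
import math

def g(x, alpha=1.0, A=20.0):
    # g(x) = alpha * f(x) + beta * (x - 1/4), with f the sigmoid and
    # beta = 4*alpha*(1/(1 + exp(-A/4)) - 1) as derived above.
    f = 1.0 / (1.0 + math.exp(A * (x - 0.25)))
    beta = 4.0 * alpha * (1.0 / (1.0 + math.exp(-A / 4.0)) - 1.0)
    return alpha * f + beta * (x - 0.25)

print(abs(g(0.0) - 1.0) < 1e-9)  # True: g(0) = alpha
print(abs(g(0.5)) < 1e-9)        # True: g(1/2) = 0
```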
|
4,027,604 | <p>Let <span class="math-container">$f(x)$</span> be continuous on <span class="math-container">$[a,b]$</span> and <span class="math-container">$F(x)=\frac{1}{x-a}\int_a^xf(t)dt$</span></p>
<p>Prove that the functions <span class="math-container">$F(x)$</span> and <span class="math-container">$f(x)$</span> have the same monotonicity on <span class="math-container">$(a, b]$</span>.</p>
<p><a href="https://i.stack.imgur.com/FPn2M.jpg" rel="nofollow noreferrer">The answer is here</a>,but I don't know why <span class="math-container">$\int_a^xf(x)\,dt = (x-a)f(x) $</span></p>
| Fred | 380,717 | <p>If <span class="math-container">$x$</span> is fixed, then in <span class="math-container">$\int_a^xf(x)dt$</span> you integrate over the constant <span class="math-container">$f(x)$</span> with respect to <span class="math-container">$t$</span>. Thus <span class="math-container">$\int_a^xf(x)dt =f(x) \int_a^x 1 dt=(x-a)f(x).$</span></p>
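<p>A small numeric illustration (my addition): once <span class="math-container">$x$</span> is fixed, the integrand is the constant <span class="math-container">$f(x)$</span>, and a Riemann sum over <span class="math-container">$[a,x]$</span> recovers <span class="math-container">$(x-a)f(x)$</span>:</p>

```python
def riemann_sum_of_constant(c, a, x, n=100_000):
    # Left Riemann sum of the constant function t -> c on [a, x].
    h = (x - a) / n
    return sum(c * h for _ in range(n))

a, x, c = 1.0, 3.0, 5.0  # c plays the role of f(x) for the fixed x
print(abs(riemann_sum_of_constant(c, a, x) - (x - a) * c) < 1e-6)  # True
```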
|
8,568 | <p>I'm going to start teaching a course called algebra COE, for seniors who didn't pass the state algebra exam required to graduate; after extensive support, they work through spaced-out, exam-like extended problems. </p>
<p>I don't want to start the class out with "getting down to business" because I want the students to feel comfortable in the class, with me and with each other. The "getting down to business" will happen during the second week of class. Therefore, I'd like to start out with a class collaboration to solve a "fun" problem. (There are 5 students in the class)</p>
<p>At the same time, I don't want to start out with a problem that feels too contrived, or too much like "school math" problems. They have clearly been turned off from "school math." I want one or some that feel more like they are doing a puzzle, yet still engage algebra-related skills and open up a discussion about problem-solving as a process and skill that can be honed. </p>
<p>Some problems I have considered, yet I believe are too "math-feeling":</p>
<ul>
<li>The exponential chessboard and rice problem </li>
<li>How many squares are there on the chessboard? (note: more than 64)</li>
<li>The "lockers" problem</li>
<li>The <a href="https://itunes.apple.com/us/app/ooops/id467564672?mt=8">ooops game</a></li>
</ul>
<p>Any suggestions?</p>
| Sue VanHattum | 60 | <p>Eight adults and two kids want to cross a river. There is a boat that can hold one adult or two kids, no more. Can they all cross? How? Extend to <em>n</em> adults.</p>
<p>Use figurines (Lego, Playmobil, ?) or coins to model the situation. </p>
|
8,568 | <p>I'm going to start teaching a course called algebra COE, for seniors who didn't pass the state algebra exam required to graduate; after extensive support, they work through spaced-out, exam-like extended problems. </p>
<p>I don't want to start the class out with "getting down to business" because I want the students to feel comfortable in the class, with me and with each other. The "getting down to business" will happen during the second week of class. Therefore, I'd like to start out with a class collaboration to solve a "fun" problem. (There are 5 students in the class)</p>
<p>At the same time, I don't want to start out with a problem that feels too contrived, or too much like "school math" problems. They have clearly been turned off from "school math." I want one or some that feel more like they are doing a puzzle, yet still engage algebra-related skills and open up a discussion about problem-solving as a process and skill that can be honed. </p>
<p>Some problems I have considered, yet I believe are too "math-feeling":</p>
<ul>
<li>The exponential chessboard and rice problem </li>
<li>How many squares are there on the chessboard? (note: more than 64)</li>
<li>The "lockers" problem</li>
<li>The <a href="https://itunes.apple.com/us/app/ooops/id467564672?mt=8">ooops game</a></li>
</ul>
<p>Any suggestions?</p>
| Jon Bannon | 354 | <p>The frog jumping puzzle is nice, in that it is very simple but hides some surprising but manageable complexity. Here's a <a href="http://www.smart-kit.com/s7284/frog-jumping-puzzle/" rel="nofollow">"video game" version</a>. Have the students play with this, and then try to generalize to n male frogs and m female frogs.</p>
<p>For another favorite, see the map folding puzzle of Martin Gardner as <a href="https://books.google.com/books?id=fA3CAgAAQBAJ&pg=PA14&lpg=PA14&dq=map%20folding%20problem%20of%20martin%20gardner&source=bl&ots=QxG5dWws10&sig=6bAOVmSYfDmwEYBu-AbkjMwpM3Q&hl=en&sa=X&ved=0CEkQ6AEwCTgKahUKEwiDsc3a99bHAhUI8IAKHZWlAo4#v=onepage&q=map%20folding%20problem%20of%20martin%20gardner&f=false" rel="nofollow">problem 29 here</a>. This is a beautiful way to show the surprisingly delicate complexity arising in mathematics.</p>
<p>I've broken all sorts of posting rules here, but perhaps someone will find better links. Please feel free to modify this post accordingly!</p>
|
287,859 | <p>Prove that $\lim\limits_{x\rightarrow+\infty}\frac{x^k}{a^x} = 0\ (a>1,k>0)$.</p>
<p>P.S. This problem comes from my analysis book. You may use the definition of limits or invoke the Heine theorem for help. <em>It means the proof should only use some basic properties and definition of limits rather than more complicated approaches.</em></p>
| André Nicolas | 6,312 | <p>Hard to know what is allowed. Since $\frac{x^k}{a^x}=\left(\frac{x}{(a^{1/k})^{x}}\right)^{k}$, it suffices to show that $\frac{x}{(a^{1/k})^{x}} \to 0$. </p>
<p>Let $a^{1/k}=b$. We are computing the simpler-looking $\lim_{y\to\infty}\frac{y}{b^y}$. </p>
<p>Assume that we know that $b^y$ is increasing. Let $b=1+d$. </p>
<p>Then $(1+d)^y \ge (1+d)^{\lfloor y\rfloor}$. But by the Binomial Theorem, $(1+d)^{\lfloor y\rfloor} \gt d^2\frac{\lfloor y\rfloor(\lfloor y\rfloor-1)}{2}$. This shows that $b^y$ grows sufficiently faster than $y$. </p>
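<p>A numeric illustration of the conclusion (my addition; the sample values of $a$ and $k$ are arbitrary choices with $a>1$, $k>0$):</p>

```python
k, a = 3.0, 1.5
xs = (10, 50, 100, 200)
vals = [x**k / a**x for x in xs]
print(all(u > v for u, v in zip(vals, vals[1:])))  # True: decreasing on this range
print(vals[-1] < 1e-20)                            # True: already tiny at x = 200
```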
|