| qid | question | author | author_id | answer |
|---|---|---|---|---|
2,133,769 | <p>Asymptotic notations are a little vague for me at the moment. Here I have a problem that asks me if two equations are equal to each other, and it involves the big O notation. I wrote my question in the image in magenta color.</p>
<p><a href="https://i.stack.imgur.com/hxDx3.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hxDx3.jpg" alt="enter image description here"></a></p>
| Community | -1 | <p>Rewrite the system as</p>
<pre><code> x1 + x2 + x3 = 1 - x4 - x5
-x1 + x2 + x3 = 1 + 2x4 + 2x5
2x1 + 2x2 - x3 = 1 + x4 - x5
</code></pre>
<p>Then for any choice of <code>x4</code> and <code>x5</code> (possibly renamed <code>s</code> and <code>t</code>) you can evaluate the RHSs and find a solution in <code>x1</code>, <code>x2</code>, <code>x3</code>.</p>
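The elimination above can be sketched numerically (a sketch only; the sample values of `s` and `t` are arbitrary, and the closed-form steps come from combining the three equations as commented):

```python
# Sketch: solve the rewritten 3x3 system for one sample choice of the
# free variables x4 = s, x5 = t, then plug the result back in.
def solve(s, t):
    r1 = 1 - s - t        # RHS of  x1 + x2 + x3
    r2 = 1 + 2*s + 2*t    # RHS of -x1 + x2 + x3
    r3 = 1 + s - t        # RHS of 2x1 + 2x2 - x3
    x1 = (r1 - r2) / 2    # (eq1) - (eq2) eliminates x2 and x3
    x3 = (2*r1 - r3) / 3  # 2*(eq1) - (eq3) eliminates x1 and x2
    x2 = r1 - x1 - x3     # back-substitute into eq1
    return x1, x2, x3

x1, x2, x3 = solve(2.0, -1.0)
residuals = (
    x1 + x2 + x3 - (1 - 2.0 - (-1.0)),
    -x1 + x2 + x3 - (1 + 2 * 2.0 + 2 * (-1.0)),
    2*x1 + 2*x2 - x3 - (1 + 2.0 - (-1.0)),
)
```

Any other choice of `s` and `t` works the same way, which is exactly the two-parameter family of solutions.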
|
240,139 | <p>Does a quadratic form always come from a symmetric bilinear form?
We know the case $q(x)=b(x,x)$, where $q$ is a quadratic form and $b$ is a symmetric bilinear form.
But when we just take a bilinear form $b(x,y)$ and write $x$ instead of $y$, does it give us a quadratic form?</p>
| Will Jagy | 10,400 | <p>Sure. In finite dimension over the real numbers you do not even need the bilinear form to be symmetric, because any matrix is the sum of a symmetric and a skew-symmetric matrix, and these are easy to find. </p>
<p>Meanwhile, as long as characteristic of the underlying field is not $2,$ you can go back with $$ c(x,y) = \frac{1}{2} \left( q(x+y) - q(x) - q(y) \right). $$
Since this recipe gives a symmetric bilinear form, those are preferred.</p>
<p>Even in infinite dimension, I suppose you can symmetrize with
$$ f(x,y) = \frac{1}{2} \left( h(x,y) + h(y,x) \right). $$
There won't be any matrices.</p>
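The finite-dimensional claim can be sketched in a few lines (the 2×2 matrix below is a made-up example): any square matrix splits into a symmetric plus a skew-symmetric part, and the skew part contributes nothing to $q(x)=x^\top B x$.

```python
# Sketch: split a non-symmetric matrix B into symmetric + skew parts
# and check that the skew part never contributes to q(x) = x^T B x.
B = [[1.0, 4.0],
     [2.0, 3.0]]                      # made-up non-symmetric example

sym  = [[(B[i][j] + B[j][i]) / 2 for j in range(2)] for i in range(2)]
skew = [[(B[i][j] - B[j][i]) / 2 for j in range(2)] for i in range(2)]

def q(M, x):
    """Evaluate the quadratic form x^T M x."""
    return sum(x[i] * M[i][j] * x[j] for i in range(2) for j in range(2))

x = [0.7, -1.3]
# the skew part vanishes on the diagonal of the form: q_skew(x) = 0,
# so q agrees for B and for its symmetric part
```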
|
240,139 | <p>Does a quadratic form always come from a symmetric bilinear form?
We know the case $q(x)=b(x,x)$, where $q$ is a quadratic form and $b$ is a symmetric bilinear form.
But when we just take a bilinear form $b(x,y)$ and write $x$ instead of $y$, does it give us a quadratic form?</p>
| Serkan Yaray | 50,727 | <p>If we have a symmetric bilinear form $b$, we can get a quadratic form $q$:<br>
$q\colon V \to \mathbb{R}$<br>
$q(v)=b(v,v)$</p>
<p>Conversely, if $q$ is a quadratic form<br>
$q\colon V \to \mathbb{R}$,<br>
we can define<br>
$b(v,w):=\frac 12 (q(v+w)-q(v)-q(w))$.</p>
<p>The vital point is that you do not always recover the bilinear form you started with, only its symmetric part:
if $q(v)=b(v,v)$ for a possibly non-symmetric $b$,
the definition $\frac 12 (q(v+w)-q(v)-q(w))$ leads us to $\frac 12 (b(v,w)+b(w,v))$. </p>
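This can be sketched with exact rational arithmetic (the matrix and test vectors below are made up for illustration): starting from a non-symmetric $b$, the polarization identity returns the symmetric part of $b$, not $b$ itself.

```python
from fractions import Fraction as F

# Sketch: take a non-symmetric bilinear form b, build q(v) = b(v, v),
# and check that polarization returns the symmetric part of b.
Bm = [[F(1), F(5)],
      [F(-1), F(2)]]                  # made-up non-symmetric matrix

def b(v, w):
    return sum(v[i] * Bm[i][j] * w[j] for i in range(2) for j in range(2))

def q(v):
    return b(v, v)

def polar(v, w):
    """b'(v, w) = (q(v+w) - q(v) - q(w)) / 2"""
    vw = [v[i] + w[i] for i in range(2)]
    return (q(vw) - q(v) - q(w)) / 2

v, w = [F(2), F(1)], [F(-3), F(4)]
# polarization yields (b(v,w) + b(w,v)) / 2, which is symmetric,
# and in general differs from b(v,w)
```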
|
2,167,541 | <p>Is this expression true? $$y(x) =\lim_{h \rightarrow 0}\frac{f(x+h)-f(x)}{h} = \lim_{h \rightarrow 0}\frac{f(x)-f(x-h)}{h}$$</p>
<p>I'm taking the limit as $h$ goes to $0$ for $y(x)$ and I got the latter equation. Can I just write $y(x) = df(x)/dx$?</p>
| Stefano | 387,021 | <p>$$ 2\sin x \cos x =2 \left(\sum_{k=0}^\infty (-1)^k\frac{x^{2k+1}}{(2k+1)!}\right) \left(\sum_{k=0}^\infty (-1)^k\frac{x^{2k}}{(2k)!}\right)$$</p>
<p>$$= 2\sum_{k=0}^\infty \sum_{i=0}^k (-1)^i \frac{x^{2i+1}}{(2i+1)!} (-1)^{k-i} \frac{x^{2(k-i)}}{(2k-2i)!} = \sum_{k=0}^\infty (-1)^k x^{2k+1} \sum_{i=0}^k\frac{2}{(2i+1)! (2k-2i)!} $$</p>
<p>$$ = \sum_{k=0}^\infty (-1)^k \frac{(2x)^{2k+1}}{(2k+1)!} = \sin(2x),$$</p>
<p>since</p>
<p>$$\sum_{i=0}^k \frac{2}{(2i+1)!(2k-2i)!} = \frac{2^{2k+1}}{(2k+1)!}.$$</p>
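The closing identity can be verified exactly with rational arithmetic (a quick sketch over the first several values of $k$):

```python
from fractions import Fraction
from math import factorial

# Sketch: exact check of the identity
#   sum_{i=0}^{k} 2 / ((2i+1)! (2k-2i)!)  ==  2^(2k+1) / (2k+1)!
def sides(k):
    lhs = sum(Fraction(2, factorial(2 * i + 1) * factorial(2 * k - 2 * i))
              for i in range(k + 1))
    rhs = Fraction(2 ** (2 * k + 1), factorial(2 * k + 1))
    return lhs, rhs

ok = all(sides(k)[0] == sides(k)[1] for k in range(12))
```

(The identity itself follows from summing the odd binomial coefficients of $(1+1)^{2k+1}$.)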
|
3,154,947 | <p>Hello MathStackExchange Community, </p>
<p>Confused about how to change the notation-formula for permutations when you have more complex processes. i.e. multiple independent action choices of different types of objects and orientation limitations. </p>
<ul>
<li><em>The generic formula is : <span class="math-container">$P(n,r)= n!/(n-r)!$</span></em></li>
<li><span class="math-container">$n!$</span> = the total number of objects in an option set</li>
<li><span class="math-container">$(n-r)!$</span> = the limitations caused by the choices made.</li>
</ul>
<p><em>but this only works for very simple permutation-processes where you make one single choice-type.</em></p>
<p>If I have 15 particles, 10 neutral and 5 positive particles
and I care about the arrangement</p>
<p>The formula is <span class="math-container">$P(n,r)= 15!/(10!\cdot5!)$</span><br>
<strong>What I don't get is how does <span class="math-container">$10!\,5!$</span> translate to <span class="math-container">$(n-r)!$</span></strong></p>
<p><strong>the second part of the problem asks, if like charges repel, and positive charges can't sit next to each other- number of possibilities then?</strong></p>
<p><em>What is the way to think about the second portion?</em> Could draw it out and try to find a pattern, but if I had Avogadro's number of particles that would be hard.</p>
<p>Thanks, </p>
<p>ThermoRestart </p>
| Claude Leibovici | 82,404 | <p>Making a parametric plot, there is effectively a loop which is symmetric with respect to the <span class="math-container">$x$</span> axis; the points where the curve intersects the axis correspond to <span class="math-container">$x=33$</span> and <span class="math-container">$x=49$</span> as @Ross Millikan already answered.</p>
<p>The major issue is to compute
<span class="math-container">$$I=\int \sqrt{\left(\frac{dx}{dt}\right)^2+\left(\frac{dy}{dt}\right)^2}\,dt=\int\sqrt{4 t^2+\left(3 t^2-16\right)^2}\,dt$$</span> which would lead to nasty elliptic integrals.</p>
<p>So, the simplest is to do what @Ross Millikan already answered, that is to say
<span class="math-container">$$t=\pm \sqrt{49-x} \implies y=\pm (x-33)\sqrt{49-x} $$</span> So, the total area enclosed by the loop is
<span class="math-container">$$A=2\int_{33}^{49}(x-33)\sqrt{49-x}\,dx$$</span> Using
<span class="math-container">$$\int(x-33)\sqrt{49-x}\,dx=-\frac{2}{15} (49-x)^{3/2} (3 x-67)$$</span> we end with <span class="math-container">$A=2 \times \frac{4096}{15}$</span>.</p>
<p>Notice that <span class="math-container">$\frac{4096}{15}\approx 273.067$</span></p>
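A midpoint-rule check of the area integral above (a numerical sketch; the step count is arbitrary):

```python
import math

# Sketch: midpoint-rule check that the loop area is
#   A = 2 * int_{33}^{49} (x - 33) * sqrt(49 - x) dx = 2 * 4096/15 = 8192/15
def integrand(x):
    return (x - 33) * math.sqrt(49 - x)

a, b, n = 33.0, 49.0, 200_000
h = (b - a) / n
area = 2 * h * sum(integrand(a + (i + 0.5) * h) for i in range(n))
```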
|
1,326,652 | <p>What would be the nature of the roots of the equation $$2x^2 - 2\sqrt{6} x + 3 = 0$$</p>
<p>My book says that as the discriminant is 0 so the roots are rational and equal.
But the discriminant can be used for determining the nature of the roots only when the roots are rational numbers. Is the answer in the book wrong, because actually the nature of the roots should be irrational?</p>
| Robert Israel | 8,508 | <p>Rationality and multiplicity of roots are two separate questions. The discriminant tells you whether there are repeated roots. It offers no opinion on whether those repeated roots are rational. </p>
<p>Of course, for a polynomial whose roots are all rational the coefficients (divided by the leading coefficient) would all be rational as well.</p>
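A quick check of the example (a sketch: since $b=2\sqrt 6$, the value $b^2=24$ is an exact integer, so the discriminant can be computed without rounding; the repeated root is $\sqrt 6/2$, which is irrational):

```python
import math

# Sketch: the quadratic 2x^2 - 2*sqrt(6)*x + 3.
a, c = 2, 3
b_squared = 24                       # (2*sqrt(6))**2, exact integer
disc = b_squared - 4 * a * c         # discriminant, computed exactly
root = 2 * math.sqrt(6) / (2 * a)    # the double root, sqrt(6)/2 ~ 1.2247
# root**2 = 3/2, so the root itself is irrational despite disc == 0
```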
|
3,789,584 | <p>I tried to find the limit <span class="math-container">$$\mathop {\lim }\limits_{n \to \infty } \ln (n) \cdot \int_0^1 \ln ( n^{-t} + 1 ) \, \mathrm dt$$</span>, but I couldn't find a conclusive argument.</p>
<hr />
<p>By assuming that the sequence <span class="math-container">$\displaystyle x_n = \ln (n) \cdot \int_0^1 \ln ( n^{-t} + 1 ) \, \mathrm dt$</span> is convergent to <span class="math-container">$k$</span>, I managed to do the following:<br />
It is well known that <span class="math-container">$T_{2q,0} (x) \le \ln(1+x) \le T_{2q+1,0} (x) \; , \; \forall x>0 , \forall q \in \mathbb{N} ^{*}$</span>. (I used the Taylor polynomial here)<br />
Using this inequality, we obtain a bounding for <span class="math-container">$x_n$</span>. By making <span class="math-container">$n \to \infty$</span>, we obtain:</p>
<p><span class="math-container">$$\sum\limits_{p = 1}^{2q} { \frac{ (-1)^{p+1} }{ p^2 } } \le k \le \sum\limits_{p = 1}^{2q+1} { \frac{ (-1)^{p+1} }{ p^2 } } \, , \, \forall q \in \mathbb{N} ^{*} $$</span></p>
<p>Then, by making <span class="math-container">$q \to \infty$</span>, we have that <span class="math-container">$ \sum\limits_{p = 1}^{ \infty } { \frac{ (-1)^{p+1} }{ p^2 } } = \sum\limits_{p = 1}^{ \infty } { \frac{ 1 }{ p^2 } } - 2 \cdot \sum\limits_{p = 1}^{ \infty } { \frac{ 1 }{ (2p)^2 } } = \sum\limits_{p = 1}^{ \infty } { \frac{ 1 }{ p^2 } } - \frac{1}{2} \cdot \sum\limits_{p = 1}^{ \infty } { \frac{ 1 }{ p^2 } } = \frac{1}{2} \cdot \sum\limits_{p = 1}^{ \infty } { \frac{ 1 }{ p^2 } } = \frac{ \pi ^2}{12} $</span>, which implies that <span class="math-container">$k = \frac{ \pi ^2 }{12}$</span>.</p>
| Alessio K | 702,692 | <p>We have <span class="math-container">$\int \ln(1+n^{-t})dt=\int \sum_{k=1}^{\infty}(-1)^{k+1}\frac{n^{-tk}}{k}dt=\sum_{k=1}^{\infty}(-1)^{k+1}\frac{1}{k} \int n^{-tk}dt$</span> by <a href="https://en.wikipedia.org/wiki/Fubini%27s_theorem" rel="nofollow noreferrer">Fubini/Tonelli theorems</a> (and using the Taylor series <span class="math-container">$\ln(1+x)=\sum_{k=1}^{\infty}(-1)^{k+1}\frac{x^k}{k}$</span> provided <span class="math-container">$x\in(-1,1]$</span>).</p>
<p>Then <span class="math-container">$\int_{0}^{1}n^{-tk}dt=[-\frac{1}{kln(n)n^{kt}}]|_{t=0}^{t=1}=\frac{1}{kln(n)}-\frac{1}{n^{k}ln(n)k}$</span> by using the substitution <span class="math-container">$u=-tk$</span> and applying the exponential rule <span class="math-container">$\int a^{x}dx=\frac{a^{x}}{ln(a)}+C.$</span></p>
<p>So it follows that <span class="math-container">$ln(n)\int_{0}^{1} ln(1+n^{-t})dt=\sum_{k=1}^{\infty}\frac{(-1)^{k+1}}{k^2}-\sum_{k=1}^{\infty}(-1)^{k+1}\frac{1}{k^{2}n^{k}}$</span>.</p>
<p>Finally taking the limit we have <span class="math-container">$lim_{n\rightarrow\infty}ln(n)\int_{0}^{1} ln(1+n^{-t})dt=\sum_{k=1}^{\infty}\frac{(-1)^{k+1}}{k^2}=\frac{\pi^{2}}{12}.$</span></p>
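The limit can be sanity-checked numerically (a sketch; the choice $n=10^6$ and the midpoint step count are arbitrary, and for finite $n$ the value differs from $\pi^2/12$ by the tail term of order $1/n$):

```python
import math

# Sketch: midpoint-rule check that ln(n) * int_0^1 ln(1 + n^{-t}) dt
# approaches pi^2 / 12 for large n.
def value(n, steps=100_000):
    h = 1.0 / steps
    integral = h * sum(math.log1p(n ** (-(i + 0.5) * h))
                       for i in range(steps))
    return math.log(n) * integral

approx = value(10**6)
```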
|
11,267 | <p>John Von Neumann once said to Felix Smith, "Young man, in mathematics you don't understand things. You just get used to them." This was a response to Smith's fear about the method of characteristics. </p>
<p>Did he mean that with experience and practice, one obtains understanding? </p>
| Cian Moriarty | 117,828 | <p>One thing he might have been referring to is that in mathematics you often have to learn to apply a method without actually understanding what it is all about.</p>
<p>Take for example matrix multiplication. You could (and many students do) beat themselves up about why it is so "weird" in comparison to say multiplication of the reals. But it turns out that yes it has those weird properties because it is perfect for representing a linear transform, amongst other things.</p>
<p>In general I have found it counterproductive to try to understand every aspect of something before moving on to the next thing. I just accept that that's the way it works, and trust that one day it will have some sort of application, be useful, or otherwise "make sense".</p>
<p>Note that the history of mathematics is full of branches of mathematics that didn't even have this sort of utility when they were initially created and explored, but have later turned out to be enormously important. Take for example Boolean algebra and knot theory.</p>
<p>Another important point is that mathematics is the study of abstract logical systems, including wholly invented ones. Therefore it can be pretty fruitless to search for the deeper meaning of a mathematical concept, because it might not even exist. Sure, there might be deep connections or generalisations to other mathematical concepts, and applications might be found, but trying to say that the application is "the true form" of the mathematical concept is putting the cart before the horse.</p>
|
11,267 | <p>John Von Neumann once said to Felix Smith, "Young man, in mathematics you don't understand things. You just get used to them." This was a response to Smith's fear about the method of characteristics. </p>
<p>Did he mean that with experience and practice, one obtains understanding? </p>
| Dahn | 48,999 | <p>This is quite an old post, but I choose to answer, because I feel that I offer a <strong>completely different understanding</strong> of this quote.</p>
<p>I am very surprised to find out my understanding of it is different from others, since when I first read it, I thought to myself: <em>exactly</em>!</p>
<p>To me, this quote is how I feel all throughout studying mathematics. New concepts enter my mind, I learn about their properties and uses, I use them myself, prove theorems with them, yet in the time in between the first sight of the definition and the time when I am fully comfortable with using the concept, there was no <strong>aha!</strong> moment, when I'd finally <strong>understand</strong> it.</p>
<p>Take the example of the <strong>concept of infinity</strong>. You learn about <span class="math-container">$\lim_{x\to\infty}$</span>, understand Zeno's paradox, <span class="math-container">$\lim_{x\to c} \frac {f(x)-f(c)}{x-c}$</span>, understand that <span class="math-container">$\bigcup^{\infty}_{n=1} (0,1-\frac 1n) = (0,1)$</span>, and keep seeing infinity again and again. One has many small epiphanies, but none of them could be considered the moment when one finally <strong>understands</strong> infinity. And yet there is some kind of road beginning at the first moment of utter confusion as to what infinity actually is and resulting in the feeling of infinity not being all that mysterious at all.</p>
<p>Thus, in this sense, the <strong>quote is full of hope</strong>. It gives me the reassurance that I don't need to push myself to try to grasp infinity in one evening, there is no piece of information I need to <strong>understand</strong> in order to say "I got it". Instead, I will gradually get used to its oddness until it becomes a very familiar object.</p>
<p>This process is much better described as <em>getting used to</em> rather than <em>understanding</em>, and thus I understood Neumann's quote in this way and it's been on my mind every time I encounter a new mathematical object.</p>
|
11,267 | <p>John Von Neumann once said to Felix Smith, "Young man, in mathematics you don't understand things. You just get used to them." This was a response to Smith's fear about the method of characteristics. </p>
<p>Did he mean that with experience and practice, one obtains understanding? </p>
| littleO | 40,119 | <p>One word frequently missing from this discussion is "joke". Von Neumann joked around a lot. I'd guess he was 50% joking when he said this. But it's one of those jokes that's funny because it has an element of truth to it. We all know the feeling of "getting used to" ideas that once seemed strange to us, so that now it is hard to remember how unfamiliar the ideas once were.</p>
<p>And, there is something encouraging about this psychological phenomenon. Even if new ideas seem very strange now, a year from now you will be used to them and they will seem much easier.</p>
|
11,267 | <p>John Von Neumann once said to Felix Smith, "Young man, in mathematics you don't understand things. You just get used to them." This was a response to Smith's fear about the method of characteristics. </p>
<p>Did he mean that with experience and practice, one obtains understanding? </p>
| BCLC | 140,308 | <p>My 2 interpretations:</p>
<ol>
<li><p>To the <strong>arrogant</strong>, don't think you know that much. You don't know what you don't know. As <a href="https://math.stackexchange.com/users/88078/nero">Nero</a> aka <a href="https://www.facebook.com/nntaleb/posts/10152256017673375" rel="nofollow noreferrer">Nassim Nicholas Taleb</a> said 'If you don't feel that you haven't read enough, you haven't read enough'</p>
</li>
<li><p>To the <strong>doubtful</strong>, don't feel bad for not understanding/not having understood. This is how maths is.</p>
</li>
</ol>
|
11,267 | <p>John Von Neumann once said to Felix Smith, "Young man, in mathematics you don't understand things. You just get used to them." This was a response to Smith's fear about the method of characteristics. </p>
<p>Did he mean that with experience and practice, one obtains understanding? </p>
| Jos van Nieuwman | 545,640 | <p>I actually reject most claims on this page. Coming from an (astro)physical background, pure mathematics was a blessing for me. It took only a few months for me to cry out to all my physics friends that this pure maths thing, <strong>as opposed to physics</strong>, is something <em>"that can actually be understood!"</em> </p>
<p>Why? In physics, there will always remain another question. In every matter. "But what <em>is</em> an electric field? What <em>is</em> the wave function?" Etc. So I never had the experience of understanding something from A to Z. Not even the mathematics I was taught, because the story of its construction was always curtailed to save time, so ad hoc and heuristic arguments remained.
In pure mathematics it is actually possible to follow an argument entirely down to the axioms, leaving no statement unproven in between. Here the questioning stops, as it is silly to ask "but why these axioms?" That's just because <em>that's the game that we are playing</em>. No one forces you to play this game; you could've played any game you wanted. That isn't to say, and this is important to note, that being able to follow a proof down to the axioms comprises "complete understanding of everything the situation entails". There are always ways to look at it that you haven't seen yet. Ideally one would see all these simultaneously, and if you demand that in order to say you understand it, we must indeed conclude that no one understands anything. </p>
<p>Concerning Von Neumann's quote, I don't believe he would disagree with me on this. Like some other answers I think he was at least half joking, and alluding to the well-known fact that we slowly get used to certain concepts 'playwise' (Dutch: <em>spelenderwijs</em>; there doesn't seem to be an English translation), going from hard and opaque to easy and transparent. The same thing happens in physics too, but the difference is that in the math case, <em>you can justify every assertion</em>. Now if that doesn't exhibit understanding, I don't know what does. </p>
|
389,532 | <p>I'm working out the following problem form Ahlfors' Complex Analysis text:</p>
<p>"Let $X$ and $Y$ be compact sets in a complete metric space $(S,d)$. Prove that there exist $x \in X,y \in Y$ such that $d(x,y)$ is a minimum."</p>
<p>My attempt:</p>
<p>Define $E:=\{d(x,y): x \in X,y \in Y \}$. We will prove that $E \subset \mathbb R$ is compact.</p>
<p>Firstly we will prove that $E$ is bounded:</p>
<p>$X$ is compact, and therefore it is bounded. That is, there exist $x_0 \in X,r_1>0$ such that $d(x,x_0)<r_1$ for all $x \in X$. $Y$ is also compact, and similarly there exist $y_0 \in Y,r_2>0$ such that $d(y,y_0)<r_2$ for all $y \in Y$.</p>
<p>Now, for any $d(x,y) \in E$, we have $$d(x,y) \leq d(x,x_0)+d(x_0,y_0)+d(y,y_0)<r_1+d(x_0,y_0)+r_2 =:M.$$ This proves that $E$ is bounded.</p>
<p>And now we will prove that $E$ is also closed:</p>
<p>Let $a_n=d(x_n,y_n)$ be a sequence in $E$ which is convergent in $\mathbb R$. We will prove that its limit $a:=\lim_{n \to \infty} a_n$ is in $E$.</p>
<p>$(x_n)$ is a sequence in the compact set $X$, therefore it admits a convergent subsequence $(x_{n_k}) \to \bar{x}$. $(y_{n_k})$ is a sequence in the compact set $Y$, and therefore it admits a convergent subsequence $(y_{n_{k_l}}) \to \bar{y}$. The sequence $(x_{n_{k_l}},y_{n_{k_l}})_l$ converges to $(\bar{x},\bar{y})$ in the product space, and the continuity of the metric gives $a=\lim_{l \to \infty} a_{n_{k_l}}=\lim_{l \to \infty} d(x_{n_{k_l}},y_{n_{k_l}})=d(\bar{x},\bar{y})$. This shows that $a \in E$ as required.</p>
<p>In summary I have shown that $E$ is compact, and from a well-known theorem it has a minimum.</p>
<p>Is this proof OK? I think that the completeness of $S$ is unnecessary.</p>
<p>Thank you.</p>
| Martin Citoler | 73,096 | <p>I just wanted to mention that your characterization of compactness is not quite correct. Compact is equivalent to closed and bounded in $\mathbb{R}^n$, but this is not true in general. In the case of metric spaces, a set is compact if and only if it is complete and totally bounded. </p>
<p>However, in this case it is enough to see that $E$ is closed as Plop mentioned in his answer, so you are fine. Another way to see it is to consider the function $d:S\times S\rightarrow\mathbb{R}$, which is continuous, so it must attain a minimum on the compact set $X\times Y\subset S \times S$.</p>
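The continuity argument can be illustrated with a brute-force sketch (two circles as the compact sets; the grid resolution is an arbitrary choice, and the true minimum distance is the distance between the centres minus the two radii):

```python
import math

# Sketch: brute-force illustration that d attains a minimum on X x Y.
# X and Y are two circles (compact sets in the plane); the minimum
# distance between them is |centres| - r1 - r2 = 5 - 1 - 1 = 3.
def circle(cx, cy, r, k=360):
    return [(cx + r * math.cos(2 * math.pi * i / k),
             cy + r * math.sin(2 * math.pi * i / k)) for i in range(k)]

X = circle(0.0, 0.0, 1.0)
Y = circle(5.0, 0.0, 1.0)
best = min(math.dist(p, q) for p in X for q in Y)
```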
|
3,469,392 | <p>Let <span class="math-container">$a_n$</span> be a recursive sequence where <span class="math-container">$a_1=42$</span> and for all <span class="math-container">$n$</span>:</p>
<p><span class="math-container">$$a_{n+1}=a_n+\frac{(-1)^n(1+n!)}{2^nn!}$$</span></p>
<p>After plugging in some values of <span class="math-container">$n$</span> I can "see" that the sequence is bounded in <span class="math-container">$(41,42]$</span></p>
<p>And that the even subsequence <span class="math-container">$a_{2n}$</span> is increasing while the odd subsequence <span class="math-container">$a_{2n+1}$</span> is decreasing</p>
<p>I know that I need to show that the sequence is either monotonically decreasing <span class="math-container">$(a_n\geq a_{n+1})$</span> or monotonically increasing <span class="math-container">$(a_n\leq a_{n+1})$</span></p>
<p>I have tried to set <span class="math-container">$a_{n+1}=a_n=L$</span> too, but it did not work, how should I approach this?</p>
<p>One way that I know is to look at <span class="math-container">$\frac{a_{n+1}}{a_{n}}$</span> and to see if it <span class="math-container">$\leq 1$</span> or <span class="math-container">$\geq 1$</span> </p>
| Fabio Lucchini | 54,738 | <p>First note that
<span class="math-container">$$a_{N+1}=a_1+\sum_{n=1}^N\frac{(-1)^n(1+n!)}{2^nn!}$$</span>
and that
<span class="math-container">$$\left|\frac{(-1)^n(1+n!)}{2^nn!}\right|\sim 2^{-n}$$</span>
as <span class="math-container">$n\to+\infty$</span>, so that the series converges absolutely.</p>
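The convergence is easy to see numerically (a sketch; note that $a_2 = 42 - 1 = 41$ exactly, so the partial sums actually fill the closed interval $[41,42]$, and the terms shrink roughly like $2^{-n}$):

```python
import math

# Sketch: compute partial sums a_n and watch them settle down.
a = 42.0
terms = [a]
for n in range(1, 60):
    a += (-1) ** n * (1 + math.factorial(n)) / (2 ** n * math.factorial(n))
    terms.append(a)
# the last two partial sums differ by about 2**-59
```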
|
522,179 | <p>Everyone says that the dot product is interpreted as the projection of A onto B (if you are dot producting A and B), but isn't that length just equal to |A|$\cos \left( \theta \right)$? Why does the dot product have an extra |B|?</p>
| G. Khanna | 416,891 | <p>I have also heard people say "the dot product is the projection of A onto B..." But, of course, that is not entirely correct. Yes -- the dot product is fundamentally a projection; but the actual computation of the dot product is given by the equation</p>
<p><span class="math-container">$$a \cdot b = \| a \| \| b \| \cos \theta = \sum_i a_i b_i$$</span> </p>
<p>So it is the projection of [<span class="math-container">$a$</span> onto <span class="math-container">$b$</span>] multiplied by [the magnitude of <span class="math-container">$b$</span>].</p>
<p>If you are interested, there is a wonderful, intuitive explanation of the dot product at this site: <a href="https://betterexplained.com/articles/vector-calculus-understanding-the-dot-product/" rel="nofollow noreferrer">https://betterexplained.com/articles/vector-calculus-understanding-the-dot-product/</a></p>
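The agreement of the two formulas can be sketched in a few lines (the sample vectors are arbitrary):

```python
import math

# Sketch: the two expressions for the dot product agree.
a = (3.0, 4.0)
b = (5.0, 12.0)

component_form = sum(x * y for x, y in zip(a, b))        # sum_i a_i b_i
na = math.hypot(*a)                                      # |a| = 5
nb = math.hypot(*b)                                      # |b| = 13
theta = math.atan2(b[1], b[0]) - math.atan2(a[1], a[0])  # angle between a and b
geometric_form = na * nb * math.cos(theta)               # |a| |b| cos(theta)
```

The projection of `a` onto `b` alone would be `na * math.cos(theta)`; the dot product is that length scaled by `nb`, which is exactly the "extra |B|" the question asks about.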
|
2,607,449 | <p>I'm trying to get intuition about why gradient is pointing to the direction of the steepest ascent. I got confused because I found that directional derivative is explained with help of gradient and gradient is explained with help of directional derivative.</p>
<p>Please explain what are the exact steps that lead from
directional derivative defined by the limit $\nabla_{v} f(x_0) = \lim_{h\to 0} \frac{f(x_0+hv)-f(x_0)}h$ to directional derivative defined as dot product of gradient and vector $\nabla_{v} f(x_0) = \nabla f(x_0)\cdot{v}$ ?</p>
<p>In other words how to prove the following? $$\lim_{h\to 0} \frac{f(x_0+hv)-f(x_0)}h = \nabla f(x_0)\cdot{v}$$</p>
| Alejandro Nasif Salum | 481,187 | <p>The limit <span class="math-container">$\lim_{t\to 0} \frac{f(x_0+tv)-f(x_0)}t$</span> gives the <em>definition</em> of the derivative in the direction of the unit vector <span class="math-container">$v$</span> at <span class="math-container">$x=x_0\in \mathbb R^n$</span>, that is <span class="math-container">$\frac{\partial}{\partial v} f (x_0)$</span>.</p>
<p>The formula
<span class="math-container">$$\frac{\partial}{\partial v} f (x_0)=\nabla f(x_0)\cdot v$$</span>
gives a <em>property</em> which is valid under the hypothesis that <strong><span class="math-container">$f$</span> is differentiable</strong> at <span class="math-container">$x=x_0$</span>, and is quite useful for calculations. (If <span class="math-container">$f$</span> is not differentiable at <span class="math-container">$x=x_0$</span>, then that relation doesn't need be true, even if all directional derivatives exist.)</p>
<p>The idea of the proof is that since <span class="math-container">$f$</span> is differentiable at <span class="math-container">$x_0$</span>, the gradient <span class="math-container">$\nabla f(x_0)$</span> exists and
<span class="math-container">$$\lim_{x\to x_0}\frac{|f(x)-f(x_0)-\nabla f(x_0)\cdot(x-x_0)|}{||x-x_0||}=0$$</span></p>
<p>Let's think of the point <span class="math-container">$x=x_0+tv$</span> (say for fixed <span class="math-container">$x_0$</span> and <span class="math-container">$v$</span>). The definition of the directional derivative (after subtracting and adding <span class="math-container">$\nabla f(x_0)\cdot (x_0+tv-x_0)$</span>) leads to</p>
<p><span class="math-container">$$\frac{\partial}{\partial v} f (x_0)=\lim_{t\to 0} \frac{f(x_0+tv)-f(x_0)}t=$$</span>
<span class="math-container">$$=\lim_{t\to 0} \frac{f(x_0+tv)-f(x_0)-\nabla f(x_0)\cdot(x_0+tv-x_0)}{||(x_0+tv)-x_0||}\cdot \frac{|t|\,||v||}{t}+\frac{\nabla f(x_0)\cdot(x_0+tv-x_0)}{t}.$$</span></p>
<p>And because the limit of the first summand is <span class="math-container">$0$</span> (why?) (*) and the second one is constant the result is <span class="math-container">$$\frac{\partial}{\partial v} f (x_0)=\nabla f(x_0)\cdot v,$$</span>
which gives the usual formula.</p>
<p>What might be more instructive for understanding this relation is a case where it fails. Let <span class="math-container">$f \colon \mathbb R^2 \to \mathbb R$</span>, and
<span class="math-container">$$f(x,y)=
\begin{cases}
\tfrac{x^2y}{x^2+y^2} & (x,y)\neq (0,0) \\
0 & (x,y)=(0,0). \\
\end{cases}$$</span></p>
<p>An easy calculation using the definition shows that, if <span class="math-container">$v=(v_x,v_y)$</span> (let's assume <span class="math-container">$||v||=1$</span>), the directional derivative in each direction is
<span class="math-container">$$\frac{\partial}{\partial v} f (0,0)=\frac{v_x^2 v_y}{v_x^2+v_y^2}=v_x^2 v_y$$</span>
(in particular, both <span class="math-container">$\frac{\partial}{\partial x} f (0,0)$</span> and <span class="math-container">$\frac{\partial}{\partial y} f (0,0)$</span> are zero, that is <span class="math-container">$\nabla f(0,0)=(0,0)$</span>).</p>
<p>So, if the 'dot-product formula' were valid, it should be the case that <span class="math-container">$$\frac{\partial}{\partial v} f (0,0)=(0,0)\cdot (v_x,v_y)=0,$$</span>
which only happens in the directions of the <span class="math-container">$x$</span> and <span class="math-container">$y$</span> axes. (BTW, this also proves that <span class="math-container">$f$</span> is not differentiable at <span class="math-container">$(0,0)$</span>.)</p>
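A finite-difference sketch of this counterexample (the sample direction $v=(1,1)/\sqrt 2$ is an arbitrary off-axis choice): the directional derivative at the origin is $v_x^2 v_y \neq 0$, while the dot-product formula predicts $0$.

```python
import math

# Sketch: for f(x,y) = x^2 y / (x^2 + y^2) with f(0,0) = 0, the directional
# derivative at the origin along a unit vector v is v_x^2 * v_y, yet the
# gradient there is (0, 0), so the dot-product formula fails off the axes.
def f(x, y):
    return 0.0 if (x, y) == (0.0, 0.0) else x * x * y / (x * x + y * y)

vx = vy = 1 / math.sqrt(2)                          # v = (1,1)/sqrt(2)
t = 1e-6
dir_deriv = (f(t * vx, t * vy) - f(0.0, 0.0)) / t   # finite difference
predicted = vx * vx * vy                            # = 1/(2*sqrt(2))
dot_formula = 0.0 * vx + 0.0 * vy                   # grad(0,0) . v = 0
```

(The finite difference is exact here for any `t`, because `f` is homogeneous of degree one along rays through the origin.)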
<p>I suggest you try to imagine why the way in which directional derivatives vary as we change direction <strong>in this case</strong> (think of the <span class="math-container">$xy$</span> plane as the floor) are not compatible with the existence of a tangent plane (differentiability).</p>
<hr>
<p>(*) In order to verify that
<span class="math-container">$$\lim_{t\to 0} \frac{f(x_0+tv)-f(x_0)-\nabla f(x_0)\cdot(x_0+tv-x_0)}{||(x_0+tv)-x_0||}\cdot \frac{|t|\,||v||}{t}=0,$$</span>
first note that <span class="math-container">$\frac{|t|\,||v||}{t}$</span> equals plus or minus <span class="math-container">$||v||$</span>, depending on the sign of <span class="math-container">$t$</span>, which means it is a bounded function of <span class="math-container">$t$</span> (<span class="math-container">$t\neq 0$</span>). So, to prove our claim it is enough to show that
<span class="math-container">$$\lim_{t\to 0} \frac{f(x_0+tv)-f(x_0)-\nabla f(x_0)\cdot(x_0+tv-x_0)}{||(x_0+tv)-x_0||}=0.$$</span></p>
<p>But this is a consequence of <span class="math-container">$f$</span> being differentiable. Indeed, we say that <span class="math-container">$f\colon \mathbb R^n \rightarrow \mathbb R$</span> is differentiable at <span class="math-container">$x_0$</span> if and only if
<span class="math-container">$$\lim_{x\to x_0} \frac{f(x)-f(x_0)-\nabla f(x_0)\cdot(x-x_0)}{||x-x_0||}=0.$$</span></p>
<p>Our expression just has <span class="math-container">$x_0+tv$</span> instead of <span class="math-container">$x$</span>, and as the limit is for <span class="math-container">$t\to 0$</span>, it is also true that <span class="math-container">$x_0+tv\to x_0$</span>. The only difference is that the definition of differentiable function uses a double/triple/etc. limit (think of sequences of points of <span class="math-container">$\mathbb R^n$</span> converging to <span class="math-container">$x_0$</span> from every direction and in all sorts of simple or complicated paths), while in our limit <span class="math-container">$x$</span> tends to <span class="math-container">$x_0$</span> only along the straight line in the direction of <span class="math-container">$v$</span>. But since <span class="math-container">$f$</span> is differentiable at <span class="math-container">$x_0$</span>, the last limit is <span class="math-container">$0$</span>, and the same is true if we restrict to the subset of <span class="math-container">$\mathbb R^n$</span> that is such line.</p>
|
2,607,449 | <p>I'm trying to get intuition about why gradient is pointing to the direction of the steepest ascent. I got confused because I found that directional derivative is explained with help of gradient and gradient is explained with help of directional derivative.</p>
<p>Please explain what are the exact steps that lead from
directional derivative defined by the limit $\nabla_{v} f(x_0) = \lim_{h\to 0} \frac{f(x_0+hv)-f(x_0)}h$ to directional derivative defined as dot product of gradient and vector $\nabla_{v} f(x_0) = \nabla f(x_0)\cdot{v}$ ?</p>
<p>In other words how to prove the following? $$\lim_{h\to 0} \frac{f(x_0+hv)-f(x_0)}h = \nabla f(x_0)\cdot{v}$$</p>
| BRSTCohomology | 474,595 | <p>This question is best answered by starting from the definition of the derivative.</p>
<p>Let $f: \mathbb{R}^3 \to \mathbb{R}$. (basically the functions you're thinking of in your multivariable class). The good definition of the derivative (read: the one I like and the one that generalizes well to higher dimensions) is the <em>unique</em> linear transformation $L: \mathbb{R}^3 \to \mathbb{R}$ that satisfies:</p>
<p>$$\lim_{h\to 0}\frac{f(x+h) - f(x) - L(h)}{\mid h \mid} = 0.$$</p>
<p>Simply put, the derivative at a point is the linear transform that best approximates the function in a small neighborhood of that point.</p>
<p>Linear algebra tells us that such a linear transform $L: \mathbb{R}^3 \to \mathbb{R}$ is actually given by a matrix (a vector in disguise): $\begin{bmatrix} m_1 & m_2 & m_3 \end{bmatrix}$. Its input h is given by a vector $\begin{bmatrix} h_1 \\ h_2 \\ h_3 \end{bmatrix}$ and calculating $L(h)$ is just a matter of doing matrix multiplication. For this case, it is exactly a dot product between the input and the gradient!</p>
<p>Okay, so what I have shown here is a "general derivative" i.e. a gradient. Now let's find out what a directional derivative is. </p>
<p>It is the same definition, </p>
<p>$$\lim_{h\to 0}\frac{f(x+h) - f(x) - L(h)}{\mid h \mid} = 0.$$</p>
<p>except we restrict $h$ to be proportional to some desired direction vector $v$. This means we "approach" the limit along one specific direction. Then the quantity $L(h)$ is a number that is a directional derivative. If you already know this $L$—remember, this is the gradient—then all you have to do is take a dot product between $L$ and a specific unit vector! This should unify the limit definition and the "dot product definition", and all we needed was a little matrix multiplication (the dot product is matrix multiplication in disguise) and a refinement of the limit definition.</p>
<hr>
<p><strong>Advanced material</strong> </p>
<p>In general (i.e. maps that aren't just from 3-D space to 1-D space), derivatives will be matrices of a certain dimension, and directional/partial derivatives will be vectors in general, not just numbers. Also, it is possible for certain pathological (badly-behaved) functions to have all directional derivatives, but not be differentiable (i.e. not satisfy the limit definition). </p>
|
2,607,449 | <p>I'm trying to get intuition about why gradient is pointing to the direction of the steepest ascent. I got confused because I found that directional derivative is explained with help of gradient and gradient is explained with help of directional derivative.</p>
<p>Please explain what are the exact steps that lead from
directional derivative defined by the limit $\nabla_{v} f(x_0) = \lim_{h\to 0} \frac{f(x_0+hv)-f(x_0)}h$ to directional derivative defined as dot product of gradient and vector $\nabla_{v} f(x_0) = \nabla f(x_0)\cdot{v}$ ?</p>
<p>In other words how to prove the following? $$\lim_{h\to 0} \frac{f(x_0+hv)-f(x_0)}h = \nabla f(x_0)\cdot{v}$$</p>
| Acccumulation | 476,070 | <p>In basic calculus, you deal with functions that have one-dimensional input and one-dimensional output. When you take the derivative, you get another 1D -> 1D function. Once you start dealing with functions over more dimensions, however, things get much more complicated. The directional derivative is a way to simplify things back down. Instead of expressing a function f as being over vectors in general, i.e. f(<strong>v</strong>), you express the input as being some scalar h times a unit vector <strong>u</strong>. If <strong>u</strong> is fixed, you can have a function with a one-dimensional input by varying h; you can define g<sub><strong>u</strong></sub>(h) = f(h <strong>u</strong>). Now you can get a derivative g'(h) that is also just a simple, 1D -> 1D function.</p>
<p>Once you understand the concept of the directional derivatives, the next main idea that you need to understand is the idea that the derivative is linear. That is, if <strong>v</strong> = <strong>a</strong>+<strong>b</strong>, then g'<sub><strong>v</strong></sub>(h) = g'<sub><strong>a</strong></sub>(h) + g'<sub><strong>b</strong></sub>(h) (note that these vectors are not restricted to being unit). So if we have some basis <strong>i</strong>, <strong>j</strong>, <strong>k</strong> for our vector space, then we can find the derivative in any direction by decomposing <strong>v</strong> into the basis vectors, and multiplying the coefficients by the corresponding directional derivatives. So if <strong>v</strong> = c<sub>1</sub><strong>i</strong> + c<sub>2</sub><strong>j</strong> + c<sub>3</sub><strong>k</strong>, then g'<sub><strong>v</strong></sub> = c<sub>1</sub> g'<sub><strong>i</strong></sub> + c<sub>2</sub> g'<sub><strong>j</strong></sub> + c<sub>3</sub> g'<sub><strong>k</strong></sub>. Notice that that’s just the dot product of [c<sub>1</sub>,c<sub>2</sub>,c<sub>3</sub>] and [g'<sub><strong>i</strong></sub>,g'<sub><strong>j</strong></sub>,g'<sub><strong>k</strong></sub>]. The former is just the vector in the given basis, while the latter is the gradient in the given basis.</p>
<p>So now what is the direction of greatest ascent? For that, we want g'<sub><strong>v</strong></sub>/|<strong>v</strong>| to be maximum. We can use the Cauchy-Schwarz inequality to find that this happens when the vector is pointing in the same direction as the gradient. If we were to change to another basis [<strong>g</strong>, <strong>a</strong>, <strong>b</strong>] where <strong>g</strong> is the gradient, and <strong>a</strong> and <strong>b</strong> are orthogonal to <strong>g</strong>, then the directional derivatives of <strong>a</strong> and <strong>b</strong> will be zero, so the directional derivative of an arbitrary vector <strong>v</strong> will depend on the <strong>g</strong> component of <strong>v</strong>, which is maximum when <strong>v</strong> is in the same direction as <strong>g</strong>.</p>
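<p>A small numerical sketch of this maximization (the sample function and base point are my own choices): sampling many unit directions, the largest directional derivative matches the length of the gradient, attained along the gradient's direction, exactly as Cauchy-Schwarz predicts.</p>

```python
import math

def grad(x, y):
    # Gradient of the sample f(x, y) = x**2 + 3*x*y (my own choice).
    return (2.0*x + 3.0*y, 3.0*x)

gx, gy = grad(1.0, 2.0)
gnorm = math.hypot(gx, gy)

# Directional derivative grad . u over many sampled unit directions u.
best_slope = max(gx*math.cos(2.0*math.pi*k/3600.0) +
                 gy*math.sin(2.0*math.pi*k/3600.0) for k in range(3600))
print(best_slope, gnorm)   # the maximal slope equals |grad|
```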
|
2,153,580 | <p>I'm trying to prove that $K((x))/K(x)$ is not an algebraic extension, where $K$ is a field of characteristic $p>0$, and $K((x))$ is the field of fractions of $K[[x]]:=\{\sum_{i=0}^{\infty}a_ix^i\mid a_i\in K\}$.</p>
<p>To make it more concrete, I'm considering the case $K=\mathbb{Z}/p\mathbb{Z}$ and $x=p$, so that we need to find $\tau\in\mathbb{Q}_p$ transcendental with respect to the field extension $\mathbb{Q}_p|\mathbb{Q}$. </p>
<p>I thought about $\tau:=\sum_{n=0}^{\infty}\frac{p^n}{n!}$, which I've checked converges in the $p$-adic metric, but I'm not sure whether it is transcendental or not. It's very tempting to say that $e^p=\sum_{n=0}^{\infty}\frac{p^n}{n!}$, and since $e$ is transcendental, we're done, but I know this probably doesn't make any formal sense. How do I resolve this?</p>
| Lubin | 17,760 | <p>My only suggestion would be to try to imitate the ordinary proof of Liouville’s Theorem, which can be used to show that $\sum_n10^{n!}$ is transcendental over $\Bbb Q$. This would mean showing that $\sum_n t^{n!}$ is transcendental over $\kappa(t)$, for (say) $\kappa$ a finite field.</p>
|
779,328 | <p>I am almost a beginner in the topics of Fourier transform. So, I am asking this question here.</p>
<p>Let <span class="math-container">$n=3$</span> and let <span class="math-container">$\mu_t$</span> denote surface measure on the sphere <span class="math-container">$|x|=t$</span>. Then how do we show that
<span class="math-container">$$
\frac{\sin 2\pi t|\xi|}{2\pi|\xi|}=\frac{1}{4\pi t}\hat{\mu_{t}}(\xi)
$$</span>
where <span class="math-container">$\hat{\mu_{t}}$</span> is the Fourier transform of <span class="math-container">$\mu_t$</span>.</p>
| Community | -1 | <p><span class="math-container">$\def\Rbf{\mathbf{R}}$</span>
<span class="math-container">$\def\inta{\int_0^{\pi}}$</span>
<span class="math-container">$\def\intb{\int_0^{2\pi}}$</span></p>
<p>This is a very nice exercise for multivariable calculus. </p>
<p>Denote the surface measure of the unit sphere <span class="math-container">$S^{d-1}\subset\Rbf^d$</span> as <span class="math-container">$d\sigma$</span>. Then its Fourier transform is defined as
<span class="math-container">$\def\ft#1{\widehat{#1}}$</span>
<span class="math-container">$$
\ft{d\sigma}(\xi) = \int_{S^{d-1}}e^{-2\pi ix\cdot \xi}\; d\sigma(x).
$$</span>
The detailed calculation appears in Stein and Weiss's <em><a href="https://www.jstor.org/stable/j.ctt1bpm9w6" rel="nofollow noreferrer">Introduction to Fourier Analysis on Euclidean Spaces</a></em> (Ch5.3 page 154).</p>
<p>When <span class="math-container">$d=3$</span>, for <span class="math-container">$\xi = (0,0,\xi_3)$</span> with <span class="math-container">$\xi_3>0$</span>, one has
<span class="math-container">$$
\ft{d\sigma}(\xi)
= \inta\intb e^{-2\pi i\cos\theta_1\xi_3}\sin\theta_1\;d\theta_2d\theta_1
$$</span>
Thus
<span class="math-container">$$
\ft{d\sigma}(\xi)
= 2\pi \inta e^{-2\pi i\cos\theta_1\xi_3}\sin\theta_1\;d\theta_1\;.
$$</span>
By change of variable <span class="math-container">$u = -\cos\theta_1$</span> and Euler's formula, one has
<span class="math-container">$$
\ft{d\sigma}(\xi) = 2\pi \int_{-1}^1e^{2\pi iu\xi_3}\;du = \frac{2\pi}{2\pi i\xi_3} e^{2\pi iu\xi_3}\Big|_{-1}^1 = \frac{2\sin(2\pi\xi_3)}{\xi_3}\tag{1}
$$</span></p>
<p>On the other hand, observe that <span class="math-container">$\ft{d\sigma}$</span> is radial:
<span class="math-container">$$
\ft{d\sigma}(A\xi) = \int_{S^{d-1}}e^{-2\pi ix\cdot(A\xi)}\;d\sigma(x) =
\int_{S^{d-1}}e^{-2\pi i(A^{T}x)\cdot(\xi)}\;d\sigma(x) = \ft{d\sigma}(\xi)
$$</span>
for any orthogonal matrix <span class="math-container">$A$</span>. Consequently, (1) implies that
<span class="math-container">$$
\ft{d\sigma}(\xi) = \frac{2\sin(2\pi|\xi|)}{|\xi|},\quad \xi \ne 0\;.
$$</span></p>
<p>The case when the sphere is not of radius <span class="math-container">$1$</span> can be done by change of variable. </p>
<p>For other references, see also this MO question: <a href="https://mathoverflow.net/q/149692">Fourier transform of the unit sphere</a>.</p>
<p>See also these two sets of notes: </p>
<ul>
<li><a href="https://math.unm.edu/~blair/math565f17/ftsurfacesphere_notes_f17.pdf" rel="nofollow noreferrer">https://math.unm.edu/~blair/math565f17/ftsurfacesphere_notes_f17.pdf</a></li>
<li><a href="http://individual.utoronto.ca/jordanbell/notes/sphericalmeasure.pdf" rel="nofollow noreferrer">http://individual.utoronto.ca/jordanbell/notes/sphericalmeasure.pdf</a></li>
</ul>
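<p>The one-dimensional integral obtained above after the change of variables can also be checked numerically against the closed form; the grid size and the test value of <span class="math-container">$\xi_3$</span> below are arbitrary choices.</p>

```python
import math

def sphere_ft(xi3, n=20000):
    # Trapezoid rule for 2*pi * Int_0^pi e^{-2 pi i cos(th) xi3} sin(th) dth;
    # returns (real part, imaginary part).
    h = math.pi / n
    re = im = 0.0
    for k in range(n + 1):
        th = k * h
        w = 0.5 if k in (0, n) else 1.0
        phase = -2.0 * math.pi * math.cos(th) * xi3
        re += w * math.cos(phase) * math.sin(th)
        im += w * math.sin(phase) * math.sin(th)
    return 2.0 * math.pi * re * h, 2.0 * math.pi * im * h

xi3 = 0.7                 # arbitrary test frequency
closed = 2.0 * math.sin(2.0 * math.pi * xi3) / xi3
re, im = sphere_ft(xi3)
print(re, closed, im)     # real part matches the closed form; imaginary ~ 0
```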
|
17,885 | <p>In most education systems, Mathematics is a compulsory subject from primary school all the way to the start of university. A common reason given is that essential concepts like addition and multiplication are taught to the children. </p>
<p>But for many high school students, especially those who are keen on pursuing the humanities, they do not see any point in studying Mathematics for the rest of their schooling life, or how any concept in Mathematics could possibly be applied in their future work.</p>
<p>Why is Mathematics a compulsory subject for high school students, especially those who are clearly studying in Humanities streams?</p>
| johnnyb | 9,594 | <p>Math teaches reasoning using scenarios that are exact enough for there to be concrete right and wrong answers. Many areas of life require reason, but mathematics forces students to actually engage in the discipline of finding the <em>right</em> answer, because there is only one right answer.</p>
<p>For example, some people think that we can teach reasoning by doing mock trials, and that this is a much more practical mode of teaching reasoning that people can relate to. It may be more relatable, but, the problem is that it is not always clear in such a scenario why certain answers are right or wrong. In other words, it allows bad reasoning to be perpetuated as long as it is done sufficiently glibly. In math, no matter how well I speak, I will not convince my math teacher that X + 5 = 7 means that X = 3.</p>
<p>When I give presentations about mathematics, I always include a slide that says:</p>
<blockquote>
<p>Math gives you practice on core reasoning skills with concrete
problems that have definite answers in order to achieve mastery of the
reasoning process which will allow you to apply the skill to fuzzier
problems where answers are not as certain.</p>
</blockquote>
<p>There are also many analogs to specific mathematical techniques which I have (informally) found that math-trained people are more likely to do well in. One is in reconfiguring processes. In math, we train people to do prime number factoring. This is exactly the type of skill needed when figuring out what is needed to reconfigure a process. You have to be able to break down the process into its primary components, and rebuild it using those. Being able to look at something and "see" what basic building blocks it is composed of is essential to thought. Again, math gives us practice in these core reasoning skills on concrete problems that have definite answers so that we can achieve mastery of the reasoning process which allows us to apply the skill (with confidence) to fuzzier problems where the answers are not as certain.</p>
<p>There are parts of math which are needed just to provide grist to the logical reasoning process. There are parts of math which are good direct (or indirect) analogs to specific reasoning processes that will be used in the future. There are parts of math which are just part of being an educated human. And, for sure, there are parts of math that we should probably stop cluttering the textbooks with. But the latter are few and far between.</p>
<p>One example - I think it was Hacker that complained about logarithms and exponents being taught to everyone. That is literally the most useful thing that math teaches that isn't evident from normal everyday life. Knowing how exponentiation looks like is a key component of understanding (a) debt, (b) interest, (c) investing, which are things that <em>everyone</em> will likely be involved in at some point in life. If you don't understand how debt grows exponentially, you won't understand its danger.</p>
<p>Final note - I do think we should teach math with the reasoning skills we want students to know more clearly in mind (and even should share them with students). I think this would improve (a) the student's attachment to the curriculum, (b) the quality of the curriculum itself, and (c) the public's understanding of why the curriculum is there.</p>
|
372,181 | <p>The theory of real closed fields is decidable. The <a href="https://en.wikipedia.org/wiki/Hyperreal_number" rel="noreferrer">hyperreals</a> satisfy that theory, so we can interpret statements in the theory of real closed fields as being about hyperreals.</p>
<p>If we add a unary predicate for "is a standard real number" to the language, is the theory still decidable?</p>
| Matt F. | 44,143 | <p>Here are some comments on quantifier elimination for this theory, focusing on what expansion of the language might make it work. This may be parallel to identifying a model companion, or determining the structures to consider in a more model-theoretic proof, as Emil Jerabek was proposing.</p>
<p>First, we will need a symbol for <strong>having a standard real in between</strong>, <span class="math-container">$SBet(x,y)$</span>, to eliminate the quantifier in
<span class="math-container">$$(\exists s)(Std(s)\ \&\ x \le s \le y)$$</span>
We can express many things in terms of this symbol, e.g.</p>
<ul>
<li><span class="math-container">$x$</span> is standard iff <span class="math-container">$SBet(x,x)$</span>;</li>
<li><span class="math-container">$x$</span> is less than some standard real iff <span class="math-container">$x<0 \vee SBet(x,x+1)$</span>;</li>
<li><span class="math-container">$x$</span> is in Emil's ideal <span class="math-container">$O$</span> iff <span class="math-container">$x$</span> and <span class="math-container">$-x$</span> are each less than some standard real</li>
</ul>
<p>Second, we will need symbols for <strong>algebraic functions</strong>. We need to be able to say that <span class="math-container">$\sqrt{x}-y$</span> is standard, i.e. a quantifier elimination for expressions like
<span class="math-container">$$(\exists r)(Std(r)\ \&\ (r+y)^2 = x)$$</span></p>
<ul>
<li><p>One obvious elimination does not work, e.g. if <span class="math-container">$x=(y+1)^2$</span> and <span class="math-container">$y$</span> is non-standardly large, then <span class="math-container">$\sqrt{x}-y$</span> is standard but <span class="math-container">$x-y^2$</span> is non-standard.</p>
</li>
<li><p>Identifying algebraic functions by bracketing roots in rational intervals also will not work, since we will need roots that are not in rational intervals.</p>
</li>
<li><p>We can instead use the <a href="https://cs.nyu.edu/mishra/PUBLICATIONS/90.p120-mishra.pdf" rel="nofollow noreferrer">sign representation</a>, specifying the signs of the derivatives of the polynomial. So we add a function <span class="math-container">$a_d$</span> of <span class="math-container">$2d+1$</span> variables for each degree <span class="math-container">$d$</span>, where we interpret, e.g. <span class="math-container">$a_2(c_0, c_1, c_2; d_1, d_2)$</span> as "the root of <span class="math-container">$c_0 + c_1 x + c_2 x^2$</span> for which <span class="math-container">$(c_0 + c_1 x + c_2 x^2)'$</span> has the same sign as <span class="math-container">$d_1$</span> and <span class="math-container">$(c_0 + c_1 x + c_2 x^2)''$</span> has the same sign as <span class="math-container">$d_2$</span>, or <span class="math-container">$0$</span> if that description does not specify a unique <span class="math-container">$x$</span>."</p>
</li>
</ul>
<p>Even these additions don't make the quantifier elimination obvious. How can we eliminate the quantifier over <span class="math-container">$r$</span> in
<span class="math-container">$$(\exists r)(SBet(r + x,\ r + 2x)\ \&\ y < r < z)\ \text{ or}$$</span>
<span class="math-container">$$(\exists r)(SBet(r^3 - r + x,\ r^3 - r + 2x)\ \&\ y < r < z)\ ?$$</span></p>
|
1,313,216 | <p>Its my first time on here and my maths is poor so please be kind. I am working on a Masters dissertation focused on document clustering methods in which I would like to apply a weight based on the time interval between two documents. </p>
<p>I am looking for some help coming up with a function to express a time interval with results between 0 and 1. The reason I want to map the results to a maximum value of one is that this is being applied as a weight to a cosine similarity metric where identical articles would receive a cosine measurement of 1 etc.</p>
<p>Example 1, the date difference between 31/05/2015 and 20/06/2015 is 20 days.
Example 2, the date difference between 31/05/2015 and 20/01/2015 is 131 days.</p>
<p>I would like to apply a function whereby example 1 has a higher value (towards the 1 end of the scale) and example 2 has a lower value (towards the 0 end of the scale). If the date difference was only 1, the value of 1 should apply.</p>
<p>I hope this makes sense. Any help anyone can offer me would be greatly appreciated. </p>
<p>Thank You</p>
<p>Claire</p>
| Klaus Draeger | 65,787 | <p>One standard approach in this kind of situation is to consider some discount factor $d<1$ such that the value of an example decreases by this factor for each time unit which has passed, i.e. for an example with initial value $v_0$ and age $t$, you would use the value $v = v_0 \cdot d^t$.</p>
|
2,243,646 | <p>Suppose a, b, c are positive real numbers such that </p>
<p>$$(1+a+b+c)\left(1+\frac 1a+\frac 1b+\frac 1c\right)=16$$
Then is it true that we must have $a+b+c=3$ ?</p>
<p>Please help me to solve this. Thanks in advance. </p>
| Michael Rozenberg | 190,319 | <p>By Cauchy-Schwarz $$(1+a+b+c)\left(1+\frac{1}{a}+\frac{1}{b}+\frac{1}{c}\right)\geq\left(1\cdot1+\sqrt{a}\cdot\frac{1}{\sqrt{a}}+\sqrt{b}\cdot\frac{1}{\sqrt{b}}+\sqrt{c}\cdot\frac{1}{\sqrt{c}}\right)^2= 16$$
Equality in Cauchy-Schwarz forces the two vectors to be proportional, which here means $a=b=c=1$; hence $a+b+c=3$.</p>
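<p>A quick numerical spot-check of the inequality and of the equality case (the sampling range and sample count below are arbitrary choices):</p>

```python
import random

def lhs(a, b, c):
    return (1 + a + b + c) * (1 + 1/a + 1/b + 1/c)

random.seed(0)
worst = min(lhs(random.uniform(0.1, 10.0), random.uniform(0.1, 10.0),
                random.uniform(0.1, 10.0)) for _ in range(10000))
print(worst, lhs(1.0, 1.0, 1.0))   # every sample stays >= 16; equality gives 16.0
```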
|
3,798,402 | <p>For any natural number <span class="math-container">$n$</span>, how would one calculate the integral</p>
<p><span class="math-container">$$ \int_{0}^{2 \pi} |1 - ae^{i\theta}|^n \ d \theta $$</span></p>
<p>where <span class="math-container">$a$</span> is a complex number such that <span class="math-container">$|a| = 1$</span>. I really just need <span class="math-container">$n$</span> to be even, but I'm not sure how much this changes anything. I also don't know how necessary <span class="math-container">$a =1$</span> is in the problem either. I can see this function gives the distance from <span class="math-container">$1$</span> to a point on the circle of radius <span class="math-container">$|a|$</span>, but I am not sure how to compute this integral.</p>
| Felix Marin | 85,343 | <p><span class="math-container">$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,}
\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,}
\newcommand{\ic}{\mathrm{i}}
\newcommand{\mc}[1]{\mathcal{#1}}
\newcommand{\mrm}[1]{\mathrm{#1}}
\newcommand{\pars}[1]{\left(\,{#1}\,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,}
\newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}}
\newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$</span>
<span class="math-container">\begin{align}
&\bbox[5px,#ffd]{\left.\int_{0}^{2\pi}\verts{1 - a\expo{\ic\theta}}^{\, n}
\,\dd\theta\,\right\vert_{{\large a\ \in\ \mathbb{C}} \atop {\large\verts{a}\ =\ 1}}} =
\oint_{\verts{z}\ =\ 1}\pars{\verts{1 - z}^{2}}^{\, n/2}
\,{\dd z \over \ic z}
\\[5mm] = &
\oint_{\verts{z}\ =\ 1}
\bracks{\pars{1 - z}\pars{1 - \overline{z}}}^{\, n/2}
\,{\dd z \over \ic z} =
\oint_{\verts{z}\ =\ 1}
\pars{1 - z - \overline{z} + z\overline{z}}^{\, n/2}
\,{\dd z \over \ic z}
\\[5mm] = &\
\oint_{\verts{z}\ =\ 1}
\pars{1 - z - {z\overline{z} \over z} + z\overline{z}}^{\, n/2}
\,{\dd z \over \ic z} =
\oint_{\verts{z}\ =\ 1}\pars{2 - z - {1 \over z}}^{\, n/2}
\,{\dd z \over \ic z}
\\[5mm] = &\
-\ic\oint_{\verts{z}\ =\ 1}{\pars{-1 + 2z - z^{2}}^{\, n/2} \over z^{n/2 + 1} }\,\dd z =
-\ic\oint_{\verts{z}\ =\ 1}{\bracks{-\pars{z -1}^{\, 2}}^{\, n/2} \over z^{n/2 + 1} }\,\dd z
\end{align}</span></p>
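<p>Reading off the residue at <span class="math-container">$z=0$</span> from the last integrand gives, for even <span class="math-container">$n$</span>, the coefficient of <span class="math-container">$z^{n/2}$</span> in <span class="math-container">$(-1)^{n/2}(z-1)^{n}$</span>, i.e. the value <span class="math-container">$2\pi\binom{n}{n/2}$</span>. This closed form is my own consolidation of the computation above, and it is easy to test numerically (with <span class="math-container">$|a|=1$</span> the integral does not depend on <span class="math-container">$a$</span>, by a shift of <span class="math-container">$\theta$</span>):</p>

```python
import math

def integral(n, m=20000):
    # Riemann sum for Int_0^{2 pi} |1 - e^{i th}|**n dth; with |a| = 1 the
    # integral is independent of a, and |1 - e^{i th}|**2 = 2 - 2 cos th.
    h = 2.0 * math.pi / m
    return h * sum((2.0 - 2.0 * math.cos(k * h)) ** (n / 2.0) for k in range(m))

for n in (2, 4, 6):
    print(n, integral(n), 2.0 * math.pi * math.comb(n, n // 2))
```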
|
823,641 | <p>Find and classify the critical points of this function: $f(x,y)= (x^y)-(xy)$ in the domain $x>0, y>0$.</p>
<p>I am having trouble treating x and y as constants when taking partial derivatives.</p>
| AlainD | 67,021 | <p>Claude Leibovici approach is correct and elegant. However in the general case, you can avoid treating variables as constants, by using <em>derivatives</em> rather than <em>differentials</em>. </p>
<p>A derivative is an operator <em>d</em> such that $d(f+g) = d(f) + d(g) = df + dg$, and $d(fg) = f d(g) + g d(f) = f dg+g df$. For <strong><em>the</em></strong> derivative, you further have $df(u) = f'(u) du$.</p>
<p>For example: $de^u = e^u du$, $d\log(u) = \frac{du}{u}$, and
$$\begin{align*}
d(x^y) &= d(e^{y \log x}) = e^{y \log x} d(y \log x) = x^y [\log x dy+y d(\log x)] = x^y [\log x dy+y \frac {dx} x]\\
&= (y x^{y-1}) dx + (x^y \log x)dy.
\end{align*}$$</p>
<p>This is coherent with $\frac {\partial x^y} {\partial x} = y x^{y-1}$ and $\frac {\partial x^y} {\partial y} = x^y \log x$. </p>
<p>In your case (and slowly by slowly):
$$\begin{align*}
df &= d(x^y-xy) = d(x^y) - d(xy) \\
&= (y x^{y-1}) dx + (x^y \log x)dy - xdy - ydx \\
& = (y x^{y-1} -y)dx + (x^y\log x -x)dy.
\end{align*}$$</p>
<p>which is coherent with Claude's results.</p>
<p>On the critical points, the derivative vanishes:
$$\begin{cases}
y \cdot x^{y-1} - y &= 0\\
x^y\log x -x &= 0
\end{cases}$$</p>
<p>The first line becomes $y=0$ or $x^{y-1}=1$, that is $y=0$, $x=1$ or $y=1$. The case $y=0$ is outside the domain, and $x=1$ fails the second equation (it gives $-1=0$), so $y=1$.</p>
<p>Putting $y=1$ in the second line gives $x\log x = x$; since $x>0$, this means $\log x = 1$, that is $x=e$.</p>
<p>Hence, there is one critical point $(x,y)=(e,1)$. Apply the second-derivative test at $(e,1)$ to classify it as a maximum, a minimum, or a saddle point.</p>
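<p>A numerical check of the critical point, using central differences instead of the exact partials:</p>

```python
import math

def f(x, y):
    return x**y - x*y

def grad(x, y, h=1e-6):
    # Central differences for the two partial derivatives.
    return ((f(x + h, y) - f(x - h, y)) / (2*h),
            (f(x, y + h) - f(x, y - h)) / (2*h))

fx, fy = grad(math.e, 1.0)
print(fx, fy)       # both partials vanish at (e, 1)
gx, gy = grad(1.0, 2.0)
print(gy)           # about -1, so points with x = 1 are not critical
```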
|
1,210,948 | <blockquote>
<p><strong>(Convergence of Types Theorem)</strong> Suppose that $F_n(u_nx+v_n) \Rightarrow F(x)$ and $F_n(a_nx+b_n) \Rightarrow G(x)$, where $u_n>0, a_n>0$ and $F$ an $G$ are non-degenerate. Then there exist $a>0$ an $b \in\mathbb R$ such that $a_n/u_n \to a$ and $(b_n-v_n)/u_n \to b$, and $F(ax+b) = G(x).$</p>
</blockquote>
<p>In Billingsley's textbook, he proves the above theorem using the following lemmas.</p>
<blockquote>
<ol>
<li>If $F_n \Rightarrow F$, $a_n \to a$ and $b_n \to b$, then $F_n(a_nx +b_n) \Rightarrow F(ax+b)$.</li>
<li>If $F_n \Rightarrow F$ and $a_n \to \infty$, then $F_n(a_nx) \Rightarrow \Delta(x)$, where $\Delta$ is the distribution function of the point mass at $0$.</li>
<li>If $F_n \Rightarrow F$ and $b_n$ is unbounded, then $F_n(x+b_n)$ cannot converge weakly.</li>
<li>If $F_n(x) \Rightarrow F(x)$ and $F_n(a_nx+b_n) \Rightarrow G(x)$, where $F$ and $G$ are non-degenerate, then $$ 0<\inf_n a_n \leq \sup_n a_n < \infty;\; \sup_n |b_n| < \infty .$$</li>
</ol>
</blockquote>
<p>I have difficulty in understanding the proof of the fourth lemma <strong>(highlighted parts)</strong>. The argument in the book is as follows. Suppose that $a_n$ is not bounded above. Arrange by passing to a sub-sequence that $a_n \to \infty$. Then by lemma 2,
$$F_n(a_nx) \Rightarrow \Delta(x).(*)$$
<strong>Then since
$$F_n\left[a_n \left(x+\frac{b_n}{a_n}\right)\right] = F_n(a_nx+b_n)\Rightarrow G(x),(**)$$
it follows by lemma 3 that $\frac{b_n}{a_n}$ is bounded. [Note that in lemma 3, we did not have $a_n$ in front of $(x+b_n)$. But we do now. My question is how to use lemma 3 to get the desired boundedness of $b_n/a_n$, please?]</strong> By passing to a further sub-sequence, arrange that $b_n/a_n$ converges to some $c$. By $(*)$ and lemma 1,$F_n\left[a_n \left(x+\frac{b_n}{a_n}\right)\right] \Rightarrow \Delta(x+c)$ along this sub-sequence. But $(**)$ now implies that $G$ is degenerate, contrary to hypothesis. Thus $a_n$ is bounded above. If $G_n(x) = F_n(a_nx+b_n)$, then $G_n(x) \Rightarrow G(x)$ and $G_n(a_n^{-1}x - a_n^{-1}b_n) = F_n(x) \Rightarrow F(x)$. The result just proved shows that $a_n^{-1}$ is bounded. Thus $a_n$ is bounded away from $0$ and $\infty$. <strong>My question here is how to know $a_n$ is positive, please? How come $a_n$ cannot be negative?</strong> Thank you!</p>
| ozoz | 226,985 | <p>It should be:
$$A^{-1} = \frac{1}{3} (A-2I)$$</p>
|
2,482,083 | <p>I want to know if the following is a true statement:</p>
<blockquote>
<p>If $A$ is real a symmetric and $V$ is orthogonal then $V^TAV=D$ where $D$ is a diagonal matrix. </p>
</blockquote>
<p>To prove it I can say </p>
<p>$$(V^TAV)^T=V^TA^TV = V^TAV.$$
So $V^TAV$ is itself symmetric. So if I know $V^TAV$ is triangular then I'm done. How do I know $V^TAV$ is triangular (if it is)?</p>
| egreg | 62,967 | <p>You're probably misunderstanding what your textbook or instructor is trying to tell you.</p>
<blockquote>
<p>If $A$ is <em>any</em> real square matrix with <em>all</em> real eigenvalues, there exists an orthogonal matrix $V$ such that
$$
V^TAV
$$
is triangular.</p>
</blockquote>
<p>This is done by induction on the order of $A$; choose an eigenvalue $\lambda$ of $A$, and complete a norm $1$ eigenvector to an orthonormal basis; the matrix $V_0$ having these basis vectors as columns has the property that $V_0^TAV_0$ has the block form
\begin{bmatrix}
\lambda & r_1 \\
0 & A_1
\end{bmatrix}
where $r_1$ is some row. By the induction hypothesis, there is an orthogonal matrix $V_1$ such that $V_1^TA_1V_1$ is upper triangular. Note that the matrix $A_1$ has the same eigenvalues as $A$ (except for $\lambda$ having multiplicity one less), so also $A_1$ has all real eigenvalues. Now the orthogonal matrix
$$
V=V_0\begin{bmatrix} 1 & 0 \\ 0 & V_1\end{bmatrix}
$$
has the required property that $V^TAV$ is upper triangular.</p>
<p>If $A$ is symmetric (so its eigenvalues are real), then
$$
V^TAV=V^TA^TV=(V^TAV)^T
$$
is also lower triangular, so it is diagonal.</p>
<p>Hence the theorem is</p>
<blockquote>
<p>If $A$ is a real symmetric matrix, there exists an orthogonal matrix $V$ such that $V^TAV$ is diagonal.</p>
</blockquote>
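<p>In the $2\times 2$ case the orthogonal $V$ can be written down explicitly as a rotation; here is a small self-contained sketch (the test matrix entries are arbitrary choices):</p>

```python
import math

def diagonalize_sym2(a, b, c):
    # For A = [[a, b], [b, c]], rotating by theta with
    # tan(2*theta) = 2b/(a - c) makes V^T A V diagonal.
    th = 0.5 * math.atan2(2.0 * b, a - c)
    ct, st = math.cos(th), math.sin(th)
    d00 = ct*(a*ct + b*st) + st*(b*ct + c*st)
    d01 = ct*(-a*st + b*ct) + st*(-b*st + c*ct)   # off-diagonal of V^T A V
    d11 = -st*(-a*st + b*ct) + ct*(-b*st + c*ct)
    return d00, d01, d11

d00, d01, d11 = diagonalize_sym2(2.0, 1.0, 3.0)
print(d01)                   # ~ 0: the off-diagonal entry vanishes
print(d00 + d11, d00 * d11)  # trace and determinant are preserved
```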
|
<p>I have to show by induction that this expression is a multiple of 8. I have tried everything but I can only show that it is a multiple of 4; any hints? The expression is
$$5^{n+1}+2\cdot 3^n+1 \hspace{1cm}\forall n\ge 0;$$ since it is a multiple of 8, one can write $$5^{n+1}+2\cdot 3^n+1=8\cdot m \hspace{1cm}\text{for some } m\in\mathbb{N}.$$
| Felix Marin | 85,343 | <p>$\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{\displaystyle{#1}}\,}
\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,}
\newcommand{\ic}{\mathrm{i}}
\newcommand{\mc}[1]{\mathcal{#1}}
\newcommand{\mrm}[1]{\mathrm{#1}}
\newcommand{\pars}[1]{\left(\,{#1}\,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,}
\newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}}
\newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$
<strong>$\ds{\large n\ \mathsf{even}:}$</strong>
\begin{align}
5^{n + 1} + 2 \times 3^{n} + 1 & =
5\pars{5^{n} - 1} + 2\pars{3^{n} - 1} + 8 =
5\pars{4\overbrace{\sum_{k = 0}^{n - 1}5^{k}}^{\ds{even}}}\ +\
4\ \overbrace{\sum_{k = 0}^{n - 1}3^{k}}^{\ds{even}} + 8
\end{align}
<hr>
<strong>$\ds{\large n\ \mathsf{odd}:}$</strong>
\begin{align}
5^{n + 1} + 2 \times 3^{n} + 1 & =
\pars{5^{n + 1} - 1} + 6\pars{3^{n - 1} - 1} + 8
\\[5mm] & =
4\overbrace{\sum_{k = 0}^{n}5^{k}}^{\ds{even:\ 2p}}\ +\
12\ \overbrace{\sum_{k = 0}^{n - 2}3^{k}}^{\ds{even:\ 2q}} + 8
\qquad\qquad\pars{~p\ \mbox{and}\ q\ \mbox{are}\ integers:\ \mbox{each sum has an even number of odd terms}~}
\\[5mm] & = 8p + 24q + 8
\end{align}</p>
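<p>A brute-force check over the first few hundred values of $n$ (the range is an arbitrary choice) confirms the conclusion:</p>

```python
for n in range(200):
    assert (5**(n + 1) + 2 * 3**n + 1) % 8 == 0, n
print("multiple of 8 for every n in 0..199")
```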
|
2,561,245 | <p>Here is the question:
Let $f(x)$ be an integrable function on $[-1,1]$. $f(x)$ is continuous at $x=0$. Define $g(x)=\int_0^{\int_0^xf(n)dn}f(y)dy$ and prove $g'(0)=f(0)^2$.</p>
<p>My initial thought was to look at $g(0)$ and say when you take an integral from $a$ to $b$ and $a=b$ then the integral is $0$, but I do not know how to show $f(0)^2=0$ from that point.</p>
<p>My next thought was that I will need to use the Fundamental Theorem of Calculus II. Here is the definition from the book I am studying from:</p>
<p>Let $f$ be an integrable function on $[a,b]$. For x in $[a,b]$, let $F(x)=\int_a^xf(t)dt$. Then $F$ is continuous on $[a,b]$. If $f$ is continuous at $x_0$ in $(a,b)$, then $F$ is differentiable at $x_0$ and $F'(x_0)=f(x_0)$.</p>
<p>My work using this Theorem:</p>
<p>Let $h(x)=\int_0^xf(n)dn$. Since $f$ is integrable on $[-1,1]$, for $x$ in $[-1,1]$ we have that $h(x)$ is continuous. Since $f$ is continuous at $x=0$, then $h$ is differentiable at $x=0$ and $h'(0)=f(0)$.</p>
<p>Now relating $g$ to $h$, we know $g(x)=\int_0^{h(x)}f(y)dy$. This looks very similar to $h$, so we can say $g(x)=h(h(x))$. Then $g'(0)=h'(h(0))=f(h(0))=f(0)$, since $h(0)=0$ by definition of an integral with parameters that are equal. Now from here, we have $g'(0)=f(0)$, but I need to show $g'(0)=f(0)^2$. This is only true if they are both equal to either $0$ or $1$ I believe, but I am not sure how to do this or if I am even going in the right direction for this proof.</p>
<p>If anyone can tell me if I am onto something or if I am way off base it would be much appreciated.</p>
| Community | -1 | <p>We have, using differentiation under the integral sign, $$g’(x)=[\int_{0}^{\int_{0}^{x} f(n)\, dn} f(y) \, dy]’=f(\int_{0}^{x} f(n) \, dn)[\int_{0}^{x} f(n) \, dn]’ = f(\int_{0}^{x}f(n) \, dn) [f(x)]$$ </p>
<p>Thus, $$g’(0)=f(\int_{0}^{0} f(n)\, dn)[f(0)]=[f(0)][f(0)]=[f(0)]^2$$</p>
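<p>A numerical check of this, with a sample $f$ of my own choosing (here $f(t)=2+t$, so $f(0)^2=4$), approximating both integrals by the trapezoid rule and $g'(0)$ by a symmetric difference quotient:</p>

```python
def integrate(func, a, b, n=2000):
    # Composite trapezoid rule; the signed step also handles b < a.
    h = (b - a) / n
    return h * (0.5 * (func(a) + func(b)) + sum(func(a + k*h) for k in range(1, n)))

def f(t):
    return 2.0 + t                   # f(0) = 2, so g'(0) should be 4

def g(x):
    inner = integrate(f, 0.0, x)     # inner = Int_0^x f(n) dn
    return integrate(f, 0.0, inner)

eps = 1e-4
approx = (g(eps) - g(-eps)) / (2.0 * eps)
print(approx)                        # ~ 4 = f(0)**2
```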
|
3,676,127 | <p>Is there a term in mathematics for the <strong>SET</strong> of all possible <span class="math-container">$n \times n$</span> zero-one matrices?</p>
<p>In other words, for a particular <span class="math-container">$n \in \mathbb{Z}^+$</span>, is there a term for the set of all possible <span class="math-container">$M=[m_{ij}]$</span> such that either <span class="math-container">$m_{ij}=0$</span> or <span class="math-container">$m_{ij}=1$</span> for <span class="math-container">$1\le i \le n$</span> and <span class="math-container">$1\le j \le n$</span>?</p>
<p><span class="math-container">$\{ M=[m_{ij}] : m_{ij}=1 \vee m_{ij}=0, 1\le i \le n, 1\le j \le n\}$</span></p>
<p>Thank you.</p>
| rschwieb | 29,335 | <p>I would call such a matrix an <a href="https://en.wikipedia.org/wiki/Adjacency_matrix" rel="nofollow noreferrer"><strong>adjacency matrix</strong></a> since basically that would be choosing to represent the relation as a graph with vertices in <span class="math-container">$S$</span>. (This was more relevant before <a href="https://math.stackexchange.com/revisions/3676127/1">the OP removed</a> the original context.)</p>
<p>These are sometimes collectively called (even with dimensions <span class="math-container">$m,n$</span>) <a href="https://mathworld.wolfram.com/01-Matrix.html" rel="nofollow noreferrer"><strong>binary matrices</strong></a>.</p>
<p>There is a slight conundrum with what the entries actually are. You might or might not operate with binary arithmetic or binary operations on the entries. For an adjacency matrix, you often want the entries to be positive integers with their operations. This allows you to compute <span class="math-container">$n$</span>-th order adjacency in a graph.</p>
<p>But in your case, you're looking for simply a true/false that an element is in the relation: then maybe it is reasonable to think of them as elements in the field of two elements. I'm not sure what algebraic interpretation could be attached to operations with these matrices, though.</p>
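<p>For small <span class="math-container">$n$</span> the set can be enumerated directly; it has <span class="math-container">$2^{n^2}$</span> elements. A minimal sketch:</p>

```python
from itertools import product

def binary_matrices(n):
    # Enumerates all 2**(n*n) zero-one n-by-n matrices.
    for bits in product((0, 1), repeat=n*n):
        yield [list(bits[i*n:(i + 1)*n]) for i in range(n)]

mats = list(binary_matrices(2))
print(len(mats))   # 16 = 2**(2*2)
```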
|
1,887,315 | <p>$e$ is one of the most important numbers in our universe, it is everywhere. When I try to find out why the most common explanation is some reverse-engineered physics or finance problem. But these are just one off examples of why $e$ <em>is</em> important, they fall short of illuminating the origin of its significance. What I'm looking for is some fundamental definition of $e$ that explains it's significance and omnipresence, something akin to $\pi$ relating circumference and diameter. </p>
<p>Thanks, I hope I'm being clear.</p>
<p>EDIT*: A common saying is "$e$ is the most natural base." What does that mean?</p>
| Michael Hardy | 11,667 | <p>\begin{align}
\frac d {dx} 10^x & = ( 10^x\cdot\text{constant}) \approx 10^x \cdot (2.3) \\[10pt]
\frac d {dx} 2^x & = ( 2^x \cdot \text{constant}) \approx 2^x\cdot (0.693)
\end{align}
etc. It is easy to show that the derivative of an exponential function is a constant multiple of the same exponential function.</p>
<p>Only when the base is $e$ is the "constant" equal to $1$.</p>
<p>The fact that $x\mapsto e^x$ is its own derivative accounts for its incessant appearance in the study of differential equations. It also accounts for the fact that the "constant" is the base-$e$ logarithm of the base of the exponential function.</p>
<p>That's the beginning of the story; there's a lot more to it.</p>
<p>The fact that the "constant" is equal to $1$ only when the base is $e$ is analogous to the fact that in the identity
$$
\frac d {dx} \sin x = (\text{constant}\cdot \cos x)
$$
the "constant" is $1$ only when <b>radians</b> are used rather than some other unit.</p>
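These constants can be checked numerically with a difference quotient (an illustrative Python sketch, not part of the argument above):

```python
import math

def deriv_at(f, x, h=1e-6):
    # symmetric difference quotient approximating f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

# the "constant" multiplying b**x is ln(b); evaluate at x = 0 where b**x = 1
for base in (10, 2, math.e):
    print(base, round(deriv_at(lambda t: base ** t, 0.0), 4))
# 10 -> 2.3026 (= ln 10), 2 -> 0.6931 (= ln 2), e -> 1.0
```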
|
2,362,485 | <blockquote>
<p>Evaluate: $$\lim_\limits{x\to \pi/4} \frac {2-\csc^2 x}{1-\cot x}$$</p>
</blockquote>
<p>My Attempt:
\begin{align}\lim_\limits{x\to \pi/4} \frac {2-\csc^2 x}{1-\cot x}&=\lim_{x\to \pi/4} \frac {2-\csc^2 x}{1-\cot x}\\\\
&=\lim_{x\to \pi/4} \frac {2-\frac {1}{\sin^2 x}}{1-\frac {\cos x}{\sin x}}\\\\
&=\lim_{x\to \pi/4} \frac {2\sin^2 x - 1}{\sin^2 x - \sin x\cdot\cos x}\end{align}</p>
| Math-fun | 195,344 | <p>Note first that $\sin^2 x+\cos^2 x=1$, hence we obtain $$2-\frac1{\sin^2x}=\frac{2\sin^2x-1}{\sin^2x}=\frac{\sin^2x-\cos^2x}{\sin^2x}=1-\cot^2x=(1-\cot x)(1+\cot x)$$</p>
<p>hence $$\lim_{x\to\frac{\pi}4}\frac{2-\frac1{\sin^2x}}{1-\cot x}=\lim_{x\to\frac{\pi}4}\left(1+\cot x\right)=2.$$</p>
<p>You could also use <a href="https://en.wikipedia.org/wiki/L%27H%C3%B4pital%27s_rule" rel="nofollow noreferrer">L'Hôpital's rule</a>.</p>
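A quick numerical sanity check of the limit (Python, purely illustrative):

```python
import math

def g(x):
    # (2 - csc^2 x) / (1 - cot x)
    return (2 - 1 / math.sin(x) ** 2) / (1 - math.cos(x) / math.sin(x))

for eps in (1e-3, 1e-5, 1e-7):
    print(g(math.pi / 4 + eps))  # approaches 2 as eps -> 0
```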
|
7,900 | <p>This semester I am teaching a course to 12-13 year old kids in which they are supposed to learn the basics of spreadsheet usage.</p>
<p>I am having difficulty coming up with fun / interesting exercises to teach them the basic functions, such as average, max, minimum and the like. One of the biggest problems is having only a 50-minute lesson every two weeks.</p>
<p>I've tried so far:</p>
<ul>
<li>Teaching them to plot relations using a scatter plot: they would produce the axes and plot relations such as linear and quadratic formulas, exponentials, etc.</li>
<li>Introduced them to simple recurrences such as the Fibonacci sequence and the discrete logistic map.</li>
<li>Using stock data to teach them some descriptive statistics.</li>
</ul>
<p>What else would be proper, and preferably fun, to teach them? </p>
| aparente001 | 4,956 | <ol>
<li><p>Write a formula that converts Centigrade to Fahrenheit or vice versa. Plug in some common temperatures for the climate where you live. They can print out the table of values, nice and big, and post it on the wall at home and at school. At your next meeting, have a little Conversion Bee, with little prizes for the top three students. Ask for volunteers to bring popcorn to share. (Don't make it cutthroat -- keep it fun.)</p></li>
<li><p>Have them design and conduct a survey of fellow students, asking, for example, "What is your favorite sport to watch? And your favorite sport to practice?" They can work in small groups. Each group should choose a different survey. The survey results will be the data they will use for the descriptive statistics.</p></li>
<li><p>Have them download nutrition facts from the standard fast food restaurants, and then analyze (e.g. there's lots of opportunity for averaging).</p></li>
<li><p>Same but with the various items of candy that are given out at Halloween. This would require some sorting.</p></li>
</ol>
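The conversion in the first exercise ($F = \frac95 C + 32$) can be sketched as a short program (Python here just for illustration; the students would put the same formula in a spreadsheet cell):

```python
def c_to_f(c):
    # standard Centigrade-to-Fahrenheit conversion
    return c * 9 / 5 + 32

# a small conversion table like the one the students would build
for c in (0, 10, 20, 30, 37, 100):
    print(f"{c:>4} C = {c_to_f(c):>6.1f} F")
```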
|
4,048,223 | <blockquote>
<p><strong>Problem 20.3</strong>
If <span class="math-container">$R$</span> is a nonnegative random variable, then Markov’s Theorem gives an upper
bound on <span class="math-container">$Pr[R \geq x]$</span> for any real number <span class="math-container">$x \gt Ex[R]$</span>. If b is a lower bound on R, then Markov’s Theorem can also be applied to <span class="math-container">$R - b$</span> to obtain a possibly different bound on <span class="math-container">$Pr[R \geq x]$</span>.</p>
<p>a) Show that if <span class="math-container">$b > 0$</span>, applying Markov’s Theorem to <span class="math-container">$R - b$</span> gives a smaller
upper bound on <span class="math-container">$Pr[R \geq x]$</span> than simply applying Markov’s Theorem directly to R.</p>
<p>b) What value of <span class="math-container">$b > 0$</span> in part (a) gives the <em>best</em> bound?</p>
</blockquote>
<p>This question is from the <em>Mathematics for Computer Science</em> textbook (MIT text for the discrete math course). I would like to receive some help with this problem and, if possible, any resources I can use to check whether my solution is right. I am using the textbook, but the problem is that I don't have a way to verify my solutions.</p>
| peaceful breeze | 794,823 | <p>Since <span class="math-container">$R-b\ge 0$</span> by assumption, a simple application of Markov's inequality gives <span class="math-container">$$\Pr(R\ge x)=\Pr(R-b\ge x-b)\le \frac{\text{Ex}[R-b]}{x-b}=\frac{\text{Ex}[R]-b}{x-b}.$$</span>
Now, observe that <span class="math-container">$$f(b):=\frac{\text{Ex}[R]-b}{x-b}\le \frac{\text{Ex}[R]}{x}$$</span> holds on <span class="math-container">$0\le b < \text{Ex}[R]$</span> and that <span class="math-container">$f(b)$</span> is monotonically decreasing on <span class="math-container">$0\le b < \text{Ex}[R]$</span> (you can sketch a graph and verify these results).</p>
<p>Moreover, notice that the upper bound <span class="math-container">$\frac{\text{Ex}[R]-b}{x-b}$</span> on <span class="math-container">$\Pr(R\ge x)$</span> approaches <span class="math-container">$0$</span> as <span class="math-container">$b\uparrow \text{Ex}[R].$</span> So, you can make this upper bound tighter and tighter by selecting <span class="math-container">$b$</span> closer to <span class="math-container">$\text{Ex}[R]$</span> from the left. Thus, the best (tightest) bound would be when <span class="math-container">$b$</span> is the largest valid lower bound on <span class="math-container">$R.$</span> However, I'm not sure if I would call it the best bound (although it is certainly the tightest) since this requires the further assumption that <span class="math-container">$R\ge b$</span> (so that <span class="math-container">$R-b$</span> remains non-negative) and obviously this assumption becomes stronger as we increase <span class="math-container">$b$</span>.</p>
<hr />
<p><strong>Edit:</strong></p>
<p>"I still don't get how did you reach the conclusion that <span class="math-container">$f(b)\le \text{Ex}[R] / x.$</span>": Simple analysis using derivatives. Also, note that I wrote <span class="math-container">$f(b)\le \text{Ex}[R] / x$</span> holds on <span class="math-container">$[0,\text{Ex}[R])$</span>. In fact, this inequality doesn't hold if <span class="math-container">$b\in(-\infty,0)\cup(x,\infty).$</span> I suppose you are familiar with calculus and how functions behave.</p>
<p>Note that <span class="math-container">$\text{Ex}[R]$</span> and <span class="math-container">$x$</span> are non-negative constants, such that <span class="math-container">$\text{Ex}[R]<x$</span>. So, you can modify <span class="math-container">$f(b)$</span> as <span class="math-container">$$ f(b)=\frac{\text{Ex}[R]-b}{x-b}=\frac{x-b}{x-b}-\frac{x-\text{Ex}[R]}{x-b}=1-\frac{c}{x-b},$$</span>
where <span class="math-container">$c=x-\text{Ex}[R]$</span> is a positive constant. Hence, <span class="math-container">$$\frac{df}{db}=\frac{-c}{(x-b)^2},$$</span> which is always negative since <span class="math-container">$c>0$</span>. We have <span class="math-container">$b<E[R]<x$</span>, so that we don't have to worry about the function being undefined (i.e., <span class="math-container">$b=x$</span>). As you mentioned <span class="math-container">$(\text{Ex}[R]-b)/(x-b)$</span> and <span class="math-container">$\text{Ex}[R]/x$</span> are equal when <span class="math-container">$b=0$</span> and noting that <span class="math-container">$\text{Ex}[R]/x$</span> is just a constant horizontal line, we clearly have <span class="math-container">$$\frac{\text{Ex}[R]-b}{x-b}<\frac{\text{Ex}[R]}{x}$$</span> on the interval <span class="math-container">$(0,\text{Ex}[R])$</span> since <span class="math-container">$df/db$</span> is negative. Obviously, this inequality holds on <span class="math-container">$(0,x)$</span> as well but this is not necessary. So, yes if you don't want to include equality just get rid of <span class="math-container">$0$</span>.</p>
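To illustrate how the bound shrinks as <span class="math-container">$b$</span> grows, here is a small numeric sketch (Python) with a hypothetical random variable <span class="math-container">$R = 2 + \mathrm{Exp}(1)$</span> (my own choice, not from the problem), so <span class="math-container">$\text{Ex}[R]=3$</span> and any <span class="math-container">$b\le 2$</span> is a valid lower bound:

```python
import math

ER, x = 3.0, 6.0           # R = 2 + Exponential(1): E[R] = 3, and R >= b for any b <= 2

def bound(b):
    # Markov applied to R - b:  P(R >= x) <= (E[R] - b) / (x - b)
    return (ER - b) / (x - b)

true_prob = math.exp(-(x - 2))          # P(R >= 6) = P(Exp(1) >= 4) = e^{-4}
bounds = [bound(b) for b in (0.0, 1.0, 2.0)]
print(bounds, true_prob)                # bounds shrink: 0.5, 0.4, 0.25; all still above e^{-4}
```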
|
2,964,346 | <p>Suppose that <span class="math-container">$X_1,X_2,\cdots,X_n$</span> is an iid sequence with pdf <span class="math-container">$\frac{2}{\pi (1+x^2)}\cdot 1_{(0,+\infty)}(x)$</span>. Denote <span class="math-container">$S_n$</span> as <span class="math-container">$S_n:=X_1+X_2+\cdots+X_n$</span>. Prove that there exists <span class="math-container">$c>0$</span> such that <span class="math-container">$$\frac{S_n}{n\log n}\rightarrow c\quad \text{in probability}$$</span> and calculate <span class="math-container">$c$</span>.</p>
<p><strong>My solution</strong>: Choose another iid sequence <span class="math-container">$Y_1,Y_2,\cdots,Y_n$</span> such that <span class="math-container">$X_n,Y_n$</span> are independent and have same distribution. Let <span class="math-container">$X_n^s:=X_n-Y_n$</span>. I have proved that
<span class="math-container">$$\frac{X_1^s+X_2^s+\cdots+X_n^s}{n\log n}\rightarrow 0\quad \text{in probability}.$$</span>
Therefore,
<span class="math-container">$$\frac{S_n}{n\log n}-m_n\rightarrow 0\quad \text{in probability}$$</span>
where <span class="math-container">$m_n$</span> is the median of <span class="math-container">$\frac{S_n}{n\log n}$</span>. I want to show <span class="math-container">$m_n\rightarrow c$</span> but I can't.</p>
| S.Wei | 389,872 | <p>Note that <span class="math-container">$\frac{S_n}{n\log n}$</span> can be divided into two parts:
<span class="math-container">$$\frac{S_n}{n\log n}=\frac{\sum_{k=1}^nX_k 1_{X_k\le n\log n}}{n\log n}+\frac{\sum_{k=1}^nX_k 1_{X_k> n\log n}}{n\log n}$$</span>
The second part converges to zero in probability, and one can verify that the mean square of the first part minus <span class="math-container">$c$</span> converges to <span class="math-container">$(2/\pi-c)^2$</span>. Therefore, <span class="math-container">$\frac{S_n}{n\log n}$</span> converges to <span class="math-container">$c=2/\pi$</span> in probability.</p>
|
2,964,346 | <p>Suppose that <span class="math-container">$X_1,X_2,\cdots,X_n$</span> is an iid sequence with pdf <span class="math-container">$\frac{2}{\pi (1+x^2)}\cdot 1_{(0,+\infty)}(x)$</span>. Denote <span class="math-container">$S_n$</span> as <span class="math-container">$S_n:=X_1+X_2+\cdots+X_n$</span>. Prove that there exists <span class="math-container">$c>0$</span> such that <span class="math-container">$$\frac{S_n}{n\log n}\rightarrow c\quad \text{in probability}$$</span> and calculate <span class="math-container">$c$</span>.</p>
<p><strong>My solution</strong>: Choose another iid sequence <span class="math-container">$Y_1,Y_2,\cdots,Y_n$</span> such that <span class="math-container">$X_n,Y_n$</span> are independent and have same distribution. Let <span class="math-container">$X_n^s:=X_n-Y_n$</span>. I have proved that
<span class="math-container">$$\frac{X_1^s+X_2^s+\cdots+X_n^s}{n\log n}\rightarrow 0\quad \text{in probability}.$$</span>
Therefore,
<span class="math-container">$$\frac{S_n}{n\log n}-m_n\rightarrow 0\quad \text{in probability}$$</span>
where <span class="math-container">$m_n$</span> is the median of <span class="math-container">$\frac{S_n}{n\log n}$</span>. I want to show <span class="math-container">$m_n\rightarrow c$</span> but I can't.</p>
| Shiina | 607,080 | <p>Let <span class="math-container">$S_n' = \sum_{k=1}^n X_k 1_{\{|X_k|\le a_n\}}$</span>. Clearly,</p>
<p><span class="math-container">$$
\mathbb P\left(\left|\frac{S_n - b_n}{a_n}\right|>\varepsilon\right)\le\mathbb P(S_n\ne S_n') + \mathbb P\left(\left|\frac{S_n' - b_n}{a_n}\right|>\varepsilon\right).
$$</span></p>
<p>With <span class="math-container">$a_n = n\log n$</span> and <span class="math-container">$b_n = \mathbb E S_n'$</span>, we note that
<span class="math-container">$$
\mathbb P(S_n\ne S_n')\le\sum_{k=1}^n\mathbb P(|X_k|>a_n)\to 0,
$$</span>
and that
<span class="math-container">$$
\mathbb P\left(\left|\frac{S_n' - b_n}{a_n}\right|>\varepsilon\right)\le \frac{1}{\varepsilon^2 a_n^2}\text{Var}\left(S_n'\right)\le\frac{1}{\varepsilon^2 a_n^2}\sum_{k=1}^n\mathbb E(X_k^2 1_{\{|X_k|\le a_n\}})\to 0.
$$</span>
So we have
<span class="math-container">$$
\frac{S_n - b_n}{a_n}\to 0\quad\text{in probability}
$$</span>
with <span class="math-container">$b_n/a_n\to 2/\pi$</span>. Then it's easy to see
<span class="math-container">$$
\frac{S_n}{n\log n}\to\frac{2}{\pi}\quad\text{in probability.}
$$</span></p>
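A numeric sanity check of the limiting constant (a Python sketch, using the closed form <span class="math-container">$\mathbb E[X\,1_{X\le a}] = \frac1\pi\log(1+a^2)$</span>, which follows from the given density):

```python
import math

def ratio(log_n):
    # b_n / a_n with a_n = n*log(n) and b_n = (n/pi) * log(1 + a_n^2);
    # for large a_n this is 2*(log n + log log n) / (pi * log n),
    # up to a negligible log(1 + 1/a_n^2) term
    return 2 * (log_n + math.log(log_n)) / (math.pi * log_n)

for k in (10, 100, 1000, 10000):        # n = 10**k
    print(k, ratio(k * math.log(10)))
print(2 / math.pi)                      # limiting value c = 2/pi, about 0.6366
```

The ratio decreases slowly (the correction term is of order $\log\log n/\log n$) but visibly approaches $2/\pi$.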
|
663,218 | <p>Basic question in linear algebra here. $T$ is a linear transform from $\mathbb R^n$ to $\mathbb R^n$ defined by $T(v)=Av$, $A\in \mathrm{Mat}_n(\mathbb R)$. We are given some inner product $\langle ,\rangle$ of $\mathbb R^n$. Does not have to be the standard one, just some random inner product.</p>
<p>let $T^*$ be a linear transform from $\mathbb R^n$ to $\mathbb R^n$ such that for all $u,v \in \mathbb R^n$: $\langle T(u),v\rangle=\langle u,T^*(v)\rangle$</p>
<p>I know that if $\langle ,\rangle$ is the standard inner product of $\mathbb R^n$, then $T^*(v)=A^tv$. My question is, does this hold for all inner products?</p>
<p>If it isn't, given some inner product and the transform $T$, how can I find $T^*$?</p>
<p>And if it is true, then why?</p>
| Daniel Fischer | 83,702 | <p>In general, the adjoint of $T$ is not given by multiplication with the transpose of the matrix of $T$.</p>
<p>You can represent an inner product $\langle\,\cdot\,,\,\cdot\,\rangle$ by a matrix, let's call it $S$,</p>
<p>$$\langle x,y\rangle = {}^tx Sy.$$</p>
<p>Then let $A$ be the matrix representing $T$, and $B$ the matrix representing $T^\ast$. By definition, you have, for all $x,y$,</p>
<p>$${}^tx({}^tA S) y = ({}^tx{}^tA) S y = {}^t(Ax) S y = \langle Tx,y\rangle = \langle x, T^\ast y\rangle = {}^t x S (B y) = {}^t x (SB) y,$$</p>
<p>and so you have</p>
<p>$${}^tAS = SB \iff B = S^{-1}{}^tAS.$$</p>
|
2,027,529 | <p>I want to show that $\pi_1(S^1\vee S^1) = \mathbb{Z}*\mathbb{Z}. $ I know that it follows from the Seifert-van Kampen theorem, but we haven't talked about that in class. </p>
<p>We are given the following hint:</p>
<blockquote>
<p>Define a group homomorphism $\mathbb{Z}*\mathbb{Z}\to\pi_1(S^1\vee S^1)$ via the two inclusions $S^1\to S^1\vee S^1,$ and show that this is an isomorphism. </p>
</blockquote>
<p>We've also never talked about free products, so I am very confused as to how to define this homomorphism.</p>
| JHF | 50,427 | <p>The left hand side is the free group on two letters, say $a$, $b$. In terms of generators and relations, this free group is generated by two elements $a$ and $b$ subject to <em>no</em> nontrivial relations. A typical element of this group looks like $aba^{-2}b^3 a^{-1} b$, i.e., a word in $a$, $b$, $a^{-1}$, and $b^{-1}$, and the group operation is given by concatenation. </p>
<p>Free groups also enjoy a very nice universal property. In your case,
$$\operatorname{Hom}_{\mathbf{Grp}}(\mathbb{Z} * \mathbb{Z}, \pi_1(S^1 \vee S^1)) \cong \operatorname{Hom}_{\mathbf{Set}}(\{1,2\}, \pi_1(S^1 \vee S^1)),$$ that is, a group homomorphism out of $\mathbb{Z} * \mathbb{Z}$ corresponds to the choice of any two elements in the codomain, and vice-versa. </p>
<p>The two inclusions $S^1 \hookrightarrow S^1 \vee S^1$ precisely give you two elements on $\pi_1(S^1 \vee S^1)$, which is exactly the data of a homomorphism $\mathbb{Z} * \mathbb{Z} \to \pi_1(S^1 \vee S^1)$. In terms of the generators given above, this homomorphism is determined by sending $a$ to one of the two loops and $b$ to the other loop. </p>
|
1,593,899 | <p>Let $f = u + iv$ be a holomorphic function on unit (open) disk $\mathbb D^2$.</p>
<p>and Suppose $v$ is nonnegative and $v(0)=0$.</p>
<p>Can we determine all such $f$?</p>
<p>I wanted to use schwarz lemma, but I couldn't catch anything about bound condition.</p>
<p>I think it is about liouville's theorem.</p>
<p>Is it related to harmonic function?</p>
<p>Thanks.</p>
| Anubhav Mukherjee | 72,329 | <p>$f(0)=r$ for some real number $r$, since $v(0)=0$. If $f$ is non-constant, then the open mapping theorem says that there is an open neighbourhood of $0$ which maps onto an open neighbourhood of $r$, which implies that some point, say $r_1 -ir_2$ with $r_1, r_2$ positive real numbers, is in the image. But $v$ is non-negative: contradiction. So $f$ is constant.</p>
|
2,523,581 | <p>I'm a bit rusty on my complex numbers, how would you solve the following problem on paper?</p>
<blockquote>
<p>Determine and sketch (graph) the set of all complex numbers of the form:
$$z_n=\frac{2n+1}{n-i},n\in\mathbb R$$</p>
</blockquote>
<p>Rationalizing yields $$S=\left\{z_n\in\mathbb C : z_n=\frac{n + 2 n^2}{1 + n^2}+\frac{1 + 2 n}{1 + n^2}i\right\}$$</p>
<p>How do I proceed to sketch (graph) this now on paper? <em>(Wolframalpha yields <a href="http://www.wolframalpha.com/input/?i=x%3D(m+%2B+2+m%5E2)%2F(1+%2B+m%5E2)+,+y%3D+(1+%2B+2+m)%2F(1+%2B+m%5E2)" rel="nofollow noreferrer">a circle</a>)</em></p>
<p>I assume I need to find the center and the radius of that circle which would be enough to sketch the graph, but I can't quite proceed from this point on. </p>
| José Carlos Santos | 446,262 | <p>I'll use $x$ instead of $n$.</p>
<p>Note that$$\frac{2x^2+x}{x^2+1}=1+\frac{x^2+x-1}{x^2+1}=1+\frac{2x^2+2x-2}{2(x^2+1)}$$and that$$\frac{2x+1}{x^2+1}=\frac12+\frac{-x^2+4x+1}{2(x^2+1)}.$$So$$\frac{2x+1}{x-i}=1+\frac i2+\frac{2x^2+2x-2}{2(x^2+1)}+\frac{-x^2+4x+1}{2(x^2+1)}i.$$Now, observe that$$\left(\frac{2x^2+2x-2}{2(x^2+1)}\right)^2+\left(\frac{-x^2+4x+1}{2(x^2+1)}\right)^2=\frac54.$$Therefore, your set is contained in the circle with center $1+\frac i2$ and radius $\frac{\sqrt5}2$.</p>
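A quick numerical check (Python, illustrative) that the points $z_n$ indeed lie on the circle with center $1+\frac i2$ and radius $\frac{\sqrt5}{2}$:

```python
import math

center = complex(1, 0.5)
radius = math.sqrt(5) / 2      # about 1.1180

for n in (-10.0, -1.0, 0.0, 0.5, 3.0, 100.0):
    z = (2 * n + 1) / (n - 1j)             # z_n = (2n + 1) / (n - i)
    print(n, abs(z - center))              # always sqrt(5)/2
```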
|
3,681,254 | <p><span class="math-container">$$\int_{0}^{\frac{\pi}{2}}{e^{-x}} \cos 3x\,dx$$</span>
using the integration by parts formula
<span class="math-container">$$\int{f(x).g(x)'}\mathrm{d}x = (f(x).g(x)) - \int{f'(x)g(x)dx}$$</span>
so it should be
<span class="math-container">$${{{e^{-x}}.\frac{1}{3}\cos(3x)}} - \int_{0}^{\frac{\pi}{2}}{-e^{-x}\frac{1}{3}\cos(3x)dx}$$</span>
but the correct next step should be
<span class="math-container">$${{{-e^{-x}}.\cos(3x)}} - \int_{0}^{\frac{\pi}{2}}{-e^{-x}(-3\sin(3x))dx}$$</span></p>
<p>Which one is the correct step?</p>
| Community | -1 | <p><strong>Hint:</strong> </p>
<p>By parts, twice,</p>
<p><span class="math-container">$$\int e^{-x}f(x)\,dx=-e^{-x}f(x)+\int e^{-x}f'(x)\,dx=-e^{-x}f(x)-e^{-x}f'(x)+\int e^{-x}f''(x)\,dx$$</span></p>
<p>so that</p>
<p><span class="math-container">$$\int e^{-x}(f(x)-f''(x))\,dx=-e^{-x}(f(x)+f'(x)).$$</span></p>
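Applying this with $f(x)=\cos 3x$ gives $f-f''=10\cos 3x$, hence $\int e^{-x}\cos 3x\,dx = -\tfrac1{10}e^{-x}(\cos 3x - 3\sin 3x)$. A quick numeric check of this antiderivative (Python sketch):

```python
import math

def antiderivative(x):
    # from the hint with f = cos(3x): f - f'' = 10 cos(3x), so
    # the antiderivative of e^{-x} cos(3x) is -e^{-x} (cos(3x) - 3 sin(3x)) / 10
    return -math.exp(-x) * (math.cos(3 * x) - 3 * math.sin(3 * x)) / 10

def numeric_integral(a, b, steps=20000):
    # simple midpoint rule for the integral of e^{-x} cos(3x) over [a, b]
    h = (b - a) / steps
    return sum(math.exp(-(a + (i + 0.5) * h)) * math.cos(3 * (a + (i + 0.5) * h))
               for i in range(steps)) * h

a, b = 0.0, math.pi / 2
print(antiderivative(b) - antiderivative(a), numeric_integral(a, b))  # both match
```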
|
2,338,797 | <p>A bag contains $3$ red balls and $4$ blue balls. Find the probability of picking, at random, two balls of </p>
<p>a) the same color </p>
<p>b) different colors</p>
| David G. Stork | 210,401 | <p>b) Probability of different is probability of red on first (3/7) times probability of blue given a red has been chosen (4/6), plus the probability of blue on first (4/7) times probability of red given a blue has been chosen (3/6), or: 4/7.</p>
<p>You can do a) by yourself.</p>
|
267,161 | <p>I would like to write <code>x^5 + 1/x^5</code> in terms of <code>x^2 + 1/x^2</code>, <code>x^3 + 1/x^3</code> and <code>x + 1/x</code> or express <code>x^6 + 1/x^6</code> in terms of <code>x^2 + 1/x^2</code> and <code>x^4 + 1/x^4</code> etc.</p>
<p>How can I do that in <em>Mathematica</em>?</p>
| Artes | 184 | <p>There is a special approach with <code>Solve</code> eliminating appropriate variables, e.g. in the following <code>x</code> should be eliminated</p>
<pre><code>a/. Solve[{x^5 + 1/x^5 == a, x^3 + 1/x^3 == b, x^2 + 1/x^2 == c, x + 1/x == d},
{a, b, d}, {x}]
</code></pre>
<blockquote>
<pre><code>{-Sqrt[2 + 5 c - 5 c^3 + c^5], Sqrt[2 + 5 c - 5 c^3 + c^5]}
</code></pre>
</blockquote>
<p>or e.g.</p>
<pre><code>Solve[{ x^5 + 1/x^5 == a, x^3 + 1/x^3 == b, x^2 + 1/x^2 == c, x + 1/x == d},
{a, b, c}, {x}]
</code></pre>
<blockquote>
<pre><code>{{a -> d (5 - 5 d^2 + d^4), b -> d (-3 + d^2), c -> -2 + d^2}}
</code></pre>
</blockquote>
<p>The above demonstrates that there is no unique method providing the best solution unless we define precisely what the best approach means. We can express <code>a</code> in terms of <code>c</code> only as well as in terms of <code>d</code> only. Nevertheless one can easily observe that</p>
<pre><code>a == b c - d /.First @ Solve[{x^5 + 1/x^5 == a, x^3 + 1/x^3 == b,
x^2 + 1/x^2 == c, x + 1/x == d},
{a, b, c}, {x}] // Simplify
</code></pre>
<blockquote>
<pre><code>True
</code></pre>
</blockquote>
<p>as well as e.g.</p>
<pre><code>a == d (c^2 - c - 1)/.First @ Solve[{x^5 + 1/x^5 == a, x^3 + 1/x^3 == b,
x^2 + 1/x^2 == c, x + 1/x == d},
{a, b, c}, {x}] // Simplify
</code></pre>
<blockquote>
<pre><code>True
</code></pre>
</blockquote>
<p>Another way is a standard approach with <code>Eliminate</code>, e.g.</p>
<pre><code>Eliminate[{x^6 + 1/x^6 == e, x^4 + 1/x^4 == f, x^2 + 1/x^2 == c},
{x, f}]
</code></pre>
<blockquote>
<pre><code>e == -3 c + c^3
</code></pre>
</blockquote>
<p>and it appears that with <code>Solve</code> we can produce another representation of <code>e</code>:</p>
<pre><code>e == f c - c /. First @ Solve[{x^6 + 1/x^6 == e, x^4 + 1/x^4 == f,
              x^2 + 1/x^2 == c}, {e, f}, {x}] // Simplify
</code></pre>
<blockquote>
<pre><code>True
</code></pre>
</blockquote>
<p>These methods work quite satisfactorily when the list of equations is rather short, and we expect one can improve the solutions by appropriately restricting the solution space.</p>
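The identities found above are easy to sanity-check numerically outside Mathematica as well; for instance (a Python sketch, at an arbitrary nonzero test value):

```python
x = 1.7  # any nonzero test value
d = x + 1 / x
c = x**2 + 1 / x**2
b = x**3 + 1 / x**3
f = x**4 + 1 / x**4
a = x**5 + 1 / x**5
e = x**6 + 1 / x**6

checks = [
    abs(a - (b * c - d)),          # x^5 + 1/x^5 = (x^3+1/x^3)(x^2+1/x^2) - (x+1/x)
    abs(a - d * (c**2 - c - 1)),   # the same quantity in terms of c and d only
    abs(e - (f * c - c)),          # x^6 + 1/x^6 = (x^4+1/x^4)(x^2+1/x^2) - (x^2+1/x^2)
    abs(e - (c**3 - 3 * c)),       # the Eliminate result
]
print(all(err < 1e-9 for err in checks))  # True
```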
|
4,164,405 | <p>Grothendieck has proven that whenever <span class="math-container">$X\longrightarrow\operatorname{Spec}(A)$</span> is a proper morphism of Noetherian schemes, <span class="math-container">$F$</span> is coherent over <span class="math-container">$X$</span> and flat over <span class="math-container">$\operatorname{Spec}(A)$</span>, then there exists a finite complex of finitely generated projective modules over <span class="math-container">$A$</span>
<span class="math-container">\begin{equation*}
0\longrightarrow K^{0}\longrightarrow...\longrightarrow K^{n}\longrightarrow 0
\end{equation*}</span>
such that for any <span class="math-container">$A$</span>-module <span class="math-container">$M$</span> there exists an isomorphism of <span class="math-container">$A$</span>-modules
<span class="math-container">\begin{equation*}
H^{p}(X,F\otimes_{A}M)\cong H^{p}(K\otimes_{A}M)\text{.}
\end{equation*}</span>
So far, so good. Is not the Čech complex precisely one example of such a Grothendieck complex? By construction, the Čech complex consists of projective modules, and it satisfies the isomorphism condition.</p>
| The Thin Whistler | 241,032 | <p>The argument becomes easy if <span class="math-container">$A$</span> is assumed hereditary. In the construction given in [https://amathew.wordpress.com/2011/01/09/the-grothendieck-complex], all <span class="math-container">$K^{i}$</span> are free for <span class="math-container">$i\geq 1$</span> and therefore projective. The module <span class="math-container">$K^{0}$</span> is a quotient of a free module <span class="math-container">$K'^{0}=F'\oplus F$</span>, and we have maps <span class="math-container">$\alpha:F'\longrightarrow K^{1}$</span> and <span class="math-container">$\beta:F\longrightarrow K^{1}$</span>. Since <span class="math-container">$C^{0}$</span> and <span class="math-container">$K^{1}$</span> are projective, so are the images <span class="math-container">$\operatorname{im}(\alpha)$</span> and <span class="math-container">$\operatorname{im}(\beta)$</span> (by hereditarity). Therefore, there exist splitting maps <span class="math-container">$\operatorname{im}(\alpha)\longrightarrow F$</span> and <span class="math-container">$\operatorname{im}(\beta)\longrightarrow F'$</span>. The maps <span class="math-container">$K^{0}\longrightarrow C^{0}$</span> and <span class="math-container">$K^{0}\longrightarrow K^{1}$</span> factor as <span class="math-container">$K^{0}\longrightarrow\operatorname{im}(\alpha)\longrightarrow C^{0}$</span> and <span class="math-container">$K^{0}\longrightarrow\operatorname{im}(\beta)\longrightarrow K^{1}$</span>. 
Let <span class="math-container">$\gamma$</span> denote the composite map <span class="math-container">$K^{0}\longrightarrow\operatorname{im}(\beta)\longrightarrow F\longrightarrow K'^{0}$</span> and <span class="math-container">$\delta$</span> the composite map <span class="math-container">$K^{0}\longrightarrow\operatorname{im}(\beta)\longrightarrow F'\longrightarrow K'^{0}$</span>, and let <span class="math-container">$\zeta:K'^{0}\longrightarrow K^{0}$</span> be the quotient map. Then <span class="math-container">$\zeta\circ(\gamma\oplus\delta)=\operatorname{id}_{K^{0}}$</span>, hence <span class="math-container">$K^{0}$</span> is a direct summand of <span class="math-container">$K'^{0}$</span> and therefore projective.</p>
<p>All in all, only the projectivity of <span class="math-container">$C^{0}$</span> was needed in the entire proof, hence the statement remains true if one demands projectivity only for <span class="math-container">$C^{0}$</span>. Very intriguing and enlightening indeed!</p>
<p>This does however not prove the general (non-hereditary) case...</p>
|
650,291 | <p>I'm not so good at combinatorics, but I want to know if my answer for this question is right. Originally this question is written in spanish and it says:</p>
<blockquote>
<p>Se dispone de una colección de 30 pelotas divididas en 5 tamaños distintos y 6 colores diferentes de tal manera que en cada tamaño hay los seis colores.¿Cuántas colecciones de 4 pelotas tienen exactamente 2 pares de pelotas del mismo tamaño (que no sean las 4 del mismo tamaño)?.</p>
</blockquote>
<p>And here's a translation made by me:</p>
<blockquote>
<p>A collection of 30 balls is available, separated into 5 different sizes and 6 different colors in such a way that each size comes in all six colors. How many collections of 4 balls have exactly 2 pairs of balls of the same size (with the 4 balls not all of the same size)?</p>
</blockquote>
<p>I first wrote this table:</p>
<p>\begin{array}{c|c|c|c|c}
\cdot & Size 1 & Size 2 & Size 3 & Size 4 & Size 5 \\
\hline
Color 1 & ① & ❶ & ⒈ & ⑴ & ⓵ \\
Color 2 & ② & ❷ & ⒉ & ⑵ & ⓶ \\
Color 3 & ③ & ❸ & ⒊ & ⑶ & ⓷ \\
Color 4 & ④ & ❹ & ⒋ & ⑷ & ⓸ \\
Color 5 & ⑤ & ❺ & ⒌ & ⑸ & ⓹ \\
Color 6 & ⑥ & ❻ & ⒍ & ⑹ & ⓺
\end{array}</p>
<p>So, an example of a collection of 4 balls that have exactly 2 pairs of balls of the same size is:</p>
<blockquote>
<p>①②❶❷</p>
</blockquote>
<p>So, for the first column (Size 1) there are 15 combinations of having 2 balls:</p>
<pre><code>①② ②③ ③④ ④⑤ ⑤⑥
①③ ②④ ③⑤ ④⑥
①④ ②⑤ ③⑥
①⑤ ②⑥
①⑥
</code></pre>
<p>Which is the same as: </p>
<p>$$C_{6}^{2} = \frac{6!}{(6-2)!2!} = 15$$</p>
<p>Or the same as:</p>
<p>$$\sum_{k=1}^{5}k = 15$$</p>
<p>Then, for each row we have 10 combinations:</p>
<pre><code>①❶ ❶⒈ ⒈⑴ ⑴⓵
①⒈ ❶⑴ ⒈⓵
①⑴ ❶⓵
①⓵
</code></pre>
<p>Which is the same as:</p>
<p>$$C_{5}^{2} = \frac{5!}{(5-2)!2!} = 10$$</p>
<p>Or the same as:</p>
<p>$$\sum_{k=1}^{4}k = 10$$</p>
<p>And so, by the <a href="http://en.wikipedia.org/wiki/Rule_of_product" rel="nofollow">rule of product</a> I say that the number of collections of 4 balls having exactly 2 pairs of the same size is:</p>
<p>$$C_{6}^{2} C_{5}^{2} = 150$$</p>
<p>I'll be grateful if someone check my answer and give me further details :D</p>
| Davis Yoshida | 113,908 | <p>Not quite correct. Here's a breakdown of the choices you need to make.</p>
<ol>
<li>You will need to choose two sizes of ball, so do so: $\binom{5}{2}$</li>
<li>For the first size, choose two colors: $\binom{6}{2}$</li>
<li>For the second size, choose two colors: $\binom{6}{2}$</li>
</ol>
<p>Combine these to count your total.</p>
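A brute-force enumeration (Python sketch) confirms the count:

```python
from itertools import combinations, product
from collections import Counter
from math import comb

balls = list(product(range(5), range(6)))  # (size, color): the 30 balls
count = 0
for four in combinations(balls, 4):
    sizes = Counter(s for s, _ in four)
    if sorted(sizes.values()) == [2, 2]:   # exactly two pairs of equal size, not all four equal
        count += 1

print(count, comb(5, 2) * comb(6, 2) ** 2)  # both 2250
```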
|
1,712,933 | <p>I am slightly ashamed to be asking this, but I have been recently reflecting on changing variables in very simple problems. If I missed a question that already discusses this please point it out to me and I will delete this one.
Anyhow writing this will probably be a learning experience.</p>
<p>Directly from the Wikipedia page on the topic, I take as an example the equation:</p>
<p>$$x^6 - 9 x^3 + 8 = 0. \, $$</p>
<p>I quickly recognize this as a high school problem and use the methods that were taught to me, namely I set $x^3 = u$ so $x = u^{1/3}$.</p>
<p>Then I proceed to solve the quadratic equation that results from this substitution, and only at the end do I apply the inverse transformation $x = u^{1/3}$ to get an answer for my starting variable. With not much imagination I always thought that the function used when changing variables (in the above case $f(x) = x^3$) should be bijective in the domain of interest of the starting equation. This is because I need the inverse to return to my "starting variable".</p>
<p>But I notice on Wikipedia that a bit more is required; the change of variable function should be a diffeomorphism, we need differentiability (and even smooth manifolds for the domain and the image).</p>
<p>This is where I realized that I was never taught a proof of why the change of variables method work or how it works but I was just applying these substitutions blindly.</p>
<p>So could someone kindly point me to a source where I can improve my understanding on this very powerful method by adding rigour to what I am doing and possibly even a geometric interpretation.</p>
| Eric S. | 263,514 | <p>I must admit I never gave this method as much thought as you did, and that I do not understand the formal introduction on the Wikipedia page (I assume you referred to <a href="https://en.wikipedia.org/wiki/Change_of_variables">this one</a>). But perhaps this may provide you with some more insight into why the method works.</p>
<p>Again consider the equation $x^6-9x^3+8=0$.
$$
x^6-9x^3+8=\left(x^3\right)^2-9\left(x^3\right)+8=0
$$
$$\iff$$
$$
\left(\left(x^3\right)-8\right)\left(\left(x^3\right)-1\right)=0
$$
$$\iff$$
$$
x^3=8\ \lor\ x^3=1
$$
$$\iff$$
$$
x=\sqrt[3]{8}=2\ \lor\ x=1
$$
Now, we both know this is just substitution without writing it explicitly, but the reason it works is that we simply rewrite the equation in an attractive form and solve a quadratic equation. So I'd say whether substitution works depends on what kind of equation you're solving and the technique you use for solving such equations. In the above case, substitution worked because you aren't changing the original equation at all, merely rewriting it, and because the quadratic formula works for solving quadratic equations.</p>
<p>Kind of a non-mathematical answer. Again, I hope it helps. If not, please forgive me <code>:)</code></p>
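As a final one-line check, the two solutions found above do satisfy the original equation:

```python
# the two real solutions found via the substitution u = x^3
for x in (1, 2):
    print(x, x**6 - 9 * x**3 + 8)  # both evaluate to 0
```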
|
1,712,933 | <p>I am slightly ashamed to be asking this, but I have been recently reflecting on changing variables in very simple problems. If I missed a question that already discusses this please point it out to me and I will delete this one.
Anyhow writing this will probably be a learning experience.</p>
<p>Directly from the Wikipedia page on the argument I take as an example the equation:</p>
<p>$$x^6 - 9 x^3 + 8 = 0. \, $$</p>
<p>I quickly recognize this as a high school problem and use the methods that were taught to me, namely I set $x^3 = u$ so $x = u^{1/3}$.</p>
<p>Then I proceed to solve the quadratic equation that results from this substitution, and only at the end do I apply the reverse transformation $x^3 = u$ to get an answer for my starting variable. With not much imagination I always thought that the function used when changing variables (in the above case $f(x) = x^3$) should be bijective in the domain of interest of the starting equation. This is because I need the inverse to return to my "starting variable".</p>
<p>But I notice on Wikipedia that a bit more is required; the change of variable function should be a diffeomorphism, we need differentiability (and even smooth manifolds for the domain and the image).</p>
<p>This is where I realized that I was never taught a proof of why the change of variables method work or how it works but I was just applying these substitutions blindly.</p>
<p>So could someone kindly point me to a source where I can improve my understanding of this very powerful method, adding rigour to what I am doing, and possibly even a geometric interpretation.</p>
| Community | -1 | <p>By the rules of algebra,</p>
<p>$$x^6 - 9 x^3 + 8 = 0$$</p>
<p>is strictly equivalent to</p>
<p>$$(x^3)^2 - 9 x^3 + 8 = 0.$$</p>
<p>Then it makes no harm to substitute $u=x^3$ and solve</p>
<p>$$u^2-9u+8,$$</p>
<p>leading to a set of solutions $u\in S=\{s_k\}$. And this is equivalent to $x^3\in S$, or $x\in\{\sqrt[3]s_k\}$.</p>
<p>What matters for the substitution to be valid is that the domain of $u$ includes the range of $x^3$ so that no solution is lost (some $x^3$ verifying the equation but not covered by $u$); on the other hand, no alien solution is introduced when inverting $u=x^3$, as the domain of $x$ takes precedence.</p>
<p>I don't think that any other condition, such as continuity or differentiability, need to be imposed on the substitution.</p>
<hr>
<p>For the sake of the illustration, let us consider the substitution $u=x^3-\text{sign}(x)$, which is neither continuous nor invertible. We have a branch with $x<0,u<1$, and another with $x>0,u>-1$.</p>
<p>The equation is split for the two branches</p>
<p>$$u<1\land(u-1)^2-9(u-1)+8=u^2-11u+18=0,\\
u>-1\land(u+1)^2-9(u+1)+8=u^2-7u=0.$$</p>
<p>These give the solution sets $u\in\{\}$ (no $u$ is admissible) and $u\in\{0,7\}$. Then for the second branch, $x^3\in\{1,8\}$, which is correct.</p>
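<p>A small numeric check of this branch analysis (illustrative, not from the original answer): the admissible branch recovers $x^3\in\{1,8\}$, and the polynomial is strictly positive for $x<0$, so the negative branch indeed contributes nothing:</p>

```python
def p(x):
    return x**6 - 9 * x**3 + 8

# positive branch: u in {0, 7} gives x = (u + 1)^(1/3), i.e. x = 1 and x = 2
for u in (0, 7):
    x = (u + 1) ** (1 / 3)
    assert abs(p(x)) < 1e-6

# negative branch has no solution: every term of p(x) is positive for x < 0
assert all(p(-k / 10) > 0 for k in range(1, 101))
```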
|
4,619,641 | <p>I need to find the image of the function <span class="math-container">$f(z) = z^2$</span> whose domain is <span class="math-container">${z: Re(z) > 0}$</span>. I first let <span class="math-container">$z = x + iy$</span>. Then <span class="math-container">$$w = f(z) = z^2 = (x+iy)^2 = (x^2-y^2) + 2xyi$$</span> Hence <span class="math-container">$u(x,y) = x^2-y^2$</span>, and <span class="math-container">$v(x,y) = 2xy$</span> Then I first consider the boundary, which is when <span class="math-container">$Re(z) = 0$</span>. When <span class="math-container">$Re(z) = 0$</span>, <span class="math-container">$x = 0$</span>. Then <span class="math-container">$z = yi$</span>. Then I plug this back into my original function, and get <span class="math-container">$$f(z) = z^2 = (yi)^2 = -y^2$$</span>, but I'm confused about what to do next, since I need to consider when <span class="math-container">$Re(z) > 0$</span>. Thanks!</p>
| joriki | 6,622 | <p>Nice idea, but unfortunately not.</p>
<p>If we choose <span class="math-container">$x_i=x_j=y$</span> and <span class="math-container">$x_k=x_l=z$</span>, the second and third terms vanish by symmetry, and we’re left with <span class="math-container">$\mathbb E\left[y^2z^2\right]=\mathbb E\left[y^2\right]\mathbb E\left[z^2\right]$</span>, which says that <span class="math-container">$y^2$</span> and <span class="math-container">$z^2$</span> are uncorrelated. That’s indeed the case for the Gaussian distribution, but not for the uniform distribution on the sphere, where these are negatively correlated. This correlation is introduced by the normalization.</p>
|
4,181,235 | <p>The topology on <span class="math-container">$\mathcal D$</span> is the locally convex topology induced by seminorms like <span class="math-container">$\|\partial_\alpha f\|_\infty$</span>, where <span class="math-container">$\alpha=(d_1,\ldots,d_n) $</span> and <span class="math-container">$\partial_\alpha=\partial_{x_1}^{d_1}\ldots\partial_{x_n}^{d_n}$</span>.</p>
<p><span class="math-container">$\mathcal D'$</span> is its continuous dual space.</p>
<p>The weak<span class="math-container">$^*$</span> topology on <span class="math-container">$\mathcal D'$</span> is induced by seminorms like <span class="math-container">$\|\phi\|_f=|\phi(f)|$</span>, where <span class="math-container">$f\in \mathcal D$</span>.</p>
<p>It is easy to define the limit of a Cauchy net <span class="math-container">$(\phi_\lambda)$</span> on <span class="math-container">$\mathcal D'$</span>: <span class="math-container">$\Phi(f)=\lim_{\lambda}\phi_\lambda(f)$</span>.</p>
<p>However, I don't know how to show <span class="math-container">$\Phi$</span> is continuous, that is, <span class="math-container">$\Phi(f_\eta)\to 0$</span> whenever <span class="math-container">$\|\partial_\alpha f_\eta\|\to 0$</span> for all <span class="math-container">$\alpha\in \mathbb N^n$</span>.</p>
<p>Could someone please give me some hints?</p>
| Phicar | 78,870 | <p><strong>Hints:</strong>
Notice that if <span class="math-container">$a<b=a+c$</span> then it is the same thing as the case <span class="math-container">$a=b$</span> because
<span class="math-container">$$\sum _{j=0}^n\binom{a+j}{b}=\sum _{j=-c}^{n-c}\binom{a+c+j}{b}=\sum _{j=0}^{n-c}\binom{b+j}{b},$$</span>
can you conclude?<br><br>
For the case in which <span class="math-container">$b<a$</span> then you can write the sum as suggested by <strong>peterwhy</strong> in the comments i.e.,</p>
<p><span class="math-container">$$\sum _{j=0}^{a-1}\binom{j}{b}+\sum _{j=0}^{n}\binom{a+j}{b}=\sum _{j=0}^{a+n}\binom{j}{b},$$</span>
can you conclude?</p>
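<p>The hint can also be checked numerically. A small script (my own illustration; it uses the hockey-stick identity $\sum_{j=0}^{n}\binom{b+j}{b}=\binom{b+n+1}{b+1}$, which is presumably the intended conclusion) confirms both steps:</p>

```python
from math import comb

# hockey-stick identity: sum_{j=0}^{n} C(b+j, b) == C(b+n+1, b+1)
for b in range(6):
    for n in range(10):
        assert sum(comb(b + j, b) for j in range(n + 1)) == comb(b + n + 1, b + 1)

# the index shift in the hint: for b = a + c and n >= c,
# sum_{j=0}^{n} C(a+j, b) == sum_{j=0}^{n-c} C(b+j, b)
# (math.comb returns 0 when the top index is below the bottom one)
for a in range(5):
    for c in range(4):
        b = a + c
        for n in range(c, 10):
            lhs = sum(comb(a + j, b) for j in range(n + 1))
            rhs = sum(comb(b + j, b) for j in range(n - c + 1))
            assert lhs == rhs
```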
|
13,230 | <p>Introduction:</p>
<p>Let A be a subset of the naturals such that <span class="math-container">$\sum_{n\in A}\frac{1}{n}=\infty$</span>. The <a href="https://en.wikipedia.org/wiki/Erd%C5%91s_conjecture_on_arithmetic_progressions" rel="nofollow noreferrer">Erdos Conjecture</a> states that A must have arithmetic progressions of arbitrary length. </p>
<p>Question:</p>
<p>I was wondering how one might go about <em>categorizing</em> or <em>generating</em> the divergent series of the form in the introduction above. I'm interested in some particular techniques and I list some examples below:</p>
<p>If we let <span class="math-container">$S$</span> be the set of such divergent series: <span class="math-container">$S=\left[ A: \sum_{n\in A}\frac{1}{n}=\infty, \ A\subseteq\mathbb{N} \right]$</span>, what kind of operations are there that would make S a group, or at the very least a semigroup? I'm rather vague on what the operations should be for a reason, because although I presume trivial operations exist, their usefulness in understanding the members of <span class="math-container">$S$</span> would be questionable. </p>
<p>Alternately, can one look at these divergent sums through the technique of Ramanujan summation (think: <span class="math-container">$1+2+3+\ldots =^R -\frac{1}{12}$</span>, <span class="math-container">$R$</span> emphasizing Ramanujan summation)? The generalizations of Ramanujan summation (a good reference <a href="http://algo.inria.fr/seminars/sem01-02/delabaere2.pdf" rel="nofollow noreferrer"> here </a>) allow one to assign values to some of these series and give some measure of what kind of divergence is occurring. Moreover, basic series manipulations that hold for convergent series tend to carry over to Ramanujan summation, so can one perhaps look at the set <span class="math-container">$S$</span> above as a set of equivalence classes in the sense of two elements being equivalent if they share the same Ramanujan summation constant. </p>
<p>Thanks in advance for any input!</p>
| Pete L. Clark | 1,149 | <p>Most of the point of this answer is to promote a piece of terminology:</p>
<p>Three years ago I first taught a number theory course at UGA in which I made the following definition: a subset <span class="math-container">$A$</span> of the positive integers is <b>substantial</b> if</p>
<p><span class="math-container">$\sum_{n \in A} \frac{1}{n} = \infty$</span>.</p>
<p>A little bit of discussion of this concept occurs in Section 4 of</p>
<p><a href="http://alpha.math.uga.edu/%7Epete/4400primes.pdf" rel="nofollow noreferrer">http://alpha.math.uga.edu/~pete/4400primes.pdf</a></p>
<p>I do mention the Erdos(-Turan?) conjecture about arithmetic progressions in substantial sets. For the purpose of constructing examples, possibly the remark I make about any set with positive upper density being substantial will be of most use to you.</p>
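<p>To illustrate the remark about positive upper density (my own example, not from the notes): the multiples of 3 have density 1/3 and their reciprocal sum grows like $\frac13\log N$, so they are substantial, while a density-zero set such as the squares has a bounded reciprocal sum:</p>

```python
from math import log

def recip_sum_multiples_of_3(N):
    return sum(1.0 / n for n in range(3, N + 1, 3))

# each factor-of-10 increase in N adds about log(10)/3 to the sum,
# so the partial sums diverge like (log N)/3
gain = recip_sum_multiples_of_3(300_000) - recip_sum_multiples_of_3(30_000)
assert abs(gain - log(10) / 3) < 1e-3

# contrast: the squares are NOT substantial -- their reciprocal
# sum stays below pi^2/6 = 1.6449...
assert sum(1.0 / k**2 for k in range(1, 100_000)) < 1.645
```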
<p>Those notes don't contain a proof of that, but a proof of this and more can be found in a very (very!) nice final project done in this class by (then) undergraduate Alex Rice. Sadly he never gave me an electronic copy in a form that I was able to upload to my webpage. If you want to see his writeup, let me know and I'll bug him about this again: the winds of fate have blown him around a bit, but he is now again a UGA (graduate) student taking a number theory course from me.</p>
<p>Finally, to answer one of your questions in a cheap way: yes, the set of substantial subsets of <span class="math-container">$\mathbb{Z}^+$</span> certainly forms a semigroup under union. This seems like a completely unhelpful observation, but who knows...</p>
|
13,230 | <p>Introduction:</p>
<p>Let A be a subset of the naturals such that <span class="math-container">$\sum_{n\in A}\frac{1}{n}=\infty$</span>. The <a href="https://en.wikipedia.org/wiki/Erd%C5%91s_conjecture_on_arithmetic_progressions" rel="nofollow noreferrer">Erdos Conjecture</a> states that A must have arithmetic progressions of arbitrary length. </p>
<p>Question:</p>
<p>I was wondering how one might go about <em>categorizing</em> or <em>generating</em> the divergent series of the form in the introduction above. I'm interested in some particular techniques and I list some examples below:</p>
<p>If we let <span class="math-container">$S$</span> be the set of such divergent series: <span class="math-container">$S=\left[ A: \sum_{n\in A}\frac{1}{n}=\infty, \ A\subseteq\mathbb{N} \right]$</span>, what kind of operations are there that would make S a group, or at the very least a semigroup? I'm rather vague on what the operations should be for a reason, because although I presume trivial operations exist, their usefulness in understanding the members of <span class="math-container">$S$</span> would be questionable. </p>
<p>Alternately, can one look at these divergent sums through the technique of Ramanujan summation (think: <span class="math-container">$1+2+3+\ldots =^R -\frac{1}{12}$</span>, <span class="math-container">$R$</span> emphasizing Ramanujan summation)? The generalizations of Ramanujan summation (a good reference <a href="http://algo.inria.fr/seminars/sem01-02/delabaere2.pdf" rel="nofollow noreferrer"> here </a>) allow one to assign values to some of these series and give some measure of what kind of divergence is occurring. Moreover, basic series manipulations that hold for convergent series tend to carry over to Ramanujan summation, so can one perhaps look at the set <span class="math-container">$S$</span> above as a set of equivalence classes in the sense of two elements being equivalent if they share the same Ramanujan summation constant. </p>
<p>Thanks in advance for any input!</p>
| gowers | 1,459 | <p>I've never much liked Erdős's conjecture. Of course, I don't mean by that that I wouldn't love to solve it, or that I wouldn't be fascinated to know the answer. However, I think the precise form in which it is stated, with sums of reciprocals, gives it a mystique it doesn't quite deserve. The real question, it seems to me, is this: how large a subset of {1,2,...,n} can you have if it is to contain no arithmetic progression of length k? Erdős's conjecture is roughly equivalent to the statement that the best density you can get is no more than n/logn. (I won't bother to say what I mean by "roughly equivalent", but if you think about it you will see that a set with reasonably smoothly decreasing density can get up to a little bit less than n/logn before the sum of its reciprocals starts to diverge.)</p>
<p>Now there doesn't seem to be a strong heuristic argument that the right bound for this problem is about n/logn. Indeed, many people think that it is probably quite a lot lower than that. So there is something a bit arbitrary about Erdős's conjecture: it has a memorable statement, but there is no particular reason to think that it is a natural statement in the context of this problem. By contrast, if you ask what the correct density is in order to guarantee a progression of length k, then it's trivially a natural question, since it is asking for the best possible bound.</p>
<p>One could perhaps defend the conjecture as being the weakest neat conjecture of its kind that would imply that the primes contain arbitrarily long arithmetic progressions. And that would still be an extremely interesting consequence of a proof of the conjecture (or rather, the fact that there could be a purely combinatorial proof of the Green-Tao theorem would be an extremely interesting consequence).</p>
|
13,230 | <p>Introduction:</p>
<p>Let A be a subset of the naturals such that <span class="math-container">$\sum_{n\in A}\frac{1}{n}=\infty$</span>. The <a href="https://en.wikipedia.org/wiki/Erd%C5%91s_conjecture_on_arithmetic_progressions" rel="nofollow noreferrer">Erdos Conjecture</a> states that A must have arithmetic progressions of arbitrary length. </p>
<p>Question:</p>
<p>I was wondering how one might go about <em>categorizing</em> or <em>generating</em> the divergent series of the form in the introduction above. I'm interested in some particular techniques and I list some examples below:</p>
<p>If we let <span class="math-container">$S$</span> be the set of such divergent series: <span class="math-container">$S=\left[ A: \sum_{n\in A}\frac{1}{n}=\infty, \ A\subseteq\mathbb{N} \right]$</span>, what kind of operations are there that would make S a group, or at the very least a semigroup? I'm rather vague on what the operations should be for a reason, because although I presume trivial operations exist, their usefulness in understanding the members of <span class="math-container">$S$</span> would be questionable. </p>
<p>Alternately, can one look at these divergent sums through the technique of Ramanujan summation (think: <span class="math-container">$1+2+3+\ldots =^R -\frac{1}{12}$</span>, <span class="math-container">$R$</span> emphasizing Ramanujan summation)? The generalizations of Ramanujan summation (a good reference <a href="http://algo.inria.fr/seminars/sem01-02/delabaere2.pdf" rel="nofollow noreferrer"> here </a>) allow one to assign values to some of these series and give some measure of what kind of divergence is occurring. Moreover, basic series manipulations that hold for convergent series tend to carry over to Ramanujan summation, so can one perhaps look at the set <span class="math-container">$S$</span> above as a set of equivalence classes in the sense of two elements being equivalent if they share the same Ramanujan summation constant. </p>
<p>Thanks in advance for any input!</p>
| John H. Johnson | 9,949 | <p>There is a way you can recast Erdős conjecture into a statement about certain inclusions among various compact left and two-sided ideals. Such topological-algebraic statements, and a few combinatorial statements, are proved by <a href="http://mysite.verizon.net/nhindman" rel="noreferrer">Neil Hindman</a> in</p>
<blockquote>
<p><a href="http://www.ams.org/mathscinet-getitem?mr=938821" rel="noreferrer">"Some Equivalents of the Erdős Sum of Reciprocals Conjecture." <em>European Journal of Combinatorics</em> (1988) <strong>9</strong>, no. 1, 39 -- 47.</a></p>
</blockquote>
<p>Here is a brief sample of one of these topological-algebraic statements. Let <span class="math-container">$\beta\mathbb{N}$</span> denote the Stone-Čech compactification of the discrete space <span class="math-container">$\mathbb{N}$</span>. We can extend the usual addition and multiplication operations on <span class="math-container">$\mathbb{N}$</span> to <span class="math-container">$\beta\mathbb{N}$</span> to make <span class="math-container">$(\beta\mathbb{N}, +)$</span> and <span class="math-container">$(\beta\mathbb{N}, \cdot)$</span> both into compact right-topological semigroups. (Right topological semigroup means that <span class="math-container">$(\beta\mathbb{N}, +)$</span> and <span class="math-container">$(\beta\mathbb{N}, \cdot)$</span> are both semigroups and for all <span class="math-container">$p$</span>, <span class="math-container">$q \in \beta\mathbb{N}$</span> the maps <span class="math-container">$p \mapsto p+q$</span> and <span class="math-container">$p \mapsto p\cdot q$</span> are continuous.) To see how to actually perform this extension you can read section 3, pgs. 23-28, of <a href="http://www.math.ohio-state.edu/~vitaly/ertupdatenov6.pdf" rel="noreferrer">this pdf document</a> by Vitaly Bergelson. (However, Bergelson's construction makes <span class="math-container">$(\beta\mathbb{N}, +)$</span> into a compact left-topological semigroup.)</p>
<p>Let <span class="math-container">$L \subseteq \beta\mathbb{N}$</span>. We say <span class="math-container">$L$</span> is a left ideal of <span class="math-container">$(\beta\mathbb{N}, +)$</span> if <span class="math-container">$L$</span> is nonempty and <span class="math-container">$\beta\mathbb{N} + L \subseteq L$</span>. We define a right ideal of <span class="math-container">$(\beta\mathbb{N}, +)$</span> dually, and a (two-sided) ideal is both a left and right ideal. We define left, right, and two-sided ideals of <span class="math-container">$(\beta\mathbb{N}, \cdot)$</span> by simply replacing "<span class="math-container">$+$</span>" with "<span class="math-container">$\cdot$</span>" above.</p>
<p>Now define the following two subsets of <span class="math-container">$\beta\mathbb{N}$</span>:</p>
<ul>
<li><span class="math-container">$\mathcal{AP} = \{p \in \beta\mathbb{N} : A \hbox{ contains APs of arbitrary length for all } A \in p \}$</span></li>
<li><span class="math-container">$\mathcal{D} = \{p \in \beta\mathbb{N} : \sum_{n\in A} 1/n = \infty \hbox{ for all } A \in p\}$</span></li>
</ul>
<p>It's known that <span class="math-container">$\mathcal{AP}$</span> is a compact two-sided ideal of <span class="math-container">$(\beta\mathbb{N}, +)$</span> and <span class="math-container">$(\beta\mathbb{N}, \cdot)$</span>, and that <span class="math-container">$\mathcal{D}$</span> is a compact left ideal of <span class="math-container">$(\beta\mathbb{N}, +)$</span> and <span class="math-container">$(\beta\mathbb{N}, \cdot)$</span>. Therefore (part of) the main result of Hindman's paper is the</p>
<blockquote>
<p><strong>Theorem.</strong> The following statements are equivalent.</p>
<p>(a) If <span class="math-container">$A\subseteq \mathbb{N}$</span> and <span class="math-container">$\sum_{n \in A} 1/n = \infty$</span>, then A contains APs of arbitrary length.</p>
<p>(b) <span class="math-container">$\mathcal{D} \subseteq \mathcal{AP}$</span>.</p>
</blockquote>
<p>Of course the point here is that since <span class="math-container">$\mathcal{D}$</span> is a left ideal and <span class="math-container">$\mathcal{AP}$</span> is a two-sided ideal you would hope to have some nice theorems about inclusion relationships among various compact left, right, and two-sided ideals in <span class="math-container">$\beta\mathbb{N}$</span> to lean on. As far as I know, no one has attempted to attack Erdős conjecture from this topological-algebraic viewpoint.</p>
<p>Just to further illustrate the difficulties involved, let me mention that currently there is not even a "purely" topological-algebraic proof of Szemerédi's Theorem yet!</p>
<p>Let <span class="math-container">$\Delta = \{p \in \beta\mathbb{N} : \overline{d}(A) > 0 \hbox{ for all } A \in p\}$</span> and let <span class="math-container">$\Delta^* = \{p \in \beta\mathbb{N} : d^*(A) > 0 \hbox{ for all } A \in p\}$</span>. Here <span class="math-container">$\overline{d}$</span> and <span class="math-container">$d^*$</span> are the <a href="http://en.wikipedia.org/wiki/Natural_density#Upper_and_lower_asymptotic_density" rel="noreferrer">upper asymptotic density and Banach Density</a>. It's known that <span class="math-container">$\Delta$</span> is a compact left ideal of <span class="math-container">$(\beta\mathbb{N},+)$</span>, and <span class="math-container">$\Delta^*$</span> is a compact two-sided ideal of <span class="math-container">$(\beta\mathbb{N}, +)$</span> and a compact left ideal of <span class="math-container">$(\beta\mathbb{N}, \cdot)$</span>. In the above paper, Hindman shows that Szemerédi's Theorem is equivalent to each of the inclusions <span class="math-container">$\Delta \subseteq \mathcal{AP}$</span> and <span class="math-container">$\Delta^* \subseteq \mathcal{AP}$</span>.</p>
<p>However, one possible approach to show Szemerédi's Theorem in a topological-algebraic "way" was shown <em>not</em> to work in the paper <a href="http://www.ams.org/mathscinet-getitem?mr=911058" rel="noreferrer">"Subprincipal Closed Ideals in <span class="math-container">$\beta\mathbb{N}$</span>"</a> by Dennis Davenport and Hindman. In this paper, they show that <span class="math-container">$\Delta^*$</span> intersects every closed ideal of <span class="math-container">$(\beta\mathbb{N},+)$</span>; but, beyond that, not enough is known about inclusions among compact ideals to prove, algebraically, that <span class="math-container">$\Delta^* \subseteq \mathcal{AP}$</span>.</p>
|
2,880,377 | <p>As the title states, I need to find the limit for $x\left(x + 1 - \frac{1}{\sin(\frac{1}{1+x})}\right)$ as $x \rightarrow \infty$, as part of a larger proof I am working on.</p>
<p>I believe the answer is 0. I think that to start, I can show that $\frac{1}{\sin(\frac{1}{1+x})} \rightarrow x + 1$ for large $x$. By looking at the series expansion for $\sin$, it's clear that $\sin(\frac{1}{1+x})$ approximates $\frac{1}{1+x}$ for large $x$, as the higher-power terms $\left(\frac{1}{1+x}\right)^3, \left(\frac{1}{1+x}\right)^5, \ldots$ would disappear faster, but would it be sufficient to state this? Is there not a more rigorous way of showing this to be true?</p>
<p>If my approach is entirely wrong, or there is a more elegant way of reaching the answer, please share.</p>
| Nosrati | 108,128 | <p>Let $y=\dfrac{1}{x+1}$ then $y\to0$ in
\begin{align}
\lim_{xto\infty}x(x + 1 - \frac{1}{\sin(\frac{1}{1+x})})
&= \lim_{y\to0}\dfrac{(1-y)}{y}\left(\dfrac{1}{y}-\dfrac{1}{\sin y}\right)\\
&= \lim_{y\to0}\dfrac{(1-y)(\sin y-y)}{y^2\sin y}\\
&= \lim_{y\to0}\dfrac{(1-y)\left(y-\dfrac16y^3+O(y^5)-y\right)}{y^3}\\
&= \color{blue}{-\dfrac16}
\end{align}</p>
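<p>A quick numerical check of this value (my own addition): evaluating the original expression at large $x$ approaches $-\frac16$, with a leading error term of about $\frac{1}{6(x+1)}$:</p>

```python
import math

def g(x):
    return x * (x + 1 - 1.0 / math.sin(1.0 / (1.0 + x)))

# g(x) -> -1/6 as x grows; the error is of order 1/(6(x+1))
for x in (1e2, 1e3, 1e4):
    assert abs(g(x) + 1 / 6) < 2.0 / x
```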
|
2,779,945 | <p>Picture the setup:</p>
<p>A particle of mass $m$ attached to a string of length $R$ spins in a vertical circle around a fixed peg with constant speed $v>\sqrt{Rg}$ in the $x$-$z$ plane, acting under its weight $mg$. In this model, the probability that the string snaps when the string is at an angle $\theta\in[0,2\pi)$ is proportional to the tension in the string, $T(\theta)$.</p>
<p>We know the tension is given by $T(\theta) = \frac{mv^2}{R}-mg\sin\theta$.</p>
<p>So the tension is greatest when the particle is at the bottom of its motion (i.e. when $\theta=3\pi/2$) and the tension is least when the particle is at the top (when $\theta=\pi/2$).</p>
<p>If we let the random variable $\Theta$ be the value of theta at which the string snaps, then we can work out the probability density function for $\Theta$: $$f_{\Theta}(\theta) = \frac1{2\pi}-\frac{Rg\sin\theta}{2\pi v^2}.$$</p>
<p>The issue arises here: the expectation of $\Theta$ is given by
$$
\mathbb{E}(\Theta) = \int_0^{2\pi}\theta\; f_{\Theta}(\theta)\;d\theta
$$
which evaluates to $\pi+Rg/v^2$, which is not intuitively correct. It would make sense for the expectation of $\Theta$ to be $3\pi/2$, the point at which the string is most likely to snap. Furthermore, if we instead define $\theta$ by the angle with the vertical rather than the angle with the positive horizontal, and so the pdf is in terms of $\cos\theta$ instead of $\sin\theta$, we get $\mathbb{E}(\Theta)=\pi$, which does make sense in this context.</p>
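<p>(For what it's worth, the stated pdf and expectation can be sanity-checked numerically; the parameters $R=1$, $g=9.8$, $v=5$ below are illustrative values of my own, chosen so that $v>\sqrt{Rg}$ and the pdf stays positive.)</p>

```python
from math import pi, sin

R, g, v = 1.0, 9.8, 5.0          # illustrative values with v > sqrt(R*g)
f = lambda t: 1 / (2 * pi) - R * g * sin(t) / (2 * pi * v**2)

N = 200_000
h = 2 * pi / N
grid = [i * h for i in range(N)]

assert abs(sum(f(t) for t in grid) * h - 1.0) < 1e-7   # pdf integrates to 1
assert abs(max(grid, key=f) - 3 * pi / 2) < 1e-3       # mode at theta = 3*pi/2
E = sum(t * f(t) for t in grid) * h
assert abs(E - (pi + R * g / v**2)) < 1e-3             # E[Theta] = pi + Rg/v^2
```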
<p>I suspect the cause of this inconsistency is that the expectation doesn't take into account that $0=2\pi$ in this context. We need to use a different formula for expectation that doesn't bias towards $2\pi$ over $0$. I reckon transforming to polar co-ordinates would make sense, but I have no idea how expectations translate into polar, and how to make the outcome independent of the choice of co-ordinate system.</p>
<p>Perhaps changing the factor of $\theta$ in the integral for the expectation, i.e. work out $\mathbb{E}(g(\Theta))$ for some function $g$ that deals with the moment of an angle, or something...</p>
<p>Any thoughts are appreciated. The endgame is to work out $\textrm{Var}(\Theta)$ and maybe other statistics about $\Theta$, but obviously I need a good structure for the pdf and stuff first.</p>
| Sam Spedding | 202,717 | <p>Okay I think I've solved my own question here. We have the tension $T(\theta)$ is given by
$$
T(\theta) = \frac{mv^2}{R} - mg\sin\theta.
$$
Then to find the pdf, we normalise $T$ by dividing by the total "mass" of $T$, the $\textbf{polar}$ integral from $0$ to $2\pi$:
$$
\int_0^{2\pi} \int_0^{T(\theta)} r\; \mathrm{d}r \; \mathrm{d}\theta
$$
I.e. we needed to use polar co-ordinates from the very start, since the random variable $\Theta$ is inherently an angle. We can simplify this integral by writing $T$ in the form $T(\theta)=a+b\sin\theta$, where, in this case, $a=mv^2/R$ and $b=-mg$.</p>
<p>This gives the total "mass", $k$, of $T$ as
\begin{align*}
k=\int_0^{2\pi} \int_0^{T(\theta)} r\; \mathrm{d}r \; \mathrm{d}\theta &= \frac12 \int_0^{2\pi} (a+b\sin\theta)^2 \;\mathrm{d}\theta \\
&= \frac\pi2 (2a^2+b^2) \\
&= \frac\pi2\left( 2\frac{m^2v^4}{R^2} + m^2g^2 \right) \\
&=\frac{\pi m^2}{2R^2}(2v^4 +R^2g^2)
\end{align*}
So it makes sense to use
$$
f_{\Theta}(\theta) = \frac{1}{2k}\; [T(\theta)]^2,
$$
as the p.d.f. for $\Theta$, which actually ends up being rather redundant.</p>
<p>Now, we calculate the "centre of mass" of $T$ by finding the cartesian coordinates, $x=r\cos\theta$ and $y=r\sin\theta$, for the centre of mass, $(\bar{x},\bar{y})$ by finding the moments:</p>
<p>\begin{align*}
\bar{x} &= \frac1k \int_0^{2\pi} \int_0^{T(\theta)} xr\; \mathrm{d}r \; \mathrm{d}\theta = \frac1k \int_0^{2\pi} \int_0^{T(\theta)} r^2 \cos\theta\; \mathrm{d}r \; \mathrm{d}\theta \\
&= \frac1k \int_0^{2\pi} \frac13 (a+b\sin\theta)^3 \cos\theta \; \mathrm{d}\theta = 0
\end{align*}</p>
<p>and</p>
<p>\begin{align*}
\bar{y} &= \frac1k \int_0^{2\pi} \int_0^{T(\theta)} yr\; \mathrm{d}r \; \mathrm{d}\theta = \frac1k \int_0^{2\pi} \int_0^{T(\theta)} r^2 \sin\theta\; \mathrm{d}r \; \mathrm{d}\theta \\
&= \frac1k \int_0^{2\pi} \frac13 (a+b\sin\theta)^3 \sin\theta \; \mathrm{d}\theta = \frac1{4k}\pi b(4a^2+b^2) \\
&= \frac{b(4a^2+b^2)}{2(2a^2+b^2)} = \frac{-mg(4v^4+R^2g^2)}{2(2v^4+R^2g^2)}
\end{align*}</p>
<p>Since $\bar{x} = 0$ and $\bar{y} < 0$, we get that (excuse sloppiness) $\bar{\theta} = \textrm{"arctan"}\left(\frac{\bar{y}}{\bar{x}}\right) = \frac{3\pi}2 = \mathbb{E}(\Theta)$, as we had anticipated.</p>
<p>We also obtain $\bar{r} = |\bar{y}|$. Does this equal $\mathbb{E}(T(\Theta))$? It seems plausible since the dimensional analysis works this value out to be a force.</p>
<p>This is a clunky solution and I don't particularly like moving back to cartesian to get back to polar, but at least it's a solution. It's also interesting that in this case, the expected value of $\Theta$ is not an integral with $f_\Theta$ but instead with $T$.</p>
<p>This would also be the case for the variance of $\Theta$, where we might compute
$$
\left( \frac1k \int_0^{2\pi} \int_0^{T(\theta)} x^2 r \;\mathrm{d}r \; \mathrm{d}\theta, \frac1k \int_0^{2\pi} \int_0^{T(\theta)} y^2 r \;\mathrm{d}r \; \mathrm{d}\theta \right)
$$
to find the second moments, and then this could give us $\mathbb{E}(\Theta^2)$. Who knows... food for thought.</p>
|
3,439,833 | <p><strong>Problem:</strong>
Let <span class="math-container">$f:\mathbb{R} \rightarrow \mathbb{R}$</span> be differentiable.
Suppose <span class="math-container">$f(0)=0$</span>, and <span class="math-container">$f'(x) < \frac{1}{2}$</span> for all <span class="math-container">$x$</span>. Show <span class="math-container">$f(4) < 2$</span>.</p>
<p>This question is intuitively pretty simple, but I'm not sure what is generally regarded as sufficiently rigorous.</p>
<p>Simply, if the slope of the function never reaches <span class="math-container">$\frac{1}{2}$</span>, then combining all these small intervals will produce <span class="math-container">$f(4) <2$</span>.</p>
<p><span class="math-container">$\forall a \in \mathbb{R}, \lim\limits_{x \to a} \frac{f(x)-f(a)}{x-a} < \frac{1}{2} \implies \forall a \in \mathbb{R}, f(x)-f(a) < \frac{1}{2}(x-a)$</span> </p>
<p>Letting <span class="math-container">$a=0, x=4$</span>, we have <span class="math-container">$f(4) < 2$</span>.</p>
<p><strong>My Question:</strong></p>
<p>I'm not sure if I'm skipping a ton of steps in that implication above. It looks like I just jumped to the conclusion without proving anything.</p>
<p>Is what I'm missing some sort of "interval argument"?</p>
<p>For example:
If I translated using the delta-epsilon definition of limits
<span class="math-container">$$\forall a \in \mathbb{R}, \lim\limits_{x \to a} \frac{f(x)-f(a)}{x-a} < \frac{1}{2}$$</span>
<p>to <span class="math-container">$f(x)-f(a) < \frac{1}{2}(x-a)$</span> for some open interval <span class="math-container">$ x \in (a- \delta, a+ \delta)$</span>.</p>
<p>I would still need to show that the union of <em>infinitely</em> many of such small intervals satisfies <span class="math-container">$f(x)-f(a) < \frac{1}{2}(x-a)$</span>. How do I write this rigorously?</p>
| Quanto | 686,284 | <p>Use the mean value theorem </p>
<p><span class="math-container">$$\frac{f(4)-f(0)}{4-0} = f’(c) < \frac12$$</span></p>
<p>where <span class="math-container">$0<c<4$</span>. Then <span class="math-container">$f(4)<2$</span> follows. </p>
|
322,598 | <p><a href="https://en.wikipedia.org/wiki/Partially_ordered_set" rel="noreferrer">Partially ordered sets</a> (<em>posets</em>) are important objects in combinatorics (with <a href="https://gilkalai.wordpress.com/2019/02/05/extremal-combinatorics-v-posets/" rel="noreferrer">basic connections to extremal combinatorics</a> and to algebraic combinatorics) and also in other areas of mathematics. They are also related to <em>sorting</em> and to other questions in the theory of computing. I am asking for a list of open questions and conjectures about posets.</p>
| Sam Hopkins | 25,028 | <p>The 1/3-2/3 conjecture is probably considered one of the most significant open problems about finite posets; see the Wikipedia page: <a href="https://en.wikipedia.org/wiki/1/3%E2%80%932/3_conjecture" rel="noreferrer">https://en.wikipedia.org/wiki/1/3%E2%80%932/3_conjecture</a>.</p>
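<p>As a tiny illustration of the statement (my own, not from the answer): brute-forcing the linear extensions of the 3-element poset with the single relation $0<1$ shows the incomparable pair $(0,2)$ is "balanced", i.e. its ordering probability lies in $[1/3,2/3]$:</p>

```python
from itertools import permutations
from fractions import Fraction

relations = [(0, 1)]                 # poset on {0, 1, 2}: only 0 < 1
elems = (0, 1, 2)

def is_linear_extension(perm):
    pos = {x: i for i, x in enumerate(perm)}
    return all(pos[a] < pos[b] for a, b in relations)

exts = [p for p in permutations(elems) if is_linear_extension(p)]
assert len(exts) == 3                # (0,1,2), (0,2,1), (2,0,1)

def prob_before(x, y):
    """Fraction of linear extensions in which x occurs before y."""
    return Fraction(sum(p.index(x) < p.index(y) for p in exts), len(exts))

assert prob_before(0, 2) == Fraction(2, 3)
assert Fraction(1, 3) <= prob_before(0, 2) <= Fraction(2, 3)
```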
|
322,598 | <p><a href="https://en.wikipedia.org/wiki/Partially_ordered_set" rel="noreferrer">Partially ordered sets</a> (<em>posets</em>) are important objects in combinatorics (with <a href="https://gilkalai.wordpress.com/2019/02/05/extremal-combinatorics-v-posets/" rel="noreferrer">basic connections to extremal combinatorics</a> and to algebraic combinatorics) and also in other areas of mathematics. They are also related to <em>sorting</em> and to other questions in the theory of computing. I am asking for a list of open questions and conjectures about posets.</p>
| Timothy Chow | 3,106 | <p>What about the Stanley–Stembridge conjecture on (3+1)-free posets? Given a poset <span class="math-container">$P$</span> with vertex set <span class="math-container">$V = \{1,2,\ldots,n\}$</span>, define the symmetric function <span class="math-container">$X_P$</span> in countably many indeterminates <span class="math-container">$x_1, x_2, \ldots$</span> by
<span class="math-container">$$X_P := \sum_{\kappa:V\to\mathbb{N}} x_{\kappa(1)}x_{\kappa(2)}\cdots x_{\kappa(n)}$$</span>
where the sum is over all maps <span class="math-container">$\kappa:V \to\mathbb{N}$</span> such that the pre-image <span class="math-container">$\kappa^{-1}(i)$</span> of every <span class="math-container">$i\in\mathbb{N}$</span> is a totally ordered subset of <span class="math-container">$P$</span>. Then the conjecture is that if <span class="math-container">$P$</span> is (3+1)-free (i.e., that <span class="math-container">$P$</span> contains no induced subposet isomorphic to the disjoint union of a 3-element chain and a 1-element chain) then the expansion of <span class="math-container">$X_P$</span> in terms of elementary symmetric functions has nonnegative coefficients.</p>
<p>This conjecture grew out of certain positivity conjectures about immanants.
Guay-Paquet has reduced the conjecture to the case of unit interval orders (i.e., posets that are both (3+1)-free and (2+2)-free), and the unit interval order case has a graded generalization (due to Shareshian and Wachs), which has close connections with representation theory and Hessenberg varieties.</p>
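Deciding whether a given finite poset is (3+1)-free is a finite computation over 4-element subsets. A hedged Python sketch (the two small test posets below are my own examples, chosen only to exercise both outcomes):

```python
from itertools import combinations, permutations

def is_3plus1_free(elements, less):
    """True if no 4 elements induce a subposet isomorphic to a 3-element
    chain plus one element incomparable to all of it; `less` is the
    strict partial order (assumed transitive)."""
    def comparable(a, b):
        return less(a, b) or less(b, a)
    for quad in combinations(elements, 4):
        for d in quad:  # candidate isolated element
            rest = [e for e in quad if e != d]
            if any(comparable(d, e) for e in rest):
                continue
            # an induced (3+1) needs `rest` to form a 3-chain
            if any(less(a, b) and less(b, c)
                   for a, b, c in permutations(rest)):
                return False
    return True

# A unit interval order (intervals [t, t+1]): t < t' iff t + 1 < t'.
assert is_3plus1_free([0, 0.5, 2, 4], lambda a, b: a + 1 < b)

# A 3-chain x0 < x1 < x2 plus an incomparable point y is NOT (3+1)-free.
order = {("x0", "x1"), ("x0", "x2"), ("x1", "x2")}
assert not is_3plus1_free(["x0", "x1", "x2", "y"],
                          lambda a, b: (a, b) in order)
```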
|
2,565,932 | <p>Completing the rational numbers at a prime is clear, but for a quadratic field it is a little tricky.
Can anyone provide a reference, or illustrate how a quadratic field, when completed at a prime ideal, becomes a $p$-adic field?</p>
| Lubin | 17,760 | <p>We don’t need to restrict to quadratic fields. </p>
<p>Take any finite algebraic extension $k$ of $\Bbb Q$, and its ring of integers $R$. There’s unique factorization of ideals into products of primes here, so if $\mathfrak p$ is a prime ideal of $R$, you can define, for any $z\in R$, the $\mathfrak p$-absolute value $|z|_{\mathfrak p}$ as $p^{-v}$ where $zR=\mathfrak p^vI$, where $I$ is the product of all the other primes appearing in $zR$. (A more sophisticated definition of $|\star|_{\mathfrak p}$ raises not $p$ but a certain power of $p$ to the exponent $-v$, but that’s another story.)</p>
<p>For instance, in $k=\Bbb Q(\sqrt{-6}\,)$, where the ring of integers is $R=\Bbb Z[\sqrt{-6}\,]$, take $\mathfrak p=(2,\sqrt{-6}\,)$. Then since $\bigl(2\sqrt{-6}\bigr)=\mathfrak p^3\cdot(3,\sqrt{-6}\,)$ as ideals, we get $\bigl|2\sqrt{-6}\bigr|_{\mathfrak p}=p^{-3}$. Complete $R$ with respect to this absolute value, and you get a $p$-adic ring, which is actually an extension of $\Bbb Z_2$.</p>
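For $k=\Bbb Q$ itself ($R=\Bbb Z$, $\mathfrak p=(p)$) the definition specializes to the familiar $p$-adic absolute value on the integers, which is easy to compute; a small Python sketch:

```python
def p_adic_abs(z, p):
    """|z|_p = p**(-v) where p**v exactly divides the nonzero integer z."""
    if z == 0:
        raise ValueError("by convention |0|_p = 0 (v is infinite)")
    v = 0
    while z % p == 0:
        z //= p
        v += 1
    return p ** (-v)

print(p_adic_abs(48, 2))   # 48 = 2^4 * 3, so |48|_2 = 2^-4 = 0.0625
print(p_adic_abs(45, 3))   # 45 = 3^2 * 5, so |45|_3 = 1/9
print(p_adic_abs(7, 5))    # 5 does not divide 7, so |7|_5 = 1
```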
|
244,875 | <p>\begin{align}
\min_{x} c^Tx \\
s.t.~Ax=b
\end{align}
Note that here $x$ is unrestricted. I need to prove that the dual of this program is given by
\begin{align}
\max_{\lambda} \lambda^Tb \\
s.t.~\lambda^TA\leq c^T
\end{align}</p>
<p>But in the constraint, I always get an equality (using what I learnt)
\begin{align}
\max_{\lambda} \lambda^Tb \\
s.t.~\lambda^TA = c^T
\end{align}
Please give some explanation also. </p>
| Inquest | 35,001 | <p>\begin{align}
\min_{x} c^Tx \\
s.t.~Ax=b
\end{align}</p>
<p>Is the same as:
\begin{align}
\min_{x} c^T(x^+-x^-) \\
s.t.~A(x^+-x^-)=b\\
x^+,x^-\geq 0
\end{align}
Is the same as:
\begin{align}
\min_{x} [c^T|-c^T]z \\
s.t.~[A|-A]z=b\\
z\geq 0
\end{align}
$$z=[(x^+)^T|(x^-)^T]^T$$
Dual of this is :
\begin{align}
\max \quad b^Tp\\
s.t. [A|-A]^Tp\leq [c^T|-c^T]^T\\
\implies A^Tp=c
\end{align}
I think your answer is correct.</p>
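The key substitution $x = x^+ - x^-$ with $x^+, x^- \geq 0$ can be sanity-checked numerically. A small Python sketch with made-up toy data (the matrix and vectors below are arbitrary illustrations):

```python
# Made-up toy data: 2 equality constraints, 3 unrestricted variables.
A = [[1.0, 2.0, -1.0],
     [0.0, 1.0,  3.0]]
c = [2.0, -1.0, 0.5]
x = [1.5, -2.0, 0.25]              # an arbitrary unrestricted point

xp = [max(t, 0.0) for t in x]      # x+, the componentwise positive parts
xm = [max(-t, 0.0) for t in x]     # x-, the componentwise negative parts
z = xp + xm                        # z = [x+ | x-], componentwise >= 0

def dot(u, w):
    return sum(a * b for a, b in zip(u, w))

# [c | -c] z reproduces c^T x, and each row of [A | -A] reproduces A x:
assert abs(dot(c + [-t for t in c], z) - dot(c, x)) < 1e-12
for row in A:
    assert abs(dot(row + [-t for t in row], z) - dot(row, x)) < 1e-12
assert min(z) >= 0.0
print("splitting preserves objective and constraints")
```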
|
313,508 | <p>If anyone is familiar with Horowitz and Hill... it's exercise 1.19</p>
<p>Show that all the average power delivered to the preceding circuit winds up in the resistor. Do this by computing the value of $V^2/R$. What is the power, in watts for a series circuit of a 1$\mu F$ capacitor and a 1k resistor placed across the 110 Volt RMS, 60 Hz powerline?</p>
<p>The circuit in question is an AC supply with a cap and resistor in series. Simple.</p>
<p>I've been pouring myself into this complex algebra problem and have made no progress over several hours. I'm desperate to understand what they want but cannot get there without some help.</p>
| Romeo Lampa | 70,074 | <p>This is a simple series $RC$ across a $110$ volt $60$ HZ with $R=1k\Omega$ and $C=1 \mu F$ capacitor.</p>
<p>First solve for $X_C=\frac{1}{2\pi fC}=\frac{1}{2(3.1416)(60)(0.000001)}$, the capacitive reactance of $C$, which is $2.65k\Omega$. Then solve for $Z=\sqrt{R^2 + X_C^2}$, the vector sum of $R$ and $X_C$, which is $2.83k\Omega$.</p>
<p>Then solve for the current $I=V/Z=110 V/2.83k\Omega$ which is $38.8 mA$. The true power $P_{\text{true}}=I^2R$, which is $38.8mA^2 \cdot 1k\Omega = 1.51 W$. Hope this helps.</p>
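The arithmetic above is easy to reproduce; a small Python sketch using the values stated in the exercise:

```python
import math

R = 1_000.0   # ohms
C = 1e-6      # farads
V = 110.0     # volts RMS
f = 60.0      # hertz

Xc = 1.0 / (2.0 * math.pi * f * C)   # capacitive reactance, about 2653 ohms
Z = math.hypot(R, Xc)                # impedance magnitude, about 2835 ohms
I = V / Z                            # RMS current, about 38.8 mA
P = I ** 2 * R                       # average power dissipated in R

print(Xc, Z, I, P)                   # P comes out near 1.51 W
```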
|
221,137 | <p>What's the difference between Fourier transformations and Fourier Series? </p>
<p>Are they the same, where a transformation is just used when its applied (i.e. not used in pure mathematics)?</p>
| paul garrett | 12,291 | <p>Fourier transform and Fourier series are two manifestations of a similar idea, namely, to write general functions as "superpositions" (whether integrals or sums) of some special class of functions. Exponentials $x\rightarrow e^{itx}$ (or, equivalently, expressing the same thing in sines and cosines via Euler's identity $e^{iy}=\cos y+i\sin y$) have the virtue that they are eigenfunctions for differentiation, that is, differentiation just multiplies them: ${d\over dx}e^{itx}=it\cdot e^{itx}$. This makes exponentials very convenient for solving differential equations, for example.</p>
<p>A <em>periodic</em> function provably can be expressed as a "discrete" superposition of exponentials, that is, a sum. A non-periodic, but decaying, function does not admit an expression as <em>discrete</em> superposition of exponentials, but only a <em>continuous</em> superposition, namely, the integral that shows up in Fourier inversion for Fourier transforms.</p>
<p>In both case, there are several technical points that must be addressed, somewhat different in the two situations, but the issues are very similar in spirit.</p>
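A concrete instance of the "discrete superposition" in the periodic case is the classical square-wave series $\frac{4}{\pi}\sum_{k\text{ odd}}\frac{\sin kx}{k}$ (a standard textbook example, not taken from the answer above); a small Python sketch of its partial sums:

```python
import math

def square_wave_partial_sum(x, n_terms):
    """Partial sum of the classical series (4/pi) * sum_{k odd} sin(k x)/k,
    which converges to the odd square wave of amplitude +/-1."""
    total = 0.0
    for j in range(n_terms):
        k = 2 * j + 1
        total += math.sin(k * x) / k
    return 4.0 / math.pi * total

# At x = pi/2 the square wave equals 1; the partial sums home in on it.
for n in (1, 10, 100, 1000):
    print(n, square_wave_partial_sum(math.pi / 2, n))
```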
|
221,137 | <p>What's the difference between Fourier transformations and Fourier Series? </p>
<p>Are they the same, where a transformation is just used when its applied (i.e. not used in pure mathematics)?</p>
| chaohuang | 27,973 | <p>The Fourier series is used to represent a periodic function by a discrete sum of complex exponentials, while the Fourier transform is then used to represent a general, nonperiodic function by a continuous superposition or integral of complex exponentials. The Fourier transform can be viewed as the limit of the Fourier series of a function with the period approaches to infinity, so the limits of integration change from one period to $(-\infty,\infty)$.</p>
<p>In a classical approach it would not be possible to use the Fourier transform for a periodic function which cannot be in $\mathbb{L}_1(-\infty,\infty)$. The use of generalized functions, however, frees us of that restriction and makes it possible to look at the Fourier transform of a periodic function. It can be shown that the Fourier series coefficients of a periodic function are sampled values of the Fourier transform of one period of the function.</p>
|
2,258,281 | <p>I have the following relation symbolized as $\sim$, defined on $\mathbb{Z}\times\mathbb{Z}$: </p>
<p>$$x\sim y \iff |x-y|=1$$</p>
<p>I would like to know what the transitive-reflexive closure looks like in this case, because this relation isn't transitive or reflexive. Is it just an empty set?</p>
<p>In that case, it can't be an equivalence relation right? </p>
| Ashwin Ganesan | 157,927 | <p>You can obtain the argument by just looking at the given statements. You need to prove $\neg q \implies s$. So suppose $\neg q$ is true. We need to prove $s$ is true. Identify where in the given three propositions can we find $\neg q$ is true. We can take the contrapositive of the first proposition. </p>
<p>Observe from the first proposition $p \implies q$ that we have $\neg q \implies \neg p$. By hypothesis $\neg q$ is true, and so by the first proposition $\neg p$ is true. Using the second proposition, we get that $r$ is true, and from the third proposition, this implies $s$ is also true. Hence, if $\neg q$ is true, then $s$ is true, as was to be shown.</p>
<p>The rules of inference we used are contraposition (which says $a \implies b$ is equivalent to $\neg b \implies \neg a$) and hypothetical syllogism (which says: if $a \implies b$ and $b \implies c$, then $a \implies c$). </p>
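The validity of the argument can also be confirmed by brute force over all $2^4$ truth assignments; a small Python sketch:

```python
from itertools import product

def implies(a, b):
    return (not a) or b

# Premises: p -> q,   not p -> r,   r -> s.     Conclusion: not q -> s.
valid = all(
    implies(not q, s)
    for p, q, r, s in product([False, True], repeat=4)
    if implies(p, q) and implies(not p, r) and implies(r, s)
)
print(valid)  # True: the conclusion holds in every model of the premises
```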
|
1,148,624 | <p>This is for teaching math. I'm wondering if someone knows some striking near-equalities between simple arithmetic expressions. I vaguely remember that such things exist (e.g., numbers that look alike out to 10 digits, but that are really different), but I don't remember where I saw them or what they are. I think there are even some famous historical instances where it was debated whether certain expressions were equal or not.</p>
<p>One possibility I am particularly interested in is integer combinations of (integer) square roots that turn out to be very very close to zero (but that are nonzero). Does anyone know how to construct these? I am assuming they exist because I read somewhere (and forgotten again!) that there is currently no efficient algorithm---or maybe even no algorithm at all, can't remember---for determining the sign of such a sum.</p>
<p>Thank you!</p>
<p>PS: I'm also interested in sources (e.g., book of number puzzles or relevant number theory, etc).</p>
| Christian Blatter | 1,303 | <p>Imagine the following scenario: A physicist modeling a certain system finds out that the number
$$\alpha:={1\over10}\sum_{n=-\infty}^\infty e^{-(n/10)^2}$$
is relevant for his problem. He numerically computes partial sums $s_N:=\sum_{|n|\leq N}$ and obtains, e.g.,
$$s_{10}=1.529\ldots,\quad s_{20}=1.7659\ldots,\quad s_{40}=1.772454\ldots,\quad s_{100}=1.77245385090552\ldots\ .$$
From the website <a href="http://isc.carma.newcastle.edu.au/standardCalc" rel="nofollow">Inverse Symbolic Calculator</a> he learns that his $s_{100}$ agrees with $\sqrt{\pi}$ in all given decimal places. Is this just a numerical coincidence?</p>
<p>Our scientist then turns to a mathematician and is told that the number $\alpha$ is a value of the <em>theta function</em> $\theta(x):=\sum_{k=-\infty}^\infty e^{-k^2\pi x}$:
$$\alpha:={1\over10}\>\theta\bigl({1\over 100\pi}\bigr)\ .$$
Now comes the upshot: Jacobi's famous theta transformation allows to rewrite $\alpha$ in the form
$$\alpha={\sqrt{100\pi}\over 10}\theta(100\pi)=\sqrt{\pi}\sum_{k=-\infty}^\infty e^{-100\pi^2k^2}=\sqrt{\pi}(1+\epsilon)\ ,$$
where $\epsilon:=\sum_{k\ne0} e^{-100\pi^2k^2}<10^{-428}$. In other words: The number $\alpha$ resembles $\sqrt{\pi}$ up to more than $400$ decimal places, but is $\ne\sqrt{\pi}$.</p>
|
1,589,964 | <p>Given the LFT for a complex $z$,
\begin{align*}
\phi:z\mapsto \frac{2z+1}{z+2}.
\end{align*}
I'm asked about the image under $\phi$ of $C:=\left\{\left\lvert z+\frac25\right\rvert = \frac25\right\}$. I've parametrized this as $\gamma: \frac25 \exp(it) - \frac25$ and computed the image
\begin{align*}
w(t) = 2-\frac{15}{2\exp(it)+8},
\end{align*}
but I'm none the wiser from this expression and I also don't see why this doesn't give me a new circle, although $\phi$ is conformal $ad-bc = 4 - 1 = 3 \neq 0$. Any hints?</p>
| Brevan Ellefsen | 269,764 | <p>This probably refers to the complex conjugate, which, for any complex number, has the same real part but the opposite sign on the imaginary part. For example, if $f(x) = 5+4i$, then $\overline{f(x)} = 5-4i$.</p>
|
1,589,964 | <p>Given the LFT for a complex $z$,
\begin{align*}
\phi:z\mapsto \frac{2z+1}{z+2}.
\end{align*}
I'm asked about the image under $\phi$ of $C:=\left\{\left\lvert z+\frac25\right\rvert = \frac25\right\}$. I've parametrized this as $\gamma: \frac25 \exp(it) - \frac25$ and computed the image
\begin{align*}
w(t) = 2-\frac{15}{2\exp(it)+8},
\end{align*}
but I'm none the wiser from this expression and I also don't see why this doesn't give me a new circle, although $\phi$ is conformal $ad-bc = 4 - 1 = 3 \neq 0$. Any hints?</p>
| Henricus V. | 239,207 | <p>That refers to the complex conjugation $\overline{x+iy} = x-iy$. It ensures that
$$ \langle f,g\rangle_{L_2 (\mathbb{R})}= \int_\mathbb{R} f(x) \overline{g(x)} \, dx
$$
is an inner product since it must satisfy $\langle f, f\rangle \geq 0$ and $\langle f, g\rangle = \overline{\langle g, f\rangle}$</p>
|
2,948,673 | <p>I was thinking about the squeeze theorem here. We can denote the <span class="math-container">$\left[\frac{1}{|x|}\right] =n$</span>, and then try something like:</p>
<p><span class="math-container">$$x^2(1+2+3+...+(n-1)+(n-1)) \leq x^2 \frac{n(n+1)}{2} \leq x^2(1+2+3+...+(n-1)+(n+1))$$</span></p>
<p>But I don't know what to do with <span class="math-container">$x^2$</span>. Given that <span class="math-container">$\left[ \frac{1}{|x|} \right]=n$</span>, how do we proceed to find <span class="math-container">$x^2$</span>?</p>
| Bernard | 202,857 | <p><strong>Hint</strong></p>
<p>First we may suppose <span class="math-container">$x> 0$</span> as well. Next,
<span class="math-container">$$\biggl\lfloor\frac1x\biggr\rfloor=n\iff n\le\frac1x<n+1, \enspace\text{ so }\enspace x\dots$$</span></p>
|
650,381 | <p>the definition of variance is $V(X) = E((X-E(X))^2 )$</p>
<p>For a discrete random variable: if we have put $Y = g(X)$ , where $g$ is a real function</p>
<p>$E(Y) = E(g(X)) = \sum\limits_{k} g(k)p_X(k)$ , where $p_X(k) = P(X = k) $ </p>
<p>how can i use this formula to show that:</p>
<p>$V(X) = \sum\limits_{k} (k-E(X))^2 p_X(k)$</p>
| DonAntonio | 31,254 | <p>$$|xI-A|=\left(\frac i3\right)^3\begin{vmatrix}x-1&2&-1\\2&x-1&-1\\-1&-1&x+2\end{vmatrix}=$$</p>
<p>$$=-\frac i{27}\left((x-1)^2(x+2)+4-2(x-1)-4(x+2)\right)=$$</p>
<p>$$=-\frac i{27}\left(x^3-3x+\color{red}2-2x+\color{red}2+\color{red}4-4x-\color{red}8\right)=-\frac i{27}(x^3-9x)=$$</p>
<p>$$=-\frac i{27}x(x-3)(x+3)$$</p>
<p>and the above is close (at least) to what you got (and not what Amzoti got), so you better check stuff.</p>
|
121,760 | <p>I want to find the minimal polynomial (over $\mathbb{Q}$) of: $k:=\sqrt[3]{7-\sqrt{2}}$.</p>
<p>With simple 'tricks' I got that: $P=(x^3-7)^2+2$ is a polynomial such that $P(k)=0$.</p>
<p>But I don't know if, or how to prove that $P$ is the minimal polynomial. How can I prove this/find the minimal polynomial ? </p>
| Jyrki Lahtonen | 11,619 | <p>The only point of this answer is to supplement Bill Dubuque's answer, and to give a short
argument proving that $f(x)$ is irreducible in the ring $F_5[x]$.</p>
<p>Here $f(x)\equiv x^6+x^3+2$. Because $g(x)=x^2+x+2$ is irreducible, it is not a factor of
$$x^8-1=(x^4-1)(x^4+1)=(x-1)(x-2)(x-3)(x-4)(x^2-2)(x^2+2)$$
in the ring $F_5[x]$. Let $\alpha\in F_{25}$ be a zero of $g(x)$. Then $\alpha^{24}=1$, but $\alpha^8\neq1$. Thus the order of $\alpha$ is a multiple of three ($\alpha$ is actually a generator of $F_{25}^*$ but we won't need that fact). </p>
<p>Next consider $f(x)=g(x^3)$. Let $\beta$ be a zero of $f(x)$ in some extension field of $F_5$. Then $\beta^3$ is a zero of $g(x)$, so by our earlier observation $\beta$ has order that is a multiple of nine. But $F_{5^6}$ is the smallest extension of $F_5$ that has elements of order nine, because $k=6$ is the smallest positive integer with the property $5^k\equiv 1\pmod 9$. Therefore $f(x)$ is the minimal polynomial of $\beta$, and the claim follows.</p>
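The irreducibility of $f(x)=x^6+x^3+2$ over $F_5$ can also be confirmed by brute-force trial division by all monic polynomials of degree at most $3$ (any nontrivial factorization of a degree-$6$ polynomial involves such a factor); a small Python sketch:

```python
from itertools import product

P = 5  # the field F_5

def polydivmod(num, den):
    """Polynomial division over F_P; coefficients from highest degree down."""
    num = [t % P for t in num]
    inv = pow(den[0], P - 2, P)          # inverse of the leading coefficient
    quot = []
    while len(num) >= len(den):
        c = num[0] * inv % P
        quot.append(c)
        for i in range(len(den)):
            num[i] = (num[i] - c * den[i]) % P
        num.pop(0)
    while num and num[0] == 0:
        num.pop(0)
    return quot, num                      # empty remainder means den divides

def monic_polys(deg):
    for tail in product(range(P), repeat=deg):
        yield [1] + list(tail)

f = [1, 0, 0, 1, 0, 0, 2]                # x^6 + x^3 + 2
divisors = [d for deg in (1, 2, 3) for d in monic_polys(deg)
            if polydivmod(f, d)[1] == []]
print(divisors)  # empty: no factor of degree <= 3, so f is irreducible
```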
|
3,619,835 | <blockquote>
<p>Find the maximum of the function <span class="math-container">$f(x,y) = (a + x)(b + y)$</span> under the constraint <span class="math-container">$d = x + y$</span>, where <span class="math-container">$a$</span>, <span class="math-container">$b$</span>, <span class="math-container">$d$</span> are known.</p>
</blockquote>
<p>It seems "obvious" to me that you'd want to split it between the two so that both sides of the product are as equal as possible but no idea how to prove the result.</p>
<p>Sorry I don't know what kind of math this is so I tagged a few.</p>
| trancelocation | 467,003 | <p>Since you are not acquainted with the trick mentioned in the comment, I elaborate on this a bit.</p>
<p>You surely know the binomial formula</p>
<ul>
<li><span class="math-container">$(u+v)^2 = u^2+2uv+v^2$</span></li>
</ul>
<p>From this it follows
<span class="math-container">$$(u+v)^2-(u-v)^2 = 4uv \Rightarrow uv =\frac 14((u+v)^2-(u-v)^2)$$</span></p>
<p>Now, you plug into this formula </p>
<ul>
<li><span class="math-container">$u= a+x$</span> and <span class="math-container">$v= b+y$</span></li>
</ul>
<p>and use <span class="math-container">$d=x+y$</span>. So, you get</p>
<p><span class="math-container">$$(a+x)(b+y)=\frac 14((a+b+d)^2-(a-b-d+2x)^2)$$</span></p>
<p>Obviously, the square expression we subtract on the RHS has a minimal value of <span class="math-container">$0$</span>.</p>
<p>So, the maximal value you are looking for is</p>
<p><span class="math-container">$$\frac 14(a+b+d)^2$$</span></p>
|
1,808,019 | <p>I'm trying to better understand how the following group action looks like on a torus. The group $\mathbb{R}$ acts on the torus $C\times C=\{(e^{ix},e^{iy}):x,y\in \mathbb{R}\}$ ($C$ being the unit circle in $\mathbb{C}$) via $$t\cdot (e^{ix},e^{iy})\mapsto (e^{i(x+t)},e^{iy})$$</p>
<p>I am able (I think) to visualize the torus based on the definition of $C\times C$: the first coordinate gives us the ring of unit length in the donut and, as $x$ sweeps through $\mathbb{R}$, the second coordinate gives us the "hollow body" of the donut, again of unit length, as $y$ sweeps through $\mathbb{R}$. So here is where the problems begin for me.</p>
<ol>
<li><p>What am I doing wrong to show that this a homomorphism? $$st\cdot (e^{ix},e^{iy})\mapsto (e^{i(x+st)},e^{iy})$$ but $$[s\cdot (e^{ix},e^{iy})][t\cdot (e^{ix},e^{iy})] = (e^{i(x+s)},e^{iy})(e^{i(x+t)},e^{iy}) = (e^{i(2x+s+t)},e^{i2y})$$</p></li>
<li><p>For $t\in \mathbb{R}$, the orbit $\mathbb{R}(t)$ is $\{(e^{i(x+t)},e^{iy}):t\in \mathbb{R}\}$. Is this any different from $C\times C$?</p></li>
</ol>
| 5xum | 112,884 | <p>Your inductive step, and in fact your whole inductive hypothesis, is wrong. Don't worry too much about it, induction causes a lot of confusion when you first encounter it. You just have to take it slowly and not jump over too many steps. Always second-guess everything you do, and make sure you have an explanation for every step.</p>
<hr>
<p>Remember, induction is a process you use to prove a statement <em>about all positive integers</em>, i.e. a statement that says "For all $n\in\mathbb N$, the statement $P(n)$ is true".</p>
<p>You prove the statement in two parts:</p>
<ol>
<li>You prove that $P(1)$ is true.</li>
<li>You prove that if $P(n)$ is true, then $P(n+1)$ is also true.</li>
</ol>
<p>So, in your case, you need to ask yourself:</p>
<ol>
<li>What is the statement I want to prove?</li>
<li>Can this statement be rewritten into the form $\forall n\in N: P(n)$?</li>
<li>If it can, what is $P(n)$?</li>
</ol>
<p>Please, first answer these questions (preferably either in comments or as an edit to your question), then I can help you further.</p>
<hr>
<p>OK, so you know what $P(n)$ is. Now, you need to do the two steps of proving by induction. First of all, you need to prove $P(1)$. So, these are the questions you need to answer:</p>
<ol>
<li>What is $P(1)$?</li>
<li>How can I prove $P(1)$?</li>
</ol>
<p>When you answer the first question, the second question should be easy to answer, since you know that $(x+y)^1=1\cdot x+1\cdot y$, and you can read out what $C_0^1$ and $C_1^1$ are.</p>
<hr>
<p>Now, the hard part. You need to assume that $P(n)$ is true, and then prove that $P(n+1)$ is true.</p>
<p>To do that, first write down $P(n)$ and $P(n+1)$. </p>
|
916,705 | <p>I came across an example in Chapter 2 of Dummit and Foote (page 47) which says: $D_6$ is not a subgroup of $D_8$; the former is not even a subset of the latter. I can't understand why it is not a subset of $D_8$.
How do we define one group as a subset of another?</p>
| Kyle Miller | 172,988 | <p>The statement "$D_6$ is not a subgroup of $D_8$" means there is no subgroup $H\subset D_8$ such that $D_6$ is isomorphic to $H$. This is easy to verify: $D_6$ has an element of order $3$, but every element of $D_8$ has order $1$, $2$, or $4$, so no subgroup of $D_8$ can contain an element of order $3$.</p>
<p>It is possible, though not particularly useful, to regard $D_6$ as a subset of $D_8$ set-theoretically.</p>
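The element orders of $D_6$ and $D_8$ (in Dummit and Foote's convention, the dihedral groups of orders $6$ and $8$) can be computed directly from the presentation $r^n = s^2 = 1$, $sr = r^{-1}s$; a small Python sketch:

```python
def dihedral(n):
    """Elements of the dihedral group of order 2n, written as pairs (k, f)
    for r^k s^f, with r^n = s^2 = 1 and s r = r^{-1} s."""
    return [(k, f) for k in range(n) for f in (0, 1)]

def mul(x, y, n):
    (k1, f1), (k2, f2) = x, y
    # r^k1 s^f1 * r^k2 s^f2 = r^(k1 + (-1)^f1 * k2) s^(f1 + f2)
    return ((k1 + (-1) ** f1 * k2) % n, (f1 + f2) % 2)

def order(x, n):
    e, g, m = (0, 0), x, 1
    while g != e:
        g = mul(g, x, n)
        m += 1
    return m

orders_D6 = {order(g, 3) for g in dihedral(3)}  # D_6, symmetries of a triangle
orders_D8 = {order(g, 4) for g in dihedral(4)}  # D_8, symmetries of a square
print(orders_D6, orders_D8)  # {1, 2, 3} and {1, 2, 4}: no order 3 in D_8
```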
|
3,973,025 | <p>Does the structure of the category of models of a given first order theory where the arrows are embeddings tell us anything interesting about the theory itself?</p>
<p>Two models <span class="math-container">$M$</span> and <span class="math-container">$M'$</span> are isomorphic if and only if there exists an embedding <span class="math-container">$f : M \to M'$</span> whose inverse <span class="math-container">$f^{-1}$</span> is an embedding from <span class="math-container">$M'$</span> to <span class="math-container">$M$</span>. Also, embeddings compose. Thus, we have a category.</p>
<p>As a trivial observation, we can rehash the <a href="https://en.wikipedia.org/wiki/L%C3%B6wenheim%E2%80%93Skolem_theorem" rel="nofollow noreferrer">Löwenheim–Skolem theorem</a> in categorical language, if we take some effort to remove the finite models ahead of time.</p>
<p>Picking a satisfiable theory <span class="math-container">$T$</span> with no finite models produces a category <span class="math-container">$C$</span>. If we take <span class="math-container">$C$</span> and produce a new category <span class="math-container">$D$</span> that identifies all the arrows in every homset in <span class="math-container">$C$</span>, we get a category that is isomorphic to the category of infinite cardinals, where the homset <span class="math-container">$\text{Hom}(\kappa, \lambda)$</span> is a singleton if and only if <span class="math-container">$\kappa \le \lambda$</span> and <span class="math-container">$\text{Hom}(\kappa, \lambda)$</span> is empty otherwise.</p>
<p>The above statement is a consequence of the Löwenheim–Skolem theorem; it <em>might</em> be equivalent, though, if "cardinal gaps" are always detectable by the structure of the category.</p>
| Mark Kamsma | 661,457 | <p>We can say <em><strong>a lot</strong></em> about the first-order theory by just looking at its category of models. Let's use <em>elementary embeddings</em> instead of just embeddings. They are the 'right' arrows to take because they respect first-order logic. You could also work with just embeddings to get some sort of positive logic, which is also interesting. But that is harder and you asked about first-order logic.</p>
<p>So just to remind you: for structures <span class="math-container">$M$</span> and <span class="math-container">$N$</span> we call a function <span class="math-container">$f: M \to N$</span> an elementary embedding if for every first-order formula <span class="math-container">$\phi(\bar{x})$</span> and every tuple <span class="math-container">$\bar{a} \in M$</span> we have
<span class="math-container">$$
M \models \phi(\bar{a}) \quad \Longleftrightarrow \quad N \models \phi(f(\bar{a})).
$$</span>
For a first-order theory <span class="math-container">$T$</span> let us denote by <span class="math-container">$\mathbf{Mod}(T)$</span> the category of models of <span class="math-container">$T$</span> with elementary embeddings.</p>
<p>Indeed, we can already detect the Löwenheim-Skolem theorem. For this we have the language of <a href="https://ncatlab.org/nlab/show/accessible+category" rel="noreferrer">accessible categories</a>. Roughly this works as follows: we can give a purely category-theoretic definition of what a <em><span class="math-container">$\lambda$</span>-presentable object</em> is (see <a href="https://ncatlab.org/nlab/show/compact+object" rel="noreferrer">here</a>, although there they call it "compact" instead of "presentable"). In <span class="math-container">$\mathbf{Mod}(T)$</span>, for any <span class="math-container">$\lambda > |T|$</span>, we have that an object <span class="math-container">$M$</span> is <span class="math-container">$\lambda$</span>-presentable iff <span class="math-container">$|M| < \lambda$</span>. So this gives us a notion of size. Then we can formulate the Löwenheim-Skolem theorem roughly as "every object is a colimit of small (i.e. <span class="math-container">$< |T|^+$</span>) objects".</p>
<p>One easy thing that we can see is whether or not <span class="math-container">$T$</span> is complete. Namely, <span class="math-container">$T$</span> is complete exactly when <span class="math-container">$\textbf{Mod}(T)$</span> has the <em>Joint Embedding Property</em> (JEP). JEP means that whenever we have objects <span class="math-container">$M_1$</span> and <span class="math-container">$M_2$</span> that then there is an object <span class="math-container">$N$</span> and arrows <span class="math-container">$M_1 \to N \leftarrow M_2$</span>. It should be a nice exercise to prove that this is equivalent to being complete.</p>
<p>Another thing that we can easily see from <span class="math-container">$\mathbf{Mod}(T)$</span> is whether or not the theory is <span class="math-container">$\kappa$</span>-categorical for any <span class="math-container">$\kappa$</span>. This is again done using the notion of presentability. We can detect the 'size' of an object purely category-theoretically and then we can just ask if all the objects of that 'size' are isomorphic.</p>
<p>Just to contrast, let's look at something that <span class="math-container">$\textbf{Mod}(T)$</span> <em>cannot</em> detect. It cannot detect whether or not <span class="math-container">$T$</span> has quantifier elimination. Indeed, for any theory <span class="math-container">$T$</span> we can add a relation symbol <span class="math-container">$R_\phi(\bar{x})$</span> for every formula in the language of <span class="math-container">$T$</span> and then let <span class="math-container">$T'$</span> extend <span class="math-container">$T$</span> by saying <span class="math-container">$\forall \bar{x}(\phi(\bar{x}) \leftrightarrow R_\phi(\bar{x}))$</span> (we call <span class="math-container">$T'$</span> the <em>Morleyisation</em> of <span class="math-container">$T$</span>). Then <span class="math-container">$T'$</span> has quantifier elimination, but <span class="math-container">$\textbf{Mod}(T)$</span> and <span class="math-container">$\textbf{Mod}(T')$</span> are clearly equivalent categories.</p>
<p>Finally I should say something about classification theory. This is a very deep subject where the aim is to classify theories by how well-behaved they are. The class of <em>stable theories</em> is the most well-known here and the most studied, but there are also more general classes like <em>simple theories</em>. We can detect whether or not a theory is stable or simple (or neither) from its category of models! I think that we can do much more, but this is all very recent research...</p>
|
1,320,659 | <p>Is the ratio test applicable for testing convergence of infinite products?</p>
<p>In other words, consider the sequence $(a_i)_{i=1}^\infty$ of non-zero real numbers.</p>
<p>Also, consider the product $\displaystyle\mathcal P=\prod_{k=1}^\infty a_k$</p>
<p>Are the following statements true?</p>
<blockquote>
<p>$$\lim_{k\to\infty}\left|\frac{a_{k+1}}{a_k}\right|\lt 1\implies \mathcal P\textrm{ converges}\\ \lim_{k\to\infty}\left|\frac{a_{k+1}}{a_k}\right|\gt 1\implies \mathcal P\textrm{ diverges}$$</p>
</blockquote>
| Ant | 241,144 | <p>A simple way to test if an infinite product converges or diverges is to test the related sum:</p>
<p>$$P=\prod_{k=1}^\infty {(a_k)}$$</p>
<p>can be turned into:</p>
<p>$$\ln{(P)}=S = \sum_{k=1}^\infty {(\ln{a_k})}$$</p>
<p>Now (assuming each $a_k>0$, so that the logarithms are defined), the product $P$ converges exactly when the sum $S$ converges: if $S$ diverges, then $P$ diverges as well.</p>
<p>You can now use the appropriate test on $S$ (whether it be $n$th term test, ratio test, etc.).</p>
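As a numerical illustration of the log-sum correspondence (using the classical identity $\prod_{k\ge1}(1+1/k^2)=\sinh\pi/\pi$ purely as a reference value); a small Python sketch:

```python
import math

# The product  prod_{k>=1} (1 + 1/k^2)  converges; its value is the
# classical  sinh(pi)/pi, used here only as a reference number.
N = 100_000
log_sum = sum(math.log(1.0 + 1.0 / (k * k)) for k in range(1, N + 1))
partial_product = math.exp(log_sum)

print(partial_product, math.sinh(math.pi) / math.pi)
```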
|
2,790,910 | <blockquote>
<p>Let <span class="math-container">$X \sim N (0, 1)$</span> and <span class="math-container">$Y ∼ N (0, 1)$</span> be two independent random variables, and define <span class="math-container">$Z = \min(X, Y )$</span>. Prove that <span class="math-container">$Z^2\sim\chi^2(1),$</span> i.e. Chi-Squared with degree of freedom <span class="math-container">$1.$</span></p>
</blockquote>
<p>I found the density functions of <span class="math-container">$X$</span> and <span class="math-container">$Y,$</span> as they are normally distributed. How would one use the fact that <span class="math-container">$Z = \min(X,Y)$</span> to answer the question? Thanks!</p>
| Mike Earnest | 177,399 | <p>Let $\Phi$ be the standard normal cdf. First, find the cdf of $Z^2$. For any $z>0$,</p>
<p>$$
\begin{align}
P(Z^2\le z)
&=P(Z>-\sqrt{z})-P(Z>\sqrt{z})\\
&=P(X>-\sqrt{z})P(Y>-\sqrt{z})-P(X>\sqrt{z})P(Y>\sqrt{z})\\
&=(1-\Phi(-\sqrt{z}))^2-(1-\Phi(\sqrt{z}))^2.
\\&=(\Phi(\sqrt{z}))^2-(1-\Phi(\sqrt{z}))^2.
\\&=2\Phi(\sqrt z)-1
\end{align}
$$</p>
<p>On the other hand,
$$
P(X^2\le z)=P(X\le \sqrt{z})-P(X<-\sqrt{z})=\Phi(\sqrt{z})-\Phi(-\sqrt{z})=\Phi(\sqrt{z})-(1-\Phi(\sqrt{z}))
$$
As you can see, $P(X^2\le z)=P(Z^2\le z)$ for all $z$, QED.</p>
|
4,115,308 | <p>Hi I have a problem where I need to solve the following set of equations:</p>
<p><span class="math-container">$$
v = U u
$$</span></p>
<p><span class="math-container">$$
u = 1 -Uv
$$</span></p>
<p><span class="math-container">$$
U^2 = u^2 + v^2
$$</span></p>
<p>I have tried subbing <span class="math-container">$u$</span> and <span class="math-container">$v$</span> into the expression for <span class="math-container">$U^2$</span> but it seems to get very messy very quickly.</p>
<p>Any help solving for <span class="math-container">$u$</span>,<span class="math-container">$v$</span> and <span class="math-container">$U$</span> would be greatly appreciated.</p>
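Assuming $u$, $v$, $U$ are real scalars and we seek a solution with $U>0$ (an assumption, since the question does not say), substitution collapses the system to a quadratic in $U^2$; a small Python sketch:

```python
import math

# From v = U*u and u = 1 - U*v:  u = 1 - U^2 * u, so u = 1/(1 + U^2)
# and v = U/(1 + U^2).  Then U^2 = u^2 + v^2 = 1/(1 + U^2), i.e.
# U^4 + U^2 - 1 = 0, giving U^2 = (sqrt(5) - 1)/2 for a real solution.
U2 = (math.sqrt(5.0) - 1.0) / 2.0
U = math.sqrt(U2)
u = 1.0 / (1.0 + U2)
v = U / (1.0 + U2)

# Residuals of the three original equations (all should be ~0):
print(abs(v - U * u), abs(u - (1.0 - U * v)), abs(U2 - (u * u + v * v)))
```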
| Ted Shifrin | 71,348 | <p>You're wrong. Let <span class="math-container">$B$</span> be the <span class="math-container">$n\times k$</span> matrix consisting of those columns, and let <span class="math-container">$B'$</span> be the <span class="math-container">$k\times k$</span> submatrix of <span class="math-container">$B$</span> with nonzero determinant.</p>
<p>Consider the equation <span class="math-container">$Bx=0$</span> for <span class="math-container">$x\in\Bbb R^k$</span>. Then note that <span class="math-container">$B'x=0$</span> holds (just consider the appropriate <span class="math-container">$k$</span> components or rows of the original equation). Since <span class="math-container">$B'$</span> is nonsingular, the only solution is <span class="math-container">$x=0$</span>, and so the only solution of <span class="math-container">$Bx=0$</span> is likewise the trivial solution; this says that the columns of <span class="math-container">$B$</span> are, indeed, linearly independent.</p>
|
107,425 | <p>Problem 1:</p>
<p>If $n$ people are seated in a random manner in a row containing $2n$ seats, what is the probability that no two people will occupy adjacent seats?</p>
<p>I know that the probability that $n$ people are seated in $2n$ rows is ${2n\choose n}$</p>
<p>I also know that the answer to this problem is $n+1\over {2n\choose n}$</p>
<p>Why does $n+1$ solidify that the people will not be sitting next to each other? I somewhat understand it, but I would appreciate a more logical explanation.</p>
<p>Problem 2:</p>
<p>What is the probability that a group of $50$ senators selected at random will contain one senator from each state?</p>
<p>So obviously the set of $50$ senators is the combinatorial ${100\choose 50}$ and we want $1$ senator from each state so that's ${2\choose 1}$.</p>
<p>So we have ${2\choose 1}\over {100\choose 50}$, but the ${2\choose 1}$ should be ${2\choose 1}^{50}$ for all 50 states. Why do we have to put the numerator to the 50th power? Shouldn't ${2\choose 1}$ suffice that each senator should be from $1$ state?</p>
<p>Thanks</p>
| Dev Kinger | 993,349 | <p>Let us consider <span class="math-container">$n=3$</span> people and <span class="math-container">$2n=6$</span> seats. Here is how we can arrange them so that no two people are adjacent:</p>
<p>Seat the <span class="math-container">$3$</span> people somewhere among the <span class="math-container">$2n$</span> seats:</p>
<p><span class="math-container">$\#P_{1}\#P_{2}\#P_{3}\#$</span></p>
<p>Now we have to distribute the remaining <span class="math-container">$2n-n=3$</span> empty chairs into the gaps marked <span class="math-container">$\#$</span>.</p>
<p>Note that each of the <span class="math-container">$n-1=2$</span> gaps between the three people must receive at least one empty chair, which uses up <span class="math-container">$n-1=2$</span> of the empty chairs.</p>
<p>The one remaining empty chair can go into any of the <span class="math-container">$n+1=4$</span> gaps. Therefore the set of occupied seats can be chosen in <span class="math-container">$4$</span> ways.</p>
<p>Lastly, the three people can be permuted among the chosen seats in <span class="math-container">$3!$</span> ways, out of <span class="math-container">$\binom{6}{3}\cdot 3!$</span> equally likely seatings in total.</p>
<p>Probability <span class="math-container">$= \dfrac{4 \times 3!}{\binom{6}{3}\times 3!} = \dfrac{4}{\binom{6}{3}}$</span></p>
<p>Generalize the idea:
<span class="math-container">$\dfrac{(n+1) \times n!}{\binom{2n}{n}\times n!} = \dfrac{n+1}{\binom{2n}{n}}$</span></p>
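<p>As a quick cross-check (my addition, standard library only): for <span class="math-container">$n=3$</span>, enumerating all seat choices confirms that exactly <span class="math-container">$n+1=4$</span> of the <span class="math-container">$\binom{6}{3}=20$</span> choices have no two people adjacent.</p>

```python
from itertools import combinations

# Exhaustive check for n = 3 people in 2n = 6 seats: among all C(6,3)
# choices of occupied seats, count those with no two chosen seats adjacent.
n = 3
choices = list(combinations(range(2 * n), n))
non_adjacent = [c for c in choices
                if all(b - a > 1 for a, b in zip(c, c[1:]))]

print(len(choices), len(non_adjacent))  # 20 choices, 4 of them non-adjacent
```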
|
2,709,921 | <p>My attempt is:
$$\lim_{h \to 0} \frac{f((2k+1)π/2+h) - f((2k+1)π/2)}{h} = \lim_{h \to 0} \frac{\sin^{-1}(\sin((2k+1)π/2+h))-\sin^{-1}(\sin((2k+1)π/2))}{h} =
\lim_{h \to 0} \frac{kπ+π/2+h-(2k+1)π/2}{h} =
1$$
Am I right up to this point?</p>
| Eric Wofsey | 86,856 | <p>No, this is not true. For instance, let $k$ be your favorite field, let $R$ be the subring $k[x,y,x^{-1}y,x^{-2}y,x^{-3}y,\dots]\subset k(x,y)$ and let $I$ be the ideal generated by $x$. Then $I^\infty$ is the ideal generated by $x^{-n}y$ for all $n\in\mathbb{N}$. In particular, $y\in I^\infty$, but $y\not\in (I^\infty)^2$ since every element of $(I^\infty)^2$ is divisible by $x^{-n}y^2$ for some $n$.</p>
<p>In fact, it is not even necessarily true that $I\cdot I^\infty=I^\infty$. For instance, let $R=k[y,x_1,x_1^{-1}y,x_2,x_2^{-2}y,x_3,x_3^{-3}y,\dots]$ and let $I$ be the ideal generated by all the $x_n$'s. Then $y\in I^\infty$ since it is divisible by $x_n^n$ for each $n$, but $y\not\in I\cdot I^\infty$, essentially because once you divide $y$ by a single $x_n$ you lose divisibility by all the others (this takes some work to turn into a rigorous proof, though).</p>
<hr>
<p>You can even get a Noetherian counterexample. Let $R=k[x,y]/(xy-y)$ and let $I=(x)$. Then $I^\infty=(y)$, which is not equal to its square. In a Noetherian (commutative) ring though we must at least have $I\cdot I^\infty=I^\infty$, by the Artin-Rees lemma.</p>
|
2,709,921 | <p>My attempt is:
$$\lim_{h \to 0} \frac{f((2k+1)π/2+h) - f((2k+1)π/2)}{h} = \lim_{h \to 0} \frac{\sin^{-1}(\sin((2k+1)π/2+h))-\sin^{-1}(\sin((2k+1)π/2))}{h} =
\lim_{h \to 0} \frac{kπ+π/2+h-(2k+1)π/2}{h} =
1$$
Am I right up to this point?</p>
| tomasz | 30,222 | <p>For a more "natural" example, if you don't mind noncommutative rings, pick a field $k$ and consider the non-commutative ring $k[X^{\omega_1}]$ of polynomials over $k$ with exponents in $\omega_1$ (so that for any ordinals $\alpha$ and $\beta$, you have $X^\alpha\cdot X^\beta=X^{\alpha+\beta}$).</p>
<p>This is a local ring with the maximal ideal $\mathfrak m=(X)$. Note that $X^\omega=(X^1)^n\cdot X^\omega=X^{n+\omega}=X^\omega$ for all $n$, so $\mathfrak m^\omega:=\bigcap_{n\in \omega} \mathfrak m^n=(X^\omega)$, which is clearly not an idempotent (in fact, even $\mathfrak m^\omega \cdot \mathfrak m\neq \mathfrak m^\omega$).</p>
<p>In this manner, you can find ideals which take arbitrarily long to stabilise.</p>
|
191,262 | <p>I have to fit a Cos^2 function to data I measured. The function is <span class="math-container">$a \cos^2(\frac{bx\pi}{180}+c)+d $</span> and I tried the FindFit and Linear Model Function. I have 4 datasets which I have to fit; the first one worked. The other three only yielded unusable fits. I am pretty new to Mathematica, so I hope it's just a newbie mistake which is easy to fix.</p>
<p>Here's a minimal example: </p>
<pre><code>data45 = Import["data45.txt", "table"]
{{0, 132}, {20, 279.5}, {40, 289}, {60, 312}, {80, 307}, {100,
173}, {120, 92}, {140, 25}, {160, 44.5}, {180, 109.5}, {200,
230.5}, {220, 305}, {240, 339}, {260, 246.5}, {280, 181.5}, {300,
92.5}, {320, 32}, {340, 43}}
FindFit[data45, a Cos[(b x Pi)/180 + c]^2 + d, {a, b, c, d}, x]
{a -> 45.2733, b -> 0.886263, c -> 39.01, d -> 157.974}
</code></pre>
<p>This yields the following fit of the data: </p>
<p><a href="https://i.stack.imgur.com/ZXm5Q.png" rel="noreferrer"><img src="https://i.stack.imgur.com/ZXm5Q.png" alt="Fit"></a></p>
<p>Which is not usable. </p>
<p>I would really appreciate some help! </p>
<p>Greetings</p>
| Ulrich Neumann | 53,677 | <p>You can solve the problem with the FindFit option <code>Method -> "NMinimize"</code>:</p>
<pre><code>FindFit[data45, a Cos[(b x Pi)/180 + c]^2 + d, {a, b, c, d}, x,
 Method -> "NMinimize"]
</code></pre>
<p>(<em>{a -> 300.499, b -> 0.990388, c -> 65.0408, d -> 28.431}</em>)</p>
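<p>As a cross-check outside Mathematica (my addition, standard library only), this sketch compares the sum of squared residuals of the two parameter sets over the data from the question; the <code>NMinimize</code> fit is far better:</p>

```python
import math

# data45 from the question: (angle in degrees, measured value)
data = [(0, 132), (20, 279.5), (40, 289), (60, 312), (80, 307),
        (100, 173), (120, 92), (140, 25), (160, 44.5), (180, 109.5),
        (200, 230.5), (220, 305), (240, 339), (260, 246.5),
        (280, 181.5), (300, 92.5), (320, 32), (340, 43)]

def model(x, a, b, c, d):
    return a * math.cos(b * x * math.pi / 180 + c) ** 2 + d

def sse(params):
    """Sum of squared residuals of the model over the data."""
    return sum((y - model(x, *params)) ** 2 for x, y in data)

plain_fit = (45.2733, 0.886263, 39.01, 157.974)   # plain FindFit result
nmin_fit = (300.499, 0.990388, 65.0408, 28.431)   # Method -> "NMinimize"

print(sse(plain_fit) > 10 * sse(nmin_fit))
```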
|
1,789,033 | <p>Let $X$ be an irreducible affine variety. Let $U \subset X$ be a nonempty open subset. Show that dim $U=$ dim $X$.</p>
<p>Since $U \subset X$, dim $U \leq$ dim $X$ is immediate. I also know that the result is not true if $X$ is any irreducible topological space, so somehow the properties of an affine variety have to come in. I have tried assuming $U=X \setminus V(f_1,...,f_k)$ but I don't know how to continue on.</p>
<p>Any help is appreciated!</p>
| paf | 333,517 | <p>You should note that any nonempty open subset of an irreducible variety (or topological space) is dense : <a href="https://en.wikipedia.org/wiki/Hyperconnected_space" rel="nofollow">https://en.wikipedia.org/wiki/Hyperconnected_space</a>.
Then you can use the definition of the dimension to conclude.</p>
|
3,720,856 | <p><strong>Question:</strong>
Prove that <span class="math-container">$\sin(nx) \cos((n+1)x)-\sin((n-1)x)\cos(nx) = \sin(x) \cos(2nx)$</span> for <span class="math-container">$n \in \mathbb{R}$</span>.</p>
<p><strong>My attempts:</strong></p>
<blockquote>
<p>I initially began messing around with the product to sum identities, but I couldn't find any way to actually use them.<br />
I also tried compound angles to expand the expression, but it became too difficult to work with.</p>
</blockquote>
<p>Any help or guidance would be greatly appreciated</p>
| Integrand | 207,050 | <p>Use <span class="math-container">$\sin(a)\cos(b)=\frac{1}{2}(\sin(a-b)+\sin(a+b))$</span>:
<span class="math-container">$$
\sin(nx) \cos((n+1)x)-\sin((n-1)x)\cos(nx)
$$</span>
<span class="math-container">$$
=\frac{1}{2}\left(\sin(-x)+\sin((2n+1)x)-\sin(-x)-\sin((2n-1)x) \right)
$$</span>
<span class="math-container">$$
=\frac{1}{2}\left(\sin((2n+1)x)-\sin((2n-1)x) \right)
$$</span>Now use <span class="math-container">$\sin(a+b)=\sin(a)\cos(b)+\sin(b)\cos(a)$</span>:
<span class="math-container">$$
=\frac{1}{2}\left(\sin(2nx)\cos(x)
+\sin(x)\cos(2nx)-\sin(2nx)\cos(-x)-\sin(-x)\cos(2nx) \right)
$$</span>Now use the parity of sine and cosine and you're done.
<span class="math-container">$$
=\frac{1}{2}\left(\sin(2nx)\cos(x)
+\sin(x)\cos(2nx)-\sin(2nx)\cos(x)+\sin(x)\cos(2nx) \right)
$$</span>
<span class="math-container">$$
=\sin(x)\cos(2nx)
$$</span></p>
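<p>A quick numeric spot-check of the identity (my addition; standard library only), sampling integer and non-integer values of <span class="math-container">$n$</span>:</p>

```python
import math

# Check sin(nx)cos((n+1)x) - sin((n-1)x)cos(nx) == sin(x)cos(2nx)
# at a handful of (n, x) samples, including non-integer n.
for n in (1, 2, 3.5, -2.25):
    for x in (0.1, 0.7, 1.3, 2.9):
        lhs = math.sin(n*x) * math.cos((n+1)*x) - math.sin((n-1)*x) * math.cos(n*x)
        rhs = math.sin(x) * math.cos(2*n*x)
        assert abs(lhs - rhs) < 1e-12, (n, x, lhs, rhs)

print("identity holds at all sampled points")
```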
|
3,768,099 | <blockquote>
<p>How does the quotient ring <span class="math-container">$\Bbb Z[x]/(x^2-x,4x+2)$</span> look like?</p>
</blockquote>
<p>Normally to solve this you play around with generators until you get something you can work with. I was unsuccessful in reducing it to something neat.</p>
<p>I have proven that <span class="math-container">$6$</span> is the smallest integer in the ideal. Thus we get <span class="math-container">$\Bbb Z[x]/(6,x^2-x,4x+2)$</span>. I don't see much further simplification. I would want to get rid of <span class="math-container">$x^2$</span> but I do not see how.</p>
| GreginGre | 447,764 | <p>We have <span class="math-container">$8(x^2-x)-2x(4x+2)=-12x=-3(4x+2)+6$</span>, so <span class="math-container">$6\in (x^2-x,4x+2)$</span> ,and <span class="math-container">$(x^2-x,4x+2)=(6,x^2-x,4x+2)$</span>.</p>
<p>Hence, your ring <span class="math-container">$R$</span> is isomorphic to <span class="math-container">$\mathbb{Z}/6 [x]/(x^2-x,-\bar{2}x+\bar{2})$</span>.</p>
<p>The map given by the Chinese remainder theorem yields an isomorphism <span class="math-container">$R\simeq \mathbb{F}_2[x]/(x^2-x)\times\mathbb{F}_3[x]/(x^2-x,x-\bar{1})=\mathbb{F}_2[x]/(x^2-x)\times\mathbb{F}_3[x]/(x-\bar{1})$</span>, which finally yields <span class="math-container">$$R\simeq\mathbb{F}_2\times\mathbb{F}_2\times\mathbb{F}_3.$$</span></p>
<p>If we follow carefully the proof, we get an explicit isomorphism.</p>
<p>Let <span class="math-container">$f$</span> be the ring morphism <span class="math-container">$f:R\to P\in\mathbb{Z}[x]\mapsto ([P(0)]_2,[P(1)]_2, [P(1)]_3)\in\mathbb{F}_2\times\mathbb{F}_2\times \mathbb{F}_3$</span>.</p>
<p>It is easy to see <span class="math-container">$(x^2-x,4x+2)$</span> lies in the kernel of <span class="math-container">$f$</span>. The induced map is then the desired isomorphism.</p>
<p>We can also prove this last fact directly, just to double check that everything is correct.</p>
<p><strong>Claim.</strong> <span class="math-container">$f$</span> is surjective.</p>
<p>Given <span class="math-container">$a,b,c\in\mathbb{Z}$</span>, we need to find <span class="math-container">$P\in\mathbb{Z}[x]$</span> such that <span class="math-container">$P(0)\equiv a \ [2], P(1)\equiv b \ [2]$</span> and <span class="math-container">$P(1) \equiv c \ [3]$</span>.</p>
<p>We can try <span class="math-container">$P=ux+v$</span>. We can choose <span class="math-container">$v=a$</span>, <span class="math-container">$u=(b-a)+2k$</span>, so the two first equations are satisfied. Now we want <span class="math-container">$(b-a)+2k+a\equiv c \ [3]$</span>, and we take <span class="math-container">$k=b-c$</span>. To sum up <span class="math-container">$P=(3b-a-2c)x+a$</span> does the job.</p>
<p><strong>Claim.</strong> <span class="math-container">$\ker(f)=(x^2-x,4x+2)$</span>.</p>
<p>As we said before, one inclusion is clear, so let <span class="math-container">$P\in\mathbb{Z}[x]$</span> such that <span class="math-container">$f(P)$</span> is trivial. We want to prove that <span class="math-container">$P\in (x^2-x,4x+2)$</span>. Dividing by <span class="math-container">$x^2-x$</span>, and replacing <span class="math-container">$P$</span> by the corresponding remainder, one may assume that <span class="math-container">$P=ux+v$</span>.
By assumption, <span class="math-container">$v$</span> is even, and <span class="math-container">$u+v$</span> is a multiple of <span class="math-container">$2$</span> and <span class="math-container">$3$</span>, so <span class="math-container">$v=2m$</span> and <span class="math-container">$u+v=6n$</span>, that is <span class="math-container">$u=6n-v=6n-2m$</span>. Hence <span class="math-container">$P=-2mx+6nx+2m=m(2-2x)+6nx$</span>.
Now <span class="math-container">$6x$</span> lies in our ideal (since <span class="math-container">$6$</span> does), and <span class="math-container">$2-2x=4x+2-6x$</span> also lies in our ideal, so we are done.</p>
<p>Now apply the first isomorphism theorem.</p>
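<p>The surjectivity step lends itself to a mechanical check; this short Python sketch (my addition, standard library only) verifies that <span class="math-container">$P=(3b-a-2c)x+a$</span> satisfies the three congruences for every target residue triple:</p>

```python
# For P(x) = (3b - a - 2c) x + a, verify
#   P(0) ≡ a (mod 2),  P(1) ≡ b (mod 2),  P(1) ≡ c (mod 3)
# for all residues a, b in {0,1} and c in {0,1,2}.
for a in range(2):
    for b in range(2):
        for c in range(3):
            u, v = 3*b - a - 2*c, a          # P = u*x + v
            P0, P1 = v, u + v
            assert P0 % 2 == a % 2
            assert P1 % 2 == b % 2
            assert P1 % 3 == c % 3

print("f hits every residue target in F2 x F2 x F3")
```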
|
3,768,099 | <blockquote>
<p>How does the quotient ring <span class="math-container">$\Bbb Z[x]/(x^2-x,4x+2)$</span> look like?</p>
</blockquote>
<p>Normally to solve this you play around with generators until you get something you can work with. I was unsuccessful in reducing it to something neat.</p>
<p>I have proven that <span class="math-container">$6$</span> is the smallest integer in the ideal. Thus we get <span class="math-container">$\Bbb Z[x]/(6,x^2-x,4x+2)$</span> I dont see much further simplification. I would want to get rid of <span class="math-container">$x^2$</span> but i do not see how.</p>
| user26857 | 121,097 | <p>We have
<span class="math-container">$$(x^2-x,4x+2)=(x,4x+2)(x-1,4x+2)=(2,x)(6,x-1).$$</span></p>
<p>The ideals <span class="math-container">$(2,x)$</span> and <span class="math-container">$(6,x-1)$</span> are comaximal, and by CRT <span class="math-container">$$\mathbb Z[x]/(x^2-x,4x+2)\simeq\mathbb Z[x]/(2,x)\times\mathbb Z[x]/(6,x-1)\simeq\mathbb Z/2\mathbb Z\times\mathbb Z/6\mathbb Z.$$</span></p>
|
232,387 | <p>A discrete random variable $X$ of values in $\mathbb N$ verifies the property that
$$P(X=k)=\cfrac 23 (k+1)P(X=k+1)$$
What is the distribution of $X$?</p>
<p>I found that
$$P(X\ge 0)=\sum_{k=0}^\infty P(X=k)=\sum_{k=0}^\infty\cfrac 23 (k+1)P(X=k+1)=\cfrac 23\sum_{k=1}^\infty kP(X=k)=\cfrac 23\text E(X)=1$$
so $\text E(X) = 1.5$</p>
<p>I also found that $$P(X=k)=\cfrac{3^k}{2^k\cdot k!}\cdot P(X=0)$$
That is the only thing I could get out of the given property, I couldn't find the expression for $P(X=k)$ which is the actual question.</p>
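<p>For what it's worth, the two findings above (the recurrence solution <span class="math-container">$P(X=k)=\cfrac{3^k}{2^k\cdot k!}P(X=0)$</span> and normalization) force <span class="math-container">$P(X=0)=e^{-3/2}$</span>, i.e. a Poisson distribution with mean <span class="math-container">$3/2$</span>. A quick standard-library check (my addition) that this pmf satisfies the given recurrence:</p>

```python
import math

# pmf of Poisson(lam) with lam = 3/2, suggested by P(X=k) = (3/2)^k / k! * P(X=0)
lam = 1.5
def p(k):
    return math.exp(-lam) * lam**k / math.factorial(k)

# Check the recurrence P(X=k) = (2/3) (k+1) P(X=k+1) and that E(X) = 3/2.
for k in range(30):
    assert abs(p(k) - (2/3) * (k+1) * p(k+1)) < 1e-12

mean = sum(k * p(k) for k in range(100))
assert abs(mean - 1.5) < 1e-9
print("Poisson(3/2) satisfies the recurrence; E(X) = 1.5")
```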
| Ross Millikan | 1,827 | <p>No, take the vectors $(0,1)$ and $(1,0)$. The average is $(\frac 12, \frac 12)$. For an even simpler example, try $(1,0)$ and $(-1,0)$</p>
|
199,889 | <p>What are applications of the theory of Berkovich analytic spaces? The analytification $X \mapsto X^{\mathrm{an}}$</p>
| Adam Epstein | 15,819 | <p>Jan Kiwi (Duke Journal) used Berkovich spaces to give the first proof of my conjecture that any sequence of quadratic rational maps which diverges in moduli space (of Mobius conjugacy classes) has at most two "rescaling limits". </p>
|
1,873,825 | <p>I can see why linear regression is linear, i.e., because it is represented by a line, but what does regression have to do with the term as a whole? </p>
<p>What is the meaning this word contributes to the term?</p>
| Snufsan | 122,989 | <p>The idea of linear regression is to infer, from a dataset, the dependence of some random variable on another variable. What the term regression means in this context is that from the dataset we regress, that is, work backwards, to the kind of dependency that produced that data.</p>
|
530 | <p>Tau ($\tau = 2 \pi$) has more merits in its application, but pi is the established standard in industry and education. Is the trade-off of teach-ability of circle concepts worth the subsequent confusion due to pi's omnipresence?</p>
<p>How can both be most effectively taught in order to maximize student's understanding of the use of the circle constant?</p>
| EuYu | 97 | <p>I feel that it is perhaps a little irresponsible to teach $\tau$ instead of $\pi$. As a first introduction, it is the norm which should be taught: teaching a rare alternative to $\pi$ only serves to confuse students, especially when almost all available resources use $\pi$ instead of $\tau$. Imagine a student's confusion when they see $\tau$ in class and $\pi$ everywhere else.</p>
<p>I personally don't even know why the $\tau$ vs $\pi$ topic is even debatable; any minuscule advantage (if there even is an advantage) in using $\tau$ over $\pi$ is heavily offset by the work needed to establish an entirely new cultural norm.</p>
|
240,038 | <p>How can I compute this integral:
$$I=\int\frac{1-(au+1)\exp(-au)}{u^{2}}du$$
Can anyone help me in this case?</p>
| DonAntonio | 31,254 | <p>Writing $x$ in place of $u$, you have</p>
<p>$$I=\int\frac{1-axe^{-ax}-e^{-ax}}{x^2}dx=\int\frac{dx}{x^2}-a\int\frac{e^{-ax}}{x}dx-\int\frac{e^{-ax}}{x^2}dx$$</p>
<p>integrating by parts in the third integral:</p>
<p>$$u=e^{-ax}\;,\;u'=-ae^{-ax}\;\;;\;\;v'=\frac{1}{x^2}\;,\;v=-\frac{1}{x}$$</p>
<p>we get</p>
<p>$$\int\frac{e^{-ax}}{x^2}dx=-\frac{e^{-ax}}{x}-a\int\frac{e^{-ax}}{x}dx$$</p>
<p>so that we get:</p>
<p>$$I=\frac{e^{-ax}-1}{x}+C$$</p>
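<p>One can sanity-check this result numerically (a sketch I added, with an arbitrary value of <span class="math-container">$a$</span>): the finite-difference derivative of <span class="math-container">$(e^{-ax}-1)/x$</span> should reproduce the integrand:</p>

```python
import math

a = 1.7  # arbitrary test value for the parameter

def F(x):
    """Proposed antiderivative (e^{-ax} - 1) / x."""
    return (math.exp(-a*x) - 1) / x

def integrand(x):
    return (1 - (a*x + 1) * math.exp(-a*x)) / x**2

# Central finite difference of F vs the integrand at a few points.
h = 1e-6
for x in (0.3, 1.0, 2.5, 7.0):
    dF = (F(x + h) - F(x - h)) / (2 * h)
    assert abs(dF - integrand(x)) < 1e-6

print("d/dx [(e^{-ax}-1)/x] matches the integrand")
```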
|
2,922,290 | <p>Lemma 2.2</p>
<p>Let $(X,d)$ be a compact metric space and suppose that $U$ is a clopen subset of $X$. </p>
<p>There exists $\delta >0 $ s.t $d(x,y) \geq \delta $ $\forall x \in U $ and $y\in X-U $</p>
<p>Proof attempt</p>
<p>Suppose that it is not true. Then for all $\delta >0$ there exist $x \in U$ and $y\in X-U$ s.t. $d(x,y) < \delta$.</p>
<p>Consider pairs with $d(x_i,y_i) <\frac{1}{i}$. Pick a sub-sequence along which $x_i$ converges to some $x$, and from it a further sub-sequence along which $y_i$ converges to some $y$. This pair $x,y$ then satisfies $d(x,y) <\delta$ for every $\delta >0$, hence $x=y$. But this is a contradiction, since $x\in U$ implies $x\notin X-U$.</p>
<p>Can anyone give a constructive proof? (ie not with negation)</p>
| Henno Brandsma | 4,280 | <p>Note that $U$ is itself compact (being closed in $X$). Also $\{U\}$ is an open cover of $U$, as $U$ is open. By compactness of $U$ this has a <a href="https://en.wikipedia.org/wiki/Lebesgue%27s_number_lemma" rel="nofollow noreferrer">Lebesgue number</a> $\delta>0$. This means that for all $x \in U$, $B(x,\delta)$ is a subset of $U$, and this implies that $d(x, y) \ge \delta$ for all $x \in U, y \in X\setminus U$.</p>
<p>Alternatively, define $\delta$ to be the minimum of the continuous function $f: U \to \mathbb{R}$ given by $f(x) = d(x,X\setminus U)$, which only assumes values $>0$.</p>
|
2,122,149 | <p>Using the standard topology on $\mathbb{R}^d$, interior of a subset $S$ of topological space $M$ is defined as the set of all elements of $S$ except the boundary of $S$. </p>
<p>However, I only understand the boundary in this sense to be defined with the standard topology on $\mathbb{R}^d$, so how would we define boundary and "interior" in topological spaces in general?</p>
| 5xum | 112,884 | <p>By definition, the interior of a set $S$ is defined as the largest open set contained in $S$.</p>
<p>For this definition to "work", all you need is a topology, there is no need for it to be the standard topology on $\mathbb R^d$.</p>
<hr>
<p>Equivalently, you can define the interior of $S$ (denoted $\mathrm{int}(S)$) as:</p>
<ol>
<li>The largest open set (by the relation of inclusion) contained in $S$. This means that $\mathrm{int}(S)$ is defined by
<ul>
<li>$\mathrm{int}(S)\subseteq S$ </li>
<li>$\mathrm{int}(S)$ is open</li>
<li>if $O$ is open and $O\subseteq S$, then $O\subseteq \mathrm{int}(S)$.</li>
</ul></li>
<li>The union of all open sets contained in $S$, i.e. $$\mathrm{int}(S)=\bigcup_{O\text{ is open}\\ O\subseteq S} O$$</li>
<li>The set of all interior points, i.e. $$\mathrm{int}(S)=\{x\in S: \exists \text{ open set }O: O\subseteq S \land x\in O\}$$</li>
</ol>
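<p>To make this concrete, here is a small standard-library Python sketch (my addition; the four-point topology below is just an illustrative example I made up) checking that characterizations 2 and 3 agree on every subset:</p>

```python
from itertools import chain, combinations

# A small topology on X = {1, 2, 3, 4}: contains {} and X and is
# closed under unions and intersections (easy to verify by hand).
X = frozenset({1, 2, 3, 4})
topology = {frozenset(), frozenset({1}), frozenset({1, 2}),
            frozenset({3}), frozenset({1, 3}), frozenset({1, 2, 3}), X}

def interior_union(S):
    """Definition 2: union of all open sets contained in S."""
    return frozenset(chain.from_iterable(O for O in topology if O <= S))

def interior_points(S):
    """Definition 3: points of S lying in some open O contained in S."""
    return frozenset(x for x in S
                     if any(x in O and O <= S for O in topology))

for S in [frozenset(c) for r in range(5) for c in combinations(X, r)]:
    assert interior_union(S) == interior_points(S)
    assert interior_union(S) in topology        # the interior is open

print("both characterizations agree on every subset")
```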
|
560,792 | <p>Here it is, I know the answer is 49! (I found it by trial and error!) I just need to know how I can show this in Working!</p>
<p>Thanks,
Alex</p>
| Dan Shved | 47,560 | <p>Start with two equations:
$$
\begin{align}
\frac{a_1 + a_2 + \ldots + a_7}{7} &= 65 \\
\frac{a_1 + a_2 + \ldots + a_7 + a_8}{8} &= 63.
\end{align}
$$
Now multiply the first one by $7$, the second one by $8$, and subtract. You get $a_8 = 8 \cdot 63 - 7 \cdot 65 = 49$.</p>
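<p>The same multiply-and-subtract trick works for any such problem; a one-line check of the arithmetic (my addition):</p>

```python
# Total of the first 7 terms is 7 * 65; total of all 8 terms is 8 * 63.
a8 = 8 * 63 - 7 * 65
print(a8)  # 49
```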
|
4,068,493 | <p><a href="https://i.stack.imgur.com/Rzj9N.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Rzj9N.jpg" alt="triangle diagram" /></a></p>
<p>Given that:</p>
<ol>
<li>[BM) is the bisector of the angle ABC.</li>
<li>(BM) and (AN) are parallel straight lines.</li>
</ol>
<p>I am trying to prove that the triangle ANB is an isosceles triangle with a main vertex B using Thales Theorem.</p>
<p>Here's what I have done:</p>
<p>Since BM and AN are parallel, then by Thales' Theorem:
<span class="math-container">$MA/MC = NB/BC = BM/AN$</span></p>
<p>I don't know where to go from here.. Any help is appreciated. Thank you.</p>
| Joffan | 206,402 | <p>Not using Thales's intercept theorem, but:</p>
<p><span class="math-container">$AN \parallel BM$</span>, so from <span class="math-container">$AB$</span> crossing <span class="math-container">$\color{#092}{\angle NAB} = \color{#02E}{\angle ABM}$</span> and from <span class="math-container">$CN$</span> crossing <span class="math-container">$\color{#092}{\angle ANB} = \color{#02E}{\angle MBC}$</span>.</p>
<p>Since <span class="math-container">$BM$</span> bisects <span class="math-container">$\angle ABC$</span>, <span class="math-container">$\color{#02E}{\angle ABM = \angle MBC}$</span> so <span class="math-container">$\color{#092}{\angle NAB =\angle ANB }$</span>, giving <span class="math-container">$\triangle BAN$</span> isosceles.</p>
|
14,167 | <p>I randomly pick a natural number <em>n</em>. Assuming that I would have picked each number with the same probability, what was the probability for me to pick <em>n</em> before I did it?</p>
| Shai Covo | 2,810 | <p>If you want to choose a natural number from the set $\lbrace 1,2,3,\ldots \rbrace$ of all natural numbers, then user3533 is right. But if you pick a natural number from a finite set, say $A = \lbrace 1,2,\ldots,N \rbrace$, each number with the same probability, then this probability must be the reciprocal of the number of elements in that set. So, if $A = \lbrace 1,2,\ldots,N \rbrace$, then $p=1/N$. This follows immediately from
$\sum\nolimits_{i = 1}^N {p_i } = 1$ with $p_1 = p_2 = \cdots = p_N = p$.</p>
|
14,167 | <p>I randomly pick a natural number <em>n</em>. Assuming that I would have picked each number with the same probability, what was the probability for me to pick <em>n</em> before I did it?</p>
| joriki | 6,622 | <p>There's an aspect that doesn't seem to have been mentioned in the other answers yet.</p>
<p>The question presupposes that it is possible to pick each natural number with the same probability. Since this is not possible, it raises the question what may have led to the assumption that it should be possible. It's instructive to compare this with typical cases where there <em>is</em> a uniform distribution.</p>
<p>When we roll a die, there's a symmetry between the sides. There's no reason why any of them should be more or less likely than the others to come up, so it's clear that the probability for each of them should be the same (if the die is symmetric and we throw it hard enough).</p>
<p>If we spin a wheel, there's again a symmetry among the angles at which it might come to rest. There's again no reason why any of them should be more or less likely than the others, and it's clear that the probability should be uniform across the range of angles (if the wheel is symmetric and we turn it hard enough).</p>
<p>The idea that "I randomly pick a natural number n. Assuming that I would have picked each number with the same probability" seems to be modeled on such cases. So it's instructive to realize that there is no similar experiment that this corresponds to. As has been pointed out in some of the comments, asking a person for a "random" answer won't do, because there's no reason to except symmetry between small and large numbers. Another idea might be to mark the rational numbers on the wheel, choose some bijection mapping them to the natural numbers, and then choose the natural number that corresponds to the rational number at which the wheel comes to rest. But if you think about it, this doesn't make the sense that it does for real numbers.</p>
<p>So there's a direct correspondence between the impossibility of setting up a situation in which there is the required symmetry between all natural numbers and the impossibility of defining a uniform distribution on the natural numbers (or any countably infinite set).</p>
|
1,073,730 | <p>Is it possible to prove the proposition</p>
<p>if $P \land Q \Rightarrow R$</p>
<p>then</p>
<p>$P \land R \Rightarrow \lnot Q $</p>
<p>or in other words</p>
<p>($P \land Q \Rightarrow R) \Rightarrow (P \land R \Rightarrow \lnot Q )$</p>
| Julian Rachman | 200,808 | <p>First, I would like you to think about the relation between $\cup$ and $\lor$, and $\cap$ and $\land$. Then, prove such statements using the truth-table definitions of the propositional connectives. Once done, you will know how to prove this problem on your own.</p>
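<p>Independently of the hint above, a propositional claim in finitely many variables can always be settled by enumerating truth assignments; this standard-library sketch (my addition) collects the assignments falsifying <span class="math-container">$(P \land Q \Rightarrow R) \Rightarrow (P \land R \Rightarrow \lnot Q)$</span>, and it turns out there is one, so the implication is not a tautology:</p>

```python
from itertools import product

def implies(a, b):
    return (not a) or b

# Enumerate all 8 truth assignments for (P, Q, R) and collect the
# rows where (P and Q -> R) -> (P and R -> not Q) is false.
counterexamples = [(P, Q, R)
                   for P, Q, R in product([False, True], repeat=3)
                   if not implies(implies(P and Q, R),
                                  implies(P and R, not Q))]

print(counterexamples)  # [(True, True, True)]
```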
|
3,315,516 | <p>A convex function is locally Lipschitz on a closed and bounded set. This implies that a nonconvex function may not be locally Lipschitz. Is every nonconvex function not locally Lipschitz? </p>
| Community | -1 | <p>No, that would be ridiculous. For instance, every <span class="math-container">$C^1$</span> function is locally Lipschitz, but it need not be convex.</p>
|
737,614 | <p>I'm confused as to how to finish answering this question. </p>
<p>Compute $$\sum_{i=28}^n (3i^2 - 4i + \frac{5}{7^i}) $$</p>
<p>I end up with
$$ 3[\frac{n(n+1)(2n+1)}{6}] - 4[\frac{n(n+1)}{2}] + [\frac{k^n(5(-\log(7))^n)}{n!}]$$</p>
<p>I'm not entirely sure if the last part of my answer is right but my main question is what do I do about the $ i=28$? Any help would be awesome. Thanks </p>
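<p>Regarding the <span class="math-container">$i=28$</span>: the standard move is <span class="math-container">$\sum_{i=28}^{n} = \sum_{i=1}^{n} - \sum_{i=1}^{27}$</span>, applying closed forms to both full sums; the <span class="math-container">$\sum 5/7^i$</span> part has a finite-geometric-series closed form (rather than the last bracket above). A standard-library sketch (my addition) checking this against a brute-force sum:</p>

```python
from fractions import Fraction

def closed_form(n):
    # sum_{i=1}^{n} (3 i^2 - 4 i), via the standard power-sum formulas
    poly = Fraction(3 * n * (n + 1) * (2 * n + 1), 6) - Fraction(4 * n * (n + 1), 2)
    # sum_{i=1}^{n} 5 / 7^i, a finite geometric series
    geom = Fraction(5, 7) * (1 - Fraction(1, 7) ** n) / (1 - Fraction(1, 7))
    return poly + geom

def tail(n):
    # sum_{i=28}^{n}, expressed as a difference of two full sums
    return closed_form(n) - closed_form(27)

brute = sum(Fraction(3 * i * i - 4 * i) + Fraction(5, 7 ** i)
            for i in range(28, 41))
assert tail(40) == brute
print("split-the-sum closed form matches brute force for n = 40")
```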
| Michael Albanese | 39,599 | <p>As David mentions in the comments, you are correct. The rule you refer to is often called the degree sum formula or the <a href="http://en.wikipedia.org/wiki/Handshaking_lemma" rel="nofollow">handshaking lemma</a>. The same rule can be used to show the following more general statement: </p>
<blockquote>
<p>Any graph must have an even number of odd vertices.</p>
</blockquote>
<p>Here a vertex is called odd if it has odd degree.</p>
|
2,165,066 | <p>I'm asked to show that if $X$ and $Y$ are independent exponential random variables with parameter $\lambda = 1$, then $X/(X+Y)$ has a Beta distribution.</p>
<p>Up to now I had to find the new pdf or df when $Y$ was of the kind $Y=aB + c$, which I understand now. Here a hint is given to use the "Law of total probability", which I've only seen in measure theory. </p>
<p>My guess would be to compute $\mathbb{P}(X/(X+Y)\leq x)$ using the fact that the df of the exponential distribution is $1 - e^{-\lambda x}$. I'm stuck at that step. </p>
| V. Vancak | 230,329 | <p>Define $ U = X/(X+Y) = g_u(X,Y) $ and $V = X+Y = g_v(X,Y)$, thus
$$
f_{U,V}(u,v) = f_X (g_u^{-1} (u,v))f_{Y} (g_v^{-1} (u,v))| \det\frac{\partial(X,Y)}{\partial(u,v)} |
$$
where $X =UV$ and $Y = V (1-U)$, so
$$
| \det\frac{\partial(X,Y)}{\partial(u,v)} |=v.
$$
As such,
$$
f_{U,V}(u,v)=e^{-uv}e^{-v(1-u)}v=ve^{-v},
$$
which factors as a $\mathrm{Gamma}(2,1)$ density in $v$ times a $\mathrm{Beta}(1,1)$ density in $u$.
Finally,
$$
f_U(u) = \int f_{U,V}(u,v)\,dv = \mathrm{Beta}(1,1) = \mathcal{U}(0,1).
$$
<p>See <a href="https://math.stackexchange.com/questions/190670/how-exactly-are-the-beta-and-gamma-distributions-related">here</a> for more details and examples. </p>
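<p>A seeded Monte Carlo sanity check (my addition; standard library only) that <span class="math-container">$U=X/(X+Y)$</span> for independent <span class="math-container">$\mathrm{Exp}(1)$</span> draws behaves like <span class="math-container">$\mathcal{U}(0,1)$</span>, matching mean <span class="math-container">$1/2$</span> and variance <span class="math-container">$1/12$</span>:</p>

```python
import random

random.seed(0)
N = 100_000
us = []
for _ in range(N):
    x = random.expovariate(1.0)
    y = random.expovariate(1.0)
    us.append(x / (x + y))

mean = sum(us) / N
var = sum((u - mean) ** 2 for u in us) / N

assert abs(mean - 0.5) < 0.01         # Uniform(0,1) has mean 1/2
assert abs(var - 1 / 12) < 0.01       # Uniform(0,1) has variance 1/12
print("sample mean and variance match Uniform(0,1)")
```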
|