| qid | question | author | author_id | answer |
|---|---|---|---|---|
466,909 | <p>How many ways can you split the numbers 1 to 5 into two groups of varying size?
For example: '1 and 2,3,4,5' or '1,2 and 3,4,5' or '1,2,3 and 4,5'. How many combinations are there like this? What is the formula?</p>
| Jared | 65,034 | <p>Great question. It seems to me that there are two decisions that should be made. The first is to determine the size of the two groups. After that decision has been made, we must then decide which elements will go in which groups.</p>
<p>Suppose we split the $5$ numbers into a group of size $1$ and a group of size $4$. How many ways can we do this? Well, it suffices to just choose which number will be alone, so there are $5$ ways.</p>
<p>Next, let's split the numbers into a group of size $2$ and a group of size $3$. How many ways can we do this? Again, it suffices to choose the numbers for the smaller group. Using a binomial coefficient, we find there are $\binom{5}{2}=10$ ways to do this.</p>
<p>Altogether, you will find $15$ ways.</p>
<p>See if you can show with $n$ elements there are $2^{n-1}-1$ ways to split into two smaller groups.</p>
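The count can be checked by brute force (my own sketch, not part of the answer; the function name is made up):

```python
from itertools import combinations

def two_group_splits(n):
    """Count unordered splits of {1, ..., n} into two non-empty groups (brute force)."""
    elems = range(1, n + 1)
    splits = set()
    for size in range(1, n):
        for group in combinations(elems, size):
            rest = tuple(e for e in elems if e not in group)
            splits.add(frozenset([group, rest]))  # {A, B} and {B, A} coincide
    return len(splits)

print(two_group_splits(5), 2 ** (5 - 1) - 1)  # 15 15
```

The same loop confirms the general formula $2^{n-1}-1$ for small $n$.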
|
466,909 | <p>How many ways can you split the numbers 1 to 5 into two groups of varying size?
For example: '1 and 2,3,4,5' or '1,2 and 3,4,5' or '1,2,3 and 4,5'. How many combinations are there like this? What is the formula?</p>
| Barry Cipra | 86,747 | <p>The OP has specified that the order of the groups in the splitting doesn't matter, and also that the two groups must both be non-empty. One of the groups must have the $1$, and that group either has or doesn't have each of the other four numbers, except that it can't have all four (which would leave the other group empty), so there are in total $2^4-1=15$ splittings.</p>
|
3,912,722 | <blockquote>
<p>Circle of radius <span class="math-container">$r$</span> touches the parabola <span class="math-container">$y^2+12x=0$</span> at its vertex. Centre of circle lies left of the vertex and circle lies entirely within the parabola. What is the largest possible value of <span class="math-container">$r$</span>?</p>
</blockquote>
<p>So my book has given the solution as follows:</p>
<blockquote>
<p>The equation of the circle can be taken as: <span class="math-container">$(x+r)^2+y^2=r^2$</span><br />
and when we solve the equation of the circle and the parabola, we get <span class="math-container">$x=0$</span> or <span class="math-container">$x=12-2r$</span>.</p>
</blockquote>
<blockquote>
<p>Then, <span class="math-container">$12-2r≥0$</span> and finally, the largest possible value of <span class="math-container">$r$</span> is <span class="math-container">$6$</span>.</p>
</blockquote>
<p>This is where I got stuck as I'm not able to understand why that condition must be true. I get that the circle must lie within the parabola...</p>
<p>Can someone please explain this condition to me?</p>
| Narasimham | 95,860 | <p><a href="https://i.stack.imgur.com/xx8Ry.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xx8Ry.png" alt="enter image description here" /></a></p>
<p>It helps to learn and remember the standard forms of the parabola along with the osculating circle radii at their vertices:
<span class="math-container">$$ y^2= \pm 2Rx , \quad x^2=\pm 2R y$$</span>
Differentiating <span class="math-container">$ 2 R y = x^2$</span> twice shows that the minimum touching circle radius is <span class="math-container">$R$</span>; in this case <span class="math-container">$6$</span>.</p>
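As a sanity check on the osculating-circle claim (a numeric sketch of mine, not part of the book's solution): the radius of curvature of $x=-y^2/12$ at the vertex $y=0$, computed by finite differences from $\kappa = |x''|/(1+x'^2)^{3/2}$.

```python
# Radius of curvature of x = -y^2/12 at the vertex (y = 0),
# computed from kappa = |x''| / (1 + x'^2)^(3/2).
def radius_of_curvature(y, h=1e-5):
    x = lambda t: -t * t / 12.0
    x1 = (x(y + h) - x(y - h)) / (2 * h)            # first derivative
    x2 = (x(y + h) - 2 * x(y) + x(y - h)) / h ** 2  # second derivative
    kappa = abs(x2) / (1 + x1 * x1) ** 1.5
    return 1.0 / kappa

print(radius_of_curvature(0.0))  # ≈ 6, matching 2R = 12 in y^2 = 2Rx
```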
|
32,111 | <p>Let $p : Y \to X$ be an $n$-sheeted covering map, where $X$ and $Y$ are topological spaces. If $X$ is compact, prove that $Y$ is compact.</p>
<p>I realize that this seems like a very simple problem, but I want to stress the lack of assumptions on $X$ and $Y$. For example, this is very easy to prove if we can assume that $X$ and $Y$ are metrizable, for sequential compactness is then equivalent to compactness and it is easy to lift sequential compactness from $X$ to $Y$.</p>
<p>I asked three people this question in person, and all of them immediately assumed that $X$ and $Y$ are metrizable, so I feel I should put in this warning that they are not.</p>
| Dick Palais | 7,311 | <p>Well, the obvious argument (that any sequence has a convergent subsequence) which your three friends used for the metrizable case generalizes easily: in the general case, any net has a convergent subnet.</p>
|
2,210,317 | <p>Simplify $\sum^{n}_{k=1} (-1)^k(n-k)!(n+k)!$. </p>
<p>I thought of interpreting a term $(n-k)!(n+k)!=\frac{(2n)!}{\dbinom{2n}{n-k}}$, but I do not know of any ways to sum this efficiently.</p>
| Jack D'Aurizio | 44,121 | <p>By using Euler's Beta function:
$$\sum_{k=1}^{n}(-1)^k \Gamma(n+k+1)\Gamma(n-k+1) = \Gamma(2n+2)\sum_{k=1}^{n}\int_{0}^{1}(-1)^k z^{n+k}(1-z)^{n-k}\,dz $$
equals:
$$ (2n+1)! \int_{0}^{1} z^{n+1}\left[(-z)^n-(1-z)^n\right]\,dz =(2n+1)!\left[\frac{(-1)^n}{2n+2}-B(n+1,n+2)\right]$$
or:
$$ \color{red}{(-1)^n\frac{(2n+1)!}{2n+2}-\frac{n!(n+1)!}{2n+2}}$$</p>
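The closed form can be spot-checked numerically (a sketch of mine; the names `lhs`/`rhs` are made up):

```python
from math import factorial

def lhs(n):  # the original alternating sum
    return sum((-1) ** k * factorial(n - k) * factorial(n + k) for k in range(1, n + 1))

def rhs(n):  # the red closed form
    return ((-1) ** n * factorial(2 * n + 1) - factorial(n) * factorial(n + 1)) // (2 * n + 2)

for n in range(1, 9):
    assert lhs(n) == rhs(n)
print(lhs(3))  # -648
```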
|
1,217,805 | <p>Let's say I have a set of linearly independent vectors, collected in a square matrix $\mathbf{M}$.</p>
<p>I know that I could orthogonalize these vectors with the QR decomposition,</p>
<p>$\mathbf{M} = \mathbf{QR}$</p>
<p>where $\mathbf{Q}$ is my orthogonal set of vectors.</p>
<p>I'm curious if it is possible to do the same with the SVD, but I cannot figure out how. </p>
<p>I want to know how to go from $\mathbf{M}$ to $\mathbf{Q}$, using SVD instead of QR.</p>
<p>If this is not possible, I'd like to know as well.</p>
| texasflood | 201,600 | <p>If $E$ is full rank (and it will be if $M$ is square and full rank), you can take the square root of it</p>
<p>$$UEV^H$$
$$=UE^{1/2}IE^{1/2}V^H$$</p>
<p>And so you can just absorb $E^{1/2}$ into $U$ and $V$.</p>
<p>It should go without saying that this is no longer a singular value decomposition and may lose all the nice properties SVD enjoys. </p>
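A minimal numerical sketch of the two routes, assuming NumPy is available (not part of the answer; the answer's $E$ is `diag(S)` here). Note that the columns of $U$ are orthonormal, but unlike QR the first $k$ columns of $U$ need not span the same subspace as the first $k$ columns of $M$:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))   # square and (almost surely) full rank

# QR route: the columns of Q form an orthonormal basis
Q, R = np.linalg.qr(M)

# SVD route: M = U @ diag(S) @ Vh, and the columns of U are also orthonormal
U, S, Vh = np.linalg.svd(M)

print(np.allclose(Q.T @ Q, np.eye(4)), np.allclose(U.T @ U, np.eye(4)))  # True True
```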
|
907,729 | <p>I am sorry if the numbers are not formatted, I have searched but found nothing on how. I am trying to multiply $$(2x^3 - x)\left(\sqrt{x} + \frac {2}{x}\right)$$ together and I arrive at a different answer that does not match the one given. The approach I take is to assign the numbers powers then multiply them by the lattice method. So $$(2x^3) (x^{\frac {1}{2}}) + (2x^3) (2x^{-1}) + (x^{\frac {1}{2}}) (-x) + (2x^{-1}) (-x).$$ I am not asking for a solution but rather an explanation of how to go about multiplying the terms with negative and fractional exponents.</p>
| amWhy | 9,003 | <p>We distribute just as we would distribute $$(a + b)(c + d) = a(c+d) + b(c + d) = ac + ad + bc + bd.$$</p>
<hr>
<p>$$(2x^3 - x)(\color{blue}{\sqrt x + 2x^{-1}}) = 2x^3(\color{blue}{\sqrt x + 2x^{-1}}) - x(\color{blue}{\sqrt x + 2x^{-1}})\\ =2x^3\sqrt x + 4x^3x^{-1} - x\sqrt x - 2xx^{-1}\\ = 2x^{6/2}x^{1/2} + \frac {4x^3}{x} - x^{2/2}x^{1/2} - \frac {2x}{x}\\
= 2x^{6/2 + 1/2} + 4x^2 - x^{2/2 + 1/2} - 2 \\
= 2x^{7/2} + 4x^2 - x^{3/2} - 2$$</p>
<p>We are using the fact that $$a^ba^c = a^{b+c}$$</p>
<p>This holds for negative exponents as well. So when we approach this simply by looking at products of terms with the same base, each raised to an exponent, we can see that $4x^3x^{-1} = 4x^{3+ (-1)} = 4x^2$.</p>
<p>Note that this is consistent with what I wrote, e.g., $$4x^3x^{-1} = \frac{4x^3}{x}.$$ We can see that we can cancel the common factor of $x$ in the numerator and denominator: $\dfrac{4x^3}x = 4x^2$ and in doing so, we are, essentially, subtracting the exponent in the denominator from the exponent in the numerator. We can generalize this into a handy "rule":</p>
<p>$$\frac {a^b}{a^c} = a^{b-c}$$</p>
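A quick float check of the final expansion (my own sketch, not part of the answer):

```python
from math import sqrt, isclose

def original(x):
    return (2 * x**3 - x) * (sqrt(x) + 2 / x)

def expanded(x):  # the answer's result: 2x^(7/2) + 4x^2 - x^(3/2) - 2
    return 2 * x**3.5 + 4 * x**2 - x**1.5 - 2

for x in [0.5, 1.0, 2.0, 3.7]:
    assert isclose(original(x), expanded(x), rel_tol=1e-9)
print(original(1.0), expanded(1.0))  # 3.0 3.0
```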
|
907,729 | <p>I am sorry if the numbers are not formatted, I have searched but found nothing on how. I am trying to multiply $$(2x^3 - x)\left(\sqrt{x} + \frac {2}{x}\right)$$ together and I arrive at a different answer that does not match the one given. The approach I take is to assign the numbers powers then multiply them by the lattice method. So $$(2x^3) (x^{\frac {1}{2}}) + (2x^3) (2x^{-1}) + (x^{\frac {1}{2}}) (-x) + (2x^{-1}) (-x).$$ I am not asking for a solution but rather an explanation of how to go about multiplying the terms with negative and fractional exponents.</p>
| Barkas | 170,882 | <p>Terms with negative and fractional exponents are dealt with in very much the same way as natural exponents.</p>
<p>You have</p>
<ul>
<li>$x^a\cdot x^b=x^{a+b}$</li>
<li>$x^{-a}=\frac{1}{x^a}$</li>
</ul>
<p>Sometimes fractional exponents are written as roots. For example the exponent $\frac{1}{2}$ is more commonly known as the square root: $x^{\frac{1}{2}}=\sqrt{x}$ But this works equally for any fraction as exponent, so $x^{\frac{1}{3}}$ would be the third root, i.e. the (positive) solution (in $y$) to the equation $y^3=x$ and so on.</p>
|
1,794,673 | <p>Prove that there exist two real Gaussian distribution $X$ and $Y$ such that the vector $(X,Y)$ is not Gaussian. </p>
<p>The way that I tried to solve this problem was choosing $X=X$ and then taking the linear combination $X-X$, or something like that; but in the definition of Gaussian distribution that I have, constants also count as Gaussian. Is the statement of this problem (and its generalization) still true if we assume that constants are Gaussian?</p>
| Community | -1 | <p>At most this requires the relative smoothing theorem: if $f: X \to N$ is $C^0$, and smooth on some closed subset $M$, then it is homotopic to a smooth map, with $f|_M$ fixed by this homotopy. Apply this to $X = S^n \times M$ to see that the map is a surjection on homotopy groups. (That is, we're smoothing a "sphere's worth" of continuous maps.) Now apply a version of this theorem with boundary on $S^n \times M \times I$ to see that it's an injection on homotopy groups. All of these theorems are proved in Hirsch, differential topology. </p>
|
3,464,383 | <p>I couldn't find any substantial list of 'strange infinite convergent series' so I wanted to ask the MSE community for some. By <em>strange</em>, I mean infinite series/limits that <strong>converge when you would not expect them to and/or converge to something you would not expect</strong>.</p>
<p>My favorite converges to Khinchin's (sometimes Khintchine's) constant, <span class="math-container">$K$</span>. For almost all <span class="math-container">$x \in
\mathbb{R}$</span> (the exceptions form a measure-zero subset) with infinite c.f. representation:
<span class="math-container">$$x = a_0 + \frac{1}{a_1+\frac{1}{a_2+\frac1{\ddots}}}$$</span>
We have:
<span class="math-container">$$\lim_{n \to \infty} \root n \of{\prod_{i=1}^na_i} = \lim_{n \to \infty}\root n \of {a_1a_2\dots a_n} = K$$</span>
Which is...wow! That it converges independent of <span class="math-container">$x$</span> really gets me.</p>
| Descartes Before the Horse | 592,365 | <p>A series from user <a href="https://math.stackexchange.com/users/276986/reuns">Reuns</a>, which he proves in a <a href="https://math.stackexchange.com/questions/3464938/a-closed-form-for-re-sum-k-1-infty-dfraci-sigma-0kk/3464986#3464986">previous question</a> of mine:
<span class="math-container">$$\sum_{k=1}^\infty\frac{\Re(i^{\sigma_0(k)})}{k^s} = \zeta(s)-\zeta(2s)-2\zeta(2s)\sum_{r\ge 1} (-1)^{r}\sum_{p \text{ prime}}p^{-s(2r+1)}$$</span>
For <span class="math-container">$s>1$</span>. (Will remove upon Reuns's request)</p>
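The limit can be observed experimentally (a rough sketch of mine, not from the answer): take a 50-digit rational truncation of $\pi$ as a stand-in for a typical real, extract its first partial quotients exactly, and look at their geometric mean. Convergence to $K \approx 2.685$ is slow, so only the order of magnitude is visible here:

```python
from fractions import Fraction
from math import prod

# First 50 decimal digits of pi, used as an exact rational stand-in for a
# "typical" real number (any generic sample point behaves the same way).
PI = Fraction("3.14159265358979323846264338327950288419716939937510")

def cf_terms(x, n):
    """First n partial quotients a_1, ..., a_n of x (the integer part a_0 is dropped)."""
    x = x - (x.numerator // x.denominator)  # keep only the fractional part
    terms = []
    for _ in range(n):
        x = 1 / x
        a = x.numerator // x.denominator
        terms.append(a)
        x -= a
    return terms

terms = cf_terms(PI, 30)
gm = prod(terms) ** (1 / len(terms))  # geometric mean of the partial quotients
print(terms[:5], gm)  # [7, 15, 1, 292, 1] and a value in the ballpark of K ≈ 2.685
```

With only 30 terms the geometric mean still fluctuates noticeably around $K$; Gauss-Kuzmin statistics kick in very slowly.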
|
3,848,676 | <p>Let <span class="math-container">$n$</span> and <span class="math-container">$k$</span> be nonnegative integers. The binomial coefficient <span class="math-container">$\binom{n}{k}$</span> is equal to the number of ways to choose <span class="math-container">$k$</span> items from a set with <span class="math-container">$n$</span> elements. How can this interpretation of <span class="math-container">$\binom{n}{k}$</span> be used to understand or prove the binomial theorem
<span class="math-container">$$
(x+y)^n = \sum_{k=0}^n \binom{n}{k} x^k y^{n-k} \qquad \text{for all numbers } x, y.
$$</span></p>
<hr />
<p><strong>Note</strong>: The original text of this question is shown below. I’m editing the question because it’s fundamental and deserves to be salvaged.</p>
<p>Original text:</p>
<p>Sorry for the childish question. I am in 7th grade and need some help. Thank You!</p>
| littleO | 40,119 | <p>Let's imagine what happens if we expand
<span class="math-container">$$
(x+y)^3 = \underbrace{(x+y)}_{\text{factor 1}} \underbrace{(x+y)}_{\text{factor 2}} \underbrace{(x+y)}_{\text{factor 3}}
$$</span>
without collecting like terms. The result is
<span class="math-container">\begin{align}
(x + y)^3 &= x\cdot x \cdot x \\
& + x \cdot x \cdot y + x \cdot y \cdot x + y \cdot x \cdot x \\
&+ x \cdot y \cdot y + y \cdot x \cdot y + y \cdot y \cdot x \\
&+ y \cdot y \cdot y.
\end{align}</span>
Each term in the sum contains a contribution from factor 1, a contribution from factor 2, and a contribution from factor 3. Each factor contributes either an <span class="math-container">$x$</span> or a <span class="math-container">$y$</span>.</p>
<p>Now how many terms in this sum are equal to <span class="math-container">$x y^2$</span>? Well, how many ways are there to choose two of the factors to contribute a <span class="math-container">$y$</span>? The answer is <span class="math-container">$\binom{3}{2}$</span>.</p>
<p>Likewise, the sum contains <span class="math-container">$\binom{3}{0}$</span> terms that are equal to <span class="math-container">$x^3$</span>, <span class="math-container">$\binom{3}{1}$</span> terms that are equal to <span class="math-container">$x^2 y$</span>, and <span class="math-container">$\binom{3}{3}$</span> terms that are equal to <span class="math-container">$y^3$</span>.</p>
<p>That explains why
<span class="math-container">$$
(x + y)^3 = \binom{3}{0} x^3 + \binom{3}{1}x^2 y + \binom{3}{2} x y^2 + \binom{3}{3} y^3.
$$</span></p>
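The counting argument can be mechanized for $n = 3$ (a sketch of mine): enumerate all $2^3$ products, record how many factors contributed an $x$ in each, and compare with the binomial coefficients.

```python
from itertools import product
from math import comb
from collections import Counter

n = 3
# each of the 2^3 expanded terms picks an x or a y from each factor
counts = Counter("".join(p).count("x") for p in product("xy", repeat=n))

for k in range(n + 1):
    assert counts[k] == comb(n, k)  # number of terms equal to x^k y^(n-k)
print(sorted(counts.items()))  # [(0, 1), (1, 3), (2, 3), (3, 1)]
```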
|
2,231,899 | <p>How to prove $\text{Con}(PA + \text{Con}(PA))$ in $ZFC$?</p>
<p>I found an example of this claim <a href="https://mathoverflow.net/a/50174/2003">here</a> but I'm looking for a more detailed explanation.</p>
<p>An equivalent statement is: there is no proof in $PA$ that $PA$ is inconsistent.</p>
<p>If $PA$ could prove that $PA$ is inconsistent, then $ZFC$ could also prove it, so $ZFC$ would be inconsistent (because $ZFC$ also proves that $PA$ <em>is</em> consistent). But that's not a contradiction since we can't prove that $ZFC$ is consistent anyway (in $ZFC$).</p>
<p>Is there a theory $T$ intermediate in consistency strength, so that $ZFC$ proves that $T$ is consistent and $T$ proves that $PA$ is consistent? That would solve the problem with the above approach (since $PA$ proving its own inconsistency would make $T$ inconsistent which would make a contradiction in $ZFC$).</p>
| Jonathan Schilhan | 265,128 | <p>Well, in ZFC there is a model of PA + Con(PA), namely the natural numbers. Also the soundness theorem (if a theory has a model, it is consistent) is a theorem in ZFC so it is also a theorem of ZFC that the above is consistent.</p>
<p>Edit: I think I understand your approach now. But I don't think it could lead to anything. The problem is that PA+Con(PA) is already a somehow minimal form of such a theory T you are looking for (of course there are way weaker theories that would do it but I don't think they would make anything easier). </p>
|
2,231,899 | <p>How to prove $\text{Con}(PA + \text{Con}(PA))$ in $ZFC$?</p>
<p>I found an example of this claim <a href="https://mathoverflow.net/a/50174/2003">here</a> but I'm looking for a more detailed explanation.</p>
<p>An equivalent statement is: there is no proof in $PA$ that $PA$ is inconsistent.</p>
<p>If $PA$ could prove that $PA$ is inconsistent, then $ZFC$ could also prove it, so $ZFC$ would be inconsistent (because $ZFC$ also proves that $PA$ <em>is</em> consistent). But that's not a contradiction since we can't prove that $ZFC$ is consistent anyway (in $ZFC$).</p>
<p>Is there a theory $T$ intermediate in consistency strength, so that $ZFC$ proves that $T$ is consistent and $T$ proves that $PA$ is consistent? That would solve the problem with the above approach (since $PA$ proving its own inconsistency would make $T$ inconsistent which would make a contradiction in $ZFC$).</p>
| Asaf Karagila | 622 | <p>The completeness theorem for first-order logic tells us that a theory $T$ is consistent if and only it has a model. Moreover, since $\sf ZFC$ proves the completeness theorem, in order to show that $\sf PA+\operatorname{Con}(PA)$ is consistent, we just need to show it has a model.</p>
<p>But this is not quite what you are asking for. Since $\sf\operatorname{Con}(PA+\operatorname{Con}(PA))$ is a statement about arithmetic, we need to code these things into the natural numbers first.</p>
<p>Luckily, all the usual coding schemes work just fine. So that means that for a theory $T$ with Gödel coding, $\sf ZFC$ proves that $\operatorname{Con}(T)$ holds in the natural numbers if and only if $T$ has a model.</p>
<p>Now, since $\sf PA$ is true in $\omega$, we get that $\sf PA$ is consistent and therefore $\operatorname{Con}\sf (PA)$ is true in $\omega$, and therefore $\sf PA+\operatorname{Con}(PA)$ is true there. So we found a model for our theory, and therefore $\sf\operatorname{Con}(PA+\operatorname{Con}(PA))$ is provable from $\sf ZFC$.</p>
|
818,850 | <p>I want to generate numbers $1$ to $10$ with uniform probability distribution. So I write the numbers $1$ to $10$ in the natural order. I keep writing the next block of $10$ numbers by permuting the first $10$ numbers. Can this long string be considered random with uniform distribution? Is there a problem?</p>
| Community | -1 | <p>Think of a meterstick. Suppose you mark a point on it.</p>
<p>The meterstick is divided up into centimeters: you can narrow down where the mark is by checking which two centimeter marks it lies between.</p>
<p>Each centimeter is split into ten millimeters. You can further narrow down where the mark is within the centimeter by noting which pair of millimeter marks it lies between.</p>
<p>We could divvy each millimeter up into a thousand micrometers. We could further narrow down where the mark is within the millimeter by working out which pair of micrometer marks it lies between.</p>
<p>And so forth. If you have a scheme for continually subdividing your meterstick (e.g. you keep breaking it up into tenths), then your marked point lies in a unique subdivision (or it lies on a mark); this tells us the decimal digits of its decimal expansion.</p>
<p>Conversely, any way of (consistently) specifying which subdivision the point lies in, no matter how fine we make the subdivisions, is sufficient information to uniquely determine exactly where the mark lies on the meterstick.</p>
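The subdivision scheme in the answer can be sketched in code (my own illustration; the function name is made up): repeatedly split the interval into tenths and record which tenth the marked point falls in. The recorded indices are exactly the decimal digits.

```python
def digits_by_subdivision(point, length=1.0, n=8):
    """Read off decimal digits of `point` by repeatedly splitting [0, length) into tenths."""
    lo, hi = 0.0, length
    digits = []
    for _ in range(n):
        width = (hi - lo) / 10
        idx = min(int((point - lo) / width), 9)  # which tenth the mark falls in
        digits.append(idx)
        lo, hi = lo + idx * width, lo + (idx + 1) * width
    return digits

print(digits_by_subdivision(0.3141592)[:6])  # [3, 1, 4, 1, 5, 9]
```

(Floating-point round-off can perturb the deepest digits, which mirrors the answer's caveat about points lying exactly on a mark.)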
|
248,167 | <p>I want to know why, when I look at the Julia sets of the quadratic family, I see only a finite number of repeating patterns, rather than a countable infinity of them.</p>
<p>My question is specifically about the interaction of these three theorems:</p>
<p><strong>Theorem 1</strong>: Let $z_0\in\mathbb{C}$ be a repelling periodic point of the function $f_c:z\mapsto z^2+c$. Tan Lei proved in the 90s that the filled-in Julia set $K_c$ is asymptotically $\lambda$-self-similar about $z_0$, where $\lambda$ denotes the multiplier of the orbit.</p>
<p><strong>Theorem 2</strong>: (Iterated preimages are dense) Let $z\in J_c$; then the set of iterated preimages $\cup_{n\in\mathbb{N}} ~ f^{-n}(z)$ is dense in $J_c$.</p>
<p><strong>Theorem 3</strong>: $J_c$ is the closure of repelling periodic points.</p>
<p>Let's expand on Theorem 1:<br>
Technically it means that the sets $(\lambda^n \tau_{-z_0} K_c)\cap\mathbb{D}_r$ approach (in the Hausdorff metric on compact subsets of $\mathbb{C}$) a set $X \cap \mathbb{D}_r$, where the limit model $X \subset \mathbb{C}$ is $\lambda$-self-similar: $X = \lambda X$.<br>
Practically this means that, when one zooms into a computer generated $K_c$ about $z_0$, the image becomes, to all practical purposes, self-similar. No new information is gained by zooming again about $z_0$.</p>
<p>Lei also proved that $K_c$ is asymptotically $\lambda$-self-similar about the preimages of $z_0$, with the same limit model $X$, up to rotation and rescaling. This means that zooming in at each point of the repelling cycle of $z_0$ provides basically the same spectacle, perhaps rotated, as does zooming into $z_0$.
Moreover, the preimages of $z_0$ are dense in $J_{c}$ (Theorem 2), meaning that this $X$ pattern can be seen throughout the Julia set.</p>
<p>Now, let consider a different repelling periodic point $z_1$. Lei tells us that $K_c$ will be asymptotically self-similar about $z_1$ and all <em>its</em> pre-images, with an <em>a priori different</em> limit set $Y$. Since the pre-images of $z_1$ are also dense in $J_c$ we may observe the limit model $Y$ all over $J_c$.</p>
<p>So, <strong><em>a priori</em></strong> to each repelling periodic orbit, there should be an associated limit model, and each of these limit models could be distinct. <em>However</em>, when I look at a computer generated Julia set, the parts of it that are asymptotically self-similar, seem to approach one of a <strong><em>finite</em></strong> set of limit models (up to rotation).</p>
<p>Why is it so? Maybe my eye cannot see the difference? Or the computer cannot generate all of the detail?</p>
<p>Or is it the case that the limit models are finite?</p>
<p><a href="https://i.stack.imgur.com/JKaA9.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/JKaA9.jpg" alt="Simple Julia zoom"></a>
In this image (read like a comic strip), I zoom into the neighbourhood of a point, four times, then purposely "miss the center", and zoom onto a detail for four more times. The patterns that emerge are very similar. Are they the same?<br>
This is perhaps one of the simplest Julia sets, but the experience is similar with more complicated ones.</p>
| Mark McClure | 46,214 | <p>Julia sets are all very closely related to self-similar sets - each one can be thought of as the invariant set of something like an iterated function system. Specifically, the Julia set of $f(z)=z^2 +c$ is the closure of the repelling periodic points of $f$. Thus, it makes sense that the Julia set itself should be attractive under an inverse of $f$ and there are two such inverses:
$$f_{\pm}^{-1}(z) = \pm \sqrt{z-c}.$$</p>
<p>In fact,
$$J = f_{+}^{-1}(J) \cup f_{-}^{-1}(J)$$
so that it looks almost self-similar. Here's a gratuitous graphic illustrating the idea for $c=-1$:</p>
<p><a href="https://i.stack.imgur.com/wUN3o.gif" rel="noreferrer"><img src="https://i.stack.imgur.com/wUN3o.gif" alt="enter image description here"></a> </p>
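The inverse-branch idea can be turned into a tiny point generator (my sketch, not the answer's code): starting from an arbitrary $z$, repeatedly apply a randomly chosen branch of $\pm\sqrt{z-c}$; the orbit is attracted to the Julia set, and for $c=-1$ it stays inside the disk of radius $(1+\sqrt5)/2$.

```python
import cmath
import random

def julia_points(c, n=20000, seed=1):
    """Approximate points of the Julia set of z -> z^2 + c by random inverse iteration."""
    random.seed(seed)
    z = complex(0.5, 0.5)  # arbitrary starting point
    pts = []
    for i in range(n):
        z = cmath.sqrt(z - c)      # one inverse branch ...
        if random.random() < 0.5:  # ... with a randomly chosen sign
            z = -z
        if i > 100:                # discard the initial transient
            pts.append(z)
    return pts

pts = julia_points(-1)
print(max(abs(z) for z in pts) < 2)  # True: the orbit stays bounded
```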
|
3,911,489 | <p>How to prove <span class="math-container">$$I=\int_{0}^{\infty} \frac{\sin x}{x(x^2+1)}dx=\frac{\pi}{2}(1-\frac{1}{e})\, ?$$</span></p>
<p>Let <span class="math-container">$$f(z)=\frac{\sin z}{z(z^2+1)}=\frac{e^{iz}-e^{-iz}}{2i\,z(z^2+1)}.$$</span></p>
<p>Then I have no idea how to deal with <span class="math-container">$[0,\infty)$</span>.</p>
<p>Can anyone please give me a hint? Thanks in advance.</p>
| Bernkastel | 551,169 | <p>Since your <span class="math-container">$f$</span> is even, you have
<span class="math-container">$$\int_0^{\infty} f(x) \text{d}x=\frac{1}{2} \int_{-\infty}^{\infty} f(x) \text{d}x$$</span></p>
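A numeric check of the stated value (my own sketch, plain composite Simpson's rule; truncating at $x=200$ is harmless since the tail decays like $1/x^3$):

```python
from math import sin, pi, e

def f(x):
    return sin(x) / (x * (x * x + 1)) if x else 1.0  # the integrand, with its limit 1 at 0

# composite Simpson's rule on [0, 200]; the tail beyond contributes O(1/200^2)
a, b, m = 0.0, 200.0, 200000  # m must be even
h = (b - a) / m
s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, m))
integral = s * h / 3

print(round(integral, 4), round(pi / 2 * (1 - 1 / e), 4))  # 0.9927 0.9927
```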
|
1,226,176 | <p>The following series is obviously convergent, but I cannot figure out how to calculate its sum:</p>
<p>$$\sum \frac{6^n}{(3^{n+1}-2^{n+1})(3^n - 2^n)}$$</p>
| Bumblebee | 156,886 | <p>You can use one of the following identities to telescope the series: <span class="math-container">$$\dfrac{3^{n+1}}{3^{n+1}-2^{n+1}}-\dfrac{3^n}{3^n-2^n}=\dfrac{2^n}{3^n-2^n}-\dfrac{2^{n+1}}{3^{n+1}-2^{n+1}}=\dfrac{6^n}{(3^{n+1}-2^{n+1})(3^n-2^n)}$$</span></p>
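Both the telescoping identity and the resulting sum can be checked numerically (my own sketch): the partial sums collapse to the first term $\frac{2^1}{3^1-2^1}=2$.

```python
def a(n):  # the telescoping term 2^n / (3^n - 2^n)
    return 2**n / (3**n - 2**n)

def term(n):  # the n-th summand of the series
    return 6**n / ((3**(n + 1) - 2**(n + 1)) * (3**n - 2**n))

# each summand equals a(n) - a(n+1), so partial sums collapse to a(1) - a(N+1)
for n in range(1, 25):
    assert abs(term(n) - (a(n) - a(n + 1))) < 1e-12

partial = sum(term(n) for n in range(1, 60))
print(round(partial, 6))  # 2.0: the sum is a(1) = 2
```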
|
413,982 | <p>How can I divide this using long division?</p>
<p>$$\frac{ax^3-a^2x^2-bx^2+b^2}{ax-b}$$</p>
<p><strong>Edit</strong></p>
<p>Sorry guys I wrote it wrong... Fixed it now.</p>
| Billy | 39,970 | <p>You may have more luck computing the reciprocal, $\dfrac{ax^3 - a^2x^2 - bx^2 + b^2}{ax-b}$, and then inverting it again at the end.</p>
<p>(When you've done that, try artificially 'setting' $ax = b$ in the expression $ax^3 - a^2x^2 - bx^2 + b^2$ and see what you get. What happens, and why?)</p>
|
413,982 | <p>How can I divide this using long division?</p>
<p>$$\frac{ax^3-a^2x^2-bx^2+b^2}{ax-b}$$</p>
<p><strong>Edit</strong></p>
<p>Sorry guys I wrote it wrong... Fixed it now.</p>
| Michael Hardy | 11,667 | <p>$$
\begin{array}{rccccccccccccc}
& & x^2 & - & ax & - & b \\[12pt]
ax-b & ) & ax^3 & - & (a^2+b)x^2 & + & b^2 \\
& & ax^3 & - & bx^2 \\[12pt]
& & & & -a^2x^2 & + & b^2 \\
& & & & -a^2x^2 & + & abx \\[12pt]
& & & & & & -abx & + & b^2 \\
& & & & & & -abx & + & b^2 \\[12pt]
& & & & & & & & 0
\end{array}
$$</p>
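The long division can be verified by multiplying back: divisor times quotient should reproduce the dividend with zero remainder (a numeric spot-check of mine, not part of the answer).

```python
import random

def numerator(x, a, b):  # the dividend ax^3 - a^2 x^2 - b x^2 + b^2
    return a * x**3 - a**2 * x**2 - b * x**2 + b**2

def quotient(x, a, b):   # x^2 - a x - b, read off the long division
    return x**2 - a * x - b

random.seed(0)
for _ in range(100):
    a, b, x = (random.uniform(-5, 5) for _ in range(3))
    lhs = numerator(x, a, b)
    rhs = (a * x - b) * quotient(x, a, b)  # divisor times quotient; remainder is 0
    assert abs(lhs - rhs) < 1e-8 * (1 + abs(lhs))
print("division checks out")
```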
|
588,725 | <p>Prove the following statement:</p>
<p>$$\frac{1}{x}<\ln(x)-\ln(x-1)<\frac{1}{x-1}$$</p>
<p>Proof:</p>
<p>$$\frac{-1}{x^2}<\frac{1}{x(x-1)}<\frac{-1}{(x-1)^2}$$</p>
<p>$$e^{(\frac{-1}{x^2})}<e^{(\frac{-1}{x(x-1)})}<e^{(\frac{-1}{(x-1)^2})}$$</p>
<p>$$\lim_{x\to\infty}e^{(\frac{-1}{x^2})}<\lim_{x\to\infty}e^{(\frac{-1}{x(x-1)})}<\lim_{x\to\infty}e^{(\frac{-1}{(x-1)^2})}$$</p>
<p>$$e^{0}<e^{0}<e^{0}$$</p>
<p>$$1<1<1$$</p>
<p>therefore MVT and we get the statement to be proven.</p>
<p>does anyone agree with me in the way i choose to prove the above statement? any feedback would be good thank you in advance!</p>
| Nick Peterson | 81,839 | <p>You've made this much too difficult, and your proof is a bit shady.</p>
<p>Try a different tack:</p>
<p>Note that for any $x>1$,
$$
\int_{x-1}^{x}\frac{1}{t}\,dt=\ln(x)-\ln(x-1).
$$
However, $\frac{1}{t}$ is monotone decreasing on $[x-1,x]$; can you come up with a way to use this to bound the above integral?</p>
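The inequality itself is easy to spot-check numerically before proving it (my sketch, not part of the proof):

```python
from math import log

# spot-check 1/x < ln(x) - ln(x-1) < 1/(x-1) for a few x > 1
for x in [1.1, 1.5, 2.0, 5.0, 100.0]:
    mid = log(x) - log(x - 1)
    assert 1 / x < mid < 1 / (x - 1)
print("inequality holds at all sample points")
```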
|
2,046,773 | <p>Given a function $f$ that is differentiable at a point $x_0$, if we define (using the Riemann integral)</p>
<p>$$F(x) = \int_a^x f$$</p>
<p>Can we necessarily say that $F^{\prime}(x)$ is continuous at $x_0$? Going back and forth between $f$ and $F$ confuses me a bit. I <em>think</em> that the Fundamental Theorem of Calculus gives us some relation between $F^{\prime}(x_0)$ and $f(x_0)$, but I'm not sure. </p>
| zhw. | 228,045 | <p>You can't say $F'$ is continuous at $x_0$ because $F'$ may not exist in a full neighborhood of $x_0.$ Take the interval $[-1,1]$ with $x_0=0$ for example. Choose any sequences $a_n,b_n$ you like such that $1\ge b_1 > a_1 > b_2 > a_2 > \cdots \to 0^+.$ Define $f(x) = x^2$ on each $[a_n,b_n],$ $f=0$ everywhere else. Then $f$ is Riemann integrable on $[-1,1]$ and $f'(0)=0.$ But at each $a_n$ and $b_n,$ $f$ has a jump discontinuity, hence $F'$ does not exist at these points. No matter which neighborhood of $0$ you examine, there will be lots of points, namely in the tail ends of the sequences $a_n,b_n,$ where $F'$ doesn't exist.</p>
|
1,012,730 | <p>A purely imaginary number is one which contains no non-zero real component. </p>
<p>If I had a sequence of numbers, say $\{0+20i, 0-i, 0+0i\}$, could I call this purely imaginary? </p>
<p>My issue here is that because $0+0i$ belongs to multiple sets, not just purely imaginary, is there not a valid case to say that the sequence isn't purely imaginary?</p>
| Vishnu | 693,070 | <p>As other answers say, zero is purely real as well as purely imaginary. Let me give an intuitive explanation based on the graphical representation of a complex number.</p>
<p>In the Argand plane, the points on the real axis represent complex numbers which are "purely real" as their imaginary part is zero. On the other hand, points on the imaginary axis denote complex numbers whose real part is zero and hence they are "purely imaginary". </p>
<p>We know that, the real and imaginary axis meet at the origin which represents the complex number <span class="math-container">$0+0i$</span>. As this point simultaneously lies on the real as well as the imaginary axis, we say that zero is both purely real and purely imaginary. </p>
|
2,289,754 | <p>Let $f_t: \mathbb R^2 \to \mathbb R$ define by $f_t(x,y) = t \sin(t|x^2+y^2-1|)$.</p>
<p>For all $\phi\in D(\mathbb R^2-\{(0,0)\})$, we denote by $T$ the map given by
$$T(\phi)=\lim_{t\to +\infty}\int_{\mathbb R^2} f_t(x,y) \, \phi(x,y)\, \mathrm{d}x\, \mathrm{d}y$$</p>
<p>How can prove that $T$ is a distribution of order $0$ on $\mathbb R^2-\{(0,0)\}$?</p>
<p>Remark: I have proved that $T: D(\mathbb R^2-\{(0,0)\}) \to \mathbb R $ is a linear map. It remains to show $T$ is continuous. In fact, I started by using the polar coordinate system, and then we get
$$T(\phi)=\lim_{t\to +\infty}\int_{0}^{2\pi} \int_{0}^{\infty}\, t \sin(t|r^2-1|) \, \phi(r,\theta)\,r\, \mathrm{d}r\, \mathrm{d}\theta=... ??$$</p>
<p>Thank you in advance</p>
| Michael L. | 153,693 | <p>First, we note that for $\mathrm{supp}(\phi)\subset \Omega = B_{r_2}(0)\setminus B_{r_1}(0)$ for some $0 < r_1 < r_2$, \begin{align*} I_t(\phi) &:= \int_0^{2\pi}\int_{r_1}^{r_2} t\sin(t\lvert r^2-1\rvert)\phi(r, \theta)\,r\,\mathrm{d}r\,\mathrm{d}\theta \\ &= \int_{r_1}^{r_2} t\sin(t\lvert r^2-1\rvert)\left[\int_0^{2\pi} \phi(r, \theta)\,\mathrm{d}\theta\right]r\,\mathrm{d}r \\ &= \Im\left[\int_{r_1}^{r_2} te^{it\lvert r^2-1\rvert}\left[\int_0^{2\pi} \phi(r, \theta)\,\mathrm{d}\theta\right]r\,\mathrm{d}r\right] \end{align*} We let $\Phi(r) := \int_0^{2\pi} \phi(r, \theta)\,\mathrm{d}\theta$. First, we note that for $r_i > 1$, $$\int_1^{r_i} te^{it\lvert r^2-1\rvert}\Phi(r)\,r\,\mathrm{d}r = \int_1^{r_i} te^{it(r^2-1)}\Phi(r)\,r\,\mathrm{d}r = \frac{t}{2}\int_0^{\sqrt{1+r_i}} e^{itu}\Phi\left(\sqrt{1+u}\right)\,\mathrm{d}u$$ for $u = r^2-1$. Recall from the proof of the <a href="https://en.wikipedia.org/wiki/Riemann%E2%80%93Lebesgue_lemma" rel="nofollow noreferrer">Riemann-Lebesgue lemma</a> that $$\left\lvert\int_0^{\sqrt{1+r_i}} e^{itu}\Phi\left(\sqrt{1+u}\right)\,\mathrm{d}u\right\rvert\leq \frac{1}{t}\int_0^{\sqrt{1+r_i}} \left\lvert\frac{\Phi'(\sqrt{1+u})}{2\sqrt{1+u}}\right\rvert\,\mathrm{d}u$$ and $\Phi'(r) = \int_0^{2\pi} \partial_r\phi(r, \theta)\,\mathrm{d}\theta$, so \begin{align*} \left\lvert\int_1^{r_i} te^{it\lvert r^2-1\rvert}\Phi(r)\,r\,\mathrm{d}r\right\rvert &\leq \frac{1}{2}\int_0^{\sqrt{1+r_i}} \frac{1}{2\sqrt{1+u}}\left\lvert\int_0^{2\pi} \partial_r\phi(\sqrt{1+u}, \theta)\,\mathrm{d}\theta\right\rvert\,\mathrm{d}u \\ &= \frac{1}{2}\int_1^{r_i} \left\lvert\int_0^{2\pi} \partial_r\phi(r, \theta)\,\mathrm{d}\theta\right\rvert\,\mathrm{d}r \end{align*} We can show a similar bound for the case $r_i\leq 1$, which allows us to write \begin{align*} \lvert I_t(\phi)\rvert &\leq \frac{1}{2}\int_{r_1}^{r_2} \left\lvert\int_0^{2\pi} \partial_r\phi(r, \theta)\,\mathrm{d}\theta\right\rvert\,\mathrm{d}r \\ &\leq 
\frac{1}{2}\int_{r_1}^{r_2}\int_0^{2\pi} \lvert \partial_r\phi(r, \theta)\rvert\,\mathrm{d}r\,\mathrm{d}\theta \\ &\leq \pi(r_2-r_1)\|\phi\|_{C^1} \end{align*} for all $t$, which implies $\lvert T(\phi)\rvert\leq \pi(r_2-r_1)\|\phi\|_{C^1}$. Therefore, $T$ is a distribution and has order at most 1. I'm not sure how to improve this to order 0.</p>
|
42,192 | <p>In the search for a Machian formulation of mechanics I find the following problem. In Machian mechanics absolute space does not exist, and the only real entities are the relative distances between the particles. As a consequence, the configuration space of an N-particle system is the set of the distances on a set of N elements. Actually these distances are usually required to be isometrically embeddable in $\mathbb{R}^3$. But if absolute space does not exist, this requirement appears to be inappropriate. The natural generalization is therefore to admit any possible distance as physically acceptable, and to find a preferred way to derive a 3-geometry, possibly non-flat, from a generic distance. </p>
<p>To be more specific, consider the following simple example. Let A be a metric space with 3 elements. There are infinitely many two-dimensional Riemannian manifolds (surfaces) in which A can be isometrically embedded. There is however a preferred embedding, namely the embedding into a plane. The existence of a preferred embedding defines a preferred value for the angles between the geodesics joining the points, which in this case are simply the angles of the triangle defined by the distances between the points. </p>
<p>Suppose now that A has four points. In general this metric space cannot be isometrically embedded in a 2-plane. The problem therefore is the following: is there a preferred isometric embedding of this metric space in a 2-surface, or equivalently, is there a preferred way of defining the values of the angles between the geodesics?</p>
<p>In a more formal way, the problem is the following: is there a preferred isometric embedding of a finite metric space in a Riemannian manifold of given dimension?</p>
| Anton Petrunin | 1,441 | <p>Your question is not well stated.
In particular, I did not understand why the embedding into a plane is the "preferred embedding".</p>
<p>Here are some associations...</p>
<p><strong>4-point case.</strong></p>
<p>A generic 4-point metric space can be isometrically embedded into two different model planes (i.e. simply connected surfaces of constant curvature $K$, which is either a sphere, the Euclidean plane or the Lobachevsky plane depending on the sign of $K$).</p>
<p>Thus you have two values of curvature $K_1\le K_2$ associated to (almost) any 4-point metric space.
In this case the metric space can be isometrically embedded into a model 3-space of any curvature $K$ with $K_1\leq K\leq K_2$. </p>
<p>In fact for any 4-point metric space $M$ there is a subinterval $\mathbb I_M$ of $[-\infty,\infty)$ such that $M$ can be isometrically embedded into a model 3-space of any curvature $K\in \mathbb I_M$.
(We assume that model space of curvature $-\infty$ is an $\mathbb R$-tree.)</p>
<p>Nearly all this was discovered by A. Wald in 1936 or so.</p>
|
382,295 | <p>Please help. I haven't found any text on how to prove this sort of problem by induction: </p>
<p>$$
\lim_{n\to +\infty}\left(1 + \frac{1}{4} + \frac{1}{4^2} + \cdots+ \frac{1}{4^n}\right) = \frac{4}{3}
$$</p>
<p>I can't quite get how one can prove such a thing. I can prove basic divisibility "inductions" but not this. Thanks.</p>
| Michael Hardy | 11,667 | <p>Archimedes did this (Thomas Heath's translation of <em>Quadrature of the Parabola</em>, Proposition 23). He showed that (expressed in modern notation)
$$
\underbrace{1+\frac14+\frac1{4^2}+\cdots+\frac1{4^n}}+\underbrace{\frac13\left(\frac1{4^n}\right)}
$$
<b>does not depend on the number of terms.</b> All you have to do is show that when you go to the next term, the sum doesn't change. When you go to the next term, you get
$$
\underbrace{1+\frac14+\frac1{4^2}+\cdots+\frac1{4^n}+\frac1{4^{n+1}}}+\underbrace{\frac13\left(\frac1{4^{n+1}}\right)}.
$$
But you have changed
$$
\frac13\left(\frac1{4^n}\right)
$$
to
$$
\frac1{4^{n+1}}+\frac13\cdot\frac1{4^{n+1}}.
$$
The problem then is just to show that those two things are equal. If you multiply both by $4^n$, then one of them becomes
$$
\frac13
$$
and the other becomes
$$
\frac14+\frac13\cdot\frac14.
$$
It's easy to show that those are equal.</p>
<p>Since the sum does not depend on the number of terms, the sum is just what you get when $n=0$:
$$
1+\frac13.
$$</p>
<p>The method used by Archimedes perhaps cannot be regarded as an instance of mathematical induction, but it suggests at once how to write it as a proof by induction.</p>
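<p>Not part of Archimedes' argument, but as a quick sanity check of the invariance, exact rational arithmetic confirms that the bracketed quantity equals $4/3$ for every $n$:</p>

```python
from fractions import Fraction

def invariant(n):
    """1 + 1/4 + ... + 1/4^n plus the correction term (1/3)(1/4^n)."""
    s = sum(Fraction(1, 4**k) for k in range(n + 1))
    return s + Fraction(1, 3) * Fraction(1, 4**n)

# The quantity does not depend on n: it is always 1 + 1/3 = 4/3.
assert all(invariant(n) == Fraction(4, 3) for n in range(20))
```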
<p><b>Later note:</b> Thomas Heath translates Proposition 23 of Archimedes' <em>Quadrature of the Parabola</em> as follows:</p>
<p><em>Given a series of areas $A,B,C,D,\cdots Z$ of which $A$ is the greatest, and each is equal to four times the next in order, then</em>
$$
A+B+C+\cdots+Z + \frac13 Z = \frac43 A.
$$</p>
<p><b>Still later note:</b> Here's what this has to do with parabolas: Given a secant line to a parabola, draw a line through its midpoint parallel to the axis, <a href="http://upload.wikimedia.org/wikipedia/commons/3/3e/Parabola_and_inscribed_triangle.svg" rel="nofollow noreferrer">thus</a>.</p>
<p>Then draw the two additional resulting secant lines, making a triangle, and construct triangles on top of those two secant lines, in the same way, <a href="http://upload.wikimedia.org/wikipedia/commons/d/d7/Parabolic_Segment_Dissection.svg" rel="nofollow noreferrer">thus</a>.</p>
<p>Archimedes showed that the two small triangles have a total area that is $1/4$ of that of the big triangle. Then notice that the process can be continued forever, getting the area bounded by the parabola and the first secant line equal to $4/3$ the area of that big triangle.</p>
|
2,608,455 | <p>Could someone please help me with how to calculate the sum of the $$\sum_{n=1}^{\infty}\frac{1}{4n^{2}-1}$$ infinite series? I see that $$\lim_{n\rightarrow\infty}\frac{1}{4n^{2}-1}=0$$ so the necessary condition for convergence is satisfied. But how do I calculate the sum? Thank you.</p>
| Dave | 334,366 | <p>$$\frac{1}{4n^2-1}=\frac{1}{2}\left(\frac{1}{2n-1}-\frac{1}{2n+1}\right)$$</p>
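<p>To finish from this hint (my own completion, not part of the answer): the sum telescopes, so the $N$-th partial sum is $\frac12\left(1-\frac{1}{2N+1}\right)\to\frac12$. A quick check with exact arithmetic:</p>

```python
from fractions import Fraction

def partial_sum(N):
    return sum(Fraction(1, 4*n*n - 1) for n in range(1, N + 1))

# Telescoping gives S_N = (1/2)(1 - 1/(2N+1)), which tends to 1/2.
for N in (1, 5, 50, 500):
    assert partial_sum(N) == Fraction(1, 2) * (1 - Fraction(1, 2*N + 1))
```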
|
2,160,671 | <p>I got stuck with this problem.</p>
<p>\begin{matrix}
1 & 1 & 1 & 1\\
0 & 1 & 1 & 0\\
0 & 0 & 1 & 1\\
\end{matrix}</p>
<p>Consider the $3\times 4$ matrix $\bf A$ (above). Do the columns of $\bf A$ span $\mathbb R^3$? </p>
<p>Prove your answer. Also, find a $4\times 3$ matrix $\bf B$ such that $\bf AB = I_3$.</p>
<p>--</p>
<p>I know that the columns of $\bf A$ span $\mathbb R^3$ as there are more columns than rows. But I cannot understand how to find matrix $\bf B$ because I cannot implement a "super-augmented" matrix and do Gauss-Jordan elimination. It looks like I need to do something with the 4th column of $\bf A$ and the 4th row of $\bf B$. What do you think?</p>
<p>Thanks!</p>
| mathreadler | 213,607 | <ol>
<li>One way to do this is by <strong>Kronecker products</strong>, where you write matrix multiplication as a matrix and then solve an equation system.</li>
<li>Another is by using some Pseudo-inverse, for example <strong>Moore-Penrose pseudoinverse</strong>. For this example you can calculate it like:</li>
</ol>
<p>$${\bf A}^{+}={\bf A}^T({\bf AA}^T)^{-1}$$</p>
<ol start="3">
<li>Use <strong>Singular Value Decomposition</strong> (SVD) : $$\bf A = U\Sigma V^*$$
If $\bf AB=I_3$, what must hold for $\bf B$ in terms of $\bf U,\Sigma,V$?</li>
</ol>
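<p>For option 2, a small self-contained sketch (plain Python with exact rationals rather than a numerics library). Note the right-inverse form is ${\bf B}={\bf A}^T({\bf AA}^T)^{-1}$ so that the shapes match: $\bf B$ is $4\times 3$ and $\bf AB=I_3$:</p>

```python
from fractions import Fraction

A = [[1, 1, 1, 1],
     [0, 1, 1, 0],
     [0, 0, 1, 1]]

def matmul(X, Y):
    return [[sum(Fraction(X[i][k]) * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(row) for row in zip(*X)]

def inv3(M):
    # Inverse of a 3x3 matrix via the adjugate formula.
    a, b, c = M[0]; d, e, f = M[1]; g, h, i = M[2]
    det = a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)
    adj = [[e*i - f*h, c*h - b*i, b*f - c*e],
           [f*g - d*i, a*i - c*g, c*d - a*f],
           [d*h - e*g, b*g - a*h, a*e - b*d]]
    return [[Fraction(x, det) for x in row] for row in adj]

# Right inverse B = A^T (A A^T)^{-1}; then A B = I_3.
B = matmul(transpose(A), inv3(matmul(A, transpose(A))))
assert matmul(A, B) == [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
```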
|
3,043,319 | <p>I am just beginning with functions whose graphs live in three dimensions, and I don't really understand how to begin with this problem.
Function <span class="math-container">$$f(x,y)= (x^2+4y^2-5)(x-1)$$</span>
Find where <span class="math-container">$$f(x,y)=0; f(x,y)>0; f(x,y)<0.$$</span></p>
<p>To find where <span class="math-container">$$f(x,y)=0$$</span> I already have the ellipse <span class="math-container">$$\frac{x^2}{5} +\frac{y^2}{5/4}=1$$</span> and the straight line <span class="math-container">$$x=1,$$</span> but I don't know how to solve the other two. </p>
<p>Thank you very much.</p>
| Sambo | 454,855 | <p>Showing that these properties hold is a straightforward application of the definitions, with some elementary properties of bijections. So, I suspect what you are having trouble with is formulating a proper proof. I will show reflexivity as an example.</p>
<p>We want to show that for any set <span class="math-container">$A$</span>, we have <span class="math-container">$A \sim A$</span>. Because of how we defined <span class="math-container">$\sim$</span>, this means that for any set <span class="math-container">$A$</span>, we need to show that there exists a bijection <span class="math-container">$f : A \rightarrow A$</span>. How do we show this? We must construct such a function explicitly. Since we know nothing of the contents of <span class="math-container">$A$</span>, there is really only one choice: the identity function.</p>
<p>Let <span class="math-container">$f : A \rightarrow A$</span> be the function such that <span class="math-container">$f(a) = a$</span> for every <span class="math-container">$a \in A$</span>. We must check that <span class="math-container">$f$</span> is bijective. First, <span class="math-container">$f$</span> is injective: if <span class="math-container">$f(a) = f(b)$</span>, then <span class="math-container">$a = f(a) = f(b) = b$</span>, so <span class="math-container">$a = b$</span>. Next, <span class="math-container">$f$</span> is surjective: if <span class="math-container">$a \in A$</span>, then there exists an element <span class="math-container">$x \in A$</span> such that <span class="math-container">$f(x) = a$</span>: namely, <span class="math-container">$x = a$</span>.</p>
<p>So, we have shown that for any set <span class="math-container">$A$</span>, we can construct a bijection <span class="math-container">$f : A \rightarrow A$</span>, which shows that <span class="math-container">$A \sim A$</span>. This shows the reflexivity of <span class="math-container">$\sim$</span>. Others have provided hints for the other two properties.</p>
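<p>As a toy illustration of the two checks above (my own example on a small finite set, not part of the proof):</p>

```python
A = {"a", "b", "c"}
f = {a: a for a in A}  # the identity map on A

# Injective: f(x) = f(y) forces x = y.
assert all(x == y for x in A for y in A if f[x] == f[y])
# Surjective: every element of A is hit by f.
assert {f[x] for x in A} == A
```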
|
7,725 | <p>Basically the textbooks in my country are awful, so I searched on the web for a precalculus book and found this one: <a href="http://www.stitz-zeager.com/szprecalculus07042013.pdf">http://www.stitz-zeager.com/szprecalculus07042013.pdf</a></p>
<p>However, it does not cover convergence, limits, etc., and those topics were only briefly mentioned in my old textbooks. So what I am asking is: are these topics a prerequisite for calculus, or are they a part of the subject?</p>
| Chris C | 201 | <p>It depends on how Calculus is treated. </p>
<p>At my university and others I've attended (US), the concept of the limit is usually treated early in the first Calculus class in order to talk about continuity and derivatives. The core idea of sequences is brushed over in the introductory courses (except in the advanced courses). </p>
<p>Sometimes (and the department is debating this here) limits and sequences are taught in precalculus, allowing them to be reinforced in calculus. But in general, they're not expected to be well known before the first calculus course. </p>
<p>Personally, they're good to look into algebraically before calculus, but they involve topological ideas more suited to calculus. </p>
|
190,392 | <p>How can I prove that a torus and a ball have the same cardinality?
The proof is supposed to use the Cantor–Bernstein theorem.
I know that they are subsets of $\mathbb{R}^{3}$, so I can write $\leq \aleph$, but I do not know how to prove $\geqslant \aleph$.
Thanks</p>
| Ilmari Karonen | 9,602 | <p>I'm assuming that you're talking about solid balls and tori in $\mathbb R^3$.</p>
<p>To prove that these sets are equipollent using the <a href="http://en.wikipedia.org/wiki/Cantor%E2%80%93Bernstein%E2%80%93Schroeder_theorem" rel="nofollow">Cantor–Bernstein–Schröder theorem</a>, all you need to do is to show that:</p>
<ol>
<li>there exists an injection $f$ from the ball $B$ into the torus $T$, and</li>
<li>there exists an injection $g$ from the torus $T$ into the ball $B$.</li>
</ol>
<p>The CBS theorem then says that there exists a bijection between $B$ and $T$.</p>
<p>If you're given explicit geometric definitions of $B$ and $T$, you can use these directly to construct invertible <a href="http://en.wikipedia.org/wiki/Affine_transformation" rel="nofollow">affine maps</a> $f$ and $g$ such that $f(B) \subset T$ and $g(T) \subset B$.
Another, more general way to do this is to note that both of these subsets of $\mathbb R^3$ are <a href="http://en.wikipedia.org/wiki/Bounded_set" rel="nofollow">bounded</a> and have non-empty <a href="http://en.wikipedia.org/wiki/Interior_%28topology%29" rel="nofollow">interior</a>. Thus, for each of them, there exists <a href="http://en.wikipedia.org/wiki/Open_ball" rel="nofollow">open balls</a> $I$ and $O$ such that $I_B \subset B \subset O_B$ and $I_T \subset T \subset O_T$. Then choosing $f$ and $g$ (again as affine maps) such that $f(O_B) = I_T$ and $g(O_T) = I_B$ will do the trick.</p>
|
4,094,333 | <blockquote>
<p>Suppose <span class="math-container">$a_1,a_2>0$</span> and
<span class="math-container">$a_{n+2}=2+\dfrac{1}{a_{n+1}^2}+\dfrac{1}{a_n^2}(n\ge 1)$</span>. Prove
<span class="math-container">$\{a_n\}$</span> converges.</p>
</blockquote>
<p>First, note that <span class="math-container">$a_{n+2}\ge 2$</span> for all <span class="math-container">$n\ge 1$</span>, so <span class="math-container">$\{a_n\}$</span> is bounded for <span class="math-container">$n\ge 3$</span>: <span class="math-container">$$2 \le a_{n+2}\le 2+\frac{1}{2^2}+\frac{1}{2^2}=\frac{5}{2},~~~~~~ \forall n \ge 3.$$</span></p>
<p>But how to go on?</p>
| Anon | 307,340 | <p>Let's use <strong>fixed point iteration</strong>s.</p>
<p><strong>Observe</strong>:
Substituting <span class="math-container">$x_n > 2$</span> gives us an upper bound on <span class="math-container">$x_{n}$</span> (<span class="math-container">$n \geq 3$</span>), and substituting this upper bound back gives us a 'closer' lower bound, and so on. I hope to show that these lower and upper bounds converge, which then says that <span class="math-container">$x_n$</span> must eventually converge to that point.</p>
<p>Define <span class="math-container">$g(x) := 2(1 + \frac{1}{x^2})$</span>. Then the sequence we consider is <span class="math-container">$t_{n+1} = g(t_n)$</span>. Using the fixed point iteration theorem, we have</p>
<p>(1) <span class="math-container">$t \in [2,3] \Rightarrow g(t) \in [2,3]$</span>, and <span class="math-container">$g$</span> is continuous.</p>
<p>(2) <span class="math-container">$|g'(t)| = \frac{4}{t^3} < 1$</span></p>
<p>Hence <span class="math-container">$\exists \alpha \in (2,3)$</span> s.t. <span class="math-container">$\forall t_0, \; \in (2,3)$</span> <span class="math-container">$ t_n = g^{(n)}(t_0) \rightarrow \alpha$</span>.</p>
<p><strong>Claim 1</strong>: Let <span class="math-container">$N_0, t > 0$</span>.</p>
<ol>
<li><p><span class="math-container">$(\forall n \geq N_0, \; x_n > t) \Rightarrow (\forall n \geq N_0 + 2, \; x_n < g(t))$</span>.</p>
</li>
<li><p><span class="math-container">$(\forall n \geq N_0, \; x_n < t) \Rightarrow (\forall n \geq N_0 + 2, \; x_n > g(t))$</span></p>
</li>
</ol>
<p><strong>Proof</strong>. Trivial. <span class="math-container">$\square$</span></p>
<p><strong>Claim 2</strong>: <span class="math-container">$\forall n \geq 5, \; x_n \in (2,3)$</span>.</p>
<p><strong>Proof</strong>. <span class="math-container">$(\forall n > 0, \; x_n > 0) \Rightarrow (\forall n > 0, \; x_{n+2} > 2) \Rightarrow (\forall n > 0, \; x_{n+4} < \frac{5}{2} < 3)$</span> <span class="math-container">$\square$</span></p>
<p><strong>Claim 3.</strong> Let <span class="math-container">$S = \lim \sup x_n$</span> and <span class="math-container">$I = \lim \inf x_n$</span>. Then
<span class="math-container">$S \leq \alpha \land I \geq \alpha$</span>.</p>
<p><strong>Proof</strong>. Fix <span class="math-container">$t \in \mathbb{N}$</span>. As <span class="math-container">$g^{(n)}(x_5) \rightarrow \alpha$</span>, <span class="math-container">$\exists N'$</span> s.t. <span class="math-container">$n \geq \max(N', t) \Rightarrow g^{(n)}(x_5) \in (\alpha - \frac{1}{t}, \alpha + \frac{1}{t})$</span>.</p>
<p>Then by Claim 1 and 2, <span class="math-container">$\sup_{m \geq (5 + 2\max(t,N')) } x_m \le \alpha + \frac{1}{t} \Rightarrow \lim_{t \rightarrow \infty} \sup_{m \geq (5 + 2\max(t,N')) } x_m \le \lim_{t \rightarrow \infty} (\alpha + \frac{1}{t}) \Rightarrow S \le \alpha$</span>.</p>
<p>Similarly, show <span class="math-container">$I \ge \alpha$</span>. <span class="math-container">$\square$</span></p>
<p><strong>Corollary</strong>. Thus <span class="math-container">$\lim_n x_n = S = I = \alpha$</span>.</p>
<p><strong>Proof.</strong> Trivial. <span class="math-container">$\square$</span></p>
<p><strong>Edit 2.</strong> Thanks to @Martin R for helping me formalize the proof.</p>
<p><strong>Edit 1.</strong> As @NN2's answer notes, this is actually an attracting stable fixed point, which I've proved using fixed point iterations. (Alternatively, you could just use the theorem (which I didn't know) - check NN2's answer).</p>
|
4,094,333 | <blockquote>
<p>Suppose <span class="math-container">$a_1,a_2>0$</span> and
<span class="math-container">$a_{n+2}=2+\dfrac{1}{a_{n+1}^2}+\dfrac{1}{a_n^2}(n\ge 1)$</span>. Prove
<span class="math-container">$\{a_n\}$</span> converges.</p>
</blockquote>
<p>First, note that <span class="math-container">$a_{n+2}\ge 2$</span> for all <span class="math-container">$n\ge 1$</span>, so <span class="math-container">$\{a_n\}$</span> is bounded for <span class="math-container">$n\ge 3$</span>: <span class="math-container">$$2 \le a_{n+2}\le 2+\frac{1}{2^2}+\frac{1}{2^2}=\frac{5}{2},~~~~~~ \forall n \ge 3.$$</span></p>
<p>But how to go on?</p>
| Martin R | 42,969 | <p>Another approach: Let <span class="math-container">$L \approx 2.3593$</span> be the unique solution of <span class="math-container">$L = 2 + 2/L^2$</span> in the interval <span class="math-container">$[2, 2.5]$</span>, and <span class="math-container">$b_n = a_n - L$</span>. We want to show that <span class="math-container">$b_n \to 0$</span>.</p>
<p>The recursion formula becomes
<span class="math-container">$$
b_{n+2} = \frac{1}{(L+b_{n+1})^2} + \frac{1}{(L+b_{n})^2} - \frac{2}{L^2} \, .
$$</span></p>
<p>We estimate
<span class="math-container">$$
\left| \frac{1}{(L+b_{n})^2} - \frac{1}{L^2}\right| = \frac{|b_n|(2L+b_n)}{L^2(L+b_n)^2} \le \frac{5}{16} |b_n|
$$</span>
for <span class="math-container">$n \ge 3$</span>, so that
<span class="math-container">$$
|b_{n+2} | \le \frac{5}{16} (|b_{n+1}| + |b_n|) \, .
$$</span>
Then
<span class="math-container">$$
2 |b_{n+2} | + |b_{n+1} | \le \frac{13}{8}|b_{n+1} | + \frac{5}{8}|b_{n}|
\le \frac{13}{16} \left(2 |b_{n+1} | + |b_{n} | \right) \, .
$$</span>
This shows that <span class="math-container">$2 |b_{n+1} | + |b_{n} |$</span> decreases geometrically to zero. It follows that <span class="math-container">$b_n \to 0$</span>, as desired.</p>
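<p>A numerical sketch (starting values chosen arbitrarily, not part of the proof) confirming that the recursion settles at <span class="math-container">$L\approx 2.3593$</span> from various positive starting points:</p>

```python
def iterate(a1, a2, n):
    seq = [a1, a2]
    for _ in range(n):
        a_prev, a_curr = seq[-2], seq[-1]
        seq.append(2 + 1/a_curr**2 + 1/a_prev**2)
    return seq

# L is the root of L^3 - 2L^2 - 2 = 0, roughly 2.3593.
L = 2.3593
for start in [(0.5, 10.0), (1.0, 1.0), (3.0, 0.1)]:
    assert abs(iterate(*start, 60)[-1] - L) < 1e-3
```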
|
3,200,431 | <p>We can say that any field <span class="math-container">$\mathbb{K}$</span> is a <span class="math-container">$1$</span>-dimensional vector space over itself: <span class="math-container">$\mathbb{K}_{\mathbb{K}}$</span>. So any vector of any other finite-dimensional vector space <span class="math-container">$V_{\mathbb{K}}$</span>, after choosing some basis, can be represented as an element of the isomorphic space <span class="math-container">$\mathbb{K}_{\mathbb{K}}^{n} = \prod_{i=1}^n\mathbb{K}_i$</span>, where <span class="math-container">$n$</span> is the dimension of <span class="math-container">$V_{\mathbb{K}}$</span>. But we can define operations on elements of <span class="math-container">$\mathbb{K}_{\mathbb{K}}^{n}$</span> componentwise, as in the direct group product: <span class="math-container">$(x_1,\dots,x_n) + (x'_1, \dots, x'_n) = (x_1 + x'_1, \dots ,x_n + x'_n)$</span>, and similarly for the second field operation: <span class="math-container">$(x_1,\dots,x_n) \times (x'_1, \dots, x'_n) = (x_1 \times x'_1, \dots ,x_n \times x'_n)$</span>.</p>
<p>But usually we do this only with the one field operation $+$. Why? </p>
| elidiot | 574,565 | <p>Because if <span class="math-container">$\mathbb K$</span> is a field, <span class="math-container">$\mathbb K^n$</span> remains a <span class="math-container">$\mathbb K$</span>-vector space but may not be a field (with the same addition). For example, <span class="math-container">$\mathbb R$</span> is a field, but <span class="math-container">$\mathbb R^n$</span> can't have a field structure (with the classical addition) for <span class="math-container">$n=3$</span> and <span class="math-container">$n\geq 5$</span>. Even when <span class="math-container">$n=4$</span> the field is not commutative. See for example the <a href="https://en.wikipedia.org/wiki/Frobenius_theorem_%28real_division_algebras%29" rel="nofollow noreferrer">Wiki page</a> on division <span class="math-container">$\mathbb R$</span>-algebras</p>
|
3,044,390 | <p>As we know, each random variable is responsible for associating random events with probability values. These random events belong to a specific population, and the random variable represents that population. In other words, random variables are representatives of their populations. By virtue of this, they have their own distributions.</p>
<blockquote>
<p>What I am wondering about is summing an adequate number of random variables</p>
</blockquote>
<p>According to the resource that I read two days ago, by summing a large number of random variables we can obtain a new random variable with a <code>normal distribution</code>. Moreover, this is called the <code>central limit theorem</code>. </p>
<p>Actually, I investigated the central limit theorem, and I could not construct a relationship between the theorem itself and summing random variables. The theorem is about the fact that a large number of data samples produces a normal distribution. </p>
<blockquote>
<p>Can anyone explain if there is a relationship between summing random variables and the central limit theorem?</p>
</blockquote>
| J.G. | 56,861 | <p>The classical central limit theorem states that, given a large sample of independent values <span class="math-container">$X_n$</span> from the same finite-<span class="math-container">$\mu$</span>-and-<span class="math-container">$\sigma$</span> distribution, <span class="math-container">$\frac{1}{\sqrt{n}}\sum_{i=1}^n\frac{X_i-\mu}{\sigma}\approx N(0,\,1)$</span>. This approximation is equivalent to the sample mean being <span class="math-container">$N(\mu,\,\frac{\sigma^2}{n})$</span>, but the former statement is preferred so the distribution converges.</p>
<p>Note the Normal approximation we obtain is of the sample mean (give or take your preferred linear transformation thereof for the discussion), not of the distribution being sampled. One common misconception is that this theorem implies Normal distributions themselves are prevalent "in the real world".</p>
<p>Note also that if the sampled distribution doesn't have a finite mean and variance, this all falls apart. And in the famous example of a Cauchy distribution, the sample mean actually still has the same Cauchy distribution. (The sample median, on the other hand, is approximately Normal.)</p>
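<p>A simulation sketch of the statement above (the Uniform(0,1) sampled distribution and the sample sizes are my own arbitrary choices): the standardized sums should look roughly <span class="math-container">$N(0,1)$</span>.</p>

```python
import random
import statistics

random.seed(0)

def standardized_mean(n, trials=2000):
    """Draw `trials` samples of (1/sqrt(n)) * sum((X_i - mu)/sigma) for Uniform(0,1) draws."""
    mu, sigma = 0.5, (1/12) ** 0.5  # mean and sd of Uniform(0,1)
    vals = []
    for _ in range(trials):
        xs = [random.random() for _ in range(n)]
        vals.append(sum((x - mu) / sigma for x in xs) / n ** 0.5)
    return vals

vals = standardized_mean(200)
# Mean ~ 0 and sd ~ 1, as the CLT predicts for the standardized sum.
assert abs(statistics.mean(vals)) < 0.15
assert abs(statistics.stdev(vals) - 1) < 0.15
```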
|
1,357,148 | <p>Stokes' theorem generalizes the fundamental theorem of calculus (part 2) using differential forms. Is there a known generalization of part 1?</p>
<h2>edit</h2>
<p>In case anyone is unaware, The fundamental theorem of calculus part 1 states that the derivative of the map $t \mapsto \int_{a}^{t} f(s) ds$ is equal to
$f(t)$. From this, it easily follows that if $F' = f$, then $\int_{a}^{b} f(x) dx = F(b) - F(a)$ (part 2).</p>
<p>Stokes' theorem ($\int_{\Sigma} d \omega = \oint_{\partial \Sigma} \omega$) generalizes part 2, and is analogous to it in that in both cases one does a
calculation on the boundary.</p>
<p>But is there an analogous version of part 1? This question comes from my <a href="https://math.stackexchange.com/questions/1354805/definition-of-hamiltonian-system-through-integral-invariant">previous question</a> in which I did such a calculation.</p>
<p>In the FTC part 1, we consider a parametrized set of intervals $[0,t]_{t \in R}$. So the generalization ought to consist of a set of (hyper-)surfaces $\{\Sigma_{t}\}_{t \in R}$ in $R^N$. And thus, we wish to calculate the derivative of the mapping $t \mapsto \int_{\Sigma_t} \omega$.</p>
<p>Suppose there exists a smooth $\phi (r,s): U\times R \to R^N$ ($U$ an open subset of $R^{N-1}$) such that the restriction of $\phi$ to $[0,1]\times[0,t]$ parametrizes $\Sigma_{t}$. Then for fixed $s$, the map $r \mapsto \phi(r,s)$ parametrizes a subsurface $\sigma_{s}$ of $\Sigma_{s}$ whose dimension is one less ($\sigma_{s} \subset \partial \Sigma_{s}$). I believe that the derivative of the map $t \mapsto \int_{\Sigma_{t}} \omega$ is equal to $\oint_{\sigma_t} \omega_{t}$, where $\omega_{t}$ is some differential form that represents $\omega$ "evaluated at" $\sigma_{t}$.</p>
<p>Am I in the right direction?</p>
| Bob Terrell | 15,643 | <p>Sorry to have just seen this question 7 years late. An answer is in the Poincar'e Lemma. See Spivak's Calculus on Manifolds for example. I wrote an exposition for the vector field version in 3 dimensions that is on my web page bterrell.net.</p>
|
3,566,997 | <p>When I go through examples of calculating the intersection of two planes, there seems to be a convention of choosing an arbitrary point in order to solve the linear equations in question and get a particular point on the intersection line.</p>
<p>Let's take an example from a <a href="https://math.stackexchange.com/q/475953/597067">previously-asked question</a>, where the given equations were:</p>
<blockquote>
<p><span class="math-container">$x + 2y + z - 1 = 0$</span></p>
<p><span class="math-container">$2x + 3y - 2z + 2 = 0$</span></p>
</blockquote>
<p>And the relevant part in the <a href="https://math.stackexchange.com/a/475962/597067">second-most voted answer</a> goes as follows:</p>
<blockquote>
<p>Next, we need to find a particular point on the line. We can try <span class="math-container">$y=0$</span>
and solve the resulting system of linear
equations:<span class="math-container">$$\begin{align}x+z-1&=&0\\2x-2z+2&=&0\end{align}$$</span> giving
<span class="math-container">$x=0, z=1$</span></p>
</blockquote>
<p><strong>My question:</strong> </p>
<p>How does one know what the correct point to choose is, and how does one validate that the chosen point is correct?<br>
Also, if the chosen point is wrong, how does one successfully guess the next point? </p>
<p>I've found <a href="https://math.stackexchange.com/a/27384/597067">another answer</a> that seems to be very relevant, but I can't explain it:</p>
<blockquote>
<p>Sometimes the line of intersection happens to be parallel to the z=0
plane. In that case you could try y=0 or x=0. (One of these is sure to
work.)</p>
</blockquote>
<p>I'm a beginner, so that's a fundamental thing, probably trivial, I don't get: Why is that true?</p>
| Peter Szilas | 408,605 | <p>Option:</p>
<p><span class="math-container">$a^y=x$</span>; </p>
<p>Take <span class="math-container">$\log_e$</span> of both sides: </p>
<p><span class="math-container">$y \log a=\log x$</span>;</p>
<p>Differentiate with respect to <span class="math-container">$x$</span>:</p>
<p><span class="math-container">$y' \log a=\dfrac{1}{x}$</span>;</p>
<p><span class="math-container">$y'=\dfrac{1}{x \log a }$</span>;</p>
|
951,104 | <p>Problem :
If $$T_r =\frac{r}{4r^4+1}$$ then the value of $$\sum^{\infty}_{r=1} T_r$$ is ? </p>
<p>How do I start such a problem? I am not getting any clue on this; please suggest. Thanks.</p>
| Mathronaut | 53,265 | <p>Hint:$$4r^4+1=(2r^2+1+2r)\cdot(2r^2+1-2r)$$</p>
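<p>One way to finish from this hint (my own completion): the Sophie Germain factorization gives <span class="math-container">$\frac{r}{4r^4+1}=\frac14\left(\frac{1}{2r^2-2r+1}-\frac{1}{2r^2+2r+1}\right)$</span>, and since <span class="math-container">$2(r+1)^2-2(r+1)+1=2r^2+2r+1$</span> the series telescopes to <span class="math-container">$\frac14$</span>. A check with exact arithmetic:</p>

```python
from fractions import Fraction

def T(r):
    return Fraction(r, 4 * r**4 + 1)

# Partial-fraction form suggested by the factorization.
def T_split(r):
    return Fraction(1, 4) * (Fraction(1, 2*r*r - 2*r + 1) - Fraction(1, 2*r*r + 2*r + 1))

for r in range(1, 20):
    assert T(r) == T_split(r)

# Telescoping: S_N = (1/4)(1 - 1/(2N^2 + 2N + 1)) -> 1/4.
S = sum(T(r) for r in range(1, 101))
assert S == Fraction(1, 4) * (1 - Fraction(1, 2*100*100 + 2*100 + 1))
```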
|
2,450,838 | <p>So I am hoping to show that $f(x,y)=\dfrac{x}{y}$ is continuous when $x>0$ and $y>0$.</p>
<p>I am not sure how to approach this problem. My idea was that taking $$\dfrac{\partial f}{\partial x}=\dfrac{1}{y}$$ and $$\dfrac{\partial f}{\partial y}=\dfrac{-x}{y^2}$$</p>
<p>My logic was that since both partials exist and are defined when $x>0$ and $y>0$, the function should be continuous, but I then discovered that existence of partial derivatives does <strong>not</strong> imply continuity.
I am curious if there are any clever ways to show continuity of this function? Note I am not concerned with the case that $x=0$ or $y=0$</p>
| zwim | 399,263 | <p>For $f(x)$ and $g(y)$ continuous, at $(x_0,y_0)$ where $g(y_0)\neq0$ we have:</p>
<ul>
<li>$\forall \varepsilon>0,\exists \delta_1>0\mid |x-x_0|<\delta_1\implies |f(x)-f(x_0)|<\varepsilon$</li>
<li>$\forall \varepsilon>0,\exists \delta_2>0\mid |y-y_0|<\delta_2\implies |g(y)-g(y_0)|<\varepsilon$</li>
</ul>
<p>Since $g(y_0)\neq 0$ it is possible to choose $0<\varepsilon<\frac 12 |g(y_0)|$ </p>
<p>So for $\delta=\min(\delta_1,\delta_2)$ we have</p>
<p>$\begin{array}{ll}
\displaystyle \left|\frac{f(x)}{g(y)}-\frac{f(x_0)}{g(y_0)}\right| &=\displaystyle\left|\frac{f(x)g(y_0)-f(x_0)g(y)}{g(y)g(y_0)}\right| =\displaystyle\left|\frac{g(y_0)(f(x)-f(x_0))-f(x_0)(g(y)-g(y_0))}{(g(y)-g(y_0))g(y_0)+g(y_0)^2}\right|\\\\
&\displaystyle<\frac{\varepsilon\left(|f(x_0)|+|g(y_0)|\right)}{\bigg||g(y_0)^2|-|g(y)-g(y_0)||g(y_0)|\bigg|}<\frac{\varepsilon\left(|f(x_0)|+|g(y_0)|\right)}{\frac 12|g(y_0)^2|}<k\,\varepsilon\end{array}$</p>
<p>with $k$ constant, thus the quotient is continuous in $(x_0,y_0)$.</p>
|
833,353 | <p>Let $a,b \geq 0$ and $0<p,q < 1$ s.t. $p + q = 1$.</p>
<p>Is it true that $a^p b^q \leq a+b$?</p>
| Sandeep Silwal | 138,892 | <p>The comments given are counterexamples. A famous inequality that resembles yours is the following.</p>
<p>If $a,b,p,q$ are positive real numbers with $\frac{1}p+\frac{1}q = 1$, then</p>
<p>$$\frac{a^{p}}{p} + \frac{b^{q}}{q} \ge ab.$$</p>
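<p>A quick numerical spot-check of this inequality (Young's inequality) on random positive values, a sketch of my own:</p>

```python
import random

random.seed(1)

def young_gap(a, b, p):
    q = p / (p - 1)  # conjugate exponent: 1/p + 1/q = 1
    return a**p / p + b**q / q - a * b

# The gap a^p/p + b^q/q - ab is nonnegative for positive a, b and p > 1.
for _ in range(1000):
    a, b = random.uniform(0.01, 10), random.uniform(0.01, 10)
    p = random.uniform(1.1, 5)
    assert young_gap(a, b, p) >= -1e-9
```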
|
3,489,896 | <p>It is clear to me that <span class="math-container">$\liminf X_n \le \limsup X_n$</span>, but every time I see that <span class="math-container">$\liminf X_n \le \sup X_n$</span> it sounds awkward and not so obvious to me. Could someone help me understand this inequality?</p>
| Bernard | 202,857 | <p>By definition,
<span class="math-container">$$\limsup X_n=\lim_k\Bigl(\sup_{n\ge k} X_n\Bigr), $$</span>
and the sequence <span class="math-container">$(S_k)=\bigl(\sup_{n\ge k} X_n\bigr)_k$</span> is <em>non-increasing</em>, so
<span class="math-container">$$\liminf X_n\le\limsup X_n\le S_1=\sup X_n.$$</span></p>
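<p>A concrete numerical illustration (my own example, not part of the answer): for <span class="math-container">$X_n=(-1)^n\left(1+\frac1n\right)$</span> we have <span class="math-container">$\liminf X_n=-1$</span>, <span class="math-container">$\limsup X_n=1$</span>, and <span class="math-container">$\sup X_n=X_2=\frac32$</span>.</p>

```python
# X_n = (-1)^n (1 + 1/n) for n = 1, ..., 10000.
X = [(-1)**n * (1 + 1/n) for n in range(1, 10001)]

sup_X = max(X)
limsup_approx = max(X[5000:])   # tail supremum approximates limsup
liminf_approx = min(X[5000:])   # tail infimum approximates liminf

assert abs(limsup_approx - 1) < 1e-3
assert abs(liminf_approx + 1) < 1e-3
assert liminf_approx <= limsup_approx <= sup_X
```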
|
2,600,062 | <p>I'm stuck on this problem. It's actually a problem related to circuit analysis. I'm given this expression:
$$\frac{\left(R_3+\frac{1}{\omega C}i\right)(\omega Li)}{\left(R_3+\frac{1}{\omega C}i\right)+(\omega Li)}$$
Now, to rationalize it, I will multiply by
$$\frac{R_3+\frac{1}{\omega C}i-\omega Li}{R_3+\frac{1}{\omega C}i-\omega Li}\,:$$
$$\frac{\left(R_3+\frac{1}{\omega C}i\right)(\omega Li)}{R_3+\frac{1}{\omega C}i+\omega Li}\cdot\frac{R_3+\frac{1}{\omega C}i-\omega Li}{R_3+\frac{1}{\omega C}i-\omega Li}$$
After carrying out the multiplication I get:
$$\frac{\left(R_3+\frac{1}{\omega C}i\right)(\omega Li)\left(R_3+\frac{1}{\omega C}i-\omega Li\right)}{\left(R_3+\frac{1}{\omega C}i\right)^2+\omega^2L^2}$$
The problem is that I have to get rid of the imaginary part of the denominator. Any ideas how I can accomplish this?</p>
| Dr. Sonnhard Graubner | 175,066 | <p>The denominator is $$R_3+i\left(\frac{1}{\omega C}+\omega L\right),$$ so multiply numerator and denominator by its complex conjugate $$R_3-i\left(\frac{1}{\omega C}+\omega L\right),$$ which makes the new denominator $R_3^2+\left(\frac{1}{\omega C}+\omega L\right)^2$ real.</p>
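<p>A numerical sanity check with made-up component values: multiplying numerator and denominator by the conjugate of the denominator leaves the value unchanged and makes the denominator real.</p>

```python
# Hypothetical component values, chosen only for illustration.
R3, w, C, L = 2.0, 100.0, 1e-3, 0.05

Z1 = complex(R3, 1 / (w * C))   # R3 + i/(wC), as written in the question
Z2 = complex(0, w * L)          # iwL

direct = Z1 * Z2 / (Z1 + Z2)

conj = (Z1 + Z2).conjugate()
rationalized = (Z1 * Z2 * conj) / ((Z1 + Z2) * conj)

assert abs(direct - rationalized) < 1e-12        # same value
assert abs(((Z1 + Z2) * conj).imag) < 1e-9       # denominator is now real
```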
|
509,487 | <p>Prove $1+{n \choose 1}2+{n \choose 2}4+...+{n \choose n-1}2^{n-1}+{n \choose n}2^n=3^n$ using combinatorial arguments. I have no idea how to begin solving this; a nudge in the right direction would be appreciated. </p>
| anon | 11,763 | <p>Hint: $\sum\binom{n}{k}2^k$ counts the pairs $(A,f)$ with $A\subseteq\{1,\cdots,n\}$ and $f:A\to\{1,2\}$.</p>
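<p>Equivalently, choosing a subset $A\subseteq\{1,\cdots,n\}$ and then a map $A\to\{1,2\}$ is the same as choosing a map $\{1,\cdots,n\}\to\{0,1,2\}$ (with $A$ the preimage of $\{1,2\}$), which is what makes the total $3^n$. A one-line numerical check:</p>

```python
from math import comb

# sum_k C(n,k) 2^k counts maps {1,...,n} -> {0,1,2}: pick A, then f on A.
for n in range(10):
    assert sum(comb(n, k) * 2**k for k in range(n + 1)) == 3**n
```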
|
3,682,514 | <blockquote>
<p>The hyperbola is given by the following equation: <span class="math-container">$$3x^2+2xy-y^2+8x+10y+14=0$$</span>
Find the asymptotes of this hyperbola. (<span class="math-container">$\textit{Answer: }$</span> <span class="math-container">$6x-2y+19=0$</span> and <span class="math-container">$2x+2y-1=0$</span>)</p>
</blockquote>
<p>In my book, it is said that if the hyperbola is given with the equation:
<span class="math-container">$$Ax^2+2Bxy+Cy^2+2Dx+2Ey+F=0$$</span> then the direction vector <span class="math-container">$\{l,m\}$</span> of the asymptotes are found from the following equation: <span class="math-container">$$Al^2+2Blm+Cm^2=0$$</span> (Actually, I don't know the proof) Then to solve this, we let <span class="math-container">$k=\frac{l}{m}, \ \ (m \not =0)$</span> and solve the quadratic equation for <span class="math-container">$k$</span>: <span class="math-container">$$Ak^2+2Bk+C=0, \text{ in our case } 3k^2+2k-1 = 0$$</span>
From here, I got <span class="math-container">$k=-1 \text{ or } k=\frac{1}{3}$</span> (which give us slopes of the two asymptotes). </p>
<p>Hence we search for asymptotes of the form <span class="math-container">$y=kx+b$</span> and restrict <span class="math-container">$b$</span> in such a way that the line does not intersect the hyperbola. Plugging this <span class="math-container">$y$</span> with <span class="math-container">$k=-1$</span> into the equation of the hyperbola I got: <span class="math-container">$$(4b-2)x-b^2+10b+14=0$$</span>
so <span class="math-container">$b=\frac{1}{2}$</span> (as the equation should not have a solution). Then, <span class="math-container">$y=-x+\frac{1}{2}$</span> or <span class="math-container">$2x+2y-1=0$</span> as in the answer!</p>
<p>However, I could not find the second one in this way...</p>
<p>Then, I got stuck... </p>
<p>It would be greatly appreciated either if you help me understand (why the asymptotes are to be found so) and complete this solution or suggest another solution.</p>
| amd | 265,466 | <p>You say that you got stuck when trying <span class="math-container">$k=\frac13$</span>. That’s likely because that’s not the slope of either asymptote. If a line has direction vector <span class="math-container">$(l,m)$</span>, then its slope is equal to <span class="math-container">$\frac ml$</span>, not <span class="math-container">$\frac lm$</span>. So, what you found by solving <span class="math-container">$3k^2+2k-1=0$</span> was the reciprocals of the slopes. You can, of course, get the slopes directly using your method, but the equation to be solved is instead <span class="math-container">$A+2Bk+Ck^2=0$</span>. </p>
<p>The reciprocal of <span class="math-container">$-1$</span> is <span class="math-container">$-1$</span> again, so your method of adjusting <span class="math-container">$b$</span> so that there is no intersection worked accidentally. On the other hand, there is an infinite number of lines with slope <span class="math-container">$\frac13$</span> that don’t intersect this hyperbola, and that’s why you didn’t get a definitive result for that. Using the correct slope value of <span class="math-container">$3$</span>, we end up with <span class="math-container">$(38-4b)x=b^2-10b-14$</span>, and setting <span class="math-container">$b=\frac{19}2$</span> guarantees no intersection with the hyperbola. </p>
<p>There’s another method of finding the constant terms for the asymptotes that may or may not be easier to use. If you vary <span class="math-container">$F$</span> in the general equation, you get a family of hyperbolas with common asymptotes. For some value of <span class="math-container">$F$</span> the hyperbola degenerates into intersecting lines—the asymptotes themselves. Now, a joint equation of the asymptotes is <span class="math-container">$$(k_1x-y+b_1)(k_2x-y+b_2)=0.$$</span> Per the preceding observation, if you expand this expression, the coefficients of all but the constant term must be proportional to the corresponding coefficients in the original hyperbola equation. Comparing coefficients will result in a simple system of equations to solve for <span class="math-container">$b_1$</span> and <span class="math-container">$b_2$</span>. (Indeed, when the quadratic part of the conic equation factors easily, as it does in this problem, you can go straight to comparing coefficients without the work of solving a quadratic equation.) </p>
<p>In this problem, we have <span class="math-container">$$(-x-y+b_1)(3x-y+b_2) = -3x^2-2xy+y^2+(3b_1-b_2)x-(b_1+b_2)y+b_1b_2.$$</span> Comparing this to the original system gives <span class="math-container">$$3b_1-b_2=-8 \\ b_1+b_2=10.$$</span> I trust that you can solve this system. </p>
<p>As for why you can find the direction vectors of the asymptotes by considering only the quadratic part of the equation, I’ll offer a projective-geometric explanation. A hyperbola intersects the line at infinity at two points; its asymptotes are the tangents to the hyperbola at those points. On the other hand, the point at infinity on a line corresponds to its direction vector. We can find these intersection points by homogenizing the equation to <span class="math-container">$$Ax^2+2Bxy+Cy^2+2Dxw+2Eyw+Fw^2=0$$</span> and then setting <span class="math-container">$w=0$</span>. </p>
<p>Another way to understand this method of finding the asymptote slopes is to translate the hyperbola so that its center is at the origin. This doesn’t affect any of the second-degree terms, so the resulting equation is of the form <span class="math-container">$Ax^2+2Bxy+Cy^2=F'$</span>. Per the earlier discussion, varying the constant term on the right-hand side produces a hyperbola with the same asymptotes, and it should be fairly obvious that at <span class="math-container">$F'=0$</span> you get intersecting lines. Since these lines both pass through the origin, the coordinates of any point on one of these lines also give a direction vector of that line.</p>
|
680,660 | <p>Recently I came across this question:</p>
<p>Given a random permutation of integers 1, 2, 3, …, n with a discrete, uniform distribution, find the expected number of local maxima. (A number is a local maximum if it is greater than the number before and after it.) For example, if n=4 and our permutation was 1, 4, 2, 3, then the # of local maxima would be 2 (both 4 and 3 are maxima).<br>
I know the answer will be (n+1)/3. I wanted to know what the answer will be when we have to consider the local maxima as well as the local minima, i.e. finding the expected number of local maxima and local minima.</p>
| André Nicolas | 6,312 | <p>Let $n\ge 3$. Suppose that the permutation $\alpha$ takes $k$ to $a_k$ ($k=1$ to $n$). Let $\alpha'$ be the permutation that takes $k$ to $(n+1)-a_k$. Then the number of local maxima (minima) of $\alpha$ is the same as the number of local minima (maxima) of $\alpha'$. Thus by symmetry the expected number of local maxima and the expected number of local minima are the same. </p>
<p>If the expected number of local maxima is $\frac{n+1}{3}$, it follows that the expected number of points that are local maxima <em>or</em> local minima is $\frac{2(n+1)}{3}$.</p>
<p>For completeness, we give a proof that the expected number of local maxima is indeed $\frac{n+1}{3}$. </p>
<p>Your example shows that you are including <strong>endpoint</strong> maxima. The probability that a random permutation has a left endpoint maximum is $\frac{1}{2}$, and the probability of a right endpoint maximum is the same, so the expected number of endpoint maxima is $1$.</p>
<p>Now let random variable $Y$ be the number of non-endpoint maxima. For $i=2$ to $n-1$, let $X_i=1$ if we have a local maximum at the $i$-th position, and let $X_i=0$ otherwise. Then $Y=X_2+X_3+\cdots+X_{n-1}$, and therefore by the linearity of expectation
$$E(Y)=E(X_2)+E(X_3)+\cdots +E(X_{n-1}).$$
Now we find $\Pr(X_i=1)$. Consider the numbers that are in positions $i-1$, $i$, and $i+1$. There are $3!$ permutations of these numbers, and precisely $2$ of these permutations give a local maximum at $i$. Thus $\Pr(X_i=1)=\frac{2}{3!}=\frac{1}{3}$.</p>
<p>It follows that $E(Y)=(n-2)\cdot\frac{1}{3}$. Now add in the expected number $1$ of endpoint maxima. We get $(n-2)\cdot\frac{1}{3}+1$, which simplifies to $\frac{n+1}{3}$. </p>
<p><strong>Remarks:</strong> $1.$ We prove directly that the expected number of places at which we have a local max <em>or</em> min is $\frac{2(n+1)}{3}$. </p>
<p>An endpoint is always a local max or min, giving us $2$ points. For the others, if $2\le i\le n-1$, let $W_i=1$ if we have a local max or min at the $i$-th position, and $0$ otherwise. Then the number $Z$ of non-endpoint local max or min is $W_2+W_3+\cdots+W_{n-1}$. </p>
<p>The probability that $W_i=1$ is $\frac{4}{3!}=\frac{2}{3}$. Thus $E(Z)=(n-2)\cdot\frac{2}{3}$. Add $2$ for the endpoints. </p>
<p>$2.$ We used the method of <em>indicator</em> random variables. It is often far easier than finding the expectation of a random variable by first finding its distribution. </p>
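For small $n$, both expectations can be confirmed by brute force over all permutations; a quick sketch in Python (the function name is mine, and endpoint extrema are counted, as in the question):

```python
from itertools import permutations
from math import factorial
from fractions import Fraction

def expected_counts(n):
    """Return (E[# local maxima], E[# local maxima or minima]), endpoints included."""
    maxima = extrema = 0
    for p in permutations(range(1, n + 1)):
        for i in range(n):
            left = p[i - 1] if i > 0 else None
            right = p[i + 1] if i < n - 1 else None
            is_max = (left is None or p[i] > left) and (right is None or p[i] > right)
            is_min = (left is None or p[i] < left) and (right is None or p[i] < right)
            maxima += is_max
            extrema += is_max or is_min
    return Fraction(maxima, factorial(n)), Fraction(extrema, factorial(n))

for n in range(3, 8):
    e_max, e_ext = expected_counts(n)
    assert e_max == Fraction(n + 1, 3)        # (n+1)/3 local maxima on average
    assert e_ext == Fraction(2 * (n + 1), 3)  # twice that for maxima or minima
print("formulas confirmed for n = 3..7")
```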
|
3,923 | <p>I propose the following lemma and its proof. It is related to row-null and column-null matrices - i.e. matrices whose rows and columns both sum to zero. Could you please give your opinion on the plausibility of the lemma, and the validity of the proof?</p>
<p><strong>Lemma:</strong> Let $Z\in M_{n}(\mathbb{R})$ be a general $n\times n$ real matrix, and let $Y\in\mathcal{S}(n,\mathbb{R})$, where $\mathcal{S}(n,\mathbb{R})$ is the space of row-null column-null $n\times n$ real matrices. Then $\text{Tr}(ZY)=0$ for all $Y$ in $\mathcal{S}(n,\mathbb{R})$ if and only if $Z$ has the form $$Z_{ij}=\left(p_{j}-p_{i}\right)+\left(q_{j}+q_{i}\right).$$ </p>
<p><strong>Proof:</strong>
Consider the space of row-null and column-null matrices</p>
<p>$$\mathcal{S}(n,\mathbb{R})= \left\{ Y\in M_{n}(\mathbb{R}):\sum_{i}Y_{ij}=0\;\forall j,\ \sum_{j}Y_{ij}=0\;\forall i \right\} $$</p>
<p>Its dimension is
$$\text{dim}(\mathcal{S}(n,\mathbb{R}))=n^{2}-2n+1$$
since the row-nullness and column-nullness are defined by $2n$ equations, only $2n-1$ of which are linearly independent.
Consider the following space</p>
<p>$$\mathcal{G}(n,\mathbb{R})=\left\{ Z\in M_{n}(\mathbb{R}):Z_{ij}=\left(p_{j}-p_{i}\right)+\left(q_{j}+q_{i}\right)\right\}$$</p>
<p>Its dimension is </p>
<p>$$\text{dim}(\mathcal{G}(n,\mathbb{R}))=2N-1$$
where $N-1$ is the contribution from the antisymmetric part and $N$ is from the symmetric part.</p>
<p>Assume $Y\in\mathcal{S}$ and $Z\in\mathcal{G}$, then the Frobenius inner product of two such elements is
$$
\text{Tr}(ZY) =\sum_{ij}\left[\left(p_{j}-p_{i}\right)Y_{ji}+\left(q_{j}+q_{i}\right)Y_{ji}\right]
$$
$$
=\sum_{j}(q_{j}+p_{j})\sum_{i}Y_{ji}+\sum_{i}(q_{i}-p_{i})\sum_{j}Y_{ji}=0
$$
Since $\text{dim}(\mathcal{G})+\text{dim}(\mathcal{S})=\text{dim}(M_{n})$ and $\mathcal{G}\perp\mathcal{S}$, then $\mathcal{G}$ and $\mathcal{S}$ must be complementary in $M_{n}$. Therefore, if $Z$ is orthogonal to all the matrices in $\mathcal{S}$, it must lie in $\mathcal{G}$. </p>
<p>PS: How can I get the curly brackets {} to render in latex mode?</p>
| Laurent Lessard | 1,356 | <p>Here is an alternate way of proving your Lemma. I'm not sure if its any simpler than your proof -- but it's different, and hopefully interesting to some.</p>
<p>Let $S$ be the set of $n\times n$ matrices which are row-null and column-null. We can write this set as:
$$
S = \left\{ Y\in \mathbb{R}^{n\times n} \,\mid\, Y1 = 0 \text{ and }1^TY=0\right\}
$$
where $1$ is the $n\times 1$ vector of all-ones. The objective is to characterize the set $S^\perp$ of matrices orthogonal to every matrix in $S$, using the Frobenius inner product.</p>
<p>One approach is to <em>vectorize</em>. If $Y$ is any matrix in $S$, we can turn it into a vector by taking all of its columns and stacking them into one long vector, which is now in $\mathbb{R}^{n^2\times 1}$. Then $\mathop{\mathrm{vec}}(S)$ is also a subspace, satisfying:
$$
\mathop{\mathrm{vec}}(S) = \left\{ y \in \mathbb{R}^{n^2\times 1} \,\mid\, (\mathbf{1}^T\otimes I)y = 0 \text{ and } (I \otimes \mathbf{1}^T)y = 0 \right\}
$$
where $\otimes$ denotes the <a href="http://en.wikipedia.org/wiki/Kronecker_product" rel="noreferrer">Kronecker product</a>. In other words,
$$
\mathop{\mathrm{vec}}(S) = \mathop{\mathrm{Null}}(A),\qquad\text{where: }
A = \left[ \begin{array}{c} \mathbf{1}^T\otimes I \\ I \otimes \mathbf{1}^T \end{array}\right]
$$
Note that vectorization turns the Frobenius inner product into the standard Euclidean inner product. Namely: $\mathop{\mathrm{Trace}}(A^T B) = \mathop{\mathrm{vec}}(A)^T \mathop{\mathrm{vec}}(B)$. Therefore, we can apply the range-nullspace duality and obtain:
$$
\mathop{\mathrm{vec}}(S^\perp) =
\mathop{\mathrm{vec}}(S)^\perp =
\mathop{\mathrm{Null}}(A)^\perp =
\mathop{\mathrm{Range}}(A^T)
$$
So every vector in $\mathop{\mathrm{vec}}(S^\perp)$ is of the form $(\mathbf{1}\otimes I)a + (I\otimes \mathbf{1})b$ for some vectors $a$ and $b$ in $\mathbb{R}^{n\times 1}$. It follows that every matrix in $S^\perp$ is of the form $a\mathbf{1}^T + \mathbf{1}b^T$. This parametrization is equivalent to the one you presented if you set $a_i = q_i-p_i$ and $b_j = q_j + p_j$.</p>
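This duality is easy to illustrate numerically; a small sketch assuming NumPy (the double-centering construction of $Y$ is my own choice, not from the post):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

# A random row-null, column-null Y via double centering
M = rng.standard_normal((n, n))
Y = M - M.mean(axis=0, keepdims=True) - M.mean(axis=1, keepdims=True) + M.mean()
assert np.allclose(Y.sum(axis=0), 0) and np.allclose(Y.sum(axis=1), 0)

# Z = a 1^T + 1 b^T, the claimed form of the orthogonal complement
a, b = rng.standard_normal(n), rng.standard_normal(n)
one = np.ones(n)
Z = np.outer(a, one) + np.outer(one, b)

print(np.trace(Z @ Y))  # zero up to floating-point roundoff
```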
|
631,054 | <p>Let $A$ be in $M_3(\mathbb R)$ which is not a diagonal matrix. Pick out the cases when $A$ is diagonalizable over $\mathbb R$:</p>
<p>a. when $A^2=A$;</p>
<p>b. when $(A-3I)^2=0$;</p>
<p>c. when $A^2+I=0$.</p>
<p>My attempt is if $A$ is diagonalizable then there is some invertible $P$ s.t. $PAP^{-1}$=$D$. </p>
<p>Then case a. gives $D^2=D$, so $D$ is either $0$ or $I$, which gives $A=0$ or $I$, a contradiction to the fact that $A$ is not diagonal. But I am not sure about my approach. Similarly I arrive at contradictions for the other cases. Please help.</p>
| Robert Lewis | 67,071 | <p>Here's how I would dispose of these issues:</p>
<p>For (c.), note that a matrix $A \in M_3(R)$ has at least one real eigenvalue, since its characteristic polynomial is of degree 3, and real polynomials of odd degree have at least one real root. But if $A^2 + I = 0$, the eigenvalues $\lambda$ of $A$ must satisfy $\lambda^2 + 1 = 0$, i.e. $\lambda = \pm i$. This shows there is no matrix $A \in M_3(R)$ satisfying $A^2 + I = 0$. Thus the proposition, </p>
<p>"$A \in M_3(R) \; \text{and} \; A^2 + I = 0 \Rightarrow A \; \text{is diagonalizable over} \; R.$"</p>
<p>is vacuously true.</p>
<p>For (b.), let</p>
<p>$A = \begin{bmatrix} 3 & 1 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 3 \end{bmatrix}; \tag{1}$</p>
<p>then</p>
<p>$(A - 3I)^2 = N^2 = 0, \tag{2}$</p>
<p>where</p>
<p>$N = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}. \tag{3}$</p>
<p>We now see that $A$ cannot be diagonalizable over $R$; for, since the only eigenvalue of $A$ is $3$ by virtue of $(A - 3I)^2 = 0$, if there were a nonsingular real matrix $P$ with
$PAP^{-1}$ diagonal, we would have</p>
<p>$PAP^{-1} = 3I; \tag{4}$</p>
<p>but (4) implies $A = P^{-1}(3I)P = 3I$, contradicting (1). This example can be generalized in the following sense: any non-diagonal $A$ satisfying $(A - 3I)^2 = 0$ may clearly be written as $A = 3I + N$, with $N \ne 0$ satisfying $N^2 = 0$. The only eigenvalue of such $A$ is $3$; thus $PAP^{-1}$ diagonal forces $A = 3I$, exactly as we have just seen. This contradicts $N \ne 0$. </p>
<p>Finally, for (a.), use the Jordan form: if $A$ were <em>not</em> diagonalizable, each Jordan block J would have to satisfy $J^2 = J$; but $J = \lambda I + N$ with $N = [n_{ij}]$, and $n_{ij} = 1$ iff $j = i+ 1$, $n_{ij} = 0$ otherwise; so $J^2 = \lambda^2 I + 2\lambda N + N^2 \ne J$; thus $A$ can be diagonalized.</p>
<p>Hope this helps. Cheers,</p>
<p>and as always, <strong><em>Fiat Lux!!!</em></strong></p>
|
3,556,490 | <p>Can someone explain me how to solve the equation <span class="math-container">$\frac{x-n}{x+n} = e^{-x}$</span>, where <span class="math-container">$n$</span> is a non-zero natural number ? Unfortunately, I have not even an idea how to start. Any hint is much appreciated. </p>
<p>Many thanks in advance. </p>
| Claude Leibovici | 82,404 | <p>The solution of <span class="math-container">$$ e^{-x}=\frac{x-n}{x+n} $$</span> is given in terms of the <a href="https://arxiv.org/abs/1408.3999" rel="nofollow noreferrer">generalized Lambert function</a> (have a look at equation <span class="math-container">$(4)$</span> in the linked paper). It is far to be elementary and numerical methods would be required. This implies the need of a "reasonable" starting guess.</p>
<p>As @egreg wrote, consider that you look for the zero of the function
<span class="math-container">$$f(x)=x+\log(x-n)-\log(x+n)$$</span> As @egreg showed, the solution is in <span class="math-container">$(n,n+1)$</span>. Using Newton method with <span class="math-container">$x_0=n+1$</span>, by Darboux theorem, we shall face a terrible overshoot of the solution since <span class="math-container">$f(n+1)>0$</span> and <span class="math-container">$f''(n+1)=-\frac{4 n (n+1)}{(2 n+1)^2}<0$</span> (leading to <span class="math-container">$x < n$</span> !).</p>
<p>So, we need to find a "good" <span class="math-container">$\epsilon$</span> in order to use as a starting point <span class="math-container">$x_0=n+\epsilon$</span>.</p>
<p>Since we have
<span class="math-container">$$f(n+\epsilon)=n+\epsilon +\log (\epsilon )-\log (2 n+\epsilon )$$</span> considering that <span class="math-container">$\epsilon \ll n$</span>, expand as a Taylor series to get
<span class="math-container">$$f(n+\epsilon)= n-\log (2 n)+\log (\epsilon )+\left(1-\frac{1}{2 n}\right) \epsilon
+O\left(\epsilon ^2\right)$$</span> and solving gives
<span class="math-container">$$\epsilon=\frac{2 n }{2 n-1}W\left(e^{-n} (2 n-1)\right)\implies \color{red}{x_0=n+\frac{2 n }{2 n-1}W\left(e^{-n} (2 n-1)\right)}$$</span> where <span class="math-container">$W(.)$</span> is the Lambert function.</p>
<p>A few results</p>
<p><span class="math-container">$$\left(
\begin{array}{ccc}
n & x_0 & \text{exact solution} \\
1 & 1.5569290855221475902 & 1.5434046384182084480 \\
2 & 2.4007960217011828146 & 2.3993572805154676678 \\
3 & 3.2437999725426341949 & 3.2436373501994747685 \\
4 & 4.1306917549758086791 & 4.1306762779494094562 \\
5 & 5.0636292967436328932 & 5.0636280836365866688 \\
6 & 6.0289656341356739002 & 6.0289655520675456145 \\
7 & 7.0126176449103654054 & 7.0126176398482880782 \\
8 & 8.0053405956558708651 & 8.0053405953599105687 \\
9 & 9.0022167307112449573 & 9.0022167306944716050 \\
10 & 10.000907216368752247 & 10.000907216367819733 \\
11 & 11.000367308611718037 & 11.000367308611666862 \\
12 & 12.000147440262153382 & 12.000147440262150601 \\
13 & 13.000058765243955099 & 13.000058765243954949 \\
14 & 14.000023282281423855 & 14.000023282281423847 \\
15 & \color{blue}{15.000009176988204818} & \color{blue}{15.000009176988204818}
\end{array}
\right)$$</span> Since the argument of Lambert function starts to be very small and we know that, for small <span class="math-container">$t$</span>, <span class="math-container">$W(t) \sim t$</span>, a good approximation of <span class="math-container">$x_0$</span> is
<span class="math-container">$$x_0\sim n(1+2e^{-n})$$</span>
For a better approximation, we could use <span class="math-container">$W(t)\sim \frac t {1+t}$</span> and get
<span class="math-container">$$x_0\sim n\left(1+\frac{2}{2 n-1+e^n} \right)$$</span></p>
<p>Now, you can perform one or two iterations of Newton method to get
<span class="math-container">$$x_{n+1}=x_n-\left(1+\frac{2 n}{(n-2) n-x_n^2}\right) (x_n+\log (x_n-n)-\log (x_n+n))$$</span></p>
<p>A few results using <span class="math-container">$x_1$</span> and <span class="math-container">$x_2$</span>
<span class="math-container">$$\left(
\begin{array}{cccc}
n & x_1 & x_2 & \text{exact solution} \\
1 & 1.5432858263540818229 & 1.5434046290949198066 & 1.5434046384182084480 \\
2 & 2.3993553204786489793 & 2.3993572805118222783 & 2.3993572805154676678 \\
3 & 3.2436373052287912731 & 3.2436373501994713284 & 3.2436373501994747685 \\
4 & 4.1306762771273694914 & 4.1306762779494094538 & 4.1306762779494094562 \\
5 & 5.0636280836256496653 & \color{blue}{5.0636280836365866688} & \color{blue}{5.0636280836365866688} \\
6 & 6.0289655520674323612 & 6.0289655520675456145 & 6.0289655520675456145 \\
7 & 7.0126176398482870746 & 7.0126176398482880782 & 7.0126176398482880782 \\
8 &\color{blue}{ 8.0053405953599105687} & 8.0053405953599105768 & \color{blue}{8.0053405953599105687}
\end{array}
\right)$$</span></p>
<p>As a starting point, <span class="math-container">$x_1$</span> seems to be perfect since numerical analysis reveals that <span class="math-container">$f(x_1) <0$</span> and, by Darboux theorem, Newton method will converge without any overshoot of the solution in a very small number of iterations.</p>
<p><strong>Edit</strong></p>
<p>A further numerical analysis reveals that, at least for <span class="math-container">$1 \leq n \leq 18$</span> (for <span class="math-container">$n>18$</span> start serious underflow/overflow problems)
<span class="math-container">$$f(x_0) \sim e^{-(2n+1)}$$</span></p>
|
1,457,797 | <p>I'm told that a circle intersects the y-axis at $(0,0)$ and $(0, -4)$.</p>
<p>I have a tangent that starts at $(0, -6)$. I want to find the point of intersection.</p>
<p>I am taking the midpoint of the circle to be $(0, -2)$ and the radius to be $2$ from the points above.</p>
<p>The equation of the circle then is:</p>
<p>${(x - 0)^2 + (y + 2)^2 = 4}$</p>
<p>$\implies$ ${x^2 + y^2 + 4y = 0}$</p>
<p>I am taking the equation of the line to be ${y = -6}$.</p>
<p>If I substitute ${y = -6}$ into ${x^2 + y^2 + 4y = 0}$ I get</p>
<p>${x^2 + 36 - 24 = 0}$</p>
<p>${x^2 = -12}$</p>
<p>I think I have gone wrong somewhere.</p>
| Tushant Mittal | 272,305 | <p>Your assumption that the tangent at (0,-6) is y=-6 is incorrect as this line doesn't touch the circle and therefore you have no point on the real plane that satisfies both the circle and the line.</p>
<p>You will have to find the equation of the tangent using the property that the perpendicular distance of the line from the centre is equal to the radius.</p>
|
1,157,528 | <p>How do I find the probability density function corresponding to the following distribution function?</p>
<p>$$F (x)=
\left\{
\begin{array}{ll}
0 & \text{if } x<0 \\
x^2 & \text{if } 0\leq x\leq \frac{1}{2} \\
\frac{1}{25} \left(1-3 (3-x)^2\right) & \text{if }\frac{1}{2}<x\leq 3 \\
1 & \text{if } x\geq 3
\end{array} \right.$$</p>
| Chinny84 | 92,628 | <p>$$
F(x) =
\begin{cases}
0, & \text{if $\,x$<0} \\
x^2, & \text{if 0$\,\leq x \leq \frac{1}{2}$}\\
\frac{1}{25}\left(1-3(3-x)^2\right), & \text{if $\,\frac{1}{2} < x < 3$}\\
1, & \text{if $\,3 \leq x $}
\end{cases}
$$
thus
$$
f(x) =
\begin{cases}
2x, & \text{if 0$\,\leq x \leq \frac{1}{2}$}\\
\frac{6}{25}(3-x), & \text{if $\,\frac{1}{2} < x < 3$}\\
0, & \text{everywhere else}
\end{cases}
$$</p>
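The piecewise differentiation can be checked symbolically; a small sketch assuming SymPy is available:

```python
from sympy import symbols, diff, simplify, Rational

x = symbols('x')
F_mid = Rational(1, 25) * (1 - 3 * (3 - x)**2)  # the middle piece of F

assert diff(x**2, x) == 2 * x
# derivative of the middle piece is (6/25)(3 - x)
assert simplify(diff(F_mid, x) - Rational(6, 25) * (3 - x)) == 0
print(diff(F_mid, x))
```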
|
3,879,201 | <p>Let <span class="math-container">$E$</span> be a normed <span class="math-container">$\mathbb R$</span>-vector space, <span class="math-container">$\mu$</span> be a finite signed measure on <span class="math-container">$(E,\mathcal B(E))$</span> and <span class="math-container">$$\hat\mu:E'\to\mathbb C\;,\;\;\;\varphi\mapsto\int\mu({\rm d}x)e^{{\rm i}\varphi(x)}$$</span> denote the characteristic function of <span class="math-container">$\mu$</span>.</p>
<p>Replying to a previous formulation of this question, <a href="https://math.stackexchange.com/users/142385/kavi-rama-murthy">Kavi Rama Murthy
</a> <a href="https://math.stackexchange.com/a/3879306/47771">has shown</a> that if <span class="math-container">$E$</span> is complete and separable and <span class="math-container">$\mu$</span> is nonnegative, then <span class="math-container">$\hat\mu$</span> is uniformly continuous.</p>
<p>It is easy to see that his proof still works in the general case as long as we are assuming that <span class="math-container">$\mu$</span> is tight<span class="math-container">$^1$</span>, i.e. <span class="math-container">$$\forall\varepsilon>0:\exists K\subseteq E\text{ compact}:|\mu|(K^c)<\varepsilon\tag1.$$</span></p>
<blockquote>
<p>Taking a closer look at the proof, I've observed the following: Let <span class="math-container">$\langle\;\cdot\;,\;\cdot\;\rangle$</span> denote the duality pairing between <span class="math-container">$E$</span> and <span class="math-container">$E'$</span> and <span class="math-container">$$p_x(\varphi):=|\langle x,\varphi\rangle|\;\;\;\text{for }\varphi\in E'$$</span> for <span class="math-container">$x\in E$</span>. By definition, the weak* topology <span class="math-container">$\sigma(E',E)$</span> on <span class="math-container">$E'$</span> is the topology generated by the seminorm family <span class="math-container">$(p_x)_{x\in E}$</span>.</p>
<p>Now, if <span class="math-container">$K\subseteq E$</span> is compact, <span class="math-container">$$p_K(\varphi):=\sup_{x\in K}p_x(\varphi)\;\;\;\text{for }\varphi\in E'$$</span> should be a seminorm on <span class="math-container">$E'$</span> as well. And if I'm not missing something, the topology generated by <span class="math-container">$(p_K:K\subseteq E\text{ is compact})$</span> is precisely the topology <span class="math-container">$\sigma_c(E',E)$</span> of compact convergence on <span class="math-container">$E'$</span>.</p>
<p>What <a href="https://math.stackexchange.com/users/142385/kavi-rama-murthy">Kavi Rama Murthy
</a> has shown is that, since <span class="math-container">$\mu$</span> is tight, for all <span class="math-container">$\varepsilon>0$</span>, there is a compact <span class="math-container">$K\subseteq E$</span> and a <span class="math-container">$\delta>0$</span> with <span class="math-container">$$|\hat\mu(\varphi_1)-\hat\mu(\varphi_2)|<\varepsilon\;\;\;\text{for all }\varphi_1,\varphi_2\in E'\text{ with }p_K(\varphi_1-\varphi_2)<\delta\tag2.$$</span></p>
<p><strong>Question</strong>: Are we able to conclude that <span class="math-container">$\hat\mu$</span> is <span class="math-container">$\sigma_c(E',E)$</span>-continuous?</p>
</blockquote>
<h2><strong>EDIT</strong>:</h2>
<p>In order to conclude that <span class="math-container">$\hat\mu$</span> is (uniformly) <span class="math-container">$\sigma_c(E',E)$</span>-continuous, we need to show that <span class="math-container">$(2)$</span> holds for <span class="math-container">$K$</span> replaced by an arbitrary compact <span class="math-container">$\tilde K\subseteq E$</span>. Given <span class="math-container">$\varepsilon>0$</span>, we can show <span class="math-container">$(2)$</span> by choosing the compact subset <span class="math-container">$K\subseteq E$</span> such that <span class="math-container">$$|\mu|(K^c)<\varepsilon\tag3.$$</span></p>
<p>We may then write <span class="math-container">\begin{equation}\begin{split}\left|\hat\mu(\varphi_1)-\hat\mu(\varphi_2)\right|&\le\underbrace{\int_{K\cap\tilde K}\left|e^{{\rm i}\varphi_1}-e^{{\rm i}\varphi_2}\right|{\rm d}\left|\mu\right|}_{<\:\varepsilon}\\&\;\;\;\;\;\;\;\;\;\;\;\;+\int_{K\cap\tilde K^c}\left|e^{{\rm i}\varphi_1}-e^{{\rm i}\varphi_2}\right|{\rm d}\left|\mu\right|\\&\;\;\;\;\;\;\;\;\;\;\;\;+\underbrace{\int_{K^c}\left|e^{{\rm i}\varphi_1}-e^{{\rm i}\varphi_2}\right|{\rm d}\left|\mu\right|}_{<\:2\varepsilon}\end{split}\tag4\end{equation}</span> for all <span class="math-container">$\varphi_1,\varphi_2\in E'$</span> with <span class="math-container">$p_{\tilde K}(\varphi_1-\varphi_2)<\delta$</span>, where <span class="math-container">$$\delta:=\frac\varepsilon{\left\|\mu\right\|},$$</span> but I have no idea how we can control the second integral.</p>
<h2><strong>EDIT 2</strong></h2>
<p>A "proof" of this claim can be found in Linde's <em>Probability in Banach Spaces</em>, but I have no idea why this proof is correct, since he is concluding the continuity immediately from <span class="math-container">$(2)$</span> (for a single <span class="math-container">$K$</span>):</p>
<p><a href="https://i.stack.imgur.com/vuAOk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vuAOk.png" alt="enter image description here" /></a>
<a href="https://i.stack.imgur.com/9q1By.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9q1By.png" alt="enter image description here" /></a></p>
<p>Maybe we need to assume that <span class="math-container">$\mu$</span> is even Radon, i.e. that for all <span class="math-container">$B\in\mathcal B(E)$</span> and all <span class="math-container">$\varepsilon>0$</span>, there is a compact <span class="math-container">$C\subseteq E$</span> with <span class="math-container">$C\subseteq B$</span> and <span class="math-container">$|\mu|(B\setminus C)<\varepsilon$</span>. The author is actually imposing this assumption, but he obviously doesn't make use of it in his proof (he would need to consider an arbitrary compact <span class="math-container">$\tilde K\subseteq E$</span>, as I did above).</p>
<hr />
<p><span class="math-container">$^1$</span> On a complete separable metric space, every finite signed measure is tight.</p>
| Marc | 503,755 | <p>Your answer is correct.</p>
<p>Note that it might be easier to <em>start</em> dividing by <span class="math-container">$X$</span> since you will have one factor out right away. Dividing by <span class="math-container">$X$</span> is fine; it's just dividing by a linear term <span class="math-container">$X-a$</span> where <span class="math-container">$a$</span> happens to be <span class="math-container">$0$</span>.</p>
<p>Note also that <span class="math-container">$(X^2+X+2) = (X^2-2X+2)$</span> since <span class="math-container">$-2X=X \mod 3\ \ \ (*)$</span></p>
<p>What I would do: first divide by <span class="math-container">$X$</span>. Then you have a polynomial of degree <span class="math-container">$4$</span>, which can only be a linear factor times a polynomial of degree <span class="math-container">$3$</span>, or <span class="math-container">$2$</span> polynomials of degree <span class="math-container">$2$</span>. Note that you can find linear factors by checking if the degree <span class="math-container">$4$</span> polynomial has roots in <span class="math-container">$\mathbb{F}_3$</span>; i.e. check for <span class="math-container">$-1,0,1$</span> if plugging it into <span class="math-container">$X^4-X^3-X^2+X+1$</span> gives <span class="math-container">$0$</span>.</p>
<p>As you see, that is not the case, so this degree <span class="math-container">$4$</span> polynomial must be split into two polynomials of degree <span class="math-container">$2$</span>.</p>
<p>You have written down all irreducible polynomials of degree <span class="math-container">$2$</span> in <span class="math-container">$\mathbb{F}_3$</span> which is a good plan. Doing long division or multiplying them together to see if it works out is the way to see if the factors you found are correct.</p>
<p><strong>Edit</strong>: a clever side-note by Jyrki; the resulting polynomial of degree 4 could be irreducible. In that case, you had to check all combinations or long divisions of the 2nd degree polynomials to conclude that the polynomial of degree 4 does not factor and is therefore irreducible. In this example, it's not the case.</p>
<p>Note that by <span class="math-container">$(*)$</span>, you can write that <span class="math-container">$f(X) = (X)(X^2+X+2)^2$</span></p>
|
1,927,650 | <p>In the diagram below, DE is a chord of the circle that passes through C, D and E. A is the center of the circle. The perpendicular line from the centre A intersects DE at B and the circle at C.</p>
<p>DE=100cm
BC= 10cm
AB=X</p>
<p>I need to calculate the length of AB, i.e. x.</p>
<p>Basically I need to calculate the radius of the circle without the diameter or circumference or anything. Do you have any suggestions as to how I would do this?</p>
<p><a href="https://i.stack.imgur.com/eLUHR.png" rel="nofollow noreferrer">diagram</a></p>
| N.S.JOHN | 302,172 | <p>You have $DB= 50$ and $AD=AC=x+10$</p>
<p>Now use the Pythagorean theorem and find $x$.</p>
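Carrying the hint through, $DB^2+AB^2=AD^2$ gives $x^2+50^2=(x+10)^2$; a one-line check assuming SymPy is available:

```python
from sympy import symbols, solve, Eq

x = symbols('x', positive=True)
# x^2 + 50^2 = (x + 10)^2  reduces to  20x = 2400
print(solve(Eq(x**2 + 50**2, (x + 10)**2), x))  # [120], so the radius is x + 10 = 130
```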
|
2,993,299 | <p>If anyone can help me with how to go about solving these kinds of equations I would really appreciate it. :-)</p>
<p><span class="math-container">$$\sqrt{36-2x^2} = 4$$</span></p>
<p>Solve for X</p>
| user | 505,767 | <p>I often use </p>
<p><span class="math-container">$$\ldots=\lim_ {x \to \infty}\frac{x+\frac1x}{1-\frac1x}=\left(\frac{\infty+0}{1-0}\right)=\infty$$</span></p>
<p>or directly as <span class="math-container">$x\to\infty$</span></p>
<p><span class="math-container">$$\ldots=\frac{x+\frac1x}{1-\frac1x}\to \infty$$</span></p>
<p>In any case I suggest to avoid that one</p>
<p><span class="math-container">$$\ldots=\lim_ {x \to \infty}\color{red}{\underbrace{\frac{\infty +0}{1-0}}_{\text{ not formal!}}}=\ldots$$</span></p>
<p>also in a not formal answer since we are writing the values assumed by the terms under the limit.</p>
|
3,219,260 | <p>Trying to find the easiest and most general method of finding the last <span class="math-container">$n$</span> digits of a number.
I know the trick lies in finding the remainder when the number is divided by <span class="math-container">$10^n$</span>, but I am still not able to perform the rest of the steps to reach the answer.</p>
| J. W. Tanner | 615,567 | <p>Repeated squaring:</p>
<p><span class="math-container">$$7^4 = 2401 \mod 10000$$</span>
<span class="math-container">$$7^8 = 2401^2\equiv 4801 \mod 10000$$</span>
<span class="math-container">$$7^{16}\equiv 4801^2\equiv 9601 \mod 10000$$</span>
<span class="math-container">$$7^{32}\equiv 9201 \mod 10000$$</span>
<span class="math-container">$$7^{64}\equiv 8401 \mod 10000$$</span>
<span class="math-container">$$7^{128}\equiv 6801\mod 10000$$</span></p>
<p>Note: <span class="math-container">$2401^2=(2400+1)^2=2400^2+4800+1\equiv4801$</span>, <span class="math-container">$4801^2\equiv4800^2+9600+1\equiv9601$</span>, </p>
<p><span class="math-container">$9601^2\equiv9600^2+2\times9600+1\equiv9201,$</span> etc.</p>
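The repeated-squaring scheme above can be sketched in Python; the built-in `pow` with a modulus argument performs the same modular exponentiation (an illustration, not part of the original answer):

```python
# Repeated squaring: 128 = 2^7, so square 7 seven times, reducing mod 10000
# at each step to keep only the last four digits.
def pow_by_squaring(base, log2_exp, mod):
    r = base % mod
    for _ in range(log2_exp):
        r = r * r % mod
    return r

last4 = pow_by_squaring(7, 7, 10000)   # 7^(2^7) = 7^128 mod 10^4
assert last4 == pow(7, 128, 10000)     # built-in modular exponentiation agrees
print(last4)  # -> 6801, the last four digits found above
```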
|
3,219,260 | <p>Trying to find the easiest and most general method in finding last <span class="math-container">$n$</span> digits of a number.
I know the trick lies in finding the remainder when the number is divided by <span class="math-container">$10^n$</span>, but I am still not able to perform the rest of the steps to reach the answer.</p>
| lab bhattacharjee | 33,337 | <p><span class="math-container">$7^{128}\equiv(1-50)^{64}\equiv1-64\cdot50+\binom{64}250^2-\binom{64}350^3+\cdots+50^{64}$</span></p>
<p><span class="math-container">$\equiv1-64\cdot50\pmod{10^4}$</span></p>
<p>as <span class="math-container">$10^4$</span> divides <span class="math-container">$50^n,n\ge4,$</span></p>
<p>and <span class="math-container">$10^4\mid\binom{64}250^2\iff\dfrac{10^4}{50^2}\mid\binom{64}2$</span></p>
<p>and similarly <span class="math-container">$10^4$</span> divides <span class="math-container">$\binom{64}350^3$</span></p>
|
3,030,317 | <p>I have four points on a rectangular grid <span class="math-container">$(x_1,y_1)$</span>, <span class="math-container">$(x_1,y_2)$</span>, <span class="math-container">$(x_2,y_1)$</span> and <span class="math-container">$(x_2,y_2)$</span>. I also have the value of a third variable <span class="math-container">$z$</span> at each of these points, as well as the partial derivatives <span class="math-container">$\frac{\partial z}{\partial x}$</span> and <span class="math-container">$\frac{\partial z}{\partial y}$</span> at each of these points.</p>
<p>I would like to perform 2-d interpolation to obtain the value of <span class="math-container">$z$</span> at any point <span class="math-container">$(x,y)$</span> within the grid block.</p>
<p>I can easily perform 2-d linear interpolation using the four values of <span class="math-container">$z$</span>, however I would like to increase the smoothness by using the eight derivatives I already have.</p>
<p>I have read up about bicubic interpolation but this requires four 2nd derivatives which I do not have.</p>
<p>Is there a method using the 12 bits of data I have which gives a smoother surface than the linear solution?</p>
| Community | -1 | <p>Interpolate along the grid lines using Hermite cubic splines. <a href="https://en.wikipedia.org/wiki/Cubic_Hermite_spline" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Cubic_Hermite_spline</a></p>
<p>Then inside a tile, use the Coons method. <a href="https://en.wikipedia.org/wiki/Coons_patch" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Coons_patch</a></p>
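A minimal sketch of the grid-line step (evaluating a cubic Hermite curve from two endpoint values and two endpoint derivatives using the standard basis functions; the Coons blending step is omitted, and the names here are my own):

```python
# Evaluate a cubic Hermite curve on [0, 1] from endpoint values z0, z1 and
# endpoint derivatives d0, d1 (the per-grid-line step of the construction).
def hermite(t, z0, z1, d0, d1):
    h00 = 2*t**3 - 3*t**2 + 1   # standard cubic Hermite basis functions
    h10 = t**3 - 2*t**2 + t
    h01 = -2*t**3 + 3*t**2
    h11 = t**3 - t**2
    return h00*z0 + h10*d0 + h01*z1 + h11*d1

# The curve reproduces the endpoint data exactly:
assert hermite(0.0, 1.0, 4.0, 0.5, -2.0) == 1.0
assert hermite(1.0, 1.0, 4.0, 0.5, -2.0) == 4.0
```

Along each grid line this uses the two values and two derivatives at the endpoints; the Coons construction then blends the four boundary curves of a tile into a surface.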
|
1,580,736 | <p>I know that $\arcsin(x) + \arccos(x) = \frac{\pi}2$,</p>
<p>but how to use that to solve the following question?</p>
<p>$$2\arcsin(x)-3\arccos(x)=\frac{\pi}6 $$</p>
| Mythomorphic | 152,277 | <p><strong>Hint:</strong></p>
<p>Rearrange your known identity into:
$$\arcsin(x) =\frac{\pi}2-\arccos(x)$$</p>
<p>and substitute it into the equation. Then the equation should be in terms of $\arccos x$. Solve for $x$.</p>
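Carrying the hint through (my own sketch): with $t=\arccos x$ the equation becomes $\pi-5t=\frac{\pi}{6}$, so $t=\frac{\pi}{6}$ and $x=\cos\frac{\pi}{6}=\frac{\sqrt3}{2}$, which a numeric check confirms:

```python
import math

# Substitute arcsin(x) = pi/2 - arccos(x); with t = arccos(x):
#   2(pi/2 - t) - 3t = pi/6  =>  pi - 5t = pi/6  =>  t = pi/6
t = (math.pi - math.pi/6) / 5
x = math.cos(t)                      # x = cos(pi/6) = sqrt(3)/2
assert math.isclose(x, math.sqrt(3)/2)
# Verify in the original equation:
lhs = 2*math.asin(x) - 3*math.acos(x)
assert math.isclose(lhs, math.pi/6)
```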
|
4,412,371 | <p>Would appreciate some help with proving the following statement:</p>
<p>Let <span class="math-container">$a_{n}$</span> be a sequence, and <span class="math-container">$\lim _{n\rightarrow \infty }a_{n}=1 $</span>.</p>
<p>Prove that <span class="math-container">$\lim _{n\rightarrow \infty }a_{1}+a_{2}+\ldots +a_{n}=\infty $</span>.
It's easy to see that if <span class="math-container">$a_{n}$</span> is monotonically decreasing, then we can choose the sequence <span class="math-container">$b_{n}=\sum_{k=1}^{n}a_{k}$</span>, then <span class="math-container">$\lim _{n\rightarrow \infty }b_{n}=\infty$</span> and since <span class="math-container">$\forall n\in \mathbb{N} \left( a_{n+1}\leq a_{n}\right)$</span>, we can conclude that <span class="math-container">$\lim _{n\rightarrow \infty }a_{1}+a_{2}+\ldots +a_{n}=\infty $</span>, but what happens when <span class="math-container">$a_{n}$</span> is monotonically increasing?</p>
| Kavi Rama Murthy | 142,385 | <p>There exists <span class="math-container">$m$</span> such that <span class="math-container">$a_n >\frac 1 2$</span> for <span class="math-container">$n \geq m$</span>. So, for <span class="math-container">$n >m$</span> we have <span class="math-container">$$a_1+a_2+\cdots+a_{m-1}+a_m+a_{m+1}+\cdots+a_n$$</span> <span class="math-container">$$ > a_1+a_2+...+a_{m-1}+\frac {n-m+1} 2 \to \infty$$</span> as <span class="math-container">$n \to \infty$</span>.</p>
|
3,570,084 | <blockquote>
<p>find the value of <span class="math-container">$f(0)$</span>, given <span class="math-container">$f(f(x))=0$</span> has roots as 1 & 2, <span class="math-container">$f(x) = x^2 + \alpha x + \beta$</span></p>
</blockquote>
<p><strong>My attempt:</strong></p>
<p>If <span class="math-container">$k_1$</span>, <span class="math-container">$k_2$</span> are the roots of <span class="math-container">$f(x)$</span>, <span class="math-container">$f(x) = k_1$</span> has roots 1 & 2 <span class="math-container">$\therefore \alpha = -3, \beta-k_1 = 2 \text{ or } \beta-k_2 = 2$</span> and <span class="math-container">$f(x) = x^2 -3x+\beta$</span>. I could not proceed further with this approach</p>
<p>Another line of thought is that <span class="math-container">$f(f(x))$</span> would be a quartic equation, which means that for it to have 2 roots, it will either have 2 real, 2 imaginary roots or just 2 roots (x-axis would be tangent to this curve at 1 and 2). This information doesn't seem very helpful in solving the problem.</p>
| Vishu | 751,311 | <p>Note that <span class="math-container">$f(f(x))= (x^2+ax+b)^2 + a(x^2+ax+b) + b$</span></p>
<p>Put <span class="math-container">$x=1,x=2$</span> and set to 0:</p>
<p><span class="math-container">$(1+a+b)^2 + a(1+a+b) + b = 0$</span> (i)<br>
<span class="math-container">$(4+2a+b)^2 + a(4+2a+b) + b = 0$</span> (ii)</p>
<p>Subtracting first equation from second yields</p>
<p><span class="math-container">$(a+3)(4a+2b+5)=0$</span></p>
<p>If <span class="math-container">$a=-3$</span>, then putting value of <span class="math-container">$a$</span> in (i) will give a quadratic with no real solutions in <span class="math-container">$b$</span>.</p>
<p>So, <span class="math-container">$4a+2b+5=0$</span></p>
<p>or
<span class="math-container">$a=\frac{-5-2b}{4}$</span></p>
<p>Putting this value in (i) will lead to <span class="math-container">$b=\frac{-3}{2}$</span></p>
<p>So, <span class="math-container">$$f(0)=b=-\frac{3}{2}$$</span></p>
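The conclusion can be verified directly (an illustrative check with exact rational arithmetic):

```python
from fractions import Fraction

# Verify the result: with a = -1/2 (from 4a + 2b + 5 = 0) and b = -3/2,
# f(f(x)) vanishes at x = 1 and x = 2, and f(0) = b = -3/2.
a, b = Fraction(-1, 2), Fraction(-3, 2)

def f(t):
    return t**2 + a*t + b

assert f(f(1)) == 0 and f(f(2)) == 0   # 1 and 2 are roots of f(f(x)) = 0
print(f(0))  # -> -3/2
```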
|
128,009 | <p>Hi,
Is there any known sequence such that the sum of one subsequence never equals the sum of another subsequence? The subsequences should have elements only from the parent sequence.</p>
<p>Thanks
Sundi</p>
| Community | -1 | <p>There is the following description of $G$ invariant pseudodifferential operators on a Riemannian homogeneous space $G/H$: The Schwartz kernels are smooth outside the diagonal and conormal with respect to the diagonal; this is equivalent to Beals' commutator characterization mentioned in Pedro Lauridsen Ribeiro's answer. Moreover, the full geometric symbols are, modulo symbols of order $-\infty$, invariant under the symplectic action of $G$ on $T^*(G/H)$.</p>
<p>Geometric symbols are defined by pulling back Schwartz kernels under the exponential map of the Levi-Civita connection to a neighbourhood of the zero section of the tangent bundle $T(G/H)$, and then taking Fourier transforms in the fiber variable. See <a href="https://mathoverflow.net/questions/118314/trace-formula-for-psdos/119061#119061">here</a> for more details of and references to the geometric pseudodifferential calculus. That principal symbols must be invariant is clear from the transformation behaviour under diffeomorphisms (special case of Egorov's theorem, or FIO calculus). Lemma 6.2 in the <a href="http://link.springer.com/article/10.1007%2Fs00209-011-0952-1" rel="nofollow noreferrer"> paper in Math. Z.</a>, of which I am a coauthor, gives the invariance of the <i>full</i> geometric symbol if the diffeomorphism is an isometry, e.g. an action on $G/H$ by a group element.</p>
<p>The description discards smoothing operators. This is reasonable if one adopts, from Sato's microlocal analysis, the view that pseudodifferential operators operate on microfunctions.</p>
<p>The above characterization of invariant pseudodifferential operators is not algebraic. However, I think that it is simple and straightforward. I don't know if there is a characterization of $G$ invariant microdifferential (pseudodifferential) operators in the setting of algebraic analysis which is more algebraic.</p>
|
1,851 | <p>Is it possible to modify closing reasons menu? If so I would love to see a small addition, maybe others have other wishes too.</p>
<p>So, the last <a href="https://mathematica.meta.stackexchange.com/q/1010/5478">Granular off-topic close reasons</a> lead to creating</p>
<blockquote>
<p>This question arises due to a simple mistake such as a trivial syntax
error, incorrect capitalization, spelling mistake, or other
typographical error and is unlikely to help any future visitors, or
else it is easily found in the documentation.</p>
<p>This question cannot be answered without additional information.
Questions on problems in code must describe the specific problem and
include valid code to reproduce it. Any data used for programming
examples should be embedded in the question or code to generate the
(fake) data must be included. </p>
<p>The question is out of scope for this site. The answer to this
question requires either advice from Wolfram support or the services
of a professional consultant.</p>
</blockquote>
<p>What I'm missing is what is often done with "Off-topic / Other" and a comment:</p>
<blockquote>
<p>I'm voting to close this question as off-topic because in its form it is too localized and is unlikely to help future visitors. </p>
</blockquote>
<p>It is about questions where the OP didn't even bother to include specific strings/labels etc., just dumping their whole daily code into the question.</p>
<p>I'm using it quite often, and I've seen a couple of other users do so too. Can we include this in the menu after reviewing the text?</p>
| Jason B. | 9,490 | <p>No it shouldn't because then it will be used more often and I don't think that being "too localized and unlikely to help future visitors" is, by itself, a valid reason to close a question. </p>
<p>If it's localized and unlikely to help future visitors and not of interest to you, then leave it unanswered. If the question is useless for everyone except OP, then no one will answer it. I literally do not see any reason for deleting it over keeping it up, if it's written well enough, if it's complete enough to answer (has the necessary code, etc).</p>
|
3,293,233 | <p>My robot has a laser time-of-flight distance sensor. Holding the distance and target constant: When I take single set of readings, three standard deviations usually is 3% to 6% of the mean.</p>
<p>If I take multiple sets of readings, each set will show three standard deviations to be 3-6% of the mean, BUT, averaging the averages from each set will show much lower standard deviation, such that three standard deviations will be between 0.5 to 2% of the mean.</p>
<p>Am I getting a more accurate reading from the average of averages of subsets than from the average of a whole set?</p>
<p>(Or am I just seeing the benefit of an average over individual readings?)</p>
<p>17:30:52
Readings : 20<br>
Reading Delay : 0.010 s<br>
Average Reading: 1112 mm<br>
Minimum Reading: 1075 mm<br>
Maximum Reading: 1156 mm<br>
Std Dev Reading: 19 mm<br>
Three SD readings vs ave reading: 5.1 %<br>
Adjusted For Error Average Distance: 1099 mm </p>
<p>17:30:55<br>
Readings : 20<br>
Reading Delay : 0.010 s<br>
Average Reading: 1110 mm<br>
Minimum Reading: 1078 mm<br>
Maximum Reading: 1140 mm<br>
Std Dev Reading: 17 mm<br>
Three SD readings vs ave reading: 4.6 %<br>
Adjusted For Error Average Distance: 1097 mm </p>
<p>17:30:59<br>
Readings : 20<br>
Reading Delay : 0.010 s<br>
Average Reading: 1112 mm<br>
Minimum Reading: 1057 mm<br>
Maximum Reading: 1157 mm<br>
Std Dev Reading: 20 mm<br>
Three SD readings vs ave reading: 5.5 %<br>
Adjusted For Error Average Distance: 1099 mm </p>
<p>17:31:03<br>
Readings : 20<br>
Reading Delay : 0.010 s<br>
Average Reading: 1115 mm<br>
Minimum Reading: 1090 mm<br>
Maximum Reading: 1146 mm<br>
Std Dev Reading: 14 mm<br>
Three SD readings vs ave reading: 3.8 %<br>
Adjusted For Error Average Distance: 1101 mm </p>
<p>17:31:07<br>
Readings : 20<br>
Reading Delay : 0.010 s<br>
Average Reading: 1118 mm<br>
Minimum Reading: 1060 mm<br>
Maximum Reading: 1157 mm<br>
Std Dev Reading: 21 mm<br>
Three SD readings vs ave reading: 5.6 %<br>
Adjusted For Error Average Distance: 1104 mm </p>
<p>Average Average: 1113 mm<br>
Minimum Average: 1110 mm<br>
Maximum Average: 1118 mm<br>
Std Dev Average: 3 mm<br>
<strong>Three SD averages vs ave reading: 0.7 %</strong><br>
Ave all Readings: 1112 mm<br>
SDev all Reading: 16 mm<br>
<strong>Three SD all vs ave all readings: 5.0 %</strong> </p>
<hr>
| eyeballfrog | 395,748 | <p>If the subsets are all the same size, the averages are necessarily the same. Suppose you divide your <span class="math-container">$N$</span> measurements <span class="math-container">$a$</span> into <span class="math-container">$n$</span> groups of <span class="math-container">$m$</span>. Then
<span class="math-container">$$
\frac{1}{n}\sum_{i = 1}^n\left(\frac{1}{m}\sum_{j=1}^m a_{ij}\right) = \frac{1}{mn} \sum_{i = 1}^n\sum_{j=1}^m a_{ij} = \frac{1}{N}\sum_{i = 1}^n\sum_{j=1}^m a_{ij}
$$</span>
Since each measurement appears exactly once in that sum, it is simply the average over all the measurements.</p>
<p>If the subsets are not all the same size, then measurements from smaller subsets will have higher weight in the final average and it will not always equal the average of all the measurements. This should probably be avoided unless you have a specific reason for doing so.</p>
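A quick simulation (my own sketch; the Gaussian noise model and the numbers are assumptions chosen to match the logs) shows that the spread of 20-reading means is about the per-reading SD divided by the square root of 20 (the standard error of the mean), so the smaller SD of the averages reflects the averaging itself, not a better estimate than the average of all readings:

```python
import random
import statistics

# Simulated readings: mean 1112 mm, SD 16 mm, as in the logged data.
random.seed(1)
readings = [random.gauss(1112, 16) for _ in range(20 * 200)]  # 200 groups of 20
group_means = [statistics.mean(readings[i:i + 20])
               for i in range(0, len(readings), 20)]
sd_all = statistics.stdev(readings)
sd_means = statistics.stdev(group_means)
# sd_means ~ sd_all / sqrt(20) ~ 3.6 mm: the standard error, not extra accuracy.
assert sd_means < sd_all / 2
```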
|
4,232,539 | <p><a href="https://en.wikipedia.org/wiki/Braikenridge%E2%80%93Maclaurin_theorem" rel="nofollow noreferrer">Braikenridge–Maclaurin theorem</a> is the converse to <a href="https://en.wikipedia.org/wiki/Pascal%27s_theorem" rel="nofollow noreferrer">Pascal's theorem</a>.</p>
<p><a href="https://i.stack.imgur.com/g70d9.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/g70d9.jpg" alt="enter image description here" /></a></p>
<p>I put point <em>I</em> onto the origin of Cartesian coordinates, and rotate <em>GIH</em> onto y-axis (set <em>GI</em> = <em>g</em> and <em>IH</em> = -<em>h</em>), then get</p>
<p><span class="math-container">$\begin{cases}AB:y=jx+g\\DE:y=kx+g\\BC:y=mx+h\\EF:y=nx+h\\AF:y=px\\CD:y=qx\end{cases}$</span></p>
<p>My task is to prove 6 points</p>
<p><span class="math-container">$\begin{cases}A \left( \frac{g}{- j + p}, \ \frac{g p}{- j + p}\right)\\
B \left( \frac{- g + h}{j - m}, \ \frac{- g m + h j}{j - m}\right)\\
C \left( \frac{h}{- m + q}, \ \frac{h q}{- m + q}\right)\\
D \left( \frac{g}{- k + q}, \ \frac{g q}{- k + q}\right)\\
E \left( \frac{- g + h}{k - n}, \ \frac{- g n + h k}{k - n}\right)\\
F \left( \frac{h}{- n + p}, \ \frac{h p}{- n + p}\right)\end{cases}$</span></p>
<p>lie on a conic.</p>
<p>Here I assume the conic doesn't go through origin <em>I</em>, so these points match the conic</p>
<p><span class="math-container">$ax^2+bxy+cy^2+dx+ey+1=0$</span></p>
<p>According to <a href="https://en.wikipedia.org/wiki/Five_points_determine_a_conic#Construction" rel="nofollow noreferrer">this rule</a>, the rest thing is to prove</p>
<p><span class="math-container">$\det\left[\begin{matrix}\frac{g^{2}}{\left(- j + p\right)^{2}} & \frac{g^{2} p}{\left(- j + p\right)^{2}} & \frac{g^{2} p^{2}}{\left(- j + p\right)^{2}} & \frac{g}{- j + p} & \frac{g p}{- j + p} & 1\\\frac{\left(- g + h\right)^{2}}{\left(j - m\right)^{2}} & \frac{\left(- g + h\right) \left(- g m + h j\right)}{\left(j - m\right)^{2}} & \frac{\left(- g m + h j\right)^{2}}{\left(j - m\right)^{2}} & \frac{- g + h}{j - m} & \frac{- g m + h j}{j - m} & 1\\\frac{h^{2}}{\left(- m + q\right)^{2}} & \frac{h^{2} q}{\left(- m + q\right)^{2}} & \frac{h^{2} q^{2}}{\left(- m + q\right)^{2}} & \frac{h}{- m + q} & \frac{h q}{- m + q} & 1\\\frac{g^{2}}{\left(- k + q\right)^{2}} & \frac{g^{2} q}{\left(- k + q\right)^{2}} & \frac{g^{2} q^{2}}{\left(- k + q\right)^{2}} & \frac{g}{- k + q} & \frac{g q}{- k + q} & 1\\\frac{\left(- g + h\right)^{2}}{\left(k - n\right)^{2}} & \frac{\left(- g + h\right) \left(- g n + h k\right)}{\left(k - n\right)^{2}} & \frac{\left(- g n + h k\right)^{2}}{\left(k - n\right)^{2}} & \frac{- g + h}{k - n} & \frac{- g n + h k}{k - n} & 1\\\frac{h^{2}}{\left(- n + p\right)^{2}} & \frac{h^{2} p}{\left(- n + p\right)^{2}} & \frac{h^{2} p^{2}}{\left(- n + p\right)^{2}} & \frac{h}{- n + p} & \frac{h p}{- n + p} & 1\end{matrix}\right]=0$</span></p>
<p>It doesn't look too complicated. I used SymPy for this task, but it has been running for 6 hours and still hasn't returned.</p>
<p>Here is my code:</p>
<pre><code>from sympy import *
def L(a, b):
return Eq(y, a * x + b)
def intersect(L1, L2):
P = solve([L1, L2], (x, y))
return simplify(P[x]), simplify(P[y])
g, h, j, k, m, n, p, q, x, y = symbols('g, h, j, k, m, n, p, q, x, y')
AB, DE, BC, EF, AF, CD = L(j, g), L(k, g), L(m, h), L(n, h), L(p, 0), L(q, 0)
A, E, C = intersect(AB, AF), intersect(DE, EF), intersect(BC, CD)
D, B, F = intersect(CD, DE), intersect(AB, BC), intersect(AF, EF)
mat = []
for P in [A, B, C, D, E, F]:
x, y = P[0], P[1]
mat.append([x * x, x * y, y * y, x, y, 1])
print(Matrix(mat).det())
</code></pre>
<p>Are there any ways to simplify this matrix to get the result quickly?</p>
<p><strong>UPDATE</strong></p>
<p>Applying brainjam's solution together with <a href="https://stackoverflow.com/a/37056325/4260959">the trick</a>, it now runs quickly. Here is the improved code:</p>
<pre><code>from sympy import *
def L(a, b):
return Eq(y, a * x + b)
def intersect(L1, L2):
P = solve([L1, L2], (x, y))
return simplify(P[x]), simplify(P[y])
g, h, j, k, m, n, p, q, x, y = symbols('g, h, j, k, m, n, p, q, x, y')
AB, DE, BC, EF, AF, CD = L(j, g), L(k, g), L(m, h), L(n, h), L(p, 0), L(q, 0)
A, E, C = intersect(AB, AF), intersect(DE, EF), intersect(BC, CD)
D, B, F = intersect(CD, DE), intersect(AB, BC), intersect(AF, EF)
print('A:', A)
print('B:', B)
print('C:', C)
print('D:', D)
print('E:', E)
print('F:', F)
points = [A, B, C, D, E, F]
denominators = [j - p, j - m, q - m, q - k, k - n, p - n]
coefficients = [x * x, x * y, y * y, x, y, 1]
subs = {}
mat = []
for s in range(6):
row = []
denominator = denominators[s] ** 2
subs_xy = {}
subs_xy[x] = points[s][0]
subs_xy[y] = points[s][1]
for t in range(6):
rst = symbols('r' + str(s) + str(t))
subs[rst] = expand(simplify(N(coefficients[t] * denominator, subs = subs_xy)))
row.append(rst)
mat.append(row)
print('M =', N(Matrix(mat), subs = subs))
print('det M =', expand(N(Matrix(mat).det(), subs = subs)))
</code></pre>
| brainjam | 1,257 | <p>If you search for information like "sympy determinant slow" you get lots of hits. The gist of the problem is that the det() function seems to be trying to simplify as it goes, and can get bogged down once the matrices are <span class="math-container">$4\times 4$</span> or larger.</p>
<p>One problem with your code is that you have lots of rational expressions, and this is probably making things worse.</p>
<p>Since you just want to confirm that the determinant of your matrix is zero, you can multiply each row by its least common denominator. So, for example, the first row</p>
<p><span class="math-container">$$
\frac{g^{2}}{\left(- j + p\right)^{2}} \quad \frac{g^{2} p}{\left(- j + p\right)^{2}} \quad \frac{g^{2} p^{2}}{\left(- j + p\right)^{2}} \quad \frac{g}{- j + p} \quad \frac{g p}{- j + p} \quad 1
$$</span></p>
<p>becomes</p>
<p><span class="math-container">$$
g^{2} \quad g^{2} p \quad g^{2} p^{2} \quad g(- j + p) \quad g p(- j + p) \quad \left(- j + p\right)^{2}
$$</span></p>
<p>With this modification I ran your code on <a href="https://replit.com/" rel="nofollow noreferrer">replit.com</a> and got the desired result (zero) after 15 minutes.</p>
<p>This is still a long time (but better than 6 hours or never). You could try something like <a href="https://stackoverflow.com/a/37056325/242848">this trick</a> to eliminate the simplify-as-you-go problem to get a raw determinant, and then simplify the raw determinant.</p>
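As a toy illustration of why the row scaling is safe for zero-testing (my own example, not the question's matrix): scaling row $i$ by $c_i$ multiplies the determinant by $\prod c_i$, so a zero determinant stays zero while the entries become polynomial (far cheaper to expand):

```python
from fractions import Fraction

# Toy 2x2 version of the row-scaling step, with t fixed at a sample nonzero
# rational value: clearing each row's denominator multiplies det(M) by the
# product of the scale factors, so det(M) = 0 iff the scaled det is 0.
t = Fraction(7)
M = [[1 / t, Fraction(1)], [1 / t**2, 1 / t]]   # rational entries, det = 0

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

assert det2(M) == 0
M_scaled = [[t * v for v in M[0]], [t**2 * v for v in M[1]]]  # rows [1, t], [1, t]
assert det2(M_scaled) == t * t**2 * det2(M) == 0
```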
|
4,232,539 | <p><a href="https://en.wikipedia.org/wiki/Braikenridge%E2%80%93Maclaurin_theorem" rel="nofollow noreferrer">Braikenridge–Maclaurin theorem</a> is the converse to <a href="https://en.wikipedia.org/wiki/Pascal%27s_theorem" rel="nofollow noreferrer">Pascal's theorem</a>.</p>
<p><a href="https://i.stack.imgur.com/g70d9.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/g70d9.jpg" alt="enter image description here" /></a></p>
<p>I put point <em>I</em> onto the origin of Cartesian coordinates, and rotate <em>GIH</em> onto y-axis (set <em>GI</em> = <em>g</em> and <em>IH</em> = -<em>h</em>), then get</p>
<p><span class="math-container">$\begin{cases}AB:y=jx+g\\DE:y=kx+g\\BC:y=mx+h\\EF:y=nx+h\\AF:y=px\\CD:y=qx\end{cases}$</span></p>
<p>My task is to prove 6 points</p>
<p><span class="math-container">$\begin{cases}A \left( \frac{g}{- j + p}, \ \frac{g p}{- j + p}\right)\\
B \left( \frac{- g + h}{j - m}, \ \frac{- g m + h j}{j - m}\right)\\
C \left( \frac{h}{- m + q}, \ \frac{h q}{- m + q}\right)\\
D \left( \frac{g}{- k + q}, \ \frac{g q}{- k + q}\right)\\
E \left( \frac{- g + h}{k - n}, \ \frac{- g n + h k}{k - n}\right)\\
F \left( \frac{h}{- n + p}, \ \frac{h p}{- n + p}\right)\end{cases}$</span></p>
<p>lie on a conic.</p>
<p>Here I assume the conic doesn't go through origin <em>I</em>, so these points match the conic</p>
<p><span class="math-container">$ax^2+bxy+cy^2+dx+ey+1=0$</span></p>
<p>According to <a href="https://en.wikipedia.org/wiki/Five_points_determine_a_conic#Construction" rel="nofollow noreferrer">this rule</a>, the rest thing is to prove</p>
<p><span class="math-container">$\det\left[\begin{matrix}\frac{g^{2}}{\left(- j + p\right)^{2}} & \frac{g^{2} p}{\left(- j + p\right)^{2}} & \frac{g^{2} p^{2}}{\left(- j + p\right)^{2}} & \frac{g}{- j + p} & \frac{g p}{- j + p} & 1\\\frac{\left(- g + h\right)^{2}}{\left(j - m\right)^{2}} & \frac{\left(- g + h\right) \left(- g m + h j\right)}{\left(j - m\right)^{2}} & \frac{\left(- g m + h j\right)^{2}}{\left(j - m\right)^{2}} & \frac{- g + h}{j - m} & \frac{- g m + h j}{j - m} & 1\\\frac{h^{2}}{\left(- m + q\right)^{2}} & \frac{h^{2} q}{\left(- m + q\right)^{2}} & \frac{h^{2} q^{2}}{\left(- m + q\right)^{2}} & \frac{h}{- m + q} & \frac{h q}{- m + q} & 1\\\frac{g^{2}}{\left(- k + q\right)^{2}} & \frac{g^{2} q}{\left(- k + q\right)^{2}} & \frac{g^{2} q^{2}}{\left(- k + q\right)^{2}} & \frac{g}{- k + q} & \frac{g q}{- k + q} & 1\\\frac{\left(- g + h\right)^{2}}{\left(k - n\right)^{2}} & \frac{\left(- g + h\right) \left(- g n + h k\right)}{\left(k - n\right)^{2}} & \frac{\left(- g n + h k\right)^{2}}{\left(k - n\right)^{2}} & \frac{- g + h}{k - n} & \frac{- g n + h k}{k - n} & 1\\\frac{h^{2}}{\left(- n + p\right)^{2}} & \frac{h^{2} p}{\left(- n + p\right)^{2}} & \frac{h^{2} p^{2}}{\left(- n + p\right)^{2}} & \frac{h}{- n + p} & \frac{h p}{- n + p} & 1\end{matrix}\right]=0$</span></p>
<p>It doesn't look too complicated. I used SymPy for this task, but it has been running for 6 hours and still hasn't returned.</p>
<p>Here is my code:</p>
<pre><code>from sympy import *
def L(a, b):
return Eq(y, a * x + b)
def intersect(L1, L2):
P = solve([L1, L2], (x, y))
return simplify(P[x]), simplify(P[y])
g, h, j, k, m, n, p, q, x, y = symbols('g, h, j, k, m, n, p, q, x, y')
AB, DE, BC, EF, AF, CD = L(j, g), L(k, g), L(m, h), L(n, h), L(p, 0), L(q, 0)
A, E, C = intersect(AB, AF), intersect(DE, EF), intersect(BC, CD)
D, B, F = intersect(CD, DE), intersect(AB, BC), intersect(AF, EF)
mat = []
for P in [A, B, C, D, E, F]:
x, y = P[0], P[1]
mat.append([x * x, x * y, y * y, x, y, 1])
print(Matrix(mat).det())
</code></pre>
<p>Are there any ways to simplify this matrix to get the result quickly?</p>
<p><strong>UPDATE</strong></p>
<p>Applying brainjam's solution together with <a href="https://stackoverflow.com/a/37056325/4260959">the trick</a>, it now runs quickly. Here is the improved code:</p>
<pre><code>from sympy import *
def L(a, b):
return Eq(y, a * x + b)
def intersect(L1, L2):
P = solve([L1, L2], (x, y))
return simplify(P[x]), simplify(P[y])
g, h, j, k, m, n, p, q, x, y = symbols('g, h, j, k, m, n, p, q, x, y')
AB, DE, BC, EF, AF, CD = L(j, g), L(k, g), L(m, h), L(n, h), L(p, 0), L(q, 0)
A, E, C = intersect(AB, AF), intersect(DE, EF), intersect(BC, CD)
D, B, F = intersect(CD, DE), intersect(AB, BC), intersect(AF, EF)
print('A:', A)
print('B:', B)
print('C:', C)
print('D:', D)
print('E:', E)
print('F:', F)
points = [A, B, C, D, E, F]
denominators = [j - p, j - m, q - m, q - k, k - n, p - n]
coefficients = [x * x, x * y, y * y, x, y, 1]
subs = {}
mat = []
for s in range(6):
row = []
denominator = denominators[s] ** 2
subs_xy = {}
subs_xy[x] = points[s][0]
subs_xy[y] = points[s][1]
for t in range(6):
rst = symbols('r' + str(s) + str(t))
subs[rst] = expand(simplify(N(coefficients[t] * denominator, subs = subs_xy)))
row.append(rst)
mat.append(row)
print('M =', N(Matrix(mat), subs = subs))
print('det M =', expand(N(Matrix(mat).det(), subs = subs)))
</code></pre>
| auntyellow | 919,440 | <p>I found simply <code>Matrix(mat).det(method='domain-ge')</code> works. See <a href="https://docs.sympy.org/latest/modules/matrices/matrices.html#sympy.matrices.matrices.MatrixDeterminant.det" rel="nofollow noreferrer">here</a> for details.</p>
<p>Full code:</p>
<pre><code>from sympy import *
def L(a, b):
return Eq(y, a * x + b)
def intersect(L1, L2):
P = solve([L1, L2], (x, y))
return simplify(P[x]), simplify(P[y])
g, h, j, k, m, n, p, q, x, y = symbols('g, h, j, k, m, n, p, q, x, y')
AB, DE, BC, EF, AF, CD = L(j, g), L(k, g), L(m, h), L(n, h), L(p, 0), L(q, 0)
A, E, C = intersect(AB, AF), intersect(DE, EF), intersect(BC, CD)
D, B, F = intersect(CD, DE), intersect(AB, BC), intersect(AF, EF)
mat = []
for P in [A, B, C, D, E, F]:
x, y = P[0], P[1]
mat.append([x*x, x*y, y*y, x, y, 1])
print('M =', Matrix(mat))
print('det M =', Matrix(mat).det(method='domain-ge'))
</code></pre>
|
114,909 | <p>Consider this simple code:</p>
<pre><code>Grid[RandomInteger[{0, 1}, {10, 10}], Background -> LightBlue]
</code></pre>
<p><a href="https://i.stack.imgur.com/PfWwb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PfWwb.png" alt="output"></a></p>
<p>Is it possible to modify the code so that the background only apply for certain entries of the matrix, e.g. only for non zero entries? </p>
| ubpdqn | 1,997 | <pre><code>r = RandomInteger[{0, 1}, {10, 10}];
bg = Thread[SparseArray[r]["NonzeroPositions"] -> LightBlue];
Grid[r, Background -> {None, None, bg}]
</code></pre>
<p><a href="https://i.stack.imgur.com/jmdPS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jmdPS.png" alt="enter image description here"></a></p>
|
1,406,462 | <p>I've been studying an introductory book on set theory that uses the <a href="https://en.wikipedia.org/wiki/Zermelo%E2%80%93Fraenkel_set_theory">ZFC</a> set of axioms, and it's been really exciting to construct and define numbers and operations in terms of sets. But now I've seen that there are other alternatives to this approach to construct mathematics like <a href="https://en.wikipedia.org/wiki/New_Foundations">New Foundations</a> (NF), category theory and others. </p>
<p>My question is, what was the motivation to look for an alternative for ZFC? Does ZFC have any weakneses that served as motivation? I know there are some concepts that don't exist in ZFC like the universal set or a set that belongs to itself. So I thought that maybe some branches of mathematics <em>needed</em> these concepts in order to develop them further, is that true?</p>
| Paul Sinclair | 258,282 | <p>Yes. Category theory is one example of a theory than transcends ZFC. Some other examples include Hyperreals and Surreals. There are many more.</p>
|
1,391,014 | <p>Let $f,g:[0,1] \rightarrow [0,1]$ be continuous functions such that $f\circ g =g\circ f$. Prove that there exists $x \in [0,1]$ such that $f(x)=g(x)$</p>
| Paul Sinclair | 258,282 | <p>I haven't worked through it all, but I would start by assuming wlog that $f(0) < g(0)$, then use the composition reversal to show that $f < g$ cannot hold everywhere. Then the result will follow from the intermediate value theorem.</p>
|
2,495,789 | <p>$$\begin{pmatrix}4&-6&2&-6\\ -2&3&-1&3\end{pmatrix}$$</p>
<p>Am I supposed to move the $0 $ to the other side of the equation and find the inverse of the matrix above or is there another way?</p>
| Tancredi | 487,564 | <p>A $2\times 4$ matrix <strong>cannot have</strong> an inverse. You are searching for the kernel of $A$. With a few <a href="https://en.wikipedia.org/wiki/Gaussian_elimination" rel="nofollow noreferrer">Gauss moves</a> on the rows of $A$ you can simplify the system $A·x = 0$ and read off the solutions. Here the two rows are proportional, so the rank is $1$ and the solution set, $\ker{A}$, is a subspace of dimension $3$ (a hyperplane through the origin) in $\mathbb{R}^4$.</p>
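A quick check (my own sketch): the first row is $-2$ times the second, so the rank is $1$ and the null space of this $2\times4$ matrix has dimension $4-1=3$; an explicit kernel basis can be verified directly:

```python
from fractions import Fraction

# Rows of A: row1 = -2 * row2, so rank(A) = 1 and dim ker(A) = 4 - 1 = 3.
row1 = [Fraction(v) for v in (4, -6, 2, -6)]
row2 = [Fraction(v) for v in (-2, 3, -1, 3)]
assert row1 == [-2 * v for v in row2]

# From -2x1 + 3x2 - x3 + 3x4 = 0: x3 = -2x1 + 3x2 + 3x4, so letting the free
# variables x1, x2, x4 be unit vectors gives a basis of the kernel.
basis = [(1, 0, -2, 0), (0, 1, 3, 0), (0, 0, 3, 1)]
for x1, x2, x3, x4 in basis:
    assert 4*x1 - 6*x2 + 2*x3 - 6*x4 == 0
    assert -2*x1 + 3*x2 - x3 + 3*x4 == 0
```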
|
1,046,582 | <p>Question : </p>
<p>$$\lim_{x \to 0} \frac{e^{\tan^3x}-e^{x^3}}{2\ln (1+x^3\sin^2x)}$$</p>
<p>Here I tried $\tan x$ and $\sin x$ expansion in numerator and denominator which are as follows : </p>
<p>$$\tan x =x +\frac{x^3}{3}+\frac{2x^5}{15} \cdots$$ </p>
<p>$$\sin x = x-\frac{x^3}{3!}+\frac{x^5}{5!}\cdots$$ </p>
<p>but this method is not working, and the other alternative of using L'Hospital's rule is not working on this either ... please help me figure out how to tackle this limit problem, thanks.</p>
| Dr. Sonnhard Graubner | 175,066 | <p>The series expansion of your term is $${\frac {1}{2}}+{\frac {8}{15}}{x}^{2}+{\frac {1}{2}}{x}^{3}+{\frac {367}{945}}{x}^{4}+O \left( {x}^{5} \right),$$ thus the limit as $x$ tends to zero exists and equals $$\frac{1}{2}.$$
Sorry for my mistake, I corrected it.</p>
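A numeric spot-check of this value (my own illustration, not part of the original answer):

```python
import math

# Numerically check (e^{tan^3 x} - e^{x^3}) / (2 ln(1 + x^3 sin^2 x)) -> 1/2.
def g(x):
    num = math.exp(math.tan(x)**3) - math.exp(x**3)
    den = 2 * math.log(1 + x**3 * math.sin(x)**2)
    return num / den

for x in (0.1, 0.05, 0.01):
    assert abs(g(x) - 0.5) < x   # approaches 1/2 as x -> 0
```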
|
2,536,877 | <p>I have to evaluate this integral:
$$\int \frac{\tan^5 x}{\cos^3{x}} dx.$$</p>
<p>I guess I should use substitution, but I don't know what to substitute =(</p>
| Community | -1 | <p><strong>Hint</strong></p>
<p>Notice that
$$\frac{\tan^5x}{\cos^3x}=\frac{\sin^5x}{\cos^8x}$$
Then if you substitute $t=\cos x$ you get
$$-\frac{\left(1-t^2\right)^2}{t^8}$$
and then just integrate each term.</p>
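Carrying the substitution through gives the antiderivative $\frac1{7t^7}-\frac2{5t^5}+\frac1{3t^3}$ with $t=\cos x$ (my own computation); a numeric spot-check that its derivative matches the integrand:

```python
import math

# Antiderivative from the substitution t = cos x:
#   -∫ (1 - t^2)^2 / t^8 dt = 1/(7 t^7) - 2/(5 t^5) + 1/(3 t^3)
def F(x):
    t = math.cos(x)
    return 1/(7*t**7) - 2/(5*t**5) + 1/(3*t**3)

def integrand(x):
    return math.tan(x)**5 / math.cos(x)**3

# Spot-check F' = integrand with a central difference:
h = 1e-6
for x in (0.3, 0.7, 1.0):
    deriv = (F(x + h) - F(x - h)) / (2*h)
    assert abs(deriv - integrand(x)) < 1e-4
```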
|
1,287,236 | <p>Need to calculate $\sin 15^\circ-\cos 15^\circ$. I got zero, but it is wrong, though it seems to me that I was solving it correctly. I was doing it this way:
$$\sin (45^\circ-30^\circ) - \cos (45^\circ-30^\circ) = \frac{\sqrt{6}-\sqrt{2}}{4} - \frac{\sqrt{6}+\sqrt{2}}{4} = 0$$</p>
| Mythomorphic | 152,277 | <p>The second last step</p>
<p>$$\frac{\sqrt{6}-\sqrt{2}}{4} - \frac{\sqrt{6}+\sqrt{2}}{4} = \frac{-2\sqrt{2}}{4}=-\frac{\sqrt{2}}{2}$$</p>
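A quick numeric confirmation of this value (illustrative):

```python
import math

# Check: sin 15° - cos 15° = -sqrt(2)/2, not 0.
deg15 = math.radians(15)
value = math.sin(deg15) - math.cos(deg15)
assert math.isclose(value, -math.sqrt(2)/2)
```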
|
2,636,929 | <p>I know that it can be obtained by simply differentiating the equation and finding the roots of the derivative, but it is a lengthy and tricky process. I am looking for a faster and more straightforward way.</p>
<p>A more effective and quick way to find the answer via simple differentiation will also be appreciated.</p>
| Community | -1 | <p><strong>Hint</strong>: Use the fact that $\arcsin x+\arccos x=\frac{\pi}{2}$ on $x\in[-1,1]$: let $t=\arccos x\in[0,\pi]$, $\arcsin x=\frac{\pi}{2}-t\in\left[-\frac{\pi}{2},\frac{\pi}{2}\right]$ and you need to minimise/maximise $(\frac{\pi}{2}-t)^4+t^4$. Now it is a lot easier to find the derivatives etc.</p>
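A numeric sketch of the hint (my own illustration): sampling $g(t)=(\frac{\pi}{2}-t)^4+t^4$ over $t\in[0,\pi]$ recovers the minimum $\pi^4/128$ at the interior critical point $t=\pi/4$ and the maximum $17\pi^4/16$ at the endpoint $t=\pi$:

```python
import math

# g(t) = (pi/2 - t)^4 + t^4 on [0, pi], with t = arccos(x).
def g(t):
    return (math.pi/2 - t)**4 + t**4

ts = [k * math.pi / 100000 for k in range(100001)]
gmin = min(g(t) for t in ts)   # ~ g(pi/4) = pi^4 / 128
gmax = max(g(t) for t in ts)   # = g(pi)  = 17 pi^4 / 16
assert math.isclose(gmin, math.pi**4 / 128, rel_tol=1e-3)
assert math.isclose(gmax, 17 * math.pi**4 / 16, rel_tol=1e-6)
```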
|
4,236,875 | <p>Today, I was solving a problem which is described below.</p>
<blockquote>
<p>Show that the equation of any circle passing through the points of intersection of the ellipse <span class="math-container">$(x + 2)^2 + 2y^2 = 18$</span> and the ellipse <span class="math-container">$9(x - 1)^2 + 16y^2= 25$</span> can be written in the form <span class="math-container">$x^2 - 2ax + y^2 = 5 - 4a$</span>.</p>
</blockquote>
<p>I tried to solve it and found the intersection points of both ellipses, which are <span class="math-container">$(2,-1)$</span> and <span class="math-container">$(2,1)$</span>. But the problem is: how can I show the required result?</p>
<p>I haven't worked out the equation of a circle from just two given points lying on the circle.
Please help me solve it.</p>
<p><a href="https://i.stack.imgur.com/uDZZk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uDZZk.png" alt="enter image description here" /></a></p>
| orangeskid | 168,051 | <p>HINT:</p>
<p>You have an equation of one circle <span class="math-container">$x^2 + y^2 = 5$</span> that passes through both points, and now add to that a multiple of the equation of the line through both points.</p>
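<p>The hint is easy to verify directly (an illustrative check): $x^2-2ax+y^2=5-4a$ is the circle $x^2+y^2=5$ plus $-2a$ times the line $x-2=0$ through the two intersection points, so every member of the family passes through $(2,\pm 1)$ for any $a$:</p>

```python
def family(x, y, a):
    # residual of x^2 - 2 a x + y^2 - (5 - 4 a); zero means the point lies on the circle
    return x ** 2 - 2 * a * x + y ** 2 - (5 - 4 * a)

residuals = [family(2, s, a) for s in (1, -1) for a in (-3.0, 0.0, 1.5, 10.0)]
```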
|
1,506,719 | <p>Let us have measurable spaces $(S_1, \Sigma_1)$ and $(S_2, \Sigma_2)$.
Idea of measurable function $f$ with respect to $\Sigma_1,\Sigma_2$ is the following. $f:$ $S_1 \to S_2$ has to be such that:
$$\forall E_2: f^{-1}(E_2) \in \Sigma_1, $$
where
$ E_2 \in \Sigma_2, E_1 \in \Sigma_1 $.</p>
<hr>
<p>Another author looks for elements of $S_1$, lets say $x$, and says that function $f$ is measurable if:
$$\{x: f(x) \in E_2 \} \in \Sigma_1 $$</p>
<hr>
<p>My question is:
WHY we have: $f$ from the $S_1 \to S_2$ but not $f$ from the $E_1 \to E_2$!?</p>
<p>Or, if we wish to travel from sets, then the inverse function $f^{-1}$ will be from the $S_2 \to S_1$.</p>
<p>I made a sketch of the 1st definition of this measurable function, and my picture does not seems to be coherent with what I naturally want to find.
<a href="https://i.stack.imgur.com/F4hvb.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/F4hvb.jpg" alt="enter image description here"></a></p>
<blockquote>
<p>I mean for me it would be natural to
define that $f^{-1}$ goes back
the same route from which it started ($S_2 \to S_1$).</p>
</blockquote>
<p>Wikipedia and the few books I have been reading do not shed light on this topic beyond what is already unclear to me. Other googling did not help either. Thank you.</p>
<hr>
<hr>
<p><strong><em>UPDATE</em></strong>
Thank you all very much. All three answers complement each other and give me a wider picture of what to think about. I was thinking the hole day, and will after a while, but I have troubles and many ideas still somehow mix in the head. </p>
<p>To summarize. The first take home message I got:</p>
<p>1) From simple calculus. We build image and preimage functions firstly on elements of some set. Then we have something like a function on sets.</p>
<p>2) There are no restrictions (that an inverse exists, or that the function is injective, surjective or bijective) on a measurable function $f$.</p>
<p>3) There was a great insight into why exactly the preimage is what we have to use in building the mapping. I agree with it somewhere in my heart, but I am still trying to make it understandable to my brain, and I fail. I will post more text below about what I understand and what I do not.</p>
<p>4) What about the second definition, where we have just the image? I mean, are these definitions equivalent? It does not use the preimage function.</p>
<p>5) I am also trying to find analogue in Probability Theory, where we have measurable space $(\Omega,\mathcal{F})$, measurable function(r.v.) $X$ that makes mapping $X: \Omega \to \mathcal{R}$, where $(\mathcal{R},\sigma(\mathcal{R}))$ is another measurable space.</p>
<p>My question here is the following.
Let $w \in \Omega, X(w) \to r$ where $r \in \mathcal{R}$.
$X^{-1}(r) = w$.
Here we do not use correspondence of subsets of sigma algebras. Am I right? I omitted which r and w should be because I think I can write something stupid=) </p>
<p>6) And to conclude. The general problem/question for me was and remains that I keep trying to imagine that for a measurable function we should have some kind of correspondence of image and preimage, but this is not enough (or there is some other reason), as I read from the answers.
Maybe it is possible to make an example/picture answering why the function (image) goes from set X to set Y, while the preimage function need not work with the same "matter", in our case subsets of the sigma algebras generated by the initial sets (or, in short, part of my question is: why is a measurable function not an inverse function mapping between the same sets X and Y?).</p>
<p>Am I right that this definition of measurable function ( $\forall E_2: f^{-1}(E_2) \in \Sigma_1, $) allows for certain sets $E_1$ to have no correspondence with all the subsets of sigma algebra $\Sigma_2$? If yes, this is the point where $f$ may have mapping from $E_1$ to a an empty set $\emptyset$?</p>
<p>FIN1. Is such a mnemonics mathematically correct: </p>
<p>Function $f$ that is an image of set $X$ to set $Y$ (here I mean image of every element $x$ of $X$ to elements $y$ of $Y$) is measurable, if
for any subset generated by $\sigma(Y)$ we have a preimage that is inside a $\sigma(X)$. </p>
<p>FIN2.And again, </p>
<blockquote>
<p>For every function $f$, subset $A$ of the domain and subset $B$ of the
codomain we have $A \subset f^{−1}(f(A))$ and $f(f^{−1}(B)) \subset B$. </p>
<p>If $f$ is injective we have $A = f^{−1}(f(A))$ and if ''f'' is
surjective we have $f(f^{−1}(B)) = B$.</p>
</blockquote>
<p>So, in order to understand this better I wanted at least to show myself visually that this holds.</p>
<p>I have included picture of my thoughts which give me completely opposite result. Should I post this as separate question?
<a href="https://i.stack.imgur.com/UjSTu.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UjSTu.jpg" alt="enter image description here"></a></p>
| Alp Uzman | 169,085 | <p>John Dawkins gives a nice explanation in his answer, but I'd like to mention some further details. I believe you are confused mainly due to notation, or a so-called "notational abuse" we like to do, namely we use the same notation for a function that maps "points" to "points" and its associated image function that maps "point sets" to "point sets". I'll leave what a "point" means vague, for our purposes it is sufficient to keep in mind that if $x$ is a point in some set $X$, then $\{x\}$ is a member of the power set $\mathcal{P}(X)$ of $X$, and is a point set. In any case, I believe your problem consists of the following parts:</p>
<ol>
<li>Basic set theory, difference between point function $f(x)$ and its associated set functions $f(A)$ and $f^{-1}(B)$;</li>
<li>Definition of a $\sigma$-algebra and a measurable function;</li>
<li>Why a measurable is defined in terms of the preimage function rather than the image function.</li>
</ol>
<hr>
<p><strong>1.</strong> First let's talk about some set-theoretical basics. Let $X$ be a nonempty set. Then we call the collection $\mathcal{P}(X)$ of all subsets of $X$ the <em>power set</em> of $X$. Note that the members of $\mathcal{P}(X)$ are the subsets of $X$ (so your drawing might be a little off unless I interpreted it incorrectly). </p>
<p>Now let $X$ and $Y$ be two nonempty sets and let $f$ be a function from $X$ to $Y$. If $A\in\mathcal{P}(X)$ and $B\in\mathcal{P}(Y)$ (so that $A$ is a subset of $X$ and $B$ is a subset of $Y$), then we denote by $f(A)$ the set of all $f(x)$'s where $x$ is from $A$ and by $f^{-1}(B)$ the set of all $x$'s for which $f(x)$ is a member of $B$, i.e.,</p>
<p>$$f(A):=\{f(x)\in Y|x\in A\}\in\mathcal{P}(Y),$$</p>
<p>$$f^{-1}(B):=\{x\in X|f(x)\in B\}\in\mathcal{P}(X).$$</p>
<p>Note that the $f$ in the expressions "$f(x)$" and "$f(A)$" actually denotes different mathematical objects, even though they are related very naturally (of course one might inquire about how one knows $x$ and $A$ are not of the same hierarchy (indeed, why?)). In some texts the notations $f[A]$ and $f^{-1}[B]$ are used when the argument of $f:X\to Y$ is a subset of the set on which it is defined, that is, $X$. Also keep in mind that I did not claim anything about injectivity of $f$, so $f^{-1}$ does not mean the inverse of $f:X\to Y$, nor does it actually need the existence of the inverse in order to make sense.</p>
<p>When sets are considered together with their power sets, associated with $f:X\to Y$ we immediately see two functions between the power sets. The <em>image function</em> $I_f:\mathcal{P}(X)\to\mathcal{P}(Y)$ takes any member $A$ of $\mathcal{P}(X)$ and maps it to $f(A)$ as defined above, and the <em>preimage function</em> $P_f:\mathcal{P}(Y)\to\mathcal{P}(X)$ takes any member $B$ of $\mathcal{P}(Y)$ and maps it to $f^{-1}(B)$, i.e.,</p>
<p>$$I_f:\mathcal{P}(X)\to\mathcal{P}(Y), \quad I_f(A):=f(A),$$</p>
<p>$$P_f:\mathcal{P}(Y)\to\mathcal{P}(X), P_f(B):=f^{-1}(B).$$</p>
<p>I'll leave it to you to prove that these two functions (between power sets) are well-defined, i.e., they are indeed functions. As a further remark, if $f$ does have an inverse $f^{-1}$, then $P_f$ is the inverse of $I_f$ (why?) (recall that this means that $P_f \circ I_f$ is the identity function of $\mathcal{P}(X)$ and $I_f\circ P_f$ is the identity function of $\mathcal{P}(Y)$).</p>
<p>Since we only need to know $f$, $X$ and $Y$ to know $I_f$ and $P_f$, we denote these last two functions by $f$ and $f^{-1}$ also (this is the notational abuse I was talking about) (one other reason is that there is a everlasting shortage of ink). As an exercise you could compare the meanings of $f^{-1}(y)$ and $f^{-1}(\{y\})$ for some $y\in f(X)$, when $f$ is injective and when it is not injective.</p>
<hr>
<p><strong>2.</strong> We have only sets and their power sets so far, of whose we can compare for instance the cardinalities. However we would like to talk about stronger objects, that is to say, objects that are more than sets, objects that are sets and that have a "structure" of sorts, e.g. an operation (like addition), or a metric (a set with a structure is sometimes called a space); and we would like to compare them. It turns out specifying some subsets of the power sets is a nice way to introduce structures to sets; and investigating how the image function and preimage function behave when restricted to those specified subsets helps us compare the respective structures of each set.</p>
<p>Let $S$ be a nonempty set. A subset of $\mathcal{P}(S)$ (so a collection of subsets of $S$) $\Sigma_S$ is a <em>$\sigma$-algebra</em>, if $S\in\Sigma_S$, and it is closed under complements and countable unions. The pair $(S,\Sigma_S)$ is a <em>measurable space</em>. If $(S_1,\Sigma_1)$ and $(S_2,\Sigma_2)$ are two measurable spaces and $f:S_1\to S_2$ is a function, then $f$ is <em>measurable</em> if</p>
<p>$$\forall E_2\in\Sigma_2(\subseteq\mathcal{P}(S_2)):f^{-1}(E_2)(=P_f(E_2))\in\Sigma_1(\subseteq\mathcal{P}(S_1)).$$</p>
<p>Observe that the parantheses include information we have already established (I wrote them again for convenience). Also note that this is equivalent to requiring that $P_f:\Sigma_2\to\Sigma_1$ is well-defined (why?).</p>
<hr>
<p><strong>3.</strong> I believe Rudin's presentation of the subject in his <a href="http://rads.stackoverflow.com/amzn/click/0070542341" rel="noreferrer"><em>Real and Complex Analysis</em></a> is in accordance with the relatively abstract framework. He declares on p. 8 the following:</p>
<blockquote>
<p>The class of measurable functions plays a fundamental role in integration theory. It has some basic properties in common with another most important class of functions, namely, the continuous ones. [...] Our presentation is therefore organized in such a way that the analogies between the concepts <em>topological space, open set</em> and <em>continuous function</em>, on the one hand, and <em>measurable space, measurable set,</em> and <em>measurable function,</em> on the other, are strongly emphasized.</p>
</blockquote>
<p>For more concrete reasons why we use the preimage function you can have a look at <a href="https://math.stackexchange.com/questions/125122/intuitively-how-should-i-think-of-measurable-functions">this math.SE thread</a> . Admittedly the use of preimages in the definition was intuitive to me, but I'll think about it and if I come up with some alternative explanation I'll add it to my answer.</p>
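<p>For a concrete toy illustration of the definition in part <strong>2</strong> (a hypothetical finite example, not from the text): take $S_1=\{1,2,3,4\}$ with the $\sigma$-algebra generated by the partition $\{\{1,2\},\{3,4\}\}$, and $S_2=\{a,b\}$ with its full power set. A function is measurable exactly when every preimage lands in $\Sigma_1$, i.e. when it is constant on the partition blocks:</p>

```python
from itertools import chain, combinations

S1 = {1, 2, 3, 4}
Sigma1 = {frozenset(), frozenset({1, 2}), frozenset({3, 4}), frozenset(S1)}
S2 = ['a', 'b']
# full power set of S2 serves as Sigma2
Sigma2 = [frozenset(c) for c in chain.from_iterable(combinations(S2, r)
                                                    for r in range(len(S2) + 1))]

def preimage(f, E2):
    # the set function P_f from the answer, restricted to this finite example
    return frozenset(x for x in S1 if f(x) in E2)

def is_measurable(f):
    return all(preimage(f, E2) in Sigma1 for E2 in Sigma2)

f_good = {1: 'a', 2: 'a', 3: 'b', 4: 'b'}.get   # constant on each block
f_bad = {1: 'a', 2: 'b', 3: 'b', 4: 'b'}.get    # splits the block {1, 2}
```

<p>Here <code>f_bad</code> fails because the preimage of $\{a\}$ is $\{1\}$, which is not in $\Sigma_1$.</p>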
|
29,114 | <p>In <a href="https://puzzling.stackexchange.com/questions/72494/use-2-0-1-and-9-to-make-34">this</a> question on the <a href="https://puzzling.stackexchange.com/"><em>Puzzling Stack Exchange</em></a>, I wrote <a href="https://puzzling.stackexchange.com/questions/72494/use-2-0-1-and-9-to-make-34/72504#72504">this</a> answer with some Mathjax, but it formatted a little weirdly. Here is what I wrote:</p>
<hr />
<p><em>Some unecessary information, blah blah blah</em></p>
<p>The following is another one less technical.</p>
<blockquote>
<p><span class="math-container">$$\underbrace{(2+0!)}_{3}\|\underbrace{(1+\sqrt{9})}_{4}=34\tag{$\|=\small\rm concatenation$}$$</span></p>
</blockquote>
<hr />
<p>Notice that the right underbrace is significantly larger than the left one. I presume the square root and the brackets are to explain why this is the case.</p>
<p>Is this a bug? If so, should the large underbrace be fixed for smaller size?</p>
<p>Here are some other examples:</p>
<blockquote>
<p><span class="math-container">$$\underbrace{1+1}_{2}\quad\underbrace{\sqrt{1}+1}_{2}\quad\underbrace{1+\sqrt{1}}_{2}\quad\underbrace{\sqrt{1}+\sqrt{1}}_{2}$$</span> <span class="math-container">$$\underbrace{(1+1)}_{2}\quad\underbrace{(\sqrt{1}+1)}_{2}\quad\underbrace{(1+\sqrt{1})}_{2}\quad\underbrace{(\sqrt{1}+\sqrt{1})}_{2}$$</span></p>
</blockquote>
<p>The same applies to overbraces.</p>
<blockquote>
<p><span class="math-container">$$\overbrace{1+1}^{2}\quad\overbrace{\sqrt{1}+1}^{2}\quad\overbrace{1+\sqrt{1}}^{2}\quad\overbrace{\sqrt{1}+\sqrt{1}}^{2}$$</span> <span class="math-container">$$\overbrace{(1+1)}^{2}\quad\overbrace{(\sqrt{1}+1)}^{2}\quad\overbrace{(1+\sqrt{1})}^{2}\quad\overbrace{(\sqrt{1}+\sqrt{1})}^{2}$$</span></p>
</blockquote>
<hr />
<h1>Original screenshot:</h1>
<p><a href="https://i.stack.imgur.com/R0CSl.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/R0CSl.png" alt="SCREENSHOT" /></a></p>
<p>The <strong>expressions</strong> are nearly all the same size, differing <em>just</em> slightly. But the <strong>braces</strong> (over and under) differ significantly. Apologies if the screenshot is blurry.</p>
<hr />
<h1>New & Improved Screenshot:</h1>
<p><a href="https://i.stack.imgur.com/tDZ82.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tDZ82.png" alt="SCREENSHOT 2" /></a></p>
<p>Thanks heaps to @<a href="https://math.meta.stackexchange.com/users/8297/martin-sleziak">Martin Sleziak</a> for this, particularly <a href="https://math.meta.stackexchange.com/questions/10459/square-brackets-are-not-displaying-correctly/10471#10471">this Meta answer</a>!!</p>
| Davide Cervone | 7,798 | <p>MathJax handles stretchy characters like the over-and under-braces as follows: for smaller versions, it has several sizes to choose from in its fonts (just as TeX does), and will use one of these single-character versions when possible. For larger versions, the stretchy character is made from several glyphs pieced together. In your case, the transition between the single-character versions and the multi-character versions seems to occur between your third and fourth expressions.</p>
<p>MathJax's HTML-CSS output renderer (the default used here on StackExchange) can use several different font families for its output, and it prefers to use one installed on your system rather than downloading them over the web. For example, if you have the STIX fonts (version 1) installed on your system, it will use them in preference to its own TeX-based fonts. In your case the fonts being used are the STIX fonts, so I assume you have those installed locally. Apple has included STIX fonts in their systems for some years now, so if you are using MacOS, you have them automatically (and I think iOS does as well).</p>
<p>It turns out that the single-character over and underbraces in the STIX fonts are thinner than the braces made from the multi-character pieces provided in the font. That is the difference that you are seeing, here. Others who are not seeing that themselves are probably seeing the MathJax TeX fonts (since they don't have STIX fonts installed).</p>
<p>Switching to one of the other renderers (as indicated in the link you provide) will result in the MathJax TeX fonts being used (only HTML-CSS checked for the ones available on your local system).</p>
<p>In any case, this is not a bug, it is a feature (based on the font in use).</p>
|
241,494 | <p>Who has a hint for how to prove:
$\sum_{n=0}^N \sum_{k=1}^N g_k y_n = \sum_{n=1}^N \sum_{k=1}^n g_k y_{n-k}+\sum_{n=1}^N\sum_{k=1}^n g_n y_{N-k+1}$</p>
<p>Thanks in advance!</p>
| Elias Costa | 19,266 | <p>We have
$$
\begin{align}
\sum_{k=1}^N \sum_{n=0}^N g_k y_n
& = \sum_{\substack{1\leq n\leq N\\ 1\leq k \leq N}} g_k y_n
\\
& = \sum_{(k,n)\in G} g_k y_n
\\
\end{align}
$$</p>
<p>Here $G=\{ (k,n) \in\mathbb{N}\times\mathbb{N} : 1\leq n\leq N, 1\leq k \leq N\}$. Set
$$
\begin{align}
G_1 &=\{ (k,n) \in\mathbb{N}\times\mathbb{N} : 1\leq n\leq k, 1\leq k \leq N, \} \\
& \\
G_2 &= \{ (k,n) \in\mathbb{N}\times\mathbb{N} : k+1\leq n\leq N, 1\leq k \leq N\}.\\
\end{align}
$$</p>
<p>Then $G=G_1\cup G_2$, $G_1\cap G_2=\emptyset$ and
$$
\begin{align}
\sum_{k=1}^N \sum_{n=0}^N g_k y_n
& = \sum_{(k,n)\in G} g_k y_n
\\
& = \sum_{(k,n)\in G_1} g_k y_n + \sum_{(k,n)\in G_2} g_k y_n
\\
& = \sum_{\substack{1\leq n\leq k\\ 1\leq k \leq N}} g_k y_n
+
\sum_{\substack{k+1\leq n\leq N\\ 1\leq k \leq N}} g_k y_n
\\
& = \sum_{1\leq k \leq N}\;\;\sum_{1\leq n\leq k} g_k y_n
+
\sum_{1\leq k \leq N}\;\;\sum_{k+1\leq n\leq N} g_k y_n
\\
& = \sum_{1\leq k \leq N}\;\;\sum_{1\leq n\leq k} g_k y_n
+
\sum_{1\leq k \leq N}\;\;\sum_{1\leq n-k\leq N-k} g_k y_n
\\
\end{align}
$$</p>
<p>Making the "change of variables" $n=n^\prime+k$:
$$
\begin{align}
\sum_{k=1}^N \sum_{n=0}^N g_k y_n
& = \sum_{1\leq k \leq N}\;\;\sum_{1\leq n\leq k} g_k y_n
+
\sum_{1\leq k \leq N}\;\;\sum_{1\leq n^\prime \leq N-k} g_{k} y_{n^\prime+k}
\\
\end{align}
$$</p>
<p>Renaming the summation variable ($n$ versus $n^\prime$) does not change the value of the sum, so we now have
$$
\begin{align}
\sum_{k=1}^N \sum_{n=0}^N g_k y_n
& = \sum_{1\leq k \leq N}\;\;\sum_{1\leq n^\prime\leq k} g_k y_{n^\prime}
+
\sum_{1\leq k \leq N}\;\;\sum_{1\leq n^\prime \leq N-k} g_{k} y_{n^\prime+k}
\\
\end{align}
$$</p>
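<p>A quick numerical check of the splitting $G=G_1\cup G_2$ with random coefficients (illustrative; here both indices run from $1$, since the partition into $G_1$ and $G_2$ only involves the terms with $n\ge 1$):</p>

```python
import random

random.seed(0)
N = 7
g = [random.random() for _ in range(N + 1)]   # g[1..N] are used
y = [random.random() for _ in range(N + 1)]   # y[1..N] are used (shifted indices stay <= N)

lhs = sum(g[k] * y[n] for k in range(1, N + 1) for n in range(1, N + 1))
rhs = (sum(g[k] * y[n] for k in range(1, N + 1) for n in range(1, k + 1))
       + sum(g[k] * y[n + k] for k in range(1, N + 1) for n in range(1, N - k + 1)))
```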
|
108,781 | <p>I want to add arrowheads and labels to the following plot. </p>
<pre><code>P1 = Plot[x, {x, 0, 2}, PlotStyle -> {Dashed, Red}, Filling -> Bottom];
P2 = Plot[6 x, {x, 0, 2}, PlotStyle -> {Dashed, Blue}];
Show[P1, P2, AspectRatio -> Automatic, Frame -> True,
PlotRangePadding -> None, AxesOrigin -> {0, 0}, Axes -> None,
FrameStyle -> Directive[Black],
LabelStyle -> {18, Bold},
FrameLabel -> {{None, "T"}, {"w", None} }, ImageSize -> 500]
</code></pre>
<p>I want the arrowheads towards the origin and the label (say, "w=T") to be on the curve. </p>
| george2079 | 2,079 | <p>Draw with graphics primitives using <code>Epilog</code></p>
<pre><code>P1 = Plot[x, {x, 0, 2}, PlotStyle -> {Dashed, Red}, Filling -> Bottom];
P2 = Plot[6 x, {x, 0, 2}, PlotStyle -> {Dashed, Blue}];
Show[P1, P2, AspectRatio -> Automatic, Frame -> True,
PlotRangePadding -> None, AxesOrigin -> {0, 0}, Axes -> None,
FrameStyle -> Directive[Black], LabelStyle -> {18, Bold},
FrameLabel -> {{None, "T"}, {"w", None}}, ImageSize -> 500,
Epilog -> {
Arrow[{{.2, .2}, {.1, .1}}],
Arrow[{.05 {1, 6}, {1, 6} .02}],
Rotate[Text[Style["w=t", FontSize -> 20], {.95, 1.1}], Pi/4]}]
</code></pre>
<p><a href="https://i.stack.imgur.com/mPZ7G.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mPZ7G.png" alt="enter image description here"></a></p>
|
1,001,913 | <p>If we want to graph a horizontal line, we will do the following:</p>
<pre><code>y = 0x + 3
</code></pre>
<p>No matter the domain for x, the range for y will always be 3. Therefore, we have a horizontal line.</p>
<pre><code>y = 0(0) + 3 = (0,3)
y = 0(1) + 3 = (1,3)
y = 0(2) + 3 = (2,3)
</code></pre>
<p>Now the formula to graph a vertical line looks like this:</p>
<pre><code>x = 3
</code></pre>
<p>Well, wait a second. Where is the y? I would like to see the y in the equation. But it is missing. How can I write the equation for a vertical line that includes the y variable? This is all I can think of:</p>
<pre><code>x = 0y + 3
</code></pre>
<p>And with the following domain:</p>
<pre><code>x = 0(0) + 3
x = 0(1) + 3
x = 0(2) + 3
</code></pre>
<p>Is this correct? Is it ok to reverse the x and y, as I just did above? Or does this not make it a slope-intercept equation anymore? It should still be a linear equation, since the variables are raised to the first power, in my opinion. But the slope-intercept form looks like this: y = mx + b. So I am not sure if this is still a slope-intercept equation. </p>
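<p>A small check of the question's idea (illustrative only): treating $x = 0y + 3$ as a function of $y$ produces points that all share $x=3$, i.e. a vertical line, just as $y = 0x + 3$ produces points sharing $y=3$:</p>

```python
def horizontal(x):
    return 0 * x + 3          # y = 0x + 3

def vertical(y):
    return 0 * y + 3          # x = 0y + 3, with the roles of x and y swapped

horiz_points = [(x, horizontal(x)) for x in (0, 1, 2)]
vert_points = [(vertical(y), y) for y in (0, 1, 2)]
```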
| Hypergeometricx | 168,053 | <p>This is an interesting question.</p>
<p>Using upper negation, applying the Vandermonde identity, using upper negation again and applying symmetry gives the following:</p>
<p>$$\begin{align}
\sum_{t=0}^{n-m}{\color{red}t+k\choose \color{red}t}{n-k-\color{red}t\choose n-m-\color{red}t}&=\sum_{t=0}^{n-m}{-k-1\choose \color{red}t}{k-m-1\choose n-m-\color{red}t}(-1)^{t+(n-m-t)}\\
&=(-1)^{n-m}{-m-2\choose n-m}\\
&=(-1)^{(n-m)+(n-m)}{n+2-1\choose n-m}\\
&={n+1\choose {n-m}}\\
&={n+1\choose {m+1}}\qquad \blacksquare
\end{align}$$</p>
<p>More information about upper negation and the Vandermonde identity here:</p>
<p><a href="https://proofwiki.org/wiki/Negated_Upper_Index_of_Binomial_Coefficient" rel="nofollow">https://proofwiki.org/wiki/Negated_Upper_Index_of_Binomial_Coefficient</a>
<a href="http://en.wikipedia.org/wiki/Vandermonde%27s_identity" rel="nofollow">http://en.wikipedia.org/wiki/Vandermonde%27s_identity</a></p>
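<p>The identity is easy to confirm by brute force (an illustrative check over small parameters, with $0\le k\le m\le n$ so that all binomial coefficients are defined):</p>

```python
from math import comb

def lhs(n, m, k):
    # left-hand side of the identity
    return sum(comb(t + k, t) * comb(n - k - t, n - m - t)
               for t in range(n - m + 1))

checks = [(lhs(n, m, k), comb(n + 1, m + 1))
          for n in range(1, 9)
          for m in range(n + 1)
          for k in range(m + 1)]
```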
|
3,986,812 | <p>Given <span class="math-container">$U_n = (1 + 1/n)^n$</span> , <span class="math-container">$n = 1,2,3,\ldots\;.$</span></p>
<blockquote>
<p>Show that <span class="math-container">$2 \le U_n \le 3$</span> for all n</p>
</blockquote>
<p>This is what I've done. Can anyone help?</p>
<p><span class="math-container">$$\begin{align*}
a_n=\left(1+\frac1n\right)^n&=\sum_{r=0}^n{^nC_r}(1)^{n-r}\left(\frac1n\right)^{r}\\
&=\sum_{r=0}^n{^nC_r}\left(\frac1n\right)^{r}\\
&=1+\frac{n}n+\frac1{2!}\frac{n(n-1)}{n^2}+\frac1{3!}\frac{n(n-1)(n-2)}{n^3}\\
&\quad\,+\ldots+\frac1{n!}\frac{n(n-1)\ldots(n-n+1)}{n^n}
\end{align*}$$</span></p>
<p>Since <span class="math-container">$\forall k\in\{2,3,\ldots,n\}$</span>: <span class="math-container">$\frac1{k!}<\frac1{2^k}$</span>, and <span class="math-container">$\frac{n(n-1)\ldots\big(n-(k-1)\big)}{n^k}<1$</span>,</p>
<p><span class="math-container">$$\begin{align*}
a_n&<1+\left(1+\frac12+\frac1{2^2}+\ldots+\frac1{2^{n-1}}\right)\\
&<1+\left(\frac{1-\left(\frac12\right)^n}{1-\frac12}\right)<3-\frac1{2^{n-1}}<3
\end{align*}$$</span></p>
| N. S. | 9,176 | <p><strong>Hint</strong> Let
<span class="math-container">$$U_n = (1 + 1/n)^n \\
V_n = (1 + 1/n)^{n+1}$$</span></p>
<p>It is clear that <span class="math-container">$U_n < V_n$</span> for all <span class="math-container">$n$</span>. Use Bernoulli inequality to show that</p>
<p><span class="math-container">$$\frac{U_{n+1}}{U_n} \geq 1 \\
\frac{V_{n}}{V_{n+1}} \geq 1$$</span></p>
<p>Find some <span class="math-container">$m$</span> so that <span class="math-container">$V_m \leq 3$</span>.
Deduce from here that
<span class="math-container">$$U_1 \leq U_n \leq V_m$$</span> for all <span class="math-container">$n$</span>.</p>
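<p>The monotonicity and the bounds are easy to check numerically (an illustrative sketch of the hint, taking $m=6$ since $V_6=(7/6)^7\approx 2.94\le 3$):</p>

```python
def U(n):
    return (1 + 1 / n) ** n

def V(n):
    return (1 + 1 / n) ** (n + 1)

ns = range(1, 2001)
u_increasing = all(U(n) <= U(n + 1) for n in ns)   # Bernoulli gives U_{n+1}/U_n >= 1
v_decreasing = all(V(n + 1) <= V(n) for n in ns)   # and V_n/V_{n+1} >= 1
```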
|
1,136,347 | <p>I'm trying to compute</p>
<p>$$
\omega^{3n/2 + 1} + \omega
$$</p>
<p>where $\omega$ is one of the $n^{th}$ roots of unity where $n$ is a multiple of $4$. Could anyone demonstrate how to do this?</p>
| Timbuc | 118,527 | <p>$$\omega^{\frac{3\cdot4k}2+1}=\omega^{6k+1}=\omega^{4k}\omega^{2k+1}=\omega^{2k+1}$$</p>
<p>If $\;\omega\;$ is a <em>primitive</em> root of order $\;n=4k\;$, then $\;\omega^{2k}=-1\;$ (why?), so</p>
<p>$$\omega^{3n/2 +1}+\omega=0$$</p>
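<p>A numerical check with a primitive root (added for illustration) agrees:</p>

```python
import cmath

values = []
for k in (1, 2, 5):
    n = 4 * k
    w = cmath.exp(2j * cmath.pi / n)          # a primitive n-th root of unity
    values.append(w ** (3 * n // 2 + 1) + w)  # 3n/2 is an integer since 4 | n

worst = max(abs(v) for v in values)
```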
|
1,371,921 | <p>The intersection defined by the two planes $v \bullet \begin{pmatrix} 8 \\ 1 \\ -12 \end{pmatrix} = 35$
and
$v \bullet \begin{pmatrix} 6 \\ 7 \\ -9 \end{pmatrix} = 70$
is a line. What is the equation of this line?</p>
<p>This is what I have so far: I set $v = \begin{pmatrix} a \\ b \\ c \end{pmatrix}$. I was able to simplify both LHS of the two given equations to: </p>
<p>$8a+b-12c = 35$</p>
<p>$6a+7b-9c = 70$</p>
<p>I could find the values of $a, b, c$, but I don't know if it will be helpful. How can I find the equation of the line of intersection?</p>
| Tucker | 256,305 | <p>With two equations and three unknowns your solution space will be a one-dimensional set (a line), exactly what you are looking for. You will need to write $(a,b,c)$ in terms of one of the parameters. For instance $a=f_{1}(c),b=f_{2}(c),c=c$.</p>
<p>It will not always be the case that a solution space will be one-dimensional, it could be the case that the column space is one-dimensional and it is possible that no solution would exist.</p>
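<p>One concrete route (an illustrative sketch, not necessarily the intended parametrisation): the line's direction is the cross product of the two normals, $(8,1,-12)\times(6,7,-9)=(75,0,50)$, which is parallel to $(3,0,2)$; setting $c=0$ in the two equations gives the particular point $(3.5,\,7,\,0)$. Both plane equations then hold along the whole line:</p>

```python
def dot(u, w):
    return sum(ui * wi for ui, wi in zip(u, w))

def cross(u, w):
    return (u[1] * w[2] - u[2] * w[1],
            u[2] * w[0] - u[0] * w[2],
            u[0] * w[1] - u[1] * w[0])

n1, n2 = (8, 1, -12), (6, 7, -9)
d = cross(n1, n2)                  # direction vector of the line
p = (3.5, 7.0, 0.0)                # particular solution found by setting c = 0

def on_both_planes(v):
    return abs(dot(v, n1) - 35) < 1e-9 and abs(dot(v, n2) - 70) < 1e-9

points = [tuple(p[i] + t * d[i] for i in range(3)) for t in (-2, 0, 0.5, 3)]
```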
|
2,354,094 | <p>I've been stumped by this problem:</p>
<blockquote>
<p>Find three non-constant, pairwise unequal functions $f,g,h:\mathbb R\to \mathbb R$ such that
$$f\circ g=h$$
$$g\circ h=f$$
$$h\circ f=g$$
or prove that no three such functions exist.</p>
</blockquote>
<p>I highly suspect, by now, that no non-trivial triplet of functions satisfying the stated property exists... but I don't know how to prove it.</p>
<p>How do I prove this, or how do I find these functions if they do exist?</p>
<p>All help is appreciated!</p>
<p>The functions should also be continuous.</p>
| Kamil Maciorowski | 331,040 | <p>(This is by no means a comprehensive and strict answer, just a few observations I made).</p>
<hr>
<p>If you removed $0$ from the domain, the functions might be:</p>
<p>$$f_0 (x) \equiv -x$$
$$g_0(x) \equiv \frac 1 x$$
$$h(x) \equiv -\frac 1 x$$</p>
<p>Or better don't remove $0$ but allow $\infty$ into the domain and the image, making each of them the <a href="https://en.wikipedia.org/wiki/Projectively_extended_real_line" rel="noreferrer">projectively extended real line</a>. This is interesting because if you take a broader look at complex numbers and their representation as the <a href="https://en.wikipedia.org/wiki/Riemann_sphere" rel="noreferrer">Riemann sphere</a>, you will notice that:</p>
<ul>
<li>$f_0$ corresponds to rotating the sphere half turn, while the axis goes through $0$ and $\infty$;</li>
<li>$g_0$ is a similar rotation, the axis goes through $-1$ and $1$;</li>
<li>$h$ is also a similar rotation, the axis goes through $-i$ and $i$.</li>
</ul>
<p>Rotating comes easy to my imagination so I will stick to this interpretation for a while, but we should remember that every half turn rotation is equivalent to some axial reflection.</p>
<p>So in this case the three functions (and their compositions) correspond to certain operations (and their compositions) in 3D space where we imagine the Riemann sphere is.</p>
<p>In general you could choose any three half turns with respect to mutually perpendicular axes. In this case however each of them must map the extended real line (which is a great circle on the Riemann sphere) onto itself. This means one of the axes must go through $-i$ and $i$, i.e. one of the functions must be our $h(x)$.</p>
<p>If I did my calculations right, more general forms are ($\alpha \in \mathbb R$):</p>
<p>$$f_\alpha (x) \equiv \frac {-x+\alpha} {\alpha x + 1},$$
$$g_\alpha (x) \equiv \frac {\alpha x + 1} {x-\alpha} $$
$$h(x) \equiv -\frac 1 x$$</p>
<p>The following picture shows the projectively extended real line as a cross section of the Riemann sphere. Few possible axes are drawn.</p>
<p><a href="https://i.stack.imgur.com/2FVkw.png" rel="noreferrer"><img src="https://i.stack.imgur.com/2FVkw.png" alt="projectively extended real line"></a></p>
<p>I said our functions correspond to certain reflections (or rotations) in 3D. Now in 2D the interpretation is:</p>
<ul>
<li>$f_\alpha$ and $g_\alpha$ correspond to reflections about certain axes;</li>
<li>$h$ corresponds to the reflection through the point $S$ (or a half turn, if you wish).</li>
</ul>
<hr>
<p>To me the most surprising conclusion is that: $$f_0 (x) \equiv -x$$ and
$$g_0(x) \equiv \frac 1 x$$ are more similar than I ever thought.</p>
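<p>The three identities for $f_\alpha$, $g_\alpha$, $h$ can be checked numerically away from the poles (an illustrative check; as noted, these are really maps of the projectively extended real line, so the pole points are simply excluded from the sample):</p>

```python
def triple(alpha):
    f = lambda x: (-x + alpha) / (alpha * x + 1)
    g = lambda x: (alpha * x + 1) / (x - alpha)
    h = lambda x: -1 / x
    return f, g, h

def worst_residual(alpha, xs):
    # how far f∘g, g∘h, h∘f are from h, f, g respectively
    f, g, h = triple(alpha)
    return max(max(abs(f(g(x)) - h(x)),
                   abs(g(h(x)) - f(x)),
                   abs(h(f(x)) - g(x))) for x in xs)

xs = [0.3, 1.7, -2.2, 5.0]                       # sample points avoiding all poles
errs = [worst_residual(a, xs) for a in (0.0, 1.0, -0.5, 2.3)]
```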
|
2,354,094 | <p>I've been stumped by this problem:</p>
<blockquote>
<p>Find three non-constant, pairwise unequal functions $f,g,h:\mathbb R\to \mathbb R$ such that
$$f\circ g=h$$
$$g\circ h=f$$
$$h\circ f=g$$
or prove that no three such functions exist.</p>
</blockquote>
<p>I highly suspect, by now, that no non-trivial triplet of functions satisfying the stated property exists... but I don't know how to prove it.</p>
<p>How do I prove this, or how do I find these functions if they do exist?</p>
<p>All help is appreciated!</p>
<p>The functions should also be continuous.</p>
| tomasz | 30,222 | <p>For the non-continuous version, you can just consider the three imaginary quaternion units, acting by left multiplication on the space of quaternions (or just the unit quaternions) -- it has the same cardinality as the reals, so it resolves the problem in the non-continuous case.</p>
|
2,829,121 | <p>I get how to derive the ellipse equation, but I'm struggling to understand what it means intuitively. </p>
<p>You see, a circle equation can be understood very intuitively. The circle equation models how the radius of the circle can be represented using the Pythagorean theorem. But I don't understand what the ellipse equation means at such a level. Does it model how an ellipse can be drawn out using a stretched rope? What exactly does it model? Can someone please explain? </p>
<p>Can you please explain it as simply as possible, as I'm still a beginner? </p>
| Roman Odaisky | 271,020 | <p>The equation $\frac{x^2}{a^2}+\frac{y^2}{b^2}=1$ tells one which points belong to the ellipse and which don’t. Take a point with coordinates $(x, y)$, calculate the above. Is it 1? Excellent, it’s on the ellipse. Not 1? Then it’s some other point.</p>
<p>If you want to transform the equation into something that actually provides a method of drawing the shape, try converting it to parametric form. That is, instead of $F(x, y)=0$, look for ways of expressing the same as</p>
<p>$$
\left\{\begin{align*}
x &= x(t) \\
y &= y(t).
\end{align*}\right.
$$</p>
<p>For the ellipse, the formula tells that squares of something add up to 1. What are common things that have this property? Sine and cosine. So
$$\frac{x^2}{a^2}+\frac{y^2}{b^2}=1$$
is equivalent to
$$
\left\{\begin{align*}
x &= a \cos t \\
y &= b \sin t
\end{align*}\right.
$$
in the sense that for any $x$ and $y$, some $t$ exists satisfying the above if and only if $(x, y)$ lies on the ellipse.</p>
<p>But now you actually have a method of drawing the ellipse. Set $t = 0$ and determine $x$ and $y$. Then increase $t$ in small steps and you’ll get more and more points of the ellipse (and when $t$ reaches $2\pi$, you’re done). You can think of $t$ as the time parameter and the equations for $x$ and $y$ as describing the motion of some mechanism that’s drawing the shape. This particular mechanism requires $2\pi$ worth of time to draw the entire ellipse, if you’re wondering how fast and in what direction it’s going at any particular point you can calculate some derivatives and so on.</p>
<p>As for the method of drawing with a string pinned at the foci, I’m not aware of an intuitive reasoning that would explain why this results in the same shape as one gets stretching a circle in one direction. It can be proven mathematically though.</p>
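<p>The "drawing mechanism" view is easy to demonstrate numerically (illustrative, with arbitrarily chosen semi-axes $a=3$, $b=2$): every point sampled from the parametric form satisfies the implicit equation:</p>

```python
import math

a, b = 3.0, 2.0          # example semi-axes (chosen for illustration)

def point(t):
    # the drawing mechanism: position at "time" t
    return a * math.cos(t), b * math.sin(t)

ts = [2 * math.pi * i / 1000 for i in range(1000)]
residuals = [abs((x / a) ** 2 + (y / b) ** 2 - 1) for x, y in map(point, ts)]
```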
|
1,945,966 | <p>It is asked to prove that $\lim\limits_{n\to \infty} \sqrt{x_n} = \sqrt{\lim\limits_{n\to \infty} x_n}$, and suggested to use the following two inequalities:</p>
<p>$$a+b\leq a+ 2\sqrt{a}\sqrt{b}+b$$
$$\sqrt{b}\sqrt{b}\leq \sqrt{a}\sqrt{b}$$</p>
<p>The second inequality holds iff $a\geq b \geq 0$.</p>
<p>I've tried different possibilities, but couldn't figure out how to either take the limit sign out of the square root, or take the limit sign into the square root. Would appreciate some hints, but not an entire solution please.</p>
| Michael Rozenberg | 190,319 | <p>As stated it's wrong, of course: without assuming $x_n\geq0$, the square roots need not even be defined. </p>
<p>For $x_n\geq0$ it's true because $f$ is a continuous function, where $f(x)=\sqrt{x}$. </p>
|
1,945,966 | <p>It is asked to prove that $\lim\limits_{n\to \infty} \sqrt{x_n} = \sqrt{\lim\limits_{n\to \infty} x_n}$, and suggested to use the following two inequalities:</p>
<p>$$a+b\leq a+ 2\sqrt{a}\sqrt{b}+b$$
$$\sqrt{b}\sqrt{b}\leq \sqrt{a}\sqrt{b}$$</p>
<p>The second inequality holds iff $a\geq b \geq 0$.</p>
<p>I've tried different possibilities, but couldn't figure out how to either take the limit sign out of the square root, or take the limit sign into the square root. Would appreciate some hints, but not an entire solution please.</p>
| Ginger88895 | 370,007 | <p>You have to make sure that <span class="math-container">$\forall n\in\mathbb{N}^*$</span>, <span class="math-container">$x_n\geq 0$</span>. Otherwise the limit doesn't exist because the sequence <span class="math-container">$\{\sqrt{x_n}\}_{n=1}^{\infty}$</span> isn't even defined(well, in <span class="math-container">$\mathbb{R}$</span>).</p>
<p>Suppose the statement above is true. Let <span class="math-container">$A=\lim\limits_{n\to\infty}x_n$</span>. Then there are 2 cases:</p>

<p>(1) <span class="math-container">$A=0$</span>. <span class="math-container">$\forall\epsilon>0$</span>, <span class="math-container">$\exists N\in\mathbb{N}^*$</span> so that <span class="math-container">$\forall n\geq N$</span> there is <span class="math-container">$x_n<\epsilon^2$</span>, that is, <span class="math-container">$\sqrt{x_n}<\epsilon$</span>, so that <span class="math-container">$\lim\limits_{n\to\infty}\sqrt{x_n}=0=\sqrt{\lim\limits_{n\to\infty}x_n}$</span>.</p>

<p>(2) Otherwise, <span class="math-container">$A>0$</span>. <span class="math-container">$\forall\epsilon>0$</span>, <span class="math-container">$\exists N\in\mathbb{N}^*$</span> so that <span class="math-container">$\forall n\geq N$</span> there is <span class="math-container">$|x_n-A|<\sqrt{A}\epsilon$</span>. This means that <span class="math-container">$\epsilon>\frac{|x_n-A|}{\sqrt{A}}\geq|\frac{x_n-A}{\sqrt{x_n}+\sqrt{A}}|=|\sqrt{x_n}-\sqrt{A}|$</span>, therefore <span class="math-container">$\lim\limits_{n\to\infty}\sqrt{x_n}=\sqrt{A}=\sqrt{\lim\limits_{n\to\infty}x_n}$</span>.</p>
<p>This proof uses solely the definition of the limit of a sequence :)</p>
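<p>A quick numerical illustration of the inequality used in case (2) (the sequence $x_n = 9 + 1/n$ is an arbitrary example, not part of the proof):</p>

```python
import math

# For x_n -> A with A = 9 > 0, the proof bounds |sqrt(x_n) - sqrt(A)|
# by |x_n - A| / sqrt(A); check this numerically.
A = 9.0
for n in range(1, 1000):
    x_n = A + 1.0 / n
    assert abs(math.sqrt(x_n) - math.sqrt(A)) <= abs(x_n - A) / math.sqrt(A)
```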
|
2,873,636 | <p>I have been self studying mathematical logic through A Concise Introduction to Mathematical Logic by Wolfgang Rautenberg and got stuck. I am unable to understand how the value matrix corresponds with the truth table of a given operation for instance the AND operation. I know that in a matrix the place an element is placed into important. Is there some kind of relation based on the position in the matrix. For example the ith column and jth row. Why are the values ordered as they are within the matrix?
<a href="https://i.stack.imgur.com/Floii.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Floii.png" alt="value matrix 1"></a>
<a href="https://i.stack.imgur.com/ondwq.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ondwq.jpg" alt="value matrix"></a></p>
| coffeemath | 483,139 | <p>It looks like rows 1, 2 go with the truth values T, F of p, and columns 1, 2 go with the truth values T, F of q in the compound p (connective) q.</p>
<p>Thus row 1 column 2 corresponds to p true (T) and q false (F), etc.</p>
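<p>The row/column convention can be made concrete with a short Python sketch (encoding T as index 0 and F as index 1 is an illustrative choice mirroring the book's ordering, not anything from the text):</p>

```python
# Row i holds the truth value of p, column j the truth value of q;
# index 0 stands for T, index 1 for F.
AND = [["T", "F"],   # row 1: p = T
       ["F", "F"]]   # row 2: p = F

def truth(p, q, matrix):
    i = 0 if p else 1
    j = 0 if q else 1
    return matrix[i][j] == "T"

assert truth(True, True, AND) is True
assert truth(True, False, AND) is False   # row 1, column 2: p true, q false
assert truth(False, True, AND) is False
assert truth(False, False, AND) is False
```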
|
716,856 | <p>I have the following equation:</p>
<p>\begin{align*}
\frac{d^2\theta}{dt^2}=\alpha(\theta-1)+\beta(\theta-1)^3-\gamma\frac{d\theta}{dt} \tag{1}
\end{align*}</p>
<p>Where $\alpha, \beta, \gamma \in \mathbb{R}$.</p>
<p>This is my solution in attempting to convert (1) into a system of first order ODE's:</p>
<p>\begin{align*}
\frac{d\theta}{dt}&=\phi \tag{2} \\ \\
\frac{d\phi}{dt} &= \alpha(\theta-1)+\beta(\theta-1)^3-\gamma\phi \tag{3}
\end{align*}</p>
<p>Is the above system correct? Also in equation (3) is it okay to have the $\theta$ term?</p>
| Sandeep Thilakan | 124,957 | <p>Yes it's fine. Here, you have taken the states of the system as $\{x_1 = \theta, x_2 = \phi\}$ and expressed </p>
<p>$\dot{x_1} = f(x_1,x_2)$, $\dot{x_2} = g(x_1, x_2)$</p>
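<p>A numerical sanity check of the reduction, using forward Euler (the parameter values, step size, and initial state are all arbitrary illustrative choices, not from the question):</p>

```python
# Forward-Euler integration of the first-order system
#   theta' = phi
#   phi'   = alpha*(theta - 1) + beta*(theta - 1)**3 - gamma*phi
alpha, beta, gamma = -1.0, -0.5, 0.2   # arbitrary values giving damped motion
h, steps = 1e-3, 5000
theta, phi = 1.5, 0.0                  # arbitrary initial state

for _ in range(steps):
    dtheta = phi
    dphi = alpha * (theta - 1) + beta * (theta - 1) ** 3 - gamma * phi
    theta += h * dtheta
    phi += h * dphi

# With alpha, beta < 0 and gamma > 0 this is a damped oscillation about
# theta = 1, so the state remains bounded.
assert abs(theta - 1) < 1.0 and abs(phi) < 5.0
```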
|
1,273,789 | <p>Problem: X has mean and variance of 20.<br>
What can be said about $P(0<X<40)$? </p>
<p>Chebyshev formula = $P(|X-\mu|\geq k)\leq\frac{\sigma^2}{k^2}$ </p>
<p>The first step has $P(|X-20|\geq 20)\leq\frac{\sigma^2}{20\times20}$ </p>
<p>My question is: How did they get this part $P(|X-20|\geq 20)$? (Did $P(0<X<40)$ turn into that???). </p>
| Mehdi Jafarnia Jahromi | 231,513 | <p>Chebyshev's formula is $P(|X-\mu|\geq k)\leq\frac{\sigma^2}{k^2}$. Just replace $\mu$ with the mean and take $k=20$ (any positive number is allowed). As for your question: $0<X<40$ is the same event as $|X-20|<20$, so its complement is exactly $|X-20|\geq 20$ — that is how $P(0<X<40)$ turns into the expression in the bound.</p>
<p><strong>Insight on this formula:</strong> It says that it is more probable to be around the mean. As much as you get far from the mean, the probability falls.</p>
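<p>A Monte Carlo sanity check (the Gamma distribution with shape 20 and scale 1, which happens to have mean and variance both equal to 20, is an arbitrary example; the problem does not specify the distribution of $X$):</p>

```python
import random

random.seed(0)
N = 200_000
# Gamma(shape=20, scale=1): mean = 20, variance = 20.
samples = [random.gammavariate(20, 1) for _ in range(N)]

tail = sum(1 for x in samples if abs(x - 20) >= 20) / N
chebyshev_bound = 20 / 20 ** 2        # sigma^2 / k^2 = 0.05

assert tail <= chebyshev_bound
assert 1 - tail >= 0.95               # i.e. P(0 < X < 40) >= 0.95
```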
|
1,332,433 | <blockquote>
<p>Let <span class="math-container">$f:\mathbb R\to \mathbb R$</span> be such that for every sequence <span class="math-container">$(a_n)$</span> of real numbers,</p>
<p><span class="math-container">$\sum a_n$</span> converges <span class="math-container">$\implies \sum f(a_n)$</span> converges</p>
<p>Prove there is some open neighborhood <span class="math-container">$V$</span>, such that <span class="math-container">$0\in V$</span> and <span class="math-container">$f$</span> restricted to <span class="math-container">$V$</span> is a linear function.</p>
</blockquote>
<p>This was given to a friend at his oral exam this week. He told me the problem is hard. I tried it myself, but I haven't made any significant progress.</p>
| PhoemueX | 151,552 | <p>Here comes a proof in the general case. We first start with the following</p>
<p><strong>Lemma</strong>: Let $-\varepsilon<y<0<x<\varepsilon$ and $N\in\mathbb{N}$.
Then there is a some $M\in\mathbb{N}$ and $z_{1},\dots,z_{M}\in\left\{ x,y\right\} $
with
$$
\left|\left\{ i\in\left\{ 1,\dots,M\right\} \,\mid\, z_{i}=x\right\} \right|=N
$$
and
$$
\left|\sum_{i=1}^{K}z_{i}\right|<\varepsilon\qquad\forall K\in\left\{ 1,\dots,M\right\} .\qquad\qquad\left(\ast\right)
$$</p>
<p><strong>Proof</strong>: We first inductively construct an infinite sequence $\left(w_{n}\right)_{n\in\mathbb{N}}\in\left\{ x,y\right\} ^{\mathbb{N}}$
with $\left|\sum_{i=1}^{n}w_{i}\right|<\varepsilon$ for all $n\in\mathbb{N}$.
It is clear that the stated condition holds for $w_{1}:=x$, since
$-\varepsilon<x<\varepsilon$. Now, let $w_{1},\dots,w_{n}$ be already
constructed. Distinguish two cases:</p>
<ol>
<li><p>We have $\alpha:=\sum_{i=1}^{n}w_{i}\geq0$. Then choose $w_{n+1}:=y$.
Then
$$
\varepsilon>\alpha>\alpha+y=\sum_{i=1}^{n+1}w_{i}=\alpha+y\geq y>-\varepsilon,
$$
i.e. $\left|\sum_{i=1}^{n+1}w_{i}\right|<\varepsilon$.</p></li>
<li><p>We have $\alpha:=\sum_{i=1}^{n}w_{i}<0$. Then we choose $w_{n+1}:=x$.
Similar to the above, it follows
$$
-\varepsilon<\alpha<\alpha+x=\sum_{i=1}^{n+1}w_{i}=\alpha+x\leq x<\varepsilon
$$
and hence $\left|\sum_{i=1}^{n+1}w_{i}\right|<\varepsilon$.</p></li>
</ol>
<p>Now assume that $\left\{ i\in\mathbb{N}\,\mid\, w_{i}=x\right\} $
is finite. This implies $w_{i}=y$ for all $i\geq N_{0}$ with $N_{0}\in\mathbb{N}$
suitable. But this implies
$$
\sum_{i=1}^{N}w_{i}=\sum_{i=1}^{N_{0}-1}w_{i}+\sum_{i=N_{0}}^{N}w_{i}=\sum_{i=1}^{N_{0}-1}w_{i}+\left(N-N_{0}\right)y\xrightarrow[N\to\infty]{}-\infty,
$$
in contradiction to $\left|\sum_{i=1}^{N}w_{i}\right|<\varepsilon$
for all $N$.</p>
<p>Thus, $\left\{ i\in\mathbb{N}\,\mid\, w_{i}=x\right\} $ is infinite.
It is easy to see that this implies existence of $M\in\mathbb{N}$
with $\left|\left\{ i\in\left\{ 1,\dots,M\right\} \,\mid\, w_{i}=x\right\} \right|=N$,
so that we can set $z_{i}:=w_{i}$ for $i\in\left\{ 1,\dots,M\right\} $.
$\square$</p>
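<p>The inductive construction in this proof is effectively a greedy algorithm, which can be sketched directly in Python (the numerical values passed in at the end are arbitrary test data; unlike the proof, the sketch starts from an empty sum rather than from $w_1 = x$, which preserves the same invariant):</p>

```python
def greedy_sequence(x, y, eps, N):
    """Build z_1, ..., z_M in {x, y}, with -eps < y < 0 < x < eps, until
    exactly N of the z_i equal x, keeping every partial sum in (-eps, eps)."""
    assert -eps < y < 0 < x < eps
    zs, partial, count_x = [], 0.0, 0
    while count_x < N:
        # Case 1 of the proof: nonnegative partial sum -> append y;
        # Case 2: negative partial sum -> append x.
        step = y if partial >= 0 else x
        partial += step
        zs.append(step)
        count_x += step == x
        assert abs(partial) < eps      # the invariant (*) from the lemma
    return zs

zs = greedy_sequence(x=0.3, y=-0.7, eps=1.0, N=10)
assert sum(1 for z in zs if z == 0.3) == 10
```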
<p><strong>Remark</strong>:
1. A completely similar argument shows that we can choose $z_{1},\dots,z_{M}\in\left\{ x,y\right\} $
with
$$
\left|\left\{ i\in\left\{ 1,\dots,M\right\} \,\mid\, z_{i}=y\right\} \right|=N
$$
and such that $\left(\ast\right)$ still holds.</p>
<ol start="2">
<li>For the above sequence, we also have
$$
\left|\sum_{i=K}^{M}z_{i}\right|=\left|\sum_{i=1}^{M}z_{i}-\sum_{i=1}^{K-1}z_{i}\right|\leq\left|\sum_{i=1}^{M}z_{i}\right|+\left|\sum_{i=1}^{K-1}z_{i}\right|<2\varepsilon\qquad\qquad\left(\square\right)
$$
for all $K\in\left\{ 1,\dots,M\right\} $.</li>
</ol>
<p>By the argument given by @zhw., there is some $\delta>0$ and $C>0$
with $\left|\frac{f\left(x\right)}{x}\right|\leq C$ for all $x\in\left(-\delta,\delta\right)\setminus\left\{ 0\right\} $.
We will need this constant $C$ below.</p>
<p>First note that $f\left(0\right)=0$ is trivial (consider the sequence
$a_{n}=0$). Now, let us assume towards a contradiction that $f$
is not linear on any neighborhood of $0$. This implies that for every
$n\in\mathbb{N}$ there are $x_{n}\in\left(0,\frac{1}{n^{2}}\right)$
and $y_{n}\in\left(-\frac{1}{n^{2}},0\right)$ with
$$
\alpha_{n}:=\frac{f\left(x_{n}\right)}{x_{n}}\neq\frac{f\left(y_{n}\right)}{y_{n}}=:\beta_{n}.
$$
Indeed, if there were no such $x_{n},y_{n}$ for some $n\in\mathbb{N}$,
we could fix some $y_{n}\in\left(-\frac{1}{n^{2}},0\right)$ and would
get
$$
\gamma:=\frac{f\left(y_{n}\right)}{y_{n}}=\frac{f\left(x\right)}{x}
$$
for all $x\in\left(0,\frac{1}{n^{2}}\right)$. Likewise, we now fix
an arbitrary $x_{n}\in\left(0,\frac{1}{n^{2}}\right)$ and get
$$
\frac{f\left(y\right)}{y}=\frac{f\left(x_{n}\right)}{x_{n}}=\frac{f\left(y_{n}\right)}{y_{n}}=\gamma
$$
for all $y\in\left(-\frac{1}{n^{2}},0\right)$. All in all, this implies
$f\left(y\right)=\gamma\cdot y$ for all $y\in\left(-\frac{1}{n^{2}},\frac{1}{n^{2}}\right)$,
since we have $f\left(0\right)=0$. Thus, $f$ is linear on a neighborhood
of $0$ after all, a contradiction.</p>
<p>Note that (by choice of $C$ above), we have $\left|\alpha_{n}\right|\leq C$
and $\left|\beta_{n}\right|\leq C$. Let $\varepsilon_{n}:=\frac{1}{n^{2}}$
and choose $N_{n}\in\mathbb{N}$ with $N_{n}\geq\frac{1+C\varepsilon_{n}}{\left|\alpha_{n}-\beta_{n}\right|\left|x_{n}\right|}$.
Note that this is possible, since $\left|\alpha_{n}-\beta_{n}\right|>0$
and $\left|x_{n}\right|>0$. By the Lemma above, we find some $M_{n}\in\mathbb{N}$
and $z_{1}^{\left(n\right)},\dots,z_{M_{n}}^{\left(n\right)}\in\left\{ x_{n},y_{n}\right\} $
with
$$
N_{n}=\left|\left\{ i\in\left\{ 1,\dots,M_{n}\right\} \,\mid\, z_{i}^{\left(n\right)}=x\right\} \right|
$$
and
$$
\left|\sum_{i=1}^{K}z_{i}^{\left(n\right)}\right|<\varepsilon_{n}=\frac{1}{n^{2}}\qquad\forall K\in\left\{ 1,\dots,M_{n}\right\} .\qquad\qquad\left(\dagger\right)
$$
In particular (for $K=M_{n}$), we get
$$
\left|N_{n}x_{n}+\left(M_{n}-N_{n}\right)y_{n}\right|=\left|\sum_{i=1}^{M_{n}}z_{i}^{\left(n\right)}\right|<\frac{1}{n^{2}}.\qquad\qquad\left(\ddagger\right).
$$</p>
<p>Now define a sequence $\left(a_{k}\right)_{k\in\mathbb{N}}$ by
$$
a_{k}=z_{\ell}^{\left(K\right)}\qquad\text{ for }k=\sum_{i=1}^{K-1}M_{i}+\ell\text{ with }K\in\mathbb{N}\text{ and }\ell\in\left\{ 1,\dots,M_{K}\right\} .
$$
It is not too hard to see that this is well-defined. Furthermore,
the series $\sum_{k}a_{k}$ is convergent, since it is Cauchy; indeed, for $\sum_{i=1}^{T-1}M_{i}+p=t\geq k=\sum_{i=1}^{K-1}M_{i}+\ell$,
we have
\begin{eqnarray*}
\left|\sum_{n=k}^{t}a_{n}\right| & \leq & \left|\sum_{n=\ell}^{M_{K}}z_{n}^{\left(K\right)}\right|+\sum_{S=K+1}^{T-1}\left|\sum_{n=1}^{M_{S}}z_{n}^{\left(S\right)}\right|+\left|\sum_{n=p}^{M_{T}}z_{n}^{\left(T\right)}\right|\\
& < & 2\varepsilon_{K}+\sum_{S=K+1}^{T-1}\varepsilon_{S}+\varepsilon_{T}\\
& \leq & 2\sum_{n=K}^{T}\frac{1}{n^{2}}\leq2\sum_{n=K}^{\infty}\frac{1}{n^{2}}\xrightarrow[K\to\infty]{}0.
\end{eqnarray*}</p>
<p>By assumption on $f$, this implies convergence of $\sum_{n}f\left(a_{n}\right)$.
In particular, this series is Cauchy. But for $K\in\mathbb{N}$, we
have
\begin{eqnarray*}
\left|\sum_{n=1+\sum_{i=1}^{K-1}M_{i}}^{\sum_{i=1}^{K}M_{i}}f\left(a_{n}\right)\right| & = & \left|\sum_{n=1}^{M_{K}}f\left(z_{n}^{\left(K\right)}\right)\right|\\
 & = & \left|N_{K}f\left(x_{K}\right)+\left(M_{K}-N_{K}\right)f\left(y_{K}\right)\right|\\
 & = & \left|N_{K}\alpha_{K}x_{K}+\left(M_{K}-N_{K}\right)\beta_{K}y_{K}\right|\\
 & = & \left|N_{K}\left(\alpha_{K}-\beta_{K}\right)x_{K}+\beta_{K}\left(N_{K}x_{K}+\left(M_{K}-N_{K}\right)y_{K}\right)\right|\\
 & \geq & N_{K}\left|\alpha_{K}-\beta_{K}\right|\left|x_{K}\right|-\left|\beta_{K}\right|\cdot\left|N_{K}x_{K}+\left(M_{K}-N_{K}\right)y_{K}\right|\\
 & \overset{\text{see }\left(\ddagger\right)}{\geq} & N_{K}\left|\alpha_{K}-\beta_{K}\right|\left|x_{K}\right|-C\varepsilon_{K}\\
 & \geq & 1
\end{eqnarray*}
by choice of $N_{K}$, which shows that $\sum_{n}f\left(a_{n}\right)$
is not Cauchy and hence not convergent, a contradiction.</p>
|
33,778 | <p>Let $X$ and $Y$ be independent exponential variables with rates $\alpha$ and $\beta$, respectively. Find the CDF of $X/Y$.</p>
<p>I tried out the problem, and wanted to check to see if my answer of: $\frac{\alpha}{ \beta/t + \alpha}$ is correct, where $t$ is the time, which we need in our final answer since we need a cdf.</p>
<p>Can someone verify if this is correct?</p>
| Michael Hardy | 11,667 | <p>Here's a slightly different point of view.
\begin{align}
\Pr\left( \frac X Y \ge t \right) & = \iint\limits_{\{\,(x,y)\,:\, x\,\ge\,ty\,\ge\,0 \,\}} e^{-\alpha x} e^{-\beta y} (\alpha\beta\,d(x,y)) \\[10pt]
& = \int_0^\infty \left( \int_{ty}^\infty e^{-\alpha x} (\alpha\,dx) \right) e^{-\beta y} (\beta\,dy) \\[10pt]
& = \int_0^\infty (e^{-\alpha ty}) e^{-\beta y} (\beta\,dy) \\[10pt]
& = \beta \int_0^\infty e^{-(\alpha t+\beta)y} \, dy = \frac \beta {\alpha t + \beta}.
\end{align}
This is $1$ minus the c.d.f. Find the c.d.f. and differentiate to get the p.d.f. on the interval $t\ge0.$</p>
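<p>The closed form is easy to sanity-check by simulation (the rates $\alpha = 2$, $\beta = 3$ and the evaluation points $t$ are arbitrary choices):</p>

```python
import random

random.seed(1)
alpha, beta = 2.0, 3.0
N = 400_000
ratios = [random.expovariate(alpha) / random.expovariate(beta)
          for _ in range(N)]

for t in (0.5, 1.0, 2.0):
    empirical = sum(1 for r in ratios if r >= t) / N
    exact = beta / (alpha * t + beta)      # Pr(X/Y >= t) from the derivation
    assert abs(empirical - exact) < 0.01
```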
|
2,661,221 | <p><strong>$\frac{1}{n}=O(\frac{1}{\ln n})$ or $\frac{1}{n}=o(\frac{1}{\ln n})$?</strong></p>
<p>I know that $n\geq\ln n, (n> 0)$, with which $\frac{1}{n}\leq \frac{1}{\ln n}$ and so $\frac{1}{n}=O(\frac{1}{\ln n})$.</p>
<p>To know if $\frac{1}{n}=o(\frac{1}{\ln n})$, I need to know if $\lim_{n\to \infty}\frac{\ln n}{n}=0$, but how can I prove that this limit does not exist or does exist? Thank you very much.</p>
| Sangchul Lee | 9,340 | <p>Let $U = \{\mathrm{w} \in \mathbb{R}_+^n : \langle \mathrm{w}, \mathrm{b} \rangle \neq 0 \}$ and define $f : U \to \mathbb{R}$ by</p>
<p>$$ f(\mathrm{w}) = \frac{\langle \mathrm{w}, \mathrm{a} \rangle}{\langle \mathrm{w}, \mathrm{b} \rangle}. $$</p>
<p>We know that $U$ is open and $\mathbf{1} = (1,\cdots,1) \in U$. Let us assume that $\mathrm{a}$ and $\mathrm{b}$ are not parallel. Then</p>
<p>$$ \left. \frac{\partial f}{\partial w_i} \right|_{\mathrm{w}=\mathbf{1}} = \frac{\langle \mathbf{1}, \mathrm{b} \rangle a_i - \langle \mathbf{1}, \mathrm{a} \rangle b_i}{\langle \mathbf{1}, \mathrm{b} \rangle^2} $$</p>
<p>The assumption tells that not all $\partial f_i / \partial w_i$ vanish. So $\nabla f(\mathbf{1})$ is non-zero. Therefore</p>
<p>$$ f( \mathbf{1} + \delta \nabla f(\mathbf{1})) = f(\mathbf{1}) + \| \nabla f (\mathbf{1}) \|^2 \delta + \mathcal{O}(\delta^2) \quad \text{as } \delta \downarrow 0 $$</p>
<p>and by taking sufficiently small $\delta > 0$ we can find $\mathrm{w} \in U$ with $f(\mathrm{w}) > f(\mathbf{1})$.</p>
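<p>The final step — that a small move from $\mathbf{1}$ along $\nabla f(\mathbf{1})$ strictly increases $f$ — can be checked numerically (the vectors $\mathrm{a}$, $\mathrm{b}$ and the step size $\delta$ below are arbitrary test data):</p>

```python
def f(w, a, b):
    """f(w) = <w, a> / <w, b>."""
    num = sum(wi * ai for wi, ai in zip(w, a))
    den = sum(wi * bi for wi, bi in zip(w, b))
    return num / den

def grad_f(w, a, b):
    """Gradient of f; component i is (den*a_i - num*b_i) / den**2."""
    num = sum(wi * ai for wi, ai in zip(w, a))
    den = sum(wi * bi for wi, bi in zip(w, b))
    return [(den * ai - num * bi) / den ** 2 for ai, bi in zip(a, b)]

a, b = [3.0, 1.0, 2.0], [1.0, 2.0, 1.0]      # not parallel
w0 = [1.0, 1.0, 1.0]
delta = 1e-3
w1 = [wi + delta * gi for wi, gi in zip(w0, grad_f(w0, a, b))]
assert f(w1, a, b) > f(w0, a, b)             # the gradient step increases f
```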
|
2,041,849 | <p>I'm stuck on the following questions "In the alternating group $A_4$, let $H$ be the cyclic subgroup generated by $(123)$. Find all the right cosets of $H$ and all the left cosets of $H$. Is every right coset also a left coset."</p>
<p>The answer is as follows "$H = [idx_4 , (123), (132)]$ so each coset will have order 3, and since
$|A_4| = 12$ there will be 4 distinct cosets. We find the following right
cosets:</p>
<p>$H = [idx_4 , (123), (132)]$</p>
<p>$H(234) =[ (234), (12)(34), (134)]$</p>
<p>$H(124) = [(124), (13)(24), (243)]$</p>
<p>$H(143) = [(143), (14)(23), (142)]$</p>
<p>and the following left cosets:</p>
<p>$H = [idx_4 , (123), (132)]$</p>
<p>$(234)H = [(234), (13)(24), (142)]$</p>
<p>$(124)H = [(124), (14)(23), (134)]$</p>
<p>$(143)H = [(143), (12)(34), (243)]$</p>
<p>and we see that $H$ is the only coset that is both a left and a right coset."</p>
<p>Now I understand how you find the left and right cosets, the thing I'm stuck on is why did they choose $(234), (124), (143)$ out of the possible options from the alternating group $A_4$ as the permutations that $H$ is used on?</p>
| Bernard | 202,857 | <p>This is probably because $3$-cycles generate the alternating group. In the case of $A_4$ there are only $8$ $3$-cycles, $(123),(124), (134), (234)$ and their inverses. So taking $3$ of them (different from $(123)$ and its inverse) is enough.</p>
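<p>The coset structure can also be verified by brute force; this Python sketch uses 0-based permutations of $\{0,1,2,3\}$ (an encoding choice, not notation from the answer) and confirms that $H$ is the only coset that is simultaneously a left and a right coset:</p>

```python
from itertools import permutations

def compose(p, q):
    """(p o q)(i) = p(q(i)), with permutations stored as tuples."""
    return tuple(p[q[i]] for i in range(len(q)))

def is_even(p):
    inversions = sum(1 for i in range(4) for j in range(i + 1, 4)
                     if p[i] > p[j])
    return inversions % 2 == 0

A4 = [p for p in permutations(range(4)) if is_even(p)]
c = (1, 2, 0, 3)                       # the 3-cycle (123) in 0-based form
H = {(0, 1, 2, 3), c, compose(c, c)}   # identity, (123), (132)

rights = {frozenset(compose(h, g) for h in H) for g in A4}
lefts = {frozenset(compose(g, h) for h in H) for g in A4}

assert len(A4) == 12 and len(rights) == len(lefts) == 4
assert rights & lefts == {frozenset(H)}   # only H is both left and right
```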
|
158,630 | <p>Below I've quoted Wikipedia's entry that relates the Z-Transform to the Laplace Transform. The part I don't understand is $z \ \stackrel{\mathrm{def}}{=}\ e^{s T}$; I thought $z$ was actually an element of $\mathbb{C}$ and thus would be $z \ \stackrel{\mathrm{def}}{=}\ Ae^{s T}$ (but then it would be different to the Laplace Transform...). I don't understand why the Z-Transform is not defined as:
$$
X(z) = \mathcal{Z}\{x[n]\} = \sum_{n=-\infty}^{\infty} x[n] e^{-\omega n}
$$
or something like that.</p>
<hr>
<p>Z-transform</p>
<p>The unilateral or one-sided Z-transform is simply the Laplace transform of an ideally sampled signal with the substitution of $$
z \ \stackrel{\mathrm{def}}{=}\ e^{s T} \ $$
where $T = 1/f_s \ $is the sampling period (in units of time e.g., seconds) and $f_s \ $is the sampling rate (in samples per second or hertz)</p>
<p>Let $$
\Delta_T(t) \ \stackrel{\mathrm{def}}{=}\ \sum_{n=0}^{\infty} \delta(t - n T) $$ be a sampling impulse train (also called a Dirac comb) and $$
\begin{align} x_q(t) & \stackrel{\mathrm{def}}{=}\ x(t) \Delta_T(t) = x(t) \sum_{n=0}^{\infty} \delta(t - n T) \\ & = \sum_{n=0}^{\infty} x(n T) \delta(t - n T) = \sum_{n=0}^{\infty} x[n] \delta(t - n T) \end{align} $$ be the continuous-time representation of the sampled $x(t)$, where
$$
x[n] \ \stackrel{\mathrm{def}}{=}\ x(nT) $$ are the discrete samples of $x(t)$. The
Laplace transform of the sampled signal $x_q(t)$ is $$
\begin{align} X_q(s) & = \int_{0^-}^\infty x_q(t) e^{-s t} \,dt \\ & = \int_{0^-}^\infty \sum_{n=0}^\infty x[n] \delta(t - n T) e^{-s t} \, dt \\ & = \sum_{n=0}^\infty x[n] \int_{0^-}^\infty \delta(t - n T) e^{-s t} \, dt \\ & = \sum_{n=0}^\infty x[n] e^{-n s T}. \end{align} $$ This is precisely the definition of the unilateral Z-transform of the discrete function $x[n] \ $. $$
X(z) = \sum_{n=0}^{\infty} x[n] z^{-n} $$ with the substitution of $z \leftarrow e^{s T} \ $.</p>
<p>Comparing the last two equations, we find the relationship between the unilateral Z-transform and the Laplace transform of the sampled signal:
$$
X_q(s) = X(z) \Big|_{z=e^{sT}}
$$ The similarity between the Z and Laplace transforms is expanded upon in the theory of time scale calculus.</p>
<hr>
<p>(Source: <a href="http://en.wikipedia.org/wiki/Laplace_transform#Laplace.E2.80.93Stieltjes_transform">http://en.wikipedia.org/wiki/Laplace_transform#Laplace.E2.80.93Stieltjes_transform</a>)</p>
<p>Here: <a href="http://en.wikipedia.org/wiki/Z-transform">http://en.wikipedia.org/wiki/Z-transform</a> it says that $z \in \mathbb{C}$.</p>
| Unknown123 | 366,054 | <h1>Why <span class="math-container">$z = e^{sT}$</span> chosen?</h1>
<p>If we define <span class="math-container">$z = e^{sT}$</span>, then the Z-transform becomes proportional to the Laplace transform of a sampled continuous-time signal.</p>
<hr>
<p>Taken from this slide
<a href="https://ccrma.stanford.edu/%7Ejos/Laplace/Laplace_4up.pdf" rel="nofollow noreferrer">https://ccrma.stanford.edu/~jos/Laplace/Laplace_4up.pdf</a></p>
<p><a href="https://i.stack.imgur.com/zhbJ8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zhbJ8.png" alt="enter image description here" /></a></p>
<hr>
<h3>Why would we want it to be proportional in the first place?</h3>
<p>Simple: if we know the correct relation between the two, then we can work interchangeably between both domains, and the mapping will be correct.</p>
<p>For example, we may already have a model of perfectly designed continuous domain controller to control our continuous plant. By converting it into the equivalent discrete controller and also including the effects of sampling and zero order hold we could implement it on a digital computer. For flexibility reason and cost effective lets say.</p>
<p>Unfortunately the relation is nonlinear, so it's quite difficult to work with it interchangeably. Commonly, we approximate it by using the first order Taylor Expansion of it, such as Bilinear transform (also known as Tustin's method).</p>
<hr>
<p>That is the most intuitive reasoning as far as I know.<br>The definition provides a bridge connecting both transform theories.</p>
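<p>The exact map $z = e^{sT}$ and its bilinear (Tustin) approximation can be compared numerically (the pole location and sampling period below are arbitrary illustrative values):</p>

```python
import cmath

T = 0.01                      # sampling period (arbitrary)
s = complex(-2.0, 30.0)       # an arbitrary sample s-plane point

z_exact = cmath.exp(s * T)                      # z = e^{sT}
z_tustin = (1 + s * T / 2) / (1 - s * T / 2)    # bilinear approximation

# For small |sT| the two maps agree up to a cubic-order error term.
assert abs(z_exact - z_tustin) < abs(s * T) ** 3
```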
|
102,234 | <p>Our assignment was to enter in a series and check to see if it converges by graphing. </p>
<p>However, when I attempt to graph it, I get a bunch of errors that I'm not sure what they mean. Best way to show it is a picture <a href="https://i.stack.imgur.com/ZWMds.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZWMds.png" alt="enter image description here"></a></p>
<p>I'm not 100% sure that the "Sum does not converge" result is accurate, because I don't think the problem would be assigned if the series didn't converge. So maybe I entered the series wrong?
Can anyone offer some enlightenment here? </p>
| eldo | 14,254 | <p>The series does not converge</p>
<pre><code>Limit[Sum[1/CubeRoot[k], {k, 1, n}], n -> Infinity]
</code></pre>
<p><a href="https://i.stack.imgur.com/otAGZ.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/otAGZ.jpg" alt="enter image description here"></a></p>
<pre><code>ListLinePlot@Table[Sum[1./CubeRoot[k], {k, 1, n}], {n, 100}]
</code></pre>
<p><a href="https://i.stack.imgur.com/vJuJl.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vJuJl.jpg" alt="enter image description here"></a></p>
|
3,723,183 | <p>I am working through a text book by <a href="http://www.hds.bme.hu/%7Efhegedus/Strogatz%20-%20Nonlinear%20Dynamics%20and%20Chaos.pdf" rel="nofollow noreferrer">Strogatz Nonlinear dynamics and chaos</a> . In chapter 2 question 2.2.1 , I am looking for an analytical solution. I have the question's answer but would like to ask how a certain step was performed.</p>
<h3>Question</h3>
<p><strong>Consider the system <span class="math-container">$\dot{x}=4x^{2}-16$</span> Find an analytical solution to the problem.</strong></p>
<h3>Answer</h3>
<p><span class="math-container">\begin{equation}
\dot{x}=4x^{2}-16
\end{equation}</span></p>
<p><span class="math-container">\begin{equation}
\int \frac{1}{x^{2}-4} dx = \int 4 dt \\
\frac{1}{4} \ln(\frac{x-2}{x+2}) = 4t + C_{1} \\
x = 2 \frac{1 + C_{2}e^{16t}}{1 - C_{2}e^{16t}}
\end{equation}</span></p>
<p><span class="math-container">\begin{equation}
C_{2}(t=0) = \frac{x-2}{x+2}
\end{equation}</span></p>
<p>where <span class="math-container">$C_{1}$</span> and <span class="math-container">$C_{2}$</span> are constants.</p>
<h2>Summary</h2>
<p>In the first step to get to
<span class="math-container">$\int \frac{1}{x^{2}-4} dx = \int 4 dt $</span> how does this happen? There is an intermediary step/result that is not clear. Any help would be really appreciated.</p>
<h2>Edit 1:</h2>
<p>In other words, is this step okay?
<span class="math-container">\begin{equation}
\frac{\dot{x}}{x^{2}-4} = 4\\
\int \frac{1}{x^{2}-4} dx = \int 4 dt
\end{equation}</span></p>
<h2>Edit 2:</h2>
<p>Can I then denote my solution as:</p>
<p><span class="math-container">$x(t) = \frac{2(e^{4c_{1}+16t})}{(e^{4c_{1}-16t})}$</span></p>
| Trebor | 584,396 | <p>Generally <span class="math-container">$\lim_{x\to x_0}f(x)$</span> is not to be taken as a number. This thing has no meaning by itself. The only meaningful expression is <span class="math-container">$\lim_{x\to x_0}f(x)=y$</span>, as a whole. Here <span class="math-container">$y\in\mathbb R \cup\{\pm\infty, \text{DNE}\}$</span>. And <span class="math-container">$a \lim f(x)$</span> actually means "We already know that <span class="math-container">$\lim f(x)=c$</span> is a number, and we want to look at <span class="math-container">$ac$</span>," plus some convenient edge cases involving <span class="math-container">$\pm \infty$</span>. So your expression really has no meaning at all.</p>
|
3,962,427 | <p>I have proven that the sequence is convergent.</p>
<p>Let <span class="math-container">$x_n=\frac{(2n)!!}{(2n+1)!!}$</span></p>
<p><span class="math-container">$\frac{x_{n+1}}{x_n}$</span> <span class="math-container">$=\frac{2n+2}{2n+3}<1$</span> Therefore the sequence decreases. On the other hand
<span class="math-container">$x_n>0$</span>. Therefore the sequence is bounded below.
Since we have a sequence that is bounded below and decreases the sequence must converge.</p>
<p>But from here I don’t know how to find the limit. Could you please help me? And could you please say if what I’ve done is necessary or not?</p>
<p>Thanks in advance!</p>
| CHAMSI | 758,100 | <p>Observe that, for any <span class="math-container">$ n\in\mathbb{N} $</span> : <span class="math-container">$$\frac{\left(2n\right)!!}{\left(2n+1\right)!!}=\int_{0}^{1}{\left(1-x^{2}\right)^{n}\,\mathrm{d}x} $$</span></p>
<p>Since : <span class="math-container">\begin{aligned} \int_{0}^{1}{\left(1-x^{2}\right)^{n}\,\mathrm{d}x}&\leq\int_{0}^{1}{e^{-nx^{2}}\,\mathrm{d}x}\qquad\left(\text{because }1-x^{2}\leq e^{-x^{2}}\right)\\&\leq\int_{0}^{+\infty}{e^{-nx^{2}}\,\mathrm{d}x}=\frac{1}{2}\sqrt{\frac{\pi}{n}}\underset{n\to +\infty}{\longrightarrow}0\end{aligned}</span></p>
<p>Then : <span class="math-container">$$ \frac{\left(2n\right)!!}{\left(2n+1\right)!!}\underset{n\to +\infty}{\longrightarrow}0 $$</span></p>
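<p>Both the integral identity and the decay to $0$ are easy to check numerically (the midpoint rule with 10,000 panels and the sample values of $n$ are arbitrary choices):</p>

```python
def ratio(n):
    """(2n)!! / (2n+1)!! as the product of 2k/(2k+1) for k = 1..n."""
    r = 1.0
    for k in range(1, n + 1):
        r *= 2 * k / (2 * k + 1)
    return r

def midpoint_integral(n, panels=10_000):
    """Midpoint-rule approximation of the integral of (1-x^2)^n on [0,1]."""
    h = 1.0 / panels
    return h * sum((1 - ((i + 0.5) * h) ** 2) ** n for i in range(panels))

assert abs(ratio(20) - midpoint_integral(20)) < 1e-6   # integral identity
assert ratio(10_000) < 0.01                            # the limit is 0
```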
|
2,696,998 | <blockquote>
<p>Show that for any integer <span class="math-container">$k ≥ 0$</span> we have</p>
<p><span class="math-container">$$\frac{1}{(1-z)^{k+1}} = \sum\limits_{n=0}^{\infty}\binom {n+k}{n}z^{n}$$</span></p>
<p>and that it converges absolutely for any <span class="math-container">$|z| < 1$</span>.</p>
</blockquote>
<p>I really do not just know how to approach this question.</p>
| D_S | 28,556 | <p>Let $\mu \in F[t]$ be the minimal polynomial of $T$. It satisfies $\mu(T) = 0$ and divides any polynomial with the same property. To say that $T$ is diagonalizable is the same thing as saying that all the roots of $\mu$ are in $F$ and that these roots are distinct.</p>
<p>If $W$ is a subspace of the underlying vector space which is stable under $T$, then consider the minimal polynomial $g \in F[t]$ of $T|_W$. Since $\mu(T|_W) = 0$, we can conclude that $g$ divides $\mu$. Then all the roots of $g$ are in $F$, and these roots are distinct, so $T|_W$ is diagonalizable. </p>
|
123,605 | <p>I have an equality:
$$\frac {d_L}{\phi t_1^2}=\frac {d_0}{\phi (t_1 - \Delta t)^2}$$
I can manually extract $d_L - d_0$ from this equation by expanding and re-arranging. Is there any way to do this in Mathematica? Ideally, I'd like <strong>Solve</strong> to work like this:</p>
<p>Solve$[\frac {d_L}{\phi t_1^2}==\frac {d_0}{\phi (t_1 - \Delta t)^2}, d_L - d_0]$
which would render:
$$d_L - d_0=d_L(\frac {2 \Delta t}{t_1}-\frac {\Delta t^2}{t_1^2})$$</p>
| Feyre | 7,312 | <pre><code>eq = dl/(ϕ t1^2) == d0/(ϕ (t1 - Δt)^2);
b = Solve[eq, dl][[1]]
a = Solve[eq, d0][[1]]
(dl /. b) - d0 /. a // Expand
</code></pre>
<blockquote>
<pre><code>(2 dl Δt)/t1 - (dl Δt^2)/t1^2
</code></pre>
</blockquote>
|
717,991 | <p>I've been stuck on this problem for a few weeks now. Any help?</p>
<p>Prove:
$\sum_{i=0}^{n}\prod_{j=0,j\neq i}^{n}\frac{x-x_j}{x_i-x_j}=1$
<p>The sum of lagrange polynomials should be one, otherwise affine combinations of with these make no sense.</p>
<p>EDIT:
Can anybody prove this by actually working out the sum and product? The other proofs make no sense to me. Imagine explaining this to someone who has never heard of lagrange.</p>
| Community | -1 | <p>Let $P$ the polynomial with degree $n$ defined by
$$P(x)=\left(\sum_{i=0}^{n}\prod_{j=0,j\neq i}^{n}\frac{x-x_j}{x_i-x_j}\right)-1$$
then we see easily that
$$P(x_i)=0,\;\quad\forall i=0,\ldots,n$$
so $P$ has $n+1$ distinct roots. Since a nonzero polynomial of degree at most $n$ has at most $n$ roots, $P$ must be the zero polynomial. Conclude.</p>
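<p>The identity can also be checked by direct evaluation, as the question requests (the node set and evaluation points below are arbitrary test data):</p>

```python
def lagrange_basis_sum(nodes, x):
    """Evaluate sum_i prod_{j != i} (x - x_j) / (x_i - x_j) at the point x."""
    total = 0.0
    for i, xi in enumerate(nodes):
        term = 1.0
        for j, xj in enumerate(nodes):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

nodes = [0.0, 1.0, 2.5, 4.0]
for x in (-3.0, 0.7, 1.9, 10.0):
    # The sum of the Lagrange basis polynomials is identically 1.
    assert abs(lagrange_basis_sum(nodes, x) - 1.0) < 1e-9
```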
|
3,600,139 | <blockquote>
<p>Solve <span class="math-container">$y''-y'-y=\cos x$</span>.</p>
</blockquote>
<p>After first solving the homogeneous equation $y''-y'-y=0$ (whose characteristic roots $r=\frac{1\pm\sqrt{5}}{2}$ are real), we know that its general solution is <span class="math-container">$$y_h(x)=a e^{\frac{1+\sqrt{5}}{2}x}+b e^{\frac{1-\sqrt{5}}{2}x}.$$</span></p>
| gt6989b | 16,192 | <p><strong>HINT</strong>
Since $\cos x$ is not a solution of the homogeneous equation (the characteristic roots of $r^2-r-1=0$ are real), propose a particular solution <span class="math-container">$y_p(x) = A\cos x + B\sin x$</span> and compute <span class="math-container">$A,B$</span> by plugging into the ODE.</p>
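<p>Carrying such an ansatz through (the plain form $y_p = A\cos x + B\sin x$ suffices here because $\cos x$ does not solve the homogeneous equation):</p>

```latex
\begin{aligned}
y_p &= A\cos x + B\sin x, \qquad
y_p' = -A\sin x + B\cos x, \qquad
y_p'' = -A\cos x - B\sin x,\\
y_p'' - y_p' - y_p &= (-2A - B)\cos x + (A - 2B)\sin x
\overset{!}{=} \cos x,
\end{aligned}
```

<p>which gives $-2A - B = 1$ and $A - 2B = 0$, hence $A = -\frac{2}{5}$, $B = -\frac{1}{5}$, i.e. $y_p(x) = -\frac{2}{5}\cos x - \frac{1}{5}\sin x$.</p>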
|